Experts: Spy used AI-generated face to connect with targets
#1
This whole story seems like a plot from an old sci-fi movie. AI technology has developed to the point where it can fabricate the face of a person who never existed, convincingly enough to pass for an 'actual' person. Now THAT is spooky.

Reality is under attack; at least, man's ability to reproduce 'reality' by recording it on media, be it stone, tape or digital data, is. Technology can now doctor any video or audio recording, adding created sections and even fabricating whole scenarios, all of it virtually undetectable from the original source.

The second part of this post has a blurb and link regarding these developments.



Keyboard Warrior




Quote:Link to Original Article


Experts: Spy used AI-generated face to connect with targets
By RAPHAEL SATTER


LONDON (AP) — Katie Jones sure seemed plugged into Washington’s political scene. The 30-something redhead boasted a job at a top think tank and a who’s-who network of pundits and experts, from the centrist Brookings Institution to the right-wing Heritage Foundation. She was connected to a deputy assistant secretary of state, a senior aide to a senator and the economist Paul Winfree, who is being considered for a seat on the Federal Reserve.

But Katie Jones doesn’t exist, The Associated Press has determined. Instead, the persona was part of a vast army of phantom profiles lurking on the professional networking site LinkedIn. And several experts contacted by the AP said Jones’ profile picture appeared to have been created by a computer program.

“I’m convinced that it’s a fake face,” said Mario Klingemann, a German artist who has been experimenting for years with artificially generated portraits and says he has reviewed tens of thousands of such images. “It has all the hallmarks.”
[Image: 800.jpeg]


Why experts think this is a fake photo. (AP Photo)
Experts who reviewed the Jones profile’s LinkedIn activity say it’s typical of espionage efforts on the professional networking site, whose role as a global Rolodex has made it a powerful magnet for spies.

“It smells a lot like some sort of state-run operation,” said Jonas Parello-Plesner, who serves as program director at the Denmark-based think tank Alliance of Democracies Foundation and was the target several years ago of [url=https://www.the-american-interest.com/2018/10/23/chinas-linkedin-honey-traps/]an espionage operation that began over LinkedIn[/url].

William Evanina, director of the U.S. National Counterintelligence and Security Center, said foreign spies routinely use fake social media profiles to home in on American targets — and accused China in particular of waging “mass scale” spying on LinkedIn.

“Instead of dispatching spies to some parking garage in the U.S to recruit a target, it’s more efficient to sit behind a computer in Shanghai and send out friend requests to 30,000 targets,” he said in a written statement.

Last month, retired CIA officer Kevin Mallory was sentenced to 20 years in prison for passing details of top secret operations to Beijing, a relationship that began when a Chinese agent posing as a recruiter contacted him on LinkedIn.

Unlike Facebook’s friends-and-family focus, LinkedIn is oriented toward job seekers and headhunters, people who routinely fire out resumes, build vast webs of contacts and pitch projects to strangers. That connect-them-all approach helps fill the millions of job openings advertised on the site, but it also provides a rich hunting ground for spies.


And that has Western intelligence agencies worried.

British, French and German officials have all issued warnings over the past few years detailing how thousands of people had been contacted by foreign spies over LinkedIn.
In a statement, LinkedIn said it routinely took action against fake accounts, yanking thousands of them in the first three months of 2019. It also said “we recommend you connect with people you know and trust, not just anyone.”

The Katie Jones profile was modest in scale, with 52 connections. But those connections had enough influence that they imbued the profile with credibility to some who accepted Jones’ invites. The AP spoke to about 40 other people who connected with Jones between early March and early April of this year, many of whom said they routinely accept invitations from people they don’t recognize.

“I’m probably the worst LinkedIn user in the history of LinkedIn,” said Winfree, the former deputy director of President Donald Trump’s domestic policy council, who confirmed connection with Jones on March 28.

Winfree, whose name came up last month in relation to one of the vacancies on the Federal Reserve Board of Governors, said he rarely logs on to LinkedIn and tends to just approve all the piled-up invites when he does.

“I literally accept every friend request that I get,” he said.

Lionel Fatton, who teaches East Asian affairs at Webster University in Geneva, said the fact that he didn’t know Jones did prompt a brief pause when he connected with her back in March.

“I remember hesitating,” he said. “And then I thought, ‘What’s the harm?’”

Parello-Plesner noted that the potential harm can be subtle: Connecting to a profile like Jones’ invites whoever is behind it to strike up a one-on-one conversation, and other users on the site can view the connection as a kind of endorsement.

“You lower your guard and you get others to lower their guard,” he said.

The Jones profile was first flagged by Keir Giles, a Russia specialist with London’s Chatham House think tank. Giles was recently caught up in an entirely separate espionage operation targeting critics of the Russian antivirus firm Kaspersky Lab. So when he received an invitation from Katie Jones on LinkedIn he was suspicious.

She claimed to have been working for years as a “Russia and Eurasia fellow” at the Center for Strategic and International Studies in Washington, but Giles said that, if that were true, “I ought to have heard of her.”

CSIS spokesman Andrew Schwartz told the AP that “no one named Katie Jones works for us.”

Jones also claimed to have earned degrees in Russian studies from the University of Michigan, but the school said it was “unable to find anyone by this name earning these degrees from the university.”

The Jones account vanished from LinkedIn shortly after the AP contacted the network seeking comment. Messages sent to Jones herself, via LinkedIn and an associated AOL email account, went unreturned.

Numerous experts interviewed by AP said perhaps the most intriguing aspect of the Katie Jones persona was her face, which they say appears to be artificially created.
Klingemann and other experts said the photo — a closely cropped portrait of a woman with blue-green eyes, copper-colored hair and an enigmatic smile — appeared to have been created using a family of dueling computer programs called generative adversarial networks, or GANs, that can create realistic-looking faces of entirely imaginary people.

GANs, sometimes described as a form of artificial intelligence, have been the cause of increasing concern for policymakers already struggling to get a handle on digital disinformation. On Thursday, U.S. lawmakers held their first hearing devoted primarily to the threat of artificially generated imagery.
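The adversarial idea behind GANs (two dueling programs: a generator trying to fool a discriminator, which in turn tries to tell real from fake) can be illustrated with a deliberately tiny sketch. Everything below is invented for illustration: the "real data" is just numbers near 5.0 rather than photos, and each "network" is a single parameter with hand-derived gradient updates, nothing like the deep networks used to generate faces.

```python
import math
import random

random.seed(0)

def sig(u):
    return 1.0 / (1.0 + math.exp(-u))

REAL_MEAN, NOISE = 5.0, 0.2   # "real data": scalars near 5 (toy stand-in for photos)

# Generator: one parameter theta; a fake sample is theta + noise.
# Discriminator: D(x) = sigmoid(a - (x - c)^2), a bump of "realness" centered at c.
theta, a, c, lr = 0.0, 1.0, 0.0, 0.05

for step in range(5000):
    x_real = random.gauss(REAL_MEAN, NOISE)
    x_fake = theta + random.gauss(0.0, NOISE)

    d_real = sig(a - (x_real - c) ** 2)
    d_fake = sig(a - (x_fake - c) ** 2)

    # Discriminator step: push D(real) up and D(fake) down.
    # These are the gradients of -log D(real) - log(1 - D(fake)) w.r.t. a and c.
    grad_a = d_fake - (1.0 - d_real)
    grad_c = -2.0 * (1.0 - d_real) * (x_real - c) + 2.0 * d_fake * (x_fake - c)
    a -= lr * grad_a
    c -= lr * grad_c

    # Generator step: move theta so fakes score higher under D
    # (gradient of -log D(fake) w.r.t. theta).
    theta -= lr * 2.0 * (1.0 - d_fake) * (x_fake - c)

print(round(theta, 2))  # typically ends near REAL_MEAN: fakes became statistically "real"
```

After training, the generator's samples cluster around the real data's center. The same minimax game, scaled up to millions of parameters and grids of pixels, is what produces faces like "Katie Jones."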

Hao Li, who directs the Vision and Graphics Lab at the University of Southern California’s Institute for Creative Technologies, reeled off a list of digital tells that he believes show the Jones photo was created by a computer program, including inconsistencies around Jones’ eyes, the ethereal glow around her hair and smudge marks on her left cheek.

“This is a typical GAN,” he said. “I’ll bet money on it.”
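The "digital tells" Li describes (oddly smooth smudge regions, inconsistent texture) suggest a crude automated check: compare the texture variance of small patches against what natural image noise looks like. The snippet below is a toy heuristic on a synthetic grid of pixel values, not a real forensic tool; the image, patch size and threshold are all made up for the demonstration.

```python
import random
import statistics

random.seed(1)

# Synthetic 32x32 "photo": noisy texture everywhere, plus one perfectly flat
# patch (a stand-in for the blur/smudge artifacts GANs can leave behind).
SIZE, PATCH = 32, 8
img = [[random.randint(100, 155) for _ in range(SIZE)] for _ in range(SIZE)]
for y in range(8, 8 + PATCH):
    for x in range(8, 8 + PATCH):
        img[y][x] = 128  # zero variance: suspiciously smooth

def patch_variance(img, x0, y0, size):
    vals = [img[y][x] for y in range(y0, y0 + size) for x in range(x0, x0 + size)]
    return statistics.pvariance(vals)

def flag_smooth_patches(img, size=PATCH, threshold=10.0):
    """Return top-left corners of patches whose variance is implausibly low."""
    flags = []
    for y0 in range(0, len(img) - size + 1, size):
        for x0 in range(0, len(img[0]) - size + 1, size):
            if patch_variance(img, x0, y0, size) < threshold:
                flags.append((x0, y0))
    return flags

print(flag_smooth_patches(img))  # only the planted flat patch is flagged: [(8, 8)]
```

Real detectors are far more sophisticated, but the principle is the same: artifacts betray statistics that genuine camera sensors rarely produce.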
__
Online:
Test your ability to tell a real face from a fake one at: http://www.whichfaceisreal.com/
Generate your own deepfake faces at: https://thispersondoesnotexist.com
___
Raphael Satter can be reached at: https://raphaelsatter.com

even some Democrat leaders are getting concerned about what the above represents...

Next Article Wrote:'Deepfakes' called new election threat, with no easy fix
By SUSANNAH GEORGE


WASHINGTON (AP) — “Deepfake” videos pose a clear and growing threat to America’s national security, lawmakers and experts say. The question is what to do about it, and that’s not easily answered.

A House Intelligence Committee hearing Thursday served up a public warning about the deceptive powers of artificial intelligence software and offered a sobering assessment of how fast the technology is outpacing efforts to stop it.

With a crudely altered video of House Speaker Nancy Pelosi, D-Calif., fresh on everyone’s minds, lawmakers heard from experts how difficult it will be to combat these fakes and prevent them from being used to interfere in the 2020 election.

“We don’t have a general solution,” said David Doermann, a former official with the Defense Advanced Research Projects Agency. “This is a cat and a mouse game.” As the ability to detect such videos improves, so does the technology used to make them.

The videos are made using facial mapping and artificial intelligence. The altered video of Pelosi, which was viewed more than 3 million times on social media, gave only a glimpse of what the technology can do. Experts dismissed the clip, which was slowed down to make it appear that Pelosi was slurring her words, as nothing more than a “cheap fake.”
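A "cheap fake" of this sort needs no AI at all. Slowing a clip is just naive resampling: step through the samples at a fraction of normal speed and interpolate. The sketch below does this to a synthetic tone using only the standard library; the sample rate and speed factor are arbitrary choices for illustration.

```python
import math

RATE = 8000  # samples per second (toy rate, chosen for the example)

def tone(freq, seconds):
    """Generate a pure sine tone as a list of samples."""
    n = int(RATE * seconds)
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

def slow_down(samples, factor):
    """Naive time-stretch: read through the samples at `factor` speed,
    linearly interpolating between neighbors. factor=0.75 spreads the same
    content over 1/0.75 = 1.33x the time, and playback at the original rate
    also drops the pitch by the same ratio (the slurred-speech effect)."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += factor
    return out

clip = tone(440, 1.0)           # one second of A440
slowed = slow_down(clip, 0.75)  # ~1.33x as many samples; sounds like ~330 Hz
print(len(clip), len(slowed))
```

The same handful of lines, applied to video frames and an audio track, is all the Pelosi clip required, which is why experts reserve "deepfake" for the genuinely synthesized variety.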
Rep. Adam Schiff, the committee chairman, said the Pelosi video “demonstrates the scale of the challenge we face.” But he said he fears a more “nightmarish scenario,” with these videos spreading disinformation about a political candidate and the public struggling to separate fact from fiction.

The technology, said Schiff, D-Calif., has “the capacity to disrupt entire campaigns, including that for the presidency.”

[...]


Then there is this development, which could destroy audio/video 'recordings' as solid evidence in court, for one. If it is no longer possible to verify the pedigree of evidence as this increasingly easy-to-access technology becomes available and fine-tuned, could this tech be the bullet that kills that source of 'proof'?

Seems so:

Article Wrote: 
Link to Original Article


Watch: Scientists Create "Deepfake" Software Allowing Anyone To Edit Anything Anyone Says On Video
GoldCore's blog


Scientists at Stanford are doing their part to create what will be an inevitable dystopian nightmare.

The staff at the Max Planck Institute for Informatics, Princeton University and Adobe Research have developed software that lets users edit and change what people are saying in videos, allowing anyone to edit anybody into saying anything, according to Observer.

The software uses machine learning and 3-D models of the target's face to generate new footage which allows the user to change, edit and remove words that are coming out of a person's mouth on video, simply by typing in new text. Not only that, the changes appear to have a seamless audio/visual flow without cuts.
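Conceptually, text-driven editing of this kind rests on a word-aligned transcript: the user retypes a word, the system looks up the time span that word occupied, and only those frames need to be re-synthesized, which is why the result flows without visible cuts. The sketch below shows just that bookkeeping layer with a made-up transcript and frame rate; the actual face and voice synthesis is the hard part and is not modeled here.

```python
from dataclasses import dataclass

FPS = 30  # assumed frame rate for the example

@dataclass
class Word:
    text: str
    start: float  # seconds
    end: float

# Hypothetical word-level alignment of a short clip.
transcript = [
    Word("the", 0.0, 0.2),
    Word("market", 0.2, 0.7),
    Word("rose", 0.7, 1.1),
    Word("today", 1.1, 1.6),
]

def edit(transcript, index, new_text):
    """Swap one word; return the new transcript plus the frame range a
    face model would have to re-render (everything else is reused as-is)."""
    patched = list(transcript)
    old = patched[index]
    patched[index] = Word(new_text, old.start, old.end)
    frames = (round(old.start * FPS), round(old.end * FPS))
    return patched, frames

patched, frames = edit(transcript, 2, "fell")
print([w.text for w in patched], frames)  # 'rose' becomes 'fell'; frames 21..33
```

Only about a dozen frames need synthetic replacement in this example; the other roughly 48 frames of the clip are untouched original footage, which is exactly what makes such edits hard to spot.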

Here’s a video of the frightening software at work.


We're sure there will be absolutely no blowback at all to this. After all, just last week there was public outrage when somebody jokingly edited a video of Nancy Pelosi to make her seem drunk. What would happen if somebody edited a video of her speaking to have her swear wildly, or say racist things?

This deepfake software is already being described as "the equivalent of Christmas coming early for a Russian troll farm", now that the 2020 election is underway. We're sure it'll eventually also be a topic du jour on MSNBC and CNN if Trump wins again in 2020. 

And we have to ask: how long before the software is incorporated into Adobe‘s retail video editing software? After all, the software company already forces users to read a massive disclaimer that states:
Quote:We also believe that it is essential to obtain permission from the performers for any alteration before sharing a resulting video with a broad audience.

And…
Quote:We acknowledge that bad actors might use such technologies to falsify personal statements and slander prominent individuals. We are concerned about such deception and misuse.

Are they covering themselves legally for this "technology" to go mainstream?

Meanwhile, joke deepfakes continue to pop up, like this fake video of Mark Zuckerberg sitting at a desk giving a nefarious-sounding speech about Facebook‘s power.

Joe Rogan was also the victim of a deepfake by the AI company Dessa recently, which released audio making it sound like he is discussing chimpanzee hockey.

Don’t worry though, we’re sure this won’t fall into the wrong hands.



From Article Above Wrote:Don’t worry though, we’re sure this won’t fall into the wrong hands.


Ya, no kidding. It already has, and that's sad.
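For what it's worth, the pedigree problem raised above is not completely hopeless: if a digest of the recording is computed at capture time and stored separately (ideally keyed or signed by the device), any later edit, deepfake or otherwise, changes the digest. The sketch below shows the bare idea with Python's standard library; the key and the recording bytes are placeholders, and real provenance schemes involve signed metadata and trusted hardware, not just a hash.

```python
import hashlib
import hmac

# Hypothetical capture-time key held by the recording device or custodian.
DEVICE_KEY = b"not-a-real-key"

def fingerprint(recording: bytes) -> str:
    """Keyed digest made when the recording is captured; stored with the file."""
    return hmac.new(DEVICE_KEY, recording, hashlib.sha256).hexdigest()

def unaltered(recording: bytes, stored_digest: str) -> bool:
    """Later, in court: recompute and compare. Any edit changes the digest."""
    return hmac.compare_digest(fingerprint(recording), stored_digest)

original = b"\x00\x01\x02 raw audio/video bytes \x03"
digest = fingerprint(original)

tampered = original.replace(b"\x01", b"\xff")  # a deepfake-style edit
print(unaltered(original, digest), unaltered(tampered, digest))  # True False
```

This proves a file matches what was originally captured; it cannot prove the capture itself was honest, which is why it is only a partial answer.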
One should have an open mind; open enough that things get in, but not so open that everything falls out
Art Bell
 
The individual is handicapped by coming face to face with a conspiracy so monstrous that he cannot believe it exists.
J Edgar Hoover

 
I don't need a good memory, because I always tell the truth.
Jesse Ventura

 
It's no wonder truth is stranger than fiction.
Fiction has to make sense
Mark Twain

If history doesn't repeat itself, it sure does rhyme.
Mark Twain
#2
Aren't you a little glad you're in your latter years, rather than having to live a full lifetime under this nightmare...?

Soon, the last generation to grow up without this being the norm (the '80s and '90s kids) is going to be old, and no one will have a reference point for how relatively free we all used to be.
#3
The deepfake memes are fun; too bad they won't be used just for memes. In its current state it's easy enough to spot the fakes, but in 5-10 years? One shudders to think.
Blood of Christ, relief of the burdened, save us.

“It is my design to die in the brew house; let ale be placed in my mouth when I am expiring, that when the choirs of angels come, they may say, “Be God propitious to this drinker.” – St. Columbanus, A.D. 612

[Image: 2lq3.png]