
This 'DEEP FAKE NEWS' has been getting a lot of play lately but is stupid


Date: September 18th, 2018 8:27 PM
Author: Concupiscible senate coffee pot

This is the concept of using CGI to make someone like Obama appear, on film, to say something he never actually said.

So what gives? Do pols want recourse for when they actually are filmed saying stupid shit, so they can then deny the footage as a Russian-created hoax?

https://www.ft.com/content/8e63b372-8f19-11e8-b639-7680cedcc421



(http://www.autoadmit.com/thread.php?thread_id=4080793&forum_id=2#36836229)




Date: September 18th, 2018 8:28 PM
Author: Navy son of senegal theater

post the article

(http://www.autoadmit.com/thread.php?thread_id=4080793&forum_id=2#36836237)




Date: September 18th, 2018 8:29 PM
Author: Concupiscible senate coffee pot


https://www.ft.com/content/8e63b372-8f19-11e8-b639-7680cedcc421


Roula Khalaf, July 25, 2018

Recently, I was shown dozens of small pictures of Donald Trump, some real, others digitally created. I found it impossible to distinguish among them. Asked to pick three that were possibly fake, I got only one right.

The exercise was an introduction to the looming security threat of “deepfakes”, the artificial intelligence-powered imitation of speech and images to create alternative realities, making someone appear to be saying or doing things they never said or did.

In their simplest form, deepfakes are achieved by giving a computer instructions and feeding it images and audio of a person to teach it to imitate that person’s voice (and possibly much more). There is already an app for that: FakeApp (and video tutorials on how to use it), and an underground digital community that is superimposing celebrity faces on to actors in porn videos.
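As a rough illustration of that "feed it images and teach it to imitate" idea, here is a minimal sketch in Python, with placeholder data and dimensions rather than FakeApp's actual code: a shared encoder learns features common to both faces, each identity gets its own decoder, and the "swap" decodes person A's frames with person B's decoder.

```python
# A minimal sketch (assumptions, not FakeApp's real implementation): the
# shared-encoder / two-decoder autoencoder idea behind most hobbyist face
# swapping. One encoder learns features common to both faces; each identity
# gets its own decoder; the swap decodes person A's frames with B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Random tensors stand in for aligned 64x64 face crops of persons A and B.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(10):  # in practice: many epochs over thousands of crops
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The swap: encode A's face with the shared encoder, decode with B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```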

Today’s deepfake productions are still imperfect and can be detected, but the technology is progressing rapidly. Within two or three years we may be watching moving images and speeches without anyone being able to tell whether they are real or fabrications.

In a fascinating study, researchers at the University of Washington learnt to generate videos of Barack Obama from his voice and stock footage. The shape of the former US president’s mouth was essentially modelled to create a “synthetic Obama”. At Stanford University, researchers have manipulated head rotation, eye gaze and eye blinking, producing computer-generated videos that are hard to distinguish from reality.
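A hedged sketch of that audio-to-mouth idea (placeholder dimensions and data, not the published pipeline): a sequence model maps per-frame audio features to 2-D lip-landmark positions, which can then drive mouth compositing onto stock footage.

```python
# Sketch only: regress lip-landmark positions from audio features. The real
# "synthetic Obama" work is far more involved; this just shows the shape of
# the learning problem (audio frames in, mouth geometry out).
import torch
import torch.nn as nn

class AudioToMouth(nn.Module):
    def __init__(self, n_audio_feats=26, n_landmarks=20, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_audio_feats, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_landmarks * 2)  # (x, y) per lip landmark

    def forward(self, audio_feats):      # (batch, frames, n_audio_feats)
        h, _ = self.rnn(audio_feats)
        return self.head(h)              # (batch, frames, n_landmarks * 2)

model = AudioToMouth()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for training pairs: audio features (e.g. MFCCs) aligned with the
# lip landmarks tracked in the matching video frames.
audio = torch.randn(4, 100, 26)
lips = torch.randn(4, 100, 40)

for step in range(20):
    opt.zero_grad()
    loss = loss_fn(model(audio), lips)
    loss.backward()
    opt.step()

# At synthesis time, predicted landmarks would drive mouth-region rendering
# that is composited onto existing footage of the speaker.
```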

The technology could do wonders for film editing and production and for virtual reality. In the not-too-distant future, dubbing could be transformed: Mexican actors in a soap opera will appear as if they are speaking English (or Chinese or Russian) and look more authentic. In business and world affairs, the technology could break the language barrier on video conference calls by translating speech and simultaneously altering facial and mouth movements so everyone appears to be speaking the same language.

But think also of the potential abuse, by individuals or state actors bent on spreading misinformation. Deepfakes could put words and expressions on to the face and mouth of a politician and influence elections. Videos could fabricate a threat and spark a political crisis or a security incident. “If the past few years are anything to go by, fake videos will be increasingly deployed to advance political agendas,” says Yasmin Green, director of research and development at Jigsaw, the Alphabet think-tank. “Past efforts were not technically sophisticated and were therefore easily debunked, but the technology is developing . . . faster than our understanding of the threat it poses.”

There is already evidence of the problem. In May last year, Qatar’s news agency and its social media accounts were hacked and statements were attributed to the emir that set off a diplomatic row. Qatar’s neighbours used the remarks to justify an economic boycott of the emirate. “The Qatar episode showed appetite for the use of fakes to pursue political agendas,” says Ms Green. “Imagine if they had the capabilities for deepfakes.”

More recently, ahead of local elections in Moldova, a video of a news segment from Al Jazeera was posted on the network’s Facebook page with Romanian subtitles. It claimed to be about a mayoral candidate’s proposal to lease an island to the United Arab Emirates. It was a fabrication, but the video went viral.

The damage from current fake news pales in comparison to the harm that could come from deepfakes. Aviv Ovadya, chief technologist at the Center for Social Media Responsibility at the University of Michigan, worries that deepfakes will not only convince people of things that are not real but also undermine people’s confidence in what is. “This impacts everything in our society, from the rule of law to how journalism is done,” he says.

Intelligence agencies and defence departments are well aware of the advances in computer-generated videos (and may be deep into the research themselves). Some of the leading researchers in the field are also looking at detection solutions, as are tech companies and governments.
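One common detection approach, sketched here under stated assumptions rather than as any agency's or vendor's actual system, is to fine-tune an image classifier to label individual frames as real or fake and aggregate the scores per clip; the model choice and data below are placeholders.

```python
# Sketch of frame-level deepfake detection: binary real/fake classification
# of face crops, with per-clip aggregation. In practice you would start from
# ImageNet-pretrained weights and a labelled corpus of real and forged faces.
import torch
import torch.nn as nn
from torchvision import models

detector = models.resnet18(weights=None)              # pretrained weights in practice
detector.fc = nn.Linear(detector.fc.in_features, 2)   # two classes: real, fake

opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for a labelled batch of 224x224 face crops.
frames = torch.rand(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,))

detector.train()
opt.zero_grad()
loss = loss_fn(detector(frames), labels)
loss.backward()
opt.step()

# At inference, average the per-frame "fake" probabilities across a clip.
detector.eval()
with torch.no_grad():
    fake_prob = torch.softmax(detector(frames), dim=1)[:, 1].mean().item()
```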

I hope they catch up to this problem a lot faster than they did in spotting and weeding out fake news.



(http://www.autoadmit.com/thread.php?thread_id=4080793&forum_id=2#36836240)