Are Deepfake Videos A Threat? Simple Tools Still Spread Misinformation Just Fine

Deepfake videos haven't been a problem yet in the 2020 presidential race. It's not because they aren't a threat, but because simpler deceptive tactics are still effective at spreading misinformation.

Where Are The Deepfakes In This Presidential Election?


DAVID GREENE, HOST:

Sophisticated, computer-generated audio or video is known as a deepfake. It's been a concern ever since this video went viral.

(SOUNDBITE OF VIDEO)

JORDAN PEELE: (As Barack Obama) We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things.

GREENE: That's from a deepfake featuring the voice of actor Jordan Peele. His face was transposed onto a video of President Obama, making it look and sound like the president delivered those words. Experts worry that America's adversaries could use deepfake technology to meddle in an election. Tim Mak and Dina Temple-Raston from NPR's investigations team explain why this hasn't happened yet.

TIM MAK, BYLINE: The first deepfake of the 2020 election season was tweeted out by someone you might not expect - President Donald Trump. Back in April, the president retweeted a crudely manipulated video of former Vice President Joe Biden.

LINDSAY GORMAN: Biden appeared to have his tongue out and in kind of a ridiculous pose.

MAK: That's Lindsay Gorman. She's an expert in technology and disinformation at the Alliance for Securing Democracy.

GORMAN: And it turns out that that actually was manipulated using deep learning-based technology. And I would classify that as a deepfake.

DINA TEMPLE-RASTON, BYLINE: Deep learning-based technology is sophisticated. It's more than just Photoshopping something. It uses a kind of artificial intelligence, or AI, and it works a bit like the brain does and takes lots of little bits of information and brings them together. And it's that deep learning that computers can now do that makes deepfakes so believable. An example of just how far we've come popped up in China recently. A news anchor for Xinhua, China's state-run news agency, was completely generated with AI.

(SOUNDBITE OF VIDEO)

COMPUTER-GENERATED VOICE: (Non-English language spoken).

TEMPLE-RASTON: This is from part of a newscast read by a computer-generated person. And she seems pretty real.

MAK: Which is a problem because when experts look at adversaries who might use deepfakes against the United States, China is at the top of the list. So says Brian Pierce, a scientist at the Applied Research Laboratory for Intelligence and Security at the University of Maryland.

BRIAN PIERCE: Foreign entities who have an interest in interfering with U.S. elections or just causing trouble in general, yes, we certainly put China, Russia, Iran and even North Korea falls into that category.

MAK: Pierce says deepfakes haven't played the role people feared in this election season because at this point, to make one is still time-consuming and expensive. You can't just cook one up in rapid response to something that just happened. So that's the good news. The bad news, he says, is that selective editing, textual misinformation and lies are all rather effective. And you don't need AI for that.

PIERCE: I think things are getting better now. There's - a lot of times, there are strategies probably you could employ, but I hesitate to focus too much on deepfakes because I think there's a lot of other ways they could achieve their goal.

MAK: We saw that in 2016, with Twitter accounts and Facebook pages that sowed division without all that software and computing power. David Doermann, director of the Artificial Intelligence Institute at the University at Buffalo, says that deepfakes right now are highly personal, and they may enter the political scene locally.

DAVID DOERMANN: And the place that we saw these deepfakes hurt people initially was a very grassroots level. They were using it for revenge on a spouse or a partner. And at that level, it can do a lot of damage.

TEMPLE-RASTON: And that makes sense. When America's adversaries first started testing their cyber capabilities, it wasn't at the national level. They went local. They cracked into local election databases just to look around and to see if they could. In the same way, Doermann says, people trying to meddle in our national politics might test their deepfakes on local election races first. And that's what we should watch for. For NPR News, I'm Dina Temple-Raston in New York.

MAK: And I'm Tim Mak in Washington.

(SOUNDBITE OF SHIGETO'S "WHAT WE HELD ON TO")

Copyright © 2020 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.