New AI tools make it easy to create fake video, audio and text

Powerful artificial intelligence tools that can create video, audio, text and pictures are raising fears the technology will supercharge disinformation and propaganda by bad actors.

It takes a few dollars and 8 minutes to create a deepfake. And that's only the start

LEILA FADEL, HOST:

More and more artificial intelligence tools are showing up online, and those tools make it possible to create realistic videos, audio, text and pictures. Think chatbots, like ChatGPT and Microsoft Bing, or image generators, like DALL-E and Midjourney. They represent big advances in AI, but they also raise concerns about supercharged propaganda and influence campaigns by bad actors. NPR's Shannon Bond has been looking into this. And a warning - her report contains crude words that have been bleeped.

SHANNON BOND, BYLINE: In February, Wharton Business School professor Ethan Mollick posted a video of himself online.

(SOUNDBITE OF ARCHIVED RECORDING)

COMPUTER-GENERATED VOICE: (Imitating Ethan Mollick) I have been studying startups and entrepreneurship for over a decade and have some thoughts on the subject that I would like to share with you today.

BOND: His delivery is stiff and his mouth moves strangely. But if you don't know him well, you might not think twice, until the video dissolves into a slightly different Mollick...

(SOUNDBITE OF ARCHIVED RECORDING)

COMPUTER-GENERATED VOICE: (Imitating Ethan Mollick) My first piece of advice is to focus on solving a real problem for customers.

ETHAN MOLLICK: Focus on solving a real problem for customers.

BOND: ...Because that first Mollick you heard wasn't Ethan Mollick. It was a deepfake. His words, his voice and that video were all created by artificial intelligence.

MOLLICK: It was mostly to see if I could and then realizing that it's so much easier than I thought.

BOND: Mollick teaches about innovation, and lately, he's gotten really into these new tools that anyone can now use to create highly plausible images, text, audio and video. He's excited about AI's potential to change the way we work and help us be more creative. But he's also wary. So he decided to fake himself. To start, he had ChatGPT write a speech about entrepreneurship. Next, he turned to a tool that can clone a voice from a short audio clip.

MOLLICK: So I gave it a minute of me talking about some unrelated topic, like cheese.

BOND: And finally, he fed that audio and a photo of himself into another AI app.

MOLLICK: And it realistically moves the mouth around and moves the eyes around and makes you shrug. And that was all I needed.

BOND: This deepfake was quick, easy and cheap. Mollick says it took about $11 and just eight minutes to make.

MOLLICK: And by the end, I had me - a fake me giving a fake lecture I've never given in my life, but sounds like me in my fake voice.

BOND: Mollick posted his experiment online as a demonstration and a warning that the risks from this kind of AI are not in the distant future. They're already here.

MOLLICK: It's not going to convince anyone who knows me. But it's also, at first glance, something that you may actually believe in. And that was these models a month ago, and they're already advanced past that point.

BOND: Concerns about deepfakes have been around for a while, but what's different now is that pretty much anyone can make them easily. People are having fun using them for jokes and memes, but they're also already being used for political ends. Jack Posobiec, a right-wing influencer known for promoting the Pizzagate conspiracy theory, recently created a fake video of President Biden announcing a draft to send American soldiers to Ukraine.

(SOUNDBITE OF ARCHIVED RECORDING)

COMPUTER-GENERATED VOICE: (Imitating Joe Biden) The illegal Russian offensive has been swift, callous and brutal.

BOND: While Posobiec explained the video was a fake created by AI, he also described it as...

(SOUNDBITE OF ARCHIVED RECORDING)

JACK POSOBIEC: A sneak preview, coming attractions, a glimpse into the world beyond.

BOND: And many people went on to share the video without a disclaimer that it's not really Biden. Late last year, the research firm Graphika identified the first known case of a state-backed influence campaign using deepfakes. They found pro-China bots sharing fake news videos featuring AI-generated anchors on Facebook and Twitter. Meanwhile, scammers are using fake audio to steal money by posing as family members in crisis. Gary Marcus is a cognitive scientist at NYU who studies AI. He says we're not prepared for what it means to live in a world full of AI-generated content.

GARY MARCUS: The information ecosphere is going to get polluted.

BOND: He fears widespread access to this technology will further erode our ability to trust anything we see online.

MARCUS: A bad actor can take one of these tools and use this to make unimaginable amounts of really plausible, almost terrifying misinformation that the average person is not going to recognize as misinformation.

BOND: Marcus and others following the rapid rollout of AI to the public are particularly concerned about powerful tools that create text, the technology behind the Bing and ChatGPT chatbots. They can generate news articles, essays, Twitter posts and conversations that sound like they were written by real people. Josh Goldstein, a research fellow at Georgetown, says this kind of AI is a natural tool for propaganda.

JOSH GOLDSTEIN: Using a language model, propagandists can create lots and lots of original text, and they can do it quickly and at little cost.

BOND: What's more, researchers have found AI-created content can be really convincing.

GOLDSTEIN: You can generate persuasive propaganda even if you're not entirely fluent in English, or even if you don't know the idioms of your target community.

BOND: Generated text can also be harder to detect than faked video or audio. Online campaigns that use AI to write posts may appear to be more organic than the copy-and-paste messages usually associated with bots. And even if AI-written content is not always successful at persuasion, for propagandists, that's a feature, not a bug, says Marcus. He worries the profusion of generated text will amplify what's called the firehose of falsehood, a propaganda strategy that indiscriminately sprays out false and often contradictory messages.

MARCUS: There's also the famous phrase - I don't know if I can say on the air, but you can bleep it - flooding the zone with [expletive], from Steve Bannon. If you want to flood the zone with [expletive], there is no better tool than this.

BOND: To be clear, researchers have not yet identified a propaganda or influence campaign using generated text. The tech companies launching AI tools are scrambling to put guardrails in place to prevent abuse. But there are open-source versions these companies don't control. At least one powerful AI language tool made by Facebook parent Meta leaked online, where it was quickly posted to the anonymous message board 4chan. Ethan Mollick, the professor who deepfaked himself, worries none of this will slow Silicon Valley's rush to incorporate AI into more and more products.

MOLLICK: But I think that the speed at which the cat has come out of the bag and we're all dealing with cats everywhere is a pretty big one.

BOND: For now, the race is heating up. This week, Google launched its own AI chatbot to the public.

Shannon Bond, NPR News.

(SOUNDBITE OF PHAELEH'S "FOREVER ONE")

Copyright © 2023 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.