
AILSA CHANG, HOST:
Ethan Mollick is both very excited about the potential upsides of artificial intelligence and very wary about its potential consequences. In February, the Wharton Business School professor posted a video of himself online that captured both those emotions.
(SOUNDBITE OF ARCHIVED RECORDING)
COMPUTER-GENERATED VOICE: (As Ethan Mollick) I have been studying startups and entrepreneurship for over a decade and have some thoughts on the subject that I would like to share with you today.
CHANG: If you watch this video, you'll see his mouth moving a little unnaturally. But the sound in the video is basically a standard, kind-of-boring PowerPoint speech. But then the video dissolves into a slightly different version of Mollick.
(SOUNDBITE OF ARCHIVED RECORDING)
COMPUTER-GENERATED VOICE: My first piece of advice is to focus on solving a real problem for customers.
ETHAN MOLLICK: Focus on solving a real problem for customers.
CHANG: That first video was a deepfake created by AI. Mollick had used the AI text generator ChatGPT to write a short speech on entrepreneurship, and he put that speech into his voice using another AI app. It just needed a short audio sample.
MOLLICK: So I gave it a minute of me talking about some unrelated topic like cheese.
CHANG: Then he fed that audio plus a photo of himself into a third app that made a video, and voila.
MOLLICK: By the end I had me - fake me - giving a fake lecture I've never given in my life, but sounds like me, in my fake voice.
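[Editor's note: Mollick doesn't name the specific apps here, and he hasn't published his exact steps. The sketch below only illustrates the three-step chain he describes - generated script, cloned voice, animated photo. Step one uses the real openai Python library as it existed in early 2023; the voice-cloning and video endpoints are made-up placeholders, not real services.]

```python
# Illustrative sketch of the three-step pipeline Mollick describes.
# Step 1 is a real openai library call (circa early 2023); steps 2 and 3
# post to HYPOTHETICAL placeholder endpoints standing in for the unnamed
# voice-cloning and talking-head apps.
import openai
import requests

openai.api_key = "YOUR_API_KEY"

# 1. Have a chat model draft a short lecture on entrepreneurship.
script = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Write a two-minute lecture of startup advice."}],
)["choices"][0]["message"]["content"]

# 2. Clone a voice from about a minute of unrelated sample audio,
#    then have the clone read the script. (voice-clone.example is made up.)
voice = requests.post("https://voice-clone.example/clone",
                      files={"sample": open("me_talking_about_cheese.wav", "rb")})
audio = requests.post("https://voice-clone.example/speak",
                      json={"voice_id": voice.json()["id"], "text": script})

# 3. Animate a single photo so it lip-syncs the audio.
#    (talking-head.example is likewise hypothetical.)
video = requests.post("https://talking-head.example/animate",
                      files={"photo": open("headshot.jpg", "rb"),
                             "audio": ("speech.wav", audio.content)})
with open("fake_lecture.mp4", "wb") as f:
    f.write(video.content)
```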
CHANG: It is very easy to make a video like this. Mollick says it took about $11 and eight minutes to put all of this together, and that makes it ripe for abuse. It's not hard to imagine how - fake videos of politicians used to spread disinformation, personalized propaganda from authoritarian governments delivered in a human voice. Scenarios like these are alarmingly plausible, and AI is only getting more powerful.
MOLLICK: I think that the speed at which the cat has come out of the bag and we're all dealing with cats everywhere is a pretty big one.
(SOUNDBITE OF MUSIC)
CHANG: CONSIDER THIS - the explosive growth of AI could radically change life for the better or for the worse. This week, a group of tech industry leaders called for a pause on giant AI experiments to make sure that we're not racing towards a dystopian future.
(SOUNDBITE OF MUSIC)
CHANG: From NPR, I'm Ailsa Chang. It's Thursday, March 30.
It's CONSIDER THIS FROM NPR. Lots of big tech companies are working on AI. Google has big plans for AI tools in email and productivity software. Meta, the parent company of Facebook, has piloted multiple chatbots in the past year and last month even announced a new AI-focused team. But the company that has made the most headlines lately is OpenAI. Its chatbot ChatGPT surpassed 100 million monthly users earlier this year. OpenAI unveiled the latest version of GPT this month and claims it is so good, it can figure out your taxes.
(SOUNDBITE OF ARCHIVED RECORDING)
GREG BROCKMAN: Honestly, I - every time it does it, it just - it's amazing. This model's so good at mental math. It's way, way better than I am at mental math.
CHANG: That's Greg Brockman, one of the founders of OpenAI. NPR science correspondent Geoff Brumfiel has been putting GPT-4 - the latest version - through its paces and sat down with my colleague Ari Shapiro to talk about it.
ARI SHAPIRO, BYLINE: All right. You've had a chance to try out this version of GPT. How good is it?
GEOFF BRUMFIEL, BYLINE: It's really impressive. The previous version would get things like simple math problems wrong, and this one does much, much better. It also, according to OpenAI, passed a bunch of academic tests - several AP course exams - and it has the ability to look at images and describe them in detail, which is a pretty cool feature. So it definitely seems to be a lot more capable than the previous version.
SHAPIRO: But you found some problems. Like, apparently you got it to tell you some things about nuclear weapons that it's not supposed to share.
BRUMFIEL: Yeah. I am a big nuke nerd, as people may know. And so, you know, OpenAI has tried to put in guardrails to prevent people from using it for things like, say, designing a nuclear weapon. But I worked around that by simply asking it to impersonate a famous physicist who designed nuclear weapons - Edward Teller - and then I just started asking Dr. Teller about his work, and I got about 30 pages of really detailed information. But I should say there's no need to panic. I gave this to some real nuclear experts, and they said, this stuff is already on the internet, which makes sense 'cause that's how OpenAI trains ChatGPT. And also, they said there were some errors in there.
SHAPIRO: OK. So you're not, like, the next supervillain in the Marvel Universe?
BRUMFIEL: Not yet.
SHAPIRO: Why were there errors if this stuff was already on the internet?
BRUMFIEL: Right. I mean, this gets to the real fundamental issue about these chatbots, which is they are not designed to fact-check. I spoke to a researcher named Eno Reyes, who works for an AI company called Hugging Face, and he told me these AI programs are basically just giant autocomplete machines.
ENO REYES: They're trying to just say, what is the next word based on all of the words I've seen before? They don't really have a true sense of factuality.
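[Editor's note: To make Reyes's "autocomplete" point concrete, here is a minimal sketch of next-word prediction using the small, open GPT-2 model via Hugging Face's transformers library. The prompt and the five-token loop are arbitrary choices for illustration, not anything Reyes describes.]

```python
# A minimal sketch of "giant autocomplete": greedy next-token prediction
# with the small, open GPT-2 model from Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(5):  # extend the text five tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits
    # The model only scores candidate NEXT tokens given everything so far;
    # taking the highest-scoring one is exactly "autocomplete."
    next_id = logits[0, -1].argmax()
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
# Nothing in this loop checks whether the continuation is true -
# the model just ranks plausible next words.
```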
BRUMFIEL: That means that they can be wrong, and they can be wrong in really subtle ways that are hard to spot. They can also just make stuff up. In fact, one of our journalist colleagues, Nurith Aizenman - she actually got contacted about a story she supposedly wrote on Korean American woodworkers, except she never wrote the story. It didn't even exist. Somebody had used ChatGPT to research, you know, woodworkers, and it had come up with this story that Nurith had supposedly written. But it wasn't real.
SHAPIRO: It put her byline on something that a chatbot wrote?
BRUMFIEL: Yeah. Not only her byline, but, like, the whole story was made up.
SHAPIRO: Whoa. OK. What does OpenAI say about this?
BRUMFIEL: Well, they acknowledged that GPT does get things wrong, and it does hallucinate. And they say for those reasons, people who use it should be careful. They should check its work. That researcher I spoke to, Eno Reyes, though, adds that you do not want GPT to do your taxes. That would be a very bad idea.
CHANG: That's NPR's Geoff Brumfiel speaking with my colleague Ari Shapiro.
(SOUNDBITE OF MUSIC)
CHANG: This problem of made-up information is the most immediate of a long list of worries that researchers have about a future of unconstrained artificial intelligence. Those concerns range from the elimination of huge numbers of jobs, all the way up to the development of artificial minds so powerful that they could threaten human existence. Even the CEO of OpenAI, Sam Altman, acknowledges that he's a little bit scared of where AI might go. Here's what he told ABC News earlier this month.
(SOUNDBITE OF ARCHIVED RECORDING)
SAM ALTMAN: A thing that I do worry about is we're not going to be the only creator of this technology. There will be other people who don't put some of the safety limits that we put on it. Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.
CHANG: Altman says that's one reason his company has made ChatGPT available to the public. He argues that the stakes at the moment are relatively low. So now is the time to figure out how AI works in the real world and to use this experience to develop technological or legal boundaries on AI. And in that same interview, Altman made the case that for all the risks, the potential benefits are just too promising not to pursue.
(SOUNDBITE OF ARCHIVED RECORDING)
ALTMAN: Would you push a button to stop this if it meant we are no longer able to cure all diseases? Would you push a button to stop this if it meant we couldn't educate every child in the world super-well?
CHANG: If not a stop button, some AI experts say now is the time to push at least the pause button. An open letter signed by more than a thousand tech industry leaders and academics urged all AI labs to pause for six months the training of any AI model more powerful than GPT-4. During this pause, the signatories are calling on tech companies and outside experts to agree on shared safety protocols and outside audits for AI models. And they want to see governments urgently develop new rules for AI and authorities to enforce those rules. My colleague Adrian Florido sat down with one of the signatories of that letter, Peter Stone, the associate chair of computer science and director of robotics at the University of Texas.
ADRIAN FLORIDO, BYLINE: AI is a technology, a system that learns skills by analyzing massive amounts of data to the point where it can start to perform a lot of the tasks that until now only humans could do, like have a conversation or write an essay. So when tech professionals talk about their fear of advanced AI, what are you talking about?
PETER STONE: From my perspective, it's very important to distinguish different types of artificial intelligence technologies. The one you described is one of the more recent ones - generative artificial intelligence models based on neural networks. And I think myself and many other AI professionals and researchers are concerned about the possible uses and misuses of these new technologies, and concerned that progress is moving too quickly to give us time to really understand the true implications before the next generation comes out.
Some of the things that we've been coming to terms with have to do with changing people's opinions in the political sphere and understanding, you know, how that can happen and when it's appropriate. People are still getting to grips with the intellectual property implications of these generative models. But there are still, I believe, many realms and domains where we haven't had time yet to explore what these models can do. And that's the thing that concerns me the most - that while we're still understanding that, the next generation is being developed. To me, it seems a little bit like, immediately after the Model T was invented, jumping straight to a national highway system with cars that can go 80 miles an hour, without having the time to think about what regulations would be needed along the way.
FLORIDO: The letter you signed calls for a pause in the development of some of the most advanced AI technology. Why a pause? What would that achieve?
STONE: So the pause, if enforceable, would give time for the dust to settle on what the potential implications of these models really are. And so, you know, the pause would, for one thing, give the academic community a chance to educate the general public about what to expect from these models. They're fantastic tools, but it's very easy and natural for people to give them more credit than they deserve - to expect things from them that they're not capable of. You know, I think there's a need for some time for everybody to understand how they can be regulated. That's called for in the letter as well - to let, you know, governments and society respond.
FLORIDO: I should be clear here that your letter is not directed at a government agency. You're asking these tech companies to police themselves - to sort of hit the brake themselves. But these companies are locked in a race to develop the most advanced technology. What incentive do they have to heed your warnings?
STONE: So I think there is no incentive other than the agreement or, you know, the moral compass, as is mentioned in the letter, of the people who are doing the development. And we're not likely to see the effect that the letter is directly calling for, but I think what it is going to do is raise public awareness of the need for understanding and the need for, if possible, taking some steps to slow down and think a little more soberly about the next step before, as you said, racing to be the first to generate the next bigger model.
FLORIDO: Are you excited about the potential in artificial intelligence technology?
STONE: Oh, absolutely. This is a fantastic time to be in the field of artificial intelligence. There are really exciting things happening, and I would not at all be in favor of stopping research on artificial intelligence. I identify very much with the letter's statement that humanity can enjoy a flourishing future with artificial intelligence, but I don't think that will happen automatically. I think we need to think very carefully about what we should do, not just what we can do, when it comes to AI development. If we do it correctly, I think the world is going to become a much better place as a result of progress in artificial intelligence.
CHANG: That was my colleague Adrian Florido speaking with Peter Stone from the University of Texas at Austin. At the top of this episode, you heard reporting on AI and disinformation from NPR's Shannon Bond. Find a link to more in our episode notes. It's CONSIDER THIS FROM NPR. I'm Ailsa Chang.