Republican reaction to Trump's guilty verdict: It's Been a Minute What is our justice system for? Many Republicans over the past week have suggested it's for revenge, calling for the prosecution of Democrats across the country following Trump's guilty verdict. Brittany looks at how the justice system can be politicized with NPR's national justice correspondent Carrie Johnson and national political correspondent Mara Liasson.

Plus, we all have examples of how bad those new artificial intelligence search engine results can be. So why does it seem like every tech company is all in on the hottest tech trend? Brittany gets into it with NPR's technology correspondent Bobby Allyn and disinformation correspondent Shannon Bond.

Republicans really want revenge; plus, can AI take a chill pill?



Hello, hello. I'm Brittany Luse, and you're listening to IT'S BEEN A MINUTE from NPR, a show about what's going on in culture and why it doesn't happen by accident.


LUSE: This week, we're connecting the dots between a prison sentence, the presidential election and 30,000 misleading statements. I know, I know. You think you know how these things are connected, and you're almost right. It is about former President Trump's guilty verdict, but the connection between these dots is so much more. We're going to find out why with NPR's national political correspondent Mara Liasson and NPR's national justice correspondent Carrie Johnson. Mara, Carrie, welcome to IT'S BEEN A MINUTE.

CARRIE JOHNSON, BYLINE: Hey there, Brittany.

MARA LIASSON, BYLINE: Happy to be here.

LUSE: Oh, so wonderful to have you both. And really quick, Carrie, can you do me a little favor and tell me who is talking here?


DONALD TRUMP: Hillary Clinton - I didn't say lock her up, but the people would say lock her up, lock her up. OK.

LUSE: Who was that?

JOHNSON: That's our former president and current GOP frontrunner, Donald J. Trump.

LUSE: You are right. That was Trump this past Sunday on "Fox & Friends," claiming he never said lock her up. Now, Mara, this one's a little tougher because it's from eight years ago. But can you tell me who is talking here?


TRUMP: For what she's done, they should lock her up. She's disgraceful.


TRUMP: It's disgraceful.

LIASSON: Sounds like the same guy to me.

LUSE: (Laughter) Right again. And you'd be right if you guessed Trump for this one.


TRUMP: So crooked Hillary - wait. Crooked - you should lock her up, I'll tell you.

LUSE: Or this one.


TRUMP: Hillary Clinton has to go to jail, OK? She has to go to jail. Has to go.


LUSE: Or even this one.


TRUMP: Lock up the Bidens. Lock up Hillary.

UNIDENTIFIED PEOPLE: (Chanting) Lock 'em up. Lock 'em up.

TRUMP: Lock 'em up.

LUSE: The Washington Post had this tracker of how many times Trump blurred the truth. And over the course of the four years of his presidency, they counted 30,573 false or misleading statements from the former president. So here's just one more for the pile. But it's interesting to me that Trump is out there claiming he never said lock her up just days after many in the left have leaned into saying lock him up.

Last week when Trump was found guilty of 34 counts of falsifying business records, it was instantaneous. Meme after meme celebrated the former president's conviction, but at the same time, folks on the right are calling for legal retribution by prosecuting Democrats. And that's exactly what I want to get into with the two of you - Carrie, Mara, why exactly we, as a public, are so energized right now over locking up our political opponents. And Mara, I guess my first question is a rhetorical one, but are our political imaginations just too simple or maybe so extreme that what everyone wants is for their political rivals to get locked up?

LIASSON: Don't forget that lock her up statement was a chant. It was one of the signature chants of Trump rallies. Lock her up, lock her up.

LUSE: Certainly.

LIASSON: And don't forget his signature statement, I am your retribution.


TRUMP: In 2016, I declared, I am your voice. Today, I add, I am your warrior. I am your justice. And for those who have been wronged and betrayed, I am your retribution. I am your retribution.


LIASSON: His is a campaign based on grievance and revenge. So I don't think the entire country wants to see all their political enemies locked up. Many Democrats thought Trump should be convicted and certainly, as you mentioned, were happy about it. But it's...

LUSE: Right.

LIASSON: ...Mostly the Republican Party who is saying that their opponents should be jailed or, in some cases, executed. Donald Trump said that the former top military official should be tried for treason. Steve Bannon, who is one of the leading voices in Trump's MAGA...

LUSE: Yes.

LIASSON: ...Movement, said that Alvin Bragg, the Manhattan district attorney, should be jailed. Laura Loomer, who's a far-right activist who's close to the Trump campaign, said they shouldn't just get jailed. They should get the death penalty. I mean, this is pretty extreme stuff.

LUSE: Yeah. Now, Carrie, in that interview with "Fox & Friends" on Sunday, Trump suggested, you know, he never would have actually prosecuted Hillary Clinton. But I think it's important to note that during the Mueller investigation, we discovered that Trump himself had been pushing then-Attorney General Jeff Sessions to do just that, to investigate and prosecute Hillary Clinton. In that same interview on "Fox & Friends," Trump suggested now that he himself has been prosecuted, he's more up to the task of prosecuting political opponents. Thinking of the Justice Department, how bendable is it to a president's political agenda?

JOHNSON: You know, in the years since Watergate, DOJ officials have basically said there are strong guardrails. And so the White House has, in most cases since the Watergate era, been leery of trying to tell the Justice Department leadership whom to go after and investigate. And those guardrails were crashed through in some significant respects the last time Donald Trump was president. Remember, people like the deputy director of the FBI, Andrew McCabe, were investigated for a long time.

And the only reason it appears he did not get charged with a crime is because a grand jury here in Washington, D.C., declined to charge him with that crime even though the former President Donald Trump was out there every which way to Sunday on Twitter, on other social media, making remarks about how McCabe should have been prosecuted. The difference now, Brittany, is that Trump actually knows how to use the government more than he did when he came into office all those years ago, and he may be more effective now than he was back then.

LUSE: So bringing this into 2024 and beyond, Stephen Miller, an adviser to Trump, had this to say on Fox News following Trump's conviction.


STEPHEN MILLER: Is every House committee controlled by Republicans using its subpoena power in every way it needs to? Right now, is every Republican DA starting every investigation they need to right now?

LUSE: As you all have mentioned, so many more examples of Republicans calling for this legal retribution. I mean, I'm thinking about Marco Rubio's tweet, I believe it was, or X post with fire emojis saying that they needed to fight fire with fire, basically implying that they want to get their lick back. Mara, what does that say about what Republicans, both voters and elected officials, think the justice system is for?

LIASSON: Well, the strategy of Donald Trump all along has been to undermine faith in the justice system in case he got convicted. That was the political strategy in this case. This is very, very similar to his frontal assault on another democratic institution, the peaceful transfer of power.

LUSE: Right.

LIASSON: The litmus tests now in the Republican Party are that you have to say or believe that 2020 was a fraudulent election, and that the justice system in America, the judicial system, is illegitimate if it rules against Trump. And just to give you an example of how fully the Republican Party has embraced this assault on democratic institutions, one Republican senatorial candidate, Larry Hogan of Maryland - which is a blue state - had the temerity to say that people should respect the verdict regardless of the outcome.

And the reaction he got from the Trump campaign was, quote, "you just ended your campaign." That was from Chris LaCivita, who's a top Trump adviser. And then the co-chair of the Republican National Committee, Donald Trump's daughter-in-law, went on television to say that Hogan doesn't deserve the respect of anyone in the Republican Party. Now, what's significant about this is that this is a Republican Party that would rather support Trump than win elections, because Larry Hogan is running in a blue state. He needs all the help he can get, and he has just been excommunicated by Republicans, and the Senate majority might hang in the balance.

LUSE: My gosh, my gosh. This is all happening at a time when the public's faith in the justice system is declining. Pew Research Center found that fewer than half of Americans, 44%, have a favorable view of the Supreme Court. I don't even want to get into how two Supreme Court justices have been linked to figures or beliefs that led to January 6.

But also, according to an NBC News poll, only 35% of respondents said that they viewed the Justice Department in a positive light. That was last year when the Trump trials were kicking off. If the Justice Department and the court system can get bent to political will and many people already do not trust either the Justice Department or the courts, where does that leave us?

JOHNSON: In a very, very bad place. There's no evidence that this Justice Department, the Biden Justice Department, has been acting at the will of the president. In fact, as we speak, the Justice Department is prosecuting Joe Biden's son Hunter on gun charges in Delaware. And DOJ is also prosecuting two prominent Democrats, Senator Bob Menendez of New Jersey and Representative Henry Cuellar of Texas.

LIASSON: And you don't hear Democrats saying that those two cases are somehow rigged and fraudulent and a political witch hunt.

JOHNSON: That's exactly right.

LIASSON: And don't forget there's a case before the Supreme Court where Donald Trump is arguing that presidents should have immunity.

LUSE: Right.

LIASSON: There should be no check on presidential power. And that is, of course, the definition of authoritarianism. But what really surprised me is how openly Trump is running on his plans to go after his enemies. Usually, if you want to punish someone using the vast powers of the federal government, you do it quietly. But he's saying openly that this is what he's going to do if he's elected. And I've found for my gazillions of years covering politics that politicians are very transparent. They generally tell you what they want to do, and then if they're elected, they try to do it. And he has been very, very clear about this.

JOHNSON: A couple of observations. One is that Trump is making these statements, and people are listening, OK? So Republican-led committees in the House have already tried to issue subpoenas to the Manhattan District Attorney Alvin Bragg. And secondly, a lot of the people who are making these statements about going after Trump's enemies are people who themselves have been under investigation. Steve Bannon has been convicted, criminally convicted by a jury, for blowing off the January 6 committee. And he's facing a prison sentence any day now.

LUSE: Right.

JOHNSON: Some of these people are the same ones who are arguing for this vendetta, this retribution if Trump returns to power.

LIASSON: You know, I have a political question that I don't know the answer to. But we know that the conviction itself showed a little bit of movement in the polls. Not too much, but there were a fair number of independents who thought that the conviction would make them less likely to vote for Trump. But on the other hand, it's super energized, turbo-energized the Republican base. Trump raised tremendous amounts of money off of this.

It's clearly made his base, which was already devoted to him, even more devoted and active. But he can't win with his base alone. So what I'm wondering is, will these full-bore attacks on the justice system help him or hurt him in the election? It's unclear to me which way it will go, but the attacks are so extreme that I think that they're going to have some kind of political impact with voters.

LUSE: This conversation - it definitely has me concerned about how vulnerable our justice system is to political weaponization. And I can't help but feel that that ups the stakes for this next election.

LIASSON: They couldn't be much higher even before this.

LUSE: I know. And now here we are. Somehow, they managed to up the ante. Carrie, Mara, I have learned so much here. Thank you both so much.

LIASSON: Thanks for having us.

JOHNSON: Happy to do it.

LUSE: And as a thank you, I'd like to teach you something by playing a game with the two of you. Can you stick around for a tiny bit longer?

JOHNSON: Oh, sure.


LUSE: Wonderful. We'll be right back with a little game I like to call But Did You Know? Stick around.


LUSE: All right, all right. We're going to play a little game I like to call But Did You Know? Here's how it works. I'm going to share a story that's been making headlines this week, and as I give you some background on that story, I'll also ask you trivia related to it. But don't worry - it's all multiple choice. And the first one to blurt out the right answer gets a point. The person with the most points wins. And, of course, their prize is bragging rights.

LUSE: I don't want anyone out there asking me about tote bags. I don't have any.


LUSE: I don't have any. Now, I know it was just last week that I was talking about the cicada invasion happening all across the South and the Midwest. But this week, I'm here to warn you of another impending insect invasion. Mara, Carrie, which of these bugs is currently flying towards New York? Is it A, spiders, B, wasps, or C, cockroaches?

JOHNSON: Spiders.

LIASSON: Cockroaches.

LUSE: The answer is A, spiders. Carrie, you're correct.



LUSE: I am talking about giant venomous Joro spiders.


LUSE: I know. That's exactly how I feel. It's disgusting. And by giant, I mean, they have four-inch legs.


LUSE: Exactly. To give you a picture, they're about the size of the palm of your hand.


LUSE: Oh, my God. I just looked at the palm of my hand, and now I'm, like - my stomach's turning. Now, Joro spiders are not native to the U.S., having originated in Japan. They first appeared in the U.S. in the Southeast, and now they are expected to head northeast to where I live, which I'm not thrilled about. We're talking about flying spiders, again, flying up the mid-Atlantic. On a scale of 1 to 10, how upset are we about this?



LUSE: Seven? Both of you holding at seven. I am at a 9.8.


LUSE: I am at 9.8. So I - whatever I need to do to get on y'all's level, I am ready to get there. All right. My next question. I keep calling these flying spiders, so I'm sure you're wondering, what does she mean? Mara, Carrie, how do Joro spiders fly? Do they A, hitch a ride on a neighboring insect, B, leap into the air with their four-inch legs, which morph into wings, or C, parachute through the sky with spider silk?



LUSE: Mara, you were first. You said C. The answer is C, parachuting with spider silk.


LUSE: Joro spiders can sail up to 100 miles through a process called ballooning, which involves the spider releasing strands of spider silk into the wind and sailing away. To which I say, what horror movie are we living in right now, and how did we get on this timeline?

JOHNSON: I need to go dust my house. I'm worried about cobwebs now. It's not good.

LUSE: (Laughter) All righty, to recap the score. Mara, you are at one point. And Carrie, you're at one point. We've got a tie right now, and this last question is a tiebreaker. So I'm on the edge of my seat. This feels like "Challengers" right now. I'm just...

JOHNSON: (Laughter).

LUSE: I'm waiting to see how things are going to go. All right, to recap the facts before our final question. The Joro spider is giant. It flies. It's bright yellow, and it's heading towards New York. Taking in all these facts, how venomous is the Joro spider? A, so totally nothing to worry about, B, so totally something to be concerned about, or C, so totally something that we should absolutely be freaking out about?

JOHNSON: A. Not to worry.


LUSE: Once again, Carrie, you're correct for the win. The answer is A, so totally nothing to worry about.


LUSE: Although, I mean, I do personally - I am kind of freaked out about it.

JOHNSON: (Laughter).

LUSE: But Joro flying spiders are venomous, but according to scientists, that venom is not only weak; Joro spiders' fangs also aren't strong enough to pierce your skin or the skin of your pet. So, yes, they look terrifying and disgusting and gross and scary and petrifying, but there's actually not much for any of us to worry about, aside from breaking down in tears if I saw one.

JOHNSON: The stuff of nightmares.

LUSE: (Laughter). Yes. All right, that's it for But Did You Know? for this week. Congrats to Carrie on your win.

JOHNSON: All right.

LUSE: All right. And, Carrie, Mara, thank you so much for joining me today.

LIASSON: Thanks for having us.

JOHNSON: Thank you.

LUSE: That was NPR's Carrie Johnson and Mara Liasson. I am going to take a quick break, and when I get back, we're getting into Silicon Valley's AI explosion, why all the tech bros are so obsessed with it and why they might need to invest in chill pills.

JOHNSON: (Laughter).


LUSE: Stick around.

My big question this week is, do we know why AI needs to be in everything, or does Silicon Valley just have FOMO?


LUSE: Y'all, I'll be the first to admit it. I'm not the most techy person, but so many of the tech revolutions that have happened in my lifetime, like smartphones and social media, are inescapable. Like, I can't not use them. And that has me wondering if AI is on that same path. But I still have so many questions about artificial intelligence, and it seems like we're not really stopping to think it all through.

SHANNON BOND, BYLINE: It's all moving very quickly.

LUSE: That's Shannon Bond, NPR correspondent on the disinformation team.

BOND: And in many cases, it's being put out to the public without necessarily understanding well how it works - right? - or if it's working correctly and who gets to define what working correctly is.

LUSE: I sat down with Shannon and NPR tech correspondent Bobby Allyn to pause and take in where we're at right now and what ethical issues lay ahead of us.


LUSE: Shannon, Bobby, welcome back to IT'S BEEN A MINUTE.

BOBBY ALLYN, BYLINE: Thanks for having me.

BOND: Thanks for having us.

LUSE: My pleasure, my pleasure. So, to start off, Google has rolled out its AI search summarizer. And there have been so many memes about the weird results it's giving back to people. What's the weirdest one that you've seen?

ALLYN: I guess when it recommended that somebody add glue to their pizza sauce.

LUSE: What?

ALLYN: In order to keep the cheese on the pizza. Yeah, that was a pretty strange one.

LUSE: (Laughter) That's one method. That's one method. What about you, Shannon?

BOND: I liked that it's using both Reddit and The Onion as sources of...

ALLYN: Oh, my God.

BOND: ...Information, which is how we ended up with how many rocks we should be eating a day. And I think Bobby got some pretty bizarre results.

ALLYN: Yeah, I even asked it a really simple question. I was like, OK, how many U.S. presidents have been white? And it said 17. So that was a new piece of American history to me.

LUSE: Oh. I had someone tell me that when you Google me, it suggests that my husband is Ben Affleck.

ALLYN: Oh, really, yeah?

LUSE: It scrapes information from, like, an essay I wrote about Jennifer Lopez and attributed that to my own life.

BOND: Well, we appreciate you, like, slumming to do podcasts with us.

LUSE: (Laughter) What can I say? I'm a woman of the people. I'm a woman of the people. OK, so you both have been covering tech for several years now. And it feels to me like AI has started proliferating in a way just over the past couple of years that feels really, really intense. What changes in the AI landscape have surprised you in the years since you started covering tech?

BOND: I mean, I think the biggest change that I feel is affecting all of us is that it is, like, suddenly everywhere. I mean, people in tech have been working on AI for a long time. This used to be the kind of thing, like, companies were doing, academics were doing in, like, research settings. But there's just been this, like, incredible acceleration in regular people kind of getting their hands on these products, even as they are sort of new and, in some cases, not really thoroughly tested, or, you know, as we were just talking about with these Google results, like, you don't know what you're getting, necessarily, or where it's coming from.

ALLYN: Yeah. I mean, when ChatGPT came out and wowed the world, there were a lot of think pieces about whether this was the next tech craze, like NFTs and like crypto - there's a lot of excitement. There's a lot of investor money that floods into this new idea. Everyone talks about it, and it disappears. And the most surprising thing to me, as the tech reporter, is no, people are not moving on, that AI is still the center of Silicon Valley. It's where all the money is. It's something that's just not going away. I don't think it is something that we're going to stop talking about anytime soon.

LUSE: I mean, to that point, it feels like so many companies are rushing to use AI and figuring out how to incorporate it into their business plan.

ALLYN: It's not like it happened overnight, right? Big tech companies have been developing generative AI tools for many, many years. But what ChatGPT did is it lit a match. And now everyone feels like in order to stay competitive, in order to keep shareholders happy, in order to stay relevant, they have to keep pushing these products out.

LUSE: You know, and thinking about some of the costs of this arms race, what are some of the most alarming uses of AI that you've seen or reported on?

BOND: I think the most disturbing sort of thing we are seeing right now that is happening is the use of artificial intelligence to create nonconsensual pornography, nonconsensual sex imagery. It's been used to sort of target and harass women. Really troublingly, it's been used to target and harass children.

LUSE: Gosh.

BOND: Actually, the first case of someone being arrested for creating AI-generated child pornography.

LUSE: Happened a couple of weeks ago, yeah.

BOND: Which, again, is illegal. And these are not, like, theoretical issues. These are, like, real things that are happening.

ALLYN: Yeah, the harms that are happening right now. Another one I'll add to that is AI's voice cloning capabilities. It can be really chilling. There was an example out of Maryland where a principal's voice was cloned with AI to make it sound like he was saying racist things when it wasn't him at all, and it caused a huge crisis at this school. There was another example, I think, in Hong Kong. There was a video that was portraying executives at a business, and an employee was convinced through AI videos and voice cloning to transfer $25 million.


BOND: To scammers. Yeah.

ALLYN: The scammers, exactly. But they're harnessing AI in order to impersonate and to dupe people into doing things.

LUSE: Geez. Yeah, people are already so vulnerable to scammers without even needing AI. It's scary that AI could add an impression of authenticity to scams that are already really effective. Shannon, you focus on misinformation. How is AI being used to mislead voters, and what does that mean for the election this year?

BOND: Well, as you can imagine, another popular target for this would be politicians. We've already seen this happen back in January in New Hampshire. A Democratic operative used AI to create a fake voice that sounded like Joe Biden. They sent it out as a robocall to thousands of voters. And it sounded like Biden discouraging Democrats from voting in the primary. He has been indicted in New Hampshire on criminal charges and is facing federal fines over this.

But you can imagine it's created a lot of concern in the political world. Then also, there's a term called the liar's dividend. The idea is if it's so ubiquitous that, like, anything could be AI, then anything could be AI, and nothing is real. Say a real video came out that was actually an embarrassing gaffe or something you didn't want to have seen out there. You the politician could say, that wasn't me. It was made up. Right? I've been targeted.


BOND: And I think if you lose that kind of trust - democracy runs on trust in our systems and trust in a shared reality. And I think there's a lot of concern that this is going to represent yet another erosion of that shared reality.

LUSE: Shared reality. That is so astute. Although I feel like we're already way past having that (laughter). So you both have shared examples of generative AI leading to damaging misinformation being put out into the world. But it's also taking in a lot of information about us, too. A lot of people have been concerned about what AI means for privacy. So what does AI mean for privacy?

ALLYN: I think one way to answer that is to talk about a very specific example. It's something that I reported on. There's this company called PimEyes, which allows you to upload a photo, and it uses AI to scan the entire internet. Say I took a photo of Shannon and uploaded it to this tool, PimEyes. It will show me every single instance of Shannon appearing online, many of which she may not have even seen herself. And it goes to this idea of, if everyone had PimEyes on their phone and used it on a daily basis, just think about it. You could walk around Midtown Manhattan, snap a photo of some guy in a suit and, within seconds, be able to identify him. So when it comes to privacy, yeah, you're not going to have very much of it if these tools become really widespread.

BOND: Yeah, I mean, there are a lot of concerns about AI enabling a lot more surveillance. There's also this whole other question around the data collection we've been talking about. I mean, maybe it feels like a less direct privacy harm, but it's definitely a question about privacy. There's a question there about compensation.

LUSE: I also want to ask you about the recent controversy around OpenAI's new voice assistant product, which sounds suspiciously similar to Scarlett Johansson's role in the movie "Her," where she played an AI companion.

ALLYN: Yeah. I think the reason why the Scarlett Johansson debacle struck such a chord is because it's just an amazing window into something that so many creatives are worried about right now, which is having their original work basically...

LUSE: Right.

ALLYN: ...Stolen without compensation, without permission. And you're seeing with OpenAI and all of the AI companies, you know, the move fast and break things ethos of Silicon Valley on steroids, right? They're definitely operating under this principle of it's easier to ask for forgiveness than permission. And yes, we are seeing some licensing deals being struck with publishers and others, but a lot of people feel violated.

LUSE: Yeah, we covered AI being trained by human art last year and the artists themselves being pretty upset with that. But I want to move on to what rules we have for AI. I've seen some reporting recently that over at OpenAI, there have been quite a few public departures from a team that's in charge of AI safety. Can you say more about what guardrails are in place for AI?

ALLYN: I mean, they all govern themselves, right? There are no AI regulations in the U.S., and so companies like OpenAI build their own guardrails, which is to say nobody is forcing them to do anything they don't want to do right now. We just saw recently two former board members of OpenAI, Helen Toner and Tasha McCauley, wrote a pretty damning piece in The Economist saying, as former board members of OpenAI, we don't think this company, which is solely focused these days on market share and profits, should govern itself. The government has to step in, or these tools are potentially going to run amok. But I think it really is a microcosm of something happening across Silicon Valley, which is this question of, what is the right balance between speed and safety? I don't think anyone has really figured that out.

And, yes, so there was a team at OpenAI who were devoted to this idea of making sure that artificial general intelligence, so a supercomputer that can outperform humans, was developed as safely as possible. And that team was dissolved. Some of the departing ex-OpenAI staffers criticized the company, saying that, you know, they seem to be really focused on the latest, shiny new toy over the safety concerns that many folks were raising. But OpenAI announced that they're launching a new committee that's going to be solely focused on, what are the dangers of these products, and how do we mitigate them?

BOND: And one of the criticisms of the companies from other advocates, who are very worried about not just the theoretical harms that might happen - like, you know, if we do reach, like, a supercomputer that will be better than humans at everything - is that, in addition to those risks, we actually need to talk about the harms that are happening right now, right? This kind of technology has moved much more quickly than our ability to sort of set any guardrails on it. We're now seeing many of these tools being embedded in things that people are using every day. There is this question of, like, but what if I don't want to be a part of this, like, massive experiment?

ALLYN: How do I opt out? Right.

LUSE: Yeah.

BOND: And it is meant ultimately to, like, better a product that will enrich this company. Like, I think there's some real, like, ethical dilemmas there.

LUSE: I think looking at a rather sudden rise in AI technology, it's pretty easy to focus on a lot of the aspects of this that feel scary or potentially dangerous. But also, it seems like there's a lot that AI can do for us. What can AI do better than we can?

ALLYN: I mean, just rapidly analyze huge datasets. I mean, there are lots of applications in hospitals that are pretty astounding, just the ways it can instantly scan, like, an MRI and figure out what's going on much faster than, say, a doctor's two eyes can. Some of the first and really impressive uses of AI have been in the coding space among software developers, who can figure out when there's a problem in code instantaneously. I mean, a lot of that back-end code is now being written by AI, and that's just really going to speed up tech and really every sector, I think.

BOND: Yeah, I mean, I think some of the most promising ones are, like, less sexy things, right? It's not about, like, having this, you know, amazing product that can create an entire movie for you from scratch. It's like Bobby said, like, the back end. It's about offloading some of that work that would take you a lot longer. There's other areas where the positive is very much the same as the negative consequence potentially, so you could take somebody's voice and then, you know, have them say the same thing but in a different language. And you can imagine how that could be used in a lot of ways that are helpful in, you know, allowing people to translate their messages very quickly and hopefully accurately into other languages.

LUSE: Something I'm totally baffled by is that what has been pushed most to consumers as positives in the past couple of years are things like AI art, chatbot dating, AI script writing, etc., when humans can do all that already. And we like doing it, and some of us are good at those things. In my opinion, AI is currently really bad at those things. And there are so many other things, as we discussed, that AI can do that we're not as good at and don't want to do sometimes. And I know that AI assistants are starting to roll out for things that we don't want to do, but why is there still so much emphasis on AI doing things that humans are already good at and enjoy doing?

BOND: I think it's a good question. I mean, I remember, like, initially using DALL-E when that first rolled out. And the idea you could just, like, type in, like, penguin surfing, you know, in the Pacific Ocean, and you could - it would, like, create this thing. It feels like a magic trick, right? It's a gimmick, but it's kind of cool. But it's, like, I feel like we've gotten very distracted by things that seem kind of cool. But - right - are they actually useful? Are they better than humans at doing these creative tasks?

ALLYN: Another thing I'll just add which - I mean, it's not directly responding to the question, but it's going to this idea of, should we have released these products at all? What you often hear from AI company boosters and Silicon Valley types is, well, if our guardrails are too strict, then other parts of the world, particularly China, are going to beat us in the AI race. So we can't stifle innovation. We have to let this industry blossom. And it's the same argument you heard at the dawn of the internet.

LUSE: I have a question that's a little bit more of a conjecture question. Is AI worth all this? Like, how do we weigh its benefits with its costs?

ALLYN: I think that's a hard question to answer.

BOND: Yeah.

ALLYN: But what gets a lot of headlines are various ways in which AI can be dangerous when used by bad actors, when it's in the wrong hands, right? We don't know yet, but we shall see. And in the coming years, you know, without regulations, we're just going to see both incredible good and incredible harms come from AI.

BOND: I guess the way I always sort of think about that is - and to be fair, I'm coming at this from the perspective of somebody who, like, spends a lot of time, like, reporting and writing about, like, all the bad things that people are doing with AI, which is coloring my perspective, of course. On the one hand, people are, like, totally overpromising what AI can do when, in a lot of cases, it's actually just not that good at these things. And like we were saying at the beginning, like, these Google AI-written summaries are sometimes just giving you, like, really bad information.

And then, at the same time, we're not actually really grappling with the harms that unleashing this all has created already for people. We're leaping ahead of coming to an understanding of what these things are for, what they are actually good for, and what they're not so good for. It's, like, this very Silicon Valley tech industry, like, it's everything, and it's huge, and it's going to change your life, and it's going to change the world, and you've got to get on board because if you don't get on board, like, it's going to pass you by. And I just don't think that's a very helpful way of thinking about these things. It doesn't push us to sort of think rationally about, what can this actually do? Like, we can choose how to incorporate technology into our lives, but we're not doing that.

LUSE: Shannon, Bobby, thank you so much.

BOND: Thanks for having me.

ALLYN: Yeah, thanks for having us.

LUSE: Thanks again to NPR tech correspondent Bobby Allyn and NPR disinformation correspondent Shannon Bond. You can find more of their work at


LUSE: This episode of IT'S BEEN A MINUTE was produced by...
LUSE: This episode was edited by...


LUSE: Engineering support came from...


LUSE: We had fact-checking help from...


LUSE: Our executive producer is...


LUSE: Our VP of programming is...


LUSE: All right, that's all for this episode of IT'S BEEN A MINUTE from NPR. I'm Brittany Luse. Talk soon.

Copyright © 2024 NPR. All rights reserved. Visit our website terms of use and permissions pages at for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.