Lawmakers Want To Be Proactive On Artificial Intelligence Regulation : The NPR Politics Podcast OpenAI CEO Sam Altman appeared before a Senate panel this week to talk about ChatGPT and the future of artificial intelligence. Lawmakers acknowledge the broad upsides of the fast-moving technology but hope to craft regulation that blunts the social and civic drawbacks that arrived alongside past tech breakthroughs.

This episode: political reporter Deepa Shivaram, disinformation correspondent Shannon Bond, and congressional correspondent Claudia Grisales.

The podcast is produced by Elena Moore and Casey Morell. Our editor is Eric McDaniel. Our executive producer is Muthoni Muturi.




AI-GENERATED VOICE: (As Hugh) Hi, this is Hugh (ph) in - well, I'm not anywhere, really. I'm a synthetic voice.

AI-GENERATED VOICE: (As Hugh) I don't belong to a real person.

AI-GENERATED VOICE: (As Hugh) This podcast was recorded at...

SHIVARAM: 1:06 p.m., Wednesday, May 17, 2023.

AI-GENERATED VOICE: (As Hugh) Things may have changed by the time you hear it.

AI-GENERATED VOICE: (As Hugh) OK, here's the show.

GRISALES: Oh, my gosh. That still freaks me out.

SHIVARAM: That was actually kind of scary (laughter). I love it. Hey there. It's the NPR POLITICS PODCAST. I'm Deepa Shivaram. I cover politics.

GRISALES: I'm Claudia Grisales. I cover Congress.

SHIVARAM: And Shannon Bond from NPR's disinformation team is with us today. Hey, Shannon.


SHIVARAM: So Congress is racing to try and catch up to exploding advances in AI. As you can tell, this podcast is also trying to do some catching up.


SHIVARAM: For the past...

BOND: It's really us. It's really us.

SHIVARAM: But it's really us. I promise. No AI voices from the three of us in this show.

GRISALES: (Laughter).

SHIVARAM: You can guarantee that. So for the past several weeks, Senate Majority Leader Chuck Schumer has met with at least 100 experts in artificial intelligence to craft legislation around this technology. And yesterday, the Senate held a hearing with the AI executive behind ChatGPT. So Claudia, talk me through some of these meetings. One hundred of them - what is Schumer hoping to achieve here?

GRISALES: Right. He's trying to craft a bipartisan consensus behind comprehensive legislation to install safeguards for AI. And I sat down with him for a few minutes to talk about it. He said it's probably the most important issue facing our country, families and humanity in the next hundred years. So he tried to illustrate that there's a lot at stake here. He said it's a national issue, a country issue, a human issue. But it's easier said than done, and he knows that. He admits that. They want to try and craft law that allows AI to achieve the tremendous good it could be capable of but also puts guardrails where there are worries, where it could do tremendous bad. And he said this is very difficult because it is moving so quickly. It's changing so quickly. And he's facing a bitterly divided Congress. So it's going to be very tough for him to weigh in on an issue at a time when it's very hard for Congress to pass any kind of bipartisan legislation.

SHIVARAM: Yeah. And this is, I mean, an issue that I think a lot of us don't even realize how much it's already impacting us and our industries and our jobs. And clearly, this is very top of mind. Shannon, what are the biggest concerns around AI technologies that the government might want to have a say in? What kind of guardrails would they want to put in here?

BOND: Yeah, I mean, I think it can be easy to sort of go really catastrophic with sort of, you know, these kind of extreme far-off warnings about, you know, are the robots all going to take over? But actually, I think the stuff, you know, that critics are talking about, but also it sounds like lawmakers are talking about as well - a huge question on everyone's minds is, you know, what is going to be the impact on jobs? An IBM official appeared at this hearing this week in front of Congress. You know, IBM has said, you know, it's going to pause hiring for certain positions because it thinks over time it could be replacing, you know, close to 8,000 jobs with AI. You know, these are, like, back-office jobs. I mean, it affects so many industries especially any kind of knowledge work or anything that can be easily automated. So I think there's going to be big questions around that.

There are big questions around privacy. You know, these systems, systems like ChatGPT, are trained using huge amounts of data that's, you know, scraped from the internet. And there's a lot of questions around, you know, like, you know, what gets involved? How you - can you opt out? This is something that, you know, a lot of artists and musicians are really concerned about. But also, you know, average people should be worried about as well. There are questions about bias. You know, these are systems - we have to remember. It can be easy to say, oh, it's just an algorithm. Algorithms are written by people, right? These are systems that are built by people, and they reflect our own biases. And so if we're thinking about using AI, you know, in all kinds of parts of life, you know, we need to be thinking about - what are the impacts on people who are going to be affected by the decisions we're turning over to AI?

And then, of course, there's the question about disinformation, about misleading information, you know, manipulation. And you could see how that could have huge effects, you know, if you can really scale up interference in elections on social media, you know? One of the things that Sam Altman from OpenAI spoke about in the hearing was this - you know, the idea that you could have essentially, like, personalized interactive manipulation and disinformation. And, you know, there's just lots and lots of questions about how we are going to handle the impacts in these different risk areas.

SHIVARAM: Yeah, that's no small thing. I mean, I know we're not talking about robots literally taking over, but these are some pretty existential questions that we're throwing out there. And lawmakers are going to have to grapple with all of this. So Claudia, what are they saying? Are they using AI? Are they familiar with this? How are they kind of responding to all of these questions that are being raised?

GRISALES: Yeah, they are using AI. It's pretty interesting how much ChatGPT, for example, has just permeated society. And we're seeing it here on the Hill as well. We've seen several members of Congress use ChatGPT, for example, to write remarks for a hearing and read from those. Yesterday, I think we had a first during a Senate Judiciary subpanel hearing where we saw the chairman of that panel, Richard Blumenthal, use AI-generated audio software to mimic his voice, drawn from his floor speeches.


AI-GENERATED VOICE: (As Richard Blumenthal) Too often, we have seen what happens when technology outpaces regulation.

RICHARD BLUMENTHAL: If you were listening from home, you might have thought that voice was mine and the words from me. But in fact, that voice was not mine. The words were not mine.

GRISALES: And so it was just one of these moments where he's trying to illustrate the dangers here. People were smiling and smirking and laughing a little bit when that happened. But at the same time, later in the hearing, he talked about how people's voices can be stolen. And so yes, this is going to be challenging for Capitol Hill to address, for lawmakers to address. They're basically facing the equivalent of trying to put brakes on a runaway train. They've already missed critical windows to regulate the internet and social media. And I talked to Ifeoma Ajunwa, a law professor at the University of North Carolina at Chapel Hill who co-founded an AI research program there. And she talks about how there are not enough experts in both computer science and law on Capitol Hill, and that makes AI lawmaking all the more challenging.

IFEOMA AJUNWA: AI or automated decision-making technologies are advancing at breakneck speed. And there is this AI race, yet the regulations are not keeping pace.

SHIVARAM: Yeah, I'm definitely having flashbacks to the previous hearings where lawmakers were interviewing the CEOs of Google and Facebook - Mark Zuckerberg - and asking questions that we were all like, do you even know? Do you know what this is?

BOND: Yes.

SHIVARAM: And it was really hard to watch.

GRISALES: Yeah. Yeah, it's really interesting, in terms of they have a lot to wrap their arms around. They're already behind. And also, this professor - professor Ajunwa - told me that maybe it's up to the White House to try and get on this quicker. They have that ability with executive orders. And we've already seen the Biden White House roll out some initiatives, so they are trying to get on top of this.

SHIVARAM: All right. We're going to take a quick break, and we'll be back in a second.

And we're back. Yesterday's hearing was with the OpenAI CEO, Sam Altman. OpenAI is the company behind ChatGPT. So Shannon, how did that hearing go? Was he open to the idea of any regulations here?

BOND: Yeah. I mean, I would say this was quite a different hearing than what we might have been used to with the past few years of, like, tech executives getting grilled on Capitol Hill and being, you know, yelled at by senators and other lawmakers. First of all, I would say Sam Altman got a pretty enthusiastic reception from a lot of these lawmakers. But I also thought, you know, it was interesting - you know, he kind of came in - you know, this is, like, the brand-new technology. It's - you know, there's lots of hype around it. People are pretty excited about it. People are also pretty wary about it. And so, you know, it was interesting to sort of see him come in and really, you know, from the beginning, say, you know, we want to be regulated - you know, giving some pretty specific ideas about how regulation could shape up. And, you know, I just think, compared - I used to cover these other - a lot of these social media companies. You know, kind of compared to how they used to come in, it took them a lot longer, right? It took Mark Zuckerberg at Facebook a lot longer to kind of come around to the idea of, like, yes, there should be regulation and to agree with any sort of specific regulations. And that was very different.

I mean, you know - and Sam Altman, I think, was also pretty candid about this idea that there are big risks here. And he talked about how, you know, if this technology goes wrong, it can go quite wrong. And so I think he was very much trying to strike this sort of balance as the - one of the leading - maybe the leading company right now in AI, saying, you know, this is something we're taking seriously, but also clearly wanting to influence the shape of whatever lawmaking gets done.

GRISALES: It was really interesting in terms of how open he was on these ideas of regulation. And as Shannon mentioned, you know, if the technology goes wrong, it can go quite wrong. And he said he was open to models that would require testing of these various AI programs or licensing requirements or to allow the AI industry to be overseen by a new government regulatory body.

SHIVARAM: Oh, wow.

GRISALES: The devil's in the details. No legislation has been written yet, and we'll see what the back-and-forth looks like once that gets started.

BOND: Yeah, and I would say, you know, when he - that idea around creating a new agency - I mean, what - you know, sort of a record-scratch moment for me during the hearing was Senator Kennedy then saying, well, maybe you could lead that agency. And, you know...

SHIVARAM: Oh, my God.

BOND: And then, you know, Sam Altman was like, well, I already have a job.

BOND: But, I mean, that is kind of - it's kind of remarkable 'cause, like, he's the - you know, his company is going to be one of the targets of these regulations.

BOND: You know, you heard from the lawmakers how much they feel they missed this moment, you know...

BOND: ...With the internet...

SHIVARAM: That's interesting.

BOND: ...With social media, to sort of rein these companies in before they cause tremendous damage to our society. And now, you know, we still, despite the fact that we're kind of - we've already seen the evidence of just how harmful some of this technology can be, there still is no progress on regulating most of these companies.


BOND: You know, lawmakers are very aware of that, but I also think - I get the sense that the AI industry is also aware of that, and so they want to very much, from the beginning, show themselves or, you know, present themselves as being kind of responsible and open to this because they don't want that kind of backlash.

SHIVARAM: Yeah, that's interesting, though, because I feel like there's, like, two sides of it a little bit - right? - where they want to come across as open and kind of, like, cooperative and stuff like that. But I'm a little confused because doesn't more regulation generally hurt tech companies? Is that not the case here?

GRISALES: Yeah. You know, it's interesting. I was outside the closed doors where Altman had a dinner with a group of bipartisan members. And so the message was a little different behind closed doors when I heard lawmakers talking about what they heard. Altman warned them in that room that aggressive regulation could hurt AI and, in turn, hurt the economic growth that AI could fuel. And so he highlighted a lot of the positives, but he also warned against going too far if Congress were to get there.

BOND: And I think, with these kind of pushes by companies, there is a way in which, if you are already a dominant company in the space, like, of course, you want to be involved in shaping what the laws are written about that space because that can help entrench your own dominance, right? And so that is one of the questions here - is, you know, like, obviously they need to understand - lawmakers need to understand this industry, and they do need to talk to people in the industry to help understand it. But there is this question of, like, just how beneficial whatever is going to be written will be to companies like OpenAI.

SHIVARAM: Generally speaking, on the timeline here, I know that Congress is trying to, you know, catch up, do their homework, not bungle this like they did with social media and whatnot. But in a lot of ways, they're already kind of too late. The European Union is far ahead on this kind of regulation. Shannon, what does that look like, and how far ahead are they really?

BOND: Yeah. So, I mean, like with many of these areas, like things around privacy and, you know, social media rules, the EU is definitely moving much faster on this. And, you know, the consequences of that is that the rules that the EU set end up sort of becoming the kind of de facto global regulations. You know, we've seen, you know, many of these tech companies have to change how they operate because of things like the European privacy law. So the EU does have this framework for regulating AI. It's a risk-based framework, where the idea is - it's much harder to sort of say we're just going to, like, blanket regulate a technology. Their approach seems to be, we're going to look at different cases. So, you know, how might AI be allowed to be used in contexts like elections or politics or, you know, medical information - you know, different areas that they've identified of risk. And they're moving ahead with this and, you know, clearly at a much faster pace than anyone in the U.S.

I mean, Claudia mentioned, you know, the White House has talked about this. Joe Biden hosted leaders of many American tech companies working on AI recently. But again, we haven't really seen anything concrete. We - you know, there have been a number of bills proposed, including, you know, things about regulating the use of AI in, you know, election campaigns or things around disclosure. You know, if you're going to be using ChatGPT to write fundraising messages - you know, do you - you know, what are your obligations? - but, again, nothing really moving that quickly. And, you know, we heard from lawmakers this week, you know, including - Senator Hawley, you know, was actually kind of skeptical of the idea that you could regulate this.


JOSH HAWLEY: Having seen how agencies work in this government, they usually get captured by the interests that they're supposed to regulate. They usually get controlled by the people who they're supposed to be watching. I mean, that's just been our history for a hundred years. Maybe this agency would be different.

BOND: I think there is always this question of, like, just how quickly can the U.S. move on this, and will it even matter if kind of the global regulatory force is really coming out of Europe?

GRISALES: You know, even with all this momentum in the EU and seeing AI just exploding right now without Congress keeping pace, there's a lot of members here who are undeterred. Schumer's among them. Hawley's pretty interesting. I talked to him a few days before that hearing, and he kept talking about, I've got to get educated, got to get educated. It was obvious, by this week's hearing, that he had really caught up on a lot of what's going on in the AI industry by the time he was able to question Altman and others about options in terms of regulation. So we're seeing members really trying to catch up. There's one member in the House - Representative Don Beyer of Virginia - who's actually gone back to school to learn about AI.

SHIVARAM: Oh, wow.

GRISALES: So he's doing that kind of on the side to catch up. We're seeing other members like Ted Lieu of California. This is a Democrat. He introduced legislation this year that was written by ChatGPT. It was never - that's never happened before - another first that we're seeing. And so we are seeing members trying to catch up as much as they can right now. Whether they will - that's to be determined. But they're trying to stay on track and catch up.

SHIVARAM: Shannon Bond, thank you so much for joining us today.

BOND: Yeah, thanks for having me.

SHIVARAM: I'm Deepa Shivaram. I cover politics.

GRISALES: I'm Claudia Grisales. I cover Congress.

SHIVARAM: And thank you for listening to the NPR POLITICS PODCAST.


Copyright © 2023 NPR. All rights reserved. Visit our website terms of use and permissions pages at for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.