AI is biased. The White House is working with hackers to try to fix that

The White House is concerned AI can perpetuate discrimination. It helped host a red-teaming challenge at the Def Con hacker convention to figure out flaws. (Story aired on ATC on Aug. 26, 2023.)

A MARTÍNEZ, HOST:

The White House is worried about the risks of artificial intelligence, including the risk that this new technology can be used to discriminate. So they invited a bunch of hackers to see just what kind of biases are built into AI. NPR's Deepa Shivaram got a firsthand look and brings us this report.

DEEPA SHIVARAM, BYLINE: I'm standing in an overly air-conditioned conference center in Las Vegas in between a robot whirring on the floor and rows of tables set up with open laptops. And just outside this room, there's a long line of about a hundred people waiting to get inside. This is DEF CON, the biggest hacking convention in the world. And this is the first year where AI is front and center. These people are about to participate in the largest ever public red-teaming challenge. The goal? To get technology to break the rules by asking it all kinds of questions and see how easy it is to get it to say things that are inappropriate, illegal or biased.

KELSEY DAVIS: How do we try to break it so that we can find all these kinks and so that other people don't?

SHIVARAM: That's Kelsey Davis. She's here with the group called Black Tech Street. It's a nonprofit based in Tulsa, Okla., and aims to help Black economic development through technology. Racism and discrimination in AI isn't a new thing. Back in 2015, for example, Google Photos, which uses artificial intelligence, was labeling pictures of Black people as gorillas. Tech companies have tried to make changes, but the underlying problem remains. There's a lack of diverse data being used and a lack of diversity among the people who designed the technology in the first place.

UNIDENTIFIED PERSON: Did you need to give that back to me, sir?

SHIVARAM: Most of the people here are white, and most are men. But organizers made sure to invite groups like Black Tech Street for more representation in this challenge. Here's Denzel Wilson with SeedAI, one of the organizers of the event.

DENZEL WILSON: It's important when you have, you know, Black and brown minority people coming in, doing these challenges, and they're doing prompts that these models aren't used to seeing. So the more we're able to kind of evolve that and the more we're able to get more novel responses, it's just really important for everybody involved, especially the companies building the models 'cause now they understand what they need to do better to alleviate the bias.

SHIVARAM: I check back in with Kelsey about 20 minutes into the challenge, and she's feeling pretty accomplished because she just got the chatbot to say something really racist about blackface.

DAVIS: But, you know, that's good 'cause that means that I broke it.

SHIVARAM: The process isn't exactly straightforward. She started by asking the chatbot definitions.

DAVIS: I asked him stuff like, what is blackface? Is blackface wrong?

SHIVARAM: It was able to answer these basic questions, but she kept pressing. She asked the chatbot how a white kid could convince their parents to let them go to an HBCU, a historically Black college. The answer was to say that they could run fast and dance well - perpetuating the stereotype that all Black people can run fast and dance well. Kelsey submits the conversation she had with the chatbot to tech companies. They can use it to tweak their programming so this answer won't come up again.

But overall, these instances are only a small fraction of the threats AI can pose to marginalized groups. AI has the potential to exacerbate discrimination in things like police surveillance against Black and brown people, in financial decision-making and in housing opportunities. Arati Prabhakar is at DEF CON, too. She's the head of the White House's Office of Science and Technology Policy, and she's looking for solutions to make sure AI is safe and secure and equitable.

ARATI PRABHAKAR: This is a priority. It's moving fast. It's going to affect Americans' lives in so many different ways.

SHIVARAM: Prabhakar and other officials have been meeting with civil rights leaders, labor unions and other groups for months to talk about AI. Their efforts will show up in an executive order on managing AI that President Biden is expected to release in September. Deepa Shivaram, NPR News.

Copyright © 2023 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.