Tech Companies Are Limiting Police Use of Facial Recognition. Here's Why

Earlier this month, IBM said it was getting out of the facial recognition business. Then Amazon and Microsoft announced limits on law enforcement's use of their facial recognition tech. There's growing evidence these algorithmic systems are riddled with gender and racial bias. Today on the show, Short Wave speaks with AI policy researcher Mutale Nkonde about algorithmic bias: how facial recognition software can discriminate and reflect the biases of society.

MADDIE SOFIA, HOST:

Hey, everybody. Maddie Sofia here. So we're making SHORT WAVE history with a night of virtual trivia. That's right - tonight, June 23. Join us for an evening of science, friendly competition and Emily Kwong telling your team you're great while I tell you to get your [expletive] together tonight at 8:00 p.m. Register in advance at nprpresents.org.

(SOUNDBITE OF MUSIC)

SOFIA: You're listening to SHORT WAVE from NPR.

OK, so I'm here with SHORT WAVE reporter Emily Kwong.

EMILY KWONG, BYLINE: Hey, Maddie.

SOFIA: Hey, you. So today we're talking about some pretty big news from the world of tech, which is that Amazon, Microsoft and IBM are placing significant limits around their facial recognition technology.

KWONG: Yeah. This is kind of a sea change for these companies to voluntarily regulate facial recognition technology, citing concerns about how this tech is used by law enforcement, which we'll delve into in a minute. But let's start with the basics. So Maddie, what do you know about facial recognition?

SOFIA: Well, I remember being very weirded out by it when I could suddenly unlock my cellphone with my face.

KWONG: Right. Me, too. That's called a one-to-one search. Your phone is basically saying, aha, yes, this is Maddie's face; we shall unlock.

SOFIA: But it also kind of creeps me out. And I will say, Emily, sometimes it doesn't recognize my face in the morning, which is rude.

KWONG: (Laughter) So here's the thing about facial recognition. One, it is imperfect. And two, it's completely unregulated. There are no federal laws or standards dictating how these technologies should and shouldn't be used. Innovations in AI have basically moved way faster than policies to regulate them.

SOFIA: Right. And I guess now we're seeing facial recognition being tested in doctors' offices to help diagnose patients and in shopping malls to look at patterns of how people move around, that kind of stuff.

KWONG: Right. And local and state law enforcement agencies have been using facial recognition technology for years to identify people through what's called one-to-many searches, taking - let's say - a photo of a suspect or grainy security camera footage and seeking to match the image within these massive photo databases made up of mug shots, passport and visa pictures and driver's license images. This technology has helped agencies solve cases and identify victims, but civil liberties groups say it's a violation of privacy and prone to discrimination. And that's what I want to talk about today - growing evidence that facial recognition identification systems are riddled with gender and racial bias.
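
For readers who want a concrete picture of what a one-to-many search does, here is a minimal Python sketch. It is not any vendor's actual system: the embeddings are random stand-ins for the vectors a real face-recognition model would produce, and the gallery entries, identity labels and similarity threshold are all hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v)

# Stand-in embeddings: a real system would compute these with a trained
# face-embedding model, not random numbers.
probe = normalize(rng.normal(size=128))  # the suspect photo or camera still
gallery = {
    "person_001": normalize(rng.normal(size=128)),
    "person_002": normalize(probe + 0.1 * rng.normal(size=128)),  # looks a lot like the probe
    "person_003": normalize(rng.normal(size=128)),
}

def one_to_many_search(probe_vec, gallery, threshold=0.5):
    # Score every enrolled identity against the probe (cosine similarity,
    # since the vectors are normalized) and keep those above the threshold.
    scores = {pid: float(probe_vec @ vec) for pid, vec in gallery.items()}
    candidates = [(pid, s) for pid, s in scores.items() if s >= threshold]
    return sorted(candidates, key=lambda c: c[1], reverse=True)

print(one_to_many_search(probe, gallery))

The key point of the sketch: what a search returns is a ranked list of candidates scored against a tunable threshold, not a definitive identification, which is part of why error rates matter so much.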

(SOUNDBITE OF ARCHIVED NPR BROADCAST)

JOY BUOLAMWINI: All of these systems work better on lighter-skinned faces than darker-skinned faces. They all overall work better on male-identified faces than female-identified faces.

KWONG: That's MIT researcher Joy Buolamwini speaking with NPR's Bobby Allyn earlier this month. In 2018, Joy and fellow researcher Timnit Gebru were among the first to provide evidence of algorithmic bias in facial recognition software. It was groundbreaking work, raising questions about just how accurate these systems really are.

SOFIA: So today on the show, algorithmic bias - how even facial recognition software can discriminate and reflect the biases of society.

KWONG: And how current debate about policing has opened the door for a national dialogue about how this technology should be used by law enforcement.

(SOUNDBITE OF MUSIC)

SOFIA: All right, Emily Kwong, so we're talking about this announcement from a string of tech companies that they are going to put limits on their facial recognition technology, especially when it comes to law enforcement - Amazon, Microsoft and IBM.

KWONG: Yes. On June 8, IBM said it would discontinue general purpose facial recognition or analysis software altogether - get out of the business completely. And it made an impression. After IBM's big letter, Amazon announced a one-year moratorium on sales of their very popular software Rekognition - spelled with a K - to law enforcement to give Congress time to, quote, "implement appropriate rules."

SOFIA: So a one-year ban.

KWONG: Yes. Microsoft took it a step further, saying it wouldn't sell products to law enforcement at all until a federal law is in place. Here's Microsoft President Brad Smith speaking to The Washington Post.

(SOUNDBITE OF ARCHIVED RECORDING)

BRAD SMITH: We need to use this moment to pursue a strong national law to govern facial recognition that is grounded in the protection of human rights.

KWONG: And for Mutale Nkonde, who has been pushing for regulation changes in tech for years, this was a big deal. When these words were coming out of Silicon Valley, she felt all of the feelings.

MUTALE NKONDE: My initial was, thank God. Thank God. I was happy. I was pleased. I was optimistic. I was short of breath. I was exhausted.

KWONG: Mutale is the CEO of AI for the People and a fellow at both Harvard and Stanford universities. For her, these announcements shifted the conversation, but that's about it.

NKONDE: So I'm pleased. It's got us incredibly far, but we're by no means out of the woods.

KWONG: Not out of the woods because for all of the advancement in facial recognition, these systems still get it wrong. They'll incorrectly match two different people - what's called a false positive - or fail to associate two images of the same person.

SOFIA: So a false negative.

KWONG: Yeah. And what's vexing is these errors are happening more often when the machines are analyzing dark-skinned faces. And that can disproportionately affect already marginalized communities that face unconscious bias at the hands of law enforcement, leading to false accusations, arrests and much worse. So until there's action on this, Mutale said, words just aren't enough.
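
To make the two kinds of errors concrete, here is a minimal Python sketch of how an auditor might tally false positive and false negative rates per demographic group. The trial records are invented for illustration; a real audit would use labeled verification results from an actual system.

from collections import defaultdict

# Each trial: (demographic group, were the two photos really the same person,
# did the system say they match). These rows are invented for illustration.
trials = [
    ("lighter-skinned", True, True),
    ("lighter-skinned", False, False),
    ("darker-skinned", False, True),   # false positive: a wrong match
    ("darker-skinned", True, False),   # false negative: a missed match
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
for group, same_person, system_said_match in trials:
    c = counts[group]
    if same_person:
        c["pos"] += 1
        if not system_said_match:
            c["fn"] += 1
    else:
        c["neg"] += 1
        if system_said_match:
            c["fp"] += 1

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
    fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
    print(f"{group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")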

SOFIA: Gotcha. So - OK, Emily, let's unpack this a little bit. Let's talk about how bias gets into facial recognition systems in the first place.

KWONG: I'd love that. OK. So it starts - right? - with how these systems learn to do their jobs, a process known as machine learning. So to make facial recognition systems, engineers feed algorithms large amounts of what's called training data.

SOFIA: In this case, that would be pictures of human faces.

KWONG: Yes.

NKONDE: The way machines learn is that they repeat a task again and again and again and again and again.

KWONG: Developing a statistical model for what a face is supposed to look like. So if you wanted to teach the algorithm to recognize a man...

SOFIA: You'd put in, like, millions of pictures of men.

KWONG: You got it.

NKONDE: The machine will then measure the distance between the eyes on each picture, the circumference of the nose, for example, the ear-to-eye measurement.

KWONG: And over time, the machine starts to be able to predict whether the next image it's seeing is, quote, "a man," which sounds OK. Right?
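
Here is a toy Python sketch of that training step, under loose assumptions: the facial measurements, the labels and the simple logistic-regression model are all stand-ins, and a real system would learn from millions of images rather than a small synthetic table. The point it illustrates is the one in the episode: the model is a statistical summary of whatever examples it was fed.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: each row is a face, each column a measurement
# (say, eye distance, nose width, ear-to-eye distance, in millimeters).
X_train = rng.normal(loc=[63.0, 35.0, 90.0], scale=3.0, size=(1000, 3))
# Made-up labels (1 = "man", 0 = "not man") tied to the first measurement,
# purely so the toy model has something learnable.
y_train = (X_train[:, 0] > 63.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model can only reflect the data it was shown: if faces like this one
# were rare or absent in training, its prediction is much less reliable.
new_face = np.array([[64.1, 34.2, 91.5]])
print(model.predict(new_face))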

SOFIA: Here comes the "but."

KWONG: But the machine is only as smart as its training data. So remember Joy Buolamwini, who I mentioned at the top of the episode?

SOFIA: Yeah, the one at MIT.

KWONG: Yes. So she and her colleague Timnit Gebru developed a way to analyze skin color in these training sets. And the two they looked at were overwhelmingly composed of lighter-skinned subjects - 79% for IJB-A and 86% for Adience. These are two common datasets that were largely, as Joy put it, pale and male.

SOFIA: So basically, the training data used to create these algorithms is not diverse, and that's how that bias gets in.

KWONG: Mmm hmm. The diversity of human beings is not always being represented in these training sets, and so faces outside the system's norm sometimes don't get recognized. Here's Mutale explaining what the research meant to her.

NKONDE: That goes back to this other issue of not just hiring but a bigger issue of there's no one in the team to say that you haven't put all the faces - you know, you haven't put all the digital images of what all human beings could look like in the way that they show up in society in order to recognize these faces. And it's...

KWONG: And so after realizing how unbalanced these training sets were, Joy and Timnit decided to create their own with equality in race and gender to get a general idea of how facial AI systems performed with a more diverse population.

SOFIA: So basically, they fed it more diverse pictures to look at.

KWONG: Yeah. It was kind of interesting. They used images from the top ten national parliaments in the world with women in power - yeah - specifically picking African and European nations. And they tested this new data against three different commercially available systems for classifying gender, one made by IBM, the second by Microsoft and the third by Face++. And in running these tests, Joy and Timnit found clear discrepancies along gender and racial lines, with darker-skinned faces getting misclassified the most. Here's Mutale again.
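
Here is a hedged Python sketch of that kind of intersectional audit: run a gender classifier over a balanced benchmark and break its error rate down by skin type and gender rather than reporting a single overall accuracy. The classify_gender function is a dummy stand-in for the commercial systems tested in the study, and the benchmark rows are hypothetical.

from collections import defaultdict

def classify_gender(image_path):
    # Dummy stand-in for a commercial classifier or API call.
    return "male"

# Hypothetical balanced benchmark: (image path, skin type, true gender label).
benchmark = [
    ("img_001.jpg", "darker", "female"),
    ("img_002.jpg", "lighter", "male"),
    ("img_003.jpg", "darker", "male"),
    ("img_004.jpg", "lighter", "female"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for path, skin_type, true_gender in benchmark:
    subgroup = (skin_type, true_gender)
    totals[subgroup] += 1
    if classify_gender(path) != true_gender:
        errors[subgroup] += 1

# Report the error rate per subgroup instead of one overall number.
for subgroup, total in sorted(totals.items()):
    print(subgroup, f"error rate {errors[subgroup] / total:.0%}")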

NKONDE: So one of the things that Joy Buolamwini's amazing work looks at is the correlation (ph) between short hair and gender. So many, many, many black women with Afros were mislabeled as men - misgendered because the system had trained itself to recognize short hair as a male trait.

KWONG: And this research project, Maddie, produced a massive ripple effect - further studies, legislation. In December, the National Institute of Standards and Technology, or NIST, published a big paper of its own testing 189 facial recognition algorithms from around the world. And they found biases, too. Looking at one global dataset, some algorithms in their study produced 100 times more false positives for African and Asian faces compared to Eastern European ones. And when tested using another dataset of mug shots from the U.S., the highest false positive rates were found among American Indians, with elevated rates in African American and Asian populations - again, depending on the algorithm.

SOFIA: Wow. Yeah, that is not what you want from your data. And I'm guessing white men benefited from the highest accuracy rates.

KWONG: Yes, they did. Now, the NIST study did conclude that the most accurate algorithms demonstrated far less demographic bias. But for Mutale, this evidence of bias raises a bigger question about the ethics of relying on AI systems to classify and police people at all.

NKONDE: The problem with AI systems and machine learning is that they're really, really, really good at standard routine tasks. And the issue with humans is that we are not standard. We're not routine. We're actually massively messy.

SOFIA: Right, we're not all the same. But when a police officer searches a face in the system, they're not making an arrest based on just that match alone, are they?

KWONG: Oh, absolutely not. Yeah, it's a tool for identifying potential suspects. But if you think about how there's already implicit bias in policing, critics of facial recognition are basically saying it doesn't make sense to embrace technologies riddled with bias, too.

SOFIA: Right. As all this research has shown, these tools are capable of misidentifying Black people.

NKONDE: We cannot use biometric tools that discriminate against a group of people who are already discriminated against within the criminal justice system but policing most specifically.

KWONG: Maddie, when I first spoke to Mutale in March, she was open to moratoriums on facial recognition like Amazon is doing - buying time for these systems to improve or regulations to be put in place. But the protests have changed her views.

NKONDE: Because why am I being moderate when what we need to do is completely reimagine how we interact with technology?

KWONG: So now she wants to see facial recognition banned from law enforcement use, which some cities in the U.S. have done - in California and Massachusetts. And now a police reform bill introduced in Congress proposes to severely limit facial recognition in police body cameras. Mutale has pushed for legislation to outlaw discrimination in technology before, but it seems like people are now paying attention and have a language for talking about structural racism that they just didn't have.

NKONDE: Whether white America listen to me or not, I was going to continue with this work. I believe that technology should be an empowering force for all people, and that's my work. But now, having old and new allies - not just allies but co-conspirators - right? - I'm so happy because I didn't think it would happen in my lifetime. And it's happening. I'm delighted.

SOFIA: OK, Emily Kwong. I appreciate you. Thanks for reporting this out.

KWONG: You're welcome, Maddie.

SOFIA: This episode was produced by Brit Hanson and fact-checked by Berly McCoy. Our editor was Viet Le. I'm Maddie Sofia.

KWONG: And I'm Emily Kwong.

SOFIA: Thanks for listening to SHORT WAVE from NPR.

(SOUNDBITE OF MUSIC)
