Tech Companies Are Limiting Police Use of Facial Recognition. Here's Why


AI (artificial intelligence) security cameras with facial recognition technology are seen at the 14th China International Exhibition on Public Safety and Security in Beijing on October 24, 2018. NICOLAS ASFOURI/AFP via Getty Images

Earlier this month, IBM said it was getting out of the facial recognition business. Then Amazon and Microsoft announced prohibitions on law enforcement using their facial recognition tech. Nationwide protests have opened the door for a conversation around how these systems should be used by police, amid growing evidence of gender and racial bias baked into the algorithms.

Today on the show, Short Wave host Maddie Sofia and reporter Emily Kwong speak with AI policy analyst Mutale Nkonde about algorithmic bias — how facial recognition software can discriminate and reflect the biases of society.

Nkonde is the CEO of AI For the People, a fellow at the Berkman Klein Center for Internet & Society at Harvard University, and a fellow at the Digital Civil Society Lab at Stanford University.

Articles mentioned in this episode:

NPR reporter Bobby Allyn's coverage of IBM and Amazon halting police use of facial recognition technology

Joy Buolamwini and Timnit Gebru's 2018 MIT research project "Gender Shades"

The National Institute of Standards and Technology's (NIST) 2019 study "Face Recognition Vendor Test Part 3: Demographic Effects"

Email the show at shortwave@npr.org.

This episode was produced by Brit Hanson, fact-checked by Berly McCoy, and edited by Viet Le.