
Tech Companies Are Limiting Police Use of Facial Recognition. Here's Why

AI (artificial intelligence) security cameras with facial recognition technology are seen at the 14th China International Exhibition on Public Safety and Security in Beijing on October 24, 2018. (Nicolas Asfouri/AFP via Getty Images)
Earlier this month, IBM said it was getting out of the facial recognition business. Then Amazon and Microsoft announced moratoriums on law enforcement use of their facial recognition technology. Nationwide protests have opened the door to a conversation about how police should use these systems, amid growing evidence of gender and racial bias baked into the algorithms.
Today on the show, Short Wave host Maddie Sofia and reporter Emily Kwong speak with AI policy analyst Mutale Nkonde about algorithmic bias — how facial recognition software can discriminate and reflect the biases of society.
Nkonde is the CEO of AI for the People, a fellow at the Berkman Klein Center for Internet & Society at Harvard University, and a fellow at the Digital Civil Society Lab at Stanford University.
Articles mentioned in this episode:
NPR reporter Bobby Allyn's coverage of IBM and Amazon halting police use of facial recognition technology
Joy Buolamwini and Timnit Gebru's 2018 MIT research project "Gender Shades"
The National Institute of Standards and Technology's (NIST) 2019 study "Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects"
Email the show at shortwave@npr.org.
This episode was produced by Brit Hanson, fact-checked by Berly McCoy, and edited by Viet Le.