Tech Companies Are Limiting Use Of Facial Recognition By Law Enforcement : Short Wave In June 2020, Amazon, Microsoft and IBM announced that they were limiting some uses of their facial recognition technology. In this encore episode, Maddie and Emily talk to AI policy analyst Mutale Nkonde about algorithmic bias: how facial recognition software can discriminate and reflect the biases of society, and how the current debate over policing has raised the question of how law enforcement should use this technology.

Why Tech Companies Are Limiting Police Use of Facial Recognition

Facial recognition researcher Joy Buolamwini stands for a portrait behind a mask she had to use so that software could detect her face. Buolamwini's research has uncovered racial and gender bias in facial analysis tools sold by companies such as Amazon that have a hard time recognizing certain faces, especially darker-skinned women.

Steven Senne/AP

IBM said it was getting out of the facial recognition business last year. Then Amazon and Microsoft announced that they would bar law enforcement from using their facial recognition tech. Nationwide protests last summer opened the door to a conversation about how police should use these systems, amid growing evidence of gender and racial bias baked into the algorithms.

On today's encore episode, Short Wave host Maddie Sofia and reporter Emily Kwong speak with AI policy analyst Mutale Nkonde about algorithmic bias — how facial recognition software can discriminate and reflect the biases of society.

Nkonde is the CEO of AI For the People, a fellow at the Berkman Klein Center for Internet & Society at Harvard University, and a fellow at the Digital Civil Society Lab at Stanford University.

Additional Reading:

This episode was produced by Brit Hanson, fact-checked by Berly McCoy and edited by Viet Le.