
Joy Buolamwini: How Do Biased Algorithms Damage Marginalized Communities?


Part 3 of the TED Radio Hour episode Warped Reality

Data, numbers, algorithms are supposed to be neutral ... right? Computer scientist Joy Buolamwini discusses the way biased algorithms can lead to real-world inequality.

About Joy Buolamwini

Joy Buolamwini is a graduate researcher at the Massachusetts Institute of Technology who studies algorithmic bias in computer vision systems. She founded the Algorithmic Justice League to create a world with more ethical and inclusive technology.

Buolamwini serves on the Global Tech Panel, convened by the vice president of the European Commission, to advise world leaders and technology executives on ways to reduce the harms of AI. In late 2018, in partnership with the Georgetown Law Center on Privacy and Technology, she launched the Safe Face Pledge, the first agreement that prohibits the lethal application of facial analysis and recognition technology.

She holds two master's degrees, from Oxford University and MIT, as well as a bachelor's degree in computer science from the Georgia Institute of Technology.