How a computer scientist fights bias in algorithms Computer scientist Joy Buolamwini is on a mission to fight bias in algorithms. In this comic, Buolamwini discusses how biased algorithms can lead to real-world inequality — and what we can do about it.

COMIC: How a computer scientist fights bias in algorithms

This comic, illustrated by Vreni Stollberger, is inspired by TED Radio Hour's episode Warped Reality.

Panel 1: "My name is Joy Buolamwini. I'm a poet of code on a mission to stop an unseen force that's rising. A force that I call the coded gaze – my term for algorithmic bias. Algorithmic bias, like human bias, results in unfairness. However, algorithms, like viruses, can spread bias on a massive scale at a rapid pace."
Panel 2: "When I look at algorithmic bias, what's potentially more nefarious is you don't have to intend to deceive or do harm. In fact, we can fool ourselves into thinking because it's based on numbers, it's neutral."
Panel 3: "The deception can be our own belief in a neutral system that doesn't actually exist in practice. That's because what we're training these systems on is a reflection of the inequalities in the world. Something I call power shadows."
Panel 5: "Oftentimes, people are gathering the data that's most readily available – focused on people who are public figures or public officials. So you're going to have an overrepresentation of white men."
Panel 6: "This is where the power shadows come in. Your selection of what's easiest to gather, what's most readily available, what's viewed as credible, is being shaped by social, cultural and political factors."
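The selection effect Buolamwini describes in these panels can be sketched in a few lines of code. This is a minimal, hypothetical illustration — the group names and proportions are invented, not drawn from any real dataset — showing how a model trained on the "most readily available" pool inherits that pool's skew rather than reflecting the population it will be used on.

```python
# Hypothetical sketch of a "power shadow": a dataset gathered from the
# most readily available sources (here, an invented 80/20 pool) does not
# match the population the system is deployed on (an invented 50/50 mix).
from collections import Counter
import random

random.seed(0)  # make the sketch reproducible

# Invented "readily available" pool, skewed toward one group.
available_pool = ["group_a"] * 80 + ["group_b"] * 20

# Invented underlying population the system will actually serve.
population = ["group_a"] * 50 + ["group_b"] * 50

def sample_share(pool, n=1000):
    """Return each group's share in a naive random sample from `pool`."""
    sample = [random.choice(pool) for _ in range(n)]
    counts = Counter(sample)
    return {g: counts[g] / n for g in ("group_a", "group_b")}

print("training data:", sample_share(available_pool))  # roughly 80/20
print("population:  ", sample_share(population))       # roughly 50/50
```

Nothing in the sampling step is malicious — the code just draws from what is at hand — yet the training data ends up overrepresenting one group, which is exactly the point made in the panels above.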
Panel 7: "This is where collective action is important. We need to have systemwide change so that companies can't operate with impunity. We're starting to see more bills come out around this. In Illinois, you have a bill where if an AI system is being used in hiring, it has to be disclosed that this is being used in the first place."
Panel 8: "Another step is requiring that, before it can even be used, it has to be shown to be nondiscriminatory. Otherwise, it could be in violation of Title VII of the Civil Rights Act. If you are able to say technology on the whole has done well, it probably means you're in a fairly privileged position."
Panel 9: "I always ask 'who can afford to say that?' The kids who are sitting in a McDonald's parking lot so they can access the internet to be able to attend school remotely? That has never been their reality. In the ideal future, before any kind of algorithmic decision-making system is even created, we're already in conversation with those who are going to be most impacted."
Panel 10: "When I critique tech, it's really coming from a place of having been enamored with it and wanting it to live up to its promises. I think that's a more optimistic approach than to believe in wishful thinking that isn't true."
Vreni Stollberger for NPR

Joy Buolamwini is a computer scientist with a PhD from MIT's Media Lab. She uses art and research to illuminate the social implications of artificial intelligence. She founded the Algorithmic Justice League to create a world with more equitable and accountable technology.

Katie Monteleone contributed to this story.