COMIC: How a computer scientist fights bias in algorithms

This comic, illustrated by Vreni Stollberger, is inspired by TED Radio Hour's episode Warped Reality.

Panel 1: "My name is Joy Buolamwini. I'm a poet of code on a mission to stop an unseen force that's rising. A force that I call the coded gaze – my term for algorithmic bias. Algorithmic bias, like human bias, results in unfairness. However, algorithms, like viruses, can spread bias on a massive scale at a rapid pace."

Panel 2: "When I look at algorithmic bias, what's potentially more nefarious is you don't have to intend to deceive or do harm. In fact, we can fool ourselves into thinking because it's based on numbers, it's neutral."

Panel 3: "The deception can be our own belief in a neutral system that doesn't actually exist in practice. That's because what we're training these systems on is a reflection of the inequalities in the world. Something I call power shadows."

Panel 4: "You might have a data set that's 75% male faces and over 80% lighter-skin faces. So what it means is the machine is learning a representation of the world that is skewed. But how are we getting such skewed data sets?"

Panel 5: "Oftentimes, people are gathering the data that's most readily available – focused on people who are public figures or public officials. So you're going to have an overrepresentation of white men."

Panel 6: "This is where the power shadows come in. Your selection of what's easiest to gather, what's most readily available, what's viewed as credible, is being shaped by social, cultural and political factors."

Panel 7: "This is where collective action is important. We need to have systemwide change so that companies can't operate with impunity. We're starting to see more bills come out around this. In Illinois, you have a bill where if an AI system is being used in hiring, it has to be disclosed that this is being used in the first place."

Panel 8: "Another step is saying before it can even be used, it has to be proven to show nondiscrimination. It could be in violation of Title VII of the Civil Rights Act. If you are able to say technology on the whole has done well, it probably means you're in a fairly privileged position."

Panel 9: "I always ask 'who can afford to say that?' The kids who are sitting in a McDonald's parking lot so they can access the internet to be able to attend school remotely? That has never been their reality. In the ideal future, before any kind of algorithmic decision-making system is even created, we're already in conversation with those who are going to be most impacted."

Panel 10: "When I critique tech, it's really coming from a place of having been enamored with it and wanting it to live up to its promises. I think that's a more optimistic approach than to believe in wishful thinking that isn't true."

Vreni Stollberger for NPR

Joy Buolamwini is a computer scientist with a PhD from MIT's Media Lab. She uses art and research to illuminate the social implications of artificial intelligence. She founded the Algorithmic Justice League to create a world with more equitable and accountable technology.

Copyright 2023 NPR. To see more, visit https://www.npr.org.

Christina Cala
Christina Cala is a producer for Code Switch. Before that, she was at the TED Radio Hour where she piloted two new episode formats — the curator chat and the long interview. She's also reported on a movement to preserve African American cultural sites in Birmingham and followed youth climate activists in New York City.
Vreni Stollberger
LA Johnson
LA Johnson is an art director and illustrator at NPR. She joined in 2014 and has a BFA from The Savannah College of Art and Design.