D.C. attorney general introduces bill to ban 'algorithmic discrimination'

The first-of-its-kind legislation would prohibit companies from using algorithms to make decisions that discriminate based on race, gender, sexual orientation, and other characteristics.

D.C. Attorney General Karl Racine. Tyrone Turner/WAMU/DCist

D.C. Attorney General Karl Racine on Thursday announced a first-in-the-nation bill to ban "algorithmic discrimination," the practice of computer algorithms that discriminate against certain people who apply for jobs, seek a place to live, or try to get a loan.

The legislation would build on D.C.'s existing Human Rights Act, which prohibits discrimination based on a number of protected characteristics, ranging from race and gender to national origin and sexual orientation, and extend it into the technological realm.

In a letter to D.C. Council Chairman Phil Mendelson unveiling the Stop Discrimination by Algorithms Act, Racine said that algorithms — computer processes that use large amounts of data to produce specific predictions, results, or outcomes — are increasingly enmeshed in people's daily lives, but can also "reflect and replicate historical bias, exacerbating existing inequalities and harming marginalized communities."


That comes from the data that feeds the algorithms, says Cynthia Khoo, an associate at the Center on Privacy & Technology at the Georgetown Law Center, which studied the issue and helped Racine's office draft the bill. She says that reliance on data for everything from ZIP codes of residence to traditional credit scores — both of which can be affected by historical patterns of discrimination — is the main source of the problem.

"If that data and those historical biases make it into these computational algorithms that are used to make important life decisions about people, such as whether to hire them, whether to rent to them, whether to approve a mortgage for them, then you basically end up with a form of technological discrimination against historically marginalized groups," she says.

In August, an investigation of two million conventional mortgage applications by The Markup revealed that lenders nationwide were 40% more likely to deny Latino applicants and 80% more likely to deny Black applicants than comparable white applicants for home loans. Earlier this year, AlgorithmWatch found that Facebook ads were in some cases displayed to users based on specific stereotypes. "A job ad for a truck driver was 10 times more likely to be displayed to men than women, and a job ad for an educator was 20 times more likely to be displayed to women than men," it reported.

A 2015 study by researchers at Carnegie Mellon University similarly found that Google was more likely to show ads for high-paying jobs to men than to women, and in 2019 researchers reported that an algorithm used to make health care decisions in many U.S. hospitals was more likely to flag white patients for extra care than comparable Black patients.

"You see these kinds of patterns replicated across credit and finances and education, such as college admissions. So you can imagine the totality of the impact across somebody's life, particularly if they are Black or brown or part of a marginalized group," says Khoo.

The legislation would make it illegal for companies to use discriminatory algorithms to make decisions about "key areas" of life, including education, jobs, access to credit, health care, insurance, and housing. It would also require those companies to conduct annual audits of their algorithms, document how their algorithms are built, and disclose to consumers how algorithms are used to make decisions. If an algorithm's decision goes against a consumer, the company would also have to provide an in-depth explanation of why. Violations would be met with fines of up to $10,000.

"This so-called artificial intelligence is the engine of algorithms that are, in fact, far less smart than they are portrayed, and more discriminatory and unfair than big data wants you to know. Our legislation would end the myth of the intrinsic egalitarian nature of AI," said Racine in a statement.

Khoo says that addressing discrimination by algorithms could start with companies consulting historically marginalized communities when deciding what data and inputs to draw on. But even if those conversations happen, Khoo says, it may be impossible to avoid discrimination altogether because of problems with the underlying data, in which case such algorithms would need to be banned.

Some companies have already jumped ahead of the possibility of legislation like the bill Racine introduced. This week, a group of companies including CVS Health, Walmart, Mastercard, General Motors, Nike, and Meta unveiled the Data & Trust Alliance, whose first initiative is "Algorithmic Safety: Mitigating Bias in Workforce Decisions."

"Alliance member organizations identify unfair bias as one of the highest risks when using these technologies to augment workforce management," said the new group. "Therefore, we have developed Algorithmic Bias Safeguards for Workforce — criteria and education for HR teams to evaluate vendors on their ability to detect, mitigate, and monitor algorithmic bias in workforce decisions."

The bill represents another of Racine's legal broadsides against new and emerging technologies and companies. He has sued Facebook over private user data and hate speech, Amazon over its online retailing practices, and Google over its primacy in online searches.

This story is from DCist.com, the local news website of WAMU.
