This article is written by a student writer from the Her Campus at U Mass Amherst chapter.

Fei-Fei Li, former chief scientist of AI at Google Cloud, once said, “[be careful] not to be misled by the name ‘artificial intelligence’: there is nothing artificial about it. A.I. is made by humans, intended to behave by humans and, ultimately, impact humans’ lives and human society.”

In August 2018, the ride-sharing app Uber came under fire after several transgender drivers were temporarily or permanently suspended by new security features that required drivers to verify their identity with a selfie. Later that same year, Amazon’s experimental hiring algorithm was revealed to be downgrading applications that contained the word “women’s,” including those from graduates of all-women’s colleges. In May 2019, Vox raised concerns about algorithmic bias after self-driving cars were reported to have a harder time detecting pedestrians with darker complexions.

All of these examples of artificial intelligence working against racial and gender minorities fall under “data violence”: the idea that, just like physical violence, these algorithmic decisions are the result of choices that implicitly or explicitly lead to harmful and even fatal outcomes.


These algorithms do more than just determine what’s on your Facebook page: they decide who gets hired, who gets a loan, who goes to prison, and even who gets benefits after a medical emergency. And it’s likely you have no idea how they work.

Many view these failures as the result of “bad data” rather than of unconscious or systemic bias within the artificial intelligence industry. It’s true that algorithms are “taught” by feeding them massive amounts of data, and when that data contains implicit bias (for example, Amazon’s hiring tool learning from a majority-male applicant pool), the results will likely reproduce it. So why not just go back and fix the “bad” data?

It’s not just that AI systems need to be fixed when they misidentify a face or reinforce stereotypes: it’s that they perpetuate existing forms of structural inequality, creating a feedback loop that shapes both the industry and the tools it creates. As the 2019 research paper “Discriminating Systems: Gender, Race, and Power in AI” puts it, “[t]hese questions point to the larger problem…even when working as intended.”


Who is behind the algorithms that impact nearly every aspect of our lives? The answer, to many, is not surprising: predominantly white, cis, heterosexual men. Women make up only 15% of AI research staff at Facebook and 10% at Google, and there is no public data on trans workers or other gender minorities. Almost half of the women who enter technology eventually leave the field, more than double the rate at which men depart. For Black workers, the picture is even worse: only 2.5% of Google’s workforce is Black, while Facebook and Microsoft are each at 4%. There are jokes that there are more Black Lives Matter posters at Google than actual Black employees.

Given decades of concern and investment aimed at addressing this racial and gender imbalance, the current state of the field is alarming. It’s not that these engineers are intentionally embedding racial bias into the technology they create; it’s often a matter of unconscious bias and a lack of diversity in upper management to catch and question assumptions built into the technology. And a public uneducated in artificial intelligence is less likely to hold them accountable.

Many in Silicon Valley push back by claiming that companies like Google are diverse: cognitively, at least. The trend of “cognitive diversity” holds that a board of 12 white male directors is, in fact, diverse in life experience and thought. Others blame “pipeline” issues with minority graduates and opaque hiring practices.

But we need to ask ourselves: who does this line of thinking benefit? Who does it harm? Why does the public know so little about these consequences? And even if we solve the diversity issue in the AI sector, shouldn’t we rethink these algorithmic systems entirely?


While the future is uncertain, one thing seems clear: some problems can’t (and shouldn’t) be fixed by a technical solution.



Katherine Scott

U Mass Amherst '21

Katherine is an honors double major in Journalism and Political Science at the University of Massachusetts Amherst. She hopes to one day combine her love of activist writing and politics to become a host of her own podcast. When she's not writing, Katherine loves to spend her time traveling, going to the theatre, and watching Star Wars (for the 100th time). Follow her on Instagram @_katiescott17