Toronto MU | Culture

Can Artificial Intelligence Be Feminist?

Samhitha Balamoni Student Contributor, Toronto Metropolitan University
This article is written by a student writer from the Her Campus at Toronto MU chapter and does not reflect the views of Her Campus.

Artificial intelligence (AI) is quickly becoming the operating system of the modern world, automating important financial, healthcare, and employment decisions. However, because AI systems are developed by a largely homogeneous workforce and trained on enormous volumes of historical data, they inevitably reproduce and amplify systemic biases, most notably gender inequality. 

This is the central ethical dilemma: can we fundamentally re-engineer AI to be feminist, or does its current trajectory render it intrinsically discriminatory? The emerging field of feminist AI governance argues that a radically inclusive and equitable technology is feasible.

Gender bias in AI is not a theoretical issue; it is a proven fact, resulting from both skewed training data and a lack of diversity in development teams. AI models learn from the data they are fed, and if the data is biased, so are the resulting judgments.

This was made abundantly evident by Joy Buolamwini and Timnit Gebru’s groundbreaking 2018 Gender Shades study, which found that commercial facial analysis systems from large tech companies performed noticeably worse for women, particularly those with darker skin tones. In one system, error rates for darker-skinned women reached 34.4%, while for lighter-skinned men they were less than 1%. This illustrates what Buolamwini calls the “coded gaze”: a prejudice ingrained in the technology that reflects the training data’s preponderance of “pale and male” representation.

Biased data has direct, real-world repercussions. For example, an internal AI recruiting tool used by Amazon was reportedly halted after it showed bias against women. Because the model was trained on resumes submitted to the company over a 10-year period, during which men dominated the tech industry, it learned to prefer male candidates. 

Resumes containing the word “women’s” (as in “women’s chess club captain”) were downgraded. This demonstrates that AI can automate and scale human prejudice rather than eradicate it.

The Feminist Global AI Ethics and Governance Observatory seeks a systemic, transformative approach to technology development grounded in feminist ideas, rather than merely in efforts to minimize prejudice. Traditional anti-discrimination frameworks frequently treat gender and ethnicity as distinct categories. 

However, research shows that AI biases work intersectionally: Black women experience different outcomes than Black men or white women with identical qualifications. This includes studies on large language models (LLMs) used in recruiting.

Feminist philosophy, especially the ethics of care, challenges conventional AI’s emphasis on pure efficiency and abstract logic. It demands that principles such as empathy, relationality, and fairness be built into algorithms and the entire development lifecycle.

According to author and data scientist Cathy O’Neil, “algorithms are opinions embedded in code”: they are contextual and shaped by power dynamics rather than impartial. Consequently, feminist AI governance requires examining AI systems as byproducts of social injustices and power dynamics, acknowledging that the potential for harm is not uniformly distributed. Preventing prejudiced outcomes requires robust systems for critically auditing algorithms and their decision-making processes.

Ultimately, the question “can AI be feminist?” is about who develops technology, how it is trained, and whose values it prioritizes. By embracing intersectional feminist ethics and shifting the paradigm from detached logic to compassion and justice, we can build an AI ecosystem that actively works to eliminate, rather than perpetuate, the racial and gendered injustices that permeate our society.

Samhitha Balamoni

Toronto MU '26

Samhitha is a Computer Science and Psychology student at Toronto Metropolitan University who enjoys writing about everything from pop culture rabbit holes to the ways technology shapes everyday life. She strives to keep her writing approachable and easy to connect with, making readers feel like they’re part of the conversation. Outside of writing, she loves reading books, watching movies, and discovering new music to soundtrack her day.