ChatGPT. Alexa. The Matrix.
Artificial Intelligence is growing increasingly prominent in our everyday lives. What was once a dystopian threat confined to popular works of fiction and film has become our modern-day reality, with these technologies woven into our daily routines. The question is: does AI reduce human bias, as its proponents claim? Or can we still find those same attitudes embedded within AI systems?
Many companies have introduced ‘AI-enabled hiring’ into their recruitment process: quick-working AI tools shortlist the applicants whose skills best match the job. This may seem an effective way to screen potential employees, but the process is not free from bias. Amazon, for instance, trialled an automated recruitment system to evaluate job applicants; as was widely reported in 2018, the system was trained on the CVs of previously successful candidates and, because those hires had been predominantly male, it learned to favour male applicants. In streamlining the recruitment process, the AI exposed existing biases in the company’s employment history and replicated those exclusionary practices.
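The dynamic can be sketched in a few lines. The toy screener below is purely hypothetical (it is not Amazon's actual system, and the names and keywords are invented); it simply scores applicants by how closely their CVs resemble past hires, and so mechanically reproduces the imbalance in its training data.

```python
# Hypothetical illustration of biased screening: a naive model that
# scores applicants by similarity to previously hired employees.
from collections import Counter

# Toy historical data: past hires skew heavily male (a biased training set),
# and each group happens to use different CV wording.
past_hires = [{"gender": "M", "keyword": "executed"}] * 8 + \
             [{"gender": "F", "keyword": "collaborated"}] * 2

# The "model" just learns which CV keywords past hires used most often.
keyword_scores = Counter(h["keyword"] for h in past_hires)

def score(applicant):
    # Higher score = CV resembles past (mostly male) hires more closely.
    return keyword_scores[applicant["keyword"]]

applicants = [
    {"name": "A", "gender": "F", "keyword": "collaborated"},
    {"name": "B", "gender": "M", "keyword": "executed"},
]
shortlist = sorted(applicants, key=score, reverse=True)
print([a["name"] for a in shortlist])  # the male-associated CV ranks first
```

No gender field is ever read by the model, yet the ranking still favours the male applicant: the bias rides in on proxy features learned from skewed history.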
AI is trained on large, annotated datasets collected by humans and drawn from human experience. This means that the prejudices existing within our society are implicitly (and sometimes explicitly) programmed into the world of Artificial Intelligence. For example, many AI image classifiers are trained on ImageNet, a large-scale image database built to advance computer object recognition and image processing, whose images were gathered largely through web searches such as Google Images. Google Images’ own biases, particularly in terms of representation, are therefore mirrored in ImageNet and trained into these classifiers too, producing systems with gender, racial and cultural biases. Images from China and India, for example, make up only 3% of ImageNet’s data combined, despite these countries representing 36% of the world’s population. Lacking sufficient supporting data, AI is more likely to offer biased or misleading representations of these countries.

The same applies to AI-generated images of women. The misrepresentation of women in the media feeds directly into the algorithmic data that informs these computerised systems, so Artificial Intelligence reproduces those harmful, biased images. AI-generated pictures of CEOs disproportionately show white men in suits, while pictures of secretaries foreground women, often wearing revealing clothing. Here we can see how the male gaze can inform, and be reproduced by, technology.
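The effect of such representation skew can be shown with a toy calculation. The counts below are illustrative stand-ins (not real ImageNet figures): a model trained on a 97/3 regional split learns a prior that all but ignores the under-represented regions.

```python
# Illustrative sketch of dataset skew: region labels for a tiny toy corpus.
from collections import Counter

# Hypothetical image counts by region, echoing the kind of imbalance
# described above (numbers invented for illustration).
dataset = ["US/Europe"] * 97 + ["China/India"] * 3

counts = Counter(dataset)

# A model trained on this data sees "China/India" scenes only 3% of the
# time, so its learned prior is tilted toward the over-represented regions.
prior = {region: n / len(dataset) for region, n in counts.items()}
print(prior)
```

Whatever the model architecture, anything fit to this corpus starts from a 97% prior for the over-represented group, which is one concrete way under-representation turns into biased or misleading outputs.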
It is easy to imagine that AI, as a computerised technology, is neutral and bias-free. In recruitment especially, it seems a productive way to streamline the hiring process. Yet when we take a closer look at how these AI systems are programmed, and by whom, it becomes clear how human biases wriggle their way in. Perhaps AI isn’t quite the impartial addition we might have thought…