Content warning: deepfake pornography.
Across all areas of life, AI has quickly become significant. It may be one of the largest global shifts since the dawn of the internet. The companies developing AI systems and tools have become some of the biggest in the world, despite AI being such a new phenomenon. Nvidia, for example, has become the most valuable company in the world, valued at the time of writing at around $5 trillion, and OpenAI has formed a high-profile partnership with Microsoft. Beyond the tech sphere, industries across the board have, willingly or not, begun to build AI into their daily work; most large law firms in the UK, for instance, now use their own AI systems for day-to-day tasks. Given AI's impact on society as a whole, from individuals to companies, it is important to examine how, if it is implemented carelessly and current issues are not adequately addressed, AI can worsen the abuse women already face in daily life.
Chatbots
AI chatbots are designed to "simulate conversation with the user", following IBM's definition. These chatbots learn from the information fed into them and adapt so that they can, in essence, predict what the user will say next and reply in an appropriate manner. If a chatbot is trained on material that contains, or is built on, misogyny or sexism, as so much material is, it will learn those patterns and use them in its output. This reproduces and reinforces misogynistic ideologies wherever the AI is used, for instance in workplaces and in learning environments, and undermines, rather than supports, the work being done to tackle gender division and discrimination in those places.
Laura Bates, in her most recent work The New Age of Sexism: How AI and Emerging Technologies are Reinventing Misogyny, highlights the dangers of these chatbots when they are used in a romantic or sexual capacity, with some chatbots designed for that specific purpose. This has become a growing trend, with reported cases of people falling in love with the chatbots they speak to. The reinforcement of misogynistic values comes into play here where a user designs, in essence, a companion they can hurt and victimise, carrying out behaviour they could not direct at a human woman. The danger is that users, having had this behaviour reinforced by the chatbot, then attempt to replicate it, to varying degrees, in the real world. Reinforcing this behaviour, and allowing people to believe it is an acceptable way to think, feel and act, works against all the effort that has gone into making women feel safer both in the real world and online. Women continue to suffer, and the harmful ideologies that encourage this treatment of women remain unchallenged.
Deepfakes
Alongside the harms of certain chatbots, deepfake pornography affects individuals in a specific, personal way. Deepfake pornography is when someone's face or "likeness" is placed onto "sexually explicit images with AI". This non-consensual creation of sexual images is incredibly violating for the victim and, as AI has advanced, these deepfakes look increasingly realistic. Although the woman will know the video is AI-generated, others may not, and it is hard to imagine that the violation of knowing such a video has been created and seen by others feels any less real than that of a genuine recording. The victims of this kind of abuse are overwhelmingly women, and a wide range of people are affected, from celebrities such as Taylor Swift to children.
Beyond the violation itself and the personal impact on the victim and their loved ones, deepfake pornography also carries the potential for reputational damage. Sharing such images and videos can be used to harm people's work and livelihoods, especially in a targeted attack.
Concluding notes
Both of these types of abuse have one common cause: dangerous ideologies. If we do not combat the ideologies that support and allow abuse against women in this way, AI tools will become another weapon against women. AI can be used in positive ways, such as in medical imaging and cancer screening. But for that promise to be realised, we need to address these problems before AI is allowed to perpetuate the wrongs that humans have already reinforced in the real world for far too long. AI can be a beacon of hope, of a new world that pushes society forward, but if its tools allow abuse against women to worsen, then it is not progress for everyone. It simply deepens inequality.