Casper Libero | Culture

Deep Fakes, Elections and The New Threat to Democracy

Giovanna Zanetti Student Contributor, Casper Libero University
This article is written by a student writer from the Her Campus at Casper Libero chapter and does not reflect the views of Her Campus.

Deep Fakes are visual and audio content manipulated, using advanced software or AI, to change how a person, object, or environment is presented, usually by swapping faces, altering voices, or even creating an entirely new person. But how did this actually start? How could it affect anyone with a profile on the internet? And, most importantly, what are the legal frameworks around Deep Fakes, and what are the consequences of using them?

THE BEGINNING OF THE DEEP FAKES

Back in 2017, on Reddit (a social media platform and public forum), a user called “deepfake” started to post fake pornographic videos of famous Hollywood actresses. Using open-source face-swapping technology, Reddit quickly became a hub for the spread of Deep Fakes, and even adult entertainers were horrified by the lack of consent in this kind of posting, with one stating: “It’s really disturbing. It kind of shows how some men basically only see women as objects that they can manipulate and be forced to do anything they want.”

According to a report by Deeptrace, an Amsterdam cybersecurity company, about fifteen thousand videos posted online in 2019 were Deep Fakes; 96% of them were pornographic, and 99% of those mapped the faces of female celebrities onto porn actresses.

HOW DO YOU MAKE A DEEP FAKE? 

Developer Alan Zucconi explains that, in principle, anyone can make a Deep Fake of whoever they want, but it is not that simple: you need a high-end desktop with powerful graphics cards, or computing power in the cloud, to reduce the processing time from weeks to days.

Then, the creator needs to swap one person’s face and replace it with another’s, usually by using a facial recognition algorithm called a variational auto-encoder (VAE), which is trained to encode images into low-dimensional representations and then decode them back into images. That is how the face swapping works.
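The encode-and-decode idea above can be sketched in a few lines of code. This is a minimal illustration, not a working Deep Fake tool: real pipelines train deep neural networks for days, while the matrices below are random stand-ins chosen only to show the shapes involved, and all names here (`encode`, `decode`, the dimensions) are hypothetical.

```python
# Sketch of the VAE-style encode/decode face swap, using plain NumPy.
# A shared encoder compresses a face image into a small latent code;
# a decoder trained on a *different* person reconstructs that code
# as the other person's face, keeping the original pose/expression.
import numpy as np

rng = np.random.default_rng(0)

IMG_PIXELS = 64 * 64   # a tiny 64x64 grayscale "face", flattened
LATENT_DIM = 128       # the low-dimensional representation

# Stand-in weights; in practice these are learned from many photos.
W_enc = rng.normal(size=(LATENT_DIM, IMG_PIXELS)) * 0.01
W_dec_b = rng.normal(size=(IMG_PIXELS, LATENT_DIM)) * 0.01  # decoder for person B

def encode(image):
    """Compress an image into its low-dimensional latent code."""
    return W_enc @ image

def decode(latent, weights):
    """Reconstruct a full-size image from a latent code."""
    return weights @ latent

# The swap: encode a frame of person A, decode with person B's decoder.
frame_of_a = rng.normal(size=IMG_PIXELS)
latent = encode(frame_of_a)
swapped = decode(latent, W_dec_b)

print(latent.shape)    # the compressed representation: (128,)
print(swapped.shape)   # a reconstructed full-size image: (4096,)
```

The key point the sketch shows is the bottleneck: because both faces pass through the same small latent code, the decoder can only "fill in" details it learned from its own person, which is what makes the swapped face look plausible.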

HOW CAN WE DEAL WITH THE PROBLEM?

But how can we deal with it, when in many countries Deep Fakes are not yet illegal and no specific law addresses them? The UK Law Society, for example, reports that Deep Fake incidents in the financial sector rose by 700% in 2023, with criminals using AI to imitate voices and issue fraudulent instructions over the phone.

In Brazil, the Federal Government has launched awareness campaigns about the problem, especially because it has been an issue since the Bolsonaro government, and it is a major concern this year, since it could affect the presidential elections in October.

MIT suggests some steps to check the truthfulness of images and videos on the internet, such as by paying attention to three main things: the face, the audio and the lighting. For UNESCO, “the rise of AI-augmented disinformation and misinformation demands a fundamental shift in how education must equip citizens to combat it. The task is not simply to teach new verification techniques, but to rebuild our very relationship with knowledge in an age where our senses can be deceived.” 

The article above was edited by Sarah Pizarro.

Like this type of content? Check out Her Campus Casper Libero for more.


Giovanna Zanetti

Casper Libero '29

Journalism undergraduate, passionate about politics, culture and entertainment