
Truth In The Age Of Deepfakes: How AI Is Undermining Public Trust

Letitia Sleiman Student Contributor, University of California - Santa Barbara
This article is written by a student writer from the Her Campus at UCSB chapter and does not reflect the views of Her Campus.

In the age of artificial intelligence, seeing is no longer believing. A recent global survey found that 72% of people worry about being deceived by deepfakes.

Videos can show events that never happened, voices can be cloned in seconds, and entire moments can be digitally fabricated. The line between what is real and what is manufactured blurs, raising deeper questions about trust online and in politics.

When Proof Stops Proving Anything

Suddenly, the audience carries a new burden. Before forming an opinion, people must first determine whether the evidence itself is even real. Understanding the news now comes with a preliminary investigation. Is the clip authentic? Was the audio altered? Did this moment actually happen, or was it engineered by an algorithm?

Processing information used to mean interpreting facts. Now it means auditing reality first. And that takes time, attention, and digital literacy that many people scrolling through social media simply do not have. In an endless feed of content, few people pause long enough to verify a video frame by frame or trace the origin of a recording.

The burden quietly shifts. Instead of institutions proving information is credible, individuals are left to play detective. In a fast-moving digital environment where attention is scarce and content spreads instantly, that responsibility becomes almost impossible to keep up with.

The Psychology of Doubting What We See

This creates a dilemma society has never faced before. For most of human history, our senses helped us judge reality. If you saw something happen or heard someone say something, that was usually enough to believe it existed. Our brains evolved to trust what we see and hear because those signals typically reflected reality.

Deepfakes disrupt that instinct.

When technology can perfectly imitate voices, faces, and movements, the signals our brains rely on suddenly become unreliable. A video may look authentic. A voice may sound identical. But neither guarantees that the moment actually happened.

Ironically, the more convincing fake content becomes, the easier it is to doubt real information. A genuine video can be dismissed as “just AI,” while a fabricated one can circulate widely before anyone questions it. Researchers describe this dynamic as the liar’s dividend, where the existence of deepfakes allows people to deny real evidence by claiming it was generated or manipulated.

The result is a psychological shift. Instead of trusting what we see and questioning later, people begin from a place of suspicion. When everything could be fake, skepticism grows… but so does confusion. Over time, people may stop questioning misinformation and begin questioning reality itself.

The Democratic Cost of Constant Skepticism

If citizens cannot agree on what is real, democracy starts to wobble. Political systems rely on a shared set of facts that people can debate, criticize, and interpret differently. But when the facts themselves become questionable, the entire conversation changes.

Constant skepticism reshapes how people engage with politics. Instead of debating policies or accountability, attention shifts to the credibility of the information itself. Every video, speech, or piece of evidence becomes another puzzle to solve.

Research suggests this confusion is already affecting public understanding of politics. In one study, 68% of Americans said made-up news creates significant confusion about basic facts, making it harder for citizens to stay informed and participate in public debate.

That confusion has consequences. Some people disengage entirely, while others retreat into sources they already trust, even if those sources reinforce existing biases. Over time, skepticism does not just challenge misinformation. It reshapes how people consume political information.

The danger is not only believing false information. It is that uncertainty can slowly weaken the shared informational ground democratic debate depends on.

Learning to Navigate a Synthetic Internet

Deepfakes are not going anywhere. If anything, they are only going to get better. The same technology that can fabricate speeches, clone voices, or stage entire events is evolving faster than public awareness and regulation can keep up.

So the real question becomes how people are supposed to navigate it.

Part of the answer lies in digital literacy. Recognizing manipulated content, questioning suspicious sources, and pausing before instantly sharing information are becoming essential skills. In a world where convincing videos can be generated in minutes, slowing down may be one of the few ways people can avoid being misled.

But responsibility cannot fall entirely on individuals. Technology companies and researchers are also developing tools to detect AI-generated media or verify authentic content. Some initiatives aim to attach digital signatures to images and videos when they are created, allowing viewers to trace where the content originated and whether it has been altered.
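The core idea behind those digital signatures can be shown in miniature. The sketch below is a simplified illustration, not any specific standard (real provenance initiatives, such as Content Credentials, combine cryptographic signatures with metadata about the capture device and edit history). It uses Python's built-in hashlib to demonstrate the underlying property these tools rely on: even a tiny edit to a file completely changes its cryptographic fingerprint, so a fingerprint recorded at creation can later reveal tampering.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest: a fixed-length 'fingerprint' of the content."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical content standing in for a video file's bytes.
original = b"frame data of an authentic video clip"
tampered = b"frame data of an authentic video clip!"  # a single added byte

# Fingerprint recorded when the content is created.
recorded = fingerprint(original)

# Later, a viewer's tool recomputes the fingerprint and compares.
print(fingerprint(original) == recorded)  # True: content is unaltered
print(fingerprint(tampered) == recorded)  # False: any edit breaks the match
```

In practice the recorded fingerprint is itself cryptographically signed by the creator, so viewers can check both that the content is unchanged and who originally published it.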

When Trust Stops Being Automatic

Even with detection tools, regulations, and greater awareness, deepfakes raise a deeper challenge. For decades, photos and videos functioned as shared proof. Seeing something usually meant it had happened.

That assumption is now changing.

When technology can perfectly imitate voices, faces, and events, visual evidence stops being automatically convincing. Instead of asking what happened, people increasingly ask whether it happened at all.

Trust must now be built through context, credible sources, and verification rather than simply through what appears on a screen.

Truth does not disappear. But believing it requires more effort than it once did.

And in a world where reality can be edited, manufactured, and shared instantly, the challenge ahead may not simply be spotting what is fake.

It may be learning how to keep believing in what is real.

Hey! I’m Leti, a second-year Political Science major at UCSB on a pre-law journey. I’m beyond excited to share my passions, experiences, and all the cool things I come across with you guys! When I’m not studying, you’ll catch me vibing to house music, hunting down the best foodie spots, bingeing true crime series, or just chilling with friends and family. As an Editorial Intern, I can’t wait to bring my voice and energy to this incredible Her Campus community!