Anna Schultz / Her Campus
Casper Libero | Culture

Seeing Is No Longer Believing: AI and the Crisis of Visual Trust

Isabella Scaramucci Student Contributor, Casper Libero University
This article is written by a student writer from the Her Campus at Casper Libero chapter and does not reflect the views of Her Campus.

It should be easy to trust what one of our five senses shows us. For centuries, images and videos have been considered evidence of events, serving as irrefutable proof of reality. That assumption, however, has always been challenged, whether by staging scenes before photographs are taken or by editing images with software such as Photoshop. Nowadays, artificial intelligence (AI) tools go further: they can not only modify images but also create virtually any image from scratch.

Before AI, manipulating videos and pictures was a manual process that required qualified professionals and powerful computers; now it has become far more accessible. AI makes it easy to generate content for a wide range of purposes, from harmless memes meant to make people laugh to images capable of damaging reputations and lives.

Given how realistic those generated images seem, it is now hard to trust content we see online at first glance. In this context, seeing is no longer enough to believe. People have started questioning not only suspicious images but also genuine ones, often assuming that controversial photos or videos may have been created by AI.

But is this growing skepticism actually protecting users from misinformation? And are the “AI-generated” labels often seen on social media enough to prevent the harm caused by fake content?

It starts with memes

When artificial intelligence was first developed, chatbots could only follow prompts or answer questions through text. At the time, this not only sounded synthetic but also made it obvious that there was a robot behind the words.

But as the years have gone by, AI has become so realistic that it can even feel scary. It now generates not only text but also many other forms of media: documents, voice recordings, pictures, and videos. Worse still, it has become possible to hide that the content was generated by AI at all.

Building on that, people combine creativity with the near-unlimited possibilities of AI-generated imagery, including deepfakes, to create memes on any topic for any audience, and these are easily produced and shared online. This ease may benefit content creators, and even looks innocent, but exposure to deepfakes can change how viewers consume information, especially when it comes to credibility and the perception of reality.

[Embedded TikTok from @longliveai: a hyper-realistic clip generated with OpenAI's Sora 2 showing a cat performing drums, guitar, and piano outside its owner's door.]

According to the study “An Agenda for Studying Credibility Perceptions of Visual Misinformation”, written by Yilang Peng, Yingdan Lu, and Cuihua Shen, users process online visual jokes in a different way. The authors explain that, since memes are well known for having no direct connection to reality, being visibly manipulated, the audience no longer expects them to be true.

Because of memes’ viral nature, they spread easily online, so more people come into contact with this kind of content, which they know is fake. Through this exposure, users get used to being entertained by manipulated images and stop being as critical while consuming online jokes.

As a result of this lack of criticism, followers absorb false information effortlessly, not because it is logical, but because it has become familiar through repetition. According to the study “The Disruptive Impact Deepfake Can Have on Society”, written by Everton Ferreira Silva, Lucca de Barros Casalenovo, and Cássio Aparecido do Amaral, this leads to what is called the illusory truth effect.

This effect, as noted by the study, makes a lie look like the truth simply because it is easier to process, even when people knew from the start that it was just a joke. The visual message sticks in memory and influences future judgment: even though a person knows the content was fake, their brain later remembers it as if it were real, making this effect a strong driver of misinformation.

Moreover, the issue is amplified when users begin not only to believe repeated false content, but also to question everything they see online, including what is actually real. In this scenario, even authentic images can be dismissed as false — a phenomenon known as The Liar’s Dividend, in which malicious actors dismiss real evidence by claiming it was manipulated.

Therefore, people get used to the idea that images do not always represent reality, and images lose their automatic status of truth, making it harder for people to distinguish what is real from what is fake.

An internet without limits

The crisis of online trust first appeared in images. Deepfakes have not just made users more skeptical; they have also created confusion. Online, seeing was no longer enough, since AI-generated content blurred the line between real and fake, making visual evidence unreliable.

But this dilemma didn’t remain purely visual for long. With the emergence of interactive conversational systems like ChatGPT or Grok, especially when integrated with synthetic media, the issue reached a deeper level. From there, the crisis of online trust is no longer only about what we see; it also includes what we are told.

Unlike manipulated images, conversational AI doesn’t only imitate appearance: it imitates understanding. Through dialogue, it generates personalized responses for each user, adapting to context and questions. The result is not just information, but the impression of knowledge and authority.

As the report “Impacts of Adversarial Use of Generative AI on Homeland Security” by the U.S. Department of Homeland Security’s Science and Technology Directorate notes, artificial media is no longer limited to a single format: AI has turned multimodal.

The technology is now capable of generating text, images, voice and video simultaneously — as well as editing already existing pictures online. So when these forms are combined with interaction, the system simulates a presence. And because trust online is built through interaction, AI can begin to play a role traditionally held by human communicators. 

However, this becomes a problem when AI starts creating personalized content based on interaction and requests, which in theory can cover any topic the user prefers. The shift became visible in 2025 with Grok, the AI of X (formerly Twitter).

As soon as an image-generation tool was released within the chatbot, according to a news report by CNN Brasil, more than 3 million pieces of sexualized content were generated without the victims’ consent, 23,000 of which involved children. The report noted that the platform restricted the generation and editing tools to subscribers only, a measure considered insufficient.

The sexualized pictures may have been removed from the system, but the damage to the victims’ reputations will continue to affect their lives. The incident revealed something larger: online reality is no longer mediated only by professionals or platforms; it is now produced by systems capable of conversing, adapting, and persuading.

When the market catches on

As visual content online lost its credibility as evidence, and artificial intelligence evolved into a multimodal system designed to gain users’ trust, persuasion also adapted. Businesses and scammers quickly discovered a new possibility: instead of advertising to a person, they could digitally create one.

According to the study “Navigating the age of AI influence: A systematic literature review of trust, engagement, efficacy and ethical concerns of virtual influencers in social media”, written by Isuru Udayangani Hewapathirana and Nipuni Perera, computer-generated virtual influencers are designed not only to resemble real people, but also to interact with audiences in a human-like way.

With good storytelling, a consistent presence, a unique personality, and apparent alignment with the brand, they gain the audience’s trust, leading followers to act on their recommendations and buy products. One example of a virtual influencer is Magalu, from the Brazilian department store Magazine Luiza.

The issue begins when the company doesn’t explicitly explain to users that these virtual influencers were created by AI. If this is not clearly communicated, the study states that followers might feel tricked after finding out the person they had emotionally connected to was actually a virtual invention, which was just a part of a marketing system.

Moreover, virtual influencers are not just a tool companies use to get closer to their customers: they can also be appropriated by online fraudsters to promote financial scams. By targeting users with near-perfect visual and auditory simulations, scammers can either create someone “from scratch” or, as the report by the U.S. Department of Homeland Security’s Science and Technology Directorate states, edit or swap a video’s visual characteristics, as well as use reenactment.

For this kind of fraud, the online advertising market, especially on platforms like TikTok, is an ideal environment: scammers use AI content-generation tools to create advertisements intended to deceive consumers. In these ads, a virtual influencer promotes and recommends products at below-market prices; users buy them but never actually receive them.

In another method, according to the report, a video of an authority figure, usually a celebrity or a reputable specialist, is modified to make them appear a certain way or perform a specific action, in this case with the purpose of deceiving online consumers.

This is what happened, for example, with Martin Lewis, a renowned economic journalist in the United Kingdom and owner of the MoneySavingExpert.com website. In 2023, online fraudsters manipulated a video of Lewis to make him appear as if he was promoting a fake investment scheme, which caused one of his followers to lose over £250,000.

The report highlights that these kinds of frauds turn out to be highly convincing, not only because whoever has their image manipulated has the audience’s trust, but also because deepfakes can easily trigger trust by imitating humans, since we are more sensitive to others’ faces and voices. Once a simulation becomes realistic enough, the brain accepts it as real.

In this environment of potential deception and scams, not only those who were tricked but other users too start doubting the veracity of every online advertisement. This, in the words of Hewapathirana and Perera, erodes customers’ trust, making users increasingly skeptical of online sellers, including legitimate ones, and harming the market, since users become less willing to buy online.

Who gets to be believed?

In what appears to be a war between human-generated content and synthetic media, who gets to be believed? 

Traditional media now sits at the center of a growing credibility crisis shaped by artificial intelligence. Images and videos were long crucial to its reliability, treated as a direct connection with reality, at least until the rise of AI. Now, visual content can just as easily mean fabrication as fact.

However, as the confusion between truth and falsehood threatens the media’s credibility, misinformation has begun to circulate based on this type of content. The content is fake, yet it tends to appear more reliable and persuasive, and this is not random.

The study “Digital Deception in Geopolitical Crises: The Role of AI-Generated Fake News in the US–Iran Conflict”, written by Dr. Siraj Ahmed Soomro, Fozia Soomro, Dr. Dastar Ali Chandio and Bakhtawar Jatoi, notes that when AI operates across multiple formats, combining generated pictures with automated text, the false information produced becomes more convincing.

This effect comes, as the document states, from the impression of reliability created by an artificial consistency. When AI generates text, it includes personalized arguments without hesitation, contradiction, or uncertainty, unlike human communication, which makes it feel more believable. Consequently, content alone is not enough to gain trust; it must also interact with the user.

Furthermore, the study argues that despite digital platforms’ efforts to detect and restrict the spread of misinformation by implementing moderation policies and tools to identify synthetic content, its circulation is still difficult to contain. Like memes, fake news often relies on shocking information that helps it go viral, at a speed traditional media can’t fully keep up with.

As a result, platforms often respond only after fake news has already circulated widely, labeling or removing the content. According to the report, this containment is further weakened because such material can also bypass traditional media fact-checking. 

An illustration of this effect happened in 2023, when an image of Pope Francis wearing a puffer jacket went viral. The picture was generated by AI, but many people still believed it was real for hours. Another example emerged that same year, when pictures of Donald Trump being arrested spread online, even though the creator explicitly stated they were fictional. 

At the moment, we can’t trust what our eyes show us anymore. And not trusting what we see online has become more than just a consequence of the blur between truth and synthetic content — being skeptical is a way to avoid being tricked. 

Despite our efforts to determine what is actually reliable, what must be doubted is no longer only the information itself, but the perception and the artificial minds behind it all.

————————————————————————————————————

The article above was edited by Alyah Gomes.

Liked this type of content? Check Her Campus Cásper Líbero for more!

Isabella Scaramucci

Casper Libero '28

From Teresina, Piauí, to São Paulo: journalism student at Cásper Líbero.