
The Destructive Nature of AI

Emma Catarino Student Contributor, University of Toronto Mississauga
This article is written by a student writer from the Her Campus at U Toronto - Mississauga chapter and does not reflect the views of Her Campus.

I went to my doctor for a routine prescription refill for a common outpatient condition. She did all the normal tests – checked my blood pressure, asked if I felt any different since my last refill. All was well. Then, my doctor logged onto the desktop in the patient exam room, pulled up a version of ChatGPT for doctors, and asked the AI if I needed any more checks before refilling my prescription.

She did this all right in front of me. I was appalled. 

This situation poses three major issues. First of all, I know doctors are human. I would’ve understood completely if my doctor needed to consult her textbook or ask a colleague for help remembering the correct treatment steps. However, using generative AI as a replacement for that judgment can be dangerous and irresponsible. If the AI and my doctor had both forgotten some crucial check before refilling my prescription, I could’ve become sick or injured.

Secondly, it can breed mistrust. In a world where misinformation is becoming increasingly commonplace and doctor’s visits are becoming more unaffordable, situations like this may lead patients to reconsider visiting their doctors’ offices when in need. After all, why spend money and time on an appointment when ChatGPT, the more accessible option, could’ve helped you at home?

The last issue I have with this encounter is more personal. As someone who is studying biology in undergrad, doing extracurriculars, studying for the MCAT, and otherwise working really hard to get into medical school, this visit was deeply disheartening. It felt like an invalidation of all the work my peers and I put into this career path. It made me wonder why we expend all this effort to get into medical school (a notoriously difficult task) when doctors are relying on chatbots anyway.

This entire situation highlights just some reasons why relying heavily on generative AI can be dangerous. 

One way generative AI has been used lately is in a politically charged manner. Supporters of certain political parties can use AI to generate pictures, audio, and text that make their side look better or the other side look worse. For example, President Trump tried to make the MAGA/Republican party look more dominant with an AI-generated TikTok he released. The video featured him piloting a fighter jet, dropping poop on attendees of the “No Kings” rally, which was held to oppose his leadership.

AI also breeds mistrust in the validity of the text, audio, and visuals used by different political parties. Doug Ford released an ad using lines from a speech Ronald Reagan made in 1987 to oppose Trump’s implementation of high foreign tariffs. Many of Trump’s supporters accused Ford of using AI-generated audio in the ad, claiming that a fellow Republican president like Reagan would never oppose tariffs similar to the ones Trump has been advocating for.

Another example of the mistrust AI creates relates to the Epstein files. With approximately half of the files now released, there are many explicit images and audio recordings from them available on the internet. Generative AI tools like Sora, OpenAI’s video-generation model, make it possible that some of the evidence in the files has been falsified or enhanced to serve a certain agenda. They also make it easier to dismiss the validity of genuine evidence with claims that it was created by AI. Either possibility could mean that those who committed the atrocious acts documented in the files go unpunished, and that the victims are denied justice.

This leads to another disgusting way in which generative AI has been used, across multiple platforms. Grok, the AI associated with X (formerly Twitter), has been the most talked about as of late. The generative tool has been used to create sexually explicit images of people, particularly female celebrities and children. This is clearly a deplorable way to use the technology, as it creates victims and violates consent. Large companies like X need to take action against requests like these to ensure the safety of those who are vulnerable to this type of exploitation, especially since AI can produce images of this nature from any photo of anybody ever posted online.

Though this is less talked about, AI is terrible for the environment. In order for AI to operate, massive servers take up large areas of land. They also require large amounts of cold, fresh water for cooling, various energy sources for power, and a means of disposing of the waste they create. The number of data centers housing these servers has grown from roughly 500,000 to approximately 8 million in the last decade, highlighting the challenge of finding an environmentally viable way to address these issues. It is estimated that AI uses six times more water per year than all of Denmark. This is abhorrent considering how many people worldwide have unreliable access to clean drinking water, and it raises concerns about the sustainability of AI as worries about freshwater availability grow.

[Water bottle illustration – Original Illustration Designed in Canva for Her Campus Media]

Last but not least, a key reason to be critical of generative AI is its impact on the media literacy and critical thinking skills of the general public. This issue has been most prevalent in schools, in the form of academic dishonesty. Students have increasingly been using AI in a number of ways, from finding ideas and sources, to creating notes from lecture materials, to writing entire papers. Likewise, some teachers have been using AI to grade assessments. Both instances show a lack of effort put into the work being created. If we rely too heavily on technology to do everything for us, we lose the ability to think critically, to take in information and form our own thoughts and opinions.

In Fahrenheit 451 by Ray Bradbury, a novel about censorship, there is a scene describing how books were condensed into summaries that stripped them of their nuance. The public grew so accustomed to the shorter versions that they stopped reading the full novels, which made it easier for the government to later burn books on a large scale, destroying information and creativity. This is eerily similar to how people are getting used to having AI do everything for them. A lack of education and critical thinking skills is also correlated with increased susceptibility to misinformation and propaganda, which could be dangerous.

All of these issues are worsened by large companies’ complete willingness to incorporate AI into their brands. Beyond X having Grok, Google, Instagram, YouTube, and others have all adopted AI fairly quickly. Instagram now has a feature where videos recorded in foreign languages are translated and dubbed with AI voices, going so far as to alter the mouth movements of the content creator to make it appear as if they speak English. YouTube has a feature that summarizes its comment sections, entirely removing the nuance and individualism its users bring. Websites like Canva and Wix use AI to generate text and images, taking work away from actual artists and removing the user’s need for creativity and self-expression. Most abhorrently, Disney recently announced a feature where AI can be used to create art and videos, completely undermining a century of work put into the brand by its animators.

I believe it would be highly beneficial for us to stop using AI on an individual level, both for the environment and for the preservation of our own intelligence. It would be even better if the large corporations incorporating AI at these massive scales would stop and recognize the overall harm it causes. However, this is unlikely to happen, as AI is currently a highly profitable product, even if it ends up contributing to our downfall.

Emma Catarino

U Toronto - Mississauga '27

I'm a Biology Specialist student with a minor in Creative Writing at UTM. I want to be a doctor one day, but I also really love writing. I'm passionate about defending women's rights and advocating for feminism, which is why I'm so excited to be a part of this community!