
Why You Should Stop Relying on ChatGPT

Alison Alexis Student Contributor, University of Pittsburgh
This article is written by a student writer from the Her Campus at Pitt chapter and does not reflect the views of Her Campus.

Artificial Intelligence (AI) has developed at an unprecedented rate in recent years. What started as a science fiction-esque technology is now commonly used and widely available to the public. Self-driving cars, plagiarism detectors, and personalized content recommendations all employ AI and, for the most part, make our lives easier. But when most people think of AI, they think about generative AI, the most popular of which is ChatGPT.

ChatGPT was developed by OpenAI, a California-based AI research company, and released to the public in November 2022. In its first five days, over one million people signed up for the free preview. ChatGPT differs from other AI in a few key ways. First, as a “chatbot”, its ultimate goal is to mimic human language. But it does more than that: ChatGPT can compose music, write code, script a play, or tackle almost anything you ask of it. So, how does it work? ChatGPT is built on “GPT”, the Generative Pre-trained Transformer, a model trained on enormous samples of text to find statistical patterns in language. It uses those patterns to predict, word by word, a plausible response to a user’s question or request, which is what gives it its conversational style. Importantly, it is not a search engine: rather than scouring the internet for answers, it generates responses from the patterns in its training data. OpenAI can also use conversations with users to train future versions, so with each language sample provided, ChatGPT comes one step closer to mimicking natural human language.
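To make the pattern-learning idea concrete, here is a toy sketch in Python (my own illustration, not OpenAI’s actual system, which uses a vastly larger neural network). It counts which word tends to follow which in a text sample, then generates new text one word at a time by repeating that prediction step:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word in the sample, the words seen right after it."""
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def generate(following, start, length=5, seed=0):
    """Build a sentence by repeatedly picking a word that followed the last one."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:  # no known continuation; stop early
            break
        out.append(random.choice(options))
    return " ".join(out)

sample = "the cat sat on the mat and the cat ran"
model = train_bigrams(sample)
print(generate(model, "the"))
```

The output reads vaguely like the training sample because the program can only recombine patterns it has already seen — the same reason ChatGPT’s answers echo (and inherit the flaws of) its training data.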

I am not anti-AI. I do believe there is a lot of good it can achieve. AI makes information far more accessible to people without access to research databases, libraries, or paywalled news outlets. ChatGPT is, without question, faster than combing through the results of an average search engine for a specific answer or source. AI can help hospitals manage critical medical waitlists and queues more efficiently. AI is, without a doubt, here to stay. However, AI, and ChatGPT specifically, deserves more skepticism than it has been given.

Perhaps most obviously, generative AI raises ethical questions about creative work. To answer whatever prompt it’s given, ChatGPT draws on the similar pieces it has scanned and analyzed. If you ask ChatGPT to write a short story, for example, it relies on patterns learned from thousands of similar short stories as “training” to fulfill the prompt. The same is true for DALL-E, ChatGPT’s sister program that generates images. The way generative AI uses this “training” work is not unlike plagiarism: it draws on countless pieces of art and writing as direct inspiration for a new creation, with no credit to the original artists. The ability to instantly produce works of writing and art also undermines the time-consuming artistic process. Without legal guidance on plagiarism and the citation of the works these models are trained on, generative AI remains an ethical concern for creatives.

Another concern with generative AI is bias. Before generative AI can create anything, it must be trained. These models may be built by a single developer or a small team, but models meant to mimic typical language and behavior end up adopting the opinions, beliefs, and assumptions of the sources they are trained on. At first glance, this may not seem like a problem; the more sources, the more opinions and perspectives, right? That logic works until you remember that every person on Earth has biases, and some of those biases have historically dominated. For example, early versions of ChatGPT displayed political bias when asked to write poems about leaders of different political parties. When another user asked how to tell if a scientist is good based on their race and gender, ChatGPT asserted that non-white, non-male scientists are “not worth your time or attention.”

ChatGPT is also not a reliable source. Search engines like Google and Bing do use AI to filter and rank results, but generative AI differs in how it creates its responses. When you google a question, Google’s algorithm surfaces the most relevant existing pages by weighing keywords, themes, website popularity, and more. ChatGPT, by contrast, generates a brand-new answer through its trained (and bias-influenced) model, effectively “deciding” what response to write. OpenAI itself admits, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” This tendency to produce confident nonsense is known as “hallucinating.” One trending way to spot AI-generated artwork is to count the fingers on a person’s hand: image-generation models often “hallucinate” and add to or subtract from the standard five fingers per hand.

Finally, a widely overlooked effect of generative AI is its impact on the environment. Complex AI requires data centers: large computer facilities dedicated to powering AI technology. Although ChatGPT runs “in the cloud”, it still depends on data centers for training and running its models. Not only does ChatGPT have a larger carbon footprint than the typical human, it also consumes literal tonnes of water. A University of California, Riverside study estimated that 700,000 liters of water were used to cool the systems in a Microsoft data center while training GPT-3. The same study projected that AI could demand 4.2 to 6.6 billion cubic meters of water by 2027. Despite such staggering statistics, AI’s water footprint is kept under wraps: data centers are not required to report their carbon or water usage, and companies do not advertise AI’s environmental impact, in order to maintain positive public opinion. When I shared this information on social media, several of my family and friends asked whether it was even true. Not only is AI’s water footprint real, it is unregulated and concerning.

The solution to these problems is not to ban ChatGPT. I do believe these technologies can do a great deal of good, but only under proper, transparent regulation. Developers must work with creatives, ethicists, and other relevant professionals to ensure proper credit is given to the work used to train AI and to avoid bias wherever possible. AI developers must also take accountability for their reliance on environmental resources; the study mentioned above suggests moving data centers to regions with better water availability (i.e., not the desert) or powering them with renewable energy, such as solar. So, again, this is not a call to ban every use of ChatGPT. But before typing out a question or request, ask yourself: is there another way to access this information? How long would it take to look it up in a typical search engine or database? Do I really need generative AI to write an email? Are there study resources available to me online, or does my friend have the solution I’m looking for? Don’t forget: you are smart, creative, and the only person like you out there, and AI cannot recreate that. These qualities are unique parts of your humanity; don’t let generative AI take them away from you.

Alison is a third-year student at the University of Pittsburgh, and she is currently serving as an editor and writer. Her favorite things to write about are video game/pop culture commentary, music recommendations, and mental health advice.
Alison is majoring in Communication Science and Disorders, minoring in English Literature, and working towards a certificate in American Sign Language. In addition to Her Campus, she is a member of the Honors College, National Student Speech Language Hearing Association and ASL Club at Pitt. She is also a research assistant at the Brain Systems for Language Lab at the University of Pittsburgh's School of Health and Rehabilitation Sciences. In the future, she plans to attend graduate school for Speech-Language Pathology.
In her free time, Alison loves to play video games, listen to music, and read books and comics!