
‘Artificial Intelligence’ Is A Misleading Buzzword

Rose Terrill Student Contributor, University of Northern Colorado
This article is written by a student writer from the Her Campus at UNCO chapter and does not reflect the views of Her Campus.

(Content warning: This story mentions suicide.)

On September 2, OpenAI announced the rollout of parental controls for ChatGPT. The new feature is, in part, a response to a lawsuit filed against the company by a family who say ChatGPT encouraged their teenage son to take his own life. In the blog post announcing the rollout, the leading AI company seems to suggest that it should be allowed to learn about this “new and evolving” tech at the same pace as everyone else, and that the amount of ‘good’ AI has done for emotionally struggling people is something to celebrate.

That’s barely scratching the surface of the problem with our widespread acceptance of AI (don’t get me started on its environmental impact, the doors it’s opened for misinformation, or the “recreation” of the Diddy trial), and I’m rapidly losing respect for people who engage uncritically with AI-generated content and platforms. Maybe it was all fun and games back when people were using DALL-E Mini to make Slenderman play basketball, but what’s “fun” now that we’ve seen so much of the harm it’s done?

Ethical baggage of AI aside, I think there’s a misconception about what exactly ChatGPT and other generative AI bots are capable of. Fixing that starts with remembering that these machines aren’t “intelligent” at all.

What Does AI Actually Mean?

Boiled all the way down, “AI” describes any machine capable of tasks that used to be exclusive to the human brain: things like making decisions, playing chess, or recommending ER to a friend when they tell you they liked The Pitt.

ChatGPT is AI, and it’s capable of a lot more than AIs like Siri or Netflix’s algorithm. But understanding what it is is a lot less important than understanding what it isn’t.

For me, the term “AI” evokes the advanced superbots of pop culture, the C-3POs of Star Wars and the HAL 9000s of the Space Odyssey films. These superbots are usually antagonistic and prone to turning against humanity. With that image in mind, the rise of AI sounds pretty concerning. Could ChatGPT take over the world in the way science fiction promises AI can?

Well, no. Pop culture robots are almost always examples of artificial general intelligence (AGI), which is the super advanced type of AI that’s equal to or better than humans at everything. AGI is actual artificial intelligence, the kind of machine that thinks and knows things in the way humans do, the kind of machine you, like me, probably thought of the first time you heard about ChatGPT. That’s not at all what ChatGPT is.

So What Is ChatGPT?

ChatGPT, Claude, Gemini, and all the other bots coming out of the AI boom are large language models, or LLMs. Both LLMs and AGI fall under the same AI umbrella, but grouping the two together is misleading. It’s why you might feel inclined to say please and thank you to ChatGPT, in the hope that it will spare you when it one day takes over. That’s not going to happen.

LLMs are a type of generative AI (GenAI or GAI), which, though its name is annoyingly similar to artificial general intelligence, is completely different. When you prompt an LLM, it doesn’t think about your question; it predicts words one by one, each time picking a word that’s statistically likely to come next, until it has assembled a response that’s statistically likely to satisfy your prompt. These machines are barely more advanced than your phone when it predicts that the word you want after “love” is “you.”
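If you’re curious what that looks like under the hood, here’s a tiny sketch in Python. This is my own toy illustration, not anything from OpenAI: it counts which words tend to follow which in a small text sample, then generates a “response” by repeatedly grabbing a statistically likely next word. Real LLMs use massive neural networks trained on huge swaths of the internet instead of a little counting table, but the pull-one-word-at-a-time loop is the same basic move.

```python
# Toy next-word predictor (a bigram model), just to illustrate the idea.
# This is a simplified sketch, not how ChatGPT is actually implemented.
import random
from collections import Counter, defaultdict

sample_text = "i love you . i love pizza . you love pizza ."
words = sample_text.split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    """Pull words one by one, weighted by how often each followed the last."""
    output = [start]
    for _ in range(length):
        counts = follows.get(output[-1])
        if not counts:  # dead end: nothing ever followed this word
            break
        choices, weights = zip(*counts.items())
        output.append(random.choices(choices, weights=weights)[0])
    return " ".join(output)

print(generate("i"))  # e.g. "i love pizza . i love you"
```

The model never “knows” what pizza is; it only knows that “pizza” often came after “love” in its training text. Scale that counting trick up with neural networks and mountains of training data, and you get something like ChatGPT.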

AGI doesn’t exist yet. (Although OpenAI’s and Meta’s stated missions include creating AGI, it’s not a mission they’ve accomplished, and here’s hoping it never happens.) So calling these LLMs “AI” is misleading, because the public’s understanding of artificial intelligence is much more advanced than what we actually have on our hands.

But calling ChatGPT an LLM instead of a superpowered AI bot wouldn’t sell well, and selling is the actual mission of these AI companies: making a profit. It’s not in their best interest to tell you what their bot can’t do. They want you to think it can do everything, and that if it can’t yet, it will someday.

Words Have Power

ChatGPT is just a machine. Sure, it’s capable of spouting information a lot faster than humans can, but you usually learn before you can walk that being fast doesn’t make you better. Calling predictive models like ChatGPT “artificial intelligence” and waxing poetic about their future capabilities makes them sound like they can do a lot more than they actually can.

I see AI as an existential threat to our ability to think for ourselves. But one of the ways we can fight back is to take the power of words, of marketing, away from the companies wielding it. Slapping an “AI” stamp on an algorithm doesn’t make it artificial intelligence. To strip away some of that marketing power, we need to call it what it is: an algorithm.

ChatGPT and other LLMs like it don’t impress me. They’re no more intelligent than my messaging app or Google’s search engine (though Google is now considerably dumber thanks to the annoying “AI Overviews” feature). And no, you don’t have to be nice to ChatGPT because you’re afraid of the inevitable turn against humanity and want to be spared. It can’t even be mean to you unless you tell it to be.

Rose Terrill is the Editor-in-Chief and contributing writer at the Her Campus at University of Northern Colorado chapter.

Beyond Her Campus, Rose has written for The Crucible, UNC’s literary magazine, and also serves as part of the editing team. She is currently a senior at the University of Northern Colorado majoring in English: Writing, Editing, and Publishing, with minors in Spanish and Digital Marketing.

In her free time, Rose enjoys sewing, watching long-form YouTube videos, and working on her many unfinished novels. She loves participating in jigsaw puzzle competitions and has won National Novel Writing Month every year since 2020.