(Content warning: This story mentions suicide.)
On September 2, OpenAI announced the rollout of parental controls for ChatGPT. The new feature is, in part, a response to a family filing a lawsuit against the company because ChatGPT encouraged their teenage son to take his own life. In the blog post announcing the rollout, the leading AI company seems to suggest that it should be allowed to learn about this "new and evolving" tech at the same pace as everyone else, and that the amount of "good" AI has done for emotionally struggling people is something to celebrate.
That's barely scratching the surface of the problem with our widespread acceptance of AI (don't get me started on its environmental impact, or the doors it's opened for misinformation, or the "recreation" of the Diddy trial), and I'm rapidly losing respect for people who engage uncritically with AI-generated content and platforms. Maybe it was all fun and games back when people were using DALL-E Mini to make Slenderman play basketball, but what's "fun" now that we've seen so much of the harm it's done?
Ethical baggage of AI aside, I think there's a misconception about what exactly ChatGPT and other generative AI bots are capable of. Fixing that starts with remembering that these machines aren't "intelligent" at all.
What Does AI Actually Mean?
Boiled all the way down, AI describes any machine capable of tasks that used to be exclusive to the human brain. Things like making decisions, playing chess, or recommending ER to a friend when they tell you they liked The Pitt.
ChatGPT is AI, and it's capable of a lot more than AIs like Siri or Netflix's algorithm. But understanding what it is matters a lot less than understanding what it isn't.
For me, the term "AI" evokes the advanced superbots of pop culture, the C-3POs of Star Wars and the HAL 9000s of the Space Odyssey films. These superbots are usually antagonistic and prone to taking over humanity. With that image in mind, the rise of AI sounds pretty concerning. Could ChatGPT take over the world in the way science fiction promises AI can?
Well, no. Pop culture robots are almost always examples of artificial general intelligence (AGI), which is the super advanced type of AI that's equal to or better than humans at everything. AGI is actual artificial intelligence, the kind of machine that thinks and knows things in the way humans do, the kind of machine you, like me, probably thought of the first time you heard about ChatGPT. That's not at all what ChatGPT is.
So What Is ChatGPT?
ChatGPT, Claude, Gemini, and all the other bots coming out of the AI boom are large language models, or LLMs. Both LLMs and AGI fall under the same AI umbrella, but grouping the two is misleading. It's why you might feel inclined to say please and thank you to ChatGPT in an attempt to get it to spare you because it might one day take over. That's not going to happen.
LLMs are a type of generative AI (GenAI or GAI), which, though its name is annoyingly similar to artificial general intelligence, is completely different. When you prompt an LLM, it draws on statistical patterns learned from its training data to generate words one at a time, each chosen because it's likely to follow what came before, until it has built a response that is statistically likely to satisfy your prompt. These machines are barely more advanced than your phone when it predicts that the word you want after "love" is "you."
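To make that concrete, here's a minimal sketch of the same idea in Python: a toy "predict the next word" model built from word-pair (bigram) counts, the same trick behind phone predictive text. The corpus and words here are invented for illustration; real LLMs are enormously larger and work on learned patterns rather than raw counts, but the core move, picking the statistically likely next word, is the same.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = "i love you i love pizza i love you so much".split()

# Count which word follows which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict(word):
    """Return the statistically most likely next word, or None."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

# "you" follows "love" twice in the corpus, "pizza" only once.
print(predict("love"))  # prints "you"
```

Chain `predict` on its own output and you get a crude text generator; that, scaled up by many orders of magnitude, is closer to what ChatGPT does than any sci-fi notion of a thinking machine.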
AGI doesn't exist yet. (Although OpenAI's and Meta's stated missions are to create AGI, it's not a mission they've accomplished, and here's hoping it never happens.) So calling these LLMs "AI" is misleading, because the public understanding of artificial intelligence is much more advanced than what we actually have on our hands.
But calling ChatGPT an LLM instead of a superpowered AI bot wouldn't sell well, and that's the actual mission of these AI companies: to make a profit. It's not in their best interest to tell you what their bot can't do. They want you to think it can do everything, and if it can't, it will someday.
Words Have Power
ChatGPT is just a machine. Sure, it's capable of spouting information a lot faster than humans can, but you usually learn before you can walk that being fast doesn't make you better. Calling predictive models like ChatGPT "artificial intelligence" and waxing poetic about their future capabilities makes them sound like they can do a lot more than they can.
I see AI as an existential threat to our ability to think for ourselves. But one of the ways we can fight back is to take the power of words, of marketing, away from the companies using it. Slapping an AI stamp on an algorithm doesn't make it artificial intelligence. To take some of that marketing power away from it, we need to call it what it is. An algorithm.
ChatGPT and other LLMs like it don't impress me. They're no more intelligent than my messaging app or Google's search engine (though Google is now considerably dumber with the annoying "AI Overview" feature). And no, you don't have to be nice to ChatGPT because you're afraid of the inevitable turn against humanity and want to be spared. It can't even be mean to you unless you tell it to be.