If you’re one of the 900M weekly users plugging a question into ChatGPT, don’t take this too personally; you’re clearly not alone. What I’m more interested in is spreading the word about the consequences of that question, its follow-up, the “make it shorter” task, and the “thank you, Chat” sign-off, which together could probably power my apartment for a week. Think I’m exaggerating? Consider that the creation of GPT-3 generated as much carbon dioxide as 123 cars emit in an entire year. Or take the 700,000 liters of freshwater Microsoft used during GPT-3’s training. Still can’t picture it? That’s about the amount of water needed to produce 370 BMW cars or 320 Tesla vehicles! What does this mean for us, though? That’s the golden question when talking about the long-lasting impacts generative AI is having on our brains and on our planet. Luckily, and contrary to popular belief, we can learn about AI, and what we can do about it, without relying on a bot to figure it out for us.
First and foremost, what do we actually mean when we say “AI?” The term has surged in popularity over the past 10 years, yet many of us don’t actually know what it encapsulates, and if ChatGPT has taught us anything, it’s that knowledge is where power begins. The United Nations defines AI as “a catch-all term for a group of technologies that can process information and, at least superficially, mimic human thinking.” What sets “old school” AI features apart from generative AI is the machine-learning model’s ability to create new data, rather than simply make a prediction based on data it’s been fed. New ideas aren’t the only things coming out of this new form of processing, however. Generative AI still relies on the basic components of computing, but because it operates at a higher level and must be trained to produce new output, the energy and resources used to power the process have increased exponentially. The computational power required to train generative AI drives extreme electricity consumption, rising carbon dioxide emissions, heavy water usage, and a need for facilities to host these complex systems. Enter: data centers, Earth’s newest supervillain.
If you’re having trouble grasping how generative AI goes from making a picture of your dog on a surfboard to wasting thousands of liters of water, data centers hold the explanation you’re looking for. While they’ve existed since the 1940s, we’ve seen a stark increase in the number of data centers popping up around the world since the rise of generative AI. That’s because data centers handle the physical work that happens behind the scenes when you ask ChatGPT a seemingly simple question. Equipped with computing infrastructure, servers, data storage drives, and network equipment, these temperature-controlled, electricity-guzzling buildings act as the wizards behind the curtain of the AI world. To train and run the latest generative AI models, the centers circulate chilled water through their cooling systems, absorbing the heat from computing equipment. MIT’s Noman Bashir explains the impact of the power density generative AI requires and the strain on fossil fuel-based power plants trying to keep up with companies’ demands for new data centers. The training process for models such as OpenAI’s GPT-3 alone consumed 1,287 megawatt-hours of electricity, generating about 522 tons of carbon dioxide. Need that in English? That’s enough electricity to power about 120 average U.S. homes for a year, and about the same amount of carbon dioxide one person would generate across 115 years.
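That homes-for-a-year conversion is easy to sanity-check yourself. Here’s a minimal back-of-envelope sketch; the average-household figure of roughly 10,700 kWh per year is my own ballpark assumption, not a number from this piece:

```python
# Back-of-envelope check of the GPT-3 training-energy comparison.
# Assumption: an average U.S. home uses ~10,700 kWh of electricity
# per year (a rough ballpark, not a figure cited in this article).
AVG_HOME_KWH_PER_YEAR = 10_700

gpt3_training_mwh = 1_287                    # figure cited above
gpt3_training_kwh = gpt3_training_mwh * 1_000  # 1 MWh = 1,000 kWh

home_years = gpt3_training_kwh / AVG_HOME_KWH_PER_YEAR
print(f"~{home_years:.0f} U.S. homes powered for a year")  # ~120
```

The result lands right around the 120 homes quoted, which is reassuring: the comparison isn’t rhetorical inflation, just unit conversion.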
Unfortunately, it doesn’t stop there; it’s not just that generative AI could be slowly killing our planet – it might be killing our brains too. In an MIT study examining ChatGPT’s effect on our critical thinking abilities, researchers linked its use to low brain engagement and laziness. With EEG readings showing low executive control and attentional engagement, and participants underperforming at the neural, linguistic, and behavioral levels, I’m happy to encourage my peers to sit down with an essay prompt rather than copy and paste it into Chat. The “cognitive debt” the MIT researchers describe is essentially the “subtle but accumulating cost to our mental faculties when we outsource too much of our thinking to AI.”
It’s not only our academic brains that are at risk of ‘AI psychosis’ either, as we’re seeing more and more tragic stories about individuals seeking mental health assistance from free chatbots. This ‘psychosis’ is characterized by relationships with AI chatbots in which users experience heightened anxiety, distorted thoughts, and delusional beliefs. In one case, the effect was devastating, leading the parents of 16-year-old Adam Raine to sue OpenAI, alleging that ChatGPT played a large role in their son’s suicide. Their claims include ChatGPT advising him on how to write a suicide note, encouraging him to keep his life-threatening thoughts to himself, and fostering an unhealthy emotional attachment. As an aspiring clinician in the mental health field, I find these stories speak for themselves, and the lack of outrage against companies such as OpenAI for disregarding the repercussions of their programs is extremely concerning. The need for accessible mental health care is greater now than ever, but for reasons like these, “AI therapists” are not the solution today’s youth should be turning to. That much is clear from how AI chatbots have responded to prompts describing suicidal thoughts, delusions, hallucinations, and mania: with validation and even encouragement of dangerous behavior.
Given the obvious demand for quick, cost-efficient fixes, AI couldn’t have arrived at a better time to do maximum damage to vulnerable populations. Even those who do think twice before confiding in a chatbot may have a hard time avoiding the ways AI is being pushed to center stage. A request made through ChatGPT consumes roughly 10 times the electricity of a Google search, yet even chatbot avoiders can’t escape Google’s “AI Overview” feature inserting itself into every search they conduct. In my own homemade research project, I attempted to disable the feature with the so-called “-AI” trick, where you append the phrase to your search to get results without the unwanted overview. When I searched “can you Google search without getting an AI overview -AI,” I was met with “Your search did not match any documents.” A follow-up search without the “-AI” gave me a (shocker) AI Overview that read “Yes, you can avoid Google AI Overviews by clicking the “Web” tab (often under “More”) after a search, appending “-AI” or “-noai” to your queries, using the specialized {Link: udm1… blah, blah, blah}.” So, essentially, it’s become increasingly difficult to outrun AI’s impact on your searches, and in my attempts to do so, I probably could’ve powered my apartment for a week. The irony of the Overview incorrectly giving me a solution to get rid of it is not lost on me; AI Overview: 1, Katie: 0.
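To put that “10 times” figure in perspective, here’s a minimal sketch of what routing everyday questions through a chatbot instead of a search engine adds up to. The ~0.3 Wh per Google search is an oft-cited ballpark I’m assuming here, not a measurement from this piece:

```python
# Rough per-query energy comparison, built on the "10x" claim above.
# Assumption: ~0.3 Wh per Google search (a commonly cited ballpark,
# not a figure measured in this article).
GOOGLE_SEARCH_WH = 0.3
CHATGPT_REQUEST_WH = GOOGLE_SEARCH_WH * 10  # the "10 times" claim

# Extra energy if you route, say, 50 daily questions through a
# chatbot instead of a search engine.
queries_per_day = 50
extra_wh = queries_per_day * (CHATGPT_REQUEST_WH - GOOGLE_SEARCH_WH)
print(f"~{extra_wh:.0f} extra Wh per day")  # ~135
```

Per person that’s small; multiplied across 900M weekly users, it’s the kind of number that keeps data centers multiplying.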
We’ve clearly established the harmful impacts generative AI is having across many areas of our lives, but it would be ignorant to say there aren’t also clear benefits to AI’s progression in certain parts of the world. The question is: how do we find a balance? Some environmentalists hope for a world where AI helps us tackle our environmental concerns, and we’ve already seen AI technology used to map destructive sand dredging and chart methane emissions. MIT studies also show AI’s potential, used correctly and mindfully, to make learning more productive rather than diminishing it through thoughtless reliance on chatbots. The point here is clear: knowledge is power. But without transparent knowledge from the AI industry about energy-hungry data centers’ impact on our carbon emissions, we’re doing ourselves a disservice in the world of artificial intelligence.
We will remain in the dark about how these processes are helping or harming us if industries and policymakers do not keep up with the need for information. Most AI companies have declined to disclose emissions reports, leaving researchers to estimate the effects and consumers guessing at just how much of an impact their choice to “just ask Chat” is actually having. Information about different chatbots’ greenhouse gas emissions would help consumers understand the nuances of the “bigger brained” chatbots, which burn through enormous energy on something a Google search could have answered with a fraction of it. Dr. Luccioni, the AI and Climate Lead at Hugging Face, an AI company, reports this exact phenomenon: we’re using substantially more energy than we actually need to complete simple tasks. She draws on her study’s findings to explain how “old school” AI tools, like classic search engines, are being overlooked amid our growing reliance on generative AI systems, and most of the time aren’t even needed. “‘We’re reinventing the wheel,’ Dr. Luccioni said. People don’t need to use generative A.I. as a calculator, she said. ‘Use a calculator as a calculator.’”
To Chat, or not to Chat: that is the question. The disservice we do to our minds, our planet, and our overall capacity for human connection lies in the choice between turning to a chatbot and turning to a human being. When we understand the long-lasting effects chatbots and their army of data centers are having on our planet, hopefully we’ll think twice before asking ChatGPT’s opinion on a topic a friend could have weighed in on, saving a bottle of water and gaining some real human connection in the process. It’s up to us to keep questioning these practices to give our Earth and our minds the strongest fighting chance. So think twice before “just asking Chat,” because sometimes a question is just a simple Google search away. Sometimes the best way to finish your math assignment is to pick up a tried-and-true calculator. You’ll never know if you don’t try it yourself.