Artificial intelligence (AI) has become an integral part of people’s everyday routines, from answering questions to holding daily conversations to making videos. Although most people have encountered AI, the average person is unlikely to recognize the different kinds of AI, and if they cannot recognize the differences, can they recognize AI’s harms?
AI is commonly divided into three categories: Narrow AI, General AI and Superintelligent AI. Narrow AI has the most specialized focus: it is trained on less material than the other two and behaves the least human-like. General AI is similar, but it covers a broader range of material, undergoes more training and acts more human-like. The final category, Superintelligent AI, goes further still: it is the most extensively trained and covers a wider variety of information than either of the others. Behind all three lies the same systematic training process, which involves data collection, algorithms and models.
One particular AI model, ChatGPT, became the most popular of those open for public use. ChatGPT rose to fame in 2022 on the idea that, instead of just running algorithms, an AI system could truly understand the human aspects of things, such as language and emotion. Essentially, this model is a humanized OpenAI system that can be used in different fields like entertainment, the military, content creation and business.
This drive to improve upon AI made it a future-driving force. It was pioneered by businessmen such as Sam Altman, Greg Brockman, Elon Musk, Ilya Sutskever, Wojciech Zaremba and John Schulman, although Musk has little, if any, involvement today.
As ChatGPT became more influential, so did its problems. A major one arose from the increased demand for AI, which put more strain on AI data centers and, in turn, on natural resources. For example, large amounts of freshwater are used to cool AI supercomputers instead of going to citizens who pay for clean water to drink, wash their dishes and bathe. Citizens are also being made to pay higher electric bills, at a time when many can barely afford groceries, because data centers draw more electricity from the power grid. Instead of the cost falling on the corporations, it is pushed onto regular consumers.
It is not just natural resources that are affected by AI, but also people’s daily lives. Those who live close to AI data centers report being unable to sleep regularly because of the bright lights and noise the centers produce at all hours. Beyond disturbing rest, data centers can disrupt animals’ natural behavior by altering the environment where they are built. This becomes a much bigger concern when data centers are intentionally placed in low-income, ethnically diverse neighborhoods, a pattern that resembles environmental racism. Dr. Benjamin F. Chavis Jr. defines this as the “intentional siting of polluting and waste facilities in communities primarily populated by African Americans, Latines, Indigenous People, Asian Americans and Pacific Islanders, migrant farmworkers, and low-income workers.”
Aside from the negatives, the data center boom does create opportunities for job growth. But is potential wealth more essential than the health of the planet we live on, our neighbors and our dignity? Does this limited possibility outweigh the mental and physical cost to communities affected by unregulated data centers’ pollution, drainage of natural resources and capitalistic byproducts? The next question average citizens should ask is this: as things stand, could ChatGPT have a place in society without causing harm? If so, how might we better regulate it?
The next time you ask ChatGPT about something, first consider: is this really necessary?