
“Hey, Chat, Am I Going Out Enough?”

Amelia Wusterbarth Student Contributor, University of Florida
This article is written by a student writer from the Her Campus at UFL chapter and does not reflect the views of Her Campus.

The use of ChatGPT and other forms of artificial intelligence (AI) has gotten out of hand. Not only do many people use AI platforms far too often, but they are also beginning to use them for ridiculous purposes. This article was actually inspired by my friend, who asked ChatGPT if he was going out enough. Not to hate on him, but if we are going to use AI at all, asking it for life advice and opinions on personal preferences is not what it should be for. This is not only because it hampers our ability to think critically and make our own decisions, but also because of its impacts on society and the environment.

What AI Shouldn’t Be Used For

A significant issue with how we interact with AI is its use for ridiculous purposes. The most common example is using it to create shortcuts for yourself. Having it summarize an article or complete an assignment you procrastinated on doesn't do you any favors. On the other hand, using artificial intelligence to help you understand a concept when a teacher or peer can't is a more reasonable use. If you use artificial intelligence models, you shouldn't do so in a way that trains you to be lazy or inhibits your learning; school shouldn't be about getting an A on your assignment, but about learning the content of your courses. And while the prospect of stunting your academic growth is serious, there are ways in which these platforms can be even more harmful.

Asking AI for life advice is another use that should not be normalized. These systems are trained to confirm and validate the opinions they think you have. They tell you what they think you want to hear. If you want realistic and honest answers, asking humans is a better way of getting them. Moreover, an artificial intelligence model only knows bits and pieces about you and your life, and it tries to give you advice based on that. If you instead talk to people who know you and care about your well-being, the answers you get will be more tailored to you. In general, conversations about life are much more meaningful when you are having them with other people who bring their own life experiences.

Even worse, some people have begun to seek emotional support from these systems rather than from friends and family. Letting artificial intelligence be your only form of consolation, rather than thinking through things yourself or seeking the support of friends and family, can be detrimental. These systems cannot feel sympathy or empathy, or take emotions into account the way a human can. Many AI systems are also designed to remember your conversations, so when you tell one about your personal issues, it might rehash those things in an undesirable way later on. This could leave you stuck in negative emotions and moments rather than able to move on.

On top of all these issues, many people use artificial intelligence for these purposes when they are fully capable of handling them on their own. Once people start using it and realize how effortless it is, it can be difficult to stop. This means people are taking less responsibility for, and putting less work into, solving problems themselves. Do this enough, and it prevents you from practicing critical thinking skills.

Further Implications of Exponential AI Use

As we know, AI has significant implications: it affects the environment, raises concerns about privacy and could contribute to further declining literacy rates. Most people who use AI understand it's harmful… or do they? I believe there is a discrepancy between the number of people who use it knowing it has some sort of negative effects and the number of people who limit their use because they fully understand the implications.

Even as someone who tries to limit my AI use because of its environmental impact, I didn't fully understand all the implications. While researching the subject, I stumbled across an MIT News article by Adam Zewe. The article explains how generative AI consumes substantially more power than typical computing workloads. Zewe explains that the data centers housing this software cannot be built sustainably, especially in the quantity needed to keep up with the systems' development. Furthermore, the freshwater used to cool the hardware can do serious damage, Zewe says. Not only does this consumption affect ecosystems that rely on freshwater, but it also affects people in areas suffering water shortages. AI is developed by some of the richest people in the world, yet it negatively impacts those who do not have the luxury of benefiting from it in any way. This isn't just about the environment and the plants and animals living in it; it's about humanity.

The use of artificial intelligence also has societal impacts. We are already suffering low rates of literacy throughout the nation. Many people use AI to find sources and summarize their contents, which means they read less on their own. And if they don't understand something the platform gives them, they simply ask it to reword the phrase. Overall, this reduces the amount of critical thinking and reading people do during research processes that are meant to be educational. Research has repeatedly found that reading improves literacy, so without this process, AI could affect literacy rates by doing the reading for people. The type of literacy most likely to decline is comprehension of factual information: kids will still understand the media brainrot they consume, but not the things they are meant to comprehend for school.

We should also be thinking about protecting our personal information. It is hard to know how much information our devices, and the systems within them, collect about us. Logging into an AI system already gives it information about you, and the more you talk to it, the more it knows. This gives companies like OpenAI access to that information. We should be more careful about which companies, and how many of them, we allow to have access to our personal data. Claude has deepened this concern, as you can now transfer your ChatGPT conversation history to Claude, giving a whole new company, Anthropic, your information.

Overall, I think it’s important to be aware of our AI usage. The environmental, social and privacy impacts are simply too great to justify using artificial intelligence in ways that don’t even benefit you in the long run. The only people hurt when you don’t use AI are the rich company owners and their rich investors, and trust me, they aren’t the ones struggling. If you are a current AI user, I’m not saying this to judge you or make you feel bad. But I challenge you to try your hardest not to use it, especially for things such as assignments you just don’t feel like doing, emotional support and life advice. Collectively choosing to do better can make a difference.

Amelia Wusterbarth is a freshman journalism major at the University of Florida. In her free time, she loves hanging out with friends, going to the beach, watching TV and exercising. She also loves all genres of books, movies, shows and songs. She is from Cape Coral, Florida, but is excited to see where the future takes her.