
The Double Edge of AI in University

Kaitlin Gruehl Student Contributor, Carleton University
This article is written by a student writer from the Her Campus at Carleton chapter and does not reflect the views of Her Campus.

I will preface this by stating that, although backed by research, much of the following information is experience-based. Further, any possible “benefit” of artificial intelligence should only be taken advantage of within the context of your university’s guidelines. Toleration and authorization of AI vary from school to school, class to class, and assignment to assignment. Always refer to your school’s and professors’ precepts first.

The use of artificial intelligence is prevalent in universities and is likely here to stay. 

Generative AI, in simple terms, is a form of artificial intelligence trained on existing content. Drawing on that content, generative AI tools such as ChatGPT or Gemini produce new pieces of work.

The most recent data from the Pan-Canadian Report on Digital Learning comes from 2024. 

Of the report’s more than 600 participants, 85 per cent believe AI use will soon become a regular part of education. Forty-one per cent of educators who responded said generative AI is used in students’ learning activities, up from just 12 per cent when the same question was asked in 2023.

This data makes clear that generative AI is not only being used as a tool, but that its user base is also growing.

Many students may find this beneficial. The benefits are immediate, after all.

Creating individual study plans, explaining terms, and analyzing texts are all common uses for programs like ChatGPT. Professors or teaching assistants may also put these systems to use to create lesson plans or grade assignments. 

Users can save time and energy by having these systems complete tasks. This is an ideal benefit in university, with time management being a major stressor. 

Some of the issues with generative AI in education include detection challenges, academic integrity concerns, and regulatory oversight. Schools can create codes of conduct and set guidelines, but whether a student chooses to follow them is an individual gamble. 

Assignments written entirely by AI can be obvious standouts of dishonesty and plagiarism. However, when a student uses AI more carefully, tailoring its responses, the results become increasingly difficult to detect. Universities and graders lack the technology to keep up.

Biased outputs and misinformation are also a concern. 

Generative AI creates new content, which is always based on a set of input data. Depending on that input, whether supplied by the user or by other sources, the AI will form biases that reflect its sources.

The last issue I will mention is, in my opinion, the most pressing. While AI can still be used honestly and creatively as a tool, it does the critical thinking for the user. Whether it is brainstorming, writing, or reading, it is doing something the user is no longer doing.

A 2024 research article in Smart Learning Environments found that habitual reliance on AI can diminish one’s motivation and ability to think and analyze independently. There is a risk of dependency and a suffocation of creativity.

Although my concerns and opinions on generative AI are obvious, I am curious how academia will flex and adapt to these technologies. Written policies restricting AI hold their authority only briefly before the technology outpaces them, so continual acclimation is necessary.

Kaitlin Gruehl

Carleton '26

I am a journalism student at Carleton University. My main topics of interest are science and environmental journalism, but I also love the creativity that comes with covering topics like lifestyle, music, and wellness.