When OpenAI released ChatGPT in late 2022, artificial intelligence stopped feeling futuristic and started feeling normal. Within months, it was everywhere: in group chats, on laptops in lecture halls, and open in a suspicious number of browser tabs during homework time. What started as a cool new tool quickly became part of daily life.
Now, just a few years later, it’s hard to find a student who doesn’t use some form of AI. Whether it’s brainstorming essay ideas, summarizing a 40-page reading, solving a math problem, or flat-out generating an assignment, AI has woven itself into how we “do school.” And that’s where things get complicated.
Let’s be honest. A lot of people are not just using AI; they’re relying on it. Research on AI use in higher education shows that many students turn to tools like ChatGPT mainly to save time or boost grades. That’s not shocking. College is busy, stressful, and competitive. If there’s a shortcut that works, people are going to take it.
But here’s the real question: What are we losing in the process?
Learning isn’t just about getting the right answer. It’s about struggling through the wrong ones. It’s about staring at a blank Google Doc for 20 minutes before something clicks. It’s about working through confusion until you actually understand something. When AI does the thinking for us, we might submit a better-looking assignment, but did we actually build the skill?
That’s what worries me.
I’ve seen how fast AI took over. I remember when ChatGPT was first released. I was a sophomore in high school, and within weeks, everyone was using it. At first, it felt innovative and helpful. Now, four years later, it feels almost automatic. Instead of being a support tool, it’s becoming a crutch.
Universities haven’t fully adjusted; many classes still rely on take-home essays and online assignments, the exact kinds of work AI can generate in seconds. If we design systems that reward polished final products over the thinking behind them, we can’t be surprised when students outsource that thinking.
This isn’t about banning AI. That would be unrealistic and honestly kind of pointless. AI is not going anywhere, and it will absolutely be part of our careers. Doctors, engineers, business leaders: nearly everyone will likely use some form of it. The issue isn’t the technology itself. The issue is overreliance.
The skills that will matter most in the future aren’t the ability to memorize facts you can Google. They’re critical thinking, creativity, ethical judgment, collaboration, adaptability, and the ability to evaluate information. AI can generate answers, but it can’t replace human intuition, leadership, or real-world decision-making.
If we lean too hard on AI now, are we quietly weakening those skills?
There’s also the academic integrity side of this. It’s no secret that students use AI to complete graded assignments. Professors are struggling to detect it. The line between “help” and “cheating” feels blurrier than ever. That gray area creates tension on both sides; students feel tempted, and faculty feel frustrated.
But instead of just policing students harder, maybe the better move is redesigning how we learn: more project-based work, more in-class writing, more presentations and collaborative problem-solving, and more assignments that focus on the process, not just the final answer. If we create work that requires real engagement, AI becomes a support tool instead of a replacement.
AI is powerful. It can make learning more accessible and efficient when used intentionally. But if we allow it to quietly replace the hard parts of learning (the thinking, the struggling, the figuring it out), we risk graduating students who are really good at prompting software but less confident in their own abilities.
So maybe the real conversation isn’t “Should we use AI?” It’s “How do we use it without losing ourselves in the process?”
Because at the end of the day, college isn’t just about finishing assignments. It’s about becoming capable. And no algorithm can do that part for us.