I remember staring in shock at the syllabus for one of my communication studies classes: “Use an artificial intelligence tool of your choice to write a citation for a legal case of your choice.” Was I really being told to use AI for an assignment? How did we get here?
When AI first entered the zeitgeist, I never expected it to become part of my life so quickly. But seemingly overnight, it was embedded in everything, from search engines to classrooms to workplaces. For much of Gen Z, AI has shifted from a novelty to a baseline requirement for life after graduation, with a Google Workspace study showing that 93% of Gen Z respondents use AI in their work tasks. At the same time, fears of automation have grown: a poll conducted by the job search platform Indeed found that 45% of Gen Z respondents believed their degree was a waste in the face of AI. Professors expect us to grasp these tools, and employers demand fluency.
Yet with that fluency comes something heavier: guilt.
As someone who works in tech policy, and one of the few Gen Zers in the space, I have to understand and use AI tools in order to push effectively to regulate them. But every time I investigate an AI tool, I can’t help feeling that I’m contributing to the harms it can perpetuate. It’s a lot to shoulder when you’re just trying to pass classes or land a job. For many Gen Zers, it feels impossible to succeed without AI — and equally impossible to ignore the consequences of using it. That uneasy tension is what I call “AI guilt.”
AI has the appeal of making tasks easier — sometimes too easy, in my opinion — but behind the screen are ethical, economic, and environmental tradeoffs. The technology is trained on massive amounts of data, much of it scraped from the internet without permission. That means the work of artists, writers, and everyday creators often gets repurposed without credit or compensation. AI has also become a flashpoint in labor battles, as industries from Hollywood to publishing fear the replacement or devaluation of human work. And then there’s the environmental toll: a query to ChatGPT uses significantly more energy than a Google search, and data centers are being built at a massive scale, with the costs often passed on to consumers.
To me, AI guilt is incredibly valid, but it doesn’t have to be paralyzing. Here’s my take on how to move forward: It’s not enough to use AI well, as we’re encouraged to do in classrooms, in the workplace, and in everyday life. We also have to understand it well enough to question it, and to push for better regulation of it.
That starts with redefining what “AI literacy” really means. Right now, being “AI literate” usually translates to knowing how to prompt an AI tool effectively, or how to have AI create something and make sense of the results. But real literacy goes deeper. It’s about asking: Where did this model’s training data come from? Who profits from it? Who gets left behind? What’s the environmental cost? Does it really need to be integrated into this product or service I’m using? Technical skills matter, but so does ethical awareness.
The next step is recognizing the power students and young professionals already have. We don’t necessarily have to reach CEOs or lawmakers to make a difference. On campus, we can push for transparency in how universities use AI tools and hold schools accountable for that use. At my alma mater, for example, a student asked for a tuition refund for a course in which the materials were allegedly AI-generated. In the workplace, we can ask tough questions about how companies deploy AI, whether they’re protecting data, and how they’re addressing bias. Even small acts, like choosing tools from companies that publish environmental reports or credit creators, and personal choices, like opting out of sharing your data for AI training, signal that we care about more than just the convenience AI brings.
And finally, we can use our collective voice. Policymakers are still scrambling to keep up with AI, which leaves plenty of room for young voices to be heard. Joining or advocating for on-campus AI advisory groups, creating original content for student media, or even just having conversations with professors and peers helps build the kind of culture where questions about AI’s ethics are normal, not niche.
AI guilt is real because the stakes are real, but guilt doesn’t have to mean silence. For Gen Z, learning AI feels like a non-negotiable, but to me, so does questioning it. We’re the first generation to grow up with AI shaping our education, our economy, and our environment — and that means we’re also the generation with the chance to shape AI in return. If we take our literacy beyond prompts and productivity, we can turn this paradox into a call for accountability, creativity, and sustainability.
We may not have chosen to inherit this technology — but we do get to decide what kind of future it builds.