
When AI Becomes Unsettling

Lucy Dahl Student Contributor, Pennsylvania State University
This article is written by a student writer from the Her Campus at PSU chapter and does not reflect the views of Her Campus.

Artificial intelligence has moved from a futuristic idea in sci‑fi movies to something we interact with almost every day. It drafts emails, suggests shows to binge, polishes our photos and even helps brainstorm essays when we are staring at a blank screen at 2 a.m.

Tools from companies like OpenAI have become study lifesavers, while apps like TikTok and Instagram use AI to curate our feeds in ways we barely notice. AI can be amazing. It saves time, boosts productivity and makes life a little easier. But sometimes, it also gives us a strange, uneasy feeling.

Maybe you have felt it. You scroll past a digital influencer who looks perfect, with flawless skin, ideal lighting and natural expressions. But something about them feels off. Or you hear a voice in a video that sounds almost human — smooth and clear — but somehow lacks warmth. That feeling is called the “uncanny valley.”

The term was coined in 1970 by Japanese roboticist Masahiro Mori. He observed that as robots or digital characters become more humanlike, people tend to feel more comfortable with them, but only up to a point. When something is almost human but not quite, our brains react with discomfort instead. The closer something gets to looking real without fully succeeding, the creepier it feels.

Humans are wired to pick up subtle cues from each other. Tiny facial expressions, tone shifts and micro‑movements signal trust, empathy and emotion. When AI imitates these cues but misses even small details, such as a delayed blink, overly smooth speech or eyes that feel frozen, our brains notice. We feel the difference even if we cannot explain why.

This year, that uneasy feeling was on full display during the Super Bowl. Several ads leaned heavily on AI, but one commercial from Ring sparked real controversy. The ad promoted a feature that uses AI to scan neighborhood cameras to help find lost pets. On screen, it looked heartwarming: a little girl searches for her dog while neighbors help with their cameras. But viewers quickly noticed something unsettling.

The problem was not the missing dog; it was the implications of surveillance.

The ad showed how AI could scan footage across multiple cameras, raising questions about privacy and consent. If AI can locate a pet, could it also track people? Critics and privacy advocates argued that the commercial blurred the line between helpful technology and intrusive monitoring. Within days, Ring announced it was ending a controversial partnership with a surveillance company. Many observers saw the timing as a response to public concern.

This controversy highlights a larger tension with AI. On one hand, it is incredibly useful. College students rely on it to brainstorm essays, refine resumes or spark creative projects. On the other hand, the more humanlike and intelligent these systems become, the more they challenge our comfort levels and privacy expectations.

The uncanny valley is not just about creepy faces or imperfect voices — it’s about trust. When AI imitates human behaviors or emotions convincingly, but not perfectly, we feel a gap. Advertisements that seem friendly and clever can still leave us unsettled because, deep down, we know it’s all artificially generated. When combined with questions about data collection and privacy, the unease becomes even stronger.

Perhaps that discomfort is actually valuable. It reminds us to ask questions: How is AI using our data? How far is too far? What feels authentic and what doesn’t? As AI continues to appear in our classrooms, social feeds and even Super Bowl commercials, that uneasy feeling might be our most human reaction yet.

Hi, my name is Lucy Dahl, and I'm a sophomore at Penn State majoring in Public Relations. I love going to the beach and country concerts. My favorite area to write about is culture and current events.