
WHY AI FEELS “CREEPY”

Nora Yang Student Contributor, University of California - Berkeley
This article is written by a student writer from the Her Campus at UC Berkeley chapter and does not reflect the views of Her Campus.

In one of my Cognitive Science lectures, my professor pulled up a slide claiming that AI could predict our personalities better than our closest friends. Apparently, Facebook likes alone could reveal a person’s psychological traits with shocking accuracy: with just a handful of likes, an algorithm could judge your personality better than your friends could, and with 300, it could surpass even your spouse.

The lecture hall fell silent. We like to believe that our personalities are something intimate and best understood by those who know us deeply, but here was evidence that AI, trained on our digital footprints, might’ve figured us out better than we know ourselves.

This realization stuck with me, and I started wondering why it felt so unsettling. AI isn’t inherently evil. It’s just a tool designed to analyze data and make predictions. Yet there’s something about AI — especially when it becomes eerily human-like or intrudes into private aspects of our lives — that triggers a discomfort in us. From its ability to predict our actions to the ways it subtly influences our decisions, AI feels powerful and unnerving. But why?

The Facebook study my professor mentioned was a glimpse into how AI is reshaping the way we see ourselves. AI-powered algorithms build complex psychological profiles based on seemingly insignificant behaviors. They know what kinds of ads will influence us most, what political messages we’re susceptible to, and what time of day we’re most likely to make an impulse purchase. This can feel invasive because it challenges our belief in personal agency. We like to think we make decisions based on conscious thought that is free from external manipulation, but AI’s ability to anticipate our choices suggests otherwise. If an algorithm can predict what we’ll buy, who we’ll date, or even how we’ll vote, how much of our behavior is truly our own?

This discomfort also ties into a psychological principle known as the illusion of free will. Research suggests that many of our decisions are influenced by unconscious factors long before we become aware of them. AI, with its vast data-processing power, simply picks up on these patterns faster than we do.

It’s not just that AI knows us. Rather, it knows us in ways that make us question how much control we actually have. And beyond personality prediction, AI triggers a more existential fear of losing control over technology.

Humans have always been uneasy about machines that operate independently of us (think pop culture narratives like The Terminator and its sequels). It’s one thing to program a tool to perform a task, but it’s another when the tool starts making decisions on its own. AI-driven automation has already taken over aspects of finance, healthcare, and even creative industries. While AI today may not be self-aware, its ability to outperform humans in specialized tasks like playing chess or diagnosing disease makes us wonder what will happen if it one day surpasses us in areas we consider uniquely human.

We fear AI not because it’s evil, but because we don’t fully understand how it works. And what we don’t understand, we struggle to control.

Social media platforms, powered by AI-driven recommendation algorithms, shape what we see, what we think about, and even what we believe. Every scroll, every ad, and every piece of news is tailored to keep us engaged. AI doesn’t just predict behavior; it can modify it. Platforms use engagement data to surface the content most likely to maximize our time online, which sometimes leads us down strange rabbit holes. Because AI knows what makes us click, whether it’s outrage, controversy, or an emotional trigger, it can exploit our psychological weaknesses without our ever registering the influence. Unlike traditional advertising, where we recognize the persuasion, AI’s influence is often subtle. We think we’re making independent choices when, in reality, we’re being nudged in certain directions. No one likes being manipulated, and when AI becomes the invisible force guiding our thoughts and actions, it challenges our sense of autonomy in ways we’re only beginning to understand.

Ironically, AI also feels creepy when it performs too perfectly. Humans are imperfect by nature. We make mistakes, second-guess ourselves, and express emotions in ways that are sometimes irrational or inconsistent. AI, on the other hand, operates with logic and efficiency that can feel alienating.

For example, AI-written novels or art may follow perfect narrative structures or look technically flawless, but they lack emotional depth and the warmth of human understanding. We connect with others because of their imperfections. When AI lacks those quirks, it feels less relatable even if it’s functionally superior. 

We want AI to be smart, but we also want it to be human. When it’s too machine-like, it feels alien. When it’s too human-like, it enters a strange realm of distrust. Either way, it unsettles us.

Our unease with AI isn’t necessarily a sign that it’s evil. Rather, it’s a sign that we’re at the edge of something new, something that challenges our long-held assumptions about what it means to be a person.

Nora Yang

UC Berkeley '28

Nora Yang is a second-year student at the University of California, Berkeley studying Economics and Cognitive Science. She was born in Southern California, spent a few years in British Columbia, Canada, and now calls the Bay Area home. In her free time, you can find her painting self-portraits, building dioramas, or cafe hopping.