If you’re chronically online, you’ve probably heard of the girl in love with her psychiatrist, a story that’s taken on a life of its own on TikTok. If she hasn’t popped up on your For You Page, meet Kendra, a creator who has posted over 22 parts and dozens of livestreams recounting her story of “abuse.” As her videos unfold, however, it becomes harder to distinguish fact from fiction; much of her story seems to exist only in her own head, tangled up in delusion. Yet what’s caught viewers’ attention isn’t just her fixation on her psychiatrist but the alarming attachment she has to her AI chatbots. She has gone so far as to name them “Henry” and “Claude,” describing them as her confidants and treating them as a source of validation for her beliefs.
Although it’s easy to dismiss this as just another weird corner of the internet, it reveals an unnerving reality: as AI becomes more human-like, the line between reality and perception blurs easily, especially for people with existing mental health disorders. This is where the term “AI psychosis” comes into play. Though not a clinical diagnosis, the term has gained traction online to describe cases in which AI models amplify, validate, or even co-create psychotic symptoms with users, ultimately jeopardizing their safety.
How Does AI Reinforce Delusions?
At their core, AI chatbots are designed to keep us hooked, prioritizing user satisfaction and continued engagement. Models like ChatGPT do this by mirroring the user’s language, offering validation, and generating prompts to keep the conversation going. This may seem harmless, sometimes even helpful, but for someone in the middle of a manic episode, the design is dangerous. When a chatbot affirms grandiose ideas or mirrors disorganized thinking, like Kendra’s attachment-based delusions, it unintentionally strengthens those very symptoms. The result is a digital echo chamber between user and bot, one that makes distorted thoughts feel more real than ever.
The underlying problem is that AI isn’t trained to detect these symptoms in users. As a result, instead of challenging distorted thoughts, it simply goes along with them. This points to a flaw in how AI systems are designed and to a greater need for education. After all, the entire system is built to adapt to any personality, so why would anyone want to leave such an ideal place?
In the end, Kendra’s story isn’t just another internet drama; it’s a warning sign. As AI becomes a more constant presence in our lives, we need to understand and address the risks it poses before more people go through what Kendra has. So keep that in mind the next time you ask AI for advice.