There’s no denying that AI usage is on the rise, and products such as OpenAI’s ChatGPT are accessible to everyone, not just the techies. But with advancing technology comes a host of concerns.
Many people are now afraid that AI will take over their jobs. Block, the company behind services like Square and Cash App, recently laid off 4,000 workers and cited AI as the reason.
False evidence and misinformation are also major concerns due to the rise of deepfakes, which are instances of realistic AI-generated media, such as photos and videos.
Additionally, bias, privacy, and environmental impacts of AI are major points that have been brought up in the conversation.
While opinions on AI differ, many people accept these downsides, finding that chatbots such as ChatGPT, Claude, and Gemini make their lives easier. A recent study shows that 86% of college students regularly use AI to help with assignments.
But the fact that these advanced chatbots are just a few clicks away for most people fuels a lesser-known problem: the danger AI poses to mental health.
The Implications of AI and Mental Health
Using AI as a therapist
One way AI can directly endanger mental health is when people attempt to use a chatbot as a therapist. In testing, most models showed similar bias against misunderstood mental health conditions like schizophrenia, which plays into existing stigma.
Chatbots have also failed to identify crises where a user was a danger to themselves, unknowingly fueling a self-destructive mindset.
It is clear that, despite how advanced chatbots are, we are still at a point where human judgment is required to provide quality mental health advice. However, because chatbots are so widely accessible, they may attract people who don’t have access to therapy or who seek a free, convenient alternative.
The use of AI chatbots for personal matters has also been linked to anxiety and depression.
Even though human therapists also have biases, they can make a conscious effort to understand a client’s culture, religion, or other factors that shape their experience to provide better care.
However, AI bots trained on specific data sets cannot adapt to diverse clients as effectively. This means their recommendations will be less personalized and, therefore, less effective.
Instead, AI may show outright discrimination and prejudice towards certain demographics, depending on the nature of the training data.
AI and Confirmation Bias
Even if someone attends therapy with a human or doesn’t actively use AI as a therapeutic tool, the behaviors AI models tend towards can still be harmful. Many chatbots amplify confirmation bias.
Confirmation bias is a phenomenon where people favor information that supports their current beliefs and ideas. If someone expresses harmful or unhealthy ideas to an AI chatbot, the bot may reinforce them by either highlighting possible benefits or omitting risks.
In a mental health context, this could manifest as an AI model encouraging unhealthy coping mechanisms or always telling you you’re completely in the right during an interpersonal conflict.
If you mention a friendship issue or feeling a bit stressed to a bot, this can spiral into a harmful cycle where the bot keeps validating your views.
While AI bots are becoming very common and can help with certain aspects of life, it is important to be aware of their impacts on mental health, an often-overlooked issue.
AI can be used to automate tedious work and make things more efficient, but we should be mindful of how we use it, especially for personal matters.
What do you use AI for? Let us know @HerCampusSJSU!