Young People Chat with AI Chatbots About Mental Health Issues



In the era of the loneliness epidemic and of persistent taboos around seeking mental health services, artificial intelligence is taking on a new role: that of emotional interlocutor. Chatbots such as Woebot, Wysa, and Replika are promoted as mental health support tools, especially for young people facing stress and isolation. But can a digital voice provide real care, or do we risk trading genuine mental health support for the convenience of an online simulation?

Research findings

Two important studies published in 2025 shed light on this question from different but complementary perspectives. The first, entitled “Digital Mental Health: Role of Artificial Intelligence in Psychotherapy” by Sandhya Bhatt, was published in the journal Annals of Neurosciences (Volume 32, Issue 2). It is a systematic review of 13 studies evaluating the effectiveness of AI-based mental health interventions, mainly chatbots and online programs, for treating disorders such as anxiety, depression, and stress. Bhatt concludes that many of these tools can lead to significant improvement in mental state, especially in mild to moderate cases, and that they are of particular interest because of their affordability and the absence of stigma.

Bhatt’s study shows that AI applications, often based on cognitive behavioral therapy (CBT), have already begun to deliver measurable benefits. Tools such as Woebot and Treadwill are associated with reductions in anxiety and depression symptoms, while others have shown success in managing emotional distress, improving sleep, or enhancing self-knowledge. Most importantly, many of them operate without human intervention, a feature that makes them particularly attractive to young users seeking immediate, judgment-free support.

But what happens when these tools are used not as supplements to treatment but as substitutes for human relationships? That is the focus of the second study, entitled “Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review”, published in JMIR Mental Health (Volume 12). Led by Mehrdad Rahsepar Meadi and his research team, the study examines 101 scientific sources and maps the ethical dilemmas that arise when conversational artificial intelligence (CAI) is used in therapeutic contexts.

The findings of this research are alarming. Users often treat AI conversational agents, sometimes subconsciously, as emotional entities. In reality, however, these tools operate within narrow limits. They cannot evaluate emotional complexity, respond adequately to crises, or grasp the depth of a user’s inner experience. The study reports incidents where AI failed badly: one chatbot gave dangerous advice to people with eating disorders, while another reacted inappropriately to disclosures of trauma and abuse. The technology’s inability to handle self-harm tendencies or serious psychological crises is particularly troubling.

Critical issues

One issue is emotional dependency. AI interlocutors are always available, always understanding, and never push back. This can create strong bonds, especially in people who are isolated or vulnerable, many of whom are young. Over time, dependence on a relationship without real reciprocity can discourage social interaction, intensify isolation, and create the illusion of support without a real foundation.

Privacy is another area of concern. Users are often invited to share highly personal information without it being made clear how that information is used or stored. When commercial priorities come before user care, data protection is called into question.

At the heart of this debate is a cultural phenomenon. Young people today are experiencing a mental health crisis, with rising rates of anxiety, depression, and suicide. At the same time, they feel deeply disconnected socially. For many, especially those who lack access to traditional treatment or fear stigmatization, AI offers a palliative alternative: it is fast, available, and anonymous. The question is whether this convenience sacrifices something valuable: authentic human care.

Conclusion

Both studies reach a common and critical conclusion: artificial intelligence, though promising, must be treated as a supplement to human support, not a substitute for it. Bhatt stresses that these tools can be useful aids within the mental health system, especially for populations with limited access. But even the most effective applications work better when combined with human supervision. Rahsepar Meadi’s team goes further, calling for strict regulation, transparency, and ethical guidelines for the development and deployment of CAI in mental health. Without these foundations, we risk commodifying human connection and entrusting mental health care to systems that cannot respond to its complexity.

Main photo: Illustration by Victoria Hart/Guardian Design/Getty Images
