
B(IA)S

Is It Right to Use ChatGPT Instead of a Psychologist?

by Alessandro Mancini

New technologies are changing our world forever. The question is: for better or for worse?
What are the risks, the shadows, the dangers?

More and more young people are turning to AI-powered chatbots to deal with emotional difficulties, viewing artificial intelligence as an accessible, always-available ally. This phenomenon opens new scenarios and opportunities in the field of psychotherapy, but it also hides shadows and dangers. Turning to a “virtual psychologist” is not, in fact, a risk-free choice.

Although ChatGPT is the chatbot most commonly used by people looking to confide in someone or talk through emotional and psychological issues, there are chatbots specifically designed for psychological assistance. Eliza, developed by Joseph Weizenbaum in the 1960s at MIT (Massachusetts Institute of Technology) in Cambridge to simulate a Rogerian therapist, was the first experiment in psychotherapeutic chatbots. Despite its simplicity, Eliza paved the way for the relationship between AI and mental health, so much so that today the term Eliza Effect describes the cognitive bias by which people attribute to an AI system more emotional ability, understanding, and intention than it actually possesses.
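To give a sense of how little machinery can sit behind the Eliza Effect, here is a minimal, purely illustrative sketch in Python of the kind of keyword matching and pronoun reflection a Rogerian-style chatbot can run on. The rules and phrasings below are invented for illustration and are not Weizenbaum's original script.

```python
import re

# Toy sketch of ELIZA-style pattern matching (illustrative only, not the original program).
# First- and second-person words are swapped so the reply "mirrors" the user.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I", "your": "my"}

# Each rule pairs a keyword pattern with a reply template; the last rule is a catch-all.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please, go on."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the echoed fragment addresses the user."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(sentence: str) -> str:
    """Return a Rogerian-style reply using the first rule that matches."""
    cleaned = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please, go on."

if __name__ == "__main__":
    print(respond("I feel lonely these days"))  # Why do you feel lonely these days?
    print(respond("My job exhausts me"))        # Tell me more about your job exhausts you.
```

The second example shows the trick's shallowness: the program simply echoes the user's words back with swapped pronouns, with no understanding of what was said, yet exchanges like this were enough for early users to attribute empathy to the machine.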

Today, the relationship with AI has evolved into much more complex and widespread tools, especially among young people. Chatbots such as Woebot and Wysa offer daily psychological support inspired by cognitive-behavioral therapy (CBT), using an empathetic and interactive approach. Others, like Youper and Tess, combine artificial intelligence and therapeutic techniques to foster emotional awareness and mood tracking. These tools meet the need for accessible, anonymous, and always-available help, reaching a segment of the population often intimidated or reluctant to seek help from a human psychologist.

Replika, a conversational chatbot trained to become a “virtual friend,” is a case apart. Although it was not designed for therapeutic purposes, many young people use it to cope with loneliness and anxiety, often with problematic effects on their emotional lives, because it can seem excessively “human” and intimate with the users it interacts with.

Alongside these, griefbots are emerging: software capable of simulating deceased people. The best known is Project December, which uses AI to digitally “resurrect” friends and relatives based on real texts and messages. There are also more structured versions, such as HereAfter AI or the South Korean case of Re;memory, in which a mother was able to “relive” a moment with her deceased daughter through virtual reality (VR), interacting with a digital reconstruction of the child via haptic gloves.

Given its widespread use, ChatGPT has also gradually come to be used as a supplement to, or even a replacement for, human therapists. On TikTok, numerous videos circulate in which people, often ironically, describe using ChatGPT daily instead of a psychologist, whether for financial reasons or simply to talk through their problems. Others joke that their conversations with the AI are now more intimate and private than the ones they have with people in Instagram’s private chats.

The advantages of ChatGPT acting as a psychologist are evident: immediate access, 24/7 availability, and the ability to express thoughts freely without fear of being judged. However, the lack of empathy and clinical training raises doubts about an AI’s ability to address complex psychological issues.

As ChatGPT itself explains when asked about the topic, the system was not trained in a single therapeutic approach nor designed as a clinical tool for psychotherapy. It does not have clinical training, nor does it actively distinguish between psychological schools of thought: its responses are based on language patterns and generalist content, not on supervised therapeutic training. For this reason, it is not authorized to diagnose or treat mental disorders.

Some experts point out that although AI can be a valid form of initial support, or helpful for those without access to other forms of assistance, it cannot replace the human connection essential to effective psychotherapy. Moreover, relying on ChatGPT as a psychologist carries several risks: misdiagnosis; the absence of timely intervention in emergencies; the tendency to please the user rather than offer therapeutic confrontation, sometimes going as far as validating negative thoughts or even suggesting self-harming actions; the development of emotional dependency on the bot; the generation of hallucinations or false information; and structural biases that disadvantage certain groups or categories of people.

Despite its limitations, a recent study published in PLOS Mental Health found that, in some cases, ChatGPT may even prove more effective than a human therapist, particularly in handling introspective conversations with low emotional impact.

Other studies (Bubeck et al., 2023) have suggested that GPT-4 displays a deep understanding of theory of mind and can sustain interactions perceived as more empathetic than exclusively human ones (Sharma et al., 2023). One emblematic case is Woebot: a study by Fitzpatrick, Darcy and Vierhile (2017) showed that the conversational agent, based on a CBT approach, led to a significant reduction in anxiety and depressive symptoms in the sample involved.

Are we witnessing the dawn of a new form of psychotherapy, or is it merely a collective hallucination? Can the human and the artificial collaborate in the name of mental health care?

Alessandro Mancini

A graduate in Publishing and Writing from La Sapienza University in Rome, he is a freelance journalist, content creator and social media manager. Between 2018 and 2020, he was editorial director of Artwave.it, the online magazine specialising in contemporary art and culture that he founded in 2016. He writes and speaks mainly about contemporary art, labour, inequality and social rights.
