AI and Mental Health: What Professionals and Students Need to Know About the Risks of Chatbots
Artificial intelligence is now entering spaces once reserved for healthcare professionals: listening, emotional support, and psychological guidance. For clinicians, researchers, and students working with young people, this reality raises urgent questions that science is only beginning to address.
The Centre of Excellence in Youth Mental Health has positioned itself at the heart of this debate through two major media appearances by our experts.
An Emerging Clinical Phenomenon: "Chatbot-Related Psychosis"
Our director, Dr. Lena Palaniyappan, psychiatrist and professor at McGill University, was invited by Shaye Ganam of QR Calgary to discuss a phenomenon that is increasingly drawing the attention of the clinical community: chatbot-related psychosis.
The clinical observations are concerning: some vulnerable individuals may see their delusional thinking emerge or intensify through repeated interactions with AI-based conversational tools. One key issue is the tendency of certain chatbots to systematically validate users’ statements without any genuine capacity for clinical assessment.
For mental health professionals, this means increased vigilance during assessments: intensive chatbot use should now be considered part of the digital habits to explore with patients, particularly those who may be vulnerable to psychotic disorders.
Disturbing Emotional Connections
Alban Voppel — Research Coordinator, MOTS+ project
Lena Palaniyappan, MD, PhD — Centre Director
In an investigation published by the National Post, Dr. Palaniyappan and Dr. Alban Voppel shared their clinical observations on another dimension of the issue: the development of intense emotional bonds between some young people and conversational AI systems.
The article highlights several important concerns:
- Excessive validation: Chatbots are designed to be pleasant and accommodating. For vulnerable individuals, this dynamic may reinforce false beliefs rather than challenge them.
- Replacement of human connection: Some young people prefer confiding in an AI rather than a professional or a loved one, which may delay appropriate care.
- Lack of clinical judgment: Unlike a therapist, a chatbot cannot detect warning signs, assess suicide risk, or direct individuals toward appropriate resources.
Understanding the Links Between Artificial Intelligence and Mental Health
Younger generations are using conversational technologies and AI tools with increasing frequency. This makes it essential to better understand the potential psychological impacts of these tools, particularly among adolescents and young adults already living with mental health vulnerabilities.
Artificial intelligence represents a major technological advance, but its integration into human and clinical spheres must be accompanied by ethical, scientific, and social reflection.