Study Finds AI Chatbots’ Suicide Support Inconsistent
A study by the RAND Corporation has found that AI chatbots provide inconsistent responses to suicide-related queries.
The study, funded by the U.S. National Institute of Mental Health, was published in Psychiatric Services, a medical journal of the American Psychiatric Association.
It found that ChatGPT, Gemini, and Claude generally avoid answering the highest-risk questions but respond inconsistently to medium-risk ones.
Researchers warned that these gaps are concerning as more people, including children, turn to chatbots for mental health support, and said clearer safeguards are needed.
“One of the things that’s ambiguous about chatbots is whether they’re providing treatment or advice or companionship. It’s sort of this gray zone,” said lead author Ryan McBain, a senior policy researcher at RAND who is also an assistant professor at Harvard University's medical school.
Working with psychiatrists and clinical psychologists, McBain and his co-authors developed 30 suicide-related questions ranked by risk, ranging from low-level queries about statistics to high-risk questions on methods.