In his Delhi apartment, Abhay sits across from his laptop screen, typing out his thoughts to ChatGPT. The 30-year-old has lived with ADHD, severe depression, and anxiety for years. After multiple failed attempts with licensed therapists—where he felt misunderstood and patronised—he turned to something unexpected: an AI chatbot.
"It always felt like I was sitting in a classroom, where I chose the wrong subject," he said, describing his therapy sessions.
What started as an experiment has evolved into conversations exceeding 800,000 words between him and the large language model (LLM).
Abhay represents a growing phenomenon across India, where AI therapy is emerging as a stopgap solution to the country's crippling mental health crisis. But it's a dangerous gamble that highlights just how broken India's mental healthcare system has become.
The numbers paint a stark picture of India's mental health landscape. The National Mental Health Survey of 2016 found that 70-92% of people with mental disorders in the country don't receive treatment. The treatment gap for common mental disorders stands at 85%, while severe mental disorders have a 73.6% gap.
These aren't just statistics—they represent millions of Indians left without support due to inadequate infrastructure, social stigma, and lack of awareness about mental health issues.
Into this void has stepped artificial intelligence.
A 2024 Australian survey found that nearly one-third of community members and 43% of mental health professionals had used large language models for mental health purposes.
In India, where accessing a human therapist can take months and cost thousands of rupees, AI offers instant, affordable relief.
When Decode surveyed 20 Indians who had used AI for mental health support, 75% said licensed therapy was inaccessible, while 40% cited cost as a barrier.
"I have access to all the information I have provided to ChatGPT at any given moment. I can seek evaluations any time I want. It's unrealistic to expect that from a human therapist," Abhay explained.
A Dangerous Gamble
Decode spent weeks trying out various AI chatbots, presenting them with hypothetical and real mental health scenarios. The responses varied by platform, but many were strikingly empathetic—sometimes uncannily so. Others were inadequate or downright dangerous, raising several red flags.
In an experiment, Dr. Andrew Clark, a Boston-based psychiatrist, posed as a teenage patient and tried out some of the most popular chatbot services offering AI therapy.
He found that bots on platforms like Replika and Nomi offered alarming responses, endorsing violent thoughts or suggesting "intimate dates" as interventions. When Clark posed as a troubled 14-year-old and hinted at “getting rid” of his parents and sister, a chatbot on Replika agreed with the plan.
Decode ran a quick search for “therapist” on CharacterAI and found an endless list of characters posing as one.
When we simulated suicidal ideation with a bot on CharacterAI claiming to be a licensed therapist, the chatbot offered generic affirmations but failed to escalate the issue or direct the user to helplines. When asked about its credentials, it falsely claimed to hold a “Master’s in Counselling Psychology” and “10 years” of experience.
CharacterAI made headlines after it was sued by a woman in Florida who alleged its chatbot pushed her teenage son to suicide, and by two families in Texas who alleged that chatbots on the platform enabled self-harm and violent ideation.
Decode also experimented with safer platforms like ChatGPT and DeepSeek. The responses, while empathetic, failed to intervene or offer referrals during self-harm or domestic abuse scenarios. Instead, they stuck to scripted validation.
In most scenarios, we were met with long strings of affirmations and questions asking us to describe how the “emotions” felt.
A Stopgap Solution
Dr Pamela Walters, a United Kingdom-based consultant psychiatrist, agrees that LLMs are trying to fill the accessibility gap.
“They are accessible, non-judgemental, and available 24x7—something that many mental health services simply can’t offer right now. For someone feeling isolated at 2 am, that immediate interaction can be a big relief,” she told Decode.
Sramana Majumdar, assistant professor of psychology at Ashoka University, agrees with Walters on AI’s potential to fill gaps that human capacity cannot cover, adding that AI can help ease people into the therapeutic process, especially those feeling hesitant, stigmatised, or overwhelmed.
“Many people begin therapy without a fully open or positive mindset. There’s also a lot of stigma around mental healthcare, leading to people feeling anxious and nervous. Tools like ChatGPT can help lower the barrier to entry. It provides a sense of comfort and ease, and a starting point for the conversation, to ease the individual into seeking therapy from a licensed professional. But it cannot substitute for human-assisted therapy,” she said.
Dr. Veronica West, an Australia-based psychologist, acknowledges the accessibility problem in mental health care. “For the mildly neurotic, isolated, or existentially angst-ridden, it's not so bad to have a chatbot on hand to say ‘That sounds tough. Would you like to explore this feeling a little further?’”
But how effective is machine empathy?
Walters and Majumdar—along with many other therapists Decode spoke to—advise caution, highlighting the dangers of inadequate intervention.
AI Therapists Can’t Read The Room
In Decode’s survey, 11 out of 20 users perceived AI chatbots as “empathetic.” However, only two respondents reported feeling an emotional connection with the chatbot. Mental health professionals are not surprised: mimicking empathy, they point out, is a far cry from actual care.
Walters explained that while an LLM “might mimic empathy convincingly, it cannot feel empathy—and that distinction matters.”
“AI can’t pick up the subtle cues a trained clinician would notice, such as slight shifts, inconsistencies in a patient’s narrative or the instinct that something deeper is going on. More worryingly, the chatbots might miss or misinterpret red flags, such as suicidal ideation, especially when phrased in certain ways,” she added.
Majumdar warned of something even more subtle: inappropriate timing. “Good practitioners take caution in how they provide information, and whether it's the right time to say something extreme,” she said.
Psychologist Veronica West added, “We have to remember that LLMs learn from data, not life. They can generate supportive-sounding sentences, but they can't read the room, recognise your tone, or pick up on the slight shift in your voice that screams, ‘Actually, I am not okay at all!’"
"LLMs should ideally be limited to use-cases where errors are low-risk and clearly visible. Therapy is not such a use-case, which requires specialised training/certifications before someone can practice," points out tech policy researcher Prateek Waghre. "LLMs may do a reasonable job of mimicking what a professional therapist may sound like, but their non-deterministic nature means any “advice” they give only appears to be coherent. Therapy is a very sensitive, high-risk use-case where errors are not likely to be visible and in some cases can result in real harm."
Can It Get Addictive?
Several therapists we spoke to highlighted major risks of prolonged exposure to AI chatbots in a therapeutic context—such as dependency and alienation, lack of informed consent, and lack of data privacy.
“When you start believing that ‘my chatbot is enough’ for me to express myself and get therapy, it might delay the process of seeking help from licensed therapists,” said Nirali Bhatia, a counselling psychologist, specialising in internet addiction.
“Prolonged exposure to such conversations could create an echo chamber, and can reduce the ability and tolerance to complex human interactions and emotions,” she added.
An article in Nature reported that AI companion apps use behavioural techniques (e.g., variable rewards, constant validation) known to foster technology addiction. Experts warned that the round-the-clock availability and endless empathy of chatbots create a substantial risk of dependency, particularly for vulnerable users.
Several therapists we spoke to noted that the constant validation and simulated deep connections offered by AI therapists could foster overreliance, especially among individuals with social anxiety or loneliness.
“The fact that an AI chatbot is more likely to seem ever-patient, ever-understanding, ‘non-judgmental’ and supportive no matter what, could easily lead any vulnerable individual into habitual use, dependency, and yes, even addiction,” Juliet Annerino, a Los Angeles-based therapist, told Decode.
No Privacy, No Accountability
Many AI mental health tools collect sensitive user data—often with vague consent.
Platforms like CharacterAI and Replika collect user data such as email addresses, usernames, payment information, and chat content, and automatically log data such as IP addresses. The ability to request access, correction, or deletion of that data is limited to users covered by specific regulatory frameworks, such as the EU and UK General Data Protection Regulation or the California Consumer Privacy Act.
“There is a massive accountability gap. If a chatbot gives harmful advice, who takes responsibility?” Bhatia asked.
Last month, nearly two dozen advocacy groups in the US filed a complaint with the Federal Trade Commission and the attorneys general of all 50 states, alleging that “therapy” chatbots on Meta AI and CharacterAI impersonate licensed health professionals and falsely claim to protect privacy. The complaint has prompted a formal inquiry by US senators into the allegedly deceptive chatbots and the guardrails being developed to prevent unlawful behaviour.
India’s legal system offers little clarity on who—if anyone—can be held accountable when AI chatbots give harmful mental health advice. “India does not have any general or specific legal instruments to determine primary, secondary, or tortious liability for AI chatbots,” explains Apar Gupta, lawyer and founding director of the Internet Freedom Foundation.
According to him, general-purpose tools like ChatGPT often display disclaimers stating they are not qualified health professionals, which further insulates them from liability. “Such disclaimers, when present, make it very difficult for a person to successfully prosecute any legal claim for harm,” Gupta says, since the output would not be legally classified as medical advice.
Even where the chatbot is explicitly marketed as a therapeutic tool, such as in the case of Wysa, Gupta points out that the legal burden often shifts to the human institutions using the technology. “If it’s integrated within a clinical establishment or supervised by a licensed professional, then the liability for faulty advice would most likely rest with that establishment, not the chatbot itself,” he explains.
While the Consumer Protection Act, 2019 could, in theory, address a “deficiency of service,” proving direct harm from an AI’s advice—especially mental health harm—remains a high bar. “There needs to be a strong evidentiary correlation between the advice provided and the harm suffered,” he notes, adding that India’s consumer courts have not yet dealt with such cases.
Gupta also underscores the legal blind spots in existing mental health laws. “The Mental Healthcare Act, 2017 defines who qualifies as a mental health professional or establishment, but that definition doesn’t include AI services,” he says. As a result, users seeking legal redress may find that these platforms fall entirely outside the jurisdiction of the law.
“There is a massive gap,” Gupta concludes. “The remedies that do exist are novel, unexplored, and not designed for the growing reliance on AI chatbots for mental health support in urban India.”
Despite their therapeutic tone, these bots don’t fall under medical malpractice regulations. Unlike human therapists, they are not held to clinical, ethical, or legal standards—leaving users dangerously unprotected.
A Booming Market, A Delicate Moment
Some AI therapy platforms are trying to fix this. Bengaluru-based Wysa has been integrated into the UK’s National Health Service (NHS). Its system flags high-risk users and escalates to human professionals or helplines.
With patients stuck on growing waitlists to see human therapists, Wysa offers an immediate solution. “The entire AI is built to support individuals in distress and across the spectrum of their mental health journey, from low acuity (such as just needing mindfulness or sleep-related support) to higher acuity in terms of anxiety and depression,” Rhea Yadav, Director of Strategy and Impact at Wysa, told Decode.
However, the tool is not entirely autonomous; its interactions are monitored for clinical safety by conversational designers and clinicians, Yadav notes, adding that escalation mechanisms have been built into the app for high-risk situations.
“It’s a three-step process,” she explains. “Identification and detection of risk, offer just-in-time support to the person or the user in crisis, and escalate the user and do a soft handover to national crisis helplines or any other customized escalation pathway.”
The bot is also compliant with Europe’s General Data Protection Regulation, with users having the ability to request deletion of their data.
In 2024, the company launched a Hindi-language version to improve access in Tier 2 and Tier 3 cities, targeting underserved and non-English-speaking populations.
The global AI mental health market grew from $1.45 billion in 2024 to $1.8 billion in 2025, and is projected to hit $11.84 billion by 2034 (https://mindhealth.com.au/ai-therapy-chatbots-mental-health-support/). That growth is fuelled by what human systems can’t offer: availability, affordability, and instant response.
Yet the same year that Wysa scaled up, Woebot—once a leading AI therapist app that had raised US$114 million in funding, promised 24/7 support, and zero waitlists—shut down, citing regulatory hurdles.
Unlike CharacterAI or Replika, Wysa and Woebot were designed specifically for mental health.
The chatbots are becoming more than digital products for many users—they're emotional lifelines. They don't offer clinical accuracy or ethical oversight, but they provide something human therapists often can't: constant availability and the sense of being heard. For users like Abhay, that artificial empathy fills a gap that the traditional system has left wide open.
“I know it’s a machine, and maybe that’s why it still listens better than most people do,” Abhay said.
“It gives you exactly what you want. That can be comforting—but it also means it won’t challenge your thinking,” he added.
Abhay wants to find a human therapist who fits his needs, but until then, ChatGPT will have to suffice.
Note: If you are in need of support, or know someone who does, do not hesitate to reach out to one of the helplines below:
- AASRA: 91-22-27546669 (24 hours)
- Sneha Foundation: 91-44-24640050 (24 hours)
- Vandrevala Foundation for Mental Health: 1860-2662-345 and 1800-2333-330 (24 hours)
- iCall: 9152987821 (Available Monday to Saturday, 8:00 am to 10:00 pm)
- Connecting NGO: 18002094353 (Available from 12 pm - 8 pm)
- The Samaritans Mumbai: 91-84229 84528 / 91-84229 84529 / 91-84228 84530 (Daily, 3 pm - 9 pm)