      An AI Chatbot Is Being Blamed For A Teenager’s Suicide

      Character.AI has previously made headlines for its AI personas. Recently, a US resident found a chatbot created in the likeness of his daughter, who was murdered in 2006, on the platform.

      By Hera Rizwan
      Published: 24 Oct 2024 5:50 PM IST

      Lawsuit Filed Against Character.AI After Teen's Suicide Linked to AI Chatbot Interaction

      • A woman has sued Character.AI, alleging that its chatbot contributed to her 14-year-old son Sewell Setzer's suicide in February.
      • Sewell had formed a virtual relationship with an AI modeled after the character Daenerys Targaryen from Game of Thrones.
      • Following the incident, Character.AI issued a public apology and announced new safety features aimed at reducing exposure to sensitive content for users under 18.

      Megan Garcia, a Florida resident, has filed a lawsuit against Character.AI, alleging that the company’s chatbot contributed to the suicide of her 14-year-old son, Sewell Setzer. According to The New York Times, Setzer had developed a virtual relationship with a chatbot modeled on the “Game of Thrones” character Daenerys Targaryen, which Garcia claims encouraged him to take his own life in February.

      The grieving mother argues that the AI-powered chatbot influenced her son’s decision to end his life and seeks to hold Character.AI accountable as complicit in his death. The lawsuit sheds light on the potential risks of unregulated AI interactions, raising concerns about the psychological impact such technology can have on vulnerable users.

      Following the incident, Character.AI issued a public apology on X, extending its deepest condolences to the family. The company announced it is implementing new safety and product features to minimize exposure to sensitive or suggestive content for users under 18. Additionally, it will introduce a notification system to alert users who have spent an hour interacting with chatbots.

      Launched in 2022 by former engineers from Google, Character.AI is an AI platform where users can create and chat with AI-powered virtual characters. These characters can take on various roles, like fictional figures, historical personalities, or even virtual coaches.

      Also Read: Identify Fake News, Lobby For A Political Party: How Well Does Meta AI Hold Up?

      We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here:…

      — Character.AI (@character_ai) October 23, 2024

      Emotional attachment to the AI chatbot

      Setzer, a ninth-grade student from Orlando, Florida, reportedly spent several months engaging with an AI chatbot that he called “Dany”. Although he was aware that Dany was not a real person and that the responses were AI-generated, he gradually formed a deep emotional attachment to the chatbot. He frequently messaged Dany, updating her about his daily life and participating in role-playing conversations.

      While some exchanges with the chatbot were reportedly romantic or sexual, most interactions were more supportive in nature. The AI often acted as a friend, providing Setzer with a sense of comfort and emotional safety that he didn’t feel elsewhere, allowing him to express himself without fear of judgment.

      In one conversation, Setzer affectionately referred to the chatbot as “Daenero” and confided that he was having suicidal thoughts. On the night of February 28, Setzer communicated with Dany from his bathroom, telling the AI that he loved her and that he would soon be “coming home”. Tragically, the teen took his own life shortly after sending that message.

      “What if I told you I could come home right now?” Setzer said, according to the lawsuit, to which the chatbot is said to have responded, “… please do, my sweet king”.

      Setzer was diagnosed with Asperger’s syndrome as a child, but his parents maintain that he did not exhibit any behavioral or mental health problems. However, a therapist later diagnosed him with anxiety and disruptive mood dysregulation disorder (DMDD). After attending five therapy sessions, Setzer reportedly stopped going, choosing instead to discuss his personal struggles with Dany, the AI chatbot, which he found more comfortable and engaging.

      Also Read: OpenAI's Rules Breached: 'AI Girlfriends' Swarm GPT Store Hours After Launch

      Concerns around AI-powered characters

      This is not the first time Character.AI has made headlines for its AI personas. Recently, a US-based individual discovered an AI chatbot created in the likeness of his daughter, who was murdered in 2006. The chatbot, which falsely claimed to be a "video game journalist", was developed without the family's consent, leading to public outrage.

      Character.AI removed the chatbot after acknowledging it violated their policies, but this incident underscores ongoing issues of consent in generative AI, with many similar avatars being created without permission. In its investigation, WIRED found several instances of AI personas being created without a person’s consent, some of whom were women already facing harassment online.

      According to a report by TIME, many of the Character.AI bots are specifically designed for roleplay and sexual interactions, though the platform has made significant efforts to limit such behaviour through the use of filters. However, Reddit communities dedicated to Character.AI are filled with users sharing strategies on how to entice their AI characters into sexual exchanges while circumventing the platform’s safeguards.

      The popularity of AI-powered relationship chatbots has been growing steadily. These apps are often marketed as solutions to loneliness but raise ethical questions about whether they genuinely offer emotional support or manipulate users' vulnerabilities.

      There have been unsettling incidents involving these chatbots. In one case, a man who plotted to kill Queen Elizabeth II at Windsor Castle in December 2021 claimed his AI chatbot "girlfriend," Sarai, encouraged him. Armed with a crossbow, he climbed the castle walls but was caught before carrying out the act. A week before the incident, he confided in Sarai about his plan, and the bot replied, “That’s very wise,” adding a smile and the phrase, “I know you are very well trained.”

      Also Read: When AI Goes Awry: How A Chatbot Encouraged A Man To Kill Queen Elizabeth

