BOOM

      Explainers

      Meta's AI Prevents Suicide In Lucknow: How Does It Detect Self-Harm?

      The AI tool, trained to recognise keywords and patterns linked to self-harm, also evaluates comments and past posts to assess urgency.

      By Hera Rizwan
      Published 6 Sept 2024, 5:48 PM IST

      How Effective is Meta's AI in Detecting and Preventing Suicides?

      • A 21-year-old woman attempting suicide was saved after Meta alerted police to her Instagram post.
      • Meta's AI has intervened in multiple cases, including a boy in Kota and a man in Mumbai, preventing suicide attempts through timely alerts.
      • Meta uses machine learning to identify suicide-related keywords and patterns, analysing posts, comments, and user history.

      In a recent incident, Meta helped save the life of a woman in Lucknow who was attempting suicide. Reportedly, the 21-year-old, distressed after being abandoned by her husband, posted a video on Instagram showing herself with a noose around her neck.

      When the video went viral, Meta alerted the Social Media Centre at the Directorate General of Police. The police quickly responded, reached the location, and intervened to prevent the suicide. Afterward, the woman was taken out of the room and counseled by women police officers.

      This is not the first time that Meta’s AI systems have been instrumental in saving lives. There have been previous reports of similar interventions, where distressing posts flagged by AI have prompted timely responses from law enforcement agencies.


      How effective is Meta’s AI in detecting suicide patterns?

      Kota, a major coaching hub in Rajasthan, is known for preparing students for competitive exams like the IIT-JEE and NEET. However, the city's intense academic pressure has led to a worrying rise in student suicides.

      In June, a 16-year-old boy in Kota Rural was saved after uploading two Instagram reels expressing suicidal intent. Upon receiving an alert from Meta, the police quickly dispatched a team to his residence and arranged counseling for both him and his parents.

      Similarly, a coaching student in Kota from Jhunjhunu district posted a concerning message on Facebook. Police reached out to the student in Kota and his parents in Jhunjhunu, and the student was given counseling by a licensed psychiatrist.

      In 2021, a 23-year-old from Mumbai attempted to live-stream his suicide on Facebook, a broadcast that inadvertently led to his rescue. Meta's team in Ireland alerted the Mumbai Police, who quickly broke into his home and transported the unconscious man to the hospital within an hour of his attempt.

      In 2020, West Bengal Police were also notified by Facebook about a youth’s live-streamed suicide attempt with a sharp weapon. The police intervened after being informed by Facebook, and they alerted the youth's unsuspecting father.

      With millions of users creating enormous amounts of data daily through posts, texts, and videos, manual monitoring is infeasible. Therefore, Meta uses artificial intelligence (AI) to spot potential issues and identify signs of trouble.


      How does Meta's AI detect signs of suicide?

      According to its Safety Centre, the tech giant has been collaborating with experts in suicide prevention and safety since 2006, just two years after Facebook's launch, to support users across Meta platforms.

      Initially, the team provided support by connecting individuals to local authorities, helplines, or non-governmental organisations. In 2018, Meta enhanced its approach by incorporating AI and machine learning to better identify potential self-harm situations.

      Alluding to the introduction of the AI tool, a 2018 Meta blog read, “In the past, we’ve relied on loved ones to report concerning posts to us since they are in the best position to know when someone is struggling. However, many posts expressing suicidal thoughts are never reported to Facebook, or are not reported fast enough.”

      In 2017, Facebook introduced a machine-learning model to detect suicide-related keywords like "kill," "goodbye," and "depressed," based on expert input. However, these words are also used in non-harmful contexts, producing false positives that Meta's community operations team had to filter out manually.
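The limitation of that first stage can be seen in a toy sketch. This is purely illustrative, not Meta's actual system; the word list is an assumption drawn from the examples quoted in the article:

```python
# Illustrative toy, not Meta's implementation: a naive keyword matcher
# flags any post containing a term from an expert-supplied watch list.
# Benign uses of the same words surface as false positives, which is
# why human reviewers were needed to filter the results.
KEYWORDS = {"kill", "goodbye", "depressed"}

def keyword_flag(post: str) -> bool:
    """Return True if the post contains any watch-list keyword."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & KEYWORDS)

print(keyword_flag("I'm so depressed lately"))     # True -- genuine concern
print(keyword_flag("This deadline will kill me"))  # True -- false positive
```

Both posts trigger the same flag, even though only one signals real distress, which motivated the move to pattern-based models described next.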

      For better accuracy, the AI tool was subsequently trained to detect suicidal patterns rather than isolated keywords. It analyses comments, looking for phrases like “tell me where you are” in serious cases or “I’m here for you” in less urgent ones. It also checks patterns in previous posts and their timing to assess whether the user is in immediate danger.
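A hypothetical sketch of how those signals might be combined into a single urgency score; the phrase lists and weights here are assumptions for illustration only, not Meta's code:

```python
# Hypothetical sketch of the signals described above, not Meta's system.
# Comments like "tell me where you are" suggest an emergency; supportive
# comments like "I'm here for you" suggest lower urgency; a history of
# recently flagged posts raises the score further.
URGENT_PHRASES = ("tell me where you are",)
SUPPORT_PHRASES = ("i'm here for you",)

def urgency_score(comments: list, recent_flagged_posts: int) -> int:
    """Combine comment phrases and posting history into a rough score."""
    score = 0
    for comment in comments:
        text = comment.lower()
        if any(p in text for p in URGENT_PHRASES):
            score += 2  # assumed weight for emergency-type comments
        elif any(p in text for p in SUPPORT_PHRASES):
            score += 1  # assumed weight for supportive comments
    return score + recent_flagged_posts

print(urgency_score(["Tell me where you are!"], recent_flagged_posts=2))  # 4
```

The point of the sketch is the fusion of independent signals, comments plus history, rather than any single keyword deciding the outcome.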

      When self-harm is reported or flagged, Meta's community operations team reviews it and takes action. If there is no immediate danger, they connect the user with support services, like helplines or counseling. In urgent cases, local authorities, such as the police, are notified right away.

      Meta also uses AI to prioritise the order in which its team reviews reported posts, videos and livestreams. The blog post reads, "It also lets our reviewers prioritise and evaluate urgent posts, contacting emergency services when members of our community might be at risk of harm. Speed is critical."
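One plausible way to implement that kind of triage, sketched here purely for illustration and not taken from Meta's blog, is a max-priority queue keyed on urgency so that reviewers always see the most critical report first:

```python
# Minimal triage sketch (an assumption, not Meta's implementation):
# reports arrive with an urgency score, and a max-priority queue
# ensures the most urgent ones are reviewed first.
import heapq

def triage(reports):
    """Given (urgency, report_id) pairs, return ids most-urgent first."""
    heap = [(-urgency, rid) for urgency, rid in reports]  # negate for max-heap
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

print(triage([(1, "post_a"), (5, "livestream_b"), (3, "video_c")]))
# → ['livestream_b', 'video_c', 'post_a']
```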



      Tags: Meta, Facebook, Instagram, Suicide