      Explainers

      AI-Generated Child Sexual Abuse Material 'Nightmare' Is Now A Reality: Report

      The Internet Watch Foundation has cautioned that the spread of child sexual abuse images on the internet could intensify without adequate regulation of AI tools that create deepfake photos.

By Hera Rizwan | Published 25 Oct 2023 6:53 PM IST

In its latest report, the U.K.-based Internet Watch Foundation (IWF) has flagged a flood of AI-generated images of child sexual abuse on the internet. The watchdog has urged governments and technology providers to act quickly before this "nightmare" overwhelms law enforcement investigators with an expanding pool of potential victims.

The report highlights that criminals are leveraging downloadable open-source generative AI models capable of producing images, with highly alarming consequences. The technology is now being used to generate new images featuring previously victimised children, and offenders are even beginning to offer monthly subscription services for AI-generated child sexual abuse material (CSAM).


      What are the key findings of the report?

Back in June, the IWF reported the discovery of seven URLs in the public domain that appeared to contain AI-generated content. In a recent investigation into a dark web forum associated with CSAM, the IWF uncovered thousands of AI-generated images that are deemed illegal under UK law, shedding light on the extent of AI's use in this context. The dark web is a section of the internet that can only be accessed with a specialised browser.

Here's what the report found:

- A dark web forum dedicated to CSAM, assessed by the IWF, had a total of 20,254 AI-generated images shared within a month. Of these, 11,108 images were selected for evaluation by IWF analysts because they were explicitly criminal in nature. The remaining 9,146 AI-generated images either did not depict children or depicted children in situations that were clearly non-criminal.

- The report also suggests there is substantial evidence that AI-generated CSAM has elevated the risk of re-victimising known survivors of child sexual abuse, and of exposing famous children as well as children associated with the perpetrators.

- According to the report, most AI-generated CSAM found is now realistic enough to be treated as ‘real’ CSAM. "The most convincing AI CSAM is visually indistinguishable from real CSAM, even for trained analysts," the report read.

- The disturbing AI-generated images include depictions of the rape of babies and toddlers, the abuse of famous preteen children, and BDSM content featuring teenagers. Most of the images likely depicted children aged between 7 and 13, and 99.6 percent of them featured female children.

      - It added that the technology was also being used to create images of celebrities who have been “de-aged” and subsequently portrayed in explicit situations involving minors. Other examples of CSAM included using AI tools to “nudify” pictures of clothed children found on the internet.

- Perpetrators frequently made reference to "Stable Diffusion", an AI model provided by the UK-based company Stability AI, as observed by the IWF.


      "Worst nightmare" is coming true

      Susie Hargreaves, the chief executive of the IWF, said that the watchdog’s “worst nightmares have come true”. According to her, the foundation had earlier warned that AI imagery could soon become indistinguishable from real pictures of children suffering sexual abuse, and that we could start to see this imagery proliferating in much greater numbers. "We have now passed that point," she said.

Hargreaves highlighted that this dangerous trend reopens emotional wounds for victims: not only must they grapple with the distressing knowledge that images of their abuse may be circulating in the shadows of the internet, they also face the added threat of encountering fresh images depicting their abuse in previously unimaginable and horrifying ways.

The watchdog has urged governments to take appropriate steps. Dan Sexton, the group's chief technology officer, warned that unless action is taken, the deluge of deepfake child sexual abuse images could overwhelm investigators, making it difficult to distinguish actual children from virtual characters and potentially impeding rescue efforts. Criminals may also exploit these images to groom and coerce new victims.



      Tags

Child Abuse, Deepfake, Facebook