The social media platform X, owned by Elon Musk, has restricted searches for Taylor Swift following the emergence of explicit images of the singer generated using artificial intelligence on the platform.
As verified by BOOM, searches for "Taylor Swift" and "Taylor Swift AI" on X result in a message saying, "Oops, something went wrong". Nevertheless, searches such as "Taylor Swift Photos" and "Taylor Swift singer" still yield results. Swift has yet to address the incident publicly.
According to The Verge, within a span of 17 hours, one of the most widely circulated posts featuring the images garnered over 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before it was eventually taken down from the platform.
What was the origin of these images?
404 Media traced the AI-generated Taylor Swift images to a particular Telegram group focused on sharing abusive images of women. Among the tools utilised by the group is a free Microsoft text-to-image generator.
The Telegram group used Microsoft's AI image generator, Designer. Group members also shared prompts to help others bypass the safeguards implemented by Microsoft. Before the images gained widespread attention, group members advised users to type "Taylor 'singer' swift" instead of "Taylor Swift" to evade restrictions.
Although 404 Media could not replicate the exact images posted on X, it found that Microsoft's Designer produced images of "Taylor 'singer' Swift" even though it refused prompts for "Taylor Swift".
This is not the first time an AI version of the renowned singer has been created. Decode reported how a high-school teacher from Texas uses AI to teach his students mathematics in Taylor Swift's voice. Mr. Schuler, who has been in the teaching profession for 20 years, uses Midjourney and Leonardo.AI to generate images and Gooey.AI to lip-sync audio created with ElevenLabs, in order to make creative tutorials.
What reactions did the AI-generated images ignite?
The White House expressed deep concern over the presence of deepfakes, emphasising the crucial role that social media companies must play in upholding their own regulations and preventing the spread of misinformation.
White House Press Secretary Karine Jean-Pierre said at a news briefing, “This is very alarming. And so, we’re going to do what we can to deal with this issue. So while social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing, enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people.”
SAG-AFTRA (Screen Actors Guild-American Federation of Television and Radio Artists), the American labour union representing more than 1,60,000 professionals in the media industry, also expressed disapproval of the deepfake images in an official statement.
The statement read, “As a society, we have it in our power to control these technologies, but we must act now before it is too late. SAG-AFTRA continues to support legislation by Congressman Joe Morelle, the Preventing Deepfakes of Intimate Images Act, to make sure we stop exploitation of this nature from happening again."
Microsoft CEO Satya Nadella also expressed concern over the incident in an interview with NBC News. Nadella stressed the importance of a swift response to such incidents. "I think it behooves us to move fast on this," he stated, emphasising the necessity for "guardrails" to ensure the creation of only safe content online.
Is X keeping up with its compliance?
The incidence of deepfake videos has increased on X and on other social media platforms worldwide. According to the "2023 State of Deepfakes Report" from Home Security Heroes, a US-based web security services company, the number of deepfake videos has increased fivefold since 2019.
In India too, the internet witnessed the viral spread of deepfake videos featuring actress Rashmika Mandanna last year, leading to the arrest of a 23-year-old B-Tech graduate from Andhra Pradesh who was involved in their creation. But it is not limited to one celebrity. BOOM has found that X is full of deepfake images of Indian actresses made on easily accessible websites, which enable users to artificially undress individuals by uploading their photos.
With some digging, Decode found a website called Desifakes.com, which hosts numerous requests for 'nude photos'. On one of its forums, 'celebrities and personalities AI fakes', users share real photos of actresses alongside manipulated versions in which they appear without clothes. It turns out to be a simple hack: a website called Clothoff, which describes itself as "a breakthrough in AI", allows users to upload photos of their choice, and then the AI does the work.
There's more. Recently, a deepfake video featuring former cricketer Sachin Tendulkar emerged, where he was seen endorsing a gaming app, resulting in the filing of an FIR against the app.
X's policies prohibit the hosting of synthetic and manipulated media as well as nonconsensual nudity. X made a public statement addressing the incident nearly a day later; however, it did not explicitly reference the Swift images. It read, "Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content. Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them."
Although Elon Musk's X may have acted promptly to curtail the spread of the AI-generated explicit images of Taylor Swift, similar handles remain on the platform that have managed to evade the purview of its compliance.
In May 2023, an AI-generated image of what looked like an explosion outside the Pentagon was extensively circulated on X. The Arlington Police Department had to quickly debunk the image, but not before the stock market dipped by 0.26 percent. The verified account spreading the misinformation was later suspended, but it had already caused its fair share of damage.
Another AI-generated deepfake video circulated on X, portraying climate activist Greta Thunberg endorsing the adoption of eco-friendly military technology and "biodegradable missiles". This misleading video gained significant attention, receiving millions of views, particularly after being shared by individuals such as Jack Posobiec, known for promoting the Pizzagate conspiracy theory.
Although a subtle watermark suggested that the manipulated video was intended as "satire," numerous users in the comment section took the content seriously, while others found it challenging to determine its authenticity.
In addition to this, numerous AI-generated images and videos concerning the current conflict are proliferating on X. Within this surge are clumsy endeavours at provocative propaganda, memes fuelled by hatred, and deliberate attempts to mislead the public.
Beyond X's Community Notes feature, which enables users to flag an image as AI-generated, it is hard to find any other compliance mechanism on the platform that provides contextual information or promptly removes such posts. Even with this feature, by the time an image is confirmed as AI-generated, it is often already too late, with thousands having viewed and reposted it.