
2023 Review: Generative AI Amplifies Misinfo, Fraud, Deepfake Porn In India

This year, fact-checkers in India saw synthetic content tailored to local audiences and AI voice clones in Hindi.

By Anmol Alphonso

28 Dec 2023 8:37 AM GMT

The year 2023 provided an unsettling preview of how generative artificial intelligence (AI) can be misused to commit fraud, create non-consensual imagery and generate misinformation in India.

Widely accessible generative AI tools have enabled the creation of deepfakes in the form of images, audio and videos. The technology has largely been used to target already marginalised groups and communities.

While the buzz around deepfakes has existed since 2017, this year fact-checkers in India saw synthetic content tailored to local audiences and AI voice clones in Hindi, a phenomenon not seen before.

AI Image Generators Have Got Better And Better

In May this year, a photo of a few protesting Indian wrestlers smiling while detained inside a police van went viral on social media.


The wrestlers were protesting against the former head of the Wrestling Federation of India, Bharatiya Janata Party MP Brij Bhushan Singh, who is accused of sexual harassment. However, the image was doctored using an AI photo editing app called FaceApp, which artificially added smiles to the faces of the athletes in the photo.



The incident also showed how AI-based misinformation does not fall into a neat binary of AI-generated or not, and that current detection tools are not equipped to catch manipulations that fall somewhere along that wide spectrum.

Earlier in May, a fully AI-generated photo purporting to show an explosion at the Pentagon caused the US stock markets to dip briefly. The hoax was also carried by several Indian mainstream news outlets.



The incident showed how AI-generated misinformation can also be used to spread rumours, cause panic and roil financial markets.

Closer to home, one of the biggest news stories in India this year saw the use of an AI generated image.



Several mainstream Indian news outlets ran an AI-generated photo in their news articles on the Uttarakhand tunnel rescue operation, claiming it showed rescuers posing for a group photo after the successful evacuation of 41 workers from the collapsed Silkyara tunnel.



Synthetic images have also been shared in the context of the ongoing Israel-Hamas war. An image purporting to show a man walking with his five children amidst buildings reduced to rubble turned out to be AI-generated.



The use of such images has unintended consequences, as they have often been used to undermine the suffering of Gaza’s civilian population.

AI Voice Clones Put Words In Someone's Mouth

Along with AI images, AI voice clones have also been used to spread misinformation about the Israel-Hamas war.

BOOM found an Israeli sound designer and voice-over artist who tested the boundaries of social media content moderation policies with deepfakes targeting famous people who spoke out against Israel.



Yishay Raziel created deepfakes of Queen Rania of Jordan, former adult film actress Mia Khalifa, musician Roger Waters and actor Angelina Jolie, among others, using AI voice cloning technology.

This year we also debunked several deepfake videos that targeted US President Joe Biden, Ukraine President Volodymyr Zelenskyy, and Microsoft co-founder and philanthropist Bill Gates.


AI Voice Clones Can Speak Hindi Too

AI voice clones are also being used to commit fraud. In India, con artists are using AI voice clones of celebrities to peddle fraudulent get-rich-quick schemes.

BOOM found that Facebook is filled with fraudulent ads using AI voice clones of popular Indian celebrities peddling bogus investment schemes and fake products.

We found fraud ads with AI voice clones of Shah Rukh Khan, Virat Kohli, Mukesh Ambani, Ratan Tata, Narayana Murthy, Akshay Kumar and Sadhguru that have been overlaid onto real videos of these individuals.

Similarly, we found AI voice clones of popular Hindi television news anchors such as Arnab Goswami, Ravish Kumar, Anjana Om Kashyap and Sudhir Chaudhary speaking in Hindi while promoting a fake diabetes drug.



The rise of AI voice clones in Hindi is concerning, and fact-checkers fear the upcoming general election in May 2024 could see a flood of such AI-based political misinformation. India has already seen the use of deepfakes in a state election campaign in 2020.

The Troubling Rise Of Deepfake Pornography

Finally, generative AI’s most troubling use case has been its role in generating non-consensual imagery in the form of deepfake pornography.

BOOM found X user @crazyashfans posted over thirty pornographic deepfake videos made with the faces of Indian actresses, showing them performing explicit sex acts. The account was deactivated after we published the story.



However, several such accounts posting deepfake images and videos targeting Indian actresses exist on X and on other platforms such as Facebook, Instagram and YouTube.



In addition, there are websites and apps that allow users to synthetically ‘strip’ someone by uploading just one photo of the person.

The above instances show that AI image generation and voice cloning tools have been released hastily, and that their guardrails, if any, are easy to bypass.

The need of the hour is AI literacy, updating existing laws to catch up with the technology, and more accountability from social media platforms that are rushing to introduce AI features into their products without fully understanding the long-term impact.