Decode

From Inception To Identification: What Is A Deepfake And How To Detect One?

Explore the alarming rise of deepfake technology: uncover the origins of deepfakes, learn tips to identify them, and understand the legal and technological measures in place to combat this growing digital threat.

By - Hera Rizwan | 10 Nov 2023 10:08 AM GMT

Earlier this week, a viral video featuring actress Rashmika Mandanna circulated on social media, sparking a blend of shock, surprise, and horror among netizens. The original video featured British Indian influencer Zara Patel and was manipulated using deepfake technology to replace her face with Mandanna's.

The actress took to social media to express her dismay and astonishment at the circulating video. In her post on X, Mandanna wrote, "Something like this is honestly, extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused."

Meanwhile, Decode found that X (formerly Twitter) is full of deepfake videos of Indian actresses. 

After the incident, the Ministry of Electronics and Information Technology promptly issued an advisory to social media platforms, requiring them to remove such content within 36 hours of receiving a report from either a user or a government authority. Highlighting Section 66D of the Information Technology Act, 2000, it noted that cheating by personation using a computer resource is punishable with imprisonment of up to three years and a fine of up to Rs 1 lakh.

But how did it all begin? As this technology becomes more common and convincing, BOOM delves into the concerning surge of deepfake technology, exploring its inception, methods of detection, and the legal and technological safeguards in place to counteract this expanding digital menace.

What is a deepfake?

A deepfake is a form of synthetic media in which a person's likeness in an image or video is replaced with that of another individual.

The term "deepfake" originated in late 2017, coined by a Reddit user of the same name. This user set up a forum on the online news aggregation site where explicit videos created with open-source face-swapping technology were shared.

In September 2019, the artificial intelligence company Deeptrace identified nearly 15,000 online deepfake videos, marking a nearly twofold increase over a nine-month period. An astonishing 96% of these videos had pornographic content, with 99% of them superimposing the faces of female celebrities onto adult film performers.

While the term "deepfake" was coined in 2017, the technology itself has roots that extend further into the past. The creation of lifelike fake portraits was made possible by the development of Generative Adversarial Networks (GANs) in 2014. These networks pit two AI agents against each other: one generates an image, while the other tries to identify it as fake. If the detecting agent uncovers the forgery, the AI forger adjusts and improves its output.
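
For readers curious about what this tug-of-war looks like in practice, below is a minimal, illustrative sketch of the idea in Python using the PyTorch library. The toy two-dimensional data and tiny networks are arbitrary choices for demonstration; real deepfake systems train far larger networks on images, but the adversarial loop is the same.

```python
# A minimal sketch of the GAN idea described above, assuming PyTorch is installed.
# The "forger" (generator) learns to mimic a simple 2-D Gaussian distribution,
# while the "detective" (discriminator) learns to tell real samples from fakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into candidate "fake" samples.
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Real data: points drawn from a Gaussian centred at (2, 2).
    real = torch.randn(64, 2) + 2.0
    fake = generator(torch.randn(64, 8))

    # 1) Train the discriminator to label real samples 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator (fakes labelled as real).
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print("Mean of generated samples:", generator(torch.randn(1000, 8)).mean(dim=0))
```

After enough rounds of this back-and-forth, the generator's samples cluster around the same region as the real data; at image scale, the same dynamic is what produces convincing fake faces.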

An X post by Ian Goodfellow, currently a research scientist at Google DeepMind, showed the development of the technology over the last few years. Goodfellow and his colleagues published the 2014 paper that introduced GANs for the first time.

Subsequently, deepfakes began finding wider acceptance in the creative industry, where they are used to weave believable stories set in the past, the future or a modified present. In India too, the advertising industry has been leveraging the technology for the past few years.

How can we spot deepfakes?

Speaking to BOOM, Jaspreet Bindra, managing director and founder of The Tech Whisperer, listed ways of spotting deepfakes, which he feels are becoming "incredibly realistic". "To detect deepfakes, look for inconsistencies in the imagery. Facial features that don't align properly, lighting that seems off, or irregular blinking patterns can be tell-tale signs," he said.

Audio-visual mismatches are also red flags, where the tone and cadence of the voice may not match the person’s usual speech patterns, he added. 
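
As a rough illustration of the irregular-blinking cue Bindra mentions, the sketch below uses OpenCV's bundled face and eye detectors to estimate how often a subject's eyes appear open across a clip. This is a crude demonstration heuristic, not a real deepfake detector, and the video file name is a placeholder.

```python
# Rough blink-frequency check using OpenCV's bundled Haar cascades.
# A near-total absence of "eyes closed" frames over a long clip can hint
# at unnatural blinking, one of the visual cues described above.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("suspect_clip.mp4")  # placeholder file name
frames_with_face, frames_with_open_eyes = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces[:1]:          # analyse the first detected face only
        frames_with_face += 1
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
        if len(eyes) >= 2:                  # both eyes visible -> eyes likely open
            frames_with_open_eyes += 1

cap.release()
if frames_with_face:
    ratio = frames_with_open_eyes / frames_with_face
    # A real person normally blinks every few seconds, so eyes detected as
    # open in ~100% of face frames over a long clip would be unusual.
    print(f"Eyes detected as open in {ratio:.0%} of face frames")
```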

According to Bindra, one should be wary if any video sounds too sensational or outlandish to be true. "One must question who put this out into the world – a reliable source or a notorious fake news factory?"

The proliferation of deepfakes, as Bindra puts it, is a stark reminder of the dual-edged nature of technology. "It holds a mirror to society, reflecting our potential for creation and destruction," he said.

Just as machine learning and AI are being used to create deepfakes, they are also being used to combat them. "Companies like Deeptrace are pioneering software to identify deepfakes by analysing shadows and reflections that are not congruent with physics. Additionally, platforms like Facebook and X are also implementing policies to flag, and sometimes remove, deceptive deepfake content," he said.

On the technological front, Bindra says, embracing blockchain offers a solution, as media can be verified and traced back to its origin.
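
The core of that provenance idea can be sketched in a few lines of Python: the publisher records a cryptographic fingerprint of the original file somewhere tamper-evident (a blockchain entry, a signed registry), and anyone can later re-compute the fingerprint of the copy they received and compare. The file names here are placeholders.

```python
# Minimal provenance check: compare a file's SHA-256 fingerprint against the
# fingerprint the publisher recorded when the media was first released.
import hashlib

def media_fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a media file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

published = media_fingerprint("original_video.mp4")    # recorded at publication time
received = media_fingerprint("downloaded_video.mp4")   # re-computed by the viewer
print("Matches the published original" if received == published else "File differs from the published original")
```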

How to stay informed and vigilant about deepfake threats?

Some tips and advice we can follow to keep ourselves and others safe against the proliferation of this misinformation are:

  • Double-check the source. Look for the same story across different media outlets to verify authenticity.
  • Avoid sharing unverified information.
  • Always approach content with a critical mind. If it seems off, there's a good chance it might be.
  • Tighten your online privacy settings. The less data you have out there, the harder it is for someone to create a deepfake of you.

According to Bindra, apart from us being savvy consumers of media and questioning the authenticity of suspicious content, there also needs to be a robust legal framework that penalises the malicious creation and distribution of deepfakes.

He said, "The IT Act does exist as a recourse for misinformation. However, we require a stronger Act in terms of a stronger penalty with an exemplary punishment to deter such actions. Deepfakes can be very harmful not only from a pornographic viewpoint but also from elections and events of war or communal events."

Lastly, Bindra suggested that it should be mandated that anyone using an AI model to produce an image or information must disclose it. "People must be made aware of Classifiers – software which can detect AI-generated content – and widespread use of the same, much like antivirus," he added.
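
Purely as an illustration of how such a classifier might be wired into a workflow, the sketch below assumes the Hugging Face transformers library; the model identifier and file path are placeholders, not a specific, endorsed detector.

```python
# Hypothetical use of an AI-image classifier via the transformers pipeline API.
# "some-org/ai-image-detector" is a placeholder model id, not a real recommendation.
from transformers import pipeline

detector = pipeline("image-classification", model="some-org/ai-image-detector")
results = detector("incoming_image.jpg")  # placeholder file path
for r in results:
    print(f"{r['label']}: {r['score']:.2f}")  # e.g. labels such as "artificial" vs "real"
```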