BJP's Deepfake Videos Create Row; EC Unclear How To Respond

Delhi elections saw the first use of deepfake videos in a political campaign. Should we expect more in the future?

As India witnesses the first-ever use of AI-generated deepfakes in a political campaign, as reported by Vice on February 18, the Election Commission of India is yet to respond to this new technology, which could drastically transform our perception of reality.

BOOM got in touch with Sheyphali Sharan, spokesperson for the ECI, who said the matter has not yet come up before the Commission. "There is no such issue in my knowledge," Sharan told BOOM. When told about the potential for abuse and the possibility of widespread use in future elections, she had "nothing to respond at this stage."

Also Read: India Is Teeming With 'Cheapfakes', Deepfakes Could Make It Worse

On February 7, 2020, a day ahead of polling in Delhi for the Legislative Assembly, a series of videos of Bharatiya Janata Party leader Manoj Tiwari appeared on multiple WhatsApp groups. The videos showed Tiwari hurling various allegations at his rival Arvind Kejriwal in English and Haryanvi.

The only hiccup: Tiwari never recorded those videos. They were created using AI-generated deepfake technology. In other words, the videos were not shot, but manufactured.

In a world where misinformation is rampant, what does this mean for future elections?

"Positive Campaign" Or A Dangerous Precedent?

The term deepfake is a portmanteau of "deep learning" (a branch of machine learning) and "fake". It refers to synthetic media in which algorithms graft one person's face or voice onto footage of another (usually a public figure), making them appear to say or do things they never did.

Up until now, the term "deepfake" has generally been associated with pornography, with women being the most common target of such visual manipulation.

This has led to deepfakes generally being viewed as a negative tool, capable of producing manipulated videos so realistic that they could fool even expert tech sleuths.

However, are we witnessing a paradigm shift with BJP's use of deepfakes to reach out to a multilingual voter base?

According to Sagar Vishnoi, the chief strategist of The Ideaz Factory - the marketing company that made the Manoj Tiwari deepfakes - such technology can be used for a "positive campaign". Speaking to Vice, Vishnoi said, "We have used a tool that has so far been used only for negative or ambush campaigning and debuted it for positive campaign."

Neelkant Bakshi, co-incharge of BJP Delhi IT Cell, had also shown initial enthusiasm for the use of such technology in political campaigning. According to him, Tiwari's deepfake videos were disseminated across 5,800 WhatsApp groups in the Delhi NCR region.

In conversation with Vice, Bakshi said that deepfakes helped them "scale campaign efforts like never before". "The Haryanvi videos let us convincingly approach the target audience even if the candidate didn't speak the language of the voter," he added.

However, after Vice's story drew angry reactions over the ethics of such technology and its potential for abuse, Bakshi distanced the party from the videos and said on NDTV that he does not intend to use deepfakes in future elections in Delhi.

We reached out to Bakshi for a comment, and the article will be updated as and when he responds.

The potential for abuse of such technology in politics has already been recognised abroad. In October 2019, the state of California in the United States passed a bill restricting the distribution of political deepfakes within a 60-day period before an election.

"Deepfake AI vs Detection AI: A Cat And Mouse Game"

Past manipulations of video and audio using non-AI editing techniques have led to devastating consequences such as a series of mob lynchings in India and have also been used to manipulate voters around the world by spreading slander on targeted politicians.

Also Read: No, Arvind Kejriwal Was Not Intoxicated And Slurring

While "cheapfakes" have already become a rampant issue in the country, the arrival of deepfakes could be the final nail in the coffin for free and fair elections, distorting the very nature of truth and thereby manipulating voters.

According to Israeli historian Yuval Noah Harari and Chinese-born American computer scientist Fei-Fei Li, while artificial intelligence can improve our lives, it can also be used to "hack human brains" in ways we could not have thought of before.

BOOM reached out to Debayan Gupta, Assistant Professor of Computer Science at Ashoka University, to understand the implications of such technology and how we can distinguish between a real video and a deepfake. According to him, the cons are pretty huge.

"It means that we can no longer trust video content - just like images could be photoshopped, videos, too, can now be easily doctored. Given the difficulty of tracing something back to its source, one could easily imagine partisan elements creating inflammatory videos (say, a major politician disparaging a particular religion), or embedding faces in pornographic videos," he told BOOM.

As such videos get closer to perfection, how could we possibly distinguish between real videos and manufactured ones? Gupta says a normal human eye cannot make that distinction - it will require an AI-enabled detection system to do the job.

"There are people working on building AI to detect deepfakes (looking at things like lighting, facial muscles, blinking, etc), but that's a cat-and-mouse game between the deepfake AIs and the detection AIs. The danger is that we trust videos in a way that we don't trust, say, handwritten letters saying the same thing, because we instinctively know that the latter is easy to fake. We just have to mentally adapt to a new world where video evidence is no longer reliable unless backed up by other means."

- Debayan Gupta, Assistant Professor of Computer Science, Ashoka University
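One of the blink-based cues Gupta alludes to can be made concrete with a toy example. A common heuristic in blink detection (and, by extension, in flagging early deepfakes with unnatural blink rates) is the eye aspect ratio (EAR): the ratio of an eye's vertical openings to its horizontal width, computed from six eye landmarks. The sketch below is illustrative only - the landmark coordinates are made up, not output from a real face-landmark detector, and the technique is one example of a detection cue rather than any specific system mentioned in this article.

```python
from math import dist

def eye_aspect_ratio(landmarks):
    """Eye aspect ratio (EAR) from six eye landmarks.

    landmarks = [p1..p6] ordered around the eye: p1 and p4 are the
    horizontal corners, while (p2, p6) and (p3, p5) are the upper and
    lower lid pairs. EAR drops sharply when the eye closes, so an
    unnaturally low blink rate across a video is one red flag a
    detection model can look for.
    """
    p1, p2, p3, p4, p5, p6 = landmarks
    vertical = dist(p2, p6) + dist(p3, p5)   # sum of the two lid gaps
    horizontal = dist(p1, p4)                # eye width
    return vertical / (2.0 * horizontal)

# Illustrative landmark coordinates (hypothetical, not from a detector):
open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]

print(eye_aspect_ratio(open_eye))    # → 1.0 (eye open)
print(eye_aspect_ratio(closed_eye))  # → 0.1 (eye effectively closed)
```

In practice, a detector would track EAR frame by frame, count dips below a threshold as blinks, and compare the blink rate to the human norm - exactly the kind of signal that, as Gupta notes, deepfake generators then learn to reproduce, restarting the cat-and-mouse cycle.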

Updated On: 2020-02-20T22:59:53+05:30