At a time when the state of Telangana has gone to the polls, the matter of deepfakes has risen again. The Bharat Rashtra Samithi (BRS), led by current Chief Minister K Chandrashekar Rao, has written to the Election Commission alleging that the Congress is using deepfake videos in an attempt to mislead voters. In the run-up to the Telangana elections, other deepfake videos and audio clips have also circulated on social media, confusing voters and the public.

And it’s not just Telangana. Multiple examples of Artificial Intelligence (AI)-generated images, videos and audio involving politicians have come to the fore during the ongoing election season. We take a closer look at these incidents, the concerns around them and the potential impact of deepfakes on politics.

Telangana’s brush with deepfakes

On Thursday (30 November), as voters queued up to cast their votes in the Telangana Assembly elections, the Congress handle on social media platform X shared a video depicting Telangana minister and BRS working president KT Rama Rao urging the public to vote against his party and in favour of the Congress.
Defeating KCR is my life’s goal. Stamp your vote only for the Congress. #ByeByeKCR pic.twitter.com/PnJBSnkiL1

— Telangana Congress (@INCTelangana) November 30, 2023
The BRS has escalated the matter to the Election Commission, complaining that the Telangana Pradesh Congress Committee (TPCC) is “involved in the creation and dissemination of fake audio and video content through the use of deepfake technology and artificial intelligence.” The ruling party has sought the EC’s immediate intervention in the matter in order to “prevent any further harm to the fair conduct of the election.”
The BRS party has filed an official complaint with the Election Commission of India and the Union Ministry of Home Affairs against the Telangana Pradesh Congress Committee (TPCC) for illegally using deepfake technology to sling mud at BRS leaders in the Telangana Assembly elections and to mislead voters… pic.twitter.com/Ty3XPl2bxK
— BRS Party (@BRSparty) November 30, 2023
KTR has also been quoted as telling News18, “Congress is a deep fake party.” Just days earlier, KTR had warned party cadres and supporters to be vigilant against the Congress’ deepfake “propaganda” videos. “There will be many false/deepfake videos and other forms of nonsensical propaganda over the next few days from Scamgress scammers,” KTR had said last Friday (24 November).

KTR isn’t the only victim of deepfakes this Telangana election season. Earlier, a deepfake video circulated showing BRS leader and Telangana minister Ch Malla dancing and saying that “you will get jobs if you vote for KCR”.

[Image caption: YSR Telangana Party chief YS Sharmila has also been a victim of the deepfake phenomenon. A video showed her on a hospital bed, cautioning people about the impact of nicotine. File image/PTI]

Another deepfake video that went viral in the run-up to the Telangana elections showed YS Sharmila, chief of the YSR Telangana Party, lying on a hospital bed, cautioning against tobacco use. A third AI-generated video shows TPCC chief A Revanth Reddy making disparaging remarks about the BRS.

Other deepfakes in Indian politics

The use of deepfakes in politics isn’t restricted to Telangana. As recently as 17 November,
Prime Minister Narendra Modi himself became a victim of a deepfake video ahead of the Madhya Pradesh Assembly elections. He said he had come across a video depicting him playing garba. “I myself admired how well it was made,” he said. “But I never got the chance [to play garba] since my school days.” In
Madhya Pradesh, videos also emerged that used clips from the popular TV show Kaun Banega Crorepati. The clips show the Amitabh Bachchan-hosted quiz show asking questions about Madhya Pradesh politics to whip up anti-incumbency sentiment among viewers. Earlier, in April, K Annamalai, Tamil Nadu head of the Bharatiya Janata Party (BJP), released a controversial audio recording of Palanivel Thiagarajan, a DMK leader and the state’s finance minister. In it, Thiagarajan can be heard accusing his own party members of illegally amassing Rs 30,000 crore. Thiagarajan vehemently denied the veracity of the recording, calling it “fabricated” and “machine-generated.”

[Image caption: The number of deepfakes on the internet is rising, and politicians and celebrities are often the victims. Image used for representational purposes/Pixabay]

Experts speak

Deepfakes have become a huge talking point in India ever since the Rashmika Mandanna video went viral. Following her, there were instances of other celebrities such as
Katrina Kaif, Alia Bhatt and Kajol, who fell prey to the phenomenon. With instances of the trend mounting, IT Minister Ashwini Vaishnaw dubbed deepfakes a “threat to democracy”.

Even law-enforcement agencies struggle with deepfakes. Police say the advent of deepfake applications has made it harder to differentiate between genuine and manipulated content, leading to confusion in the run-up to the elections. Hyderabad cybercrime ACP Shiva Maruti said: “The rise of AI has ushered in a new era of challenges for us. These AI-generated videos and deepfake apps are a potential threat to the integrity of the electoral process. We are taking this matter seriously and working closely with cyber experts to identify and counter such fake content.”

In fact, the 2023 State of Deepfakes report recently found that India is the sixth most vulnerable country to deepfake pornographic content. South Korea tops the list, with its singers and actresses making up 53 per cent of the people targeted in deepfake pornography.

Experts have long warned that deepfakes and the use of AI in elections could have a negative impact on polls. As UVA cyber privacy expert Danielle Citron noted in a 2019 paper, “AI makes it possible to create audio and video of real people saying and doing things they never said or did.
“The potential to sway the outcome of an election is real, particularly if the attacker is able to time the distribution such that there will be enough window for the fake to circulate but not enough window for the victim to debunk it effectively (assuming it can be debunked at all),” she wrote.

Deepfakes also create the problem of the “liar’s dividend,” in which someone legitimately caught doing wrong invokes the existence of deepfakes to deny their actions. A sceptical public is primed to doubt the authenticity of real audio and video evidence, and that scepticism can be invoked just as readily against authentic content as against adulterated content.

Carah Ong Whaley, academic program officer for the UVA Center for Politics, succinctly explained the dangers of deepfakes and AI in elections. She was quoted by the University of Virginia as saying, “Doctored photos and video footage, candidates’ comments taken out of context; it has already been used for decades in campaigns. What AI does is dramatically increase the scale and proliferation, leaving us numb and, hopefully, questioning everything we see and hear about elections. For some voters, exposure to certain messages might suppress turnout. For others, even worse, it could stoke anger and political violence.”

And in India, it could be even worse. Digvijaya Rana, a lecturer at Jindal Global University, points out that India has the cheapest data rates in the world. That makes creating and sharing such media affordable, and India all the more vulnerable.

With inputs from agencies