A deepfake video of actor Rashmika Mandanna entering a lift wearing a low-cut top has rung alarm bells in India. The viral footage, manipulated using artificial intelligence (AI), has triggered calls for legal action against the culprits. Now, there are reports that morphed pictures of actor Katrina Kaif's fight scene from her upcoming flick Tiger 3 have spread online.

The incident has also put a spotlight on the dangers of deepfakes, particularly their increasing use to harm women. Acknowledging the growing menace of this technology, Union Minister of State for Electronics and IT Rajeev Chandrasekhar said on X recently, "Deepfakes are the latest and even more dangerous and damaging form of misinformation and need to be dealt with by platforms".

How big is the deepfake problem across the world and what is being done to deal with it? Let's take a closer look.

The deepfake menace

Deepfakes use artificial intelligence to produce or manipulate media, such as images, videos or audio, in a way that makes it difficult to verify their authenticity. As per The Guardian, deepfakes originated in 2017 when a Reddit user going by the same name posted manipulated porn clips on the site, superimposing the faces of famous female celebrities onto porn stars.

However, deepfakes are not always made with malicious intent and can have several benefits, such as producing realistic visual effects in movies and shows or generating engaging educational videos. But as various reports have pointed out for years now, the technology is largely being used to generate non-consensual pornographic videos. With AI tools becoming more accessible, it has also become easier and cheaper to produce non-consensual sexually explicit content.

The 2023 State of Deepfakes report says that adult content, targeting mostly women, accounts for 98 per cent of all deepfake videos online, reported Hindustan Times (HT). Public figures, especially those in the entertainment industry, are the most vulnerable to deepfake pornographic content. The report also states that the number of online deepfake videos has seen a 550 per cent spike since 2019, reaching 95,820.

Amid growing polarisation across the world, it is feared that deepfakes could be used on a large scale to stoke further tensions. AI-generated synthetic media can be used to commit crimes, pull scams, influence key events such as elections, harm someone's reputation and undermine trust in democratic institutions and the media.

[Caption: An AFP journalist views a video manipulated with artificial intelligence to potentially deceive viewers, or "deepfake", at his newsdesk in Washington, DC, on January 25, 2019. AFP File Photo]

DeepMedia, a company working on tools to detect synthetic media, has predicted that 500,000 video and audio deepfakes will be shared on social media sites across the world this year, Reuters reported.

Deepfake in India

Mandanna's is not the first case of a manipulated video being shared on social media. In fact, the 2023 State of Deepfakes report ranks India as the sixth most vulnerable country to deepfake pornographic content. South Korea tops the list, with its singers and actresses making up 53 per cent of those targeted in deepfake pornography. A concerning report by Boom has found that X, formerly Twitter, is strewn with pornographic deepfakes of Indian actresses.
Political parties and leaders across the world, including in India, have also started using AI-generated images or videos to target their rivals.

While India does not currently have laws that directly address deepfakes, there are legal provisions to deal with the problem. After the "extremely scary" Mandanna footage emerged, the Central government issued an advisory to social media platforms, highlighting India's laws. As per NDTV, the Ministry of Electronics and Information Technology has cited Section 66D of the Information Technology Act, 2000, which pertains to "punishment for cheating by personation by using computer resource".

Chandrasekhar has previously said that, under the IT rules notified in April this year, social media platforms are legally obliged to remove misinformation within 36 hours of it being reported by a user or the government.
PM @narendramodi ji's Govt is committed to ensuring Safety and Trust of all DigitalNagriks using Internet

Under the IT rules notified in April, 2023 - it is a legal obligation for platforms to

➡️ensure no misinformation is posted by any user AND

➡️ensure that when reported by… https://t.co/IlLlKEOjtd

— Rajeev Chandrasekhar 🇮🇳 (@RajeevRC_X) November 6, 2023
US

The United States is the second most susceptible nation to deepfake adult content, according to this year's State of Deepfakes report. Hollywood star Scarlett Johansson is taking legal action against an AI app that used her name and an AI-generated version of her voice in an advertisement without her permission, according to Variety. She has previously been a target of deepfake porn.

Concerns about AI-generated content spreading misinformation have also grown in the US ahead of the presidential elections next year. A startup launched a deepfake detection platform, DeepID, in August to detect "synthetic audio, video, text, and image manipulation." Such technologies are needed as US politicians are themselves using generative AI to boost their electoral campaigns.

Darrell West, senior fellow at the Brookings Institution's Center for Technology Innovation, told Reuters, "It's going to be very difficult for voters to distinguish the real from the fake. And you could just imagine how either (Donald) Trump supporters or (Joe) Biden supporters could use this technology to make the opponent look bad." "There could be things that drop right before the election that nobody has a chance to take down."

Recently, at the first-ever AI Safety Summit at Bletchley Park, the US, India, China, Japan, the United Kingdom, France and the European Union signed a declaration calling for global action against the potential risks of AI. Last week, US President Joe Biden issued an executive order requiring "developers of AI systems that pose risks to US national security, the economy, public health or safety to share the results of safety tests with the US government, in line with the Defense Production Act, before they are released to the public," reported Reuters. The Pentagon is also working alongside the biggest research institutions in the US to develop technology to detect deepfakes.

Other countries and tech

The UK's Online Safety Act, which came into force in October, criminalises the sharing and sending of deepfake porn. England ranks fourth, after Japan, among the nations most vulnerable to deepfake adult content. Between April 2015 and December 2021, police there recorded over 28,000 reports of non-consensual sharing of private sexual images.

In January, China became the first country to regulate deepfakes, banning their production without user consent and mandating that all manipulated or altered content be marked as modified. South Korea has made it illegal to distribute deepfakes that could "cause harm to public interest," with offenders facing up to five years in prison or fines of up to 50 million won.

Large tech companies, including Meta, Google and Microsoft, have also ramped up efforts to design tools that detect AI-generated content.

With inputs from agencies