Artificial Intelligence can be destructive. The most recent example is the viral deepfake video featuring actor Rashmika Mandanna.

In the video, which has been circulating on social media, Rashmika Mandanna can be seen entering a lift in a black outfit. However, the clip has been doctored. The video originally featured Zara Patel, a British-Indian influencer, but in the widely circulated version, her face has been morphed to look like Mandanna's. The clip has raised questions about how rapidly misinformation spreads and the role AI plays in it.

The video has prompted strong reactions from Mandanna, Union minister Rajeev Chandrasekhar and superstar Amitabh Bachchan. In a post on X, Mandanna described the video as “extremely scary”. “Something like this is honestly, extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused,” wrote the actor, who will next appear in Animal.
Acknowledging the video, Union Minister Rajeev Chandrasekhar reminded social media platforms of their legal obligation to fight misinformation. “PM @narendramodi ji’s Govt is committed to ensuring Safety and Trust of all DigitalNagriks using Internet,” he posted on X.
Amitabh Bachchan shared a viral thread on X about the video and deepfakes and said it was a “case of strong legal action”.

But what are deepfakes? And how can we stop them? We have some answers.

What are deepfakes?

Deepfakes are realistic-looking but fake images, voices or videos, according to Business Today. They are created through AI that uses deep learning – algorithms analysing and training themselves on vast quantities of data.

As a piece in The Conversation explained, “To create a realistic copy of someone’s voice you need data to train the algorithm. This means having lots of audio recordings of your intended target’s voice. The more examples of the person’s voice that you can feed into the algorithms, the better and more convincing the eventual copy will be.”

Deepfakes have been around for a while. Recall how actor Miles Fisher, who bears more than a passing resemblance to Tom Cruise, put out a series of deepfake videos on TikTok that garnered millions of views. Or how images that looked like actress Emma Watson went viral on Facebook and Instagram, and even the fake video of Ukraine’s Volodymyr Zelensky asking his countrymen to lay down their weapons in 2022, as per The Conversation. In the past, deepfakes of Jennifer Lawrence, Arnold Schwarzenegger, Mark Zuckerberg and ex-US president George W Bush also went viral.
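To make the “deep learning” part concrete: one well-known recipe behind early face-swap videos trains a single encoder on faces of both people and a separate decoder for each person; swapping a face then just means decoding one person’s frame with the other person’s decoder. The Python sketch below is a toy illustration of that idea only – the layer sizes, the LATENT constant and the make_decoder helper are illustrative stand-ins, and real tools train far larger networks on thousands of aligned face crops.

```python
# Toy sketch of the classic face-swap idea: a shared encoder learns a
# common representation of faces, and two decoders each learn to
# reconstruct one person. Illustrative only; assumes PyTorch is installed.
import torch
import torch.nn as nn

LATENT = 128  # size of the shared face representation (illustrative)

encoder = nn.Sequential(            # shared by both identities
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
    nn.Linear(512, LATENT),
)

def make_decoder():                 # one decoder per identity
    return nn.Sequential(
        nn.Linear(LATENT, 512), nn.ReLU(),
        nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()

# Training (not shown) teaches decoder_a to reconstruct person A's faces
# and decoder_b person B's, both from the shared encoding.
# The "swap": run a frame of person A through person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)      # stand-in for a video frame
swapped = decoder_b(encoder(frame_of_a))   # B's face, A's pose/expression
print(swapped.shape)  # torch.Size([1, 12288]) -> reshape to 3x64x64 to view
```

This is also why the amount of available footage matters so much: the more images of a face the networks see during training, the more convincing the reconstruction – the same point The Conversation makes about voice recordings.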
How does it work?

Deepfake fraud is when scamsters gather information on an intended victim – about their friends and family – and use it to target that individual, according to Business Today. The information, usually acquired via social media or by other means, legal or nefarious, is then fed to a deep-learning AI.
The scammers use that AI to duplicate the voices or even faces of their target’s friends and family, according to Moneycontrol. When it comes to videos, scammers use AI to map the facial movements of one person onto another. They then make phone calls and video calls on which they claim to be the said friends or family and convince their victims to transfer large sums of money.

For Indians, deepfake scams are a growing problem. CNBC in May quoted a McAfee survey as noting that nearly seven in ten respondents cannot distinguish between a real voice and an AI-generated one. “The survey reveals that more than half (69 percent) of Indians think they don’t know or cannot tell the difference between an AI voice and real voice,” the report stated.

Worse, 83 per cent of Indian victims admitted to losing money in such scams. “About half (47 percent) of Indian adults have experienced or know someone who has experienced some kind of AI voice scam, which is almost double the global average (25 percent). 83 percent of Indian victims said they had a loss of money- with 48 percent losing over Rs 50,000,” the report stated.

The report noted that Indians are also likely to fall for such scams. “Particularly if they thought the request had come from their parent (46 percent), partner or spouse (34 percent), or child (12 percent). Messages most likely to elicit a response were those claiming that the sender had been robbed (70 percent), was involved in a car incident (69 percent), lost their phone or wallet (65 percent) or needed help while travelling abroad (62 percent),” the report stated.

How can people stay safe from such fraud?

In this digital age, it’s growing harder and harder to stay safe. As The Conversation notes, people are increasingly sharing details of their lives online. “This means the audio data required to create a realistic copy of a voice could be readily available on social media. But what happens once a copy is out there? What is the worst that can happen? A deepfake algorithm could enable anyone in possession of the data to make ‘you’ say whatever they want. In practice, this can be as simple as writing out some text and getting the computer to say it out loud in what sounds like your voice,” the piece stated.

The McAfee report suggested a simple way to insulate yourself from such scams: establish a codeword with close friends and family. A healthy suspicion of calls from unknown numbers is another way to stay safe. As per Moneycontrol, if you suspect you are being targeted, cut the call immediately and ignore or block calls from such numbers.

According to NDTV, experts say that poor video quality is a sign of a deepfake. Such videos also tend to loop or halt abruptly. The easiest check – before transferring money – is to simply contact the person you suspect is being impersonated on another number or through other means.

With inputs from agencies
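For readers curious how the “looping video” red flag could be checked mechanically, here is a small Python sketch: it hashes each frame of a clip and counts exact repeats. It is emphatically not a deepfake detector – just a toy illustration of one warning sign the experts mention. The file name clip.mp4 is a placeholder, and OpenCV is assumed to be installed.

```python
# Toy heuristic for the "looping clip" red flag: hash every frame and
# report frames whose pixel content occurs more than once.
# Crude and illustrative only; assumes OpenCV (pip install opencv-python).
import cv2
import hashlib
from collections import Counter

def repeated_frames(path: str) -> int:
    """Count video frames whose pixel content occurs more than once."""
    cap = cv2.VideoCapture(path)
    seen = Counter()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Downscale before hashing; this is still an exact-match check,
        # so it only catches literal frame repeats.
        small = cv2.resize(frame, (64, 64))
        seen[hashlib.sha1(small.tobytes()).hexdigest()] += 1
    cap.release()
    return sum(n - 1 for n in seen.values() if n > 1)

if __name__ == "__main__":
    dupes = repeated_frames("clip.mp4")  # hypothetical file name
    print(f"{dupes} repeated frames - a possible sign of a looping clip")
```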