The AI problem keeps getting bigger with each passing day. In the latest development, a multinational company based in Hong Kong fell victim to a staggering $25.6 million (~Rs 200 crore) scam that involved deepfake technology. A finance worker at the company was tricked using a multi-person video conference in which every participant except the victim was generated using AI, Hong Kong police said at a briefing on Friday.

This alarming incident highlights the widespread danger that deepfake technology poses. Here is all we know about what may be the biggest deepfake scam yet.

How the scam unfolded

It was a sophisticated scam that took almost a week to complete, demonstrating how meticulous the perpetrators were. In mid-January, the victim, an employee in the company's finance department, received a phishing email purporting to be from the company's UK-based CFO, instructing them to carry out a secret transaction. After making contact, the scammers kept in touch with the employee via emails, video calls, and instant messaging platforms.

Despite an initial "moment of doubt", the employee succumbed to the scheme after participating in a group video conference. "(In the) multi-person video conference, it turns out that everyone (he saw) was fake," senior superintendent Baron Chan Shun-ching told the city's public broadcaster RTHK.

The report claims that the perpetrators used deepfake technology on publicly accessible video and audio footage to create lifelike versions of the company's employees, including a digitally cloned chief financial officer. The fraudsters delivered a scripted self-introduction, gave orders, and then abruptly ended the meeting.

A deepfake is a kind of synthetic media that mimics real-world information by manipulating or creating audio and visual elements through artificial intelligence, frequently with malicious intent, explained NDTV.
Because the deepfake representations of company employees appeared real, the victim followed the instructions and transferred $25.6 million in total to five different Hong Kong bank accounts. The deception involving the fake CFO was uncovered only when the employee followed up with the company's headquarters. Hong Kong authorities withheld the worker's identity and the details of the company.

A comprehensive investigation underway

Following the incident, law enforcement officials launched a thorough investigation to identify and capture those responsible for the daring scam. The force further stated that its cybercrime team was handling the case, which had been categorised as "obtaining property by deception" following an initial investigation. The investigation is ongoing, and no one has been taken into custody yet, reported The Guardian.

When confronted with suspicious payment requests, people should proceed cautiously and use verification methods, The Economic Times quoted senior inspector Tyler Chan Chi-wing as saying. Additionally, to prevent consumers from falling victim to similar fraudulent schemes in the future, efforts are under way to strengthen the alert system and expand its coverage to include Faster Payment System transfers.

Similar cases in Hong Kong

Hong Kong police said at Friday's news conference that they have made six arrests related to such scams. According to CNN, which quoted Chan, between July and September last year eight stolen Hong Kong identity cards, all of which had been reported missing by their owners, were used to apply for 90 loans and register 54 bank accounts. Officials said AI deepfakes had been used at least 20 times to deceive facial recognition programmes by impersonating the individuals pictured on the identity cards.
Growing concern about AI deepfake technology

The complexity of deepfake technology and the malicious uses to which it can be put are concerning authorities worldwide. When ChatGPT was launched at the end of 2022, it made AI technology widely available at low cost and, at the same time, sparked a race among almost all the tech giants (as well as several startups) to produce improved models.

Some experts have been pointing out for months the risks and direct threats brought about by the recent spread of AI. These include increased socioeconomic inequality, economic upheaval, algorithmic discrimination, misinformation and disinformation, political instability, and the emergence of a whole new phase of fraud, according to The Street.

Incidents involving AI-generated deepfake images and frauds have grown over the past year. At the end of January this year, fake sexually exploitative images of Taylor Swift went viral on social media. Before that, a fake version of US President Joe Biden's voice was used in robocalls to voters in New Hampshire. Last year, actors Rashmika Mandanna, Katrina Kaif, and Kajol fell victim to deepfake technology.

According to The Street, a mother received a phone call in January 2023 from scammers who used AI and claimed to have abducted her daughter, demanding a $1 million ransom. Her daughter was safe at home in bed, but her screams, produced by AI, sounded entirely real.

In the first nine months of 2023, 113,000 deepfake videos were uploaded to the most prominent porn websites, according to research cited by Wired magazine.

Deepfake frauds are an increasing concern for Indians as well. In May 2023, CNBC quoted a McAfee survey as noting that nearly two-thirds of respondents could not distinguish between a real voice and an AI-generated one. Even worse, 83 per cent of victims acknowledged losing money as a result of these scams.
In January, the UK's cybersecurity agency issued a warning that AI was making it harder to recognise phishing emails, which deceive users into revealing their passwords or personal information.

How to stay safe from such frauds

The Conversation reports that consumers are revealing more and more personal information about themselves online. "This means the audio data required to create a realistic copy of a voice could be readily available on social media. But what happens once a copy is out there? What is the worst that can happen? A deepfake algorithm could enable anyone in possession of the data to make 'you' say whatever they want. In practice, this can be as simple as writing out some text and getting the computer to say it out loud in what sounds like your voice," the article stated.

The McAfee report suggested that a simple way to insulate yourself from such scams is to establish a codeword with close friends and family. A healthy suspicion of calls from unknown numbers is another way to stay safe. As per Moneycontrol, if you suspect you are being targeted, the best course is to cut the call immediately and ignore or block calls from such numbers.

According to NDTV, experts say that poor video quality is a sign of a deepfake. Such videos also tend to loop or halt abruptly. The easiest check, before transferring money, is simply to contact the person you suspect is being impersonated on another number or through other means.

With inputs from agencies