Once a buoyantly exciting buzzword, the two letters ‘AI’ (artificial intelligence) have, in just a few short years, gained more notoriety and fear than the optimism they once inspired. Just ask Rashmika Mandanna, the famous actress, who woke up one morning this week to find a disturbing video of herself circulating on the internet, with her face morphed onto someone else’s body. This constitutes a gross violation of a person’s modesty and privacy, amounting to harassment of an individual that warrants legal action. Nor is this an isolated instance: deepfakes are circulating all over the digital world, used by various agencies to disseminate misinformation and fake news with the aim of promoting false agendas and propaganda and influencing public consciousness and conscience.

To the general public, AI might seem like a relatively new concept, but the technology has a much longer history than one might imagine. The first recognised AI work dates back to 1943, when Warren McCulloch and Walter Pitts proposed a model of artificial neurons. The field truly took shape in 1950, when Alan Turing, the English mathematician and a pioneer of computing, published Computing Machinery and Intelligence. In this seminal work, he introduced the Turing test, a measure of a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human.

The 1960s marked a golden era of optimism, when researchers concentrated on developing algorithms to solve mathematical problems. In 1966, Joseph Weizenbaum created the first chatbot, the famous ELIZA. In 1972, Japan witnessed the creation of the first intelligent humanoid robot, WABOT-1. Pop culture, too, began to register the thrilling prospects of this technology, with science fiction authors like Arthur C Clarke creating characters such as HAL 9000, the AI antagonist of the beloved Space Odyssey series, later adapted into cult films. HAL 9000 begins as the computer controlling the systems of a spacecraft and interacting with the astronauts on board; gradually it mimics human behaviour and intelligence, and eventually turns against the very astronauts it was built to serve. That eerie fictional narrative from more than half a century ago has become a real-time topic of conversation and intense debate in 2023.

After a brief lull in the 1970s, when governments cut funding for AI research, the field resurged in the 1980s with ‘expert systems’ that emulated human decision-making. The 1990s saw another slump, as high costs and underwhelming results dried up funding once more. The early 2000s brought small victories, such as robot vacuum cleaners, but the most pivotal shift came in 2006, when companies like Google, Facebook, Twitter and Netflix started incorporating AI, fundamentally changing how we view and consume media. This transformation was particularly noticeable during the pandemic, when people across the globe, confined to their homes, turned to social media as the ultimate source of real-time news, knowledge, data and constant entertainment. The addiction to short-form video content on every conceivable topic, from politics to humour, flooded our minds with dopamine, leaving us constantly craving more and permanently altering the media landscape.
Today, concepts like deep learning, big data and data science are trending prominently, with companies like Facebook, IBM and Amazon employing AI to create astonishing devices. The breakneck pace at which this technology is evolving has begun to evoke concern, with many tech and global leaders advocating for greater control, and possibly even a halt, to this rapidly advancing apparatus. At the recently held, aptly named UK AI Safety Summit, Elon Musk engaged in a discussion in London with Rishi Sunak, the Prime Minister of the United Kingdom. Musk warned that, in a future of abundance, AI could potentially replace all human jobs, leaving people struggling to find meaning in life. The two also delved into the potentially terrifying risks associated with frontier AI models, and Musk proposed that a ‘referee’ and an ‘off switch’ be built into these models so that they can be ‘safely deactivated’. The fact that we are developing models for which such measures may become necessary, or that summits on human safety in relation to technology are now being convened, is unsettling and, to say the least, ominous.

The other, more immediate challenge is the menace of deepfakes and the misinformation that has become so widespread today. Social media and AI have made it remarkably easy, quick and hassle-free for just about anyone, or any party, to disseminate information to vast numbers of people in record time, without any requirement for verification, authenticity or moderation. A deepfake is a video or audio clip created using AI or machine learning to produce a convincing likeness of a person, persons or a targeted community, making them appear to say or do things they never actually said or did, or depicting events that never occurred. These are not merely fakes knocked together by hackers; they are generated using a form of machine learning in which two neural networks are set against each other in an alternating back-and-forth battle of generation and detection. These networks are known as Generative Adversarial Networks (GANs): one network generates the fakes, while the other compares them against real examples and flags any discrepancies. The training data includes thousands of images and videos of the person or organisation being targeted, and a fake passes the test only when the detection network can no longer distinguish it from genuine material (a minimal illustrative sketch of this generate-and-detect loop appears after the list of risks below). AI plays a fundamental role in the creation of these deepfakes, as its algorithms generate synthetic images and sounds that are nearly indistinguishable from the real thing; AI algorithms are also employed to analyse and exploit face and voice recognition data in order to swap the faces and voices of their targets. As AI continues to evolve, the quality and naturalism of deepfakes will only improve, underscoring the importance of acknowledging their potential risks. The risks associated with deepfakes are manifold and severe:
- Misinformation and disinformation: Deepfakes can be used to spread false information about individuals by making it appear that they said something they didn’t. For example, in 2022, a deepfake video of Ukrainian President Volodymyr Zelensky circulated widely, in which he appeared to be asking his troops to surrender.
- Reputational damage to individuals or organisations: For instance, pornographic content featuring public figures can be created by superimposing their faces onto other bodies. A programmer recently released an easy-to-use app called DeepNude, which allowed users to take a picture of a fully clothed woman and create non-consensual pornography by digitally removing her clothes. Although the app was eventually taken down, the possibilities are alarming.
- Invasion of privacy: Criminals can create deepfake audio of a person’s voice and use it to gain access to an individual’s private details.
- Psychological harm: Deepfakes can be used by antisocial elements to target individuals and spread false information about them, causing psychological distress. As Rashmika Mandanna pointed out, she is an established actress with a voice and the means to defend herself now, but had this morphing incident happened to her when she was young and in college, it would have been devastating.
- Incitement of hatred or violence: Deepfake videos of religious, political or public figures making divisive statements, or fabricated footage carrying false information, can be used to incite hatred, potentially resulting in acts of violence or even genocide. This is particularly concerning in countries like India, where communal and religious tensions run consistently high. For example, a recent video with 29 million likes on YouTube depicted a man attacking a figure wearing a burkha and carrying a child. The burkha is then forcibly removed to reveal a man, implied to be a conman using the burkha as camouflage for criminal activities. It doesn’t take much imagination to envision the chain of events that videos like these might trigger.
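For the technically curious, here is the minimal sketch of the generate-and-detect loop promised above, written in Python with the PyTorch library. It is a toy illustration trained on random stand-in data, not the method behind any real deepfake tool; the layer sizes, learning rates and batch size are assumptions chosen purely for brevity.

```python
# A toy Generative Adversarial Network (GAN): a generator learns to produce
# fakes while a discriminator (the "detection network") learns to tell them
# apart from "real" samples. All sizes and settings are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM, BATCH = 64, 784, 32  # 784 = a flattened 28x28 image

# Generator: maps random noise to a synthetic "image"
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores inputs as real (1) or fake (0)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_batch = torch.rand(BATCH, IMG_DIM)  # stand-in for real training images

for step in range(200):
    # 1) Train the discriminator: reward it for separating real from fake.
    fake_batch = generator(torch.randn(BATCH, LATENT_DIM)).detach()
    d_loss = (bce(discriminator(real_batch), torch.ones(BATCH, 1)) +
              bce(discriminator(fake_batch), torch.zeros(BATCH, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator: reward it only for fooling the discriminator.
    g_loss = bce(discriminator(generator(torch.randn(BATCH, LATENT_DIM))),
                 torch.ones(BATCH, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The essential point is the alternating objective: each network’s improvement forces the other to improve, which is why the fakes that emerge from real, large-scale versions of this loop become progressively harder to detect.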
Now that deepfakes and AI-generated videos and images have become a widespread phenomenon across the digital world, how can we protect ourselves and our collective society? This is not to say that the technology is entirely malevolent. In the realm of creativity, it offers numerous benefits. Samsung’s AI lab made the Mona Lisa smile and brought a portrait of Salvador Dali to life. Deep Video Portraits, developed by researchers at Stanford University and partner institutions, can manipulate not only facial expressions but a whole range of movements, such as 3D head rotation, eye gaze, blinking and other subtle actions, using generative neural networks. This could be of immense value to film studios and productions, especially when dubbing a film into other languages. Yet every researcher acknowledges that the risk of such technologies being misused in ways that harm society is, regrettably, imminent.

Unfortunately, this rapid rise of deepfakes, in a world already grappling with the risks of fake news, alerts us to a future in which it may become increasingly difficult to trust what we see or hear. The internet is rapidly becoming a murky realm of deception and distortion. Some suggest that social media companies should bear the responsibility of banning deepfakes and other AI-generated content, but distinguishing between security threats and harmless entertainment is likely to be an insurmountable challenge. The Pandora’s box is now open, and there is no closing it. In this world where technology mimics human capabilities, perhaps it is time to identify what sets us apart from robots and use the incredible gift of human consciousness to distinguish reality from illusion. Enjoy and entertain yourselves, good people, and may the light always prevail over darkness.

The author is a freelance journalist and features writer based out of Delhi. Her main areas of focus are politics, social issues, climate change and lifestyle-related topics. Views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect Firstpost’s views.