GenAI used in less than 1% of election-related misinformation in 2024, finds Meta study

FP Staff December 5, 2024, 16:40:43 IST

The study examined election-related posts across 40 countries, including key regions like India, the US, and the EU, and found that AI played a minor role in spreading misinformation during major elections in 2024.

The study’s findings suggest that fears of AI-generated disinformation disrupting elections may have been overstated, at least for now. Image Credit: Reuters

A recent analysis by Meta shows that generative AI played a minor role in spreading misinformation during major elections in 2024, contributing to less than one per cent of the flagged content on its platforms.

The study examined election-related posts across 40 countries, including key regions like India, the US, and the EU. Despite earlier fears of AI driving disinformation campaigns, Meta claims its existing safeguards effectively curtailed the misuse of AI-generated content.


Nick Clegg, Meta’s global affairs president, stated that while there were some instances of AI being used maliciously, the volume was low. He noted that the company’s policies and tools proved adequate for managing risks related to AI content on platforms such as Facebook, Instagram, WhatsApp, and Threads.

Cracking down on election interference

Beyond addressing AI-generated misinformation, Meta reported dismantling more than 20 covert influence campaigns aimed at interfering with elections. These operations, categorised as Coordinated Inauthentic Behaviour (CIB) networks, were monitored for their use of generative AI. While AI offered these networks some content-generation efficiencies, Meta concluded that it did not significantly increase the scale or impact of their campaigns.

Meta also blocked nearly 600,000 user attempts to create deepfake images of political figures using its AI image generator, Imagine. These included requests for fabricated images of prominent leaders like President-elect Trump and President Biden, underscoring the demand for stricter controls around AI tools during high-stakes events.

Lessons from the past

Reflecting on content moderation during the COVID-19 pandemic, Clegg admitted that Meta may have been overly strict in its approach, often removing harmless posts. He attributed this to the uncertainty of the time, but acknowledged that the company’s error rate in moderation remains problematic. These mistakes, he said, can unfairly penalise users and undermine the free expression Meta seeks to protect.

Generative AI: A contained threat for now

The study’s findings suggest that fears of AI-generated disinformation disrupting elections may have been overstated, at least for now. Meta’s proactive measures, including monitoring and policy enforcement, seem to have kept AI misuse in check.

However, the company acknowledges that balancing effective content moderation with user freedom remains a challenge. As AI tools become more advanced, Meta’s ongoing efforts to refine its approach will be critical in maintaining trust and integrity on its platforms.
