Meta, the company behind Facebook, Instagram, and WhatsApp, has reported dismantling over 20 covert influence operations globally in 2024, reaffirming its role as a frontline defender against online manipulation.
Russia remains the leading source of such activity, with Nick Clegg, Meta’s president of global affairs, describing the country’s persistent efforts to manipulate public opinion. However, fears that AI-driven disinformation would heavily skew election outcomes have not materialised this year, despite a record number of elections worldwide.
Persistent manipulation efforts
Meta’s security teams tackled a wide array of covert activities, including fake account networks and fabricated news websites designed to influence public discourse. Among the most prominent was a Russia-based operation targeting audiences in Georgia, Armenia, and Azerbaijan, as well as campaigns aimed at undermining Western support for Ukraine. These efforts extended to creating fake news sites under well-known brands like Fox News and the Telegraph to propagate narratives favourable to Russia.
Clegg revealed that Meta intercepted over 500,000 attempts to misuse its AI tools for generating misleading images of political figures such as Donald Trump, Kamala Harris, JD Vance, and Joe Biden in the month leading up to the US elections. Despite these challenges, AI tools were reportedly not used at scale to create deepfakes or conduct widespread disinformation campaigns, contrary to pre-election warnings.
Subtle influence on political discourse
While large-scale AI-enabled manipulation didn’t take centre stage, experts warned against complacency. Meta acknowledged that AI tools amplified disinformation in subtler ways, such as viral xenophobic memes and baseless rumours. For instance, claims that crowd images from a Kamala Harris rally were AI-generated, and rumours about Haitian immigrants eating pets, gained traction online, highlighting how AI can fuel divisive narratives.
Reports from the Centre for Emerging Technology and Security echoed these concerns, stating that AI-generated content played a role in amplifying existing disinformation during elections. Though the impact on specific results, such as Donald Trump’s election win, remains unclear, researchers cautioned that AI-enabled threats pose a growing risk to democratic systems.
Preparing for a future of synthetic content
Clegg noted that while the influence of AI fakery in 2024 was relatively modest, this is “very, very likely to change” as synthetic content becomes more prevalent in the coming years. Meta’s findings align with warnings from researchers about the potential for AI to increasingly shape political discourse and disrupt democratic processes as the technology evolves.
As Australia and Canada prepare for elections in 2025, Meta and other stakeholders are being urged to remain vigilant against the escalating risks of AI-enabled disinformation campaigns. With synthetic content becoming more sophisticated, the challenge of protecting democratic systems is only set to intensify.