The 2024 White House contest faces a firehose of tech-enabled misinformation, from fake photographs of Donald Trump’s arrest to a video depicting a dismal future under Joe Biden, in what is widely characterized as America’s first AI election.

Campaigners across the US political spectrum are using powerful artificial intelligence tools, which many tech experts see as a double-edged sword. AI programs can instantly clone a political figure’s voice and generate video and text that appear so authentic that voters may fail to distinguish fact from fiction, weakening faith in the democratic process. At the same time, campaigns are likely to leverage the technology to improve operational efficiency in everything from voter database analysis to drafting fundraising emails.

In June, Florida Governor Ron DeSantis’s presidential campaign published a video purportedly showing former President Trump hugging Anthony Fauci, a favorite Republican punching bag throughout the coronavirus outbreak. AFP’s fact-checkers found the video used AI-generated images.

After Biden formally announced his reelection bid, the Republican Party in April released a video it said was an “AI-generated look into the country’s possible future” if he wins. It showed photo-realistic images of panic on Wall Street, China invading Taiwan, waves of immigrants overrunning border agents, and a military takeover of San Francisco amid dire crime.

Other campaign-related examples of AI imagery include fake photos of Trump being hauled away by New York police officers and video of Biden declaring a national draft to support Ukraine’s war effort against Russia.

‘Wild West’

“Generative AI threatens to supercharge online disinformation campaigns,” the nonprofit Freedom House said in a recent report, warning that the technology was already being used to smear electoral opponents in the United States.
“Purveyors of disinformation are employing AI-generated images, audio, and text, making the truth easier to distort and harder to discern.”

More than 50 percent of Americans expect AI-enabled falsehoods to affect the outcome of the 2024 election, according to a poll published in September by the media group Axios and business intelligence firm Morning Consult. About one-third of Americans said they will be less trusting of the results because of AI, according to the poll.

In a hyperpolarized political environment, observers warn such sentiments risk stoking public anger at the election process – akin to the January 6, 2021 assault on the US Capitol by Trump supporters over false allegations that the 2020 election was stolen from him.

“Through (AI) templates that are easy and inexpensive to use, we are going to face a Wild West of campaign claims and counter-claims, with limited ability to distinguish fake from real material and uncertainty regarding how these appeals will affect the election,” said Darrell West of the Brookings Institution.

‘Game changing’

At the same time, rapid AI advances have also made the technology a “game changing” resource for understanding voters and campaign trends at a “very granular level,” said Vance Reavie, chief executive of Junction AI.

Campaign staff previously relied on expensive consultants to develop outreach plans and spent hours drafting speeches, talking points and social media posts, but AI now makes the same work possible in a fraction of that time, Reavie told AFP.

But underscoring the potential for abuse, when AFP directed the AI-powered chatbot ChatGPT to create a campaign newsletter in favor of Trump, feeding it the former president’s false statements debunked by US fact-checkers, it produced – within seconds – a slick campaign document laced with those falsehoods. When AFP further prompted the chatbot to make the newsletter “angrier,” it regurgitated the same falsehoods in a more apocalyptic tone.
Authorities are scrambling to set up guardrails for AI, with several US states, such as Minnesota, passing legislation to criminalize deepfakes aimed at hurting political candidates or influencing elections.

On Monday, Biden signed an ambitious executive order to promote the “safe, secure and trustworthy” use of AI. “Deep fakes use AI-generated audio and video to smear reputations… spread fake news, and commit fraud,” Biden said at the signing of the order.

He voiced concern that fraudsters could take a three-second recording of someone’s voice and generate an audio deepfake. “I’ve watched one of me,” he said. “I said, ‘When the hell did I say that?’”