Deepfakes: Not yet sophisticated enough to impact the 2024 Elections?



Contrary to widespread fears and predictions, deepfakes and AI-generated content had a relatively minimal impact on the 2024 elections. A study examining 78 election deepfakes found that only half were actually deceptive in nature.

Extensive media coverage and public awareness campaigns effectively primed voters to be skeptical of potentially manipulated content: tens of thousands of news articles warned about deepfake risks, and public service announcements from law enforcement agencies and celebrities raised awareness. Technology companies took additional steps to protect election integrity.

This heightened awareness largely eliminated the element of surprise that could have allowed well-timed deepfakes to cause significant disruption.

Research indicates that deepfakes are not yet any more persuasive than other forms of misinformation. One study found that deepfakes could convince over 40% of a sample of false scandals, but they were no more effective than textual or audio misinformation.

Furthermore, it’s crucial to understand that deepfakes primarily serve to reinforce existing beliefs rather than alter opinions. Much like other types of disinformation, individuals who are inclined to believe certain narratives are more susceptible to being misled by false information that aligns with their preconceived, biased notions.

Current research indicates that most political persuasion methods have minimal impact. However, deepfakes possess the potential to sway enough voters to change the outcome of a tight election. Consider this: in 2024, Donald Trump secured the popular vote by a margin of just 1.47 percentage points. If deepfakes had swayed only 2 out of every 100 voters toward him, that alone could have been enough to alter the election's outcome, as the rough calculation below illustrates. This highlights the urgent need not to underestimate the influence of such technology on democratic processes.
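To see why even a small swayed share matters, here is a rough back-of-the-envelope sketch in Python. The 2% figure and the assumption that swayed voters would otherwise have backed the runner-up are illustrative, not measured data:

```python
# Back-of-the-envelope sketch (illustrative numbers, not official tallies):
# start from a ~1.47-point popular-vote margin and ask what happens if
# 2 out of every 100 voters who backed the winner had instead been
# persuaded the other way.

margin = 1.47   # winner's lead, in percentage points
swayed = 2.0    # share of all voters hypothetically swayed (percentage points)

# Each swayed voter both subtracts from the winner and adds to the
# runner-up, so the margin moves by twice the swayed share.
new_margin = margin - 2 * swayed

print(f"Original margin: +{margin:.2f} points")
print(f"Margin if 2% of voters flipped: {new_margin:+.2f} points")
# Original margin: +1.47 points
# Margin if 2% of voters flipped: -2.53 points  -> the lead is reversed
```

Even under the more conservative assumption that swayed voters would otherwise have stayed home (shifting the margin by roughly two points rather than four), the lead would still have been erased.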

While AI-generated content received significant attention, traditional forms of misinformation continued to pose a greater threat. The News Literacy Project found that "cheap fakes" not using AI were used seven times more often than AI-generated content in election misinformation. Basic editing techniques, out-of-context media, and even video game footage likewise remained effective vehicles for false information.

The feared widespread disruption from deepfakes failed to materialize in most 2024 elections internationally. In India, deepfakes were used more for trolling than for spreading false information, and in Indonesia AI was used to soften a candidate's image rather than to mislead. Overall, Meta reported that AI-driven tactics provided only "incremental productivity and content-generation gains" in influence campaigns.

While individual deepfakes may not have significantly swayed voters, the narrative surrounding AI's potential impact on elections has contributed to a broader erosion of trust: 23% of Brits no longer trust political content on social media, and only 29% trust content from verified sources such as official news outlets like the BBC.

Key takeaway: The mere existence of deepfakes lets bad actors discredit authentic media by falsely claiming it is AI-generated.

While deepfakes and AI-generated content remain a concern for future elections, their impact in 2024 was less severe than anticipated. Traditional forms of misinformation, coupled with the general erosion of trust in online information, proved to be more significant challenges to election integrity. Moving forward, a balanced approach focusing on media literacy, robust content moderation, and addressing broader disinformation tactics will be crucial in safeguarding democratic processes.


