How generative AI could make existing election threats even worse

AI currently remains an amplifier of threats rather than a new root cause.

Interest and investment in AI have exploded since last fall’s public launch of ChatGPT, as the promise of generative AI technology has become widely apparent. Now, amid its accelerating adoption and integration, Americans are grappling with the implications of generative AI for a host of issues, including national security, labor markets, and education. As eyes turn to the 2024 election, the potential effects of this technology on our democracy have prompted predictions ranging from the anodyne to the apocalyptic.

We believe generative AI’s rapid advancements are poised to escalate existing threats to the 2024 election cycle, but are unlikely to introduce qualitatively new ones. AI merits attention and investment as a threat amplifier, but it must be kept in perspective, in light of both its short-term capabilities and the broader threat landscape facing the 2024 election.

What’s changed since 2020

The 2024 election will not be the first election the US administers since the advent of AI, but the AI landscape has evolved in two major ways since the 2020 election:

  • The increased sophistication of foundation models that have enabled applications such as generative AI; and 
  • The significantly broader audience with access to sophisticated generative AI, including open-source models.

As a result, it is now easier than ever for any user to produce high-quality synthetic content across format categories (video, image, audio, and text, including software code). In a growing number of cases, generative AI outputs are so realistic that they are indistinguishable from human-generated content to the naked eye (even to experts in computer science), a trend likely to accelerate as newer models are released over the next year.

What users can produce using generative AI depends in part on how they access AI models. For example, a user engaging with ChatGPT, through either its public interface or API, is subject to the platform’s policies and oversight. As concerns have mounted about bad actors using generative AI for harm, some platforms have implemented policies and safety guardrails that restrict how their models can be employed. Not all models are subject to such restrictions, however. Open-source models (such as Meta’s LLaMA-2 and its derivatives) are widely available and give users far more agency over how a model is fine-tuned and used, including in ways that circumvent pre-existing guardrails.
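To make this distinction concrete, here is a minimal, hypothetical sketch of the two access modes. It assumes the OpenAI Python client (v1+) and Hugging Face’s transformers library; the model names, prompt, and setup are illustrative, not a description of any actor’s actual workflow.

```python
# Illustrative sketch only: hosted-API access runs through the provider's
# servers (where usage policies and moderation can be enforced), while an
# open-source model runs on the user's own hardware, outside any
# platform's oversight.

# 1) Hosted access: every request passes through the provider.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4",  # hypothetical model choice
    messages=[{"role": "user", "content": "Summarize absentee voting rules."}],
)
print(reply.choices[0].message.content)

# 2) Local open-source access: weights are downloaded and run locally,
#    so built-in guardrails can be fine-tuned away or bypassed.
from transformers import pipeline

generate = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")
print(generate("Summarize absentee voting rules.", max_new_tokens=200)[0]["generated_text"])
```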

Why advancements in AI matter: a case study

During Kentucky’s 2019 gubernatorial election, a Twitter user posted about shredding a box of Republican ballots. The claim was widely circulated and contributed to mistrust in the election’s results. At the time, generative AI was still in its infancy, and the tweet was not accompanied by images or video. Ultimately, it had limited impact.

Today, generative AI is far more capable. Compare the quality of synthetic images created using an early 2021 model (one of the earliest available) with those from a model released in late 2023:

A comparison of text-to-image technology for similar prompts. The image on the left was generated using the DALLE-mini model (comparable to what was available in early 2021), while the image on the right was made with SDXL, a modern, high-resolution version of Stable Diffusion. The image on the right, while not fully realistic, is far more believable. We expect that by next year’s general election, image models will be even more advanced.

Both images above were generated with just a few minutes of work, using free online tools. By the time the 2024 campaign season is in full swing, we expect it will be even easier to generate higher-quality synthetic images.
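Hosted web interfaces require no code at all; for a sense of how little effort the underlying open-source tools demand, here is a minimal sketch of local image generation with SDXL via the Hugging Face diffusers library. The checkpoint name and prompt are illustrative.

```python
# Minimal sketch: generating a synthetic image locally with Stable
# Diffusion XL through the open-source diffusers library.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
)
pipe.to("cuda")  # requires a GPU; CPU inference works but is slow

# A hypothetical prompt in the spirit of the 2019 Kentucky example above.
image = pipe("a shredded pile of paper ballots in a cardboard box").images[0]
image.save("synthetic_ballots.png")
```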

Imagine if the troll from 2019 had access to such a high-quality synthetic image to augment the tweet: would its impact have been greater?

What generative AI’s advances mean for the 2024 election

The 2024 election faces a threat landscape that combines long-standing election-administration vulnerabilities with new and worsening trends. It is within this broader threat landscape that generative AI’s impact should be situated and evaluated, as an accelerant and amplifier. Several factors may make the 2024 election cycle especially susceptible to AI-enabled threats:

  • The online information ecosystem has splintered: Today’s online information ecosystem has become increasingly fragmented with the entry of new and niche platforms. Short-form video (which can be more difficult to analyze than text and images) is increasingly dominant online. Meanwhile, the status of platforms’ election safeguards remains largely uncertain; in some cases, safeguards have been rolled back following a season of layoffs that hit trust and safety teams. This could prove a ready environment for the unchecked distribution of synthetic content spreading election disinformation.
  • The Liar’s Dividend: The Liar’s Dividend is the concept that the proliferation of mis- and disinformation allows political and other bad actors to capitalize on voters’ growing distrust, leading voters to discount even true information as fake. In past cycles, audiences have been less likely to question the authenticity of audio and image-based content; the proliferation of AI-produced synthetic content will change that.
  • Geopolitical developments: US foreign policy, particularly its posture toward Ukraine, Israel, and Taiwan, appears likely to depend heavily on the outcome of the election. Adversarial nations such as Russia, Iran, and China may therefore be strongly incentivized to interfere in the US election to sway the outcome in their favor. AI-generated content is a demonstrated part of their election-interference playbooks and will likely play a significant role in their influence operations.

When imagining AI-enabled election threats, a commonly envisioned scenario is the “fake October surprise”: a convincing generative AI-produced deepfake distributed in the final days of the campaign to change votes and, ultimately, the outcome of the election.

However, there is a broad range of ways synthetic content could be applied to amplify election threats before, during, and after Election Day. We believe AI will be most disruptive when used to target existing vulnerabilities in election operations or voter engagement by scaling tried-and-tested interference playbooks. These playbooks include disinformation campaigns, voter suppression, harassment of election officials, cyberattacks on election infrastructure, and post-election disruption. While none of these forms of interference is novel, generative AI can enhance the plausibility and effectiveness of the content used to bolster these efforts across content categories. For example, it could be used to create misleading records of ballot tabulation or certification (text), deepfakes that manipulate publicly available ballot drop box surveillance footage (image/video), or robocalls that deliver inaccurate polling instructions to voters (audio).

With that in mind, it is critical not to over-index on any one application of generative AI, but instead to recognize its potential utility wherever the speed, scale, or quality of content could enhance interference efforts.

The time remaining to mitigate AI’s effects on the 2024 election

Opportunities to mitigate generative AI-enhanced election threats exist at every point in synthetic content’s lifecycle: from its production (placing guardrails on the queries to and outputs of generative AI models), to its distribution (digital provenance tools that track and identify both synthetic and authentic content), to its consumption (educating and preparing voters for an influx of election-interference-focused content). These strategies include both familiar playbooks and new approaches to bolster election security and safeguard voting.

With primaries beginning in less than two months and the general election a year away, the clock is ticking. Given the complex policy environment and shortened time frame for implementation prior to the election, it is incumbent on the pro-democracy movement to use our remaining time and capacity wisely — and cohesively.

Whether it’s equipping election officials with training on how to prepare for generative AI, empowering campaigns and journalists to use digital signatures in their communications (see the sketch below), or optimizing pre-bunking and counter-messaging for targeted communities of voters, we have strategies, new and old, to mitigate the harms that may be amplified by AI.

Executing them to maximum effect will require a coordinated effort, one that’s squarely focused on AI’s short-term capabilities, situates its potential impact within the larger election landscape, and prioritizes strategies that can be most impactful with the time we have.
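As one concrete illustration of the digital-signature strategy mentioned above, here is a minimal sketch using the Python cryptography package and an Ed25519 keypair. Real-world provenance standards (such as C2PA) are considerably more involved; this shows only the core sign-and-verify primitive, and the key handling and message are hypothetical.

```python
# Minimal sketch of signing a communication so recipients can verify it.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # held by the campaign or newsroom
public_key = private_key.public_key()       # published so anyone can verify

statement = b"Official statement: polls are open 6am-6pm on Election Day."
signature = private_key.sign(statement)

# A voter or platform checks that the statement came from the keyholder
# and was not altered in transit.
try:
    public_key.verify(signature, statement)
    print("Signature valid: content is authentic and unmodified.")
except InvalidSignature:
    print("Signature invalid: content may be forged or tampered with.")
```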

About the Authors

Nicole Schneidman

Technology Policy Strategist

Nicole Schneidman is a Technology Policy Strategist at Protect Democracy working to combat anti-democratic applications and impacts of technology, including disinformation.

Christian Johnson

Machine Learning Engineer & Policy Strategist, VoteShield & Elections

Christian Johnson is a Machine Learning Engineer and Policy Strategist with Protect Democracy’s VoteShield team.
