The Shortlist: Generative AI Platform Recommendations

Four steps generative AI platforms can take to prepare for the general election

1. Adequate Election Teams Resourcing

Adequately resource election teams, including related Trust and Safety, policy, legal, and operations teams, at least six months before the U.S. general election and maintain this resourcing through Inauguration Day.

2. Authoritative Voting and Election Information

Amplify accurate, authoritative content on the time, place, and manner of voting and election results for the remainder of the U.S. election season.


3. Disclosing Content Authenticity

Deploy one direct (user-facing) and one indirect (not user-facing) synthetic media transparency method for audio and visual synthetic content, and conduct public education so diverse audiences and end-users can distinguish AI-generated or AI-modified content.

4. Election Integrity Policies

Prohibit in API and business policies the use of services or models to interfere with the lawful conduct of elections, including spreading falsehoods concerning election laws or processes, or intimidating voters or election officials.


1. Adequate Election Teams Resourcing

No matter what form election protection takes at a platform, it relies on teams operating with election safety as a top priority leading up to and throughout election season, including Inauguration Day. This can include both internal teams and external partners, like third-party fact checkers or civil society organizations that offer public education on voting and election administration. Internally, these teams vary in size and function across platforms but are typically cross-functional and include fully staffed product teams (product managers, engineers, design and research managers) as well as content policy, partnerships, operations, legal, and communications managers.1Elections Integrity Best Practices, Elections Integrity Series Part 1, INTEGRITY INSTITUTE (May 17, 2023), https://integrityinstitute.org/s/Final-Elections-Best-Practices-Guide-Part-1_2023-05-24.pdf. Across these functions, team members may not all have specific election expertise or be exclusively dedicated to elections. However, at minimum, election leads, particularly in policy and partnerships functions, should be versed in U.S. election administration as well as the specific outlook and risks facing the 2024 cycle.

While individual platforms are best suited to determine what constitutes adequate resourcing for their teams, they should base this assessment on audits that evaluate how a platform could be used to produce or distribute election information.2Id. Platforms should prioritize resourcing based on the level of risk across the use cases they identify for creating or spreading election information, especially risks of voter suppression or physical violence. Sufficient resourcing includes staffing, budget, and tooling, and should ensure platforms can execute robust on-platform monitoring. Finally, adequate resourcing should account for peak moments in the cycle that will pose elevated risks and require surge capacity and oversight.

Election teams vary in the degree to which they are centralized or dispersed within an organization. Regardless of their form or where they’re housed, teams must have a documented understanding of roles, namely the key decision makers at critical junctures, including among a platform’s executives and C-suite, legal counsel, and operations managers. This understanding should be paired with replicable, documented processes that election teams and decision makers can use to quickly assess and respond to emerging threats. While election teams should engage in thorough red teaming or threat scenario planning to inform their preparations, they will inevitably encounter novel situations during the 2024 cycle. When presented with these situations, election teams must make difficult decisions on a compressed timeline, which will rely on clear escalation channels and consistent, documented communication.3Id.

2. Authoritative Voting and Election Information

All three categories of platforms should prioritize ensuring their users have consistent access to authoritative information on voting and the 2024 election’s administration for the full duration of the cycle, through Inauguration Day. This information should cover all stages of voting as well as election results. Platforms would be wise to rely on partnerships with official election authorities or civil society organizations to equip users with vetted information from authoritative sources.

There is a range of formats and channels that platforms can use to equip users with authoritative election information. For example, social media platforms can amplify such information, whether in-feed or through recommendation surfaces, or prominently display an in-product election hub.4In 2020, Twitter introduced an election hub “to help Americans prepare for the most uncertain election in modern U.S. history.” Taylor Hatmaker, Twitter Debuts U.S. Election Hub to Help People Navigate Voting in 2020, TECHCRUNCH (Sept. 15, 2020, 1:00 PM), https://techcrunch.com/2020/09/15/twitter-election-hub-voting-tools. Messaging platforms, regardless of whether they employ end-to-end encryption, can ensure users have the option to engage with dedicated chatbots to fact check information or access authoritative election FAQs.5One example of a chatbot-enabled, fact-checking tipline program on an encrypted messaging platform is Meedan’s election fact-checking programs on WhatsApp. Elections: Verified Content for the Voting Public, MEEDAN (last visited Feb. 27, 2023), https://meedan.com/programs/elections. Finally, generative AI platforms can direct users to authoritative sources of information in response to relevant queries and, at minimum, should train models to refuse to answer election-related queries for which they cannot consistently and accurately provide authoritative information.6OpenAI announced a partnership with the National Association of Secretaries of State that will ensure ChatGPT users are directed to CanIVote.org if they pose election-related procedure questions. OpenAI, How OpenAI is Approaching 2024 Worldwide Elections, OPENAI (Jan. 15, 2024), https://openai.com/blog/how-openai-is-approaching-2024-worldwide-elections#OpenAI.
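
To make the refuse-and-redirect behavior concrete, the sketch below shows how such a guardrail might sit in front of a model. It is a minimal illustration assuming a hypothetical chat pipeline; the keyword screen, function names, and placeholder generation path are ours, and a production system would use a trained classifier rather than keyword matching.

```python
# A minimal sketch of a refuse-and-redirect guardrail for election-procedure
# queries. All names here are illustrative assumptions, not any platform's
# actual implementation.

AUTHORITATIVE_SOURCE = "https://www.canivote.org"  # e.g., the NASS resource cited above

# Hypothetical screen for election-procedure topics; a stand-in for a classifier.
ELECTION_PROCEDURE_TERMS = (
    "register to vote", "polling place", "absentee ballot",
    "mail-in ballot", "voter id", "election results",
)

def is_election_procedure_query(prompt: str) -> bool:
    """Crude stand-in for a classifier that flags election-procedure questions."""
    text = prompt.lower()
    return any(term in text for term in ELECTION_PROCEDURE_TERMS)

def generate_model_response(prompt: str) -> str:
    """Placeholder for the platform's normal generation path."""
    return "...model output..."

def answer(prompt: str) -> str:
    if is_election_procedure_query(prompt):
        # Redirect rather than generate: the model cannot guarantee current,
        # jurisdiction-specific accuracy, so defer to an authoritative source.
        return ("For up-to-date information on voting in your state, please "
                f"visit {AUTHORITATIVE_SOURCE}.")
    return generate_model_response(prompt)

print(answer("Where is my polling place on Election Day?"))
```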

Across these delivery mechanisms, platforms should prioritize ensuring that information is accessible to a diverse American audience, including non-English-speaking communities. Platforms should also ensure the information they offer is digestible and timely, and provides sufficient context to help users situate the current moment within the broader electoral process.

3. Disclosing Content Authenticity

The anticipated proliferation of synthetic content in the U.S.’s election information ecosystem will require audiences, journalists, and distribution platforms, like social media and messaging platforms, to grapple in new ways with content authenticity. While not a silver bullet, generative AI platforms should employ synthetic media transparency methods, both direct (user-facing) and indirect (not user-facing) disclosure methods,7A growing number of terms are used to describe strategies to disclose whether content is synthetic. Here, synthetic media transparency methods is defined using the Partnership on AI’s Glossary for Synthetic Media Transparency Methods as “[t]he umbrella term used to describe signals for conveying whether a piece of media is AI-generated or AI-modified.” PAI Staff, Building a Glossary for Synthetic Media Transparency Methods, Part 1: Indirect Disclosure, PARTNERSHIP ON AI (Dec. 19, 2023), https://partnershiponai.org/glossary-for-synthetic-media-transparency-methods-part-1-indirect-disclosure. for their visual and audio content.8The recently announced Tech Accord to Combat Deceptive Use of AI in 2024 Elections includes provenance as one of its seven principal goals. A Tech Accord to Combat Deceptive Use of AI in 2024 Elections, AI ELECTIONS ACCORD (Feb. 16, 2024), https://www.aielectionsaccord.com/uploads/2024/02/A-Tech-Accord-to-Combat-Deceptive-Use-of-AI-in-2024-Elections.FINAL_.pdf. Alongside these disclosure methods, generative AI platforms should adopt policies that prohibit users from representing the output of a generative AI platform as not synthetic, which should apply to first- and third-party usage of models.9For example, Google’s Generative AI Prohibited Use Policy prohibits the “misrepresentation of the provenance of generated content” created by relevant Google services “by claiming content was created by a human … in order to deceive.” Generative AI Prohibited Use Policy, GOOGLE (March 14, 2023), https://policies.google.com/terms/generative-ai/use-policy.

There is not one form of synthetic media transparency that alone can address the challenges introduced by generative AI’s widespread availability. Therefore, we believe platforms should take a balanced, portfolio approach to disclosure. At minimum, platforms should employ at least one synthetic media transparency method that provides direct disclosure to end users to signal content that is AI-generated or AI-modified. This disclosure can take the form of content labels or overlays such as visible watermarks,10PAI Staff, supra note 35. but should be designed for the general public’s comprehension.

Unfortunately, direct disclosure methods, like visible watermarking, are unlikely to withstand circumvention by bad actors. As a result, generative AI platforms should also implement at least one indirect disclosure method for their audio and visual content, such as signed metadata or invisible watermarks. Rather than being user-facing, indirect disclosure methods signal to entities involved in content development and distribution, such as social media and messaging platforms, when a piece of content is AI-generated or AI-modified.
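
To illustrate how the two disclosure types differ in practice, the sketch below adds both to a generated image using the Pillow imaging library: a visible label as the direct disclosure and a machine-readable metadata flag as the indirect one. The unsigned metadata shown is a deliberately simplified stand-in for signed standards such as C2PA manifests, and the model name is hypothetical.

```python
# Simplified sketch of pairing direct and indirect disclosure on a generated
# image. Unsigned PNG text chunks are easy to strip; they stand in here for
# tamper-resistant approaches like C2PA signed metadata or invisible watermarks.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def add_disclosures(img: Image.Image, out_path: str) -> None:
    # Direct disclosure: a visible label an end user can see.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated", fill="white")

    # Indirect disclosure: machine-readable provenance for downstream
    # platforms (social media, messaging) rather than for end users.
    info = PngInfo()
    info.add_text("ai_generated", "true")
    info.add_text("generator", "example-model-v1")  # hypothetical model name
    img.save(out_path, pnginfo=info)

# Usage: a downstream platform can read the indirect signal back.
image = Image.new("RGB", (256, 256), "gray")  # stand-in for model output
add_disclosures(image, "disclosed.png")
print(Image.open("disclosed.png").text)  # {'ai_generated': 'true', ...}
```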

As generative AI platforms adopt direct disclosure synthetic media transparency methods, they should help audiences and end-users understand those disclosures’ significance.11The recently announced Tech Accord to Combat Deceptive Use of AI in 2024 Elections includes Public Awareness as one of its seven principal goals, specifically, “Engaging in shared efforts to educate the public about media literacy best practices, in particular regarding Deceptive AI Election Content, and ways citizens can protect themselves from being manipulated or deceived by this content.” AI ELECTIONS ACCORD, supra note 36. No matter their form, these methods are new to the American public, and robust digital literacy campaigns should accompany them. Platforms can use a combination of approaches, including funding programs with trusted intermediaries, in-product education, and cross-industry partnerships, to educate voters.

4. Election Integrity Policies

Legacy social media and messaging platforms have experienced one or more U.S. election cycles, but 2024 will be a testing ground for more recently launched generative AI platforms. As yet, generative AI platforms largely lack election-specific terms of service or usage policies analogous to those social media and messaging platforms have on the books.12See, e.g., Misinformation Policy Explainer, DISCORD (Oct. 24, 2023), https://discord.com/safety/misinformation-policy-explainer (prohibiting misinformation about “the integrity of a civic process — specifically, around issues that could delegitimize results or undermine faith in public institutions”); Civic and Election Integrity, TIKTOK (March 2023), https://www.tiktok.com/community-guidelines/en/integrity-authenticity/#2 (prohibiting misinformation about the “laws, processes, and procedures that govern the organization and implementation of elections … ” as well as misinformation about the outcome of an election).

Having election-specific policies in place ensures generative AI platforms clearly and publicly convey the behaviors they will monitor and enforce. Naming election-related prohibited applications clarifies whether, for example, voter suppression or election subversion efforts will qualify under broad policies prohibiting “harmful” or “misleading” content.13Speechify’s Prohibited Uses of the Service, for example, includes the use of “the Services for any illegal, immoral or harmful purpose.” Terms & Conditions, SPEECHIFY (May 25, 2023), https://speechify.com/terms. The recently announced Tech Accord to Combat Deceptive Use of AI in 2024 Elections has acknowledged the importance of “providing transparency to the public … by publishing the policies that explain how we will address such content.”14AI ELECTIONS ACCORD, supra note 36. This is critical for platforms’ API or business service terms because abusing these offerings can result in the production and distribution of election-threatening synthetic content at scale.15Midjourney’s Terms of Service offers an example of a generative AI platform that has election-specific policy language in place, specifically prohibiting users from using the service “to try to influence the outcome of an election.” Terms of Service, MIDJOURNEY (Dec. 22, 2023), https://docs.midjourney.com/docs/terms-of-service. The election-specific policies we propose (bans on falsehoods concerning election laws, processes, or procedures, and on intimidating voters or election officials) are also consistent with U.S. law, which includes numerous provisions prohibiting interference with the right to vote and voter intimidation.16See, e.g., 18 U.S.C. § 241 (criminalizing interference with the right to vote); United States v. Mackey, 652 F. Supp. 3d 309 (E.D.N.Y. 2023) (§ 241 applies to a scheme to distribute false information about voting by text); 42 U.S.C. § 1985 (imposing civil liability for conspiracies to intimidate voters in federal elections).
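
To show where such policy language would operate in practice, the sketch below gates API requests on an election integrity check before any generation occurs. The violation signals, exception, and handler names are illustrative assumptions; actual enforcement would pair published policy text with trained classifiers, abuse monitoring, and human review.

```python
# Illustrative sketch of enforcing an election integrity policy at the API
# layer, before a request reaches the model. The signal list and names are
# assumptions for demonstration, not any platform's real enforcement logic.

VOTER_SUPPRESSION_SIGNALS = (
    "vote by text", "wrong election date", "polling place closed",
    "threaten election official",
)

class PolicyViolation(Exception):
    """Raised when a request violates the platform's election integrity policy."""

def enforce_election_policy(prompt: str) -> None:
    text = prompt.lower()
    for signal in VOTER_SUPPRESSION_SIGNALS:
        if signal in text:
            raise PolicyViolation(
                "Request blocked: generating content that interferes with "
                "the lawful conduct of elections violates the usage policy."
            )

def handle_api_request(prompt: str) -> str:
    enforce_election_policy(prompt)  # gate runs before any generation
    return "...model output..."      # placeholder for the generation call
```

Rejecting requests at the API layer matters because, as noted above, API and business offerings are where synthetic content can be produced and distributed at scale.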

About the Author

Nicole Schneidman

Technology Policy Strategist

Nicole Schneidman is a Technology Policy Strategist at Protect Democracy working to combat anti-democratic applications and impacts of technology, including disinformation.
