The Shortlist: Messaging Platform Recommendations

Discord and WhatsApp logos on a phone screen.

Four steps messaging platforms can take to prepare for the general election

1. Adequate Election Teams Resourcing

Adequately resource election teams, including related Trust and Safety, policy, legal, and operations teams, at least six months before the U.S. general election and maintain this resourcing through Inauguration Day.

2. Authoritative Voting and Election Information

Amplify accurate, authoritative content on the time, place, and manner of voting and election results for the remainder of the U.S. election season.


3. Reasonable Usage-Rate Limits

Establish usage-rate limits for commenting, inviting, messaging, sharing and forwarding features — particularly their usage by accounts and entities that are new, demonstrate suspicious activity, or relate to voting or elections — at least four months prior to the general election through Inauguration Day.

4. Heightened Enforcement on Inauthentic Networks

Prohibit coordinated inauthentic behavior using fake accounts and temporarily reduce the threshold for enforcing on borderline inauthentic account networks, at least four months prior to the general election through Inauguration Day.


1. Adequate Election Teams Resourcing

No matter what form election protection takes at a platform, it relies on teams operating with election safety as a top priority leading up to and throughout election season, including Inauguration Day. This can include both internal teams and external partners, like third-party fact checkers or civil society organizations that offer public education on voting and election administration. Internally, these teams vary in size and function across platforms but are typically cross-functional and include fully staffed product teams (product managers, engineers, design and research managers) as well as content policy, partnerships, operations, legal, and communications managers.1Elections Integrity Best Practices, Elections Integrity Series Part 1, INTEGRITY INSTITUTE (May 17, 2023), https://integrityinstitute.org/s/Final-Elections-Best-Practices-Guide-Part-1_2023-05-24.pdf. Across these functions, team members may not all have specific election expertise or be exclusively dedicated to elections. However, at minimum, election leads, particularly in policy and partnerships functions, should be versed in U.S. election administration as well as the specific outlook and risks facing the 2024 cycle.

While individual platforms are best suited to determine what constitutes adequate resourcing for their teams, they should base this assessment on audits that evaluate how a platform could be used to produce or distribute election information.2Id. Platforms should prioritize resourcing based on the level of risk they identify across those use cases for creating or spreading election information, especially risks of voter suppression or physical violence. Sufficient resourcing spans staffing, budget, and tooling, including the tooling needed to execute robust on-platform monitoring. Finally, adequate resourcing should account for peak moments in the cycle that will pose elevated risks and require surge capacity and oversight.

Election teams vary in the degree to which they are centralized or dispersed within an organization. Regardless of their form or where they’re housed, teams must have a documented understanding of roles — namely, the key decision makers at critical junctures — including amongst a platform’s executives and C-suite, legal counsel and operations managers. This understanding should be paired with replicable, documented processes that election teams and decision makers can use to quickly assess and respond to emerging threats. While election teams should engage in thorough red teaming or threat scenario planning to inform their preparations, they will inevitably encounter novel situations during the 2024 cycle. When presented with these situations, election teams must make difficult decisions in a compressed timeline, which will rely on clear escalation channels and consistent, documented communication.3Id.

2. Authoritative Voting and Election Information

All three categories of platforms should prioritize ensuring their users have consistent access to authoritative information on voting and the 2024 election’s administration for the full duration of the cycle, through Inauguration Day. This information should cover all stages of voting as well as election results. Platforms would be wise to rely on partnerships with official election authorities or civil society organizations to equip users with vetted information from authoritative sources.

There is a range of formats and channels that platforms can use to equip users with authoritative election information. For example, social media platforms can amplify such information, whether in-feed or through recommendation surfaces, or prominently display an in-product election hub.4In 2020, Twitter introduced an election hub “to help Americans prepare for the most uncertain election in modern U.S. history.” Taylor Hatmaker, Twitter Debuts U.S. Election Hub to Help People Navigate Voting in 2020, TECHCRUNCH (Sept. 15, 2020, 1:00 PM), https://techcrunch.com/2020/09/15/twitter-election-hub-voting-tools. Messaging platforms, regardless of whether they employ end-to-end encryption, can ensure users have the option to engage with dedicated chatbots to fact-check information or access authoritative election FAQs.5One example of a chatbot-enabled, fact-checking tipline program on an encrypted messaging platform is Meedan’s election fact-checking programs on WhatsApp. Elections: Verified Content for the Voting Public, MEEDAN (last visited Feb. 27, 2023), https://meedan.com/programs/elections. Finally, generative AI platforms can direct users to authoritative sources of information in response to relevant queries and, at minimum, should train models to refuse to answer election-related queries for which they cannot consistently and accurately provide authoritative information.6OpenAI announced a partnership with the National Association of Secretaries of State that will ensure ChatGPT users are directed to CanIVote.org if they pose election-related procedure questions. OpenAI, How OpenAI is Approaching 2024 Worldwide Elections, OPENAI (Jan. 15, 2024), https://openai.com/blog/how-openai-is-approaching-2024-worldwide-elections#OpenAI.
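
As a rough illustration of that last point, the sketch below shows one way a generative AI platform might intercept election-procedure queries and answer with a pointer to an authoritative source rather than a generated response. It is written in Python with a simple keyword check standing in for a real intent classifier, and it assumes CanIVote.org (the resource referenced in the OpenAI example) as the destination; the names and logic are illustrative, not any platform’s actual implementation.

```python
# Hypothetical sketch: route election-procedure queries to an authoritative
# source instead of a generated answer. A simple keyword check stands in for
# whatever intent classifier a real platform would use; names are illustrative.

ELECTION_PROCEDURE_TERMS = {
    "where do i vote", "polling place", "voter registration",
    "absentee ballot", "mail-in ballot", "election day hours",
}

AUTHORITATIVE_SOURCE = "https://www.canivote.org"  # example authoritative resource


def is_election_procedure_query(query: str) -> bool:
    """Rough stand-in for an election-intent classifier."""
    normalized = query.lower()
    return any(term in normalized for term in ELECTION_PROCEDURE_TERMS)


def respond(query: str, generate_answer) -> str:
    """Redirect election-procedure queries to an authoritative source;
    fall back to the model's normal generation path for everything else."""
    if is_election_procedure_query(query):
        return (
            "For current information on the time, place, and manner of voting, "
            f"please consult your state or local election officials via {AUTHORITATIVE_SOURCE}."
        )
    return generate_answer(query)


if __name__ == "__main__":
    print(respond("Where do I vote in Ohio?", generate_answer=lambda q: "..."))
```

The design choice worth noting is the default: when a query touches voting procedure, the sketch declines to generate its own answer and redirects to an authoritative source rather than guessing.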

Across these delivery mechanisms, platforms should prioritize ensuring that information is accessible to a diverse American audience, including non-English-speaking communities. Platforms should also ensure the information they offer is digestible and timely and provides sufficient context to help users situate the current moment within the broader electoral process.

3. Reasonable Usage-Rate Limits

Usage-rate limits7See Ravi Iyer, A Concise Social Media Design Election Advocacy Guide for 2024, DESIGNING TOMORROW (Jan. 19, 2024), https://open.substack.com/pub/psychoftech/p/a-concise-social-media-design-election?r=2b7wo9&utm_campaign=post&utm_medium=email. place a ceiling on the number of times in a certain period any user can employ a specific platform feature like commenting, inviting, messaging, sharing or forwarding. In placing this ceiling, rate limits reduce the likelihood that bad actors, whether relying on bots or prolific human activity, can supercharge distribution of content or entities by abusively overusing a feature. In past U.S. elections, there has been a recurring dynamic of a small set of superusers having a significantly outsized role in producing and spreading harmful election-related content, including disinformation and calls for political violence.8For example, an internal analysis at Facebook determined that 0.3% of users were responsible for 30% of the group invites that resulted in the original “Stop the Steal” Facebook group growing to 360,000 members in 24 hours, with 2.1 million membership requests still pending when it was taken down. Similarly, internal research at Facebook found that one individual issued 400,000 invitations to QAnon groups in six months. See JEFF HORWITZ, BROKEN CODE 205, 219 (2023). These superusers have illustrated that social media and messaging platforms offer features that, when used at extreme outlier or “spammy” levels, can be vectors for manipulation. 

This recommendation suggests platforms implement rate limits that narrowly prevent extreme overuse. Establishing a reasonable, focused threshold for rate limits requires platforms to carefully balance tradeoffs with on-platform engagement while also recognizing how rate limits will impact both legitimate and manipulative usage of a feature. Actual implementation will vary by platform, but in practice, successful deployment would mean that a platform sets a targeted rate limit that only affects a small sliver of users’ “spammy” activity.9As described in Broken Code, the internal team at Facebook created to fight Dedicated Vaccine Discouragement Entities “set the goal of limiting the anti-vax activity of the top .001 percent of users — a group that turned out to have a meaningful effect on overall discourse.” Id. at 247. What’s more, rate limits do not ban accounts from ever using a feature – they prevent outlier usage for a defined duration, after which point an account can begin using that feature again.10For example, if a rate limit establishes a threshold such that no user can send more than 50 invitations to a group each day, an impacted user would not be able to send their 51st invitation in that twenty-four-hour period. Once the defined time period — a speed bump, so to speak — for the rate limit has passed, the user would again be able to send invitations to the group.
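
To make the mechanics concrete, here is a minimal sketch of a fixed-window rate limiter in Python, using the illustrative 50-invitations-per-day threshold from the example above. The class name, threshold, and in-memory counters are assumptions for illustration; a production system would persist state and tune limits per feature and per risk signal.

```python
import time
from collections import defaultdict

# Minimal sketch of a fixed-window usage-rate limit, using the illustrative
# threshold of 50 group invitations per user per 24-hour window. A real system
# would persist counters and tune thresholds per feature and per risk signal.

WINDOW_SECONDS = 24 * 60 * 60   # one day
INVITE_LIMIT = 50               # illustrative ceiling per window


class InviteRateLimiter:
    def __init__(self, limit: int = INVITE_LIMIT, window: int = WINDOW_SECONDS):
        self.limit = limit
        self.window = window
        # user_id -> [window_start_timestamp, invitations_sent_in_window]
        self._windows = defaultdict(lambda: [0.0, 0])

    def allow(self, user_id: str, now: float | None = None) -> bool:
        """Return True if the user may send another invitation right now."""
        now = time.time() if now is None else now
        window_start, count = self._windows[user_id]
        if now - window_start >= self.window:
            # The window (the "speed bump") has passed: reset and allow again.
            self._windows[user_id] = [now, 1]
            return True
        if count < self.limit:
            self._windows[user_id][1] = count + 1
            return True
        return False  # the 51st invitation in the same window is blocked
```

In this sketch an account that hits the ceiling is only paused, not banned: once the 24-hour window rolls over, allow() starts returning True again, matching the “speed bump” framing above.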

4. Heightened Enforcement on Inauthentic Networks

Coordinated networks of fake accounts or bots have been a hallmark of influence operations during past election cycles, including those led by foreign actors.11S. Rep. No. 116-290 at 18 (2020). Recent self-published reports from platforms demonstrate the extent to which this tactic is still in use by foreign actors on distribution platforms.12See Ben Nimmo et al., Third Quarter Adversarial Threat Report, META (November 2023), https://scontent-sjc3-1.xx.fbcdn.net/v/t39.8562-6/406961197_3573768156197610_1503341237955279091_n.pdf?_nc_cat=105&ccb=1-7&_nc_sid=b8d81d&_nc_ohc=ov1yoGD30OsAX9iqZSy&_nc_ht=scontent-sjc3-1.xx&oh=00_AfCIj9mm7ATOzgdpUm22xhiMZ8GfUvIkYgS5jkVal1ae-Q&oe=65E372D2.

In addition, researchers and monitors have highlighted how the widespread availability of generative AI has made those networks easier to manage and more convincingly human than ever before.13Though AI-generated fake profile pictures have been used since 2019, inauthentic accounts then relied largely on human labor, often in the form of troll farms. This meant that inauthentic networks could be detected by identifying patterns in both account behaviors and content, such as suspiciously coordinated messaging schedules, or frequently repeated phrases or spelling errors. In the era of generative AI, inauthentic networks can now be managed at scale, while avoiding some of these common signals of suspicious activity. For example, generative AI can be used to create many variations of the same message, while largely avoiding repetitive phrasing and spelling errors. In addition, AI chatbots have significantly changed the degree to which bad actors need human labor to manage an influence operation. See William Marcellino et al., The Rise of Generative AI and the Coming Era of Social Media Manipulation 3.0, RAND CORPORATION (Sept. 7, 2023), https://www.rand.org/pubs/perspectives/PEA2679-1.html. In recognition of the new state of play, it is essential that messaging platforms, regardless of whether they are end-to-end encrypted, have policies that prohibit inauthentic behavior, and specifically the coordinated use of fake accounts or entities.14Facebook’s Inauthentic Behavior Policy (which applies to messaging platforms Messenger and Instagram) is an example of such a policy. The policy specifically prohibits “Coordinated Inauthentic Behavior,” which is defined as “the use of multiple Facebook or Instagram assets, working in concert to engage in Inauthentic Behavior … where the use of fake accounts is central to the operation.” Inauthentic Behavior, META (April 25, 2022), https://transparency.fb.com/policies/community-standards/inauthentic-behavior. Platforms, both social media and messaging, that have adopted policies like these self-report that the resulting investigations, which focus on account behavior rather than content, have built resilience against threat actors attempting to use synthetic content in covert influence operations.15Nimmo, supra note 28 at 26.

In addition, starting at least four months prior to the U.S. general election through Inauguration Day, messaging platforms should reduce the threshold at which they take action on suspected inauthentic account networks. These thresholds should be based on behavioral signals that can be identified even on encrypted platforms, such as unusual spikes in account or messaging activity or rates of activity inconsistent with a human user (i.e., the rate at which messages are sent or typed).16Stopping Abuse, supra note 22 at 7. Platforms may also consider, where resources permit, training AI models to detect coordinated inauthentic behavior, using on-platform data to compare past and recent behavior of inauthentic account networks with the activity of typical human users.17Id. Recognizing that broadened enforcement may result in false positives, platforms should offer users in-product appeals channels to request review of enforcement decisions, as appropriate.
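
As a loose illustration of the kind of behavioral signal described above (and not any platform’s actual detection logic), the Python sketch below flags accounts whose sending rate is inconsistent with sustained human typing, using only message timestamps, metadata that remains visible to a platform even under end-to-end encryption. The thresholds and names are assumptions chosen for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch: flag accounts whose sending rate is inconsistent with a
# human user, using only message timestamps (no message content), which remain
# visible to the platform even under end-to-end encryption. Thresholds are
# assumptions chosen for illustration only.

HUMAN_MAX_MESSAGES_PER_MINUTE = 30   # hypothetical ceiling for sustained human activity
MIN_MESSAGES_TO_EVALUATE = 20        # avoid flagging on tiny samples


@dataclass
class ActivitySignal:
    account_id: str
    messages_per_minute: float
    flagged: bool


def evaluate_account(account_id: str, send_timestamps: list[float]) -> ActivitySignal:
    """Compute a simple sending-rate signal from message timestamps (in seconds)."""
    if len(send_timestamps) < MIN_MESSAGES_TO_EVALUATE:
        return ActivitySignal(account_id, 0.0, False)
    timestamps = sorted(send_timestamps)
    span_minutes = max(timestamps[-1] - timestamps[0], 1.0) / 60.0
    rate = len(timestamps) / span_minutes
    return ActivitySignal(account_id, rate, rate > HUMAN_MAX_MESSAGES_PER_MINUTE)
```

In a sketch like this, flagged accounts would feed into the lowered enforcement thresholds described above, with the in-product appeals channel available to correct false positives.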

Platforms are best positioned to evaluate and set thresholds in a way that accounts for heightened risks around the 2024 cycle and the new capabilities of AI-enabled networks. They should monitor and adjust these thresholds throughout election season to respond to evolving online dynamics. Finally, as the implications of generative AI’s usage by threat actors are still evolving — including how foreign actors will use the technology — platforms should exchange information with one another to identify cross-platform influence operations.18See Nimmo, supra note 28 at 17.

About the Author

Nicole Schneidman

Technology Policy Strategist

Nicole Schneidman is a Technology Policy Strategist at Protect Democracy working to combat anti-democratic applications and impacts of technology, including disinformation.
