The Shortlist: Social Media Platform Recommendations

Four steps social media platforms can take to prepare for the general election

1. Adequate Election Team Resourcing

Adequately resource election teams, including related Trust and Safety, policy, legal, and operations teams, beginning at least six months before the U.S. general election, and maintain this resourcing through Inauguration Day.


2. Authoritative Voting and Election Information

Amplify accurate, authoritative content on the time, place, and manner of voting and election results for the remainder of the U.S. election season.



3. Reasonable Usage-Rate Limits

Establish usage-rate limits for commenting, inviting, messaging, sharing, and forwarding features — particularly their usage by accounts and entities that are new, demonstrate suspicious activity, or relate to voting or elections — beginning at least four months prior to the general election and continuing through Inauguration Day.


4. Limiting Distribution of New and Suspicious Entities

Limit distribution of content from new accounts and entities, as well as accounts and entities that have demonstrated suspicious on-platform activity, beginning at least four months prior to the general election and continuing through Inauguration Day.



1. Adequate Election Team Resourcing

No matter what form election protection takes at a platform, it relies on teams operating with election safety as a top priority leading up to and throughout election season, including Inauguration Day. These teams can include both internal staff and external partners, such as third-party fact checkers or civil society organizations that offer public education on voting and election administration. Internally, election teams vary in size and function across platforms but are typically cross-functional, comprising fully staffed product teams (product managers, engineers, and design and research managers) as well as content policy, partnerships, operations, legal, and communications managers. [1: Elections Integrity Best Practices, Elections Integrity Series Part 1, INTEGRITY INSTITUTE (May 17, 2023), https://integrityinstitute.org/s/Final-Elections-Best-Practices-Guide-Part-1_2023-05-24.pdf.] Across these functions, team members may not all have specific election expertise or be exclusively dedicated to elections. At minimum, however, election leads, particularly in policy and partnerships functions, should be versed in U.S. election administration as well as the specific outlook and risks facing the 2024 cycle.

While individual platforms are best suited to determine what constitutes adequate resourcing for their teams, they should base this assessment on audits that evaluate how a platform could be used to produce or distribute election information. [2: Id.] Platforms should prioritize resourcing according to the level of risk they identify across these use cases for creating or spreading election information, especially risks of voter suppression or physical violence. Sufficient resourcing includes staffing, budget, and tooling, including the capacity to execute robust on-platform monitoring. Finally, adequate resourcing should account for peak moments in the cycle that will pose elevated risks and require surge capacity and oversight.

Election teams vary in the degree to which they are centralized or dispersed within an organization. Regardless of their form or where they’re housed, teams must have a documented understanding of roles — namely, the key decision makers at critical junctures — including among a platform’s executives and C-suite, legal counsel, and operations managers. This understanding should be paired with replicable, documented processes that election teams and decision makers can use to quickly assess and respond to emerging threats. While election teams should engage in thorough red teaming or threat scenario planning to inform their preparations, they will inevitably encounter novel situations during the 2024 cycle. When presented with these situations, election teams must make difficult decisions on a compressed timeline, which will rely on clear escalation channels and consistent, documented communication. [3: Id.]

2. Authoritative Voting and Election Information

All three categories of platforms should prioritize ensuring their users have consistent access to authoritative information on voting and the administration of the 2024 election for the full duration of the cycle, through Inauguration Day. This information should cover all stages of voting as well as election results. Platforms would be wise to rely on partnerships with official election authorities or civil society organizations to equip users with vetted information from authoritative sources.

Platforms can use a range of formats and channels to equip users with authoritative election information. For example, social media platforms can amplify such information, whether in-feed or through recommendation surfaces, or prominently display an in-product election hub. [4: In 2020, Twitter introduced an election hub “to help Americans prepare for the most uncertain election in modern U.S. history.” Taylor Hatmaker, Twitter Debuts U.S. Election Hub to Help People Navigate Voting in 2020, TECHCRUNCH (Sept. 15, 2020, 1:00 PM), https://techcrunch.com/2020/09/15/twitter-election-hub-voting-tools.] Messaging platforms, regardless of whether they employ end-to-end encryption, can ensure users have the option to engage with dedicated chatbots to fact-check information or access authoritative election FAQs. [5: One example of a chatbot-enabled, fact-checking tipline program on an encrypted messaging platform is Meedan’s election fact-checking programs on WhatsApp. Elections: Verified Content for the Voting Public, MEEDAN (last visited Feb. 27, 2023), https://meedan.com/programs/elections.] Finally, generative AI platforms can direct users to authoritative sources of information in response to relevant queries and, at minimum, should train models to refuse to answer election-related queries for which they cannot consistently and accurately provide authoritative information. [6: OpenAI announced a partnership with the National Association of Secretaries of State that will ensure ChatGPT users are directed to CanIVote.org if they pose election-related procedure questions. OpenAI, How OpenAI is Approaching 2024 Worldwide Elections, OPENAI (Jan. 15, 2024), https://openai.com/blog/how-openai-is-approaching-2024-worldwide-elections#OpenAI.]
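
The sketch below is a minimal, hypothetical illustration of the routing behavior described above, not any platform's actual implementation: a generative AI service gates election-procedure queries behind a lightweight check and responds with a pointer to an authoritative source such as CanIVote.org rather than generating an answer it cannot verify. The keyword heuristic and function names are assumptions made for illustration.

```python
# Hypothetical sketch: route election-procedure queries to an authoritative source
# instead of letting the model answer. Keyword matching stands in for whatever
# classifier a real platform would use.

ELECTION_KEYWORDS = {"vote", "voting", "ballot", "polling place", "register", "election day"}
AUTHORITATIVE_SOURCE = "https://www.canivote.org"  # example authoritative resource

def is_election_procedure_query(query: str) -> bool:
    """Return True if the query appears to ask about voting logistics."""
    q = query.lower()
    return any(keyword in q for keyword in ELECTION_KEYWORDS)

def generate_model_answer(query: str) -> str:
    # Placeholder for the platform's normal generation pipeline.
    return f"[model-generated answer to: {query}]"

def respond(query: str) -> str:
    if is_election_procedure_query(query):
        # Decline to generate an unverified answer; point to an authoritative source.
        return (
            "For up-to-date information on voter registration, polling places, and "
            f"deadlines, please consult your state election officials via {AUTHORITATIVE_SOURCE}."
        )
    return generate_model_answer(query)

if __name__ == "__main__":
    print(respond("Where is my polling place in Ohio?"))
```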

Across these delivery mechanisms, platforms should prioritize ensuring that information is accessible to a diverse American audience, including non-English-speaking communities. Platforms should also ensure the information they offer is digestible and timely and provides sufficient context to help users situate the current moment within the broader electoral process.

3. Reasonable Usage-Rate Limits

Usage-rate limits [7: See Ravi Iyer, A Concise Social Media Design Election Advocacy Guide for 2024, DESIGNING TOMORROW (Jan. 19, 2024), https://open.substack.com/pub/psychoftech/p/a-concise-social-media-design-election?r=2b7wo9&utm_campaign=post&utm_medium=email.] place a ceiling on the number of times in a given period any user can employ a specific platform feature such as commenting, inviting, messaging, sharing, or forwarding. By placing this ceiling, rate limits reduce the likelihood that bad actors, whether relying on bots or prolific human activity, can supercharge the distribution of content or entities by abusively overusing a feature. Past U.S. elections have shown a recurring dynamic in which a small set of superusers plays a significantly outsized role in producing and spreading harmful election-related content, including disinformation and calls for political violence. [8: For example, an internal analysis at Facebook determined that 0.3% of users were responsible for 30% of the group invites that resulted in the original “Stop the Steal” Facebook group growing to 360,000 members in 24 hours, with 2.1 million membership requests still pending when it was taken down. Similarly, internal research at Facebook found that one individual issued 400,000 invitations to QAnon groups in six months. See JEFF HORWITZ, BROKEN CODE 205, 219 (2023).] These superusers have illustrated that social media and messaging platforms offer features that, when used at extreme outlier or “spammy” levels, can become vectors for manipulation.

This recommendation suggests platforms implement rate limits that narrowly prevent extreme overuse. Establishing a reasonable, focused threshold requires platforms to carefully balance tradeoffs with on-platform engagement while recognizing how rate limits will affect both legitimate and manipulative usage of a feature. Actual implementation will vary by platform, but in practice, successful deployment would mean that a platform sets a targeted rate limit that only affects a small sliver of users’ “spammy” activity. [9: As described in Broken Code, the internal team at Facebook created to fight Dedicated Vaccine Discouragement Entities “set the goal of limiting the anti-vax activity of the top .001 percent of users — a group that turned out to have a meaningful effect on overall discourse.” Id. at 247.] What’s more, rate limits do not ban accounts from ever using a feature; they prevent outlier usage for a defined duration, after which an account can begin using that feature again. [10: For example, if a rate limit establishes a threshold such that no user can send more than 50 invitations to a group each day, an affected user would not be able to send their 51st invitation in that twenty-four-hour period. Once the defined time period — a speed bump, so to speak — for the rate limit has passed, the user would be able to send invitations to the group again.]
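
To make the mechanics concrete, the following is a minimal sketch of how a per-feature rate limit like the one described in the footnote above might work: a rolling twenty-four-hour window capped at a hypothetical 50 group invitations per user, after which further invitations are deferred until older uses age out of the window. The threshold, window, and names are illustrative assumptions, not any platform's actual parameters.

```python
# Minimal sketch of a per-user, per-feature usage-rate limit.
# Hypothetical parameters: at most 50 group invitations per rolling 24 hours.
import time
from collections import defaultdict, deque

class FeatureRateLimiter:
    def __init__(self, max_uses: int = 50, window_seconds: int = 24 * 60 * 60):
        self.max_uses = max_uses
        self.window_seconds = window_seconds
        self._events = defaultdict(deque)  # user_id -> timestamps of recent uses

    def allow(self, user_id: str, now: float | None = None) -> bool:
        """Return True if the user may use the feature now; record the use if allowed."""
        now = time.time() if now is None else now
        events = self._events[user_id]
        # Drop uses that have aged out of the rolling window (the "speed bump" expiring).
        while events and now - events[0] > self.window_seconds:
            events.popleft()
        if len(events) >= self.max_uses:
            return False  # over the ceiling; usage resumes once older events age out
        events.append(now)
        return True

# Example: the 51st invitation within 24 hours is deferred, not a permanent ban.
limiter = FeatureRateLimiter(max_uses=50)
results = [limiter.allow("user-123", now=1_000_000 + i) for i in range(51)]
assert results[:50] == [True] * 50 and results[50] is False
```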

Platforms should consider how rate limits should be adjusted in response to the increased risks and dynamic nature of the election environment. For example, a platform could apply a rate limit in a targeted manner prior to voting and broaden its application once voting begins, continuing to adjust it as needed for higher-risk periods or significant fluctuations in platform usage. In addition, platforms should diligently apply rate limits to categories of content or entities that likely pose higher risks during election season, such as new accounts and entities, [11: See infra pp. 12-13.] accounts and entities that have demonstrated suspicious on-platform behavior, [12: See infra p. 12.] or accounts and entities that relate to voting or elections. At minimum, platforms should plan for aggressive application of rate limits as a break-the-glass measure and maintain clear documentation of the criteria that would trigger this deployment.
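
One way to operationalize this kind of phased tightening, sketched below under assumed names and thresholds, is to keep rate-limit parameters in a schedule keyed to election phases, so the same limiter can be configured more aggressively as voting approaches or when break-the-glass criteria are met. The phases and numbers are hypothetical.

```python
# Hypothetical schedule of rate-limit thresholds keyed to election phases.
# Values are illustrative, not recommendations for any specific platform.
from dataclasses import dataclass

@dataclass(frozen=True)
class RateLimitPolicy:
    feature: str
    max_uses: int
    window_hours: int

PHASE_POLICIES = {
    "pre_voting": [
        RateLimitPolicy("group_invite", max_uses=100, window_hours=24),
        RateLimitPolicy("forward", max_uses=50, window_hours=24),
    ],
    "voting_period": [
        RateLimitPolicy("group_invite", max_uses=50, window_hours=24),
        RateLimitPolicy("forward", max_uses=20, window_hours=24),
    ],
    "break_the_glass": [
        RateLimitPolicy("group_invite", max_uses=10, window_hours=24),
        RateLimitPolicy("forward", max_uses=5, window_hours=24),
    ],
}

def policies_for(phase: str) -> list[RateLimitPolicy]:
    """Look up the limits that apply during a given election phase."""
    return PHASE_POLICIES[phase]

# Example: tighten limits when monitoring triggers a break-the-glass escalation.
for policy in policies_for("break_the_glass"):
    print(policy.feature, policy.max_uses, policy.window_hours)
```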

The rate limit recommendations offered for social media and messaging platforms differ in two respects. First, social media platforms typically offer a commenting feature absent on messaging platforms. Second, a number of messaging platforms in the U.S. offer end-to-end encryption. Where social media platforms can employ algorithmic classifiers to distinguish and categorize content, encrypted messaging platforms do not view the content shared on their platforms, and thus cannot distinguish among categories of content distributed on their surfaces. As a result, the social media platform recommendation suggests applying rate limits to election and voting-related content and entities, as defined by individual platforms. By comparison, our recommendation for messaging platforms recognizes that rate limits can’t be applied based on a category of content on encrypted channels. [13: WhatsApp, an encrypted messaging platform, employs rate limits to maintain the private nature of their service and safeguard elections. In 2019, WhatsApp set a content-level rate limit by restricting message forwarding to five chats at a time. In addition, WhatsApp separately limited the reforwarding of viral messages. Specifically, the platform labeled messages that had been reforwarded many times and limited their resharing to one chat at a time. Finally, in recognition of bad actors with political motivations, WhatsApp also maintained account-level rate limits on the number of groups an account could create within a specific time period. More Changes to Forwarding, WHATSAPP (Jan. 21, 2019), https://blog.whatsapp.com/more-changes-to-forwarding; About WhatsApp and Elections, WHATSAPP, https://faq.whatsapp.com/518562649771533/?helpref=uf_share (last visited Feb. 28, 2024); Stopping Abuse: How WhatsApp Fights Bulk Messaging and Automated Behavior, WHATSAPP (Feb. 6, 2019), https://scontent-sjc3-1.xx.fbcdn.net/v/t39.8562-6/299911313_583606040085749_3003238759000179053_n.pdf?_nc_cat=101&ccb=1-7&_nc_sid=b8d81d&_nc_ohc=_NqFMccy7U4AX8SrXhF&_nc_ht=scontent-sjc3-1.xx&oh=00_AfAUg7YY5qSKrwBdzx9Y-pl0_e87YD89fXfBPiRwjrycmQ&oe=65E50694.]

4. Limiting Distribution of New and Suspicious Entities

Distribution on a social media platform relies on algorithmically ranking content. Each platform employs its own set of ranking systems and criteria, but they largely function in a similar manner, transforming what would be an impossibly overwhelming volume of content into a functional, curated feed or list for users. [14: Brief of the Integrity Institute and Algotransparency as Amici Curiae in Support of Neither Party, Gonzalez, et al. v. Google LLC, 598 U.S. ___ (2023), https://www.supremecourt.gov/DocketPDF/21/21-1333/249279/20221207100038897_21-1333_Amici%20Brief.pdf.] Platforms broadly optimize their ranking systems to deliver to each user a unique set of content based on what delivers the highest value to the company, which most platforms define as on-platform engagement. [15: Id.]

Legacy social media platforms also monitor on-platform signals to identify suspicious or unusual activity. These signals can include outlier levels of activity or growth, particularly after periods of account inactivity, as well as specific policy violations associated with an account or entity. Commonly, monitoring also looks for spam-like activity, which platforms widely recognize as behavior that should be curtailed. In executing on-platform monitoring for any of these signals, platforms should especially prioritize accounts with desirable characteristics, such as verified status.
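
As a rough, hypothetical illustration of the kind of signal monitoring described above, the sketch below flags an account as suspicious when it shows outlier activity following a long dormant period, an abnormal growth spike, or repeated policy violations. The thresholds and field names are invented for illustration and are not drawn from any platform's actual criteria.

```python
# Hypothetical suspicious-activity flagging based on the signals described above.
# All thresholds and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    days_dormant_before_spike: int   # inactivity preceding the current burst of activity
    actions_last_24h: int            # posts, invites, forwards, etc.
    follower_growth_rate_24h: float  # proportional growth over the last day
    recent_policy_violations: int    # strikes recorded against the account

def is_suspicious(activity: AccountActivity) -> bool:
    """Flag accounts whose recent behavior is an outlier on any monitored signal."""
    dormant_then_hyperactive = (
        activity.days_dormant_before_spike > 90
        and activity.actions_last_24h > 500
    )
    abnormal_growth = activity.follower_growth_rate_24h > 5.0  # e.g., 500% in a day
    repeated_violations = activity.recent_policy_violations >= 3
    return dormant_then_hyperactive or abnormal_growth or repeated_violations

# Example: a long-dormant account suddenly sending hundreds of invites gets flagged.
print(is_suspicious(AccountActivity(120, 800, 0.2, 0)))  # True
```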

In addition to demonstrating how hyperactive users have proven to be recurring spreaders of election-threatening narratives, past election cycles have highlighted how new accounts and entities, particularly those that gain viral traction and growth, can be used to publish and spread election-threatening content. [16: HORWITZ, supra note 8.] As a result, Trust and Safety teams at legacy social media platforms have included heightened safeguards on newly created entities or accounts as break-the-glass measures, including limiting invitations to join or follow new entities or declining to recommend content from new entities or accounts. [17: Id. at 213.]

During the sensitive period of the 2024 election cycle, platforms should limit the distribution of content from new accounts and entities as well as from those that have signaled suspicious on-platform activity. Platforms are best suited to determine what constitutes a new or suspicious account or entity. In doing so, they should consider not only on-platform behaviors that signal suspicion but also those that suggest an account is legitimate or trustworthy. Accounting for such signals when determining a new account’s trustworthiness can help ensure that new, legitimate accounts are not indefinitely placed at a distribution disadvantage.
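
The sketch below shows one hypothetical way a ranking pipeline could implement this recommendation: content from accounts that are new or flagged as suspicious receives a temporary down-weight in its ranking score, while accounts that accumulate trust signals recover full distribution. The function names, multipliers, and thresholds are assumptions for illustration, not any platform's actual ranking logic.

```python
# Hypothetical down-weighting of distribution for new or suspicious accounts.
# Multipliers and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int
    flagged_suspicious: bool   # e.g., output of the monitoring sketch above
    trust_signals: int         # e.g., verified contact info, sustained normal activity

def distribution_multiplier(signals: AccountSignals) -> float:
    """Scale a content item's ranking score based on its author's account signals."""
    multiplier = 1.0
    if signals.flagged_suspicious:
        multiplier *= 0.2  # sharply limit reach of suspicious accounts
    if signals.account_age_days < 30 and signals.trust_signals < 2:
        multiplier *= 0.5  # temporary limit on brand-new accounts without trust signals
    return multiplier

def ranked_score(base_engagement_score: float, signals: AccountSignals) -> float:
    return base_engagement_score * distribution_multiplier(signals)

# Example: a week-old account with no trust signals gets half the usual distribution;
# an established, trusted account is unaffected.
print(ranked_score(10.0, AccountSignals(7, False, 0)))    # 5.0
print(ranked_score(10.0, AccountSignals(400, False, 3)))  # 10.0
```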

About the Author

Nicole Schneidman

Technology Policy Strategist

Nicole Schneidman is a Technology Policy Strategist at Protect Democracy working to combat anti-democratic applications and impacts of technology, including disinformation.
