The AI for Democracy Action Lab (AI-DAL) is a place for tech innovators, legal and policy experts, software engineers, nonprofit leaders, and others to join together not only to defend against the dangers that artificial intelligence (AI) poses to our democracy, but also to seize its possibilities for strengthening self-government. The lab builds on our experience integrating a wide range of tools, from technology to policy.
One: Incubating and accelerating development of AI-enabled civic tools. We will partner with organizations that have been pioneering AI’s pro-democracy applications, including key allies in state and local governments and other civil society organizations, to identify the strongest use cases for AI to advance democracy and fast-track responsive product development. Our lab will organize a learning community that shares expertise, best practices, and lessons from success and failure alike, so that innovations can be replicated and scaled responsibly.
Two: Developing policy and governance solutions that check AI-enhanced threats to democracy. We will serve as a trusted, nonpartisan resource for platforms and regulators grappling with how to govern these systems in the public interest. We will focus especially on how constitutional principles (checks and balances, individual freedoms, and the right to privacy) can be embedded in forward-looking data and tech governance regimes.
Why we’re building this lab
Given the rapid rise in AI adoption and capability, we believe we must work proactively to defend against autocratic applications of AI while harnessing its capabilities for democracy.
AI’s potential for disruption (for good or ill) spans three key areas:
AI is already capable of digesting enormous amounts of information almost instantaneously. Autocrats can abuse this capability easily and profoundly, whether through surveillance at scale via tools like facial recognition technology or by targeting specific communities and their communications.
At the same time, if deployed correctly, AI can be a tool for protecting privacy and security. It can help spot vulnerabilities, monitor threats, and protect critical systems both online and off.
In the wrong hands, AI tools can supercharge propaganda and disinformation efforts. Already, generative AI is capable of producing high-quality synthetic content across formats (text, video, image, and audio) that is increasingly indistinguishable from human-generated content. Authoritarian actors (and AI slop manufacturers) have already discovered AI’s utility for flooding the zone with high volumes of cheaply created synthetic content. AI-fueled propaganda efforts threaten to swamp our information ecosystem with disinformation, deepfakes, and simple noise.
In parallel, AI, like other advanced technologies before it, is changing where and how we get our information, from AI-generated summaries at the top of search results to generative AI chatbots. This shift has already raised real questions about these tools’ reliability as sources of accurate information, particularly given the risk of hallucination. But it also makes AI the latest target for authoritarian efforts to censor and control information.
Together, these dynamics point to perhaps the biggest threat that AI’s misuse poses to democracy: eroding, whether deliberately or accidentally, the foundation of trust and shared reality that democracy needs in order to work.
At the same time, AI can still be used to bolster, not undermine, our information environment and the trust that sustains it. In fact-checkers’ hands, it can speed the verification of information and scale the number of claims they review. For trust and safety workers, it is a tool for supercharging their ability to analyze and moderate the flood of content on modern social media and messaging platforms. And AI’s pattern-detection capabilities have already made it an important tool for analyzing public input and identifying areas of common ground, as sketched below.
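As a toy illustration of that last point, the sketch below groups public comments by textual similarity so reviewers can spot recurring themes. The comments, the TF-IDF representation, and k-means clustering are all stand-ins chosen for brevity; real public-input analysis tools are more sophisticated, and nothing here reflects a specific product.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical public comments; real input would come from a comment docket or survey.
comments = [
    "Please add protected bike lanes downtown",
    "We need safer bike routes for commuters",
    "Property taxes are already too high",
    "Lower taxes before funding new projects",
    "Bike lanes would make my commute safer",
    "Cut spending instead of raising taxes",
]

# Represent each comment as a TF-IDF vector, then group similar comments.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Surface the themes: comments that cluster together likely share a concern.
for cluster in range(2):
    print(f"Theme {cluster}:")
    for comment, label in zip(comments, labels):
        if label == cluster:
            print(f"  - {comment}")
```

The design point is that the machine only groups and surfaces; people still read the themes and judge where the common ground actually lies.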
In the wrong hands, AI also has the potential to consolidate power in governments or other hierarchies if it is used to justify eliminating checks and balances and the people who power them. Right now, if a president or government leader wants to undertake a repressive program, it is not as simple as issuing a single directive. They must still contend with procedural guardrails designed to check unilateral authority and offer transparency into decision-making, and they must compel a long chain of people to carry out that agenda. While far from a failsafe, this provides a measure of resistance to harmful actions or decisions. Imagine, however, if these guardrails and their human decisionmakers were swept away as “inefficient” on the pretense that AI can effectively replace them.
Conversely, at its best, AI can be deployed not to eliminate checks and balances and human decision-making, but to strengthen them: equipping people with better tools to digest and analyze information and render careful judgment. That not only bolsters democracy by strengthening institutional resilience to top-down coercion; it could also yield efficiency gains and more informed decisions.
We analyze publicly available voter databases and mail ballot processing files to identify unexpected or improper changes that could affect the administration of, or confidence in, an election.
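For a rough sense of what this kind of monitoring can involve, here is a minimal Python sketch that compares two snapshots of a voter file and flags large waves of removals or status changes for human review. The file names, the voter_id and status columns, and the 1% threshold are all hypothetical placeholders, not our actual methodology; real voter files vary widely by state and demand far more careful handling.

```python
import pandas as pd

# Hypothetical snapshot files; real voter files differ by state in format and fields.
before = pd.read_csv("voter_file_2024_01.csv", dtype=str)
after = pd.read_csv("voter_file_2024_02.csv", dtype=str)

# Align the two snapshots on a stable voter identifier (hypothetical column name).
merged = before.merge(
    after, on="voter_id", suffixes=("_before", "_after"),
    how="outer", indicator=True,
)

# Records present before but missing now: candidate removals worth reviewing.
removed = merged[merged["_merge"] == "left_only"]

# Records whose status changed (e.g., active -> inactive) between snapshots.
changed = merged[
    (merged["_merge"] == "both")
    & (merged["status_before"] != merged["status_after"])
]

# Flag an unusually large wave of removals or status changes for human review.
# The 1% threshold is an arbitrary placeholder, not a real operational rule.
for label, frame in [("removals", removed), ("status changes", changed)]:
    share = len(frame) / max(len(before), 1)
    if share > 0.01:
        print(f"Review: {label} affect {share:.1%} of records between snapshots")
```

The pattern matters more than the particulars: diff successive snapshots, then route anomalies to people for investigation rather than acting on them automatically.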
The Shortlist: Generative AI Platform Recommendations
We produced four recommendations for how AI platforms could handle the 2024 elections: priority interventions that could be adapted to each platform’s particulars and implemented ahead of high-stakes elections, both in the U.S. and around the world.
Wisconsin passes model legislation on generative AI and political ads
In March 2024, Wisconsin passed a law requiring the disclosure of AI-generated audio or visual content in political ads. This straightforward policy (AB 664) takes a balanced approach to safeguarding Wisconsin voters’ ability to make informed election choices. It went into effect in November 2024 and offers policymakers, especially at the state level, a model for a logical first step toward oversight of AI in U.S. elections.
How generative AI could make existing election threats even worse
In 2023, we wrote about the threat AI could pose to our elections, arguing that “generative AI’s rapid advancements are poised to escalate existing threats to the 2024 election cycle, but are unlikely to introduce qualitatively new threats.”