The AI for Democracy Action Lab

The AI for Democracy Action Lab (AI-DAL) is a place to join together with tech innovators, legal and policy experts, software engineers, nonprofit leaders, and others not only to defend against the dangers of artificial intelligence (AI) to our democracy, but also to seize its possibilities for strengthening self-government. This builds on our experience integrating a variety of tools, from tech to policy.

This lab will have two mutually reinforcing goals:

One: Incubating and accelerating the development of AI-enabled civic tools. We will partner with organizations that have been pioneering AI’s pro-democracy applications — including key allies in state and local governments and other civil society organizations — to identify the strongest use cases for AI to advance democracy and fast-track responsive product development. Our lab will organize a learning community that shares expertise, best practices, and lessons from success and failure alike, so that innovations can be replicated and scaled responsibly.

Two: Developing policy and governance solutions that check AI-enhanced threats to democracy. We will serve as a trusted, nonpartisan resource for platforms and regulators grappling with how to govern these systems in the public interest. We will focus especially on how constitutional principles — checks and balances, individual freedoms, and the right to privacy — can be embedded in forward-looking data and tech governance regimes.

Why we’re building this lab

Given the rapid rise in AI adoption and capability, we believe we must work proactively to defend against autocratic applications of AI while harnessing its capabilities for democracy. 

AI’s potential for disruption (for good or ill) spans three key areas:

Our key projects

VoteShield

We analyze publicly available voter databases and mail ballot processing files to identify unexpected or improper changes that could affect the administration of, or confidence in, an election.

The Shortlist: Generative AI Platform Recommendations

We produced four recommendations for how AI platforms could handle the 2024 elections, designed to offer priority interventions that could be adapted to platforms’ nuances and implemented ahead of high-stakes elections both in the U.S. and around the world.

Wisconsin passes model legislation on generative AI and political ads

In March 2024, Wisconsin passed a law requiring the disclosure of AI-generated audio or visual content in political ads. This straightforward policy (AB 664) takes a balanced approach to safeguarding Wisconsin voters’ informed election choices. It went into effect in November 2024 and offers policymakers, especially at the state level, a model for a logical first step toward overseeing the intersection of AI and elections in the U.S.

How generative AI could make existing election threats even worse

In 2023, we wrote about the threat AI could pose to our elections, arguing that “generative AI’s rapid advancements are poised to escalate existing threats to the 2024 election cycle, but are unlikely to introduce qualitatively new threats.”