Safeguarding frontier AI labs’ guardrails from government overreach

The Pentagon is punishing one of the United States’ leading frontier AI labs for refusing to lift contractual restrictions that prohibit government use of its products for mass domestic surveillance and fully autonomous weapons. These restrictions not only command bipartisan support; they are essential in the absence of meaningful legal and oversight mechanisms to check the risks and abuses of such uses of AI. The AI for Democracy Action Lab is pushing back against this blatant government interference with critical industry guardrails in court and in Congress.

At stake are two “red lines” that command broad support: AI should not be used to surveil American civilians en masse, and currently available AI systems should not be permitted to select and kill targets without a human in the loop. 

The Pentagon’s effort to punish a company for holding these lines sets a precedent with significant consequences for our democracy and U.S. competitiveness in the AI industry.

Amicus brief: Scientists at frontier AI labs push back on the Pentagon’s retaliation against Anthropic

Dozens of scientists and researchers employed at OpenAI and Google filed, in their personal capacities, an amicus brief in Anthropic v. U.S. Department of War, opposing the Pentagon’s effort to punish Anthropic for maintaining its AI safeguards against domestic mass surveillance and fully autonomous lethal weapons. The brief was filed on their behalf by the AI for Democracy Action Lab at Protect Democracy.

In a show of tech industry solidarity, the brief speaks to widely shared concerns about the risks that arise when the deployment of AI systems outpaces the legal and ethical frameworks designed to govern them. It argues that the Pentagon’s reckless designation of Anthropic as a supply chain risk harms public debate about the risks and benefits of AI and undermines U.S. competitiveness and innovation more broadly.

[T]he technical concerns animating Anthropic’s ‘red lines’ are legitimate and widely recognized within our scientific community as requiring some kind of response. The best currently available AI systems cannot safely or reliably handle fully autonomous lethal targeting, and should not be available for domestic mass surveillance of the American people. While there are various ways to establish these guardrails, we agree that these guardrails must be in place.

Letter to Congress: Investigate the Pentagon’s supply chain threat

A coalition of over 30 former senior defense officials and tech policy leaders sent a letter to Congress urging oversight of the Pentagon’s threat to designate Anthropic as a supply chain risk under 10 U.S.C. § 3252. That authority was designed to protect the U.S. from infiltration by foreign adversaries, not to penalize American companies for declining to remove safeguards against mass domestic surveillance and autonomous weapons. Protect Democracy joined these leaders as a signatory to demand congressional action.

The letter calls the Pentagon’s actions a “dangerous precedent” and warns that designating a U.S. company a “supply chain risk” is a profound departure from the authority’s purpose and risks chilling innovation, destabilizing the broader AI ecosystem, and weakening America’s competitive position in the global AI race.

Notable Signatories

Former CIA Director Michael Hayden · Former Secretaries of the Navy Richard Danzig, Carlos Del Toro, and F. Whitten Peters · Admiral William Owens, U.S. Navy (Ret.) · Major General Randy Manner, U.S. Army (Ret.) · Lawrence Lessig, Harvard Law School · Alexandra Givens, CEO, Center for Democracy & Technology · and more than 20 additional national security and civil society leaders