Microsoft Challenges the Pentagon’s Power to Punish AI Companies for Ethical Positions in Landmark Filing

by admin477351

Microsoft has directly challenged the Pentagon’s claimed power to punish AI companies for their ethical positions, filing a landmark brief in San Francisco federal court in support of Anthropic’s legal battle against the Defense Department’s supply-chain risk designation. The brief called for a temporary restraining order and argued that the designation sets a dangerous precedent for how the government can treat commercial AI companies. Amazon, Google, Apple, and OpenAI have joined the legal challenge through a separate supporting filing.

The conflict arose from a $200 million contract negotiation that collapsed after Anthropic refused to allow its Claude AI to be used for mass surveillance of American citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth applied the supply-chain risk designation following the breakdown, and the Pentagon’s technology chief later publicly foreclosed any possibility of renegotiation. Anthropic filed two simultaneous lawsuits in California and Washington DC challenging the designation.

Microsoft’s challenge to the Pentagon’s authority is grounded in its direct integration of Anthropic’s technology into military systems and its participation in the $9 billion Joint Warfighting Cloud Capability contract. Additional federal agreements spanning defense, intelligence, and civilian agencies give Microsoft a deep stake in the outcome. Microsoft publicly argued that the government and the technology sector need to work together to ensure AI serves national security without crossing ethical lines.

Anthropic’s lawsuits argued that the supply-chain risk designation was an unconstitutional act of ideological punishment for the company’s publicly expressed AI safety positions, in violation of its First Amendment rights. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, which it said was the genuine basis for its contract demands. Anthropic noted that the designation had never before been applied to a US company.

Congressional Democrats have separately written to the Pentagon asking whether AI was used in a strike in Iran that reportedly killed over 175 civilians at a school, raising questions about AI targeting and human oversight. Their formal inquiries are adding legislative urgency to an already historic legal confrontation. Together, Microsoft’s landmark challenge, the industry coalition, and congressional pressure are creating the most consequential test of the limits of government authority over commercial AI in US history.
