Microsoft has sent its strongest-ever signal on the importance of AI ethics by filing a court brief in a San Francisco federal court supporting Anthropic’s challenge to the Pentagon’s supply-chain risk designation. The brief called for a temporary restraining order and argued that the designation poses a serious and immediate threat to the technology networks that support national defense. Amazon, Google, Apple, and OpenAI have also backed Anthropic through a joint filing, amplifying the industry’s collective message.
The Pentagon applied the designation to Anthropic after the company refused to allow its Claude AI to be used for mass domestic surveillance or autonomous lethal weapons during a $200 million contract negotiation. Defense Secretary Pete Hegseth formalized the designation after talks broke down, triggering the cancellation of Anthropic’s government contracts. Anthropic responded with two simultaneous lawsuits challenging the designation, filed in federal courts in California and Washington, D.C.
Microsoft’s stake in the dispute stems from its direct integration of Anthropic’s technology into federal military systems and its participation in the Pentagon’s $9 billion cloud-computing contract. The company also holds additional federal agreements spanning defense, intelligence, and civilian agencies. Microsoft publicly argued that responsible AI governance and national security are complementary goals that require collaboration between government and industry.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of retaliation for the company’s publicly stated AI safety positions. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, and said that judgment, not commercial leverage, was the genuine basis for its contract demands. The Pentagon’s technology chief publicly ruled out any possibility of renegotiation.
Congressional Democrats have separately asked the Pentagon whether AI was involved in a strike in Iran that reportedly killed over 175 civilians at a school, demanding answers about AI targeting tools and human oversight. Their inquiries are adding legislative pressure to an already extraordinary legal confrontation. Together, Microsoft’s intervention, the industry coalition, and congressional scrutiny are creating a watershed moment for AI ethics in American national security policy.