US Pentagon Moves to Blacklist Anthropic AI For Refusing to Spy on Americans
By G. Calder | February 18, 2026
The US Department of Defense is considering blacklisting Anthropic — one of America's leading AI companies and the creator of the Claude large language model — after the firm refused to let the military use its technology without ethical limits. Anthropic says its usage limits are essential to guard against mass surveillance and the creation of autonomous weapons, while the Pentagon is reportedly considering designating the company a "supply chain risk" and severing defense ties after months of negotiations broke down.
The previously quiet technology partnership between the DoD and Anthropic has erupted into a public, philosophical, and potentially precedent-setting dispute. At issue is whether an AI firm can set ethical boundaries on how its technology is used, or whether the government gets to decide. The feud raises bigger questions about who controls AI: the companies that build it, the citizens whose liberties are at stake, or the government institutions that want unfettered access.
The Standoff: Supply Chain Risk & Ethical Limits
Defense Secretary Pete Hegseth is said to be "close" to severing ties with Anthropic and labeling the company a supply chain risk — a designation historically reserved for foreign adversaries — because Anthropic has refused to relax the ethical guardrails attached to its AI tools. Those guardrails include refusing to allow Claude to be used for mass domestic surveillance of Americans or in fully autonomous weapons that can fire without human intervention.
The dispute is not hypothetical; the Pentagon has been pushing four leading AI providers — OpenAI, Google, xAI, and Anthropic — to allow their models to be used for "all lawful purposes," including sensitive areas like weapons development, intelligence collection, and battlefield operations. Anthropic alone has maintained that some applications should remain off limits, and that stance has now triggered open frustration among senior defense officials.
Anthropic's contract with the Pentagon, awarded in July 2025 and valued at up to $200 million, is part of a broader push by the US military to integrate top AI technology into defense workflows. Claude was the first model approved for classified military networks and remains the only such system deployed for sensitive tasks. Other companies have agreed to lift their safeguards for use in unclassified government settings; only Anthropic has stood its ground, maintaining its ethical limits in all contexts.
The Pentagon argues that setting boundaries on lawful uses in advance is too restrictive. A senior official reportedly told Axios that negotiating case-by-case approvals is impractical for military planning and that partners must be willing to help "our warfighters win in any fight." The same official warned that Anthropic could face consequences for resisting, a sign of how serious the standoff has become.
Anthropic Didn't Know That Claude AI Was Used to Capture Maduro
The philosophical fault line in this dispute became clearer after reports that Claude was accessed during the US military's January 2026 operation to capture Venezuelan President Nicolás Maduro. According to multiple outlets, Claude was used through a system built by Palantir — yet Anthropic's usage policy prohibits its models from being used to "facilitate or promote any act of violence" or to design or deploy weapons. The military has not confirmed details, and Anthropic has said it does not discuss Claude's use in specific operations, insisting that its usage policies apply in all contexts.
Continue reading at The Expose.