The United States Secretary of the Treasury, Scott Bessent, announced the termination of the government’s use of Anthropic AI products, citing concerns over contractual restrictions imposed by the company. Speaking on CNBC’s Squawk Pod, Bessent emphasized that no private company could dictate terms affecting national security. His announcement follows recent directives from the Pentagon and President Donald Trump, which together have escalated scrutiny of Anthropic’s role in government operations.
Bessent outlined the Treasury’s position in a March 4 post, stating: “No private company will ever dictate the terms of our national security. Anthropic’s attempts to push use clauses into their contracts with the United States government are unacceptable, and their products will no longer be utilized by the @USTreasury or any other government agency.” The post included a clip in which Bessent elaborated on the decision.
Bessent responded to questions from CNBC’s Joe Kernen regarding the potential for compromise with Anthropic, saying: “Yeah. Again, Joe, the vendor does not get to insert a use clause, the ex post into a product…if you know, I’m sure you have a fancy foreign car. And if the manufacturer said, ‘Well, we don’t want you driving it on Sundays ex post,’ you would get rid of that car. And we had Anthropic at Treasury. It was part of a suite of other AI companies we were testing and…we’ll have them out of our system within several days. So this isn’t mission-critical and they can’t push use clauses into their contracts, but naming them a supply chain threat means that no one who is a contractor to the US government can work with Anthropic either…this is very bad behavior on their part in terms of trying to decide what is the use clause.”
The move aligns with broader government action against Anthropic. President Trump announced on Truth Social that the U.S. government would blacklist the company, and the Pentagon labeled it a “supply chain risk.” The decision came after Anthropic declined the Pentagon’s request to lift all safeguards on the military’s use of its AI model, Claude, citing ethical concerns regarding mass domestic surveillance and autonomous weapon systems.
Trump framed the company’s actions as a threat to military authority and national security, writing that Anthropic’s attempt to “strong-arm the Department of War” endangered American troops and lives. He directed all federal agencies to cease use of the company’s technology, allowing a six-month phase-out period for military and other sensitive operations.
The Pentagon’s response emphasized the difficulty of negotiating AI usage with private firms. Defense officials had offered Anthropic a deal that would have permitted the collection or analysis of personal data, including geolocation, web activity, and financial information, to support military applications. Anthropic CEO Dario Amodei rejected the offer, stating that compliance would be inconsistent with the company’s ethical standards. Emil Michael, the Defense undersecretary overseeing AI negotiations, called Amodei’s decision a risk to national safety, arguing that the military must have authority to determine lawful uses of AI tools without negotiating terms with private companies.
This policy move carries significant implications for the government’s AI infrastructure. Claude is currently the only AI model integrated into classified military systems and has been used in high-profile operations, including the capture of Nicolás Maduro. The transition away from Anthropic will require coordination with contractors like Palantir, which utilize Claude for sensitive military workflows. The Pentagon’s severance of its $200 million contract with Anthropic reflects the broader goal of ensuring that AI tools in government and defense remain fully under federal control.
Anthropic has established a strong presence in enterprise AI, but its refusal to comply with government demands on model usage has resulted in what officials characterize as unacceptable risk. Trump asserted that decisions about military operations must remain under the authority of the Commander-in-Chief and military leadership, not AI companies guided by what he described as “radical left” ideologies.
Secretary Bessent’s announcement signals the Treasury’s alignment with these priorities. By removing Anthropic products from federal systems, the government seeks to prevent private entities from imposing restrictions that could influence operational or security decisions. Bessent affirmed that the process of phasing out Anthropic AI will occur within days and that alternative providers will be integrated to ensure continuity across Treasury and other agencies.
The decision marks a significant moment in the evolving relationship between artificial intelligence and U.S. national security. With AI playing an increasingly critical role in government operations, the Treasury and Pentagon are asserting that control over these technologies must remain fully within federal oversight, setting a precedent for future contracts and technology partnerships.