Anthropic CEO Dario Amodei criticized OpenAI CEO Sam Altman following OpenAI’s recent deal with the U.S. Department of Defense, drawing a stark contrast between the two companies’ approaches to national security and AI ethics. In a memo to staff, reported by The Information, Amodei referred to OpenAI’s engagement with the military as “safety theater,” asserting that the decision to sign the contract prioritized internal optics over substantive safeguards.
“The main reason OpenAI accepted the DoD’s deal and we did not is that they cared about placating employees, and we actually cared about preventing abuses,” Amodei wrote. His comments follow a failed negotiation between Anthropic and the Department of Defense over unrestricted access to the company’s AI technology. Anthropic, which maintained a $200 million contract with the military, insisted the DoD affirm that its AI would not be used for mass domestic surveillance or autonomous weaponry.
The Department of Defense instead reached an agreement with OpenAI. Sam Altman described the contract as aligning with core safety principles, including prohibitions on domestic surveillance, human accountability for the use of force, and restrictions on autonomous weapons. Altman, who referred to the agency as the Department of War (DoW), stated: “In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome…We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted.”
“Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of…”
— Sam Altman (@sama) February 28, 2026
OpenAI further clarified in a statement that the contract explicitly prevents use of its AI systems for domestic surveillance of U.S. persons, including the handling of personally identifiable information. The company emphasized a cloud-only deployment with cleared personnel in the loop, designed to ensure that red lines regarding surveillance and autonomous weapons are enforced. OpenAI maintained that its approach includes layered safeguards, a controlled safety stack, and active oversight, with the Department required to comply with existing laws and policies.
Amodei directly challenged OpenAI’s portrayal of its deal. In his memo, he described Altman as falsely presenting himself “as a peacemaker and dealmaker” and referred to OpenAI’s messaging as “straight up lies.” He argued that Anthropic could not agree to the DoD’s terms without compromising the company’s ethical standards, noting that the Pentagon’s insistence on AI availability for “any lawful use” conflicted with the protections Anthropic sought to maintain.
Amodei’s memo also reflected concern over public perception. He cited a surge in downloads of Anthropic’s app following OpenAI’s announcement, framing the company’s refusal as a principled stand against compromising safety. “I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with the DoW as sketchy or suspicious, and see us as the heroes (we’re #2 in the App Store now!),” he wrote.
OpenAI emphasized that its contract preserves the same key red lines as Anthropic while ensuring enforceability through technical and operational safeguards. The company stated that the DoD explicitly agrees that its AI will not be used for domestic surveillance or to operate autonomous weapons, and that the deployment architecture allows independent verification of these protections.
The contrast between Anthropic and OpenAI underscores a broader debate in the AI sector over balancing ethical safeguards with government demand. Amodei framed the decision as a defense against potential misuse of AI, while Altman stressed compliance with safety principles combined with operational collaboration with the military. Both approaches reflect growing scrutiny over AI deployment in sensitive national security contexts.
As the government continues to integrate AI systems into defense operations, the decisions by Anthropic and OpenAI highlight the tension among corporate responsibility, employee expectations, and national security requirements. Amodei’s comments illustrate a firm stance on ethical boundaries, while Altman’s contract demonstrates one pathway for AI companies to work with the military under layered safeguards.