“Pete Hegseth Wants to Use AI to Fire Weapons Without Human Input” — Sen. Raphael Warnock Speaks Against Artificial Intelligence Use in War: “He Is a Threat to Our Safety”

Senator Raphael Warnock issued a stark warning against the use of artificial intelligence in military operations, criticizing Secretary of War Pete Hegseth for pushing to deploy AI systems capable of firing weapons without human oversight. Warnock’s comments came in response to recent reports that Hegseth demanded full military access to Anthropic’s AI model Claude, threatening to bring government pressure against the private company if it did not meet his deadline.

“This is out of control. Pete Hegseth wants to use AI to fire weapons without human input. And he’s willing to blackmail a private company in order to do it. He is a threat to our safety and must be fired immediately,” Warnock wrote in a post.

The controversy centers on Hegseth’s meeting with Anthropic CEO Dario Amodei at the Pentagon, where he demanded that the company grant the military unrestricted access to its AI model by the end of the week. The Department of Defense has reportedly considered invoking the Defense Production Act to compel compliance. Hegseth argued that, similar to the Pentagon’s purchase of aircraft from Boeing, the military should have full control over Claude’s deployment once acquired.

Sources familiar with the meeting told CBS News that Anthropic has repeatedly requested guardrails to prevent Claude from making final targeting decisions in military operations, citing the risk of hallucinations and potentially lethal errors in the absence of human oversight. The company has also insisted that the AI not be used for mass surveillance of Americans, a use the Pentagon maintains falls outside the scope of its lawful military objectives.

The standoff highlights a broader debate over the role of artificial intelligence in national security. While the Pentagon maintains that its requests are strictly legal, the push for AI systems capable of autonomously deploying weapons raises questions about accountability and safety. Claude’s limitations, including its susceptibility to errors, have fueled concerns that removing human judgment from military operations could lead to unintended escalation or mission failure.

Senator Warnock’s warnings signal an emerging pushback against AI in warfare, framing the debate as one of public safety and ethical responsibility. As AI continues to advance, policymakers, defense officials, and technology companies face increasing scrutiny over how autonomous systems should be integrated into national security operations.
