A high-ranking U.S. Army general has revealed that he relies on the AI chatbot ChatGPT to help make command decisions affecting thousands of soldiers, raising concerns about national security and confidentiality.
Major General William “Hank” Taylor, the former commanding general of the 8th Army in South Korea, told reporters at the Association of the United States Army conference in October 2025 that he has developed a close relationship with the artificial intelligence tool.
“I’ve become—Chat and I are really close lately,” Taylor said, using a familiar nickname for ChatGPT. The general, who also serves as Director of Operations for United Nations Command and US Forces Korea, emphasized that his goal is to leverage AI technology to gain a strategic advantage.
Using AI for Command Decisions
“As a commander, I want to make better decisions,” Taylor explained. “I want to make sure that I make decisions at the right time to give me the advantage.”
Futurism, citing Business Insider, reported that Taylor said he is “asking to build, trying to build models to help all of us,” and that he uses ChatGPT to inform both military and personal decisions affecting soldiers under his command.
The general’s admission comes as U.S. military agencies explore various applications of generative artificial intelligence, though such an open acknowledgment of relying on a commercial AI chatbot for command decisions is unprecedented.
Security Concerns Raised
Taylor’s use of ChatGPT has sparked immediate concerns among defense experts. The Mirror reports that the revelation is “raising security alarms,” while The Express notes it raises “questions about confidentiality and national security.”
The concerns stem from the fact that ChatGPT is a commercial product developed by OpenAI: information entered into the chatbot could be stored on external servers or accessed by third parties. Military leaders feeding classified or sensitive information into such tools could create significant security vulnerabilities.
Ars Technica described Taylor’s familiar reference to ChatGPT as “Chat” as a “distressingly familiar diminutive nickname,” highlighting the concerning level of dependence the general expressed on the AI tool.
The revelation has sparked debate about the appropriate role of artificial intelligence in military decision-making and whether proper safeguards are in place to protect sensitive information and national security interests.