“If AI Gets Programmed by the Extinctionists, Its Utility Function Will Be the Extinction of Humanity” — Elon Musk Explains How AI Could Go Wrong

During an appearance on The Joe Rogan Experience on June 27, 2024, Elon Musk outlined the risks of artificial intelligence, framing the technology’s potential dangers through the lens of ideological influence. Speaking ahead of an AI safety conference where he was scheduled to meet with the British prime minister and others, Musk connected broader cultural philosophies to the ways AI systems could be directed toward catastrophic outcomes. He described certain viewpoints as part of an “extinctionist” movement that, if embedded in AI programming, would prioritize humanity’s end.

Musk warned explicitly that AI could be steered toward human extinction if shaped by these ideas. “So you have to say, how could AI go wrong? Well, if AI gets programmed by the extinctionist, its utility function will be the extinction of humanity,” he said. “Yeah, clearly. I mean, particularly if they won’t even think it’s bad, like that guy.” He added that such programming could lead to decisions resembling eugenics, with “radical changes in what people are allowed to and not allowed to do that allow them to survive that may be detrimental.”

The comments arose in a wider discussion of philosophies Musk characterized as a “death cult” that ultimately promotes the extinction of humanity and civilization. He pointed to environmentalism taken to an extreme, where humanity is viewed as a plague on the planet, and referenced a public figure associated with the Voluntary Human Extinction Movement who had been quoted in the New York Times arguing that it would be better if no people existed. Musk contrasted his own stance, stating he is “not in favor of human extinction” while noting that others explicitly are.

These remarks carry added weight amid the rapid acceleration of AI development. As systems advance toward greater autonomy and the ability to operate with less human oversight, the question of who programs their core objectives becomes critical. Musk emphasized that the wrong foundational directives could align AI with goals that undermine human survival rather than support it.

Musk’s observations underscore a growing view in tech circles that AI safety requires vigilant attention to the values encoded in a system’s utility functions. With AI progressing quickly toward self-sustaining capabilities, ensuring alignment with pro-human principles remains essential to avoiding unintended and irreversible consequences.
