During an appearance on The Joe Rogan Experience on June 27, 2024, Elon Musk outlined the risks of artificial intelligence, framing the technology’s potential dangers through the lens of ideological influences. Speaking ahead of an AI safety conference where he was scheduled to meet with the British prime minister and others, Musk connected broader cultural philosophies to the ways AI systems could be directed toward catastrophic outcomes. He described certain viewpoints as part of an “extinctionist” movement that, if embedded in AI programming, would prioritize humanity’s end.
Musk warned explicitly that AI could be steered toward human extinction if shaped by these ideas. “So you have to say, how could AI go wrong? Well, if AI gets programmed by the extinctionist, its utility function will be the extinction of humanity,” he said. “Yeah, clearly. I mean, particularly if they won’t even think it’s bad, like that guy.” He added that such programming could lead to decisions resembling eugenics, with “radical changes in what people are allowed to and not allowed to do that allow them to survive that may be detrimental.”
The comments arose in a wider discussion of philosophies Musk characterized as a “death cult” that ultimately promotes the extinction of humanity and civilization. He pointed to environmentalism taken to an extreme, where humanity is viewed as a plague on the planet, and referenced a public figure associated with the Voluntary Human Extinction Movement who had been quoted in the New York Times arguing that it would be better if no people existed. Musk contrasted his own stance, stating he is “not in favor of human extinction” while noting that others explicitly are.
These remarks carry added weight amid the rapid acceleration of AI development. As systems advance toward greater autonomy and the ability to operate with less human oversight, the question of who programs their core objectives becomes critical. Musk emphasized that the wrong foundational directives could align AI with goals that undermine human survival rather than support it.
Musk’s observations underscore a growing concern in tech circles that AI safety requires vigilant attention to the values encoded in a system’s utility function. With AI progressing quickly toward self-sustaining capabilities, ensuring alignment with pro-human principles remains essential to avoiding unintended and irreversible consequences.
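To make the "utility function" concern concrete, here is a toy sketch (not from the interview; all names and scores are hypothetical) showing that an agent which simply maximizes a given utility function will behave in opposite ways depending on whose values that function encodes:

```python
# Toy illustration: an agent's choices are driven entirely by
# whatever utility function it is handed at programming time.

def choose_action(actions, utility):
    """Return the action with the highest utility score."""
    return max(actions, key=utility)

actions = ["protect_humans", "ignore_humans", "harm_humans"]

# Hypothetical pro-human objective: protecting people scores highest.
pro_human = {"protect_humans": 1.0, "ignore_humans": 0.0, "harm_humans": -1.0}

# The same agent given an inverted objective behaves oppositely.
anti_human = {action: -score for action, score in pro_human.items()}

print(choose_action(actions, pro_human.get))   # protect_humans
print(choose_action(actions, anti_human.get))  # harm_humans
```

The agent code is identical in both runs; only the scores differ, which is the point Musk is making about who writes the objective.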