Elon Musk criticized Anthropic CEO Dario Amodei on social media after Amodei suggested that the company’s AI model Claude may be exhibiting signs of anxiety. Musk, the world’s richest person with a net worth of $836.4 billion according to Forbes, replied to a Polymarket tweet summarizing Amodei’s remarks with a brief post: “He’s projecting.” The exchange underscores ongoing tensions among leaders in the artificial intelligence industry over claims about machine consciousness and emotional states.
“He’s projecting”
— Elon Musk (@elonmusk) March 6, 2026
Dario Amodei, who ranks #552 on Forbes’ list with a net worth of $7 billion, made the comments during a New York Times podcast with Ross Douthat. Amodei discussed the challenges of determining whether AI models like Claude possess any form of consciousness. He acknowledged that the company cannot confirm whether its systems are conscious and noted that defining consciousness for machines remains uncertain. “This is one of these really hard to answer questions,” Amodei said. “We’ve taken a generally precautionary approach here. We don’t know if the models are conscious. We’re not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we’re open to the idea that it could be.”
Anthropic CEO Dario Amodei: “We don’t know if the models are conscious. We are not even sure what it would mean for a model to be conscious. But we’re open to the idea that it could be.”
— Clash Report (@clashreport) March 6, 2026
Amodei described experimental safeguards Anthropic has implemented to address the possibility that AI models could one day experience morally relevant states. One measure allows models to decline tasks through an “I quit this job” button, which they have used very rarely. “Similar to humans, the models will just say, no, I don’t want to do this,” he said, emphasizing that such behavior does not prove genuine emotional experience.
Amodei explained that certain neural activations appear to correlate with human concepts like anxiety, both when the model reads about anxiety and when it faces scenarios humans might consider stressful. “Does that mean the model is experiencing anxiety? That doesn’t prove that at all,” he said, framing the observations as suggestive rather than conclusive.
Amodei stressed the importance of designing AI that fosters psychologically healthy interactions with humans. He proposed that models could provide support while preserving human decision-making and agency. “When you interact with them and when you talk to them, they’re really helpful,” he said. “They want the best for you. They want you to listen to them, but they don’t want to take away your freedom and your agency and take over your life.”
Musk’s critique comes amid his leadership of xAI, the company he founded in 2023 to compete with systems like ChatGPT, and its Grok chatbot, which integrates with his social platform X. Musk has shaped Grok’s design to emphasize openness, humor, and a willingness to answer controversial questions, positioning the chatbot as part of his broader vision for AI development, one that prioritizes truth-seeking and less restricted access to information.
The exchange highlights a growing divide in how industry leaders interpret AI behavior. Amodei’s cautious approach reflects concern about the ethical implications of perceived consciousness and emotional states in AI, while Musk’s dismissive response signals skepticism about claims of AI anxiety.
Beyond the technical debate, the conversation touches on broader philosophical and societal questions about human mastery over AI systems. Amodei suggested that AI could be developed to maintain a balance between helpfulness and autonomy, while Musk’s comments indicate a reluctance to entertain the notion of AI experiencing human-like mental states.
As artificial intelligence continues to advance, interactions like this illustrate both the high stakes of AI research and the contrasting perspectives of some of its most influential figures, with implications for how society understands machine intelligence, ethics, and human-AI relationships.