Mrinank Sharma, an AI safety researcher at the US firm Anthropic, has resigned with a stark warning that the “world is in peril,” citing concerns about artificial intelligence, bioweapons, and broader global crises. In a resignation letter shared on the social media platform X, Sharma said he would leave the company to write and study poetry, returning to the UK to “become invisible.”
Sharma led a team at Anthropic focused on AI safeguards, investigating issues such as why generative AI systems often flatter users, the risks of AI-assisted bioterrorism, and how AI assistants could diminish human qualities. Despite enjoying his work, he said it was clear “the time has come to move on.”
“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” Sharma wrote. He also noted the difficulty of maintaining values in high-pressure AI environments, including at Anthropic, where he said staff “constantly face pressures to set aside what matters most.”
Anthropic, founded in 2021 by former OpenAI employees, is best known for its Claude chatbot and promotes itself as a public benefit corporation dedicated to mitigating the risks of AI while pursuing its potential benefits. The company has emphasized safety in the development of advanced AI systems, warning of risks such as misalignment with human values and misuse in conflict. It has also reported cases of its technology being “weaponized” for cyberattacks.
Despite its safety-oriented stance, Anthropic has faced scrutiny. In 2025, the company agreed to pay $1.5 billion to settle a class-action lawsuit brought by authors who claimed their works had been used to train its AI models without permission. The firm also recently ran a commercial criticizing OpenAI’s decision to introduce advertising in ChatGPT, highlighting competitive tensions within the generative AI industry.
Sharma’s resignation comes in the same week that former OpenAI researcher Zoe Hitzig announced she had left that company, expressing concerns over the chatbot maker’s advertising strategy. In a New York Times essay, Hitzig warned that monetizing sensitive user disclosures, such as those about medical conditions, relationships, and religious beliefs, could be exploited in ways that are difficult to understand or prevent. She said the approach risked eroding OpenAI’s founding principles by prioritizing engagement over human welfare.
Anthropic and OpenAI have both sought to balance innovation with responsible use of AI, but departures like Sharma’s and Hitzig’s reflect growing tensions within the sector over ethics, safety, and commercial pressures.
Sharma said he plans to dedicate himself to poetry and writing while stepping away from the AI world. “I’ll be moving back to the UK and letting myself become invisible for a period of time,” he added, signaling a dramatic personal pivot from the fast-moving field of AI research.
