Anthropic’s head of AI safety leaves the company to write poetry

In a move more typical of a “nihilistic penguin” than of the head of safety at one of the main protagonists of AI development, Mrinank Sharma, head of artificial intelligence safety at Anthropic, has announced his resignation in a public letter on his X profile, saying he will dedicate his life to writing poetry. In his statement, Sharma not only explained why he is leaving the company behind the Claude models, but also described the current state of AI development in language that mixes alarm with personal reflection. “The world is in danger,” said the former Anthropic director.

The context: who he is and what he did at Anthropic. Mrinank Sharma led Anthropic’s Safeguards Research Team, a research group focused on studying the risks associated with AI systems. Within Anthropic, Sharma’s work included developing defenses against risks such as AI-assisted bioterrorism and studying phenomena such as sycophancy (the tendency of AI models to flatter the user), as well as investigating how AI can influence human perception and change cultural behaviors.

He leaves, but leaves a message. The almost cryptic letter Sharma published on X quickly went viral because of the messages it contained. In it, he expressed his concerns in a tone that goes beyond the technical. One of the most widely quoted passages: “The world is in danger. And not only because of AI, or biological weapons, but because of a series of interconnected crises that are developing at this very moment.” Beyond the near-apocalyptic literalism, Sharma warned that humanity is approaching a critical point at which AI development poses ethical dilemmas for those building it: “our wisdom must grow at the same rate as our ability to affect the world, otherwise we will face the consequences.”

Working to put themselves out of work. Sharma is not the only one facing this ethical dilemma.
According to sources cited by The Telegraph, other Anthropic employees have expressed concern about the enormous evolutionary leap in the latest AI models. “I feel like I come to work every day to put myself out of work,” one of the employees acknowledged to the British outlet. In a way this is true, since these employees are working on the development of a technology that, in all likelihood, will change the nature of their work, and that of millions of people, within a few years.

Is that good or bad? A first reading of the letter leaves the impression that these workers are developing the weapon that will destroy humanity. A reading between the lines, however, leaves Anthropic in a pioneering position compared with rivals such as OpenAI, Microsoft or xAI: it is advancing at a pace that overwhelms even its own developers, a feeling that does not seem to exist among the staff of other companies. Could it be that their models are simply not at that stage of evolution? “Throughout my time here, I have seen repeatedly how difficult it is to allow our values to guide our actions. We constantly face pressure to let go of what matters most,” Sharma wrote.

The poetic turn. Beyond reflecting on the global risks he perceives, Sharma announced that his next professional step will be very different from his current one. In his letter he mentioned his intention to devote time to what he called “the practice of courageous speech” through poetry. This turn toward poetry has been interpreted as a sign of dissatisfaction with the pace and focus prevailing in the AI industry. Like Sharma, other key figures in Anthropic’s AI development have announced their resignations in recent weeks. Harsh Mehta and Behnam Neyshabur also announced a few days ago that they were leaving the company. In their cases, however, the exit announcement was immediately followed by the announcement of a new AI project.
That is to say, far from the ethical concerns Sharma raised, their intention was more along the lines of digging their own gold mine rather than someone else’s.

In Xataka | Daniela Amodei, co-founder of Anthropic: “studying humanities will be more important than ever”

Image | Mrinank Sharma, Anthropic

“They are brilliant researchers under the control of an authoritarian government”: Anthropic’s CEO speaks about DeepSeek

Amid the stir caused by DeepSeek’s latest models, the CEO of Anthropic, Dario Amodei, has published an analysis on his personal website questioning the narrative of a “Chinese miracle” in artificial intelligence.

Why it matters. The debate over China’s capacity to develop advanced AI has dominated the agenda in recent days following DeepSeek’s releases, which at one point triggered a 17% drop in Nvidia shares.

The facts. DeepSeek claims to have developed its V3 model for just under 6 million dollars, while Amodei explained that Claude 3.5 Sonnet, Anthropic’s latest and most advanced model, required “some tens of millions” in training. Far from the billions that had been speculated. “DeepSeek has produced a model close to the performance of US models from 7 to 10 months ago, at a rather lower cost, but not in the proportions that have been suggested,” said the CEO. DeepSeek operates with about 50,000 Hopper-generation chips, a capacity that Amodei considers similar to that of the main American tech companies. According to his analysis, DeepSeek’s advances reflect the natural reduction of costs in the sector, which he estimates at around 75% per year.

The context. DeepSeek has presented two models: V3, which uses traditional training, and R1, which incorporates reinforcement learning. For Amodei, the real innovation is in V3, not in R1, which, according to him, follows paths already explored by other tech companies.

Turning point. Developing an AI superior to human intelligence will require millions of chips and tens of billions of dollars in the coming years. “Between 2026 and 2027 we will see AI that is smarter than almost all humans at almost all tasks,” he said. In this scenario, he has defended export controls as a strategic tool. Amodei has also recognized the talent of DeepSeek’s engineers, although he has warned about the implications of a company operating under the control of the Chinese government.
For him, the growing efficiency in AI development justifies reinforcing, not relaxing, trade restrictions. In fact, he had some words of praise for DeepSeek’s team, though not for its nation: “They are brilliant and curious researchers who only want to create useful technology, but they are subject to an authoritarian government that has committed human rights violations and has behaved aggressively on the world stage.”

In Xataka | “Google gives you links, Perplexity gives you answers”: we talk to the CEO of the startup that wants to kill the father

Featured image | TechCrunch
