The AI industry fell in love with OpenAI, but doesn’t trust its CEO one bit

At OpenAI, they envision a future in which the work week has four days. Not only that: every citizen would receive a share of the economic growth generated by AI. These are some of the proposals the company published yesterday with the aim of preparing us for the “age of intelligence.”

And on the very day it published that proposal full of good and reassuring intentions, a blow landed on OpenAI’s CEO, Sam Altman. An investigation published in The New Yorker once again called his conduct into question, conduct harshly criticized by experts and engineers who have worked with him. Their shared conclusion: better not to trust Sam Altman.

The arrival of the age of intelligence. What they call the “age of intelligence” will undoubtedly have a negative impact in some areas, but in its document OpenAI proposes changes to mitigate those problems. Among the most striking measures is the creation of a “public wealth fund” that would distribute dividends from AI directly to citizens, regardless of their employment status.

Let the machines work (and pay us for it). The company also suggests taxes on automated labor to finance social security, as well as pilot projects for four-day work weeks with no salary reduction. The proposal is striking and seeks, of course, to reassure citizens in the face of threats such as the job losses that mass adoption of AI could cause. The problem is that it arrives at a delicate moment for an OpenAI in the midst of a reputational crisis.

Smokescreen? This optimistic proposal contrasts with the report published in The New Yorker, for which the authors interviewed more than 100 people “with first-hand knowledge of how Altman behaves in business.” Among them are rivals such as Ilya Sutskever and, above all, Dario Amodei, both of whom went on to found their own startups, and both of whom harshly criticized Altman. Sutskever accumulated internal documents and messages showing deception and manipulation. Amodei stated that the obstacle to AI safety is Altman himself, who relegates that area to the background in favor of his personal ambition for power and the company’s excessive growth. For his former partners, Altman is not a visionary but an actor striking a calculated pose.

Says one thing, does another. The scandal of Altman’s firing and subsequent return stemmed precisely from that attitude: the board accused him of not having been “consistently candid in his communications.” It’s the same thing we’ve read on other occasions: Altman has a dual personality. In him, a pathological desire to be liked and accepted mixes with a total lack of concern for the long-term consequences of his misdeeds. He tells his interlocutors what they want to hear, and then does what he wanted to do from the beginning. It is something that, for example, Karen Hao recounts over and over in her book ‘Empire of AI’, which, it must be said, erred in calculating the water consumption of the data centers cited in its studies. The report also mentions how the well-known programmer Aaron Swartz met him before his death in 2013 and even then remarked that “he is a sociopath.”

Public image is everything. The publication of the OpenAI document comes at a particularly critical time for the company, which is mired in a reputational and strategic crisis. Anthropic has managed to become the darling of the AI industry (though it is far from perfect), and OpenAI has realized that it was experimenting with too many unprofitable AI applications and now wants to refocus on what makes it money. The good intentions shown in the document aim to win over public opinion just as the company plans its IPO.

Learning from the past. Altman’s critics reveal that he is an expert at designing control mechanisms that go up in smoke. He supports AI regulations (at least those that favor him) and publicly promotes ethics committees and AI alignment and safety efforts that, according to those who have worked with him, he later dismantles internally. It happened when he promised to allocate 20% of the company’s computing capacity to the superalignment team and then actually provided only 1 to 2% of it. Jan Leike, who was named co-leader of that team alongside Sutskever, resigned in May 2024, explaining in a thread on X that “safety culture and processes have taken a backseat to shiny products.” He ended up joining Anthropic.

Self-serving criticism. Although Altman’s track record at the head of OpenAI (with the Pentagon episode as a recent example) reinforces the comments of his critics, it must be remembered that competition in this industry is currently fierce. Many of those who participated in the report are direct rivals, and their criticism, veiled or not, is therefore partly self-serving, since it harms a competitor.

In Xataka | There is a new generation of AI models at the doors and Anthropic has to sell them: “The biggest and smartest”
