The AI industry fell in love with OpenAI, but doesn't trust its CEO one bit

At OpenAI they envision a future in which the work week should have four days. Not only that: every citizen should receive a share of the economic growth generated by AI. These are some of the proposals the company published yesterday with the aim of preparing us for the "age of intelligence." And on the very day it published that proposal full of good and reassuring intentions, a blow landed on OpenAI's CEO, Sam Altman. An investigation published in The New Yorker once again called into question his way of operating, harshly criticized by experts and engineers who have worked with him. Their shared conclusion: better not to trust Sam Altman.

The arrival of the age of intelligence. What they call the "age of intelligence" will undoubtedly have a negative impact in some areas, but with its document OpenAI proposes changes to mitigate those problems. Among the most striking measures is the creation of a "public wealth fund" that would distribute AI dividends directly among citizens, regardless of their employment status.

Let the machines work (and pay us for it). The company also suggests taxes on automated labor to finance social security, as well as pilot programs for four-day work weeks without salary reduction. The proposal is striking and seeks, of course, to reassure citizens in the face of threats such as the job losses that mass adoption of AI could cause. The problem is that this proposal arrives at a delicate moment for an OpenAI in the midst of a reputational crisis.

Smokescreen? This optimistic proposal contrasts with the report published in The New Yorker, for which the authors interviewed more than 100 people "with first-hand knowledge of how Altman behaves in business." Among them were rivals such as Ilya Sutskever and, above all, Dario Amodei, both of whom went on to found their own startups. Both harshly criticized Altman. Sutskever accumulated internal documents and messages showing deception and manipulation.
Amodei stated that the main obstacle to AI safety is Altman himself, who relegates that area to the background in favor of the company's ambition for personal power and unchecked growth. For his former partners, Altman is not a visionary but an actor striking a calculated pose.

Says one thing, does another. The scandal of Altman's dismissal and later return was due precisely to that attitude: the board accused him of not having been "consistently candid in his communications." It is the same thing we have read on other occasions: Altman has a dual personality. In him, a pathological desire to be liked and accepted mixes with a total lack of concern for the long-term consequences of his misdeeds. He tells his interlocutors what they want to hear, and then does what he wanted to do from the beginning. It is something that, for example, Karen Hao recounts over and over again in her book 'Empire of AI', which, it must be said, erred in calculating the water consumption of the data centers mentioned in its studies. The report also mentions how the well-known programmer Aaron Swartz met him before his death in 2013 and said of him even then that "he is a sociopath."

Public image is everything. The publication of the OpenAI document comes at a particularly critical time for the company, which is immersed in a reputational and strategic crisis. Anthropic has managed to become the darling of the AI industry (without being anywhere near perfect), and OpenAI has realized that it was experimenting with too many AI applications that were not profitable and now wants to refocus on what makes it money. The good intentions shown in the document try to win over public opinion just as the company plans its IPO.

Learning from the past. Altman's critics reveal that he is an expert at designing control mechanisms that go up in smoke.
He supports AI regulations (at least those that favor him) and publicly promotes ethics committees and AI alignment and safety efforts that, in reality, he later dismantles internally, at least according to those who have worked with him. It happened when he promised to allocate 20% of the company's computing capacity to the superalignment team, and then actually granted only 1 to 2% of that capacity. Jan Leike, who had been named co-lead of that team alongside Sutskever, resigned in May 2024, explaining in a thread on X that "safety culture and processes have taken a backseat to shiny products." He ended up joining Anthropic.

Self-interested reviews. Although Altman's track record at the head of OpenAI (with what happened with the Pentagon as a recent example) reinforces the comments of those who criticize him, it must be remembered that competition in this industry is currently fierce. Many of those who participated in the report are direct rivals, and their criticism, veiled or not, is therefore partly self-serving, since it harms a competitor.

In Xataka | There is a new generation of AI models at the door and Anthropic has to sell them: "The biggest and smartest"

In 2016, a urologist discovered that a Disneyland attraction helped expel kidney stones. The story was a bit more complicated

In September 2016, David Wartinger (urologist and professor emeritus at Michigan State University) published a study according to which roller coasters facilitated the passage of kidney stones and, of course, the media went crazy. Kidney stones usually cause intense pain, bleeding, nausea and vomiting. In many cases they do not even pass through the ureter on their own and have to be pulverized with shock waves. The mere idea that a simple ride on a fairground attraction could help solve the problem was a bombshell. Of course, the story came with some fine print.

Big Thunder Mountain Railroad. At some point in the late nineteenth century, someone found a colossal gold vein in Big Thunder, a mountain in the southwest of the United States. The small mining camp nearby quickly became a prosperous town, and the mountain was filled with an intricate system of trains that moved the ore from one place to another. No one suspected that Big Thunder was a sacred place for the Native Americans. Or, rather, a cursed place that the natives avoided like the plague. It was inevitable that, sooner or later, catastrophe would strike. And it did. Some speak of an earthquake, others of a flash flood. Be that as it may, the town was abandoned and the mine was closed until, years later, an expedition discovered that the wagons kept moving through the heart of the mountain with no one driving them. No, as Randy Meeks explained to us, it is not a real event: it is the plot of Disneyland's most mythical ride. And that is where the story begins.

A urologist in Michigan. As Wartinger himself explained, the idea came unexpectedly.
"Basically, I had patients who told me that after riding a particular roller coaster at Walt Disney World, they were able to pass their kidney stones (…) I even had a patient who said he passed three different stones after riding several times." Apparently, he did not pay the theory much attention until, already retired, he discovered that other urologists were reporting similar things. It was then that he felt the need to put it to the test. He did not do it with people, of course: he did it with a 3D model of a kidney containing three stones. He put it in a backpack, went to Orlando and rode Big Thunder Mountain 20 times.

And what did he discover? "In total, we used 174 kidney stones of varying shapes, sizes and weights to see if each model worked on the same attraction and on two other roller coasters," Wartinger explained in MSU Today. Interestingly, Big Thunder was the only one that gave positive results. "In the pilot study, sitting in the last car of the roller coaster showed a passage rate of around 64 percent, while sitting in the first cars only had a passage rate of 16 percent."

A simple curiosity… that was quickly taken out of context. With the help of Wartinger himself, by the way. As Snopes explained, in the interviews he gave, the retired urologist went far beyond reporting the results of his study. "If you have a kidney stone, but are otherwise healthy and meet the requirements of the ride, patients should try it. It is certainly a cheaper alternative to health care," he said. The problem is that the basis for this is weak.

What does science actually say? There is no doubt that, as other urologists pointed out, this preliminary evidence suggests that a roller coaster can help naturally expel (very) small stones. But, obviously, there is no empirical basis for recommending it. In other words, it is a very striking study that could have led to an interesting clinical trial. A trial that was never conducted.
And without it, presenting this as an alternative is risky: the experience of riding that roller coaster with a large stone can be terrible.

Image | Renato Mitra

In Xataka | There are people addicted to drinking up to 15 liters of water a day (and it is a more serious problem than it seems)
