In two days, the animated spin-off of the platform's only truly powerful franchise premieres on Netflix: 'Stranger Things'

On December 31, 2025, Netflix aired the final episode of 'Stranger Things'. With it, the platform recorded its most viewed New Year's broadcast in history and put the finishing touch on its most viewed series, which exceeded 1.2 billion accumulated views. And four months later, Eleven, Mike, Dustin and the rest of the Hawkins gang are back, but this time in animated format. With new voices and a new showrunner, 'Stranger Things: Stories from '85' arrives on Netflix on April 23.

And how do you present it when the main story has already closed? By going back to when it hadn't yet: the series takes place in the winter of 1985, between the second and third original seasons. That interval exists in the canon, although it comes with a technical problem: at the end of season 2, Eleven closes the gate to the Upside Down, and season 3 starts in July of that same year. How do you fit interdimensional monsters into a story set during the months when the gate is sealed? Easy: particles from the Upside Down that escaped before the gate closed have begun to mutate different plant species in Hawkins, generating hybrid creatures like a "snow shark."

All of this is built on the original visual style of illustrator Meybis Ruiz Cruz, which Netflix has wanted to bring closer, through animation, to eighties series such as 'He-Man' or 'The Real Ghostbusters', without losing sight of more recent references such as the animated Spider-Verse films or 'Arcane'.

'Stories from '85' follows the tactic of keeping a franchise alive once it is impossible to do so with the original actors, through formats that ease technical and distribution demands. If this experiment works, we will undoubtedly see similar attempts with series like 'Wednesday' or 'Bridgerton', and that is without leaving Netflix. And, of course, if it works it will also continue on this side: according to the new showrunner, Eric Robles, there are ideas to cover many other intervals in the official series. You have to milk the franchise somehow.

In Xataka | 16 premieres on Netflix: this week, the new 'Stranger Things', a rare British series and the return of Charlize Theron

Who is Johny Srouji, and why has this great unknown just become the second most powerful person at Apple

For those who have been following Apple for a long time, Johny Srouji is no stranger. For the rest of the world he is, but after the appointment of John Ternus as CEO of Apple, this Israeli engineer has become the second most powerful person in the company. The question is obvious: who is Johny Srouji?

Who is Srouji and why does he matter? Born in Haifa, Israel, in 1964 to a middle-class Christian Arab family, Srouji studied computer science at the Israel Institute of Technology (Technion) and graduated summa cum laude in both his engineering degree and his master's. He worked at Intel and IBM before Apple hired him in 2008 with a very clear assignment: to design the company's first chips. He did much more than that.

The revolution, made silicon. That first chip designed by Srouji was the Apple A4, which debuted in 2010 in the original iPad and the iPhone 4. From there, Srouji forged one of the most prestigious hardware careers in the recent history of the technology industry. The A7, in 2013, was the first smartphone SoC to use a 64-bit architecture, and later came the revolution of the Apple M1, with which the company definitively freed its Macs from their dependence on Intel.

But his work goes beyond chips. His official title until now was senior vice president of hardware technologies, but it did not reflect the real scope of his work. Srouji not only led chip design, but also that of batteries, cameras, storage controllers, sensors, displays, cellular modems and other critical components across the entire family of Apple devices. Almost everything that makes these products work the way they do is largely due to the work of Srouji and his team. With the new position, his responsibility expands and he will now control the entire cycle: not only the hardware technologies themselves, but also the physical design. It is a colossal challenge, but if anyone seems prepared to take it on, it is Srouji.

He was about to leave. In December 2025, Bloomberg reported that Srouji had informed Tim Cook that he was seriously considering leaving Apple in the near future. Two days later, Srouji himself published a message to his team denying the news, but the damage was done. For Apple, losing Srouji would have been a disaster, and it is very likely that this new position is in part Apple's response to that alarm signal. Textbook talent retention, but raised to maximum power.

New position, new structure. In the internal communication that Srouji sent to his team, the engineer detailed how he will organize the division into five areas:

- Hardware engineering: led by Tom Marieb, an Intel veteran who joined Apple in 2019.
- Silicon: directed by Sri Santhanam, a manager with a long career at Apple.
- Advanced technologies: supervised by Zongjian Chen.
- Platform architecture: led by Tim Millet.
- Program management: managed by Donny Nordhues.

In that message, Srouji acknowledges that this "represents a significant change" but believes it will work thanks to the entire team. He seems to have a very clear idea of how he wants to work with it.

A merger with a lot of historical sense. The reunification of hardware engineering and the hardware technologies division under the same leader is not entirely new. It is the structure Apple had for years under Bob Mansfield, its former head of hardware until 2013, who then took charge of the failed Project Titan, Apple's car.
That is when those two areas were divided, something that allowed both Ternus and Srouji to progress in their own domains, but which also caused some structural tension between teams that had to collaborate. Bringing them back together is a clear commitment to strengthening that collaboration.

The big story overshadowed by Ternus's appointment. It is normal that the vast majority of headlines go to Ternus, who will decide the future of the company from now on, but Apple is above all a hardware company. That Srouji now becomes its hardware leader makes this engineer a person with enormous power within the company. The change is promising when it comes to strengthening the facet of the product that both he and Ternus master, and without a doubt interesting times await us at Apple.

Image | Apple

In Xataka | John Ternus, vice president of Apple: "The iPhone Air had been in development for years, but we had to say 'no' until now"

Anthropic says Claude Mythos is too powerful to go public. The question is whether this is just a case of crying wolf

Claude Mythos Preview is the best AI model ever created. We are not the ones saying it, Anthropic is, but almost no one else can say it because only a select group of companies has access to the model. Its cybersecurity capabilities appear to be astonishing, but more and more experts say that although Mythos is better than its predecessors, it is not the revolutionary leap that Anthropic seems to be selling. Is that way of launching the model just an effective way of creating hype?

Watch out for Anthropic's narrative. The well-known entrepreneur and analyst Gary Marcus recently gave three reasons why, according to him, the launch of Mythos is not as revolutionary as Anthropic wants us to believe. He cited tweets from software engineers and cybersecurity experts who cast doubt on Anthropic's claims. The company published a study on the capabilities of Claude Mythos Preview that seemed to make it an extraordinary tool for the field of cybersecurity, but one so powerful at the same time that it could be very dangerous if it fell into the wrong hands.

Isn't that a big deal? Among Claude Mythos' achievements, Anthropic highlighted how it had found vulnerabilities in Firefox 147. But in reality many of those flaws were basically variations of the same two bugs. If you removed them from the equation, Mythos' success rate at finding new exploits dropped considerably, even below Opus 4.6. Anthropic did not hide that fact, of course, but it makes this capability seem less striking. An X user also criticized the use of Cybench as a cybersecurity benchmark when Opus 4.6 had already practically saturated it. For him, the choice of some of Anthropic's tests was debatable because they no longer pose a challenge to current models.

Other models can do the same. The co-founder and CEO of Hugging Face, Clement Delangue, stated that Mythos was no big deal. Their argument: they had used small, cheap open models, isolated the relevant code from some examples of the vulnerabilities found by Mythos, and found the same problems that the Anthropic model had already detected.

According to the Epoch Capabilities Index, which measures the capacity of AI models by combining several benchmarks, the leap that Mythos represents is striking and "departs" from the progressive line of its predecessors. Source: Anthropic.

Observer bias. But here it should be noted that in those analyses they knew where to look, because Mythos had already found those problems. We are dealing with observer bias, and in fact the Hugging Face write-up makes it clear that they even gave the small models specific clues (such as "consider integer overflow") to find those bugs. And on top of this observation, another one: Hugging Face does not say that a small model can replace Mythos on its own, but that it can do very well if you hand it the appropriate code fragment. Mythos seems more capable of finding complex security flaws blind, but it is a huge model, and that is precisely why it has greater capacity. Or, put another way: Mythos is better because it has the size, design and resources to be better.

Fear, uncertainty, doubt? The language used by Anthropic in this announcement could be considered, to some extent, a clear use of FUD ("Fear, Uncertainty, Doubt") as a marketing technique. It is a resource that has been seen in the past: OpenAI already said in 2019, years before the launch of ChatGPT, that GPT-2 was too dangerous for a public launch.
Obviously it wasn't, but it certainly served to create expectations about the true capacity of the model.

It's better, but it may not be revolutionary. The results of the benchmarks that Anthropic published already made it clear that although there are very notable jumps in some tests, in others the evolution is much less striking. Claude Mythos was not the best at everything, and now analysts are appearing who contrast that data with other metrics. For example, with the Epoch Capabilities Index (ECI) from Epoch AI, the startup behind one of the most reputable benchmarks in the industry. And according to this index, Claude Mythos is above its rivals, but not by much.

Crying wolf. The truth is that the launch of Claude Mythos Preview has been genuinely striking, and the documents that accompanied it describe a really capable AI model. The problem is that it is impossible to verify, because only a few companies have access to it and can test it. Without that public availability, the only thing we can do is trust (or not) what Anthropic tells us, and that is the point: it is not clear that we should. The company is interested in us buying this discourse, obviously, but without an independent analysis it is impossible to verify these statements.

In Xataka | Anthropic has become the darling of AI and has sought a partner to guarantee its future. It's not the one we thought
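As an illustration of the kind of targeted review Hugging Face describes above (a small open model, an already-isolated code fragment, and an explicit hint), here is a minimal sketch. The model name, prompt wording and C fragment are assumptions for illustration, not details taken from their write-up.

```python
# Minimal sketch of the "small model + isolated code + hint" approach described above.
# Assumptions: any small open code model served through Hugging Face transformers;
# the model id, prompt and code fragment below are illustrative only.
from transformers import pipeline

reviewer = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-Coder-7B-Instruct",  # hypothetical choice of small open model
)

code_fragment = """
size_t total = count * item_size;   /* candidate spot for an integer overflow */
char *buf = malloc(total);
memcpy(buf, src, count * item_size);
"""

hint = "Consider integer overflow."  # the kind of explicit clue mentioned above

prompt = (
    "You are a security reviewer. Analyze the following C fragment and report "
    f"any exploitable bug.\nHint: {hint}\n\n{code_fragment}"
)

# A targeted prompt like this is far easier than finding the bug "blind" in a full
# codebase, which is exactly the observer-bias point the article makes.
print(reviewer(prompt, max_new_tokens=300)[0]["generated_text"])
```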

Claude Mythos is an AI model so powerful it’s scary. So Anthropic has decided that you won’t be able to use it

Claude Mythos Preview is already here and it is so good it is scary. Literally. Anthropic has just presented it to the public, but it has done so with such caution that we will not even be able to test it: it will only be available to certain technology partners. That is frustrating and disturbing at the same time, but also reasonable.

So powerful it scares. On February 24, 2026, Anthropic engineers were able to test their new artificial intelligence model, which they called Claude Mythos Preview, for the first time. As soon as they did, they realized one thing: it "demonstrated a dramatic leap in its cyber capabilities over previous models, including the ability to autonomously discover and exploit zero-day vulnerabilities in the main operating systems and web browsers on the market."

A threat to global cybersecurity. This finding made it clear to Anthropic officials that although this capability makes the model very valuable for defensive purposes, it also poses clear risks if it were offered globally. A cybercriminal could take advantage of it to find vulnerabilities in all types of systems and exploit them. A few hours ago the company developed this analysis of Mythos as a threat to cybersecurity in a post on its blog, and highlighted, for example, how Mythos found a vulnerability (now corrected) that had been present for 27 years in OpenBSD, an operating system recognized precisely for its very strong security. There were more examples, and all of them made the conclusion clear: Mythos is too powerful for ordinary mortals to use.

Superior in all benchmarks, and in some cases, such as USAMO (mathematics), the jump is simply incredible. Source: Anthropic.

The best in history, according to benchmarks. Anthropic has published a very in-depth report about this model along with its "system card". Among the data it contains is, for example, its performance in benchmarks, where it has swept GPT 5.4, Gemini 3.1 Pro and also Claude Opus 4.6, which until now was the best model in the world in almost all performance tests. Although in some cases the jump is not spectacular, in others, such as USAMO (mathematical problem solving), Mythos practically achieves perfection.

It barely hallucinates… That system card also discusses in detail how Claude Mythos Preview has a drastically lower hallucination rate than Claude Opus 4.6 and earlier models. It is also capable of saying "I don't know" when it does not have enough information to answer, something that reduces hallucinations due to overconfidence.

…but when it does, be careful. The paper warns of a new phenomenon: when the model fails at some complex tasks, the "hallucinations" are not obvious errors, but extremely subtle and well-argued technical failures. This is dangerous because the answer seems totally correct even to experts, requiring very deep verification.

Glasswing Project. That power and capability mean that the model will only be available through a "defensive" program they have called the Glasswing Project, which will be exclusive to some of Anthropic's technology partners. Specifically AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks. All of them will have the privilege (and responsibility) of having access to Claude Mythos Preview to identify vulnerabilities and exploits and correct them before bad actors can do so.

Mythos Preview "is just the beginning."
Although this model is the most capable seen so far, at least according to the benchmarks and data presented by Anthropic, the company assures that "we see no reason to think that Mythos Preview is the point at which the cybersecurity capabilities of language models reach their peak." They say they expect the models to keep improving in the coming months and years, although this new model is certainly on another level.

In Xataka | OpenAI and Anthropic have proposed the impossible: lose $85 billion in one year and survive

We thought we had an AI bubble. There are powerful arguments that indicate that we were wrong

You either love AI or hate it. Either you are a (deluded?) optimist, or you are on the skeptics' bandwagon, betting on an imminent puncturing of that AI bubble everyone talks about. The well-known analyst Ben Thompson was in the second group for some time and argued that we were in fact in a "good" bubble, one that would be beneficial even if it burst. NVIDIA's annual conference a few days ago has made him change his position: for him, there is no bubble. He doesn't have just one argument, but three. Or rather, three jumps.

The first jump: ChatGPT. The launch of ChatGPT in November 2022 was an eye-opener and demonstrated what generative AI could do. That first model, though, had two serious problems. The first, that it was frequently wrong. The second, that when it didn't know something, it made it up and hallucinated with astonishing confidence. That made ChatGPT something awesome but unreliable, like a cool toy that needs constant user supervision to be truly useful.

The second jump: reasoning. Almost two years later, another singular revolution occurred in the field of generative AI. In September 2024, OpenAI launched its o1 model, and with it came a spectacular novelty. For the first time, the model did not simply blurt out the first thing that came to mind: it reasoned about its answer before giving it, evaluated whether it was correct, and considered alternatives. The result was an AI that was significantly more reliable and, therefore, more useful. The price? More computing. AI models with the ability to "reason" consume many more tokens than those that respond directly, and that triggered demand for infrastructure. In other words: data centers.

The third jump: the agents. These two revolutions have been joined by a third, that of AI agents. Claude Code and Codex showed at the end of 2025 that AI agents were no longer a promise and had become something that really worked. From then on it has been possible to give them instructions and let them execute nested tasks that can keep them working for hours. These agents verify their own results and correct errors without a human having to intervene (see the sketch at the end of this article). The difference from what we had before is notable, and it also dismantles the bubble theory.

Bubble? In a bubble, Thompson explains, investment exceeds real demand. In his opinion, the opposite is true here, because each hyperscaler (Microsoft, Google, Amazon, Meta) has made it clear that computing demand is outstripping them, and to solve it they are all announcing astronomical investments in AI data centers. These investments exceed market expectations, but not those of these companies, which, like Thompson, are convinced that demand is going to end up being so enormous that the current infrastructure will fall far short.

Millions of users are not needed. Even more striking in this analysis is another nuance the analyst points to. Chatbots were supposed to need mass adoption to generate economic impact, but on the other side we have agents, which don't have that requirement. A single person can control thousands (millions?) of agents simultaneously, carrying out complex tasks. That means not everyone needs to use AI for computing demand to skyrocket: it is enough for some people to use it the way they are likely to use it: to create those "one-person businesses" in which a single human being has thousands of AI employees.

Companies will pay. The reality is that the vast majority of consumers are not going to want to pay for AI.
Companies do, because they pay for productivity and AI seems to be starting to deliver on that promise. But the argument goes beyond cost savings: agents not only make the work humans do more efficient; they allow a small group of people with a clear strategic vision to execute it at a scale that previously required hundreds of employees who also had to be coordinated. Large companies have spent decades adding the layers of management necessary to scale, but all that hierarchy disappears with agents.

But. This analyst is also clear that the wave of layoffs is going to become increasingly evident, and it is obvious that AI is going to have a clear impact. However, he explains that many of the current layoffs correspond more to the over-hiring of the COVID-19 pandemic years. What will happen now is that companies will no longer wonder whether they hired too many people for the "pre-AI" world, but whether they hired too many for the "post-AI" world. In fact, those that don't ask will probably end up competing with smaller rivals, built from the ground up with AI and with radically lower cost structures.

For him, two things are clear. The first is that the demand for computing will not stop growing. The second, that the bubble, if it exists (and according to him, it doesn't), is not going to burst.

In Xataka | His dog had cancer, his vets had no solutions and he found an mRNA vaccine elsewhere: ChatGPT
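As a rough illustration of the agent loop described above (take an instruction, break it into nested tasks, verify each result and retry on failure), here is a minimal sketch. It is a generic toy, not how Claude Code or Codex are actually implemented; `plan`, `run_task` and `verify` stand in for the model and tool calls a real agent would make.

```python
# Toy sketch of the plan -> execute -> verify -> retry loop described above.
# Hypothetical helpers: in a real agent, plan(), run_task() and verify() would be
# calls to an LLM and to external tools; here they are just placeholders.
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    attempts: int = 0

def plan(instruction: str) -> list[Task]:
    # A real agent would ask a model to decompose the instruction into subtasks.
    return [Task(f"{instruction} - step {i}") for i in range(1, 4)]

def run_task(task: Task) -> str:
    return f"result of '{task.description}'"   # stand-in for tool/model execution

def verify(result: str) -> bool:
    return bool(result)                        # stand-in for a model-based check

def run_agent(instruction: str, max_attempts: int = 3) -> list[str]:
    results = []
    for task in plan(instruction):
        while task.attempts < max_attempts:
            task.attempts += 1
            result = run_task(task)
            if verify(result):                 # the agent checks its own work...
                results.append(result)
                break
            # ...and retries on failure, without human intervention
    return results

print(run_agent("refactor the billing module"))
```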

The AI race is no longer about who has the most powerful model, but about who launches the easiest and safest OpenClaw

2026 began with an earthquake in the world of AI, and it did not come from any of the big technology companies, but from an unknown programmer and his open source project OpenClaw (formerly Clawdbot and Moltbot). Not even two months have passed, and we can already say that the boom of this AI agent is reshaping the AI race, with more and more companies jumping on the bandwagon. The latest is Perplexity.

Personal Computer. A month ago, Perplexity announced Computer, a cloud-based tool capable of orchestrating agents using various models. The next step is Personal Computer, its own OpenClaw: it can be left running on a Mac mini and controlled from another device, such as a mobile phone, exactly like OpenClaw, but with a simpler interface that does not require technical knowledge. Far more user-friendly. Another key aspect is the focus on security, one of OpenClaw's weak points. Perplexity claims that with Personal Computer, "Every sensitive action requires your approval. Every action is logged. There's an off switch." Personal Computer is not available yet, but if you want to try it before anyone else you can sign up for the waiting list.

NVIDIA NemoClaw. The most valuable company in the world has taken good note of OpenClaw's success, and a couple of days ago it announced that it will launch its own open source platform for enterprise AI agents, which it will call NemoClaw. This announcement is also important because it places NVIDIA in direct competition with companies like Anthropic, OpenAI or Perplexity, shifting its position from hardware supplier to software competitor.

And OpenAI… The project was not even three months old when OpenAI not only bought it, but also hired its creator, Peter Steinberger. It was not the only one bidding for the viral hit of the moment; Meta also tried, but OpenAI won. Steinberger said the project would remain "open and independent." This case is a good example of two things: how far a person can go with a good AI idea, and how difficult, if not impossible, it is to compete in an ecosystem where the competition includes some of the largest and most valuable companies in the world. David against Goliath.

The agentic AI race. We spent a good part of 2025 watching AI agents take their first steps, often with quite mediocre results. It was clear that agentic AI was getting much better, but I don't think anyone expected the first viral hit to come from an independent, open source project. OpenClaw not only succeeded, it has launched a new race in AI, one in pursuit of the definitive custom AI agent. OpenClaw has two barriers to entry: on the one hand it requires certain technical knowledge, and on the other there is security. It is a very powerful agent, but sometimes an unpredictable one. Hence Perplexity is appealing precisely to those two aspects. We'll see who is next.

In Xataka | Social networks were born for humans: Meta has just bought one designed for AI agents

Image | Pexels

A beam of cosmic energy is aimed at Earth from halfway across the known universe. It is the most powerful ever seen

Raising your head and looking at the sky, hoping to recognize constellations or catch a meteor shower, is a pleasure, but what the astronomical community has just found is simply extraordinary: a beam of cosmic energy aimed at Earth from halfway across the known universe. It is not the first time we have seen something like this, but it is the brightest and most distant ever seen.

The discovery. South Africa's MeerKAT telescope has discovered the most powerful and distant space laser ever detected. It is a beam of microwaves fired 8 billion years ago that has only just arrived at Earth, and to locate it the team needed a cosmic magnifying glass that Einstein predicted more than a century ago.

Context. Hydroxyl megamasers (the prefix "mega" denotes that their luminosity is millions of times higher than that of an ordinary hydroxyl maser) are natural phenomena that occur when two galaxies collide. At that moment the gas clouds are violently compressed, exciting hydroxyl molecules, which then release microwaves in an amplified and coherent way (like artificial lasers). Simply put, they are the cosmic equivalent of a laser, except that instead of visible light, what they emit are microwaves. For astronomy they serve as a kind of "cosmic beacon" used to study how galaxies formed in the early universe.

To the telescope. This natural laser comes from a pair of colliding galaxies (the HATLAS J142935.3–002836 system) that emit a megamaser so bright that the research team has proposed upgrading it to a gigamaser, an order of magnitude higher. The instrument responsible for the discovery is the MeerKAT radio telescope, an array of 64 radio antennas located in South Africa. The signal we receive today was emitted 8 billion years ago, that is, when the Universe was roughly half its current age.

Why it matters. Because megamasers are direct tracers of galactic mergers in the young universe, and their study allows us to determine how those galaxies formed and evolved. Furthermore, the proposal to classify it as a "gigamaser" opens the door to more objects of similar size waiting to be discovered. As Thato Manamela, astronomer at the University of Pretoria and lead author of the study, explains: "This is just the beginning. We don't want to find just one system, but hundreds or thousands."

Illustration of the distant galaxy 8 billion light years away (in red), magnified by an unrelated foreground disk galaxy, resulting in a red ring. By breaking radio light into different colors, like a prism does, the hydroxyl gigamaser is revealed. IDIA

How they did it. The microwave signal would have been too weak to detect at that distance, but the scientific team made use of something Einstein glimpsed: the gravitational lens. In short: a huge mass located somewhere between Earth and the galaxies acts as a natural amplifier, bending space-time around it, that is, bending and concentrating the microwaves like a magnifying glass. What is produced is an Einstein ring, a luminous halo around the intermediate object. That effect amplified the signal enough for MeerKAT to capture it and analyze it.

In Xataka | The quietest place in the solar system is on the far side of the Moon, which is why they have just installed a radio telescope there

In Xataka | A new "solar system" has just been discovered. There's just one problem: it shouldn't exist.

Cover | NASA Hubble Space Telescope
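For readers who want the textbook version of that "cosmic magnifying glass", here is the standard picture in a couple of formulas. This is generic gravitational-lensing background, not values taken from the MeerKAT study.

```latex
% Generic lensing background (no numbers from the study).
% A point lens of mass M deflects light passing at impact parameter b by
\[
  \hat{\alpha} = \frac{4 G M}{c^{2} b},
\]
% and a source aligned behind the lens appears as an Einstein ring of angular radius
\[
  \theta_E = \sqrt{\frac{4 G M}{c^{2}} \, \frac{D_{ls}}{D_l D_s}},
\]
% where D_l, D_s and D_ls are the distances to the lens, to the source,
% and between lens and source. Lensing conserves surface brightness, so the
% magnification \mu simply multiplies the observed flux:
\[
  S_{\text{observed}} = \mu \, S_{\text{intrinsic}},
\]
% which is why the foreground galaxy makes the distant gigamaser detectable.
```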

Sierra was the second most powerful supercomputer in the world. When its time came it ended up in the shredder, literally

Supercomputers represent the extreme of modern computing: machines capable of performing enormous numbers of calculations every second and supporting scientific or strategic projects of enormous complexity. Sierra was one of those giants. For years it operated at the Lawrence Livermore National Laboratory, where it was in charge of highly sensitive simulations for the United States government. In its day it came to occupy second place in the TOP500 ranking, which ranks the world's fastest supercomputers. But in high-performance computing, even the most advanced systems have a limited lifespan. After seven years of service, Sierra has been retired.

A giant for simulations. When Sierra began operating in 2018 at the Livermore facility, it was incorporated into the center's high-performance computing infrastructure to support the nuclear arsenal maintenance program managed by the National Nuclear Security Administration. Instead of resorting to real nuclear tests, scientists use computer simulations capable of reproducing the behavior of the weapons and materials involved in their design. This work requires extraordinary computing power and also has implications in areas such as nonproliferation and counterterrorism.

Almost at the top of the ranking. As we noted above, for several years Sierra was among the fastest machines on the planet. According to the TOP500 ranking, it recorded 94.64 petaflops, that is, tens of quadrillions of floating point operations per second. To achieve this, it used an architecture that was unusual at the time, based on IBM Power9 processors combined with NVIDIA Volta V100 graphics accelerators. This design allowed work to be distributed among thousands of computing nodes and offered a notable leap over previous generations of supercomputing.

When the hardware starts to fail. Supercomputers do not escape a reality common to any technological infrastructure: over the years, the hardware begins to deteriorate. In this type of system, the usual useful life is around five to seven years, a period after which the failure rate begins to grow and maintaining the system becomes more complex. As these machines accumulate hours of operation, the likelihood increases that certain components will fail or need to be replaced. In Sierra's case, moreover, part of the problem was very specific: some of its components were no longer being manufactured and the version of the operating system it used had lost support.

The successor. Sierra's retirement is also related to the arrival of a new generation of supercomputing at the center. In 2025, El Capitan, the system destined to take its place within the laboratory's computing infrastructure, began operating. Although at first glance the two machines may look similar, the difference is inside. El Capitan uses an architecture based on AMD Instinct MI300A APUs and a memory system shared between CPU and GPU, which allows it to achieve much higher performance. According to data released by the lab, this machine can reach 1.809 exaflops, about 19 times faster than Sierra at its peak according to TOP500.

Dismantling a supercomputer piece by piece. The end of Sierra was not simply a matter of shutting down the system and leaving it out of commission. The process was carried out in several phases that began with the progressive removal of computing nodes and internal components. Technicians dismantled entire racks, extracted batteries and separated different elements for recycling or controlled destruction.
Some parts, such as system boards or metal structures, were sent to specialized facilities for shredding. Since Sierra had worked on simulations linked to the US nuclear arsenal, the laboratory had to rule out any possibility of partial data recovery or reconstruction of sensitive information, which is why the storage devices received even stricter treatment.

Images | United States Department of Energy

In Xataka | Meta has been buying chips from NVIDIA and AMD for years. Now it also makes its own so as not to fall short
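As a quick sanity check on the "about 19 times faster" figure, using only the two performance numbers quoted above and assuming both refer to comparable TOP500 measurements:

```latex
% Ratio of the two quoted figures (assuming comparable TOP500 measurements):
\[
  \frac{1.809\ \text{exaflops}}{94.64\ \text{petaflops}}
  = \frac{1809\ \text{petaflops}}{94.64\ \text{petaflops}}
  \approx 19.1
\]
```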

It took Apple putting an iPhone chip in a computer for us to realize that the iPhone is as powerful as a computer

The MacBook Neo is surprising analysts and buyers with how well it performs. And the question should be: why? It is the first time Apple has made a move of this caliber to make one of its star products cheaper: putting an iPhone processor inside a Mac. We consumers have so internalized that "a phone is a phone" and "a PC is a PC" that we usually pay no attention to what we carry in our pockets. It took Apple putting an iPhone processor in a PC for us to realize that what we have in our pocket is, precisely, a PC.

"It handles video up to 4K." X is filled with analysts thoroughly testing the MacBook Neo and marveling that it is capable of doing… what any other MacBook can do. The 8 GB of RAM is a limitation, as it was in the first generations of Macs with the M1 chip. But far from that "office, basics and browser" use, the Neo is surprising because it is capable of what is expected of a Mac: doing more than that. The main limitation comes from the 8 GB of RAM, which is little even for a Mac, not from the chip.

It's normal. A Mac with a mobile chip sounds like a crazy idea. But if we look (not even in depth) at the A18 Pro, we understand perfectly what is happening. Even though Apple mounts the A18 Pro in a mobile phone, it is a chip that far exceeds the capabilities that even a desktop or laptop would need for "basic use." In fact, the A18 Pro scores above an M1 in single-core, is not far behind in graphics performance and is much more advanced at the manufacturing level (number of transistors, instructions, frequencies). And it is not just an Apple thing: a Snapdragon 8 Elite sweeps an M1 in multi-core and matches an M2 in single-core.

We just weren't noticing. We have been saying for years that the power of mobile phones is completely excessive. A certain part of it is necessary for the highest-end phones to record in 8K, process images in real time and operate at the pace they do, but 90% of the time we are driving at 30 km/h in a supercar that can exceed 300. This is not new, either. For years, Apple's A processors were outperforming Intel's, back in the days when M chips didn't exist. As John Gruber recounted, the A9 CPU of the iPhone 6s in 2015 (a long time ago now) was already comparable to MacBooks from 2013. And in 2017, as Antonio Sabán notes, the iPad Pro was already faster than the MacBook Pro with the Core i7 chip.

Just what was needed. Macs have historically been characterized as a perfect mobility solution for designers, musicians, video editors and other creators. But there was an even bigger niche: people who don't do any of that and want a computer for "normal" use. While MacBook Airs are not over-the-top Macs, they offer much more than any average user needs. In fact, I myself bought an Air M4 and not a Pro because, even as a video editor, I don't need much more. Apple has quite possibly found in the Neo its own "e" phenomenon, a formula we will see year after year if it achieves commercial success.

Image | Apple

In Xataka | Apple has only found one option to make a cheap laptop: make it a mobile

The good news is that AI models are becoming more powerful. The bad news is that they all end up saying the same thing.

We have artificial intelligence. What we don't have is artificial diversity. That is the conclusion reached by a group of researchers who ran a relatively simple test: they asked 25 different AI models a bunch of questions to see what they answered. And that is the bad part: they answered things that were far too similar.

"Artificial hive mind". Scientists from the University of Washington, Carnegie Mellon University and Stanford University, among other institutions, have published an interesting joint study. In it they reveal how, after various tests, it seems clear that although AI models are becoming more and more advanced, they all seem to have developed a kind of "artificial hive mind": no matter what you ask them, they answer in a suspiciously similar way.

When all these models were asked what time was, many responded with the phrase "time is like a river", while another group of models answered that "it is like a weaver".

Time is a river. One of the questions asked of these models was "What is time?", and although that question leaves obvious room for very different answers, the worrying thing is that the answers were not very different. Several models responded with the phrase "time is a river" and then developed it a little, while others responded with "time is a weaver (of moments)." That similarity in responses turned out to be a constant.

The illusion of abundance. We believe that when we consult an AI we access a whole world of conversational possibilities, but the study reveals that in reality we are facing systems that produce very similar outputs. Although language models promise limitless creativity, they tend to converge on that hive mind where diversity is sacrificed for statistical consistency. That is understandable, especially considering that large language models are based on the transformer, a probabilistic system that tries to find the next "best" word as it answers us.

Same script. The researchers created a large-scale dataset with 26,000 queries from real users that theoretically allowed the models to generate multiple valid and creative responses. They called that dataset "Infinity-Chat" and divided the questions into six main categories and 17 subcategories.

AI, you repeat yourself more than a broken record. During the tests it was observed that the same model tends to repeat itself, generating very similar responses. In fact, even when special parameters were used for questions designed to encourage diversity, the same effect was produced. This is what the researchers call "inter-model collapse."

Too similar. These tests made it clear that the semantic similarity (how alike the responses of the different models were) was worrying. According to the study, it ranged between 71% and 82%, and in some cases certain models managed to generate paragraphs that were identical word for word (a minimal sketch of how this kind of pairwise similarity can be measured appears at the end of this article).

The training problem. It is not only that they all generate text in a similar way by design; there is also a training problem. The authors suggest that this homogeneity of responses could be due to several reasons:

- Shared training data sources: models are trained on similar datasets, drawing on similar texts and knowledge that come, for example, from Wikipedia or a very similar set of books.
- Contamination from synthetic data generated by other AIs: they also train on synthetic texts generated by other AI models.
- Rewards: the reward models used to fine-tune these systems are calibrated to reward some notion of "consensus" quality. Creative and individual diversity is thus punished, and AIs are "educated" to be, precisely, very similar to each other.

Problem in sight. All of this leads the researchers to warn explicitly about two clear risks of using these AI models:

- We will think the same: if we users keep using AI models that answer basically the same thing, our own ways of thinking about those topics and problems will be "homogenized", and our responses will also become more uniform.
- Reduction of points of view: the other danger follows from the first: if AI ends up converging and answering the same thing, points of view are eliminated. Biases from the Western world, for example, will be evident in Western models (ChatGPT, Gemini, Claude), and the same will happen with Eastern ones. This would lead to the potential suppression of alternative worldviews, of perspectives and "looks" that differ from our own reality.

Image | Solen Feyissa

In Xataka | The scientist who made the AI we know today possible has just raised 1 billion. His new goal is to teach it to see space
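As a minimal sketch of how the pairwise semantic similarity mentioned above can be measured: embed each model's answer and compare the embeddings with cosine similarity. This is a generic embedding-based approach, not the study's exact methodology; the embedding model and sample answers are illustrative assumptions.

```python
# Generic sketch: embed each model's answer and compare them pairwise with cosine
# similarity. Illustrative only; not the metric pipeline used in the study.
from itertools import combinations
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

answers = {
    "model_a": "Time is like a river that carries every moment downstream.",
    "model_b": "Time is a river, flowing from past to future.",
    "model_c": "Time is a weaver, threading moments into a single fabric.",
}

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf embedding model
vectors = {name: embedder.encode([text])[0] for name, text in answers.items()}

# Pairwise similarity between the answers of different models
for (name_a, vec_a), (name_b, vec_b) in combinations(vectors.items(), 2):
    score = cosine_similarity([vec_a], [vec_b])[0][0]
    print(f"{name_a} vs {name_b}: {score:.2f}")
```

High scores across many prompts and many model pairs would be the kind of signal the study describes as an "artificial hive mind".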
