These are my 5 favorite models

Spring brings warmer weather after winter, but also allergies to pollen, grasses, and many plants and trees. A good pair of sunglasses can make a real difference here, especially if they are wraparound. But what should you look for when buying this type of glasses? These are the most important points:

- The design: the frame should be curved rather than straight, so that it protects the eyes from the front and the sides. The goal is a physical barrier that keeps wind from slipping in through the gaps around the glasses.
- Side protection: although the curved frame plays the main role, wide temples are also especially useful, giving better protection on the sides.
- The fit: wraparound glasses need to sit very snugly to achieve a more airtight seal. As a rule of thumb, they should stay close to the cheekbones, without the lenses touching the eyelashes.
- Polarized lenses: light sensitivity is one of the symptoms of allergies. The lenses themselves already help, but polarized ones are even better because they keep the eyes from getting more irritated.
- UV protection: some glasses also include a quality filter against ultraviolet radiation, especially useful if you spend a lot of time in the sun.

With all that in mind, which glasses should you buy? There is a lot to choose from at very varied prices, so we have made a selection for different budgets:

- Hawkers Radiant for 29.80 euros, with UV protection.
- Polaroid 7886 for 33.36 euros, inexpensive glasses with thick temples and polarized lenses.
- Nautica N2239S for 59 euros, with double protection: UV filters and polarized lenses.
- Nike WindTrack Heat for 89 euros, glasses with a sportier design and UV protection.
- Ray-Ban RB4335 for 137 euros, glasses with very wide temples and UV protection.

The price could vary. We earn commission from these links.

Hawkers Radiant

If you want to go cheap, the Hawkers Radiant are sunglasses that, for 29.80 euros, offer a wraparound frame thanks to their curvature. They have UV protection and, according to the brand, absorb between 82 and 92% of sunlight. They are not polarized, though, and the lenses only slightly cover the sides of the eyes, not the temples.

Polaroid 7886

At a similar price, 33.36 euros, we find the Polaroid 7886, sunglasses that also have a curved frame as well as wide temples to further protect the sides of the eyes, and they are polarized. In addition, Amazon customers often highlight that they are quite comfortable thanks to their flexible design, which adapts better to each face shape.

Nautica N2239S

El Corte Inglés also sells the Nautica N2239S for 59 euros. These glasses are especially interesting for their design, with a curved wraparound shape as well as thick temples. They also offer double protection: polarized lenses plus UV filters.

Nike WindTrack Heat

If you prefer something sportier, El Corte Inglés also has the Nike WindTrack Heat, in this case for 89 euros. These glasses protect above all thanks to the pronounced curvature of the frame, and not so much through the temples, which are somewhat thin. They also come with UV protection, although they are not polarized.
Ray-Ban RB4335

Ray-Ban has a very characteristic style, and the Ray-Ban RB4335 are especially interesting for allergies. Why? Because of their shape, which not only has a wraparound curvature but also includes wide temples. The lenses are not polarized, but they do have UV protection. Their price in this case is 137 euros.

Some of the links in this article are affiliated and may provide a benefit to Xataka. In case of non-availability, offers may vary.

Images | who nguyen and Michele Pagnani on Unsplash; Hawkers, Polaroid, Nautica, Nike, Ray-Ban

In Xataka | All the types of solar panels you can install at home (and which ones best suit your needs)

In Xataka | The four solar power banks with the most capacity to charge your phone when you are camping

Chinese AI models boasted of being good, pretty and cheap. Now only two of those three things are true

It is not as well known as its rivals, but Zhipu AI (z.ai) has become one of the most promising Chinese AI startups. It is responsible for the family of open GLM models, which have always offered a solid and, above all, very cheap alternative. Unfortunately, that is no longer so true: we are witnessing a change in strategy both at Zhipu and at its competitors in the Asian giant. Chinese AI models are no longer such a bargain.

GLM-5.1 is better… Z.ai announced yesterday the launch of its shiny new AI model, GLM-5.1. It did so proudly, because this is a promising evolution of the LLM (744B parameters, 40B active, with a Mixture of Experts architecture) that clearly surpasses its predecessors and in some metrics even seems to be above GPT-5.4, Claude Opus 4.6 or Gemini 3.1. Agentic tasks, and those requiring autonomy over long periods, work better than ever. But if you want to benefit from these improvements, you will have to pay up: the model is now at least 8% more expensive than previous versions.

…but also more expensive. According to the prices listed by OpenRouter, the well-known platform that acts as a "distributor" of multiple free and commercial models, the prices of the new Z.ai model have risen significantly: GLM-5.1 costs between 8 and 17% more than GLM-5 Turbo, which was also launched recently. It is the second time the Chinese company has raised prices on its users in 2026, and that is a worrying sign. The excuse, of course, is the same as always: demand is very high. When Z.ai launched GLM-5 at the beginning of February, it took the opportunity to raise the prices of its plans for programmers between 30 and 60%, while API prices rose between 67% and 100% (doubling).
Its shares perked up significantly on the stock market after the launch and the price increase (logical: investors saw that revenue would probably grow thanks to those hikes), but the company argued that demand was very high and that its model prices had to reflect that circumstance.

From the three B's to just two. For months, the open Chinese models had been demonstrating remarkable quality and a fantastic price/performance ratio. They were good, pretty and cheap ("buenos, bonitos y baratos", the three B's in Spanish), but Zhipu AI is just the latest to end up raising prices. Most of its competitors have done the same: Moonshot AI (Kimi), MiniMax and StepFun did it back in 2025, and Alibaba, ByteDance, Tencent and Baidu have also adopted increasingly ambitious pricing strategies, as TrendForce notes.

OpenClaw as a trigger. Much of the blame for this surge in demand lies with AI agents like OpenClaw, which has gone viral but has a problem: it consumes tokens at an extraordinary rate. A conversation with ChatGPT, Claude or Gemini has a cost, but token usage in "chat mode" is much lower than that of AI agents, which never stop "thinking", analyzing different possibilities and chaining processes to resolve our requests. The Chinese models had become a good alternative for anyone looking to save money, because using Claude Opus 4.6 was very expensive (and by now, prohibitive), but they are slowly becoming high-end AI models themselves. At least in price.

I already know how this story ends. What we are experiencing with AI models, we already saw with smartphones. Chinese manufacturers broke into the market with bargain phones offering high-end features at mid-range or low-end prices, but then they evolved, and over the years most manufacturers have ended up focusing on the very high end, at most launching "cheap" sub-brands.
Xiaomi has done it with Redmi and POCO, for example, and now we are seeing something similar with Chinese AI startups, which gained popularity with good, pretty and cheap models but are now transitioning to a new batch of capable but no longer so affordable ones.

First they hook you, then they squeeze you. What we are seeing with the Chinese AI models we had already seen with companies like OpenAI or Anthropic. They and their competitors release increasingly better but also increasingly expensive models, which means the tokens these companies sell us are becoming ever more precious: the quotas of the ChatGPT Plus or Claude Pro plans, for example, seem to run out faster than before, and users have been complaining about it for a while. On Reddit there is a "megathread" dedicated precisely to that, and here we have bad news: this does not look like it will come down, quite the opposite.

In Xataka | Anthropic has shut down OpenClaw for a reason: it's building the "walled garden" that Nintendo perfected

Five eReader models from 99 to 345 euros

Book Day is celebrated on April 23, so many brands and stores have begun launching offers on electronic book readers (eReaders) to start the month reading. Do you want to devour that novel that has been sitting on your wish list for so long? Let's review the best offers available right now:

- Kindle for 99 euros, Amazon's basic eReader for occasional reading.
- PocketBook Verse for 118.90 euros, a reader whose particularity is having physical buttons as well as a touch screen.
- Kindle Colorsoft for 199 euros, an eReader with a color screen for reading comics and magazines.
- Kindle Colorsoft Signature Edition for 229 euros, a souped-up version of the Colorsoft that includes two additional features.
- Kindle Scribe (2022) for 294.99 euros, the ideal eReader if you are only going to use it at home.

The price could vary. We earn commission from these links.

Kindle

If you only read every so often and are not looking for the best possible reading experience, the Kindle is the perfect model both for what it offers and for its price, especially now that it has dropped to 99 euros (previously 119 euros). This model comes with a six-inch anti-glare screen, 16 GB to store plenty of books, and it weighs little, so you can hold it with one hand. In addition, it has an adjustable front light and battery life measured in weeks.

PocketBook Verse

If you are looking for something similar, El Corte Inglés has the PocketBook Verse on offer for 118.90 euros (previously 135 euros). It stands out mainly for including a button panel for turning pages and moving through menus, an interesting way to avoid dirtying the screen, which is also touch-sensitive so you can use it either way.
Its screen is also six inches, it comes with 8 GB of storage (expandable with a microSD card of up to 128 GB), and it has access to eBiblio for borrowing books online.

Kindle Colorsoft

On the other hand, if you want to read books but also the occasional comic or magazine, the most interesting model is the Kindle Colorsoft at 199 euros (previously 269.99 euros), an eReader that comes with a seven-inch anti-reflective color screen. Its color panel also lets you better appreciate covers and illustrations, and even underline text in various colors, for example to tell characters apart. Additionally, although it is not on sale, the Kobo Clara Color is a very interesting alternative: it offers a very similar reading experience for a slightly lower price, 169 euros, although its screen is a little smaller, at six inches.

Kindle Colorsoft Signature Edition

One step up, we have a proposal similar to the previous one from Amazon, but with a couple of additions. The Kindle Colorsoft Signature Edition currently costs 229 euros (previously 299.99 euros) and differs from the basic Colorsoft in that it comes with automatic front-light adjustment, wireless charging, and 32 GB of storage instead of 16 GB.

Kindle Scribe (2022)

If you are only going to read at home and prefer a larger reader to see the print better, the Kindle Scribe (2022) has also dropped in price to 294.99 euros (previously 399.99 euros). It is an eReader with a 10.2-inch screen, similar in size to what we normally see on tablets.
It includes a pen for taking notes, weighs 433 grams (you will need both hands), and its battery offers up to 12 weeks of autonomy. The Kindle Scribe has a newer version, released in 2024, which has also dropped in price to 344.99 euros (previously 479.99 euros), but the differences between the 2022 and 2024 models are so small that the newer eReader is not worth the extra cost. The biggest difference is the pen, which is of better quality in the current generation.

Some of the links in this article are affiliated and may provide a benefit to Xataka. In case of non-availability, offers may vary.

Images | Amazon, PocketBook

In Xataka | Which Kindle to buy: a buying guide with recommendations to get Amazon's e-book readers right

In Xataka | The 25 best science fiction books

Meta spent a fortune on AI talent and data centers. Nine months later, the result is zero models

Mark Zuckerberg wanted to be the Florentino Pérez of AI. Last summer he began signing "galácticos" in this segment, luring talent with stacks of millions of dollars. The most high-profile, of course, was AI wunderkind Alexandr Wang, who became the leader of Meta's "Superintelligence" division. The funny thing is that the months keep going by and Meta seems to have absolutely nothing to show for it. And that is very worrying.

Delays. Despite having invested billions of dollars in restructuring the company to bet (practically) everything on AI, three internal sources confirm that Meta is finding it very difficult to meet the planned deadlines. The generative AI race waits for no one, and at headquarters nerves are on edge because the roadmap is not being met.

Avocado, where are you? The new foundational AI model that Meta has been working on for months is internally named Avocado, but for now it is not measuring up, which is reminiscent of what happened with Llama 4. Internal tests reveal that although it manages to surpass the aforementioned Llama 4 and the old Gemini 2.5, it falls short of Gemini 3.0 (and, of course, the recent Gemini 3.1).

Patience. Launching a model that is clearly worse than its rivals makes no sense, so Meta has decided to wait and delay the launch. Avocado is now expected to hit the market in May at the earliest.

And meanwhile, Gemini. The situation is so critical that, according to these sources, the leaders of the AI division are considering something once unthinkable: paying Google for a license to use Gemini in their own products, something Apple, for example, will do with Siri. That would be a clear sign that, for now, Meta's own model is not capable enough to power the AI functions of WhatsApp, Instagram and Threads.

Money does not equal speed.
The company has spent billions of dollars on AI researchers and has committed to investing $600 billion in building AI data centers. In January, Meta projected a capex of $135 billion dedicated almost entirely to these projects, almost double the $72 billion it spent last year. Despite these investments, the company is currently absent from an area in which its competitors keep advancing.

Internal tension. According to these sources, Meta is becoming a tinderbox. The "TBD Lab" (for "To Be Determined"), the unit led by Wang, is working under maximum pressure on models named after fruits (Avocado, Mango, Watermelon), but has clashed with old-school Meta managers like Chris Cox and Andrew Bosworth. The company is trying to integrate those models with Meta's advertising business, which is what pays for everything, but Wang does not seem to handle that part of the business very well.

Goodbye to open models. At the beginning of this AI race, Meta stood out as the company whose open models (open, though not Open Source) were above the rest. Llama became the standard in this area, but in this new stage that philosophy seems to be changing, and China now leads that segment. There is talk that both Zuckerberg and Wang lean toward closed models, like those of OpenAI (GPT) or Google (Gemini). That allows full control over the code, a competitive advantage Meta no longer seems willing to give up.

Few fruits from this tree. Despite the extraordinary deployment of resources, the current balance is poor. Meta's only tangible product from those investments is Vibes, an application similar to Sora that has not quite caught on. Meanwhile, those initial talent signings have turned into departures: the trickle of AI researchers leaving the company to join others (or found their own projects) keeps growing.

In Xataka | Meta has been buying chips from NVIDIA and AMD for years. Now it also makes its own so as not to fall short

The good news is that AI models are becoming more powerful. The bad news is that they all end up saying the same thing

We have artificial intelligence. What we don't have is artificial diversity. That is the conclusion reached by a group of researchers who ran a relatively simple test: they asked 25 different AI models a bunch of questions to see how they would answer. And that is the bad part: they all answered things that were far too similar.

"Artificial hive mind". Scientists from the University of Washington, Carnegie Mellon University and Stanford University, among other institutions, have published an interesting joint study. In it they reveal that, after various tests, it seems clear that although AI models are becoming more and more advanced, they all seem to have developed a kind of "artificial hive mind": no matter what you ask them, they answer in a suspiciously similar way.

Time is a river. One of the questions posed to these models was "What is time?", and although that question leaves plenty of room for very different answers, the worrying thing is that the answers were not different. Several models responded with the phrase "time is a river" and then developed it a little, while others responded with "time is a weaver (of moments)". That similarity in responses turned out to be a constant.

The illusion of abundance. We believe that when we ask an AI something we are accessing a whole world of conversational possibilities, but the study reveals that we are actually facing a system that produces very similar outputs. Although language models promise limitless creativity, they tend to converge on that hive mind where diversity is sacrificed for statistical consistency. That is understandable, especially considering that large language models are based on the transformer, a probabilistic system that tries to find the next "best" word as it answers us.

Same script.
The researchers created a large-scale dataset with 26,000 queries from real users that, in theory, allowed the models to generate multiple valid and creative responses. They called that dataset "Infinity-Chat" and divided the questions into six main categories and 17 subcategories.

AI, you repeat yourself like a broken record. During the tests they observed that the same model tends to repeat itself, generating very similar responses. In fact, the same effect appeared even when special parameters were used with questions designed to encourage diversity. This is what the researchers call "inter-model collapse".

Too similar. These tests made it clear that the semantic similarity (how alike the responses of the different models were) was worrying. According to the study, this similarity ranged between 71% and 82%, and in some cases certain models managed to generate paragraphs that were identical word for word.

The training problem. It is not only that they all generate text in a similar way by design; there is also a training problem. The authors suggest that this homogeneity of responses could have several causes:

- Shared training data: the models are trained on similar datasets, drawing on similar texts and knowledge that come, for example, from Wikipedia or a very similar set of books.
- Contamination from synthetic data generated by other AIs: they also train on synthetic texts generated by other AI models.
- Rewards: the reward models used to train these systems are calibrated to reward some notion of "consensus" quality. Creative and individual diversity is thus punished: AIs are "educated" to be, precisely, very similar to each other.

Problem in sight. All of this leads the researchers to explicitly warn about two clear risks when using these AI models.
- We will all think the same: if we users keep relying on AI models that answer basically the same thing, our own ways of thinking about those topics and problems will be "homogenized", and our own responses will become more uniform too.
- Fewer points of view: the second danger follows from the first. If AIs end up converging and answering the same thing, points of view are eliminated. The biases of the Western world, for example, will be evident in Western models (ChatGPT, Gemini, Claude), and the same will happen with the Eastern ones. This could lead to the suppression of alternative worldviews, of perspectives different from our own reality.

Image | Solen Feyissa

In Xataka | The scientist who made today's AI possible has just raised 1 billion. His new goal is to teach it to see space
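As a side note, the pairwise similarity measurement at the heart of the study can be illustrated with a toy sketch. The researchers used proper semantic embeddings; here, purely as an illustrative stand-in, we compare responses using cosine similarity over bag-of-words counts, and the example responses are invented, not taken from the study:

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using bag-of-words counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical answers from three different models to "What is time?"
responses = [
    "time is like a river that flows in one direction",
    "time is like a river flowing steadily in one direction",
    "time is a weaver of moments and memories",
]

# Pairwise similarity: the two "river" answers score far higher together
# than either one does against the "weaver" answer.
for i in range(len(responses)):
    for j in range(i + 1, len(responses)):
        print(f"{i} vs {j}: {cosine_similarity(responses[i], responses[j]):.2f}")
```

The study's 71–82% figures come from comparing real model outputs with embedding models, but the intuition is the same: the closer two answers land to 1.0, the more the models are reading from the same script.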

36 new models by 2030, more electrification and a commitment to new markets

Renault will present all the details at an event that can be followed online this morning, but it has already confirmed its new plan for the future, its name and its objectives. The company had been on schedule with Renaulution, a project Luca de Meo devised upon his arrival with the clear objective of renewing the range, making it more attractive, and presenting new electric cars that could attract the public with more than just their technology. With Luca de Meo's departure, François Provost, CEO of Renault Group, wanted to set his own course, and today he presents futuREAdy, the company's new strategic plan.

Bigger, much bigger

Growth. The Renault group's new strategic plan is built on that word. In a press release, the company defines futuREAdy as a project based on four pillars:

- Growth-ready: 36 new models by 2030.
- Tech-ready: accelerate "all key technologies" (Renault has been defending a multi-energy future, not just electric).
- Excellence-ready: improve the company's competitiveness by seeking better results and operational performance.
- Trust-ready: commitment to the distribution network and a job-preservation plan for workers.

The plan is ambitious: Renault talks about wanting to become "the leading European car manufacturer on a global scale", based on the four areas above.

Growth-ready

The first point Renault wanted to stress is its product offensive for the coming years. Drawing on the group's three brands, it wants to put 22 new models on the European market, of which 16 will be fully electric. In addition, it wants to expand the bet with another 14 models internationally. Bear in mind that the company has already announced models with a clearly premium focus to compete outside Europe, in markets such as South Korea, the Middle East and Mexico.
To do this, Renault foresees a roadmap based on its three brands:

- Renault: it aspires to launch 12 new products in Europe, keeping hybrid options beyond 2030 but with a 100% electric offering built on a new platform. Another 14 cars will be launched outside Europe, with the goal that by 2030 the brand will sell more than two million cars (half outside Europe), of which 100% will be electrified to a greater or lesser extent in Europe, along with 50% of sales outside Europe.
- Dacia: the commitment to "the most competitive product combining price, cost and value for the customer" is maintained. Its electrification is accelerating, with two thirds of the lineup electrified by 2030. Efforts will continue in the C segment and with 4x4 and LPG vehicles. 25% of its cars will be electric from 2030.
- Alpine: a new generation of the A110 based on the Alpine Performance Platform (APP) will be launched. The brand aspires to attract new customers with the Alpine A290 and A390, but will also keep the A110 R Ultime alive, using it to put limited series on the market with greater exclusivity and customization.

Renault also points out that the objective is to improve the customer experience with greater efforts throughout the entire product life cycle, from purchase and financing to after-sales service. The goal: an 80% loyalty rate in 2030.

Tech-ready

Renault has been one of the companies most insistent on keeping hybrid technologies and combustion engines alive, alongside parallel alternatives such as hydrogen or LPG, beyond the mandatory European electrification. In this sense, its partnership with Geely and the Horse joint venture are proving key to developing new engines and hybrid schemes. Among its objectives, the company points to a new generation of electric vehicles in the C segment as a clear priority.
Until now, the company has based its strategy on smaller vehicles, with the launch of the new Renault Twingo, Renault 5 and Renault 4. Now the objective is to make the leap to a higher level, and it will do so with the new RGEV Medium 2.0 electric platform.

This platform will arrive with an 800-volt architecture and "fast charging of up to 10 minutes by 2030", although charging powers are not specified. Renault hopes it will cover cars from the B+ segment to the D segment, with saloon, SUV and minivan bodies. It will use a "cell-to-body" design in which the battery cells themselves are attached to the chassis, allowing greater range and the use of fewer parts. Renault believes it will be able to offer options with 750 kilometers of electric range on the WLTP cycle, and up to 1,400 kilometers in extended-range versions with emissions below 25 g/km of CO2, which will be key from 2030. The cars capable of charging "in 10 minutes" will do so with battery chemistries of higher energy density. Conversely, the most affordable versions will opt for 400-volt architectures, with 20-minute charges in 2030, designed for the A and B segments.

The electric motors will use a wound rotor without rare earths, aiming to improve consumption by 20%, one of the points where Renault has suffered most with the first electric cars it has brought to market. These developments will be made in Europe and for Europe.

The platform will rely on Google technology as the digital heart of the vehicle. Renault says it will have the first carOS, "the operating system for vehicles, co-developed with Google's partner, based on Android". This development will not be limited to the infotainment system; it will also reach the ADAS and chassis modules to improve automated driving functions.
This entire electric strategy will be supported by hybrid technology, with versions under 150 HP starting in 2030 aimed at a non-European audience.

Excellence-ready

The other great leap Renault aspires to is improving its production systems to achieve better operational results. Renault talks about …

The head of AI at Alibaba leaves the company. That points to a 180° turn for the Qwen model family

An employee leaving a company does not necessarily signal a radical change, but it may well do so when that employee has led an important project and his departure comes right after a launch. That is what just happened with Junyang (Justin) Lin, the technical lead of the Qwen team.

A strange exit. On March 2, Alibaba launched a new lightweight model family: two fast models designed for edge use, a multimodal model for agentic systems, and a reasoning model that stood up to much larger models. The next day, Junyang Lin announced on his X account, "I am leaving. Goodbye, my dear Qwen", without giving further details. And he was not the only one. Also leaving the company were Hui Binyuan, a research scientist, and Yu Bowen, head of post-training at Qwen. No one has commented on the reasons behind their departures, and rumors that they had been fired were quick to appear. However, according to Panda Daily, Alibaba said it had approved their resignations.

What is happening? Justin's departure caused a stir among his colleagues, with some calling it "the end of an era". We are talking about the person who has led the Qwen team from the beginning, and a great AI researcher with an academic profile exceeding 40,000 citations, so this decision has raised many eyebrows. Whether fired or resigned, Justin was a key figure on the team, but he also leaves just after a launch, and several other employees have followed him. What is happening at Alibaba?

Closed models. As we said, the parties involved have not offered more details, but theories have not taken long to appear, and one of them is that Alibaba could be thinking of moving toward closed models. Alibaba has been making efforts to monetize its AI, and closing its models could be part of the plan. It would certainly make sense for the project leader to quit at the prospect of such a profound change.

There's a new guy in the office.
Shortly after the news broke, another story emerged: Alibaba has signed Zhou Hao, until now a researcher at Google DeepMind. Zhou will join the Qwen team as head of post-training, so he directly replaces Yu Bowen, not Justin. Zhou has been a key figure in the development of Gemini 3, Google Search's AI Mode, and Deep Research mode.

The open source strategy. DeepSeek, Kimi, Qwen… Chinese companies have become the standard-bearers of open source AI, a strategy at odds with the closed stance of the US. But it is not about giving AI away for its own sake; it is part of their roadmap: offering access to build a large user base and thus be able to dominate in the future. Furthermore, Chinese companies know very well that the US is technologically ahead (Justin himself acknowledged it recently), so launching open and free AIs is a way to gain ground. In the long term, however, it does not seem like a great strategy, because at some point they will want to monetize, and there is a risk of losing users who feel betrayed. We do not know whether Alibaba has already started down this path, but if it has, we will soon see whether that risk is real.

Image | Qwen

In Xataka | China's open AIs aren't "beating" ChatGPT, they're doing something more important: catapulting their industry

Alibaba has done it again with promising pocket AI models

The latest models from OpenAI, Anthropic or Google are fantastic, no doubt, but they have a problem: they are gigantic, so the only way to use them is through those companies' chatbots. But while those companies focus on that approach, Alibaba has just surprised us with something fascinating.

The allure of tiny AI models. The Chinese technology giant has just launched the "Qwen 3.5 Small Models" family, made up of four open models with really small sizes. Thus, we have a "dwarf" model with 800 million parameters (0.8B), another with 2,000 million (2B), a third with 4,000 million (4B) and a last one with 9,000 million (9B). There are no official parameter counts for GPT 5.3, Opus 4.6 or Gemini 3.1, but it is very likely that they are all around 500B or far higher.

Small but mighty. The first two models are designed for prototyping and deployment on very modest devices where battery life is a priority, because their power consumption is also very low. Meanwhile, Qwen3.5-4B is a multimodal model for lightweight AI agents that supports a context window of up to 262,144 tokens. That model, for example, takes up less than 3 GB in its 4-bit quantized version, which makes it usable even on mobile phones. Even more interesting is the "eldest" of the family.

Good things come in small packages. The largest of these models, Qwen3.5-9B, is really promising. It is a reasoning model that, according to its creators, outperforms no less than gpt-oss-120B, the open OpenAI model that is 13.5 times larger and that until now was a major reference in this field. All of these models are open weights, and can be found on both Hugging Face and ModelScope in their different variants.

A new approach. In these models Alibaba has made some changes, using what they call an Efficient Hybrid Architecture that combines a new type of attention mechanism (Gated Delta Networks) with the already familiar Mixture-of-Experts (MoE).
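The "less than 3 GB" figure for the 4-bit 4B model can be sanity-checked with simple arithmetic: a model's weight footprint is roughly parameters × bits per weight. A minimal sketch, where the overhead factor is our own illustrative assumption (some tensors, such as embeddings, are usually kept at higher precision) and not an official Qwen figure:

```python
def quantized_size_gb(n_params: float, bits_per_weight: int,
                      overhead: float = 1.1) -> float:
    """Rough in-memory/on-disk footprint of a quantized model.

    n_params: number of weights (e.g. 4e9 for a 4B model)
    bits_per_weight: quantization precision (4 for 4-bit)
    overhead: hypothetical 10% margin for tensors kept at
              higher precision and file metadata
    """
    raw_bytes = n_params * bits_per_weight / 8
    return raw_bytes * overhead / 1e9

# A 4B model at 4 bits: ~2 GB of raw weights, ~2.2 GB with overhead,
# consistent with the "less than 3 GB" figure cited above.
print(round(quantized_size_gb(4e9, 4), 2))
```

The same back-of-the-envelope math explains why the 9B model (roughly 4.5 GB of raw weights at 4 bits) targets laptops rather than phones.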
According to Alibaba, this approach avoids the "memory wall" problem that affects small models.

Promising results. The benchmark results published by Alibaba are really striking. Both Qwen3.5-4B and Qwen3.5-9B make a notable leap in efficiency, especially in multimodal tests (these models can use images as input) and reasoning tasks. Thus, in the MMMU-Pro visual reasoning test, Qwen3.5-9B left Gemini 2.5 Flash Lite behind, and in the GPQA reasoning test the 9B Alibaba model even managed to beat gpt-oss-120B.

Alibaba surpasses itself. Paul Couvert, an AI popularizer, showed his enthusiasm on X, where he explained that, at least according to these benchmarks, Qwen3.5-4B is as powerful as Qwen3-Next-80B-A3B-Thinking, which until not long ago was considered a marvel but had a notable size.

Models for your laptop and your mobile. These models are especially striking because they let practically anyone run them on their laptop or mobile phone (or even integrated into a browser!). In all cases the advantages are clear: you do not depend on the cloud, so you can use them offline, and your conversations do not go through any server, so "everything stays at home" and your chats remain private.

Only Google seems to follow suit. Of the Western AI majors, only Google seems interested in small models: Gemma 3 270M was a surprising release launched in August 2025. Microsoft also has its Phi-4, from December 2024, but beyond that there are few examples. OpenAI launched gpt-oss-20B and gpt-oss-120B in August 2025 and showed some interest in this type of scenario, but there has been no news since then. There are startups like Liquid, which has an eye-catching LFM2.5 with a variant of only 1.2B, but here Alibaba seems unstoppable in its commitment to going small. At least, for now.

In Xataka | If the question is which of the big tech companies is winning the AI race, the answer is: none

Anthropic will no longer pause dangerous models if the competition releases them first

Anthropic is in the middle of an important dispute with the Pentagon that may end up shaping the future of the company. Founded with safety as its reason for being, it has just rewritten the rules that defined it: its "Responsible Scaling Policy", the document that established when to stop the development of a model deemed too dangerous, has evolved into a mere roadmap with flexible objectives. And this change is much more important than it seems, not only for Anthropic but for the rest of the industry. Let's get to it.

What exactly has changed. Until now, Anthropic's policy stated that the company would pause training or delay the launch of a model if its capabilities outpaced the speed at which sufficient safeguards could be developed. That is to say: if the model was too powerful to be controlled safely, it was stopped. That is over. The new policy removes that automatic braking mechanism and replaces it with a series of public commitments, along with regular third-party-audited risk reports. The change was confirmed by the company itself in an official statement.

Why have they done it? The company gives two main reasons. The first is the competitive environment: OpenAI, Google and xAI advance without those kinds of restrictions. "We didn't feel it made sense to make unilateral commitments if competitors are moving full speed ahead," Jared Kaplan, chief science officer at Anthropic, told Time. The second, as could be expected, is political: Washington has turned its back on AI regulation, and Anthropic acknowledges on its blog that the current anti-regulatory climate makes its own safeguards asymmetrical with respect to the rest of the sector.

Paradox. From Anthropic's point of view, this is not a renunciation of safety, but a decision made in its name.
Their reasoning: if the more responsible actors (they count themselves in that group, naturally) stop while the less careful ones move forward, the net result is "a less safe world." The logic has a certain coherence, but it also means accepting that safety depends on what the competition does. And that is a very dangerous game.

Context. Anthropic was founded by former OpenAI executives, including Dario Amodei, who left that company precisely because they believed it did not pay enough attention to the risks of AI. The new policy comes at a time when several safety researchers have left the company. As reported by The Wall Street Journal, one of them, Mrinank Sharma, wrote a letter to his colleagues this month saying that "the world is in danger" because of AI, before announcing his departure. In fact, according to sources close to the paper, his departure is partly related to this decision.

What's happening with the Pentagon? The announcement comes amid tension with the Pentagon. US Secretary of Defense Pete Hegseth gave Anthropic an ultimatum the same Tuesday that the policy change was made public: modify its red lines on the use of Claude or risk losing a $200 million contract with the Department of Defense. Anthropic has made it clear that the two issues are independent, but the temporal coincidence has not gone unnoticed.

What remains of the safety policy. It is not a total abandonment. Anthropic remains committed to delaying the development or deployment of "highly capable" models in specific circumstances, and commits to publishing detailed, externally verified risk reports every three to six months. The company also now separates its own internal guidelines from its recommendations for the rest of the sector, implicitly acknowledging that the commitment to a "race to the top" that other companies are adopting has not worked as expected.
Cover image | Wikimedia Commons and Anthropic

In Xataka | The US has a message for AI companies: if necessary, that AI belongs to the State

OpenAI's obsession was to train its models like crazy. Now it's to run them faster than anyone else

OpenAI has signed an agreement estimated to be worth more than $10 billion with Cerebras Systems, a startup that designs advanced AI chips dedicated to one thing: running AI models as fast as possible. It is a unique alliance not only because of that change of focus, but because there is a conflict of interest.

What has happened. The firm led by Sam Altman has committed to purchasing 750 MW of computing capacity from Cerebras over the next three years. Sources cited by The Wall Street Journal indicate that the alliance has an estimated value of more than $10 billion. We are therefore facing an operation extraordinary in size, but peculiar in form and substance.

What Cerebras does. The firm, based in Sunnyvale, California, was founded in 2015 by former engineers from SeaMicro, acquired in 2012 by AMD. The startup designs artificial intelligence chips specifically aimed at the inference stage of AI models, that is, at running them.

More tokens per second, please. When we use ChatGPT or any AI model, what we are watching is a model performing inference. Some "write" faster than others, and that speed of displaying text in responses is measured in tokens per second. NVIDIA chips are typically great for the training phase, but not so much for inference. Chips from companies like Cerebras, or those of the well-known Groq, which has just been "bought" by NVIDIA, are designed precisely to run those models at full speed and achieve very high tokens-per-second rates.

The AI is already good. Now it wants to be fast. NVIDIA's recent "purchase" of Groq makes it clear that Jensen Huang's company wanted the ability to offer those ultra-fast inference chips, and now OpenAI seems to want something very similar with its deal with Cerebras.
AI models have already achieved remarkable performance in many scenarios, and although they are not perfect, companies now want them not only to work well but to work very, very fast, so that their responses, even when long, appear almost instantly.

OpenAI wants more computing power. This operation also helps Sam Altman's company with another objective: to obtain (and reserve) as much computing capacity as possible, in anticipation that demand for these AI models will grow non-stop in the coming months and years. According to the WSJ, OpenAI already has more than 900 million weekly users, and its managers have frequently commented that they continue to face computing capacity constraints.

Cerebras grows. This agreement reinforces Cerebras' position in a market that clearly demands this type of solution. The firm is negotiating a $1 billion investment round that would bring its valuation to $22 billion, nearly tripling the current figure of around $8.1 billion. In the past it has raised $1.8 billion, according to PitchBook.

Conflict of interest. This agreement also draws attention for an important reason: Sam Altman, CEO of OpenAI, is also an investor in Cerebras (he is listed at the bottom of this Cerebras website), and indeed his company at one point considered acquiring Cerebras, although in the end that operation did not come to fruition. We are therefore faced with an operation that in theory benefits Altman on both sides, which is worrying.

How will OpenAI pay for this party? This new agreement once again reignites the debate about OpenAI's ability to meet its credit and debt obligations. In 2025 it generated about $13 billion in revenue, but that enormous amount remains minuscule compared with the roughly $600 billion in contracts it has signed with Oracle, Microsoft and Amazon, money that will have to come from somewhere. From where? It's a good question. We'll see if they can end up answering it.
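The tokens-per-second metric discussed above is simple to compute from a streamed response: count the tokens received and divide by elapsed time. A minimal sketch, where `fake_stream` is a simulated generator standing in for a real model's streaming output (not any actual API):

```python
import time

def tokens_per_second(token_stream) -> float:
    """Consume a stream of generated tokens and return throughput."""
    start = time.perf_counter()
    count = sum(1 for _ in token_stream)
    elapsed = time.perf_counter() - start
    return count / elapsed if elapsed > 0 else float("inf")

def fake_stream(n_tokens: int, delay_s: float):
    """Hypothetical stand-in for a model emitting tokens one by one."""
    for _ in range(n_tokens):
        time.sleep(delay_s)  # pretend each token takes delay_s to produce
        yield "tok"

# 50 tokens at ~2 ms each: on the order of a few hundred tok/s
print(f"{tokens_per_second(fake_stream(50, 0.002)):.0f} tok/s")
```

The same measurement applied to a real chatbot is what inference-focused chips like those from Cerebras or Groq are designed to maximize.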
In Xataka | The alliance between Oracle and OpenAI is not just about data centers: it is about overtaking Google, Apple and Microsoft on the right
