What’s new and what has improved in GPT-5.1, the new version of the ChatGPT model with two personalities

Let’s go over what’s new in GPT-5.1, the latest version of the artificial intelligence model behind ChatGPT. GPT versions are the engine of your interactions with OpenAI’s AI: both the answers you get and the way they are delivered depend on them. This update stands out above all for offering two variants with different personalities, but there are also other changes that go more under the hood and that can also make a difference in the answers it serves you. Bear in mind that GPT-5.1 has reached paying users first, whether they have a Plus, Pro, Go or Business subscription. It may later reach free users too, but probably with limited use.

Two GPT-5.1 variants with two ways of responding

As we said, the main novelty of this update is that it comes in two variants with different personalities. This goes beyond customizing ChatGPT with different personalities as you can do in the settings: there are directly two versions of GPT-5.1, each with its own type of response. On one hand there is GPT-5.1 Instant, which is more conversational, with “warmer” and closer responses. On the other there is GPT-5.1 Thinking, which uses clearer language with less jargon. This second model is trained for deep reasoning: it responds faster on simple tasks while dedicating more thinking time to complex ones. Paid ChatGPT users will see the GPT-5.1 model activated at the top of the interface. Clicking on the name opens the model selector, where you can choose between the Instant and Thinking variants. There is also an Auto mode that chooses between the two for you depending on what you ask in the prompt.

Smarter adaptive reasoning

GPT-5.1 also improves its internal logic, and now dynamically decides how much “thinking time” to dedicate to each request.
In other words, instead of dedicating the same time to every request, it allocates different amounts of time depending on the type of request. A simple query is processed with minimal computation and answered faster, while more advanced reasoning tasks receive additional layers of analysis to make the results more coherent and context-sensitive.

Behavioral improvements and instruction following

The new model also improves instruction following, “understanding” better what you ask and generating responses more aligned with it. Each of the two variants adjusts its reasoning to the complexity of the request so that the answers stay consistent with everything you ask of it.

Better tone and personality controls

We already mentioned that ChatGPT has a setting to determine the tone and personality of its responses. Now, instead of a handful of predefined tones, the user can configure it. For example, you can make it professional, friendly or efficient, and apply that consistently across all interactions. For regular users this simply helps you feel more comfortable with the answers it gives you. For companies it matters even more, since they can align the tone with the one they use both in customer communications and in internal documentation.

Context retention improvements

Context retention is more effective, which improves continuity in long, multi-turn interactions. This helps you as a user, but it is especially important in business settings, in uses such as customer service or knowledge-base systems.

Performance optimization

Response generation is now faster, and token overhead is reduced, making GPT-5.1 a better model for automated environments. It can deliver equal or better results using fewer tokens than previous models, reducing the overall cost of using the API.
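An Auto-style selector like the one described above can be pictured as a router that estimates how demanding a prompt is and picks a variant accordingly. This is a purely illustrative sketch: the variant names, keyword list and thresholds are our assumptions, not OpenAI’s actual routing logic or API.

```python
# Illustrative sketch of an "Auto"-style router: pick a fast or a
# reasoning variant from a rough complexity estimate of the prompt.
# Variant names, hints and thresholds are hypothetical.

REASONING_HINTS = ("prove", "step by step", "debug", "optimize", "analyze")

def estimate_complexity(prompt: str) -> float:
    """Very rough score: longer prompts and reasoning keywords raise it."""
    score = min(len(prompt.split()) / 100, 1.0)  # length contributes up to 1.0
    score += sum(h in prompt.lower() for h in REASONING_HINTS) * 0.5
    return score

def route(prompt: str) -> str:
    """Return which hypothetical variant would handle the request."""
    return "thinking" if estimate_complexity(prompt) >= 0.5 else "instant"

print(route("What time is it?"))                                  # instant
print(route("Please analyze this code step by step and debug it"))  # thinking
```

A real router would of course use the model itself (or a small classifier) rather than keywords; the point is only that cheap requests can skip the expensive reasoning path.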
In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence

Kimi K2 Thinking: what it is, the features of this artificial intelligence model, and how it differs from Gemini and ChatGPT

Let’s explain what Kimi K2 Thinking is, the latest artificial intelligence model from the company Kimi AI. It is an AI that has made a name for itself thanks to its open nature and for managing to compete directly against GPT-5, Gemini 2.5 Pro and other high-end models. We will start the article by explaining what Kimi K2 is and the characteristics that set this artificial intelligence model apart. Then we will finish with its main differences from the most popular models on the market.

What is Kimi K2 Thinking

Kimi K2 Thinking is the latest version of Kimi, a Chinese artificial intelligence model created by the company Moonshot, backed by Alibaba. Since the names are similar, think of it this way: Kimi is the company’s AI, like ChatGPT, and that AI has different models being launched, just as GPT-5 is in the case of OpenAI. Kimi K2 launched in July and stood out for its gigantic size of one trillion parameters. Now there is a new version called Kimi K2 Thinking, with 32 billion active parameters. According to its creators, this allows the AI to maintain stable use of agentic tools over 200 to 300 sequential calls.

And what does all this mean? As you may know, we are entering the era of AI agents: automations with which an artificial intelligence can carry out different actions autonomously. This lets the AI even make decisions for you, from handling a purchase to preparing a vacation package and taking care of the reservations. At the business level it is going to have even more uses. Therefore, the more actions an AI can perform without making mistakes, the more valuable and powerful it is.

Features of Kimi K2 Thinking

The most important feature of Kimi K2 Thinking is that it is an open model.
The models of companies like OpenAI, Google or Anthropic are closed, which means their source code is kept under lock and key: only those companies know how they work inside. Meanwhile, K2 Thinking is open, which means anyone can look at its GitHub to see how it works inside and what its features are, and can even adapt it for free. What’s more, you can install it locally at no cost, although the computer needed for that is too powerful for ordinary mortals; “distilled” versions, slimmed down or trimmed, may be released so people can run them locally. In this respect it is like DeepSeek, another open AI that surprised everyone a few months ago by approaching the power of closed models such as Gemini or ChatGPT. In the case of Kimi K2 Thinking, according to the benchmarks, it has managed to surpass GPT-5, something that until recently was unthinkable.

We are looking at a Mixture-of-Experts (MoE) architecture, which means the model is made up of several experts (specialized subnetworks or modules), and that not everything activates at once: only the parts of the model needed to answer your question or perform the task you requested.

It should also be said that it is multilingual and can be used in other languages, although it focuses on Chinese, and that it can process many types of file formats. It also searches in real time to offer you the most up-to-date information, and it is multimodal, able to interpret text, images, code or a combination of these. Kimi K2 Thinking can be used as a conversational chatbot, answering questions and maintaining long context while following complex threads. But it can also interpret images, or mixed inputs such as images combined with text and code. In addition, it can generate programming code, analyze long documents thanks to its large context window, and extract information to answer questions about the content or give you a summary. And it can create automations or agents.
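The Mixture-of-Experts idea described above can be sketched in a few lines: a gating function scores all the experts for an input, but only the top-scoring ones actually run. This is a toy illustration; the sizes and top-k value are made up and do not reflect Kimi K2’s real configuration.

```python
import numpy as np

# Toy Mixture-of-Experts layer: score every expert, run only the top-k.
# All dimensions here are illustrative, not a real model's config.

rng = np.random.default_rng(0)
n_experts, d_model, top_k = 8, 16, 2

gate_w = rng.normal(size=(d_model, n_experts))            # gating weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    scores = x @ gate_w                                   # one score per expert
    top = np.argsort(scores)[-top_k:]                     # indices of top-k experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over chosen
    # Only the chosen experts compute; the other 6 stay inactive,
    # which is why a huge model can be cheap per token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

out = moe_layer(rng.normal(size=d_model))
print(out.shape)  # (16,)
```

This is also why “one trillion parameters, 32 billion active” is not a contradiction: the total counts every expert, while each token only touches the routed fraction.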
Differences with ChatGPT or Gemini

As we said above, Kimi’s main difference is its open approach. While ChatGPT and Gemini are proprietary models, Kimi gives the community access so it can see its code. Several benchmarks have shown that Kimi K2 Thinking outperforms GPT-5 and Claude Sonnet 4.5 (Thinking) in agentic search and browsing, in text-only operation, and in information gathering. The only area where it still does not surpass these models is code generation. In the use of agentic tools, benchmarks position Kimi K2 Thinking as a leading AI model.

Besides, Kimi is cheaper in several ways. First, training the model cost $4.6 million, according to CNBC, a ridiculous figure considering that training proprietary models like GPT-5 is estimated to have cost around 500 million dollars. Using the Kimi K2 Thinking API is also cheaper. The API is the entry key that lets other applications connect to this AI and work with it. K2 Thinking costs $0.6 per million input tokens and $2.5 per million output tokens; GPT-5 Chat costs $1.25 and $10 respectively, and Claude Sonnet 4.5 costs $3 and $15.

For the average user, the operation is the same. On the website kimi.com, after registering for free, you can use the Kimi K1.5 and K2 models. However, if you want to use Kimi K2 Thinking you will have to pay for one of its subscriptions of 19 or 30 dollars; at least, that is the case if you want the full version on the official website without installing anything.
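The price gap is easy to see with some back-of-the-envelope arithmetic using the per-million-token rates quoted above (input/output, in USD). This is pure arithmetic on the published prices, not an API client.

```python
# Cost comparison from the per-million-token prices quoted in the article.

PRICES = {                      # (input $/M tokens, output $/M tokens)
    "Kimi K2 Thinking": (0.6, 2.5),
    "GPT-5 Chat": (1.25, 10.0),
    "Claude Sonnet 4.5": (3.0, 15.0),
}

def cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Dollar cost of a workload at the listed rates."""
    p_in, p_out = PRICES[model]
    return (tokens_in * p_in + tokens_out * p_out) / 1_000_000

# Example workload: 2M input tokens and 1M output tokens.
for model in PRICES:
    print(f"{model}: ${cost(model, 2_000_000, 1_000_000):.2f}")
# Kimi K2 Thinking: $3.70
# GPT-5 Chat: $12.50
# Claude Sonnet 4.5: $21.00
```

At these rates, the same workload costs roughly 3–6x more on the proprietary models.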

We believed that no open model could outperform GPT-5. A Chinese startup proves us wrong

A Chinese startup called Moonshot has just launched Kimi K2 Thinking, a gigantic open model with a trillion parameters that has done something that seemed almost impossible: surpass the best proprietary models from companies like OpenAI, Google or Anthropic. If we thought “open source” models could never compete with GPT-5, Gemini 2.5 Pro or Claude, we were wrong.

What has happened. This “AI laboratory” had already announced Kimi K2 in July with that gigantic size of one trillion parameters, but it has now released the “Thinking” version with the same size (32 billion active parameters, Mixture-of-Experts architecture). According to its creators, the model can maintain stable use of agentic tools over 200 to 300 sequential calls. In other words: it can chain long sequences of actions autonomously and, apparently, without errors. And the best part is not that: it is that it surpasses GPT-5 or Claude Sonnet 4.5 in various tests and costs much less than those models.

The benchmarks. Moonshot explained how Kimi K2 Thinking achieves the highest scores in Humanity’s Last Exam (general knowledge, 44.9%) and BrowseComp (agentic browsing, 60.2%). It is almost at Claude’s level in the SWE-bench software development test, and also close to the top in another of those benchmarks, LiveCodeBench v6. It is true that in some tests it still sits slightly behind its “Western” rivals, but the achievement is spectacular.

More benchmarks. The team at Artificial Analysis has shared its first conclusions after evaluating it with various tests. They highlight its behavior in agentic tasks that simulate the model acting as a customer service agent: in that test it obtained 93% of the maximum score, far surpassing all its competitors (GPT-5 Codex High obtained 87%, for example). They will run more tests, but for now the outlook is fantastic. And on top of that, it’s cheap.
CNBC reports that training the model cost $4.6 million, a ridiculous figure considering that training proprietary models like GPT-5 is estimated to have cost around 500 million dollars. Using the Kimi K2 Thinking API is also very affordable: $0.6 per million input tokens and $2.5 per million output tokens. GPT-5 Chat costs $1.25 and $10 respectively, while Claude Sonnet 4.5 costs $3 and $15.

The details. The model uses INT4 quantization to improve its efficiency without compromising the precision and quality of its responses. Its context window, the “size” of the data we can feed it when prompting, is 256K tokens, a relatively modest figure for large models but still notable. And as a good open model, we can download it to use locally… if we have a real monster at our disposal. The model weighs 594 GB, and joining two Mac Studio M3 Ultra machines, for example, makes it possible to run it locally relatively smoothly at about 15 tokens per second.

Alibaba is behind it, yes. Although the model is developed by an independent startup called Moonshot, this firm is financially backed by Alibaba, which is becoming an absolute powerhouse in this field. It is no longer content with developing its own outstanding models (Qwen is the clear example); it is also financing the development of other models such as Kimi K2/Thinking.

China and its love for open AI models. Over the last few months we have seen how China has come to dominate the field of open AI models (open weights, not strictly “open source”). The Asian giant has adopted an overwhelming strategy, with ever-better models that until now seemed several steps behind the big proprietary models from OpenAI, Anthropic or Google. That is no longer the case. The race is lively: this achievement represents a new vote of confidence for the open models coming from Chinese companies.
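The INT4 quantization mentioned above means weights are stored as 4-bit integers (16 levels) instead of 16-bit floats, roughly quartering the memory footprint. Here is a minimal sketch of symmetric INT4 quantization; real schemes (including whatever Moonshot uses) add per-group scales and quantization-aware training, so this is only the core idea.

```python
import numpy as np

# Minimal symmetric INT4 quantization sketch: map weights to the
# 16 integer levels in [-8, 7] with a single scale factor.
# Real INT4 schemes use finer-grained scales; this is illustrative.

def quantize_int4(w: np.ndarray):
    scale = np.abs(w).max() / 7              # largest weight maps to +/-7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.21, -0.07, 0.0, 0.14], dtype=np.float32)
q, s = quantize_int4(w)
print(q)                                     # integers in [-8, 7]
print(np.abs(dequantize(q, s) - w).max())    # small reconstruction error
```

The trade-off is exactly the one the article describes: a 594 GB download instead of several times that, at the cost of a small, carefully controlled loss of precision.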
It is true that they are huge, which makes them very difficult for end users to run in practice, but they are an interesting alternative for companies. Image | idnaklss with Midjourney In Xataka | There are many “internal” races within the greater AI race. And Alibaba is winning the open source one

Chinese hypermarkets are in crisis and have found the solution: follow the Mercadona model

The golden age of Chinese hypermarkets is coming to an end. With the economy hitting the brakes, these mastodons are in a tight spot and desperately seeking new formulas to hook consumers who are watching their wallets more closely. In this new landscape, the solution seems to be betting on the strategy Mercadona has mastered for years.

What’s happening. The big Chinese supermarkets are struggling to survive. In recent years Carrefour has closed more than 140 stores, Tesco has disappeared, and last year the main hypermarket chains posted significant losses. With the economy decelerating, Chinese consumers are more cautious about spending, and that is pushing the main chains to change strategy drastically, as Bloomberg reports.

The Mercadona model. More neighborhood stores and more private-label products: that is how some Chinese giants are adapting to this new era. Private labels were not common in China, but they are taking up more and more shelf space in the main chains. The chains are also changing their store strategy, favoring the proximity of smaller shops over hypermarkets that force customers to drive and plan a bigger shopping trip.

Adapt or die. Chinese hypermarket chains are transforming themselves with smaller formats and own brands. Walmart, with its Carrefour Express-style proximity stores and its Marketside brand, is a good example of this trend. The Wumart Group has launched six discount stores in Beijing, and Freshippo, from Alibaba, already has more than 300 stores under its low-cost “chaopa” brand. Roughly 60% of the products found in these stores are private label. This strategy responds to consumers’ search for savings and convenience.

The Pangdonglai case. It is a Henan supermarket that has achieved viral success. Its strategy is based on exceptional customer service, unusually good treatment of employees, and services such as water stations for dogs and personalized preparation of the shopping basket.
But the main secret of its success is that it has capped its profit margin at 30%, which allows it to keep prices low all the time without resorting to one-off promotions. Despite being born in a smaller city, its model is so influential that Yonghui Superstores, China’s fourth-largest chain, is remodeling its stores following its example. Image | Wikipedia In Xataka | The US studied what would happen if it went to war with China. Now it has started a desperate race to double its missiles

Spain’s Clevergy has just raised 3.2 million to expand its energy management model

The relationship between energy retailers and households is changing: it no longer runs on invoices but on apps that explain what your home consumes and what you can optimize. In that day-to-day shift stands out Clevergy, a Spanish startup founded in 2022 that has just closed 3.2 million euros to make its European leap. Its proposal allows companies to offer not only a personalized application with real-time monitoring, alerts and savings recommendations, but also a set of solutions to digitize their business. The promised result: customers who better understand their energy consumption, and companies that modernize their offering without starting from scratch.

Founded in Madrid in 2022 by Beltrán Aznar, Álvaro Pérez and Juan López, Clevergy has moved quickly in a sector where digitalization is already a requirement. In just three years it has reached, according to the company, “hundreds of thousands” of Spanish homes through its agreements with retailers. Its role is clear: it operates under a B2B2C model, meaning it provides technology to companies so that they are the ones who put it in the hands of their end customers. This combination of speed and adoption has given it visibility in a market in full transformation.

Clevergy seeks to turn energy management into a daily experience. Clevergy’s proposal for retailers goes beyond an app for their customers. The company has developed a portal that centralizes operations and support, as well as identifying business opportunities and cutting costs. It also offers an API to integrate consumption and generation data from meters, solar panels or connected devices. Added to this are white-label applications, adaptable to each company’s identity, and modules that can be embedded in existing platforms. For households, all this deployment translates into features designed to give more visibility into the energy they consume.
Customers can monitor their spending in real time, receive notifications when inefficiencies are detected, and adjust their consumption habits. The system also includes comparisons with other users, calculation of potential savings, and remote control of connected equipment. In this way, retailers seek to add tangible value to their offering and build trust in a market where price is no longer the only deciding factor.

Clevergy’s growth has been fast. In just three years it claims to have tripled in size and, in just 18 months, it has closed two funding rounds: the first of 1.5 million euros in 2024, and the second, of 3.2 million, the one just announced in 2025. The latter marks a turning point, arriving at a time when retailers are intensifying their search for digital services to improve their relationship with customers and reduce costs. For the company, it is a validation of its role in this transformation.

The round of 3.2 million euros has been led by Racine2 (managed by Serena and Makesense) together with Axon Partners Group, with the participation of Satgana, Wayra (Telefónica’s CVC) and Angels, Juan Roig’s investment company. With these funds, Clevergy aims to accelerate its international expansion and improve its platform’s capabilities. The company’s stated objective is to keep refining its technology and progressively take it to other countries on the continent.

The challenge now is to see how far Clevergy can go outside of Spain. The company has shown traction in the national market, but the jump to Europe means integrating with different regulations and competing against other technology and energy players. It will be key to see how it deploys its platform in new countries and whether retailers really pass that proposal on to the end customer.
Its evolution will show to what extent this digitalization model can be consolidated beyond the domestic market. Images | Clevergy In Xataka | Juan Roig believes that in the future no one will cook at home. Mercadona is conquering the market thanks to that In Xataka | A Basque AI startup has just raised 189 million euros with a great idea: compressing AI

Xiaomi has bought three Tesla Model Y to disassemble them

“It’s a great vehicle.” And that is why Xiaomi had no qualms about taking a Tesla Model Y, disassembling it, and working out what they could do so that their first electric SUV would be as good as or better than Elon Musk’s. The words are Lei Jun’s. The confirmation of that “disassembly”, too.

“Exceptional”. “The Model Y is a very, very exceptional car”, although according to Xiaomi’s CEO it loses out in interior space against his Xiaomi YU7. That is what Lei Jun, the company’s CEO, said in a public appearance echoed by Business Insider. In it, Lei Jun did not hesitate to compliment the American company’s vehicle before pointing out its weak points compared with Xiaomi’s car. Of course, according to him, Xiaomi’s option is roomier, has greater range and, above all, value for money that is very hard to ignore. As Xiaomi World points out, the CEO said that if you were not considering the Xiaomi YU7 as your next purchase, the Tesla Model Y is the best alternative.

Three units. That is how many Tesla Model Y Lei Jun has confirmed they bought, with the aim of dismantling them and checking how the cars were made inside. The intention was to take the segment leader (it is the best-selling EV in the world) and tear it apart to see what made it different. Xiaomi’s case is far from unique. A few months ago we learned that Toyota had hired a consultancy to discover what made BYD’s and Tesla’s cars tick. We also know that Mercedes did the same with cars from Zeekr, the Chinese company owned by Geely, to see what was hidden under bodywork boasting 1,000 kilometers of range.

And Xiaomi. Although we now know that Xiaomi has bought cars from the competition, the Chinese company itself has been closely watched by all kinds of rivals. Jim Farley, CEO of Ford, has repeated ad nauseam that he bought a Xiaomi car and drove it for months without tiring of it.
Hyundai also took one to South Korea. Even this summer, a photograph at the gates of Maranello, the Ferrari factory, showed a Xiaomi SU7 Ultra entering the plant. Although there is no official confirmation, everything indicates that the Italians have also been analyzing in depth the fastest electric car around the Nürburgring.

Why? As we can see, it is common for companies to buy competitors’ units to understand how they work, what makes them different, and where they gain advantages over their own models. The Xiaomi case is not unique, but it is significant for whom it targets. The Tesla Model Y has been the most purchased electric car in the world and even the first fully electric model to earn the title of “best-selling car in the world”, regardless of powertrain. But since the Xiaomi SU7, and with the redoubled commitment of the YU7 (its electric SUV), Lei Jun’s company has focused on making clear where it beats Tesla’s car. Xiaomi has boasted about its autonomous driving technology (although it omitted it for legal reasons with the YU7), its range and, above all, its price, since Xiaomi’s versions sell for a fraction of Tesla’s cars.

A compliment. There is another important detail in this whole fight. When a Chinese manufacturer points directly at a rival and mentions it in its presentations, it is not an attack. In China, “inspiration” (or outright copying) serves to make clear who the market leader is, who they want to resemble and, of course, whom they want to surpass. It is a strategy that clashes with Western culture, but it is the reason Xiaomi’s phone presentations are full of references to Apple, and why the presentations of Xiaomi’s cars were flooded with references to Tesla and Porsche. Each, in its field (value for money for the first, sportiness for the second), is the market leader.
Disassembling, copying what works and trying to improve on it is a strategy followed all over the world, but it makes even more sense in a culture where “inspiration” is better regarded than in the West. Photo | Tesla and Xiaomi In Xataka | I have gotten into the Xiaomi car and now I am going to miss the Xiaomi SU7 Max every time I get into mine

Baidu has just launched a new AI model that competes with the best. The surprise: it is not Open Source

Baidu has presented Ernie X1.1, a new generative model that represents a promising qualitative leap and seems to compete head-to-head with its rivals. That is not so surprising. What is surprising is that this model goes against the Chinese trend: it is closed.

Why it matters. Being “the Chinese Google” worked very well for Baidu for more than two decades, but lately this Chinese tech giant has been trying not to fall behind in the AI race. Its latest move goes in that direction: it has just presented Ernie X1.1, the latest version of a reasoning model that is gaining ground. The model can easily be tested on the official chatbot website (you have to expand the selector and pick the Ernie X1.1 model if it is not already selected).

The new Ernie X1.1 competes head-to-head with DeepSeek-R1, Gemini 2.5 Pro and even GPT-5 in various benchmarks.

Promising. The internal tests published by Baidu show that its new reasoning model offers “significant advances” in answer accuracy (34.8% better than Ernie X1), instruction following (12.5% better) and agentic capabilities (9.6% better). Not only that: its benchmark performance exceeds DeepSeek-R1-0528, the reference AI model in China for months. It is also able to compete in performance with GPT-5 and Gemini 2.5 Pro, two of the most advanced models today.

But closed. While the trend in China is to offer open models that at least publish the weights with which they were trained, Baidu has decided that, for now, Ernie X1.1 will be closed. The model is being offered to corporate clients and developers through Baidu’s cloud platform, Qianfan (via API), in addition to its aforementioned availability at ernie.baidu.com.

Meanwhile, other models open up. What Baidu has opened is the ERNIE-4.5-21B-A3B-Thinking model, a somewhat less capable version that nevertheless has reasoning capacity and supports a context window of 128,000 tokens.
This model uses a MoE (Mixture of Experts) architecture with 21 billion total parameters, of which only 3 billion are active, to make it more efficient. The model is available, for example, through Hugging Face.

An AI that accompanies the elderly. As SCMP reports, during the presentation of this model Wang Haifeng, CTO of Baidu, announced the launch of an AI agent intended to accompany the elderly. The goal is to help the more than 310 million Chinese over 60, offering information that can help them with their health, for example, and also helping them discern whether information they find on the internet could be harmful to them.

Counting R’s. At Xataka we have been able to use Ernie X1.1 for a few hours, and the first impressions are very positive. The reasoning model is especially careful when answering. For example, when asked how many letter R’s there are in the Spanish phrase “el perro de San Roque no tiene rabo porque Ramón Ramírez se lo ha robado”, the model correctly answered 9, while Ernie 4.5, also available on that website, answered quickly but wrongly: 8. We also asked it to create a table with the 10 countries that have won the most in football, and once again the answer was perfect, something we had not seen in almost any previous model. In other interactions, for example when creating and analyzing Python code, the performance was also remarkable. Pending a more thorough test, these small trials point to genuinely remarkable behavior from the model.

But rivals are pushing. We are therefore looking at a striking step by Baidu, both for that promising performance and for the fact that the company has decided not to open the model for now. Facing Ernie X1.1, though, the competition is fierce: both Qwen3-Max-Preview (Alibaba) and Kimi-K2-0905 (Moonshot) have managed to enter the list of the 10 most powerful models on the market according to LMArena.
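The letter-counting test described above is trivial to verify programmatically, which is what makes it a nice check: it is easy for a script and deceptively hard for models that tokenize text into chunks rather than characters.

```python
# Verifying the counting test from the text: how many letter R's are in
# the Spanish tongue-twister that Ernie X1.1 answered correctly.

phrase = "el perro de San Roque no tiene rabo porque Ramón Ramírez se lo ha robado"
count = phrase.lower().count("r")  # case-insensitive count
print(count)  # 9 — the answer Ernie X1.1 gave; Ernie 4.5 answered 8
```

Character-level questions like this fail on many LLMs precisely because the model never “sees” individual letters, only subword tokens, so getting it right usually signals that the model is reasoning step by step rather than pattern-matching.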
At the top of that list are Gemini 2.5 Pro, Claude Opus 4.1, o3 and GPT-5-High, but the advance of these Chinese models is remarkable. It remains to be seen whether Ernie X1.1 will also sneak into that list. In Xataka | DeepSeek has fired the starting gun in the race for cheaper AI. And China starts with an advantage

Alibaba has presented its largest AI model, with a trillion parameters. The question is whether, at this point, that means anything

The Chinese giant Alibaba has announced a new language model, the largest it has announced to date. It is called Qwen3-Max and boasts more than one trillion parameters.

The biggest. It is the latest model in the Qwen3 series, launched in May of this year, and as the name “Max” indicates, it is the largest to date. Its size comes from its parameters, one trillion to be exact, while the previous models in the series topped out at 235 billion. According to the South China Morning Post (which Alibaba owns), the model stands out in language understanding, reasoning and text generation.

Benchmarks. Benchmark results place Qwen3-Max ahead of competitors such as Claude Opus 4, DeepSeek V3.1 and Kimi K2. If Gemini 2.5 Pro or GPT-5 do not appear, it is because they are reasoning models and Alibaba only compared fast-response models. As Dev.to points out, both Gemini 2.5 Pro and GPT-5 obtain higher scores in mathematics and code, so reasoning models still have the advantage in those areas. Qwen3-Max-Preview can already be tested free of charge. Image | Benchmarks shared by Alibaba

Parameters. Parameters are all the internal variables a model learns during training. In other words, they are the knowledge the model has extracted from the data it was trained on, and they allow it to interpret our requests and generate its answers. In theory, the more parameters, the more and better capabilities the model will have. It also means more computational power is needed both to train and to run the model.

More does not mean better. The parameter race recalls the megapixel race of the first digital cameras. A 100-megapixel sensor takes larger photos than a 10-megapixel one, but other crucial factors affect image quality, such as sensor size or lens brightness.

Quality data.
More parameters can translate into more learning capacity and better performance on complex tasks, as long as quality training data has been used. It is obvious: a language model trained on redundant, incorrect or biased data will learn those errors and keep reproducing them.

There’s more. In 2022, Google’s DeepMind laboratory discovered that many models were oversized in parameters but undertrained in data. To demonstrate it, they created the Chinchilla model with “only” 70 billion parameters but four times more data. The result was that it beat Gopher, a model with four times more parameters.

Architecture. The model’s architecture is another decisive factor in achieving an efficient model. A standard architecture that forces the model to use its entire neural network is not the same as a Mixture-of-Experts one, which consists of many smaller networks. It is something like having a committee of experts, each with a specialty: the model can pick the right expert for each query instead of using the whole network. With this technique, for example, Mistral manages to use only a fraction of its parameters, making it faster and cheaper to run. Image | Markus Winkler, via Pexels In Xataka | The ASML-Mistral alliance reveals the European plan B: if we cannot manufacture chips, we will at least control how they are manufactured
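The Chinchilla result above is often summarized as a rule of thumb of roughly 20 training tokens per parameter; the exact ratio varies by compute budget, so treat the numbers below as an approximation, not DeepMind’s precise formula.

```python
# Chinchilla rule of thumb: a compute-optimal model wants roughly
# ~20 training tokens per parameter. Approximate, for illustration.

TOKENS_PER_PARAM = 20

def optimal_tokens(params: float) -> float:
    """Approximate compute-optimal training-token count for a model size."""
    return params * TOKENS_PER_PARAM

chinchilla_params = 70e9   # 70 billion parameters, as cited above
gopher_params = 280e9      # Gopher was about 4x larger

print(f"Chinchilla: ~{optimal_tokens(chinchilla_params) / 1e12:.1f}T tokens")
# Chinchilla: ~1.4T tokens
```

Under this rule, Chinchilla’s ~1.4 trillion training tokens were about right for its size, while Gopher, despite four times the parameters, was trained on far fewer tokens than the rule suggests, which is why the smaller, better-fed model won.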

The world is waiting for DeepSeek's next big model to compete with GPT-5, but DeepSeek has other plans: agentic AI

At the beginning of the year, the Chinese startup DeepSeek turned the AI world upside down with DeepSeek R1, a free and open source model that stood on par with GPT-4 or Claude. After that statement of intent, DeepSeek has been fairly quiet, but now we know its next objective: agentic AI. Before the end of the year. A few days ago Bloomberg reported that DeepSeek is working on an advanced and very ambitious agent. It will be able to perform multiple tasks with minimal user intervention and will learn as it works. According to sources close to the company, founder Liang Wenfeng is pressing his team so that the new agentic model is ready before the end of the year. The company has already taken a step in this direction with the presentation of DeepSeek V3.1 just two weeks ago. As the company detailed in a WeChat post, its new model improves performance in reasoning tasks and agentic abilities. A step back. DeepSeek R2, the expected successor of the successful model with which DeepSeek shook the industry, is keeping us waiting. Instead they gave us DeepSeek V3.1, and now the rumors suggest that their next big launch will be an AI agent. What is happening? There are voices, such as this Chinese journalist, who see this turn to agentic AI as a way of taking a step back and moving away from the expensive, hyper-competitive race of foundational language models. That generative AI is hitting its ceiling is something that has been discussed since last year. GPT-5 is the most recent evidence that the big leaps are a thing of the past. If we add that China has a more conservative way of proceeding, with longer-term strategies, DeepSeek's turn towards an agent instead of launching DeepSeek R2 makes sense.
Restrictions. Although we have seen the most ingenious ways of dodging them, United States restrictions on chip exports to China are also impacting the plans of many Chinese companies, and DeepSeek is no exception. This adds extra pressure that forces new routes to market their products. In fact, there is something striking about DeepSeek V3.1: the model has been specially designed for Chinese chips, with the objective of avoiding dependence on foreign chips. Generating income. Agentic AI opens another avenue for DeepSeek, one in which it can obtain profits more easily. Large language models have a problem: they cost a fortune, and monetizing them is not proving a simple task. Given this, AI agents are emerging as a more reasonable business model. DeepSeek R1 already gave a whole lesson in resource efficiency; it makes sense that the company wants to opt for the fastest path to profits. A more conservative position. Although it has closed the gap, China lags behind in AI in terms of investment and access to the most advanced chips. Despite this, its approach in this AI race is different. We see it in its bet on the open-source wave, but perhaps the biggest difference is that, while its competitors in the United States keep pouring in billions, in China they are choosing to be more conservative and not waste. This turn to agents is in that conservative line toward a more sustainable industry. Image | Matheus Bertelli, via Pexels In Xataka | There is a city in China that measures up face to face with Silicon Valley: welcome to Hangzhou, home of the 'Six Little Dragons'

Microsoft had a well-kept secret. Its new AI model for Copilot is the clearest statement against OpenAI's dominance

Since the generative AI fever broke out, Microsoft has relied on OpenAI models to power key functions in some of its most important products. It is not strange if we remember that the Redmond company invested more than $10 billion in the startup led by Sam Altman. However, this alliance of convenience has been showing cracks for months and, as time goes by, the rivalry between the two parties becomes more evident. It was Microsoft itself that, a year ago, included OpenAI in its list of competitors, together with Amazon, Apple, Google and Meta. And it is OpenAI that, according to The Information, insists on not wanting to share its cutting-edge technology when AGI arrives, if it arrives. Even if both sides try to paper over it, the link is no longer as solid as in its first days. And now there is another chapter underway. Microsoft AI begins to show its own cards. Mustafa Suleyman, CEO of Microsoft AI, who took the position when the partnership with OpenAI had been consolidated for years, has just presented two internally developed models. They are advanced proposals that users can already try, and they reflect the company's ambition: "to create applied AI as a platform for products." One is a real novelty and the other we already knew. Let's look at the details. MAI-1-Preview. This is the big novelty. It is a mixture-of-experts model, in the style of GPT-4o or GPT-5, designed to offer strong instruction-following and useful responses for everyday queries. According to Suleyman, it is the first model trained end to end in Microsoft AI's own laboratories. The most striking thing is that it can already be tested. Just go to LMArena, select Direct Chat and choose MAI-1-Preview. In the coming weeks it will also arrive in Copilot, although only "for certain text use cases." It is paradoxical: the text functions of the Microsoft chatbot run on OpenAI, but now they will begin to coexist with Microsoft's own technology.
Developers can also request early access to the API. MAI-Voice-1. It is a voice generation model that stands out for its expressiveness and naturalness. For some time it has powered functions such as Copilot Daily (news summaries) and Copilot Podcasts, although only in English. It is also available in Copilot Labs, where you can try different voices, styles and narration tones. One of its strengths is efficiency: it can generate a full minute of audio in less than a second using a single GPU. That makes it one of the fastest and most efficient voice systems available today. Microsoft argues that voice will be the interface of future AI assistants, and it wants to push ahead with a high-fidelity solution capable of responding in different scenarios. Could it have resorted to OpenAI and its audio version of GPT-4o? Yes. Does it want to? Everything indicates not. "Much more to come. We have big ambitions for what comes next: advances in the models, an exciting roadmap in compute capacity and the opportunity to reach billions of people through Microsoft products. We are building an AI for everyone," said Suleyman on X. It remains to be seen what course the relationship between Microsoft and OpenAI will take. What is clear for users is that there will be more variety and more tools to experiment with. Images | OpenAI In Xataka | Meta wants us to use AI when we don't know what to say on WhatsApp: this is how its new writing assistance option works
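To put Microsoft's speed claim in perspective, speech synthesis throughput is usually expressed as a real-time factor (seconds of audio produced per second of computation). Taking the claim at face value, and conservatively rounding "less than a second" up to a full second:

```python
# Back-of-the-envelope real-time factor for MAI-Voice-1's claimed speed:
# a full minute of audio generated in under a second on one GPU.
audio_seconds = 60.0
generation_seconds = 1.0  # "less than a second", rounded up for a lower bound

real_time_factor = audio_seconds / generation_seconds
print(f"Real-time factor: at least {real_time_factor:.0f}x")  # >= 60x real time
```

A factor of at least 60x on a single GPU is what makes live, conversational voice interfaces economically viable, which is the use case Microsoft is betting on.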
