OpenAI and Google deny that they are going to put ads in ChatGPT and Gemini. The reality is that the numbers don't add up on subscriptions alone

That AI has a profitability problem is well known. Just look at OpenAI's accounts: in its last consolidated quarter the company lost a whopping $11.5 billion. Subscriptions were presented as the way to monetize chatbots, but barely 5% of ChatGPT's total users are on one of its paid plans. The numbers don't add up and, although the companies deny it, the shadow of advertising hangs over AI.

What's happening. Rumors that some very popular chatbots will integrate ads have intensified in recent days. First, alleged screenshots of an ad in ChatGPT began to circulate, and later a media outlet specialized in advertising claimed that Gemini will carry ads in 2026.

The companies deny it. Google was quick to deny the report, insisting that Gemini has no ads and that "there are currently no plans in place to change it." What stands out above all is that "currently", which continues to leave the door open to advertising in the future. For its part, OpenAI has denied the report, stating that what appeared in that screenshot "was either not real or it was not an advertisement." What was shown was a suggestion to connect an account with Target, the popular American retail chain.

Where there's smoke... Despite the forcefulness of the denials, a few days ago we learned that OpenAI is preparing the ground to include advertising in ChatGPT. The ChatGPT beta version for Android includes explicit references to an ads feature, with tags like "content bazaar" and "ad carousel." The company is also hiring experts in advertising platforms, so the appearance of ads looks like a question not of "if" but of "when."

In Google's case we haven't seen any screenshots or traces in the code, so there isn't the same sense of imminence. However, there are rumors of ads appearing in AI summaries and, given that advertising is the company's main business, it wouldn't be surprising if they ended up integrating ads into their chatbot.
Investment vs. return. The imbalance between what technology companies are spending on AI and what they are earning from it is stark. Big tech companies like Google are increasing their revenue, but not thanks to AI: the growth comes from their cloud services. In OpenAI's case, with no other business to cushion the impact, the gap between expenses and income is brutal.

Subscriptions are not enough. AI has managed to reach the general public and, according to the consulting firm Menlo Ventures, it already has 1.8 billion users around the world. The problem is that only 3% pay for any kind of subscription. OpenAI currently has 5% paying users and expects the figure to rise to 8.5% by 2030. That is still not enough to reach the desired profitability.

According to a study by JP Morgan, for the AI industry to achieve a 10% return on everything it has spent, it would need $650 billion a year, which is the same as saying that 1.4 billion people would have to pay more than $400 each per year to use AI. They may get there, but for now ads look like a faster way to generate income.

Image | Generated with Gemini

In Xataka | AI has become the best example that if you don't pay for the product, you are the product
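The JP Morgan figures quoted in this article are easy to sanity-check. A quick back-of-envelope sketch (illustrative arithmetic only, not JP Morgan's methodology):

```python
# Back-of-envelope check of the figures cited above: $650 billion a year
# spread across 1.4 billion paying users.
TARGET_ANNUAL_REVENUE = 650e9   # $650 billion per year for a 10% return
PAYING_USERS = 1.4e9            # 1.4 billion people paying for AI

required_per_user = TARGET_ANNUAL_REVENUE / PAYING_USERS
print(f"Each user would need to pay ~${required_per_user:.0f} per year")
# prints: Each user would need to pay ~$464 per year
```

That works out to roughly $464 per person per year, consistent with the article's "more than $400 each year."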

ChatGPT has the same chance of hitting the Lottery Jackpot as a witch reading the guts of a crow

There are those who always play the same number. Others travel halfway across Spain looking for a combination they dreamed of, or simply a special date. To this parade of rituals around the Extraordinary Christmas Lottery Draw we can now add a new name: ChatGPT. And the question is not only whether artificial intelligence can guess the winning number, something that is obviously impossible. It goes much further than that: there is a lot of superstition in this, but also a lot of taking what the AI tells us at face value, even when we know there is nothing behind it to support its results.

ChatGPT and the lottery. Christmas is coming and with it interest in the Lottery increases. And with it, an unexpected protagonist also emerges again: ChatGPT. The OpenAI chatbot has become another Christmas classic because, one more year, Spaniards are asking it what the winning number will be. The goal is clear: for ChatGPT to divine the ticket that will win El Gordo in the 2025 Christmas Lottery. Although only chance rules here.

It won't commit. The Christmas Draw uses a system of two drums with a manual mechanism in which all the balls are identical in both weight and size, so every number has exactly the same chance of winning. The prize is drawn from the first drum and the number it is associated with is drawn from the second. The procedure is completely random and, therefore, so is the winning number. The chance of getting it right is 0.001%.

If you have ever asked ChatGPT what El Gordo will be on December 22, its answer is what it should be. If you insist, it repeats the same thing: "I cannot tell you with certainty what the winning number of the Spanish Christmas Lottery will be. And in fact no one can.
The draw is designed to be totally random; each number from 00000 to 99999 has the same probability of being awarded." It isn't trying to sell us anything, and it makes very clear why: "although there are those who try to use theories, superstitions or even artificial intelligence to predict numbers, these methods have no real foundation: in the end, each number still has exactly a 1 in 100,000 probability."

It sings in the end. But if we push a little more, it ends up producing a random number. If you give it a prompt asking for a number based on a mathematical sample, or one that takes the history of winning combinations into account, ChatGPT replies that "I can give you a simulated number as the result of a fictitious statistical sample, but you must be clear that it does not increase your probabilities nor does it represent a real prediction." And then comes what we were waiting for: its bet. In this case, 32,704. Of course, asking the same question in several different conversations yields a different answer each time; not even the last digit has to match. It is, once again, a totally random answer.

The new search engines. Chatbots like ChatGPT or Gemini are displacing traditional search engines when it comes to looking up specific information on a topic, or even getting a much longer explanation. Even Google itself is reworking the way we interact with the internet. If we used to ask Google what could be causing a headache or what might happen if we took an expired medication, now the quickest, simplest and most accessible way is to have a conversation with the AI, a "know-it-all" with a solution to all our questions and concerns. Even the ones that have no answer, like El Gordo's winning number.

A digital superstition. The infinite possibilities of AI are leading us to use it in quite peculiar ways.
These range from having a romantic relationship with it, to using it as a substitute for psychological therapy, to simply interacting less with other humans. In the case of the lottery, just as there are gestures associated with good luck, such as rubbing the ticket on a pregnant woman's belly or on the figurine of a Virgin, asking ChatGPT to pick a number for us is a new digital superstition. Another space to which we have opened the door to artificial intelligence, "just in case" it is right.

Cover image | Generated with Gemini

In Xataka | We have filled our lives with digital superstitions. They are a horror for our productivity

In Xataka | ChatGPT and the Christmas Lottery: what you can do with artificial intelligence and how to ask it for a prediction
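The odds discussed in this article can be sketched in a few lines of Python. This is a toy model of nothing more than the uniform randomness the article describes, not the actual drum mechanism:

```python
import random

def draw_el_gordo(rng=random):
    """Pick a winning number: every number from 00000 to 99999
    is equally likely, exactly as in the real draw."""
    return f"{rng.randrange(100_000):05d}"

# Any "prediction", from ChatGPT or anyone else, has the same chance.
p_win = 1 / 100_000
print(f"{p_win:.3%}")   # prints: 0.001%
print(draw_el_gordo())  # a five-digit number, no likelier than any other
```

Whether the number comes from a dream, a superstition or a chatbot, its probability of matching this draw is identical: 0.001%.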

Google hit the red button when ChatGPT caught it off guard. Now it is OpenAI pressing it, according to the WSJ

Sam Altman has put OpenAI on high alert. As reported by The Wall Street Journal, the company's CEO announced in an internal memo this Monday that the company is entering "code red" to improve ChatGPT, the tool that catapulted it to stardom but that now sees its rivals closing the gap at breakneck speed.

What's happening. OpenAI is postponing several important projects to focus all its resources on improving the everyday ChatGPT experience, according to the internal memo WSJ has had access to. According to Altman, the chatbot urgently needs advances in personalization, speed, reliability and the ability to answer a broader range of questions. Among the postponed projects are initiatives to include advertising in the free version of ChatGPT, AI agents for health and shopping (the latter announced very recently), and Press, a personal assistant in development.

Why now. The pressure comes mainly from Google. Its Gemini 3 model, released last month, has outperformed OpenAI in industry benchmarks and sent the Mountain View giant's stock soaring. As the outlet reports, Gemini's monthly active users went from 450 million in July to 650 million in October, meteoric growth that sets off all the alarms at OpenAI. Although ChatGPT keeps the lead with more than 800 million weekly users, the speed at which Google is gaining ground is worrying.

The underlying problem. OpenAI is in a delicate position. The company is not profitable and needs constant financing rounds to survive, which puts it at a disadvantage against Google and other technology companies that can fund their investments with their own income. It is also spending more aggressively than its main startup rival, Anthropic. According to its own financial projections, OpenAI will need to reach revenues of approximately $200 billion to be profitable in 2030, all while being committed to investments of hundreds of billions in data centers.

The latest setbacks.
The company has lately had a hard time balancing the safety of its chatbot with making it more attractive to users. The GPT-5 model, launched in August, disappointed some users, who complained about its colder tone and its problems answering simple math and geography questions. OpenAI had to update the model last month to make it warmer and better at following user instructions.

OpenAI's response. As the outlet points out, Altman has set up daily calls for those responsible for improving ChatGPT and has encouraged temporary team transfers. WSJ reports that the company uses three color codes, yellow, orange and red, to describe the levels of urgency with which problems must be addressed. According to the outlet, before this "code red", OpenAI had declared a "code orange" in its effort to improve the chatbot. Nick Turley, head of ChatGPT at OpenAI, stated on X that ChatGPT accounts for 70% of global AI-assisted activity and 10% of search activity.

An unexpected plot twist. This is a radical change from three years ago, when it was Google that declared its own code red in response to the threat posed by ChatGPT. Since a groundbreaking Google I/O last May, Mountain View has seen brutal growth in every direction the AI race is currently pointing: improvements to its chatbot, the deployment of countless AI agents, improvements to its applications and more. Now it seems it is OpenAI that must defend its position.

And now what. Altman announced that next week OpenAI will launch a new reasoning model that, according to internal evaluations, surpasses Google's Gemini 3. However, he acknowledges that there is still a lot of work to do on the everyday chatbot experience.

Cover image | OpenAI and Xataka Android

In Xataka | China already has an army of 5.8 million engineers. Its new plan involves accelerating doctorates

OpenAI will show ads on ChatGPT because it has no choice: the free AI business is unsustainable

OpenAI has started laying the groundwork to introduce advertising in ChatGPT. The code of the latest beta version of its Android app includes explicit references to search ads, ad carousels and commercial content. It is something that could be seen coming from afar and had been rumored, but now there is a trace from OpenAI itself.

Why it matters. The company cannot indefinitely sustain free access to a technology that is very expensive to operate. Google and Meta can afford something like this because they finance their chatbots with a huge pre-existing advertising business, but OpenAI keeps accumulating debt and burning cash without a clearly profitable model. A user on the $200/month Pro plan has already reported seeing a Peloton ad during a conversation. Advertising seems inevitable, perhaps even for those who pay... pending confirmation of whether that was a mistake or part of the next new normal.

In Xataka | Privacy has been dying since ChatGPT arrived. Now our obsession is for AI to know us as well as possible

Between the lines. Sam Altman has gone from calling ads "the last resort" in 2024 to praising Instagram's advertising model months later. Leaked internal forecasts anticipated $1 billion in "free user monetization" by 2026. The company has been hiring personnel specialized in advertising platforms, attribution systems and campaign tools. The discourse has changed: it now talks about finding a format that "benefits the user."

Yes, but. Reuters reports that OpenAI has declared an internal "code red" to improve ChatGPT (just the opposite of what happened when ChatGPT arrived) and is postponing initiatives such as advertising. The priority now is to respond to the launch of Gemini 3, not to monetize free users.

The hidden advantage. Conversational AIs know their users better than any cookie or web tracking pixel. We tell them our concerns, intimacies and interests without filters.
We browse obsessed with not being tracked, yet we hand ChatGPT a perfect advertising profile. Google knows what you search for; ChatGPT knows what you think. That difference determines the value of the ad.

At stake. OpenAI handles 800 million weekly users processing 2.5 billion queries a day. That enormous audience turns any advertising model into potential billions in annual revenue. The current free plan isn't going away, but it will likely include ads. Paid plans could become more expensive when the restructuring comes. The company needs revenue that doesn't rely solely on subscriptions to close its huge operating deficit.

In Xataka | ChatGPT has been a tool. If it starts remembering all our conversations, it will become something else: a relationship

Featured image | Solen Feyissa

The news "OpenAI will show ads on ChatGPT because it has no choice: the free AI business is unsustainable" was originally published in Xataka by Javier Lacort.

OpenAI just launched ChatGPT for teachers. The question now is how much education we are willing to delegate to AI

What happens when a teacher uses artificial intelligence to prepare classes, a student uses it to do the homework, and finally that same teacher uses AI again to grade it? It may not be the norm yet, but that scenario no longer sounds so far-fetched. The speed at which these tools have been integrated into classrooms has opened a fundamental debate: what do we really learn if we let technology do the work for us? And what does the educational system lose if this process becomes a habit?

The landing of AI in education is neither coincidental nor recent. Technological tools have been present in classrooms for years, with platforms such as Google Classroom or Moodle. The novelty is not in using technology, but in relying on systems capable of generating content, proposing solutions or even taking part in pedagogical decisions. That is where the big developers, Google, Microsoft, Anthropic and, more recently, OpenAI, have decided to go a step further and position themselves at the center of the educational debate.

Here OpenAI lands with a proposal dedicated to teachers in the United States. We are talking about a version of ChatGPT designed for primary and secondary educators, free for verified teachers, with administrative controls for schools and school districts. Unlike the service almost all of us know, OpenAI says the data generated in these environments will not be used, by default, to train its models.

What ChatGPT for teachers offers

Personalized assistance. It lets you enter school level, curriculum and desired format so that the answers adapt to the real style of the classroom. It is the teacher who controls that configuration.

Integration with usual resources. You can generate presentations with Canva, import lesson plans or documents from Google Drive and Microsoft 365, and start a conversation with that context already loaded.

Ideas from other teachers.
It shows real examples of teachers already using ChatGPT in their classes, directly below the editor, as a source of inspiration.

Teaching collaboration. It makes it easy to create custom GPTs and shared templates to plan units, lessons or assessments with colleagues in the same school or district.

Management from the school. It offers a manageable workspace, with secure accounts and differentiated roles for teachers and academic leaders.

What is OpenAI pursuing with this? Among ChatGPT's 800 million weekly users there are many teachers. The company explains that they are using the tool to design teaching units, adapt the curriculum to regional standards or generate examples that help evaluate their students. Let's look at some of the usage examples it has shared:

Generate examples for a task

"You are an expert English teacher. Using the prompts in the accompanying readings, generate seven different sample answers. Responses should be one paragraph in length and range in quality from very well written to very poor. They must be written following the RACES format (restate, respond, cite, explain and summarize). Include a justification for each answer, indicating its level of writing."

Plan a multi-week unit

"My science department is redesigning the 8th grade physical science curriculum and I need help creating a teaching unit based on the attached objectives. Please make a plan for a 20-day unit with 55-minute classes. I need a guiding question for each day to help focus learning. Provide hands-on activities for students to explore these topics."

As we can see, AI is here to stay, and trying to ignore it is not an option. The real question is how to use it without replacing the act of learning, which is much more than completing a task. Because if the teacher uses AI to solve what he has to prepare, and the student does the same to deliver what is asked of him, what remains of that process beyond compliance?
The educational system is not based on the ability to deliver results, but on the ability to think, make mistakes and argue from one's own knowledge. An MIT study provides data that begins to illuminate the debate: users who wrote essays with ChatGPT produced the text 60% faster, but their relevant cognitive effort was reduced by 32%. That is, they achieve a more polished result, but with less mental work. Another study, in this case from the SBS Swiss Business School, notes that increased use of AI is linked to a deterioration of critical thinking skills.

We still do not know what effects this dynamic will have in the medium or long term. What we do know is that the classroom has become territory the big technology companies want to occupy. And that the real educational challenge of the next decade will not be deciding whether we use AI, but deciding how much of the educational process we are willing to delegate to it.

Images | Xataka with Gemini 3 | OpenAI

In Xataka | The problem is not that the AI is not able to read the time. The problem is confirming that it does not reason and only repeats what it has seen.

How to create a character in ChatGPT and Gemini to use it in all the images you make with artificial intelligence

We are going to explain how to create a character in ChatGPT and Gemini, and how to tell the AI to remember it so you can use it in all the images you generate in that conversation. This way, if you want cohesion between all the images, starring the same digital person, you have a way to do it.

We will briefly explain three different ways to do it, telling you step by step the process to follow. Remember that it is best to do everything in the same chat to maintain context.

Create a character from a description

The first method is to tell the AI that you want to create a character, and add a description of the look you want to use. Start the prompt by saying you want to create a character called "name", and then describe in detail their physical appearance and clothing. This method usually works best in ChatGPT.

The AI may then ask you clarifying questions, such as the style to use, and you answer as you prefer. In my case, I asked for a comic-book style. An image will then appear, and you can ask for changes to the design if you want.

Now you can add a prompt to fix the character's appearance. For that, you can use a prompt like: "I want you to fix this appearance for the character 'name', so that if I ask you for more drawings of them, you always use the same design. Okay?" With this, ChatGPT or Gemini should save this look.

Now you can start asking it to draw this character in different ways. To do this, literally ask it to draw (name of character) and describe the scene and what they are doing. It should create the image keeping the same drawing style and exactly the same appearance.

Create the character from a photo

You can do exactly the same thing, but creating your character from a photo rather than a description. Simply ask the AI to reimagine the photo, and add a description if you want to change something or add more things, such as the outfit.
Then ask it again to turn that into a character to use from now on. And then just ask it to create the same character in different scenes. This method does not always work well in ChatGPT, and it usually works worse in Gemini, but it is worth exploring.

Use an already created image

The third option is to use a character you have created on another website or AI, or, in short, any external design. To do this, upload the drawing of this character and add a prompt like: "I want all the images I ask you for in this specific chat from now on to use this character as the protagonist." That alone will be enough.

From then on, simply ask it to create an image of the character doing whatever you want in the environment you describe. The image will be generated using the character you created before as a reference. This usually works best with Gemini.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence
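All three methods in this article boil down to the same trick: keep the character description fixed while only the scene varies. As a minimal sketch of that idea (the character name and traits here are hypothetical examples, not anything from ChatGPT or Gemini):

```python
# Keep one fixed character definition and reuse it in every scene prompt,
# mirroring the "fix the appearance, then vary the scene" workflow above.
CHARACTER = {
    "name": "Nora",  # hypothetical example character
    "appearance": "short red hair, round glasses, green raincoat",
    "style": "comic-book style",
}

def scene_prompt(character: dict, scene: str) -> str:
    """Compose an image prompt that pins the character's look
    while describing a new scene."""
    return (
        f"Draw {character['name']} ({character['appearance']}, "
        f"{character['style']}) {scene}. "
        f"Keep exactly the same design as in previous images."
    )

print(scene_prompt(CHARACTER, "riding a bicycle through a rainy city"))
```

Pasting the output of `scene_prompt` into the same chat each time is what keeps the design consistent: the description never drifts, only the scene changes.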

Until now in ChatGPT we chatted only with an AI. Now it’s starting to look more like WhatsApp

OpenAI has begun testing a feature that gives a twist to how ChatGPT is used: group chats. For the first time, the platform allows conversations with up to 20 people in which the chatbot acts as just another participant. The pilot has started in Japan, New Zealand, South Korea and Taiwan, and is available to users on all plans: Free, Go, Plus and Pro.

How these groups work. To create a group chat, tap the new people icon in the top right corner of any new or existing conversation. Users must set up a basic profile with name, username and photo. They can then invite others by sharing a direct link. If you add people to an existing chat, ChatGPT creates a copy as a new group to keep the original conversation separate.

The chatbot that knows when to speak. OpenAI says it has adjusted ChatGPT's behavior to work better in this collaborative environment. The chatbot now follows the flow of the conversation and decides when to respond and when to stay silent, something that did not happen before. Users can force it to participate by mentioning "ChatGPT" directly in a message. The assistant can also react to messages with emojis, for more natural behavior.

Why it makes sense. ChatGPT already had a solid foundation as a one-on-one conversation tool. Making the jump to group chats lets it compete directly with messaging services, a terrain where the opportunities for use multiply. Planning trips, organizing work projects, searching for restaurants or preparing collaborative documents are natural use cases that increase the time users spend on the platform. More exposure means more chances of conversion: the more people using ChatGPT in groups, the greater the chance that at least some will end up subscribing to the most basic paid plan (Go), turning the feature into another way to make the service profitable.

The technology behind it.
Responses in these groups are powered by GPT-5.1 Auto, a setting that OpenAI says automatically selects the best GPT-5.1 model depending on the type of question and the models available to each user according to their subscription. All the usual ChatGPT features, be it web search, image and file uploads, image generation or dictation, are available within groups. OpenAI also specifies that usage limits only apply when the chatbot responds, not when users chat with each other.

What's next. OpenAI says it will refine the experience based on feedback from early users before expanding it to more regions and plans. For now, participants can customize the group name, add or remove people, mute notifications, and even set custom instructions for how ChatGPT should respond in each specific group. Anyone with the link can invite others, and participants can mute or remove other members at any time, except the group creator.

Images | OpenAI

In Xataka | Companies are turning workers who know how to use AI into "stars": the new labor gap

George R.R. Martin's lawyers asked ChatGPT to write 'Game of Thrones'. It did it so well that OpenAI is going to end up before a judge

The debate over the limits of AI use and how it will actually affect creators is very complex, and it has only just begun. It ranges from discerning how far AI's ability to create works apart from humans will keep growing, to the logical ethical and legal concerns around a tool that, by its very definition, operates in completely unexplored territory. For now, George R.R. Martin and other authors are taking steps in search of more demanding regulation.

What has happened? A federal judge in Manhattan has given the green light to the lawsuit filed by George R.R. Martin and other authors against OpenAI and Microsoft for alleged copyright infringement. The creator of 'Game of Thrones' and his colleagues accuse these companies of using their works without authorization to train ChatGPT. According to the ruling issued on October 27, 2025, there are grounds for the case to move forward, since ChatGPT's proposal for a sequel to the saga was substantially similar to Martin's work already protected by copyright.

The decisive test. It came when lawyers asked ChatGPT to create a fictional sequel to 'A Clash of Kings'. The chatbot immediately generated a novel called 'Dance of Shadows', a sequel that included a new Targaryen heir named Lady Elara, a rebellious sect of the Children of the Forest, and a mysterious form of ancient dragon-related magic. This ability to recreate elements of Martin's universe made the question obvious: how could the AI know his work in such detail without having been fed on it?

The precedents. The origins of this legal conflict date back to September 2023, when Martin, accompanied by 17 other authors (including Michael Chabon, Ta-Nehisi Coates, Jia Tolentino, John Grisham, Jonathan Franzen and Sarah Silverman), raised his voice against what he considered a systematic exploitation of his work.
The case was brought by the Authors Guild, in a lawsuit that spoke of "systematic theft on a massive scale", arguing that the tool makes use of their works without paying royalties and without the writers' consent.

The letter. Months before the lawsuit, these authors and many others, such as Margaret Atwood and Nora Roberts, had sent a letter to the big technology companies conveying their concerns about generative AI technologies. In that document they warned of "the injustice inherent in exploiting our works as part of your AI systems without our consent, credit or compensation." The accusation was clear: ChatGPT had not only learned from their books; now it could replicate them.

Other fronts. We are at a key moment in determining the legal implications of generative AI. In early 2025, for example, a similar dispute against Anthropic was resolved with an out-of-court settlement: the company paid $1.5 billion to authors whose works were used without permission. This precedent shows that technology companies are willing to negotiate to avoid court rulings that could establish binding case law. In England, by contrast, the High Court determined that Stability AI did not infringe copyright by training its model with Getty images, a decision in literally the opposite direction, which has generated alarm among European creators.

In all these cases the debate revolves around "fair use": the technology companies argue that training their models constitutes a transformative use of works, similar to search engines indexing content. The creators reply that it is a massive appropriation that replaces, rather than complements, the original work. And in the background, a clash that has only just begun.

Header | Gage Skidmore

What's new and what has improved in the new version of the ChatGPT model with two personalities

Let's go over what's new in GPT-5.1, the new version of the artificial intelligence model behind ChatGPT. GPT versions are the engine of your interactions with OpenAI's AI; the results you get, and the way they are delivered to you, depend on them. This new update stands out above all for having two versions with different personalities. But beyond this, there are also other new features that go more under the hood, yet can also make a difference when it comes to serving you answers.

Keep in mind that this new version of GPT-5.1 has reached paying users first, whether they have a Plus, Pro, Go or Business subscription. It may later also reach free users, but probably with limited use.

Two GPT-5.1s with two ways of responding

As we said, the main novelty of this update is that it offers two versions with different personalities. This goes beyond customizing ChatGPT with different personalities as you can do in the settings: there are directly two versions of GPT-5.1, and each of them responds in a different way.

On one hand there is GPT-5.1 Instant, which is more conversational and gives "warmer", closer responses. Then you have GPT-5.1 Thinking, which uses clearer language with less jargon. This latter model is trained for deep reasoning: it responds faster on simple tasks, while dedicating more thinking time to complex ones.

Paid ChatGPT users will see the ChatGPT 5.1 model activated at the top. Clicking the name opens the model selector, where you can choose between the Instant and Thinking variants. There is also an Auto mode that will choose which of the two variants to use depending on what you ask in the prompt.

Smarter adaptive reasoning

GPT-5.1 also improves its internal logic, and now dynamically decides the "thinking time" it dedicates to a request.
In other words, instead of dedicating the same time to every request, it dedicates different amounts of time depending on the type of request. This way, a simple query is processed with minimal computation and answered faster, while more advanced reasoning tasks receive additional layers of analysis to improve the results and make them more coherent and context-sensitive.

Behavioral and instruction-following improvements. This new model also improves at following instructions, better "understanding" what you ask and generating responses that are more aligned with it. Each of the two variants adjusts its reasoning to the complexity of the request to ensure that the answers are consistent with everything you ask of them.

Better tone and personality controls. We already told you that ChatGPT has a setting to determine the tone and personality of its responses. Now, beyond the predefined tones, the user can configure it. For example, you can choose to make it professional, friendly or efficient, and apply that consistently across all interactions. For regular users this simply helps you feel more comfortable with the answers it gives you. But for companies it is even more important, since you can align the tone with the one you want to use both in communications with customers and in internal documentation.

Context retention improvements. Context retention is more effective, which improves continuity in long interactions with multiple turns. This will help you as a user, but it is especially important in the business environment, in uses such as customer service or knowledge base systems.

Performance optimization. Response generation is now faster, and token overhead is reduced, making GPT-5.1 a better model for automated environments. It can deliver the same or better results using fewer tokens than previous models, reducing the overall costs of using the API.
In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence

Kimi K2 Thinking: what it is, the characteristics of this artificial intelligence model and its differences with Gemini and ChatGPT

Let’s explain what Kimi K2 Thinking is, the latest artificial intelligence model from Kimi AI. It is an AI that has made a name for itself due to its open nature and for having managed to compete directly against GPT-5, Gemini 2.5 Pro and other high-end models.

We are going to start the article by explaining what Kimi K2 is and the characteristics that set this artificial intelligence model apart. Then, we will finish by telling you the main differences with respect to the most popular models on the market.

What is Kimi K2 Thinking. Kimi K2 Thinking is the latest version of Kimi, a Chinese artificial intelligence model created by Moonshot AI, a company backed by Alibaba. Since the names are similar, you can orient yourself by thinking that Kimi is the company’s AI, like ChatGPT, and that this AI has different models that keep being launched, as is the case with GPT-5 at OpenAI.

Kimi K2 was launched in July, and stood out for its gigantic size of one trillion parameters. Now there is a new version called Kimi K2 Thinking, which activates 32 billion of those parameters per token. According to its creators, this allows the AI to maintain stable use of agentic tools over 200 to 300 sequential calls.

And what does all this mean? As you may know, we are entering the era of AI agents, which are automations with which an artificial intelligence can carry out different actions autonomously. This allows the AI even to make decisions for you, from asking it to do your shopping to preparing a vacation package and taking care of the reservations. It is also something that will have even more uses at the business level. Therefore, the more capacity an AI has to perform a large number of actions without making mistakes, the more valuable and powerful it is.

Features of Kimi K2 Thinking. The most important feature of Kimi K2 Thinking is that it is an open model.
The models of companies like OpenAI, Google or Anthropic are closed, which means that their source code is kept under lock and key: only these companies know how they work inside. Meanwhile, K2 Thinking is open, which means that anyone can see how it works by looking at its GitHub, study its features, and even adapt it for free. What’s more, you can install it locally at no cost, although the hardware needed for this is too powerful for ordinary mortals; still, "distilled" versions, lighter or trimmed down, can be released so that people can run them locally. In this respect it is like DeepSeek, another open AI that surprised everyone a few months ago by approaching the power of non-open models such as Gemini or ChatGPT. In the case of Kimi K2 Thinking, according to the benchmarks it has managed to surpass GPT-5, something that until recently was unthinkable.

We are facing a Mixture-of-Experts (MoE) architecture model, which means that it is made up of several experts (subnetworks or specialized modules), and that not everything is activated at once: only the parts of the model necessary to answer what you ask or to perform the task you have requested.

It should also be said that it is multilingual and can be used in other languages, although it focuses on Chinese, and that it can process many types of file formats. It also searches in real time to offer you the most up-to-date information, and it is multimodal, being able to interpret text, images, code or a combination of these.

Kimi K2 Thinking can be used as a conversational chat, answering questions and maintaining long context while following complex threads. But it can also interpret images, or a combination of mixed inputs such as images with text and code. In addition, it can generate programming code, analyze long documents thanks to its large context window, and extract information to answer questions about the content or give you a summary. Additionally, you can create automations or agents with it.
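To visualize what that Mixture-of-Experts routing means, here is a minimal toy sketch in Python. This is our own illustration of the general MoE idea, not Moonshot’s actual code: a router scores every expert, but only the top-scoring few actually run for a given input, so most of the model stays inactive at any moment.

```python
# Toy Mixture-of-Experts routing sketch (illustrative only).
# A gate scores all experts, but only the top-k are executed,
# and their outputs are mixed by the normalized gate weights.

def run_moe(x, experts, gate_scores, k=2):
    """Route input x to the k highest-scoring experts and mix their outputs."""
    # Indices of the k best-scoring experts.
    top = sorted(range(len(experts)), key=lambda i: gate_scores[i], reverse=True)[:k]
    # Normalize the selected scores into mixing weights.
    total = sum(gate_scores[i] for i in top)
    # Only the selected experts are actually evaluated here.
    return sum(gate_scores[i] / total * experts[i](x) for i in top)

# Four toy "experts" (in a real model these are large feed-forward subnetworks).
experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x * 10]
gate_scores = [0.1, 0.7, 0.05, 0.15]  # produced by a learned router in practice

print(run_moe(5.0, experts, gate_scores, k=2))  # only experts 1 and 3 run
```

The point of the design is efficiency: with k much smaller than the number of experts, the cost per token depends on the active experts only, which is how a model can have a huge total parameter count while activating only a fraction of it per request.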
Differences with ChatGPT or Gemini. As we have told you above, the main difference of Kimi is its open approach. While ChatGPT and Gemini are proprietary models, Kimi gives the community access so they can see how it is built.

Several benchmarks have shown that Kimi K2 Thinking outperforms GPT-5 and Claude Sonnet 4.5 (Thinking) in agentic search and browsing, in text-only operation, and in information gathering. The only area in which it still does not surpass these models is code generation. In the use of agentic tools, benchmarks have shown that Kimi K2 Thinking is positioned as a leading AI model.

Besides, Kimi is a cheaper model in several respects. First, training the model cost $4.6 million, according to CNBC, a tiny figure considering that training proprietary models like GPT-5 cost around $500 million according to estimates.

It is also cheaper to use the Kimi K2 Thinking API. The API is like the entry key that allows other applications to connect to this AI and work with it. The price of K2 Thinking is $0.6 per million input tokens and $2.5 per million output tokens. GPT-5 Chat costs $1.25 and $10 respectively, and Claude Sonnet 4.5 costs $3 and $15 respectively.

For the average user, the operation is the same. You have the website kimi.com, where after registering for free you can use the Kimi K1.5 and K2 models. However, if you want to use Kimi K2 Thinking you will have to pay for one of their subscriptions of 19 or 30 dollars. At least, that is the case if you want to use the full version on the official website without having to install anything.
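To see what those per-token rates mean in practice, here is a small back-of-the-envelope calculator. It is a sketch using the prices quoted above, which providers can change at any time, and the example workload is an invented one for illustration.

```python
# Rough API cost comparison using the per-million-token rates quoted above.
# Rates are illustrative and may be outdated; check each provider's pricing page.
RATES = {  # model: (input $/M tokens, output $/M tokens)
    "Kimi K2 Thinking": (0.60, 2.50),
    "GPT-5 Chat": (1.25, 10.00),
    "Claude Sonnet 4.5": (3.00, 15.00),
}

def api_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars for a given number of input and output tokens."""
    rate_in, rate_out = RATES[model]
    return (input_tokens / 1_000_000) * rate_in + (output_tokens / 1_000_000) * rate_out

# Hypothetical monthly workload: 5M input tokens and 1M output tokens.
for model in RATES:
    print(f"{model}: ${api_cost(model, 5_000_000, 1_000_000):.2f}")
```

On this hypothetical workload the gap is large: the same traffic costs several times more on the proprietary models than on K2 Thinking, which is the point the pricing comparison above is making.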
