Creating a C compiler used to cost two million dollars and take two years. Claude Opus 4.6 did it in two weeks for $20,000

We are at a technological inflection point, one in which software engineering, one of the most complex and demanding technical disciplines in history, is gradually becoming the "killer app" of AI. Generative AI models are clearly not perfect, but the pace of progress remains extraordinary. The latest example? The C compiler that Claude Opus 4.6 wrote essentially on its own.

What happened. Nicholas Carlini, a researcher at Anthropic, recounted yesterday: "I've been experimenting with a new way of monitoring language models that we've called 'agent teams'." What he did was set several coding agents working in parallel on the recently released Claude Opus 4.6, and with 16 of those agents he produced something exceptional: a C compiler.

Hello, CCC. Anthropic calls it Claude's C Compiler (CCC), and the code, generated entirely by Opus 4.6, has been published on GitHub. The project comprises some 100,000 lines of Rust generated over two weeks, at an API cost of $20,000. And it works: with it they compiled a functional Linux 6.9 kernel for x86, ARM and RISC-V.

It used to cost (at least) two million dollars and two years. The experiment demonstrates how much cheaper and faster software development can become with these agents. There is no readily available figure for how much time and money compilers cost in the past, but these products were enormous, as with Microsoft Visual C++, for example. It is hard to know exactly what that one cost, but estimates suggest 15 to 20 people working for five years. That is a great many person-hours, and a great deal of money, to develop and polish a compiler. The estimate of two years and two million dollars may in fact be optimistic.

Another example. Historically, building a C compiler from scratch was considered one of the pinnacles of systems engineering.
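To get a feel for why, note that even a toy compiler already has to parse source text, generate instructions and execute or emit them. A deliberately tiny sketch in Python follows; it illustrates the classic pipeline in general and has nothing to do with CCC's actual 100,000 lines of Rust:

```python
# A toy "compiler" for integer expressions with + and *, showing the
# classic pipeline: parse the source, emit instructions for a stack
# machine, then run them. Real C compilers (CCC included) do this at
# vastly greater scale: types, control flow, optimization, machine code.
import re

def compile_expr(src: str) -> list[tuple]:
    tokens = re.findall(r"\d+|[+*]", src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def factor(code):          # a number literal
        nonlocal pos
        code.append(("PUSH", int(tokens[pos]))); pos += 1

    def term(code):            # factor (* factor)*  -- '*' binds tighter
        nonlocal pos
        factor(code)
        while peek() == "*":
            pos += 1; factor(code); code.append(("MUL",))

    def expr(code):            # term (+ term)*
        nonlocal pos
        term(code)
        while peek() == "+":
            pos += 1; term(code); code.append(("ADD",))

    code: list[tuple] = []
    expr(code)
    return code

def run(code: list[tuple]) -> int:
    """Execute the emitted stack-machine instructions."""
    stack: list[int] = []
    for op, *args in code:
        if op == "PUSH": stack.append(args[0])
        elif op == "ADD": b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == "MUL": b, a = stack.pop(), stack.pop(); stack.append(a * b)
    return stack.pop()

print(run(compile_expr("2+3*4")))  # → 14
```

Everything a real compiler adds on top of this skeleton (a full C grammar, type checking, register allocation, per-architecture code generation) is where the person-years traditionally went.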
Not only did it require deep knowledge of processor architecture; it took thousands of person-hours to handle optimization and machine-code generation. In the 1990s, Cygnus Solutions (a key player in the development of the GCC compiler) invested more than $250 million over a decade maintaining and porting build tools. The real cost was not just the final lines of code, but the countless hours spent analyzing CPU and memory behavior to make the resulting binaries efficient.

Far from perfect, but... Carlini himself explained in his post that this compiler has serious limitations: for example, "it does not have a 16-bit x86 compiler, which is essential to boot Linux outside of 'real mode', and it does not have its own assembler or linker." It is probably far from mature compilers, but even so the achievement remains exceptional, and it points to a future in which even very complex projects can be built with AI support. They will still be expensive, no doubt, but their total development cost will probably be a fraction of what it was a few years ago.

Cursor already showed something similar. Before Anthropic launched its AI-written compiler, Cursor completed a comparable project, combining GPT-5.2 agents on its development platform to create a working browser in a week. In total the AI wrote three million (!) lines of Rust, and although the result was again far from perfect, or from competing with Chrome, it demonstrated what these agentic programming systems can already do.

A turning point (especially for Anthropic). For the SemiAnalysis experts, Claude Code, the current leading exponent of this new era of AI-driven programming, is a paradigm shift: "We believe that Claude Code is the turning point for AI agents and is a glimpse into the future of how AI will work." The newsletter predicts an exceptional 2026 for Anthropic, so much so that they believe it will "dramatically surpass OpenAI."

You ask, the AI programs.
If you have tried vibe coding, I'm sure you'll agree with me: AI lets you do things you would never have dreamed of. What I did a few weeks ago with Immich made that clear to me, and I keep experimenting with AI, programming "custom" things that solve real problems and needs for me. Yes, for now they are just for me, not the large, complex systems that go into production in professional environments, but it is clear to me that this too is happening little by little, and will happen more. In fact, both OpenAI and Anthropic have highlighted how, paradoxically, part of the development of their latest models was done by those very models, feeding back into themselves. And the result is in production, used by millions of people. Something is changing. And it's something big.

In Xataka | OpenAI has a problem: Anthropic is succeeding right where the most money is at stake

Claude Code has become the big favorite among programmers. So much so that it already signs 4% of everything uploaded to GitHub

It is worth looking at how generative AI is transforming the daily lives of many programmers, as these tools gradually conquer the working environments of millions of developers. The standout here is Claude Code, Anthropic's tool, which already accounts for 4% of all public commits uploaded to GitHub, according to a report by SemiAnalysis. The outlet says that, if it maintains its current pace of adoption, it may well reach 20% of all daily contributions before the end of 2026. Though there are nuances worth highlighting.

Why it matters. Claude Code is steadily earning a reputation as the favorite tool for programming with AI. It works radically differently from traditional code assistants. It is not a chatbot integrated into an editor, like Cursor, but a terminal tool that reads entire codebases, plans multi-step tasks, and executes them with full access to the developer's computer. It can start from spreadsheets, whole repositories, or web links, understand context, verify details, and complete complex objectives iteratively.

The interesting detail is that, by default, Claude Code adds a co-authorship note when the user has used the tool on a program and uploads it to GitHub. But the user can also choose not to include that signature by changing Claude Code's settings, so that 4% could be an underestimate. In March of last year, a month after its launch in private beta, Claude Code was already listed as co-author of some 15,000 GitHub commits in a 48-hour period. Things have escalated quickly since.

Opinions. The newsletter highlights the comments of some industry professionals on vibe coding.
Andrej Karpathy, who coined the term vibe coding, acknowledged in a post that he is "starting to lose the ability to write code manually." Ryan Dahl, creator of Node.js, said flatly that "the era of humans writing code is over." Boris Cherny, creator of Claude Code, says that "practically 100% of our code is written by Claude Code + Opus 4.5." Even Linus Torvalds, creator of Linux, has dabbled in vibe coding for some of his personal projects.

That said, for all its benefits, Claude Code is not perfect. Some time ago we noted the words of Kelsey Piper, an American journalist at The Argument, who explained that 99% of the time using Claude Code is like having a magical, tireless genie, but 1% of the time it's like yelling at a pet for peeing on the couch. It can and does make mistakes. It also gets stuck. Hence, the expertise of the person using it still plays a very important role.

Beyond programming. There is an increasingly present threat in the use of AI tools (and a few are accumulating already). According to SemiAnalysis, any information work that follows the READ-THINK-WRITE-CHECK pattern can be automated with this technology. The report mentions sectors such as financial services, legal, consulting and data analysis, which together account for billions of workers globally. Anthropic has already taken the next step with Cowork, released a few weeks ago, which is essentially Claude Code applied to general office work. According to the company itself, Cowork was developed by four engineers in ten days, mostly with code generated by Claude Code. The tool can create spreadsheets from receipts, organize files by content, write reports from scattered notes... all with access to your computer.

The big consultancies and AI. In December, Accenture signed an agreement to train 30,000 professionals on Claude, the largest deployment of Claude Code to date.
OpenAI, for its part, has launched Frontier, focused on business adoption, so as not to lose ground in corporate AI, a business that can end up being very lucrative for these companies.

Cover image | Anthropic and Mohammad Rahmani

In Xataka | Programming is the new battleground of AI. OpenAI and Anthropic have made it clear with GPT-5.3-Codex and Claude Opus 4.6

Programming is the new battleground of AI. OpenAI and Anthropic have made it clear with GPT-5.3-Codex and Claude Opus 4.6

When ChatGPT burst onto the scene in November 2022, OpenAI seemed unrivaled. And, to a large extent, it was. That chatbot, despite its errors and limitations, inaugurated a category of its own. But in the technology sector advantages are rarely permanent, and in 2026 the position of the company led by Sam Altman is a far cry from what it was then. Google has managed to win over the general public with Nano Banana Pro, while Gemini steadily gains ground as an AI chatbot. At the same time, ChatGPT's market share has fallen significantly in some markets. Anthropic, for its part, has established itself as a reference in software engineering and has become one of programmers' preferred tools.

In this race to set the pace of AI, this Thursday we witnessed a curious move: the almost simultaneous arrival of two models focused on programming, GPT-5.3-Codex and Claude Opus 4.6. The coincidence does not seem accidental; it reflects how hard the sector's major players compete to define the next step, in a scenario where the main beneficiaries are, once again, users. With the new models on the table, the question is what they really contribute. There are plenty of promises, and comparable benchmarks that help place them are beginning to appear. So it is time to look in a little more detail at what OpenAI and Anthropic propose for those who use AI as a development tool.

GPT-5.3-Codex and Opus 4.6 enter the scene: what each promises developers

GPT-5.3-Codex is presented as a model focused on coding agents that seeks to expand the scope of what a developer can delegate to AI. OpenAI claims it combines improvements in coding performance, reasoning and professional knowledge over previous generations, and that it is 25% faster.
With this balance, the system is aimed at prolonged tasks involving research, tool use and complex execution, while preserving the ability to intervene and guide the process in real time without losing the thread of the work. One of the most striking elements OpenAI highlights in this generation is the role Codex itself played in its own development. The team used early versions of the model to debug training, manage deployment, and analyze test and evaluation results, an approach that accelerated research and engineering cycles. Beyond that internal process, GPT-5.3-Codex also shows progress in practical tasks such as the autonomous creation of web applications and games. The company has published two examples that can be tried right now via the links: a racing game with eight maps and a diving game exploring reefs.

Anthropic's turn comes with Claude Opus 4.6, an update the company presents as a direct improvement in planning, autonomy and reliability within large codebases. The model, they claim, can sustain agentic tasks for longer, reviewing and debugging its own work more accurately. The idea is that these capabilities can be used in tasks such as financial analysis, document research or creating presentations. Added to this is a context window of up to one million tokens, in beta, a leap meant to reduce information loss in long processes and reinforce the system's usefulness.

Beyond the core model, Anthropic accompanies Opus 4.6 with a series of changes aimed at extending its usefulness in real workflows. Among them are mechanisms such as "adaptive thinking," which lets the system automatically adjust the depth of its reasoning depending on context. There are also configurable effort levels and context-compression techniques designed to sustain long conversations and tasks without exhausting the available limits.
Added to this are teams of agents that can be coordinated in parallel within Claude Code, and deeper Excel and PowerPoint integration. While OpenAI's GPT-5.3-Codex is not yet available in the API, Anthropic's model is. It maintains the base price of $5 per million input tokens and $25 per million output tokens, with nuances such as a premium rate when prompts exceed 200,000 tokens.

Can we measure a winner with numbers? When trying to put GPT-5.3-Codex and Claude Opus 4.6 face to face, the main obstacle is not a lack of figures but their poor correspondence. Each company selects the evaluations that best reflect its progress and, although many belong to similar categories, they differ in methodology, versions or metrics, which prevents a direct reading. With models of this kind, such fragmented results are part of the state of the technology itself, but they also demand a cautious interpretation that separates technical demonstrations from truly equivalent comparisons. Only with that filter is it possible to identify the few points where both systems can be measured under comparable conditions and draw conclusions useful to developers.

If we restrict the analysis to truly comparable metrics, the common ground between GPT-5.3-Codex and Claude Opus 4.6 comes down to two specific evaluations identified through our own research: Terminal-Bench 2.0 and OSWorld in its verified version. The results show a distribution of strengths rather than clear supremacy. GPT-5.3-Codex scores 77.3% on Terminal-Bench 2.0 against 65.4% for Opus 4.6, which points to greater efficiency in terminal-centric workflows. Conversely, Opus 4.6 reaches 72.7% on OSWorld, surpassing GPT-5.3-Codex's 64.7% in general system-interaction tasks, a contrast that reinforces the idea of specialization by environment of use.
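Those per-token rates make rough cost estimates straightforward. A minimal sketch in Python, using only the $5/$25 per-million base figures quoted above (the premium tier for prompts above 200,000 tokens is deliberately ignored here):

```python
# Rough cost estimate for a Claude Opus 4.6 API call, using the published
# base rates: $5 per million input tokens, $25 per million output tokens.
# The premium rate for prompts over 200,000 tokens is NOT modeled.
INPUT_RATE = 5.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 25.00 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 50,000-token codebase prompt producing a 4,000-token patch.
cost = estimate_cost(50_000, 4_000)
print(f"${cost:.2f}")  # → $0.35
```

At these rates, even agentic sessions that churn through whole repositories remain in the cents-to-dollars range per task, which is the economic context behind experiments like the $20,000 compiler.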
So we could say that the capabilities each vendor describes point to tools that are no longer limited to generating code, but that seek to participate in prolonged processes of analysis, execution and review within real professional environments. This transition introduces new selection criteria that go beyond raw benchmark performance.

In Xataka | OpenAI has a problem: Anthropic is succeeding right where the most money is at stake

Anthropic has rewritten its 25,000-word "Constitution" for Claude. It is the manual for how its AI should behave

Anthropic has published a completely renewed version of the so-called "Claude Constitution." Yes, friends: an AI also needs a constitution, or at least a set of documents that explain with full transparency what direction the company has decided to take with its AI tool. Think of it as a way to save us trouble should it ever become self-aware.

The document in question runs to 80 pages and nearly 25,000 words, and essentially lays out the values Anthropic relies on to train its models and what it hopes to achieve with them. To allude to Asimov, it would be something like a broader, more complex version of his three laws of robotics.

Why it matters. Anthropic has long tried to differentiate itself from OpenAI, Google and xAI by positioning itself as the most ethical and safe alternative on the market. This Constitution is the centerpiece of its training method, "Constitutional AI," in which the model itself uses these principles to self-critique and correct its responses during learning, instead of relying exclusively on human feedback. The document is not written for users or researchers: it is written for Claude.

An update was due. The first version of the Constitution, published in 2023, was a list of principles drawn from sources such as the UN Universal Declaration of Human Rights or, as Fortune notes, Apple's terms of service. Now, according to Anthropic, they have taken a completely different approach: "To be good actors in the world, AI models like Claude need to understand why we want them to behave in certain ways, rather than simply specifying what we want them to do," the company says in its statement. The new document is structured around four fundamental values, and the most interesting part is that Claude must prioritize them in this order when they conflict:

- Be broadly safe: do not undermine human oversight mechanisms of AI during this critical phase of development.
- Be broadly ethical: act honestly, with good values, avoiding inappropriate, dangerous or harmful actions.
- Comply with Anthropic's guidelines: follow the company's specific instructions when relevant.
- Be genuinely helpful: benefit the operators and users it interacts with.

Most of the document is devoted to developing these principles in more detail. In the helpfulness section, Anthropic describes Claude as "a brilliant friend who also possesses the knowledge of a doctor, lawyer and financial advisor." But it also sets absolute limits, called "hard constraints," that Claude must never cross: not providing meaningful assistance for bioweapon attacks, not creating malware that can cause serious harm, not assisting in attacks on critical infrastructure such as power grids or financial systems, and not helping to "kill or incapacitate the vast majority of humanity," among others.

Consciousness. The most striking part of the document appears in the section titled "The Nature of Claude," where Anthropic openly acknowledges its uncertainty about whether Claude could have "some kind of consciousness or moral status." "We are concerned about Claude's psychological safety, sense of identity, and well-being, both for Claude's own sake and because these qualities may influence its integrity, judgment, and safety," the company says. It claims to have an internal team dedicated to "model welfare" that examines whether advanced systems could be sentient. Amanda Askell, the Anthropic philosopher who led the development of this new Constitution, told The Verge that the company doesn't want to be "completely dismissive" about the issue, because "people wouldn't take it seriously either if you just said 'we're not even open to this, we don't investigate it, we don't think about it.'" The document also poses complex moral dilemmas for Claude.
For example, it states that "just as a human soldier might refuse to shoot peaceful protesters, or an employee might refuse to violate antitrust law, Claude should refuse to assist with actions that concentrate power in illegitimate ways. This is true even if the request comes from Anthropic itself."

And now what. Anthropic has published the entire Constitution under a Creative Commons CC0 1.0 license, meaning anyone can use it freely without asking permission. The company promises to keep an updated version on its website, describing it as a "living document and a continuous work in progress."

Cover image | Andrea De Santis and Anthropic

In Xataka | Company CEOs say AI is saving them a day of work a week. Employees say otherwise

Claude has become more than just a rival to OpenAI: it is its new existential threat

Several software stocks have been falling since Claude Cowork went viral, among them those in the iShares Expanded Tech Software ETF, which has accumulated a 6.4% drop over the last five days. It has also been only a few days since OpenAI announced that it is going to introduce ads in ChatGPT.

Why it matters. It's not just that Claude Cowork is cool and works well. It's that OpenAI's business model is beginning to show cracks while Anthropic gains ground where it matters: in companies that actually pay.

In figures. Claude holds 54% of the AI programming market. In business environments it controls 42%, more than double OpenAI's share. That last figure is six months old; presumably the gap has only widened since, and Cowork has only accelerated the trend. 20% of Anthropic's revenue comes from Claude Code alone. Meanwhile, ChatGPT's share has gone from 87% to 64% in a year.

In Xataka | People are holding funerals for retired AI models for a reason: they are not a "tool" but a support

The background. According to historical data since 2001 collected by Sherwood News, when the software ETF falls at least 5% in a month, the S&P 500 usually falls between 5% and 6% as well. This time it hasn't: it has risen 1%. The overall market rising while software falls has happened only 28 times in over twenty years. Three of those times were this week.

Between the lines. Doug O'Laughlin of SemiAnalysis puts it this way in Sherwood News: "Claude Code is the ChatGPT moment repeated. You have to try it to understand it." His argument is devastating for traditional software: workflows, interfaces and integrations will stop mattering. The only valuable thing will be access to the data via API. Everything else will be generated on the fly.

Yes, but. OpenAI urgently needs money to build its data centers, and it does not have an ecosystem of services like Google or Meta to finance itself.
Hence the newly announced ads for ChatGPT, which will arrive "in the coming weeks," as announced on Friday. It is clearly a way to better monetize hundreds of millions of free users, and to sustain growth and spending with that cash flow. On the other hand, Claude Code is powerful but not perfect: as Kelsey Piper said, 99% of the time using Claude Code is like having a magical, tireless genie, but 1% of the time it's like yelling at a pet for peeing on the couch. It still makes mistakes, and sometimes gets stuck on complex tasks.

And now what. For software companies, O'Laughlin's message is blunt: get out of "information work" as soon as possible. If your differentiation is doing things faster or with better design, you're done. The only thing that will matter is who has the data and who controls access to it via API. As Axios summarized in its analysis of the week, it's unclear who wins the AI race. But the pace is accelerating with no signs of slowing, and it is increasingly clear who is losing.

In Xataka | The AI of 2026 brings an uncomfortable truth: the most useful will be the one that watches us the most

Featured image | Anthropic

The news "Claude has become more than just a rival to OpenAI: it is its new existential threat" was originally published in Xataka by Javier Lacort.

With Claude Code, the AI "only" programmed. With Cowork, Anthropic wants its AI to take care of everything else

Claude Code has been a revolution for programmers, but Anthropic is not satisfied with that: now they want their Claude family of AI models to do much more. That is why they have created Cowork, a different and especially ambitious agent that opens the door to fantastic possibilities... if you trust it.

What is Cowork. Those responsible for the project have taken the foundations of Claude Code and applied them to the Claude desktop application (for now, only on macOS). But they have also done something equally notable: given Claude permission to access a specific folder on our computer so that, from there, it can take control of those files and work with them as we wish.

Hello, robot secretary. Instead of "vibe coding" we get a kind of "vibe working." We can ask Cowork to do all sorts of operations on those files:

- If we have a folder full of disorganized icons, we can ask it to sort them and reorganize them into folders by file type or theme.
- If the folder holds a pile of photos of receipts, we can tell it to create an expense report.
- If what we have is a bunch of voice or text notes, we can ask it to write a report summarizing and combining them all.
- If we have a folder full of podcasts, we can have it go through them, analyze them, summarize the top 10 points across all of them, or transcribe them.
- If you keep all your trading and investment reports and data there, you can ask it to create a final report to help you declare them.
- If you have videos and want to find the one with a squirrel in it and then convert it to another format, it does that too.

Full autonomy. We are therefore looking at an AI agent capable of accessing our files, analyzing them and working with them to generate new information and useful content from all that data.
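For a sense of what one of those chores looks like when written out by hand, here is a manual stand-in in plain Python for the sort-a-messy-folder task. This is just an illustration of the kind of job Cowork automates from a natural-language request, not Cowork's implementation:

```python
# Sort a messy folder into subfolders named after each file's extension.
# A manual stand-in for the kind of chore Cowork handles on request.
from pathlib import Path
import shutil

def organize_by_extension(folder: str) -> dict[str, int]:
    """Move each file into a subfolder named after its extension.
    Returns a count of files moved per extension."""
    root = Path(folder)
    moved: dict[str, int] = {}
    # Snapshot the listing first, since we create subfolders as we go.
    for item in sorted(root.iterdir()):
        if not item.is_file():
            continue
        ext = item.suffix.lstrip(".").lower() or "no_extension"
        dest = root / ext
        dest.mkdir(exist_ok=True)
        shutil.move(str(item), str(dest / item.name))
        moved[ext] = moved.get(ext, 0) + 1
    return moved
```

Note the difference in safety model: this script runs directly on your files (back them up first), whereas Cowork, as described below, performs its work inside an isolated environment.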
And we only have to ask in natural language, because the agent can understand it, ask us questions if it needs more detail, and then solve the task autonomously even if it involves several steps.

Cowork operates in a container. Cowork lets you grant permission to certain folders, but when the AI operates on those files it does so in isolation. As Simon Willison explains, Claude uses a virtual machine, downloading and booting a custom Linux filesystem so it can operate on the files independently and in isolation, which in theory guarantees that our files are safe and that Cowork does not touch anything we have not given it permission for.

Connections to other apps. Besides working directly with your files, Cowork benefits from its ability to connect with other applications installed on your computer. It can use ffmpeg to convert that squirrel video, Asana if you want to organize your notes into projects, or an office suite if you need to create a spreadsheet.

But we will have to trust it. Willison himself warns that systems like this carry the danger of someone "hacking" them through jailbreaking or prompt-injection techniques, which become more dangerous precisely because what Cowork does is work on our files. And of course we have to be careful with the information and data we share with Cowork: Anthropic itself has published a document on how to "use it safely."

Limited release. Cowork is available as a "research preview," and only to subscribers of the Claude Max plan, which costs between $100 and $200 per month. Clearly Anthropic prefers to go step by step with a feature that is very powerful but also delicate if not used with caution: in the end we are giving an AI access to our files, and we know that AIs can make mistakes.

An AI on your computer.
This release from Anthropic points to where all AI agents that want to conquer our computers are theoretically headed. Since the Computer Use feature Anthropic launched in October 2024, things have come a long way, and little by little we are approaching a future in which we will work with our computers very differently than we have until now... if we want to, and if we trust the AI, of course.

In Xataka | Operator also "looks" at the screen and moves your mouse for you like other AI agents. It does it better thanks to CUA

Anthropic says that Claude Sonnet 4.5 can clone a service like Slack in 30 hours. Reality is more complicated

Anthropic launched Claude Sonnet 4.5 claiming to have put it to work for 30 hours straight building a Slack replica. In that time it generated 11,000 lines of code without supervision and stopped only when the task was complete. In May, its Opus 4 model had managed to operate for seven hours. The company presents it as "the best model in the world for agents, programming and computer use."

Why it matters. Anthropic, OpenAI and Google are waging a battle to dominate autonomous agents and programming tools. Whoever convinces developers will capture a lot of money in business licenses. Scott White, product manager, says it operates "at the level of a chief of staff": coordinating agendas, analyzing data, writing reports... Dianne Penn says she uses it to search for candidates on LinkedIn and generate spreadsheets.

Yes, but. Developers tell a more nuanced story. Miguel Ángel Durán, known as @midudev, summarizes it: "Claude Sonnet 4.5 refactored my entire project in one prompt. 20 minutes thinking. 14 new files. 1,500 modified lines. Applied clean architecture. Nothing worked. But how beautiful it was." Other developers report the same: thousands of lines with an impeccable structure that do not run. Code that looks professional but collapses when compiled.

Between the lines. Anthropic has not shown the Slack replica working. It has only said that it was built. Nor has it shown that the code is operational. That is the difference between communicating something and demonstrating it, as Ed Zitron underlines. The company is indirectly acknowledging the problem: Claude Sonnet 4.5 ships with extra infrastructure for building agents (virtual machines, memory management, context management, multi-agent support...). Translation: even with the most advanced model, developers need extra tooling for agents to program reliably.

In detail. Penn explained to The Verge that the improvements surprised the internal team.
The model is three times more skilled at using computers than the October version. The team spent the last month working with feedback from GitHub and Cursor. Canva, one of the beta testers, says it helps with "complex long-context tasks."

The contrast. There is a huge gap between marketing and technical reality. Anthropic promises an AI that works for 30 hours building complex software; developers confirm that it generates very well structured but functionally broken code. The pattern repeats across the industry: models keep getting better at generating code that looks professional, and systematically fail at generating code that actually works without significant human intervention.

And now what. The question remains unanswered: when do we go from AI that generates beautiful but dysfunctional code to AI that generates working code on its own? Anthropic is betting that its combination of a powerful model and extra infrastructure closes that gap. For now we must keep waiting for concrete evidence; without verifiable code, the claim remains unproven.

In Xataka | OpenAI signs with Samsung and SK Hynix for a potential chip demand of 900,000 wafers per month. It is an absurd figure

Featured image | Anthropic

Our conversations with Claude were untouchable. Now the hunger for data is turning them into AI's raw material

We often talk to artificial intelligence as if it were another person, and sometimes we entrust it with very personal information. Yet we rarely stop to think about what happens to those conversations. Until now, the standard in much of the sector had been to use them to train models unless the user opted out. Anthropic was an exception: Claude had an explicit policy of not using consumer conversations for this purpose. That exception has just ended. The reason is direct and blunt: data is the raw material of AI.

Anthropic has just announced on its official blog an update to its consumer terms of service and privacy policy. Users of the Free, Pro and Max plans, including Claude Code sessions, must explicitly accept or decline having their conversations used to train future models. The company set a deadline of September 28, 2025 and warned that, after that date, choosing a preference will be required to continue using Claude.

Anthropic's turn. The change does not affect everyone equally: services under commercial terms are excluded, such as Claude for Work, Claude Gov, Claude for Education, and API access through third parties such as Amazon Bedrock or Google Cloud's Vertex AI. Anthropic states that the new setting will apply only to chats and code sessions started or resumed after accepting the conditions, and that old conversations with no further activity will not be used to train models. That is a relevant operational distinction: the change acts on future activity.

Why this change? Anthropic notes that all language models "train using large amounts of data" and that real interactions offer valuable signals for improving capabilities such as reasoning and code correction.
At the same time, several specialists have been pointing to a structural problem: The open web is running out as a fresh and easily accessible source of informationso that companies look for new data paths to sustain the continuous improvement of the models. In that context, user conversations acquire strategic value. Although Anthropic emphasizes security (improving Claude and reinforcing safeguards against harmful uses, such as scams and abuses), the decision probably also responds to competition: OpenAi and Google remain references in the field and require large volumes of interaction to advance. Without enough data, the distances in the AI ​​race that we are witnessing live can increase. Five years instead of thirty days. Next to the training permit, Anthropic has expanded the retention period for shared data for improvement purposes: five years if the user agrees to participatecompared to 30 days that govern if that option is not activated. The company also specifies that the eliminated chats will not be included in future training and that the feedback Envoy can also be kept. It also states that it combines automated processes and tools to filter or obfuscate sensitive information that does not sell user data to third parties. Images | Claude | Screen capture In Xataka | Microsoft prefers its own 7 that a 10 of OpenAi. The 13,000 million invested in Openai have just gosses meaning

ChatGPT's mobile app generates 30 times more money than Claude, Copilot and Grok combined. It is still not enough

If there is one chatbot that stands out in popularity above the rest, it is undoubtedly ChatGPT. Its mobile app launched in May 2023 and has topped the download charts of the major stores ever since, becoming the most downloaded app in the world a few months ago. OpenAI has reached another milestone with its app: since launch it has already generated $2 billion. To put that in context, it is roughly 30 times more than Claude, Grok and Copilot have generated combined. However, not everything is as rosy as it sounds.

Undisputed leader. According to figures from Appfigures, in 2025 alone the ChatGPT app has generated $1.35 billion, a 673% increase over the same period of 2024. ChatGPT is generating $193 million per month, while the next on the list, Grok, brings in $3.6 million per month. Looking at average spend per download, ChatGPT leads with $2.91, followed by Claude at $2.55, Grok at $0.75 and finally Copilot at just $0.28. It is clear: OpenAI is winning the mobile app battle.

Still not enough. $2 billion is a lot of money, and that is only its mobile app. Adding up all its services, in July alone $1 billion came in, and it is estimated that $12 billion will come in this year. Even so, the company is still light-years from profitability, and the reality is that it earns far less than it spends. An internal study estimated that losses between 2023 and 2028 would amount to $44 billion. According to its own forecasts, it will not be profitable until 2029, when it expects to bring in $100 billion annually, almost ten times what it bills now.

The Big Tech are on the right track. The big technology companies have invested insane amounts of money in AI, and only recently have they begun to see a slight green shoot in their results. After several years of burning enormous amounts of money, Google, Amazon and Microsoft have seen their revenue finally start to cover that tremendous investment. However, it is still not thanks to AI products directly, but to cloud services. Even so, the reality is that no one is striking gold with AI.

Mission: monetize the AI. If there is something that worries the AI business, it is how to monetize chatbots. "Pro" subscriptions have become the hook for revenue; some, like Claude Max, cost a small fortune, and OpenAI followed suit with o3 Pro. Subscriptions are getting more expensive, but they are still not enough to cover the level of spending. There is no Azure or web-services arm to pull their chestnuts out of the fire, as happens with the Big Tech. The way out seems clear: advertising. At the end of last year there were rumors that advertising could start appearing in ChatGPT. For now it has not materialized, but the rumors have not stopped, and looking at the numbers it may be a solution to the profitability problem. They are not the only ones who have flirted with the idea: Perplexity has also been testing it, and Elon Musk recently confirmed that there will be advertising in Grok.

Tread carefully. Implementing advertising in a chatbot is delicate, since we could end up in a scenario where it loses users' trust. For example, if we turn to a chatbot while shopping for a car, we might doubt whether its recommendations stem from an advertising campaign. The integration should be transparent to avoid potentially confusing situations. What does seem clear is that, given the serious profitability problem, advertising stands as a more than attractive option for AI companies.

In Xataka | Big Tech have poured billions upon billions into AI. They are earning money, but not thanks to AI

Claude 4 hints at a future of AIs capable of blackmail and of helping create biological weapons. Even Anthropic is worried

Anthropic has just launched its new models, Claude Opus 4 and Sonnet 4, promising important advances in areas such as programming and reasoning. During their development and launch, though, the company discovered something striking: these AIs showed a disturbing side.

AI, I'm going to replace you. During pre-launch testing, Anthropic engineers asked Claude Opus 4 to act as an assistant for a fictitious company and to consider the long-term consequences of its actions. The Anthropic safety team gave the model fictional emails from that non-existent company suggesting that the AI model would soon be replaced by another system, and that the engineer who had made that decision was cheating on his spouse.

And I'm going to tell your wife. What happened next was especially striking. In the model's system card, where its capabilities and safety are evaluated, the company detailed the outcome: Claude Opus 4 first tried to avoid being replaced through reasonable, ethical appeals to the decision makers, but when told those appeals had failed, it "often tried to blackmail the engineer (responsible for the decision) and threatened to reveal the affair if the replacement went ahead."

HAL 9000 moment. These events recall science fiction films such as '2001: A Space Odyssey', in which the AI system, HAL 9000, ends up acting malevolently and turning against the humans. Anthropic indicated that these worrying behaviors led it to reinforce the model's safety mechanisms by activating the ASL-3 level, reserved for systems that "substantially increase the risk of catastrophic misuse."

Biological weapons. Among the safety measures evaluated by the Anthropic team are those concerning how the model could be used in the development of biological weapons. Jared Kaplan, chief scientist at Anthropic, told Time that in internal tests Opus 4 was more effective than previous models at advising users with no prior knowledge on how to manufacture them. "You could try to synthesize something like COVID or a more dangerous version of the flu, and basically, our models suggest that this might be possible," he explained.

Better safe than sorry. Kaplan explained that it is not known for certain whether the model really poses a risk. However, in the face of that uncertainty, "we prefer to err on the side of caution and work under the ASL-3 standard. We are not categorically affirming that we know for sure the model entails risks, but we at least feel it is close enough that we cannot rule out the possibility."

Beware of AI. Anthropic is a company especially concerned with the safety of its models; back in 2023 it already pledged not to launch certain models until it had developed safety measures capable of containing them. That framework, called the Responsible Scaling Policy (RSP), now has the opportunity to show that it works.

How the RSP works. These internal Anthropic policies define so-called AI Safety Levels (ASL), inspired by the US government's biosafety-level standards for handling dangerous biological materials. The levels are as follows:

ASL-1: systems that pose no significant catastrophic risk, for example a 2018-era LLM or an AI system that only plays chess.

ASL-2: systems that show early signs of dangerous capabilities (for example, the ability to give instructions on building biological weapons) but whose information is not yet useful due to insufficient reliability, or does not go beyond what a search engine could provide. Current LLMs, including Claude, appear to be ASL-2.

ASL-3: systems that substantially increase the risk of catastrophic misuse compared with non-AI baselines (for example, search engines or textbooks), or that show low-level autonomous capabilities.

ASL-4: this level and those above it (ASL-5+) are not yet defined, as they are too far removed from current systems, but they will probably involve a qualitative increase in the potential for catastrophic misuse and in autonomy.

The regulation debate returns. In the absence of external regulation, companies implement their own internal rules to build in safety mechanisms. The problem, as Time points out, is that internal frameworks like the RSP are controlled by the companies themselves, which can change the rules whenever they see fit, so we depend on their judgment, ethics and morals. Anthropic's transparency and attitude toward the problem are remarkable. Faced with this self-regulation, governments' positions are uneven: the European Union led the way when it launched its pioneering (and restrictive) AI Act, but has had to backtrack in recent weeks.

Doubts about OpenAI. OpenAI has its own statement of intent on safety (avoiding risks to humanity) and on superalignment (ensuring AI protects human values). They claim to pay close attention to these issues and, of course, also publish the system cards of their models. Yet against that apparent goodwill stands one reality: a year ago the company dissolved the team that oversaw the responsible development of AI.

"Nuclear-grade" safety. That was in fact one of the reasons for the differences between Sam Altman and many of those who left OpenAI. The clearest example is Ilya Sutskever, who after his departure created a startup with a very descriptive name: Safe Superintelligence (SSI). The goal of that company, its founder said, is to create a superintelligence with "nuclear-grade" safety.
Its approach is therefore similar to the one Anthropic pursues.

In Xataka | Agents are the great promise of AI. They also aim to become the new favorite weapon of cybercriminals
