Peter Steinberger, creator of the AI agent OpenClaw, woke up on Saturday to a flood of mentions on his Twitter account. All of them warned him of the same thing: Anthropic had announced that Claude Code (Claude Pro/Max) accounts could no longer be used with OpenClaw. The decision left him anything but indifferent, and users of the agent have criticized a move that, while reasonable, is unsettling because of how and when it arrived.
What has happened. OpenClaw is the AI agent that, if you let it, takes control of your machine and uses its apps to do everything you ask on your behalf. It is very powerful, especially when paired with high-quality models such as Claude Opus 4.6 or Claude Sonnet 4.6. Many users were leveraging the Claude Pro and Claude Max plans to get the most out of OpenClaw, but Anthropic has said that this is not allowed. As the company explained, OpenClaw and other AI agents consume too many tokens, and those plans are designed to be used with Claude Code for programming.
If you want to use Claude with OpenClaw, pay. Anthropic does not prohibit the use of its AI models with OpenClaw, but it makes clear that if you want to use them, you must do so through its API. It’s as if you bought a 20-euro monthly transport pass for unlimited subway travel: it works perfectly for commuting to work or university, but Anthropic says you cannot use that pass for your courier company that makes hundreds of trips a day. Token consumption in Claude Code is manageable, but in OpenClaw it skyrockets, and Anthropic wants you to pay per use rather than take advantage of the “flat rate” (with limits) of its Pro/Max plans.
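The economics behind the transport-pass analogy can be sketched with a quick back-of-the-envelope calculation. All figures below are purely hypothetical and illustrative; they are not Anthropic’s actual prices, token rates, or plan limits:

```python
# Hypothetical comparison of a flat-rate subscription vs. pay-per-use
# API billing. Every number here is an illustrative assumption.

FLAT_RATE_EUR = 20.0      # hypothetical monthly subscription price
API_EUR_PER_MTOK = 10.0   # hypothetical blended API cost per million tokens

def monthly_api_cost(tokens_per_day: float, days: int = 30) -> float:
    """Cost of the same usage billed per token through the API."""
    return tokens_per_day * days * API_EUR_PER_MTOK / 1_000_000

# A typical coding-assistant session vs. an always-on agent like OpenClaw:
light_user = monthly_api_cost(50_000)       # ~50k tokens/day of Claude Code use
agent_user = monthly_api_cost(20_000_000)   # ~20M tokens/day of agent use

print(f"light user:  {light_user:.2f} EUR/month")   # below the flat rate
print(f"agent user:  {agent_user:.2f} EUR/month")   # orders of magnitude above it
```

Under these made-up numbers, the flat rate subsidizes the light user, while the always-on agent would cost hundreds of times the subscription price at API rates, which is the imbalance Anthropic is pointing at.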
It’s understandable… Many users have attacked Anthropic and criticized the decision. Boris Cherny, one of the leads of Claude Code, replied on X to a user who told him the decision “sucked”:
“I know it sucks. At its core, engineering is about making hard decisions, and one of the things we do to serve a lot of customers is optimize how subscriptions work to reach as many people as possible with the best model.
Third-party services are not optimized in this way, so it is very difficult for us to maintain it in the long term.”
It is true that massive use of Claude in OpenClaw raises Anthropic’s internal infrastructure costs: it does not pay off for the company to have so many OpenClaw instances running on Claude, at least not outside the API. The decision is reasonable, because using those plans with this and other AI agents effectively amounts to “cheating” the flat rate. But…
…the timing is curious. This decision comes shortly after Anthropic started to “copy” some of OpenClaw’s features in its own products, something we also expected. Claude Cowork, Dispatch and Remote Control have become the “official” ways to do some of what OpenClaw does directly with Anthropic tools, and shortly after releasing them is when the company began to close off the way users could use their monthly plans. For Peter Steinberger, creator of OpenClaw, the fact that Anthropic’s decision comes now is significant:
“It’s funny how the timing coincides: first they copy the most popular features of our tool to their own closed product, and then they block access to open source.”
Anthropic doesn’t lie. The technical argument Cherny mentions is real, and the truth is that there is a capacity problem. Claude models are expensive to run, demand grows faster than infrastructure, and users of AI agents like OpenClaw consume resources far more intensively than conventional chat or Claude Code users. That is not sustainable, but it is also true that this decision comes three weeks after Steinberger “sold” OpenClaw to OpenAI, Anthropic’s nemesis.
Anthropic in Nintendo mode. This is the classic walled garden pattern: you see what works on another platform or rival product, absorb it, and then close the door. Nintendo has done this for decades with its platform developers, and Apple has perfected it with the App Store. The difference is that Nintendo and Apple had that walled garden from the beginning, and Anthropic is building it now.
Although it’s not exactly the same. It should be noted that Nintendo is protecting an ecosystem with decades of irreplaceable IPs (intellectual properties): Mario, Zelda or Metroid. It is normal for there to be an access cost. Anthropic is doing the same right now with Claude as its star product, but it obviously has nothing comparable (for the moment) to Nintendo’s IPs. And here is another unsettling comparison: Apple and Nintendo charge you to enter the ecosystem, but they do not keep the meter running. Anthropic does: it has an increasingly closed garden, and it also forces you to use the API to run OpenClaw, with a pay-per-use model that is reasonable given Claude’s demand.
But the rest do allow it. What Anthropic has done clashes with what other AI companies are doing, especially Chinese startups. The creators of Kimi, Minimax, GLM or the recent Xiaomi MiMo have no such policies: you can subscribe to their very cheap monthly plans and take advantage of their models in OpenClaw without problems and with barely any limits. It is true that these models are not as capable as Claude, but the contrast in how they act is still striking.
In Xataka | OpenClaw changed the rules of the AI race. Technology companies already have their answer: copy it

