Inside Meta there is a race to see which employee consumes the most AI tokens. It's Silicon Valley's 'Tokenmaxxing'

There is a battle within Meta: who spends the most AI tokens. The token is the basic unit AI models use to process the language with which we give them instructions, a kind of bridge between our words and the numbers the machine can work with. That is why, when OpenAI or Google present a model, they brag about the millions of tokens it can process. But tokens are also becoming a unit of spending at Silicon Valley's AI companies, so much so that they may be fueling a toxic work culture. Meta is an example: its employees compete to see how many tokens they can consume and earn the title of 'Token Legend'.

Tokenmaxxing. This is not the first time we have covered the phenomenon. A few days ago Jensen Huang, CEO of NVIDIA and one of its main instigators, commented that he would be worried if an engineer earning $500,000 did not spend at least $250,000 a year on tokens. Because tokens cost money, and NVIDIA is already considering offering tokens as part of the signing bonuses for its artificial intelligence engineers.

Meta. As one might expect, Meta does not want to miss this party. The company, which changed its name when the metaverse was going to be the next big thing and which, after that swerve, defines itself as an "AI-native company", is one of those pushing its artificial intelligence engineers to keep count of the tokens they spend each day. There is no official data, but reports in outlets such as Business Insider and The Information indicate that some of these teams have very specific token-usage targets. For example, the company expects 65% of its engineers to write more than 75% of their code with AI tools by the middle of this year. The Scalable Machine Learning division has its own target, and so on for every code-related department within Meta.

Token Legend.
The Information directly reports that there is an internal leaderboard, created by the employees themselves to gamify the work. It shows the 250 most intensive AI users with a simple premise: the more tokens you spend, the higher you climb in the ranking. The winner of this particular competition takes the title of 'Token Legend'. It turns an expectation into a kind of internal sport.

Crazy spending. If we paste the first paragraph of this article into OpenAI's 'tokenizer' tool, we see that those 542 characters alone consume 121 tokens. Well: according to The Information, in the last 30 days the total token usage on that internal leaderboard exceeded 60 billion tokens. And even though they want to dress it up as sport and competition, it remains an obligation. In late 2025, Meta launched the 'Level Up' program, in which the employees who complete the most tasks using AI earn badges. More importantly, it made the use of AI a central criterion in employee performance evaluations, which obviously feeds into salary and promotion decisions.

Doubts. Beyond the notion of paying to work, there are other underlying issues. One criticism of this tokenmaxxing system is that AI companies like Meta or NVIDIA encourage spending on tokens because it turns their own employees into consumers of the very product they are building. Software engineering analyst Gergely Orosz offered an easy analogy: it is as if Tim Cook, CEO of Apple, said he would be worried if an employee earning $500,000 a year did not spend $50,000 on App Store purchases. Orosz goes on to argue that productivity should not be measured in tokens spent, but in results obtained.

Industry issue. In any case, Meta and NVIDIA are not the only companies measuring their employees by their consumption of AI at work.
The practice is seeping into other AI majors, turning tokens into an extra perk incorporated into engineers' compensation packages alongside base salary, performance bonuses and stock. It is estimated that an OpenAI engineer can process 210 billion tokens in a week, and there are Claude Code engineers who rack up more than $150,000 in tokens in a single month. Essentially, part of your salary flows back into the company that pays you.

And... has Meta said anything? Yes: that it is not about volume but about quality, pointing out that performance rewards are based on the impact of the work and not on raw AI usage.

Image | 'Wolf of Wall Street', Meta logo (edited)

In Xataka | Google Earth shows the world. The Spanish Xoople wants AI to understand it
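To put figures like the leaderboard's 60 billion tokens into dollar terms, here is a minimal back-of-the-envelope sketch. The $3-per-million rate is a hypothetical placeholder for illustration, not a price reported in the article:

```python
def token_cost_usd(n_tokens: int, usd_per_million: float) -> float:
    """Cost of consuming n_tokens at a given price per million tokens."""
    return n_tokens / 1_000_000 * usd_per_million

# Meta's internal leaderboard reportedly logged 60+ billion tokens in 30 days.
# At a hypothetical $3 per million tokens, that volume would cost:
print(token_cost_usd(60_000_000_000, 3.0))  # 180000.0 (USD)
```

The point of the arithmetic is the order of magnitude: at typical per-million prices, leaderboard-scale consumption translates into six-figure monthly bills.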

Jensen Huang believes he has found the perfect new bonus for software engineers. Not Stocks: AI Tokens

The CEO of NVIDIA has been putting AI tokens at the center of all his public conversations. Jensen Huang's latest idea links these tokens to engineer productivity and to how the best engineers in the world are recruited: in addition to a generous salary, offer them an amount equivalent to half their annual salary in AI tokens as part of the hiring package.

Huang floated the proposal during his keynote at GTC 2026, NVIDIA's largest annual event for developers. In a later interview, the NVIDIA CEO detailed that engineers would earn "a few hundred thousand dollars a year as a base salary", and that the intention would be to give them "probably half of that, also, in tokens, so they can multiply their productivity times ten".

What Huang proposes already has a name: Tokenmaxxing. In an appearance on the 'All-In' podcast, Huang said he would be on "high alert" if an engineer earning $500,000 didn't spend at least $250,000 a year on tokens. "If that person said (that he has used tokens worth) $5,000, I would go completely crazy," Huang stated. When asked whether NVIDIA planned to spend $2 billion on tokens for its engineering team, as the proposal implies, Huang responded: "We're trying."

As The New York Times reported, this has fueled a phenomenon called "Tokenmaxxing", in which engineers brag about the number of tokens they consume to burnish the perception of their productivity: the more tokens you consume, the more productive you supposedly are.

Tokens as bonuses are a trend in Silicon Valley. The NVIDIA CEO is not the only one who thinks this way; the use of tokens as an extra work benefit is catching on among investors in the sector. Tomasz Tunguz of Theory Ventures told Business Insider that "companies are incorporating AI inference as a fourth component of engineer compensation: salary, bonus, stock and tokens."

The interest of whoever sells the chips.
The NVIDIA CEO encouraging everyone to spend more on tokens is not disinterested advice. Software engineering analyst Gergely Orosz pointed it out bluntly in a post, adding an analogy that sums it up neatly: "It's almost as if the CEO of Apple said, 'If someone who makes $500,000 a year doesn't spend at least $50,000 a year on in-app purchases on iOS, I'd be deeply alarmed.' And yes, you would be, because that would reduce the revenue you generate." Huang heads the company that manufactures the chips on which most of the world's artificial intelligence runs. He made it clear to his investors: "Without computing, there is no way to generate tokens. Without tokens, there is no way to grow revenue," he declared, describing his data centers as "token factories" whose demand will only grow as AI agents proliferate.

Do not confuse value with price. In arguing his idea, however, Huang conflates value with price. Orosz put it clearly in a post on X: "The advice that engineers should use tools that make them more productive IS correct... except that the cost of tools should NOT be what we focus on. Some of the most useful tools are very cheap. Of course, vendors will focus on selling the most expensive and most profitable tools." Productivity is not measured in tokens spent, but in results achieved. The right question for companies is not whether their employees use more AI, but whether increased use of AI is rewarded with greater productivity.

In Xataka | Customers demand that a human solve their problem. The surprising thing is that if humans serve them, they think they are an AI

Image | NVIDIA, Unsplash (Arif Riyanto)
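Huang's hiring proposal reduces to a one-line rule. A minimal sketch (the function name and structure are illustrative, not anything NVIDIA has published):

```python
def proposed_token_grant(base_salary_usd: float) -> float:
    """Huang's proposal: half the base salary again, paid out as AI tokens."""
    return base_salary_usd / 2

# An engineer on a $500,000 base would receive $250,000 in tokens --
# the same yearly token spend below which Huang says he'd be on "high alert".
print(proposed_token_grant(500_000))  # 250000.0
```

Note that under this rule the "high alert" threshold and the token grant coincide by construction: the engineer is expected to consume roughly everything they are granted.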

China makes tokens cheaper than anyone else

Last month, Chinese AI models surpassed American ones in usage on OpenRouter, an AI platform that makes it possible to spot interesting trends. The trend was confirmed this month and has accelerated: despite the obstacles the US has tried to put in place to keep China from competing in this market, the Asian giant has found a key tactic, the so-called "token export".

Useless tariffs. The US government blew up the era of globalization with its trade war with China and, more recently, with its aggressive tariff policy. That has had a clear effect on Chinese exports, but the Asian country has found a way around tariffs: AI. Its artificial intelligence models can be used around the world without being affected by tariffs. Although they lag in performance and quality, they are much cheaper to use, so China is winning the world over with its old recipe: if the product or service is good enough and also cheap, it wins. On the OpenRouter platform we have seen for two months how Chinese models are used more than US ones for a simple reason: they are much cheaper and perform reasonably well.

Token export. When we use energy we consume kilowatts; when we use AI we consume tokens from AI models. That is where China is winning with the phenomenon called "token export": the tokens of its AI models are extremely competitive, and for many tasks those models are good enough. MiniMax M2.5, Step 3.5 Flash and DeepSeek V3.2 clearly outperformed Gemini 3 Flash Preview, Claude Sonnet 4.6 and Claude Opus 4.6 in usage over the last two months on OpenRouter, for example. Developers all over the world take advantage of these models without being affected by tariffs: tokens do not pay the fees that apply to, say, mobile phones, cars and many other products.

Devastating price difference.
While an American premium model like Claude Opus 4.6 costs $5 per million input tokens (Sonnet 4.6 costs $3), Chinese models like MiniMax M2.5 cost as little as $0.25, 20 times less, and the also very popular Step 3.5 Flash costs just $0.10, 50 times less.

AI agents demand cheap models. That price gap is especially relevant now that AI agents, and especially OpenClaw, are beginning to demonstrate their capabilities. These systems can complete tasks for us and even control the machines we give them access to, but to do so they consume a huge amount of tokens. Using the best models guarantees better results but is also very expensive, while for many "simple" tasks very cheap models like the Chinese ones can solve the problem perfectly well.

The subscription trap. In recent weeks, the rise of OpenClaw and similar platforms has provoked a curious response from companies like Anthropic and Google. These companies do not like their AI-model subscription plans being used for this type of AI agent, arguing that the plans are being abused, and they have placed certain restrictions on such uses. That has pushed many users toward AI models from Chinese companies, which are positioning themselves precisely as the cheap, hassle-free alternative for running these agents.

Why Chinese tokens are so cheap. Several factors favor the low cost of AI models in China. The first is cheap energy: industrial energy costs are 40% lower than in the United States. The second is efficient architecture: as DeepSeek demonstrated, it is possible to achieve great results with techniques like Mixture of Experts (MoE), in which the model is divided into multiple "experts" and only activates those needed for each request.

The irony of tariffs.
Curiously, US restrictions on the export of advanced chips may have ended up being the great catalyst of this situation. Without access to the most advanced NVIDIA chips, Chinese companies were forced to squeeze maximum efficiency out of their models, and that has now made them more competitive in the AI inference market (the use of models in practice), which is where this new economic battle is being fought.

Challenges. Although "token export" is profitable for China right now, it faces significant challenges. Data sovereignty is one of them: for many companies and governments, sending sensitive data to data centers in China is a red line. There is also the problem of latency: the responsiveness of China's AI models suffers from the enormous distances those data packets have to travel. It remains to be seen whether Washington ends up applying some kind of measure to restrict the use of AI models from Chinese companies as well, although that seems harder.

In Xataka | NVIDIA already has a monopoly on AI hardware: now it wants to conquer software through agents
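The price multiples quoted in this piece follow directly from the per-million-token rates. A minimal sketch using the prices as reported (US dollars per million input tokens):

```python
# Per-million-input-token prices (USD) as reported in the article.
PRICES = {
    "Claude Opus 4.6": 5.00,
    "Claude Sonnet 4.6": 3.00,
    "MiniMax M2.5": 0.25,
    "Step 3.5 Flash": 0.10,
}

def price_ratio(expensive: str, cheap: str) -> float:
    """How many times cheaper the second model is per input token."""
    return PRICES[expensive] / PRICES[cheap]

print(price_ratio("Claude Opus 4.6", "MiniMax M2.5"))    # 20.0
print(price_ratio("Claude Opus 4.6", "Step 3.5 Flash"))  # 50.0
```

For a token-hungry agent, a 20-50x input-price gap compounds on every request, which is why "good enough and cheap" wins so much agent traffic.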
