Anthropic was the “don’t be evil” of AI for developers. Now it’s squeezing them all

Claude Code and Claude Opus 4.6 sparked a golden era for developers, who suddenly had a fantastic AI agent and model for their work. OpenAI was no longer the trendy company: Anthropic was. Users and developers fell in love with it, and it became the darling of AI. Months later, Anthropic is making changes that are drawing heavy criticism and that point to something we have seen over and over: platforms win you over, and then, inevitably, platforms squeeze you.

The trigger. On April 2, 2026, Stella Laurenzo, Senior Director in AMD’s AI group, published a post in Claude Code’s GitHub repository titled “Claude Code is useless for complex engineering tasks with February updates.” It included a meticulous analysis of almost 6,600 real Claude Code sessions, with nearly 235,000 tool calls and about 18,000 reasoning blocks across four different projects. Her conclusion was clear: the performance of Claude Code and Claude Opus 4.6 had degraded.

The numbers. Laurenzo’s analysis distinguishes two periods. In the good period, from January to mid-February, the model read 6.6 files for every file it edited. In the allegedly degraded period, from March onwards, that ratio had fallen to 2.0 files read per edit. Code edits in files Claude had not recently reviewed went from 6.2% to 33.7%: one in three changes to the code was being made “blindly.” Visible reasoning also shrank, from 2,200 characters to only 600 on average. And there is more: the cost of the process multiplied by 122 over the same period, although in that time her team also went from using 1-3 concurrent agents to 5-10, which complicates the interpretation of the data.

Anthropic tries to clarify what happened. Anthropic’s official response came from Boris Cherny, the engineer responsible for Claude Code.
Cherny confirmed two actual product changes: on February 9, Opus 4.6 switched to so-called “adaptive reasoning” by default; on March 3, the default effort level moved from high to medium, sitting at level 85, which Anthropic describes as “the best balance of intelligence, latency, and cost for most users.”

Closed debate. Cherny also addressed the suspicion that Claude was now hiding “how it thought.” He explained that the change in visible reasoning logs is not a real degradation: what was detected was simply a user-interface modification that hid intermediate reasoning to reduce latency without affecting model performance. Laurenzo had already anticipated something like this and tried to work around it, but her data still showed a drop in performance. Cherny closed the debate as if the issue were resolved, but it doesn’t seem it really is.

Computing capacity crisis. Thariq Shihipar, of the Claude Code team, revealed in March that Anthropic was adjusting session limits to 5 hours during peak hours. In other words: when demand was high, your Claude tokens would probably run out faster. He pointed out that only about 7% of users (the most intensive ones during those peak hours) would actually notice the measure, and admitted: “I know this is frustrating. We will continue to invest in scaling efficiency.” This sits awkwardly next to a comment in the debate on Laurenzo’s post in which Cherny explained that “we do not degrade our models to better serve demand, I have said this many times before.”

More degradations. Other findings and criticisms appeared, such as the fact that Claude Code’s prompt cache had also been drastically reduced (from one hour to five minutes), triggering quota consumption in long programming sessions.
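Laurenzo’s headline metrics (files read per edit, rate of “blind” edits) are simple to reproduce from session logs. A minimal sketch, assuming a hypothetical log format in which each tool call is an (action, path) pair — this is an illustration, not Claude Code’s actual log schema:

```python
from collections import deque

def session_metrics(events, recent_window=50):
    """Compute the read/edit ratio and the 'blind' edit rate from a
    list of tool-call events. Each event is an (action, path) tuple
    where action is 'read' or 'edit'. An edit counts as 'blind' if
    the file was not among the last `recent_window` reads."""
    reads = edits = blind_edits = 0
    recent_reads = deque(maxlen=recent_window)  # recently read paths
    for action, path in events:
        if action == "read":
            reads += 1
            recent_reads.append(path)
        elif action == "edit":
            edits += 1
            if path not in recent_reads:
                blind_edits += 1
    return {
        "reads_per_edit": reads / edits if edits else 0.0,
        "blind_edit_rate": blind_edits / edits if edits else 0.0,
    }

# Toy session: three reads, one edit of a reviewed file, one blind edit.
events = [("read", "a.py"), ("read", "b.py"), ("read", "a.py"),
          ("edit", "a.py"), ("edit", "c.py")]
print(session_metrics(events))  # → {'reads_per_edit': 1.5, 'blind_edit_rate': 0.5}
```

On metrics like these, Laurenzo’s reported shift (6.6 → 2.0 reads per edit, 6.2% → 33.7% blind edits) is a large, easily measurable signal, which is why the debate centered on what caused it rather than on whether it happened.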
Anthropic told VentureBeat that Team and Enterprise accounts are not affected by these session limits, but the pattern seems increasingly clear: computing is scarce and must be rationed… or at least that is what all these Anthropic measures seem to point to. What remains unclear is whether the quality of the model has actually degraded, although there are Reddit “megathreads” that also point in that direction.

“Nerfing”? Nothing of the sort. When a company deliberately degrades its service, social networks often call it “nerfing,” and criticism along these lines was mounting in Anthropic’s case. Numerous posts by users on X and in technology media have referenced Laurenzo’s study and accused Anthropic of deliberately degrading its models. Boris Cherny intervened in at least one case to say flatly that “That’s false” and to explain that they reported the changes and in fact gave users the option to disable them.

But rationing exists. The Wall Street Journal confirmed that this rationing of computing is indeed occurring across AI platforms due to high demand. A good example of the consequences is David Hsu, founder and CEO of Retool. He explained in that newspaper that although he preferred Claude Opus 4.6 to power his AI agent, he recently had to switch to OpenAI’s model because “Anthropic keeps crashing all the time.”

Prices change (silently). The Information reported yesterday that Anthropic is changing the way it bills Enterprise-plan users. Instead of a $200-per-month subscription with a “flat rate” for using its AI models, it will charge a base rate of $20 per user per month and add each user’s consumption at the standard price of its API. Anthropic’s own updated documentation points this out (“Use is not included in the per-seat rate”), and it is estimated that the change could double or even triple the cost of using Claude for heavy users.
The 10-15% API discounts that were included in the past, and that allowed companies to scale token consumption more affordably, also disappear. Prices per million tokens have not changed, but the model has gone from a “flat rate” (with usage limits) to pay-per-use, much more expensive for heavy users. And it’s not just Anthropic.
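The impact of the billing change is easy to estimate with back-of-the-envelope arithmetic. The $200 and $20 figures come from the reporting above; the team size and per-user API spend below are made-up numbers for illustration only:

```python
def monthly_cost_old(users):
    """Old Enterprise model: $200 flat per seat, usage included."""
    return 200 * users

def monthly_cost_new(users, api_spend_per_user):
    """New model: $20 base per seat plus metered API consumption."""
    return (20 + api_spend_per_user) * users

# Hypothetical 50-person team whose users burn $500/month each in tokens.
old = monthly_cost_old(50)        # 200 * 50 = 10,000
new = monthly_cost_new(50, 500)   # (20 + 500) * 50 = 26,000
print(old, new, round(new / old, 1))  # → 10000 26000 2.6
```

For this hypothetical team the bill goes up 2.6x, consistent with the “double or even triple” estimate for heavy users; light users, by contrast, could end up paying less than $200 per seat.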

LaLiga’s massive IP blocks are making life impossible for users, companies and developers. Here’s how you can file a claim

LittleCranky67, the alias of our protagonist, didn’t know what was happening with his computer this weekend. This developer was doing something that had never given him any trouble: using the GitLab platform to download a Docker software package. The process kept throwing strange errors, and LittleCranky67 eventually realized what had caused it all: LaLiga’s indiscriminate IP blocking. After he shared his frustration on Hacker News, hundreds of comments confirmed other similar cases, and in them we also discovered something interesting: how to officially file a claim against LaLiga. Or at least, how to try.

A sad old story. LaLiga shields itself behind the judgment of December 18, 2024 issued by the Commercial Court No. 6 of Barcelona. This allows it to demand that operators such as Movistar, Vodafone, Orange or Digi block, at the IP level, any address identified as a source of illegal IPTV broadcasts during LaLiga football matches. Many of those IPs are shared Cloudflare IPs, so when the IPTV service’s IP is blocked, all domains associated with that shared IP are blocked too, which can be hundreds or even thousands. Among those domains are web pages of private users, of companies that suddenly cannot sell, and also critical services for developers such as Docker, GitHub or GitLab.

The irony is that the blocks don’t work. While many users complain about these blocks and how they are affecting the websites and services they use, many others keep pointing out on social networks that the blocks on these IPTV broadcasts can easily be circumvented in many ways, the most popular being VPN services. LaLiga knows this method is widely used, so for months it has also been working on blocking those services. It doesn’t seem to be helping much, and whoever really wants to watch a football match without paying has many relatively simple ways to do it.

If you are affected, you can file a claim.
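The collateral-damage mechanism behind cases like LittleCranky67’s is easy to illustrate with a toy grouping: block one shared IP and every domain behind it goes down with it. All the domain-to-IP data below is invented for the example:

```python
from collections import defaultdict

def collateral_damage(domain_to_ip, blocked_ips):
    """Return every domain knocked out when the given IPs are blocked,
    grouped by IP. Blocking one shared IP takes down all its domains,
    not just the infringing one."""
    by_ip = defaultdict(list)
    for domain, ip in domain_to_ip.items():
        by_ip[ip].append(domain)
    return {ip: sorted(by_ip[ip]) for ip in blocked_ips}

# Hypothetical mapping: one pirate IPTV site shares a CDN IP with
# perfectly legitimate services.
domain_to_ip = {
    "iptv-pirate.example": "203.0.113.7",
    "registry.gitlab.example": "203.0.113.7",
    "small-shop.example": "203.0.113.7",
    "unrelated.example": "198.51.100.9",
}
print(collateral_damage(domain_to_ip, {"203.0.113.7"}))
# → {'203.0.113.7': ['iptv-pirate.example', 'registry.gitlab.example', 'small-shop.example']}
```

In the real case the ratio is far worse: a single shared Cloudflare IP can front hundreds or thousands of domains, which is exactly why an IP-level block aimed at one IPTV source takes down package registries and shops that have nothing to do with football.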
In that thread, several users note that one way to try to change things is for users to protest and complain en masse. There are several ways to do it:

Telecommunications User Service Office. This is the official body in Spain for these cases. A formal claim can be filed for arbitrary loss of service or censorship, and you can even claim financial losses if the blocking prevents you from working. Those who have a digital certificate or Cl@ve can do it directly online.

Complain to your internet provider. It is also important to open a support ticket with your operator. They are obliged to follow court orders, yes, but they should know that the blocking is causing collateral damage to services that have nothing to do with football.

Common Electronic Registry (SARA network). This portal also allows you to send formal complaints to the administration if other methods fail.

Spanish Data Protection Agency (AEPD). Those responsible for RootedCON have been fighting this situation for some time and offer another recommendation: report LaLiga to the AEPD. A template allows you to complete that complaint in a simple way.

Complaint to telecommunications operators. RootedCON also suggests filing a complaint against ISPs, and explains the process in a short Twitter thread. Again, you just download a form and file it individually.

Complaint to the European Commission. It is also possible to use the European Commission’s complaints website to send a claim to that body. We explained the process in Xataka; it is another way of trying to stop this situation with the help of European institutions.

The BOE serves as a defensive argument. In these complaints it is advisable to cite BOE-A-2022-10757 as a legal reference. It corresponds to Law 11/2022, of June 28, General Telecommunications (LGTel), the fundamental rule that regulates your rights as an internet user in Spain.
The message we can write is the following: “Under the protection of Law 11/2022, of June 28, General Telecommunications (BOE-A-2022-10757), specifically regarding the rights of end users (Chapter IV) and the principles of continuity and quality of service, I present this claim for the blocking of access to legitimate IP addresses (specify which ones, e.g. Cloudflare/Docker) unrelated to any illicit activity. This blocking constitutes a violation of my right to communication and to the contracted service, causing harm (professional/personal) by preventing the operation of work/security tools. I request the immediate cessation of said technical restriction in compliance with the provisions of the aforementioned Law.”

The nightmare continues. The Hacker News debate is just confirmation of what internet users in Spain have been suffering for more than a year: a private organization has the power to order a country’s ISPs to indiscriminately block IPs, without real-time judicial review, during regular hours, causing documented harm to third parties that have nothing to do with the original infringement. In that thread some users compare the situation with that of the Great Firewall of China, not so much in intensity as in its logic: an infrastructure of selective censorship that could seemingly be applied to any content an actor with sufficient judicial power wants to block.

From football to tennis or golf. In fact, things could go further, because what began as a crackdown on illegal broadcasts of football matches could now extend to other sports such as tennis or golf. Telefónica, which is following in LaLiga’s footsteps, wants to extend these indiscriminate blocks to the Champions League, tennis or golf.
That threatens to extend these side effects to many more days and many more hours, and could mean that for a good part of the week, users like LittleCranky67 find themselves unable to download Docker packages or access thousands of legitimate websites knocked out by these blocks. Images | Wirestock | LaLiga In Xataka | LaLiga has been at war with Cloudflare for years over piracy. It has just joined forces with its main competitor

The developers who get the most out of AI are also the ones who sleep the least: it’s called "AI psychosis"

Andrej Karpathy, co-founder of OpenAI and the person who coined the term vibe coding, has been in what he describes as a state of “AI psychosis” since December. He works 16 hours a day directing swarms of code agents, and he admits he feels “extremely nervous” when he has tokens left unconsumed at the end of the month. He admitted all this in an interview with Sarah Guo. It is not an isolated case, but rather a pattern beginning to repeat itself among the developers who get the most out of these agents.

Why it matters. The dominant narrative about AI has been one of unlimited productivity and the famous “10x” engineer. What is beginning to be documented is its dark side: the most intensive users are also the ones showing the most worrying signs of behavioral deterioration. And they are not anecdotal profiles. Garry Tan, CEO of Y Combinator, has called his own experience “cyber psychosis.” A CTO quoted by Axios says he needs prescription medication to sleep. If the most productive tools in history generate the same patterns in their most intensive users as gambling does, the debate about the impact of AI at work enters another dimension.

Between the lines. Karpathy’s nervousness about tokens going unused is the behavioral signature of someone who has internalized scarcity as a threat, exactly the same mechanism that keeps a gambler hooked on a slot machine. Developer Armin Ronacher talked about this in January: “Many of us fell into code addiction with agents. We barely slept, we built incredible things.”

The context. Agents like Claude Code or OpenAI’s Codex do not work like a chatbot you ask a question. They operate autonomously for hours, writing, testing, and deploying code while the developer monitors, fixes, and re-delegates.
The promise is enormous, and so is the cognitive cost: the human brain is not designed to supervise processes that advance at machine speed during 16-hour days.

Yes, but. Programmers have always had a reputation for working in marathons of concentration; sleepless nights before a launch are part of industry folklore. What distinguishes this phenomenon is its compulsive nature and its continuity: it is not the specific pressure of a deadline, but an activation that does not switch off when the job ends, because with an agent that can keep running, the job never completely ends.

In Xataka | I have lived the “miracle” of vibe coding: this is how I programmed an Android TV app without having any idea about programming

Featured image | Anthropic

The news “The developers who get the most out of AI are also the ones who sleep the least: it’s called ‘AI psychosis’” was originally published in Xataka by Javier Lacort.

Meta has ended up firing its developers to pay for AI

Mark Zuckerberg’s company is not having its best week. To the sanctions imposed by a US court for failing to protect users from the addictive consequences of its platforms, we can now add a new round of layoffs affecting hundreds of people across five business areas. It is not the first round this year, and it probably won’t be the last. We cannot say the measure has caught Meta employees by surprise: a few days ago Reuters already reported that the parent company of Facebook, Instagram and WhatsApp was planning to cut staff due to the rising costs of AI development. Those plans have now materialized, eliminating the departments closest to the metaverse.

700 employees on the street and a metaverse that fades out. According to NBC, based on sources close to the company, Meta will lay off about 700 employees in this round. The cuts will affect Reality Labs, the division that for years was the flagship of Zuckerberg’s big bet on the metaverse and that just a few days ago announced the closure of Horizon Worlds on Quest headsets, as well as some employees in human resources, sales and Facebook, as The New York Times pointed out. Those affected are a small fraction of the nearly 78,000 employees Meta currently has on staff, but the reason given by the company is already a classic in big tech: “Meta’s teams restructure or implement changes periodically to guarantee that they are in the best position to achieve their objectives,” said a Meta spokesperson in a statement NBC had access to.

Layoffs down, bonuses up. Hours before these layoffs were announced, Meta presented a new stock compensation program for six of its senior managers. The message between the lines has not gone unnoticed.
While the company cuts staff with the argument of reducing costs to face its huge investments in AI, with a spending forecast of between $162 billion and $169 billion for 2026, the executives closest to Zuckerberg saw their compensation increased by up to $921 million each over the next five years. Meta justifies the increase as a tool to retain talent in the middle of the war for the best AI profiles, but the timing of the two announcements could not have been more unfortunate.

Layoffs without financial hardship. Historically, a company laying off its employees was a clear sign of financial problems. In the age of AI, by contrast, each round of layoffs is celebrated by the financial markets with rises in the share price, because it is a clear sign that the company is restructuring to adapt its strategy to AI development and keep generating massive income. In fact, one phenomenon occurring in the latest rounds of layoffs at large technology companies is that while hundreds of employees in certain departments are let go, new vacancies are opening up to hire employees with a different, more AI-oriented profile.

Meta is not an isolated case. What is happening at Meta is part of a dynamic repeating across the sector. Amazon, Microsoft and other big tech companies have announced massive cuts in recent months, and in all cases AI appears as the main justification for the layoffs. According to data from the consulting firm Challenger, Gray & Christmas, AI has been the stated reason for 12,304 layoffs so far in 2026, the equivalent of 8% of all layoffs recorded in that period.

In Xataka | Mark Zuckerberg spent millions on a “superintelligence” team. He is dedicating it to creating a personal AI agent for you

Image | Meta

Android’s controversial new requirements for installing apps from unverified developers

Android has always boasted of something that set it apart from the rest: the freedom to install applications from practically anywhere. That possibility still exists, but what we have now seen points to an important shift in how it is exercised. Google is not eliminating it, but it is surrounding it with more friction so that it stops being an impulsive gesture. And that change, although it does not close the door, clearly transforms the experience for those used to taking that path without too many obstacles.

The change. This novelty does not affect everything outside Google Play equally, and it is worth pausing here so as not to mix concepts. What Google proposes is not to tighten every external installation, but to add new barriers when the application comes from a developer who is not verified within the new system the company wants to implement. In that specific scenario, the process stops being immediate and starts requiring more time, more steps and a much more conscious decision.

What steps we will have to follow. When Google activates this flow, scheduled for August according to the company, installing an app from an unverified developer will no longer be a quick process and will involve a very specific sequence. These are the steps to complete:

Manually activate developer mode in settings, with no shortcuts.
Confirm that no one is guiding us to disable system protections.
Restart the phone, something that cuts off calls or active remote access.
Wait 24 hours before continuing, in what Google calls a “protective waiting period”.
Re-authenticate with biometrics or PIN to confirm it is really us.
Finally, install the app, with visible warnings and the option to allow this type of installation for seven days or indefinitely.

The argument. Google says that Android is no longer a platform associated primarily with enthusiasts, but a digital foundation used by billions of people.
In this context, the company maintains that previous warnings and barriers were not enough to stop certain frauds based on social engineering. As it explains, many attacks rely on generating urgency, keeping the victim under pressure and pushing them to deactivate protections without thinking, and this new system seeks precisely to break that dynamic.

Openness vs. control. Google insists that this move does not break with the essence of Android, but rather tries to balance openness and security. On its blog, the company emphasizes that advanced users will still be able to install apps from unverified developers, and that this “advanced flow” is intended for them as a one-time process.

How it affects us. In practice, the impact will depend a lot on how we use Android. If we stay within Google Play, we will not see relevant changes day to day. However, if we are used to installing applications from outside or following independent developers, the experience does change: installing from an unverified source will involve more steps and more time.

Images | Xataka | Google

In Xataka | The foldable that comes closest to the perfect screen avoids all problems except one: the OPPO Find N6 points the way forward
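The sequence Google describes behaves like a gate with several independent conditions, where the 24-hour wait is the piece designed to defuse social-engineering urgency. A toy model of that logic (an illustration of the published flow, not Google’s actual implementation; the function and parameter names are assumptions):

```python
import time

WAITING_PERIOD_S = 24 * 60 * 60  # the "protective waiting period" Google describes

def can_install_unverified(dev_mode_on, scam_check_confirmed,
                           rebooted, wait_started_at, reauthenticated,
                           now=None):
    """Return True only if every step of the unverified-install flow
    has been completed, including the 24-hour cooling-off period.
    Any single missing step blocks the install."""
    now = time.time() if now is None else now
    return (dev_mode_on
            and scam_check_confirmed
            and rebooted
            and wait_started_at is not None
            and now - wait_started_at >= WAITING_PERIOD_S
            and reauthenticated)

# The wait is what breaks the scammer's "do it right now" pressure:
t0 = 1_000_000.0
print(can_install_unverified(True, True, True, t0, True, now=t0 + 3600))   # → False (only 1h elapsed)
print(can_install_unverified(True, True, True, t0, True, now=t0 + 90000))  # → True  (>24h elapsed)
```

The design point is that every condition is conjunctive: an attacker coaching a victim over the phone cannot talk them past a mandatory day-long delay the way they can talk them past a warning dialog.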

Amazon insisted that its developers use its AI to work. They’re just fixing what AI breaks

AI was going to make us work less and better. In Amazon’s offices, reality is proving very different. According to several employees of the firm, internal tools like Kiro are causing a rebound effect: developers spend more time fixing the defective code that tool generates than writing their own.

Out of the frying pan, into the fire. The situation is ironic because, as some engineers point out, they are trying to get out of a problem caused by AI by using more AI. Dina, a software developer from New York, joined Amazon two years ago, and her job began as writing code. Recently, however, she was not so much writing it as fixing the code that Amazon’s programming AI, called Kiro, broke. According to her, the model was unreliable and frequently generated bad code. Days after speaking with The Guardian for that report, Dina was fired.

Layoffs galore. Amazon’s case is especially harsh given the company’s recent wave of massive layoffs. In recent months it has cut its workforce by 30,000 people, 10% of its corporate strength. Amazon denies that these layoffs have to do with AI, but its CEO, Andy Jassy, has suggested in internal communications that the efficiency gains from process automation will allow it to operate with leaner teams, contradicting what the company says without quite spelling it out.

Workplace suicide. Many of the employees interviewed said they felt they were committing a kind of professional suicide. Their current job is to document processes in detail and correct system errors in order, in essence, to prepare for their own replacement by machines. These forced training phases are making employees feel that their cycle at Amazon has an expiration date.

Not every problem needs a hammer. What is happening at Amazon echoes things we have seen before. According to the employees interviewed, the deployment of AI tools has in many cases been chaotic.
They have been forced to use “half-baked” tools born of hackathons, without adequately evaluating whether those tools really provided the right solution. One engineer put it clearly: you can’t look at every problem and think about how to apply the hammer you already have; the first thing is to know whether the problem really needs a hammer.

Service outages. AI integration issues are also reportedly responsible for Amazon service outages. Internal reports link at least two of those incidents to code changes made with AI tools. These changes were not properly supervised, and although the company can blame “human error” for the problems, the origin is the usual one: delegating critical decisions to systems that are not yet 100% reliable.

Amazon knows who uses AI. In Amazon’s adaptation to the age of AI there is another disturbing element: managers are monitoring to the millimeter what employees do with AI. What was already happening in the warehouses to measure worker performance and productivity now also happens in the offices. There are dashboards where team leaders monitor who uses AI and how often, and in some teams the goal is for at least 80% of the workforce to use these tools weekly, whether they are useful or not.

Promotions. How much you use AI tools can also be decisive for internal promotion at Amazon. Documents have surfaced in which candidates are explicitly asked how they have used AI to improve their impact. The message is clear: if you do not embrace this technology, even when it is deficient, your chances of moving up become very slim.

Low morale. Among the employees surveyed, the prevailing feeling was demoralization. In fact, more than 1,000 workers signed a petition against the aggressive deployment of AI tools.
For them, the company culture is changing, and what is now demanded is more hours of work with fewer resources, with the excuse that external competition is “hungry.” In Xataka | The role of developers of the future was supposed to be to “review” the code that AI wrote. Claude just buried it

On the surface, the AI ​​talent war is about engineers and developers. It’s actually about plumbers and electricians.

In recent months we have seen some of the biggest tech companies open their wallets to hire the best AI talent: among the most voracious is Meta, but Jony Ive’s arrival at OpenAI was a flash signing. They may not have the résumé of Apple’s former design chief or make as many headlines, but the AI talent war is also being played in another league: that of blue-collar technicians, as NVIDIA’s CEO predicted months ago and repeated more recently at the World Economic Forum in Davos.

(Another) bottleneck for AI. For ChatGPT to get a new model, or for Nano Banana to level up, data centers are needed, and with them huge quantities of electricity supplied by power plants. We have already seen that data centers are proliferating like mushrooms (or at least their planning is; actually building them is a slower, more arduous story, which is leading some companies to consider assembling them in space). Some big tech companies are even becoming energy companies. But to assemble and maintain all of this you need electricians, plumbers and air-conditioning technicians, and quite a few of them: the union representing electricians in the United States and Canada mentions in a blog post specific data center projects that could quadruple its current membership.

Blue-collar technicians wanted. The problem is that they are scarce: according to the United States Bureau of Labor Statistics, between now and 2034 there will be an average shortage of 81,000 electricians per year. Furthermore, demand over the next decade will increase by 9%, well above average. According to a McKinsey study, by 2030 the United States will require 130,000 more electricians and 240,000 more construction workers. The shortage of professionals such as bricklayers, welders and plumbers also exists in Europe, as the latest report of the European Employment Service notes. In Spain it is currently taking its toll on housing construction.
There is no one left to inherit the workshop. Wired picks up statements by Anirban Basu, chief economist of the American builders’ association, who recounts how in the past workers passed their skills on to their children, but those children are now encouraged to pursue university studies instead. The problem is that the baby boomers are retiring, leaving a void no one is filling. Dan Quinonez, his counterpart in the plumbing sector, says much the same: they are doing everything possible, but it is a structural problem with no immediate solution.

Data centers are no place for newbies. Data centers are not just any job, and not only because of the technical requirements: the deadlines are tight, leaving little room for delays or errors. This matters because apprentices are normally trained on the job, so incorporating workers quickly and safely is a challenge, as David Long of the National Electrical Contractors Association explains.

What big tech is doing. This reality has not gone unnoticed by the big technology companies, and Google has already moved: last spring it announced a financial injection into the Electrical Training Alliance, an organization that trains electricians, with the goal of upskilling 100,000 active electricians and training 30,000 new ones before 2030. The catch is that AI also competes with other sectors (housing, hospitals, industry…) and the competition is fierce. But the companies behind AI have an ace up their sleeve: those demands and tight deadlines usually translate into higher salaries and more overtime. As Charles White of the plumbing contractors’ association explains, this pushes union workers to change companies in search of better conditions. Without going any further, Jensen Huang predicted offers with six-figure salaries.

How long will the boom last?
The installation of a data center is a project finite in time; once completed, it only requires a small permanent maintenance team. And although we are in a phase of AI expansion with enormous potential, sooner or later it will lose steam. When that happens we will see what comes next but, given the needs in other sectors and the hole the retiring generations are leaving, it seems these technicians will not find it hard to land other work.

In Xataka | Spain is becoming a true Mecca for data centers. Uruguay has some lessons in this regard

In Xataka | 30,000 jobs and many doubts. What we know (and what we don’t) about the Valencian “data valley”

Cover | Sammyayot254, Jimmy Nilsson Masth and chaddavis.photography

Vibe coding wants to help Open Source. But developers don’t want AI botches

If you like open source software, vibe coding now gives you a fantastic opportunity: take that code and modify it to your liking with the help of AI agents that program. Or so the story goes. You may have good ideas, and the AI will turn them into new code with these tools, but there is a problem: the quality of that code may not be adequate.

What has happened. Steve Ruiz (@steveruizok) is the creator and maintainer of tldraw, a neat open source application that turns your browser into a canvas so you can easily draw whatever you want on it. On January 15, Steve posted a message on X announcing something striking: he would stop accepting code contributions (pull requests, PRs) in the tldraw GitHub repository.

We don’t need low-quality code. “Due to the influx of low-quality pull requests, we will soon close those requests to external contributors,” the maintainer said in an additional post on the project’s official blog. The message was clear: although people’s intentions are surely good when they try to contribute ideas to an existing project, this developer soon realized that the code contributed by new programmers, fans of vibe coding, was of low quality. The solution? Ban those contributions.

AI-generated code can work. In that article he indicated that this was not a measure against vibe coding, but against poor-quality code, whether human or AI. Ruiz explained: “We already accept code written with AI. I write code with AI tools. I hope my team uses those AI tools too. If you know the project’s code base and know what you’re doing, writing great code has never been easier thanks to these tools.”

AI slop, but in code. Although we usually talk about “AI slop” in reference to low-quality text, images, music and videos, the term can also be applied to code.
Ruiz explained that in September he began to detect many code contribution requests that seemed correct but that, after deeper analysis, could potentially introduce future problems and complexity into the project, even though they worked. I correct here, I correct there. In addition, many of the contributors had profiles that showed them jumping from one Open Source project to another and then disappearing. They simply contributed without following the project's policies or requirements and moved on to the next one. This is a plague. In the debates this decision generated on Hacker News and X, Ruiz found a surprise: people not only did not protest, they welcomed the measure. He commented that "this seems to be the standard experience for all public repository maintainers right now." He cited the example of Excalidraw, another similar project, which "received more than twice as many PRs in the fourth quarter of 2025 than in the third" in its repository. More and more vetoes of low-quality AI code. Other projects are going through the same phase. Ghostty, a terminal emulator for macOS and Linux, recently published its "AI policy" in the project's public GitHub repository with important notices: for example, that "PRs created by AI must have been fully verified with human use", and that "all use of AI in any way, shape or form must be disclosed." That's cheating. Curl, a very popular command-line utility, had announced a bounty program to find bugs and vulnerabilities in its code. What have many people done? Use AI to find them and collect the money. Those responsible for the program have announced that they will close it this month in the face of the avalanche of low-quality vulnerability reports clearly generated by AI. Linus already said it. Linus Torvalds, creator of the Linux kernel, admitted to using vibe coding tools for some small personal projects.
While recognizing that these tools can be great, he warned of the danger of all that AI-generated code: "AI will be a tool, and it will make people more productive. I think vibe coding is great for getting people to start programming. I think (the code it generates) is going to be horrible to maintain… so I don't think programmers will go away. You'll still want to have people who know how to maintain the output." AI code works, but it is rarely "quality" code. The developer community has been warning about this, and experiencing it, for some time. Although AI tools can help program and solve many routine tasks, the generated code must be reviewed by a human programmer to avoid future problems. It is reasonable to think that this code will keep getting better, but today in many cases the situation is clear: it may work, yes, but that is not enough for many projects in production, especially when they are used by thousands (let alone millions) of people. In Xataka | Bill Gates and Linus Torvalds had been rivals for 30 years. The funny thing is that they just met and took a selfie

AI has allowed developers to program faster than ever. That’s turning out to be a problem.

Anyone who has tried it knows it. Programming with AI can be wonderful, especially if you have (almost) no idea about programming. This is where generative AI models have seen their first, and probably only, true revolution. Developers were the first to be able to embrace this new technology. The appearance of GitHub Copilot in 2021 showed us that it was no longer necessary to write so much code by hand, because the machine was already doing it for you, and since then the advance of generative AI in programming has been overwhelming. The question is: has it been positive? The answer is not at all clear. It is evident that AI has enabled millions of people who were not programmers to turn their ideas for applications and games into reality, and millions of professionals to save time by not having to write repetitive (boilerplate) code so they can focus on more important and productive parts of their work. The industry, of course, has been especially insistent on this vision of the segment's transformation. Satya Nadella (CEO of Microsoft) and Sundar Pichai (CEO of Alphabet/Google) boasted months ago that around 25% of the code at their companies is generated by AI. Meanwhile, Jensen Huang went further and made it clear that, in his view, no one should learn to program anymore because AI would do it for us. These are very forceful statements, but behind them lies another reality: all that glitters is not gold in the world of AI for programmers. MIT Technology Review has spoken with more than 30 developers and experts in this field and reached interesting conclusions. AI is a better programmer than ever; at least, according to the benchmarks. In August 2024 OpenAI made a notable launch: it presented SWE-bench Verified, a benchmark intended to measure the ability of generative AI models to program. At that time, the best model was only capable of solving 33% of the tests proposed by that benchmark.
A year later the best models already exceed 70%. (Image: current ranking of the best models according to the SWE-bench Verified benchmark; several already pass 70% of the tests. Source: SWE-bench.) The evolution in this area has been dizzying, and we have witnessed the birth of a new programming modality called "vibe coding". All the big players have developed powerful programming tools to ride the wave: we have OpenAI Codex, Gemini CLI, or Claude Code, for example, joined by startups like Cursor or Windsurf, which have also known how to take advantage of this fever for programming with AI. All of these tools promise basically the same thing: that you will program more and better. Productivity theoretically skyrockets, and while more code is certainly being written than ever thanks to AI, programmers have gone from writing their own code to reviewing what machines generate. Recent studies reveal that veteran developers who believed they had been more productive actually weren't: their estimate was that they had been 20% faster by being able to move forward without blockages, but in reality they had taken 19% longer than they would have without AI, according to the tests carried out. There is another problem too: code quality is not necessarily good, and, as we say, developers must review that code before it can go into production. In the latest survey from Stack Overflow, one of the largest developer communities in the world, there was a notable fact: the positive perception of AI tools had decreased, from 70% in 2024 to 60% in 2025. There are limitations, but even so everything has already changed. Those interviewed by MIT Technology Review generally agreed with its conclusions. Generative AI programming tools are great for producing repetitive code, writing tests, fixing bugs, or explaining code to new developers. However, they still have important limitations, and the most notable is their short memory.
These models can only handle a fraction of the codebases found in professional environments: if your code base is large, the AI model may not be able to ingest and understand it all at once. For small projects, great. For large developments, probably not so much. The problem of hallucinations also affects code, and in repositories with a multitude of components, AI models can end up getting lost and failing to understand the structure and its interconnections. The problems are there, and they can end up accumulating and causing exactly the opposite of what these tools were meant to avoid. Several experts, however, explained in that piece that it is actually difficult to go back. Kyle Daigle, COO of GitHub, explained that "the days of coding every line of code by hand are likely behind us." Erin Yepis, an analyst at Stack Overflow, indicated that although the unbridled optimism toward AI has fallen somewhat, that is actually a sign of something else: programmers are embracing this technology, but doing so aware of its risks. And then there is another reality, one that is repeated day after day and seems undeniable: the AI we have today is the worst we will ever have. It may not be tomorrow or next week, but it is clear that the AI that programs will keep getting better and better, and there may come a point when those limitations disappear. Whether they do or not, what is clear is that AI has changed programming forever. Image | Mohammad Rahmani In Xataka | OpenAI has turned ChatGPT into mainstream AI. In the business world the game is being won by its great rival
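The "short memory" limitation described above can be made concrete with a back-of-the-envelope sketch. The script below estimates whether a source tree fits in a model's context window; the 4-characters-per-token ratio and the 200,000-token window are illustrative assumptions, not the real limits of any particular model.

```python
import os

# Illustrative assumptions (not any specific model's real figures):
# source code averages roughly 4 characters per token, and the model
# accepts about 200,000 tokens of context.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW_TOKENS = 200_000


def estimate_repo_tokens(root, extensions=(".py", ".js", ".ts", ".go")):
    """Walk a source tree and roughly estimate its total token count."""
    total_chars = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(extensions):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    continue  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN


def fits_in_context(root):
    """True if the estimated repo size fits in the assumed context window."""
    return estimate_repo_tokens(root) <= CONTEXT_WINDOW_TOKENS
```

Run against a small hobby project, this check usually passes; run against a large monorepo, the estimate quickly exceeds any current window, which is exactly why the model cannot "see" the whole structure at once.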

The chaos that AI has generated in personnel hiring has revealed a type of hidden talent: “invisible developers”

For years it has been repeated that to land a good job in tech you needed to cultivate a public personal brand and maintain an updated, complete professional profile. However, more and more voices within the technology sector are dismantling that idea, arguing that many of the most valued developers do none of that. They are not going to apply to dozens of job offers or optimize their visibility: "invisible developers" are simply brilliant at their jobs. This invisibility is something that was put on the table by Gergely Orosz, engineer, analyst and author of 'The Software Engineer's Guidebook', in a recent message on his X profile, in which he pointed out that this profile of "invisible developer" flies under the radar: "the only way to find them is through references and specific searches," the expert assured. AI-assisted candidates have broken everything. The flood of AI-generated responses to job offers has completely broken the hiring system. The tech employment platform Manfred explained it perfectly: a few years ago they received between 20 and 50 applications a day for each job offer, and now they receive more than 500. Various recruiters explained on Reddit that this saturation lowers the average quality of applications and makes it difficult to detect real talent through this route. The situation is so extreme that, as Orosz indicated in an analysis of the tech job market posted on his blog, "many companies hire most engineers through contacts and referrals." Internal recommendations matter more than ever. In this saturated scenario in which true talent goes unnoticed, word of mouth has become the most reliable hiring filter. It is estimated that around 80% of existing job openings are never made public and are filled internally or through references and recommendations from employees themselves.
In fact, many companies use referral incentives so that, when a vacancy opens, employees recommend their former colleagues and acquaintances as candidates. As Orosz details in his analysis, recruiters increasingly look for candidates in their own address books rather than among the applications that come in. The myth of the hypervisible developer. Public attention usually focuses on profiles with a lot of activity on social networks or with highly visible projects. However, various examples and testimonies reveal the rising trend of "invisible developers": workers who are brilliant at their jobs but have little or no activity on their public profiles. A clear example is the post published by Max Spero, co-founder of the AI company Pangram, comparing the GitHub contribution graph of an unemployed 22-year-old developer, full of activity and contributions, with that of a prominent Google engineer, with a practically empty history. In response to that post, Konstantin K, a software developer from San Francisco, confirmed Spero's point: "The top 1% of engineers I've worked with over the past 10 years didn't have GitHub, LinkedIn or LeetCode, they don't speak at conferences or publish podcasts. But they built systems that no one else can," he wrote. Trust networks between colleagues. Other testimonials, Orosz's among them, reinforce this idea of "invisible developers" and agree that the most effective way to open job doors in the future is to be valuable to your colleagues in the present. "From the outside you cannot know how good an engineer this person is until you ask former colleagues. There are many cases like this," Orosz wrote on X. Even academic research suggests that internal networks, those formed by real collaborations rather than superficial digital connections, have a direct impact on career opportunities.
In other words, the professional prestige that these "invisible" employees build within the teams they work in weighs more than any public presence, and their colleagues become their guarantors for future jobs. Real contact in a digital setting. It is somewhat paradoxical that, faced with the saturation of digital channels and the spread of AI-based systems, the technology sector is returning to a classic model: relying on real recommendations to reduce uncertainty. Research reveals that recruiters prefer to spend their time on references validated by employees or former colleagues rather than analyzing hundreds of clone resumes generated with AI. In Xataka | Job interviews have always been a game of cunning: AI is just taking things to another level Image | Unsplash (Vitaly Gariev)
