Anthropic cutting OpenClaw off is understandable. Doing it now confirms that it is becoming the Nintendo of AI

Peter Steinberger, the creator of the AI agent OpenClaw, woke up on Saturday to a flood of mentions on his Twitter account. All of them warned him of the same thing: Anthropic had announced that Claude Code (Claude Pro/Max) accounts could not be used with OpenClaw. The decision left him anything but indifferent, and users of the agent have criticized a move that, although reasonable, is in a way a disturbing tactic because of how and when it arrived.

What has happened. OpenClaw is the AI agent that, if you let it, takes control of your machine and uses its apps to do everything you ask on your behalf. It is very powerful, especially with quality models such as Claude Opus 4.6 or Claude Sonnet 4.6. Many users were taking advantage of the Claude Pro and Claude Max plans to get the most out of OpenClaw, but Anthropic has now said that cannot be done. As the company explains, OpenClaw and other AI agents consume too many tokens, and those plans are designed to be used with Claude Code for programming.

If you want to use Claude with OpenClaw, pay. Anthropic does not prohibit the use of its AI models with OpenClaw, but it makes clear that if you want to use them, you must do so through its API. It is as if you bought a 20-euro monthly transport pass for unlimited subway travel: it works perfectly for commuting to work or university, but Anthropic says you cannot use that pass for your courier company that makes hundreds of trips a day. Token consumption in Claude Code is manageable, but with OpenClaw it skyrockets, and Anthropic wants you to pay per use rather than ride the (limited) "flat rate" of its Pro/Max plans.

It is understandable... Many users have attacked Anthropic and criticized the decision. Boris Cherny, one of the top managers of Claude Code, replied on X to a user who told him the decision "sucked": "I know it sucks. At its core, engineering is about making hard decisions, and one of the things we do to serve a lot of customers is optimize how subscriptions work to reach as many people as possible with the best model. Third-party services are not optimized in this way, so it is very difficult for us to maintain it in the long term." It is true that massive use of Claude in OpenClaw raises Anthropic's internal infrastructure costs: so many OpenClaw instances running on Claude do not pay off for the company, at least not outside the API. The decision is reasonable because these plans were effectively being "gamed" by using them with this and other AI agents.

But... the timing is curious. This decision comes shortly after Anthropic started to "copy" some of OpenClaw's features in its own products, something we also expected. Claude Cowork, Dispatch and Remote Control have become the "official" ways to do some of what OpenClaw does directly with Anthropic's tools, and shortly after releasing them, the company began closing off the way users could use their monthly plans. For Peter Steinberger, creator of OpenClaw, the timing is significant: "It's funny how the timing coincides: first they copy the most popular features of our tool to their own closed product, and then they block access to open source."

Anthropic doesn't lie. The technical argument Cherny mentions is real, and there is indeed a capacity problem.
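To make the economics behind that argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (the flat fee, the per-token price, the daily token counts) is an illustrative assumption, not Anthropic's actual pricing:

```python
# Back-of-the-envelope: flat-rate subscription vs. API pay-per-use.
# All figures are illustrative assumptions, not real Anthropic pricing.

SUBSCRIPTION_PRICE = 100.0   # hypothetical monthly flat fee, in dollars
API_PRICE_PER_MTOK = 15.0    # hypothetical blended API price per million tokens

def monthly_api_cost(tokens_per_day: float, days: int = 30) -> float:
    """What a month of usage would cost if billed through the API."""
    return tokens_per_day * days / 1e6 * API_PRICE_PER_MTOK

# A human coding session might burn a few hundred thousand tokens a day...
print(f"Coding use: ${monthly_api_cost(200_000):,.0f}/month vs ${SUBSCRIPTION_PRICE:,.0f} flat")
# ...while an always-on agent like OpenClaw can consume tens of millions.
print(f"Agent use:  ${monthly_api_cost(5_000_000):,.0f}/month vs ${SUBSCRIPTION_PRICE:,.0f} flat")
```

Under these made-up numbers, a programmer's API-equivalent cost roughly matches the flat fee, while an always-on agent would cost more than twenty times as much: essentially the imbalance Cherny describes.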
Claude models are expensive to run, demand grows faster than infrastructure, and users of AI agents like OpenClaw consume resources far more intensively than conventional chat or Claude Code users. This is not sustainable. But it is also true that the decision comes three weeks after Steinberger "sold" OpenClaw to OpenAI, Anthropic's nemesis.

Anthropic in Nintendo mode. This is the classic walled-garden pattern: you see what works on another platform or in a rival product, absorb it, and then close the door. Nintendo has done this for decades with its platform developers, and Apple has perfected it with the App Store. The difference is that Nintendo and Apple had that walled garden from the beginning, while Anthropic is building it now.

Although it is not exactly the same. It should be noted that Nintendo is protecting an ecosystem with decades of irreplaceable IPs (intellectual properties): Mario, Zelda or Metroid. It is normal that there is an access cost. Anthropic is doing that right now with Claude as the star product, but it obviously has nothing comparable (for the moment) to Nintendo's IPs. And here is another disturbing comparison: Apple and Nintendo charge you to enter the ecosystem, but they do not keep the meter running. Anthropic does: it has an increasingly closed garden, and it also forces you to use the API with OpenClaw, under a pay-per-use model that is reasonable given Claude's demand.

But the rest do allow it. What Anthropic has done clashes with what other AI companies are doing, especially Chinese startups. The creators of Kimi, Minimax, GLM or the recent Xiaomi MiMo have no such policies: you can sign up for their very cheap monthly plans and use their models with OpenClaw without problems and with barely any limits. It is true that these models are not as capable as Claude, but the contrast in behavior is still striking.

In Xataka | OpenClaw changed the rules of the AI race. Technology companies already have their answer: copy it

China says it has built its largest data center. And it confirms that its problem lies precisely in the chips

China has just switched on its new technological pride in Shenzhen: an AI cluster delivering 14,000 petaflops, built entirely with Huawei Ascend 910C chips. The city has presented it as the first computing center at the 10,000-card scale built with completely domestic technology. It is an undeniable milestone, but put in context it is also an alarm signal and a dose of reality.

Why it matters. The Shenzhen cluster, with all its rhetoric of technological sovereignty, represents about 1% of the capacity of the largest US data center in operation today. In other words: China has built, with great institutional effort, what OpenAI already had available to train GPT-4 in 2022. The gap is not a question of ambition (China has it), capital (it has that too) or energy (it certainly has that). It is a chip issue: what they are capable of manufacturing, and in what volume, today.

Between the lines. The Shenzhen government statement highlights energy-efficiency metrics and occupancy rates of 92%. That is genuinely good data. But the selection of indicators (the cherry-picking) says a lot through what it omits: there are no direct comparisons with the NVIDIA H100 clusters that colonize the data centers of Microsoft, Google or Amazon. Publishing only what you have is also a way of not publishing what you lack.

The context. At this point nobody doubts that China lacks neither electricity, nor engineers, nor money to build large-scale AI infrastructure. What is still missing, despite the advances, are the chips. Export restrictions imposed by Trump have cut off access to advanced semiconductors from NVIDIA and TSMC, and that has forced China to accelerate its own ecosystem. Huawei has responded with the Ascend 910C, a capable chip that still has limitations in performance and, above all, in production volume. If wafers were not in short supply, this data center would be a hundred times larger.

Yes, but. Can China close that four-year gap before it grows even bigger? The answer depends almost entirely on how much its domestic semiconductor industry manages to scale, and on whether Western sanctions manage to stifle that process. For the moment, Shenzhen is celebrating an achievement as undeniable as the fact that, in the eyes of Silicon Valley, it is still living in 2022.

Featured image | Huawei

In Xataka | Memory prices have started to fall in some markets. There is still a long way to go to close the AI crisis
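The orders of magnitude above can be checked with the article's own numbers. A quick sketch (the per-card figure and the implied US reference are simple derivations from the reported totals and the ~1% claim, not independently reported data):

```python
# Sanity-checking the reported figures for the Shenzhen cluster.
cluster_pflops = 14_000     # total reported capacity, in petaflops
cards = 10_000              # reported number of Ascend 910C cards

pflops_per_card = cluster_pflops / cards
print(f"Per-card throughput: {pflops_per_card:.1f} petaflops")

# The article puts the cluster at ~1% of the largest US data center
# in operation, which implies a US facility on the order of:
us_reference = cluster_pflops / 0.01
print(f"Implied US reference: {us_reference:,.0f} petaflops")
```

That is the "hundred times larger" gap in a single division: same ambition, two orders of magnitude apart in deployed compute.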

Glovo's ERE for 750 workers confirms the profitability crisis of delivery in Spain

Glovo has opened the consultation period for an ERE (a collective layoff procedure) that will affect a maximum of 750 delivery workers in more than 60 locations throughout Spain. The official reason is that the delivery model with salaried employees is not profitable in a large part of the territory. However, unions like CCOO had spent months denouncing that the company was already carrying out a "covert ERE" through a continuous trickle of disciplinary dismissals under questionable justifications.

Why it matters. This decision comes just eight months after Glovo completed its adaptation to the Rider Law, regularizing the delivery drivers who until then had worked as self-employed contractors. The adjustment shows the platform's difficulties in sustaining a profitable logistics model once it was forced to abandon the self-employed scheme and assume the labor costs of the Workers' Statute.

The background. Glovo was the last major platform to comply with the Rider Law, which was approved in 2021 but whose effective application advanced in fits and starts, between fines and institutional pressure. In July 2025 the company regularized its delivery drivers (more than 13,000 throughout Spain) in the face of the imminent threat of criminal proceedings, which opened the door to prison sentences for its leadership for widespread fraud. What Glovo had to concede then, it is cutting now.

Between the lines. The company does not directly blame the Rider Law. It points out that its direct logistics management model, the so-called Gen2, "has proven to be inefficient" in small and medium-sized municipalities, and that it is necessary to move to the Gen1 model, in which Glovo does not run the delivery operation. Translated: where the volume of orders is not enough to cover the costs of permanent employees, the platform transitions to a marketplace model (Gen1). That is, Glovo continues to operate the app and collect commissions, but delivery logistics are assumed by the restaurants themselves or by subcontracted companies.

In figures: 750 delivery workers affected by the ERE; more than 60 locations where service will be reduced or eliminated; and more than 800 cities where Glovo's operations continue as normal.

The big question. The underlying debate is no longer whether Glovo complies with the law (now, without a doubt, it does), but whether the delivery model it proposes can be sustainable with a salaried workforce in markets where orders lack the volume of large cities. COVID pushed home-delivery consumption to levels that have since normalized, and platforms have spent years searching for the balance point that lets them make money without resorting to questionable working conditions. In many corners of Spain, that point has not yet appeared.

Yes, but. Yolanda Díaz has responded to the announcement by rejecting any "blackmail" and promising that the Labor Inspectorate will ensure compliance with the law. She is right that the law must be followed. But the ERE Glovo has announced does not breach it: reducing activity where there is no business is a legitimate decision. The underlying problem lies in the structural change of the sector: delivery was born and based its profitability on a model of self-employed workers, a formula Glovo defended to the end, arguing it gave the service flexibility.
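To see why volume decides between Gen2 and Gen1, here is a toy break-even model. Every figure is an invented assumption, since Glovo publishes no such numbers; the point is only that profit flips sign with order volume:

```python
# Toy per-city break-even model for a salaried (Gen2-style) delivery fleet.
# Every figure is an invented assumption; Glovo publishes no such numbers.

COST_PER_COURIER = 1_900.0    # assumed monthly salary + social contributions (EUR)
MARGIN_PER_ORDER = 3.0        # assumed gross margin per delivered order (EUR)
ORDERS_PER_COURIER = 800      # assumed orders one courier can cover per month

def monthly_profit(orders: int) -> float:
    """Profit of running a salaried fleet sized to the city's order volume."""
    couriers = max(1, -(-orders // ORDERS_PER_COURIER))  # ceiling division
    return orders * MARGIN_PER_ORDER - couriers * COST_PER_COURIER

for orders in (400, 1_600, 8_000):   # small town, mid-size town, big city
    print(f"{orders:>5} orders/month -> {monthly_profit(orders):>9,.2f} EUR")
```

With these made-up numbers, a salaried fleet needs just over 630 orders per courier per month to break even, which is the shape of the logic behind retreating to a marketplace model in low-volume municipalities.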
Now the real challenge is to demonstrate whether the business remains economically viable when platforms must assume the structural costs of a salaried workforce, as current legislation requires.

Featured image | Nursultan Abakirov

In Xataka | The death of cooking at home: inviting someone to "dinner" increasingly means inviting them to order from Glovo

Sony closes its games off from PC while Capcom confirms that half of its sales come from there. A contradiction with a reason: the PlayStation 6

Sony has decided that titles like 'Ghost of Yotei' will not come to PC, ending six years of multiplatform strategy. The timing makes the twist striking: Capcom has just confirmed that 50% of its sales already come from PC, and it expects that number to keep growing. Two giants of the Japanese industry, two radically opposite bets on where the future of the business lies.

The breakup. After weeks of rumors pointing in that direction, Bloomberg confirmed that Sony has canceled plans to bring its big single-player exclusives to PC. 'Ghost of Yotei', one of the most celebrated PlayStation releases of 2025, is joined by 'Saro', the next Housemarque game. The multiplatform experiment that Sony started in 2020 with 'Horizon Zero Dawn' has lasted six years.

The withdrawal is not total. According to sources consulted by Bloomberg, games as a service (the imminent 'Marathon', 'Marvel Tokon' or 'Horizon: Hunter's Gathering') will keep their multiplatform launches, because their business model depends on building the widest possible player bases. 'Death Stranding 2: On the Beach' and 'Kena: Scars of Kosmora' will still be ported to PC this year, as they are titles from third-party developers published under the PlayStation umbrella.

A calm rhythm. Sony's release cadence on Steam was never that of a fierce competitor: titles arrived between one and three years after their console debut, which meant the ports had to compete against console versions that had already been discounted, on top of the relentless pace of new PC releases. These are some of the reasons why the numbers may not have worked out for Sony: 'Ghost of Tsushima' reached a peak of 77,000 simultaneous players, but 'Horizon Forbidden West' and 'The Last of Us Part II Remastered' did not exceed 40,000 and 30,000 respectively.

The retreat. This leaves some of Sony's recent decisions up in the air: in 2021 it acquired Nixxes Software, a Dutch studio specialized in porting games to PC (the 'Tomb Raider' trilogy, 'Deus Ex: Mankind Divided'), and it is not clear what its future will be under the new strategy. Furthermore, on the 19th Bluepoint Games closed, the studio responsible for the remakes of 'Demon's Souls' and 'Shadow of the Colossus', with around 70 employees affected. PlayStation Studios chief Hermen Hulst spoke internally of an "increasingly challenging industry environment", with rising development costs and slowing growth. Bluepoint had been working on a 'God of War' project in the form of a game as a service.

Printing money. In February 2025, Shuhei Yoshida, former president of PlayStation Studios, described the strategy of bringing games to Steam as something akin to "printing money", because the cost of a port is only a fraction of that of developing an original game.

Why the change in strategy, then? The PC numbers were not bad in absolute terms: Sony's five best-selling titles on Steam together surpassed 43 million copies and generated more than $1.2 billion in gross revenue for the company. The problem may well be the value they subtract from the hardware: they generate income but no loyalty to the ecosystem, because a PC gamer does not need to buy a PlayStation console. In the end, as Bloomberg pointed out, PC ports represent less than 2% of Sony's total annual revenue. And there is something else: apparently there is real concern about the design of the next Xbox, whose architecture is closer to a Windows PC than to a conventional console, with the possibility of supporting multiple stores, including Steam.
The PlayStation exclusives available on Steam could run perfectly on such an Xbox, which would mean Sony spending money to benefit, quite literally, its most direct competitor.

Capcom prefers the PC. A day before Bloomberg published Sony's new policy, Capcom released the Q&A transcript of its fiscal third-quarter results. An investor asked about the PC strategy around 'Resident Evil Requiem' and its technical commitment to the platform. The company's answer was that PC already represents approximately 50% of Capcom's total units sold, and the internal expectation is that the figure will keep growing. Back in October 2021, COO Haruhiro Tsujimoto had stated in an interview that the company's goal was for PC to become its main platform, with the console/PC sales split reaching 50-50 in 2022 or 2023. The prediction has been fulfilled with only a couple of years' delay. Revenue generated via Steam grew by 61.1% between April 2024 and March 2025, and in the same period PlayStation's share of Capcom's total revenue fell below 10%.

The differences. Of course, there are differences between Capcom and Sony to take into account. Capcom has no hardware to sell, and its only incentive is to maximize the distribution of its software catalog across all platforms. Sony, on the other hand, manages a complete ecosystem that includes a console, a digital store, subscriptions and accessories, and every decision has to be measured not only in direct sales of a title but in its impact on the value of the hardware and loyalty to the ecosystem. They are different business structures.

Things that happen. All this comes in the same year that Valve announced its new Steam Machine, that the Xbox-branded ROG Ally corroborated, after the Steam Deck, that PC hardware can reach the living room, and that Microsoft officially embraced the idea that its games do not need an Xbox to be played. What justifies buying a dedicated console in 2026? Sony is clearly setting its sights on the most successful name in the industry right now: Nintendo. If you have to sell hardware, the key is exclusivity and making your device essential. It remains to be seen whether the PlayStation 6 will be that essential.

In Xataka | PlayStation 6: all the information we know (or think we know) so far

Ransomware has exploded in Spain and the data confirms it

Ransomware is one of those attacks nobody wants to suffer. Companies fear it because, if they do not manage to contain it in time, they can be paralyzed for days, weeks or even months, with losses running into the millions as a consequence. Nor is it foreign to private users: we will not always be willing, or able, to pay a ransom, which in many cases means losing our files. Yet the threat keeps advancing, gaining presence in our environment and forcing us to stay more alert than ever.

Spain, among the most affected countries. The Cyber Threat Intelligence team at Thales, one of the largest European defense and cybersecurity groups, places Spain as one of the most attractive targets for ransomware actors. According to its report, shared via email, the country recorded 164 attacks in 2025, with 79 in the first half of the year and 85 in the second. The most relevant data point comes when putting those figures in context: Spain ranked sixth in the world in number of attacks during the second half of the year.

A trend pointing upward. Thales experts also point out that ransomware attacks in Spain grew by 7.6%, an increase that is part of a general rise in cyber activity. Behind it are factors such as geopolitical tensions, the evolution of ransomware tooling, the increasingly rapid exploitation of vulnerabilities and the interconnection of threats across critical sectors. All of this creates a scenario with more mature, organized and hard-to-contain actors.

The global context changes the scale. Although the situation in Spain invites vigilance, the panorama is transformed when expanded to the international level. The United States was the most affected country in the second half of 2025, with 3,946 attacks, followed by Canada with 411 and Germany with 296. The weight of the United States is especially striking: it accounted for 51.23% of the attacks recorded in that period, which shows a very unequal distribution of this criminal activity.

A particularly exposed sector. On a global scale, and always according to Thales, the financial sector remains among the main targets. Banks, payment institutions and fintech companies face not only ransomware campaigns but also persistent threats from advanced cybercriminals, state-sponsored actors and hacktivist groups. In 2025 the sector accumulated 533 ransomware attacks, the highest number among the industries analyzed.

The report also identifies the most active groups. Qilin led the activity with 60 attacks, followed by Akira with 29 and Inc Ransom with 17. To them were added two operations that emerged in the second half of the year, The Gentlemen, with 13 attacks, and Sinobi, with 10, which managed to place themselves among the five most active groups against the financial sector.

Consequences beyond the numbers. When a ransomware attack manages to overcome an organization's defenses, the impact stops being statistical and becomes tangible. Internationally, Jaguar Land Rover was forced to paralyze its factories for more than a month after an incident of this type. In Spain, several town councils have also suffered similar attacks, with service interruptions and operational problems that show to what extent these threats have ceased to be a theoretical risk and become a very real challenge.

Images | Xataka with Gemini | Thales

In Xataka | How often should we change ALL our passwords, according to three cybersecurity experts
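The shares reported above are easy to cross-check. A quick sketch using only the figures cited in the piece (the implied global total is a derivation, not a number from the Thales report):

```python
# Cross-checking the Thales figures for the second half of 2025.
us_attacks = 3_946
us_share = 0.5123                 # the report's 51.23% figure for the US

implied_global_total = us_attacks / us_share
print(f"Implied global attacks in H2 2025: {implied_global_total:,.0f}")

# Country shares implied by that same total:
for country, attacks in {"United States": 3_946, "Canada": 411,
                         "Germany": 296, "Spain": 85}.items():
    print(f"{country:>13}: {attacks / implied_global_total:6.2%}")
```

The derived total comes out at roughly 7,700 recorded attacks worldwide, which puts Spain's 85 in the second half at around 1% of global activity: sixth place, but a long way behind the United States.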

A massive study confirms the direct link between ultra-processed foods, heart damage and mortality

For years, science has been warning us that ultra-processed foods are a danger because of the effects they have on our bodies. What began as a suspicion about nutritional quality has become a statistical certainty: ultra-processed foods not only make you fat, they also hit the cardiovascular system directly.

With figures. A new study conducted at Florida Atlantic University (FAU) and published just a few days ago in The American Journal of Medicine puts an alarming figure on the table: high consumption of these products is linked to a 47% higher risk of cardiovascular disease. And it is not a study based on speculation: the authors analyzed data from the National Health and Nutrition Examination Survey for the period 2021-2023, with a sample of 4,787 American adults.

How it was done. The methodology is robust because it does not simply look at what participants eat: the researchers adjusted the results for confounding variables such as age, sex, race, income level and, crucially, smoking. With the effect of tobacco and socioeconomic situation removed from the equation, the result was that those who consume the largest amounts of ultra-processed foods are almost 50% more likely to develop heart disease than those who consume the least.

It is not an isolated case. If this study were the only one, we might be skeptical. The thing is, it never rains but it pours: the FAU research confirms a trend already seen in previous macro-studies, consolidating what science calls a dose-response relationship: the greater the amount of ultra-processed food, the greater the damage. There is the French precedent of a famous study of the NutriNet-Santé cohort, with more than 100,000 participants, which showed that an increase of just 10% in the ultra-processed share of the diet is associated with a 12% increase in total cardiovascular risk.

There is more. A meta-analysis published in 2024, covering more than a million participants, found a linear relationship in which each additional daily serving of ultra-processed food raises the risk of cardiovascular events by 2.2% (a figure we unpack in the short sketch at the end of this piece). And if we still want more evidence, in Australia a 25-year follow-up of almost 40,000 people linked high UPF consumption with a 19% higher cardiovascular mortality.

The new tobacco. The most striking thing about this new research is not just the numbers, but the comparison the authors draw with tobacco and the public health crisis it generated in the 20th century. While anti-smoking campaigns managed to drastically reduce deaths from lung cancer and heart disease, the food industry has been filling shelves with products classified as ultra-processed.

Why? The mechanism behind this 47% elevated risk appears to involve systemic inflammation and altered lipid metabolism. Bear in mind that industrial processing generates contaminating byproducts such as acrylamide and uses additives that increase oxidative stress in the body. Essentially, the body loses its ability to "cleanse" itself at the cellular level: antioxidant enzymes decrease and free radicals damage the inner lining of blood vessels, accelerating the formation of atherosclerotic plaque. This combines with a nutritional profile of five or more ingredients, rich in added sugars, saturated fats and additives but poor in fiber and micronutrients.
A trio that directly impacts blood pressure and insulin resistance, increasing the predisposition to diabetes.

Images | Darko Trajkovic

In Xataka | Making extra rice is no longer a mistake: cooling and reheating it can reduce its calories, according to some nutritionists
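As promised above, a minimal sketch of what that linear dose-response implies. Extrapolating the per-serving figure to several servings is our own naive reading of "linear"; the meta-analysis itself only reports the 2.2% per additional serving:

```python
# The 2024 meta-analysis cited above reports a linear dose-response:
# each additional daily serving of ultra-processed food is associated
# with a 2.2% higher risk of cardiovascular events.

RISK_INCREASE_PER_SERVING = 0.022   # 2.2% per extra daily serving

def relative_risk(extra_servings_per_day: float) -> float:
    """Relative risk vs. baseline, assuming the linear relationship holds."""
    return 1.0 + RISK_INCREASE_PER_SERVING * extra_servings_per_day

for servings in (1, 3, 5, 10):
    print(f"+{servings:>2} servings/day -> relative risk {relative_risk(servings):.2f}")
```

Even under this simple linear reading, ten extra daily servings land at a relative risk of about 1.22, in the same ballpark as the 19% higher cardiovascular mortality seen in the Australian cohort.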

Samsung confirms the date of its next Unpacked in a year full of challenges

The Samsung Galaxy S26 phones are just around the corner. We could already intuit as much, but now it is official: Samsung has just confirmed the date of its next Galaxy Unpacked, an event that will take place at the gates of the Mobile World Congress (although far away from it) and in which AI will once again be the main protagonist.

When and where. Galaxy Unpacked will take place on February 25 in San Francisco, United States. The conference starts at 7:00 p.m. Spanish peninsular time and can be followed through the usual channels, among them, of course, the live coverage we will do from Xataka.

What do we expect? Barring surprises, we will most likely meet the new Samsung Galaxy S26. If nothing changes, it should be a family of three devices, with the Ultra model as the flagship. Even though AI PCs have not finished taking shape, Samsung has already started talking about the "AI Phone" and says its new smartphones represent "a new stage in the era of artificial intelligence, where technology becomes truly personal and adaptable". We will see what that means.

A year of challenges. Samsung has a tough year ahead. The Galaxy S are no longer the only exponents of the most premium high end, and Chinese firms are pushing hard: OPPO, Xiaomi, Motorola and Honor, to give just a few examples. In pure and simple specifications, the whole high-end segment plays in more or less the same league, although Samsung starts with a clear advantage: brand positioning and ecosystem. As far as technical specifications go, last year we missed a more notable leap in photography and battery. Seeing how Chinese brands are betting on silicon-carbon batteries and 200-megapixel telephoto cameras, this year those two areas look set to become even more important.

Without forgetting the component crisis. Another important question will be how Samsung has dealt with components, courtesy of AI. We are immersed in a RAM and storage crisis, to the point that we are again seeing mobile phones with four gigabytes of RAM and less ambitious configurations. And it matters, because RAM goes far beyond keeping more apps in the background. On the other hand, it must be said, every cloud has a silver lining: Samsung is one of the largest RAM manufacturers, and that branch of the business is frighteningly strong. The results for the fourth quarter of 2025 speak for themselves: a year-on-year profit increase of 208% and shares skyrocketing.

Cover image | Xataka

In Xataka | With the consumer segment drowning, Samsung is the first to manufacture HBM4 memory. And it will be for NVIDIA, of course

The largest clinical trial of AI in breast cancer screening confirms that it detects more and reduces the radiologist's workload

With the arrival of artificial intelligence, medicine was always going to be one of its obvious applications, one that could mark a genuine revolution. What was missing was the definitive proof that it had real-world use. That proof has just arrived in an article published in The Lancet, which shows how AI can help us detect more breast cancers and even reduce the ones that are most dangerous.

The screening. Unfortunately, in Spain we have fresh in our minds, given how recent it was, the problems with the screening programs in Andalusia. Despite that controversy, this type of screening is very useful and significantly reduces the number of women who die from breast cancer that was not detected in time. The aim now is to go a little further, integrating technology so that fewer tumors slip past the human eye because of their small size.

Interval cancers. They are, without a doubt, the great enemy of radiodiagnosis in screening mammography. The term refers to tumors detected between one check-up and the next, and there are different reasons for their appearance: either the tumor grows very quickly (and can be far more malignant), or it was missed on the previous screening mammogram because of its small size. This is a serious problem, since the whole basis of screening is to catch cancers at the earliest stages, when they respond better to more conservative treatments.

The study. The MASAI trial (Mammography Screening with Artificial Intelligence) has shown that the use of AI reduces these cases drastically. The figures are quite promising: a 12% reduction in the interval cancer rate in the two years after screening, from 1.76 cases per 1,000 women screened to 1.55. A difference that may look very small, but in public health and oncology it is a real success: cutting by 12% the tumors that usually "escape" is a major clinical advance.

Less work. Until now, the standard method for analyzing these tests was double reading: two radiologists reviewed each mammogram independently to make sure nothing was missed. An ideal safety net, but one that consumes an immense amount of human resources in health systems. The trial therefore proposes a paradigm shift based on intelligent triage, which can be summarized in three points (sketched in code below): the AI first analyzes the mammogram and assigns it a risk score from 1 to 10; if the image is categorized as low risk, a single radiologist reviews it, confirms it is clean and closes the case; if the risk is high, the image goes through the double-reading system, with the AI marking the most suspicious areas where a lesion may be.

The result. With this new algorithm, the study reports a 44% reduction in the reading workload for professionals, letting doctors focus on the images that are genuinely doubtful. And no, working less did not mean working worse. On the contrary: the AI arm of the study detected 29% more clinically relevant cancers without increasing the rate of false positives (the great fear of over-diagnosing healthy patients).

Complement, not replacement. This is something the study itself highlights: AI has not arrived to fire radiologists.
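A minimal sketch of that triage flow, assuming the 1-10 risk score described above. The threshold value and the routing details are illustrative assumptions, not taken from the MASAI protocol:

```python
# Minimal sketch of MASAI-style triage. The cut-off and the reader
# counts are illustrative assumptions, not the trial's actual protocol.

HIGH_RISK_THRESHOLD = 9   # hypothetical cut-off on the 1-10 AI risk score

def triage(ai_risk_score: int) -> dict:
    """Route a mammogram according to the AI's 1-10 risk score."""
    if ai_risk_score >= HIGH_RISK_THRESHOLD:
        # High risk: double reading, with AI-marked suspicious regions.
        return {"readers": 2, "ai_markup": True}
    # Low risk: a single radiologist confirms the image is clean.
    return {"readers": 1, "ai_markup": False}

# Most exams land in the low-risk branch, so total readings drop sharply
# versus reading everything twice (the trial reports a 44% reduction).
scores = [2, 3, 1, 9, 5, 10, 4, 2, 7, 3]
readings = sum(triage(score)["readers"] for score in scores)
print(f"{readings} readings vs {2 * len(scores)} under universal double reading")
```

The key design point is that the algorithm only routes and flags; as the trial emphasizes, no exam is closed without at least one human reader.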
The MASAI method is only "decision support": the AI prioritizes, orders and flags, but the final clinical decision is always the doctor's, and therefore remains in human hands. With the publication of these final results in The Lancet, the validation cycle of one of the most important radiology trials of the decade comes to a close. The next step is no longer asking whether AI works in breast cancer screening, but how long public health systems will take to implement it, giving radiologists one more tool to be more precise and methodical.

Images | National Cancer Institute

In Xataka | A Spanish milestone against pancreatic cancer: we are one step closer to eradicating it, but there is still a long way to go

After a weekend of floods, deaths and evacuations, AEMET confirms that calm is coming for the New Year

Málaga, Granada, Murcia and the south of the Valencian Community have been through a complicated weekend of floods, deaths and displaced people. In fact, in some areas of the southeast the worst has not arrived yet. And people are tired: "I feel like selling everything and leaving town: the rains are increasingly torrential", said a resident of Cártama (Málaga). And yet, we will forget again.

We will start the year cold, yes, but also with a strong anticyclone, with fog and frost. There will be no rain except in parts of the south and east and in the Balearic Islands; with everyone out and about on New Year's Eve, that is good news. However, the models are starting to suggest that the New Year may also bring a change of pattern.

A change of pattern? Starting Thursday, as Duncan Wingen explains, the models contemplate "the rise of the Atlantic ridge towards Iceland and Greenland". The "Atlantic ridge", as experts call it, is a tongue of high pressure at altitude that bulges over the Atlantic and extends towards high latitudes: a wall that diverts the westerly flow.

What it means for Spain. It is hard to say, truth be told. The effect on the peninsula depends on where the ridge ends up sitting. Either it closes the Atlantic corridor and we get a few days of stable, dry and cold surface weather; or it favors the entry of cold air from the north, with falling temperatures, a wintry feel and snow; or, finally, the storm corridor opens, with the consequent intrusion of Atlantic fronts from the ocean, that is, rain and a slightly milder climate.

What should we expect? It is a great unknown: an enormous one. And given that this is the key phenomenon for understanding what will happen over the coming weeks in southern Europe, it matters, so we have to keep monitoring it closely. Euro-Atlantic weather regimes modulate temperatures, energy demand and weather alerts; the Atlantic ridge is a piece of that puzzle, and many things depend on it. For now, though, all we are set to see is deep, wintry cold.

Image | PolarWx

In Xataka | La Niña is going to be meteorologically "less intense" than we expected. And that actually hides a problem.

Spotify has suffered the largest music theft in history. One that confirms that most of its catalog is never listened to

Anna’s Archive was already known by literature lovers, who turned to this repository to be able to access books of all kinds without having to pay for them. Now they want to achieve the same thing with music, and they have taken a colossal and disturbing step: stealing practically the entire Spotify catalog. What is Anna’s Archive. Anna’s Archive project appeared on the scene in late 2022, shortly after legal pressure managed to knock down the Z-Library platformone of the largest websites for downloading free books. The platform works as a metasearch engine that allows you to find books and then download them. Anna’s Archive does not host these files—which, according to the project, exempts it from legal responsibility—and links to different anonymous download providers, which is where users can obtain them. Until now the platform focused on books, but that has changed. The biggest music theft in history. In a post published on his blog official this weekend, those responsible for Anna’s Archive indicated that they have made “a backup copy” of Spotify that includes both metadata and music files. Not only that: it is indicated that they are distributing all this information through torrent files, and the total download takes up 300 TB of data “grouped by popularity.” 86 million songs. They call it the first music “preservation archive” in history and it has 86 million music files. Although that figure is only 37% of the songs in Spotify’s entire catalog, according to Anna’s Archive they account for 99.6% of listening on Spotify. And here there are two important things: on the one hand, music as such. And on the other hand, the metadata that surrounds that music, and that offers very interesting information about Spotify’s music catalog. The top 10,000 popularity. Thus, at Anna’s Archive they wanted to organize that archive based on “popularity”, a metric that they use in Spotify to order the songs that are listened to the most and how recent those plays are. Those responsible for Anna’s Archive have compiled a gigantic list with the 10,000 most popular songs according to this metric. Lady Gaga, Bad Bunny and Billie Eilish occupy the top three positions, for example. This graph reveals how song popularity demonstrates the long tail phenomenon. Only 62 songs exceed 90 points. Three out of four songs are not heard. By grouping songs by popularity, the metadata reveals and confirms the traditional long tail phenomenon. More than 70% of the songs in the Spotify catalog are barely listened to (less than 1,000 plays), and there are so many that are popular or that they had to cut the gigantic file (it would have been 700 TB) to end that representation of 99.6% of songs that have minimal popularity on Spotify. That does not mean that they are better or worse, be careful: it just means that they have been heard more or less on the platform. We all hear (more or less) the same thing. Most listens come to songs with popularity between 50 and 80, and here comes an expected figure: of the 86 million songs, only 210,000 exceed 50 popularity (0.1%). Or what is the same: almost everyone basically listens to a very small set of songs compared to the size of the catalog. How much is each song listened to? Those responsible for Anna’s Archive claim that it is possible to estimate the total number of views per song thanks to popularity. 
They gave the example of the top three: 'Die With a Smile' (Lady Gaga and Bruno Mars), 3,075 million plays; 'Birds of a Feather' (Billie Eilish), 3,137 million plays; and 'DtMF' (Bad Bunny), 1,124 million plays. Between the three of them, they accumulate as many listens as all the songs ranked between number 20 million and number 100 million combined. Once again, the long tail in action.

Analysis everywhere. This metadata is very useful, and Anna's Archive has produced a unique report drawing conclusions from the collected data: the most common song length is around 3:30 minutes; there are numerous duplicates per song (licenses, versions and so on); certain genres are more popular among artists; and most of the songs on Spotify are singles, not part of an album. This metadata is a genuine treasure for market researchers.

Downloads (for now) only as large torrents. Anna's Archive has published almost none of the torrents so far, but it has already indicated how it will offer those 300 TB. First, the metadata in a 200 GB file, which is already being shared by about 200 people. Then the music, in several batches organized by popularity. Finally, some additional metadata and content, such as album art. Time will tell whether those 86 million songs end up available on some kind of platform that links them for individual download. That does not seem to be Anna's Archive's intention, at least for now, and for the moment the metasearch engine remains strictly focused on books.

What Spotify says. As TorrentFreak reports, Spotify has launched an investigation and, as a result, has "identified and deactivated the accounts of malicious users who were participating in illegal scraping activities". It has also implemented new measures to prevent these types of attacks and "is monitoring suspicious behavior".

Image | Sumeet B

In Xataka | The chaos of streaming is fueling a phenomenon we thought was in decline: downloads are on the rise
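The catalog arithmetic in the piece can be reproduced directly from the figures above. A short sketch (the full-catalog size is derived from the 37% claim, not reported separately):

```python
# Reproducing the long-tail arithmetic from the Anna's Archive dump.
archived_songs = 86_000_000      # files in the 300 TB archive
catalog_share = 0.37             # the archive is said to be 37% of the catalog
listen_share = 0.996             # ...yet it covers 99.6% of all listening

full_catalog = archived_songs / catalog_share
print(f"Implied full Spotify catalog: {full_catalog / 1e6:.0f} million songs")

# Only 210,000 songs exceed popularity 50:
print(f"Share of archive above popularity 50: {210_000 / archived_songs:.3%}")

# The ~63% of the catalog left out accounts for just 0.4% of plays, so an
# average excluded song gets a tiny fraction of the catalog-wide mean:
print(f"Excluded-song listening rate: {(1 - listen_share) / (1 - catalog_share):.1%} of average")
```

The implied full catalog lands around 232 million songs, and an average excluded song gets well under 1% of the catalog-wide mean number of plays: the long tail in two divisions.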
