Pokémon Go brought millions of players to the streets. Millions of players who were actually training an AI

In 2016, Pokémon Go arrived on the mobile market: a spinoff of the popular entertainment franchise with a very interesting premise: capture Pokémon in your city using your cell phone’s GPS. The game caught on very quickly and became a phenomenon. Almost 10 years later, Niantic, its developer, has taken advantage of all the data that millions of players have been giving it to guide delivery robots through cities. Its first client: Coco Robotics. The business that no one saw coming. The amount of information that can be obtained from Pokémon Go is truly impressive, since millions of people have voluntarily traveled the world with their mobile phones in order to (digitally) capture these creatures. And each game session leaves an invisible trace: millions of photos of buildings, squares and streets labeled with very precise coordinates, which would not have been possible without the information provided by its users while playing. Five hundred million people installed the app in its first 60 days, according to Brian McClendon, CTO of Niantic Spatial. Eight years on, the game still had more than 100 million players in 2024, according to data from Scopely, the company that acquired Pokémon Go from Niantic that same year. The problem that GPS does not solve. GPS becomes rather unreliable when it has to operate on sidewalks and the parts of the urban fabric that are not roads. Signals bounce between skyscrapers, tunnels and viaducts, and the margin of error can be up to 50 meters, enough to place a robot on the wrong sidewalk or on the next street over. “The urban canyon is the worst place in the world for GPS,” says McClendon. Coco Robotics, a startup that operates nearly 1,000 delivery robots in cities such as Los Angeles, Chicago, Miami and Helsinki, knows this well, as its devices operate precisely in those dense areas where the signal is never reliable. This is where Niantic Spatial comes in. 
In May 2024, Niantic spun off its spatial and artificial intelligence division and created Niantic Spatial as an independent company. Its core product is a visual positioning system (VPS) trained on 30 billion urban images, capable of placing a device on the map with a precision of a few centimeters from a handful of photos of the surroundings. The key is that these images come from millions of points of interest in Pokémon Go and Ingress (the company’s pre-Pokémon Go AR game, released in 2013). In these hugely popular games, players have for years been directed to photograph the same places from different angles, at different times and in different weather conditions. “We had over a million locations around the world where we can locate you to the nearest centimeter and, more importantly, know where you are looking,” explains McClendon. What this changes for robots. Coco Robotics has been the first partner to adopt this technology. Its robots, equipped with four cameras, will combine conventional GPS with Niantic Spatial’s VPS to position themselves more accurately, especially in pickup areas in front of restaurants and in deliveries to the customer’s door. According to Zach Rash, CEO of Coco, the goal is to meet promised delivery times and not depend on margins of error that in practice mean arriving late or at the wrong place. The model already solves one of the most practical challenges of urban robotics: performing well where conventional systems fall short. Beyond delivery. John Hanke, CEO of Niantic Spatial, talks about what he calls a living map: a hyper-updated simulation of the real world that is refreshed as robots move through it and provide new data. The idea is not only that the maps are more accurate, but that they are designed for machines, not people. This involves adding descriptions of each element of the environment, its properties, its context. “This era is about building useful descriptions of the world for machines to understand,” says Hanke. 
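The GPS-plus-VPS combination described above can be pictured as a robot preferring whichever position fix has the smaller estimated error. The sketch below is purely illustrative, under our own assumptions: the `Fix` and `best_fix` names are invented for this example, and neither Niantic nor Coco has published how their actual fusion works; only the accuracy figures (tens of meters for urban-canyon GPS, centimeters for VPS) come from the article.

```python
# Illustrative sketch (not Niantic's or Coco's actual software) of a
# robot preferring a centimeter-level VPS fix over raw GPS whenever
# camera-based localization succeeds.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fix:
    lat: float
    lon: float
    accuracy_m: float  # estimated error radius in meters

def best_fix(gps: Fix, vps: Optional[Fix]) -> Fix:
    """Pick the positioning source with the smaller error estimate."""
    if vps is not None and vps.accuracy_m < gps.accuracy_m:
        return vps
    return gps

# Urban-canyon GPS (~50 m error, per the article) vs. a VPS fix.
gps = Fix(34.0522, -118.2437, accuracy_m=50.0)
vps = Fix(34.05221, -118.24372, accuracy_m=0.05)
print(best_fix(gps, vps).accuracy_m)  # 0.05
```

When the cameras can't match the environment (say, at night in an unmapped alley), `vps` is `None` and the robot simply falls back to GPS.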
In that sense, Niantic Spatial differs from other bets on world models, such as those of Google DeepMind or World Labs, which focus on generating virtual environments. Niantic Spatial wants to replicate the real world as it is. In Xataka | OpenClaw changed the rules of the AI race. Technology companies already have their answer: copy it

If the controversy is that AI steals works in its training, the European Union has the solution: license them

A few weeks ago the Washington Post published this image of the “Panama Project”: a warehouse with hundreds of thousands of books awaiting their turn to be scanned, and destroyed in the process. It is part of an internal Anthropic program to train its AI and the result of tens of millions of dollars in purchases to digitize all those works without permission from their authors. They are not the only ones who “borrow” copyrighted content to train their artificial intelligences, and the European Union is clear about one thing: stop stealing protected content and properly license the works used to train AI. AI companies, for their part, defend themselves by arguing that nobody is thinking about the small players. Europe is clear: if you want to train AI, pay the author. It is curious how the entertainment industry and national regulators shook hands at the beginning of the 2000s with those ads of “You wouldn’t steal a purse. You wouldn’t steal a car. Don’t steal a movie.” They portrayed copying a CD or downloading a movie as if you were breaking into the Pentagon’s systems. Years later, that same industry turns a deaf ear to what big technology companies are doing to train AI. The Washington Post piece states that others such as Meta, Google and OpenAI had also joined the race to obtain data in bulk for their models. There are glaring examples, like the 81.7 TB of copyrighted books that Meta downloaded, or OpenAI using animation from all kinds of studios to train its AI (earning reproaches from Ghibli and other Japanese studios, while complaining that DeepSeek has looted ChatGPT). Given this context, the European Parliament has grown tired of it all and has done one of the things it does best: legislate. In this case, it makes perfect sense for Europe to take this step: Parliament issued a non-binding report that urges the European Commission to develop rules setting minimum standards for these AI companies. 
“Generative AI should not operate outside the rule of law.” Basically, if companies use protected content for training, they must license it and also compensate the authors. Under the title “Protecting creative work with copyright in the age of AI”, the European Parliament demands a series of measures beyond licensing the works. They are the following: it calls for the transparent and remunerated use of protected content to train generative AI; AI vendors are expected to acknowledge and pay for the copyrighted work they used to train their systems; and it asks for measures so that rights holders can exclude their protected work from training. The reasoning of MEPs is that “generative AI should not operate outside the rule of law. If copyrighted works are used to train artificial intelligence systems, creators have the right to transparency, legal certainty and fair compensation.” The European Grouping of Societies of Authors and Composers, or GESAC, points in the same direction. In statements to Euronews, Adriana Moscoso del Prado, general manager of GESAC, says that “this vote adds to the growing recognition at the EU level of what is at stake. Innovation, equity and cultural sovereignty must go hand in hand.” AI companies fight back. The CCIA, the Computer and Communications Industry Association, countered that this is not a measure to protect artists, but rather “a compliance tax”: something that must be complied with no matter what and that goes against progress. The group argued that such a measure would not hurt large companies, but small ones. It says that many will have difficulty negotiating complex licensing agreements with major publishers, “holding back Europe’s digital competitiveness on the global stage,” and that what is really needed is to improve existing laws in the European Union, including the AI Act and the Copyright Directive. In any case, there is nothing on the table at the moment. 
As we say, it is an own-initiative report by Parliament and is not binding. The Commission can now decide whether to act on it or not, but it makes one thing clear: Parliament’s position on any future AI measures from the Commission. The problem is that generative AI has already plundered millions of copyrighted works on which it can build its next iterations. The software has tons of information to pivot on and can evolve in other areas, like stopping hallucinating, for example. And it is another example of the two speeds of this matter: the technology companies taking the first steps and the legislators trailing behind, seeing what can be done once the act they want to legislate was already carried out years ago. Images | Washington Post, Anti-Piracy Campaign (edited) In Xataka | The AI industry is only sustainable by violating copyright laws. So it’s trying to eradicate them

Tencent has a significant stake in US military training tools. Trump is going to stand up to it

The Trump administration is debating whether to force the Chinese giant Tencent to get rid of its stakes in the largest Western video game companies. At stake are Riot Games, Epic Games and Supercell (more than a billion players between them) and the Unreal Engine, used in military simulations. The ghost of TikTok returns, but this time the affected market is different. Why Tencent. Tencent is not only the largest video game company in the world. It is also the largest silent shareholder in the Western industry: it owns 100% of Riot Games, 28% of Epic Games and majority control of Supercell, the Finnish company behind ‘Clash of Clans’. To this we must add stakes in Larian, Remedy, Ubisoft and Discord, among dozens of other studios. For years, that capital flowed westward: the studios needed investment, Tencent had liquidity, and no one was looking for trouble. The White House gets suspicious. Washington, however, has had doubts for years. The Committee on Foreign Investment in the United States (CFIUS) began to review these investments during Trump’s first term, and the case became one of the longest in the history of the organization, passing through two administrations without reaching a clear resolution. What worries the White House is that video game platforms collect financial information, personal data and chat logs from hundreds of millions of users, many of them Americans. These databases are candy for any intelligence agency. The Epic case. The Unreal Engine adds an extra dimension in which the White House has a special interest. The engine does not only power video games like ‘Fortnite’; it is also used by defense contractors and the US military itself for military simulation and training. In fact, the country’s armed forces have worked directly with Epic for years on that front. That Tencent is a shareholder in the company that builds this technology is what turns this issue into a national security problem. 
So much so that in January 2025, the Pentagon formally classified Tencent as a company linked to the Chinese military. Tencent rejected that classification, but the Pentagon did not withdraw it. There are problems. During the Biden administration, the issue stalled over an internal disagreement that no one knew how to resolve: Deputy Attorney General Lisa Monaco defended forced divestment, but the Treasury Department preferred to keep the investments under data segregation protocols. Without consensus, the case was frozen. The cabinet meeting scheduled for March 4 was postponed due to scheduling conflicts. That same day, Tencent shares fell 1.72%. Parallels with TikTok. There are similarities, but also differences. With ByteDance, the US forced the creation of a new entity with 80% in the hands of US investors as a condition for operating there. The problem with Tencent is that it does not operate on American soil; rather, it is a shareholder in companies already established there. Getting rid of these stakes is not the same as closing an app: it is more a restructuring of private capital. The consequences in the Tencent case would go beyond Riot and Epic: the Chinese company has been the main injector of capital into studios for a decade, and a forced divestment would change the financing conditions of the entire sector, favoring large publishers. When will there be a solution? The decision has an undeclared but well-known deadline: Trump travels to China in April to meet with Xi Jinping. Forcing Tencent to sell would send a message of maximum pressure before sitting down to negotiate. In any case, neither the US Treasury, nor Tencent, nor Epic nor Riot have made public statements. Silence, in this type of situation, is louder than if they were discussing it openly. In Xataka | China has made a drastic decision: prioritize ‘its’ technology, even if it is worse

AI consumes obscene amounts of energy. Sam Altman compares it to the cost of “training” humans

OpenAI CEO Sam Altman participated in an event organized by The Indian Express. During the interview he made some striking statements, but the most notable was the one he dedicated to what it costs to train an AI model. In fact, he complained that many of the discussions about ChatGPT’s energy consumption are unfair. Training humans also consumes a lot. The interviewer asked Altman about ChatGPT’s energy consumption; Altman took a few seconds to answer the question, and then made a peculiar comparison (bold mine): One of the things that is always unfair in this comparison is that it talks about how much energy it takes to train an AI model compared to what it costs a human to perform an inference query. But it also takes a lot of energy to train a human. It takes about 20 years of life and all the food you eat during that time before you become intelligent. And not only that: it took the whole evolution of the hundred billion people who have lived and learned not to be eaten by predators and to understand science and so on to create you. The fair comparison is, if you ask ChatGPT, how much energy does it take once the model is trained to answer that question compared to a human? And AI has probably already caught up in terms of energy efficiency if we measure it that way. A previous Epoch AI study corroborates that energy consumption during inference (when we actually use ChatGPT, for example) is low. Source: Epoch AI. Training is one thing, inference another. The answer may be controversial, but to a certain extent it is logical: learning, both for humans and for AI, takes time and consumes many resources, but that cost is one thing and the cost of inference, of “applying that training,” is another. Once we have learned, it is not too difficult to answer things. 
This is what Altman is trying to point out here: he recognizes that AI does indeed consume a lot of energy in training, but argues that it has then become very efficient in the inference phase, when we actually use ChatGPT. The problem is that although Altman claims that inference consumption is minimal, he provides no evidence of it. The water problem is no longer a problem. He also spoke about the controversial water consumption attributed to large AI data centers. He acknowledged that this was a problem when “we used to use evaporative cooling in data centers.” Now, however, “we don’t do that,” he recalled, and made it clear that those claims that “ChatGPT uses 17 gallons per query, or whatever” are totally false, “totally crazy, it has no connection with reality.” But again, there is still no official data from AI companies on this front. How much does AI really consume? The truth is that at this point we still do not have really clear data on how much AI consumes, either in the training phase or in the inference phase. There are those who have investigated energy and water consumption and gotten it wrong, wildly exaggerating the figures, but, for example, in the US, where a large number of data centers are concentrated, there is no legislation that forces transparency on those numbers. Increasingly efficient models and data centers. One of the most interesting studies was the one by Epoch AI in February 2025, which also concluded that AI did not actually consume as much as it was said to. In fact, it consumed relatively little, and the models have only improved in efficiency since then. Chips and cooling systems have also improved, and although data centers certainly require enormous amounts of energy, we remain in the dark on this front. In Xataka | Spain has a plan to capture more data centers than anyone else: “shield” them from energy costs

The best science comedian does not have any scientific training. And that’s the key to his success.

Tom Gauld is one of the most accessible and yet most singular cartoonists working today. His vignettes are a mixture of winks for the initiated and simple, good-natured humor, which often makes his cartoons a blend of “everyone can understand them” and “if you’re interested in science and literature, even better.” A real rarity in these times, when you have to show up at franchise fan clubs with very clear credentials and a résumé. Because Gauld may talk about quantum physics, multiverses and the secrets of the cosmos, but he doesn’t leave anyone out either, all thanks to deceptively simple yet highly expressive artwork. Able to make an Escherian architectural absurdity believable or to perfectly portray the interior of an impossible dimension with just a couple of lines, Gauld reduces the complex to a few gentle strokes, and hence his popularity on the internet and in media of indisputable prestige such as ‘The Guardian’, where he makes literary jokes, or ‘New Scientist’, where he focuses more on science and technology. It is precisely a compilation of jokes of this latter type, ‘Physics for Cats’, that Salamandra is now publishing. Thanks to this brand-new volume we have had the opportunity to speak with him and have him explain his creative processes and his career as a science cartoonist… who does not have much scientific knowledge. We started, of course, by asking him how his collaboration with ‘New Scientist’ began and what impact it has had on the way he approaches scientific topics in his comics. He tells us that we have to go back a long way. “My grandfather was a scientist, a marine biologist, and he always read the ‘New Scientist’. So whenever I went to his house, the magazine was always there, and when he finished reading it, he would give it to my father, who was also interested in science. 
When I was little, I would look at the pictures and diagrams and, from time to time, I would read a little bit of the text.” And from there, a few years later and by then a professional cartoonist, he began to collaborate with them. Gauld states that a magazine of this type is a splendid workplace for an illustrator: “Some concepts about reality or other universes cannot be photographed, so these types of magazines have a good tradition of using illustration, and in fact most of their covers are illustrations rather than photographs. Then, I don’t remember exactly why, I thought it was strange that they didn’t have a comic strip in the magazine.” He proposed it a decade ago and it was accepted, but, he says, “I got a little scared because I stopped studying science when I was about 16, so I’m not an expert at all.” How to draw science. It is obvious that this approach to science from a non-scientist’s perspective entails difficulties. But contrary to what it might seem, “the really difficult thing with the strips is not getting the scientific details right.” His process is: “I read the magazine, I follow scientists on social media, I listen to podcasts and radio shows about science, and anything that I think could make a joke I write down in my notebook.” And his approach is clear: “I’m giving my own light-hearted, fun take on something that’s quite serious and thoughtful. I try to do it without being derogatory, like when you make fun of a friend you respect.” Which inevitably brings us to the next question: how do you balance scientific precision with the artistic freedom to depict such abstract concepts? 
And in fact, here the lack of scientific training turns out to be an advantage: “When creating the strips, the fact that I have no scientific training, that I am an ordinary person, not a professional, perhaps helps me judge the level of knowledge at which the jokes should pitch.” And he adds: “I never want to make a cartoon that makes people feel stupid, one that makes you think a doctorate is needed to understand it.” What happens, then, when he stumbles upon concepts that even he can’t understand? “When some real science is mentioned in the cartoon, I like to get it right, so I do some research on the Internet or ask someone at New Scientist to check my formulas or whatever. Or I do it so badly that it’s obvious I’m not trying to get it right. In fact, last night an astrophysicist mentioned that one of the formulas in the background of one of my strips was correct and that he liked it, which made me very happy.” When we ask him if there are any scientific ideas or theories related to physics that he finds especially inspiring, he tells us that two come to mind. “One that I think I keep coming back to in the cartoons is, and I guess this is more of a philosophical question than a physical one: what is reality? That, and the idea of many worlds. The other is quantum theory, which I still don’t understand. I’ve made some jokes about it and I’m proud of them, but I think they could be improved if I ever managed to understand all of quantum theory. Which may never happen, but I keep trying.” And here we enter personal territory, but we couldn’t help but ask him: does Tom Gauld like Gary Larson’s humor? (Larson, for those who don’t know, is the creator of ‘The Far Side’, an absolute master of comics with a geeky undercurrent, a mix of surreal humor and deep knowledge of biology and science that is absolutely unmatched.) “I’ve mentioned Gary Larson as an influence in almost every interview I’ve done to date,” he confesses, “so I’m glad you brought it up.” Typical Gary Larson: “‘Hey! 
What is this, Higgins? Physics equations?… Do you like your job as a cartoonist, Higgins?’” And he adds: “The cartoons from ‘The Far Side’ appeared in my local newspaper when I was a teenager and I have…

The industry became obsessed with training AI models, while Google prepared its masterstroke: inference chips

In recent years, what really mattered was training AI models to make them better. Now that they have matured and training no longer scales as noticeably, what matters most is inference: that when we use AI chatbots they work quickly and efficiently. Google saw this shift in focus coming, and has chips prepared precisely for it. Ironwood. This is the name of the new chips in Google’s famous family of Tensor Processing Units (TPUs). The company, which began developing them in 2015 and launched the first ones in 2018, is now reaping especially interesting fruits from all that effort: some really promising chips, not so much for training AI models as for letting us use them faster and more efficiently than ever. Inference, inference, inference. These “TPUv7” chips will be available in the coming weeks and can be used to train AI models, but they are especially aimed at “serving” those models to users. It is the other big leg of AI chips, the really visible one: training the models is one thing, and “executing” them so that they respond to user requests is quite another. Efficiency and power as its banner. The advance in the performance of these AI chips is enormous, at least according to Google. The company claims that Ironwood offers four times the performance of the previous generation in both training and inference, and is “the most powerful and energy-efficient custom silicon to date.” Google has already reached an agreement with Anthropic so that the latter has access to up to one million TPUs to run Claude and serve it to its users. Google’s AI supercomputer. These chips are the key components of the so-called AI Hypercomputer, an integrated supercomputing system that, according to Google, allows customers to reduce IT costs by 28% with an ROI of 353% over three years. In other words: they promise that if you use these chips, your investment will be returned more than fourfold in that period. Almost 10,000 interconnected chips. 
The new Ironwoods are also designed to join forces at scale. It is possible to combine up to 9,216 of them in a single node or pod, which theoretically makes the bottlenecks of the most demanding models disappear. A cluster of this type is enormous, offering up to 1.77 petabytes of shared HBM memory while the chips communicate with a bandwidth of 9.6 Tbps thanks to the so-called Inter-Chip Interconnect (ICI). More FLOPS than anyone. The company also claims that an “Ironwood pod” (a cluster with those 9,216 Ironwood TPUs) offers 118x more FP8 ExaFLOPS than its closest competitor. FLOPS measure how many floating-point math operations these chips can perform per second, which suggests that basically any AI workload is going to run in record time. NVIDIA has more and more competition (and that’s a good thing). Google’s chips are a demonstration of companies’ clear determination to avoid depending too much on third parties. Google has all the ingredients to do it, and its TPUv7 is proof of this. It’s not the only one, and many other AI companies have long sought to create their own chips. NVIDIA’s dominance remains clear, but the company has a small problem: in inference, CUDA is no longer so vital. Once the AI model has been trained, inference operates under different rules than training. CUDA support remains a relevant factor, but its importance in inference is much smaller. Inference focuses on obtaining the fastest possible answer. Here the models are “compiled” and can run optimally on the target hardware. This may cause NVIDIA to lose ground to alternatives like Google’s. In Xataka | When you’re OpenAI and you can’t buy enough GPUs, the solution is obvious: make your own
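The pod figures above invite a quick back-of-the-envelope check. The per-chip memory derived below is our own arithmetic from the pod totals cited in the text, not a number from Google's spec sheet:

```python
# Back-of-the-envelope arithmetic from the Ironwood pod figures cited
# above: 9,216 chips sharing 1.77 PB of HBM. The per-chip value is
# derived here for illustration only.
POD_CHIPS = 9216
POD_HBM_PB = 1.77

hbm_per_chip_gb = POD_HBM_PB * 1e6 / POD_CHIPS  # 1 PB = 1e6 GB
print(f"HBM per chip: ~{hbm_per_chip_gb:.0f} GB")  # ~192 GB
```

That is, each chip would contribute roughly 192 GB of HBM to the shared pool, a plausible figure for a current-generation accelerator.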

Alibaba claims to have a training method for its AI that is 88% cheaper

The Chinese company High-Flyer broke into the artificial intelligence (AI) market at the end of last January. DeepSeek, its proposal, has made its way among its competitors thanks to its open nature and its performance, but the real debate has revolved for several weeks around the cost of training its models. According to its creators, barely 5.6 million dollars were spent on this process. Three and a half months later this figure is still hard to believe, so it is reasonable to view it with distrust. In any case, DeepSeek has put on the table the possibility of training new models while investing much less money than US companies OpenAI, Google or Anthropic spent on fine-tuning theirs. Now it is the Chinese technology giant Alibaba that seems to be following the same path DeepSeek has already traveled: it claims to have developed an AI model training system that cuts costs by almost 90%, which presumably will have a positive impact on AI search capabilities. Alibaba’s jewel is called ZeroSearch. The strategy that Alibaba’s engineers have devised to reduce the cost of training their AI models is ingenious: instead of interacting with real search engines during this process, ZeroSearch, which is what its technology is called, improves search capabilities by carrying out simulations. To understand why this approach is much cheaper, we need to keep in mind that the costs associated with querying commercial search engines are usually high. Alibaba has produced a model that behaves like a search engine and is capable of training other AI models. According to Alibaba, sending 64,000 queries to the Google search engine through an API costs roughly $586.70, while generating the equivalent responses for training with a 14-billion-parameter AI model costs roughly $70.80, which is 88% cheaper. 
In practice, what Alibaba’s engineers seem to have achieved is to fine-tune a model that behaves like a search engine and is capable of training other AI models so that they can resolve queries. This scenario has a very evident advantage: training no longer requires interaction with external search infrastructure. Alibaba, as we all know, is a gigantic company, but from now on this strategy can be used by much smaller companies to train their own AI models without having to face a large investment in the process. In addition, this technology will presumably improve both the search capabilities of AI models and the skill with which they carry out reasoning processes. For the moment, Alibaba has used ZeroSearch to improve the capabilities of its Quark model, which for just a few days has been able, always according to its creators, to combine internet search and advanced reasoning to deliver precise responses to complex queries. Image | Markus Spiske More information | SCMP In Xataka | Samsung is preparing to hit TSMC where it hurts most: manufacturing chips for AI
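The 88% figure can be verified directly from the two costs Alibaba quotes, $586.70 for 64,000 real search API queries versus $70.80 for generating equivalent training responses with a 14-billion-parameter model:

```python
# Verifying the ~88% saving quoted above from Alibaba's own numbers.
api_cost_usd = 586.70         # 64,000 queries via a real search API
simulation_cost_usd = 70.80   # same data generated by a 14B model

saving = 1 - simulation_cost_usd / api_cost_usd
print(f"Cost reduction: {saving:.0%}")  # prints "Cost reduction: 88%"
```

The exact reduction is about 87.9%, which the article (and Alibaba) rounds to 88%.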

They improved their training and changed jobs

Receiving money every month without conditions and without having to justify how it is spent has been, for decades, a controversial idea that is difficult to imagine in practice. However, Germany decided to carry out one of the most ambitious experiments in the world to verify what really happens when a basic income is offered to a group of people. The results of this study have been a success since, as previous trials had already shown, far from discouraging professional improvement, it has encouraged it. Basic income in Germany. The pilot project for a universal basic income in Germany was designed as a long-term scientific study with a rigorous approach by the Mein Grundeinkommen organization, the German Institute for Economic Research (DIW Berlin) and other academic institutions. The experiment has not only revealed data on the economic impact of universal basic income, but has also made it possible to observe how the lives of those who receive it change: from improvements in mental health to the influence of economic support on training and on decisions to change jobs. The experiment. The test involved a total of 122 randomly selected people between 21 and 40 years old who already had prior incomes of between 1,100 and 2,600 euros per month, to which 1,200 euros per month were added from June 2021 to May 2024. In parallel, a control group of 1,580 people with similar sociodemographic characteristics was established, who did not receive this additional income. The participants in the basic income group faced no conditions whatsoever to receive those 1,200 euros per month, so they could work, study or not perform any work activity, and the money was delivered without inspections or deductions. The objective was to measure precisely how this financial support influenced their personal and professional decisions. Over the three years of the experiment, each of the 122 participants received a total of 43,200 euros. 
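The payout total stated above follows directly from the study's terms: June 2021 through May 2024 is 36 monthly payments of 1,200 euros each:

```python
# Sanity-checking the per-participant payout cited above.
months = 36         # June 2021 through May 2024
monthly_eur = 1200  # unconditional monthly payment

total_eur = months * monthly_eur
print(total_eur)  # 43200
```
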
The control group, on the other hand, only received symbolic compensation for completing the study's periodic questionnaires.

More training, better jobs. One of the main fears about a basic income is that it discourages job searching or professional improvement. The results of the German study show just the opposite. A remarkable number of participants chose to invest the additional income in their training, either to improve their skills or to change professional sector. In the first 18 months of the project, labor mobility and job satisfaction clearly increased. Even after this period, basic income recipients reported higher levels of job satisfaction, regardless of whether they had changed jobs or not. During the analysis period, the percentage of people who changed jobs was higher in the group receiving the extra 1,200 euros than in the control group. As happened in Finland's trials, financial security allowed many participants to dare to look for jobs with better pay or better working conditions.

Less stress, fewer sick days. The study also analyzed the effect of basic income on participants' health and psychological well-being. The data reveal that those who received the 1,200 euros per month experienced a significant reduction in financial stress and an improvement in their mental health. This positive effect also translated into fewer sick days and a greater sense of control over one's own life. "Universal basic income can mean enormous savings in the health and social assistance system, because people with mental stability can work more productively and innovatively," said Klara Simon, president of Mein Grundeinkommen, the German organization responsible for the experiment.

In Xataka | Sam Altman has been giving away millions of dollars in secret. His objective: the biggest study on universal basic income

Image | Mein Grundeinkommen (Fabian Melber)

Strava has bought Runna. The sports social network now has a fantastic training-plans application

Strava, the social network for athletes par excellence with more than 150 million users, has announced the acquisition of Runna, a popular British training-plans app for runners launched in 2021.

Why it matters. This purchase addresses one of Strava's big gaps, the lack of training plans, a weak point for both beginners and veterans, who had to turn to other platforms. The move is not only a strategic purchase but a direct response to the worldwide running boom. In 2024, nearly one billion runs were logged on Strava, and according to its data, running is the fastest-growing sport globally, with Generation Z leading this trend.

What Runna does. The Runna app, which I tried for months, asks for concrete goals (finish a 10K, finish a marathon, run a half marathon in under 1:40, etc.) and generates specific, adapted training plans, along with live guidance through each plan. That is, if we start a workout with Runna (available on Apple Watch and Garmin), it will guide us on our watch and in our headphones through each part of the session: it will warn us if we are going too slow or too fast, signal each change of pace we have to make, and so on.

And now what. In the short term, both applications will continue to operate independently, as the CEOs and co-founders of Strava and Runna have respectively confirmed. Presumably, the first changes and integrations will arrive in the coming months. In the background, two clear elements. First, the growing importance of Strava's premium subscription (8 euros per month), which unlocks a series of somewhat secondary features; limiting training plans to this subscription would make it much more attractive, and Runna's costs 20 euros per month. Second, other recent Strava purchases: Recover Athletics, an injury-prevention app, and Fatmap, a 3D maps platform. The expansion strategy into different areas of digital fitness is evident.

The big question. How will subscriptions be managed now?
Looking at annual plans, Strava Premium costs $80 a year versus the $120 that Runna costs. For now, users will need to maintain both subscriptions if they want access to "everything." Runna uses Strava's third-party integration to sync activities; a tighter integration is expected. Michael Martin, CEO of Strava, has compared this purchase to that of Recover Athletics, which works separately but is free for Strava subscribers.

Going deeper. These kinds of purchases usually generate mixed reactions. In 2023, Strava made a confusing and controversial change to its subscription pricing, and a few weeks ago Garmin announced its own subscription plan to a rather lukewarm response. Clearly Strava wants to be more than "the social network of sport" and offer more value directly to the athlete, not just a social meeting point around training.

In Xataka | After almost a decade with the Apple Watch I have switched to a Garmin. And I have understood what I was missing

Featured image | Strava
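The combined cost of holding both subscriptions, as discussed above, is easy to tally (a small sketch; the prices are the annual figures quoted in the article):

```python
# Annual subscription prices quoted in the article (US dollars).
strava_premium = 80
runna = 120

# Until the integration arrives, accessing "everything" means paying both.
both = strava_premium + runna
print(both)  # 200 dollars a year
```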

Grow With Google: what it is and how to find content on this online courses and training platform

Let's explain what Grow With Google is and how to use it, Google's online courses system. It is a page available to all users with a company account, and one in which a large number of courses are indexed. We will start the article by briefly explaining what this portal is and what you can expect from it. Then, we will give you some pointers so you can learn to use it.

What is Grow With Google

Grow With Google is a program to help improve your digital skills, whether you are an individual or a company. It is an index where you will find multiple ways to improve your knowledge. In general, you will find three types of resources on this website. On the one hand, there are online courses, both from Google and from third-party companies. You will also have workshops to practice skills, as well as a series of free resources to improve your use of technology.

This is not a portal with its own content, but an index with links that lead you to content on other platforms. Some of this content, such as the courses, is from Google, and the links take you to the company's pages, but other content belongs to different platforms. This also means that browsing the index does not require a Google account. The aim of this content is to give you tools to refresh your knowledge and, with that, find a job. It also serves to train you to advance in your current job or look for another one, or to learn how to digitize your company.

How the platform works

The first thing you have to do is go to the Grow With Google portal, whose address is grow.google. Once inside, click at the top on Discover our courses and tools, the section that takes you to the content index. Inside, you can choose whether you want content to improve your career or your business, so you can differentiate between courses and content for individuals or for businesses.
Choose one of the two and you will reach the final index. Here, the body of the page shows cards for each course and resource, and clicking on one takes you to the external page where it is hosted. You also have a filter column where you can choose the following:

Topic: choose the subjects you want content about, from artificial intelligence or programming to online sales and productivity.
Type: choose between online courses, products and tools, or manuals.
Certificate: filter to show content such as courses with free or paid certificates. The course itself may be free, but in some cases you will have to pay for the final certificate.
Duration: courses can last from less than 1 hour to more than 40 hours, with several filters to narrow down what you are looking for.

With these filters you can find the course or online content you want within the platform. When you click on one that interests you, it takes you to another website, which may or may not be Google's, so depending on the platform you may need to register there to access the content.

In Xataka Basics | Programming languages: the five most popular for 2025 and 36 free courses to learn how to use them
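The filtering described above behaves like a simple AND across the chosen criteria. As a purely hypothetical illustration, assuming invented course data and field names (Grow With Google exposes no public API; this only mirrors the filters the page offers):

```python
# Hypothetical course index mirroring the filters described in the article:
# topic, type, certificate and duration. None of this is a real Google API.
courses = [
    {"topic": "artificial intelligence", "type": "online course",
     "certificate": "paid", "hours": 30},
    {"topic": "online sales", "type": "manual",
     "certificate": "free", "hours": 2},
    {"topic": "programming", "type": "online course",
     "certificate": "free", "hours": 45},
]

def filter_courses(courses, **criteria):
    """Keep only the courses that match every given filter (logical AND)."""
    return [c for c in courses
            if all(c.get(key) == value for key, value in criteria.items())]

# Combining filters narrows the index, just like ticking boxes on the page.
print(filter_courses(courses, topic="programming", certificate="free"))
```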
