While the West debates what to do with AI in schools, parents in China are already turning it into a tutor for their children

Anyone who has been a child or a parent knows the scene: the desk lamp on, an incomprehensible math problem on the table, tears of frustration at not understanding a lesson or not being able to pronounce a foreign language, and a parent losing patience after explaining the same thing for the fifth time. In China they have found a way to turn this around: parents frustrated and exhausted by their workdays are delegating the academic supervision of their children to artificial intelligence.

While other countries are locked in heated debate over whether AI erodes students' critical thinking, the opposite is true in China: a 2025 survey led by KPMG found that more than 90% of Chinese respondents are optimistic about the technology. The phenomenon came to light and sparked debate on social media when a mother in Shandong province discovered her husband playing on his phone while letting Kimi AI, a chatbot capable of processing two million characters, do his son's homework.

This father is not an isolated case. Many adults are using AI not just to teach, but to handle the dreaded "parenting chores." Mr. Zhang, for example, admitted to using the chatbot Doubao to generate summaries of Aesop's Fables and to print step-by-step images for his third grader's craft projects.

The market has responded with an avalanche of gadgets. Zheng Wenqi, a working mother, paid about $375 for the "Native Language Star", a device consisting of a mask that muffles her voice in Chinese and a speaker that translates it into English so she can converse with her children. Others, like university professor Wu Ling, invested $1,170 in AlphaDog, a robot dog powered by the DeepSeek model that practices English, dances and keeps her only son company. Some parents have gone a step further and become creators themselves.
This is the case of Yin Xingyu, a mother from Shenzhen who does not know how to program but uses "vibe coding" with DeepSeek to create interactive English word games for her six-year-old daughter, as well as to generate personalized comics with the Nano Banana Pro image model. For purist parents, devices such as the "Youdao AI Q&A Pen" have emerged: a smart pen designed around "asceticism", with no browser and no games, that only guides the child step by step through their mathematical reasoning without giving them the direct answer.

A multi-million-dollar business in a gray area. All this enthusiasm has fueled a runaway educational technology market valued at more than $43 billion. Outsourcing has left the home and taken to the streets: as of July 2024, an estimated 50,000 "AI study rooms" had opened across the country. In these establishments, children sit in cubicles in front of standardized tablets; they cannot leave until the indicators on the screen turn from red (errors) to green (correct answers). As detailed by CCTV, the "teachers" in these rooms do not teach; they are prohibited from explaining the subject and act as mere supervisors and commission-based salespeople. To cope with the monotony of six to eight hours of answering questions, some children learn to play Go or Gomoku on the sly on the same machines, often with the supervisors turning a blind eye.

However, former employees and parents report that in many of these centers, "artificial intelligence" is just a marketing façade used to charge more, and children simply consume pre-recorded lessons on basic tablets. Behind these study rooms lies a business survival tactic: many of these centers operate in a gray zone to dodge the strict "double reduction" policy imposed by the government in 2021, which banned for-profit tutoring to relieve financial and academic pressure on families.
By arguing that "it is the AI that teaches, not a human," these companies dodge education regulators, registering under "cultural media" names and avoiding words like "enrollment" or "classes." Franchises are strategically expanding into peri-urban areas and small towns, where rents are low and parents are equally willing to pay for a place to leave their children.

This mass adoption is no accident; it is backed by a clear state directive. The Chinese government is promoting the integration of AI into education as part of a national strategy to accelerate its technological progress against global competitors such as the United States. The regulations are already on the table: starting with the fall 2025 semester, Beijing will require a minimum of eight hours per year of AI education in all primary and secondary schools. The transition has been rapid and planned, with higher education leading the way: 99% of university students and teachers in China already use generative tools, and elite universities such as Zhejiang and Fudan have made AI courses mandatory, cross-disciplinary subjects.

Science supports this dive. An empirical study conducted with high school students in "H city" showed that the duration of daily use of AI tools significantly and positively influences students' AI knowledge and algorithmic thinking. In other words, constant exposure is already shaping their cognitive and technological abilities.

The debate is served. Families' opinions are drastically divided. For many, AI democratizes education. Mothers like Li Linyun celebrate that the Doubao chatbot is a "24-hour, knowledgeable and extremely patient teacher" that has saved her hundreds of dollars on human tutors and improved her relationship with her daughter. On the other hand, technological dependence terrifies educators and a faction of parents, who argue that children are becoming lazy and losing the ability to think independently.
In study halls, proctors notice that students, desperate to turn the screen green, resort to tactical memorization: repeatedly choosing wrong answers by elimination until the system approves them, without actually learning the concept. Added to this is the "AI illusion" and its hallucinations. Su Xiao, mother of a ninth grader, discovered that general-purpose models could invent historical facts with complete confidence and fluency, or omit crucial data in math problems, offering logically impeccable but erroneous results. This forced her to become a "cyber quality inspector."

Turning the "sea of death" into a carbon sink

For decades, the Taklamakan desert, in the Chinese region of Xinjiang, has had a rather eloquent nickname: "the sea of death." And no wonder: it is the second-largest shifting-dune desert in the world and a place where, historically, whoever enters does not usually come out. But faced with the problems its sand causes for the surrounding areas, China decided to find a solution.

The solution. Since 1978, China has been waging an ecological engineering war against the sand with a very specific weapon: the Three-North Shelterbelt Program, better known as the Great Green Wall. A name that sounds straight out of Game of Thrones, but its objective is to stop erosion and sandstorms. Now a massive new study published in PNAS has revealed an unexpected and monumental side effect: human intervention has turned the edges of one of the driest places on Earth into an active carbon sink.

The data. The study draws on 25 years of data obtained through fieldwork and satellite observation. What the team found on the margins of the Taklamakan is what they call a "cold spot" of carbon dioxide: in reforested areas, the concentration of CO₂ is between 1 and 2 parts per million lower than in the surrounding environment. It may not seem like much, but in climatology it is enormous. The trend is clear: vegetation cover is increasing every year, and soil and plants are tending to "eat" more carbon than they emit.

How is it possible? The million-dollar question is obvious: how do you keep 66 billion trees alive in a place where it barely rains? The answer lies in water-management technology and species selection. The project does not focus on planting oaks or pines; it relies on extremophile species such as Tamarix, Haloxylon and the Euphrates poplar, plants that evolved to survive on very little.
But the technological key has been drip irrigation with saline water.

The origin of the water. China discovered that beneath the Taklamakan lie immense aquifers, but they are too saline for traditional agriculture. These halophytic plants, however, can tolerate it, as if it had been arranged on purpose. That is why groundwater is used to irrigate the protective strips, especially around the famous Tarim Desert Highway. Soil moisture drops drastically between waterings, but the plants survive. And although the salinity of the surface soil increases, studies indicate it is manageable in the long term and does not salinize the deeper layers. This made it possible to complete, in 2024, a "green belt" of 3,046 kilometers that encircles the desert, stabilizing dunes that previously shifted several meters each year.

Its stability. Unlike the Great Green Wall attempts in the Sahara, which have suffered from political instability and a lack of sustained funding, the Chinese project has stayed the course since 1978. That continuity has allowed a "40-year experiment" that is now bearing fruit with important conclusions. The Chinese authorities themselves cite that national forest coverage has gone from 10% in 1949 to 25% today, thanks in large part to this project. As a result, in places like Maigaiti in Xinjiang, sandstorm days have dropped from 150 a year to fewer than 50.

It is not a panacea. The source article warns of the project's limitations: photosynthesis and carbon sequestration are strongly correlated with seasonal precipitation. At least 16 liters of rainfall per square meter per month are needed in high season to maximize the effect. And behind it all is climate change, which is drastically altering rainfall patterns in Central Asia and could weaken the carbon sink.
What is happening in the Taklamakan nonetheless marks a paradigm shift: where we used to see desert reforestation, we now also see a way to cool the planet by reducing the concentration of CO₂.

Images | Wikipedia, Jasmine Milton

In Xataka | Someone has counted each and every tree in China. Why? Because now it is possible

Apple has resisted turning Siri into a chatbot for years. Until it surrendered to the evidence

2026 will be the year of Siri, but not because of an internal shift at Apple or the maturity of Apple Intelligence. It will be because the pact with Google allows Apple to use Gemini technology as the foundation of its assistant. The details of what Apple will do with its assistant have not taken long to come to light, and there is good news: the changes will arrive in the next version of iOS.

The new Siri. Apple has been announcing the benefits of the new Siri since before it had anything ready. With the Apple Intelligence announcement it put on the table a Siri fully integrated into the system, capable of functioning as a complete assistant and of working mostly on-device. The reality? Everything remains practically the same as before, and when Siri doesn't know how to respond to something, it ends up opening ChatGPT.

What is going to change. Bloomberg reports that Apple, as of iOS 27, will surrender to the chatbot model that has worked so well for companies like OpenAI and Google. The mere-assistant model has expired, and Siri will become a chatbot at the service of any of our requests. This new chatbot will be integrated into all Apple apps (an API open to developers, so they can integrate it into theirs, is expected), allowing us, for example, to find specific photos in the Photos app or use it as a programming assistant in Xcode.

What won't change. The one certainty with the new chatbot model is that Apple will keep its obsession with privacy and with maintaining its AI ecosystem as its own, even if it is based on Gemini. Apple's intention is to bring this experience to iPhone, iPad, Mac and Apple Watch, maintaining activation through the "Siri" voice command or by holding down the power button on the iPhone.

The difference. Today, Siri is an assistant, a command system.
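A command system of this kind can be sketched as a fixed intent classifier. A toy illustration in Python (the intent list and keywords are invented for the example; this is not Apple's actual implementation, which uses trained models rather than keyword matching):

```python
# Toy sketch of a command-style assistant: a fixed intent list plus
# keyword matching. Intents and keywords are invented for illustration.
INTENTS = {
    "set_alarm": ["alarm", "wake me"],
    "call_contact": ["call", "dial"],
    "send_message": ["message", "text"],
}

def classify_intent(utterance: str) -> str:
    """Map an utterance to one of the fixed intents, or 'unknown'."""
    lowered = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(kw in lowered for kw in keywords):
            return intent
    # Anything outside the fixed list falls through. This is the gap a
    # generative chatbot closes: it can answer arbitrary requests instead
    # of handing the query off to an external service.
    return "unknown"

print(classify_intent("Wake me up at 7"))           # → set_alarm
print(classify_intent("Summarize my week for me"))  # → unknown
```

The structural limit is the point: however good the classifier, requests outside the intent list cannot be served, which is why the old Siri ends up opening ChatGPT.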
You tell it something; Siri classifies the intent (set an alarm, call a contact, send a message); it executes the order. Moving to the chatbot model means having a generative model capable of interpreting natural language, holding conversations and interacting with the phone in a more "human" way. This is what its rivals have been doing for a few years.

Adapting to the inevitable. That Siri will evolve in 2026 is proof that the classic assistant model is exhausted. Apple will have to adopt the chatbot model as an inevitable transition, one pioneered by OpenAI and in which Gemini now seems to be leading the way.

It doesn't end here. The destination of the new Siri is not only current Apple devices. As my colleague Javier Pastor explains, the company plans to launch a screenless device, its first AI-focused wearable. According to the leaked information, it will have a format similar to that of AirTags, a microphone system and a launch scheduled for 2027. A new assistant, new devices, and an alliance with Google: Apple's new artificial intelligence era is finally arriving. The question is whether it will manage to offer something new.

Image | Xataka

In Xataka | Hey Siri: 134 voice commands to get the most out of Apple's assistant

The remake of ‘Prince of Persia’ aimed to be the turning point for Ubisoft. It has been canceled along with other titles

There are games that are not only played; they are remembered. 'Prince of Persia: The Sands of Time' belongs to that category and for years was one of Ubisoft's calling cards from its most inspired era. Its remake, announced after a streak of ups and downs, aimed to serve as a bridge between that legacy and a new stage for the company. Its cancellation reveals just the opposite: Ubisoft is going through a period of harsh changes, with delays, cuts and decisions that reflect the extent to which the group is reviewing its priorities to adapt to a tighter economic and creative scenario.

The announcement came today, January 21, coinciding with the presentation of financial results, and marks a turning point in the group's strategy. Ubisoft announced a global "reset" that includes a new creative structure, a deep review of its game portfolio and an adjustment to the size of the organization. The company frames these decisions in a more demanding market context, with higher costs and, by its own diagnosis, a "more selective" AAA audience. The stated objective is to gain agility, accelerate decision-making and steer the business toward what it defines as a more player-centered model.

Cascading cancellations and delays. The restructuring has immediate consequences for the catalog. Ubisoft confirmed the cancellation of six games in development, including 'Prince of Persia: The Sands of Time Remake', along with three new unannounced IPs and a mobile project. In addition, the company has decided to delay another seven titles to, as it explains, ensure its new quality thresholds are met. One of those games, initially scheduled for fiscal year 2026, now moves to 2027, a move that directly impacts its short- and medium-term planning.

A new internal map by brands and genres. One of the most profound changes affects how Ubisoft is organized internally.
The company is reorganizing its production model to group its teams into five "Creative Houses", each focused on specific franchises and genres, supported by a "Creative Network" of studios that back up production. The first brings together brands such as 'Assassin's Creed', 'Far Cry' and 'Rainbow Six', while others group sagas such as 'The Division', 'Ghost Recon' and 'Splinter Cell'. 'Prince of Persia' is integrated into the fourth of these units, along with Rayman, Anno and Beyond Good & Evil, with its own leadership and greater creative autonomy.

Beyond the canceled or delayed games, the restructuring implies profound changes in the company itself. Ubisoft has reiterated its intention to close studios, reorganize teams and cut costs continuously over the coming years. Its plan sets a reduction in its cost base of at least 100 million euros by the end of its 2025-2026 financial year, and adds another 200 million euros to be cut over the following two years. The group admits the process will be difficult, but presents it as a necessary step to regain stability in a market increasingly intolerant of errors.

A new creative focus for the coming years. Looking ahead, Ubisoft says it will concentrate its efforts on large open worlds and games as a service. At the same time, it has indicated that it will accelerate investment in "player-oriented generative AI", a formulation that points to uses aimed directly at the player, without yet specifying how it will translate into specific titles. The company also acknowledges that the revision of its roadmap will affect its release schedule and financial forecasts. It is, in practice, the price of the change of model.

Images | Ubisoft

In Xataka | Sony has come up with something taboo in the world of video games: AI that starts playing for you when you get stuck

Turning ChatGPT into a pocket doctor

More than 230 million people ask ChatGPT questions about health and wellness every week. That is not an estimate; it is data from OpenAI, published alongside the announcement of ChatGPT Health, its new personalized section for medical and wellness consultations.

ChatGPT Health. ChatGPT Health was presented just a few hours ago as a new section within the app, designed to go beyond the one-off queries users were already making. The goal is more ambitious: to centralize health data and medical history in a single place, turning ChatGPT into something closer to a health monitoring platform than a simple conversational assistant.

How it works. Currently available to "a small number of early adopters" in the United States, ChatGPT Health will be integrated as another section within the app. It will allow queries like any other GPT, with some key points: it can be integrated with Apple Health (and all your data on mental health, steps, weight, heart health, etc.); it can be integrated with wellness apps like MyFitnessPal; and it is specifically designed for uploading your clinical history and most recent test results.

In detail. In addition to these integrations, ChatGPT Health is designed to function as a healthy-habits assistant. It will also integrate with apps like Peloton to suggest classes or guided meditations, recipe apps like Instacart, or Weight Watchers to guide nutrition for users of GLP-1 medications against obesity and diabetes.

Why it matters. OpenAI works closely with health experts in the development of its models. Even so, ChatGPT has not been without controversy, especially in the mental health space, where the model drew various criticisms and forced response guidelines to be adjusted in recent updates.
The launch of ChatGPT Health is not a rectification; it is a declaration of intent: far from backing out of the clinical field, AI aspires to consolidate itself as a "pocket doctor", always available, increasingly integrated and with a growing role in everyday health decisions.

Yes, but. AI-fueled psychotic breaks, lawsuits, and people asking ChatGPT how to inject Botox themselves. OpenAI makes it clear that ChatGPT Health is not coming to replace your GP, yet many people use the app as if it were one. Fairly recent surveys show that 17% of the adult population uses chatbots at least once a month to look for health-related information.

Dependency and retention. ChatGPT is still, by far, the most used AI chatbot in the world, despite the push of alternatives such as Google Gemini, DeepSeek or Grok. But OpenAI cannot afford complacency. Gemini's growth is affecting user retention (with a significant drop for ChatGPT), and in that light health is a clear winner: these are not one-off queries, they are recurring ones. By incorporating clinical information, ChatGPT stops being a question-and-answer chatbot and becomes a medical agenda. The health app market keeps growing in users, and integrating those users natively into ChatGPT retains them even more.

Image | Pexels

In Xataka | ChatGPT is writing medical and economic studies. We know this because it uses strange words

Psychology knows it: we are turning rudeness into a diagnosis

A decade ago, if someone behaved selfishly in a relationship, we would plainly say they were "selfish." Today, you are more likely to hear that the person has an "avoidant attachment style" or that their behavior is a "response to past trauma." Psychology has come to explain absolutely everything, but there is a problem: we are pathologizing everyday life.

A new idea. The psychologist Ángela Fernández recently threw a dart at the center of the debate: "not everything is trauma or anxious attachment; sometimes it is simply rudeness." And this phrase is not just an unpopular opinion; it is the summary of a growing concern in the scientific literature about how "trauma culture" is blurring the boundary between pathology and character.

"Overpathologization." The concept is not new, but it has never been so relevant. The scientific literature has already warned of the tendency to look for an illness behind every inappropriate act of daily life. Modern psychology thus runs the risk of turning normal experiences and reactions, such as sadness after a breakup or work stress, into medical problems. This increase in diagnoses has a fairly dangerous side effect: it trivializes serious disorders. When we call any emotional wound or inconvenience "trauma," we erode the perception of human resilience and, in the process, downplay those who truly suffer from PTSD. If everything is trauma, nothing is.

In the Anglo-Saxon clinical field, the term "Trauma Culture" has been coined. Publications in Psychology Today warn that this fashion of seeking a clinical explanation for every emotional reaction can be counterproductive. Far from helping, it pushes people toward therapeutic interventions that do not fit their real problem, short-circuiting grieving or learning processes that are simply part of growing up.
Different psychotherapists add to this, emphasizing that treating every conflict in a couple as a "response to trauma" mixes everyday stress with genuinely complex pathological conditions. All this does is create a generation of people who consider themselves "broken" by default, instead of understanding that frustration and conflict are inherent to human interaction.

It is selfishness. One of the most controversial points of Fernández's critique is the mention of rudeness or immaturity, and the literature seems to agree with her. Works published in ScienceDirect on the "egoism-altruism spectrum" suggest that certain harmful behaviors are not explained by a "dysregulated" nervous system, but by personality traits such as lack of empathy or manipulativeness: something innate to a person, and hardly treatable. We thus have subclinical psychopathic traits: people who do not have a mental illness, but who show excessive interest in their own well-being. In these cases, the clinical diagnosis acts as a "cloak of invisibility" that exempts the person who causes harm from personal responsibility.

An excuse. If I have behaved badly, I can create that "invisibility cloak" effect and shed personal responsibility, blaming my behavior on my parents or my own past, as if it were an "attachment trauma." But the reality is that these are often unempathetic patterns that should be addressed through ethics and upbringing, not the psychiatry manual.

The danger of labels in childhood. Different scientific reports point out that we are labeling normal variations in children's behavior as mental disorders. What was once a restless child, or one who had difficulty following rules, today runs the risk of being quickly diagnosed and medicated.
By turning behavioral problems into psychopathologies, we are missing the opportunity to teach discipline, limits and frustration tolerance. As experts at the Birchwood Clinic point out, the extensive use of these labels increases anxiety and medicalization, creating a dependency on the health system for problems that, historically, were resolved in the social and family environment.

The verdict of science. Social media has created a market of "pocket diagnoses" in which selfishness is disguised as "self-care" and rudeness as an "emotional boundary." Clinical psychology, however, insists: for something to be a disorder, there must be significant functional impairment. Being inconsiderate toward others does not make a person a psychiatric patient; sometimes you simply have to grow up.

Images | Vitaly Gariev

In Xataka | Those born between 1950 and 1970 have a psychological advantage over other generations: they are entering their "peak"

Something is going wrong with AI. The US is turning to energy solutions that it thought were buried to power data centers

The race to develop and operate increasingly powerful artificial intelligence models comes at a cost that is rarely at the center of the technological narrative. It is not in the chips or the software, but in the huge amount of electricity needed to keep data centers running around the clock. In the United States, this pressure is already translating into concrete decisions: polluting power plants that were headed for retirement are being restarted to cover growing demand peaks and tensions on the grid. The paradox is evident: the most ambitious advance in the technology sector depends, for the moment, on energy solutions from another era.

The problem is not so much an absolute shortage of electricity as a timing mismatch. Demand from AI-linked data centers is growing much faster than new electrical generation, especially renewable generation, can be brought online in the short term. Building large energy infrastructure takes years, while these complexes can go up in much shorter time frames. Faced with this temporary shock, grid operators and utilities are turning to what already exists and can be activated immediately, even if it is more polluting.

PJM in context. The clash between electricity demand and supply is felt with particular clarity in the PJM region, the largest electricity market in the United States, which covers 13 states and concentrates a very significant share of the country's data centers. We can think of it as a large regional electricity exchange that coordinates generation, prices and grid stability in real time. There, the growth of AI-linked data centers is testing a system designed for a very different consumption pattern, making PJM the first thermometer of a problem beginning to appear in other areas.

What a peaker plant is.
So-called peaker plants are facilities designed to come online only during short periods of peak demand, such as heat waves or winter cold snaps, when the system needs immediate reinforcement. They are not designed to operate continuously, but to react quickly. According to a report by the US Government Accountability Office, these facilities generate just 3% of the country's electricity but account for nearly 19% of installed capacity, a reserve now being used far more often than expected.

South view of the Fisk plant in Chicago

The case of the Fisk plant, in the working-class neighborhood of Pilsen in Chicago, illustrates how this shift plays out on the ground. It is an oil-fueled facility, built decades ago and scheduled to be retired next year, that had been relegated to an almost token role. The arrival of new electrical demand associated with data centers changed that equation. Matt Pistner, senior vice president of generation at NRG Energy, told Reuters that the company saw an economic case for keeping the units, and so withdrew the closure notice, a decision that returns activity to a site many residents believed was permanently winding down.

When price rules. The change is not explained only by technical needs, but also by very clear market signals. In PJM, the prices paid to generators to guarantee supply at times of maximum demand skyrocketed this summer, up more than 800% from the previous year. An analysis by the aforementioned agency shows that about 60% of the oil, gas and coal plants scheduled for retirement in the region postponed or canceled those plans this year, most of them peaker units, precisely the ones that best fit this new scenario of relative scarcity.

The bill for this energy shift is paid above all at the local level.
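Those two GAO percentages, by the way, quantify how idle peakers normally sit. A quick back-of-the-envelope calculation (our arithmetic on the figures quoted above, not taken from the report itself):

```python
# GAO figures quoted above: peakers produce ~3% of US electricity
# while representing ~19% of installed capacity.
generation_share = 0.03
capacity_share = 0.19

# The ratio of the two shares tells us how hard peakers run relative
# to the average plant: ~0.16, i.e. roughly one-sixth of average
# fleet utilization. That is the idle reserve now being tapped.
relative_utilization = generation_share / capacity_share
print(round(relative_utilization, 2))  # → 0.16
```

That gap between capacity share and generation share is exactly what makes these plants attractive in the new scenario: they are already built, already connected, and mostly unused.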
Peaker plants tend to be older facilities, with shorter smokestacks and fewer pollution controls than other plants, which magnifies the impact on their immediate surroundings when they run more often.

Coal is also postponed. The phenomenon is not limited to oil- or gas-fired peaker plants. Nationwide, several utilities have begun delaying the closure of coal plants that were part of their climate commitments. A DeSmog analysis identified at least 15 retirements postponed since January 2025 alone, facilities that together account for about 1.5% of US energy emissions. Dominion Energy offers a clear example: in 2020 it promised to generate all its electricity from renewables by 2045, but after projecting that data center demand in Virginia will quadruple by 2038, it is now stepping back.

Images | Xataka with Gemini 3 Pro | Theodore Kloba

In Xataka | A former NASA engineer is clear: data centers in space are a horrible idea

AI has allowed developers to program faster than ever. That’s turning out to be a problem.

Anyone who has tried it knows: programming with AI can be wonderful, especially if you have (almost) no idea how to program. This is where generative AI models have had their first, and probably clearest, revolution. Developers were the first to embrace the technology. The appearance of GitHub Copilot in 2021 showed that it was no longer necessary to type out so much code, because the machine was already doing it for you, and since then the advance of generative AI in programming has been overwhelming. The question is: has it been positive? The answer is not at all clear.

It is evident that AI has allowed millions of people who were not programmers to turn their ideas for applications and games into reality, and millions of professionals to save time by not having to write repetitive (boilerplate) code, freeing them to focus on more important, more productive parts of their work. The industry, of course, has been especially insistent on this vision of transformation. Satya Nadella (CEO of Microsoft) and Sundar Pichai (CEO of Alphabet/Google) boasted months ago that about 25% of their companies' code is generated by AI. Jensen Huang went further and declared that at this point no one should learn to program anymore, because AI would do it for us. These are forceful statements, but behind them lies another reality: all that glitters is not gold in the world of AI for programmers. MIT Technology Review has spoken with more than 30 developers and experts in this field and reached interesting conclusions.

AI is a better programmer than ever. At least, according to the benchmarks. In August 2024, OpenAI made a singular launch: it presented SWE-bench Verified, a benchmark intended to measure the ability of generative AI models to program. At the time, the best model could solve only 33% of the tasks the benchmark proposed.
A year later the best models already exceed 70%. (Current ranking of the best models according to the SWE-bench Verified benchmark: several already pass 70% of the tasks. Source: SWE-bench.)

The evolution in this area has been dizzying, and we have witnessed the birth of a new programming modality called "vibe coding". All the big players have developed powerful programming tools to take advantage of the pull: we have OpenAI Codex, Gemini CLI, or Claude Code, for example, but startups like Cursor or Windsurf have also known how to exploit this fever for programming with AI.

All of these tools promise basically the same thing: that you will program more and better. Productivity theoretically skyrockets, and while more code is certainly being written than ever thanks to AI, programmers have gone from writing their own code to reviewing what machines generate. Recent studies reveal that veteran developers who believed they had been more productive actually weren't: they estimated they had been 20% faster by being able to move forward without blockages, but in reality they had taken 19% longer than they would have without AI, according to the tests carried out.

There is another problem too: code quality is not necessarily good, and as we said, developers must review that code before it can be used in production. In the latest survey from Stack Overflow, one of the largest developer communities in the world, there was a notable finding: the positive perception of AI tools had decreased, from 70% in 2024 to 60% in 2025.

There are limitations, but even so everything has already changed

Those interviewed by MIT Technology Review generally agreed with these conclusions. Generative AI programming tools are great for producing repetitive code, writing tests, fixing bugs, or explaining code to new developers. However, they still have important limitations, and the most notable is their short memory.
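That "short memory" is the model's context window: a codebase has to fit (or be chunked) into a fixed token budget. A rough back-of-the-envelope check, assuming the common heuristic of about four characters per token (both the window size and the heuristic below are illustrative assumptions, not any particular model's real limit):

```python
from pathlib import Path

CHARS_PER_TOKEN = 4              # rough heuristic; varies by tokenizer and language
CONTEXT_WINDOW_TOKENS = 200_000  # illustrative budget; real limits vary by model

def estimated_tokens(root: str, suffix: str = ".py") -> int:
    """Crudely estimate how many tokens a source tree would occupy."""
    total_chars = sum(
        len(p.read_text(errors="ignore"))
        for p in Path(root).rglob(f"*{suffix}")
    )
    return total_chars // CHARS_PER_TOKEN

def fits_in_context(root: str) -> bool:
    """Could the whole tree be 'consumed' in a single prompt?"""
    return estimated_tokens(root) <= CONTEXT_WINDOW_TOKENS
```

A small project passes this check easily; a large monorepo blows past any window, which is why coding tools resort to retrieval and chunking instead of showing the model everything at once.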
These models can only handle a fraction of the workload in professional environments: if your codebase is large, the AI model may not be able to "consume" and understand it all at once. For small projects, great. For large developments, probably not so much. The problem of hallucinations also affects code, and in repositories with a multitude of components, AI models can end up getting lost, failing to grasp the structure and its interconnections. The problems are there, and they can accumulate and cause exactly the opposite of what they were meant to avoid.

Several experts, however, explained in that piece how difficult it actually is to go back. Kyle Daigle, COO of GitHub, explained that "the days of coding every line of code by hand are likely behind us." Erin Yepis, an analyst at Stack Overflow, indicated that although the unbridled optimism towards AI has fallen somewhat, that is actually a sign of something else: programmers are embracing the technology, but they do so aware of its risks.

And then there is another reality, one repeated day after day and that seems undeniable: the AI we have today is the worst of all those we will have in the future. It may not be tomorrow or next week, but it is clear that the AI that programs will keep getting better and better, and there may come a point when those limitations disappear. Whether they disappear or not, what is clear is that AI has changed programming forever.

Image | Mohammad Rahmani

In Xataka | OpenAI has turned ChatGPT into mainstream AI. In the business world the game is being won by its great rival

Big Tech is turning India into the new darling of its AI expansion

Microsoft has just announced an investment of $17.5 billion in India over the next four years, the technology giant's largest in Asia. Amazon has followed in its footsteps with $35 billion through 2030, and Google had already announced $15 billion for the same period. The big tech companies are turning to the Asian subcontinent like never before, and it makes all the sense in the world.

Why India has become irresistible. The country brings together three characteristics that make it a strategic target for technology companies: a population of more than 1.4 billion inhabitants with growing access to the internet and smartphones, infrastructure costs significantly lower than in other Asian markets such as Japan or Singapore, and a government that actively promotes digital transformation. According to Ericsson data, an active smartphone in India consumes an average of 36 GB per month, 44% more than in North America and 71% more than the global average. Additionally, the country's data center capacity has increased 2.5-fold since 2021, reaching 1.5 gigawatts.

The perfect time for investments. The race for artificial intelligence has accelerated this trend. Microsoft plans to open its largest cloud region in India, located in Hyderabad, by mid-2026, and will also expand its three existing data center regions in Chennai, Hyderabad and Pune. For its part, Google will build an AI hub in Visakhapatnam that will include data centers, power sources and fiber optic networks. These investments seek to stay ahead of the competition in a market where demand for cloud services and AI tools is growing rapidly among companies, startups, and government agencies.

Beyond data centers. The investments are not limited to physical infrastructure. Microsoft has committed to training 20 million workers in the country in AI skills by 2030, doubling its initial goal, and claims to have already trained 5.6 million people since January 2025.
Amazon, for its part, claims to have digitized more than 12 million small businesses and enabled $20 billion in cumulative e-commerce exports. Both companies are integrating their technologies into the Indian government's digital public platforms, such as the e-Shram and National Career Service systems, which serve more than 310 million informal workers.

The battle for digital sovereignty. A key element of this strategy is the offer of "sovereign" solutions. Microsoft has launched its Sovereign Public Cloud and Sovereign Private Cloud specifically for customers in India, allowing data and workloads to remain within the country's borders. As the company announced, Microsoft 365 Copilot will process data within India by the end of 2025, making the country one of the first four global markets to receive this capability. "This investment signals India's rise as a reliable technology partner for the world," said Ashwini Vaishnaw, Minister of Electronics and Information Technology.

There are challenges. Despite the investment enthusiasm, India presents significant obstacles. Irregular power supply, high energy costs and water shortages in several regions complicate the expansion of resource-intensive data centers. These factors could slow the deployment of AI infrastructure and raise operating expenses for cloud providers. However, New Delhi is deploying incentives for AI and semiconductor projects, has relaxed some regulatory requirements and is fostering alliances with telecommunications operators and local technology companies to keep adding value to the global AI race from local territory.

Capacity or mass consumption. The interesting thing will be to see whether India gains real technological capacity of its own from so much investment, or whether it simply consolidates itself as yet another consumer market for Big Tech.
The government has approved semiconductor projects worth more than $18 billion under its India Semiconductor Mission, seeking to reduce dependence on imported chips. "India is becoming a hot spot for technology investments," pointed out Dan Ives, analyst at Wedbush Securities. It remains to be seen what all this materializes into.

Cover image | İsmail Enes Ayhan and Naveed Ahmed

In Xataka | Steve Jobs hated obedient teams: he paid his managers to contradict him, not to obey him

Porsche is approaching a turning point in its history with the electric 718. And they are very clear on who to look at: Hyundai

In September 2019, Porsche finally presented the Taycan, its first fully electric car. Or rather, "the first electric car of Porsche's modern era." Be that as it may, the truth is that the car landed like a meteorite in the sports car industry.

With the Porsche Taycan, the Germans had a statement of intent on their hands. With it they showed that their hand would not tremble when launching an electric car on the market, no matter how much tradition and history the brand had behind it. Furthermore, they showed they were one step ahead of the competition: with that electric car they could achieve scandalous figures... and dizzying sensations.

Although modest sales might have been expected, the truth is that the car was embraced by the public and sold in very high volumes. That cruising pace encouraged the company to think that yes, there was a market to exploit. Together with the strategy of a business group governed by European emissions regulations, it seemed clear that the majority of Porsche cars would end up being electric sooner or later.

The question is whether the Porsche Taycan distorted the strategy to be followed. The great success of a flagship model, exotic and far ahead of the rest of the market, did not have to anticipate a generalized embrace of this technology across all the company's cars. The electric Porsche Macan, a model that once offered a V6 in one of the brand's entry cars, seems a good example of how not all Porsche customers are the same, because a good part of the customers who opted for the Macan wanted to get closer to the sensations typical of Porsche at a price their pocketbook allowed. Those sensations have to do, in part, with that V6 heart we mentioned before.

And this is even more pronounced among those looking for a Porsche 718. While the Porsche Macan can be understood as a gateway to the brand, the Porsche 718 is understood as a gateway to "the Porsche experience."
Their customers don't just want a Porsche; they want to enjoy the sensations a mid-mounted engine provides and the sound of a boxer engine. The latter cannot be matched by an electric car, but the brand is convinced it can simulate or equal the rest of the incentives the Porsche 718 currently offers. And to achieve this they have looked to Hyundai.

Hyundai as a reference

Unlike most brands, which have limited themselves to jumping into electric cars by offering ever more powerful versions, Hyundai has done in-depth work with its cars to offer a truly passionate electric car. Or, at least, they have made a serious attempt, which is much more than most brands can say.

That strategy takes shape in the Hyundai Ioniq 5 N. The first "electric N" was born with a clear sporting vocation, not only because of the punch of its 650 hp, but also because of its soundtrack and a careful simulation of gear changes. The result has been so good that Porsche itself recognizes that the sports car has inspired the development of its next electric Porsche 718: a car that should simulate the sensations of a mid-engine layout by placing the batteries behind the driver, shifting the car's weight balance to resemble what a mid-mounted combustion engine feels like today.

But the German company needs to put other incentives on the table. Asked by the Australian outlet Drive, Frank Moser, head of the 718 and 911 ranges, made clear the influence of the South Korean model: "We have learned a lot (speaking about the Hyundai Ioniq 5 N). I have driven it several times. They have done it very, very well."

In his statements, Moser says the car was revealing. He recounts that in one of these tests he told Andreas Preuninger, head of the most radical arm of Porsche's sports cars, that he would come to pick him up at the wheel of the South Korean car.
Preuninger's response was not encouraging: "leave me alone, I don't want to see any of that." However, Moser says that when he pressed the button that unleashes all the power and sportiness of the Ioniq 5 N, his colleague was clearly surprised.

One of the aspects that most surprised the Germans was the simulation of the sound and the gear change. Hyundai has made a big deal of the latter, since it incorporates a mode that turns the vehicle into a sequential-shift car. The idea is that, despite being electric, the car does not always deliver the same thrust: it withholds part of the available torque depending on the rev range in which it is supposedly operating. Toyota seems to be working on something similar, and Honda incorporates the same mode in the new Prelude.

In the absence of testing these innovations, what is certain is that Hyundai's simulated gear change has received good reviews. Top Gear defined it as "quite funny." "My more cynical disposition wanted to laugh at the Ioniq 5 N and its disguised gearbox. I wanted to say it was stupid and sad, and a waste of time. But in all honesty, I enjoyed it. It impressed me. It's there if you want it. If you don't, choose one of the quiet driving modes," Ollie Kew noted in his article.

Photo | Hyundai and Porsche

In Xataka | China has turned the electric car market into a crazy race. And Porsche pays for it with billion-dollar losses
