Insurers have started to turn their backs on AI companies

Since the end of 2022 we have been watching the artificial intelligence revolution unfold in real time. The launch of ChatGPT opened a period of investment and expectations that has lifted players like NVIDIA and placed OpenAI among the most influential startups. But every revolution has a flip side. As AI advances, so does the list of lawsuits, along with a question no one can avoid: who bears the risk when something goes wrong?

In the United States, every technological advance arrives with an avalanche of lawsuits. It's not just a habit: it's part of the system. If a company does something that generates profits but can also cause harm, sooner or later someone will take it to court. That is why insurance exists: to convert a future risk into a present cost. The model has worked for decades, but artificial intelligence is testing it like no other sector before.

The cases applying pressure now. OpenAI and Anthropic have been the first to see how far the risk bill can go. The former faces lawsuits over the use of protected works to train its models and a civil liability case following the suicide of a teenager. In both cases the costs are not just in the millions: they set the tone for litigation that threatens to spread across the sector.

What policies cover today. For now, the major AI companies operate with conventional policies, similar to those of any technology company. According to the Financial Times, OpenAI has hired Aon to design coverage of around $300 million, although not everyone involved confirms that figure. It is a significant amount, but insignificant compared to potential claims in the billions. In practice, insurers acknowledge that the sector does not yet have "sufficient capacity" to protect providers of large-scale models.

Why are insurers backing away?
The newspaper notes that Aon declined to comment on specific companies, although its head of cybersecurity, Kevin Kalinich, admitted that insurers do not have sufficient capacity to cover model providers. He further explained that what insurers fear is that a failure by one AI company could become a "systemic, correlated and aggregated risk."

Plan B: self-insure. With insurers pulling back, AI companies are seeking refuge in themselves. OpenAI is reportedly considering setting aside investor funds or even creating a captive, a kind of in-house insurer that covers internal risks when the market refuses to. Anthropic has already gone down this road: it allocated part of its capital to a $1.5 billion settlement with writers. These solutions buy time, but they do not guarantee stability if the next court ruling triggers massive compensation.

What changes for the rest of the sector. The impact goes beyond OpenAI and Anthropic. Startups and smaller providers are already noticing that premiums are rising, coverage is shrinking, and launch timelines are lengthening due to legal requirements. Legal uncertainty has become another fixed cost. In the absence of a clear formula for measuring AI risks, insurers treat them as potentially catastrophic. That makes every experiment, every new model and every line of code more expensive.

What to watch from now on. The coming months will be decisive in seeing whether the insurance sector manages to adapt. The Financial Times points to new formulas covering chatbot errors and AI-generated content, although for now these are limited trials. Companies, meanwhile, are preparing their next line of defense: diversifying funds and shielding internal structures. The artificial intelligence industry has not slowed down, nor does it look like it will. But its expansion is beginning to bump against the limits of a system that does not yet know how to measure these risks.
Insurers tread carefully, regulators watch from the sidelines, and in some cases companies are forced to improvise.

Images | vecstock (Freepik) | Xataka with Gemini 2.5

In Xataka | "These are things a university student would get in trouble for": Deloitte delivered a report made with AI to Australia

Seven autonomous communities backtrack on classroom screens, marking a change in trend

The Spanish education system has a peculiar way of embracing innovations: with excessive enthusiasm at first, and with equal intensity when rejecting them later. First came the aspirational devotion to the Finnish model, then the obsession with digitizing every classroom. Today, just over a decade after that race to fill classrooms with laptops, we are seeing the start of a pendulum swing in the opposite direction: seven autonomous communities, governed by both the PP and the PSOE, are drafting regulations to reduce screens in class, according to El Mundo.

This turn is not born of political whim. The results of the latest PISA tests and other international assessments have been another bucket of cold water: reading comprehension and basic mathematical skills are collapsing while students spend more time with a tablet and less with pen and paper.

Madrid, the Region of Murcia, the Valencian Community, the Balearic Islands, Galicia, Asturias and Catalonia are preparing different regulations, but all with the same spirit: bring back paper, limit the hours spent in front of screens, and reflect on whether homework really needs to be done digitally.

We are also seeing many politicians seek easy applause by cursing screens in classrooms, yet none engage in self-criticism over the days when the same podium rewarded the slogan "one child, one laptop." It is a selective amnesia that lets them navigate comfortably between opposite trends without assuming responsibility.

The correction has no political color. The Socialist Asturian Lydia Espina and the conservative Murcian Fernando López Miras agree on the fundamentals even if they disagree on the forms, to give two examples. Perhaps the most notable difference is that while Madrid opts for a more blunt prohibition, other regions prefer to issue recommendations, leaving schools a certain room for maneuver. But the message is clear: enough with tablets for everything.
The most interesting thing about this pendulum swing is that it arrives right now, when the technology industry keeps promising an educational revolution through its devices, and when AI is beginning to appear in classrooms. But between technological determinism and nostalgia for graph-paper notebooks, perhaps what we are really seeing is a healthy exercise in maturity: technology should be a means, not an end. And like any tool, it has its optimal moments of use. Not all the time. Not for everything. And certainly not for everyone at the same age. We are starting to see that.

In Xataka | China is planning its future in three dimensions. That is why it will introduce an AI subject in schools

Featured image | Xataka with Midjourney
