Since the end of 2022 we have watched the artificial intelligence revolution unfold in real time. The launch of ChatGPT opened a period of investment and expectations that has lifted players like NVIDIA and placed OpenAI among the most influential startups. But every revolution has a flip side. As AI advances, so does the list of lawsuits, and with it a question no one can avoid: who bears the risk when something goes wrong?
In the United States, every technological advance arrives with an avalanche of lawsuits. It’s not just a habit: it’s part of the system. If a company does something that generates profits but can also cause harm, sooner or later someone will take it to court. That is why insurance exists: to convert a future risk into a present cost. The model has worked for decades, but artificial intelligence is starting to test it like no sector before.
The cases already pressing. OpenAI and Anthropic have been the first to discover how high the risk bill can climb. The former faces lawsuits over the use of copyrighted works to train its models and a civil liability case following the suicide of a teenager. In both cases the costs are not just in the millions: they set the tone for a wave of litigation that threatens to spread across the sector.
What policies cover today. For now, the major AI companies operate with conventional policies, similar to those of any technology firm. According to the Financial Times, OpenAI has hired Aon to design coverage of around $300 million, although not everyone involved confirms that figure. It is a significant amount, yet small next to potential claims running into the billions. In practice, insurers acknowledge that the sector does not yet have “sufficient capacity” to protect providers of large-scale models.
Why are they backing away? The newspaper notes that Aon declined to comment on specific companies, although its head of cybersecurity, Kevin Kalinich, admitted that insurers lack sufficient capacity to cover model providers. He explained that what insurers fear is that a single failure by an AI company could become a “systemic, correlated and aggregate risk.”
Plan B: self-insure. With insurers pulling back, AI companies are seeking refuge in themselves. OpenAI is reportedly considering setting aside investor funds or even creating a captive, an in-house insurer built to cover internal risks when the market refuses to. Anthropic has already gone down that road: it committed part of its capital to a $1.5 billion settlement with writers. These solutions buy time, but they do not guarantee stability if the next court ruling triggers massive compensation.
What changes for the rest of the sector. The impact reaches beyond OpenAI and Anthropic. Startups and smaller providers are already noticing premiums rising, coverage shrinking, and launch timelines stretching under legal requirements. Legal uncertainty has become another fixed cost. In the absence of a clear formula for measuring AI risks, insurers treat them as potentially catastrophic, and that makes every experiment, every new model, and every line of code more expensive.
What to watch from now on. The coming months will show whether the insurance sector manages to adapt. The Financial Times points to new products covering chatbot errors and AI-generated content, although for now these are limited trials. Companies, meanwhile, are preparing their next line of defense: diversifying funds and shielding internal structures.
The artificial intelligence industry has not slowed down, and it shows no sign of doing so. But its expansion is beginning to touch the limits of a system that does not yet know how to measure these risks. Insurers tread carefully, regulators watch from the sidelines, and companies, in some cases, are forced to improvise.
Images | vecstock (Freepik) | Xataka with Gemini 2.5