Brussels has agreed to delay the toughest restrictions of its AI Act until December 2027, which in practice pushes its real impact to 2028. The original calendar was far more ambitious and intended to ban and punish AI systems classified as “unacceptable risk”, but all of that is now a dead letter. It makes clear that EU institutions are not prepared to supervise what they set out to regulate. Just ask AESIA.
A victory with risks. German Chancellor Friedrich Merz has had Germany dictate the pace in Brussels. Merz pushed until the last minute to get industrial AI applications virtually off the law’s radar. This will allow German industrial giants such as Siemens and Bosch to avoid complying with the regulations. According to Politico, it is a political victory for Berlin (although other companies, like ASML, asked for the same), one that protects its heavyweights but poses a problem: if we let AI control factories and critical infrastructure, we risk its failures having an enormous impact, which is exactly the scenario the AI Act set out to prevent. What was considered high risk six months ago no longer is.
Brussels comes to its senses. MEPs have finally understood something they had long refused to admit: being the “sheriff” of the internet is useless if you don’t have your own AI industry. While Europe regulated, the US and China grew unchecked. The agreement is a clear admission that heavy-handed regulation of a market you do not dominate makes no sense. The EU is now giving its companies some breathing room instead of imposing rules the rest of the world is simply ignoring. The “Brussels effect” has a ceiling, and this delay marks it.
The AI Act does not give up completely. The EU has, however, included an express ban on AI systems capable of generating deepfakes of recognizable people. It is a direct response to the controversy generated by tools like those present in X, and a way to keep some of the law’s protective spirit alive. The obligation to label AI-generated content also remains, and its grace period is now stricter: three months instead of the previous six. Even so, there is a clear surrender on what mattered most.
Careful. If the EU has decided to delay its flagship AI law under industrial pressure, what will stop the DMA or the DSA from suffering the same fate? Both regulations have been caught up in complex industrial battles for some time: Apple and Meta continue to resist meeting the interoperability and transparency requirements, and the Commission has had to soften its own demands. The AI Act precedent is dangerous because it shows that political pressure works.
Regulating so much, for what? The EU has long sought to lead technological regulation without leading (or even competing, in many areas) in technological innovation. The GDPR worked as a global standard because Europe was a large enough market to impose conditions of entry. The difference is that AI depends on something very different, and here the feeling is that all Europe is doing is trying to fence off an open field. Letting its own AI Act be modified first and delayed later is nothing more than a tacit admission that the EU’s regulatory strategy and ambition has been a shot in the foot, one that has worsened its conditions to compete with other countries.
Image | World Economic Forum