If you have generated a text, an audio clip, or a video with AI, you had better label it as such. Otherwise you could face a very substantial fine. That is the conclusion of the draft law the Council of Ministers approved yesterday, which is aimed above all at mitigating the risks of deepfakes.
The Pope and the alarm bells. In March 2023, a deepfake of Pope Francis wearing a supposed Balenciaga puffer coat went viral. That set off alarm bells in the European Union, which pushed forward the AI Act, approved by the European Parliament on March 13, 2024, and which entered into force months later, on August 1. Now Spain is adapting that law with a clear purpose.
Spain against deepfakes. The Minister for Digital Transformation and Public Function, Óscar López, explained that "AI is a very powerful tool that can be used to improve our lives or to spread hoaxes and attack democracy."
Failing to label AI content: a serious infraction. This bill treats failure to properly label texts, videos, or audio generated with AI, so that they can be identified as such, as a serious infraction. The EU has been insisting on this since June 2023, but there is a problem: almost no one is doing it at the moment.
Colossal fines. The minister did not specify how that labeling should be performed; according to him, it will be AESIA, Spain's AI supervision agency, that sets the rules for it. As already established by the AI Act, breaching the regulations carries fines of up to 35 million euros and/or between 5 and 7% of the offending company's worldwide turnover.
Labeling initiatives do exist. The need to label texts, images, videos, and audio is evident, but for now there is no universal, widely accepted solution. Google proposed its own approach in May 2023 and strengthened it in October 2024 with SynthID, which can be applied even to short texts and is already used by Gemini. Adobe also moved early to try to solve the problem. Meta has its own watermarks for AI-generated audio. Even OpenAI joined the effort with a combination of a visible CR symbol ("Content Credentials", also promoted by Adobe) and an invisible watermark in AI-generated images.
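The invisible watermarks mentioned above are far more sophisticated than anything shown here (SynthID, for instance, is designed to survive cropping and compression), but the basic idea of hiding a mark inside the media data itself can be illustrated with a toy least-significant-bit scheme. The sketch below is purely illustrative and does not reflect any vendor's actual algorithm; the `MARKER` tag is an invented example.

```python
# Toy illustration of an invisible watermark: hide a marker string in the
# least-significant bits of pixel values. Real schemes are far more robust;
# this only demonstrates the basic embed/extract idea.

MARKER = b"AI"  # hypothetical 2-byte tag meaning "AI-generated"

def embed(pixels: list[int], marker: bytes = MARKER) -> list[int]:
    """Overwrite the LSB of the first len(marker)*8 pixels with marker bits."""
    bits = [(byte >> (7 - i)) & 1 for byte in marker for i in range(8)]
    out = pixels.copy()
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract(pixels: list[int], length: int = len(MARKER)) -> bytes:
    """Read back `length` bytes from the LSBs of the first pixels."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        data.append(byte)
    return bytes(data)

image = list(range(50, 100))  # fake grayscale pixel values
marked = embed(image)
print(extract(marked))        # the marker comes back out
```

Because only the lowest bit of each pixel changes, the marked image is visually indistinguishable from the original; the trade-off is that this naive scheme is destroyed by any re-encoding, which is exactly the weakness production watermarks try to overcome.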


The C2PA standard's architecture shows how each piece of content carries metadata indicating the steps in which it has been edited and how, including any use of AI. Source: C2PA.
The C2PA standard and the search for consensus. The most notable proposal in this regard is the C2PA standard (Coalition for Content Provenance and Authenticity). This coalition manages technologies such as the aforementioned Content Credentials specification for labeling AI-generated content. Many of the major companies belong to the group, including OpenAI, Amazon, Google, Meta, and Microsoft, but there is one notable absence: Apple, which (inexplicably) does not seem to have taken a position on the matter.
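The core mechanism behind provenance standards like C2PA is binding a list of edit steps to a cryptographic hash of the content, so the record is only trustworthy while the content is unmodified. The sketch below illustrates that idea with Python's standard library; the field names and structure are simplified for illustration and are not the actual C2PA manifest schema (real manifests are also cryptographically signed).

```python
# Minimal sketch of a C2PA-style provenance manifest: a hash of the content
# plus a list of the actions that produced it. Field names are illustrative,
# not the real C2PA schema.
import hashlib

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_manifest(data: bytes, actions: list[dict]) -> dict:
    """Bind a list of edit steps (tool used, AI involvement) to the content hash."""
    return {
        "content_sha256": content_hash(data),
        "actions": actions,
    }

def verify(data: bytes, manifest: dict) -> bool:
    """The manifest only vouches for content whose hash still matches."""
    return manifest["content_sha256"] == content_hash(data)

image_bytes = b"\x89PNG...fake image data"
manifest = make_manifest(image_bytes, [
    {"action": "created", "tool": "generative-ai-model", "ai_generated": True},
    {"action": "resized", "tool": "photo-editor", "ai_generated": False},
])
print(verify(image_bytes, manifest))                # True: content unmodified
print(verify(image_bytes + b"tamper", manifest))    # False: any edit breaks the binding
```

This is why the standard describes "steps": each subsequent edit appends its own action and re-binds the hash, producing the chain of provenance the figure above depicts.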
Where are the AI labels? Although the labeling technology exists, it has not yet been widely adopted. There are isolated cases: Meta began labeling images on Instagram and Facebook, but it is having problems applying that labeling to already published content. YouTube has also taken steps in that direction, and Google is beginning to implement it in its search engine and its ads. Even manufacturers such as Sony and Leica label photos in some of their cameras.
Label everything, with or without AI. Ideally (utopia?), all content would be labeled in one of two ways: either it was made with the assistance of AI, or it is content in which AI played no part. The camera apps on our Android and iOS phones are the perfect example: those images and videos should carry a label along the lines of "AI-free content". This is what Sony and Leica cameras propose, but it poses a huge problem for the entire hardware and software segment.
Adoption needs to take off. Companies seem to agree that something like this is necessary and reasonable, but getting it started is proving to be an especially long and complex process. Cryptographic labeling is probably the best option to prevent deepfakes from becoming an even greater threat, but a decisive step remains: companies actually adopting these measures, even if only little by little.