Deepfakes are much more than a bad joke. Now the Government wants to make them a violation of the right to honor

The year started with X filling up with photos of women in bikinis. Everything seemed normal, except that it was other users who had "undressed" them using Grok, Elon Musk's AI. In the midst of the revived debate about deepfakes, the Government has announced a new draft law that seeks to combat them.

Against deepfakes. The text is a modification of the organic law on civil protection of the right to honor, personal and family privacy, and one's own image. According to the Minister of Inclusion, Elma Saiz, "it is a more protectionist text, adapted to new technologies." "Ultra-impersonations carried out with artificial intelligence," or deepfakes, will be a crime when the affected person does not give their consent and the objective is to undermine their moral integrity by generating sexual or humiliating content. In these cases there may be a prison sentence of up to two years. The draft also raises the age of consent for ceding one's image from the current 14 years to 16. The text continues to consider use that damages the reputation of the affected person to be illegitimate, even if they have given their consent.

After death. The main novelty of this reform is that it protects a person's image or voice even after death, as long as this has been specified in the will. As they point out in El País, this could directly affect some true-crime content in which AI is used to recreate the image or voice of murder victims. Another case it contemplates is when the perpetrator of a crime recounts its details in podcasts, interviews or other media: if that account reopens the victim's wound, it will be considered an unlawful interference with their rights. Remember the case of José Bretón's book.

The exceptions. Those that already existed in the old law are maintained, such as recordings authorized by a judge or the publication of private conversations, as long as their content is news of general interest.
The novelty is that specific AI exceptions are included: the image or voice of a public figure may be used in a creative or humorous context. Of course, it must be clearly specified that AI was used in its creation.

Was it necessary? This is the question raised by some lawyers, such as Borja Adsuara on his X profile. His argument is that the current law already protects the right to honor in all areas, so there was no need to mention new technologies such as social networks, AI or deepfakes. However, as Reuters points out, the European Union is requiring member countries to regulate deepfakes, especially those with non-consensual sexual content, by 2027.

Previous cases. The Grok case has reignited the debate about deepfakes because of the volume of images generated, but it is not the first time this type of practice has occurred. In 2023 Spain saw its first massive case, when some teenagers generated fake nude images of several minors. Recently we also learned about the first fine from the Spanish Data Protection Agency, imposed on a minor who used an app to "undress" a classmate.

Image | Unsplash (edited)

In Xataka | The United Kingdom is tired of people bypassing porn blocking: its new idea is to block it on iOS and Android

Alibaba has a new open-source model to generate videos. The problem is that it is being used to generate pornographic deepfakes

Last week Alibaba launched Wan 2.1, a new AI model for video generation that competes with others such as Sora, from OpenAI, or Veo 2, from Google. Within 24 hours the tool had become responsible for dozens of pornographic videos published on the Internet.

Wan 2.1 is also an open-source model whose code is available on GitHub. Very soon, users fond of pornographic content who also had the appropriate technical knowledge took advantage of the model for exactly that: generating porn videos. 404 Media reported how, in communities dedicated to producing and sharing pornographic deepfakes without the consent of the people depicted, users were "salivating" over Alibaba's advanced model.

Some of those videos were shared on Civitai, a platform where users share images and videos generated with AI tools, but where such images are also bought and sold. The statistics on the pages of the AI models used to create those images and videos show that these models have already been downloaded hundreds of times. There are already dozens of pornographic videos created with Wan 2.1, according to 404 Media.

Civitai allows users to share AI models, but non-consensual pornographic content is not permitted. That does not prevent users from downloading the models and using them to produce such content themselves, the outlet notes. The problem, once again, is the use of tools that can generate perfectly appropriate and spectacular videos but can also be put to uses like these. We have already seen how this type of deepfake can become very profitable, and the advance of these tools makes this kind of content ever harder to control.

In Xataka | In South Korea, deepfake porn has become a nightmare. Its solution: three years in jail for those who view it
