A woman named Lina went to the police last January. She reported her ex-partner, and her case was registered in Viogén. This system, based on an algorithm, determined that Lina was a "medium" risk case. Three weeks later she was allegedly murdered by him. It is not the first time something like this has happened, and it shows that we have a serious problem with our dependence on algorithms.
The origin of Viogén. The Interior Ministry developed the Viogén system (Integral Monitoring System for Cases of Gender Violence) in 2007. Among its objectives was to produce a risk prediction and, based on that prediction, to arrange the monitoring and protection of victims.
How it works. The system is based on collecting and analyzing data from various sources, such as police reports, protection orders and criminal records. When a complaint is filed, for example, the victim answers a series of questions about the episode of aggression, her situation, any children, the aggressor's profile and aggravating vulnerability factors such as economic dependence.
Risk levels. Based on that assessment, each case is assigned one of four risk levels (1: low, 2: medium, 3: high, 4: extreme). Each level entails specific measures, which can range from the assignment of telecare devices to restraining orders. At the extreme level, women receive 24-hour police surveillance.
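To make the mechanism just described more concrete, here is a minimal Python sketch of how questionnaire answers could be mapped to one of the four risk levels and its associated measures. The real Viogén questionnaire, its weights and its thresholds are not public, so the scoring rule, cut-off values and measure lists below are invented purely for illustration.

```python
# Hypothetical sketch of a Viogén-style risk classifier.
# The real questionnaire, weights and thresholds are not public;
# the scoring and cut-off values below are invented for illustration only.

RISK_LEVELS = ["low", "medium", "high", "extreme"]  # the four levels described above

# Example protection measures per level (simplified from the article's description).
MEASURES = {
    "low": ["periodic police follow-up"],
    "medium": ["periodic police follow-up", "telecare device"],
    "high": ["telecare device", "restraining order"],
    "extreme": ["restraining order", "24-hour police surveillance"],
}

def classify(answers: dict[str, int]) -> tuple[str, list[str]]:
    """Map questionnaire answers (0-3 severity each) to a risk level and measures."""
    score = sum(answers.values())  # hypothetical: a plain sum of item severities
    # Hypothetical thresholds; the real system's cut-offs are not published.
    if score >= 20:
        level = "extreme"
    elif score >= 12:
        level = "high"
    elif score >= 6:
        level = "medium"
    else:
        level = "low"
    return level, MEASURES[level]

if __name__ == "__main__":
    sample = {"prior_aggressions": 2, "threats": 3, "economic_dependence": 1, "children_at_home": 1}
    print(classify(sample))  # ('medium', [...]) with these invented thresholds
```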
Viogén 2. The system has evolved since its creation, and in recent months its second version, Viogén 2, has been rolled out. As explained in Artículo 14, the algorithm was updated with changes such as eliminating the "risk not appreciated" level and making it harder to deactivate open cases. A new supervised inactivation modality thus appears, which keeps police control mechanisms in place for a period of six months, extendable to one year. That makes it possible to keep monitoring cases in which police experts have found no risk for the woman, or only a low one.
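For the supervised inactivation window alone, a tiny sketch can make the timing concrete: six months of continued police control, extendable up to one year. The class and field names are invented; only the durations come from the description above.

```python
# Illustrative sketch of the "supervised inactivation" described above:
# a case leaves active monitoring but keeps police control mechanisms for
# six months, extendable to one year. Names are invented for illustration.

from datetime import date, timedelta

class SupervisedInactivation:
    BASE_MONTHS = 6    # initial supervision period
    MAX_MONTHS = 12    # extendable up to one year

    def __init__(self, start: date):
        self.start = start
        self.months = self.BASE_MONTHS

    def extend(self, extra_months: int) -> None:
        """Extend supervision, never beyond the one-year maximum."""
        self.months = min(self.months + extra_months, self.MAX_MONTHS)

    def ends_on(self) -> date:
        # Approximate months as 30-day blocks for simplicity.
        return self.start + timedelta(days=30 * self.months)

case = SupervisedInactivation(start=date(2025, 1, 15))
case.extend(6)         # e.g. supervision extended to the full year
print(case.ends_on())  # approximate end of the supervised period
```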
Zero protocol. There are also modifications that will allow cases of unappreciated, low or medium risk to be deactivated if the victim requests it in a "voluntary, manifest and repeated" way. Even so, there is the so-called "zero protocol", designed to minimize the risk for victims who express their wish not to file a complaint. According to the Equality Ministry's macro-survey, the vast majority of victims never report, and the protocol seeks to protect them too: institutions only know about 21.7% of cases, according to that survey.
Tragedies everywhere. The problem is that the system is not entirely effective. The alleged murder of Lina is the latest example of Viogén's limited reliability. In October 2024, a 56-year-old woman was killed despite having asked for help twice. Earlier in 2024, another woman was killed by her partner; her case was also part of the Viogén system.
The algorithm seems to minimize the risk. In Lina's case, for example, the Viogén system assigned her a "medium" risk level, and that seems to happen on more occasions. In September 2024, 96,644 women were registered in the Viogén system, but only 12 of them were considered at extreme risk: around 0.01% of the total (12 / 96,644 ≈ 0.012%). Both the Minister of Equality, Ana Redondo, and the Minister of the Interior, Fernando Grande-Marlaska, have downplayed the errors, acknowledging that "the model is not infallible, but it saves many lives."
New alarms about AI and algorithms. In recent times we are seeing more and more cases in which excessive confidence is placed in algorithms used by public administrations and institutions on especially sensitive matters.


The AI does not stop making mistakes. It happened with the VeriPol system, which used AI to detect false complaints: its real reliability was highly debatable. Shortly before, in March, we saw the 'Ábalos case' scandal, in which an AI used to transcribe the statements of witnesses and defendants made mistakes and ended up turning some paragraphs into gibberish. The facial recognition AI being used, for example, in video surveillance cameras in Madrid has also raised privacy alarms. In the United Kingdom, an AI was used to predict crimes, Minority Report style, and its results were dismal. Attempts to apply AI in judicial and police processes have also led to worrying conclusions.
Lack of transparency. These systems are also often criticized for their lack of transparency. VeriPol is a good example, but there have been others. In 2024 we talked about the BOSCO system, used by electricity companies to decide who can and who cannot receive the social bonus that subsidizes electricity bills. The Government refused to share its source code, citing public security and national defense reasons. Nor is this a problem exclusive to Spain: in the US, for example, there is an algorithm that suggests to judges what sentences to impose, but its code is secret. In such delicate matters, the lack of transparency about how these algorithms work is especially worrying.
Weren't there agencies for this? In 2021, the creation of a Spanish Agency for the Supervision of Artificial Intelligence (AESIA) was announced. It was apparently meant to focus on monitoring compliance with the Digital Services Act (DSA) on platforms such as the big social networks, and in fact in 2022 Sevilla was chosen to house the first European Centre for Algorithmic Transparency (ECAT).
What about AESIA. More recently we have seen AESIA finally take shape, with its headquarters in A Coruña and plans to start operating in 2025, theoretically focusing on the application of the EU AI Act. Its stated objective is to carry out "measures for the minimization of significant risks to the safety and health of people, as well as to their fundamental rights, that may derive from the use of AI systems." The cases of Viogén and VeriPol, and what happened in the 'Ábalos case', would seem to fall squarely within that remit, and it remains to be seen whether this agency's activity helps ensure that both the algorithms used and their application are sound.
Image | James Harrison | National Police