The Spanish National Police used AI for six years to detect false complaints. Its real reliability was highly debatable

In October 2018, the National Police published a press release in which everything sounded great. It described a new artificial intelligence tool called Veripol that promised 90% accuracy in detecting false complaints. Six years later, we have learned that this promise was highly debatable.

The Technical Cabinet of the General Police Directorate confirmed to Civio that the tool ceased to be operational in October 2024, six years after its launch. As that outlet points out, the decision was not accidental.

Three months earlier, the new artificial intelligence regulation had been published in the BOE (Spain's official gazette). Recital (59) points to AI polygraphs as high-risk AI tools, stating the following:

"(…) It is appropriate to classify as high-risk several AI systems intended to be used for law enforcement purposes, where their accuracy, reliability and transparency are especially important to avoid adverse consequences, preserve public trust and guarantee accountability and effective remedies (…) the risk of a natural person being a victim of crime, such as polygraphs and other similar tools."

Not only that: a group of law and mathematics experts from the University of Valencia highlighted in a study that hardly any information about Veripol was available, which made auditing it especially complicated. While acknowledging that this forced them to treat their conclusions as conjecture, they stressed that the situation was "very poor in terms of compliance with the minimum transparency standards" required for tools of this kind.

Are a few specific words enough to detect lies?

Civio conducted a study of the tool's reliability. After analyzing 1,122 theft complaints filed in Spain in 2015, Veripol's behavior turned out to be peculiar: if a complaint contains words such as "day", "lawyer", "insurance" or "back", it is more likely to be flagged as false, and that probability increases if words like "two hundred" or "barely" appear several times.


Veripol began to be evaluated in a pilot program in June 2017, and it won a research award from the Spanish Police Foundation. Its success was even covered in Scientific American. Little by little, its use expanded until it was officially activated nationwide in the aforementioned October 2018. Usage was notable until October 2020 (around 84,000 complaints), while in 2022 it was apparently applied to only 3,752 complaints, of which 511 were flagged as false.

The project was developed by the Complutense University of Madrid (UCM), the Carlos III University of Madrid, the University of Rome "La Sapienza" and Spain's Interior Ministry. A UCM announcement notes that the initiative began in 2014 and started being tested, with apparent success, in 2017. Those preliminary tests, however, were run on a sample that experts described as scarce.

That statement highlighted that "it is the first time worldwide that a tool of these characteristics has been developed" and that Veripol performed an "automatic analysis of complainants' statements using natural language processing and machine learning techniques, with a success rate of 91%, fifteen points higher than that of expert agents." Its operation was described as follows:

"For example, it is known that in robbery cases, true statements present more details, descriptions and personal information, as opposed to false ones, which insist exclusively on the stolen object and omit details about the attacker or how the incident happened. From this linguistic analysis, Veripol is able to create an effective pattern."
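Veripol's actual model and features were never published, but this kind of linguistic-pattern classification is commonly implemented as a weighted bag-of-words scorer. A minimal sketch of the idea, where the word list, weights and function names are purely hypothetical and not Veripol's real parameters:

```python
import math
from collections import Counter

# Hypothetical word weights: positive values push a complaint toward "false".
# Illustrative only; Veripol's real features and weights were never disclosed.
WEIGHTS = {
    "lawyer": 0.8,
    "insurance": 0.6,
    "back": 0.4,
    "knife": -0.9,   # concrete detail about the attacker lowers the score
    "scar": -0.7,
}
BIAS = -0.2

def false_complaint_probability(text: str) -> float:
    """Score a complaint with a logistic model over word counts."""
    counts = Counter(text.lower().split())
    score = BIAS + sum(WEIGHTS.get(w, 0.0) * n for w, n in counts.items())
    return 1.0 / (1.0 + math.exp(-score))  # logistic link -> probability

vague = "they took my bag i want the insurance and my lawyer said to report it"
detailed = "a tall man with a scar threatened me with a knife near the station"
print(false_complaint_probability(vague) > 0.5)      # True
print(false_complaint_probability(detailed) < 0.5)   # True
```

A model like this makes the agents' complaint plausible: a single heavily weighted word ("knife" above) can dominate the decision regardless of the rest of the statement.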

"In just one week, 31 and 49 false theft cases were detected and closed, while between 2008 and 2016 the figures were 3.33 and 12.14 in Murcia and Malaga, respectively. The effectiveness of the pilot study was 83%."

A doubtful reliability

The full study by the University of Valencia reflects the tool's debatable functioning. The authors note that Veripol's designers claimed that false robbery complaints are "extremely common" and that this crime is "generally carried out by citizens who have no criminal record." There are no clear figures, yet despite being unable to estimate the real rate of false complaints, "they suggest that it could be around 57%", a figure that, interestingly, rests on unsolved cases.
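The uncertain base rate matters because a headline accuracy figure like 91% says little on its own. A quick back-of-the-envelope illustration using Bayes' rule, where the assumed 91% sensitivity and specificity are hypothetical (the press release's single "success rate" does not say how it splits across classes):

```python
def precision(base_rate: float, sensitivity: float, specificity: float) -> float:
    """Share of complaints flagged as false that really are false (Bayes' rule)."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# If false complaints really were ~57% of the total, as the designers suggested:
print(f"{precision(0.57, 0.91, 0.91):.0%}")  # 93%
# But if they were rare, say 10%, nearly half of flagged complaints are genuine:
print(f"{precision(0.10, 0.91, 0.91):.0%}")  # 53%
```

In other words, the practical meaning of Veripol's verdicts depends heavily on a base rate that nobody could actually measure.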

In 2020, Maldita.es carried out an analysis of the evolution of Veripol's use, noting that "the fact that total registered crimes remain constant while Veripol usage figures decrease can be an indication that the algorithm is not being used much among police officers." For some agents, they pointed out, "the program is not very accurate", and although in theory it could work well, applying it in police stations was complex because training was needed to use it.

AlgorithmWatch, a non-governmental, non-profit organization based in Berlin and Zurich, regularly analyzes algorithms and AI systems to assess their reliability and validity. In October 2020 it evaluated Veripol's behavior, and its conclusion was already damning: "It is not clear if it works as intended."

They also explained that lie detectors (which is in essence what Veripol is) have a long track record of poor performance. In this particular algorithm, they worried, for example, that some specific words carried too much weight in the decision. One of the agents interviewed at the time noted that the mere presence of the word "knife" in a report was enough for it to be considered true.

We are therefore looking at a technological tool, in this case theoretically backed by AI, that raised doubts from the beginning about the transparency of its true functioning and development, and that began to be used before the preliminary evidence was conclusive, given the size of the samples.

We have recently seen, for example, how the Ábalos case served to show that there are many risks here, and that algorithms must be carefully evaluated before being deployed in public bodies of all kinds. In that specific case, an AI used to transcribe statements turned some texts into gibberish, and that is precisely what the EU's AI law and bodies like AESIA in Spain should try to prevent.

Image | National Police

In Xataka | AI videos have broken the Instagram and TikTok algorithms. Welcome to the new "AI landscape"
