The Spanish Agency for the Supervision of Artificial Intelligence (AESIA) is close to completing its first year of operation. The organization has launched several initiatives whose results are still difficult to pin down, but one thing is clear: when it comes to supervision, properly speaking, it does not seem to be supervising anything. The danger, once again, is continuing the European drift: trying to avoid the risks posed by AI is all well and good, but what Europe and Spain need is something else.
It neither supervises nor sanctions. The great paradox of the agency, officially headquartered in A Coruña, is that after months of operation it has not yet exercised its theoretical sanctioning power, nor has it audited a single critical Big Tech algorithm. For now, its work has focused on providing "early access" to the regulations.
The eternal criticism. Although the European AI Act has allowed systems that violate fundamental rights to be banned since February 2025, AESIA has not opened a single significant case. Alberto Gago, its director, recently declared in El País that "We are sure that no prohibited AI operates in Spain." Its current work is very different: it is limited to pedagogy and accompaniment, leaving the regulatory "bite" for a future that, for now, seems far from arriving. Meanwhile, the real AI market continues to be defined by companies from the US and China, which keep releasing new models with practically no regulatory restrictions, while Spanish and European companies labor under the yoke of a regulation that threatens to block them before they can even launch projects of this kind.
For now, it only writes manuals. In fact, the agency has become a free legal consultant for a dozen companies in a recently created "regulatory sandbox". This initiative, which boasts of being a year ahead of the mandatory deadlines of the European AI Act, is meant to act as a controlled space where companies can test their AI systems. Of the 200 applications, 12 projects were selected, but the outcome of this effort consists of writing technical guides that help companies comply with the regulations. The sandbox also raises doubts about details such as its duration: one year, which may be too long given how fast this segment moves.
A civic center as a temporary headquarters. AESIA should already be using the facilities of the La Terraza building, but that location remains under a concession to RTVE which does not, in theory, end until 2034. It is difficult to project an image of international technological sovereignty when the agency's main office operates from the Casa Veeduría, a shared space that hosts neighborhood workshops and association meetings, far removed from the massive data centers it aims to oversee. The image of a cutting-edge regulator working amid this kind of activity is probably not the best thing for its operational credibility.
Thirty professionals against Big Tech's billions in investment. There is a worrying disproportion between the ambition of the government's narrative and, for example, the actual staff currently available to AESIA. At the launch announcement, 80 highly specialized employees were promised, but figures from August 2025 indicate there are barely 30 professionals on staff covering all areas. The task seems mammoth if an organization like this intends to supervise all the models that will come into operation in Spain. Its official website currently lists two calls to fill permanent and temporary positions, plus six calls for civil servants.
The Ideas Laboratory. Last April, this "multidisciplinary faculty" was launched to anticipate ethical challenges around gender, minors and misinformation. Although the topics are vital, the purely academic format clashes with the extreme speed at which the AI industry moves. It is especially striking that the body issues Christmas toy recommendations while global corporations redefine geopolitical power through massive language models that now threaten to unbalance even the pillars of the economy.
Good intentions are of little use. There is an evident mismatch between this laboratory's philosophical mission and technical reality. Although this kind of public education work is certainly necessary, it should not be the main function, nor the greatest achievement, of a high-level technical supervision agency. AESIA is behaving more like a citizen-service department than like an organization capable of analyzing how the algorithms that grant us credit or diagnose diseases actually work.
ALIA, a compromising example. We already have a first worrying case with ALIA, the AI model developed at the BSC. The model has been certified by AESIA, which indicates that it complies with the regulations. However, its launch and evolution continue to be erratic and worrying, even granting that the resources available to the project are far below those of startups in the US or China. The rigor of the certification is debatable, and it calls into question whether AESIA will have the capacity to oversee the most advanced AI models.