Using facial recognition to hunt for cheaters seemed like a good idea. This Valencian university has just discovered that it was not

Educational centers that decide to hold online exams face a challenge: without being able to monitor students in person, how do you ensure that they do not cheat? A Valencian university thought it had found the solution in a sophisticated video surveillance and facial recognition system. Well, the joke has cost it dearly.

Resolution. In the summer of last year, a complaint was filed with the Spanish Data Protection Agency (AEPD) against the International University of Valencia, or VIU, over the use of facial recognition and recording to conduct online exams. As reported by À Punt, the resolution has now arrived and the VIU will have to pay 650,000 euros.

The system. The VIU's evaluation regulations detail that a "facial recognition technology system" will be used in online tests. The system relies on two cameras (which the student must provide): one to monitor the student and another for the environment, ensuring that there are no other people in the room. The software constantly captures and analyzes images in real time to verify the student's identity through AI. At the same time, the program monitors the screen and even the devices connected to the computer on which the test is taken.

Two fines. The 650,000 euros are actually the sum of two fines. The first, of 300,000 euros, is for failing to comply with article 9 of the GDPR, which prohibits the processing of biometric data with few exceptions. The second, of 350,000 euros, is for a breach of article 5.1.c of the GDPR, which requires that personal data be "adequate, relevant and limited to what is necessary." The AEPD considers the use of facial recognition for this purpose disproportionate.

Consent discarded.
One of the exceptions to article 9 of the GDPR, and the one the VIU tried to rely on, is that "the interested party gave explicit consent." It is true that the students had agreed to use this control system; the problem is that they were given no alternative: either they accepted, or they did not take the exam. The AEPD does not "consider the mandatory acceptance of general conditions upon registration to be valid consent," which is why it rules it out in its resolution. The VIU also tried to shelter in the "essential public interest," another of the exceptions of article 9, but the AEPD rejected it because there is no specific law for the processing of biometric data in the educational context. The university invoked the university law that requires universities to verify that students have acquired certain knowledge, but the AEPD rejected that too as insufficient.

Time to pay up. It is not just the VIU. Other universities, such as the European University, Isabel I, La Rioja or Burgos, also use similar systems combining cameras and facial recognition. During the pandemic there was no choice but to opt for online training, and this prompted the appearance of video surveillance systems in exams, which raised the eyebrows of the AEPD, which already warned in 2021 that biometrics could not be used to monitor exams. This resolution is the first to impose a large fine, so it is safe to assume that universities will make changes if they do not want to end up paying too.

Open door. The AEPD does not close the door on the use of biometrics for fraud prevention in the educational field, including AI systems. However, it points out that according to the European Union's AI Regulation, biometric data is considered high risk, which does not prohibit its use, but does not give express permission to use it in this context either.

In Xataka | I'll take the exam online for €20: the new student situation is an open bar for cheating

Images | VIU, Pexels

This 4K surveillance camera has free facial recognition, night vision and two-way audio

Having a surveillance camera at home can give us an important extra layer of security. Not only to monitor the home when we are away, but also to take a quick look and see what our children and pets are up to. Although there are many options, going for one with 4K resolution can give us extra detail that really matters. If we are looking for one of these, one that stands out is undoubtedly the Tapo C260, a camera available from 69.99 euros.

Tapo C260 – Indoor WiFi Surveillance Camera, Home Security, 4K 8MP 360°, AI Detection, Local/Cloud Recording, Physical Privacy, 18x Zoom, Tracking, Two-Way Audio, microSD

The price could vary. We earn commission from these links

4K resolution, night vision and even facial recognition

As we say, there are many options on the market if we are looking for a new surveillance camera. Most offer 1080p or 2K resolution, which may be enough for tighter budgets. The ideal, though, is to bet on one that reaches up to 4K, since this brings a series of advantages that are noticeable in practice. The first and most obvious is that, with more resolution, the camera captures greater detail, which allows us to identify faces or small objects more easily. A higher resolution also lets us zoom in without the image losing sharpness.

In that sense, the Tapo C260 is ideal if we are looking for a new indoor surveillance camera. In addition to this resolution, it is capable of up to 18x digital zoom and has a 360-degree panoramic view, so there are no blind spots. It works even at night and in very low light, since it has color night vision. It also offers facial recognition and artificial intelligence, two free functions that allow the camera to automatically detect people, pets or even a baby's cries. And it has intelligent automatic tracking, ideal for following the movements of a pet, for example.
On top of all of the above, the camera offers flexible storage: it supports microSD cards of up to 512 GB, or we can use the cloud to store our recordings. And it also features two-way audio, ideal for real-time communication.

As we said above, this Tapo C260 is designed for indoor use. But what if we are looking for an outdoor option? Then the Tapo C660 Kit may suit us better, an alternative that also has 4K resolution, 360-degree panoramic vision, smart functions and night vision, and in this case adds IP65 protection against dust and rain. Not only that: it also comes with a solar panel and battery, so it operates autonomously without needing a plug. It is available for 159.99 euros.

Some of the links in this article are affiliate links and may provide a benefit to Xataka. In case of non-availability, offers may vary.

Images | TP-Link

In Xataka | Best surveillance cameras: which one to buy and 11 recommended models for indoors, outdoors, babies and pets

In Xataka | Best tablets: which one to buy and 8 recommended models for all pockets and needs

Tinder has a serious problem with bots posing as humans. So it’s going to ask you for facial recognition.

Creating a fake profile on Tinder can take just a few minutes. Soon it won't be so simple. The app is implementing a security measure to combat the problem of fake accounts that will force all users to undergo facial verification.

Show me your face. Face verification used to be optional, but with Face Check it becomes mandatory for all new users. During the account creation process, a selfie video will have to be recorded as "proof of life." The measure is already underway in some countries, such as the United States, Australia, Canada, Colombia and India, among others. There is no date yet, but it is expected to be deployed in the rest of the world soon.

It does not save the photo. Tinder's head of security explains it in Wired. During verification, the app does not save a photo of our face; instead it saves reference points on the shape of the face and converts them into a mathematical hash. The app compares that file against its database to check whether it matches another account. With this measure, Tinder will prevent the creation of bot accounts, but it will also prevent the same person from having multiple accounts.

A serious problem. We don't have updated data on the volume of fake accounts, but in 2021 it was at least 23%. Tinder admits that almost all (98%) of the moderation actions it carries out are motivated by fake accounts, spam and fraud. And the problem is serious. In 2024, Bloomberg published a report on the extent of romance fraud, much of it carried out through fake profiles created with AI. The usual hook is cryptocurrencies and other fraudulent platforms. According to the Federal Trade Commission, in 2022 alone more than $1.3 billion was scammed in the United States.

Loss of interest. After the pandemic boom, Tinder began to lose users, especially paying ones. Others like Bumble also began to decline, and the trend has continued. According to this survey, 78% of users were tired of using these apps.
It's what they call "dating fatigue," and it basically means we are too burned out to flirt through apps.

And trustworthy. The fact that apps are full of fake profiles does not help their growth, and Tinder knows it. The new measure is aimed at regaining users' trust, assuring them that they are talking to a real person and not a bot or a multi-account. Of course, it still does not address other problems, such as people who upload fake photos, lie about their relationship status or use ChatGPT to seem more interesting than they really are.

Image | Pexels

In Xataka | Singles are fed up with Tinder. So they are starting to turn to an old acquaintance: marriage agencies
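Wired's description of Face Check is light on detail, but the matching step it outlines (a face reduced to reference points, compared against a database of enrolled accounts) can be sketched roughly. The code below is a hypothetical illustration, not Tinder's actual pipeline: the vectors, threshold and function names are invented, and note that this kind of fuzzy matching has to compare the templates themselves, since an opaque cryptographic hash would only catch bit-identical inputs.

```python
import math

SIMILARITY_THRESHOLD = 0.98  # hypothetical cutoff for "same face"

def normalize(v):
    """Scale a feature vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine_similarity(a, b):
    # both vectors are assumed to be unit-length already
    return sum(x * y for x, y in zip(a, b))

def matches_existing(template, enrolled_templates):
    """Return True if a new face template matches any enrolled account."""
    template = normalize(template)
    return any(
        cosine_similarity(template, normalize(t)) >= SIMILARITY_THRESHOLD
        for t in enrolled_templates
    )

# toy landmark-derived templates (real systems use learned embeddings)
enrolled = [[0.1, 0.9, 0.4], [0.7, 0.2, 0.6]]
print(matches_existing([0.1, 0.9, 0.4], enrolled))  # same geometry -> flagged as duplicate
print(matches_existing([0.9, 0.1, 0.1], enrolled))  # different geometry -> allowed
```

The interesting design point is the threshold: set it too low and twins (or the same person in different lighting) collide; set it too high and a duplicate account slips through with a slightly different selfie.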

A Spanish company imposed facial recognition to enter its gyms. The result: a 96,000-euro fine

The Spanish Data Protection Agency (AEPD) has imposed a sanction of 96,000 euros on the gym chain Supera for violating the data protection of its customers. The reason? Having imposed facial recognition as the only method of access to its gyms. The facts were reported by Facua in 2023 and the resolution has now been made public.

What happened? On August 4, 2023, a claim was filed against SIDECU, a company based in A Coruña that runs the gyms. According to the document (PDF), the Supera Entrepuentes sports center in Seville was "denying access to the facilities" because a new access method had been implemented through a facial recognition system. The complainant considered this access "invasive of his privacy" and "excessive for access to said establishment." Until the implementation of the facial recognition system, which had not been notified to the members and was mandatory, it was possible to enter the gym using a card. Two more claims were added to this one and, finally, in September 2023, Facua reported SIDECU.

The defense. SIDECU defended itself by arguing that it did not store images of users, but generated a facial pattern through an algorithm patented by the company that developed the system. According to the gym chain, this "template" was not enough to identify users or deduce their physical characteristics. For SIDECU, this was enough for the system to comply with the GDPR, but in truth it was not.

The first error. Misreading the regulation, thereby breaking article 9 of the GDPR. Article 4.14 of the GDPR establishes that biometric data are "personal data obtained from specific technical processing, relating to the physical, physiological or behavioral characteristics of a natural person which allow or confirm the unique identification of that person, such as facial images or dactyloscopic data." Under article 9 of the same regulation, the processing of "biometric data for the purpose of uniquely identifying a natural person" is prohibited.
Image | Ryan Hoffman

The second error. Imposing the system without warning, thereby breaking article 13 of the GDPR. Not only did the company not inform users: facial recognition was the only way to access the establishments, with no other real option, which brings consent into play: it was not freely given. It is true that the company ended up implementing an alternative access method (showing ID at the door), but it arrived only after the complaints.

The third error. Failing to assess the risks, finally breaking article 35 of the GDPR. According to the resolution, SIDECU did not justify why it was necessary to implement this system when less invasive and equally effective alternatives existed. The AEPD states that the company did not carry out the required data protection impact assessment, and that it did not act fraudulently, but negligently and without "the special diligence that is required for this type of processing."

The sanctions. Three, one for each article violated: 80,000 euros for violating article 9 of the GDPR, 30,000 euros for not having informed users in advance (article 13) and 50,000 euros for not having prepared the data protection impact assessment (article 35). In total, a penalty of 160,000 euros which, thanks to the acknowledgment of responsibility and SIDECU's prompt payment, has been reduced to 96,000 euros.

Cover image | Gold's Gym Nepal

In Xataka | An unexpected fine for the European Commission in Europe: it violated its own General Data Protection Regulation
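SIDECU's argument was that a "template" is not an image; the AEPD's counterargument is that any value derived from a face that allows unique identification is biometric personal data under article 4.14. A minimal, hypothetical sketch of why: even a one-way digest of facial features, with no image stored anywhere, is enough to re-identify a returning member at the door. All names and data below are invented, and the exact-match digest is a deliberate simplification (real biometric systems must tolerate measurement noise).

```python
import hashlib

def template_hash(features):
    """Quantize a face-derived feature vector and hash it.
    No image is stored -- only this digest."""
    quantized = ",".join(f"{x:.1f}" for x in features)
    return hashlib.sha256(quantized.encode()).hexdigest()

# enrolment: digest -> member id (hypothetical data)
members = {template_hash([0.3, 0.8, 0.5]): "member-042"}

def check_access(features):
    """The digest alone re-identifies the person: that is what
    makes it 'unique identification' in GDPR terms."""
    return members.get(template_hash(features), "denied")

print(check_access([0.3, 0.8, 0.5]))  # re-identified without any stored photo
print(check_access([0.9, 0.1, 0.2]))  # unknown face
```

The point of the sketch is legal, not technical: whether the stored value is a photo, a vector or a hash is irrelevant once it can single a person out.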

Spanish universities controlled online exams with facial recognition. The AEPD has decided that enough is enough

The Spanish Data Protection Agency (AEPD) has reached a clear conclusion about the use of biometric systems with artificial intelligence in online university evaluation: there is no way they can be used legally, at least without a specific enabling law.

The trigger. The AEPD has resolved a complaint against the International University of Valencia. The VIU had been using a system that combined artificial intelligence tools with double-camera recording (cameras the student must provide) to monitor online tests. It is a practice the university has maintained for more than three years, and one the agency has now sanctioned, rejecting the legitimacy of this data processing.

The VIU is not alone. The International University of Valencia is not an isolated case. Some of the most prestigious universities in Spain, such as the University of Burgos, University Isabel I, the European University or the University of La Rioja, have been implementing this kind of system for years. It is a solution to the growing demand for 100% online training, with tools that allow students to be monitored without requiring in-person exams. The main objective, according to the universities, is to avoid fraud and identity impersonation during evaluations.

The culprit. Smowl is the name of the online exam supervision tool. This solution, designed for both business and academic use, allows monitoring with a webcam and an extra camera, browser locking and tab control; ultimately, it replaces the human role in exam supervision. In the case of the VIU, it was claimed that these data were pseudonymized and deleted "quickly," although the university acknowledged that processing them entailed "a very high impact risk for the rights and freedoms of the affected people."
Universities shield themselves behind the fact that it is the student who consents to the use of these tools by accepting the general conditions of the course in which they enrolled. The AEPD has another opinion: "The consent cannot be considered valid because there was no real and effective alternative for students, as the software used is the only method allowed to perform online exams. Rejecting it meant students lost their right to be evaluated. Nor is the mandatory acceptance of general conditions upon enrollment valid consent." These data belong to the special categories regulated by article 9 of the General Data Protection Regulation (GDPR). According to the agency, there is currently no applicable exception in that article, nor a specific legal framework that enables these practices. It also rejects the idea that students can give consent when no alternative is available.

But it doesn't close the door. The data protection authority does not completely close the door on these types of systems. It specifies that specific regulation needs to be developed to determine "in what cases, conditions and under what guarantees this biometric processing can be carried out." For now, with no legal framework to rely on, the use of these tools will be subject to sanction for breach of the GDPR. A deep modification of the regulation would not be necessary; an exception that specifically covers these usage scenarios would suffice.

Facial recognition in Spain. It is not the first time Spain has called the use of this type of system into question. The Universitat Oberta de Catalunya was sanctioned in 2022 with 20,000 euros for using facial recognition in its exams. Outside the educational field, one of the best-known cases was that of Mercadona, fined 2.5 million euros for a pilot project in which it tested a facial verification system in its supermarkets.
At a lower level, local companies have also faced large fines for breaching the GDPR by using biometrics for clocking in at work. Despite this, it is a technology still used in video surveillance systems, like Renfe's, or Madrid's, which has hundreds of AI-powered cameras in its streets to reinforce the security of the capital.

Images | Pexels (Andrea Piacquadio), Unsplash (Dom Fou)

In Xataka | The PAU is approaching: here you have all the degrees related to technology and science with their admission cutoffs

A company has created an alternative to facial recognition. It does not scan faces, and its use is already starting to generate controversy

When cities like San Francisco decided to ban the use of facial recognition by the police, many celebrated it as a victory for privacy. However, a new tool is beginning to make its way as an alternative. It does not scan faces, but it allows people to be followed with remarkable precision.

Identifying without analyzing the face. The tool is called Track and has been developed by Veritone, a company specialized in artificial intelligence solutions applied to video analysis. Unlike classic systems, Track tracks individuals based on attributes such as build, hair color and style, clothing, accessories or the type of footwear. The algorithm also distinguishes skin tone, although, according to the company, it does not allow searching explicitly by that criterion. With all this information, the system generates chronologies that make it possible to follow a person across different scenes and video sources.

It is not a concept in development or a future promise. According to data provided by Veritone itself, more than 400 customers are already using this technology in the United States, including state and local police forces, universities and private companies. Among them are federal prosecutors of the Department of Justice, who began using the tool in August 2024. Track is available through cloud platforms such as Amazon Web Services and Microsoft Azure, and is part of the company's digital forensic analysis solution ecosystem.

An evolving system. Currently, Track works exclusively with recorded video, such as footage captured by body cameras, drones, public recordings on YouTube or content provided by citizens. Veritone claims to be less than a year away from enabling analysis of live broadcasts, which would open the door to a real-time surveillance system capable of following people even when their faces are not visible.

Covering your face no longer guarantees anonymity.
Until now, evading facial recognition systems was possible with a hairstyle, large glasses, disruptive makeup or garments designed to confuse algorithms. But Track works differently. It does not depend on the face, but on general visual patterns. It can follow a figure across multiple videos by analyzing build, clothing or way of moving. Of course, it needs a starting point: someone must mark the person before tracking begins. Even so, its logic undermines many of the classic strategies for avoiding identification.

And privacy? Although this technology does not use biometric data in the strict sense, such as faces or fingerprints, it relies on physical and aesthetic attributes that can recur frequently. As MIT Technology Review reports, the ACLU, an American civil rights organization, warns that tools such as Track could significantly expand surveillance capabilities. Some digital rights specialists also point out that continuous tracking across different video sources could be functionally equivalent to facial recognition.

An alternative that can sidestep the current legal framework. Because Track is not based on traditional biometric characteristics, many of the laws that regulate facial recognition in different parts of the world would not apply to it directly. This does not mean there is less surveillance, but that it operates from another technical angle, less regulated for now. The tool thus sits in a gray area: it offers advanced monitoring without formally entering the biometric space, but its practical effects come dangerously close to those that have already generated concern around automated facial identification.

Images | Xataka with XAI | Alex Knight

In Xataka | The intentions of the United Kingdom with Apple are a nightmare for privacy. That of the British and that of the whole world

In Xataka | Alibaba wants to be the new DeepSeek: it claims to have a training method for its AI that is 88% cheaper
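The public reporting does not describe Veritone's internals, but the attribute-based tracking idea above can be illustrated with a toy sketch: each detection carries a set of visual attributes, and detections that sufficiently resemble a seed description are chained into a timeline across cameras. Everything below (the attribute sets, the Jaccard threshold, the names) is a hypothetical illustration, not Veritone's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    source: str            # camera / video id
    timestamp: int         # seconds into the footage
    attributes: frozenset  # e.g. {"tall", "red jacket", "backpack"}

def attribute_score(a, b):
    """Jaccard overlap between two attribute sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b)

def build_timeline(seed, detections, threshold=0.5):
    """Chain detections whose attributes resemble the seed person,
    ordered in time -- the 'chronology' across video sources."""
    hits = [d for d in detections
            if attribute_score(seed, d.attributes) >= threshold]
    return sorted(hits, key=lambda d: d.timestamp)

seed = frozenset({"tall", "red jacket", "backpack"})
feed = [
    Detection("cam-1", 10, frozenset({"tall", "red jacket", "backpack"})),
    Detection("cam-2", 40, frozenset({"tall", "red jacket"})),  # backpack out of view
    Detection("cam-3", 25, frozenset({"short", "blue coat"})),  # someone else
]
timeline = build_timeline(seed, feed)
print([(d.source, d.timestamp) for d in timeline])
```

The sketch also shows why such attributes are weaker than biometrics: a red jacket is not unique, so real systems would need many attributes, and a change of clothes breaks the chain, which is exactly the regulatory gray area the article describes.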
