Teachers’ lonely struggle to reinvent homework and exams

“There are assignments and exercises that I can see help me learn something. Those I do. But others that seem pointless to me… I tell the AI: do this work for me, and I copy, paste and hand it in.” Lucía, an intermediate vocational student in the health field who prefers to remain anonymous, puts it bluntly. Last year she started using AI for her coursework. Since then she has done some assignments herself, but many others she has not.

It is not an isolated case. In many schools, generative artificial intelligence has become an everyday tool. ChatGPT, Gemini and other assistants have been normalized among students for doing homework, summaries or papers, just as they once turned to Wikipedia or search engines. The difference is that now they not only find information: the tools also write it.

From primary school to university

“They clearly use it,” says Nerea Eguiguren, a Philosophy and History teacher at a high school in Madrid, referring to the use of these tools among her students. In her experience the practice is widespread, but she describes it as “superficial”: “They don’t even open ChatGPT. They type the question into Google and the first answer is from Gemini. They copy it and hand you whatever.”

At university the phenomenon does not go unnoticed either. “The use of AI is widespread,” explains Silvia Eva Agosto Riera, associate professor at the Faculty of Education of the Complutense University of Madrid. Students use it to search for information, write texts or correct their work. Some do so responsibly; others simply hand over whatever the tool gives them without checking anything.

Sergio Cuevas del Valle, a doctoral student in Aerospace Engineering at the Rey Juan Carlos University and a teaching assistant there, agrees, and is certain his students use it: “The question is, why wouldn’t they use it?”

Meanwhile, in other areas of teaching the impact seems more limited. Marta Benegas, a secondary school Fine Arts teacher, sees it every day.
“They don’t use AI as much because they basically draw. To draw you need a notebook and a pencil.” When AI does appear, it is usually for the same thing as in other subjects: preparing written work.

And the impact of AI is not the same in all subjects. In more theoretical ones – such as language, philosophy or history – it is more noticeable, because many of their traditional exercises, like essays or text commentaries, are precisely the kind of task AI can solve with ease. In more practical subjects, on the other hand, the margin for “copying” is smaller: drawing, solving problems step by step or practicing procedures requires demonstrating the process.

Lucía has seen this in her most practical subjects and assessments: “In many cases, if you don’t have a foundation, no matter how much you ask the AI, you won’t be able to understand it. You can ask for steps or instructions, but if you’ve never done it yourself, you won’t know how.”

In primary school the debate is still at a different stage. Age still somewhat limits autonomous use of these tools. Belén Álvarez, a teacher at a school in the Canary Islands, admits that until recently she did not even want to mention AI in her classroom. “I didn’t want them to learn about it because of me,” she says. Her youngest students are eight years old, but half already have mobile phones with internet access. “Honestly, AI doesn’t seem like the most dangerous thing they have access to.”

Given this almost omnipresent presence of artificial intelligence in education, teachers face new challenges when assigning homework and coursework. Teaching is trying to adapt to the new scenario, which has meant rethinking how to evaluate what students really know.

Has the end of homework and take-home assignments arrived?

In many cases, the reaction has been immediate and direct.
Faced with assignments that used to be sent home and can now be solved in seconds with the help of AI, one of the quickest responses has been to bring those tasks back into the classroom. Nerea Eguiguren did so after catching it several times. “Before, I used to send text commentaries home in the second year of high school. The third time I saw that they had used AI, I changed.” She still assigns those exercises, but now they are done in class: “This way I know they can’t use it.”

More in-person exercises, more oral activities, less homework, more practical assessments: these adjustments are repeated at every educational level. AI detection tools have also become allies of teachers, who use them above all to check more theoretical work – although mostly as simple support, since they are aware that the detectors’ reliability is limited too. And when AI use is evident, the consequence can be equally direct. “Of course I have failed assignments for improper use of AI (…) You can’t exactly fail them for the whole evaluation period, but I have failed many assignments,” says Eguiguren.

It also affects the university. Sergio Cuevas del Valle has likewise had to “pose everything differently”: “Almost any problem I might set as a challenge will already have been posed and, almost certainly, solved.
It is very likely that the students could find it even without AI.” For this reason, he argues that “AI comes to question even the figure of the teacher, and even that of the student, insofar as it means human beings no longer need to accumulate internal knowledge, nor need someone to teach it to us.” All of this “underlines the need to rethink teaching at every level,” trying to ensure that students “work on skills such as developing intuition, logical thinking and the capacity for effort, which were already inherent to ‘homework’.” AI can solve almost any problem mechanically, “but you still need someone to ask the right questions.”

The Valencian Community has an “anti-AI” plan so that no one cheats on competitive exams

Much of the success of AI glasses, like the Ray-Ban Meta, lies precisely in the fact that they do not look like a technological device. They look like normal glasses that blend perfectly into our look. And that is exactly the problem. There is the issue of recording people without their consent, and also that of using them to cheat on exams. The latter is precisely what the Generalitat Valenciana wants to prevent.

The plan. Levante-EMV reports that the Public Function area of the regional Ministry of Finance has set its sights on AI glasses and has already detailed a plan against their use in competitive examinations. The idea is to carry out a physical inspection at the entrance to the exam to verify “the absence of electronic components.” To do this, supervisory staff will be trained to detect these devices, which, as with glasses, go largely unnoticed. They will have to look for frames thicker than normal and lights that switch on when recording.

There is more. Besides the physical inspection, an even more effective option is being considered: installing frequency jammers. With these, if a pair of smart glasses or a smartwatch slips past security, it should be useless as long as it remains within the jammer’s range. They also note that for some time now they have been including complex questions whose answers require analysis and critical thinking, so that AI cannot answer them so easily.

The new cheat sheet. Gone are the days of keeping a slip of paper with the lesson up your sleeve; now people copy with AI glasses and a smartwatch. That is what one candidate did in this year’s MIR exam in Santiago de Compostela. It has not been revealed what model of glasses and watch he was wearing, nor exactly how he planned to use them, but everything suggests he read the questions with the glasses and received the answers on the watch. A seamless plan, except that someone noticed and he got caught. His grade was a zero.

AI to stop cheating.
Paradoxically, there are universities that use AI for precisely the opposite purpose: to stop students from cheating. We recently covered the VIU, also in Valencia, and how it could monitor exams remotely thanks to AI and facial recognition. A very complex system that has ended up costing the university 650,000 euros for violating the General Data Protection Regulation.

The problem with AI glasses. Someone cheating on an exam is bad, but there are things that are even worse. In Spain, for example, we have the case of a man who was arrested for recording women without their consent with the Ray-Ban Meta. Glasses have also been modified to identify strangers on the street, and there are more and more places where their use is being restricted. Wearing AI on your eyes can be very useful, but the fact that anyone you pass on the street could be recording you is intrusive, and even more so if the person wearing them is in, say, a gym locker room. Or if it is the person doing your bikini wax. Even with the LED indicator off, I doubt anyone could shake the feeling of being recorded.

Image | Xataka, Unsplash

In Xataka | When you put on your new Meta glasses something else happens: everything you record ends up in Kenya to train the AI

All of China is in exam season, so AI companies are crippling their chatbots to keep students from cheating

In Spain, students recently sat the PAU (previously known as the EBAU, EVAU or Selectividad), and now something similar is happening in China, where students face the Gaokao (高考), the national university entrance exam. And they do so with an almost obligatory novelty: no cheating with AI chatbots.

The most popular chatbots in China, like Alibaba’s Qwen, have temporarily disabled features such as image recognition. They have done so precisely to prevent that feature from being used as a modern “cheat sheet” during these tests.

Impartiality in the tests. The same has happened with Yuanbao (Tencent) and Kimi (Moonshot), two other popular chatbots in China, which have also disabled their image recognition feature. When users try to use it, Bloomberg reports, a message appears: “To guarantee the impartiality of the university access tests, this function cannot be used during the test period.”

An exam on which the future rides. The Gaokao was held for the first time in 1952 as part of the reforms of the then newly created People’s Republic of China. University admission processes changed during Mao Zedong’s rule, but in 1977 Deng Xiaoping brought the exams back, and they have been used ever since. There are 16 provinces with their own customized exams, but in every case the upshot is the same: these tests determine students’ immediate academic future.

Designed and printed in jail. Gaokao papers are so important that they are drawn up under strict security by a small team of teachers. These professionals are sent to isolated sites outside Beijing, such as military facilities or prisons, where they write the questions. They cannot leave those locations until the tests have been held, and most exams are actually printed inside prisons, with each print run guarded 24 hours a day by cameras and guards.
Even transporting them to the exam centers involves security measures one would expect of, say, a bank’s cash-in-transit vans. All to prevent the questions from leaking.

A decisive score. Chatbots look like a spectacular aid for these students, and the students – and their parents – know it. The score on these exams determines whether a student can get into the best degree programs and universities, and on that, in turn, hang future jobs, salaries and even social mobility. The competition is also enormous: more than 13 million students are sitting the tests this year. To get better scores, every kind of solution is tried, from private tutors to these attempts at cheating.

No photo recognition. The tests have run from June 7 until today, June 10. The chatbots from Alibaba (Qwen) and ByteDance (Doubao) offered AI image recognition until last Monday. Since then, according to Bloomberg, if a user asked for the solutions to a problem photographed on paper, Qwen replied that the service was temporarily disabled. In Doubao, the message was that the request “did not meet the rules.”

AI is fine for learning, but not for exams. Beijing recently launched a plan to integrate the teaching of AI into schools. But while this kind of subject is being tried out in classrooms, it is one thing for students to learn to use AI and quite another for them to take it into these tests to cheat. In fact, a new set of standards published by China’s Ministry of Education last month establishes that students should not use AI-generated content in their homework or in the aforementioned exams. The goal: that they do not become too dependent on artificial intelligence.

Image | 绵 绵

In Xataka | The 100 best universities in the world excluding those in the US, shown in this revealing graphic
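The behavior Bloomberg describes amounts to a simple time-window feature gate: a capability is refused, with a fixed message, only between two dates. Below is a minimal sketch of that idea in Python; the year, exact dates, message text and function names are illustrative assumptions, not the actual implementation of Qwen, Doubao or any other chatbot.

```python
from datetime import date

# Illustrative exam window: the article says the tests ran June 7-10.
# The year is an assumption for the sake of a runnable example.
EXAM_START = date(2025, 6, 7)
EXAM_END = date(2025, 6, 10)

# Refusal text paraphrasing the message Bloomberg quotes.
REFUSAL = ("To guarantee the impartiality of the university access tests, "
           "this function cannot be used during the test period.")


def image_recognition_enabled(today: date) -> bool:
    """The feature is off for the whole exam window, inclusive of both ends."""
    return not (EXAM_START <= today <= EXAM_END)


def handle_image_request(today: date) -> str:
    # Hypothetical dispatcher: refuse during the window, serve otherwise.
    if not image_recognition_enabled(today):
        return REFUSAL
    return "OK: running image recognition"
```

Outside the window the request goes through; inside it, every call gets the same fixed refusal, matching what users reportedly saw during the Gaokao.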

Spanish universities monitored online exams with facial recognition. The AEPD has decided that’s enough

The Spanish Data Protection Agency (AEPD) has reached a clear conclusion about the use of AI-powered biometric systems in online university assessment: there is no way they can be used legally, at least without a specific enabling law.

The trigger. The AEPD has ruled on a complaint against the International University of Valencia (VIU). The VIU had been using a system that combined artificial intelligence tools with dual-camera recording (the student must provide the cameras) to monitor online tests. It is a practice the university had maintained for more than three years, and the agency has now sanctioned it, rejecting the legitimacy of this data processing.

The VIU is not alone. The International University of Valencia is not an isolated case. Some of the most prestigious institutions in Spain, such as the University of Burgos, Isabel I University, the European University and the University of La Rioja, have been running this kind of system for years. It is a response to the growing demand for fully online education, with tools that allow students to be monitored without sitting exams in person. The main objective, according to the universities, is to prevent fraud and identity impersonation during assessments.

The culprit. Smowl is the name of the online exam proctoring tool. This solution, designed for both corporate and academic use, enables monitoring with a webcam and an extra camera, browser locking and tab control, and ultimately replaces the human role in exam supervision. In the VIU’s case, the university assured that these data were pseudonymized and deleted “quickly,” although it acknowledged that processing them entailed “a very high impact risk for the rights and freedoms of the affected people.”
Universities shelter behind the fact that it is the student who consents to the use of these tools by accepting the general conditions of the course they enroll in. The AEPD takes a different view: “Consent cannot be considered valid because students had no real and effective alternative, since the software used is the only method allowed for taking online exams. Rejecting it meant losing their right to be assessed. Nor is consent valid when acceptance of the general conditions is mandatory in order to enroll.”

These are special-category data, regulated by Article 9 of the General Data Protection Regulation (GDPR). According to the agency, there is currently no exception in that article, nor any specific legal framework, that legitimizes these practices. It also rejects the idea that students can give valid consent, having no alternative available.

But it doesn’t close the door. The agency does not completely rule out these kinds of systems. It specifies that specific regulation would need to be developed to determine “in which cases, under which conditions and with which guarantees this biometric processing can be carried out.” For now, with no framework to rely on, the use of these tools is subject to sanction for breaching the GDPR. A deep modification of the regulation would not be necessary; an exception specifically covering these usage scenarios would suffice.

Facial recognition in Spain. This is not the first time Spain has called this type of system into question. The Universitat Oberta de Catalunya was sanctioned in 2022 with a 20,000-euro fine for using facial recognition in its exams. Outside education, one of the best-known cases was that of Mercadona, fined 2.5 million euros for a pilot project testing a facial verification system in its supermarkets.
At a lower level, local companies have also faced hefty fines for breaching the GDPR with biometric clock-in systems. Even so, it remains a technology in use in video surveillance systems, such as Renfe’s, or Madrid’s hundreds of AI-equipped street cameras meant to reinforce security in the capital.

Images | Pexels (Andrea Piacquadio), Unsplash (Dom Fou)

In Xataka | The PAU is approaching: here are all the technology and science degrees with their cut-off grades
