Three months ago, Australia banned social media for those under 16 years of age. It is already investigating possible breaches

Just three months ago, Australia launched one of the most ambitious regulations yet proposed on social networks and minors. The measure came into force on December 10, 2025 with a clear message: force platforms to prevent those under 16 from holding accounts, and give families back part of the control over the digital lives of the youngest. From the first moment it was presented as a pioneering initiative, but something important was also assumed from the start: enforcing it was not going to be easy.

The first doubts. The rule has already entered its most delicate phase: checking whether it is really being applied as planned. The eSafety regulator has opened the first formal review and placed platforms such as Facebook, Instagram, Snapchat, TikTok and YouTube under scrutiny. The agency speaks of “significant concerns” and points to failures in control mechanisms, noting that current systems are not effectively preventing users below that threshold from continuing to open new accounts.

How minors are sneaking in. The report goes beyond a general warning and focuses on very specific failures in the control systems. It found that there are not enough safeguards to prevent users under the permitted age from creating new accounts, and something more striking: some platforms allow the verification process to be repeated until the user manages to pass it. In certain cases, these profiles are even invited to prove they meet the age requirement after having indicated that they do not, exposing inconsistencies in how the controls are applied.

A problem that was already anticipated. The difficulties in applying the rule have not arisen now; they were on the table from day one. When the law came into force, the Australian government itself admitted that its implementation would not be perfect, and the first signs pointed in that direction.
According to ABC, some minors managed to bypass the verification systems with basic tricks, such as altering their appearance in facial checks. The outlet also warned that parents and older siblings could help some children get around the restrictions, an early sign that the challenge was not just passing the law, but making it really work.

What is at stake for the platforms. The investigation opened by eSafety is not just a diagnosis; it opens the door to possible sanctions if it is shown that companies have not taken reasonable steps to prevent minors covered by the rule from holding accounts. Reuters points out that fines can reach 49.5 million Australian dollars and affect the aforementioned services. The regulator has already begun collecting evidence and hopes to close at least part of its investigations by mid-year, placing technology companies in a scenario where non-compliance is no longer just a reputational risk.

The Spanish mirror. What is happening in Australia helps put into context a debate that has also gained weight in Spain, although there it is at a different point. Pedro Sánchez announced in February that the Spanish government wants to prohibit access to social networks for minors under 16 as part of a broader package of measures on age verification, traceability of hate speech, and the responsibility of technology executives. The key difference is that that ban has not come into force and is not being enforced. Still, the Australian case offers a useful reference for anticipating the kind of challenges that appear when such a measure moves from political announcement to actual implementation.

Images | cottonbro studio

Wikipedia has banned using AI to write or rewrite articles in English. Human knowledge begins to raise barriers

The English version of Wikipedia has just banned articles written with AI. The latest update to its guidelines is clear: content generated with language models violates its content policies. The largest encyclopedia on the internet is positioning itself as a refuge for content created by humans.

AI, no thanks. The ‘AI yes or AI no’ debate had been generating tension on Wikipedia for a while, and editors have finally opted to back human content by an overwhelming majority of 40 votes to 2. The new restriction reads as follows: “Text generated by large language models (…) often violates several of Wikipedia’s fundamental content policies.” The fundamental policies it refers to are neutrality, verifiability, and the rule that content cannot be original research but must be attributed to reliable sources. With this change, editors are prohibited from using LLMs “to generate or rewrite article content.”

Two exceptions. Wikipedia contemplates two scenarios in which the use of AI is allowed: basic style suggestions and corrections, as long as the LLM does not introduce content of its own (it warns that this must be used with caution, since LLMs tend to “go beyond what is asked of them and alter the meaning of the text”); and translation of articles into other languages, as long as the result is reviewed by a person competent in both languages involved. It is worth noting that Wikipedia has already had dramas in the past over AI translations.

Why it matters. Wikipedia has positioned itself as a repository of genuinely human content on an internet flooded with artificial content. At a time when distinguishing the authentic from the synthetic is increasingly difficult, the largest encyclopedia in the world chooses to rely on human authorship as a guarantee of reliability.
There is certainly something ironic here: Wikipedia rejects AI, but AI continues to draw on Wikipedia to provide answers, costing the encyclopedia clicks and saturating its servers.

AI generated vs human made. Until recently we thought the solution was to flag artificial content on platforms with the classic ‘AI’ label, but we have reached a point where it is more valuable and useful to highlight the opposite: that something is made by humans. The advance of image generation tools and the sheer volume of AI-written text are overwhelming, to the point that an anti-AI current is emerging: some artists are starting to design “badly” to differentiate themselves from AI homogenization, extensions have been created to return to the internet before ChatGPT, there are browsers that filter AI results, and even a ‘Not by AI’ badge has been created. The point is that it is a David against Goliath.

The Etsy case. It is perhaps one of the most blatant examples of the flood of low-quality AI content. The platform that presented itself as a refuge for the authentic is today an AI marketplace that also tries to pass itself off as artisanal: Ghibli-style portraits for 20 euros, profiles managed entirely by AI that say things like “I can’t wait to draw you”… Etsy allows content made with AI, but says it has to be labeled as such. Nobody does it. Proof that the label is no longer useful.

A key detail. The last paragraph of Wikipedia’s guidelines is especially striking because it discusses possible sanctions for those who violate the rule. The problem is how Wikipedia plans to detect who uses AI. It admits that “some editors may have writing styles similar to those of large language models” and that “more evidence than mere stylistic or linguistic clues is needed to justify the imposition of sanctions.” We have no idea how they are going to do it; what we do know is that AI text detectors are notoriously unreliable.
Image | Wikipedia, edited
