Anti-nudity algorithms within the system

Children’s access to technology has gotten out of hand, or at least that is what more and more governments around the world think. The United Kingdom has restricted porn to those over 18, Australia has banned under-16s from having social media accounts, Denmark wants to do the same… There are many attempts to limit what minors can see online, but the effectiveness of their methods is rather doubtful. Now the British government has had a new idea: anti-nudity algorithms, as reported by the Financial Times.

The government wants technology companies to do the work of blocking nude images on the devices themselves. The idea is that detection happens not only within apps, but also at the operating system level. We are talking about both phones and computers, so it would mean that iOS, Android, macOS and Windows implement algorithms to prevent nude photos from being viewed, taken or shared anywhere in the system. For now it will not be mandatory; the government will simply encourage the platforms to do it, but the idea is on the table.

Why it matters. It is a way of admitting that the current measures are not enough and that the platforms need to take an active role in filtering content. Taking the United Kingdom’s own example: people who want to access portals like Pornhub must identify themselves first, which has caused a huge drop in traffic, but also an increase in VPN downloads.

Effectiveness and friction. For now this is a hypothetical scenario, but it could be the most effective measure of all those being considered. We only have to look at apps like Instagram and their relentless anti-nudity algorithms. The idea is to bring those algorithms to the entire system, so that no nudity is shown on screen unless the user has verified that they are of legal age with an official document.
As the porn block works now in the United Kingdom (and as the well-known Spanish ‘pajaporte’ proposes), users must identify themselves when entering certain websites. Now imagine that when you buy a phone or a computer, you are asked for your ID to verify your age when creating the account. We would surely still find ways to bypass it, for example by creating fake profiles, but it creates less friction because you would only have to do it once.

The HMD case. There are currently no safeguards that block nudity at the operating system level. What the platforms offer are the classic parental controls, but there is a precedent for a device that blocks adult content: the HMD Fuse, “the mobile that grows with your children”, announced a few months ago. It comes with a system called HarmBlock AI that scans content and prevents nude images from being displayed, stored or taken.

Image | Pexels, edited

In Xataka | This year the Three Wise Men bring something very special to children: children’s cosmetics
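How such OS-level gating might be wired up is an open question; the sketch below is a purely hypothetical illustration of the idea described above (a classifier score gates display, save and capture actions unless the account is age-verified). The `nudity_score` stub and the threshold are invented stand-ins, not any vendor's actual implementation.

```python
from dataclasses import dataclass

def nudity_score(image_bytes: bytes) -> float:
    """Stub classifier: a real system would run an on-device ML model.

    Returns a probability-like score in [0, 1]. The prefix check below is a
    toy heuristic for illustration only.
    """
    return 0.9 if image_bytes.startswith(b"NSFW") else 0.1

@dataclass
class UserProfile:
    age_verified: bool  # set after checking an official ID, per the proposal

BLOCK_THRESHOLD = 0.8  # hypothetical cut-off

def allow_action(image_bytes: bytes, user: UserProfile, action: str) -> bool:
    """Gate 'display', 'save' and 'capture' actions at the OS level."""
    if user.age_verified:
        return True
    return nudity_score(image_bytes) < BLOCK_THRESHOLD

minor = UserProfile(age_verified=False)
adult = UserProfile(age_verified=True)
print(allow_action(b"NSFW_img", minor, "display"))  # blocked -> False
print(allow_action(b"NSFW_img", adult, "display"))  # allowed -> True
```

The point of gating at this layer, rather than per app, is that the same check runs for the camera, the gallery and every messaging app at once.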

Meta is already using a mix created by algorithms in its data centers

Meta has used a concrete mix designed by algorithms in one of its data centers. According to the company, this formula promises to be more sustainable and faster to apply, and it has been developed with open-source tools. With this approach, the aim is not only to move toward zero emissions, but also to accelerate the construction of infrastructure that keeps growing, as the data center it is raising under temporary structures demonstrates.

The invisible weight of concrete. Few materials are as omnipresent as concrete. It is used in roads, bridges, homes… and also in the data centers that house a good part of our digital life. The problem is that manufacturing its components, especially cement, generates a huge amount of CO2: the World Economic Forum indicates that it accounts for about 8% of global emissions. Meta has set out to reduce that footprint without compromising strength or speed of work. And that is where its new model comes in.

An AI that does not create chatbots, but mixes. To develop this system, Meta partnered with Amrize, one of the world’s largest cement manufacturers, and with the University of Illinois Urbana-Champaign. Together they have created an AI model that proposes concrete compositions. The model is based on Bayesian optimization and is built with BoTorch and Ax, two open-source tools developed by Meta itself.

A slab test at the Rosemount data center

The challenge was not minor: each mix involves combining different types of cement, aggregates, water, additives and supplementary materials such as slag or fly ash. The exact proportions, their origin or even the time of year can alter the result. Traditionally, they explain, validating a new formula has taken weeks. With AI the process accelerates, because the system learns from previous data, proposes promising new combinations and refines its predictions after each test.
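The propose-test-refine loop described above can be sketched in miniature. The following is a one-dimensional toy, not Meta's SustainableConcrete setup: the "quality" objective and the single fly-ash-fraction variable are invented for illustration, and a small NumPy Gaussian process with expected improvement stands in for BoTorch/Ax, which operate over many mix variables.

```python
import math
import numpy as np

def mix_quality(x: float) -> float:
    """Hypothetical lab result: quality peaks at a fly-ash fraction of 0.35."""
    return math.exp(-((x - 0.35) ** 2) / 0.02)

def rbf(a, b, ls=0.15):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(X, y, grid):
    """Gaussian-process posterior mean and std on a candidate grid."""
    K = rbf(X, X) + 1e-6 * np.eye(len(X))  # jitter for numerical stability
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ij->i", Ks @ np.linalg.inv(K), Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    """EI acquisition: how much each candidate is expected to beat `best`."""
    z = (mu - best) / sigma
    Phi = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
    phi = np.exp(-0.5 * z**2) / math.sqrt(2 * math.pi)
    return (mu - best) * Phi + sigma * phi

grid = np.linspace(0, 1, 101)
X = np.array([0.1, 0.5, 0.9])               # three initial "lab tests"
y = np.array([mix_quality(x) for x in X])

for _ in range(10):                          # each round: propose, test, refine
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, mix_quality(x_next))    # a real loop waits for a cure test

best_x = X[np.argmax(y)]
print(f"best fly-ash fraction found: {best_x:.2f}")
```

This is why validation accelerates: instead of testing formulas blindly, each expensive lab test updates the surrogate, and the next candidate is chosen where the model expects the largest improvement.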
Implementation of the AI-generated concrete formulation in the data center

From the laboratory to the field. One of the first large-scale validations took place at the data center Meta is building in Rosemount, Minnesota. There, the contractor Mortensen applied the new mix in one of the building’s support slabs. The objective was not only to check its strength, but also its workability and final finish: these slabs must be perfectly smooth and durable.

The result, according to the firm, exceeded all technical standards. The AI-designed formula not only met the strength and curing requirements, but also behaved well on site: it poured without problems and produced an adequate surface. After two iterations, and with minimal human adjustments, the model had generated a recipe that improved on the usual industrial formulas in speed, strength and potential for emission reduction.

An open model. The system Meta has developed is not a commercial product or a closed tool. The company has published the code, data and technical approach in an open GitHub repository called SustainableConcrete. The idea is not to keep the formula but to share the method: a way of applying artificial intelligence to concrete design that can adapt to other projects, suppliers or materials.

We will have to wait to see whether more initiatives like this appear. It could facilitate the adoption of alternative mixes in all kinds of construction. As we have seen, Meta has not invented a new material. What it has done is use AI to find new concrete formulas.

Images | Xataka with Gemini 2.5 Flash | Mark Zuckerberg | Meta (1, 2)

In Xataka | Nvidia says that China has the best open source AI in the world. These praises have a very clear intention

Modern algorithms decide what we see. YouTube is the last redoubt where the algorithm does not choose for you

The Internet of 2025 is dominated by algorithms that seem to know us better than we know ourselves. All that was missing were chatbots with memory to add to their ability to read between the lines. In this scenario, YouTube is increasingly a beautiful anomaly. While TikTok, Instagram or X drag us from one topic to another according to the whims of a system that optimizes for pure engagement, Google’s video platform maintains an almost anachronistic respect for our choices. It is the last redoubt where what we are looking for still matters more than what makes us react.

The difference is in its algorithmic architecture. YouTube recommends mainly within the thematic ecosystems we have already chosen. TikTok, on the other hand, can throw us from vegan recipes to conspiracy theories in an instant if that keeps our thumbs scrolling. This thematic verticality is not altruism; it is part of its business model: YouTube needs to sustain long sessions within specific topics, where segmented ads have greater value. It just so happens that the consequence is positive for the user. Or at least more positive than on the rest.

The best way to understand what makes YouTube different is to experience it as a user. When I look for videos about Valencia, the algorithm keeps me in that world: post-match interviews, debate shows, montages of the best goals of the season and, usually, memories of a better past. It does not suddenly jump to polarizing politics or drag me toward incendiary content designed to provoke my outrage. YouTube respects the thematic ecosystem I choose. It amplifies our searches; it does not try to out-manipulate us.

The user experience reinforces this sense of control: a prominent search bar. Channels to subscribe to. Lists we actively build. A history we can manage. They are vestiges of an Internet where we browsed with purpose instead of being steered. It is also fair to point out that YouTube belongs to Google, one of the great architects of the current algorithmic Internet.
It is not immune to problems (clickbait flourishes there, and its own attempts with YouTube Shorts show that it is not above the market), but it maintains a different balance. And the question is obvious: if this more balanced model works for the world’s largest video platform, why does the rest of the industry opt for systems that virtually annul our agency?

YouTube also has serious problems. Its rabbit holes (something like ‘bottomless pits’) can take us down paths paved with radicalization. Its monetization system favors length and recurrence over quality. We are not facing a hero, but a survivor that has found a niche where it can prosper without completely eliminating our autonomy.

In the end, this is the chronology of the Internet’s evolution. The web (yesterday glory, today survival) was originally a space where we chose our destinations. Today algorithms decide for us. YouTube retains vestiges of the previous model while adapting to the new one, becoming a kind of “Internet inside the Internet.”

This “limited algorithmic autonomy” allows something not just good but almost sacred: predictability. We can anticipate what we will find, which makes for a more satisfying experience. It also allows the fragmentation of communities focused on specific interests, without forcing everything to compete in a single homogenized feed, which is the great evil of the current X and the perennial identity of TikTok.

YouTube is not perfect (no one is), but it makes us ask whether we can design platforms that serve users who want a healthy experience, without being hooked or dragged where they do not want to go, and not just advertisers. YouTube, with all its contradictions, is a sign that a middle path is possible: one where there is some algorithmic manipulation (it’s the market, friend), but it coexists with real user agency.
In Xataka | Podcasts are living their great revolution, but not in Spotify or Apple Podcasts: YouTube is winning the game

Featured image | Xataka with Mockuuuups Studio

AI videos have broken the Instagram and TikTok algorithms. Welcome to the new “AI landfill”

A little over a month ago, this unpleasant Instagram video went viral on that social network. In it, a strange half-spider, half-man creature made an appearance in a shopping mall. At the time of writing, the video has 3.5 million likes and more than 23,000 comments, but that is not the truly worrying part.

An avalanche of AI-generated videos. What is worrying is that this video is part of an avalanche of AI-generated videos that is swamping networks such as Instagram and TikTok.

And broken algorithms. As 404 Media explains, the algorithm of these platforms has ended up breaking under a unique kind of brute-force attack: one in which these networks receive a constant stream of AI-generated content until their recommendation algorithms are saturated.

Brute force. Brute-force attacks try, for example, to find out a password by testing every possible combination one by one. In this case, the aim is to saturate the algorithms until they end up surfacing these AI-generated videos, and it is working. Some already call Instagram and TikTok “AI landfills” because of the enormous amount of this content, which has made conventional content created by human users lose much of its relevance to the algorithm.

Reality has changed. For many people these networks serve not only as entertainment, but as a way of keeping up with current events. Videos often try to reflect reality, but that reality has now been disrupted, and those Instagram or TikTok accounts barely show anything real. Some videos are deepfakes that are hard to detect, and as we have verified in the past, the paradox is that sometimes we are not even able to tell what is real and what is not.

AI seeks virality and ends up finding it. Creators on social networks try to make their videos go viral, and they dedicate enormous resources and time to it.
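The brute-force analogy used above can be made literal. This minimal sketch enumerates every candidate string until the "password" matches; the saturation attack on recommendation algorithms works by sheer volume in the same spirit.

```python
import itertools
import string

def brute_force(target: str, alphabet: str = string.ascii_lowercase) -> str:
    """Try every combination of `alphabet`, shortest first, until one matches."""
    for length in range(1, len(target) + 1):
        for candidate in itertools.product(alphabet, repeat=length):
            guess = "".join(candidate)
            if guess == target:
                return guess
    return ""

print(brute_force("cat"))  # tries a..z, aa..zz, aaa... until "cat"
```

No cleverness is involved, only exhaustion of the search space, which is exactly why flooding a feed with thousands of cheap videos eventually lands some of them in front of the algorithm.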
Despite all that, success is not assured. Spammers who make AI-generated video content, however, do not need to think so hard: they can generate thousands of videos with very little effort, flood social networks and wait for some of that content to hit. The virality lottery is not such a lottery when you hold a lot of tickets.

The influencers making money from the AI landfill. As usual with this kind of phenomenon, influencers have appeared with new formulas to get rich quickly and with hardly any effort. One of them, a 17-year-old named Daniel Bitton, boasts of having already earned two million dollars and has a clear message: “While others invest 5 or 6 hours making a perfect video, we can generate 8 or 10 shorts in less than 30 minutes.” How? Using AI tools.

The “sad hot dog” method. One of Bitton’s friends is a well-known TikTok spammer called Musa Mustafa. His method for going viral is that of the “sad hot dog”: “When you are hungry at two in the morning, even a sad hot dog tastes better than any meal from a Michelin restaurant. TikTok works in a similar way. Your audience does not expect (does not even want) perfectly polished videos.” Mustafa asks, not without some reason, when was the last time you saw a viral TikTok video and thought: “Wow, the color grading in this video is incredible.” In other words: quantity beats quality hands down.

But the platforms embrace this content. The Guardian warned us recently: social networks are not stopping this type of spam; they are benefiting from it, accepting it and even promoting it. In fact, they offer tools to facilitate the generation of content with AI, which means that rather than trying to solve the problem, they are aggravating it.

An example: Facebook. Meta recently launched a tool for advertisers called Advantage+.
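Tools like this rest on plain A/B testing. Advantage+'s internal method is not public, so the sketch below is only a generic illustration of the selection step: compare the click-through rates of two ad variants with a two-proportion z-test and keep the better one if the difference is statistically significant. All the numbers are made up.

```python
import math

def two_proportion_z(clicks_a: int, n_a: int, clicks_b: int, n_b: int) -> float:
    """z-statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)          # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: 120 clicks in 10,000 impressions; variant B: 165 in 10,000.
z = two_proportion_z(clicks_a=120, n_a=10_000, clicks_b=165, n_b=10_000)
winner = "B" if z > 1.96 else ("A" if z < -1.96 else "no clear winner")
print(f"z = {z:.2f} -> {winner}")  # prints: z = 2.68 -> B
```

With generation this cheap, the cost of producing another variant to feed into the test approaches zero, which is the economic point of the whole pipeline.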
With it, advertisers can create different versions of an ad, test them all A/B-style, and then select the one that works best. For advertisers (and for Meta, of course) this is fantastic, because they can get more effective ads with much less investment of time and money.

Are there limits? There will undoubtedly be users who reject this type of content: networks such as Bluesky or Mastodon move away from the algorithm and are closer to what Twitter or Facebook were years ago. But it seems clear that a vast majority of users have no problem with AI-generated content, which in fact has a success story in those videos of impossible combinations (an AI-generated fish kissing an AI-generated woman, an AI-generated orc marrying an AI-generated bride) that are also going viral.

More fuel for the dead Internet theory. There has long been talk of how the growing presence of bots on the Internet will end up making the human presence in this content marginal. What we already saw with AI-generated text and images flooding the Internet, we are now seeing with videos flooding social networks and with AI-generated virtual avatars. The AI landfill keeps spreading, and the worst part is that neither the users (who help make this content viral) nor the companies (which, as we said, not only fail to stop it but promote it) seem to have much of a problem with this situation.

Image | Kenneth Schipper

In Xataka | Meta follows in X’s footsteps: we not only work for it by writing, now we will also work for it by moderating

The end of anonymity on social networks and transparent algorithms

The President of the Spanish Government, Pedro Sánchez, has just proposed in Davos ending anonymity on social networks and making their owners criminally liable for the content published on them, according to El País’s live coverage of the Forum.

Why it matters. The proposal, which for the moment is nothing more than that, seeks to stop the toxic effect of social networks on democracy, according to Sánchez.

The context. Sánchez has accused the platform owners of wanting to “hold political power by undermining our democratic institutions.” His plan includes three main measures: ending anonymity on social networks, forcing transparency of algorithms, and criminal liability for owners.

Between the lines. The proposal is striking but not entirely new. A few months ago, the Spanish Prosecutor’s Office proposed ending anonymity in order to investigate hate crimes, in a rerun of the ‘Gag Law’ that has been with us for years. Along similar lines, Italy approved a law a few days ago that requires identification in order to publish restaurant reviews, with the intention of tackling the problem of fake reviews.

The obstacles. The technical implementation, as has been discussed with every proposal similar to this one, represents a significant challenge. Public networks, VPNs and the Tor browser are the most common ways of bypassing this type of identity check. We have a similar precedent with the ‘pajaporte’ in Spain: its very limited reach is another example of the complexity of regulating identity on the Internet.

In perspective. Although the political will seems clear, we do not know to what extent this measure is actually planned to be implemented, or to what extent it is little more than a trial balloon… although it is not the first time we have heard such an idea. “Just as the owner of a restaurant is responsible if his customers are poisoned, the owners must be responsible if their networks poison the public debate,” Sánchez argued in Davos.

Go deeper.
The proposal will reach the European Council, where Sánchez will push for states to “regain control” so that social networks help democracy instead of endangering it. The initiative poses a dilemma between network regulation, privacy and freedom of expression. And what emerges from here will shape the Internet of tomorrow.

In Xataka | Being asked for a copy of your DNI is now common. This is how the Police recommend modifying it before sharing it

Featured image | Moncloa
