AI's visual garbage is so omnipresent that it is already fueling a counter-aesthetic current: neo-brutalism

The Internet is being flooded with images and designs that seem cut from the same mold: identical fonts, predictable gradients, aesthetics polished to the point of nausea. The phenomenon is hard to pin down because of its infinite variants and sheer omnipresence, but it has a name: "AI slop". The term refers to AI-generated digital content, from images to web design itself, in which quantity takes precedence over any hint of originality or meaning beyond the efficiency of the mass-production chain.

But what is AI slop? The expression gained traction in 2024 thanks to British programmer Simon Willison, although it had previously circulated in communities such as 4chan and Hacker News. The concept points to a root problem: when AI models are trained on the most common patterns on the internet, they replicate a generic, forgettable aesthetic ad nauseam. It is what experts call "distributional convergence": everything looks designed by the same depersonalized algorithm.

And the anti-AI slop? Faced with this invasion of algorithmic uniformity, a visual counterculture is emerging that celebrates precisely what AI avoids: clumsiness, unevenness, the marks of the human creative process. Anti-AI slop is not an aesthetic whim but a declaration of principles, one that rescues imperfection and turns it into a differential value and a trait of delicious humanity. Some critics celebrate it as a kind of digital neo-brutalism, a nod to the famous unadorned concrete architecture of the 1950s. This neo-brutalism takes digital nudity to the extreme: sites built with basic HTML and minimal CSS, where the code is displayed without artifice. The fonts are not elegant paid typefaces but the system ones installed by default: Arial, Times New Roman, Courier. Photographs appear unretouched, with their digital noise and compression artifacts clearly visible.
Asymmetrical compositions, in short, that break any notion of classical balance. Like children's drawings. This leads us to a style perhaps opposite to cold brutalism, but equally contrary to AI slop: the aesthetic of the hasty childish sketch. Deliberately unbalanced proportions, freehand illustrations, elements that overflow the margins. Lindsay Marsh, a designer specializing in visual trends, points out that these visible "errors" act as signatures of authenticity: they are proof that behind the screen there are human fingers, not processors devoid of humanity. The Phantom Watchers team puts it in similar terms: "It's our way of saying 'a human was here.'"

Any notable example? The recent redesign of the veteran magazine The Face is full of imperfections. Hell, it even looks like it was coded in plain HTML.

What features does it have? Like AI slop itself, this opposition mutates in countless ways: disproportionately large fonts that challenge traditional visual hierarchy, website scaffolding exposed in exhibitionist fashion (even leaving the code visible), and color combinations limited to one or two colors on uniform black or white backgrounds, sometimes imitating the texture of analog collage. Layouts are twisted on purpose, breaking with the obsessive symmetry that dominates more formal styles, the kind that is easier to imitate for those AIs that promise to set up a web store in a few minutes and a couple of prompts.

But... why? The guiding principles of this rejection movement are clear: imperfection as a refusal of digital makeup, functionality without disguises, frontal rejection of prefabricated templates. "We don't need decoration, we need design that just works," summarized the U1CORE design team when analyzing one of the many tentacles of this anti-AI slop: brutalist minimalism, the label under which this new design trend is also categorized. There is philosophy behind all this, too.
Some evoke the aesthetics of another architectural and decorative tradition: Japanese wabi-sabi, which finds beauty in the ephemeral and the defective. Cracks in walls and objects, time-worn textures, organic asymmetry... everything that algorithmic perfection rejects, anti-AI slop highlights. Many designers have a name for the feeling that gave rise to all this: "post-AI visual fatigue", a collective exhaustion in the face of designs as polished as they are sterile and devoid of personality.

Who said punk? For some of us old dogs, this philosophy recalls the playbook of early punk, the one that made fanzines with headlines cut out of magazines. Ethics became aesthetics, and everything was photocopy militancy and album covers that looked like ransom notes; but along the way there was also opposition to a giant: the serious media, with their gray designs and content without stridency. Punk stood up to the establishment with filth and do-it-yourself. It all sounds very familiar: AI is the new mainstream, and many are going hardcore.

Header image | Kris Shakar

In Xataka | Young people have decided to stop posting (so much) on Facebook and Instagram. "AI-generated garbage" has free rein

Images no longer mean that something was real. Welcome to the era of permanent visual doubt

There was a time, probably less than a year ago, when you saw a picture on the Internet and simply believed it. You didn't stop to analyze it or look for its context. You didn't think "is it real?"; you simply processed it as information and moved on. That moment will not return.

We are no longer talking about elaborate deepfakes that fool some journalist (we warned about that seven years ago). We are talking about something much more banal and therefore more devastating: your brother-in-law can create, in three seconds, a photo of you completely drunk at a bachelor party you never attended. Your ex can fabricate a photo of you in a pose you never struck. A student can generate a compromising image of a teacher in the break between classes.

The question is no longer whether the technology is good enough. It is perfect, as we are seeing with several tools, with the recently launched Nano Banana Pro leading the way. In fact, it is too perfect. And perhaps for the first time, technical perfection has arrived before social readiness. Who is capable of looking at the photo on the right and assuming that neither the woman nor the waiter nor the bar actually exists?

We are going to have to learn to do something different from what we have done all our lives: learn not to trust our eyes. Our entire epistemology, from court testimony to family photo albums, rests on a simple principle: seeing is a way of knowing. Not perfect, but sufficient. For 300,000 years of human evolution, if you saw a tiger, there was a tiger. For 199 years of photography, if you saw an image of a tiger, someone had been close to a tiger. That chain just broke. And it does not break little by little, with warnings and an adaptation period. It breaks suddenly, on any given Tuesday, when you discover that the viral photo you shared was fake and you swallowed it without hesitation.
Or worse: when you discover that everyone has assumed the real photo you shared is actually fake. What we are losing is not the ability to distinguish real from fake; that got complicated a long time ago. What we are losing is something more primary: the possibility of operating under the assumption that the visual is, by default, a reasonable starting point.

And there's the catch. For a decade we obsessed over fake news. We worried about Russian bots, troll farms and organized disinformation. All of that was industrial. It cost a lot of money, left footprints and required coordination. What Nano Banana Pro brings is different. It is artisanal misinformation, homemade and everyday. You don't need an authoritarian government or a budget behind it. You just need a smartphone, any smartphone. We could fight industrial misinformation with fact-checkers and media literacy. How do you fight the fact that each person is now a printing press for alternative realities? How do you verify 10 billion images a day? You can't.

The least obvious consequence is the most devastating: we are going to beg for a padlock next to our real photos. If anyone can make any image, only images with verifiable certification will matter. Encrypted metadata, a digital chain of custody, institutional authenticity seals. Anything, but something. The photo without a stamp will be suspect by default. And who is going to offer that certification? Google, Meta, Apple, maybe governments: the only institutions with the resources to verify at that scale. We are going to pay them for something that has been free for two centuries: the presumption that what was photographed existed. Because the alternative, a world where no one can be sure of anything, is simply unlivable.

But the worst thing is not losing confidence in images. It is losing confidence in memory. Your brain doesn't store experiences; it stores reconstructions.
And every time you remember something, you reconstruct it from fragments: smells, emotions, images. Photographs have been crutches for memory for decades. They consolidated the rest of the memory.

And then there is the exhaustion. Every image you see now requires a little evaluation. Is it real? Do I verify it before sharing it? Will I look like a fool if I send it to the group chat? Another tab open in our internal CPU. Our parents never had to do this cognitive work. We are going to spend the rest of our lives in suspicion mode. Not because we are cynical, but because it is rational. That permanent suspicion has a cost: in attention, in mental energy, perhaps in the capacity for wonder. In the possibility of seeing something extraordinary and simply believing it. Never again.

There is hardly a solution for this. You can't train an AI to detect AI-generated images perfectly: it is an endless arms race. Each detector improves the generators; each generator improves the detectors. Every higher wall is an invitation to bring a longer pole. You can't educate people to "think critically" about every one of the thousands of images they process per day; we don't have the bandwidth. And you can't legislate the problem away, because the technology moves faster than the law and is more accessible than any prohibition.

The only thing left is adaptation, cultural and psychological. Our grandparents trusted what they saw. We trusted what was photographed. Our children are not going to trust anything that does not come certified. Maybe blockchain was invented for this too. And when everything needs verification, nothing can be spontaneous. When every image is suspect, none is memorable. When reality requires constant authentication, we stop inhabiting it naturally. Photography died the day it became indistinguishable from imagination. We will keep taking photos and we will keep looking at them. But they will no longer do what they did for two centuries: tell us what was real.
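What would an "authenticity seal" actually look like in code? The sketch below is a deliberately minimal illustration of the idea, not any real standard: a hypothetical certifier binds an image's bytes and its capture metadata together with an HMAC key only it holds, so anyone who trusts the certifier can detect that neither pixels nor metadata were altered. Real provenance systems (C2PA-style Content Credentials, for instance) use public-key signatures and certificate chains rather than a shared key; all names and values here are made up.

```python
# Hypothetical "authenticity seal" sketch: a certifier signs image bytes plus
# capture metadata. Any change to pixels or metadata invalidates the seal.
import hashlib
import hmac
import json

CERTIFIER_KEY = b"demo-key-held-by-certifier"  # illustrative only

def seal(image_bytes: bytes, metadata: dict) -> str:
    """Return a hex seal binding the image bytes to their metadata."""
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(metadata, sort_keys=True)
    return hmac.new(CERTIFIER_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, metadata: dict, sealed: str) -> bool:
    """Recompute the seal and compare in constant time."""
    return hmac.compare_digest(seal(image_bytes, metadata), sealed)

photo = b"\x89PNG...raw bytes of a demo image..."
meta = {"device": "demo-cam", "taken": "2025-11-25T12:00:00Z"}
s = seal(photo, meta)
assert verify(photo, meta, s)             # untouched photo passes
assert not verify(photo + b"!", meta, s)  # any byte change breaks the seal
```

The design point is the asymmetry the article describes: producing the seal requires the certifier's key, so an unsealed or tampered image is "suspect by default" while verification stays cheap.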
Welcome to the era of permanent visual doubt.

In Xataka | There is a generation …

Here begins the era of visual and interactive AI that you will not want to stop using

Today I asked Gemini twice what the three-body problem consists of. The first time I asked the conventional Gemini, which after thinking for a few seconds gave me a text answer, well structured but which at first scared me a little because it even included equations. This is what the response looked like:

Then I asked Gemini again, this time taking advantage of the new feature called Dynamic View. Google introduced this option a few days ago, and here Gemini does not respond in text mode but visually. This is what the response looked like:

To help me understand the concept, it created a simulation in which I could switch between different modes and speeds. It then complemented the simulation with short texts explaining what happens when there are only two bodies (like the Earth and the Moon) and what happens with three, when the butterfly effect kicks in: the system becomes so chaotic and complex that triple star systems in the universe are unstable. I didn't quite get it from the formula, but with the simulation, I did.

This is a clear example of where things are heading in the world of AI chatbots. In the future Gemini proposes, the conversation can become, if we wish, much more visual and interactive. Almost like a game, because by modifying the simulation we can see the effect of the change in real time. It is easier to have things "click" and understand the concept, and that, dear readers, is addictive. Google talked about all this when it presented the feature last week, explaining that the option "allows AI models to create immersive experiences, interactive tools and simulations, completely generated in real time for any prompt." Well, indeed: this is how 'How I Met Your Mother' ended, although I have hidden the text so as not to spoil the ending for those who have not seen the series.
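Back to the three-body example: the chaos that Gemini's simulation makes visible can be sketched in a few lines of code. The toy below (all masses, positions and step sizes are illustrative, and the integrator is a simple semi-implicit Euler scheme, far cruder than anything a real demo would use) runs the same three-body setup twice, with one starting position nudged by a millionth, and measures how far the two runs drift apart: the butterfly effect in miniature.

```python
# Toy planar three-body simulation: three equal masses under Newtonian
# gravity, integrated with semi-implicit Euler. Values are illustrative.
import math

def simulate(positions, steps=4000, dt=1e-3, g=1.0, soft=1e-3):
    pos = [list(p) for p in positions]
    vel = [[0.0, 0.0] for _ in pos]  # all bodies start from rest
    for _ in range(steps):
        # pairwise gravitational accelerations (softened near collisions)
        acc = [[0.0, 0.0] for _ in pos]
        for i in range(3):
            for j in range(3):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r2 = dx * dx + dy * dy + soft
                inv_r3 = g / (r2 * math.sqrt(r2))
                acc[i][0] += dx * inv_r3
                acc[i][1] += dy * inv_r3
        for i in range(3):  # kick, then drift
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

base   = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.9)]
nudged = [(1e-6, 0.0), (1.0, 0.0), (0.5, 0.9)]  # one-millionth shift
a, b = simulate(base), simulate(nudged)
drift = math.dist(a[0], b[0])  # how far the two runs have diverged
```

With two bodies the orbits stay regular; with three, close encounters amplify the tiny nudge until the trajectories no longer resemble each other, which is exactly why the interactive view "clicks" where the equations do not.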
If you haven't seen it, I recommend it 😉

The practical applications of something like this are, once again, almost limitless. One can apply these dynamic views to understand probability theory, to get fashion tips, or to remember how 'How I Met Your Mother' ended. While I was at it, I asked for the impossible: explain the movie 'Tenet' to me. It tried, with a decent visual scheme (the video below shows that interactive response), but it didn't help me much, because I'm afraid that movie is absolutely inexplicable. Don't take my word for it: Nolan says so.

Visual and interactive summaries take a while to generate and are not for the impatient, but once they finish, the answers do not disappoint: the interactivity and visual content enrich the response and make it much more digestible and attractive. It is the tiktokization of AI, making it even more direct.

This approach once again demonstrates how strong Google has been in recent months. The Nano Banana phenomenon turned it into a company that finally showed its potential, and both Gemini 2.5 Flash and Pro a few months ago and Gemini 3 now (which certainly seems a step above its rivals) have confirmed the optimism surrounding the company. Dynamic View is one of the most powerful and disruptive innovations we have seen in these three years of AI use, and it follows the path the company already traced with the fabulous NotebookLM.

Let's go shopping with ChatGPT. Google, of course, is not alone in this effort. OpenAI has been an absolute benchmark in the productization of AI, and with ChatGPT it nailed, from the very first moment, the user experience that made us want to use the chatbot for more and more things.
The company led by Sam Altman has long been putting forward interesting proposals to apply AI to all kinds of scenarios, and now it has come up with a new one that is timely in these days of Black Friday: a "Purchase Research" mode that goes beyond finding products for us. It goes further because it does not stick to our initial prompt; it asks us about that prompt. For example: I am looking for a cheap 27-inch monitor with 1440p (QHD) resolution, mostly for office use. That is what I typed into the search box.

The surprises came from there, because in this mode ChatGPT does not give you the answer directly: it asks a few more questions in "survey" mode, with boxes to tick. Preferred connectivity? (HDMI.) What is your budget? (Less than 150 euros.) Which panel do you prefer? (I don't care.) After these questions, ChatGPT shows some preliminary options so you can tell it whether its results are on the right track (and if they are not, it asks why, for example price or features). After two and a half minutes, the chatbot presented an interesting personalized shopping guide in which it recommended this Philips 27E2N1500L/00 for 99 euros, which I will probably end up buying.

Obviously this tool is interesting for users, but also for OpenAI, because it is one more move in the strategy of becoming our indispensable ally for all kinds of purchases. ChatGPT wants to be a useful shopping assistant that helps us find products... and that, along the way, earns OpenAI a commission. We already saw it with Instant Checkout, and this is another move toward that promising line of income for a company that desperately needs it. But beyond that, the Purchase Research mode is another good example of how these searches no longer stop at what we ask; they ask us questions to better understand what we want and then give …
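The interaction pattern behind that "survey" step can be sketched very simply: collect answers to a few clarifying questions, then use them to narrow a catalog. This is a hypothetical illustration of the pattern only; the product names, fields and prices below are invented and have nothing to do with OpenAI's actual implementation.

```python
# Hypothetical sketch of a "Purchase Research"-style flow: clarifying answers
# narrow a catalog instead of answering the first prompt directly.
monitors = [
    {"name": "OfficePro 27Q", "inches": 27, "res": "1440p", "ports": ["HDMI", "DP"], "eur": 139},
    {"name": "GamerMax 27",   "inches": 27, "res": "1440p", "ports": ["DP"],         "eur": 249},
    {"name": "BudgetView 24", "inches": 24, "res": "1080p", "ports": ["HDMI"],       "eur": 89},
]

def purchase_research(catalog, answers):
    """Filter the catalog using the user's answers to the follow-up questions."""
    return [
        p for p in catalog
        if p["inches"] == answers["inches"]
        and p["res"] == answers["res"]
        and answers["port"] in p["ports"]
        and p["eur"] <= answers["budget"]
    ]

# The "survey" step: preferred connectivity? budget? size? resolution?
answers = {"inches": 27, "res": "1440p", "port": "HDMI", "budget": 150}
picks = purchase_research(monitors, answers)
assert [p["name"] for p in picks] == ["OfficePro 27Q"]
```

The real mode adds the interesting part on top of this skeleton: showing preliminary picks and asking *why* they miss, which is a feedback loop a plain filter does not have.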

The color of your Ethernet cable is not for decoration: it is a key visual language

We all have Ethernet cables at home, and they are probably different colors. In my case I have several yellow ones, but there are also red, blue, green... What many people do not know (myself included, until recently) is that the colors are not a manufacturer's whim: they answer a practical question.

A question of organization. Contrary to what we might expect, the exterior color of an Ethernet cable tells us nothing about its performance. If what you want is to know the category of the cable (that is, the speed it supports), that detail comes printed on the cable itself. The color does not tell us whether the cable is faster or slower; it exists for something totally different: distinguishing and organizing cables.

In Xataka | How to convert the antenna sockets in your house into an Ethernet network to bring Internet from one room to another

In a home it doesn't matter much, but imagine a server room or data center where Ethernet cables number in the hundreds or even thousands; if all the cables were the same color, identifying them would be madness. Colors help manage large networks.

Ethernet cable colors. Although there are some guidelines on cable colors from organizations such as the IEEE and ANSI, there really is no universal color code for Ethernet cables. The meaning of each color can vary by country, sector and even company. Still, there are widely shared patterns. These are the most common uses:

Grey/white/black: the colors we usually find for general home and office use; they come bundled with most routers.
Blue: the most common for general network connections, servers or workstations.
Yellow: usually PoE (Power over Ethernet) cables, which carry power as well as data; used for IP cameras and VoIP phones.
Green: direct connections between two devices, such as two computers, without an intermediate device.
Red: usually reserved for critical connections such as security or emergency systems.
Orange and purple: less common colors. According to Cables and Kits, they are used for systems that require a specific connection not compatible with the usual standards, for example to connect older non-Ethernet systems to newer ones.

As we said, the color of the cable does not determine its performance; it serves a practical purpose for those who manage very large networks. With colors, maintenance time is shortened and serious failures, such as disconnecting a critical system, are avoided. At home it can also be useful if you have several devices connected to your router and want to see at a glance which is which.

Image | PxHere

In Xataka | The submarine cables belonged to the teleoperators, and now the big technology companies are controlling them

The news "The color of your Ethernet cable is not for decoration: it is a key visual language" was originally published in Xataka by Amparo Babiloni.
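Since there is no universal standard, a network team typically documents its own color convention. The lookup table below is a minimal sketch of that practice, using the illustrative mappings from the list above; real deployments would define their own.

```python
# Illustrative cable-color convention a network team might document for
# itself. There is no universal standard: these defaults mirror the common
# patterns described in the article, nothing more.
CABLE_COLOR_ROLES = {
    "grey":   "general home/office use",
    "white":  "general home/office use",
    "black":  "general home/office use",
    "blue":   "general network connections, servers, workstations",
    "yellow": "PoE (Power over Ethernet): IP cameras, VoIP phones",
    "green":  "direct device-to-device links (no intermediate device)",
    "red":    "critical connections: security or emergency systems",
    "orange": "non-standard/legacy interconnects",
    "purple": "non-standard/legacy interconnects",
}

def role_of(color: str) -> str:
    """Look up a cable color; the fallback reminds us that color is not speed."""
    return CABLE_COLOR_ROLES.get(
        color.lower(), "unknown: check the category printed on the sheath"
    )

assert role_of("Yellow").startswith("PoE")
```

The point of such a table is exactly what the article describes: in a rack with hundreds of cables, a documented convention turns color into a maintenance-time shortcut.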

Ode to rounded corners, the visual element that has proven Steve Jobs right once again

Let's pay a small tribute to a visual element we almost never notice, but which is already an integral part of our lives: rounded corners. They are everywhere and have taken over technology, and we love them. We are surrounded by devices and interfaces dominated by rectangles and squares with rounded corners. They are more elegant, softer on the eye, far less aggressive and strident. And there is real psychology behind this way of designing objects and interfaces: since childhood we have known that sharp corners are dangerous (today, corner protectors for children are big business). These elements ease visual perception, and their introduction into the technological world deserves to be remembered.

Steve Jobs was right (again). Andy Hertzfeld was one of the team members who developed the Apple Macintosh. He told a curious story, dated May 1981 and now recovered by the Computer History Museum.

Lisa OS 1.0. Look at the edges of the calculator app: they are rounded!

The protagonist of the story is Bill Atkinson, legendary Apple engineer and Hertzfeld's colleague on that project. At the time, Atkinson was working on QuickDraw, his graphics library (then called LisaGraf), and although he usually worked from home, whenever he made significant progress he would rush to the office to show it off. That is what happened that spring. Atkinson came to Apple's offices in the mythical "Texaco Towers" near the Cupertino campus and showed how he had added code to draw circles and ovals very quickly. The programming was much harder than it sounds, because drawing circles usually involves square roots, and the Motorola 68000 in the Lisa and the Macintosh did not support floating-point operations.
Atkinson solved it with calculations that used only addition and subtraction (he was probably inspired by the Bresenham algorithm) and began to fill the screen with circles and ovals while his colleagues smiled in astonishment and satisfaction. But there was someone who was neither too amazed nor too pleased: Steve Jobs. Upon seeing the demonstration, Jobs said:

—Okay, circles and ovals are fine, but how about drawing rectangles with rounded corners? Can we do that too?

—No, there is no way to do it. It would actually be really difficult, and I don't really think we need it —Atkinson replied, probably annoyed that Jobs hadn't been more impressed with his method for circles and ovals.

—Rectangles with rounded corners are everywhere! Just look around this room!

Hello, Mac OS X with rounded corners (2001).

Sure enough, the room had objects like whiteboards and tables with rounded corners, and Jobs insisted they were everywhere: he only had to look out the window to see more. He ended up convincing Atkinson to walk around the block with him and point out all the rounded-corner rectangles they saw. After a rectangular no-parking sign with rounded edges, Atkinson gave in:

—Okay, I give up. I'll see if it's as difficult as I thought.

And he went home to work on the problem. The next afternoon he returned to the office with a huge smile: his new demo not only drew rectangles with rounded corners, it did so almost as fast as it drew ordinary, sharp-cornered rectangles. He added the code and called the primitive "RoundRects". In our pockets we usually carry a device that makes good use of those rounded rectangles; the iPhone, of course, does. That design element soon became an integral and indispensable part of the Macintosh operating system interface.
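Atkinson's own code is not reproduced here, but the family of tricks he drew on is well known: the midpoint (Bresenham-style) circle algorithm rasterizes a circle using only integer addition and subtraction in the loop, with no square roots or floating point, which is exactly what the 68000 constraint demanded. Here is a minimal sketch:

```python
def circle_points(r):
    """Rasterize a circle of integer radius r around the origin using only
    integer addition/subtraction in the loop (no square roots, no floats)."""
    pts = set()
    x, y = r, 0
    err = 1 - r  # midpoint decision variable
    while x >= y:
        # one computed octant, mirrored eight ways by symmetry
        for px, py in ((x, y), (y, x)):
            pts.update({(px, py), (-px, py), (px, -py), (-px, -py)})
        y += 1
        if err < 0:
            err += 2 * y + 1        # midpoint inside the circle: keep x
        else:
            x -= 1
            err += 2 * (y - x) + 1  # midpoint outside: step x inward
    return pts

pts = circle_points(5)
assert (5, 0) in pts and (0, 5) in pts and (4, 3) in pts
```

The decision variable `err` tracks how far the midpoint between candidate pixels is from the ideal circle, updated incrementally; rounded rectangles then reduce to four quarter-circles plus straight edges, which is why Atkinson's RoundRects could run almost as fast as plain rectangles.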
And it also ended up shaping hardware (hello, phones with rounded corners) and software design at Apple and many other technology companies.

Source: Freepik.

The Cupertino firm fully embraced the element on the iPhone starting in 2013, when iOS 7 arrived with its "squircle", an even more subtle kind of rounded rectangle that Apple ended up using, for example, in its icons. It was one more example of the particular relevance of a design element that has ended up completely taking over our screens and the technological world. Long live rounded corners.

In Xataka | Many young people already see and hear everything at 1.5x. They didn't get there by chance: there was a lot of money at stake

The visual spectacle of 'The Witcher 4' makes everything seen so far look ridiculous. And it embodies the industry's main sin

The technical demo of 'The Witcher 4' has left the player community speechless and has become the star demo of the State of Unreal 2025, where Epic Games presented the latest news about its flagship graphics engine. A display of overwhelming graphics that, at the same time, sums up one of the industry's great problems: always living off promised futures.

What was shown. First of all, it should be emphasized that what we saw is an in-engine technical demo, not an in-game video. That is, it is Unreal Engine 5 running in real time (no pre-rendered CGI), but it is not a captured fragment of the game. In this case it ran on a PlayStation 5, although CD Projekt wanted to make it clear that it is still undecided on which platforms the new installment of 'The Witcher' will appear. It is simply a showcase of what the game can offer in character animation, fauna, environments and reactions. A wonder that, as we already knew, will star an old acquaintance of the saga: Ciri.

Mea culpa. The presentation also serves CD Projekt Red to reassure players: the 'Cyberpunk 2077' console fiasco (which they do not hesitate to call a "disaster") will not be repeated. That is why there is no date for 'The Witcher 4': it will be in development for as long as it takes to become a "new benchmark" for games of this type. In the times of 'GTA VI', that is not a bad aim, and the truth is that what we saw seems up to the task: the scale of the scenarios, the naturalness of the movements, the realism of the gestures, the apparent diversity of possible character reactions... it is simply overwhelming.

Three beasts.
Right now we have three graphical beasts coming from the corners of the mainstream with the greatest technical and financial muscle in the industry: this 'The Witcher 4', which, like everything from CD Projekt Red, sits at the visual vanguard of the medium; 'Gears of War: E-Day', whose gameplay we may see soon and from which, judging by in-engine showcases like this one, we can expect a return in style for the saga; and, of course, 'GTA VI', which has not yet shown gameplay but has shown incredibly realistic cinematics made with the game engine.

Only the giants. In other words, games from three industry titans (CD Projekt, Microsoft and Rockstar) whose launches we can place, at best, in the medium term: we may not have them in our hands until well into 2026. And that is a problem: the games that push the industry forward technically come only from the mastodons of the sector, companies used to operating through franchise blockbusters (like these titles) and to playing it safe with games that are revolutionary in visuals... and only in visuals.

The damned hype. And we return to the usual problem: games are very expensive to produce (budgets routinely exceed 100 million dollars; with an estimated 2,000 million, 'GTA VI' is in fact going to be the most expensive in history), so producers do not take risks, and on the players' side we complain that nothing moves forward: a vicious circle. Those with the agility to push the industry in unexplored directions cannot afford it, even with technological monsters like Unreal Engine 5 at their disposal. And so we stand gazing at a distant future. So distant that CD Projekt Red cannot even reveal which platforms 'The Witcher 4' will come out on.
Video games have always been built on promises, and we have accumulated enough bad experiences (yes, CD Projekt Red, we have already forgiven you, but the 'Cyberpunk 2077' bugs are not so easily forgotten) to distrust the "it will arrive when it's ready." 'The Witcher 4' looks stunning, but paradoxically, we are tired of being told to sit and wait.

Header image | CD Projekt Red

In Xataka | All the questions and doubts that remain to be resolved about the Nintendo Switch 2 a week before its launch
