There was a time, probably less than a year ago, when you saw a picture on the Internet and simply believed it. You didn’t stop to analyze it or look for its context. You didn’t ask yourself “is this real?”; you simply processed it as information and moved on.
That moment will not return.
We are no longer talking about painstakingly crafted deepfakes made to deceive some journalist (we warned about those seven years ago). We are talking about something much more banal and therefore more devastating:
- Your brother-in-law can create, in three seconds, a photo of you, completely drunk, at a bachelor party you never attended.
- Your ex can fabricate a photo of you in a pose you were never in.
- A student can generate a compromising image of their teacher in the break between classes.
The question is no longer whether the technology is good enough. It is flawless, and we are seeing it across several tools, with the recently launched Nano Banana Pro leading the pack. In fact, it is too perfect. And perhaps for the first time, technical perfection has arrived before society was ready for it.
Who can look at the photo on the right and assume that neither the woman nor the waiter nor the bar actually exists?
We are going to have to learn something contrary to what we have done all our lives: not to trust our eyes.
Our entire epistemology—from court testimony to family photo albums—rests on a simple principle: seeing is a way of knowing. Not perfect, but sufficient:
- For 300,000 years of human evolution, if you saw a tiger, there was a tiger.
- For 199 years of photography, if you saw an image of a tiger, someone had been close to a tiger.
That chain just broke. And it doesn’t break little by little, with warnings and an adaptation period. It breaks suddenly, on any given Tuesday, when you discover that the viral photo you shared was fake and you swallowed it without hesitation. Or worse: when you discover that everyone has assumed the real photo you shared is actually fake.
What we are losing is not the ability to distinguish what is real from what is fake. That got complicated a long time ago. What we are losing is something more primary: the possibility of operating under the assumption that the visual is, by default, a reasonable starting point.
And here’s the catch: for a decade we obsessed over fake news. We worried about Russian bots, troll farms and organized disinformation. All of that was industrial. It cost a lot of money, left footprints and required coordination.
What Nano Banana Pro brings is different. It is artisanal, homemade misinformation. You don’t need an authoritarian government or a budget behind it. You just need a smartphone, any smartphone.
We could combat industrial misinformation with fact-checkers and media literacy. How do you combat the fact that each person is now a printing press for alternative realities? How do you verify 10 billion images daily?
You can’t.
The least obvious consequence is the most devastating: we are going to beg for a padlock next to our real photos. If anyone can make any image, only images with verifiable certification will count. Encrypted metadata, a digital chain of custody, institutional authenticity seals. Anything, as long as it is something. A photo without a stamp will be suspect by default.
Who is going to offer that certification? Google, Meta, Apple, perhaps governments: the only institutions with the resources to verify at that scale. We are going to pay them for something that has been free for two centuries: the presumption that what was photographed existed. Because the alternative, a world where no one can be sure of anything, is simply unlivable.
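The idea behind that chain of custody can be sketched in a few lines: the capture device seals a hash of the image bytes, and anyone holding the verification key can later check that not a single pixel was altered. This is a toy sketch in Python, not any real standard; actual schemes such as C2PA Content Credentials use public-key signatures so that anyone can verify without holding a secret, and the device key here is entirely hypothetical:

```python
import hashlib
import hmac

# Toy example: an HMAC stands in for the device's signature.
# Real provenance standards (e.g. C2PA) use asymmetric keys.
DEVICE_KEY = b"secret-burned-into-the-camera"  # hypothetical

def certify(image_bytes: bytes) -> str:
    """Return a seal binding these exact bytes to the device key."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, seal: str) -> bool:
    """True only if the bytes are byte-for-byte what was sealed."""
    return hmac.compare_digest(certify(image_bytes), seal)

photo = b"\x89PNG...raw bytes of the capture"
seal = certify(photo)
print(verify(photo, seal))              # unmodified photo: True
print(verify(photo + b"edit", seal))    # any alteration: False
```

The point of the sketch is the asymmetry: sealing is cheap for the device, and any edit, however small, breaks the seal. What is expensive, and what we would be paying those institutions for, is managing the keys and vouching for the devices.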
But the worst thing is not losing confidence in images. It is losing confidence in memory. Your brain doesn’t store experiences; it stores reconstructions. And every time you remember something, you rebuild it from fragments: smells, emotions, images. For decades, photographs have been crutches for memory, consolidating everything else we remember.
And then there is the exhaustion. Every image you see now demands a small evaluation. Is it real? Do I verify it before sharing it? Will I look like a fool if I send it to the group chat? One more tab open in our internal CPU.
Our parents never had to do this cognitive work. We are going to spend the rest of our lives in suspicion mode. Not because we are cynical, but because it is the rational thing to do.
That permanent suspicion has a cost. In attention, in mental energy. Perhaps in our capacity for wonder, in the possibility of seeing something extraordinary and simply believing it. Never again. And there is hardly a solution for this:
- You can’t train an AI to detect AI-generated images perfectly: it’s an endless arms race. Every better detector trains better generators, and every better generator trains better detectors. Every higher wall is an incentive for a longer pole.
- You can’t educate people to “think critically” about each of the thousands of images they process per day. We don’t have the bandwidth.
- And you can’t legislate the problem away, because technology moves faster than the law and is more accessible than any prohibition.
The only thing left is adaptation. Cultural and psychological.
Our grandparents trusted what they saw. We trusted what was photographed. Our children are not going to trust anything that doesn’t come certified. Maybe blockchain was invented for this too.
And when everything needs verification, nothing can be spontaneous. When every image is suspect, none is memorable. When reality requires constant authentication, we stop inhabiting it naturally.
Photography died the day it became indistinguishable from imagination. We will keep taking photos and we will keep looking at them. But they will no longer do what they did for two centuries: tell us what was real.
Welcome to the era of permanent visual doubt.
Featured image | immasidx


