You feel like going to Sri Lanka because you saw it on Instagram. The problem is that the person who recommended it to you was an AI

The image is familiar. A young woman smiles from a beach with turquoise waters. In the next post, she appears walking along a cobblestone street in Marrakech. Below that, she poses at a luxury hotel in the Maldives. Her skin is perfect, her body fits the prevailing beauty standards, and the captions pair the images with inspirational phrases about traveling, discovering cultures and “living in the moment.”

Nothing seems out of place. Until you discover the reality: that traveler has never flown, never walked those streets, never tried the food she recommends. She does not exist. She is an influencer generated by artificial intelligence, part of a phenomenon that is growing quietly: the normalization of artificial profiles that shape the real decisions of millions of people.

A silent but massive boom. In the last two years, Instagram and other social networks have filled with virtual influencers: characters created with generative AI that pose as real people and publish travel, lifestyle or fashion content. The best-known case in Spain is Aitana López. Some state their artificial nature more or less clearly in their bios; others do so ambiguously or almost invisibly. What is interesting here is how the examples multiply in the tourism sector. Sena Z has been presented as “the first travel and hospitality influencer created with AI,” a collaboration between the luxury group Cenizaro Hotels & Resorts and the technology firm Bracai. Sena publishes cultural recommendations, messages about sustainability and photographs from exotic destinations.

Another notable case is Emma, the official influencer and chatbot of the German National Tourist Board. Emma not only publishes content on Instagram but also answers questions in more than 20 languages on the organization’s official website. As the entity explained to The Washington Post, her creation is part of a strategy to “stay at the forefront of digital innovation.” Other profiles join the list, such as Radhika and Emily Pellegrini, or corporate avatars like Sama, the Qatar Airways virtual flight attendant who appears both on the airline’s website and on social networks, posting as if she were living real experiences.

These are not isolated experiments. As detailed by The New York Times, airlines, tourist offices and brands are increasingly turning to these avatars because they are cheaper, faster and completely controllable. An AI influencer does not get sick, does not get tired, does not age and does not generate personal controversies.

Inexperienced influencers. The question is inevitable: what happens when the experience is not real? Just scroll through these profiles to see it: they recommend destinations, restaurants and cultures they have never experienced. Even so, they generate engagement, accumulate thousands of likes and comments, and influence travel decisions.

From the brands’ point of view, the appeal is evident. According to data collected by the New York outlet, creating an advanced avatar can cost between $5,000 and $15,000, compared with traditional campaigns that easily exceed six figures. In addition, content can be produced without travel, without filming crews and without negotiating with human talent. For real creators, however, the impact is already being felt. Human influencers cited by the same outlet explain that brands are reducing payments, eliminating extras and offering less advantageous collaborations. AI thus becomes direct new competition within the creator economy, a sector valued at more than $200 billion globally.

Is anyone regulating it? While technology advances quickly, regulation is trying to catch up. Closer to home, in Europe, the clearest answer comes via the Artificial Intelligence Act (AI Act). Article 50, which enters into force in August 2026, establishes transparency obligations for providers and deployers of AI systems. Among them:

  • Informing people when they are interacting with an AI system.
  • Marking content generated or manipulated by AI (text, image, audio or video) in a machine-detectable format.
  • Requiring deepfakes and AI-generated texts on matters of public interest to be disclosed as such, unless there is human editorial review.

The European Commission has already begun preparing a Code of Practice for the marking and labeling of AI-generated content, with the participation of experts, platforms and civil society. The goal is to ease compliance before the law becomes fully applicable. However, many virtual profiles clearly indicate neither their artificial nature nor their commercial ties, leaving users in a field of ambiguity.
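The gap between a clear disclosure and an ambiguous one is easy to illustrate. Below is a minimal, hypothetical sketch of a keyword-based disclosure checker: the phrase list is an illustrative assumption, not any official standard from the AI Act or the Commission's Code of Practice, and it shows why vague wording slips past naive detection.

```python
import re

# Hypothetical phrase list (an assumption for illustration only);
# Article 50 envisages machine-detectable marking, not caption keywords.
CLEAR_DISCLOSURES = [
    r"\bai[- ]generated\b",
    r"\bvirtual (influencer|model)\b",
    r"\bcreated with (artificial intelligence|ai)\b",
]

def discloses_ai(bio: str) -> bool:
    """Return True if the bio contains an unambiguous AI disclosure."""
    text = bio.lower()
    return any(re.search(pattern, text) for pattern in CLEAR_DISCLOSURES)

# A profile that states its nature plainly is trivially detectable...
print(discloses_ai("Virtual influencer | travel & lifestyle"))   # True
# ...while an ambiguous bio, common in practice, is not.
print(discloses_ai("Not your typical girl ✨ dreaming in pixels"))  # False
```

This is precisely why the regulation pushes for marking at the content level, in a detectable format, rather than relying on how a profile happens to describe itself.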

Unreal bodies, algorithmic authority. Beyond destination promotion, most AI influencers share common traits: eternal youth, slim bodies, perfect skin and a total absence of imperfections. This phenomenon coincides with the return of Y2K aesthetics and extreme thinness on social networks, a trend that has been linked to a decline in body diversity.

The most notable case came from advertising campaigns featuring AI-generated models, such as Guess’s in Vogue. Mental health experts warned that constant exposure to unreal bodies can aggravate self-esteem problems and increase the risk of eating disorders. The difference, they point out, is key: while traditional retouching started from a real body, AI creates bodies that have never existed, impossible to achieve even in theory.

This logic has been taken to the extreme with phenomena such as the Miss AI pageant, where artificially generated models compete showing bodies without pores, without age and without history. According to plastic surgeons, more and more patients arrive at consultations with AI-created images asking for impossible procedures, and they warn of the risk of frustration, obsession and psychological harm.

The underlying problem: we no longer know what is real. All of this occurs in a broader context: a crisis of visual trust. As my colleague at Xataka has analyzed, the massive generation of hyperrealistic images has broken a chain that for centuries seemed solid: if something was seen, it had probably existed. Today, that presumption has disappeared. Seeing is no longer equivalent to knowing.

In this new scenario, we not only doubt whether an influencer has really traveled, but also whether the image itself corresponds to anything that happened. The consequence is a permanent suspicion that affects memory, attention and the way we relate to digital reality. The technical solutions, such as seals, metadata and certifications, are only beginning to take shape, while cultural adaptation lags far behind.

A single gesture. In the end, everything comes back to the initial motion: sliding a finger across the screen. The beaches are still real. The cities exist. The trips happen. But the people recounting them, increasingly, have never been there.

In a digital ecosystem saturated with perfect images, the question is no longer whether we will see more AI-generated influencers, but whether we will know how to tell them apart, and be able to demand that distinction. Because on Instagram, inspiration is still sold as authentic, even when there is no longer anyone behind it who has packed a suitcase.

Image | Instagram

Xataka | We couldn’t tell you if the image at the top of this post is real or generated by AI: we are in the era of permanent doubt

