How to add the Three Wise Men to any photo of your street using artificial intelligence

We are going to tell you how to add the Three Wise Men to your photographs so that you can create images full of excitement. The idea is that if you have a photo of your street, or of a place you usually walk through, you can add these characters to it without altering anything else.

We are going to show you two ways to do this, both with artificial intelligence. First we will look at a website designed exclusively for this, which is the easiest alternative to use. Then we will tell you how to use the most popular artificial intelligence chatbots, such as ChatGPT or Gemini.

Use a third-party page

If you want to do things as easily as possible, there are pages like fotoalosreyesmagos.com, created especially to add the Three Wise Men to your photos, which also lets you see the photos shared by other users from all over Spain. The website offers consistent designs, although the results are a little less refined.

To use it, go to fotoalosreyesmagos.com and click on Upload your photo. You will be taken to a screen where you have to upload the photo into which you want to insert the Three Wise Men. Click on the box, or drag the photo onto it if you are on a computer. Remember that it must be a photo of a street or landscape so that the AI can insert the characters into it.

Next, you have to choose how to customize the resulting photo. To do this, you just have to decide whether you want to include the camels or only the Kings. Additionally, you have to choose whether you want the photo to be public, indicating your location, or whether you want to keep it private and not publish it in the gallery.

Finally, after deciding whether or not to give the website a donation of one euro, the photo will be generated. When the photo is ready you can download it or share it, as well as publish it in the public gallery if you want.
Add the Three Wise Men with ChatGPT or Gemini

The other option is to use ChatGPT or Gemini. In both cases you can use the same prompt, although right now ChatGPT Images offers better results. You can try both options and stick with the one that works best for you.

In either case, what you have to do is upload a photograph of your neighborhood and add the following prompt:

I want you to add the Three Wise Men to this photo. They should be walking down the street, and you should make them realistic, so they look like real people. Watch the proportions so that they have a realistic size within the photograph, the size of a real person. Don't touch anything else in the photo, just add the characters.

That's it: with this, the AI will generate a fairly realistic image of these characters. The advantage of this option is that you can add and specify things in the prompt, adding objects, specifying sizes, and so on.

In Xataka Basics | How to create a character in ChatGPT and Gemini to use it in all the images you make with artificial intelligence
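If you prefer scripting to the chatbot interface, the same edit can be attempted through the OpenAI Python SDK's image-editing endpoint. Treat this as a minimal sketch under assumptions that are not in the article: the model name `gpt-image-1`, the input file `my_street.jpg`, and the output path are placeholders, and an `OPENAI_API_KEY` must be set in your environment.

```python
import base64

# English version of the article's prompt
PROMPT = (
    "Add the Three Wise Men to this photo. They should be walking down the "
    "street and look like real people, with a realistic size within the "
    "photograph. Don't touch anything else in the photo, just add the characters."
)


def add_wise_men(photo_path: str, out_path: str = "wise_men.png") -> str:
    """Send the street photo plus the prompt to the image-editing endpoint."""
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(photo_path, "rb") as image_file:
        result = client.images.edit(
            model="gpt-image-1",  # assumed model name; check the current docs
            image=image_file,
            prompt=PROMPT,
        )
    # The edited image comes back base64-encoded
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))
    return out_path


# add_wise_men("my_street.jpg")  # hypothetical input file
```

The prompt string is the place to iterate, just as in the chat interface: add objects, sizes, or camels and rerun.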

How to easily create a nice Christmas greeting with your photo using ChatGPT Images

We are going to explain step by step how to create a Christmas greeting with your photo using ChatGPT Images, the section of ChatGPT for creating images from photographs. This tool lets you do the editing with artificial intelligence without having to write a specific prompt.

Right now, this section has a default style that allows you to create a pretty Christmas photo in just two or three clicks. We are going to add one extra request on top of that and voilà, you will have the complete greeting. You'll see that it's simple.

Your Christmas greeting with ChatGPT

The first thing you have to do is enter the ChatGPT website or application on your device. Once inside, open the side menu and click on the Images section, which appears just below the search options.

This will take you to the main screen of the Images section. There, look for the Test a style on an image section, and click on the Festive portrait option you will find inside.

This opens a window where you have to choose the photo you want to use as the basis for the greeting. It has to be a portrait-type photo in which one or more people appear. You can use one of the recent photos you have used in this section, or manually choose another one.

Just by doing this, ChatGPT will turn your photo into a festive portrait full of Christmas decorations and motifs. If you don't like the result the first time, you can try again, copy and paste the prompt with another photo, or repeat the entire process.

Once you have the image created to your liking, it's time to give it the final touch. In the same chat where it was generated, write that you want it to add a text using a Christmas font, for example something like "Happy Holidays." I have used this prompt:

I want to turn this photo into a Christmas card. I want you to put the text "Happy Holidays!" at the bottom, using a Christmas font.

And with just this you will have your greeting created. All that's left is to decide whether you want to make additional adjustments, such as changing the size of the text, its position, or even making the image horizontal. You can even upload someone else's photo and ask ChatGPT to add that person to the composition, applying the same treatment so that everything looks consistent.

In Xataka Basics | How to create a character in ChatGPT and Gemini to use it in all the images you make with artificial intelligence

How to create a Christmas sugar cookie image with your pet’s photo in a couple of clicks with ChatGPT

We are going to explain step by step how to create a Christmas sugar cookie image with your pet's photo using ChatGPT Images, the section of ChatGPT for creating images from photographs. This tool lets you do the editing with artificial intelligence without having to write a specific prompt.

The best thing about all this is that these images are very easy to make, and you only need a couple of clicks to get them. Furthermore, when the photo is generated you will also be shown the prompt generated by ChatGPT itself, so you can copy and paste it into another AI or make any modifications you want.

Make sugar cookies from your pets

The first thing you have to do is enter the ChatGPT website or application on your device. In the side menu, click on the Images section, which appears just below the search options.

Once you enter the Images section, look for the row that says Test a style on an image. There, find and click on the Sugar cookie option, which appears with the drawing of a dog-shaped biscuit.

This opens a window where you have to choose the photo of your pet that you want to use as a reference. You can use one of the recent photos you have used in this section, or manually choose another one.

And that's it. Just by doing this, ChatGPT will create the sugar cookie image with the photo of the pet you uploaded. If you are not satisfied, you can try again or copy and paste the prompt with another photo. You can also edit and modify the prompt so that the result is different.

In Xataka Basics | How to create a character in ChatGPT and Gemini to use it in all the images you make with artificial intelligence

How to make a Christmas greeting by creating a family or group photo from separate photographs

We are going to tell you how to create Christmas greetings by generating a group image from separate photos. For this we are going to use artificial intelligence, specifically Gemini with its Nano Banana model, possibly the best free alternative for this task.

The secret, once again, is to use an appropriate prompt in which you describe exactly what you want. We are going to tell you everything you should take into account, and then the prompt you should use to create the image. You will see that it is quite simple.

Group Christmas greeting with Gemini

Before you start, you first have to carefully select the photos you want to use. Try to have similar lighting, or to have the same part of everyone's body visible. Gemini is going to try to cut and paste all the photos together with as few modifications as possible, so keep that in mind: they should be photos that look similar.

You should also know that you will be able to change the clothes of the people in the photos. So although ideally everyone would be dressed similarly, it is not essential, because afterwards you can have Gemini dress them however you want.

Once you have everything, start a conversation with Gemini. First upload the photos you are going to use, then copy and paste the following prompt and send it along with the photos:

I want you to create a Christmas card with a family photo. I'm going to give you separate photos of people, and I want you to create a family photo where they all appear together. Under the photo it has to say "Merry Christmas". Make the background with Christmas motifs.

In this prompt you can make changes or add more details. You can describe the background to be used, and also the font and the text. Don't be afraid to try, experiment, and try again if the first result doesn't work out. After doing so, as we said before, you can ask Gemini to change the clothes.
This way, if people's clothes are different in the photos, you can unify the result a little. In fact, if you have a group photo, you can also simply ask it to change the outfits. Another option is to upload the group photo and then an individual photo of someone who is not in it, and ask Gemini to add that person.

And do you remember when we told you how to turn your photos into video game scenarios or into a Stranger Things character? Well, you can also use those tricks here to make the greeting as original and personalized as possible.

In Xataka Basics | Gemini Image Editor: 16 Ways and Tricks to Squeeze Nano Banana with Google's AI
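The same group-photo request can be scripted with Google's `google-genai` Python SDK. This is a sketch, not the article's method: the model identifier `gemini-2.5-flash-image` is an assumption for Nano Banana's API name, the file names are placeholders, and a `GEMINI_API_KEY` is expected in your environment.

```python
# English version of the article's prompt
PROMPT = (
    "I want you to create a Christmas card with a family photo. I'm going to "
    "give you separate photos of people, and I want you to create a family "
    "photo where they all appear together. Under the photo it has to say "
    '"Merry Christmas". Make the background with Christmas motifs.'
)


def make_group_card(photo_paths, out_path="card.png"):
    """Send the separate photos plus the prompt, then save the returned image."""
    from google import genai  # pip install google-genai
    from PIL import Image     # pip install pillow

    client = genai.Client()   # reads GEMINI_API_KEY from the environment
    images = [Image.open(p) for p in photo_paths]
    response = client.models.generate_content(
        model="gemini-2.5-flash-image",  # assumed identifier for Nano Banana
        contents=[PROMPT, *images],
    )
    # Save the first image part found in the response
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            with open(out_path, "wb") as f:
                f.write(part.inline_data.data)
            return out_path
    raise RuntimeError("The model returned no image")


# make_group_card(["mom.jpg", "dad.jpg", "me.jpg"])  # hypothetical files
```

As in the chat version, follow-up requests (changing clothes, adding a missing person) can be sent as further `generate_content` calls in the same conversation.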

How to turn any photo of yourself into a Stranger Things character using Nano Banana

Tomorrow brings the long-awaited premiere of the fifth season of one of the most popular series of recent years. To warm up, we are going to explain how you can become a 'Stranger Things' character thanks to Nano Banana's AI, totally free.

Nano Banana is the image creation model integrated into Gemini, Google's AI. Its peculiarity is that it allows you to make modifications to photos while maintaining their content. We have already told you how to turn a photo of yourself into an action figure or turn your photos into a Nintendo-style video game setting.

How to create 'Stranger Things' style photos

The first step is to choose a photo from your gallery in which the person you want to turn into a character from the series appears. It is important that the face is clearly visible, so a medium shot or close-up is a good option. Once you have chosen the photo, upload it to Gemini by clicking the + button (in the lower left corner).

The next step is to copy and paste one of the prompts we leave you below. Various prompts are already circulating on social networks, and since we couldn't decide on just one, we have chosen the three we liked the most.

Option 1: talking on the phone while the Demogorgon lurks

Create a 2000s-style dream portrait of me inside a Stranger Things-inspired house, Will's house, with an alphabet painted in crooked black paint on the wall, and above each letter a series of colored Christmas lights with one bulb above each letter. The interior is that of a suburban house in a small town, with soft, dim lighting and shadows, but with a subtle dreamlike cinematic glow. I'm leaning against the wall with a yellow telephone with a broken cord at my ear.
Costume and appearance: the hairstyle is 80s style, and I wear normal teenage clothes from 1987 inspired by the series Stranger Things: high-waisted jeans, a t-shirt, jackets or sweaters in several layers in muted colors of the time. In the window you can see a tall, thin monster whose head is made of red petals; there are 4 petals and in the center they look like teeth, as in the series. It lurks, partially hidden in the shadows, creating a disturbing and suspenseful atmosphere without covering my face.

Decor and props: the room has authentic 80s decor: patterned wallpaper, retro furniture, a blanket on the couch, an old CRT TV, stacks of 80s books or magazines, small nostalgic decorations on the shelves, faded 80s pop culture posters on the walls. Outside, it looks like red rays are falling. Don't change my face.

Option 2: waiting for the Demogorgon with an axe in hand

Create a high-quality, realistic photo using the reference face without changes or distortions. General style and atmosphere: a photograph in dark and intense tones, with a style similar to a frame from the series "Stranger Things", with clear references to the atmosphere of the 80s and mysticism. Subject and main character: in the foreground appears a young person (similar to a character from "Stranger Things") wearing a dark red plaid shirt with a white t-shirt underneath and black pants. Their eighties-style hair is slightly disheveled. They are sitting on a sofa, holding an axe in their hands and staring to the side. Setting (interior): the scene takes place inside a room with walls covered in old wallpaper typical of the 80s. The space is very messy: there are many books, stacks of papers, cassettes, and other objects scattered around the bed and a low table in front of it. To the left you can see shelves full of objects.
Key details (alphabet and lighting): on the wall just behind the character, the English alphabet is written in large letters that look hand-drawn. A string of Christmas lights with large bulbs hangs on the wall; each letter corresponds to one or more bulbs in the garland. These holiday lights also hang from the ceiling, illuminating the scene with a warm, flickering glow (red, blue, yellow), creating dramatic shadows and reflections. Quality and lighting: the image has been created in high resolution, emphasizing textures (fabric, wood, paper). The lighting is dim and contrasted (noir), with a strong lighting effect coming from the garlands (bloom effect). Style: cinematic, dramatic, fashion photography or studio portrait style, set in an unusual location. High resolution, sharp details, hyperrealism, great level of detail, professional post-production.

Option 3: walking at night in front of Hawkins High School

Use my selfie to create an ultra-realistic 9:16 dreamy 80s cinematic photo inspired by 80s shows and Stranger Things. The photo should be moody with vibrant lighting. It's night and the sky has many dark, spooky clouds that are casting red and blue light. There are also red and blue rays. I'm standing in front of Hawkins High School and the school buildings. The school sign says Hawkins. I'm standing in the parking lot. I am walking and wearing a baseball t-shirt. The sleeves are black and the chest is white. The shirt says Hellfire Club. I have 80s Levi's jeans, black Keds, and white socks. I'm looking into the distance. My hair and clothing style is from the 80s. I'm holding a walkie-talkie up to my mouth with one hand and holding a jean jacket in the other hand. The ground looks wet with some puddles and is casting shadows. There is a baseball bat with nails stuck into it. In the far distance, behind the school buildings, you can see a dark Demogorgon. Don't change my facial features or hair color.
As you can see, the prompts are extremely detailed, so the result you get should look quite similar to the images we attached. If you want to change something, such as the style of clothing …

This is the 3I/ATLAS photo that NASA was accused of withholding. Of course, it doesn't change anything

They are the most controversial astronomical photos of the last two months. And to no one's surprise, the speculation about why NASA had not published them was overblown. This is what the space agency has seen.

A little context. Since the ATLAS system detected a new interstellar object crossing our neighborhood, a very specific part of the scientific community has been carefully monitoring its trajectory for any anomalies, especially since cosmologist Avi Loeb suggested it could be an artificial alien object. That NASA took a month and a half to release the 3I/ATLAS images taken during its approach to Mars did not help rein in such speculation. But the administrative silence, caused by the US government shutdown, has come to an end, and NASA is back this week with a huge amount of data under its arm.

"It's a comet." NASA has mobilized 12 of its spacecraft to observe the visitor from outside the solar system. And the official message is forceful, almost designed to nip any exotic speculation in the bud: "it looks like a comet and behaves like a comet, and all the evidence points to it being a comet," said Amit Kshatriya, the agency's highest-ranking official, at a press conference. Of course, it is a different comet from those in the solar system, which suggests it was born in an environment with a different chemistry than ours, perhaps around a star much older than the Sun: it is unusually rich in nickel and, instead of expelling water, it expels carbon dioxide.

What's new. What makes this new observation campaign special is the geometry. When 3I/ATLAS passed its closest point to the Sun in late October, Earth was on the "wrong side," with the Sun blocking our direct view. Taking advantage of the fact that Mars had a privileged view, NASA pushed the instruments of its spacecraft beyond their original design. The Mars Reconnaissance Orbiter captured high-resolution images from 30 million kilometers away.
The MAVEN mission analyzed its ultraviolet composition, and the Perseverance rover, from the Martian surface, managed to capture a faint flash of the comet. Meanwhile, the Psyche and Lucy spacecraft, traveling to distant asteroids, managed to capture the comet backlit, revealing details of its tail and coma that would be invisible from Earth. And the SOHO and STEREO solar observatories took over when it was too close to the Sun for other telescopes.

What does Loeb say? The controversial Harvard astrophysicist and technosignature hunter has published an immediate response voicing his disappointment. For Loeb, the NASA press conference was an exercise in bureaucracy to confirm the "expected and boring." His main arguments for maintaining skepticism are:

The striking mass: 3I/ATLAS is a million times more massive than 'Oumuamua. Statistically, we should have seen millions of small objects before seeing one this big, unless it was intentionally "sent," according to the cosmologist.

The camouflage theory: Loeb argues that an interstellar probe that has traveled through the cold interstellar medium could have accumulated a layer of ice and dust on its surface. As it approaches the Sun, this layer would sublimate, making it look like a natural comet.

The resolution of the images: the photos shown by NASA are blurry (due to the limitations of the probes), so Loeb is pinning his hopes on images taken by amateur astronomers as the comet approaches Earth.

And now what. NASA has not found any technosignatures: no radio signals, no impossible maneuvers beyond gravity, nothing that indicates intelligence on this comet. However, the show is not over. On December 19, 2025, 3I/ATLAS will make its closest approach to Earth (about 270 million kilometers). That is when the James Webb Space Telescope and the large ground-based observatories will be able to perform the definitive autopsy.
Image | NASA

In Xataka | 3I/ATLAS shows signs of non-gravitational acceleration: something has pushed it, and we think we know what

The only photo you need to understand the scale of what Blue Origin, Jeff Bezos’ company, has just done

In the absence of bananas for scale, there is nothing like having five human operators in the photo to grasp the size of the New Glenn rocket, whose first stage, 57 meters tall and seven meters in diameter, has just landed successfully on a barge in the Atlantic.

SpaceX has company. Until now, the club of companies capable of landing their orbital-class rockets for reuse had only one member: SpaceX. For a decade, Elon Musk's company has single-handedly dominated the reuse game, landing and reflying Falcon 9 boosters up to 500 times with a reliability that is now routine. What you see in this photo is the breaking of that monopoly. The first successful landing of the enormous New Glenn rocket, achieved on only its second flight, demonstrates that orbital reuse is no longer a matter of a single company. Although Blue Origin, founded in 2000 by Jeff Bezos, is far behind SpaceX, it has just taken a giant leap that Bezos summarized with a Latin expression: Gradatim Ferociter ("step by step, ferociously").

As large as it is graceful. Unlike the Falcon 9, which measures 70 meters and can put about 22 tons of cargo into low orbit, the New Glenn stands 98 meters tall with a planned capacity of 45 tons. If we had not seen SpaceX catch the Super Heavy (the first stage of Starship) three times with the arms of its launch tower, it would seem far more unlikely that a rocket like the New Glenn could land gracefully in the center of a barge in the Atlantic Ocean. And without getting covered in soot.

There is another key detail in the photo: the rocket's fuselage is clean. Unlike Falcon 9 boosters, which return covered in the characteristic black soot from kerosene combustion, the New Glenn appears almost pristine. The reason is that its seven powerful BE-4 engines burn methane and liquid oxygen (a combination of cryogenic propellants known as methalox).
This fuel is not only more efficient and cheaper, it also burns much cleaner, easing inspection and refurbishment for the next flight. With this landing, the New Glenn has become the first methalox rocket to successfully recover a first stage from an orbital flight, ahead of the Zhuque-3 from the Chinese company Landspace (and with all due respect to Starship, which also uses methalox but has never reached orbit).

Things to come. Blue Origin's sweet moment begins now. In an interview with Ars Technica, the company's CEO, Dave Limp, confirmed that the aggressive goal for 2026 is to complete between 12 and 24 missions. The company has announced a launch price of about $70 million, a figure almost identical to what SpaceX charges for a Falcon 9. But the New Glenn does not only compete with the Falcon 9: it also threatens to shake up the market by competing directly in the Falcon Heavy's league, with the advantage of a single, fully reusable first stage. As for the rocket that just landed, its next payload will not be a probe or a satellite but the Blue Moon Mark 1 lunar lander, which the company plans to launch in the first quarter of 2026 to demonstrate to NASA that it is ready for the Moon race.

Image | Jeff Bezos, Blue Origin

In Xataka | Blue Origin now has a golden opportunity to overtake SpaceX on trips to the Moon. And it is taking advantage of it

This is how the “impossible” photo of the man falling into the Sun was made

It looks like a montage, but it is so real that it has gone around the world just as AI-made surreal images were ceasing to impress us. Andrew McCarthy's "The Fall of Icarus" has shown that there are still ways to outdo the machine with technical precision and months of planning.

Logistical madness. In the photo, a backlit silhouette appears to be in free fall over the Sun. It is the skydiver Gabriel C. Brown transiting in front of a particularly active solar disk. On the other side of the telescope was the famous astrophotographer Andrew McCarthy, who had begun planning the capture at the beginning of the year. It is quite possibly the first photo of its kind, since the list of variables to control was insane: they needed the optimal Sun angle, a safe height for Brown to jump from, and a perfectly calculated glide path between the Sun and the camera.

Three-way communication. It was 9 in the morning in the Arizona desert. McCarthy had his telescopes ready and was in constant communication with both Gabriel Brown, the skydiver, and Jim Hamberlin, the pilot of the paramotor from which he would jump. McCarthy followed the aircraft with his telescope and, once it was aligned with the Sun, gave the order. "Okay, I see you," he said over the radio. "Jump, jump, jump!" Brown jumped at an altitude of about 1,070 meters with the engine idling to ensure a perfect angle. "I got it, man!" he heard over the radio.

The sixth time was the charm. McCarthy told Live Science that the biggest challenge had been finding the paramotor in the sky. Although it was about 2.4 km from his position, the point of the shot was to capture in detail the Sun, some 50 million times that distance away. It took the team six attempts to correctly align the aircraft with the photographer's position on the ground. When the moment came, they could only make one jump, as repacking the parachute for a second attempt would have taken too long.

Is it really not a montage?
It is not, and the secret is in the telescope. As PetaPixel explained, it carried a hydrogen-alpha filter to block all sunlight except for a very specific red wavelength emitted by incandescent hydrogen. This is how those infernal images of the solar chromosphere are taken: the layer of active "fire" on the surface of the Sun, with its filaments and prominences especially visible during periods of greater solar activity. It is not very different from how other photos of rockets and space stations passing in front of the Sun are taken, but with extra planning and audacity so that the protagonist of the image is, for the first time, a tiny person.

Images | Andrew McCarthy

In Xataka | We are used to seeing the Perseids looking up. This is what they look like from space, looking down
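The distances involved can be turned into a quick sanity check of the composition. The sketch below uses the article's 2.4 km paramotor distance together with values not in the piece: the standard figures for the Sun's distance (~150 million km) and diameter (~1.39 million km), and an assumed ~2 m tall silhouette.

```python
import math

# From the article: the paramotor was ~2.4 km from the telescope.
SKYDIVER_DISTANCE_KM = 2.4
SKYDIVER_HEIGHT_KM = 2.0e-3   # assumed ~2 m silhouette (not in the article)
SUN_DISTANCE_KM = 150e6       # standard value, not in the article
SUN_DIAMETER_KM = 1.39e6      # standard value, not in the article


def angular_size_deg(size_km: float, distance_km: float) -> float:
    """Apparent angular size in degrees (exact formula; small angles here)."""
    return math.degrees(2 * math.atan(size_km / (2 * distance_km)))


sun_deg = angular_size_deg(SUN_DIAMETER_KM, SUN_DISTANCE_KM)
person_deg = angular_size_deg(SKYDIVER_HEIGHT_KM, SKYDIVER_DISTANCE_KM)

print(f"Sun: {sun_deg:.2f} deg, skydiver: {person_deg:.3f} deg")
print(f"The silhouette spans roughly 1/{sun_deg / person_deg:.0f} of the solar disk")
```

Under these assumptions the Sun subtends about half a degree and the skydiver a few hundredths of a degree, which is why a person at 2.4 km reads as a tiny figure against the full solar disk.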

How to view any photo with the iOS 26 3D spatial effect on your iPhone using the new feature of the Photos app

We are going to explain how to use the Photos app on your iPhone to view any photo you want with a 3D effect. It is one of the iOS 26 features that lets you use this effect with your photos on the lock screen, but the Photos app also lets you apply it to any photo at any time.

Here, you just have to know that this function is pure amusement: you see the effect and that's it, you can't use it for much more. But it can also help you preview how any photograph will look before you then use it as a wallpaper.

Use the spatial effect with any photo

The first thing you have to do is open the Photos application on your iPhone. Remember that you must have updated to iOS 26, and the condition is that the photo where you want to apply the effect must have some kind of subject, such as a person, pet, or object.

Choose a photo and open it. Once you are inside the photo, tap on the hexagon icon that appears at the top right. To see it, the interface must be visible; if you only see the photo, tap the screen so that the other elements appear.

When you press that button, you will see some purple colors appear on the screen for a few seconds while the AI analyzes the content of the photo. Then you will see the Spatial Scene indicator, and when you move the phone you will see the 3D effect of the photo.

In Xataka Basics | How to obtain information from what appears on your iPhone screen with Apple Intelligence and iOS 26

It was practically impossible for a satellite to "ruin" another satellite's photo. With Starlink it has already happened twice

Until recently, the idea of an Earth-observation satellite accidentally capturing another satellite in flight was as unlikely a coincidence as finding a needle in a haystack. Space is an immense emptiness, and satellites move very quickly. But in the last year we have witnessed this phenomenon twice, and on both occasions the protagonist has been a SpaceX Starlink satellite.

Over a secret military base in China. On August 21, one of Maxar's new WorldView Legion satellites passed over the Gobi desert, in China, with the aim of photographing Dingxin Air Base: a top-secret installation where China tests its most advanced fighters. The satellite got the image, but an unexpected intruder appears in it. A silver spacecraft with two large solar panels and three spectral ghosts crosses Maxar's photo, creating what an executive of the company described on LinkedIn as "accidental art." What we see is actually a single satellite, Starlink 33828, immortalized in different wavelengths over one of the most sensitive sites of the Chinese army.

The trick is in the camera. The curious multicolored image is explained by how observation satellites work and the incredible speed at which they orbit. These satellites do not take a single image, but a series of images in different spectral bands almost simultaneously: one at high resolution (panchromatic) and several in different colors (red, green, blue...) at lower quality. An algorithm then merges all this information to create the final, full-color photo.

The problem with that "almost simultaneously" is the "almost." When the target is the Earth, which is relatively still with respect to the satellite, the system works perfectly. But when another satellite crosses the field of view at a relative speed of almost 1,400 meters per second (about 5,000 km/h), the camera captures it in a slightly different position in each of the color layers. The result is that spectral effect with several colored shadows.
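That "almost" can be put into numbers. At the relative speed quoted above, even a short delay between band exposures displaces the crossing satellite by several meters between color layers. The ~10 ms inter-band delay and the ~30 cm pixel size below are illustrative assumptions, not Maxar specifications.

```python
RELATIVE_SPEED_M_S = 1_400   # relative speed quoted in the article
BAND_DELAY_S = 0.010         # assumed ~10 ms between spectral-band exposures
PIXEL_SIZE_M = 0.30          # assumed ~30 cm ground sampling distance

# How far the other satellite moves between two consecutive band captures
offset_m = RELATIVE_SPEED_M_S * BAND_DELAY_S
offset_px = offset_m / PIXEL_SIZE_M

print(f"Shift between consecutive color layers: {offset_m:.0f} m (~{offset_px:.0f} pixels)")
```

Tens of pixels of separation per band is more than enough to produce the distinct colored "ghosts" visible in the Maxar image, while the Earth below, essentially motionless relative to the camera, stays perfectly registered.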
The Google Maps Starlink. This is the second time a Starlink satellite has accidentally sneaked into someone else's photo. As we reported in April 2025, a Reddit user discovered a very similar effect in a Google Maps image of a rural area of Texas. On that occasion, the photo was taken by a European Pléiades satellite, and the result was even clearer: five silhouettes of the same object, corresponding to the near-infrared, red, blue, green, and panchromatic bands. The enormous number of satellites in low orbit is turning an astronomically unlikely event into a new normal.

Why Starlink satellites? Because they are the majority. SpaceX already has more than 8,300 Starlink satellites in orbit, more than all other satellite constellations combined. With its plans to expand the network to more than 30,000, the probability that one of them crosses the viewfinder of another satellite keeps growing. And they also fly low: to offer a low-latency internet connection, Starlink satellites operate at about 500 km altitude in low Earth orbit. This is the same orbital "highway" used by most Earth-observation satellites, such as Maxar's WorldView Legion (which fly at 518 km). Their paths are destined to cross. Beyond the visual anecdote, these images are a symptom that low orbit is increasingly congested, forcing constant evasion maneuvers to prevent collisions.

Image | Maxar

In Xataka | What types of satellites exist: a guide to not getting lost in a gigantic network on which we are increasingly dependent
