ChatGPT Images: what it is and how to use it to create artificial intelligence images from your photos

Let's explain what ChatGPT Images is and how you can use it: the new artificial intelligence section designed to help you create and edit photos and images. It is a forceful response to Gemini's free Nano Banana, a tool so striking that it already represents a new evolutionary leap in AI-generated images.

Since the launch of Nano Banana on Gemini, Google had managed to compete head to head with ChatGPT at creating images from photographs. Gemini could use your face and keep it recognizable, something OpenAI's AI could not do… until now.

We will start by explaining what this new feature is and what sets it apart from the rest, because it includes some very interesting and innovative things. At the end, we will summarize how you can use it to create images in different styles from your photos.

What is ChatGPT Images

ChatGPT Images is a new section dedicated to image creation within ChatGPT. The chatbot has updated and improved its photo-based image generator so much that OpenAI has decided to give it an exclusive section.

While ChatGPT's normal chat lets you create images from scratch or from your photographs, this section is exclusively for creating images from photos. The idea is that, when you want to do this, instead of fiddling with prompts in ChatGPT, you can enter the section and speed up the process.

That is because the Images section offers several design ideas and tools to edit your photos. It is as easy as clicking on one of the designs, choosing the photo, and that's it: ChatGPT does the rest. This removes the need to know how to write a good prompt, and the process becomes simpler and more visual for inexperienced users. When you choose a design and upload the photo, it is automatically sent to ChatGPT with a pre-generated prompt that you can see.
Showing you the prompt changes everything

This matters: because you can see the prompt ChatGPT uses in its preset, you can also copy and paste it, modify it, or even use it in Gemini or another competing tool. ChatGPT Images is therefore not only a good testing ground; by offering you several prompts, it gives you a basis for generating a much more personalized image later.

You will also learn how an image-editing prompt works in a more transparent way, and you can combine pieces of several prompts to create a completely unique one. Until now, when you set out to create an image from a photo, you had to start from scratch, composing the prompt on your own or searching the Internet for one. That is why showing you the prompt changes everything: it lets anyone without prior knowledge create very polished images with AI.

On top of that, the interface simplifies everything further, showing each idea together with an image of the result, so if you see something you like, you just click on it and choose the photo.

How to use ChatGPT Images

The first thing to do is open the ChatGPT website or application on your device. In the side menu, click on the Images section, which appears just below the search options.

This takes you to the main screen of the Images section. At the top there is a search field where you can write a prompt manually, and below it are pre-generated image styles and ideas for styles and other things you can do.

When you choose one of the designs or ideas, you will go to a screen where you simply pick the photo you want to use. You can choose one of the last photos you have used, or click Choose a new photo to upload a different one manually.

And that's it. When you do, a chat with ChatGPT opens that includes the photo and the prompt created to generate the type of image you chose. Within a few minutes you will have the result.
You can copy this prompt to reuse it with other images in the same chat, and even modify it to your liking.

In Xataka Basics | How to create a character in ChatGPT and Gemini to use it in all the images you make with artificial intelligence

How to create your own Christmas carol with artificial intelligence

Let's explain how to create your own Christmas carol with artificial intelligence, so that you can have one that is completely personalized. Here the point is not so much making cover versions, but creating a carol with original music to which you can add whatever lyrics you want.

To make a song with artificial intelligence you have many music-generation tools at your disposal. We will start by listing several of them, and then walk through the step-by-step process with Suno, since it is the most popular of them all. The steps are almost the same with the others.

Some tips before starting

Before you start, think about the type of song you want to create. In this case we obviously want a Christmas carol, so we should mention that in our prompt so that the AI can generate music in that style.

The key decision is the lyrics you want to use. The beauty of making your own carol is adding references that the people who will hear it will recognize, and here you have two ways to proceed. You can write the lyrics by hand, composing some kind of poem that you think would fit the song. You can also use a general-purpose AI chatbot such as ChatGPT, Copilot, DeepSeek or Gemini, among others; apps like Suno even have their own lyrics AI. In that case, tell it the concept, or mention specific things you want included in the lyrics, and let the artificial intelligence do the rest for you.

Once you have everything clear, you will have to choose an AI tool to create the music, something we will help you with a little later. You will see that, in general, they are quite easy to use.

Don't settle for the first result, because it may not be good on the first try. Repeat the process or tweak your prompt, make changes to refine what you want, and little by little you will get something better and closer to what you have in mind.
Lastly, you should know that the music you create does not belong to you. You are not a musician creating a song with your own skills; you have generated music with algorithms, and therefore in many cases you cannot register it as yours or claim exclusive ownership. You will also be unable to monetize the music in most cases, since many platforms will detect it as AI-made.

Tools to create music with AI

Here is a small list of AI tools that create music from your text requests. They all work in a similar way: you give a short description of what you want, include the lyrics and musical style, and the application does the rest.

Suno: The best-known AI music creation service, and one of the first to surprise users. It has a simple mode where you describe what you want and give it creative freedom, and another where you specify genres and lyrics for greater control. Link: suno.com.

Riffusion: Very similar to Suno, although less well known. It also has two modes: a prompt in which you explain what you want, or an advanced panel where you can add genres and lyrics by hand. Link: riffusion.com.

Udio: This service divides song creation into three steps: first you define the style, then the lyrics, and then the vibe, the feeling the song should convey. Link: udio.com.

Boomy: Another alternative for creating songs with AI, which even has a function to publish them on streaming services. Link: boomy.com.

Soundraw: Takes a different approach, with a genre selector and the ability to choose the duration and tempo. Link: soundraw.io.

Create your Christmas carol with Suno

Creating music with Suno is simple. First go to suno.com and log in or create an account. Once you do, click on the Create section on the main screen.
You will land on a page where you must choose between the simple and custom modes, which appear as Simple and Custom.

If you choose Simple, you just describe the type of song you want in natural language. Mention that you want a Christmas carol, and if you don't write the lyrics yourself, at least give it a hint by stating the theme you want it to be about. You can also choose to make it instrumental only, and under Inspiration you can pick a musical influence. When you have it, press the Create button to start.

If you choose Custom — which is also where you end up if you decide to add lyrics in the simple mode — you will see a screen with many more options:

Lyrics: The field where you write the lyrics for the AI to sing. There is a Make Random Lyrics button to generate lyrics automatically.

Instrumental: This button skips the lyrics and makes an instrumental song.

Styles: Another of the most important fields, where you choose the musical style for your song. Here you can write "carol".

Advanced options: Includes the option to exclude musical styles.

Song Title: The title of the song.

Workspace: You can create different workspaces to organize your songs, and choose which one the song goes to.

Create: The button you press at the end to create your song.

Now all you have to do is fill in the fields as you want, paying …

How to summarize videos with artificial intelligence to know what they say without having to watch them

Let's tell you how to summarize videos using AI, so that you can find out what is said in them without having to watch them. Sometimes you may be looking for a tutorial, a guide, a recipe or just some information, but you don't feel like watching a 40-minute YouTube video.

This is something you can do very easily with artificial intelligence. You can do it with Gemini, but ChatGPT does not allow it. The good news is that the free version of Google's AI handles it without problems.

Summarize online videos

First of all, Gemini can summarize videos hosted on online platforms such as YouTube or Dailymotion. It will not work on others, such as videos on social networks like Instagram, but for YouTube and the like it works. The prompt you can use is the following:

"I want you to give me a summary of the content of the video in the link. Make the summary schematic using points or bullet points. (Link)."

In this prompt, the request for bullet points is optional. However, if you use it in full, you will get a summary that is not a wall of text but a schematic, point-by-point list, which makes the content easier and faster to digest.

Summarize videos by uploading files

Gemini also lets you summarize videos that you attach to the prompt as a file. To do this, add the video file, either by uploading it directly or by linking it from Google Drive. The prompt you can use is the following:

"I want you to give me a summary of the content of the video that I attached. Make the summary schematic using points or bullet points."

In short, you can use the same prompt, with the difference that instead of adding a link to the video, you attach it to the message. Gemini will analyze the content of the video you sent and give you a point-by-point summary.
Ask questions about the video content

You can also ask Gemini questions related to the content of the video. Instead of a summary, you can ask about a specific point, or pose the precise question you want answered. The prompt you can use is the following:

"I want you to look for the information in the video I attached, and tell me (question). (Link)."

This way, your question will be answered not from general information on the Internet, but from what the video itself says about it. It is quite useful for extracting more precise information.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence

When they sold us generative "artificial intelligence", we didn't know it would turn out artificial and generative, but not "intelligent"

A few months ago, a group of Spanish researchers decided to put AI chatbots to the test with a curious experiment. They uploaded an image of an analog clock to the chatbot and asked the AI a simple question: "What time does that clock show?" The AI failed disturbingly often.

Machine, can you tell me the time? Researchers from the Polytechnic University of Madrid, the University of Valladolid and the Politecnico di Milano published a study a month ago in which they set out to evaluate how intelligent these AI models really are. To do this, they built a large set of synthetic images of analog clocks — available on Hugging Face — showing 43,000 different times. Before fine-tuning, the AI models consistently failed to tell the time; after the adjustment the behavior was much better, but still imperfect. That should not happen with something so "simple" for humans.

A disastrous result. The researchers asked four generative AI models what time those analog clock images showed. None of them managed to tell the time accurately. The group was made up of GPT-4o, Gemma3-12B, LlaMa3.2-11B and QwenVL-2.5-7B, and all of them had serious problems "reading" the time and distinguishing, for example, the hands, or the angle and direction of the hands relative to the numbers on the dial.

Fine-tuning to improve. After these first tests, the researchers managed to significantly improve the models' behavior through fine-tuning: they trained them with 5,000 additional images from that data set and then re-evaluated them. However, the models again failed consistently when tested with a different set of analog clock images. The conclusion was clear: they don't know how to generalize.
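For reference, reading a clock face is, computationally, a trivial mapping — which is what makes the failure striking. A minimal sketch (ours, not taken from the study) of the time-to-angle geometry the models misread:

```python
def hand_angles(hour: int, minute: int) -> tuple[float, float]:
    """Angles of the hour and minute hands in degrees,
    measured clockwise from the 12 o'clock position."""
    minute_angle = minute * 6.0                      # 360 degrees / 60 minutes
    hour_angle = (hour % 12) * 30.0 + minute * 0.5   # 30 deg/hour plus drift
    return hour_angle, minute_angle

# At 3:00 the hour hand sits at 90 degrees and the minute hand at 0.
print(hand_angles(3, 0))    # (90.0, 0.0)
print(hand_angles(6, 30))   # (195.0, 180.0)
```

Inverting this mapping from pixels back to a time is exactly the task the models could not generalize.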
What they discovered with this test confirms what we have observed with AI models from the beginning: they are good at recognizing data they are familiar with (memorized), but they often fail in scenarios they have never faced and that are not part of their training sets. In other words: they were incapable of generalizing.

Dalí enters the scene. To try to pin down the causes of these failures, the researchers created new sets of images in which, for example, they used Dalí-style distorted clocks, or clocks with arrows at the end of the hands. Humans can tell the time on analog clocks even when they are distorted, but for AI models this was a huge problem.

If they do this with clocks, imagine medical scans. The danger of these conclusions is that they reignite the debate about whether generative AI models are indeed artificial and generative, but not very intelligent. If they struggle to identify clock hands and their orientation, things get dangerous when what the models have to analyze are medical images or, say, real-time footage from an autonomous car driving through a city.

AIs are stupid. Although generative AI models are fantastic aids in scenarios such as programming, the reality is that what they do is "regurgitate" responses that are already part of their training data. As Thomas Wolf, Chief Science Officer of Hugging Face, explained, a generative AI "will never ask questions that no one had thought of or that no one had dared to ask." Thanks to their enormous memory and training they can recover a multitude of data and present it in useful ways, but finding solutions to problems they have not been trained on is very complicated for them. For experts like Yann LeCun, the reality is clear: generative AI is very stupid and, furthermore, a dead end.

Source: clocks.brianmoore.com

AI doesn't draw clocks very well either.
Added to these researchers' experiment is another small test that once again calls the capabilities of generative AI into question. It involves asking different models to generate the code to display an analog clock showing the current time. A designer named Brian Moore shared the results from several AI models, and the truth is that most of them are terrible, although some, like Kimi K2, do a good job. We have tested the recent Grok 4.1 and GPT-5.1 ourselves: after a little insistence, Grok 4.1 drew a perfect, working clock; with GPT-5.1 there was no way, at least in our tests.

A worrying reality. This inability to solve seemingly simple tasks suggests that these models are not in as good a place as claimed. It is true that a good prompt can help work around some of these limitations, but what is becoming increasingly evident is that AI models keep making mistakes despite the passage of time. The theoretical revolution this technology promises depends precisely on eradicating those mistakes, and it does not seem that we are on the way to achieving that. The models improve, yes, but not enough for us to trust them 100%.

Image | Yaniv Knobel

In Xataka | As if there weren't enough AI companies, Jeff Bezos has just returned from the shadows to build another one, according to the NYT

What's new in Gemini 3, Google's new artificial intelligence model?

Let's go over the main new features of Gemini 3, the new version of the artificial intelligence model announced by Google. We already have the first data on its main characteristics. As always, the spotlight goes to Gemini 3 Pro, which will be the most advanced version.

One thing worth knowing up front: you will notice few of these new developments when using Gemini in a conventional way. Most of these changes are aimed at advanced users.

What's new in Gemini 3

A step forward in all areas: Google has presented the results of its model in various types of tests, comparing it with the previous version and with its direct competition. It is ahead of everyone in everything, from mathematics to understanding what is happening on screen or writing code.

Reasoning "at the doctoral level": That is what the test results indicate, although where it advances most is in mathematics, with a score of 23.4% on the MathArena Apex test compared to 1.0% for GPT-5.1 and 1.6% for Claude Sonnet 4.5.

Integration with Google Search: Gemini 3 is linked to Google's AI Mode, integrating into the search engine.

Generating visual elements: Gemini 3 can create interactive visual elements, such as calculators, simulations or widgets, in real time. This is especially useful when integrated into the search engine: sometimes it may respond not with text but with an interactive web app.

More direct answers: Google has fine-tuned the way its model responds, offering more concise answers that deliver more valuable information with less flattery and fewer clichés.

Improvements in "Deep Thinking": Another of the most notable improvements is in deep reasoning, along with advances in code execution, abstract reasoning and visual understanding.

Larger context window: This model has a context window of up to one million tokens, so it can analyze large code repositories or very long texts that you can then work on.
Better contextual reasoning: Reasoning is improved, especially over long contexts, to avoid hallucinations.

Parallel reasoning improvements: Its ability to reason over visual and textual data at the same time is improved, increasing accuracy when interpreting tables, diagrams and interfaces.

Improvements in multimodal mode: The analysis of all types of information is improved. For example, it can decipher and even translate handwritten recipes in different languages, and use them to create a cookbook you can share. It can also analyze sports matches, scrutinize research data and generate code from it.

Programming improvements: As we said at the beginning, one of the biggest improvements in this model is its coding ability.

Improved agent mode: Its ability to use tools and operate a computer through the terminal in agent mode has also been improved. Agents built on Gemini can now autonomously plan and carry out more complex software tasks.

Gemini 3 will begin rolling out in the coming days, although, as we said at the start, you may not notice many of the differences unless you try to take advantage of them in an advanced way.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence

How to create a character in ChatGPT and Gemini to use it in all the images you make with artificial intelligence

Let's explain how to create a character in ChatGPT and Gemini, and tell the AI to remember it so you can then use it in all the images you generate in that conversation. If you want all your images to be consistent and star the same digital person, this is the way to do it.

We will briefly explain three different ways to do it, with the step-by-step process for each. Remember that it is best to do everything in the same chat to maintain context.

Create a character from a description

The first method is to tell the AI that you want to create a character, and include a description of the appearance you want to use. Start the prompt by saying you want to create a character called "name", and then describe in detail their physical appearance and clothing. This method usually works best in ChatGPT.

The AI may then ask you clarifying questions, such as which style to use, and you can answer as you prefer. In my case, I asked for a comic-book style. An image will then appear, and you can request changes to the design if you want.

Next, add a prompt that fixes the character's appearance. For that, use something like: "I want you to fix this appearance for the character 'name', so that if I ask you for more drawings of them, you will always use the same design. Okay?" With this, ChatGPT or Gemini should save this appearance.

Now you can start asking it to draw the character in different situations. Literally ask it to draw (character name) and describe the scene and what they are doing. It should generate the image keeping the same drawing style and exactly the same appearance.

Create the character from a photo

You can do exactly the same thing, but creating your character from a photo rather than a description. Simply ask the AI to reimagine the photo, and add a description if you want to change or add anything, such as the outfit.
Then ask it again to turn the result into a character to use from now on. After that, just ask it to create the same character in different scenes. This method does not always work well in ChatGPT, and usually works worse in Gemini, but it is worth exploring.

Use an already created image

The third option is to use a character you created on another website or with another AI, or in short any external design. To do this, upload the drawing of the character and add a prompt like: "I want all the images I ask you for in this specific chat from now on to use this character as the protagonist". That alone is enough.

From then on, simply ask it to create images of a person doing whatever you want in the setting you describe. The image will be generated using the character you provided as a reference. This usually works best with Gemini.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence

Kimi K2 Thinking: what it is, features of this artificial intelligence model and how it differs from Gemini and ChatGPT

Let's explain what Kimi K2 Thinking is, the latest model from the artificial intelligence company Kimi AI. It is an AI that has made a name for itself thanks to its open nature and to having managed to compete directly against GPT-5, Gemini 2.5 Pro and other high-end models.

We will start by explaining what Kimi K2 is and the characteristics that set this artificial intelligence model apart. Then we will finish with the main differences with respect to the most popular models on the market.

What is Kimi K2 Thinking

Kimi K2 Thinking is the latest version of Kimi, a Chinese artificial intelligence model created by Moonshot AI, a company backed by Alibaba. Since the names can be confusing, think of Kimi as the company's AI product, like ChatGPT, with different underlying models launched over time, just as GPT-5 is for OpenAI.

Kimi K2 was launched in July and stood out for its gigantic size of one trillion parameters. Now there is a new version called Kimi K2 Thinking, with 32 billion active parameters. According to its creators, this allows the AI to make stable use of agentic tools over 200 to 300 sequential calls.

And what does all this mean? As you know, we are entering the era of AI agents: automations with which an artificial intelligence can carry out different actions autonomously. This allows the AI to make decisions for you, from placing a purchase to preparing a vacation package and taking care of the reservations; at the business level it will have even more uses. Therefore, the more actions an AI can perform without making mistakes, the more valuable and powerful it is.

Features of Kimi K2 Thinking

The most important feature of Kimi K2 Thinking is that it is an open model.
The models from companies like OpenAI, Google or Anthropic are closed, meaning their code and weights are kept under lock and key: only those companies know how they work inside. K2 Thinking, on the other hand, is open, which means anyone can see how it works by looking at its GitHub, examine its features, and even adapt it for free.

What's more, you can install it locally at no cost, although the computer needed for that is too powerful for ordinary mortals; "distilled" versions — reduced or trimmed — can be released so that people can run them locally. In this respect it is like DeepSeek, another open AI that surprised everyone a few months ago by approaching the power of closed models such as Gemini or ChatGPT. In the case of Kimi K2 Thinking, according to the benchmarks it has managed to surpass GPT-5, something that until recently was unthinkable.

It is a Mixture-of-Experts (MoE) architecture model, which means it is made up of several experts (specialized subnetworks or modules), and not everything activates at once: only the parts of the model needed to answer your question or perform the task you requested.

It is also multilingual — it can be used in other languages, although it focuses on Chinese — and it can process many types of file formats. It searches the web in real time to offer up-to-date information, and it is multimodal, able to interpret text, images, code or a combination of these.

Kimi K2 Thinking can be used as a conversational chatbot, answering questions and maintaining long context while following complex threads. It can also interpret images, or mixed inputs combining images with text and code. In addition, it can generate programming code, analyze long documents thanks to its large context window, extract information to answer questions about the content or summarize it, and create automations or agents.
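The Mixture-of-Experts idea described above can be illustrated with a toy sketch (ours, heavily simplified — in a real MoE model a learned router gates each token through neural subnetworks, not named categories):

```python
def route_top_k(scores: dict[str, float], k: int = 2) -> list[str]:
    """Return the k experts with the highest router scores;
    only those subnetworks would run for this input."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical router scores for one incoming request.
scores = {"code": 0.9, "math": 0.7, "chat": 0.2, "vision": 0.1}
print(route_top_k(scores))   # ['code', 'math'] -- the other experts stay idle
```

This is why a model with a huge total parameter count can still answer cheaply: only the selected experts' parameters are active per request.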
Differences from ChatGPT and Gemini

As we said above, Kimi's main difference is its open nature. While ChatGPT and Gemini are proprietary models, Kimi gives the community access to see how it is built.

Several benchmarks have shown that Kimi K2 Thinking outperforms GPT-5 and Claude Sonnet 4.5 (Thinking) in agentic search and browsing in the browser, in text-only operation, and in information gathering. The only area where it still does not surpass these models is code generation. In the use of agentic tools, benchmarks position Kimi K2 Thinking as a leading AI model.

Kimi is also cheaper in several respects. First, training the model cost 4.6 million dollars, according to CNBC, a ridiculous figure considering that training proprietary models like GPT-5 cost about 500 million dollars, according to estimates.

The Kimi K2 Thinking API is also cheaper to use. The API is the entry key that lets other applications connect to this AI and work with it. K2 Thinking costs $0.6 per million input tokens and $2.5 per million output tokens; GPT-5 Chat costs $1.25 and $10 respectively, and Claude Sonnet 4.5 costs $3 and $15.

For the average user, the experience is the same. On the website kimi.com, after registering for free, you can use the Kimi K1.5 and K2 models. However, if you want to use Kimi K2 Thinking you will have to pay for one of its subscriptions, at 19 or 30 dollars — at least if you want the full version on the official website, without having to install anything.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence
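Using the per-million-token prices quoted above, the API cost gap is easy to quantify. A quick sketch (prices as quoted in the article; check current pricing before relying on them):

```python
def request_cost(tokens_in: int, tokens_out: int,
                 price_in: float, price_out: float) -> float:
    """Dollar cost of one request, given prices per million tokens."""
    return tokens_in / 1e6 * price_in + tokens_out / 1e6 * price_out

# Prices per million tokens (input, output) as quoted in the article.
models = [("Kimi K2 Thinking", 0.6, 2.5),
          ("GPT-5 Chat", 1.25, 10.0),
          ("Claude Sonnet 4.5", 3.0, 15.0)]

# Example request: 100k tokens in, 10k tokens out.
for name, p_in, p_out in models:
    print(f"{name}: ${request_cost(100_000, 10_000, p_in, p_out):.3f}")
```

At those rates the example request costs $0.085 on Kimi K2 Thinking versus $0.225 on GPT-5 Chat and $0.450 on Claude Sonnet 4.5.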

How to follow Freepik's artificial intelligence conferences live

Today is the day: Upscale Conf, an event organized by Freepik of which Xataka is a media partner, finally begins. Upscale Conf consists of two days packed with conferences, round tables and workshops with artificial intelligence as the protagonist. Two Xataka readers and their companions will enjoy both days in person after winning the giveaway we held a few days ago, but if you don't want to miss the conferences, you have options too: Freepik will broadcast Upscale Conf live. We will tell you below how to follow the talks.

Follow Upscale Conf live

When: November 4 and 5.

Where: In Malaga, although you can follow it live via streaming.

What to expect from Upscale Conf

Upscale Conf is now in its third edition. After its debut in Malaga last November and its visit to San Francisco, Freepik is bringing its big event home again. Over the two days, November 4 and 5, attendees will be able to attend talks by industry leaders and creators, take part in panels, practical workshops and hands-on sessions, as well as enjoy networking spaces and moments.

Upscale Conf in San Francisco | Image: Freepik

It promises to be a very interesting gathering and, if you want to whet your appetite, you can consult the complete agenda and list of speakers on the official website. A small preview: among them you will find designers, creative directors, founders and CEOs of companies such as Freepik, ElevenLabs, Google Cloud, The Dor Brothers, SpecialGuestX or GenreAI, among many others.

Those who cannot attend can follow the conferences on YouTube. There will be two live streams, one for each day, both starting at 8:30. In the video above you will find the updated live stream so you can watch it directly from here.
Cover image | Xataka

In Xataka | "AI is unstoppable": the CEO of Freepik talks to us about AI, entrepreneurship and the mistakes of an EU that only focuses on the dangers of AI

The future of artificial intelligence is not in the cloud, it is in the nucleus of the atom

On the outskirts of Palo, a farming town in eastern Iowa, you can still see the gray towers of the Duane Arnold Nuclear Power Plant. They have been silent for years, but those who live nearby remember the constant hum that accompanied their childhood. For nearly half a century, that boiling water reactor was part of the landscape and power supply of the Midwest. Everything changed in August 2020, when a derecho (a wall of storms with hurricane-force winds) ravaged corn crops and damaged the cooling towers. Duane Arnold went dark and no one thought it would come back on. The plant, already aging and with a license about to expire, was permanently shut down. It seemed like the end. Five years later, that atomic silence will be broken again, driven not by the State or the traditional nuclear industry, but by a technology company: Google.

"It's alive, it's alive," Dr. Frankenstein shouted in the 1931 film. Nine decades later, that cry echoes symbolically in Iowa: the Duane Arnold nuclear power plant will come back to life. The resurrection will come from Google and NextEra Energy, which will invest more than 1.6 billion dollars to bring the plant back online in 2029. According to Reuters, Google will buy most of the energy generated for 25 years to power its artificial intelligence data centers, while NextEra will assume 100% control of the plant after acquiring the shares of its local partners.

A restructuring never seen before. Reactivating a nuclear plant is not as simple as pressing a button. In the case of Duane Arnold, Google and NextEra Energy plan to redo all critical infrastructure, modernize safety systems and pass inspection by the Nuclear Regulatory Commission (NRC) before receiving a new license. The project is unprecedented: to demonstrate that a closed plant can be revived under current safety standards. "Reopening an existing plant is faster and cheaper than building a new one from scratch," explain analysts cited by the Financial Times.
If all goes well, Duane Arnold will be producing energy again in 2029, along with Palisades and Three Mile Island, the other two pieces of the American atomic renaissance.

It is not the first, nor will it be the last. Big technology companies are betting on reopening nuclear plants. On the one hand, Microsoft signed a similar agreement with Constellation Energy to reopen the Three Mile Island plant in Pennsylvania, which is expected to resume operations in 2028. On the other hand, Amazon is working with Dominion Energy to develop SMRs (Small Modular Reactors) in Virginia. Google itself had already taken steps in that direction: last year it announced a partnership with Kairos Power to build seven SMR reactors by 2030, with a total capacity of 500 megawatts. These modular reactors are smaller, more efficient and safer, and are presented as the future of civil nuclear energy. Additionally, SMRs can be installed near data centers, reducing electrical transmission losses and costs.

The AI energy fever. The trend is unmistakable: Big Tech is betting on the atom to fuel the era of artificial intelligence. Each new generation of models, from ChatGPT to Gemini to Claude, demands thousands of megawatts of additional power. And the growth is just beginning. In this context, OpenAI, the creator of ChatGPT, has asked the US government for a national plan to drastically expand the country's electrical capacity. As CNBC reported, the company asked the White House to commit to building 100 gigawatts of new energy capacity each year, warning that China added 429 gigawatts in 2024 alone, compared to 51 in the United States. Its statement concludes with a phrase that may become an energy motto for the sector: "Electrons are the new oil."

Risks and doubts. Despite the enthusiasm, the Google project is not without controversy.
Physicist Edwin Lyman of the Union of Concerned Scientists warned in the Financial Times that Duane Arnold has "the same design as the reactors that melted down at Fukushima in 2011" and that it suffered "significant damage, including its cooling towers," during the derecho. "Until a realistic estimate of the cost of reconstruction and safety guarantees is known, we will not know if it can generate affordable electricity," Lyman said. Likewise, the Wall Street Journal collects criticism from environmental groups such as the Sierra Club, which question the age of the reactor, the degradation of its components after years of inactivity and the management of radioactive waste. However, even among skeptics there is consensus on one point: AI's energy appetite leaves no alternative but to explore all possible options.

The electrons of the future. What is happening in Iowa is not a simple industrial reopening: it is a declaration of intent from the new technological capitalism. Google, symbol of the cloud and virtuality, turns to the most tangible and ancient thing, the atom, to sustain its digital future. The paradox sums up the moment: artificial intelligence needs real matter, megawatts and electrons. The Duane Arnold plant, which once marked the rise and fall of the American nuclear dream, could be reborn as the energy heart of AI. And if OpenAI's predictions come true, it won't be the last. In the new global economy, electricity will be the oil of the 21st century. And in Iowa, Google has just lit the spark again.

Image | Unsplash

In Xataka | The amount of nuclear energy generated by each country, detailed in this interactive map

ChatGPT Atlas: what it is, how it works and how to use this internet browser with artificial intelligence

Let's explain what ChatGPT Atlas is and how it works: the internet browser created by OpenAI. It is an alternative to Chrome and the rest of the browsers that stands out for having ChatGPT's artificial intelligence integrated, used both as a search engine and to interact with the content you see. We are going to start the article by explaining what exactly this browser is like, both outside and inside. Then we will briefly explain how it works.

What ChatGPT Atlas is and how it works

ChatGPT Atlas is an internet browser created by OpenAI, one of the leading artificial intelligence companies. Its proposal is to offer a browser supercharged with AI, so that you can interact with ChatGPT at all times. The browser is based on Chromium, the same open source base used by other browsers such as Brave, Microsoft Edge or Chrome itself. This means that all the websites you visit will work practically as well as in the other most popular browsers. Using Chromium as a base also makes it possible to install the extensions available for Chrome. In short, if you use extensions in Chrome, Edge or Brave, you may also be able to install them in Atlas. Other more technical advantages of this browser are that it renders pages with Blink, the Chromium engine; it uses Chromium's standard APIs, so tabs, history, cookies and bookmarks work as usual; and it runs JavaScript, CSS and HTML5 like any modern browser.

But the great attraction of the browser is its integration with ChatGPT, which at launch uses the GPT-5 model, the same as the official AI app. You can use this artificial intelligence without having to open it in external tabs, since it is integrated into a native environment in the browser. For security, the AI model does not execute web code or directly access a page's servers. Additionally, interactions have limited permissions and do not access your data outside of the context in which you are using Atlas.
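Because Atlas is built on Chromium, the extensions it can load follow the standard Chrome extension packaging. As a minimal, hypothetical sketch (the extension name and description are invented for illustration, and whether any given extension actually loads in Atlas is not guaranteed), a Chrome-style Manifest V3 file looks like this:

```json
{
  "manifest_version": 3,
  "name": "Hypothetical Example Extension",
  "version": "1.0",
  "description": "Minimal Chrome-style extension skeleton of the kind a Chromium-based browser can load.",
  "permissions": ["tabs", "bookmarks"]
}
```

The `permissions` entries correspond to the same standard APIs (tabs, bookmarks, history) mentioned above, which is why extensions written for Chrome can, in principle, carry over to other Chromium-based browsers.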
That said, you have the option to activate user memory. This means that ChatGPT will remember key data from your conversations with the AI: interests such as personal tastes, personal context such as the plants you may have at home, or preferred styles. You can also deliberately ask it to remember things about you. These elements are stored as small chunks called "facts" that you can manage in ChatGPT's memory settings.

And what is this for in Atlas? It allows the browser to remember your interests and browsing routines, to adapt the explanations it gives you when reading websites and documents, and to maintain coherence across different contexts such as tabs, searches or projects. Imagine that ChatGPT has learned that you write for a digital outlet about technology for beginners, as is my case. Then, when you ask it to summarize a website, it will adapt to that context, and the explanation will be simpler and more colloquial to match how you understand things. Furthermore, by combining memory with browser tools, it will remember your web projects or research, maintain styles across sessions, remember configurations, and so on. If you search for laptops, for example, you can ask it to compare the results with what you searched for the previous month.

How to use Atlas

To download the Atlas browser, you have to go to the website chatgpt.com/es-ES/atlas. At the moment it is only available on macOS. Once you download the browser, during the installation process you will be able to import the data from another browser you have installed, such as Chrome or Safari: passwords, bookmarks, history, everything. When you open the browser, you will see that it is very similar to Chrome. You will need to log in with your ChatGPT account, and then, when you open a new tab, ChatGPT will appear instead of Google to perform searches. When you do, you will have several types of search results.
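To make the idea of memory "facts" more concrete, here is a small conceptual sketch in Python. This is not OpenAI's implementation; the class and method names are invented for illustration. It only shows the general pattern the article describes: storing small user facts and prepending them as context to a request.

```python
# Conceptual sketch only: NOT OpenAI's actual memory system.
# Illustrates storing small "facts" and using them as request context.
class UserMemory:
    def __init__(self):
        self.facts = []  # small chunks of remembered information

    def remember(self, fact: str):
        # Avoid storing duplicate facts.
        if fact not in self.facts:
            self.facts.append(fact)

    def forget(self, fact: str):
        # The user can also delete facts they no longer want kept.
        if fact in self.facts:
            self.facts.remove(fact)

    def context_prompt(self) -> str:
        # Facts are prepended to requests so answers adapt to the user.
        return "Known about the user: " + "; ".join(self.facts)


memory = UserMemory()
memory.remember("writes beginner-friendly tech articles")
memory.remember("keeps houseplants at home")
print(memory.context_prompt())
```

The point of the sketch is the design choice: because the facts travel with every request, a summary of the same website can come out simpler or more detailed depending on what the assistant knows about you.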
By default you will see the AI responses, but above there are tabs that let you see website results, just like in Google, as well as image results. This way you won't miss anything in your experience. The other big change comes when you are browsing any website. The browser has an Ask ChatGPT button which opens a column on the right where you can ask the AI anything related to the content of the website, such as a summary, or anything that comes to mind. Besides, when you select a text and right click, the context menu will have an option to ask ChatGPT about that content. This way, you can get context on words or phrases quickly and easily. Atlas also has a settings section where you can choose the appearance of the tabs and whether the bookmarks bar is displayed, and where you can control your browsing data and its customization. You will also be able to control your AI chat history directly from here, and much more.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence
