Satellite images have revealed what happened to one of Russia’s biggest arsenals. Now we understand Moscow’s silence

On April 22 the satellites began to point to a spot on the map. A change only perceptible in images from space offered the first clue of what had happened about 60 kilometers from Moscow. Despite the weather conditions that day and the low resolution of the optical data captured by the European Space Agency's Sentinel-2 satellite, the damage was clearly visible: an explosion had ripped through the 51st Arsenal of the Main Missile and Artillery Directorate (GRAU) of the Russian Ministry of Defense. Total devastation of the arsenal. The visual confirmation was reinforced by synthetic aperture radar (SAR) images, capable of penetrating clouds and smoke, which showed significant structural alterations in the core of the complex. A comparison of images taken on April 14 and 23 indicated that at least 30 buildings used for ammunition storage had been completely destroyed.

Explosions, evacuations and blackouts. The day after the explosion, secondary detonations were still going off, underlining the sheer amount of material stored there. The force of the blast forced the evacuation of eight nearby towns, while 37 settlements were left without gas supply. The most distant evacuated town was 4.5 kilometers from the arsenal. Data from NASA's fire monitoring system also confirmed multiple fires within the perimeter, matching the analysis of open-source intelligence (OSINT) expert MT Anderson, who used additional filters to detect hotspots and confirm the massive destruction of infrastructure.

A strategic arsenal. Then the magnitude of what had happened began to emerge. The 51st GRAU Arsenal was not simply an ammunition depot. As one of the eight main arsenals still operating in the European part of Russia, it played a key role in both the distribution and the logistical upkeep of Moscow's weaponry.
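The before/after comparison described above is, at its core, image differencing over co-registered acquisitions. Below is a minimal sketch of the idea, assuming two already co-registered single-band arrays scaled to [0, 1]; the function name, threshold and toy data are illustrative, not taken from the actual OSINT analysis:

```python
import numpy as np

def change_mask(before: np.ndarray, after: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Flag pixels whose normalized intensity changed by more than `threshold`.

    `before` and `after` must be co-registered single-band images of the same
    shape (e.g. a Sentinel-2 band or SAR backscatter), scaled to [0, 1].
    """
    if before.shape != after.shape:
        raise ValueError("images must be co-registered to the same grid")
    diff = np.abs(after.astype(float) - before.astype(float))
    return diff > threshold

# Toy example: a 4x4 scene where one "building" block changes drastically.
before = np.full((4, 4), 0.8)   # bright rooftops
after = before.copy()
after[1:3, 1:3] = 0.1           # destroyed area goes dark
mask = change_mask(before, after)
print(int(mask.sum()), "of", mask.size, "pixels changed")  # 4 of 16 pixels changed
```

Real workflows add co-registration, radiometric calibration and speckle filtering (for SAR) before any differencing, but the thresholded-difference mask is the common core.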
Three of those eight arsenals had already been destroyed by 2024, which made this loss a considerable strategic blow to the Kremlin's military supply chain. The arsenal was designed to house up to 264,000 tons of explosive material. Among the debris found after the explosion were 107 mm rockets for the Chinese-made Type 63 multiple rocket launcher, many of which were recorded scattered around local residents' homes, suggesting that part of the material was stored in the open and had been delivered recently. The catastrophe, or the attack, not only compromised Russia's logistical capacity in the Ukrainian conflict, but raised (once again) serious doubts about the security of its own arsenals in wartime.

Images from the British report showing the before and after of the explosions.

A self-inflicted blow. Now, after a study of all the imagery and confidential intelligence by the United Kingdom's Ministry of Defence, it has been confirmed that the cause of the incident was not external, but a combination of bad practices in weapons handling and negligent storage management by Russia. The British assessment is in fact reinforced by the statement of the Russian Defense Ministry itself, which, while staying silent about the incident and offering no further details, attributed the disaster to a "violation of safety requirements" in the handling of explosive materials. For the United Kingdom, the event is not an isolated case but the reflection of a prolonged and documented pattern of "Russian ineptitude in the handling of its own ammunition", although in this case it represents the largest self-inflicted loss of an arsenal since the beginning of the full-scale war in Ukraine.

A strategic installation. We already mentioned it above.
The affected depot was a key facility for the Kremlin's war supply on the Ukrainian front and, according to figures from the Ukrainian authorities cited by the United Kingdom, housed hundreds of thousands of tons of ammunition, including ballistic missiles, air-launched munitions and anti-aircraft systems. Satellite images verified by The Insider also revealed that more than a square kilometer of the complex was affected by the detonations, suggesting massive and prolonged destruction, with multiple fires and a chain of secondary explosions that, according to videos circulated on social networks, even reached nearby civilian areas.

A pattern of errors. Moreover, this is not the first time the 51st GRAU Arsenal has suffered incidents of this type. The Insider reported that in June 2022, Russian state media covered a spontaneous explosion during loading and unloading operations that cost four lives. The pattern is consistent with the British complaint: a continuous chain of operational errors and insufficient safety measures that turn critical facilities into vulnerable points within the Russian military apparatus. The lack of technical discipline and effective prevention protocols has not only generated large material losses, but has also compromised the safety of populated areas in wartime.

Consequences. If nothing else, the incident lends weight to the rhetoric of the West. The impact of this catastrophe goes beyond the material. The destruction of one of the main Russian ammunition depots not only weakens Moscow's immediate logistical capabilities in its offensive against Ukraine, but also reinforces an idea increasingly held among the allies: that of a military power corroded by structural failures, operational improvisation and a dangerous disregard for the most basic safety standards.
Seen this way, in the middle of a prolonged war and with its supply lines under pressure, losing tens of thousands of tons of weaponry to internal negligence is a defeat with several readings.

Image | Maxar

In Xataka | Russia launched its fearsome Satan II nuclear missile last week, Putin's "invincible weapon". It went so-so

In Xataka | The US has detected an object in space with strange behavior. It has also located the source that released it: Russia

Freepik has just launched an open-source model with a groundbreaking feature: licensed images

The Spanish startup Freepik is already one of the absolute references in the artificial intelligence segment, but its latest launch is especially significant. It is a new generative model of its own called F Lite, and it is a statement of intent for several reasons, one of them particularly striking.

Licensed images. Freepik, in collaboration with Fal.ai, has presented F Lite, a generative text-to-image model that stands out for being trained "exclusively with high-quality, legally safe and copyright-protected images" thanks to an important detail: they come from the Freepik library.

Inspired (partly) by DeepSeek. At Xataka we talked to Omar Pera (@ompemi), product lead at Freepik. He explained to us how the Chinese DeepSeek model served as inspiration by demonstrating that it was possible to create a small but very capable model with far fewer resources and data. F Lite was trained with "only 80 million images, compared to the more than one billion images usual" in competing generative image models.

Open source. As explained in the technical report accompanying the launch, F Lite is also an open-source model of 10 billion parameters (tiny if we compare it with the estimated 1.76 trillion of GPT-4) which was trained for two months on 64 NVIDIA H100 GPUs.

A sample of the result obtained with F Lite via Fal.ai. The car does not look much like a Renault 5, true, but at least in terms of photorealism the result is really decent.

Small but capable. As Iván de Prado (@ivanprado), one of the people leading its development, explained, F Lite is a decent model for generating certain types of images despite its small size. It also has its limitations, and can show anatomical defects or fall short on complex compositions or text rendering. The model is available in two versions on Hugging Face (regular, Texture) and on Fal (regular, Texture).
It is also possible to download it from GitHub and use it at home, for example via ComfyUI.

Just the beginning. The launch of this model marks the start of an especially striking project that could gradually become an alternative to rival the most ambitious models, always backed by the argument of being trained on licensed images and being open source.

More options for users. The model is not for the moment part of the catalog of models available on the Freepik web platform, mind you, and does not compete directly with models such as Flux, Mystic or Imagen 3. "The bet and the strategy do not change," Pera pointed out; they consist of "optimizing the offering for users and providing the best technology to solve the problem" and the needs of each user.

Lawsuits everywhere. We have long seen how artificial intelligence companies make indiscriminate use of copyright-protected content to train their models. They usually do so without a license and without permission to use those works, and that has triggered numerous copyright infringement lawsuits, especially in cases like OpenAI's.

Freepik Enterprise. This announcement comes almost at the same time as its new business offering, called Freepik Enterprise. Omar Pera confirmed that the initial reception of F Lite has been remarkable thanks to this new service. It is precisely in this area where using a model like this is especially interesting, because "companies are covered" when using a model trained with licensed images.

An eye on Adobe and other rivals. Pera also pointed out to us that Freepik does not compete on stock images: "we are going for professional use cases in marketing or design." They compare themselves not with a Getty or a Shutterstock, but "with the creative tools of Adobe, with Leonardo (part of Canva for a year now) or with the professional services of Midjourney."

Image | Freepik

In Xataka | All the big AI companies have ignored copyright law. The amazing thing is that there are still no consequences

After the blackout, false images of Spain and Portugal seen from space circulated. Now we have the real photos

55 million people were left without electricity in Spain and Portugal on April 28, but the images of total darkness that circulated the next day, like the one above, were false. The Balearic Islands did not suffer the blackout, and a good part of the Peninsula already had power back by nightfall. Now the European Space Agency has compiled the real images captured from space.

The trigger for this compilation work was, in fact, the false images. Alejandro Sánchez de Miguel, a researcher at the Institute of Astrophysics of Andalusia linked to ESA light pollution projects, saw the photos that were circulating and decided that the real ones had to be published. Three NASA satellites equipped with night observation technology (Suomi-NPP, NOAA-20 and NOAA-21) passed over the Peninsula a couple of times that night. Their six images tell a nuanced story, very different from the viral montages, of how the Peninsula came back to light.

The blackout at 03:12, 03:36, 04:30, 04:54 and 05:18. While areas like Madrid had regained power around 22:00 on the 28th, other regions, especially in the south, remained in the dark until well into the morning of the 29th; the almost complete recovery, visible in the satellites' last passes, arrived around 5:00. The night was clear across almost the entire territory. Dark spots in France or on the Portuguese coast are not due to supply cuts, but to the satellite not covering that particular region in its pass.

In green, the areas that still had no light; in white, those with power restored. In these contrasted images, areas without light are easier to spot in green, while areas with electricity supply appear in white. The provinces of Almería and Granada were the ones that took longest to light up again. There are also green areas in Castilla-La Mancha and scattered parts of the Levante, the Sierra Morena and the Campo de Gibraltar.
The blackout in Andalusia seen by NASA Earth Observatory. While satellites help quickly evaluate the scope and progression of power cuts, blackouts themselves offer space agencies the chance to study light pollution and its impact on the observation of the sky or on people's circadian rhythms. In areas such as Almería, light pollution was reduced by between 70 and 80% during the April 28 blackout. But then the power was restored, the stars went out again, and the battery-powered radio went back in the drawer.

Images | NASA

In Xataka | ESA has launched the world's first satellite equipped with P-band radar. The goal: to see through forests

14 functions, tricks and ideas to get the most out of ChatGPT's image analysis

Here are the best ways to take advantage of ChatGPT's image analysis. Some have become popular, but there are other hidden or lesser-known ones that can help you with specific tasks, so we will go through them all. Many of these functions are available in the free version of ChatGPT, although some may need the paid version to work, or to improve their quality and reliability. And as we always say in Xataka Basics, these are our proposals; we invite you to leave any others you can think of in the comments, so that all readers can benefit from the knowledge of our Xatakers.

Recognize plants and animals. Have you found a rare plant or an animal you cannot identify? If you take a picture of it and upload it to ChatGPT along with a question about what it is, the AI can clear up your doubts. And it will not limit itself to recognizing the plant or animal whose photo you send; it will also tell you some of its characteristics. In the case of plants it can even give you quick care advice.

Make drawings from a photo. Reimagining a photo has been ChatGPT's most popular function in recent times. It can edit your photos in the Ghibli style, as an action figure, and in many other ways. The only limit is your imagination and your ability to ask for an original style. All you have to do is upload a photo of a person and ask it to reimagine them in the style you want. Of course, in photos with many people it can have problems and several of them may disappear, so for now it is better to do it with just a few.

Analyze historical or current images. You can upload a photo and ask the artificial intelligence to identify what is in it. If it is a famous painting it can identify it and tell you the author, but it can also recognize styles or even places.
It can also recognize uniforms, symbols and coats of arms, and tell you the meaning of each. You can likewise convert an image into descriptive text by asking it to describe what appears in it. In addition, it can give you advice on improving drawings if they are your own.

Translate posters or signs. Imagine you are traveling and come across signs or informative posters in another language. In this case, you can ask ChatGPT to translate them. Just take a picture and upload it, asking it to translate the text into the language you want. This can also be useful for manuscripts that are hard to read or deteriorated: you take a picture, and ChatGPT can try to interpret them for you even when they seem illegible.

Summarize the text in a photo. Imagine you are in a public park with large information panels full of text, or in a museum where the long explanation of the works is in another language. Here you can take translation to the next level, so it is not limited to telling you what the text says. For example, if the text is very long you can ask ChatGPT to summarize it directly. And if the translation is very complex, you can ask it to explain or summarize it for a high school student, which should make things much easier.

Detect errors in technical diagrams. If you have doubts about a network diagram, a circuit or a workflow, you can also upload it to ChatGPT and ask it to analyze it and find errors or inconsistencies.

Check your website or app designs. If you are designing an application or a web page, you can upload a screenshot to show what the design looks like. When you do, you can ask questions that help you improve it. For example, you can ask ChatGPT to make suggestions to improve the usability of your website or app. It can also help you improve aesthetics and accessibility after analyzing the content of the screenshot you have uploaded.
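These photo-based tricks can also be scripted. The OpenAI chat API accepts images alongside text in a single message; below is a minimal sketch of how such a request payload is assembled. The helper function is illustrative, and the actual network call is shown commented out since it needs an API key:

```python
import base64
import json

def build_vision_message(prompt: str, image_bytes: bytes) -> list:
    """Build a chat message mixing text and an inline base64 image,
    in the content-parts format the OpenAI chat API expects."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }]

messages = build_vision_message(
    "Translate the sign in this photo to English.",
    b"\xff\xd8\xff\xe0fake-jpeg-bytes",  # stand-in for real image data
)

# With the official SDK this payload would be sent as:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(model="gpt-4o", messages=messages)
# print(resp.choices[0].message.content)

print(json.dumps(messages[0]["content"][0], indent=2))
```

The same payload shape covers every use case in this list: only the prompt text and the attached image change.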
It also helps you with posters and other designs. You can also ask for help with other types of designs, such as posters, flyers, or whatever you want. You can ask about typography, layout, composition, the choice of colors or any other questions you have.

Analyze charts for you. Another thing you can do with this artificial intelligence is upload photos of charts, whether bar, line or pie. Then you can ask ChatGPT to analyze them and explain the information they contain. You can analyze tables in the same way.

Get advice on editing photos. ChatGPT cannot edit photos for you; at most it can reimagine them, drawing from them. However, when you upload one you can ask for tips to help you edit it in a particular way. For example, you can ask it to help you achieve a more vintage or professional look, and ChatGPT will give you several tips to guide you through the editing process and tell you what you can change to get that specific look.

Correct your handwritten notes. Imagine you have a notebook where you have been jotting down class notes, but you are not sure you have written everything correctly. You can take a picture and send it to ChatGPT to check them and tell you if there is any error, or even help you summarize key concepts.

It can help you with shopping. ChatGPT can also help you with purchases. For example, if you do not quite understand the composition or label of a product, you can take … Read more

AI-generated images already flood the Internet. Google and Veo 2 will now start filling it with AI-generated videos

The images that imitate the Ghibli style are all very well, but watch out, because soon we will be able to make small animated shorts that imitate that or any other style. Generative video models are closer than ever to the general public thanks to Google, which now offers its spectacular Veo 2 model as part of the Gemini Advanced subscription.

Hello, Gemini Advanced. When it first appeared on the scene, Veo 2 could only be used through the Vertex AI and VideoFX platforms, but the scope of both was limited and aimed at a more expert audience. Now the model reaches Gemini Advanced, which can transform our text prompts into dynamic videos. Interestingly, the option now arriving in Gemini Advanced has been available for weeks on the Spanish platform Freepik, which was the first to offer AI video generation using this same model.

Eight seconds of artificial creativity. The model, created by DeepMind, is available as one more option in the Gemini model drop-down menu. With it we can generate an eight-second video clip at 720p resolution, in MP4 format and with a 16:9 aspect ratio. Google says there is a monthly limit on the number of videos each Gemini Advanced user can create, but does not say what it is, only that we will receive a warning when we approach it.

A text prompt is enough. To create videos with Gemini we only have to describe the scene (as always, details will help create it more faithfully) and Gemini will take care of materializing that idea and turning it into an AI-generated video.

To conquer the Internet. Google has also made these videos easy and quick to share: just choose any of the videos we have generated, select the "Share" option, and upload it to platforms such as TikTok or YouTube Shorts.

A flood of AI videos in sight.
We have already seen how the "Ghiblification" of images took over the Internet, and AI models have shown that these are the kinds of uses that attract the general public, at least temporarily. AI-generated images are already everywhere and flooding the Internet, and now we face the same with AI-generated video. That Google is betting on this type of content and inviting users to share it freely is a forceful declaration of intent.

AI-generated video remains for those who pay. AI image generation has long been offered to users even in free services (ChatGPT and Gemini both do), although with limitations. With video, for the moment, that is not the case: Sora, Runway and now Veo 2 are all aimed at users who pay for the intensive compute needed to generate these videos.

Already available, Spain included. The new video generation option in Gemini via Veo 2 begins rolling out gradually from today for Gemini Advanced subscribers worldwide, including Spain. We can also write our prompts in Spanish, which makes Gemini more and more versatile, able to generate text, images and video (for Gemini Advanced subscribers, we insist).

A watermark in every frame. To prevent misuse of this tool, Gemini may refuse to create videos that are contrary to Google's use policies. In addition, all videos generated with Veo 2 include a digital watermark embedded in every frame thanks to SynthID technology.

Whisk, the other option. Veo 2 will also be available through Whisk, an experimental AI platform that is not available in Spain. Its advantage is that it accepts not only text prompts but also starting images that serve as base frames for the rest of the animation. Thus, choosing a Studio Ghibli style frame gets us an eight-second mini-clip that also imitates that style, as Google shows in its promotional videos.
The Whisk Animate function based on Veo 2 is available from today for subscribers of the Google One AI Premium plan in various countries around the world, although for the moment not in Spain.

Image | Google

In Xataka | The Spanish startup Freepik is already one of the most important success stories in the history of Spain after being bought by the EQT fund

What is this new iPhone and Mac app and how to use it to create images with artificial intelligence

Let's look at what Image Playground is and how to use it: the new application based on Apple Intelligence for creating images. This tool has begun to arrive and install itself on the latest iPhone and Mac models with the latest updates of their operating systems. It is a tool for creating images with artificial intelligence, but it works quite differently from alternatives such as ChatGPT, Copilot or Gemini. We will start the article by explaining its concept and how it works, and then we will tell you how to use it. In this article we will use macOS screenshots, because the screen is larger and everything is easier to see, but the operation is exactly the same on the iPhone, only with the interface adapted to the vertical format.

What Image Playground is. Image Playground is an application for creating images through artificial intelligence. It is installed automatically on your Mac and your iPhone when they receive the latest updates, which roll out Apple Intelligence in Spain. The images are created from scratch, but the way of working is different: you won't have to use full prompts or give a complete description of what you want. Instead, what you do is add concepts and elements, and the image is generated from the sum of all of them. For example, you can add a photo of yourself, of other people, or anything else from your gallery. Then Image Playground lets you add objects, backgrounds, environments and accessories, and you can also write small prompts that are added on top of the other elements. The idea is to create the image from the sum of several concepts rather than from a single description. The advantage of this mechanic is that it is easier to make small modifications to the image, since you only have to remove concepts from the sum or add others.
The quality of the images created is much lower than what you will find with the competition's online services, but it works locally on your devices. This means the images are created directly on your Mac or your iPhone, not on Apple's servers, which prioritizes privacy. Image Playground is available for the iPhone 15 Pro onwards and the iPhone 16 range with iOS 18.4 or later, for iPads with an M1 chip or later or the A17 Pro with iPadOS 18.4 or later, and for Macs with M1 or later running macOS 15.4 or later.

How to use Image Playground. Using Image Playground is simple. You open it and there is a bubble in the central part, and below it the elements you can add. You have Apple's suggestions, such as backgrounds, people, environments or accessories, but below you also have a text field, a people selector and the option of adding other photos to use. If you choose to select people, you will see those you have tagged in Apple Photos. For each person you can choose different faces or photos, and you will see the results generated with each one. Now all you have to do is keep adding the concepts you want; remember that you also have a field where you can type a concept or description to add it. When you are done, tap the central bubble to see the slideshow. Tapping the bubble takes you to the preview, where you can see the creations. As you swipe, Image Playground creates new images, so you can swipe endlessly through the alternatives until you find one you like. You can also choose between three different image styles: the Animation style produces 3D-look images, while the other two produce drawings in different ways. Now you just have to explore and experiment, adding or removing concepts as you like. When you have something you like, tap the OK button and you will see the image full size. You will then have options to share the image or delete it.
In Xataka Basics | Magic eraser on iPhone and Mac: what it is and how to use it to remove unwanted elements from your photos with Apple Intelligence

Ghibli-style images have fired up the ChatGPT phenomenon again

You probably remember that moment in late 2022 when ChatGPT burst onto the scene. An application created by a practically unknown startup allowed, for the first time, a fluid conversation with an AI. We could ask it to write a story, review a text or explain string theory. At a time when the industry seemed to advance through incremental improvements, without great surprises, ChatGPT's irruption was an unexpected turn. And yes, it caught everyone off guard. Because, as so often happens, when a technology captivates us, we run toward it en masse. The ChatGPT phenomenon was no exception: it needed only five days to reach one million users. A striking figure if we compare it with other technological giants: Netflix took three and a half years, Twitter two years, Facebook ten months and Instagram two and a half months. Now OpenAI is news again: it has done it again, beating its own popularity records.

Click to see the original post on X. The person in charge of announcing the new milestone was Sam Altman himself. "The launch of ChatGPT 26 months ago was one of the craziest viral moments I had ever seen, and we added a million users in five days. We just added one million in the last hour," the OpenAI CEO wrote in a message posted on X on March 31. Very few services are capable of receiving such an avalanche of users in so little time. One of the closest cases starred Meta with Threads, its Twitter alternative, which added two million users in just two hours. Of course, the context was different: direct integration with Instagram greatly facilitated that feat. But not everything rests on official data. Analytics firms such as Sensor Tower also reflect ChatGPT's huge growth: according to their estimates, application downloads increased by 11% last week, while the number of active users grew by 5% and subscriptions rose 6% in the same period.
When technology connects with the public. It is no secret that artificial intelligence has evolved at high speed in the last two years. However, it is not always the most sophisticated technical advances (those that seek greater performance, solve complex problems or democratize access to reasoning models) that manage to connect with the general public. Generally, what moves us lies on another plane: nostalgia, art, humor. Or, directly, memes. And that is precisely where OpenAI's latest release has found its place in everyday life. Exactly seven days ago, ChatGPT was updated with a new function: an integrated image generator, driven by the multimodal model GPT-4o.

One of the many scenes recreated with ChatGPT. Although GPT-4o was already present in ChatGPT (we met it last year), until now it stood out for its skills in text generation and computer vision. But it did not generate images. That has changed with this latest update, which, curiously, is especially good at recreating images in very recognizable styles. In Spain, as elsewhere in the world, users quickly took advantage of the novelty to unleash their creativity. They began to recreate iconic images in the purest Studio Ghibli style, transform vacation photos into scenes built with Lego pieces, or turn real portraits into Muppet versions or detailed pixel-art illustrations. But not everything is so idyllic. The fever for ChatGPT's new image generation ability has been accompanied by two factors that, although less visible, are not minor. The first has to do with infrastructure: demand has been so high that OpenAI delayed the rollout for free users and imposed usage limits. In addition, the question of visual styles is not free of controversy. As users experiment with recreations inspired by very recognizable creative universes, a debate that has surrounded the development of artificial intelligence from the start is back on the table: how were these models trained?
The answer, although not always transparent, points in an awkward direction. Many of these models, including the one that drives this function in ChatGPT, have been trained with large volumes of images available on the net, many of them protected by copyright. This, once again, is a point of tension between tech companies and authors.

Images | Xataka with ChatGPT | @MDURBAR | X screen capture

In Xataka | How to turn your photo into an action figure with accessories using ChatGPT

In Xataka | OpenAI has just raised the largest financing round in history: there is a blind faith in AI despite everything

We just needed AI and satellite images

Last year exceeded all expectations for the installation of renewable energy. Until now, however, we did not have a clear visual picture of this phenomenon, only data to make comparisons. Now we can access that panorama.

In short. The Global Renewables Watch project consists of a map created from satellite images and artificial intelligence models to show the expansion of solar and wind energy. The initiative has been developed by Microsoft's AI for Good Lab in collaboration with The Nature Conservancy and Planet Labs PBC. In addition, Cornell University has participated in the validation of the project.

How it works. The program's team has explained that, thanks to the AI, they can identify where it is most viable to install renewable projects, considering factors such as long-term land availability, each country's electrical infrastructure and the environmental impact. In addition, the platform publishes its information on GitHub and uses an open-source approach to guarantee transparency.

Accelerated growth. A quick look at the map shows that renewable energy has advanced at an unprecedented rate. In it we can see something we already knew: China leads in installed solar and wind capacity, with large projects spread across the country totaling 632,859.61 MW. This figure is quite significant, because in the data collected in the fourth quarter of 2017 it stood at 270,827.59 MW, so it has grown by 133.68%. Well below, but still with a significant number, is the United States with 285,974.11 MW, up 126.63%. The data show a considerable increase in the American country, although with Trump's policies it seems it will stay at that level. As for the European Union, the most prominent case is Germany with 60,042.53 MW, although its figure at the start of data collection was not so low: 38,151.75 MW.
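The growth percentages quoted above are straightforward to reproduce from the capacity figures; a quick sketch using the article's own numbers:

```python
def growth_pct(initial_mw: float, final_mw: float) -> float:
    """Percentage increase from initial to final installed capacity (MW)."""
    return (final_mw - initial_mw) / initial_mw * 100

# Figures cited in the article (Q4 2017 -> latest reading, in MW).
china = growth_pct(270_827.59, 632_859.61)
germany = growth_pct(38_151.75, 60_042.53)

print(f"China:   +{china:.2f}%")    # +133.68%
print(f"Germany: +{germany:.2f}%")  # +57.38%
```

The same formula applied to Spain's 54,068.67 MW and a 127.93% increase implies a Q4 2017 starting point of roughly 23,700 MW.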
In fact, Germany's case was to be expected given the rise of self-consumption there, with more than 500,000 solar panels installed on balconies.

Spain's solar and wind capacity. And Spain? It is the next EU country on the list. The figures shown by the atlas are from the second quarter of 2024 and reveal a capacity of 54,068.67 MW, an increase of 127.93% over the fourth quarter of 2017. In addition, solar is the source that has grown the most over the same period, up more than 254.55%.

There are still things to polish. One of the main challenges is handling large volumes of data, since the platform processes terabytes of satellite imagery and thousands of predictions related to solar and wind energy. For the figures to remain reliable, the AI algorithms also require constant validation.

Image | Global Renewable Watch

In Xataka | A solar panel has been orbiting the Earth for seven years. Now we know that space photovoltaic parks are viable

Sunset images from the Moon are more than a curiosity. They will also help us solve an old mystery

Fifty years after the last crewed mission to the Moon, NASA and other space agencies around the world have renewed their interest in exploring the satellite in situ. For the US agency, this return to the Moon has been bittersweet: to the recurring stumbles of the Artemis program we must add the troubled landings of American private probes. But there are exceptions here too.

Sunset on the Moon. The first mission of the Blue Ghost module, from the Texan company Firefly Aerospace, has brought us a sequence of images showing sunset on the Moon. The images are not only striking; they may also help us solve a question that has pursued us since the era of the Apollo missions.

An enigma to solve. The question revolves around the phenomenon known as "lunar horizon glow" and is related to the dust on the satellite's surface. The phenomenon was documented by the Surveyor 7 mission, which reached the Moon at the end of 1968, and was also captured by the astronauts of the last crewed lunar mission, Apollo 17. The data compiled by the probe could help determine whether this glow is caused by tiny dust particles suspended above the lunar surface, and shed light on what could be driving this levitation in a place with a practically nonexistent atmosphere. The main hypothesis is that these particles become electrically charged by solar radiation, which would make them repel one another and thus levitate.

Halo. The images show us this unique luminous halo for the first time in decades. In the photographs, the greenish glow caused by lunar dust at dusk can be seen.

From eclipse to sunset. On March 2, the Blue Ghost module landed in the Moon's Mare Crisium region, becoming the first private mission to reach the satellite.
The mission operated for 14 days, the strictly daytime part of the lunar day, that is, until a few hours after the dusk we see in the image. The mission carried 10 NASA experiment payloads to the Moon. During its active period it sent 119 GB of data back to Earth, of which 51 GB correspond to scientific and technological data, according to NASA.

In those 14 days the mission was also able to capture a unique event: the lunar eclipse of March 14. And with a unique perspective: from the Moon, what was seen was an eclipse of the Sun, caused when our planet passed between the satellite and the Sun's light.

CLPS. The Firefly Aerospace mission is part of NASA's CLPS (Commercial Lunar Payload Services) initiative and the Artemis campaign. The program seeks the cooperation of private companies in lunar exploration, outsourcing part of the agency's efforts in this corner of the universe. The lunar missions framed in this initiative have so far arrived at our satellite with more sorrow than glory: only one of the four missions launched to date has managed to land with the ability to operate normally.

In Xataka | It has happened again: the Intuitive Machines spacecraft has arrived alive at the Moon, but tipped over at the last moment

Image | Firefly Aerospace

Google's new AI is able to erase the watermark from images

If someone wants to protect their images in order to sell them or distribute them in a controlled way, using watermarks is common practice. That protection could become useless against Google's new AI model, which has shown it can remove those watermarks in some cases. And the implications, of course, are significant.

It generates images with Gemini 2.0 Flash. Google's family of Gemini 2.0 Flash models has been delivering surprises lately. It certainly did so with its preliminary reasoning mode, Flash Thinking, and now it does so again with its image-generation mode. Among its striking options there is also one that is generating some controversy.

It is available in Google AI Studio, where it is enough to select this mode in the "Create Prompt" section by opening the "Model" drop-down on the right and choosing "Gemini 2.0 Flash (Image Generation) Experimental".

Multimodal. One of the most striking characteristics of this model is its multimodal capacity. Normally, to generate an image we write a text prompt describing what we want to obtain; with Gemini 2.0 Flash and this new mode we can also generate images from other images and modify them (given a photo of someone in a white shirt, we could ask, for example, "make the shirt red").

Surprising but imperfect. We ran a test with a starting image, then asked it to change the hair and the jacket. It made the hair blonde as we asked, but in doing so it completely changed the girl's features. The result of the second transformation, turning the jacket into a blue shirt, was not entirely perfect either, but even so the final result is striking.

Removing watermarks. Much more surprising, and also controversial, is this model's ability to remove watermarks from images. Many users shared their experiments on networks such as X, and in those examples the watermarks did indeed disappear.
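For context on what the model is undoing here: a visible watermark of the kind discussed in this article is, at its simplest, alpha blending, where the mark's pixels are mixed into the image's. A minimal sketch over grayscale pixel grids (all names and values are illustrative, not Xataka's actual test setup):

```python
def apply_watermark(image, mark, alpha=0.35, offset=(0, 0)):
    """Blend a grayscale mark (0-255 nested lists) into a copy of the image."""
    oy, ox = offset
    out = [row[:] for row in image]
    for y, mark_row in enumerate(mark):
        for x, mark_val in enumerate(mark_row):
            iy, ix = oy + y, ox + x
            if 0 <= iy < len(out) and 0 <= ix < len(out[iy]):
                # Weighted mix: a higher alpha makes the mark more visible
                out[iy][ix] = round((1 - alpha) * out[iy][ix] + alpha * mark_val)
    return out

photo = [[100] * 4 for _ in range(4)]   # flat gray "photo"
logo = [[255, 255], [255, 255]]         # white 2x2 "mark"
marked = apply_watermark(photo, logo, alpha=0.4, offset=(1, 1))
print(marked[1][1], marked[0][0])       # blended pixel vs. untouched pixel
```

Because the blend discards the original pixel values under the mark, "removing" a watermark means inventing plausible content, which is exactly why the model's reconstructions sometimes alter faces or delete objects, as the tests below show.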
In this example, not only did it fail to remove the watermark: it completely changed the model's face.

But it doesn't work 100% of the time. We wanted to test the service by generating our own watermark on an image. Once it was generated, we asked Gemini 2.0 Flash to remove it, but as you can see in the image, not only did it not remove it, it also changed the model's face and deleted some elements, such as the small pink ice-cream cone that appears in the lower-right part of the image. Here generative AI makes its own decisions, and these can be as unpredictable as they are wrong.

When it works, it is amazing. We ran more tests, and there were cases in which the result was surprising. In this case we took a watermarked image, cropped it and tried it in the model. As you can see in the image heading this article, the result is really good, but beware: things like the model's bracelets or her pendant disappear. Even so, the method works, and it poses a problem.

Back to the debate on AI and intellectual property. In recent days there has been much talk about how OpenAI asked the US government to remove copyright restrictions for AI companies. That way there would be no consequences for its indiscriminate use of copyright-protected works, and now this system adds more fuel to the fire: Google presumably did not intend this type of use when it designed its model, but the fact that it can remove watermarks is worrying. Above all, for the big image banks that use them so that artists can earn money from the content they generate.

What Google says. From Xataka we contacted Google, and a spokesperson states the following: "Using Google's generative tools to commit copyright violations is a violation of our terms of service. As with all experimental launches, we are closely monitoring them and paying attention to developers' comments."

In Xataka | 5,000 "tokens" of my blog are being used to train an AI. I have not given my permission
