14 functions, tricks and ideas to get the most out of ChatGPT image analysis

Let us tell you the best ways to take advantage of ChatGPT's image analysis. Some have become popular, but there are other hidden or lesser-known ones that can help you with specific tasks, and that is why we are going to cover them here. Many of these functions are available in the free version of ChatGPT, although some may need the paid version to work, or to improve their quality and reliability. And as we always say in Xataka Basics, these are our proposals: we invite you to leave any others you can think of in the comments, so all readers can benefit from the knowledge of our Xatakers.

Recognize plants and animals. Have you found a rare plant or an animal you cannot identify? If you take a picture of that plant or animal and upload it to ChatGPT along with a question asking what it is, the AI can clear up your doubts. And it will not limit itself to recognizing the plant or animal in the photo you send: it will also tell you some of its characteristics. In the case of plants it can even give you quick care advice.

Make drawings from a photo. Reimagining a photo has been ChatGPT's most popular function in recent times. It can edit your photos in the Ghibli style, as an action figure, and in many other ways; the only limit is your imagination and your ability to ask for an original style. All you have to do is upload a photo of a person and ask it to reimagine them in the style you want. Be aware that photos with many people in them can cause problems, and several of the people may disappear, so for now it is better to do this with just a few.

Analyze historical or current images. You can upload a photo and ask the AI to identify what is in it. If it is a famous painting it can identify it and tell you the author, but it can also recognize styles or even places.
It can also recognize uniforms, symbols and coats of arms, and tell you what they all mean. You can likewise convert an image into descriptive text by asking it to describe what appears in it, and it may even give you advice on improving drawings if they are yours.

Translate signs and posters. Imagine you are traveling and the signs or informative posters are in another language. In that case, you can ask ChatGPT to translate them: just take a picture and upload it, asking it to translate the text that appears in it into the language you want. This can also be useful for manuscripts that are hard to read or have deteriorated; you take a picture, and ChatGPT can try to interpret them for you even if they seem illegible.

Summarize the text in a photo. Imagine you are in a public park with large panels full of text, or in a museum where the lengthy explanation of the works is in another language. Here you can take translation to a new level, so it is not limited to telling you what the text says. For example, if the text is very long you can ask ChatGPT to summarize it directly. And if the translation is very complex, you can ask it to explain or summarize it as if for a high-school student, which should make things much easier.

Detect errors in technical diagrams. If you have doubts about a network diagram, a circuit or a workflow, you can also upload it to ChatGPT and ask it to analyze it and find errors or inconsistencies.

Check your website or app designs. If you are designing an application or a web page, you can upload a screenshot to show what the design looks like. Then you can ask questions that help you improve it. For example, you can ask ChatGPT to make suggestions to improve the usability of your website or app. It can also help you improve aesthetics and accessibility after analyzing the content of the screenshot you have uploaded.
It also helps with posters and other designs. You can also ask for help with other types of designs, such as posters, flyers or whatever you want. You can ask about typography, layout, composition, color choices or any other question you have.

Analyze charts for you. Another thing you can do with this AI is upload photos of charts, whether bar, line or pie charts. Then you can ask ChatGPT to analyze them and explain the information they contain. You can analyze tables in the same way.

Get advice for editing photos. ChatGPT cannot edit photos for you; at most it can reimagine them by drawing from them. However, when you upload one you can ask for tips to help you edit it in a particular way. For example, you can ask for help achieving a more vintage or professional look, and ChatGPT will give you several tips to guide you through the editing process and tell you what to change to get the specific look you want.

Correct your handwritten notes. Imagine you have a notebook where you have been taking notes in a class, but you are not sure whether you wrote everything down correctly. You can take a picture and send it to ChatGPT to check the notes and tell you if there are any errors, or even to help you summarize key concepts and other things.

It can help you with purchases. ChatGPT can also help you with shopping. For example, if you do not quite understand the composition or the label of a product, you can take …

AI images already flood the Internet. Google and Veo 2 will start filling it with AI-generated videos too

The images that mimic the Ghibli style are very good, but watch out, because we will soon be able to make small animated shorts that imitate that or any other style. Generative video models are closer than ever to the general public thanks to Google, which now offers its spectacular Veo 2 model as part of the Gemini Advanced subscription.

Hello, Gemini Advanced. When Veo 2 appeared on the scene it could only be used through the Vertex AI and VideoFX platforms, but the scope of both was limited and aimed at a more expert audience. Now the model reaches Gemini Advanced, which lets us turn text prompts into dynamic videos. Interestingly, the option that now arrives in Gemini Advanced has been available for weeks on the Spanish platform Freepik, which was the first to offer AI video generation using this same model.

Eight seconds of artificial creativity. The model, created by DeepMind, is available as one more option in the Gemini model drop-down menu. With it we can generate an eight-second video clip at 720p resolution, in MP4 format and with a 16:9 aspect ratio. Google points out that there is a monthly limit on the number of videos each Gemini Advanced user can create, but does not say what it is, only that we will receive a warning when we approach that limit.

A text prompt is enough. To create videos with Gemini we only have to describe the scene (as always, details help create it more faithfully) and Gemini will take care of materializing that idea and turning it into an AI-generated video.

Out to conquer the Internet. Google also wanted to make these videos easy and quick to share: just choose any of the videos we have generated, select the "Share" option, and upload it to platforms such as TikTok or YouTube Shorts. A flood of AI videos is in sight.
We have already seen how the "Ghiblification" of images took over the Internet, and AI models have shown that these are the kinds of uses that attract the general public, at least temporarily. AI-generated images are already everywhere and flooding the Internet, and now we face the same with AI-generated video. That Google is betting on this type of content and inviting you to share it freely is a forceful declaration of intent.

AI-generated video remains for those who pay. AI models for generating images have long offered their options to users even in free services (ChatGPT and Gemini both do), although with limitations. With video that is not yet the case, and Sora, Runway and now Veo 2 are all aimed at users who pay for the intensive use of resources needed to generate these videos.

Already available, Spain included. The new video-generation option in Gemini through Veo 2 begins rolling out gradually from today for Gemini Advanced subscribers worldwide, including Spain. We can also write our prompts in Spanish, which makes Gemini increasingly versatile, able to generate text, images and video (for Gemini Advanced subscribers, we insist).

A watermark in every frame. To prevent misuse of this tool, Gemini can refuse to create videos that go against Google's usage policies. In addition, all videos generated with Veo 2 include a digital watermark embedded in every frame thanks to SynthID technology.

Whisk, the other option. Veo 2 will also be available through Whisk, an experimental AI platform that is not available in Spain. Its advantage is that it accepts not only text prompts but also starting images that then serve as base frames for the rest of the animation. Thus, choosing a Studio Ghibli-style frame gets us an eight-second mini-clip that also mimics that style, as Google shows in its promotional videos.
The Whisk Animate function based on Veo 2 is available from today for subscribers of the Google One Premium plan in various countries around the world, although for now not in Spain. Image | Google In Xataka | The Spanish startup Freepik is already one of the most important "exits" in the history of Spain after being bought by the EQT fund

Image Playground: what this new iPhone and Mac app is and how to use it to create images with artificial intelligence

Let us tell you what Image Playground is and how to use it: the new application based on Apple Intelligence for creating images. This tool has begun to arrive and install itself on the latest iPhone and Mac models after the latest updates to their operating systems. It is a tool for creating images with artificial intelligence, but it works quite differently from alternatives such as ChatGPT, Copilot or Gemini. We will start the article by explaining its concept and how it works, and then we will tell you how to use it. In this article we will use macOS screenshots, because the screen is larger and everything is easier to see, but the operation is exactly the same on the iPhone, only with the interface adapted to the vertical format.

What is Image Playground. Image Playground is an application for creating images through artificial intelligence. It is installed automatically on your Mac and your iPhone when they receive the latest updates, which implement the rollout of Apple Intelligence in Spain. The images are created from scratch, but the way it works is different: you will not have to use full prompts, and you do not have to give it a complete description of what you want. Instead, what you do is add concepts and elements, and the image is generated from the sum of all of them. For example, you can add a photo of yourself, of other people or of anything else in your gallery. Then Image Playground lets you add objects, backgrounds, environments and accessories, and you can also write small prompts that are added to the rest of the elements. The idea is to create the image from the sum of several concepts, not from a single description. The advantage of this mechanic is that it is easier to make small modifications to the image, since you only have to remove concepts from the sum or add others.
The quality of the images created is much lower than what you will find in online services from the competition, but it works locally on your devices. This means the images are created directly on your Mac or iPhone, not on Apple's servers, which prioritizes privacy. Image Playground is available for iPhone 15 Pro and iPhone 16 models with iOS 18.4 or higher; for iPads with an M1 chip or later, or an A17, with iPadOS 18.4 or higher; and for Macs with M1 or later with macOS 15.4 or higher.

How to use Image Playground. Using Image Playground is simple. You open it and there is a bubble in the central part, and below it are the elements you can add. There are Apple's suggestions, such as backgrounds, people, environments or accessories, but below there is also a writing field, a people selector and the option of adding other photos to use. If you choose to select people, you will be taken to the ones you have tagged in Apple Photos. For each person you can choose different faces or photos, and you will see the results generated with each of them. Now what you have to do is keep adding all the concepts you want; remember that you also have a field where you can write a concept or description to add it. When you are ready, tap the central bubble to watch the slideshow. When you tap the bubble you go to the preview, where you can see the creations. As you slide, Image Playground creates new images, so you can slide through the alternatives indefinitely until you find one you like. You can also choose between three different image styles: the Animation style makes 3D images, and the other two are styles for making drawings in different ways. Now you just have to explore and experiment, adding or removing concepts as you like. When you have something you like, tap the OK button and you will see the image full size. You will then have options to share the image or delete it.
In Xataka Basics | Magic Eraser on iPhone and Mac: what it is and how to use it to remove unwanted elements from your photos with Apple Intelligence

Ghibli-style images have fired up the ChatGPT phenomenon all over again

You probably remember that moment in late 2022 when ChatGPT burst onto the scene. An application created by a practically unknown startup allowed, for the first time, a fluid conversation with an AI. We could ask it to write a story, review a text or explain string theory. At a time when the industry seemed to advance through incremental improvements, without great surprises, ChatGPT's irruption was an unexpected turn. And yes, it caught everyone off guard. Because, as happens so often, when a technology captivates us, we rush toward it en masse. The ChatGPT phenomenon was no exception: it needed only five days to reach one million users. A striking figure if we compare it with other technology giants: Netflix took three and a half years, Twitter two years, Facebook ten months and Instagram two and a half months. Now OpenAI is in the news again: it has done it again, beating its own popularity records. The person in charge of announcing the new milestone was Sam Altman himself. "The ChatGPT launch 26 months ago was one of the craziest viral moments I'd ever seen, and we added one million users in five days. We just added one million users in the last hour," the OpenAI CEO wrote in a message posted on X on March 31. Very few services are capable of receiving such an avalanche of users in so little time. One of the closest cases starred Meta with Threads, its alternative to Twitter, which added two million users in just two hours. Of course, the context was different: direct integration with Instagram greatly eased that task. But not everything stays within the official data. Analysis firms such as Sensor Tower also reflect ChatGPT's huge growth. According to its estimates, application downloads increased by 11% last week, while the number of active users grew by 5%, with subscriptions increasing 6% in the same period.
When technology connects with the public. It is no secret that artificial intelligence has evolved at high speed over the last two years. However, it is not always the most sophisticated technical advances (those that seek greater performance, solve complex problems or democratize access to reasoning models) that manage to connect with the general public. Usually, what moves us is on another plane: nostalgia, art, humor. Or, simply, memes. And that is precisely where OpenAI's latest release has found its place in everyday life. Exactly seven days ago, ChatGPT was updated with a new function: an integrated image generator, driven by the multimodal model GPT-4o. Although GPT-4o was already present in ChatGPT (we met it last year), until now it stood out for its skills in text generation and computer vision; it did not generate images. That has changed with this latest update, which, curiously, is especially good at recreating images in very recognizable styles. In Spain, as in other parts of the world, users soon took advantage of the novelty to unleash their creativity. They began to recreate iconic images in the purest Studio Ghibli style, transform vacation photos into scenes built with Lego pieces, or convert real portraits into Muppet versions or detailed pixel-art illustrations. But not everything is so idyllic. The fever for ChatGPT's new image-generation ability has been accompanied by two factors that, although less visible, are not minor. The first has to do with infrastructure: demand has been so high that OpenAI delayed the rollout for free users and imposed usage limits. In addition, the question of visual styles is not free of controversy. As users experiment with recreations inspired by very recognizable creative universes, a debate that has long surrounded the development of artificial intelligence is back on the table: how did these models learn?
The answer, although not always transparent, points in an uncomfortable direction. Many of these models, including the one that drives this function in ChatGPT, have been trained on large volumes of images available on the web, many of them protected by copyright. This, once again, is a point of tension between tech companies and authors. Images | Xataka with ChatGPT | @MDURBAR | X screen capture In Xataka | How to turn your photo into an action figure with accessories using ChatGPT In Xataka | OpenAI has just raised the largest financing round in history: there is blind faith in AI despite everything

All we needed was AI and satellite images

Last year exceeded all expectations regarding the installation of renewable energy. However, until now we did not have a clear visual picture of this phenomenon, only data with which to make comparisons. Now we can access that panorama.

In short. The Global Renewable Watch project consists of a map created from satellite images and artificial intelligence models to show the expansion of solar and wind energy. The initiative has been developed by Microsoft's AI for Good Lab, in collaboration with The Nature Conservancy and Planet Labs PBC. In addition, Cornell University has participated in the validation of the project.

How it works. The program's creators have explained that, thanks to AI, they can identify where it is most viable to install renewable projects, considering factors such as long-term land availability, each country's electrical infrastructure and the environmental impact. In addition, the platform publishes its information on GitHub and uses an open-source approach to guarantee transparency.

Accelerated growth. A quick look at the program shows that renewable energy has advanced at an unprecedented rate. In it we can see something we already knew: China leads in installed solar and wind capacity, with large projects across the country totaling 632,859.61 MW. This figure is quite significant because in the data collected in the fourth quarter of 2017 it had 270,827.59 MW, an increase of 133.68%. Well below, but still with a significant number, is the United States, with 285,974.11 MW, an increase of 126.63%. The data show a considerable rise in the American country, although with Trump's policies it seems it will stay at that level. In the European Union, the most prominent case is Germany with 60,042.53 MW, although its figure at the start of data collection was not so low: 38,151.75 MW.
In fact, Germany's case was expected given the rise of self-consumption, with more than 500,000 solar panels installed on balconies.

Spain's solar and wind capacity. And Spain? It is the next EU country on the list. The data shown by the atlas are from the second quarter of 2024, and they reveal a capacity of 54,068.67 MW, 127.93% more than in the fourth quarter of 2017. In addition, solar is the source that has grown the most over that same period, with an increase of more than 254.55%.

There are still things to polish. One of the main challenges is handling large volumes of data, since the platform processes terabytes of satellite imagery and thousands of predictions related to solar and wind energy. In addition, for the data to be reliable, constant validation of the AI algorithms is required. Image | Global Renewable Watch In Xataka | A solar panel has been orbiting the Earth for seven years. Now we know that space photovoltaic farms are viable
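The percentage increases quoted above are plain growth ratios between the Q4 2017 baseline and the latest reading. A quick sketch reproducing them from the figures cited in the text:

```python
def growth_pct(initial_mw: float, latest_mw: float) -> float:
    """Percentage growth between two installed-capacity readings."""
    return (latest_mw - initial_mw) / initial_mw * 100

# China: Q4 2017 baseline vs. the latest figure cited in the article
china = growth_pct(270_827.59, 632_859.61)
print(f"China: +{china:.2f}%")    # → China: +133.68%

# Germany: the article gives both readings but no percentage
germany = growth_pct(38_151.75, 60_042.53)
print(f"Germany: +{germany:.2f}%")  # → Germany: +57.38%
```

Note that Germany's growth (around 57%) is far below China's or Spain's, which is consistent with its already high starting point in 2017.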

Sunset images from the Moon are more than a curiosity. They will also help us solve an old mystery

Fifty years after the last crewed mission to the Moon, NASA and other space agencies around the world have renewed their interest in exploring the satellite in situ. For the US agency, this return to the Moon has been bittersweet: to the recurring stumbles of the Artemis program we must add the rough landings of American private probes. But there are exceptions here too.

Sunset on the Moon. The first mission of the Blue Ghost module, from the Texan company Firefly Aerospace, has brought us a sequence of images showing the sunset on the Moon. The images are not only striking: they may help us solve a question that has haunted us since the era of the Apollo missions.

An enigma to solve. The question revolves around the phenomenon known as the "lunar horizon glow" and is related to the dust on the satellite's surface. This phenomenon was documented by the Surveyor 7 mission, which arrived at the satellite at the end of 1968, and was also captured by the astronauts of the last crewed lunar mission, Apollo 17. The data compiled by the probe could help us determine whether this glow is due to small dust particles suspended above the lunar surface, and to investigate the phenomenon that would be causing this levitation in a place with a practically nonexistent atmosphere such as the Moon. The main hypothesis is that these particles could be electrically charged by solar radiation, which would make them repel one another and thus levitate.

Halo. The images show us this unique luminous halo for the first time in decades. In the photographs, the greenish glow caused by lunar dust at the lunar dusk can be seen.

From eclipse to sunset. On March 2, the Blue Ghost module landed in the Moon's Mare Crisium region, becoming the first private mission to reach the satellite.
The mission operated for 14 days, the strictly daytime part of the lunar day, that is, until a few hours after the dusk we see in the image. The mission brought 10 NASA experiment payloads to the Moon. During its active period it sent 119 GB of data to Earth, of which 51 GB correspond to technological and scientific data, according to NASA. In those 14 days the mission was also able to capture a unique event: the lunar eclipse of March 14. And with a unique level of detail: from the Moon, what was seen was an eclipse of the Sun, caused when our planet came between the satellite and the Sun's light.

CLPS. The Firefly Aerospace mission is part of NASA's CLPS (Commercial Lunar Payload Services) initiative and the Artemis campaign. This program seeks to have private companies cooperate with NASA in lunar exploration in order to outsource part of the agency's efforts in this corner of the universe. The lunar missions framed in this initiative have been arriving at our satellite with more pain than glory: only one of the four missions launched to date has managed to land on the satellite with the ability to operate normally. In Xataka | It has happened again: the Intuitive Machines spacecraft has arrived at the Moon alive, but has tipped over at the last moment Image | Firefly Aerospace

Google's new AI model is able to erase the watermark from images

If someone wants to protect their images in order to sell them or distribute them in a controlled way, using watermarks is the usual approach. That protection could be rendered useless by Google's new AI model, which has shown it can remove those watermarks in some cases. And the implications are important, of course.

Generating images with Gemini 2.0 Flash. Google's Gemini 2.0 Flash family of models has been springing surprises lately. It certainly did so with its preliminary reasoning mode, Flash Thinking, and now it does so again with its image-generation mode. It offers several striking options, but also one that is generating some controversy. It is available in Google AI Studio, where you just select this mode in the "Create Prompt" section by opening the "Model" drop-down on the right and choosing "Gemini 2.0 Flash (Image Generation) Experimental".

Multimodal. One of the most striking characteristics of this model is its multimodal capacity. Normally, to generate an image we write a text prompt describing what we want to achieve, but with Gemini 2.0 Flash and this new mode we can generate images from other images that we want to modify (given a photo of someone in a white shirt, we could ask, for example, "make the shirt red").

Surprising but imperfect. We wanted to run a test with a starting image, then change the hair and the jacket. It made the hair blonde as we asked, but in doing so it completely changed the girl's features. The result of the second transformation, turning the jacket into a blue shirt, was not entirely perfect either, but even so the final result is striking.

Removing watermarks. Much more surprising, but also controversial, is this model's ability to remove watermarks from images. Many users shared their experiments on networks like X, and there it was indeed shown how, in those examples, the watermarks disappeared.
In this example it not only failed to remove the watermark, but completely changed the model's face.

But it does not work 100% of the time. We wanted to try the service by generating our own watermark on an image. Once generated, we asked Gemini 2.0 Flash to remove it, but as you can see in the image, it not only failed to remove it but also changed the model's face and eliminated some elements, such as the small pink ice-cream cone that appears in the lower right part of the image. Here generative AI makes its own decisions, and these can be as unpredictable as they are wrong.

When it works, it is amazing. We ran more tests, and there were cases in which the result was surprising. In one case we took an image with a watermark, cropped it and tried it in the model. As you can see in the image that heads the article, the result is really good, but be careful: things like the model's bracelets or her pendant disappear. Even so, the method works, and it poses a problem.

Back to the AI and intellectual property debate. In recent days there has been much talk about how OpenAI asked the US government to eliminate copyright restrictions for AI companies. That way there would be no consequences for the indiscriminate use of copyrighted works, and now this system adds more fuel to the fire. Google surely did not intend this kind of use when it designed its model, but the fact that it can be used to remove these watermarks is worrying, above all for the big image banks that use them so that artists can make money from the content they generate.

What Google says. From Xataka we have contacted Google, and a spokesperson says the following: "Using Google's generative AI tools to engage in copyright infringement is a violation of our terms of service. As with all experimental releases, we are monitoring closely and listening to developer feedback." In Xataka | 5,000 "tokens" from my blog are being used to train an AI.
I have not given my permission

Satellite images have revealed an unknown complex in Neom. It is so luxurious that it can only be for one person

The last thing we knew about Neom was possibly what moved furthest away from the ambitious project Saudi Arabia has in hand. Apparently, the futurist-city proposal will not rest exclusively on luxury hotels and futuristic skyscrapers. There will be much more, among other things an agreement to become a leading hub for AI. The latest: a complex so exclusive and unknown that it can only be for one person.

A luxurious palatial complex. The Neom megaproject, Saudi Arabia's ambitious initiative of around 2 trillion dollars to transform its economy and position itself as a global benchmark in technology, innovation and tourism, is in the news again this week after the release of satellite images revealing a previously unknown structure: the construction of a lavish palace on the Red Sea coast. Captured in January by the satellite imaging company Maxar Technologies and published in a documentary by the outlet Business Insider, the images show an exclusive complex with private beaches, extensive gardens, a golf course and up to 10 helipads.

It can only be for one person. As the outlet points out, this palace clearly points to a more than likely residence: that of the crown prince Mohammed bin Salman. In addition, although no one from the project has spoken about this structure, the satellite plans coincide with earlier information reported by Reuters in 2018, when, at the dawn of the project, the award of the first contracts to build up to five royal palaces in the region was mentioned, approximately 170 kilometers from Tabuk.

Neom: a megaproject of light and shadow. We have told it several times. Neom is the central pillar of Mohammed bin Salman's modernization plan, designed to diversify the Saudi economy and reduce its dependence on oil. And of all its initiatives, the boldest proposal is The Line, that vertical city 170 kilometers long, conceived as a futuristic model of sustainable urbanization.
However, the project has faced financial challenges, environmental controversies and human rights concerns (there has even been talk of deaths). Originally scheduled for completion in 2030, development has slowed significantly (at one point it was even thought the project would be aborted). At present, it is estimated that only a 2.5-kilometer section called Hidden Marina could be ready by that date.

A prince and his lifestyle. The outlet also recounts that Prince Mohammed bin Salman is known for his luxurious lifestyle and his expensive property portfolio. Among his most notable acquisitions are a château in southern France valued at 300 million dollars and a 300-million-dollar superyacht, reflecting the level of opulence associated with his figure. The construction of the palace in Neom therefore seems to fit this pattern of large personal investments in the middle of a colossal national project.

Neom: between ambition and reality. In short, the pharaonic project represents Saudi Arabia's most ambitious attempt to redefine its economic future and its global image. However, the difficulties in its execution, the lack of foreign investment and the concerns about its environmental and social impact call its short-term viability into question. The revelation of the private palace inside the megaproject only adds more questions about its true nature and, at the very least, leaves a reasonable doubt hanging: are we looking at a step toward the future the kingdom advertises, or at a symbol of excess and megalomania in the middle of a most uncertain plan? Image | Maxar In Xataka | Neom will not be just luxury hotels and futuristic skyscrapers: Saudi Arabia has sealed a key pact to dominate AI In Xataka | The influencers who already live in Neom are uploading videos to encourage people to go. It looks like an industrial estate

We have dedicated six years to processing images of a black hole to reach a conclusion: Einstein was right

Several years have passed since the Event Horizon Telescope (EHT) published the famous first image of a black hole, taken in 2017. The photo was indeed questioned by some researchers, but last year the EHT published a second image of the black hole M87*, taken in 2018. The new photo not only validated the original; it once again corroborated Einstein's theory of general relativity. The largest radio telescope. To obtain the image of the black hole at the centre of the Messier 87 galaxy, a radio telescope about 10,000 kilometres in diameter was needed. Since the Earth is 13,000 kilometres across, the EHT took a more reasonable path: extracting data from different receivers, telescopes and radio antennas around the world and combining them by interferometry. The EHT produced 250 petabytes of information in a one-week interval. It took a couple of years to process all that information and publish an image. But first, the project added a new telescope (the GLT in Greenland) and took the second image of M87*, which saw the light in 2024. Six years of processing. The second image of the black hole M87*, taken a year and ten days after the original, in April 2018, took six years to process and publish, but it was worth it. On the one hand, it proves that the 2017 observations were sound: the persistence of the size of the central shadow in both images confirms the original estimate of the black hole's dimensions, dispelling criticism about the reliance on simulations to calculate that figure. On the other, comparing the two images shows that the ring of matter around the black hole is rotating as expected. The brightest part has shifted 30 degrees, which is consistent with models of the hole. We are seeing what Einstein predicted.
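For readers curious about why an Earth-sized dish is needed, the arithmetic behind that 10,000-kilometre figure can be sketched in a few lines. This is a rough diffraction-limit estimate; the ~1.3 mm observing wavelength is taken from the EHT's published specifications, not from this article:

```python
import math

# Back-of-the-envelope: resolving power of an interferometer.
# The diffraction limit is theta ≈ lambda / D, where D is the
# longest baseline between antennas.
wavelength_m = 1.3e-3   # EHT observing wavelength (~1.3 mm, i.e. 230 GHz)
baseline_m = 10_000e3   # ~10,000 km: Earth-scale baselines

theta_rad = wavelength_m / baseline_m
theta_uas = theta_rad * (180 / math.pi) * 3600 * 1e6  # rad -> microarcseconds

print(f"Angular resolution: ~{theta_uas:.0f} microarcseconds")
# M87*'s shadow spans roughly 40 microarcseconds on the sky,
# so only baselines of this scale can resolve it.
```

The result, a few tens of microarcseconds, is why no single dish on Earth would do: only by combining antennas separated by thousands of kilometres does the effective aperture become large enough.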
Located 55 million light years from us, M87* is a supermassive black hole at the centre of an elliptical galaxy that manipulates matter with its magnetic fields and expels what it does not consume at speeds close to that of light. The 2018 image, like its 2017 predecessor, reflects this tumultuous activity with a bright ring around the hole. This validates the theory that the diameter of the event horizon, and therefore that of the black hole itself, is intrinsically linked to its mass, framing a central shadow that Albert Einstein's equations predicted more than a century ago. Why it looks like a doughnut. That brilliant doughnut, the accretion disc, should look very thin, but it reaches us dispersed and blurred. Over its long journey through space, its light has been scattered by dust in the interstellar medium, which is why we see it this way. Despite the dispersion, the image is clear enough to confirm not only the black hole's rotation but also the alignment of its rotational axis with a powerful stream of material (a "relativistic jet") moving away from M87. The importance of reproducing results. Although it took six years to arrive, this confirmation vindicates the EHT's findings and is seen as a milestone for global scientific collaboration, as well as robust confirmation that we are looking at the shadow of a black hole and the matter orbiting it. Future data analysis will help us better understand how magnetic fields and plasma flows interact within the accretion disc. In the next decade, we could even have videos of M87* evolving over time thanks to the EHT's next-generation programme (ngEHT), which promises images of greater resolution and a broader range of frequencies. All thanks to the collaboration of observatories around the world. Image | Event Horizon Telescope In Xataka | A group of astrophysicists has knocked down Kerr's hypothesis.
Black holes are still a source of surprises In Xataka | There has been water since the beginning of time: NASA has found 140 trillion oceans' worth 12 billion light years away *An earlier version of this article was published in February 2024

Satellite images show what aims to be China's next great aircraft carrier

Satellite images of the Dalian shipyard, in Liaoning province, point to a key advance in China's naval strategy: the possible construction of its fourth aircraft carrier. Known as the Type 004, the ship would stand out for integrating an electromagnetic catapult system to launch aircraft and drones, as well as for a greater displacement than previous models. There is also speculation about the incorporation of nuclear propulsion, which would mark a significant leap in the operational capacity of the Chinese Navy. As The War Zone points out, the images, obtained by Airbus and accessible on Google Earth, date from last year, although they have recently drawn attention. The satellite view suggests the carrier is still at an early phase of construction, with structures that appear consistent with the catapult system. In addition, mock-ups of a J-15 Flanker fighter and a naval helicopter from the Z-8 series have been identified. Such elements are not accidental: they are usually used in testing and in the development of new aircraft carriers. A project that has been brewing for years. Speculation about this carrier began almost a decade ago and gained strength in 2017, when a gantry crane was installed at the Jiangnan shipyard. However, that project did not prosper, and rumours continued to surface sporadically. It was not until March 2024 that the admiral and political commissar of the Chinese Navy, Yuan Huazhi, officially confirmed its existence. He insisted they were facing no setbacks and said it would soon be announced whether the carrier will have nuclear propulsion. As mentioned above, one of the greatest advantages of this carrier will be its catapult system, which would put it on a par with the American USS Gerald R. Ford in this respect. This offers key benefits, such as the ability to launch heavier aircraft.
For the Chinese Navy, this means more flight endurance thanks to a greater fuel load, and more weapons capacity. If nuclear propulsion is confirmed, the ship could operate without geographical restrictions, with a practically unlimited range. The mock-ups at the Dalian shipyard. You may be wondering what kind of aircraft the Type 004 will launch. For now there is no official confirmation, but analysts believe it will operate the KJ-600, a twin-turboprop aircraft comparable to the US Navy's E-2 Hawkeye. These are airborne early warning and control aircraft, designed to provide surveillance, reconnaissance and airspace management. Equipped with long-range radar, they can detect and track both enemy aircraft and vessels at great distances. Although there is no official confirmation, analysts also suggest that the Type 004 will operate the J-35 stealth fighter, optimized for carrier missions and featuring advanced stealth technology. It would be supported by improved versions of the J-15, a versatile fighter adapted to electromagnetic catapults. The incorporation of unmanned aircraft is also expected, expanding the ship's tactical capabilities. With this combination, the Chinese Navy would reinforce its air dominance on the open sea, integrating manned aircraft and drones in joint operations. Images | Google Earth screen capture In Xataka | China is already sailing its latest amphibious beast. It has a huge deck for drones and points to three missions
