The international image of the UAE

“It’s not the Dubai we know.” The words are those of Satya Jaganathan, a resident of the United Arab Emirates (UAE) who on Sunday told the BBC how her routine had been turned upside down by something rarely seen in one of the richest and most stable nations in the Middle East: missiles. Over the weekend, in response to the US-Israeli attack that killed its supreme leader, Tehran answered with a wave of missiles partly aimed at its Gulf neighbors: Bahrain, Qatar, Saudi Arabia, Kuwait and the UAE, where Jaganathan was caught. The Iranian drones and missiles have not left a large number of victims in the UAE, but they have dealt a severe blow to something equally important for the country: the image of stability it projects globally, a core asset that has helped it become home to thousands of expats and a logistics hub. As Jaganathan says, the Dubai of this Sunday “is not the Dubai we know.”

What has happened? The Middle East faces what is probably its tensest outlook in years. On Saturday, Israel and the United States launched a powerful attack against Iran that killed the country’s supreme leader, Ayatollah Ali Khamenei, as well as the Iranian defense minister and the commander of the Revolutionary Guard, according to Reuters. Tehran’s reaction was devastating. Unlike other Iranian attacks, such as the one in 2024 or the ‘Twelve-Day War’, when the Islamic Republic’s offensives seemed to seek a “planned de-escalation”, this time Iranian forces have responded in earnest, and in the process have struck where it hurts most in countries like the UAE or Saudi Arabia.

What has Iran done? Tehran has responded to the Israeli and American attacks with severity, launching missiles and drones that, for now, do not appear to seek de-escalation. It has managed to escalate the conflict and directly drag other Middle Eastern countries into it. 
In addition to directing missiles toward Israel, the Islamic Republic has struck the United Arab Emirates, Bahrain, Qatar, Saudi Arabia, Kuwait, Jordan and Iraq. That is no coincidence: to a greater or lesser extent, these seven nations facilitate Washington’s operations in the region. The port of Jebel Ali, for example, regularly hosts American ships; Bahrain is home to the US Navy’s Fifth Fleet; and the US also makes use of Doha. “All occupied territories and US criminal bases in the region have been hit by powerful Iranian missile strikes. This operation will continue relentlessly until the enemy is decisively defeated,” claims the Revolutionary Guard. The purpose is clear: to pressure Iran’s neighbors into limiting Washington’s reach. In case there were any doubts, the Iranian foreign minister, Abbas Araghchi, reminded the countries of the region that they have the “responsibility to prevent the improper use of their facilities and territories.”

What have the attacks looked like? Beyond the Iranian rhetoric, they do not appear to have had serious consequences in terms of casualties or destroyed infrastructure. Jordan claims to have shot down a pair of ballistic missiles and, although “objects and debris” fell at several points, they caused only material damage. In Kuwait a drone hit the airfield, and in Saudi Arabia the government insists it has repelled “cowardly attacks” against Riyadh and the Eastern Province. That does not mean Iran has left no destruction or victims behind.

Are there figures? Yes. The New York Times reports that the Iranian attacks have caused at least four deaths and more than a hundred injuries in the United Arab Emirates, Kuwait, Qatar, Bahrain and Oman. The country that has drawn the most attention is probably the UAE, which according to its authorities received a wave of more than 540 drones, 165 ballistic missiles and two cruise missiles. 
Emirati air defense systems intercepted most of the projectiles, but that did not prevent the blow from being felt in one of the most influential and thriving states in the region. In Dubai, the financial heart of the Middle East, images have circulated of luxury hotels on fire, towers with windows blown out by explosions and chaos at the airport.

Is that all? No. Beyond the toll of injuries, deaths and damaged infrastructure, Iran has pursued another objective: hitting the international image of its neighbors and undermining their reputation as reliable destinations. The worst part has probably fallen on the Emirates, home to hundreds of thousands of expats. The country has also become a major tourist hub, both as a destination in itself and for its strategic position, which makes it a stopover for many Western travelers flying on to Asia or Oceania. In practice, that translates into two things: a constant flow of millions of travelers from the rest of the world and many millions of dollars. It is a whole way of diversifying the economy beyond oil, an objective that neighboring Saudi Arabia has also been pursuing for years through its megaprojects.

Is it that serious? Beyond its skyscrapers, luxury, landscapes, standard of living and first-rate infrastructure (the hooks it uses to attract expats and tourists), the UAE above all plays the card of its stability. The very card Iran now wants to take away. “You don’t expect to hear missiles flying in Dubai,” Elizabeth Rayment, who was caught by the attack on the Palm Islands, told The New York Times. The weekend attacks caused a fire at the Fairmont The Palm, a luxurious five-star hotel in Dubai, and debris from an Iranian drone also damaged the Burj Al Arab.

What is the objective? For Middle East expert Andrew Thomas, there is little doubt about Iran’s purpose. 
“This is a deliberate strategy, designed to impose early and substantial costs on its neighbors and the overall stability of the region,” he explains in an article in The Conversation. “The strategy is to weaken the region and … Read more

We can no longer trust any image on the internet

In 2012, Hurricane Sandy devastated the Caribbean and reached the coast of New York, leaving floods, power outages and spectacular photos. One especially striking image went viral, but there was a problem: it was fake. And it wasn’t the only one that slipped onto social networks. That image was just one more example of what we have seen before and since: major events and phenomena generate floods of content, some of it not real. There are many reasons why people take advantage of these moments to spread false images, but in the past producing credible images and videos was at least expensive: only advanced users of applications like Photoshop or Final Cut/Premiere could achieve convincing results. AI, as we know, has changed all that. We have been warning about this problem for some time: distinguishing between what is real and what is AI-generated is getting harder, and these days we have seen the latest great example of this trend.

Anatomy of a deepfake. The Kamchatka Peninsula, in the far east of Russia, has experienced a historic snowstorm, the worst in decades according to records, with snow exceeding two meters in several areas, according to Xinhua. Petropavlovsk-Kamchatsky, the administrative, industrial and scientific center of Kamchatka Krai, has suffered these consequences especially, and residents of the region have spread images on social networks of what has already been dubbed the “snow apocalypse.” The real images circulating in news media and on social networks, often more mundane and much less spectacular, contrast with others that theoretically also showed the state of various points in the region but were actually generated with AI. One such video was shared a few days ago by Linus Ekenstam, an influencer who often shares news and reflections on AI. 
He republished the video claiming it was real, but several users soon pointed out that it had actually been created with AI. Ekenstam argued that the supposed AI glitch one user pointed to was not one, and that where he lives there are poles next to the streetlights; he tried to defend that, to him, the video looked real, but others insisted it was not. The definitive proof: a user linked to the apparent original video, which originated in a TikTok account dedicated precisely to spreading AI-generated content that looks real. The crucial thing about that fake video is that it is spectacular, but not overly so. It is, to a certain extent, believable, and when the image and even the camera movement are so convincing, it is hard to stop and think “maybe this is AI-generated.” With this snowstorm in Kamchatka, unusual images have circulated on social networks, more typical of a dystopian Hollywood movie than of a real natural phenomenon. At first glance the images may even seem coherent, but a more detailed, and above all more critical, examination makes it easier to realize that they may not be as real as they seem. In fact, the most striking images shared on social networks, accumulating thousands of retweets and likes on X, contrast with those published in traditional media, which as we said tend to be far less flashy and much more mundane. Spanish media such as Onda Cero or OKDiario have published some of these AI-generated images and videos on their sites or social media accounts without realizing that they actually originated in the aforementioned TikTok account, whose content has spread like wildfire. Debates about whether certain images could be real have been frequent on Reddit, where users shared, for example, an amazing shot that on detailed analysis appeared to be AI-generated. 
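It is worth being concrete about one point: there is no reliable programmatic test for AI-generated images. One weak and trivially spoofable signal sometimes checked is whether a file carries camera EXIF metadata at all, since straight-from-camera JPEGs usually do and many generated files do not. The sketch below is only an illustration of that idea; it assumes raw JPEG bytes, relies only on the standard JPEG marker layout, and the helper name is ours, not from any detection library:

```python
def has_exif_segment(jpeg_bytes: bytes) -> bool:
    """Return True if the byte stream contains a JPEG APP1/EXIF segment.

    JPEG files start with the SOI marker 0xFFD8; camera EXIF metadata
    lives in an APP1 segment (marker 0xFFE1) whose payload begins with
    the ASCII identifier "Exif" followed by two null bytes.
    Absence of EXIF proves nothing: it is a weak, easily spoofed signal.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):
        return False  # not a JPEG at all
    idx = jpeg_bytes.find(b"\xff\xe1")
    # Skip the 2-byte marker and the 2-byte segment length, then compare.
    return idx != -1 and jpeg_bytes[idx + 4 : idx + 10] == b"Exif\x00\x00"

# A stub of a camera JPEG: SOI + APP1 header + "Exif\0\0" identifier
with_exif = b"\xff\xd8\xff\xe1\x00\x10Exif\x00\x00" + b"\x00" * 10
# A stub of a JPEG with no APP1 segment (common in generated or re-encoded images)
without_exif = b"\xff\xd8\xff\xdb\x00\x04" + b"\x00" * 10

print(has_exif_segment(with_exif))     # True
print(has_exif_segment(without_exif))  # False
```

Re-encoding by any social network strips this metadata anyway, which is exactly why the article's advice to distrust what we see remains the only robust defense.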
The avalanche of “citizen journalism”, which can be well-intentioned and very valuable at times, contrasts here with the role of the media, which bears an enormous responsibility to act as a trusted source of information. Even they (and we) can fall into the trap, and here, once again, the best approach is to start distrusting what we see on our screens, because it may be fake. The videos that appeared in outlets such as Sky News or La Vanguardia mix with others that, at least a priori, seem real, but that at this point also require rigorous examination.

Our brain betrays us, and technology knows it. Several well-studied psychological phenomena and cognitive biases explain why we believed fake news in the past and why the same thing is happening to us now with deepfakes. It does not matter if we know (or at least rationally suspect) that these images and videos are false: technology, and AI in particular, exploits precisely these biases. Among them, the following stand out:

Confirmation bias: we believe what fits with what we already believe. Our brain does not seek truth so much as internal coherence, so if a piece of news reinforces our ideology we lower our critical guard, but if it contradicts it we analyze it under a magnifying glass or dismiss it outright. The problem is that AI can generate tailor-made content adjusted to each narrative.

Illusory truth effect: “if I have seen it many times, it must be true.” Repetition increases the feeling of truthfulness, not actual truthfulness, and social networks, machines for repeating hoaxes, exploit this to the fullest. Again, AI facilitates the mass production of the same lie with minimal variations.

We believe what we see: this is what some call perceptual realism. 
We trust the visual too much, and hence the famous saying “a picture is worth a thousand words.” Images are processed much faster than text, and critical thinking arrives after the emotional reaction, as Daniel Kahneman argued in his famous ‘Thinking, Fast and Slow’. Cognitive load: related … Read more

How to create an image of yourself and a Pixar character with your face using artificial intelligence, with Gemini or ChatGPT

We are going to explain how to create an image in which you appear holding a miniature 3D character of yourself, styled like a Pixar character, using artificial intelligence. We will use a prompt created for Gemini, although it will also work in ChatGPT without problems. It is a fairly simple composition: you only need to add a photo of yourself and write the prompt, which is quite long and complex. The result is quite striking, although you may need several tries to get it completely to your liking.

An image of you with a 3D cartoon. What you have to do is open a new chat with Gemini, which is the AI that gives the best results. Once there, upload a photo of yourself in which your face is clearly visible, and then add the following text as your request or prompt:

“Use the uploaded photo as the ONLY facial and identity reference. The main subject must look exactly like the person in the uploaded image, preserving identical facial structure, proportions, skin tone, hairstyle, eye shape, nose, lips, jawline and overall identity. Do not embellish, alter or replace facial features. Create a cinematic, ultra-detailed scene of your subject smiling naturally. The subject delicately holds a tiny, cartoon-style miniature version of the same person by the hair between his fingers, like a playful puppet suspended in the air. The miniature character is a Pixar/Disney-style 3D version of the same person, with cute, exaggerated proportions, big, expressive eyes, mouth open with joy, arms raised, and a lively, playful stance. The miniature must clearly resemble the same person and be wearing a matching outfit. The main subject looks at the little character with surprise, delight and affection, creating a whimsical and touching interaction. Lighting is warm professional studio lighting with soft rim light, shallow depth of field, and soft golden bokeh background. 
The real person’s skin texture is photorealistic, while the miniature character has clean Pixar-style materials, smooth shading, and polished 3D surfaces. Cinematic color grading, high contrast, sharp focus, premium portrait composition, 50mm lens look, f/1.8 aperture, ultra-realism mixed with stylized animation, 4:5 aspect ratio, 8K quality, cinematic finish. Anime, 2D illustration, comic style, flat shading, low poly, plastic skin, wax face, face swap, different identity, facial morphing, beauty filters, excessive smoothing, blur, low resolution, grain, noise, distortion, deformed face, incorrect facial proportions, extra fingers, missing fingers, duplicate hands, floating objects, bad anatomy, inconsistent lighting, harsh shadows, neon colors, cold blue tones, washed out colors, excessive saturation, watermark, text, logo, severed head, face out of frame.” Yes, it is a very long text, but each of its sentences contributes to the effect. When you send it, you will receive a composition showing you holding a Pixar-style character with your face between your fingers. You can also do it with ChatGPT, which occasionally produces good results; however, the faces sometimes come out somewhat deformed, and for now Gemini almost always does better. In Xataka Basics | How to create a character in ChatGPT and Gemini to use it in all the images you make with artificial intelligence
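For readers who want to tweak a prompt of this length programmatically, it is easier to maintain when assembled from labeled sections (identity lock, scene, lighting, negative terms) than as one long string. The split below is our own illustrative condensation of the article's prompt, not an official template from Gemini or ChatGPT:

```python
# Each key names one role the article's prompt plays; the texts here are
# shortened excerpts of the full prompt, for illustration only.
PROMPT_PARTS = {
    "identity_lock": (
        "Use the uploaded photo as the ONLY facial and identity reference. "
        "Do not embellish, alter or replace facial features."
    ),
    "scene": (
        "Create a cinematic, ultra-detailed scene of the subject holding a "
        "tiny Pixar/Disney-style 3D miniature version of the same person."
    ),
    "lighting": (
        "Warm professional studio lighting, soft rim light, shallow depth "
        "of field, soft golden bokeh background."
    ),
    "negative": (
        "Anime, 2D illustration, face swap, deformed face, extra fingers, "
        "watermark, text, logo."
    ),
}


def build_prompt(parts: dict) -> str:
    """Join the labeled sections, in order, into the single prompt string."""
    order = ("identity_lock", "scene", "lighting", "negative")
    return " ".join(parts[key] for key in order)


print(build_prompt(PROMPT_PARTS))
```

Keeping the sections separate makes it simple to swap, say, the lighting block while leaving the identity-lock wording, which the article stresses is the critical part, untouched.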

How to create a Christmas sugar cookie image with your pet’s photo in a couple of clicks with ChatGPT

We are going to explain, step by step, how to create a Christmas sugar cookie image from a photo of your pet using ChatGPT Images, the section of ChatGPT for creating images from photographs. This tool lets you do the edit with artificial intelligence without having to write a specific prompt. The best part is that these images are very easy to make: you only need a couple of clicks. Furthermore, since ChatGPT also shows you the prompt it generated when the photo is created, you can copy and paste it into another AI or modify it however you want.

Make sugar cookies from your pets. The first thing to do is open the ChatGPT website or app on your device. In the side menu, click on the Images section, which appears just below the search options. Once inside Images, look for the row that says Test a style on an image. There, find and click the sugar cookie option, shown with the drawing of a dog biscuit. This opens a window where you choose the photo of your pet to use as a reference; you can pick one of the recent photos you have used in this section or manually choose another. And that’s it: ChatGPT will create the sugar cookie image from the pet photo you uploaded. If you are not satisfied, you can try again or reuse the prompt with another photo. You can also edit and modify the prompt so the result is different. In Xataka Basics | How to create a character in ChatGPT and Gemini to use it in all the images you make with artificial intelligence

Image and sound

Image and sound: the best offers. The best 2024 deals in televisions, projectors, speakers and headphones. Here is the Xataka selection: 01. Televisions, 02. Speakers, 03. Headphones, 04. Projectors, 05. Smart TV boxes, 06. Cameras. Some of the links published here are affiliate links. The products mentioned have been independently selected by the editorial team in search of the best deals, except those marked as sponsored by brands.

01. Televisions. LED, OLED, QLED, HDR and their multiple variants: televisions of all ranges with excellent value for money. 
Television buying guides: The best value-for-money televisions: which one to buy, and seven recommended 4K smart TVs. Televisions for PC gaming: which ones to buy, with tips and recommendations. The best 4K OLED TVs: which one to buy, with tips and recommendations. The 4K televisions with the best price-performance ratio: which one to buy, with tips and recommendations.

02. Speakers. The best portable Bluetooth speakers and smart speaker models with a good price-performance ratio that are now on sale. Speaker buying guides: The best Bluetooth speakers (2024): which one to buy, and 13 recommended models.

03. Headphones. The best wired and wireless headphones to suit your needs and budget. Headphone buying guides: For me, the best wireless headset and the one with the best value for money is… The best truly wireless (TWS) headphones with noise cancellation: which one to buy, and 11 recommended models from 40 euros. Which wireless headphones to buy for running: a buying guide with recommendations and nine featured models from 20 euros.

04. Projectors. Enjoy cinema at home or play on the big screen: LCD, LCoS and DLP projectors, these are the featured models on offer. Projector buying guides: The best projectors to set up your own home theater: features and recommended models for indoors and outdoors. The best projectors for a home theater: which ones to buy, with tips and recommendations.

05. Smart TV boxes. Turn your TV into a smart TV and enjoy all your streaming content. Here is Xataka's selection of smart TV boxes on offer. Smart TV box buying guides: The best smart TV boxes: which one to buy, and 11 recommended set-top boxes from 27 to 214 euros.

06. Cameras. Capture the moment, record, stream or photograph anything with these video and SLR cameras. 
Camera buying guides: Which beginner camera to buy: these are the models and recommendations from Xataka's editors. All Xataka buying guides. The news “Image and sound” was originally published in Xataka by admin.

We have been talking about railguns for years without seeing their real damage. Japan just showed an image that says it all

Japan is going through one of the most crucial transformations in recent decades: its rearmament. It is its most assertive defense policy since World War II, and the Ministry of Defense justifies it on the grounds that we are in the “most severe and complex phase of the last 80 years.” And nothing exemplifies Japanese rearmament better than a cannon that, until not long ago, was science fiction material: the electromagnetic railgun.

Reconfiguration. Starting in the 1990s, Japan stopped investing significantly in its Self-Defense Forces. The burst of the economic bubble, the “lost decade” and demographic difficulties meant that the military spending cap of 1% of GDP adopted after the 1947 Constitution was maintained. In 2023 things changed: as a result of growing geopolitical complexity, Japan decided to invest 2% of its GDP in rearmament. In figures, we are talking about some 271 billion euros through 2027, although the target has recently been brought forward to March 2026. This reconfiguration will manifest itself in four dimensions: the aforementioned increase in military spending, the restructuring of the Self-Defense Forces, a relaxation of restrictions on arms exports and the expansion of long-range offensive capabilities. That is where the railgun comes into play.

Electromagnetic cannon. Like a gunpowder gun, it fires a projectile that gains speed as it travels along a barrel; however, it uses electricity instead of gunpowder. Two metal rails form a circuit that, when closed by the projectile, generates an intense magnetic field. This produces an enormous force that propels the projectile at very high speed, allowing hypersonic, precise, long-range shots. That speed would let the projectile fly on a near-straight path even in the most unfavorable weather conditions. 
Japan has been investing in this field since the mid-2010s, and a few weeks ago the Japanese Acquisition, Technology and Logistics Agency (ATLA) carried out the first documented test firing of a naval electromagnetic cannon at a real ship. Mounted on the JS Asuka test ship, the prototype is a cannon 40 millimeters in caliber and six meters long. It requires four huge energy containers to power the weapon, and the projectiles used were small darts of about 320 grams, stabilized by fins and without an explosive warhead. No explosion is needed: at 2,300 meters per second, the kinetic energy is comparable to that of a 1,000-kilo car crashing into something at 140 km/h.

Success. During the tests, the system set a record by firing projectiles at 2,300 meters per second, a speed of Mach 6-7. In addition, they pushed the barrel's useful life to the limit: the estimate, established in earlier phases of the research, was about 120 shots, but they managed more than 200 without the system failing. ATLA had conducted open-sea tests before, but never against a real target. And although it had already said those tests were a success, it has now shared photographs showing the holes left by the projectiles. The target ship was moving, but thanks to the enormous speed and stability of the projectiles, the entry holes give an almost perfect view of the “cross” left by a projectile passing through the hull.

Challenges. Understanding how a railgun works is easy; building one is extremely complex. It is a brutal technical challenge for several reasons: The stability of the barrel: the system generates tremendous heat, so dissipation systems must be effective enough not to compromise the barrel's integrity. 
Wear and tear affects not only the speed and accuracy of the projectile, but can also cause accidents on the ship itself. The energy: since it requires so much electricity to operate, the system needs storage large enough to deliver the necessary power throughout intense firing sessions. Miniaturization: these cannons are extremely large and, although ATLA has managed to shrink the system considerably, mounting them on ships is not easy because of both the length of the barrel and the set of batteries required. Integrating a railgun into a ship is hard.

Perspectives. ATLA is currently working on evolving the system, which may be closer to deployment than seemed likely a few months ago; further miniaturization would allow it to be mounted on other types of vehicles, as well as in ground defense lines. Beyond its use as a weapon, the agency has noted that electromagnetic acceleration could be applied to other areas, for example the “mass drivers” that would launch materials electromagnetically for space transportation. The problem is that this adds other challenges, such as the need to calculate trajectories with extreme precision and to develop recovery methods for the launched goods.

USA and China. Although this may look like just another weapons test, what Japan has achieved is a milestone. After fifteen years of research and some 500 million dollars invested in the technology, the United States abandoned the development of electromagnetic railguns in 2021 (although it is now working on larger versions). Japan has persevered, and its tests show the system can be viable in a real-world context. The other country that has continued to develop this technology is China. Beijing keeps it more secret, but we have already seen images of Chinese ships with an electromagnetic cannon and power containers on the bow. 
That it is precisely these two countries taking steps forward in this technology is no coincidence. Both are engrossed in a technological war, but also in an escalation of military tension that has been going on for months and that is leading each to accuse the other of invading its territory. Images | ATLA, Japan Maritime Self-Defense Force. In Xataka | Taiwan has had an idea in case Beijing invades: surprise China from underground
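The article's kinetic-energy comparison (a 320-gram projectile at 2,300 m/s versus a 1,000-kilo car at 140 km/h) can be sanity-checked with a few lines of Python; the formula is the classical one and the figures come from the article, so the code is only an arithmetic check:

```python
def kinetic_energy_joules(mass_kg: float, speed_m_s: float) -> float:
    """Classical kinetic energy: E = 1/2 * m * v**2."""
    return 0.5 * mass_kg * speed_m_s ** 2


# Railgun projectile: 320 grams at 2,300 m/s
projectile = kinetic_energy_joules(0.320, 2300.0)
# Car: 1,000 kg at 140 km/h (divide by 3.6 to convert to m/s)
car = kinetic_energy_joules(1000.0, 140.0 / 3.6)

print(f"projectile: {projectile / 1e6:.2f} MJ")  # 0.85 MJ
print(f"car:        {car / 1e6:.2f} MJ")         # 0.76 MJ
```

The two figures come out within roughly 12% of each other, consistent with the article's claim that the energies are comparable.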

From image bank to Adobe rival

Tomorrow, November 20, we hold our Xataka NordVPN Awards 2025, which you can follow from our website. At the gala we will reward the most important devices and technologies of the year and, for the first time, present the special Xataka Award for the best Spanish technology company. The winner is Freepik, the Malaga company that has gone from being a bank of images and graphic resources to competing directly with Adobe and Figma. Omar Pera, its CPO, will accompany us during the gala.

The pivotal moment: when DALL-E 2 changed everything. Founded in 2010 in Malaga as a simple search engine for free images, Freepik served millions of users with a simple business model: easy access to graphic resources. “We started the company because we were making web pages and it took us a long time to find the image we wanted,” its CEO, Joaquín Cuenca, explained to us in an interview at the beginning of the year. The launch of DALL-E 2 in 2022 was the turning point. Cuenca's first reaction was visceral: “This makes what we are doing obsolete.” But he quickly came to a more important conclusion: “I saw it as unstoppable.” There was no debate about whether generative AI would transform their industry, only about how to respond. The company went from a marketplace of static content to developing its own generation, editing and video production tools. The impact has been dramatic: “For almost 50% of new subscribers, their first action is to do something with AI. A year and three months ago it was 0%,” Cuenca told us in February. “And more than 50% of existing subscribers are already using AI on Freepik.” Today Freepik serves more than 150 million monthly users with a complete ecosystem of tools (Freepik, Flaticon, Slidesgo, Wepik and Magnific AI) and already exceeds 800,000 paid subscriptions.

Own technology to avoid lawsuits. Freepik developed F Lite, its generative model trained exclusively on 80 million of its own commercially licensed images. 
It is a strategic bet to avoid the lawsuits that companies like Midjourney and OpenAI are having to face: solid legal ground as a competitive advantage. Magnific, a Murcia startup acquired in 2024, went viral for its ability to increase the resolution of images without distorting details. Mystic, its star model, competes with Ideogram and Midjourney. Regarding the latter, Cuenca is clear: “We have a lot of respect for Midjourney,” but he considers Freepik superior in prompt adherence because “Midjourney takes a lot of artistic license.” They are also clear about their position on OpenAI. For Cuenca, DALL-E “was a distraction for OpenAI”: “If you have the chance to build the best language model in the world in your company, everything else is a distraction. With that potential, starting to make images makes no sense.” It is precisely that distraction of the giants that opened a window of opportunity.

The stone in the shoe. Freepik's bet is to build an all-in-one platform instead of forcing users to juggle different applications: generation with several models, video with Google Veo 3 (they were the first in the world to integrate Veo), complete editing, audio, conversion to SVG… “AI is very well aligned with what we want to do, and in fact helps us expand the catalog of things we can solve for the user,” explains Cuenca. They no longer compete so much with Getty or Shutterstock as with Adobe and company. Video is the next frontier. “Video is now where image was in 2023,” Cuenca told us. It still requires many iterations to achieve the desired result, but the room for improvement is enormous.

Democratization from Malaga, without complexes. Freepik believes AI democratizes the creation of visual content, allowing businesses of any size to produce more engaging material without big budgets or specialized teams. And it does so from Malaga, without complexes. 
Although it has an office in San Francisco, its center of gravity remains in Andalusia. “From Malaga to Madrid is nothing, it’s two hours or so,” explains Cuenca, who considers that the real problem of Spanish entrepreneurship is not location: “We lack people who have good ideas; that is the main brake.” Freepik competes globally from there. It is one of the great successes of Spanish technology, and without a doubt the greatest national reference in generative AI. In just two years, Freepik completely reinvented itself: from distributor to developer, from marketplace to ecosystem, from freemium service to an essential subscription for hundreds of thousands of creative professionals. A transformation that makes it worthy of the first special Xataka Award for the best Spanish technology company. Featured image | Xataka

He won an art contest with an image made with Midjourney. Now he is fighting in court to be recognized as an artist

It seems like an eternity has passed, but back in 2022 AI image generation tools were already achieving remarkably convincing results. Just ask the participants in the Colorado art contest who saw an image created with Midjourney take first prize in the ‘digital art’ category. The controversy was served: can we call something an AI produces art? Its author is very clear about it and has gone to court to defend it. What has happened? Jason Allen, the author of the image (or rather, of the prompt), tried to register ‘Théâtre D’opéra Spatial’ a month after winning the contest, but was not allowed to. According to the US Copyright Office, the image contains “more than a minimal amount of artificial intelligence-generated content.” Allen began a legal battle to get the image registered. According to 404media, last August his lawyers filed a request in court arguing that the image is a work of art and Allen an artist. The prompt. Although the image was created by software, Allen maintains that crafting the prompt is an artistic process in itself and that he should therefore be considered an artist. In the text presented to the court, his lawyers argue that “he created the image by providing hundreds of iterative text prompts (…) to help express his intellectual vision.” For the copyright office, however, merely providing the instructions was not enough, and it repeatedly rejected his request. Art or not. The news unleashed a wave of criticism on social networks and revived the debate over whether images generated with AI should be considered art. The controversy has polarized the artistic and technological communities into two marked and opposing positions: on one side, those who consider that it cannot be art because it lacks human intentionality; on the other, those who defend AI as one more tool with which the artist expresses himself, just like a brush, a graphics tablet or a camera. It’s not the first time.
Art has faced debates like this before, and there is a very clear example. What is happening with AI happened with photography in the 19th century: it was rejected by defenders of drawing and painting, who saw their jobs threatened by the new technology. More than a century later, photography is considered art and fills galleries and museums. And most importantly, painting still exists. The intention. The debate arises when mechanical means come into play. In the case of photography it was the camera; with AI it is software, very complex software, but software nonetheless. If we accept that photography, digital illustration or 3D modeling are art, AI can be too. The key difference is the intention behind it. Typing any old prompt and sticking with the first result is not the same as having a clear idea, a story to tell, a feeling to express, and searching for the result that captures it as well as possible. Of course, it would only be fair for those works to compete in their own category. The problem. AI has turned the art community against it from the beginning. Image generators, especially the first ones, were trained on countless works of art by authors who received nothing in return. Some authors began to “poison” their works to drive AI models crazy, and there are several initiatives artists can join to prevent their work from ending up training AI. Image | Jason Allen and Midjourney In Xataka | Either you pay or we will use your works to train AI: the threat of hackers to an artists’ website

Improve your Gemini images: 6 official tricks to get the best results with Google's artificial intelligence

We are going to give you some tricks and tips to improve the images you create in Gemini, so that you get better results. They are tricks you can also apply to photo editing in Google's artificial intelligence, and in fact they are official and recommended by the company. Most of the tricks involve composing an image with two or three prompts, so that instead of trying to do everything at once you divide the process into several steps. It is worth it, because you can improve the results and combine them. Make precise transformations. Gemini now allows you to edit photos, and this also applies to AI-generated creations. You can, for example, create a realistic image, or one in whatever style you want, and then ask it to change very specific elements while leaving the rest the same. Prompt 1: Create a high-quality photograph of a living room with a new dark wood cabinet on which there is a sound system. Next to it there is a very tall shelf with many CDs. On top of the cabinet there is a silver-gray stereo. Prompt 2: Change the color of the stereo to metallic black. Reuse the appearance of a character. A good trick is to design the character before creating a composition. That is, first write a prompt describing how you want the character to look and make the appropriate changes, and then ask Gemini to show that same character in another situation, composing the scene you want. Prompt 1: Create an illustration of a baby gray kitten with white stripes that has very large eyes and a big smile. Prompt 2: Now show me this same cat riding a bicycle and pedaling on a sandy beach. Fuse several ideas into one image. In addition to reusing elements of one image in others, you can create two different images and then a third that combines them. For example, you can first create a character and then a scenario, and once both elements are to your liking, merge them.
Prompt 1: Create a realistic but cartoon-style image of a baby gray kitten with white stripes that has very large eyes and a big smile. Prompt 2: Create a realistic but cartoon-style image of a cat playroom full of toys. Prompt 3: Now make the cat from the first image play inside the playroom from the second image. Although it is not strictly necessary, it is recommended to define the same visual style in the first two images so that they combine better. Change the image style. This is a classic you can do when editing your photos in Gemini, and it also applies to AI-generated images. You can create a composition with a certain style and then decide to change it. The image will be the same, but with a different style: Prompt 1: A photorealistic image of a plane taking off at the airport. Prompt 2: Apply a pencil-drawing style to this image. Make what happens in an image evolve. You can also make an image evolve into a different one: the same scene, but applying its logic to generate a more complex moment. For example, you can create a scene and then ask Gemini to draw what would happen if its elements interacted in a different way. Prompt 1: Draw the gray striped cat from before walking down the street carrying a Chinese vase with both hands above its head. Prompt 2: Generate an image showing what would happen if this character trips. Notice that in the initial prompt we reused a design we had already mentioned, that of the cat. This is so you know that within the same chat, even after creating several images, you can reference a specific one to reuse it. In Xataka Basics | How to use Gemini to summarize YouTube videos or ask questions about its content on Android

The amazing VLT image that is already part of astronomy history

An international team of astronomers has achieved the equivalent of a cosmic ultrasound: they have obtained the first image of a giant gaseous planet in the middle of its formation process, embedded in a dust and gas disc with multiple rings. The planet, called WISPIT 2b, has overnight become the perfect laboratory for understanding how planets are born, including, perhaps, our own Jupiter. A photograph captured at the perfect moment. The discovery, published in two simultaneous articles in The Astrophysical Journal Letters, not only shows us the planet WISPIT 2b, a giant of about five Jupiter masses, but has caught it red-handed. It was detected emitting the characteristic glow of overheated hydrogen, definitive proof that it is actively drawing material from the surrounding disc to keep growing. And as if that were not enough, there are indications of an even more massive planet. A cosmic vinyl record. The first piece of the puzzle comes from the SPHERE instrument on the Very Large Telescope (VLT) in Chile, which observed the star WISPIT 2, a young solar analogue (just five million years old) located about 133 light-years away. What they found was spectacular: an extensive protoplanetary disc of 380 astronomical units (AU) structured in four concentric rings, like the grooves of a vinyl record. Planetary formation theory predicts that massive planets, as they orbit their star, “sweep clean” their gravitational path, creating gaps in the gas and dust disc. And precisely in the most prominent gap, between the second and third rings, about 57 AU from its star, there was a point of light: WISPIT 2b. A discovery that had to be confirmed. To make sure it was not a distant background star, the team made observations at four different moments over almost two years. The results were conclusive: the object moved together with the star, following a Keplerian orbit consistent with its position in the disc gap.
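To put that distance in perspective (a back-of-the-envelope calculation of ours, not a figure from the papers), Kepler's third law lets you estimate how long one orbit at 57 AU takes around a roughly Sun-like star:

```python
import math

# Kepler's third law for a star of roughly one solar mass:
# P^2 = a^3 / M, with P in years, a in astronomical units (AU)
# and M in solar masses.
def orbital_period_years(a_au: float, stellar_mass_msun: float = 1.0) -> float:
    """Orbital period in years for a circular orbit of semi-major axis a_au."""
    return math.sqrt(a_au ** 3 / stellar_mass_msun)

# WISPIT 2b sits roughly 57 AU from its approximately Sun-like star.
print(round(orbital_period_years(57)))  # one orbit takes about 430 years
```

At that distance a single orbit takes around four centuries, which helps explain why confirming common motion with the star meant comparing positions over nearly two years rather than watching the planet trace out its orbit.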
The analysis of its brightness at different infrared wavelengths (H and Ks bands) allowed its mass to be estimated at approximately 4.9 Jupiter masses. This is the first unequivocal detection of a planet in a disc with multiple rings, directly confirming the interaction between the planet and the disc that forms it. A second definitive test. If the first investigation was the photo of the crime, the second, led by Laird M. Close, is the recorded confession. Using the advanced MagAO-X adaptive optics system on the Magellan telescope, they observed the system at a very specific wavelength: that of H-alpha (656.3 nm). This emission is an unmistakable signature of accretion, the process by which a planet draws gas from its surroundings. When hydrogen gas falls onto the planet, it is compressed and heated to thousands of degrees, emitting a characteristic reddish glow. And WISPIT 2b was shining intensely in H-alpha. We already know its growth rate. This detection not only confirms beyond doubt that WISPIT 2b is a growing protoplanet, but also allowed the team to calculate the rate at which it is gathering matter: 2.25 × 10⁻¹² solar masses per year. It is a slow but constant pace, which offers us a unique window onto the final stages of the formation of a gas giant. This finding makes WISPIT 2b one of the very few protoplanets (along with the celebrated PDS 70b and c) for which there is direct evidence of accretion, that is, of gradual growth by capturing external matter. A second planet and a mystery of inclination. The surprises do not end there. The high-resolution data from the MagAO-X team revealed a second candidate object much closer to the star, at about 15 AU. Baptized CC1 (Close Companion 1), this object is extremely red and its brightness is consistent with a planet of about nine Jupiter masses. In addition, the researchers have noted a curious statistical coincidence.
Counting WISPIT 2b, there are now four systems with protoplanets detected through their H-alpha emission. Surprisingly, they all have a very similar inclination with respect to our line of sight (between 37° and 52°). The probability of this occurring by chance is only 1% (a 2.6σ significance). Why WISPIT 2b matters so much. There are several reasons. The first is that it visually demonstrates that giant planets can form at great distances from their star and that they are responsible for sculpting the gaps in protoplanetary discs. But the most interesting thing is that it has become a unique laboratory: being such a “clean” and well-defined system, it allows the planet-disc interaction to be studied in unprecedented detail. An analogue of our past: the star is similar to our Sun in its childhood, so studying WISPIT 2 is like looking at a snapshot of how our own Solar System could have formed. That is why the next step will be to point the James Webb Space Telescope and the ALMA Observatory at WISPIT 2b. With them, astronomers will be able to analyze the planet’s atmosphere and probe the chemical secrets it may hide. In Xataka | When the first human being stepped on the Moon, we all believed that he had abandoned “Earth.” We were wrong
