Renewables have curbed China's emissions ahead of schedule

China has achieved what until recently seemed unattainable: reducing its CO₂ emissions while continuing to expand its energy capacity. An unexpected turn for the world's biggest emitter.

In short. A Carbon Brief report has revealed that China's CO₂ emissions fell 1.6% year-on-year in the first quarter of 2025 and 1% over the last 12 months.

Investment in renewables. China expected to start reducing its emissions in 2030, but the strong growth of renewables (solar, wind and nuclear) is moving up the timetable. And most interesting of all: this drop in emissions is not due to a crisis, but to a real transformation of the energy system. For the first time, the growth of clean energy has outpaced the growing demand for electricity, reducing China's use of fossil fuels.

In data. To gauge the pace of change, Carbon Brief has highlighted that in March of this year alone 23 GW of solar and 13 GW of wind capacity were installed, breaking all previous records. The new installed capacity not only covered growing energy demand, but also allowed a reduction in the use of coal.

Other factors in the early fall. The earlier-than-planned drop in CO₂ has not been due solely to an expansion of renewables focused on domestic production.

Can this be only temporary? As the data indicate, emissions are only 1% below the peak, so any uptick in economic activity or change in energy policy could reverse the situation. In fact, there are precedents of temporary drops in 2009, 2012, 2015 and 2022, all driven by crises or economic slowdowns. In addition, the Asian country maintains a strong investment in coal plants for its "electro-state." The big question now is whether this decline can be sustained and become a structural trend, or whether it will be just a respite before new increases.

Uncertainties. Next June a new renewable energy pricing policy enters into force. With this rule, the guaranteed rates that were linked to coal disappear, and new projects must negotiate their contracts directly in the market.
In the short term, a surge in installation figures is expected, but in the long term the measure could generate instability if there are no clear incentives to maintain the pace.

Forecasts. According to Carbon Brief's analysis, the long-term trajectory will depend on two key factors: the next five-year plan that China is drafting and the economic response to the trade war, now in a momentary truce. In short, the challenge is political and structural. If the decisions taken in the coming years manage to consolidate this trend, China will not only fulfill its climate commitments, but also lead the global energy transformation.

Image | Unsplash and Unsplash

In Xataka | This researcher is convinced he knows the measure with the greatest transformative potential today: making cities a good place to live

There is no miracle recipe that gives us a "natural Ozempic." But science has some "tricks" that can help us

The popularity of semaglutide (the active substance in the well-known Ozempic) and similar compounds has led many to search for alternatives that would replicate its effects without having to go through the pharmacy. It is not surprising: like any other drug, Ozempic and similar medications have side effects, not to mention their cost or the fact that they can only be acquired with a medical prescription. Having more alternatives within reach can be tempting.

The search for a "natural Ozempic," however, has more shadows than lights, more pseudoscience than science. That does not mean the door is completely closed.

One of the compounds on which this label has often been hung is berberine. Although some studies have detected pharmacological effects of this compound, today we lack solid evidence that it helps us lose weight. What we do have is a series of risks associated with its consumption. Another regular candidate for the role of "natural Ozempic" is green tea. Although this drink can give us some nutrients and help us stay hydrated, its effects on our weight or on our glycemic levels are limited at best.

In an article for The Conversation, Mary J. Scourboutakos, an expert at the University of Toronto, discusses an "Ozempic of nature." For her, the key is not in a compound that certain foods can give us, but in focusing on those that in one way or another stimulate the production of the hormones that these treatments "imitate."

To understand it, we must remember how these compounds work. Semaglutide is a glucagon-like peptide-1 (GLP-1) receptor agonist, that is, a compound that functions like this hormone, which is key to our digestive processes. GLP-1 is a hormone that our stomach secretes when we eat with the aim of transmitting a double message. On the one hand, this hormone signals the pancreas to begin secreting insulin to control blood sugar levels.
These types of medications are, in origin, drugs against diabetes that stimulate insulin production for glycemic control in people who have trouble exercising that control. On the other hand, this peptide also sends a message to the brain: satiety. This is the reason why these compounds also make those who take them lose weight, since they blunt hunger. Other drugs, such as Zepbound, work similarly. The formula from the American company Eli Lilly is based on tirzepatide, which acts as an analogue not only of GLP-1 but also of GIP (gastric inhibitory polypeptide).

Not just what, but also how. As Scourboutakos explains, our diet can help stimulate the production of the GLP-1 hormone. It does so not only through nutrients but also through our habits: when and how we eat, in addition to what.

With respect to nutrients, the expert highlights two, the first, and perhaps the most important, being fiber. Fiber is key because it serves as food for our gut microbiome, which in turn would be responsible for secreting compounds capable of stimulating hormone production. A study published in 2018 in the journal Science indicated that fiber could help feed the bacteria in charge of secreting short-chain fatty acids, which in this case would be responsible for encouraging the secretion of GLP-1. Recently, another study, published in Nature Microbiology, obtained similar results. The team responsible for this analysis found that a greater presence of the bacterium Bacteroides vulgatus and of its metabolite, vitamin B5 or pantothenic acid, was able to activate the secretion of glucagon-like peptide-1.

Returning to Scourboutakos's article, she highlights another key nutrient: monounsaturated fats, which we can find, for example, in olive oil.
Again, some studies support this notion, such as one published in 1999 in the journal The American Journal of Clinical Nutrition, which found that olive oil stimulated the secretion of both GLP-1 and GIP better than butter did. Regarding the "how," Scourboutakos cites, for example, a study in the journal Diabetologia, from which she concludes that consuming protein-rich foods before carbohydrate-rich ones (such as eating a fish dish and then a rice dish) would give rise to a greater secretion of GLP-1. Other relevant factors mentioned by Scourboutakos include the time at which we consume food and even whether we chew it or consume it in liquefied form.

Our diet affects our health and our well-being in very diverse ways. It may seem anticlimactic, but the closest thing we have to a "natural Ozempic" is a varied diet, rich in fiber and in fats such as olive oil, whose positive properties have been well known for a long time. A diet of this type may not be able to cure diseases, but it is a necessary complement to many treatments and an important preventive tool for avoiding problems in the future.

In Xataka | Ozempic is supposed to increase the risk of erectile dysfunction. Thousands of people think exactly the opposite

Image | Chemist4u / She Olsson

AI Mode, Project Beam, Veo 3, Project Aura, Jules, glasses and everything presented at a Google I/O 2025 loaded with ambition

Google has brought out the heavy artillery in the war to lead the development (and the business) of artificial intelligence. The American giant has done so at its annual developer conference, which this year was more than anything a demonstration of strength, a showcase where it presented some of its most innovative advances. Below, we review all the products Google presented this Tuesday, May 20. If you want to dig deeper into any of them, next to each name you will find a link with all the information.

Gemini Ultra, Veo 3 and Imagen 4: a subscription for those who want everything. Gemini Ultra is Google's new most complete artificial intelligence subscription. It costs $249.99 per month and, for now, is only available in the United States. It includes access to tools such as the Veo 3 video generator, the Flow editing app and the Deep Think mode of Gemini 2.5 Pro, which has not yet been officially released. Subscribers also get improvements in NotebookLM and Whisk, up to 30 TB of cloud storage, YouTube Premium and access to the Gemini chatbot directly from Chrome. Some of the most advanced functions are powered by the technology of Project Mariner, which gives Gemini agentic capabilities.

Deep Think: an AI that takes its time to respond better. Deep Think is a new reasoning mode of the Gemini 2.5 Pro model that allows the AI to consider several possible answers before deciding on one. It seeks to improve accuracy in complex tasks and advanced benchmarks. It is currently only available to a small test group through Gemini's API. Google states that it is carrying out safety evaluations before launching it publicly.

AI Mode and Search Live: this is how Google wants to redesign search. AI Mode is a new experimental feature for Google Search that lets you ask complex questions, with multiple elements, from an AI-based interface.
It launches this week in the United States and is able to handle sports and financial data and offer options such as virtual try-on. Over the summer Search Live will arrive, a feature that will allow asking questions based on what the phone's camera detects in real time.

Gemini in Chrome and detection of synthetic content. Gemini is integrated into Chrome as a browsing assistant to help understand the content of web pages and execute tasks. Gmail incorporates personalized smart replies and a new function to clean up the inbox. Google has launched SynthID Detector, a verification system that uses invisible watermarks to identify AI-generated content.

Beam: 3D video calls with simultaneous translation. Beam, formerly known as Project Starline, turns video calls into almost face-to-face conversations thanks to a six-camera array and a light field display. It offers millimeter-accurate head tracking and video at 60 frames per second. It includes real-time translation in Google Meet, preserving the voice, tone and expressions of the original speaker.

Jules: Google's agent for programming without touching the keyboard. Jules is Google's new assisted programming agent, designed to compete with platforms such as Cursor, Windsurf or Codex. It is able to generate tests, update dependencies, write changelogs in audio and fix bugs while the user keeps working on other things. It works without plugins or additional installs and is available in public beta for US users.

Android, also present at the event. Android premieres new tools for finding lost phones and objects, as well as a new design language called Material 3 Expressive. Google showed its new glasses live, developed with Xreal and based on Android XR. The demo included voice interaction with Gemini, simultaneous translation and overlays of real-time information. The project is called Project Aura and seeks to bring Android to the XR world with a practical, no-frills approach.
Images | Google

In Xataka | Smart glasses don't have to be bulky contraptions. Google has it clearer than ever

The next milestone for video-generating AIs was to make videos with audio. Google has achieved it with Veo 3

A great day for Google. We are in the middle of I/O 2025, the American company's most important software event. Interestingly, Android is one of the names heard least: this year the only thing that matters is AI. And, related to AI, Google has been working on a model that can generate video from text. That model is Veo, and in its new update it is able to generate those videos... with audio.

Veo 3. Google has three tiers for its generative video artificial intelligence: Veo 1, Veo 2 and the new Veo 3. Yes, much simpler names than the ones we are used to. Veo 3 is the most powerful model, capable of generating 4K video with advanced cinematic understanding. At this Google I/O it gains a key capability: generating video with audio.

From ambient sounds to dialogue. Google is going all in with Veo 3. This model not only offers higher quality than Veo 2: it is the only Google model capable of generating videos with audio. For example, if in the prompt we specify that we want an urban scene, it will be able to recreate some of the corresponding sounds (people walking, traffic, bustle, etc.). Google goes further and promises it can even create dialogue between characters. This was one of the last barriers keeping text-to-video from becoming practically a science fiction feature. With Veo 3 it will be possible to do all of it.

Improvements in Veo 2. Although Veo 3 is the absolute protagonist, Veo 2 is updated with new functions. Among them, it premieres much more precise camera controls for traveling and zoom movements, outpainting options to expand the framing (to take a video from vertical to horizontal or vice versa), as well as the possibility of adding or deleting elements of the video.

Flow arrives. Alongside Veo, Imagen and Gemini comes Flow, Google's new tool for creating cinematic videos with AI.
It is a new work environment to give free rein to our creations with Veo: a video editor in which we can create with both Imagen and Veo. Besides functioning as an editor, it will have a certain social function: through Flow we can access Flow TV, a feed where we will see content, channels and creators who are generating videos with Veo.

Ahead of OpenAI. ChatGPT's creators surprised the world with Sora, their artificial intelligence for generating video from a prompt. The problem? At least at the time we write these lines, it is not capable of generating audio. In December 2024 Google already overtook Sora by showing the capabilities of Veo 2, which quadrupled the video output resolution of the OpenAI model. It also allowed creating longer videos, with a spectacular "understanding" of physics, something that makes the difference when creating natural-looking video.

Its rivals. Rival video generators such as Runway, Luma AI or Pika Labs allow adding external audio, but in no case do they generate sound when delivering the final video. Google has just slammed its fist on the table with Veo 3, holding on to first place in the race and further complicating things for giants like OpenAI. For the moment, these functions will be available to Gemini Ultra subscribers in the United States through the Gemini and Flow apps, as well as to companies through Vertex AI.

Image | Google

In Xataka | 14 tools to create images for free

Google already offered SynthID to mark AI-generated content as such. Now it will help us identify it

When Pope Francis appeared wearing a striking white puffer coat, many probably believed that the image was real. It was not, but that image, like the one of Donald Trump's arrest, made one thing clear: it was becoming increasingly difficult for us to tell images created by an AI from those that are not. How to solve it?

Watermarks. Among the possible solutions there was a fairly obvious one: identifying AI images as generated by AI. That is: every time someone asked ChatGPT for an image in Studio Ghibli style, or asked Grok for one of Bill Gates with a gun, those AI models should include certain metadata in the image file. Basically it would be like "putting a seal" on these images so they can be identified in case of doubt.

Enter SynthID. There are several efforts in that direction, and among them is SynthID. This watermarking technology was presented in 2023, and a year later it was offered as a free tool so that anyone could implement it.

Fighting deepfakes. As the company notes, since its launch in 2023 SynthID has placed watermarks on more than 10 billion images, videos, audio files and texts. These can thus be identified as AI-generated, reducing the chances of misinformation and misattribution. In addition, Google points out that content generated by Veo 3, Imagen 4 and Lyria 2 will continue to carry SynthID watermarks.

The AI content detector arrives. The company announced during the Google I/O 2025 event the launch of SynthID Detector, a verification portal to help users identify AI-generated content. Its operation is simple: just upload any content (text, image, video, audio), and SynthID Detector will identify whether the entire file, or only part of it, contains SynthID.

Preparing for the good and the bad of AI.
This type of tool can be especially useful for artists and creators, who can thus defend their work as legitimate, but above all it is a useful option that helps protect us from deepfakes and misinformation attempts.

But. As we mentioned months ago, SynthID is a promising proposal with many striking features, but it has a problem: (for the moment) it is not a universal standard. Here Google must not only use it on its own platforms, but also reach a consensus with the rest of the big tech companies so that there is a single open and interoperable system that solves the watermarking problem universally.

In Xataka | Ilya Sutskever's new company has a clear objective: to create a "nuclear-grade" safe superintelligence

An unscripted live demo and a bet that aims far

Google's story with glasses is not over. On the contrary. After years of silence, the company has put them on stage again. And it has done so with a project that recovers a familiar name: Project Aura. It is not the first time we have heard of it. It has been around since the days of Google Glass, which ended up in a dark corner of technological history. But this time, the approach is very different.

During Google I/O 2025, where we also saw proposals such as AI Mode and Beam, the company showed live the state of development of its new mixed reality glasses. A proposal born in collaboration with Xreal and that, according to those responsible for the project, wants to take the Android experience to the XR universe with naturalness, context and real-time responses.

A live demo, without tricks or touch-ups. It all started when Shahram Izadi, head of the device area, threw a question at the audience, recorded in a video posted on YouTube: "Who's up for seeing an early demonstration of the Android XR glasses?" The answer came from backstage. Nishtha Bhatia, part of the team, appeared on the scene remotely and began to show the real operation of the glasses. The first thing we saw was an interface superimposed in real time over the environment. Through the integrated camera, the glasses showed what she had in front of her while Bhatia received messages, played music, looked up directions or interacted with Gemini, the conversational assistant, all through voice commands. Without taking out the phone. Without tapping anything.

In one of the most striking moments, the demo showed how she could ask which band was behind a picture she was looking at. Gemini responded, although with the occasional delay attributable to connection problems. She also asked for a song by the band to be played on YouTube Music, which happened without manual intervention. Everything was recorded in the image shared in real time.

Live translation and a small failure on stage.
The final test consisted of a conversation between Izadi and Bhatia in different languages. She spoke in Hindi, he in Farsi. The glasses, through Gemini, offered simultaneous translation with voice interpretation. The system worked correctly for a few seconds, but those responsible decided to interrupt the demo when they detected a failure. Despite the stumble, the message was clear: Google wants to play again in the field of connected glasses, this time with a more mature foundation, supported by its service ecosystem, by Gemini and by collaborations with key players in the XR world. The difference, at least for now, is in the approach: practical, real-time experiences, without frills or long-term promises.

Images | Google

In Xataka | Google already has an agentic AI capable of programming for you: it's called Jules and it seeks to stand up to OpenAI

Europe is closing its doors

The Covid-19 pandemic generated a great migration of US digital nomads towards countries with more lax mobility measures and a much more affordable cost of living. Many European countries, including Spain, received them with open arms. Them and their investments. However, that is long gone, and the geopolitical scenario has changed radically. What has not changed is Americans' desire to migrate to Europe in search of stability, security and a better quality of life than in the US. The difference is that, now, Europe is closing its doors. So much so that even US citizens will need an entry authorization (ETIAS) to visit Europe.

The American exodus. Americans' interest in obtaining residence in European countries and in the United Kingdom has been increasing. Last year Ireland received 31,825 citizenship applications from Americans. In February of this year alone, 3,692 citizenship applications from the US had already been submitted. In statements to Euronews, Arielle Tucker, founder of the migration advisory platform Connected Financial Planning, said that after Donald Trump's arrival in the presidency, "many people feel that the longer they remain in the United States, the more insecure they are about what their quality of life will be and how that could affect their financial well-being." Kelly Cordes, founder of Irish Citizenship Consultants, told Bloomberg that she had also noticed a notable increase in requests from Americans for Irish citizenship. "It is definitely different from everything we have seen. People are very worried; they have an urgency to obtain citizenship." From an average of 10 weekly citizenship applications processed by this agency in 2024, it has gone to between 20 and 25 applications a week.

Europe and the United Kingdom close their doors. Despite the increase in residence and citizenship applications, the authorities of Europe and the United Kingdom are responding with stricter regulations.
Even so, 6,100 US citizens applied for British citizenship, according to data from the UK's interior ministry, taking advantage of their British roots. For its part, Italy, which previously allowed those who demonstrated family ties to obtain residence with relative ease, has carried out an emergency reform to avoid the avalanche of applications and now requires more rigorous evidence and longer processes to protect the interests of the local population.

Golden passports. Another option used by Americans with greater economic resources is to obtain the so-called "golden passports" or golden visas. These programs allow obtaining residence or citizenship in exchange for significant investments, generally in real estate or the establishment of local businesses. Portugal and Spain were very popular destinations among Americans looking for this type of visa, but the situation has changed in recent months due to the negative impact on the real estate market and the local economy.

New strategic opportunities. Faced with this panorama, some European countries are taking advantage of this migratory current of Americans to attract qualified talent. France, for example, has launched the campaign "Choose France for Science," with the aim of attracting highly trained international researchers and professionals by speeding up the processing of their residence permits. In contrast, DOGE's cuts, with Elon Musk at the helm, are putting the work of researchers in the US in serious jeopardy, thus simplifying the decision to emigrate for these scientists.

In Xataka | Of course digital nomads love Oviedo. It is not for the way of life: it is because they earn 90,000 euros

In Xataka | Digital nomad visas: the hook countries use to attract the best digital talent without paying the cost of keeping it

Image | Unsplash (Global Residence Index)

Search as we knew it is over. Google's AI Mode no longer delivers results: it talks

In recent times, Perplexity has done something that seemed unthinkable: make Google Search feel old. With its conversational interface, direct answers, linked references and a relentless update rhythm, it has shown that a search engine with AI is not only possible, but desirable. It is faster, clearer, more useful. And increasingly popular. Many already keep it as a pinned tab.

Google has taken note. And it has responded. The most important announcement of I/O 2025, although somewhat camouflaged, was not a new model or an ultra-intelligent agent. It was this: AI Mode arrives in Search and Gemini. Or in other words: Google has begun to transform its search engine into something that looks a lot like Perplexity. For now it is only for users in the United States (argh), but the direction is clear.

When AI Mode is activated in the Gemini app, the user stops doing classic-style searches and begins to receive direct answers generated by the model, with links to sources, relevant context and the ability to go further: compare, ask for explanations, keep asking. The search engine no longer delivers lists of blue links, not even a summary on top. It delivers conversation. Seen this way, Gemini is no longer just a conversational model. It is an active knowledge engine, a synthesis of LLM, browser and assistant, with the ambition of replacing the habit of "googling" with that of "asking." You can search for flights, understand documents, ask for contrasting opinions or compare articles. And all of that without touching an external page.

This is not the generative results we saw arrive in Spain a couple of months ago. This goes much further. Those were generative answers on top of classic results. AI Mode is something else: it is more Perplexity-like, more direct, more useful. And more dangerous for the web ecosystem. Because here is the turn that nobody should overlook. In Perplexity, at least for now, the sources are visible, well highlighted, and are central to the experience.
In AI Mode, on the other hand, the ambition seems different: to answer so much and so well that the user does not feel the need to leave. A closed, polished, self-sufficient experience. That changes things. Not only for the user, who may stop distinguishing between answer and source. Also for the media, creators, forums, specialists. Everything that today feeds Gemini from the web becomes less visible in the process. Knowledge is preserved, but its authorship fades from the surface.

Perplexity forced Google to move forward. But in doing so, Google has changed certain rules. It has taken what works (the synthesis, the natural language, the speed) and integrated it into a broader, more fluid, also more opaque ecosystem. If Perplexity was a pioneer in experience, Google now counterattacks with total integration. Therefore, AI Mode in Gemini is not just a technical novelty. It is a paradigm shift in how we search, how we read, how we inform ourselves. The user no longer consults a database. They interact with a system that interprets, selects, synthesizes and responds. Google has grasped where search is going. And it has decided to move. But in its own style.

In Xataka | Google has put a price on the future of AI: $250 per month

Featured image | Google

Smart glasses don't have to be bulky contraptions. Google has it clearer than ever

Meta is showing the market the way on how smart glasses should be with its Ray-Ban collaboration. And Google, which has been developing projects for years and years, from the Cardboard VR glasses to a Vision Pro-style device (its project with Samsung and Android XR), seems to have realized what the winning format is. At Google I/O 2025 it showed us what's new in Android XR. Far from headsets, far from devices we can only use at home for a few hours because of their weight and dimensions. Its words make it clear: "We know that glasses can only be really useful if you want to wear them all day."

That was the key. The small format. Glasses are glasses. And everything else is headsets or augmented/mixed reality devices for more complex use cases: games, video projection or, why not, fully replacing the workstation. Meta has done it well this year with its Ray-Ban Meta, glasses that to the naked eye seem completely normal. Only up close do the camera system, the LED and the thickness of the temples give away that we are looking at an electronic device.

Gemini sees everything. The focus of this I/O has been clear: Gemini wants to be able to see everything. On both Android and iOS, it will be able to see both what the camera shows and the content shared through our screen. The only thing it needs are "eyes and ears," so... nothing prevents this concept from reaching smart glasses.

Not just headsets. Until now, mixed reality headsets seemed like the only devices capable of offering capabilities beyond the basics: calls and video/photo recording. Google has just proved otherwise at I/O. Google wants to integrate Gemini into traditional-format glasses, so that the assistant is right in our field of view. The goal? To free ourselves from the phone and interact with the environment in a much more natural way.

Ten years of work.
According to Google, the company has been working for more than a decade on this type of concept for smart glasses. It has only become possible now, with Android XR. Camera, microphone, speakers and phone connection: that is all that is needed for Android XR to work together with our smartphone and have access to its applications. Likewise, the most advanced devices will carry a display on the lens to show information to the user privately.

What can be done. Paired with the phone, the glasses can already work with Gemini. The idea is clear and Google does not hide it: that it "sees and hears the same as you." With Android XR on glasses you can send messages, ask for Google Maps directions, take photos that will be stored in Google Photos, and even translate a conversation in real time.

With whom. The success of Android XR on glasses will depend on Google's ability to convince its partners. As of today, the company has announced collaborations with Gentle Monster and Warby Parker to create stylish glasses with Android XR. In "the future," they hope to collaborate with more partners. Likewise, they state that the collaboration with Samsung will go beyond headsets: the goal is to bring Android XR to traditional-format glasses.

Image | Xataka

In Xataka | Xiaomi's glasses are being a brutal success in China. And they have not even gone on sale yet
