The molecule that stores the sun for years and releases heat just when you need it

In winter, raising the blinds to take advantage of the sun's light and heat during the central hours of the day is a good way to warm the house while saving on heating. Of course, as the afternoon wears on and night falls, the sun and its heat are gone. From an energy point of view, it would be fantastic to bottle the sun and release its heat when needed. Something like this has occurred to a research team from the University of California, Santa Barbara, which has published its work in Science: a molecule that captures sunlight, stores it for years without losses, and releases it on demand. No plugs, no batteries.

Professor Grace Han's group has synthesized a modified organic molecule inspired by DNA. Called pyrimidone, it captures solar energy, stores it in chemical bonds, and releases it as heat in a controlled, reversible way. In short, it behaves like a battery.

Context. The bottled-sun analogy points at one of the great problems of solar energy: the issue is not so much capturing it as storing it, because there is not always enough sun to meet demand. And conventional batteries degrade, are heavy, carry inherent management risks, and are expensive (although their prices have never been lower). What Han's team proposes is not new: molecular solar thermal storage, known as MOST for short, has been researched for years. Until now, however, no system had managed to combine competitive energy densities with release temperatures high enough for real practical applications.

Why it matters. Because this research breaks two essential barriers, bringing MOST ever closer to reality:

- It has an energy density of more than 1.6 megajoules per kilogram, almost double that of a standard lithium-ion battery.
- It releases enough heat to boil water under ambient conditions.
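As a sanity check on those two figures, here is a minimal back-of-the-envelope sketch. It assumes a typical commercial lithium-ion energy density of about 250 Wh/kg and water's specific heat of 4.186 kJ/(kg·K); both are common reference values, not numbers taken from the paper.

```python
# Rough check of the reported figures for the MOST material.
# Assumptions (not from the article): Li-ion ~250 Wh/kg, water cp = 4.186 kJ/(kg*K).

MOST_MJ_PER_KG = 1.6                 # energy density reported for the pyrimidone system
LI_ION_WH_PER_KG = 250               # typical commercial Li-ion cell (assumed)

# Convert Li-ion density to the same units (1 Wh = 3600 J).
li_ion_mj_per_kg = LI_ION_WH_PER_KG * 3600 / 1e6
ratio = MOST_MJ_PER_KG / li_ion_mj_per_kg
print(f"Li-ion: {li_ion_mj_per_kg:.2f} MJ/kg -> MOST is {ratio:.1f}x denser")  # 0.90 MJ/kg, 1.8x

# Heat needed to take water from 25 C to 100 C (sensible heat only,
# ignoring vaporization): cp * delta_T, in MJ per kg of water.
WATER_CP_KJ_PER_KG_K = 4.186
heat_per_kg_water = WATER_CP_KJ_PER_KG_K * 75 / 1000
kg_water_boiled = MOST_MJ_PER_KG / heat_per_kg_water
print(f"1 kg of material could bring ~{kg_water_boiled:.1f} kg of water to the boil")  # ~5.1 kg
```

Under these assumptions the "almost double a lithium-ion battery" claim holds (about 1.8x), and one kilogram of fully charged material carries enough heat to bring roughly five kilograms of water from room temperature to the boiling point.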
It is also soluble in water, which makes it potentially compatible with the circulation systems of solar collectors. These properties open the door to uses such as domestic heating and domestic hot water (DHW), off-grid areas, and roof-integrated systems.

How it works. Despite the solar-energy analogies, its mechanism is completely different from that of photovoltaic cells: it does not convert light into electricity, but transforms it into chemical energy stored in its bonds. The molecule, designed with computational modeling to keep it as small as possible, works like a spring: on absorbing ultraviolet light it undergoes a reversible change of shape, passing into a high-energy state. It can remain stable in that state for years until an external stimulus makes it relax, releasing the accumulated heat. As Han Nguyen, lead author of the paper, explains, "the concept is reusable and recyclable."

From Barcelona to California. MOST systems have indeed been in the laboratory for a long time: in 2024 a team from the Polytechnic University of Catalonia published a paper in Joule on a hybrid device that integrated a MOST system directly into a silicon photovoltaic cell. The idea is that organic molecules (composed of carbon, hydrogen, oxygen, fluorine, and nitrogen) store energy on the one hand and, on the other, act as an optical filter and cooling agent for the solar cell. The molecules absorb the UV photons that silicon does not use well, cool the cell, and store that surplus as chemical energy. The solar cell thus generates more electricity and nothing is wasted: the system achieved a solar utilization efficiency of 14.9% and a record 2.3% MOST storage efficiency.

Yes, but.
That two independent studies, separated in time, are working on MOST shows that this technology is more than a mere laboratory concept: it is getting closer to real applications. Of course, like any other innovation, it faces the challenges of scalability and cost, essential for any eventual industrial deployment.

In Xataka | Plastic solar panels have always been more of a dream than a reality: China has just changed that

In Xataka | Spain has just plugged in more batteries in one month than in the previous three years: this is the plan to save our cheapest energy

Cover image | NASA

Anthropic releases a new feature to export all your memory so you can leave ChatGPT and switch to Claude

This weekend Anthropic has gone from being an AI used by the Pentagon and other US agencies, with partners such as Microsoft and Amazon, to total ostracism: as of Friday at 5:01 p.m. it is classified as a "risk to the supply chain". A total veto, a serious threat to the survival of a company valued at $380 billion, and also a challenge for the entities that will have to transition to another alternative in less than six months. The Pentagon itself already has an agreement with OpenAI to succeed it.

Anthropic's situation is delicate, to say the least, when it comes to serving its strategic clients and alliances, something essential to keep growing in the tough AI battle. The company led by Dario Amodei, which stood firm on its principles when expressing concern about the use of artificial intelligence for mass civilian surveillance and the development of weapons capable of firing without human intervention, has already announced that it will contest the decision, but for now things look rough.

It only has the civilian market left... in every sense, because Claude has risen to number 1 in free downloads in the US App Store, as reported by CNBC. Because yes, this tug of war with the US government has boosted the popularity of Claude, less known than alternatives such as ChatGPT or Gemini. On the other hand, this move, in which the US Administration has said goodbye to Anthropic in favor of OpenAI, also has a reading in which Claude wins: the terms of the agreement and how it affects ChatGPT users.

Anthropic's masterstroke. Anthropic has pulled a new feature out of its sleeve to ease the transition from other AI models, such as ChatGPT or Gemini, to Claude. Because if you have been using ChatGPT for a while, for example, and it already knows you, starting from scratch is a step backwards in every sense.
The new feature allows you to import all your memory from other models into Claude, so that it immediately knows everything about you (everything your previous AI already knew). You no longer start from scratch.

How to download your memory and load it into Claude. To bring your preferences and context from other AI providers into Claude, there are two steps:

1. Copy and paste the prompt below into the AI you normally use, such as Gemini or ChatGPT:

"I'm moving to another service and need to export my data. List every memory you have stored about me, as well as any context you've learned about me from past conversations. Output everything in a single code block so I can easily copy it. Format each entry as: (date saved, if available) – memory content. Make sure to cover all of the following — preserve my words verbatim where possible: Instructions I've given you about how to respond (tone, format, style, 'always do X', 'never do Y'). Personal details: name, location, job, family, interests. Projects, goals, and recurring topics. Tools, languages, and frameworks I use. Preferences and corrections I've made to your behavior. Any other stored context not covered above. Do not summarize, group, or omit any entries."

2. The model will return everything it knows about you in a block of text, which you then copy and paste into Claude: go to 'Settings' > 'Capabilities' and, under 'Import Memory', paste the answer. Then tap 'Add to memory'. From that moment on, Claude knows what your previous AI knew.

The small print. This is a feature for users on a paid plan (Pro, Max, Team or Enterprise). If you are on the free version, at most you can use that context within a single conversation, but not permanently. In short: the import is free as a manual process, but for Claude to remember it permanently, a paid plan is required.
In Xataka | Claude: 23 functions and some tricks to get the most out of this artificial intelligence

In Xataka | Anthropic and OpenAI have developed AI. The US Pentagon is showing them who really owns it

Anthropic will no longer pause dangerous models if the competition releases them first

Anthropic is in the middle of a major dispute with the Pentagon in the United States that may end up shaping the future of the company. Founded with safety as its raison d'être, it has just rewritten the rules that defined it. Its "Responsible Scaling Policy", the document that established when to stop the development of a model that was too dangerous, has evolved into a mere roadmap with flexible objectives. And this change is much more important than it seems, not only for Anthropic but for the rest of the industry. Let's get to it.

What exactly has changed. Until now, Anthropic's policy stated that the company would pause training or delay the launch of a model if its capabilities outpaced the speed at which sufficient safeguards could be developed. That is to say: if a model was too powerful to be controlled safely, it was stopped. That is over. The new policy removes that automatic braking mechanism and replaces it with a series of public commitments, together with regular risk reports audited by third parties. The change was confirmed by the company itself in an official statement.

Why have they done it. The company gives two main reasons. The first is the competitive environment: OpenAI, Google and xAI advance without those kinds of restrictions. "We didn't feel it made sense to make unilateral commitments if competitors are moving full speed ahead," Jared Kaplan, chief science officer at Anthropic, told Time. The second, as could not be otherwise, is political: Washington has turned its back on AI regulation, and Anthropic acknowledges on its blog that the current anti-regulatory climate makes its own safeguards asymmetrical with respect to the rest of the sector.

Paradox. From Anthropic's point of view, this is not a renunciation of safety, but a decision made in its name.
Their reasoning: if the more responsible actors (they count themselves in that group, logically) stop while the less careful ones move forward, the net result is "a less safe world." The logic has a certain coherence, but it also means accepting that safety depends on what the competition does. And that is a very dangerous game.

Context. Anthropic was founded by former OpenAI executives, including Dario Amodei, who left that company precisely because they believed it did not pay enough attention to the risks of AI. The new policy comes at a time when several safety researchers have left the company. As reported by the Wall Street Journal, one of them, Mrinank Sharma, wrote a letter to his colleagues this month saying that "the world is in danger" because of AI, before announcing his departure. In fact, according to sources close to the outlet, his departure would be partly related to this decision.

What's happening with the Pentagon. The announcement comes amid full-blown tension with the Pentagon. US Secretary of Defense Pete Hegseth gave Anthropic an ultimatum the same Tuesday the policy change was made public: modify its red lines on the use of Claude or risk losing a $200 million contract with the Department of Defense. Anthropic has made it clear that the two issues are independent, but the timing has not gone unnoticed.

What remains of the safety policy. It is not a total abandonment. Anthropic remains committed to delaying the development or deployment of "highly capable" models in specific circumstances, and commits to publishing detailed, externally verified risk reports every three to six months. The company also now separates its own internal guidelines from its recommendations for the rest of the sector, implicitly acknowledging that the commitment to a "race to the top" that other companies are adopting has not worked as expected.
Cover image | Wikimedia Commons and Anthropic

In Xataka | The US has a message for AI companies: if necessary, that AI belongs to the State

We have had Stephen King releases for several weeks in a row. Don’t we know how to do anything else?

The fall of 2025 has brought with it an avalanche of Stephen King: in almost consecutive weeks we have had the premieres of 'The Long Walk' and 'The Running Man', and shortly before that the series 'It: Welcome to Derry' started on HBO Max. Three big productions in just one month. Are we facing an unimaginative industry that constantly turns to the same author, or is it that King continues to offer something that others cannot? The answer has three keys: the so-called Kingaissance, the decisive factor of streaming, and King's current standing, which has not been devalued by bad adaptations.

Debunking the myth. To debunk the horror genre's supposed dependence on King, just look at the last twelve months of releases. Independent horror is enjoying an unexpected golden age: 'Longlegs', for example, grossed more than one hundred million dollars at the box office on a budget of just ten, and films like 'The Substance' have given the genre a breath of fresh air in terms of quality, Oscar nominations included. Classic franchises such as 'Final Destination' are being revived, 'Frankenstein' is sweeping Netflix, and a star system of horror creators is emerging: Osgood Perkins, director of the aforementioned 'Longlegs', Prano Bailey-Bond, Danny and Michael Philippou, Zach Cregger and Rose Glass, among others.

The Kingaissance. The English-language media coined a term to describe what is happening: the "Kingaissance", a revival with a precise birth date. In September 2017, Andy Muschietti's 'It' became an unexpected cultural phenomenon: on a budget of just thirty-five million dollars, it grossed more than seven hundred worldwide, becoming the highest-grossing horror film in history without adjusting for inflation. What followed was an avalanche.
Without being exhaustive: 'Doctor Sleep', 'Pet Sematary', 'Firestarter', 'Salem's Lot', 'In the Tall Grass', the series 'The Stand' and 'Chapelwaite'... And now three more adaptations, to which will be added Mike Flanagan's future television 'Carrie', 'The Talisman' for Netflix and perhaps a new 'Cujo'. The difference with the eighties is abysmal. Back then, TV movies and B-movies predominated; now we get HBO series and films with established directors, and King himself often has creative control and serves as executive producer on many of these projects.

The streaming factor. For decades, adaptations of King's longer novels were handicapped by having to compress their length into the running time of a feature film. Streaming changed the rules of the game: platforms now allow eight- or ten-episode series that respect the author's narrative complexity, something previously attempted only in miniseries format, in productions such as the first version of 'It' or 'The Store'. It happened with '11.22.63', with 'The Outsider', with 'Lisey's Story' (which King scripted personally)... Now it is the turn of the prequel to the latest version of 'It', and it makes clear how platform logic works: they look for recognizable IPs, and King offers dozens of stories with a bomb-proof dramatic structure.

But there were bad adaptations of King. And they didn't kill the goose that laid the golden eggs. It has always happened: there are nineties miniseries adaptations, like 'The Langoliers' or 'The Shining', that are a pain to sit through. Since the nineties there have been as many weak King films as notable ones. Very recent is the horrendous 'The Dark Tower' from 2017, which compressed eight novels into 95 disastrous minutes. Or 'Cell', absolutely forgettable. Why didn't these catastrophes sink King's value?
First, the original novels remain, at worst, more than readable and, at best, downright excellent: the source material is indestructible. Second, readers clearly distinguish between author and adaptation; they continue to appreciate the writer, and studios continue to try their hand at adaptations. Third, the good adaptations ('The Shining', 'Carrie', 'It', 'Misery', the original 'Pet Sematary') are so good that we will always come back for more.

Why we return to King. The answer, despite appearances, is not a lack of ideas, but that we are dealing with a name of proven effectiveness, even at its worst moments: few have that commercial hook combined with minimum standards of quality and entertainment. King has more than 65 novels and 200 short stories, an inexhaustible mine whose themes are universal and will never go out of style: generational trauma, addiction, the problems of the working class, invisible threats, the corruption of power, the weight of our past...

And to top it off, we are in the era of IP, so this is not something that affects only him. Marvel, DC, Disney... In 2024, the ten highest-grossing films all came from pre-existing intellectual properties. And Hollywood seeks familiarity: from the Agatha Christie films directed by Kenneth Branagh to the explosion of video game adaptations like 'Fallout', 'The Last of Us' and 'Super Mario Bros.: The Movie'. An ideal scenario for a brand that has undoubtedly had its ups and downs, but right now enjoys unexpectedly robust health.

In Xataka | There is a book by Stephen King that sells for around 100 euros and I got it for five: the strange story of 'Rage'

Sandra Echeverría files a lawsuit against a former neighbor in the US

Sandra Echeverría, an actress known for her roles in productions such as "Savages" and "Cambio de Ruta", surprised fans by revealing that she is currently going through a complicated legal process in the United States against a man who used to be her neighbor in California.

In a video posted to her TikTok account, the star explained the details of a situation that has dragged on for several weeks, saying that she decided to take the case to court because of the damage the defendant caused to her property. "I am a little nervous because I am about to connect with a judge and with my former neighbor from Los Angeles. This man flooded my apartment just a few days before selling his; it was several months ago, but he did not want to take responsibility," she said in the video.

Leonardo de Lozanne's wife said that the only thing she seeks is justice and compensation for the damage caused: "I had to file a lawsuit to fight to recover the expenses I incurred on repairs, which were many," she said.

To close her message, Sandra Echeverría acknowledged that although the process has been physically and emotionally exhausting, it is very important for her to stay true to herself and her belief in not letting injustices slide. "Don't let things go, because then there are no consequences. One cannot keep paying for other people's mistakes, and yes, a lawsuit is a drag; it is a drag to fight. It causes you a lot of anxiety and everything, but I have always stood up for justice. I like to fight for what is right, and sometimes you have to get into these messes. Wish me luck and, well, may what is fair and right be done," she concluded bluntly.
Following this message, the actress was showered with supportive comments from her online community: "Exactly, you can't pay for the mistakes of others", "Agreed, being kind doesn't mean being a fool", "Hopefully everything is resolved in your favor", "You are right, I hope justice is done for you" and "Well said, it is your property and you have to take care of it" were some of them.

Continue reading:
• Sandra Echeverría recalled her brief separation from Leonardo de Lozanne
• Sandra Echeverría sends a tremendous request to Ángela Aguilar and her cousin, Majo Aguilar
• Video: Surprising! Sandra Echeverría sings just like Shakira
