Millionaires decide how and what the world learns

In recent weeks we have seen Elon Musk rising as a champion of the neutrality of knowledge, although, paradoxically, he does so by offering his own vision of history through an AI that only he controls: Grokipedia. As noted on La Sexta, Musk’s has not been the only case of a millionaire who has wanted to impose his interests on how culture is interpreted or accessed. For more than three centuries, millionaires have sought to influence the way the world accesses knowledge, leaving traces that range from the Enlightenment to today’s digital world. Forms and formats change, from printed encyclopedias to artificial intelligence algorithms, but the intention to dominate the narrative persists.

Chrétien-Guillaume de Malesherbes and the Encyclopédie

In the 18th century, the European political and religious context was restrictive and censorious toward knowledge that questioned religious dogma. Chrétien-Guillaume de Malesherbes was a wealthy and influential French official who, in his role as director of the royal Librairie, took on the challenge of protecting a work that defied that order: the Encyclopédie of Diderot and d’Alembert. This ambitious project not only compiled human knowledge, but did so from a scientific and rational standpoint, displacing religious dogma from the center of knowledge. The Encyclopédie became a symbol of the Enlightenment, an ideological statement that sought to liberate the human mind through reason and empiricism, generating a profound cultural change against the dominant monarchical and ecclesiastical structures. Malesherbes faced censorship and prohibitions, but from his position of influence he defended evidence and science as the basis for intellectual emancipation.
Encyclopédie of Diderot and d’Alembert

This approach not only transformed the way knowledge was understood in Europe, but also established a precedent: access to knowledge could be a tool for freedom and social criticism, very aligned with (and even ahead of) the air of freedom that ran through France at the end of the 18th century. The Encyclopédie was the first major initiative to show that knowledge could be a political and cultural weapon, shaped by those who had the influence to protect and disseminate it.

Andrew Carnegie and public libraries

In the late 19th and early 20th centuries, Andrew Carnegie brought the democratization of knowledge to a more tangible and accessible form: free public libraries. As the BBC recounts, Carnegie was born into a working-class family in Scotland and emigrated to the United States, where he amassed an immense fortune thanks to the steel industry and the demand for steel for railway construction. During his youth, Carnegie faced the reality that many private libraries charged fees that shut out the poorest, himself included, which motivated him to invest a good part of his fortune in establishing free libraries.

Andrew Carnegie in 1878

However, beyond his apparent philanthropy, Carnegie complained that many workers were not sufficiently trained, so his investment sought to bring knowledge to the greatest number of people in order to create an educated and capable workforce. Carnegie financed the construction and equipment of between 2,500 and 3,000 libraries, leaving the communities responsible for their maintenance and operation, thus ensuring their sustainability. His vision was for the library to be an open-access community center where everyone could educate themselves, where foreigners could learn the language and acquire skills to boost industrial productivity.
Bill Gates and Encarta: knowledge in the digital age

With the computer boom of the early 90s, Bill Gates envisioned a new way to access knowledge: the multimedia encyclopedia. In 1993, Microsoft launched Encarta, a CD-ROM encyclopedia that contained thousands of articles, audio clips, images and interactive maps, accessible from a personal computer. This product represented a radical change with respect to printed books and physical libraries, bringing information into homes around the world through technology. But Encarta was not an altruistic work to bring knowledge to users; it followed a clear commercial strategy: you needed a PC with Windows to use it, which reinforced the influence of Microsoft’s operating system on the consumer. Encarta was presented as an educational, useful and visually attractive tool for a diverse audience, reflecting the transition toward digital knowledge in the emerging Internet era. With this new product, Microsoft took a step back from the free access to knowledge for which Carnegie had fought: to learn with Encarta you had to pay for a license that ranged from $395 down to $22.95, depending on the year. Finally, Wikipedia came along to break that economic barrier again, offering its content for free and displacing Encarta.

Rupert Murdoch and the media narrative

While other models relied on encyclopedic or educational knowledge, Rupert Murdoch built a media empire focused on a more current concept: shaping public perception through ideological narratives. Murdoch, the son of an Australian publisher, expanded his influence by controlling newspapers and television networks such as The Times, The Wall Street Journal and Fox News. His project was neither neutral nor purely informative, but rather a business model based on monetizing opinion and ideological bias. During the 1980s and 1990s, Murdoch built a media structure that made him tremendously rich.
Instead of maintaining informational neutrality, his outlets presented the news according to very defined ideological frameworks, with a focus on the interpretation of facts to influence public opinion. After all, it is another way of offering knowledge according to the point of view of whoever finances the medium.

Elon Musk and Grokipedia

In the 21st century, information flows in abundance through online channels, but even in this hyperconnected scenario, some millionaires still feel the need to present knowledge through their own prism. As part of his personal offensive against Wikipedia, Elon Musk has launched Grokipedia through his company xAI, presenting it as an alternative to Wikipedia “without ideological restrictions or cultural biases”. Musk accused Wikipedia of having a “woke patina”, that is, a progressive cultural bias, and proposed Grokipedia as a project capable of offering “objective facts” generated by AI. However, Grokipedia has been criticized for reproducing specific political biases and for the lack of transparency in its sources …

Australia’s idea to survive its own solar success

In Australia, solar energy has gone from being the promise of the future to a problem of the present. There is so much sun, and so many panels, that the electrical grid is reeling from excess production. During the middle of the day, millions of rooftops feed electricity back into the system, generating more energy than the grid can absorb without losing stability. At those times, wholesale prices fall to zero and even negative values. The solution the Australian government has found is as simple as it is disruptive: giving away electricity for three hours a day.

The challenge of excess. Australia has been living its own energy paradox for years: the transition toward renewables has advanced so quickly that the system is beginning to suffer the consequences. More than four million homes, one in three, have solar panels on their roofs. This distributed generation already produces more electricity than all the coal plants still active. According to Reuters, the program, dubbed “Solar Sharer”, will allow millions of homes to access three hours of free energy a day, even those who do not have solar panels. “People who can move their electricity consumption to the zero-cost period will benefit directly, whether or not they have solar panels and whether they are homeowners or tenants,” explained Energy Minister Chris Bowen.

Energy for everyone. The plan is not optional for electricity companies: the Australian Government will require them to offer three hours of free electricity each day during the midday solar peak. The measure will start in 2026 in New South Wales, South Australia and southeast Queensland, and will be extended to the rest of the country if it works as expected. To make it possible, the Executive will modify the Default Market Offer (DMO), the benchmark tariff that limits what retailers can charge. From now on, that tariff will include a daily slot of zero cost, precisely when the grid is saturated with solar energy.
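As a rough illustration of how a zero-cost midday slot changes the arithmetic for a household, here is a minimal sketch. All numbers in it (the window hours, the flat rate, the consumption profiles) are invented for the example, not official figures from the program.

```python
# Hypothetical illustration of a zero-cost midday window.
# The window hours, the flat rate and the consumption profiles
# are all assumptions made up for this sketch.
FREE_WINDOW = range(11, 14)   # e.g. 11:00-14:00, three free hours
RATE_PER_KWH = 0.30           # assumed flat rate outside the window

def daily_cost(hourly_kwh):
    """Cost of one day's consumption when the midday slot is free."""
    return sum(0.0 if hour in FREE_WINDOW else kwh * RATE_PER_KWH
               for hour, kwh in enumerate(hourly_kwh))

flat = [1.0] * 24             # 24 kWh spread evenly over the day

shifted = [1.0] * 24          # same 24 kWh, but 6 kWh of flexible load
for h in (0, 1, 2, 18, 19, 20):
    shifted[h] = 0.0          # ...deferred from these hours...
for h in FREE_WINDOW:
    shifted[h] += 2.0         # ...into the free window

print(round(daily_cost(flat), 2))     # 6.3  (21 paid hours)
print(round(daily_cost(shifted), 2))  # 4.5  (only 15 paid hours)
```

The point of the sketch is the one the minister makes: total consumption is identical in both profiles, but moving flexible loads into the free window cuts the bill.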
Participating households must have a smart meter and reorganize their consumption: run the washing machine, charge the car or turn on the air conditioning when the sun is at its highest.

A double objective. On the one hand, the plan seeks to relieve pressure on the grid and reduce emissions. According to the Financial Times, it aims to utilize excess solar capacity and rebalance the electrical grid to reduce dependence on coal and gas. Tim Buckley, director of the Climate Energy Finance think tank, called it an “obvious” measure, as it will create a “demand pool” in the middle of the day, helping to stabilize the system. The Australian Government has been committed to accelerating the energy transition for some time. In 2022, Bowen set a goal for 82% of electricity to come from renewable sources by 2030, as detailed by Reuters. Initiatives like Solar Sharer complement the subsidy for domestic batteries, which will allow part of that free energy to be stored for night use.

Not everyone is happy. The Australian Energy Council (AEC), the consortium that brings together the main electricity companies, criticized the Government for not having consulted the sector before the announcement. Its executive director, Louise Kinnear, warned that the “lack of consultation risks damaging sector confidence and generating unintended consequences.” Additionally, some companies fear the plan will increase network costs and force smaller retailers out of the market. According to the FT, companies fear that the measure will distort competition, although defenders of the plan argue that the real risk is not acting in the face of a saturated grid. Despite this, large players such as AGL Energy and Ovo Energy have shown willingness to collaborate with the Government to define the technical details.

From Australia to Spain. The Australian proposal has sparked interest in other sunny countries, especially in southern Europe, where solar energy has also grown explosively.
From there the inevitable question arises: could it be replicated in Spain? As one of the largest photovoltaic powers in Europe, with its own episodes of negative prices in the electricity market, it is logical to consider the possibility. However, the Spanish electrical system is going through a phase of instability: while the south of the peninsula produces more solar energy than it consumes, the north continues to depend on gas plants, the only ones capable of providing the “inertia” necessary to stabilize the grid. Although the hourly tariff system and smart meters would make the Australian measure technically replicable, the European framework prevents offering free electricity directly. The price is set in the wholesale market, managed by OMIE, and the State cannot intervene except through subsidies or discounts. In short: Spain has the sun and the technology, but not the regulatory flexibility. As analyst Joaquín Coronado notes, “we have the generation of the future, but we continue to use the crutches of the past.”

The global experiment. Giving away electricity to avoid a collapse of the grid may seem contradictory, but it contains a lesson about the energy transition: the problem of the 21st century will not be producing energy, but managing it. While Europe debates how to lower the bill, Australia has chosen to share its excess. If the plan works, it could become a reference for other countries with strong solar penetration, such as Spain or Italy. In the words of Minister Chris Bowen, “the more people take advantage of the offer and shift their consumption, the greater the benefits will be for everyone.” Perhaps the future of energy is not just about paying less, but about using the light when the sun gives it away.

Image | Unsplash

In 1995 a program came out that promised to double your PC’s RAM. In the best of cases, all it did was not consume more resources

The 90s were wonderful in the world of software and hardware. Epic trolling like the $299 of the first PlayStation, the legendary Windows 95 key, or the PlayStation emulator presented by Steve Jobs himself. In the middle of the decade a program came out that promised the impossible: doubling the amount of RAM in your PC. Its name was SoftRAM 95 and, although it makes us raise an eyebrow today, in its day it sold hundreds of thousands of copies at $80 each. And spoiler: it was of absolutely no use.

SoftRAM 95, the miracle solution for your PC’s RAM

The launch of a program like this was a product of its time, one in which users could be less savvy than today, for perfectly understandable reasons, and an industry in which everything was being learned and developed on the fly. In those days the sharpest players were the ones who got results, but a company called Syncronys Softcorp learned its lesson the hard way. The year was 1995 and Windows 95 was beginning to revolutionize homes. Although the Microsoft system made controlling a PC more accessible than ever (unfortunately for Steve Jobs), the hardware still had a brutal barrier to entry: the price. Computers were still expensive, very expensive, so skimping on components saved a few dollars. RAM was one of those components where you paid gold for every KB, but… what if there were a program that, for a few dollars, doubled the amount of memory in your PC? What if it did all this without having to touch a single piece of hardware? That is where the Californian Syncronys Softcorp saw an opportunity and (we can now say, in bad faith) launched its program: SoftRAM 95. It went on sale in August 1995 and it is estimated that some 600,000 copies were sold through December of that same year: an outrageous figure for the time. And the logical question is how it achieved what it promised.
The long answer is that it compressed memory: when the operating system needed to move data from RAM to the hard drive, SoftRAM 95 compressed it before writing, reducing the disk space needed and leaving the RAM with more room available. The concept, roughly speaking, is sound, and the program’s interface told you that yes, congratulations, you had double the amount of RAM. The short answer is that it did not do what it promised. Although technically the idea was on the right track, the process was at the time tremendously ambitious for one reason: the speed of both the RAM and the primitive hard drives of the era was so absurdly low that the objective simply could not be met. Syncronys’s management knew this, but they did not care: the money kept pouring in, because each license cost about 30 dollars.

Under the magnifying glass of the press… and Microsoft

However, things quickly went wrong. PC Magazine subjected the software to an analysis done the way these analyses should be done: testing whether the program really did what it promised. Using blocks of data to evaluate whether the compression was effective, they found that processing times were exactly the same with compressible data and with random data that could not be compressed. They came to the conclusion that the only thing SoftRAM did was show an animated screen that gave the user the perception that it was working when, in reality, it was doing absolutely nothing. But beyond the press, the ones who really got their hands into the software were Bryce Cogswell and Mark Russinovich, two engineers who dissected the program at the code level. They basically confirmed PC Magazine’s well-founded suspicion and showed that the program never actually worked.
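PC Magazine’s methodology can be re-created in spirit with a few lines of modern Python: a genuine compressor behaves radically differently on repetitive data than on incompressible random data, and it was precisely the absence of that asymmetry that gave SoftRAM away. This is only a sketch of the test idea using zlib; it has nothing to do with SoftRAM’s actual code.

```python
import os
import zlib

# Sketch of the test idea: a real compressor produces very different
# results for repetitive data than for incompressible random bytes.
# SoftRAM showed no such difference, which exposed it.
SIZE = 1_000_000
compressible = b"A" * SIZE          # highly repetitive, compresses well
incompressible = os.urandom(SIZE)   # random bytes barely compress

ratio_c = len(zlib.compress(compressible)) / SIZE
ratio_r = len(zlib.compress(incompressible)) / SIZE

print(f"repetitive data: {ratio_c:.4f} of original size")
print(f"random data:     {ratio_r:.4f} of original size")
```

Running this, the repetitive data shrinks to well under 1% of its size while the random data stays essentially at 100%; identical processing results for both inputs, as PC Magazine observed, means no compression is happening at all.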
That is, the paging driver (the component responsible for that compression of RAM on its way to the hard drive) shut itself down right at load time, so it never did anything at all beyond displaying false numbers while the operating system worked exactly as it would have anyway, whether the program was installed or not. When I said earlier that Syncronys’s management knew, it was not a matter of judging history with the eyes of the present. When everything came to light, it was reported that the RAM compression was not being carried out and, in addition, it emerged that they had sold the software even though its own developers had warned that the product was not ready. And it was not a case of “I’ll launch it now and fix it later”, like many current games, because in 1995 Internet updates were not the norm. Just when the company thought the worst was over, the US Federal Trade Commission arrived. Following its investigation, Syncronys finally acknowledged that it had misrepresented the performance of its product and was banned from selling any more copies of either SoftRAM for Windows 3.1 or SoftRAM 95. In total, the two versions placed 700,000 copies on the market, and Syncronys declared bankruptcy in July 1998 owing 4.5 million dollars.

The idea did not die with SoftRAM

In the end, the best SoftRAM could do was not eat up your PC’s resources, and it was one of those attempts to sell whatever you could in a still somewhat naive market. For PC World, alongside AOL and RealPlayer, SoftRAM ranks among the worst technology products of all time. But of course, with the eyes of 2025, you may be wondering: what about solutions like Windows Vista’s ReadyBoost or memory expansion on mobile phones? That is a different matter and, although both promise to improve performance by using “extra memory”, it is something very different from what SoftRAM did. ReadyBoost, for example, let you use the memory of a pen drive as a cache to speed up access to frequently used data.
It acted as an extension of the system’s virtual memory, and the theory is sound, but once again we ran into the speed limitation of USB drives …

How to customize ChatGPT to choose its default personality, what it knows about you and even what it calls you

We are going to tell you how to customize ChatGPT to adapt it to the way you want to use this artificial intelligence. You will be able to customize the way it responds, and even the way it addresses you, all through options available in the settings. We will walk you through everything you can customize about ChatGPT’s behavior and memory. And remember that all of this is optional: there is a toggle, Enable personalization, and by disabling it you can make sure that nothing you change takes effect.

Customize how ChatGPT responds to you

Within the ChatGPT settings you have a section called Personalize. In it, you can adjust many aspects of your user experience with ChatGPT, so that everything is more personalized. One of the most important things is to configure how you want its responses to be.

ChatGPT personality

The first thing you can do is adjust the AI’s personality, choosing the style in which the AI writes its answers. By default, its personality is cheerful and adaptable, but you can choose from four others:

Cynical: responses are written in a critical and sarcastic tone.
Robot: responses are written in an efficient and direct tone.
Attentive: responses are written in a thoughtful and understanding tone.
Geek: responses are written in an exploratory and enthusiastic tone.

Custom instructions

If none of these personalities convince you, you can also customize it manually by giving it instructions on what its tone should be. This way, you can turn ChatGPT into a much more personal assistant, adapted to what you expect from its responses. For that, go to the section Custom instructions. There you will find a writing field where you can describe by hand the type of tone you want the AI to use. Below it you will also see several examples, which you can simply click on.
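These app settings have a rough programmatic analogue: when calling a chat model through an API, the same kind of preferences can be packed into a system message. The sketch below only builds the message payload; the mapping from the app’s fields (personality, custom instructions) to a prompt is our own assumption for illustration, not an official feature.

```python
# Hypothetical sketch: approximating ChatGPT's personalization
# settings with a system message for an API call. The mapping from
# the app's fields to this prompt is our own assumption.
def build_system_prompt(personality: str, instructions: str) -> str:
    return (
        f"Adopt a {personality} tone in every answer. "
        f"Additional instructions from the user: {instructions}"
    )

messages = [
    {"role": "system",
     "content": build_system_prompt(
         "robot",  # efficient and direct, like the app's preset
         "Keep answers short and avoid filler phrases.")},
    {"role": "user", "content": "Explain what RAM is."},
]
print(messages[0]["content"])
```

In the app, of course, none of this is needed: the settings screen fills in the equivalent behind the scenes.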
Configure what ChatGPT knows about you

Below the personality-related options, you have several options to define your personal data. For example, in Your nickname you can choose a name or nickname that ChatGPT will use to refer to you when answering your questions. After the nickname, you also have a section to specify Your profession. That way, the answers it gives you can be adapted, when appropriate, using terms you may be more familiar with. It will also be able to guide you with answers and solutions related to your work, and you can ask it things directly without having to spell out what you do or study every time. Lastly, you also have a section, More about you, where you can give more information about your interests, your values or your way of thinking. This way, you can make sure its answers address or draw examples from things that interest you, or simply leave out things that might clash with your values. Here you are free to explain as much as you want about yourself; you can explore and add things to adapt ChatGPT as much as possible to you and your context.

Manage what ChatGPT remembers

Then you have the memory section. When you talk to ChatGPT, at any moment you may say something the AI considers important about you. It can be anything from a plant you own, because you have asked about it, to your personal tastes, and it will remember this to use when it thinks it is necessary. For example, your plant may appear in the background when you ask it to create images. If you click on the option Manage memories, you will see a list of all the things ChatGPT has been saving about you on its own. Here you can delete all memories at once or individually, and there is even a search box for cases where it has stored many things about you. But if you do not like this, you can disable the use of memory.
By disabling the option Reference saved memories, ChatGPT will no longer store memories or refer to them when responding to you. And then you have one last option, Reference chat history, which you can activate or deactivate. This allows ChatGPT to take other conversations into account and mention things you have talked about in them, without having to save them as memories. With all of this, you can change everything according to your needs: activate or deactivate options depending on what you need, or simply disable all these customizations if you don’t need any of them.

In AI, teraflops came first, then parameters. Now what matters are the ‘bragawatts’

The technological conversation revolves around fashions, and there is nothing as fashionable as artificial intelligence. Every country that wants to be part of the conversation is developing its models and tools, and it is interesting how geopolitics permeates everything: the US seeks sovereignty while China wants to monetize now. But just as interesting as the capabilities of any given model are two totally intertwined topics: the data centers that feed the enormous amount of computation needed to train artificial intelligence and, evidently, where they get that absurd amount of energy from. And out of that conversation a fascinating term has been born: the ‘bragawatt’.

The ‘bragawatts’ as AI bragging

Something common when companies like OpenAI or Google announce new AI-focused data centers is a bombastic number about the amount of energy they will consume. Recently, OpenAI announced a new campus in Michigan that, together with six others also recently revealed, will need more than 8 GW to operate. They also talk about money: a plan launched in January of this year worth 500 billion dollars and 10 GW of planned capacity. According to the company, it is “the infrastructure necessary to advance AI and reindustrialize the country.” The Financial Times has done the math: with the Michigan project, the company has 46 GW of planned computing capacity. As with operations like Microsoft’s purchase of Activision Blizzard for 75 billion dollars, context is needed, because numbers this enormous are hard to imagine. If 1 GW is enough to power 800,000 homes in the United States (air conditioning included, at any time of the year), these OpenAI data centers would consume as much energy as more than 44 million homes. More context from the Financial Times: almost three times all the homes in California.
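The homes equivalence quoted above is simple arithmetic, easy to check with the article’s own 800,000-homes-per-GW rule of thumb (note that the FT’s 44-million figure implies a somewhat different per-home consumption assumption):

```python
# Back-of-the-envelope check using the 800,000-homes-per-GW rule of
# thumb quoted in the text. The FT's "more than 44 million homes"
# figure implies a slightly lower per-home consumption assumption.
HOMES_PER_GW = 800_000

recent_campuses_gw = 8   # Michigan plus six other announced campuses
openai_total_gw = 46     # the FT's tally of OpenAI's planned capacity

print(recent_campuses_gw * HOMES_PER_GW)  # 6400000
print(openai_total_gw * HOMES_PER_GW)     # 36800000
```

Either way, the order of magnitude holds: tens of millions of homes’ worth of power for a single company’s announced plans.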
The fact that companies hand out these power figures so cheerfully has led some to coin the term ‘bragawatt’. This neologism is a sarcastic combination of ‘brag’, to show off, and ‘watt’, the unit of power. It is hard to translate into a single word, but basically it is boasting: some companies publicly exaggerate the energy consumption capacities planned for their infrastructures. There are several reasons for doing this but, as with any announcement by publicly traded companies, the objective is to attract the attention of the press, the technology sector and, above all, investors. In economic circles it is noted that these bombastic figures are not always met, but beyond the marketing boast, there is substance to all this. OpenAI asked the US government to secure 100 GW annually to fuel the country’s different AI developments, and NVIDIA has explained quite well why estimating the demand of these centers is a problem. In a recent report, the company made a very interesting point:

Unlike a traditional data center, which runs thousands of unrelated tasks, an AI “factory” operates as a single system. When training a large language model, or LLM, thousands of GPUs perform intensive calculation cycles, followed by periods of data exchange. Everything happens in a perfect synchrony that generates an energy profile characterized by massive and rapid load variations. The electrical consumption of a rack can go from an “idle” state, around 30% utilization, to 100% and back again in a matter of milliseconds. This forces engineers to oversize components to support the maximum current, not the average, which increases costs and space requirements.
When these oscillations are added up across an entire data room, which can mean hundreds of megawatts rising and falling in seconds, they pose a significant threat to the stability of the electrical grid, making grid interconnection a key bottleneck for the expansion of AI.

Therefore, beyond the aforementioned boasting, there is some substance in those enormous figures that companies give. And what Nvidia says is backed by data. The big technology companies in the United States are snapping up important sources of power: nuclear electricity production, or contracts with oil and gas companies. Coal is re-emerging in the middle of decarbonization to feed these gluttonous data centers, and we are seeing that this focus on LLMs is leading large oil companies to backtrack on their plans to adopt renewable energies. AI needs fast energy capable of supporting those performance peaks, and renewables do not seem to be the way to go at the moment. Since we are dealing with grandiose figures, estimates say that, between now and 2029, the world will spend about 3 trillion dollars on data centers. For more context, that is what France’s entire economy was worth in 2024. Whether we are talking about a bubble or not is another topic, but there are those who find these fanfares very hard to believe. There are also those who point out that AI will have more impact than any technology so far, including the Internet, so we may end up needing all that energy. Only time will tell.

Image | İsmail Enes Ayhan

Archaeologists have been fascinated by the largest temple in the Mayan world for years. Now we know that it is a map of the cosmos

Our knowledge of the first Mesoamericans has just expanded. And in a big way. A team led by professors from the University of Arizona has published a study with new revelations about Aguada Fénix, a site located in the east of the state of Tabasco, Mexico, near the border with Guatemala. Put like that, it may not sound like much, but Aguada Fénix is not just any place. When it was discovered, about five years ago, it was presented as “the largest and oldest Mayan monument ever discovered.” Now we know that it also had some surprises in store for us.

What is Aguada Fénix? To answer that question we have to go back a few years, to 2017, when, with the help of lidar technology, a team led by two professors from the University of Arizona (UA), Takeshi Inomata and Daniela Triadan, identified an ancient monument that until then had gone unnoticed in the state of Tabasco, very close to Guatemala. The laser beams, capable of passing through tree canopies and revealing three-dimensional shapes, revealed nothing less than a monument more than 1,400 meters long, about 400 wide and between 9 and 15 high. And that is just for starters, because beyond the central platform the complex occupies much more space, with causeways and enormous channels connected to a nearby lagoon.

Why is it important? Because of its scale. And its historical relevance. When the archaeologists began to excavate and turned to radiocarbon dating, they had another surprise: the complex had been built between 1000 and 800 BC, making it older than the archaeological site of Ceibal, in Guatemala, until then considered the oldest ceremonial center. Aguada Fénix therefore gave researchers a double surprise, as the University of Arizona itself confirmed in 2020 when announcing the discovery: not only was it earlier than Ceibal, it also stood out for its size. In fact, it became the “largest known monument in Mayan history”, far surpassing the pyramids and palaces built during subsequent centuries.
And why is it news now? Because the researchers have not been content with presenting Aguada Fénix to the world. Over the last few years they have continued investigating, expanding our knowledge of a complex that actually extends far beyond the central platform and the nine causeways initially identified. Thanks to tools such as lidar, experts have found that it extends kilometers further, and they have detected an extensive hydraulic system with channels 35 meters wide and five meters deep, plus a dam.

Have they discovered anything else? Yes. To begin with, Aguada Fénix probably served as a very special ceremonial center, a “cosmogram” representing the order of the universe as its creators understood it. During the excavations they discovered a cross-shaped pit in which they recovered ceremonial artifacts, pieces that offer “unprecedented information about the first Mayan rituals.” To be more precise, they found jade axes and ornaments depicting a crocodile, a bird and a woman giving birth. “It is like a model of the cosmos. They thought that it is ordered according to this cruciform pattern and that this is linked to the order of time,” adds Inomata.

Ritual decorations? Not only that. When they reached the bottom of the pit, the researchers located another, smaller cruciform structure with a new surprise. There they found mineral pigments, mounds of blue, green and yellow tones marking cardinal points. “We knew that there are colors linked to directions, and that is important for all Mesoamerican peoples, even the Native American peoples of North America,” comments Inomata. “But we’ve never had pigments arranged this way. This is the first case where we found them associated with each specific direction. It was exciting.”

And what were they doing there? Archaeologists believe that the pigments and other materials were arranged as an offering and then covered with sand and earth. They also verified that radiocarbon dating places them around 900-845 BC.
With all this data on the table, they do not rule out that people later returned to the monument to perform rituals and deposit objects. Another revealing detail is that the central axis of the Aguada Fénix monument appears to align with the sunrise on two very specific dates, October 17 and February 24, which are 130 days apart; this suggests to the experts that the layout represented half of the 260-day Mesoamerican ritual cycle. Inomata notes that this would not be exceptional: the layout would agree with that of other Mayan sites.

Why is it so relevant? Beyond the site itself, the new findings matter for what they tell us about the ancient inhabitants of the region. For a start, the UA notes, it debunks the old theory that Mesoamerican settlements grew gradually, with communities building ever larger sites until reaching Tikal in Guatemala or Teotihuacán in central Mexico. Aguada Fénix long predates the heyday of both enclaves, which does not prevent it from being "as big or even bigger" than them. "What we are discovering is that there was a 'big bang' of construction around 1000 BC that no one really knew about," reflects Inomata. The Tabasco discovery confirms that "from the beginning" there was large-scale planning and construction. Aguada Fénix is in fact so old, and so far ahead of the Mayan apogee (roughly the 3rd-10th centuries AD), that experts are not even sure its builders spoke Mayan languages. In any case, they do acknowledge "a strong cultural continuity" with later communities.

How on earth did they build it? That is another of the most suggestive conclusions of the study that Inomata and his colleagues have published in Science Advances.
In it they slip in a curious theory: although it is known that other sites, such as Tikal in Guatemala, were governed by powerful monarchs, in the case of Aguada Fénix there are no indications of powerful rulers with the ability to force their subjects to work. That does not mean … Read more
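The calendar claim behind the sunrise alignments is easy to verify: October 17 and February 24 really are 130 days apart, half of the 260-day ritual cycle. A minimal sketch; the specific years below are arbitrary placeholders, chosen only so the span crosses a non-leap February:

```python
from datetime import date

# Sunrise alignment dates reported for Aguada Fénix's central axis.
# Only the month/day pair matters; the years are illustrative.
first_alignment = date(2021, 10, 17)
second_alignment = date(2022, 2, 24)

days_between = (second_alignment - first_alignment).days
print(days_between)      # 130 days between the two aligned sunrises
print(days_between * 2)  # 260, the full Mesoamerican ritual cycle
```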

The most unexpected treatment against cancer is LED light, and it is giving good results

Currently there are many research groups with a very clear objective: to find a cancer treatment that is effective, specific and, above all, safe. That is genuinely complex given everything cancer involves, but science keeps delivering good news. The latest comes from the University of Texas and the University of Porto, which have developed a technique based on tin oxide nanoflakes (SnOx) and LEDs that allows cancer cells to be destroyed with precision.

The current problem. The standard therapies today in the fight against cancer are, without a doubt, chemotherapy and radiotherapy. The first has numerous problems that researchers have tried to correct, such as low specificity: it attacks cancer cells and healthy cells alike. This ultimately produces many side effects that can lead patients to abandon treatment. Science's goal is therefore specificity, a treatment that attacks only cancer cells. This is what immunotherapy and techniques like CAR-T are trying to achieve, as part of medicine personalized to each patient that allows a very specific selection of the type of cell to destroy. But science has not stopped there.

The discovery. One of the techniques that looks promising is photothermal therapy (PTT). The concept is quite simple to understand: inject nanomaterials into a tumor and then heat them using light. This causes a localized increase in temperature, which selectively destroys the cancer cells that were previously targeted. The problem until now was the materials and the light: many photothermal therapies require high-powered lasers, which are expensive and can damage surrounding tissue. Now, a team of researchers from the University of Texas at Austin and the University of Porto has found the key to changing the rules of the game.

A secret ingredient.
The team has developed a new photothermal agent: nanoflakes made of tin oxide, tiny sheets less than 20 nanometers thick. The really ingenious part is how they were manufactured. They started from a cheap and abundant material, tin disulfide, which ironically is useless for photothermal therapy. Through a "green" and scalable process called electrochemical exfoliation with oxidation, which uses only aqueous media, they transformed the inactive tin disulfide into tin oxide ready to fight cancer.

And then came the light. Once the material was available, all that remained was to expose it to low-cost LED irradiation emitting infrared light at 810 nm. This radiation is very safe and does not damage healthy skin, as can happen with radiotherapy, and it is also extremely cheap and accessible to everyone, even in developing countries.

Results. To test its effectiveness, the researchers applied the treatment to cells in culture. The first thing they saw was that it had no effect on healthy cells: it did not destroy them. The best part came when applying it to cancer cells, where it produced a large reduction across the different colonies. Specifically, in skin cancer there was a 92% reduction in the viability of tumor cells, while in colorectal cancer the figure was 50%, still a good result. All thanks to an increase in temperature from 37 °C to 50 °C over 30 minutes that killed the cancer cells.

The future. This study not only presents a more efficient material, but also validates its use with safer and more economical light sources. The researchers themselves point to the potential of LED systems for applications such as skin cancer treatment, which could theoretically even be self-administered at home.
This would be a great advantage for patients and would reduce the burden on health systems, although there is still a lot of research ahead to see whether this therapy is viable, over a timescale that will surely not be less than 10 years. Images | National Cancer Institute, Logan Voss In Xataka | Colon cancers are increasing alarmingly among young people. We have a suspect: sedentary lifestyle

India has bombed clouds to improve its terrible air quality. They have wasted 400,000 dollars

The sky over New Delhi is a sorry sight. While half the world is focused on reducing its emissions and improving air quality (something ultra-polluted giants like China are implementing successfully), the other half continues with inefficient decarbonization policies. India is one of them, and the arrival of winter does not help. To combat its poor air quality, the country has "seeded" clouds over New Delhi. And there are voices suggesting it has spent a fortune for nothing.

Crisis. The situation in India's large cities, with the focus on a capital that has more than 28 million inhabitants in its metropolitan area and a density of almost 6,000 inhabitants per km², is really complicated. Vehicle emissions account for 40% of the city's emissions, but other sources, such as construction dust, inorganic aerosols and industrial activity itself, contribute plenty of "dirt" to the city's air. The quality is not good at any time of year, but in the post-monsoon season, between October and November, the situation becomes critical. That is when large amounts of rice stubble and other waste are burned, which, combined with the other particle sources and the arrival of cold air that traps pollutants near the ground, causes particle levels to skyrocket. And it is no joke: it is estimated that between 2009 and 2019 there were nearly four million deaths in India linked to poor air quality.

Figures. To measure this "dirt" in the air, we turn to PM2.5, a measure of the amount of fine particles suspended in the air, specifically those with a diameter of 2.5 micrometers or less. They are so small that they can penetrate deep into the lungs and reach the bloodstream, posing a serious health risk. PM2.5 levels in Delhi run between 140 and 170 µg/m³, almost 12 times the safe level set by the WHO, 15 µg/m³.
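As a quick sanity check on those figures (a toy calculation from the numbers cited above, not from any official source), Delhi's reported PM2.5 range can be compared against the WHO guideline:

```python
# Reported Delhi PM2.5 range versus the WHO annual guideline value.
WHO_GUIDELINE = 15    # µg/m³, the safe level cited above
DELHI_LOW = 140       # µg/m³, low end of the reported range
DELHI_HIGH = 170      # µg/m³, high end of the reported range

# How many times over the guideline the city sits at each end of the range.
ratio_low = DELHI_LOW / WHO_GUIDELINE
ratio_high = DELHI_HIGH / WHO_GUIDELINE
print(round(ratio_low, 1))   # 9.3
print(round(ratio_high, 1))  # 11.3, i.e. "almost 12 times"
```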
Petter Ljungman, a researcher at the Karolinska Institute in Sweden, analyzed the role of these particles and determined that "each increase of 10 micrograms per cubic meter in the concentration of PM2.5 leads to an 8.6% increase in mortality."

Bombing the clouds. In the face of a crisis like this, two things can be done: take stock and rethink the country's strategy, or resort to desperate measures. As we read in Reuters, it seems the government has opted for the latter. On October 28, the Delhi government, in collaboration with the Indian Institute of Technology Kanpur, carried out its first cloud-seeding tests. This is India's first attempt at the technique, and it is not about "creating clouds" but about making existing ones release their water. Using catalysts dispersed from aircraft, the water droplets in a cloud can be made to coalesce into larger, heavier droplets, which then fall to the ground as rain under their own weight. It is nothing new: although it may sound like science fiction, we have been "seeding" clouds for half a century.

Negative… results. The problem is that we have accumulated more and more evidence that it is of little use. If the clouds are good candidates, yes, showers are generated, but the big problem is that it is a very expensive practice for the results obtained, which is why more and more countries have abandoned their rain-making projects. In the case of the Indian experiment, the cost was about $400,000 to put into operation the planes that dispersed sodium chloride and silver iodide over several districts north of the capital. Each flight cost about $70,000, and the person who said it was not of much use was not an outside body or a government critic: it was the director of IIT Kanpur himself.
Manindra Agarwal admitted that the results were "not as desired" because the humidity levels in the clouds were extremely low. That was a crucial error: the estimated minimum humidity for condensing cloud droplets is 50%, and the chosen clouds had levels between 15% and 20%. Despite this, Agarwal noted that a reduction of between 6% and 18% was observed in certain particle measurements, but only at very localized points and for short periods.

Deaf ears. Of course, faced with such a fortune invested without results, it did not take long for the "I told you so" voices to rain down. Climate activists said it, but so did two other official bodies: the Indian Meteorological Department and the Commission for Air Quality Management. Both indicated that the technique requires specific clouds that are absent during Delhi's cold, dry winter.

Recommendations. In the end, what this episode demonstrates is that, in desperate situations, desperate measures mainly serve to burn through funds. Solutions must be considered over the short and medium term, and here China has served as an example. In India's case, what is being proposed is control over stubble burning during the autumn season, better waste management and stricter industrial regulations. The country has also taken giant steps in recent years in transport electrification, but progress must likewise be made in urban forestry that "traps" pollution and in large-scale renewable energy. Until that happens, the almost 30 million inhabitants of New Delhi will breathe air equivalent to what they would inhale smoking seven cigarettes a day. Images | Naomi E Tesla, Submitmpsd In Xataka | The Atacama salt flat is the key on which the electric car industry pivots. And it's starting to dry

The gaze that became a voice and a guide

In 2019 we published a 37-minute documentary about Dulce, a girl with motor paralysis who learned to communicate using only her eyes and an eye-tracking system by Irisbond. When she started with it, she was six years old; the learning process had only just begun. The eighteen months of recording culminated in a moment that summed up the entire effort: in front of her classmates, using her communicator, Dulce announced "my mother is having a baby." A pure expression of desire, a willingness to share. Perhaps the first time she not only named the world but shaped it. Six years later, we have spoken again with Raúl, her father. Today Dulce is thirteen, her brother Max is already ten, and Dante, the baby Raquel was then expecting, is already five. The communicator is still her voice; what has changed is what she says with it and what she uses it for.

From spectator to teacher. When we met her, Dulce was learning to use the device with the patience of first Celia and then Mariano, her educators. She burst virtual balloons on the screen, matched pictograms with concepts, constructed basic phrases. The process was methodical and exhausting: each session required prior calibration, sustained concentration, and the diffuse promise that all of it would one day give her communicative independence, something very remote at the time.

Dulce introducing herself at one of her talks. Image provided.

Now Dulce is on the other side. Not only has she mastered the system, she has become a trainer for other communicator users through the Gema Canales Foundation. "She acts as a teacher, teaching other children to use communicators, because she is very good at it and has a lot of patience," explains Raúl. "She's already taught three or four kids how to use the system." And it is not a one-off activity: according to her father, it is something she would like to continue doing as an adult.
The communicator is no longer just her tool of expression, but also what she trains others in. The transformation is complete: from a student struggling to articulate simple ideas to a mentor capable of transmitting technique and patience to others.

Teenage conversations. The most notable thing is not the technological leaps, which there have been, although moderate, but the communicative ones. In 2018, Dulce was pronouncing single words, constructing short sentences and expressing basic desires. Six years later she holds more complex conversations. "She has the normal conversations of a 13-year-old teenager," says Raúl.

Image provided.

The most notable change came with the mobile phone. Dulce now has her own, not as her main communication device (for that she continues to use the Irisbond system connected to a tablet) but as a gateway to the digital socialization typical of her age. The phone lets her access WhatsApp and chat with friends, a teenage rite of passage. Although she accesses it through WhatsApp Web for fluidity and convenience, she also likes using her phone with the mobility her left hand allows. This communicative autonomy has also changed her social dynamics. Raúl remembers moments when Dulce, in new environments with strangers, starts conversations using her communicator. The other kids quickly normalize the system: "Oh, okay, I talk and she answers me like this." There is no discomfort, just a slight adaptation to the pace of the conversation, which is slower than natural speech but fluid enough to sustain complete dialogues.

The voice that doesn't want to change. Technologically, the system has not evolved much in these six years. The most important improvements came in the years before the documentary, when the eye tracking went from crude to functional. Since then, progress has been incremental: response speed has improved slightly and the software is somewhat more predictive, but nothing transformative.
The most interesting thing is that Dulce has resisted changing the communicator's voice. The system has been updated with more voices, including children's voices, not just adult ones, as some parents had been demanding.

Image provided.

When the tool added its first children's voices, Raúl went "with all his enthusiasm" to configure one on Dulce's tablet, but he ran into something unexpected: her refusal. She preferred to keep the voice she has used for years, with its adult ring. "She's already gotten used to that being her sound. It's as if your voice changed overnight: you'd feel strange, you wouldn't recognize yourself in it." Her father speculates about something obvious but easy to forget: when you have spent most of your life hearing yourself speak one way, changing your voice is not an upgrade, it is losing your sound identity.

The limit is still physical. Dulce finished primary education with excellent grades, with her only curricular adaptation in Physical Education. Now she is in the first year of ESO (secondary school) and limitations are beginning to appear, not because of cognitive ability but because of motor demand. Mathematics, which in primary school was numbers, now introduces algebra. "That is where it could get more complicated for her," admits Raúl. The solution involves an assistant who transcribes what Dulce indicates with her communicator, a support she needs not because she does not understand the subject but because writing equations with her eyes is infinitely slower than by hand. It is a technical limitation, not an intellectual one, but it sets the pace of her academic progress.

The impact of the documentary. The 2019 report did not change Dulce's life or her family's. There was no media transformation or avalanche of attention.
But Raúl remembers a very specific effect: when they had meetings with the Madrid Department of Education or requested academic support resources, someone would mention, "ah, yes, you are Dulce's family, the one from the documentary." "She already had a face, eyes, expressiveness, a story," he explains. "It wasn't just a name in a dossier." In the bureaucratic negotiation for resources and support, that minimal humanization of the file worked in their favor. It wasn't … Read more

Five technology offers to take advantage of MediaMarkt’s Black Weeks, today, November 9

There is almost nothing left until Black Friday! In a few weeks, one of the biggest sales campaigns of the year will begin, so many stores have already launched discounts as a small preview. MediaMarkt, for example, has its Black Weeks, so in this article we review five of its best deals available over the weekend.

Google Pixel 10 for 799 euros with coupon, one of the best proposals of the year at a much more reasonable price. Sony WH-1000XM4 for 189 euros, top value-for-money headphones that are now cheaper. LaCie Mobile Drive V2 for 145.99 euros, a 5 TB hard drive with a portable design. iPhone 16e for 599 euros, Apple's phone returns with one of the best discounts it has received to date. Dyson V8 Advanced for 259 euros, a good Dyson vacuum cleaner whose price has dropped 35%.

Google Pixel 10. There have been more than a few offers on the Google Pixel 10 since its launch, and now a new one has arrived at MediaMarkt: by entering the coupon TradeInPixel10100Nov before completing the purchase, you can buy it for 799 euros. And take note, because it is a phone with an excellent multimedia section, very good cameras, exquisite software and 256 GB of internal storage. The price could vary. We earn commission from these links.

Sony WH-1000XM4. If what you are looking for is a good pair of headphones, note that the Sony WH-1000XM4 have dropped at MediaMarkt to 189 euros, a bargain considering that, despite a couple of newer generations existing, they still offer a very good listening experience. They have excellent active noise cancellation, are very comfortable, and their battery lasts up to 30 hours. The price could vary. We earn commission from these links.

LaCie Mobile Drive V2. Although SSDs have taken a more prominent role than HDD hard drives, the latter tend to have much more competitive prices.
The LaCie Mobile Drive V2 is a good example: at MediaMarkt it is on sale for 145.99 euros. It has 5 TB of storage and a USB-C connection, lets you make backup copies, and has a resistant, portable design. LaCie Mobile Drive V2 (5TB). The price could vary. We earn commission from these links.

iPhone 16e. The iPhone 16e went on sale not long ago, and since then we have found it discounted many times. Now MediaMarkt has it for 599 euros, one of the best prices it has had so far. It is a powerful phone thanks to the A18 chip, offers good battery life and is ideal for those who want a very compact format (6.1 inches). The price could vary. We earn commission from these links.

Dyson V8 Advanced. On the other hand, if what you are looking for is a good vacuum cleaner, note that this Dyson V8 Advanced has dropped to 259 euros. It is a cordless vacuum with up to 130 AW of power, a theoretical battery life of up to 40 minutes, and several included accessories. It also offers several usage modes and has a hygienic dirt-emptying system.

Some of the links in this article are affiliate links and may provide a benefit to Xataka. In case of unavailability, offers may vary. Images | MediaMarkt and Compradicción (header), Google, Sony, LaCie, Apple, Dyson In Xataka | The best mobile phones (2025), we have tested them and here are their reviews In Xataka | Best wireless headphones: which to buy, with 21 models from 15 to 470 euros
