Claude just demonstrated it with Firefox

For years, finding serious vulnerabilities in complex software has been a task reserved for specialized researchers who spend weeks or months examining millions of lines of code. That is beginning to change. AI models are no longer limited to generating code or helping debug it; they are starting to detect security flaws on their own. A recent example comes from Anthropic with Claude Opus 4.6, its most advanced model, which was put to the test against Firefox. The experiment is especially striking because Firefox, maintained by Mozilla and used by hundreds of millions of people, is one of the most heavily audited open source projects in the web ecosystem.

Analyzing the Firefox code. Over two weeks of testing, the system identified 22 distinct vulnerabilities, according to information published by both organizations. Mozilla rated 14 of them as high-severity flaws, meaning they could have served as the basis for attacks if someone had developed the appropriate exploit code. According to the project's maintainers, most of these problems have already been fixed in Firefox 148, the version published in February, while the rest will be corrected in future releases.

Inside the experiment. Claude's work was not a simple automated bug search. According to Anthropic, the team first used the model to try to reproduce historical vulnerabilities recorded in Firefox, a way of testing whether it could recognize real failure patterns. Then came the most interesting part of the experiment: asking it to analyze the current version of the browser to locate problems that had not yet been reported. The process started in the JavaScript engine and then expanded to other areas of the code. In total, the analysis covered thousands of files from the project, including several thousand C++ files, generating a long list of findings that were subsequently reviewed by the researchers.

A striking fact.
Claude found more high-severity bugs in two weeks than the browser typically receives in about two months through its usual research channels. During the process, the Anthropic team submitted 112 unique reports to the project's bug-tracking system, although not all were confirmed vulnerabilities. Part of Mozilla's job was precisely to review, triage and classify those findings before determining which ones had real security implications. The experience ended up becoming a direct collaboration between the two organizations to review the results and prioritize fixes.

The other half of the problem. The Anthropic team also wanted to see how far the model could go beyond detecting errors: turning those flaws into real attacks. To do this, they asked it to develop exploits capable of taking advantage of the discovered vulnerabilities. The experiment included hundreds of runs with different approaches and cost approximately $4,000 in API credits. Even so, the result showed a clear gap between the two capabilities: Claude only managed to generate two working exploits, and only in a simplified test environment lacking some of the defenses present in a real browser.

Beyond the specific case of Firefox, the experiment reflects a shift that is beginning to both worry and interest the security community. AI-based tools are rapidly improving at detecting vulnerabilities in complex software, which could help developers fix bugs more quickly.

Images | Anthropic | Rubaitul Azad

In Xataka | iPhones were supposed to be the most secure cell phones in the world. It was supposed

In its quest to reach the Moon by 2030, China has slammed its fist on the table: it has demonstrated the potential of its technology

The race to return humans to the Moon has officially entered a new operational phase, with China successfully executing the first powered flight of its new-generation heavy rocket: the Long March-10 (LM-10). The test not only validated its propulsion capacity, but also certified the safety of its future crews in the most hostile launch environment.

Where. This milestone, achieved from the Wenchang launch site (Hainan), places the Chinese lunar program on a firm, technically verified trajectory toward its strategic objective: putting humans on the lunar surface before 2030.

The litmus test. The recent test marks a turning point because, unlike the static tests and scale models of previous years, this was a real flight with ignition. The LM-10 took off in a prototype configuration with the goal of reaching maximum dynamic pressure (Max-Q). In aerospace engineering, Max-Q is the critical moment during ascent when the aerodynamic forces on the vehicle's structure are at their most violent. It is the worst possible scenario for an emergency that could threaten the safety of the crew, and it was precisely at that moment that the abort command was sent to the crewed Mengzhou spacecraft (the successor to the Shenzhou).

In Xataka | Quietly, China is making giant strides in a race it was not leading until now: space.

There are differences. What distinguishes this test from those carried out by other historical powers is the sophistication of the subsequent sequence. First, the Mengzhou capsule separated from the rocket and fired its escape engines, moving away from the danger zone at high speed and validating its ability to save the crew under extreme aerodynamic conditions. Then, as the capsule descended toward a controlled splashdown, the first stage of the LM-10 rocket was not simply discarded.
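The trade-off behind Max-Q is simple to sketch: dynamic pressure is q = ½ρv², so as the rocket climbs, rising speed pushes q up while thinning air pulls it down, and the peak falls somewhere in between. Here is a minimal illustration, not based on the LM-10's real flight data, that assumes a simple exponential atmosphere and a hypothetical constant-acceleration ascent:

```python
import math

def air_density(h_m: float) -> float:
    """Approximate air density (kg/m^3) with an exponential atmosphere."""
    rho0, scale_height = 1.225, 8500.0  # sea-level density, scale height (m)
    return rho0 * math.exp(-h_m / scale_height)

def dynamic_pressure(h_m: float, v_ms: float) -> float:
    """q = 0.5 * rho * v^2, in pascals."""
    return 0.5 * air_density(h_m) * v_ms ** 2

# Hypothetical ascent profile: constant 30 m/s^2 acceleration from the pad.
ACCEL = 30.0
best_t = max((i * 0.1 for i in range(1201)),
             key=lambda t: dynamic_pressure(0.5 * ACCEL * t * t, ACCEL * t))
best_q = dynamic_pressure(0.5 * ACCEL * best_t ** 2, ACCEL * best_t)
print(f"Max-Q at t~{best_t:.1f}s, q~{best_q / 1000:.0f} kPa")
```

With these assumed numbers the peak lands around t ≈ 24 s; real vehicles hit Max-Q at different times, but the shape of the problem, and why it is the natural moment to test an abort, is the same.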
For the first time in a test of these characteristics in China, the stage continued its ascent briefly, then executed a controlled descent and landed in the sea.

A success. This result simultaneously validates structural integrity under maximum stress, the compatibility of the interfaces between rocket and spacecraft, and the partial reusability of the system, a technological advance that brings China closer to the operational efficiency of companies such as SpaceX, NASA's partner in Artemis. All this in a context where China and the United States are 'fighting' to see who returns to the Moon first.

A change of concept. The Wenchang success is just the tip of the spear of a much more complex system known as the CMSA's "Earth-Space Transportation System for Manned Lunar Flights." This architecture moves away from the "one giant shot" concept and opts for a two-launch, orbital-rendezvous scheme.

The three pillars. The first is the Long March-10, a colossus roughly 92 meters tall capable of placing about 70 tons in low Earth orbit and about 27 tons in lunar transfer orbit. Most interesting of all, its modular design and the recoverability of the first stage are fundamental to the program's economic sustainability, since the structure is recovered for subsequent tests and missions. The second pillar is Mengzhou, which is designed for deep-space missions and is larger and more capable than the current Shenzhou. Its development, which began conceptually around 2017-2018, has culminated in a modular vehicle capable of withstanding atmospheric reentry at lunar-return speeds. The third is a dedicated lunar landing module known as Lanyue, which will wait in lunar orbit.

(Video: "China's space suit to go to the Moon")

Roadmap. The plan includes two separate launches of the LM-10: one to transport the Lanyue module and another for the crew aboard Mengzhou.
The final objective is for both vehicles to perform a rendezvous and docking maneuver in lunar orbit before the taikonauts descend to the surface.

Chronology of ambition. The path toward this flight has been methodical, characterized by a strategy of "small but quick steps" that began in 2013 with the first discussions and prototype development. In 2020, an 8-day orbital test flight using a Long March-5B validated the capsule's heat shield and recovery systems. Finally, this February came the flight with an abort at Max-Q and recovery of the first stage. Looking ahead, "zero-altitude" abort tests and full tests of the Lanyue lunar landing module are expected before the end of 2026, all aimed at meeting the 2030 launch window.

A duel of titans. The comparison between the United States and China is practically obligatory here. While the United States relies on the raw power of the SLS Block 1, a 98-meter, expendable colossus, China is betting on operational efficiency with the Long March-10. And although the Chinese rocket is somewhat less powerful, its design incorporates a reusable first stage, which cuts costs and moves it closer to the sustainability model SpaceX has popularized in the West, in contrast with the immense per-launch cost of the American system. NASA, for its part, has opted for a hybrid and complex scheme: it launches the crew in the Orion capsule atop the government-run SLS rocket, then docks in lunar orbit with the Starship HLS, a commercial lander from SpaceX. China, in contrast, has chosen a more pragmatic "distributed architecture": two separate LM-10 launches, one for the Lanyue lunar lander and another for the crew aboard the Mengzhou spacecraft, which will meet directly in lunar orbit.
In Xataka | Starlink's dominance in space begins to shift: another company already has permission for a constellation of 4,000 satellites

On their calendars. The US program, dependent on multiple commercial suppliers and disruptive technologies (such as Starship's in-orbit refueling), faces highly complex logistics that have accumulated delays for the Artemis III mission. By contrast, China's centralized, vertical model maintains a firm and predictable roadmap to 2030. In this way, we are watching two titanic powers with two different …

We thought ChatGPT was used mostly for work. OpenAI itself has just demonstrated otherwise

For months, many of us assumed that ChatGPT had become the perfect tool for office work and for programming. OpenAI has published its first detailed study of what users really do with it and who they are, and the portrait breaks that intuition: most conversations are not about work.

Personal use dominates and grows. The data reflect a notable change in how ChatGPT is used: in June 2025, 73% of conversations were not work-related, whereas in June 2024 the percentages were almost tied. There are other interesting data points: the user base is mostly young, with about half of all messages sent by people between 18 and 25 years old. Added to this is a shift in the gender profile: the first records showed a predominance of male names, but in 2025, 52% correspond to female names.

More than work: ChatGPT shows its most personal face. The company classified more than a million conversations into seven major categories. The most common, "practical guidance", accounts for 28.3% of all interactions and includes requests for help with everyday tasks, academic questions and training tips. The study also paints a curious picture: it notes that adoption is growing faster in less wealthy countries, although it does not break uses down by country. The second major block is requests related to writing, led by editing or critiquing texts and personal communication. Programming also appears, but it accounts for only 4.2% of the analyzed chats.

One trend gaining strength is information-seeking. OpenAI states that these queries have become a close substitute for web search engines; they grew steadily over the study period until they became the second most common use. Within this section, questions about products also appear, accounting for 2.1% of queries in that category. These data raise questions about the future of search and how the company led by Sam Altman is challenging Google.
Another relevant block is personal advice and intimate conversations. The report indicates that 1.9% of interactions relate to reflections and relationships, and 0.4% to role-playing, including the use of ChatGPT as a virtual "companion". Although the study insists these are small figures, the issue is in the spotlight in several countries because of the impact of this technology on some people's mental health.

The study runs 62 pages and covers the period from May 2024 to June 2025, with data from 1.5 million users and a sample of 1.1 million conversations. As for how OpenAI obtained this information, the company says it used its own models to analyze the messages, preventing human researchers from reading individual conversations. Demographic information comes from the data users provide when registering.

Images | Solen Feyissa | Levart_photographer

In Xataka | China is selling us a future full of humanoid robots. We have (many) doubts

Alibaba has just demonstrated that what OpenAI spends $78 million on, it can do for $500,000

There is a new star technique for training AI models super-efficiently. At least, that is what Alibaba seems to have demonstrated: on Friday it presented its Qwen3-Next family of models, boasting efficiency so spectacular that it even leaves behind what DeepSeek R1 achieved.

What happened. Alibaba Cloud, the Alibaba group's cloud infrastructure division, presented a new generation of LLMs on Friday that it described as "the future of efficient LLMs." According to those responsible, these new models are 13 times smaller than the largest model the company has launched, which was presented just a week earlier. You can try Qwen3-Next on the Alibaba website (remember to choose it from the drop-down menu at the top left).

Qwen3-Next. That is the name of this family of models, among which Qwen3-Next-80B-A3B stands out: according to its developers, it is up to 10 times faster than the Qwen3-32B model launched in April. What is truly remarkable is that it manages to be much faster while also cutting training costs by 90%.

$500,000 is nothing. According to Stanford University's AI Index Report, OpenAI invested $78 million in compute to train GPT-4. Google spent even more on Gemini Ultra; according to the same study, that figure reached $191 million. Qwen3-Next is estimated to have cost only $500,000 in that training phase.

Better than its competitors. According to benchmarks by the firm Artificial Analysis, Qwen3-Next-80B-A3B has managed to beat both the latest version of DeepSeek R1 and Kimi-K2. Alibaba's new reasoning model is not the best in global terms (GPT-5, Grok 4, Gemini 2.5 Pro and Claude 4.1 Opus beat it), but it still achieves outstanding performance given its training cost.

How did they do it? Mixture of Experts. These models use the Mixture of Experts (MoE) architecture.
With it, the model is "divided" into a set of neural subnetworks, the "experts", each specialized in subsets of the data. Alibaba increased the number of experts: while DeepSeek-V3 and Kimi-K2 use 256 and 384 experts respectively, Qwen3-Next-80B-A3B uses 512 experts, but activates only 10 at a time.

Hybrid attention. The key to that efficiency lies in so-called hybrid attention. Current models tend to lose efficiency when the input is very long and they have to "pay more attention", which implies more compute. Qwen3-Next-80B-A3B uses a technique called Gated DeltaNet, which MIT and NVIDIA developed and shared in March.

Gated DeltaNet. This technique improves how models pay attention by making certain adjustments to the input data: it determines which information to retain and which can be discarded. That enables a precise and highly cost-efficient attention mechanism. In fact, Qwen3-Next-80B-A3B is comparable to Alibaba's most powerful model, Qwen3-235B-A22B-Thinking-2507.

Efficient and small models. The growing cost of training new AI models is becoming worrisome, and that has fueled more and more efforts to create "small" language models that are cheaper to train, more specialized and especially efficient. Last month Tencent presented models below 7 billion parameters, and another startup, Z.AI, published its GLM-4.5 Air model with only 12 billion active parameters. Meanwhile, large models such as GPT-5 or Claude use many more parameters, which makes the compute needed to run them far greater.

In Xataka | If the question is which of the big tech companies is winning the AI race, the answer is: none
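The sparse activation described above (512 experts, only 10 active per token) can be sketched with a toy top-k router. This is a generic illustration of MoE gating, not Qwen3-Next's actual code; the vector width and the reduction of each expert to a scale factor are assumptions for brevity:

```python
import math
import random

random.seed(0)
D_MODEL, N_EXPERTS, TOP_K = 8, 512, 10   # toy width; expert counts from the article

# Router: one score per expert. Each "expert" is reduced here to a scale factor
# standing in for a full feed-forward sub-network.
router_w = [[random.gauss(0.0, 0.02) for _ in range(N_EXPERTS)]
            for _ in range(D_MODEL)]
expert_scale = [random.gauss(1.0, 0.1) for _ in range(N_EXPERTS)]

def moe_layer(x):
    """Route one token vector through only its TOP_K best-scoring experts."""
    scores = [sum(x[d] * router_w[d][e] for d in range(D_MODEL))
              for e in range(N_EXPERTS)]
    top = sorted(range(N_EXPERTS), key=scores.__getitem__)[-TOP_K:]
    m = max(scores[e] for e in top)
    gate = [math.exp(scores[e] - m) for e in top]
    total = sum(gate)
    gate = [g / total for g in gate]         # softmax over the selected experts only
    out = [0.0] * D_MODEL
    for g, e in zip(gate, top):              # only 10 of the 512 experts do any work
        for d in range(D_MODEL):
            out[d] += g * expert_scale[e] * x[d]
    return out

token = [random.gauss(0.0, 1.0) for _ in range(D_MODEL)]
print(len(moe_layer(token)))   # output keeps the model width
```

The point of the architecture is in the routing loop: per-token compute scales with the number of active experts (10), not the total expert count (512), which is how an 80-billion-parameter model can run with only about 3 billion active parameters.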

This guy really existed, and he scientifically demonstrated that Jack could have survived

We will never know for sure whether James Cameron was fully aware, during the filming of 'Titanic', of what he was shooting. What I have fewer doubts about is that he knew perfectly well the ending he had to give the story. At this point we are not revealing anything by noting that poor Jack dies so that Rose can stay afloat, although the doubt will always be there: could he have survived? It turns out the answer was in a frame of the movie.

Let's start with one of the most famous deaths in the history of cinema. In essence, Jack could perfectly well have saved himself, or so many of us think. Where one fits, two are always possible, but DiCaprio's character ceded the whole board to Rose while she, far from urging him to climb on, watched him curled up as the protagonist died in agony, freezing. The controversy was served.

The MythBusters appear. In 2012, the popular program took up the debate with a live test. What did they do? A trial run with dummies and a small recreation of the film's board. Indeed, the board tips over, but when they attempted the feat with a full-scale replica, they discovered that Rose could have removed her life jacket and placed it under the plank to add extra buoyancy. That method lifted the board enough that most of their bodies (80%) stayed out of the water while floating. The program's conclusion was clear: "Jack's death was unnecessary."

The MythBusters video went viral, and Cameron soon came out in his own defense. The director replied that people "were missing the point" of the character's death. By 2017, Cameron had grown fed up with the question, dismissing it as morbid or lacking scientific rigor. He had already acknowledged in the past that the character's death was plainly and simply an artistic license, but seeing that nobody listened to him, he opted to test the situation that occurs in Titanic.
"I was in the water with the wooden board, putting people on it, for about two days, studying exactly whether it was buoyant enough to support the weight of a person with enough freeboard, meaning their body was not immersed in the water, so they could survive the hours until the ship came," he explained.

So could he have been saved? Two years ago, given the insistence of many fans of the film, Cameron revealed something that until then was unknown. In an interview with the outlet The Toronto Sun he said he had commissioned a "scientific study" showing that two people could not have survived on the floating door at the end of 'Titanic'. "We have conducted a scientific study to end all this and drive a stake through its heart once and for all," Cameron said. "We have since performed an exhaustive forensic analysis with a hypothermia expert who reproduced the raft, and we are going to make a little special about it."

Apparently, the study took two specialists with the same body mass as Kate Winslet and DiCaprio, fitted them with sensors all over their bodies, "and we put them in ice water to see if they could have survived through a variety of methods. The answer was that there was no way both could have survived. Only one could survive." He added: "Jack needed to die. It's like Romeo and Juliet. It's a film about love, sacrifice and mortality."

The baker. Artistic license or not, the truth is that Cameron knew something that went unnoticed by many viewers at the premiere. Among the many historical characters strategically placed in the film, representing real lives that embarked on the Titanic, one of them held the answer fans were waiting for about Jack's fate. His real name: Charles Joughin, a man who boarded the Titanic to work as the liner's chief baker. Joughin, 33, was resting in his cabin when, on April 14, the Titanic hit the iceberg.
He was put in charge of sending as much food as possible to supply the lifeboats, and while doing so he hit the bottle. This was recorded by the British government commission during the inquiry into the sinking of the Titanic. "I went down to my room for a drink," Joughin declared, adding that he had a bottle of whiskey in his cabin that would accompany him for the rest of that fateful day. The man then climbed back up and helped women and children into the boats, and from time to time he took a swig, which may have calmed him and made the surrounding chaos more bearable.

And here comes the moment (it actually appears several times, but this is the clearest) in which reality and fiction come together (min. 01:22 of the video). The film shows us Joughin, played by actor Liam Tuohy, in the same shot as Jack (DiCaprio) and Rose (Winslet) as the Titanic breaks in two. Unlike in the film, though, it is said that the baker remained the last individual at the stern of the Titanic as it rose into the air, hung suspended for a moment and then sank into the ocean. "I don't believe my head ever went under the water. It may have been wet, but no more," he declared at the inquiry.

Joughin was, therefore, one of the survivors who found himself floating in the icy waters of the North Atlantic (like Jack) and lived to tell the tale. How? Researchers estimate that the water temperature that night was -2°C. On top of that, there is the shock of immersion in such frozen water, which is why most victims died within just a few minutes. We know …
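The MythBusters question above is, at bottom, an Archimedes problem: the board keeps its passengers up only if the water it can displace weighs more than the board plus its load. A back-of-the-envelope sketch with hypothetical numbers (the door's real dimensions and wood density are not given in the article):

```python
RHO_WATER = 1025.0   # kg/m^3, seawater

def supports(board_vol_m3: float, board_mass_kg: float, load_kg: float) -> bool:
    """Archimedes: the board floats its load only if the water displaced
    when fully submerged outweighs the board plus everything on it."""
    max_displaced_kg = board_vol_m3 * RHO_WATER
    return max_displaced_kg > board_mass_kg + load_kg

# Hypothetical door-like panel: 2.0 x 1.0 x 0.15 m of oak-ish wood (~750 kg/m^3)
vol = 2.0 * 1.0 * 0.15          # 0.3 m^3
mass = vol * 750.0              # 225 kg

print(supports(vol, mass, 60.0))    # True: one ~60 kg person, barely
print(supports(vol, mass, 130.0))   # False: ~130 kg combined sinks it
```

With these assumed numbers the panel barely floats one person (it would ride almost fully submerged) and sinks under two, which is exactly the margin MythBusters attacked by wedging the life jacket's extra buoyancy underneath.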

Duolingo believed AI was its ally. GPT-5 has just demonstrated that it can be its mortal rival

Duolingo is sinking in the stock market. In early June its stock traded at around $525, but its value has since collapsed to $325, a 38% drop. It is not entirely clear what caused this debacle, but we have a clear suspect: AI.

Careful what you say. Three months ago Luis von Ahn, CEO of Duolingo, made very controversial statements, indicating that he would replace part of its network of external (human) collaborators with generative AI systems. Although he stressed that they would continue to be a company that takes great care of its employees, he also stressed that AI would take an increasingly notable role in the entire operation of the company, especially to "remove bottlenecks so we can do more" with the employees they already had.

Duolingo's share price has seen sharp ups and downs this year, but the latest trend is clearly negative. Source: Google Finance.

Boom and bust. Those statements came at the end of April. The initial impact on the shares seemed positive: they went from $400 to $530 (32.5% growth) in a few days. But shortly afterwards that investor optimism about AI's role at the company vanished: the stock fell even below those initial levels, and now hovers around where it started the year.

It seemed Duolingo was recovering. The company presented financial results that corroborated the success of its business model. The feeling of progress while learning a language (gamification is a powerful and, as we will see, dangerous tool) sold better than the learning itself. That momentarily halted the stock's slide a few days ago, but then something happened.

GPT-5. During the presentation of OpenAI's new model, one of the company's engineers threw a poisoned dart at Duolingo: a demo creating, in just three minutes, a custom web application with which the user could learn French.
With a single prompt, an app that competed directly with Duolingo, and that of course avoided paying for an application to learn languages. That one prompt was enough for GPT-5 to build an interactive website to teach you to speak French. Source: OpenAI.

Careful with gamifying everything. Although that demo of GPT-5 becoming a personalized teacher is striking, the drop in Duolingo's share price may also have other causes, especially its clear focus on gamifying the learning process. Turning learning into a game is attractive and encourages many users to approach the task in a more fun way, but criticism of Duolingo's excessive focus on gamification is frequent. As one user put it on Reddit, "for me the reward for learning a language is learning the language." Another explained that Duolingo is not a learning application and should be taken as something else: a game.

The curse of advertising. Other criticisms target the excessive number of ads that appear when one uses Duolingo to learn a language. Advertising appears in the free version; the premium version has the advantage of not showing it. The model is reasonable (Duolingo is, after all, a company and is there to make money), but as with streaming, the presence of ads keeps growing and is increasingly annoying for free-tier users.

From betting on AI to being threatened by it. The truth is that although all these factors may have weighed on the valuation, the volatility may also reflect the expectations that constantly swirl around AI. The companies betting hardest on this technology are the ones rising in the stock market, even though the technology's real impact is for the moment very modest.

AI as a private tutor. What is unquestionable is that the potential of AI as a teacher of any discipline, not only languages, is undeniable.
GPT-4o already pointed in this direction, and its demonstrations went the same way. For example, the video of a boy learning to solve a math problem (included here) was especially striking, and it hinted at a future in which anyone who wants one can have plenty of these "private tutors", created with a single prompt in ChatGPT (and other chatbots, of course). It is early to know, of course, but Duolingo, like many others, seems to be suffering the consequences of that future potential.

In Xataka | We do not know whether AI is going to eat your job, but the CEOs of some startups are determined to convince you it will

The blackout in Spain has demonstrated the ideal medium for information in a crisis: radio

In the minutes that followed the blackout in Spain and Portugal at 12:32 yesterday, millions of people were asking the same thing: "What happened?" In other crises, television, newspapers and of course the Internet are the obvious places to find out what is going on, but yesterday that was not possible. Almost everything failed, yet one medium allowed us to stay informed: the radio.

Where is the FM radio in our phones? The fragility of our communications turned the radio into a unique way to stay informed. Mobile phones had FM radio support for years, but the feature has disappeared from current models; in fact it went from being a desirable extra to a function found only on cheaper handsets. Some older phones do still have it, by plugging in headphones (which double as the antenna), and that allowed those who still keep one of those phones to stay informed, at least while the battery lasted.

Some brands keep it in some models. Xiaomi is one of the manufacturers that still includes FM radio support in some models of its Redmi family, though of course not all; the Xiaomi Redmi Note 14S or the Redmi 14C are good examples. In recent handsets we normally will not find the option, but it can be enabled on older phones (such as the Poco M5, the Motorola G73 5G or the Samsung Galaxy M23 5G) and also on entry-level devices and less widespread brands such as Doogee or Ulefone.

Radio as an informational lifeline. Without power there was no television or Wi-Fi in homes and offices, and connecting to the Internet was an odyssey all day: mobile lines, the only possible way out, worked irregularly, if they worked at all. And yet the radio worked without apparent problems.

Battery-powered transistors. Meanwhile, those who still kept a battery-powered transistor radio at home could stay informed, because radio stations kept broadcasting during the blackout.
Those transistor radios became the salvation of many citizens, who either had radios at home, gathered in the streets around cars with analog radios, or listened to whatever radio other people were playing in the streets or on terraces. They were also among the products that sold out fastest in the shops that stayed open during the blackout.

Why the stations kept working. Augusto Molina and Héctor Zafra, respectively technical director and technical manager of Cadena SER, explained in a recent piece how they proceed in these emergencies. The first thing they did was switch off monitors and every piece of equipment that could be switched off, to save as much power as possible. That maximized the autonomy of the emergency equipment used in these cases, which proved key both at SER and at other stations.

Generator sets. These generator sets are machines that produce electricity with an internal combustion engine. They run on fossil fuels (gasoline, diesel) and can generate electricity for as long as the fuel lasts. They are the same equipment used in hospitals to keep a good part of essential services running during this kind of energy crisis.

Several stations survived the blackout. Radio Nacional de España kept broadcasting, for example, as did Onda Cero, the COPE network, the aforementioned SER and other stations (though not all), broadcasters that could keep informing thanks to the emergency equipment activated during the blackout.

Radio as a lifeline in emergencies. What happened yesterday has underlined the relevance of radio as an ideal communication medium in emergencies. There have been numerous cases in history in which that has become evident.
In Spain, examples include the 23-F coup attempt, the March 11 attacks and, more recently, the DANA storm that ravaged the Valencian Community and knocked out most of the electrical grid. Transistor radios, in each case, allowed people to stay informed.

Image | Xataka

In Xataka | Five pounds a year and a telephone line: how the Electrophone, the "Spotify" of the nineteenth century, worked

In Spain, felling urban trees looks like a national sport. These Swiss researchers have just demonstrated that it is a mistake

There is only a handful of things we know for sure about cities, and one of them is that trees are key: exposure to green spaces (and that includes trees) "is associated with lower risks of mortality." It's simple, it's clear, it's easy. And despite this, we take no notice.

Wait, wait a moment… how? Yes, and there are many reasons for it: trees filter air pollutants, provide shade, reduce ambient temperature in warm climates and encourage people to spend more time outdoors. They are a cheap and relatively accessible way to improve people's lives. But, as I say, planting trees is not so easy, among other things because there is no space.

How can we plant trees for maximum benefit? That is what researchers at ETH Zurich asked themselves. To find out, they examined high-resolution tree-canopy data to determine "the structure of green tree spaces" within a 500-meter radius of a person's place of residence. Of six million people, in fact. And they crossed this urban data with health and mortality information from the European and Asian neighborhoods they analyzed.

What did they discover? That both tree coverage in residential areas and its spatial distribution correlate with mortality. In fact, the researchers found that the risk of mortality was "significantly lower in people living in neighborhoods with extensive, contiguous and well-interconnected areas of tree canopy than in people living in areas with smaller, fragmented patches of canopy with complex geometries." This appears to hold, moreover, even after discounting factors such as age, wealth, gender or educational level. It is true that the data are correlational and therefore do not allow causal relationships to be established, but the effect size makes all this very promising. Still, more research is needed: "We are still in the early stages of this research," the researchers explained.
There are very basic things still to be clarified: for example, they did not study the influence of specific factors such as pre-existing diseases, smoking or the actual use of these green spaces.

A big problem… that affects us especially. And I say it affects Spain because the poor care given to urban trees here is an endemic problem. The causes are diverse, yes, but they can be summed up in three: scarce resources, mismanagement and political decisions isolated from any current technical knowledge. The Swiss study is just the latest drop in a glass about to overflow. Because we have known for a long time that trees help reduce atmospheric pollutants and mitigate the urban heat island effect; but we do not take it seriously. Nor does it look like we are going to.

Image | Vladimir Kudinov

In Xataka | A centenary ficus has just died in Seville after two years of agony. It is the best example of how Spain is killing its urban trees
