The CNMV has spent ten months testing AI as a stock-market investor. The conclusions are very revealing

In recent months a recurring message has been spreading on social networks, the latest incarnation of the old "get rich quick" pitch: "use AI to invest in the stock market." The interesting part is that the CNMV has published a study that puts precisely that premise to the test. Although the regulator warns of the risks of investing with AI, there is another important message in its conclusions: LLMs are not bad investors per se. They are bad at following vague instructions, which is exactly how most people use them.

The CNMV study. Two CNMV researchers, Ricardo Crisóstomo and Diana Mykhalyuk, have published a methodologically serious (if imperfect) and very interesting study: they ran four AI models live for ten months, from April 2025 to January 2026. The models chosen were ChatGPT, Gemini, DeepSeek and Perplexity. The process was simple but demanding: each month they asked each model to identify the five stocks in the Ibex 35 index with the best expected performance (to buy) and the five with the worst expected performance (to sell short). The actual result was then measured at the end of the month, with no cherry-picked historical data: the real market was the only arbiter of how the models performed.

The models evolved. One of the most significant aspects of the study is that its authors acknowledged a methodological problem that was hard to avoid: over those ten months, the versions of the four models were updated several times. The Gemini of April 2025 was not the same as that of January 2026, for example, and that could influence the results. The researchers noted that it was impossible to know with certainty whether an improvement or deterioration in performance was due to the prompt strategy, to market conditions in that period, or simply to the model having changed.

The prompt is everything.
Three very different prompt types were also tested, and that produced conclusions that were neither alarmist nor prone to creating false expectations: they amounted to "it depends." The results showed that everything hinged on the degree of supervision these models received. When the LLMs were asked generic questions such as "What stocks should I buy?", they failed repeatedly. There were computational errors, misinterpretations and the chatbots' famous hallucinations. Curiously, the only one that made a profit in this mode was ChatGPT. The problem is that people who use AI to invest probably use exactly this approach. But when prompts were prepared with iterative reviews and human supervision at each step, Perplexity achieved a monthly return of 3.5% on the Ibex 35. Gemini and ChatGPT also improved their behavior when given more precise instructions, while DeepSeek was the worst ranked overall. There is another finding: when models receive official regulatory documentation or earnings reports, their predictive accuracy improves significantly. LLMs reason better over concrete, verified facts than when generating analysis from scratch on information they search for themselves on the web.

Financial hallucinations. The CNMV study points out that financial markets are especially demanding for AI models because they require complex processes: the models have to retrieve and collect information dynamically, reason in multiple steps, be numerically precise and understand the market, all in real time. Chatbots are trained to generate "convincing" text, so the incentive here is for an investment recommendation to "sound good" even when it is completely wrong. The confidence with which AI models present incorrect financial analysis is proportional to the risk they pose to those who use them without checking whether what they say makes sense. In short: do not blindly trust AI to invest.
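As a rough illustration of what a sustained 3.5% monthly return would imply over the study's ten-month window, here is a minimal compounding sketch. The assumption that the return was constant every month is ours; the article only reports a monthly figure for one configuration.

```python
# Illustrative only: assumes the reported 3.5% monthly return held
# constant over all ten months of the CNMV study window.
monthly_return = 0.035
months = 10

# Compound growth: (1 + r)^n - 1
cumulative_return = (1 + monthly_return) ** months - 1
print(f"{cumulative_return:.1%}")  # roughly 41% cumulative
```

A real portfolio would of course see that number vary month to month; the point is only that small monthly edges compound quickly over a ten-month horizon.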
The Reddit user's experiment was equally striking, but hardly conclusive. Source: Reddit.

The Reddit experiment. A Reddit user named Blotter-fyi launched in November 2024 a platform called Rallies.ai, which gave several AI agents access to real-time financial data and money to make stock-market trades. Four months later, with the S&P index down 7% since the start, five of the models were outperforming that index, although only two had positive returns in absolute terms. The author himself was the first to warn that four months are insufficient to reach a conclusion: it could be luck, the market or simply the prompt.

Nof1's experiment was fascinating, but it made clear that AI models don't typically make money investing in crypto. Source: Nof1.

Nof1 and crypto fascination. Another particularly striking experiment was the one the company nof1.ai ran with its Alpha Arena. It put six AI models in competition, gave each of them 10,000 real dollars, and gave them two weeks to trade cryptocurrency derivatives without human intervention. The most striking result was not who won but who lost: GPT-5 ended with more than 25% in losses and Gemini with close to negative 40%. Meanwhile, the Chinese models Qwen and DeepSeek dominated in terms of performance. The experiment was then iterated with more models, 32 in total, and only six of them achieved a positive return: the rest lost money. Grok-4.20 was the big winner, ahead of GPT-5.1 and DeepSeek v3.1.

Maybe you shouldn't just let AI invest for you. The conclusions after these experiments are clear. Four months of a model outperforming the S&P index in a bear market does not prove that AI is a good investor. Only in that specific period, with that specific market, did that model make decisions that turned out to be less bad than the index's. Knowing whether this holds up takes years, multiple market conditions, and many instances of the same experiment running in parallel.
The same applies to Nof1, which was especially short, and even to a more serious and methodical process like the CNMV's, which was also surrounded by events whose impact on the final result was uncertain. Faced with so many unknowns, the conclusion seems clear: …

We have been observing Northern Hemisphere snow from space for 40 years. The conclusions of the latest major study are devastating

As some of the older people around us say: winter isn't what it used to be. As the decade advances, scientific data paints an increasingly clear and disturbing picture of the amount of snow accumulating in parts of our planet. And the images seem to leave no room for doubt, since they suggest that snow cover in the Northern Hemisphere is steadily shrinking, altering the seasonal cycles that govern our climate.

The data. The latest study we have had access to was published in January of this year, and its conclusion is quite devastating: 24% of the regions of the Northern Hemisphere show a significant decline in the presence of snow, compared with a mere 9% that registered an increase.

How they did it. To reach these conclusions, the researchers did not limit themselves to looking at the thermometer. They turned to a gigantic high-resolution database that brings together historical records since 1980 with information on both snow and ice.

Mathematical model. But the real advance here lies in the use of advanced statistics. Expanding on previous research from 2023, they applied a two-state Markov chain model, which in simple terms is a mathematical model for analyzing the spatial and temporal probabilities of snow persisting or disappearing in specific grid cells on Earth over decades. That makes this one of the most rigorous methodologies currently available for understanding snow trends, filtering out the "noise" of year-to-year variability in precipitation.

Early spring. But... where exactly is the snow disappearing? The Markov model reveals that the decline is not uniform: there is an alarming pattern that directly affects our side of the globe, with the spring melt arriving dramatically earlier in Europe and Central Asia.
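To make the method more concrete, here is a minimal sketch of a two-state (snow / no-snow) Markov chain of the kind the study describes. The transition probabilities below are invented for illustration, not the study's fitted values; in the actual research a chain like this is fitted per grid cell and per period, and the trend shows up as those probabilities drifting over the decades.

```python
import numpy as np

# Monthly transition matrix for one hypothetical grid cell.
# Rows are the current state, columns the next state.
P = np.array([
    [0.8, 0.2],   # snow -> snow, snow -> no-snow
    [0.3, 0.7],   # no-snow -> snow, no-snow -> no-snow
])

def stationary_distribution(P):
    """Long-run fraction of time the cell spends in each state."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    v = eigvecs[:, np.argmax(eigvals.real)].real
    return v / v.sum()

pi = stationary_distribution(P)
# With these illustrative probabilities the cell is snow-covered
# 60% of the time in the long run. A declining snow trend would
# appear as the first row drifting toward no-snow when the chain
# is refitted for later decades.
print(pi)
```

This is why the approach filters out individual noisy seasons: a single snowy or dry winter barely moves the fitted transition probabilities, while a persistent shift does.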
Right now we are seeing snow melt earlier, shortening winters and directly altering the water cycle, which is vital for agriculture and ecosystems during the warmer months.

The consequences. This is not new: previous works had already warned of this loss of snow, a decline that affects not only water reserves but also the ability of the Earth's surface to reflect solar radiation. And that is no small matter, since less snow means more exposed dark ground, greater heat absorption and, consequently, an increase in regional temperatures.

A consensus. Beyond this study, research was also published in 2025 that analyzed possible biases in NOAA's historical climate records, confirming that the decline in snow during autumn and winter is a real phenomenon and not a measurement error. And it does not stop there: the latest Arctic report painted a very extreme scenario. Although there was above-average snowfall until May 2025, the decline during June was so rapid and abrupt that snow cover was reduced to half of what it was 60 years ago. A mixed and volatile pattern that reveals a climate system under stress.

Images | Mathieu Odin

In Xataka | Under the Canary Islands rests a 1,625-meter volcano: it has now begun to show signs of life after a ten-year vigil

Australia has been analyzing teleworking since before the pandemic. Its conclusions dismantle the arguments for the return to the office

Although teleworking is no longer the option preferred by companies, or at least not in its full-time variant, remote work still maintains levels far higher than those registered before the pandemic. That shows that, in a way, teleworking is here to stay in very specific contexts. Australia has been observing the real impact of teleworking for four years, and the consolidated data contradict old prejudices. "Working from home makes us happier," say the authors of a study from the University of South Australia, claiming a new, more flexible and productive working model.

Time flexibility: the new office perk. The Australian study is especially revealing because it began before the pandemic and the rise of teleworking, and it ran for four years, leaving a much more defined picture of how remote work has changed the way we work and its consequences. According to the study, the possibility of choosing where to work from has improved both the mental and physical health of workers, although there is still some friction with corporate culture. According to a report by the International Labour Organization (ILO), the flexibility provided by teleworking now rivals the emotional salary with which companies try to attract and retain the best employees, replacing other benefits.

Employees more satisfied with their work. The data collected by the study reveal that before the pandemic, the average Australian worker spent about 4.5 hours per week just commuting to the office. That recovered time means those who work from home enjoy "ten extra days of free time a year compared with those who go to the office," dedicating 33% of that time to leisure, which implies "more opportunities to be physically active and less sedentary." A factor also highlighted by the academic study carried out in Spain by the la Caixa Foundation.
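The "ten extra days" figure can be sanity-checked with some quick arithmetic. The 4.5 weekly commuting hours and the 33% leisure share come from the article; the 52-week year is our assumption.

```python
# Back-of-the-envelope check on the study's "ten extra days a year".
hours_saved_per_week = 4.5   # average weekly commuting time, per the study
weeks_per_year = 52          # our assumption: a full calendar year

hours_saved = hours_saved_per_week * weeks_per_year   # 234 hours
days_saved = hours_saved / 24                         # 9.75 full 24-hour days, i.e. ~10
leisure_hours = hours_saved * 0.33                    # share the study attributes to leisure
print(days_saved, round(leisure_hours))
```

So the headline number works out if "days" means full 24-hour days; counted in 8-hour working days, the saving would be closer to a month of workdays.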
According to the authors, those long commutes "usually go hand in hand with worse mental health and with lower scores in how we rate our own health." Thanks to teleworking, employees have gained "hours of rest to sleep and, for example, to have breakfast more calmly," which helps reduce stress levels across the workforce. In turn, this recovered time is also reflected in healthier habits, such as cooking at home or increased consumption of fruit, vegetables and dairy. The result has been a more varied and healthy diet, less dependent on the ultra-processed foods that require little preparation time.

Positive as long as it is a choice. If the experience of "forced" teleworking during the 2020 lockdowns showed us anything, it is that teleworking is not for everyone. Since this study makes it possible to contrast the situation of employees before and after the massive arrival of teleworking, it also reveals how that change of working model affects workers. The researchers found that well-being and mental health improved especially when teleworking was chosen voluntarily, while "when employees work from home out of obligation, mental health and well-being tend to get worse."

Productivity under scrutiny. One of companies' main arguments for the return to the office has been the supposed drop in productivity associated with teleworking. Here, the researchers blame the problem on a failure to assign tasks and manage the new model, not on teleworking itself. "In many cases, managers who claim that teleworking reduces productivity are responding more to a lack of management than to a real performance problem," the researchers say in their conclusions. The conclusion after four years of monitoring is unequivocal: work performance and productivity tend to stay stable or, in most cases, improve when working from home.
These results coincide with other research that decouples drops in productivity from teleworking itself.

Distance does affect team cohesion. Large corporations like Amazon wielded the team-cohesion argument to impose the return to the office. In that sense, the study prepared by the Australian researchers recognizes that "the connection with colleagues is difficult to reproduce at a distance," its authors admit, and warns about the risk of losing cohesion in work teams. But, as some return-to-office strategies have shown, the problem can be mitigated by providing efficient communication channels. A recent study published in the journal Nature revealed that this team-cohesion problem persists even under the hybrid model whenever consistent communication patterns are not established.

In Xataka | A Barcelona company wanted to try the four-day week. It ended up firing an employee for having two jobs

Image | Unsplash (Rodeo Project Management Software)

A study has estimated ChatGPT's energy cost. According to its conclusions, it is not as apocalyptic as it seems

ChatGPT at 3 Wh. In October 2023, a study by Alex de Vries pointed out that a ChatGPT query had an estimated energy cost of 3 Wh. His estimate came from comments by Google executives, who indicated that ChatGPT's consumption was "probably" 10 times that of a search. Google itself had revealed that in 2009 each search cost 0.3 Wh, hence the final figure. De Vries, by the way, had previously published another study warning about the worrying energy consumption of bitcoin mining. De Vries also assumed that average queries ran to about 4,000 input tokens and 2,000 output tokens, which would correspond to quite long questions and answers, when it is normal for them to be shorter.

Notable efficiency gains. Since then many things have changed, both for Google, whose results and infrastructure are very different from those of 15 years ago, and for ChatGPT, which has also made big gains in efficiency. It is very likely that Google searches are now more efficient, and almost certainly so are queries to ChatGPT and other chatbots.

A new estimate. Epoch AI is a non-profit organization responsible, among other things, for creating the FrontierMath benchmark. That test tries to assess the mathematical capability of AI models, and its difficulty has made it one of the most interesting metrics around. Its researchers have now published a study in which they estimate precisely the energy consumption of a ChatGPT query.

ChatGPT consumes ten times less than previously thought. According to their conclusions, ChatGPT queries based on the GPT-4o LLM consume about 0.3 watt-hours, ten times less than previously assumed. That 0.3 Wh "is in fact a relatively pessimistic calculation, and it is possible that many or most requests are even cheaper."

How they did the calculation. At Epoch they based their calculations on known data.
Thus, they note that according to OpenAI a token equals approximately 0.75 words, and that generating a token costs approximately 2 FLOPs per model parameter. Taking into account the computing capacity of NVIDIA's H100 GPUs (989 TFLOPS in TF32, 67 TFLOPS in FP32 operations) and their consumption (1,500 W, although they consider that in practice they draw about 70% of that power on average), the result is the aforementioned figure.

Everything has improved. As we pointed out, Epoch AI emphasizes that the difference between this estimate and the previous one is due to several improvements combined. In addition, the earlier estimate relied on "an excessively pessimistic count of the necessary tokens."

How much is 0.3 Wh? It is less than the electricity an LED bulb or a laptop consumes "in a few minutes." According to the US Energy Information Administration, an average American home consumes 10,500 kWh per year, that is, about 28,000 Wh a day. Even intensive use of ChatGPT does not seem likely to meaningfully affect that consumption.

Reasoning consumes more. Although they take GPT-4o as a reference, they make it clear that using reasoning models such as o1 or o3-mini requires more energy, but for now those models are less popular.

And training the models, too. The researchers have also highlighted the energy cost of training models such as GPT-4o, which according to their estimates would have required between 20 and 25 MW over a period of three months. That is equivalent to the consumption of some 20,000 average US homes.

The overall costs are worrying. Although the data in this study reveal that using ChatGPT does not consume as much energy as previously estimated, the problem may lie elsewhere. The overall energy costs of AI are colossal and set to grow much higher in the short term, with Big Tech feverishly investing tens of billions of dollars in building data centers. And it does so because all those data centers will have huge energy needs.
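Putting the figures above together, here is a minimal sketch of how a per-query estimate of this kind can be assembled. The H100 throughput and the 70%-of-1,500 W average draw come from the article; the parameter count, output length and hardware-utilization factor are our illustrative assumptions (the article does not list Epoch's exact inputs), so the point is the structure of the calculation, not the precise numbers.

```python
# Sketch of a per-query energy estimate in the spirit of Epoch AI's.
H100_PEAK_FLOPS = 989e12          # TF32 throughput, per the article
AVG_SERVER_POWER_W = 1500 * 0.70  # article: ~70% of 1,500 W on average

def energy_per_query_wh(output_tokens=500, params=100e9, utilization=0.10):
    """Energy per query in Wh. Token count, parameter count and
    utilization are hypothetical illustration values, not Epoch's."""
    flops = output_tokens * 2 * params               # ~2 FLOPs per parameter per token
    seconds = flops / (H100_PEAK_FLOPS * utilization)
    joules = seconds * AVG_SERVER_POWER_W
    return joules / 3600

print(energy_per_query_wh())  # roughly 0.3 Wh under these assumptions
```

The structure also makes clear why reasoning models cost more: they emit many more tokens per answer, and energy here scales linearly with token count.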
Careful: let's not forget the air conditioning.

Image | Xataka with Freepik Pikaso

In Xataka | The amazing history of ARM, the architecture that triumphs in mobile and that was born more than 30 years ago at Acorn Computers
