The science of learning dismantles the mathematical rule behind the fashionable study method

When it comes to studying anything, almost all of us want a system that lets us learn quickly and efficiently. So we turn to the Internet, where countless pages promise almost miraculous systems for passing exams easily, and one of them is the 2-7-30 method. But what does science say about this system?

What is it about. This method is built around a simple rule: you review the information exactly 2, 7 and 30 days after studying it for the first time, something quite similar to what we try to achieve with flashcards. It seems simple enough to put into practice, although leaving a topic shelved for so many days before the final round can be unnerving. And it gives good results. But is it the best from the point of view of science? To understand that, we have to go back to the basics of how our memory works.

The spacing effect. The method is based on the spacing effect, which undoubtedly far surpasses the classic cramming session the night before an exam, where you try to absorb all the material in a matter of hours. A classic meta-analysis published in 2006 in Psychological Bulletin analyzed 839 measures from 317 experiments and confirmed that distributing practice over separate intervals dramatically improves retention. Even earlier studies had suggested that repeating material over time consolidates memory far more efficiently.

Retrieval practice. There is no point in spacing out the reviews if, when day 2 or day 7 arrives, we limit ourselves to passively rereading our notes. Different studies have shown that actively trying to recall information produces much more lasting learning than passively re-studying it. Forcing the brain to "rescue" the data strengthens neural connections, and science points to the advantage of active recall over more passive study techniques, such as making concept maps.

The enemy to beat. Reviewing at increasingly long intervals arises from the need to fight our natural decline in retention. This is where Hermann Ebbinghaus's work on the "forgetting curve" comes into play: it demonstrated that we lose most newly learned information within hours or days if we do nothing to retain it. More modern replications confirm that this rapid initial forgetting is real and useful for framing the problem, although researchers note that it depends on several factors and not only on the strict passage of time. The idea to keep is that every time we review the information, the forgetting curve resets and its slope becomes gentler, so the material takes longer to fade.

The myth of exact numbers. Although spacing study sessions has been shown to work, science does not identify 2, 7 and 30 days as a universally valid pattern for all learners and materials; the right intervals depend on many factors. A study published in 2008 showed that the optimal interval between reviews depends on the retention interval we are aiming for: the spacing changes radically if the goal is to remember something for an exam in a week versus remembering it for a year, as can happen with a competitive exam. The resulting pattern looks like this: if the exam is in one week, reviews should be separated by just 1 or 2 days; if the exam is in one year, reviews should be spaced several weeks or even a month apart.
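As a quick illustration of how little machinery the rule itself needs, here is a minimal Python sketch. The interval values for the one-week and one-year cases are illustrative choices consistent with the 2008 study's direction, not numbers prescribed by it:

```python
from datetime import date, timedelta

def review_schedule(first_study: date, gaps=(2, 7, 30)):
    """Review dates counted from the first study session (2-7-30 by default)."""
    return [first_study + timedelta(days=g) for g in gaps]

# Classic 2-7-30 pattern.
print(review_schedule(date(2025, 1, 10)))
# Exam in about a week: much tighter 1-2 day gaps.
print(review_schedule(date(2025, 1, 10), gaps=(1, 2)))
# Exam in about a year: gaps of several weeks to a month (illustrative values).
print(review_schedule(date(2025, 1, 10), gaps=(21, 50, 80)))
```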
Images | freepik
In Xataka | SQ3R technique: the study method that helps you understand the subjects, not just remember them

A new mathematical proof settles the debate over whether the universe is a simulation

What if everything we see, feel and experience is not real? It is one of the most fascinating ideas in science fiction and modern philosophy: the proposal that everything around us is actually a computer simulation run by some higher civilization, as if we were literally Sims. And the idea has gained such traction that science has stepped in to refute it.

The problem. The "simulation hypothesis" has gone beyond being a simple movie premise to become a serious debate in technology and physics circles. The argument is usually statistical: if a civilization can create one simulation of reality, it will probably create many. These simulations could in turn generate their own, and in this endless "stack" of realities the odds that our universe is the original one are almost non-existent. And although this has long been a topic mostly confined to philosophers, science has now entered the debate fully, treating it as a problem of fundamental physics and pure mathematics. The answer is quite clear: we are not in a simulation.

The study. An international team of physicists, including Dr. Mir Faizal of the University of British Columbia (UBC) and renowned physicist Dr. Lawrence M. Krauss, claims to have mathematically proven that the universe cannot be a computer simulation. Their findings, published in the Journal of Holography Applications in Physics, not only dispute the idea but reveal something much deeper about the nature of reality: the universe rests on a type of "understanding" that exists beyond the reach of any algorithm.

The reality. To understand this proof, we must first understand what "reality" is. Modern physics no longer sees the universe as tangible "matter" moving in empty space: Einstein merged space and time into spacetime, and quantum mechanics showed that the microscopic world is probabilistic. A leading line of research today, quantum gravity, suggests that space and time are not fundamental: they are "emergent", springing from something deeper, something more like pure information. In this picture, physicists assume that a "Theory of Everything" (ToE) unifying gravity and quantum physics would, in essence, be a large axiomatic system: a set of rules and algorithmic calculations from which the entire universe, including spacetime itself, could be "computed" and generated.

Incompleteness theorems. In 1931, logician Kurt Gödel demonstrated something that blew up the foundations of mathematics: any formal system (such as a computer program or a set of physical laws) complex enough to include basic arithmetic will be either incomplete or inconsistent. "Incomplete" means there will be true statements within the system that can never be proven following its own rules. It is like the famous paradoxical sentence "this statement is true, but it cannot be proven." Faizal's team argues that any purely algorithmic ToE would suffer from this limitation: there would always be "Gödelian truths" about the physics of the universe (perhaps about specific microstates of black holes or the nature of singularities) that such a computational system could not prove.

Two layers. If the algorithmic universe is "incomplete", how does our reality seem to work? The researchers propose that reality is not only the algorithm: beneath it lies a layer of non-algorithmic "understanding". This is what allows the universe to "know" that those Gödelian truths are true, even though the algorithm alone cannot prove them. It is a fundamental layer of reality that transcends simple computation.
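For reference, the theorem the whole argument leans on can be written out. This is the standard textbook statement of Gödel's first incompleteness theorem, not the paper's own notation:

```latex
% Gödel's first incompleteness theorem (standard form, not the paper's notation):
% any consistent, recursively axiomatizable theory T extending basic arithmetic
% (Robinson's Q) leaves some sentence G_T undecided.
\text{If } T \supseteq \mathsf{Q} \text{ is consistent and recursively axiomatizable, then}
\quad \exists\, G_T :\; T \nvdash G_T \;\text{ and }\; T \nvdash \neg G_T .
```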
The final proof. With all the pieces on the table, the refutation of the simulation hypothesis becomes clear and elegant. First, every simulation is algorithmic: a computer executes a program following precise rules that leave no room for ambiguity. This collides head-on with the incompleteness described above, since any rule-based system rich enough to describe physics contains truths it cannot prove. Second, the scientists point out that an algorithm can only simulate the algorithmic part, meaning a computer could at best emulate the computational, incomplete part of our universe. And most importantly, our universe is more than an algorithm: as the argument built on Gödel's theorems holds, complete physical reality must include a non-algorithmic layer to be consistent and complete.

Images | Compare Fiber
In Xataka | Exactly 100 years ago we began to understand how the world works. Quantum physics has radically changed our lives

The end of unsolved mathematical problems

Current artificial intelligence (AI) models are good at mathematics. In fact, in October 2024 Meta AI managed to generalize the search for Lyapunov functions. The Russian mathematician Aleksandr Lyapunov proposed the concept of the function that bears his name in 1892. His work is a very important tool in the study of dynamical systems, but mathematicians have struggled ever since to find a general method for identifying Lyapunov functions. They have not succeeded. Meta AI, however, has.

This is far from the only recent success of AI models in the field of mathematics. Sergei Gukov, professor of theoretical physics and mathematics at the California Institute of Technology (Caltech), leads a team of researchers looking for ways to use this technology to solve advanced mathematical problems that require thousands, millions or even billions of steps. These scientists are currently working on the Andrews-Curtis conjecture, a problem in combinatorial group theory proposed some 60 years ago.

Google's and OpenAI's AI models have won gold at the mathematics Olympiad

Gukov and his team have not yet managed to settle the main conjecture, but with the help of AI they have achieved something important: they have ruled out several families of potential counterexamples to the Andrews-Curtis conjecture that had remained open for more than 25 years. Gukov acknowledges that current AI models have important limitations when facing very complex mathematical problems, but he hopes that in the future this technology will allow humans to solve the Millennium Prize Problems. According to this mathematician, the best asset researchers have for facing this challenge is to train AI through reinforcement learning (see the sketch at the end of this article).

In any case, something important has just happened. As we anticipated in the headline of this article, Google's and OpenAI's AI models have won gold at the International Mathematical Olympiad. Both managed to solve five of the six problems posed, using general-purpose reasoning models capable of processing mathematical concepts expressed in natural language. This strategy is different from the one AI companies have previously used in mathematical competitions. According to SCMP, an expert the outlet consulted argues that the speed at which AI models are developing suggests they are less than a year away from being used to solve some mathematical problems that still have no solution. As we have seen, Sergei Gukov defends the same idea, although he has not ventured to specify when AI will begin to crack the problems that have occupied mathematicians for decades. Who knows, perhaps the solution to the Millennium Problems is close. Hopefully.

Image | Jesus Thomas
More information | SCMP
In Xataka | These two problems have baffled mathematicians for decades. A genius has solved them with a stroke
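To make that search concrete, here is a minimal, hypothetical Python sketch of the kind of move-based exploration Andrews-Curtis work involves: relators are encoded as strings, a subset of the AC-moves generates neighbouring presentations, and a brute-force breadth-first search stands in for the learned reinforcement-learning policy that lets Gukov's team go billions of steps deep. The encoding, move set and target check are all illustrative assumptions, not the team's code:

```python
from collections import deque

# Relators over generators a and b; uppercase letters denote inverses (A = a^-1).

def free_reduce(word: str) -> str:
    """Cancel adjacent inverse pairs such as 'aA' or 'Bb'."""
    out = []
    for c in word:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def inverse(word: str) -> str:
    """Inverse of a group word: reverse it and invert every letter."""
    return word[::-1].swapcase()

def moves(r: str, other: str):
    """Neighbours of relator r under a subset of the AC-moves."""
    yield free_reduce(r + other)                 # r -> r * other
    yield inverse(r)                             # r -> r^-1
    for g in "abAB":                             # r -> g * r * g^-1 (conjugation)
        yield free_reduce(g + r + inverse(g))

def trivialize(r1: str, r2: str, max_states: int = 100_000) -> bool:
    """Brute-force BFS for an AC-trivialization; an RL policy replaces this
    exhaustive frontier when the move sequences get astronomically long."""
    seen = {(r1, r2)}
    queue = deque([(r1, r2)])
    while queue and len(seen) < max_states:
        a, b = queue.popleft()
        if {a, b} == {"a", "b"}:                 # reached the trivial presentation
            return True
        nbrs = [(m, b) for m in moves(a, b)] + [(a, m) for m in moves(b, a)]
        for nxt in nbrs:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(trivialize("ab", "b"))  # toy presentation of the trivial group -> True
```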

AI is already our best ally for solving mathematical problems that seem impossible

The applications of artificial intelligence (AI) seem virtually unlimited. Beyond the everyday uses many of us are already familiar with, it is being applied to drug design, disease diagnosis, the optimization of industrial processes and the analysis of complex physical or chemical mechanisms, among other fields. It is even being used to solve mathematical problems of enormous difficulty. Algorithms built on deep neural networks and machine learning are designed to identify complex patterns in large volumes of information, which allows them to recognize images and speech or to process natural language remarkably well. AI has arrived in our lives and is clearly here to stay, but the most surprising thing is that it is consolidating itself as an extremely valuable tool in relatively exotic fields.

It is possible that AI will help us solve the Millennium Problems

In October 2024 Meta AI, Meta's artificial intelligence, managed to generalize the search for Lyapunov functions. The Russian mathematician Aleksandr Lyapunov proposed the concept of the function that bears his name in 1892. His work is a very important tool in the study of dynamical systems, but mathematicians have struggled ever since to find a general method for identifying Lyapunov functions. They have not succeeded. Meta, however, has.

Our mathematical knowledge will no longer be limited by intuition and human capacity

The strategy used by the company led by Mark Zuckerberg to crack the challenge of Lyapunov functions consisted of training an AI model to recognize patterns and relationships between dynamical systems and their corresponding Lyapunov functions. This is precisely what AI is good at. And it is a huge success, because our mathematical knowledge will no longer be limited by intuition and human capacity: AI puts in our hands a new way of tackling complex mathematical problems, identifying patterns that would otherwise remain hidden from us (a sketch of the easy half of this workflow, checking a candidate function, appears at the end of this article).

However, in the field of mathematics AI still has to improve before it can help us meet the great challenges ahead. Sergei Gukov, professor of theoretical physics and mathematics at the California Institute of Technology (Caltech), leads a team of researchers looking for ways to use AI to solve advanced mathematical problems that require thousands, millions or even billions of steps. These scientists are currently working on the Andrews-Curtis conjecture, a problem in combinatorial group theory proposed 60 years ago. They have not yet managed to settle the main conjecture, but with the help of AI they have achieved something important: they have ruled out several families of potential counterexamples to the Andrews-Curtis conjecture that had remained open for more than 25 years. Gukov acknowledges that current AI models have important limitations when facing very complex mathematical problems, but he hopes that in the future this technology will allow humans to solve the Millennium Prize Problems. According to this mathematician, the best asset researchers have for facing this challenge is to train AI through reinforcement learning.

Image | Generated by Xataka with Dall-e
More information | IEEE Spectrum
In Xataka | These two problems have baffled mathematicians for decades. A genius has solved them with a stroke
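Proposing a Lyapunov function is the hard, creative step that Meta's model learned; verifying a proposed candidate is comparatively mechanical. Here is a minimal Python sketch of that verification step for a toy linear system; the system, the candidate matrix and the sampling scheme are all illustrative assumptions, not Meta's setup:

```python
import numpy as np

# Toy stable linear system dx/dt = A x (an illustrative choice, not Meta's data).
A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])

# Candidate Lyapunov function V(x) = x^T P x. This P solves the Lyapunov
# equation A^T P + P A = -I, so the checks below pass by construction.
P = np.array([[1.5, 0.5],
              [0.5, 1.0]])

def V(x):
    """Candidate Lyapunov function: a quadratic form."""
    return x @ P @ x

def V_dot(x):
    """Derivative of V along the flow: x^T (A^T P + P A) x."""
    return x @ (A.T @ P + P @ A) @ x

# Sample points away from the origin and test both Lyapunov conditions:
# V(x) > 0 (positive definite) and dV/dt < 0 (decreasing along trajectories).
rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(1000, 2))
samples = samples[np.linalg.norm(samples, axis=1) > 1e-2]
ok = all(V(x) > 0 and V_dot(x) < 0 for x in samples)
print("candidate passes the sampled Lyapunov conditions:", ok)
```

A sampled check like this only gives evidence, not a proof; for the quadratic case the conditions can be certified exactly by checking that P and -(A^T P + P A) are positive definite.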
