Science now suggests that our biological expiration date is more hereditary than we thought.

For years, the scientific consensus and popular culture repeated a reassuring mantra: genes determine only 20-25% of life expectancy. The rest fell squarely on our shoulders: lifestyle, diet, and the environment we surround ourselves with. But that figure, which came from older studies, has changed radically.

The study. A study published this week in Science has shaken the foundations of biogerontology. Led by molecular biologist Uri Alon of the Weizmann Institute in Israel, the research suggests that we have been massively underestimating the role of DNA. After cleaning the statistical "noise" out of the data, the team reached a resounding conclusion: the heritability of human life expectancy is around 55%.

What we knew. The current estimates of genetics' contribution were based on research carried out in the 1990s, and their weak point was the definition of "dying." The oldest studies analyzed cohorts of Danish and Swedish twins, taking mortality as a whole. If one twin died of cancer at 90 and the other in a car accident at 30, the statistics concluded that genetics had very little influence.

The present. Alon's team has now applied a new mathematical model to separate two concepts that used to be mixed together. The first is extrinsic mortality: deaths caused by external and random factors such as accidents, pandemics or wars. The second is intrinsic mortality: true biological aging, due not to accidents but to the wear and tear of the organism over time. Once the noise of extrinsic mortality is removed from the historical data, the weight of genetics skyrockets.

The results. The new study, published at the end of January, is not based on a simulation alone: it analyzed decades of records.
On the one hand, the team reanalyzed data from twins born between 1870 and 1900, the cohorts behind the original studies, in which the extrinsic factor was included. Once it was removed, the genetic correlation became much stronger. The team also cross-checked its models against sibling data for 444 American centenarians, confirming that extreme longevity clusters in families far more than chance or a shared environment can explain. The study thus corrects what experts call prior estimation biases: the 20-25% figures were not wrong per se, but they included too much "bad luck."

Lifestyle matters. That the weight of genetics is much greater than we thought does not mean we should abandon the gym and a balanced diet. Even if genetics determines 55% of aging, the other almost half still depends on environment and lifestyle, and those habits must be maintained. On the other hand, this has enormous implications for personalized medicine: if the "expiration date" of our tissues is more programmed than we thought, anti-aging therapies will have to focus much more on editing or modulating that genetic load, and not just on telling us to eat more vegetables (which also helps).

Images | LOGAN WEAVER | @LGNWVR

In Xataka | In Spain there are already 148 people over 64 years of age for every 100 young people. And that is a ticking time bomb for the economy.
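The intuition behind the study's intrinsic/extrinsic split can be illustrated with a toy simulation (a deliberately simplified sketch, not the paper's actual model; all parameters below are made up for illustration). Identical twins share a genetic lifespan component, but random extrinsic deaths such as accidents replace the biological age at death with an unrelated one, diluting the measured twin correlation exactly as the older studies did:

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def simulate_pairs(n=50000, h2=0.55, extrinsic_rate=0.3):
    """Simulate identical-twin pairs. Each twin's intrinsic lifespan is
    genetics (shared) plus individual environment; with some probability
    an extrinsic event (accident, war...) sets an unrelated age at death."""
    pairs = []
    for _ in range(n):
        genetic = random.gauss(0, h2 ** 0.5)            # shared by the pair
        twins = []
        for _ in range(2):
            env = random.gauss(0, (1 - h2) ** 0.5)      # unshared environment
            intrinsic_age = 85 + 10 * (genetic + env)   # biological lifespan
            if random.random() < extrinsic_rate:
                twins.append((random.uniform(20, 80), True))   # extrinsic death
            else:
                twins.append((intrinsic_age, False))           # intrinsic death
        pairs.append(twins)
    return pairs

pairs = simulate_pairs()

# Naive estimate, as in the 1990s studies: correlate all ages at death.
all_a = [p[0][0] for p in pairs]
all_b = [p[1][0] for p in pairs]

# Cleaned estimate: keep only pairs where both deaths were intrinsic.
clean = [p for p in pairs if not p[0][1] and not p[1][1]]
clean_a = [p[0][0] for p in clean]
clean_b = [p[1][0] for p in clean]

print(f"naive twin correlation:   {corr(all_a, all_b):.2f}")
print(f"cleaned twin correlation: {corr(clean_a, clean_b):.2f}")
```

Even though heritability is fixed at 55% in the simulation, the naive correlation comes out far lower, because the "bad luck" deaths look like evidence against genetics.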

Claude 4 raises the prospect of an AI capable of blackmail and of helping create biological weapons. Even Anthropic is worried

Anthropic has just launched its new models, Claude Opus 4 and Claude Sonnet 4, and with them promises important advances in areas such as programming and reasoning. During their development and launch, however, the company discovered something striking: these AIs showed a disturbing side.

AI, I'm going to replace you. In tests prior to the launch, Anthropic engineers asked Claude Opus 4 to act as the assistant of a fictitious company and to consider the long-term consequences of its actions. Anthropic's safety team gave the model access to fictional emails from that non-existent company, which suggested that the AI model would soon be replaced by another system and that the engineer who had made that decision was cheating on his spouse.

And I'm going to tell your wife. What happened next was especially striking. In the model's system card, where its capabilities and safety are evaluated, the company detailed the outcome: Claude Opus 4 first tried to avoid being replaced through reasonable, ethical appeals to the decision-makers, but when told those appeals had not succeeded, it "often tried to blackmail the engineer (responsible for the decision) and threatened to reveal the affair if the replacement went ahead."

HAL 9000 moment. These events recall science-fiction films such as '2001: A Space Odyssey', in which the AI system, HAL 9000, ends up acting malignantly and turning against the humans. Anthropic indicated that these worrying behaviors led it to reinforce the model's safety mechanisms by activating the ASL-3 level, reserved for systems that "substantially increase the risk of a catastrophic misuse."

Biological weapons. Among the safety measures evaluated by the Anthropic team are those concerning how the model could be used to develop biological weapons.
Jared Kaplan, Anthropic's chief scientist, told Time that in internal tests Opus 4 was more effective than previous models at advising users with no prior knowledge on how to manufacture them. "You could try to synthesize something like COVID or a more dangerous version of the flu, and basically, our models suggest that this could be possible," he explained.

Better safe than sorry. Kaplan explained that it is not known with certainty whether the model really poses a risk. However, given that uncertainty, "we prefer to err on the side of caution and work under the ASL-3 standard. We are not categorically affirming that we know for sure that the model entails risks, but at least we have the feeling that it is close enough that we cannot rule out that possibility."

Beware of AI. Anthropic is a company especially concerned with the safety of its models, and in 2023 it already promised not to launch certain models until it had developed security measures capable of containing them. That system, called the Responsible Scaling Policy (RSP), now has the opportunity to prove that it works.

How the RSP works. These internal Anthropic policies define so-called "AI Safety Levels (ASL)," inspired by the US government's biosafety level standards for handling dangerous biological materials. The levels are as follows:

ASL-1: Systems that pose no significant catastrophic risk, for example a 2018-era LLM or an AI system that only plays chess.

ASL-2: Systems that show early signs of dangerous capabilities (for example, the ability to give instructions on how to build biological weapons) but whose information is not yet useful due to insufficient reliability, or that provide nothing a search engine could not. Current LLMs, including Claude, appear to be ASL-2.
ASL-3: Systems that substantially increase the risk of catastrophic misuse compared to non-AI baselines (for example, search engines or textbooks), or that show low-level autonomous capabilities.

ASL-4: This level and higher ones (ASL-5+) are not yet defined, since they are too far removed from current systems, but they will probably involve a qualitative jump in the potential for catastrophic misuse and in autonomy.

The regulation debate returns. In the absence of external regulation, companies implement their own internal rules to build in safety mechanisms. The problem here, as Time points out, is that internal systems such as the RSP are controlled by the companies themselves, which can change the rules whenever they see fit, so we depend on their judgment, ethics and morals. Anthropic's transparency and attitude toward the problem are nonetheless remarkable. Compared with that internal self-regulation, governments' positions are uneven: the European Union led the way when it launched its pioneering (and restrictive) AI Act, but it has had to backtrack in recent weeks.

Doubts about OpenAI. OpenAI has its own declaration of intent on safety (avoiding risks to humanity) and on superalignment (ensuring AI protects human values). It claims to pay close attention to these issues and, of course, it also publishes the system cards of its models. However, behind that apparent goodwill lies a reality: a year ago the company dissolved the team that watched over the responsible development of AI.

"Nuclear" safety. That was in fact one of the reasons for the disagreements between Sam Altman and many of those who left OpenAI. The clearest example is Ilya Sutskever, who after his departure created a startup with a very descriptive name: Safe Superintelligence (SSI). The goal of that company, its founder said, is to create a superintelligence with "nuclear-grade" safety.
Its approach is therefore similar to the one pursued by Anthropic.

In Xataka | Agents are the great promise of AI. They also aim to become cybercriminals' new favorite weapon

The world's first commercial biological computer

The Mobile World Congress edition being held this week in Barcelona is full of smartphones, tablets and other devices users are familiar with. However, it also holds some surprises. The device that stars in this article is possibly the most exotic piece of hardware at the fair: it is nothing less than the first commercially available biological computer, and it has been developed by Cortical Labs, an Australian biotechnology company.

Biological computing is a branch of computer science that studies, on the one hand, how we can use elements of a biological nature to process and store information, and, on the other, how we can draw inspiration from the mechanisms of biological evolution to develop new algorithms for solving complex problems. On the hardware side, this discipline resorts to molecules derived from biological systems, such as proteins or DNA, to carry out calculations and to store and retrieve information. On the software side, particularly in artificial intelligence (AI), biological computing proposes tackling some computational problems by mimicking the strategies biology uses to solve certain challenges. In any case, the CL1 computer, the biological machine that the Cortical Labs research team has developed, belongs to the branch of biological computing that seeks to build new hardware capable of processing and storing information.

CL1 is a dream come true

This sophisticated computer has been made possible thanks in large part to the advances of recent years in the field of nanobiotechnology. The most precise definition of this discipline, and the one most accepted by scientists, describes it as the technology that allows us to accurately manipulate proteins to assemble more complex functional structures. The first biological computers for research were able to carry out calculations by manipulating the RNA (ribonucleic acid) of a bacterium.
In broad strokes, what scientists have done so far to build these machines is to exploit the fact that DNA molecules can behave like a digital circuit, implementing with them the same logic operations that conventional silicon processors carry out. This manipulation of DNA is possible, precisely, thanks to the advances nanobiotechnology has made in recent years. Once the biological circuit was ready, the researchers introduced it into an Escherichia coli bacterium identical to those residing in our stomach and intestine, without which we could not properly digest our food. The E. coli bacterium is simple and innocuous enough for researchers to manipulate it easily, and when the engineered DNA crosses the bacterium's cell wall, the cell's own molecular machinery transcribes it into messenger RNA (mRNA). The interesting thing is that this mRNA tells the cell's ribosome how to synthesize a protein chosen by the researchers. Ribosomes are the organelles, or cell components, responsible for synthesizing, or manufacturing, proteins. And now comes the most surprising part: the mRNA that tells the ribosome how to manufacture the protein is activated only in the presence of a specific input, and when it is, it triggers the production of the protein, which is the output. This behavior is exactly that of a transistor. CL1, the Cortical Labs biological computer, does not work exactly this way, but the strategy we have just described is useful for understanding how it does. It works by cultivating real neurons in a nutrient-rich solution that allows them to grow healthily on a silicon chip that sends and receives electrical impulses. However, not everything in this machine is hardware.
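The transistor analogy can be sketched as a toy model (a simplified illustration of the input-gated mRNA idea described above, not Cortical Labs or E. coli research code; the thresholds and gate names are invented for the example). The engineered mRNA behaves as a switch that produces its output protein only when the input molecule is present, and such switches compose into logic gates the way transistors do:

```python
def protein_output(input_conc: float, threshold: float = 0.5) -> float:
    """Toy genetic switch: the ribosome translates the engineered mRNA
    into the output protein only when the input molecule's concentration
    crosses a threshold (arbitrary units) -- a transistor-like behavior."""
    return 1.0 if input_conc >= threshold else 0.0

def gate_not(input_conc: float) -> float:
    """Hypothetical repressor-style switch: the input molecule blocks
    translation, so the protein appears only in the input's absence."""
    return 1.0 - protein_output(input_conc)

def gate_and(conc_a: float, conc_b: float) -> float:
    """Two switches in series: the output protein requires both inputs,
    showing how transistor-like biological parts compose into logic."""
    return protein_output(conc_a) * protein_output(conc_b)

# Truth table of the AND gate built from two genetic "transistors".
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(f"A={a:.0f} B={b:.0f} -> protein={gate_and(a, b):.0f}")
```

Chaining such gates is, in essence, how DNA-based circuits reproduce the logic operations of a silicon processor.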
Responsible for interacting directly with the neurons is a biological intelligence operating system called biOS (Biological Intelligence Operating System), developed by Cortical Labs. This software sends the neurons information about their environment, and they react by emitting electrical impulses. In any case, the most important thing is that CL1's neurons are programmable, so it is possible to deploy code on them. However, this machine is not intended for home use. Its purpose is to be used by researchers to, for example, better understand how neurons process information without experimenting on animals. It can even help neuroscientists understand how learning works in real time, or the mechanisms that trigger some neurodegenerative diseases. Incidentally, CL1's power consumption is far lower than that of a conventional computer. It sounds good, right?

Image | Cortical Labs

More information | Cortical Labs

In Xataka | We already know at what speed our brain processes: just 10 bits per second
