Quantum computers will break classical cryptography sooner than expected

Just two weeks ago, researchers from the California Institute of Technology (Caltech), the University of California, Berkeley, and the startup Oratomic published a preliminary scientific paper exploring the capabilities of neutral-atom quantum computers. These machines are an alternative to quantum computers built on superconducting qubits or ion traps, and they are still at an experimental stage. Even so, the authors estimate that Shor's algorithm could be implemented on a quantum computer equipped with between 10,000 and 20,000 neutral-atom qubits. In fact, the paper even proposes a design that, in theory, could break Bitcoin's encryption in a few days using 26,000 neutral-atom qubits.

In any case, these researchers are not the only ones who have warned us in recent weeks about the ability quantum computers will acquire, in a relatively short time, to break classical cryptography. At the end of last March, Google's Quantum AI group published a study showing that the elliptic curve cryptography used by Bitcoin and Ethereum, among other cryptocurrencies, can be broken using far fewer resources than initially estimated. According to these researchers, a quantum computer with fewer than half a million physical qubits will be able to break the algorithms used by current cryptocurrencies in a matter of minutes. In short, the scientific community agrees that classical encryption technologies will be vulnerable once large-scale quantum hardware arrives.

The first steps to protect ourselves have already been taken. Quantum computing experts have known for years that quantum computers will eventually defeat classical cryptography. A first warning shot came in May 2024.
A team of researchers from Shanghai University (China), led by Professor Wang Chao, used a D-Wave quantum computer to successfully attack SPN (Substitution-Permutation Network) encryption, a cryptographic construction used to encrypt information. This construction is the cornerstone of, for example, the widely used AES standard (Advanced Encryption Standard). The scientists published their results in a paper titled "Quantum Processing-Based Public Key Cryptographic Attack Algorithm with the D-Wave Advantage."

But this is not all. In mid-May 2025, several Google researchers published a post on the company's security blog defending a crucial claim: a 2,048-bit RSA (Rivest–Shamir–Adleman) integer can be factored in less than a week by a quantum computer with fewer than a million qubits.

Bitcoin, Ethereum, Solana and the other modern cryptocurrencies use a technique known as elliptic curve cryptography, which is more robust, efficient and harder to break than RSA, but whose mathematical foundations are similar to those of the latter algorithm. In fact, according to the Google scientists behind the article mentioned above, if future quantum computers can break RSA encryption more easily than initially expected, elliptic curve cryptography will also fall relatively easily.

So far we have talked about cryptocurrencies, but it is crucial not to overlook the fundamental role encryption plays in our daily lives. WhatsApp and Telegram use it to encrypt our messages; banks rely on it to protect our transactions; and every time we buy something online, encryption protects our credit card details. These are just some of this technology's applications.
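Why factoring breaks RSA is easy to see with toy numbers: the public key hides two primes whose product anyone can read, and recovering those primes recovers the private key. Below is a minimal illustrative sketch in Python with deliberately tiny, insecure parameters; real RSA uses 2,048-bit moduli that classical trial division cannot touch, which is exactly the step Shor's algorithm would perform in polynomial time on a quantum computer.

```python
from math import isqrt

# Toy RSA key: two small primes (real keys use ~1024-bit primes each)
p, q = 61, 53
n = p * q            # public modulus (3233)
e = 17               # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)  # encrypt with the public key only

# An attacker who can factor n recovers the private key. Trial
# division stands in here for Shor's algorithm.
def factor(m):
    for i in range(2, isqrt(m) + 1):
        if m % i == 0:
            return i, m // i

fp, fq = factor(n)
d_recovered = pow(e, -1, (fp - 1) * (fq - 1))
print(pow(cipher, d_recovered, n))  # 42 — plaintext recovered
```

The entire security of the scheme rests on `factor(n)` being infeasible at real key sizes; nothing else in the code is secret.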
The threat quantum computers pose to encryption technologies is very real, but there is no reason to panic, because many researchers have been working on this challenge for several years. In fact, most of the theoretical work is already done. In 2024, the US National Institute of Standards and Technology (NIST) published an initial set of standards that includes a post-quantum key-encapsulation mechanism and several post-quantum digital signature schemes. The work already done suggests that by the time cryptographically relevant quantum computers appear, the technologies able to protect our information will be ready.

Image | Generated by Xataka with Gemini
More information | arXiv | Google
In Xataka | We already know what the chips arriving through 2039 will look like. The machine that will allow them to be manufactured is close

Apple's foldable iPhone is hitting greater engineering challenges than expected

That Apple will launch a foldable iPhone is a rumor that has been circulating for years without materializing. According to the latest information from Nikkei Asia, Apple is already running engineering tests on its foldable phone, but they are not going as expected.

First tests, first problems. According to sources in Apple's supply chain, the foldable iPhone has entered the testing phase required before mass manufacturing can begin. However, more failures have appeared than initially anticipated, and Apple will need more time to adjust the design and manufacturing processes.

A critical moment. April and May are an "extremely critical" period for passing these engineering tests. The foldable iPhone is currently in the engineering validation testing (EVT) phase, a crucial step in ensuring the device can be mass-produced smoothly. According to Nikkei, Apple's plan is to produce between 7 and 8 million foldable iPhones, which represents 10% of the total volume of the new range, and to launch this year; but if the device does not pass this phase in time, the entire calendar could be at risk and the launch pushed to next year.

The market is eagerly waiting. Foldables started out as niche devices, almost a rarity, but the market has been growing year after year and, according to IDC, it will grow 30% year-on-year in 2026. One of the arguments IDC gives to support that figure is, precisely, the launch of the long-awaited foldable iPhone. According to the firm's head of devices: "This launch is likely to boost awareness of the category and generate interest among consumers. Apple is often a catalyst for widespread adoption of new categories." Perhaps they will have to keep waiting another year.

The promise that never arrives. As we said, the foldable iPhone rumor has been circulating for years. It started around 2021, when analysts said it would arrive in 2023. That never happened, but nothing quelled the rumors.
Along the way, Samsung, Huawei, Honor and OPPO have launched several generations of foldable phones, refining their designs to achieve ultra-thin bodies and better screens. In this sense, the longer the foldable iPhone is delayed, the higher the bar gets.

What we think we know about the foldable iPhone. There have been many leaks, but one of the largest to date came a few months ago. According to the leaked data, the foldable iPhone will have a book format (like the Samsung Galaxy Fold) with a 7.58-inch inner screen and a 5.25-inch outer screen. The design will be ultra-thin and will drop Face ID in favor of Touch ID on the side button, so the phone can be unlocked whether open or closed.

In Xataka | iPhone 17e, analysis: the best and the worst of Apple in a phone whose restraint is not only in the price
Cover image | Concept by Ben Geskin

Google sets a date for "Q-Day": the moment when quantum computing can break current cryptography is coming sooner than expected

The arrival of quantum computing brings us closer to an exciting horizon. It is a paradigm shift: where classical computing is based on bits that are either 0 or 1, quantum computing uses qubits that can be in both states at once. Translation: where classical computing does operations one at a time, quantum computing does many simultaneously. This opens up an ocean of possibilities, and it will also allow current encryption systems to be broken in a matter of seconds. Google has spent a decade getting ready, and has now set a date for its arrival: 2029.

PQC. It stands for post-quantum cryptography: a set of encryption algorithms designed to resist attacks by quantum computers, so that data that must stay encrypted, such as keys and digital signatures, remains protected in the long term. These complex mathematical algorithms are designed to run on classical computers; it is not the hardware that gets updated, but the security. Quantum cryptography is a different, more experimental approach: it uses the full potential of quantum physics to achieve theoretically unbreakable security. The one that matters right now is post-quantum cryptography, and it makes perfect sense: classical and quantum computers will coexist, and what is needed is to update encryption systems so that companies keep their classical computers but with security that resists quantum attacks.

Q-Day. Companies have been preparing for this for a long time and, as we said, Google is one of them. It has been investing in post-quantum cryptography since 2016, migrating some key exchange systems for internal traffic to the post-quantum standard. A while ago, it claimed that key exchange within Google services is now quantum-resistant by default. Proton is also working on it.
So as not to leave it as a pending task that never gets finished, Google has just set itself a deadline to complete the transition: by 2029 it must finish migrating its security to PQC systems. In fact, on its blog, the company has announced that Android 17 will integrate an algorithm providing quantum-resistant signatures to protect the integrity of the boot software. It is a way of saying "hey, we are already preparing," but at heart it is a commitment to that security for a time that is near. And it will not just be the boot system: applications will be able to generate and verify post-quantum signatures inside the devices' secure hardware, and Google Play itself will begin generating secure keys for applications that opt into the program during the new system's launch cycle.

The industry prepares. Alongside the announcement, the company urged the rest of the technology industry and governments to step up the adoption of these more resistant encryption systems. And although Google has been crying wolf for several years, it is not alone. Microsoft wants to start migrating its systems by 2029 and finish in 2033. US federal agencies are targeting the 2030-2035 window, and the European Commission has urged member states to make critical infrastructure resilient by the end of 2030. With this move, Google has set a date that seems ambitious and amounts to a declaration of intent. "It is our responsibility to set an example and share an ambitious schedule," says Google. It is also evident that, as a digital infrastructure provider, offering a post-quantum security system before anyone else gives you a competitive advantage: if someone does not arrive on time, they can always buy your services. Companies like Telefónica are also working on it, although when we spoke to them they did not give us an indicative date.
What they did say is that they are starting to see parts of the industry take an interest in their post-quantum cryptography services.

Don't panic. The fact that the arrival of quantum computing is a headache for everything encrypted (blockchains and cryptocurrencies, banking data and transactions, even messaging apps) does not mean we have to panic. A few months ago, Keith Martin, professor in the Information Security Group at Royal Holloway, University of London, noted that although the threat is real, researchers have been working on it for years and most of the theoretical work is done. When cryptographically relevant quantum computers appear, the protection technologies will already be ready and we will not have to worry. At the user level, in fact, there is little we can do: we are not going to be the ones with a quantum computer at home to encrypt our own information. Essentially, as I said a few lines ago, this is Google saying "get ready, because this is coming and, as an industry, we have to prepare." And it has already set a date. There is not much time left...

Image | Xataka
In Xataka | Putin compared the quantum race to the nuclear race of the Cold War. China has just taken a leap in that war of the future
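One reason post-quantum signatures can run on today's classical hardware, as the article notes, is that some of them need nothing more exotic than a hash function. A minimal illustrative sketch of a hash-based one-time signature (a Lamport signature, a classic precursor of the hash-based schemes that have been standardized; this toy version signs a single message and a key must never be reused):

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # Two rows of 256 random secrets: row 0 for message bit 0, row 1 for bit 1.
    sk = [[secrets.token_bytes(32) for _ in range(256)] for _ in range(2)]
    pk = [[H(s) for s in row] for row in sk]  # public key = hashes of the secrets
    return sk, pk

def bits(digest: bytes):
    # The 256 bits of a SHA-256 digest, most significant bit first.
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes):
    # Reveal the secret matching each bit of the message hash.
    return [sk[b][i] for i, b in enumerate(bits(H(message)))]

def verify(pk, message: bytes, signature) -> bool:
    return all(H(s) == pk[b][i]
               for i, (b, s) in enumerate(zip(bits(H(message)), signature)))

sk, pk = keygen()
sig = sign(sk, b"post-quantum hello")
print(verify(pk, b"post-quantum hello", sig))  # True
print(verify(pk, b"tampered message", sig))    # False
```

The scheme's security rests only on the hash function's preimage resistance, which a quantum computer weakens merely quadratically (Grover) rather than breaking outright (Shor), which is why hash-based designs are considered quantum-resistant.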

Three findings about astronauts’ blood have set off all the alarms. Going to Mars will be more dangerous than expected

We do not want to admit it, we are not willing to accept it, we refuse to see it; but no, we are not made for space. And our persistence, in the context of large, long-duration crewed missions, could cost us dearly. The latest reminder comes from our blood.

The blood? Indeed. Three recent findings (accelerated destruction of red blood cells, platelet dysfunction in microgravity, and somatic mutations in hematopoietic stem cells) make it clear that we still have a long way to go before we can venture into deep space without putting our lives at risk.

A giant elephant shaped like a hematological syndrome. This matters because it is not a minor health problem. Nothing of the sort: we are talking about a full hematological syndrome that affects us on numerous physiological fronts. And it adds up: in space, our blood leaves a lot to be desired. It is too prone to clots, yet too slow to clot when clotting is needed. It does not cope well, either: in space, more red blood cells are destroyed than are produced, generating a persistent anemia that can take up to a year to recover from. This year saw the first medical evacuation from the ISS, and everything suggests it will not be the last.

A very real problem. The evacuation of Colonel Mike Fincke shows that space medicine is not a theoretical question. All the more so given that, with ever more people up there, orbital health has become a key issue.

What's new? There is no big news, really: what is new is that an overall picture is beginning to emerge. And it gives us a clear idea of the problems we face. For example, space increases the risk of thrombosis and of bleeding simultaneously: two completely opposite problems with no clear pharmacological approach.

And then? Simply be cautious. The new era of space exploration is going to expose us to the evils of space like never before.
If we are not prepared, the 'Gelsinger effect' may end up setting everything back a couple of decades.

Image | Bradley Dunn
In Xataka | NASA astronaut remains hospitalized after returning from space on a SpaceX Crew Dragon spacecraft

Apple is dismantling 'made in China' faster than we expected

One in four iPhones is already assembled in India. Apple has delivered in 2025 exactly what JPMorgan forecast in 2022, but at a speed that has surprised the sector.

Why it matters. This is perhaps the clearest sign to date that China has lost its status as an irreplaceable player in high-end consumer electronics manufacturing. If Apple can make this move, others can too.

In figures: 55 million iPhones made in India in 2025, up from 36 million in 2024 (a 53% increase in a single year). 25% of world production, out of a total of between 220 and 230 million units. 9 billion dollars in iPhone sales in India last year, a record. 14 million units sold in the country, growing 9% year-on-year.

The backdrop. Apple has spent years trying to reduce its dependence on China, but the Chinese supply chain was so efficient and so dense that the move advanced slowly. Then came Trump's tariffs, and what was a long-term strategy became an emergency. In May 2025, Trump himself called Tim Cook to ask him to stop expanding production in India, which gives an idea of the scale and speed of the transfer.

Between the lines. Tariffs have been the perfect excuse to do what Apple has wanted to do for a long time. The company has not just transferred volume: it has also transferred its most profitable models. India now assembles the entire iPhone 17 line, including the Pro and Pro Max. That is not cheap outsourcing; it is entrusting India with the jewel in the crown.

Yes, but. Manufacturing in India is still more expensive than in China or Vietnam. The Modi government's incentives (the export-linked production program) have been the glue of this strategy, and they expire imminently, on March 31. Apple, like Samsung, is negotiating with the government for a new round of subsidies. If these do not arrive and the incentive scheme lapses with nothing to replace it, India's appeal becomes more complicated.
In Xataka | Apple has only found one option to make a cheap laptop: make it a mobile
Featured image | Xataka with Mockuuups Studio

Meteorologists expected 80 mm of rain in Grazalema, which was already a lot. It has now passed 180 mm

This Wednesday, storm Leonardo is showing its full strength across much of Andalusia, forcing the cancellation of classes and even the mobilization of the UME over possible flooding. One of the focal points of this storm is the Sierra de Grazalema in Cádiz, one of the rainiest places in Spain. The point is that a large number of liters per square meter was expected to fall, but reality has surpassed every previous calculation.

The data. As collected by the user @Vigorro on X, the gap between what the model "saw" and what has fallen from the sky is massive: from a forecast of 60 to 80 mm accumulated by 7 a.m., we have moved to a reality of 180 liters per square meter (1 mm of rain equals 1 liter per square meter). And this raises a lot of questions. How is it possible that, in the era of big data and high-resolution models, a short-term prediction misses by more than double? The answer lies in the orography.

Harmonie's failure. The technological protagonist here is Harmonie-AROME, the mesoscale model used by AEMET to predict local phenomena. Unlike global models such as the European IFS, Harmonie is designed to resolve detail down to a few kilometers, in order to calculate, for example, how many liters will fall at a specific location. Today, however, it fell short in the Sierra de Grazalema by the margins we have just seen. And although AEMET reacted by activating the red warning, with an extreme risk of up to 200 liters in one day, the real-time evolution during the early morning hours was far more explosive than the model output indicated. Worse still, there is still a day ahead.

A "wall". To understand why the software falls short, you have to look at the mountains: the Sierra de Grazalema acts as a formidable physical barrier against the humid Atlantic winds.
When these storms hit the mountains, the air is forced to rise abruptly, cooling and condensing its moisture over a very small area, producing what is known as orographic enhancement. In this peculiar storm, two factors have come together that have undoubtedly magnified it. On the one hand, we have had an atmospheric river, which acts as fuel for the clouds, intensifying precipitation far beyond what the models anticipate, especially on collision with a mountain range.

It fails at this scale. On the other hand, there are the limitations of the numerical models we use daily, as collected in the Stormchaser forum. There they point out that these models still struggle to resolve short-duration, high-intensity events over complex orography. They are good at saying it is going to rain a lot, but they fail at the magnitude of a specific deluge.

It is raining on wet ground. The problem with this underestimation is not only meteorological but also hydrological, since this torrential rain is falling on terrain that cannot absorb a single drop more. The context here is critical: January had already broken historical records in this area, with accumulations of around 1,300 liters per square meter. That is why the soil, composed mainly of clays, is completely saturated, meaning the infiltration rate is zero and everything that falls immediately runs off into the channel of the Guadalete River and others.

Images | Freysteinn G. Jonsson
In Xataka | Spain is preparing for a "festival" of storms in February: with more rain than normal and hardly any cold

La Niña is going to be meteorologically “less intense” than we expected. And that actually hides a problem.

There is a 55% chance that the world will cross the La Niña thresholds in the coming weeks. And although the World Meteorological Organization insists it will probably be a weak and brief episode, that does not mean it will not cause us problems. Many problems, in fact. First, because it is going to catch us off guard. When the world's meteorological agencies say La Niña will be low-intensity, what they are also doing (often inadvertently) is telling the authorities that it will not be that bad. That is technically true, but socially speaking it is a mistake.

A global event... Remember that, seasons aside, ENSO (of which La Niña is one phase) is the most important source of year-to-year climate variability on the entire planet. It is true that the cold phase usually has less impact than the warm one, but La Niña's teleconnections are still enormous, and so is its impact.

...with an impact worthy of its size. In fact, under "normal conditions", with a 55% chance of it arriving this quarter, many countries would be preparing for its consequences. Above all because, under "normal conditions", there are many. In the southeast of the American continent, temperatures become warmer than normal; likewise, they get colder in the northeast. Less rainfall than normal is expected in Ecuador and Peru, and torrential rains in northeast Brazil. In Mexico, La Niña commonly causes (or intensifies) drought in the north and center of the country, while increasing torrential rainfall on the Pacific coast, the southern Gulf of Mexico and the Yucatán Peninsula. In Spain, for its part, it is usually synonymous with less rain. In other words, bad news.

But we are not in "normal conditions." As I say, the WMO's messages are precise, but they act as confusing signals: the administrations are not preparing.
And that, like it or not, can turn even the mildest of La Niñas into thousands of problems on a regional scale. But we must also take the global scenario into account. Because, as the WMO also points out, La Niña may bring slight global cooling. However, this should not motivate any reduction in efforts against climate change: the accumulated warming is so great that temperatures will most likely remain above average. In other words, climate change is still underway. And, unlike other years, not even La Niña can do anything to contain it.

Image | Climate Reanalyzer
In Xataka | 2023 was the year in which El Niño and climate change competed. In the Amazon we already know who won

A supermassive black hole older than expected

Since the James Webb Space Telescope opened its infrared eyes to the universe, everything beyond our atmosphere has gone from calm and unknown to a frantic puzzle for astrophysicists. Its latest discovery points to the oldest supermassive black hole ever detected, which gives us more data about the origin of the universe.

It has arrived to break the mold. This black hole is located in the galaxy GHZ2, and the most relevant fact about it is not how far away it is, but when it formed. Estimates place it just 350 million years after the Big Bang, which breaks the classic schemes experts relied on: in theory, there would not have been enough time for a gravitational monster of that caliber to grow so much.

The discovery. As we say, the protagonist of this story is the galaxy GHZ2/GLASS-z12, a discovery made thanks to observations by JWST and the ALMA radio observatory in Chile, which confirmed its position through several parameters that place it as the most distant and oldest such structure ever confirmed. But what set off alarm bells is not only its distance but also its composition, since extremely intense ionized-carbon emission lines have been detected. To understand the importance of this finding, you have to know that ionizing carbon to these levels requires a large amount of energy. Young, massive stars can do it, but they are not enough to explain the intensity observed in this galaxy. That means invoking an Active Galactic Nucleus, that is, a supermassive black hole gobbling up matter at a frenetic pace.

The time problem. The study suggests that this black hole would have an enormous mass compared to its host galaxy.
While in the local universe (ours) the ratio between a black hole's mass and its galaxy's stellar mass is around 0.1%, in GHZ2 that ratio could shoot up to 5%. This challenges the formation theories currently divided into two camps. Light seeds: black holes born from the death of the first stars that grow little by little; the problem here is that 350 million years is not enough to reach this size. Heavy seeds: huge clouds of primordial gas that collapse directly into black holes, without ever becoming stars. The GHZ2 finding points directly to the second option, or to "super-Eddington" accretion episodes (eating faster than radiation pressure theoretically allows).

Its importance. If this finding is finally confirmed, we would be facing the absolute record for an active supermassive black hole. Until now, that record belonged to the UHZ1 galaxy, about 470 million years after the Big Bang. GHZ2 pushes us more than 100 million years further back in time, dangerously close to the very moment it all began. What seems clear is that the early universe was not a boring or slow place. It was a dynamic, violent and fast era in which galaxies and black holes evolved at a speed we are only now beginning to understand.

Images | BoliviaInteligente
In Xataka | Bad news, the Universe has entered its dying phase. Good news, we won't be here to see it

An AI that decides when to shoot is hiding where it is least expected

In recent months, Ukraine has seen technological leaps that until very recently belonged to the realm of science fiction. From machines capturing and taking prisoners, it moved in a matter of weeks to drones attacking on their own, and even the arrival of a "general AI" capable of making soldiers "invisible". The latest: a kind of cross between Terminator and Predator.

From improvised anti-aircraft weapon to autonomous system. Yes, Ukraine has turned urgency into advanced military engineering by developing what it calls the Predator, an automated machine-gun turret initially created so that Magura naval drones could face the Russian helicopters and fighters patrolling the Black Sea, a space where air pressure on Ukrainian operations increased after the success of unmanned attacks against the Russian fleet. The Predator debuted in combat at the end of 2024, when its sensors and target-acquisition capabilities allowed two helicopters to be shot down using missiles fired from other naval drones; months later it helped down a Russian Su-30, demonstrating that an unmanned explosive vehicle could also provide anti-aircraft cover.

A twist. Once the machine's success was clear, Ukraine decided to "hide it" where it would surprise the enemy. Integrating this turret into a maritime platform had been a complex challenge: it required guaranteeing stability in adverse conditions, precision on a moving hull, and compatibility with guidance processes combining optical sensors, artificial intelligence and gyroscopic systems. The Predator turret is now also mounted on a small tracked vehicle.

Naval technology adapted for drone warfare. Although it was born for the sea, recent tests have confirmed the Predator's usefulness in the dominant theater of modern warfare: combat against explosive-laden FPV drones, responsible for a growing share of Ukrainian losses on the ground.
With 7.62 mm ammunition, optical sensors, gyroscopic stabilization and automatic detection alerts, the system can be mounted on tracked vehicles or in the bed of a pickup, firing on the move and tracking targets of just a few centimeters at 100 meters.

And more. Artificial intelligence allows the turret to identify threats and present options to the operator, who keeps the final decision in order to avoid friendly fire, while new versions incorporate laser rangefinders and precision improvements adapted to drones controlled by radio frequency or fiber optics.

From Ukraine to NATO. The rapid industrialization of the Predator (more than thirty units built, a plan to produce a hundred a month within half a year, and a unit cost below $100,000 for the Ukrainian forces) makes this system one of the most agile developments of the Ukrainian military complex. In fact, its success has awakened NATO's interest: the alliance invited the company to an Innovation Challenge and put the system to the test at an evaluation event in France, where the manufacturer presented it remotely as a modular, immediately deployable answer to threats that evolve with weeks, not years, of margin. Additionally, UGV Robotics is planning a larger-caliber model, the Apex Predator, with .50 ammunition and the ability to engage heavier aerial threats, aiming to turn these turrets into an exportable standard for Western allies.

The new paradigm of Ukrainian defense. The story of this turret illustrates how Ukraine is integrating naval and land capabilities into a single combat ecosystem based on automation, modular sensors and systems able to operate on unmanned platforms, a strategy driven by constant pressure from Russian drones and the need to protect both infantry and exposed vehicles.
In this context, a design conceived so that an explosive drone would not be shot down from the air has been transformed into a ground defense against cheap, lethal swarms, making the Predator a symbol of Ukraine's shift towards a distributed, adaptive defense focused on neutralizing asymmetric threats before they reach their target.

Image | UGV Robotics
In Xataka | It's not that the war in Ukraine has been gamified, it's that there are now "hero points" to exchange for exclusive weapons
In Xataka | In the midst of rearmament, Europe has realized an unimportant detail: it does not have enough bullets

The only advantage Apple could have in AI was its private cloud. It has been copied by the company we least expected

Google has presented Private AI Compute, a cloud infrastructure specially designed so that our conversations remain totally private and cannot be accessed by anyone else. Not even Google.

Why it matters. The deployment Google has announced will let users of models like Gemini use them without fear that sensitive data (finances, health, private conversations) could end up retrieved and accessed by third parties.

An idea copied from Apple. This type of infrastructure is an adaptation of the platform Apple presented more than a year ago, Private Cloud Compute, which focused precisely on protecting those conversations when using the company's AI models. There are some differences: Apple, for example, relies on a concept of "verifiable transparency" that allows external researchers to cryptographically audit security and privacy at any time. Google uses third-party verification, which is somewhat more limited, since the running software is not open for public verification.

Peace of mind as a sales argument. AI models are becoming more useful, and also more personal and proactive, which means we end up feeding them data that may be more personal and sensitive so that they can help us with a very specific question.

Beyond ZDR. The problem is that, when using these models, everything we ask and everything they answer can be seen, and even deduced. Some AI providers offer ZDR (Zero Data Retention) modes in enterprise accounts, but a cloud that "privatizes" those conversations is especially promising when it comes to talking about anything with an AI, without restrictions and without fear of that data getting out.

How this "privatization cloud" works.
Those responsible at Google explain that Private AI Compute is a "secure, fortified space for processing your data that keeps your data isolated and private to you." The system uses several layers involving Google's TPUs and its Titanium Intelligence Enclaves (TIE) security hardware. Our devices connect to that secure cloud environment through encryption and a cryptographic security mechanism called "remote attestation," which verifies the identity and integrity of the hardware environment we connect to. Google also offers a detailed technical report on how the infrastructure works.

Similar to running local models. The result, in theory, is that for the user everything runs "locally" in terms of privacy. Features such as translation or audio summaries that Google offers in its services already run directly on our devices: no data travels to the cloud.

The best of both worlds. The problem is that local AI models have limited performance, and Private AI Compute promises the best of both worlds: the power of the best AI models, which run in gigantic data centers, and the privacy guarantees of Google's Private AI Compute.

A surprising twist. This kind of infrastructure means those conversations are completely protected and that not even Google can access them. It is a surprising turn of events, especially since for the last 25 years Google has made a living collecting our data to feed its advertising model. This option goes in exactly the opposite direction, and it remains to be seen how Google will market the capability.

Strategic approach. Curiously, this announcement comes days after we learned that the new version of Siri with AI will be powered, at least initially, by Gemini, Google's AI model.
Both companies have had a multimillion-dollar agreement for years making Google the default search engine in Safari on iPhones and Macs, and that alliance is now apparently reinforced by the use of Google's AI model to power the future version of Siri.

In Xataka | The key to making the iPhone competitive in AI was right next door: imitating what Android had already done
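The "remote attestation" step described above can be sketched in a few lines. This is an illustrative toy, not Google's actual protocol: the real system uses hardware-rooted certificate chains and measured boot, while here a trusted vendor key simply signs a hash (a "measurement") of the software the server claims to run, and the client refuses to send private data unless that measurement matches a published known-good value. The sketch uses Python's stdlib, with an HMAC standing in for a real asymmetric signature:

```python
import hashlib
import hmac
import secrets

# Stand-in for the hardware vendor's signing key
# (in reality: an asymmetric key rooted in a secure element).
VENDOR_KEY = secrets.token_bytes(32)

# Published measurement of the software the enclave is supposed to run.
KNOWN_GOOD = hashlib.sha256(b"enclave-image-v1").hexdigest()

def server_attest(software_image: bytes, nonce: bytes):
    """The enclave reports a signed measurement of the code it is running."""
    measurement = hashlib.sha256(software_image).hexdigest()
    quote = hmac.new(VENDOR_KEY, nonce + measurement.encode(),
                     hashlib.sha256).hexdigest()
    return measurement, quote

def client_verify(measurement: str, quote: str, nonce: bytes) -> bool:
    """Check the signature AND that the measured code is the expected one."""
    expected = hmac.new(VENDOR_KEY, nonce + measurement.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(quote, expected) and measurement == KNOWN_GOOD

nonce = secrets.token_bytes(16)  # fresh nonce prevents replaying old quotes
m, q = server_attest(b"enclave-image-v1", nonce)
print(client_verify(m, q, nonce))    # True: safe to send private data

m2, q2 = server_attest(b"tampered-image", nonce)
print(client_verify(m2, q2, nonce))  # False: refuse to connect
```

The key property is that the client's trust decision depends on what code the server can prove it is running, not on what the server promises, which is what lets Google claim that even it cannot read the data once the attested software excludes any access path.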
