Meta spent a fortune on AI talent and data centers. Nine months later, the result is zero models

Mark Zuckerberg wanted to be the Florentino Pérez of AI. Last summer he began signing the galácticos of the field, luring talent with stacks of millions of dollars. The most popular, of course, was the AI wunderkind Alexandr Wang, who became leader of Meta's "Superintelligence" division. The funny thing is that the months keep going by and Meta seems to have absolutely nothing to show for it. And that is very worrying.

Delays. Despite having invested billions of dollars in restructuring the company to bet (practically) everything on AI, three internal sources confirm that Meta is finding it very difficult to meet its planned deadlines. The generative AI race waits for no one, and at company headquarters nerves are on edge because the roadmap is not being met.

Avocado, where are you? The new foundational AI model Meta has been working on for months is internally codenamed Avocado, but for now it is not measuring up, which is reminiscent of what happened with Llama 4. Internal tests reveal that although it manages to surpass Llama 4 and the older Gemini 2.5, it falls short of Gemini 3.0 (and, of course, the recent Gemini 3.1).

Patience. Launching a model that is clearly worse than its rivals makes no sense, so Meta has decided to wait and delay the release. Avocado is now expected to hit the market in May at the earliest.

And meanwhile, Gemini. The situation is so critical that, according to these sources, the leaders of the AI division are considering something once unthinkable: paying Google for a license to use Gemini in their own products, as Apple will do with Siri. That would be a clear sign that, for now, Meta's own model is not capable enough to power the AI features of WhatsApp, Instagram and Threads.

Money does not equal speed.
The company has spent billions of dollars on AI researchers and has committed to investing $600 billion in building AI data centers. In January, Meta projected a capex of $135 billion dedicated almost entirely to these projects, nearly double the $72 billion it spent last year. Despite these investments, the company is currently absent from an area in which its competitors continue to advance.

Internal tension. According to these sources, Meta is becoming a tinderbox. The "TBD Lab" (for "To Be Determined"), the unit led by Wang, is working under maximum pressure on models named after fruits (Avocado, Mango, Watermelon), but has clashed with old-school Meta executives like Chris Cox and Andrew Bosworth. The company is trying to integrate those models with Meta's advertising business, which is what pays for everything, but Wang does not seem to handle that side of the business very well.

Goodbye to open models. Meta stood out at the beginning of this AI race as the company whose open models (open, though not open source) were above the rest. Llama became the standard in this area, but in this new stage that philosophy seems to be changing, and China is now the one leading that segment. There is thus talk that both Zuckerberg and Wang lean toward closed models, like those of OpenAI (GPT) or Google (Gemini). Closed models give a company full control over the code, a competitive advantage that Meta no longer seems willing to give up.

Few fruits from this tree. Despite the extraordinary deployment of resources, the current balance is poor. Meta's only tangible product from those investments is Vibes, an application similar to Sora that has never quite gelled. Meanwhile, those initial talent signings have turned into departures: the trickle of AI researchers leaving the company to join others (or found their own projects) keeps growing.

In Xataka | Meta has been buying chips from NVIDIA and AMD for years. Now it also makes its own so as not to fall short

41 methane gas turbines

We recently reported that xAI had the data center with the highest computing capacity in the world. It occupies about 13 football fields and has already reached one gigawatt of power, the first facility in the world to achieve that. All this comes at an enormous energy cost, but Elon Musk can rest easy: he has just obtained approval to install 41 methane gas combustion turbines, to the dismay of neighbors and environmentalists.

What has happened? Elon Musk has obtained permits to install 41 methane gas turbines to power Colossus 2, his new mega data center located very close to the Tennessee-Mississippi border. The news comes amid strong opposition from both the community and activists, who denounce that air and noise pollution in the area will increase, since the number of turbines already operating is going to double. NBC reports that during the public hearing none of the speakers spoke in favor of the project, but it ended up being approved anyway.

Two years of controversy. It all started in 2024 with the construction of Colossus in Memphis, Tennessee. Months later, xAI received permission to connect to the electrical grid with a maximum power of 150 megawatts, but it was not enough. Last summer, the Southern Environmental Law Center (SELC) revealed that there were at least 26 gas turbines operating without the necessary permits. In the end, xAI was granted permission to install 15 turbines permanently. Activist organizations such as the NAACP have filed lawsuits against xAI for violating the Clean Air Act.

The map of xAI facilities spread across the Tennessee-Mississippi border. Source: Google Maps. Graphic: Xataka.

With one foot in each state. The decision to authorize the 41 additional turbines was made by the Mississippi Department of Environmental Quality (MDEQ), since that is where the turbines will be located, but Colossus 1 and 2 are in Tennessee, as can be seen on the map above.
That the data centers are in one state and most of the turbines in another is no coincidence. Although the Clean Air Act is the same for both, in practice Mississippi is being much more lax in applying it, as this decision demonstrates. And although there is a border in between, we are talking about the same metropolitan area, so the effects on air quality harm both sides of the border equally. To put it in context, between Colossus 2 and the facility where the turbines operate there is less than 2 km, a distance that can be covered in under 7 minutes by car.

How much this pollutes. Memphis is one of the cities with the worst air quality in the US, so much so that it is known as "the capital of asthma". Electrek recently reported that, according to xAI's own application, we are talking about 6 million tons of greenhouse gases and 1,300 tons of harmful atmospheric pollutants (nitrogen oxides and sulfur dioxide, among others) each year. They also point out a striking detail: while all this is happening, Tesla boasts that in 2024 it avoided the emission of 32 million tons of CO2. The level of cynicism is also through the roof.

Image | xAI

In Xataka | We have been talking for years about how much water AI consumes. It all comes from a figure that a book made up

If Ukraine popularized drone warfare, Iran has triggered the Terminator algorithm. And that was already a problem in science fiction

In the 1991 Gulf War, the international coalition took more than a month, after weeks of planning, to launch some 100,000 airstrikes. Three decades later, the ability to process military information has changed radically: satellites, sensors and drones generate amounts of data that no human team could analyze alone. In this new technological environment, the true battlefield is no longer just the air or the land, but the speed at which information is interpreted.

From the drone to the algorithm. Recent wars had already anticipated a profound transformation of modern combat, but the conflict with Iran seems to have crossed a different technological frontier. If the war in Ukraine popularized the massive use of drones as a dominant battlefield tool, the campaign against Iran has introduced an even more radical logic: the integration of artificial intelligence at the very heart of military decisions. In fact, the initial attacks showed an intensity difficult to imagine just a few years ago, with hundreds of targets hit in a matter of hours and thousands in a few days. That speed was not only the result of greater firepower, but also of systems capable of analyzing enormous volumes of data and turning that information into almost instantaneous attack plans.

Understanding the "kill chain". The Financial Times recalled this morning that in traditional warfare the so-called kill chain (from identifying a target to launching the attack) was a long and bureaucratic process. Intelligence officers analyzed information and wrote reports, commanders evaluated options, and finally the strike was authorized. A process that could take hours or even days. The incorporation of AI is shrinking that cycle drastically. We are talking about platforms that integrate data from satellites, drones, sensors and intercepted communications, and that are capable of generating target lists, prioritizing them and suggesting the appropriate weapon in a matter of seconds.
The result is an extreme and disturbing compression of the kill chain: what once required prolonged deliberation now becomes an almost instantaneous sequence.

The digital brain of the battlefield. Behind this acceleration are data analysis systems that act as a true operational "brain". These platforms combine geospatial intelligence, machine learning and advanced language models to interpret information and propose military actions. Their most disruptive capability is that they no longer merely summarize data: they can reason step by step, evaluate alternatives and generate tactical recommendations. This allows military commanders to process volumes of information that are impossible to handle manually and to multiply the number of operational decisions made in the same period of time. In practice, algorithms are making it possible to select and strike targets at a scale and speed that were previously unthinkable.

Bombing faster than thought. The result of this transformation is a war that is beginning to move faster than the human pace. Artificial intelligence can now analyze information, detect patterns and propose attacks faster than a team of analysts could even formulate the right questions. Some experts describe this phenomenon as a form of "compressed decision", in which planning is reduced to such short windows of time that human commanders can barely review what the machine has already processed. In this context there is another disturbing idea: that destruction can precede the human reflection process itself; that is, first comes the recommendation generated by the algorithm, and then the formal approval of the person who must execute it. And there, without a doubt, we may have a problem of colossal dimensions.

The human dilemma in algorithmic warfare. Because this technological acceleration is generating a growing debate about the real role of humans in military decision-making.
Although the armed forces insist that final control remains in the hands of people, the time available to evaluate the systems' recommendations keeps shrinking. Some analysts fear this will lead to a form of "cognitive offloading", one in which military leaders end up automatically trusting the decisions generated by algorithms. Countries like China observe this evolution with concern and warn of the risk that automated systems end up directly influencing life-or-death decisions on the battlefield, likening the scenario to a "Terminator algorithm" given how unmistakably every path converges on James Cameron's fictional premise.

A new, accelerated war. Ultimately, what is emerging is not just a new military technology but a new tempo of war. AI makes it possible to process information on a massive scale, identify targets more quickly and execute attacks with unprecedented simultaneity. This means that military campaigns can develop at a pace that overwhelms traditional planning models. From this perspective, war no longer advances solely at the pace of logistics or firepower, but at the pace of algorithms capable of interpreting the battlefield in real time. And in this unprecedented scenario, strategic advantage could increasingly depend on who is able to think (or calculate) faster than the adversary. Even if neither of them is human.

Image | Ministry of Defense of Ukraine

In Xataka | China has just found a hole in the US's quietest weapon: an algorithm has hacked its B-2s in Iran

In Xataka | The great paradox of war: the US ignored Ukraine's pleas to Russia and now needs it in Iran

Faced with the fear of a barrel of oil at $200, the US has made an unprecedented decision: remove sanctions on Russia

After almost two weeks, the Iran war already has a great (and unexpected) beneficiary: the Kremlin. Days after giving India carte blanche to buy millions of barrels of Russian crude without fear of sanctions, yesterday Washington went one step further by (partially) lifting the sanctions imposed on the Russian oil industry after the invasion of Ukraine. With this it hopes to alleviate the effects of the Iran war on the energy market and prevent Tehran's threat from becoming a reality: a barrel of Brent shooting up to $200, an all-time high. The question is: what will it mean for the war in Ukraine?

What has happened? The US has decided to pause the sanctions that penalize the purchase of Russian oil, a measure adopted four years ago that sought to asphyxiate the Kremlin's ability to finance its troops in Ukraine. The White House has just published an order giving the green light to the purchase of crude oil and oil products from Russia. With fine print, of course. The suspension of sanctions is temporary: it will only affect cargo previously loaded on ships and (a priori) will be limited to one month, from March 12 to April 11.

Why do it? The task of announcing the measure fell to the Secretary of the Treasury, Scott Bessent, who a few hours ago insisted on the White House's efforts to "promote stability" in the global energy market and, above all, to "keep prices low" while the Iran war lasts. "To expand global supply reach, Treasury grants temporary authorization for countries to purchase Russian oil stranded at sea," the official explains. "This measure, which is limited in scope and short-term, applies only to oil that is already in transit." In the same message, Bessent insists that the rise in crude oil prices this week, coinciding with the escalation of tension in the Persian Gulf, is "temporary" and claims that "in the long term it will greatly benefit" the US economy.
In recent days, Trump himself has tried to downplay the fluctuations in the price of Brent. He recently even stated that, being "the largest oil producer", the US makes "a lot of money" when crude oil rises.

Does context matter? A lot. In fact, the Treasury Department's decision cannot be understood without taking several factors into account. The first is the escalation in the price of oil to which Bessent himself refers. The charts show that the cost of a barrel of Brent has skyrocketed in recent days: from just under $70 in mid-February, it has gone above $90, with peaks that broke the $100 barrier. Those fluctuations already affect anyone who needs to fill up the car and threaten to go beyond transportation, spreading to the shopping basket.

What will happen now? The problem is not just how much oil has risen over the last two weeks. There is (a lot of) concern that the barrel of Brent will keep getting more expensive and, if so, by how much. The Iranian regime has already shown its ability to condition oil tanker traffic through the Strait of Hormuz, a strategic maritime passage that channels 20% of international oil, and Tehran seems willing to use 'black gold' as a weapon of war. On Wednesday the ayatollahs' regime threatened the US (and the West) with a scenario in which the Brent barrel doubles in value and shoots up to $200, shattering the all-time high of 2008, when it reached $147.50.

How will it affect Russia? That is the other big question. The order just published by the US Treasury will allow Russia to sell oil for a month without its customers risking sanctions, generating a flow of cash for the Kremlin. Bessent questions, in any case, the scope of that injection of funds. "It will not bring significant financial benefits to the Russian government, which derives most of its energy revenue from taxes levied at the point of extraction," the secretary argues.

Is it an exceptional measure?
The truth is that it is not the first lifeline Trump has granted to the Russian oil industry since the military operation in Iran began. A week ago he temporarily relaxed his sanctions policy so that India could buy Russian oil. The measure was approved with conditions very similar to those Washington now extends to the rest of the world's buyers: a 30-day suspension limited to crude oil already loaded on ships. Nor is it the only card the White House has played to reduce market tension. Another, adopted in coordination with the International Energy Agency, has been to release millions of barrels of reserves.

How much will it benefit Moscow? That is the great unknown. The measure approved by the US is temporary and limited in scope, but it will probably allow the Kremlin to sell its oil without having to apply the steep discounts that offset the sanctions risk its buyers faced. The Financial Times recently calculated that Russia is already earning up to $150 million in extra income every day from oil sales, a windfall directly related to the conflict in Iran, the closure of the Strait of Hormuz, the turbulence in the Gulf and the growing interest of India and China.

But will it help the Kremlin? The state of the Russian coffers is not particularly buoyant. The public deficit accumulated during the first two months of the year almost reaches the target set for the entire year, and some question whether the extra injection it will receive over the next month thanks to oil will increase its room for maneuver in Ukraine. The reason: hydrocarbons represent only a part of the income (relevant, but not decisive) on which the Kremlin depends, which after four years of war has seen the country's military industry come to condition its economy.

Images | …

It’s called MACROHARD and “no one else can do it”

Elon Musk is determined to unify his empire. After SpaceX's purchase of xAI, he has now announced a joint project between Tesla and his artificial intelligence company. It is the AI agent MACROHARD and, before going into details, let's start with what matters most to Musk himself: it is a jab at Microsoft, and software that aims to emulate the workings of entire companies. Also something that no other company can do right now. According to Elon, of course.

MACROHARD. If Microsoft is "small soft", MACROHARD is "huge hard". Elon Musk being Elon Musk but, joke aside, the project is an umbrella that unites the capabilities of Grok and those of Digital Optimus. Grok is the conductor, xAI's large language model that calls the shots thanks to its "deep understanding of the world". Digital Optimus is the executing "arm", the software that manages the execution of a task in real time. From what Musk has stated, it is a souped-up agent, something into which both American industry and China are pouring all their efforts. The same goes for artificial general intelligence, but that is a different story.

Like a brain. The tycoon has compared the agent's operation to the dual-process theory of the brain. While Grok provides reason, pause and intention, Digital Optimus carries out the instinctive action, whatever that means. To put it more plainly, Grok, being an LLM, holds the knowledge, and Digital Optimus executes the action. How? By being able to "process the last five seconds of what appears on a computer screen and execute actions with the keyboard and mouse in real time". If you want to know more about how it works, you will have to wait, because Musk's message says little more about this.
Basically, he has described an agent like those that xAI's competitors are already working on, but Musk claims it can "simulate all of a company's operations". "It's a funny reference to Microsoft" - Elon Musk

AI4 chip. The most interesting thing about MACROHARD is the "hard" part, which may also explain the name. Tesla and xAI have been working for some time on their own chips to train AI better and run inference more efficiently. After abandoning its chip project a few months ago, in 2026 Tesla revived it and noted that it was making enormous progress. Perhaps not yet for heavy training workloads, which NVIDIA chips still largely handle (NVIDIA is involved in every AI company, big or small), but certainly for inference. And that is where this Tesla and xAI project has a key advantage: the agent is designed to run on Tesla's AI4 chip along with "relatively moderate use of the NVIDIA hardware that xAI already has", which is much more expensive. The key is that the AI4 is far cheaper than NVIDIA's solutions and is also the home-grown option, allowing Tesla and xAI to scale the system as far as they want. Provided that TSMC, the company that manufactures the chips, has capacity; if there is a bottleneck, we already know who its two main clients are.

Revived project. It is also curious that, just like the in-house chip project, MACROHARD has been rescued from the limbo it was in. Days before Musk revealed the project, Business Insider noted that the agent had stalled due to factors such as a hiring freeze and the departure of key engineers like Toby Pohlen, a founding member of xAI who was going to lead the project but left the company two weeks later. Perhaps, with part of it being a system powered by its own cheap chips, Musk's claim that "no other company can do it yet" will hold, but it will not be for lack of others trying.
Outside the long shadow of NVIDIA, OpenAI continues to develop its own chips with Broadcom. And Meta, after some lurches, is also preparing chips for inference.

Photo by Planet Labs

Now all that remains is to see less talk and something tangible from MACROHARD's capabilities, and whether, under the Tesla umbrella, the project comes to fruition. And one last curiosity: MACROHARD is also what Elon christened the Colossus 2 data center... with which he had already poked fun at Microsoft.

Images | sherwood, Gage Skidmore

In Xataka | Elon Musk's Grokipedia is not exactly the best place to get objective information. ChatGPT doesn't care

ChatGPT API: what it is, what it is for, prices and how you can get one to use in your projects

Let's explain what the ChatGPT API is and how to get one. It is an essential key for linking GPT artificial intelligence models to an app you are developing or to a workflow that includes them, because for a third-party project to use GPT, you will need this key. We will start this article by explaining what the ChatGPT API is and what it is for, trying to make sure anyone can understand the concept. Then we will tell you, step by step, how to get your OpenAI API key to use GPT models in your projects.

What is the ChatGPT API and what is it for? ChatGPT is the name of OpenAI's artificial intelligence chat. But it is basically an intermediary for interacting with OpenAI's artificial intelligence model, which is called GPT. With each new version of GPT, ChatGPT gains new features and power. We can say that GPT is the engine, and you can also link that engine to other third-party services, in addition to your own applications. These apps or services will then be able to knock on GPT's door and get results back. However, for a service to use GPT, a bridge is needed, an intermediary, and that is where an API, or application programming interface, comes into play. An API is a kind of communication bridge between an app and an external service; in this case, the API serves to connect other applications with ChatGPT or, rather, with the artificial intelligence model that powers it. To give an example, imagine I want to create an artificial intelligence bot. This bot would need an AI model inside, an engine to process my requests. But of course, an artificial intelligence model can weigh gigabytes or terabytes, and I cannot afford to bundle it inside the app. So I will have to connect the bot to an external AI hosted on its own servers. The idea is that when you write something to my bot, the bot sends the message to the AI, and when the AI generates the response, it reaches the bot, which can then show it to me.
And since the bot and the AI are on different servers, possibly in different countries, I am going to need a bridge. That bridge is the API. The GPT API is paid. You can create a key for free, but you will then need to purchase credit, which is consumed in tokens. When you make a request, depending on the processing it requires and the task involved, you will spend tokens, and when you run out you will have to buy more. In exchange, what you get is the possibility of using GPT in your projects: to create your own chatbot or assistant, to automate tasks, to analyze texts, videos or audio and produce transcriptions and summaries, to generate code, and ultimately for whatever you need. You will have the AI within the application, not natively, but with the two connected.

Token types and API prices. The price of the GPT API depends on the model you want to use. If you use the most powerful model it will be more expensive, since it consumes more resources, while the lighter models have lower prices. There are three types of tokens, so it is advisable to know them all. Input tokens are the ones you send to the model: prompts, instructions, texts to analyze or context you add. For example, if the prompt "Summarize this article on artificial intelligence in 5 lines" plus the attached article adds up to 200 tokens, it costs 200 input tokens. Output tokens are what the model generates as a response; that is, what the text or image you get back consumes. If an answer has 500 tokens, that is how many you spend. These tokens are more expensive because the model has to generate new text, which involves computational work. Cached input tokens apply when you repeat the same context in multiple calls: when you hold a conversation in the same chat on the same topic, the system temporarily caches what you said before so it does not have to reprocess everything.
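To make the "bridge" idea concrete, this is a minimal sketch of what a call to the API looks like over plain HTTP, using only Python's standard library. The model name is just an illustrative placeholder, and the OPENAI_API_KEY environment variable must hold a valid secret key for a real call to work; here we only build and inspect the request.

```python
# Minimal sketch (not an official example): packaging a prompt into the
# JSON request that OpenAI's chat completions endpoint expects.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o-mini") -> urllib.request.Request:
    """Package a prompt into the JSON request the API expects."""
    payload = {
        "model": model,  # placeholder model name; use any your key can access
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # The secret key travels in the Authorization header.
            "Authorization": "Bearer " + os.environ.get("OPENAI_API_KEY", "sk-..."),
        },
    )

req = build_request("Summarize this article on artificial intelligence in 5 lines.")
print(json.loads(req.data)["model"])  # shows which model the request targets
# To actually send it (this spends input/output tokens):
#   with urllib.request.urlopen(req) as resp:
#       reply = json.load(resp)
#       print(reply["choices"][0]["message"]["content"])
```

In practice most projects use OpenAI's official SDKs instead of raw HTTP, but the structure of the request (model, messages, secret key in the header) is the same bridge described above.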
As for the price, it depends on whether you use a full or a lighter model. These are the prices of the current flagship models; you can also generate keys for older models, which are cheaper.

GPT-5.4 ("our most competent model for professional work"): input tokens $2.50/million; output tokens $15/million; cached input tokens $0.25/million.
GPT-5mini (a faster and cheaper version of GPT-5 for well-defined tasks): input tokens $0.250/million; output tokens $2/million; cached input tokens $0.025/million.

When you generate an API key you can buy credit, which is spent across the three token types as you use the API; when it runs out, you will have to buy more.

How to get your GPT API key. To obtain your key you will have to enter platform.openai.com/chat and log in with your ChatGPT account. In the left column, click on API keys, within the Manage section. On that page, click Create new secret key. This will take you to a screen where you can configure your key: linking it to a project, giving it a name or setting restrictions if you want them. When you have it to your liking, click Create secret key. And that's it: you will have created an API key or, as the platform calls it, a secret key. This is the key you will have to enter whenever you want to link GPT to a third-party service.

In Xataka Basics | ChatGPT apps: what they are and how to use them to give …
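To see how those per-million-token prices translate into real money, here is a rough cost sketch using the figures from the table above (the GPT-5.4 column). The prices and the additive treatment of cached tokens are taken from the article's own framing; treat the numbers as illustrative.

```python
# Rough per-call cost estimate using the article's GPT-5.4 prices,
# all expressed in dollars per million tokens.
PRICE_PER_MILLION = {"input": 2.50, "output": 15.00, "cached_input": 0.25}

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Return the cost in dollars of one API call."""
    cost = (
        input_tokens * PRICE_PER_MILLION["input"]
        + output_tokens * PRICE_PER_MILLION["output"]
        + cached_tokens * PRICE_PER_MILLION["cached_input"]
    ) / 1_000_000
    return round(cost, 6)

# The article's example: a 200-token prompt answered with 500 tokens.
# 200 * 2.50/1M + 500 * 15/1M = 0.0005 + 0.0075 = $0.008
print(request_cost(200, 500))
```

Note how the output side dominates the bill: the 500 response tokens cost fifteen times more per token than the 200 input tokens, which is why summarization-style tasks (large input, small output) are comparatively cheap.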

Plaud NotePin S: the wearable AI recorder that's not for everyone, but it's perfect for some

When I tried the Plaud Note Pro I came to a conclusion I did not expect: it was one of the very few AI gadgets that justified being a device and not simply an application. The question I ask myself now, weeks after wearing the NotePin S, is whether its spiritual successor can say the same. The answer is not so clear-cut. The NotePin S arrives as the wearable version of that same proposal. Same brain, different packaging. Instead of a card that lives glued to the iPhone by MagSafe, here you get a small 17-gram oval that you can clip to your lapel, hang around your neck, wear on your wrist or attach with a magnetic pin. Plaud presented it at this year's CES with the promise that capturing what you say would never again require taking out your phone. When you first hold it in your hand, the thought is almost identical to the one I had with the Note Pro: how well finished this is. Solid materials, premium feel, the type of product that does not boast of being expensive but suggests it.

The finish of the Plaud NotePin S. Image: Xataka.

The box is also unusually neat for a startup: magnetic clip, pin, necklace cord, bracelet and charging base included from the start, something that with the original NotePin required a separate purchase.

All this comes in the Plaud NotePin S box. Image: Xataka.

The most relevant change compared to the previous model is small and huge at the same time: they have replaced the touch gesture with a physical button. The original NotePin had a problem for some users, who found recordings that had never started because the touch gesture had not registered. The S solves this with a long press to record, a press to stop, and a short press during recording to mark a highlight (one of its best features). Simple. It works. I have spent a few weeks wearing it in different formats: the clip on the lapel is the most natural for face-to-face meetings. It is the image that crowns this article.
The magnetic pin is the most elegant. The cord necklace is the most comfortable for everyday use outside formal contexts. The bracelet, on the other hand, is the option that convinces me the least: the material feels below the level of the rest of the kit, and in a world where almost everyone already wears a watch, adding another element on the wrist is not very practical.

With the cord to hang it around your neck. Image: Xataka.

With the bracelet adapter to wear it on the wrist, with a form factor similar to that of typical activity bands. Image: Xataka.

Here in a slightly more inclined view... Image: Xataka.

...and here from the side, so the thickness can be appreciated. Image: Xataka.

What does work consistently is recording. The microphone picks up well up to about three meters, which is enough for most meetings. The transcription, processed in the app using models from OpenAI, Google or Anthropic of your choice, is accurate in Spanish, without the kind of errors that would make you lose confidence in the system. Automatic summaries, especially when you have marked highlights during the conversation, are the most useful end product: what previously required rereading the entire transcript now appears organized and immediately actionable. There is a novelty in the ecosystem that deserves special attention: along with the hardware, Plaud has launched a desktop application for Mac and PC that records Zoom, Google Meet or Teams meetings in the background without adding any bot to the call. It is an important distinction, because similar alternatives appear as visible participants in the meeting, which makes many interlocutors uncomfortable.

Example of a recording made with the Plaud NotePin S, seen in the Plaud interface. In the screenshot you can see the summary, much more extensive and structured than we might expect. More than a summary, it is a complete and detailed outline. Image: Xataka.
A sample of some of the templates with which we can tell Plaud “how” to generate a transcription and the subsequent write-up. Very useful. Image: Xataka.

And another example of a summary: here you can see how it keeps the quotes in the language of the recording, English, but delivers the entire summary in our native language, Spanish. Image: Xataka.

The Plaud app does not appear anywhere in the call; it records natively and is free for those who already own the hardware. For those of us who use the physical device and also routinely have meetings by video call, having both sources integrated in the same hub is genuinely convenient.

What is uncomfortable is the question that arises here. With the Note Pro, the hardware justification was clear: it freed you from your phone, it had four high-quality MEMS microphones, and its 30-hour battery let you record everything without worrying. The NotePin S has only a fraction of that claimed battery life, and its three-meter effective capture radius sets a real limit in large rooms. That said, in a high-school classroom, where I recorded the image above, it performed perfectly. In everyday contexts, both are sufficient. In the most demanding contexts where the Note Pro especially shined, the NotePin S falters by comparison.

What the NotePin S offers that the Note Pro cannot is that you can wear it, not just carry it around. And there lies the basic question to answer before buying it: do I really need to wear my recorder, or is it enough to have it in my pocket or on the table?

Separating its magnetic coupling reveals the charging connector. Image: Xataka.

And this is how it attaches to the USB-C charging accessory. Image: Xataka.

Here, separated from the clip that attaches it to the lapel. Image: Xataka.

For a journalist doing interviews on the move, the … Read more

Chips connected by laser instead of cables. It sounds like science fiction, but it aims to revolutionize data centers

If you have ever built a PC, surely one of the things you have had to pay the most attention to is the connections. Understanding the power of the processor, the GPU, or the speed of the RAM is the “easy” part, but the motherboard is what interconnects all those components with “highways” over which data can travel at maximum speed. In data centers and servers it is the same: the better the connections between chips and machines, the lower the latency, the higher the bandwidth, and the better the performance. Today those connections are physical, but a French startup wants to change the rules of the game, with NVIDIA’s backing. How? By connecting the chips with lasers.

Chips connected by laser, and NVIDIA opening its wallet. Improving interconnect speed is neither a small feat nor a whim. NVIDIA has begun manufacturing its next-generation platform, the one named Vera Rubin. It is a system that can be combined with others to multiply performance. That union, as we said, is physical, but there comes a point at which the physics is no longer enough. When that moment arrives, NVIDIA wants to be ready, and a few days ago Reuters reported a $4 billion investment by NVIDIA in two companies aggressively researching new technologies to increase that interconnect speed: Lumentum and Coherent.

This is a rack, and the nightmare of those of us who hate cables: specifically, the Wikimedia Foundation’s. Now imagine a large part of those cables disappearing because the systems are no longer connected by electricity.

Another of the companies NVIDIA has invested in is Scintil Photonics. It is a French startup that is in the testing phase of a technology that, if the industry adopts it, will mark a before and after in rack-scale interconnects. Its LEAF Light Evaluation Kit is, as the company details, the first dense wavelength division multiplexing single chip to go from theory to practice.
It sounds like another language, I know, but it is basically what we were talking about: an optical chip-interconnection system instead of copper. And that is the main advantage. With copper reaching physical limits of speed and density, optics emerge as the solution for connecting clusters of thousands of processors. Each chip carries an optical system responsible for emitting and receiving light, and that light carries the data that currently travels through cables.

The French company’s chip is not the first based on photonic communication, but they claim their technology halves the energy needed to operate and reduces latency as well. Results? We will see. The startup’s CEO, Matt Crowley, has commented that he has “six or seven companies interested in implementing the technology by 2028,” but that confidentiality agreements prevent him from naming names.

The Scintil Photonics prototype.

The complication will be securing supply of the photonic systems, since data center racks are built to be scalable. That is, it is no longer just about power, but about how many tens of thousands of units you can interconnect, and a bottleneck in the manufacturing of any of the parties involved in the optics would amount to a supply shortage for their customers. For now, some prototypes have already been delivered to select companies for testing, but certainly, using light pulses instead of electrical signals is very appealing for superclusters in huge data centers that want to scale without the limitations of physical connections.

Images | Victorgrigas, M.I.T., GlobeNewswire

In Xataka | Huawei no longer competes: it is building its own parallel reality
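The bandwidth advantage of dense wavelength division multiplexing comes from simple multiplication: several data channels share one fiber, each riding its own wavelength of light. A toy sketch of that arithmetic, with purely illustrative figures (none of them Scintil Photonics specifications):

```python
# Toy arithmetic for dense wavelength division multiplexing (DWDM):
# one fiber carries many channels, each on its own wavelength of light.
# All figures below are assumptions for illustration only.

def aggregate_bandwidth_gbps(channels: int, rate_per_channel_gbps: float) -> float:
    """Total throughput of one link: wavelength channels x per-channel rate."""
    return channels * rate_per_channel_gbps

# One electrical lane at an assumed 100 Gb/s versus one fiber carrying
# an assumed 16 wavelengths at 100 Gb/s each.
single_lane = aggregate_bandwidth_gbps(1, 100.0)   # 100.0 Gb/s
dwdm_fiber = aggregate_bandwidth_gbps(16, 100.0)   # 1600.0 Gb/s

print(dwdm_fiber / single_lane)  # → 16.0x over a single physical link
```

The point of the sketch: the fiber itself does not need to get faster, you just add wavelengths, which is exactly the scaling axis copper lacks.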

The war in Iran has reconfigured global airspace, and its consequences are worrying

Europe and Asia, continents joined by land, are more separated than ever by air. Or, at least, traveling between them is more complicated than ever. With the conflicts in Ukraine and Iran both active, airlines are either dealing with a bottleneck in their usual corridors or being forced into long detours. And that has enormous implications.

The latest. It has now been two weeks since the United States and Israel opened hostilities against Iran. Iran’s response against the latter and against all the neighboring countries hosting US bases caused chaos in air mobility across the region. Overnight, thousands of people saw their flights departing from or stopping in Dubai or Doha, two of the 10 largest airports in the world by passenger volume, cancelled. And hundreds, then thousands, of other people began to crowd the terminals looking for a quick way out of countries that were starting to suffer bombings. In the first two days of the conflict alone, more than 5,000 operations were suspended, with Emirates, Etihad Airways, and Qatar Airways among the most affected companies. The consequences were immediate: passengers driving 10 hours to neighboring countries to find free seats, and ticket prices shooting through the roof: 10,000, 20,000, even 80,000 euros.

Coping as best they can. Little by little, the volume of flights at these airports has been increasing. After the first days of hostilities, Dubai is handling about 500 operations daily, far below its usual average of 1,200. Airlines are in a similar situation. As reported by Business Insider, Emirates aspired to recover 100% of its flights by this Friday, March 13. Until the start of the conflict it operated more than 500 flights a day; at the moment it has barely been exceeding 300.
Etihad Airways and Qatar Airways are in an even worse situation, with a volume of operations that does not reach 100 daily flights when they too used to exceed 500.

The passage between Iran and Russia has become a funnel. Those who do not need a stopover and are not bound for Middle Eastern countries are not free of problems either. With the airspace over Iran closed, the passage between Europe and Asia has been reconfigured into a kind of funnel in which Azerbaijan is key. If in the south airlines have to deal with the conflict in the Middle East, in the north they have to deal with the war in Ukraine. Most flights between Europe and Asia without stops in the Middle East are passing through the narrow corridor between Türkiye, southern Russia, and northern Iran. The other alternative is to divert flights around the southern part of the Arabian Peninsula. These narrow corridors are yet another obstacle for routes from Europe that used to pass through Russia before that country’s attacks on Ukraine, since Russia was the chosen path for a good part of the routes connecting with China or Japan. Now airlines have two options: go far around to the south, or go far around to the north.

More, many more kilometers. Obviously, planes have to fly many more kilometers and burn much more kerosene. The New York Times gives the example of the Nordic countries. Before 2022, flying from Helsinki to Tokyo was as easy as crossing Russia. Now flights have to circle that country from the north or the south, spending time, fuel, and, of course, money. The same has happened with Helsinki-Bangkok, which used to cross Iran to make the best of the forced detour around Russia. Now those flights are squeezed through the narrow corridor between Russia and Iran. The BBC already picked up on this problem a few days ago.
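The extra cost of a detour is straightforward to estimate: extra kilometers, times fuel burned per kilometer, times the price of that fuel. A rough sketch with assumed figures (the burn rate, fuel price, and detour length below are picked for illustration, not airline data):

```python
# Rough, illustrative arithmetic for what a detour costs in fuel alone.
# Every number here is an assumption for the example, not real airline data.

def detour_fuel_cost_usd(extra_km: float, burn_kg_per_km: float,
                         fuel_usd_per_kg: float) -> float:
    """Extra fuel cost of flying extra_km beyond the old routing."""
    return extra_km * burn_kg_per_km * fuel_usd_per_kg

# Assume a widebody burning ~7 kg of fuel per km, jet fuel at ~$1/kg,
# and a 1,500 km detour around closed airspace.
cost = detour_fuel_cost_usd(1_500, 7.0, 1.0)
print(cost)  # → 10500.0 extra dollars of fuel, per flight, per direction
```

Even with conservative assumptions, the cost compounds across every daily rotation, which is why small routing changes ripple straight into ticket prices.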
With growing tension in the Middle East, some airlines had already chosen to reroute their flights around the southern part of the Arabian Peninsula before the first attacks. With more air traffic in the area and more kilometers to travel, the experts consulted by the outlet point to something obvious: flights will be longer and the risk of delays greater.

And fuel through the clouds. These diversions also come at a time when fuel for airplanes has skyrocketed. Argus reports that jet fuel now costs double the price of the crude oil it is refined from. The gap between the two products is so wide that American Airlines has lost 19% of its stock market value so far this year. The reason: investors distrust the future of the airlines. Jet fuel is a very delicate refined product whose storage costs are enormous, so reserves are small. That causes its price to spike with every new conflict, and can even put supply at risk. When the unexpected event is a war in a corridor through which 20% of the world’s oil and gas circulates, the situation is far more delicate. And when 40% of Europe’s aircraft fuel arrives through the Strait of Hormuz and it closes, we already know what to expect.

From tourism to bankruptcy. The consequences of the route changes and fuel price increases are very diverse. According to Deutsche Bank, airlines are at risk of bankruptcy if fuel prices remain this high. They are not talking just to talk: the last time there was such a large gap between the price of oil and jet fuel was in 2005, after the Katrina and Rita disasters. It was the trigger for the bankruptcies of Delta Air Lines and Northwest Airlines. But the change in routes is also key for the cities of the Gulf countries. Dubai and Doha have managed to attract Western tourists who spend a few days in their streets, in a kind of gigantic terminal.
Without intermediate stops on major trips between Europe and Asia, they risk losing their status as a recreational stopover between the two continents for tourists with a handful of days between two long flights. … Read more

What Antigravity is, how it works, and what you can do with Google’s artificial intelligence IDE

Let us explain what Antigravity is and what you can do with this Google development tool. It is a developer environment powered by artificial intelligence that makes creating web projects as easy as possible. We will start by explaining simply what Antigravity is. Then we will cover its main functions and what you will be able to do with it. Next, we will explain how to use it, and we will finish by talking about its price and availability.

What is Antigravity

Antigravity is an integrated development environment (IDE), one of those programs developers use to write code and create applications or web pages. The difference is that Antigravity is an IDE powered by artificial intelligence. This means you can delegate complex coding tasks to autonomous AI agents, which write the bulk of the code and run checks. In short, instead of writing all the code by hand as in classic IDEs, you simply explain to the Antigravity assistant what you want, and it will use artificial intelligence to plan it, program it, test it, and show you the final results for you to review.

Whereas other AI programming tools have agents that simply assist you while you write, leaving you to do the bulk of the work, Antigravity is the opposite. Here the burden of writing code falls on the AI, while you merely tell it the concept you want and review everything. So it is an IDE with which even someone who does not know how to program can build things. It is like one of those generative AIs that respond with text or create images or videos of whatever you ask, except that what it produces is code.

There are other AI services, such as Claude, with very good code-writing capabilities. However, Antigravity is capable not only of generating the code, but also of testing it and detecting and correcting any errors that may have been introduced.
As for the artificial intelligence it delegates to, its agents use Gemini, Claude, and GPT models. New versions are added as they are released; today Gemini 3.1 Pro, Gemini 3 Flash, Claude Sonnet and Opus 4.6, and gpt-oss-120b are available. In essence, making a website with Antigravity can be a little more complicated than making it with Claude if you are an inexperienced user. But if you are a developer you will have much more control, and you will also be able to use the AI to review other projects you have created.

What you can do with Antigravity

Antigravity is not simply a code editor; it goes much further. To start, here is a list of the tool’s main functions:

Planning and autonomous execution of tasks. Antigravity agents can autonomously plan, execute, and verify complex tasks through the editor, the terminal, and the browser. You simply give an instruction in natural language, which can range from creating a website to reviewing an existing one, and the agent takes care of the process, from planning to implementation.

Management of multiple agents in parallel. In the Manager view, a developer can launch five different agents working on five different bugs simultaneously, effectively multiplying their productivity.

Verifiable artifacts. Agents produce tangible deliverables such as task lists, implementation plans, screenshots, and browser recordings. This way you can verify the logic the agent is following, and leave comments on an artifact to request corrections or give feedback without stopping its workflow.

Browser control for automatic testing. Antigravity’s browser subagents can launch Chrome, interact with your application’s interface or website, and verify its operation automatically. In other words, besides creating a website, you can have the AI verify that everything works well.

Two work modes.
Antigravity offers a Plan mode, which generates a detailed plan before acting on complex tasks, and a Fast mode, which executes instructions instantly, great for quick fixes. You can also choose how much autonomy to give the agent.

Compatibility with the existing ecosystem. Antigravity works on top of your existing toolchain: Git, language runtimes, package managers, CLIs, and browsers, so you can open the same repositories and run the same commands you already use.

How Antigravity works

The way Antigravity works is simple. The main screen is divided in two. On one side you have the code, where you can open and view the contents of your projects. On the other side you have the artificial intelligence agent, with a prompt box where you only have to describe the website you want.

If you want to create a project from scratch, go to the agents section and describe the website or application you want to create. You should do this as completely and thoroughly as possible, explaining what you want it to be able to do and describing the design you have in mind. Then send the prompt, and Antigravity will first start thinking about how to do it, and then begin writing the code, which you will see in the other part of the application. During the process you will be able to see how the agent is thinking, and it will ask your permission before making changes or taking actions.

You can also, when you launch Antigravity, open a project you already have to view its code. Then, in the agent section, you can ask it to make whatever changes or checks you want.

Price and availability

You can use Antigravity with your free Google account. This means you will be able to use it to create any website or application without problems, though it is designed for occasional, undemanding use. However, if you pay for a Google AI Pro or Ultra subscription, you will have much broader limits if you are a professional developer … Read more
