For centuries, price has been a sign of quality. Generative AI is breaking that rule in dozens of sectors

For centuries, price has served as a cognitive shortcut. If something costs a lot, it must, for one reason or another, be worth a lot. An Armani suit, Bang & Olufsen headphones, a McKinsey report. The number has always conveyed certain information to us before we see the product. It was compressed reputation.

With the arrival of generative AI, that is ending in many sectors. Today a logo can cost 15 euros or 15,000. And be the same logo. A market analysis can come from a consulting firm with offices on three continents or from a guy in pajamas who knows how to use Deep Research. The report may be indistinguishable. In fact, sometimes the second one will be better, because the guy in pajamas understands the sector and the consultancy assigned the junior who happened to be free.

AI is breaking the link between production cost and final result. Something very similar to what Antonio Ortiz, AI popularizer and former boss of this site, described in "Artificial Intelligence and unlinking effort and result". If anyone can generate in minutes what previously required teams, weeks, and invoices with many zeros, price no longer communicates much about quality. It is starting to be noise, and this will force a migration of signals: from "how much" to "who", "how" or "why". The questions that will matter are "who signed this?", "what process was followed?", "what human decisions were behind it?"

In other words, the process will become the product. We are already starting to see it with design studios that obsessively document every iteration, or consultancies that sell you not only the deliverable but also access to their partners' reasoning. More and more of us are digital artisans who charge for showing how we work, not only for what we deliver. AI has made production almost free, so we are being flooded with digital content of all kinds, and scarcity shifts to criteria: knowing what to ask for, what to discard, what makes sense and what doesn't. To good taste.

AI can do almost anything, and what it can't, it will learn next year. Deciding well what to do and what not to do is still expensive. There, for the moment and luckily, there is no shortcut.

Featured image | Xataka

In Xataka | The AI of 2026 brings an uncomfortable truth: the most useful will be the one that watches us the most

When they sold us generative "artificial intelligence", we didn't know it would be artificial and generative, but not "intelligent"

A few months ago, a group of Spanish researchers decided to put AI chatbots to a curious test. They uploaded an image of an analog clock to the chatbot and asked the AI a simple question: "What time is it on that clock?" The AI failed disturbingly.

Machine, can you tell me the time? Researchers from the Polytechnic University of Madrid, the University of Valladolid and the Politecnico di Milano published a study a month ago in which they wanted to evaluate how intelligent the artificial intelligence of these models really was. To do this, they built a large set of synthetic images of analog clocks (available on Hugging Face) showing 43,000 different times. Before fine-tuning, the AI models consistently failed when trying to tell the time. After the adjustment the behavior was much better, but still imperfect. That should not happen with an issue so "simple" for humans.

A disastrous result. They then asked four generative AI models what time those images of analog clocks showed. None of them managed to tell the time accurately. The group of models was made up of GPT-4o, Gemma3-12B, Llama3.2-11B and QwenVL-2.5-7B, and all of them had serious problems "reading" the time and differentiating, for example, the hands, or the angle and direction of those hands in relation to the numbers marked on the clock face.

Fine-tuning to improve. After these first tests, the researchers managed to significantly improve the behavior of these models through fine-tuning: they trained them with 5,000 additional images from that data set and then re-evaluated the models. However, the models again failed consistently when tested with a different set of images of analog clocks. The conclusion was clear: they don't know how to generalize.
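The geometry the models fail at is trivial for a short program. As a point of contrast, here is a minimal sketch (our own illustration, not the researchers' code) that recovers the time from the angles of the two hands:

```python
def time_from_angles(hour_angle: float, minute_angle: float) -> str:
    """Recover the displayed time from the angles (in degrees,
    measured clockwise from 12 o'clock) of the two hands."""
    # The minute hand moves 6 degrees per minute.
    minute = round(minute_angle / 6) % 60
    # The hour hand moves 30 degrees per hour plus 0.5 per minute;
    # subtract the minute contribution before reading the hour.
    hour = int(round((hour_angle - minute * 0.5) / 30)) % 12
    return f"{hour if hour else 12}:{minute:02d}"

# At 3:30 the hour hand sits at 105 degrees and the minute hand at 180.
print(time_from_angles(105.0, 180.0))  # → 3:30
```

Two closed-form divisions suffice; this is exactly the kind of rigid geometric rule that humans internalize once and generalize everywhere, and that the tested models failed to learn.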
What they discovered with this test confirmed what we have been observing since the beginning with AI models: they are good at recognizing data they are familiar with (memorized), but they often fail in scenarios they have never faced and that are not part of their training sets. In other words: they were incapable of generalizing.

Dalí enters the scene. To try to find the causes of these failures, the researchers created new sets of images in which, for example, they used Dalí-style distorted clocks, or clocks with arrows at the end of the hands. Humans are able to tell the time on analog clocks even if they are distorted, but for AI models that was a huge problem.

If they do this with clocks, imagine with medical scans. The danger of these conclusions is that they reignite the debate about whether generative AI models are indeed artificial and generative, but not very intelligent. If they have these difficulties identifying the hands or their orientation, things get dangerous when what the models have to analyze are medical images or, for example, real-time images from an autonomous car driving through a city.

AIs are stupid. Although it is true that generative AI models are fantastic aids in various scenarios such as programming, the reality is that what they do is "regurgitate" responses that are already part of their training data. As Thomas Wolf, Chief Science Officer of Hugging Face, explained, a generative AI "will never ask questions that no one had thought of or that no one had dared to ask." Although thanks to their enormous memory and training they can recover a multitude of data and present it in useful ways, finding solutions to problems for which they have not been trained is very complicated. For experts like Yann LeCun, the reality is clear: generative AI is very stupid and, furthermore, a dead end.

Source: clocks.brianmoore.com

AI doesn't draw clocks very well either.
Added to the researchers' experiment is another small test that once again calls into question the capacity of generative AI. It involves asking different models to write the code to display an analog clock showing the current time. A designer named Brian Moore wanted to share the results of several AI models, and the truth is that most of them are terrible, although others like Kimi K2 achieve a good result. We have tested the recent Grok 4.1 and GPT-5.1. After a little insistence, Grok 4.1 drew a perfect clock, and it works. With GPT-5.1 there was no way, at least in our tests.

A worrying reality. This inability to solve tasks that seem simple certainly suggests that these models are not in a good place. It is true that a good prompt can help overcome some of these limitations, but it is becoming increasingly evident that AI models continue to make mistakes despite the passage of time. The theoretical revolution of this technology needs precisely to eradicate them, and it does not seem that we are on the way to achieving it. The models improve, yes, but not enough for us to trust them 100%.

Image | Yaniv Knobel

In Xataka | As if there weren't enough AI companies, Jeff Bezos has just returned from the shadows to build another one, according to the NYT
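The task posed to the models boils down to the inverse computation: from a time to hand angles. A minimal sketch of what a correct answer looks like (ours, not one of the tested models' outputs), rendering a time as an SVG clock face:

```python
import math
from datetime import datetime

def clock_svg(hour: int, minute: int) -> str:
    """Render an analog clock face showing the given time as SVG."""
    def hand(angle_deg: float, length: float, width: int) -> str:
        # SVG's y-axis points down; 0 degrees is 12 o'clock, clockwise.
        rad = math.radians(angle_deg - 90)
        x, y = 50 + length * math.cos(rad), 50 + length * math.sin(rad)
        return (f'<line x1="50" y1="50" x2="{x:.1f}" y2="{y:.1f}" '
                f'stroke="black" stroke-width="{width}"/>')

    minute_angle = minute * 6                     # 6 degrees per minute
    hour_angle = (hour % 12) * 30 + minute * 0.5  # 30 per hour + drift
    return ('<svg viewBox="0 0 100 100">'
            '<circle cx="50" cy="50" r="48" fill="none" stroke="black"/>'
            + hand(hour_angle, 25, 3) + hand(minute_angle, 38, 2)
            + '</svg>')

now = datetime.now()
print(clock_svg(now.hour, now.minute))
```

The two angle formulas are the whole trick; getting them (and the coordinate-system quirks) consistently right is what several of the tested models failed to do.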

It is an invaluable source for generative AI, and at the same time generative AI is what is killing it

Wikipedia wants to be the last human bastion against AI-generated content. The Wikimedia Foundation removed AI-generated summaries after several cases of hallucinations and complaints from its editors. And it's not Wikipedia's only problem with AI, which is also wreaking havoc on its traffic.

What is happening. In an article published on the foundation's blog, product manager Marshall Miller details the current situation of Wikipedia traffic. The foundation estimates an 8% drop in human traffic between May and August 2025. They attribute it to the use of generative AI as a source of information, whether through the chatbots themselves or through initiatives such as Google's AI Overviews that answer the user without requiring a click on a link.

Why it matters. It poses a risk to the continuity of Wikipedia, because people are obtaining the information its volunteers produce without visiting the website and without adding page views. It is the same as what is happening to the media with Google's AI summaries, although the main difference is that the media live off advertising while Wikipedia lives off donations from individuals. Miller sees it clearly: "With fewer visits to Wikipedia, fewer volunteers will be able to develop and enrich the content, and fewer individual donors will be able to support this work."

A crisis that comes from afar. Wikipedia losing traffic has been in the news for a long time. In 2020 it suffered a massive drop: it lost 3 billion organic visits, and the culprit was Google. Direct-answer modules began displaying information right on the results page, causing many people not to click.

A correction. In May of this year, Wikipedia detected an unusual increase in traffic from Brazil. At first they classified the traffic as human, but later they verified that it came from bots designed to imitate human behavior.
This led them to update their bot detection mechanism and, with the newly corrected data, they saw that there had been an 8% drop in human traffic.

The irony. In the era of generative AI, sites like Wikipedia are an invaluable source of information. It is where chatbots and Google's own search draw from to give us those answers, but at the same time they are harming it, and not only because of the drop in traffic: the bots and scrapers also affect its operation and place a noticeable load on the servers.

Solutions. The Wikimedia Foundation calls for responsible use: seeking out original sources and highlighting the importance of human-created content. It sounds almost like a plea, and the truth is that the outlook does not look good. In the case of AI Overviews, the media have warned about the consequences and some groups have even sued Google. There are clues that Google could be negotiating licensing agreements with large media groups, but for the moment nothing has materialized, and its AI results keep working as on day one (they have even launched AI Mode).

Image | Wikipedia

In Xataka | A Wikipedia editor spent years pretending to be Russian Red. He was actually an Indian scammer

One more year, we have a new iPhone. And one more year, Apple lags far behind in generative AI for mobile

Last May, Sundar Pichai, CEO of Google, mentioned the word "AI" 135 times in his opening keynote at the Google I/O 2025 event. Everything was AI, and that absolute prominence was confirmed in the presentation of the Pixel 10 in August. Things have been very different at the iPhone 17 presentation event, where this technology was barely discussed.

Where is Apple Intelligence? One would expect one of the most important technology companies in the world to have jumped on the AI bandwagon by now, but that is not what has happened with Apple. At the presentation event of the new iPhone 17 there were barely any mentions of Apple Intelligence, a platform that not only took its time to arrive, but which, when it did, proved to be much more limited than that of its competition.

Real-time translation. Apple did talk about a new capability in the AirPods Pro 3: simultaneous translation, part of the Apple Intelligence platform, which will allow conversations in different languages among several people, whether or not they also use those earbuds. If they do, they will hear the translation directly in the earbuds; if not, the translation will appear on the iPhone screen. Pending a look at how it behaves, this is undoubtedly Apple's most notable artificial intelligence novelty in recent months.

Delays and more delays. Apple already warned in March that its artificial intelligence options would be delayed and would not arrive until 2026. Nor was there much to tell at its WWDC conference, where the protagonist was the new (and controversial) Liquid Glass design language.

The iPhone 17 will not boast about AI (for now). All this made it practically impossible to expect surprises in this area, but it is still striking that while its competitors keep announcing news with platforms like Gemini, Apple still does not take a step forward. For now, AI does not make the difference.
The standout AI functions on Android are striking, but for now they do not seem to have completely transformed the user experience, and they are not an argument that can make iPhone users switch to Android. Circle to Search, real-time translation of voice calls and messages, and the striking photo editing functions are interesting, but even more so is the growing integration of Gemini with the applications installed on our Android phones, such as Gmail.

Siri falls further behind Gemini. The big loser in this situation is Siri, Apple's voice assistant. In theory, by now we should have a supercharged version with AI, but its development is proving chaotic at an Apple that has yet to find the right formula. That has given Google a big head start with Gemini, which is already a natural substitute for Google Assistant and is gaining ground fast.

Time to wait. Faced with this panorama, we cannot do much more than wait. Apple has promised news in Apple Intelligence in spring 2026, and it is more than likely that those novelties will reach its current devices. One thing is certain: those devices are more than ready for such options, starting with the new A19 Pro chip of the iPhone 17 Pro/Max.

Privacy as a banner. Apple already made it clear when presenting Apple Intelligence that one of the clear focuses of this platform was protecting users' privacy. To do this, it will take advantage both of language models that can run on-device, without connecting to the cloud, and of more ambitious models that run in the private cloud Apple already has prepared. That strategy raises two questions. The first: whether the functions that one and the other will offer will be able to compete with the great foundational models of OpenAI, Google and the rest of the competitors. The second: whether that commitment to privacy will really be a differentiator in the AI race.
In Xataka | If the question is which of the tech giants is winning the AI race, the answer is: none

The big new generative AI models keep getting delayed. It is a worrying sign that we may have hit a ceiling

We expected to have GPT-5 available at the beginning of the year, but OpenAI gave us GPT-4.5. The generative AI model, which theoretically represented a remarkable leap over its predecessors, ended up disappointing, and the company announced that it would remove it from its API in July. It was too expensive and simply did not pay off. That was already a bad sign for AI progress, but there is more.

And GPT-5, what about it? GPT-5 was expected to arrive in the middle of the year. Sam Altman has been building hype, but in December we learned that the arrival of that model was proving problematic. The leap in performance was not what was expected, and the cost of developing it is huge. What did OpenAI do? Delay it and launch GPT-4.5 in its place, which, as we have seen, was one of the great disappointments in OpenAI's history. Bad sign number one.

Behemoth is delayed. As The Wall Street Journal reports, Meta will delay the launch and deployment of its most ambitious model to date, Llama 4 Behemoth. This "monster" with 288 billion active parameters (two trillion in total) is the third member of the recently presented Llama 4 family. However, according to the WSJ, "company engineers are struggling to significantly improve" its capabilities. It should have arrived in April, but it is now estimated to arrive in autumn, or even later.

Frustration. Sources close to the company indicate that executives are frustrated with the performance of the team developing Llama 4 Behemoth. "Significant management changes" are already being contemplated, which would mean internal moves (and who knows if layoffs) as a result of these poor results. And it's not as if the available Llama 4 models are being well received, either. Bad sign number two.

Unbalanced. The WSJ also highlights that the first Llama version was created by Meta's fundamental research team, formed by academics and researchers.
Since then, 11 of the 14 researchers have left the company.

Anthropic is not advancing either. We also expected a "round number" leap in Claude, Anthropic's generative AI chatbot, but in February the company presented Claude 3.7. It is true that this model did offer striking capabilities, but for the moment its Opus version, the most ambitious, has yet to appear, and nothing is known about Claude 4.0. Bad sign number three.

Not leaps, at most hops. What we are seeing in recent months are not significant leaps in the capacity of the models, but striking improvements only in certain areas or specific features. It happened with Gemini 2.5 Pro, especially powerful in programming, which has allowed Google to gain ground, but also with OpenAI and the famous images imitating Studio Ghibli, or with Grok 3, which has become more famous for its lack of censorship than for its accuracy or quality (which is not bad).

Deceleration. All this fuels the debate about a potential "slowdown" of AI: scaling no longer seems to work so well, and using more GPUs and more data to train models is not delivering the expected return. Jaime Sevilla, director of Epoch AI, did believe that the pace of improvement was meeting expectations, but these delays certainly cast doubt on the future progress of generative AI.

Agents and "reasoning" AI are the hope. Models with "reasoning" capacity have indeed enabled striking improvements in some areas, and companies have rushed to present this type of variant, along with Deep Research modes for specialized uses. The other great hope of 2025 is AI agents, capable of completing sequences of tasks autonomously to solve a problem, even connecting to other services or data sources. For now we already have outstanding examples in the field of programming, but practical applications for end users are limited.

Image | Meta

In Xataka | There are too many AI models.
That poses a genuine death sentence for Anthropic and Claude

Grok will also remember all our conversations with it. The new generative AI trend is already here

xAI, Elon Musk's company, has announced the incorporation of a memory function for its chatbot Grok, which can now remember details of past conversations to offer more personalized answers.

Why it matters. Memory integration is a huge step in the evolution of AI assistants, transforming them from tools for specific tasks into digital companions that learn and adapt over time. This update narrows Grok's gap with its rivals. ChatGPT has offered a similar function for some time, recently improved so it can refer to the user's entire conversation history. Gemini also has persistent memory to personalize its answers.

In detail. This new function allows the assistant to retain information from previous interactions. It will remember whether we told it we only want to use Python to program, or whether we asked for advice on improving specific running personal bests. The function is available in beta through Grok's website and its mobile applications, although it is not yet accessible to users in the European Union or the United Kingdom.

The context. Grok 3 already stood out for its speed and intelligence but, as we said at the time, it lacked the elements that make a chatbot attractive for recurring and professional use compared to the competition's options. It had nothing similar to Projects, GPTs or Gems. It still does not, but at least it is now moving forward in product development with persistent memory.

Between the lines. The implementation of memory implies a huge change in the human-machine relationship. It allows a move from the one-off, stateless query model that characterized the first AI systems toward more continuous relationships that are built over time, that remember. AI assistants go from being specific tools to becoming digital companions that know our preferences, history and needs.

How it works. xAI has emphasized transparency in memory management, allowing users to: see exactly what information Grok remembers; disable the function from the settings;
delete individual "memories."

And now what. The question is whether, thanks to this novelty, Grok will manage to differentiate itself in an increasingly competitive space, or whether it will be relegated to a footnote in a saturated market. xAI still has to show that Grok can offer something differential and genuinely useful on a day-to-day basis, not only in niche uses closer to a hobby. This is a great step in that direction.

Featured image | Grok, Xataka with Mockuuups Studio

In Xataka | Founders of small startups and big tech companies already have something in common: they are billionaires thanks to AI
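The controls described above (view, disable, delete) map onto a very simple data model. A purely illustrative sketch of such a memory store, assuming nothing about xAI's actual implementation:

```python
class ChatMemory:
    """Toy model of a chatbot memory store with the user controls
    described above: view, disable, and delete individual entries.
    Illustrative only -- not xAI's real implementation."""

    def __init__(self):
        self.enabled = True       # user can switch memory off entirely
        self._memories = {}       # id -> remembered fact
        self._next_id = 0

    def remember(self, fact):
        if not self.enabled:      # respect the user's opt-out
            return None
        self._next_id += 1
        self._memories[self._next_id] = fact
        return self._next_id

    def view(self):
        # The user can see exactly what is stored about them.
        return dict(self._memories)

    def forget(self, memory_id):
        # Delete a single "memory" without touching the rest.
        self._memories.pop(memory_id, None)

mem = ChatMemory()
mid = mem.remember("prefers Python for code examples")
print(mem.view())
mem.forget(mid)
print(mem.view())  # → {}
```

The real engineering difficulty is not the store itself but deciding what to write into it and how to retrieve the relevant entries at reply time; the transparency controls, however, really are this simple conceptually.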

Follow live the talks and panels of the great generative AI event

As planned, today the AI2030 event kicks off in Madrid, an event of which Xataka is media partner and which revolves around the use and application of generative artificial intelligence in the professional environment. Throughout the day, talks, presentations and panels will take place with renowned experts from the sector which, naturally, you can follow live.

The day begins today, March 19, at 10:00 Spanish peninsular time, and you can follow it live through YouTube. If you already have your in-person ticket, we will be there, so do not hesitate to come say hello and chat about how AI is not going to change the world, but is already doing so.

Follow the AI2030 live

What to expect from AI2030

At AI2030 we will discover, from genuine leaders of the sector, how generative artificial intelligence is being implemented in the business world. We will learn about practical, real cases and see how the future is taking shape, but we will also keep our feet on the ground to address the challenges posed by its implementation. During the day we will have the opportunity to listen to personalities such as Ana Castrillo (Head of Marketing at Google), Javier Jiménez (founder and CEO of Dreamshot), Sofia Benjumea (Head of Google for Startups - EMEA) and Xabier Iglesias (founder of Mementum Tech), among many others. And pay attention, because at 13:25 the presentation by our partner Angela Blanco will take place, in which she will address what is to come in 2025.

We remind you that the event begins at 10:00 and will last until lunchtime, so put it in your calendar and see you at AI2030.

More information | AI2030

Anthropic possibly has the best generative AI product. And not even that guarantees its survival

AI is an unprecedented devourer of money. Anthropic is closing a $3.5 billion round that pushes its valuation above $61 billion. An astronomical figure for a company with a product… but barely two million monthly active users. And projected revenue of "only" $1.2 billion for this year. The numbers do not add up. And that is precisely the issue.

Anthropic's problem is not the quality of its product. Claude is, in many ways, the most refined assistant on the market. Its focus on safety and ethics, its warmer communication (as warm as its background color, in contrast to ChatGPT's stark white) and its ability to hold coherent, deep conversations have made it the favorite of many demanding users. It is really good. But being the best does not guarantee victory in the technology industry. Not even survival.

OpenAI has 400 million weekly active users because it has built a massive brand in AI. Google has a kind of Klapaucius, an infinite-money cheat, thanks to its advertising empire. Elon Musk's xAI uses the X platform and its own CEO as natural showcases. Microsoft has integrated AI throughout its product ecosystem. And Anthropic? It has a great product with little distribution. It is the perfect paradox: the best assistant that almost nobody uses.

The history of technology is full of superior products that ended up losing to mediocre but better positioned rivals. Betamax was technically superior to VHS. Apple's Newton anticipated the iPhone but flopped hard. Netscape dominated the Internet before being crushed by Internet Explorer. What we are witnessing is a classic standards war, where the winner will not necessarily be the best product, but the one that achieves the critical mass necessary to establish itself as the industry's new standard.

The uncomfortable reality is that we live in a world, as my colleague Javier Pastor said yesterday, with too many AI models. Every week a new one appears.
Anthropic, OpenAI, Google, Microsoft, Meta, xAI, DeepSeek, Perplexity, Mistral, Alibaba… the list keeps growing. And when venture capital stops flowing so generously (because at some point it will), many will not survive. The analyst Ed Zitron puts it bluntly: Anthropic "is not a real company, it could not survive without the charity of venture capital." With losses of $5.6 billion last year, it is hard to refute that statement. Zitron omits that living at a loss while clinging to venture capital is routine for much of the tech industry, but he has a point.

Anthropic's strategy seems clear: to position itself as the "most human" alternative to OpenAI's "Robot God" energy. Its demos include warm color grading, relaxing jazz music and presenters who sound like normal people speaking normally, not like some Chief of This or Head of That proclaiming achievements. It is an intelligent approach. Is it enough?

Perhaps the most likely fate for Anthropic is acquisition. An excellent product with scarce commercial traction is attractive to giants seeking to improve their own AI offerings. Apple, which has not yet shown all its cards in this game, could be a logical buyer, although its acquisition history is far from these amounts: its largest purchase was Beats, eleven years ago, for which it paid twenty times less than what Anthropic is worth now.

In this landscape oversaturated with models almost indistinguishable to the average user, the question is not who has the best technology, but who will survive when venture capital money begins to run dry. And in that battle, having the best product is surely not enough.

In Xataka | Anthropic's new Claude 3.7 simplifies what other models complicate. And along the way it programs and "reasons" with the best

Featured image | Anthropic

The next generative AI revolution will not reason better, but will integrate into physical robots. And it will change robotics forever

In the technology world we are fascinated by chatbots that write essays and take their time reasoning. Grok 3 comes, Claude 3.7 goes; meanwhile, something less visible but deeper is happening: the beginning of the merger between conversational AI and mechanical bodies. For the first time, robots do not just execute preprogrammed instructions. Now, in their own way, they also understand.

Historically, robotics and AI have followed separate paths. Parallel, but separate. Industrial robots were as precise as they were stupid. AI systems were intelligent, but disembodied. Think of the robotic arms that have existed on assembly lines for decades. Millimetrically exact, but absolutely lost if a single component appeared in a position slightly different from the expected one.

The new generation of robots connected to LLMs can now interpret ambiguous instructions, such as "bring me something for my thirst," and solve the problem through reasoning (word of the year): evaluating which drinks are available, whether the user has shown a preference for any, and even whether there is ice in the freezer. We no longer program specific movements, but general objectives.

Figure's robots are good examples. So good that they even work autonomously at a BMW factory. According to what the company has just published, they can receive generic verbal instructions, such as picking up parts, and without prior specific programming they are able to visually analyze the environment and detect them. They can even pause, reassess the situation and correct the error if someone moves the parts. This capacity for contextual adaptation was unthinkable a couple of years ago.

The really groundbreaking thing about this AI embedded in robots is that it can learn very differently. LLMs trained on text lack a physical understanding of the world. Traditional robots lack contextual intuition. Merging them produces an intelligence that encompasses both semantics and physics.
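The "bring me something for thirst" example can be caricatured in a few lines. A hypothetical sketch, with a made-up world state and a rule-based stub standing in for the real LLM call (none of these names correspond to any vendor's API):

```python
# Hypothetical grounding of an ambiguous instruction into robot actions.
# The world state, action names and planning rules are all invented
# for illustration; a real system would query an LLM and perception.
WORLD_STATE = {"fridge": ["water", "cola", "mustard"], "user_pref": "cola"}

def plan(instruction, world):
    """Turn an ambiguous instruction into concrete steps by reasoning
    over the world state (here a hand-written stub, not a real LLM)."""
    if "thirst" in instruction:
        # Reason about which fridge items actually quench thirst,
        # then prefer whatever the user has shown a preference for.
        drinks = [i for i in world["fridge"] if i in ("water", "cola")]
        choice = world["user_pref"] if world["user_pref"] in drinks else drinks[0]
        return ["open_fridge", f"grasp({choice})", f"hand_over({choice})"]
    return []  # no plan for instructions the stub does not cover

print(plan("bring me something for my thirst", WORLD_STATE))
# → ['open_fridge', 'grasp(cola)', 'hand_over(cola)']
```

The point of the sketch is the shape of the pipeline, not the stub: the instruction names a goal, the planner consults the perceived world and user preferences, and only then are low-level motor actions emitted. Swapping the stub for an LLM is what gives the text its "general objectives, not specific movements" framing.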
A robot equipped with LLMs is not only able to understand the instruction "open that box without damaging its contents," but can improvise with boxes it has never seen, evaluating materials, closures and fragility. The revolution, unfortunately, will not be spectacular as in science fiction; it will arrive in the form of robotic arms in factories that can be reconfigured with a verbal order. Or warehouse robots that understand contextual priorities. Or medical assistants capable of interpreting the unverbalized needs of their patients.

Boston Dynamics, the pinnacle of robotics over the last decade thanks to its robots jumping and doing parkour, is no longer as interested in acrobatics as in integrating understanding systems that allow its machines to grasp complex instructions in construction and industrial environments. You just have to look at its website. And on the horizon loom the Tesla Optimus or Xiaomi's CyberOne. Or Unitree, one of the great Chinese technological bets.

The big change will come when these systems stop failing in the face of the unforeseen and begin to apply general principles of physical and contextual reasoning. We are not seeing the birth of artificial consciousness, but the union of the physical world and the world of meaning in a single integrated system.

What makes this convergence so powerful is its silent nature. It catches us arguing over whether Grok 3 is the better product or whether ChatGPT 4.5 will be enough for the rest of the year, while robots are beginning to understand the world as we do. Not just by calculating a trajectory, but by understanding intentions, contexts and meanings. That is far more transformative and valuable than any ten-page essay generated in seven minutes.

In Xataka | Deep Research is not just a new AI function. It is the beginning of the end of intellectual work as we know it

Featured image | Figure, Ryunosuke Kikuno on Unsplash

On March 19 we await you at the great event on generative artificial intelligence applied to business

Xatakeros: next March 19, the AI2030 event will be held in Madrid, with the real use and application of generative AI in the business environment as its central theme, and Xataka is its media partner. If it is a topic you are passionate about, as it is for us, or you would like to learn more so you can apply it in your professional environment, keep reading, because this will interest you.

AI2030 is a different kind of event in both format and content, and now you will understand why. As for the format: on the one hand, there will be room for people who want to learn and hear how various leaders in the sector are working with generative artificial intelligence in their business fields, with practical, real cases of generative AI in action, but also with all the challenges (and opportunities) of its implementation, plus a review of the trends we expect in the coming years. All of it with a little time set aside for networking. Here you can consult the full agenda.

All these cases will be presented by first-rate speakers. Among those already confirmed are Jimmy Klein (Diageo Product), María José Barrera (Global CDO & E-commerce Director of Massimo Dutti), Sofia Benjumea (Head of Google for Startups - EMEA) and Carlos Rivadulla (manager and lawyer at Écija), among many others. The event is organized by Miraiku, a leading firm in generative AI development, and is sponsored by Google Cloud, so we will also have some of its executives.

On the other hand, there will also be a more hands-on session, which will take place on March 20 in hackathon format. This opportunity is reserved for a few selected startups, but you are still in time to put yours forward as a candidate if you are interested. Would you like to join us? Keep reading and we will give you all the details.
For all kinds of professionals, and also for startups

If you are passionate about the subject or work professionally in this field: on March 19, from 10 in the morning until lunchtime, numerous talks will address AI in business environments and in various sectors in particular. You can request your ticket as an attendee here.

If you have a startup: you can apply to join the hackathon organized for March 20. In that session, first-rate companies such as Kia, Massimo Dutti, Estrella Damm or Diageo, among others, will pose a challenge to the participating startups and together they will work on a solution. You can get more details and register your startup by selecting "Join as a startup" here.

We are waiting for you! And if you can't join us or can't follow the stream that day, don't worry, because we will bring you the best of the event on Xataka and on our social networks. As our friends at AI2030 say, "the future begins here!"

More information and registration | AI2030.AI
