OpenAI’s new voice models already speak like customer service agents. Their next destination: the call centers

Since the beginning of the year, Big Tech’s objective has been clear: getting us to talk to artificial intelligence (AI). OpenAI, Microsoft, Google and Meta have added voice features to their assistants. But this seems to be just the beginning. The industry moves at a frantic pace and the way we interact with these tools keeps evolving. Say ‘hello’ to voice agents. Sam Altman’s company has been betting on text agents with tools such as Operator and computer-using agents. However, OpenAI already has its next big move ready to keep standing out in the race to develop AI: pushing a new and powerful generation of voice agents. New models on stage. OpenAI has announced the launch of new audio models to turn voice into text and vice versa. They are not in ChatGPT but in the API, where developers can use them to create voice agents. The important part? They aim to be much more precise and to take customization to the next level. The new OpenAI models, built on GPT-4o and GPT-4o mini, promise to improve on Whisper and on its previous text-to-speech tools, which will also remain available through the API. But it is not just a matter of performance: they can now also modulate their tone to sound, for example, “like an empathetic customer service agent.” Destination: the call centers. OpenAI makes it clear where it is aiming with this launch. It states that “for the first time, developers can tell the model not only what to say, but also how to say it, which enables more personalized experiences for use cases ranging from customer service to creative storytelling.” According to OpenAI, this technology will allow much richer “conversational experiences.” If we bear in mind that ChatGPT, powered by GPT-3.5, arrived in November 2022, it is evident that progress has been vertiginous. And everything indicates that these models will end up in the call centers.
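The idea of telling the model “how to say it” can be sketched with a simple request payload. This is a minimal sketch, not OpenAI’s official sample: the model name `gpt-4o-mini-tts`, the voice name and the `instructions` field are assumptions based on the announcement, and the actual network call is left commented out.

```python
# Sketch of a text-to-speech request with a tone instruction.
# Hypothetical values: model/voice names are assumptions from the announcement.

def build_tts_request(text: str, tone: str) -> dict:
    """Build the request payload: 'input' is WHAT to say,
    'instructions' is HOW to say it."""
    return {
        "model": "gpt-4o-mini-tts",   # assumed speech model name
        "voice": "alloy",             # assumed voice name
        "input": text,
        "instructions": f"Speak as {tone}.",
    }

payload = build_tts_request(
    "Thanks for calling, how can I help you today?",
    "an empathetic customer service agent",
)

# The actual call would look roughly like this (commented out so the
# sketch runs offline; requires the openai package and an API key):
# from openai import OpenAI
# client = OpenAI()
# with client.audio.speech.with_streaming_response.create(**payload) as resp:
#     resp.stream_to_file("reply.mp3")

print(payload["instructions"])
```

The key design point is that the text to be spoken and the delivery style travel as two separate fields, so the same script can be re-voiced for different use cases.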
We might think that at first the interactions will be somewhat limited, but they will still be well above current voice systems. They will move away from traditional automated assistants and be much more natural. Over time, the line between a conversation with a person and one with an AI could become almost imperceptible. Images | Charanjeet Dhiman | OpenAI In Xataka | We have tried Sesame’s conversational AI. It is the closest experience to a “human voice” we have seen In Xataka | China has found an unusual strategy to dodge US roadblocks on AI: betting on open source

These are the main differences between the two models

Apple surprised us yesterday with the launch of a new iPad Air, just a year after launching the previous model. What has led Cupertino to update its tablet again? Is the new iPad Air 2025 worth it? We will try to answer these questions throughout this post, so you can decide whether to make the leap to this new generation: the new model can already be reserved and will go on sale on March 12.

| | iPad Air 11" (M2, 2024) | iPad Air 13" (M2, 2024) | iPad Air 11" (M3, 2025) | iPad Air 13" (M3, 2025) |
|---|---|---|---|---|
| Dimensions and weight | 24.76 x 17.85 x 0.61 cm; 462 g | 28.06 x 21.49 x 0.61 cm; 617 g (Wi-Fi) / 618 g (Wi-Fi + Cellular) | 24.76 x 17.85 x 0.61 cm; 460 g | 28.06 x 21.49 x 0.61 cm; 616 g (617 g Wi-Fi + Cellular) |
| Screen | 11-inch IPS, 2,360 x 1,640, 264 ppi, 60 Hz refresh rate, True Tone, 500 nits | 13-inch IPS, 2,732 x 2,048, 264 ppi, 60 Hz refresh rate, True Tone, 600 nits | 10.86-inch IPS LCD, FullHD+ 2,360 x 1,640, 60 Hz refresh rate, True Tone, 500 nits | 12.9-inch IPS LCD, FullHD+ 2,732 x 2,048, 60 Hz refresh rate, True Tone, 600 nits |
| Processor | Apple M2 | Apple M2 | Apple M3 | Apple M3 |
| Memory | 8 GB | 8 GB | 8 GB | 8 GB |
| Storage | 128 GB / 256 GB / 512 GB / 1 TB | 128 GB / 256 GB / 512 GB / 1 TB | 128 GB / 256 GB / 512 GB / 1 TB | 128 GB / 256 GB / 512 GB / 1 TB |
| Battery | Up to 10 hours of video playback | Up to 10 hours of video playback | Up to 10 hours of browsing | Up to 10 hours of browsing |
| Front camera | 12 MP f/2.4 | 12 MP f/2.4 | 12 MP f/2.0 | 12 MP f/2.0 |
| Rear camera | 12 MP f/1.8 | 12 MP f/1.8 | 12 MP f/1.8 | 12 MP f/1.8 |
| Operating system | iPadOS 17 | iPadOS 17 | iPadOS 18 | iPadOS 18 |
| Sound | Stereo speakers, dual microphone | Stereo speakers, dual microphone | Dual stereo speakers | Dual stereo speakers |
| Connectivity | Wi-Fi 6E, 5G (cellular models), Bluetooth 5.3, USB-C (USB 3.1 Gen 2) | Wi-Fi 6E, 5G (cellular models), Bluetooth 5.3, USB-C (USB 3.1 Gen 2) | Wi-Fi 6E (802.11ax), 5G (Wi-Fi + Cellular models), Bluetooth 5.3, USB-C (USB 2.0) | Wi-Fi 6E (802.11ax), 5G (Wi-Fi + Cellular models), Bluetooth 5.3, USB-C (USB 2.0) |
| Others | Touch ID in the lock button; available in blue, purple, space gray and silver | Touch ID in the lock button; available in blue, purple, space gray and silver | Available in space gray, blue, purple and starlight | Available in space gray, blue, purple and starlight |
| Price | From 659 euros | From 949 euros | From 699 euros | From 949 euros |

Main differences between iPad Air 2025 and 2024. As we have said, with just one year between the two models, the distinctions are not very striking. These are some of the areas where the novelties of the new iPad Air can be seen. Screen and design. Like its predecessor, the iPad Air 2025 is available in two screen sizes: 11 and 13 inches. Beyond this, as far as design is concerned, the weight is the other difference. The 11-inch model weighs 460 grams versus the 462 grams of the 2024 model, while the 13-inch weighs 616 grams compared to 617 grams on the iPad Air 2024. As you can see, the weight difference is practically unnoticeable. Another screen difference is that Apple now lists the new models’ resolution as FullHD+. It is the only one, since the new generation keeps the same brightness (500 nits on the 11-inch and 600 on the 13-inch), the same refresh rate (60 Hz), and both models have True Tone technology.
Performance. This is the area where the biggest difference can be noticed between the iPad Air 2024 and the iPad Air 2025. The previous model came with the M2 chip, while the new generation of iPad Air integrates the M3. This chip is not new; it can already be found in the MacBook Air 2024. If we compare the M3 with the M2, a remarkable advance can be seen: performance improves, as does energy efficiency. Although both chips have eight-core CPUs, the M3 offers noticeably faster processing speeds. The M3, in turn, has ProRes decoding engines, allowing fluid, real-time playback even of high-resolution footage. It also incorporates AV1 decoding, very useful at 4K and 8K resolutions. Software. Another difference between the iPad Air 2024 and the 2025 model is the software. The new model comes with iPadOS 18, and those who buy the new iPad are guaranteed updates for five years. In addition, it is compatible with Apple Intelligence (although the previous model is as well). Is the new iPad worth it? The surprise came from learning that Apple has launched a new iPad in less than a year. If you want a better processor, then it will be worth acquiring the new …

GPT-4.5 is not better than its rivals at almost anything. It is proof that traditional AI models are barely advancing

Sam Altman had already warned that they planned to launch GPT-4.5 very soon. We had been waiting for GPT-4’s successor for months, but over time expectations had been dropping: there was talk of the AI “wall” and of how scaling (more data and more GPUs to train models) was no longer working so well. GPT-4.5 was supposed to be proof that this wasn’t true. And you know what? It probably is true, because GPT-4.5 is a model with plenty of teething problems. GPT-4.5 is already with us. Yesterday OpenAI finally presented GPT-4.5, the theoretical successor to GPT-4. Sam Altman explained that this was “the first model that makes me feel like I am talking to a thoughtful person.” Gigantic and expensive. But Altman also acknowledged something else: “Bad news: it is a giant and expensive model.” The head of OpenAI claimed not to have enough GPUs for a mass launch, and availability of GPT-4.5 is very limited: only ChatGPT Pro users can use it for now. Expensive? Extremely. Using GPT-4.5 through the OpenAI API is extraordinarily expensive: it costs $75 per million input tokens and $150 per million output tokens. GPT-4o costs $2.5 and $10 respectively (30 and 15 times less), and o1, until now the most expensive, costs $15 and $60 respectively. And it is not a “frontier” model either. OpenAI’s technical report indicates that this is not a “frontier” model, as GPT-4 was. That matters because, despite it being OpenAI’s largest LLM, frontier models are more capable, larger in scale, and pose greater risks of generating misinformation or being pushed outside their guardrails. With GPT-4.5 they seem to have focused heavily on avoiding errors (that is one of its advantages: it appears to put its foot in it less, according to some benchmarks). It does not seem better at much of anything.
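To put those prices in perspective, a quick back-of-the-envelope comparison using the per-million-token rates quoted above:

```python
# Compare API costs per million tokens, using the prices quoted in the text.

PRICES = {  # (input $/1M tokens, output $/1M tokens)
    "gpt-4.5": (75.0, 150.0),
    "gpt-4o": (2.5, 10.0),
    "o1": (15.0, 60.0),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a request at the listed per-million-token rates."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# One million tokens in and one million out of each model:
for model in PRICES:
    print(f"{model}: ${cost_usd(model, 1_000_000, 1_000_000):.2f}")

# GPT-4.5 input is 30x GPT-4o's price, output 15x, as the article notes:
print(PRICES["gpt-4.5"][0] / PRICES["gpt-4o"][0])  # 30.0
print(PRICES["gpt-4.5"][1] / PRICES["gpt-4o"][1])  # 15.0
```

A single heavy request (one million tokens in and out) would cost $225 on GPT-4.5 versus $12.50 on GPT-4o, which is why the price/performance ratio drew so much criticism.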
The tests and benchmarks it has been put through seem to make it clear that the leap in performance is disappointing, especially if we compare it with its rivals’ newest models. It is worse in factual accuracy than Perplexity Deep Research, worse than Claude 3.7 Sonnet at programming according to TechCrunch and several experts, and also worse at reasoning (although admittedly it is not oriented toward it) than DeepSeek R1, o3-mini or Claude 3.7 Sonnet (which is a “hybrid” model). Bittersweet feeling. Experts like Simon Willison and Andrej Karpathy have shared their first impressions, and in both cases the sensation is that GPT-4.5 is slow, its knowledge only goes up to October 2023, and it does not represent a truly remarkable advance. Willison even analyzed the debate dozens of users were holding about GPT-4.5, and in an AI-generated summary the conclusions were equally clear: the version number itself was inappropriate, the model is too expensive, the price/performance ratio is very debatable, and the performance was not what was expected after so much time. Karpathy’s conclusion is that “it is a little better, and that is great, but not exactly in ways that are trivial to point to.” More human? Altman’s remark about how the conversational ability of GPT-4.5 had surprised him perhaps points to the area where this model stands out. Karpathy also pointed to that aspect, saying the improvement could show in “creativity, making analogies, general understanding and humor,” which perhaps makes conversations with GPT-4.5 feel even closer to those we would have with a human being. Scaling does not work; the slowdown is here. GPT-4.5 is a clear example of how we have reached the limits of scaling.
Having a gigantic LLM no longer seems to provide advantages over its predecessors, and devoting more data and more GPUs to training these models no longer seems to make much sense. Altman himself made it clear that GPT-4.5 will be the company’s last non-reasoning model. That is another sign that the slowdown of generative AI, at least as far as traditional models are concerned, is a reality. Why launch it, then? On the OpenAI blog the company indicates that “we are sharing GPT-4.5 as a research preview to better understand its strengths and limitations. We are still exploring what it is capable of and we are eager to see how people use it in ways we would not have expected.” That seems to reveal the doubts its own creators have about the model, and raises the question of why they have launched it. They need to keep generating hype. Especially considering that its rivals have been very strong lately. Claude 3.7, Grok 3 and of course DeepSeek R1 have managed to turn the tables and pose a challenge for OpenAI, which until not long ago seemed a step ahead of its rivals. Now that is not clear, and in many areas its competitors already exceed the performance of its models. OpenAI needs to puff out its chest and say “here I am,” but perhaps with GPT-4.5 that move backfires, because at least a priori the results are disappointing. And investors are applying pressure. Some point to another probable reason for this launch: OpenAI may have been forced to launch GPT-4.5 to please investors, who have put billions of dollars into the company and need to feel at ease with their investment. Once again OpenAI has a problem, because GPT-4.5 does not seem likely to put them at ease. It will be hard to convince new investors with this launch. In Xataka | OpenAI has a golden opportunity to sweep aside all its rivals: launch an unlimited ChatGPT, full of advertising

There are too many AI models. That could be a true death sentence for Anthropic and Claude

We have AI models to spare. And the problem is that they are all starting to look too similar, and deciding which one is better is not simple. All companies and startups strive to be leaders in an absolutely unleashed market, one that, as in other technology wars, will probably end with a few winners and plenty of losers. And some are competing at a clear disadvantage. Another colossal investment round. The Wall Street Journal indicates that Anthropic is about to close a new financing round that would let it raise 3.5 billion dollars. That would bring the company’s valuation to 61.5 billion dollars, and the question is whether the company really has options in such a competitive market. “This is not a real company.” According to analyst Ed Zitron, Claude had two million monthly active users in January 2025. He also notes that, according to the WSJ, projected revenue for 2025 (based on current contracts) is 1.2 billion dollars, a very modest figure. “They also lost 5.6 billion dollars last year,” he states. In his opinion, Anthropic “is not a real company; they could not survive without the largesse of venture capital.” Fierce competition. The truth is that Anthropic faces exceptional competition, with the big heavyweights of the tech industry involved both in the US and in China. DeepSeek surprised them all with the launch of DeepSeek V3 and then DeepSeek R1, and that seems to have encouraged investors to bet even more money on all these companies. OpenAI is still the reference. At least, it is in number of users. According to CNBC, it already has 400 million weekly active users, an exceptional figure that clearly puts it at the head of the popularity ranking in this segment. As with Claude, OpenAI is burning money it does not have and obtains from extraordinary financing rounds, but unlike Anthropic, we insist, ChatGPT’s popularity is evident. And the big players have what matters now: money.
For many users, AI is ChatGPT, and giants such as Google with Gemini, Microsoft with Copilot or Meta with Llama are still far from achieving that acceptance. But they have something Anthropic (or Perplexity) does not: many, many funds (Grok 3, from xAI, is another example), and they can stay in this race even if it is costing them a lot of money. The prize is too big not to chase it. There are too many models; some may fall by the wayside. All technology wars have had winners and losers. This battle for AI points the same way: there are too many competitors, and that will probably cause some of these efforts not to survive. Here Anthropic is one of those at a disadvantage. The AI winner could be a company still unknown. OpenAI, Google, Apple or Microsoft may be especially well positioned to win this race, but it does not have to be so. As Axios recently noted, a new, still unknown company can emerge that ends up doing something differential that none of the big players had thought of. It is not easy, but it is certainly not impossible. Remembering Netscape. In the second half of the ’90s the internet began to show its potential, and a small company called Netscape managed to become the reference in the world of browsers. It would later end up being the great loser of that war, but it was the demonstration that having more money and resources does not always mean holding all the cards. And that is why there is so much investment in startups. The possibility that the race will be won by an unknown company is precisely what drives venture capital firms to invest a lot of money in projects that may come to absolutely nothing. It has happened recently with Thinking Machines Lab, Mira Murati’s startup, and with Safe Superintelligence, Ilya Sutskever’s. Neither has a product to show, but both have already received spectacular investments. And beware, there is also China.
Of course, there are formidable rivals outside the US. Mistral is the reference in Europe, while in China another particular war is being fought, one that has made the AI models of Chinese companies as good as (and sometimes better than) those of the US. The winner of this battle could also come from that country. Or from any other, of course. Image | Saradash Pradhan In Xataka | China has an ambitious plan to overtake the West in technology. And it has already chosen its 18 companies to achieve it

If you have never bought a 3D printer because they are very expensive, keep an eye on these five deals on a good variety of models

3D printers are not usually especially cheap, and there are many things we must take into account when buying our first one. But some brands make it a little easier by offering a good range of models aimed at both casual and expert users. Anycubic has now launched a new campaign covering many of these printers, so we have chosen five models with a good discount.

- Anycubic Photon Mono 4 for 144 euros, an affordable printer, discounted with the DIYNEW15 coupon.
- Anycubic Photon Mono M7 for 279 euros, a somewhat larger printer, discounted with the DIYNEW20 coupon.
- Anycubic Kobra 3 Combo for 354 euros, a printer for multicolor printing, discounted with the DIYNEW25 coupon.
- Anycubic Kobra 2 Max for 374 euros, a printer offering good speed, discounted with the DIYNEW25 coupon.
- Anycubic Kobra S1 Combo for 569 euros, an ultra-fast, good-sized printer, discounted with the DIYNEW30 coupon.

Anycubic Photon Mono 4. If what we are looking for is just to get started in the world of 3D printers with an affordable model, the Anycubic Photon Mono 4 has dropped from 229 euros to 144 euros with the DIYNEW15 coupon. It includes a seven-inch screen, prints with a maximum print volume of 153.4 x 87 x 165 mm, and has a system to resume printing in case of a power cut. In addition, as with the rest of the printers on this list, the results are quite impressive; you can see some samples on the 3D printer’s product page. * Some price may have changed since the last review. Anycubic Photon Mono M7. If we are getting started with 3D printers but are looking for something more capable, the Anycubic Photon Mono M7 has dropped to 279 euros with the DIYNEW20 coupon. This printer offers greater speed than the previous one and also includes a screen, in this case 10.1 inches, with a maximum print volume of 223 x 126 x 230 mm. The brand also shows some samples in the printer’s description.
* Some price may have changed since the last review. Anycubic Kobra 3 Combo. On the other hand, if what we want is a printer capable of printing in color without the price rising too much, the Anycubic Kobra 3 Combo is an interesting option, especially now that it has dropped from 599 euros to 354 euros with the DIYNEW25 coupon. It lets you print with four to eight colors, with a maximum print volume of 250 x 250 x 260 mm, and in the product description we can find samples from both the brand and users. * Some price may have changed since the last review. Anycubic Kobra 2 Max. If we want a solid printer because we have already used one, or simply want to make large-volume prints, the Anycubic Kobra 2 Max has dropped from 659 euros to 374 euros with the DIYNEW25 coupon. It prints with a maximum volume of 420 x 420 x 500 mm, has automatic leveling, and is much faster than the previous models. We can also find samples from the brand and from users in the printer’s description. * Some price may have changed since the last review. Anycubic Kobra S1 Combo. Finally, if what we are looking for is a complete printer that also prints in color, the Anycubic Kobra S1 Combo has dropped from 749 euros to 569 euros with the DIYNEW30 coupon. It offers good speed, is quiet, prints with a maximum volume of 250 x 250 x 250 mm with up to eight colors, and has automatic leveling. In this case, we can only find samples from the brand in the printer’s description. * Some price may have changed since the last review. Some of the links in this article are affiliate links and may report a benefit to Xataka. In case of unavailability, offers may vary. Images | Anycubic In Xataka | Best home 3D printers: which to buy, resources for 3D printing and eight recommended models In Xataka | This is what I would have liked to know before getting started in the 3D printing world

These will be the keys to its next generation of AI models

In a world where competition intensifies with players such as DeepSeek, OpenAI is preparing to launch big changes. The news comes from Sam Altman himself, who has acknowledged some complications in the company’s current product lineup. The launch of GPT-4.5 and GPT-5 aims to improve the user experience going forward. As we pointed out last month, the nomenclature of OpenAI’s artificial intelligence (AI) models had become complex and confusing, a situation that was only aggravated by the arrival of reasoning models. To try to impose some order, the startup introduced the model selector in ChatGPT some time ago, but this feature does not seem to have helped much. Goodbye model selector, hello GPT-4.5 and GPT-5. The novelties in OpenAI’s new roadmap are considerable. The first of them bears the name GPT-4.5. This product, called Orion internally by OpenAI employees, will be the company’s last product “without a chain of thought,” and pay attention to this point, because after this launch, whose date has not been announced, an important unification will arrive. “From there, we will focus on unifying the o-series and GPT-series models, developing systems that integrate all our tools, know when longer reasoning is necessary, and are useful for a wide variety of tasks,” Altman said. This will translate into the launch of GPT-5, which will integrate much of OpenAI’s technology, including o3. Although the startup presented the o3 family in December last year, only the o3-mini version, focused on performance, reached the public. The full version of o3, which matches human programmers in certain tests, will ultimately not be available as an independent model: we will be able to use it through ChatGPT and the developer API as part of GPT-5, now in development. Images | OpenAI In Xataka | AI companies have been skipping copyright for years. They have just suffered a disturbing legal defeat

The best free tools to install AI models such as DeepSeek, Llama, Mistral, Gemma and more

We bring you a list of the best free tools to install artificial intelligence models locally, and thus create your own ChatGPT with models such as DeepSeek, Llama and more. These are open-source models, which means you can install and use them for free on your computer. Installing an AI locally has disadvantages, such as having to use less powerful versions of these models. However, it has important advantages: all data stays on your computer and is not collected by any company, and you can use it for free. In this article we have tried to focus only on the eight best programs to do this. However, if you think we have left out any tool you consider important, we suggest you tell us in the comments so the rest of the users can benefit from your knowledge. Ollama. Ollama is an open-source application without a graphical interface that you can install on Windows, macOS and GNU/Linux. It offers the possibility of installing and using AI models from your computer’s terminal, without complications and without having to open extra apps. This program will let you install a large number of models, from Llama to DeepSeek, Phi, Nomic, Qwen and many more. Each model has different versions, both full and distilled, and you can download them with different parameter sizes. LM Studio. An open-source application for downloading LLMs to your computer. It offers a unified graphical interface: you can search for AI models within the program with a built-in search engine, then download and run them in the same place. This program has versions for Windows, macOS and GNU/Linux, and lets you use the models in its UI or through a local server compatible with the OpenAI API. You can also use local documents with the AI, run the models offline, and download them from Hugging Face repositories.
You can use models such as Llama, Mistral, Phi, Gemma, Qwen or DeepSeek. AnythingLLM. An all-in-one program for using artificial intelligence models on your computer, locally and offline. It is open source and lets you chat with documents, run AI agents and manage various tasks. In addition, if your computer is not very powerful, it offers subscriptions to use it from the cloud. It has a very flexible architecture, with three components working together, and besides running open-source AI models locally it can connect to private ones, such as OpenAI, Azure and other services. It focuses mainly on privacy and customization, with many controls available. GPT4All. Another open-source project for installing LLMs on your computer, able to run on the CPU or the GPU. It can install up to 1,000 open-source language models, such as DeepSeek R1, Llama, Mistral, Nous-Hermes and many more. It is a paid application, although with a free version with limited tokens, which should be enough for daily use. It has builds for Windows, Windows ARM, macOS and Ubuntu. Jan. An open-source program that lets you install open models locally, such as Llama, Gemma or Mistral. It also lets you connect to cloud services such as OpenAI or Anthropic when you need them. All data is stored locally. It has versions for Windows, macOS and GNU/Linux, and is compatible with NVIDIA (CUDA), AMD (Vulkan) and Intel Arc GPUs. It has an extension system that lets you customize and configure it to your liking. The interface is light and attractive. llama.cpp. An open-source program created to run locally any Llama-based model from Meta. It can work on both the CPU and the GPU of your computer, which helps it perform well on home hardware, although it is a bit more complex to use. NextChat. NextChat lets you use ChatGPT-style features in an open-source package that is under your control.
It is a web and desktop application that connects directly to external AI services, such as Google, OpenAI or Claude, while storing the data locally in the browser. This program also lets users create “masks,” something similar to GPTs, for building AI tools with specific contexts and configurations. It works on Windows, macOS and GNU/Linux. Llamafile. A program that converts AI models into executable files, so you can use them independently. It is a Mozilla Builders project that combines llama.cpp with Cosmopolitan Libc. It is compatible with Windows, GNU/Linux, macOS and BSD. In Xataka Basics | Prompt pages: 16 free websites and communities to find ideas for your prompts and advice to improve them
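As an illustration of how these local tools are typically scripted: Ollama exposes a small REST API on localhost (port 11434 by default) that any program can call. This is a minimal sketch under the assumption that Ollama is installed and a model such as `llama3` has already been pulled; the actual network call is commented out.

```python
# Query a locally running Ollama server (no data leaves your machine).
# Assumes `ollama pull llama3` was run first and the server is listening
# on its default port, 11434.
import json
import urllib.request

def build_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate endpoint.
    stream=False asks for a single JSON response instead of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str, host: str = "http://localhost:11434") -> str:
    """POST the prompt to the local server and return the model's reply."""
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = build_request("llama3", "Why is the sky blue?")
# print(ask("llama3", "Why is the sky blue?"))  # needs a running server
```

LM Studio’s local server works similarly but mimics the OpenAI API shape instead, which is why many existing clients can point at it unchanged.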

Meta emails reveal that it downloaded 81.7 TB of copyrighted books via BitTorrent to train its AI models

In the legal case Kadrey v. Meta, Mark Zuckerberg’s company is accused of having used copyrighted works to train its artificial intelligence models. A few weeks ago it was revealed that Zuckerberg had approved the use of pirated books, and now new and powerful evidence of this looting has arrived. Revealed emails. The case’s “Appendix A” includes several email messages from the Meta employees who carried out that data collection in October 2022. “Downloading torrents from a company laptop does not seem like a good idea.” In April 2023 Nikolay Bashlykov, one of those responsible for carrying out this data collection, joked about it, emojis included, and indicated that the company would have to be careful with the IP address from which they downloaded the data. Meta knew the risks. In September of that year Bashlykov was no longer using emoticons: he warned that using torrents would imply acting as a “seeder” so that others could also download the files, and “that might not be legally OK.” These discussions are proof that Meta knew this type of activity was illegal, according to the authors who have sued the company. Erasing the footprints. In an internal message, Meta researcher Frank Zhang described how the company avoided using its own servers when downloading this dataset in order to “avoid the risk” that anyone could trace the seeding and who downloaded the data. 81.7 TB of data. As Ars Technica points out, the evidence shows that Meta downloaded at least 81.7 terabytes of data from various shadow libraries offering those copyrighted books. A new court document indicates that at least 35.7 TB were downloaded from sites such as Z-Library or LibGen (which ended up being shut down last summer). Meta wants the charges dismissed. Meta has filed a motion to dismiss those accusations, indicating there is no evidence that any book was downloaded by Meta employees via torrent, or that the books were later distributed by Meta.
At Xataka we have contacted the company, and we will update this story if we receive comments on the case. Fuel to the fire of internet looting. These revelations add to the questionable practices AI companies are using to train their models. We saw it with Google, and of course also with OpenAI, which used millions of texts to train ChatGPT, many of them copyrighted. Perplexity was in the spotlight after it was discovered that it brazenly skipped internet rules to bypass paywalls and feed its AI model. Internet looting is being normalized. The amazing thing about all this is that, with all these companies skipping the rules and violating copyright, the looting of the internet seems to be getting normalized. There is barely time to be scandalized, and we treat it almost as a fait accompli so we can carry on with our lives. Is this really “fair use”? All these companies shield themselves behind the concept of “fair use.” This concept, developed in Anglo-Saxon law, allows limited use of protected material without needing to ask permission. Copyright violations have not stopped arriving in the world of generative AI, but they seem to stay in the background while these giants thrive. In Xataka | 5,000 “tokens” of my blog are being used to train an AI. I have not given my permission

All its rivals offer free models that "reason," and Gemini 2.0 is the latest example

The AI companies and startups in the United States were quietly going about their business. Then DeepSeek R1 arrived and became a true existential threat to Silicon Valley. The Chinese startup offered a reasoning model as good as those of its competitors, but it also offered it for free (and open source!). What has Silicon Valley done? Taken the hint, of course.

Gemini 2.0 reasons for free, for everyone. Just visit the official Gemini website and open the "Gemini" menu at the upper left to check it. You can already use 2.0 Flash Thinking Experimental (its reasoning model) both in normal mode and in "collaborative" mode with services such as YouTube or Maps. And it is totally free.

Microsoft Copilot and Think Deeper. Microsoft Copilot's "Think Deeper" mode is also available for free in the company's service. As we explained, Think Deeper is actually OpenAI o1, but previously you had to pay the Copilot Pro subscription ($20 per month) to get access to that option. The appearance of DeepSeek R1 led Microsoft to offer it for free as well (although with a more limited number of queries).

OpenAI o1. The company led by Sam Altman didn't want to be left behind, and less than a week ago it presented o3-mini, a reasoning model that, besides being especially powerful, is available in the free version of ChatGPT. We can activate the "Reason" button so that when we ask something, the o3-mini reasoning capabilities kick in.

DeepSeek R1 and Perplexity. Perplexity's search engine is gradually offering new options. In fact, a few days ago its team announced that on the Perplexity website we could activate the Reasoning-R1 model, based on DeepSeek R1 but hosted in the US (to avoid suspicions of possible data theft). They even offer the option of choosing the Reasoning-o3-mini model, which is the same one offered in ChatGPT.
Again for free (although limited), and it stands out as a convenient way to try DeepSeek R1 in its most powerful version.

And the rest? This first batch of reasoning models seems to have caught the rest of the big contenders in the AI segment off guard. Anthropic, still a reference with Claude, has not launched a reasoning model so far. Neither has Apple, which goes at its own pace. Meta has not launched anything in this regard despite Llama being a clear reference among open-source AI models. And Elon Musk seems to be very busy, because xAI is still working on Grok and for the moment there is no news about a potential reasoning variant. The only notable alternative for now is Doubao-1.5-Pro, the reasoning model recently released by ByteDance, although accessing it is not as simple as with its competitors.

Competition benefits users. The impact of DeepSeek R1 on the AI segment has been spectacular, as we have seen. When OpenAI launched o1 in September 2024, it presented it as a very advanced but also expensive option: only subscribers to its services could access it, and in a limited way. Four months later we are using models that rival o1 but are totally free, with more and more options. This is great news for users, who at least for now are benefiting from all that rivalry between these companies.

The AI that reasons keeps getting better and cheaper (or free). A graph created by Shawn Wang (@swyx) and published in his newsletter, Latent Space, shows a clear evolution of AI models. In that graph, capability (measured in LMSYS points, a well-known ranking of AI models) is plotted against cost per million tokens (3:1 input:output ratio). The further up and to the right a model sits, the better, and Gemini 2.0 Flash Thinking seems especially well positioned, although this type of graph is changing very quickly. Again, more good news for us users.
In Xataka | Mistral AI is the French startup that opted for efficiency before DeepSeek. Its future is uncertain

What are distilled artificial intelligence models and LLM distillation

We will try to explain, in a simple and understandable way, what distilled models are when we talk about artificial intelligence. When we told you how to install DeepSeek on your computer we mentioned that there were distilled versions, and other AIs are also being created as distilled versions of other specific models. We also usually refer to this as LLM distillation, to specify that we are talking about large language models (LLMs): the ones capable of processing text, understanding what we write and responding in text. In short, models like ChatGPT, DeepSeek, Copilot, Gemini or Grok.

What is LLM distillation. The distillation of artificial intelligence models is a technique for reducing the size of the models while replicating the results and performance you can get from them. Although we are used to accessing them through applications and web pages, LLMs consume a lot of space and resources. We don't usually notice, because when you use an AI from a website or app you connect to the servers of large companies where the model is running. But if you wanted a complete model installed on your computer, you would need a very powerful processor and a lot of storage.

The solution to this problem is to create a distilled model: a model trained to occupy less space. This model can replicate most of the performance, but it will be smaller and faster, and will need fewer resources to work. The way it is done resembles a teacher and a student. The complete model acts as a teacher sharing its experience and knowledge with a student, transmitting complex concepts. Meanwhile, the student model learns to imitate what it is being taught in a simpler and more efficient way. The result is lighter models. Their results will never be as good as the teacher's, but the main characteristics and performance are preserved. In short, it amounts to a "Lite" version: smaller, but light and versatile.
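The teacher-student idea described above can be sketched in a few lines of plain Python: the student is trained to minimize the divergence between its probability distribution over the vocabulary and the teacher's, after both are softened with a temperature. The logits below are made-up illustrative numbers; real distillation operates on full model outputs inside a training framework, but the objective has this shape.

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature spreads probability mass across tokens,
    # exposing more of the teacher's "dark knowledge" to the student.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions:
    # the core quantity the student minimizes during distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits for a tiny 3-token vocabulary
teacher = [4.0, 1.0, 0.5]
student = [3.0, 1.5, 0.8]
print(distillation_loss(teacher, student))
```

The loss is zero only when the student reproduces the teacher's distribution exactly; training nudges the student's weights to shrink it.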
There are different techniques for creating distilled models: knowledge distillation over the final outputs, so the student model learns the teacher's decision-making; using the teacher to generate additional training data; intermediate-layer distillation, which transfers not only final results but also intermediate representations; or using several teacher models to train a single student.

In general, the private companies that create artificial intelligence models are also the ones that create their distilled versions. Usually a specific name is added to the distilled version, such as Google Gemini's "Flash" or OpenAI's "mini". In other cases, especially with open-source models, they may use the name of the teacher model for the distillate, adding as a surname the model used as the student. For example, you can take a smaller model like Qwen and use it to create a distilled version of DeepSeek called DeepSeek-Qwen, or DeepSeek-Distill-Qwen, to indicate that it is distilled.

Pros and cons of distilled models. A complete artificial intelligence model has billions of parameters, and the amount of space and computing power needed to run it is enormous. On a home computer you would need cutting-edge hardware and a lot of storage; and at the level of companies like OpenAI or Google, which offer their AI via web or app, it requires huge resources on their servers. Creating distilled models therefore helps reduce size and take up less space. It also allows them to run faster and at lower computational cost. That is why Google or OpenAI can offer free "small" versions of their main models, leaving the most complete ones for paying users: keeping the full models running requires money and investment.
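Intermediate-layer distillation, one of the techniques listed above, can be illustrated with a toy loss that pushes the student's hidden activations toward the teacher's at a matched layer. The vectors here are invented and tiny; in practice these layers have thousands of dimensions and the student usually needs a learned projection to match the teacher's width, but the objective is this kind of mean squared error.

```python
def feature_matching_loss(teacher_hidden, student_hidden):
    # Mean squared error between corresponding intermediate-layer
    # activations: the student is trained to mimic the teacher's
    # internal representation, not just its final answer.
    assert len(teacher_hidden) == len(student_hidden)
    n = len(teacher_hidden)
    return sum((t - s) ** 2 for t, s in zip(teacher_hidden, student_hidden)) / n

# Hypothetical activations from one matched layer of each model
teacher_layer = [0.8, -1.2, 0.3, 2.0]
student_layer = [0.6, -1.0, 0.5, 1.7]
print(feature_matching_loss(teacher_layer, student_layer))
```

In a real setup this term is typically added, with a weighting factor, to the output-level distillation loss rather than used on its own.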
And if we are talking about an open-source model, having distilled versions allows you and me to install and use them on our own computers without spending thousands of euros on a new processor, graphics cards or internal storage. These techniques can also be used to create artificial intelligence models at a lower cost than full training would involve: you take already-created models and train a new one from their data and knowledge, without having to perform the whole process from scratch.

However, distilled models do not have the same amount of data and parameters, they are more limited, and more failures and hallucinations may appear. An example: if you follow our guide to install DeepSeek on your computer, you will see that at a certain point you can choose between several versions. There are 8B versions, 14B versions, or the full 671B version. This number refers to the parameters, in billions: the lower it is, the fewer resources you need, but the more distilled and smaller the model will be. So, in this example, if you install a DeepSeek 8B and a 14B, you will see that the smaller model hallucinates more and gives less precise answers. The better the results you want, the larger the model will have to be, and the less distilled.

The same goes for commercial models. If you are using Gemini 2.0 Flash, the results will be worse than with the full Gemini 2.0, and the same with OpenAI's o3 and o3-mini. However, the Flash or mini version is the one offered to all free users, while the complete one is reserved for paying users, in order to cover the cost of keeping these models running.

In Xataka Basics | Prompt pages: 16 free websites and communities to find ideas for your prompts and advice to improve them
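A rough back-of-the-envelope calculation shows why the 8B, 14B and 671B variants differ so much in requirements: the raw weights alone, at 2 bytes per parameter (fp16/bf16), occupy memory proportional to the parameter count. This sketch ignores activations, context caches and quantization (4-bit builds shrink the figure roughly fourfold), so treat the numbers as order-of-magnitude estimates only.

```python
def weights_memory_gb(params_billion, bytes_per_param=2.0):
    # Memory for the raw weights alone: parameters x bytes per parameter,
    # converted from bytes to gibibytes.
    return params_billion * 1e9 * bytes_per_param / 1024**3

# The three DeepSeek variants mentioned above
for size in (8, 14, 671):
    print(f"{size}B model: ~{weights_memory_gb(size):.0f} GB of weights in fp16")
```

An 8B model fits on a well-equipped consumer GPU or in system RAM; the full 671B model needs server-class hardware, which is exactly why the distilled versions exist.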
