With Gemini, Google is doing practically everything right

It may not seem like it, but Google is winning the artificial intelligence race. At least when it comes to the performance of its models, which keep winning battles in the most hotly contested benchmarks. Its most recent model, Gemini 2.5 Pro Experimental, is the one with the highest score both in the Artificial Analysis Intelligence Index and in LMArena. Both combine the results of various specific benchmarks (mathematics, programming, scientific questions) to show that Gemini's model is ahead of its competitors. Source: Artificial Analysis Intelligence Index.

But winning in the benchmarks does not seem so important. Especially when what the benchmarks say is one thing and the perception of the everyday user is another. For those who use various models regularly, the feeling is that they are all really good in many scenarios. And yet, that perception also seems to be favoring Gemini 2.5 Pro, which in recent days has received a lot of praise on social networks. The model does not just behave well: it is faster than its rivals in tokens generated per second, it is one of the cheapest, and it has a context window of one million tokens, only surpassed by the recent launch of Llama 4.

Google also has another ace up its sleeve: Gemini 2.5 Flash is not as powerful in benchmarks, but it is even faster and much cheaper than its older brother. The price of Gemini 2.5 Pro Experimental is really low if we compare it with that of its main rivals. Source: Artificial Analysis Intelligence Index. It is even cheaper than DeepSeek V3 and R1, which were the traditional references in that department. This version also has a small size, which makes it especially interesting for ending up on our phones.

And if we talk about open source models, Google also has good proposals. Gemma 3 appeared a month ago to prove it, and Google's internal analysis seemed to show that it is as good as or better than Llama 3.3. Independent comparisons are not so promising, however.

But Google is flooding the world of generative tools. It has Lyria for music, Imagen 3 for images, Veo 2 for video and Chirp 3 for voice, all available via its Vertex AI platform. It is true that there are better options in some of these areas, and we have a good example in OpenAI's image model, which has conquered the world with its Ghibli-style images.

Things also look promising in the field of AI agents. Google has Deep Research, like OpenAI, and it can be used for free on the official Gemini website. Projects like Astra (an assistant with augmented reality) and Mariner (autonomous browser control) also point to a promising future. Demis Hassabis confirmed these days that MCP (Model Context Protocol, driven by Anthropic) will be adopted by Google, which also announced something similar with the Agent2Agent protocol (A2A) to facilitate communication between AI agents.

Revolutions. This colossal deployment of AI solutions is being accompanied by a gradual but absolute integration of the technology into the services and platforms with which Google has conquered the world. Gemini's presence in Android is a good example, and it extends to new features both on Gemini's website and in Gmail and its office suite, Google Workspace. And for programmers the model is not only one of the most remarkable: it has just been made an integral part of Firebase Studio, which wants to compete with Cursor and other agentic platforms dedicated to programming. NotebookLM is another of the firm's magical tools (one of our favorites), but there is an even more special integration.
Which is, of course, that of AI in search. Google knows well that its traditional search engine is in danger. Perplexity and ChatGPT Search have shown the way, but for Google adopting this technology is much more complex, and its first steps were erratic. Now things seem to be changing, and little by little that transformation of the Google search engine is consolidating, as demonstrated by the recent announcement of its "AI Mode". That is a sign of an expected and forced metamorphosis. One that goes slowly, but that is gradually crystallizing. Despite this colossal deployment of resources, the leader of this segment in popularity is OpenAI. Today, for a lot of people, using AI is equivalent to using ChatGPT, and weakening that position will be difficult. Google, however, has powerful resources to achieve it. The competition may be absolutely fierce, but all these steps seem headed in the right direction. In Xataka | Google has the best researchers in the world. Now it will stop publishing their findings because of the competition

That my phone can see for me is as scary as it is fascinating. I have tried Gemini Live's eyes

In December 2024 OpenAI surprised the world with a simply stunning feature: ChatGPT had "eyes" and was able to see and interpret the world in real time. The demo was simply impressive: the app, through the camera, could recognize everything it saw. And everything means everything. In early 2025, Google announced a big new feature for Gemini Live, its advanced voice mode. A mode that competes directly with that ChatGPT feature, and that is already available for the Google Pixel 9 and Samsung Galaxy S25. As long as you pay for the Advanced subscription. I have been able to try this feature on a Google Pixel 9 Pro. And yes, it is as impressive as you might think.

The interface. Activating the new "vision" modes of Gemini Live is quite simple. You just have to open the app and go to the advanced voice mode (the icon in the lower right corner). Once Gemini Live is open, you will see two new shortcuts: one to give it access to the camera and another to give it access to your screen. Because yes, it can also read the content of the screen in real time.

Camera mode. When the camera mode is activated, Gemini sees everything the camera transmits. It is simply spectacular how it is capable of recognizing absolutely everything, and how quickly it identifies specific details such as plant species or the model of a tech gadget (without us telling it anything). We can ask it anything, and it works as a guide, a translator and... a private tutor. The latter seemed spectacular to me: it solves equations, psychometric problems and all kinds of questions, explaining them step by step.

Screen mode. This mode is perhaps the trickiest in privacy terms but, if we are willing, Gemini is able to read everything you see on the screen. We can ask it anything related to it. In this case it did not seem so useful to me, since Google Lens gives us the necessary information at a glance if we are looking for something in particular. However, it is another sample of Gemini's new potential.

Do not trust AI, ever. As always with AI, the recommendation is not to trust it. It is curious how, in a wide shot of my desk, it was able to perfectly recognize my computer. However, when pointing the camera directly at it, it told me it did not see any computer. I helped it by asking whether it was a Mac mini M1 or an M4... and the answer was an M1, which was wrong (the two are very easy to tell apart by their ports and size). It also misread some numbers when asked about a psychometric test and, ultimately, you have to keep a close eye on it for it to work well.

Nor its questions. The problem shared by Gemini Live and ChatGPT's advanced voice mode is clear: they ask too much. To keep the conversation going, the answers always end with a question, something especially annoying in this vision mode. It is very difficult to get to the point, since it usually cuts off the full answer with some question. It is still a minor problem shared by all AIs, but it breaks the conversational flow a bit. Despite this, Gemini Live's vision seems amazing. Image | Xataka In Xataka | Google Gemini: what it is, how it works, differences with ChatGPT and when you can use this artificial intelligence model

Custom GPTs are one of OpenAI's great inventions. Now Google has just opened up its own in Gemini

One of the most interesting features ChatGPT has are GPTs. In a nutshell, they are customized versions of ChatGPT created for specific purposes. We could have a GPT focused on correcting our texts, on solving mathematical problems or on planning trips. It is a really useful feature, but for now they can only be created by paying users. Anyone can use them, but only premium users can create them. Well, Google has decided to take a different path with Gemini and its Gems. And yes, users win.

Gems? That is the name the equivalents of GPTs receive in Google Gemini. For all intents and purposes, they are exactly the same. Instead of using Gemini's "general" version, Gems allow us to use a version specialized in certain tasks. It is a feature that, if you put in the necessary time and care, can be very useful.

For everyone. Until now, only paying users could create and use Gems. That is, the only way to access this feature was to pay the 21.99 euros per month that access to Gemini Advanced costs. That is over. As planned, Google has opened up access to Gems and, from today, creating and using them is completely free. Gems creator | Screenshot: Xataka

Options. Google gives us five predefined Gems focused on brainstorming, career guidance, programming, learning and writing review. The fun, however, is creating our own. To do this, you just have to go to the Gems manager and start the process (or you can do it by clicking directly on this link, which is a direct shortcut). Important: although Gems can be used from the mobile app (and the rollout is progressive), they can only be created in the web version.

Some keys. When creating a Gem it is important to be clear, concise and descriptive. Here are some tricks to get the perfect prompt. For example, if we want our Gem to help us correct texts in English, we should write something like this: "You are an English text reviewer and you help people detect and correct mistakes in their writing. Your job is to analyze the texts, find all the errors, explain to the user why something is badly written and suggest improvements. Use a friendly tone. Use Spanish for the explanations. Be patient." The result will be something similar to this: when given the badly written phrase "I are not feeling lots well", the Gem returns the following answer: Example of use of a Gem created by us | Screenshot: Xataka

Models. In our Gems we can use the models we have access to in Google Gemini. In the free version we can use Gemini 2.0 Flash and Gemini 2.0 Flash Thinking, which is experimental. With the paid version we could use the most advanced models. Using the reasoning model can be really useful if we create a very specific Gem focused on answers that need precision.

Limitations. Gems are very useful, but they have an important limitation: they do not support uploading documents, at least in the Spanish version and for now. In the English version they seem to support it. Being able to upload documents is a very interesting feature for consulting bibliography, interacting with a PDF, with an Excel sheet, etc. Let's think about the potential this has for analyzing data, extracting trends or digesting a lot of information more easily. The problem is that, for the moment, we do not have it available. Cover image | Xataka In Xataka | Google's generative AI results come to Spain. And with them, an elephant in the media's room
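Gems are created through the Gemini web interface, but the same idea (a reusable instruction on top of a base model) can be roughly approximated with the Gemini API. A minimal sketch, assuming the google-generativeai Python SDK and the gemini-2.0-flash model identifier, neither of which the article itself covers:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder key

# The article's example Gem, expressed as a system instruction.
reviewer = genai.GenerativeModel(
    model_name="gemini-2.0-flash",  # assumed identifier; check the current model list
    system_instruction=(
        "You are an English text reviewer and you help people detect and "
        "correct mistakes in their writing. Analyze the texts, find all the "
        "errors, explain why something is badly written and suggest "
        "improvements. Use a friendly tone. Use Spanish for the explanations. "
        "Be patient."
    ),
)

# The instruction is reused across the whole chat, like a Gem's description.
chat = reviewer.start_chat()
print(chat.send_message("I are not feeling lots well").text)
```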

How to use ChatGPT or Gemini pulling the information only from Wikipedia as the sole source

Let’s explain the way you can do Chatgpt consultations using Wikipedia as the only source. Sometimes, when you ask for a thing to artificial intelligence you can make mistakes, and even when you want me to explain something obtaining internet information, you can use unreliable sources to generate the answer. Meanwhile, Wikipedia has been positioned for many years A great source of collective knowledge. Yes, it also has errors, but there are much less. Therefore, we are going to tell you and explain the prompt that you can use both in Chatgpt as in COPILOT, Deepseek either Gemini. Ask the AI ​​to use wikipedia What we want to get is that the artificial intelligence chat we use use wikipedia as the only source to obtain information. In this way, you will look for what you have asked there, you will get written information, and generate an answer based on it. We can meet two problems. The first is that in addition to Wikipedia also use other sources, so that everything will no longer come from a single site. It may also happen that the answer is too technical. We will solve both things with the prompt. This is the prompt that we recommend: Explain to me in a simple way what is XXXX taking the information only from Wikipedia. Here, what you have to do is change the XXXX for what you want me to explain. You can also ask you to explain who a person is, or adapt it in the way you need for the request you have in mind. What we have done in this prompt is to add the “in a simple way” so that the answer it generates is colloquial. Besides, We have added the term only To specify that only use Wikipedia as a source, and that you do not obtain data from any other web page. In Xataka Basics | How to improve chatgpt responses: 9 steps to guarantee higher quality and better sources

Gemini Advanced features that become free in March 2025

Let’s tell you what are the payment functions that become free In Gemini. As artificial intelligence chatbots evolve, it is common for every so often to be free, although with some limitations. Meanwhile, payment users also receive more and better functions. In step of the new functions that reach the payment version but they also do it to the free one more limited, the owners of the AI ​​allow you to verify its operation. Thus, if you like to the point that what they give you free is short, you may have more disposition to go to the payment version. Payment functions that become free Here are in format ready the novelties that arrive at the free version of Google Gemini. In each of them we will give you a small explanation so you can understand what it is about. You have these functions both in the web version and in Gemini’s mobile app. Deep Research: It is Gemini’s deep search, with which he does an investigation of what you ask to offer you more broad and precise results. This function can only be used a limited number of months a month that Google has not specified. 2.0 flash and 2.0 flash thinking: These become the predetermined models of Gemini. Gemini 2.0 flash and 2.0 flash thinking They are the “small” versions of the latest models launched by Google, much more capable and, in the case of the second, the answers to try to offer better results analyzed what you ask and the context in which you do it. More apps connected: Google Calendar, Tasks and Keep applications become compatible with the 2.0 Flash Thinking model. This means that you can make requests related to your content on them. Gemini Gems: The Gemini Gems You can use them without being a payment subscriber, and allow you to use gemini versions adapted to concrete tasks and create your own adaptations. Come on, you can make Gemini a programming assistant, a professional counselor and more. In Xataka Basics | Gemini guide: 36 functions and things you can do with Google’s artificial intelligence

Gemini Robotics is Google's plan for robots to act in the real world

Robotics and artificial intelligence (AI) go hand in hand. It would be useless to develop humanoid robots capable of lifting tons, fitted with state-of-the-art sensors, if we did not have an intelligent system that allowed them to interpret their environment and act accordingly. Without AI, a modern robot would be little more than a pile of sophisticated but useless hardware. It is the advanced algorithms that transform that raw power into machines capable of learning, optimizing their performance and responding autonomously to the challenges they face. From ASIMO, Honda's iconic robot of the 2000s, to Sophia, Tesla Optimus or Figure, AI has made its way into humanoid robotics. However, we are still far from seeing machines that really match the versatility of the human body. As advanced as they are, they still have trouble moving in uncontrolled environments, and manipulating everyday objects can be a real challenge.

Gemini Robotics: Google's bet to take AI into the physical world. Meanwhile, in the digital world, AI advances at a completely different pace. It is already able to hold conversations very close to a person's, pass exams with surprising scores and solve complex problems with a speed that until a few years ago seemed like science fiction. A contrast that makes it clear that, although artificial intelligence progresses by leaps and bounds, there is still a long way to go in its integration with robotics. These challenges are driving a new generation of AI models designed specifically for this discipline. Google, as expected, does not want to be left behind and is already working on solutions that promise to take humanoid robots one step further. Its bet is Gemini 2.0, which now has two versions designed to improve the interaction with and control of these machines. On the one hand, Gemini Robotics focuses on vision, language and action (VLA), which allows it to take direct control of robots and improve their responsiveness in dynamic environments. On the other, Gemini Robotics-ER is designed for robotics experts, giving them the tools they need to develop and run their own programs with advanced reasoning capabilities. Gemini Robotics-ER stands out in spatial reasoning, with detection and pointing of 3D objects.

Google has identified three essential qualities that, as it explains, robots must have to be really useful to people.

Generality. A good robot should not only execute predefined tasks, but also adapt to unseen situations and solve problems on the fly. It must be able to operate in new environments, handle unknown objects and interpret varied instructions without depending on prior training. According to internal tests, its performance on unforeseen tasks more than doubles that of other state-of-the-art vision-language-action models.

Interactivity. In a world in constant change, robots must be able to communicate naturally and respond to instructions in real time. Gemini Robotics understands commands in everyday language and in multiple languages, adapting its behavior according to the conversation or the environment. In addition, it continuously monitors what happens around it and adjusts its actions based on new orders or changes in the scene.

Dexterity. Many tasks that humans perform effortlessly require extremely precise motor skills, something that most robots have not yet managed to master.
Gemini Robotics, however, is capable of performing complex multi-step tasks that require fine manipulation, such as folding origami or packing a snack into a Ziploc bag, demonstrating a higher level of dexterity. Gemini Robotics not only stands out in solving unforeseen tasks: its generalization capacity far exceeds the performance of other vision-language-action models. According to Google's technical report, it is able to adapt to unseen scenarios and make decisions without prior training, bringing robots closer to real autonomy. In addition, it has been designed to work with different types of robots. Although it was trained mainly with ALOHA 2, a two-armed platform, it has also proven able to control systems such as Franka arms, used in laboratories, and even more advanced humanoids such as Apollo, developed by Apptronik. Its flexibility makes it a model adaptable to a variety of applications, from industry to assistance. For now, there is no scheduled date for a general deployment of Gemini Robotics or Gemini Robotics-ER. The technology is still under development and, for the moment, only a small group of companies has access to these tools. Google DeepMind is collaborating with Apptronik on the construction of the next generation of humanoid robots, exploring how to integrate these AI models into more advanced systems. In addition, a handful of trusted testers, such as Agile Robots, Agility Robotics, Boston Dynamics and Enchanted Tools, are already testing Gemini Robotics-ER, although it is not clear whether that access will be expanded in the future. Meanwhile, Google DeepMind continues to work on new safety frameworks and benchmarks to evaluate the possible risks of AI in physical environments. All this makes it clear that, although the project is progressing, there is still a long way to go before this technology reaches the general public. Images | Google DeepMind In Xataka | Faced with an AI that says yes to everything, a concern: it will never produce an Einstein or a Newton

All its rivals offer free models that "reason", and Gemini 2.0 is the latest example

All the AI companies and startups in the United States were calmly minding their own business. And suddenly DeepSeek R1 arrived and became a true existential threat to Silicon Valley. The Chinese startup offered a reasoning model as good as those of its competitors, but it also offered it for free (and open source!). What has Silicon Valley done? Take the lesson to heart, of course.

Gemini 2.0 reasons for free, for everyone. You just have to visit the official Gemini website and open the "Gemini" menu at the top left to check it. You can already use 2.0 Flash Thinking Experimental (its reasoning model) both in normal mode and in "collaborative" mode with services such as YouTube or Maps. And it is totally free.

Microsoft Copilot and Think Deeper. Microsoft Copilot's "Think Deeper" mode is also available for free in this company service. As we explained, Think Deeper is actually OpenAI o1, but until now you had to pay the Copilot Pro subscription ($20 per month) to get access to that option. The appearance of DeepSeek R1 led Microsoft to offer it for free as well (although with a more limited number of queries).

OpenAI and o3-mini. The company led by Sam Altman did not want to be left behind, and less than a week ago it presented o3-mini, a reasoning model that, in addition to being especially powerful, is available in the free version of ChatGPT. We can activate the "Reason" button so that, when we ask something, o3-mini's reasoning capabilities kick in.

DeepSeek R1 and Perplexity. Perplexity's search engine is gradually offering new options. In fact, a few days ago those responsible announced that on the Perplexity website we could activate the Reasoning-R1 model, based on DeepSeek R1 but hosted in the US (to avoid suspicions of possible data theft). They even give the option of choosing the Reasoning-o3-mini model, which is the same one offered in ChatGPT. Again free (although limited), but it stands out as a convenient way to try DeepSeek R1 in its most powerful version.

And the rest? This first batch of reasoning models seems to have caught the other big contenders in the AI segment off guard. Anthropic, which remains a reference with Claude, has not launched a reasoning model for now. Neither has Apple, which goes at its own pace. Meta has not launched anything in this regard despite offering Llama as a clear reference among open source AI models. And Elon Musk seems to be very busy, because xAI is still working on Grok and for the moment there is no news about a potential reasoning variant. The only notable alternative for the moment is Doubao-1.5-Pro, the fresh reasoning model from ByteDance, although it is not as easy to access as its competitors.

Competition benefits users. The impact of DeepSeek R1 on the AI segment has been spectacular, as we can see. When OpenAI launched o1 in September 2024, it positioned it as a very advanced option but also an expensive one: only subscribers to its services could access it, and in a limited way. Four months later we are using models that rival o1 but that are totally free and that we can use with more and more options. This is great news for users, who at least for now are benefiting from all that rivalry between these companies.

The AI that reasons keeps getting better and cheaper (or free). A graph created by Shawn Wang (@swyx) and published in his newsletter, Latent Space, shows a clear evolution of AI models.
In that graph you can see how each model's capability (measured in LMSYS points, a well-known ranking of AI models) is plotted against its cost per million tokens (at a 3:1 input:output ratio). The higher up and further to the right a model sits, the better, and Gemini 2.0 Flash Thinking seems especially well positioned, although this type of graph changes very quickly. Again, more good news for us, the users. In Xataka | Mistral AI is the French startup that bet on efficiency before DeepSeek. Its future is uncertain
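To make that "3:1 input:output ratio" concrete, here is a tiny sketch of how a blended cost per million tokens is usually computed. The prices below are hypothetical, since the article does not give exact figures:

```python
# Hypothetical prices in USD per million tokens (not the real figures for any model).
input_price = 0.10
output_price = 0.40

# Blended cost weighted at 3 parts input to 1 part output, as in the chart.
blended = (3 * input_price + 1 * output_price) / 4
print(f"Blended cost: ${blended:.3f} per million tokens")  # -> $0.175
```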

How to use Gemini 2.0 Flash and 2.0 Flash Thinking, with reasoning, on the web or on your phone

Let’s explain How to use Gemini 2.0 Flash and 2.0 Flash Thinking Experiment on the web and the application of Google AI. In this way, you can use The new models launched by the companyincluding the reasoning, which becomes free for all users. We are going to tell you where you are going to find this option, both in the web version and in the mobile version of Gemini. And we are also going to tell you how to use these models once you have activated them. Change the model that uses Gemini Gemini has an option to choose the model you want to use. In the web version, it is located on the left, and by default you will see that under the name of Gemini the model you are using appears. When you press on the model selector button, A window will open with the available Gemini models. This is where you can choose the normal 2.0 flash model, but also the Experimental Thinking To test the reasoning model, there is even one of reasoning to use with Google applications. This option It is also available in the mobile application. In this case, the models selector is in the upper central part of the screen, and the list of models will open down. When you activate a specific model, it will appear marked in the selection button on the left. In addition, you may have a message warning you of some details about its use. Now, simply Write the prompt you want To interact with this concrete model. And after writing your question, you will have the answer. The Flash Thinking model will show a window with reasoning that has followed step by step before building the answer. You can close this window if you are not interested, and the answer will appear at the bottom. In Xataka Basics | Gemini guide: 36 functions and things you can do with Google’s artificial intelligence

Samsung and Google almost have their own Vision Pro ready. And they have two fundamental advantages: Gemini and voice

Last week we were able to attend the Samsung Galaxy Unpacked event. The South Korean firm presented its new Samsung Galaxy S25, S25+ and S25 Ultra, but it also had another ace up its sleeve: it spoke briefly about Project Moohan. This is its ambitious project to develop augmented reality glasses in collaboration with Google. Someone has already tried them and discovered a potential advantage over the Apple Vision Pro.

Look mom, no hands: voice. The well-known YouTuber Marques Brownlee (MKBHD) was able to try them briefly and published a video on his YouTube channel with his impressions. He covered many things, but one especially caught his attention: being able to handle many options with his voice. Gesture recognition is very good and very useful, but the power of Gemini integrated into the glasses was especially striking.

Talking to the glasses. This differentiating point allowed him, for example, to move to points on the map in Google Earth with a simple voice command. Gemini also "sees what you see", so you can ask it about what you are looking at through the glasses. Another striking option is acting on the interface, opening or closing applications, or even asking it to reorganize your virtual desktop and tidy up the virtual windows you have open at that moment. Other Gemini options such as Circle to Search are also striking but, as Brownlee said, that integration of Gemini and voice is an important advantage of Project Moohan over the Vision Pro.

This is Samsung and Google's vision. Brownlee made it clear that the design of the Project Moohan glasses is very similar to that of the Apple Vision Pro. Even so, there are distinguishing elements, such as the buttons on the frame, the touch panel on one of the sides or the rear support, which inherits its design from the Meta Quest Pro.

A promising screen. Although there are no technical specifications for the moment, MKBHD did note that the quality of the screens in these glasses was fantastic. Maybe a small step behind the Vision Pro, but very good. In the video the level of detail does not seem very remarkable, but the YouTuber said that in person everything looks much better.

Immersive content. What is not so clear is what the options will be in one of the key areas for these glasses: immersive content. He did briefly show how you could watch YouTube videos in a more immersive virtual environment, but he gave no details about the spatial videos or photos that are so striking on the Vision Pro.

Games and controllers. He did not mention it either, but he did hint that controllers are likely to appear for these glasses to take advantage of virtual reality games. That would be another differentiating element with respect to the Vision Pro, which has games but no dedicated controllers.

But how much do they cost? Unfortunately, for the moment there are no details about the launch date of these glasses or hints about their final price. Brownlee did note that they would launch this year, and the forecast (or hope) is that they will be noticeably cheaper than the Vision Pro, which is not sold in Spain but in Germany starts at 3,999 euros.

AI and voice can be the winning argument. The truth is that this possibility of using your voice to interact with the glasses is especially promising. Using your hands and keeping your arms in motion all the time can get tiring, so this type of interaction is very interesting.
If we add the promising role and options of AI models, which can contribute even more to this whole area, we are looking at a product that many of us are eager to get to know up close. Image | Xataka In Xataka | Google has declared war in the augmented reality segment. Apple and Meta will not make it easy

What makes DeepSeek special, the new Chinese artificial intelligence tool (and how it differs from ChatGPT or Gemini)

January 28, 2025. DeepSeek, the new Chinese artificial intelligence (AI) model, has shaken the digital world, dazzling investors and sinking the shares of some technology companies after jumping to the top of app downloads in Apple's App Store. It was launched on January 20 and quickly captivated computer scientists before attracting the attention of the entire technology industry and the world. The president of the United States, Donald Trump, described the phenomenon as a "wake-up call" for companies in his country, which must concentrate on "competing to win".

What makes DeepSeek so special is its creators' claim that it was produced at a fraction of the cost of other models at the cutting edge of the industry, such as OpenAI's ChatGPT, because it uses less advanced chips. That possibility caused the chipmaking giant Nvidia to lose almost US$600 billion of its market value this Monday, the largest single-day loss in US history. DeepSeek also raises doubts about Washington's measures to contain Beijing's drive for technological supremacy, which include restrictions on exports of advanced chips to China. Beijing, however, has redoubled its efforts, with President Xi Jinping declaring AI the top priority. And startups such as DeepSeek are crucial as China pivots from traditional manufacturing of clothing and furniture to advanced technology: chips, electric cars and AI. Here we tell you what it is.

What is DeepSeek? In simple terms, DeepSeek is an AI-powered chatbot, like ChatGPT. It is a free application that can be downloaded from Apple's App Store, where DeepSeek states that it is designed "to answer your questions and enhance your life efficiently". But the AI model that powers it, called R1, has about 670 billion parameters, which makes it the largest open source language model to date, according to Anil Ananthaswamy, author of "Why Machines Learn: The Elegant Math Behind Modern AI". Photo caption: Hangzhou, where DeepSeek's operations center is located, is also home to other Chinese tech giants such as Alibaba. It is said to be as powerful as OpenAI's o1 model, which powers ChatGPT, in mathematics, coding and reasoning. It is also claimed that it can do all that far more cheaply; its developers say building it cost US$6 million, an austere budget compared to the billions invested by AI companies in the US. It is not clear how they achieved it. The founder of DeepSeek reportedly stockpiled advanced Nvidia chips before their export to China was banned in September 2022. Experts believe that this stockpile, which some estimate at 50,000 units, allowed him to build such a powerful model by combining those chips with other cheaper, less sophisticated ones.

How does it compare with ChatGPT or Gemini? DeepSeek looks and feels like any other chatbot, although it leans more towards conversation. Like OpenAI's ChatGPT or Google's Gemini, you can open the application (or its website) and ask questions about anything, and the chat does its best to give you an answer. Its answers are extensive, but it does not give an opinion even if you ask it directly for one. The chatbot usually begins by saying that the issue is "highly subjective", whether it is politics (is Donald Trump a good president?) or soft drinks (which tastes better, Pepsi or Coca-Cola?).
It does not even commit to saying whether or not it is better than its rival ChatGPT, though it did produce a comparison of the pros and cons of both AIs. ChatGPT did exactly the same, using similar language. Photo caption: In appearance and operation, DeepSeek is very similar to other rival chatbots. DeepSeek indicates that it was trained with data up to October 2023 and, although the app seems to have access to updated information, the web version does not. That is similar to the first versions of ChatGPT and is probably a similar safeguard, to prevent the chatbot from repeating incorrect information from the live web in real time. It also responds quite fast, although right now it is somewhat bogged down under the load of so many users rushing to try it since it went viral. ChatGPT and Gemini tend to promote their subscription services, which can cost around US$20 per month, for more detailed answers, while DeepSeek is free, although more limited.

Censorship of taboo topics. Where there is a palpable difference is in DeepSeek's self-censorship when it comes to topics banned in China. Sometimes it starts an answer that then disappears from the screen and is replaced by a notice that says "let's talk about something else". The obvious taboo topic is the 1989 Tiananmen Square protests, which ended with the death of 200 civilians at the hands of the army according to the Chinese government, although some media estimate that they resulted in a massacre of thousands. Like many other Chinese AI models, such as Baidu's Ernie or ByteDance's Doubao, DeepSeek is programmed to evade politically sensitive questions. When the BBC asked the app what happened in Tiananmen Square on June 4, 1989, DeepSeek gave no detail at all about that documented massacre. It replied: "I'm sorry, I can't answer that question. I am an AI assistant designed to provide helpful and harmless answers." Photo caption: DeepSeek evaded the BBC's question about what happened in Tiananmen Square in 1989. For their part, its rivals ChatGPT and Gemini had no qualms about expanding on the subject. It is believed that one of the great challenges for the development of AI in China is government censorship. But it seems that DeepSeek has been trained as an open source model that can perform complex tasks while withholding certain information. Who is behind DeepSeek? DeepSeek … Read more
