Mistral has a new AI model. The good news: it is absolutely European. The bad news: it is absolutely mediocre
The French startup Mistral has just launched Mistral Medium 3.5, an open-weight AI model that is Europe's great exponent in an industry absolutely dominated by China, which competes directly in this type of project, and by the US. And if this is the best they can do, it seems Europe has a problem.

Mediocre. This is a “dense” model with 128 billion parameters and a context window of 256,000 tokens. While models with a Mixture-of-Experts (MoE) architecture activate only a subset of their total parameters to achieve enviable efficiency and capacity, Mistral activates all of them. That makes it much less efficient, but in theory its performance should be promising. And that’s the problem: it is not.

Benchmarks. Pedro Domingos, professor of deep learning at the University of Washington, put it very well: “Mainstream AI companies brag about how their model is much better in benchmarks. So Mistral brags about how their model is much worse.” It is true that the models it is compared against are larger in total number of parameters, but as we will see, even taking that into account they are cheaper and, since many of them use that MoE architecture, theoretically more efficient. The model, for its part, unifies Mistral's previous catalog and follows the market trend of letting users set the desired level of reasoning as a parameter (reasoning_effort).

Bad results. And Domingos has a point: Mistral does not seem to have a problem showing the results of various benchmarks in which it performs poorly, but it also compares itself with models that are by no means the most recent or powerful on the market. Thus, it is pitted against Claude Sonnet 4.5/4.6, Kimi K2.5, GLM-5.1 and Qwen 3.5 397B. In almost all cases (GLM-5.1 excepted) more recent and powerful versions of those models already exist.

Not so far from local models.
In fact, Medium 3.5 scored 77.6% in SWE-Bench Verified, a programming benchmark in which Qwen3.6-27B reaches 72.4%, with a fundamental difference: you can run the latter “for free” (with the appropriate hardware, and with you paying the electricity bill) on a relatively affordable machine.

More expensive (and somewhat more restrictive). Used via API, Mistral Medium 3.5 costs $1.50 per million input tokens and $7.50 per million output tokens. GLM-5.1 costs $1.40/$4.40 respectively, and Kimi K2.5 costs $0.50/$2.80. Its recent successor, Kimi K2.6, costs $0.95/$4.00, and it is significantly better than Mistral while being cheaper. There is a curious detail: Mistral uses a “modified MIT license” instead of the traditional Apache 2.0, and indicates that the model can be used commercially or non-commercially except by “high-income” companies.

Chasing Anthropic. In addition to the model itself, the company has presented so-called remote coding agents that use Mistral Vibe CLI to, for example, submit pull requests to GitHub in an automated way. It also offers a “Work Mode” in Le Chat, which allows multi-step tasks to be managed autonomously. These are tools clearly intended to strengthen Mistral’s role as a base for coding agents, the path that has worked fantastically well for Anthropic.

Its advantage: being European. The only great strength of this model is that it has been developed by a European startup, which gives it clear visibility at a time when many EU countries are talking about digital sovereignty. It is the only Western model that seems willing to compete with China in the field of open-weight models, which is good news, but the truth is that in terms of performance Mistral Medium 3.5 does not seem set to be competitive.

The geopolitical safety net. That, together with the fact that it costs more than its competitors, makes it a difficult choice except for those who clearly prioritize that European origin.
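The price gap described above is easy to check with quick arithmetic. Here is a minimal sketch using the per-million-token prices quoted in this piece; the 10M-input/2M-output workload is a hypothetical example chosen for illustration, not a real measurement:

```python
def api_cost_usd(price_in: float, price_out: float,
                 input_tokens: int, output_tokens: int) -> float:
    """Cost in USD, given prices per million input/output tokens."""
    return (price_in * input_tokens + price_out * output_tokens) / 1_000_000

# Per-million-token prices (input, output) as quoted above.
PRICES = {
    "Mistral Medium 3.5": (1.50, 7.50),
    "GLM-5.1":            (1.40, 4.40),
    "Kimi K2.5":          (0.50, 2.80),
    "Kimi K2.6":          (0.95, 4.00),
}

# Hypothetical workload: 10M input tokens, 2M output tokens.
for model, (p_in, p_out) in PRICES.items():
    cost = api_cost_usd(p_in, p_out, 10_000_000, 2_000_000)
    print(f"{model}: ${cost:.2f}")
# Mistral Medium 3.5: $30.00, vs $22.80 (GLM-5.1),
# $10.60 (Kimi K2.5) and $17.50 (Kimi K2.6).
```

On this workload, Mistral's $7.50 output rate dominates the bill, making it the most expensive option precisely for generation-heavy use cases such as the coding agents it is now courting.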
That European origin is Mistral’s ace in the hole, and they are playing it perfectly. The company has recently obtained financing to create data centers in Europe, and it feeds on this new obsession with minimizing dependence on North American Big Tech.

In Xataka | The CEO of Mistral sends a message to Europe: the end of being the technological vassal of the United States