Meta announced last Saturday the launch of Llama 4, its new family of open-source AI models. The company is showing off three multimodal variants, one of them especially striking for its absolutely enormous size. But there is a good reason for that.
Hello, Llama 4. It has been almost a year since Meta announced Llama 3, and its new family of models arrives in three variants:
- Llama 4 Scout: the "smallest" one, which competes with Gemma 3, Gemini 2.0 Flash-Lite and Mistral 3.1.
- Llama 4 Maverick: competes with GPT-4o, Gemini 2.0 Flash and DeepSeek V3.
- Llama 4 Behemoth: an absolute monster that, according to Meta, surpasses GPT-4.5, Gemini 2.0 and Claude 3.7 in various benchmarks. This last one is not publicly available for now.
An amazing context window. These models offer a context window of up to 10 million tokens, something simply spectacular. That means we can feed in a gigantic amount of data as input (the prompt), for example huge code repositories to work on directly.
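To get an intuition for what 10 million tokens means in practice, here is a rough back-of-the-envelope calculation. The characters-per-token and characters-per-page figures are common rules of thumb, not numbers from Meta, and vary by tokenizer and language:

```python
# Back-of-the-envelope: how much text fits in a 10M-token context window?
CONTEXT_TOKENS = 10_000_000
CHARS_PER_TOKEN = 4      # rough rule of thumb for English text (assumption)
CHARS_PER_PAGE = 3_000   # ~500 words per printed page (assumption)

chars = CONTEXT_TOKENS * CHARS_PER_TOKEN
pages = chars / CHARS_PER_PAGE
print(f"{chars:,} characters, roughly {pages:,.0f} pages")
# 40,000,000 characters, roughly 13,333 pages
```

By this estimate, a full prompt could hold on the order of tens of thousands of pages of text, which is why entire code repositories fit comfortably.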
Mixture-of-Experts. These models use the mixture-of-experts (MoE) architecture that DeepSeek, for example, already took advantage of. As we explained at the time, this divides the model into "experts" that are activated depending on the type of request. That improves efficiency and has proven to be a fantastic technique for getting models to perform optimally with far less resource consumption. Scout has 16 experts and Maverick has 128. This type of architecture also benefits the inference phase, or in other words: the models respond not only efficiently, but quickly and fluidly.
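The "experts activated per request" idea can be sketched in a few lines. This is a toy illustration of top-1 gating with tiny made-up dimensions, not Meta's implementation (real MoE layers route per token, typically pick the top-k experts, and add load-balancing losses):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, vastly smaller than Llama 4 (illustrative assumptions).
d_model, d_ff, n_experts = 8, 16, 4

# Each "expert" is a small feed-forward block: two weight matrices.
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.1,
     rng.standard_normal((d_ff, d_model)) * 0.1)
    for _ in range(n_experts)
]

# The router scores every expert for a given token representation.
router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Send the token vector x through its single best expert (top-1 gating)."""
    scores = x @ router                  # one score per expert
    best = int(np.argmax(scores))        # only one expert is activated
    w_in, w_out = experts[best]
    hidden = np.maximum(x @ w_in, 0.0)   # ReLU feed-forward
    return hidden @ w_out                # only ~1/n_experts of the weights ran

token = rng.standard_normal(d_model)
out = moe_layer(token)
print(out.shape)  # (8,)
```

The efficiency win is visible in the last line of the function: although the layer holds the parameters of all four experts, each token only pays the compute cost of one of them.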
Llama 4 Behemoth, the "teacher model". It is not yet available, but this variant is absolutely huge: it has two trillion parameters (2T), whereas Llama 3, which was already enormous (405B), was a model five times smaller. DeepSeek R1 has 671 billion parameters, three times fewer than Llama 4 Behemoth. The key to this model is that it serves as a "teacher" for smaller and, above all, specialized variants.


The Llama 4 comparison table against some of its rivals.
Specialization. This variant is also a perfect candidate to be "distilled": starting from it, one can obtain much smaller but equally capable models that "learn" from that teacher that is Llama 4 Behemoth, while adapting to more specific areas and scenarios in which they can excel.
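The core of that "learning from a teacher" is usually a distillation loss: the small student model is trained to match the large teacher's softened output distribution, not just the right answer. A minimal sketch with made-up logits (not Meta's training recipe):

```python
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Numerically stable softmax with temperature smoothing."""
    z = logits / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened distribution.
    A higher temperature exposes the teacher's 'dark knowledge' about
    which wrong answers are almost right."""
    p_teacher = softmax(np.asarray(teacher_logits), temperature)
    p_student = softmax(np.asarray(student_logits), temperature)
    return float(-(p_teacher * np.log(p_student + 1e-12)).sum())

# Toy example: a student that imitates the teacher scores a lower loss
# than one that confidently disagrees.
teacher      = np.array([2.0, 1.0, 0.1])
good_student = np.array([1.8, 0.9, 0.2])
bad_student  = np.array([0.1, 0.2, 3.0])
print(distillation_loss(teacher, good_student) < distillation_loss(teacher, bad_student))
# True
```

Minimizing this loss over many examples is what lets a much smaller model inherit most of the teacher's behavior at a fraction of the size.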
And less censorship. OpenAI's image generator already made a 180-degree turn and applies much less censorship, inspired by Grok 3. Meta is doing the same with Llama 4, which according to the company now "responds with a strong political lean at a rate comparable to Grok on a contentious set of political or social topics." So we have a somewhat less "politically correct" model.
For now, debatable results. Although the model seems to score very well on benchmarks, experts like Simon Willison have tried it and say their first impressions are not especially remarkable. Gemini 2.5 Pro seemed to behave much better in one of the tests he ran, summarizing and analyzing a text. That said, something similar happened with Llama 3, and both Llama 3.1 and Llama 3.2 significantly improved on its behavior.
They can already be tested. Llama 4 is now available on WhatsApp, Instagram, Facebook and the Meta AI website. And once again, Meta offers the option of downloading the models, although you will basically need a cluster with lots and lots of memory to run them at home. They are also available on Hugging Face.
And soon they "will reason". Mark Zuckerberg indicated on his Instagram account that, in addition to these models, next month we will see a model called Llama 4 Reasoning, which will be the company's first reasoning model. It is an especially interesting variant, above all when it comes to competing with DeepSeek R1 (and its successor, which will appear soon).
Images | Meta
In Xataka | Big Tech has decided something this year: that we will end up talking to an AI