Acer has just presented its Veriton GN100, a small workstation with a single purpose: letting you train and run AI models locally, without depending on the cloud. The proposal is not entirely new, but it points to a future in which the PC mutates to give us that promising ability to have a kind of local ChatGPT.
Hardware that sounds familiar. This machine has an NVIDIA GB10 chip with Grace Blackwell architecture, accompanied by 128 GB of LPDDR5X memory and 4 TB of storage. It is practically the same configuration as the NVIDIA DGX Spark workstation, which was announced in March but is still not on sale. In that case the storage was 1 TB, which means fewer downloaded models can be kept on hand.
An upward trend. The new Acer Veriton GN100 is a small step that may confirm an interesting trend: that of being able to enjoy small workstations specifically designed so that we can install, and even train, AI models locally.
Privacy as a banner. That matters above all for privacy: our conversations and our use of these models remain totally private, and no data can end up stored on the servers of OpenAI, Google or Anthropic, for example. We can thus consult the AI models with financial, medical or confidential data without fear, because that information never leaves our machine.
And potential savings. Not only that: being able to run a local AI model lets you avoid spending on cloud models, whether by subscribing to plans such as ChatGPT Plus or by using the API and paying per use. With a local model, queries come out "free" or almost: all we pay for is the hardware and its energy consumption.
But. The models we can run at home are open models such as Llama, DeepSeek R1, Qwen or the new OpenAI gpt-oss. With them it is possible to get remarkable results, but it is very difficult to match the quality of models like GPT-5, Gemini or Claude 4 Opus, because those are gigantic models that run in data centers with thousands of GPUs. Competing on that front is practically impossible, but for certain scenarios we simply do not need to, and these open source models can be a fantastic alternative.
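To give an idea of what that "local ChatGPT" looks like in practice, here is a minimal sketch that queries an open model served on the same machine. It assumes an Ollama server running locally and a model already downloaded; the model name and the prompt are placeholders, not a recommendation tied to the Veriton GN100.

```python
# Minimal sketch: query a locally served open model through Ollama's HTTP API.
# Assumes an Ollama server listening on localhost:11434 and a model already
# pulled; "gpt-oss:20b" is a placeholder, adjust it to whatever you have.
import requests


def ask_local_model(prompt: str, model: str = "gpt-oss:20b") -> str:
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["response"]


if __name__ == "__main__":
    # The prompt never leaves the machine: no cloud provider sees this data.
    print(ask_local_model("Summarize this confidential report: ..."))
```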


Acer Veriton GN100
Graphics memory takes charge. The larger the models, the better they usually behave, but there is a problem: to use them we need a lot of graphics memory, and current PCs do not stand out in that department. The most powerful consumer graphics card, the GeForce RTX 5090, has 32 GB of GDDR7 memory, which is fantastic in bandwidth terms but limits the size of the models that can be run.
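To see why 32 GB falls short so quickly, here is a back-of-the-envelope estimate. It is an illustrative sketch, not a benchmark: the 20% overhead factor for the KV cache and runtime is an assumption, and real memory use varies by engine and context length.

```python
# Rough estimate of the memory a model needs just to hold its weights,
# plus an assumed 20% overhead for KV cache and runtime bookkeeping.
def estimate_memory_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    weight_bytes = params_billion * 1e9 * (bits_per_weight / 8)
    return weight_bytes * overhead / 1e9


for params, bits in [(8, 4), (70, 4), (70, 16)]:
    print(f"{params}B model @ {bits}-bit ≈ {estimate_memory_gb(params, bits):.0f} GB")

# Roughly 5 GB, 42 GB and 168 GB respectively: a 70B model already overflows
# the 32 GB of an RTX 5090 even when quantized, but fits comfortably in the
# 128 GB of unified memory these AI workstations offer.
```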
Apple has a winning hand. Fortunately there are alternatives, and above all the future points to relevant changes in this area. On the one hand we have the Mac Studio with up to 512 GB of unified memory, and with excellent bandwidth too, on which it is possible to run large open source models without problems.
| System | Memory bandwidth (GB/s) |
|---|---|
| NVIDIA GeForce RTX 5090 (GDDR7) | 1,792 |
| Mac Studio M3 Ultra (unified memory) | 819 |
| NVIDIA DGX Spark (unified LPDDR5X) | 273 |
| Framework Desktop (LPDDR5X memory) | 256 |
But PCs are also starting to aim high. On the other hand, machines such as the Framework Desktop, which use a similar memory system and include 128 GB of LPDDR5X memory, also enable that possibility. The NVIDIA and Acer workstations follow a very similar scheme and likewise integrate 128 GB of LPDDR5X memory with decent, though not spectacular, bandwidth for AI applications.
This is promising. We may thus be witnessing the takeoff of a trend in the PC market. Until now manufacturers had specialized in gaming machines, but the rise of artificial intelligence lets us glimpse a future in which we have "PCs for AI" with huge amounts of graphics memory, perfect for this field.
In Xataka | The B300 GPU is NVIDIA's new beast for AI. And we already know what it is preparing for 2026 and 2027