GMKtec EVO-T2 sells local AI as desktop infrastructure
GMKtec frames the EVO-T2 as a compact local AI workstation. 📷 AI-generated / Tech&Space
- ✅ GMKtec claims the EVO-T2 combines CPU, GPU, and NPU for up to 180 TOPS
- ✅ Phison aiDAPTIV+ uses SSD storage as a memory extension for larger models
- ✅ 10GbE, USB4, and OCuLink make it closer to an edge server than a normal mini PC
The GMKtec EVO-T2 is not interesting because it is another small box for a desk. It is interesting because it tries to sell the mini PC as a local AI node: a combination of processor, graphics, NPU, fast SSD storage, and networking that can keep part of the workload out of the cloud. TechRadar reports that the device uses third-generation Intel Core Ultra processors on the Panther Lake architecture and that GMKtec claims up to 180 TOPS of combined AI compute across CPU, GPU, and NPU.

That is a marketing number that calls for caution, but the direction is clear: an AI PC is no longer measured only by whether it can run an assistant, but by whether it can execute larger models and agent workflows locally. The article mentions support for models with up to 70 billion parameters without relying on external cloud infrastructure. That does not mean such a model will be fast in every scenario; it means the hardware, memory tricks, and software environment are being packaged as a workstation for more private and controlled AI.
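To see why a 70-billion-parameter model strains a desktop machine, some back-of-the-envelope arithmetic helps. The figures below are illustrative only (raw weight storage at common precisions; real runtimes add KV-cache and framework overhead on top), not numbers from GMKtec or TechRadar:

```python
# Rough memory footprint of a 70B-parameter model's weights alone.
# Illustrative arithmetic, not vendor-published figures.

PARAMS = 70e9  # 70 billion parameters


def weights_gb(bits_per_param: float) -> float:
    """Raw weight storage in GiB at a given numeric precision."""
    return PARAMS * bits_per_param / 8 / 2**30


for label, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{label}: ~{weights_gb(bits):.0f} GiB")
# FP16: ~130 GiB, INT8: ~65 GiB, INT4: ~33 GiB
```

Even aggressively quantized, the weights alone approach or exceed the DRAM of a typical mini PC, which is exactly the gap that a storage-backed memory tier is meant to bridge.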
180 TOPS, PCIe 5.0 SSDs, and Phison pseudo-memory sound impressive, but the key word remains validation.
Phison aiDAPTIV+ tries to stretch memory by using SSD storage as an active layer. 📷 AI-generated / Tech&Space
The most interesting part is not 10GbE, USB4, or OCuLink, even though those are serious interfaces for external GPUs, fast networks, and edge deployments. The most interesting part is Phison aiDAPTIV+, a system that uses SSD storage as a memory extension and segments larger models across DRAM, GPU memory, and storage.

That approach makes sense because local AI often hits memory limits before raw operation counts. A model can fit on paper without fitting into a device's fast memory. If less active parts stay on the SSD while active parts move toward the GPU, the user gets the impression of a larger memory pool. But an SSD is not RAM: latency, wear, heat, and sustained-load behavior will decide whether this is a practical architecture or an impressive demo. TechRadar therefore rightly notes that long-term performance has not been independently verified.

The EVO-T2 shows where the category is heading: small devices that are no longer just office PCs, but local AI endpoints. If the claims hold up, companies get a more practical option for documents, agent tasks, and models they do not want to send to the cloud. If they do not, the machine will still be a useful reminder that a TOPS number without a real workload says little.
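The DRAM/SSD tiering idea behind aiDAPTIV+ can be sketched in miniature. This is not Phison's implementation, and every name here is invented for illustration: "SSD" is a temp directory with one file per layer, "DRAM" is a small least-recently-used cache, and `disk_reads` counts the slow tier accesses that real hardware would pay in latency and wear:

```python
# Toy DRAM/SSD weight tiering: hot layers live in a small in-RAM LRU
# cache, everything else is read back from disk on demand.
# Hypothetical sketch only; not the aiDAPTIV+ codebase.

import os
import tempfile
from collections import OrderedDict


class TieredWeights:
    """Keep at most `cache_slots` layers in RAM; spill the rest to disk."""

    def __init__(self, layers: dict, cache_slots: int = 2):
        self.dir = tempfile.mkdtemp(prefix="tiered_")
        self.cache = OrderedDict()  # name -> bytes, LRU order
        self.cache_slots = cache_slots
        self.disk_reads = 0
        # "Install" phase: every layer starts on the slow tier (SSD).
        for name, blob in layers.items():
            with open(os.path.join(self.dir, name), "wb") as f:
                f.write(blob)

    def get(self, name: str) -> bytes:
        if name in self.cache:  # DRAM hit: cheap
            self.cache.move_to_end(name)
            return self.cache[name]
        with open(os.path.join(self.dir, name), "rb") as f:  # SSD miss: slow
            blob = f.read()
        self.disk_reads += 1
        self.cache[name] = blob
        if len(self.cache) > self.cache_slots:
            self.cache.popitem(last=False)  # evict least-recently-used layer
        return blob


# Usage: three layers, but "DRAM" only holds two at a time.
weights = TieredWeights({f"layer{i}": bytes([i]) * 4 for i in range(3)})
for name in ["layer0", "layer1", "layer0", "layer2", "layer0"]:
    weights.get(name)
print(weights.disk_reads)  # 3 slow-tier reads; the rest were cache hits
```

The sketch also shows where the real engineering difficulty lies: a good access pattern (reuse of `layer0` above) hides the slow tier, while a bad one turns every lookup into an SSD read, which is exactly the sustained-load behavior reviewers will need to validate.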

