
AI’s 625x memory demand: Who pays for the server glut?

Source: pcgamer.com

Published: Apr 11, 2026 at 08:12 UTC

By Axel Byte, Technology editor
"Knows that a glossy demo is just the opening act."
  • Dell CEO projects AI memory needs to explode 625x by 2028
  • Data centers face cost pressure from DRAM scarcity
  • User cloud bills may rise as providers pass on hardware costs

Dell CEO Michael Dell didn’t just predict growth; he forecast a 625x surge in AI memory demand by 2028 compared with 2022. That’s not a typo: the entire AI market’s appetite for DRAM and storage could outstrip 2022 levels by nearly three orders of magnitude. For context, NVIDIA’s H100 GPUs already carry 80GB of HBM3 per chip; scaling that across millions of servers explains the panic.
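
To make the scale concrete, here is a minimal back-of-envelope sketch in Python. The 80GB-per-H100 figure and the 625x factor come from the article; the GPUs-per-server count, fleet size, and 2022 baseline are illustrative assumptions, not reported numbers.

```python
# Illustrative back-of-envelope estimate of fleet-wide accelerator memory.
# From the article: 80 GB of HBM3 per H100, and Dell's 625x growth forecast.
# Assumptions (hypothetical): 8 GPUs per server, 2 million servers, 2022 baseline.

HBM_PER_GPU_GB = 80        # H100 memory capacity (from the article)
GPUS_PER_SERVER = 8        # assumed typical AI server configuration
SERVERS = 2_000_000        # assumed fleet size ("millions of servers")

total_hbm_gb = HBM_PER_GPU_GB * GPUS_PER_SERVER * SERVERS
print(f"Fleet-wide accelerator memory: ~{total_hbm_gb / 1e9:.1f} EB")

# Applying the forecast growth factor to an assumed 2022 baseline:
BASELINE_2022_EB = 0.01    # assumed 2022 baseline in exabytes (illustrative only)
GROWTH_FACTOR = 625        # Dell's forecast: 625x by 2028 vs. 2022
print(f"Implied 2028 demand: ~{BASELINE_2022_EB * GROWTH_FACTOR:.1f} EB")
```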

The problem isn’t just volume. Memory prices are volatile, and DRAM shortages in 2024 have already pushed costs up 13% quarter-over-quarter. Cloud providers like AWS and Azure absorb these spikes first—but history shows they pass costs to customers when pressed. If Dell’s numbers hold, even mid-tier AI startups could face sticker shock by 2026.
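
As a rough illustration of why that volatility matters, the sketch below compounds the article’s 13% quarter-over-quarter increase; the eight-quarter horizon is an assumption chosen only to show how quickly the curve steepens.

```python
# Sketch: compounding a sustained 13% quarter-over-quarter DRAM price increase.
# The 13% figure is from the article; the 8-quarter horizon is an assumption.

QOQ_INCREASE = 0.13

price = 1.0  # normalized starting price
for quarter in range(1, 9):
    price *= 1 + QOQ_INCREASE
    print(f"Quarter {quarter}: {price:.2f}x the starting price")

# After eight quarters of +13% each, prices sit at roughly 2.7x the start,
# the kind of spike cloud providers eventually pass along to customers.
```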

This isn’t abstract for developers. Training a single large language model today can cost $10M in cloud fees; multiply that by 625x memory demand, and the math gets ugly. The real bottleneck may not be GPUs after all—it’s whether the industry can even build enough memory chips in time.
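
A rough sensitivity sketch of what that could mean for a single training run: the $10M figure is from the article, while the memory share of the bill and the scarcity multiplier are hypothetical assumptions for illustration.

```python
# Hypothetical sensitivity check on one large training run's cloud bill.
# From the article: ~$10M per large training run today.
# Assumptions: memory is ~15% of that bill, and scarcity triples that line item.

TRAINING_COST_USD = 10_000_000
MEMORY_SHARE_TODAY = 0.15      # assumed share of the bill attributable to memory
MEMORY_COST_MULTIPLE = 3       # assumed rise in memory pricing under scarcity

memory_today = TRAINING_COST_USD * MEMORY_SHARE_TODAY
other_costs = TRAINING_COST_USD - memory_today
new_total = other_costs + memory_today * MEMORY_COST_MULTIPLE

print(f"Memory line item today: ${memory_today:,.0f}")
print(f"Same run if memory triples: ${new_total:,.0f} "
      f"({new_total / TRAINING_COST_USD:.0%} of today's budget)")
```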


The price of progress isn’t just GPUs anymore

The ecosystem effects ripple beyond data centers. PC gamers—already grumbling about DRAM price hikes—could see their 32GB RAM kits become collateral damage in AI’s land grab. Meanwhile, hyperscalers like Google and Meta are locking in long-term memory contracts, squeezing smaller players out of the market.

There’s a cruel irony here: AI’s promise of efficiency may drown in its own infrastructure costs. Startups betting on edge AI to cut cloud bills could find themselves priced out of the very memory needed to run local models. Even Samsung’s aggressive DRAM expansion might not keep pace if demand outstrips silicon foundries’ capacity.

For all the noise about AI’s potential, the actual story is simpler: someone’s paying for this. Whether it’s end users via higher SaaS fees, taxpayers subsidizing CHIPS Act factories, or investors bankrolling unprofitable AI labs, the bill is coming due. The question isn’t if the memory will arrive—it’s who gets priced out when it doesn’t.

Memory Consumption · Computing Performance