Nvidia’s new SRAM-based AI chip could reshape the memory market

Nvidia may unveil a new SRAM-based AI chip, sparking debate on its impact. Experts suggest this design will complement, not replace, HBM, targeting ultra-low-latency tasks while HBM remains key for large-scale AI training and inference systems.

Nvidia may unveil a new artificial intelligence inference chip architecture built around on-chip static random access memory, or SRAM, at the Nvidia GTC 2026 conference in San Jose, California, on Monday, The Korea Herald reported, raising questions about whether the design could reshape the structure of the AI memory market. The report, citing industry sources, said the proposed SRAM-based architecture is expected to take a different approach from the GPU designs currently used in AI data centres.


Today’s GPUs process massive datasets by attaching multiple stacks of high-bandwidth memory, or HBM, next to the processor, enabling large volumes of data to be handled at extremely high speeds.

The SRAM-centered design would instead place relatively large SRAM blocks inside the chip itself, reducing data movement and potentially improving processing latency.

Design Trade-offs and Limitations of SRAM

The approach, however, comes with design trade-offs. SRAM cells are notably larger and more expensive per bit than dynamic random access memory, or DRAM, making SRAM difficult to deploy at large capacities. For the same capacity, SRAM cells require roughly five to ten times more silicon area than DRAM cells, according to industry estimates. As a result, SRAM has traditionally been used as cache or buffer memory in processors rather than as main memory. HBM, by contrast, is designed to deliver extremely high memory bandwidth and has become a key component in AI training and data center workloads.
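The scale of that area penalty can be sketched with some back-of-the-envelope arithmetic. The five-to-ten-times ratio comes from the industry estimates cited above; the per-bit cell area below is a hypothetical round number chosen for illustration, not vendor data.

```python
# Illustrative estimate of the silicon-area cost of SRAM vs DRAM capacity.
# DRAM_CELL_AREA_UM2 is an assumed, illustrative figure; the 5-10x ratio
# is the industry estimate quoted in the article (midpoint used here).

DRAM_CELL_AREA_UM2 = 0.002   # assumed area per DRAM bit, in square microns
SRAM_TO_DRAM_RATIO = 7       # midpoint of the quoted 5-10x range

def die_area_mm2(capacity_gib: float, cell_area_um2: float) -> float:
    """Silicon area (mm^2) occupied by the memory cells alone."""
    bits = capacity_gib * 8 * 1024**3
    return bits * cell_area_um2 / 1e6  # convert um^2 to mm^2

for tech, area in [("DRAM", DRAM_CELL_AREA_UM2),
                   ("SRAM", DRAM_CELL_AREA_UM2 * SRAM_TO_DRAM_RATIO)]:
    print(f"{tech}: {die_area_mm2(1, area):,.0f} mm^2 per GiB (cells only)")
```

Under these assumptions, a single gibibyte of SRAM cells would consume well over a hundred square millimeters of die, which is why SRAM has historically been confined to megabyte-scale caches rather than main memory.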

Will SRAM Threaten HBM Dominance?

Some observers have raised concerns that wider use of SRAM could weaken demand for HBM and other main memory technologies.

Expert Analysis: A Complementary Role for SRAM

Many experts, however, say the likelihood of SRAM directly replacing HBM remains low because the two technologies serve fundamentally different roles.

Industry Insider’s Viewpoint

“Interpreting SRAM as a replacement for HBM is somewhat exaggerated,” the report quoted an industry source, who spoke on condition of anonymity, as saying. “SRAM has traditionally been used as a small-capacity but expensive cache memory located next to the processor.”

The source said structural limits make it difficult for SRAM to replace large-capacity memory.

“To achieve the same capacity, SRAM would require roughly five to ten times more silicon area than DRAM,” the source said. “It may be useful in certain ultra-low-latency parts of AI chips, but its expansion as a general-purpose memory solution is likely to remain limited.”

“For the foreseeable future, HBM will continue to serve as the key near-memory supporting large-scale AI training and inference systems,” the source added.

Market Analyst Perspective

Market analysts also see SRAM-based architectures as more likely to complement existing memory technologies rather than replace them.

Chae Min-sook, an analyst at Korea Investment & Securities, said the introduction of SRAM-centered architectures should be understood as an additional option for specific workloads rather than a strategy to displace HBM or DRAM.

“It is more appropriate to see this as a solution targeting certain ultra-low-latency data center workloads or edge applications,” she said.

“Large-scale model training and general inference servers will still rely on HBM and DRAM as their main memory,” Chae said. “As AI computing evolves, the industry is likely to move toward a more layered memory hierarchy composed of SRAM, HBM and DRAM.”
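The layered hierarchy the analyst describes can be sketched as a simple tiering policy: hot data goes to the fastest tier with free capacity, and everything else spills downward. All capacities, latencies, and workload sizes below are illustrative assumptions, not product specifications.

```python
# Hypothetical sketch of a layered SRAM/HBM/DRAM hierarchy: greedily place
# data, hottest first, into the fastest tier that still has room.
# Tier capacities and relative latencies are assumed, illustrative values.

TIERS = [  # (name, capacity in MiB, rough relative access latency)
    ("SRAM", 512,        1),    # small on-chip blocks, lowest latency
    ("HBM",  96 * 1024,  10),   # high-bandwidth near-memory stacks
    ("DRAM", 512 * 1024, 50),   # large-capacity main memory
]

def place(tensors_mib):
    """Assign each (name, size_MiB) item to the fastest tier that fits."""
    free = {name: cap for name, cap, _ in TIERS}
    placement = {}
    for item, size in tensors_mib:
        for tier, _cap, _lat in TIERS:
            if free[tier] >= size:
                free[tier] -= size
                placement[item] = tier
                break
    return placement

# Hottest-first order: small KV cache, then weights, then optimizer state.
print(place([("kv_cache", 256),
             ("weights", 24 * 1024),
             ("optimizer", 200 * 1024)]))
# -> {'kv_cache': 'SRAM', 'weights': 'HBM', 'optimizer': 'DRAM'}
```

The point of the sketch is the division of labor, not the numbers: small, latency-critical state can live in on-chip SRAM while bulk model data continues to rely on HBM and DRAM, exactly the complementary split the analysts describe.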

Academic Outlook on Market Transition

Academics also see limited near-term disruption to the existing memory landscape.

Lee Jong-hwan, a professor of system semiconductor engineering at Sangmyung University, said any structural shift would likely unfold gradually rather than abruptly.

“Even if architectural changes occur, they are unlikely to cause immediate disruption,” Lee said. “Companies such as Samsung Electronics and SK hynix dominate the global memory market, meaning any technology transition would likely proceed at a controlled pace.”

“SRAM is still one type of memory, so from the perspective of memory manufacturers it would not necessarily pose a major problem,” the professor added.

(Except for the headline, this story has not been edited by Asianet Newsable English staff and is published from a syndicated feed.)
