
CPU And GPU Prices Might Rise, But Not Because Of What You Might Think

Little SRAM Scaling On TSMC’s 3nm Node Could Lead To More Expensive CPUs And GPUs.

According to WikiChip, TSMC’s SRAM scaling has slowed significantly, with SRAM cells lagging well behind logic on recent process technologies. TSMC’s N3 and N3E nodes, which provide 1.6x and 1.7x improvements in logic density over its N5 (5nm-class) process, have SRAM bitcell sizes of 0.0199 µm² and 0.021 µm², respectively. That makes N3’s bitcell only ~5% smaller than N5’s, while N3E offers no SRAM scaling at all. Meanwhile, Intel’s Intel 4 node (originally called 7nm EUV) reduces the SRAM bitcell size to 0.024 µm², down from 0.0312 µm² on Intel 7 (formerly known as 10nm Enhanced SuperFin). This translates to an SRAM density of 27.8 Mib/mm², which still trails TSMC’s high-density SRAM. Additionally, an Imec presentation has shown that a ‘beyond 2nm’ node with forksheet transistors should be able to achieve an SRAM density of around 60 Mib/mm². However, that process technology is years away, and in the meantime chip designers will have to make do with the SRAM densities Intel and TSMC advertise today. This slow SRAM scaling will likely make CPUs, GPUs, and SoCs more expensive.
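To put those bitcell figures in perspective, here is a quick back-of-envelope sketch in Python that converts a published bitcell size into bit density. The ~70% array-efficiency factor is an assumption to account for sense amplifiers, decoders, and other peripheral circuitry, not a number from WikiChip:

```python
# Back-of-envelope: convert published SRAM bitcell sizes (µm²) into bit density (Mib/mm²).
# Bitcell sizes are the figures quoted above; the 70% array efficiency is an assumed
# overhead factor, included to show why real array densities come out lower than raw bitcell math.

UM2_PER_MM2 = 1_000_000   # 1 mm² = 1,000,000 µm²
BITS_PER_MIB = 1024 * 1024

bitcells_um2 = {
    "TSMC N5":  0.021,
    "TSMC N3":  0.0199,
    "TSMC N3E": 0.021,
    "Intel 4":  0.024,
}

ARRAY_EFFICIENCY = 0.70   # assumed share of array area occupied by bitcells

for node, cell in bitcells_um2.items():
    raw = UM2_PER_MM2 / cell / BITS_PER_MIB      # ideal density, bitcells only
    effective = raw * ARRAY_EFFICIENCY           # rough density of a real array
    print(f"{node:9s} raw ≈ {raw:5.1f} Mib/mm²  effective ≈ {effective:5.1f} Mib/mm²")
```

With that assumed overhead, the math lands close to the 27.8 Mib/mm² figure quoted for Intel 4 above, and it makes the core problem visible: where SRAM is concerned, N3 and N3E barely improve on N5.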

Modern CPUs, GPUs, and SoCs require extensive SRAM caches to efficiently process data and facilitate workloads such as artificial intelligence (AI) and machine learning (ML). For example, AMD’s Ryzen 9 7950X carries 81 MB of cache, whereas Nvidia’s AD102 has 123 MB of SRAM for various caches. Moving forward, the need for caches and SRAM will only increase, but with N3 and N3E it will be difficult to reduce the die area occupied by SRAM and mitigate the higher costs of the new node compared to N5. As a result, the die sizes of high-performance processors will grow and their costs will rise. In addition, SRAM cells, like logic cells, are prone to defects, so larger dies also mean lower yields. While designers can try to partially offset larger SRAM cells with N3’s FinFlex innovations (mixing and matching different FinFETs in a block to optimize performance, power, or area), the success of this strategy remains to be seen. TSMC plans to introduce its density-optimized N3S process technology in 2024, which promises to shrink SRAM bitcell size compared to N5. However, it is unclear whether this process will provide enough logic performance for chips designed by AMD, Apple, Nvidia, and Qualcomm.
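As a rough illustration of why stalled SRAM scaling feeds directly into die size and cost, the sketch below estimates how much area a given cache capacity consumes on N5 versus N3, reusing the bitcell sizes above. The ~70% array efficiency and the reading of the 123 MB figure as MiB are assumptions for illustration only:

```python
# Rough estimate of die area consumed by a cache of a given capacity,
# using the bitcell sizes quoted above and an assumed 70% array efficiency.

def sram_area_mm2(capacity_mib, bitcell_um2, array_efficiency=0.70):
    """Estimate the die area (mm²) needed for capacity_mib MiB of SRAM."""
    bits = capacity_mib * 1024 * 1024 * 8
    raw_um2 = bits * bitcell_um2
    return raw_um2 / array_efficiency / 1_000_000   # µm² -> mm²

for node, cell in (("TSMC N5", 0.021), ("TSMC N3", 0.0199)):
    print(f"123 MiB of SRAM on {node}: ~{sram_area_mm2(123, cell):.0f} mm²")
```

Because the bitcell barely shrinks, roughly the same ~30 mm² ends up devoted to that cache on N3 as on N5, even though N3 wafers cost more per mm², which is exactly why larger caches push die sizes and prices up.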

To combat slowing SRAM area scaling on FinFET-based nodes at 3nm and beyond, chip designers are turning to multi-chiplet designs and to alternative memory technologies such as eDRAM or FeRAM for caches. AMD’s 3D V-Cache is an example of this approach, as it disaggregates larger caches onto separate dies made on a cheaper node. Although this comes with its own challenges, it could be a viable answer to the SRAM scaling issue in the coming years.

Source: TomsHardware
