Samsung has announced the industry's first 24Gb GDDR7 memory chip, which will be used in upcoming GPUs. According to the manufacturer, the new GDDR7 DRAM offers the fastest speed in the industry alongside gains in performance and efficiency. Notably, the memory is aimed not only at AI workloads but also at gaming graphics cards and self-driving cars.
Samsung states that the 24Gb GDDR7 is built on a 10nm-class DRAM process, yielding significant improvements in efficiency. On the performance side, the chip pairs that advanced process node with three-level pulse-amplitude-modulation (PAM3) signaling, enabling speeds of up to 42.5 Gbps depending on the usage environment.
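PAM3's advantage over conventional two-level (NRZ) signaling comes down to how many bits each symbol can carry. The sketch below works through that arithmetic; the "3 bits over 2 symbols" mapping is the commonly cited GDDR7 encoding, stated here as an assumption rather than a Samsung specification.

```python
import math

def bits_per_symbol(levels: int) -> float:
    """Theoretical information content of one symbol with the given number of signal levels."""
    return math.log2(levels)

# NRZ uses 2 voltage levels: exactly 1 bit per symbol.
nrz = bits_per_symbol(2)            # 1.0

# PAM3 uses 3 levels: log2(3) ~= 1.58 bits in theory; practical GDDR7
# encodings map 3 bits onto 2 symbols, i.e. 1.5 bits per symbol (assumption).
pam3_theoretical = bits_per_symbol(3)
pam3_practical = 3 / 2

# At the same symbol rate, PAM3 therefore carries ~50% more data than NRZ.
gain = pam3_practical / nrz - 1
print(f"PAM3 carries {gain:.0%} more bits per symbol than NRZ")
```

This extra per-symbol capacity is what lets GDDR7 raise the per-pin data rate without a proportional increase in clock frequency.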
Samsung’s focus for its new memory chip is AI. The 24Gb GDDR7 is 25% faster than the previous generation and up to 80% faster than GDDR6X. The 10nm-class manufacturing process allows a 50% increase in cell density without changing the package size. For reference, the 16Gb GDDR7 announced in July 2023 offered a speed of 32 Gbps.
In terms of efficiency, the DRAM includes clock control management and a dual-VDD design that reduce unnecessary power consumption, while power-gating design techniques minimize current leakage. Samsung claims the new chip is more than 30% more energy efficient than its predecessor.
Samsung aims to address the memory limitations of current GPUs.
While the development of this DRAM is driven by the needs of artificial intelligence, other fields will also benefit. Samsung notes that its memory can be used in consumer graphics cards, video game consoles, and autonomous cars.
The introduction of this chip will let GPU manufacturers build cards with better VRAM configurations. One major issue with current generations of graphics cards is insufficient memory: even a model like the GeForce RTX 4080 Super offers only 16 GB of GDDR6X at 23 Gbps.
Other manufacturers might pair these modules with different bus widths. Since each 24Gb module holds 3 GB and connects over a 32-bit interface, a 128-bit GPU could support up to 12 GB of VRAM at 680 GB/s, while a 192-bit GPU could scale up to 18 GB of VRAM at just over 1 TB/s. In the best-case scenario, a 512-bit configuration could accommodate up to 48 GB of VRAM at roughly 2.7 TB/s.
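These figures follow directly from the module specs. A minimal sketch of the arithmetic, assuming 24Gb (3 GB) modules with 32-bit interfaces running at the peak 42.5 Gbps per-pin rate:

```python
GBPS_PER_PIN = 42.5   # peak per-pin data rate of the 24Gb GDDR7 chip
MODULE_GB = 3         # 24 gigabits = 3 gigabytes per module
MODULE_BUS_BITS = 32  # interface width of a single GDDR7 module

def vram_config(bus_width_bits: int) -> tuple[int, float]:
    """Return (capacity in GB, bandwidth in GB/s) for a given memory bus width."""
    modules = bus_width_bits // MODULE_BUS_BITS
    capacity_gb = modules * MODULE_GB
    # Total bandwidth: every bus pin transfers GBPS_PER_PIN gigabits per
    # second; divide by 8 to convert bits to bytes.
    bandwidth_gbs = bus_width_bits * GBPS_PER_PIN / 8
    return capacity_gb, bandwidth_gbs

for bus in (128, 192, 256, 384, 512):
    cap, bw = vram_config(bus)
    print(f"{bus:>3}-bit bus: {cap} GB at {bw:,.0f} GB/s")
```

Running this reproduces the article's numbers: 12 GB at 680 GB/s for 128-bit, 18 GB at 1,020 GB/s (just over 1 TB/s) for 192-bit, and 48 GB at 2,720 GB/s for 512-bit.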
One of the first graphics cards to feature these modules is expected to be the GeForce RTX 5080, which will be announced alongside the RTX 5090 during NVIDIA’s keynote at CES 2025.