
“Going from a 3-gear bicycle to a 20-gear bicycle” — Scientists inch closer to new tech that combines ultra-expensive but super-fast SRAM with dense DRAM

Stanford scientists are looking to combine SRAM and DRAM

The new memory type would help solve issues with AI computing

Gain Cell memory looks to bridge the gap between the two types

The development of more energy-efficient hardware for artificial intelligence (AI) systems is receiving increased support, with a focus on improving memory technology.

A hybrid type of memory that blends the high density of DRAM (Dynamic Random-Access Memory) with the speed of SRAM (Static Random-Access Memory) is at the forefront of this effort.

The project is led by electrical engineers at Stanford University, whose goal is to create faster, more efficient memory hardware for AI applications, addressing current limitations in processing power and energy consumption.

Memory, a key AI bottleneck – hybrid gain cell memory to the rescue

This research is being funded under the CHIPS and Science Act, with a recent boost of $16.3 million in US Department of Defense funding to the California-Pacific-Northwest AI Hardware Hub.

AI systems are heavily reliant on hardware that can efficiently move and process large volumes of data. However, moving data between memory and logic units takes time, which slows down GPUs and leads to increased energy consumption.

As AI models become larger and more complex, these memory bottlenecks become more pronounced. Therefore, faster and denser memory located directly on chips is seen as a potential solution to this problem.

Stanford University’s H.-S. Philip Wong, an electrical engineer and chair of the AI Hardware Hub, emphasizes the importance of memory in making AI hardware more energy efficient.


Wong’s team has turned to a new type of memory design called Gain Cell memory, which combines the advantages of both DRAM and SRAM. The hybrid gain cell offers a middle ground: the small footprint of DRAM combined with the faster readout speeds characteristic of SRAM.

The key difference in this new design is the use of two transistors, one for writing data and one for reading, rather than the capacitor found in traditional DRAM. This allows the gain cell to retain data more reliably and to boost the signal strength when data is read.
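The two-transistor split described above can be sketched as a toy software model. To be clear, this is an illustrative abstraction, not a device simulation: the class name, methods, and leakage constant are all assumptions chosen to show the idea of a separate write path and a non-destructive, amplifying read path.

```python
# Toy model of a two-transistor (2T) gain cell, purely illustrative:
# a write transistor deposits charge on a storage node, and a separate
# read transistor senses that charge without disturbing it.
# The leakage rate is an arbitrary placeholder, not a measured figure.

class GainCell2T:
    def __init__(self, leak_per_second: float = 1e-4):
        self.charge = 0.0            # charge on the storage node (0..1)
        self.leak = leak_per_second  # fractional leakage per second

    def write(self, bit: int) -> None:
        # Write transistor drives the storage node fully high or low.
        self.charge = 1.0 if bit else 0.0

    def idle(self, seconds: float) -> None:
        # Stored charge leaks slowly over time; this is why any
        # capacitor-less cell still has a finite retention window.
        self.charge *= (1.0 - self.leak) ** seconds

    def read(self, threshold: float = 0.5) -> int:
        # Read transistor amplifies the node voltage. Unlike a DRAM
        # capacitor read, sensing here does not destroy the stored value.
        return 1 if self.charge > threshold else 0

cell = GainCell2T()
cell.write(1)
cell.idle(5000)     # the article's quoted retention window, in seconds
print(cell.read())  # prints 1: charge has decayed but is still readable
```

The design point the model captures is the decoupling of read from write: the read path can be optimized for speed and signal gain independently of how the bit is stored.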

Gain Cell memory has faced limitations such as rapid data leakage in silicon-based designs and slower readout speeds in oxide-based designs. However, the Stanford team combined a silicon transistor with an indium tin oxide transistor, significantly enhancing the device’s performance and offering faster readouts while maintaining a compact footprint.

The new design can hold data for over 5,000 seconds, far longer than traditional DRAM, which needs refreshing every 64 milliseconds. Additionally, the hybrid memory is around 50 times faster than oxide-oxide gain cells.
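As a back-of-the-envelope illustration of the retention figures quoted above (the numbers come from the article; nothing here is a device model):

```python
# Compare the quoted gain-cell retention time against the standard
# DRAM refresh interval mentioned in the article.
DRAM_REFRESH_INTERVAL_S = 0.064  # traditional DRAM: refresh every 64 ms
GAIN_CELL_RETENTION_S = 5000     # reported hybrid gain-cell retention

ratio = GAIN_CELL_RETENTION_S / DRAM_REFRESH_INTERVAL_S
print(f"~{ratio:,.0f}x longer than a DRAM refresh interval")
```

That works out to roughly 78,000 refresh intervals' worth of retention, which is why the design could sharply cut the energy DRAM spends on constant refreshing.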

Wong likens this advancement to transitioning from a basic 3-gear bicycle to a sophisticated 20-gear bicycle, emphasizing that this evolution of memory technology will extend beyond traditional options like DRAM, SRAM, and flash memory. “We want to provide better options so designers can optimize better…it’s an opportunity to rearchitect computers,” Wong said.

Via IEEE

Original Author: Efosa Udinmwen | Source: TechRadar

Published by
Akshit Behera
