At the recent International Solid-State Circuits Conference (ISSCC), IBM announced it had developed a new method for putting memory on a chip to enhance overall performance. The company said it plans to use DRAM (dynamic RAM) in place of SRAM (static RAM) as the embedded memory cache built onto each of its chips.
According to IBM, the switch will let each chip store its data in one-third the area while drawing one-fifth the standby power, largely because a DRAM cell needs only a single transistor and a capacitor where an SRAM cell requires six transistors. The change should improve chip performance for multi-core processors and for applications that handle large amounts of graphics data.
Until now, however, chip makers had been unable to work out how to integrate embedded DRAM with silicon-on-insulator (SOI) technology.
“With this breakthrough solution to the processor/memory gap, IBM is effectively doubling microprocessor performance beyond what classical scaling alone can achieve,” said Dr. Subramanian Iyer, an IBM engineer and director of its 45 nm technology development.
“As semiconductor components have reached the atomic scale, design innovation at the chip-level has replaced materials science as a key factor in continuing Moore’s Law,” Iyer added.
IBM says it plans to put between 24MB and 48MB of on-chip cache memory into its new processors next year.