Citing the mainstream impact of multi- and many-core computing and new developments in extreme scale computing as examples, Justin Rattner, Intel’s chief technology officer, told an Intel Developer Forum audience that the future of computing is accelerating.
Intel continues to push technology beyond today’s limits, looking for the next big leaps that take computing to new levels of performance with far less power consumption than is possible today. As an example, Rattner demonstrated a Near-Threshold Voltage Processor that uses novel, ultra-low-voltage circuits to dramatically reduce energy consumption by operating close to the threshold, or turn-on, voltage of the transistors. This concept CPU runs fast when needed but drops power to below 10 milliwatts when its workload is light – low enough to keep running while powered only by a solar cell the size of a postage stamp. While the research chip will not become a product itself, the results of this research could lead to the integration of scalable near-threshold voltage circuits across a wide range of future products, reducing power consumption by 5-fold or more and extending always-on capability to a wider range of computing devices. Technologies such as this will further Intel Labs’ goal of reducing energy consumption per computation by 100- to 1000-fold for applications ranging from massive data processing at one end of the spectrum to terascale-in-a-pocket at the other.
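The savings described above follow from the classic dynamic-power relation for CMOS logic, P ≈ C·V²·f: supply voltage enters quadratically, so operating near threshold cuts energy per operation even after the clock is slowed to compensate. A minimal sketch of the arithmetic (the voltage, frequency, and capacitance figures are illustrative assumptions, not Intel’s published numbers):

```python
# Dynamic switching power of CMOS logic: P ~ C * V^2 * f.
def dynamic_power(cap_farads, volts, freq_hz):
    """Switching power in watts for effective capacitance C, supply V, clock f."""
    return cap_farads * volts ** 2 * freq_hz

C = 1e-9  # effective switched capacitance in farads (illustrative)

# Nominal operating point vs. an assumed near-threshold point.
p_nominal = dynamic_power(C, 1.0, 1e9)  # 1.0 V at 1 GHz
p_ntv = dynamic_power(C, 0.4, 1e8)      # 0.4 V at 100 MHz

power_reduction = p_nominal / p_ntv          # ~62.5x lower power
energy_reduction = 1.0 ** 2 / 0.4 ** 2       # energy/op scales as V^2: ~6.25x
print(f"power: {power_reduction:.1f}x, energy per op: {energy_reduction:.2f}x")
```

Even though the clock drops 10-fold in this sketch, energy per operation (power divided by frequency, proportional to V²) still falls roughly 6-fold, in line with the “5-fold or more” figure cited for future products.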
The Hybrid Memory Cube, a concept DRAM developed by Micron in collaboration with Intel, demonstrates a new approach to memory design delivering a 7-fold improvement in energy efficiency over today’s DDR3. Hybrid Memory Cube stacks memory chips into a compact “cube,” connected by a new, highly efficient memory interface that sets the bar for energy consumed per bit transferred while supporting data rates of one trillion bits per second. This research could lead to dramatic improvements in servers optimised for cloud computing as well as ultrabooks, televisions, tablets and smartphones.
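The energy-per-bit framing makes the 7-fold claim concrete: at a fixed data rate, interface power scales linearly with joules per bit. A rough sketch of the arithmetic (the 70 pJ/bit DDR3-class figure is an illustrative assumption, used only to show how the numbers relate; only the 7-fold ratio and the one-terabit rate come from the article):

```python
def interface_power_watts(pj_per_bit, bits_per_second):
    """Link power = energy per bit * bit rate (1 pJ = 1e-12 J)."""
    return pj_per_bit * bits_per_second / 1e12

TERABIT = 1e12  # the article's one trillion bits per second

ddr3_pj = 70.0         # assumed DDR3-class energy per bit, for illustration
hmc_pj = ddr3_pj / 7   # the claimed 7-fold efficiency improvement

print(interface_power_watts(ddr3_pj, TERABIT))  # 70.0 W
print(interface_power_watts(hmc_pj, TERABIT))   # 10.0 W
```

At a terabit per second, a 7-fold drop in energy per bit is the difference between tens of watts for the memory link alone and a budget that fits thin servers and mobile devices.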
Multi-core, the practice of building more than one processing engine into a single chip, has become the accepted method of increasing performance while keeping power consumption low. Many-core, by contrast, is more of a design perspective: rather than incrementally adding cores in the traditional way, it reinvents chip design on the assumption that high core counts are the new norm.
“Since 2006 Intel and the IA developer community have worked in partnership to realise the potential of multi- and many-core computing, with accelerating impact beyond high-performance computing to solving a wide range of real-world computing problems on clients and servers,” Rattner said. “What we have demonstrated today only scratches the surface of what will be possible with many-core and extreme scale computing systems in the future.”