Research @ Intel 2007

You may think of Intel as a company that just makes CPUs and chipsets for your PC, but it's actually involved in much more. From pushing forward the frontiers of microprocessor design to developing new standards for wireless communication, the big I is actively engaged in research projects across a surprisingly wide range of fields throughout the world. That's why every couple of years it invites a select few journalists from around the world out to Santa Clara to demonstrate some of these projects and, hopefully, give us some clues as to what products will be available in the coming years.

Previous events have showcased such developments as the Xen Virtual Machine, technology from which has ended up in Intel's vPro platform. Other projects like the Photonics research are still ongoing and may eventually see the light of day. Inevitably, there are some projects that fall by the wayside, as other technologies make the research obsolete or the results prove unsuitable for mass production, but that's the thing with trying to predict the future - it's unpredictable.

Justin Rattner opened proceedings with an address titled 'From St. Petersburg to Santa Clara it's Nano-scale to Tera-scale'. In it he gave us an overview of Intel's research setup, which spans the globe, incorporating 15 locations and nearly 1,000 researchers working on a variety of projects in both software and hardware. He was also keen to emphasise the product-based nature of the research, which ensures each project has not only some academic merit but will also lead to products appearing on shop shelves. For instance, while the 80-core CPU Intel demonstrated last year may seem to have little practical purpose at the moment, Intel actually has a genuine usage model for the chip. In fact, it was the practical uses of these multi-core CPUs that was the focus of many of the projects on show.

Talking of the 80-core CPU, Intel has now managed to get it running at 6.26GHz, which gives a theoretical 2 teraflops (2,000,000,000,000 floating point operations per second), but that's not all. Using new dynamic core control, the number of operating cores can be throttled on the fly. This means that while running all 80 cores at full pace the chip consumes over 150 Watts, yet with just four cores running at 3.13GHz it consumes a paltry 3.32 Watts. This massive scalability opens up a whole host of power saving possibilities, as well as offering a huge amount of processing power.
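As a quick sanity check, the 2 teraflops figure is consistent with each core completing roughly four floating point operations per clock cycle. Note that the 4 FLOPs-per-cycle-per-core figure below is an assumption inferred from the quoted numbers, not something Intel has specified:

```python
# Back-of-the-envelope check of the claimed 2 teraflops peak.
# flops_per_cycle = 4 is an assumed value (e.g. two FPUs each
# completing a multiply-accumulate per cycle), not an Intel spec.
cores = 80
clock_hz = 6.26e9       # 6.26GHz
flops_per_cycle = 4     # assumption inferred from the quoted figures

peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.2f} teraflops")  # → 2.00 teraflops
```

Running the numbers the other way, 2,000,000,000,000 FLOPS divided by 80 cores at 6.26GHz comes out at just under four operations per core per cycle, which is what makes the assumption plausible.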

That's not all, though: Intel also has plans for two other variations on the same theme. The first will incorporate RAM in the same package by stacking it on top of the CPU, allowing unprecedented amounts of bandwidth. The other will use x86 cores in place of the basic floating point units on the current version. When this happens you'll have an 80-core CPU that can be used for running your regular PC applications.

Of course, very few people run 80 applications at the same time, so without some form of optimisation a large proportion of these cores will remain inactive. That's why the vast majority of the research was focussed on ways to use these new multi-core CPUs to their full potential.
