Tera-Scale Computing and its Uses

At the last IDF Intel introduced the concept of Tera-Scale computing – a large array of cores together in a single package, capable of an incredible amount of parallel processing. As I described in the opening keynote coverage, Intel got its prototype working at 2 Teraflops, though operating within its normal thermal envelope it’s designed to run at 1 Teraflop, drawing 62W at 1.01 volts.

The prototype chip requires liquid cooling and, as you can see, the setup is quite a mess of cables and connectors. I don’t think we’ll be seeing a mobile version just yet.

While eight-core desktop machines are around the corner, there certainly seems to have been a jump in Intel’s thinking in the move to eighty cores. I asked Jason Howard from the Circuit Research Lab in Oregon why it has happened like this, and the answer I got essentially boiled down to: because they could. The team realised that having more cores actually made things easier. They could afford to lower the clock speeds, helping to keep things stable and relatively cool, while having more cores meant they could still achieve massive performance. The cores are all fully scalable, reconfigurable and power aware – energy efficiency is still important, even in uber chip prototypes.
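To put some rough numbers on that reasoning, here’s a minimal sketch of the trade-off. Every figure in it is invented for illustration – these are not Intel’s numbers – and the power model is simply the textbook rule that dynamic power scales roughly with voltage squared times frequency:

```python
# Illustrative only: why a wide array of slower, lower-voltage cores can be
# both faster and more efficient than a few fast ones. Throughput scales
# linearly with core count and clock; dynamic power per core scales roughly
# with V^2 * f. None of these figures are Intel's.

def peak_gflops(cores, clock_ghz, flops_per_cycle=2):
    return cores * clock_ghz * flops_per_cycle

def relative_power(cores, clock_ghz, volts):
    return cores * volts ** 2 * clock_ghz   # dynamic power ~ C * V^2 * f

for label, cores, ghz, volts in [("8 fast cores", 8, 4.0, 1.3),
                                 ("80 slow cores", 80, 2.0, 1.0)]:
    gflops = peak_gflops(cores, ghz)
    power = relative_power(cores, ghz, volts)
    print(f"{label}: {gflops:.0f} GFLOPS, {gflops / power:.2f} GFLOPS per unit power")
```

Even with the lower clock, the eighty-core configuration comes out well ahead, both in raw throughput and in throughput per unit of power.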

However, all those cores require an incredible amount of memory bandwidth. Each core has a router component, which enables all the cores to talk to each other, but there’s no point in having all this raw computational power available to you if you can’t give it anything to do. Even next year’s Nehalem processor will see Intel move to an integrated memory controller for the first time, yet even that isn’t going to cut it for an array such as this.
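Intel didn’t go into how data actually finds its way across the array, but a common approach for a tiled mesh like this is dimension-ordered (XY) routing, where a message travels along its row first and then up or down its column. The sketch below is just an illustration of that general idea, not Intel’s implementation:

```python
# Sketch of dimension-ordered (XY) routing across a mesh of tiles -- one
# common way for cores in a tiled design to pass messages via per-tile
# routers. Illustration of the general technique, not Intel's design.

def xy_route(src, dst):
    """Return the list of (x, y) tiles a packet hops through, X first then Y."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                 # travel along the row first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                 # then along the column
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

# e.g. a core in one corner sending a result to a core near the opposite corner
print(xy_route((0, 0), (7, 9)))
```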

Instead, to give the cores the memory they need, it is physically laid on top of them, like a microprocessor–DRAM sandwich. The connecting material needs to be able to pass information through at high speed, using a technique that Intel rather functionally calls ‘Through Silicon Vias’. Copper bumps are placed on one side of the processor and the DRAM, and the two are melded together, enabling data to pass through at very high speed. The result is 1.62 Terabits/s of bandwidth.
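To put that headline figure in context, a quick back-of-the-envelope conversion – the even split across eighty cores is my own assumption, purely for illustration:

```python
# Back-of-the-envelope conversion of the quoted 1.62 Terabits/s figure.
# The even split across 80 cores is an assumption for illustration only.

total_bits_per_s = 1.62e12
total_gbytes_per_s = total_bits_per_s / 8 / 1e9
per_core_gbytes_per_s = total_gbytes_per_s / 80

print(f"Aggregate: {total_gbytes_per_s:.1f} GB/s")      # ~202.5 GB/s
print(f"Per core:  {per_core_gbytes_per_s:.2f} GB/s")   # ~2.53 GB/s
```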

Of course, you still need to get data into the chip, and today’s silicon-based materials simply aren’t fast enough. This is why Intel is working on optical interconnects, which can operate at a much higher frequency than the silicon we have today. These need a laser to drive them, the development of which Riyad went into in detail at the last IDF, and it’s still very much a technology of the future.

Once it’s a reality, what are we going to do with all of that computational power? Intel presented a few specific uses to which it has been putting this prototype. In general terms, a Teraflop chip is great for solving computationally intensive workloads for scientific and commercial applications, such as financial analytics, medical imaging and other types of data mining. Intel says that in the Tera-Scale era, computers will be able to recognise people, objects and data models, giving them a better grasp of the real world.
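As a toy example of the sort of embarrassingly parallel job such a chip is built for – my own illustration, not one of Intel’s demos – here’s a Monte Carlo estimate split across worker processes, the same shape of workload as the financial analytics and data mining Intel mentions:

```python
# Toy data-parallel workload: estimate pi by Monte Carlo, with the samples
# divided across worker processes. Purely illustrative; not an Intel demo.
import random
from multiprocessing import Pool

def count_hits(samples):
    """Count random points that land inside the unit quarter-circle."""
    rng = random.Random()
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(samples))

if __name__ == "__main__":
    workers, samples_per_worker = 8, 100_000
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [samples_per_worker] * workers))
    print("pi is roughly", 4 * hits / (workers * samples_per_worker))
```

Each worker churns through its share of the samples independently, which is exactly the kind of job that scales happily across dozens of cores.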

[Image: The DRAM is laid directly on top of the Tera-Scale core]

[Image: Once joined together, data can pass directly through the copper bumps]
