Tera-scale computing


I’m sitting in a hotel room in San Francisco in the early hours of the morning, which usually means just one thing: it’s IDF time once more. Although the main conference doesn’t start until today, Intel, as usual, rounded up all the press for a pre-IDF briefing yesterday afternoon. The main thrust of the briefing was Intel’s push towards parallelism. In fact, Intel went as far as to say “It (parallelism) is going to be the future of mainstream computing”. And you know what? I think Intel is right.

Parallel computing has been around for a very long time, but it was traditionally very expensive and very complicated. Back when I was working in the high-performance computing arena, I was involved with supercomputers from Cray and Convex, both of which employed multiple CPUs to create a parallel computing environment. Meanwhile, companies like Fujitsu were pushing the boundaries of massively parallel computing. The one thing all these machines had in common was that they would set you back millions, if not tens of millions, of pounds.

Now we’re moving into a new era, one where parallel processing is available to the masses, even if the masses don’t realise what they’re getting. Do the majority of consumers buying Core Duo and Core 2 Duo desktops and notebooks actually understand what a multi-core environment means? I doubt it, but that doesn’t change the fact that, as software develops, every consumer will get more benefit from multi-core machines.
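
To put a little flesh on that idea, here’s a minimal sketch, in Python and purely illustrative rather than anything shown by Intel, of what ‘harnessing multiple cores’ means to a programmer: the job is carved into independent pieces and handed to one worker process per core.

```python
# Illustrative sketch only: splitting an embarrassingly parallel workload
# across however many cores the machine has.
from multiprocessing import Pool, cpu_count

def sum_of_squares(chunk):
    # The work one core does: sum the squares of its share of the numbers.
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    numbers = range(10_000_000)
    cores = cpu_count()  # 2 on a Core 2 Duo, 4 on the quad-core parts due in November

    # Strided split: chunk i gets elements i, i + cores, i + 2 * cores, and so on.
    chunks = [numbers[i::cores] for i in range(cores)]

    # One worker process per core; the OS schedules them across the physical cores.
    with Pool(processes=cores) as pool:
        partial_sums = pool.map(sum_of_squares, chunks)

    print(f"Total across {cores} cores: {sum(partial_sums)}")
```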

And it’s not just PCs that are bringing multi-core processing to the mainstream. Let’s not forget that Microsoft’s Xbox 360 and Sony’s forthcoming PlayStation 3 both use multi-core CPUs. As these consoles mature and software developers learn to harness all the processing power on offer, consumers will see every aspect of their video games improve.



With dual-core chips already in the mainstream and quad-core chips set to hit the streets in November, Intel was keen to talk about “many-core” solutions, or to put it another way, massively parallel computing. The key to this massively parallel ideal is Intel’s Tera-scale project. Although Intel’s future many-core chips are a very important part of the Tera-scale vision, they’re not the only part, and Intel is aware that it has to crack other areas as well.

There are three parts to the Tera-scale ideal…
Teraops – many-core CPUs delivering massive amounts of data processing.

Terabytes – huge amounts of memory bandwidth to service all of those CPU cores.

Terabits – lightning-fast I/O to avoid a bottleneck as soon as the computation is done.



Intel has cracked the first part already – there was a physical wafer on show, packed full of 80-core chips. These massively parallel 80-core CPUs can produce a Teraflop of performance – that’s the kind of performance you would have paid millions of pounds for not too long ago.

[Image: A physical wafer on show, populated with 80-core chips]
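
Intel didn’t break down the arithmetic behind that number at the briefing, but a rough back-of-the-envelope sketch shows how 80 relatively simple cores can add up to a Teraflop. The per-core clock speed and operations-per-cycle figures below are illustrative assumptions, not published specs.

```python
# Back-of-the-envelope peak throughput. The clock and FLOPs-per-cycle values
# are assumptions for illustration, not Intel-published figures.
cores = 80
clock_hz = 3.1e9        # assume each simple core runs at roughly 3.1GHz
flops_per_cycle = 4     # assume, say, two fused multiply-adds per core per cycle

peak = cores * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak / 1e12:.2f} Teraflops")   # roughly 1 Teraflop
```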

But how do you feed all those cores with enough memory bandwidth? Intel’s answer to this conundrum is die stacking – basically, stacking physical memory directly below the CPU die and wrapping both layers into a single package. This will provide the Terabytes of memory bandwidth necessary to service all those CPU cores.
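
The same sort of rough arithmetic shows why the memory system has to scale along with the cores. The bytes-moved-per-operation ratio below is my own illustrative assumption rather than an Intel figure.

```python
# Illustrative only: bandwidth needed to keep a Teraflop of compute fed,
# for an assumed ratio of off-chip bytes moved per floating-point operation.
peak_flops = 1.0e12      # one Teraflop, as claimed for the 80-core chip
bytes_per_flop = 1.0     # assumption: each operation touches about 1 byte of off-chip data

bandwidth_needed = peak_flops * bytes_per_flop
print(f"Bandwidth needed: {bandwidth_needed / 1e9:.0f} GB/s")  # around 1,000 GB/s

# A front-side bus today delivers on the order of 8 to 10GB/s, roughly two
# orders of magnitude short, which is why Intel wants the memory stacked
# directly under the die, where the connection can be far shorter and wider.
```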

