
Larrabee Introduction


If you haven't heard of Larrabee before reading this then, on behalf of all of us, welcome back from whatever planet the aliens abducted you to. Intel's impending entry into the GPU market has been rumoured and talked about since late 2006, but up until now precious little concrete information has actually been revealed. The only solid detail available was that Larrabee would comprise multiple x86 (Intel Architecture, IA) compliant cores, making it capable of executing 'normal' code as well as running both the DirectX and OpenGL graphics APIs.

With Intel's own developer forum, IDF, just around the corner and the broader, entirely graphics-centric SIGGRAPH (Special Interest Group on GRAPHics and Interactive Techniques) conference starting on the 11th of this month, Intel has finally given us, the general public, further information about Larrabee's architecture. The full paper is being presented at SIGGRAPH and will appear at this link once available on the 12th of August.
[Slide: CPU/GPU convergence]
Architecturally, Intel is making a massive departure from traditional GPU design with Larrabee. Annoyingly, there's a lot of information on slides within the paper provided to us by Intel that we aren't able to publish (although we can at least write about said details) and even more that Intel won't talk about yet. Nonetheless, compared to the next-to-no information we've had up until now, getting anything out of Intel is great.

Back in the early days of computer graphics, there was no such thing as a graphics card; everything was rendered in software by the CPU. As far as the average (if pretty loaded) consumer was concerned, that changed in 1996 with the release of the Voodoo 1 from 3dfx. For the first time you didn't need to buy a new CPU (and usually, as a consequence, a whole new PC) to improve your gaming experience; you just slotted a PCB into a PCI slot and away you went.

This move to discrete 3D graphics brought with it a divergence in architecture. CPUs carried on as they always had, optimised to run many different kinds of task reasonably fast at the same time. GPUs, conversely, became increasingly fixed-function and geared towards raw throughput. However, not all games are coded equally, which means that in all but the ideal scenario parts of the fixed-function pipeline sit idle, waiting for something to do, while other sections are overloaded with work.

This problem was partially solved in 2006, with the release of nVidia's G80 chip, the GeForce 8xxx-series and, a bit later, AMD's R600-based 2xxx-series. The stream-processor design these chips and their successors use means the same pool of processing units can be reassigned on-the-fly to whichever stage of the rendering pipeline needs them most.
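To make the load-balancing advantage concrete, here's a toy sketch (not real GPU code, and the workload numbers are invented for illustration) contrasting a fixed-function pipeline, where each stage has a hard-wired number of units, with a unified stream-processor design that assigns one shared pool of units to whichever stage has work:

```python
def fixed_function_time(work, units_per_stage):
    """Each stage has dedicated units; the busiest stage sets the pace
    while units in the other stages sit idle."""
    return max(work[stage] / units_per_stage[stage] for stage in work)

def unified_time(work, total_units):
    """A shared pool of units is balanced across all stages on-the-fly."""
    return sum(work.values()) / total_units

# A vertex-heavy frame: lots of geometry, comparatively little pixel work.
work = {"vertex": 900, "pixel": 300}

# Fixed design: 4 vertex units, 8 pixel units (12 units in total).
print(fixed_function_time(work, {"vertex": 4, "pixel": 8}))  # 225.0

# Unified design: the same 12 units go wherever the work is.
print(unified_time(work, 12))  # 100.0
```

With the same twelve units, the fixed layout is bottlenecked by its four vertex units while the pixel hardware idles; the unified design finishes the frame in less than half the time, which is precisely why the balance between stages no longer has to be guessed at chip-design time.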

These trends towards massive parallelism and programmable, rather than fixed-function, processing have led GPU architecture on an interesting convergence course with that of modern CPUs. Top-end Nehalem chips will offer eight cores, each capable of running two simultaneous threads, and from what we hear from Intel, the future will see CPUs tending towards many-core architectures, rather like a GPU.
