
nVidia GeForce GTX 280 - GT200: GPGPU Architecture and Other Features

By Edward Chester

Summary

Our Score: 8

One of nVidia's big marketing campaigns this year has revolved around raising the profile of general-purpose computing on the GPU (GPGPU): the task of performing non-3D-graphics computation on a graphics chip. So, with the launch of GT200, nVidia was keen to emphasise the superior GPGPU abilities of its latest chip.

GPUs in general are ideally suited to parallel computing tasks, like image manipulation and video conversion, because all those shaders can also be put to task as mini CPUs. Individually they may pale in comparison to a proper CPU but when you've got 240 of them, as you have in GT200, the sheer brute-force number of shaders will easily outperform any CPU. The big problem at the moment is that writing software to take advantage of parallel processing, and particularly parallel processing on a GPU, is very difficult. This is what prompted nVidia to start working on its CUDA Software Development Kit (SDK), which Hugo recently talked about, and which makes GPGPU programming considerably easier for the coder.
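The kind of workload that suits this model is one where the same small operation is applied independently to millions of elements - exactly what image manipulation is. As a rough illustration of the pattern (a plain-Python sketch using CPU processes rather than CUDA; the per-pixel function here is invented for illustration):

```python
from multiprocessing import Pool

def brighten(pixel):
    """A trivially parallel per-pixel operation - the kind of independent
    work that maps naturally onto hundreds of GPU shaders."""
    return min(pixel + 40, 255)

if __name__ == "__main__":
    pixels = [10, 100, 200, 250]
    # Each element is processed independently, so the work can be
    # spread across however many workers (or shaders) are available.
    with Pool() as pool:
        result = pool.map(brighten, pixels)
    print(result)  # [50, 140, 240, 255]
```

On a GPU the same shape of program would be launched as one thread per pixel rather than a handful of CPU processes, which is where the 240-shader brute force comes in.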

As well as developing CUDA as a general programming platform, though, nVidia also recently bought Ageia, the company behind the PhysX physics engine and processor, and integrated its technology into the CUDA SDK. This means that nVidia GPUs can now be put to task creating life-like physical effects as well as life-like visual effects.

Also, with CUDA supported by all nVidia's GPUs since the 8800 GTX, it now has a massive installed base of 70 million. This has prompted some pretty big-name developers to sit up and pay attention to CUDA, including Adobe, which will be using GPU acceleration in upcoming versions of Photoshop and Premiere.

Of course, AMD has also been developing its own competitor to CUDA, in the form of its Close To Metal (CTM) SDK. However, this has seen significantly less enthusiastic uptake. Even so, with AMD planning to support the Havok physics engine and not yet jumping on board with PhysX, the true state of play with regard to GPGPU is all very up in the air, and personally I'd take the whole thing with a pinch of salt for the time being. That said, for those who are interested, GT200 makes some significant improvements over nVidia's previous efforts.

Looked at with GPGPU in mind, GT200 takes on the following guise: the TPCs become mini 24-core processors, each with its own small stores of cache memory; a thread scheduler distributes the massive compute load between the various TPCs; and the frame buffer memory acts like main system memory.

Now, in its briefings nVidia went into a lot of detail about why GT200 is better at GPGPU than every nVidia GPU that came before. However, much of the improvement is simply down to the basic increase in processing units rather than any grand new design. The result is an increase from 518 GigaFLOPs of processing power on G80 to 933 GigaFLOPs on GT200.
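Those headline numbers fall straight out of the unit counts and clock speeds. A quick sanity check (the shader clocks - 1,350MHz for G80, 1,296MHz for GT200 - and the three-FLOPs-per-clock figure are taken from the cards' published specifications, not from this article):

```python
# Peak single-precision throughput = shaders x shader clock x FLOPs per clock.
# 3 FLOPs/clock = 2 for the MAD (multiply-add) plus 1 for the co-issued MUL.

def peak_gflops(shaders, clock_mhz, flops_per_clock=3):
    """Theoretical peak throughput in GFLOPs."""
    return shaders * clock_mhz * flops_per_clock / 1000.0

g80 = peak_gflops(128, 1350)    # 8800 GTX: 128 SPs at 1,350MHz
gt200 = peak_gflops(240, 1296)  # GTX 280: 240 SPs at 1,296MHz

print(f"G80:   {g80:.0f} GFLOPs")    # ~518 GFLOPs
print(f"GT200: {gt200:.0f} GFLOPs")  # ~933 GFLOPs
```

In other words, the 80 per cent jump in peak FLOPs comes almost entirely from having 240 shaders instead of 128, slightly offset by the lower shader clock.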

That said, there are a few architectural improvements as well. First, thread scheduling has been improved to allow dual-issue MAD+MUL operations to be performed more efficiently. Also, double-precision (64-bit) calculations are now supported, though these rely on 30 dedicated double-precision processors (one per SM) rather than the SPs themselves, resulting in double-precision performance that is one twelfth that of single-precision (32-bit). Four 'Atomic' units have also been added. These are designed to handle particular atomic read-modify-write commands with direct access to memory, rather than utilising the chip's own caches.
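The one-twelfth figure follows from the unit counts, on the assumption (mine, not stated above) that each double-precision unit manages one MAD - two FLOPs - per clock, while each SP manages a MAD plus a co-issued MUL - three FLOPs - per clock:

```python
# GT200 per-clock throughput, single vs double precision.
# 240 SPs x 3 FLOPs/clock versus 30 dedicated DP units x 2 FLOPs/clock
# (a DP MAD with no co-issued MUL) - the per-clock figures are assumptions.

sp_per_clock = 240 * 3  # 720 single-precision FLOPs per clock
dp_per_clock = 30 * 2   # 60 double-precision FLOPs per clock

ratio = sp_per_clock / dp_per_clock
print(f"SP:DP throughput ratio = {ratio:.0f}:1")  # 12:1
```

Both unit types run on the same shader clock, so the clocks cancel out and the ratio is purely a matter of how many operations each pool can retire per cycle.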

All told, though, it's a lot of stuff that involves very complicated mathematics, and optimisations thereof, and right now little of it is relevant to the general consumer. When mainstream GPGPU applications begin to hit the shelves we'll come back to these issues and see what difference they really do make.

So, with all that theory out of the way, let's look at the first consumer card based on GT200, the GTX 280.

Exitialis

June 26, 2008, 3:33 am

"What can we say about Counter-Strike: Source that hasn't been said before? It is simply the benchmark for team-based online shooters and, four years after its release, it's still the most popular game in its genre."

I would argue against that one; a quick look at www.xfire.com shows COD4 at about 9 mil, COD2 at 5 mil and CSS at 2 mil minutes played today.

Otherwise a very interesting article, the only thing it makes me sad about is the size of my wallet :)

Ed

June 26, 2008, 1:19 pm

Okay, it's obviously taken a hit in recent years. I'll amend that line. Still doing damn well for such an old game, though.

intex

June 27, 2008, 6:20 am

Well, Call of Duty is a full production game, with a single-player mode and many different versions of multiplayer... CS is a mod, not a stand-alone game, and there is only one mode of play (i.e. there is no capture the flag or free-for-all mode).

Ed

June 27, 2008, 7:17 pm

Sorry, I'm not sure what your point is, Intex?

life

June 27, 2008, 11:16 pm

I thought this was a fantastic review Ed, very in-depth and informative. Looking forward to a GTX 260 review, as there is no way I'm shelling out 400 quid on a graphics card! The 260s seem to be going for £260-300 at the moment... which is still very high in my book, but tempting given the potential performance gains over my current 7900 GT KO.

P.S. Ed - I think Intex was replying to Exitialis, justifying the currently lower usage stats of CS:S.

Varis Vitols

July 3, 2008, 4:25 pm

life said on 27th June 2008

In that case, why don't you have a look at the Radeon HD 4870? It outperforms the GTX 260 in almost every case, particularly with AA enabled - by 20 per cent. In many cases it comes very close to the GTX 280 with AA enabled, but costs only 185-230 pounds at online stores, depending on the manufacturer.

Roman

July 3, 2008, 11:43 pm

What are your comments on the explosive heat and noise that the GTX 280 generates?

Flight Instructor

March 27, 2009, 6:49 pm

I like your reviews, in particular when they refer to games hardware, but I only use flight simulators. Is it possible to include this type of game when reviewing hardware?
