Leadtek WinFast PX8800 GTX TDH Review
The G80 processor works very differently to previous generations. Take the GeForce 7900 GTX as an example – it has 24 pixel shader pipelines and eight vertex shaders, and each of these units is designed to do one specific task and nothing else. Naturally, this 24/8 split is based on analysing games and how their workloads are distributed. However, there will be scenarios where the pixel pipelines are under-utilised while the vertex shaders are working overtime, and vice versa.
This leaves the chip running inefficiently because of internal bottlenecks. Wouldn’t it be nice if unused units could lend a hand in other areas?
That is where the G80 architecture comes in. Instead of fixed-function units, it has 128 “streaming processors”, each of which can take on whatever work is required. They can act as pixel shaders, vertex shaders or geometry shaders, or even perform other tasks such as physics processing. This is what is known as a “unified shader architecture”.
In a gaming environment, this should mean that every unit is working on the scene; it is purely a matter of distributing the work. With all the units kept busy, internal bottlenecks should be greatly reduced and frame rates should rise dramatically.
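The utilisation argument above can be illustrated with a toy sketch. This is not how real GPU scheduling works – the unit counts are simply the 24/8 and 128 figures from the text, and each “cycle” is an invented abstraction – but it shows why a unified pool retires more work when the pixel/vertex mix is uneven:

```python
# Toy model (hypothetical, not real GPU scheduling): how many work items
# can be retired in one abstract "cycle" given an uneven mix of work?

def fixed_throughput(pixel_work, vertex_work, pixel_units=24, vertex_units=8):
    # Fixed-function design: each unit type only handles its own kind of work,
    # so spare pixel pipelines cannot help with a vertex backlog.
    return min(pixel_work, pixel_units) + min(vertex_work, vertex_units)

def unified_throughput(pixel_work, vertex_work, total_units=128):
    # Unified design: any streaming processor takes any work,
    # so only the total unit count limits throughput.
    return min(pixel_work + vertex_work, total_units)

# A vertex-heavy scene: on fixed hardware the vertex shaders bottleneck
# while most pixel pipelines sit idle.
print(fixed_throughput(pixel_work=10, vertex_work=40))    # 18 items retired
print(unified_throughput(pixel_work=10, vertex_work=40))  # 50 items retired
```

The unified pool simply absorbs whatever mix the scene throws at it, which is the whole point of the design.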
Although this simplifies the difference from card to card, it does introduce another point of confusion – a second clock speed. The core of the 8800 GTX runs at 575MHz, but the streaming processors run on a separate clock of 1.35GHz. Previous cards also used multiple clock domains, but only now has nVidia made them public (and adjustable).
The GeForce 8800 GTX has 768MB of GDDR3 memory, running at 900MHz (1,800MHz effective) on a 384-bit interface. That works out to an astonishing 86.4GB/s of bandwidth, with a fairly sizeable frame buffer to store data in too. It’s interesting to see nVidia hasn’t chosen to move to GDDR4, but evidently it doesn’t see that as necessary yet. Naturally, the G80 processor has support for it.
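The bandwidth figure follows directly from the numbers quoted above – bus width in bytes multiplied by the effective data rate:

```python
# Memory bandwidth from the review's figures:
# 384-bit bus at a 1,800MHz effective data rate (900MHz GDDR3, double data rate).
bus_width_bits = 384
effective_clock_mhz = 1800

bytes_per_transfer = bus_width_bits / 8                      # 48 bytes per clock
bandwidth_gb_s = bytes_per_transfer * effective_clock_mhz / 1000

print(bandwidth_gb_s)  # 86.4 (GB/s)
```

For comparison, the same arithmetic on the 7900 GTX’s 256-bit bus at 1,600MHz effective gives 51.2GB/s, so the wider 384-bit interface is a substantial jump.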
nVidia has stayed with the proven 90-nanometre process for G80. Combined with all the new technology, this makes for a large chip and therefore a large board. As you can see above, it’s bigger than the already large 7900 GTX – in fact, it’s longer than the motherboard we used for testing, which may cause problems in a number of cases. Most of the extra length is taken up by power regulation circuitry for the power connectors, and it’s quite bulky too. However, the cooling solution that nVidia has employed, and which Leadtek uses here, is superb. Despite the heat the chip must be putting out, the fan always ran quietly – a major boon for anyone wanting to build a system around one or even two of these.
You might also notice an extra SLI connector. We saw the same thing on ATI’s X1950 Pro, which follows the same principle: with two connectors, there is bandwidth in both directions, and more of it.
With the chip redesigned from the ground up, nVidia has added support for simultaneous HDR and FSAA – an area where ATI previously stood alone. On top of this, overall image quality has been improved considerably. As well as the normal timedemo testing, I spent the best part of a day examining image quality in top games, and without a doubt there is a huge improvement in this area.