The release of the G80-based nVidia GeForce 8800 GTX in November 2006 was, in hindsight, a paradigm shift in the computer graphics world. Not only was it the first DirectX 10 graphics card, but it also completely outclassed the competition in DirectX 9 games before Windows Vista even arrived, and it held that lead for an unprecedented amount of time once Vista and DirectX 10 did finally turn up. Indeed, not until February of this year, when AMD released the ATI Radeon HD 3870 X2, which used two RV670 cores on one card, did the G80 in its various guises face any real competition.
Not that this competition lasted long. Within a few weeks nVidia released the dual-G92 9800 GX2, which used a similar two-chips-on-one-card approach to the HD 3870 X2, and comfortably regained the top performance crown - at least in the games it worked with. nVidia then followed this up with the 9800 GTX, which used a single G92 chip to marginally extend nVidia's lead in the single-chip graphics card market. Of course, ATI still had many good cards and competed very fiercely in the mainstream sub-£150 market, but it just couldn't claim the top spot.
Still, while nVidia held onto its lead with the 9800 series cards, it didn't really push any frontiers forward. Performance was good but not overwhelming and, while new features like HybridPower were useful, the whole range felt a little disappointing.
Just a few months later, though, nVidia has launched a brand new graphics processor called GT200 that, at least on paper, looks like it has all the performance necessary to be a true successor to G80. Made up of 1.4 billion (yes, that's billion with a 'B') transistors, and packing 240 stream processors, 32 ROPs, a 512-bit memory interface, and a whole host of other under-the-bonnet improvements, it is an absolute monster. In fact, as it is still made on the same 65nm process as G92, it is a monster not just internally but externally too - at 24 x 24mm (576mm²) this is the biggest single die TSMC has ever commercially produced.
Indeed, on a typical 300mm production silicon wafer there is room for at most 94 GT200 chips. Compare this to something like Intel's Conroe CPUs, which are built on the same manufacturing process but are only 143mm² in size - so 426 dies can be produced per wafer - and you get some idea of just how large and expensive GT200 is.
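For the curious, those dies-per-wafer figures can be sanity-checked with the standard approximation (gross dies by area, minus an edge-loss correction). This is a simplified sketch that ignores scribe lines, edge exclusion zones and defect yield, so it lands a little high for the smaller Conroe die:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic dies-per-wafer approximation: wafer area divided by die
    area, minus a correction term for partial dies at the wafer edge."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# GT200: 24 x 24mm = 576mm² on a 300mm wafer
print(dies_per_wafer(300, 576))   # 94, matching the figure above

# Conroe: 143mm² -- the approximation gives ~438; the quoted 426
# reflects real layout overheads this simple formula ignores
print(dies_per_wafer(300, 143))
```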
Two variants of the GT200 will be available at launch, and these will be the first parts to carry nVidia's revised branding. The rebranding boils down to little more than switching the letters and numbers around, so the new cards are called GTX 280 and GTX 260 rather than following the x000 GT/GTX/GTS arrangement we're used to.
The GTX 280 will use the full extent of GT200, with its shader clock running at 1296MHz, 1GB of GDDR3 memory running at 1107MHz (2.2GHz effective), and the rest of the chip purring away at 602MHz. As all this demands 236W, the GTX 280 will need not only a conventional six-pin PCI-Express power connector but an extra eight-pin one as well.
Meanwhile, the GTX 260, which will be released on the 26th of June (the GTX 280 will be available by the time you read this), has two SM clusters disabled (I'll explain more about this on the next page) and one ROP partition removed. On top of this, the clock speeds have been throttled, resulting in vital statistics of 192 shaders running at 1242MHz, 28 ROPs running at 576MHz, and 896MB of GDDR3 memory at 1000MHz (2GHz effective). As a result of these cuts, the GTX 260 will draw less power - 183W to be exact - and consequently needs only a single six-pin PCI-Express power connector.
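To put those memory specifications in perspective, peak memory bandwidth is simply the bus width multiplied by the effective (double data rate) clock. A quick sketch follows - note that the GTX 260's 448-bit bus width is our inference from the 896MB frame buffer and the removed ROP partition, not a figure quoted above:

```python
def bandwidth_gb_s(bus_width_bits: int, mem_clock_mhz: float) -> float:
    """Peak memory bandwidth in GB/s for GDDR3, which transfers
    data twice per clock (double data rate)."""
    bytes_per_transfer = bus_width_bits / 8
    return 2 * mem_clock_mhz * 1e6 * bytes_per_transfer / 1e9

print(bandwidth_gb_s(512, 1107))  # GTX 280: ~141.7 GB/s
print(bandwidth_gb_s(448, 1000))  # GTX 260 (assumed 448-bit bus): 112.0 GB/s
```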
List pricing is as astronomical as you'd expect, with the GTX 280 demanding £449 and the GTX 260 £299. What's more, early indications suggest stores won't deviate from this pricing much, either by running discounts or by trying to undercut each other. Never mind, eh.
We'll take a proper look at the GTX 260 in a separate article, and we'll have a poke around the physical GTX 280 card in a few moments, but first let's look at what makes nVidia's latest graphical wonder tick.