
AMD Radeon HD 2900 XT Review


Key Specifications

  • Review Price: £270.20

The last really big Radeon launch event was back in October 2005 – back in the days when Radeons were still ATI products rather than AMD ones. The X1000 launch was hosted in Ibiza during the closing weekend, and it was one of the better press events of its era! This time around, AMD chose a North African location to show off its latest graphics products, shipping technology journalists from all over the world to the city of Tunis in Tunisia.


Unfortunately we were a bit too far away from Tatooine to visit the most wretched hive of scum and villainy for a quick drink, but I did manage to nip over to Carthage to see the ruins before having to sit through two solid days of PowerPoint presentations!


A trip to Carthage injected some history and culture into the Radeon HD 2000 launch event.


Like the X1000 launch before it, the HD 2000 launch involved a whole range of products from entry level up to the high end. In fact, AMD announced no fewer than ten new graphics solutions across desktop and mobile platforms.


Considering that nVidia stole a march on AMD last year with the release of its GeForce 8 series of DirectX 10 GPUs, the Radeon HD 2000 series has a lot resting on its shoulders – although nVidia’s initial inability to produce stable Vista drivers has definitely helped AMD’s cause!


As you’ve probably noticed already, AMD has dropped the traditional X prefix from the latest generation of Radeon cards, replacing it with HD. It doesn’t take a genius to figure out that the HD stands for High Definition, and since high definition is probably the most popular buzz phrase of the moment, it’s not a bad transition. Of course the HD branding also relates to AMD’s already established and well received Avivo video processing engine, but I’ll cover that in more detail a bit later.


Despite the fact that AMD announced the full complement of ten products under the HD 2000 moniker, only the top-end Radeon HD 2900 XT card was actually handed out to the press for testing and evaluation, so that’s what I’ll be concentrating on in this feature.

The Radeon HD 2900 XT is an immensely complex piece of hardware, with no fewer than 700 million transistors squeezed onto the die. Although AMD was keen to talk about how the HD 2000 series of chips will be based on a new 65nm process, the first GPU to break cover is actually built on an 80nm manufacturing process. I have no doubt that the HD 2900 XT will eventually transition to 65nm, and when that happens we could see increased clock speeds and lower power draw. As things stand, the GPU has a core clock speed of 742MHz, and the fact that the lower specced 65nm HD 2600 XT card has a core frequency of 800MHz lends weight to the idea that we can expect higher clocks when the HD 2900 XT drops to a 65nm process.

With the X1000 series of Radeon cards ATI introduced its Ring Bus Memory Controller, which potentially allowed unprecedented memory bandwidth, assuming that you had fast enough memory to take advantage of it. Now AMD has evolved the Ring Bus controller and equipped the HD 2900 XT with an astonishing 512-bit memory interface – far wider than anything seen on a consumer graphics card to date.


Considering the super-fast memory interface, it’s somewhat perplexing that AMD has decided to equip the first batch of HD 2900 XT cards with only 512MB of memory, when nVidia is already putting 640MB on its 8800 GTS parts and 768MB on the top-end 8800 GTX parts. Even more confusing is that AMD/ATI delivered the first 1GB graphics card with the workstation incarnation of the X1800 XT – the FireGL V7350.


When questioned about the modest memory complement, AMD told me that board partners can specify as much memory as they like on their boards. Unfortunately that isn’t quite the case, since the first batch of cards will be reference boards manufactured by AMD, so the board partners will have to take what they’re given. That said, I did see sample cards out in Tunisia with 16 memory chips on them, which would indicate 1GB rather than 512MB. AMD was, however, honest enough to tell me that the decision to go with only 512MB at launch was made in order to hit a certain, lower price point.

The initial samples of the HD 2900 XT are equipped with GDDR3 memory, although you can expect a transition to GDDR4 at some point. The chips on the reference board run at 825MHz (1,650MHz effective) – that’s quite high for GDDR3, but the upper limit of GDDR3 frequency seems to be rising all the time, so there’s likely to be some headroom in there for overclockers.
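Combine that 512-bit interface with the 1,650MHz effective memory clock and you can work out the card’s theoretical peak bandwidth. A quick back-of-the-envelope calculation, using only the figures quoted above:

```cpp
#include <cstdio>

// Peak memory bandwidth = (bus width in bytes) x (effective memory clock).
// Both figures are the HD 2900 XT reference specs quoted in this review.
int main() {
    const double bus_width_bits = 512.0;       // Ring Bus memory interface
    const double effective_clock_hz = 1650e6;  // 825MHz GDDR3, double data rate
    const double bandwidth_gbs = (bus_width_bits / 8.0) * effective_clock_hz / 1e9;
    std::printf("Peak memory bandwidth: %.1f GB/s\n", bandwidth_gbs);  // ~105.6 GB/s
    return 0;
}
```

That works out at roughly 105.6GB/s – comfortably ahead of any other consumer card on the market right now.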


The heart of the HD 2900 XT chip is an array of 320 stream processing units, which can turn their hand to pretty much anything. AMD has concentrated on efficient execution with the HD 2900 XT, rather than employing a bulldozer approach. The key is ensuring that each and every one of the superscalar stream processors is being used to maximum efficiency, in order to render each frame without wasting any resources. One of the keys to this efficient operation is a switch to a unified shader model…



The Radeon X1950 refused to enable anti-aliasing in Company of Heroes.

The AMD HD 2900 XT is a very impressive piece of hardware that shows real innovation. Features like the tessellation engine will surely bring a huge benefit to real time rendering, and will no doubt become part of the DirectX 10 API at some point. Whether we’ll see any games making use of this feature before nVidia adopts it too remains to be seen, but AMD should definitely be congratulated for bringing cutting edge features to market.


It’s also good to see that AMD has finally made CrossFire a simple solution. Gone is the horrible Master Card and Slave Card model; now you can just match any two cards from any vendor and run them in CrossFire using internal bridges – much like SLi, you might say.

But I just can’t get my head around the fact that AMD/ATI has conceded the performance crown this time around. I’ve been covering the leapfrog game of graphics hardware for so long that it just seems natural for nVidia to launch a card, and then ATI to edge ahead with its launch a few months later. At the very least I thought that AMD/ATI would be looking to match the GeForce 8800 GTX, but it would appear not.


Of course AMD is putting a lot of faith in DirectX 10 performance, and once some DX10 games hit the market we’ll put both AMD and nVidia hardware through their paces again. Until then though, with today’s games at least, nVidia definitely commands the high ground.

And then there’s the issue of price. As I mentioned earlier, AMD told me that the graphics memory had been limited to 512MB in order to hit the desired price point. However, looking around the web, the cheapest Radeon HD 2900 XT I could find will set you back £270, while the cheapest 640MB GeForce 8800 GTS costs only £213! That’s a pretty significant price gap, especially for two cards that perform so similarly.


I like the Radeon HD 2900 XT, I really do. The underlying technology inherent in this card shows that the engineers at AMD are still working overtime to create efficient and effective 3D hardware, rather than just going for the brute force approach. However, when you consider that a GeForce 8800 GTS will cost you considerably less and give you very similar performance, it’s hard to recommend this latest Radeon. All I hope is that once the launch frenzy is over, the price will drop in line with nVidia’s part, and then the choice between the two will be far tougher.

The biggest surprise at the event in Tunisia was AMD’s admission that nVidia was set to retain the performance crown. With the Radeon HD 2900 XT pitched against the GeForce 8800 GTS, it appeared that AMD was happy for the GeForce 8800 GTX to remain the fastest graphics card available. Bizarrely, AMD announced that it considered HD 2900 XT CrossFire to be the natural competitor to the 8800 GTX, but that surely meant that there would be no competitor to 8800 GTX SLi!


Interestingly, it was during the briefing on AMD’s CPUs that the answer to the above conundrum slipped out – quad CrossFire. It seems that AMD will be offering a quad-GPU solution in the near future, although everyone was very tight lipped when it came to details. Whether this means four graphics cards rendering frames, or whether one of them will be utilised for physics remains to be seen, but you can be sure we’ll be badgering AMD for this hardware as soon as it’s officially announced.


Although the HD 2900 XT is clearly geared towards DX10 performance, there aren’t any viable DirectX 10 benchmarks available yet. Of course AMD supplied a DX10 benchmark based on the game Call of Juarez, but we never use benchmarks that are supplied by one particular manufacturer. Likewise, today nVidia and Capcom announced a DX10 benchmark based on Lost Planet, but I’m not willing to use that for the same reasons.


We will of course do DX10 benchmarking as soon as independent DirectX 10 games become readily available, but for now we’ve had to stick with our usual suite of DX9 benchmarks. Despite the fact that Richard Huddy stated categorically that the DirectX 9 driver overhead was “like being slapped by a woman”, he also insisted that the HD 2900 XT’s DX9 performance was still first rate.

Kicking off with Company of Heroes, it’s clear that the HD 2900 XT will happily hold its own against a 640MB 8800 GTS. It’s also interesting to see that the Radeon scales very well, getting pretty close to an 8800 GTX when the resolution is pushed up to 2,560 x 1,600. CrossFire didn’t scale quite so well, dropping some way behind twin 8800 GTX cards at 2,560 x 1,600 with 4x anti-aliasing, although a score of over 65fps is still pretty impressive at this resolution.


Next up is Prey, which I assumed would favour nVidia hardware since it uses OpenGL rather than DirectX. That assumption is borne out at the lower and even mid resolutions and settings, but once again the HD 2900 XT shows excellent scaling and comes close to the 8800 GTX at 2,560 x 1,600, although when 4x AA and 8x AF are switched on the Radeon falls behind. Of particular note is how ineffective CrossFire was running Prey, although to be fair these are very early drivers and I was surprised to see CrossFire running as well as it did.

Call of Duty 2 is a very texture-heavy game, which is why the GeForce 8800 GTX puts so much distance between itself and every other card on test. Interestingly, here the older Radeon X1950 XT teaches its new sibling a lesson or two by staying ahead of the HD 2900 XT in many of the tests. That said, the HD 2900 XT still manages to stay ahead of the 640MB 8800 GTS in the majority of tests. Unfortunately we couldn’t get CrossFire to work with CoD2, despite many hours of trying – hopefully a future driver revision will solve this.


So, it would seem that AMD was pretty much spot on when it said that the Radeon HD 2900 XT would be competing with the GeForce 8800 GTS 640MB. Those two cards seem to dance around each other in most benchmarks, although the Radeon does appear to scale better as the resolution rises. Of course it’s worth remembering that the HD 2900 XT is using first generation drivers, so one would hope that we’ll see significant performance gains over the coming months, especially where CrossFire is concerned.

Five of the ten new products fall under the Mobility Radeon range, with the line up kicking off with the Mobility Radeon HD 2300. AMD has labelled the HD 2300 as the entry-level mobile solution, and it’s easy to see why. First up, this is the only chip in the HD 2000 series line up that isn’t DX10 compatible – it’s limited to DX9c. Also, whereas every other Mobility Radeon product is based on a 65nm process, the HD 2300 is manufactured using a 90nm process.


The core frequency on the HD 2300 ranges from 450 – 480MHz, while the memory is clocked from 400 – 550MHz. The HD 2300 will be offered with both 64 and 128-bit memory interfaces, and despite its dated spec, it still sports the Unified Video Decoder for playback of Blu-ray and HD DVD movies.


The Mobility Radeon HD 2400 and 2400 XT represent the first real step forward in AMD’s mobile graphics solutions. These are 65nm parts that are fully DX10 compliant. The core clock ranges from 350 – 450MHz, depending on the variant. Memory is clocked at 400 – 500MHz, while the memory interface is limited to 64-bit.


AMD is aiming the Mobility Radeon HD 2400 series at thin and light notebooks, where power efficiency is as important as performance. AMD claims that the 65nm process will improve battery life by 25 per cent, which will definitely keep mobile users happy.


The Mobility Radeon HD 2600 addresses the mobile performance user, but not necessarily the gaming enthusiast. With a core clock ranging from 400 – 500MHz and memory clocked at 550 – 600MHz with either a 64 or 128-bit interface, the MR HD 2600 doesn’t look like a massive step up from the HD 2400 XT.


The Mobility Radeon HD 2600 XT, on the other hand, seems worthy of its gaming enthusiast target audience. Core clocks range from 600 – 700MHz, while memory speeds range from 700 – 750MHz with a 64 or 128-bit interface. I should be getting a notebook with a Mobility Radeon HD 2600 series graphics solution in the lab soon, so check back for some comparative performance testing against a mobile GeForce 8600.

Along with the Radeon HD 2900 XT, AMD is launching nine other products in the HD 2000 series line up. The full range of ten is split evenly between desktop and mobile parts, which means that there are four other desktop graphics cards to complement the top-end HD 2900 XT.


The range kicks off with the Radeon HD 2400, which will ship in both Pro and XT variants. Like the HD 2900 XT, the HD 2400 is 100 per cent DirectX 10 compliant, but considering the modest spec, don’t expect to be playing the latest games at anything but the most basic settings.

Both the HD 2400 cards have only 40 stream processors and a maximum of 256MB of memory. Of more concern is that the memory interface is limited to 64-bit, which will severely limit memory bandwidth. The core frequency on the HD 2400 Pro will be 525MHz, while the XT will manage a far more respectable 700MHz. Memory will run at 400MHz (800MHz effective), but some cards will ship with DDR2 rather than GDDR3 chips.


The good news is that the HD 2400 series still has full HDMI and HDCP support, as well as the ability to offload HD video decoding completely to the GPU. Add to this the fact that AMD was keen to show HD 2400 cards with passive heatsinks installed, and it’s clear that this could be a good option for a low power, low noise Media Center PC.


Next up is the Radeon HD 2600 series, which also ships in both Pro and XT flavours. This is the midrange option that will go head to head with nVidia’s GeForce 8600 cards. The number of stream processing units rises to 120, while the core clocks are 600MHz for the Pro and 800MHz for the XT. It’s interesting that the HD 2600 XT runs a higher core frequency than the HD 2900 XT, but this could well be down to the fact that the lower end cards are manufactured using a 65nm process.

Just like the GeForce 8600, the HD 2600 is limited to a 128-bit memory interface, which will definitely limit its performance in games. In some respects the 128-bit bus is more of an issue on the HD 2600 than on nVidia’s card, since it’s only a quarter of the width offered by the top-end HD 2900 XT. The GeForce 8600’s bus, by comparison, is a third of the width of the 8800 GTX’s 384-bit interface, so nVidia’s midrange card isn’t as comparatively compromised.


Strangely, AMD will be equipping the HD 2600 series with DDR2, GDDR3 and GDDR4 memory, which means that memory clocks will range from 400 – 1,100MHz (800 – 2,200MHz effective). This means that the HD 2600 XT could be the first Radeon HD 2000 series card to feature GDDR4, although the extra speed offered by the GDDR4 chips will be offset by the narrow interface.

Considering that this latest batch of Radeon cards has HD in its name, it comes as no surprise that they’re well specced when it comes to high definition video processing and output. First up, the HD 2900 XT is equipped with two dual link DVI ports, so you could (funds permitting) attach two 30in monitors to it, each running at 2,560 x 1,600. But the HD 2900 XT also ships with a DVI to HDMI converter, so you can output the signal to a suitably equipped HDTV.
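As a quick aside on why dual link matters at that resolution: a single-link DVI connection is capped at a 165MHz pixel clock, and even ignoring blanking intervals, 2,560 x 1,600 at 60Hz demands well over that. A rough sketch of the arithmetic (the 165MHz figure is the DVI spec’s single-link ceiling):

```cpp
#include <cstdio>

// Why 2,560 x 1,600 needs dual link DVI: a single TMDS link is capped at a
// 165MHz pixel clock, and even ignoring blanking intervals this resolution
// at 60Hz demands well over that. A second link doubles the available rate.
int main() {
    const double single_link_limit_hz = 165e6;         // single-link DVI ceiling
    const double pixel_rate = 2560.0 * 1600.0 * 60.0;  // active pixels per second
    std::printf("Needed: %.1f MHz, single-link limit: %.0f MHz -> %s\n",
                pixel_rate / 1e6, single_link_limit_hz / 1e6,
                pixel_rate > single_link_limit_hz ? "dual link required"
                                                  : "single link is fine");
    return 0;
}
```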


Of course DVI to HDMI converters are nothing new, but what’s really impressive about this particular solution is that the resulting HDMI port will also carry sound. AMD has also ensured that you don’t need a messy cable running from an audio output on your sound chipset to the HD 2900 XT – instead the sound is routed directly from the onboard sound chip, through the motherboard chipset to the graphics card.

Current methods of routing sound to a graphics card with HDMI involve messy cabling.

The Radeon HD 2900 XT requires no external cabling to route audio from the sound chip.

In many respects, this method of HDMI implementation is preferable to having a port directly on the card. The fact that you retain two dual link DVI ports is obviously a major advantage, while the added flexibility of multiple digital outputs will be very attractive to anyone who wants to integrate a gaming/media PC into their living room. Add to this the fact that a high definition TV with a 1,920 x 1,080 panel offers a native resolution higher than many desktop PC monitors, and the appeal is clear.


Of course both the DVI ports on the HD 2900 XT are HDCP compliant, which means that any HDMI output will consequently also be HDCP compliant. This means that you won’t have to worry if you install a Blu-ray or HD DVD drive in your system, since you’ll have no problem playing back protected content.


It’s worth mentioning that the HDMI port can only output 5.1 channels of audio, which means that any 7.1-channel soundtracks are off the menu. Of more note is that this is an HDMI 1.2 port rather than 1.3, which means that there’s no support for Dolby Digital Plus, Dolby TrueHD or DTS-HD Master Audio. Whether the inability to output the latest batch of lossless surround codecs is a major issue is debatable, since anyone thinking about putting together a home cinema system that supports those codecs is unlikely to be using a PC as their source device.


The lack of HDMI 1.3 also means that you won’t be able to make use of the Deep Colour feature, which should start appearing on movies in the near future. To be fair though, this isn’t really an oversight on AMD’s part because Windows doesn’t actually support colour depths above 32-bit, which means you’ll have a tough time outputting 36-bit colour, even with an HDMI 1.3 port.


The HD 2900 XT definitely takes its video processing duties seriously, with a dedicated video decoding unit integrated into the GPU. This allows the card to offload all video decoding to the GPU with next to no CPU involvement. In fact, the on-GPU video decoding is so effective that AMD claims it can decode a 40Mbps high definition video stream with no CPU utilisation.


With full decode support for both AVC and VC-1, the HD 2900 XT definitely makes a solid case for itself as part of a living room based Media Center system. And with the ability to decode very high bit rate high definition video streams without incurring a CPU overhead, the HD 2900 XT would partner well with a Blu-ray or HD DVD drive. The full suite of Avivo post processing features is also still on hand to ensure the best possible quality from video playback.

The more polygons you throw at any 3D model, the better it will look – that’s pretty much the rule of thumb with 3D graphics. The more triangles involved in making up any part of a scene, the denser the polygon mesh and the more believable that model will be. Of course it’s not that easy to just throw exorbitant numbers of triangles at real time 3D scenes, since the amount of processing power needed to create them rises steeply.


It’s true that this is the exact methodology that’s used by Hollywood studios when rendering computer generated animation – literally millions of triangles are thrown at each scene to make it look as realistic (or not) as possible. The difference is that each frame of a Hollywood movie will take several hours to render, while in a game with real time rendering, you need to create around 60 frames per second!


Traditionally, creating a 3D model for real time rendering involved throwing a specific number of triangles into the mix – this number is usually a compromise between what will look good and what the current generation of hardware can handle. This will leave you with a fairly basic 3D model that looks rather blocky and unrealistic.


The next stage is to add textures to your model, usually in the form of normal maps, which will add both detail and relief. The normal map has the job of smoothing off all the angles and creating a believably textured and realistic surface, but the rougher the original model, the more detailed the normal map will need to be.
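To make that concrete, here’s a minimal sketch of what a normal map actually does at shading time: each texel stores a direction rather than a colour, and that direction replaces the flat surface normal in the lighting calculation. This is a generic illustration of the technique, not any particular card’s implementation:

```cpp
// Each normal-map texel stores a tangent-space direction. The byte values
// are unpacked from [0,255] to [-1,1] and used in place of the interpolated
// surface normal for per-pixel diffuse lighting (N dot L).
struct Vec3 { float x, y, z; };

Vec3 unpackNormal(unsigned char r, unsigned char g, unsigned char b) {
    return { r / 127.5f - 1.0f, g / 127.5f - 1.0f, b / 127.5f - 1.0f };
}

float diffuse(const Vec3& n, const Vec3& toLight) {
    float d = n.x * toLight.x + n.y * toLight.y + n.z * toLight.z;
    return d > 0.0f ? d : 0.0f;  // clamp away back-facing contributions
}
```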


When ATI launched the Radeon X800 back in 2004, it pioneered 3Dc. This is a normal map compression method that allowed very detailed normal maps to be produced and applied to models, while keeping memory bandwidth down to a minimum. The benefit of 3Dc was that it allowed far more detailed and believable textures to be used, without the need to massively increase the amount of memory on the card.
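The core trick behind 3Dc-style compression is that a unit-length normal only needs two stored components; the third can be rebuilt on the fly. A minimal sketch of the reconstruction (the real format’s block-storage details are simplified away here):

```cpp
#include <cmath>

// A unit normal satisfies x^2 + y^2 + z^2 = 1, so only X and Y need to be
// stored (at higher precision) and Z is reconstructed when sampling.
struct Normal { float x, y, z; };

Normal reconstruct(float x, float y) {
    float z2 = 1.0f - x * x - y * y;
    return { x, y, z2 > 0.0f ? std::sqrt(z2) : 0.0f };
}
```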


With the HD 2000 series AMD has added a tessellation stage to the rendering model. Tessellation is potentially the Holy Grail of 3D modelling – a way to create masses of polygons almost for free! The tessellation stage sits between the initial model creation and the texture application stage. Basically, tessellation uses subdivision surface techniques to transform a model with a modest number of polygons into one with a huge number of polygons, without the processing overhead that would be associated with storing and submitting all those triangles separately.
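A single pass of the simplest subdivision scheme shows how quickly the triangle count multiplies: split every triangle into four by inserting edge midpoints. A real tessellator (AMD’s included) also smooths or displaces the new vertices; this sketch only illustrates the cheap polygon amplification:

```cpp
#include <vector>

// One pass of midpoint subdivision: each triangle becomes four, so the
// polygon count quadruples per pass without any of the new geometry ever
// being stored in the model or sent across the bus.
struct V { float x, y, z; };
struct Tri { V a, b, c; };

V mid(const V& p, const V& q) {
    return { (p.x + q.x) / 2, (p.y + q.y) / 2, (p.z + q.z) / 2 };
}

std::vector<Tri> subdivide(const std::vector<Tri>& in) {
    std::vector<Tri> out;
    out.reserve(in.size() * 4);
    for (const Tri& t : in) {
        V ab = mid(t.a, t.b), bc = mid(t.b, t.c), ca = mid(t.c, t.a);
        out.push_back({ t.a, ab, ca });
        out.push_back({ ab, t.b, bc });
        out.push_back({ ca, bc, t.c });
        out.push_back({ ab, bc, ca });  // the central triangle
    }
    return out;
}
```

Three passes turn a 1,000-triangle model into 64,000 triangles – exactly the kind of amplification that would be ruinously expensive to store and submit explicitly.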


The result of the tessellation stage is that instead of having a rough and angled model that needs heavy texture work, you’re left with a very smooth and rounded surface that needs far less complex textures to make it look believable and real. Tessellation is something that’s not present in nVidia’s latest hardware, although the company has indicated that tessellation is indeed a very important feature, and will play a big role in the future.


Now I’m usually wary about features that are only sported by one graphics manufacturer, since it’s unlikely that developers will employ features that only address part of the target market. However, since the Xenos chip in the Xbox 360 also utilises a tessellation engine, it’s a safe bet that game developers are already utilising tessellation for Microsoft’s console. With this in mind, it shouldn’t be too difficult to use similar techniques in PC titles, allowing systems with Radeon HD 2900 XT cards to benefit.


Of course considering the development time of most games, it’s altogether possible that by the time we do see PC games that utilise tessellation, nVidia will have moved on to G90 and there’s a good chance that there will be a tessellation engine in that part when it arrives.

It seems that with each new architecture launch there’s an enhancement to the anti-aliasing techniques seen on the previous generation. First up, the HD 2000 series of cards will happily run all the previous versions of ATI/AMD anti-aliasing, including proprietary methods like Temporal AA, which appeared with the X800 cards, and Super AA, which was originally seen on the X850 CrossFire setup.


For a very long time now, ATI has been trying to convince gamers and journalists alike that there is more to graphics technology than frame rates. The company has been urging potential buyers to look at image quality and base their purchase on just how good things look, rather than just how fast they run. Of course this was an obvious course of action for ATI for many years, since the Radeon cards were widely considered to have superior filtering, although things became far less clear cut when nVidia launched the GeForce 8 series and significantly raised the bar on its filtering.


As graphics engines have improved and become more jaw droppingly beautiful to behold, serious game players have begun to place good anti-aliasing at the top of their list of importance. After all, there’s nothing worse than a beautifully rendered and lit scene that’s spoiled by jaggy diagonal lines, especially when those edges shimmer distractingly while you’re wandering around. Personally I’d far rather run a game at a slightly lower resolution with a decent degree of anti-aliasing enabled, than run it at a higher resolution with none!

AMD showed the new anti-aliasing off to good effect, but we’ll reserve judgement until we’ve viewed CFAA with edge detection ourselves.

As part of the Radeon HD 2000 series AMD has introduced Custom Filter anti-aliasing (CFAA), which theoretically applies anti-aliasing where it’s most needed. Key to CFAA is the edge detection filter, which applies more samples to pixels located along edges to smooth off jaggies. Meanwhile pixels that are not located along edges have fewer samples applied, therefore reducing the amount of load created by the anti-aliasing passes.


There are several potential benefits to CFAA, not least of which is the fact that the intensive anti-aliasing is only being applied where required. AMD also says that there will be a reduction in texture shimmering and less blurring of fine detail.
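AMD hasn’t published the filter’s internals, but the principle is easy to sketch: detect edge pixels, then vary the sample count per pixel accordingly. The threshold and sample counts below are illustrative guesses, not CFAA’s actual values:

```cpp
// Toy illustration of edge-weighted anti-aliasing in the spirit of CFAA's
// edge-detect filter: spend many samples on pixels flagged as edges, and
// only a cheap pass on flat interiors.
bool isEdgePixel(float lumCentre, float lumNeighbour, float threshold = 0.1f) {
    float d = lumCentre - lumNeighbour;      // simple luminance delta stands in
    return (d < 0.0f ? -d : d) > threshold;  // for the real (unpublished) heuristic
}

int samplesForPixel(bool isEdge) {
    const int kEdgeSamples = 8;  // heavy smoothing where the jaggies live
    const int kFlatSamples = 2;  // minimal work where nothing changes
    return isEdge ? kEdgeSamples : kFlatSamples;
}
```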


Unfortunately the current HD 2900 XT driver didn’t support the edge detection function of custom filter anti-aliasing, but as soon as I get hold of one that does I’ll put it through its paces and see if it lives up to AMD’s promises.

As already mentioned, the key to AMD’s unified shader architecture is the superscalar stream processors at the heart of the GPU. There are 320 stream processors, but these are split up into four SIMD (Single Instruction Multiple Data) arrays with 80 stream units in each. Despite the fact that the stream processors are separated into four SIMD arrays, pretty much any instruction thread can be piped to any SIMD array.


Although the stream processors themselves are pretty clever, it’s the way the data is handled that’s the really smart bit. Every stream processor can be sent two threads simultaneously, so that one is queued up and ready, while the other is executing. The reason that a second thread is always queued up in reserve is that every operation is constantly monitored to ensure that the most efficient operation is in effect.


Thread arbiter units constantly monitor the execution process to ensure maximum efficiency. If it is determined that a currently executing thread is in a hold state as it waits for data from elsewhere, that thread will instantly be sidelined and replaced with a new one, thus ensuring that every stream processor is actively executing code, rather than sitting there idle.


All the temporary data associated with the bumped thread is saved so that it can continue to execute later. There can be literally hundreds of queued threads waiting for the requested data they need to continue processing. As soon as that data is received, these queued threads are moved straight back to a stream unit and completed.
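A toy model makes that juggling act easier to picture. All the names below are illustrative rather than AMD’s internal terminology – the point is simply that a stalled thread is parked with its state intact while the reserve thread takes over immediately:

```cpp
#include <queue>

// Sketch of the arbitration described above: each stream unit holds one
// executing thread and one queued spare. A thread that stalls waiting for
// data is parked (its temporary state preserved) and the spare starts
// executing, so the unit never sits idle.
enum class State { Ready, Executing, WaitingForData };

struct Thread { int id; State state; };

struct StreamUnit {
    Thread* executing = nullptr;
    Thread* queued = nullptr;  // the second thread, kept ready in reserve

    // Called when the running thread is found to be waiting on data.
    // Assumes the arbiter has always kept a spare thread queued.
    void onStall(std::queue<Thread*>& parked) {
        executing->state = State::WaitingForData;
        parked.push(executing);  // state saved; resumes once its data arrives
        executing = queued;      // the reserve thread starts immediately
        executing->state = State::Executing;
        queued = nullptr;        // the arbiter refills this from ready threads
    }
};
```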


A sequencer unit is attached to each thread arbiter unit – this determines the optimal instruction order for each thread and again helps to ensure efficient processing across the entire array of stream processors.


Above the arbiter units are the shader command queues – here there are queues for vertex, geometry and pixel shaders. As shader instructions are queued up and fed to the arbiters, they are then routed to each of the four SIMD arrays for execution on the stream processors. As mentioned earlier, there is no distinction between vertex, geometry or pixel shaders, with each stream processor able to act as any type of shader.


What’s interesting is that AMD is pitching the HD 2900 XT against nVidia’s GeForce 8800 GTS 640MB card, which sports only 96 stream processors – in fact even the GeForce 8800 GTX only has 128 stream units, and AMD freely admits that the GTX is a faster card. With that in mind, it’s clear that real world performance is not simply dictated by the number of stream processors you can squeeze onto a die.

Although nVidia launched the first PC based graphics solution to incorporate a unified shader model in the shape of the GeForce 8800 GTX, it’s worth remembering that ATI had a unified shader model part over a year earlier inside the Xbox 360. The Xenos graphics chip inside the Xbox 360 was the first unified shader model part to hit the market, and it’s still the most advanced graphics processor ever seen inside a games console – the PlayStation 3 is basically using a variant of the GeForce 7800, so it’s some way behind its main rival when it comes to graphics architecture, despite being launched a year later!


To understand the benefits of a unified shader model, you need to understand how traditional graphics architecture worked. Previous generations of graphics hardware incorporated a number of dedicated vertex shaders and pixel shaders. In principle there’s nothing wrong with this system, with each array of dedicated shaders springing to life when needed and hammering through its instructions. Obviously the more shaders you had, the faster you could process those instructions, and with most games requiring more pixel processing than vertex processing, graphics cards tended to have the shader ratio weighted in favour of pixels.


The problem with the dedicated shader model is that it’s not always that efficient. You see, there are times when there will be a significant amount of complex geometry processing, which can leave the pixel shaders twiddling their thumbs. Likewise, when there’s masses of pixel processing to do, your vertex shaders are sitting around doing nothing but wasting clock cycles.


A unified shader model solves the problem of dormant hardware and wasted clock cycles. In essence, the unified shader architecture does away with dedicated shaders for vertex and pixel processing, replacing them with shaders that are capable of executing both vertex and pixel operations. This means that if there is masses of complex geometry to deal with, all the shaders can become vertex processors, and when there’s loads of pixel processing to do, you’ll have a whole host of pixel shaders.
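The efficiency argument is easy to demonstrate with some toy arithmetic: compare how many units sit idle under a fixed vertex/pixel split versus a unified pool, given the same workload. The unit counts below are made up for illustration:

```cpp
#include <algorithm>

// With a fixed vertex/pixel split, work beyond one pool's capacity queues up
// while the other pool idles; a unified pool simply matches the frame's mix.
struct Workload { int vertexOps; int pixelOps; };

// Fixed hardware, e.g. 8 vertex units + 24 pixel units, 1 op per unit per clock.
int idleUnitsFixed(const Workload& w, int vertexUnits, int pixelUnits) {
    int idleVertex = std::max(0, vertexUnits - w.vertexOps);
    int idlePixel  = std::max(0, pixelUnits  - w.pixelOps);
    return idleVertex + idlePixel;  // units with nothing to do this clock
}

// Unified hardware: the same total number of units takes whatever ops are waiting.
int idleUnitsUnified(const Workload& w, int totalUnits) {
    return std::max(0, totalUnits - (w.vertexOps + w.pixelOps));
}
```

Feed both functions a geometry-heavy frame – say 30 vertex ops and 2 pixel ops – and the fixed 8+24 design idles 22 of its 32 units while the unified 32-unit pool idles none.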


The significant difference between AMD’s HD 2900 XT and nVidia’s GeForce 8800 is that this card represents AMD’s second generation unified shader part, with the Xenos chip merrily processing away in Xbox 360s for the past 18 months. AMD is adamant that it has learned a lot about unified shader architecture since designing the Xenos, and is confident that this latest incarnation will make the most of Microsoft’s DirectX 10 platform.


Trusted Score


Score in detail

  • Value 7
  • Features 10
  • Performance 8
