It feels so long since AMD ruled the CPU roost that one almost suspects it never actually happened. Like one of those incredibly vivid dreams that you're still convinced is real for a few moments after you've woken up, and that lingers with you for days afterwards. Yet there's no proverbial splash of cold water to rouse us. It actually happened, and it was a good time. Oh how we laughed…
But those days are long gone. Yes, ATI may have regained a large proportion of market share in the discrete graphics card market with its HD 4xx0 series of cards, but that entire market pales in comparison to the huge number of plain old Intel integrated graphics solutions that power the vast majority of notebooks and PCs. If it's to revive its fortunes, AMD needs to gain a foothold in these grassroots markets. So it was with great interest that I took in all the information that has trickled out this week about AMD's future products.
First we had the launch of AMD's first 45nm processors, in the shape of its server-oriented Opterons, codenamed Shanghai. They feature a whole host of improvements that boost performance while also reducing power consumption, which seems like a useful combination. However, it's not so much how the new compares to the old that will drive any wholesale transitions from Intel to AMD in the server sector. For that to happen, it comes down to how the new Opterons compare to Intel's current and near-future platforms and, of course, the jury is still out on that one. Time will tell.
Following this we had more news from the ATI front with the announcement of its Stream GPU-accelerated computing project, which looks to leverage the wave of enthusiasm and ever more rapid uptake of GPU-accelerated software to broaden the market for more powerful discrete graphics solutions. Again, though, it's early days for this project and there is huge competition from nVidia's more established CUDA project, not to mention the impending colossus that is Intel's Larrabee project.
What's more, though software is beginning to catch up with hardware in terms of taking full advantage of the massive parallel processing power of GPUs, we're still a long way from GPU-accelerated computing becoming a must-have. After all, while being able to encode a 1080p Blu-ray rip to an iPod-friendly format in a quarter of the time it would take a CPU sounds pretty cool, the simple fact of the matter is that most people would never undertake such a task. Even when Windows 7 arrives, with its reported greater integration of GPU-accelerated tasks, a few flashy animations and window transitions are hardly going to require massive gigaFLOP-capable GPUs. All of which leaves little for AMD shareholders to get excited about.