
Benchmarking Blues

Both these problem boards were duly replaced by the manufacturers, and the new boards seemed to run without problems. One was destined to be our graphics card testbed, so we used it to test two retail GeForce 7800 GTX boards. Things seemed to be going well and we were getting some impressive numbers, but then our Doom 3 test refused to run in SLI mode, even though it ran fine with a single card. This seemed odd, as all the other tests were running in SLI, so clearly there was still a problem. We decided to give up on that brand of board and go with another.

Thankfully, it worked first time. As it turned out, it was fortunate that the Doom 3 test had failed: all the scores we’d obtained from the previous board were well below what they should have been, and we would have been none the wiser had the Doom 3 SLI test worked.

That made a total of three failed SLI boards, meaning we’d spent literally days testing boards for nothing. Manufacturers can easily send us replacement boards, but they can’t give us our time back.

We had similar problems with some early GeForce 6600 graphics cards that kept overheating. A closer look revealed that the heatsinks didn’t make sufficient contact with the GPU. Once we pointed this out, the company assured us that the problem would be corrected on future models. Well, that’s great, but any serious testing would have highlighted this glaring issue. Why is it up to us to beta test its designs?

Another bugbear is the supposedly time-saving benchmarking ‘utilities’ that in fact often cause more problems than they solve. One in particular, for testing Far Cry, used settings that differed from those applied as standard when you actually play the game. This made results much lower than when running the tests manually, so they were not a true reflection of the performance a graphics card would give. The end result? Lots of retesting and time wasted.

It’s not just component-level testing that can prove frustrating. Take 3G phones. What’s the point of talking about all the things you can do on a handset if the network is still too ropey to make use of them? The trials and tribulations of 3G networks have been well documented by the industry, and it’s another classic example of a technology being pushed out to the public before it’s truly ready, in a desperate attempt to make money.
