
Nvidia VP Claims Moore's Law Is Dead


It is well known, both commercially and legally, that Intel and Nvidia have been at each other's throats for some time now, and this isn't likely to improve relations...

Speaking out in a column on Forbes.com, Nvidia vice president Bill Dally has announced that "Moore's Law is now dead." For those in need of a backgrounder: Gordon Moore is Intel's much-lauded co-founder, who predicted in a 1965 paper that the number of transistors that could be placed inexpensively on an integrated circuit would double roughly every year, a rate he revised to every two years in 1975. The law is often misquoted as a claim that processor 'speed' (related, but not the same thing) will double every 18 months.

Remarkably, Moore's Law has stayed largely true for the last 45 years and is spoken about with great reverence at Intel, so what has Dally been saying? In short: that we need to move into an era of parallel processing instead.

"We have reached the limit of what is possible with one or more traditional, serial central processing units, or CPUs," he explains, describing serial verses parallel operations as akin to one person adding up a word count verses many people each counting a paragraph then adding these numbers together.
"Going forward, the critical need is to build energy-efficient parallel computers, sometimes called throughput computers, in which many processing cores, each optimized for efficiency, not serial speed, work together on the solution of a problem. A fundamental advantage of parallel computers is that they efficiently turn more transistors into more performance. Doubling the number of processors causes many programs to go twice as fast. In contrast, doubling the number of transistors in a serial CPU results in a very modest increase in performance - at a tremendous expense in energy."

Uncannily enough, Nvidia's GPUs are well versed in parallel operation, with Dally pointing out: "Every three years we can increase the number of transistors (and cores) by a factor of four. By running each core slightly slower, and hence more efficiently, we can more than triple performance at the same total power. This approach returns us to near historical scaling of computing performance."
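
To see how that arithmetic can work out, consider a back-of-the-envelope sketch (our numbers, purely illustrative, not Nvidia's) using the textbook dynamic-power model, in which a core's power scales with its switched capacitance, its clock frequency and the square of its supply voltage. Quadrupling the cores while a process shrink lowers per-core capacitance and voltage can then deliver more than triple the throughput at roughly the same total power.

```python
# Back-of-the-envelope check of the "4x cores, >3x performance, same
# power" claim under the textbook dynamic-power model P ~ n * C * f * V^2.
# All figures below are illustrative assumptions, not Nvidia's numbers.

def relative_power(cores: int, cap: float, freq: float, volt: float) -> float:
    """Total dynamic power relative to a one-core baseline
    (cap, freq and volt are ratios to that baseline)."""
    return cores * cap * freq * volt ** 2

def relative_throughput(cores: int, freq: float) -> float:
    """Idealised throughput: core count times clock, assuming the
    workload parallelises perfectly."""
    return cores * freq

# Baseline: one core at full capacitance, clock and voltage.
baseline_power = relative_power(1, 1.0, 1.0, 1.0)

# Hypothetical next generation: 4x the cores, each at 80% clock, with
# the shrink halving per-core capacitance and cutting voltage to ~79%.
power = relative_power(4, 0.5, 0.8, 0.79)
print(f"power:      {power / baseline_power:.2f}x")       # ~1.00x
print(f"throughput: {relative_throughput(4, 0.8):.2f}x")  # 3.20x
```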

Of course, there are a great many barriers to making what Dally is talking about truly viable on a mass scale, not least re-educating programmers and moving on from ageing bedrock software that cannot take advantage of parallel processing. That said, the efficiency of GPUs has driven a trend in recent years toward offloading traditional CPU tasks, such as video playback and transcoding, to the GPU, while both Internet Explorer 9 and Firefox 3.7 have shown how even browsers can offload web page rendering to the GPU for faster page loading.

How will Intel respond? One of the joys of covering the tech sector is waiting to find out...

Link:
Via Forbes.com
