Nehalem: a whole new processor concept.

If you thought the Core microarchitecture was a vast change from the Netburst Pentium 4 range, just wait until you get a look at what Nehalem has in store! With AMD raising the game as it seeds Fusion and other technologies to integrate more into the CPU core, we all wondered how Intel was going to react. While the exact details are still to be confirmed, we’ve learnt that there are a lot of changes in store for Intel’s upcoming platform, and that perhaps the ideas and methods adopted by the green camp weren’t so bad after all.

Firstly, Nehalem will arrive in Q2 2008 and is being designed from the ground up on the 45nm process. Intel has confirmed it will contain a variant of the Hyper-Threading technology previously seen on the Pentium 4 CPUs, although it won’t be a hacked-on addition compensating for poor IPC and a long pipeline, like it was in the Netburst days. SMT (Simultaneous Multithreading) is being optimised to make use of the many cores and shared cache in a way that “intelligently” uses the available resources.

Intel is aiming to have a scalable performance and core structure, including 8+ cores with 16+ threads running. What gets very interesting is that Intel describes Nehalem as having a multi-level shared cache architecture, without specifically ruling out something along the lines of the shared L3 cache that AMD’s next-generation Barcelona will have.

Integrated memory controller... on an Intel CPU?

Say goodbye to the northbridge, because Nehalem will integrate the memory controller into the CPU core. Intel is finally ready to do what AMD has been doing for years with the K8 architecture - incorporate an on-die memory controller to lower memory access latencies, reduce the power consumption of the whole platform and make designing future motherboards far easier.

This could be a marketing nightmare for Intel’s PR and the green camp is going to be rolling around on the floor in fits of glee at this news, but respect to Intel for ultimately biting the bullet and making the right choice. That said, Intel was in a similar situation when it created the Pentium M and had to convince the market that MHz wasn’t the only performance rating that mattered after years of preaching the contrary – and that turned out to be one of the most successful moves for Intel in recent history. By combining the architectural power of Core with an incredibly low latency memory controller and some high-bandwidth DDR3, we should see massive gains in multi-core applications that are suddenly freed of the northbridge front side bus (FSB) limitation.

Traditionally, Intel CPUs in a multi-core scenario had to queue and wait for the northbridge to serve commands to the memory, with the situation getting progressively worse as latency increases with every CPU you add. Adding larger and larger L2 caches (or L3 in the case of Xeons) helps reduce the need to access memory to an extent, but ultimately it couldn’t last, especially with the multi-core, multi-socket platforms of the future. AMD Opterons scale exceptionally well in this respect, as every CPU has its own local memory to talk to, while the CPUs communicate with each other through HyperTransport.
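The scaling argument above can be sketched with a toy queueing model. This is a deliberately crude back-of-the-envelope illustration, not real hardware timings: the service time and request counts below are invented numbers, and the "integrated controller" case assumes the best case where every core's accesses hit its own socket's local memory.

```python
# Toy model of the FSB bottleneck: all numbers are illustrative
# assumptions, not measured hardware figures.

SERVICE_TIME_NS = 60        # assumed cost of one memory transaction
REQUESTS_PER_CORE = 1000    # memory requests each core issues

def shared_fsb_time(cores):
    """Every request from every core is serialised through the one
    northbridge memory controller, so total time grows with core count."""
    return cores * REQUESTS_PER_CORE * SERVICE_TIME_NS

def integrated_controller_time(cores):
    """Each socket's on-die controller serves its own cores in parallel
    (best case: all accesses are local), so time stays flat."""
    return REQUESTS_PER_CORE * SERVICE_TIME_NS

for n in (1, 2, 4, 8):
    shared = shared_fsb_time(n)
    local = integrated_controller_time(n)
    print(f"{n} cores: shared FSB {shared} ns, "
          f"integrated controllers {local} ns ({shared / local:.0f}x)")
```

Even in this simplistic model the serialised design falls further behind with every core added, which is exactly why the approach couldn't last into the multi-socket era.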

While there won’t be a “front side bus” in the traditional sense, Intel is still using that term in order to differentiate itself from AMD. It has commented that it will use some form of PCI-Express’ ultra-fast, point-to-point serial link technology to talk to the memory.

Although this sounds a lot like HyperTransport, we’re sure that Intel will only use “elements” of the technology, as PCI-Express is tailored towards peripheral interconnects: it provides compatibility with older technologies as well as other specific benefits like hot-plugging, scalability, flexibility and data striping, which don’t benefit small packets and memory addressing. In comparison, HyperTransport offers a low-overhead, dedicated 32-bit packet-based point-to-point link with integrated addressing that is perfectly suited to memory access.
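The overhead point can be made concrete with a quick efficiency calculation. The header sizes below are rough assumptions chosen purely for comparison, not figures from the actual PCI-Express or HyperTransport specifications:

```python
# Illustrative payload-efficiency calculation for small packets.
# Header sizes are assumed round numbers, not the real spec values.

def efficiency(payload_bytes, header_bytes):
    """Fraction of each packet that carries useful data."""
    return payload_bytes / (payload_bytes + header_bytes)

HEAVY_HEADER = 20   # assumed overhead for a peripheral-style protocol
LIGHT_HEADER = 8    # assumed overhead for a lean memory-style link

for payload in (8, 64, 512):
    print(f"{payload:4d}-byte payload: "
          f"heavy header {efficiency(payload, HEAVY_HEADER):.0%}, "
          f"light header {efficiency(payload, LIGHT_HEADER):.0%}")
```

The pattern is what matters: large transfers amortise the header almost entirely, but the small cache-line-sized packets typical of memory traffic lose a big slice of their bandwidth to per-packet overhead, which is why a lightweight link suits memory access better.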
