During his introduction speech at Financial Analyst Day 2011, Jen-Hsun Huang revealed an interesting slide to the semiconductor analysts – the silicon die of nVidia’s first CPU. Without further ado, the picture below shows what nVidia presented as the Project Denver 64-bit CPU core:
As expected, no die size or manufacturing process was disclosed – given that many features are still subject to change – so take this slide with a grain of salt. To help you understand the information shown in the diagram, we’re including links to sites that explain the functionality of the particular units:
- ALU (Arithmetic Logic Unit)
- FPU (Floating Point Unit)
- Load/Store Logic
- MMU (Memory Management Unit)
- L1 Instruction Cache
- L1 Data Cache
- L2 Cache Logic Instruction/Fetch
The largest block on the die is the L1 Data Cache, with a separate L1 Instruction Cache alongside it. While it is too early to discuss the actual features of this CPU core, we’re really intrigued by some of the statements about the Project Denver core – mostly around the idea of attaching four Project Denver cores to a high-bandwidth interface such as the existing GPU memory controller.
In theory, Project Denver cores inside the Maxwell GPU die should enjoy access to 2+TB/s of internal bandwidth, well beyond the 320GB/s of external memory bandwidth currently possible (using a 512-bit interface and high-speed GDDR5 memory). If nVidia delivers this architecture as planned, we might see quite a change in the market – given that CPUs from neither AMD nor Intel have as high system bandwidth as contemporary graphics cards.
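As a quick back-of-the-envelope check on that 320GB/s figure, the math is simply bus width in bytes times per-pin data rate. This is just a sketch; the 5Gbps effective GDDR5 data rate is our assumption, chosen because it reproduces the quoted number:

```python
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return (bus_width_bits / 8) * data_rate_gbps

# 512-bit interface with GDDR5 at an assumed 5 Gbps effective per-pin rate
print(memory_bandwidth_gbs(512, 5.0))  # → 320.0
```

The same formula shows why GPUs hold such a bandwidth lead: a contemporary desktop CPU with a 128-bit DDR3 interface comes out an order of magnitude lower.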
Given that Project Denver exists in silicon form, proper execution and launch should bury any doubts about whether nVidia can make a CPU – doubts still held by some semiconductor analysts, the same ones who claimed that AMD was going to die, that AMD was not going to buy ATI, that Intel would blow everyone out of the water with Larrabee and, of course, that nVidia cannot make a CPU. These theories forget that even Intel, the largest of the companies mentioned, sometimes sees its projects end in a blind alley (WiMAX, Larrabee); that AMD did the logical thing in acquiring ATI – a good move, judging by the initial Fusion APUs; and lastly, that nVidia is now showing it can execute as well.
Another big question is whether or not nVidia can effectively support a CPU ecosystem. That’s another ballgame entirely, since we have heard that even with 3D Vision, GeForce and Tegra, many partners are not satisfied with the level of commitment given by the company. The true measure of success for any product lies in the support given to its ecosystem, so Project Denver has to shine not just on the hardware side.
All in all, things are heating up, and nothing is better for the consumer than the holy word of consumerism: competition.