More than Just a Pixel Pipeline
As we said above, NVIDIA has substantially redesigned the pixel pipeline architecture to improve its performance. The developers modeled 1,300 different shaders to expose the bottlenecks of the previous architecture, and the resulting pixel pipeline of the G70 looks as follows:
Each of the two shader units now has an additional mini-ALU (these mini-ALUs first appeared back in the NV35, but the NV40 didn't have them). This improves the mathematical performance of the processor and, accordingly, the speed of pixel shaders. Each pixel processor can execute 8 MADD (multiply/add) instructions in a single cycle, and the total throughput of 24 such processors on instructions of that type is a whopping 165 GFLOPS, which is three times the performance of the GeForce 6800 Ultra (54 GFLOPS). The loops and branching introduced in version 3.0 pixel shaders are fully supported.
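As a sanity check, the quoted figure adds up in our own back-of-the-envelope arithmetic, assuming the 7800 GTX's 430MHz core clock and counting a 4-component MADD as eight floating-point operations (a multiply and an add per component) issued by both shader units of each of the 24 pixel processors every cycle:

```latex
\underbrace{24}_{\text{pixel processors}} \times
\underbrace{2}_{\text{shader units}} \times
\underbrace{4}_{\text{components}} \times
\underbrace{2}_{\text{ops per MADD}} \times
0.43\,\text{GHz} \approx 165\ \text{GFLOPS}
```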
Of course, real-life shaders do not consist solely of MADD instructions, but NVIDIA claims the pixel shader performance of the G70 is twice that of the NV40. We will check this claim in our theoretical tests, but the improved pixel pipelines look highly promising. We can expect a considerable performance gain in modern pixel-shader-heavy games.
Vertex Processors
The block diagram of the G70's vertex processor doesn't differ from that of the NV40:
The higher geometry-processing speed is achieved through a larger number of vertex processors (8 against the NV40's 6) and, probably, through improvements in the vector and scalar units. According to the official data, the performance of the scalar unit has increased by 20-30% over the NV40, and the vector unit executes a MADD instruction in a single cycle. Besides that, the efficiency of cull and setup operations in the fixed-function section of the geometry pipeline has increased by 30%. We are going to cover these things in more detail below.
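To make the MADD talk concrete, here is a minimal C sketch of what the vector unit computes on a 4-component vector in that single cycle; the function and type names are our own, purely for illustration, not anything from NVIDIA's toolchain:

```c
#include <stdio.h>

/* Illustration: the d = a*b + c operation a 4-wide vector MADD performs.
 * Each component contributes a multiply and an add, i.e. 8 FLOPs per MADD. */
typedef struct { float x, y, z, w; } vec4;

static vec4 madd(vec4 a, vec4 b, vec4 c)
{
    vec4 d = {
        a.x * b.x + c.x,
        a.y * b.y + c.y,
        a.z * b.z + c.z,
        a.w * b.w + c.w
    };
    return d;
}

int main(void)
{
    vec4 a = {1.0f, 2.0f, 3.0f, 4.0f};
    vec4 b = {0.5f, 0.5f, 0.5f, 0.5f};
    vec4 c = {1.0f, 1.0f, 1.0f, 1.0f};
    vec4 d = madd(a, b, c);
    printf("%.1f %.1f %.1f %.1f\n", d.x, d.y, d.z, d.w); /* 1.5 2.0 2.5 3.0 */
    return 0;
}
```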
On the whole, we can't call NVIDIA's new architecture a revolution. It is rather a greatly improved and refined GeForce 6, which had been the most advanced architecture on the consumer 3D graphics market until today. The GeForce 7 carries that leadership forward, once again confirming NVIDIA's technological superiority.
HDR: More Speed
Support for the OpenEXR format, which allows an image to be rendered in an extended dynamic range and output to the screen, first appeared in the GeForce 6800 Ultra. The format is employed by Industrial Light & Magic, a division of Lucasfilm, to create special effects for modern blockbuster movies.
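The extended range comes from storing each color channel as a 16-bit floating-point "half" value (1 sign, 5 exponent and 10 mantissa bits, as in OpenEXR) instead of an 8-bit integer. The sketch below is our own illustration, not NVIDIA or ILM code; it decodes such a value into a regular float to show why the format covers a far wider brightness range than fixed 0..255 color:

```c
#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Decode a 16-bit "half" into a 32-bit float (illustration only). */
static float half_to_float(uint16_t h)
{
    int sign = (h >> 15) & 0x1;
    int exp  = (h >> 10) & 0x1F;
    int mant =  h        & 0x3FF;
    float s  = sign ? -1.0f : 1.0f;

    if (exp == 0)            /* zero or subnormal */
        return s * ldexpf((float)mant, -24);
    if (exp == 31)           /* infinity or NaN */
        return mant ? NAN : s * INFINITY;
    return s * ldexpf(1.0f + mant / 1024.0f, exp - 15);
}

int main(void)
{
    printf("%g\n", half_to_float(0x3C00)); /* 1.0 */
    printf("%g\n", half_to_float(0x7BFF)); /* 65504, the largest finite half */
    return 0;
}
```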
Alas, this rendering mode requires huge resources, even though it ensures much better image quality. The first game to support HDR was the popular 3D shooter Far Cry, starting with patch 1.3. In practice, that HDR support remained more of a marketing trick, since you could not play in this mode even at 1024x768. For example, while the card normally delivered 55 to 90fps on the Training map across various resolutions, the HDR mode yielded no more than 15-30fps, so comfortable play was out of the question. NVIDIA's SLI technology raised HDR performance to more acceptable numbers, but the cost of a system with two GeForce 6800 Ultra/GT cards was very high.
The situation changes with the arrival of the G70: HDR should become genuinely useful for owners of G70-based graphics cards. According to NVIDIA, the GeForce 7800 GTX is 60% faster in this mode than the GeForce 6800 Ultra thanks to the improved texture-mapping units. So it looks like a single such card will let you enjoy a beautiful high-dynamic-range image at resolutions up to 1280x1024, while SLI configurations should make 1600x1200 playable in HDR.