A Glance Back
SGI: A Tool for Professionals
The history of multi-processor computer graphics goes back more than a decade. In 1993 SGI announced the Onyx series of graphics servers with awesome (for that time) performance. The graphics subsystem of those machines consisted of a geometry-processing unit, a display output generator, and the so-called raster manager that processed textures. The cheapest version of such a system, called VTX, isn’t very interesting, although its geometry-processing board carried six special-purpose Intel i860XP processors. The more remarkable Reality Engine 2 system allowed boosting performance by increasing the number of raster manager units from one to four. The amount of texture memory didn’t multiply, since its contents were duplicated on each board, but the frame buffer in the maximum configuration was as large as 160 megabytes, which was incredibly big at that time.
Thus, the SGI Onyx can be regarded as the first commercial system equipped with a multi-processor graphics subsystem. Such machines cost up to a million dollars, so only wealthy companies could afford one. Using such a system for gaming was out of the question, of course.
3dfx: Split It in Two!
Two years later the era of hardware 3D graphics began on the ordinary desktop PC. In 1995, 3dfx Interactive, later devoured by NVIDIA, introduced its first gaming accelerator called 3dfx Voodoo Graphics. That was a revolutionary innovation that improved both performance and quality of computer graphics.
The Voodoo Graphics chipset consisted of two chips: the Frame Buffer Interface (FBI) was responsible for the frame buffer, while the Texture Mapping Unit (TMU) processed textures. The chipset could scale performance up by adding more TMUs, up to three TMUs per FBI, though we don’t know if such configurations ever existed. Most of the manufactured Voodoo Graphics cards carried one FBI, one TMU and 4 megabytes of graphics memory (2 megabytes reserved for the frame buffer, the rest for textures), but Quantum3D, a manufacturer of high-performance graphics systems, released the Obsidian 100SB graphics card with two Voodoo Graphics chipsets working together in SLI mode. What did that mean? The abbreviation SLI stood for Scan Line Interleave, and the name described the operational principle of the technology: one Voodoo Graphics chipset was responsible for the even-numbered lines of the frame, and the other for the odd-numbered ones. Thus the load was divided equally between the two graphics accelerators and the overall performance grew.
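To illustrate the interleaving principle, here is a minimal sketch in C. It is not actual 3dfx code: render_scanline, FRAME_HEIGHT and the two-chip split are hypothetical stand-ins showing how alternating scan lines divide the rendering load roughly in half between two rasterizers.

```c
/* Minimal sketch of the Scan Line Interleave idea (not actual 3dfx code):
   two hypothetical rasterizers split one frame by alternating scan lines. */
#include <stdio.h>

#define FRAME_HEIGHT 480  /* hypothetical frame height in scan lines */

/* Hypothetical stand-in for one chipset rendering a single scan line. */
static void render_scanline(int chip_id, int line)
{
    printf("chip %d renders line %d\n", chip_id, line);
}

int main(void)
{
    for (int line = 0; line < FRAME_HEIGHT; line++) {
        /* Even-numbered lines go to chip 0, odd-numbered lines to chip 1,
           so each chip processes roughly half of the frame. */
        render_scanline(line % 2, line);
    }
    return 0;
}
```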
The Obsidian 100SB was not a mass product, and the SLI mode for the Voodoo Graphics remained an exotic feature back then, but in early 1998 3dfx introduced its next chipset. The Voodoo2 offered higher performance and became the first mass product to officially support boosting performance by combining two Voodoo2-based graphics cards via SLI.
Each Voodoo2 graphics card had a connector for attaching to another such card through a special flexible cable. Frankly speaking, this configuration didn’t really take off for a most trivial reason: the cost of the system was too high. In 1998 a single Voodoo2 graphics card cost more than $300! True enthusiasts still bought two Voodoo2 cards, since this duo was unmatched at the time in performance and graphics quality. There was also a single-PCB alternative from Quantum3D: the Obsidian2 X-24 was even faster than a Voodoo2 SLI configuration, but at about $650 it cost too much. Quite naturally, this solution didn’t become widely popular, either.
As the graphics market kept progressing, NVIDIA released its new TNT2 graphics chip in the spring of 1999. Its Ultra version could challenge the speed of the Voodoo2 SLI and could also work with 32-bit color, a feature 3dfx products lacked. Riva TNT2 Ultra graphics cards, while expensive themselves, were cheaper than a pair of Voodoo2 cards and didn’t require a separate card for the 2D mode.