Three different drives from three different manufacturers are used in our HD I/O tests, so scores are comparable between boards, not between drive interfaces.

IDE HDTach
Results are pretty similar. The DFI board seems to take a slightly higher hit on CPU usage, though once the margin of error (+/- 2%) is factored in, all the boards are more or less equal.
SATA HDTach
The situation reverses for the nF4 Ultra-D in the SATA test: burst rates are slightly higher than those of both the K8T890 and the SLI board, and CPU usage is lower as well. Again, with the margin of error factored in, things are pretty close all around.
USB2 Throughput
USB2 throughput is in line with the Gigabyte SLI board's, but CPU usage is significantly higher; the nForce 4 takes second place behind the K8T890. DFI's BIOS includes USB tweaking options, but those were left untouched; these results were obtained with the stock settings.
LAN Testing
With the nForce 4 MCP's ethernet controller, CPU usage is slightly lower than that of both the Soltek VIA board and the K8NXP-SLI board. With TCP off-loading enabled, CPU usage was fairly consistent regardless of the firewall setting.
We see some very impressive results on the throughput test. DFI manages to hit the 1Gbps barrier and trounces both the Gigabyte and Soltek boards.
There are still some oddities with the nForce 4 firewall/TCP offloading feature. While the processor hit with the firewall on is approximately the same as what we see with the VIA chipset with no firewall, processor usage jumps up significantly with BOTH TCP offloading and the firewall turned off. In the case of the DFI, processor usage shot up to around 72%, compared to the high 30s to low 40s when run with TCP offloading enabled. This is a bit of a concern, as we do not see this kind of processor usage on any other platform, including the older nForce3 250Gb, the VIA boards, or even the Intel platforms. We touched upon this issue in our Gigabyte K8NXP-SLI review, so this is not just a DFI problem. We have a good idea of why this is happening and have been working with NVIDIA to resolve the issue.
The second, Marvell-based ethernet controller appears to sit on the PCI bus, with throughput topping out at just 751Mbps. CPU usage was lower, though, at roughly 33%.
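As a quick sanity check on that inference, and assuming the Marvell controller hangs off a standard 32-bit/33MHz PCI bus (an assumption on our part rather than something we verified on the board), the theoretical ceiling of the bus works out to:

$32\ \text{bits} \times 33.33\ \text{MHz} \approx 1066\ \text{Mbps} \approx 133\ \text{MB/s}$

Once protocol overhead and arbitration with any other devices sharing the bus are factored in, sustained throughput in the 700-800Mbps range is typical for PCI-attached gigabit controllers, which lines up with the 751Mbps figure above.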