Intel IOMeter Sequential Read and Write Patterns
Let’s see how the controller handles sequential reading and writing.
IOMeter sends a stream of read (or write) requests with a queue depth of 4. The size of the data block changes every minute, which lets us plot the linear read/write speed as a function of the block size:
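The measurement idea behind this pattern can be illustrated with a minimal sketch: read a file sequentially in fixed-size chunks for a short interval and compute the throughput, then step the block size and repeat. This is only an illustration of the method, not IOMeter itself; the synchronous Python I/O below does not model the 4-deep request queue, and the file size and timings are arbitrary choices for the example.

```python
import os
import time
import tempfile

def sequential_read_speed(path, block_size, duration=0.1):
    """Read the file sequentially in block_size chunks for roughly
    `duration` seconds and return the throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while time.perf_counter() - start < duration:
            chunk = f.read(block_size)
            if not chunk:       # reached end of file: wrap around
                f.seek(0)
                continue
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6

# Create a small test file and step the block size, IOMeter-style.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(4 * 1024 * 1024))   # 4MB of data
    path = tmp.name

for block_size in (512, 4096, 65536):        # bytes per request
    speed = sequential_read_speed(path, block_size)
    print(f"{block_size:6d}B blocks: {speed:8.1f} MB/s")

os.remove(path)
```

On a real drive (with the OS cache bypassed) the per-block-size results from such a loop are exactly what the diagrams below plot: throughput typically climbs with block size until the drive's linear speed or the bus becomes the bottleneck.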
The following diagram shows the dependence of the controller’s read speed on the size of the data block.
We expected RAID arrays made up of different numbers of drives (especially the RAID0 arrays) to differ in read speed, but instead the results fall into two distinct groups: 1) the RAID1 array and the two JBOD configurations (with TCQ on and off) and 2) the two- and four-disk RAID0 arrays together with the RAID10. We can’t really say why the arrays behave this way. :) Numbers like 60 or 77MB/s can’t be tied to the read speed of a single drive (which is around 68MB/s) or to the bandwidth limit of the PCI bus.
One thing is certain, though: we see no benefit from TCQ here at all.
Next goes sequential writing; here’s the table:
As with sequential reading, we build a diagram from the table.
The graphs of the single drive with TCQ enabled and disabled coincide fully, but the rest of the graphs are simply awful.
First, all the arrays are slower than the single drive starting from 4KB data blocks. Second, the RAID0 arrays have similar speeds, and the two-disk array is often faster than the four-disk one. Third, the performance of the mirrored arrays (RAID1 and RAID10) is terrible.
Overall, the Talon ZL4-150 doesn’t show the best results in synthetic patterns. Let’s see how it does in patterns that imitate real-life workloads.