Database Pattern
In the Database pattern the drive processes a stream of requests to read and write 8KB data blocks at random addresses. The ratio of read to write requests changes from 0% to 100% throughout the test, while the request queue depth varies from 1 to 256.
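To make the pattern concrete, here is a minimal sketch of the access mix described above: 8KB requests at random block-aligned addresses, with the write share swept from 0% to 100% and the queue depth stepped up to 256. (IOMeter itself is driven by its own configuration files; the function and parameter names here are illustrative, not IOMeter's API.)

```python
import random

BLOCK_SIZE = 8 * 1024                 # 8KB requests, as in the Database pattern
WRITE_SHARES = range(0, 101, 10)      # 0%, 10%, ... 100% writes
QUEUE_DEPTHS = [1, 4, 16, 64, 256]    # outstanding requests per test point

def make_requests(write_share, count, capacity_blocks, seed=0):
    """Return (op, byte_offset) pairs for one cell of the test matrix."""
    rng = random.Random(seed)
    requests = []
    for _ in range(count):
        op = "write" if rng.random() * 100 < write_share else "read"
        # Random address, aligned to the 8KB block size:
        offset = rng.randrange(capacity_blocks) * BLOCK_SIZE
        requests.append((op, offset))
    return requests

# Example cell: 30% writes against a 32GB device (4 million 8KB blocks).
reqs = make_requests(write_share=30, count=1000, capacity_blocks=4_000_000)
```

Each (write share, queue depth) pair from the two ranges above is one measured point in the diagrams that follow.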
We’ll discuss the results for request queue depths of 1, 16 and 256.
As you might have expected, the i-RAM is unrivalled. Note that it copes better with reading than with writing, yet its speed is fantastically high in both cases.
As for the SSDs, they are faster than the HDDs at random reading when there are no write requests. But as soon as the queue contains even 10% writes, the SSDs slow down sharply, though they still stay ahead of the HDDs. As the percentage of writes grows, the 15,000rpm HDD takes the lead, and then the other HDDs overtake the SSDs as well. Unlike the SSDs, the HDDs actually speed up as the share of writes increases, thanks to their deferred writing mechanism.
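The deferred-writing gain is easy to illustrate: writes acknowledged into the drive's cache can be serviced in address order rather than arrival order, which shortens total head travel. A toy sketch under a simple linear seek-cost model (the numbers and helper are hypothetical, not measured behavior of the tested drives):

```python
import random

def seek_distance(order, start=0):
    """Total head travel (in LBA units) to visit targets in the given order."""
    total, pos = 0, start
    for lba in order:
        total += abs(lba - pos)
        pos = lba
    return total

rng = random.Random(42)
queued_writes = [rng.randrange(1_000_000) for _ in range(64)]

fifo_cost = seek_distance(queued_writes)              # service in arrival order
elevator_cost = seek_distance(sorted(queued_writes))  # reorder by address

# Reordering a deferred-write queue never increases total travel here:
assert elevator_cost <= fifo_cost
```

The more writes sit in the queue, the more reordering opportunities the drive gets, which is why HDD performance in this pattern climbs with the write percentage. An SSD gains nothing from this: it has no heads to move, and its write cost is dominated by flash programming instead.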
The i-RAM performs even better when the queue is 16 requests long. The same is true for the HDDs, with the Fujitsu outperforming the SSDs even at 10% write requests. There’s something strange about the behavior of the 64GB SSD, though. While the 32GB model is indifferent to the increase in queue depth (there is nothing for it to optimize), the 64GB model actually becomes slower at read requests. Is this an error in the controller or driver?
There are no significant changes when the queue depth is increased further.
Here are a couple of diagrams that show graphs for five different request queue depths for each drive.
The performance of the i-RAM is higher by about 20% at every queue depth other than 1.
It’s simple with the 32GB SSD: its performance is constant at every queue depth and in every mode save for pure reading, and is determined by the write access time and the ratio of reads to writes. At pure reading, performance increases steadily with the request queue depth.
The 64GB SSD behaves differently. It shows its maximum performance at a request queue depth of 1. When the queue is 4 requests long, the SSD is somewhat slower. But when the queue grows further, the drive suffers a severe performance hit, dropping to a level that does not depend on the exact queue length (16 or 256 requests). This drive’s controller seems unable to cope with long request queues.
Here are the same diagrams for the other tested devices:
- IOMeter: Database, Fujitsu MBA3300RC (diagram)
- IOMeter: Database, Samsung SpinPoint F1 (diagram)
- IOMeter: Database, Hitachi 7K200 (diagram)