Following on from my previous post, I’ve decided to use filebench for a more structured testing approach. Filebench is easy to work with: it ships with profiles for several common application types (workloads) and is easy to install. Workloads are defined in WML, filebench’s scripting language, so they can be tailored to mimic almost any sort of application load on the IO system. I cannot comment on how accurate these workloads are, but they make testing easy :).
The method I used was very simple: three runs of 60 seconds each, with the default settings for each workload profile, except for the randomwrite and randomread workloads, where I had to reduce the file size to 100 MB. I disabled address-space randomization (`echo 0 > /proc/sys/kernel/randomize_va_space`) as recommended by filebench. Filebench allocated 170 MB of shared memory on every run (this appears to be the default).
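For reference, the file-size tweak amounts to a one-line change in the workload file. Below is a sketch of what a cut-down randomread-style workload looks like in WML; the exact variable names, paths, and layout vary between filebench versions, so treat it as illustrative rather than a copy of the shipped profile:

```
# Sketch of a randomread-style filebench workload (WML); names are illustrative.
set $dir=/tmp
set $filesize=100m      # reduced from the default, as described above
set $iosize=8k
set $nthreads=1

# Preallocate a single large file that the read thread will seek around in.
define file name=largefile1,path=$dir,size=$filesize,prealloc

define process name=rand-read,instances=1
{
  thread name=rand-thread,memsize=5m,instances=$nthreads
  {
    # Random 8 KB reads against the preallocated file.
    flowop read name=rand-read1,filename=largefile1,random,iosize=$iosize
  }
}

# Each run lasts 60 seconds, matching the method above.
run 60
```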
The results below are the averages of the three runs, except for the randomread workload, where I stopped after two runs because the results were very similar. I did not repeat any tests, as there was a lot less variation in the results this time. The figures presented are the IO summary results rather than the individual per-operation results.
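For transparency, each figure in the table is simply the arithmetic mean of the per-run IO summary throughputs. A minimal sketch of that calculation (the three per-run numbers here are made up for illustration; only the averages appear in the results):

```shell
# Hypothetical per-run IO summary throughputs (MB/s) for one workload;
# in practice these come from filebench's "IO Summary" output line.
runs="1.22 1.25 1.26"

# Average them to two decimal places, as reported in the table.
avg=$(echo "$runs" | awk '{ for (i = 1; i <= NF; i++) sum += $i; printf "%.2f", sum / NF }')
echo "$avg MB/s"   # prints "1.24 MB/s"
```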
So without further ado here are the results:
| PMSTest (RAID1) | 1.24 MB/s | 6.83 MB/s | 4.20 MB/s | 13.63 MB/s | 89.1 MB/s | 1.50 MB/s |
| PMSControl | 5.02 MB/s | 3.73 MB/s | 4.93 MB/s | 39.83 MB/s | 88.9 MB/s | 2.33 MB/s |
The webserver results are somewhat mystifying. The workload consists of opening, reading, and closing files, so the two systems should perform similarly, yet for some reason the pmsApp is faster!
The rest of the results are as expected: writes are significantly slower due to the nature of the pmsApp (namely a RAID array across a network link), and reads are comparable.
It is also worth noting that, regardless of the results, the test preparation phase took longer, sometimes a lot longer, when running off the pmsApp, i.e. on PMSTest.