Benchmarking: FIO
CERN Fio Benchmarks
- Ref: http://it-div-procurements.web.cern.ch/it-div-procurements/IT-3821/fio/
- Set up an fio config file (/root/fio/fio-bench) with the content below for the benchmark runs:
[global]
direct=1
bsrange=4k-4k
####################
#numjobs should be half the number of physical cores on the system under test
#(see the one-liner after this job file for a way to derive it)
numjobs=16
###################
# directory where the disk under test (here /dev/sda1) is mounted
directory=/pool
iodepth=24
timeout=3600
[f1]
size=10000MB
rw=randrw
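If you don't know the physical core count of the test box offhand, something like this derives the suggested numjobs value on Linux (a sketch; lscpu is part of util-linux):

# count unique (core,socket) pairs = physical cores, then halve
lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l | awk '{print int($1/2)}'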
- Run the fio benchmark and average over 3 runs:

for i in {1..3}
do
fio --eta=always --latency-log --bandwidth-log --output=/root/fio/fio-bench.${i}.log /root/fio/fio-bench
sleep 3
done
- Check results:
grep -i iops /root/fio/fio-bench.{1,2,3}.log

and sum all the IOPS values for each run:
grep -i iops /root/fio/fio-bench.1.log
fio-bench.1.log: read : io=728220KB, bw=207137 B/s, iops=50 , runt=3600019msec
fio-bench.1.log: write: io=727012KB, bw=206793 B/s, iops=50 , runt=3600019msec
fio-bench.1.log: read : io=729100KB, bw=207388 B/s, iops=50 , runt=3600001msec
fio-bench.1.log: write: io=723424KB, bw=205773 B/s, iops=50 , runt=3600001msec
grep -i iops /root/fio/fio-bench.2.log
fio-bench.2.log: read : io=598968KB, bw=170372 B/s, iops=41 , runt=3600006msec
fio-bench.2.log: write: io=603348KB, bw=171618 B/s, iops=41 , runt=3600006msec
fio-bench.2.log: read : io=601040KB, bw=170962 B/s, iops=41 , runt=3600001msec
fio-bench.2.log: write: io=600416KB, bw=170784 B/s, iops=41 , runt=3600001msec
grep -i iops /root/fio/fio-bench.3.log
fio-bench.3.log: read : io=675160KB, bw=192044 B/s, iops=46 , runt=3600023msec
fio-bench.3.log: write: io=675108KB, bw=192029 B/s, iops=46 , runt=3600023msec
fio-bench.3.log: read : io=675532KB, bw=192149 B/s, iops=46 , runt=3600025msec
fio-bench.3.log: write: io=676816KB, bw=192515 B/s, iops=47 , runt=3600025msec
IOPS (average) = (200 + 164 + 185)/3 = 183

- Calculate the IOPS sum from a log file:
grep -i iops /root/fio/fio-bench.1.log | sed -e 's/=/ /g' -e 's/:/ /g' | awk '{sum=sum+$8} END {print sum}'
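To sum all three runs and average them in one step, a small wrapper around the same pipeline works (a sketch, assuming the logs sit under /root/fio as written by the run loop above):

# print the IOPS sum of each log, then average the three sums
for i in 1 2 3
do
    grep -i iops /root/fio/fio-bench.${i}.log | sed -e 's/=/ /g' -e 's/:/ /g' | awk '{sum=sum+$8} END {print sum}'
done | awk '{total+=$1; n++} END {print "IOPS (average):", total/n}'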
CERN Results

| Drive Model | Number of drives | Drive RPM | Interface | IOPS |
|---|---|---|---|---|
| 250GB (WD2503ABYX) | 3 (RAID 0) | 7200 RPM | 3Gb/s SATA | 471 |
| Intel 320 40GB | 3 (RAID 0) | N/A | 3Gb/s SATA | 1832 |
| Intel 320 40GB | 2 (RAID 0) | N/A | 3Gb/s SATA | 870* |
| Intel 320 40GB | 1 | N/A | 3Gb/s SATA | 606 |
| Intel 320 160GB | 2 (RAID 0) | N/A | 3Gb/s SATA | 3279 |
| Intel 320 160GB | 1 | N/A | 3Gb/s SATA | 1591 |
| Seagate Constellation.2 1TB 2.5" | 1 | 7200 RPM | 6Gb/s SATA | 332 |
| Hitachi Ultrastar 2TB (HUA723020ALA640) | 3 (RAID 0) | 7200 RPM | 6Gb/s SATA | 559 |
| Toshiba 2TB (MK2001TKRB) | 3 (RAID 0) | 7200 RPM | 6Gb/s SAS | 818 |
* Below expectations; please run again when drives are available. The low result could be due to wear, as this test was run directly after the 3-drive RAID 0 test.
General IOPS Tests
I've found fio (http://freshmeat.net/projects/fio/) to be an excellent testing tool for disk systems. To use it, compile it (requires libaio-devel), and then run it as
fio input.fio
For a nice simple IOPS test, try this:
[random]
rw=randread
size=4g
directory=/data
iodepth=32
blocksize=4k
numjobs=16
nrfiles=1
group_reporting
ioengine=sync
loops=1

This job file performs random 4k reads (4k being the standard block size for IOPS measurement) against a directory named /data, using standard synchronous Unix IO. There are 16 simultaneous jobs, each doing 4GB of IO to its own single file, with a requested IO depth of 32. All per-job statistics are aggregated into a single report, and the workload runs once. Note that with ioengine=sync the iodepth setting is effectively 1, since a synchronous engine keeps only one IO in flight per job; an asynchronous variant is sketched after the run command below.
Then, to run the test:

fio file.fio
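As noted above, the sync engine caps the effective queue depth at 1 per job. A variant where the iodepth actually applies might look like this (a sketch; the libaio engine and direct=1 here are my additions, not part of the original test):

[random-aio]
rw=randread
size=4g
directory=/data
blocksize=4k
numjobs=16
nrfiles=1
group_reporting
# async engine, so the iodepth of 32 is actually used
ioengine=libaio
iodepth=32
# bypass the page cache so the disks are measured, not RAM
direct=1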
CLI FIO

# Write IOPS test (WARNING: this writes directly to /dev/md0 and will destroy any data on it)
fio --filename=/dev/md0 --direct=1 --rw=randwrite --bs=4k --size=16G --numjobs=64 --runtime=10 --group_reporting --name=file1
# Read IOPS test
fio --filename=/dev/md0 --direct=1 --rw=randread --bs=4k --size=16G --numjobs=64 --runtime=10 --group_reporting --name=file1
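Both one-liners above talk straight to the raw /dev/md0 device. If you would rather test through a filesystem, the same commands can point at an ordinary file instead; a sketch, where the path and the libaio/iodepth options are assumptions to adjust for your setup:

# Write IOPS test against a plain file (fio creates it if it does not exist)
fio --filename=/mnt/test/fio-testfile --direct=1 --rw=randwrite --bs=4k --size=16G --ioengine=libaio --iodepth=32 --numjobs=64 --runtime=10 --group_reporting --name=file1

# Read IOPS test against the same file
fio --filename=/mnt/test/fio-testfile --direct=1 --rw=randread --bs=4k --size=16G --ioengine=libaio --iodepth=32 --numjobs=64 --runtime=10 --group_reporting --name=file1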