Benchmarking: FIO
Revision as of 09:02, 3 July 2012
CERN Fio Benchmarks
- Ref: http://it-div-procurements.web.cern.ch/it-div-procurements/IT-3821/fio/
- Set up the fio config file (/root/fio/fio-bench) with the content below for the benchmark runs
[global]
direct=1
bsrange=4k-4k
####################
#numjobs should be half of the number of physical cores on the sample system.
numjobs=2
###################
# directory where the '/dev/sda1' disk is mounted
directory=/srv/castor/04
iodepth=24
timeout=3600
[f1]
size=10000MB
rw=randrw
- Run fio benchmark (average over 3 runs)
for i in {1..3}
do
fio --eta=always --latency-log --bandwidth-log --output=/root/fio/fio-bench.${i}.log /root/fio/fio-bench
sleep 3
done
- Results: run grep -i iops /root/fio/fio-bench.{1,2,3}.log and sum all the iops values
grep -i iops /root/fio-bench.1.log
fio-bench.1.log: read : io=728220KB, bw=207137 B/s, iops=50 , runt=3600019msec
fio-bench.1.log: write: io=727012KB, bw=206793 B/s, iops=50 , runt=3600019msec
fio-bench.1.log: read : io=729100KB, bw=207388 B/s, iops=50 , runt=3600001msec
fio-bench.1.log: write: io=723424KB, bw=205773 B/s, iops=50 , runt=3600001msec
grep -i iops /root/fio-bench.2.log
fio-bench.2.log: read : io=598968KB, bw=170372 B/s, iops=41 , runt=3600006msec
fio-bench.2.log: write: io=603348KB, bw=171618 B/s, iops=41 , runt=3600006msec
fio-bench.2.log: read : io=601040KB, bw=170962 B/s, iops=41 , runt=3600001msec
fio-bench.2.log: write: io=600416KB, bw=170784 B/s, iops=41 , runt=3600001msec
grep -i iops /root/fio-bench.3.log
fio-bench.3.log: read : io=675160KB, bw=192044 B/s, iops=46 , runt=3600023msec
fio-bench.3.log: write: io=675108KB, bw=192029 B/s, iops=46 , runt=3600023msec
fio-bench.3.log: read : io=675532KB, bw=192149 B/s, iops=46 , runt=3600025msec
fio-bench.3.log: write: io=676816KB, bw=192515 B/s, iops=47 , runt=3600025msec
IOPS (single drive) = (200 + 164 + 185)/3 = 183
- Calculate the IOPS from the log file
grep -i iops fio-bench.1.log | sed -e 's/=/ /g' -e 's/:/ /g' | awk '{sum=sum+$8} END {print sum}'
CERN Results
| Drive Model | Number of drives | Drive RPM | IOPS |
|---|---|---|---|
| 250GB (WD2503ABYX) | 3 (raid0) | 7200 RPM | 471 |
| 2TB | 2 (raid0) | 5400 RPM | XXX |
| Intel 320 40Gb | 3 (raid0) | N/A | 1832 |
| Intel 320 40Gb | 2 (raid0) | N/A | 870* |
| Intel 320 40Gb | 1 | N/A | 606 |
| Intel 320 160Gb | 2 (raid0) | N/A | 3279 |
| Intel 320 160Gb | 1 | N/A | 1591 |
| Seagate Constellation.2 1TB 2.5" | 1 | 7200 RPM | 332 |
*Below expected; run again when drives are available. This could be due to wear, as this test was run directly after the 3-drive RAID 0 test.
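The per-run sums and the three-run average above were worked out by hand from the grep output; the same bookkeeping can be scripted. A sketch, assuming the fio-bench.N.log naming from the benchmark loop (the LOGDIR variable and the sum_iops helper are conveniences introduced here, not part of the original commands):

```shell
#!/bin/sh
# Where the benchmark loop wrote its logs (override for testing).
LOGDIR=${LOGDIR:-/root/fio}

# Sum the read and write iops values in one fio log. Stripping '=' and
# ':' puts the iops number in field 8 of the fio 2.x summary lines, as
# in the one-liner above; sum+0 prints 0 if the log is missing/empty.
sum_iops() {
    grep -i iops "$1" 2>/dev/null \
        | sed -e 's/=/ /g' -e 's/:/ /g' \
        | awk '{sum += $8} END {print sum + 0}'
}

total=0
for i in 1 2 3; do
    run=$(sum_iops "${LOGDIR}/fio-bench.${i}.log")
    echo "run ${i}: ${run} IOPS"
    total=$((total + run))
done
echo "average: $((total / 3)) IOPS"
```

With the three logs from the runs above in place, the per-run sums should reproduce the hand-computed values (200, 164, 185) and an average of 183.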
General IOPS Tests
I've found fio (http://freshmeat.net/projects/fio/) to be an excellent testing tool for disk systems. To use it, compile it (requires libaio-devel), and then run it as
fio input.fio
For a nice simple IOP test, try this:
[random]
rw=randread
size=4g
directory=/data
iodepth=32
blocksize=4k
numjobs=16
nrfiles=1
group_reporting
ioengine=sync
loops=1
This file will perform 4GB of IO per job into the directory /data, with random reads as the main operation and a block size of 4k (the standard for IOPS measurement), using standard synchronous Unix IO. There are 16 simultaneous jobs doing IO, each using 1 file, so about 64GB of IO in total. Note that with ioengine=sync the iodepth=32 setting has little effect, since the sync engine submits one IO at a time. fio aggregates the information from all jobs into one report, and the workload runs once.
Then, to run the test:
fio file.fio
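As an aside, fio also accepts each job-file option as a long command-line option, so the same job can be run without a file. A sketch of the equivalent invocation (untested; assumes fio is on the PATH and /data exists):

```shell
fio --name=random --rw=randread --size=4g --directory=/data \
    --iodepth=32 --blocksize=4k --numjobs=16 --nrfiles=1 \
    --group_reporting --ioengine=sync --loops=1
```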
CLI FIO
Write IOPS Test (note: this writes directly to /dev/md0 and will destroy any data on it)
fio --filename=/dev/md0 --direct=1 --rw=randwrite --bs=4k --size=16G --numjobs=64 \
--runtime=10 --group_reporting --name=file1
Read IOPS Test
fio --filename=/dev/md0 --direct=1 --rw=randread --bs=4k --size=16G --numjobs=64 \
--runtime=10 --group_reporting --name=file1
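The two tests above can also be combined into a single mixed random-IO job file using fio's rwmixread option. A sketch mirroring the same parameters (the 70/30 read/write split is illustrative, and, as above, writing to /dev/md0 destroys its contents):

```ini
[mixed]
filename=/dev/md0
direct=1
rw=randrw
rwmixread=70
bs=4k
size=16G
numjobs=64
runtime=10
group_reporting
```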