Benchmarking: STFC Storage (GridPP) with Iozone

  • Notes below are based on the 2014 test suite (these tests have been fairly consistent over the years)
  • Using CentOS 6.5
  • Installed Iozone and added it to $PATH (see the build sketch below)
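
A hedged sketch of one way to build Iozone from source on CentOS 6 (the tarball version and URL are assumptions; pick up whatever iozone.org currently ships):

# wget http://www.iozone.org/src/current/iozone3_420.tar
# tar xf iozone3_420.tar
# cd iozone3_420/src/current
# make linux-AMD64
# cp iozone /usr/local/bin/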

Set Up the RAID Array

  • 22-drive RAID6 array with WD SE WD4000F9YS drives
  • E5-2620v2 CPUs
  • 64GB RAM
  • LSI 9261 controller with the following settings (a MegaCli sketch follows this list):
    • RAID Level: 6
    • Stripe Size: 256k
    • Disk Cache Policy: Enabled
    • Read Policy: Always Read Ahead
    • Write Policy: Always Write Back
    • IO Policy: Direct IO
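
A hedged MegaCli sketch for applying and checking these policies from the OS. The enclosure:slot IDs are hypothetical placeholders, and flag spellings vary between MegaCli releases, so check against your version's help output:

# Create the 22-drive RAID6 array: write back even without BBU ("always write
# back"), always read ahead, direct IO, 256k stripe (list all 22 enclosure:slot pairs)
# MegaCli64 -CfgLdAdd -r6 [252:0,252:1,...,252:21] WB RA Direct CachedBadBBU -strpsz256 -a0
# Enable the per-disk write cache, then verify the policies on the logical drive
# MegaCli64 -LDSetProp -EnDskCache -LAll -aAll
# MegaCli64 -LDInfo -LAll -aAll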

Partition and Create FS

[root@localhost scripts]# cat intelligent-partitioner-dp.sh 
#!/bin/bash

# Parted: carve the array into three roughly equal partitions.
# Assumes /dev/sda already carries a GPT label (an array this size exceeds
# the msdos label's 2 TiB partition limit), e.g.: parted -s /dev/sda mklabel gpt
parted -s -a optimal /dev/sda unit % mkpart primary 0% 33%    ; sleep 1
parted -s -a optimal /dev/sda unit % mkpart primary 33% 66%   ; sleep 1
parted -s -a optimal /dev/sda unit % mkpart primary 66% 100%  ; sleep 1

# Give the kernel a moment to settle, then re-read the partition table
sleep 5

partprobe /dev/sda

sleep 5
# Create the XFS filesystems, one per partition
mkfs.xfs -f -l version=2 -i size=1024 -n size=65536 -d su=256k,sw=22 -L castor1 /dev/sda1
sleep 2

mkfs.xfs -f -l version=2 -i size=1024 -n size=65536 -d su=256k,sw=22 -L castor2 /dev/sda2
sleep 2

mkfs.xfs -f -l version=2 -i size=1024 -n size=65536 -d su=256k,sw=22 -L castor3 /dev/sda3
sleep 2
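
Before mounting, a quick sanity check that the partitions and labels came out as intended (both commands work on unmounted devices):

# parted /dev/sda unit % print
# blkid /dev/sda1 /dev/sda2 /dev/sda3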

Mount the Filesystems - add the following to fstab

# fstab
/dev/disk/by-label/castor1     /exportstage/castor1  xfs  logbufs=8,logbsize=256k,noatime,swalloc,inode64  0 2
/dev/disk/by-label/castor2     /exportstage/castor2  xfs  logbufs=8,logbsize=256k,noatime,swalloc,inode64  0 2
/dev/disk/by-label/castor3     /exportstage/castor3  xfs  logbufs=8,logbsize=256k,noatime,swalloc,inode64  0 2
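
The mount points have to exist before mount -a will pick these up; a minimal sequence, with an xfs_info check (which needs the filesystem mounted) to confirm the stripe geometry XFS recorded:

# mkdir -p /exportstage/castor{1,2,3}
# mount -a
# xfs_info /exportstage/castor1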

Tune the System

  • Set the following
    • Readahead
    • Request Queue
    • Scheduler
# To Set
# blockdev --setra 16384 /dev/sda
# echo 512 > /sys/block/sda/queue/nr_requests
# echo deadline > /sys/block/sda/queue/scheduler
[root@localhost dp]# cat check_settings.sh 
#!/bin/bash

echo -en "[readahead]: "
blockdev --getra /dev/sda
echo -en "[nr_requests]: "
cat /sys/block/sda/queue/nr_requests
echo "[scheduler]: "
cat /sys/block/sda/queue/scheduler
[root@localhost dp]# ./check_settings.sh 
[readahead]: 16384
[nr_requests]: 512
[scheduler]: 
noop anticipatory [deadline] cfq
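
These settings are lost on reboot; one hedged way to persist them on CentOS 6 is to append them to /etc/rc.local:

# cat >> /etc/rc.local <<'EOF'
blockdev --setra 16384 /dev/sda
echo 512 > /sys/block/sda/queue/nr_requests
echo deadline > /sys/block/sda/queue/scheduler
EOF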

Run the Tests

[root@localhost dp]# cat run_tests.sh 
#!/bin/bash

# Single-stream tests: four passes of 24 GB sequential write/read per filesystem
for i in {1..3}; do
   for j in {1..4}; do
      iozone -Mce -s24g -r256k -i0 -i1 -f /exportstage/castor${i}/iozone.dat >> single-result1
   done
done
# Repeat for a second result set
for i in {1..3}; do
   for j in {1..4}; do
      iozone -Mce -s24g -r256k -i0 -i1 -f /exportstage/castor${i}/iozone.dat >> single-result2
   done
done


# Multi-stream tests: 12 concurrent threads spread across the three filesystems
for i in {1..3}; do
   iozone -MCce -t12 -s24g -r256k -i0 -i1 -F /exportstage/castor{1,2,3}/iozone{1,2,3,4}.dat \
   >> multi-result1
done

# Second multi-stream result set
for i in {1..3}; do
   iozone -MCce -t12 -s24g -r256k -i0 -i1 -F /exportstage/castor{1,2,3}/iozone{1,2,3,4}.dat \
   >> multi-result2
done

# Mixed-workload tests: 150 threads; -+p sets the read share of the mix,
# so 33% read is write-heavy and 66% read is read-heavy
iozone -MCce -t 150 -s2g -r256k -i0 -i8 -+p 33 -F \
/exportstage/castor{1..3}/iozone{1..50}.dat >> mixed-io-write-heavy-result1

iozone -MCce -t 150 -s2g -r256k -i0 -i8 -+p 33 -F \
/exportstage/castor{1..3}/iozone{1..50}.dat >> mixed-io-write-heavy-result2

iozone -MCce -t 150 -s2g -r256k -i0 -i8 -+p 66 -F \
/exportstage/castor{1..3}/iozone{1..50}.dat >> mixed-io-read-heavy-result1

iozone -MCce -t 150 -s2g -r256k -i0 -i8 -+p 66 -F \
/exportstage/castor{1..3}/iozone{1..50}.dat >> mixed-io-read-heavy-result2

Verify the Results

  • Run the script below with an argument of '1' or '2' to check the outputs from the tests above
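
A minimal, hypothetical sketch of such a checker (it assumes the result-file names used in run_tests.sh and simply pulls Iozone's throughput summary lines; the original wiki script may differ):

#!/bin/bash
# check_results.sh - pass '1' or '2' to pick a result set
run=$1
echo "== single-stream (24 GB rows, i.e. 25165824 kB) =="
grep '^ *25165824' single-result${run}
echo "== multi-stream =="
grep 'Children see throughput' multi-result${run}
echo "== mixed IO =="
grep 'Children see throughput' mixed-io-write-heavy-result${run} mixed-io-read-heavy-result${run}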