Benchmarking: Enmotus VSSD
== Installation ==
wget 'https://storage-usw-121.citrixdata.com/download.ashx?dt=dt01e5e16753d54a0582bd2c47ebdcab3d&h=kaAY37e8F%2bgIL%2fBwJEcSJh5dpepBBXmSpc7IRrIg1jk%3d'
tar xvf Virtual*
./INSTALL.sh

- Create a VirtualSSD and a file system on top
ecmd --create /dev/sdb /dev/sdc
mkfs.xfs /dev/eba
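Before running any benchmarks it is worth confirming that the tier came up as expected. A minimal sanity check, using only the --list and --status subcommands from the ecmd reference reproduced at the end of this page:

# List tiered drives and the physical drives behind them
ecmd --list tdrives
ecmd --list pdrives

# Overall status and device listing
ecmd --status

# The VirtualSSD should also appear as a regular block device
lsblk /dev/eba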
== Hardware info ==

SYS-6019U-TR4
2x Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz
12x 16GB = 192GB
OS drive: SAMSUNG MZ7KM240HAGR-00005
Test SSD: INTEL SSDSC2KB240G7
Test spinning drive: TOSHIBA MG04ACA600E
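The inventory above can be regenerated with standard tools (a sketch; smartctl comes from the smartmontools package, and /dev/sda is assumed to be the OS drive):

dmidecode -s system-product-name   # chassis model, e.g. SYS-6019U-TR4
lscpu | grep 'Model name'          # CPU model
dmidecode -t memory | grep 'Size:' # populated DIMMs
smartctl -i /dev/sda               # OS drive model
smartctl -i /dev/sdb               # test SSD model
smartctl -i /dev/sdc               # test spinning drive model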
== FIO run only on SSD ==
- Write Bandwidth
fio --name=writebw --filename=/dev/sdb --direct=1 --rw=write --bs=1m --numjobs=56 --iodepth=64 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=300 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting | tee -a /home/fio/writebw_sdb.txt
[root@vssd fio]# cat writebw_sdb.txt
writebw: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
...
fio-3.1
Starting 56 processes
writebw: (groupid=0, jobs=56): err= 0: pid=23838: Mon Nov 19 16:23:32 2018
write: IOPS=163, BW=169MiB/s (178MB/s)(50.0GiB/302712msec)
slat (msec): min=1683, max=6170, avg=5447.71, stdev=277.34
clat (msec): min=465, max=17092, avg=15797.09, stdev=2611.51
lat (msec): min=2148, max=22539, avg=21194.49, stdev=2743.59
clat percentiles (msec):
| 1.00th=[ 785], 5.00th=[11879], 10.00th=[16174], 20.00th=[16174],
| 30.00th=[16308], 40.00th=[16308], 50.00th=[16308], 60.00th=[16442],
| 70.00th=[16442], 80.00th=[16442], 90.00th=[16576], 95.00th=[16711],
| 99.00th=[16979], 99.50th=[16979], 99.90th=[16979], 99.95th=[17113],
| 99.99th=[17113]
bw ( KiB/s): min=22170, max=33368, per=18.90%, avg=32756.87, stdev=896.17, samples=3035
iops : min= 21, max= 32, avg=31.82, stdev= 0.94, samples=3035
lat (msec) : 500=0.03%, 750=0.84%, 1000=0.65%, 2000=0.29%, >=2000=101.68%
cpu : usr=0.04%, sys=0.07%, ctx=5194, majf=0, minf=413
IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=1.8%, 32=3.6%, >=64=98.1%
submit : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=98.2%, 32=0.0%, 64=1.8%, >=64=0.0%
issued rwt: total=0,49504,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: bw=169MiB/s (178MB/s), 169MiB/s-169MiB/s (178MB/s-178MB/s), io=50.0GiB (53.7GB), run=302712-302712msec
Disk stats (read/write):
sdb: ios=103/102477, merge=0/0, ticks=2518/44677381, in_queue=44700661, util=100.00%

- Read IOPS test
fio --name=readiops --filename=/dev/sdb --direct=1 --rw=randread --bs=512 --numjobs=56 --iodepth=64 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=300 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting | tee -a /home/fio/readiops_sdb.txt
[root@vssd fio]# cat readiops_sdb.txt
readiops: (g=0): rw=randread, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=64
...
fio-3.1
Starting 56 processes
readiops: (groupid=0, jobs=56): err= 0: pid=53616: Mon Nov 19 16:28:43 2018
read: IOPS=111k, BW=54.3MiB/s (56.9MB/s)(15.9GiB/300011msec)
slat (usec): min=26, max=57723, avg=8033.39, stdev=5524.82
clat (usec): min=3, max=95741, avg=24202.78, stdev=9386.58
lat (usec): min=300, max=104770, avg=32236.41, stdev=10451.12
clat percentiles (usec):
| 1.00th=[ 6652], 5.00th=[ 9372], 10.00th=[12649], 20.00th=[15926],
| 30.00th=[18744], 40.00th=[21103], 50.00th=[23725], 60.00th=[26084],
| 70.00th=[28705], 80.00th=[31851], 90.00th=[36439], 95.00th=[40633],
| 99.00th=[48497], 99.50th=[52167], 99.90th=[58983], 99.95th=[62129],
| 99.99th=[68682]
bw ( KiB/s): min= 737, max= 1306, per=1.80%, avg=999.04, stdev=75.71, samples=33599
iops : min= 1474, max= 2612, avg=1998.53, stdev=151.36, samples=33599
lat (usec) : 4=0.01%, 10=0.01%, 50=0.01%, 100=0.01%, 250=0.01%
lat (usec) : 500=0.03%, 750=0.03%, 1000=0.02%
lat (msec) : 2=0.03%, 4=0.12%, 10=5.67%, 20=29.07%, 50=64.29%
lat (msec) : 100=0.76%
cpu : usr=0.22%, sys=13.59%, ctx=2922454, majf=0, minf=12845
IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.1%, 32=0.1%, >=64=101.6%
submit : 0=0.0%, 4=0.0%, 8=0.1%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwt: total=33331318,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=54.3MiB/s (56.9MB/s), 54.3MiB/s-54.3MiB/s (56.9MB/s-56.9MB/s), io=15.9GiB (17.1GB), run=300011-300011msec
Disk stats (read/write):
sdb: ios=33858930/0, merge=13/0, ticks=48497185/0, in_queue=48664087, util=100.00%

- Read Bandwidth test
fio --name=readbw --filename=/dev/sdb --direct=1 --rw=read --bs=1m --numjobs=56 --iodepth=64 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=300 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting | tee -a /home/fio/readbw_sdb.txt
[root@vssd fio]# cat readbw_sdb.txt
readbw: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
...
fio-3.1
Starting 56 processes
readbw: (groupid=0, jobs=56): err= 0: pid=84459: Mon Nov 19 16:33:57 2018
read: IOPS=419, BW=430MiB/s (451MB/s)(126GiB/301056msec)
slat (msec): min=1819, max=2770, avg=2129.04, stdev=199.20
clat (msec): min=186, max=10394, avg=6305.65, stdev=1042.89
lat (msec): min=2234, max=12763, avg=8430.66, stdev=1136.19
clat percentiles (msec):
| 1.00th=[ 2500], 5.00th=[ 4597], 10.00th=[ 5671], 20.00th=[ 5738],
| 30.00th=[ 5738], 40.00th=[ 5805], 50.00th=[ 6477], 60.00th=[ 6611],
| 70.00th=[ 6745], 80.00th=[ 6879], 90.00th=[ 7215], 95.00th=[ 7684],
| 99.00th=[ 9060], 99.50th=[ 9329], 99.90th=[10000], 99.95th=[10134],
| 99.99th=[10268]
bw ( KiB/s): min=22505, max=33368, per=7.47%, avg=32895.12, stdev=508.24, samples=7916
iops : min= 21, max= 32, avg=31.90, stdev= 0.53, samples=7916
lat (msec) : 250=0.05%, 500=0.65%, 750=0.01%, >=2000=101.42%
cpu : usr=0.00%, sys=0.17%, ctx=12434, majf=0, minf=6427
IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.7%, 32=1.4%, >=64=100.4%
submit : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=99.3%, 32=0.0%, 64=0.7%, >=64=0.0%
issued rwt: total=126336,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=430MiB/s (451MB/s), 430MiB/s-430MiB/s (451MB/s-451MB/s), io=126GiB (136GB), run=301056-301056msec
Disk stats (read/write):
sdb: ios=259033/0, merge=0/0, ticks=44047066/0, in_queue=44049760, util=100.00%

- Write IOPS test
fio --name=writeiops --filename=/dev/sdb --direct=1 --rw=write --bs=512 --numjobs=56 --iodepth=64 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=300 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting | tee -a /home/fio/writeiops_sdb.txt
[root@vssd fio]# cat writeiops_sdb.txt
writeiops: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=64
...
fio-3.1
Starting 56 processes
writeiops: (groupid=0, jobs=56): err= 0: pid=116168: Mon Nov 19 16:39:23 2018
write: IOPS=87.9k, BW=42.9MiB/s (45.0MB/s)(12.6GiB/300041msec)
slat (usec): min=13, max=1459, avg=163.18, stdev=64.11
clat (usec): min=1926, max=184108, avg=40579.31, stdev=11065.01
lat (msec): min=2, max=184, avg=40.74, stdev=11.05
clat percentiles (msec):
| 1.00th=[ 20], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 32],
| 30.00th=[ 35], 40.00th=[ 37], 50.00th=[ 40], 60.00th=[ 43],
| 70.00th=[ 45], 80.00th=[ 49], 90.00th=[ 54], 95.00th=[ 59],
| 99.00th=[ 77], 99.50th=[ 86], 99.90th=[ 106], 99.95th=[ 114],
| 99.99th=[ 130]
bw ( KiB/s): min= 387, max= 1133, per=1.80%, avg=791.86, stdev=109.10, samples=33600
iops : min= 774, max= 2267, avg=1584.15, stdev=218.22, samples=33600
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=1.15%, 50=82.82%
lat (msec) : 100=15.88%, 250=0.16%
cpu : usr=0.23%, sys=1.93%, ctx=1303766, majf=0, minf=4802
IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.1%, 32=0.1%, >=64=101.4%
submit : 0=0.0%, 4=0.1%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwt: total=0,26384737,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: bw=42.9MiB/s (45.0MB/s), 42.9MiB/s-42.9MiB/s (45.0MB/s-45.0MB/s), io=12.6GiB (13.5GB), run=300041-300041msec
Disk stats (read/write):
sdb: ios=123/337114, merge=0/26267223, ticks=1672/14584030, in_queue=14678340, util=100.00%
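All four tests differ only in the job name, I/O pattern, and block size, so the whole suite can be driven by one small script before moving on to the tiered device below. A minimal sketch, assuming the same fio 3.x binary and the /home/fio output directory used throughout this page (the script name is illustrative):

#!/bin/bash
# fio_suite.sh -- run the four benchmark variants against one device
# Usage: ./fio_suite.sh /dev/sdb   (or /dev/eba)
dev=$1
tag=$(basename "$dev")
declare -A tests=([writebw]="write 1m" [readiops]="randread 512" [readbw]="read 1m" [writeiops]="write 512")
for name in writebw readiops readbw writeiops; do
    read -r rw bs <<< "${tests[$name]}"
    fio --name="$name" --filename="$dev" --direct=1 --rw="$rw" --bs="$bs" \
        --numjobs=56 --iodepth=64 --iodepth_batch=16 --iodepth_batch_complete=16 \
        --runtime=300 --ramp_time=5 --norandommap --time_based \
        --ioengine=libaio --group_reporting | tee -a "/home/fio/${name}_${tag}.txt"
done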
== FIO run on the virtual SSD ==

- Write Bandwidth
fio --name=writebw --filename=/dev/eba --direct=1 --rw=write --bs=1m --numjobs=56 --iodepth=64 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=300 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting | tee -a /home/fio/writebw_eba.txt
[root@vssd fio]# cat writebw_eba.txt
writebw: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
...
fio-3.1
Starting 56 processes
writebw: (groupid=0, jobs=56): err= 0: pid=314067: Mon Nov 19 14:55:02 2018
write: IOPS=160, BW=167MiB/s (175MB/s)(49.9GiB/306340msec)
slat (msec): min=5015, max=6617, avg=5541.87, stdev=293.99
clat (msec): min=241, max=21576, avg=16088.65, stdev=2658.47
lat (msec): min=5610, max=24936, avg=21576.05, stdev=2778.03
clat percentiles (msec):
| 1.00th=[ 776], 5.00th=[11610], 10.00th=[16174], 20.00th=[16174],
| 30.00th=[16308], 40.00th=[16308], 50.00th=[16308], 60.00th=[16442],
| 70.00th=[16442], 80.00th=[16711], 90.00th=[17113], 95.00th=[17113],
| 99.00th=[17113], 99.50th=[17113], 99.90th=[17113], 99.95th=[17113],
| 99.99th=[17113]
bw ( KiB/s): min=22170, max=33368, per=19.24%, avg=32839.70, stdev=831.08, samples=3022
iops : min= 21, max= 32, avg=31.85, stdev= 0.85, samples=3022
lat (msec) : 250=0.03%, 500=0.10%, 750=0.58%, 1000=0.97%, 2000=0.13%
lat (msec) : >=2000=101.79%
cpu : usr=0.04%, sys=0.08%, ctx=4783, majf=0, minf=415
IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=1.8%, 32=3.6%, >=64=98.2%
submit : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=98.2%, 32=0.0%, 64=1.8%, >=64=0.0%
issued rwt: total=0,49280,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=49.9GiB (53.6GB), run=306340-306340msec
Disk stats (read/write):
eba: ios=91/102076, merge=0/0, ticks=4617/62579354, in_queue=62610962, util=100.00%

- Read IOPS test
fio --name=readiops --filename=/dev/eba --direct=1 --rw=randread --bs=512 --numjobs=56 --iodepth=64 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=300 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting --random_generator=tausworthe64 | tee -a /home/fio/readiops_eba.txt
[root@vssd fio]# cat readiops_eba.txt
readiops: (g=0): rw=randread, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=64
...
fio-3.1
Starting 56 processes
readiops: (groupid=0, jobs=56): err= 0: pid=7460: Mon Nov 19 15:22:47 2018
read: IOPS=225, BW=116KiB/s (119kB/s)(35.1MiB/309914msec)
slat (usec): min=57, max=8238.9k, avg=3963433.56, stdev=3925150.29
clat (usec): min=70, max=17708k, avg=11563971.95, stdev=4266653.26
lat (msec): min=4300, max=24763, avg=15482.93, stdev=5603.56
clat percentiles (usec):
| 1.00th=[ 180], 5.00th=[ 7683965], 10.00th=[ 7751074],
| 20.00th=[ 7818183], 30.00th=[ 7885292], 40.00th=[ 8019510],
| 50.00th=[ 8220836], 60.00th=[15636366], 70.00th=[15770584],
| 80.00th=[15770584], 90.00th=[15904801], 95.00th=[16039019],
| 99.00th=[16173237], 99.50th=[16173237], 99.90th=[16173237],
| 99.95th=[16173237], 99.99th=[16575890]
bw ( KiB/s): min= 10, max= 32, per=26.77%, avg=31.06, stdev= 3.78, samples=2221
iops : min= 21, max= 65, avg=62.31, stdev= 7.53, samples=2221
lat (usec) : 100=0.01%, 250=1.18%
lat (msec) : 750=0.05%, 1000=0.21%, 2000=0.96%, >=2000=100.54%
cpu : usr=0.00%, sys=0.00%, ctx=2972, majf=0, minf=567
IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=1.3%, 32=2.6%, >=64=99.2%
submit : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=98.7%, 32=0.0%, 64=1.3%, >=64=0.0%
issued rwt: total=69856,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=116KiB/s (119kB/s), 116KiB/s-116KiB/s (119kB/s-119kB/s), io=35.1MiB (36.8MB), run=309914-309914msec
Disk stats (read/write):
eba: ios=71998/0, merge=0/0, ticks=62808578/0, in_queue=62810941, util=100.00%
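At roughly 225 IOPS, random 512-byte reads on the tiered device come in far below the raw SSD's ~111k. That is plausible for a cold tier: with --norandommap the reads span the whole LBA range, so most of them land on the spinning tier before the promote engine has had a chance to move anything. Promotion can be observed while a test runs with the --stats subcommands documented in the ecmd reference below (a sketch; t=0 addresses the first VirtualSSD and may differ on a given system):

# Host I/O counters and fast/slow tier access counts
ecmd --stats hostio t=0
ecmd --stats count t=0

# Pending promotions and recent promote history
ecmd --stats promote_queue t=0
ecmd --stats promote_hist t=0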
- Read Bandwidth test

fio --name=readbw --filename=/dev/eba --direct=1 --rw=read --bs=1m --numjobs=56 --iodepth=64 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=300 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting | tee -a /home/fio/readbw_eba.txt
[root@vssd fio]# cat readbw_eba.txt
readbw: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
...
fio-3.1
Starting 56 processes
readbw: (groupid=0, jobs=56): err= 0: pid=372172: Mon Nov 19 15:05:40 2018
read: IOPS=427, BW=438MiB/s (459MB/s)(129GiB/301785msec)
slat (msec): min=1834, max=2654, avg=2091.08, stdev=167.42
clat (msec): min=147, max=9580, avg=6193.97, stdev=755.18
lat (msec): min=2264, max=11904, avg=8281.51, stdev=840.18
clat percentiles (msec):
| 1.00th=[ 2500], 5.00th=[ 5671], 10.00th=[ 5671], 20.00th=[ 5738],
| 30.00th=[ 5738], 40.00th=[ 6208], 50.00th=[ 6275], 60.00th=[ 6409],
| 70.00th=[ 6544], 80.00th=[ 6678], 90.00th=[ 6812], 95.00th=[ 7013],
| 99.00th=[ 7550], 99.50th=[ 7550], 99.90th=[ 7617], 99.95th=[ 7617],
| 99.99th=[ 8792]
bw ( KiB/s): min=22170, max=33368, per=7.32%, avg=32858.29, stdev=336.11, samples=8066
iops : min= 21, max= 32, avg=31.92, stdev= 0.39, samples=8066
lat (msec) : 250=0.09%, 500=0.61%, >=2000=101.39%
cpu : usr=0.00%, sys=0.21%, ctx=13263, majf=0, minf=11813
IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.7%, 32=1.4%, >=64=100.5%
submit : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=99.3%, 32=0.0%, 64=0.7%, >=64=0.0%
issued rwt: total=129024,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=438MiB/s (459MB/s), 438MiB/s-438MiB/s (459MB/s-459MB/s), io=129GiB (139GB), run=301785-301785msec
Disk stats (read/write):
eba: ios=264431/0, merge=0/0, ticks=61102876/0, in_queue=61131392, util=100.00%

- Write IOPS test
fio --name=writeiops --filename=/dev/eba --direct=1 --rw=write --bs=512 --numjobs=56 --iodepth=64 --iodepth_batch=16 --iodepth_batch_complete=16 --runtime=300 --ramp_time=5 --norandommap --time_based --ioengine=libaio --group_reporting | tee -a /home/fio/writeiops_eba.txt
[root@vssd fio]# cat writeiops_eba.txt
writeiops: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=64
...
fio-3.1
Starting 56 processes
writeiops: (groupid=0, jobs=56): err= 0: pid=285042: Mon Nov 19 14:49:35 2018
write: IOPS=80.1k, BW=39.1MiB/s (41.0MB/s)(11.5GiB/300038msec)
slat (usec): min=21, max=1065, avg=99.17, stdev=53.70
clat (msec): min=2, max=225, avg=44.63, stdev=12.38
lat (msec): min=2, max=225, avg=44.73, stdev=12.37
clat percentiles (msec):
| 1.00th=[ 22], 5.00th=[ 28], 10.00th=[ 32], 20.00th=[ 36],
| 30.00th=[ 39], 40.00th=[ 42], 50.00th=[ 44], 60.00th=[ 46],
| 70.00th=[ 49], 80.00th=[ 53], 90.00th=[ 59], 95.00th=[ 66],
| 99.00th=[ 88], 99.50th=[ 97], 99.90th=[ 117], 99.95th=[ 126],
| 99.99th=[ 144]
bw ( KiB/s): min= 367, max= 1107, per=1.80%, avg=719.52, stdev=101.41, samples=33586
iops : min= 734, max= 2215, avg=1439.45, stdev=202.83, samples=33586
lat (msec) : 4=0.01%, 10=0.01%, 20=0.63%, 50=73.59%, 100=25.40%
lat (msec) : 250=0.39%
cpu : usr=0.24%, sys=1.21%, ctx=1297000, majf=0, minf=3179
IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.1%, 32=0.1%, >=64=101.5%
submit : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwt: total=0,24034784,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=11.5GiB (12.3GB), run=300038-300038msec
Disk stats (read/write):
sdb: ios=171/301097, merge=0/23997250, ticks=4519/14449240, in_queue=14523838, util=100.00%

== ecmd command reference ==

[root@vssd home]# ecmd --help
Help:
The following are used for creating or deleting Enmotus
virtual tiered drives and VirtualSSDs:
Usage: ecmd --create <fd> <sd> {init} create a VirtualSSD
<fd> fast device e.g. /dev/sdb or vdrive0
<sd> slow device e.g. /dev/sdc or vdrive1
one drive may have existing data
to be preserved, the other none
{init} fast_first place fast device at LBA 0
fast_last place slow device at LBA 0
fast_split place 20% of fast device at LBA 0
reserve place slow device at LBA 0 and reserve SSD space so SSD
can later be upgraded
reserve_last place fast device at LBA 0 and reserve HDD space so HDD
can later be upgraded
copy_tier replicate instead of promote active data
onto the fast tier
--create vdrive <drivelist> {option}
{option} linear or stripe, linear default
create a vdrive vdriveM
second call to create vdriveN
drivelist list of device e.g. /dev/sdb
--create vdrive /dev/sdX {sizeGiB} create vdrive of sizeGiB, smaller than /dev/sdX
--create {init} create a VirtualSSD using two vdrives already created
--create vdriveX vdriveY {init}
VirtualSSD from vdrives already created
convention is first vdrive is the fast device
--create single <dev> create a single drive VirtualSSD
<dev> may be a drive or a vdrive
<dev> device e.g. /dev/sdb or vdrive2
--convert <VirtualSSD> <mode>
add or remove SSD tier without losing data
mode options: single, tiered, reserve, full_tier, copy_tier, stop, status
--convert <VirtualSSD> single
--convert <VirtualSSD> single {--release}
--release removes the vdrive that is available at convert to single
--convert <VirtualSSD> tiered vdriveN
--convert <VirtualSSD> tiered driveM
--convert <VirtualSSD> copy_tier vdriveN
--convert <VirtualSSD> copy_tier driveM
--delete <VirtualSSD>|<vdrive>|<pdrive>
delete VirtualSSD e.g. /dev/eba or t=n
delete vdrive e.g. vdrive3
delete pdrive e.g. drive2
--delete_all delete all VirtualSSDs and vdrives, clean the metadata
on all pdrives
The following commands are used for managing existing VirtualSSDs:
--list <option> list system block devices
<option> tdrives, vdrives, pdrives, luns
* boot drive, > in a tier
# not seen by driver, ! USB device not supported
+ in pDrives means more than one vdrive contains this pdrive
--status get status and device listing
--promote return global promote mode
--promote <mode> set global promote mode
--promote <dev> return promote mode for /dev/ebX or t=n
--promote <mode> <dev> set promote mode for driveN or t=n
<mode> aggressive maximum promote rate
normal normal promote rate
slow slow promote rate
on turn promote engine on
off turn promote engine off
--policy <dev> return promote policy for /dev/ebX or t=n
--policy <mode> <dev> set promote policy for /dev/ebX or t=n
<mode> rdio promote on reads activity only
rwio promote reads or write activity
rdblock promote on read block activity only
rwblock promote on read and write block activity
--drivetype <mode> set global drivetype to specify trim support
--drivetype <dev> return drivetype that specifies trim support for /dev/ebX or t=n
--drivetype <mode> <dev> set drivetype to specify trim support for /dev/ebX or t=n
<mode> pdd (default) use the underlying physical data disks
i.e. if there is an SSD physical device in the mix
declare VirtualSSD as SSD otherwise, it is a HDD
ssd declare the VirtualSSD as an SSD virtual disk
hdd declare the VirtualSSD as a non-SSD virtual disk
--activitylog {on|off|writelog} control activity logging
--activitylog returns the state of activity log
--sectorsize return global sectorsize
--sectorsize <VirtualSSD> return sectorsize for t=n or /dev/ebX
--sectorsize 512|4096|4K|auto set global sectorsize
4K and 4096 set the same size
if auto then tier sectorsize depends on drives in tier
--stats show global statistics on or off
--stats on | off turn global statistics on or off
--stats <VirtualSSD> show statistics on or off for VirtualSSD or t=n
--stats <VirtualSSD> on | off turn statistics on or off for VirtualSSD or t=n
--stats <item> <VirtualSSD> show <item> statistics for VirtualSSD or t=n
<item> promote_queue show promote queue
promote_hist show recent promote history
promote_count show page region new promote counts
count show fast/slow tier access counts
map show current fast-slow tier mapping
hostio show host io stats
on turn on statistics gathering, on by default
off turn off statistics gathering
lock shows how many pages are locked in the fast and slow tiers
convert shows how many pages are left to migrate during --convert
--reset error reset VSP error flags
--reset error <VirtualSSD> reset VirtualSSD error flags
--reset stats <VirtualSSD> reset VirtualSSD statistics counters
--help prints out help
--clean <drive> zero the first 2048 sectors and last 2048 sectors of a drive
--rescan do a drive discovery
--pagesize return global pagesize
--pagesize 4M | 2M | 1M | 512K | 256K | 128K | 64K multiple [minimum 128K]
set the default page size for VirtualSSDs
--pagesize raidstripesize=X raidnumdrives=N raidlevel=R
X is in K, e.g. 256K
if R==0 pagesize_initial = X*N
if R==1 pagesize_initial = X
if R==3,4,5 pagesize_initial = X * (N-1)
if R==6 pagesize_initial = X* (N-2)
--pagesize <VirtualSSD> return pagesize for t=n or /dev/ebX
--scantime return the promote scan interval in milliseconds
--scantime 25S set the promote scan interval
5M no unit means seconds
2H
1D max is 7 Days
--attach t=m mount the VirtualSSD to become a drive in the system
--attach vdriveM mount the vdrive to become a drive in the system
--detach t=m {--force} unmount the VirtualSSD so that it is removed from the system
--detach /dev/ebX {--force} unmount the VirtualSSD so it is removed from the system
Miscellaneous Commands:
--license information about current license
--license <actcode> upgrade current license
--license <actcode> --server upgrade current license using local server
--license <actcode> --offline create a file to email to Enmotus to
request more capabilities
actCode can be XXXX-YYYY-PPPP-QQQQ or
refresh, return, support, reset
--license <pathname/file_name> --activate upgrade current license using
file generated by Enmotus
using file from --offline
--update_meta update the metadata on the drives
--check <VirtualSSD> {drivelist} Check the VirtualSSD metadata vmap tables
--config <VirtualSSD> display VirtualSSD device hierarchy
--config display VirtualSSD device hierarchy for all VirtualSSDs
--beacon <drive> do reads from this drive so drive's LED will light
--dumplog generate a zip file of logs and place on Enmotus FTP site
--syslog generate a log, syslog.log, of low level commands
Examples:
a) Creating a simple VirtualSSD from two raw block devices sdb and sdc, fast drive is sdb
ecmd --create /dev/sdb /dev/sdc
b) Creating a striped SSD (sdb,c) and striped HDD (sde,f,g,h) VirtualSSD
ecmd --create vdrive /dev/sd[bc] stripe
ecmd --create vdrive /dev/sd[efgh] stripe
ecmd --create vdrive0 vdrive1
[Note: If other VirtualSSDs exist, vdrive0, 1 may be different. Use --list to verify]
ecmd --create
[Note: If only two vdrives exist that are not part of a VirtualSSD, they will be used]
c) Deleting a VirtualSSD /dev/ebb
ecmd --delete /dev/ebb
ecmd log file at /var/log/entier/ecmd.log
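For reference, a minimal end-to-end session built only from commands shown in the help above (a sketch; device names match this page's setup, and t=0 is assumed to be the newly created VirtualSSD):

# Create a tiered VirtualSSD: fast device first, slow device second
ecmd --create /dev/sdb /dev/sdc

# Promote on both read and write activity, at the normal rate
ecmd --policy rwio /dev/eba
ecmd --promote normal t=0

# Make a file system and mount it
mkfs.xfs /dev/eba
mount /dev/eba /mnt

# Tear down when done (removes the VirtualSSD and cleans its metadata)
umount /mnt
ecmd --delete /dev/eba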