Ceph RADOS benchmarking


Perform a 10-second write benchmarking test. List the available pools first; note that the data pool does not exist on this cluster, so the test is run against pool1 instead:

[root@ceph-node1 ~]# ceph osd lspools
0 rbd,1 pool1,2 cache-pool,4 volumes,5 images,6 backups,7 vms,
[root@ceph-node1 ~]# rados bench -p data 10 write --no-cleanup
error opening pool data: (2) No such file or directory
[root@ceph-node1 ~]# rados bench -p pool1 10 write --no-cleanup
 Maintaining 16 concurrent writes of 4194304 bytes for up to 10 seconds or 0 objects
 Object prefix: benchmark_data_ceph-node1_5701
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
     1      16        25         9   35.9899        36  0.964923  0.663588
     2      16        47        31   61.9867        88  0.641149  0.775297
     3      16        73        57   75.9853       104  0.510372   0.79535
     4      16        91        75   74.9866        72  0.338181  0.745966
     5      16       110        94   75.1873        76  0.316483  0.763226
     6      16       123       107   71.3217        52   1.39821  0.827904
     7      16       142       126   71.9888        76  0.435153  0.835862
     8      16       159       143   71.4891        68  0.794159   0.82537
     9      16       169       153   67.9893        40   1.65146   0.84532
    10      16       189       173   69.1892        80  0.934781  0.880206
 Total time run:         10.677784
Total writes made:      190
Write size:             4194304
Bandwidth (MB/sec):     71.176 

Stddev Bandwidth:       28.9602
Max bandwidth (MB/sec): 104
Min bandwidth (MB/sec): 0
Average Latency:        0.890925
Stddev Latency:         0.424522
Max latency:            2.6072
Min latency:            0.26615
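The --no-cleanup option leaves the benchmark objects in the pool, which is required so that the read tests below have something to read back. The object size and the number of concurrent operations can also be adjusted with -b and -t (the defaults are 4 MB objects and 16 concurrent operations). As an illustrative example only, not part of the run above, a 10-second write test using 32 concurrent 8 KB writes would look like:

rados bench -p pool1 10 write -b 8192 -t 32 --no-cleanup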

Perform a sequential read benchmarking test on the same pool (pool1), reading back the objects left by the write test:

[root@ceph-node1 ~]# rados bench -p pool1 10 seq
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
     1      16        44        28   111.883       112  0.967809  0.261794
     2      16        63        47   93.9437        76   1.72344  0.471426
     3      16        81        65   86.6278        72  0.572271  0.510613
     4      16       107        91    90.966       104  0.791833  0.612809
     5      16       136       120   95.9688       116   1.16369  0.591472
     6      16       158       142   94.6388        88  0.323602   0.58421
     7      16       185       169   96.5452       108  0.505959  0.582252
 Total time run:        7.787027
Total reads made:     190
Read size:            4194304
Bandwidth (MB/sec):    97.598 

Average Latency:       0.646829
Max latency:           2.82743
Min latency:           0.00551059
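Because the objects were written only moments earlier, read results can be inflated by the page cache on the OSD hosts. For cold-cache numbers, one common approach (shown here as a suggestion, not part of the run above) is to sync and drop the kernel caches as root on every OSD node before starting the read test:

sync; echo 3 > /proc/sys/vm/drop_caches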

Perform a random read benchmarking test on the same pool (pool1):

[root@ceph-node1 ~]# rados bench -p pool1 10 rand
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
     1      16        48        32   127.944       128 0.0116537  0.322367
     2      16        89        73   145.956       164  0.427142  0.370509
     3      16       125       109   145.297       144  0.796186  0.393411
     4      16       167       151   150.966       168 0.00915591  0.390515
     5      16       209       193   154.367       168  0.369405  0.388701
     6      16       245       229   152.636       144   0.97057  0.394407
     7      16       283       267   152.543       152 0.0062208  0.396334
     8      16       308       292   145.974       100  0.554891   0.40978
     9      16       337       321   142.641       116  0.843427   0.42286
    10      16       365       349   139.575       112   1.00341  0.432433
    11       8       366       358   130.159        36   0.73124  0.449556
 Total time run:        11.110171
Total reads made:     366
Read size:            4194304
Bandwidth (MB/sec):    131.771 

Average Latency:       0.475776
Max latency:           2.375
Min latency:           0.00497015
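Since the write test was run with --no-cleanup, the benchmark objects (prefix benchmark_data_ceph-node1_5701) are still stored in pool1 after the read tests finish. When they are no longer needed, they can be removed with the rados cleanup subcommand, for example:

rados -p pool1 cleanup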