Ceph Cache tier
On ceph-node1
126 ceph osd getcrushmap -o crushmapdump
127 crushtool -d crushmapdump -o crushmapdump-decompiled
128 vim crushmapdump-decompiled
129 crushtool -c crushmapdump-decompiled -o crushmapdump-compiled
130 ceph osd setcrushmap -i crushmapdump-compiled
131 ceph osd tree
132 ceph osd pool create cache-pool 32 32
134 ceph osd pool set cache-pool crush_ruleset 4
135 rados -p cache-pool ls
136 rados -p cache-pool put object1 /etc/hosts
137 rados -p cache-pool ls
138 ceph osd map cache-pool object1
140 rados -p cache-pool rm object1
141 history

At line 128 above (editing crushmapdump-decompiled), the following content should be added to the decompiled CRUSH map:
root cache {
	id -5		# do not change unnecessarily
	# weight 8.880
	alg straw
	hash 0	# rjenkins1
	item osd.3 weight 0.010
	item osd.6 weight 0.010
	item osd.10 weight 0.010
}
rule cache-pool {
	ruleset 4
	type replicated
	min_size 1
	max_size 10
	step take cache
	step chooseleaf firstn 0 type osd
	step emit
}
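
To confirm the edited rule really maps PGs only onto osd.3, osd.6 and osd.10, the compiled map and the injected rule can be checked before relying on them. A minimal sketch, assuming the map was compiled and injected as in lines 129-130 (the --num-rep value here is only an example; use the pool's intended replica count):

crushtool -i crushmapdump-compiled --test --rule 4 --num-rep 2 --show-mappings
ceph osd crush rule dump cache-pool

The test output should list only osd.3, osd.6 and osd.10, and the rule dump should report ruleset 4, matching the crush_ruleset set on cache-pool at line 134.
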
Creating a cache tier (on ceph-node1):
161 ceph osd tier add EC-pool cache-pool
162 ceph osd tier cache-mode cache-pool writeback
163 ceph osd tier set-overlay EC-pool cache-pool
164 ceph osd dump | egrep -i "EC-pool|cache-pool"

Configure the cache tier:
ceph osd pool set cache-pool hit_set_type bloom
ceph osd pool set cache-pool hit_set_count 1
ceph osd pool set cache-pool hit_set_period 300
ceph osd pool set cache-pool target_max_bytes 1000000
ceph osd pool set cache-pool target_max_objects 10000
ceph osd pool set cache-pool cache_min_flush_age 300
ceph osd pool set cache-pool cache_min_evict_age 300
ceph osd pool set cache-pool cache_target_dirty_ratio .01
ceph osd pool set cache-pool cache_target_full_ratio .2
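
Note that target_max_bytes of 1000000 is only about 1 MB, so with a dirty ratio of .01 and a full ratio of .2 the flush/evict thresholds are reached after a few hundred kilobytes (subject to the 300-second min flush/evict ages). That is convenient for a quick test but far too small for real use. The values can be read back with ceph osd pool get, for example:

ceph osd pool get cache-pool hit_set_period
ceph osd pool get cache-pool target_max_bytes
ceph osd pool get cache-pool cache_target_dirty_ratio
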
Test the cache tier:
dd if=/dev/zero of=/tmp/file1 bs=1M count=500
rados -p EC-pool put object1 /tmp/file1
rados -p EC-pool ls
rados -p cache-pool ls
rados -p EC-pool put object2 /tmp/file1

Note: when using this with OpenStack, the OpenStack client users (cinder, glance, etc.) also need permission on the cache pool, not only on the backing pool.
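
A sketch of that permission fix, assuming a hypothetical client.glance key that already has access to EC-pool (the exact cap strings are illustrative): ceph auth caps replaces the entire capability string, so the existing caps must be restated with the cache pool added.

ceph auth get client.glance
ceph auth caps client.glance mon 'allow r' osd 'allow rwx pool=EC-pool, allow rwx pool=cache-pool'
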