Working with CRUSH for SSD and HDD pools

== CRUSH Rules ==

In this scenario we have a few HCI nodes running the standard volumes for OpenStack, plus two additional servers: 1x HDD server and 1x SSD server. We are going to create CRUSH rules to isolate these systems and drives into separate pools in Ceph. At this point it is assumed that you have a Ceph cluster spun up and configured and that all OSDs are added and online.
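Ceph normally detects the device class (hdd/ssd/nvme) of each OSD on its own. If an OSD ever ends up with the wrong class, a minimal sketch of reassigning it looks like the following (osd.24 is just an example ID, not something that needed fixing in this cluster):

```bash
# drop the auto-detected class, then assign the desired one (example OSD id)
ceph osd crush rm-device-class osd.24
ceph osd crush set-device-class ssd osd.24
```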

```bash

# this command could also be: ceph osd crush tree
[root@ukr-dpy1 ~]# ceph osd tree
ID  CLASS  WEIGHT       TYPE NAME                   STATUS  REWEIGHT  PRI-AFF

-1         1489.06531  root default                                        
-7           43.65807      host ukr-hci1-n1-mlnx                           
 7    hdd    14.55269          osd.7                  up   1.00000  1.00000
15    hdd    14.55269          osd.15                 up   1.00000  1.00000
23    hdd    14.55269          osd.23                 up   1.00000  1.00000
-9           43.65807      host ukr-hci1-n2-mlnx                           
 6    hdd    14.55269          osd.6                  up   1.00000  1.00000
14    hdd    14.55269          osd.14                 up   1.00000  1.00000
22    hdd    14.55269          osd.22                 up   1.00000  1.00000
-11          43.65807      host ukr-hci1-n3-mlnx
 5    hdd    14.55269          osd.5                  up   1.00000  1.00000
11    hdd    14.55269          osd.11                 up   1.00000  1.00000
21    hdd    14.55269          osd.21                 up   1.00000  1.00000
-17          43.65807      host ukr-hci1-n4-mlnx
 2    hdd    14.55269          osd.2                  up   1.00000  1.00000
13    hdd    14.55269          osd.13                 up   1.00000  1.00000
19    hdd    14.55269          osd.19                 up   1.00000  1.00000
-13          43.65807      host ukr-hci2-n1-mlnx
 0    hdd    14.55269          osd.0                  up   1.00000  1.00000
12    hdd    14.55269          osd.12                 up   1.00000  1.00000
17    hdd    14.55269          osd.17                 up   1.00000  1.00000
-15          43.65807      host ukr-hci2-n2-mlnx
 1    hdd    14.55269          osd.1                  up   1.00000  1.00000
10    hdd    14.55269          osd.10                 up   1.00000  1.00000
16    hdd    14.55269          osd.16                 up   1.00000  1.00000
-3           43.65807      host ukr-hci2-n3-mlnx                           
 3    hdd    14.55269          osd.3                  up   1.00000  1.00000
 8    hdd    14.55269          osd.8                  up   1.00000  1.00000
18    hdd    14.55269          osd.18                 up   1.00000  1.00000
-5           43.65807      host ukr-hci2-n4-mlnx                           
 4    hdd    14.55269          osd.4                  up   1.00000  1.00000
 9    hdd    14.55269          osd.9                  up   1.00000  1.00000
20    hdd    14.55269          osd.20                 up   1.00000  1.00000
-31         902.26672      host ukr-hdd1
58    hdd    14.55269          osd.58                 up   1.00000  1.00000
<snip for space>
115    hdd    14.55269          osd.115                up   1.00000  1.00000
116    hdd    14.55269          osd.116                up   1.00000  1.00000
117    hdd    14.55269          osd.117                up   1.00000  1.00000
118    hdd    14.55269          osd.118                up   1.00000  1.00000
119    hdd    14.55269          osd.119                up   1.00000  1.00000
-28         237.53412      host ukr-ssd1-mlnx
24    ssd     6.98630          osd.24                 up   1.00000  1.00000
53    ssd     6.98630          osd.53                 up   1.00000  1.00000

<snip for space>

54    ssd     6.98630          osd.54                 up   1.00000  1.00000
55    ssd     6.98630          osd.55                 up   1.00000  1.00000
56    ssd     6.98630          osd.56                 up   1.00000  1.00000
57    ssd     6.98630          osd.57                 up   1.00000  1.00000

```
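As an aside, Ceph also maintains per-class "shadow" trees (e.g. default~hdd, default~ssd) that class-aware rules select from. You can display them alongside the normal hierarchy with:

```bash
# show the class-specific shadow buckets in addition to the regular tree
ceph osd crush tree --show-shadow
```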

Confirm the device classes that Ceph is aware of:

```bash
[root@ukr-dpy1 ~]# ceph osd crush class ls
[
    "hdd",
    "ssd"
]
```
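If you want to see exactly which OSDs belong to each class, the following should work as well:

```bash
# list the OSD ids assigned to a given device class
ceph osd crush class ls-osd ssd
ceph osd crush class ls-osd hdd
```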

Let's confirm the CRUSH rule(s) already in place:

```bash
[root@ukr-dpy1 ~]# ceph osd crush rule list
replicated_rule
[root@ukr-dpy1 ~]# ceph osd crush rule dump replicated_rule
{
   "rule_id": 0,
   "rule_name": "replicated_rule",
   "ruleset": 0,
   "type": 1,
   "min_size": 1,
   "max_size": 10,
   "steps": [
       {
           "op": "take",
           "item": -1,
           "item_name": "default"
       },
       {
           "op": "chooseleaf_firstn",
           "num": 0,
           "type": "host"
       },
       {
           "op": "emit"
       }
   ]

}

```

Let's try out crushtool to pull the CRUSH map from the running cluster and decompile it into plain text:

```bash
[root@ukr-dpy1 ~]# ceph osd getcrushmap -o /tmp/crush.bin
201
[root@ukr-dpy1 ~]# crushtool -d /tmp/crush.bin -o /tmp/crush.ascii
[root@ukr-dpy1 ~]# cat /tmp/crush.ascii
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class hdd
device 9 osd.9 class hdd
device 10 osd.10 class hdd
device 11 osd.11 class hdd
device 12 osd.12 class hdd
device 13 osd.13 class hdd
device 14 osd.14 class hdd
device 15 osd.15 class hdd
device 16 osd.16 class hdd
device 17 osd.17 class hdd
device 18 osd.18 class hdd
device 19 osd.19 class hdd
device 20 osd.20 class hdd
device 21 osd.21 class hdd
device 22 osd.22 class hdd
device 23 osd.23 class hdd
device 24 osd.24 class ssd
device 25 osd.25 class ssd
device 26 osd.26 class ssd
device 27 osd.27 class ssd
device 28 osd.28 class ssd
device 29 osd.29 class ssd
device 30 osd.30 class ssd
device 31 osd.31 class ssd
device 32 osd.32 class ssd
device 33 osd.33 class ssd
device 34 osd.34 class ssd
device 35 osd.35 class ssd
device 36 osd.36 class ssd
device 37 osd.37 class ssd
device 38 osd.38 class ssd
device 39 osd.39 class ssd
device 40 osd.40 class ssd
device 41 osd.41 class ssd
device 42 osd.42 class ssd
device 43 osd.43 class ssd
device 44 osd.44 class ssd
device 45 osd.45 class ssd
device 46 osd.46 class ssd
device 47 osd.47 class ssd
device 48 osd.48 class ssd
device 49 osd.49 class ssd
device 50 osd.50 class ssd
device 51 osd.51 class ssd
device 52 osd.52 class ssd
device 53 osd.53 class ssd
device 54 osd.54 class ssd
device 55 osd.55 class ssd
device 56 osd.56 class ssd
device 57 osd.57 class ssd
device 58 osd.58 class hdd
device 59 osd.59 class hdd
device 60 osd.60 class hdd
device 61 osd.61 class hdd
device 62 osd.62 class hdd
device 63 osd.63 class hdd
device 64 osd.64 class hdd
device 65 osd.65 class hdd
device 66 osd.66 class hdd
device 67 osd.67 class hdd
device 68 osd.68 class hdd
device 69 osd.69 class hdd
device 70 osd.70 class hdd
device 71 osd.71 class hdd
device 72 osd.72 class hdd
device 73 osd.73 class hdd
device 74 osd.74 class hdd
device 75 osd.75 class hdd
device 76 osd.76 class hdd
device 77 osd.77 class hdd
device 78 osd.78 class hdd
device 79 osd.79 class hdd
device 80 osd.80 class hdd
device 81 osd.81 class hdd
device 82 osd.82 class hdd
device 83 osd.83 class hdd
device 84 osd.84 class hdd
device 85 osd.85 class hdd
device 86 osd.86 class hdd
device 87 osd.87 class hdd
device 88 osd.88 class hdd
device 89 osd.89 class hdd
device 90 osd.90 class hdd
device 91 osd.91 class hdd
device 92 osd.92 class hdd
device 93 osd.93 class hdd
device 94 osd.94 class hdd
device 95 osd.95 class hdd
device 96 osd.96 class hdd
device 97 osd.97 class hdd
device 98 osd.98 class hdd
device 99 osd.99 class hdd
device 100 osd.100 class hdd
device 101 osd.101 class hdd
device 102 osd.102 class hdd
device 103 osd.103 class hdd
device 104 osd.104 class hdd
device 105 osd.105 class hdd
device 106 osd.106 class hdd
device 107 osd.107 class hdd
device 108 osd.108 class hdd
device 109 osd.109 class hdd
device 110 osd.110 class hdd
device 111 osd.111 class hdd
device 112 osd.112 class hdd
device 113 osd.113 class hdd
device 114 osd.114 class hdd
device 115 osd.115 class hdd
device 116 osd.116 class hdd
device 117 osd.117 class hdd
device 118 osd.118 class hdd
device 119 osd.119 class hdd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 zone
type 10 region
type 11 root

# buckets
host ukr-hci2-n3-mlnx {
        id -3 # do not change unnecessarily
        id -4 class hdd # do not change unnecessarily
        id -19 class ssd # do not change unnecessarily
        # weight 43.658
        alg straw2
        hash 0 # rjenkins1
        item osd.3 weight 14.553
        item osd.8 weight 14.553
        item osd.18 weight 14.553
}
host ukr-hci2-n4-mlnx {
        id -5 # do not change unnecessarily
        id -6 class hdd # do not change unnecessarily
        id -20 class ssd # do not change unnecessarily
        # weight 43.658
        alg straw2
        hash 0 # rjenkins1
        item osd.4 weight 14.553
        item osd.9 weight 14.553
        item osd.20 weight 14.553
}
host ukr-hci1-n1-mlnx {
        id -7 # do not change unnecessarily
        id -8 class hdd # do not change unnecessarily
        id -21 class ssd # do not change unnecessarily
        # weight 43.658
        alg straw2
        hash 0 # rjenkins1
        item osd.7 weight 14.553
        item osd.15 weight 14.553
        item osd.23 weight 14.553
}
host ukr-hci1-n2-mlnx {
        id -9 # do not change unnecessarily
        id -10 class hdd # do not change unnecessarily
        id -22 class ssd # do not change unnecessarily
        # weight 43.658
        alg straw2
        hash 0 # rjenkins1
        item osd.6 weight 14.553
        item osd.14 weight 14.553
        item osd.22 weight 14.553
}
host ukr-hci1-n3-mlnx {
        id -11 # do not change unnecessarily
        id -12 class hdd # do not change unnecessarily
        id -23 class ssd # do not change unnecessarily
        # weight 43.658
        alg straw2
        hash 0 # rjenkins1
        item osd.5 weight 14.553
        item osd.11 weight 14.553
        item osd.21 weight 14.553
}
host ukr-hci2-n1-mlnx {
        id -13 # do not change unnecessarily
        id -14 class hdd # do not change unnecessarily
        id -24 class ssd # do not change unnecessarily
        # weight 43.658
        alg straw2
        hash 0 # rjenkins1
        item osd.0 weight 14.553
        item osd.12 weight 14.553
        item osd.17 weight 14.553
}
host ukr-hci2-n2-mlnx {
        id -15 # do not change unnecessarily
        id -16 class hdd # do not change unnecessarily
        id -25 class ssd # do not change unnecessarily
        # weight 43.658
        alg straw2
        hash 0 # rjenkins1
        item osd.1 weight 14.553
        item osd.10 weight 14.553
        item osd.16 weight 14.553
}
host ukr-hci1-n4-mlnx {
        id -17 # do not change unnecessarily
        id -18 class hdd # do not change unnecessarily
        id -26 class ssd # do not change unnecessarily
        # weight 43.658
        alg straw2
        hash 0 # rjenkins1
        item osd.2 weight 14.553
        item osd.13 weight 14.553
        item osd.19 weight 14.553
}
host ukr-ssd1-mlnx {
        id -28 # do not change unnecessarily
        id -29 class hdd # do not change unnecessarily
        id -30 class ssd # do not change unnecessarily
        # weight 237.534
        alg straw2
        hash 0 # rjenkins1
        item osd.24 weight 6.986
        item osd.25 weight 6.986
        item osd.26 weight 6.986
        item osd.27 weight 6.986
        item osd.28 weight 6.986
        item osd.29 weight 6.986
        item osd.30 weight 6.986
        item osd.31 weight 6.986
        item osd.32 weight 6.986
        item osd.33 weight 6.986
        item osd.34 weight 6.986
        item osd.35 weight 6.986
        item osd.36 weight 6.986
        item osd.37 weight 6.986
        item osd.38 weight 6.986
        item osd.39 weight 6.986
        item osd.40 weight 6.986
        item osd.41 weight 6.986
        item osd.42 weight 6.986
        item osd.43 weight 6.986
        item osd.44 weight 6.986
        item osd.45 weight 6.986
        item osd.46 weight 6.986
        item osd.47 weight 6.986
        item osd.48 weight 6.986
        item osd.49 weight 6.986
        item osd.50 weight 6.986
        item osd.51 weight 6.986
        item osd.52 weight 6.986
        item osd.53 weight 6.986
        item osd.54 weight 6.986
        item osd.55 weight 6.986
        item osd.56 weight 6.986
        item osd.57 weight 6.986
}
host ukr-hdd1 {
        id -31 # do not change unnecessarily
        id -32 class hdd # do not change unnecessarily
        id -33 class ssd # do not change unnecessarily
        # weight 902.267
        alg straw2
        hash 0 # rjenkins1
        item osd.58 weight 14.553
        item osd.59 weight 14.553
        item osd.60 weight 14.553
        item osd.61 weight 14.553
        item osd.62 weight 14.553
        item osd.63 weight 14.553
        item osd.64 weight 14.553
        item osd.65 weight 14.553
        item osd.66 weight 14.553
        item osd.67 weight 14.553
        item osd.68 weight 14.553
        item osd.69 weight 14.553
        item osd.70 weight 14.553
        item osd.71 weight 14.553
        item osd.72 weight 14.553
        item osd.73 weight 14.553
        item osd.74 weight 14.553
        item osd.75 weight 14.553
        item osd.76 weight 14.553
        item osd.77 weight 14.553
        item osd.78 weight 14.553
        item osd.79 weight 14.553
        item osd.80 weight 14.553
        item osd.81 weight 14.553
        item osd.82 weight 14.553
        item osd.83 weight 14.553
        item osd.84 weight 14.553
        item osd.85 weight 14.553
        item osd.86 weight 14.553
        item osd.87 weight 14.553
        item osd.88 weight 14.553
        item osd.89 weight 14.553
        item osd.90 weight 14.553
        item osd.91 weight 14.553
        item osd.92 weight 14.553
        item osd.93 weight 14.553
        item osd.94 weight 14.553
        item osd.95 weight 14.553
        item osd.96 weight 14.553
        item osd.97 weight 14.553
        item osd.98 weight 14.553
        item osd.99 weight 14.553
        item osd.100 weight 14.553
        item osd.101 weight 14.553
        item osd.102 weight 14.553
        item osd.103 weight 14.553
        item osd.104 weight 14.553
        item osd.105 weight 14.553
        item osd.106 weight 14.553
        item osd.107 weight 14.553
        item osd.108 weight 14.553
        item osd.109 weight 14.553
        item osd.110 weight 14.553
        item osd.111 weight 14.553
        item osd.112 weight 14.553
        item osd.113 weight 14.553
        item osd.114 weight 14.553
        item osd.115 weight 14.553
        item osd.116 weight 14.553
        item osd.117 weight 14.553
        item osd.118 weight 14.553
        item osd.119 weight 14.553
}
root default {
        id -1 # do not change unnecessarily
        id -2 class hdd # do not change unnecessarily
        id -27 class ssd # do not change unnecessarily
        # weight 1489.065
        alg straw2
        hash 0 # rjenkins1
        item ukr-hci2-n3-mlnx weight 43.658
        item ukr-hci2-n4-mlnx weight 43.658
        item ukr-hci1-n1-mlnx weight 43.658
        item ukr-hci1-n2-mlnx weight 43.658
        item ukr-hci1-n3-mlnx weight 43.658
        item ukr-hci2-n1-mlnx weight 43.658
        item ukr-hci2-n2-mlnx weight 43.658
        item ukr-hci1-n4-mlnx weight 43.658
        item ukr-ssd1-mlnx weight 237.534
        item ukr-hdd1 weight 902.267
}

# rules
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

# end crush map

[root@ukr-dpy1 ~]#

```
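For completeness: the decompiled map can also be edited by hand, recompiled, and injected back into the cluster. We don't need that here since we create the new rules via the CLI, but the round trip would look roughly like this (file names are just examples):

```bash
# after editing /tmp/crush.ascii, compile it and push it back into the cluster
crushtool -c /tmp/crush.ascii -o /tmp/crush.new.bin
ceph osd setcrushmap -i /tmp/crush.new.bin
```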

Let's investigate what needs to be run to create a new replicated CRUSH rule.

```bash
# The command expects the following arguments:
ceph osd crush rule create-replicated <rule-name> default <failure-domain> <class>

# note: valid failure domains are the bucket types from the crush map:
# type 0 osd, type 1 host, type 2 chassis, type 3 rack, type 4 row, type 5 pdu,
# type 6 pod, type 7 room, type 8 datacenter, type 9 zone, type 10 region, type 11 root

# class is a device class as listed above, i.e. hdd or ssd

```

Let's create the SSD CRUSH rule:

```bash
[root@ukr-dpy1 ~]# ceph osd crush rule create-replicated highspeedpool default osd ssd

# With three or more SSD hosts you would use "host" as the failure domain instead:
#   ceph osd crush rule create-replicated highspeedpool default host ssd
# With only a single SSD host and a host failure domain we'd have to set the pool
# replication size to 1 for Ceph to be healthy, hence "osd" here.

# if you make a mistake, remove the rule with: ceph osd crush rule rm <rule-name>

```
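Optionally, you can sanity-check which OSDs a rule will select before pointing a pool at it. A rough sketch with crushtool (re-fetch the map first so it contains the new rule; rule id 1 is assumed from the dump below):

```bash
# grab a fresh copy of the crush map that includes the new rule
ceph osd getcrushmap -o /tmp/crush-new.bin
# simulate placements for rule id 1 with 3 replicas; all chosen OSDs should be SSDs
crushtool -i /tmp/crush-new.bin --test --rule 1 --num-rep 3 --show-mappings | head
```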

Confirm the rule has been created and take a look at the dump:

```bash
[root@ukr-dpy1 ~]# ceph osd crush rule list
replicated_rule
highspeedpool
[root@ukr-dpy1 ~]# ceph osd crush rule dump highspeedpool
{
   "rule_id": 1,
   "rule_name": "highspeedpool",
   "ruleset": 1,
   "type": 1,
   "min_size": 1,
   "max_size": 10,
   "steps": [
       {
           "op": "take",
           "item": -27,
           "item_name": "default~ssd"
       },
       {
           "op": "choose_firstn",
           "num": 0,
           "type": "osd"
       },
       {
           "op": "emit"
       }
   ]

}
```

Create a pool using the new rule:

```bash
[root@ukr-dpy1 ~]# ceph osd pool create ssdpool 64 64 highspeedpool
pool 'ssdpool' created
```
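To confirm the new pool's placement groups really sit on the SSD OSDs (osd.24-57 in this cluster), something like this can be used:

```bash
# the UP/ACTING sets of every PG in ssdpool should only contain SSD OSD ids
ceph pg ls-by-pool ssdpool
```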

Check the details of the pool (in this instance ssdpool was created as pool 12):

```bash
[root@ukr-dpy1 ~]# ceph osd pool ls detail | grep ssd
pool 12 'ssdpool' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 2026 lfor 0/2026/2024 flags hashpspool stripe_width 0

# If the failure domain hadn't been set to osd, it would have defaulted to host. With a host
# failure domain and a single SSD node, 3x replication would not have worked and the pool
# would have looked like this earlier attempt:
[root@ukr-dpy1 ~]# ceph osd pool ls detail | grep ssd
pool 11 'ssdpool' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 256 pgp_num 256 pg_num_target 32 pgp_num_target 32 autoscale_mode on last_change 1304 flags hashpspool stripe_width 0

# in that scenario the pool replication size also had to be set to 1, as there is only a single SSD host
[root@ukr-dpy1 ~]# ceph osd pool set ssdpool size 1
set pool 11 size to 1
```
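Note that a size-1 pool may still be left with min_size 2, which would keep it from serving IO. If you really do run the single-host scenario (a lab exercise, not something for production), check min_size and drop it as well:

```bash
# single-replica pools also need min_size reduced to 1
ceph osd pool set ssdpool min_size 1
```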

Check ceph status and wait for the cluster to become healthy:

```bash
[root@ukr-dpy1 ~]# ceph status

 cluster:
   id:     f7d119bf-94c5-4f9b-bf0e-f6cc0e1d4df8
   health: HEALTH_WARN
           1 pools have many more objects per pg than average

 services:
   mon: 3 daemons, quorum ukr-hci1-n1-mlnx,ukr-hci1-n2-mlnx,ukr-hci1-n3-mlnx (age 3d)
   mgr: ukr-hci1-n1-mlnx(active, since 4d), standbys: ukr-hci1-n4-mlnx, ukr-hci1-n3-mlnx, ukr-hci1-n2-mlnx
   mds: cephfs:3 {0=ukr-hci1-n2-mlnx=up:active,1=ukr-hci1-n4-mlnx=up:active,2=ukr-hci1-n1-mlnx=up:active} 1 up:standby
   osd: 120 osds: 120 up (since 3d), 120 in (since 3d)
   rgw: 3 daemons active (ukr-hci1-n1-mlnx.rgw0, ukr-hci1-n2-mlnx.rgw0, ukr-hci1-n3-mlnx.rgw0)

 task status:

 data:
   pools:   11 pools, 329 pgs
   objects: 20.91k objects, 25 GiB
   usage:   145 GiB used, 1.5 PiB / 1.5 PiB avail
   pgs:     329 active+clean

```
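The "many more objects per pg than average" warning shows up because the newly added, nearly empty PGs skew the per-PG object average. Before adjusting pg_num by hand, it may be worth checking what the PG autoscaler suggests for each pool:

```bash
# show current vs. suggested PG counts per pool
ceph osd pool autoscale-status
```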

OK, so let's reduce pg_num and pgp_num down to 32:

```bash
[root@ukr-dpy1 ~]# ceph osd pool set ssdpool pg_num 32
set pool 12 pg_num to 32
[root@ukr-dpy1 ~]# ceph osd pool set ssdpool pgp_num 32
set pool 12 pgp_num to 32
[root@ukr-dpy1 ~]# ceph status

 cluster:
   id:     f7d119bf-94c5-4f9b-bf0e-f6cc0e1d4df8
   health: HEALTH_OK

 services:
   mon: 3 daemons, quorum ukr-hci1-n1-mlnx,ukr-hci1-n2-mlnx,ukr-hci1-n3-mlnx (age 3d)
   mgr: ukr-hci1-n1-mlnx(active, since 4d), standbys: ukr-hci1-n4-mlnx, ukr-hci1-n3-mlnx, ukr-hci1-n2-mlnx
   mds: cephfs:3 {0=ukr-hci1-n2-mlnx=up:active,1=ukr-hci1-n4-mlnx=up:active,2=ukr-hci1-n1-mlnx=up:active} 1 up:standby
   osd: 120 osds: 120 up (since 3d), 120 in (since 3d)
   rgw: 3 daemons active (ukr-hci1-n1-mlnx.rgw0, ukr-hci1-n2-mlnx.rgw0, ukr-hci1-n3-mlnx.rgw0)

 task status:

 data:
   pools:   11 pools, 326 pgs
   objects: 20.91k objects, 25 GiB
   usage:   145 GiB used, 1.5 PiB / 1.5 PiB avail
   pgs:     326 active+clean

[root@ukr-dpy1 ~]#
```

Allow the cinder user access to ssdpool:

```bash
[root@ukr-dpy1 ~]# ceph auth caps client.cinder osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=ssdpool' mon 'profile rbd'
updated caps for client.cinder
```
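It's worth double-checking that the caps landed as intended:

```bash
# print the key and caps currently held by the cinder client
ceph auth get client.cinder
```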

Set up cinder.conf with the additional ssdpool backend:

```bash
[root@ukr-dpy1 kolla]# cat config/cinder/cinder-volume.conf
[DEFAULT]
enabled_backends=rbd-1,rbd-ssd

[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = fa5bb534-cea6-4219-8214-7f5d711e23c2

[rbd-ssd]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
backend_host=rbd:ssdpool
rbd_pool=ssdpool
volume_backend_name=rbd-ssd
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = fa5bb534-cea6-4219-8214-7f5d711e23c2
```

Reconfigure OpenStack:

```bash
root@kolla-deploy:/kolla# kolla-ansible -i /etc/kolla/multinode-train reconfigure
```
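After the reconfigure finishes, the new backend should show up as an additional cinder-volume service (assuming the usual admin credentials are sourced):

```bash
# a rbd-ssd backend host should appear alongside the existing rbd-1 one
openstack volume service list
```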

Ceph will now show a health warning because the new pool has no application enabled:

```bash
[root@ukr-dpy1 ~]# ceph status

 cluster:
   id:     f7d119bf-94c5-4f9b-bf0e-f6cc0e1d4df8
   health: HEALTH_WARN
           1 pool(s) do not have an application enabled

<snip>

[root@ukr-dpy1 ~]# ceph osd pool application enable ssdpool rbd
enabled application 'rbd' on pool 'ssdpool'

# ceph health should be good again

```

Set up the new backend as a volume type in OpenStack, create a volume, and attach it to a server:

```bash
openstack volume type list
# volume_backend_name must match the backend name from cinder.conf ([rbd-ssd])
openstack volume type create --property volume_backend_name=rbd-ssd ssdpool
openstack volume create --type ssdpool --size 10 ssdvol1
openstack server add volume demo1 ssdvol1
```
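To verify the volume was actually carved out of the SSD pool rather than the default volumes pool, list the RBD images in ssdpool and compare against the volume's UUID (RBD image names take the form volume-&lt;uuid&gt;):

```bash
# print the UUID of the new volume, then check it shows up in ssdpool
openstack volume show ssdvol1 -f value -c id
rbd -p ssdpool ls
```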

Benchmark the pool to confirm the SSDs are being used when writing to ssdpool:

```bash
[root@ukr-dpy1 ~]# rados bench -p ssdpool 50 write

# then on the ssd1 node:
# yum install sysstat
iostat 1
```
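rados bench removes its benchmark objects when the write run finishes unless told otherwise. If you also want a read test and a clean pool afterwards, something along these lines works:

```bash
# keep the objects around so they can be read back
rados bench -p ssdpool 50 write --no-cleanup
# sequential read benchmark against the same objects
rados bench -p ssdpool 50 seq
# remove the benchmark objects when done
rados -p ssdpool cleanup
```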

Now let's create an HDD-only rule and point the original pools at it, so their IO only lands on the HDDs:

```bash
ceph osd crush rule create-replicated highcapacitypool default host hdd

# set the existing pools to use this hdd rule
ceph osd pool set device_health_metrics crush_rule highcapacitypool

ceph osd pool set images crush_rule highcapacitypool
ceph osd pool set volumes crush_rule highcapacitypool
ceph osd pool set backups crush_rule highcapacitypool
ceph osd pool set manila_data crush_rule highcapacitypool
ceph osd pool set manila_metadata crush_rule highspeedpool
ceph osd pool set .rgw.root crush_rule highcapacitypool
ceph osd pool set default.rgw.log  crush_rule highcapacitypool
ceph osd pool set default.rgw.control crush_rule highcapacitypool
ceph osd pool set default.rgw.meta crush_rule highcapacitypool
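
# (optional check, not part of the original run) confirm which rule each pool now uses
ceph osd pool get volumes crush_rule
ceph osd pool get images crush_rule
ceph osd pool get ssdpool crush_rule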

[root@ukr-dpy1 kolla]# ceph status

 cluster:
   id:     f7d119bf-94c5-4f9b-bf0e-f6cc0e1d4df8
   health: HEALTH_WARN
           Degraded data redundancy: 30493/66636 objects degraded (45.761%), 28 pgs degraded, 25 pgs undersized

 services:
   mon: 3 daemons, quorum ukr-hci1-n1-mlnx,ukr-hci1-n2-mlnx,ukr-hci1-n3-mlnx (age 3d)
   mgr: ukr-hci1-n1-mlnx(active, since 4d), standbys: ukr-hci1-n4-mlnx, ukr-hci1-n3-mlnx, ukr-hci1-n2-mlnx
   mds: cephfs:3 {0=ukr-hci1-n2-mlnx=up:active,1=ukr-hci1-n4-mlnx=up:active,2=ukr-hci1-n1-mlnx=up:active} 1 up:standby
   osd: 120 osds: 120 up (since 4d), 120 in (since 4d); 57 remapped pgs
   rgw: 3 daemons active (ukr-hci1-n1-mlnx.rgw0, ukr-hci1-n2-mlnx.rgw0, ukr-hci1-n3-mlnx.rgw0)

 task status:

 data:
   pools:   11 pools, 297 pgs
   objects: 22.21k objects, 30 GiB
   usage:   166 GiB used, 1.5 PiB / 1.5 PiB avail
   pgs:     30493/66636 objects degraded (45.761%)
            11822/66636 objects misplaced (17.741%)
            225 active+clean
            31  active+remapped+backfill_wait
            20  active+recovery_wait+undersized+degraded+remapped
            8   active+recovery_wait+degraded
            7   active+recovery_wait
            5   active+recovering+undersized+remapped
            1   active+recovery_wait+undersized+remapped

 io:
   client:   8.2 KiB/s rd, 9 op/s rd, 0 op/s wr
   recovery: 8.4 MiB/s, 33 objects/s

```
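The degraded and misplaced object counts above are expected while Ceph moves data onto the HDD-only rule; they should drain to zero on their own. A simple way to keep an eye on progress (assuming the watch utility is installed):

```bash
# follow recovery until the cluster returns to HEALTH_OK
watch -n 10 ceph status
# or stream cluster log messages as they happen
ceph -w
```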