Ceph:Operating:Useful RADOS commands
List Ceph pools
rados lspools
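For example, on a cluster backing OpenStack the output might look like this (the pool names are illustrative; they match the pools used further down this page):
$ rados lspools
images
vms
volumes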
List objects in a Ceph pool
rados -p <your-pool> ls
Remove an object from a Ceph pool
rados -p <your-pool> rm <object-name>
Copy an object in a pool
rados -p <your-pool> cp <source-object> <target-object>
When Nova uses Ceph as its backend, this command can be used to back up OpenStack instance disk files or to transfer a disk from one instance to another without having to create a snapshot or to export the disk file out of Ceph and back in again.
Let's say we want to transfer the disk of this instance to a different instance:
$ openstack server list
+--------------------------------------+-------+--------+--------------------------+--------+---------+
| ID                                   | Name  | Status | Networks                 | Image  | Flavor  |
+--------------------------------------+-------+--------+--------------------------+--------+---------+
| 1b2538e1-5f8e-4401-a69f-5c3c0ce41d72 | demo1 | ACTIVE | internal=192.168.100.147 | cirros | m1.tiny |
+--------------------------------------+-------+--------+--------------------------+--------+---------+
First, launch an instance that uses the same flavour (the base image doesn't matter); this will be the receiver of the disk. Then stop both instances to avoid corrupting data on the disk during the transfer:
$ openstack server list
+--------------------------------------+-------+--------+--------------------------+--------+---------+
| ID                                   | Name  | Status | Networks                 | Image  | Flavor  |
+--------------------------------------+-------+--------+--------------------------+--------+---------+
| c5f4e1cd-0c3b-4388-9017-dab18072cb2b | demo2 | ACTIVE | internal=192.168.100.100 | cirros | m1.tiny |
| 1b2538e1-5f8e-4401-a69f-5c3c0ce41d72 | demo1 | ACTIVE | internal=192.168.100.147 | cirros | m1.tiny |
+--------------------------------------+-------+--------+--------------------------+--------+---------+
$ openstack server stop demo1
$ openstack server stop demo2
Then replace the disk file of the new instance with the old disk in Ceph:
BE EXTRA CAREFUL AND MAKE SURE YOU'RE REMOVING THE CORRECT DISK IMAGE!!!
# rados -p vms ls | grep c5f4e1cd-0c3b-4388-9017-dab18072cb2b
rbd_id.c5f4e1cd-0c3b-4388-9017-dab18072cb2b_disk
# rados -p vms rm rbd_id.c5f4e1cd-0c3b-4388-9017-dab18072cb2b_disk
# rados -p vms ls | grep 1b2538e1-5f8e-4401-a69f-5c3c0ce41d72
rbd_id.1b2538e1-5f8e-4401-a69f-5c3c0ce41d72_disk
# rados -p vms cp rbd_id.1b2538e1-5f8e-4401-a69f-5c3c0ce41d72_disk rbd_id.c5f4e1cd-0c3b-4388-9017-dab18072cb2b_disk
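As an optional sanity check (using the same instance UUIDs as above), confirm that the rbd_id object for the receiving instance exists again after the copy and compare the two objects with rados stat:
# rados -p vms ls | grep c5f4e1cd-0c3b-4388-9017-dab18072cb2b
rbd_id.c5f4e1cd-0c3b-4388-9017-dab18072cb2b_disk
# rados -p vms stat rbd_id.1b2538e1-5f8e-4401-a69f-5c3c0ce41d72_disk
# rados -p vms stat rbd_id.c5f4e1cd-0c3b-4388-9017-dab18072cb2b_disk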
Turn the new instance on and it will have all the files that the original instance had on its disk.
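For example, with the instances from the listing above (demo2 is the receiver of the disk), start it again and log in to verify:
$ openstack server start demo2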
Export Glance images out of Ceph
To a file on the local machine
rbd export --pool=images <image-uuid> <output-file.img>
This will also work for exporting disk images of instances; simply use --pool=vms instead and <your-instance-uuid>_disk as the source image.
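A minimal sketch of that variant (the instance UUID and output filename are placeholders):
rbd export --pool=vms <your-instance-uuid>_disk <output-file.img>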
To a file on a remote machine
You can do this by piping the export through a (nested) SSH tunnel to a file on a different node, like this:
[root@head01 ~]# ssh 192.168.50.2 "docker exec ceph_mgr rbd export volumes/volume-cb598c42-1f6e-4526-9bf7-275d6ea72269 -" | ssh -i ~/.ssh/id_rsa -p2280 localhost "cat > backup/gitlab-image.raw"
Exporting image: 1% complete...
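In this example, port 2280 on head01 is a reverse SSH tunnel leading to the backup node. One way such a tunnel could be set up beforehand is from the backup node itself (the hostname and port here are assumptions, not part of the original example):
backup-node$ ssh -N -R 2280:localhost:22 root@head01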
To a remote Ceph cluster
You can extend this to export and pipe into rbd import to do a Ceph-to-Ceph block clone (in this example Ceph even runs in Docker). The process is the same: rbd export to stdout, then pipe that output into an ssh command running rbd import - [poolname]/[imagename]. You can direct the ssh through a preconfigured reverse tunnel to your target machine, like so:
[root@head01 ~]# ssh 192.168.50.2 "docker exec ceph_mgr rbd export volumes/volume-9d685dd4-69fd-4c27-aed5-7ad43e4dbabd -" | ssh -i ~/.ssh/id_rsa -p2280 localhost "rbd import - legacy-backup/win-server-2016-m60-root-vol"
Exporting image: 6% complete...
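Once the import finishes, you can check the result on the target cluster, for example (pool and image name taken from the command above):
rbd info legacy-backup/win-server-2016-m60-root-vol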