Ceph + OpenStack
Add Ceph to OpenStack
Install the prerequisite packages on openstack1:
yum install qemu libvirt
sudo yum install python-rbd
On ceph-node1: set up the SSH key and copy the Ceph-related configs.
vim ~/.ssh/known_hosts
ssh-copy-id openstack1
cd /etc/yum.repos.d/
scp ceph.repo openstack1:`pwd`
cd /etc/ceph/
scp ceph.conf ceph.client.admin.keyring openstack1:`pwd`
Make sure the repo details, keyrings, ceph.conf and the output of ceph -s are correct! On openstack1, create the Ceph auth clients:
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
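The pools referenced above (volumes, vms, images, backups) must already exist, and the services need the matching keyring files plus the client.cinder key file that virsh secret-set-value reads below. A hedged sketch, run from a node holding the admin keyring; the pg count of 128 and the keyring paths/ownership are assumptions, adjust to your cluster:
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
ceph osd pool create backups 128
# write the keyrings where glance-api and cinder-volume can read them (paths assumed)
ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | tee /etc/ceph/ceph.client.cinder.keyring
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
# plain key file consumed by virsh secret-set-value below
ceph auth get-key client.cinder > client.cinder.key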
Create a UUID and a libvirt secret for the client.cinder key:
uuidgen
vim secret.xml
virsh secret-define --file secret.xml
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
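To confirm the secret was stored, it can be checked with virsh (a quick verification, not part of the original run):
virsh secret-list
virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337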
Content for secret.xml:
<secret ephemeral='no' private='no'>
<uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
Configure OpenStack to use Ceph
Configuring Glance
Glance can use multiple back ends to store images. To use Ceph block devices by default, configure Glance like the following.
Edit /etc/glance/glance-api.conf and add the following under the [DEFAULT] and [glance_store] sections:
[DEFAULT]
...
default_store = rbd
...
[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
If you want to enable copy-on-write cloning of images, also add under the [DEFAULT] section:
show_image_direct_url = True
Note that this exposes the back end location via Glance’s API, so the endpoint with this option enabled should not be publicly accessible.
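For reference, the location exposed for an RBD-backed image has the form rbd://<fsid>/images/<image-id>/snap (placeholders, shown only to illustrate what this option makes visible).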
Disable the Glance cache management to avoid images getting cached under /var/lib/glance/image-cache/, assuming your configuration file has flavor = keystone+cachemanagement:
[paste_deploy]
flavor = keystone
Configuring Cinder
OpenStack requires a driver to interact with Ceph block devices. You must also specify the pool name for the block device. On your OpenStack node, edit /etc/cinder/cinder.conf by adding:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
If you’re using cephx authentication, also configure the user and uuid of the secret you added to libvirt as documented earlier:
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
Note that if you are configuring multiple cinder back ends, glance_api_version = 2 must be in the [DEFAULT] section.
Configuring Cinder Backup
OpenStack Cinder Backup requires a specific daemon so don’t forget to install it. On your Cinder Backup node, edit /etc/cinder/cinder.conf and add:
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
Configuring Nova to attach Ceph RBD block device
In order to attach Cinder devices (either normal block or by issuing a boot from volume), you must tell Nova (and libvirt) which user and UUID to refer to when attaching the device. libvirt will refer to this user when connecting and authenticating with the Ceph cluster.
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
These two flags are also used by the Nova ephemeral backend.
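Once the services are restarted later in this guide, attaching one of these Ceph-backed Cinder volumes to an instance works as usual; a minimal sketch with placeholder IDs (not taken from the original run):
nova volume-attach <instance-id> <volume-id>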
Configuring Nova
In order to boot all the virtual machines directly into Ceph, you must configure the ephemeral backend for Nova.
It is recommended to enable the RBD cache in your Ceph configuration file (enabled by default since Giant). Moreover, enabling the admin socket brings a lot of benefits while troubleshooting. Having one socket per virtual machine using a Ceph block device helps when investigating performance issues and/or wrong behaviors.
This socket can be accessed like this:
ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok help
Now, on every compute node, edit your Ceph configuration file:
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
In Juno, the Ceph block device settings were moved under the [libvirt] section. On every Compute node, edit /etc/nova/nova.conf under the [libvirt] section and add:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
It is also a good practice to disable file injection. While booting an instance, Nova usually attempts to open the rootfs of the virtual machine. Then, Nova injects values such as password, ssh keys etc. directly into the filesystem. However, it is better to rely on the metadata service and cloud-init.
On every Compute node, edit /etc/nova/nova.conf and add the following under the [libvirt] section:
inject_password = false
inject_key = false
inject_partition = -2
To ensure a proper live-migration, use the following flags:
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"
Restart OpenStack
To activate the Ceph block device driver and load the block device pool name into the configuration, you must restart OpenStack. On Red Hat based systems, execute these commands on the appropriate nodes:
sudo service openstack-glance-api restart
sudo service openstack-nova-compute restart
sudo service openstack-cinder-volume restart
sudo service openstack-cinder-backup restart
Test OpenStack with Ceph:
source /root/keystonerc_admin
cinder create --display-name ceph-volume01 --display-description "test ceph storage" 10
ceph -s
cinder list
rados -p images ls
rados -p volumes ls
ceph -s
yum install wget
wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
rados df
glance image-create --name="ubuntu-precise-image" --is-public=True --disk-format=qcow2 --container-format=ovf < precise-server-cloudimg-amd64-disk1.img
glance image-list
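Note that copy-on-write cloning of Glance images into Cinder volumes and Nova ephemeral disks only works with raw images, so the qcow2 cloud image above is usually converted before uploading. A hedged sketch; the output file name and image name are assumptions:
qemu-img convert -f qcow2 -O raw precise-server-cloudimg-amd64-disk1.img precise-server-cloudimg-amd64.raw
glance image-create --name="ubuntu-precise-raw" --is-public=True --disk-format=raw --container-format=bare < precise-server-cloudimg-amd64.raw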