<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://wiki.define-technology.com/mediawiki-1.35.0/index.php?action=history&amp;feed=atom&amp;title=Ceph_%2B_OpenStack</id>
	<title>Ceph + OpenStack - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://wiki.define-technology.com/mediawiki-1.35.0/index.php?action=history&amp;feed=atom&amp;title=Ceph_%2B_OpenStack"/>
	<link rel="alternate" type="text/html" href="http://wiki.define-technology.com/mediawiki-1.35.0/index.php?title=Ceph_%2B_OpenStack&amp;action=history"/>
	<updated>2026-05-04T20:10:16Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.35.0</generator>
	<entry>
		<id>http://wiki.define-technology.com/mediawiki-1.35.0/index.php?title=Ceph_%2B_OpenStack&amp;diff=7681&amp;oldid=prev</id>
		<title>Chenhui: Created page with &quot;= Add Ceph to OpenStack =  install pre-required packages on openstack1: &lt;syntaxhighlight&gt; yum install qemu libvirt sudo yum install python-rbd &lt;/syntaxhighlight&gt; On ceph-node1: setup ssh key and ce...&quot;</title>
		<link rel="alternate" type="text/html" href="http://wiki.define-technology.com/mediawiki-1.35.0/index.php?title=Ceph_%2B_OpenStack&amp;diff=7681&amp;oldid=prev"/>
		<updated>2015-05-21T16:06:54Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;= Add Ceph to OpenStack =  install pre-required packages on openstack1: &amp;lt;syntaxhighlight&amp;gt; yum install qemu libvirt sudo yum install python-rbd &amp;lt;/syntaxhighlight&amp;gt; On ceph-node1: setup ssh key and ce...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;= Add Ceph to OpenStack =&lt;br /&gt;
&lt;br /&gt;
Install the prerequisite packages on openstack1:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
yum install qemu libvirt&lt;br /&gt;
sudo yum install python-rbd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
On ceph-node1, set up passwordless SSH and copy the Ceph repo and configs to openstack1:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
ssh-copy-id openstack1&lt;br /&gt;
cd /etc/yum.repos.d/&lt;br /&gt;
scp ceph.repo openstack1:`pwd`&lt;br /&gt;
cd /etc/ceph/&lt;br /&gt;
scp ceph.conf ceph.client.admin.keyring openstack1:`pwd`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Make sure the repo details, keyrings and ceph.conf are correct on openstack1, and that ceph -s reports a healthy cluster.&lt;br /&gt;
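The commands on this page assume the volumes, images, vms and backups pools already exist on the Ceph cluster. If they do not, create them first; the placement group count of 128 below is only an example and should be sized for your cluster:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
ceph osd pool create volumes 128&lt;br /&gt;
ceph osd pool create images 128&lt;br /&gt;
ceph osd pool create vms 128&lt;br /&gt;
ceph osd pool create backups 128&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;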
On openstack1, create the Ceph auth clients:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
ceph auth get-or-create client.cinder mon &amp;#039;allow r&amp;#039; osd &amp;#039;allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images&amp;#039;&lt;br /&gt;
ceph auth get-or-create client.glance mon &amp;#039;allow r&amp;#039; osd &amp;#039;allow class-read object_prefix rbd_children, allow rwx pool=images&amp;#039;&lt;br /&gt;
ceph auth get-or-create client.cinder-backup mon &amp;#039;allow r&amp;#039; osd &amp;#039;allow class-read object_prefix rbd_children, allow rwx pool=backups&amp;#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Generate a UUID, write it into secret.xml (content shown below), then define the secret in libvirt and load the client.cinder key into it:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
uuidgen&lt;br /&gt;
vim secret.xml&lt;br /&gt;
virsh secret-define --file secret.xml&lt;br /&gt;
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) &amp;amp;&amp;amp; rm client.cinder.key secret.xml&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
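The client.glance and client.cinder keys also need to reach the nodes that use them. Assuming openstack1 runs both Glance and Cinder, as in this setup, one way to ship them from ceph-node1 is:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
ceph auth get-or-create client.glance | ssh openstack1 tee /etc/ceph/ceph.client.glance.keyring&lt;br /&gt;
ceph auth get-or-create client.cinder | ssh openstack1 tee /etc/ceph/ceph.client.cinder.keyring&lt;br /&gt;
ceph auth get-key client.cinder | ssh openstack1 tee client.cinder.key&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The last file is the one consumed by the virsh secret-set-value command above.&lt;br /&gt;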
&lt;br /&gt;
Content for secret.xml:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;secret ephemeral=&amp;#039;no&amp;#039; private=&amp;#039;no&amp;#039;&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;457eb676-33da-42ec-9a8c-9293d545c337&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;usage type=&amp;#039;ceph&amp;#039;&amp;gt;&lt;br /&gt;
    &amp;lt;name&amp;gt;client.cinder secret&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;/usage&amp;gt;&lt;br /&gt;
&amp;lt;/secret&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
== Configure OpenStack to use Ceph ==&lt;br /&gt;
=== Configuring Glance ===&lt;br /&gt;
Glance can store images in multiple back ends. To use Ceph block devices by default, configure Glance as follows.&lt;br /&gt;
&lt;br /&gt;
Edit /etc/glance/glance-api.conf and add the following under the [DEFAULT] and [glance_store] sections:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
[DEFAULT]&lt;br /&gt;
...&lt;br /&gt;
default_store = rbd&lt;br /&gt;
...&lt;br /&gt;
[glance_store]&lt;br /&gt;
stores = rbd&lt;br /&gt;
rbd_store_pool = images&lt;br /&gt;
rbd_store_user = glance&lt;br /&gt;
rbd_store_ceph_conf = /etc/ceph/ceph.conf&lt;br /&gt;
rbd_store_chunk_size = 8&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you want to enable copy-on-write cloning of images, also add under the [DEFAULT] section:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
show_image_direct_url = True&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that this exposes the back end location via Glance’s API, so the endpoint with this option enabled should not be publicly accessible.&lt;br /&gt;
&lt;br /&gt;
Disable the Glance cache management to avoid images getting cached under /var/lib/glance/image-cache/, assuming your configuration file has flavor = keystone+cachemanagement:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
[paste_deploy]&lt;br /&gt;
flavor = keystone&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== Configuring Cinder ===&lt;br /&gt;
OpenStack requires a driver to interact with Ceph block devices. You must also specify the pool name for the block device. On your OpenStack node, edit /etc/cinder/cinder.conf by adding:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
volume_driver = cinder.volume.drivers.rbd.RBDDriver&lt;br /&gt;
rbd_pool = volumes&lt;br /&gt;
rbd_ceph_conf = /etc/ceph/ceph.conf&lt;br /&gt;
rbd_flatten_volume_from_snapshot = false&lt;br /&gt;
rbd_max_clone_depth = 5&lt;br /&gt;
rbd_store_chunk_size = 4&lt;br /&gt;
rados_connect_timeout = -1&lt;br /&gt;
glance_api_version = 2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you’re using cephx authentication, also configure the user and uuid of the secret you added to libvirt as documented earlier:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
rbd_user = cinder&lt;br /&gt;
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that if you are configuring multiple cinder back ends, glance_api_version = 2 must be in the [DEFAULT] section.&lt;br /&gt;
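For reference, a named Ceph back end in /etc/cinder/cinder.conf would look like the sketch below; the section and back-end name ceph are arbitrary choices here:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
[DEFAULT]&lt;br /&gt;
enabled_backends = ceph&lt;br /&gt;
glance_api_version = 2&lt;br /&gt;
&lt;br /&gt;
[ceph]&lt;br /&gt;
volume_driver = cinder.volume.drivers.rbd.RBDDriver&lt;br /&gt;
volume_backend_name = ceph&lt;br /&gt;
rbd_pool = volumes&lt;br /&gt;
rbd_ceph_conf = /etc/ceph/ceph.conf&lt;br /&gt;
rbd_user = cinder&lt;br /&gt;
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;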
=== Configuring Cinder Backup ===&lt;br /&gt;
OpenStack Cinder Backup requires a dedicated daemon (cinder-backup), so don’t forget to install and start it. On your Cinder Backup node, edit /etc/cinder/cinder.conf and add:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
backup_driver = cinder.backup.drivers.ceph&lt;br /&gt;
backup_ceph_conf = /etc/ceph/ceph.conf&lt;br /&gt;
backup_ceph_user = cinder-backup&lt;br /&gt;
backup_ceph_chunk_size = 134217728&lt;br /&gt;
backup_ceph_pool = backups&lt;br /&gt;
backup_ceph_stripe_unit = 0&lt;br /&gt;
backup_ceph_stripe_count = 0&lt;br /&gt;
restore_discard_excess_bytes = true&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
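On RDO / Red Hat style installs, the backup daemon typically ships with the openstack-cinder package and runs as its own service; the package and service names below are an assumption, so check your distribution:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
yum install -y openstack-cinder&lt;br /&gt;
service openstack-cinder-backup start&lt;br /&gt;
chkconfig openstack-cinder-backup on&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;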
=== Configuring Nova to attach Ceph RBD block devices ===&lt;br /&gt;
&lt;br /&gt;
In order to attach Cinder devices (either as a normal block device or when booting from volume), you must tell Nova (and libvirt) which user and UUID to use when attaching the device. libvirt uses this user when connecting and authenticating with the Ceph cluster. On every Compute node, add to /etc/nova/nova.conf:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
rbd_user = cinder&lt;br /&gt;
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
These two flags are also used by the Nova ephemeral backend.&lt;br /&gt;
&lt;br /&gt;
=== Configuring Nova ===&lt;br /&gt;
&lt;br /&gt;
In order to boot all the virtual machines directly into Ceph, you must configure the ephemeral backend for Nova.&lt;br /&gt;
&lt;br /&gt;
It is recommended to enable the RBD cache in your Ceph configuration file (enabled by default since Giant). Moreover, enabling the client admin socket is very helpful when troubleshooting: having one socket per virtual machine that uses a Ceph block device makes it easier to investigate performance issues and misbehavior.&lt;br /&gt;
&lt;br /&gt;
This socket can be accessed like this:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok help&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
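For example, to check whether the RBD cache is actually enabled for a running guest (the socket name above is a sample and will differ per virtual machine):&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok config show | grep rbd_cache&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;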
Now, on every compute node, edit the Ceph configuration file:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
[client]&lt;br /&gt;
    rbd cache = true&lt;br /&gt;
    rbd cache writethrough until flush = true&lt;br /&gt;
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
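Note that the QEMU process launched by libvirt must be able to write to that socket path, otherwise the socket will never appear. A common fix is sketched below; the qemu user and group are distribution dependent:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
mkdir -p /var/run/ceph&lt;br /&gt;
chown qemu:qemu /var/run/ceph&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;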
In Juno, the Ceph block device settings were moved under the [libvirt] section. On every Compute node, edit /etc/nova/nova.conf and add under the [libvirt] section:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
[libvirt]&lt;br /&gt;
images_type = rbd&lt;br /&gt;
images_rbd_pool = vms&lt;br /&gt;
images_rbd_ceph_conf = /etc/ceph/ceph.conf&lt;br /&gt;
rbd_user = cinder&lt;br /&gt;
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It is also good practice to disable file injection. While booting an instance, Nova usually attempts to open the rootfs of the virtual machine and inject values such as passwords and SSH keys directly into the filesystem. It is better to rely on the metadata service and cloud-init instead.&lt;br /&gt;
&lt;br /&gt;
On every Compute node, edit /etc/nova/nova.conf and add the following under the [libvirt] section:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
inject_password = false&lt;br /&gt;
inject_key = false&lt;br /&gt;
inject_partition = -2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To ensure proper live migration, use the following flags, also under the [libvirt] section:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
live_migration_flag=&amp;quot;VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
== Restart OpenStack ==&lt;br /&gt;
&lt;br /&gt;
To activate the Ceph block device driver and load the block device pool names into the configuration, you must restart the affected OpenStack services. On Red Hat based systems, execute these commands on the appropriate nodes:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
sudo service openstack-glance-api restart&lt;br /&gt;
sudo service openstack-nova-compute restart&lt;br /&gt;
sudo service openstack-cinder-volume restart&lt;br /&gt;
sudo service openstack-cinder-backup restart&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
== Test OpenStack with Ceph ==&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
source /root/keystonerc_admin&lt;br /&gt;
cinder create --display-name ceph-volume01 --display-description &amp;quot;test ceph storage&amp;quot; 10&lt;br /&gt;
cinder list&lt;br /&gt;
rados -p volumes ls&lt;br /&gt;
yum install -y wget&lt;br /&gt;
wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img&lt;br /&gt;
glance image-create --name=&amp;quot;ubuntu-precise-image&amp;quot; --is-public=True --disk-format=qcow2 --container-format=ovf &amp;lt; precise-server-cloudimg-amd64-disk1.img&lt;br /&gt;
glance image-list&lt;br /&gt;
rados -p images ls&lt;br /&gt;
rados df&lt;br /&gt;
ceph -s&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chenhui</name></author>
	</entry>
</feed>