Keele University
Intro
This is a small deployment that was originally running non-containerised OpenStack Liberty with Ceph on 4 nodes (1 headnode and 3 combined controller/hypervisor nodes). The environment has been wiped clean and redeployed with kolla-ansible OpenStack Train and Ceph Octopus. The new deployment covers the core services (Glance, Nova, Neutron, etc.) but excludes bare metal and monitoring. Glance and Cinder are backed by Ceph, while Nova stores instance disks on local HDDs.
In the new deployment the headnode is used as a deployment node and access point to the rest of the nodes.
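Since the headnode drives both kolla-ansible and ceph-ansible, a quick connectivity check from it against both inventories (paths as used later on this page) might look like this; a sketch only, not part of the recorded deployment:
ansible -i /root/vscaler/multinode-train all -m ping
ansible -i /root/vscaler/ceph-inventory/hosts all -m ping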
Diagram
TODO
Deployment configuration
kolla-ansible config
/etc/kolla/globals.yml
---
kolla_base_distro: "centos"
kolla_install_type: "binary"
openstack_release: "train"
kolla_internal_vip_address: "10.10.10.254"
docker_registry: "registry.vscaler.com:5000"
network_interface: "eth0"
tunnel_interface: "ens1.50"
neutron_external_interface: "eno2"
enable_openstack_core: "yes"
enable_glance: "{{ enable_openstack_core | bool }}"
enable_haproxy: "yes"
enable_keepalived: "{{ enable_haproxy | bool }}"
enable_keystone: "{{ enable_openstack_core | bool }}"
enable_mariadb: "yes"
enable_memcached: "yes"
enable_neutron: "{{ enable_openstack_core | bool }}"
enable_nova: "{{ enable_openstack_core | bool }}"
enable_rabbitmq: "{{ 'yes' if om_rpc_transport == 'rabbit' or om_notify_transport == 'rabbit' else 'no' }}"
enable_ceph: "no"
enable_chrony: "yes"
enable_cinder_backup: "no"
enable_fluentd: "yes"
enable_heat: "{{ enable_openstack_core | bool }}"
enable_horizon: "{{ enable_openstack_core | bool }}"
glance_backend_ceph: "yes"
glance_backend_file: "no"
cinder_backend_ceph: "yes"
nova_backend_ceph: "{{ enable_ceph }}"
horizon_tag: stein
enable_mariabackup: "yes"
Extra config:
/etc/kolla/config/cinder/cinder-volume.conf
[DEFAULT]
enabled_backends=rbd-1
[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
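Once cinder-volume is up, the rbd-1 backend can be sanity-checked roughly as follows (a sketch; the volume name is arbitrary and the rbd check assumes the client.cinder keyring is available on a Ceph node):
openstack volume service list            # cinder-volume should report host rbd:volumes@rbd-1
openstack volume create --size 1 test    # should land in the Ceph 'volumes' pool
rbd -p volumes ls --id cinder            # on a Ceph node: expect a volume-<uuid> image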
/etc/kolla/config/glance/glance-api.conf
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
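Because enable_ceph is "no", kolla-ansible treats Ceph as an external cluster and expects the Ceph config and client keyrings to be dropped into the same /etc/kolla/config tree, roughly as below (a sketch; keyring names assumed to match the client.glance / client.cinder keys created by ceph-ansible):
/etc/kolla/config/glance/ceph.conf
/etc/kolla/config/glance/ceph.client.glance.keyring
/etc/kolla/config/cinder/ceph.conf
/etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring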
The inventory was taken from ansible/inventory/multinode at the following revision of kolla-ansible:
commit fb4e16189c16ebe892762aa518f861d21dee1637
Merge: b27e074 55fe8c1
Author: Zuul <zuul@review.opendev.org>
Date:   Mon Sep 14 17:36:19 2020 +0000
and was edited like this:
--- /root/vscaler/kolla-ansible/ansible/inventory/multinode 2020-09-15 15:13:28.127357847 +0100
+++ /root/vscaler/multinode-train 2020-09-18 10:11:25.102377112 +0100
@@ -4,5 +4,5 @@
 # These hostname must be resolvable from your deployment host
-control01
-control02
-control03
+node01
+node02
+node03
@@ -14,10 +14,15 @@
 [network]
-network01
-network02
+node01
+node02
+node03

 [compute]
-compute01
+node01
+node02
+node03

 [monitoring]
-monitoring01
+node01
+node02
+node03
@@ -29,3 +34,5 @@
 [storage]
-storage01
+node01
+node02
+node03
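With globals.yml and this inventory in place, the deployment itself follows the standard kolla-ansible workflow run from the headnode; roughly (a sketch, the exact invocation was not recorded):
kolla-genpwd
kolla-ansible -i /root/vscaler/multinode-train bootstrap-servers
kolla-ansible -i /root/vscaler/multinode-train prechecks
kolla-ansible -i /root/vscaler/multinode-train deploy
kolla-ansible -i /root/vscaler/multinode-train post-deploy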
ceph-ansible config
/root/vscaler/ceph-inventory/hosts
[mons]
ceph-osd1 ansible_user=root
ceph-osd2 ansible_user=root
ceph-osd3 ansible_user=root

[mgrs]
ceph-osd1 ansible_user=root
ceph-osd2 ansible_user=root
ceph-osd3 ansible_user=root

[osds]
ceph-osd1 ansible_user=root
ceph-osd2 ansible_user=root
ceph-osd3 ansible_user=root
/root/vscaler/ceph-inventory/group_vars/all.yml
---
dummy:
ceph_release_num:
  octopus: 15
configure_firewall: false
ceph_origin: repository
ceph_repository: community
ceph_stable_release: octopus
monitor_interface: "ens1.100"
osd_objectstore: bluestore
mon_host_v1:
  enabled: False
public_network: 10.100.100.0/24
cluster_network: "{{ public_network | regex_replace(' ', '') }}"
containerized_deployment: false
openstack_config: true
openstack_glance_pool:
  name: "images"
  pg_num: "64"
  pgp_num: "64"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: "{{ osd_pool_default_size }}"
  min_size: "{{ osd_pool_default_min_size }}"
  pg_autoscale_mode: False
openstack_cinder_pool:
  name: "volumes"
  pg_num: "128"
  pgp_num: "128"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: "{{ osd_pool_default_size }}"
  min_size: "{{ osd_pool_default_min_size }}"
  pg_autoscale_mode: False
openstack_pools:
  - "{{ openstack_glance_pool }}"
  - "{{ openstack_cinder_pool }}"
/root/vscaler/ceph-inventory/group_vars/osds.yml
---
dummy:
devices:
  - /dev/sdc
ceph-ansible revision:
commit edcdbe5601a9aed7770afd8053f2af09e0711f36
Author: Guillaume Abrioux <gabrioux@redhat.com>
Date:   Fri Sep 11 17:30:33 2020 +0200
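The Ceph cluster itself would have been deployed by running the project's site playbook against the inventory above; roughly (a sketch; the checkout path is assumed and site.yml.sample is copied to site.yml as per the ceph-ansible docs):
cd /root/vscaler/ceph-ansible        # path assumed
cp site.yml.sample site.yml
ansible-playbook -i /root/vscaler/ceph-inventory/hosts site.yml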