VScaler: Upgrade from an OpenStack version to the next one

From Define Wiki

In this instance I upgraded from Ocata to Pike. Here are the steps:

  1. Pull latest master kolla and kolla-ansible git repo
  2. Install them with pip
  3. Remove docker-py and install docker via pip
  4. Do a kolla-build for all the services needed on the new tag
    • don’t forget to build horizon with our custom Dockerfile
  5. Push the new images to the registry
  6. Change the openstack_release value in the globals file to 5.0.0 (Pike)
  7. Copy the new passwords file from the latest kolla-ansible/etc/kolla dir to /etc/kolla
    • keep the old file!
    • kolla-genpwd to populate the new passwd file
    • then kolla-mergepwd to merge the old and the new one!
  8. Do a kolla-ansible upgrade
  9. Do a kolla-ansible deploy! Do not forget this step! It installs 8 more containers on the controller node!
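The numbered steps above can be sketched as a command sequence. This is a hedged sketch only: the registry URL, tag, and file paths below are illustrative placeholders, not the exact values used on this system.

```shell
# Hedged sketch of the Ocata->Pike steps; registry, tag, and paths
# below are placeholders -- substitute your own values.

# 1-3: latest kolla/kolla-ansible from master; docker SDK instead of docker-py
pip install git+https://opendev.org/openstack/kolla \
            git+https://opendev.org/openstack/kolla-ansible
pip uninstall -y docker-py && pip install docker

# 4-5: build images on the new tag and push them to the registry
kolla-build --tag 5.0.0 --registry registry.example.com:5000 --push

# 7: populate the new passwords template, then merge the old values in
cp kolla-ansible/etc/kolla/passwords.yml /etc/kolla/passwords.yml.new
kolla-genpwd -p /etc/kolla/passwords.yml.new
kolla-mergepwd --old /etc/kolla/passwords.yml.old \
               --new /etc/kolla/passwords.yml.new \
               --final /etc/kolla/passwords.yml

# 8-9: upgrade, then deploy (deploy adds containers the upgrade skips)
kolla-ansible -i /etc/kolla/multinode upgrade
kolla-ansible -i /etc/kolla/multinode deploy
```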

Check the versions of the new services to make sure you have upgraded

  1. Go into the nova_conductor container
  2. Run nova-manage --version
  3. Compare your version to the ones listed on the project site https://releases.openstack.org/teams/nova.html
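The check can also be run from the host without entering the container; a minimal sketch, assuming the container is named nova_conductor:

```shell
# Run the version check from the host (container name assumed: nova_conductor)
docker exec nova_conductor nova-manage --version

# Compare the output against https://releases.openstack.org/teams/nova.html
# (Pike nova is the 16.x series)
```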

Specifics for Multinode Train to Ussuri WITH Ceph

Notes

This was for the demo system running CentOS 8.4 with Train, using binary containers. It is based on using these docker containers: Build_kolla_ussuri_containers_on_fresh_centos8_build_VM

Prep base Kolla config

#pull the new deploy container
podman pull registry.define-technology.com:5000/kolla/kolla-deploy:ussuri
podman tag registry.define-technology.com:5000/kolla/kolla-deploy:ussuri kolla-deploy-ussuri

#create it and copy the base templates into the ~/kolla dir
podman create --name kolla-deploy-ussuri --hostname kolla-deploy-ussuri kolla-deploy-ussuri
podman cp kolla-deploy-ussuri:/kolla/kolla-ansible/etc/kolla/globals.yml ~/kolla/globals.yml.ussuri
podman cp kolla-deploy-ussuri:/kolla/kolla-ansible/etc/kolla/passwords.yml ~/kolla/passwords.yml.ussuri
podman cp kolla-deploy-ussuri:/kolla/kolla-ansible/ansible/inventory/all-in-one ~/kolla/all-in-one_ussuri
podman cp kolla-deploy-ussuri:/kolla/kolla-ansible/ansible/inventory/multinode ~/kolla/multinode_ussuri

#remove it and recreate using bind mounts
podman rm -f kolla-deploy-ussuri
podman run --name kolla-deploy-ussuri --hostname kolla-deploy-ussuri --net=host -v /root/kolla/:/etc/kolla/ -v /root/.ssh:/root/.ssh -d -it kolla-deploy-ussuri bash

#jump into container and make backups of the original files in case something goes wrong

podman exec -it kolla-deploy-ussuri bash
cd /etc/kolla/

# note that I move the file and then copy it back with the same filename to keep the timestamp on the "old" file intact
# first globals
mv globals.yml globals.yml.train_pre_upgrade
cp globals.yml.train_pre_upgrade globals.yml

#show changes
[root@deploy kolla]# diff  -Naur globals.yml.train_pre_upgrade globals.yml
--- globals.yml.train_pre_upgrade       2022-01-05 17:58:51.000000000 +0000
+++ globals.yml 2022-01-11 15:47:04.000000000 +0000
@@ -1,8 +1,8 @@
 ---
 kolla_base_distro: "centos"
 base_distro: "{{ kolla_base_distro }}"  # needed to add this for prechecks to work
-openstack_release: "train"
-openstack_tag_suffix: "{{ '' if base_distro != 'centos' or ansible_distribution_major_version == '7' else  '-centos8' }}"
+openstack_release: "ussuri"
+kolla_install_type: "source"
 kolla_external_vip_address: "10.10.12.10"
 kolla_external_vip_interface: "bond_api.12" #was ovs_public
 #kolla_external_fqdn: demo.public.dt.internal
@@ -51,7 +51,7 @@
 #mariadb backup enable
 enable_mariabackup: "yes"

-horizon_tag: "stein-definetech"
+#horizon_tag: "stein-definetech"
 enable_magnum: true
 # see https://nvd.nist.gov/vuln/detail/CVE-2016-7404
 # enable_cluster_user_trust must be enabled to allow auto scaling to work


#now to merge old and new passwords
#first archive train pwds

mv passwords.yml passwords.yml.train_pre_upgrade

#make a copy of the ussuri passwords template (we made this earlier copying from the image before we mounted the ~/kolla folder over /etc/kolla in the container)

cp passwords.yml.ussuri passwords.yml.new

#now we populate all the passwords in the new file (this is because new services may have been added that were NOT in the previous release)

kolla-genpwd -p passwords.yml.new

#finally merge the old pw with the new passwords into the NEW passwords.yml file

kolla-mergepwd --old passwords.yml.train_pre_upgrade --new passwords.yml.new --final /etc/kolla/passwords.yml
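To make the merge step concrete, here is a tiny self-contained demo of what kolla-mergepwd is doing, under the assumed semantics that values from the old file win on duplicate keys and keys only present in the new file are carried over (the file names and keys here are throwaway examples, not real passwords):

```shell
# Throwaway demo files -- not real passwords
printf 'database_password: oldsecret\nrabbitmq_password: oldrabbit\n' > old.yml
printf 'database_password: freshgen\nnew_service_password: freshgen2\n' > new.yml

# old file listed first, so its values win on duplicate keys;
# keys that only exist in the new file survive the merge
awk -F': ' '!seen[$1]++' old.yml new.yml > final.yml
cat final.yml
```

The merged file keeps the old database and rabbitmq values and gains the new service's freshly generated one, which is exactly why you run kolla-genpwd on the new template before merging.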

#next we need to update the inventory; I've only updated the multinode one here
# you need to manually merge this with your existing inventory using common sense
# if you don't have this file you missed a step earlier where we copied it out of the container;
# in that case fetch it with:
# curl -s https://raw.githubusercontent.com/openstack/kolla-ansible/stable/ussuri/ansible/inventory/multinode > multinode_ussuri
vi multinode_ussuri
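A quick way to see what needs merging is to diff your live inventory against the new template. The path of the old file is an assumption here; adjust it to wherever your Train inventory actually lives.

```shell
# Compare the running Train inventory with the new Ussuri template
# (the "multinode" path below is an assumed location for the old file)
diff -u /etc/kolla/multinode /etc/kolla/multinode_ussuri | less
```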

If using ceph do this

Ussuri expects the cinder-backup container to have the cinder-volume keyring as well; Train didn't

cp config/cinder/cinder-volume/ceph.client.cinder.keyring config/cinder/cinder-backup/
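After the copy, it is worth confirming the two keyrings are identical; a minimal sanity check, assuming you run it from the same kolla config directory as the cp above:

```shell
# The two copies should be byte-identical after the cp above
cmp config/cinder/cinder-volume/ceph.client.cinder.keyring \
    config/cinder/cinder-backup/ceph.client.cinder.keyring \
  && echo "keyrings match"
```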

OK, we're pretty much done

Pull updated containers

Do not skip this step: it is the first and only chance you will get to detect whether you are using the right release. If it complains about missing container manifest files, you are pulling the wrong one and need to stop and check what you are doing.

kolla-ansible -i /etc/kolla/multinode_ussuri pull

If that worked OK, then you are probably fine to do the upgrade

Upgrade

kolla-ansible -i /etc/kolla/multinode_ussuri upgrade

Is it complaining about a missing cinder-volume keyring in the cinder-backup role? If so, look back at the "If using ceph" section above. If your libvirt container is constantly restarting, you have tried to upgrade too many versions in one go. Good luck!