OpenStack:Magnum-Xena

Prereqs

This is not an exhaustive list, just things I noticed because I was starting from a very, very basic deployment.

  • Heat needs to work
    • Public APIs need to be reachable from a VM. Test this by pinging the kolla_external_vip_interface (which I have put on a virtual interface on my controller) from a VM (see the sketch after this list).
    • Make sure that your kolla_external_vip_interface is set to something sane too.
    • HAProxy is not required if you have a dedicated controller; you can set the kolla_external_vip_interface to the external IP.
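
A minimal reachability sketch, assuming an external VIP of 192.0.2.10 (substitute your own) and the default Heat API port 8004; run it from a test VM on a tenant network:

# basic ICMP check against the VIP
ping -c 3 192.0.2.10
# then confirm the Heat API itself answers; any HTTP response at all means it is reachable
curl -k https://192.0.2.10:8004/    # use http:// if you have not enabled TLS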

Changes to globals.yml

enable_magnum: yes
enable_cluster_user_trust: yes
enable_barbican: yes

I'm not 100% sure Barbican is essential, but I wanted it for something else anyway, and it is used by Magnum.

reconfigure openstack

podman exec -it kolla-deploy kolla-ansible -i /etc/kolla/multinode reconfigure -t  common,horizon,magnum,barbican
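
Once the reconfigure finishes, a quick sanity check (from inside the deploy container with the admin credentials sourced) is to confirm the Magnum conductor has registered itself:

openstack coe service list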

upload Image to glance

In the docs they tell you to use fedora-coreos-35: https://docs.openstack.org/magnum/xena/contributor/quickstart.html#building-a-kubernetes-cluster-based-on-fedora-coreos

I'm sure you can, if you want to spend a lot of time debugging and tuning the deployment to provision Kubernetes > 1.18.x, which I will hopefully come back and explain later. IF you do, you WILL need to modify some docker volume mounts to the cgroups folder and then tweak your install to provision something NOT based on the hyperkube image. If you want something that works with the defaults, do this:

# download and decompress the known-good Fedora CoreOS 33 image
wget https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/33.20210426.3.0/x86_64/fedora-coreos-33.20210426.3.0-openstack.x86_64.qcow2.xz
unxz fedora-coreos-33.20210426.3.0-openstack.x86_64.qcow2.xz
# convert to raw to match the --disk-format used below
qemu-img convert fedora-coreos-33.20210426.3.0-openstack.x86_64.qcow2 fedora-coreos-33.20210426.3.0-openstack.x86_64.raw
# /root/kolla on the host is what my kolla-deploy container sees as /etc/kolla; adjust for your own mounts
mv fedora-coreos-33.20210426.3.0-openstack.x86_64.* /root/kolla
# upload from inside the deploy container; the os_distro property is required by Magnum
podman exec -it kolla-deploy bash
source /etc/kolla/admin-openrc.sh
openstack image create --container-format=bare --disk-format=raw --property os_distro='fedora-coreos' fedora-coreos-33.20210426.3.0-openstack.x86_64 --file /etc/kolla/fedora-coreos-33.20210426.3.0-openstack.x86_64.raw
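
To confirm Glance recorded the property Magnum relies on (os_distro has to match the template's cluster_distro), a quick check:

openstack image show fedora-coreos-33.20210426.3.0-openstack.x86_64 -c properties -c status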

Finding old Fedora CoreOS images

When you need to find a specific version, you can find them here:

https://builds.coreos.fedoraproject.org/browser?stream=stable&arch=x86_64
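
The download URLs follow a predictable pattern, so once you have picked a build number from that page you can fetch it directly. A sketch, assuming the stable stream and x86_64 (substitute your own build number):

BUILD=33.20210426.3.0
wget "https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/${BUILD}/x86_64/fedora-coreos-${BUILD}-openstack.x86_64.qcow2.xz"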

Make a template using your image

Note: I have no Octavia setup here (yet), so no load balancers are possible.

I'm using Ceph, so to speed up deployments I'm forcing boot volumes and specifying my preferred volume type. If you have only one volume type you can skip that part, but DO NOT forget to add the label for boot_volume_size or the boot disk will be created using local storage on the hypervisor, which is normally slower. You probably won't have these flavors, so change them. If you are not sure which volume types your deployment offers, list them first (see below).
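
A quick way to see the available volume types (standard Cinder CLI, nothing specific to this setup):

openstack volume type list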

openstack coe cluster template create k8s --image fedora-coreos-33.20210426.3.0-openstack.x86_64 --keypair mykey --external-network public1 --dns-nameserver 8.8.8.8 --flavor g1.medium.2xa100  --master-flavor m1.xlarge --docker-volume-size 10 --network-driver flannel --docker-storage-driver overlay2 --coe kubernetes --labels="boot_volume_size=10,docker_volume_type=ceph-fast-multiattach,boot_volume_type=ceph-fast-multiattach"
<snipped>



openstack coe cluster template show k8s
+-----------------------+------------------------------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                                                  |
+-----------------------+------------------------------------------------------------------------------------------------------------------------+
| insecure_registry     | -                                                                                                                      |
| labels                | {'boot_volume_size': '10', 'docker_volume_type': 'ceph-fast-multiattach', 'boot_volume_type': 'ceph-fast-multiattach'} |
| updated_at            | 2022-03-18T16:03:09+00:00                                                                                              |
| floating_ip_enabled   | True                                                                                                                   |
| fixed_subnet          | -                                                                                                                      |
| master_flavor_id      | m1.xlarge                                                                                                              |
| uuid                  | 8e7648cc-2699-43cc-8fd2-eb2576032366                                                                                   |
| no_proxy              | -                                                                                                                      |
| https_proxy           | -                                                                                                                      |
| tls_disabled          | False                                                                                                                  |
| keypair_id            | mykey                                                                                                                  |
| public                | False                                                                                                                  |
| http_proxy            | -                                                                                                                      |
| docker_volume_size    | 10                                                                                                                     |
| server_type           | vm                                                                                                                     |
| external_network_id   | public1                                                                                                                |
| cluster_distro        | fedora-coreos                                                                                                          |
| image_id              | fedora-coreos-33.20210426.3.0-openstack.x86_64                                                                         |
| volume_driver         | -                                                                                                                      |
| registry_enabled      | False                                                                                                                  |
| docker_storage_driver | overlay2                                                                                                               |
| apiserver_port        | -                                                                                                                      |
| name                  | k8s                                                                                                                    |
| created_at            | 2022-03-18T14:44:51+00:00                                                                                              |
| network_driver        | flannel                                                                                                                |
| fixed_network         | -                                                                                                                      |
| coe                   | kubernetes                                                                                                             |
| flavor_id             | g1.medium.2xa100                                                                                                       |
| master_lb_enabled     | False                                                                                                                  |
| dns_nameserver        | 8.8.8.8                                                                                                                |
| hidden                | False                                                                                                                  |
| tags                  | -                                                                                                                      |
+-----------------------+------------------------------------------------------------------------------------------------------------------------+

Create a minimal k8s env

openstack coe cluster create k8s-test --cluster-template k8s --node-count 1
Request to create cluster 41862a44-3afd-4fd4-82c3-8a2be036f756 accepted

Wait a while. If it spends much more than 10 minutes on the kube masters it's probably fucked. For the first few attempts I strongly recommend watching it install (see below) so you can catch problems early.
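
One way to watch it, a sketch using standard Magnum and Heat CLI commands (the cluster name k8s-test, the 30-second interval, and the nesting depth are just my choices, not from this setup):

# overall cluster status
watch -n 30 openstack coe cluster show k8s-test -c status -c status_reason

# or drill into the Heat stack to see which resource it is stuck on
openstack stack list
openstack stack resource list <stack-name-or-id> -n 2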

Eventually you will hopefully see a CREATE_COMPLETE status in Horizon, or something like this using the CLI:

root@kolla-deploy:/#  openstack coe cluster list
+--------------------------------------+----------------+---------+------------+--------------+-----------------+---------------+
| uuid                                 | name           | keypair | node_count | master_count | status          | health_status |
+--------------------------------------+----------------+---------+------------+--------------+-----------------+---------------+
| 355ba323-f4bf-4d9c-8ad6-122b534f62fa | autoscaling-k8 | mykey   |          1 |            1 | CREATE_COMPLETE | HEALTHY       |
+--------------------------------------+----------------+---------+------------+--------------+-----------------+---------------+

Debugging stuff when it explodes

ssh in and check containers

First get your master IP (the worker nodes nearly always work if the master succeeds and you are not doing something funky like mixing versions).

openstack coe cluster show autoscaling-k8 -c master_addresses
+------------------+---------------------+
| Field            | Value               |
+------------------+---------------------+
| master_addresses | ['192.168.199.212'] |
+------------------+---------------------+
root@kolla-deploy:/#

Then SSH into it and check which containers you have. This is from my WORKING deployment and is roughly what you should see; the times will be way lower and you may not yet have them all. But, for example, if you have kube-apiserver AND NOT kubelet, and that does not get fixed after 2 minutes or so, it is not going to finish; you need to look into why that is and fix it for next time.

[root@nl-ams-wc1-stl-fl0-pd0-dpy1 ~]# ssh core@192.168.199.212
Warning: Permanently added '192.168.199.212' (ECDSA) to the list of known hosts.
Fedora CoreOS 33.20210426.3.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/c/server/coreos/

Last login: Fri Mar 18 17:46:04 2022 from 192.168.199.29
[core@autoscaling-k8-bii7gdcvsje2-master-0 ~]$ sudo su -
[root@autoscaling-k8-bii7gdcvsje2-master-0 ~]# podman ps
CONTAINER ID  IMAGE                                                            COMMAND               CREATED            STATUS                PORTS   NAMES
bdfb3419d96c  docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1  /usr/bin/start-he...  About an hour ago  Up About an hour ago          heat-container-agent
7a80a9babe76  k8s.gcr.io/hyperkube:v1.18.16                                    kube-apiserver --...  About an hour ago  Up About an hour ago          kube-apiserver
ad8e7a14ddb7  k8s.gcr.io/hyperkube:v1.18.16                                    kube-controller-m...  About an hour ago  Up About an hour ago          kube-controller-manager
95fa67ab080d  k8s.gcr.io/hyperkube:v1.18.16                                    kube-scheduler --...  About an hour ago  Up About an hour ago          kube-scheduler
e04331ccf23a  k8s.gcr.io/hyperkube:v1.18.16                                    kubelet --logtost...  About an hour ago  Up About an hour ago          kubelet
766525ac4d3d  k8s.gcr.io/hyperkube:v1.18.16                                    kube-proxy --logt...  About an hour ago  Up About an hour ago          kube-proxy
77ec9403fd1c  quay.io/coreos/etcd:v3.4.6                                       /usr/local/bin/et...  About an hour ago  Up About an hour ago          etcd

check heat container agent log

The first thing to check is that you are not hitting pull limits on Docker Hub. One way to check this is the heat-container-agent log:

journalctl -u heat-container-agent

If you are REALLY fast getting in, then rerun it with --follow to tail the log as it is appended to.
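
Another way to confirm or rule out the pull-limit theory (not from this setup, just the check documented by Docker; it needs curl and jq) is to ask the registry for your remaining anonymous pull allowance:

TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

If ratelimit-remaining shows 0, that is your problem.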


Note that sometimes the problem you are looking for will not be in here, but most of the time it is. This sets me up nicely for an example of when it isn't, doesn't it!

my install just gets stuck and there is no more output in the heat-container-agent log

OK, so this happened today: my log just got stuck like this:

Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: INFO:os-refresh-config:Starting phase pre-configure
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022 ----------------------- PROFILING -----------------------
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022 Target: pre-configure.d
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022 Script                                     Seconds
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022 ---------------------------------------  ----------
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022 --------------------- END PROFILING ---------------------
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: [2022-03-18 16:35:35,704] (os-refresh-config) [INFO] Completed phase pre-configure
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: INFO:os-refresh-config:Completed phase pre-configure
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: [2022-03-18 16:35:35,704] (os-refresh-config) [INFO] Starting phase configure
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: INFO:os-refresh-config:Starting phase configure
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022 Running /opt/stack/os-config-refresh/configure.d/20-os-apply-config
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: [2022/03/18 04:35:35 PM] [INFO] writing /etc/os-collect-config.conf
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: [2022/03/18 04:35:35 PM] [INFO] writing /var/run/heat-config/heat-config
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: [2022/03/18 04:35:35 PM] [INFO] success
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022 20-os-apply-config completed
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022 Running /opt/stack/os-config-refresh/configure.d/50-heat-config-docker-compose
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022 50-heat-config-docker-compose completed
Mar 18 16:35:35 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: dib-run-parts Fri Mar 18 16:35:35 UTC 2022 Running /opt/stack/os-config-refresh/configure.d/55-heat-config
Mar 18 16:35:36 autoscaling-k8-bii7gdcvsje2-master-0 podman[2130]: [2022-03-18 16:35:36,001] (heat-config) [DEBUG] Running /var/lib/heat-config/hooks/script < /var/lib/heat-config/deployed/4330ed45-2879-4864-a217-7f283fbab03f.json

and nothing more would show up even when the install timed out

See that last line? Its output does not go into this file... it goes here:

/var/log/heat-config/heat-config-script/4330ed45-2879-4864-a217-7f283fbab03f-autoscaling-k8-bii7gdcvsje2-kube_cluster_config-a7azsk4i3cpk.log

NOTE that the UUID in the filename will be different, unless you are the luckiest person in the universe... if it is the same, go buy a lottery ticket right this second.
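
A quick way to find yours (just standard shell, nothing Magnum-specific) is to tail the most recently modified file in that directory:

ls -lrt /var/log/heat-config/heat-config-script/
tail -f "$(ls -t /var/log/heat-config/heat-config-script/* | head -1)"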

Now, an example from an attempt with a Fedora CoreOS 34 or 35 image, as mentioned earlier:

[root@k8s-test-64g2ugtyhrfl-master-0 ~]# tail -f /var/log/heat-config/heat-config-script/02aeb0a3-cfd0-430e-828d-b0a2082d2016-k8s-test-64g2ugtyhrfl-kube_masters-zriwojqedq4e-0-wky2qedzv5dg-master_config-iawpfxgjuc2s.log
Created symlink /etc/systemd/system/multi-user.target.wants/kube-proxy.service → /etc/systemd/system/kube-proxy.service.
+ for action in enable restart
restart service etcd
+ for service in etcd ${container_runtime_service} kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
+ echo 'restart service etcd'
+ ssh -F /srv/magnum/.ssh/config root@localhost systemctl restart etcd
+ for service in etcd ${container_runtime_service} kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
restart service docker
+ echo 'restart service docker'
+ ssh -F /srv/magnum/.ssh/config root@localhost systemctl restart docker
+ for service in etcd ${container_runtime_service} kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
+ echo 'restart service kube-apiserver'
+ ssh -F /srv/magnum/.ssh/config root@localhost systemctl restart kube-apiserver
restart service kube-apiserver
+ for service in etcd ${container_runtime_service} kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
+ echo 'restart service kube-controller-manager'
+ ssh -F /srv/magnum/.ssh/config root@localhost systemctl restart kube-controller-manager
restart service kube-controller-manager
+ for service in etcd ${container_runtime_service} kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
restart service kube-scheduler
+ echo 'restart service kube-scheduler'
+ ssh -F /srv/magnum/.ssh/config root@localhost systemctl restart kube-scheduler
+ for service in etcd ${container_runtime_service} kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
+ echo 'restart service kubelet'
+ ssh -F /srv/magnum/.ssh/config root@localhost systemctl restart kubelet
restart service kubelet
+ for service in etcd ${container_runtime_service} kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
restart service kube-proxy
+ echo 'restart service kube-proxy'
+ ssh -F /srv/magnum/.ssh/config root@localhost systemctl restart kube-proxy
++ kubectl get --raw=/healthz
Error from server (InternalError): an error on the server ("[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[-]poststarthook/crd-informer-synced failed: reason withheld\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[+]poststarthook/apiserver/bootstrap-system-flowcontrol-configuration ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[-]autoregister-completion failed: reason withheld\n[+]poststarthook/apiservice-openapi-controller ok\nhealthz check failed") has prevented the request from succeeding
+ '[' ok = '' ']'
+ echo 'Trying to label master node with node-role.kubernetes.io/master=""'
+ sleep 5s
Trying to label master node with node-role.kubernetes.io/master=""
++ kubectl get --raw=/healthz
+ '[' ok = ok ']'
+ kubectl patch node k8s-test-64g2ugtyhrfl-master-0 --patch '{"metadata": {"labels": {"node-role.kubernetes.io/master": ""}}}'
Error from server (NotFound): nodes "k8s-test-64g2ugtyhrfl-master-0" not found
+ echo 'Trying to label master node with node-role.kubernetes.io/master=""'
+ sleep 5s
Trying to label master node with node-role.kubernetes.io/master=""
++ kubectl get --raw=/healthz
+ '[' ok = ok ']'
+ kubectl patch node k8s-test-64g2ugtyhrfl-master-0 --patch '{"metadata": {"labels": {"node-role.kubernetes.io/master": ""}}}'
Error from server (NotFound): nodes "k8s-test-64g2ugtyhrfl-master-0" not found
+ echo 'Trying to label master node with node-role.kubernetes.io/master=""'
+ sleep 5s
Trying to label master node with node-role.kubernetes.io/master=""
++ kubectl get --raw=/healthz
+ '[' ok = ok ']'
+ kubectl patch node k8s-test-64g2ugtyhrfl-master-0 --patch '{"metadata": {"labels": {"node-role.kubernetes.io/master": ""}}}'
Error from server (NotFound): nodes "k8s-test-64g2ugtyhrfl-master-0" not found
+ echo 'Trying to label master node with node-role.kubernetes.io/master=""'
+ sleep 5s
Trying to label master node with node-role.kubernetes.io/master=""
^C

annnnnnnnd so on, forever


Long story short, I then realised I should have had a kubelet container in podman, but I didn't (a quick way to confirm that is shown below).
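
A quick confirmation sketch (standard podman and systemd commands, nothing Magnum-specific):

podman ps -a --filter name=kubelet   # is there a kubelet container at all, even an exited one?
systemctl status kubelet             # what does systemd think of the unit?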

I checked the logs using

journalctl -u kubelet 

and spotted this

Mar 18 09:50:39 k8s-test-hln62bzrqnt3-master-0 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 18 09:50:49 k8s-test-hln62bzrqnt3-master-0 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 18 09:50:49 k8s-test-hln62bzrqnt3-master-0 systemd[1]: Stopped Kubelet via Hyperkube (System Container).
Mar 18 09:50:49 k8s-test-hln62bzrqnt3-master-0 systemd[1]: Starting Kubelet via Hyperkube (System Container)...
Mar 18 09:50:49 k8s-test-hln62bzrqnt3-master-0 podman[6772]: Error: no container with name or ID "kubelet" found: no such container
Mar 18 09:50:49 k8s-test-hln62bzrqnt3-master-0 systemd[1]: Started Kubelet via Hyperkube (System Container).
Mar 18 09:50:49 k8s-test-hln62bzrqnt3-master-0 bash[6797]: Error: statfs /sys/fs/cgroup/systemd: no such file or directory
Mar 18 09:50:49 k8s-test-hln62bzrqnt3-master-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=125/n/a
Mar 18 09:50:49 k8s-test-hln62bzrqnt3-master-0 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 18 09:50:59 k8s-test-hln62bzrqnt3-master-0 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 18 09:50:59 k8s-test-hln62bzrqnt3-master-0 systemd[1]: Stopped Kubelet via Hyperkube (System Container).
Mar 18 09:50:59 k8s-test-hln62bzrqnt3-master-0 systemd[1]: Starting Kubelet via Hyperkube (System Container)...
Mar 18 09:50:59 k8s-test-hln62bzrqnt3-master-0 podman[6884]: Error: no container with name or ID "kubelet" found: no such container
Mar 18 09:50:59 k8s-test-hln62bzrqnt3-master-0 systemd[1]: Started Kubelet via Hyperkube (System Container).
Mar 18 09:50:59 k8s-test-hln62bzrqnt3-master-0 bash[6910]: Error: statfs /sys/fs/cgroup/systemd: no such file or directory
Mar 18 09:50:59 k8s-test-hln62bzrqnt3-master-0 systemd[1]: kubelet.service: Main process exited, code=exited, status=125/n/a

This is because FCOS 34 changed to cgroups v2 and that path doesn't exist. You think you can fix this easily, but no: even if you fix it by removing the volume mount in the service file, you still get:

Mar 18 12:54:19 k8s-test-hln62bzrqnt3-master-0 bash[132110]: W0318 12:54:19.083054  132128 server.go:616] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
Mar 18 12:54:19 k8s-test-hln62bzrqnt3-master-0 bash[132110]: W0318 12:54:19.083124  132128 server.go:623] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
Mar 18 12:54:19 k8s-test-hln62bzrqnt3-master-0 kubelet[132125]: W0318 12:54:19.083124  132128 server.go:623] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
Mar 18 12:54:19 k8s-test-hln62bzrqnt3-master-0 kubelet[132125]: F0318 12:54:19.083835  132128 server.go:274] failed to run Kubelet: mountpoint for  not found
Mar 18 12:54:19 k8s-test-hln62bzrqnt3-master-0 bash[132110]: F0318 12:54:19.083835  132128 server.go:274] failed to run Kubelet: mountpoint for  not found
Mar 18 12:54:19 k8s-test-hln62bzrqnt3-master-0 podman[132110]: 2022-03-18 12:54:19.128859553 +0000 UTC m=+0.339248698 container died a20407cbb59c7c85e9df40c35a50c406193d11f428b63f251e6bad4b563b9044 (image=k8s.gcr.io/hyperkube:v1.18.16, name=kubelet)

This is because the path structure in cgroups v2 is different and kubelet is looking in the old location. This is fixed in Kubernetes 1.19, by the way. However, in 1.19 they removed all support for hyperkube container images, and guess what Magnum uses by default... yep, it's hyperkube!... To be continued!
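
If you want to confirm which cgroup hierarchy an image actually booted with, a quick check (standard coreutils, not Magnum-specific):

# cgroup2fs means the unified cgroups v2 hierarchy; tmpfs means legacy/hybrid cgroups v1
stat -fc %T /sys/fs/cgroup/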