VScaler:OpenStack Performance Tweaks
Disable Hyper-Threading (HT) in the BIOS on nova compute nodes.
Enable NUMA topology in the BIOS on nova compute nodes.
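After changing the BIOS settings, both can be sanity-checked from the OS. A minimal sketch (assumes lscpu and numactl are available on the node):

# With HT disabled, expect "Thread(s) per core: 1"
lscpu | grep -i 'thread(s) per core'

# With NUMA enabled, expect more than one NUMA node
lscpu | grep -i 'numa node(s)'
numactl --hardware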
Enable nested virtualization
Verify nested virtualization is not already enabled. On Intel systems use:

cat /sys/module/kvm_intel/parameters/nested
N

On AMD systems use:

cat /sys/module/kvm_amd/parameters/nested
N

Shut down all VMs and unload the KVM module. On Intel systems use:

modprobe -r kvm_intel

On AMD systems use:

modprobe -r kvm_amd

Activate nested virtualization until the next reboot. On Intel systems use:

modprobe kvm_intel nested=1

On AMD systems use:

modprobe kvm_amd nested=1

Enable persistent nested virtualization: add the following line to /etc/modprobe.d/kvm.conf. For Intel systems:

options kvm_intel nested=1

For AMD systems:

options kvm_amd nested=1

Verify nested virtualization is enabled. On Intel systems use:

cat /sys/module/kvm_intel/parameters/nested
Y

On AMD systems use:

cat /sys/module/kvm_amd/parameters/nested
Y
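Once nested virtualization is active and the host CPU is exposed to guests (see cpu_mode=host-passthrough below), the virtualization extension flag should be visible inside a VM. A quick sketch, run inside the guest:

# Non-zero output means vmx (Intel) or svm (AMD) is exposed to the guest
egrep -c '(vmx|svm)' /proc/cpuinfo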
Enable cpu_passthrough on nova compute nodes

Add the following lines to /etc/kolla/config/nova/nova-compute.conf:

[libvirt]
cpu_mode=host-passthrough
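With cpu_mode=host-passthrough, guests see the host's CPU model and feature flags instead of a generic QEMU model. A rough check (a sketch: run the same command on the compute node and inside a newly booted guest and compare the results):

grep -m1 'model name' /proc/cpuinfo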
Set CPU and RAM allocation ratios

Add or update the following lines in /etc/kolla/config/nova/nova-compute.conf:

[DEFAULT]
ram_allocation_ratio = 1.0
cpu_allocation_ratio = 1.0
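A ratio of 1.0 disables overcommitment, which is the usual choice for pinned, performance-sensitive workloads. As a worked example (illustrative numbers): with cpu_allocation_ratio = 1.0, a 36-core node can schedule at most 36 vCPUs; raising the ratio to 4.0 would allow up to 36 x 4 = 144 vCPUs on the same node, at the cost of CPU contention.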
grubby --update-kernel=ALL --args="isolcpus=0,1,2,3"
grub2-install /dev/sdawhere "isolcpus=0,1,2,3" are the cores reserved for VM use.
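After the reboot, the isolation can be confirmed (a minimal check; the sysfs file assumes a reasonably recent kernel):

# Should print the isolated core list, e.g. 0-3
cat /sys/devices/system/cpu/isolated

# The argument should also appear on the kernel command line
grep -o 'isolcpus=[^ ]*' /proc/cmdline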
Enable vCPU pinning on nova compute nodes

Add the following lines to /etc/kolla/config/nova/nova-compute.conf, or to /etc/kolla/config/nova/$nodename/nova-compute.conf if different configurations are required on different nodes.

[DEFAULT]
vcpu_pin_set=0-35

where 0-35 is the range of dedicated cores to be allocated.
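Whatever range is chosen, vcpu_pin_set should reference the same cores that were isolated with isolcpus earlier, so the kernel scheduler keeps host processes off the cores nova dedicates to guests. An illustrative pairing (core numbers are examples only):

# Kernel argument: isolate cores 4-35 from the general scheduler
grubby --update-kernel=ALL --args="isolcpus=4-35"

# nova-compute.conf: pin guest vCPUs to the same cores
[DEFAULT]
vcpu_pin_set=4-35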
Nova scheduler configuration

Add the following filters to the nova-scheduler configuration located in /etc/kolla/config/nova/nova-scheduler.conf:

scheduler_default_filters = NUMATopologyFilter,AggregateInstanceExtraSpecsFilter
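Note that setting scheduler_default_filters replaces the default filter list rather than appending to it, so the stock filters should normally be retained alongside the two added here. An illustrative full line, based on the defaults shipped in nova releases of this era (verify against the deployed version):

scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter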
Enable huge pages on nova compute nodes

Check huge pages status:

grep Huge /proc/meminfo

Add the following to /etc/default/grub:

GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=1G hugepages=160 transparent_hugepage=never"

Change the number of huge pages according to the host memory configuration, then regenerate the GRUB configuration (grub2-mkconfig -o /boot/grub2/grub.cfg) and reboot.
Grubby can also be used to add the kernel arguments:

grubby --update-kernel=ALL --args="hugepagesz=1G hugepages=160 transparent_hugepage=never"
grub2-install /dev/sda
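After a reboot, the reservation should show up in /proc/meminfo. Expected output for the example values above (a sketch; totals depend on the configured page count):

grep Huge /proc/meminfo
# HugePages_Total:     160
# HugePages_Free:      160
# Hugepagesize:    1048576 kB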
Add huge pages flavor metadata

In order to enable allocation of huge pages memory, flavors must have the "hw:mem_page_size" property set to either "large" or an explicit page size.

openstack flavor set m1.large --property hw:mem_page_size=large
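Instead of "large", an explicit page size can be requested; "small" and "any" are also valid values. For example, to match the 1 GB pages configured above:

openstack flavor set m1.large --property hw:mem_page_size=1GB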
Add NUMA zone flavor metadata

Add hw:numa_mempolicy=preferred and hw:numa_nodes=$number_of_numa_nodes to the flavor metadata:

openstack flavor set m1.large --property hw:numa_mempolicy=preferred
openstack flavor set m1.large --property hw:numa_nodes=2
Add CPU pinning flavor metadata

Add hw:cpu_policy=dedicated and hw:cpu_thread_policy=prefer to the flavor metadata (note that "prefer", not "preferred", is the valid value for the thread policy):

openstack flavor set m1.large --property hw:cpu_policy=dedicated
openstack flavor set m1.large --property hw:cpu_thread_policy=prefer
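Before booting instances, the accumulated flavor metadata can be reviewed in one place (a quick check):

openstack flavor show m1.large -c properties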
Create a pinned CPU host aggregate

nova aggregate-create performance
nova aggregate-set-metadata performance pinned=true
Add hosts to aggregate

nova aggregate-add-host performance node01

Add aggregate metadata to flavor

openstack flavor set m1.large --property aggregate_instance_extra_specs:pinned=true
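The nova aggregate-* commands above use the legacy client syntax; on newer clients the openstack CLI offers equivalents (a sketch using the same names):

openstack aggregate create performance
openstack aggregate set --property pinned=true performance
openstack aggregate add host performance node01

# Confirm hosts and metadata on the aggregate
openstack aggregate show performance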