Bright:CaaS

From Define Wiki

Make sure you have followed the OpenStack installation guide and have a working OpenStack environment on your Bright cluster (http://wiki.bostonlabs.co.uk/w/index.php?title=Bright:Openstack-install).

Installing CaaS

[root@shadow-head ~] yum install -y cm-openstack-caas cm-ipxe-caas
[root@shadow-head ~] yum update -y

Next, in the file /cm/shared/apps/cm-openstack-caas/bin/Settings.py, edit the values of "external_dns_server" and "buildmatic_host_ip" appropriately:

'external_dns_server': '172.28.0.2'
'buildmatic_host_ip': '172.28.0.199' # the external IP address of the head node
'pxe_helper_url': 'http://localhost:8082/chain'
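The two values can also be updated non-interactively with sed. The following is a minimal sketch, demonstrated on a temporary copy so the real file is untouched; the addresses are the example values from above, and the sample file contents are placeholders, not the real Settings.py:

```shell
# Demonstrated on a temporary copy; point SETTINGS at
# /cm/shared/apps/cm-openstack-caas/bin/Settings.py to apply for real.
SETTINGS=$(mktemp)
cat > "$SETTINGS" <<'EOF'
'external_dns_server': '10.0.0.1',
'buildmatic_host_ip': '10.0.0.2',
EOF

# Replace whatever values are currently present with the desired ones.
sed -i "s|'external_dns_server': '[^']*'|'external_dns_server': '172.28.0.2'|" "$SETTINGS"
sed -i "s|'buildmatic_host_ip': '[^']*'|'buildmatic_host_ip': '172.28.0.199'|" "$SETTINGS"

grep -E "external_dns_server|buildmatic_host_ip" "$SETTINGS"
```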

After the modifications are in place, the pxehelper service is started and enabled:

[root@shadow-head ~] systemctl start pxehelper
[root@shadow-head ~] systemctl enable pxehelper

The pxehelper service uses port 8082. Unblock this port by adding the following rule to /etc/shorewall/rules and then restarting shorewall:

# -- Allow pxehelper service for automatic head node installation
ACCEPT   net            fw              tcp     8082
 
[root@shadow-head ~] systemctl restart shorewall

The OpenStack images can now be created:

[root@shadow-head ~] openstack image create --file /cm/local/apps/ipxe/ipxe-plain-net0.img --disk-format=raw --container-format=bare --public iPXE-plain-eth0
[root@shadow-head ~] openstack image create --file /cm/local/apps/ipxe/ipxe-plain-net1.img --disk-format=raw --container-format=bare --public iPXE-plain-eth1
[root@shadow-head ~] openstack image create --file /cm/local/apps/ipxe/ipxe-caas.img --disk-format=raw --container-format=bare --public ipxe-caas
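Since the three commands differ only in the image file and image name, they can be driven from a small list. The sketch below merely echoes each command for review (drop the echo to execute); the file/name pairs are the ones used above:

```shell
# Image file -> Glance image name pairs from the commands above.
while read -r file name; do
  echo openstack image create --file "/cm/local/apps/ipxe/$file" \
    --disk-format=raw --container-format=bare --public "$name"
done <<'EOF'
ipxe-plain-net0.img iPXE-plain-eth0
ipxe-plain-net1.img iPXE-plain-eth1
ipxe-caas.img ipxe-caas
EOF
```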

The dnsmasq utility must now be configured. Its configuration file, /cm/shared/apps/cm-openstack-caas/etc/dnsmasq.dev.conf, contains two placeholder strings that need to be replaced:

# The string is replaced with the external IP address of the head node(s).
<INSERT EXTERNAL IP OF THE MACHINE RUNNING PXE HELPER HERE>
 ...
# This is replaced with the FQDN of the head node (in the case of an HA setup, the FQDN assigned to the VIP), together with the IP address.
<INSERT EXTERNAL FQDN OF BUILDMATIC SERVER HERE>,<INSERT EXTERNAL IP OF BUILDMATIC SERVER HERE>
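The placeholders can be filled in with sed. A sketch follows, run against a temporary copy containing only the placeholder lines; 192.0.2.10 and head.example.com are hypothetical example values, not addresses from this cluster:

```shell
# Demonstrated on a temporary copy; point CONF at
# /cm/shared/apps/cm-openstack-caas/etc/dnsmasq.dev.conf to apply for real.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<INSERT EXTERNAL IP OF THE MACHINE RUNNING PXE HELPER HERE>
<INSERT EXTERNAL FQDN OF BUILDMATIC SERVER HERE>,<INSERT EXTERNAL IP OF BUILDMATIC SERVER HERE>
EOF

# Substitute hypothetical example values for the placeholders.
sed -i \
  -e 's|<INSERT EXTERNAL IP OF THE MACHINE RUNNING PXE HELPER HERE>|192.0.2.10|' \
  -e 's|<INSERT EXTERNAL FQDN OF BUILDMATIC SERVER HERE>|head.example.com|' \
  -e 's|<INSERT EXTERNAL IP OF BUILDMATIC SERVER HERE>|192.0.2.10|' \
  "$CONF"

cat "$CONF"
```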

After editing:

  • If the network node is not used as a compute node, then the following commands are run:
[root@shadow-head ~] cmsh -c 'category use openstack-network-nodes; roles; use openstack::node; customizations; add pxe;
set filepaths /etc/neutron/dhcp_agent.ini; entries; add dnsmasq_config_file=/etc/neutron/dnsmasq.dev.conf; commit'
  • If the network node is also to be used as a compute node, then the following cmsh command is run instead. In this command, the network node is placed in the "openstack-compute-hosts" category, is assigned the "openstack::node" role, and the needed customizations are added:
[root@shadow-head ~] cmsh -c 'device use <NETWORK_NODE>; roles; assign openstack::node; customizations; add pxe;
set filepaths /etc/neutron/dhcp_agent.ini; entries; add dnsmasq_config_file=/etc/neutron/dnsmasq.dev.conf; commit'

The following (key, value) pairs are added to the securitygroup section of the linuxbridge configuration file:

[root@shadow-head ~] cmsh -c 'category use openstack-compute-hosts; roles; use openstack::node; customizations; add "no sec groups";
set filepaths /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini; entries; add securitygroup firewall_driver=neutron.agent.firewall.NoopFirewallDriver; add securitygroup enable_security_group=False; commit'
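Once this customization has been committed and applied, linuxbridge_conf.ini on the compute hosts should end up containing entries equivalent to the following (a sketch of the expected result, not a file to edit by hand):

```ini
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
enable_security_group = False
```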

If Ceph is installed, then it is usually a good idea to customize it by setting its cache mode to writeback:

[root@shadow-head ~] cmsh -c 'category use openstack-compute-hosts; roles; use openstack::node; customizations; add "rbd cache";
set filepaths /etc/nova/nova.conf; entries; add libvirt disk_cachemodes=network=writeback,block=writeback; commit'
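After the commit, nova.conf on the compute hosts should carry the equivalent of the following entry (again a sketch of the expected result):

```ini
[libvirt]
disk_cachemodes = network=writeback,block=writeback
```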