Bright:CaaS
Make sure you have followed the OpenStack installation guide (http://wiki.bostonlabs.co.uk/w/index.php?title=Bright:Openstack-install) and that you have a working OpenStack environment on your Bright cluster.
Installing CaaS
[root@shadow-head ~] yum install -y cm-openstack-caas cm-ipxe-caas
[root@shadow-head ~] yum update -y

Next, in the file /cm/shared/apps/cm-openstack-caas/bin/Settings.py, the values of "external_dns_server" and "buildmatic_host_ip" should be edited appropriately:
'external_dns_server': '172.28.0.2'
'buildmatic_host_ip': '172.28.0.199' # this is the external ip of the head node
'pxe_helper_url': 'http://localhost:8082/chain'

After the modifications are in place, the pxehelper service is started and enabled:
[root@shadow-head ~] systemctl start pxehelper
[root@shadow-head ~] systemctl enable pxehelper
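Optionally, it can be verified at this point that the service came up and is listening; the second command should show pxehelper bound to TCP port 8082:

[root@shadow-head ~] systemctl status pxehelper
[root@shadow-head ~] ss -tlnp | grep 8082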
The pxehelper service uses port 8082. Unblock this port by adding the following rule to /etc/shorewall/rules and then restarting Shorewall:

# -- Allow pxehelper service for automatic head node installation
ACCEPT net fw tcp 8082

[root@shadow-head ~] systemctl restart shorewall

The OpenStack images can now be created:
[root@shadow-head ~] openstack image create --file /cm/local/apps/ipxe/ipxe-plain-net0.img --disk-format=raw --container-format=bare --public iPXE-plain-eth0
[root@shadow-head ~] openstack image create --file /cm/local/apps/ipxe/ipxe-plain-net1.img --disk-format=raw --container-format=bare --public iPXE-plain-eth1
[root@shadow-head ~] openstack image create --file /cm/local/apps/ipxe/ipxe-caas.img --disk-format=raw --container-format=bare --public ipxe-caas
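The three uploaded images should now show up in the image list:

[root@shadow-head ~] openstack image list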
The dnsmasq utility must now be configured. Its configuration file, /cm/shared/apps/cm-openstack-caas/etc/dnsmasq.dev.conf, contains two placeholder strings that need to be replaced:

# The string is replaced with the external IP address of the head node(s).
<INSERT EXTERNAL IP OF THE MACHINE RUNNING PXE HELPER HERE>
...
# This is replaced with the FQDN of the head node (in the case of an HA setup, the FQDN assigned to the VIP) and with the IP address.
<INSERT EXTERNAL FQDN OF BUILDMATIC SERVER HERE>,<INSERT EXTERNAL IP OF BUILDMATIC SERVER HERE>
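Purely as an illustration, assuming the head node external IP 172.28.0.199 used in Settings.py above and a hypothetical FQDN shadow-head.example.com, the two placeholders would be replaced by:

172.28.0.199
...
shadow-head.example.com,172.28.0.199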
After editing:

- If the network node is not used as a compute node, then the following commands are run:
[root@shadow-head ~] cmsh -c 'category use openstack-network-nodes; roles; use openstack::node; customizations; add pxe;
set filepaths /etc/neutron/dhcp_agent.ini; entries; add dnsmasq_config_file=/etc/neutron/dnsmasq.dev.conf; commit'

- If the network node is also to be used as a compute node, then the following cmsh command is run instead. In this command, the network node is put in the "openstack-compute-hosts" category, is assigned the "openstack::node" role, and the customizations needed are added:
[root@shadow-head ~] cmsh -c 'device use <NETWORK_NODE>; roles; assign openstack::node; customizations; add pxe;
set filepaths /etc/neutron/dhcp_agent.ini; entries; add dnsmasq_config_file=/etc/neutron/dnsmasq.dev.conf; commit'
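In either case, the intended end result is that the DHCP agent on the node is pointed at the custom dnsmasq configuration. Assuming the entry lands in the standard [DEFAULT] section of dhcp_agent.ini (the cmsh command above does not name a section), the file should end up containing:

[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq.dev.conf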
The following (key, value) pairs are added to the security group section of the Linux bridge configuration file:

[root@shadow-head ~] cmsh -c 'category use openstack-compute-hosts; roles; use openstack::node; customizations; add "no sec groups";
set filepaths /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini; entries; add securitygroup firewall_driver=neutron.agent.firewall.NoopFirewallDriver; add securitygroup enable_security_group=False; commit'
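For reference, this customization corresponds to the following [securitygroup] section in linuxbridge_conf.ini on the compute hosts, which disables Neutron security group filtering:

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
enable_security_group = False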
If Ceph is installed, then it is usually a good idea to customize it by setting its cache mode to writeback:

[root@shadow-head ~] cmsh -c 'category use openstack-compute-hosts; roles; use openstack::node; customizations; add "rbd cache";
set filepaths /etc/nova/nova.conf; entries; add libvirt disk_cachemodes=network=writeback,block=writeback; commit'
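This should translate into the following entry in the [libvirt] section of nova.conf on the compute hosts:

[libvirt]
disk_cachemodes = network=writeback,block=writeback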
Buildmatic is now installed and configured:

[root@shadow-head ~] yum -y install buildmatic-common buildmatic-7.1-stable createrepo
[root@shadow-head ~] /cm/local/apps/buildmatic/common/bin/setupbmatic --createconfig
[root@shadow-head ~] cp /cm/local/apps/buildmatic/common/settings.xml /cm/local/apps/buildmatic/7.1-stable/bin
[root@shadow-head ~] cp /cm/local/apps/buildmatic/common/nfsparams.xml /cm/local/apps/buildmatic/7.1-stable/bin
[root@shadow-head ~] cp /cm/local/apps/buildmatic/common/nfsparams.xml /cm/local/apps/buildmatic/7.1-stable/files

The rpm-store is now populated using a Bright DVD. In the following example the rpm-store is populated with Bright 7.1, with CentOS 7.1 as the operating system. To add more supported Linux distributions, this step can be repeated with additional Bright ISOs.
The XML buildconfig file is then generated. The index used, "000001" here, must be six digits in length.
A PXE image is then generated:
[root@shadow-head ~] /cm/local/apps/buildmatic/common/bin/setupbmatic --createrpmdir bright7.1-centos7u1.iso
[root@shadow-head ~] /cm/local/apps/buildmatic/7.1-stable/bin/genbuildconfig -v 7.1-stable -d CENTOS7u1 -i 000001
[root@shadow-head ~] /cm/local/apps/buildmatic/7.1-stable/bin/buildmaster /cm/local/apps/buildmatic/7.1-stable/config/000001.xml

The following lines are added to /etc/exports, so that these directories can be NFS-mounted from the installer (replace <CIDR> with the public network address range in CIDR notation):
/home/bright/base-distributions <CIDR>(ro,no_root_squash,async)
/home/bright/rpm-store <CIDR>(ro,no_root_squash,async)
/home/bright/cert-store-pc <CIDR>(ro,no_root_squash,async)

A symbolic link to the directory containing the license file is created, and NFS is then restarted:
[root@shadow-head ~] cd /home/bright
[root@shadow-head ~] ln -s cert-store cert-store-pc
[root@shadow-head ~] service nfs restart
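The active exports can then be checked to confirm that the three directories are being served:

[root@shadow-head ~] exportfs -v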
The Shorewall rules for NFS are now uncommented in the file /etc/shorewall/rules:

# -- Allow NFS traffic from outside to the master
ACCEPT net fw tcp 111 # portmapper
ACCEPT net fw udp 111
ACCEPT net fw tcp 2049 # nfsd
ACCEPT net fw udp 2049
ACCEPT net fw tcp 4000 # statd
ACCEPT net fw udp 4000
ACCEPT net fw tcp 4001 # lockd
ACCEPT net fw udp 4001
ACCEPT net fw udp 4005
ACCEPT net fw tcp 4002 # mountd
ACCEPT net fw udp 4002
ACCEPT net fw tcp 4003 # rquotad
ACCEPT net fw udp 4003

Shorewall is now restarted:
[root@shadow-head ~] systemctl restart shorewall
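As a final check, Shorewall can report whether it restarted cleanly:

[root@shadow-head ~] shorewall status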