OpenNebula: x86 (CentOS)
Latest revision as of 11:11, 19 August 2013
Overview
[Figure: OpenNebula overview diagram (OpenNebula_Overview.png)]
Download
http://downloads.opennebula.org/opennebula-4.2.0.tar.gz
On the download page, select the OpenNebula 4.2.0 CentOS tarball (CentOS-6-opennebula-4.2.0-1.tar.gz) as the software component.
EPEL
wget http://epel.mirror.net.in/epel/6/i386/epel-release-6-8.noarch.rpm
rpm -i epel-release-6-8.noarch.rpm
Install
tar zxf CentOS-6-opennebula-4.2.0-1.tar.gz
cd opennebula-4.2.0-1
Front-end
yum localinstall opennebula-common-4.2.0-1.x86_64.rpm
yum localinstall opennebula-ruby-4.2.0-1.x86_64.rpm
yum localinstall opennebula-4.2.0-1.x86_64.rpm
yum localinstall opennebula-server-4.2.0-1.x86_64.rpm
Hosts
yum localinstall opennebula-common-4.2.0-1.x86_64.rpm
yum localinstall opennebula-node-kvm-4.2.0-1.x86_64.rpm
Virtualisation Driver
KVM
Front-end - Settings
OpenNebula needs to know that it will use the KVM driver. To enable it, uncomment these driver sections in /etc/one/oned.conf:
IM_MAD = [
name = "kvm",
executable = "one_im_ssh",
arguments = "-r 0 -t 15 kvm" ]
VM_MAD = [
name = "kvm",
executable = "one_vmm_exec",
arguments = "-t 15 -r 0 kvm",
default = "vmm_exec/vmm_exec_kvm.conf",
type = "kvm" ]
Hosts - Settings
Edit /etc/libvirt/libvirtd.conf
- listen_tls = 0
- listen_tcp = 1
- mdns_adv = 0
- unix_sock_group = "oneadmin"
- unix_sock_rw_perms = "0777"
- auth_unix_ro = "none"
- auth_unix_rw = "none"
Edit /etc/sysconfig/libvirtd
- Set LIBVIRTD_ARGS="--listen"
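The libvirtd.conf edits above can also be scripted. A minimal sketch, demonstrated against a local sample copy so it is safe to dry-run; on a real host you would point CONF at /etc/libvirt/libvirtd.conf (as root) and skip the sample-creation step:

```shell
# Sketch: apply the libvirtd.conf settings above non-interactively.
# CONF points at a local sample here; use CONF=/etc/libvirt/libvirtd.conf
# on a real host (the sample lines below only stand in for the stock file).
CONF=./libvirtd.conf.sample
cat > "$CONF" <<'EOF'
#listen_tls = 0
#listen_tcp = 1
#mdns_adv = 1
#unix_sock_group = "libvirt"
#unix_sock_rw_perms = "0770"
#auth_unix_ro = "none"
#auth_unix_rw = "none"
EOF
# Uncomment each key and set the value required by OpenNebula.
sed -i \
    -e 's/^#\?listen_tls = .*/listen_tls = 0/' \
    -e 's/^#\?listen_tcp = .*/listen_tcp = 1/' \
    -e 's/^#\?mdns_adv = .*/mdns_adv = 0/' \
    -e 's/^#\?unix_sock_group = .*/unix_sock_group = "oneadmin"/' \
    -e 's/^#\?unix_sock_rw_perms = .*/unix_sock_rw_perms = "0777"/' \
    -e 's/^#\?auth_unix_ro = .*/auth_unix_ro = "none"/' \
    -e 's/^#\?auth_unix_rw = .*/auth_unix_rw = "none"/' \
    "$CONF"
```

The patterns assume the stock `#key = value` layout; if a host's file uses different spacing, adjust the expressions accordingly.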
Start service (Hosts)
service libvirtd start
If libvirtd fails to start, edit /etc/libvirt/libvirtd.conf and uncomment this line:
log_outputs="3:syslog:libvirtd"
Then try to start libvirtd again; this time the error details will be logged to /var/log/messages.
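To pull just the libvirtd entries out of the syslog, a simple grep is enough. Sketch below, demonstrated on a sample file with a placeholder entry; on a host, set LOG=/var/log/messages and skip the sample-creation step:

```shell
# Sketch: filter libvirtd entries from the syslog.
# LOG points at a local sample here; use LOG=/var/log/messages on a host.
LOG=./messages.sample
cat > "$LOG" <<'EOF'
Aug 19 11:00:01 host1 kernel: eth0: link up
Aug 19 11:00:05 host1 libvirtd: error: placeholder example entry
Aug 19 11:00:06 host1 sshd[210]: Accepted publickey for oneadmin
EOF
grep 'libvirtd' "$LOG" | tail -n 20
```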
Secure Shell Access (Front-End)
You need to create SSH keys for the oneadmin user and configure the host machines so that the front-end can connect to them over SSH without a password.
Follow these steps in the front-end:
Generate oneadmin ssh keys
ssh-keygen
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh/
chmod 600 ~/.ssh/id_rsa.pub
chmod 600 ~/.ssh/id_rsa
chmod 600 ~/.ssh/authorized_keys
Tell the ssh client not to ask before adding hosts to the known_hosts file. It is also a good idea to reduce the connection timeout in case of network problems. This is configured in ~/.ssh/config; see man ssh_config for a complete reference:
cat ~/.ssh/config
ConnectTimeout 5
Host *
StrictHostKeyChecking no
Copy the front-end /var/lib/one/.ssh directory to each one of the hosts, in the same path.
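That copy step can be scripted with a loop over the hosts. A minimal sketch; the host names node1 and node2 are hypothetical placeholders, and DRY_RUN=echo only prints the commands (set DRY_RUN to the empty string to actually copy):

```shell
# Sketch: push the front-end's oneadmin SSH directory to every host.
# HOSTS and DRY_RUN are illustrative assumptions, not part of the guide.
HOSTS="node1 node2"   # hypothetical host names; substitute your own
DRY_RUN=echo          # set DRY_RUN= (empty) to copy for real
for h in $HOSTS; do
    $DRY_RUN scp -rp /var/lib/one/.ssh "root@$h:/var/lib/one/"
done
```

The `-rp` flags copy the directory recursively and preserve the permissions set above, which libvirt and sshd both check.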
Starting OpenNebula (Front-End)
Log in as the oneadmin user and follow these steps:
If you installed from packages, the ~/.one/one_auth file should already exist with a randomly generated password. Otherwise, set oneadmin's OpenNebula credentials (username and password) by adding the following to ~/.one/one_auth (replace password with the desired password):
mkdir ~/.one
echo "oneadmin:password" > ~/.one/one_auth
chmod 600 ~/.one/one_auth
Start the OpenNebula daemons:
one start
Verifying the Installation
After OpenNebula is started for the first time, you should check that the command-line tools can connect to the OpenNebula daemon. In the front-end, run the onevm command as oneadmin:
onevm list