KVM Node Installation

This page shows you how to install OpenNebula from the binary packages.

Using the packages provided on our site is the recommended method, to ensure the installation of the latest version and to avoid package divergences between distributions. There are two alternatives here: you can add our package repositories to your system, or visit the software menu to download the latest package for your Linux distribution.

Step 1. Add OpenNebula Repositories

CentOS/RHEL 7

To add the OpenNebula repository, execute the following as root:

# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=https://downloads.opennebula.org/repo/5.6/CentOS/7/x86_64
enabled=1
gpgkey=https://downloads.opennebula.org/repo/repo.key
gpgcheck=1
#repo_gpgcheck=1
EOT
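
To confirm the repository was registered, you can optionally list the enabled repositories as root:

yum repolist enabled | grep opennebula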

Debian/Ubuntu

To add the OpenNebula repository on Debian/Ubuntu, execute as root:

wget -q -O- https://downloads.opennebula.org/repo/repo.key | apt-key add -

Debian 9

echo "deb https://downloads.opennebula.org/repo/5.6/Debian/9 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

Ubuntu 14.04

echo "deb https://downloads.opennebula.org/repo/5.6/Ubuntu/14.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

Ubuntu 16.04

echo "deb https://downloads.opennebula.org/repo/5.6/Ubuntu/16.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

Ubuntu 18.04

echo "deb https://downloads.opennebula.org/repo/5.6/Ubuntu/18.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

Step 2. Installing the Software

Installing on CentOS/RHEL

Execute the following commands to install the node package and restart libvirt to use the OpenNebula-provided configuration file:

sudo yum install opennebula-node-kvm
sudo systemctl restart libvirtd
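
You may optionally verify that libvirt is running and that the KVM kernel modules are loaded, as a quick sanity check before proceeding:

sudo systemctl status libvirtd
lsmod | grep kvm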

Note

You may benefit from using the more recent and feature-rich enterprise QEMU/KVM release. The differences between the base (qemu-kvm) and enterprise (qemu-kvm-rhev on RHEL or qemu-kvm-ev on CentOS) packages are described on the Red Hat Customer Portal.

On CentOS 7, the enterprise packages are part of a separate repository. To replace the base packages, follow these steps:

sudo yum install centos-release-qemu-ev
sudo yum install qemu-kvm-ev

On RHEL 7, you need a paid subscription to the Red Hat Virtualization (RHV) or Red Hat OpenStack (RHOS) products; a license for Red Hat Enterprise Linux alone isn't enough! Check the RHV Installation Guide for your licensed version. Usually, the following commands should enable and install the enterprise packages:

sudo subscription-manager repos --enable rhel-7-server-rhv-4-mgmt-agent-rpms
sudo yum install qemu-kvm-rhev
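
To confirm the enterprise build is in place, you can optionally check the installed packages and the emulator version (the binary path below is its usual location on CentOS/RHEL):

rpm -qa | grep qemu-kvm
/usr/libexec/qemu-kvm --version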

For further configuration, check the specific guide: KVM.

Installing on Debian/Ubuntu

Execute the following commands to install the node package and restart libvirt to use the OpenNebula-provided configuration file:

sudo apt-get update
sudo apt-get install opennebula-node
sudo service libvirtd restart # debian
sudo service libvirt-bin restart # ubuntu
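
You may optionally verify that libvirt is up and can reach the hypervisor by querying the versions over the system connection:

sudo virsh -c qemu:///system version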

For further configuration check the specific guide: KVM.

Step 3. Disable SELinux in CentOS/RHEL 7

Warning

If you are performing an upgrade skip this and the next steps and go back to the upgrade document.

SELinux can cause problems, such as not trusting the oneadmin user's SSH credentials. You can disable it by changing this line in the file /etc/selinux/config:

SELINUX=disabled

After changing this file, reboot the machine.
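
If you prefer not to reboot immediately, you can also switch SELinux to permissive mode for the running session; note that this is temporary, and the configuration change above is still needed to persist across reboots:

sudo setenforce 0
getenforce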

Step 4. Configure Passwordless SSH

The OpenNebula Front-end connects to the hypervisor Hosts using SSH. You must distribute the public key of the oneadmin user from all machines to the file /var/lib/one/.ssh/authorized_keys on all the machines. There are many ways to distribute the SSH keys; ultimately the administrator should choose the method that suits the environment best (the recommendation is to use a configuration management system). In this guide we are going to copy the SSH keys manually with scp.

When the package was installed on the Front-end, an SSH key was generated and authorized_keys was populated. We will sync the id_rsa, id_rsa.pub and authorized_keys files from the Front-end to the nodes. Additionally, we need to create a known_hosts file and sync it to the nodes as well. To create the known_hosts file, execute this command as user oneadmin on the Front-end, with all the node names and the Front-end name as parameters:

ssh-keyscan <frontend> <node1> <node2> <node3> ... >> /var/lib/one/.ssh/known_hosts

Now we need to copy the directory /var/lib/one/.ssh to all the nodes. The easiest way is to set a temporary password to oneadmin in all the hosts and copy the directory from the Front-end:

scp -rp /var/lib/one/.ssh <node1>:/var/lib/one/
scp -rp /var/lib/one/.ssh <node2>:/var/lib/one/
scp -rp /var/lib/one/.ssh <node3>:/var/lib/one/
...

You should verify that connecting from the Front-end, as user oneadmin, to the nodes and to the Front-end itself, and from the nodes to the Front-end, does not ask for a password:

ssh <frontend>
exit
ssh <node1>
ssh <frontend>
exit
exit
ssh <node2>
ssh <frontend>
exit
exit
ssh <node3>
ssh <frontend>
exit
exit
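
Instead of typing each ssh/exit pair, you can script the Front-end side of this check with SSH batch mode, which fails instead of prompting when key authentication is not working (a sketch using the same placeholder host names; remember to also check the node-to-Front-end direction):

for host in <frontend> <node1> <node2> <node3>; do
    ssh -o BatchMode=yes "$host" true && echo "$host OK" || echo "$host FAILED"
done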

If an extra layer of security is needed, it's possible to keep the private key just on the Front-end instead of copying it to all the hypervisors. This way, the oneadmin user on a hypervisor won't be able to access other hypervisors. This is achieved by modifying /var/lib/one/.ssh/config on the Front-end and adding the ForwardAgent option to the hypervisor hosts for forwarding the key:

cat /var/lib/one/.ssh/config
 Host host1
    User oneadmin
    ForwardAgent yes
 Host host2
    User oneadmin
    ForwardAgent yes

Note

Remember that it is necessary to have ssh-agent running with the corresponding private key imported before OpenNebula is started. You can start ssh-agent by running eval "$(ssh-agent -s)" and add the private key by running ssh-add /var/lib/one/.ssh/id_rsa.
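
For reference, the full sequence, run as oneadmin on the Front-end, would look like this (ssh-add -l simply confirms the key was loaded):

eval "$(ssh-agent -s)"
ssh-add /var/lib/one/.ssh/id_rsa
ssh-add -l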

Step 5. Networking Configuration

The OpenNebula Front-end daemons need a network connection to the Hosts in order to manage and monitor them, and to transfer the Image files. It is highly recommended to use a dedicated network for this purpose.

There are various network models (please check the Networking chapter to find out the networking technologies supported by OpenNebula).

You may want to use the simplest network model, which corresponds to the bridged drivers. For this driver, you will need to set up a Linux bridge and include a physical device in the bridge. Later on, when defining the network in OpenNebula, you will specify the name of this bridge, and OpenNebula will know that it should connect the VM to this bridge, thus giving it connectivity with the physical network device connected to the bridge. For example, a typical Host with two physical networks, one for public IP addresses (attached to the eth0 NIC, for example) and the other for private virtual LANs (the eth1 NIC, for example) should have two bridges:

brctl show
bridge name bridge id         STP enabled interfaces
br0        8000.001e682f02ac no          eth0
br1        8000.001e682f02ad no          eth1
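
If the bridges do not exist yet, they can be created manually with iproute2, as in the minimal, non-persistent sketch below; in production you should configure them through your distribution's network configuration files so they survive a reboot:

sudo ip link add name br0 type bridge
sudo ip link set br0 up
sudo ip link set eth0 master br0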

Note

Remember that this is only required on the Hosts, not on the Front-end. Also remember that the exact names of the resources (br0, br1, etc.) are not important; what matters is that the bridges and NICs have the same names on all the Hosts.

Step 6. Storage Configuration

You can skip this step entirely if you just want to try out OpenNebula, as it comes configured by default to use the local storage of the Front-end to store Images, and the local storage of the hypervisors as storage for the running VMs.

However, if you want to set up another storage configuration at this stage, like Ceph, NFS, LVM, etc., you should read the Open Cloud Storage chapter.

Step 7. Adding a Host to OpenNebula

In this step we will register the node we have installed in the OpenNebula Front-end, so OpenNebula can launch VMs on it. This step can be done through the CLI or through Sunstone, the graphical user interface. Follow just one method, not both, as they accomplish the same thing.

To learn more about the host subsystem, read this guide.

Adding a Host through Sunstone

Open Sunstone as documented here. In the left side menu go to Infrastructure -> Hosts. Click on the + button.

[Figure: sunstone_select_create_host]

Then fill in the FQDN of the node in the Hostname field.

[Figure: sunstone_create_host_dialog]

Finally, return to the Hosts list, and check that the Host switches to ON status. It should take somewhere between 20 seconds and 1 minute. Try clicking on the refresh button to check the status more frequently.

[Figure: sunstone_list_hosts]

If the Host turns to err state instead of on, check /var/log/one/oned.log. Chances are it's a problem with SSH!

Adding a Host through the CLI

To add a node to the cloud, run this command as oneadmin on the Front-end:

onehost create <node01> -i kvm -v kvm
onehost list
  ID NAME            CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   0 node01          default     0                  -                  - init

# After some time (20s - 1m)
onehost list
  ID NAME            CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   0 node01          default     0       0 / 400 (0%)     0K / 7.7G (0%) on
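
For more detail on a registered Host, including monitoring data and the error message when it is in err state, you can inspect it directly using the ID shown by onehost list:

onehost show 0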

If the Host turns to err state instead of on, check /var/log/one/oned.log. Chances are it's a problem with SSH!

Step 8. Import Currently Running VMs (Optional)

You can skip this step, as importing VMs can be done at any moment. However, if you wish to see your previously deployed VMs in OpenNebula, you can use the import VM functionality.

Step 9. Next steps

You can now jump to the optional Verify your Installation section to launch a test VM.

Otherwise, you are ready to start using your cloud or you could configure more components:

  • Authentication. (Optional) For integrating OpenNebula with LDAP/AD, or securing it further with other authentication technologies.
  • Sunstone. The OpenNebula GUI should be working and accessible at this stage, but by reading this guide you will learn about specific enhanced configurations for Sunstone.

If your cloud is KVM-based you should also follow:

If it's VMware-based: