What’s New in 5.8

OpenNebula 5.8 (Edge) is the fifth major release of the OpenNebula 5 series. A significant effort has gone into this release to enhance features introduced in 5.6 Blue Flash, while keeping an eye on implementing the features most demanded by the community. A major highlight of Edge is its focus on supporting computing at the edge, bringing the processing power of VMs closer to consumers to reduce latency. In this regard, Edge comes with the following major features:

  • Support for LXD. This enables low-resource container orchestration. LXD containers are ideal for running on low-consumption devices closer to the customers.
  • Automatic NIC selection. This enhancement of the OpenNebula scheduler alleviates the burden of VM/container template management in edge environments, where the remote hosts can be heterogeneous, with different network configurations.
  • Distributed Data Centers. This feature is key for the edge cloud. OpenNebula now offers the ability to use bare-metal providers to build remote clusters in a breeze, without changing the nature of the workload. We are confident that this is a killer feature that sets OpenNebula apart from its direct competitors in the space.
  • Scalability improvements. Orchestrating an edge cloud is demanding in terms of the number of VMs, containers and hypervisors to manage. OpenNebula 5.8 brings to the table a myriad of improvements to monitoring, pool management and the GUI, to deliver a smooth user experience in large-scale environments.

This OpenNebula release is named after the edges of nebulas. Nebulas are diffuse objects whose edges can be considered vacuum; however, they are very thick, so they appear dense. This is the aim of OpenNebula 5.8: to provide computing power over a wide geographic area, offering services closer to customers and building a cloud managed from a single portal on top of very thin infrastructure. There is also an Edge Nebula in the Freelancer video game.

OpenNebula 5.8 Edge is considered a stable release and as such is available for updating production environments.

The following list summarizes the highlights of OpenNebula 5.8 (a detailed list of changes can be found here):

OpenNebula Core

  • Disk snapshot renaming: there is now an option for renaming disk snapshots via OCA and the CLI.
  • Migration through a poweroff/on cycle: new options for cold-migrating a Virtual Machine, which can now also be migrated via poweroff and poweroff hard.
  • Mixed mode for the ALLOW_ORPHANS attribute, which takes care of the dependencies between snapshots after revert actions on Ceph datastores.
  • Default configuration values for RAFT have been updated to a more conservative setting.
  • Search for virtual machines: a new option for searching VMs using the onevm list command or the one.vmpool.info API call is available. Find out how to search VM instances here.
  • The one.vmpool.info call now returns a reduced version of the VM body in order to achieve better performance in environments with a large number of VMs.
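The pool calls above are plain XML-RPC, so they can be driven with Python's standard library alone. A minimal sketch, assuming a front-end at the default port and hypothetical credentials (the 5.8 full-text search is passed as an extra query-string argument; check its exact position in the XML-RPC reference before relying on it):

```python
import xmlrpc.client

# Placeholders for your deployment -- not real credentials.
ENDPOINT = "http://localhost:2633/RPC2"
SESSION = "oneadmin:password"

def vmpool_info_args(filter_flag=-2, start_id=-1, end_id=-1, state=-1):
    """Build the argument list for one.vmpool.info.

    filter_flag: -3 = mine, -2 = all, -1 = mine + group, >=0 = specific UID
    start_id / end_id: pagination range (-1 = no restriction)
    state: VM state to filter by (-1 = any state except DONE)
    """
    return [SESSION, filter_flag, start_id, end_id, state]

# Against a live front-end the call would look like:
#   server = xmlrpc.client.ServerProxy(ENDPOINT)
#   ok, body, *rest = server.one.vmpool.info(*vmpool_info_args())
```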

KVM Driver

  • Metadata with OpenNebula information is now included in the Libvirt domain XML, see here.

Sunstone

  • More customization: the admin can now disable the VM advanced options in the Cloud View dialogs.
  • Added a flag in the view configuration YAML files to disable animations in the dashboard widgets.
  • Autorefresh has been removed.

Scheduler

  • Scheduled VM actions can now be specified relative to the VM start time, for example: terminate this VM one month after it was created.
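A relative scheduled action is expressed in the VM template as a SCHED_ACTION vector whose TIME carries a leading +, meaning seconds after the VM is created. A minimal sketch rendering the example from the text (terminate after a month):

```python
MONTH = 30 * 24 * 3600  # one month, counted as 30 days in seconds

def relative_sched_action(action, offset_seconds):
    """Render a SCHED_ACTION template vector with a relative TIME.

    A leading '+' in TIME means 'seconds after the VM is created'.
    """
    return 'SCHED_ACTION = [ ACTION = "{}", TIME = "+{}" ]'.format(
        action, offset_seconds)

print(relative_sched_action("terminate", MONTH))
# SCHED_ACTION = [ ACTION = "terminate", TIME = "+2592000" ]
```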

Networking

  • New attribute for Virtual Networks, BRIDGE_TYPE, to define the bridging technology used by the driver. More info here.
  • New self-provisioning model for networks, Virtual Network Templates. Users can now instantiate their own virtual networks from predefined templates with their own addressing.
  • Support for NIC Alias. VMs can have more than one IP address associated with the same network interface. A NIC Alias uses the same mechanisms as a regular NIC, e.g. live attach/detach and contextualization support for autoconfiguration. More info here.
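An alias is linked to its parent NIC through the NAME/PARENT pair. A minimal sketch that renders the two template vectors (the network names are hypothetical):

```python
def nic_with_alias(parent_net, alias_net, nic_name="nic0"):
    """Render a NIC and a NIC_ALIAS bound to it via NAME/PARENT."""
    return "\n".join([
        'NIC       = [ NETWORK = "{}", NAME = "{}" ]'.format(parent_net, nic_name),
        'NIC_ALIAS = [ NETWORK = "{}", PARENT = "{}" ]'.format(alias_net, nic_name),
    ])

print(nic_with_alias("private", "public"))
```

Attaching the alias to an existing VM would use the same template fragment through the regular NIC attach operation.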

Virtual Machine Management

  • Automatic selection of Virtual Networks for VM NICs. Based on the usual requirements and rank, the Scheduler can pick the right Network for a NIC. You can use this feature to balance network usage at deployment time or to reduce clutter in your VM Template list, as you do not need to duplicate VM Templates for different networks. More info here.
  • LXD hypervisor. OpenNebula can now manage LXD containers the same way Virtual Machines are managed. Set up an LXD host and use the already present Linux network and storage stack. New virtualization and monitoring drivers enable this feature, along with a new MarketPlace backed by a public LXD image server. More about this here.
  • KVM VM snapshots after migration are now properly restored on the destination host.
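Automatic network selection is requested per NIC with NETWORK_MODE = "auto"; optional SCHED_REQUIREMENTS and SCHED_RANK expressions steer the choice, e.g. ranking by -USED_LEASES to balance usage across networks. A minimal sketch rendering such a NIC (the rank expression is just an illustration):

```python
def auto_nic(requirements=None, rank=None):
    """Render a NIC whose Virtual Network is picked by the scheduler."""
    parts = ['NETWORK_MODE = "auto"']
    if requirements:
        parts.append('SCHED_REQUIREMENTS = "{}"'.format(requirements))
    if rank:
        parts.append('SCHED_RANK = "{}"'.format(rank))
    return "NIC = [ " + ", ".join(parts) + " ]"

print(auto_nic(rank="-USED_LEASES"))
# NIC = [ NETWORK_MODE = "auto", SCHED_RANK = "-USED_LEASES" ]
```

Because the network is resolved at deployment time, one such template works unchanged across clusters with different Virtual Networks.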

vCenter

  • Added a new configuration file, vcenterrc, that allows changing the default behavior of the image import process. More info here.
  • It is now possible to change the device boot order by updating the VM template. More info here.
  • VM migration between clusters and datastores is now supported, check here.
  • It is now possible to migrate images from KVM to vCenter or vice versa. More info here.

MarketPlace

  • When a MarketPlace appliance is imported into a datastore, it is converted from qcow2/raw to vmdk if needed.
  • Added a new LXD MarketPlace. A sample LXD marketplace will be created in new installations. You can easily create one for existing deployments by following the instructions in the marketplace guide.
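For existing deployments, registering the marketplace only needs a short template. A sketch, assuming the LXD marketplace driver is named linuxcontainers as in new installations (the name and MARKET_MAD value are assumptions; verify them against the marketplace guide):

```
NAME       = "Linux Containers"
MARKET_MAD = "linuxcontainers"
```

Saved to a file, this could be registered with the standard `onemarket create <file>` command.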

API & CLI

  • New Python bindings for the OpenNebula Cloud API (OCA). The PyONE addon is now part of the official distribution, more info here.
  • Distributed Data Centers provide tools to build and grow your cloud on bare-metal cloud providers. More info here.
  • one.vm.migrate now accepts an additional argument to set the type of cold migration (save, poweroff or poweroff hard).
  • XSD files have been updated and completed.
  • Pagination can be disabled using the --no-pager option.
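The extended one.vm.migrate call can likewise be exercised with the standard library. A sketch, assuming the argument order of the 5.8 XML-RPC reference and an integer encoding of the three cold-migration types as 0/1/2 (verify both against the reference before use; the IDs and credentials below are placeholders):

```python
import xmlrpc.client

# Assumed encoding of the new cold-migration type argument.
SAVE, POWEROFF, POWEROFF_HARD = 0, 1, 2

def migrate_args(session, vm_id, host_id, live=False, enforce=False,
                 ds_id=-1, migration_type=SAVE):
    """Build the argument list for one.vm.migrate, including the
    5.8 cold-migration type. ds_id = -1 keeps the current datastore."""
    return [session, vm_id, host_id, live, enforce, ds_id, migration_type]

# Against a live front-end:
#   server = xmlrpc.client.ServerProxy("http://localhost:2633/RPC2")
#   server.one.vm.migrate(*migrate_args("oneadmin:password", 42, 3,
#                                       migration_type=POWEROFF))
```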

Storage

  • Free space on the KVM hypervisor is now updated faster for the SSH and LVM transfer managers by sending a HUP signal to the collectd client, see more here. Additionally, you can trigger an information update manually with the `onehost forceupdate` command.
  • The LVM drivers support configurable zeroing of allocated volumes to prevent data leaks to other VMs, see more here.
  • A volatile disk attached to a VM running on an LVM datastore is now correctly created as a logical volume.

Other Issues Solved