Amazon EC2 Driver¶
Considerations & Limitations¶
You should take into account the following technical considerations when using the EC2 cloud with OpenNebula:
- There is no direct access to the hypervisor, so it cannot be monitored (we don’t know where the VM is running on the EC2 cloud).
- The usual OpenNebula functionality for snapshotting, hot-plugging, or migration is not available with EC2.
- By default OpenNebula will always launch m1.small instances, unless otherwise specified.
- Monitoring of VMs in EC2 is done through CloudWatch. Only information on CPU and network consumption (both inbound and outbound) is collected, since CloudWatch does not offer information about guest memory consumption.
Please refer to the EC2 documentation to obtain more information about Amazon instance types and image management:
Uncomment the EC2 IM and VMM drivers in the /etc/one/oned.conf file in order to use the driver:

```
IM_MAD = [
    name       = "ec2",
    executable = "one_im_sh",
    arguments  = "-c -t 1 -r 0 ec2" ]

VM_MAD = [
    name       = "ec2",
    executable = "one_vmm_sh",
    arguments  = "-t 15 -r 0 ec2",
    type       = "xml" ]
```
Driver flags are the same as for other drivers:

| Flag | Description |
|------|-------------|
| -t | Number of threads |
| -r | Number of retries |
First of all, let's take a look at the ec2_driver.conf configuration file:

```yaml
proxy_uri:
state_wait_timeout_seconds: 300
instance_types:
    c1.medium:
        cpu: 2
        memory: 1.7
    ...
```
You can define an HTTP proxy (proxy_uri) if the OpenNebula Frontend does not have access to the internet.
You can also modify, in the same file, the default timeout of 300 seconds (state_wait_timeout_seconds) that the driver waits for the VM to reach the EC2 running state; this matters when you also want to attach an Elastic IP to the instance.
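For instance, both settings could look like this in ec2_driver.conf (a sketch; the proxy address and timeout value are illustrative):

```yaml
# Route AWS API calls through a local proxy (address is a placeholder)
proxy_uri: http://10.0.0.1:3128
# Wait up to 10 minutes for the instance to reach the running state
state_wait_timeout_seconds: 600
```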
The instance_types section lists the machine types that AWS can provide. The EC2 driver relies on this information, so it is better not to change it unless you know what you are doing.
If you were using OpenNebula before 5.4 you may have noticed that the AWS credentials are no longer in the configuration file; this is for security reasons. Since 5.4 there is a secure credentials store for AWS, so you no longer need to keep sensitive credential data on disk in clear text: the OpenNebula daemon stores it in an encrypted format.
After OpenNebula is restarted, create a new Host with AWS credentials that uses the ec2 drivers:
```
onehost create ec2 -t ec2 --im ec2 --vm ec2
```
The -t option specifies the type of remote provider host to set up. If you have followed all the instructions properly, your default editor will open, asking for the credentials and other mandatory data needed to communicate with AWS.
Once the editor opens you will find additional help at the top of the screen, and there is more information in the EC2 Specific Template Attributes section. The three basic variables you have to set are EC2_ACCESS, EC2_SECRET and REGION_NAME.
This can also be done by creating a template file that can be used with the creation command:
```
echo 'EC2_ACCESS = "xXxXXxx"' > ec2host.tpl
echo 'EC2_SECRET = "xXXxxXx"' >> ec2host.tpl
echo 'REGION_NAME = "xXXxxXx"' >> ec2host.tpl
onehost create ec2 -t ec2 ec2host.tpl --im ec2 --vm ec2
```
EC2 Specific Template Attributes¶
In order to deploy an instance in EC2 through OpenNebula you must include an EC2 section in the virtual machine template. This is an example of a virtual machine template that can be deployed in our local resources or in EC2.
```
CPU    = 0.5
MEMORY = 128

# KVM template machine, this will be used when submitting this VM to local resources
DISK = [ IMAGE_ID = 3 ]
NIC  = [ NETWORK_ID = 7 ]

# PUBLIC_CLOUD template, this will be used when submitting this VM to EC2
PUBLIC_CLOUD = [
    TYPE         = "EC2",
    AMI          = "ami-00bafcb5",
    KEYPAIR      = "gsg-keypair",
    INSTANCETYPE = m1.small ]

# Add this if you want to use only the EC2 cloud
#SCHED_REQUIREMENTS = 'HOSTNAME = "ec2"'
```
Check an exhaustive list of attributes in the Virtual Machine Definition File Reference Section.
Default values for all these attributes can be defined in the driver's defaults file:
```xml
<!--
    Default configuration attributes for the EC2 driver
    (all domains will use these values as defaults).

    Valid attributes are:
        AKI AMI CLIENTTOKEN INSTANCETYPE KEYPAIR LICENSEPOOL
        PLACEMENTGROUP PRIVATEIP RAMDISK SUBNETID TENANCY USERDATA
        SECURITYGROUPS AVAILABILITYZONE EBS_OPTIMIZED ELASTICIP TAGS

    Use XML syntax to specify defaults; note that elements are UPCASE.

    Example:
    <TEMPLATE>
      <PUBLIC_CLOUD>
        <KEYPAIR>gsg-keypair</KEYPAIR>
        <INSTANCETYPE>m1.small</INSTANCETYPE>
      </PUBLIC_CLOUD>
    </TEMPLATE>
-->
<TEMPLATE>
  <PUBLIC_CLOUD>
    <INSTANCETYPE>m1.small</INSTANCETYPE>
  </PUBLIC_CLOUD>
</TEMPLATE>
```
The PUBLIC_CLOUD section allows for substitutions from template and virtual network variables, the same way as the CONTEXT section does.
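As a sketch of such a substitution (EC2_AMI here is a hypothetical user-defined template attribute, not a reserved name):

```
EC2_AMI = "ami-00bafcb5"

PUBLIC_CLOUD = [
    TYPE         = "EC2",
    AMI          = "$EC2_AMI",
    KEYPAIR      = "gsg-keypair",
    INSTANCETYPE = m1.small ]
```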
These values can furthermore be requested from the user through user inputs. A common scenario is to delegate the User Data to the end user. For that, a new User Input named USERDATA can be created with type text64 (the User Data needs to be base64-encoded), and a placeholder added to the PUBLIC_CLOUD section:
```
PUBLIC_CLOUD = [
    TYPE         = "EC2",
    AMI          = "ami-00bafcb5",
    KEYPAIR      = "gsg-keypair",
    INSTANCETYPE = m1.small,
    USERDATA     = "$USERDATA" ]
```
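Since the User Data must be base64-encoded, the value supplied for that user input can be prepared like this (a minimal sketch; the script content and file name are illustrative, and `base64 -w0` assumes GNU coreutils):

```shell
# Write an illustrative user-data script (content is a placeholder)
cat > user-data.sh <<'EOF'
#!/bin/sh
echo "configured via EC2 user data" > /tmp/one-userdata-ran
EOF

# Encode it as a single base64 line (-w0 disables line wrapping)
USERDATA=$(base64 -w0 user-data.sh)
echo "$USERDATA"
```

The resulting string is what the end user would paste into the USERDATA user input.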
After successfully executing onehost create with the -t option, your default editor will open. Here is an example of how you can complete this section:
```
EC2_ACCESS  = "this_is_my_ec2_access_key_identifier"
EC2_SECRET  = "this_is_my_ec2_secret_key"
REGION_NAME = "us-east-1"

CAPACITY = [
    M1_SMALL = "3",
    M1_LARGE = "1" ]
```
The first two attributes have the authentication info required by AWS:
- EC2_ACCESS: Amazon AWS Access Key
- EC2_SECRET: Amazon AWS Secret Access Key
This information will be encrypted as soon as the host is created. In the host template, the values of the EC2_ACCESS and EC2_SECRET attributes will be stored encrypted.
- REGION_NAME: the name of the AWS region your account uses to deploy machines. In the example the region is set to us-east-1; you can find this information in the EC2 web console.
- CAPACITY: sets the number of EC2 machines of each size that your OpenNebula host will handle; see the ec2_driver.conf file for the supported type names. Dots ('.') are not permitted: replace them with underscores ('_') and capitalize the names (e.g. m1.small becomes M1_SMALL).
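The renaming rule can be sketched with a small shell helper (to_capacity_name is a hypothetical function for illustration, not part of OpenNebula):

```shell
# Convert an EC2 instance type name to the form expected by CAPACITY:
# dots become underscores and the name is upper-cased.
to_capacity_name() {
    printf '%s\n' "$1" | tr '.' '_' | tr '[:lower:]' '[:upper:]'
}

to_capacity_name "m1.small"    # prints M1_SMALL
to_capacity_name "c1.medium"   # prints C1_MEDIUM
```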
If a CONTEXT section is defined in the template, it will be available as USERDATA inside the VM and can be retrieved by running the following command:
```
curl http://169.254.169.254/latest/user-data

ONEGATE_ENDPOINT="https://onegate...
SSH_PUBLIC_KEY="ssh-rsa ABAABeqzaC1y...
```
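Since the user data is delivered as shell-style KEY="value" pairs, a guest script can source it to use individual variables. A sketch follows; the key value is a placeholder, and in a real guest the file would come from the curl command rather than a heredoc:

```shell
# Simulate the payload that curl would return inside the guest
cat > /tmp/one-context.sh <<'EOF'
SSH_PUBLIC_KEY="ssh-rsa EXAMPLEKEY user@example"
EOF

# Source it so the contextualization variables become shell variables
. /tmp/one-context.sh
echo "$SSH_PUBLIC_KEY"
```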
For example, if you want to enable SSH access to the VM, an existing EC2 keypair name can be provided in the EC2 template section or the SSH public key of the user can be included in the CONTEXT section of the template.
If a value for the USERDATA attribute is provided in the EC2 section of the template, the CONTEXT section will be ignored and the value provided as USERDATA will be available instead of the CONTEXT information.
Hybrid VM Templates¶
A powerful use of cloud bursting in OpenNebula is the ability to use hybrid templates: the same template defines the VM for the case where OpenNebula launches it locally, and also for the case where it is outsourced to Amazon EC2. The idea is to reference the same kind of VM even if it is incarnated by different images (the local image and the remote AMI).
An example of a hybrid template:
```
## Local Template section
NAME   = MNyWebServer
CPU    = 1
MEMORY = 256
DISK = [ IMAGE = "nginx-golden" ]
NIC  = [ NETWORK = "public" ]

EC2 = [ AMI = "ami-xxxxx" ]
```
OpenNebula will use the first portion of the above template (from NAME to NIC) when the VM is scheduled to a local virtualization node, and the EC2 section when the VM is scheduled to an EC2 node (i.e., when the VM is going to be launched in Amazon EC2).
You must create a template file containing the information of the AMIs you want to launch. Additionally if you have an elastic IP address you want to use with your EC2 instances, you can specify it as an optional parameter.
```
CPU    = 1
MEMORY = 1700

# KVM template machine, this will be used when submitting this VM to local resources
DISK = [ IMAGE_ID = 3 ]
NIC  = [ NETWORK_ID = 7 ]

# EC2 template machine, this will be used when submitting this VM to EC2
PUBLIC_CLOUD = [
    TYPE         = "EC2",
    AMI          = "ami-00bafcb5",
    KEYPAIR      = "gsg-keypair",
    INSTANCETYPE = m1.small ]

# Add this if you want to use only the EC2 cloud
#SCHED_REQUIREMENTS = 'HOSTNAME = "ec2"'
```
You can only submit and control the template through the OpenNebula interface:
```
onetemplate create ec2template
onetemplate instantiate ec2template
```
Now you can monitor the state of the VM with onevm:

```
onevm list
    ID USER     GROUP    NAME     STAT CPU     MEM        HOSTNAME        TIME
     0 oneadmin oneadmin one-0    runn   0      0K             ec2    0d 07:03
```
You can also see information (like the IP address) related to the launched Amazon instance via the onevm show command. The attributes available are:
```
onevm show 0
VIRTUAL MACHINE 0 INFORMATION
ID                  : 0
NAME                : pepe
USER                : oneadmin
GROUP               : oneadmin
STATE               : ACTIVE
LCM_STATE           : RUNNING
RESCHED             : No
HOST                : ec2
CLUSTER ID          : -1
START TIME          : 11/15 14:15:16
END TIME            : -
DEPLOY ID           : i-a0c5a2dd

VIRTUAL MACHINE MONITORING
USED MEMORY         : 0K
NET_RX              : 208K
NET_TX              : 4K
USED CPU            : 0.2

PERMISSIONS
OWNER               : um-
GROUP               : ---
OTHER               : ---

VIRTUAL MACHINE HISTORY
 SEQ HOST            ACTION             DS           START        TIME     PROLOG
   0 ec2             none               0  11/15 14:15:37   2d 21h48m   0h00m00s

USER TEMPLATE
PUBLIC_CLOUD=[
  TYPE="EC2",
  AMI="ami-6f5f1206",
  INSTANCETYPE="m1.small",
  KEYPAIR="gsg-keypair" ]
SCHED_REQUIREMENTS="ID=4"

VIRTUAL MACHINE TEMPLATE
AWS_AVAILABILITY_ZONE="us-east-1d"
AWS_DNS_NAME="ec2-54-205-155-229.compute-1.amazonaws.com"
AWS_INSTANCE_TYPE="m1.small"
AWS_IP_ADDRESS="184.108.40.206"
AWS_KEY_NAME="gsg-keypair"
AWS_PRIVATE_DNS_NAME="ip-10-12-101-169.ec2.internal"
AWS_PRIVATE_IP_ADDRESS="10.12.101.169"
AWS_SECURITY_GROUPS="sg-8e45a3e7"
```
Since ec2 Hosts are treated by the scheduler like any other host, VMs will be automatically deployed in them. But you probably want to lower their priority and start using them only when the local infrastructure is full.
Configure the Priority¶
The ec2 drivers return a probe with the value PRIORITY = -1. This can be used by the scheduler, configuring the 'fixed' policy in sched.conf:

```
DEFAULT_SCHED = [
    policy = 4
]
```
The local hosts will have a priority of 0 by default, but you can set any value manually with the onehost update or onecluster update commands.
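For example, the following attribute in a Host (or Cluster) template would give a local Host a higher priority than the ec2 host's -1 (a sketch; the value 10 is arbitrary):

```
PRIORITY = 10
```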
There are two other parameters that you may want to adjust in sched.conf:
- MAX_DISPATCH: Maximum number of Virtual Machines actually dispatched to a host in each scheduling action
- MAX_HOST: Maximum number of Virtual Machines dispatched to a given host in each scheduling action
In a scheduling cycle, when MAX_HOST number of VMs have been deployed to a host, it is discarded for the next pending VMs.
For example, having this configuration:
- MAX_HOST = 1
- MAX_DISPATCH = 30
- 2 Hosts: 1 in the local infrastructure, and 1 using the ec2 drivers
- 2 pending VMs
The first VM will be deployed in the local host. The scheduler will also have sorted the local host with a higher priority for the second VM, but because one VM was already deployed there, the second VM will be launched in ec2.
A quick way to ensure that your local infrastructure will always be used before the ec2 hosts is to set MAX_DISPATCH to the number of local hosts.
Force a Local or Remote Deployment¶
The ec2 drivers report the host attribute PUBLIC_CLOUD = YES. Knowing this, you can use that attribute in your VM requirements.
To force a VM deployment in a local host, use:
```
SCHED_REQUIREMENTS = "!(PUBLIC_CLOUD = YES)"
```
To force a VM deployment in an ec2 host, use:
```
SCHED_REQUIREMENTS = "PUBLIC_CLOUD = YES"
```
VMs running on EC2 that were not launched through OpenNebula can be imported in OpenNebula.
If the user account that is going to be used does not have full permissions, the following table summarizes the privileges required by the EC2 driver:
| Privilege | Resources |
|-----------|-----------|
| Write/DecodeAuthorizationMessage | Support all resources |
| List/DescribeInstances | Support all resources |
| Read/DescribeTags | Support all resources |
| Write/AssociateAddress | Support all resources |
| Write/RunInstances | image, instance, key-pair, network-interface, security-group, subnet, volume |
| Write/StartInstances | image, instance, key-pair, network-interface, security-group, subnet, volume |
| Write/StopInstances | image, instance, key-pair, network-interface, security-group, subnet, volume |
| Write/TerminateInstances | image, instance, key-pair, network-interface, security-group, subnet, volume |