
OpenNebula 4.2

OpenNebula is mature and stable Cloud Management Software that scales from a single-node cloud to thousands of physical nodes. It can be used to build private, public, or hybrid clouds. This guide will help you get started building an OpenNebula cloud on CentOS.

Command line tools and other resources within OpenNebula are referred to as 'one' tools.

1. Package Layout

OpenNebula provides these main packages:

 * opennebula-server: the OpenNebula daemons, for the frontend
 * opennebula-sunstone: the Sunstone web interface
 * opennebula-node-kvm: dependencies and scripts for a KVM worker node

Additionally, opennebula-common and opennebula-ruby exist, but they are intended to be used as dependencies. opennebula-occi, a RESTful service to manage the cloud, is included in the opennebula-sunstone package.

2. Setup

OpenNebula is available in its own CentOS testing repository. To set up and enable that repository (as root):

# cd /etc/yum.repos.d/
# curl -O http://dev.centos.org/centos/6/opennebula/opennebula-testing.repo

Ensure that the repo is set up cleanly by running:

# yum repolist

You should see an entry for 'one-testing'.

3. Installation in the Frontend

A complete install of OpenNebula will have at least both the opennebula-server and opennebula-sunstone packages. We will assume you have installed both in this guide.
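
For example, to install both from the repository set up above (as root):

    # yum install opennebula-server opennebula-sunstone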

Combining the roles of frontend and worker node is supported. You will only need to run the worker node commands on the frontend as well.

4. Installation in the Nodes

Install the opennebula-node-kvm package.
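
As root:

    # yum install opennebula-node-kvm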

An important piece of configuration is the networking. You should read OpenNebula's documentation on networking to set up the network model. You will need to have your main interface, ethX, connected to a bridge. The name of the bridge should be the same across all nodes. For example:

    $ brctl show
    bridge name     bridge id           STP enabled     interfaces
    br0             8000.000000000000   no              eth0
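
A minimal sketch of a matching CentOS 6 network-scripts configuration, assuming eth0 is the physical interface and using an example address on the 192.168.1.0/24 network from the NFS section below (adjust device names and addresses to your environment):

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    BOOTPROTO=none
    ONBOOT=yes
    BRIDGE=br0

    # /etc/sysconfig/network-scripts/ifcfg-br0
    DEVICE=br0
    TYPE=Bridge
    BOOTPROTO=static
    IPADDR=192.168.1.1
    NETMASK=255.255.255.0
    ONBOOT=yes

Restart networking for the changes to take effect (as root): service network restart.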

5. Configure NFS

You can skip this section if you are using a single server for both the frontend and worker node roles.

5.1. Frontend

Export /var/lib/one/datastores from the frontend to the worker nodes. To do so, add the following to the /etc/exports file on the frontend:

   /var/lib/one/datastores 192.168.1.0/24(rw,sync,no_subtree_check,root_squash)

Replace 192.168.1.0/24 with your network. Refresh the exports by running (as root):

    # exportfs -a  
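
If the NFS services are not already running on the frontend, start them and enable them at boot first (as root); these are the stock CentOS 6 service names:

    # service rpcbind start
    # service nfs start
    # chkconfig rpcbind on
    # chkconfig nfs on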

5.2. Node

Mount the datastores export. Add the following to your /etc/fstab:

    192.168.1.1:/var/lib/one/datastores  /var/lib/one/datastores  nfs   soft,intr,rsize=8192,wsize=8192,noauto

Replace 192.168.1.1 with the IP of the frontend.

Mount it by running ( as root ):

    # mount /var/lib/one/datastores
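
To confirm the export is mounted:

    $ mount | grep datastores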

<!> The messagebus and libvirtd services are required by OpenNebula. They will start automatically after a reboot; until then, you should start them manually (in that order).
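
To start them manually (as root):

    # service messagebus start
    # service libvirtd start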

6. Configure SSH passwordless login

OpenNebula will need to SSH passwordlessly from any node (including the frontend) to any other node.
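
If ~/.ssh/id_dsa.pub does not already exist for the oneadmin user, generate a key pair first (as oneadmin); the empty passphrase matches the passwordless setup used here:

    $ ssh-keygen -t dsa -N '' -f ~/.ssh/id_dsa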

Add the public key to the authorized_keys file on the frontend (as oneadmin):

    cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Add the following to ~/.ssh/config so SSH doesn't prompt to add the keys to the known_hosts file:

    Host *
        StrictHostKeyChecking no
        UserKnownHostsFile /dev/null

Copy the whole ~/.ssh over to all the nodes:

    scp -r ~/.ssh node1:
    scp -r ~/.ssh node2:
    ...
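
You can verify the passwordless setup with a quick test (node1 stands for any of your node hostnames):

    $ ssh node1 hostname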

7. A Basic Run

To interact with OpenNebula, you have to use the oneadmin account. We will assume all the following commands are performed from that account:

    # sudo su - oneadmin

7.1. Starting OpenNebula's services

These are the services set up for OpenNebula, Sunstone and OCCI. Start them by running (as root):

    # service opennebula start
    # service opennebula-sunstone start
    # service opennebula-occi start

<!> The opennebula and opennebula-sunstone services are configured to start automatically.
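
If you also want opennebula-occi to start automatically at boot, enable it with the standard CentOS 6 tooling (as root):

    # chkconfig opennebula-occi on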

7.2. Adding a Host

To start running VMs, you should first register a worker node with OpenNebula.

Issue this command for each one of your nodes, replacing localhost with the node's hostname. Leave it as localhost if you are using the frontend as a node:

    $ onehost create localhost -i kvm -v kvm -n dummy

Run onehost list until the host's state is set to on. If it ends up in an error state, you probably have something wrong in your SSH configuration. Take a look at /var/log/one/oned.log to find out why.

7.3. Adding virtual resources

Once the host is on, you need to create a network, an image and a virtual machine template with the following commands:

    $ onevnet create <file>
    $ oneimage create <file> -d default # (register to the 'default' datastore)
    $ onetemplate create <file>

A few examples for the files:

    $ cat mynetwork.one
    NAME = "private"
    TYPE = FIXED

    BRIDGE = br0

    LEASES = [ IP=192.168.0.100 ]
    LEASES = [ IP=192.168.0.101 ]
    LEASES = [ IP=192.168.0.102 ]

    $ cat myimage.one
    NAME   = "CentOS-6.4_x86_64"
    PATH   = "http://cloud.centos.org/i/one/c6-x86_64-20121130-1.qcow2.gz"
    MD5    = "97bf1be5e44a66a27c23e7eca13cb3ac"
    DRIVER = "qcow2"

    $ cat mytemplate.one
    NAME   = "CentOS-6.4"
    CPU    = 1
    VCPU   = 1
    MEMORY = 512
    OS = [ arch = "x86_64" ]
    DISK   = [ 
        IMAGE = "CentOS-6.4_x86_64"
    ]

    NIC = [ NETWORK = "private" ]

    GRAPHICS = [
        TYPE    = "vnc",
        LISTEN  = "0.0.0.0"
    ]

    CONTEXT = [
        SSH_PUBLIC_KEY = "<your_public_key>"
    ]

Alternatively, some commands accept parameters to define the resources. The following commands do the same as the ones above:

    $ onetemplate create --name "CentOS-6.4" --cpu 1 --vcpu 1 --memory 512 \
        --arch x86_64 --disk "CentOS-6.4_x86_64" --nic "private" --vnc \
        --ssh "<your_public_key>"

    $ oneimage create --name "CentOS-6.4_x86_64" \
        --path "http://cloud.centos.org/i/one/c6-x86_64-20121130-1.qcow2.gz" \
        --driver qcow2 \
        -d default

<!> You have many ready-to-run images in the Cloud/OpenNebula page.

7.4. Instantiate a template

To run a Virtual Machine, you will need to instantiate a template:

    $ onetemplate instantiate "CentOS-6.4" -n "My Scratch VM"
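
The VM will move through states such as PENDING and RUNNING; you can watch its progress with:

    $ onevm list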

If the VM fails, check the reason in the log: /var/log/one/<VM_ID>/vm.log.

8. Sunstone

All the operations above can be done using Sunstone. To access Sunstone, simply start the service and point your browser to: http://<frontend>:9869.

The default password for the oneadmin user can be found in ~/.one/one_auth, which is generated randomly on every installation. It can be changed by running oneuser passwd oneadmin <new_password>.
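
The file holds the credentials in user:password form, for example (<random_password> stands for the generated value):

    $ cat ~/.one/one_auth
    oneadmin:<random_password>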

9. Support and Troubleshooting

Logs are located in /var/log/one. Be sure to check them when troubleshooting. If you need assistance, upstream can help you through their main channels of support.

