
Eucalyptus: Setting Up A Private Infrastructure Cloud


http://blogs.plexibus.com/2010/05/19/eucalyptus-setting-up-a-private-infrastructure-cloud/



There are a few Infrastructure-as-a-Service offerings that are available to download and use. Eucalyptus and OpenNebula are two such offerings. I ended up installing and experimenting with both. In this blog post, I'll detail my experience of installing and setting up Eucalyptus 1.6.2 on CentOS. For the sake of keeping things simple but still practical, we will have:

1 front-end machine. This will house the Cloud Controller (CLC) and Walrus. Since we intend to keep things fairly simple, we will limit ourselves to a single cluster and set up the Cluster Controller (CC) and Storage Controller (SC) on this same machine. In my case, this machine has one network interface (NIC) with an IP address of 192.168.0.114.

2 machines (Nodes) that will serve as hosts running the Xen hypervisor for the virtual machines, i.e. each machine will have a Node Controller (NC) installed. In my case, each machine has a single NIC and the IP addresses are 192.168.0.19 and 192.168.5.7 respectively.
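For reference, here is the layout sketched as an /etc/hosts-style listing; the hostnames are made up for illustration only and are not used elsewhere in this post:

# Front-end: Cloud Controller (CLC), Walrus, Cluster Controller (CC), Storage Controller (SC)
192.168.0.114   euca-frontend
# Nodes: Node Controller (NC) on each, running the Xen hypervisor
192.168.0.19    euca-node1
192.168.5.7     euca-node2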

Before we install Eucalyptus we need to first prep these machines.

Note: For the rest of this document, run the commands as root user.
This document is organized as below. Feel free to skip any sections if you have already implemented the steps in that section.
- Prepare the machines
- Download Eucalyptus
- Install Eucalyptus on the Front-end
- Install Eucalyptus on the Nodes
- Run Eucalyptus
- Register Eucalyptus components
- First-time Configuration
- Test your Eucalyptus install

Prep work

On the front-end machine, we first install Java and Ant. You can download the Sun JDK from here and Ant from here. I'm using JDK version 1.6u20 (jdk-6u20-linux-i586-rpm.bin) and Ant version 1.8.0 (apache-ant-1.8.0-bin.tar.gz).

Once you have downloaded Sun JDK to a directory, install it as follows:



chmod +x jdk-6u20-linux-i586-rpm.bin
./jdk-6u20-linux-i586-rpm.bin

You can confirm that java is on the PATH by running the following command:

java -version

You should see output similar to:


java version "1.6.0_19"
Java(TM) SE Runtime Environment (build 1.6.0_19-b04)

Java HotSpot(TM) Client VM (build 16.2-b04, mixed mode, sharing)

Next, install Ant under the /opt directory as follows:



cd /opt
mkdir ant
cd ant
tar zxvf ~/apache-ant-1.8.0-bin.tar.gz
ln -s apache-ant-1.8.0 latest

Next, we need to add an environment variable ANT_HOME that points to /opt/ant/latest and append $ANT_HOME/bin to the PATH environment variable. Add this to the /etc/profile file as follows:

cd /etc
cp profile profile.ORIG
echo "export ANT_HOME=/opt/ant/latest" >> profile
echo "export PATH=\$PATH:\$ANT_HOME/bin" >> profile
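To confirm the new environment variables take effect, you can source the profile and check that Ant resolves; this quick check is my addition and not part of the original post:

source /etc/profile
ant -version
echo $ANT_HOME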

Next we need to install a few dependencies (dhcp, bridge-utils, httpd, xen-libs, ntp) and synchronize the system clock on the front-end machine. You can do this as follows:

yum update
yum install dhcp xen-libs httpd bridge-utils ntp
ntpdate pool.ntp.org

I have the following versions installed:

yum list dhcp xen-libs httpd bridge-utils

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * addons: mirror.fdcservers.net
 * base: mirrors.ecvps.com
 * extras: mirror.ubiquityservers.com
 * updates: mirror.ubiquityservers.com
Installed Packages
bridge-utils.i386      1.1-2                    installed
dhcp.i386              12:3.0.5-21.el5_4.1      installed
httpd.i386             2.2.3-31.el5.centos.4    installed
xen-libs.i386          3.0.3-94.el5_4.3         installed
Available Packages
dhcp.i386              12:3.0.5-23.el5          base
httpd.i386             2.2.3-43.el5.centos      base
xen-libs.i386          3.0.3-105.el5            base
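The ntpdate command above sets the clock once. Optionally (my addition, not in the original post), you can keep the clock synchronized across reboots by enabling the ntpd service:

chkconfig ntpd on
service ntpd start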

We also allow the front-end machine to forward IP packets as follows:



cd /etc
cp sysctl.conf sysctl.conf.ORIG
sed -i "s/net.ipv4.ip_forward = 0/net.ipv4.ip_forward = 1/" sysctl.conf

To change this value immediately without rebooting, run the following command:

sysctl -p /etc/sysctl.conf

Next, we need to configure firewall rules to permit the various Eucalyptus components to communicate with each other. Since we are planning on using security groups in Eucalyptus, let's start with disabling SELinux on the front-end machine as follows:

cd /etc/selinux
cp config config.ORIG
sed -i "s/SELINUX=permissive/SELINUX=disabled/" config

Let's reboot the front-end machine at this point. Next, we need to prep the two Nodes. We start by installing the Xen hypervisor and synchronizing the system clock on each Node as follows:

yum update
yum install xen ntp
ntpdate pool.ntp.org

I have the following versions of xen installed:



yum list xen


Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * addons: mirror.ash.fastserv.com
 * base: mirror.ubiquityservers.com
 * extras: mirror.steadfast.net
 * updates: hpc.arc.georgetown.edu
Installed Packages
xen.i386      3.0.3-94.el5_4.3      installed
Available Packages
xen.i386      3.0.3-105.el5         base

Once we have Xen installed, we need to configure it so that the hypervisor can be controlled via HTTP from localhost. We do this by editing the /etc/xen/xend-config.sxp file and then restarting the Xen daemon as follows:

cd /etc/xen
cp xend-config.sxp xend-config.sxp.ORIG
sed -i "s/#(xend-http-server no)/(xend-http-server yes)/" xend-config.sxp
sed -i "s/#(xend-address localhost)/(xend-address localhost)/" xend-config.sxp
/etc/init.d/xend restart
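To confirm that xend restarted cleanly and the hypervisor responds, the following quick checks (my addition; 8000 is xend's default HTTP port, so adjust if yours differs) should both succeed:

xm list
netstat -tln | grep 8000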

Next, we need to make sure the correct Xen-enabled kernel is started at boot. We do this by editing the GRUB configuration file (grub.conf) under /boot/grub. If grub.conf is not available, then edit menu.lst instead, which should be a symlink to grub.conf. In my case, /boot/grub/grub.conf is:
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/sda3
#          initrd /initrd-version.img
#boot=/dev/sda
default=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-164.15.1.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-164.15.1.el5
        module /vmlinuz-2.6.18-164.15.1.el5xen ro root=LABEL=/
        module /initrd-2.6.18-164.15.1.el5xen.img
title CentOS (2.6.18-164.15.1.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-164.15.1.el5 ro root=LABEL=/
        initrd /initrd-2.6.18-164.15.1.el5.img
title CentOS (2.6.18-164.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-164.el5 ro root=LABEL=/
        initrd /initrd-2.6.18-164.el5.img

The default line is the line we want to change. The first title is 0. Since we want title CentOS (2.6.18-164.15.1.el5xen) to be the default kernel, we would set default to 0. We can do this as follows:

cd /boot/grub
cp grub.conf grub.conf.ORIG
sed -i "s/default=1/default=0/" grub.conf

Next, we disable SELinux on the Node machines as follows:

cd /etc/selinux
cp config config.ORIG
sed -i "s/SELINUX=permissive/SELINUX=disabled/" config

Let's reboot both Node machines at this point. We are now ready to proceed with the installation of Eucalyptus.
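Once the Nodes come back up, a quick check (my addition) confirms that the Xen kernel is running and SELinux is off before we continue:

uname -r      # should end in el5xen
getenforce    # should print Disabled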

Download Eucalyptus

You could choose to install Eucalyptus via yum if needed, which is easier than installing the RPMs by hand; in this post, however, I install from the downloaded RPM bundle.

You can download Eucalyptus from here. I picked the 32-bit CentOS 5 RPMs that come bundled in a gzip-compressed tar file.

Note: Different Eucalyptus components need to be installed on the front-end and each of the Node machines. The
aforementioned tar.gz file contains all Eucalyptus components though. Therefore download it once on the front-end and then copy this file over to each of the Node machines.
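For example, copying the bundle from the front-end to my two Nodes would look like this (IP addresses from my setup; substitute your own):

scp eucalyptus-1.6.2-centos-i386.tar.gz root@192.168.0.19:/root/
scp eucalyptus-1.6.2-centos-i386.tar.gz root@192.168.5.7:/root/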

Install Eucalyptus on the front-end

Once you have downloaded Eucalyptus (in my case, eucalyptus-1.6.2-centos-i386.tar.gz) on the front-end, untar it to root's home folder /root.

tar zxvf eucalyptus-1.6.2-centos-i386.tar.gz
cd eucalyptus-1.6.2-centos-i386

We are ready to install. Let's start by installing the 3rd-party dependency RPMs included in the eucalyptus-1.6.2-rpm-deps-i386 directory. Install all the RPMs in this directory as follows:

cd eucalyptus-1.6.2-rpm-deps-i386
rpm -Uvh aoetools-21-1.el4.i386.rpm euca-axis2c-1.6.0-1.i386.rpm euca-rampartc-1.3.0-1.i386.rpm vblade-14-1mdv2008.1.i586.rpm groovy-1.6.5-1.noarch.rpm vtun-3.0.2-1.el5.rf.i386.rpm lzo2-2.02-3.el5.rf.i386.rpm
cd ..

Note: the above rpm -Uvh command might fail with an error about "Failed dependencies: java-sdk > 1.6.0 is needed". To get past this error, run the above rpm -Uvh with --nodeps. The error occurs because rpm is looking for OpenJDK during installation, but we have installed Sun Java instead. Adding --nodeps will get us past this error message. Don't worry, the Eucalyptus components will start up fine when the time comes to run them.

Next, let's install the Cloud Controller, Walrus, Cluster Controller, Storage Controller, and a few other dependencies on the front-end machine as follows:

rpm -Uvh eucalyptus-1.6.2-1.i386.rpm eucalyptus-common-java-1.6.2-1.i386.rpm eucalyptus-cloud-1.6.2-1.i386.rpm eucalyptus-walrus-1.6.2-1.i386.rpm eucalyptus-sc-1.6.2-1.i386.rpm eucalyptus-cc-1.6.2-1.i386.rpm eucalyptus-gl-1.6.2-1.i386.rpm

That completes the front-end installation; now let's move on to installing the Eucalyptus components on the Nodes.
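Before switching to the Nodes, an optional sanity check (my addition, not in the original post) is to list which Eucalyptus packages ended up installed on the front-end:

rpm -qa | grep eucalyptus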

Install Eucalyptus on the Nodes

First, copy (or download) the Eucalyptus tar.gz file to each Node. Untar it to root's home folder /root.

Note: The steps in this section need to be performed on each Node (in my case on each of the two Nodes).
Let's begin by installing a few 3rd-party dependency RPMs.

tar zxvf eucalyptus-1.6.2-centos-i386.tar.gz
cd eucalyptus-1.6.2-centos-i386
cd eucalyptus-1.6.2-rpm-deps-i386
rpm -Uvh aoetools-21-1.el4.i386.rpm euca-axis2c-1.6.0-1.i386.rpm euca-rampartc-1.3.0-1.i386.rpm
cd ..

Next, we install the Node Controller (and a couple of dependencies) on each Node as follows:

rpm -Uvh eucalyptus-1.6.2-1.i386.rpm eucalyptus-gl-1.6.2-1.i386.rpm eucalyptus-nc-1.6.2-1.i386.rpm

Next, confirm that the user eucalyptus can connect to the hypervisor through libvirt.

su eucalyptus -c "virsh list"

The output of the above command should look something like:
 Id Name                 State
----------------------------------
  0 Domain-0             running

Note: If you don't have libvirt installed/running on the Nodes, you can install it with: yum install libvirt
That's it with the installation!

Running Eucalyptus

You are now ready to start Eucalyptus. SSH to the front-end machine and start the Cluster Controller and Cloud Controller as follows:

/etc/init.d/eucalyptus-cc start
/etc/init.d/eucalyptus-cloud start

Run the ps command to confirm Eucalyptus is running on the front-end:

ps auxww | grep euca


root   30499  0.0  0.1    9840   1480 ?  Ss  May13   0:00 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500    30500  0.3  4.7 1103496  48552 ?  S   May13  33:14 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500    30501  0.3  4.7 1103496  48704 ?  S   May13  33:26 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500    30502  0.3  7.3 1136720  75808 ?  R   May13  33:22 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500    30503  0.3  3.9 1103484  40232 ?  S   May13  32:56 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500    30504  0.3  5.0 1103568  51416 ?  S   May13  32:55 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
root   30586  0.0  0.0    1852    224 ?  Ss  May13   0:00 eucalyptus-cloud --remote-dns --disable-iscsi -h / -u eucalyptus --pidfile //var/run/eucalyptus/eucalyptus-cloud.pid -L console-log
500    30587  6.5 39.3  937236 403344 ?  Sl  May13 569:24 eucalyptus-cloud --remote-dns --disable-iscsi -h / -u eucalyptus --pidfile //var/run/eucalyptus/eucalyptus-cloud.pid -L console-log
500    30840  0.1  0.4 1137232   4824 ?  S   May13  12:53 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500    31612  0.3  3.9 1103636  40556 ?  S   May13  32:50 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500    31676  0.3  3.9 1103568  40400 ?  S   May13  33:09 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500    31678  0.3  4.7 1103636  48628 ?  S   May13  32:54 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
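As an aside (my addition), if you want these services to come back automatically after a reboot, the usual CentOS service management should work, assuming the Eucalyptus init scripts registered themselves with chkconfig:

chkconfig eucalyptus-cc on
chkconfig eucalyptus-cloud on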

Next, SSH to each Node and start the Node Controller as follows:



/etc/init.d/eucalyptus-nc start

Confirm Eucalyptus is running on the Nodes:



ps auxww | grep euca


root 20637 0.0 0.3 9856 1488 ? Ss May13 0:00 /usr/sbin/httpd -f //etc/eucalyptus/httpd-nc.conf

500 20639 0.9 10.2 80688 50904 ? Sl May13 78:15 /usr/sbin/httpd -f //etc/eucalyptus/httpd-nc.conf
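If the NC processes are not there, the Eucalyptus logs are the first place to look. On my install they live under /var/log/eucalyptus (the default log directory for these packages, as far as I can tell):

tail -n 50 /var/log/eucalyptus/nc.log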

Registering Eucalyptus components

Now that you have started all components, you will need to register them so that they can talk to each other. SSH to the front-end machine (in my case, 192.168.0.114) and run the following commands:

euca_conf --register-walrus 192.168.0.114
euca_conf --register-cluster rosh-cluster1 192.168.0.114
euca_conf --register-sc rosh-cluster1 192.168.0.114

where,

192.168.0.114 is the IP address of my front-end machine which has CLC, Walrus, CC and SC installed/running.
Replace this with the IP address of your front-end machine in all the above commands.

rosh-cluster1 is the cluster name that I used. Replace it with your own cluster name.
Next, we need to register the 2 Nodes. On the front-end machine, run the following command:

euca_conf --register-nodes "192.168.0.19 192.168.5.7"

where,

192.168.0.19, 192.168.5.7 are the 2 Nodes in my case. Replace the above IP addresses with the IP addresses of your Nodes. Add additional Nodes separated with a space.

You can verify that the Nodes are registered by checking that the value of the NODES entry in the eucalyptus.conf file on the front-end reflects the node IP addresses added via the above euca_conf --register-nodes command. In my case:


grep NODES /etc/eucalyptus/eucalyptus.conf


NODES="192.168.0.19 192.168.5.7"

We are done with registering the Eucalyptus components.

First-time Configuration

We are now ready to perform some quick configuration. Using a browser, browse to https://<front-end-ip-address>:8443. In my case, https://192.168.0.114:8443. You will get a warning page stating that the site's security certificate is not trusted. Eucalyptus uses a self-signed certificate that is not verified by a third party the browser trusts, which is why the browser shows this warning. Accept the certificate and you will be prompted for a user_id/password. Enter admin for both. Once you have logged in for the first time, you will be asked to change the password, set the admin email address, etc. Enter the relevant details and hit Submit.

On the Configuration web page you will see Cloud Configuration, Walrus Configuration, Clusters, etc. These should all be pre-populated. You could make changes to the configuration if you wish; I left these unchanged for now.

Next, browse to the Credentials web page and click to download the credentials zip file. Save euca2-admin-x509.zip to a directory. You will need these credentials when you use client tools such as euca2ools to manage virtual machines, images, etc. Create a .euca folder and unzip the contents of this file into that folder. Run the following command from under the .euca folder:

unzip euca2-admin-x509.zip

Once you have unzipped the contents, you will find a .eucarc file that exports some variables. The EC2_URL in this case will point to your front-end machine. In my case, 192.168.0.114.

cat .eucarc

EUCA_KEY_DIR=$(dirname $(readlink -f ${BASH_SOURCE}))
export S3_URL=http://192.168.0.114:8773/services/Walrus
export EC2_URL=http://192.168.0.114:8773/services/Eucalyptus

Before you run any client tools, you will need to source this file.
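For example, with euca2ools installed (installing them is outside the scope of this post), a session would start along these lines; the path assumes you created the .euca folder under root's home directory:

cd /root/.euca
source .eucarc
euca-describe-availability-zones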

Testing our Eucalyptus install

To keep things simple and quickly test our Eucalyptus installation, download the Amazon EC2 API Tools. Unzip the downloaded ec2-api-tools.zip under the .euca folder that you created in the First-time Configuration section.

unzip ec2-api-tools.zip

Next, source the .eucarc file and run the ec2-describe-availability-zones command provided by the ec2-api-tools. From under the .euca folder, run the following commands:

cd .euca
source .eucarc
cd ec2-api-tools-1.3-46266/bin
ec2-describe-availability-zones verbose

You should see output similar to the following:


[Deprecated] Xalan: org.apache.xml.res.XMLErrorResources_en_US
AVAILABILITYZONE    rosh-cluster1    192.168.0.114
AVAILABILITYZONE    |- vm types      free / max   cpu   ram   disk
AVAILABILITYZONE    |- m1.small      0004 / 0004   1    128      2
AVAILABILITYZONE    |- c1.medium     0004 / 0004   1    256      5
AVAILABILITYZONE    |- m1.large      0002 / 0002   2    512     10
AVAILABILITYZONE    |- m1.xlarge     0000 / 0000   2   1024     20
AVAILABILITYZONE    |- c1.xlarge     0000 / 0000   4   2048     20

where,

rosh-cluster1 is the cluster I registered using euca_conf and, in my case, it corresponds to the Cluster Controller running on my front-end machine (192.168.0.114).

If you see something like the above, give yourself a pat on the back! You are now ready to bundle images and create instances from those images on your own private infrastructure cloud!
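As a rough pointer for that next step (not covered in this post), the same EC2 tools can list and launch images once you have bundled and registered one; the image ID and keypair name below are placeholders:

ec2-describe-images
ec2-run-instances emi-XXXXXXXX -k mykey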
