Case Study: Installing the Eucalyptus Community Cloud (ECC)


This article describes the steps taken by the Eucalyptus team to set up the Eucalyptus Community Cloud (ECC). The ECC is a sandbox environment in which members of our community can test-drive and experiment with the Eucalyptus cloud computing platform. The ECC resides at the Coresite data center facility in Los Angeles, is hosted on HP servers, and has one uplink to the Internet provided by Inforelay.

Physical hardware layout


The ECC is a good example of a standard installation of a Eucalyptus cloud, intended for multiple users with very different needs. We begin with a look at the physical hardware layout supporting the ECC. The ECC shares this hardware with other Eucalyptus services, including the Eucalyptus Partner Cloud, and the Eucalyptus QA system, which tests cloud deployment and functionality. In Figure 1, we see how the two NICs of each blade server connect to separate switches, and these switches connect to a managed switch that controls access to the Inforelay uplink.

Figure 1. ECC Physical Hardware Layout (Each HP SL2x170z chassis contains 4 blades. Please note that our diagram is simplified for clarity, showing the connections from only one blade per chassis.)

The servers offer Lights Out Management (LOM), which lets us connect to the servers remotely under all circumstances. Each blade server contains two 500GB disks, the latest quad-core processors, and plenty of RAM to handle instances. As we shall see, having two disks in each blade server is very handy when installing the Eucalyptus Node Controller (NC) component. The network fabric is a twisted pair gigabit network, while the uplink is 100Mbit.

Network configuration
Next, we describe how we configured the network. In Figure 2, we see that only one blade server has been given direct Internet access, while all other blades connect to a common private network. In this case, the private network is configured to be 192.168.253.x. The LOM has its own private network, which is independent of the data network and guarantees access even in case of network misconfiguration.

Figure 2. ECC Network Configuration
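
As a concrete illustration, the static configuration of the private-network NIC on one of the blades might look like the following. This is a minimal sketch in Debian /etc/network/interfaces style; the interface name, the .10 host address, and the distribution's configuration mechanism are assumptions, since the article does not specify them.

    # /etc/network/interfaces fragment for a blade on the private network
    # (interface name and the .10 host address are placeholders)
    auto eth0
    iface eth0 inet static
        address 192.168.253.10
        netmask 255.255.255.0
        # no default gateway here: only the front-end blade has direct Internet access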

Installing Eucalyptus components


Now we examine the actual installation of Eucalyptus components. If you have read the Eucalyptus Installation Guide, you may already have an idea where we are going to install each component: the blade with Internet access hosts the Cloud Controller (CLC), Walrus, Storage Controller (SC), and Cluster Controller (CC), while the other blades host NCs. We decided to install Eucalyptus from source instead of packages, since the ECC may at times be running experimental code, and installing from source makes it easy to redeploy single components. (For more information on installing from source, see our Installing Eucalyptus from Source instructions). Figure 3 shows the location of the installed Eucalyptus components in the context of the ECC's logical network configuration and underlying physical hardware layout.

Figure 3. Installed Eucalyptus Components
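
As a rough sketch, a source installation with a versioned prefix looks something like this; the version number is a placeholder, and the exact ./configure flags (such as the Axis2/Axis2C locations and the hypervisor selection) depend on the release and are covered in the source installation instructions.

    # build and install into a versioned directory (version number is illustrative)
    export EUCALYPTUS=/opt/eucalyptus-2.0.3
    tar xzf eucalyptus-2.0.3-src.tar.gz
    cd eucalyptus-2.0.3
    ./configure --prefix=$EUCALYPTUS    # additional flags depend on the release
    make && make install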

Eucalyptus abstracts away the hypervisor and underlying configuration, so the end user's experience is entirely independent of the infrastructure configuration implemented by the cloud administrator. (Please see our User's Guide for information on interacting with a Eucalyptus cloud). There are ways, however, for curious-minded users to determine which hypervisor is running their virtual machines.
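
For example, from inside a running instance a few generic heuristics reveal the hypervisor; this is a hypothetical check using standard Linux tools, not a Eucalyptus feature:

    # run inside an instance; these are generic guest-side heuristics
    cat /sys/hypervisor/type 2>/dev/null       # prints "xen" on Xen guests
    lspci 2>/dev/null | grep -i virtio         # virtio devices usually indicate KVM/QEMU
    dmesg | grep -iE 'xen|kvm|qemu' | head     # boot messages often name the hypervisor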

File system layout


We installed Eucalyptus on the blade server with Internet access, in /opt/eucalyptus-<version-number>, so that each new upgrade gets its own directory, and used a symbolic link to point /opt/eucalyptus at the current installation. To maximize disk space, we put the EBS volumes on one disk and Walrus's buckets directory on the other, again using symbolic links so that the volumes and buckets directories under /opt/eucalyptus point to their real locations on the two disks. Once Eucalyptus was compiled and installed, and since all the blades have the same architecture, we simply rsync'ed /opt/eucalyptus-<version-number> to all the NCs.
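
The resulting layout on the front end can be sketched as follows; the version number, mount points, node names, and the exact volumes/buckets paths under /opt/eucalyptus are assumptions and should be checked against the actual installation.

    # point the unversioned path at the current installation
    ln -s /opt/eucalyptus-2.0.3 /opt/eucalyptus
    # EBS volumes on the first disk, Walrus buckets on the second
    # (real directory locations depend on the Eucalyptus version)
    ln -s /disk1/eucalyptus/volumes /opt/eucalyptus/var/lib/eucalyptus/volumes
    ln -s /disk2/eucalyptus/bukkits /opt/eucalyptus/var/lib/eucalyptus/bukkits
    # all blades share the same architecture, so push the finished build to each NC
    for nc in node01 node02 node03; do
        rsync -a /opt/eucalyptus-2.0.3 $nc:/opt/
    done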

On the NCs we used the two disks to minimize instance start-up time. The Eucalyptus NC caches an EMI (Eucalyptus Machine Image) the first time it sees it, so that subsequent instances of that image start faster, but it still needs to create a copy of the pristine EMI that the instance can modify at will. This disk-to-disk copy can be fairly expensive (multi-GB transfers are never easy), so we put the cache on one disk and the instance images on the other. Eucalyptus creates a 'eucalyptus' directory under $INSTANCE_PATH (the directory that holds instance images and the cache) to hold the cached images; we simply used a symbolic link to point that directory at the second disk, ensuring that image copies go across disks.
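
On each NC this amounts to something like the following; the mount points are placeholders, and $INSTANCE_PATH is whatever value is configured in eucalyptus.conf.

    # instance images on the first disk, cached EMIs on the second
    INSTANCE_PATH=/disk1/instances                    # placeholder; set in eucalyptus.conf
    mkdir -p $INSTANCE_PATH /disk2/eucalyptus-cache
    # the 'eucalyptus' directory under $INSTANCE_PATH holds the cached images;
    # pointing it at the second disk makes the cache-to-instance copy cross disks
    ln -s /disk2/eucalyptus-cache $INSTANCE_PATH/eucalyptus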

Eucalyptus network configuration


With the low-level details out of the way, and Eucalyptus installed on all machines, we could finally finish the configuration. The front-end blade server got its final public IP (currently 173.205.188.130, or ecc.eucalyptus.com), while all the other blades (using the second NIC on the blade server) were configured on the private network (192.168.253.x in this case). We reserved as many public IPs as possible for use with the ECC, and with that we were ready to configure the Eucalyptus network. While the managed switch uses VLANs, the switches for our private network do not; since the path to the nodes is thus VLAN clean, we chose MANAGED mode as our Eucalyptus networking mode. We gave Eucalyptus the entire 10.x.x.x range to allow for a large number of security groups. (For an overview of MANAGED mode, and how to calculate available security groups, see our Eucalyptus Network Configuration guide).
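
On the front end, the relevant eucalyptus.conf excerpt then looks roughly like this; the interface names, DNS server, addresses-per-group value, and reserved public addresses are illustrative, and the VNET_* option names should be verified against the installed Eucalyptus version.

    # eucalyptus.conf excerpt on the CLC/CC blade (values are illustrative)
    VNET_MODE="MANAGED"
    VNET_PUBINTERFACE="eth0"       # NIC with the public uplink
    VNET_PRIVINTERFACE="eth1"      # NIC on the 192.168.253.x private network
    VNET_SUBNET="10.0.0.0"         # the entire 10.x.x.x range for security groups
    VNET_NETMASK="255.0.0.0"
    VNET_ADDRSPERNET="32"          # addresses per security group (example value)
    VNET_DNS="192.168.253.1"       # placeholder DNS server
    VNET_PUBLICIPS="173.205.188.131 173.205.188.132"   # reserved public IPs (placeholders)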

Eucalyptus configuration
With the network configuration complete, we were left with the NC configuration. On the NCs we checked that the hypervisor and libvirt were properly configured, and we set up the $INSTANCE_PATH directory as described above. Finally, we started and registered the components as described in the Installation and Configuration sections of the Eucalyptus Administrator's Guide.
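
In outline, starting and registering the components looks like the commands below; the paths and euca_conf options follow the Eucalyptus 2.x tooling of that era, and the cluster name and addresses are placeholders.

    # on the front-end blade (CLC, Walrus, SC, CC)
    /opt/eucalyptus/etc/init.d/eucalyptus-cloud start
    /opt/eucalyptus/etc/init.d/eucalyptus-cc start
    /opt/eucalyptus/usr/sbin/euca_conf --register-walrus 192.168.253.1
    /opt/eucalyptus/usr/sbin/euca_conf --register-cluster cluster01 192.168.253.1
    /opt/eucalyptus/usr/sbin/euca_conf --register-sc cluster01 192.168.253.1
    /opt/eucalyptus/usr/sbin/euca_conf --register-nodes "192.168.253.11 192.168.253.12"

    # on each NC blade
    /opt/eucalyptus/etc/init.d/eucalyptus-nc start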

Deploying virtual machines


Our final step was to test the ECC: we uploaded one of our pre-packaged images, uploaded the appropriate kernel and ramdisk for it, and ran a set of instances. We made sure to run enough instances to cover all nodes, confirming that every NC was properly configured; as an added benefit, this populated each node's cache with the image, speeding up the launch of future instances.
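
For reference, uploading and running an image with the euca2ools client tools follows roughly this pattern (bucket names, file names, and the resulting eki-/eri-/emi- identifiers are placeholders):

    # bundle, upload, and register kernel, ramdisk, and root filesystem image
    euca-bundle-image -i vmlinuz-2.6.28 --kernel true
    euca-upload-bundle -b ecc-kernels -m /tmp/vmlinuz-2.6.28.manifest.xml
    euca-register ecc-kernels/vmlinuz-2.6.28.manifest.xml          # -> eki-XXXXXXXX

    euca-bundle-image -i initrd-2.6.28 --ramdisk true
    euca-upload-bundle -b ecc-ramdisks -m /tmp/initrd-2.6.28.manifest.xml
    euca-register ecc-ramdisks/initrd-2.6.28.manifest.xml          # -> eri-XXXXXXXX

    euca-bundle-image -i centos.img --kernel eki-XXXXXXXX --ramdisk eri-XXXXXXXX
    euca-upload-bundle -b ecc-images -m /tmp/centos.img.manifest.xml
    euca-register ecc-images/centos.img.manifest.xml               # -> emi-XXXXXXXX

    # run enough instances to land on every node, then watch them come up
    euca-run-instances -k mykey -n 8 emi-XXXXXXXX
    euca-describe-instances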

Learn more about the Eucalyptus Community Cloud (ECC)
Read the Eucalyptus Installation Guide
Read the Eucalyptus Administrator's Guide
