CS8711 Cloud Computing Lab Manual
Course Objectives:
To develop web applications in the cloud
To learn the design and development process involved in creating a cloud-based application
To learn to implement and use parallel programming using Hadoop
Exercises:
1. Install VirtualBox/VMware Workstation with different flavours of Linux or Windows
OS on top of Windows 7 or 8.
2. Install a C compiler in the virtual machine created using virtual box and execute
Simple Programs
3. Install Google App Engine. Create hello world app and other simple web applications
using python/java.
4. Use GAE launcher to launch the web applications.
5. Simulate a cloud scenario using CloudSim and run a scheduling algorithm that is not
present in CloudSim.
6. Find a procedure to transfer the files from one virtual machine to another virtual
machine.
7. Find a procedure to launch virtual machine using trystack (Online Openstack Demo
Version)
8. Install Hadoop single node cluster and run simple applications like wordcount.
Course Outcome:
1. Install VirtualBox/VMware Workstation with different flavours of Linux or Windows OS
AIM:
To create virtual machines with different guest operating systems using a hypervisor.
PROCEDURE:
Step 0:
Go to Main Menu -> Open "MATE Terminal".
The virtualization-related packages are installed through the command prompt.
Step 1:
Check whether the system supports hardware virtualization using this command:
$egrep -c '(vmx|svm)' /proc/cpuinfo
The command counts the CPU flag entries for Intel VT-x (vmx) or AMD-V (svm).
If you get 0, the system does not support virtualization.
If you get a value greater than 0, your machine supports virtualization.
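The flag-counting logic can also be tried on a saved cpuinfo snippet; a small sketch (the sample flag lines below are illustrative, not from a real machine):

```shell
# Count CPU entries that advertise Intel VT-x (vmx) or AMD-V (svm).
# A here-document stands in for /proc/cpuinfo so the logic is visible;
# on a real machine, grep /proc/cpuinfo directly.
count=$(grep -Ec '(vmx|svm)' <<'EOF'
flags : fpu vme de pse msr vmx sse2
flags : fpu vme de pse msr vmx sse2
EOF
)
echo "flagged entries: $count"
if [ "$count" -gt 0 ]; then
  echo "virtualization supported"
else
  echo "virtualization not supported"
fi
```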
Step 2:
Install qemu-kvm (Quick Emulator with the Kernel-based Virtual Machine) along with
libvirt-bin, the virtualization management package.
$sudo apt-get install qemu-kvm libvirt-bin
Enter password: sam123
Step 3:
The command below updates the package lists so that the latest packages are installed.
$sudo apt-get update
Step 4:
$sudo apt-get install qemu qemu-kvm
Step 5:
We need to install the Virtual Machine Manager in order to manage the virtualization process.
$sudo apt-get install virt-manager
Step 6:
To open the Virtual Machine Manager (hypervisor front-end), use this command.
$sudo virt-manager
Go to "File" -> New Virtual Machine (to install different operating systems in our virtual machine).
There, choose the "Local install media (ISO image or CDROM)" option. It will then ask for the
ISO file; choose the "Use ISO image" option and "Browse" to the path where the
Ubuntu and Windows ISO files are available.
For the operating system currently being installed, give the memory (RAM) size and the CPU
information.
Also allot disk space for the OS being installed.
Give a name to the new virtual machine.
In the same way, Windows OS can also be installed. The output screens for virtual machines
of different configurations are shown below.
OUTPUT:
(i)Ubuntu Operating System in Virtual Machine
(ii)Windows7 Operating System in Virtual Machine
RESULT:
Thus, virtual machines of various configurations have been created and run.
2. Install a C compiler in the virtual machine created using virtual box and execute
Simple Programs
AIM:
To install a C compiler in the virtual machine and execute a sample program.
PROCEDURE:
1. Open a text editor in Ubuntu OS, type a C program, and save it on the desktop.
2. To install the C compiler in Ubuntu OS, open the terminal and type the command:
$sudo apt-get install gcc
3. Then compile the C program and execute it.
4. Copy the Turbo C++ software from the local host by giving the command (\\172.16.42.38\f$\Turbo
C++) in RUN, and install the application on the system.
5. Open the installed Turbo C++ and write a simple C program to run it.
RESULT:
Thus, a C compiler was installed in the virtual machine and a sample program was executed.
3. Install Google App Engine. Create hello world app and other simple web applications
using python/java.
https://console.cloud.google.com/appengine/start?walkthrough_tutorial_id=java_gae_quickstart
This tutorial shows you how to deploy a sample Java application to App Engine using the App Engine
Maven plugin.
Here are the steps you will be taking.
Create a project
Projects bundle code, VMs, and other resources together for easier development and monitoring.
Build and run your "Hello, world!" app
You will learn how to run your app using Cloud Shell, right in your browser. At the end, you'll deploy
your app to the web using the App Engine Maven plugin.
GCP (Google Cloud Platform) organizes resources into projects, which collect all of the related
resources for a single application in one place.
Begin by creating a new project or selecting an existing project for this tutorial.
Step 1: Select a project, or create a new one.
Step 2: cd appengine-try-java
Step 3:
Step 4:
Step 5:
Note: If you already created an app, you can skip this step.
mvn appengine:deploy
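Steps 3-5 above are left blank in the manual; in the standard App Engine Java quickstart they correspond to fetching and building the sample. A hedged sketch of the usual commands (the repository URL and Maven goals are the quickstart's defaults, assumed here, not taken from this manual):

```shell
# Typical appengine-try-java quickstart flow (assumed defaults):
git clone https://github.com/GoogleCloudPlatform/appengine-try-java
cd appengine-try-java
mvn package            # build the sample app
mvn appengine:run      # run it locally (e.g. in Cloud Shell)
mvn appengine:deploy   # deploy it to App Engine
```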
https://cloud.google.com/appengine/docs/standard/python/getting-started/hosting-a-static-website
You can use Google App Engine to host a static website. Static web pages can contain
client-side technologies such as HTML, CSS, and JavaScript. Hosting your static site on
App Engine can cost less than using a traditional hosting provider, as App Engine provides a
free tier.
Sites hosted on App Engine are served from the REGION_ID.r.appspot.com
subdomain, such as [my-project-id].uc.r.appspot.com. After you deploy your site, you
can map your own domain name to your App Engine-hosted website.
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /
static_files: www/index.html
upload: www/index.html
- url: /(.*)
static_files: www/\1
upload: www/(.*)
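The second handler uses a regular-expression capture group: whatever matches (.*) is substituted for \1, so a request path is mapped onto a file under www/. The same substitution can be checked with sed:

```shell
# Map a request path onto a file under www/ using the same capture group
# that the app.yaml handler uses.
request="/css/style.css"
mapped=$(printf '%s\n' "$request" | sed -E 's|^/(.*)$|www/\1|')
echo "$mapped"
```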
More reference information about the app.yaml file can be found in the app.yaml reference
documentation (/appengine/docs/standard/python/config/appref).
Creating the index.html file
Create an HTML file that will be served when someone navigates to the root page of your
website. Store this file in your www directory.
Deploying your application to App Engine
When you deploy your application files, your website will be uploaded to App Engine. To
deploy your app, run the following command from within the root directory of your
application where the app.yaml file is located:
<html>
  <head>
    <title>Hello, world!</title>
    <link rel="stylesheet" type="text/css" href="/css/style.css">
  </head>
  <body>
    <h1>Hello, world!</h1>
    <p>
      This is a simple static HTML file that will be served from Google App
      Engine.
    </p>
  </body>
</html>
gcloud app deploy
Optional flags:
Include the --project flag to specify an alternate Cloud Console project ID to what
you initialized as the default in the gcloud tool. Example: --project
[YOUR_PROJECT_ID]
Include the -v flag to specify a version ID, otherwise one is generated for you.
Example: -v [YOUR_VERSION_ID]
To learn more about deploying your app from the command line, see Deploying a Python 2
App (/appengine/docs/python/tools/uploadinganapp).
Viewing your application
To launch your browser and view the app at https://PROJECT_ID.REGION_ID.r.appspot.com,
run the following command:
gcloud app browse
5. Simulate a cloud scenario using CloudSim and run a scheduling algorithm that is not
present in CloudSim.
What is CloudSim?
CloudSim is a simulation toolkit that supports modelling and simulation of the core
functionality of a cloud, such as job/task queues, processing of events, creation of cloud
entities (datacenters, datacenter brokers, etc.), communication between different entities,
and implementation of broker policies. The toolkit allows such scenarios to be modelled
and evaluated before deployment on a real cloud.
import java.text.DecimalFormat;
import java.util.Calendar;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.core.CloudSim;
/**
* FCFS Task scheduling
* @author Linda J
*/
public class FCFS {

	public static void main(String[] args) {
		Log.printLine("Starting FCFS...");
		try {
			// First step: Initialize the CloudSim package. It should be called
			// before creating any entities.
			int num_user = 1; // number of cloud users
			Calendar calendar = Calendar.getInstance();
			boolean trace_flag = false; // trace events
			CloudSim.init(num_user, calendar, trace_flag);

			// ... create the datacenter and broker, submit VMs and cloudlets,
			// then run the simulation ...
			CloudSim.startSimulation();

			List<Cloudlet> newList = null; // = broker.getCloudletReceivedList();
			CloudSim.stopSimulation();
			printCloudletList(newList);
		} catch (Exception e) {
			e.printStackTrace();
			Log.printLine("The simulation has been terminated due to an unexpected error");
		}
	}

	// We strongly encourage users to develop their own broker policies, to
	// submit VMs and cloudlets according to the specific rules of the
	// simulated scenario.
	private static FcfsBroker createBroker() {
		// ... instantiate and return the custom FCFS broker ...
		return null;
	}

	/**
	 * Prints the Cloudlet objects
	 * @param list list of Cloudlets
	 */
	private static void printCloudletList(List<Cloudlet> list) {
		int size = list.size();
		Cloudlet cloudlet;
		// ... iterate over the cloudlets and print id, status and timings ...
	}
}
6. Find a procedure to transfer the files from one virtual machine to another virtual
machine.
AIM:
To transfer files from one virtual machine to another virtual machine.
PROCEDURE:
1. Select the VM and click File->Export Appliance
4. Click “Export”
5. The virtual machine is being exported.
6. Install “ssh” to access the neighbour's VM.
7. Go to File->Computer:/home/sam/Documents/
8. Type the neighbour's URL: sftp://sam@172.16.42._/
16. VM is imported.
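Besides exporting an appliance, individual files can be copied between machines over ssh with scp or sftp. A minimal sketch: the user and host mirror the lab's placeholders, and the transfer itself is simulated with a local copy so that the integrity check can be run anywhere:

```shell
# Create a test file, "transfer" it, and verify integrity via checksums.
src=$(mktemp); echo "hello from VM1" > "$src"
dst=$(mktemp)
# On the lab network the real copy would be something like:
#   scp "$src" sam@172.16.42._:/home/sam/Documents/
cp "$src" "$dst"   # local stand-in for the scp step
if [ "$(cksum < "$src")" = "$(cksum < "$dst")" ]; then
  echo "transfer verified"
fi
```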
RESULT:
Thus, Virtual machine migration has been implemented.
7. Find a procedure to launch virtual machine using trystack (Online Openstack Demo
Version)
https://cyberpersons.com/2017/06/22/how-to-install-openstack-and-create-your-first-virtual-machineinstance/
OpenStack is basically a cloud operating system that lets you deploy public and private
clouds and takes care of the underlying details for you. In this article, we will see how to
install OpenStack and create your first virtual machine, or in other words, launch your first
instance.
We will perform our installation on the Ubuntu 16.04 Server flavor because, as mentioned on
the OpenStack site, they have done extensive testing with this flavor and it is best suited
for OpenStack. Recommended resources:
4 GB of RAM.
4 CPU units.
30 GB disk space.
Once you have a virtual machine installed with the mentioned version of Ubuntu, you are
ready to install OpenStack and take it for a spin.
After your virtual machine is done with a reboot, you are ready to install OpenStack.
Normally OpenStack runs under a non-root user with sudo privileges. We can easily
create one to start with:
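The user-creation command itself is not reproduced above. DevStack's documented convention, an assumption here rather than something this manual specifies, is:

```shell
# Conventional DevStack stack-user setup (verify against the DevStack
# docs for the version you install; requires root privileges):
sudo useradd -s /bin/bash -d /opt/stack -m stack
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
```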
This will give the stack user sudo privileges. We now have to log in as user "stack" to
proceed with our installation, which can be done using the command:
sudo su - stack
Now that you are logged in as user "stack", let's start the installation of OpenStack by
downloading the required material.
nano local.conf
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
Save and exit the file; this will automate the installation process. We are now ready to
run the installation script, which can be launched using the command:
./stack.sh
Depending on your hardware and internet connection, installation can take 10-20
minutes. Once installation is complete, you will see something like this on your console
(at the end of the installation):
We will now see how we can launch our very first, basic instance inside the cloud we
just created.
After you are logged into the OpenStack Dashboard it will look something like this:
Before we launch our first virtual machine, we need to create a network that the virtual
machine can use. For now, it will just be a dummy network, because our main purpose
in this article is to launch our first virtual machine, or instance. Let's see how we can
create a network inside OpenStack.
Everything in this window is optional; if you are interested in filling something in, you
can. Otherwise, leave everything as it is and click "Create". You now have a network
that you can use to launch a virtual machine.
Details Tab
This is a general information tab for creating an instance, you will have to assign a name
to your virtual machine on this tab. Select zone to launch a virtual machine, and tell how
many copies of virtual machine you want. Just make sure your settings look like this:
Source Tab
Normally, when we create a virtual machine on Proxmox or VMware, we need to insert a
(virtual) CD-ROM that the VPS uses to install the operating system.
In OpenStack this is done through the Source tab. You can use various ways to launch a
new virtual machine; OpenStack allows you to choose the following as a source to create
your instance:
Image
Snapshot of already created instance
Volume or a volume Snapshot
We are going to use “Cirros” image to create our instance.
1. Click on the icon where the first arrow is pointing, so that we can use “Cirros” to launch
our virtual machine.
2. After the image is selected, just click “Next” so that we can move to “Flavor” tab.
Flavor Tab
The Flavor tab allows you to allocate resources to your instance, such as:
Ram.
CPU.
Disk Space.
It is similar to giving virtual resources to the virtual machine, but OpenStack gives fancy
names to everything. 🙂
You can see that there are 11 pre-configured templates available to choose from. The
one I chose gave the following resources to the instance:
1 virtual CPU.
512 MB Ram.
1 GB Disk.
You can choose any template from the available 11, depending upon the resources
available on your host machine (On which OpenStack is installed).
Network Tab
The Network tab allows us to define a network for our virtual machine; you may remember
that we created a network above for this purpose. By default, the network you created
will be selected for this machine, as seen in the image below:
Key-Pair Tab
Leave defaults and click Next.
Configuration Tab
Leave defaults and click Next.
Scheduler Hints Tab
Leave defaults and click Next.
Metadata Tab
Leave defaults and click Next.
Launch Instance
After going through all the tabs, you are now ready to press that magic "Launch
Instance" button. You may be wondering why we left some tabs at their default settings;
that is fine for now, because we are just launching a test machine. Later we will go into
the depth of each tab and see why each matters.
Once you click the "Launch Instance" button, OpenStack will start creating our virtual
machine, and it is going to look something like this:
This is a simple command line; you can use the following details to log in:
Username: cirros
Password: cubswin:)
Conclusion
In this article, we've gone through the process of installing OpenStack and creating a
very basic virtual machine, just to learn how things work with OpenStack. In our next
articles, we will dig deeper into each component of OpenStack and see how each
component works.
8. Install Hadoop single node cluster and run simple applications like wordcount.
AIM:
To install a Hadoop single-node cluster and run a simple application such as wordcount.
PROCEDURE:
Done.
Enter new UNIX password: \\Note: Enter any password and remember that, this is only for
unix(applicable for hduser)
Enter the new value, or press ENTER for the default \\Note: Just enter your name and then click
enter button for remaining
Other []:
hduser : hadoop
/usr/bin/ssh
/usr/sbin/sshd
sam@sysc40:~$ su hduser
Password: \\Note: Enter the password that we have given above for hduser
hduser@sysc40:/home/sam$
hduser@sysc40:/home/sam$ cd
Enter file in which to save the key (/home/hduser/.ssh/id_rsa): \\Note: Just click Enter button
SHA256:QWYjqMI0g/ElhpXhVvgVITSn4O4HWS98MDqCX7Gsf/g hduser@sysc40
The key's randomart image is:
+---[RSA 2048]----+
|o+*=*.=o= |
|oOo=.=.= . |
|o Bo*. . |
|o+.*.* . |
|o.* * o S |
|+=o |
| + .. |
| o. . |
| .oE |
+----[SHA256]-----+
hduser@sysc40:~$ wget
http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz
hadoop-2.6.5/include/
hadoop-2.6.5/include/hdfs.h
hadoop-2.6.5/include/Pipes.hh
hadoop-2.6.5/include/TemplateFactory.hh
hadoop-2.6.5/include/SerialUtils.hh
hadoop-2.6.5/include/StringUtils.hh
hadoop-2.6.5/README.txt
hadoop-2.6.5/LICENSE.txt
--------------------------------------
---------------------------------------
hadoop-2.6.5/share/hadoop/tools/lib/jasper-compiler-5.5.23.jar
hadoop-2.6.5/share/hadoop/tools/lib/apacheds-kerberos-codec-2.0.0-M15.jar
hadoop-2.6.5/share/hadoop/tools/lib/aws-java-sdk-1.7.4.jar
hduser@sysc40:~$ cd hadoop-2.6.5
hduser@sysc40:~/hadoop-2.6.5$ su sam
Password: sam123
Done.
sam@sysc40:/home/hduser/hadoop-2.6.5$ su hduser
Password: \\Note: Enter the password that we have given above for hduser
hduser@sysc40:~/hadoop-2.6.5$ cd
There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-8-
openjdk-amd64/jre/bin/java
Nothing to configure.
Add the below content at the end of the file and save it
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
javac 1.8.0_131
/usr/lib/jvm/java-8-openjdk-amd64/bin/javac
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>
</configuration>
hduser@sysc40:~$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template
/usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at.</description>
</property>
</configuration>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
</configuration>
...
...
...
/************************************************************
************************************************************/
Starting Hadoop
Now it's time to start the newly installed single node cluster.
We can use start-all.sh or (start-dfs.sh and start-yarn.sh)
hduser@sysc40:~$ su sam
Password: sam123
sam@sysc40:/home/hduser$ cd
sam@sysc40:~$ cd /usr/local/hadoop/sbin
sam@sysc40:/usr/local/hadoop/sbin$ ls
refresh-namenodes.sh stop-all.cmd
slaves.sh stop-all.sh
16/11/10 14:51:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
hduser@sysc40:/usr/local/hadoop/sbin$ start-yarn.sh
hduser@sysc70:/usr/local/hadoop/sbin$ jps
14505 SecondaryNameNode
14205 NameNode
14765 NodeManager
15166 Jps
hduser@laptop:/usr/local/hadoop/sbin$ stop-dfs.sh
16/11/10 15:23:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
16/11/10 15:23:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
hduser@laptop:/usr/local/hadoop/sbin$ stop-yarn.sh
stopping resourcemanager
no proxyserver to stop
hduser@laptop:/usr/local/hadoop/sbin$ start-yarn.sh
Type http://localhost:50070/ into the browser to see the web UI of the NameNode
daemon. In the Overview tab, you can see the Overview, Summary, NameNode Journal Status and
NameNode Storage information.
Click the Nodes option in the left Cluster panel; it will show the node that we have
created.
RESULT:
Thus, a Hadoop single-node cluster has been installed and started.
AIM
To write a word count program to demonstrate the use of Map and Reduce task.
PROCEDURE
Step 0: We need to check whether Hadoop has been brought up with all six processes shown by jps.
sam@sysc65:~$ su hduser
Password:
hduser@sysc65:/home/sam$ cd /usr/local/hadoop/sbin
hduser@sysc65:/usr/local/hadoop/sbin$ start-all.sh
hduser@sysc65:/usr/local/hadoop/sbin$ jps
3797 NameNode
4279 ResourceManager
4120 SecondaryNameNode
3916 DataNode
4396 NodeManager
5486 Jps
hduser@sysc65:/usr/local/hadoop/sbin$
Only then can we continue with the lines below to execute this exercise. Open a separate
terminal to work out the commands below with Alt+Ctrl+T.
Password:
Step 2:hduser@sysc65:/home/sam$ cd
Paste the program into that file and save it by Ctrl+o, Enter & Ctrl+x
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class WordCount {

	public static void main(String[] args) throws Exception {
		Configuration c = new Configuration();
		String[] files = new GenericOptionsParser(c, args).getRemainingArgs();
		Path input = new Path(files[0]);
		Path output = new Path(files[1]);
		Job j = new Job(c, "wordcount");
		j.setJarByClass(WordCount.class);
		j.setMapperClass(MapForWordCount.class);
		j.setReducerClass(ReduceForWordCount.class);
		j.setOutputKeyClass(Text.class);
		j.setOutputValueClass(IntWritable.class);
		FileInputFormat.addInputPath(j, input);
		FileOutputFormat.setOutputPath(j, output);
		System.exit(j.waitForCompletion(true) ? 0 : 1);
	}

	public static class MapForWordCount extends Mapper<LongWritable, Text, Text, IntWritable> {
		public void map(LongWritable key, Text value, Context con)
				throws IOException, InterruptedException {
			String line = value.toString();
			String[] words = line.split(",");
			for (String word : words) {
				Text outputKey = new Text(word.toUpperCase().trim());
				IntWritable outputValue = new IntWritable(1);
				con.write(outputKey, outputValue);
			}
		}
	}

	public static class ReduceForWordCount extends Reducer<Text, IntWritable, Text, IntWritable> {
		public void reduce(Text word, Iterable<IntWritable> values, Context con)
				throws IOException, InterruptedException {
			int sum = 0;
			for (IntWritable value : values) {
				sum += value.get();
			}
			con.write(word, new IntWritable(sum));
		}
	}
}
Step5: hduser@sysc65:~$ /usr/local/hadoop/bin/hadoop classpath
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/
hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/
share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/
hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/
mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
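The classpath printed above is one colon-separated string; splitting it on ':' makes the entries readable. A sketch over a shortened two-entry sample:

```shell
# Split a colon-separated classpath into one entry per line, then count.
cp_sample='/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/*'
printf '%s\n' "$cp_sample" | tr ':' '\n'
entries=$(printf '%s\n' "$cp_sample" | tr ':' '\n' | wc -l)
echo "entries: $entries"
```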
Step 6: Copy and paste the above classpath to compile the Java program and package it as a jar file.
added manifest
Paste the below lines into that file and save it with Ctrl+o, Enter & Ctrl+x.
bus,train,car,bUs,TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,bus,train,car,bUs
,TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,bus,tr
ain,car,bUs,TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,bus,train,car,bUs,TrAiN,cAr,bus,train,car,bUs,TrAiN
,cAr,train,bus,bus
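The job upper-cases every word before counting, which is why bus, bUs and BUS collapse into a single key in the final output. The same fold-and-count pipeline can be seen in shell over a short sample of the input:

```shell
# Split on commas, upper-case, then count distinct words.
printf '%s\n' "bus,train,car,bUs,TrAiN,cAr,train,bus,bus" \
  | tr ',' '\n' | tr 'a-z' 'A-Z' | sort | uniq -c | sort -rn
```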
17/08/23 13:11:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
Step 11:
Step 12:
17/08/23 13:15:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
Map-Reduce Framework
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
Bytes Read=322
Bytes Written=23
17/08/23 13:15:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r-- 1 hduser supergroup 0 2017-08-23 13:15 /home/hduser/MRDir1/_SUCCESS
Step 14:
17/08/23 13:15:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
BUS 24
CAR 22
TRAIN 23
RESULT:
Thus, the word count program to demonstrate the use of Map and Reduce task has been
created and executed.