IO Workload in Virtualized Data Center Using Hypervisor
Volume: 2 Issue: 8
ISSN: 2321-8169
Dr. G. Singaravel
Abstract: Cloud computing [10] is gaining popularity as a way to virtualize the data center and increase flexibility in the use of computation resources. The virtual machine approach can dramatically improve the efficiency, power utilization and availability of costly hardware resources such as CPU and memory. Previously, virtualization in the data center was done with the back end of the Eucalyptus software on one machine and the front end installed on another CPU. Performance measurement was carried out in a network I/O application environment of a virtualized cloud, and the measurements were analyzed in terms of the performance impact of co-locating applications in a virtualized cloud, considering throughput and resource-sharing effectiveness, including the impact of idle instances on applications that are running concurrently on the same physical host. This project proposes a virtualization approach that uses the hypervisor to install the Eucalyptus software in a single physical machine for setting up a cloud computing environment. By using the hypervisor, the front end and back end of the Eucalyptus software are installed in the same machine. Performance is measured based on the interference in parallel processing of CPU-intensive and network-intensive workloads by using the Xen Virtual Machine Monitor. The main motivation of this project is to provide a scalable virtualized data center.
Keywords: Cloud computing, virtualization, Hypervisor, Eucalyptus.
__________________________________________________*****_________________________________________________
INTRODUCTION
Cloud Computing [1] is more than a collection
of computer resources as it provides a mechanism to manage
those resources. It is the delivery of computing as a service
rather than a product, whereby shared resources, software, and
information are provided to computers and other devices as a
utility over a network. A Cloud Computing platform supports
redundant, self-recovering, highly scalable programming
models that allow workloads to recover from many inevitable
hardware/software failures. The concept of cloud computing
and virtualization offers so many innovative opportunities that
it is not surprising that there are new announcements every
day.
The measurement and workload analysis also provide some insights on performance optimizations for the CPU scheduler and I/O channel, and on efficient management of workload and VM configurations.
VIRTUALIZATION
Virtualization [9],[6],[5] in computing is the process of creating a virtual (rather than actual) version of something, such as a hardware platform, an operating system, a storage device or network resources. In effect, software acts like hardware. The software used for virtualization is known as a hypervisor. Different hypervisors are used for virtualization, such as Xen, VMware, and KVM.
XEN HYPERVISOR
Xen is a Virtual-Machine Monitor (VMM) providing
services that allow multiple computer operating systems to
execute on the same computer hardware concurrently. The
Xen community develops and maintains Xen as free software,
licensed under the GNU General Public License (GPLv2). It is
available for the IA-32, x86-64, Itanium and ARM computer
architectures.
VMWARE
VMware is proprietary software developed in 1998
and based in Palo Alto, California, USA. It is majorly owned
by Corporation. VMwares desktop software runs on
Microsoft Windows, Linux, and Mac OS X, while VMware's
enterprise software hypervisors runs for servers, VMware ESX
and VMware ESXi, are bare-metal embedded hypervisors that
run directly on server hardware without requiring an additional
underlying operating system.
KERNEL-BASED VIRTUAL MACHINE (KVM)
KVM is a virtualization infrastructure for the Linux kernel that supports native virtualization on processors with hardware virtualization extensions. It provides a paravirtual Ethernet card, a paravirtual disk I/O controller, a balloon device for adjusting guest memory usage, and a VGA graphics interface using SPICE or VMware drivers.
PARAVIRTUALIZATION
Paravirtualization is a virtualization technique that
provides a software interface to virtual machines that is similar
but not identical to that of the underlying hardware. The intent
of the modified interface is to reduce the portion of the guest's execution time spent performing operations which are substantially more difficult to run in a virtual environment than in a non-virtualized environment.
Paravirtualization provides specially defined 'hooks' that allow the guest(s) and host to request and acknowledge these tasks, which would otherwise be executed in the virtual domain. A successful paravirtualized platform may allow the virtual machine monitor to be simpler and reduce the overall performance degradation of machine execution inside the virtual guest.
CPU SCHEDULER
When a physical CPU goes idle, the Xen scheduler looks at the run queues of the other CPUs to find a runnable VCPU. This load balancing guarantees that no CPU remains idle when there is runnable work in the system.
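The following minimal Python sketch illustrates this idle-CPU load-balancing idea with toy per-CPU run queues. It is only an illustration of the behaviour described above, not Xen's actual credit scheduler code, and the CPU and VCPU names are hypothetical.

    from collections import deque

    class CPU:
        """Toy model of a physical CPU with its own run queue of VCPUs."""
        def __init__(self, cpu_id):
            self.cpu_id = cpu_id
            self.run_queue = deque()   # runnable VCPUs waiting for this CPU

        def pick_next_vcpu(self, all_cpus):
            """Pick a runnable VCPU; if the local queue is empty, take one
            from a peer CPU so no CPU idles while runnable work exists."""
            if self.run_queue:
                return self.run_queue.popleft()
            # Local queue empty: scan the other CPUs for runnable work.
            for peer in all_cpus:
                if peer is not self and peer.run_queue:
                    return peer.run_queue.popleft()   # "steal" a runnable VCPU
            return None  # genuinely idle: no runnable VCPU anywhere

    # Usage: CPU 1 is idle but CPU 0 has two runnable VCPUs, so CPU 1 takes one.
    cpus = [CPU(0), CPU(1)]
    cpus[0].run_queue.extend(["vcpu-a", "vcpu-b"])
    print(cpus[1].pick_next_vcpu(cpus))   # -> "vcpu-a"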
RELATED WORK
Most of the efforts to date can be classified into three main categories: (1) performance monitoring and enhancement of VMs hosted on a single physical machine; (2) performance evaluation, enhancement, and migration of VMs running on multiple physical hosts; and (3) performance comparison conducted with different platforms or different implementations of VMMs, such as Xen and KVM, as well as efforts on developing benchmarks. Given that the focus of this paper is on performance measurement and analysis of network I/O applications in a virtualized single host, in this section we provide a brief discussion of the state of the art in the literature on this topic. Most of the research on virtualization in a single host has focused on either developing performance monitoring or profiling tools for the VMM and VMs, or conducting performance evaluation work by varying VM configurations on host capacity utilization or by varying CPU scheduler configurations, especially for I/O related performance measurements. For example, some work has focused on I/O performance improvement by tuning I/O related parameters such as TCP Segmentation Offload and network bridging.
PROPOSED SYSTEM ARCHITECTURE
Proposed Architecture
The proposed system uses virtualization technology, with a hypervisor, to install the Eucalyptus software in a single physical machine for setting up a cloud computing environment. By using the hypervisor, the front end and back end of the Eucalyptus software are installed in the same machine. The performance is measured based on the interference in parallel processing of CPU-intensive and network-intensive workloads by using the Xen Virtual Machine Monitor.
EXPERIMENTAL SETUP
PERFORMANCE METRICS
The following metrics are used in our measurement
study. They are collected using Xenmon [8] and Xentop [15].
Server throughput (#req/sec). It quantitatively measures the maximum number of successful requests served per second when retrieving web documents.
Normalized throughput. We choose one measured throughput as our baseline reference throughput and normalize the throughputs of different configuration settings against it in order to make fair comparisons (see the sketch following these definitions).
Aggregated throughput (#req/sec). We use aggregated throughput as a metric to measure the impact of running a varying number of VMs on the aggregated throughput performance of a physical host.
CPU utilization (%). To understand CPU resource sharing across VMs running on a single physical machine, we measure the average CPU utilization of each VM, including Domain0 CPU usage and guest-domain CPU usage.
Network I/O per second (Kbytes/sec). We measure the amount of network I/O traffic, in KB per second, transferred from a remote web server for the corresponding workload.
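As a simple illustration of how the normalized and aggregated throughput metrics are derived from raw request counts, the following Python sketch uses made-up numbers and our own helper names; it is not the authors' measurement tooling.

    def normalized_throughput(measured, baseline):
        """Normalize a measured throughput (req/sec) against a chosen
        baseline throughput so different configurations can be compared."""
        return measured / baseline

    def aggregated_throughput(per_vm_requests, interval_sec):
        """Aggregate throughput (req/sec) across all VMs on one host, given
        the number of successful requests each VM served in the interval."""
        return sum(per_vm_requests.values()) / interval_sec

    # Example with made-up numbers: two guest domains measured over 60 seconds.
    requests = {"Domain1": 5400, "Domain2": 5100}
    agg = aggregated_throughput(requests, 60)          # 175.0 req/sec
    norm = normalized_throughput(agg, baseline=200.0)  # 0.875 of the baseline
    print(agg, norm)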
IMPACT ANALYSIS
In this section we provide a detailed performance
analysis of maintaining idle VM instances, focusing on the
cost and benefit of maintaining idle guest domains in the
presence of network I/O workloads on a separate VM sharing
the same physical host. Concretely, we focus our measurement
study on addressing the following two questions: First, we
want to understand the advantages and drawbacks of keeping
idle instances from the perspectives of both cloud providers
and cloud consumers. Second, we want to measure and
understand the start-up time of creating one or more new guest
domains on a physical host, and its impact on existing
applications. Consider a set of n (n > 0) VMs hosted on a physical machine. At any given point in time, a guest domain (VM) can be in one of the following three states: (1) execution state, namely the guest domain is currently using the CPU; (2) runnable state, namely the guest domain is on the run queue, waiting to be scheduled for execution on the CPU; and (3) blocked state, namely the guest domain is blocked and is not on the run queue. A guest domain is called idle when the guest OS is executing its idle loop.
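The three guest-domain states can be summarized in a small illustrative Python enum; the classify helper and its boolean inputs are a simplification for exposition, not a Xen or Xentop API.

    from enum import Enum

    class DomainState(Enum):
        """The three guest-domain states considered in the analysis."""
        EXECUTION = "execution"   # currently using a CPU
        RUNNABLE = "runnable"     # on the run queue, waiting for a CPU
        BLOCKED = "blocked"       # blocked (e.g. on I/O), not on the run queue

    def classify(on_cpu: bool, on_run_queue: bool) -> DomainState:
        """Map a domain's scheduling status onto one of the three states."""
        if on_cpu:
            return DomainState.EXECUTION
        if on_run_queue:
            return DomainState.RUNNABLE
        return DomainState.BLOCKED

    # An idle domain (guest OS running its idle loop) typically appears
    # blocked most of the time, since it has no work and yields the CPU.
    print(classify(on_cpu=False, on_run_queue=False))  # DomainState.BLOCKED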
MEASUREMENT
We set up our environment with one VM (VM1) running one of the six selected network I/O workloads of 1 KB, 4 KB, 30 KB, 50 KB, 70 KB and 100 KB files. The value of each I/O workload characteristic is measured at a 100% workload rate for the given workload type. Compared with the network-intensive workloads of 30 KB, 50 KB, 70 KB, and 100 KB files, the CPU-intensive workloads of 1 KB and 4 KB files have at least 30% and 60% lower event and switch costs, respectively, because the network I/O processing is more efficient in these cases. We then measure the normalized throughput, CPU utilization and network I/O of Domain1 and Domain2, both running an identical 1 KB application at a 50% workload rate.
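A minimal sketch of a single-client throughput driver is shown below. It assumes a hypothetical URL for a web document hosted inside VM1, reports requests per second and KB per second, and is not the workload generator actually used in the experiments.

    import time
    import urllib.request

    def measure_throughput(url, duration_sec=30):
        """Repeatedly fetch one web document for duration_sec seconds and
        report server throughput (req/sec) and network I/O (KB/sec)."""
        requests_done = 0
        bytes_received = 0
        deadline = time.time() + duration_sec
        while time.time() < deadline:
            with urllib.request.urlopen(url) as resp:
                bytes_received += len(resp.read())
            requests_done += 1
        return {
            "req_per_sec": requests_done / duration_sec,
            "kb_per_sec": bytes_received / 1024 / duration_sec,
        }

    # Hypothetical example: a 1 KB document served by a web server inside VM1.
    # stats = measure_throughput("http://192.168.1.10/file_1KB.html")
    # print(stats)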
CONCLUSION
Cloud computing offers scalable infrastructure and software off site, saving labor, hardware, and power costs. Financially, the cloud's virtual resources are typically cheaper than dedicated physical resources connected to a personal computer or network. This project proposes a virtualization approach of using the hypervisor to install the Eucalyptus software in a single physical machine. By using the hypervisor, the front end and back end are installed in the same machine. The performance is then measured based on the interference in parallel processing of CPU-intensive and network-intensive workloads by using the Xen Virtual Machine Monitor. The main motivation of this project is to provide a scalable virtualized data center.
REFERENCES:
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15] http://linux.die.net/man/1/xentop