Assignment 1
A Bird's-Eye View
Here is a simplified overview of a cloud computing environment:
Cloud Service Providers (CSPs): Companies that own and maintain the
infrastructure and software that power cloud services (e.g., Amazon Web
Services, Google AppEngine, Microsoft Azure).
Data Centers: Massive facilities housing the physical servers, storage, and
networking equipment.
Software as a Service (SaaS): Delivers applications over the Internet.
Users can create virtual machine instances, configure their operating
systems, and manage their applications.
Virtual storage is delivered in the form of raw disk space or object store.
Examples include Amazon Web Services (AWS) and its Elastic Compute
Cloud (EC2) service.
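As a rough illustration (a toy model, not a real provider SDK; the instance-type catalogue and function names are invented), an IaaS-style "create an instance" call can be sketched in Python:

```python
# Toy sketch of IaaS-style provisioning. All names here are invented for
# illustration; real providers such as AWS EC2 expose this via web APIs.
from dataclasses import dataclass
from itertools import count

# Hypothetical instance-type catalogue: name -> (vCPUs, RAM in GB)
CATALOGUE = {"small": (1, 2), "medium": (2, 8), "large": (8, 32)}

_ids = count(1)

@dataclass
class Instance:
    instance_id: str
    instance_type: str
    vcpus: int
    ram_gb: int
    state: str = "running"

def run_instance(instance_type: str) -> Instance:
    """Provision a virtual machine of the requested type."""
    vcpus, ram_gb = CATALOGUE[instance_type]
    return Instance(f"i-{next(_ids):04d}", instance_type, vcpus, ram_gb)

vm = run_instance("medium")
print(vm.instance_id, vm.vcpus, vm.ram_gb, vm.state)
```

A real provider performs the same logical steps behind a web-service API, with a hypervisor doing the actual provisioning.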
SaaS applications are shared across multiple users, whose interactions are
isolated from one another.
The SaaS layer is also home to social networking websites, which leverage
cloud-based infrastructures to sustain the load generated by their
popularity.
Elasticity: The ability to quickly scale computing resources up or down in
response to changing demands, acquiring or releasing resources as the
workload varies.
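A minimal sketch of this idea, assuming simple invented utilisation thresholds and one-instance scaling steps, is a rule that acquires or releases capacity as load changes:

```python
# Minimal threshold-based elasticity sketch: scale out when average
# utilisation is high, scale in when it is low. Thresholds and step
# sizes are illustrative assumptions, not provider defaults.
def scale_decision(avg_utilisation: float, instances: int,
                   high: float = 0.8, low: float = 0.3,
                   min_instances: int = 1) -> int:
    """Return the new instance count for the observed utilisation."""
    if avg_utilisation > high:
        return instances + 1          # scale out under heavy load
    if avg_utilisation < low and instances > min_instances:
        return instances - 1          # scale in when capacity is idle
    return instances                  # within the comfort band: no change

print(scale_decision(0.92, 4))  # heavy load: acquire a resource
print(scale_decision(0.10, 4))  # idle: release a resource
print(scale_decision(0.50, 4))  # steady: no change
```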
Seamless Creation and Use of Third-Party Services: The sources mention
that cloud computing enables the seamless integration of third-party
services into applications.
Reduced IT Costs: Cloud computing can lead to significant cost savings for
businesses. This includes reduced spending on hardware, software,
maintenance, and IT staff. The multiplexing of resources allows service
providers to operate more efficiently due to economies of scale, resulting in
lower costs for users.
Enhanced Scalability and Flexibility: Users can quickly and easily scale their
computing resources up or down to meet changing demands. This flexibility
allows businesses to respond to fluctuating workloads and avoid
overspending on unused resources.
Regular Updates: Cloud providers regularly update their services with new
features and security enhancements, providing users with access to the latest
technologies.
Relinquishing Control: When users opt for cloud computing, they transfer
control of their data and systems to third-party services. This introduces risks
such as unauthorised access, data corruption, infrastructure failure, and
service unavailability. Determining liability in case of problems becomes
difficult due to the complex structure of cloud services and the involvement of
multiple entities.
Data Storage: The storage of data across multiple sites, potentially in different
organisations and jurisdictions, raises security and privacy concerns. The
sources highlight the need for clear rules and regulations for cloud
governance, encompassing security, ethics, and data protection.
Security Threats: The concentration of data and a large user population
make cloud platforms attractive targets. Distributed denial-of-service
(DDoS) attacks, phishing, SQL injection, and cross-site scripting are
cited as examples of such threats.
Abuse of Resources: The sources mention the potential for malicious actors
to exploit cloud resources for nefarious activities, such as launching DDoS
attacks, distributing spam, or spreading malware.
Performance Unpredictability
Performance unpredictability arises from resource sharing in cloud
environments. The performance of virtual machines can fluctuate depending on
the overall load, the infrastructure services, and the activities of other users
sharing the same resources.
The sources indicate that new algorithms for controlling resource allocation and
workload placement are needed to address performance unpredictability. They
also point to autonomic computing—systems capable of self-organisation and
self-management—as a potential avenue for improving performance stability.
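As a hedged sketch of what such an algorithm might look like in its simplest form, a greedy least-loaded placement rule (host names and load units are invented for the example) assigns each new workload to the host with the most spare capacity, one basic way to reduce interference between co-located users:

```python
# Sketch of load-aware workload placement: pick the least-loaded host
# for each new workload. Host names and load figures are illustrative.
def place(workload_load: float, hosts: dict[str, float]) -> str:
    """Place a workload on the host with the lowest current load."""
    target = min(hosts, key=hosts.get)   # least-loaded host
    hosts[target] += workload_load       # record the added load
    return target

hosts = {"host-a": 0.6, "host-b": 0.2, "host-c": 0.4}
print(place(0.3, hosts))  # goes to host-b, the least loaded
print(place(0.3, hosts))  # host-b is now at 0.5, so host-c is chosen
```

Production schedulers refine this with constraints, priorities, and feedback loops, which is where the self-managing (autonomic) behaviour mentioned above comes in.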
Elasticity
While elasticity, the ability to scale resources up and down rapidly, is a key
benefit of cloud computing, it also presents challenges. Effectively managing this
elasticity requires sophisticated algorithms for resource allocation and workload
placement.
Virtualisation
Virtualisation is a foundational technology for cloud computing, especially for
Infrastructure-as-a-Service (IaaS) solutions. It enables the creation of a simulated
computing environment that can run applications and operating systems as if
they were on physical hardware.
Xen, for example, originally relied on paravirtualisation, which requires
guest operating systems to be modified to work efficiently. Xen has since
evolved to support full virtualisation using hardware assistance.
Web 2.0
Web 2.0 represents a shift in how the internet is used, moving from static websites
to interactive, user-driven applications and services. The sources emphasise
the role of Web 2.0 technologies in making cloud computing accessible and
appealing to a broader audience.
Flexibility: Web 2.0 technologies enable dynamic web pages and rich user
experiences similar to desktop applications.
Delivery Mechanism: Web 2.0 technologies serve as the primary interface
through which cloud computing services are delivered, managed, and
accessed.
User Experience: Web 2.0's focus on interactivity and rich user interfaces
enhances the usability of cloud services, making them more appealing to a
wider audience.
Service Orientation: Web services, a key aspect of Web 2.0, enable the
creation and consumption of cloud services, promoting interoperability and
integration with existing systems.
a) Mainframes
Mainframes were among the earliest forms of large-scale computing systems,
featuring multiple processing units presented as a single entity to users.
They excelled in handling massive data processing tasks, such as online
transactions and enterprise resource planning, due to their focus on large data
movement and input/output (I/O) operations.
b) Clusters
Clusters emerged as a more cost-effective alternative to mainframes,
leveraging interconnected commodity hardware to achieve high performance.
c) Grids
Grid computing extended the concept of clusters to a larger scale, connecting
geographically dispersed resources from multiple organisations to create a
virtual supercomputer.
The main motivation behind grids was to provide access to computing power
and resources that went beyond the capabilities of individual organisations.
Resource Sharing: Grids allowed for the sharing of idle resources among
participating organisations, promoting efficient utilisation and reducing
costs. Cloud computing embodies this principle of resource sharing,
enabling providers to offer services to multiple tenants on shared
infrastructure.
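The shared-infrastructure principle carried over from grids can be sketched with a toy capacity pool (tenant names and capacity units are illustrative): several tenants are multiplexed onto one pool, and a request succeeds only while spare capacity remains.

```python
# Toy model of multi-tenant resource sharing on a common pool.
# Capacity units and tenant names are invented for illustration.
class SharedPool:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.allocations: dict[str, int] = {}

    def request(self, tenant: str, units: int) -> bool:
        """Grant the request if the pool still has spare capacity."""
        used = sum(self.allocations.values())
        if used + units > self.capacity:
            return False                      # pool exhausted
        self.allocations[tenant] = self.allocations.get(tenant, 0) + units
        return True

pool = SharedPool(capacity=10)
print(pool.request("org-a", 6))   # granted
print(pool.request("org-b", 3))   # granted
print(pool.request("org-c", 4))   # refused: only 1 unit remains
```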
The sources highlight that this necessitates explicit invocation
mechanisms, typically through message passing, as opposed to the
transparent remote method invocations found in distributed object
programming.
Utility-Oriented Computing
Utility-oriented computing envisions providing computing resources—such as
processing power, storage, and software—as utilities, similar to electricity or
water. This model aims to make computing resources accessible on demand, with
users paying only for what they consume. The sources trace the origins of this
concept back to the early days of computing, with John McCarthy's vision of a
"computer utility" in 1961.
Key Characteristics of Utility-Oriented Computing:
On-Demand Access: Users can access and provision resources as needed,
without lengthy procurement processes or upfront investments.
Pay-Per-Use Pricing: Users pay only for the resources they consume, based
on usage metrics such as processing time, storage capacity, or bandwidth.
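A toy metering calculation, with invented rates rather than real provider prices, shows how a pay-per-use charge is derived from such usage metrics:

```python
# Illustrative pay-per-use billing: charge only for metered consumption.
# The rates below are invented for the example, not real provider prices.
RATES = {"cpu_hours": 0.05, "storage_gb_months": 0.02, "egress_gb": 0.09}

def bill(usage: dict[str, float]) -> float:
    """Total charge for a usage record, in currency units."""
    return round(sum(RATES[metric] * amount
                     for metric, amount in usage.items()), 2)

# 100 CPU-hours + 50 GB-months of storage + 10 GB of egress
print(bill({"cpu_hours": 100, "storage_gb_months": 50, "egress_gb": 10}))
```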
1. Mainframes
Mainframes were designed to handle massive data processing tasks and to
operate continuously with minimal downtime. They have influenced cloud
computing by laying the groundwork for concepts such as reliability,
large-scale data processing, and the presentation of multiple processing
units as a single system.
2. Clusters
Clusters are collections of interconnected commodity computers that work
together as a single system. Unlike mainframes, clusters leverage readily
available, off-the-shelf hardware, making them more affordable and accessible.
Clusters have significantly influenced cloud computing through their emphasis on cost-effective scalability built from off-the-shelf, commodity hardware.
3. Grids
Grid computing takes the concept of clusters to a larger scale, connecting
geographically dispersed resources from multiple organisations to form a virtual
supercomputer. Grids were initially developed to tackle problems that required
computing power beyond the capacity of individual organisations. They have
significantly influenced cloud computing by pioneering:
Resource sharing: Grids allowed organisations to pool their computing
resources and share idle capacity, promoting efficient utilisation and reducing
costs. Cloud computing has adopted this principle of resource sharing,
enabling providers to offer services to multiple tenants on a shared
infrastructure.
4. Xen
Xen is an open-source virtualisation platform that has been influential in the
development of cloud computing infrastructures. It utilises a technique called
paravirtualisation, which modifies the guest operating systems to work more
efficiently with the underlying hardware. Xen was initially developed for server
and desktop virtualisation, but its applications have expanded to include cloud
computing solutions through platforms like Xen Cloud Platform (XCP).
5. Aneka
Aneka is a software platform designed specifically for developing and deploying
cloud applications. It functions as a Platform-as-a-Service (PaaS) solution,
providing developers with tools and services to create and manage applications in
a cloud environment. Aneka allows developers to harness a variety of computing
resources, such as clusters, multicore servers, and even public cloud
infrastructures, and manage them as a unified virtual domain called the "Aneka
Cloud". Key features of Aneka include:
Extensible API and management tools: Aneka offers an extensive set of APIs
for developing cloud applications and includes management tools for
administrators to control and monitor Aneka Clouds.
MODULE 2
Resource optimisation: Virtualisation helps ensure that hardware is
used efficiently and that applications receive the necessary resources to
perform optimally without over-provisioning.
Legacy software support: Older applications that may not be compatible with
modern hardware or operating systems can be run in virtual machines that
emulate their original environments.
Core Components
The virtualisation reference model typically consists of three core components:
Guest: The guest represents the software or system that interacts with the
virtualisation layer rather than directly with the host hardware. In the case of
hardware virtualisation, the guest is typically an operating system and its
associated applications. For other types of virtualisation, like storage
virtualisation, the guest could be client applications or users interacting with
virtual storage management software.
Host: The host is the underlying physical environment that provides the actual
hardware resources. This includes the physical server hardware, storage
devices, networking components, and any operating system running directly
on the hardware.
Virtualisation Layer: The virtualisation layer sits between the guest and the
host, responsible for creating and managing the virtual environment. This layer
intercepts requests from the guest and translates them into operations on the
underlying host resources. In hardware virtualisation, the virtualisation layer is
often called a hypervisor or virtual machine manager (VMM).
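A minimal Python model of this three-part split (all names are invented for illustration) shows the virtualisation layer intercepting a guest request and translating it into an operation on host resources:

```python
# Toy model of the guest / virtualisation layer / host reference model.
# The "translation" here is a virtual-to-physical disk-block mapping.
class Host:
    """Physical environment: owns the real resources."""
    def __init__(self, disk_blocks: int):
        self.disk = {}
        self.free_blocks = list(range(disk_blocks))

    def write_block(self, block: int, data: str):
        self.disk[block] = data

class VirtualisationLayer:
    """Intercepts guest requests and maps virtual to physical resources."""
    def __init__(self, host: Host):
        self.host = host
        self.mapping = {}          # guest virtual block -> host block

    def guest_write(self, virtual_block: int, data: str):
        if virtual_block not in self.mapping:
            self.mapping[virtual_block] = self.host.free_blocks.pop(0)
        self.host.write_block(self.mapping[virtual_block], data)

host = Host(disk_blocks=8)
vmm = VirtualisationLayer(host)
vmm.guest_write(0, "boot sector")   # the guest sees its own block 0...
print(vmm.mapping[0], host.disk)    # ...the layer mapped it to a host block
```

The guest never touches `host.disk` directly; every request passes through the layer, which is exactly the mediating role a hypervisor plays in hardware virtualisation.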
Summarize taxonomy of virtualization techniques.
Execution Virtualisation: Emulating execution environments,
encompassing operating systems, program binaries, and applications. This
is the oldest, most prevalent, and mature area of virtualisation.
Virtualisation
Virtualisation creates an abstraction layer that separates the software or system
(the guest) from the underlying physical hardware (the host). This abstraction is
achieved through a virtualisation layer, often called a hypervisor or virtual
machine manager (VMM) in the context of hardware virtualisation.
The sources provide a comprehensive taxonomy of virtualisation techniques,
categorising them based on the service they emulate:
Execution Virtualisation
Storage Virtualisation
Network Virtualisation
Within execution virtualisation, there are various techniques for creating
virtual computing environments, including hardware-level virtualisation
(full virtualisation, paravirtualisation, and hardware-assisted
virtualisation), operating-system-level virtualisation, and
programming-language-level virtualisation (e.g., the Java Virtual Machine).
Cloud Computing
Cloud computing leverages virtualisation to deliver on-demand computing
resources and services over the internet. It enables users to access and manage
computing power, storage, and networking capabilities without the need for
significant upfront investment or in-house infrastructure management.
The sources highlight three key cloud delivery models:
Infrastructure-as-a-Service (IaaS)
Platform-as-a-Service (PaaS)
Software-as-a-Service (SaaS)
The sources highlight benefits such as cost-effectiveness and simplified
IT management.
Explain Machine Reference Model.
User ISA: The portion of the instruction set architecture (ISA) relevant to
developers of applications that directly manage the hardware, while the
System ISA is the portion used by operating system developers.
Application Binary Interface (ABI): The ABI is a higher-level interface that sits
between the operating system and the applications or libraries that run on it.
The ABI handles details like low-level data types, data alignment, and calling
conventions, which dictate how functions are invoked and data is passed
between software components. The ABI ensures portability of applications
and libraries across different operating systems that adhere to the same ABI
standard. System calls, which allow applications to request services from the
operating system, are also defined at this level.
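One concrete way to see the ABI boundary from Python, assuming a POSIX system where the C library is loadable via `ctypes.CDLL(None)`, is to call the C library's `getpid` directly and compare it with Python's own system-call wrapper:

```python
# Illustration of the ABI boundary: a raw call into the C library via
# ctypes crosses the same calling-convention interface the ABI defines.
# Assumes a POSIX system (Linux/macOS) where CDLL(None) loads libc.
import ctypes
import os

libc = ctypes.CDLL(None)             # the process's own C library
pid_via_abi = libc.getpid()          # direct call across the ABI
pid_via_os = os.getpid()             # Python's system-call wrapper
print(pid_via_abi == pid_via_os)     # both name the same process
```

Both calls must agree because they end at the same `getpid` system call; the ABI merely fixes how arguments and return values move across the boundary.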
Describe hardware virtualization reference model.
Host: This is the physical machine that provides the underlying hardware
resources, including processors, memory, storage, and networking.
Hypervisor (VMM): This is a software layer that sits directly on the host
hardware and is responsible for creating and managing VMs. The hypervisor
abstracts the hardware from the guest operating systems, providing each VM
with its own virtualised view of the resources. The hypervisor acts as a
mediator, controlling access to the physical hardware and ensuring that each
VM operates in isolation.
Guest Operating System: Each VM runs its own guest operating system,
which is unaware that it is operating in a virtualised environment. The guest
operating system interacts with the virtual hardware provided by the
hypervisor, just as it would interact with physical hardware.
Types of Hypervisors
The sources describe two main types of hypervisors:
Type I (Native): These hypervisors run directly on the host hardware, taking
the place of a traditional operating system. They have direct access to the
hardware and provide the most efficient performance. Examples include
VMware ESX, ESXi servers, Xen, and Denali.
Type II (Hosted): These hypervisors run on top of an existing operating
system, which mediates their access to the hardware.
Isolation: The hypervisor emulates the entire underlying hardware,
ensuring that the guest operating system experiences full isolation and
functionality.
Flexibility and Portability: VMs can be easily created, moved, and copied,
providing flexibility in managing workloads and enabling portability of
applications across different hardware platforms.
Security Risks: While virtualisation enhances security in some ways, it also
introduces new attack vectors. Malicious code could potentially compromise
the hypervisor or escape from one VM to another.