
Assignment 1


Define Cloud Computing. Explain the bird's-eye view of cloud computing.
Cloud computing is a way of delivering IT services on demand that is utility-
oriented and Internet-centric. In other words, cloud computing offers a pay-as-
you-go model for IT services such as storage, compute power, network
infrastructure, and applications, delivered over the Internet. This has transformed
IT assets from capital costs (paid up front) into operational costs (paid as they are
used).

A Bird's-Eye View
Here is a simplified overview of a cloud computing environment:

Users: Businesses and individuals who subscribe to cloud services.

Cloud Service Providers (CSPs): Companies that own and maintain the
infrastructure and software that power cloud services (e.g., Amazon Web
Services, Google AppEngine, Microsoft Azure).

Data Centers: Massive facilities housing the physical servers, storage, and
networking equipment.

Virtualisation: Technology that allows the creation of virtual servers, storage,


and networks, enabling resource sharing and on-demand provisioning.

Internet: The primary medium for delivering cloud services.

Service Models: The different types of cloud services, typically categorized as:

Infrastructure as a Service (IaaS): Provides virtual hardware, storage, and networking resources.

Platform as a Service (PaaS): Offers runtime environments and tools for developing and deploying applications.

Software as a Service (SaaS): Delivers applications over the Internet.

Explain the cloud computing reference model.


The sources describe a layered Cloud Computing Reference Model to categorise
the wide array of cloud computing services. It encompasses the entire computing
stack, from the physical hardware to the software applications delivered to users.
This model helps in understanding the different layers of cloud services and their
interrelationships.

The Three Main Service Models


The Cloud Computing Reference Model identifies three primary service models:

1. Infrastructure as a Service (IaaS):

This is the foundation of the cloud computing stack.

IaaS delivers on-demand infrastructure in the form of virtual hardware, storage, and networking.

Virtual hardware is utilized to provide compute on demand in the form of virtual machine instances.

Users can create virtual machine instances, configure their operating
systems, and manage their applications.

IaaS providers typically charge users based on their usage of virtual resources, often on a per-hour basis.

Virtual storage is delivered in the form of raw disk space or object store.

Examples include Amazon Web Services (AWS) and its Elastic Compute
Cloud (EC2) service.

2. Platform as a Service (PaaS):

PaaS sits atop IaaS and provides a higher level of abstraction.

It offers scalable runtime environments and tools for developers to build, deploy, and manage applications without worrying about the underlying infrastructure.

These services are backed by a core middleware platform that is responsible for creating the abstract environment where applications are deployed and executed.

The service provider is responsible for scalability and fault tolerance, while users focus on the application logic, leveraging the provider's APIs and libraries.

PaaS solutions include programming language support, databases, middleware, and other development tools.

Examples include Google AppEngine and Microsoft Azure.

3. Software as a Service (SaaS):

SaaS is the top layer of the cloud computing stack.

It delivers software applications over the Internet on a subscription basis.

Users access applications through web browsers or thin clients, without needing to install or manage any software locally.

These applications are shared across multiple users, with each user's interaction isolated from the others.

The SaaS layer is also home to social networking websites, which leverage cloud-based infrastructures to sustain the load generated by their popularity.

Examples include Salesforce.com, Google Apps, and Dropbox.
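The per-hour IaaS charging model mentioned above can be sketched as a simple calculation. This is a toy example: the instance names, rates, and the round-up-to-the-hour rule are illustrative assumptions, not any provider's actual billing logic.

```python
import math
from dataclasses import dataclass

@dataclass
class InstanceUsage:
    """One virtual machine instance and how long it ran."""
    name: str
    hourly_rate: float  # price per hour (illustrative currency units)
    hours_used: float

def monthly_bill(usages):
    """Pay-per-use total; partial hours are rounded up to a full hour,
    a common (but provider-specific) billing convention."""
    return sum(u.hourly_rate * math.ceil(u.hours_used) for u in usages)

bill = monthly_bill([
    InstanceUsage("web-server", 0.10, 720),  # ran the whole month
    InstanceUsage("batch-job", 0.40, 2.5),   # 2.5 h billed as 3 h
])
```

The key point is that the charge is a function of measured usage only, with no upfront component, which is what turns capital costs into operational costs.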

Describe the characteristics and benefits of cloud computing.
Characteristics of Cloud Computing
On-Demand Access: Users can access resources as and when needed,
eliminating the need for upfront investments.

Elasticity: The ability to quickly scale computing resources up or down in
response to changing demands. A key characteristic is the ability to adjust to a
variable workload by acquiring or releasing computing resources.

Utility Computing: Users are charged based on their consumption, similar to utilities like electricity or water. This model helps control costs by allowing users to pay only for what they use. Before the advent of cloud computing, organisations had to make significant upfront capital investments in IT software and infrastructure.

Resource Virtualisation: Cloud computing relies on the virtualisation of hardware, including servers, storage, and network devices. This allows resources to be shared amongst multiple users and dynamically provisioned as needed.

Internet-based Access: Cloud services are delivered over the internet, enabling widespread accessibility. This characteristic is highlighted in the client-server model, where users typically use simple software, a thin client, to access the cloud services. The client handles minimal processing, with the majority of computational tasks being executed on the cloud.

Massive Infrastructure: Cloud providers maintain and operate large-scale data centres and infrastructure to support the elasticity and scalability of services. This eliminates the need for individual users or organisations to invest in and manage their own physical infrastructure.

Simplified Application Acceleration and Scalability: Applications deployed in the cloud can be easily accelerated and scaled as needed. Cloud providers offer tools and services to simplify this process, allowing businesses to quickly respond to changes in demand and optimize application performance.

Efficient Resource Allocation: Cloud computing enables the efficient allocation of resources amongst multiple users and applications, optimising utilisation and reducing waste. This is achieved through techniques like resource pooling, where a shared pool of resources is dynamically allocated to different users.

Energy Efficiency: By utilising resources more efficiently and sharing them amongst multiple users, cloud computing can contribute to energy savings.

Seamless Creation and Use of Third-Party Services: The sources mention
that cloud computing enables the seamless integration of third-party
services into applications.
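The elasticity characteristic above is often implemented as a simple threshold rule over measured utilisation. A minimal sketch follows; the thresholds, limits, and one-VM-at-a-time step are illustrative assumptions, not any provider's autoscaling policy.

```python
def scale_decision(utilisation, current_vms, high=0.80, low=0.30,
                   min_vms=1, max_vms=10):
    """Toy autoscaling rule: acquire a VM when average utilisation is
    high, release one when it is low, otherwise hold steady."""
    if utilisation > high and current_vms < max_vms:
        return current_vms + 1   # scale out under heavy load
    if utilisation < low and current_vms > min_vms:
        return current_vms - 1   # scale in when mostly idle
    return current_vms

# A spike triggers scale-out; idle periods release resources.
after_spike = scale_decision(0.95, 3)  # -> 4
after_lull = scale_decision(0.10, 3)   # -> 2
steady = scale_decision(0.50, 3)       # -> 3
```

Real autoscalers add smoothing and cooldown periods so brief spikes do not cause oscillating acquisitions and releases, but the acquire/release loop is the same idea.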

Benefits of Cloud Computing


No Upfront Commitments: Users can avoid significant upfront investments
in IT infrastructure and instead pay for services as they use them. This
removes the financial barrier to entry for many businesses and individuals.

Nice Pricing: The pay-as-you-go pricing model offered by cloud providers makes computing services more affordable and accessible.

Reduced IT Costs: Cloud computing can lead to significant cost savings for
businesses. This includes reduced spending on hardware, software,
maintenance, and IT staff. The multiplexing of resources allows service
providers to operate more efficiently due to economies of scale, resulting in
lower costs for users.

Increased Efficiency: Cloud computing allows users to access resources and services over the internet, making them available anytime and anywhere. This can enhance efficiency and productivity for both individuals and businesses.

Enhanced Scalability and Flexibility: Users can quickly and easily scale their
computing resources up or down to meet changing demands. This flexibility
allows businesses to respond to fluctuating workloads and avoid
overspending on unused resources.

Improved Security: Cloud providers invest heavily in security measures to protect their infrastructure and customer data. In many cases, this can result in better security than businesses could achieve on their own. However, it's essential to consider the specific security requirements of applications and choose a cloud provider that meets those needs.

Increased Collaboration: Cloud-based applications and services can enhance collaboration by enabling users to work on projects and share information in real-time from different locations.

Faster Deployment: Cloud-based services can be deployed much faster than traditional IT solutions, reducing the time it takes for businesses to launch new applications and services.

Regular Updates: Cloud providers regularly update their services with new
features and security enhancements, providing users with access to the latest
technologies.

What are the challenges of cloud computing?


Security
Security is repeatedly cited as the most significant challenge facing cloud
computing. Gaining the trust of a large user base is critical for the future of cloud
computing, and ensuring the security of data stored and processed in the cloud is
paramount. Users accustomed to managing their own secure IT infrastructure may
be hesitant to relinquish control to a third-party provider.
The sources outline various specific security concerns:

Relinquishing Control: When users opt for cloud computing, they transfer
control of their data and systems to third-party services. This introduces risks
such as unauthorised access, data corruption, infrastructure failure, and
service unavailability. Determining liability in case of problems becomes
difficult due to the complex structure of cloud services and the involvement of
multiple entities.

Data Storage: The storage of data across multiple sites, potentially in different
organisations and jurisdictions, raises security and privacy concerns. The
sources highlight the need for clear rules and regulations for cloud
governance, encompassing security, ethics, and data protection.

Multi-tenancy and Virtualisation: The shared infrastructure that underpins cloud computing, while beneficial in terms of cost and efficiency, can also introduce security risks. Performance and security isolation become more challenging in multi-tenant systems where multiple users share resources.

Traditional Threats: Cloud computing remains vulnerable to traditional security threats, amplified by the scale of cloud infrastructure and the large user population. Distributed denial-of-service (DDoS) attacks, phishing, SQL injection, and cross-site scripting are cited as examples of such threats.

Abuse of Resources: The sources mention the potential for malicious actors
to exploit cloud resources for nefarious activities, such as launching DDoS
attacks, distributing spam, or spreading malware.

Insecure APIs: Weakly designed application programming interfaces (APIs) can compromise security by failing to adequately protect user authentication, access control, and application monitoring during runtime.

Reliability and Availability of Service


The sources raise concerns about the reliability and availability of cloud
services. What happens if a service provider experiences an outage or fails to
deliver the promised service level? A temporary fix with negative economic
implications is overprovisioning, which involves maintaining sufficient resources
to handle even the highest projected demand. However, the sources question the
long-term sustainability of overprovisioning as a strategy.
Additionally, cloud service interruptions can significantly impact organisations
reliant on these services. This risk is compounded by the potential for permanent
data loss in the event of a catastrophic system failure.

Performance Unpredictability
Performance unpredictability arises from resource sharing in cloud
environments. The performance of virtual machines can fluctuate depending on
the overall load, the infrastructure services, and the activities of other users
sharing the same resources.

The sources indicate that new algorithms for controlling resource allocation and
workload placement are needed to address performance unpredictability. They
also point to autonomic computing—systems capable of self-organisation and
self-management—as a potential avenue for improving performance stability.

Elasticity

While elasticity, the ability to scale resources up and down rapidly, is a key
benefit of cloud computing, it also presents challenges. Effectively managing this
elasticity requires sophisticated algorithms for resource allocation and workload
placement.

Interoperability and Standardisation


Lack of standardisation and interoperability is a significant concern for cloud
users. The sources note that proprietary cloud computing environments may
negatively impact the field. Vendor lock-in, where users become tied to a specific
cloud provider, is a major concern.

Standardised interfaces and protocols would promote interoperability, allowing users to more easily switch between providers and mitigate the risks of vendor lock-in. However, establishing standards for a rapidly evolving technology can be difficult and potentially stifle innovation.

Software licensing: Adapting traditional software licensing models to the dynamic, multi-tenant nature of cloud computing presents difficulties.

System bugs: The complexity of cloud environments can lead to unexpected bugs and vulnerabilities.

Management of complex systems: The scale and dynamism of cloud infrastructure require sophisticated management tools and techniques.

User experience: Ensuring a positive user experience, particularly in terms of usability, security, and trust, is crucial for wider cloud adoption.

Explain Virtualization and Web 2.0.

Virtualisation
Virtualisation is a foundational technology for cloud computing, especially for
Infrastructure-as-a-Service (IaaS) solutions. It enables the creation of a simulated computing environment that can run applications and operating systems as if they were on physical hardware.

The sources highlight that virtualisation offers several benefits:

Increased Security: Virtualisation creates isolated environments (sandboxes) that prevent harmful operations from affecting other users or the host system.

Customisability and Control: Virtual environments can be easily configured and adjusted to meet specific application needs.

Efficient Resource Utilisation: Virtualisation enables server consolidation, allowing multiple virtual machines to run on a single physical server, maximising hardware utilisation and reducing power consumption.

Portability: Virtual machine images can be moved and executed on different physical machines, enhancing flexibility and disaster recovery capabilities.

The sources identify different types of virtualisation techniques:

Hardware Virtualisation: Simulates the hardware interface of a physical machine, allowing an operating system to run in a virtual machine.

Programming Language Virtualisation: Creates a virtual execution environment for a specific programming language, enabling portability across different operating systems and hardware platforms (examples include the Java Virtual Machine and the .NET Framework).

Storage Virtualisation: Abstracts the physical storage devices and presents them as a single logical storage pool.

Network Virtualisation: Creates virtual networks on top of physical network infrastructure, enabling flexibility and isolation for network traffic.
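The storage-virtualisation idea above can be sketched as a thin layer that presents one logical pool over several physical disks. This is a toy model: the class, sizes, and first-fit placement policy are illustrative assumptions, not how any real storage array works.

```python
class StoragePool:
    """Toy storage virtualisation: several physical disks appear as one
    logical pool; which disk holds which allocation is hidden."""

    def __init__(self, disk_sizes):
        self.free = list(disk_sizes)  # free space per physical disk

    def total_free(self):
        """The consumer sees only one aggregate capacity figure."""
        return sum(self.free)

    def allocate(self, size):
        """Place a chunk on the first disk with room (first-fit)."""
        for i, space in enumerate(self.free):
            if space >= size:
                self.free[i] -= size
                return i  # physical placement, invisible to the user
        raise MemoryError("pool exhausted")

pool = StoragePool([100, 100])
first = pool.allocate(70)    # fits on disk 0
second = pool.allocate(70)   # disk 0 has only 30 left, so disk 1
```

The consumer asks the pool for space and never names a disk; that indirection is what lets providers add, remove, or rebalance physical devices transparently.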

Examples of Virtualisation Technologies


The sources provide examples of specific virtualisation technologies and their
characteristics:

Xen: An open-source virtualisation platform initially based on paravirtualisation, which requires modifications to the guest operating system to work efficiently. Xen has evolved to support full virtualisation using hardware assistance.

VMware: A company offering a suite of virtualisation products based on full virtualisation, where the guest operating system runs unmodified on a simulated replica of the underlying hardware. VMware utilises techniques like binary translation to achieve full virtualisation on x86 architectures.

Microsoft Hyper-V: A hypervisor integrated into Windows Server that enables hardware virtualisation.

Web 2.0
Web 2.0 represents a shift in how the internet is used, moving from static websites
to interactive, user-driven applications and services. The sources emphasise
the role of Web 2.0 technologies in making cloud computing accessible and
appealing to a broader audience.

Key Characteristics of Web 2.0:

Interactivity: Web 2.0 applications facilitate user participation, content creation, and collaboration.

Flexibility: Web 2.0 technologies enable dynamic web pages and rich user
experiences similar to desktop applications.

Dynamism: Web 2.0 applications are constantly evolving, with frequent updates and feature additions that require no client-side software installations.

Loose Coupling: Web 2.0 applications can be easily combined and integrated to create new services and functionalities.

Accessibility: Web 2.0 applications are designed to be accessible from a variety of devices, including mobile phones, TVs, and car dashboards.

Relationship between Web 2.0 and Cloud Computing


The sources describe a strong connection between Web 2.0 and cloud
computing:

Delivery Mechanism: Web 2.0 technologies serve as the primary interface
through which cloud computing services are delivered, managed, and
accessed.

User Experience: Web 2.0's focus on interactivity and rich user interfaces
enhances the usability of cloud services, making them more appealing to a
wider audience.

Service Orientation: Web services, a key aspect of Web 2.0, enable the
creation and consumption of cloud services, promoting interoperability and
integration with existing systems.

Application Development: Web 2.0 technologies and frameworks facilitate the development of scalable, cloud-based applications that can leverage the elasticity and on-demand resources of cloud platforms.

Examples of Web 2.0 Applications


The sources mention several popular Web 2.0 applications, including:

Social Networking Platforms: Facebook, Twitter, Flickr.

Collaboration Tools: Google Docs, Wikipedia.

Content Sharing Platforms: YouTube, Blogger.

Outline the following with respect to cloud computing:

a) Mainframes
b) Clusters
c) Grid

a) Mainframes
Mainframes were among the earliest forms of large-scale computing systems,
featuring multiple processing units presented as a single entity to users.

They excelled in handling massive data processing tasks, such as online
transactions and enterprise resource planning, due to their focus on large data
movement and input/output (I/O) operations.

Key characteristics of mainframes that influenced cloud computing include:

High Reliability: Mainframes were designed for continuous operation ("always on") with fault tolerance capabilities, allowing them to remain operational even when components failed. This emphasis on reliability is a core principle in cloud computing, where services are expected to be available with minimal downtime.

Centralised Control: Mainframes were typically managed by a single organisation, providing a high level of control over resources and security. While cloud computing often involves sharing infrastructure among multiple tenants, the concept of centralised management and security remains relevant, especially in private cloud deployments.

b) Clusters
Clusters emerged as a more cost-effective alternative to mainframes,
leveraging interconnected commodity hardware to achieve high performance.

The use of commodity hardware, rather than specialised components, made clusters more accessible to a wider range of organisations.

Key characteristics of clusters that influenced cloud computing include:

Scalability: Clusters can be expanded by adding more nodes, allowing for horizontal scaling to accommodate increasing workloads. This ability to scale on demand is a fundamental characteristic of cloud computing, enabling providers to adjust resources dynamically to meet fluctuating user demands.

Distributed Processing: Clusters distribute workloads across multiple nodes, enabling parallel processing to accelerate computations. Cloud computing leverages distributed processing extensively, allowing for the execution of complex tasks by dividing them into smaller units that can be processed concurrently.
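The divide-and-process-concurrently pattern above can be sketched with Python's standard library. As a stand-in, concurrent workers on one machine play the role of cluster nodes; the chunking scheme and the sum-of-squares task are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Work unit: in a real cluster this would run on a separate node."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    """Split the input into chunks, process them concurrently,
    then combine the partial results into the final answer."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

result = parallel_sum_of_squares(list(range(1000)))
```

The split-work-combine shape is the essence of cluster (and later cloud) processing: the overall task is only as slow as its largest chunk plus the cost of combining results.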

c) Grids
Grid computing extended the concept of clusters to a larger scale, connecting
geographically dispersed resources from multiple organisations to create a
virtual supercomputer.

The main motivation behind grids was to provide access to computing power
and resources that went beyond the capabilities of individual organisations.

Key characteristics of grids that influenced cloud computing include:

Resource Sharing: Grids allowed for the sharing of idle resources among
participating organisations, promoting efficient utilisation and reducing
costs. Cloud computing embodies this principle of resource sharing,
enabling providers to offer services to multiple tenants on shared
infrastructure.

Utility Computing: Grids introduced the concept of providing computing resources as a utility, where users pay only for what they consume. This "pay-per-use" model is a cornerstone of cloud computing, allowing users to access resources on demand without significant upfront investments.

Explain service-oriented and utility-oriented computing.


Service-Oriented Computing (SOC)
Service-oriented computing is a paradigm that structures software systems as
collections of interacting services. The sources emphasize that this approach is a
core principle behind cloud computing, particularly as it pertains to the concept
of "Everything as a Service" (XaaS).

Key Characteristics of Services:


Explicit Boundaries: Service interactions occur across various domains, often involving different trust authorities and execution environments. The sources highlight that this necessitates explicit invocation mechanisms, typically through message passing, as opposed to the transparent remote method invocations found in distributed object programming.

Autonomous and Platform-Agnostic: Services are designed to be self-contained and independent of specific platforms or programming languages. This characteristic promotes reusability and interoperability, enabling services to be integrated into diverse systems.

Location Transparency: The physical location of a service should be irrelevant to the consumer. This principle allows for flexibility in deploying and managing services across distributed environments, such as cloud platforms.
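The explicit-boundary and message-passing characteristics above can be illustrated with a toy service: everything crosses the boundary as a serialised message, never as a direct in-process method call. The service name, operations, and fixed exchange rate are invented for illustration (the message here stands in for a JSON/SOAP payload on the wire).

```python
import json

def currency_service(request_json):
    """Toy service with an explicit boundary: it accepts and returns
    serialised JSON messages, hiding its platform and implementation."""
    request = json.loads(request_json)
    rates = {"USD->EUR": 0.9}  # illustrative fixed rate table
    if request.get("operation") == "convert":
        key = f"{request['from']}->{request['to']}"
        result = {"status": "ok",
                  "amount": round(request["amount"] * rates[key], 2)}
    else:
        result = {"status": "error", "reason": "unknown operation"}
    return json.dumps(result)

# The consumer knows only the message format, not the implementation,
# the language it is written in, or where it runs.
reply = json.loads(currency_service(json.dumps(
    {"operation": "convert", "from": "USD", "to": "EUR", "amount": 100})))
```

Because the contract is just the message schema, the service could be rewritten in another language or moved to another host without the consumer noticing, which is exactly the autonomy and location transparency described above.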

Benefits of SOC for Cloud Computing:


Flexibility and Agility: SOA allows for the rapid composition and deployment
of new services, enabling cloud providers to respond quickly to changing
demands and offer a wider range of services.

Interoperability: Web services promote interoperability between different systems and platforms, facilitating integration and collaboration within cloud environments.

Scalability: Services can be easily scaled up or down to meet fluctuating workloads, a critical requirement for cloud platforms.

Reduced Development Costs: Reusability and composability of services can lower development costs by eliminating the need to build everything from scratch.

Utility-Oriented Computing
Utility-oriented computing envisions providing computing resources—such as
processing power, storage, and software—as utilities, similar to electricity or
water. This model aims to make computing resources accessible on demand, with
users paying only for what they consume. The sources trace the origins of this
concept back to the early days of computing, with John McCarthy's vision of a
"computer utility" in 1961.

Key Characteristics of Utility-Oriented Computing:
On-Demand Access: Users can access and provision resources as needed,
without lengthy procurement processes or upfront investments.

Pay-Per-Use Pricing: Users pay only for the resources they consume, based
on usage metrics such as processing time, storage capacity, or bandwidth.

Elasticity: Resources can be scaled up or down dynamically to match fluctuating workloads, ensuring optimal resource utilization and cost efficiency.

Self-Service: Users can typically provision and manage resources themselves through web interfaces or APIs, reducing reliance on IT staff.
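The pay-per-use characteristic above amounts to metering consumption and pricing each metric, just as an electricity meter does. A minimal sketch follows; the metric names and unit prices are an invented example, not any provider's tariff.

```python
# Illustrative unit prices per metered quantity (invented numbers).
PRICES = {
    "cpu_hours": 0.05,        # per hour of compute
    "storage_gb_month": 0.02, # per GB stored for a month
    "egress_gb": 0.08,        # per GB of outbound bandwidth
}

def metered_charge(usage):
    """Utility-style bill: each metered quantity times its unit price,
    summed across metrics."""
    return round(sum(PRICES[metric] * qty for metric, qty in usage.items()), 2)

charge = metered_charge({
    "cpu_hours": 200,
    "storage_gb_month": 500,
    "egress_gb": 50,
})
```

Nothing in the bill depends on owned capacity, which is the defining contrast with the pre-cloud model of upfront infrastructure investment.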

Benefits of Utility-Oriented Computing for Cloud Computing:


Cost Savings: Eliminating upfront capital expenditures and paying only for
actual usage can significantly reduce IT costs for cloud consumers.

Increased Agility: On-demand access to resources allows businesses to respond quickly to changing market conditions or customer demands.

Simplified IT Management: Cloud providers handle the complexities of infrastructure management, allowing businesses to focus on their core operations.

Scalability and Flexibility: Elasticity and pay-per-use pricing enable businesses to scale their computing resources up or down as needed, providing flexibility to handle varying workloads.

Summarize any five computing platforms and technologies.
1. Mainframes
Mainframes are powerful, highly reliable computer systems that were prevalent in
the early days of computing. They are characterized by their ability to handle massive data processing tasks and operate continuously with minimal downtime.
Mainframes have influenced cloud computing by laying the groundwork for
concepts such as:

Centralised control and management of computing resources: Mainframes were typically owned and operated by a single organisation, allowing for tight control over resources and security. This concept is reflected in modern cloud computing, particularly in private cloud deployments, where organisations maintain control over their infrastructure.

High availability and fault tolerance: Mainframes were designed for continuous operation and featured mechanisms to tolerate component failures without disrupting service. This focus on reliability is a cornerstone of cloud computing, where service availability is paramount.

2. Clusters
Clusters are collections of interconnected commodity computers that work
together as a single system. Unlike mainframes, clusters leverage readily
available, off-the-shelf hardware, making them more affordable and accessible.
Clusters have significantly influenced cloud computing through their emphasis on:

Scalability: Clusters can be easily expanded by adding more nodes, allowing for horizontal scaling to meet increasing demands. This ability to scale dynamically is crucial for cloud providers to accommodate fluctuating workloads and maintain performance.

Distributed processing: Clusters distribute workloads across multiple nodes, enabling parallel processing to accelerate computations. This approach is fundamental to cloud computing, where complex tasks are often divided into smaller units that can be processed concurrently across a distributed infrastructure.

3. Grids
Grid computing takes the concept of clusters to a larger scale, connecting
geographically dispersed resources from multiple organisations to form a virtual
supercomputer. Grids were initially developed to tackle problems that required
computing power beyond the capacity of individual organisations. They have
significantly influenced cloud computing by pioneering:

Resource sharing: Grids allowed organisations to pool their computing
resources and share idle capacity, promoting efficient utilization and reducing
costs. Cloud computing has adopted this principle of resource sharing,
enabling providers to offer services to multiple tenants on a shared
infrastructure.

Utility computing: Grids introduced the "pay-per-use" model, where users pay only for the resources they consume. This model has become a cornerstone of cloud computing, providing users with on-demand access to resources without substantial upfront investments.

4. Xen
Xen is an open-source virtualisation platform that has been influential in the
development of cloud computing infrastructures. It utilizes a technique called
paravirtualization, which modifies the guest operating systems to work more
efficiently with the underlying hardware. Xen was initially developed for server
and desktop virtualisation, but its applications have expanded to include cloud
computing solutions through platforms like Xen Cloud Platform (XCP).

5. Aneka
Aneka is a software platform designed specifically for developing and deploying
cloud applications. It functions as a Platform-as-a-Service (PaaS) solution,
providing developers with tools and services to create and manage applications in
a cloud environment. Aneka allows developers to harness a variety of computing
resources, such as clusters, multicore servers, and even public cloud
infrastructures, and manage them as a unified virtual domain called the "Aneka
Cloud". Key features of Aneka include:

Support for multiple programming models: Aneka provides a flexible framework that supports various programming models, including thread programming, task programming, and MapReduce, allowing developers to choose the most suitable approach for their applications.

Resource management and scalability: Aneka's middleware handles resource allocation, task scheduling, and scalability, allowing developers to focus on application logic rather than infrastructure management.

Extensible API and management tools: Aneka offers an extensive set of APIs
for developing cloud applications and includes management tools for
administrators to control and monitor Aneka Clouds.

MODULE 2

Define virtualisation. List and explain the need for virtualisation.
Virtualisation is a technique that abstracts or simulates hardware resources,
such as memory, storage, or networking, to create a virtual version of a computing
environment. In essence, it provides the illusion of a particular environment,
whether it's a runtime environment, a storage facility, a network connection, or a
remote desktop. This is achieved through an emulation or abstraction layer that
separates the guest (the software or system using the virtual resources) from the
host (the underlying physical hardware).

The Necessity of Virtualisation


Virtualisation has become a cornerstone of modern computing, particularly in the
realm of cloud computing. It offers a myriad of benefits that address critical needs
in today's technological landscape:

1. Efficient Resource Utilisation and Cost Savings


Virtualisation enables the creation of multiple isolated and controllable
environments on a single physical machine. This allows organisations to:

Consolidate servers: Multiple applications or services that were previously running on separate physical servers can be migrated to virtual machines on a single server. This reduces hardware costs, energy consumption, and physical space requirements.

Optimise resource allocation: Virtual machines can be dynamically allocated resources (CPU, memory, storage) as needed. This ensures that resources are used efficiently and applications receive the necessary resources to perform optimally without over-provisioning.

2. Enhanced Flexibility and Agility


Virtualisation provides greater flexibility and agility in managing IT infrastructure
and services:

Rapid provisioning: Virtual machines can be created and deployed much faster than setting up physical servers, allowing organisations to respond quickly to changing demands or new business opportunities.

Easy migration and scalability: Virtual machines can be easily migrated between physical hosts, enabling workload balancing and dynamic scaling to meet fluctuating demands. This is crucial for cloud providers to maintain service availability and performance.

3. Improved Security and Isolation


Virtualisation offers mechanisms for enhancing security and isolating
applications from each other and from the underlying infrastructure:

Sandboxing: Virtual machines act as isolated containers, limiting the impact of security breaches or software malfunctions. If a virtual machine is compromised, it's less likely to affect other virtual machines or the host system.

Secure multi-tenancy: Cloud providers can use virtualisation to host applications from multiple tenants (customers) on the same infrastructure while ensuring that their data and applications remain isolated and secure.

4. Simplified Software Testing and Development


Virtualisation streamlines software development and testing processes:

Environment replication: Developers can easily create virtual machines that replicate various operating systems and configurations, enabling them to test their applications in different environments without needing dedicated hardware.

Legacy software support: Older applications that may not be compatible with modern hardware or operating systems can be run in virtual machines that emulate their original environments.

5. Business Continuity and Disaster Recovery


Virtualisation facilitates business continuity and disaster recovery efforts:

Rapid recovery: Virtual machine images can be backed up and restored quickly, minimising downtime in the event of a system failure or disaster.

Geographic redundancy: Virtual machines can be replicated across multiple geographic locations, providing resilience against localised outages and ensuring service availability.

Explain virtualization reference model.


The virtualisation reference model provides a framework for understanding the
key components and interactions within a virtualised environment. This model
helps to clarify the roles of different elements and how they work together to
create the illusion of a separate computing environment.

Core Components
The virtualisation reference model typically consists of three core components:

Guest: The guest represents the software or system that interacts with the
virtualisation layer rather than directly with the host hardware. In the case of
hardware virtualisation, the guest is typically an operating system and its
associated applications. For other types of virtualisation, like storage
virtualisation, the guest could be client applications or users interacting with
virtual storage management software.

Host: The host is the underlying physical environment that provides the actual
hardware resources. This includes the physical server hardware, storage
devices, networking components, and any operating system running directly
on the hardware.

Virtualisation Layer: The virtualisation layer sits between the guest and the host, responsible for creating and managing the virtual environment. This layer intercepts requests from the guest and translates them into operations on the underlying host resources. In hardware virtualisation, the virtualisation layer is often called a hypervisor or virtual machine manager (VMM).

Interactions and Functionality


The virtualisation layer plays a crucial role in mediating the interactions between
the guest and the host. It emulates the necessary hardware interfaces, allocates
resources, and enforces isolation between different virtual environments.
This model helps to conceptualise how virtualisation works across different
domains, such as hardware, storage, and networking.

Example: Hardware Virtualisation


In hardware virtualisation, the guest operating system runs on a virtual machine
(VM) created by the hypervisor. The hypervisor intercepts privileged instructions
issued by the guest OS and manages access to the physical hardware. This
creates the illusion for the guest OS that it has exclusive control over its own
virtual hardware, even though it is sharing the underlying physical resources with
other VMs.
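The trap-and-emulate behaviour described above can be mimicked with a toy interpreter. This is a sketch only: the instruction names and the split into privileged and unprivileged operations are invented for illustration, not taken from any real ISA.

```python
# Toy trap-and-emulate: the "hypervisor" lets ordinary guest instructions
# run, but intercepts privileged ones and performs them on the guest's
# behalf against a virtual device, never the real hardware.
PRIVILEGED = {"OUT"}          # instructions that must trap to the VMM

class Hypervisor:
    def __init__(self):
        self.virtual_serial = []        # emulated I/O device for the guest

    def run(self, guest_program):
        acc = 0                         # the guest's (virtual) register
        for op, arg in guest_program:
            if op in PRIVILEGED:        # trap: the VMM emulates the effect
                self.virtual_serial.append(arg if arg is not None else acc)
            elif op == "ADD":           # unprivileged: executes directly
                acc += arg
        return acc

vmm = Hypervisor()
result = vmm.run([("ADD", 2), ("ADD", 3), ("OUT", None)])
assert result == 5
assert vmm.virtual_serial == [5]   # the I/O went to the virtual device
```

The guest "program" never touches a real device; it only ever sees the state the hypervisor chooses to expose, which is exactly the illusion described above.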

Summarize taxonomy of virtualization techniques.

Categorisation by Service Emulation:

Execution Virtualisation: Emulating execution environments,
encompassing operating systems, program binaries, and applications. This
is the oldest, most prevalent, and mature area of virtualisation.

Storage Virtualisation: Decoupling the physical organisation of storage hardware from its logical representation. This enables users to interact with storage resources using a logical path, simplifying storage management and enhancing flexibility.

Network Virtualisation: Creating and managing virtual networks, either by aggregating multiple physical networks into a single logical network (external) or by providing network-like functionality to operating system partitions (internal).

Further Categorisation of Execution Virtualisation:

Process-Level Virtualisation: Implemented on top of an existing operating system, leveraging the OS kernel's control over the hardware.

System-Level Virtualisation: Implemented directly on the hardware, minimizing reliance on an existing operating system.

Types of Virtual Computing Environments within Execution Virtualisation:

Bare Hardware Virtualisation: Providing guest operating systems with the illusion of running directly on physical hardware.

Operating System Resource Virtualisation: Creating isolated user spaces within a single operating system, offering separated execution environments for applications.

Low-Level Programming Language Virtualisation: Employing virtual machines to execute byte code, providing portability and managed execution for applications.

Application Library Virtualisation: Enabling applications to run in environments that don't natively support all their required features.
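Low-level programming language virtualisation, the category that covers the JVM and the .NET CLR, can be illustrated with a minimal stack-based bytecode interpreter. The opcode names below are made up for the example; real bytecode formats are far richer.

```python
# Minimal stack-based bytecode interpreter, in the spirit of the JVM/CLR:
# the same bytecode runs anywhere the interpreter runs, which is the
# portability argument for low-level programming language virtualisation.
def execute(bytecode):
    stack = []
    for op, *args in bytecode:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4, expressed as portable bytecode rather than machine code
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
assert execute(program) == 20
```

The interpreter, not the physical CPU, defines the instruction set the program sees, so the same `program` list would produce the same result on any host that can run the interpreter.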

Explain virtualization and cloud computing

Virtualisation
Virtualisation creates an abstraction layer that separates the software or system
(the guest) from the underlying physical hardware (the host). This abstraction is
achieved through a virtualisation layer, often called a hypervisor or virtual
machine manager (VMM) in the context of hardware virtualisation.
The sources provide a comprehensive taxonomy of virtualisation techniques,
categorising them based on the service they emulate:

Execution Virtualisation

Storage Virtualisation

Network Virtualisation

Within execution virtualisation, there are various techniques for creating virtual
computing environments, including:

Bare Hardware Virtualisation

Operating System Resource Virtualisation

Low-Level Programming Language Virtualisation

Application Library Virtualisation

Virtualisation offers numerous benefits:

Efficient resource utilisation

Enhanced flexibility and agility

Improved security and isolation

Cloud Computing
Cloud computing leverages virtualisation to deliver on-demand computing
resources and services over the internet. It enables users to access and manage
computing power, storage, and networking capabilities without the need for
significant upfront investment or in-house infrastructure management.
The sources highlight three key cloud delivery models:

Infrastructure-as-a-Service (IaaS)

Platform-as-a-Service (PaaS)

Software-as-a-Service (SaaS)

Cloud computing offers numerous advantages:

Scalability and elasticity

Cost-effectiveness

Accessibility and availability

Simplified IT management

Virtualisation in Cloud Computing


Virtualisation is a core enabler of cloud computing, powering the delivery of
flexible, scalable, and cost-effective cloud services. It allows cloud providers to:

Create and manage multiple isolated environments on a shared infrastructure.

Dynamically allocate resources to meet fluctuating demands.

Ensure security and isolation between different tenants.

Offer a diverse range of services based on virtualised resources.

Use of virtualisation in various cloud platforms:

Amazon Web Services (AWS): AWS heavily relies on virtualisation to offer a wide range of IaaS services, including Amazon EC2 (Elastic Compute Cloud), S3 (Simple Storage Service), and EBS (Elastic Block Store).

Microsoft Windows Azure: Azure leverages virtualisation to provide PaaS services, allowing developers to create and deploy cloud applications on a scalable and managed platform.

Open-Source Cloud Platforms: Open-source platforms like Eucalyptus, OpenNebula, Nimbus, and OpenStack utilize virtualisation techniques to provide control infrastructure for private clouds, enabling organisations to build and manage their own cloud environments.

Explain Machine Reference Model.

The Machine Reference Model is a conceptual framework that describes the layered architecture of modern computing systems, particularly in the context of virtualisation. This model helps explain how different software and hardware components interact to enable the execution of applications and operating systems.

The model comprises three key layers:


Instruction Set Architecture (ISA): The ISA is the lowest level of the model. It
defines the interface between the hardware and the software. The ISA
specifies the instructions that the processor can execute, the registers it uses,
the memory organisation, and the mechanisms for handling interrupts. The ISA
serves as the foundation upon which operating systems and applications are
built. There are two types of ISA:

System ISA: This is used by operating system developers.

User ISA: This is used by developers who create applications that directly
manage hardware.

Application Binary Interface (ABI): The ABI is a higher-level interface that sits
between the operating system and the applications or libraries that run on it.
The ABI handles details like low-level data types, data alignment, and calling
conventions, which dictate how functions are invoked and data is passed
between software components. The ABI ensures portability of applications
and libraries across different operating systems that adhere to the same ABI
standard. System calls, which allow applications to request services from the
operating system, are also defined at this level.

Application Programming Interface (API): The API is the highest level of abstraction in the model. It provides a set of functions and procedures that applications can use to interact with libraries or the underlying operating system. The API hides the complexities of lower-level operations, allowing developers to focus on the logic of their applications.

How the Layers Work Together


When an application wants to perform an operation, it makes a request through
the API. This request is then translated into a series of system calls defined by the
ABI. The operating system, in turn, interacts with the hardware through the ISA,
converting the system calls into machine instructions that the processor can
understand and execute.
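This layering can be observed from Python on a POSIX system: `print` is a high-level API call (buffering, formatting), while `os.write` is a thin wrapper over the `write` system call defined at the ABI level; both ultimately reach the ISA as machine instructions executed by the kernel and CPU.

```python
# Two routes to the same low-level work: print() goes through Python's
# high-level I/O API, while os.write() wraps the write() system call
# almost directly, taking only a file descriptor and raw bytes.
import os

r, w = os.pipe()                 # a file-descriptor pair to write into
os.write(w, b"via syscall\n")    # near-direct system call: fd and bytes
os.close(w)
data = os.read(r, 100)           # read() system call on the other end
os.close(r)
assert data == b"via syscall\n"
```

Note the asymmetry in convenience: the syscall-level interface deals only in file descriptors and byte strings, while the API layer above it adds text encoding, formatting, and buffering.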

Benefits of the Layered Approach


The layered architecture of the Machine Reference Model offers several benefits:

Modularity and Simplification: By separating concerns into distinct layers, the model simplifies the design and development of complex computing systems.

Portability: Applications and libraries can be written to a specific ABI, enabling them to run on different operating systems that implement the same ABI.

Security: The layers provide a degree of isolation, limiting the impact of software errors or security breaches.

Virtualisation: The model facilitates the implementation of various virtualisation techniques, enhancing resource utilisation, flexibility, and security.

Describe hardware virtualization reference model.

The hardware virtualisation reference model describes the components and interactions involved in creating and managing virtual machines (VMs), which are isolated environments that appear to be whole computers but only access a portion of the computer's resources. This model builds on the Machine Reference Model described in the previous answer, with the key addition of the hypervisor, also known as the Virtual Machine Monitor (VMM).

The key components of this model are:

Host: This is the physical machine that provides the underlying hardware
resources, including processors, memory, storage, and networking.

Hypervisor (VMM): This is a software layer that sits directly on the host hardware and is responsible for creating and managing VMs. The hypervisor abstracts the hardware from the guest operating systems, providing each VM with its own virtualised view of the resources. The hypervisor acts as a mediator, controlling access to the physical hardware and ensuring that each VM operates in isolation.

Guest Operating System: Each VM runs its own guest operating system,
which is unaware that it is operating in a virtualised environment. The guest
operating system interacts with the virtual hardware provided by the
hypervisor, just as it would interact with physical hardware.

Applications: Applications run within the guest operating system environment, just as they would on a physical machine. They are unaware of the underlying virtualisation layer.

Types of Hypervisors
The sources describe two main types of hypervisors:

Type I (Native): These hypervisors run directly on the host hardware, taking
the place of a traditional operating system. They have direct access to the
hardware and provide the most efficient performance. Examples include
VMware ESX, ESXi servers, Xen, and Denali.

Type II (Hosted): These hypervisors run as an application within a host operating system. They rely on the host operating system for some resource management functions, which can lead to slightly reduced performance compared to Type I hypervisors. Examples include VMware Workstation and User-mode Linux.

Hardware Virtualisation Techniques


The sources discuss various techniques used in hardware virtualisation:

Hardware-assisted virtualisation: Modern processors include hardware extensions that are specifically designed to support virtualisation. These extensions enhance the performance and security of virtualisation by providing hardware-level support for critical virtualisation functions. Examples include Intel VT-x and AMD-V.

Full virtualisation: This technique allows a guest operating system to run unmodified within a VM, as if it were running on the physical hardware. The hypervisor emulates the entire underlying hardware, ensuring that the guest operating system experiences full isolation and functionality.

Paravirtualisation: In this approach, the guest operating system is modified to work cooperatively with the hypervisor. The guest operating system makes specific calls to the hypervisor for sensitive operations, leading to improved performance but requiring modifications to the guest operating system code. Examples include Xen and Denali.

Partial virtualisation: This technique provides a subset of hardware virtualisation functionality. It does not provide full isolation or support for all guest operating system features, but it can be simpler to implement and may offer performance advantages for specific use cases.

Benefits of Hardware Virtualisation


Hardware virtualisation offers numerous benefits:

Server Consolidation: Multiple VMs can run on a single physical host, allowing for more efficient use of hardware resources and reducing the number of physical servers required.

Isolation and Security: Each VM operates in its own isolated environment, preventing applications or operating systems running in one VM from affecting others or the host system. This enhances security and reliability.

Flexibility and Portability: VMs can be easily created, moved, and copied,
providing flexibility in managing workloads and enabling portability of
applications across different hardware platforms.

Cost Savings: By reducing the number of physical servers needed, hardware virtualisation can lead to cost savings in hardware, energy consumption, and data centre space.

Challenges of Hardware Virtualisation


Despite its advantages, hardware virtualisation also presents some challenges:

Performance Overhead: Virtualisation introduces some performance overhead due to the need for the hypervisor to intercept and manage hardware accesses. Hardware-assisted virtualisation and paravirtualisation techniques are designed to mitigate this overhead.

Security Risks: While virtualisation enhances security in some ways, it also
introduces new attack vectors. Malicious code could potentially compromise
the hypervisor or escape from one VM to another.

Complexity: Managing a virtualised environment can be more complex than managing physical servers. Administrators need to understand the virtualisation layer and its implications for resource management, security, and performance.
