
CS-3 Virtualisation, April 30th, WASE WIMS 2020, Prof A R Rahman


Cloud Computing - WASE SSWTZG527

CS 3 - April 30th 2023

BITS Pilani
Introduction to Virtualisation

AGENDA
• Introduction to Virtualization
• Uses and demerits of Virtualization

BITS Pilani, Deemed to be University under Section 3 of UGC Act, 1956


Virtualization in the Cloud: Transforming a Classic Data Center (CDC) into a Virtualized Data Center (VDC)

Transforming a Classic Data Center (CDC) into a Virtualized Data Center (VDC) requires virtualizing the core elements of the data center:
• Virtualize Compute
• Virtualize Storage
• Virtualize Network

Using a phased approach to a virtualized infrastructure enables a smoother transition when virtualizing these core elements.
Compute Virtualization

Compute virtualization is a technique of masking or abstracting the physical compute hardware and enabling multiple operating systems (OSs) to run concurrently on a single or clustered physical machine(s).

• Enables creation of multiple virtual machines (VMs), each running an OS and applications
• A VM is a logical entity that looks and behaves like a physical machine
• The virtualization layer resides between the hardware (x86 architecture: CPU, NIC card, memory, hard disk) and the VMs
  • Also known as the hypervisor
• VMs are provided with standardized hardware resources
Need for Compute Virtualization

Before virtualization (OS running directly on the x86 hardware):
• Runs a single operating system (OS) per machine at a time
• Couples software and hardware tightly
• May create conflicts when multiple applications run on the same machine
• Underutilizes resources
• Is inflexible and expensive

After virtualization (hypervisor between the x86 hardware and the OSs):
• Runs multiple operating systems (OSs) per machine concurrently
• Makes OS and applications hardware independent
• Isolates VMs from each other, hence no conflicts
• Improves resource utilization
• Offers flexible infrastructure at low cost
Hypervisor

A hypervisor is software that allows multiple operating systems (OSs) to run concurrently on a physical machine and to interact directly with the physical hardware.

• Has two components:
  • Kernel
  • Virtual Machine Monitor (VMM), one per VM
• The hypervisor (kernel and VMMs) sits directly on the x86 hardware (CPU, NIC card, memory, hard disk)


Types of Hypervisor

Type 1: Bare-Metal Hypervisor (hypervisor directly on the x86 hardware)
• Is itself an operating system (OS)
• Installs and runs on bare-metal x86 hardware
• Requires certified hardware

Type 2: Hosted Hypervisor (hypervisor on top of a host OS on the x86 hardware)
• Installs and runs as an application
• Relies on the operating system (OS) running on the physical machine for device support and physical resource management
Benefits of Compute Virtualization

• Server consolidation
• Isolation
• Encapsulation
• Hardware independence
• Reduced cost
Requirements: x86 Hardware Virtualization

• An operating system (OS) is designed to run on bare-metal hardware and to fully own the hardware
• The x86 architecture offers four levels of privilege: Ring 0, 1, 2, and 3
  • User applications run in Ring 3
  • The OS runs in Ring 0 (most privileged)
• Challenges of virtualizing x86 hardware
  • Requires placing the virtualization layer below the OS layer
  • It is difficult to capture and translate privileged OS instructions at runtime
• Techniques to virtualize compute
  • Full, para-, and hardware-assisted virtualization
Full Virtualization

• The Virtual Machine Monitor (VMM) runs in the privileged Ring 0; the guest OS runs in Ring 1, with user apps in Ring 3
• The VMM decouples the guest operating system (OS) from the underlying physical hardware
• Each VM is assigned a VMM
  • Provides virtual components to each VM
  • Performs Binary Translation (BT) of non-virtualizable OS instructions
• The guest OS is not aware of being virtualized
Paravirtualization

• The guest operating system (OS) knows that it is virtualized
• A modified (paravirtualized) guest OS kernel runs in Ring 0 above the hypervisor, with user apps in Ring 3
• Modified guest OS kernels are used, such as Linux and OpenBSD
• Unmodified guest OSs, such as Microsoft Windows, are not supported
Hardware Assisted Virtualization

• Achieved by using a hypervisor-aware CPU to handle privileged instructions; the guest OS stays in Ring 0 with the VMM beneath it
• Reduces the virtualization overhead caused by full and paravirtualization
• CPU and memory virtualization support is provided in hardware
• Enabled by AMD-V and Intel VT technologies in the x86 processor architecture
Virtual Machine

• From a user's perspective, a VM is a logical compute system
  • Runs an operating system (OS) and applications like a physical machine
  • Contains virtual components such as CPU, RAM, disk, and NIC
• From a hypervisor's perspective
  • A virtual machine (VM) is a discrete set of files, such as the configuration file, virtual disk files, virtual BIOS file, VM swap file, and log file
Virtual Machine Files

• Virtual BIOS file: stores the state of the virtual machine's (VM's) BIOS
• Virtual swap file: the VM's paging file, which backs up the VM RAM contents; exists only while the VM is running
• Virtual disk file: stores the contents of the VM's disk drive and appears as a physical disk drive to the VM; a VM can have multiple disk drives
• Log file: keeps a log of VM activity; useful for troubleshooting
• Virtual configuration file: stores the configuration chosen during VM creation, such as the number of CPUs, memory, number and type of network adapters, and disk types
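The idea that a VM is, to the hypervisor, nothing more than a discrete set of files can be made concrete with a short Python sketch. The file names and extensions below are illustrative, not any particular hypervisor's on-disk format:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    """A VM as the hypervisor sees it: a named, discrete set of files."""
    name: str
    files: dict = field(default_factory=dict)

    def power_on(self):
        # The swap file exists only while the VM is running
        self.files["swap"] = f"{self.name}.vswp"

    def power_off(self):
        self.files.pop("swap", None)

vm = VirtualMachine("web01", files={
    "config": "web01.vmx",    # CPUs, memory, adapters, disk types
    "disk":   "web01.vmdk",   # contents of the VM's disk drive
    "bios":   "web01.nvram",  # saved state of the VM's BIOS
    "log":    "web01.log",    # activity log, useful for troubleshooting
})
vm.power_on()
assert "swap" in vm.files      # created at power-on
vm.power_off()
assert "swap" not in vm.files  # deleted at power-off
```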
File System to Manage VM Files

• The file systems supported by the hypervisor are the Virtual Machine File System (VMFS) and the Network File System (NFS)
• VMFS
  • Is a cluster file system that allows multiple physical machines to perform reads/writes on the same storage device concurrently
  • Is deployed on FC and iSCSI storage, apart from local storage
• NFS
  • Enables storing VM files on a remote file server (NAS device)
  • The NFS client is built into the hypervisor
Virtual Machine Hardware

A VM is presented with a standard set of virtual hardware components: a VM chipset with one or more CPUs, RAM, a graphics card, keyboard, mouse, network adapters (NIC and HBA), SCSI controllers, IDE controllers, a floppy controller and floppy drives, parallel and serial/COM ports, and a USB controller with USB devices.
VM Hardware Components

• vCPU: a virtual machine (VM) can be configured with one or more virtual CPUs; the number of CPUs allocated to a VM can be changed
• vRAM: the amount of memory presented to the guest operating system (OS); the memory size can be changed based on requirements
• Virtual disk: stores the VM's OS and application data; a VM should have at least one virtual disk
• vNIC: enables a VM to connect to other physical and virtual machines
• Virtual DVD/CD-ROM drive: maps a VM's DVD/CD-ROM drive to either a physical drive or an .iso file
• Virtual floppy drive: maps a VM's floppy drive to either a physical drive or an .flp file
• Virtual SCSI controller: used by the VM to access its virtual disks
• Virtual USB controller: maps the VM's USB controller to the physical USB controller
Virtual Machine Console

• Provides mouse, keyboard, and screen functionality
• Sends power changes (on/off) to the virtual machine (VM)
• Allows access to the BIOS of the VM
• Typically used for virtual hardware configuration and for troubleshooting
Resource Management

Resource management is the process of allocating resources from a physical machine or clustered physical machines to virtual machines (VMs) to optimize resource utilization.

• Goals of resource management
  • Controls utilization of resources
  • Prevents VMs from monopolizing resources
  • Allocates resources based on the relative priority of VMs
• Resources must be pooled to manage them centrally
Resource Pool

A resource pool is a logical abstraction of aggregated physical resources that are managed centrally.

• Created from a physical machine or cluster
• Administrators may create child resource pools or virtual machines (VMs) from the parent resource pool
• Reservation, limit, and share are used to control the resources consumed by resource pools or VMs
Resource Pool Example

A standalone physical machine (Machine 1) forms the parent pool: CPU = 3000 MHz, memory = 6 GB. From it are carved:
• Engineering Pool (child pool): CPU = 1000 MHz, memory = 2 GB
  • Engineering-Test VM: CPU = 500 MHz, memory = 1 GB
  • Engineering-Production VM: CPU = 500 MHz, memory = 1 GB
• Finance Pool (child pool): CPU = 1000 MHz, memory = 2 GB
  • Finance-Test VM: CPU = 500 MHz, memory = 1 GB
  • Finance-Production VM: CPU = 500 MHz, memory = 1 GB
• Marketing-Production VM (directly under the parent pool): CPU = 500 MHz, memory = 1 GB
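The pool hierarchy above can be sketched in a few lines of Python. The capacity check is a simplification (real hypervisors also account for reservations and expandable pools), but it shows why the parent's 3000 MHz / 6 GB bounds what its children may claim:

```python
class ResourcePool:
    """Toy resource pool: children are carved from the parent's capacity."""
    def __init__(self, name, cpu_mhz, mem_gb):
        self.name, self.cpu_mhz, self.mem_gb = name, cpu_mhz, mem_gb
        self.children = []

    def free(self):
        # Capacity not yet handed to child pools or VMs
        return (self.cpu_mhz - sum(c.cpu_mhz for c in self.children),
                self.mem_gb - sum(c.mem_gb for c in self.children))

    def add_child(self, child):
        free_cpu, free_mem = self.free()
        if child.cpu_mhz > free_cpu or child.mem_gb > free_mem:
            raise ValueError(f"{self.name}: not enough resources for {child.name}")
        self.children.append(child)
        return child

machine1 = ResourcePool("Machine 1", cpu_mhz=3000, mem_gb=6)
eng = machine1.add_child(ResourcePool("Engineering Pool", 1000, 2))
fin = machine1.add_child(ResourcePool("Finance Pool", 1000, 2))
machine1.add_child(ResourcePool("Marketing-Production VM", 500, 1))
eng.add_child(ResourcePool("Engineering-Test VM", 500, 1))
eng.add_child(ResourcePool("Engineering-Production VM", 500, 1))
assert machine1.free() == (500, 1)  # 500 MHz and 1 GB still unallocated
```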
Share, Limit, and Reservation

• Parameters that control the resources consumed by a child resource pool or a virtual
machine (VM) are as follows:
• Share
• Amount of CPU or memory resources a VM or a child resource pool can have
with respect to its parent’s total resources
• Limit
• Maximum amount of CPU and memory a VM or a child resource pool can
consume
• Reservation
• Amount of CPU and memory reserved for a VM or a child resource pool
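A single-pass sketch of how the three parameters interact when CPU is divided up. The numbers are illustrative, and real schedulers iterate so that capacity freed by a limit cap is redistributed to other VMs; this sketch stops after one pass:

```python
def allocate(total_mhz, vms):
    """Give each VM its reservation, then split the spare CPU by shares,
    capping each VM at its limit (single-pass simplification)."""
    alloc = {v["name"]: v["reservation"] for v in vms}  # reservations first
    spare = total_mhz - sum(alloc.values())
    total_shares = sum(v["shares"] for v in vms)
    for v in vms:
        extra = spare * v["shares"] / total_shares      # proportional to shares
        alloc[v["name"]] = min(v["reservation"] + extra, v["limit"])
    return alloc

vms = [
    {"name": "prod", "reservation": 500, "limit": 2000, "shares": 2},
    {"name": "test", "reservation": 0,   "limit": 1000, "shares": 1},
]
a = allocate(3000, vms)
assert a["prod"] == 2000                  # entitled to more, but capped by its limit
assert abs(a["test"] - 2500 / 3) < 1e-9   # reservation plus its share of the spare
```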
Optimizing CPU Resources

• Modern CPUs are equipped with multiple cores and hyper-threading
  • Multi-core processors have multiple processing units (cores) in a single CPU
  • Hyper-threading makes a physical CPU appear as two or more logical CPUs
• Allocating CPU resources efficiently and fairly is critical
• The hypervisor schedules virtual CPUs on the physical CPUs
• Hypervisors support multi-core, hyper-threading, and CPU load-balancing features to optimize CPU resources
Multi-core Processors

• The hypervisor schedules the virtual CPUs of VMs (with one, two, or four CPUs) onto physical threads, cores, and sockets

[Diagram: a VM with one CPU on a dual-socket, single-core system; a VM with two CPUs on a single-socket, dual-core system; a VM with four CPUs on a single-socket, quad-core system.]
Hyper-threading

• Makes a physical CPU appear as two logical CPUs (LCPUs)
• Enables the operating system (OS) to schedule two or more threads simultaneously
• The two LCPUs share the same physical resources
  • While the current thread is stalled, the CPU can execute another thread
• A hypervisor running on a hyper-threading-enabled CPU provides improved performance and utilization

[Diagram: VMs with one or two CPUs scheduled onto a single-socket, dual-core system with hyper-threading; each core presents two LCPUs running threads 1 and 2.]
Optimizing Memory Resources

• The hypervisor manages a machine's physical memory
  • Part of this memory is used by the hypervisor itself
  • The rest is available for virtual machines (VMs)
• VMs can be configured with more memory than is physically available, called 'memory overcommitment'
• Memory optimization is done to allow overcommitment
• Memory management techniques include transparent page sharing, memory ballooning, and memory swapping
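Of the three techniques, transparent page sharing is the easiest to sketch: the hypervisor hashes page contents and backs identical guest pages with a single physical copy. This is a simplification; real implementations verify candidate matches byte-for-byte and break sharing on write (copy-on-write):

```python
import hashlib

def share_pages(vm_pages):
    """Back identical guest pages across VMs with a single physical copy."""
    store = {}    # content hash -> the one physical copy
    mapping = {}  # (vm, page number) -> hash of its backing copy
    for vm, pages in vm_pages.items():
        for i, page in enumerate(pages):
            h = hashlib.sha256(page).hexdigest()
            store.setdefault(h, page)  # duplicates collapse onto the first copy
            mapping[(vm, i)] = h
    return store, mapping

guest_pages = {
    "vm1": [b"\x00" * 4096, b"kernel code"],
    "vm2": [b"\x00" * 4096, b"app data"],  # the zero page is shared with vm1
}
store, mapping = share_pages(guest_pages)
assert len(store) == 3                             # 4 guest pages, 3 physical copies
assert mapping[("vm1", 0)] == mapping[("vm2", 0)]  # both point at the zero page
```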
Memory Ballooning

• While there is no memory shortage, the balloon remains uninflated
• Under memory pressure:
  1. Memory shortage: the balloon inflates
  2. The balloon driver demands memory from the guest operating system (OS)
  3. The guest OS forces pages out
  4. The hypervisor reclaims the memory
• When the pressure subsides:
  1. Memory shortage resolved: the balloon deflates
  2. The driver relinquishes the memory
  3. The guest OS can use the pages again
  4. The hypervisor grants the memory back
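The inflate/deflate handshake above can be sketched as a toy balloon driver (the MB figures are illustrative):

```python
class BalloonDriver:
    """Toy sketch of the ballooning handshake between hypervisor and guest OS."""
    def __init__(self, guest_free_mb):
        self.guest_free_mb = guest_free_mb
        self.balloon_mb = 0  # uninflated while there is no memory shortage

    def inflate(self, need_mb):
        # Driver demands memory from the guest OS (which pages out if needed);
        # the hypervisor then reclaims the pinned pages.
        grabbed = min(need_mb, self.guest_free_mb)
        self.guest_free_mb -= grabbed
        self.balloon_mb += grabbed
        return grabbed  # memory reclaimed by the hypervisor

    def deflate(self, give_mb):
        # Shortage resolved: hypervisor grants memory back, driver releases pages.
        released = min(give_mb, self.balloon_mb)
        self.balloon_mb -= released
        self.guest_free_mb += released
        return released

d = BalloonDriver(guest_free_mb=1024)
assert d.inflate(256) == 256   # hypervisor reclaims 256 MB
assert d.guest_free_mb == 768
assert d.deflate(256) == 256   # guest can use the pages again
```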
Memory Swapping

• Each powered-on virtual machine (VM) needs its own swap file
• Created when the VM is powered-on
• Deleted when the VM is powered-off
• Swap file size is equal to the difference between the memory limit and the VM memory
reservation
• Hypervisor swaps out the VM’s memory content if memory is scarce
• Swapping is the last option because it causes notable performance impact
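The swap-file sizing rule above is simple arithmetic; a sketch with illustrative numbers:

```python
def swap_file_size(limit_mb, reservation_mb):
    """Per-VM swap file = memory limit minus memory reservation."""
    if reservation_mb > limit_mb:
        raise ValueError("reservation cannot exceed limit")
    return limit_mb - reservation_mb

# A VM limited to 4096 MB with 1024 MB reserved gets a 3072 MB swap file;
# fully reserved memory (reservation == limit) needs no swap file at all.
assert swap_file_size(4096, 1024) == 3072
assert swap_file_size(2048, 2048) == 0
```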
Physical to Virtual Machine (P2V) Conversion

P2V conversion is the process through which physical machines are converted into virtual machines (VMs).

• Clones data from the physical machine's disk to the VM disk
• Performs system reconfiguration of the destination VM, such as:
  • Changing the IP address and computer name
  • Installing the device drivers required for the VM to boot
Benefits of P2V Converter

• Reduces the time needed to set up a new virtual machine (VM)
• Enables migration of a legacy machine to new hardware without reinstalling the operating system (OS) or applications
• Performs migration across heterogeneous hardware


Components of P2V Converter
• There are three key components:
• Converter server
• Is responsible for controlling conversion process
• Is used for hot conversion only (when source is running its OS)
• Pushes and installs agent on the source machine
• Converter agent
• Is responsible for performing the conversion
• Is used in hot mode only
• Is installed on physical machine to convert it to virtual machine (VM)
• Converter Boot CD
• A bootable CD that contains its own operating system (OS) and the converter application
• The converter application is used to perform cold conversion
Conversion Options

• Hot conversion
• Occurs while physical machine is running
• Performs synchronization
• Copies blocks that were changed during the initial cloning period
• Performs power off at source and power on at target virtual machine (VM)
• Changes IP address and machine name of the selected machine, if both
machines must co-exist on the same network
• Cold conversion
• Occurs while physical machine is not running OS and application
• Boots the physical machine using converter boot CD
• Creates consistent copy of the physical machine
Hot Conversion Process

The converter server runs the converter software; the source is a powered-on physical machine, and the destination is a physical machine running the hypervisor.

• Step 1: The converter server installs the agent on the source physical machine
• Step 2: The agent takes a snapshot of the source volume
• Step 3: The converter creates a VM on the destination machine
• Step 4: The agent clones the source disk (snapshot) to the VM disk
Hot Conversion Process (contd.)

• Step 5: The agent synchronizes the changed blocks and reconfigures the VM
• Step 6: The VM is ready to run
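The six hot-conversion steps can be walked through with small stub classes. The class and method names are illustrative, not a real converter API; the point is that blocks changed after the snapshot are caught by the synchronization step:

```python
class Source:
    """Powered-on source physical machine (disk modelled as a list of blocks)."""
    def __init__(self, disk): self.disk = disk
    def install_agent(self): return Agent(self)

class Agent:
    def __init__(self, src): self.src = src
    def snapshot_volume(self): return list(self.src.disk)  # point-in-time copy
    def blocks_changed_since(self, snap):
        return {i: b for i, b in enumerate(self.src.disk) if snap[i] != b}

class DestinationVM:
    def __init__(self): self.disk, self.configured = [], False
    def clone_disk(self, snap): self.disk = list(snap)
    def apply(self, changed):
        for i, b in changed.items(): self.disk[i] = b
    def reconfigure(self): self.configured = True  # new IP, name, drivers

def hot_convert(source):
    agent = source.install_agent()               # Step 1: server installs agent
    snap = agent.snapshot_volume()               # Step 2: snapshot source volume
    vm = DestinationVM()                         # Step 3: create VM on destination
    vm.clone_disk(snap)                          # Step 4: clone source disk
    source.disk[1] = "B'"                        # (source keeps running and changes)
    vm.apply(agent.blocks_changed_since(snap))   # Step 5: synchronize changed blocks
    vm.reconfigure()                             #         and reconfigure the VM
    return vm                                    # Step 6: VM is ready to run

vm = hot_convert(Source(["A", "B", "C"]))
assert vm.disk == ["A", "B'", "C"] and vm.configured
```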
Cold Conversion Process

The destination is a physical machine running the hypervisor.

• Step 1: Boot the source physical machine with the converter boot CD
• Step 2: The converter creates a VM on the destination machine
Cold Conversion Process (contd.)

• Step 3: The converter clones the source disk to the VM disk
• Step 4: It installs the drivers required to allow the OS to boot on the VM (reconfiguration)
• Step 5: The VM is ready to run
Storage Virtualization

Storage virtualization is the process of masking the underlying complexity of physical storage resources and presenting a logical view of these resources to compute systems.

• Logical-to-physical storage mapping is performed by the virtualization layer
• The virtualization layer abstracts the identity of physical storage devices
  • Creates a storage pool from multiple, heterogeneous storage arrays
• Virtual volumes are created from the storage pools and assigned to the compute systems
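A sketch of the pooling idea: capacity from heterogeneous arrays is aggregated, and a virtual volume is carved from the pool as a list of extents; the compute system sees one volume, not the arrays behind it. The greedy placement (and the lack of rollback on failure) are simplifications:

```python
class StoragePool:
    """Aggregate heterogeneous arrays; carve virtual volumes from the pool."""
    def __init__(self, arrays):
        self.free = dict(arrays)  # array name -> free GB

    def create_volume(self, size_gb):
        # Spread the volume's extents across whatever arrays have space.
        extents, needed = [], size_gb
        for name in sorted(self.free):
            take = min(needed, self.free[name])
            if take:
                extents.append((name, take))
                self.free[name] -= take
                needed -= take
        if needed:
            raise ValueError("pool exhausted")  # no rollback in this sketch
        return extents

pool = StoragePool({"array-A": 100, "array-B": 50})
vol = pool.create_volume(120)  # spans both arrays, invisibly to the host
assert sum(gb for _, gb in vol) == 120
assert pool.free == {"array-A": 0, "array-B": 30}
```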
Benefits of Storage Virtualization

• Adds or removes storage without any downtime


• Increases storage utilization thereby reducing TCO
• Provides non-disruptive data migration between storage devices
• Supports heterogeneous, multi-vendor storage platforms
• Simplifies storage management
Storage Virtualization at Different Layers

• Compute layer: storage provisioning for VMs
• Network layer: block-level virtualization; file-level virtualization
• Storage layer: virtual provisioning; automated storage tiering
Storage for Virtual Machines

• VMs are stored as a set of files on storage space available to the hypervisor
• A 'virtual disk file' represents a virtual disk used by a VM to store its data
  • The size of the virtual disk file represents the storage space allocated to the virtual disk
• VMs remain unaware of
  • The total space available to the hypervisor
  • The underlying storage technologies (e.g., FC storage on an FC SAN with VMFS, iSCSI, or NAS over an IP network with NFS)

File System for Managing VM Files

• The hypervisor uses two file systems to manage the VM files
  • The hypervisor's native file system, called the Virtual Machine File System (VMFS)
  • The Network File System (NFS), such as a NAS file system
Network Virtualization

Network virtualization is the process of logically segmenting or grouping physical network(s) and making them operate as single or multiple independent network(s) called "virtual network(s)".

• Enables virtual networks to share network resources
• Allows communication between nodes in a virtual network without routing of frames
• Enforces routing for communication between virtual networks
• Restricts management traffic, including network broadcasts, from propagating to other virtual networks
• Enables functional grouping of nodes in a virtual network
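These rules can be sketched as a membership table: nodes in the same virtual network switch frames directly, traffic between virtual networks must be routed, and broadcasts never cross the boundary (a toy model, not any particular VLAN implementation):

```python
class VirtualNetworks:
    """Toy model of virtual network segmentation on a shared physical network."""
    def __init__(self):
        self.membership = {}  # node -> virtual network id

    def assign(self, node, vnet):
        self.membership[node] = vnet

    def can_switch_directly(self, a, b):
        # Same virtual network: frames are switched without routing
        return self.membership.get(a) == self.membership.get(b)

    def broadcast(self, sender):
        # Broadcast is confined to the sender's virtual network
        vnet = self.membership[sender]
        return [n for n, v in self.membership.items() if v == vnet and n != sender]

net = VirtualNetworks()
for node, vnet in [("hr-1", "HR"), ("hr-2", "HR"), ("eng-1", "ENG")]:
    net.assign(node, vnet)
assert net.can_switch_directly("hr-1", "hr-2")
assert not net.can_switch_directly("hr-1", "eng-1")  # would need a router
assert net.broadcast("hr-1") == ["hr-2"]             # eng-1 never sees it
```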
Network Virtualization in a VDC (Virtualized Data Center)

• Involves virtualizing both the physical and the VM networks
• Consists of the following physical components: network adapters (PNICs, i.e. physical NICs), switches, routers, bridges, repeaters, and hubs
• Provides connectivity
  • Among physical servers running the hypervisor
  • Between physical servers and clients
  • Between physical servers and storage systems (storage arrays)
Benefits of Network Virtualization

• Enhances security
  • Restricts access to nodes in a virtual network from another virtual network
  • Isolates sensitive data of one virtual network from another
• Enhances performance
  • Restricts network broadcasts and improves virtual network performance
• Improves manageability
  • Allows configuring virtual networks from a centralized management workstation using management software
  • Eases grouping and regrouping of nodes
• Improves utilization and reduces CAPEX
  • Enables multiple virtual networks to share the same physical network, which improves utilization of network resources
  • Reduces the need to set up separate physical networks for different node groups
Components of VDC Network Infrastructure

• The VDC network infrastructure includes both virtual and physical network components, connected to each other to enable network traffic flow
• Virtual NIC
  • Connects VMs to the VM network
  • Sends/receives VM traffic to/from the VM network
• Virtual HBA (Host Bus Adapter)
  • Enables a VM to access an FC (Fibre Channel) RDM disk/LUN assigned to the VM
• Virtual switch
  • An Ethernet switch that forms the VM network
  • Provides connections to virtual NICs and forwards VM traffic
  • Provides connections to the hypervisor kernel and directs hypervisor traffic: management, storage, VM migration
• Physical adapter: NIC, HBA, CNA (Converged Network Adapter)
  • Connects physical servers to the physical network
  • Forwards VM and hypervisor traffic to/from the physical network
• Physical switch, router
  • Forms the physical network that supports Ethernet/FC/iSCSI/FCoE
  • Provides connections among physical servers, between physical servers and storage systems, and between physical servers and clients
Virtual Network Component: Virtual NIC

• Connects VMs to virtual switch


• Forwards Ethernet frames to virtual switch
• Has unique MAC and IP addresses
• Supports Ethernet standards similar to physical NIC
Overview of Desktop and Application Virtualization

A traditional desktop stack has tight dependencies between its layers: hardware, operating system, application, and user state (data and settings). Virtualization breaks the dependencies between these layers:
• Application virtualization isolates the application from the OS and hardware
• Desktop virtualization isolates the hardware from the OS, applications, and user state
Desktop Virtualization

Desktop virtualization is a technology which enables detachment of the user state, the operating system (OS), and the applications from endpoint devices.

• Enables organizations to host and centrally manage desktops
  • Desktops run as virtual machines within the VDC
  • They may be accessed over LAN/WAN
  • Endpoint devices may be thin clients or PCs
Benefits of Desktop Virtualization

• Enablement of thin clients


• Improved data security
• Simplified data backup
• Simplified PC maintenance
• Flexibility of access
Desktop Virtualization Techniques

• Technique 1: Remote Desktop Services(RDS)


• Technique 2: Virtual Desktop Infrastructure (VDI)
• Desktop virtualization techniques provide ability to centrally host and manage
desktop environments
• Deliver them remotely to the user’s endpoint devices
Remote Desktop Services
• RDS is traditionally known as terminal services
• A terminal service runs on top of a Windows
installation
 Provides individual sessions to client systems
 Clients receive visuals of the desktop
 Resource consumption takes place on the server
Benefits of Remote Desktop Services

• Rapid application delivery


• Applications are installed on the server and accessed from there
• Improved security
• Applications and data are stored in the server
• Centralized management
• Low-cost technology when compared to VDI
Virtual Desktop Infrastructure (VDI)

• VDI involves hosting desktops which run as VMs on servers in the VDC
  • Each desktop has its own OS and applications installed
• The user has full access to the resources of the virtualized desktop

VDI: Components

• Endpoint devices (PCs, notebooks, thin clients)
• VM hosting/execution servers
• Connection broker
• Shared storage
How does this work?

[Diagram: a sample web application running inside your VM. A Node.js server (HW1.js, using require('http') and http.createServer) talks to Amazon SimpleDB and serves a web page (home.ejs) over the Internet; in the browser, a script (app.js) accesses the DOM, e.g. function foo() { $("#id").html("x"); }.]
Use Case Scenario for Virtualization

• Suppose Admin has a physical machine with 4 CPUs and 8 GB of memory, and three customers:
  • Cust 1 wants a machine with 1 CPU and 3 GB of memory
  • Cust 2 wants 2 CPUs and 1 GB of memory
  • Cust 3 wants 1 CPU and 4 GB of memory
• What should Admin do?
Resource Allocation in Virtualization

• Admin can sell each customer a virtual machine (VM) with the requested resources, with a virtual machine monitor running on the physical machine
• From each customer's perspective, it appears as if they had a physical machine all to themselves (isolation)
How does it work?

• Resources (CPU, memory, ...) are virtualized
• The VMM ("hypervisor") has translation tables that map requests for virtual resources to physical resources. For example, for memory:

  VM | Virtual cells | Physical cells
  1  | 0-99          | 0-99
  1  | 300-399       | 100-199
  2  | 0-99          | 300-399
  2  | 200-299       | 500-599
  2  | 600-699       | 400-499

• Example: VM 1 accesses memory cell #323; the VMM maps this to physical cell #123
• For which resources does this (not) work?
• How do VMMs differ from OS kernels?
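The translation table maps directly to code; a minimal sketch of the memory lookup (VM 1's second range is 300-399, so cell 323 lands at 100 + 23 = 123):

```python
# Each row: (vm, virtual_start, virtual_end, physical_start)
TABLE = [
    (1,   0,  99,   0),
    (1, 300, 399, 100),
    (2,   0,  99, 300),
    (2, 200, 299, 500),
    (2, 600, 699, 400),
]

def translate(vm, virt):
    """Map a VM's virtual memory cell to a physical cell via the VMM's table."""
    for v, lo, hi, phys in TABLE:
        if v == vm and lo <= virt <= hi:
            return phys + (virt - lo)
    # An unmapped access would trap to the VMM in a real system
    raise MemoryError(f"VM {vm}: no mapping for cell {virt}")

assert translate(1, 323) == 123  # the example from the slide
assert translate(2, 250) == 550
```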
Benefit: Migration in Case of Disaster

• What if a machine needs to be shut down?
  • e.g., for maintenance, consolidation, ...
• Admin can migrate the VMs to different physical machines without any customers noticing
Benefit: Time Sharing

• What if Admin gets another customer (Cust 4)?
• Multiple VMs can time-share the existing resources
• Result: Admin has more virtual CPUs and virtual memory than physical resources (but not all can be active at the same time)
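A sketch of the admission decision behind time sharing: total vCPUs may exceed the physical CPUs, up to an overcommit ratio. The 4.0 ratio is an assumed policy knob for illustration, not a standard value:

```python
def can_admit(physical_cpus, vms, max_overcommit=4.0):
    """Admit a set of VMs if their total vCPUs stay within an assumed
    overcommit ratio; time sharing makes the overcommit workable."""
    vcpus = sum(vms.values())  # vms: name -> requested vCPUs
    return vcpus <= physical_cpus * max_overcommit

vms = {"cust1": 1, "cust2": 2, "cust3": 1, "cust4": 4}
assert can_admit(4, vms)       # 8 vCPUs on 4 physical CPUs: fine at 4x
assert not can_admit(1, vms)   # 8 vCPUs on 1 physical CPU exceeds the 4x ratio
```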
Benefit and Challenge: Isolation

• Good: Cust 4 can't access Cust 3's data
• Bad: What if the load suddenly increases?
  • Example: Cust 4's VM shares CPUs with Cust 3's VM, and Cust 3 suddenly starts a large compute job
  • Cust 4's performance may decrease as a result
  • The VMM can move Cust 4's software to a different CPU, or migrate it to a different machine
Recap: Virtualization in the Cloud

• Gives the cloud provider a lot of flexibility
  • Can produce VMs with different capabilities
  • Can migrate VMs if necessary (e.g., for maintenance)
  • Can increase load by overcommitting resources
• Provides security and isolation
  • Programs in one VM cannot influence programs in another
• Convenient for users
  • Complete control over the virtual 'hardware' (can install their own operating system, own applications, ...)
• But: performance may be hard to predict
  • Load changes in other VMs on the same physical machine may affect the performance seen by the customer
Thank you
Q&A
