


5

VIRTUAL MACHINES AND MOBILE OS


Virtual Machines – History, Benefits and Features, Building Blocks, Types of Virtual
Machines and their Implementations, Virtualization and Operating-System Components;
Mobile OS - iOS and Android.

5.1 VIRTUAL MACHINES – HISTORY, BENEFITS AND FEATURES

5.1.1 What is a Virtual Machine

➢ A virtual machine (VM) is a virtual environment that functions as a virtual computer
system with its own CPU, memory, network interface, and storage, created on a
physical hardware system.
➢ A piece of software called a hypervisor, or virtual machine manager, lets you run
different operating systems on different virtual machines at the same time.
5.1.2 HISTORY

➢ Virtual machines first appeared commercially on IBM mainframes in 1972.
Virtualization was provided by the IBM VM operating system. This system has evolved
and is still available.
➢ IBM VM/370 divided a mainframe into multiple virtual machines, each running its own
operating system.
➢ A major difficulty with the VM approach involved disk systems.
❖ Suppose that the physical machine had three disk drives but wanted to support seven
virtual machines. Clearly, it could not allocate a disk drive to each virtual machine.
The solution was to provide virtual disks—termed minidisks in IBM’s VM
operating system.
➢ The minidisks were identical to the system’s hard disks in all respects except size. The
system implemented each minidisk by allocating as many tracks on the physical disks
as the minidisk needed.
➢ Once the virtual machines were created, users could run any of the operating systems
or software packages that were available on the underlying machine.

5.1.3 The virtualization requirements

➢ Fidelity.
❖ A VMM provides an environment for programs that is essentially identical to the
original machine.
➢ Performance.
❖ Programs running within that environment show only minor performance
decreases.
➢ Safety.
❖ The VMM is in complete control of system resources.
➢ By the late 1990s, Intel 80x86 CPUs had become common, fast, and rich in features.
➢ Both Xen and VMware created technologies, still used today, to allow guest operating
systems to run on the 80x86.
➢ Virtualization has expanded to include all common CPUs, many commercial and open-
source tools, and many operating systems.
❖ For example, the open-source VirtualBox project (http://www.virtualbox.org)
provides a program that runs on Intel x86 and AMD 64 CPUs and on Windows,
Linux, macOS, and Solaris host operating systems.
❖ Possible guest operating systems include many versions of Windows, Linux,
Solaris, and BSD, including even MS-DOS and IBM OS/2.

5.2 BENEFITS AND FEATURES

➢ One important advantage of virtualization is that the host system is protected from the
virtual machines, just as the virtual machines are protected from each other.
➢ A virus inside a guest operating system might damage that operating system but is
unlikely to affect the host or the other guests.
➢ Since each virtual machine is almost completely isolated from all other virtual
machines, there are almost no protection problems.
➢ A potential disadvantage of isolation is that it can prevent sharing of resources.
➢ Two approaches to providing sharing have been implemented.
❖ First, it is possible to share a file-system volume and thus to share files.
❖ Second, it is possible to define a network of virtual machines, each of which can
send information over the virtual communications network.
➢ One feature common to most virtualization implementations is the ability to freeze, or
suspend, a running virtual machine.
➢ Many operating systems provide that basic feature for processes, but VMMs go one
step further and allow copies and snapshots to be made of the guest. The copy can be
used to create a new VM or to move a VM from one machine to another with its current
state intact.
➢ The guest can then resume where it was, as if on its original machine, creating a clone.
The snapshot records a point in time, and the guest can be reset to that point if necessary
(for example, if a change was made but is no longer wanted).
➢ A virtual machine system is a perfect vehicle for operating-system research and
development.
➢ The operating system runs on and controls the entire machine, so the system must be
stopped and taken out of use while changes are made and tested. This period is
commonly called system-development time.
➢ A virtual-machine system can eliminate much of this problem. System
programmers are given their own virtual machine, and system development is done on
the virtual machine instead of on a physical machine.
➢ Another advantage of virtual machines for developers is that multiple operating systems
can run concurrently on the developer’s workstation. This virtualized workstation
allows for rapid porting and testing of programs in varying environments.
➢ In addition, multiple versions of a program can run, each in its own isolated operating
system, within one system.
➢ A major advantage of virtual machines in production data-center use is system
consolidation, which involves taking two or more separate systems and running them
in virtual machines on one system. Such physical-to-virtual conversions result in
resource optimization, since many lightly used systems can be combined to create one
more heavily used system.
➢ A virtual environment might include 100 physical servers, each running 20 virtual
servers. Without virtualization, 2,000 servers would require several system
administrators. With virtualization and its tools, the same work can be managed by one
or two administrators.
✓ One of the tools that make this possible is templating, in which one standard
virtual machine image, including an installed and configured guest operating
system and applications, is saved and used as a source for multiple running
VMs.
✓ Other features include managing the patching of all guests, backing up and
restoring the guests, and monitoring their resource use.
➢ Virtualization can improve not only resource utilization but also resource management.
➢ Some VMMs include a live migration feature that moves a running guest from one
physical server to another without interrupting its operation or active network
connections.
➢ Virtualization has laid the foundation for many other advances in computer facility
implementation, management, and monitoring.
➢ Cloud computing, for example, is made possible by virtualization in which resources
such as CPU, memory, and I/O are provided as services to customers using Internet
technologies.
5.3 BUILDING BLOCKS

➢ The ability to virtualize depends on the features provided by the CPU. If the features
are sufficient, then it is possible to write a VMM that provides a guest environment.
Otherwise, virtualization is impossible.
➢ VMMs use several techniques to implement virtualization, including trap-and-emulate
and binary translation.
➢ The important concept found in most virtualization options is the implementation of a
virtual CPU (VCPU).
➢ The VCPU does not execute code. Rather, it represents the state of the CPU as the guest
machine believes it to be. For each guest, the VMM maintains a VCPU representing
that guest’s current CPU state.
➢ When the guest is context-switched onto a CPU by the VMM, information from the
VCPU is used to load the right context, much as a general-purpose operating system
would use the PCB.
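The VCPU-as-saved-state idea above can be sketched in a few lines. This is an illustrative model only, not any real VMM's data structures; the class and field names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class VCPU:
    """Software record of the CPU state a guest believes it has (like a PCB)."""
    mode: str = "user"          # virtual user/kernel mode
    pc: int = 0                 # guest program counter
    regs: dict = field(default_factory=lambda: {"r0": 0, "r1": 0})

class VMM:
    def __init__(self):
        self.vcpus = {}         # one VCPU record per guest

    def create_guest(self, name):
        self.vcpus[name] = VCPU()

    def context_switch(self, name):
        # Load the saved VCPU state onto the (simulated) physical CPU,
        # much as an OS loads a process's context from its PCB.
        v = self.vcpus[name]
        return {"mode": v.mode, "pc": v.pc, "regs": dict(v.regs)}

vmm = VMM()
vmm.create_guest("guest0")
vmm.vcpus["guest0"].pc = 0x1000     # guest runs; its state is tracked in the VCPU
state = vmm.context_switch("guest0")
```

Note that the VCPU here never executes anything: it only records state that the VMM uses when the guest is switched onto a real CPU.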

5.3.1 Trap-and-Emulate

➢ On a typical dual-mode system, the virtual machine guest can execute only in user mode
(unless extra hardware support is provided). The kernel, of course, runs in kernel mode,
and it is not safe to allow user-level code to run in kernel mode.
➢ Just as the physical machine has two modes, so must the virtual machine. Consequently,
we must have a virtual user mode and a virtual kernel mode, both of which run in
physical user mode.
➢ Those actions that cause a transfer from user mode to kernel mode on a real machine
(such as a system call, an interrupt, or an attempt to execute a privileged instruction)
must also cause a transfer from virtual user mode to virtual kernel mode in the virtual
machine.
➢ When the kernel in the guest attempts to execute a privileged instruction, that is an error
(because the system is in user mode) and causes a trap to the VMM in the real machine.
The VMM gains control and executes (or “emulates”) the action that was attempted by
the guest kernel on the part of the guest. It then returns control to the virtual machine.
This is called the trap-and-emulate method.
Figure: 5.1 Trap-and-emulate virtualization implementation.
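The trap-and-emulate flow can be modeled as a small dispatch loop. This is a toy sketch, not real VMM code: the instruction names and the `vmm_emulate` handler are invented for illustration.

```python
# Toy trap-and-emulate loop. The whole guest (virtual user AND virtual
# kernel mode) runs in physical user mode, so privileged instructions
# trap; the VMM gains control and emulates them on the guest's behalf.

PRIVILEGED = {"disable_interrupts", "load_page_table"}

def vmm_emulate(insn, vcpu):
    # The VMM performs the privileged action for the guest kernel.
    if vcpu["mode"] != "kernel":
        return f"trap: {insn} -> fault delivered to guest"
    vcpu[insn] = "emulated"
    return f"trap: {insn} -> emulated by VMM"

def run_guest(program, vcpu):
    trace = []
    for insn in program:
        if insn in PRIVILEGED:
            # Hardware raises a trap (we are really in user mode).
            trace.append(vmm_emulate(insn, vcpu))
        else:
            trace.append(f"native: {insn}")   # runs directly on the CPU
    return trace

vcpu = {"mode": "kernel"}   # guest is in *virtual* kernel mode
trace = run_guest(["add", "disable_interrupts", "sub"], vcpu)
```

The key point the sketch shows: nonprivileged instructions run natively at full speed, while each privileged instruction costs a trap plus emulation, which is where the overhead discussed below comes from.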

➢ With privileged instructions, time becomes an issue. All nonprivileged instructions run
natively on the hardware, providing the same performance for guests as native applications.
➢ Privileged instructions create extra overhead, however, causing the guest to run more
slowly than it would natively.
➢ In addition, the CPU is being multiprogrammed among many virtual machines, which can
further slow down the virtual machines in unpredictable ways.
➢ This problem has been approached in various ways.
❖ IBM VM, for example, allows normal instructions for the virtual machines to
execute directly on the hardware.
❖ Only the privileged instructions (needed mainly for I/O) must be emulated and hence
execute more slowly.
➢ In general, with the evolution of hardware, the performance of trap-and-emulate
functionality has been improved, and cases in which it is needed have been reduced.
❖ For example, many CPUs now have extra modes added to their standard dual-mode
operation.
❖ The VCPU need not keep track of what mode the guest operating system is in,
because the physical CPU performs that function.
❖ In fact, some CPUs provide guest CPU state management in hardware, so the VMM
need not supply that functionality, removing the extra overhead.
5.3.2 Binary Translation

➢ Binary translation is a software virtualization technique that makes use of an
interpreter. It translates the guest's binary code into another binary, leaving
nontrapping instructions to run natively.
The basic steps are as follows:
➢ If the guest VCPU is in user mode, the guest can run its instructions natively on a physical
CPU.
➢ If the guest VCPU is in kernel mode, then the guest believes that it is running in kernel
mode. The VMM examines every instruction the guest executes in virtual kernel mode by
reading the next few instructions that the guest is going to execute, based on the guest’s
program counter.
➢ Instructions other than special instructions are run natively. Special instructions are
translated into a new set of instructions that perform the equivalent task—for example,
changing the flags in the VCPU.

Figure: 5.2 Binary translation virtualization implementation.

➢ Binary translation is shown in the figure. It is implemented by translation code within the
VMM. The code reads native binary instructions dynamically from the guest, on demand,
and generates native binary code that executes in place of the original code.
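The steps above can be sketched as a minimal translator. This is purely illustrative: the instruction names and the rewrite table are invented, and a real translator works on actual machine-code bytes, caches translated blocks, and handles control flow.

```python
# Toy binary translation: when the guest VCPU is in virtual kernel mode,
# the VMM reads the guest's upcoming instructions and rewrites "special"
# (nontrapping, privilege-sensitive) ones; everything else passes through.

SPECIAL = {
    # hypothetical rewrite: a flags-popping instruction becomes a safe
    # sequence that updates the VCPU's flags instead of the real CPU's
    "popf": ["load_vcpu_flags"],
}

def translate(block, vcpu_mode):
    if vcpu_mode == "user":
        # Guest user-mode code runs natively on a physical CPU, untranslated.
        return list(block)
    out = []
    for insn in block:
        out.extend(SPECIAL.get(insn, [insn]))   # rewrite only special instructions
    return out

kernel_code = translate(["mov", "popf", "add"], "kernel")
user_code = translate(["mov", "popf", "add"], "user")
```

The contrast with trap-and-emulate is visible here: instead of letting `popf` run and trap (it would not trap, which is the problem), the VMM rewrites it before execution.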
5.3.3 Hardware Assistance

➢ Without some level of hardware support, virtualization would be impossible. The more
hardware support available within a system, the more feature-rich and stable the virtual
machines can be and the better they can perform.
➢ In the Intel x86 CPU family, Intel added new virtualization support (the VT-x instructions)
in successive generations beginning in 2005. With this support, binary translation is no
longer needed.
➢ In fact, all major general-purpose CPUs now provide extended hardware support for
virtualization. For example, AMD virtualization technology (AMD-V) has appeared in
several AMD processors starting in 2006.
➢ It defines two new modes of operation—host and guest—thus moving from a dual-mode
to a multimode processor.
➢ The VMM can enable host mode, define the characteristics of each guest virtual machine,
and then switch the system to guest mode, passing control of the system to a guest operating
system that is running in the virtual machine.
➢ In guest mode, the virtualized operating system thinks it is running on native hardware and
sees whatever devices are included in the host’s definition of the guest.
➢ If the guest tries to access a virtualized resource, then control is passed to the VMM to
manage that interaction.
➢ The functionality in Intel VT-x is similar, providing root and nonroot modes, equivalent to
host and guest modes. Both provide guest VCPU state data structures to load and save guest
CPU state automatically during guest context switches.
➢ In addition, virtual machine control structures (VMCSs) are provided to manage guest and
host state, as well as various guest execution controls, exit controls, and information about
why guests exit back to the host.
Fig:5.3 Hardware Support for Virtualization in the Intel x86 Processor

➢ AMD and Intel have also addressed memory management in the virtual environment. With
AMD’s RVI and Intel’s EPT memory-management enhancements, VMMs no longer need
to implement nested page tables (NPTs) in software.
➢ All modern x86 CPUs include a memory management unit (MMU) and a translation
lookaside buffer (TLB) to optimize virtual memory performance.
➢ However, in a virtual execution environment, virtual memory virtualization involves
sharing the physical system memory in RAM and dynamically allocating it to the physical
memory of the VMs.
➢ That means a two-stage mapping process should be maintained by the guest OS and the
VMM, respectively: virtual memory to physical memory and physical memory to machine
memory.
➢ Furthermore, MMU virtualization should be supported, which is transparent to the guest
OS.
➢ The guest OS continues to control the mapping of virtual addresses to the physical memory
addresses of VMs.
➢ But the guest OS cannot directly access the actual machine memory. The VMM is
responsible for mapping the guest physical memory to the actual machine memory.
Fig:5.4 Virtual Memory Address Translation

➢ When a virtual address needs to be translated, the CPU will first look for the L4 page table
pointed to by Guest CR3.
➢ Since the address in Guest CR3 is a physical address in the guest OS, the CPU needs to
convert the Guest CR3 GPA to the host physical address (HPA) using EPT.
➢ In this procedure, the CPU will check the EPT TLB to see if the translation is there. If there
is no required translation in the EPT TLB, the CPU will look for it in the EPT.
➢ If the CPU cannot find the translation in the EPT, an EPT violation exception will be raised.
➢ When the GPA of the L4 page table is obtained, the CPU will calculate the GPA of the L3
page table by using the GVA and the content of the L4 page table.
➢ If the entry corresponding to the GVA in the L4 page table is a page fault, the CPU will
generate a page fault interrupt and will let the guest OS kernel handle the interrupt.
➢ When the GPA of the L3 page table is obtained, the CPU will look in the EPT to get the
HPA of the L3 page table.
➢ To get the HPA corresponding to a GVA, the CPU needs to look for the EPT five times,
and each time, the memory needs to be accessed four times.
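The two-stage mapping described above can be sketched with single-level lookup tables. This is a deliberately simplified model: real hardware walks four-level page tables and consults the EPT at every level, which is where the "five EPT walks, four memory accesses each" cost comes from; here each stage is collapsed to one dictionary lookup.

```python
# Simplified two-stage address translation:
#   guest page table: GVA page -> GPA page  (maintained by the guest OS)
#   EPT:              GPA page -> HPA page  (maintained by the VMM)

PAGE_MASK = 0xFFF                 # 4 KiB pages: low 12 bits are the offset

guest_pt = {0x1000: 0x2000}       # guest maps virtual page 0x1000 to GPA 0x2000
ept      = {0x2000: 0x9000}       # VMM maps GPA 0x2000 to host frame 0x9000

def translate(gva):
    page, off = gva & ~PAGE_MASK, gva & PAGE_MASK
    gpa = guest_pt.get(page)
    if gpa is None:
        # Page fault: delivered to and handled by the guest OS kernel.
        raise RuntimeError("guest page fault")
    hpa = ept.get(gpa)
    if hpa is None:
        # EPT violation: exits to the VMM, which fixes up the EPT.
        raise RuntimeError("EPT violation")
    return hpa | off

hpa = translate(0x1234)           # GVA 0x1234 -> GPA 0x2234 -> HPA 0x9234
```

The sketch also shows who handles each miss: a missing guest-page-table entry is the guest kernel's problem, while a missing EPT entry exits to the VMM, exactly the split of responsibility described above.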
5.4 TYPES OF VIRTUAL MACHINES AND THEIR IMPLEMENTATIONS

5.4.1 The Virtual Machine Life Cycle


➢ Whatever the hypervisor type, at the time a virtual machine is created, its creator gives
the VMM certain parameters.
➢ These parameters usually include:
❖ the number of CPUs,
❖ amount of memory,
❖ networking details,
❖ and storage details
These are the parameters the VMM takes into account when creating the guest.
➢ For example, a user might want to create a new guest with two virtual CPUs, 4 GB of
memory, 10 GB of disk space, one network interface that gets its IP address via DHCP,
and access to the DVD drive.
➢ The VMM then creates the virtual machine with those parameters.
➢ In the case of a type 0 hypervisor, the resources are usually dedicated.
➢ For other hypervisor types, the resources are dedicated or virtualized, depending on the
type.
➢ Finally, when the virtual machine is no longer needed, it can be deleted. When this
happens, the VMM first frees up any used disk space and then removes the
configuration associated with the virtual machine, essentially forgetting the virtual
machine.
➢ Creating a virtual machine from an existing one can be as easy as clicking the “clone”
button and providing a new name and IP address. This ease of creation can lead to
virtual machine sprawl, which occurs when there are so many virtual machines on a
system that their use, history, and state become confusing and difficult to track.
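The life cycle above (create with parameters, clone, delete) can be modeled in a few lines. The `SimpleVMM` API is entirely hypothetical; real tools such as VirtualBox or libvirt expose equivalent create/clone/delete operations with many more options.

```python
# Toy VM life cycle: create a guest with its resource parameters,
# clone it, and delete it (freeing its resources and configuration).

class SimpleVMM:
    def __init__(self):
        self.vms = {}

    def create(self, name, cpus, mem_gb, disk_gb):
        # The creator supplies the parameters the VMM uses to build the guest.
        self.vms[name] = {"cpus": cpus, "mem_gb": mem_gb, "disk_gb": disk_gb}

    def clone(self, src, new_name):
        # One click: copy an existing guest's configuration. The ease of
        # this step is exactly what leads to virtual machine sprawl.
        self.vms[new_name] = dict(self.vms[src])

    def delete(self, name):
        # Free the disk space, then forget the configuration entirely.
        del self.vms[name]

vmm = SimpleVMM()
vmm.create("guest1", cpus=2, mem_gb=4, disk_gb=10)   # the example from the text
vmm.clone("guest1", "guest2")
vmm.delete("guest1")
```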

5.4.2 Type 0 Hypervisor

➢ Type 0 hypervisors have existed for many years under many names, including
“partitions” and “domains.”
➢ They are a hardware feature, and that brings its own positives and negatives.
➢ Operating systems need do nothing special to take advantage of their features.
➢ The VMM itself is encoded in the firmware and loaded at boot time. In turn, it loads
the guest images to run in each partition.
➢ The feature set of a type 0 hypervisor tends to be smaller than those of the other types
because it is implemented in hardware.
❖ For example, a system might be split into four virtual systems, each with
dedicated CPUs, memory, and I/O devices.
❖ Each guest believes that it has dedicated hardware because it does, simplifying
many implementation details.
➢ I/O presents some difficulty, because it is not easy to dedicate I/O devices to guests if
there are not enough.
➢ In these cases, the hypervisor manages shared access or grants all devices to a control
partition.
➢ In the control partition, a guest operating system provides services (such as networking)
via daemons to other guests, and the hypervisor routes I/O requests appropriately.
➢ Some type 0 hypervisors are even more sophisticated and can move physical CPUs and
memory between running guests. In these cases, the guests are paravirtualized, aware
of the virtualization and assisting in its execution.
❖ For example, a guest must watch for signals from the hardware or VMM indicating
that a hardware change has occurred, probe its hardware devices to detect the change,
and add or subtract CPUs or memory from its available resources.
➢ A type 0 hypervisor can run multiple guest operating systems (one in each hardware
partition).
➢ All of those guests, because they are running on raw hardware, can in turn be VMMs.
➢ Essentially, each guest operating system in a type 0 hypervisor is a native operating
system with a subset of hardware made available to it.
Figure:5.5 Type 0 hypervisor

5.4.3 Type 1 Hypervisor

➢ A type 1 hypervisor acts like a lightweight operating system and runs directly on the
host’s hardware.
➢ The most commonly deployed type of hypervisor is the type 1 or bare-metal hypervisor,
where virtualization software is installed directly on the hardware where the operating
system is normally installed.
➢ Because bare-metal hypervisors are isolated from the attack-prone operating system,
they are extremely secure.
➢ In addition, they generally perform better and more efficiently than hosted hypervisors.
➢ For these reasons, most enterprise companies choose bare-metal hypervisors for data-
center computing needs.
➢ By using type 1 hypervisors, data-center managers can control and manage the
operating systems and applications in new and sophisticated ways.
➢ An important benefit is the ability to consolidate more operating systems and
applications onto fewer systems.
❖ For example, rather than having ten systems running at 10 percent utilization each,
a data center might have one server manage the entire load.
➢ If utilization increases, guests and their applications can be moved to less-loaded
systems live, without interruption of service.
➢ Using snapshots and cloning, the system can save the states of guests and duplicate
those states.
➢ Another type of type 1 hypervisor includes various general-purpose operating systems
with VMM functionality.
❖ Here, an operating system such as Red Hat Enterprise Linux, Windows, or Oracle
Solaris performs its normal duties as well as providing a VMM that allows other
operating systems to run as guests.

Fig:5.6 Type 1 Hypervisor

5.4.4 Type 2 Hypervisor


➢ A type 2 hypervisor runs on the operating system of the physical host machine; hence
such hypervisors are also called hosted hypervisors.
➢ These hypervisors are hosted on the operating system, and the hypervisor runs on that
layer as another software to enable virtualization.
5.4.4.1 Components available in Type 2 Hypervisor
➢ A physical server
➢ OS installed on that server hardware (OSes like Windows, Linux, macOS)
➢ Type-2 hypervisor on that OS
➢ Virtual machine instances/guest VMs
Fig: 5.7 Type 2 Hypervisor

➢ These hypervisors are usually used in environments where there are a small number of
servers.
➢ They do not need a separate management console to set up and manage the virtual
machines. These operations can typically be done on the server that has the hypervisor
hosted. This hypervisor is basically treated as an application on your host system.

5.4.4.2 Advantages of Type-2 hypervisor


Simple management:
➢ They essentially act as management consoles. There is no need to install a separate
software package to manage the virtual machines running on type-2 hypervisors.
Useful for testing purposes:
➢ They are convenient for testing any new software or research projects. You can simply
run multiple instances with different OSes to test how the software works in each
environment.
Examples of Type 2 Hypervisors
➢ VMware Workstation Pro/VMware Fusion, Oracle VirtualBox, etc.
5.4.6 Paravirtualization

➢ In paravirtualization, the source code of an OS is modified in order to run as a guest OS
in a virtual machine (VM) environment. Calls to the hardware from the guest OS are
replaced with calls to the VM monitor (VMM).
➢ Para-virtualization needs to modify the guest operating systems.
➢ A paravirtualized VM provides special APIs that may require substantial modifications
in user applications.
➢ Para-virtualization attempts to reduce the virtualization overhead, and thus improve
performance by modifying only the guest OS kernel.

Fig 5.8 Paravirtualization

➢ The figure above illustrates the concept of a paravirtualized VM architecture. The guest
operating systems are paravirtualized.
➢ They are assisted by an intelligent compiler to replace the nonvirtualizable OS instructions
with hypercalls.
➢ The traditional x86 processor offers four instruction execution rings: Rings 0, 1, 2, and 3.
➢ The lower the ring number, the higher the privilege of instruction being executed.
➢ The OS is responsible for managing the hardware and the privileged instructions to execute
at Ring 0, while user-level applications run at Ring 3.
➢ When the x86 processor is virtualized, a virtualization layer is inserted between the
hardware and the OS.
➢ According to the x86 ring definition, the virtualization layer should also be installed at Ring
0.
➢ As shown in the figure above, paravirtualization replaces nonvirtualizable instructions with
hypercalls that communicate directly with the hypervisor or VMM.
➢ However, when the guest OS kernel is modified for virtualization, it can no longer run on
the hardware directly.
➢ The guest OS kernel is modified to replace the privileged and sensitive instructions with
hypercalls to the hypervisor or VMM.
➢ The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0. This implies
that the guest OS may not be able to execute some privileged and sensitive instructions.
➢ The privileged instructions are implemented by hypercalls to the hypervisor. After
replacing the instructions with hypercalls, the modified guest OS emulates the behavior of
the original guest OS.
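The hypercall idea can be contrasted with trap-and-emulate in a short sketch. Everything here is illustrative: the hypercall names and the `Hypervisor` interface are invented, not a real hypercall ABI such as Xen's.

```python
# Toy paravirtualization: the guest kernel's privileged operations are
# rewritten at the source level into explicit hypercalls, so nothing
# needs to trap at run time.

class Hypervisor:
    """Runs at the most privileged level (ring 0 / root mode)."""
    def __init__(self):
        self.state = {}

    def hypercall(self, op, arg):
        # Validate and perform the privileged operation for the guest.
        self.state[op] = arg
        return "ok"

class ParavirtGuestKernel:
    """Modified guest kernel: privileged instructions replaced by hypercalls."""
    def __init__(self, hv):
        self.hv = hv

    def set_page_table(self, base):
        # An unmodified kernel would execute a privileged instruction here
        # (e.g., loading the page-table base register) and either trap or
        # fail silently at ring 1; the paravirtualized kernel instead asks
        # the hypervisor explicitly.
        return self.hv.hypercall("load_page_table", base)

hv = Hypervisor()
guest = ParavirtGuestKernel(hv)
result = guest.set_page_table(0x5000)
```

The design trade-off shown here is the one the text describes: the guest kernel must be modified (it can no longer run on bare hardware), but the explicit hypercall avoids the overhead of trapping and emulating each privileged instruction.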

5.4.7 Programming-Environment Virtualization


➢ Here, a programming language is designed to run within a custom-built virtualized
environment.
❖ For example, Oracle’s Java has many features that depend on its running in the
Java virtual machine (JVM), including specific methods for security and
memory management.
➢ Java programs run within the JVM environment, and the JVM is compiled to be a native
program on systems on which it runs.
❖ This arrangement means that Java programs are written once and then can run
on any system (including all of the major operating systems) on which a JVM
is available.
➢ The same can be said of interpreted languages, which run inside programs that read
each instruction and interpret it into native operations.
5.4.8 Emulation
➢ Emulation is useful when the host system has one system architecture, and the guest
system was compiled for a different architecture.
❖ For example, suppose a company has replaced its outdated computer system
with a new system but would like to continue to run certain important programs
that were compiled for the old system. The programs could be run in an emulator
that translates each of the outdated system’s instructions into the native
instruction set of the new system.
➢ The major challenge of emulation is performance. Instruction-set emulation may run
an order of magnitude slower than native instructions, because it may take ten
instructions on the new system to read, parse, and simulate an instruction from the old
system.
➢ Another challenge for emulator writers is that it is difficult to create a correct emulator
because, in essence, this task involves writing an entire CPU in software.
➢ Many popular video games were written for platforms that are no longer in production.
Users who want to run those games frequently can find an emulator of such a platform
and then run the game unmodified within the emulator.
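A minimal fetch-decode-simulate loop makes the performance argument concrete. The two-instruction "old" ISA here is invented for illustration; a real emulator decodes actual opcodes and models the full CPU, which is why writing a correct one is so hard.

```python
# Toy instruction-set emulator: each instruction compiled for the "old"
# system is read, decoded, and simulated in software on the new system.

def emulate(program):
    regs = {"a": 0, "b": 0}          # the old machine's register file, in software
    for op, *args in program:
        # Every guest instruction costs several host operations:
        # decode the opcode, fetch operands, update simulated state.
        # This multiplication is why emulation can run an order of
        # magnitude slower than native execution.
        if op == "li":               # load immediate: reg <- constant
            regs[args[0]] = args[1]
        elif op == "add":            # add: reg <- reg + reg
            regs[args[0]] += regs[args[1]]
        else:
            raise ValueError(f"unknown opcode: {op}")
    return regs

regs = emulate([("li", "a", 2), ("li", "b", 40), ("add", "a", "b")])
```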
5.4.9 Application Containment
➢ The goal of virtualization in some instances is to provide a method to segregate
applications, manage their performance and resource use, and create an easy way to
start, stop, move, and manage them. In such cases, perhaps full-fledged virtualization
is not needed.
➢ If the applications are all compiled for the same operating system, then we do not need
complete virtualization to provide these features. We can instead use application
containment.
➢ For example, Oracle Solaris has included containers, or zones, that create a virtual layer
between the operating system and the applications. In this system, only one kernel is
installed, and the hardware is not virtualized. Rather, the operating system and its
devices are virtualized, providing processes within a zone with the impression that they
are the only processes on the system.
➢ One or more containers can be created, and each can have its own applications, network
stacks, network address and ports, user accounts, and so on.
➢ CPU and memory resources can be divided among the zones and the system-wide
processes. Each zone, in fact, can run its own scheduler to optimize the performance of
its applications on the allotted resources.
Figure:5.9 Solaris 10 with two zones

➢ The above figure shows a Solaris 10 system with two containers and the standard “global”
user space.
➢ Containers are much lighter weight than other virtualization methods. That is, they use
fewer system resources and are faster to instantiate and destroy, more similar to processes
than virtual machines. For this reason, they are becoming more commonly used, especially
in cloud computing.
➢ FreeBSD was perhaps the first operating system to include a container-like feature (called
“jails”)
➢ Linux added the LXC container feature in 2014. It is now included in the common Linux
distributions via a flag in the clone() system call.
➢ Containers are also easy to automate and manage, leading to orchestration tools like Docker
and Kubernetes.
➢ Orchestration tools are means of automating and coordinating systems and services. Their
aim is to make it simple to run entire suites of distributed applications, just as operating
systems make it simple to run a single program.

5.5 VIRTUALIZATION AND OPERATING-SYSTEM COMPONENTS

➢ In this section, we take a deeper dive into the operating-system aspects of virtualization,
including how the VMM provides core operating-system functions like scheduling, I/O,
and memory management.
5.5.1 CPU Scheduling
➢ A system with virtualization, even a single-CPU system, frequently acts like a
multiprocessor system.
➢ The virtualization software presents one or more virtual CPUs to each of the virtual
machines running on the system and then schedules the use of the physical CPUs among
the virtual machines.
➢ The significant variations among virtualization technologies make it difficult to
summarize the effect of virtualization on scheduling.
❖ The VMM has a number of physical CPUs available and a number of threads to run
on those CPUs. The threads can be VMM threads or guest threads.
❖ Guests are configured with a certain number of virtual CPUs at creation time, and
that number can be adjusted throughout the life of the VM.
❖ When there are enough CPUs to allocate the requested number to each guest, the
VMM can treat the CPUs as dedicated and schedule only a given guest’s threads on
that guest’s CPUs. In this situation, the guests act much like native operating
systems running on native CPUs.
➢ The VMM itself needs some CPU cycles for guest management and I/O management
and can steal cycles from the guests by scheduling its threads across all of the system
CPUs, but the impact of this action is relatively minor.
➢ More difficult is the case of overcommitment, in which the guests are configured for
more CPUs than exist in the system. Here, a VMM can use standard scheduling
algorithms to make progress on each thread but can also add a fairness aspect to those
algorithms.
❖ For example, if there are six hardware CPUs and twelve guest allocated CPUs,
the VMM can allocate CPU resources proportionally, giving each guest half of
the CPU resources it believes it has.
❖ The VMM can still present all twelve virtual CPUs to the guests, but in mapping
them onto physical CPUs, the VMM can use its scheduler to distribute them
appropriately.
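The proportional-sharing arithmetic in the example above is simple enough to compute directly. The function below is an illustrative calculation, not any VMM's actual scheduling policy.

```python
# Overcommitment example from the text: 6 physical CPUs shared among
# 12 allocated virtual CPUs. Under proportional sharing, each guest
# receives real CPU time scaled by physical/virtual capacity.

def cpu_share(physical_cpus, total_vcpus, guest_vcpus):
    """Real-CPU equivalents a guest receives under proportional sharing."""
    return guest_vcpus * (physical_cpus / total_vcpus)

# Two guests, each configured with 6 vCPUs, on a 6-CPU host:
share = cpu_share(physical_cpus=6, total_vcpus=12, guest_vcpus=6)
# Each guest gets 3 CPUs' worth of time -- half of what it believes it has.
```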
➢ Within a virtual machine, the guest operating system receives only the CPU resources
the virtualization system gives it. A 100-millisecond time slice may take much more than
100 milliseconds of virtual CPU time.
➢ Depending on how busy the system is, the time slice may take a second or more,
resulting in very poor response times for users logged into that virtual machine. The
effect on a real-time operating system can be even more serious.
5.5.2 Memory Management
➢ Efficient memory use in general-purpose operating systems is a major key to
performance.
➢ In virtualized environments, there are more users of memory (the guests and their
applications, as well as the VMM), leading to more pressure on memory use.
➢ Further adding to this pressure is the fact that VMMs typically overcommit memory,
so that the total memory allocated to guests exceeds the amount that physically exists
in the system.
➢ The extra need for efficient memory use is not lost on the implementers of VMMs, who
take extensive measures to ensure the optimal use of memory.
❖ For example, VMware ESX uses several methods of memory management.
Before memory optimization can occur, the VMM must establish how much
real memory each guest should use. To do that, the VMM first evaluates each
guest’s maximum memory size.
❖ Next, the VMM computes a target real-memory allocation for each guest based
on the configured memory for that guest and other factors, such as
overcommitment and system load.
5.5.2.1 Mechanisms to reclaim memory from the guests
➢ One approach is to provide double paging. Here, the VMM has its own page-
replacement algorithms and loads pages into a backing store that the guest believes is
physical memory.
➢ A common solution is for the VMM to install in each guest a pseudo– device driver or
kernel module that the VMM controls. (A pseudo–device driver uses device-driver
interfaces, appearing to the kernel to be a device driver, but does not actually control a
device. Rather, it is an easy way to add kernel-mode code without directly modifying
the kernel.) This balloon memory manager communicates with the VMM and is told to
allocate or deallocate memory. If told to allocate, it allocates memory and tells the
operating system to pin the allocated pages into physical memory.
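The balloon mechanism can be sketched as a toy model (the class and method names are illustrative, not any real VMM's interface):

```python
class BalloonDriver:
    """Toy balloon memory manager: when the VMM asks it to inflate, it
    allocates and pins pages inside the guest, so the VMM can reclaim
    the corresponding physical frames for other guests."""

    def __init__(self, guest_free_pages):
        self.guest_free_pages = guest_free_pages
        self.pinned = 0

    def inflate(self, pages):
        grabbed = min(pages, self.guest_free_pages)
        self.guest_free_pages -= grabbed
        self.pinned += grabbed
        return grabbed          # frames the VMM may now reclaim

    def deflate(self, pages):
        released = min(pages, self.pinned)
        self.pinned -= released
        self.guest_free_pages += released
        return released         # frames returned to the guest
```

Note how inflation pressures the guest's own memory manager: as free pages shrink, the guest pages out what it considers least valuable, so memory decisions are made by the guest rather than blindly by the VMM.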
➢ Another common method for reducing memory pressure is for the VMM to determine
if the same page has been loaded more than once. If this is the case, the VMM reduces
the number of copies of the page to one and maps the other users of the page to that one
copy.
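This page-sharing idea can be sketched in a few lines. The hashing here is only illustrative: a real VMM scans physical pages in the background, verifies candidate matches byte for byte, and maps the shared copy copy-on-write so a later write triggers a private copy.

```python
import hashlib

def deduplicate(pages):
    """Content-based page sharing: keep one copy per distinct page
    content and map every user of that content to the single copy."""
    store, mapping = {}, {}
    for addr, content in pages.items():
        digest = hashlib.sha256(content).hexdigest()
        store.setdefault(digest, content)   # one physical copy per content
        mapping[addr] = digest              # all users point at that copy
    return store, mapping
```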
5.5.3 I/O
➢ An operating system’s device-driver mechanism provides a uniform interface to the
operating system whatever the I/O device.
➢ Device-driver interfaces are designed to allow third-party hardware manufacturers to
provide device drivers connecting their devices to the operating system.
➢ Virtualization takes advantage of this built-in flexibility by providing specific
virtualized devices to guest operating systems.
➢ VMMs vary greatly in how they provide I/O to their guests. I/O devices may be
dedicated to guests, for example, or the VMM may have device drivers onto which it
maps guest I/O.
➢ The VMM may also provide idealized device drivers to guests. In this case, the guest
sees an easy-to-control device, but in reality, that simple device driver communicates
to the VMM, which sends the requests to a more complicated real device through a
more complex real device driver.
➢ I/O in virtual environments is complicated and requires careful VMM design and
implementation.
➢ A combination of hypervisor and hardware support can give guests direct device access, improving I/O performance.
➢ With type 0 hypervisors that provide direct device access, guests can often run at the
same speed as native operating systems.
➢ With direct device access in type 1 and 2 hypervisors, performance can be similar to
that of native operating systems if certain hardware support is present.
➢ The hardware needs to provide DMA pass-through with facilities like VT-d, as well as
direct interrupt delivery (interrupts going directly to the guests).
➢ In addition to direct access, VMMs provide shared access to devices.
❖ Consider a disk drive to which multiple guests have access. The VMM must
provide protection while the device is being shared, assuring that a guest can
access only the blocks specified in the guest’s configuration.
➢ General-purpose operating systems typically have one Internet protocol (IP) address,
although they sometimes have more than one.
➢ With virtualization, each guest needs at least one IP address, because that is the guest’s
main mode of communication. Therefore, a server running a VMM may have dozens
of addresses, and the VMM acts as a virtual switch to route the network packets to the
addressed guests.
➢ The guests can be “directly” connected to the network by an IP address that is seen by
the broader network (this is known as bridging). Alternatively, the VMM can provide
a network address translation (NAT) address.
➢ The NAT address is local to the server on which the guest is running, and the VMM
provides routing between the broader network and the guest.
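The VMM's role as a virtual switch reduces, at its simplest, to a table mapping guest IP addresses to guests (a deliberately minimal sketch keyed on IP addresses as in the text; production virtual switches typically operate on Ethernet frames and MAC addresses):

```python
class VirtualSwitch:
    """Toy virtual switch: deliver a packet to the guest whose bridged
    IP address matches the destination, or forward it off-host."""

    def __init__(self):
        self.table = {}                 # ip address -> guest name

    def attach(self, ip, guest):
        self.table[ip] = guest          # guest "plugs in" with its IP

    def deliver(self, dst_ip):
        # None means: not a local guest, forward to the physical NIC
        return self.table.get(dst_ip)
```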
Storage Management
➢ Virtualized environments need to approach storage management differently than do
native operating systems.
➢ If multiple operating systems have been installed, which disk is the boot disk, and where does it reside?
❖ The solution to this problem depends on the type of hypervisor. Type 0
hypervisors often allow root disk partitioning, partly because these systems tend
to run fewer guests than other systems.
❖ Alternatively, a disk manager may be part of the control partition, and that disk
manager may provide disk space (including boot disks) to the other partitions.
❖ Type 1 hypervisors store the guest root disk (and configuration information) in
one or more files in the file systems provided by the VMM.
❖ Type 2 hypervisors store the same information in the host operating system’s
file systems.
❖ A disk image, containing all of the contents of the root disk of the guest, is
contained in one file managed by the VMM.
➢ Moving a virtual machine from one system to another that runs the same VMM is as
simple as halting the guest, copying the image to the other system, and starting the guest
there.
➢ Guests sometimes need more disk space than is available in their root disk image.
❖ For example, a nonvirtualized database server might use several file systems
spread across many disks to store various parts of the database. Virtualizing
such a database usually involves creating several files and having the VMM
present those to the guest as disks. The guest then executes as usual, with the
VMM translating the disk I/O requests coming from the guest into file I/O
commands to the correct files.
➢ VMMs provide a mechanism to capture a physical system as it is currently configured
and convert it to a guest that the VMM can manage and run. This physical-to-virtual
(P-to-V) conversion reads the disk blocks of the physical system’s disks and stores them
in files on the VMM’s system or on shared storage that the VMM can access.
➢ VMMs also provide a virtual-to-physical (V-to-P) procedure for converting a guest to
a physical system. This procedure is sometimes needed for debugging.
5.5.4 Live Migration
➢ One feature not found in general-purpose operating systems but found in type 0 and
type 1 hypervisors is the live migration of a running guest from one system to another.
5.5.4.1 Working of Live Migration
➢ A running guest on one system is copied to another system running the same VMM.
➢ The copy occurs with so little interruption of service that users logged in to the guest,
as well as network connections to the guest, continue without noticeable impact.
➢ This rather astonishing ability is very powerful in resource management and hardware
administration.
➢ After all, compare it with the steps necessary without virtualization:
❖ warn users,
❖ shut down the processes,
❖ possibly move the binaries, and
❖ restart the processes on the new system.
➢ Only then can users access the services again.
➢ With live migration, we can decrease the load on an overloaded system or make
hardware or system changes with no discernable disruption for users.
5.5.4.2 The VMM migrates a guest via the following steps:
1. The source VMM establishes a connection with the target VMM and confirms that it is
allowed to send a guest.
2. The target creates a new guest by creating a new VCPU, a new nested page table, and
other state storage.
3. The source sends all read-only memory pages to the target.
4. The source sends all read-write pages to the target, marking them as clean.
5. The source repeats step 4, because during that step some pages were probably modified
by the guest and are now dirty. These pages need to be sent again and marked again as
clean.
6. When the cycle of steps 4 and 5 becomes very short, the source VMM freezes the guest,
sends the VCPU's final state, other state details, and the final dirty pages, and tells the
target to start running the guest. Once the target acknowledges that the guest is running,
the source terminates the guest.
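The iterative pre-copy cycle can be simulated in a few lines (a toy model: `dirty_fn` stands in for the dirty-page tracking a real VMM gets from hardware, and all names here are illustrative):

```python
def live_migrate(pages, dirty_fn, threshold=4, max_rounds=10):
    """Simulate pre-copy migration: copy everything once, then keep
    re-copying whatever the still-running guest dirtied, until the
    dirty set is small enough to freeze the guest and send the rest."""
    target = {}
    to_send = set(pages)                 # first round: all pages
    for _ in range(max_rounds):
        for p in to_send:
            target[p] = pages[p]         # copy and mark clean
        to_send = dirty_fn()             # pages dirtied meanwhile
        if len(to_send) <= threshold:
            break                        # cycle is short enough
    # stop-and-copy phase: guest frozen, final dirty pages sent
    for p in to_send:
        target[p] = pages[p]
    return target
```

The `threshold` and `max_rounds` parameters mirror the real trade-off: stopping too early lengthens the freeze, while looping forever never converges for a guest that dirties pages faster than the network can carry them.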
Figure 5.10: Live migration of a guest between two servers.
5.6 MOBILE OS - IOS AND ANDROID
5.6.1 iOS
➢ iOS is a mobile operating system designed for the iPhone smartphone and iPad tablet
computer.
Figure 5.11: Architecture of Apple’s macOS and iOS operating systems.
5.6.1.1 Functions of the various layers include the following:
User experience layer
❖ This layer defines the software interface that allows users to interact with the computing
devices. macOS uses the Aqua user interface, which is designed for a mouse or
trackpad, whereas iOS uses the Springboard user interface, which is designed for touch
devices.
Application frameworks layer
❖ This layer includes the Cocoa and Cocoa Touch frameworks, which provide an API for
the Objective-C and Swift programming languages.
❖ The primary difference between Cocoa and Cocoa Touch is that the former is used for
developing macOS applications, while the latter is used by iOS to provide support for
hardware features unique to mobile devices, such as touch screens.
Core frameworks.
❖ This layer defines frameworks that support graphics and media, including QuickTime
and OpenGL.
Kernel environment
❖ This environment, also known as Darwin, includes the Mach microkernel and the BSD
UNIX kernel.
➢ As shown in Figure 5.11, applications can be designed to take advantage of user-experience
features or to bypass them and interact directly with either the application frameworks layer or
the core frameworks layer.
➢ An application can communicate directly with the kernel environment.
5.6.1.2 Darwin's Layered System
➢ Darwin uses a hybrid structure.
➢ Darwin is a layered system that consists primarily of the Mach microkernel and the
BSD UNIX kernel.
Figure 5.12: The structure of Darwin

➢ Darwin's structure is shown in Figure 5.12.
➢ Darwin provides two system-call interfaces: Mach system calls (known as traps) and BSD
system calls (which provide POSIX functionality).
➢ The interface to these system calls is a rich set of libraries that includes not only the standard
C library but also libraries that provide networking, security, and programming-language
support.
➢ Fundamental operating-system services provided by iOS:
❖ Memory management,
❖ CPU scheduling,
❖ interprocess communication (IPC) facilities such as message passing and remote
procedure calls (RPCs).
➢ Much of the functionality provided by Mach is available through kernel abstractions, which
include tasks (a Mach process), threads, memory objects, and ports (used for IPC).
➢ The kernel environment provides an I/O kit for development of device drivers and
dynamically loadable modules (which macOS refers to as kernel extensions, or kexts).
5.6.2 ANDROID
➢ The Android operating system was designed by the Open Handset Alliance (led
primarily by Google) and was developed for Android smartphones and tablet
computers.
➢ Whereas iOS is designed to run on Apple mobile devices and is closed-source, Android
runs on a variety of mobile platforms and is open-source.
Figure 5.13: Architecture of Google’s Android
➢ The structure of Android appears in Figure 5.13.
➢ Android is similar to iOS in that it is a layered stack of software that provides a rich set of
frameworks supporting graphics, audio, and hardware features.
➢ These features, in turn, provide a platform for developing mobile applications that run on
a multitude of Android-enabled devices.
➢ Software designers for Android devices develop applications in the Java language, but
they do not generally use the standard Java API.
➢ Google has designed a separate Android API for Java development. Java applications are
compiled into a form that can execute on the Android Runtime (ART), a virtual machine
designed for Android and optimized for mobile devices with limited memory and CPU
processing capabilities.
➢ Java programs are first compiled to a Java bytecode .class file and then translated into an
executable .dex file.
➢ Whereas many Java virtual machines perform just-in-time (JIT) compilation to improve
application efficiency, ART performs ahead-of-time (AOT) compilation. Here, .dex files
are compiled into native machine code when they are installed on a device, from which
they can execute on the ART.
➢ AOT compilation allows more efficient application execution as well as reduced power
consumption, features that are crucial for mobile systems.
➢ Android developers can also write Java programs that use the Java native interface—or
JNI—which allows developers to bypass the virtual machine and instead write Java
programs that can access specific hardware features.
➢ Programs written using JNI are generally not portable from one hardware device to
another.
➢ The set of native libraries available for Android applications includes frameworks for
developing web browsers (WebKit), database support (SQLite), and network support, such
as secure sockets (SSL).
➢ Because Android can run on an almost unlimited number of hardware devices, Google
has chosen to abstract the physical hardware through the hardware abstraction layer, or
HAL.
➢ By abstracting all hardware, such as the camera, GPS chip, and other sensors, the HAL
provides applications with a consistent view independent of specific hardware.
➢ This feature, of course, allows developers to write programs that are portable across
different hardware platforms.
➢ The standard C library used by Linux systems is the GNU C library (glibc). Google instead
developed the Bionic standard C library for Android.
➢ Not only does Bionic have a smaller memory footprint than glibc, but it also has been
designed for the slower CPUs that characterize mobile devices.
➢ At the bottom of Android’s software stack is the Linux kernel. Google has modified the
Linux kernel used in Android in a variety of areas to support the special needs of mobile
systems, such as power management.
➢ It has also made changes in memory management and allocation and has added a new form
of IPC known as Binder.
REVIEW QUESTIONS
PART-A
(2-Marks)
1. What is a Virtual Machine?
➢ A virtual machine (VM) is a virtual environment which functions as a virtual
computer system with its own CPU, memory, network interface, and storage,
created on a physical hardware system.
➢ A piece of software called a hypervisor, or virtual machine manager, lets you run
different operating systems on different virtual machines at the same time.
2. What is Binary Translation?
➢ Binary translation is a software virtualization technique that uses an interpreter. It
translates the guest's binary code into another binary form, passing nontrapping
instructions through unchanged and rewriting special (sensitive) instructions.
3. List down the requirements for Virtualization.
➢ Fidelity.
❖ A VMM provides an environment for programs that is essentially identical
to the original machine.
➢ Performance.
❖ Programs running within that environment show only minor performance
decreases.
➢ Safety.
❖ The VMM is in complete control of system resources.
4. List down the parameters involved in Virtual Machine Life Cycle.
➢ These parameters usually include:
❖ the number of CPUs,
❖ amount of memory,
❖ networking details,
❖ and storage details
5. What is Type 0 Hypervisor?
➢ In a Type 0 hypervisor, the VMM itself is encoded in the firmware and loaded at boot
time. In turn, it loads the guest images to run in each partition.
➢ A type 0 hypervisor can run multiple guest operating systems (one in each hardware
partition).
➢ All those guests, because they are running on raw hardware, can in turn be VMMs.
Fig: Type 0 Hypervisor
6. What is Type 1 Hypervisor?
➢ A type 1 hypervisor acts like a lightweight operating system and runs directly on
the host’s hardware.
➢ The most commonly deployed type of hypervisor is the type 1 or bare-metal
hypervisor, where virtualization software is installed directly on the hardware where
the operating system is normally installed.
Fig: Type 1 Hypervisor
7. What is a Type 2 Hypervisor?
➢ A Type-2 hypervisor runs on the operating system of the physical host machine;
hence these are also called hosted hypervisors.
➢ These hypervisors are hosted on the operating system, and the hypervisor runs on
that layer as another software to enable virtualization.
Fig: Type 2 Hypervisor
8. List the components available in a Type 2 Hypervisor
➢ A physical server
➢ OS installed on that server hardware (OSes like Windows, Linux, macOS)
➢ Type-2 hypervisor on that OS
➢ Virtual machine instances/guest VMs
9. What are the advantages of Type 2 Hypervisor?
Simple management:
❖ They essentially act as management consoles. There is no need to install a separate
software package to manage the virtual machines running on type-2 hypervisors.
Useful for testing purposes:
❖ They are convenient for testing any new software or research projects. You can
simply run multiple instances with different OSes to test how the software works in
each environment.
10. What is Para-Virtualization?
➢ In Paravirtualization the source code of an OS is modified in order to run as a guest
OS in a virtual machine (VM) environment. Calls to the hardware from the guest
OS are replaced with calls to the VM monitor (VMM).
➢ Para-virtualization needs to modify the guest operating systems.
Fig: Paravirtualization
11. Write short notes on Emulation.
➢ Emulation is useful when the host system has one system architecture, and the guest
system was compiled for a different architecture.
➢ For example, suppose a company has replaced its outdated computer system with a
new system but would like to continue to run certain important programs that were
compiled for the old system. The programs could be run in an emulator that
translates each of the outdated system’s instructions into the native instruction set
of the new system.
12. What challenges does a programmer face in emulation?
➢ The major challenge of emulation is performance. Instruction-set emulation may
run an order of magnitude slower than native instructions, because it may take ten
instructions on the new system to read, parse, and simulate an instruction from the
old system.
➢ Another challenge for emulator writers is that it is difficult to create a correct
emulator because, in essence, this task involves writing an entire CPU in software.
13. Write short notes on VMM.
➢ A Virtual Machine Monitor (VMM), also called a "hypervisor," is one of many hardware
virtualization techniques that allow multiple operating systems, termed guests, to
run concurrently on a host computer.
➢ It manages the operation of a virtualized environment on top of a physical host
machine.
14. What is iOS?
➢ iOS is a mobile operating system designed for the iPhone smartphone and iPad
tablet computer.
Fig: Architecture of iOS Operating System
15. Draw the structure of Darwin's layered system.

Fig: Darwin's Layered System
16. What are the system-call interfaces provided by Darwin's layered system?
➢ Darwin provides two system-call interfaces: Mach system calls (known as traps)
and BSD system calls (which provide POSIX functionality).
17. List down the fundamental operating system services provided by the iOS.
➢ Memory management,
➢ CPU scheduling,
➢ Interprocess communication (IPC) facilities such as message passing and remote
procedure calls (RPCs).
18. What is Android OS?
➢ The Android operating system was designed by the Open Handset Alliance (led
primarily by Google) and was developed for Android smartphones and tablet
computers.
➢ Whereas iOS is designed to run on Apple mobile devices and is closed-source,
Android runs on a variety of mobile platforms and is open-source.
Fig: Architecture of Google’s Android
19. What is Live Migration in OS Virtualization?
➢ Live migration refers to the process of moving a virtual machine (VM) running on
one physical host to another host without disrupting normal operations or causing
any downtime or other adverse effects for the end user. Live migration is considered
a major step in virtualization.
Fig: VM Live Migration
20. Compare between Full Virtualization and Para-Virtualization
FULL VIRTUALIZATION:
❖ A common and cost-effective type of virtualization in which computer service
requests are separated from the physical hardware that facilitates them.
❖ Allows the guest OS to execute independently.
❖ The guest OS issues hardware calls to access hardware.
❖ Lower performance.
PARA-VIRTUALIZATION:
❖ An enhancement of virtualization technology in which the guest OS is recompiled
prior to installation inside a VM.
❖ Allows the guest OS to communicate with the hypervisor.
❖ The guest OS communicates directly with the hypervisor using drivers.
❖ Higher performance.
Table: Full Virtualization vs Para-Virtualization
21. Compare between Bare-Metal Hypervisor and Hosted Hypervisor
Bare-Metal Hypervisor:
❖ Also called a Type 1 Hypervisor.
❖ Runs directly on the host hardware to control the hardware and to manage the
guest OS.
❖ Examples: Xen, Microsoft Hyper-V, VMware ESX/ESXi, Oracle VM Server for x86.
Hosted Hypervisor:
❖ Also called a Type 2 Hypervisor.
❖ Runs on a conventional operating system just as other computer programs do.
❖ Examples: VMware Workstation, Oracle VirtualBox, VMware Player.
Table: Bare-Metal Hypervisor vs Hosted Hypervisor
PART-B
1. What is Virtual Machine? Elaborate on the History, benefits, and features of Virtual
Machine with examples.
2. Discuss briefly about the building blocks of virtualization with neat diagrams.
3. Explain briefly about the different types of Virtual machines with neat diagrams.
4. What is Hypervisor? Discuss about the different types of virtualization with neat diagrams.
5. Write short notes on Full Virtualization and Para Virtualization with neat diagrams.
6. Explain briefly about virtualization and operating system components with examples.
7. What is Live Migration? Discuss about the steps involved in Live Migration with neat
diagrams.
8. Discuss about Android Operating System with neat diagram.
9. Elaborate on iOS with neat diagrams.
10. Discuss about Darwin's Layered System with neat diagram.