Unit III Virtualization Infrastructure and Docker
Desktop Virtualization
There are several different conceptual models of desktop virtualization, which can
broadly be divided into two categories based on whether the technology executes
the OS instance locally or remotely. It is important to note that not all forms of
desktop virtualization technology involve the use of virtual machines (VMs).
Another advantage to desktop virtualization is that users can access their virtual
desktops from anywhere. Even if a user is working from home or a hotel room,
they can still work from the same desktop environment that they use in the office.
1. Physical switch OS
2. Hypervisor
The basic function of the OS is to provide the application or executing process with a simple set of instructions. System calls, which are issued by the application and executed through the system library, are comparable to the service primitives provided at the interface between the application and the network through the SAP (Service Access Point).
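As a rough illustration of this layering in Python (the file path and module usage here are only an example), an application can request the same OS service either through thin system-call wrappers or through the higher-level library interface:

```python
import os
import tempfile

# The library sits between the application and the OS much like a SAP
# sits between the application and the network: the app asks for a
# service, and the library issues the corresponding system calls.
path = os.path.join(tempfile.gettempdir(), "sap_demo.txt")

# Low-level: thin wrappers that map almost 1:1 onto system calls.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"hello")   # issues the write() system call
os.close(fd)             # issues the close() system call

# High-level: the same service via the buffered library interface.
with open(path) as f:
    print(f.read())      # prints "hello"
```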
1. Physical Network
2. VM Network
• Improves manageability
• Reduces CAPEX
• Improves utilization
• Enhances performance
• Enhances security
Server Virtualization
• Hypervisor Virtualization
• Para-Virtualization
• Full Virtualization
Kernel-level virtualization runs a separate version of the Linux kernel, allowing multiple servers to run in a single host. It uses a device driver to communicate between the main Linux kernel and the virtual machines. This virtualization is a special form of server virtualization.
A hypervisor is a layer between the operating system and the hardware. With the help of a hypervisor, multiple operating systems can run on the same machine. It also provides the features and services needed for an OS to work properly.
iv. Para-Virtualization
v. Full Virtualization
This type of virtualization is used for a specific purpose and can apply to network-attached storage (NAS) systems.
It removes the dependency between the data being accessed and the location of the physical storage. It also allows file migration to be handled in the background, which improves performance.
Block-based virtual storage is more widely used than file-based virtual storage, as the file-based system is often deployed for a specific purpose. A block-based virtual storage system uses logical storage, such as a drive partition, abstracted from the physical storage in a device.
It also abstracts logical storage, such as a hard disk drive or a solid-state device. This allows the virtualization management software to discover the capacity of the available devices and split it into shared resources to assign.
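As a toy sketch of that idea (all device names and capacities are made up), a pool can hand out logical volumes without the consumer knowing which physical device backs them:

```python
class StoragePool:
    """Toy model of block-level storage virtualization: physical devices
    are pooled, and logical volumes are carved out of the pool."""

    def __init__(self, devices):
        self.free = dict(devices)   # device name -> free capacity (GB)
        self.volumes = {}           # volume name -> list of (device, GB)

    def create_volume(self, name, size_gb):
        # Allocate extents for a logical volume, spanning devices if needed.
        extents, needed = [], size_gb
        for dev in self.free:
            if needed == 0:
                break
            take = min(self.free[dev], needed)
            if take:
                extents.append((dev, take))
                self.free[dev] -= take
                needed -= take
        if needed:
            raise ValueError("pool exhausted")
        self.volumes[name] = extents
        return extents

pool = StoragePool({"disk0": 100, "disk1": 50})
# A 120 GB logical volume transparently spans both physical disks.
print(pool.create_volume("vm_data", 120))  # [('disk0', 100), ('disk1', 20)]
```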
In storage virtualization, data is retrieved quickly from virtual storage; it is as easy as accessing a file on the local computer. Data can also be stored easily with the help of an application and an internet connection.
Data can be migrated based on utilization: data that is frequently used can be stored on a high-performance storage system, while data that is rarely used can be placed on a somewhat slower system.
iii. Security
In storage virtualization, data is stored in different places and kept secure. If a disaster takes place, the data can be retrieved from another location, and the customer is not affected.
Storage can also be provisioned to meet actual utilization requirements rather than allocating additional storage up front.
Application Virtualization
These server images facilitate regulatory compliance with standards such as the Payment Card Industry Data Security Standard (PCI DSS) and the Health Insurance Portability and Accountability Act (HIPAA). Since data is not processed or stored on endpoint devices, a compromised device does not expose that data; the endpoint device is merely a display terminal.
Allows the running of legacy apps (e.g., those developed for end-of-life OS
platforms like Windows 7 and XP).
Clusters
Virtual Cluster
• Apart from this, most installations include software common to most users or applications, such as the OS or user-level programming libraries.
Deployment
• There are four steps to deploy a group of VMs onto a target cluster: preparing the disk image, configuring the VMs, choosing the destination nodes, and executing the VM deployment command on every host.
Many systems use templates to simplify the disk image preparation process.
• Templates could implement the COW (Copy-on-Write) format. A new COW file is very small and easy to create and transfer.
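The idea behind COW templates can be sketched in a few lines of Python (a simplified model; real systems such as qcow2 backing files work on disk blocks):

```python
class CowImage:
    """A VM disk image that records only the blocks it has changed,
    reading everything else from a shared, read-only template."""

    def __init__(self, base):
        self.base = base    # the template (block number -> data)
        self.delta = {}     # this VM's private modifications

    def read(self, block):
        return self.delta.get(block, self.base.get(block))

    def write(self, block, data):
        self.delta[block] = data   # the template is never touched

template = {0: "bootloader", 1: "kernel", 2: "rootfs"}
vm1 = CowImage(template)
vm1.write(2, "rootfs+app")

print(vm1.read(0))     # "bootloader" -- shared from the template
print(vm1.read(2))     # "rootfs+app" -- private copy
print(len(vm1.delta))  # 1 -- the new COW file stays small
```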
• A VM is configured with a name, disk image, network settings, and allocated CPU and memory.
• One needs to record each VM's configuration in a file. However, this method is inefficient when managing a large group of VMs.
VMs with the same configurations could use pre-edited profiles to simplify the
process. In this scenario, the system configures the VMs according to the chosen
profile.
• Most configuration items use the same settings, while items such as the UUID, VM name, and IP address are assigned automatically calculated values.
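A sketch of this profile mechanism in Python (the profile fields and the address scheme are hypothetical):

```python
import uuid

# Common settings come from a pre-edited profile; per-VM items such as
# the name, UUID, and IP address are calculated automatically.
profile = {"cpus": 2, "memory_mb": 2048, "disk_image": "template.qcow2"}

def configure_vms(profile, count, subnet="192.168.0."):
    vms = []
    for i in range(1, count + 1):
        cfg = dict(profile)               # shared profile settings
        cfg["name"] = f"vm{i:02d}"        # auto-calculated name
        cfg["uuid"] = str(uuid.uuid4())   # auto-calculated UUID
        cfg["ip"] = subnet + str(10 + i)  # auto-calculated address
        vms.append(cfg)
    return vms

for vm in configure_vms(profile, 3):
    print(vm["name"], vm["ip"], vm["cpus"])
```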
Containers are lightweight software packages that contain all the dependencies
required to execute the contained software application. These dependencies include
things like system libraries, external third-party code packages, and other operating
system level applications. The dependencies included in a container exist in stack
levels that are higher than the operating system.
Pros
• Iteration speed
Because containers are lightweight and only include high level software, they are
very fast to modify and iterate on.
• Robust ecosystem
Most popular container runtimes have public repositories of pre-built containers, which can save significant development time.
Cons
• Shared host exploits
Containers all share the same underlying hardware below the operating system layer, so it is possible that an exploit in one container could break out of the container and affect the shared host. There is also a security risk in using public pre-built images, as they may contain exploits or may be vulnerable to being hijacked by nefarious actors.
Virtual machine
Pros
• Full isolation security
Virtual machines run in isolation as a fully standalone system. This means that
virtual machines are immune to any exploits or interference from other virtual
machines on a shared host. An individual virtual machine can still be hijacked by
an exploit but the exploited virtual machine will be isolated and unable to
contaminate any other neighboring virtual machines.
• Interactive development
Cons
• Iteration speed
Virtual machines are time consuming to build and regenerate because they
encompass a full stack system. Any modifications to a virtual machine snapshot
can take significant time to regenerate and validate they behave as expected.
• Storage size cost
Virtual machines can take up a lot of storage space. They can quickly grow to
several gigabytes in size. This can lead to disk space shortage issues on the virtual
machines host machine.
Introduction to Docker
Docker:
Docker is a set of platform-as-a-service (PaaS) products that use operating-system-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating system kernel and therefore use fewer resources than a virtual machine. Docker is a tool used to automate the deployment of applications in an environment designed to manage containers.
1. Docker Image
2. Docker Container
3. Dockerfile
4. Docker Engine
• REST API: It specifies how the applications can interact with the Server and
instructs it what to do.
5. Docker Hub
• Docker Hub is the official online repository where you can find other
Docker Images that are available for use.
• It makes it easy to find, manage, and share container images with others.
Docker Components
Docker uses a client-server architecture, which means the Docker client can connect to the Docker host locally or remotely. The Docker client and host (daemon) can run on the same machine or on different hosts and communicate through sockets or a RESTful API.
The Docker client is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to the Docker daemon, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
We communicate with the Docker client using the Docker CLI: we issue commands to the Docker client, and the client passes them on to the Docker daemon.
DockerHost
Docker Daemon
When you use a docker run command to start up a container, your Docker client translates that command into an HTTP API call and sends it to the Docker daemon. The daemon then evaluates the request, talks to the underlying OS, and provisions your container.
The daemon is responsible for all container-related actions and receives commands
via the CLI or the REST API. It can also communicate with other daemons to
manage its services. The Docker daemon pulls and builds container images as
requested by the client.
Docker Images:
Create a Docker image using the docker build command: whenever you pass a Dockerfile to the docker build command, the Docker daemon creates a Docker image according to the Dockerfile's instructions.
Run a Docker image using the docker run command: whenever we pass this command to the Docker client, the client passes it to the Docker daemon, which creates a container from that image.
Push a Docker image to a public registry such as DockerHub using the docker push command; once pushed, you can access the image from anywhere using the docker pull command.
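The Dockerfile driving the build step can be very small. A minimal, hypothetical example for a Python application (the file names app.py and requirements.txt are assumptions):

```dockerfile
# Hypothetical Dockerfile for a small Python application.
FROM python:3.12-slim
WORKDIR /app
# Copy and install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Default command executed when a container starts from this image
CMD ["python", "app.py"]
```

With this file in the project directory, the walkthrough above corresponds to `docker build -t myuser/myapp .`, `docker run myuser/myapp`, and `docker push myuser/myapp` (the image name is made up).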
An image can be used to build a container. Container images can be shared across
teams within an enterprise using a private container registry, or shared with the
world using a public registry like Docker Hub.
A container is a runnable instance of an image. You can create, start, stop, move,
or delete a container using the Docker API or CLI. You can connect a container to
one or more networks, attach storage to it, or even create a new image based on its
current state.
A container is volatile: whenever you remove or kill a container, all of its data is lost. If you want to persist container data, use the Docker storage concept.
Containers only have access to resources that are defined in the image, unless
additional access is defined when building the image into a container.
Docker Networking
By default, you get three different networks when you install Docker – none, bridge, and host. The none and host networks are part of Docker's network stack. The bridge network automatically creates a gateway and IP subnet, and all containers attached to this network can talk to each other via IP addresses.
Docker Storage
You can store data within the writable layer of a container, but it requires a storage driver. For persistent storage, Docker offers the following options:
• Data Volumes: Data volumes provide the ability to create persistent storage, with the ability to rename volumes, list volumes, and list the container associated with a volume. Data volumes sit on the host file system, outside the container's copy-on-write mechanism, and are fairly efficient.
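As an illustration, a named data volume can be declared in a Docker Compose file so the data outlives any individual container (a hypothetical example):

```yaml
# Hypothetical docker-compose.yml: "dbdata" is a named data volume that
# lives on the host, outside the container's copy-on-write layer.
services:
  db:
    image: postgres:16
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

The same volume could also be created and inspected directly from the CLI with `docker volume create dbdata` and `docker volume ls`.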
Docker Registries
Docker registries are services that provide locations where you can store and download images.
In other words, a Docker registry contains repositories that host one or more Docker images.
Docker Hub is the world's largest repository of container images, with an array of content sources including community developers, open source projects, and independent software vendors (ISVs) building and distributing their code in containers.