Docker
Abstract
Docker has revolutionized the way software applications are developed,
deployed, and managed. Containerization, the core concept behind Docker,
enables the creation of lightweight, isolated environments that encapsulate an
application and all its dependencies. This abstract provides an overview of
Docker and containerization, highlighting their significance in modern software
development and deployment.
Container
A container is a lightweight, isolated environment that packages an
application and its dependencies, allowing it to run consistently and
reliably across different computing environments. Containers provide
a standardized and portable way to deploy software, ensuring that it
works consistently regardless of the underlying infrastructure.
Containers are based on containerization technologies such as Docker,
which utilize operating system-level virtualization. Unlike traditional
virtual machines (VMs) that emulate an entire operating system,
containers share the host system's kernel and only encapsulate the
necessary runtime components and dependencies required to run the
application. This lightweight approach enables containers to start
quickly, use fewer system resources, and offer better performance
compared to VMs.
Containers are often used in modern software development practices,
such as microservices architecture and DevOps. They enable
developers to package their applications along with their dependencies,
configurations, and libraries into a single, portable unit. Containers can
be easily deployed and run on any system that supports the
containerization platform, providing consistent behaviour across
development, testing, and production environments.
Containerization offers benefits like scalability, version control, and
reproducibility. Containers can be easily scaled up or down to meet
changing demands, making them suitable for cloud-based and
distributed systems. They also enable versioning of application
components, making it easier to manage and rollback updates.
Additionally, containers promote reproducibility by ensuring that the
application runs the same way across different environments, reducing
the likelihood of deployment issues caused by variations in system
configurations.
Virtual Machines (VMs) vs Containers
Virtual Machines:
VMs run a full guest operating system on top of a hypervisor, emulating complete hardware. This makes them heavier than containers but gives each VM strong isolation from the host and from other VMs.
VMs provide flexibility in terms of operating systems, allowing
different OS versions or distributions to be run concurrently on the
same physical host.
Live migration is a common feature in VM environments, enabling the
transfer of running VMs between physical hosts with minimal
downtime.
Containers:
Containers, on the other hand, are lightweight, isolated environments
that share the host system's kernel and run as isolated processes.
Containers utilize operating system-level virtualization to package
applications along with their dependencies and configurations into a
single, portable unit.
Containers are more lightweight compared to VMs, as they do not
require emulating a full operating system. They share the host system's
OS kernel, resulting in faster startup times and lower resource
overhead.
Due to their lightweight nature, containers offer higher density,
allowing multiple containers to run concurrently on the same host,
utilizing the host's resources efficiently.
Containers are designed for running modern, microservices-based
applications. They promote scalability, agility, and reproducibility by
providing a standardized and portable execution environment.
Containers offer less isolation compared to VMs since they share the
host system's kernel. However, they employ various isolation
mechanisms (such as namespaces and control groups) to provide
process-level isolation and resource management.
Docker
Docker is an open-source platform that enables the creation,
deployment, and management of containers. It provides a set of tools
and APIs to build, package, and distribute applications as lightweight,
portable containers.
Here are some key aspects of Docker:
Containerization: Docker utilizes containerization technology to
create and run containers. A container is a standalone executable unit
that includes everything needed to run an application, such as code,
runtime, libraries, and dependencies. Docker containers are based on
the Docker image format, which is a lightweight, read-only template
that contains all the instructions for creating a container.
Docker Engine: Docker Engine is the core component of Docker that
runs and manages containers. It provides an environment to build, run,
and manage containers, including features like container orchestration,
resource management, networking, and storage.
Dockerfile: A Dockerfile is a text-based script that defines the steps
to build a Docker image. It specifies the base image, required
dependencies, configuration settings, and other instructions necessary
to create a container. Dockerfiles allow developers to automate the
containerization process and ensure reproducibility.
Docker Hub: Docker Hub is a cloud-based registry that hosts a wide
range of Docker images. It serves as a central repository where users
can discover, download, and share Docker images. Docker Hub
provides both official images maintained by Docker as well as user-
contributed images.
Docker Compose: Docker Compose is a tool that simplifies the
orchestration of multi-container applications. It allows developers to
define and manage the configuration of multiple interconnected
containers as a single unit. Docker Compose files specify the services,
networks, and volumes required for the application, enabling easy
deployment and scaling of complex applications (a minimal example
appears after this list).
Docker Swarm and Kubernetes: Docker Swarm is Docker's native
clustering and orchestration solution for managing a cluster of Docker
nodes. It provides features like service discovery, load balancing, and
automatic scaling of containers. Alternatively, Docker can integrate
with Kubernetes, a popular container orchestration platform, to manage
containerized applications at scale.
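Returning to Docker Compose, here is a minimal docker-compose.yml sketch (the service name, image, ports, and volume name are illustrative, not part of any particular project):

version: "3.8"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - webdata:/usr/share/nginx/html
volumes:
  webdata:

Running docker-compose up -d (or docker compose up -d on newer Docker versions) in the directory containing this file starts the whole stack as a single unit.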
Docker Commands
The docker info command provides detailed information about the
Docker installation and its runtime environment. When you run docker
info, you will see a variety of information about Docker and the system
it is running on. Here are some key aspects explained:
Server Version: The version of Docker that is installed on the system.
API Version: The version of the Docker API that is supported by the
Docker daemon.
Docker Root Directory: The directory where Docker stores its data,
including images, containers, volumes, and networks.
Operating System: The name and version of the operating system on
which Docker is running.
Kernel Version: The version of the Linux kernel running on the host
system. Docker requires a compatible kernel for proper functioning.
CPUs: The number of CPUs (or CPU cores) available on the host
system.
Total Memory: The total amount of system memory (RAM)
available on the host system.
Docker Storage Driver: The storage driver used by Docker to
manage the container's filesystem, such as overlay2, aufs, or btrfs.
Logging Driver: The logging driver configured for Docker, which
determines how Docker handles container logs, such as json-file,
syslog, or none.
Cgroup Driver: The control group (cgroup) driver used by Docker to
manage resource allocation and restrictions for containers.
Container Runtimes: The container runtimes supported by Docker.
It may include options like docker-containerd or runc.
Docker Root Dir Permissions: The file system permissions
(ownership and access) for the Docker root directory.
Docker Registry Mirrors: The list of configured Docker registry
mirrors that Docker uses to pull images.
Experimental Features: The list of experimental features that are
enabled in the Docker installation.
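As a quick illustration, the output of docker info can also be narrowed to individual fields with a Go template. A minimal sketch (the field names come from the daemon's /info API and may vary slightly between Docker versions):

$ docker info
$ docker info --format '{{.ServerVersion}}'
$ docker info --format '{{.DockerRootDir}}'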
Docker Container Lifecycle Management:
There are different stages when we create a Docker container which is
known as Docker Container Lifecycle. Some of the states are:
Created: A container that has been created but not started
Running: A container running with all its processes
Paused: A container whose processes have been paused
Stopped: A container whose processes have been stopped
Deleted: A container in a dead state
Create Containers
The docker create command creates a new Docker container from the
specified image.
$ docker create --name <container name> <image name>
Start Container
To start a stopped container, we can use the docker start command.
$ docker start <container name>
Run Container
The docker run command does the work of both the docker create and
docker start commands: it creates a new container from the specified
image and starts it.
$ docker run -it --name <container name> <image name>
Pause Container
If we want to pause the processes running inside the container, we can
use the “docker pause” command.
$ docker pause <container name>
The docker pause command is used to pause the processes running
inside a running container. When a container is paused, all processes
within the container are temporarily halted, effectively freezing the
container's execution state. Pausing a container can be useful in
scenarios where you want to temporarily suspend the container's
operations without stopping or removing it.
Some important points to note about the docker pause command:
Pausing a container can be useful for troubleshooting or debugging
purposes. It allows you to examine the container's state and inspect its
filesystem or configurations while it is temporarily frozen.
When a container is paused, its CPU usage is reduced to zero, and
its memory is not freed. The container remains in the same state it was
when it was paused.
To resume a paused container and allow its processes to continue
executing, you can use the docker unpause command.
Only running containers can be paused. If you try to pause a
container that is not currently running, an error will occur.
It's worth noting that pausing a container will affect all processes
running within it. If the container contains critical or time-sensitive
services, pausing it may disrupt their normal functioning.
When a container is paused using the docker pause command, the
container's processes are halted, including its CPU usage. Therefore, at
the paused state, the container's CPU usage will be reduced to zero.
When a container is paused, it remains in a frozen state, and its CPU
resources are not utilized. This can be useful for scenarios where you
want to temporarily suspend the container's operations without
stopping or removing it, allowing you to examine its state or perform
troubleshooting.
To check the CPU usage of a container in the paused state, you can use
tools or commands that monitor CPU utilization, such as docker stats
or external monitoring solutions. However, while the container is
paused, the CPU usage reported for that container will be zero or close
to zero since its processes are not actively running.
Keep in mind that the CPU usage will remain at zero until the container
is unpaused using the docker unpause command, which allows its
processes to resume and utilize CPU resources again.
docker unpause:
The docker unpause command is used to resume the execution of a
paused container. When a container is paused using the docker pause
command, its processes are temporarily halted, effectively freezing the
container's execution state. The docker unpause command allows you
to reverse this action and resume the container's operations.
Some important points to note about the docker unpause command:
Only paused containers can be unpaused. If you try to unpause a
container that is not in the paused state, an error will occur.
When a container is unpaused, the processes within the container are
resumed, and the container's execution state returns to normal.
The CPU resources that were previously idle during the pause state
will be utilized again by the container's processes once it is unpaused.
Unpausing a container does not reset or change its state or data. It
simply allows the container's processes to continue running from where
they left off.
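For reference, the command mirrors docker pause:

$ docker unpause <container name>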
Stop Container
Stopping a running container means stopping all the processes running
in that container. Stopping is not the same as killing or ending those processes.
$ docker stop <container name>
A stopped container can be brought back to the running state, which
means all the processes inside the container will start again. When we
run the docker stop command, the main process inside the container
receives a SIGTERM signal.
Delete Container:
Removing or deleting a container means destroying all the processes
running inside the container and then deleting the container itself. It is
preferable to delete a container only when it is in the stopped state,
rather than forcefully destroying a running container.
If we try to delete a container that is still running, the Docker daemon
throws an error. We have to stop the container first and then delete it.
$ docker stop <container name>
$ docker rm <container name>
Note that a paused container keeps its memory allocated; that memory
is used again when the container is resumed.
When we use the docker stop command, the main process inside the
container receives a SIGTERM signal and, after a grace period, SIGKILL.
The memory used by the container is released once it is stopped.
SIGTERM is the signal of termination. The intention is to kill the
process, gracefully or not, but to first allow it a chance to clean up.
SIGKILL is the kill signal. The only behaviour is to kill the process,
immediately.
SIGSTOP is the pause signal. The only behaviour is to pause the
process. The shell uses pausing (and its counterpart, resuming via
SIGCONT) to implement job control.
Docker rm Vs. Docker Kill
docker container rm: Using docker rm, we can remove one or more
containers from the host node; either the container name or the
container ID can be used.
docker container kill: The main process inside each specified container
is sent SIGKILL, or any other signal specified with the --signal option.
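A short sketch of both commands (container names are placeholders):

$ docker container rm <container name>                      # remove a stopped container
$ docker container rm -f <container name>                   # force-remove a running container
$ docker container kill <container name>                    # send SIGKILL to the main process
$ docker container kill --signal=SIGTERM <container name>   # send a specific signal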
Docker Images
The docker images command is used to list the Docker images
available on your local system. It provides information about the
images, such as their repository, tag, image ID, size, and creation date.
This command is helpful for managing and identifying the Docker
images present on your machine.
List all Docker images on your system:
docker images
Show the image IDs of all images (quiet mode):
docker images -q
Display image sizes and sort by size (docker images has no built-in sort
option, so the formatted output is piped through sort):
docker images --format "{{.Size}}\t{{.Repository}}:{{.Tag}}" | sort -h
Filter images by repository and tag:
docker images ubuntu:latest
There are several ways to create Docker images, depending on your
requirements and preferences. Here are some common methods:
Dockerfile: A Dockerfile is a text file that contains instructions for
building a Docker image. You can define the base image, add
dependencies, copy files, configure the environment, and specify the
commands to run. Once you have a Dockerfile, you can use the docker
build command to build an image based on it.
Docker Commit: Docker allows you to create an image by committing
the changes made to a running container. You can start a container,
make the necessary modifications, and then use the docker commit
command to create an image from the container's current state.
Docker commit:
The docker commit command is used to create a new Docker image by
committing the changes made to a running container. It allows you to
capture the current state of a container as an image, including any
modifications or additions made within the container.
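A minimal sketch of the workflow (container and image names are placeholders):

$ docker run -it --name mycontainer ubuntu:22.04 bash
# ... install packages or modify files inside the container, then exit ...
$ docker commit -m "added curl" -a "chaitanya" mycontainer myimage:v1
$ docker images myimage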
Some important points to note about the docker commit command:
It's recommended to use a Dockerfile and the docker build command
whenever possible to create reproducible and maintainable images. The
docker commit command is useful for quick experimentation or in
situations where you need to capture ad hoc changes made inside a
container.
The docker commit command captures the container's file system
state at the time of the commit, including any new files, modified files,
and changes to existing files. It does not capture the container's
environment variables, network settings, or volumes.
The resulting image from the docker commit command will have a
new image ID. If you provide a [REPOSITORY[:TAG]] argument, the
new image will be tagged accordingly. Otherwise, it will have an auto-
generated ID.
It's important to note that docker commit creates a new image layer
based on the changes made within the container. Each time you use
docker commit, a new layer is added on top of the existing image
layers.
Dockerfile:
A Dockerfile is a text file that contains a set of instructions used to build
a Docker image. It provides a declarative and reproducible way to
define the environment and configuration of a containerized
application. With a Dockerfile, you can automate the process of
building images and ensure consistent setups across different
environments.
A typical Dockerfile consists of a series of instructions that are
executed in order to build the image layer by layer. Here are some
commonly used instructions:
FROM: Specifies the base image to use as the starting point for your
image. It is typically an existing image from Docker Hub or a custom
image built on top of another base image.
COPY or ADD: Copies files and directories from the host into the
image's filesystem. This allows you to add your application code,
dependencies, and other necessary files to the image.
RUN: Executes a command within the image. You can use this
instruction to install packages, run setup scripts, or perform any other
necessary actions during the image build process.
WORKDIR: Sets the working directory inside the container where
subsequent instructions will be executed.
EXPOSE: Specifies the ports on which the container will listen for
incoming connections at runtime. However, it doesn't actually publish
the ports; it's primarily for documentation purposes.
CMD or ENTRYPOINT: Defines the command that will be executed
when the container is launched from the image. This instruction allows
you to specify the default command or entrypoint for the container.
ENV: Sets environment variables within the image.
These are just a few examples of instructions that can be used in a
Dockerfile. Docker provides many more instructions and options to
cover various use cases and scenarios.
Once you have created a Dockerfile, you can build an image from it
using the docker build command, specifying the directory containing
the Dockerfile as the build context. The docker build command reads
the instructions from the Dockerfile, executes them one by one, and
creates the image.
FROM ubuntu:22.04
LABEL Owner="chaitanya"
LABEL ENV="Dev"
ENV user=admin
ENV password=password
WORKDIR /app
COPY script.sh cont.sh
ADD https://releases.hashicorp.com/packer/1.9.1/packer_1.9.1_linux_amd64.zip packer_1.9.1_linux_amd64.zip
ARG VERSION=1.5.0
RUN apt update -y \
&& apt install -y nginx unzip jq net-tools iputils-ping wget curl \
&& wget https://releases.hashicorp.com/terraform/${VERSION}/terraform_${VERSION}_linux_amd64.zip \
&& unzip terraform_${VERSION}_linux_amd64.zip \
&& chmod 777 terraform && ./terraform version \
&& chmod 777 cont.sh && ./cont.sh
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
In a Dockerfile, both the ENV and ARG instructions are used to work
with values that can be accessed by processes running inside the
container. However, there are some important differences between the
two:
ENV Instruction:
Purpose: The ENV instruction sets environment variables within the
image, which can be accessed by processes running inside the
container.
Scope: Environment variables set using ENV are available during
the build process and persist in the resulting image.
Default Value: Environment variables set using ENV have default
values that are defined in the Dockerfile.
Usage: ENV is typically used to configure the runtime
environment of the container, such as setting paths, defining default
options, or providing configuration values.
ARG Instruction:
Purpose: The ARG instruction defines build-time variables that
are used during the build process but are not preserved in the final
image.
Scope: ARG variables are only available during the build process
and are not accessible by processes running inside the container when
it is executed.
Default Value: ARG variables can have default values defined in
the Dockerfile, but they can also be overridden during the build process
using the --build-arg option.
Usage: ARG is commonly used to pass dynamic values to the
Dockerfile at build time, allowing for customization and flexibility. For
example, you can use ARG to specify a version number or a file path
that changes between builds.
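A minimal Dockerfile sketch of the distinction (the variable names and values are illustrative):

FROM ubuntu:22.04
ARG APP_VERSION=1.0.0
# Build-time only; override with: docker build --build-arg APP_VERSION=2.0.0 .
ENV APP_HOME=/app
# Persists into the running container
RUN echo "Building version ${APP_VERSION}" && mkdir -p ${APP_HOME}
# At runtime, $APP_HOME is set inside the container, but $APP_VERSION is not.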
In Dockerfile, both the COPY and ADD instructions are used to copy
files and directories from the host machine into the Docker image.
While they have similar functionalities, there are some important
differences between the two:
COPY Instruction:
Purpose: The COPY instruction copies files and directories from
the host machine to the image.
Simplicity: COPY is a simpler and more straightforward
command compared to ADD. It is recommended to use COPY unless
you specifically need the additional features provided by ADD.
Behavior: COPY simply copies files and directories from the host
to the destination in the image, without performing any extraction or
modification of the contents.
Usage: COPY is suitable for most cases where you need to copy
files or directories into the image.
ADD Instruction:
Purpose: The ADD instruction copies files and directories from
the host machine to the image, but it also supports additional features
beyond basic copying.
Advanced Features: Unlike COPY, the ADD instruction can
automatically extract compressed archives (such as .tar, .gz, .zip) and
can fetch remote URLs and unpack them in the image.
Behavior: ADD performs more complex operations compared to
COPY. It can have unexpected behavior if used incorrectly, such as
automatically extracting archives, which may not be desired in some
cases.
Usage: ADD can be used when you need the additional
functionality of automatically extracting archives or fetching remote
resources, but it is important to be cautious with its usage to avoid
unexpected results.
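A short sketch of both instructions (the file names and paths are placeholders):

COPY app/ /opt/app/
# COPY performs a plain copy from the build context into the image.
ADD vendor.tar.gz /opt/vendor/
# ADD with a local tar archive extracts it into the destination directory.
ADD https://example.com/config.json /opt/app/config.json
# ADD with a URL downloads the file (remote files are not auto-extracted).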
In a Dockerfile, both the RUN, CMD, and ENTRYPOINT instructions
are used to execute commands within the Docker image. Each of these
instructions serves a different purpose:
RUN Instruction:
Purpose: The RUN instruction executes a command during the
image build process. It is primarily used to install dependencies, run
setup scripts, and perform any necessary actions required to build the
image.
Usage: You can include multiple RUN instructions in a
Dockerfile, and each one will execute sequentially, creating a new
image layer for each instruction. The resulting state of the file system
after a RUN instruction is applied will be saved in the image for
subsequent instructions to use.
CMD Instruction:
Purpose: The CMD instruction specifies the default command to be
executed when a container is started from the image. It provides the
default behaviour of the container, but this command can be overridden
by specifying a different command when running the container.
Usage: There can only be one CMD instruction in a Dockerfile. If
multiple CMD instructions are provided, only the last one will take
effect. The CMD instruction can be written in either a shell form or an
exec form.
ENTRYPOINT Instruction:
Purpose: The ENTRYPOINT instruction also specifies the default
command to be executed when a container is started from the image,
similar to CMD. However, unlike CMD, the command specified by
ENTRYPOINT is not easily overridden; it becomes the fixed entry
point of the container.
Usage: There can only be one ENTRYPOINT instruction in a
Dockerfile. If multiple ENTRYPOINT instructions are provided, only
the last one will take effect. The ENTRYPOINT instruction can be
written in either a shell form or an exec form. If a CMD instruction is
present in the Dockerfile, it will be appended as arguments to the
command specified by ENTRYPOINT.
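A minimal sketch of how the three instructions combine (the base image and command are illustrative):

FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl
# RUN executes at build time and its result is baked into the image.
ENTRYPOINT ["curl"]
# ENTRYPOINT fixes the executable that every container from this image runs.
CMD ["--help"]
# CMD supplies default arguments appended to the ENTRYPOINT; they can be
# overridden at run time: docker run <image> https://example.com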
EXPOSE instruction:
EXPOSE instruction is used to inform Docker that the container will
listen on specified network ports at runtime. It does not actually publish
the ports, but rather serves as documentation for developers and
administrators to know which ports should be exposed when running
the container.
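For example, an image that declares EXPOSE 80 (as in the Dockerfile above) still needs its port published explicitly when the container is run:

$ docker run -d -p 8080:80 <image name>

Requests to port 8080 on the host are then forwarded to port 80 inside the container.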
Docker Volumes
Docker volumes are a way to persistently store and share data between
Docker containers and the host system. When a container is created, it
runs in an isolated environment with its own file system. However, any
changes made to the container's file system are typically lost when the
container is stopped or removed.
Volumes provide a solution to this problem by allowing you to create
a special location on the host system or within another container, which
can be mounted into a container as a directory. This directory acts as a
shared storage space that persists even when the container is removed
or restarted. Volumes can be used for various purposes such as storing
application data, configuration files, and database files.
There are different types of volumes in Docker:
Named volumes: Named volumes are the simplest type of volume.
You can create a named volume using the docker volume create
command or it can be created automatically when you run a container
and specify a volume using the -v or --mount flag. Named volumes
have a unique name assigned by Docker and are managed by Docker
internally. They are easy to use and provide good isolation.
Host-mounted volumes: Host-mounted volumes allow you to mount
a directory from the host system into a container. With host-mounted
volumes, the data is stored directly on the host's file system, and any
changes made inside the container are immediately visible on the host,
and vice versa. Host-mounted volumes are created using the -v or --
mount flag followed by the host directory path and the container
directory path.
Anonymous volumes: Anonymous volumes are similar to named
volumes, but they are not given an explicit name. They are created
automatically by Docker when a container is run and a new volume is
specified without providing a name. Anonymous volumes are useful
when you don't need to manage or reuse the volume explicitly and just
want a disposable storage space for a specific container.
Volumes can be used with Docker CLI commands, such as docker run,
docker create, and docker-compose, by specifying the volume
configurations in the command arguments or the container
configuration files.
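A short sketch of the three volume types with docker run (the names and host paths are placeholders):

$ docker volume create appdata
$ docker run -d --name web1 -v appdata:/usr/share/nginx/html nginx     # named volume
$ docker run -d --name web2 -v /opt/site:/usr/share/nginx/html nginx   # host-mounted volume
$ docker run -d --name web3 -v /usr/share/nginx/html nginx             # anonymous volume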
The exact locations of named volumes on the host system depend on
the storage driver being used by Docker.
Local driver: The local driver is the default storage driver for most
Docker installations. For Linux, the named volumes are stored in the
/var/lib/docker/volumes directory on the host system.
Named volumes:
Named volumes in Docker provide a way to create and manage
persistent storage for containers. When you create a named volume,
Docker assigns a unique name to it, and you can refer to that name
when running containers to use the volume.
Here are the basic steps to work with named volumes in Docker:
Create a named volume: You can create a named volume using the
docker volume create command followed by the desired volume name.
For example:
$ docker volume create <volume name>
The volume is created.
Attach the volume to the instance.
Mount the partition to the directory permanently.
Edit the Docker daemon configuration file: Locate the Docker daemon
configuration file (daemon.json). The location of this file may vary
depending on your operating system:
Linux: /etc/docker/daemon.json
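A minimal daemon.json sketch, assuming the new mount point is /dockervolume as in the verification step below:

{
  "data-root": "/dockervolume"
}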
Start the Docker service: Start the Docker service using the appropriate
command for your operating system.
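On a systemd-based Linux host, for example:

$ sudo systemctl restart docker
$ docker info | grep "Docker Root Dir"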
Verify the change: After starting Docker, verify that the volumes are
now being stored in the new location by creating a new named volume
or using a container with a volume and checking the corresponding
directory in /dockervolume.
Docker Swarm
Docker Swarm is a native clustering and orchestration solution for
Docker containers. It allows you to create and manage a cluster of
Docker nodes, called a swarm, to run containerized applications across
multiple machines. With Docker Swarm, you can distribute workloads
and scale your applications easily, ensuring high availability and fault
tolerance.
Here are some key concepts related to Docker Swarm:
Swarm Manager: The Swarm Manager is responsible for managing
the swarm cluster and orchestrating tasks. It maintains the desired state
of the swarm and distributes tasks to worker nodes.
Worker Nodes: Worker nodes are machines that join the swarm and
execute tasks assigned to them by the Swarm Manager. They run
containerized applications.
Services: Services define long-running tasks in the swarm. A service
specifies the image to use, the desired number of replicas, network and
storage configurations, and other options.
Replicas: Replicas are instances of a service running on worker
nodes. You can define the desired number of replicas for a service, and
Swarm will distribute them across the available worker nodes.
Overlay Networks: Overlay networks provide multi-host
connectivity for containers running on different nodes in the swarm.
They enable communication between services, regardless of the node
they are running on.
Swarm Visualizer: Swarm Visualizer is a tool that provides a visual
representation of the swarm cluster, showing the nodes, services, and
their relationships.
Swarm Mode: Swarm Mode is the built-in orchestration mode
provided by Docker. It allows you to create and manage a swarm
cluster without the need for external tools or plugins.
To create a Docker swarm, you need to initialize a Swarm Manager
using the docker swarm init command. Then, you can join worker
nodes to the swarm using the docker swarm join command. Once the
swarm is set up, you can deploy services, scale them, and manage the
cluster using various Docker commands and APIs.
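A minimal sketch of setting up and using a swarm (the addresses, token, and service name are placeholders):

$ docker swarm init --advertise-addr <manager ip>
$ docker swarm join --token <worker token> <manager ip>:2377   # run on each worker node
$ docker service create --name web --replicas 3 -p 80:80 nginx
$ docker service ls
$ docker service scale web=5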
The Raft algorithm:
The Raft algorithm is a consensus algorithm used by Docker Swarm to
manage the high-availability state of the swarm. It ensures that the
Swarm Manager nodes have a consistent and agreed-upon view of the
cluster's state and make coordinated decisions.
In Docker Swarm, the Raft consensus algorithm is employed for leader
election and maintaining a replicated log that tracks changes to the
swarm's state. Here's how it works:
Leader Election: When a Swarm Manager node starts, it initiates a
leader election process. The nodes communicate with each other using
the Raft algorithm to determine which node will become the leader.
The leader is responsible for managing the swarm state and handling
client requests.
Consensus: Once a leader is elected, it coordinates with the other
Swarm Manager nodes to achieve consensus on the swarm's state. The
Raft algorithm ensures that all Swarm Manager nodes agree on the
order and consistency of the changes to the swarm's state.
Log Replication: The swarm's state changes are logged as entries in
a replicated log. The leader appends new entries to the log and
replicates them to the follower nodes. The followers then apply these
entries to their local copies of the swarm's state. This replication
process ensures that all Swarm Manager nodes have an up-to-date and
consistent view of the cluster.
Handling Failures: If the leader node fails or becomes unreachable,
a new leader election takes place. The remaining Swarm Manager
nodes use the Raft algorithm to elect a new leader and continue
maintaining the swarm's state.
By using the Raft algorithm, Docker Swarm achieves fault tolerance
and high availability. If a Swarm Manager node fails or is temporarily
disconnected, the remaining nodes can continue operating and elect a
new leader to maintain the swarm's state. This ensures that the cluster
remains functional even in the presence of failures.
Docker Ingress controller
In Docker, the concept of an Ingress controller is slightly different
compared to Kubernetes. Docker itself does not provide an Ingress
controller as a native feature, but there are several ways to achieve
similar functionality using additional tools and configurations.
In the context of Docker, an Ingress controller typically refers to a
reverse proxy or load balancer that handles incoming traffic and routes
it to the appropriate Docker containers running services. It acts as a
central entry point for external traffic, similar to how an Ingress
controller works in Kubernetes.
Here are a few popular options for setting up an Ingress controller in
Docker:
Nginx: Nginx is a widely used web server and reverse proxy that can
serve as an Ingress controller in Docker. By configuring Nginx with
appropriate rules, you can route incoming requests to different
containers based on hostname, URL paths, or other criteria. You can
run Nginx as a separate container and configure it to proxy traffic to
the desired containers.
Traefik: Traefik is a modern, dynamic reverse proxy and load
balancer that can be used as an Ingress controller in Docker. It
integrates well with Docker and provides automatic configuration and
discovery of services. Traefik can dynamically update its routing rules
based on Docker container labels or other metadata, making it easy to
manage and scale containerized services.
HAProxy: HAProxy is a popular load balancer and reverse proxy
that can also serve as an Ingress controller in Docker. It provides
advanced load balancing algorithms and can be configured to route
traffic to different Docker containers based on various criteria.
HAProxy is known for its performance and flexibility.
It's worth noting that the specific setup and configuration of an Ingress
controller in Docker can vary depending on the tool you choose and the
requirements of your environment. Docker Compose or Docker Swarm
can be used to define the network and services, and the Ingress
controller tool can be integrated into the Docker stack.
Traefik:
Traefik is a modern, dynamic, and cloud-native reverse proxy and load
balancer that is often used as an Ingress controller in containerized
environments like Docker. It is designed to work seamlessly with
container orchestration platforms such as Docker Swarm, Kubernetes,
and Mesos.
Key features of Traefik include:
Automatic Service Discovery: Traefik can automatically discover
and configure routes for services running in the container environment.
It integrates with container orchestration platforms and can
dynamically update its routing rules as containers are created, updated,
or removed.
Dynamic Configuration: Traefik supports dynamic configuration
through various providers. It can watch for changes in configuration
sources like Docker labels, Kubernetes Ingress resources, or file-based
configurations. This makes it easy to manage and scale containerized
services without the need for manual configuration changes.
Load Balancing: Traefik distributes incoming traffic across multiple
instances of a service, providing load balancing capabilities. It supports
different load balancing algorithms and can automatically adjust the
routing based on the health and availability of the backend services.
SSL/TLS Termination: Traefik can handle SSL/TLS termination,
allowing you to secure your services with HTTPS. It can automatically
generate and manage Let's Encrypt certificates, making it convenient
to enable encryption for your applications.
Circuit Breakers and Retries: Traefik includes circuit breaker and
retry mechanisms to enhance service reliability. It can detect and
handle failures by temporarily disabling unresponsive services or
retrying requests to improve overall application resilience.
Middleware Support: Traefik provides a rich set of middleware
options to modify and enhance the request and response flow. You can
apply middleware configurations for features such as URL rewriting,
rate limiting, authentication, and more.
Dashboard and Metrics: Traefik offers a built-in dashboard that
provides real-time insights into the traffic, routing rules, and backend
services. It also supports integration with monitoring and metrics
systems like Prometheus, enabling you to gather detailed metrics and
perform analysis.
Traefik can be configured using a simple YAML or TOML
configuration file, or it can be deployed using container orchestrators
like Docker Swarm or Kubernetes, where it can watch for changes in
the environment and automatically adjust its routing accordingly.
overlay network:
An overlay network is a virtual network that is created on top of an
existing physical network infrastructure. It allows multiple nodes or
containers distributed across different hosts or clusters to communicate
with each other as if they were on the same local network, even if they
are located in geographically separate locations or isolated networks.
In the context of containerization and orchestration platforms like
Docker Swarm and Kubernetes, overlay networks are used to enable
communication between containers running on different hosts or nodes.
Here are some key characteristics and benefits of overlay networks:
Virtualized Network: An overlay network abstracts the underlying
physical network infrastructure and creates a virtual network that
appears as a single logical network. It allows containers or nodes to
communicate with each other using virtual IP addresses, regardless of
their physical network locations.
Multi-Host Communication: Overlay networks facilitate
communication between containers or services running on different
hosts. Containers on different hosts can communicate directly with
each other using their virtual IP addresses, without the need for explicit
port forwarding or exposing their internal IP addresses to the external
network.
Network Isolation: Overlay networks provide network isolation and
segmentation. Containers connected to the same overlay network can
communicate with each other, while containers on different overlay
networks are isolated and cannot directly communicate, unless network
routing or policies are explicitly configured.
Scalability and Flexibility: Overlay networks can scale horizontally
as the number of containers or nodes increases. They allow containers
to be easily added or removed from the network without disrupting the
overall connectivity. This flexibility simplifies container orchestration
and deployment scenarios, where containers may be dynamically
scheduled across multiple hosts.
Security and Encapsulation: Overlay networks often provide secure
communication by encapsulating network traffic within an encrypted
tunnel. This ensures the confidentiality and integrity of data transmitted
across the network. It also allows containers to communicate securely
over public or untrusted networks.
Service Discovery and Load Balancing: Overlay networks can
integrate with service discovery mechanisms and load balancers
provided by container orchestration platforms. This enables automatic
discovery of services running on the overlay network and load
balancing of traffic across multiple instances of the same service.
It's important to note that different container orchestration platforms
have their own implementations of overlay networks. For example,
Docker Swarm uses the ingress overlay network for service load
balancing and routing, while Kubernetes uses the Container Network
Interface (CNI) plugins to create overlay networks.
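As a concrete sketch, the overlay network used by the Traefik services in the next section could be created in Swarm mode with:

$ docker network create --driver overlay --attachable traefik-net
$ docker network ls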
Create a Traefik Ingress controller:
TRAEFIK 1.6:
docker service create \
--name traefik16 \
--constraint=node.role==manager \
--publish 80:80 \
--publish 9080:8080 \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--network traefik-net \
traefik:v1.6 \
--docker \
--docker.swarmmode \
--docker.domain=traefik \
--docker.watch \
--web
docker service create \
--name red \
--label traefik.port=80 \
--network traefik-net \
--label traefik.frontend.rule=Host:red.cgitk8s.xyz \
chaitanyamanikumar/rollingupdate:v5
docker service create \
--name green \
--label traefik.port=80 \
--network traefik-net \
--label traefik.frontend.rule=Host:green.cgitk8s.xyz \
chaitanyamanikumar/testcontainer:v1
docker service create \
--name blue \
--label traefik.port=80 \
--network traefik-net \
--label traefik.frontend.rule=Host:blue.cgitk8s.xyz \
--replicas 3 \
nginx:latest
Docker Secrets
Docker Secrets is a feature provided by Docker that allows you to
securely manage sensitive information, such as passwords, API keys,
and certificates, within your Dockerized applications. It helps you
separate your application's configuration from its code and ensures that
sensitive data is stored and transmitted securely.
Here are some key points about Docker Secrets:
Secure Storage: Docker Secrets securely stores sensitive data in an
encrypted form on the Docker host. The secrets are only accessible by
the services that need them and are never stored in a container's
filesystem or transmitted over the network in plain text.
Swarm Mode: Docker Secrets is primarily designed for use in
Docker Swarm mode. It enables you to create and manage secrets
within a swarm cluster, ensuring that the secrets are available to the
services running in the swarm.
Creating Secrets: You can create secrets using the docker secret
create command. Secrets can be created from files, from the output of
a command, or by specifying the secret value directly. For example,
you can create a secret from a file containing a TLS certificate or from
the output of a command that generates a password.
Attaching Secrets to Services: Once a secret is created, you can
attach it to a service using the --secret flag when creating or updating
the service. This makes the secret available to the containers running
within that service.
Accessing Secrets in Containers: Inside a container, Docker Secrets
are made available as files in the /run/secrets/ directory. The files have
the same names as the secrets and contain the secret values.
Applications running in the container can read the secret values from
these files.
Updating and Managing Secrets: Secrets can be updated using the
docker secret update command, allowing you to rotate secrets and keep
them up to date. You can also manage secrets using Docker's
command-line interface or APIs.
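A minimal sketch of the workflow (the secret name, value, and service are placeholders):

$ echo "S3cretPassw0rd" | docker secret create db_password -
$ docker secret ls
$ docker service create --name app --secret db_password nginx
# Inside the service's containers, the value is readable at /run/secrets/db_password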
Docker Secrets provides a convenient and secure way to manage
sensitive information within your Dockerized applications, especially
in the context of Docker Swarm. It helps you adhere to security best
practices by keeping sensitive data separate from application code and
ensuring that it is stored and transmitted securely within your swarm
cluster.