Docker Notes
Docker Introduction
  Namespacing
    Types of Namespaces
  Control Group (cgroup)
$docker info command
Quick glance at docker image
Docker Container commands
Docker image commands
Creating new image
  Most useful commands in Dockerfile
  ARG and ENV Availability
  Multi-step image
  Shell and Exec Form
Docker network
Docker compose
  The Dot-Env File (.env)
  Setting ARG Values in docker-compose
Different ways to set environment variables
  1. Provide values one by one
  2. Pass environment variable values from your host
  3. Take values from a file (env_file)
Restart policies (--restart)
Docker Vs VM
1. Docker images are smaller. Docker image size is typically in megabytes vs a VM size that is typically in gigabytes.
2. Docker containers start and run much faster.
3. A VM of any OS can run on any OS host.
4. Docker uses the host OS kernel vs a VM uses its own kernel on top of the host OS.
5. Docker uses operating-system-level virtualization, as it uses the host system to talk to the kernel and hardware.
6. VMs are hardware-level virtualization, as a VM brings its own kernel to talk to the hardware.
7. In Docker we do process isolation.
Docker Introduction
Namespacing Vs CGroup
Namespace – isolate resources per process or group, e.g. programs (different versions of Java)
CGROUP – limit the amount of resources per process, e.g. CPU, memory, HDD, and network
Namespacing:
Isolating resources per process (or group of processes). Examples: what portion of the hard disk, which hard disk or mount path to use, which users to use, which hostnames and which network to use, and even which programs to use. For instance, if we have two versions of Java installed but one application needs Java 8 and the other needs Java 11, the two can be defined and isolated using namespaces to avoid a version conflict.
“Namespaces are a feature of the Linux kernel that partitions kernel resources such that one
set of processes sees one set of resources while another set of processes sees a different set of
resources.”
It means two isolated processes work as if they were running on two different systems, with no knowledge of each other.
In other words, the key feature of namespaces is that they isolate processes from each other.
On a server where you are running many different services, isolating each service and its
associated processes from other services means that there is a smaller blast radius for changes,
as well as a smaller footprint for security-related concerns.
Types of Namespaces
Within the Linux kernel, there are different types of namespaces. Each namespace has its own unique
properties:
• A user namespace has its own set of user IDs and group IDs for assignment to processes. In
particular, this means that a process can have root privilege within its user namespace without
having it in other user namespaces.
• A process ID (PID) namespace assigns a set of PIDs to processes that are independent from the
set of PIDs in other namespaces. The first process created in a new namespace has PID 1 and
child processes are assigned subsequent PIDs. If a child process is created with its own PID
namespace, it has PID 1 in that namespace as well as its PID in the parent process’ namespace.
See below for an example.
• A network namespace has an independent network stack: its own private routing table, set of IP
addresses, socket listing, connection tracking table, firewall, and other network-related
resources.
• A mount namespace has an independent list of mount points seen by the processes in the
namespace. This means that you can mount and unmount filesystems in a mount namespace
without affecting the host filesystem.
• An interprocess communication (IPC) namespace has its own IPC resources, for example
POSIX message queues.
• A UNIX Time-Sharing (UTS) namespace allows a single system to appear to have different
host and domain names to different processes.
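A quick way to see a PID namespace in action on a Linux host is the unshare tool (a minimal sketch; needs util-linux and usually root):
$ sudo unshare --pid --fork --mount-proc bash   # start a shell in a new PID namespace with its own /proc
$ ps -ef                                        # inside the namespace, bash appears as PID 1 and host processes are not visible
$ exit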
Control Group (cgroup)
Cgroups provide the following capabilities:
• Resource limits – You can configure a cgroup to limit how much of a particular resource
(memory or CPU, for example) a process can use.
• Prioritization – You can control how much of a resource (CPU, disk, or network) a process
can use compared to processes in another cgroup when there is resource contention.
• Accounting – Resource limits are monitored and reported at the cgroup level.
• Control – You can change the status (frozen, stopped, or restarted) of all processes in a cgroup
with a single command.
So basically you use cgroups to control how much of a given key resource (CPU, memory, network,
and disk I/O) can be accessed or used by a process or set of processes. Cgroups are a key component
of containers because there are often multiple processes running in a container that you need to
control together. In a Kubernetes environment, cgroups can be used to implement resource requests
and limits and corresponding QoS classes at the pod level.
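In everyday Docker use, these cgroup limits are applied through docker run flags (a small sketch; the image and limit values are just examples):
$ docker run -d --memory=256m --cpus=0.5 --name limited-redis redis
$ docker stats limited-redis   # live usage measured against the 256 MiB / 0.5 CPU limits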
Docker Container commands

Create and run in detached mode
  docker run -d imageName
  Example: docker run -d redis

Create and run in interactive mode with a command (helpful for a dry run and to validate what is inside an image; not used in prod, as this will override the command defined in the actual image)
  docker run -it imageName commandToRun
  Example: docker run -it redis sh

Create and run with an override command
  docker run image java -jar app.jar

Execute an additional command in an already running container (without disturbing the primary command/process). Useful to debug a live container in prod.
  docker exec -it <container id> command
  exec – run another command
  -i – provide input to the container (keep STDIN open)
  -t – allocate a tty
  Examples:
  docker exec -it myredisContainerID redis-cli
  docker exec -it myredisContainerID sh
  docker exec -it myredisContainerID bin/sh

Pass environment variables at run time
  $ docker run -d -e SSO_HOST_URL -e SSO_PASS my-image
  $ docker run --env-file=my_env.properties -it ubuntu bash
  my_env.properties content:
  my_env_property=BLAH...

Stop a container gracefully
  docker stop containerId
  If it could not stop within 10 secs, it falls back to the kill command (refer to the next one).
  --time (or -t) changes the default 10 secs to something else.

Restart one or more containers
  docker restart containerId
  --time (or -t) – seconds to wait for stop before killing the container

Pause a container
  The docker pause command suspends all processes in the specified containers. On Linux, this uses the freezer cgroup. Traditionally, when suspending a process the SIGSTOP signal is used, which is observable by the process being suspended.
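For example (reusing the container name from the exec examples above):
$ docker pause myredisContainerID     # freeze every process in the container via the freezer cgroup
$ docker unpause myredisContainerID   # resume them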
docker run -p 3000:3000 -v /containerDir/otherDir -v hostDir:/containerDir -d imageId

If containerDir does not exist, it will be created inside the container.
Note: /containerDir/otherDir will not be mapped to the host system because that volume does not have a mapping for it, which means that directory will be untouched and will have the same content as when the image was created.
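To verify which paths are bind-mounted and which are anonymous volumes, docker inspect can be used (a small sketch; the container id is a placeholder):
$ docker inspect -f '{{ json .Mounts }}' <containerId>
The output lists the bind mount for hostDir:/containerDir as well as the anonymous volume Docker created for /containerDir/otherDir.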
Stop running container(s)
  $ docker stop containerid(s)

Restart automatically (read the restart policies section below)
  docker run -dit --restart [restart-policy-value] [container_name]
Inspect a container
  By default, docker inspect will render results in a JSON array. For example uses of this command, refer to the examples section below.
  Kubernetes equivalent: kubectl describe resourceType <resource_name>, e.g. kubectl describe pod podName
Copy a file from host to container
  docker cp <FILE_TO_COPY_FROM_HOST> <CONTAINER_ID>:<PATH_INSIDE_CONTAINER_TO_PLACE_THE_FILE>
  $ docker cp a.py ubu_container:/home/dir1

Copy a file from container to host
  docker cp <CONTAINER_ID>:<FILE_TO_COPY_INSIDE_CONTAINER> <PATH_IN_HOST_TO_PLACE_FILE>
  $ docker cp ubu_container:/home/dir1/new.txt /home/abhishek
Docker image commands

Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
  $ docker image tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

List local images
  $ docker image ls
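For example, tagging a local image for a registry and pushing it (a sketch reusing the Docker Hub user id from the build examples below):
$ docker image tag myImageName:v1 sabtharshi/myImageName:v1
$ docker push sabtharshi/myImageName:v1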
Creating new image:
Once the Dockerfile is defined, we can execute the below command to create a new image out of it.
$ docker build -t dockerUserId/myImageName:v1 .
$ docker build -t sabtharshi/myImageName:v1 .
If your file name is not Dockerfile, then you need to use the -f flag to specify the file name:
$ docker build -f Dockerfile.dev -t sabtharshi/myImageName:v1 .
Please note that the command ends with a dot (.), which is the build context path. The build context path is nothing but the directory from which the command is executed (a directory path on the local system). When needed, we can copy files and folders from the build context path into the Docker image that is being built.
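Because everything under the build context is sent to the Docker daemon, it is common to keep a .dockerignore file next to the Dockerfile so large or irrelevant paths are excluded (a minimal sketch; the entries are just examples):
# .dockerignore
target/
.git
*.log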
Each step in the Dockerfile creates a new temporary container based on the previous step's image, applies the new step inside that running container, and then creates a new image out of the temporary container. This newly created image is used to run the next step.
[Screenshot of docker build output, originally colour-coded as follows]
1. The highlighted line is the command fired to build the new image from the Dockerfile.
2. The steps in the Dockerfile.
3. The temporary images created after executing each step.
4. The temporary containers created from the previous step's image to apply the current step. Once the current step is applied successfully, a new image is created from this container and the container is removed.
5. The final image.
$ docker build --build-arg some_variable_name=a_value -t myimage:v1 .
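For --build-arg to have any effect, the Dockerfile must declare the matching ARG (a minimal sketch; the base image and echo are just illustrations):
FROM alpine
ARG some_variable_name
RUN echo "some_variable_name is: $some_variable_name"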
Most useful commands in Dockerfile

RUN – Run some Linux commands, like updating apt, installing new software/tools, or creating new folders/files.
  RUN mkdir -p /home/redis
  RUN apt update

COPY – Copy files and folders from the local system into the Docker image file system.
  COPY ./ ./opt
  ./ is the location on the local system relative to the build context
  ./opt is the location inside the image
  COPY ./ ./
  COPY ./package.json ./

ADD – Similar to COPY, but ADD provides additional features like remote URL and tar extraction support.
  Example: if you want to add an xx.tar.gz to /usr/local in the container, unzip it, and then remove the useless compressed package.
  With COPY (the extraction step implied above is written out):
  COPY resources/jdk-7u79-linux-x64.tar.gz /tmp/
  RUN tar -xzf /tmp/jdk-7u79-linux-x64.tar.gz -C /usr/local
  RUN rm /tmp/jdk-7u79-linux-x64.tar.gz
  With ADD:
  ADD resources/jdk-7u79-linux-x64.tar.gz /usr/local/
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY target/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Multi-step image
For this we need to use two base images in the Dockerfile: one with Maven, Git etc. and the other with only Java.

FROM openjdk-with-git-and-maven:17 as builder
WORKDIR /app
COPY . .                                        # copy source code from host to the image work dir /app
RUN mvn clean package                           # assume the jar is created under /app/target/app.jar

FROM just-open-jdk:17
COPY --from=builder /app/target/app.jar /opt    # copy from the builder stage to the new image path /opt
ENTRYPOINT ["java", "-jar", "/opt/app.jar"]     # finally set the entrypoint to run the application
Another multi-stage example, in the style of the Spring Boot Docker guide; the first stage (a sketch, assuming the standard mvnw build and jar unpack) produces the exploded jar that the second stage copies:

# build stage (assumed: standard mvnw build + unpack)
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src
RUN ./mvnw install -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)

# runtime stage
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]
Docker network:
Docker by default comes with networks. We can create our own network so that a set of containers can talk to each other.
$ docker run -d \
  -p 8081:8081 \
  -e ME_CONFIG_MONGODB_ADMINUSERNAME=admin \
  -e ME_CONFIG_MONGODB_ADMINPASSWORD=password \
  -e ME_CONFIG_MONGODB_SERVER=mongodb \
  --network mongodb-network \
  --name mongo-express \
  mongo-express

The network name (mongodb-network) and the ME_CONFIG_MONGODB_SERVER value (which is the mongodb container's name) must match what the mongodb container was started with, so that these two containers can talk to each other over the mongodb-network network.
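For completeness, a sketch of the companion commands the note above assumes (creating the network and starting the mongodb container with matching names):
$ docker network create mongodb-network
$ docker run -d \
  -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=password \
  --network mongodb-network \
  --name mongodb \
  mongo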
Docker Volume
If a container uses a volume (mount), the mount is not deleted when the container is deleted; that persistence is the main reason why we use mounts. If you do need to delete the associated volumes along with the container, use:
$ docker rm -v containerID   (-v means delete volumes along with deleting the container)
In a Dockerfile:
VOLUME "/data"
We cannot bind to a host directory or another container in a Dockerfile; those are possible only from the command line.
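A small sketch of working with a named volume from the command line (the volume and container names are just examples):
$ docker volume create mydata
$ docker run -d --name myredis -v mydata:/data redis
$ docker volume ls
$ docker volume inspect mydata   # shows where the volume's data lives on the host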
Docker compose:
Docker compose is a really useful tool that lets us spin up multiple containers with a single command. It also creates a common network between the containers that they can use to communicate with one another.
Using docker compose you can connect all the isolated containers as a single service.
Docker compose is a tool for running multi-container applications on Docker, defined using the compose file format. A compose file is used to define how the one or more containers that make up your application are configured. Once you have a compose file, you can create, start and stop your applications with a single command.
docker-compose -f <FILE> up – to start network and then services
docker-compose -f <FILE> down → to stop services and then the network
docker-compose -f <FILE> up --build → if you want to create an image using Dockerfile and then
spin up containers
If we are in the same directory as the docker-compose file, then we can use
$ docker-compose ps → to get details of containers started by the compose file.
version: "3"
services:
mongo1:
image: mongo
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=password
healthcheck:
test: echo 'db.runCommand("ping").ok' | mongo mongo1:27017/test --quiet
interval: 30s
timeout: 30s
retries: 2
start_period: 10s
mongo-express1:
image: mongo-express
posts:
- 8080:8081
environment:
- ME_CONFIG_MONGODB_ADMINUSERNAME=admin
- ME_CONFIG_MONGODB_ADMINPASSWORD=password
- ME_CONFIG_MONGODB_SERVER=mongo1
depends_on:
mongo1:
condition: service_healthy
If you want to run your services in the background, you can pass the -d flag (for “detached” mode) to docker-
compose up and use docker-compose ps to see what is currently running:
$ docker-compose up -d
Starting composetest_redis_1...
Starting composetest_web_1...
$ docker-compose ps
See docker-compose --help to see other available commands. You can also install command completion for the bash
and zsh shell, which also shows you available commands.
If you started Compose with docker-compose up -d, stop your services once you’ve finished with them:
$ docker-compose stop
You can bring everything down, removing the containers entirely, with the down command. Pass --volumes to also remove the data volume used by the Redis container:
$ docker-compose down --volumes
The Dot-Env File (.env)
Values defined in an .env file next to the docker-compose.yml can be substituted into the compose file:
services:
  plex:
    image: linuxserver/plex
    environment:
      - env_var_name=${VARIABLE_NAME} # here it is
Hint: When working with an .env file, you can debug your docker-compose.yml files
quite easily. Just type docker-compose config. This way you’ll see how the
docker-compose.yml file content looks after the substitution step has been
performed without running anything else.
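The .env file itself is just key=value pairs (a minimal sketch; the variable name matches the snippet above, the value is arbitrary):
# .env
VARIABLE_NAME=some_value
Then docker-compose config, as in the hint above, shows the substituted result.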
Setting ARG Values in docker-compose
(docker-compose.yml file)
version: '3'
services:
  somename:
    build:
      context: ./app
      dockerfile: Dockerfile
      args:
        some_variable_name: a_value
relevant docs
When you try to set a variable which is not mentioned as an ARG in the Dockerfile, Docker will complain.
Different ways to set environment variables
1. Provide values one by one
services:
  plex:
    image: linuxserver/plex
    environment:
      - env_var_name=another_value
Relevant docs
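The docker run equivalent is simply the -e flag (a sketch; alpine's env command just prints the container environment so you can verify the value):
$ docker run -e env_var_name=another_value alpine env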
2. Pass environment variable values from your host
It’s the same as the above method. The only difference is, you don’t provide a value, but
just name the variable. This will make Docker access the current value in the host
environment and pass it on to the container.
$ docker run -e env_var_name alpine env
For the docker-compose.yml file, leave out the equals sign and everything after it for the same effect.
version: '3'
services:
  plex:
    image: linuxserver/plex
    environment:
      - env_var_name
3. Take values from a file (env_file)
Instead of writing the variables out or hard-coding them (not in good taste according to
the 12-factor folks), we can specify a file to read values from. The contents of such a file
look something like this:
env_var_name=another_value
The file above is called env_file_name (name arbitrary) and it’s located in the current
directory. You can reference the filename, which is parsed to extract the environment
variables to set:
$ docker run --env-file=env_file_name alpine env
relevant docs
With docker-compose.yml files, we just reference an env_file, and Docker parses it for the variables to set.
version: '3'
services:
  plex:
    image: linuxserver/plex
    env_file: env_file_name
Relevant docs
Here is a small cheat sheet, combining an overview of ARG and ENV availability with
common ways to set them from the command line.
[Image: An overview of ARG and ENV availability]
Restart policies (--restart)
Using the --restart flag on docker run you can specify a restart policy for how a container should or should not be restarted on exit.
When a restart policy is active on a container, it will be shown as either Up or Restarting in docker ps. It can
also be useful to use docker events to see the restart policy in effect.
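For example (a sketch; the container name is a placeholder), you can watch restart activity for a single container with:
$ docker events --filter container=my-container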
An increasing delay (double the previous delay, starting at 100 milliseconds) is added before each restart to prevent flooding the server. This means the daemon will wait for 100 ms, then 200 ms, 400, 800, 1600, and so on, until either the on-failure limit is reached, the maximum delay of 1 minute is hit, or you docker stop or docker rm -f the container.
If a container is successfully restarted (the container is started and runs for at least 10 seconds), the delay is
reset to its default value of 100 ms.
You can specify the maximum number of times Docker will try to restart the container when using the on-failure policy. The default is that Docker will try forever to restart the container. The number of (attempted) restarts for a container can be obtained via docker inspect. For example, to get the number of restarts for container "my-container" (the format strings below are the standard ones for these inspect fields):
$ docker inspect -f "{{ .RestartCount }}" my-container
# 2
Or, to get the last time the container was (re)started:
$ docker inspect -f "{{ .State.StartedAt }}" my-container
# 2015-03-04T23:47:07.691840179Z
Combining --restart (restart policy) with the --rm (clean up) flag results in an error. On container restart,
attached clients are disconnected. See the examples on using the --rm (clean up) flag later in this page.
Examples
$ docker run --restart=always redis
This will run the redis container with a restart policy of always so that if the container exits, Docker will restart
it.
$ docker run --restart=on-failure:10 redis
This will run the redis container with a restart policy of on-failure and a maximum restart count of 10. If the redis container exits with a non-zero exit status more than 10 times in a row, Docker will abort trying to restart the container. Providing a maximum restart limit is only valid for the on-failure policy.