Docker Notes

The document provides information on Docker concepts including namespaces, control groups (cgroups), Docker commands like docker info and docker system prune, and how Docker images and containers work. Namespaces isolate resources like networking and filesystems for processes, while cgroups limit resources like CPU and memory for processes. A Docker image contains a filesystem snapshot and startup command, and running an image as a container uses that snapshot and command.

Uploaded by srn

Table of Contents

Docker Introduction
    Namespacing
        Types of Namespaces
    Control Group (cgroup)
$docker info command
Quick glance at docker image
Docker Container commands
Docker image commands
Creating new image
    Most useful commands in Dockerfile
        ARG and ENV Availability
    Multi-step image
    Shell and Exec Form
Docker network
Docker compose
    The Dot-Env File (.env)
    Setting ARG Values in docker-compose
Different ways to set environment variables
    1. Provide values one by one
    2. Pass environment variable values from your host
    3. Take values from a file (env_file)
Restart policies (--restart)

Docker Vs VM
1. A Docker image is smaller: Docker image size is in megabytes, vs VM size is typically in gigabytes.
2. Docker containers start and run much faster.
3. A VM of any OS can run on any OS host.
4. Docker uses the host OS kernel, vs a VM uses its own kernel on top of the host OS.
5. Docker is operating-system-level virtualization, as it uses the host system to talk to the kernel and hardware.
6. VMs are hardware-level virtualization, as a VM brings its own kernel to talk to the hardware.
7. In Docker we do process isolation.

Docker Introduction
Namespacing Vs CGroup
Namespace: isolate resources per process or group of processes, e.g. programs (different versions of Java)
CGroup: limit the amount of resources per process, like CPU, memory, HDD, and network

Namespacing:
Isolating resources per process (or group of processes). Example: which portion of the hard disk, which hard disk or mount path to use, which users to use, which hostnames and which network to use, etc. Also which processes: for example, if we have two versions of Java installed but one application needs Java 8 and the other needs Java 11, these can be defined and isolated using namespaces to avoid a version conflict.

“Namespaces are a feature of the Linux kernel that partitions kernel resources such that one
set of processes sees one set of resources while another set of processes sees a different set of
resources.”

It means two isolated processes work as if they were running on two different systems, with no knowledge of each other. In other words, the key feature of namespaces is that they isolate processes from each other. On a server where you are running many different services, isolating each service and its associated processes from other services means that there is a smaller blast radius for changes, as well as a smaller footprint for security-related concerns.

Types of Namespaces
Within the Linux kernel, there are different types of namespaces. Each namespace has its own unique
properties:

• A user namespace has its own set of user IDs and group IDs for assignment to processes. In
particular, this means that a process can have root privilege within its user namespace without
having it in other user namespaces.
• A process ID (PID) namespace assigns a set of PIDs to processes that are independent from the
set of PIDs in other namespaces. The first process created in a new namespace has PID 1 and
child processes are assigned subsequent PIDs. If a child process is created with its own PID
namespace, it has PID 1 in that namespace as well as its PID in the parent process’ namespace.
See below for an example.
• A network namespace has an independent network stack: its own private routing table, set of IP
addresses, socket listing, connection tracking table, firewall, and other network-related
resources.
• A mount namespace has an independent list of mount points seen by the processes in the
namespace. This means that you can mount and unmount filesystems in a mount namespace
without affecting the host filesystem.
• An interprocess communication (IPC) namespace has its own IPC resources, for example
POSIX message queues.
• A UNIX Time-Sharing (UTS) namespace allows a single system to appear to have different
host and domain names to different processes.
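On a Linux host the namespace membership of any process can be inspected directly under procfs; a quick sketch (Linux-only; the inode numbers shown are illustrative):

```shell
# Each entry under /proc/<pid>/ns is one namespace the process belongs to.
# Two processes show the same inode number if they share that namespace.
readlink /proc/self/ns/pid   # e.g. pid:[4026531836]
readlink /proc/self/ns/net   # e.g. net:[4026531840]
ls /proc/self/ns             # one symlink per namespace type: ipc mnt net pid user uts ...
```

Comparing /proc/1/ns/net with /proc/self/ns/net (root may be required to read another process's links) tells you whether a process shares the host network namespace.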

Control Group (cgroup)


Limit the amount of resources used per process,
like memory, CPU, HD I/O, network bandwidth.
A control group (cgroup) is a Linux kernel feature that limits, accounts for, and isolates the resource
usage (CPU, memory, disk I/O, network, and so on) of a collection of processes.
Cgroups provide the following features:

• Resource limits – You can configure a cgroup to limit how much of a particular resource
(memory or CPU, for example) a process can use.
• Prioritization – You can control how much of a resource (CPU, disk, or network) a process
can use compared to processes in another cgroup when there is resource contention.
• Accounting – Resource limits are monitored and reported at the cgroup level.
• Control – You can change the status (frozen, stopped, or restarted) of all processes in a cgroup
with a single command.

So basically you use cgroups to control how much of a given key resource (CPU, memory, network,
and disk I/O) can be accessed or used by a process or set of processes. Cgroups are a key component
of containers because there are often multiple processes running in a container that you need to
control together. In a Kubernetes environment, cgroups can be used to implement resource requests
and limits and corresponding QoS classes at the pod level.
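As a sketch of where these limits live on a cgroup v2 Linux host (the exact paths differ under cgroup v1, so treat the file names as an assumption about your host):

```shell
# The kernel exposes cgroups as a filesystem; the root lists the available controllers.
cat /sys/fs/cgroup/cgroup.controllers        # e.g. cpuset cpu io memory pids
# Inside a container started with `docker run -m 512m ...`, the limit appears as
#   /sys/fs/cgroup/memory.max  ->  536870912
# and `--cpus=2` appears as
#   /sys/fs/cgroup/cpu.max     ->  200000 100000   (quota and period, in microseconds)
```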

$docker info command


Description:
This command displays system-wide information about the Docker server running on your system. A few important details the command provides:
1. Containers count and stats
2. Images count
3. Installed server version
4. Storage driver
5. Swarm active/inactive details
6. Kernel version
7. Operating system
8. CPUs
9. Total memory
10. Docker root directory
11. Product license

Quick glance at docker image:


An image consists of two things:
→ a file system snapshot (includes the binaries needed to run the app; for example, to run a Spring Boot application we need two things: the fat jar and the java binary)
→ a start-up command.

[Figure: the image holds the file system snapshot (java, app.jar) and the start-up command (java -jar app.jar); the running container is that snapshot plus the start-up command running as a process, with its isolated portion of kernel resources: RAM, CPU, network, file system.]

We can override the start-up command when creating a new container.

$ docker run image_name newStartUpCommand

$ docker run busybox echo hi
$ docker run busybox ls
$ docker run busybox ping www.google.com

docker run imageName = docker create imageName + docker start -a containerId

$ docker create imageName

$ docker start -a containerId → start an existing container which is in the stopped state

$ docker system prune → this command will remove
→ all stopped containers
→ all networks not used by at least one container
→ all dangling images
→ all build cache

Docker Container commands


List containers:
$ docker ps → list only running containers
$ docker ps -a → list running and stopped containers
$ docker ps -q → list only running container Ids

Create and run:
$ docker run --name mycontainer imageToUse
This command pulls the image from the remote repository if the host system does not have the image locally, and then creates (and starts) a container. Kubernetes equivalent:
$ kubectl run podName --image imageToUse

Create and run in detached mode:
$ docker run -d imageName (e.g. $ docker run -d redis)

Create and run in interactive mode with a command (helpful for dry runs and to validate what is inside an image; not used in prod as this overrides the command defined in the actual image):
$ docker run -it imageName commandToRun (e.g. $ docker run -it redis sh)

Create and run with an override command:
$ docker run image java -jar app.jar

Override entrypoint and command:
$ docker run --entrypoint java imageToUse -jar app.jar

Create but do not run (creates a new container but will not start it):
$ docker create imageName

Execute an additional command in an already running container, without disturbing the primary command/process (useful to debug a live container in prod):
$ docker exec -it <container id> command
exec → run another command
-i → keep STDIN open so we can provide input to the container
-t → allocate a tty
Examples:
$ docker exec -it myredisContainerID redis-cli
$ docker exec -it myredisContainerID sh
$ docker exec -it myredisContainerID bin/sh
Kubernetes equivalent: kubectl exec -it <POD_NAME> <COMMAND> is deprecated. Use the form with -- in front of the command instead:
$ kubectl exec -it <POD_NAME> -- <COMMAND> (e.g. $ kubectl exec -it frontendpod -- bash)

Port forwarding from a local port to a container port:
$ docker run -p portInLocalSystem:portInContainer imageName
$ docker run -p 8080:6379 redis
$ docker run -p 6379:6379 redis

Pass environment values to the container using the -e flag:
$ docker run -d -e SSO_HOST_URL=http://something -e SSO_PASS='A$B' my-image
(quote values containing $ so the host shell does not expand them)

If you declare an environment variable name but do not provide a value in the run command, it is substituted from the host system's environment variable value:
$ docker run -d -e SSO_HOST_URL -e SSO_PASS my-image

Or take the values from a file:
$ docker run --env-file=my_env.properties -it ubuntu bash
my_env.properties content (every line will be added to the environment):
my_env_property=BLAH...
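What --env-file does can be mimicked with plain shell to see the effect (docker parses the file itself rather than sourcing it through a shell, so this is only an illustration; the file name is taken from the example above):

```shell
# Create the properties file, then export every KEY=VALUE line into the environment.
printf 'my_env_property=BLAH\n' > my_env.properties
set -a                   # auto-export all variables assigned from here on
. ./my_env.properties    # read the KEY=VALUE lines
set +a
echo "$my_env_property"  # prints: BLAH
```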

Run an existing, non-running container:
$ docker start -a containerID
-a → attached mode

See the logs:
$ docker logs containerID

Follow the logs (--follow or -f):
$ docker logs -f containerId

Show logs since a timestamp (e.g. 2013-01-02T13:23:37Z) or a relative time (e.g. 42m for 42 minutes):
$ docker logs --since 42m containerid

Show only the last N lines of the logs (--tail or -n):
$ docker logs --tail 1000 -f containerid

Stop a container gracefully:
$ docker stop containerId
If the container could not stop within 10 seconds, Docker falls back to the kill command (see the next entry). Use --time (or -t) to change the default 10 seconds to something else.

Stop a container immediately:
$ docker kill containerID

Restart one or more containers:
$ docker restart containerId
--time (or -t) → seconds to wait for stop before killing the container.

Show the processes that are running inside a container, both the primary process (defined in the Dockerfile) and secondary processes (like the sh started by docker exec -it containerId sh):
$ docker top containerId

Display a live stream of container resource usage statistics, showing how much CPU, memory, network, etc. each container uses:
$ docker stats [OPTIONS] [CONTAINER...]
$ docker stats
The PIDS column contains the number of processes and kernel threads created by that container.

We can even create an image from a running container, though it is not used in real projects, as we should use a Dockerfile to make changes, recreate images, and version the changes. But for quick testing or validating purposes we can create images from a running container:
$ docker commit -c 'CMD ["echo", "hi"]' runningContainerId
After -c we can provide the override command to be used for the new image (here I have added echo hi).

Pause: we can pause the running processes inside a container as well:
$ docker pause my_container
The docker pause command suspends all processes in the specified containers. On Linux, this uses the freezer cgroup. Traditionally, when suspending a process the SIGSTOP signal is used, which is observable by the process being suspended.
Probably useful when a long-running process is executing and you want to make other system changes in between, so that the process can pick up the new changes without killing and restarting a new container.

Unpause: resume the paused processes in a container:
$ docker unpause my_container

Set container memory while starting a container:
$ docker run -m 512m nginx
$ docker run -m 512m --memory-reservation=256m nginx
-m or --memory defines the max (hard) memory limit; --memory-reservation defines a soft minimum reservation.

Define CPU limits for a container:
$ docker run --cpus=2 nginx
$ docker run --cpus=2 --cpu-shares=2000 nginx
--cpus defines the max CPU; --cpu-shares defines relative priority.

Mounting a volume with a container:
$ docker run -p 3000:3000 -v hostDir:/containerDir -d imageId

$ docker run -p 3000:3000 -v /containerDir/otherDir -v hostDir:/containerDir -d imageId

When the container dir is mapped to the host system, changes to the files on the host are reflected in the container as well. If containerDir does not exist, it is created inside the container.

Note: /containerDir/otherDir will not be mapped to the host system because that volume has no mapping for it, which means that dir stays untouched and keeps the same content as when the image was created.
Stop running containers:
$ docker stop containerid(s)
$ docker stop $(docker ps -q) → stop all running containers

Delete stopped containers:
$ docker rm containerId(s)
OR
$ docker container prune → delete all stopped containers

Restart automatically (see the restart policies section below):
$ docker run -dit --restart [restart-policy-value] [image_name]

Inspect a container. docker inspect provides detailed information on constructs controlled by Docker. By default, docker inspect renders results in a JSON array:
$ docker inspect containerId
$ docker inspect 55c1595745fc
Kubernetes equivalent: kubectl describe resourceType <resource_name> (e.g. kubectl describe pod podName)

Copy files and folders between host and container. docker cp is a handy utility that allows you to copy files and folders between a container and the host system:
$ docker cp from to

Host to container:
$ docker cp <FILE_TO_COPY_FROM_HOST> <CONTAINER_ID>:<PATH_INSIDE_CONTAINER_TO_PLACE_THE_FILE>
$ docker cp a.py ubu_container:/home/dir1

Container to host:
$ docker cp <CONTAINER_ID>:<FILE_TO_COPY_INSIDE_CONTAINER> <PATH_IN_HOST_TO_PLACE_FILE>
$ docker cp ubu_container:/home/dir1/new.txt /home/abhishek

Docker image commands

Pull an image from a remote repo:
$ docker pull imageRepository:tag (e.g. $ docker pull redis:1.3.5)
OR
$ docker image pull imageRepository:tag

Push an image from local to a remote repository:
$ docker push userId/repositoryName:tag
userId → your docker.io user id
repositoryName → image name
tag → version (latest, 1.0.1, etc.)

Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE:
$ docker image tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

Save one or more images to a tar archive:
$ docker save [OPTIONS] IMAGE [IMAGE…]
$ docker save myrepository:1.0.1 -o myrepository.tar

Load an image from a tar file:
$ docker load -i myrepository.tar

List all images $ docker images

$docker image ls

List changes to image $docker image history imageId

Delete images $docker rmi imageId


$docker rmi imageId imageId2

$docker image rm imageId


$docker image rm imageId imageId2

Delete unused images $docker image prune

Creating new image:


To create our own image we need to create a Dockerfile with the below things:
1. Specify a base image
2. Run some commands to install the additional programs and configuration needed to run the application inside the image being created
3. Specify a command to run on container startup

Once Dockerfile is defined, we can execute below command to create new image out of it.
$ docker build -t dockerUserId/myImageName:v1 .
$ docker build -t sabtharshi/myImageName:v1 .
If your file name is not Dockerfile then you need to use -f flag to specify the file name
$ docker build -f Dockerfile.dev -t sabtharshi/myImageName:v1 .
Please note that the command ends with a dot (.) which is the build context path. Build context path
is nothing but where command is executing (directory path in local system). When needed, we can
copy files and folders from build context path to docker image that is being built.
Each step in the Dockerfile creates a new temporary container based on the previous step's image, applies the new step inside that running container, and then creates a new image out of the temporary container. That image is used to run the next step.
In the build output below:
1. The first line is the command fired to build the new image from the Dockerfile.
2. The "Step x/4" lines are the steps in the Dockerfile.
3. The "---> <id>" lines are the temporary images created after executing each step.
4. The "Running in <id>" lines are the temporary containers created from the previous step's image to apply the current step. Once the current step is applied successfully, a new image is created from that container and the container is removed.
5. The final id ("Successfully built ...") is the final image.

Dockerfile content:

FROM redis:latest
RUN mkdir -p /home/redis-test
RUN echo "&&&&&&&&&&&&&&&&&&&&&&&&"
CMD ["redis-server"]

Output in the terminal:

root@sabthaa-ubuntu:/home/sabthaa/Documents/docker_practice/redis-image# docker build -t mynewredis .
Sending build context to Docker daemon  2.048kB
Step 1/4 : FROM redis:latest
 ---> 7614ae9453d1
Step 2/4 : RUN mkdir -p /home/redis-test
 ---> Running in c72b6c917361
Removing intermediate container c72b6c917361
 ---> 56b93b67b590
Step 3/4 : RUN echo "&&&&&&&&&&&&&&&&&&&&&&&&"
 ---> Running in e04b41d78d40
&&&&&&&&&&&&&&&&&&&&&&&&
Removing intermediate container e04b41d78d40
 ---> 6b840afb1d69
Step 4/4 : CMD ["redis-server"]
 ---> Running in bbfb93eb1b00
Removing intermediate container bbfb93eb1b00
 ---> bd82b2f832fc
Successfully built bd82b2f832fc
Successfully tagged mynewredis:latest
When running RUN apt update, always combine it with the install step in the same instruction (RUN apt update && apt install -y <packages>); otherwise a cached update layer can be reused later with a stale package index.

Most useful commands in Dockerfile


FROM: every Dockerfile should derive from an existing image, hence the first line should always be FROM. Only one FROM can be present in a (single-stage) Dockerfile. The base image provides the OS needed to run our application.
FROM ubuntu
FROM openjdk8:latest

MAINTAINER: who maintains the image.
ENV: specify default environment variables. Available both during image build time and within the running container. Can be overridden while starting new containers:
ENV name rishi
$ docker run -e some_variable=Me imageId

ARG: sets a build-time variable. ARG is only available during the build of a Docker image (docker build etc.), not after the image is created and containers are started from it (ENTRYPOINT, CMD). You can use ARG values to set ENV values to work around that.
Syntax in Dockerfile:
ARG some_variable_name=default_value
Or without a default value, which should then be provided in the build command:
ARG some_variable_name
$ docker build --build-arg some_variable_name=a_value -t myimage:v1 .
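The ARG-to-ENV workaround mentioned above can be sketched like this (the image, variable names, and default value are illustrative):

```dockerfile
FROM alpine:3.19
ARG APP_VERSION=0.0.1
# Copy the build-time ARG into an ENV so the value survives into the running container
ENV APP_VERSION=${APP_VERSION}
CMD ["sh", "-c", "echo running version $APP_VERSION"]
```

Building with `docker build --build-arg APP_VERSION=1.2.0 -t demo .` bakes the value in; without the ENV line, $APP_VERSION would be empty at container run time.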
RUN: run Linux commands, like updating apt, installing new software/tools, creating new folders/files, etc.
RUN mkdir -p /home/redis
RUN apt update

WORKDIR: move to a working directory inside the image/container. If the path does not exist, it is created automatically.
WORKDIR /usr/app

COPY: copy files and folders from the local system into the Docker image file system.
COPY ./ ./opt
(./ is the location in the local system relative to the build context; ./opt is the location inside the image)
COPY ./ ./
(in this example the second ./ is the current directory inside the image)
COPY ./package.json ./
ADD: similar to COPY, but ADD provides additional features like remote URL and (local-only) tar extraction support.
For example, to add xx.tar.gz to /usr/local in the container, unzip it, and then remove the useless compressed package:
With COPY:
COPY resources/jdk-7u79-linux-x64.tar.gz /tmp/
RUN tar -zxvf /tmp/jdk-7u79-linux-x64.tar.gz -C /usr/local
RUN rm /tmp/jdk-7u79-linux-x64.tar.gz
With ADD:
ADD resources/jdk-7u79-linux-x64.tar.gz /usr/local/
Besides being shorter, the COPY variant uses three layers, but ADD uses only one.
EXPOSE: expose a port.
EXPOSE 8080
EXPOSE 8090
CMD: what process should run on container startup. CMD defines default commands and/or parameters for a container. CMD is the instruction to use if you need a default command which users can easily override. If a Dockerfile has multiple CMDs, it only applies the instructions from the last one.
CMD ["redis-server"]
CMD ["java", "-jar", "myapp.jar"]

ENTRYPOINT: preferred when you want to define a container with a specific executable. You cannot override an ENTRYPOINT by adding command-line parameters to docker run; you must use the --entrypoint flag instead. By opting for this instruction, you imply that the container is specifically built for such use.
ENTRYPOINT ["java", "-jar", "app.jar"]
There are many situations in which combining CMD and ENTRYPOINT is the best solution for your Docker container: the executable is defined with ENTRYPOINT, while CMD specifies the default parameters.

USER: set the user the container runs as.
USER 1000

VOLUME: declare a mount point.
VOLUME /tmp

ARG and ENV Availability


FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

Create image for above dockerfile


docker build \
--build-arg JAR_FILE=target/*.jar \
-t myorg/myapp .

FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY target/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

Create image for above dockerfile


docker build -t myorg/myapp .
Multi-step image
While building the artifact (jar file) we need tools like git, maven, etc. along with java. But while running the artifact (jar file) in a production environment, we do not need them. This means we need some strategy in the Dockerfile to produce a production image.

For this we use two base images in the Dockerfile: one with maven, git, etc., and the other with only java.

FROM openjdk-with-git-and-maven:17 as builder
WORKDIR /app
COPY . .                                      // copy source code from host to image work dir /app
RUN mvn clean package                         // assume the jar is created under /app/target/app.jar

FROM just-open-jdk:17
COPY --from=builder /app/target/app.jar /opt  // copy from the builder location to the new image path /opt
ENTRYPOINT ["java", "-jar", "/opt/app.jar"]   // finally set the entrypoint to run the application

FROM openjdk:8-jdk-alpine as build


WORKDIR /workspace/app

COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src

RUN ./mvnw install -DskipTests


RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]

Shell and Exec Form


It is important to discuss the forms of instructions. Docker ENTRYPOINT and CMD can have
two forms:
Shell form
Exec form
The syntax for any command in shell form is:
<instruction> <command>

The syntax for instructions in exec form is:

<instruction> ["executable", "parameter"]

You can write Docker CMD/ENTRYPOINT instructions in both forms:


CMD echo "Hello World" (shell form)
CMD ["echo", "Hello World"] (exec form)
ENTRYPOINT echo "Hello World" (shell form)
ENTRYPOINT ["echo", "Hello World"] (exec form)
However, try to keep all your instructions in exec form to prevent potential performance
issues.
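The practical difference between the two forms is visible without Docker: shell form is wrapped in /bin/sh -c, while exec form runs the binary directly with literal arguments. A runnable sketch with plain sh:

```shell
# What shell form does: the command line goes through a shell, so $VARS expand.
GREETING=hi sh -c 'echo "$GREETING world"'      # prints: hi world
# What exec form does: the executable gets its arguments verbatim, no expansion.
env GREETING=hi /bin/echo '$GREETING world'     # prints: $GREETING world
```

This is why CMD ["echo", "$HOME"] prints the literal string $HOME, while CMD echo $HOME expands it.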

Docker network:

Docker comes with default networks out of the box. We can create our own network so that a set of
containers can talk to each other.

List all networks $docker network ls

Create a new network (bridge network):
$ docker network create claims-modernization

Use a network with the docker run command:
$ docker network create mongodb-network

$ docker run -d \
-p 27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME=admin \
-e MONGO_INITDB_ROOT_PASSWORD=password \
--network mongodb-network \
--name mongodb \
mongo

$ docker run -d \
-p 8081:8081 \
-e ME_CONFIG_MONGODB_ADMINUSERNAME=admin \
-e ME_CONFIG_MONGODB_ADMINPASSWORD=password \
-e ME_CONFIG_MONGODB_SERVER=mongodb \
--network mongodb-network \
--name mongo-express \
mongo-express

The two --network values must match, and ME_CONFIG_MONGODB_SERVER must match the --name given to the mongo container, so that these two containers can talk to each other over mongodb-network.

Connect a container to an existing network:
$ docker network connect NETWORK_NAME CONTAINER_ID

Disconnect a container from a network:
$ docker network disconnect NETWORK_NAME CONTAINER_ID

Inspect a network:
$ docker network inspect NETWORK_NAME

DOCKER Volume

1. $ docker volume create VOLUME_NAME
2. $ docker volume ls
3. $ docker volume inspect VOLUME_NAME
4. $ docker volume prune

$ docker run -v VOLUME_NAME:/workspace/some --name mycontainer ubuntu → use an existing, pre-created volume
$ docker run -v /home/location:/workspace/some --name mycontainer ubuntu → bind a host path to the container path
$ docker run -v /workspace/some --name mycontainer ubuntu → anonymous volume managed by Docker
$ docker run --volumes-from=AnotherContainerName ubuntu → mount the volumes of another container

If a container uses a volume, the volume is not deleted when the container is deleted; that is the main reason we use volumes. If you do need to delete the associated volumes along with the container, use:
$ docker rm -v containerID (-v means delete volumes along with deleting the container)

In a Dockerfile:

VOLUME "/data"

We cannot bind to a host directory or to another container inside a Dockerfile; those are possible only from the command line.
Docker compose:
Docker compose is a really useful tool that lets us spin up multiple containers with a single command. It also creates a common network between the containers that they can use to communicate with one another. Using docker compose you can connect all the isolated containers as a single service.
Docker compose is a tool for running multi-container applications on Docker, defined using the compose file format. A compose file is used to define how the one or more containers that make up your application are configured. Once you have a compose file, you can create, start, and stop your application with a single command.
$ docker-compose -f <FILE> up → start the network and then the services
$ docker-compose -f <FILE> down → stop the services and then the network
$ docker-compose -f <FILE> up --build → if you want to create an image using the Dockerfile and then spin up containers
If we are in the same directory as the docker-compose file, then we can use:
$ docker-compose ps → to get details of the containers started by the compose file

Drawing 1: Basic docker-compose with volumes (default Dockerfile name: Dockerfile):

version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - REDIS_HOST=
      - REDIS_PWD=456
    volumes:
      - /app/node_modules
      - .:/app

Drawing 2: Docker-compose with a custom Dockerfile:

version: '3'
services:
  web:
    build:
      context: ./account-service   # where to find the file
      dockerfile: my-file          # docker file name
    ports:
      - "3000:3000"
    environment:
      - REDIS_HOST=
      - REDIS_PWD=456
    volumes:
      - /app/node_modules   # no mapping
      - .:/app              # everything in the host dir mapped to container /app

Drawing 3: Docker-compose with custom Dockerfile and overriding command:

version: '3'
services:
  web:
    build:
      context: .            # where to find the file
      dockerfile: my-file   # docker file name
    ports:
      - "3000:3000"
    environment:
      - REDIS_HOST=http://abc
      - REDIS_PWD           # value not provided: the host system's environment variable value is passed to the container
    volumes:
      - /app/node_modules   # no mapping
      - .:/app              # everything in the host dir mapped to container /app
    command: ["npm", "run", "test"]

Drawing 4: Docker-compose depends_on and health check:

version: "3"
services:
  mongo1:
    image: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
    healthcheck:
      test: echo 'db.runCommand("ping").ok' | mongo mongo1:27017/test --quiet
      interval: 30s
      timeout: 30s
      retries: 2
      start_period: 10s
  mongo-express1:
    image: mongo-express
    ports:
      - 8080:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongo1
    depends_on:
      mongo1:
        condition: service_healthy

If you want to run your services in the background, you can pass the -d flag (for “detached” mode) to docker-
compose up and use docker-compose ps to see what is currently running:

$ docker-compose up -d

Starting composetest_redis_1...

Starting composetest_web_1...

$ docker-compose ps

Name Command State Ports

-------------------------------------------------------------------------------------

composetest_redis_1 docker-entrypoint.sh redis ... Up 6379/tcp

composetest_web_1 flask run Up 0.0.0.0:5000->5000/tcp


The docker-compose run command allows you to run one-off commands for your services. For example, to see what
environment variables are available to the web service:

$ docker-compose run web env

See docker-compose --help to see other available commands. You can also install command completion for the bash
and zsh shell, which also shows you available commands.

If you started Compose with docker-compose up -d, stop your services once you’ve finished with them:

$ docker-compose stop

You can bring everything down, removing the containers entirely, with the down command. Pass --volumes to also
remove the data volume used by the Redis container:

$ docker-compose down --volumes

# Stop services only


docker-compose stop

# Stop and remove containers, networks..


docker-compose down

# Down and remove volumes


docker-compose down --volumes

# Down and remove images


docker-compose down --rmi <all|local>

The Dot-Env File (.env)


The .env file is quite simple; it is confusing only because of bad examples and
similarly named concepts that use the same format. What matters is the dot in
front of env (.env), not an "env_file".
If you have a file named .env in your project, it is only used to put values into the
docker-compose.yml file that sits in the same folder. Those values are used by Docker
Compose and Docker Stack. It has nothing to do with ENV, ARG, or anything else
Docker-specific; it is exclusively a docker-compose.yml concern.

The values in the .env file are written in the following notation:


VARIABLE_NAME=some value
OTHER_VARIABLE_NAME=some other value, like 5
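As a quick sketch, such a file can be created from the shell (the file content simply mirrors the notation above; the grep at the end is only a sanity check, not something Compose needs):

```shell
# Write an example .env file; Docker Compose reads it from the project folder.
cat > .env <<'EOF'
VARIABLE_NAME=some value
OTHER_VARIABLE_NAME=some other value, like 5
EOF

# Each non-empty line holds exactly one KEY=value pair.
grep -c '=' .env   # prints 2
```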

Those key-value pairs are used to substitute dollar-notation variables in the
docker-compose.yml file. It is essentially a pre-processing step, and the resulting
temporary file is what gets used. This is a nice way to avoid hard-coding values. You can
also use this to set the values of environment variables, by substituting the string, but
that does not happen automatically.
Here is an example docker-compose.yml file, relying on values provided from
a .env file:
version: '3'

services:
  plex:
    image: linuxserver/plex
    environment:
      - env_var_name=${VARIABLE_NAME} # here it is
Hint: when working with an .env file, you can debug your docker-compose.yml file
quite easily: just type docker-compose config. This shows you what the
docker-compose.yml content looks like after the substitution step has been
performed, without running anything else.

Here’s a gotcha you should know: environment variables set on your host
can override the values in your .env file.

Setting ARG Values in docker-compose

(docker-compose.yml file)
version: '3'

services:
  somename:
    build:
      context: ./app
      dockerfile: Dockerfile
      args:
        some_variable_name: a_value
relevant docs
When you try to set a variable that is not mentioned as an ARG in the Dockerfile, Docker will
complain.
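For completeness, here is a sketch of what the matching Dockerfile (./app/Dockerfile in the compose file above) might declare. The default value and the env_var_name variable are assumptions for illustration:

```dockerfile
FROM alpine
# Must be declared here, otherwise Docker complains about the build arg above.
ARG some_variable_name=a_default
# ARG values exist only at build time; copy into ENV to keep the value at runtime.
ENV env_var_name=$some_variable_name
```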

Different ways to set environment variables


Once the image is built, you can launch containers and provide values for ENV variables
in three different ways, either from the command line or using a docker-compose.yml
file. All of these override any default ENV values in the Dockerfile. Unlike ARG, you
can pass all kinds of environment variables to the container, even ones not explicitly
defined in the Dockerfile. Whether that does anything depends on your application,
however.

1. Provide values one by one


From the command line, use the -e flag:
$ docker run -e "env_var_name=another_value" alpine env
relevant docs
From a docker-compose.yml file:
version: '3'

services:
  plex:
    image: linuxserver/plex
    environment:
      - env_var_name=another_value
Relevant docs
2. Pass environment variable values from your host
It’s the same as the method above; the only difference is that you don’t provide a value
but just name the variable. This makes Docker read the current value from the host
environment and pass it on to the container.
$ docker run -e env_var_name alpine env
For the docker-compose.yml file, leave out the equals sign and everything after it for the
same effect.
version: '3'

services:
  plex:
    image: linuxserver/plex
    environment:
      - env_var_name
3. Take values from a file (env_file)
Instead of writing the variables out or hard-coding them (not in good taste according to
the 12-factor folks), we can specify a file to read values from. The contents of such a file
look something like this:
env_var_name=another_value
The file above is called env_file_name (name arbitrary) and it’s located in the current
directory. You can reference the filename, which is parsed to extract the environment
variables to set:
$ docker run --env-file=env_file_name alpine env
relevant docs
With docker-compose.yml files, we just reference an env_file, and Docker parses it for the
variables to set.
version: '3'

services:
  plex:
    image: linuxserver/plex
    env_file: env_file_name
Relevant docs
Here is a small cheat sheet, combining an overview of ARG and ENV availability with
common ways to set them from the command line.
[Figure: An overview of ARG and ENV availability]

Restart policies (--restart)

Using the --restart flag on docker run, you can specify a restart policy for how a container should or should not
be restarted on exit.

When a restart policy is active on a container, it will be shown as either Up or Restarting in docker ps. It can
also be useful to use docker events to see the restart policy in effect.

Docker supports the following restart policies:


Policy                     Result
no                         Do not automatically restart the container when it exits. This is the default.
on-failure[:max-retries]   Restart only if the container exits with a non-zero exit status. Optionally,
                           limit the number of restart retries the Docker daemon attempts.
always                     Always restart the container regardless of the exit status. When you specify
                           always, the Docker daemon will try to restart the container indefinitely. The
                           container will also always start on daemon startup, regardless of the current
                           state of the container.
unless-stopped             Always restart the container regardless of the exit status, including on
                           daemon startup, except if the container was put into a stopped state before
                           the Docker daemon was stopped.

An increasing delay (double the previous delay, starting at 100 milliseconds) is added before each restart to
prevent flooding the server. This means the daemon will wait 100 ms, then 200 ms, then 400, 800, 1600, and so
on, until either the on-failure retry limit is reached, the maximum delay of 1 minute is hit, or you docker stop
or docker rm -f the container.
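The doubling schedule can be sketched in a few lines of shell (the loop just prints the documented delays; the 60000 ms cap mirrors the 1 minute maximum mentioned above):

```shell
# Print the restart backoff delays: start at 100 ms, double each time, cap at 1 minute.
delay=100
for attempt in 1 2 3 4 5 6 7 8 9 10; do
  echo "restart attempt $attempt: wait ${delay} ms"
  delay=$((delay * 2))
  if [ "$delay" -gt 60000 ]; then delay=60000; fi
done
```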

If a container is successfully restarted (the container is started and runs for at least 10 seconds), the delay is
reset to its default value of 100 ms.

You can specify the maximum number of times Docker will try to restart the container when using the on-
failure policy. The default is that Docker will try forever to restart the container. The number of (attempted)
restarts for a container can be obtained via docker inspect. For example, to get the number of restarts for
container “my-container”:

$ docker inspect -f "{{ .RestartCount }}" my-container

# 2

Or, to get the last time the container was (re)started;

$ docker inspect -f "{{ .State.StartedAt }}" my-container

# 2015-03-04T23:47:07.691840179Z

Combining --restart (restart policy) with the --rm (clean up) flag results in an error. On container restart,
attached clients are disconnected. See the examples on using the --rm (clean up) flag later in this page.

Examples
$ docker run --restart=always redis

This will run the redis container with a restart policy of always so that if the container exits, Docker will restart
it.

$ docker run --restart=on-failure:10 redis

This will run the redis container with a restart policy of on-failure and a maximum restart count of 10. If
the redis container exits with a non-zero exit status more than 10 times in a row Docker will abort trying to
restart the container. Providing a maximum restart limit is only valid for the on-failure policy.
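The same policies can also be set per service in a docker-compose.yml via the restart key, so you don't have to pass the flag on every docker run (the service name here just mirrors the redis examples above):

```yaml
services:
  redis:
    image: redis
    restart: on-failure   # also accepts: "no", always, unless-stopped
```

Note that in YAML the policy "no" must be quoted, because a bare no would be parsed as a boolean.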
