DockerOneForAllDeepDive PDF
Installing Docker
In this lesson, we’ll install the latest version of Docker CE. The commands used
throughout this lesson are below.
Prerequisites
Uninstall old versions:
Install Docker CE
Add the Docker repository:
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
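The repository step above assumes the yum repo tooling is already present. A minimal sketch of the full CentOS install flow (package names are the standard ones for Docker CE; adjust for your release):

```shell
# Utilities the repo setup depends on
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Install Docker CE and start the daemon
sudo yum install -y docker-ce
sudo systemctl start docker
sudo systemctl enable docker

# Optional: run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER
```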
Docker Architecture
In this lesson we will take a high-level look at the Docker Architecture.
Architecture Overview
Docker architecture:
Client-server architecture
Client talks to the Docker daemon
The Docker daemon handles:
Building
Running
Distributing
Both communicate using a REST API, over:
UNIX sockets
Network interface
Docker registries:
Docker objects:
Images:
Read-only template with instructions for creating a Docker container
An image is often based on another image
Create your own images
Use images published to a registry
Use a Dockerfile to build images
Containers:
Docker Swarm:
Modular in design:
Batteries included but replaceable
Based on open standards outlined by the Open Container Initiative
The major components:
Docker client
Docker daemon
containerd
runc
The components work together to create and run containers
Docker 0.9
Platform agnostic
Harder to innovate
Slow
Not what the ecosystem wanted
Image spec
Container runtime spec
Version 1.0 released in 2017
Docker Inc. heavily contributed
Docker 1.11 (2016) used the specification as much as possible
runc : implements the OCI container runtime spec; its only job is to create containers
containerd : manages container lifecycle operations (start, stop, pause, rm) on behalf of the daemon
shim : sits between containerd and runc so containers can keep running if the daemon is restarted
Running Containers
Creating a container:
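The commands for this step were lost in extraction; a minimal sketch (image and container names are illustrative):

```shell
# Create a container without starting it
docker container create --name my-nginx nginx

# Start it, then verify it is running
docker container start my-nginx
docker container ls

# Or create and start in one step, detached
docker container run -d --name my-nginx2 nginx
```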
Docker Hub
In this lesson, we will take a quick look at Docker Hub and its available features.
Docker Commands
In this lesson we will start working with Docker commands. We’ll see the management
commands, along with the ones for managing images and containers.
docker -h
docker image :
docker container :
attach Attach local standard input, output, and error streams to a running
container
commit Create a new image from a container’s changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container’s filesystem
exec Run a command in a running container
export Export a container’s filesystem as a tar archive
inspect Display detailed information on one or more containers
kill Kill one or more running containers
logs Fetch the logs of a container
ls List containers
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
prune Remove all stopped containers
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
run Run a command in a new container
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
wait Block until one or more containers stop, then print their exit codes
Creating Containers
In this lesson, we will take a deeper look into creating containers, by exploring a few
of the flags that will alter its behavior when created.
Exposing:
Publishing:
Executing a command:
Dockerfile
During a Docker run
Using the exec command
Example:
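The example itself did not survive extraction; one way exec is typically used (container name and commands are illustrative):

```shell
# Run an interactive shell inside a running container
docker container exec -it my-nginx /bin/bash

# Or run a one-off command without attaching
docker container exec my-nginx ls /usr/share/nginx/html
```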
Nginx Dockerfile
Container Logging
In this lesson, you will learn how to view the logs of a container to get vital output of
your application. You will also learn about some of the logging best practices of
containerized applications.
Nginx Example:
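The Nginx commands were lost in extraction; a sketch of the pattern, which relies on the best practice of the app logging to stdout/stderr (names and ports are illustrative):

```shell
# Run nginx publishing a port, then generate some traffic
docker container run -d --name nginx-logs -p 8080:80 nginx
curl localhost:8080

# View the logs; -f follows them like tail -f
docker container logs nginx-logs
docker container logs -f nginx-logs
```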
Networking Overview
In this lesson, we will go over the components and theory of how networking functions
in Docker.
Network Drivers:
bridge
host
overlay
macvlan
none
Network plugins
Sandboxes
Endpoints
Networks
Networking Commands
Networking Basics
ifconfig
docker network -h
docker network ls
docker network ls --no-trunc
Creating a network:
Deleting a network:
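The create/delete commands did not survive extraction; a minimal sketch (the network name is illustrative):

```shell
# Create a user-defined bridge network
docker network create br00

# Inspect it, then remove it
docker network inspect br00
docker network rm br00
```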
Networking Containers
In this lesson, we will dig deeper into container networking by supplying our own
subnet and gateway when creating a new network. We will then move on to
networking two different containers using an internal network. This will allow one
container to be publicly accessible while the other one is not.
ifconfig
docker container run --name network-test01 -it --network br04 centos /bin/bash
yum update -y
yum install -y net-tools
ifconfig
netstat -rn
cat /etc/resolv.conf
docker container run -d --name network-test02 --ip 10.1.4.102 --network br04 nginx
Inspect private-nginx :
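The network-creation commands this section depends on were lost in extraction; a sketch consistent with the subnet and --ip values used above (the internal network name is illustrative):

```shell
# Create a network with an explicit subnet and gateway
docker network create --subnet 10.1.4.0/24 --gateway 10.1.4.1 br04

# An internal network is not publicly accessible
docker network create --internal private-net

# Inspect a container's network settings
docker container inspect network-test02
```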
Storage Overview
In this lesson, we will look at how Docker handles storage for persistent and non-
persistent data.
Non-persistent
Local storage
Data that is ephemeral
Every container has it
Tied to the lifecycle of the container
Persistent
Volumes
Volumes are decoupled from containers
Non-persistent Data
Non-persistent data:
First-class citizens
Uses the local driver
Third party drivers:
Block storage
File storage
Object storage
Storage locations:
Linux: /var/lib/docker/volumes/
Windows: C:\ProgramData\Docker\volumes
Volume Commands
Volumes are the preferred method of maintaining persistent data in Docker. In this
lesson, we will begin learning how to use the volume subcommand to list, create, and
remove volumes.
docker volume -h
docker volume ls
Inspecting a volume:
Deleting a volume:
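The volume commands themselves were lost in extraction; a minimal sketch (the volume name is illustrative):

```shell
# Create, list, inspect, and remove a volume
docker volume create my-volume
docker volume ls
docker volume inspect my-volume
docker volume rm my-volume

# Remove all unused volumes
docker volume prune
```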
Volumes use a new directory that is created within Docker’s storage directory on the
host machine, and Docker manages that directory’s contents.
mkdir target
docker container ls
docker volume ls
-v "$(pwd)"/target2:/app \
nginx
mkdir nginx
cat << EOF > nginx/nginx.conf
user nginx;
worker_processes 1;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
EOF
Volumes are easier to back up or migrate than bind mounts. You can manage volumes
using Docker CLI commands or the Docker API. They work on both Linux and Windows
containers. Volumes can be more safely shared among multiple containers. Volume
drivers allow for:
sudo ls /var/lib/docker/volumes/html-volume/_data
-v html-volume:/usr/share/nginx/html/ \
nginx
Edit index.html :
sudo vi /var/lib/docker/volumes/html-volume/_data/index.html
Install Vim:
apt-get update -y
apt-get install vim -y
docker run -d \
--name=nginx-volume3 \
--mount source=html-volume,target=/usr/share/nginx/html,readonly \
nginx
Install Vim:
apt-get update -y
apt-get install vim -y
Dockerfiles are instructions. They contain all of the commands used to build an
image.
Dockerfile Layers
Dockerfile:
FROM ubuntu:15.04
COPY . /app
RUN make /app
CMD python /app/app.py
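The Dockerfile above is built and run like any other; a minimal sketch (image and container names are illustrative):

```shell
# Build an image from the Dockerfile in the current directory
docker image build -t my-app:latest .

# Run a container from the new image
docker container run -d --name my-app my-app:latest
```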
Best Practices:
CMD : Provides a default for an executing container. There can only be one CMD
instruction in a Dockerfile
EXPOSE : Informs Docker that the container listens on the specified network ports at
runtime
ADD : Copies new files, directories or remote file URLs from <src> and adds them to
the filesystem of the image at the path <dest> .
COPY : Copies new files or directories from <src> and adds them to the filesystem of
the container at the path <dest> .
VOLUME : Creates a mount point with the specified name and marks it as holding
externally mounted volumes from native host or other containers
USER : Sets the user name (or UID) and optionally the user group (or GID) to use
when running the image and for any RUN , CMD , and ENTRYPOINT instructions that
follow it in the Dockerfile
WORKDIR : Sets the working directory for any RUN , CMD , ENTRYPOINT , COPY , and
ADD instructions that follow it in the Dockerfile
ARG : Defines a variable that users can pass at build-time to the builder with the
docker build command, using the --build-arg <varname>=<value> flag
ONBUILD : Adds a trigger instruction to the image that will be executed at a later time,
when the image is used as the base for another build
HEALTHCHECK : Tells Docker how to test a container to check that it is still working
SHELL : Allows the default shell used for the shell form of commands to be overridden
vi Dockerfile
Dockerfile contents:
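The Dockerfile contents did not survive extraction; a sketch consistent with the weather-app Dockerfiles that appear later in these notes (paths assume the app source was cloned into src):

```dockerfile
FROM node
LABEL maintainer="example@example.com"
RUN mkdir -p /var/node
ADD src/ /var/node/
WORKDIR /var/node
RUN npm install
EXPOSE 3000
CMD ./bin/www
```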
docker image ls
docker container ls
Environment Variables
To make new software easier to run, you can use ENV to update the PATH environment
variable for the software that your container installs.
cd docker_images
mkdir env
cd env
Use the --env flag to pass an environment variable when building an image:
--env [KEY]=[VALUE]
ENV [KEY]=[VALUE]
ENV [KEY] [VALUE]
vi Dockerfile
Dockerfile contents:
EXPOSE $PORT
CMD ./bin/www
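Only the tail of the env Dockerfile survived extraction; the full file likely resembled the sketch below, with ENV supplying defaults that the EXPOSE instruction can reference (values are illustrative):

```dockerfile
FROM node
ENV NODE_ENV=production
ENV PORT=3000
RUN mkdir -p /var/node
ADD src/ /var/node/
WORKDIR /var/node
RUN npm install
EXPOSE $PORT
CMD ./bin/www
```

At run time, the value baked in with ENV can be overridden with the --env flag, e.g. docker container run -d --env PORT=3001 <IMAGE> .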
Build Arguments
In this lesson, we will explore using build arguments to parameterize an image build.
--build-arg [NAME]=[VALUE]
ARG [NAME]=[DEFAULT_VALUE]
cd docker_images
mkdir args
cd args
vi Dockerfile
Dockerfile:
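The Dockerfile contents were lost in extraction; a sketch of how ARG is typically used with the weather-app layout (the argument name and default are illustrative):

```dockerfile
FROM node
ARG SRC_DIR=/var/node
RUN mkdir -p $SRC_DIR
ADD src/ $SRC_DIR
WORKDIR $SRC_DIR
RUN npm install
EXPOSE 3000
CMD ./bin/www
```

Built with, for example: docker image build -t weather-app:args --build-arg SRC_DIR=/var/code .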
curl localhost:8085
cd docker_images
mkdir non-privileged-user
cd non-privileged-user
vi Dockerfile
Dockerfile contents:
cd ~/docker_images
mkdir node-non-privileged-user
cd node-non-privileged-user
vi Dockerfile
Dockerfile contents:
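The Dockerfile contents did not survive extraction; a sketch showing the USER instruction with a non-privileged account (the user name is illustrative):

```dockerfile
FROM node
RUN useradd -ms /bin/bash node_user
USER node_user
RUN mkdir -p /home/node_user/node
ADD src/ /home/node_user/node/
WORKDIR /home/node_user/node
RUN npm install
EXPOSE 3000
CMD ./bin/www
```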
Order of Execution
This lesson focuses on the order in which instructions are executed when building an
image. Some instructions may have unintended consequences that can cause your
build to fail.
cd docker_images
mkdir centos-conf
cd centos-conf
vi Dockerfile
Dockerfile contents:
vi Dockerfile
FROM nginx:latest
VOLUME ["/usr/share/nginx/html/"]
Inspect nginx-volume :
vi Dockerfile
Dockerfile contents:
Inspect weather-app4 :
Inspect weather-app5:
Inspect Prometheus:
Using .dockerignore
In this lesson, we’ll create a .dockerignore file, so that we can exclude files we don’t
want copied over when building an image.
cd docker_images
mkdir dockerignore
cd dockerignore
git clone https://github.com/linuxacademy/content-weather-app.git src
cd src
git checkout dockerignore
cd ../
vi .dockerignore
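The .dockerignore contents were lost in extraction; a sketch of typical entries for a Node app (adjust the patterns to your project):

```
.git
.gitignore
node_modules
*.md
```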
vi Dockerfile
Dockerfile contents:
WORKDIR /var/node
RUN npm install
EXPOSE $PORT
ENTRYPOINT ["./bin/www"]
Building Images
In this lesson, we will learn some alternate ways of building images.
To build one:
Useful flags:
cd docker_images/weather-app
cp Dockerfile Dockerfile.test
docker image build -t linuxacademy/weather-app:path-example1 \
-f Dockerfile.test .
Example:
Example:
Example:
cd docker_images
mkdir tar_image
cd tar_image
git clone https://github.com/linuxacademy/content-weather-app.git
cd content-weather-app
git checkout remote-build
tar -zcvf weather-app.tar.gz Dockerfile src
docker image build -t linuxacademy/weather-app:from-tar - < weather-app.tar.gz
cd docker_images
mkdir multi-stage-builds
cd multi-stage-builds
git clone https://github.com/linuxacademy/content-weather-app.git src
vi Dockerfile
Dockerfile contents:
FROM node:alpine
ARG VERSION=V1.1
LABEL org.label-schema.version=$VERSION
ENV NODE_ENV="production"
COPY --from=build /var/node /var/node
WORKDIR /var/node
EXPOSE 3000
ENTRYPOINT ["./bin/www"]
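The COPY --from=build line above references a first stage that is missing from these notes; a sketch of what that build stage likely looked like:

```dockerfile
FROM node AS build
RUN mkdir -p /var/node
ADD src/ /var/node/
WORKDIR /var/node
RUN npm install
```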
docker image ls
Tagging
In this lesson, we will talk about how to use the tag command, and best practices to
keep in mind when tagging.
Add a name and an optional tag with -t or --tag , in the name:tag format:
docker image ls
cd docker_images/weather-app/src
git log -1 --pretty=%H
cd ../
Tag the weather-app as the latest using the image tagged with the commit hash:
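The tag command itself was lost in extraction; a sketch, where <HASH> is the commit hash printed by git log above:

```shell
# Tag the hash-tagged image as latest (two tags, one image ID)
docker image tag linuxacademy/weather-app:<HASH> linuxacademy/weather-app:latest
docker image ls
```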
https://hub.docker.com/
Docker Push:
cd docker_images
mkdir dockerhub
cd dockerhub
vi Dockerfile
Dockerfile contents:
FROM node:alpine
ARG VERSION=V1.1
LABEL org.label-schema.version=$VERSION
ENV NODE_ENV="production"
COPY --from=build /var/node /var/node
WORKDIR /var/node
EXPOSE 3000
ENTRYPOINT ["./bin/www"]
cd src
git log -1 --pretty=%H
cd ../
docker login
docker image push <USERNAME>/weather-app:<HASH>
Image History
Show the history of an image:
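A sketch of the history command (nginx is an illustrative image):

```shell
# Each line corresponds to a layer-creating Dockerfile instruction
docker image history nginx

# --no-trunc shows the full commands; -q lists only the IDs
docker image history --no-trunc nginx
docker image history -q nginx
```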
vi output.txt
Use the quiet flag to list the image IDs, then save the output to a file using the no-
trunc flag:
Setup:
mkdir output
cd output
gzip weather-app-latest.tar
Docker Top:
Docker Stats:
Attach to container_process:
top
exit
To configure the restart policy for a container, use the --restart flag:
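A sketch of the three common restart policies (container names are illustrative):

```shell
# always: restart on failure and when the daemon restarts
docker container run -d --name web1 --restart always nginx

# unless-stopped: like always, but not if the container was manually stopped
docker container run -d --name web2 --restart unless-stopped nginx

# on-failure with a retry limit
docker container run -d --name web3 --restart on-failure:3 nginx
```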
docker container ls
docker container ls
docker container ls
Docker Events
In this lesson, we’ll see how to listen for events using the events command.
Generate Events:
Filters Events:
Generate an event:
List / on docker_events :
Attach to docker_events :
Attach to docker_events :
Start docker_events :
Attach to docker_events :
docker container rm -h
docker container ls -a -q
Start Prometheus:
docker container ls
rivethead42/weather-app:latest
NODE_ENV production
Create Watchtower:
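The Watchtower command was lost in extraction; a sketch assuming the v2tec/watchtower image used around the time of these notes (the project has since moved to containrrr/watchtower), with -i 30 matching the 30-second check interval mentioned later:

```shell
# Watchtower needs the Docker socket to watch and restart containers
docker container run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower -i 30
```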
vi .dockerignore
.dockerignore contents:
Dockerfile
.git
.gitignore
vi app.js
app.js contents:
//This is a comment
//
...
touch newfile.js
Check to see if the container was restarted with the new image:
docker container ls
sudo curl -L \
"https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
Compose Commands
In this lesson, we will start using compose by creating a compose file. Then we will
create and manage the services by using the most commonly used commands:
mkdir -p compose/commands
cd compose/commands
vi docker-compose.yml
docker-compose.yml contents:
version: '3'
services:
web:
image: nginx
ports:
- "8080:80"
volumes:
- nginx_html:/usr/share/nginx/html/
links:
- redis
redis:
image: redis
volumes:
nginx_html: {}
docker-compose up -d
docker-compose ps
docker-compose stop
docker-compose start
docker-compose restart
docker-compose down
cd compose
git clone https://github.com/linuxacademy/content-weather-app.git weather-app
cd weather-app
git checkout compose
vi docker-compose.yml
docker-compose.yml contents:
version: '3'
services:
weather-app:
build:
context: .
args:
- VERSION=v2.0
ports:
- "8081:3000"
environment:
- NODE_ENV=production
docker-compose up -d
docker-compose ps
curl http://localhost:8081
docker-compose build
mkdir -p compose/ghost
cd compose/ghost
vi docker-compose.yml
docker-compose.yml:
version: '3'
services:
ghost:
container_name: ghost
image: ghost:latest
ports:
- "80:2368"
environment:
- database__client=mysql
- database__connection__host=mysql
- database__connection__user=root
- database__connection__password=P4SSw0rd0!
- database__connection__database=ghost
volumes:
- ghost-volume:/var/lib/ghost
networks:
- ghost_network
- mysql_network
depends_on:
- mysql
mysql:
container_name: mysql
image: mysql:5.7
environment:
- MYSQL_ROOT_PASSWORD=P4SSw0rd0!
volumes:
- mysql-volume:/var/lib/mysql
networks:
- mysql_network
volumes:
ghost-volume:
mysql-volume:
networks:
ghost_network:
mysql_network:
docker-compose up -d
docker-compose ps
docker volume ls
docker network ls
Swarm 101
Swarm has two major components:
Swarm was initially a separate product layered on top of Docker; since Docker 1.12, it
has been part of the Docker Engine.
The Cluster
A swarm consists of one or more Docker nodes.
Nodes are either managers or workers.
Managers:
Manage the state of the cluster
Dispatch tasks to workers
Workers:
Accept and execute tasks
State is held in a Raft-based distributed store
Swarm uses Transport Layer Security (TLS):
Encrypted communication
Authenticated nodes
Authorized roles
Orchestration
Prerequisites
Uninstall old versions:
Install Docker CE
Add the Docker repository:
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
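The swarm setup commands did not survive extraction; a sketch of initializing the cluster (the IPs and token are placeholders to be filled from your environment):

```shell
# On the manager: initialize the swarm, advertising its private IP
docker swarm init --advertise-addr <MANAGER_PRIVATE_IP>

# The output prints a join command; run it on each worker node
docker swarm join --token <TOKEN> <MANAGER_PRIVATE_IP>:2377
```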
docker node ls
docker node ls
Inspecting a node:
Creating a service:
List services:
docker service ls
Inspecting a service:
Update a service:
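The service commands themselves were lost in extraction; a sketch of the lifecycle (service name, image, and ports are illustrative):

```shell
# Create a replicated service
docker service create --name nginx_service --replicas 2 -p 8080:80 nginx

docker service ls
docker service inspect --pretty nginx_service

# Scale out, then roll out a new image
docker service scale nginx_service=3
docker service update --image nginx:1.17 nginx_service
```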
Create nginx_service :
docker service ls
Inspect nginx_service:
Inspect encrypted_overlay :
Inspect my_overlay :
Inspect nginx_service :
Inspect nginx_service :
Remove encrypted_overlay :
Add Plugins:
List plugins:
docker plugin ls
Volume Plugins:
Hedvig
Pure Storage
HPE Nimble Storage
Nutanix DVP
Blockbridge
NexentaStor
StorageOS
Rex-Ray
Disable a plugin:
Remove a plugin:
DOBS_TOKEN=<DIGITAL_OCEAN_TOKEN> \
DOBS_CONVERTUNDERSCORES=true
Setup environment:
mkdir -p swarm/prometheus
cd swarm/prometheus
vi prometheus.yml
prometheus.yml contents:
global:
scrape_interval: 15s
scrape_timeout: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: prometheus
scrape_interval: 5s
static_configs:
- targets:
- prometheus_main:9090
- job_name: nodes
scrape_interval: 5s
static_configs:
- targets:
- [MANAGER]:9100
- [WORKER1]:9100
- [WORKER2]:9100
- job_name: cadvisor
scrape_interval: 5s
static_configs:
- targets:
- [MANAGER]:8081
- [WORKER1]:8081
- [WORKER2]:8081
vi docker-compose.yml
docker-compose.yml contents:
version: '3'
services:
main:
image: prom/prometheus:latest
container_name: prometheus
ports:
- 8080:9090
command:
- --config.file=/etc/prometheus/prometheus.yml
- --storage.tsdb.path=/prometheus/data
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
- data:/prometheus/data
depends_on:
- cadvisor
- node-exporter
cadvisor:
image: google/cadvisor:latest
container_name: cadvisor
deploy:
mode: global
restart: unless-stopped
ports:
- 8081:8080
volumes:
- /:/rootfs:ro
- /var/run:/var/run:rw
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
node-exporter:
image: prom/node-exporter:latest
container_name: node-exporter
deploy:
mode: global
restart: unless-stopped
ports:
- 9100:9100
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- --collector.filesystem.ignored-mount-points
- "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib
grafana:
image: grafana/grafana
container_name: grafana
ports:
- 8082:3000
volumes:
- grafana_data:/var/lib/grafana
- grafana_plugins:/var/lib/grafana/plugins
environment:
- GF_SECURITY_ADMIN_PASSWORD=P4ssW0rd0!
depends_on:
- prometheus
- cadvisor
- node-exporter
volumes:
data:
grafana_data:
grafana_plugins:
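The deploy command did not survive extraction; assuming the stack is named prometheus (the scrape target prometheus_main:9090 above implies the main service under a stack of that name):

```shell
# Deploy the stack defined by the compose file above
docker stack deploy --compose-file docker-compose.yml prometheus
```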
List stacks:
docker stack ls
List services:
docker service ls
Linux security:
Namespaces
Control Groups
Mandatory Access Control (MAC)
Seccomp
Docker security:
Docker Swarm
Docker Content Trust
Docker Security Scanner
Docker secrets
Namespaces
Docker creates a set of namespaces and control groups for each container. A Docker
container is an organized collection of namespaces.
Process ID (pid)
network (net)
Filesystem/mount (mount)
Inter-process Communication (ipc)
User (user)
UTS (uts)
Control Groups
CPU
RAM
Disk I/O
Capabilities
Capabilities turn the binary “root/non-root” dichotomy into a fine-grained access
control system. In most cases, containers do not need “real” root privileges at all.
This means root within a container has far fewer privileges than the real root . It
also means that even if an intruder manages to escalate to root within a container, it
is much harder to do serious damage, or to escalate to the host.
SELinux
AppArmor
Seccomp
This limits the syscalls a container can make to the host’s kernel. All new containers
get a default seccomp profile.
Docker Swarm
Swarm Mode:
Encrypted networks
Docker Secrets
These store sensitive data like:
Passwords
TLS Certificates
API Keys
Secrets Workflow:
Seccomp Profile
Testing Seccomp:
mkdir -p seccomp/profiles/chmod
cd seccomp/profiles/chmod
wget https://raw.githubusercontent.com/moby/moby/master/profiles/seccomp/default.json
Remove chmod , fchmod , and fchmodat from the syscalls whitelist. The whitelist starts at
line 52.
Capabilities:
Dropping Capabilities:
Test mknod :
Disable mknod :
Control Groups
Limiting CPU and memory:
Inspect resource-limits:
docker container run --rm -it --network host --pid host --userns host --cap-add audit_control \
-e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
-v /var/lib:/var/lib \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/lib/systemd:/usr/lib/systemd \
-v /etc:/etc --label docker_bench_security \
docker/docker-bench-security
Commands
Creating a Key:
Importing a Key:
Add a signer:
Remove a signer:
Signing an image:
Create a Key:
Pull [USERNAME]/weather-app :
Enabling DCT
vi /etc/docker/daemon.json
/etc/docker/daemon.json:
{
"content-trust": {
"mode": "enforced"
}
}
Creating a secret:
List secrets:
docker secret ls
Inspecting a secret:
Using secrets:
Deleting a secret:
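The secret commands themselves were lost in extraction; a sketch of the workflow (the secret value, secret name, and service are illustrative):

```shell
# Create a secret from stdin
echo "P4ssW0rd0!" | docker secret create my_secret -

docker secret ls
docker secret inspect my_secret

# Use it in a service; it is mounted at /run/secrets/my_secret
docker service create --name secret-test --secret my_secret nginx

# A secret in use cannot be removed; remove the service first
docker service rm secret-test
docker secret rm my_secret
```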
Setup environment:
mkdir secrets
cd secrets
vi docker-compose.yml
docker-compose.yml contents:
version: '3.1'
services:
db:
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql
networks:
mysql_internal:
aliases: ["db"]
environment:
MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD_FILE: /run/secrets/db_password
secrets:
- db_root_password
- db_password
wordpress:
depends_on:
- db
image: wordpress:latest
networks:
mysql_internal:
aliases: ["wordpress"]
wordpress_public:
ports:
- "8001:80"
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD_FILE: /run/secrets/db_password
secrets:
- db_password
secrets:
db_password:
file: db_password.txt
db_root_password:
file: db_root_password.txt
volumes:
db_data:
networks:
mysql_internal:
driver: "overlay"
internal: true
wordpress_public:
driver: "overlay"
Deploy stack:
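The deploy command was lost in extraction; a sketch that first generates the password files the compose file references (openssl usage mirrors the secrets lab later in these notes; the stack name is illustrative):

```shell
# Generate the password files named in the secrets section
openssl rand -base64 20 > db_password.txt
openssl rand -base64 20 > db_root_password.txt

# Deploy the stack
docker stack deploy --compose-file docker-compose.yml wordpress
```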
Docker Networking
Scenario
We’re developing a new containerized application for a client. The application will
consist of two containers: one for the frontend application, and one for the database.
Our client has security concerns about the database, and they want it to run on a
private network that is not publicly accessible.
So, we’ll need to create two networks. One will house the frontend application that is
publicly accessible, and the other network, which is flagged as internal, is where the
database will reside. We have to create a MySQL container connected to the private
network and an Nginx container that is connected to both networks.
Get logged in
Use the credentials and server IP in the hands-on lab overview page to log into the
server with SSH.
Docker Volumes
The Scenario
We need to deploy a MySQL container to our development environment. Because we
will be working with mock customer data that needs to be persistent, the container will
need a volume. Create a volume called mysql_data . Then deploy a MySQL container
that will use this volume to store database files.
Log in to the environment using the credentials provided on the lab page either in a
terminal session on your local machine or by clicking Instant Terminal.
First we’ll use the docker volume command to create a volume called mysql_data :
Create a volume called mysql_data , then deploy a MySQL container called
app-database . Use the mysql latest image, and use the -e flag to set
MYSQL_ROOT_PASSWORD to P4sSw0rd0 . Use the mount flag to mount the mysql_data
volume to /var/lib/mysql . The container should run in the background.
Log In
Log in to the environment using the credentials provided on the lab page either in a
terminal session on your local machine or by clicking Instant Terminal.
Create a Dockerfile.
1. Change directory to weather-app .
cd weather-app/
vi Dockerfile
FROM node:alpine
ARG APP_VERSION=V1.1
LABEL org.label-schema.version=$APP_VERSION
ENV NODE_ENV="production"
COPY --from=source /node/weather-app /node/weather-app
WORKDIR /node/weather-app
EXPOSE 3000
ENTRYPOINT ["./bin/www"]
cd src
cd ../
The Scenario
We’ve just completed a Dockerfile, and we’re ready to push the resulting image to
Docker Hub. We need to build our image with the VERSION build argument set to 1.5 ,
then tag the image as latest , and finally push both images to Docker Hub.
[cloud_user@host]$ cd weather-app
[cloud_user@host]$ cd src
[cloud_user@host]$ git log -1 --pretty=%H
[cloud_user@host]$ cd ../
[cloud_user@host]$ docker image build -t [USERNAME]/weather-app:[HASH] \
--build-arg VERSION=1.5 .
Introduction
After a few months of debate, we’ve decided to set up a cooking blog. After
researching different platforms, we’ve chosen Wordpress with MySQL. We have a
swarm cluster already set up where we host customer sites. To make maintaining
Wordpress easier, we’ve decided to set it up as a stack. We’ll need to create the
Docker Compose file, deploy the stack, and finish the Wordpress setup.
Log In
Log in to the environment using the credentials provided on the lab page, either in a
terminal session on your local machine or by clicking Instant Terminal. Note that
there are two servers to log into, the manager and the worker.
Copy the join token, and then run it on the worker node:
We should see two nodes, each with a STATUS of Ready and an Availability of Active.
Next up, we’ll create the compose file on the master node. Use whatever text editor
you like, but the file needs to be named docker-compose.yml , and these are the
contents:
version: '3'
services:
db:
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql
networks:
mysql_internal:
environment:
MYSQL_ROOT_PASSWORD: P4ssw0rd0!
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: P4ssw0rd0!
blog:
depends_on:
- db
image: wordpress:latest
networks:
mysql_internal:
wordpress_public:
ports:
- "80:80"
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: P4ssw0rd0!
volumes:
db_data:
networks:
mysql_internal:
internal: true
wordpress_public:
This will show us the running services, and we can see that each has 1/1 replicas.
This means everything is working right.
In the lab directory you will find two directories. The nginx directory has a Dockerfile
and an nginx.conf file. Nginx will be used to load balance the weather-app. The
weather-app directory has the Dockerfile and source code.
Setting up Nginx
Create a service called nginx.
Update upstream localhost in nginx.conf to include the three containers with their
private port.
Example: server [APP-NAME]:[PORT]
Build an image using the Dockerfile in the nginx directory.
Set tty to true.
Publish port 80 to port 80 on the container.
The nginx container will be on two networks: frontend and weather_app .
Create docker-compose.yml:
vi docker-compose.yml
docker-compose.yml
version: '3'
services:
weather-app1:
build:
context: ./weather-app
args:
- VERSION=v2.0
ports:
- "8080:3000"
networks:
- weather_app
environment:
- NODE_ENV=production
weather-app2:
build:
context: ./weather-app
args:
- VERSION=v2.0
ports:
- "8081:3000"
networks:
- weather_app
environment:
- NODE_ENV=production
weather-app3:
build:
context: ./weather-app
args:
- VERSION=v2.0
ports:
- "8082:3000"
networks:
- weather_app
environment:
- NODE_ENV=production
nginx:
build: ./nginx
tty: true
ports:
- '80:80'
networks:
- frontend
- weather_app
networks:
frontend:
weather_app:
internal: true
Update nginx/nginx.conf:
vi nginx/nginx.conf
nginx.conf:
http {
upstream localhost {
server weather-app1:3000;
server weather-app2:3000;
server weather-app3:3000;
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://localhost;
proxy_set_header Host $host;
}
}
}
docker-compose up -d
The Scenario
After years of running containers on a single Docker host, we’ve decided to migrate
over to using Docker Swarm. Using Swarm will allow our clients to scale the number of
containers up, as demand increases, and then down as demand dies off.
Before we can do this, we first need to set up a swarm cluster consisting of a manager
and a worker node. Once setup is complete, create an Nginx service to test the cluster.
Log In
Log in to the environment using the credentials provided on the lab page, either in a
terminal session on your local machine or by clicking Instant Terminal. Note that
there are two servers to log into, the manager and the worker.
After this runs, we’ll get a docker swarm join command that we can then go run from
the worker. Copy it.
On the manager, we can run docker node ls and see both of them, if all went well.
Now we can run (still on the master node) docker service ls and see
nginx_service running, and that it’s running on both nodes.
Introduction
We’re tired of having to manually update several containers, so we’ve decided to look
for an automated solution.
After doing some research, we have discovered Watchtower, a container that updates
all running containers when changes are made to the image that it is running.
We need to create a Dockerfile that will be used to create a Docker image. The image
will be pushed to Docker Hub.
Next, you will create a container using this image. Once the image is created, you will
deploy the Watchtower container. After Watchtower is deployed, you will update the
Dockerfile, rebuild the image, and push the changes to Docker Hub.
Watchtower checks for changes every 30 seconds. Once it detects the changes,
Watchtower will update the running container.
In this lab, we will be using Watchtower to monitor containers for updates. In order to
complete this learning activity, we will need a Docker Hub account.
In the home directory of cloud_user we will find the lab directory. Change to that
directory.
We need to create a Dockerfile (with any text editor you like), and put the following
into it:
FROM node
RUN mkdir -p /var/node
ADD src/ /var/node/
WORKDIR /var/node
RUN npm install
EXPOSE 3000
CMD ./bin/www
FROM node
RUN mkdir -p /var/node
RUN mkdir -p /var/test
ADD src/ /var/node/
WORKDIR /var/node
RUN npm install
EXPOSE 3000
CMD ./bin/www
Now if we run docker container ls periodically, we’ll eventually see that the
demo-app container has a newer CREATED time than the watchtower container does.
This means that Watchtower saw the change we made, and restarted the other container
using the new image.
The Scenario
We manage containers for clients on several Docker servers. But managing the hosts
has become a bit of a pain, so we’ve been looking for an all-in-one tool. We discovered
Portainer, and have decided to test it out by deploying it to one of our hosts.
Log In
Log in to the environment using the credentials provided on the lab page, either in a
terminal session on your local machine or by clicking Instant Terminal.
Create a Volume
Create Portainer
Create your user account and password, and on the next screen click Local, then
Connect. Click the local endpoint on this screen, then Containers on the next one.
Now we can click the Add container button. In the form that follows, use these
settings:
Name: lab_nginx
Image: nginx:latest
Click the map additional port button, then map port 8081 to 80 on the container.
Now click Deploy the container.
Test Things
If everything is working, we should be able to view http://<SERVER_IP>:8081 in a web browser.
The Scenario
In order to secure a MySQL database, we’ve decided to redeploy the container it sits in
as a Swarm service, using secrets.
We’ll use OpenSSL to generate secure passwords for both the MySQL users root and
user . Then we’ll save them to separate files. Next we’ll create secrets for these
passwords, and finally create the MySQL service using these secrets.
Log In
Log in to the environment using the credentials provided on the lab page, either in a
terminal session on your local machine or by clicking Instant Terminal. Note that
there are two servers to log into, the manager and the worker.
Copy the join token, and then run it on the worker node:
We should get a message about this node joining a swarm as a worker. We’re good to
go, and we can shut the worker terminal down.
Create Secrets
Back in the manager node, we need to create the MySQL root password:
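A sketch of generating the password and creating the secret in one step (the secret name is illustrative):

```shell
# Generate a random password and pipe it straight into a new secret
openssl rand -base64 20 | docker secret create mysql_root_password -
```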