Docker Lab
1. git clone https://github.com/docker/getting-started-app.git
2. cd /path/to/getting-started-app
3. Create a file named Dockerfile (touch Dockerfile) and add the following contents:
# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000
4. Make sure you're still in the getting-started-app directory (cd /path/to/getting-started-app).
5. docker build -t getting-started .
6. docker run -dp 127.0.0.1:3000:3000 getting-started
The -d flag (short for --detach) runs the container in the background. This means
that Docker starts your container and returns you to the terminal prompt. You can
verify that a container is running by viewing it in Docker Dashboard under
Containers, or by running docker ps in the terminal.
The -p flag (short for --publish) creates a port mapping between the host and the
container. The -p flag takes a string value in the format of HOST:CONTAINER, where
HOST is the address on the host, and CONTAINER is the port on the container. The
command publishes the container's port 3000 to 127.0.0.1:3000 (localhost:3000)
on the host. Without the port mapping, you wouldn't be able to access the application
from the host.
Add an item or two and see that it works as you expect. You can mark items as complete and
remove them. Your frontend is successfully storing items in the backend.
8. Run docker ps to verify that the container is running.
9. In the src/static/js/app.js file, update line 56 to use the new empty text.
Remove the old container
1. Get the ID of the container by using the docker ps command:
docker ps
2. Use the docker stop command to stop the container. Replace <the-container-id> with the ID from docker ps:
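docker stop <the-container-id>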
3. Once the container has stopped, you can remove it by using the docker rm command:
docker rm <the-container-id>
Note
You can stop and remove a container in a single command by adding the force flag to the
docker rm command. For example: docker rm -f <the-container-id>
1. Now, start your updated app using the docker run command.
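This step assumes the image was pushed to Docker Hub, as in the guide's sharing section; replace YOUR-USER-NAME with your Docker Hub username:
docker run -dp 0.0.0.0:3000:3000 YOUR-USER-NAME/getting-started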
You should see the image get pulled down and eventually start up.
Tip
You may have noticed that this command binds the port mapping to a different IP
address. Previous docker run commands published ports to 127.0.0.1:3000 on the
host. This time, you're using 0.0.0.0.
For more information about how port mapping works, see Networking.
If the 3000 badge doesn't appear, you can select Open Port and specify 3000.
You can create the volume and start the container using the CLI or Docker Desktop's
graphical interface.
Stop and remove the todo app container once again with docker rm -f <id>, as it is still
running without using the persistent volume.
Start the todo app container, but add the --mount option to specify a volume mount. Give
the volume a name, and mount it to /etc/todos in the container, which captures all files
created at the path.
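With the CLI, the command looks like this (todo-db is the volume name used in the guide):
docker run -dp 127.0.0.1:3000:3000 --mount type=volume,src=todo-db,target=/etc/todos getting-started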
Note
If you're using Git Bash, you must use different syntax for this command.
For more details about Git Bash's syntax differences, see Working with Git Bash.
1. Once the container starts up, open the app and add a few items to your todo list.
2. Stop and remove the container for the todo app. Use Docker Desktop or docker ps to
get the ID and then docker rm -f <id> to remove it.
3. Start a new container using the previous steps.
4. Open the app. You should see your items still in your list.
5. Go ahead and remove the container when you're done checking out your list.
People frequently ask "Where is Docker storing my data when I use a volume?" If you
want to know, you can use the docker volume inspect command.
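For example, inspecting the todo-db volume created above (field values will vary by machine):
docker volume inspect todo-db
[
    {
        "CreatedAt": "2019-09-26T02:18:36Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/todo-db/_data",
        "Name": "todo-db",
        "Options": {},
        "Scope": "local"
    }
]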
The Mountpoint is the actual location of the data on the disk. Note that on most machines,
you will need to have root access to access this directory from the host.
A bind mount is another type of mount, which lets you share a directory from the host's
filesystem into the container. When working on an application, you can use a bind mount to
mount source code into the container. The container sees the changes you make to the code
immediately, as soon as you save a file. This means that you can run processes in the
container that watch for filesystem changes and respond to them.
In this chapter, you'll see how you can use bind mounts and a tool called nodemon
to watch for file changes, and then restart the application automatically. There are equivalent
tools in most other languages and frameworks.
The following are examples of a named volume and a bind mount using --mount:
Named volume: type=volume,src=my-volume,target=/usr/local/data
Bind mount: type=bind,src=/path/to/data,target=/usr/local/data
The following table outlines the main differences between volume mounts and bind mounts.

                                              Named volumes     Bind mounts
Host location                                 Docker chooses    You decide
Populates new volume with container contents  Yes               No
Supports volume drivers                       Yes               No
Before looking at how you can use bind mounts for developing your application, you can run
a quick experiment to get a practical understanding of how bind mounts work.
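Run the following command from the getting-started-app directory. It starts an Ubuntu container with an interactive bash session and bind-mounts the current directory to /src:
docker run -it --mount type=bind,src="$(pwd)",target=/src ubuntu bash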
The --mount type=bind option tells Docker to create a bind mount, where src is the current
working directory on your host machine (getting-started-app), and target is where that
directory should appear inside the container (/src).
After running the command, Docker starts an interactive bash session in the root directory
of the container's filesystem.
root@ac1237fad8db:/# pwd
/
root@ac1237fad8db:/# ls
bin dev home media opt root sbin srv tmp var
boot etc lib mnt proc run src sys usr
Change to the src directory. This is the directory that you mounted when starting the container. Listing the contents of this
directory displays the same files as in the getting-started-app directory on your host
machine.
root@ac1237fad8db:/# cd src
root@ac1237fad8db:/src# ls
Dockerfile node_modules package.json spec src yarn.lock
Create a new file named myfile.txt.
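root@ac1237fad8db:/src# touch myfile.txt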
Open the getting-started-app directory on the host and observe that the myfile.txt
file is in the directory.
├── getting-started-app/
│ ├── Dockerfile
│ ├── myfile.txt
│ ├── node_modules/
│ ├── package.json
│ ├── spec/
│ ├── src/
│ └── yarn.lock
On the host, delete the myfile.txt file. Then, in the container, list the contents of the
directory once more. Observe that the file is now gone.
root@ac1237fad8db:/src# ls
Dockerfile node_modules package.json spec src yarn.lock
That's all for a brief introduction to bind mounts. This procedure demonstrated how files are
shared between the host and the container, and how changes are immediately reflected on
both sides. Now you can use bind mounts to develop software.
Development containers
Using bind mounts is common for local development setups. The advantage is that the
development machine doesn’t need to have all of the build tools and environments installed.
With a single docker run command, Docker pulls dependencies and tools.
The following steps describe how to run a development container with a bind mount that does
the following:
- Mount your source code into the container
- Install all dependencies
- Start nodemon to watch for filesystem changes
You can use the CLI or Docker Desktop to run your container with a bind mount.
1. Make sure you don't have any getting-started containers currently running.
2. Run the following command from the getting-started-app directory.
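A command along these lines does it (the flags match the ones migrated into Compose later in this lab):
docker run -dp 127.0.0.1:3000:3000 \
    -w /app -v "$(pwd):/app" \
    node:18-alpine \
    sh -c "yarn install && yarn run dev"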
You can watch the logs using docker logs <container-id>. You'll know you're ready
to go when you see this:
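nodemon src/index.js
[nodemon] 2.0.20
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] starting `node src/index.js`
Using sqlite database at /etc/todos/todo.db
Listening on port 3000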
3. When you're done watching the logs, exit out by hitting Ctrl+C.
Update your app on your host machine and see the changes reflected in the container.
1. In the src/static/js/app.js file, on line 109, change the "Add Item" button to
simply say "Add":
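The relevant line changes roughly like this (the exact markup in your copy may differ):
- {submitting ? 'Adding...' : 'Add Item'}
+ {submitting ? 'Adding...' : 'Add'}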
Feel free to make any other changes you'd like to make. Each time you make a change and
save a file, the change is reflected in the container because of the bind mount. When
Nodemon detects a change, it restarts the app inside the container automatically. When you're
done, stop the container and build your new image using:
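docker build -t getting-started .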
There's a good chance you'd have to scale APIs and front-ends differently than databases.
Separate containers let you version and update each component in isolation.
While you may use a container for the database locally, you may want to use a managed
service for the database in production. You don't want to ship your database engine with
your app then.
Running multiple processes will require a process manager (the container only starts one
process), which adds complexity to container startup/shutdown.
And there are more reasons. So, it's best to run your app in multiple containers.
Container networking
Remember that containers, by default, run in isolation and don't know anything about other
processes or containers on the same machine. So, how do you allow one container to talk to
another? The answer is networking. If you place the two containers on the same network,
they can talk to each other.
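First, create the network:
docker network create todo-app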
Start MySQL
Start a MySQL container and attach it to the network. You're also going to define a few
environment variables that MySQL will use to initialize the database. To learn more
about the MySQL environment variables, see the "Environment Variables" section in the
MySQL Docker Hub listing.
docker run -d \
--network todo-app --network-alias mysql \
-v todo-mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=todos \
mysql:8.0
In the previous command, you can see the --network-alias flag. In a later section, you'll
learn more about this flag.
Tip
You'll notice a volume named todo-mysql-data in the above command that is mounted at
/var/lib/mysql, which is where MySQL stores its data. However, you never ran a docker
volume create command. Docker recognizes you want to use a named volume and creates
one automatically for you.
To confirm you have the database up and running, connect to the database and verify that it
accepts connections.
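docker exec -it <mysql-container-id> mysql -u root -p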
When the password prompt comes up, type in secret. In the MySQL shell, list the databases
and verify you see the todos database.
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| todos |
+--------------------+
5 rows in set (0.00 sec)
mysql> exit

You now have a todos database and it's ready for you to use.
Connect to MySQL
Now that you know MySQL is up and running, you can use it. But, how do you use it? If you
run another container on the same network, how do you find the container? Remember that
each container has its own IP address.
To answer the questions above and better understand container networking, you're going to
make use of the nicolaka/netshoot
container, which ships with a lot of tools that are useful for troubleshooting or debugging
networking issues.
1. Start a new container using the nicolaka/netshoot image. Make sure to connect it to
the same network.
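docker run -it --network todo-app nicolaka/netshoot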
Inside the container, you're going to use the dig command, which is a useful DNS tool.
You're going to look up the IP address for the hostname mysql.
dig mysql
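In the output's ANSWER SECTION, you should see an A record for mysql that resolves to the container's IP address (your address will most likely differ):
;; ANSWER SECTION:
mysql.			600	IN	A	172.23.0.2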
What this means is that your app simply needs to connect to a host named mysql
and it'll talk to the database.
The todo app supports the setting of a few environment variables to specify MySQL
connection settings. They are:
- MYSQL_HOST - the hostname for the running MySQL server
- MYSQL_USER - the username to use for the connection
- MYSQL_PASSWORD - the password to use for the connection
- MYSQL_DB - the database to use once connected
Note
While using env vars to set connection settings is generally accepted for development, it's
highly discouraged when running applications in production. Diogo Monica, a former lead of
security at Docker, wrote a fantastic blog post
explaining why.
A more secure mechanism is to use the secret support provided by your container
orchestration framework. In most cases, these secrets are mounted as files in the running
container. You'll see many apps (including the MySQL image and the todo app) also support
env vars with a _FILE suffix to point to a file containing the variable.
As an example, setting the MYSQL_PASSWORD_FILE var will cause the app to use the contents
of the referenced file as the connection password. Docker doesn't do anything to support
these env vars. Your app will need to know to look for the variable and get the file contents.
1. Specify each of the previous environment variables, as well as connect the container
to your app network. Make sure that you are in the getting-started-app directory
when you run this command.
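docker run -dp 127.0.0.1:3000:3000 \
    -w /app -v "$(pwd):/app" \
    --network todo-app \
    -e MYSQL_HOST=mysql \
    -e MYSQL_USER=root \
    -e MYSQL_PASSWORD=secret \
    -e MYSQL_DB=todos \
    node:18-alpine \
    sh -c "yarn install && yarn run dev"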
If you look at the logs for the container (docker logs -f <container-id>), you should
see a message similar to the following, which indicates it's using the mysql database.
nodemon src/index.js
[nodemon] 2.0.20
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] starting `node src/index.js`
Connected to mysql db at host mysql
Listening on port 3000
Open the app in your browser and add a few items to your todo list.
Connect to the mysql database and prove that the items are being written to the database.
Remember, the password is secret.
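docker exec -it <mysql-container-id> mysql -p todos
mysql> select * from todo_items;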
Your table will look different because it has your items. But, you should see them stored
there.
The big advantage of using Compose is you can define your application stack in a file, keep it
at the root of your project repository (it's now version controlled), and easily enable someone
else to contribute to your project. Someone would only need to clone your repository and
start the app using Compose. In fact, you might see quite a few projects on GitHub/GitLab
doing exactly this now.
In the getting-started-app directory, create a file named compose.yaml. Your file tree should now look like this:
├── getting-started-app/
│ ├── Dockerfile
│ ├── compose.yaml
│ ├── node_modules/
│ ├── package.json
│ ├── spec/
│ ├── src/
│ └── yarn.lock
Define the app service
In part 7, you used the following command to start the application service.
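docker run -dp 127.0.0.1:3000:3000 \
    -w /app -v "$(pwd):/app" \
    --network todo-app \
    -e MYSQL_HOST=mysql \
    -e MYSQL_USER=root \
    -e MYSQL_PASSWORD=secret \
    -e MYSQL_DB=todos \
    node:18-alpine \
    sh -c "yarn install && yarn run dev"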
1. Open compose.yaml in a text or code editor, and start by defining the name and
image of the first service (or container) you want to run as part of your application.
The name will automatically become a network alias, which will be useful when
defining your MySQL service.
services:
  app:
    image: node:18-alpine
Typically, you will see command close to the image definition, although there is no
requirement on ordering. Add the command to your compose.yaml file.
services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
Now migrate the -p 127.0.0.1:3000:3000 part of the command by defining the ports
for the service.
services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 127.0.0.1:3000:3000
Next, migrate both the working directory (-w /app) and the volume mapping
(-v "$(pwd):/app") by using the working_dir and volumes definitions.
One advantage of Docker Compose volume definitions is you can use relative paths from the
current directory.
services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /app
    volumes:
      - ./:/app
Finally, you need to migrate the environment variable definitions using the environment
key.
services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos
Now, it's time to define the MySQL service. The command that you used for that container
was the following:
docker run -d \
--network todo-app --network-alias mysql \
-v todo-mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=todos \
mysql:8.0
1. First, define the new service and name it mysql so it automatically gets the network
alias. Also specify the image to use.
services:
  app:
    # The app service definition
  mysql:
    image: mysql:8.0
Next, define the volume mapping. When you ran the container with docker run, Docker
created the named volume automatically. However, that doesn't happen when running with
Compose. You need to define the volume in the top-level volumes: section and then specify
the mountpoint in the service config. By providing only the volume name, the default
options are used.
services:
  app:
    # The app service definition
  mysql:
    image: mysql:8.0
    volumes:
      - todo-mysql-data:/var/lib/mysql

volumes:
  todo-mysql-data:
Finally, you need to specify the environment variables.

services:
  app:
    # The app service definition
  mysql:
    image: mysql:8.0
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:
At this point, your complete compose.yaml should look like this:

services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

  mysql:
    image: mysql:8.0
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:
Run the application stack
Now that you have your compose.yaml file, you can start your application.
1. Make sure no other copies of the containers are running first. Use docker ps to list
the containers and docker rm -f <ids> to remove them.
2. Start up the application stack using the docker compose up command. Add the -d
flag to run everything in the background.
docker compose up -d
When you run the previous command, you should see output like the following:
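 ✔ Network getting-started-app_default           Created
 ✔ Volume "getting-started-app_todo-mysql-data"  Created
 ✔ Container getting-started-app-app-1           Started
 ✔ Container getting-started-app-mysql-1         Started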
You'll notice that Docker Compose created the volume as well as a network. By default,
Docker Compose automatically creates a network specifically for the application stack
(which is why you didn't define one in the Compose file).
Look at the logs using the docker compose logs -f command. You'll see the logs from
each of the services interleaved into a single stream. This is incredibly useful when you want
to watch for timing-related issues. The -f flag follows the log, so it gives you live output as
it's generated.
If you have run the command already, you'll see output that looks like this:
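For example (exact lines, timestamps, and ordering will vary):
mysql-1  | ... [Server] /usr/sbin/mysqld: ready for connections.
app-1    | Connected to mysql db at host mysql
app-1    | Listening on port 3000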
The service name is displayed at the beginning of the line (often colored) to help distinguish
messages. If you want to view the logs for a specific service, you can add the service name to
the end of the logs command (for example, docker compose logs -f app).
At this point, you should be able to open your app in your browser at http://localhost:3000.
If you look at the Docker Dashboard, you'll see that there is a group named getting-started-
app. This is the project name from Docker Compose, used to group the containers
together. By default, the project name is simply the name of the directory that the
compose.yaml is located in.
If you expand the stack, you'll see the two containers you defined in the Compose file. The
names are also a little more descriptive, as they follow the pattern of <service-name>-
<replica-number>. So, it's very easy to quickly see what container is your app and which
container is the mysql database.
When you're ready to tear it all down, simply run docker compose down or hit the trash can
on the Docker Dashboard for the entire app. The containers will stop and the network will be
removed.
Warning
By default, named volumes in your compose file are not removed when you run docker
compose down. If you want to remove the volumes, you need to add the --volumes flag.
The Docker Dashboard does not remove volumes when you delete the app stack.
Using the docker image history command, you can see the command that was used to
create each layer within an image.
1. Use the docker image history command to see the layers in the getting-started
image you created.
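docker image history getting-started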
You should get output that looks something like the following.
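The IDs, timestamps, and sizes below are illustrative:
IMAGE          CREATED         CREATED BY                                    SIZE
<image-id>     2 minutes ago   CMD ["node" "src/index.js"]                   0B
<image-id>     2 minutes ago   RUN /bin/sh -c yarn install --production      85MB
<image-id>     2 minutes ago   COPY . . # buildkit                           4.6MB
<image-id>     2 minutes ago   WORKDIR /app                                  0B
...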
Each of the lines represents a layer in the image. The display here shows the base at the
bottom with the newest layer at the top. Using this, you can also quickly see the size of each
layer, helping diagnose large images.
You'll notice that several of the lines are truncated. If you add the --no-trunc flag, you'll
get the full output.
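docker image history --no-trunc getting-started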
Layer caching
Now that you've seen the layering in action, there's an important lesson to learn to help
decrease build times for your container images. Once a layer changes, all downstream layers
have to be recreated as well.
Look at the following Dockerfile you created for the getting started app.
# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
Going back to the image history output, you see that each command in the Dockerfile
becomes a new layer in the image. You might remember that when you made a change to the
image, the yarn dependencies had to be reinstalled. It doesn't make much sense to ship around
the same dependencies every time you build.
To fix it, you need to restructure your Dockerfile to help support the caching of the
dependencies. For Node-based applications, those dependencies are defined in the
package.json file. You can copy only that file in first, install the dependencies, and then
copy in everything else. Then, you only recreate the yarn dependencies if there was a change
to the package.json.
1. Update the Dockerfile to copy in the package.json first, install dependencies, and
then copy everything else in.
# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --production
COPY . .
CMD ["node", "src/index.js"]
Create a file named .dockerignore in the same folder as the Dockerfile with the
following contents.
node_modules
.dockerignore files are an easy way to selectively copy only image-relevant files. You can
read more about them in the .dockerignore reference. In this case, the node_modules folder
should be omitted in the second COPY step because otherwise, it could overwrite files that
were created by the command in the RUN step.
Now, make a change to the src/static/index.html file. For example, change the
<title> to "The Awesome Todo App".
Build the Docker image now using docker build -t getting-started . again. This
time, your output should look a little different.
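For example (abbreviated; the CACHED markers indicate reused layers):
[+] Building 1.2s (10/10) FINISHED
 => CACHED [2/5] WORKDIR /app
 => CACHED [3/5] COPY package.json yarn.lock ./
 => CACHED [4/5] RUN yarn install --production
 => [5/5] COPY . .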
First off, you should notice that the build was much faster. And, you'll see that several
steps are using previously cached layers. Pushing and pulling this image and updates
to it will be much faster as well.
Multi-stage builds
Multi-stage builds are an incredibly powerful tool that lets you use multiple stages to create an
image. They offer several advantages:
- Separate build-time dependencies from runtime dependencies
- Reduce overall image size by shipping only what your app needs to run
Maven/Tomcat example
When building Java-based applications, you need a JDK to compile the source code to Java
bytecode. However, that JDK isn't needed in production. Also, you might be using tools like
Maven or Gradle to help build the app. Those also aren't needed in your final image. Multi-
stage builds help.
# syntax=docker/dockerfile:1
FROM maven AS build
WORKDIR /app
COPY . .
RUN mvn package
FROM tomcat
COPY --from=build /app/target/file.war /usr/local/tomcat/webapps
In this example, you use one stage (called build) to perform the actual Java build using
Maven. In the second stage (starting at FROM tomcat), you copy in files from the build
stage. The final image is only the last stage being created, which can be overridden using the
--target flag.
React example
When building React applications, you need a Node environment to compile the JS code
(typically JSX), SASS stylesheets, and more into static HTML, JS, and CSS. If you aren't
doing server-side rendering, you don't even need a Node environment for your production
build. You can ship the static resources in a static nginx container.
# syntax=docker/dockerfile:1
FROM node:18 AS build
WORKDIR /app
COPY package* yarn.lock ./
RUN yarn install
COPY public ./public
COPY src ./src
RUN yarn run build
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
The previous Dockerfile uses the node:18 image to perform the build (maximizing layer
caching) and then copies the output into an nginx container.
Summary
In this section, you learned a few image building best practices, including layer caching and
multi-stage builds.
Related information:
.dockerignore
Dockerfile reference
Build with Docker guide
Dockerfile best practices
Next steps
In the next section, you'll learn about additional resources you can use to continue learning
about containers.
Container orchestration
Running containers in production is tough. You don't want to log into a machine and simply
run a docker run or docker compose up. Why not? Well, what happens if the containers
die? How do you scale across several machines? Container orchestration solves this problem.
Tools like Kubernetes, Swarm, Nomad, and ECS all help solve this problem, all in slightly
different ways.
The general idea is that you have managers who receive the expected state. This state might
be "I want to run two instances of my web app and expose port 80." The managers then look
at all of the machines in the cluster and delegate work to worker nodes. The managers watch
for changes (such as a container quitting) and then work to make the actual state reflect the
expected state.
Cloud Native Computing Foundation projects
The CNCF is a vendor-neutral home for various open-source projects, including Kubernetes,
Prometheus, Envoy, Linkerd, NATS, and more. You can view the graduated and incubated
projects on the CNCF website. There are a lot of projects to help solve problems around
monitoring, logging, security, image registries, messaging, and more.
Docker recommends watching the video workshop from DockerCon 2022. Watch the entire
video or use the following links to open the video at a particular section.
Containerize an app
If you'd like to see how containers are built from scratch, Liz Rice from Aqua Security has a
fantastic talk in which she creates a container from scratch in Go. While the talk does not go
into networking, using images for the filesystem, and other advanced topics, it gives a deep
dive into how things are working.
Language-specific guides
If you are looking for information on how to containerize an application using your favorite
language, see the Language-specific guides.
Source: https://docs.docker.com/get-started/02_our_app/