
Docker Lab


1. git clone https://github.com/docker/getting-started-app.git
2. cd /path/to/getting-started-app
3. touch Dockerfile
# syntax=docker/dockerfile:1

FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000

4. cd /path/to/getting-started-app
5. docker build -t getting-started .
6. docker run -dp 127.0.0.1:3000:3000 getting-started

The -d flag (short for --detach) runs the container in the background. This means
that Docker starts your container and returns you to the terminal prompt. You can
verify that a container is running by viewing it in Docker Dashboard under
Containers, or by running docker ps in the terminal.
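
For example, the docker ps output for this container might look something like the following (the ID and name are randomly generated, so yours will differ):

   CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS         PORTS                      NAMES
   a1b2c3d4e5f6   getting-started   "docker-entrypoint.s…"   10 seconds ago   Up 9 seconds   127.0.0.1:3000->3000/tcp   quirky_hellman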

The -p flag (short for --publish) creates a port mapping between the host and the
container. The -p flag takes a string value in the format of HOST:CONTAINER, where
HOST is the address on the host, and CONTAINER is the port on the container. The
command publishes the container's port 3000 to 127.0.0.1:3000 (localhost:3000)
on the host. Without the port mapping, you wouldn't be able to access the application
from the host.
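
Nothing requires the HOST side to be port 3000, either. For example, this variation (illustrative, not part of the lab) would publish the same container port on http://localhost:8080 instead:

   docker run -dp 127.0.0.1:8080:3000 getting-started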

7. After a few seconds, open your web browser to http://localhost:3000

You should see your app.

 Add an item or two and see that it works as you expect. You can mark items as complete and
remove them. Your frontend is successfully storing items in the backend.

8. docker ps
9. In the src/static/js/app.js file, update line 56 to use the new empty text.

- <p className="text-center">No items yet! Add one above!</p>

+ <p className="text-center">You have no todo items yet! Add one above!</p>

10. docker build -t getting-started .


11. docker run -dp 127.0.0.1:3000:3000 getting-started
12. You probably saw an error like this:
docker: Error response from daemon: driver failed programming
external connectivity on endpoint laughing_burnell
(bb242b2ca4d67eba76e79474fb36bb5125708ebdabd7f45c8eaf16caaabde9dd):
Bind for 127.0.0.1:3000 failed: port is already allocated.
The error occurred because you aren't able to start the new container while your old
container is still running. The reason is that the old container is already using the
host's port 3000 and only one process on the machine (containers included) can listen
to a specific port. To fix this, you need to remove the old container.
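
If you aren't sure which container is holding the port, the docker ps publish filter is a handy way to find it (a convenience sketch; requires a Docker CLI version that supports this filter):

   docker ps --filter publish=3000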

Remove a container using the CLI

1. Get the ID of the container by using the docker ps command.

   docker ps

2. Use the docker stop command to stop the container. Replace <the-container-id> with the ID from docker ps.

   docker stop <the-container-id>

3. Once the container has stopped, you can remove it by using the docker rm command.

   docker rm <the-container-id>

Note

You can stop and remove a container in a single command by adding the force flag to the
docker rm command. For example: docker rm -f <the-container-id>
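
If you'd rather not copy IDs by hand, you can combine this with command substitution; a sketch, assuming a POSIX shell and that only one running container was started from the getting-started image:

   docker rm -f $(docker ps -q --filter ancestor=getting-started)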

Start the updated app container

1. Now, start your updated app using the docker run command.

   docker run -dp 127.0.0.1:3000:3000 getting-started

2. Refresh your browser on http://localhost:3000 and you should see your updated help text.

13. docker tag getting-started YOUR-USER-NAME/getting-started


14. docker push YOUR-USER-NAME/getting-started
15. In the terminal, start your freshly pushed app.

    docker run -dp 0.0.0.0:3000:3000 YOUR-USER-NAME/getting-started

You should see the image get pulled down and eventually start up.

Tip
You may have noticed that this command binds the port mapping to a different IP
address. Previous docker run commands published ports to 127.0.0.1:3000 on the
host. This time, you're using 0.0.0.0.

Binding to 127.0.0.1 only exposes a container's ports to the loopback interface.


Binding to 0.0.0.0, however, exposes the container's port on all interfaces of the
host, making it available to the outside world.

For more information about how port mapping works, see Networking.
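
If you ever want to double-check what a running container's ports are bound to, the docker port command prints the active mappings (output illustrative):

   docker port <the-container-id>
   3000/tcp -> 0.0.0.0:3000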

 Select the 3000 badge when it appears.

If the 3000 badge doesn't appear, you can select Open Port and specify 3000.

Create a volume and start the container

You can create the volume and start the container using the CLI or Docker Desktop's
graphical interface.

1. Create a volume by using the docker volume create command.

   docker volume create todo-db

2. Stop and remove the todo app container once again with docker rm -f <id>, as it is still running without using the persistent volume.

3. Start the todo app container, but add the --mount option to specify a volume mount. Give the volume a name, and mount it to /etc/todos in the container, which captures all files created at the path.

   docker run -dp 127.0.0.1:3000:3000 --mount type=volume,src=todo-db,target=/etc/todos getting-started

Note

If you're using Git Bash, you must use different syntax for this command.

   docker run -dp 127.0.0.1:3000:3000 --mount type=volume,src=todo-db,target=//etc/todos getting-started

For more details about Git Bash's syntax differences, see Working with Git Bash.

Verify that the data persists

1. Once the container starts up, open the app and add a few items to your todo list.
2. Stop and remove the container for the todo app. Use Docker Desktop or docker ps to
get the ID and then docker rm -f <id> to remove it.
3. Start a new container using the previous steps.
4. Open the app. You should see your items still in your list.
5. Go ahead and remove the container when you're done checking out your list.

You've now learned how to persist data.

Dive into the volume

People often ask, "Where is Docker storing my data when I use a volume?" If you want to know, you can use the docker volume inspect command.

docker volume inspect todo-db


[
    {
        "CreatedAt": "2019-09-26T02:18:36Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/todo-db/_data",
        "Name": "todo-db",
        "Options": {},
        "Scope": "local"
    }
]

The Mountpoint is the actual location of the data on the disk. Note that on most machines, you will need root access to view this directory from the host.
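
For example, on a Linux host you could list the volume's contents directly (a sketch; on Docker Desktop for Mac or Windows this path lives inside Docker's VM, so it isn't directly reachable from the host):

   sudo ls /var/lib/docker/volumes/todo-db/_data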

Use bind mounts


In part 5, you used a volume mount to persist the data in your database. A volume mount is a
great choice when you need somewhere persistent to store your application data.

A bind mount is another type of mount, which lets you share a directory from the host's
filesystem into the container. When working on an application, you can use a bind mount to
mount source code into the container. The container sees the changes you make to the code
immediately, as soon as you save a file. This means that you can run processes in the
container that watch for filesystem changes and respond to them.

In this chapter, you'll see how you can use bind mounts and a tool called nodemon to watch for file changes, and then restart the application automatically. There are equivalent tools in most other languages and frameworks.

Quick volume type comparisons

The following are examples of a named volume and a bind mount using --mount:
 Named volume: type=volume,src=my-volume,target=/usr/local/data
 Bind mount: type=bind,src=/path/to/data,target=/usr/local/data

The following table outlines the main differences between volume mounts and bind mounts.

                                               Named volumes    Bind mounts
Host location                                  Docker chooses   You decide
Populates new volume with container contents   Yes              No
Supports Volume Drivers                        Yes              No

Trying out bind mounts

Before looking at how you can use bind mounts for developing your application, you can run
a quick experiment to get a practical understanding of how bind mounts work.

1. Verify that your getting-started-app directory is in a directory defined in Docker Desktop's file sharing setting. This setting defines which parts of your filesystem you can share with containers. For details about accessing the setting, see the topic for Mac, Windows, or Linux.
2. Open a terminal and change directory to the getting-started-app directory.
3. Run the following command to start bash in an ubuntu container with a bind mount.

 docker run -it --mount type=bind,src="$(pwd)",target=/src ubuntu bash

The --mount type=bind option tells Docker to create a bind mount, where src is the current
working directory on your host machine (getting-started-app), and target is where that
directory should appear inside the container (/src).

 After running the command, Docker starts an interactive bash session in the root directory
of the container's filesystem.

 root@ac1237fad8db:/# pwd
/
root@ac1237fad8db:/# ls
bin dev home media opt root sbin srv tmp var
boot etc lib mnt proc run src sys usr

4. Change directory to the src directory.

   This is the directory that you mounted when starting the container. Listing the contents of this directory displays the same files as in the getting-started-app directory on your host machine.

   root@ac1237fad8db:/# cd src
   root@ac1237fad8db:/src# ls
   Dockerfile  node_modules  package.json  spec  src  yarn.lock

5. Create a new file named myfile.txt.

   root@ac1237fad8db:/src# touch myfile.txt
   root@ac1237fad8db:/src# ls
   Dockerfile  myfile.txt  node_modules  package.json  spec  src  yarn.lock

6. Open the getting-started-app directory on the host and observe that the myfile.txt file is in the directory.

   ├── getting-started-app/
   │   ├── Dockerfile
   │   ├── myfile.txt
   │   ├── node_modules/
   │   ├── package.json
   │   ├── spec/
   │   ├── src/
   │   └── yarn.lock

7. From the host, delete the myfile.txt file.

8. In the container, list the contents of the app directory once more. Observe that the file is now gone.

   root@ac1237fad8db:/src# ls
   Dockerfile  node_modules  package.json  spec  src  yarn.lock

9. Stop the interactive container session with Ctrl + D.

That's all for a brief introduction to bind mounts. This procedure demonstrated how files are
shared between the host and the container, and how changes are immediately reflected on
both sides. Now you can use bind mounts to develop software.

Development containers

Using bind mounts is common for local development setups. The advantage is that the
development machine doesn’t need to have all of the build tools and environments installed.
With a single docker run command, Docker pulls dependencies and tools.

Run your app in a development container

The following steps describe how to run a development container with a bind mount that does
the following:

 Mount your source code into the container
 Install all dependencies
 Start nodemon to watch for filesystem changes

You can use the CLI or Docker Desktop to run your container with a bind mount.
1. Make sure you don't have any getting-started containers currently running.
2. Run the following command from the getting-started-app directory.

   docker run -dp 127.0.0.1:3000:3000 \
       -w /app --mount type=bind,src="$(pwd)",target=/app \
       node:18-alpine \
       sh -c "yarn install && yarn run dev"

The following is a breakdown of the command:

 -dp 127.0.0.1:3000:3000 - same as before. Run in detached (background) mode and create a port mapping
 -w /app - sets the "working directory" or the current directory that the command will run from
 --mount type=bind,src="$(pwd)",target=/app - bind mount the current directory from the host into the /app directory in the container
 node:18-alpine - the image to use. Note that this is the base image for your app from the Dockerfile
 sh -c "yarn install && yarn run dev" - the command. You're starting a shell using sh (alpine doesn't have bash) and running yarn install to install packages and then running yarn run dev to start the development server. If you look in the package.json, you'll see that the dev script starts nodemon, as sketched below.
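
For reference, the relevant part of the app's package.json looks roughly like this (a sketch; the exact contents may differ slightly):

   "scripts": {
     "dev": "nodemon -L src/index.js"
   }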

3. You can watch the logs using docker logs -f <container-id>. You'll know you're ready to go when you see this:

   nodemon -L src/index.js
   [nodemon] 2.0.20
   [nodemon] to restart at any time, enter `rs`
   [nodemon] watching path(s): *.*
   [nodemon] watching extensions: js,mjs,json
   [nodemon] starting `node src/index.js`
   Using sqlite database at /etc/todos/todo.db
   Listening on port 3000

4. When you're done watching the logs, exit out by hitting Ctrl+C.

Develop your app with the development container

Update your app on your host machine and see the changes reflected in the container.

1. In the src/static/js/app.js file, on line 109, change the "Add Item" button to simply say "Add":

   - {submitting ? 'Adding...' : 'Add Item'}
   + {submitting ? 'Adding...' : 'Add'}

2. Save the file.


3. Refresh the page in your web browser, and you should see the change reflected almost immediately because of the bind mount. Nodemon detects the change and restarts the server. It might take a few seconds for the Node server to restart. If you get an error, try refreshing after a few seconds.

4. Feel free to make any other changes you'd like to make. Each time you make a change and save a file, the change is reflected in the container because of the bind mount. When Nodemon detects a change, it restarts the app inside the container automatically. When you're done, stop the container and build your new image using:

   docker build -t getting-started .

Multi container apps


Up to this point, you've been working with single container apps. But, now you will add
MySQL to the application stack. The following question often arises - "Where will MySQL
run? Install it in the same container or run it separately?" In general, each container should do
one thing and do it well. The following are a few reasons to run the container separately:

 There's a good chance you'd have to scale APIs and front-ends differently than databases.
 Separate containers let you version and update versions in isolation.
 While you may use a container for the database locally, you may want to use a managed
service for the database in production. You don't want to ship your database engine with
your app then.
 Running multiple processes will require a process manager (the container only starts one
process), which adds complexity to container startup/shutdown.

And there are more reasons. So, like the following diagram, it's best to run your app in
multiple containers.

Container networking

Remember that containers, by default, run in isolation and don't know anything about other
processes or containers on the same machine. So, how do you allow one container to talk to
another? The answer is networking. If you place the two containers on the same network,
they can talk to each other.

Start MySQL

There are two ways to put a container on a network:

 Assign the network when starting the container.
 Connect an already running container to a network.

In the following steps, you'll create the network first and then attach the MySQL container at startup.

1. Create the network.

 docker network create todo-app

2. Start a MySQL container and attach it to the network. You're also going to define a few environment variables that the database will use to initialize the database. To learn more about the MySQL environment variables, see the "Environment Variables" section in the MySQL Docker Hub listing.

   docker run -d \
       --network todo-app --network-alias mysql \
       -v todo-mysql-data:/var/lib/mysql \
       -e MYSQL_ROOT_PASSWORD=secret \
       -e MYSQL_DATABASE=todos \
       mysql:8.0

In the previous command, you can see the --network-alias flag. In a later section, you'll
learn more about this flag.

Tip

You'll notice a volume named todo-mysql-data in the above command that is mounted at
/var/lib/mysql, which is where MySQL stores its data. However, you never ran a docker
volume create command. Docker recognizes you want to use a named volume and creates
one automatically for you.
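
If you want to confirm that Docker created the volume, you can list and inspect it once the container has started:

   docker volume ls
   docker volume inspect todo-mysql-data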

3. To confirm you have the database up and running, connect to the database and verify that it connects.

   docker exec -it <mysql-container-id> mysql -u root -p

   When the password prompt comes up, type in secret. In the MySQL shell, list the databases and verify you see the todos database.

   mysql> SHOW DATABASES;

   You should see output that looks like this:

   +--------------------+
   | Database           |
   +--------------------+
   | information_schema |
   | mysql              |
   | performance_schema |
   | sys                |
   | todos              |
   +--------------------+
   5 rows in set (0.00 sec)

4. Exit the MySQL shell to return to the shell on your machine.

   mysql> exit

You now have a todos database and it's ready for you to use.

Connect to MySQL

Now that you know MySQL is up and running, you can use it. But, how do you use it? If you
run another container on the same network, how do you find the container? Remember that
each container has its own IP address.

To answer the questions above and better understand container networking, you're going to make use of the nicolaka/netshoot container, which ships with a lot of tools that are useful for troubleshooting or debugging networking issues.

1. Start a new container using the nicolaka/netshoot image. Make sure to connect it to
the same network.

 docker run -it --network todo-app nicolaka/netshoot

2. Inside the container, you're going to use the dig command, which is a useful DNS tool. You're going to look up the IP address for the hostname mysql.

   dig mysql

   You should get output like the following.

   ; <<>> DiG 9.18.8 <<>> mysql
   ;; global options: +cmd
   ;; Got answer:
   ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32162
   ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

   ;; QUESTION SECTION:
   ;mysql.            IN    A

   ;; ANSWER SECTION:
   mysql.    600  IN    A    172.23.0.2

   ;; Query time: 0 msec
   ;; SERVER: 127.0.0.11#53(127.0.0.11)
   ;; WHEN: Tue Oct 01 23:47:24 UTC 2019
   ;; MSG SIZE  rcvd: 44

   In the "ANSWER SECTION", you will see an A record for mysql that resolves to 172.23.0.2 (your IP address will most likely have a different value). While mysql isn't normally a valid hostname, Docker was able to resolve it to the IP address of the container that had that network alias. Remember, you used the --network-alias earlier.

What this means is that your app simply needs to connect to a host named mysql and it'll talk to the database.
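
Conceptually, the app's connection setup boils down to something like this (a simplified sketch, not the todo app's actual code; it assumes the mysql2 driver and the environment variables introduced in the next section):

   const mysql = require('mysql2');

   // "mysql" is the network alias; Docker's built-in DNS resolves it
   // to the IP address of the MySQL container.
   const connection = mysql.createConnection({
     host: process.env.MYSQL_HOST,         // e.g. "mysql"
     user: process.env.MYSQL_USER,         // e.g. "root"
     password: process.env.MYSQL_PASSWORD, // e.g. "secret"
     database: process.env.MYSQL_DB,       // e.g. "todos"
   });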

Run your app with MySQL

The todo app supports the setting of a few environment variables to specify MySQL
connection settings. They are:

 MYSQL_HOST - the hostname for the running MySQL server
 MYSQL_USER - the username to use for the connection
 MYSQL_PASSWORD - the password to use for the connection
 MYSQL_DB - the database to use once connected

Note

While using env vars to set connection settings is generally accepted for development, it's highly discouraged when running applications in production. Diogo Monica, a former lead of security at Docker, wrote a fantastic blog post explaining why.

A more secure mechanism is to use the secret support provided by your container
orchestration framework. In most cases, these secrets are mounted as files in the running
container. You'll see many apps (including the MySQL image and the todo app) also support
env vars with a _FILE suffix to point to a file containing the variable.

As an example, setting the MYSQL_PASSWORD_FILE var will cause the app to use the contents
of the referenced file as the connection password. Docker doesn't do anything to support
these env vars. Your app will need to know to look for the variable and get the file contents.
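
In Node, that lookup might be implemented along these lines (a sketch, not the todo app's exact code):

   const fs = require('fs');

   // Prefer the _FILE variant when present; fall back to the plain variable.
   const password = process.env.MYSQL_PASSWORD_FILE
     ? fs.readFileSync(process.env.MYSQL_PASSWORD_FILE, 'utf8').trim()
     : process.env.MYSQL_PASSWORD;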

You can now start your dev-ready container.

1. Specify each of the previous environment variables, as well as connect the container
to your app network. Make sure that you are in the getting-started-app directory
when you run this command.

   docker run -dp 127.0.0.1:3000:3000 \
       -w /app -v "$(pwd):/app" \
       --network todo-app \
       -e MYSQL_HOST=mysql \
       -e MYSQL_USER=root \
       -e MYSQL_PASSWORD=secret \
       -e MYSQL_DB=todos \
       node:18-alpine \
       sh -c "yarn install && yarn run dev"

2. If you look at the logs for the container (docker logs -f <container-id>), you should see a message similar to the following, which indicates it's using the mysql database.

 nodemon src/index.js
[nodemon] 2.0.20
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] starting `node src/index.js`
Connected to mysql db at host mysql
Listening on port 3000

3. Open the app in your browser and add a few items to your todo list.

4. Connect to the mysql database and prove that the items are being written to the database. Remember, the password is secret.

   docker exec -it <mysql-container-id> mysql -p todos

   And in the mysql shell, run the following:

   mysql> select * from todo_items;


   +--------------------------------------+--------------------+-----------+
   | id                                   | name               | completed |
   +--------------------------------------+--------------------+-----------+
   | c906ff08-60e6-44e6-8f49-ed56a0853e85 | Do amazing things! | 0         |
   | 2912a79e-8486-4bc3-a4c5-460793a575ab | Be awesome!        | 0         |
   +--------------------------------------+--------------------+-----------+

Your table will look different because it has your items. But, you should see them stored
there.

Use Docker Compose


Docker Compose is a tool that helps you define and share multi-container applications. With
Compose, you can create a YAML file to define the services and with a single command, you
can spin everything up or tear it all down.

The big advantage of using Compose is you can define your application stack in a file, keep it
at the root of your project repository (it's now version controlled), and easily enable someone
else to contribute to your project. Someone would only need to clone your repository and
start the app using Compose. In fact, you might see quite a few projects on GitHub/GitLab
doing exactly this now.

Create the Compose file

In the getting-started-app directory, create a file named compose.yaml.

├── getting-started-app/
│ ├── Dockerfile
│ ├── compose.yaml
│ ├── node_modules/
│ ├── package.json
│ ├── spec/
│ ├── src/
│ └── yarn.lock

Define the app service

In part 7, you used the following command to start the application service.

docker run -dp 127.0.0.1:3000:3000 \
    -w /app -v "$(pwd):/app" \
    --network todo-app \
    -e MYSQL_HOST=mysql \
    -e MYSQL_USER=root \
    -e MYSQL_PASSWORD=secret \
    -e MYSQL_DB=todos \
    node:18-alpine \
    sh -c "yarn install && yarn run dev"

You'll now define this service in the compose.yaml file.

1. Open compose.yaml in a text or code editor, and start by defining the name and
image of the first service (or container) you want to run as part of your application.
The name will automatically become a network alias, which will be useful when
defining your MySQL service.

   services:
     app:
       image: node:18-alpine

2. Typically, you will see command close to the image definition, although there is no requirement on ordering. Add the command to your compose.yaml file.

   services:
     app:
       image: node:18-alpine
       command: sh -c "yarn install && yarn run dev"

3. Now migrate the -p 127.0.0.1:3000:3000 part of the command by defining the ports for the service.

   services:
     app:
       image: node:18-alpine
       command: sh -c "yarn install && yarn run dev"
       ports:
         - 127.0.0.1:3000:3000

4. Next, migrate both the working directory (-w /app) and the volume mapping (-v "$(pwd):/app") by using the working_dir and volumes definitions.

   One advantage of Docker Compose volume definitions is you can use relative paths from the current directory.

   services:
     app:
       image: node:18-alpine
       command: sh -c "yarn install && yarn run dev"
       ports:
         - 127.0.0.1:3000:3000
       working_dir: /app
       volumes:
         - ./:/app

5. Finally, you need to migrate the environment variable definitions using the environment key.

   services:
     app:
       image: node:18-alpine
       command: sh -c "yarn install && yarn run dev"
       ports:
         - 127.0.0.1:3000:3000
       working_dir: /app
       volumes:
         - ./:/app
       environment:
         MYSQL_HOST: mysql
         MYSQL_USER: root
         MYSQL_PASSWORD: secret
         MYSQL_DB: todos

Define the MySQL service

Now, it's time to define the MySQL service. The command that you used for that container
was the following:

docker run -d \
--network todo-app --network-alias mysql \
-v todo-mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=todos \
mysql:8.0

1. First define the new service and name it mysql so it automatically gets the network alias. Also specify the image to use.

   services:
     app:
       # The app service definition
     mysql:
       image: mysql:8.0

2. Next, define the volume mapping. When you ran the container with docker run, Docker created the named volume automatically. However, that doesn't happen when running with Compose. You need to define the volume in the top-level volumes: section and then specify the mountpoint in the service config. By simply providing only the volume name, the default options are used.

   services:
     app:
       # The app service definition
     mysql:
       image: mysql:8.0
       volumes:
         - todo-mysql-data:/var/lib/mysql

   volumes:
     todo-mysql-data:

3. Finally, you need to specify the environment variables.

   services:
     app:
       # The app service definition
     mysql:
       image: mysql:8.0
       volumes:
         - todo-mysql-data:/var/lib/mysql
       environment:
         MYSQL_ROOT_PASSWORD: secret
         MYSQL_DATABASE: todos

   volumes:
     todo-mysql-data:

At this point, your complete compose.yaml should look like this:

services:
  app:
    image: node:18-alpine
    command: sh -c "yarn install && yarn run dev"
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

  mysql:
    image: mysql:8.0
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:

Run the application stack

Now that you have your compose.yaml file, you can start your application.

1. Make sure no other copies of the containers are running first. Use docker ps to list
the containers and docker rm -f <ids> to remove them.
2. Start up the application stack using the docker compose up command. Add the -d
flag to run everything in the background.

docker compose up -d

When you run the previous command, you should see output like the following:

   Creating network "app_default" with the default driver
   Creating volume "app_todo-mysql-data" with default driver
   Creating app_app_1    ... done
   Creating app_mysql_1  ... done

You'll notice that Docker Compose created the volume as well as a network. By default,
Docker Compose automatically creates a network specifically for the application stack
(which is why you didn't define one in the Compose file).

3. Look at the logs using the docker compose logs -f command. You'll see the logs from each of the services interleaved into a single stream. This is incredibly useful when you want to watch for timing-related issues. The -f flag follows the log, so it gives you live output as it's generated.

If you have run the command already, you'll see output that looks like this:

   mysql_1  | 2019-10-03T03:07:16.083639Z 0 [Note] mysqld: ready for connections.
   mysql_1  | Version: '8.0.31' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
   app_1    | Connected to mysql db at host mysql
   app_1    | Listening on port 3000

The service name is displayed at the beginning of the line (often colored) to help distinguish
messages. If you want to view the logs for a specific service, you can add the service name to
the end of the logs command (for example, docker compose logs -f app).

4. At this point, you should be able to open your app in your browser on http://localhost:3000 and see it running.

See the app stack in Docker Dashboard

If you look at the Docker Dashboard, you'll see that there is a group named getting-started-app. This is the project name from Docker Compose, used to group the containers together. By default, the project name is simply the name of the directory that the compose.yaml was located in.
If you expand the stack, you'll see the two containers you defined in the Compose file. The
names are also a little more descriptive, as they follow the pattern of <service-name>-
<replica-number>. So, it's very easy to quickly see what container is your app and which
container is the mysql database.

Tear it all down

When you're ready to tear it all down, simply run docker compose down or hit the trash can
on the Docker Dashboard for the entire app. The containers will stop and the network will be
removed.

Warning

By default, named volumes in your compose file are not removed when you run docker
compose down. If you want to remove the volumes, you need to add the --volumes flag.

The Docker Dashboard does not remove volumes when you delete the app stack.
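
So, to tear down the stack and delete its named volumes (including the database data) in one go, run:

   docker compose down --volumes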

Image-building best practices


Image layering

Using the docker image history command, you can see the command that was used to
create each layer within an image.

1. Use the docker image history command to see the layers in the getting-started
image you created.

docker image history getting-started

You should get output that looks something like the following.

   IMAGE          CREATED          CREATED BY                                      SIZE      COMMENT
   a78a40cbf866   18 seconds ago   /bin/sh -c #(nop) CMD ["node" "src/index.j…     0B
   f1d1808565d6   19 seconds ago   /bin/sh -c yarn install --production            85.4MB
   a2c054d14948   36 seconds ago   /bin/sh -c #(nop) COPY dir:5dc710ad87c789593…   198kB
   9577ae713121   37 seconds ago   /bin/sh -c #(nop) WORKDIR /app                  0B
   b95baba1cfdb   13 days ago      /bin/sh -c #(nop) CMD ["node"]                  0B
   <missing>      13 days ago      /bin/sh -c #(nop) ENTRYPOINT ["docker-entry…    0B
   <missing>      13 days ago      /bin/sh -c #(nop) COPY file:238737301d473041…   116B
   <missing>      13 days ago      /bin/sh -c apk add --no-cache --virtual .bui…   5.35MB
   <missing>      13 days ago      /bin/sh -c #(nop) ENV YARN_VERSION=1.21.1       0B
   <missing>      13 days ago      /bin/sh -c addgroup -g 1000 node && addu…       74.3MB
   <missing>      13 days ago      /bin/sh -c #(nop) ENV NODE_VERSION=12.14.1      0B
   <missing>      13 days ago      /bin/sh -c #(nop) CMD ["/bin/sh"]               0B
   <missing>      13 days ago      /bin/sh -c #(nop) ADD file:e69d441d729412d24…   5.59MB

Each of the lines represents a layer in the image. The display here shows the base at the
bottom with the newest layer at the top. Using this, you can also quickly see the size of each
layer, helping diagnose large images.

2. You'll notice that several of the lines are truncated. If you add the --no-trunc flag, you'll get the full output.

   docker image history --no-trunc getting-started

Layer caching

Now that you've seen the layering in action, there's an important lesson to learn to help
decrease build times for your container images. Once a layer changes, all downstream layers
have to be recreated as well.

Look at the following Dockerfile you created for the getting started app.

# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]

Going back to the image history output, you see that each command in the Dockerfile
becomes a new layer in the image. You might remember that when you made a change to the
image, the yarn dependencies had to be reinstalled. It doesn't make much sense to ship around
the same dependencies every time you build.

To fix it, you need to restructure your Dockerfile to help support the caching of the
dependencies. For Node-based applications, those dependencies are defined in the
package.json file. You can copy only that file in first, install the dependencies, and then
copy in everything else. Then, you only recreate the yarn dependencies if there was a change
to the package.json.

1. Update the Dockerfile to copy in the package.json first, install dependencies, and
then copy everything else in.

 # syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --production
COPY . .
CMD ["node", "src/index.js"]

2. Create a file named .dockerignore in the same folder as the Dockerfile with the following contents.

   node_modules

   .dockerignore files are an easy way to selectively copy only image-relevant files. You can read more about this here. In this case, the node_modules folder should be omitted in the second COPY step because otherwise, it would possibly overwrite files which were created by the command in the RUN step.

3. Build a new image using docker build.

   docker build -t getting-started .

You should see output like the following.

   [+] Building 16.1s (10/10) FINISHED
   => [internal] load build definition from Dockerfile
   => => transferring dockerfile: 175B
   => [internal] load .dockerignore
   => => transferring context: 2B
   => [internal] load metadata for docker.io/library/node:18-alpine
   => [internal] load build context
   => => transferring context: 53.37MB
   => [1/5] FROM docker.io/library/node:18-alpine
   => CACHED [2/5] WORKDIR /app
   => [3/5] COPY package.json yarn.lock ./
   => [4/5] RUN yarn install --production
   => [5/5] COPY . .
   => exporting to image
   => => exporting layers
   => => writing image sha256:d6f819013566c54c50124ed94d5e66c452325327217f4f04399b45f94e37d25
   => => naming to docker.io/library/getting-started

4. Now, make a change to the src/static/index.html file. For example, change the <title> to "The Awesome Todo App".

5. Build the Docker image now using docker build -t getting-started . again. This time, your output should look a little different.

   [+] Building 1.2s (10/10) FINISHED
   => [internal] load build definition from Dockerfile
   => => transferring dockerfile: 37B
   => [internal] load .dockerignore
   => => transferring context: 2B
   => [internal] load metadata for docker.io/library/node:18-alpine
   => [internal] load build context
   => => transferring context: 450.43kB
   => [1/5] FROM docker.io/library/node:18-alpine
   => CACHED [2/5] WORKDIR /app
   => CACHED [3/5] COPY package.json yarn.lock ./
   => CACHED [4/5] RUN yarn install --production
   => [5/5] COPY . .
   => exporting to image
   => => exporting layers
   => => writing image sha256:91790c87bcb096a83c2bd4eb512bc8b134c757cda0bdee4038187f98148e2eda
   => => naming to docker.io/library/getting-started
First off, you should notice that the build was much faster. And, you'll see that several
steps are using previously cached layers. Pushing and pulling this image and updates
to it will be much faster as well.

Multi-stage builds

Multi-stage builds are an incredibly powerful tool that lets you use multiple stages to create an image. They offer several advantages:

 Separate build-time dependencies from runtime dependencies
 Reduce overall image size by shipping only what your app needs to run

Maven/Tomcat example

When building Java-based applications, you need a JDK to compile the source code to Java
bytecode. However, that JDK isn't needed in production. Also, you might be using tools like
Maven or Gradle to help build the app. Those also aren't needed in your final image. Multi-
stage builds help.

# syntax=docker/dockerfile:1
FROM maven AS build
WORKDIR /app
COPY . .
RUN mvn package

FROM tomcat
COPY --from=build /app/target/file.war /usr/local/tomcat/webapps

In this example, you use one stage (called build) to perform the actual Java build using
Maven. In the second stage (starting at FROM tomcat), you copy in files from the build
stage. The final image is only the last stage being created, which can be overridden using the
--target flag.
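
For example, you could build an image that stops at the build stage, which can be handy for debugging the compile step (the output tag here is just an illustrative name):

   docker build --target build -t app-build .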

React example

When building React applications, you need a Node environment to compile the JS code
(typically JSX), SASS stylesheets, and more into static HTML, JS, and CSS. If you aren't
doing server-side rendering, you don't even need a Node environment for your production
build. You can ship the static resources in a static nginx container.

# syntax=docker/dockerfile:1
FROM node:18 AS build
WORKDIR /app
COPY package* yarn.lock ./
RUN yarn install
COPY public ./public
COPY src ./src
RUN yarn run build

FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html

The previous Dockerfile uses the node:18 image to perform the build (maximizing layer caching) and then copies the output into an nginx container.

Summary

In this section, you learned a few image building best practices, including layer caching and
multi-stage builds.

Related information:

 .dockerignore
 Dockerfile reference
 Build with Docker guide
 Dockerfile best practices

Next steps

In the next section, you'll learn about additional resources you can use to continue learning
about containers.

What next after getting started


Although you're done with the get started guide, there's still a lot more to learn about
containers.

Here are a few other areas to look at next.

Container orchestration

Running containers in production is tough. You don't want to log into a machine and simply
run a docker run or docker compose up. Why not? Well, what happens if the containers
die? How do you scale across several machines? Container orchestration solves this problem.
Tools like Kubernetes, Swarm, Nomad, and ECS all help solve this problem, all in slightly
different ways.

The general idea is that you have managers who receive the expected state. This state might
be "I want to run two instances of my web app and expose port 80." The managers then look
at all of the machines in the cluster and delegate work to worker nodes. The managers watch
for changes (such as a container quitting) and then work to make the actual state reflect the
expected state.
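
As a taste of what expected state looks like in practice, here is a minimal, hypothetical Kubernetes manifest expressing "run two instances of my web app" (all names and the image are illustrative):

   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: web-app
   spec:
     replicas: 2
     selector:
       matchLabels:
         app: web-app
     template:
       metadata:
         labels:
           app: web-app
       spec:
         containers:
           - name: web-app
             image: getting-started
             ports:
               - containerPort: 3000
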
Cloud Native Computing Foundation projects

The CNCF is a vendor-neutral home for various open-source projects, including Kubernetes, Prometheus, Envoy, Linkerd, NATS, and more. You can view the graduated and incubated projects, as well as the entire CNCF Landscape, on the CNCF website. There are a lot of projects to help solve problems around monitoring, logging, security, image registries, messaging, and more.

Getting started video workshop

Docker recommends watching the video workshop from DockerCon 2022. Watch the entire
video or use the following links to open the video at a particular section.

 Docker overview and installation

 Pull, run, and explore containers
 Build a container image

 Containerize an app

 Connect a DB and set up a bind mount

 Deploy a container to the cloud

Creating a container from scratch

If you'd like to see how containers are built from scratch, Liz Rice from Aqua Security has a
fantastic talk in which she creates a container from scratch in Go. While the talk does not go
into networking, using images for the filesystem, and other advanced topics, it gives a deep
dive into how things are working.

Language-specific guides

If you are looking for information on how to containerize an application using your favorite
language, see the Language-specific guides.

https://docs.docker.com/get-started/02_our_app/
