Hackersploit: Docker Security Essentials: A Guide To The Docker Platform and Containers
All material contained herein is the Intellectual Property of
HackerSploit & Linode LLC and cannot be reproduced in any way,
or stored in a retrieval system, or transmitted in any form or by
any means, electronic, mechanical, photocopying, recording,
scanning or otherwise, without the consent of HackerSploit or
Linode LLC. Please be advised that all labs and tests are to be
conducted within the parameters outlined within the text. The
use of other domains or IP addresses is prohibited.
This guide only focuses on securing the Docker platform on Linux as it is the most widely utilized and
deployed version of the technology.
To follow along with the techniques demonstrated in this guide, you need to have a Linux
server with the following services installed and running:
· Docker
Note: The demonstrations illustrated in this guide have been performed on an Ubuntu 20.04
server running Docker CE. The commands are distribution agnostic with the exception of
package names, package managers, and the respective init systems.
TECHNICAL REQUIREMENTS
· Fundamental knowledge of Docker and Docker CLI commands.
· Functional knowledge of Linux terminal commands.
· Fundamental knowledge of systemd and Linux init systems.
The most common mistake made by individuals and companies is the assumption that the Docker
platform is secure out of the box. As with many platforms, this is not the case, and implementations
of the Docker platform need to be secured from the ground up. It is for this reason that the security
of the Docker platform needs to be taken seriously, and that a functional security policy should be
formed to address the security issues and misconfigurations of the platform.
Another impediment that prevents the adoption and implementation of the technology is the
abstraction and complexity of the component technologies that make up the platform. Until recently,
containers were not considered a mainstream alternative to virtual machines, primarily because of the
technical and idiosyncratic nature of containerization technologies like LXC. Docker was developed
to simplify the adoption of containerization technologies and make them available to a wider
demographic of users. To its credit, it has achieved this objective and is constantly being improved
to make the process more efficient. However, the process of securing Docker can still be unintuitive
for organizations.
This ebook aims to provide a clear and concise guide to securing the Docker platform and consequently
Docker containers at runtime. This process needs to be approached systematically and requires
a functional knowledge of the components that make up the platform, and of the two primary Linux
kernel primitives that make containerization possible: namespaces and cgroups.
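Both primitives can be inspected directly from a shell on any Linux host. The following sketch shows where the kernel exposes them; no Docker-specific tooling is assumed:

```shell
# Namespaces: every process belongs to one namespace of each type, and a
# container is simply a process that has been given a fresh set of them.
ls -l /proc/self/ns/
# Each entry names its type and inode, e.g. mnt:[4026531841]
readlink /proc/self/ns/mnt
# cgroups: the cgroup membership of the current process
cat /proc/self/cgroup
```

Comparing these files for a process inside and outside a container reveals different namespace inodes and cgroup paths, which is the whole trick behind containerization.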
· In The Docker Platform section, we will begin the process by explaining the various components
that make up the Docker platform.
· In the Auditing Docker Security section, we will explore the process of performing a security audit
of the Docker platform. An audit identifies vulnerabilities in the configuration of the components
that make up the platform.
· In the next two sections, we will begin the process of securing the Docker host and the Docker
daemon to ensure that we have a secure base to operate from:
· Securing the Docker Host
· Securing the Docker Daemon
· The remaining sections of the guide will conclude by taking a look at the various ways of securing
containers and the process of building secure Docker images:
· Container Security Best Practices
· Controlling Container Resource Consumption with Control Groups (cgroups)
· Implementing Access Control with AppArmor
· Limiting Container System Calls with seccomp
· Vulnerability Scanning for Docker Containers
· Building Secure Docker Images
Let’s begin the process by taking a look at how the Docker platform is designed and organized.
The following diagram outlines the various components that make up the platform and their
inter-connectivity.
In order to understand the process of securing the Docker daemon, we need to take a closer look at how
communication between these components is facilitated:
· Communication between the components that make up the Docker platform is facilitated
through the use of several APIs.
· The Docker client communicates with the Docker daemon through a Unix domain socket
or remotely through a TCP socket.
· Commands sent from the Docker client are sent to the Docker daemon.
· Collectively, the Docker APIs, Docker CLI, and Docker daemon are referred to
as the Docker Engine.
· The Docker daemon, in turn, forwards commands to Containerd, which is another daemon that
manages the containers and performs related functions, like pushing and pulling images and
container storage.
· Containerd is an industry-standard container management solution that’s also used by other
platforms, like Kubernetes.
· Communication between the Docker daemon and Containerd is facilitated through gRPC,
an open source remote procedure call framework.
· Furthermore, Containerd itself utilizes a runtime specification, typically runc, to create and
manage the actual containers.
This modularization of components is not random. Docker initially bundled all functionality into
the Docker daemon, which centralized most of the functionality, consequently making it bloated
and leading to a reduction in performance. This centralized structure was later overhauled in favour
of a modularized structure, and containerd was created as part of this modularization effort. The
modularization of components also makes it much simpler to secure as each component can be
handled and secured individually.
Now that you have an understanding of how the Docker platform is structured and organized, we can
begin the process of auditing the security of the Docker host.
Before we get started with the security auditing process, we need to understand what a security audit
is and why it is important in securing a system.
A security audit is a systematic evaluation of the security and configuration of a particular information
system. Security audits are used to measure the security performance of a system against a list
of checks, best practices, and standards.
In the case of Docker, we will be using the CIS Docker Benchmark, which is a consensus driven security
guideline for the Docker platform. The CIS Docker Benchmark provides us with a solid set of guidelines
and checks that can be used to test the security of the Docker platform and establish a baseline security
level. More information about the CIS Docker Benchmark can be found here: https://www.cisecurity.
org/benchmark/docker/
The process of auditing the security of Docker can be automated using various tools. In this guide,
we will be using the Docker Bench for Security utility developed by Docker, Inc.
Docker Bench for Security is an open source Bash script that checks for various common security best
practices of deploying Docker in production environments. The tests are all automated and are based
on the CIS Docker Benchmark. More information about Docker Bench for Security can be found
on GitHub: https://github.com/docker/docker-bench-security
Now that you have an understanding of security audit concepts and the tools and benchmarks we will
be using, we can begin the process of performing a security audit on our Docker host.
The auditing process can be performed by following the procedures outlined below:
1. You first need to clone the docker/docker-bench-security GitHub repository on your Docker
host. This can be done by running the following command:
git clone https://github.com/docker/docker-bench-security.git
2. After cloning the repository, you will need to navigate into the docker-bench-security
repository that you just cloned:
cd docker-bench-security
3. The cloned directory will contain a Bash script named docker-bench-security.sh. We can run
this script to perform the Docker security audit by running the following command:
sudo ./docker-bench-security.sh
4. When the script is executed, it will perform all the necessary security checks. Once completed,
it will provide you with a baseline security score as highlighted in the image below.
SECTION C - SCORE
[INFO] Checks: 84
[INFO] Score: 0
The initial baseline security score will be valued at zero, indicating that all checks failed. In this case, we
can identify what needs to be secured by analyzing the results produced by the script, as highlighted in
the image below.
Each check performed by the script is numbered and is flagged with the corresponding color code
based on whether the check was successful.
The script also provides a list of recommendations regarding what components need to be secured
for every check. For example, as shown in the image below, we need to enable auditing for the Docker
daemon:
[WARN] 1.1.3 Ensure auditing is configured for the Docker daemon (Automated)
Note: In this context, the warning is specifically referring to using the Linux Audit
Framework. This topic will be introduced later, in the Setting Up Audit Rules for
Docker Artifacts section.
The script also sorts the results based on the following categories:
· Host configuration
· General configuration
· Docker daemon configuration
· Docker swarm configuration
This categorization of checks is very useful as it separates the checks for each component,
streamlining the process. The first component that we need to secure based on the results
is the Docker host. Let’s take a look at how to secure the Docker host and implement the security
practices recommended by the Docker Bench for Security tool.
HOST SECURITY
The security of the host kernel and operating system will have a direct correlation to the security of
your containers, given the fact that the containers utilize the host kernel. It is therefore vitally important
to keep your host secure. The following guidelines outline various security best practices you should
consider when securing your Docker host:
The process of securing the host OS is multi-faceted and leverages multiple security audit tools in
order to establish a baseline security level. This process will result in a Docker host that satisfies the CIS
Docker Benchmark.
1. First, we will run an operating system security audit tool called Lynis. This will help us secure
and harden the host OS. We will implement the recommendations made by Lynis.
2. After we harden the host OS, we will return to the Docker Bench for Security to enable and set
up auditing for our Docker components and artifacts.
Lynis is an extensible security audit tool for computer systems running Linux, FreeBSD, macOS,
OpenBSD, Solaris, and other Unix derivatives. It assists system administrators and security professionals
with scanning a system and its security defenses, with the final goal being system hardening.
Installing Lynis
Lynis is available as a package for most Linux distributions. We can install it by running the following
command on Debian-based systems:
sudo apt update && sudo apt install lynis
To display all the options and commands available for Lynis, we can run the following command:
sudo lynis show commands
Before we get started with scanning, we need to ensure that Lynis is up to date. To check if we are
running the latest version, we can run the following command:
sudo lynis update info
== Lynis ==
Version : 3.0.0
Status : Up-to-date
Release date : 2020-03-20
Project page : https://cisofy.com/lynis/
Source code : https://github.com/CISOfy/lynis
Latest package : https://packages.cisofy.com/
2007-2020, CISOfy - https://cisofy.com/lynis/
We can now run a full system audit with the sudo lynis audit system command. Lynis will output
a lot of information, which is also stored under the /var/log/lynis.log file for easier
access. The summary of the system audit will reveal important information about your system’s security
posture and various security misconfigurations and vulnerabilities.
Lynis will also generate output on how these vulnerabilities and misconfigurations can be fixed or
tweaked.
Components:
- Firewall [V]
- Malware scanner [X]
Scan mode:
Normal [V] Forensics [ ] Integration [ ] Pentest [ ]
Lynis modules:
- Compliance status [?]
- Security audit [V]
- Vulnerability scan [V]
Files:
- Test and debug information : /var/log/lynis.log
- Report data : /var/log/lynis-report.dat
Great, no warnings
To increase our hardening index score, Lynis provides us with helpful suggestions that detail the various
security configurations we need to make.
Suggestions (50):
---------------------------
• This release is more than 4 months old. Consider upgrading [LYNIS]
  https://cisofy.com/lynis/controls/LYNIS/
• Set a password on GRUB boot loader to prevent altering boot configuration [BOOT-5122]
  https://cisofy.com/lynis/controls/BOOT-5122/
• Check PAM configuration, add rounds if applicable and expire passwords to encrypt [AUTH-9229]
  https://cisofy.com/lynis/controls/AUTH-9229/
We can now follow the recommendations provided by Lynis to secure and harden our Docker host.
· The first step is to add and configure the necessary user accounts on the system.
· We then need to set up the various groups that will be used to assign permissions to particular
users with specific roles.
· Afterward, we will begin specifying file permissions and assigning ownership of particular files and
directories. This will help us set up a system of accountability and defense in depth.
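As a small illustration of the permission model used throughout this section (the path /tmp/app.conf is a placeholder for a sensitive file):

```shell
# Hypothetical example: a config file only its owner may read or write
touch /tmp/app.conf
chmod 600 /tmp/app.conf          # rw for the owner, nothing for group/others
stat -c '%a %U' /tmp/app.conf    # prints the octal mode and the owning user
```

The same pattern of explicit, minimal permissions applies to the Docker configuration files and certificates handled later in this guide.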
Linux has multi-user support and, as a result, multiple users can access the system simultaneously.
This can be seen as both an advantage and disadvantage from a security perspective in that multiple
accounts offer multiple access vectors for attackers and therefore increase the overall risk of the server.
To counter this concern, we must ensure that user accounts are set up and sorted accordingly in terms
of their privileges and roles. For example, having multiple users on a Linux server with root privileges
is extremely dangerous as an attacker will only need to compromise one account to get root access
on the system. We can easily solve this issue by segregating permissions for users based on their roles.
1. The useradd command creates users on your system and has this general syntax:
useradd <arguments> <username>
2. The arguments that can be included are used to specify particular information and
configurations for the user account. Some of these options are described in the table below:
Argument Function
-c Specifies a comment, conventionally the user's full name
-s Specifies the user's default login shell
-m Creates the user's home directory
3. Now that we understand the arguments we can specify or use when creating a user, let us create
the user account:
sudo useradd -c "<Full Name>" -s /bin/bash -m <username>
4. We have used the -c argument to specify the full name of the user, and we have used the -s
argument to specify that Bash should be the default shell for the new user. The -m argument
will create the home directory for the user. We finally end the statement with the username
of the account.
5. We now need to specify the password for the user account. We can do this with the following
command:
passwd <username>
6. We will then be prompted to enter a password for the user. Make sure to use a strong password
that follows the specification in your organization’s security policy, if applicable.
When setting up access on a Linux server, some users may require sudo access to perform
administrative tasks like updating packages and installing software. By default, users do not have sudo
access, which means they are unable to perform these administrative tasks.
Giving a user sudo access involves adding the user to a sudo-enabled group. By default, this group is just
called sudo on Debian-based systems, and on Fedora and RedHat-based systems this group is called
wheel. One way to add the user we have just created to the sudo group is by running the following
command:
sudo usermod -aG sudo <username>
Docker implements access control for the Docker daemon through a Linux group with specific
permissions. Members of this group will have the privileges required to interact with the Docker
daemon. As a result, only authorized users that require access should be added to this group.
We can add our custom user to this group by running the following command:
sudo usermod -aG docker <username>
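Because membership in the docker group is effectively root-equivalent, it is worth periodically verifying exactly who belongs to it. A quick check (the commands degrade gracefully if the group does not exist on the machine):

```shell
# List current members of the docker group (empty if the group has no
# members or does not exist on this machine):
getent group docker | cut -d: -f4
# Check whether the current user is a member:
id -nG | tr ' ' '\n' | grep -x docker || echo "not in docker group"
```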
The root user’s privileges can be abused to run any commands provided (malicious or otherwise),
including modifying the passwords of other users on the system, consequently locking them
out. Common Linux security practices recommend disabling root logins and creating a separate
administrative account, which can be assigned sudo privileges to run certain commands with root
privileges. Following this step will help mitigate the threats to the root account and will reduce the
overall attack surface of the host.
1. We can change the root user's default login shell with the chsh utility:
sudo chsh root
2. After running the command, we will be prompted to enter the absolute path of the shell we want
to switch to. Specify /usr/sbin/nologin as the shell at the prompt.
3. After you have entered the absolute path to the nologin shell, we can try logging in to the root
account. When attempting to log in, the message "This account is currently not available."
will be displayed.
4. These changes will prevent unauthorized users from using the root account, because we have
not specified a valid shell. However, users with sudo privileges will still be able to run all
administrative commands unless the privileges are constrained to certain commands.
Note: Aside from using the chsh utility, another way to update the user’s shell is to modify
the /etc/passwd file.
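Before and after the change, the shell currently assigned to root can be read straight from its passwd entry (the seventh colon-separated field):

```shell
# Inspect the shell currently assigned to root:
getent passwd root | cut -d: -f7
```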
Note: Users will still be able to log in to the account remotely via SSH keys, if they have been
set up. The process of securing SSH is introduced in the next section.
We can lock the password of the root account by running the passwd command with the -l option:
sudo passwd -l root
If you want to unlock the password for a specific account, you can use the -u unlock option for the
passwd command:
sudo passwd -u root
This will unlock the password for the root account and you will be able to access the account via
password authentication.
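Under the hood, passwd -l simply prefixes the account's password hash in /etc/shadow with an exclamation mark, which makes the hash invalid so that no password can ever match it; -u removes the prefix again. An illustrative (not real) shadow entry for a locked root account looks like this:

```
root:!$6$examplesalt$examplehash:19000:0:99999:7:::
```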
Now that we have disabled root user logins, we will be using the custom user account that we have
created going forward. The next step in authentication security involves securing the remote access
protocol, which in most cases will be SSH.
SSH AUTHENTICATION
If your system did not have root password logins disabled, then any attacker could attempt to gain root
access by performing password brute-force attacks on the SSH protocol. So, it’s important to disable
root login via SSH as well.
It’s also important to do this even if you do have root password logins disabled, because it adds an extra
layer of security. Furthermore, it prevents root logins with alternative authentication methods,
like key-based authentication, which will be explored in the next section.
1. We can disable root login via SSH by modifying the OpenSSH server configuration file found
in /etc/ssh/sshd_config.
2. After opening the file with a text editor like nano or vim, we will be greeted with extensive
configuration options that we can use to modify how the SSH server will function.
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options override the
# default value.
#Port 22
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
#Logging
#SyslogFacility AUTH
#LogLevel INFO
#Authentication:
#LoginGraceTime 2m
#PermitRootLogin no
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10
3. Locate the PermitRootLogin option, uncomment it, and set its value to no:
PermitRootLogin no
4. As you can see in the image above, we have set the option from yes to no. This will prevent users
from authenticating via SSH as the root user.
5. After saving the file, we now need to restart the SSH service. This can be done by running the
following command:
sudo systemctl restart sshd
6. After restarting the SSH daemon on the server, we can try logging in to the root account remotely
via SSH. As you can see in the image below we get a Permission Denied error even after
entering the correct root password. This confirms that we have successfully disabled root logins
via SSH.
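Taken together, the relevant portion of /etc/ssh/sshd_config after this section would look like the following sketch. PermitRootLogin comes from the steps above; the other two values are common hardening choices, not requirements:

```
# Disallow root logins over SSH entirely
PermitRootLogin no
# Tighten from the default of 6 authentication attempts per connection
MaxAuthTries 3
# Drop unauthenticated connections sooner than the 2m default
LoginGraceTime 1m
```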
Key-based authentication utilizes asymmetric encryption to generate two keys that are used for the
encryption and decryption of data. These two keys are called the public key and the private key,
and together they are called a public-private key pair.
The public key is used to encrypt data and only the corresponding private key can decrypt the data.
As a result, the private key must be kept private and secure, whereas the public key can be shared.
1. SSH key pairs can be generated on the client by using the ssh-keygen utility. We can generate
the key pair by running the following command:
ssh-keygen -t rsa
2. This will generate the public and private RSA key pair, and you will be prompted to specify the
directory to which you want to save the keys. You will also be prompted to specify a passphrase
for the key pair. This is an additional level of security that you can use to secure your key pair.
3. The key pair will be generated and saved in your ~/.ssh/ directory. In this directory, you
will find your public key with the .pub extension (e.g. id_rsa.pub), and your private key with
no file extension (e.g. id_rsa).
4. Your public key now needs to be uploaded to your server. We can do this with the ssh-copy-id
utility:
ssh-copy-id <username@SERVER-IP>
5. We are now able to log in directly without entering a user password. Note that if you previously
supplied a passphrase to the ssh-keygen utility, you will be prompted to enter that passphrase
when logging in.
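For reference, what ssh-copy-id does on the server side is roughly equivalent to the following sketch (the key string is a placeholder for the contents of your id_rsa.pub):

```shell
# Ensure the .ssh directory exists with the permissions sshd requires
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Append the public key (placeholder shown) to the authorized keys list
echo "ssh-rsa AAAAB3Nza...placeholder user@client" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

sshd will refuse key authentication if these files are group- or world-writable, which is why the chmod steps matter.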
We can now log in with our private key. The next step is to disable password authentication completely,
which will ensure that no user will be able to authenticate remotely with SSH without their respective
key pair.
1. This can be done by modifying the /etc/ssh/sshd_config OpenSSH configuration file and
setting the PasswordAuthentication option to no:
PasswordAuthentication no
2. After saving the new changes to the OpenSSH configuration file, restart the SSH daemon:
sudo systemctl restart sshd
3. The SSH server will restart with the new changes applied.
We have now secured the new user account and root account from unauthorized remote access. We are
only able to log in to the user account with the unique private key from the key pair we generated.
After implementing the recommendations provided by Lynis, we can run a system audit with Lynis again
to verify the application of the changes we have made.
Components:
- Firewall [V]
- Malware scanner [V]
As highlighted in the preceding image, our hardening index should have increased as a direct
consequence of following the security recommendations.
We can now move on to the next step in securing the host, which involves setting up auditing for Docker
artifacts.
During the initial Docker security audit we performed with the Docker Bench for Security utility, we were
able to identify several host configuration warnings that required us to set up audit rules for specific
Docker artifacts. Examples of these artifacts include configuration files, binaries, and systemd service
files.
We can run the Docker Bench for Security utility again. This time, we can limit our results to the
host configuration to focus on just those checks. This can be done by running the following commands:
cd ~/docker-bench-security/
sudo ./docker-bench-security.sh -c host_configuration
The script should output a list of Docker artifacts that require audit rules. Before we can enable auditing
of these artifacts, we need to further explore the concept of auditing files and objects on Linux systems.
File and object auditing allows us to log and analyze all the activity of an object. Auditing on Linux is
facilitated through the Linux Audit Framework. In the context of auditing, an object is a system resource
like a file, directory, application, or service. Docker requires us to have audit rules for core artifacts,
like the Docker daemon, in order to ensure that all activity from these artifacts is logged for security
purposes.
The Linux Audit Framework is used to set up and configure auditing policies for user-space processes
like Docker. The following diagram outlines the various components that make up the Linux Audit
Framework and how they interact with each other:
All auditing is handled by the Linux kernel. Whenever a system call is made by a user-space service like
Docker, the kernel will check the audit policy to determine whether the service in question has any audit
rules. If it does, it will send the audit event to Auditd, and consequently, Auditd will send the event log to
the audit.log for storage and analysis. Tools like aureport can be used to perform the analysis.
When Auditd is started or restarted, it will load the audit rules saved in the audit.rules file. We will take a
look at how to create audit rules in the upcoming sections.
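As a preview of the rule format used in the next steps: -w registers a watch on a file or directory, and -k attaches a filter key that can later be used to search the logs. A persistent audit.rules fragment covering common Docker artifacts might look like this (the paths follow the CIS checks shown earlier):

```
-w /usr/bin/dockerd -k docker
-w /var/lib/docker -k docker
-w /etc/docker -k docker
```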
Argument Function
-w Inserts a watch on the specified file or directory
-k Sets a filter key that can be used to search the audit logs
-l Lists all currently loaded audit rules
We need to create audit rules for all the artifacts listed in the audit results from the Docker Bench for
Security utility:
1. In the previous Docker Bench for Security report, warnings like the following appeared:
[WARN] 1.1.3 Ensure auditing is configured for the Docker daemon (Automated)
2. For this warning, create a corresponding audit rule with a command like this:
sudo auditctl -w /usr/bin/dockerd -k docker
3. We can then verify that the rule was added by listing the active audit rules:
sudo auditctl -l
4. This will list out all the created audit rules for the Docker artifacts. You should have similar rules
to the ones highlighted in the image below:
5. After creating the rules, we need to save them to the audit.rules file to make them permanent.
This can be done by copying and pasting the audit rules from the output of the
sudo auditctl -l command to the audit.rules file located in:
/etc/audit/rules.d/audit.rules
6. After adding the rules to the audit.rules file, you will need to restart the auditd service.
This can be done by running the following command:
sudo systemctl restart auditd
7. After restarting auditd, we can re-run the Docker Bench for Security tool to confirm that the
audit rules have been enabled and are active:
cd ~/docker-bench-security/
sudo ./docker-bench-security.sh -c host_configuration
[PASS] 1.1.3 - Ensure auditing is configured for the Docker daemon (Automated)
[PASS] 1.1.4 - Ensure auditing is configured for Docker files and directories -/run/containerd (Automated)
[PASS] 1.1.5 - Ensure auditing is configured for Docker files and directories - /var/lib/docker (Automated)
[PASS] 1.1.6 - Ensure auditing is configured for Docker files and directories - /etc/docker (Automated)
[PASS] 1.1.7 - Ensure auditing is configured for Docker files and directories - docker service (Automated)
[INFO] 1.1.8 - Ensure auditing is configured for Docker files and directories - container.sock (Automated)
[INFO] * File not found
[PASS] 1.1.9 - Ensure auditing is configured for Docker files and directories - docker socket (Automated)
[INFO] 1.1.10 - Ensure auditing is configured for Docker files and directories - /etc/default/docker (Automated)
[INFO] * File not found
[INFO] 1.1.11 - Ensure auditing is configured for Docker files and directories - /etc/docker/daemon.json (Automated)
[INFO] * File not found
[INFO] 1.1.12 - Ensure auditing is configured for Docker files and directories - /etc/containerd/config.toml (Automated)
[INFO] * File not found
[INFO] 1.1.13 - Ensure auditing is configured for Docker files and directories - /etc/sysconfig/docker (Automated)
[INFO] * File not found
[PASS] 1.1.14 - Ensure auditing is configured for Docker files and directories - /usr/bin/containerd (Automated)
[PASS] 1.1.15 - Ensure auditing is configured for Docker files and directories - /usr/bin/containerd-shim (Automated)
We should also have a new audit score that reflects the audit rules we have created as shown in the
image below:
Section C - Score
[INFO] Checks: 20
[INFO] Score: 9
Now that we have been able to successfully secure our Docker host, we can begin the process of
securing the Docker daemon.
We will begin the process by taking a look at how to implement TLS encryption between the Docker
client and daemon. By default, traffic between a remote Docker client and the daemon's TCP socket
is neither encrypted nor authenticated, so anyone who can reach the socket can send commands to
the daemon. We can remedy this situation by implementing TLS encryption for remote connections.
This process is
twofold and involves generating the TLS certificates for the server and the remote clients. We will begin
by taking a look at how to generate the TLS certificates for both the Docker client and server, and then
we will update the Docker daemon configuration to use the certificates.
1. The first step in the process involves downloading the secure-docker-daemon.sh Bash script,
which generates the required certificates. This can be done by running the following commands:
cd ~
wget https://raw.githubusercontent.com/AlexisAhmed/DockerSecurityEssentials/main/Docker-TLS-Authentication/secure-docker-daemon.sh
chmod u+x secure-docker-daemon.sh
./secure-docker-daemon.sh
2. The script will create a .docker/ directory in your user's home directory as illustrated in the
image below. This is where the certificates will be stored.
alexis@localhost:~$ ./secure-docker-daemon.sh
you are now in /home/alexis
Directory .docker/ does not exist
Creating the directory
type in your certificate password (characters are not echoed)
>Type in the server name you’ll use to connect to the Docker server
>139.162.230.200
3. The script will prompt you to enter the Docker server IP. After providing the IP address, the script
will automatically create the client and server certificates in the .docker/ directory as
highlighted in the image below:
After generating the TLS certificates, we now need to create a custom systemd configuration file for the
Docker daemon. This configuration file will be used to enable TLS and specify the TLS certificates.
1. We can create the custom systemd file with a text editor and add the TLS configuration to it.
Create the systemd override file in your preferred text editor with the following command:
sudo systemctl edit docker
2. After creating and opening the file in your editor, add the following configuration to it.
When pasting this snippet into your file, be sure to replace the <user> string with the username
on your system:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D -H unix:///var/run/docker.sock \
  --tlsverify --tlscert=/home/<user>/.docker/server-cert.pem \
  --tlscacert=/home/<user>/.docker/ca.pem \
  --tlskey=/home/<user>/.docker/server-key.pem \
  -H tcp://0.0.0.0:2376
3. After adding the configuration to the file, we need to save it and restart the Docker service.
This can be done by running the following commands:
sudo systemctl daemon-reload
sudo systemctl restart docker
4. If the configuration file has been created and set up correctly, the Docker service should restart
with no issues.
5. You can now copy over the client TLS certificates to the remote Docker client for authentication.
We will not be covering this process in detail as it is beyond the scope of this guide. More
information regarding TLS authentication can be found here: https://docs.docker.com/engine/
security/protect-access/
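On the client side, instead of passing the TLS flags on every invocation, the Docker CLI can be pointed at the server through its standard environment variables (a configuration sketch; the paths assume the client certificates were copied into ~/.docker/, and <SERVER-IP> is a placeholder):

```shell
# Tell the docker CLI where the daemon is and where the TLS material lives
export DOCKER_HOST="tcp://<SERVER-IP>:2376"
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$HOME/.docker"
```

With these set, a plain docker version on the client will negotiate TLS with the remote daemon automatically.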
Now that we have configured TLS encryption between the Docker client and daemon, we can move on
to implementing user namespaces for containers.
When we run a Docker container, the process is run from the default namespace. As a result, the process
is run under the root user as highlighted in the image below:
This can be dangerous in the event of a container breakout. Because the process is being run as the root
user, an attacker would be able to get root privileges for the host. As a result, we need to run containers
as an unprivileged user.
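User namespaces work by mapping the UIDs inside the container onto a different, unprivileged range on the host, so UID 0 inside the container is not UID 0 outside it. The kernel exposes this mapping for any process in /proc:

```shell
# In the initial user namespace this prints the identity mapping
# "0 0 4294967295"; inside a remapped container it shows an offset range.
cat /proc/self/uid_map
```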
We need to reconfigure the Docker daemon to use user namespaces. Docker generates a default
dockremap user that you can use, or you can specify your own non-privileged user.
1. We can implement user namespaces by adding the following option to the ExecStart line in
the /etc/systemd/system/docker.service.d/override.conf file we created in the
previous section:
--userns-remap="default"
2. Your new configuration should be structured as follows:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D -H unix:///var/run/docker.sock \
  --tlsverify --tlscert=/home/<user>/.docker/server-cert.pem \
  --tlscacert=/home/<user>/.docker/ca.pem \
  --tlskey=/home/<user>/.docker/server-key.pem \
  --userns-remap="default" \
  -H tcp://0.0.0.0:2376
3. After saving the file, reload systemd and restart the Docker service:
sudo systemctl daemon-reload
sudo systemctl restart docker
4. We can now confirm that containers will run under the default dockremap UID. Run a
container and give it the name test. Then inspect the container process with this command:
ps -fp $(sudo docker inspect --format '{{.State.Pid}}' test)
RUNNING DOCKER BENCH FOR SECURITY AFTER SECURING THE DOCKER DAEMON
Now that we have implemented TLS encryption and user namespaces for containers, we can re-run our
security audit with Docker Bench for Security:
cd ~/docker-bench-security/
sudo ./docker-bench-security.sh -c host_configuration
As highlighted in the image below, we should now have an improved security score that reflects the
changes we have made.
Section C - Score
[INFO] Checks: 84
[INFO] Score: 30
We have been able to successfully secure the Docker daemon and can begin exploring the various ways
of securing Docker containers.
Running containers as an unprivileged user helps prevent privilege escalation attacks. This can be done
by following the outline below:
1. Always reconfigure and build your own Docker images so you can customize the various security
parameters to your specification.
2. To run a Docker container as an unprivileged user, you will need to update the Dockerfile before
building the image. This can be done by adding a command like the following example to the
Dockerfile:
# Environment Variables
ENV HOME /home/alexis
ENV DEBIAN_FRONTEND=noninteractive
3. This will add the user to the Docker image, and you can now run the container with the
unprivileged user instead of running it with the default root user. You can specify the user for a
container with the -u option for the docker run command:
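For example (a sketch; the user name and image are placeholders): a line such as RUN useradd --create-home alexis in the Dockerfile creates the unprivileged user, and the container can then be started under it:

```shell
# Run the container as the unprivileged user instead of the default root
docker run --rm -u alexis <image-name>
```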
As an added security measure, we can disable the root user of a container by modifying the Dockerfile.
Specifically, we can change the default shell from /bin/bash to /usr/sbin/nologin. This can be
done by adding the following command to the Dockerfile:
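A line along these lines would accomplish it (a sketch; the exact command was not shown in the original):

```dockerfile
# Replace root's login shell so the root account cannot be used interactively
RUN usermod -s /usr/sbin/nologin root
```

On minimal images without usermod, editing /etc/passwd directly achieves the same result.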
This will prevent any user on the container from accessing the root account regardless of whether they
have the root password. This configuration is only applicable if you want to disable the root account
completely.
It is recommended to run your containers with specific permissions and ensure that users cannot
escalate their privileges. To do this, use the following flag when running containers:
The no-new-privileges option will stop container processes from gaining any additional privileges.
This will prevent commands like su and sudo from working in your container, and it can prevent
attacks that exploit SETUID binaries.
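A minimal sketch of the flag in use (the image name is a placeholder):

```shell
# Prevent processes in the container from gaining additional privileges,
# e.g. through setuid binaries, su, or sudo
docker run --rm --security-opt=no-new-privileges <image-name>
```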
The recommended method for assigning privileges to a container is to first remove all of the capabilities
(also referred to as dropping the capabilities) and then only add the ones required for your container to
function. If your container does not need kernel capabilities to run, then they should all be dropped.
2. You can also add the specific kernel capabilities required by your containers by running the
following command:
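As a sketch, dropping everything and re-adding a single capability (NET_BIND_SERVICE here is an arbitrary example, not from the original):

```shell
# Drop all kernel capabilities, then add back only what the workload needs
docker run --rm --cap-drop=ALL --cap-add=NET_BIND_SERVICE <image-name>
```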
You also have the ability to specify filesystem permissions and access, allowing you to set up containers
with a read only file system or with a temporary file system. This option is useful if you would like to
control whether your containers can store data or make changes to the filesystem.
1. We can run containers with a read-only file system by running the following command:
2. If your container has a service or application that requires the storage of data, you can specify a
temporary file system by running the following command:
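Sketches of both options (the mount path and image name are placeholders):

```shell
# Run with a read-only root filesystem
docker run --rm --read-only <image-name>

# Read-only root filesystem, but with a writable temporary filesystem at /tmp
docker run --rm --read-only --tmpfs /tmp <image-name>
```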
Docker creates a default bridge network, and containers are created on this network by default. All
containers on this default network can communicate with each other. However, we can also choose to
isolate Docker containers from communicating with one another. For example, this can be helpful if you
want to isolate a particular Docker container away from another connected group of containers you’re
running.
1. In order to disable inter-container communication, we will need to create a new Docker network.
This can be done by running the following command with the "icc" option set to false.
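With the default bridge driver, this is expressed through the driver option shown below (a sketch; the network name is a placeholder):

```shell
# Create a bridge network with inter-container communication disabled
docker network create \
  -o "com.docker.network.bridge.enable_icc"="false" \
  isolated-net
```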
• Some applications consume a high amount of resources and need to be managed with respect to
the host’s performance.
• Containers need to operate in a manner conducive to, and respective of, the performance of other
containers.
• If a container is compromised, an attacker could utilize the resources of the container for CPU
intensive processes like cryptocurrency mining. Control groups can limit the impact of these
exploits.
By default, control groups are managed and maintained by the host’s init system, which in most cases
will be systemd. Docker utilizes cgroupfs (cgroup file system) to manage and maintain the control
groups associated with containers. Docker provides you with the ability to allocate resources for all
containers system-wide or on a container-by-container basis.
Now that we have an understanding of what control groups are and what they are used for, we can
explore the various control group subsystems utilized by Docker.
Subsystems, also known as resource controllers, are used to manage and limit the usage of a specific
resource on a system. The following is a list of subsystems utilized by Docker to control resource
consumption:
We can use these subsystems to limit and control container resource consumption at runtime.
1. To limit which CPU cores a container may use, add the --cpuset-cpus argument when running a
container:
In the above example, we are limiting the container to use only the first CPU core. If
your Docker host has multiple cores, you can specify more than one core to use by
including a comma-separated list of core numbers as follows:
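Pinning a container to specific cores is done with the --cpuset-cpus flag (a sketch; the image name is a placeholder):

```shell
# Pin the container to the first CPU core only
docker run --rm --cpuset-cpus="0" <image-name>

# Allow the container to use cores 0 and 1
docker run --rm --cpuset-cpus="0,1" <image-name>
```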
In this example, we have limited the container to 128 MB of RAM usage. You can specify a
minimum of 4MB of RAM for each container.
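The memory cap described above can be sketched as follows (the image name is a placeholder):

```shell
# Limit the container to 128 MB of RAM
docker run --rm -m 128m <image-name>
```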
4. Docker also provides users with the ability to specify a limit on the number of processes that a
container can fork. This can be very helpful in limiting the container to specific services and
can prevent fork bomb denial-of-service attacks. To limit the number of processes, use the
--pids-limit argument when running a container:
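For example (the limit value of 100 is an arbitrary illustration):

```shell
# Cap the number of processes the container may fork, mitigating fork bombs
docker run --rm --pids-limit=100 <image-name>
```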
5. You also have the ability to impose resource restrictions on containers that are already
running. This can be done by running the docker update command, as in this example:
In the above example, we have limited the CPU usage of a container to 25% of available
CPU processing power.
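A sketch of updating a running container, assuming a fractional core count expresses the 25% limit (the container name is a placeholder):

```shell
# Restrict an already-running container to a quarter of one CPU core
docker update --cpus="0.25" <container-name>
```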
Now that we have an idea of how to control container resource consumption, we can take a look at how
to implement access control for Docker containers with AppArmor.
1. Discretionary Access Control: Access to resources is specified by the resource owner. An example
of this is the implementation of file and directory permissions. This type of access control does not
offer much in regard to the types of resources we can restrict access to.
2. Mandatory Access Control: Access to resources is enforced by the operating system according to
centrally defined security policies, independent of the resource owner. This model allows much
finer-grained restrictions on the resources a process can access.
We can manage the system resources that containers have access to with access control systems like
AppArmor.
WHAT IS APPARMOR
AppArmor (Application Armor) is a Linux security module that is used to manage access to OS resources
by utilizing custom profiles for applications and containers. AppArmor is quite extensive and can be
used to restrict access to networking and specified file paths. Mandatory Access Control solutions like
AppArmor are implemented into the Linux kernel as security modules.
In the context of Docker, we can use AppArmor to secure containers by restricting the resources and
functionality they have access to. Docker runs containers with a default AppArmor profile that provides
a good level of protection for most cases. However, it is recommended to create your own AppArmor
profile based on your requirements and constraints.
If AppArmor is enabled on your host, Docker will utilize the default profile. You can also opt to run
containers with no AppArmor profile at all, but this is considered dangerous and should not be done in
a production environment.
Generating a custom AppArmor profile can be tedious and time-consuming and will require a good
understanding of the requirements of the container. For this reason, we will be utilizing Bane, an open
source tool that automates the process of generating custom AppArmor profiles. More information
regarding Bane can be found here: https://github.com/genuinetools/bane
Before we can start generating and using custom AppArmor profiles, we need to ensure that AppArmor
is installed and enabled. This can be done by running the following command:
aa-enabled
If AppArmor is enabled, you should receive the response text Yes, as illustrated in the
figure below.
alexis@localhost:~$ aa-enabled
Yes
alexis@localhost:~$
After confirming that AppArmor is installed and enabled, you can explore the contents
of the AppArmor configuration directory found under /etc/apparmor.d/. This is the
recommended directory for storage of custom AppArmor profiles and other
configurations. We will be creating our AppArmor configuration files in this directory.
1. In order to install Bane, we can use the automated installer for Linux. This can be done by
running the following commands:
export BANE_SHA256="69df3447cc79b028d4a435e151428bd85a816b3e26199cd010c74b7a17807a05"
NOTE: The above commands reference Bane’s x86 executable for Linux, version v0.4.4.
Check the releases page for Bane on GitHub for other architectures, operating systems, or
any newer releases that are available: https://github.com/genuinetools/bane/releases
2. After running the installer script, we can confirm that Bane is installed by running the following
command:
bane -h
Now that we have confirmed that Bane is installed and enabled, we can take a look at how to create
custom AppArmor profiles.
Creating a custom AppArmor profile requires a good understanding of the resources that your
containers need to access. The scope of this guide is limited to exploring the structure of an AppArmor
profile and how to use the profile when running containers. The exact details of your profiles will require
further investigation.
1. You can access and download the AppArmor profile template from GitHub: https://github.com/
genuinetools/bane/blob/master/sample.toml
In order to modify and generate the AppArmor profile with Bane, you need to download the sample
AppArmor template to the AppArmor configuration directory found under /etc/apparmor.d/.
2. The following image highlights the sections of the AppArmor template that you will likely need to
modify:
4. You can also control access to networking by specifying whether you want to enable raw packet
connections. You also have the ability to specify the networking protocols that the container
can use. These controls are depicted in the image below:
6. Bane will generate and install the profile for you and will also provide you with the Docker
runtime security specification for the AppArmor profile as highlighted in the image below.
7. After generating the AppArmor profile with Bane, we can specify the AppArmor profile when
running a container with the --security-opt argument as follows:
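For example (the profile and image names are placeholders):

```shell
# Run a container confined by a custom AppArmor profile
docker run --rm --security-opt apparmor=my-custom-profile <image-name>
```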
This will run the container with your custom AppArmor profile, and based on your profile,
the container will be limited in terms of functionality and the resources it has access to.
Now that we have an idea of how to generate and use custom AppArmor profiles, we can take a look at
how to limit container system calls with seccomp.
A system call is the process through which a user-space process communicates with the Linux kernel
in order to access resources or functionality. Whenever you want to create a file, change ownership or
modify a network configuration, it is facilitated through the use of a system call.
· Containers do not require the ability to make all available system calls in order to function as
needed.
· In the event a container is compromised, the attacker can make various system calls that can lead
to further exploitation of the Docker host.
· Reducing access to system calls greatly reduces the overall attack surface of a container.
Docker utilizes seccomp filters to restrict the system calls available to containers. Docker applies a
default seccomp profile to containers. However, you can also create a custom seccomp profile
with your own configurations. Seccomp can be configured for all containers or on a container-by-
container basis. Lastly, you can run containers with no seccomp profile specified, leaving them unsecured,
but this is not recommended.
You can store your custom seccomp profiles wherever you want. However, it is recommended to
use a standardized directory for all your custom profiles. Custom seccomp profiles are saved in the
.json format.
2. The following image highlights the architecture specification for the default seccomp profile:
In the seccomp profile template, the default action is to deny the container from accessing any system
calls not specified in the syscall allowlist. The image below highlights the syscall allowlist, where you
can specify what system calls you want your container to have access to. You can modify this profile
based on your requirements.
3. When saving the profile, you can save it with a file name that pertains to the functionality that it
restricts or permits.
4. We can specify the custom seccomp profile with the --security-opt option when running a
container:
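For example (the profile path and image name are placeholders):

```shell
# Run a container restricted by a custom seccomp profile
docker run --rm --security-opt seccomp=/path/to/profile.json <image-name>
```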
This will run the container with your custom seccomp profile. Based on your
profile, the container will be limited to the allowlisted system calls specified in the
profile, which can be very useful in limiting the functionality available to users in the
container as well as restricting the functionality of applications.
You also have the ability to run containers in an unconfined mode with no seccomp profile specified.
This is not recommended, as the container will have access to all system calls available. This can be
done by running the following command:
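The unconfined mode described above looks like this (the image name is a placeholder):

```shell
# NOT recommended: disable seccomp filtering entirely for this container
docker run --rm --security-opt seccomp=unconfined <image-name>
```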
Now that you have an understanding of how to use custom seccomp profiles for your containers, we can
explore the process of performing vulnerability scans on your Docker images.
It is to be noted that this guide will only cover the process of identifying vulnerabilities in Docker images.
Patching and remediation should only be handled by the respective developer or DevOps team in
accordance with the guidelines specified by your organization.
In order to perform our vulnerability scans on our Docker image, we will utilize an open source third-
party tool called Trivy. Trivy is a simple and comprehensive vulnerability scanner for containers and
other artifacts. More information about Trivy can be found on GitHub: https://github.com/aquasecurity/
trivy
1. Trivy has a pre-built Docker image that can be used to perform the vulnerability scans for us; as a
result, we do not have to install it on our Docker host. We can pull the Trivy Docker image by
running the following command:
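Assuming the image published by the Trivy project on Docker Hub:

```shell
docker pull aquasec/trivy:latest
```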
2. After pulling the image, you will need to create a cache directory for the Trivy image. This
directory will be used to store all of the cached data:
mkdir -p trivy/.cache
3. After you have created the cache directory, we can perform a vulnerability scan on an image by
running the Trivy image with the following parameters:
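A sketch of such an invocation (exact flags may differ between Trivy versions; the image name is a placeholder):

```shell
# Scan a local image with the Trivy container; the socket mount lets Trivy
# see images on the host, and the cache mount persists vulnerability data
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $HOME/trivy/.cache:/root/.cache/ \
  aquasec/trivy:latest image <image-name>
```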
Trivy will sort the results based on the vulnerability ID, severity, and the installed and
patched versions of the affected software packages. This information can then be passed
along to the respective teams for patching.
5. After the vulnerabilities have been patched, the scan must be re-run to verify
that the patches have remediated them. It is always recommended to perform regular
vulnerability scans on your images before running containers in a production environment.
You should now have an understanding of how to scan Docker images for vulnerabilities. Next, we’ll
explore the process of building secure Docker images.
The process of identifying misconfigurations in Docker images can be automated through the use of
a third party open source tool called Dockle. More information about Dockle can be found on GitHub:
https://github.com/goodwithtech/dockle
1. The Dockle Debian package can be downloaded by running the following script on the Docker
host:
VERSION=$(
  curl --silent "https://api.github.com/repos/goodwithtech/dockle/releases/latest" | \
  grep '"tag_name":' | \
  sed -E 's/.*"v([^"]+)".*/\1/' \
) && curl -L -o dockle.deb https://github.com/goodwithtech/dockle/releases/download/v${VERSION}/dockle_${VERSION}_Linux-64bit.deb
2. After the Dockle debian package has been downloaded, we can install it by running the following
command:
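The standard way to install a local Debian package would be (an assumption; the original command was not shown):

```shell
# Install the downloaded Dockle Debian package
sudo dpkg -i dockle.deb
```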
3. After installing Dockle, we can begin scanning Docker images by running the following command:
dockle <image-name>
Dockle will output the list of misconfigurations and recommendations for the changes
that need to be made to the Dockerfile.
The following is a list of best practices you should take into consideration when building Docker images:
FROM alpine:3.13.5
CMD ["/bin/sh"]
ENV RELEASE_VERSION=1.0.4 SHELL=/bin/bash
RUN apk add --no-cache --update bash g++ make curl \
    && cd /tmp \
    && wget https://fossies.org/linux/privat/old/stress-1.0.4.tar.gz \
    && tar -xzvf /tmp/stress-1.0.4.tar.gz \
    && rm /tmp/stress-1.0.4.tar.gz \
    && cd /tmp/stress-1.0.4 \
    && ./configure \
    && make -j$(getconf _NPROCESSORS_ONLN) \
    && make install \
    && apk del g++ make curl \
    && rm -rf /tmp/* /var/tmp/* /var/cache/apk/* /var/cache/distfiles/*
USER stress
CMD ["/usr/local/bin/stress"]
Making the cloud simple, affordable, accessible, and secure for developers, partners, and businesses
is core to Linode. Ensuring best practices for security creates a shared understanding of responsibility
between a cloud provider and a user. All Linode plans include generous transfer allowances, automated
Advanced DDoS Protection, and the ability to configure Cloud Firewalls and VLAN via Linode Cloud
Manager, our fully-featured API, or CLI at no extra cost.
Encouraging customers to use security best practices begins with our bundled services and the
additional educational resources and documentation we make available to raise security awareness.
When vulnerability prevention is integrated into each layer of your infrastructure and development
process, you secure application data and reduce potential technical debt while ultimately protecting
both your users and yourself.
Next Steps
• Watch our on-demand Docker Security Essentials series to follow HackerSploit’s practical
demonstrations and more information on the best practices outlined in this ebook.
• Keep reading about using Docker and Linode with our extensive Docker guides, including using
Docker Images, Containers, and Docker Files in Depth and a Docker Commands Cheat Sheet.
Browse all Linode docs on Docker and containers.