Docker
Docker is a popular platform for developing, shipping, and running applications inside
containers. Containers are lightweight, isolated environments that package an application and
its dependencies, making it easier to ensure consistency between different environments,
from development to production. In this detailed explanation, I'll cover the key concepts of
Docker and provide examples to illustrate these concepts.
1. Images: Docker images are read-only templates that define how a container should
run. Images contain the application code, libraries, and dependencies needed to
execute an application. Images are often created from a Dockerfile, which is a text file
that specifies the instructions for building the image.
2. Containers: Containers are runnable instances of images. Each container is an
isolated process with its own filesystem, networking, and process space, created
from an image and sharing the host's kernel.
3. Dockerfile: A Dockerfile is a text file that contains a set of instructions for building a
Docker image. These instructions include things like specifying the base image,
copying files into the image, setting environment variables, and running commands.
Here's a simple example:
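A minimal sketch of such a Dockerfile (the base image, file names, and command are illustrative):

```dockerfile
# Use an official base image
FROM python:3.11-slim

# Set the working directory inside the image
WORKDIR /app

# Copy the application code into the image
COPY . .

# Install the dependencies listed in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Set an environment variable
ENV APP_ENV=production

# Default command to run when a container starts from this image
CMD ["python", "app.py"]
```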
4. Docker Hub: Docker Hub is a public registry of Docker images. It allows developers
to share and distribute Docker images. You can find official images for various
software and create your own images to publish.
5. Docker Compose: Docker Compose is a tool for defining and running multi-
container Docker applications. It uses a YAML file (docker-compose.yml) to define
services, networks, and volumes for your application. It simplifies the management of
complex applications consisting of multiple containers.
1. Docker Daemon:
o The Docker daemon (also known as dockerd) is a background service that
manages Docker containers on a host system.
o It is responsible for building, running, and managing containers.
o The Docker daemon listens for Docker API requests and communicates with
the container runtime to execute those requests.
o It typically runs as a system service and handles the low-level container
operations.
2. Docker Client:
o The Docker client (the docker command-line tool) is the primary user interface
for interacting with Docker.
o Commands such as docker run or docker build are sent by the client to the
Docker daemon over the Docker API.
3. Docker Socket:
o The Docker socket (typically /var/run/docker.sock on Linux) is the Unix socket
on which the daemon listens for API requests.
o It serves as the communication bridge between the Docker client and the
Docker daemon.
In summary, the Docker daemon is responsible for managing containers, the Docker client is
the user interface for interacting with Docker, and the Docker socket serves as the
communication bridge between the client and the daemon, enabling users to control and
manage containers and resources on a host system.
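For illustration, the client can be pointed at the daemon's socket explicitly; on Linux this is the default endpoint, so the command below behaves exactly like a plain docker version:

```shell
# Send an API request to the daemon through its Unix socket
docker -H unix:///var/run/docker.sock version
```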
Examples:
1. Installing Docker:
To install Docker on Ubuntu using the docker.io package, you can follow these steps:
Open a terminal and update the local package index to ensure you have the latest
information about available packages:
sudo apt update
Then install the docker.io package:
sudo apt install -y docker.io
After the installation is complete, start the Docker service and enable it to start on
boot:
sudo systemctl start docker
sudo systemctl enable docker
To verify that Docker has been installed correctly, run the following command:
docker --version
You should see the Docker version information displayed in the terminal.
By default, the Docker command requires sudo privileges. If you want to use Docker
without sudo, you can add your user to the "docker" group:
sudo usermod -aG docker $USER
After adding your user to the "docker" group, log out and log back in or run the
following command to apply the group changes without logging out:
newgrp docker
That's it! Docker is now installed on your Ubuntu system using the docker.io package, and
you can start using it to manage containers.
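As a final sanity check, you can run the hello-world test image, which makes the daemon pull a tiny image and run a container that prints a confirmation message:

```shell
docker run hello-world
```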
The following Dockerfile snippet builds a Docker image for a Java application based on the
Alpine Linux image with OpenJDK 17. It copies the application's JAR file into the image and
specifies how to run it as a container. Note that the ENTRYPOINT line must reference
app.jar (the name given in the COPY instruction), not your-app.jar. Here's the corrected
Dockerfile snippet:
# Use the OpenJDK 17 Alpine Linux image as the base image
FROM openjdk:17-alpine
# Copy the JAR file from your local system to the image
COPY target/database_service_project-0.0.1.jar app.jar
# Run the JAR file when the container starts
ENTRYPOINT ["java", "-jar", "app.jar"]
Now that we have our Docker image, we can run a container from it:
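For example (the image tag and port mapping are illustrative; the port your application listens on depends on its configuration):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t database-service .

# Run a container from the image, mapping host port 8080 to container port 8080
docker run -d -p 8080:8080 --name database-service database-service
```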
Suppose you have a microservices application with multiple containers. You can use Docker
Compose to manage them together. Here's a simple example with a web app and a database:
docker-compose.yml:
version: '3'
services:
  web:
    image: my-python-app
    ports:
      - 8080:80
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: mysecretpassword
Start the application stack using Docker Compose:
docker-compose up -d
This starts both the web and database containers in detached mode.
5. Docker Hub and Pulling Images:
You can find and use existing Docker images from Docker Hub. For example, to pull an
official Nginx image:
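For example (the host port and container name are arbitrary):

```shell
# Download the official Nginx image from Docker Hub
docker pull nginx

# Run it, mapping host port 8080 to the container's port 80
docker run -d -p 8080:80 --name my-nginx nginx
```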
These examples cover the basics of Docker. Docker is a powerful tool that simplifies
application deployment and management, especially in a containerized and microservices
architecture. It allows you to package applications and their dependencies, ensuring
consistency and ease of deployment across different environments.
Managing Containers:
Run a container:
docker run -d IMAGE_NAME
List running containers:
docker ps
List all containers, including stopped ones:
docker ps -a
Restart a container:
docker restart CONTAINER_ID
Remove a container:
docker rm CONTAINER_ID
Managing Images:
List images:
docker images
Managing Volumes:
List volumes:
docker volume ls
Managing Networks:
List networks:
docker network ls
Miscellaneous Commands:
View Docker version info:
docker version
View system-wide information:
docker info
Show Docker disk usage:
docker system df
List Docker Swarm nodes:
docker node ls
List Docker Swarm services:
docker service ls
These are some of the most commonly used Docker commands for managing containers,
images, volumes, networks, and Docker Swarm. Depending on your specific use case, you
may need to use additional commands and options to tailor Docker to your needs.
Docker provides a flexible networking system that allows containers to communicate with
each other and with the outside world. You can create and manage Docker networks using the
Docker CLI. Here are some basic Docker network commands with examples:
1. List Docker Networks: To see a list of all available Docker networks, use the docker
network ls command.
docker network ls
2. Create a Custom Bridge Network: You can create a custom bridge network to
isolate containers from the host network. This is useful when you want containers to
communicate with each other privately.
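A sketch of this, with illustrative names:

```shell
# Create a user-defined bridge network
docker network create --driver bridge my-bridge-network

# Attach two containers to it; containers on the same user-defined
# bridge network can reach each other by container name
docker run -d --name app1 --network my-bridge-network nginx
docker run -d --name app2 --network my-bridge-network nginx
docker exec app2 ping -c 1 app1
```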
Docker provides various types of networks and network drivers to enable different network
configurations and communication patterns for containers. Here are some of the most
commonly used Docker network types and their associated network drivers:
o bridge: The default driver. Containers on the same bridge network can communicate
with each other; suitable for standalone containers on a single host.
o host: Removes network isolation between the container and the host; the container
shares the host's network stack directly.
o overlay: Connects multiple Docker daemons, allowing swarm services and containers
on different hosts to communicate.
o macvlan: Assigns a MAC address to a container, making it appear as a physical
device on the network.
o none: Disables networking for the container entirely.
These network types and drivers provide flexibility and cater to different use cases and
requirements in containerized applications. Choosing the right network type and driver
depends on your application's architecture, networking needs, and deployment environment.
Docker-Compose
Docker Compose is a tool for defining and running multi-container Docker applications. It
allows you to define your application's services, networks, and volumes in a single docker-
compose.yml file, making it easier to manage complex Docker setups. Here's a guide on how
to use Docker Compose with examples:
Before you begin, make sure you have Docker Compose installed. You can download it from
the official Docker Compose website.
Create a docker-compose.yml file in your project directory. This file will define your Docker
services and their configurations.
Here's a simple example that defines two services, a web application using Nginx and a
backend using Node.js:
version: '3'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
  backend:
    image: node:14
    working_dir: /app
    volumes:
      - ./backend:/app
    command: npm start
In this example:
o The web service runs the latest Nginx image and maps port 80 of the container to
port 80 on the host.
o The backend service runs Node.js 14, mounts the local ./backend directory into /app
inside the container, and starts the application with npm start.
Here are some common Docker Compose commands you can use:
1. Start Services: Start all services defined in docker-compose.yml in detached mode.
docker-compose up -d
2. Stop Services: Stop and remove the containers and networks created by up.
docker-compose down
3. View Logs: View the combined logs of all services.
docker-compose logs
4. Build Services: Build or rebuild services (useful when you make changes to your
Dockerfile or source code).
docker-compose build
5. Scale Services: You can scale services by specifying the desired number of replicas.
For example, to run two instances of the backend service:
docker-compose up -d --scale backend=2
6. Execute a Command in a Service: You can execute commands within a specific
service using docker-compose exec. For example, to run a shell in
the backend service:
docker-compose exec backend sh
Cleaning Up
To remove all containers and networks created by Docker Compose, use:
docker-compose down
These are some of the basic Docker Compose commands and examples to get you started.
Docker Compose is a powerful tool for managing containerized applications, and you can
define more complex configurations and dependencies in your docker-compose.yml file as
your project evolves.
SAMPLE
version: '3.5'
services:
  mongodb:
    image: mongo
    container_name: mongodb
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=rootuser
      - MONGO_INITDB_ROOT_PASSWORD=rootpass
    networks:
      - mongo-network
  mongo-express:
    image: mongo-express
    container_name: mongo-express
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=rootuser
      - ME_CONFIG_MONGODB_ADMINPASSWORD=rootpass
      - ME_CONFIG_MONGODB_SERVER=mongodb
    restart: unless-stopped
    depends_on:
      - mongodb
    networks:
      - mongo-network
networks:
  mongo-network:
    name: mongo-network
This Docker Compose file defines two services, mongodb and mongo-express, and specifies
the necessary environment variables, ports, and network configuration. To use it, create a
docker-compose.yml file in your project directory and run docker-compose up -d to start the
services.
Practice Repo
https://github.com/DanielMichalski/responsive-personal-website
DOCKER VOLUMES
Docker volumes are a way to persist data generated or used by Docker containers. They
provide a means to store and manage data separately from the container itself, ensuring that
data persists even if the container is stopped or removed. Docker volumes are commonly
used for scenarios where you need to share data between containers or when you want to
keep data separate from the container's file system.
1. Persistent Data: Docker containers are typically ephemeral, meaning their file
systems are isolated and any data generated within a container is lost when the
container is removed. Volumes provide a way to store data outside of containers,
ensuring that it persists across container lifecycle events.
2. Types of Volumes: Docker supports several types of volumes, including named
volumes, host-mounted volumes, and anonymous volumes.
o Named Volumes
o Host-Mounted Volumes
o Anonymous Volumes
3. Volume Management: You can create, list, inspect, and remove volumes using
Docker CLI commands like docker volume create, docker volume ls, docker volume
inspect, and docker volume rm.
4. Using Volumes: To use a volume in a Docker container, you specify the volume's
name or mount point in the container's configuration, typically in a Docker Compose
file or when running docker run with the -v or --volume option.
Docker volumes are used to persist data when containers are created, removed, or stopped.
Here's when data persists when using Docker volumes:
1. Container Restart: If a container is stopped and then restarted, the data stored in
volumes associated with that container will persist. This is useful for ensuring that
your application's data survives container restarts.
2. Container Removal: When you remove a container using docker rm, the data within
the container itself is lost. However, if you have mapped a Docker volume to store
data, that data will persist even after the container is removed. Volumes are separate
from containers, so they can outlive the containers that use them.
3. Container Replacement: If you replace a container with a new one (e.g., updating to
a new version of your application), you can attach the same volume to the new
container, allowing it to access and manipulate the same data.
4. Host System Reboot: Even if the host machine running Docker is rebooted, the data
stored in Docker volumes should remain intact. Docker manages volumes
independently from the host's filesystem.
5. Scaling Containers: When you use Docker Compose or orchestration tools like
Docker Swarm or Kubernetes to scale your application by creating multiple
containers, each container can use the same volume to access and share data.
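The container-removal scenario above can be sketched as follows (the volume and image names are illustrative):

```shell
# Create a named volume and write a file into it from a short-lived container
docker volume create demo-data
docker run --rm -v demo-data:/data alpine sh -c 'echo hello > /data/greeting.txt'

# That container is gone, but a brand-new container sees the same data
docker run --rm -v demo-data:/data alpine cat /data/greeting.txt
```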
Docker supports three main types of volumes for managing persistent data in
containers: host-mounted volumes, anonymous volumes, and named volumes. Here are
examples of each:
1. Host-Mounted Volumes:
o Host-mounted volumes allow you to specify a directory from the host machine
that is mounted into the container. This can be useful when you want to share
data between the host and container.
docker run -v /path/on/host:/path/in/container myapp
Example: Mount the /var/data directory on the host machine to the /data directory in
the container.
docker run -v /var/data:/data myapp
2. Anonymous Volumes:
o Anonymous volumes are created automatically by Docker and are managed
for you. They are typically used when you don't need to manage the volume
explicitly, such as for temporary or cache data.
Example: Mount an anonymous volume at /data in the container; Docker generates a
random name for it.
docker run -v /data myapp
3. Named Volumes:
o Named volumes are explicitly created and given a name, making it easier to
manage and share data between containers. They are useful for maintaining
data between container restarts and for sharing data between multiple
containers.
docker volume create mydata
docker run -v mydata:/path/in/container myapp
Example: Create a named volume called mydata and use it to persist data for a web
application container.
docker volume create mydata
docker run -v mydata:/app/data myapp
These are the three main types of Docker volumes, each with its own use cases. You can
choose the one that best fits your requirements based on whether you need to manage the
volume explicitly, share data with the host, or share data between containers.
EXAMPLE
You can use Docker Compose to set up a MongoDB container and a MongoDB Express
(Mongo-Express) container. This example assumes you already have Docker and Docker
Compose installed.
Create a directory for your project and create a docker-compose.yml file inside it with the
following content:
version: '3'
services:
  mongodb:
    image: mongo
    container_name: mongodb
    networks:
      - mongo-network
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=123
  mongo-express:
    image: mongo-express
    container_name: mongo-express
    networks:
      - mongo-network
    ports:
      - "8081:8081"
    environment:
      - ME_CONFIG_MONGODB_SERVER=mongodb
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=123
      - ME_CONFIG_BASICAUTH_USERNAME=admin
      - ME_CONFIG_BASICAUTH_PASSWORD=123
networks:
  mongo-network:
    driver: bridge
In this docker-compose.yml file:
o The mongodb service runs the official mongo image, exposes port 27017, and sets
the root username and password.
o The mongo-express service runs the web-based admin interface on port 8081 and
connects to the mongodb service by name via ME_CONFIG_MONGODB_SERVER.
o Both services share the user-defined bridge network mongo-network.
Now, navigate to the directory containing the docker-compose.yml file in your terminal and
run:
docker-compose up
Docker Compose will download the necessary images (if not already downloaded) and start
the MongoDB and Mongo Express containers. You can access the MongoDB Express web
interface at http://localhost:8081 and log in using the MongoDB admin credentials you
specified in the docker-compose.yml file.
Note that this compose file does not declare any volumes, so MongoDB's data lives inside
the container's filesystem. To make it persist even if you stop and remove the containers,
you would add a named volume (for example, mongodb_data) to the mongodb service.
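A named volume for MongoDB could be declared like this (mongodb_data is an illustrative name; /data/db is MongoDB's default data directory):

```yaml
services:
  mongodb:
    # ...existing mongodb configuration...
    volumes:
      - mongodb_data:/data/db

volumes:
  mongodb_data:
```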
To stop the containers, press Ctrl+C in the terminal where they are running, and then run:
docker-compose down
This will stop and remove the containers and networks. Any data stored in named volumes
remains available for future use.
Dockerfile
In a Dockerfile, both CMD and ENTRYPOINT are instructions used to specify the command
that should be run when a container is started. However, they serve slightly different
purposes.
1. CMD Instruction:
o The CMD instruction sets the default command and/or parameters for the
container.
o If the Dockerfile contains multiple CMD instructions, only the last one is
effective.
o If a command is specified when running the container (using docker run), it
overrides the CMD instruction.
o The syntax is CMD ["executable","param1","param2"] or CMD command
param1 param2.
Example:
FROM ubuntu
CMD ["echo", "Hello, World!"]
2. ENTRYPOINT Instruction:
o The ENTRYPOINT instruction configures a container to run as an executable;
it is not replaced by a command supplied to docker run.
o Arguments passed to docker run are appended to the ENTRYPOINT command
(and a CMD instruction, if present, supplies default arguments).
o To override ENTRYPOINT, you must use the --entrypoint flag.
o The syntax is ENTRYPOINT ["executable","param1","param2"] or ENTRYPOINT
command param1 param2.
Example:
FROM ubuntu
ENTRYPOINT ["echo", "Hello"]
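Assuming the two Dockerfiles above are built as images named cmd-demo and entrypoint-demo (illustrative tags), the difference shows up when arguments are passed to docker run:

```shell
# CMD is replaced entirely by a command supplied at run time
docker run --rm cmd-demo              # prints "Hello, World!"
docker run --rm cmd-demo echo Bye     # prints "Bye"

# ENTRYPOINT is kept; run-time arguments are appended to it
docker run --rm entrypoint-demo       # prints "Hello"
docker run --rm entrypoint-demo World # prints "Hello World"
```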
Pipeline
pipeline {
agent any
tools{
jdk 'jdk17'
maven 'maven3'
}
environment {
SONARQUBE_HOME = tool 'sonar-scanner'
}
stages {
stage('Git CheckOut') {
steps {
git 'https://github.com/jaiswaladi2468/BoardgameListingWebApp.git'
}
}
stage('Compile') {
steps {
sh "mvn compile"
}
}
stage('Unit Tests') {
steps {
sh "mvn test"
}
}
stage('Package') {
steps {
sh "mvn package"
}
}
stage('SonarQube Analysis') {
steps {
withSonarQubeEnv('sonar') {
sh ''' $SONARQUBE_HOME/bin/sonar-scanner \
-Dsonar.projectName=Boardgame -Dsonar.projectKey=Boardgame \
-Dsonar.java.binaries=. '''
}
}
}
stage('Quality Gate') {
steps {
waitForQualityGate abortPipeline: false
}
}
}
}