Deploying multiple Docker containers to AWS Elastic Beanstalk with CI/CD using AWS CodePipeline and AWS ECR
In this tutorial, we will set up a multi-container Docker environment to
host an Angular 11 frontend application, a Node.js backend server,
and an NGINX reverse proxy server. We will then adopt CI/CD practices to
automate the code release process using AWS CodeBuild, CodeDeploy, and
CodePipeline, and deploy our application to AWS Elastic Beanstalk.
Frontend Container
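The frontend Dockerfile itself is not shown in this excerpt; a minimal sketch that serves the Angular dev build on port 4200 might look like the following (the base image tag, file layout, and start command are assumptions):

```dockerfile
# Hypothetical ui/Dockerfile -- adjust to your project layout
FROM node:12.13.0-alpine
WORKDIR /app
# Install dependencies first to take advantage of Docker layer caching
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 4200
# Bind to 0.0.0.0 so the dev server is reachable from other containers
CMD ["npm", "start", "--", "--host", "0.0.0.0", "--port", "4200"]
```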
Backend Container
# Use the Node.js 12.13.0-alpine image as the base
FROM node:12.13.0-alpine
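Only the FROM line of the backend Dockerfile survives in this excerpt; a plausible completion (paths and the start command are assumptions, with src/app.js matching the Express entry point created later in this post) could be:

```dockerfile
# Hypothetical api/Dockerfile -- adjust paths and entry point to your project
FROM node:12.13.0-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "src/app.js"]
```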
NGINX Container
Create a default.conf file in the nginx folder with the following content:
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://ui:4200;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /api {
        proxy_pass http://api:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
These Dockerfiles and configurations should help you set up your frontend,
backend, and NGINX containers for your application.
version: '3'
services:
  api:
    build:
      context: ./api
    container_name: api
    ports:
      - "3000:3000"
  ui:
    build:
      context: ./ui
    container_name: ui
    ports:
      - "4200:4200"
  nginx:
    container_name: nginx
    depends_on:
      - ui
      - api
    ports:
      - "80:80"
    build:
      context: ./nginx
Explanation:
build specifies the build context, which is the directory where the
service's Dockerfile is located.
ports maps ports from the container to the host machine. For example,
"3000:3000" maps port 3000 in the container to port 3000 on the host for
the api service.
The nginx service declares depends_on for both ui and api , so Docker Compose starts those containers before nginx. Note that depends_on only waits for the containers to start, not for the applications inside them to be ready.
The nginx service maps port 80 in the container to port 80 on the host,
allowing access to the NGINX server from the host machine.
To test the multi-container Docker setup, you can run the following
command in the same directory where the docker-compose.yml file is located:
docker-compose up --build
This command will build the Docker images and start the containers defined
in the docker-compose.yml file. Once the containers are running, you can
access the frontend at http://localhost and the backend at
http://localhost/api . The NGINX container acts as a reverse proxy, routing
requests to the appropriate services based on the URL path.
1. Create a new, empty repository on GitHub to host the project.
2. Change the working directory to the project folder and initialize the
directory as a Git repository with the main branch.
cd multicontainer_docker_tutorial
git init -b main
3. Add all the files in the project directory to the Git repository and commit
them.
git add -A
git commit -m "First commit"
4. Add the origin remote repository URL, so that when you push changes,
they will be sent to that repository. Replace {username} with your GitHub
username and {repository} with the name of your repository.
git remote add origin https://github.com/{username}/{repository}.git
5. Push the commit from your local repository to the remote GitHub
repository.
git push -u origin main
6. Now, go to your GitHub repository on the GitHub website to verify that the
files have been successfully pushed to the repository.
7. To configure CodeBuild or other services later on, you will need to create a
GitHub personal access token:
a. On GitHub, click your profile picture and open Settings.
b. Go to Developer settings, then Personal access tokens.
c. Click Generate new token and select the repo scope so the token can access your repositories.
d. Click Generate token and GitHub will provide you with a token. Copy the
token and store it securely as you will need it later to configure services like
CodeBuild or other tools that interact with your GitHub repository.
8. Finally, go back to your GitHub repository to ensure that the files are
being properly pushed and managed within the repository.
By following these steps, you have created a GitHub repository, pushed your
code to it, and obtained a personal access token for future integrations with
GitHub services.
In this step, we will configure the Docker Compose setup for cloud
deployment: create a Docker Compose file named docker-compose-prod.yml
and update the image URIs to point to the ECR repositories we just created.
version: '3'
services:
  api:
    build:
      context: ./api
    container_name: api
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/app-api
    ports:
      - "3000:3000"
  ui:
    build:
      context: ./ui
    container_name: ui
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/app-ui
    ports:
      - "4200:4200"
  nginx:
    container_name: nginx
    depends_on:
      - ui
      - api
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/app-nginx
    ports:
      - "80:80"
    build:
      context: ./nginx
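The ECR commands themselves are not shown in this excerpt. Assuming repositories named app-api, app-ui, and app-nginx and the AWS_ACCOUNT_ID / AWS_DEFAULT_REGION environment variables used above, the create-build-push cycle can be sketched as:

```shell
# Hypothetical command sequence -- repository names and env vars are assumptions
# Create one ECR repository per service
for repo in app-api app-ui app-nginx; do
  aws ecr create-repository --repository-name "$repo"
done

# Authenticate the local Docker daemon against ECR
aws ecr get-login-password --region "$AWS_DEFAULT_REGION" \
  | docker login --username AWS \
      --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"

# Build the images with the ECR URIs from docker-compose-prod.yml, then push
docker-compose -f docker-compose-prod.yml build
docker-compose -f docker-compose-prod.yml push
```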
4. In the Application creation screen, click on “Actions” (top right), and then
choose “Create Environment.”
The creation process will take a few minutes to complete as Elastic Beanstalk
provisions the necessary resources and deploys your Docker containers.
In the IAM dashboard, click on “Roles” in the left navigation pane. In the
list of roles, search for the role named aws-elasticbeanstalk-ec2-role , which
Elastic Beanstalk uses for EC2 instances, and attach the
AmazonEC2ContainerRegistryReadOnly managed policy to it so the instances can
pull your images from ECR.
Next, create a Dockerrun.aws.json file in the project root; Elastic Beanstalk
uses it to describe the multi-container deployment:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "app-api",
      "image": "<ECR-repo-uri>/app-api:<image-tag>",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 3000,
          "containerPort": 3000
        }
      ],
      "essential": true
    },
    {
      "name": "app-ui",
      "image": "<ECR-repo-uri>/app-ui:<image-tag>",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 4200,
          "containerPort": 4200
        }
      ],
      "essential": true
    },
    {
      "name": "app-nginx",
      "image": "<ECR-repo-uri>/app-nginx:<image-tag>",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": ["app-api", "app-ui"],
      "essential": true
    }
  ]
}
In this step, we will create a managed policy in AWS IAM that grants AWS
CodeBuild the necessary permissions to access Amazon Elastic Container
Registry (ECR) for pulling and pushing Docker images. We will then attach
this policy to the codebuild-multicontainer-docker-tutorial-service-role .
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:CompleteLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:InitiateLayerUpload",
        "ecr:BatchCheckLayerAvailability",
        "ecr:PutImage"
      ],
      "Resource": "arn:aws:ecr:*:921490761789:repository/*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    }
  ]
}
Click the “Next: Tags” button if you want to add tags (optional). Click the
“Review policy” button. Provide a name for the policy, e.g.,
codebuildecrpolicy . Click the “Create policy” button.
Now, CodeBuild has the permissions it needs to interact with ECR, allowing
it to pull and push Docker images seamlessly.
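The buildspec for CodeBuild is not shown in this excerpt; a sketch that builds and pushes the three images might look like the following (the file name buildspec.yml, the AWS_ACCOUNT_ID and AWS_DEFAULT_REGION environment variables, and the artifact selection are assumptions):

```yaml
# Hypothetical buildspec.yml -- adjust names and variables to your project
version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Building the Docker images...
      - docker-compose -f docker-compose-prod.yml build
  post_build:
    commands:
      - echo Pushing the Docker images to ECR...
      - docker-compose -f docker-compose-prod.yml push
artifacts:
  files:
    - Dockerrun.aws.json
```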
Additional Information
Run npm run build ; this will create a dist folder in the project root
where the Angular app and all of its compiled assets are emitted as
a static website.
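For reference, the build script in package.json could be defined as follows (hypothetical; Angular 11 projects typically use the CLI's production flag):

```json
{
  "scripts": {
    "build": "ng build --prod"
  }
}
```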
Create an app.js file in the src folder. We will use the Express framework
to serve the Angular app and route all requests to index.html .
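A minimal sketch of such a server, assuming the built Angular app ends up in a dist folder next to src (the paths and port are assumptions):

```javascript
// Hypothetical src/app.js -- serves the Angular build, falls back to index.html
const express = require('express');
const path = require('path');

const app = express();
const distDir = path.join(__dirname, '..', 'dist');

// Serve the static Angular assets
app.use(express.static(distDir));

// Route every other request to index.html so Angular's router can handle it
app.get('*', (req, res) => {
  res.sendFile(path.join(distDir, 'index.html'));
});

const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Server listening on port ${port}`));
```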
Conclusion
By following these steps, you’ve created a robust development and
deployment pipeline, ensuring code quality, consistency, and automation.
This setup empowers you to efficiently manage and scale your multi-
container Docker application on AWS while streamlining the development
process.
Happy coding! 🎉
Leave a comment if you need the full source code.