Deploying multiple Docker containers to AWS Elastic Beanstalk with CI/CD using AWS CodePipeline and AWS ECR

Nnyw · Published in Towards AWS · 11 min read · May 18, 2021
In this tutorial, we will set up a multi-container Docker environment to
host an Angular 11 frontend application, a Node.js backend server,
and an NGINX reverse proxy server. We will then adopt CI/CD practices to
automate the code release process using AWS CodeBuild and AWS
CodePipeline, and deploy our application to AWS Elastic Beanstalk.

Step 1: Create Dockerfiles


A Dockerfile is a text file containing all the commands used to create a
container image.

Frontend Container

# Use the Node.js 12.7-alpine image as the base
FROM node:12.7-alpine

# Set the working directory
WORKDIR /usr/src/app/ui

# Copy package.json and package-lock.json files to the working directory
COPY package*.json ./

# Clean the npm cache and install project dependencies
RUN npm cache clean --force
RUN npm install

# Install the Angular CLI globally
RUN npm install -g @angular/cli

# Copy the application code into the container
COPY . .

# Expose port 4200
EXPOSE 4200

# Define the command to start the application
CMD ["npm", "run", "start"]

Backend Container
# Use the Node.js 12.13.0-alpine image as the base
FROM node:12.13.0-alpine

# Set the working directory
WORKDIR /usr/src/app/api

# Copy package.json and package-lock.json files to the working directory
COPY package*.json ./

# Install project dependencies
RUN npm install

# Copy the application code into the container
COPY . .

# Expose port 3000
EXPOSE 3000

# Define the command to start the API
CMD ["npm", "run", "start"]

NGINX Container

1. Create a default.conf file in the nginx folder with the following content:

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://ui:4200;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /api {
        proxy_pass http://api:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

2. Create a Dockerfile inside the nginx folder:

# Use the official NGINX image as the base
FROM nginx

# Copy the custom default.conf file to NGINX's configuration directory
COPY default.conf /etc/nginx/conf.d/default.conf

These Dockerfiles and configurations should help you set up your frontend,
backend, and NGINX containers for your application.

Step 2: Create docker-compose.yml

version: '3'
services:
  api:
    build:
      context: ./api
    container_name: api
    ports:
      - "3000:3000"
  ui:
    build:
      context: ./ui
    container_name: ui
    ports:
      - "4200:4200"
  nginx:
    container_name: nginx
    depends_on:
      - ui
      - api
    ports:
      - "80:80"
    build:
      context: ./nginx

Explanation:

docker-compose.yml is a configuration file used to define and manage multi-container applications.

In this docker-compose.yml file:

We specify the version of Docker Compose that we are using (version 3 in this case).

We define three services: api, ui, and nginx.

For each service:

build specifies the build context, which is the directory where the service's Dockerfile is located.

container_name assigns a custom name to the container for easier identification.

ports maps ports from the container to the host machine. For example, "3000:3000" maps port 3000 in the container to port 3000 on the host for the api service.

The nginx service depends on both ui and api, so Compose starts those containers first (note that depends_on waits for the containers to start, not for the applications inside them to be ready).

The nginx service maps port 80 in the container to port 80 on the host, allowing access to the NGINX server from the host machine.

To test the multi-container Docker setup, you can run the following
command in the same directory where the docker-compose.yml file is located:
docker-compose up --build

This command will build the Docker images and start the containers defined
in the docker-compose.yml file. Once the containers are running, you can
access the frontend at http://localhost and the backend at
http://localhost/api. The NGINX container acts as a reverse proxy, routing
requests to the appropriate services based on the URL path.
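As a quick check that the proxy routing works, hit both paths through NGINX on port 80:

# Both requests go through NGINX on port 80
curl http://localhost/       # proxied to the ui container on port 4200
curl http://localhost/api    # proxied to the api container on port 3000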

Step 3: Create a GitHub Repository and Set Up a Personal Access Token


1. Log in to your GitHub account and create a new repository. Download the
tutorial source code if you haven’t already.

2. Change the working directory to the project folder and initialize the
directory as a Git repository with the main branch.

cd multicontainer_docker_tutorial
git init -b main

3. Add all the files in the project directory to the Git repository and commit
them.

git add -A
git commit -m "First commit"

4. Add the origin remote repository URL, so that when you push changes,
they will be sent to that repository. Replace {username} with your GitHub
username and {repository} with the name of your repository.

git remote add origin https://github.com/{username}/{repository}.git

5. Push the commit from your local repository to the remote GitHub
repository.

git push -u origin main

6. Now, go to your GitHub repository on the GitHub website to verify that the
files have been successfully pushed to the repository.

7. To configure CodeBuild or other services later on, you will need to create a
GitHub personal access token:

a. Login to GitHub, navigate to Settings, and then Developer settings.

b. On the sidebar, select “Personal access tokens” and click “Generate new token.”

c. Tick the following scopes (permissions):

admin:repo_hook: This permission allows you to manage repository hooks, which can be useful for integrating with services like CodeBuild.

repo: This permission grants access to your repositories.

d. Click “Generate token” and GitHub will provide you with a token. Copy the
token and store it securely, as you will need it later to configure services like
CodeBuild or other tools that interact with your GitHub repository.

8. Finally, go back to your GitHub repository to ensure that the files are
being properly pushed and managed within the repository.

By following these steps, you have created a GitHub repository, pushed your
code to it, and obtained a personal access token for future integrations with
GitHub services.
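If you prefer a terminal to the console wizard used later, the token can also be registered with CodeBuild via the AWS CLI (one credential per account and region; <token> is the value generated above):

# Store the GitHub personal access token for CodeBuild source access
aws codebuild import-source-credentials \
  --server-type GITHUB \
  --auth-type PERSONAL_ACCESS_TOKEN \
  --token <token>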

Step 4: Set Up AWS Elastic Container Registry (ECR)


Amazon Elastic Container Registry (ECR) is a fully managed container
registry that simplifies storing, managing, sharing, and deploying container
images and artifacts.
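The repositories themselves must exist before anything can be pushed to them. A minimal sketch with the AWS CLI, assuming the repository names app-api, app-ui, and app-nginx used in the Compose file below:

# Create one repository per service; the names must match the image URIs
# in docker-compose-prod.yml
aws ecr create-repository --repository-name app-api --region $AWS_DEFAULT_REGION
aws ecr create-repository --repository-name app-ui --region $AWS_DEFAULT_REGION
aws ecr create-repository --repository-name app-nginx --region $AWS_DEFAULT_REGION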

In this step, we will configure the Docker Compose file for cloud
deployment: create a file named docker-compose-prod.yml and update the
image URIs to point to the ECR repositories we just created.

version: '3'
services:
  api:
    build:
      context: ./api
    container_name: api
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/app-api
    ports:
      - "3000:3000"

  ui:
    build:
      context: ./ui
    container_name: ui
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/app-ui
    ports:
      - "4200:4200"

  nginx:
    container_name: nginx
    depends_on:
      - ui
      - api
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/app-nginx
    ports:
      - "80:80"
    build:
      context: ./nginx
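CodeBuild will automate the build-and-push later, but you can verify the path manually first. A sketch, with placeholder values standing in for your own account ID and region:

# Placeholders: substitute your own AWS account ID and region
export AWS_ACCOUNT_ID=123456789012
export AWS_DEFAULT_REGION=ap-southeast-1

# Authenticate Docker against ECR, then build and push all three images
aws ecr get-login-password --region $AWS_DEFAULT_REGION \
  | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
docker-compose --file=docker-compose-prod.yml build
docker-compose --file=docker-compose-prod.yml push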

Step 5: Set Up an Elastic Beanstalk Environment


Follow these steps to create an Elastic Beanstalk environment for your multi-container Docker application:

1. Log in to your AWS account.

2. From the AWS Management Console, navigate to “Elastic Beanstalk.”

3. In the Elastic Beanstalk dashboard, click on “Create New Application.”

4. In the Application creation screen, click on “Actions” (top right), and then
choose “Create Environment.”

5. Select “Web server environment” as the environment type.

6. Fill in the Environment Details:


Application Name: multicontainer-docker-tutorial
Environment Name: multicontainer-docker-tutorial-prod
Domain: multicontainer-docker-tutorial-prod
Platform: Docker
Platform branch: Multi-container Docker running on 64bit Amazon
Linux
Platform version: 2.25.2 (Recommended)
Application code: Sample application

7. After configuring the environment details, click on the “Create environment” button.

The creation process will take a few minutes to complete as Elastic Beanstalk
provisions the necessary resources and deploys your Docker containers.
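If you prefer scripting over the console, the EB CLI can create the same application and environment. A sketch assuming the EB CLI is installed; choose the Multi-container Docker platform when prompted:

# Initialize the application, picking the Multi-container Docker platform at the prompt
eb init multicontainer-docker-tutorial --region ap-southeast-1

# Create the environment with the domain prefix configured above
eb create multicontainer-docker-tutorial-prod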

Access http://multicontainer-docker-tutorial-prod.ap-southeast-1.elasticbeanstalk.com/ in your web browser. If you see the message “Congratulations!” displayed, you have successfully configured and set up your Elastic Beanstalk environment.

Next, we need to grant Elastic Beanstalk permission to access Amazon Elastic Container Registry (ECR).

In the IAM dashboard, click on “Roles” from the left navigation pane. In the
list of roles, search for the role named aws-elasticbeanstalk-ec2-role that
Elastic Beanstalk uses for EC2 instances.

Select the aws-elasticbeanstalk-ec2-role to view its details. In the
“Permissions” tab, click the “Attach policies” button and select
AmazonEC2ContainerRegistryReadOnly.

Now, Elastic Beanstalk’s EC2 instances will have the necessary permissions
to access ECR and pull Docker images for your application.
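The same grant can be made with one CLI call (the role name is the default instance role shown above):

# Attach the read-only ECR policy to the Beanstalk EC2 instance role
aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly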

Step 6: Establish a CI/CD Pipeline


In this phase, we leverage the AWS CodePipeline service to create a robust
Continuous Integration and Continuous Delivery (CI/CD) pipeline for our
application. The primary objective is to automate the entire process when
code changes are committed to the main branch. The pipeline orchestrates a
sequence of actions, including code testing to validate compliance with
defined rules and subsequent tasks like rebuilding container images. Finally,
it deploys the updated release to our Elastic Beanstalk environment.

This automated workflow guarantees code quality and consistency while
ensuring that every change seamlessly integrates into the application. The
CI/CD pipeline brings efficiency and reliability to the development and
deployment process.

Step 7: Set Up AWS CodeBuild


In this step, we will configure AWS CodeBuild to automatically build and
rebuild container images whenever code changes are pushed to your GitHub
repository. To do this, you will also create a buildspec.yml file to define the
build commands and settings for CodeBuild.

Create a buildspec.yml File:

In the root directory of your project, create a buildspec.yml file.

The buildspec.yml file contains a collection of build commands and
settings in YAML format. It defines how CodeBuild should build your
application, including tasks such as fetching source code, running tests,
and building container images.
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker-compose --file=docker-compose-prod.yml build
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker-compose --file=docker-compose-prod.yml push
artifacts:
  files:
    - 'Dockerrun.aws.json'

Note that AWS_ACCOUNT_ID must be supplied as an environment variable in the CodeBuild project settings; AWS_DEFAULT_REGION is provided by CodeBuild automatically.

In this step, we’ll create a Dockerrun.aws.json file that describes how to
configure an Amazon EC2 instance within a multi-container Docker
environment on Elastic Beanstalk. Elastic Beanstalk uses Amazon Elastic
Container Service (ECS) to manage and orchestrate container deployments
in multi-container Docker environments.

Create a Dockerrun.aws.json File:

In the root directory of your project, create a Dockerrun.aws.json file.

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "app-api",
      "image": "<ECR-repo-uri>/app-api:<image-tag>",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 3000,
          "containerPort": 3000
        }
      ],
      "essential": true
    },
    {
      "name": "app-ui",
      "image": "<ECR-repo-uri>/app-ui:<image-tag>",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 4200,
          "containerPort": 4200
        }
      ],
      "essential": true
    },
    {
      "name": "app-nginx",
      "image": "<ECR-repo-uri>/app-nginx:<image-tag>",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": ["app-api", "app-ui"],
      "essential": true
    }
  ]
}
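Elastic Beanstalk rejects a malformed Dockerrun file at deploy time, so a quick local syntax check can save a pipeline round trip (either tool works, if installed):

# Validate that the file parses as JSON before committing
python -m json.tool Dockerrun.aws.json
# or
jq . Dockerrun.aws.json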

1. From the AWS Management Console, navigate to AWS CodeBuild.

2. Click on “Create build project.”

3. Fill in the required fields as follows:

Project Name: multicontainer-docker-tutorial
Source Provider: GitHub
Repository: Connect with a GitHub personal access token
GitHub personal access token: Enter the token
Repository: Repository in my GitHub account
GitHub Repository: {your_github_username}/multicontainer-docker-tutorial.git
Environment Image: Managed image
Operating system: Ubuntu
Runtime(s): Standard
Image: aws/codebuild/standard:4.0
Image version: Always use the latest image for this runtime version
Environment type: Linux
Privileged: Enable this flag if you want to build Docker images or want your builds to get elevated privileges (required here, since the buildspec runs docker-compose)
Service role: New service role
Build specifications: Use a buildspec file

Creating an IAM Policy for CodeBuild to Access ECR

In this step, we will create an inline policy in AWS IAM that grants AWS
CodeBuild the necessary permissions to access Amazon Elastic Container
Registry (ECR) for pulling and pushing Docker images. We will attach
this policy to the codebuild-multicontainer-docker-tutorial-service-role.



In the IAM dashboard, click on “Roles” from the left navigation pane. Search
for the role named codebuild-multicontainer-docker-tutorial-service-role
that was created for your CodeBuild project.

Select the codebuild-multicontainer-docker-tutorial-service-role to view its
details. In the “Permissions” tab, click the “Add inline policy” button and
select the “JSON” tab to create a custom policy.

Paste the following JSON policy statement to grant CodeBuild permission to
interact with ECR. Replace the account ID in the Resource ARN with your own
AWS account ID:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:CompleteLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:InitiateLayerUpload",
        "ecr:BatchCheckLayerAvailability",
        "ecr:PutImage"
      ],
      "Resource": "arn:aws:ecr:*:921490761789:repository/*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    }
  ]
}

Click the “Next: Tags” button if you want to add tags (optional). Click the
“Review policy” button. Provide a name for the policy, e.g.,
codebuildecrpolicy. Click the “Create policy” button.

Now, CodeBuild has the permissions it needs to interact with ECR, allowing
it to pull and push Docker images seamlessly.
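If you saved the JSON above to a file, the same inline policy can be attached with one CLI call (the file name here is just an assumption):

# Attach the inline ECR policy to the CodeBuild service role
aws iam put-role-policy \
  --role-name codebuild-multicontainer-docker-tutorial-service-role \
  --policy-name codebuildecrpolicy \
  --policy-document file://codebuild-ecr-policy.json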

Step 8: Set Up AWS CodePipeline

Log in to your AWS Console and navigate to CodePipeline. Click on “Create
pipeline.”

Configure Source Stage

Pipeline Name: multicontainer-docker-tutorial-pipeline
Service Role: New service role
Role Name: codepipeline-multicontainer-docker-tutorial-service-role
Source Provider: GitHub (Version 1), then click Connect to GitHub
Repository: {your_github_username}/multicontainer-docker-tutorial
Branch: main (CodePipeline will monitor this branch for changes.)
Change detection options: GitHub webhooks (recommended)

Configure Build Stage

Build provider: AWS CodeBuild
Region: {your_nearest_region}
Project name: multicontainer-docker-tutorial

Configure Deploy Stage

Deploy provider: AWS Elastic Beanstalk
Region: {your_nearest_region}
Application name: multicontainer-docker-tutorial
Environment name: multicontainer-docker-tutorial-prod

Done! The pipeline will start automatically.
Go back to the Elastic Beanstalk console to view the updated application, then try to access the URLs:

Frontend URL: http://multicontainer-docker-tutorial-prod.ap-southeast-1.elasticbeanstalk.com
Backend URL: http://multicontainer-docker-tutorial-prod.ap-southeast-1.elasticbeanstalk.com/api
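You can also watch the pipeline run and smoke-test the result from a terminal, using the pipeline name and example domain configured above:

# Show the state of each stage in the pipeline
aws codepipeline get-pipeline-state --name multicontainer-docker-tutorial-pipeline

# Smoke-test the deployed frontend and backend
curl -I http://multicontainer-docker-tutorial-prod.ap-southeast-1.elasticbeanstalk.com
curl -I http://multicontainer-docker-tutorial-prod.ap-southeast-1.elasticbeanstalk.com/api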

Additional Information

Unable to run the Angular web application after deploying to Elastic Beanstalk

You may encounter an issue where the Angular app runs locally using the
ng serve command, but not on Elastic Beanstalk. This is because Elastic
Beanstalk does not run the ng serve development server; the production
build is a set of static files that must be served by a web server. We need
to create a small Node.js server to serve the Angular app.

Run the npm run build command; this creates a dist folder in the project
root where the Angular app and all its dependencies are emitted as a
static website.

Create an app.js file in the src folder. We will use the Express framework
to serve the Angular app and route all requests to index.html.

var express = require('express');
var path = require('path');

var app = express();
var dist = path.resolve(__dirname + '/../dist/ui');
var fallbackPath = path.resolve(dist + '/index.html');
var port = 4200;
var datetime = "LastSync: " + new Date();

// Serve the static files from the Angular build output
app.use(express.static(dist));

// Route every other request to index.html so Angular handles client-side routing
app.get('*', function(req, res) {
  res.sendFile(fallbackPath);
});

var lastsync = datetime + ': Dashboard is on port ' + port + '!';
app.listen(port, () => console.log(lastsync));

Next, run this command to build and start your Angular app:

npm run build && node src/app.js

Conclusion
By following these steps, you’ve created a robust development and
deployment pipeline, ensuring code quality, consistency, and automation.
This setup empowers you to efficiently manage and scale your multi-
container Docker application on AWS while streamlining the development
process.

Happy coding! 🎉
Leave a comment if you need the full source code.

Clap if you found this tutorial helpful.

Follow me to get notified when I publish new tutorials.

Please buy me a coffee to support me.
