Deploy your Django app with Postgres DB to production using Docker and Nginx

If you have come across this blog, congrats! πŸ₯³ You are doing your best!
Running a Django application on your local system is quite easy. It doesn't matter if you are a newbie or a skilled professional, everyone sucks at deployment.

I will try to cover most of what you need to deploy your Django app to production efficiently. I'll show you how to use Docker and Nginx to create a secure, scalable deployment setup for your Django project, and later in the blog you can find out why I used this process.

Why Deploy Django with Docker and Nginx?

  • Docker simplifies development and deployment by containerising your application, ensuring consistency across environments.
  • Nginx is a high-performance web server that serves static files and acts as a reverse proxy for your Django app, optimising performance and security.

Your project structure should currently look like this:

.venv/
my_awesome_project/
β”œβ”€β”€ app/
β”‚   β”œβ”€β”€ settings.py
β”‚   β”œβ”€β”€ wsgi.py
β”œβ”€β”€ module1/
β”‚   β”œβ”€β”€ ....
β”œβ”€β”€ module2/
β”‚   β”œβ”€β”€ ....
β”œβ”€β”€ manage.py
β”œβ”€β”€ requirements.txt

To dockerize your project, you need to add 2 files in your root directory.

  • Dockerfile
  • docker-compose.yml

Note: You should have Docker Desktop installed to test the Docker containers.

Step 1 : Add Dockerfile

Your structure should look like this now.

.venv/
my_awesome_project/
β”œβ”€β”€ app/
β”‚   β”œβ”€β”€ settings.py
β”‚   β”œβ”€β”€ wsgi.py
β”œβ”€β”€ module1/
β”‚   β”œβ”€β”€ ....
β”œβ”€β”€ module2/
β”‚   β”œβ”€β”€ ....
β”œβ”€β”€ manage.py
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ Dockerfile

For a quick overview:
A Dockerfile is a script that defines how to build a Docker image for your application. Think of it as the blueprint for creating your app's environment. From one Dockerfile, you can create as many containers as you want.

Below is the Dockerfile which I personally use:

FROM python:3.13-slim AS runtime

# set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV DOCKERIZED=true
ENV MICRO_SERVICE=/app

# set work directory
RUN mkdir -p $MICRO_SERVICE
RUN mkdir -p $MICRO_SERVICE/staticfiles
RUN mkdir -p $MICRO_SERVICE/mediafiles

# where the code lives
WORKDIR $MICRO_SERVICE

RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    libpq-dev \
    libjpeg-dev \
    zlib1g-dev \
    curl \
    && rm -rf /var/lib/apt/lists/*

# copy requirements.txt for installing python libraries
COPY requirements.txt $MICRO_SERVICE

# upgrade pip and install python libraries from requirements.txt
RUN pip install --upgrade pip && pip install --no-cache-dir -r requirements.txt

# copy project
COPY . $MICRO_SERVICE

# Collect static files (optional)
RUN python manage.py collectstatic --noinput

This creates the missing directories, installs the system packages needed to build the Python dependencies, installs the Python packages from requirements.txt, copies the project into the /app directory, and collects static files if present in the project.
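
Note that the collectstatic step only works if settings.py knows where to put the files. Below is a minimal sketch of the relevant settings, assuming the /app working directory from the Dockerfile above; adjust if your project already defines these.

# settings.py -- minimal sketch; paths resolve to /app/staticfiles and /app/mediafiles in the container
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent  # /app inside the container

STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"  # collectstatic writes here

MEDIA_URL = "/media/"
MEDIA_ROOT = BASE_DIR / "mediafiles"    # user uploads land here

Nginx will later serve these two directories directly, which is why the Dockerfile pre-creates them.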

Step 2 : Add docker-compose.yml file

Your structure should look like this now.

.venv/
my_awesome_project/
β”œβ”€β”€ app/
β”‚   β”œβ”€β”€ settings.py
β”‚   β”œβ”€β”€ wsgi.py
β”œβ”€β”€ module1/
β”‚   β”œβ”€β”€ ....
β”œβ”€β”€ module2/
β”‚   β”œβ”€β”€ ....
β”œβ”€β”€ manage.py
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ docker-compose.yml

For a quick overview:
A docker-compose.yml defines and manages multi-container applications. It simplifies running complex apps with just one command.

Below is the docker-compose.yml which I personally use for local testing.

services:
  db:
    image: postgres:15-alpine
    container_name: postgres_db
    volumes:
      - db_data:/var/lib/postgresql/data/
    env_file:
      - .env
    ports:
      - 5432:5432
    restart: always
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $POSTGRES_USER -d $POSTGRES_DB"]
      interval: 5s
      timeout: 5s
      retries: 5

  backend:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: django_app
    command: gunicorn --config configs/gunicorn_cfg.py app.wsgi:application
    volumes:
      - .:/app:cached
      - static_volume:/app/staticfiles
      - media_volume:/app/mediafiles
    working_dir: /app
    expose:
      - 8000
    env_file:
      - .env
    restart: always
    healthcheck:
      test: ["CMD-SHELL", "python manage.py check --database default --deploy --fail-level CRITICAL"]
      interval: 10s
      timeout: 5s
      retries: 3

  nginx:
    image: nginx:stable-alpine
    container_name: nginx_server
    restart: always
    ports:
      - "80:80"
      - "443:443" # If you plan to use HTTPS
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - static_volume:/app/staticfiles:ro
      - media_volume:/app/mediafiles:ro
    depends_on:
      - backend

volumes:
  db_data:
  static_volume:
  media_volume:

This compose file defines 3 services (db, backend, nginx), whose containers are named postgres_db, django_app, and nginx_server.
No need to run anything yet.

Step 3 : Adding .env file

Before testing our containers, we need to create a .env file in the project folder.

The following settings are generally stored in the .env file:

DEBUG=True
SECRET_KEY='1234567890'
ACCESS_SECRET_KEY='0987654321'
ALLOWED_HOSTS=localhost,127.0.0.1
DB_ENGINE=django.db.backends.postgresql
POSTGRES_DB=app_db
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_HOST=db
POSTGRES_PORT=5432
DOCKER_RUNNING=true

Please note that POSTGRES_HOST uses the service name from the docker-compose.yml file, which is db. This is required when you are creating your own DB container. In the case of a remote DB, this would be the host URL.

When deploying to production, you would not need the postgres_db container, as you will be using a centralised DB. The DB credentials will still be saved in the .env file.
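
For completeness, here is a hedged sketch of how settings.py might consume these variables using plain os.environ (your project may use a helper like django-environ instead). Remember that env values arrive as strings, so DEBUG needs explicit parsing:

# settings.py -- sketch of reading the .env values; assumes docker compose
# injects them via env_file (names match the .env shown above)
import os

DEBUG = os.environ.get("DEBUG", "False").lower() in ("true", "1")
SECRET_KEY = os.environ.get("SECRET_KEY", "insecure-build-key")  # fallback only so build-time commands (e.g. collectstatic) can run
ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "").split(",")

DATABASES = {
    "default": {
        "ENGINE": os.environ.get("DB_ENGINE", "django.db.backends.postgresql"),
        "NAME": os.environ.get("POSTGRES_DB", "app_db"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        "HOST": os.environ.get("POSTGRES_HOST", "db"),  # compose service name
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}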

Your structure should look like this now.

.venv/
my_awesome_project/
β”œβ”€β”€ app/
β”‚   β”œβ”€β”€ settings.py
β”‚   β”œβ”€β”€ wsgi.py
β”œβ”€β”€ ....
β”œβ”€β”€ manage.py
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ docker-compose.yml
β”œβ”€β”€ .env

Step 4 : Adding gunicorn config

To run our application efficiently in production, we need to use gunicorn. I have already added the command in the docker-compose file. Let's add the config for it.
Your structure should look like this now.

...
my_awesome_project/
β”œβ”€β”€ app/
β”‚   β”œβ”€β”€ settings.py
β”‚   ...
β”œβ”€β”€ ....
β”œβ”€β”€ configs/
β”‚   β”œβ”€β”€ gunicorn_cfg.py
β”œβ”€β”€ manage.py
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ docker-compose.yml
β”œβ”€β”€ .env

And inside gunicorn_cfg.py, put the config below.

import multiprocessing

# Django WSGI application path in pattern MODULE_NAME:VARIABLE_NAME
wsgi_app = "app.wsgi:application"

# The granularity of Error log outputs
loglevel = "info"
backlog = 2048

# The number of worker processes for handling requests
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = "gthread"
threads = 2

# The socket to bind
bind = "0.0.0.0:8000"
timeout = 60

# Restart workers when code changes (development only!)
reload = True

keepalive = 5  # The number of seconds to wait for requests on a Keep-Alive connection
errorlog = "-"  # '-' logs to stderr
accesslog = "-"  # '-' logs to stdout

# Daemonize the Gunicorn process (detach & enter background)
daemon = False

Step 5 : Adding Nginx to Docker

There are a lot of benefits to adding an Nginx container rather than installing Nginx on the machine. The main one: you don't have to worry about configs that go missing in action or end up mismatched across different environments.
Update your folder structure to look like below:

...
my_awesome_project/
β”œβ”€β”€ app/
β”‚   β”œβ”€β”€ settings.py
β”‚   ...
β”œβ”€β”€ ....
β”œβ”€β”€ configs/
β”‚   β”œβ”€β”€ gunicorn_cfg.py
β”œβ”€β”€ nginx/
β”‚   β”œβ”€β”€ nginx.conf
β”œβ”€β”€ manage.py
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ docker-compose.yml
β”œβ”€β”€ .env

Now, in nginx.conf, add the following lines, and DO NOT MISS ANYTHING:

events {}
http {
    include /etc/nginx/modules-enabled/*.conf;
    include /etc/nginx/mime.types;

    upstream backend_api {
        server backend:8000;
    }

    server {
        listen 80;
        server_name localhost;

        location /favicon.ico {
            access_log off;
            log_not_found off;
        }

        error_log /var/log/nginx/error.log;
        access_log /var/log/nginx/access.log;

        location /static/ {
            autoindex on;
            alias /app/staticfiles/;
        }

        location /media/ {
            autoindex on;
            alias /app/mediafiles/;
        }

        location / {
            proxy_pass http://backend_api;
            proxy_set_header  Host              $http_host;   # required for docker client's sake
            proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
            proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header  X-Forwarded-Proto $scheme;
            proxy_read_timeout                  900;
            proxy_redirect off;
            client_max_body_size 10M;
        }
    }
}

Make sure that your upstream server name matches the compose service (i.e. backend in this case). The /static/ location should alias to the STATIC_ROOT set in your settings.py file.

With this, your application is now dockerized.
Open Docker Desktop; this starts the Docker daemon, which the docker commands talk to.
To build the containers, run the following command:

docker compose -f docker-compose.yml build

To run the application in detached mode, use the command:

docker compose -f docker-compose.yml up -d

This will create 3 containers in your Docker environment with the above configuration.
You should see all three of them running in Docker Desktop.

You can access your application on: localhost:80
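
If you prefer a scripted check over clicking around, here is a small smoke-test sketch (smoke_test.py is a hypothetical helper, not part of the setup above) that verifies Nginx is up and proxying to Django:

# smoke_test.py -- hypothetical helper; assumes the stack is running on localhost:80
import urllib.request

for path in ("/", "/static/"):  # "/" is proxied to Django; "/static/" is served by Nginx
    try:
        with urllib.request.urlopen(f"http://localhost{path}", timeout=5) as resp:
            print(path, "->", resp.status)
    except Exception as exc:
        print(path, "->", exc)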

Congratulations!! Your application is now system-independent.
No need to worry about virtual environments or system updates.

For small and medium applications, this will work wonders! For large and complex applications, you might need additional virtual machines for your application, like EC2 instances. In those cases, it is preferable to use either Docker Swarm or Kubernetes.

If you are building your college project or an MVP for your startup, this is the best and fastest way to deploy your application.
It takes care of your static files, your media files, your URL safety, everything.
With this configuration, your application should be able to handle on the order of 1 million requests per day.
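
To put that number in perspective, a quick back-of-the-envelope calculation:

# 1 million requests/day is a modest average load
requests_per_day = 1_000_000
seconds_per_day = 24 * 60 * 60
print(round(requests_per_day / seconds_per_day, 1))  # ~11.6 requests/second on average

# With workers = 2 * CPUs + 1 and 2 gthread threads per worker, a 4-core
# machine serves 9 * 2 = 18 requests concurrently, comfortably above the
# average rate; real traffic peaks well above its average, so leave headroom.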


Bonus Content
We deployed the application using the WSGI entrypoint of the Django project. WSGI does not support WebSockets or other real-time tasks. To support those, you need to use ASGI.
You can read more about it on the internet. The baseline is:

WSGI is synchronous and handles requests sequentially, while ASGI is asynchronous and allows for concurrent handling of multiple requests.

Now, to incorporate it into our project, we are going to install uvicorn, one of the best-performing ASGI servers in Python. We will use its worker with gunicorn to make our application asynchronous.
Install the packages using the command below:

pip install uvicorn uvicorn-worker

You can then change the following lines in gunicorn_cfg.py:

# wsgi_app = "backend.wsgi:application"
...
# worker_class = "gthread"
worker_class = "uvicorn.workers.UvicornWorker"

and in the docker-compose file, we need to change the command to use the ASGI entrypoint from our project:

...
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: gunicorn -c configs/gunicorn_cfg.py app.asgi:application
    volumes:
      - .:/app:cached
      - static_volume:/app/staticfiles
      - media_volume:/app/mediafiles
...

Build and run the containers once again. Your application should start in async mode.
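
To see async mode in action, here is a hedged sketch of an async Django view (the view and its route are hypothetical, not part of the project above). Under WSGI, the sleep would tie up a worker thread; under ASGI with the uvicorn worker, the event loop keeps serving other requests in the meantime.

# views.py -- hypothetical async view; Django supports these natively since 3.1
import asyncio

from django.http import JsonResponse

async def slow_ping(request):
    await asyncio.sleep(1)  # simulate slow I/O without blocking the event loop
    return JsonResponse({"status": "ok"})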


Why an Nginx container?
While it's true you can install and configure Nginx directly on your production machine, using a separate Nginx container within your Docker Compose setup offers several compelling advantages, especially in modern deployment workflows:

  1. Isolation and Consistency: The Nginx container runs in its own isolated environment, with its own dependencies and configuration. You ensure a consistent Nginx setup regardless of the underlying operating system or existing software on the server. The Nginx container image guarantees that you're deploying the exact same Nginx version and configuration across different environments (development, staging, production). This significantly reduces the "it works on my machine" problem.
  2. Scalability: If your web traffic increases, you can easily scale the number of Nginx containers horizontally (add more instances) without directly affecting your application or database; Docker orchestration tools like Docker Swarm or Kubernetes make this process much smoother. And when you run multiple instances of your Django application (scaling the backend service), Nginx, acting as a reverse proxy, can efficiently distribute incoming traffic across them, improving performance and resilience.
  3. Portability: Your Docker Compose setup is environment-agnostic; it can be moved and deployed to different cloud providers or on-premise environments that support Docker without reconfiguring Nginx for each environment.

When and why do we need a DB container?
You should use a DB container when:

  1. You're developing locally and want a consistent, isolated database environment.
  2. You're deploying to staging/production and want the database to be part of your infrastructure stack.
  3. You need to ensure compatibility between your app and a specific version/config of a database like PostgreSQL or MySQL.
  4. You're testing features or migrations and want a throwaway DB you can spin up/tear down easily.
  5. You want to avoid polluting your host OS with multiple database installs or versions.

Can we create containers with just a Dockerfile?
Technically, yes. But think of it this way: while you could start each component (Django, Nginx, PostgreSQL) as a separate container with its own docker run command, docker compose provides a way to orchestrate these services as a cohesive application stack. No need to run multiple docker run commands manually.

With docker compose up, you can spin up your entire application stack. It handles network creation, volume mounting, and service orchestration automatically.

Other benefits include:

  1. Built-in Networking: Services can communicate with each other using service names (e.g., backend, db) instead of IP addresses, making it easy to link Django to PostgreSQL or Nginx (see the sketch after this list).
  2. Centralised Configuration: Your app, database, reverse proxy, and environment variables can all be configured in one place. Easier to read, debug, and modify.
  3. Scalability & Testing: You can scale services using docker-compose up --scale, and test production-like setups locallyβ€”ideal for integration testing and CI pipelines.
  4. Volume & Data Management: Compose makes it easy to define and persist volumes (e.g., for databases or static files), ensuring your data isn’t lost when containers restart.
  5. Isolation: Each environment (dev, staging, prod) can have its own docker-compose file or override settings with .env files for secure and environment-specific configurations.
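
As a concrete illustration of the built-in networking, below is a sketch you could run inside the backend container to confirm Postgres is reachable by its service name. It assumes psycopg2 (or psycopg2-binary) is in your requirements.txt, which the postgresql engine needs anyway; db_check.py itself is a hypothetical helper, not part of the setup above.

# db_check.py -- hypothetical helper; run inside the backend container
import os

import psycopg2  # already required by django.db.backends.postgresql

conn = psycopg2.connect(
    host=os.environ.get("POSTGRES_HOST", "db"),  # the compose service name, not localhost
    port=os.environ.get("POSTGRES_PORT", "5432"),
    dbname=os.environ["POSTGRES_DB"],
    user=os.environ["POSTGRES_USER"],
    password=os.environ["POSTGRES_PASSWORD"],
)
print("connected, server version:", conn.server_version)
conn.close()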

I hope you like this article. Thank you!
