Nginx: Setting Up a Simple Proxy Server Using Docker and Python/Django...
UPDATE: The docker-compose file has been updated to allow the Django server to send logs to Logstash properly. Please reference the repository as well as settings.py for the logging settings.
This post is a continuation of Using Docker with Elasticsearch, Logstash, and Kibana (ELK) where we dockerized the whole application.
At the end of that post, I suggested that folks add Nginx to the Docker configuration as a form of practice. In this post, I will make yet another bold attempt to show what Nginx is, why it should be considered, and how to add it to the existing configuration.
Feeling excited? Let's do this!
A Little Background
Normally, applications running on any platform (production or development) use a server to respond to requests from the calling client (the users of the application). The framework of concern here is Python/Django, which uses runserver
(python manage.py runserver) as the development server to deliver content whenever a request is made.
While this works, it is not advisable to use this development server (runserver) in a production environment, and that is where Green Unicorn comes into play. Green Unicorn (Gunicorn) is a Python HTTP server that interacts with the web application via the Web Server Gateway Interface (WSGI), a middleman between the web application and the web server.
However, using Gunicorn on its own as a production server is not advisable either, as Gunicorn's own documentation notes:
Although there are many HTTP proxies available, we strongly advise that you use Nginx. If you choose another proxy server you need to make sure that it buffers slow clients when you use default Gunicorn workers. Without this buffering Gunicorn will be easily susceptible to denial-of-service attacks.
That said, for our current setup we are going to create a production environment that incorporates Gunicorn and Nginx as our application server and proxy server respectively.
Workflow: the web browser makes a request, which goes first to Nginx (the proxy server); Nginx forwards that request to Gunicorn (the Python HTTP server); Gunicorn receives it and communicates with the web application via the Web Server Gateway Interface (WSGI).
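To make that last handoff concrete, here is a minimal sketch of a WSGI application: the same kind of callable that Django's bookme.wsgi module exposes and that Gunicorn invokes for every request (this toy app is illustrative and not from the bookme project):

```python
# A minimal WSGI application: a callable that takes the request environ
# and a start_response function, and returns an iterable of bytes.
def app(environ, start_response):
    body = b"Hello from WSGI!"
    status = "200 OK"
    headers = [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ]
    start_response(status, headers)
    return [body]

# Gunicorn would serve this with: gunicorn some_module:app
```

Django generates exactly such a callable for you (named `application` in bookme/wsgi.py), which is why the start script below can simply point Gunicorn at `bookme.wsgi`.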
Time To Implement
Here is the SOURCE CODE for reference.
First, we split up our Docker Config
We originally had a simple docker-compose.yml file where all our configuration was placed. Depending on what you want, you can split your file structure into Production and Development folders for your Dockerfile configuration.
For this section, I created two extra files, docker-compose.override.yml and docker-compose.prod.yml, and modified the already existing docker-compose.yml file.
# docker-compose.override.yml
version: '3.2'

services:
  django_web:
    labels:
      com.example.service: "web"
      com.example.description: "Use for the main web process"
    build:
      context: ./bookme/docker_compose/django/
      dockerfile: Dockerfile
    image: bookme_django_web:1.0
    depends_on:
      - db
      - es
      - kibana
    command: ["./docker_compose/django/wait_for_postgres.sh"]
    ports:
      - "8000:8000"
    environment:
      PRODUCTION: 'false'
    logging:
      driver: "json-file"
    volumes:
      - ./bookme:/app
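The wait_for_postgres.sh script referenced in the command instruction above (its actual contents are in the repository) simply blocks until the database accepts connections before starting Django, so the web container does not race the db container. The same idea can be sketched in Python with a plain TCP probe; the function name and the db/5432 values are my own, chosen to match the compose file:

```python
import socket
import time

def wait_for_port(host, port, timeout=1.0, retries=30, delay=1.0):
    """Return True once a TCP connection to host:port succeeds,
    or False after exhausting all retries."""
    for _ in range(retries):
        try:
            # create_connection raises OSError while the service is down
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            time.sleep(delay)
    return False

# Inside the compose network the database is reachable as 'db' on 5432:
# if wait_for_port("db", 5432):
#     ...start the Django server...
```

This is why `depends_on` alone is not enough: it orders container startup but does not wait for Postgres to be ready to accept connections.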
# docker-compose.yml
version: '3.2'

services:
  db:
    restart: always
    image: postgres
    container_name: bookme_db
    volumes:
      - type: volume
        source: dbdata
        target: /pg_data
    ports:
      - "8001:5432"
  django_web:
    container_name: django_web
    environment:
      - LOGSTASH_HOST=logstash
    expose:
      - "5959"
  es:
    labels:
      com.example.service: "es"
      com.example.description: "For searching and indexing data"
    image: elasticsearch:5.4
    container_name: bookme_es
    volumes:
      - type: volume
        source: esdata
        target: /usr/share/elasticsearch/data/
    ports:
      - "9200:9200"
  kibana:
    labels:
      com.example.service: "kibana"
      com.example.description: "Data visualisation and for log aggregation"
    image: kibana:5.4.3
    container_name: bookme_kibana
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_URL=http://es:9200
    depends_on:
      - es
  logstash:
    labels:
      com.example.service: "logstash"
      com.example.description: "For logging data"
    image: logstash:5.4.3
    volumes:
      - ./:/logstash_dir
    command: logstash -f /logstash_dir/logstash.conf
    ports:
      - "5959:5959"
    depends_on:
      - es

volumes:
  dbdata:
  esdata:
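The LOGSTASH_HOST environment variable and port 5959 above are consumed by the Django logging configuration in settings.py (see the repository for the exact version used by the project). As a rough sketch, assuming the python-logstash package, such a LOGGING dict might look like this:

```python
import os

# 'logstash' inside the compose network, overridable for local runs
LOGSTASH_HOST = os.environ.get("LOGSTASH_HOST", "localhost")

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "logstash": {
            "level": "INFO",
            "class": "logstash.TCPLogstashHandler",  # from python-logstash
            "host": LOGSTASH_HOST,
            "port": 5959,  # matches the port exposed in docker-compose.yml
            "version": 1,  # logstash message schema version
        },
    },
    "loggers": {
        "django": {
            "handlers": ["logstash"],
            "level": "INFO",
        },
    },
}
```

With this in place, anything Django logs at INFO or above is shipped over TCP to the logstash service, which forwards it to Elasticsearch for viewing in Kibana.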
Here I am making use of docker-compose's ability to combine configuration files: docker-compose.override.yml together with docker-compose.yml acts as our development configuration.
Think of it as substitution, where docker-compose.yml is used as the common configuration for both the override (development) and prod (production) configurations.
# docker-compose.prod.yml
version: '3.2'

services:
  django_web:
    labels:
      com.example.service: "web"
      com.example.description: "Use for the main web process"
    build:
      context: ./bookme/docker_compose/django/
      dockerfile: Dockerfile
    image: bookme_django_web:1.0
    depends_on:
      - db
      - es
      - kibana
    command: ["./docker_compose/django/wait_for_postgres.sh"]
    environment:
      PRODUCTION: 'true'
      LOGSTASH_HOST: logstash
    expose:
      - "5959"
    logging:
      driver: "json-file"
    volumes:
      - ./bookme:/app
  nginx:
    restart: always
    container_name: nginx_server
    build:
      context: ./bookme/docker_compose/nginx/
      dockerfile: Dockerfile
    depends_on:
      - django_web
    ports:
      - "0.0.0.0:80:80"
The override file has just the django_web service, while the prod config file has the django_web service and the nginx service; the configuration common to both is the docker-compose.yml file.
Something to note here is the nginx service we have introduced in the docker-compose.prod.yml file, where we map port 80 on the host machine to port 80 in the container. As for the django_web service, we added an environment instruction (PRODUCTION: 'true') to show that this configuration is specifically for production.
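On the Django side, the PRODUCTION variable can be read in settings.py to switch behaviour between the two environments. A minimal sketch, assuming you toggle DEBUG on it (which variables you actually switch is up to you):

```python
import os

# 'false' in docker-compose.override.yml, 'true' in docker-compose.prod.yml
PRODUCTION = os.environ.get("PRODUCTION", "false").lower() == "true"

# Never run production with DEBUG enabled
DEBUG = not PRODUCTION
```

The same variable also drives the branch in the start.sh script shown later, so both the process manager and the Django settings agree on which environment they are in.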
Second, we introduce the Dockerfile and nginx.conf file for nginx
# nginx dockerfile
FROM nginx:latest
ADD nginx.conf /etc/nginx/nginx.conf
The Nginx Dockerfile and configuration can be referenced here
# nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024; ## Default: 1024, increase if you have lots of clients
}

http {
    include /etc/nginx/mime.types;

    # fallback in case we can't determine a type
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    upstream app {
        server django_web:8000;
    }

    server {
        # use 'listen 80 deferred;' for Linux
        # use 'listen 80 accept_filter=httpready;' for FreeBSD
        listen 80;
        charset utf-8;

        # Handle noisy favicon.ico messages in nginx
        location = /favicon.ico {
            return 204;
            access_log off;
            log_not_found off;
        }

        location / {
            # checks for static file, if not found proxy to app
            try_files $uri @proxy_to_app;
        }

        # django app
        location @proxy_to_app {
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_pass http://app;
        }
    }
}
To understand the nginx.conf file, let's take a brief look at the nginx documentation:
nginx consists of modules which are controlled by directives specified in the configuration file. Directives are divided into simple directives and block directives. A simple directive consists of the name and parameters separated by spaces and ends with a semicolon. A block directive has the same structure as a simple directive, but instead of the semicolon, it ends with a set of additional instructions surrounded by braces ({ and }). If a block directive can have other directives inside braces, it is called a context (examples: events, http, server, and location).
Directives placed in the configuration file outside of any contexts are considered to be in the main context. The events and http directives reside in the main context, server in http, and location in server.
The rest of a line after the # sign is considered a comment.
To know more about the blocks and directives, as that is another topic on its own, I suggest going through the nginx beginner's guide and Gunicorn's official deployment documentation for Nginx.
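One practical consequence of the proxy_set_header lines in the @proxy_to_app block: Django no longer sees the client directly, so it can be told to trust the forwarded headers nginx sets. These are standard Django settings, though whether you need them depends on your setup:

```python
# settings.py -- let Django honour the headers nginx sets in @proxy_to_app

# Build absolute URLs from the X-Forwarded-Host header (set by
# 'proxy_set_header X-Forwarded-Host' above) instead of the Host header.
USE_X_FORWARDED_HOST = True

# If nginx later terminates TLS, this would tell Django a request is
# secure whenever the proxy says so. Only enable it if nginx always
# sets X-Forwarded-Proto itself, otherwise clients can spoof it:
# SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
```

Similarly, the X-Real-IP / X-Forwarded-For headers are what you would consult in application code if you ever need the real client address rather than the proxy's.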
Third, we edit our start.sh script
# start.sh
#!/bin/bash

function manage_app () {
    python manage.py makemigrations
    python manage.py migrate
}

function start_development() {
    # use django runserver as the development server here
    manage_app
    python manage.py runserver 0.0.0.0:8000
}

function start_production() {
    # use gunicorn as the production server here
    manage_app
    gunicorn bookme.wsgi -w 4 -b 0.0.0.0:8000 --chdir=/app --log-file -
}

if [ "${PRODUCTION}" == "false" ]; then
    # use development server
    start_development
else
    # use production server
    start_production
fi
Two functions to take note of here: the start_development function, which runs the application with the development server (runserver), and the start_production function, which runs it with the production server (Gunicorn).
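A note on the -w 4 flag in the script: it starts four Gunicorn worker processes. Gunicorn's documentation suggests roughly (2 × CPU cores) + 1 workers as a starting point rather than a fixed number; a tiny helper for computing that (the function name is mine, not Gunicorn's):

```python
import multiprocessing

def suggested_workers(cores=None):
    """Rule of thumb from the Gunicorn docs: (2 * cores) + 1."""
    if cores is None:
        cores = multiprocessing.cpu_count()
    return cores * 2 + 1

print(suggested_workers(2))  # a 2-core machine -> 5 workers
```

So on a 2-core container, -w 5 would be the documented starting point; -w 4 is simply a reasonable fixed choice for this demo.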
Fourth, we define our automation file to manage the way the application runs.
At this point we define a Makefile to automate repeated tasks such as starting the development or production environment, ssh-ing into the containers, and stopping the processes. You can add other commands to the file as well.
# Makefile
start-dev:
	docker-compose up

start-prod:
	docker-compose -f docker-compose.yml -f docker-compose.prod.yml up

stop-compose:
	@eval docker stop $$(docker ps -a -q)
	docker-compose down

ssh-nginx:
	docker exec -it nginx_server bash

ssh-django-web:
	docker exec -it django_web bash

ssh-db:
	docker exec -it db bash

ssh-es:
	docker exec -it es bash

ssh-kibana:
	docker exec -it kibana bash

check-network-config-details:
	docker network inspect bookme_default

build-prod:
	docker-compose -f docker-compose.yml -f docker-compose.prod.yml build

build-dev:
	docker-compose build
To start the Docker process in production, run make start-prod; to start it in development, run make start-dev.
Once the application is up and running, visit http://localhost/ for production or http://localhost:8000/ for development in your browser to view the page.
The other commands are easy to see through at this point, but a little note on the production command: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up starts the Docker process using only the docker-compose.yml and docker-compose.prod.yml files.
Running a plain docker-compose up, on the other hand, uses docker-compose.yml together with docker-compose.override.yml to start the development environment.
NOTE
Nginx, used as a proxy server in this post, can among other things also act as a software load balancer or a caching and buffering proxy.
If you are feeling inspired by all this, you can read more on the Nginx site and use this post on DigitalOcean for a more in-depth look at Nginx.
Conclusion
So far so good, we have made it all the way to this point: we restructured our docker-compose files, added a couple more to the existing setup, and created a new Dockerfile for Nginx as well as adding the nginx.conf file to the project.
We also created a Makefile to automate the process, including starting the production and development environments as well as other management commands. This is where we wrap it up for this post.
Congratulations, if you made it here. Thanks for reading and feel free to like this post.