
Using Docker with Elasticsearch, Logstash, and Kibana (ELK)

Published Nov 21, 2017 · Last updated Jun 18, 2018

UPDATE: The docker-compose file has been updated to allow the Django server to send logs to Logstash properly. Please reference the repository as well as settings.py for the logging settings.
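(For reference, the LOGGING block below is a minimal sketch of such a setup, assuming the python-logstash package; the handler name, log level, and logger choice are illustrative rather than the repository's exact settings.)

# settings.py - a minimal sketch, assuming the python-logstash package
import os

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'logstash': {
            'level': 'INFO',
            'class': 'logstash.TCPLogstashHandler',
            # LOGSTASH_HOST is set in docker-compose.yml; 5959 matches logstash.conf
            'host': os.environ.get('LOGSTASH_HOST', 'localhost'),
            'port': 5959,
            'version': 1,  # logstash event schema version
        },
    },
    'loggers': {
        'django': {
            'handlers': ['logstash'],
            'level': 'INFO',
        },
    },
}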

This post is a continuation of Using Django with Elasticsearch, Logstash, and Kibana (ELK Stack)

SOURCE CODE FOR THIS POST

Note: Our focus is not on the fundamentals of Docker. If you would like a general idea of Docker, follow this link before you return; otherwise, sit back and enjoy the show.

Docker has been around the block for a while now, but some folks are still not familiar with the whole idea of Docker, let alone how to use it. Here, I will make a bold attempt to show its application and how it makes development easy, so get ready to ride with me as we explore the power of Docker and how it can be integrated into our/your application.

The Issue!

In the previous blog post, we installed Elasticsearch, Kibana, and Logstash, and we had to open up different terminals in order to use them. It worked, right? But having to start all those processes manually can be a pain. Moreover, if you had different developers working on such a project, each would have to set it up according to their operating system (macOS, Linux, or Windows).

This would make the development environment different for each developer on a case-by-case basis and increase the complexity and time it would take to resolve any issues you'd face while developing. Not cool, right?

Enter Docker

Docker provides a container image, which is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings, etc.

Regardless of the environment, containerized software will always run the same, for both Linux and Windows-based apps (reference).

Beyond this intro, Docker isolates applications from one another and from the underlying infrastructure. Want to know more? Here is the link to Docker wisdom; dig in!

Docker and The Project

Applications usually require one or more processes to run, such as a web process, a DB process like Postgres or MySQL, Nginx, Elasticsearch, etc. All these processes generally run locally on your system during development, before you use platforms like AWS, Google Cloud Platform (GCP), DigitalOcean, or Azure to host them.

With Docker, each of these processes/services is placed in an isolated environment/container and made to communicate with the others the same way they would when running directly on your local machine.

Docker takes away the strain of running all these processes directly on your machine by running them in isolated and secure environments, all connected to each other via one or more networks.

That said, a container can only be created from an image, and you get an image either by building one with a Dockerfile or by pulling one from Docker Hub (a repository of Docker images, somewhat similar to GitHub).
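To make that concrete, here is roughly what that looks like with the raw Docker CLI (the image tag and container name are illustrative):

docker pull postgres                       # get an image from Docker Hub
docker build -t bookme_django_web:1.0 .    # build an image from a Dockerfile in the current directory
docker run --name django_web -p 8000:8000 bookme_django_web:1.0   # start a container from that image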

So how many services do we have?
For this application, we are making use of the following services:

  • Postgres/db
  • Elasticsearch/es
  • Logstash
  • Kibana
  • django_web

You can also add an Nginx service to it. I'll leave that to you; dive in and have a go at it when you are ready.

Postgres service/process

A docker-compose file allows us to connect services together without running the actual Docker CLI commands to do so. We create a docker-compose.yml file in the root of the repository and add this snippet to the file for the Postgres service:

# docker-compose.yml file
version: '3.2'

services:
  db:
    restart: always
    image: postgres
    container_name: bookme_db
    volumes:
      - type: volume
        source: dbdata
        target: /pg_data
    ports:
      - "8001:5432"

What did I just write? The compose file is a simple yml (YAML) file that tells Docker how each service should run and operate.

version - Tells docker-compose which compose file format you are using, as newer versions come with new possibilities and upgrades to how you can configure Docker containers.

services - The various processes that your application runs on.

db - The service/container that will be created for our database and will be built from the Postgres image.

restart - Has several options, but here the container will always restart if it goes down.

image - Tells the Docker daemon which Docker image it should start the container from.

container_name - The name the container should be given, for ease of debugging and/or reference.

volumes - Deals with the data that should be shared between the host and the container (in a simple, relative sense, think of it as how Docker knows to share modified data between the host and the container).

ports - Here we use this to map port 8001 on the local machine to port 5432 in the container for this process.
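For instance, once the service is up, you should be able to reach the containerized database from your host on the mapped port (a quick sanity check, assuming the psql client is installed on your machine):

psql -h localhost -p 8001 -U postgres      # lands on port 5432 inside the container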

Elasticsearch service/process

# docker-compose.yml
.....
es:
    labels:
      com.example.service: "es"
      com.example.description: "For searching and indexing data"
    image: elasticsearch:5.4
    container_name: bookme_es
    volumes:
      - type: volume
        source: esdata
        target: /usr/share/elasticsearch/data/
    ports:
      - "9200:9200"

labels - Used to add metadata (info) to the resulting container.
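Once this service is running, a quick way to confirm Elasticsearch is reachable from the host is to hit the mapped port:

curl http://localhost:9200                 # should return a JSON blob with the cluster name and version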

Logstash service/process

# docker-compose.yml

...
logstash:
    labels:
      com.example.service: "logstash"
      com.example.description: "For logging data"
    image: logstash:5.4.3
    volumes:
      - ./:/logstash_dir
    command: logstash -f /logstash_dir/logstash.conf
    depends_on:
      - es
    ports:
      - "5959:5959"

For our Logstash service, we need to edit our logstash.conf file to point to our es service:

input {
  tcp {
    port => 5959
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["http://es:9200"]
  }
}

Here we change our hosts value from localhost:9200 to http://es:9200.
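If you want to sanity-check the pipeline by hand, you can push a JSON event into Logstash's TCP input from the host. The snippet below is purely illustrative and not part of the repository:

# send a one-off JSON event to the logstash tcp input on the mapped port
import json
import socket

event = {'message': 'hello from the host', 'level': 'INFO'}
with socket.create_connection(('localhost', 5959)) as sock:
    sock.sendall((json.dumps(event) + '\n').encode('utf-8'))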

Kibana service/process

# docker-compose.yml
....

kibana:
    labels:
      com.example.service: "kibana"
      com.example.description: "Data visualisation and for log aggregation"
    image: kibana:5.4.3
    container_name: bookme_kibana
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_URL=http://es:9200
    depends_on:
      - es

environment - Sets the environment variable ELASTICSEARCH_URL to http://es:9200, where es is the name of our elasticsearch service - reference

depends_on - Tells Docker to start the elasticsearch service before starting the kibana service.
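After the stack is up, you can check from the host that Kibana is running and can see Elasticsearch (an illustrative check against Kibana's status endpoint):

curl http://localhost:5601/api/status      # reports the state of kibana and its es connection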

django_web service/process

# docker-compose.yml
....

django_web:
    container_name: django_web
    labels:
      com.example.service: "web"
      com.example.description: "Use for the main web process"
    build:
      context: ./bookme/docker_compose/django/
      dockerfile: Dockerfile
    image: bookme_django_web:1.0
    depends_on:
      - db
      - es
      - kibana
    command: ["./docker_compose/django/wait_for_postgres.sh"]
    ports:
      - "8000:8000"
    environment:
      - LOGSTASH_HOST=logstash
    expose:
      - "5959"
    logging:
      driver: "json-file"
    volumes:
      - ./bookme:/app

build - Here we are using build as an object, specifying the context (the path to the Dockerfile) and dockerfile (the Dockerfile name to use, as the name can sometimes vary).

Speaking of Dockerfiles, here is the Dockerfile config, placed at the bookme/bookme/docker_compose/django/dockerfile path of the repository.

FROM python:3.6.2
ENV PYTHONUNBUFFERED 1

# update package lists, fix broken system packages
RUN apt-get update
RUN apt-get -f install

# install and cache dependencies in /tmp directory.
# doing it this way also installs any newly added dependencies.
RUN pip3 install --upgrade pip
ADD requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt

# load project files and set work directory
ADD . /app/
WORKDIR /app

# create user and add to docker group
RUN adduser --disabled-password --gecos '' djangobookme
RUN groupadd docker
RUN usermod -aG docker djangobookme

# grant newly created user permissions on essential files
RUN chown -R djangobookme:$(id -gn djangobookme) ~/
RUN chown -R djangobookme:$(id -gn djangobookme) /app/

# change user to newly created user
USER djangobookme

A Dockerfile is used to create a Docker image and is made up of instructions such as FROM, RUN, ADD, etc. Here is a reference to Dockerfile instructions and how they can be used.

depends_on - Using depends_on, we can control the startup order of the application's services.

However, Compose will not wait until a container is "ready" (only until it is running), so a service like Postgres would cause our Docker setup to break. We therefore introduce the command instruction to tell the django_web service to wait until the Postgres service is ready before fully running.

Here is the script, located at bookme/bookme/docker_compose/django/wait_for_postgres.sh in the codebase:

#!/bin/bash

# wait for Postgres to start
function postgres_ready() {
python << END
import sys
import psycopg2
try:
    conn = psycopg2.connect(dbname="postgres", user="postgres", password="postgres", host="db")
except psycopg2.OperationalError:
    sys.exit(-1)
sys.exit(0)
END
}

until postgres_ready; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

# Start app
>&2 echo "Postgres is up - executing command"

./docker_compose/django/start.sh

bookme/bookme/docker_compose/django/start.sh

#!/bin/bash

# start django
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:8000

Don't forget to configure your database for Django in the settings.py file. Note that HOST is set to db, the name of our Postgres service on the Compose network:

...
# settings.py

# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'HOST': 'db',
        'PORT': 5432,
    }
}

logging - Used to gather logs about the Docker process.

One More Thing

Because the application has been dockerized, we need to point the app to our dockerized instance of Elasticsearch. That said, the little modification to be made is to our bookemeapi/documents.py file:

client = Elasticsearch(['es:9200'])

Here we point to the dockerized es, referencing the es service defined in the compose file.
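A quick, illustrative way to confirm the client connects from inside the django_web container (using the elasticsearch-py client from the previous post):

from elasticsearch import Elasticsearch

# 'es' resolves to the elasticsearch container on the compose network
client = Elasticsearch(['es:9200'])
print(client.info())  # prints cluster name and version details if the connection works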

It's a Wrap

Here is the complete configuration file used for the project, placed in the root of the directory.
Source Code

version: '3.2'

services:
 db:
   restart: always
   image: postgres
   container_name: bookme_db
   volumes:
     - type: volume
       source: dbdata
       target: /pg_data
   ports:
     - "8001:5432"
 es:
   labels:
     com.example.service: "es"
     com.example.description: "For searching and indexing data"
   image: elasticsearch:5.4
   container_name: bookme_es
   volumes:
     - type: volume
       source: esdata
       target: /usr/share/elasticsearch/data/
   ports:
     - "9200:9200"
 kibana:
   labels:
     com.example.service: "kibana"
     com.example.description: "Data visualisation and for log aggregation"
   image: kibana:5.4.3
   container_name: bookme_kibana
   ports:
     - "5601:5601"
   environment:
     - ELASTICSEARCH_URL=http://es:9200
   depends_on:
     - es
 logstash:
   labels:
     com.example.service: "logstash"
     com.example.description: "For logging data"
   image: logstash:5.4.3
   volumes:
     - ./:/logstash_dir
   command: logstash -f /logstash_dir/logstash.conf
   depends_on:
     - es
   ports:
     - "5959:5959"
 django_web:
   container_name: django_web
   labels:
     com.example.service: "web"
     com.example.description: "Use for the main web process"
   build:
     context: ./bookme/docker_compose/django/
     dockerfile: Dockerfile
   image: bookme_django_web:1.0
   depends_on:
     - db
     - es
     - kibana
   command: ["./docker_compose/django/wait_for_postgres.sh"]
   ports:
     - "8000:8000"
   environment:
     - LOGSTASH_HOST=logstash
   expose:
     - "5959"
   logging:
     driver: "json-file"
   volumes:
     - ./bookme:/app

volumes:
 dbdata:
 esdata:
 

Now that we are set, all you need to do is run these commands from your terminal.

To start the processes:

docker-compose up

To stop them:

docker-compose down
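A few other standard compose commands are worth keeping handy while developing:

docker-compose up -d                    # start everything in the background (detached)
docker-compose ps                       # list the running services
docker-compose logs -f django_web       # follow the logs of a single service
docker-compose exec django_web python manage.py shell   # run a command inside a service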

When you run docker-compose up, the following happens:

  • A network called bookme_default is created.
  • A container is created using django_web's configuration. It joins the network bookme_default under the name django_web.
  • A container is created using db's configuration. It joins the network bookme_default under the name db.
  • ...

This continues until all the containers are created and the services run together in sync.

That said, you can go to localhost:8000, localhost:9200, and localhost:5601 to see the web, Elasticsearch, and Kibana processes/services running.

Conclusion

If you made it to this point, congratulations! You have beaten all odds to know and understand Docker. We have been able to dockerize the application, taking it from its previous state to a new state.

Want to do something cool? Why don't you add an Nginx configuration to this setup to see how it will play out?

Thanks for reading and feel free to like this post.
