Codementor Events

Docker and You: Simulating Microservices

Published Apr 14, 2017Last updated Jun 17, 2017

Why Docker?

Operating System Differences

Here's a hypothetical situation:

  • Team member A is on Mac
  • Team member B is on Windows
  • Team member C is on Arch Linux with a custom-configured X11 (a mix of open source and proprietary nVidia drivers), a RAID filesystem, and only enough binaries installed to run emacs, curl, and Wine for their Windows games partition
  • The server runs in AWS with an Amazon Image

Team members A and C will probably not have filesystem issues, but team member B might. None of them uses yum, but the server does. Any additional differences would only add to the frustration.

How can we solve that? Docker. Now, let's look at how we can set up Docker.
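Before any of the examples below will work, Docker and Docker Compose need to be installed on each team member's machine. A quick sanity check from the terminal (the version numbers you see will differ):

```shell
# Verify the Docker daemon is installed and reachable
docker --version
docker info

# Verify Docker Compose is available
docker-compose --version

# Optional smoke test: pull and run a tiny throwaway container
docker run --rm hello-world
```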

Simple Setup

Here are all of the services the server runs:

  • Reverse Proxy
  • Static Content Server
  • Authentication Server
    • uses CouchDB for authentication storage
    • uses PostgreSQL for user storage
    • writes sessions to Redis
  • Events Server
    • allows servers to connect in order to
      • listen for events
      • write events
  • Cron Server
    • dispatches timed events
  • WebSocket Grid
    • retrieves sessions from Redis
    • uses the Events Server
  • CRUD Server
    • retrieves sessions from Redis
    • uses the Events Server
    • uses MySQL for structured data storage
    • uses MongoDB for blob storage

You can start all of these services with docker-compose in a single command:

docker-compose up

How do you write the docker-compose.yml file that drives it? It's really not much different from the examples below.
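As a taste of what's coming, a compose file is just a YAML document that names each service and the image it runs. A minimal sketch (the service and image choices here are illustrative):

```yaml
# docker-compose.yml — a minimal skeleton
version: "3"
services:
  # each key under "services" becomes a container,
  # reachable from the others via its service name
  web:
    image: nginx
    ports:
      - "80:80"
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=example
```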

Why not Docker?

You're probably wondering if there are any situations where you might not want to use Docker. In short, if you're running Docker containers on a single machine in the cloud, the container runtime is one more layer a client's request has to pass through before it finally reaches your content. Docker is an incredible tool, but it is not a tool for everything... yet...

Example 1: When Stack and Database are Handled Separately

# specify our version
version: '3'
# specify our services
services:
  # service name
  app:
    # from what image?
    image: php:5.6-apache
    # which ports should be made public
    ports:
      - "${PUBLIC_PORT}:80"
    # what do we want access to
    depends_on:
      # the db instance will be available via the domain name "db"
      - db
    # what environment variables should be set within the container
    environment:
      MY_DOMAIN_NAME: ${DOMAIN_NAME}
    # what folders from the host computer should replace folders in this instance
    volumes:
      -  ${WEB_ROOT}:/var/www/html
  db:
    image: "mysql:5"
    volumes:
      - ${DATA_ROOT}:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${ROOT_PASSWORD}

We all know what the classic LAMP stack looks like. It has been there in the greatest of times and the darkest of times. It's nothing new, but it's trustworthy. The above is a simple implementation of a LAMP stack; with it, we no longer have to install XAMPP or set up Apache by hand. You'll also notice the ${SOME_NAME} placeholders scattered around. These reference environment variables that are substituted into the file.
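docker-compose reads an .env file sitting next to the compose file, which is a convenient place to define those variables. A sketch with illustrative values (the variable names come from the compose file above; the values are assumptions):

```shell
# .env — read by docker-compose at startup for ${...} substitution
PUBLIC_PORT=8080
DOMAIN_NAME=example.local
WEB_ROOT=./www
DATA_ROOT=./mysql-data
ROOT_PASSWORD=change-me
```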

Example 2: SSL in Front of Multiple Static Content Servers

version: "2"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
      - "./volumes/certs:/etc/nginx/certs:ro"
      - "./volumes/vhost.d:/etc/nginx/vhost.d"
      - "./volumes/html:/usr/share/nginx/html"
  letsencrypt-nginx-proxy-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: letsencrypt-nginx-proxy-companion
    volumes_from:
      - nginx-proxy
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./volumes/certs:/etc/nginx/certs:rw"
      - "./volumes/vhost.d:/etc/nginx/vhost.d"
      - "./volumes/html:/usr/share/nginx/html"
  somewebsite:
    image: nginx
    volumes:
      # specify your public html here
      - ../somewebsite.com/public:/public
      # specify your conf here
      - ../somewebsite.com/conf:/etc/nginx/conf.d:ro
    environment:
      - VIRTUAL_HOST=somewebsite.com,www.somewebsite.com
      - LETSENCRYPT_HOST=somewebsite.com,www.somewebsite.com
      - LETSENCRYPT_EMAIL=you@somewebsite.com
  otherwebsite:
    image: nginx
    volumes:
      # specify your public html here
      - ../otherwebsite.com/public:/public
      # specify your conf here
      - ../otherwebsite.com/conf:/etc/nginx/conf.d:ro
    environment:
      - VIRTUAL_HOST=otherwebsite.com,www.otherwebsite.com
      - LETSENCRYPT_HOST=otherwebsite.com,www.otherwebsite.com
      - LETSENCRYPT_EMAIL=you@otherwebsite.com
  thirdwebsite:
    image: nginx
    volumes:
      # specify your public html here
      - ../thirdwebsite.com/public:/public
      # specify your conf here
      - ../thirdwebsite.com/conf:/etc/nginx/conf.d:ro
    environment:
      - VIRTUAL_HOST=thirdwebsite.com,www.thirdwebsite.com
      - LETSENCRYPT_HOST=thirdwebsite.com,www.thirdwebsite.com
      - LETSENCRYPT_EMAIL=you@thirdwebsite.com

This one is a bit trickier because nginx-proxy is doing some clever things: it watches every container that is created (via /var/run/docker.sock) and checks whether the container has a VIRTUAL_HOST environment variable. Each website container also mounts its own conf directory, which nginx reads when the image starts up; see the jwilder/nginx-proxy documentation if you'd like to know more. Each of these websites is only reachable when the Host header of a request routes it through the proxy server.
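For reference, the conf directory mounted into each website container holds ordinary nginx configuration. A minimal sketch of what ../somewebsite.com/conf/default.conf might contain (the file name and root path are assumptions based on the volume mounts above):

```nginx
# default.conf — lives in the directory mounted at /etc/nginx/conf.d
server {
    listen 80;
    server_name somewebsite.com www.somewebsite.com;

    # /public is where the compose file mounts the site's static files
    root /public;
    index index.html;
}
```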

Example 3: Selenium Testing Suite

version: "3"
services:
  web-driver-tests:
    image: node
    working_dir: /app
    command: npm test
    volumes:
      - "./:/app"
    depends_on:
      - chrome
      - firefox
  fileserver:
    image: kyma/docker-nginx
    command: nginx
    volumes:
      - "./public:/var/www"
    ports:
      - "8080:80"
  hub:
    image: selenium/hub
    ports:
      - "4444:4444"
  firefox:
    image: selenium/node-firefox
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    depends_on:
      - fileserver
      - hub
  chrome:
    image: selenium/node-chrome
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    depends_on:
      - fileserver
      - hub

Automating browser tests saves you time. You no longer have to open the browser, go through the login, navigate to your desired page, and test the functionality by hand; it can all be automated. 🎉

Example 4: The Microservices Setup from Earlier

version: "2"
services:
  reverseproxy:
    image: nginx
    volumes:
      # specify how you want to route the urls here
      - ./reverseproxy/conf:/etc/nginx/conf.d:ro
    depends_on:
      - staticcontent
      - authentication
      - websocket_node
      - crud
    ports:
      - 80:80
  staticcontent:
    image: nginx
    volumes:
      - ../public:/public
      - ./staticcontent/conf:/etc/nginx/conf.d:ro
  authentication:
    image: node
    working_dir: /app
    command: npm start
    volumes:
      - ../src/auth:/app
    environment:
      - POSTGRES_PASSWORD=${SOME_PASS_PS}
    depends_on:
      - authstore
      - userstore
      - sessionstore
  events:
    image: node
    working_dir: /app
    command: npm start
    volumes:
      - ../lib/events:/app
  cron:
    image: node
    working_dir: /app
    command: npm start
    volumes:
      - ../lib/cron:/app
    depends_on:
      - events
  websocket_node:
    image: node
    working_dir: /app
    command: npm start
    volumes:
      - ../src/live:/app
    depends_on:
      - sessionstore
      - userstore
      - events
  crud:
    image: node
    working_dir: /app
    command: npm start
    volumes:
      - ../src/crud:/app
    environment:
      - POSTGRES_PASSWORD=${SOME_PASS_PS}
      - MYSQL_ROOT_PASSWORD=${SOME_PASS_MS}
    depends_on:
      - sessionstore
      - userstore
      - events
      - structuredstore
      - blobstore
  authstore:
    image: couchdb
  userstore:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=${SOME_PASS_PS}
  sessionstore:
    image: redis
  structuredstore:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${SOME_PASS_MS}
  blobstore:
    image: mongo

Final Remarks

The individual websocket nodes will most likely not have direct access to one another. To handle this, all nodes can use the events server to proxy communication. This is how it works:

  • the events server notifies all websocket nodes of an event and provides a unique return event
  • one of the websocket nodes responds with a unique identifier, or a timeout occurs
  • the two nodes then communicate over that unique identifier

I hope I've highlighted some of the benefits of using Docker. If you want to learn more, the official Docker documentation is a good place to start.

Feel free to comment below or contact me directly on Codementor.

Discover and read more posts from Sam Tobia