
eShopOnContainers - Part 5 (ELK Stack)

Published Feb 19, 2023 · Last updated Feb 26, 2023

In this article, you'll find a concise guide to configuring the ELK stack (Elasticsearch, Logstash, and Kibana) within eShopOnContainers. The ELK stack is one of the most widely used log aggregation and analysis toolsets in the industry.

Configuring ELK on localhost

Let's start with an empty Docker Desktop (no running containers). The repo provides a zero-configuration environment for testing the integration via docker-compose. Run the commands below from the root source directory (src) of eShopOnContainers:

$ docker-compose -f docker-compose.elk.yml build
$ docker-compose -f docker-compose.elk.yml up

Note: you might get the error - unable to prepare context: path "elk/elasticsearch/" not found.

The reason is that elk/elasticsearch/ lives in the deploy folder of the repo, not in src. To fix this issue, copy the elk folder from deploy into src, as shown below, and then run the above commands again. You should see 3 containers start.
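From the repo root, the copy is a one-liner (a sketch assuming the standard repo layout; on Windows, use xcopy or PowerShell's Copy-Item instead):

$ cp -r deploy/elk src/elk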

I got a weird error - docker endpoint for "default" not found.

I was uncertain why the docker-compose command failed with that error. The solution, however, was to remove the .docker directory located at C:\Users\<your-username>\.docker and then restart Docker.
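On Windows, that cleanup can be done from PowerShell (a sketch; this deletes your local Docker CLI configuration, which Docker regenerates on restart):

PS> Remove-Item -Recurse -Force "$env:USERPROFILE\.docker"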

[Screenshot: the three ELK containers running in Docker Desktop]

Confirm Kibana is running at http://localhost:5601

[Screenshot: the Kibana home page at http://localhost:5601]

We are all set. Let's deploy eShopOnContainers to Docker Desktop.

$ docker-compose -f docker-compose.yml -f docker-compose.override.yml build
$ docker-compose -f docker-compose.yml -f docker-compose.override.yml up

Confirm the WebMVC app is running at http://localhost:5100/

[Screenshot: the WebMVC app at http://localhost:5100/]

Configuring the Logstash index pattern in Kibana

Once you have started and configured your application, you only need to configure the Logstash index pattern in Kibana. The index pattern name is eshops-*.

[Screenshot: creating the eshops-* index pattern in Kibana]

With the index pattern configured, you can open the Discover section and start viewing how the tool is collecting the logging information.

[Screenshot: log entries in Kibana's Discover section]

Explanation

Let's look at what's happening behind the scenes: specifically, how the logs are transmitted to Elasticsearch and then displayed in Kibana.

[Diagram: log flow from the applications through Logstash into Elasticsearch and Kibana]

All the applications have Serilog configured to send logs to Logstash on port 8080, as sketched below.
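A minimal sketch of that setup, assuming the Serilog.Sinks.Http package (the exact arguments and enrichers vary by package version and service, so treat this as illustrative rather than the repo's exact code):

using Serilog;

// Send structured logs to Logstash's HTTP input.
// "http://logstash:8080" matches the docker-compose service name and port.
Log.Logger = new LoggerConfiguration()
    .Enrich.WithProperty("ApplicationContext", "WebMVC") // example property for filtering in Kibana
    .WriteTo.Console()
    .WriteTo.Http(requestUri: "http://logstash:8080", queueLimitBytes: null)
    .CreateLogger();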

[Screenshot: the Serilog configuration in the application]

Next, if you look at the Logstash config, you will notice that the processed log is sent to Elasticsearch on port 9200, which enables the storage, searching, and analysis of large volumes of data. Also, note that we are sending the logs to an eshops-* index in Elasticsearch. This is what allows Kibana to create the matching index pattern.
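The relevant part of the pipeline is the output block, the same one shown in full later in this article:

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "eshops-%{+xxxx.ww}"
  }
}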

[Screenshot: the Logstash pipeline configuration]

Kibana is the data visualization tool that completes the ELK stack. It is used for visualizing Elasticsearch documents and gives developers quick insight into them. It's configured as below -
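A minimal kibana.yml along these lines (an illustrative sketch; the repo's elk/kibana configuration may differ, and older Kibana versions use elasticsearch.url instead of elasticsearch.hosts):

server.name: kibana
server.host: "0.0.0.0"                               # listen on all interfaces inside the container
elasticsearch.hosts: ["http://elasticsearch:9200"]   # the docker-compose service name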

[Screenshot: the Kibana configuration]

Please note that in this example, the application calls Logstash directly to write log data. This design is not recommended, because if Logstash is down, logs may be lost. To address this issue, we can use Filebeat, installed as a sidecar container. Filebeat monitors the specified log files or locations, collects log events, and forwards them to either Elasticsearch or Logstash for indexing, as sketched below.
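A minimal filebeat.yml sketch (not part of the repo; the log path is hypothetical, and the Logstash side would need a Beats input listening on port 5044):

filebeat.inputs:
  - type: log
    paths:
      - /var/log/eshop/*.log    # hypothetical path where the app writes file logs
output.logstash:
  hosts: ["logstash:5044"]      # standard Beats port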

Configuring ELK on Azure VM

By utilizing a preconfigured virtual machine containing Logstash, Elasticsearch, and Kibana, you can point the application at it through the LogstashUrl configuration parameter. To set the VM up, go to Microsoft Azure, search for a certified ELK virtual machine, configure it, and then review and create it.

[Screenshot: creating the certified ELK virtual machine in the Azure portal]
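For example, LogstashUrl could be supplied as an environment variable in docker-compose.override.yml (a sketch; the exact key name depends on how the application reads its Serilog settings):

services:
  webmvc:
    environment:
      - Serilog__LogstashUrl=http://<your-vm-public-ip>:8080    # "__" maps to ":" in .NET configuration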

Configuring the Bitnami environment

This virtual machine comes with most of the configuration plumbing already done. The only thing you have to change is the Logstash configuration inside the machine. This configuration is in the file /opt/bitnami/logstash/conf/logstash.conf. You must edit the file and overwrite it with this configuration:

input {
  http {
    #default host 0.0.0.0:8080
    codec => json
  }
}

## Add your filters / logstash plugins configuration here
filter {
  split {
    field => "events"
    target => "e"
    remove_field => "events"
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "eshops-%{+xxxx.ww}"
  }
}

To do this, you can connect to the VM via SSH and edit the file using, for example, the vi editor. Once the file has been edited, check that Inbound Port Rules have been created for the Logstash service. You can do this from the Networking menu of your ELK virtual machine resource in Azure.
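For example (a sketch; Bitnami images typically manage services through a ctlscript.sh control script, so verify the restart command on your VM):

$ ssh bitnami@<your-vm-public-ip>
$ sudo vi /opt/bitnami/logstash/conf/logstash.conf
$ sudo /opt/bitnami/ctlscript.sh restart logstash    # reload the edited pipeline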

[Screenshot: the Inbound Port Rules in the VM's Networking menu]

The last step is to connect to your VM in a browser and verify that the Bitnami splash page is displayed. To log in, you can obtain the password from your virtual machine's boot diagnostics in Azure; you'll see a message there containing your password.

After obtaining your username and password, you can access Kibana, create the eshops-* index pattern (documented earlier in this article), and begin to discover.
