
How to Instrument and Monitor Your Spring Boot 2 Application in Kubernetes Using Wavefront

Published Mar 31, 2020 · Last updated Apr 01, 2020

In this post, I’ll walk you through instrumenting and monitoring a simple standalone application developed with Spring Boot 2.0 and running as a container in Kubernetes. Spring Boot is an open-source, Java-based framework used to create microservices. Kubernetes is a popular container orchestration system.
Assume that I want to observe my application’s performance at all levels of the stack. Here’s the scenario:

• I’m monitoring my Kubernetes environment.
• I’m collecting metrics coming from the application itself.
• I add another dimension by instrumenting the application to send distributed traces.

Because all of those metrics are visible in Wavefront, I now have insights into a rich set of data coming from Kubernetes, the application, and the application’s traces.

The application I created for this blog was developed as a demo. It’s a Java application that has REST API endpoints for receiving requests. The responses are generated in JSON format. I named the application ‘loadgen’ because it generates simulated CPU and memory load to use a system’s computational resources. Find the source code of loadgen in my GitHub repo.

While developing this application, I decided to use:

• Spring Boot 2.0 as the base framework, because it would help me build the application quickly
• The Micrometer instrumentation library
• The Wavefront reporter, which fits nicely with the Micrometer library

I used Maven, and I added the following dependency to my pom.xml file to enable the use of Micrometer:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-wavefront</artifactId>
    <version>${micrometer.version}</version>
</dependency>
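
The ${micrometer.version} placeholder assumes a corresponding Maven property. If your pom.xml doesn’t already define one, you can add something like this (the version shown is only an example, not the version loadgen pins):

<properties>
    <micrometer.version>1.3.5</micrometer.version>
</properties>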

Step 1: Sending Application Metrics

Instrumenting my loadgen application was quite simple. I added the following code to my application class (which uses WavefrontMeterRegistry as the registry).

import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.binder.jvm.ClassLoaderMetrics;
import io.micrometer.core.instrument.binder.jvm.JvmGcMetrics;
import io.micrometer.core.instrument.binder.jvm.JvmMemoryMetrics;
import io.micrometer.core.instrument.binder.jvm.JvmThreadMetrics;
import io.micrometer.core.instrument.binder.system.FileDescriptorMetrics;
import io.micrometer.core.instrument.binder.system.ProcessorMetrics;
import io.micrometer.core.instrument.binder.system.UptimeMetrics;
import io.micrometer.wavefront.WavefrontMeterRegistry;

// create a new registry that publishes to Wavefront
registry = new WavefrontMeterRegistry(config, Clock.SYSTEM);

// bind the default JVM and system metrics to the registry
new ClassLoaderMetrics().bindTo(registry);
new JvmMemoryMetrics().bindTo(registry);
new JvmGcMetrics().bindTo(registry);
new ProcessorMetrics().bindTo(registry);
new JvmThreadMetrics().bindTo(registry);
new FileDescriptorMetrics().bindTo(registry);
new UptimeMetrics().bindTo(registry);
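
The same registry can also record custom, application-specific metrics alongside the JVM binders. A minimal sketch, with a hypothetical metric name:

import io.micrometer.core.instrument.Counter;

// count each simulated load request (hypothetical metric name)
Counter requests = registry.counter("loadgen.requests.total");
requests.increment();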


Proxy Settings

The application.properties file specifies the following proxy config settings:

wf.prefix = kubernetes.loadgen
wf.proxy.enabled = true
wf.proxy.host = wavefront-proxy
wf.proxy.port = 2878
wf.duration = 10

The metrics that this Spring Boot application generates include JVM-related metrics such as class loading activity, garbage collection rates, CPU utilization, thread counts, file descriptor usage, system uptime, etc., all under the prefix ‘kubernetes.loadgen’, specified as wf.prefix above.

I chose to add kubernetes as the prefix to emphasize that this application runs there. When my Spring Boot application starts, it collects these metrics and sends them to the Wavefront proxy every 10 seconds, as defined by wf.duration above.
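
Here is a minimal sketch of how the config object passed to WavefrontMeterRegistry earlier could be assembled from these properties. The loadgen source wires this up in its own way, so treat it as an illustration:

import java.time.Duration;
import io.micrometer.wavefront.WavefrontConfig;

WavefrontConfig config = new WavefrontConfig() {
    @Override
    public String uri() {
        // "proxy://host:port" publishes through the Wavefront proxy
        return "proxy://wavefront-proxy:2878"; // wf.proxy.host, wf.proxy.port
    }

    @Override
    public String globalPrefix() {
        return "kubernetes.loadgen";           // wf.prefix
    }

    @Override
    public Duration step() {
        return Duration.ofSeconds(10);         // wf.duration
    }

    @Override
    public String get(String key) {
        return null;                           // accept defaults for everything else
    }
};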
[Image: JVM metrics from Micrometer charted in Wavefront]

Step 2: Easy Distributed Tracing Instrumentation

I wanted to make sure that I could also send distributed traces from the application to Wavefront. OpenTracing is an open-source API that generates tracing data in a platform-neutral way.

Understanding how your code, components, calls, and services interact with each other is critical to understanding application performance. I added the following code to create a Spring Boot component responsible for maintaining a tracer instance, which can be used to generate span data for the application’s calls and keep track of its performance.

That required adding the following dependencies to my pom.xml file:

<dependency>
    <groupId>io.opentracing</groupId>
    <artifactId>opentracing-api</artifactId>
    <version>0.32.0</version>
</dependency>
<dependency>
    <groupId>io.opentracing</groupId>
    <artifactId>opentracing-util</artifactId>
    <version>0.32.0</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>19.0</version>
</dependency>
<dependency>
    <groupId>com.wavefront</groupId>
    <artifactId>wavefront-opentracing-sdk-java</artifactId>
    <version>1.7</version>
</dependency>

Creating Span Data

I created a new Java class called TraceUtil. My application can access the initialized tracer instance from that class and can start creating span data for the traces:

package com.vmware.wavefront.loadgen;

import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;

import com.wavefront.opentracing.WavefrontTracer;
import com.wavefront.opentracing.reporting.CompositeReporter;
import com.wavefront.opentracing.reporting.ConsoleReporter;
import com.wavefront.opentracing.reporting.Reporter;
import com.wavefront.opentracing.reporting.WavefrontSpanReporter;
import com.wavefront.sdk.common.WavefrontSender;
import com.wavefront.sdk.common.application.ApplicationTags;
import com.wavefront.sdk.direct.ingestion.WavefrontDirectIngestionClient;
import com.wavefront.sdk.entities.tracing.sampling.ConstantSampler;
import com.wavefront.sdk.proxy.WavefrontProxyClient;

import java.net.InetAddress;
import java.net.UnknownHostException;

import io.opentracing.Tracer;
import io.opentracing.util.GlobalTracer;

@Component("traceutil")
public class TraceUtil {

  /* the tracer instance shared by the application */
  public Tracer tracer;

  public TraceUtil() {

  }

  @Value("${wf.proxy.enabled}")
  public boolean proxyEnabled;

  @Value("${wf.proxy.host}")
  public String proxyhost;

  @Value("${wf.proxy.port}")
  private String proxyport;

  @Value("${wf.trace.enabled}")
  public boolean traceEnabled;

  @Value("${wf.proxy.histogram.port}")
  private String histogramport;

  @Value("${wf.proxy.trace.port}")
  private String traceport;

  @Value("${wf.direct.enabled}")
  public boolean directEnabled;

  @Value("${wf.direct.server}")
  public String server;

  @Value("${wf.direct.token}")
  public String token;

  @Value("${wf.application}")
  public String application;

  @Value("${wf.service}")
  public String service;

  protected final static Logger logger = Logger.getLogger(TraceUtil.class);

  @PostConstruct
  public void init() {
    if (traceEnabled && application != null && service != null) {

      ApplicationTags appTags = new ApplicationTags.Builder(application, service).build();
      String hostname = "unknown";
      try {
        hostname = InetAddress.getLocalHost().getHostName();
      } catch (UnknownHostException e) {
        e.printStackTrace();
      }

      WavefrontSender wavefrontSender = null;
      if (proxyEnabled) {
        wavefrontSender = new WavefrontProxyClient.Builder(proxyhost).
            metricsPort(Integer.parseInt(proxyport)).
            distributionPort(Integer.parseInt(histogramport)).tracingPort(Integer.parseInt(traceport)).build();
      } else if (directEnabled) {
        wavefrontSender = new WavefrontDirectIngestionClient.Builder(server, token).build();
      }
      Reporter wfspanreporter = new WavefrontSpanReporter.Builder().withSource(hostname).build(wavefrontSender);
      Reporter consoleReporter = new ConsoleReporter(hostname);
      Reporter composite = new CompositeReporter(wfspanreporter, consoleReporter);
      logger.info("created new tracer with " + application + " : " + service);
      tracer = new WavefrontTracer.Builder(composite, appTags).withSampler(new ConstantSampler(true)).build();
    }
    else {
      logger.info("created new Global Tracer...");
      tracer = GlobalTracer.get();
    }
  }

  public Tracer getTracer() {
    return tracer;
  }
}
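
With TraceUtil in place, any request handler can obtain the tracer and wrap its work in a span. The endpoint, span name, and handler below are hypothetical: a sketch of how loadgen’s controllers might use it, not code from the repo:

import io.opentracing.Scope;
import io.opentracing.Span;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;

@Autowired
private TraceUtil traceutil;

// hypothetical endpoint that creates one span per incoming request
@GetMapping("/load")
public String generateLoad() {
  Span span = traceutil.getTracer().buildSpan("generate-load").start();
  try (Scope scope = traceutil.getTracer().activateSpan(span)) {
    // ... perform the simulated CPU/memory work here ...
    return "{\"status\": \"ok\"}";
  } finally {
    span.finish();
  }
}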

Step 3: Creating a Docker Image

To make the application run inside a container, I created a Docker image using the following Dockerfile.

# base image
FROM openjdk:8-jdk-alpine

LABEL maintainer="hgy@gmail.com"
VOLUME /tmp
EXPOSE 8080

# install curl in a single layer
RUN apk update && apk add curl

ARG JAR_FILE=target/loadgen-0.0.1-SNAPSHOT.jar
ADD ${JAR_FILE} loadgen-0.0.1.jar

# run the jar file
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/loadgen-0.0.1.jar"]
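
To build the image, I package the jar with Maven first and then run docker build; the tag matches the image referenced in the deployment yaml later (a minimal sketch, run from the project root):

mvn clean package
docker build -t howardyoo/loadgen:0.0.4 .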

[Image: the loadgen Docker image]

Step 4: Setting Up the Kubernetes Cluster

There are many ways to run Kubernetes, but I decided to use the simplest, which is KIND (Kubernetes in Docker). It uses my local Docker engine to run a single-node Kubernetes cluster, perfect for my testing purposes.

kind create cluster --name kind-wf
export KUBECONFIG="$(kind get kubeconfig-path --name="kind-wf")"

With my kind-wf Kubernetes cluster up and running, and with proper KUBECONFIG environment setup, I can now start using kubectl to set up the Wavefront integration for this cluster.

To configure the Wavefront integration for Kubernetes, just follow the instructions in the Wavefront documentation. I downloaded the deployment yaml files for both the Wavefront proxy and the Wavefront collector, which collect the Kubernetes system metrics and send them to Wavefront.
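
Applying them looks roughly like this. The file names are placeholders; use whatever the integration page generates:

kubectl apply -f wavefront-proxy.yaml
kubectl apply -f wavefront-collector.yaml

# both the proxy and the collector pods should reach the Running state
kubectl get pods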

I can even use a Helm chart to do the setup with a single Helm command. Find a Helm chart related to the Wavefront Kubernetes integration here.
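
For reference, a sketch of the Helm route, assuming Helm 3 and the values the Wavefront chart documents (your Wavefront URL, an API token, and a cluster name):

helm repo add wavefront https://wavefronthq.github.io/helm/
helm install wavefront wavefront/wavefront \
  --set wavefront.url=https://YOUR_CLUSTER.wavefront.com \
  --set wavefront.token=YOUR_API_TOKEN \
  --set clusterName=kind-wf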

Step 5: Deploying Loadgen Pod Into Kubernetes Cluster

Now we can deploy our loadgen in the kind-wf cluster. I’ve created the following simple deployment yaml file (loadgen-svc.yaml) to do it:

---
apiVersion: v1
kind: Namespace
metadata:
  name: loadgen
---
apiVersion: v1
kind: Service
metadata:
  name: wavefront-proxy
  namespace: loadgen
spec:
  type: ExternalName
  externalName: wavefront-proxy.default.svc.cluster.local
  ports:
  - name: wavefront
    port: 2878
    protocol: TCP
  - name: traces
    port: 30000
    protocol: TCP
  - name: histogram
    port: 40000
    protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loadgen
  namespace: loadgen
  labels:
    app: loadgen
    user: howard
    version: 0.0.5
spec:
  selector:
    matchLabels:
      app: loadgen
  replicas: 2
  template:
    metadata:
      labels:
        app: loadgen
    spec:
      containers:
      - name: loadgen
        image: howardyoo/loadgen:0.0.4
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        resources:
          limits:
            memory: 512Mi
          requests:
            memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  namespace: loadgen
  name: loadgen-svc
  labels:
    app: loadgen-svc
    user: howard
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  type: LoadBalancer
  selector:
    app: loadgen

This yaml file creates a deployment that runs two replicas of the loadgen container, load-balanced as a service on port 8080 and accessible via that port. In the snippet above, loadgen references the Wavefront proxy service by its external name, defining it as ‘wavefront-proxy’ with the three appropriate ports. The Wavefront proxy listens:

• for regular metrics on its default port 2878
• for histograms on port 40000
• for traces on port 30000

We have to define these ports so that the loadgen instrumented code can send all of its metrics, histograms, and spans to the Wavefront proxy, which will then forward the data to the Wavefront server running in the cloud.
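
Because KIND nodes can’t pull images that exist only in the local Docker daemon, I load the image into the cluster first and then apply the file:

kind load docker-image howardyoo/loadgen:0.0.4 --name kind-wf
kubectl apply -f loadgen-svc.yaml

# expect two loadgen replicas plus the loadgen-svc service
kubectl get pods,svc -n loadgen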

Step 6: Running Loadgen and Monitoring It Using Wavefront

We should now have the following types of metrics coming into Wavefront:

• Kubernetes system metrics – how your cluster, nodes, pods, and containers are doing – delivered by the wavefront-collector
• Application metrics for loadgen, centered around JVM performance
• Distributed traces from loadgen, for troubleshooting it while it runs

If you run this on your laptop, you will probably hear the CPU fan spin up, because loadgen pushes CPU usage toward its peak. Let it run for a while, then check how our kind-wf cluster is doing using the Wavefront Kubernetes dashboard, per the figure below:
[Image: Wavefront Kubernetes namespace dashboard]
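
On KIND, a LoadBalancer service never gets an external IP, so the simplest way to send requests to loadgen from your laptop is a port-forward. The request path here is hypothetical; use whichever endpoint the loadgen source actually exposes:

kubectl port-forward -n loadgen svc/loadgen-svc 8080:8080 &
curl http://localhost:8080/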

Full-Stack Observability

By combining application metrics, Kubernetes metrics, and distributed tracing data in Wavefront, developers get complete observability into what’s happening from the top to the bottom of their applications.
• Developers can understand how their applications are performing from the number of requests, the errors, and the durations of their services, and they can see how each component executes relative to the others.
• Developers can monitor all of the custom, application-specific metrics they instrumented, including how many instances are running, how many resources are being utilized, and in which containers they are running.
• With an application running on top of an orchestration platform like Kubernetes, developers can deploy and run their applications anywhere with ease. They can quickly locate where their application is running and how many instances are up, and they can observe everything that happens on the platform while tracking the performance of their applications. All of that gives developers a complete view of their stacks.

Wavefront is an Enterprise Observability as a Service platform, which means Wavefront will always be there to receive your telemetry data, regardless of where you deploy and run your application.
