Distributed Tracing with Jaeger on Kubernetes helps diagnose the issues that arise when microservices develop performance problems.
As part of our Server Management Services, we assist our customers with several Kubernetes queries.
Today, let’s deploy a very small distributed application to a Kubernetes cluster and simulate a performance lag using a sleep function in our code.
Distributed Tracing with Jaeger on Kubernetes
Kubernetes and its services can create very efficient and scalable systems. However, problems arise when one of them develops performance problems.
Typically, the problem can be with one of the backend services. In such a case, to discover the problem, we implement distributed tracing.
This system will let us trace the lifecycle of each customer-generated event and see how each service processes that event.
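The core idea is that every customer-generated event carries a shared trace ID, and each service records a timed span against that ID. Here is a minimal sketch of that idea in plain Python (the Span class and traced_call helper are illustrative stand-ins, not part of Jaeger's API):

```python
import time
import uuid
from dataclasses import dataclass

@dataclass
class Span:
    """One unit of work in one service, tied to a shared trace ID."""
    trace_id: str
    service: str
    operation: str
    duration: float = 0.0

def traced_call(trace_id, service, operation, func):
    """Run func and record how long this service spent on it."""
    span = Span(trace_id, service, operation)
    start = time.monotonic()
    result = func()
    span.duration = time.monotonic() - start
    return result, span

# One customer event flows through two services under the same trace ID,
# so the tracing backend can reassemble the full request lifecycle.
trace_id = uuid.uuid4().hex
_, frontend_span = traced_call(trace_id, "frontend", "GET /", lambda: None)
_, backend_span = traced_call(trace_id, "backend", "GET /api/counter", lambda: None)
assert frontend_span.trace_id == backend_span.trace_id
```

In a real deployment, the trace ID travels between services in HTTP headers and the spans are reported to the Jaeger collector rather than kept in memory.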
In order to begin, our Support Techs suggest having the following:
- A Kubernetes 1.15+ cluster with connection configuration set as the kubectl default.
- Docker.
- An account at Docker Hub to store the Docker images.
- The kubectl command-line tool installed on the local machine and configured to connect to the cluster.
- The curl command-line utility on the local machine.
Now, let us see how to deploy Distributed Tracing with Jaeger on Kubernetes.
Step 1 – Build the Sample Application
Initially, we will build and deploy bob-jaeger, a sample application. It uses two services: one for the frontend and one for the backend.
In the following steps, we will deploy the app to Kubernetes, install Jaeger, and then use it to trace our service issue.
First, we create a project directory structure and navigate inside:
$ mkdir -p ./bob-jaeger/frontend ./bob-jaeger/backend && cd ./bob-jaeger
Now we have a root directory, bob-jaeger, and two subdirectories:
.
├── backend
└── frontend
Moving ahead, let us see how our Support Techs build the frontend application.
Build the Frontend Application
In a text editor, we create and open a new file:
$ nano ./frontend/frontend.py
In order to import Flask, build our counter functions, and define one route for HTTP requests we add the following code:
import os
import requests
from flask import Flask

app = Flask(__name__)

def get_counter(counter_endpoint):
    counter_response = requests.get(counter_endpoint)
    return counter_response.text

def increase_counter(counter_endpoint):
    counter_response = requests.post(counter_endpoint)
    return counter_response.text

@app.route('/')
def hello_world():
    counter_service = os.environ.get('COUNTER_ENDPOINT', default="https://localhost:5000")
    counter_endpoint = f'{counter_service}/api/counter'
    counter = get_counter(counter_endpoint)
    increase_counter(counter_endpoint)
    return f"""Hello, World! You're visitor number {counter} in here!\n\n"""
Here, the os module communicates with the operating system. The requests module sends HTTP requests. Finally, Flask is a microframework that hosts the app.
Then we define the get_counter() and increase_counter() functions. Next, we define the route /, which calls another function, hello_world().
This function will retrieve a URL and a port for our backend pod, assign it to a variable, and then pass that variable to our first two functions, get_counter() and increase_counter(), which will send the GET and POST requests to the backend.
The backend will then pause for a random period of time before incrementing the current counter number and then return that number.
Finally, hello_world() will take this value and print a “Hello World!” string to the console that includes our new visitor count.
Save and close frontend.py.
Build a Dockerfile for the Frontend Application
First, we create and open a new Dockerfile in ./frontend:
$ nano ./frontend/Dockerfile
Then we add the following:
FROM alpine:3.8
RUN apk add --no-cache py3-pip python3 && \
    pip3 install flask requests
COPY . /usr/src/frontend
ENV FLASK_APP frontend.py
WORKDIR /usr/src/frontend
CMD flask run --host=0.0.0.0 --port=8000
Here, we instruct the image to build from the base Alpine Linux image.
We then install Python3, pip, and several additional dependencies.
Next, we copy the application source code, set an environment variable pointing to the main application code, set the working directory, and write a command to run Flask whenever we create a container from the image.
Eventually, we save and close the file.
Build Docker image for the frontend application
Now we will build a docker image and push it to a repository in Docker Hub.
First, we make sure we are signed in to Docker Hub:
$ docker login --username=your_username --password=your_password
Then we build the image:
$ docker build -t your_username/do-visit-counter-frontend:v1 ./frontend
Now we push the image to Docker Hub:
$ docker push your_username/do-visit-counter-frontend:v1
Build the Backend Application
Initially, we create and open the file, backend.py in ./backend:
$ nano ./backend/backend.py
Then we add the following content, defining two functions and another route:
from random import randint
from time import sleep
from flask import request
from flask import Flask

app = Flask(__name__)

counter_value = 1

def get_counter():
    return str(counter_value)

def increase_counter():
    global counter_value
    int(counter_value)
    sleep(randint(1, 10))
    counter_value += 1
    return str(counter_value)

@app.route('/api/counter', methods=['GET', 'POST'])
def counter():
    if request.method == 'GET':
        return get_counter()
    elif request.method == 'POST':
        return increase_counter()
Here we import several modules. We then set our counter value to 1 and define two functions.
The first, get_counter, returns the current counter value. Whereas, increase_counter increments our counter value by 1 and uses the sleep module to delay the function’s completion by a random amount of time.
The backend also has a route that accepts two methods: POST and GET.
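Stripped of the Flask routing, the backend's counter logic boils down to the following sketch (the sleep is scaled down from 1-10 seconds so it runs quickly here):

```python
from random import randint
from time import sleep

counter_value = 1  # module-level state, as in backend.py

def get_counter():
    return str(counter_value)

def increase_counter():
    global counter_value
    sleep(randint(1, 10) / 100)  # scaled down from the 1-10 second lag in backend.py
    counter_value += 1
    return str(counter_value)

print(get_counter())       # prints 1
print(increase_counter())  # prints 2, but only after the artificial delay
```

That random sleep() inside increase_counter() is the deliberate performance lag we will later hunt down with Jaeger.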
Eventually, we save and close the file.
Build a Dockerfile for the Backend Application
We create and open a second Dockerfile in ./backend:
$ nano ./backend/Dockerfile
Then we add the following content:
FROM alpine:3.8
RUN apk add --no-cache py3-pip python3 && \
    pip3 install flask
COPY . /usr/src/backend
ENV FLASK_APP backend.py
WORKDIR /usr/src/backend
CMD flask run --host=0.0.0.0 --port=5000
Save and close the file.
Now we build the image:
$ docker build -t your_username/do-visit-counter-backend:v1 ./backend
Eventually, we push it to Docker Hub:
$ docker push your_username/do-visit-counter-backend:v1
Step 2 – Deploy and Test the Application
Moving ahead, we need to deploy to Kubernetes and test the basic application. Then, we can add Jaeger.
Let us start with deployment and testing.
At this point, our directory tree looks like this:
.
├── backend
│   ├── Dockerfile
│   └── backend.py
└── frontend
    ├── Dockerfile
    └── frontend.py
To deploy this application to our cluster, we need two Kubernetes manifests; one for each half of the application.
Hence we create and open a new manifest file in ./frontend:
$ nano ./frontend/deploy_frontend.yaml
Then we add the following content to specify how Kubernetes builds our Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: do-visit-counter-frontend
  labels:
    name: do-visit-counter-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: do-visit-counter-frontend
  template:
    metadata:
      labels:
        app: do-visit-counter-frontend
    spec:
      containers:
        - name: do-visit-counter-frontend
          image: your_dockerhub_username/do-visit-counter-frontend:v1
          imagePullPolicy: Always
          env:
            - name: COUNTER_ENDPOINT
              value: http://do-visit-counter-backend.default.svc.cluster.local:5000
          ports:
            - name: frontend-port
              containerPort: 8000
              protocol: TCP
Eventually, save and close the file.
Then we proceed to create the manifest for our backend application:
$ nano ./backend/deploy_backend.yaml
Add the following content. Make sure to replace "your_dockerhub_username" with your Docker Hub username:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: do-visit-counter-backend
  labels:
    name: do-visit-counter-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: do-visit-counter-backend
  template:
    metadata:
      labels:
        app: do-visit-counter-backend
    spec:
      containers:
        - name: do-visit-counter-backend
          image: your_dockerhub_username/do-visit-counter-backend:v1
          imagePullPolicy: Always
          ports:
            - name: backend-port
              containerPort: 5000
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: do-visit-counter-backend
spec:
  selector:
    app: do-visit-counter-backend
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
Then, save and close the file.
Now we will deploy our counter to the cluster using kubectl. Start with the frontend:
$ kubectl apply -f ./frontend/deploy_frontend.yaml
And then deploy the backend:
$ kubectl apply -f ./backend/deploy_backend.yaml
In order to verify that everything is working, call kubectl get pods:
$ kubectl get pods
Our output will be like this:
NAME                                         READY   STATUS    RESTARTS   AGE
do-visit-counter-backend-79f6964-prqpb       1/1     Running   0          3m
do-visit-counter-frontend-6985bdc8fd-92clz   1/1     Running   0          3m
We need all the pods in the READY state. If they are not, wait and rerun the previous command.
Finally, to use our application, we forward ports from the cluster and communicate with the frontend using the curl command.
In addition, open a second terminal window because forwarding ports will block one window.
To forward the port, use kubectl:
$ kubectl port-forward $(kubectl get pods -l=app="do-visit-counter-frontend" -o name) 8000:8000
Then, in the second terminal window, we send three requests to the frontend application:
for i in 1 2 3; do curl localhost:8000; done
Each curl call will increment the visit number. We will have an output like this:
Hello, World! You're visitor number 1 in here!
Hello, World! You're visitor number 2 in here!
Hello, World! You're visitor number 3 in here!
Step 3 – Deploy Jaeger
Collecting and visualizing traces is Jaeger's specialty. We deploy Jaeger to the cluster so it can find our performance lags.
First, we create the Custom Resource Definition required by the Jaeger Operator. We use the recommended templates available on Jaeger’s official documentation:
$ kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml
Then, we create a Service Account, a Role, and Role Binding for Role-Based Access Control:
$ kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/service_account.yaml
$ kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role.yaml
$ kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role_binding.yaml
Finally, we deploy the Jaeger Operator:
$ kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/operator.yaml
Next, we need to create a resource describing the Jaeger instance we want the Operator to manage. To do so, we follow Jaeger's official documentation.
Use a heredoc to create this resource from the command line:
$ kubectl apply -f - <<EOF
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simplest
EOF
Press ENTER to create the resource.
Then check the deployments again:
$ kubectl get pods
We will see an output with the Jaeger operator and the simplest deployment:
NAME                                         READY   STATUS    RESTARTS   AGE
do-visit-counter-backend-79f6964-prqpb       1/1     Running   0          3m
do-visit-counter-frontend-6985bdc8fd-92clz   1/1     Running   0          3m
jaeger-operator-547567dddb-rxsd2             1/1     Running   0          73s
simplest-759cb7d586-q6x28                    1/1     Running   0          42s
To validate Jaeger is working correctly, we forward its port and see if we can access the UI:
$ kubectl port-forward $(kubectl get pods -l=app="jaeger" -o name) 16686:16686
Then open a browser and navigate to http://localhost:16686. The Jaeger UI will load.
Step 4 – Add Instrumentation
Although Jaeger automates many tasks, we need to add instrumentation manually to the application. Fortunately, we have the Flask-OpenTracing module to handle that task.
Generally, OpenTracing is one of the standards of distributed tracing.
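What FlaskTracing does, in essence, is wrap each request handler in a span that times its execution and reports it. The effect can be approximated with a plain decorator (this is a conceptual sketch, not the Flask-OpenTracing implementation; recorded_spans stands in for the Jaeger reporter):

```python
import functools
import time

recorded_spans = []  # stand-in for the Jaeger reporter backend

def traced(operation):
    """Record how long the wrapped handler takes, like tracing middleware would."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return func(*args, **kwargs)
            finally:
                # Report the span even if the handler raises.
                recorded_spans.append((operation, time.monotonic() - start))
        return wrapper
    return decorator

@traced("hello_world")
def hello_world():
    time.sleep(0.01)  # simulate some work in the handler
    return "Hello, World!"

hello_world()
op, duration = recorded_spans[0]
assert op == "hello_world" and duration > 0
```

FlaskTracing does this automatically for every route when we pass trace_all_requests=True (the second argument in our code), so we do not have to decorate each handler ourselves.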
Moving ahead, let us add Flask-OpenTracing to our frontend code.
To do so, reopen ./frontend/frontend.py:
$ nano ./frontend/frontend.py
Then we add the following code, which will embed OpenTracing:
import os
import requests
from flask import Flask
from jaeger_client import Config
from flask_opentracing import FlaskTracing

app = Flask(__name__)
config = Config(
    config={
        'sampler': {'type': 'const', 'param': 1},
        'logging': True,
        'reporter_batch_size': 1,
    },
    service_name="service",
)
jaeger_tracer = config.initialize_tracer()
tracing = FlaskTracing(jaeger_tracer, True, app)

def get_counter(counter_endpoint):
    counter_response = requests.get(counter_endpoint)
    return counter_response.text

def increase_counter(counter_endpoint):
    counter_response = requests.post(counter_endpoint)
    return counter_response.text

@app.route('/')
def hello_world():
    counter_service = os.environ.get('COUNTER_ENDPOINT', default="https://localhost:5000")
    counter_endpoint = f'{counter_service}/api/counter'
    counter = get_counter(counter_endpoint)
    increase_counter(counter_endpoint)
    return f"""Hello, World! You're visitor number {counter} in here!\n\n"""
Eventually, we save and close the file.
Then we open the backend application code:
$ nano ./backend/backend.py
Here, add the below code:
from random import randint
from time import sleep
from flask import Flask
from flask import request
from jaeger_client import Config
from flask_opentracing import FlaskTracing

app = Flask(__name__)
config = Config(
    config={
        'sampler': {'type': 'const', 'param': 1},
        'logging': True,
        'reporter_batch_size': 1,
    },
    service_name="service",
)
jaeger_tracer = config.initialize_tracer()
tracing = FlaskTracing(jaeger_tracer, True, app)

counter_value = 1

def get_counter():
    return str(counter_value)

def increase_counter():
    global counter_value
    int(counter_value)
    sleep(randint(1, 10))
    counter_value += 1
    return str(counter_value)

@app.route('/api/counter', methods=['GET', 'POST'])
def counter():
    if request.method == 'GET':
        return get_counter()
    elif request.method == 'POST':
        return increase_counter()
Save and close the file.
In addition, we have to modify our Dockerfiles for both services.
Open the Dockerfile for the frontend:
$ nano ./frontend/Dockerfile
Add the below code:
FROM alpine:3.8
RUN apk add --no-cache py3-pip python3 && \
    pip3 install flask requests Flask-Opentracing jaeger-client
COPY . /usr/src/frontend
ENV FLASK_APP frontend.py
WORKDIR /usr/src/frontend
CMD flask run --host=0.0.0.0 --port=8000
Then save and close the file.
Now open the backend’s Dockerfile:
$ nano ./backend/Dockerfile
Add the below code:
FROM alpine:3.8
RUN apk add --no-cache py3-pip python3 && \
    pip3 install flask Flask-Opentracing jaeger-client
COPY . /usr/src/backend
ENV FLASK_APP backend.py
WORKDIR /usr/src/backend
CMD flask run --host=0.0.0.0 --port=5000
With these changes, we will rebuild and push the new versions of our containers.
First, we build and push the frontend application. Note the v2 tag at the end:
$ docker build -t your_username/do-visit-counter-frontend:v2 ./frontend
$ docker push your_username/do-visit-counter-frontend:v2
Then we build and push the backend application:
$ docker build -t your_username/do-visit-counter-backend:v2 ./backend
$ docker push your_username/do-visit-counter-backend:v2
Finally, we have to inject Jaeger sidecars into the application pods to listen to traces from the pod and forward them to the Jaeger server.
To do so, we add an annotation to our manifests.
Open the manifest for the frontend:
$ nano ./frontend/deploy_frontend.yaml
Add the following code. Note that we replace our image with the v2 version; on that line, also substitute your Docker Hub username:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: do-visit-counter-frontend
  labels:
    name: do-visit-counter-frontend
  annotations:
    "sidecar.jaegertracing.io/inject": "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: do-visit-counter-frontend
  template:
    metadata:
      labels:
        app: do-visit-counter-frontend
    spec:
      containers:
        - name: do-visit-counter-frontend
          image: your_dockerhub_username/do-visit-counter-frontend:v2
          imagePullPolicy: Always
          env:
            - name: COUNTER_ENDPOINT
              value: http://do-visit-counter-backend.default.svc.cluster.local:5000
          ports:
            - name: frontend-port
              containerPort: 8000
              protocol: TCP
This annotation will inject a Jaeger sidecar into our pod.
Eventually, save and close the file.
Now open the manifest for the backend:
$ nano ./backend/deploy_backend.yaml
Repeat the process to inject the Jaeger sidecar and update the image tag:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: do-visit-counter-backend
  labels:
    name: do-visit-counter-backend
  annotations:
    "sidecar.jaegertracing.io/inject": "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: do-visit-counter-backend
  template:
    metadata:
      labels:
        app: do-visit-counter-backend
    spec:
      containers:
        - name: do-visit-counter-backend
          image: your_dockerhub_username/do-visit-counter-backend:v2
          imagePullPolicy: Always
          ports:
            - name: backend-port
              containerPort: 5000
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: do-visit-counter-backend
spec:
  selector:
    app: do-visit-counter-backend
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
With our new manifests in place, we apply them to the cluster and wait for the pods to create.
Meanwhile, let us delete our old resources:
$ kubectl delete -f ./frontend/deploy_frontend.yaml
$ kubectl delete -f ./backend/deploy_backend.yaml
And then replace them:
$ kubectl apply -f ./frontend/deploy_frontend.yaml
$ kubectl apply -f ./backend/deploy_backend.yaml
This time the pods for our applications will consist of two containers: one for the application and a second for the Jaeger sidecar.
To check it, we use kubectl:
$ kubectl get pods
The application pods appear with 2/2 in the READY column:
NAME                                         READY   STATUS    RESTARTS   AGE
jaeger-operator-547567dddb-rxsd2             1/1     Running   0          23m
simplest-759cb7d586-q6x28                    1/1     Running   0          22m
do-visit-counter-backend-694c7db576-jcsmv    2/2     Running   0          73s
do-visit-counter-frontend-6d7d47f955-lwdnf   2/2     Running   0          42s
With the sidecars and instrumentation in place, we can rerun the program and investigate the traces in the Jaeger UI.
Step 5 – Investigating Traces in Jaeger
Here’s where we reap the benefits of tracing. The goal is to see what call might be a performance issue by looking at the Jaeger UI.
To set this up, we open a second and third terminal window. Two windows will port-forward Jaeger and our application, while the third will send HTTP requests to the frontend from our machine via curl.
In the first window, forward the port for the frontend service:
$ kubectl port-forward $(kubectl get pods -l=app="do-visit-counter-frontend" -o name) 8000:8000
In the second window, forward the port for Jaeger:
$ kubectl port-forward $(kubectl get pods -l=app="jaeger" -o name) 16686:16686
In the third window, we use curl in a loop to generate 10 HTTP requests:
for i in 0 1 2 3 4 5 6 7 8 9; do curl localhost:8000; done
Our output will be like this:
Hello, World! You're visitor number 1 in here!
Hello, World! You're visitor number 2 in here!
.
.
.
Hello, World! You're visitor number 10 in here!
This will give us enough data points to compare them in the visualization.
Open a browser and navigate to http://localhost:16686. Set the Service dropdown menu to service and change limit results to 30. Press Find Traces.
Eventually, the traces from our application will appear in the graph.
Here, Jaeger traces how long our applications take to process information and which functions contribute the most time. It gives us hints to focus our investigation. Jaeger effectively visualizes the performance leak inside our distributed application.
[Need help with the procedures? We can help you]
Conclusion
To conclude, by implementing Distributed Tracing with Jaeger on Kubernetes, we were able to find the cause of our irregular response times. We also saw how our Support Techs deploy it.