eShopOnContainers - Part 3 (Istio Injection)
In this part, we will explore Istio injection into our Kubernetes cluster. Istio's role is to manage microservice communication, enforce policies, and provide insights into network traffic. This improves the overall security, reliability, and observability of the application. Additionally, Istio provides features such as load balancing, traffic routing, and service-to-service authentication, letting developers focus on writing code for their microservices without worrying about the underlying network infrastructure.
Notably, Istio is designed to work without major changes to pre-existing service code.
Kube-proxy is an integral part of the Kubernetes networking setup and works at the network layer to handle communication between the pods running on different nodes in a cluster. It operates in one of three modes: userspace, iptables, or IPVS (IP Virtual Server).
In userspace mode, kube-proxy itself proxies connections in user space; this is the slowest option and is now largely deprecated. In iptables mode, kube-proxy programs iptables rules so the kernel forwards traffic to the appropriate pods. In IPVS mode, it uses the Linux IP Virtual Server for in-kernel load balancing and traffic forwarding.
The main role of kube-proxy is to handle incoming network requests and forward them to the appropriate backend pod. It implements services as defined in the cluster, performing tasks such as TCP and UDP stream forwarding, load balancing, and network connection tracking.
Kube-proxy watches the Kubernetes API server for changes to Services and Endpoints, and updates its configuration accordingly. As pods become ready or unhealthy, the corresponding Endpoints change and kube-proxy updates its routing rules so that traffic is only sent to healthy pods.
In summary, kube-proxy is responsible for ensuring network connectivity between pods in a Kubernetes cluster and providing load balancing for services running in the cluster.
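On most clusters, including Docker Desktop, kube-proxy runs as a DaemonSet with one pod per node. Assuming a kubeadm-based setup (which uses the conventional k8s-app=kube-proxy label), you can list its instances with:
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide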
Like the sidecar proxy, kube-proxy also needs to intercept traffic; the difference is that kube-proxy intercepts traffic to and from the Kubernetes node, while the sidecar proxy intercepts traffic to and from the individual pod.
Istio can follow service registration in Kubernetes and can also interface with other service discovery systems via platform adapters in the control plane. It then generates data plane configurations (using CRDs, which are stored in etcd) for the transparent proxies of the data plane. Each transparent proxy is deployed as a sidecar container in the pod of the corresponding application service, and all these proxies synchronize their configuration from the control plane. Istio uses Envoy as the sidecar proxy.
The injection process works as follows:
- The Istio control plane is installed in the Kubernetes cluster. In recent releases this ships as the single istiod binary (replacing the earlier separate Pilot, Mixer, and Citadel components), which is responsible for managing traffic, policies, and security.
- When a new pod is created in the cluster, the Kubernetes admission controller intercepts the pod creation request and checks whether the pod should be injected with an Istio sidecar proxy, based on the istio-injection=enabled namespace label or the pod annotation sidecar.istio.io/inject: "true".
- If injection applies, the Kubernetes admission controller invokes the Istio sidecar injector, a mutating webhook that modifies the pod's YAML definition to include an additional container for the sidecar proxy. The proxy is implemented as a lightweight Envoy proxy.
- The modified YAML definition is then used to create the pod, with the sidecar proxy container running alongside the main container for the application.
- Once the pod is created, the Istio sidecar proxy intercepts all incoming and outgoing traffic for the application container. It uses the service mesh to manage and secure the communication between microservices in the application (a quick way to confirm injection is shown below).
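Once a pod has been injected, you can confirm it by listing the pod's containers; an injected pod shows an istio-proxy container next to the application container (the pod name below is a placeholder):
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'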
Switch the current-context
Check your kubectl configuration to see if it's pointing to the correct cluster context; if not, switch to the correct context by using the command below.
kubectl config use-context docker-desktop
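To list the available contexts and see which one is currently active, you can run:
kubectl config get-contexts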
Install Istio
- Download the Istio release: you can download the latest Istio release from the project's GitHub releases page at https://github.com/istio/istio/releases (istio-1.15.1-win.zip in this walkthrough). Extract it to a directory on your system.
- Add the extracted bin folder (which contains istioctl) to the PATH environment variable.
- Install Istio's Custom Resource Definitions (CRDs) and the Istio control plane components:
istioctl install --set profile=demo
- Verify the installation: check the status of the Istio components using the following command:
kubectl get pods -n istio-system
- (Optional) Install the Istio addons.
The YAML files in the Istio download folder, istio-1.15.1-win\istio-1.15.1\samples\addons, can be applied to install the addons:
kubectl apply -f prometheus.yaml
kubectl apply -f kiali.yaml
kubectl apply -f jaeger.yaml
kubectl apply -f grafana.yaml
- All the addons are installed to the istio-system namespace and can be viewed:
kubectl get svc -n istio-system
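istioctl also offers a shortcut to open an addon dashboard in your browser via a local port-forward, for example:
istioctl dashboard kiali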
The final step will be enabling the creation of Envoy proxies, which will be deployed as sidecars to services running in the mesh.
Sidecars are typically used to add an extra layer of functionality in existing container environments. Istio’s mesh architecture relies on communication between Envoy sidecars, which comprise the data plane of the mesh, and the components of the control plane. In order for the mesh to work, we need to ensure that each Pod in the mesh will also run an Envoy sidecar.
There are two ways of accomplishing this goal:
• manual sidecar injection
• automatic sidecar injection
We’ll enable automatic sidecar injection by labelling the namespace in which we will create our application objects with the label istio-injection=enabled. This will ensure that the MutatingAdmissionWebhook controller can intercept requests to the kube-apiserver and perform a specific action — in this case, ensuring that all of our application pods start with a sidecar.
kubectl label namespace default istio-injection=enabled
We can verify that the command worked as intended by running the following:
kubectl get namespace -L istio-injection
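The output should look something like the following, with the default namespace showing enabled in the ISTIO-INJECTION column (namespace names and ages will vary by cluster):
NAME              STATUS   AGE   ISTIO-INJECTION
default           Active   2d    enabled
istio-system      Active   1h
kube-system       Active   2d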
Deploy eShopOnContainer
Now that the Istio system is installed and running, it's time to deploy eShopOnContainers in the Kubernetes cluster. Ensure Helm is installed, as the PowerShell script below uses Helm (refer to Part 2):
.\deploy-all.ps1 -imageTag linux-latest -useLocalk8s $true -imagePullPolicy Always
Next, check the status of all the pods.
kubectl get pods
In my case, I was getting CrashLoopBackOff status for webmvc, catalog-api, etc. The CrashLoopBackOff status in Kubernetes means that a container in a pod keeps crashing and restarting, and the number of restarts exceeds the restart policy defined in the pod. This status indicates a problem with the container, such as an issue with the application inside the container, missing dependencies, or a failure to start properly. On checking the logs, I found that the applications' connections to RabbitMQ, Redis, and SQL Data through the sidecar (Envoy proxy) were failing. So, I decided to skip the sidecar for those pods.
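To diagnose a crashing pod, the usual steps are to inspect its events and the logs of the previous (crashed) container instance; the pod and container names below are placeholders:
kubectl describe pod <pod-name>
kubectl logs <pod-name> -c <container-name> --previous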
To skip the Istio sidecar injection for pods created by a deployment in a namespace labeled with istio-injection=enabled, you can add the sidecar.istio.io/inject annotation to the deployment's pod template with a value of "false" (shown below). This annotation will be applied to all pods created by the deployment, preventing the Istio sidecar from being injected into those pods.
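Here is a minimal sketch of such a deployment; the deployment name, labels, and image are illustrative placeholders, and only the annotation is the relevant part:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq            # illustrative name
spec:
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
      annotations:
        sidecar.istio.io/inject: "false"   # prevents sidecar injection for these pods
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management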
After redeploying, check the pods again: pods running with a sidecar report 2/2 containers ready, while pods with injection disabled report 1/1.
Determining the ingress IP and routing traffic to webmvc
kubectl get svc -n istio-system
You can see that the EXTERNAL-IP of the istio-ingressgateway service is localhost and that it is exposed on port 80. To route traffic to webmvc, you'll need to create a Gateway and a VirtualService resource in Istio. Create a file called istio-gateway.yaml in your project directory and paste the following:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mvcweb-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: webmvc-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - mvcweb-gateway
  http:
  - route:
    - destination:
        host: webmvc
In this example, the Gateway resource named "mvcweb-gateway" opens port 80 on the Istio ingress gateway, and the VirtualService resource named "webmvc-virtual-service" routes all requests arriving through that gateway to the "webmvc" service.
Deploy this to the cluster with the command below:
kubectl apply -f istio-gateway.yaml
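You can confirm that both resources were created:
kubectl get gateway,virtualservice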
You are all set; you can now access the webmvc app on port 80 of localhost.
Alternatively, you can access webmvc directly using port-forwarding and check whether the application works:
kubectl port-forward svc/webmvc 7777:80
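The application should then be reachable at http://localhost:7777 for as long as the port-forward is running.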
Enabling Telemetry Access by Ingress
For quick access, you can port-forward the Kiali service and open http://localhost:20001:
kubectl port-forward svc/kiali 20001 -n istio-system
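To expose Kiali through the ingress gateway instead, as the section title suggests, you can add a VirtualService that routes the /kiali path prefix (Kiali serves under /kiali by default) to the Kiali service. This is a minimal sketch that reuses the mvcweb-gateway created earlier; note that multiple VirtualServices sharing the same host on one gateway are merged with undefined ordering in Istio, so for production you would typically use a dedicated host or gateway:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali-virtual-service
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - default/mvcweb-gateway   # <namespace>/<gateway> reference to the gateway created above
  http:
  - match:
    - uri:
        prefix: /kiali
    route:
    - destination:
        host: kiali
        port:
          number: 20001
Apply it with kubectl apply -f and then browse to http://localhost/kiali.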