Wednesday, April 10, 2019

Setting up an AWS EKS Cluster with Rancher

In the previous blog we explored setting up a K8S Cluster on the AWS Cloud without using any additional software or tools. The Cloud Providers make it easy to create a K8S Cluster in the Cloud. The tough part is securing, fine tuning, upgrading, access management etc. Rancher provides centralized management of different K8S Clusters, which can be in any of the Clouds (AWS, GCP, Azure) or on-premise. More on what Rancher has to offer on top of K8S here. The good thing about Rancher is that it's 100% Open Source and there is no Vendor Lock-in. We can simply remove Rancher and interact with the K8S Cluster directly.

Rancher also allows creating K8S Clusters in different environments. In this blog we will look at installing a K8S Cluster on the AWS Cloud in a quick and dirty way (not ready for production). Here are the instructions for the same. The only prerequisite is access to a Linux OS with Docker installed on it.

Steps for creating a K8S Cluster on AWS using Rancher


Step 1: Login to the Linux Console and run the Rancher Container using the below command.

sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
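Optionally, a host volume can be mounted so that the Rancher configuration survives container restarts. The sketch below assumes /opt/rancher as the host path (any empty directory works); /var/lib/rancher is where the rancher/rancher image keeps its data.

# Run Rancher with its data directory persisted on the host
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher rancher/rancher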


Step 2: Access the Rancher Console in a browser. Click on "Advanced" and then "Accept the Risk and Continue".



Step 3: Set up an admin password to access the Rancher Console.


Step 4: Click on "Save URL".


Step 5: Click on "Add Cluster" and select "Amazon EKS".



Step 6: Select the Region and provide the AWS Access and Secret Key. The same can be generated as mentioned in the "Generate access keys for programmatic access" section here. Click on "Next: Configure Cluster".
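The access key pair can also be created from the AWS CLI for an existing IAM user. The user name below is just an example; the user needs sufficient permissions to create EKS and EC2 resources.

# Generate an access key / secret key pair for an existing IAM user
aws iam create-access-key --user-name rancher-admin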


Step 7: Click on "Next: Select VPC & Subnet"


Step 8: Click on "Next: Select Instance Options"


Step 9: There is a small catch here. In this screen we specify the minimum and the maximum number of nodes in the ASG (Auto Scaling Group), but nowhere do we specify the default size to start with. So, when the Cluster is started, it creates 3 EC2 instances by default. There is an issue in the Rancher project on GitHub for this and it's still open.

Click on Create.
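If 3 worker nodes are more than what's needed, one possible workaround (a rough sketch, not something Rancher does for you) is to reduce the desired capacity of the Auto Scaling Group once the Cluster comes up. The ASG name below is hypothetical; the first command lists the real names.

# List the Auto Scaling Groups and then lower the desired capacity of the EKS worker group
aws autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[].AutoScalingGroupName'
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-eks-worker-asg --desired-capacity 1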


Step 10: Initially the Cluster will be in a Provisioning State and after 10-15 minutes it will change to Active State.



During the Cluster creation process, the following will be created. The same can be observed by going to the AWS Management Console for the appropriate Service.

          1) CloudFormation Stacks
          2) EC2 instances
          3) VPC
          4) EKS Control Plane
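For those who prefer the command line, the same resources can also be listed with the AWS CLI. This is a quick sketch, assuming the CLI is configured with the same access keys and region used above.

# List the resources created during the EKS Cluster provisioning
aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE
aws eks list-clusters
aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name]'
aws ec2 describe-vpcs --query 'Vpcs[].VpcId'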

Step 11: Go through the different screens in the Rancher Console.



Step 12: Once you have spent some time with the Rancher Console, select the AWS Cluster created in the above sequence of steps and delete it. After a few minutes the Cluster should be gone from the Rancher Console.



Step 13: Although the Rancher Console says that the Cluster is deleted, there might be some resources left behind in AWS as shown below. Not sure where the bug is. But any AWS resources that are not deleted might add to the AWS monthly bill. So, it's always better to go to the AWS Management Console for CloudFormation, EC2, S3, VPC, EKS and make sure all the AWS resources are deleted.
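Any leftover CloudFormation Stack can also be cleaned up from the AWS CLI. The stack name below is hypothetical; use the name returned by the first command.

# Find and delete any CloudFormation Stack left behind after the Cluster deletion
aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE
aws cloudformation delete-stack --stack-name my-leftover-eks-stack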


Conclusion


An AWS EKS Cluster can be easily created using Rancher as mentioned above. The steps are more or less the same for creating a Cluster on Azure and GCP as well. Rancher also allows importing an existing K8S Cluster. Once all the Clusters are within Rancher, they can be managed centrally, policies can be applied etc. This makes it easy when there are multiple K8S Clusters to manage within an organization.

As seen above, there are a few rough edges in the installation process. First, there is no way to specify the default number of EC2 worker nodes in the Cluster. Also, not all the AWS resources are deleted when the EKS Cluster is deleted through Rancher, which might incur additional cost. These things might be fixed in future releases of Rancher.

Also, note that this is a quick and dirty way of installing a K8S Cluster on AWS using Rancher and is not ready for Production, as this particular setup doesn't support High Availability. Here are instructions on how to set up an HA Cluster using Rancher.

Wednesday, April 3, 2019

K8S Cluster on AWS using EKS

As mentioned in the previous blogs, there are different ways of getting started with K8S (Minikube, Play With K8S, K3S, Kubeadm, Kops etc), listed here in the K8S documentation. Some of them involve running K8S on the local machine and some in the Cloud. AWS EKS, Google GKE and Azure AKS are a few of the managed K8S services in the Cloud.

By following the instructions mentioned here, it takes about 20 minutes to set up and access the K8S cluster. Below is the output at the end of following the instructions.

1) As usual AWS uses CloudFormation for automating tasks around the Cluster setup. There would be two Stacks, one for creating a new VPC where the K8S cluster would be running and the other for creating the Worker nodes, which we have to manage.


2) A new VPC with the Subnets, Routing Tables, Internet Gateways, Security Groups etc. These are automatically created by the CloudFormation template.


3) An EKS Cluster which manages the components in the Control Plane (K8S master) and makes sure they are Highly Available.


4) EC2 instances, again created by CloudFormation, for the K8S Workers. By default 4 EC2 instances are created by CloudFormation, but this can be reduced to 1 as we won't be running a heavy load on the cluster. This can be changed in the CloudFormation template by changing the default number of instances for the AutoScaling group from 4 to 1. I have changed it to 2, hence the number of EC2 instances below.


5) Service roles created manually and by the CloudFormation Stack.


6) Once the Guestbook application has been deployed on the AWS K8S Cluster, the same can be accessed as shown below.


7) We can get the nodes, pods and namespaces as shown below. Note that an alias has been created for the kubectl command.
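For reference, the alias and the commands used for the above output look roughly like the below; the alias name 'k' is just a personal preference.

# Shortcut for kubectl, followed by the basic inspection commands
alias k='kubectl'
k get nodes
k get pods --all-namespaces
k get namespaces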

 

A few things to keep in mind


1) As mentioned above, the number of default Worker nodes can be decreased to 1 while creating the Stack for the K8S workers. Saves $$$.

2) There were a few errors while creating the VPC through the CloudFormation Stack, as shown below. This error happens in a region where the default VPC was deleted and created again, otherwise it works fine. Even after a bit of debugging, the cause of the error was not obvious.


A quick workaround for the above problem, just in case someone gets stuck on it, is to use another region which still has the default VPC created by the AWS account, or to hard code the Availability Zones in the CloudFormation template as shown below.
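Another option, if the root cause really is the missing default VPC, is to simply recreate the default VPC for that region from the AWS CLI. Treat this as a hint rather than a verified fix for this particular setup; the region below is just an example.

# Recreate the default VPC in the affected region
aws ec2 create-default-vpc --region us-east-1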


3) Don't forget to delete the Guestbook application (1) and the AWS EKS Cluster (1).

Conclusion


Just to conclude, it takes around 20 minutes to setup a Cluster. Instead of repeating the steps in this blog, here again are the steps from AWS Documentation to get started with a K8S Cluster on AWS.

For the sake of learning and trying out different things I would prefer to set up the K8S Cluster on the local machine, as it is faster to start the Cluster and quickly try out a few things. Also, the Cloud Vendors are a bit behind on the K8S version. As of this writing, AWS EKS supports K8S Version 1.12 and lower, which is about 6 months old, while the latest K8S Version is 1.14 which was released recently.

Tuesday, March 26, 2019

K8S Support for Windows Containers

K8S has two types of components, the Master and the Worker. The Containers wrapped in Pods are executed on the Worker nodes, while the Master node schedules the Pods across the Worker nodes based on resource availability and a couple of other factors like Taints and Tolerations. To K8S, all the nodes in the Cluster combined appear like one big node or machine.


Until now both the Master and the Worker nodes in K8S ran on different flavors of Linux and it was not possible to run Windows based workloads on K8S. Even in the Azure Cloud it was all Linux VMs for the Master and the Worker nodes.

With the new K8S 1.14 release, support for Windows based Worker nodes has moved from Beta to GA, though the Master node still runs on Linux only. Now it's possible to run/schedule Windows Containers through K8S on the Windows Worker nodes, and not just through Docker Swarm.
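Once a Cluster has Windows Worker nodes, the operating system of each node can be checked from kubectl using the standard node labels (on older clusters the label may still be beta.kubernetes.io/os).

# Show the OS of each node as an extra column
kubectl get nodes -L kubernetes.io/os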

While this is the most significant feature, a couple of other features are also introduced in the K8S 1.14 release, as mentioned here. There is a plan to cover this and the other new K8S features in upcoming posts on the official K8S blog here. So, keep following that blog to get to know the latest around K8S.

 

This is a big thing as it enables Windows Containers to be scheduled in a K8S Cluster to take advantage of the .NET runtime and the ecosystem Windows has to offer. And Microsoft gets a chance to push for more Windows based workloads on K8S. It surprises me that it took 4 years to add Windows Container support to K8S, since it was released in 2015.

How is the contribution to K8S across companies?

The good thing about K8S is that almost all of the big companies are rallying around it, instead of providing their own alternatives. Here you can get the number of contributors to K8S grouped by company for the last month. Metrics for other date ranges can also be seen. There are also a couple of other pre-built dashboards around a lot of other metrics there. Obviously Google has the most contributors, followed by RedHat and VMware.


The visualization is provided by Grafana. The home page (https://devstats.cncf.io/) provides metrics for other CNCF projects (including Graduated, Incubating and  Sandbox).

Monday, March 25, 2019

Automating Canary deployment with Jenkins, Spinnaker and Prometheus

In one of the previous blogs, we looked at Canary deployment and how it can be done with Istio (a service mesh). Initially a small percentage of the traffic (maybe 1%) is sent to the new version and the remaining (99%) is sent to the old version. As the performance metrics are met and errors stay within limits for the new version, the amount of traffic to the new version is increased incrementally till all the traffic goes to the new version.

We have seen how to do the same manually (again from the previous blog), but the whole thing can be automated using a CI tool (Jenkins etc), CD tool (Spinnaker etc) and a monitoring tool (Prometheus etc).
  • The Jenkins workflow gets triggered by changes to Git, and a Docker image is built and pushed to the image registry.
  • Docker webhooks trigger a Spinnaker pipeline to start a Canary Deployment of the new image in a K8S environment.
  • In the Spinnaker pipeline, the metrics from Prometheus are used to incrementally scale up or scale down the amount of traffic sent to the new version of the service, completing the Canary Deployment.

Courtesy here

Again there is an interesting article and demo from Kublr around the above flow (excluding the Jenkins part). In the demo the image is built manually and pushed to Docker, which triggers the rest of the workflow. The only thing missing from the demo is the installation, configuration and integration of the different software components. The setup has already been done, the workflow is triggered and the execution is shown. It's worth watching the demo to understand how the different pieces fit together.

It's all interesting to see and know how the different pieces fit together in the Microservices world. In the upcoming blogs we will see the steps to setup such an environment from scratch or by using one of the K8S management platforms.

In the case of BigData there were vendors like Cloudera, HortonWorks and MapR who integrated the different software components like Hadoop, Hive, Pig, HBase, Cassandra etc and made sure they work nicely together. I guess in the K8S space also there will be such vendors. If yes, please update in the comments below.

Sunday, March 24, 2019

Getting started with Canary Deployment with Istio on K8S

K8S and Istio aren't that easy, and it took me some time to figure them out, get started with Istio and set up a few working examples. So, this post is all about getting anyone interested in Istio started quickly and easily with 'Canary Deployment with Istio'.

 

What is Canary Deployment?


Canary Deployment is a deployment technique in which a new version of the software is incrementally introduced to a subset of users. In the below figure, the new version V2 of the service is rolled out to only 1% of users and the rest are still using the old version V1 of the service. This reduces the risk of rolling out a faulty version to all the users.


The different characteristics like the performance of V2, user feedback etc are observed, and incrementally more and more users are exposed to the new version V2, till 100% of the users are exposed to V2. If there is any problem with version V2, the users can be rolled back to V1. More about Canary Deployment here. It's an old article, but the concepts are all the same.

Why Canary Deployment with Istio on K8S?


Canary Deployment can be done with plain K8S as well as with Istio on top of K8S. With plain K8S, there are a few disadvantages. The ratio of users exposed to a version of the service is proportional to the number of Pods of that version. Let's say we want 99% of the users on V1 and 1% on V2, then there should be a minimum of 99 Pods of V1 and 1 Pod of V2, irrespective of the amount of traffic to the Pods. This is because KubeProxy uses round-robin across the different Pods.

This is not the case with Istio, where we can specify the traffic split across the different versions independently of the number of Pods of each version. Again, let's say we want 99% of the users on V1 and 1% on V2, then in the case of Istio we need at a minimum 1 Pod of V1 and 1 Pod of V2. This is made possible by the Istio VirtualService component.
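To make it concrete, below is a minimal sketch of what such a weight based split looks like in Istio. The service name my-service and the subsets v1/v2 are assumptions, and a DestinationRule defining those subsets is assumed to already exist.

# Send 99% of the traffic to subset v1 and 1% to subset v2 of a hypothetical service
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: v1
      weight: 99
    - destination:
        host: my-service
        subset: v2
      weight: 1
EOF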

How to get started with Canary Deployment on Istio?


Once Istio has been properly installed on top of K8S, the below components have to be created with the proper configuration to get started with Canary Deployment.

K8S Components
  • Deployment
  • Service
Istio Components
  • Gateway
  • VirtualService
  • DestinationRule
Instead of repeating the content here, I will point to some resources which I think are really good for getting started with Canary Deployment on Istio. Below are the links to the Istio Official Documentation on Canary Deployment. It has code snippets, but it's not complete and cannot be used as-is.

1) Istio Official Documentation on Canary Deployment
 
2) Istio Official blog entry on Canary Deployment

The below blogs have a good explanation around Canary Deployment with working code snippets. I would recommend getting started with these. If you have access to an existing K8S cluster then the steps for installing a K8S cluster using the Kublr tool can be skipped.

3) Kublr Blog with code for Canary Deployment (Weight Based)

4) Kublr Blog with code for Canary Deployment (Intelligent Routing)

When creating a Gateway as mentioned in the above Kublr blogs in the Cloud, a Load Balancer will be created and the webpage can be accessed by using the URL of the Load Balancer. But when creating a Gateway on a local machine or in a non-Cloud environment, a Load Balancer is not created.

In this case, port forwarding has to be used from any port (1235 in the below command) to the istio-ingressgateway pod. Then the webpage can be accessed externally by using the IP address of the master on port 1235. Note that in the below command the pod name istio-ingressgateway-6c756547b-qz2tx has to be modified.
kubectl port-forward -n=istio-system --address 0.0.0.0 pod/istio-ingressgateway-6c756547b-qz2tx 1235:80
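The pod name can also be looked up on the fly instead of hard coding it, assuming the standard app=istio-ingressgateway label used by the Istio installation.

# Look up the ingress gateway pod name and forward port 1235 to it
INGRESS_POD=$(kubectl get pod -n istio-system -l app=istio-ingressgateway -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward -n=istio-system --address 0.0.0.0 pod/$INGRESS_POD 1235:80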
In the upcoming blogs, we will explore the other features of Istio like Injecting Faults, Timeouts and Retries.

Friday, March 22, 2019

Tips and Tricks for using K8S faster and easier

In this blog I will list down the Tips and Tricks around usage of K8S making it easier and faster to use. Some of these are extremely useful during the Certifications also. Usually I add them to my .bashrc file.

I will try to continuously update this blog as I get across many more. Also, if you come across any additional tips, let me know in the comments and I will add them here.

1) Create a shortcut for kubectl, this will save a few key strokes.
alias k='kubectl'
2) Adding this to .bashrc makes the auto completion work with the alias 'k' mentioned above. This makes it faster to complete the commands.
source <(kubectl completion bash | sed s/kubectl/k/g)
3) This deletes the namespace and creates it again. This makes sure that all the objects in the namespace are cleaned up. It also keeps K8S fast, as objects don't get piled up over time.
alias kc='k delete namespaces my-namespace;k create namespace my-namespace' 
Once this is done, the default namespace has to be changed. Note to change the current context (kubernetes-admin@kubernetes) in the second command based on the output of the first command.
kubectl config current-context
kubectl config set-context kubernetes-admin@kubernetes --namespace=my-namespace
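The two commands can also be combined into one line so that the context name doesn't have to be copied by hand.

# Set the default namespace for whatever the current context is
kubectl config set-context $(kubectl config current-context) --namespace=my-namespace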
4) The watch command is used a lot in K8S to observe the different objects getting created and destroyed (the object life cycle). When we run 'watch k get pods', the 'k' alias doesn't get expanded unless the below is included in the .bashrc.
alias watch='watch '
5) A bunch of them are mentioned here at

-- 'Pimp my Kubernetes Shell'.
-- Boosting your kubectl productivity

6) YAML is less verbose than XML, but typing YAML is still a pain. The below commands will create sample YAML files that can be modified later as per our requirement. Note that the actual K8S objects are not created, just the YAML files, as the --dry-run option is used.
k run nginx --image=nginx --restart=Never --dry-run -o yaml > pod.yaml
k run nginx --image=nginx -r=2 --generator=run/v1 --dry-run -o yaml > rc.yaml
k run nginx --image=nginx -r=2 --dry-run -o yaml > deployment.yaml
k expose deployment nginx --target-port=80 --port=8080 --type=NodePort --dry-run -o yaml > service.yaml
That's it for now. Have fun with K8S.

7) With tmux multiple terminal panes can be accessed simultaneously in a single window. Usually I split the window into multiple panes based on the task at hand. Tmux is a bit tricky to get started with, but very easy to get addicted to.

Here the window has been split into two panes. On the right is the help with the examples and on the left is the prompt to try out the commands. No need to toggle across different screens.


Here on the top-right is the 'watch k get pods' command and on the bottom-right is the 'watch k get deployments' command. And on the left is the prompt to try out the commands. In this case, we can notice the pods and deployments getting created and destroyed, i.e. the life cycle of the K8S objects.
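A similar layout can be created from the command line as well. The below is a rough sketch which, when run from a shell inside tmux, splits off a right pane and then splits that pane again for the two watch commands.

# Split the window: left pane keeps the shell, right side gets two stacked watch panes
tmux split-window -h 'watch kubectl get pods'
tmux split-window -v 'watch kubectl get deployments'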


Add the below to .bashrc to start tmux automatically in the terminal.
#Start tmux automatically
if command -v tmux>/dev/null; then
  [[ ! $TERM =~ screen ]] && [ -z $TMUX ] && exec tmux
fi

Exposing a K8S Service of Type LoadBalancer on Local Machine

As we have been exploring in this blog, there are different ways of installing and using K8S. It can be in the Cloud, on-premise and also on your Laptop. For the sake of trying different things I prefer installing it on the Laptop, as I have complete control over it (building and breaking), I can work offline and there is no cost associated with it. But there are a few disadvantages, like not being able to integrate with the different Cloud services. We will look at one of the disadvantages of installing K8S on the Laptop and a workaround for the same.

When we create Pods, an IP address is assigned to them, and when a Pod goes down and a new one is created by the Deployment automatically, the new Pod might be assigned a different IP address. So, when the IP address is not static, how do we access the Pods? This is where a K8S Service comes into play. More about the K8S Service and the different ways it can be exposed here, here, here and here.

Based on the required access type, a Service can be exposed using the Type ClusterIP, NodePort or LoadBalancer as mentioned here. When a Service of Type LoadBalancer is created, K8S will automatically create a Cloud vendor specific Load Balancer. This is good in the Cloud, but what happens when we create a Service of Type LoadBalancer in a non-Cloud environment, as in the case of a Laptop, where a Load Balancer can't be provisioned automatically? This can be addressed using MetalLB, which is described as a 'bare-metal load-balancer for K8S'.

So, let's get into the sequence of steps to get around this problem using MetalLB.

Step 1: Create a file nginxlb.yaml with the below content. This basically creates a Deployment and a Service for nginx.
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1
        ports:
        - name: http
          containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
Create another file layer2-config.yaml with the below content. This creates a ConfigMap with the pool of IP addresses from which the Load Balancer can be assigned an address. Note that the IP address range has to be modified according to the network to which your Laptop is connected.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 192.168.0.30-192.168.0.40
Step 2: Create the Deployment and Service using the 'kubectl apply -f nginxlb.yaml' command. Notice that the EXTERNAL-IP is shown as <pending>, because in the local setup a Load Balancer cannot be created as in the case of the Cloud.


Step 3: Delete the Deployment and the Service using the 'kubectl delete -f nginxlb.yaml' command, as they will be installed again later once the MetalLB setup is completed.


Step 4: Install the MetalLB and the related components using 'kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml' command.


Make sure that the corresponding Pods are created using the 'kubectl get pods -n=metallb-system' command, and that they are in the Running status after a few minutes.


Step 5: Apply the configuration for the MetalLB installation using the 'kubectl apply -f layer2-config.yaml' command. In this file we specify the range of IP addresses from which the Load Balancer will get an IP address. Don't forget to change the IP addresses in the file based on the network the Laptop is connected to.


Step 6: Now that the MetalLB setup has been done, recreate the Deployment and the Service using the 'kubectl apply -f nginxlb.yaml' command. Notice in the output of the 'kubectl get svc' command that the EXTERNAL-IP is 192.168.0.31 and not <pending> as was the case earlier.


Step 7: Now that the Load Balancer has been created, open the 192.168.0.31:8080 URL in a browser to get the below nginx page. Note to change the IP address based on the above step. If we get the below page, we have successfully set up a Service of Type LoadBalancer.
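The same can also be checked quickly from the command line; replace the IP address with whatever EXTERNAL-IP was assigned by MetalLB in the previous step.

# A HEAD request to the Load Balancer address should return an HTTP 200 from nginx
curl -I http://192.168.0.31:8080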


Step 8: Finally delete the nginx Deployment/Service and the MetalLB related components using the 'kubectl delete -f nginxlb.yaml' and the 'kubectl delete -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml' command. This is an optional step, just in case you want to delete it.

 

Conclusion


By default, in a non-Cloud environment, when we create a Service with Type set to LoadBalancer the EXTERNAL-IP is set to <pending> as a Load Balancer cannot be created. But in this blog we have seen how to configure a Layer 2 Load Balancer (MetalLB) on the Laptop to get a Load Balancer created. Below are the screenshots before and after the MetalLB setup has been done. Note the value of the EXTERNAL-IP.

Before

After

Thursday, March 21, 2019

Installing Istio and Bookinfo Application on K8S

In this blog previously we explored the different ways of getting started with K8S using Play-With-Kubernetes, Minikube and finally easily installing K8S using a combination of VirtualBox and Vagrant.

Now we will try to install Istio on top of the existing K8S setup and install a Microservices based application called Bookinfo on top of it. Once the setup of Istio is done, we should be able to explore the different features of Istio. The Bookinfo Application is a polyglot application using Python, Java, Ruby and NodeJS. The various Microservices and how they interact are detailed here.

So, what is a service mesh and what is Istio? Istio is an Open Source implementation of a service mesh. While K8S provides orchestration of the Containers, Istio is used for the management of the services provided by these Containers in a Microservices based Architecture. More about Istio and service mesh here (1, 2, 3). As we explore the different features of Istio in the upcoming blogs, it will become clearer what a service mesh is all about in the context of a Microservices based architecture.

Istio is not the only implementation of service mesh as mentioned here. Google uses Istio, while AWS uses App Mesh. Both of them are built on top of Envoy proxy.

Let's jump into the installation of Istio and the Bookinfo Microservices based application on top of it. We will be following the steps mentioned here and here.

Step 1: Download Istio using the 'curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.1.0 sh -' command. It will create an 'istio-1.1.0' folder with the below structure.


Step 2: Install the Istio CRD using the `for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done` command.



Step 3: Install the Istio binaries using the 'kubectl apply -f install/kubernetes/istio-demo-auth.yaml' command.



Step 4: Verify the Istio installation using the 'kubectl get svc -n istio-system' and 'kubectl get pods -n istio-system' commands. All the services should be created and the pods should be in a Running or Completed status as shown below.

Now we are done with the Istio setup. Note there are a couple of different ways of installing Istio, but this is the easiest way.



Step 5: Run the below commands to create a namespace called 'my-namespace' and make it the default namespace in the current context. The current context name 'kubernetes-admin@kubernetes' in the third command has to be modified based on the output of the second command.

a) kubectl delete namespaces my-namespace;kubectl create namespace my-namespace
b) kubectl config current-context
c) kubectl config set-context kubernetes-admin@kubernetes --namespace=my-namespace


Step 6: The Istio sidecar can be injected into the application manually or automatically. We will look at the automatic way. Label the namespace with the 'kubectl label namespace my-namespace istio-injection=enabled' command. With this label, any application deployed in this namespace will have the Istio sidecar injected into it automatically.
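To confirm that the label has actually been applied, the namespaces can be listed with the label shown as an extra column.

# Show the istio-injection label for every namespace
kubectl get namespace -L istio-injection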


Step 7: Deploy the Bookinfo application using the 'kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml' command. Make sure to be in the Istio folder as shown below.


Check the status of the pods (kubectl get pods), they should be in the Running status after a few minutes.


To confirm that the application is running, make a call to the Bookinfo webpage from one of the pods using the below command.

kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"

The output should be as shown below. This means that the application has been deployed successfully. Note that it says 2/2 in the above screenshot. Why is it so? One is the main application container and the other is the Envoy proxy container injected by Istio. Without the label on the namespace it would say 1/1, as the Envoy proxy container would not be injected by Istio.
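The names of the two containers inside one of the pods can be listed as shown below; the ratings pod is used here only because its app=ratings label already appears in the earlier command.

# Print the container names inside the ratings pod (the application container plus istio-proxy)
kubectl get pod $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -o jsonpath='{.spec.containers[*].name}'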


Step 8: An overlay network is created by default and the Bookinfo webpage is only accessible within this network and not from the outside. For this we have to use port forwarding with the 'kubectl port-forward --address 0.0.0.0 pod/productpage-v1-6b6798cb84-v6l7p 1234:9080' command. In this command the pod name should be changed to the one shown by running the 'kubectl get pods' command.


Instead of using port forwarding we could have used an Istio Gateway as mentioned here. But this automatically creates a Load Balancer, which is only possible in the Cloud and not on the local machine. So, we are using port forwarding as mentioned above.

Step 9: Now the Bookinfo webpage can be accessed from the browser (192.168.0.101:1234/productpage). Note that the IP address has to be modified to match the IP address of any of the nodes in the K8S cluster.


Step 10: Use the below commands to cleanup the Bookinfo application and Istio.

samples/bookinfo/platform/kube/cleanup.sh

kubectl delete -f install/kubernetes/istio-demo-auth.yaml
for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl delete -f $i; done




Conclusion


In this blog we looked at the steps required for installing Istio and then the Bookinfo application on top of K8S. It's not too difficult to install Istio as mentioned above, but the Cloud vendors providing managed K8S make it even easier with a single click installation of Istio or some other service mesh.

In the future blogs, we will explore the different features of Istio in a bit more detail using the Bookinfo or some other application; this will make it clearer what Istio and service mesh are all about.