Friday, March 22, 2019

Tips and Tricks for using K8S faster and easier

In this blog I will list tips and tricks that make K8S easier and faster to use. Some of these are extremely useful during the Certifications as well. I usually add them to my .bashrc file.

I will try to continuously update this blog as I come across more. Also, if you come across any additional tips, let me know in the comments and I will add them here.

1) Create a shortcut for kubectl; this will save a few keystrokes.
alias k='kubectl'
2) Adding this to .bashrc makes auto-completion work with the 'k' alias mentioned above. This makes it faster to complete the commands.
source <(kubectl completion bash | sed s/kubectl/k/g)
3) This deletes the namespace and creates it again, making sure that all the objects in the namespace are cleaned up. It also keeps things fast, as objects don't pile up over time.
alias kc='k delete namespaces my-namespace;k create namespace my-namespace' 
Once this is done, the default namespace has to be set again. Change the current context (kubernetes-admin@kubernetes) in the second command based on the output of the first command.
kubectl config current-context
kubectl config set-context kubernetes-admin@kubernetes --namespace=my-namespace
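To verify that the default namespace was switched correctly, something like the below can be used (a quick check, not part of the original commands); it should print 'namespace: my-namespace'.
kubectl config view --minify | grep namespace: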
4) The watch command is used a lot in K8S to observe the different objects getting created and destroyed (the object life cycle). When we run 'watch k get pods', the 'k' alias doesn't get expanded unless the below is included in the .bashrc. The trailing space in the alias tells bash to also attempt alias expansion on the word following watch.
alias watch='watch '
5) A bunch of additional tips are mentioned at 'Pimp my Kubernetes Shell'.

6) YAML is less verbose than XML, but typing YAML is still a pain. The below commands create sample YAML files that can be modified later as per our requirements. Note that the actual K8S objects are not created, just the YAML files, as the --dry-run option is used.
k run nginx --image=nginx --restart=Never --dry-run -o yaml > pod.yaml
k run nginx --image=nginx -r=2 --generator=run/v1 --dry-run -o yaml > rc.yaml
k run nginx --image=nginx -r=2 --dry-run -o yaml > deployment.yaml
k expose deployment nginx --target-port=80 --port=8080 --type=NodePort --dry-run -o yaml > service.yaml
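For reference, the pod.yaml generated by the first command looks roughly like the below; the exact fields can vary slightly across kubectl versions.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}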
That's it for now. Have fun with K8S.

Exposing a K8S Service of Type LoadBalancer on Local Machine

As we have been exploring in this blog, there are different ways of installing and using K8S. It can be in the Cloud, on-premise and also on your Laptop. For trying different things, I prefer installing it on the Laptop as I have complete control over it (building and breaking), I can work offline, and there is no cost associated with it. But there are a few disadvantages too, like not being able to integrate with the different Cloud services. We will look at one of the disadvantages of installing K8S on the Laptop and a workaround for the same.

When we create Pods, an IP address is assigned to each of them. When a Pod goes down and a new one is created automatically by the Deployment, the new Pod might be assigned a different IP address. So, when the IP address is not static, how do we access the Pods? This is where a K8S Service comes into play. More about the K8S Service and the different ways it can be exposed here, here, here and here.

Depending on the required access type, a Service can be exposed with Type set to ClusterIP, NodePort or LoadBalancer as mentioned here. When a Service of Type LoadBalancer is created, K8S automatically creates a Cloud-vendor-specific Load Balancer. This works well in the Cloud, but what happens when we create a Service of Type LoadBalancer in a non-Cloud environment, as in the case of a Laptop, where a Load Balancer can't be provisioned automatically? This can be addressed using MetalLB, which is described as a 'bare-metal load-balancer for K8S'.

So, let's get into the sequence of steps to get around this problem using MetalLB.

Step 1: Create a file nginxlb.yaml with the below content. This creates a Deployment and a Service for nginx.
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1
        ports:
        - name: http
          containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
Create another file layer2-config.yaml with the below content. This creates a ConfigMap with the pool of IP addresses that can be assigned to the Load Balancer. Note that the IP address range has to be modified according to the network your Laptop is connected to.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 192.168.0.30-192.168.0.40
Step 2: Create the Deployment and the Service using the 'kubectl apply -f nginxlb.yaml' command. Notice that the EXTERNAL-IP is shown as <pending> because, in the local setup, a Load Balancer cannot be created as in the case of the Cloud.


Step 3: Delete the Deployment and the Service using the 'kubectl delete -f nginxlb.yaml' command, as they will be installed again later once the MetalLB setup is completed.


Step 4: Install MetalLB and the related components using the 'kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml' command.


Make sure that the corresponding Pods are created using the 'kubectl get pods -n metallb-system' command and that they are in the Running status after a few minutes.
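The listing should look roughly like the below, with a controller Pod and one speaker Pod per node (illustrative output, not an exact capture; the generated name suffixes will differ).
kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-65895b47d4-9zgqn   1/1     Running   0          2m
speaker-8g25j                 1/1     Running   0          2m
speaker-xr2vz                 1/1     Running   0          2m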


Step 5: Apply the configuration for the MetalLB installation using the 'kubectl apply -f layer2-config.yaml' command. In this file we specify the range of IP addresses from which the Load Balancer will get its IP address. Don't forget to change the IP addresses in the file based on the network the Laptop is connected to.


Step 6: Now that the MetalLB setup is done, recreate the Deployment and the Service using 'kubectl apply -f nginxlb.yaml'. Notice in the output of the 'kubectl get svc' command that the EXTERNAL-IP is 192.168.0.31 and not <pending> as was the case earlier.


Step 7: Now that the Load Balancer has been created, open the 192.168.0.31:8080 URL in a browser to get the below nginx page. Change the IP address based on the output of the above step. If we get the below page, we have successfully set up a Service of Type LoadBalancer.
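The same check can also be done from a terminal using curl; the nginx welcome page HTML should come back (assuming the Load Balancer got the 192.168.0.31 address as above).
curl http://192.168.0.31:8080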


Step 8: Finally, delete the nginx Deployment/Service and the MetalLB-related components using the 'kubectl delete -f nginxlb.yaml' and 'kubectl delete -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml' commands. This step is optional, in case you want to clean everything up.


Conclusion


By default, in a non-Cloud environment, when we create a Service with Type set to LoadBalancer, the EXTERNAL-IP stays at <pending> as a Load Balancer cannot be created. In this blog we have seen how to configure a Layer 2 Load Balancer (MetalLB) on the Laptop to get a Load Balancer created. Below are the screenshots before and after the MetalLB setup. Note the value of the EXTERNAL-IP.

Before

After

Thursday, March 21, 2019

Installing Istio and Bookinfo Application on K8S

Previously in this blog we explored the different ways of getting started with K8S: using Play-With-Kubernetes, Minikube and, finally, easily installing K8S using a combination of VirtualBox and Vagrant.

Now we will install Istio on top of the existing K8S setup and deploy a Microservices-based application called Bookinfo on top of it. Once the setup of Istio is done, we should be able to explore the different features of Istio. The Bookinfo Application is a polyglot application using Python, Java, Ruby and NodeJS. The various Microservices and how they interact are detailed here.

So, what is a service mesh and what is Istio? Istio is an Open Source implementation of a service mesh. While K8S provides orchestration of the Containers, Istio is used for the management of the services created by these Containers in a Microservices-based Architecture. More about Istio and service mesh here (1, 2, 3). As we explore the different features of Istio in the upcoming blogs, it will become clearer what a service mesh is all about in the context of a Microservices-based architecture.

Istio is not the only implementation of a service mesh, as mentioned here. Google uses Istio, while AWS uses App Mesh. Both of them are built on top of the Envoy proxy.

Let's jump into the installation of Istio and of the Bookinfo Microservices-based application on top of it. We will be following the steps mentioned here and here.

Step 1: Download Istio using the 'curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.1.0 sh -' command. It will create an 'istio-1.1.0' folder with the below structure.


Step 2: Install the Istio CRD using the `for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done` command.



Step 3: Install the Istio binaries using the 'kubectl apply -f install/kubernetes/istio-demo-auth.yaml' command.



Step 4: Verify the Istio installation using the 'kubectl get svc -n istio-system' and 'kubectl get pods -n istio-system' commands. All the services should be created and the pods should be in the Running or Completed status, as shown below.

Now we are done with the Istio setup. Note that there are a couple of different ways of installing Istio, but this is the easiest.



Step 5: Run the below commands to create a namespace called 'my-namespace' and make it the default namespace in the current context. The current context name 'kubernetes-admin@kubernetes' in the third command has to be modified based on the output of the second command.

a) kubectl delete namespaces my-namespace;kubectl create namespace my-namespace
b) kubectl config current-context
c) kubectl config set-context kubernetes-admin@kubernetes --namespace=my-namespace


Step 6: The Istio sidecar can be injected into the application either manually or automatically. We will look at the automatic way. Label the namespace with the 'kubectl label namespace my-namespace istio-injection=enabled' command. With this label, any application deployed into this namespace will have the Istio sidecar injected automatically.
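To confirm that the label has been applied, the namespaces can be listed along with the label (a quick check, not in the original steps); the ISTIO-INJECTION column should show 'enabled' against my-namespace.
kubectl get namespace -L istio-injection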


Step 7: Deploy the Bookinfo application using the 'kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml' command. Make sure to be in the Istio folder as shown below.


Check the status of the pods (kubectl get pods), they should be in the Running status after a few minutes.


To confirm that the application is running, make a call to the Bookinfo webpage from one of the pods using the below command.

kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"

The output should be as shown below, which means that the application has been deployed successfully. Note that it says 2/2 in the above screenshot. Why is that? One is the main application container and the other is the Envoy proxy container injected by Istio. Without the label on the namespace it would say 1/1, as the Envoy proxy container would not be injected by Istio.
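To see the two containers for ourselves, the container names in one of the Pods can be printed using the same jsonpath trick as above; the sidecar injected by Istio shows up as istio-proxy, so this should print 'ratings istio-proxy'.
kubectl get pod $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -o jsonpath='{.spec.containers[*].name}'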


Step 8: An overlay network is created by default and the Bookinfo webpage is only accessible within this network, not from outside. To get around this we have to use port forwarding, using the 'kubectl port-forward --address 0.0.0.0 pod/productpage-v1-6b6798cb84-v6l7p 1234:9080' command. The pod name in this command should be changed; it can be got by running the 'kubectl get pods' command.
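As a variation, the pod name lookup can be folded into the command itself (a convenience, not from the original steps), so nothing has to be copied by hand:
kubectl port-forward --address 0.0.0.0 $(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}') 1234:9080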


Instead of port forwarding we could have used an Istio Gateway as mentioned here. But that automatically creates a load balancer, which is not possible on the local machine, only in the Cloud. So, we are using port forwarding as mentioned above.

Step 9: Now the Bookinfo webpage can be accessed from the browser (192.168.0.101:1234/productpage). Note that the IP address has to be modified to match the IP address of any of the nodes in the K8S cluster.


Step 10: Use the below commands to cleanup the Bookinfo application and Istio.

samples/bookinfo/platform/kube/cleanup.sh

kubectl delete -f install/kubernetes/istio-demo-auth.yaml
for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl delete -f $i; done




Conclusion


In this blog we looked at the steps required for installing Istio and then the Bookinfo application on top of K8S. It's not too difficult to install Istio as seen above, but the Cloud vendors providing managed K8S make it even easier, with a single-click installation of Istio or any other service mesh.

In future blogs, we will explore the different features of Istio in a bit more detail using the Bookinfo or some other application; this will make it clearer what Istio and service mesh are all about.

Monday, March 18, 2019

Quickly and easily installing K8S on the local machine

In the previous blog here we have seen how to get started with K8S easily, with zero installation, using Play-With-Kubernetes (PWK). Everything happens on remote machines, so there is nothing to install on the local machine. We can get started with K8S in less than 5-10 minutes using PWK. The main con of PWK is that the session is available only for 4 hours, and any modifications to the K8S cluster are lost afterwards.

One easy way to use K8S locally is Minikube as mentioned here, but it provides a single-node cluster, which makes it tough to test the different failure scenarios, like a node going down, and a few other things.

In this blog we will install a multi-node K8S cluster locally on the laptop as mentioned here, so that the changes are persisted across sessions and we can continue from where we left off. K8S-the-hard-way sets up a cluster from scratch, but it takes time and expertise. So, there are tools like kubeadm which abstract away the details and make the installation process easier.

With kubeadm there is a sequence of steps to install a multi-node cluster on a laptop, and for those who are new to Linux, it might be a pain. So, I was trying to figure out if the installation process using kubeadm could be automated using Vagrant. I tried for a couple of hours, got stuck and gave up. Then, luckily, I found a ready-made Vagrantfile in this article, which made the K8S installation process a breeze.

On a side note, a multi-node K8S cluster can be run in the Cloud, but not everyone is comfortable with the Cloud, so here are the steps using VirtualBox and Vagrant on the local machine.

Step 1: Download and install the latest version of VirtualBox and then Vagrant. For Vagrant, you might have to restart the OS. The installation is as straightforward as installing any other Windows software.

Step 2: Make a folder on the laptop and create a Vagrantfile with the content from here. If required, the amount of memory and the number of CPU cores can be modified in this file.

Step 3: Go to the above created folder and run the 'vagrant up' command from the Command Prompt. It takes a couple of minutes to create the Virtual Machines in VirtualBox and to download and install K8S and the required binaries. The end screen will appear as shown below.


And the Virtual Machines (k8s-head, k8s-node-1 and k8s-node-2) will appear as shown below. We are all set with the K8S installation. It's a piece of cake; installing software had never been this easy.


Step 4: K8S follows a master-slave architecture. Login to the master using 'vagrant ssh k8s-head' and run 'kubectl get nodes' to make sure all the nodes are in the Ready status.
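The output should look roughly like the below (illustrative; the ages and versions will differ based on when and what was installed).
kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-head     Ready    master   7m    v1.13.4
k8s-node-1   Ready    <none>   5m    v1.13.4
k8s-node-2   Ready    <none>   4m    v1.13.4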



Step 5: Now let's create a deployment using 'kubectl run nginx --image=nginx -r=4' and make sure it has been deployed using the 'kubectl get deployment' and 'kubectl get pods' commands.




Step 6: If we want to destroy the cluster, run the 'vagrant destroy -f' command from the earlier created folder and the Virtual Machines will be shut down and deleted.


Step 7: During the installation, if something goes wrong, it will be displayed on the screen and more details will be logged to the 'ubuntu-xenial-16.04-cloudimg-console.log' file in the same folder.


As seen above, all it takes is a couple of steps to create a multi-node K8S cluster on the laptop. Now you should be all set to get started and explore the world of K8S. Further nodes can be added by modifying the Vagrantfile and running the 'vagrant up' command again.

In the upcoming blogs, we will install additional packages and applications on top of the above K8S cluster and try different things with them.

Note: Joserra in the comments points to the K8S blog on the same topic here. This blog uses VirtualBox and Vagrant, while the K8S blog uses Ansible to run the commands in the VMs on top of VirtualBox and Vagrant. The end result of both is the same.

Monday, March 11, 2019

Getting started with K8S the easy way using 'Play with Kubernetes'

There are many ways of installing K8S as mentioned here. It can be installed in the Cloud, on-premise and also locally on the laptop using virtualization. But installing K8S has never been easy. In this blog, we will look at one of the easiest ways to get started with K8S, using Play with Kubernetes (PWK). With PWK the whole K8S experience is within the browser and there is nothing to install on the laptop; everything is installed on a remote machine. PWK uses 'Docker in Docker', which is detailed here (1, 2).

Step 1: Go to https://labs.play-with-k8s.com/, login and click on Start. A Docker or a GitHub login is required for the same.


Step 2: PWK allows up to 5 nodes or machines. Click on 'ADD NEW INSTANCE' 5 times and this will add 5 instances, shown below as node1 to node5. Here we will configure node1 as the master and the remaining as workers.

Clicking on a node in the left pane gives access to the corresponding terminal in the bottom right pane. The 'Alt+Enter' combination maximizes the terminal.



Step 3: Run the 'kubeadm config images pull' command on node1. This will pull all the images required for the installation before the actual installation starts in the next step. This is an optional step, but it makes the installation faster.


Step 4: Initialize the master on node1 using the 'kubeadm init --apiserver-advertise-address $(hostname -i)' command. The output of the command should be as shown below. Note down the 'kubeadm join .....' command from the output of this command.
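In case the join command scrolls away or its token expires (the tokens are short-lived), it can be regenerated on the master with the below; this is an extra tip, not part of the original steps.
kubeadm token create --print-join-command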



Step 5: Now is the time to deploy the Pod network using the below command on node1.

kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |tr -d '\n')"


Step 6: Execute the 'kubeadm join ......' command noted down in Step 4 on all the workers (node2, node3, node4 and node5). On each of the nodes, 'This node has joined the cluster' will be displayed towards the end of the output.



Step 7: After a few minutes, run 'kubectl get nodes' on the master node (node1) and all the nodes should be in the Ready status. This makes sure that our 5-node K8S cluster is ready.


Step 8: Let's create a K8S Deployment with 4 replicas of the nginx server by running 'kubectl run nginx --image=nginx -r=4' on the master node (node1). Initially the status of the Containers will be 'ContainerCreating', but in a few seconds it will change to Running.



Step 9: Get the detailed status of the Pods using the 'kubectl get pods -o wide' command. This will show that the Pods are balanced across all the nodes.

The K8S Deployment object maintains a fixed number of Pods. Delete one of the Pods using 'kubectl delete pod NAME-OF-THE-POD', as in the example below. Notice that the Pod will be deleted and a new Pod automatically created. This can be observed by running the 'kubectl get pods -o wide' command again; the name of the deleted Pod will have changed.
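As a concrete example, the below deletes the first nginx Pod by looking up its name (Pods created via 'kubectl run nginx' carry the run=nginx label) and then lists the Pods again to show the replacement.
kubectl delete pod $(kubectl get pods -l run=nginx -o jsonpath='{.items[0].metadata.name}')
kubectl get pods -o wide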


The K8S session is available for 4 hours, and any resources/settings created will be lost after the session; the changes to the cluster won't be persisted. There are a few such disadvantages of using PWK, but the good thing is that it's free and requires no installation on the local machine.

In the upcoming blogs, we will explore the other ways of installing K8S. Also, check out Katacoda. It offers K8S in the browser, similar to PWK.

Friday, March 1, 2019

Webinar to know about CKAD and CKA Kubernetes Certifications

Kubernetes is all about orchestrating Microservices. Instead of repeating what it's all about, here is the home page for Kubernetes with more details. CNCF offers the CKAD and CKA certifications around Kubernetes. While CKAD is more from a developer's perspective, CKA is from an administrator's perspective; of the two, CKA is a bit tougher than CKAD. While most certifications are theoretical, the Kubernetes Certifications are practical: a set of tasks has to be completed in a given time on a Kubernetes cluster. So, hands-on practice is pretty much required for the Certifications.

Here is a recorded webinar from CNCF on getting started with the Certifications. I was preparing for the Kubernetes certification, but got deviated. I am planning to get back to the Certifications again, and will write a detailed blog on them once I get through.

Paper on Serverless Computing from Berkeley

Cloud Computing moves MOST of the administration from the Cloud consumers to the Cloud providers. There is no need to think about procuring hardware, networking, cooling, physical security etc. Serverless moves in the same direction, taking away even more administration from the Cloud consumers.

The name 'Serverless' is a bit of a misnomer, as there are still servers involved. It is just that the Cloud consumers need not think in terms of Servers. Take the example of FAAS (Function-As-A-Service). Here is the sequence of steps; nowhere is a SERVER mentioned.

- Write and test a function
- Package the function
- Deploy the package to the Cloud
- Associate an event with the function (to be invoked automatically) or provide an API Gateway (to be invoked programmatically)

There is no mention of a SERVER in the above, hence the name Serverless. The good thing about FAAS is that it scales automatically and there is no need to pay when the function is not invoked, which is not the case with IAAS, PAAS and SAAS. We pay based on the number of function invocations and the amount of resources consumed.
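As a rough sketch of those steps using the AWS CLI (all the names here, like shrink-image, handler.py and the role ARN, are hypothetical placeholders):

# Package the function
zip function.zip handler.py
# Deploy the package to the Cloud
aws lambda create-function --function-name shrink-image \
    --runtime python3.7 --handler handler.main \
    --role arn:aws:iam::123456789012:role/lambda-role \
    --zip-file fileb://function.zip
# Invoke it programmatically (an event source like S3 could be wired up instead)
aws lambda invoke --function-name shrink-image output.json

Associating the S3 event would be one additional call (put-bucket-notification-configuration), omitted here for brevity; again, no server anywhere in the flow.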

Serverless still has a long way to go, but applications can be built end-to-end without thinking about Servers, hence Serverless. Here is a recent good read, Cloud Programming Simplified: A Berkeley View on Serverless Computing, about the pros, cons, challenges, research areas and, finally, predictions for Serverless computing.

Also, to get a hang of FAAS, here is a blog I have written using AWS Lambda to trigger a Java function which shrinks an image as soon as it has been uploaded to AWS S3.