Wednesday, April 8, 2020

How does the K8S cluster get bootstrapped?

Although the different Cloud vendors provide managed services like AWS EKS, GCP GKE, Azure AKS and others, nothing beats running K8S on the local machine. Not only can we get started quickly, it's absolutely free and provides the ultimate freedom for any experimentation. Here is the setup I have using Oracle VirtualBox on Windows 10 OS.


It's a 3 node K8S cluster with one Control Plane node to do the orchestration and scheduling of the containers and two Worker nodes for the execution of the containers. Recently, I upgraded to the latest version of K8S (1.18 as of this writing) and was able to try some of the new features.


The command 'kubectl get pods --all-namespaces -o wide' lists the Pods in all the namespaces. Below is the screenshot listing the Pods on the Control Plane and the Worker nodes. I was curious about how these Pods get started; digging into it gives us a chance to check out the initialization parameters, tweak them and also enable/disable features. That's what this blog is all about. Note that the instructions are specific to an installation using kubeadm and differ a bit for other installation processes.
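Here is the command again in runnable form (on a live cluster, the -o wide flag adds the node name and Pod IP columns, which is how we can tell which Pods run on the Control Plane node versus the Workers):

```shell
# List every Pod in every namespace, with node and Pod IP columns
kubectl get pods --all-namespaces -o wide

# Narrow the listing to just the system Pods discussed below
kubectl get pods -n kube-system -o wide
```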


I was poking around in the K8S cluster and this StackOverflow Query (1) helped me. Below is the workflow for how the K8S cluster gets bootstrapped. It all starts with the kubelet getting started automatically as a systemd service on all the Control Plane and Worker nodes, which starts the minimum required static Pods for K8S to start working. Once the K8S Cluster boots up, additional Pods are started to get the K8S cluster into the desired state as stored in the etcd database.


Here is the workflow in a bit more detail:


- Below is how the kubelet gets started as a systemd Service. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf has the command to start the kubelet process and also the initialization parameters. Note that one of the parameters is the /var/lib/kubelet/config.yaml location.
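We can verify this on any of the nodes (the exact contents of the drop-in file vary a bit by kubeadm version, but it always points the kubelet at its config file):

```shell
# Confirm the kubelet is running as a systemd service
systemctl status kubelet

# The drop-in file installed by kubeadm; note the flag that
# references /var/lib/kubelet/config.yaml
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```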


- In the /var/lib/kubelet/config.yaml file, the staticPodPath variable is set to the /etc/kubernetes/manifests path, which has the yaml files for the static Pods to be started once the kubelet starts. At this point the apiserver, scheduler and etcd haven't started yet, so the kubelet starts them and manages them. Although these Pods are visible to the apiserver later, the apiserver doesn't manage them; the kubelet is the one which manages them.
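A quick grep on the node shows the setting in question:

```shell
# Where the kubelet watches for static Pod manifests
grep staticPodPath /var/lib/kubelet/config.yaml
# staticPodPath: /etc/kubernetes/manifests
```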


- In the /etc/kubernetes/manifests folder we have the yaml definitions for the etcd, apiserver, controller-manager and the scheduler. These files help us understand how the K8S system Pods are initialized.
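Listing the folder on the Control Plane node shows the four manifests (these are the standard file names kubeadm generates):

```shell
# The static Pod manifests the kubelet picks up on boot
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
```

Editing any of these files is how the initialization parameters of the corresponding component can be tweaked; the kubelet notices the change and restarts the static Pod.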


- OK, how about the coredns, flannel and proxy Pods getting started? The coredns and the proxy Pods are created by kubeadm during the K8S cluster initialization phase (1, 2). The flannel Pods were created by me manually while setting up the K8S cluster (1). The details of these are stored in the etcd database, as with any other user created K8S objects, and K8S automatically starts them once the cluster starts.
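Unlike the static Pods, these show up as regular workload objects in the kube-system namespace. A sketch of how to confirm that (the flannel DaemonSet name is an assumption here, as it depends on the flannel manifest version that was applied):

```shell
# coredns runs as a Deployment, kube-proxy as a DaemonSet,
# both created by kubeadm during cluster init
kubectl -n kube-system get deployment coredns
kubectl -n kube-system get daemonset kube-proxy

# flannel was applied manually; it also runs as a DaemonSet
# (name varies by manifest version, e.g. kube-flannel-ds)
kubectl -n kube-system get daemonset kube-flannel-ds
```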


Mystery solved. Now that we know how the K8S cluster gets bootstrapped, more can be explored. To use and administer K8S effectively, it helps to know the bootstrap process in a bit more detail.
