Tuesday, April 20, 2021

Installing K8S on AWS EC2 and connecting via Lens

There are tons of ways of setting up K8S on AWS. Today we will see one of the easiest ways to get started with K8S on AWS. The good thing is that we will be using the t2.micro instance type, which falls under the AWS free tier. This configuration is good enough to get started with K8S, but not for a production setup. The post assumes that the reader is familiar with the basic concepts of AWS.

Step 1: Create a SecurityGroup which allows all traffic inbound and outbound as shown below. This is not good practice, but it is OK for the sake of a demo and practicing K8S. Also, make sure to create a KeyPair.
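For those who prefer the AWS CLI over the console, the SecurityGroup and KeyPair can be created roughly as follows. This is a sketch, assuming the AWS CLI is configured with appropriate credentials and a default VPC; the names "k8s-demo" are arbitrary.

```shell
# Create a demo security group (wide open - demo only!)
aws ec2 create-security-group --group-name k8s-demo \
    --description "All traffic open - K8S demo only"

# Allow all inbound traffic from anywhere (not for production)
aws ec2 authorize-security-group-ingress --group-name k8s-demo \
    --protocol all --cidr 0.0.0.0/0

# Create a KeyPair and save the private key for SSH access
aws ec2 create-key-pair --key-name k8s-demo \
    --query KeyMaterial --output text > k8s-demo.pem
chmod 400 k8s-demo.pem
```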

Step 2: Create 3 Ubuntu instances with t2.micro as the instance type. Make sure to attach the SecurityGroup created above, and the KeyPair for connecting to the EC2 instances later. Name the instances ControlPlane, Worker1 and Worker2 to avoid any confusion.

Step 3: Create an Elastic IP and assign it to the ControlPlane EC2 instance, or else the external IP address of the instance might change on reboot and we won't be able to connect from our laptop, which is outside the VPC.
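The same can be done from the AWS CLI; a minimal sketch (the instance and allocation IDs below are placeholders - use the ones returned in your own account):

```shell
# Allocate a new Elastic IP in the VPC; note the AllocationId in the output
aws ec2 allocate-address --domain vpc

# Attach the Elastic IP to the ControlPlane instance
aws ec2 associate-address --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0abc123def456789a
```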

Step 4: Connect to the EC2 instances using Putty or some other SSH client. Here I have set up tmux panes for the 3 EC2 instances: the left pane is for the Control Plane and the right panes are for the Worker EC2 instances. tmux has a cool feature called "synchronize-panes", as mentioned in the StackOverflow response here (1). Enter a command in one pane and it will automatically be replayed in the other panes. If you are not comfortable with tmux, simply open multiple Putty sessions to the EC2 instances.
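As a rough sketch, the pane layout and the synchronize-panes feature described above can be set up like this (the session name and pane arrangement are a matter of taste):

```shell
# One session: left pane for the Control Plane, two right panes for the workers
tmux new-session -d -s k8s
tmux split-window -h -t k8s
tmux split-window -v -t k8s
tmux attach -t k8s

# After SSH-ing to one instance from each pane, mirror keystrokes to all panes
# (run from inside the session, or type it at the tmux ':' command prompt)
tmux set-window-option synchronize-panes on
```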

Step 5: On the Control Plane and the Worker instances, execute the below commands. This is where the above-mentioned tmux feature comes in handy.

#Update Ubuntu
sudo su
apt-get update
apt-get dist-upgrade -y

#Install Docker
apt install docker.io -y
systemctl enable docker
usermod -a -G docker ubuntu

#Add Google K8S repo and install kubeadm
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
apt install kubeadm -y

#Pull K8S Docker images (makes the installation faster later)
kubeadm config images pull

Step 6: On the Control Plane instance execute the below commands.

#Initialize the Control Plane. It will take a few minutes.
#note down the complete "kubeadm join ....." command from the output
kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU

#Setup the K8S configuration for the Ubuntu user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

#Install the Flannel overlay network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Step 7: On both the Worker EC2 instances, execute the "kubeadm join ....." command noted down earlier so that the Worker EC2 instances become part of the K8S Cluster.
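The join command printed by kubeadm init looks roughly like the one below; the address, token and hash here are placeholders, so copy the exact command from your own output. If it was lost, running "kubeadm token create --print-join-command" on the Control Plane regenerates it.

```shell
# Placeholder values - use the real command from the kubeadm init output
kubeadm join 172.31.5.4:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash-from-init-output>
```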

Step 8: Go back to the Control Plane and execute the below commands to make sure the Cluster is set up properly.

#Make sure all the nodes are in a Ready state
kubectl get nodes

#Make sure all the pods are in a running state
kubectl get pods --all-namespaces

Step 9: On the Control Plane, create a dep.yaml file with the below yaml content and create a deployment with the "kubectl apply -f dep.yaml" command. Get the status of the deployment/pods using the "kubectl get deployments" and "kubectl get pods" commands.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Step 10: Now let's try connecting Lens to the K8S Cluster. Connect to the Control Plane EC2 instance and execute the below commands to regenerate the certificates. Make sure to replace the IP addresses with the Public and Private IP addresses of the Control Plane EC2 instance.

sudo su
rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs all --apiserver-advertise-address=<PRIVATE_IP> --apiserver-cert-extra-sans=<PUBLIC_IP>,<PRIVATE_IP>
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
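To check that the regenerated certificate actually carries the new IP addresses, its Subject Alternative Names can be inspected with openssl. The path below is the kubeadm default, and the -ext option assumes OpenSSL 1.1.1 or newer.

```shell
# List the SANs on the freshly generated API server certificate;
# both the Public and Private IPs should appear in the output
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -ext subjectAltName
```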

Step 11: Copy the content of the ".kube/config" file from the Control Plane to the laptop and save it as a file. This file has all the details needed to connect to the K8S Cluster. Replace the server IP in it with the external IP of the Control Plane EC2 instance.

Step 12: Download Lens from the Lens website and install it. Add a Cluster by pointing to the K8S config file created earlier. Go to the Cluster properties in Lens and install Metrics.

In a few seconds, Lens will be gathering the details and metrics from the K8S Cluster on AWS. Note that, as of now, there is not much load on the EC2 instances.

(List of nodes)

(Deployment which was created earlier)

(Pods which were created earlier)

(kubectl commands via Lens on the Control Plane)


It's not that difficult to set up K8S on AWS using kubeadm. We haven't really considered security and performance, though; this setup is good enough to get started with K8S on AWS. Since we are installing K8S manually, we are responsible for HA, scalability, upgrades etc. This is where managed services like AWS EKS come into play.
