Wednesday, April 10, 2019

Setting up an AWS EKS Cluster with Rancher

In the previous blog we explored setting up a K8S Cluster on the AWS Cloud without using any additional software or tools. The Cloud Providers make it easy to create a K8S Cluster in the Cloud. The tough part is securing, fine tuning, upgrading, access management etc. Rancher provides centralized management of different K8S Clusters; these can be in any of the Clouds (AWS, GCP, Azure) or On-Prem. More on what Rancher has to offer on top of K8S here. The good thing about Rancher is that it's 100% Open Source and there is no Vendor Lock-in. We can simply remove Rancher and interact with the K8S Cluster directly.

Rancher also allows creating K8S Clusters in different environments. In this blog we will look at installing a K8S Cluster on the AWS Cloud in a Quick and Dirty way (not ready for production). Here are the instructions for the same. The only prerequisite is access to a Linux OS with Docker installed on it.

Steps for creating a K8S Cluster on AWS using Rancher


Step 1: Log in to the Linux console and run the Rancher container using the below command.

sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
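
To confirm the container came up, a quick check like the one below can be used (a minimal sketch, assuming Docker is running on the same machine). The Rancher Console will then be reachable at https://<linux-host-ip>.

sudo docker ps --filter "ancestor=rancher/rancher"

The STATUS column should show the container as Up before moving on to the next step.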


Step 2: Access the Rancher Console in a browser. The browser shows a warning because the default Rancher install uses a self-signed certificate. Click on "Advanced" and then "Accept the Risk and Continue".



Step 3: Set up an admin password to access the Rancher Console.


Step 4: Click on "Save URL".


Step 5: Click on "Add Cluster" and select "Amazon EKS".



Step 6: Select the Region and provide the AWS Access and Secret Key. The same can be generated as mentioned in the "Generate access keys for programmatic access" section here. Click on "Next: Configure Cluster".
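
For those who prefer the AWS CLI over the Management Console, an access key pair can also be generated as below (a sketch; rancher-eks-user is a hypothetical IAM user that must have the permissions required to create EKS resources).

aws iam create-access-key --user-name rancher-eks-user

The output contains the AccessKeyId and SecretAccessKey to be pasted into the Rancher screen. Note that the Secret Key is shown only once, so store it safely.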


Step 7: Click on "Next: Select VPC & Subnet"


Step 8: Click on "Next: Select Instance Options"


Step 9: There is a small catch here. In this screen we specify the minimum and the maximum number of nodes in the ASG (Auto Scaling Group), but nowhere do we specify the default (desired) size to start with. So, when the Cluster is started, it creates 3 EC2 instances by default. There is an issue in the Rancher GitHub project for the same and it's still open. One way to check the actual desired capacity is sketched below, after Create is clicked.

Click on Create.
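
Once provisioning starts, the desired capacity Rancher actually set can be inspected with the AWS CLI (a sketch, assuming the CLI is configured for the same account and Region):

aws autoscaling describe-auto-scaling-groups \
    --query "AutoScalingGroups[].{Name:AutoScalingGroupName,Min:MinSize,Max:MaxSize,Desired:DesiredCapacity}" \
    --output table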


Step 10: Initially the Cluster will be in a Provisioning State and after 10-15 minutes it will change to Active State.



During the Cluster creation process, the following will be created. The same can be observed by going to the AWS Management Console for the appropriate Service, or with the AWS CLI as sketched after the list.

          1) CloudFormation Stacks
          2) EC2 instances
          3) VPC
          4) EKS Control Plane
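
The same four resource types can also be listed from the AWS CLI (a sketch, assuming the CLI is configured for the same account and Region):

aws cloudformation describe-stacks --query "Stacks[].StackName"
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].InstanceId"
aws ec2 describe-vpcs --query "Vpcs[].VpcId"
aws eks list-clusters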

Step 11: Go through the different screens in the Rancher Console.



Step 12: Once you have spent some time with the Rancher Console, select the AWS Cluster created in the above sequence of steps and delete it. After a few minutes the Cluster should be gone from the Rancher Console.



Step 13: Although the Rancher Console says that the Cluster is deleted, there might be some resources hanging around in AWS as shown below. Not sure where the bug is. But, if there are any resources in AWS still not deleted, they might add to the AWS monthly bill. So, it's always better to go to the AWS Management Console for CloudFormation, EC2, S3, VPC, EKS and make sure all the AWS resources are deleted.
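
A quick sweep for leftovers can also be done from the AWS CLI (a sketch; the stack name below is a placeholder for whatever is still listed):

aws eks list-clusters
aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE DELETE_FAILED
aws cloudformation delete-stack --stack-name <leftover-stack-name>

The first two commands should come back empty once everything is cleaned up; the last one removes any Stack that is still lingering.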


Conclusion


An AWS EKS Cluster can be easily created using Rancher as mentioned above. The steps are more or less the same for creating a Cluster on Azure and GCP also. Rancher also allows importing an existing K8S Cluster. Once all the Clusters are within Rancher, they can be managed centrally, policies can be applied etc. It makes life easy when there are multiple K8S Clusters to manage within an organization.

As seen above, there are a few rough edges in the installation process. First, there is no way to specify the default number of EC2 worker nodes in the Cluster. Also, not all the AWS resources are deleted when the EKS Cluster is deleted through Rancher, which might incur additional cost. These things might be fixed in future releases of Rancher.

Also, note that this is a quick and dirty way of installing a K8S Cluster on AWS using Rancher and is not ready for Production, as this particular setup doesn't support High Availability. Here are instructions on how to set up an HA Cluster using Rancher.

Wednesday, April 3, 2019

K8S Cluster on AWS using EKS

As mentioned in the previous blogs, there are different ways of getting started with K8S (Minikube, Play With K8S, K3S, Kubeadm, Kops etc). More of them are listed here in the K8S documentation. Some of them involve running K8S on the local machine and some in the Cloud. AWS EKS, Google GKE and Azure AKS are a few of the managed K8S services in the Cloud.

By following the instructions as mentioned here, it takes about 20 minutes to set up and access the K8S cluster. Below is the output at the end of following the instructions.

1) As usual AWS uses CloudFormation for automating tasks around the Cluster setup. There would be two Stacks, one for creating a new VPC where the K8S cluster would be running and the other for creating the Worker nodes, which we have to manage.


2) A new VPC with the Subnets, Routing Tables, Internet Gateways, Security Groups etc. These are automatically created by the CloudFormation template.


3) An EKS Cluster which manages the components in the Control Plane (K8S master) and makes sure they are Highly Available.
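
Once the Control Plane is Active, kubectl can be pointed at it. The below is a minimal sketch, assuming the AWS CLI (1.16.x or later) and aws-iam-authenticator are installed; <cluster-name> and <region> are placeholders:

aws eks update-kubeconfig --name <cluster-name> --region <region>
kubectl get svc

The first command writes an entry into ~/.kube/config and the second is a quick sanity check against the Control Plane.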


4) EC2 instances again created by the CloudFormation for the K8S Workers. By default 4 EC2 instances are created by the CloudFormation, but it can be reduced to 1 as we won't be running a heavy load on the cluster. This can be changed in the CloudFormation template by changing the default instances for the Auto Scaling group from 4 to 1. I have changed it to 2, hence the number of EC2 instances.
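
If the Stack was already created with the defaults, the Auto Scaling group itself can also be shrunk afterwards (a sketch; <worker-asg-name> is a placeholder for the ASG created by the Worker Stack):

aws autoscaling update-auto-scaling-group --auto-scaling-group-name <worker-asg-name> --min-size 1 --desired-capacity 1

The extra EC2 instances are terminated within a few minutes of running this.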


5) Service roles, created both manually and by the CloudFormation Stack.


6) Once the Guestbook application has been deployed on the AWS K8S Cluster, the same can be accessed as shown below.
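
The external endpoint for the application can be fetched with kubectl (a sketch, assuming the Service is named guestbook as in the example used by the guide):

kubectl get services guestbook -o wide

The EXTERNAL-IP and PORT from the output are what go into the browser.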


7) We can get the nodes, pods and namespaces as shown below. Note that an alias has been set for the kubectl command.
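
The alias itself is not shown in the screenshots; a common choice is the below (add it to ~/.bashrc to make it permanent):

alias k=kubectl
k get nodes
k get pods --all-namespaces
k get namespaces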

 

A few things to keep in mind


1) As mentioned above, the default number of Worker nodes can be decreased to 1 while creating the Stack for the K8S Workers. Saves $$$.

2) There were a few errors while creating the CloudFormation Stack for the VPC, as shown below. This error happens in a region where the default VPC was deleted and created again; otherwise it works fine. Even after a bit of debugging, the cause of the error was not obvious.


A quick workaround for the above problem, just in case someone gets stuck on it, is to use another region which still has the default VPC created by AWS, or to hard code the Availability Zone in the CloudFormation template as shown below.
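
The fragment below illustrates the idea. It is only a sketch of a Subnet resource from the sample VPC template; us-east-1a is a hypothetical zone and the resource name may differ in the actual template:

Subnet01:
  Type: AWS::EC2::Subnet
  Properties:
    # Original: pick the first AZ of the region dynamically
    # AvailabilityZone: !Select [ '0', !GetAZs '' ]
    AvailabilityZone: us-east-1a   # hard-coded workaround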


3) Don't forget to delete the Guestbook application and the AWS EKS Cluster.
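
A clean-up sketch (all the names are placeholders; the Guestbook manifests are the ones applied earlier):

kubectl delete -f <guestbook-manifests>
aws cloudformation delete-stack --stack-name <worker-stack-name>
aws eks delete-cluster --name <cluster-name>
aws cloudformation delete-stack --stack-name <vpc-stack-name>

Deleting the Worker Stack first, then the EKS Cluster and finally the VPC Stack avoids dependency errors.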

Conclusion


Just to conclude, it takes around 20 minutes to set up a Cluster. Instead of repeating the steps in this blog, here again are the steps from the AWS Documentation to get started with a K8S Cluster on AWS.

For the sake of learning and trying out different things, I would prefer to set up the K8S Cluster on the local machine as it is faster to start the Cluster and quickly try out a few things. Also, the Cloud Vendors are a bit behind on the K8S version. As of this writing, AWS EKS supports K8S Version 1.12 and lower, which is about six months old, while the latest K8S Version is 1.14, which was released recently.