In the previous blog we explored setting up a K8S Cluster on the AWS Cloud without using any additional software or tools. The Cloud Providers make it easy to create a K8S Cluster in the Cloud; the tough part is securing it, fine-tuning it, upgrading it, managing access, and so on. Rancher provides centralized management of different K8S Clusters, which can be in any of the Clouds (AWS, GCP, Azure) or on-prem. More on what Rancher has to offer on top of K8S here. The good thing about Rancher is that it's 100% Open Source and there is no Vendor Lock-in: we can simply remove Rancher and interact with the K8S Cluster directly.
Rancher also allows creating K8S Clusters in different environments. In this blog we will look at installing a K8S Cluster on the AWS Cloud in a quick-and-dirty way (not ready for production). Here are the instructions for the same. The only prerequisite is access to a Linux OS with Docker installed on it.
Steps for creating a K8S Cluster on AWS using Rancher
Step 1: Log in to the Linux Console and run the Rancher Container using the command below. Note that newer Rancher releases (v2.5 and later) may additionally require the --privileged flag.
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
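Rancher can take a minute or two to come up after the container starts. The sketch below is a hypothetical helper (not part of Rancher) that polls the local HTTPS endpoint until it answers; curl's -k flag skips certificate verification because the default install ships a self-signed certificate.

```shell
# Hypothetical helper: poll the local Rancher endpoint until it responds.
# -k skips certificate verification, since the default install uses a
# self-signed certificate (the cause of the browser warning in Step 2).
wait_for_rancher() {
  url="${1:-https://localhost}"
  tries="${2:-30}"
  i=1
  while [ "$i" -le "$tries" ]; do
    if curl -sk -o /dev/null --max-time 5 "$url"; then
      echo "rancher up"
      return 0
    fi
    i=$((i + 1))
    [ "$i" -le "$tries" ] && sleep 5
  done
  echo "rancher not reachable"
  return 1
}
```

With the defaults (30 tries, 5 seconds apart) this waits up to roughly 2.5 minutes before giving up.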
Step 2: Access the Rancher Console in a browser. Since the default install uses a self-signed certificate, the browser will show a security warning; click on "Advanced" and then "Accept the Risk and Continue".
Step 3: Set up an admin password to access the Rancher Console.
Step 4: Click on "Save URL".
Step 5: Click on "Add Cluster" and select "Amazon EKS".
Step 6: Select the Region and provide the AWS Access Key and Secret Key. These can be generated as mentioned in the "Generate access keys for programmatic access" section here. Click on "Next: Configure Cluster".
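A copy-paste mistake with the keys only surfaces later as a cryptic provisioning error, so a quick sanity check before pasting them into Rancher can help. The helper below is purely illustrative: it only checks the shape of the access key ID (20 characters, usually starting with AKIA) and does not validate the key against AWS; for a real check, aws sts get-caller-identity does that.

```shell
# Illustrative sanity check on an access key ID's shape: 20 characters,
# usually starting with "AKIA". This does NOT validate the key with AWS;
# for a real check, run: aws sts get-caller-identity
looks_like_access_key() {
  case "$1" in
    AKIA????????????????) echo "ok" ;;   # AKIA + 16 more characters
    *) echo "suspicious" ;;
  esac
}
```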
Step 7: Click on "Next: Select VPC & Subnet".
Step 8: Click on "Next: Select Instance Options".
Step 9: There is a small catch here. On this screen we specify the minimum and maximum number of nodes in the ASG (Auto Scaling Group), but nowhere do we specify the default size to start with. So, when the Cluster is started, it creates 3 EC2 instances by default. There is an issue in the Rancher GitHub project for this, and it's still open.
Click on Create.
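One hedged workaround for the missing default-size field: after the cluster comes up with 3 nodes, lower the ASG's desired capacity yourself with the AWS CLI. The sketch below builds the call and, by default, only prints it (dry run) so it can be reviewed first; the ASG name used here is a placeholder — look up the real one in the EC2 console under Auto Scaling Groups.

```shell
# Hedged workaround: lower the ASG's desired capacity after creation.
# Dry-run by default: the command is printed, not executed.
scale_asg() {
  asg="$1"
  desired="$2"
  dry_run="${3:-yes}"
  cmd="aws autoscaling update-auto-scaling-group --auto-scaling-group-name $asg --desired-capacity $desired"
  if [ "$dry_run" = "yes" ]; then
    echo "$cmd"   # review the printed command before running it for real
  else
    $cmd
  fi
}

scale_asg "my-eks-workers" 1   # placeholder ASG name; prints the command
```

The desired capacity must stay within the min/max bounds chosen on the Rancher screen, or AWS will reject the call.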
Step 10: Initially the Cluster will be in the Provisioning state, and after 10-15 minutes it will change to the Active state.
During the Cluster creation process, the following will be created. The same can be observed by going to the AWS Management Console for the appropriate Service.
1) CloudFormation Stacks
2) EC2 instances
3) VPC
4) EKS Control Plane
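The Provisioning-to-Active transition above can also be watched from the CLI instead of the Rancher UI. A rough sketch, assuming the AWS CLI is configured for the same account; the optional second argument overrides the status command and exists only so the loop can be exercised without real credentials.

```shell
# Rough sketch: poll the EKS control-plane status while Rancher shows
# "Provisioning". EKS reports CREATING until the control plane is ready.
wait_for_eks() {
  cluster="$1"
  get_status="${2:-aws eks describe-cluster --name $cluster --query cluster.status --output text}"
  while true; do
    status=$($get_status)
    echo "status: $status"
    [ "$status" != "CREATING" ] && break
    sleep 30
  done
}
```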
Step 11: Go through the different screens in the Rancher Console.
Step 12: Once you have spent some time with the Rancher Console, select the AWS Cluster created in the above sequence of steps and delete it. After a few minutes the Cluster should be gone from the Rancher Console.
Step 13: Although the Rancher Console says that the Cluster is deleted, there might be some resources left hanging in AWS. Not sure where the bug is, but any AWS resources that are not deleted will add to the AWS monthly bill. So, it's always better to check the AWS Management Console for CloudFormation, EC2, S3, VPC, and EKS and make sure all the AWS resources are deleted.
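A small script can make that post-delete audit less tedious. The sketch below only prints one AWS CLI command per service mentioned in Step 13, rather than executing anything, so each command can be reviewed and run by hand.

```shell
# Sketch of a post-delete audit: print (not run) one AWS CLI command per
# service from Step 13, so each can be reviewed before executing.
leftover_checks() {
  region="${1:-us-east-1}"
  echo "aws cloudformation list-stacks --region $region --stack-status-filter CREATE_COMPLETE"
  echo "aws ec2 describe-instances --region $region --filters Name=instance-state-name,Values=running"
  echo "aws ec2 describe-vpcs --region $region"
  echo "aws eks list-clusters --region $region"
  echo "aws s3 ls"
}

leftover_checks "us-west-2"   # prints the audit commands for that region
```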
Conclusion
An AWS EKS Cluster can be easily created using Rancher as shown above. The steps are more or less the same for creating a Cluster on Azure and GCP. Rancher also allows importing an existing K8S Cluster. Once all the Clusters are within Rancher, they can be managed centrally, policies can be applied, and so on. This makes things easier when there are multiple K8S Clusters to manage within an organization.
As seen above, there are a few rough edges in the installation process. First, there is no way to specify the default number of EC2 worker nodes in the Cluster. Also, not all the AWS resources are deleted when the EKS Cluster is deleted through Rancher, which might incur additional cost. These things might be fixed in future releases of Rancher.
Also, note that this is a quick-and-dirty way of installing a K8S Cluster on AWS using Rancher and is not ready for Production, as this particular setup doesn't support High Availability. Here are instructions on how to set up an HA Cluster using Rancher.