Monday, July 1, 2019

How to use Tags in AWS to give resource permissions?

Within AWS we can create multiple resources like EC2, S3, RDS, Lambda etc. After creating the resources, permissions should be assigned appropriately. Let's say we have two servers for Production and two servers for Development; then the users in the Production Group should have access to the Production Servers and the users in the Development Group should have access to the Development Servers, and not the other way around. The permissions can be to view, stop, start, terminate, reboot etc.
 

The same can be achieved using the AWS service "Identity and Access Management" (IAM) along with Tags, as shown in the above diagram. Tags are metadata attached to AWS resources like EC2. Tags have keys and values. For example, the Key can be Env and the Value can be Production. We can tag EC2 instances as Env=Production and grant the Production Group permissions only on the EC2 instances which have the Tag Env=Production, skipping the rest of the EC2 instances.

Here are the steps for the above at a high level.

1) Create an IAM User with no policies attached
2) Create a Policy with the appropriate permissions
3) Create a Group and attach the above Policy to it
4) Add the User to the Group
5) Create two EC2 instances with the appropriate tags
6) Login as the IAM User and check the actions allowed on the EC2 instances

 

Sequence of steps for setting up EC2 access

Step 1: Create an IAM User

Step 1.1: Go to the IAM Management Console, Click on the Users tab in the left pane and click on "Add User".


Step 1.2: Enter the user name and make sure to select "AWS Management Console access". Click on "Next: Permissions".


Step 1.3: We will add the permissions later; for now, click on "Next: Tags".


Step 1.4: Tags are optional for a user. So, click on "Next: Review".


Step 1.5: Make sure the user details are correct and click on "Create user". The IAM User will be created as shown in the below screen.



Step 2: Create a policy with the appropriate permissions

Step 2.1: Click on "Policies" in the left pane and click on "Create policy".


Step 2.2: Click on the JSON tab, enter the below policy and click on "Review policy". The below JSON grants permission to Stop/Start EC2 instances with the Tag Env=Production, while viewing the description of all the EC2 instances is allowed. The policy was taken from here and modified. AWS also provides a lot of sample policies here.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/Env": "Production"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeInstances",
            "Resource": "*"
        }
    ]
}

Step 2.3: Give the policy a name and click on "Create policy". The policy should be created as shown below.



Step 3: Create a Group and attach the policy to it

Step 3.1: Click on the Groups tab in the left pane and click on "Create New Group".


Step 3.2: Enter the Group Name and click on "Next Step".


Step 3.3: Select the Policy which has been created earlier and click on "Next Step".


Step 3.4: Review the Group details and click on "Create Group".


Step 4: Add the user to the Group

Step 4.1: Select the Group which was created earlier. Click on "Group Actions" and then "Add Users to Group", select the user which was created earlier, and click on "Add Users".
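Steps 1 through 4 can also be scripted instead of clicked through. Below is a minimal boto3 sketch, assuming configured AWS credentials; the user, group, and policy names are placeholders, not anything from the console flow above:

```python
import json

def build_ec2_tag_policy(env="Production"):
    """Build the policy document from Step 2: allow Stop/Start only on
    EC2 instances tagged Env=<env>, and allow Describe on everything."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["ec2:StartInstances", "ec2:StopInstances"],
                "Resource": "arn:aws:ec2:*:*:instance/*",
                "Condition": {"StringEquals": {"ec2:ResourceTag/Env": env}},
            },
            {"Effect": "Allow", "Action": "ec2:DescribeInstances", "Resource": "*"},
        ],
    }

def provision(user="prod-operator", group="Production", policy_name="ProdEc2StopStart"):
    """Create the user, policy and group, then wire them together (Steps 1-4)."""
    import boto3  # imported here so build_ec2_tag_policy works without boto3 installed
    iam = boto3.client("iam")
    iam.create_user(UserName=user)
    policy = iam.create_policy(PolicyName=policy_name,
                               PolicyDocument=json.dumps(build_ec2_tag_policy()))
    iam.create_group(GroupName=group)
    iam.attach_group_policy(GroupName=group, PolicyArn=policy["Policy"]["Arn"])
    iam.add_user_to_group(GroupName=group, UserName=user)
```

Calling provision() mirrors the console flow; note that console access with a password (Step 1.2) would additionally need a login profile for the user.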



Step 5: Create EC2 instances and tag them

Step 5.1: Create two EC2 instances as mentioned here. A smaller instance type is recommended, as it costs less and we won't be doing any processing on these instances. Click on the Tags tab and click on "Add/Edit Tags".


Step 5.2: Enter the Tags as shown below and click on "Create Tag". For the other instance, specify the Key as Env and the Value as Production.


The Tags on the EC2 instances should appear as below. The Env column won't appear by default; click on the gear button at the top right and select Env to make the column visible.
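The tagging in Step 5 can also be done programmatically. A small boto3 sketch, assuming configured AWS credentials; the instance IDs are placeholders:

```python
def env_tag(value):
    # The shape boto3's create_tags expects for a single Env tag.
    return [{"Key": "Env", "Value": value}]

def tag_instance(instance_id, env):
    """Attach Env=<env> to one EC2 instance (the console's Add/Edit Tags)."""
    import boto3  # imported here so env_tag works without boto3 installed
    boto3.client("ec2").create_tags(Resources=[instance_id], Tags=env_tag(env))

# Example usage (hypothetical instance IDs):
#   tag_instance("i-0aaa1111bbb22222c", "Production")
#   tag_instance("i-0ddd3333eee44444f", "Development")
```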


Step 6: Login as the IAM User and check the actions allowed

Step 6.1: Go to the Dashboard in the IAM Management Console and grab the sign-in URL. Open the URL in an entirely different browser and enter the credentials for the IAM user.



Step 6.2: Go to the EC2 Management Console. Notice the name of the user logged in at the top right, as highlighted below. It's the IAM User.


Step 6.3: Select the Development EC2 instance and try to terminate it as shown below. It should lead to an error, because the IAM user has only Stop/Start access to EC2 instances which have the Tag Env=Production and nothing else.



Step 6.4: Select the Production instance and try to Stop it. The user should be able to, as the user has access to Stop/Start any EC2 instance with the Tag Env=Production.
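The behaviour seen in Steps 6.3 and 6.4 can be sketched as a toy evaluator of the Step 2.2 policy. This is an illustration only, not the real IAM engine (IAM also evaluates explicit denies, other attached policies, resource ARNs etc.):

```python
def is_allowed(action, instance_tags):
    """Toy decision for the two statements in the Step 2.2 policy."""
    if action == "ec2:DescribeInstances":
        return True  # second statement: describing any instance is allowed
    if action in ("ec2:StartInstances", "ec2:StopInstances"):
        # first statement: only instances tagged Env=Production qualify
        return instance_tags.get("Env") == "Production"
    return False  # nothing matches => implicit deny (e.g. TerminateInstances)

print(is_allowed("ec2:StopInstances", {"Env": "Production"}))        # True
print(is_allowed("ec2:TerminateInstances", {"Env": "Development"}))  # False
print(is_allowed("ec2:DescribeInstances", {"Env": "Development"}))   # True
```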


 

Conclusion


Tags are metadata around AWS resources like EC2, S3, RDS etc. We have seen above how Tagging can be used to group EC2 resources together and grant permissions to IAM Users. The same applies to other AWS resources as well. Tagging can also be used to segregate AWS billing, let's say across different departments, customers, projects etc. The usage of Tagging is limited only by imagination; the different scenarios and strategies are mentioned in this AWS article.

Wednesday, April 10, 2019

Setting up an AWS EKS Cluster with Rancher

In the previous blog we explored setting up a K8S Cluster on the AWS Cloud without using any additional software or tools. The Cloud Providers make it easy to create a K8S Cluster in the Cloud. The tough part is securing, fine-tuning, upgrading, access management etc. Rancher provides centralized management of different K8S Clusters; these can be in any of the Clouds (AWS, GCP, Azure) or On-Prem. More on what Rancher has to offer on top of K8S here. The good thing about Rancher is that it's 100% Open Source and there is no Vendor Lock-in. We can simply remove Rancher and interact with the K8S Cluster directly.

Rancher also allows creating K8S Clusters in different environments. In this blog we will look at installing a K8S Cluster on the AWS Cloud in a quick-and-dirty way (not ready for production). Here are the instructions for the same. The only prerequisite is access to a Linux OS with Docker installed on it.

Steps for creating a K8S Cluster on AWS using Rancher


Step 1: Login to the Linux Console and run the Rancher Container using the below command.

sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher


Step 2: Access the Rancher Console in a browser. Click on "Advanced" and then "Accept the Risk and Continue".



Step 3: Set up an admin password to access the Rancher Console.


Step 4: Click on "Save URL".


Step 5: Click on "Add Cluster" and select "Amazon EKS".



Step 6: Select the Region and provide the AWS Access and Secret Key. The same can be generated as mentioned in the "Generate access keys for programmatic access" section here. Click on "Next: Configure Cluster".


Step 7: Click on "Next: Select VPC & Subnet"


Step 8: Click on "Next: Select Instance Options"


Step 9: There is a small catch here. In this screen we specify the minimum and maximum number of nodes in the ASG (Auto Scaling Group), but nowhere do we specify the default size to start with. So, when the Cluster is started, it creates 3 EC2 instances by default. There is an issue opened in the Rancher GitHub project for the same and it's still open.

Click on Create.


Step 10: Initially the Cluster will be in the Provisioning State and after 10-15 minutes it will change to the Active State.



During the Cluster creation process, the following will be created. The same can be observed by going to the AWS Management Console for the appropriate Service.

          1) CloudFormation Stacks
          2) EC2 instances
          3) VPC
          4) EKS Control Plane

Step 11: Go through the different screens in the Rancher Console.



Step 12: Once you have spent some time with the Rancher Console, select the AWS Cluster created in the above sequence of steps and delete it. After a few minutes the Cluster should be gone from the Rancher Console.



Step 13: Although the Rancher Console says that the Cluster is deleted, there might be some resources still hanging around in AWS as shown below. Not sure where the bug is, but any AWS resources still not deleted might add to the AWS monthly bill. So, it's always better to go to the AWS Management Console for CloudFormation, EC2, S3, VPC, EKS and make sure all the AWS resources are deleted.
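One way to double-check the cleanup programmatically is to list the CloudFormation stacks that are still around. A minimal boto3 sketch, assuming configured AWS credentials (checking EC2, VPC and EKS would follow the same pattern with their respective clients):

```python
def stack_names(list_stacks_page):
    # Extract the stack names from one page of list_stacks output.
    return [s["StackName"] for s in list_stacks_page.get("StackSummaries", [])]

def leftover_stacks():
    """Return names of CloudFormation stacks that still exist."""
    import boto3  # imported here so stack_names works without boto3 installed
    cf = boto3.client("cloudformation")
    page = cf.list_stacks(
        StackStatusFilter=["CREATE_COMPLETE", "ROLLBACK_COMPLETE", "DELETE_FAILED"]
    )
    return stack_names(page)
```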


Conclusion


An AWS EKS Cluster can be easily created using Rancher as mentioned above. The steps are more or less the same for creating a Cluster on Azure and GCP. Rancher also allows importing an existing K8S Cluster. Once all the Clusters are within Rancher, they can be managed centrally, policies can be applied etc. It makes things easy when there are multiple K8S Clusters to manage within an organization.

As seen above, there are a few rough edges in the installation process. First, there is no way to specify the default number of EC2 worker nodes in the Cluster. Also, not all the AWS resources are deleted when the EKS Cluster is deleted through Rancher, which might incur additional cost. These things might be fixed in future releases of Rancher.

Also, note that this is a quick-and-dirty way of installing a K8S Cluster on AWS using Rancher and is not ready for Production, as this particular setup doesn't support High Availability. Here are the instructions on how to set up an HA Cluster using Rancher.

Wednesday, April 3, 2019

K8S Cluster on AWS using EKS

As mentioned in the previous blogs, there are different ways of getting started with K8S (Minikube, Play With K8S, K3S, Kubeadm, Kops etc.); they are listed here in the K8S documentation. Some of them involve running K8S on the local machine and some in the Cloud. AWS EKS, Google GKE and Azure AKS are a few of the managed K8S services in the Cloud.

By following the instructions mentioned here, it takes about 20 minutes to set up and access the K8S cluster. Below is the output at the end of following the instructions.

1) As usual, AWS uses CloudFormation for automating tasks around the Cluster setup. There would be two Stacks, one for creating a new VPC where the K8S cluster would be running and the other for creating the Worker nodes, which we have to manage.


2) A new VPC with Subnets, Routing Tables, Internet Gateways, Security Groups etc. These are automatically created by the CloudFormation template.


3) An EKS Cluster which manages the components in the Control Plane (K8S master) and makes sure they are Highly Available.


4) EC2 instances, again created by CloudFormation, for the K8S Workers. By default 4 EC2 instances are created by CloudFormation, but the count can be reduced to 1 as we won't be running a heavy load on the cluster. This can be changed in the CloudFormation template by changing the default number of instances for the AutoScaling group from 4 to 1. I have changed it to 2, hence the number of EC2 instances.


5) Service roles created manually and by the CloudFormation Stack.


6) Once the Guestbook application has been deployed on the AWS K8S Cluster, the same can be accessed as shown below.


7) We can get the nodes, pods and namespaces as shown below. Note that an alias has been set for the kubectl command.

 

A few things to keep in mind


1) As mentioned above, the number of default Worker nodes can be decreased to 1 while creating the Stack for the K8S workers. Saves $$$.

2) There were a few errors while creating the CloudFormation Stack for the VPC, as shown below. This error happens within a region where the default VPC was deleted and created again; otherwise it works fine. Even after a bit of debugging, the cause of the error was not obvious.


A quick workaround for the above problem, just in case someone gets stuck on it, is to use another region which has the default VPC created by AWS, or to hard code the Availability Zone in the CloudFormation template as shown below.
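A sketch of what the hard-coded Availability Zone could look like in a subnet resource of the template. The property names follow the standard AWS::EC2::Subnet resource; the logical name, CIDR and AZ value here are illustrative, not copied from the actual sample template:

```yaml
Subnet01:
  Type: AWS::EC2::Subnet
  Properties:
    # Hard-coded AZ instead of deriving it, e.g. via !Select [0, !GetAZs '']
    AvailabilityZone: us-east-1a
    CidrBlock: 192.168.64.0/18
    VpcId: !Ref VPC
```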


3) Don't forget to delete the Guestbook application (1) and the AWS EKS Cluster (1).

Conclusion


Just to conclude, it takes around 20 minutes to set up a Cluster. Instead of repeating the steps in this blog, here again are the steps from the AWS Documentation to get started with a K8S Cluster on AWS.

For the sake of learning and trying out different things, I would prefer to set up the K8S Cluster on the local machine, as it is faster to start the Cluster and quickly try out a few things. Also, the Cloud Vendors are a bit behind on the K8S version. As of this writing, AWS EKS supports K8S version 1.12 and lower, which is about 6 months old, while the latest K8S version is 1.14, which was released recently.