Tuesday, May 19, 2020

Accessing private resources using AWS Client VPN

AWS VPC supports creating public and private subnets. An EC2 instance in the public subnet has both a public and a private IP, so front-end or customer-facing applications like web applications are installed on it. This instance can also be reached from outside the Cloud over its public IP for any maintenance.

An EC2 instance in the private subnet has only a private IP and no public IP; backend applications like databases are installed on it. This instance can't be reached directly from outside the Cloud as it doesn't have a public IP. So, how do we connect to it for activities like updating databases, creating tables etc? This is where the Jump box or Bastion box comes into play.

To connect to the EC2 in the private subnet we need to launch a Bastion box in the public subnet, connect to it, and from there connect to the EC2 in the private subnet. This corresponds to steps (1) and (2) in the below diagram. Step (2) is pretty much like connecting to any remote server, but in this case both EC2 instances are in the same VPC.

[Diagram: connecting to the private-subnet EC2 via a Bastion box in the public subnet]
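As a side note, with OpenSSH the two hops can be combined into a single command using ProxyJump; a minimal sketch, assuming hypothetical IPs and that the same key pair works on both instances:

ssh -i mykey.pem -J ubuntu@<bastion-public-ip> ubuntu@<private-ec2-private-ip>

Here the connection to the private EC2 is tunneled through the Bastion box, so only the Bastion needs a public IP.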
The problem with the above approach is that the Bastion box has a public IP, so there is a chance of someone trying to access it over that public IP. We can get rid of the Bastion box and use AWS Client VPN to connect to the EC2 in the private subnet, even though the EC2 doesn't have a public IP, as shown below.

[Diagram: connecting to the private-subnet EC2 directly using AWS Client VPN]

I have created a new VPC with a public and a private subnet using the VPC wizard, and then created an EC2 instance in the private subnet. Note that there is no public IP, just the private IP. But we still want to connect to it to do maintenance work on the EC2.


I followed this blog and created a Client VPN Endpoint and connected to the EC2 in the private subnet. Note the AWS Client VPN on the left side and the connection to the EC2 from my laptop via Putty on the right side. Notice that the IP address in Putty matches the private IP address of the EC2 in the above screen.
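For reference, the same endpoint setup can also be scripted with the AWS CLI; a rough sketch with placeholder ARNs and IDs, assuming the server and client certificates have already been imported into ACM as described in the blog above.

-- Create the Client VPN endpoint

aws ec2 create-client-vpn-endpoint --client-cidr-block 10.1.0.0/22 --server-certificate-arn <server-cert-arn> --authentication-options Type=certificate-authentication,MutualAuthentication={ClientRootCertificateChainArn=<client-cert-arn>} --connection-log-options Enabled=false

-- Associate the endpoint with the private subnet

aws ec2 associate-client-vpn-target-network --client-vpn-endpoint-id <cvpn-endpoint-id> --subnet-id <private-subnet-id>

-- Authorize access to the VPC CIDR

aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id <cvpn-endpoint-id> --target-network-cidr 10.0.0.0/16 --authorize-all-groups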

Friday, May 8, 2020

Sticking to the AWS free tier and building K8S cluster using kops

In one of the previous blogs we have seen how to set up K8S on AWS using a t2.micro EC2 instance and MicroK8s, which falls under the AWS free tier. The same can be done with kops as well, and this blog is all about that. Again, we will try to stick to the free tier.

The first step is to create an Ubuntu EC2 instance and install kops and the AWS CLI on it. Then, on this EC2, we run a bunch of commands to create and tear down the K8S cluster as shown below. The steps mentioned in this blog create a K8S cluster of 1 master and 2 workers, which lets us try out how K8S behaves on worker failures and how it schedules across multiple workers. But this configuration requires a total of 34 GB of EBS volumes, of which 30 GB is covered by the AWS free tier; the rest has to be paid for. To strictly fall under the free tier, a single worker node can be created instead, but then we won't be able to try out the different K8S features.


Once the cluster has been created, we can see the nodes in the ready state as shown below.


Steps for setting up K8S on EC2 using kops


Step 1: Create an Ubuntu 18.04 EC2 instance (t2.micro), connect to it via Putty, and execute the below commands.

-- Install Python3, the AWS CLI, and kubectl.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update

sudo apt-get install -y python3-pip apt-transport-https kubectl

pip3 install awscli --upgrade

export PATH="$PATH:/home/ubuntu/.local/bin/"

Step 2: Install kops (used for installing K8S on AWS)

curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64

chmod +x kops-linux-amd64

sudo mv kops-linux-amd64 /usr/local/bin/kops

Step 3: Check that the aws, kops, and kubectl commands are available in the path.
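For example, each of the below commands should print a version if everything is in the path.

aws --version

kops version

kubectl version --client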

Step 4: Create an AWS IAM User (with programmatic API/SDK access) with the below policies and get the Access Keys for this user (a CLI sketch follows the list).

-- AmazonEC2FullAccess
-- AmazonS3FullAccess
-- IAMFullAccess
-- AmazonVPCFullAccess
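The user can be created from the IAM console, or, if admin credentials are already available elsewhere, with a CLI sketch like the below (the user name kops is just an assumption).

aws iam create-user --user-name kops

aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/IAMFullAccess
aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess

aws iam create-access-key --user-name kops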

Step 5: Configure the Access Keys and the AWS Region on the EC2 instance using the `aws configure` command. Use us-east-1 or some other region. For the output format, leave it empty.
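The interactive prompts look like the below; the access key and secret key are the ones generated in Step 4.

aws configure
AWS Access Key ID [None]: <access key from Step 4>
AWS Secret Access Key [None]: <secret key from Step 4>
Default region name [None]: us-east-1
Default output format [None]: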

Step 6: Export the keys as environment variables. The below commands read them back from the configuration created by `aws configure`, so there is nothing to replace manually.

export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)

Step 7: Generate the SSH key pair; kops uses the public key for the cluster nodes.

ssh-keygen -f .ssh/id_rsa

Step 8: Export the name of the cluster and the S3 bucket name. Make sure the cluster name ends with k8s.local to use the gossip protocol, or else we would need to work with Route53.

export NAME=myfirstcluster.k8s.local
export KOPS_STATE_STORE=s3://praveen-kops-cluster

Step 9: Create the S3 bucket, which kops uses as its state store.

aws s3api create-bucket --bucket praveen-kops-cluster --region us-east-1

Step 10: Create the cluster configuration. Note that the cluster is not built yet. Also, as mentioned earlier, we would need to pay for the additional 4 GB of EBS volumes. To make sure we fall strictly under the free tier, specify the node-count as 1 to create only one worker node.

kops create cluster --name=$NAME --state=$KOPS_STATE_STORE --zones=us-east-1a --node-count=2 --node-size=t2.micro --master-size=t2.micro  --master-volume-size=8 --node-volume-size=8

Step 11: Change the EBS volume sizes for the etcd main and events volumes attached to the master. By default, multiple 20 GB EBS volumes are created and the total storage doesn't fall under the free tier. Edit the cluster configuration by executing the below command.

kops edit cluster myfirstcluster.k8s.local

and add the below after "name: a" in two different locations, once under the etcd main cluster and once under events.

volumeType: gp2
volumeSize: 1
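For orientation, after the edit each of the two etcd sections in the spec should look roughly like the below (the instanceGroup name is an assumption based on the us-east-1a zone used above):

etcdClusters:
- etcdMembers:
  - instanceGroup: master-us-east-1a
    name: a
    volumeSize: 1
    volumeType: gp2
  name: main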

Step 12: Build the cluster

kops update cluster $NAME --yes

Step 13: Check the status of the cluster. The cluster and the related resources will take a few minutes to be created. Also execute `kubectl get nodes` to get the status of the nodes.

kops validate cluster $NAME

Step 14: Create a deployment on the cluster which we created. Create a file deployment.yaml with the contents from the K8S documentation here.

-- Create a deployment using the below command.

kubectl apply -f deployment.yaml

-- Once the deployment has been created, check the status of the pods using the below command.

kubectl get pods

-- Delete the deployment.

kubectl delete -f deployment.yaml

Step 15: Delete the cluster. Again, the cluster deletion will take a few minutes.

kops delete cluster --name $NAME --yes

Step 16: Delete the S3 bucket.

aws s3api delete-bucket --bucket praveen-kops-cluster --region us-east-1

Step 17: Terminate the Ubuntu EC2 instance which was created manually earlier.

Step 18: Delete the IAM User.
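If the user was created from the CLI as sketched in Step 4, the clean-up would look roughly like the below; list the access keys first, as the key id is needed for deletion.

aws iam list-access-keys --user-name kops
aws iam delete-access-key --user-name kops --access-key-id <access-key-id>

aws iam detach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam detach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam detach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/IAMFullAccess
aws iam detach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess

aws iam delete-user --user-name kops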

Conclusion


Following the above steps, a K8S cluster can be created within a few minutes on AWS, and the good thing is that it falls mostly within the AWS free tier (entirely, with a single worker node). This is not production ready, but it is good enough to get started.

One more thing to note: with the kops installation we have complete control over the K8S cluster. But the user has to manage the master and the worker nodes, including upgrading, patching, and securing K8S on them. This is why the Cloud vendors provide managed K8S offerings like AWS EKS, GCP GKE, and Azure AKS.