Wednesday, April 26, 2017

Scratch programming for kids

It's the summer holidays and I have been helping my kid (8 years old) get started with programming. So I bought the Computer Coding for Kids book by Carol Vorderman. The book covers Scratch (from MIT) first and then Python, starting from the basics. Scratch is a visual programming language that is easy for those who are just getting started with programming: there is no typing of code, it's mostly drag and drop.

So, here is his first program (link here), a small and cute game in Scratch. Click on the flag at the top right to get the game started. He was all excited to have it published on this blog. One thing to note is that running the game requires Adobe Flash, which is slowly falling out of favour with the different browsers because of its vulnerabilities and stability issues. For some reason, I was able to get it running only in IE and not in the Edge/Chrome/Firefox browsers.


Scratch looks a bit basic, but I have seen some interesting programs developed in it. I would recommend the above-mentioned book to anyone who wants to get started with programming. Carol has written a few more books, which I plan to buy once we complete this one.

Monday, April 24, 2017

Creating a Linux AMI from an EC2 instance

In one of the earlier blogs, we created a Linux EC2 instance serving a static page. We installed and started the httpd server and created a simple index.html in the /var/www/html folder. The application on that EC2 instance is very basic, but we can definitely build more complex applications the same way.

Let's say we need 10 such instances. It's not necessary to install the software and do the configuration ten times over. What we can do instead is create an AMI once the software has been installed and the necessary configuration changes have been made. The AMI will have the OS and the required software with the appropriate configuration, and the same AMI can be used while launching the new EC2 instances. More about AMIs here.


So, here are the steps

1. Create a Linux EC2 instance with the httpd server and index.html as shown here.

2. Once the different software packages have been installed and configured, the AMI can be created as shown below. A scripted equivalent of steps 2 to 6 is sketched after the list.


3. Give the image a name and description and click on `Create Image`.


4. It takes a couple of minutes to create the AMI. Its status can be seen by clicking on the AMIs link in the left pane. Initially the AMI will be in the pending state; after a few minutes it will change to the available state.


5. When launching a new EC2 instance, we can select the AMI created in the above steps from the `My AMIs` tab. The AMI has all the software and configuration in it, so there is no need to repeat the setup again.


6. As we are simply learning/trying things out and not running anything in production, make sure to a) terminate all the EC2 instances, b) deregister the AMI and c) delete the snapshot.
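
For those who prefer scripting over the console, here is a minimal sketch of the same flow using boto3 (the AWS SDK for Python). The region, instance id, AMI name and instance type below are placeholders I made up for illustration; the blog itself uses the console, so treat this only as an equivalent sketch, not the exact steps.

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # assumed region

# Steps 2/3: create an AMI from the configured instance
image = ec2.create_image(
    InstanceId='i-0123456789abcdef0',       # placeholder instance id
    Name='httpd-static-page-ami',           # hypothetical image name
    Description='Amazon Linux with httpd and index.html',
)
ami_id = image['ImageId']

# Step 4: wait until the AMI moves from 'pending' to 'available'
ec2.get_waiter('image_available').wait(ImageIds=[ami_id])

# Step 5: launch a new instance from the freshly created AMI
reservation = ec2.run_instances(
    ImageId=ami_id,
    InstanceType='t2.micro',
    MinCount=1,
    MaxCount=1,
)
new_instance_id = reservation['Instances'][0]['InstanceId']

# Step 6: clean up once done - terminate the instance, deregister the AMI
# and delete the backing snapshot (assumes the first block device is the
# EBS root volume)
snapshot_id = ec2.describe_images(ImageIds=[ami_id])['Images'][0] \
    ['BlockDeviceMappings'][0]['Ebs']['SnapshotId']
ec2.terminate_instances(InstanceIds=[new_instance_id])
ec2.deregister_image(ImageId=ami_id)
ec2.delete_snapshot(SnapshotId=snapshot_id)
```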



Now that we know how to create an AMI with all the required software/configuration/data, we will look at Auto Scaling and ELB (Elastic Load Balancing) in the upcoming blogs.

Creating an alarm in CloudWatch for a Linux EC2 instance

AWS CloudWatch does a couple of things, as mentioned in the AWS documentation here. One of the interesting things is that it can gather metrics (CPU, network, disk etc.) for the different Amazon resources.

In this blog, we will look at how to watch the CPU metrics of a Linux EC2 instance and trigger an alarm with a corresponding action. To keep it simple, if the CPU utilization on the Linux instance goes beyond 50%, we will be notified through an email. We can watch many different metrics, but we will use CPU utilization for now. CloudWatch also allows us to create custom metrics, like the application response time.

I will go with the assumption that the Linux instance has already been created as shown in this blog. So, here are the steps; a scripted equivalent using boto3 is sketched after them.

1. On the Linux instance, make sure that detailed monitoring is enabled. If not, enable it from the EC2 management console as shown below.


2. Go to the CloudWatch management console and select Metrics in the left pane. Select EC2 in the `All Metrics` tab.


3. Again select the `Per-Instance Metrics` from the `All Metrics` tab.


4. Go back to the EC2 management console and get the `Instance Id` as shown below.


5. Go back to the CloudWatch management console. Search for the InstanceId and select CPUUtilization. The line graph for CPUUtilization will now be populated as shown below. We should also be able to select multiple metrics at the same time.


6. The graph is shown with a 5-minute granularity. By changing it to a 1-minute granularity, we get a much more detailed graph. Click on the `Graphed Metrics` tab and change the Period to `1 minute`.


7. Now, let's create an alarm. Click on Alarms in the left pane and then on the `Create Alarm` button.


8. Click on the `Per-Instance Metrics` under EC2. Search for the EC2 instance id and select CPUUtilization as shown below.


9. Now that the metric has been selected, we have to define the alarm. Click on `Next` and specify the Name, Description and CPU % threshold as shown below.


10. When the alarm threshold is breached (CPU > 50%), a notification has to be sent. On the same screen, scroll down a bit and specify the notification by clicking on `New list` as shown below.


11. Click on `Create Alarm` on the same screen. To avoid spamming, AWS will send an email with a confirmation link which has to be clicked. Go ahead and check your email for the confirmation.


12. Once the link in the email has been clicked, we will get a confirmation as shown below.


13. Now, we are all set up to receive an email notification when the CPU goes above 50%. Log in to the Linux instance as mentioned in this blog and run the command below to increase the CPU load on the instance. The command doesn't do anything useful; it simply drives up the CPU usage. There are more ways to increase the CPU usage, more here.
dd if=/dev/urandom | bzip2 -9 >> /dev/null

14. If we go back to the Metrics, we will notice that the CPU has spiked on the Linux instance because of the above command.


15. The alarm has also moved from the OK state to the ALARM state as shown below.


16. Check your email; there will be a notification from AWS about the breach of the alarm threshold.

17. Make sure to delete the Alarm and terminate the Linux instance.
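
If you would rather script these steps than click through the console, here is a minimal sketch using boto3. The region, instance id, topic name and email address are placeholders for illustration; the thresholds match the walkthrough above (CPU > 50%, 1-minute period).

```python
import boto3

REGION = 'us-east-1'                       # assumed region
INSTANCE_ID = 'i-0123456789abcdef0'        # placeholder instance id

ec2 = boto3.client('ec2', region_name=REGION)
sns = boto3.client('sns', region_name=REGION)
cloudwatch = boto3.client('cloudwatch', region_name=REGION)

# Step 1: enable detailed (1-minute) monitoring on the instance
ec2.monitor_instances(InstanceIds=[INSTANCE_ID])

# Steps 10-12: create an SNS topic and subscribe an email address to it;
# AWS sends the confirmation link to this address
topic_arn = sns.create_topic(Name='cpu-alarm-notification')['TopicArn']
sns.subscribe(TopicArn=topic_arn, Protocol='email',
              Endpoint='you@example.com')  # placeholder email

# Steps 7-11: create the alarm on CPUUtilization > 50% over a 1-minute period
cloudwatch.put_metric_alarm(
    AlarmName='linux-cpu-above-50',
    AlarmDescription='Notify when CPU utilization goes beyond 50%',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': INSTANCE_ID}],
    Statistic='Average',
    Period=60,
    EvaluationPeriods=1,
    Threshold=50.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=[topic_arn],
)

# Step 17: clean up when done
# cloudwatch.delete_alarms(AlarmNames=['linux-cpu-above-50'])
# ec2.terminate_instances(InstanceIds=[INSTANCE_ID])
```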

The procedure is a bit long and it took me a good amount of time to write this blog, but once you do it a couple of times and understand what's being done, it will be a piece of cake.

Friday, April 14, 2017

Got through `AWS Certified Developer - Associate`

Today, I got through the AWS Certified Developer - Associate exam. This is my second certification around AWS (the first certification is here). It was a bit tougher than I expected, but it was fun. Anyway, here is the certificate.


There were 55 questions which had to be solved in 80 minutes. It was a bit tough and I spent time until the last minute solving and reviewing the questions. As usual, nothing beats practice to get through the certification.

The next certification I am planning for is `AWS Certified SysOps Administrator - Associate` and then the `AWS Certified Big Data - Specialty` certification. The Big Data certification is in beta and I am not sure when Amazon will make it available to the public. This is the certification I am really interested in, as it is an intersection of Big Data and the Cloud.

Thursday, April 13, 2017

Which EBS type to use to meet our requirements?

In the previous blog, I pointed to the approach for picking the best EC2 instance to meet our requirements. Here is a video on picking the best EBS type. EBS (Elastic Block Store) is like a hard disk drive which can be attached to an EC2 instance.


Just as AWS provides different EC2 instance types, there are different EBS volume types, which are broadly categorized into SSD and Magnetic. Depending upon whether we are looking for high IOPS or high throughput, the appropriate EBS type can be picked. The same video has a flow chart to help figure out which type of EBS volume to pick.
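
As of this writing the main volume types are gp2 and io1 (SSD) and st1, sc1 and standard (magnetic/HDD). Here is a minimal boto3 sketch of creating a volume with a chosen type and attaching it to an instance; the region, availability zone, size, instance id and device name are placeholders I picked for illustration.

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # assumed region

# Create a 100 GiB General Purpose SSD (gp2) volume; swap VolumeType for
# 'io1' (plus Iops=...), 'st1' or 'sc1' depending on the IOPS/throughput need
volume = ec2.create_volume(
    AvailabilityZone='us-east-1a',   # must match the instance's AZ
    Size=100,
    VolumeType='gp2',
)
volume_id = volume['VolumeId']

# Wait for the volume to become available, then attach it to the instance
ec2.get_waiter('volume_available').wait(VolumeIds=[volume_id])
ec2.attach_volume(
    VolumeId=volume_id,
    InstanceId='i-0123456789abcdef0',  # placeholder instance id
    Device='/dev/sdf',
)
```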

One interesting thing to note about EBS is that the EC2 instance and the EBS volume need not be on the same physical machine. That is, the processing and the disk need not be co-located; there is low latency and high throughput between the EC2 and EBS machines.

A good number of the AWS videos are really good, while some of them are so-so. I will be highlighting the really interesting ones on this blog for the readers.

Which EC2 instance to use to meet our requirements?

In the previous blogs, we looked at creating a Linux and a Windows instance. There are lots of parameters which can be configured during instance creation. One of the important parameters is the instance type and size. AWS provides many instance types, as mentioned here. Some of them are optimized for CPU, others for memory, and so on. It's important to pick the right instance type to meet our requirements. If we pick too small an instance, we won't be able to meet the requirements; on the other hand, if we pick too large an instance, we end up paying more.

So, which one do we pick? Here is a nice video from the AWS folks on picking the right EC2 instance to match the requirements. In the same video, there is also a flow chart to help you pick the right instance type.


Without the Cloud, we need to do the sizing and capacity planning carefully as per our requirements before ordering the hardware, as we are stuck with it. With the Cloud, we can try an instance, identify the bottlenecks and move to another instance type without any commitment. Once we are sure of the proper EC2 type, we can also go with Reserved Instances. That's the beauty of the cloud.
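
Moving to another instance type on an EBS-backed instance is essentially a stop/modify/start cycle. Here is a minimal boto3 sketch of that; the region, instance id and target type are placeholders for illustration.

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')    # assumed region
instance_id = 'i-0123456789abcdef0'                    # placeholder instance id

# Stop the instance before the type can be changed
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter('instance_stopped').wait(InstanceIds=[instance_id])

# Switch to a bigger (or smaller) instance type
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={'Value': 'm4.large'},  # hypothetical target type
)

# Start it back up with the new type
ec2.start_instances(InstanceIds=[instance_id])
```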

Amazon has been introducing new EC2 instance types regularly, so picking the right EC2 instance is a continuous exercise in order to leverage the benefits of the new instance types.

Monday, April 10, 2017

Finding all the AWS resources in our account

Recently I was talking to someone about creating an account with AWS. The immediate response I got was, `AWS was charging me continuously, month after month, although I was not consuming any of the services`. Many times I have noticed that users of AWS start a service and forget to stop it; the same has happened to me a couple of times. Once, I started an ELB (Elastic Load Balancer) and forgot to stop it, and the ELB was running the whole night. Luckily, the hourly rate for the ELB was low and I was not charged much for it.

One way to catch the problem is to create a Billing Alarm as shown in the previous blog. Once we are notified of the billing, we need to identify the AWS resources which we are using and then take action on them. But there are 13 Regions and many services within each Region, and it's difficult to go through all the services in each Region to figure out which AWS resources we are consuming. One way is to use `Resource Groups`, as in the steps below (a scripted way of listing the resources is sketched after the steps).

1. Go to the AWS Management Console and click on `Resource Groups` and then on the `Tag Editor`.


2. In the Regions box, select all the Regions, in `Resource types` select `All resource types`, and then click on `Find Resources`. All the resources being used in this account will be shown as below.


3. Now, we can click on the link in the `Go` column of a particular resource and take some action, like stopping it if it is not required.
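
If you want to do the same thing from a script, the Resource Groups Tagging API can list the resources it knows about in each region. Here is a minimal boto3 sketch; note that it reports only the resources covered by the Tagging API, so treat it as a starting point rather than a complete inventory.

```python
import boto3

session = boto3.session.Session()

# Walk through every region the Tagging API is available in and print the
# resource ARNs it reports for this account
for region in session.get_available_regions('resourcegroupstaggingapi'):
    client = session.client('resourcegroupstaggingapi', region_name=region)
    paginator = client.get_paginator('get_resources')
    for page in paginator.paginate():
        for resource in page['ResourceTagMappingList']:
            print(region, resource['ResourceARN'])
```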

One of the main disadvantages of the Cloud is that resources can be spawned in a few seconds, without much consideration of whether they are actually required or not. Let's say an EC2 instance type is not meeting our Non-Functional Requirements (NFRs); another instance can easily be started without fine-tuning the existing EC2 instance. Slowly, we end up with a lot of unnecessary AWS resources.

Also, note that not all AWS resources are charged. For example, a Security Group is not charged, so I can create a Security Group and forget about it, while I cannot do the same thing with an EC2 instance.