Creating an AWS EC2 instance with a Public IP

The first couple of times you create a new EC2 instance on AWS, the public IP setting is easy to miss, and it defaults to a private IP only.

If you want to create an EC2 instance with a public IP, when you create your instance from the dashboard ensure the ‘Auto-assign Public IP’ option is set to ‘Enable’.
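The same option can also be set from the AWS CLI when launching an instance, using the --associate-public-ip-address flag on run-instances. A rough sketch (the AMI ID, instance type, key name and subnet ID below are placeholders):

aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro \
  --key-name my-keypair --subnet-id subnet-12345678 \
  --associate-public-ip-address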

Deploying Docker Containers to AWS EC2 Container Service (ECS)

I’ve spent a lot of time playing with Docker containers locally for various personal projects, but haven’t spent much time deploying them to the cloud. I did look at IBM Bluemix a while back, and their web console and toolset were a pretty good developer experience. I’m also curious about how OpenShift Online is evolving into a container-based service, as I’ve deployed many personal projects to OpenShift and it has to be my favorite PaaS for features, ease of use, and cost.

AWS is the obvious leader in this space, and despite playing with a few EC2 services during my free tier year, I hadn’t yet tried to deploy Docker containers there.

AWS’s Docker support is EC2 Container Service, or ECS.

To get started:

Install the AWS CLI: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

On the Mac I installed this easily with ‘brew install awscli’, which was much simpler than installing Python and pip per the official instructions.
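To confirm the install worked, check the version:

aws --version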

Create a user in AWS IAM for authenticating your local Docker install with ECS (this user is used instead of your root Amazon account credentials).

Run ‘aws configure’ locally and enter the access key and secret key credentials from the admin user you created in IAM.
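‘aws configure’ prompts for the values interactively; the keys below are placeholders:

$ aws configure
AWS Access Key ID [None]: AKIAEXAMPLEKEYID
AWS Secret Access Key [None]: exampleSecretAccessKey
Default region name [None]: us-east-1
Default output format [None]: json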

Follow through the steps in the ECS Getting Started guide here: https://console.aws.amazon.com/ecs/home?region=us-east-1#/firstRun

To summarize the steps in the getting started guide:

  • From the ECS Control Panel, create a Docker Image Repository: https://console.aws.amazon.com/ecs/home?region=us-east-1#/repositories
  • Connect your local Docker client with your Docker credentials in ECS:
    aws ecr get-login --region us-east-1
  • Copy and paste the docker login command output by the previous step; this will log you in for 24 hours
  • Tag your image locally, ready to push to your ECS repository – use the repo URI from the first step:
    docker tag imageid ecs-repo-uri

The example command in the docs looks like this:

docker tag e9ae3c220b23 aws_account_id.dkr.ecr.region.amazonaws.com/repository-name

For the last param, the tag name, use the ECS Docker repo URI from when you created the repo.

Push the image to your ECS repo (where image-tag-name is the same tag name used above):

docker push image-tag-name
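Putting the login, tag and push steps together, the whole sequence looks something like this (the account ID, region and repo name are placeholders):

# run the docker login command that get-login prints out
$(aws ecr get-login --region us-east-1)

# tag the local image with the ECS repo URI, then push it
docker tag my-image:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest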

Docker images are run on ECS using a task definition. You can create one with the web UI (https://console.aws.amazon.com/ecs/home?region=us-east-1#/taskDefinitions), or define one manually as a JSON file. If you create a task from the web UI, you can copy its JSON as a template for another task.
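As a rough sketch, a minimal task definition for a single container exposing port 80 looks something like this (the family name, image URI and memory value are placeholders to adjust for your own app):

{
  "family": "my-web-app",
  "containerDefinitions": [
    {
      "name": "my-web-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",
      "memory": 128,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ]
    }
  ]
}

Saved as a file, it can be registered from the CLI with:

aws ecs register-task-definition --cli-input-json file://my-web-app-task.json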

Before you can run a task, you need to create a Cluster using the web UI: https://console.aws.amazon.com/ecs/home?region=us-east-1#/clusters
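There is also a CLI equivalent, although it only creates an empty cluster and you still need container instances registered into it (the getting started wizard handles that part for you):

aws ecs create-cluster --cluster-name my-cluster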

Run your task specifying the EC2 cluster to run on:

aws ecs run-task --task-definition task-def-name --cluster cluster-name

If you omit the --cluster param, you’ll see this error:

Error: "An error occurred (ClusterNotFoundException) when calling the RunTask operation: Cluster not found."

To check cluster status:

aws ecs describe-clusters --clusters cluster-name
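To check the task itself is running, list the tasks on the cluster and then describe one by its ARN:

aws ecs list-tasks --cluster cluster-name
aws ecs describe-tasks --cluster cluster-name --tasks task-arn-from-list-tasks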

Ensure you have an inbound rule on your EC2 security group to allow incoming requests to the exposed port on your container (e.g. TCP 80 for incoming web traffic).
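You can add the rule from the EC2 console, or with the CLI, something like this (the security group ID is a placeholder, and 0.0.0.0/0 opens the port to any source IP, so narrow the CIDR if needed):

aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 80 --cidr 0.0.0.0/0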

Next up: deploying a single container is not particularly useful by itself, so next I’m going to take a look at adding Netflix Eureka for discovery of other services deployed in containers.

Checklist for accessing an AWS EC2 instance with ssh

Quick checklist of items to check for enabling ssh access into a running EC2 instance:

  • EC2 instance is started (check from AWS console)
  • From the AWS console, check that the Security Group for the instance has an inbound rule for SSH – if you’re only accessing it from your current location, you can press ‘My IP’ to restrict the rule to your current public IP
  • From Network & Security, create a keypair and download the .pem file
  • Check the public DNS name for your EC2 instance from the console
  • chmod 400 your .pem file, otherwise ssh will refuse to use it and warn that it’s publicly readable
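For example, with the key file saved in your Downloads folder (the file name is a placeholder):

chmod 400 ~/Downloads/my-keypair.pem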

Connect with:

ssh -i path-to-.pem-file ec2-user@ec2-your-instance-name.compute-xyz.amazonaws.com

Interesting point of view on Amazon’s business model

The fact that Amazon exists as an online retailer but also offers cloud-based hosting services has always interested me. I had wondered whether the hosting business was based on technology they developed in house to support the online retail business, and that they then decided to set themselves up as a hosting provider based on that technology. However, it appears from some history on Wikipedia that they later migrated their online store onto the AWS platform, so AWS was developed separately, at a later point.

Kas Thomas has an interesting post about the broad diversification of Amazon’s business.