Deploying Docker Containers to AWS EC2 Container Service (ECS)

I’ve spent a lot of time playing with Docker containers locally for various personal projects, but haven’t spent much time deploying them to the cloud. I did look at IBM Bluemix a while back, and its web console and toolset made for a pretty good developer experience. I’m also curious about how OpenShift Online is evolving into a container-based service, as I’ve deployed many personal projects to OpenShift and it has to be my favorite PaaS for features, ease of use, and cost.

AWS is the obvious leader in this space, and despite playing with a few EC2 services during my free first year, I hadn’t yet tried deploying Docker containers there.

AWS’s Docker support is EC2 Container Service, or ECS.

To get started:

Install the AWS CLI: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

On the Mac I installed this easily with ‘brew install awscli’, which was much simpler than installing Python and pip per the official instructions.

Create a user in AWS IAM for authenticating your local Docker install with ECS (this user is used instead of your root Amazon account credentials).

Run ‘aws configure’ locally and enter the access key and secret key credentials from when you created the user in IAM.
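For reference, ‘aws configure’ prompts for four values like this (the key values below are placeholders, not real credentials):

aws configure
AWS Access Key ID [None]: AKIAEXAMPLEKEYID
AWS Secret Access Key [None]: exampleSecretAccessKey
Default region name [None]: us-east-1
Default output format [None]: json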

Work through the steps in the ECS Getting Started guide here: https://console.aws.amazon.com/ecs/home?region=us-east-1#/firstRun

To summarize the steps in the getting started guide:

  • From the ECS Control Panel, create a Docker Image Repository: https://console.aws.amazon.com/ecs/home?region=us-east-1#/repositories
  • Connect your local Docker client with your repository credentials in ECR:
    aws ecr get-login --region us-east-1
  • Copy and paste the docker login command output by the previous step; this logs you in for 24 hours (there’s a shortcut for this, shown after this list)
  • Tag your local image ready to push to your ECR repository, using the repo URI from the first step:
docker tag imageid ecs-repo-uri
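The shortcut for the login step: since ‘aws ecr get-login’ just prints a docker login command, you can execute its output directly instead of copying and pasting (assuming a bash-like shell):

$(aws ecr get-login --region us-east-1)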

The example tag command in the docs looks like this:

docker tag e9ae3c220b23 aws_account_id.dkr.ecr.region.amazonaws.com/repository-name

For the last param, the tag name, use the repo URI from when you created the repo.

Push the image to your ECS repo (where image-tag-name is the same tag name used above):

docker push image-tag-name

Docker images are run on ECS using a task definition. You can create one with the web UI (https://console.aws.amazon.com/ecs/home?region=us-east-1#/taskDefinitions), or write one by hand as a JSON file. If you create one from the web UI, you can copy the JSON from the configured task as a template for another task.
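As a rough sketch, a minimal task definition JSON looks something like this (the family name, container name, and memory value here are hypothetical; substitute your own repo URI for the image):

{
  "family": "addressbook-web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "aws_account_id.dkr.ecr.us-east-1.amazonaws.com/repository-name:latest",
      "memory": 128,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80, "protocol": "tcp" }
      ]
    }
  ]
}

A JSON file like this can be registered from the CLI with:

aws ecs register-task-definition --cli-input-json file://task-definition.json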

Before you can run a task, you need to create a cluster, using the web UI: https://console.aws.amazon.com/ecs/home?region=us-east-1#/clusters

Run your task, specifying the cluster to run it on:

aws ecs run-task --task-definition task-def-name --cluster cluster-name

If you omit the --cluster param, you’ll see this error:

Error: "An error occurred (ClusterNotFoundException) when calling the RunTask operation: Cluster not found."

To check cluster status:

aws ecs describe-clusters --clusters cluster-name

Ensure you have an inbound rule on your EC2 security group to allow incoming requests to the exposed port on your container (e.g. TCP 80 for incoming web traffic).
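If you prefer the CLI to the console for this, you can add the rule with something like the following (the security group id here is a placeholder for your own):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0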

Next up: deploying a single container on its own is not particularly useful, so next I’m going to take a look at adding Netflix Eureka for discovery of other services deployed in containers.

Docker usage notes – continued (2)

A few rough usage notes, continuing from my first post a while back.

Delete all containers:
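The usual approach here is to pass the full container list to docker rm (this removes all stopped containers; add -f to force-remove running ones too):

docker rm $(docker ps -a -q)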

Run a container from an image and delete it when it exits: use --rm:

docker run -it --rm imageid


Pass an argument into a build:

  • use ‘ARG argname’ to declare the argument in your Dockerfile
  • pass a value for argname with --build-arg argname=value

Test during build if an arg was passed:

RUN test -n "$argname"
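Putting those together, a minimal hypothetical example Dockerfile:

FROM alpine
ARG argname
# fail the build early if no value was passed
RUN test -n "$argname"
RUN echo "building with argname=$argname"

… built with:

docker build --build-arg argname=somevalue -t argdemo .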


Building ARM Docker images on the Raspberry Pi

Install Docker for ARM using the install script:

curl -sSL https://get.docker.com | sh

From: https://www.raspberrypi.org/blog/docker-comes-to-raspberry-pi/

Set to startup as a service:

sudo systemctl enable docker

Start the service manually now (or reboot to start automatically):

sudo systemctl start docker

Add your user to the docker group (to run the docker CLI without sudo; log out and back in for the group change to take effect):

sudo usermod -aG docker pi


To create a new image from a Raspbian base for ARM, use the Raspbian images from resin (in your Dockerfile):

FROM resin/rpi-raspbian:latest

From: http://blog.alexellis.io/getting-started-with-docker-on-raspberry-pi/

Edit your Dockerfile to include and configure whatever you need, and build an image as normal on the Pi using:

docker build -t tagname .
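For example, a minimal Dockerfile sketch for the Pi (the package installed here is just illustrative):

FROM resin/rpi-raspbian:latest
# install whatever your app needs; python3 is just an example
RUN apt-get update && apt-get install -y python3
CMD ["python3", "--version"]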

… and off you go.

Running a React app in a Docker container

I started converting my AngularJS AddressBook app into React. Since this is part of a larger project to run all the parts of my app in Docker containers with Docker Compose, I needed to look at how I can run my React app in a container.

For development, using the webpack dev server works well locally. Running this from within a container, though, requires doing an ‘npm install’ and an ‘npm run build’ inside the container, which probably doesn’t make sense to run every time, as with all the npm dependencies from create-react-app this takes some time to run.

The other option is to build the app locally and then just build a container with the webpack output, and serve it with an HTTP server like nginx.

Let’s take a look at each of these options.

The easiest and simplest approach is to build the app locally first, as you normally would:

npm install
npm run build

A Dockerfile to build the container using the output from ‘npm run build’ then looks like this (using an nginx image as a starting point):

FROM nginx
COPY build /usr/share/nginx/html

You build this with:

docker build -t addressbook-react-web-nginx .

And then run with:

docker run -it -p 80:80 addressbook-react-web-nginx

Next up, for comparison, the Dockerfile to push everything into the image and run with ‘npm run start’ inside the container would look like this:

FROM node:6.10-alpine
RUN mkdir -p /home/web
COPY *.html /home/web/
COPY public /home/web/public/
COPY src /home/web/src/
COPY *.json /home/web/
WORKDIR "/home/web"
RUN npm install
ENTRYPOINT npm run start

This is starting from one of the official node images. It copies in all the source files, runs npm install and then sets the entrypoint for when the container starts to run the webpack dev server.

An improvement on this approach could be to build an intermediate image with the package.json copied in and npm install already run, and then build new images from this, incrementally adding just the changed source files. Plenty of options to explore depending on what you need to do.
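As one sketch of that idea, relying on Docker’s layer caching within a single Dockerfile rather than a separate intermediate image (so a variation, not the only approach):

FROM node:6.10-alpine
WORKDIR /home/web
# copy just package.json first: the npm install layer below is cached
# and only rebuilt when package.json changes
COPY package.json /home/web/
RUN npm install
# source changes only invalidate the layers from here down
COPY public /home/web/public/
COPY src /home/web/src/
ENTRYPOINT npm run start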

Next up, I’ll modify my previous Docker Compose file to include one of these new images containing my React app, and then I’ve got all the pieces ready to go.
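A minimal sketch of what the new service entry might look like in the compose file (the service name is hypothetical, and the image is the nginx one built above):

version: '2'
services:
  web:
    image: addressbook-react-web-nginx
    ports:
      - "80:80"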