Installing Docker on an AWS EC2 instance

AWS offers its own EC2 Container Service (ECS), which simplifies deploying Docker containers to EC2 instances (and clusters of instances) and managing those containers. If you want to do it yourself though, you can easily install Docker on your own instance.

For example, on an Ubuntu EC2 instance:

sudo apt-get install docker.io

Start the docker service with:

sudo service docker start
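To check the daemon came up, you can run, for example:

sudo docker info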

If you want to manage your own Docker install on EC2, AWS has a guide walking you through what you need to know; for further details see: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html

(The latest packages from Docker's own apt repositories are docker-ce and docker-ee; see the Docker docs for more info.)

Configuring nginx to proxy REST requests across multiple Spring Boot microservices in Docker Containers

Using nginx to proxy requests across Docker containers is a common use case for nginx, and it's covered in many posts and articles. Getting it working from scratch, though, is a not-so-trivial task. Following many articles, I got confused about whether I needed to load balance requests across identical container instances, or whether I needed nginx to proxy requests across my different containers. I ran into quite a few configuration issues and challenges, mostly, I think, because I didn't understand what I was trying to do (I ended up with configuration trying to load balance and proxy at the same time, which was not what I needed) 🙂

Articles like this one and this one show how to configure 'upstream' servers that you can load balance requests across. My understanding is that this is what you need if you have multiple server or container instances and you want nginx to load balance across them. After a while of trying to get this configuration working, I realised all I needed was the proxy_pass config for nginx, telling it to proxy requests for a matching URL to a given Spring Boot service in a container. This question captures the approach well.

To explore a typical configuration, I have 2 simple Spring Boot REST services, springbootservice1 and springbootservice2, each running in its own Docker container. When run individually, they are accessed via:

http://localhost:8080/service1/example1

and

http://localhost:8080/service2/example2
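The services themselves aren't the focus here, but for context a minimal Service1 might look something like this (a sketch only; the package, class name and response body are illustrative, not the original code):

package service1;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Minimal Spring Boot app exposing /service1/example1 on the default port 8080
@SpringBootApplication
@RestController
@RequestMapping("/service1")
public class Service1Application {

    @RequestMapping("/example1")
    public String example1() {
        return "hello from service1";
    }

    public static void main(String[] args) {
        SpringApplication.run(Service1Application.class, args);
    }
}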

As standalone Spring Boot services, they obviously can't both run on the same host at the same time without additional configuration to put them on different ports. That would be trivial with just two services, but once you scale this approach up to many different services, managing different ports with manual configuration is not particularly practical.

Out of the box, Docker lets you manage the exposed external ports easily when you 'docker run' a container with the '-p' option. What I was interested in was a solution that runs the services in different containers without manually defining and managing ports by hand, and that lets a web app call any of the services on port 80, so an app consuming these services has no need to know what ports the services are actually running on.
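The manual version of this, for contrast, would be something like the following, with the host ports picked by hand (the port numbers here are just for illustration):

# map each container's port 8080 to a different host port by hand -
# workable for two services, tedious once the number of containers grows
docker run -d -p 8081:8080 springbootservice1
docker run -d -p 8082:8080 springbootservice2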

The Dockerfile for each looks like this (replace Service1 with Service2 for the second service):

#Official JDK8 Alpine-based image
FROM java:openjdk-8-alpine
ADD target/SpringBootService1-0.0.1-SNAPSHOT.jar /opt/SpringBootService1-0.0.1-SNAPSHOT.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/opt/SpringBootService1-0.0.1-SNAPSHOT.jar"]

The Dockerfile for nginx looks like this:

FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY conf /etc/nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

And here’s my config for nginx:

events { worker_connections 1024; }

http {
  server {
    listen 80;
    location /service1 {
      proxy_pass http://springbootservice1:8080/service1/;
    }
    location /service2 {
      proxy_pass http://springbootservice2:8080/service2/;
    }
  }
}

The key part of this config is the proxy_pass config:

location /service1 { 
  proxy_pass http://springbootservice1:8080/service1/;
}

This takes incoming requests with a URI matching /service1 and proxies them to http://springbootservice1:8080/service1/. In this case springbootservice1 is the default hostname of the container running Service1; it's the service's default name from the docker-compose config, which we'll cover next. Note that the '/'s here are significant: when proxy_pass is given a URI, nginx replaces the part of the incoming request URI that matched the location prefix with that URI and appends the remainder, which is what forwards the full path on to the service in the container. Here's the docker-compose.yml:

version: '3'
 
services:
     nginx-lb:
         build: ../nginx-loadbalancer
         #image: nginx-lb
         ports:
             - "80:80"
         links:
             - springbootservice1
             - springbootservice2
         depends_on:
             - springbootservice1
             - springbootservice2
     springbootservice1:
         image: springbootservice1
         ports:
             - "8080"
     springbootservice2:
         image: springbootservice2
         ports:
             - "8080"

To bring up all the containers in one go, 'docker-compose up' starts all 3 containers, and nginx now handles requests on port 80 and proxies them to the correct container based on the path. This approach could easily be scaled up to include more containers, but at some point, with this configuration hardcoded in both the nginx.conf file and the docker-compose file, it becomes obvious that a better solution would be some kind of dynamic discovery of the containers as they become available. I'll be investigating some options for that kind of approach next.
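Putting it together (assuming the two service images have already been built locally), the end result looks like this:

docker-compose up

# nginx now proxies both services on port 80:
curl http://localhost/service1/example1
curl http://localhost/service2/example2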

Running Consul for service discovery in Docker containers

I’ve been looking at Spring Boot services in Docker and options for Service Discovery. I started looking at Eureka a while back, but just took a look at Consul.

Here’s my docker-compose.yml that I’ve got working so far. I realize this is just Consul running in 3 containers (1 server, and 2 containers with Consul agents), but I was interested in taking a look at how the containers register with the main server and how it’s configured.
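In outline it looks something like this. This is a sketch of that layout based on the official consul image; the exact flags, ports and service names are illustrative, not my exact config:

version: '3'
services:
    consul-server:
        image: consul
        command: agent -server -bootstrap-expect=1 -client=0.0.0.0
    consul-agent1:
        image: consul
        command: agent -retry-join=consul-server -client=0.0.0.0
        depends_on:
            - consul-server
    consul-agent2:
        image: consul
        # -ui serves the web UI from this agent
        command: agent -ui -retry-join=consul-server -client=0.0.0.0
        ports:
            - "8500:8500"
        depends_on:
            - consul-server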

One of the agent containers is running the UI on port 8500, which looks like an interesting way to get an overview of what's registered with the Consul server.

More to come later.

Deploying Docker Containers to AWS EC2 Container Service (ECS)

I've spent a lot of time playing with Docker containers locally for various personal projects, but haven't spent much time deploying them to the cloud. I did look at IBM Bluemix a while back, and its web console and toolset made for a pretty good developer experience. I'm also curious how OpenShift Online is evolving into a container-based service, as I've deployed many personal projects to OpenShift and it has to be my favorite PaaS for features, ease of use, and cost.

AWS is the obvious leader in this space, and despite playing with a few EC2 services during the free tier year, I hadn't yet tried deploying Docker containers there.

AWS’s Docker support is EC2 Container Service, or ECS.

To get started:

Install the AWS CLI: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

On the Mac I installed this easily with 'brew install awscli', which was much simpler than installing Python and pip per the official instructions.

Create an AWS user in AWS IAM for authenticating between your local Docker install and ECS (this user is used instead of your master Amazon account credentials).

Run 'aws configure' locally and add the access key and secret key credentials for the IAM user you just created.
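The configure step prompts for the keys and a couple of defaults; the values below are AWS's documented example placeholders, not real credentials:

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json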

Follow the steps in the ECS Getting Started guide here: https://console.aws.amazon.com/ecs/home?region=us-east-1#/firstRun

To summarize the steps in the getting started guide:

  • From the ECS Control Panel, create a Docker Image Repository: https://console.aws.amazon.com/ecs/home?region=us-east-1#/repositories
  • Connect your local Docker client with your Docker credentials in ECS:
    aws ecr get-login --region us-east-1
  • Copy and paste the docker login command from the previous step; this logs you in for 24 hours
  • Tag your image locally, ready to push to your ECS repository, using the repo URI from the first step:
    docker tag imageid ecs-repo-uri

The example command in the docs looks like this:

docker tag e9ae3c220b23 aws_account_id.dkr.ecr.region.amazonaws.com/repository-name

For the last param, the tag name, use the ECS Docker repo URI from when you created the repo.

Push the image to your ECS repo (where image-tag-name is the same as the tag name above):

docker push image-tag-name
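With the example tag from the docs above, that would be:

docker push aws_account_id.dkr.ecr.region.amazonaws.com/repository-name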

Docker images are run on ECS using a task definition. You can create one with the web UI (https://console.aws.amazon.com/ecs/home?region=us-east-1#/taskDefinitions), or write one manually as a JSON file. If you create it from the web UI, you can copy the JSON from the configured task as a template for another task.
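As a rough example, a minimal task definition for one of the Spring Boot containers might look like this (the names, sizes and ports here are illustrative, not from a real account):

{
  "family": "springbootservice1",
  "containerDefinitions": [
    {
      "name": "springbootservice1",
      "image": "aws_account_id.dkr.ecr.us-east-1.amazonaws.com/repository-name",
      "cpu": 128,
      "memory": 256,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 80
        }
      ]
    }
  ]
}

A JSON file like this can also be registered from the CLI with 'aws ecs register-task-definition --cli-input-json file://taskdef.json'.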

Before you can run a task, you need to create a Cluster, using the web ui: https://console.aws.amazon.com/ecs/home?region=us-east-1#/clusters

Run your task specifying the EC2 cluster to run on:

aws ecs run-task --task-definition task-def-name --cluster cluster-name

If you omit the --cluster param, you'll see this error:

Error: "An error occurred (ClusterNotFoundException) when calling the RunTask operation: Cluster not found."

To check cluster status:

aws ecs describe-clusters --clusters cluster-name

Ensure you have an inbound rule on your EC2 security group to allow incoming requests to the exposed port on your container (e.g. TCP 80 for incoming web traffic).
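This can be done from the web console, or with the CLI, something like this (the security group id is a placeholder):

aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 80 --cidr 0.0.0.0/0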

Next up: deploying a single container on its own is not particularly useful, so I'm going to take a look at adding Netflix Eureka for discovery of other services deployed in containers.