Deploying Docker containers to AWS ECS Fargate

The interesting feature of AWS ECS Fargate is that it’s ‘serverless for containers’. Serverless broadly means you don’t need to be concerned with provisioning and maintaining the servers or compute that run your code. With Fargate, you don’t have to provision compute for your Docker containers: AWS manages the compute for you.

If you’re working with Docker containers, AWS has multiple runtime options, each with its own pros and cons:

  • running Docker on your own EC2 instances – the roll-your-own approach: you provision instances and manage everything yourself
  • AWS ECS with EC2 launch type – you still need to provision a pool of available EC2 instances on which AWS will run your containers
  • AWS EKS – managed Kubernetes
  • AWS ECS with Fargate launch type – you don’t need to provision any compute (e.g. EC2), AWS manages the compute for you

I’m taking a look at AWS ECS Fargate to see what it takes to deploy a Docker container.

An ECS cluster needs a VPC in which your container instances will run, with at least 1 public or private subnet. The steps to create a new VPC with subnets are covered here.
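
If you’d rather script the VPC pieces instead of using the Console wizard, the AWS CLI equivalents look roughly like this – the CIDR blocks and the resource ids shown are just placeholders:

# create a VPC and one public and one private subnet in us-west-2a
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.0.0/24 --availability-zone us-west-2a
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.1.0/24 --availability-zone us-west-2a

# internet gateway for the public subnet, plus an Elastic IP (e.g. for a NAT gateway)
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc123 --vpc-id vpc-0abc123
aws ec2 allocate-address --domain vpc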

Following these steps from the VPC section in ECS tutorials using the AWS Console I created:

  • an Elastic IP to associate with my cluster for public access
  • a new VPC with 1 private subnet and 1 public subnet

I created these with the VPC Wizard, using the option to create a VPC with both public and private subnets.

Your public subnet doesn’t assign public IPs to instances by default, so follow the steps in the guide to change this default behavior: select your public subnet, and under Actions choose the option to modify the auto-assign IP settings, then enable auto-assign public IPv4 addresses.
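
The same change can be made from the CLI (the subnet id is a placeholder):

aws ec2 modify-subnet-attribute --subnet-id subnet-0abc123 --map-public-ip-on-launch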

My public subnet was created in AZ us-west-2a, and my private subnet is in the same AZ. The guide recommends creating one additional public and one additional private subnet in a different AZ for high availability.

To create a ECS Fargate cluster you can use the AWS CLI like this:

aws ecs create-cluster --cluster-name your-fargate-cluster-name

This will return some stats about your newly created cluster, like:

"clusterName": "fargate-cluster1",
"status": "ACTIVE",
"registeredContainerInstancesCount": 0,
"runningTasksCount": 0,
"pendingTasksCount": 0,
"activeServicesCount": 0,
"statistics": [],
"tags": [],
"settings": [
{
"name": "containerInsights",
"value": "disabled"
}
],
"capacityProviders": [],
"defaultCapacityProviderStrategy": []

However, I’m not sure at this point how to configure the new cluster to use the VPC and subnets I just created, so for my first cluster I’m going to use the ECS wizard in the AWS Console, and then come back to the CLI later.

Using the wizard, I selected the ‘Networking only’ cluster template (powered by Fargate).

I don’t need to select the ‘Create VPC’ option because I’ve already created one.

It turns out there aren’t any options to associate the VPC at this point; tasks are associated with your VPC and subnets when you create them later. So the CLI step earlier would have created the cluster exactly the same.

Next, you need to define an ECS task definition that describes the task that will run on the cluster. Following the tutorial here, the example JSON file provided looks like this:

{
    "family": "sample-fargate",
    "networkMode": "awsvpc",
    "containerDefinitions": [
        {
            "name": "fargate-app",
            "image": "httpd:2.4",
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "entryPoint": [
                "sh",
                "-c"
            ],
            "command": [
                "/bin/sh -c \"echo '<html><head><title>Amazon ECS Sample App</title><style>body {margin-top: 40px; background-color: #333;}</style></head><body><div style=color:white;text-align:center><h1>Amazon ECS Sample App</h1><h2>Congratulations!</h2><p>Your application is now running on a container in Amazon ECS.</p></div></body></html>' > /usr/local/apache2/htdocs/index.html && httpd-foreground\""
            ]
        }
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "256",
    "memory": "512"
}

Since we’re deploying a Docker container, we need to specify a Docker image to pull from somewhere. This example provides the name of an image to pull from Docker Hub, in this case httpd:2.4. To deploy your own apps, you write a Dockerfile for your app and publish the image to a Docker registry such as Docker Hub or AWS ECR.
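
As a rough sketch, pushing your own image to ECR with a recent AWS CLI (v2) looks something like this – the repo name, region and account id here are placeholders:

aws ecr create-repository --repository-name my-app
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com
docker build -t my-app .
docker tag my-app:latest 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:latest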

Register this task definition with:

aws ecs register-task-definition --cli-input-json file://task-def.json

Note that the AWS CLI (v2) sends the command output to your shell’s default pager. On my Mac in zsh it shows the JSON response with a ‘:’ prompt at the bottom of the screen (this is the less pager, not an editor); pressing ‘q’ quits the pager, and the task definition has already been registered.
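
If you don’t want the pager behavior, the AWS CLI v2 lets you disable it per command, for example:

AWS_PAGER="" aws ecs register-task-definition --cli-input-json file://task-def.json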

You can list registered Task Definitions with:

aws ecs list-task-definitions
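
To look at the details of a specific registered revision:

aws ecs describe-task-definition --task-definition sample-fargate:1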

By default, your ECS service will only have a private IP and would typically be exposed publicly via an ELB. You can configure the task to be allocated its own public IP by adding this config:

"networkConfiguration": {
"awsvpcConfiguration": {
"assignPublicIp": "ENABLED",
"securityGroups": [ "sg-12345678" ],
"subnets": [ "subnet-12345678" ]
}
}

This is where we specify the subnets that were created earlier. I’m going to expose this container publicly, so I’m associating it with the 2 public subnets I created (added to the above config snippet).

I also need a Security Group for the config, so I’ll create that too and allow incoming traffic on port 80.
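
Something like this should do it – the VPC id and the returned security group id are placeholders:

aws ec2 create-security-group --group-name fargate-service-sg --description "Allow HTTP to Fargate tasks" --vpc-id vpc-0abc123
aws ec2 authorize-security-group-ingress --group-id sg-0abc123 --protocol tcp --port 80 --cidr 0.0.0.0/0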

It’s not obvious from the docs where this networkConfiguration section gets specified: it doesn’t go in the Task Definition JSON, it gets passed when you create the Service that uses the Task Definition.

To create a Service, use this cli command:

aws ecs create-service --cluster fargate-cluster --service-name fargate-service --task-definition sample-fargate:1 --desired-count 1 --launch-type "FARGATE" --network-configuration "awsvpcConfiguration={subnets=[subnet-abcd1234],securityGroups=[sg-abcd1234],assignPublicIp=ENABLED}"

Using this command to plug in the subnet ids and Security Group id, from the ECS Console you’ll now see you have a service running! If you drill down to the task you can find the assigned public IP. Hit the IP to call the service. Since we’re running an httpd container serving a sample web page, we see the ‘Amazon ECS Sample App’ page.
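
If you’d rather find the public IP from the CLI than from the Console, one way (the task ARN and ENI id below are placeholders) is to get the task’s network interface from describe-tasks and then look up its public IP:

aws ecs list-tasks --cluster fargate-cluster
aws ecs describe-tasks --cluster fargate-cluster --tasks <task-arn>
# the attachment details in the response include a networkInterfaceId, e.g. eni-0abc123
aws ec2 describe-network-interfaces --network-interface-ids eni-0abc123 --query 'NetworkInterfaces[0].Association.PublicIp'
curl http://<public-ip>/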

Awesome, up and running!

Moving my nginx+mysql WordPress VPS native install to Docker containers on a KVM VPS

My WordPress blog that you’re reading right now is running on nginx and MySQL installed on a cheap OpenVZ VPS. I’ve been running on a $2.50 VPS from Virmach for the past 6 months or so and have been very happy with the service. I spent a bunch of time tweaking the nginx and MySQL config params to run in < 512MB, which it does comfortably, but nginx and MySQL are both installed directly on the Ubuntu VM instance, and it would be great if I could make this setup more easily movable between cloud providers (or even have a local copy of the setup for testing, vs the live site).

I’ve been spending a lot of time playing with Docker and Kubernetes, so it seems logical that I should move the site into containers and then this will allow me to explore other deployment options.

Migration Steps – find a KVM VPS

As far as I know you can’t install Docker in an OpenVZ virtualized VPS container, so as a first step I need to move to a KVM-based VPS so I can install Docker (and possibly Kubernetes). I’ve been shopping the deals on lowendbox.com and there are plenty of reasonable deals for around $5/month for various combinations of 2 to 4GB RAM and 2 to 4 vCPUs.

Dockerize nginx, MySQL and WordPress

I’ve been playing with this already. I’ve picked my own combo of favorite/useful WordPress plugins, so I’ll probably share a generic set of Dockerfiles and leave it up to anyone who wants to use them to customize their own WordPress install in a container.

Configure a local dev/test environment Docker setup vs production environment Docker setup on my VPS

This makes a lot of sense and is a benefit of using containers. This will allow me to test my config locally, and then push to my production node. I’ve been looking at using Rancher to help with this, but still got lots to learn.

More updates to come as my project progresses.

Building a multi-container Spring Boot and MongoDB webapp with Docker 1.12.x – part 2

In the first part of this article, I showed how I split the frontend, backend and database all into their own containers, and how each could be individually scaled using docker-compose.

If you’re already familiar with docker-compose and using haproxy for load balancing against containers, you might have noticed there’s a limitation in my approach: the backend REST service was exposing its port 8080 externally, so I don’t think HTTP requests from the frontend browser were ever passing through haproxy to be load balanced; only the requests to load the frontend on port 80 were being load balanced.

I looked into how I could configure haproxy with multiple backends listening on different ports, but eventually came to the conclusion that adding two separate haproxy containers, one load balancing port 80 and one load balancing port 8080, was easier to do.

I’m not sure if this is the best way to approach this, but it certainly works. Leave me a comment if you have any suggestions.

Here’s my final docker-compose.yml:

 

version: '2'
  
services:
    mongodata:
        image: mongo:3.2
        volumes:
        - /data/db
        entrypoint: /bin/bash
    mongo:
        image: mongo:3.2
        depends_on: 
            - mongodata
        volumes_from:
            - mongodata
        ports:
            - "27017"
    addressbook:
        image: addressbook
        depends_on: 
            - mongo
        environment:
            - MONGODB_DB_NAME=addressbook
        ports:
            - "8080"
        links:
            - mongo
    web:
        image: docker-web-angularjs
        ports:
            - "80"
    lb-web:
        image: dockercloud/haproxy
        depends_on: 
            - web
        environment:
            - STATS_PORT=1936
            - STATS_AUTH="admin:your-password"
        links:
            - web
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        ports:
            - 80:80
            - 1936:1936
    lb-addressbook:
        image: dockercloud/haproxy
        depends_on: 
            - addressbook
        environment:
            - STATS_PORT=1937
            - STATS_AUTH="admin:your-password"
        links:
            - addressbook
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        ports:
            - 8080:80
            - 1937:1937
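
To bring this up and scale out the frontend and backend services (the service counts here are just examples), the docker-compose commands of that era look like:

docker-compose up -d
docker-compose scale web=3 addressbook=2
docker-compose ps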

WildFly Swarm microservice development: JAX-RS app deployed to a Docker container

WildFly Swarm is an interesting take on the current ‘small is good’ trend: instead of deploying to the traditional, overly large Java EE app servers, you deploy self-contained, executable jars that run without needing a whole app server.

Rather than requiring the whole Java EE stack, WildFly Swarm lets you choose only the implementations of the parts that your service needs to execute. Need JAX-RS? Pull it in via a Maven dependency. Need JPA? Pull that in too. Any other required transitive dependencies are pulled in automatically by Maven – for example, pulling in JAX-RS also brings in a Servlet container and its APIs. The end result is a self-contained, packaged jar that contains everything it needs to run with a ‘java -jar’ command.

I’ve been spending some time recently looking at getting WebLogic Portal 10.3.6 running in a Docker container (check my work-in-progress changes on GitHub here). As EE containers go, it’s big. It’s heavy. If you want to describe anything as a monolith, then this is your perfect example. So, switching gears, I wanted to go to the other extreme and look at how you would build a lightweight Java-based service, and WildFly Swarm looked pretty interesting.

I attended one of the sessions at JavaOne this year giving an intro to Swarm, so it’s been on my todo list to take a look.

Getting started with Swarm is actually pretty easy, as it’s all driven by Maven dependencies and plugins. There’s an easy to follow tutorial here. The example apps showing different WildFly components packaged using Swarm are also worth a look here.

I worked through the examples putting together a JAX-RS helloworld app, and also a Dockerfile to package and deploy it to a Docker container. It was actually pretty easy, and my app ended up looking much like the provided examples.

My example JAX-RS resource is pretty simple, nothing complicated here.
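
A minimal resource along these lines is all it takes – the class name, path and message below are illustrative rather than my exact code, and depending on your setup you may also need a javax.ws.rs.core.Application subclass annotated with @ApplicationPath:

[code]
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Minimal hello world resource, exposed at /helloworld
@Path("/helloworld")
public class HelloWorldResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String sayHello() {
        return "Hello World from WildFly Swarm!";
    }
}
[/code]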

If you’re looking for the different WildFly services that you can package with Swarm, browsing the mvnrepository is a good place to start to quickly grab the mvn deps, or browse the examples or source.

For the Maven war and wildfly-swarm plugins:

[code]<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-war-plugin</artifactId>
      <configuration>
        <failOnMissingWebXml>false</failOnMissingWebXml>
        <packagingExcludes>WEB-INF/lib/wildfly-swarm-*.jar</packagingExcludes>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.wildfly.swarm</groupId>
      <artifactId>wildfly-swarm-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>package</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>[/code]

Next, adding a dependency for the WildFly Swarm JAX-RS:

[code]
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>wildfly-swarm-jaxrs</artifactId>
<version>1.0.0.Alpha5</version>
</dependency>[/code]

To build the self-contained executable jar, build with ‘mvn package’ – this builds the app as normal to target, but also produces a *-swarm.jar – this is the self-contained jar containing all the WildFly dependencies, and it can be run standalone with ‘java -jar helloworld-swarm-0.0.1-SNAPSHOT-swarm.jar’.
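
In other words:

mvn package
java -jar target/helloworld-swarm-0.0.1-SNAPSHOT-swarm.jar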

Building a Docker container

Given that we’ve now got a simple, single, self-contained Jar, deploying this into a Docker container is also pretty easy as we have no other dependencies to worry about (we just need a JVM).

An example Dockerfile looks like this:

[code]
FROM java:openjdk-8-jdk
ADD target/helloworld-swarm-0.0.1-SNAPSHOT-swarm.jar /opt/helloworld-swarm.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/opt/helloworld-swarm.jar"]
[/code]

All we’re doing here is:
1. Creating an image based on the official openjdk image from DockerHub

2. Adding our built WildFly Swarm jar containing our app into /opt in the image

3. Exposing port 8080

4. Defining the Entrypoint that executes when the container starts

To create an image based on this Dockerfile:

docker build -t helloworld .

To start a container based on this image:

docker run -d -p 8080:8080 imageid

… this creates a new container based on the new image, runs it as a daemon, with port 8080 in the container mapped to port 8080 on the host. Done!

Hit the URL of our endpoint using the IP of the running Docker Machine.
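
Assuming the default docker-machine VM and the illustrative /helloworld path from the resource sketch above, that’s something like:

curl http://$(docker-machine ip default):8080/helloworld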

To watch the logs of the running container, do:

docker logs -f containerid

In this case you see output as the WildFly Undertow Servlet container starts up and initializes our JAX-RS based app.

Pretty simple! We’ve got a minimal WildFly server starting up inside a Docker container in about 10s, and our app deploying in 2s. That’s pretty good if you ask me 🙂