Getting past Vagrant’s “Authentication failure” error when starting up OpenShift Origin

For getting up and running quickly with OpenShift Origin, Red Hat has an all-in-one VM image you can provision with Vagrant. The instructions say not to use Vagrant 1.8.5 as there’s an issue with its SSH setup. Since I already had 1.8.5 installed for some other projects, I tried anyway, and ran into exactly that issue: SSH key authentication into the VM failing.

When provisioning the VM, you’ll see:


Kevins-MacBook-Pro:openshift-origin kev$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'openshift/origin-all-in-one' is up to date...
==> default: Resuming suspended VM...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Authentication failure. Retrying...
    default: Warning: Authentication failure. Retrying...

There are a number of posts discussing this issue and a few workarounds, for example, here and here.

The suggestions relate to switching from SSH key authentication to username/password, by adding this to your Vagrantfile:

config.ssh.username = "vagrant"
config.ssh.password = "vagrant"

I tried this, and when running vagrant up I got different errors, about “SSH authentication failed”. Next I tried also adding this recommendation:


config.ssh.insert_key = false

This didn’t make any difference initially. I did a vagrant destroy and brought the VM up again, which at first ran into the same issue; I Ctrl-C’d out, tried again, and it worked the second time. I’m not sure which of these steps actually got past the SSH key issue, but at this point I was up and running. There’s a long discussion in both of the threads linked above describing the cause of the issue, so if you’re interested take a look through those.
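For reference, here’s a minimal sketch of the relevant Vagrantfile settings combined (the box name is taken from the console output above; everything else in the generated Vagrantfile stays as-is):

Vagrant.configure("2") do |config|
  config.vm.box = "openshift/origin-all-in-one"
  # fall back to the box's well-known default credentials...
  config.ssh.username = "vagrant"
  config.ssh.password = "vagrant"
  # ...and stop Vagrant from replacing the box's insecure key with a generated one
  config.ssh.insert_key = false
end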

Building a multi-container Spring Boot and MongoDB webapp with Docker 1.12.x – part 2

In the first part of this article, I showed how I split the frontend, backend and database all into their own containers, and how each could be individually scaled using docker-compose.

If you’re already familiar with docker-compose and with using haproxy to load balance across containers, you might have noticed a limitation in my approach: the backend REST service was exposing its port 8080 externally, so I don’t think the HTTP requests from the frontend in the browser were ever passing through haproxy to be load balanced; only the requests loading the frontend on port 80 were being balanced.

I looked into how I could configure a single haproxy with multiple backends listening on different ports, but eventually came to the conclusion that adding two separate haproxy containers, one load balancing port 80 and one load balancing port 8080, was the easier approach.

I’m not sure if this is the best way to approach it, but it certainly works. Leave me a comment if you have any suggestions.

Here’s my final docker-compose.yml:


version: '2'
  
services:
    mongodata:
        image: mongo:3.2
        volumes:
            - /data/db
        entrypoint: /bin/bash
    mongo:
        image: mongo:3.2
        depends_on: 
            - mongodata
        volumes_from:
            - mongodata
        ports:
            - "27017"
    addressbook:
        image: addressbook
        depends_on: 
            - mongo
        environment:
            - MONGODB_DB_NAME=addressbook
        ports:
            - "8080"
        links:
            - mongo
    web:
        image: docker-web-angularjs
        ports:
            - "80"
    lb-web:
        image: dockercloud/haproxy
        depends_on: 
            - web
        environment:
            - STATS_PORT=1936
            - STATS_AUTH="admin:your-password"
        links:
            - web
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        ports:
            - 80:80
            - 1936:1936
    lb-addressbook:
        image: dockercloud/haproxy
        depends_on: 
            - addressbook
        environment:
            - STATS_PORT=1937
            - STATS_AUTH="admin:your-password"
        links:
            - addressbook
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        ports:
            - 8080:80
            - 1937:1937
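With the two load balancers in place, the frontend and backend can now be scaled independently; the dockercloud/haproxy image watches the mounted docker.sock and reconfigures itself as the linked containers come and go. For example (the instance counts here are arbitrary):

docker-compose scale web=3
docker-compose scale addressbook=2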

Building a multi-container Spring Boot and MongoDB webapp with Docker 1.12.x

I’ve spent a bunch of time playing with apps in single containers up until now and reached the point where I started to think about questions like:

“ok, so how is this done if you have an application with multiple services, where you may need to scale individual services but not others?”

and

“how does an app in one container talk to another container?”

Composing applications from multiple containers and handling the scaling is, I think, the sweet spot for Kubernetes, but I don’t feel ready to ramp up quite that much just yet. Besides, with Docker 1.12 adding ‘swarm mode’, and with docker-compose, it looks like Docker has most of the tools you need to build and scale a multi-container app out of the box, without adding additional tools to the stack.

So here’s where I started:

  • Container 1: Spring Boot app with JAX-RS RESTful endpoints
  • Container 2: MongoDB database
  • Container 3: Data volume container for MongoDB

As it turns out, this doesn’t involve much more work than building Dockerfiles for the individual containers and then wiring them together with a docker-compose.yml file.

Starting with the individual containers, here are the pretty simple Dockerfiles for each:

Spring Boot container:

# build on the Alpine-based OpenJDK 8 image and run the Spring Boot fat jar
FROM java:openjdk-8-alpine
ADD SpringBootAddressBook-0.0.1-SNAPSHOT.jar /opt/SpringBootAddressBook-0.0.1-SNAPSHOT.jar
EXPOSE 8080
# default MongoDB connection settings, overridable from docker-compose.yml
ENV MONGODB_DB_NAME addressbook
ENV MONGODB_DB_HOST mongo
ENV MONGODB_DB_PORT 27017
ENTRYPOINT ["java", "-jar", "/opt/SpringBootAddressBook-0.0.1-SNAPSHOT.jar"]
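The docker-compose files below refer to this image simply as addressbook, so it needs to be built and tagged with that name first; assuming the Dockerfile and the Spring Boot jar are in the current directory, something like:

docker build -t addressbook .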


MongoDB container:

MongoDB can be run straight from the official image on Docker Hub, using one container for the server and one as a data volume container (see the complete docker-compose file below).

Bringing it all together, here’s the Docker Compose file to orchestrate the containers together:

version: '2'
services:
    mongodata:
        image: mongo:3.2
        volumes:
            - /data/db
        entrypoint: /bin/bash
    mongo:
        image: mongo:3.2
        depends_on: 
            - mongodata
        volumes_from:
            - mongodata
        ports:
        #kh: only specify internal port, not external, so we can scale with docker-compose scale
            - "27017"
    addressbook:
        image: addressbook
        depends_on: 
            - mongo
        environment:
            - MONGODB_DB_NAME=addressbook
        ports:
            - 8080:8080
        links:
            - mongo

At this point, the group of containers can be brought up as a whole with:

docker-compose up

… and brought down with:

docker-compose down
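By default docker-compose up runs in the foreground and streams all the container logs; to bring the whole group up detached in the background instead:

docker-compose up -d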

You can individually scale any container with:

docker-compose scale containername=count

… where count is the number of container instances to spin up.


So what if you want to add in a web frontend as a container too? Easy enough. Here’s an AngularJS frontend, served by nginx:

    web:
        image: docker-web-angularjs
        ports:
            - "80"
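Since the web service only specifies its internal port, it can be scaled without any host port clashes, for example:

docker-compose scale web=2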

Now, if we spin up multiple containers for the REST backend and the nginx frontend, we need a load balancer too, right? Also easy: just add in haproxy:

    lb:
        image: dockercloud/haproxy
        depends_on: 
            - addressbook
        environment:
            - STATS_PORT=1936
            - STATS_AUTH="admin:password"
        links:
            - addressbook
            - web
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        ports:
            - 80:80
            - 1936:1936
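Once everything is up, haproxy’s stats page is a quick way to confirm which backend containers it’s balancing across; using the STATS_PORT and STATS_AUTH values from above (and assuming the stats page is served at the root path, which I believe is this image’s default):

curl -u admin:password http://localhost:1936/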

I noticed that the startup order of the containers was not always predictable, and sometimes the Spring Boot container would start before the MongoDB container was up. This can be fixed by adding the depends_on element (note that depends_on only controls startup order; it doesn’t wait for a service to actually be ready to accept connections). I’m not sure if I really needed to add all the dependencies that I did in order to force a very specific startup order, but this seems to work for me. The order I have in the complete docker-compose.yml is (from first to last):

  • mongodata (data container)
  • mongo
  • addressbook (REST backend)
  • web (AngularJS frontend)
  • haproxy

The complete source for the AddressBook backend is available in this project on GitHub; the deploy-* folders contain the individual Dockerfiles:

https://github.com/kevinhooke/SpringBootRESTAddressBook


Docker Swarm node set up with Docker 1.12 RC on Raspberry Pi

I’ve been following through the steps in this article to set up a Docker swarm cluster on a pair of Raspberry Pis. Most of the steps work as-is from Mac OS, but a few need slight variations.

This is most likely going to be part 1 of a few posts as I work through and get this working.

For example, to copy your ssh key to the Pis, instead of:

ssh-copy-id pirate@pi3.local

… you can do (from tip here):

cat ~/.ssh/id_rsa.pub | ssh user@machine "mkdir ~/.ssh; cat >> ~/.ssh/authorized_keys"
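That works fine on a fresh Pi image; a slightly more defensive variant that won’t fail if ~/.ssh already exists, and that sets the permissions sshd expects:

cat ~/.ssh/id_rsa.pub | ssh pirate@pi3.local "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"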

To switch between remote hosts:

eval $(docker-machine env pi1)

where pi1 is the name of the remote host.
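To list all the hosts docker-machine knows about, and see which one your client is currently pointed at (marked with an asterisk in the ACTIVE column):

docker-machine ls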

To switch back to your local host (not entirely obvious, but I found the answer here):

eval "$(docker-machine env -u)"

After ‘docker swarm init’ on the first node and joining the other nodes to the cluster, ‘docker node ls’ run against the manager lists the nodes in the cluster.
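For reference, the rough sequence is below; swarm init prints the exact join command, including the token, to run on each of the other nodes (the <token> and <manager-ip> values here are placeholders):

docker swarm init
docker swarm join --token <token> <manager-ip>:2377
docker node ls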

More to come in part 2 :-)