Getting past Vagrant’s “Authentication failure” error when starting up OpenShift Origin

For getting up and running quickly with OpenShift Origin, Red Hat has an all-in-one VM image you can provision with Vagrant. The instructions say not to use Vagrant 1.8.5 as there’s an issue with the SSH setup – since I already had 1.8.5 installed for some other projects, I tried anyway, and ran into issues SSH’ing into the VM with SSH keys.

When provisioning the VM, you’ll see:


Kevins-MacBook-Pro:openshift-origin kev$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'openshift/origin-all-in-one' is up to date...
==> default: Resuming suspended VM...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Authentication failure. Retrying...
    default: Warning: Authentication failure. Retrying...

There are a number of posts discussing this issue and a few workarounds, for example, here and here.

The suggestions relate to switching from SSH key authentication to username/password, by adding this to your Vagrantfile:

config.ssh.username = "vagrant"
config.ssh.password = "vagrant"

I tried this, and when running vagrant up I got different errors about “SSH authentication failed”. Next I tried adding this recommendation:


config.ssh.insert_key = false

This didn’t make any difference at first. After a vagrant destroy, bringing the VM up again hit the same issue; I Ctrl-C’d out, tried once more, and it worked the second time. I’m not sure exactly which step got past the SSH key issue, but at this point I was up and running. There’s a long discussion in both of the threads linked above describing the cause of the issue, so if you’re interested take a look through those.
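
For reference, here’s a minimal sketch of the relevant Vagrantfile settings with both workarounds combined – the box name is the all-in-one image from the vagrant output above; the rest you’d merge into your existing config:

Vagrant.configure("2") do |config|
  config.vm.box = "openshift/origin-all-in-one"

  # workarounds for the Vagrant 1.8.5 SSH key issue:
  # use username/password auth and stop Vagrant from
  # replacing the default insecure key on first boot
  config.ssh.username = "vagrant"
  config.ssh.password = "vagrant"
  config.ssh.insert_key = false
end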

Building a multi-container Spring Boot and MongoDB webapp with Docker 1.12.x – part 2

In the first part of this article, I showed how I split the frontend, backend and database all into their own containers, and how each could be individually scaled using docker-compose.

If you’re already familiar with docker-compose and using haproxy for load balancing against containers, you might have noticed a limitation in my approach: the backend REST service was exposing its port 8080 externally, so I don’t think HTTP requests from the frontend in the browser were ever passing through haproxy to be load balanced – only the requests to load the frontend on port 80 were being load balanced.

I looked into how I could configure haproxy with multiple backends listening on different ports, but eventually came to the conclusion that adding two separate haproxy containers – one load balancing port 80 and one port 8080 – was easier to do.

I’m not sure if this is the best way to approach this, but it certainly works. Leave me a comment if you have any suggestions.

Here’s my final docker-compose.yml:


version: '2'
  
services:
    mongodata:
        image: mongo:3.2
        volumes:
            - /data/db
        entrypoint: /bin/bash
    mongo:
        image: mongo:3.2
        depends_on: 
            - mongodata
        volumes_from:
            - mongodata
        ports:
            - "27017"
    addressbook:
        image: addressbook
        depends_on: 
            - mongo
        environment:
            - MONGODB_DB_NAME=addressbook
        ports:
            - "8080"
        links:
            - mongo
    web:
        image: docker-web-angularjs
        ports:
            - "80"
    lb-web:
        image: dockercloud/haproxy
        depends_on: 
            - web
        environment:
            - STATS_PORT=1936
            - STATS_AUTH=admin:your-password
        links:
            - web
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        ports:
            - 80:80
            - 1936:1936
    lb-addressbook:
        image: dockercloud/haproxy
        depends_on: 
            - addressbook
        environment:
            - STATS_PORT=1937
            - STATS_AUTH=admin:your-password
        links:
            - addressbook
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        ports:
            - 8080:80
            - 1937:1937
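
To try the whole stack out – a quick sketch, assuming the addressbook and docker-web-angularjs images have already been built locally:

docker-compose up -d

# scale the frontend and REST backend independently; each haproxy
# picks up new containers via the mounted docker.sock
docker-compose scale web=2 addressbook=3

Per the STATS_PORT/STATS_AUTH settings above, the two haproxy stats pages are then at http://localhost:1936 (web) and http://localhost:1937 (addressbook).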

Building a multi-container Spring Boot and MongoDB webapp with Docker 1.12.x

I’ve spent a bunch of time playing with apps in single containers up until now and reached the point where I started to think about questions like:

“ok, so how is this done if you have an application with multiple services, where you may need to scale individual services but not others?”

and

“how does an app in one container talk to another container?”

Composing applications from multiple containers and handling the scaling is, I think, the sweet spot for Kubernetes, but I don’t feel ready to ramp up quite that much just yet. Besides, with Docker 1.12 adding ‘swarm mode’, and with docker-compose, Docker looks to have most of the tools you need to build and scale a multi-container app out of the box, without adding additional tools to the stack.

So here’s where I started:

  • Container 1: Spring Boot app with JAX-RS RESTful endpoints
  • Container 2: MongoDB database
  • Container 3: Data volume container for MongoDB

As it turns out, this doesn’t involve much more work than building Dockerfiles for the individual containers and then wiring them together with a docker-compose.yml file.

Starting with the individual containers, here are the (pretty simple) Dockerfiles:

Spring Boot container:

FROM java:openjdk-8-alpine
# copy the built Spring Boot jar into the image
ADD SpringBootAddressBook-0.0.1-SNAPSHOT.jar /opt/SpringBootAddressBook-0.0.1-SNAPSHOT.jar
EXPOSE 8080
# MongoDB connection settings, read by the app at startup
ENV MONGODB_DB_NAME addressbook
ENV MONGODB_DB_HOST mongo
ENV MONGODB_DB_PORT 27017
ENTRYPOINT ["java", "-jar", "/opt/SpringBootAddressBook-0.0.1-SNAPSHOT.jar"]
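
The three ENV values are there for the Spring Boot app to read its MongoDB connection settings from the environment. As a sketch of how they might be consumed – assuming Spring Data MongoDB and standard property placeholders; this mapping is illustrative, not copied from the project – an application.yml could look like:

spring:
  data:
    mongodb:
      # fall back to the Dockerfile defaults if the env vars aren't set
      host: ${MONGODB_DB_HOST:mongo}
      port: ${MONGODB_DB_PORT:27017}
      database: ${MONGODB_DB_NAME:addressbook}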


MongoDB container:
MongoDB can be run straight from the official image on Docker Hub, using one container for the server and one as a data-only container – see the complete docker-compose file below.
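
If you want to try the server image on its own first, it runs with no extra configuration, for example:

# run the official MongoDB 3.2 image standalone, exposing the default port
docker run -d --name mongo -p 27017:27017 mongo:3.2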

Bringing it all together, here’s the Docker Compose file to orchestrate the containers together:

version: '2'
services:
    mongodata:
        image: mongo:3.2
        volumes:
            - /data/db
        entrypoint: /bin/bash
    mongo:
        image: mongo:3.2
        depends_on: 
            - mongodata
        volumes_from:
            - mongodata
        ports:
        #kh: only specify internal port, not external, so we can scale with docker-compose scale
            - "27017"
    addressbook:
        image: addressbook
        depends_on: 
            - mongo
        environment:
            - MONGODB_DB_NAME=addressbook
        ports:
            - 8080:8080
        links:
            - mongo

At this point, the group of containers can be brought up as a whole with:

docker-compose up

… and brought down with:

docker-compose down

You can individually scale any container with:

docker-compose scale containername=count

… where count is the number of container instances to spin up.
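
For example, to run two instances of the MongoDB service (which is why its ports entry above only specifies the container port – a fixed host:container mapping such as 8080:8080 can only be bound by one container at a time):

docker-compose scale mongo=2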


So what if you want to add in a web frontend as a container too? Easy enough. Here’s an AngularJS frontend, served by nginx:

    web:
        image: docker-web-angularjs
        ports:
            - "80"

Now, if we spin up multiple containers for the REST backend and the nginx frontend, we need a load balancer too, right? Also easy – just add in haproxy:

    lb:
        image: dockercloud/haproxy
        depends_on: 
            - addressbook
        environment:
            - STATS_PORT=1936
            - STATS_AUTH=admin:password
        links:
            - addressbook
            - web
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        ports:
            - 80:80
            - 1936:1936
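
With multiple backend containers running, you can confirm haproxy has registered them all from its stats page, using the credentials set in STATS_AUTH above (the exact stats path can vary with the image version):

curl -u admin:password http://localhost:1936/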

I noticed that the startup order of the containers was not always predictable, and sometimes the Spring Boot container would start before the MongoDB container was up. This can be fixed by adding depends_on elements (note that depends_on controls start order only – it doesn’t wait for the service inside a container to actually be ready). I’m not sure if I really needed to add all the dependencies I did to force a very specific startup order, but this seems to work for me. The order I have in the complete docker-compose.yml is (from first to last):

  • mongodata (data container)
  • mongo
  • addressbook (REST backend)
  • web (AngularJS frontend)
  • haproxy

The complete source for the AddressBook backend is available in this project on GitHub. The deploy-* folders contain the individual Dockerfiles:

https://github.com/kevinhooke/SpringBootRESTAddressBook


JavaOne 2016: Oracle’s ‘refocusing’ of the EE8 proposal

Oracle’s planned Java EE ‘announcement‘ at JavaOne 2016 turned out to be (based on the announcement in the Sunday keynote and the Java EE 8 roadmap session by Linda DeMichiel, spec lead for the EE JSRs) a ‘refocusing’ – a realignment to bring Java EE more in line with current trends, particularly to support deploying EE applications to the cloud and the development of microservices.

So what exactly is going to change? For the new EE8 spec proposal, with finalization now planned for 2017, some features previously proposed for inclusion are now going to be dropped, namely:

  • MVC 1.0
  • JMS 2.1
  • EE Management apis 2.0

Given that MVC 1.0 was a new EE web framework offered as an alternative to JSF (not adding new technology, but building on what we already have, with a Reference Implementation already built out), this is perhaps the most surprising removal. Given the new focus on cloud and microservices, however, it makes sense, as it doesn’t really add anything toward the new goals.

Everything else in the original EE8 JSR 366 is going to remain in the new EE8 proposal.

New features to be added include support for Health Checks, and a new Configuration proposal to standardize external configuration of deployments – both of which will be useful for deployments to the cloud.

Is this ‘refocusing’ enough? Is it going to be finalized soon enough? With an EE8 target of 2017 and EE9 in 2018, is it too little, too late?

Given that you can use current EE features today with frameworks like Spring Boot and Dropwizard to build and deploy microservices, is a standardized cloud/microservice focused EE platform still relevant?

Alternative runtimes like WildFly Swarm and IBM’s Liberty Profile are giving runtime flexibility and allow you to pick just the features your app needs at runtime (and are available today). Rather than an all-in-one EE runtime container, is a ‘pick n mix’ type approach more relevant to support the current popularity of microservices?