Revisiting my spotviz.info webapp: visualizing WSJT-X FT8 spots over time – part 4: It’s back, it’s alive! http://www.spotviz.info

I still have some refactoring to do as I move the app to the cloud, but as a first step I have the original app up and running on WildFly on a VPS. It’s back up and live on the original domain, http://www.spotviz.info

The uploader has been updated to read current 2.x WSJT-X log files; I haven’t uploaded the .jar to GitLab yet, but I’ll do that soon. I have some test log data uploaded under my callsign, KK6DCT.

More changes to come soon.

If you’re catching up, here are my previous posts on getting the project up and running again:

Configuring nginx to proxy REST requests across multiple Spring Boot microservices in Docker Containers

Using nginx to proxy requests across Docker containers is a common use case, and it’s covered in many posts and articles, but getting it working from scratch is a not-so-trivial task. Following several articles, I got confused about whether I needed to load balance requests across identical container instances, or whether I needed nginx to proxy requests across my different containers. I ran into quite a few configuration issues and challenges, mostly, I think, because I didn’t understand what I was trying to do (I ended up with configuration trying to load balance and proxy at the same time, which was not what I needed) 🙂

Articles like this one and this one show how to configure ‘upstream’ servers that you can load balance requests across. My understanding is that this is what you need when you have multiple server or container instances and want nginx to load balance across them. After a while spent trying to get that configuration working, I realised all I actually needed was nginx’s proxy_pass config, telling it to proxy requests for a matching URL to a given Spring Boot service in a container. This question captures the approach well.
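For contrast, the ‘upstream’ load-balancing style of config looks something like the fragment below (the instance names springbootservice1a and springbootservice1b are made up for illustration). This is what you’d want for multiple identical instances of the same service, which wasn’t my situation:

# fragment of the http context: load balancing across identical
# instances of one service (NOT what I needed here)
upstream service1cluster {
    server springbootservice1a:8080;
    server springbootservice1b:8080;
}

server {
    listen 80;
    location /service1 {
        proxy_pass http://service1cluster;
    }
}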

To explore a typical configuration, I have 2 simple Spring Boot REST services, springbootservice1 and springbootservice2, each running in its own Docker container. When run in individual containers, they are accessed via:

http://localhost:8080/service1/example1

and

http://localhost:8080/service2/example2
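Each service is just a trivial REST endpoint. For reference, here’s a minimal sketch of what Service1 might look like (the class and method names are my own illustration, not from the actual project):

// Hypothetical sketch of Service1 (Spring Boot 1.4.x era, Java 8);
// only the endpoint path matters for the nginx config that follows
package example.service1;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
@RequestMapping("/service1")
public class Service1Application {

    // GET /service1/example1: a canned response is enough to
    // verify the proxying end to end
    @GetMapping("/example1")
    public String example1() {
        return "response from service1";
    }

    public static void main(String[] args) {
        SpringApplication.run(Service1Application.class, args);
    }
}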

Without additional configuration to run them on different ports, as standalone Spring Boot services they obviously can’t both run on the same host at the same time. Assigning different ports would be trivial with just two services, but once you scale this approach up to many services, managing the ports with manual configuration is not particularly practical.

Out of the box, Docker lets you manage the exposed external ports easily when you ‘docker run’ a container with the ‘-p’ option. What I was interested in was a solution that runs the services in different containers without defining and managing ports by hand, and that lets a web app call any of the services on port 80, so an app consuming these services has no need to know which ports they are actually running on.
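To make that concrete, running both services side by side manually would mean mapping each container to a distinct host port, something like this (the host ports here are arbitrary):

# manual '-p' host:container port mapping per container:
# workable for two services, unmanageable for many
docker run -d -p 8080:8080 springbootservice1
docker run -d -p 8081:8080 springbootservice2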

The Dockerfile for each looks like this (replace Service1 with Service2 for the second service):

#Official JDK8 Alpine-based image
FROM java:openjdk-8-alpine
ADD target/SpringBootService1-0.0.1-SNAPSHOT.jar /opt/SpringBootService1-0.0.1-SNAPSHOT.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/opt/SpringBootService1-0.0.1-SNAPSHOT.jar"]

The Dockerfile for nginx looks like this:

FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY conf /etc/nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

And here’s my config for nginx:

events { worker_connections 1024; }

http {
    server {
        listen 80;

        location /service1 {
            proxy_pass http://springbootservice1:8080/service1/;
        }

        location /service2 {
            proxy_pass http://springbootservice2:8080/service2/;
        }
    }
}

The key part of this config is the proxy_pass directive:

location /service1 { 
  proxy_pass http://springbootservice1:8080/service1/;
}

This takes incoming requests with a URI matching /service1 and proxies them to http://springbootservice1:8080/service1/ – in this case springbootservice1 is the hostname of the container running Service1, which is its default name from the docker-compose config we’ll cover next. Note that the presence or absence of trailing ‘/’s here is significant: it’s what makes nginx match any incoming URI with this prefix and forward it with the remainder appended to the end of the target URI on the container service. Here’s the docker-compose.yml:

version: '3'
 
services:
     nginx-lb:
         build: ../nginx-loadbalancer
         #image: nginx-lb
         ports:
             - "80:80"
         links:
             - springbootservice1
             - springbootservice2
         depends_on:
             - springbootservice1
             - springbootservice2
     springbootservice1:
         image: springbootservice1
         ports:
             - "8080"
     springbootservice2:
         image: springbootservice2
         ports:
             - "8080"

To bring up all the containers in one go, ‘docker-compose up’ starts all 3, and nginx now handles requests on port 80 and proxies to the correct container based on the path. This approach could easily be scaled up to include more containers, but with the configuration hardcoded in both the nginx.conf file and the docker-compose file, at some point a better solution would be some kind of dynamic discovery of the containers as they become available. I’ll be investigating options for that kind of approach next.
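With everything up, a quick sanity check that the proxying works as intended; both services answer through nginx on port 80, and the consuming app never needs to know which ports the services are actually on:

docker-compose up -d

# both requests are proxied by nginx on port 80 to the right container
curl http://localhost/service1/example1
curl http://localhost/service2/example2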

Building a multi-container Spring Boot and MongoDB webapp with Docker 1.12.x – part 2

In the first part of this article, I showed how I split the frontend, backend and database into their own containers, and how each could be individually scaled using docker-compose.

If you’re already familiar with docker-compose and using haproxy for load balancing against containers, you might have noticed a limitation in my approach: the backend REST service was exposing its port 8080 externally, so I don’t think HTTP requests from the frontend browser were ever passing through haproxy to be load balanced; only the requests to load the frontend on port 80 were.

I looked into how I could configure haproxy with multiple backends listening on different ports, but eventually came to the conclusion that adding two haproxy containers, one load balancing for port 80 and one for port 8080, was easy to do.

I’m not sure if this is the best way to approach this, but it certainly works. Leave me a comment if you have any suggestions.

Here’s my final docker-compose.yml:

version: '2'
  
services:
    mongodata:
        image: mongo:3.2
        volumes:
        - /data/db
        entrypoint: /bin/bash
    mongo:
        image: mongo:3.2
        depends_on: 
            - mongodata
        volumes_from:
            - mongodata
        ports:
            - "27017"
    addressbook:
        image: addressbook
        depends_on: 
            - mongo
        environment:
            - MONGODB_DB_NAME=addressbook
        ports:
            - "8080"
        links:
            - mongo
    web:
        image: docker-web-angularjs
        ports:
            - "80"
    lb-web:
        image: dockercloud/haproxy
        depends_on: 
            - web
        environment:
            - STATS_PORT=1936
            - STATS_AUTH="admin:your-password"
        links:
            - web
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        ports:
            - 80:80
            - 1936:1936
    lb-addressbook:
        image: dockercloud/haproxy
        depends_on: 
            - addressbook
        environment:
            - STATS_PORT=1937
            - STATS_AUTH="admin:your-password"
        links:
            - addressbook
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        ports:
            - 8080:80
            - 1937:1937
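Since part 1 was about scaling each tier independently, it’s worth noting how that works here. At the time of Docker 1.12.x the command was docker-compose scale, and the dockercloud/haproxy containers should pick up the new instances through the mounted /var/run/docker.sock:

docker-compose up -d

# scale the frontend and backend independently; each haproxy
# discovers the new containers via the mounted docker.sock
docker-compose scale web=3 addressbook=2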

JavaOne 2016: Oracle’s ‘refocusing’ of the EE8 proposal

Oracle’s planned Java EE ‘announcement‘ at JavaOne 2016 turned out to be (based on the announcement in the Sunday keynote and the Java EE 8 roadmap session by Linda DeMichiel, spec lead for the EE JSRs) a ‘refocusing’: a realignment to bring Java EE more in line with current trends, particularly to support deploying EE applications to the cloud and developing microservices.

So what exactly is going to change? For the new EE8 spec proposal, with finalization now planned for 2017, some features previously proposed for inclusion are going to be dropped, namely:

  • MVC 1.0
  • JMS 2.1
  • EE Management apis 2.0

Given that MVC 1.0 was a new EE web framework positioned as an alternative to JSF (not adding new technology, but built on top of what we already have, with a Reference Implementation already built out), this is perhaps the most surprising removal. Given the new focus on cloud and microservices, however, it makes sense, as MVC doesn’t really add anything towards supporting the new goals.

Everything else in the original EE8 JSR 366 is going to remain in the new EE8 proposal.

New features to be added include support for Health Checks, and a new Configuration proposal to standardize external configuration of deployments – both of which will be useful for supporting deployments to the cloud.

Is this ‘refocusing’ enough? Is it going to be finalized soon enough? For an EE8 target of 2017, and EE9 of 2018, is it too little, too late?

Given that you can use current EE features today with frameworks like Spring Boot and Dropwizard to build and deploy microservices, is a standardized cloud/microservice focused EE platform still relevant?

Alternative runtimes like WildFly Swarm and IBM’s Liberty Profile offer runtime flexibility, letting you pick just the features your app needs (and they’re available today). Rather than an all-in-one EE runtime container, is a ‘pick n mix’ approach more relevant given the current popularity of microservices?