Revisiting my spotviz.info webapp: visualizing WSJT-X FT8 spots over time

A few years back I built an AngularJS webapp for visualizing JT65/JT9 spots over a period of time, logged as you run the WSJT-X app to decode the FT8 signals received at your station. At the time I had it deployed on RedHat’s OpenShift, running in a couple of ‘gears’: one for JBoss, hosting a REST API backend plus a queue and MDB for processing uploaded spots, and one for a MongoDB database to store the collected spot info. Unfortunately the ‘bronze’ basic plan, which was an incredibly good deal for hosting apps at the time (at around $1 a month if I remember right), was discontinued, and the replacement plans were several times the cost, so I shut the app down and didn’t redeploy it after that.

At some point though I was planning on taking another look at deploying it elsewhere, and if I’m going to pick it up again, I might as well take a look at refreshing the architecture too. Here’s what the original v1 deployment looked like:

For personal projects I typically pick some API or technology I want to get more familiar with. I remember at the time I needed to refresh myself on JAX-WS SOAP-based webservices, so the client that monitors the WSJT-X log file and uploads spots to the server side is a generated JAX-WS client for a webservice deployed in front of a queue; the webservice receives the messages sent from the client and adds them to the queue for processing. If I refresh this part there’s no real need for anything as heavy as JAX-WS; a simple REST API would do, so that’s probably how I’ll update it.
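For reference, here’s a rough sketch of what that simpler upload endpoint could look like as a JAX-RS resource; the class, field, and path names are hypothetical and just for illustration, not the actual spotviz.info data model:

import java.util.List;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical sketch of a REST upload endpoint that could replace the
// generated JAX-WS client/webservice pair: the WSJT-X log monitor would
// POST batches of decoded spots as JSON instead of sending SOAP messages.
@Path("/spots")
public class SpotUploadResource {

    public static class Spot {
        public String callsign;  // station heard in the decode
        public String grid;      // reported grid square
        public int snr;          // signal-to-noise ratio of the decode
        public long timestamp;   // decode time, epoch millis
    }

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response uploadSpots(List<Spot> spots) {
        // hand the batch off to whatever does the processing
        // (the existing queue/MDB today, or Kafka after the refresh)
        return Response.accepted().build();
    }
}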

I’m interested in how something like Apache Kafka can be used to process high volumes of incoming data, so this might be a good replacement for the server-side queue and MDB.
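To give an idea of how the queue-and-MDB combination could be swapped out, here’s a minimal sketch of a Kafka producer publishing each uploaded spot to a topic; the topic name, broker address, and JSON payload are assumptions for illustration only:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Illustrative sketch: publish each uploaded spot to a Kafka topic instead
// of putting it on a JMS queue for an MDB to pick up.
public class SpotProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // key on the reporting callsign so spots from one station stay ordered
            String spotJson = "{\"callsign\":\"K1ABC\",\"grid\":\"FN42\",\"snr\":-12}";
            producer.send(new ProducerRecord<>("wsjtx-spots", "K1ABC", spotJson));
        }
    }
}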

I remember putting a lot of time into building the animated map display of received spots in the AngularJS app. This was back in 2015 I think, so it could probably do with a refresh to at least the latest/current Angular version, which would be a considerable rewrite. I’ll take a look.

Anyway, I haven’t run any part of this even locally in development for years, so that will be my first step: get it up and running, and then start incrementally updating some of the parts.

This project is related to my recent experiments with getting WSJT-X and an SDRPlay RSP2 running on a Raspberry Pi as a low-cost FT8 monitoring station. Now that that part is up and running, it’s time to work on the software side again.

Error starting OpenShift Origin on CentOS 7: systemd cgroup driver vs cgroupfs driver

Following the instructions to install the OpenShift Origin binary from here, on the first attempt to start it up I got this error:

failed to run Kubelet: failed to create kubelet: 
misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

Per the instructions in this issue ticket, to verify which cgroup driver Docker is using I ran:

$ sudo docker info |grep -i cgroup

Cgroup Driver: cgroupfs

Unfortunately the steps to check the cgroup driver for Kubernetes don’t match my install; I’m guessing the single-binary OpenShift Origin has it all packaged in one, so there is no corresponding systemd config for the kubelet.

This article suggests configuring the cgroup driver for Docker so it matches Kubernetes, but it looks like the yum install of docker-ce doesn’t set up systemd unit files for it either.

Ok, to the docs. The Docker docs for configuring systemd here suggest pulling the preconfigured unit files from a git repo and placing them in /etc/systemd/system.

Now that I have the systemd files for Docker in place, this article says to add this arg to the end of the ExecStart line in docker.service:

--exec-opt native.cgroupdriver=systemd
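
With that flag added, the ExecStart line in /etc/systemd/system/docker.service ends up looking something like this (the rest of the line may differ depending on the unit file you started from):

ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd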

Now reload my config and restart the docker service:

sudo systemctl daemon-reload
sudo systemctl restart docker

and let’s check again which cgroup driver we’re using:

$ sudo docker info |grep -i cgroup

Cgroup Driver: systemd

… and now we’ve switched to systemd.

Ok, starting up OpenShift again, this issue is resolved and there’s a lot of log output as the server starts up. After opening up the firewall port for 8443, my OpenShift console is now up!
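
For reference, opening that port with firewalld on CentOS 7 looks something like this (assuming the default zone is the one in use):

sudo firewall-cmd --permanent --add-port=8443/tcp
sudo firewall-cmd --reload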

Getting past Vagrant’s “Authentication failure” error when starting up OpenShift Origin

For getting up and running quickly with OpenShift Origin, RedHat have an all-in-one VM image you can provision with Vagrant. The instructions say not to use Vagrant 1.8.5 as there’s an issue with the SSH setup. Since I already had 1.8.5 installed for some other projects, I tried anyway, and ran into exactly that issue SSH’ing into the VM with SSH keys.

When provisioning the VM, you’ll see:

Kevins-MacBook-Pro:openshift-origin kev$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'openshift/origin-all-in-one' is up to date...
==> default: Resuming suspended VM...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Authentication failure. Retrying...
    default: Warning: Authentication failure. Retrying...

There are a number of posts discussing this issue and a few workarounds, for example here and here.

The suggestions relate to switching from SSH key authentication to userid/password by adding this to your Vagrantfile:

config.ssh.username = "vagrant"
config.ssh.password = "vagrant"

I tried this, and when running vagrant up I had different errors about “SSH authentication failed”. Next I tried adding this recommendation:

config.ssh.insert_key = false

This didn’t make any difference initially. After a vagrant destroy, bringing it up again hit the same issue at first; I Ctrl-C’d out, tried again, and it worked the second time. I’m not sure exactly which step got past the SSH key issue, but at this point I was up and running. There’s a long discussion in both of the threads linked above describing the cause of the issue, so if you’re interested take a look through those.
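
Pulling the pieces together, the relevant part of the Vagrantfile I ended up with looks roughly like this (the box name comes from the all-in-one instructions; treat this as a sketch of the settings discussed above rather than a guaranteed fix):

Vagrant.configure("2") do |config|
  # the all-in-one box from the OpenShift Origin instructions
  config.vm.box = "openshift/origin-all-in-one"
  # fall back to userid/password auth and keep Vagrant's insecure default key
  config.ssh.username = "vagrant"
  config.ssh.password = "vagrant"
  config.ssh.insert_key = false
end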

Installing an SSL certificate on OpenShift Online

SSL certificates are relatively inexpensive, but there are a number of organizations starting to offer certs for free; Let’s Encrypt is one. Their approach requires a script to renew your cert every 90 days, and in some hosted environments it might not be possible to run such a script.
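
For reference, on a host where you do have shell access, that renewal is typically just a cron job calling the Let’s Encrypt client, something along these lines (assuming the certbot client is installed):

# check weekly and renew any certs close to expiry
0 3 * * 1 certbot renew --quiet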

For OpenShift-hosted apps, you can both assign your own domain name to an application and import an SSL cert; see the instructions here. Since it’s currently not possible to run a renewal script like the one Let’s Encrypt uses (see the SO post here), certs from other organizations are easier to import. StartCom is offering free SSL certs for 1 year, after which presumably you renew for another year.

Depending on what you are hosting, you may need to find and replace any hardcoded references to content loaded via http instead of https (to avoid ‘mixed content’ warnings in your browser); a quick recursive grep like the one below will find them. Once you’ve done this, you get a shiny new green SSL padlock on your site!
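
For example (the path here is just an illustration; point it at wherever your static content lives):

grep -rn "http://" src/main/webapp/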