Revisiting my spotviz.info webapp: visualizing WSJT-X FT8 spots over time – part 6: Redesigning to take advantage of the Cloud

Following on from Part 1 and subsequent posts, I now have the app deployed locally on WildFly 17, up and running, and also redeployed to a small 1 CPU, 1 GB VPS: http://www.spotviz.info. At this point I’m starting to think about how I’m going to redesign the system to take advantage of the cloud.

Here are my re-design and deployment goals:

  • monthly runtime costs should be low since this is a hobby project; less than $5 a month is my goal
  • take advantage of AWS services as much as possible, but only where use of those services still meets my monthly cost goal
  • favor AWS free tier options where they make sense and help keep costs down

Here’s a refresher on my diagram showing how the project was previously structured and deployed:

As of September 2019, the original app is now redeployed, again as a single monolithic .war, to WildFly 17 running on a single VPS. MongoDB is also running on the same VPS. The web app is up at: http://www.spotviz.info

There are many options for how I could redesign and rebuild parts of this to take advantage of the cloud. Here are the various parts that could be redesigned and/or split into separate deployments:

  • WSJT-X log file parser and uploader client app (the only part that probably won’t change, other than being updated to support the latest WSJT-X log file format)
  • Front end webapp: AngularJS static website assets
  • JAX-WS endpoint for uploading spots for processing
  • MDB for processing the upload queue
  • HamQTH API web service client for looking up callsign info
  • MongoDB for storing parsed spots, callsigns, locations
  • REST API used by the AngularJS frontend app for querying spot data

Here are a number of options that I’m going to investigate:

Option 1: redeploy the whole .war unchanged, as previously deployed to OpenShift back in 2015, to a VM somewhere in the cloud. The cheapest option would be a VPS. AWS Lightsail VPS plans are still not as cheap as VPS deals you can get elsewhere (check LowEndBox for deals), and AWS EC2 instances running 24×7 are more expensive (still cheap, but not as cheap as VPS deals)

Update September 2019: COMPLETE: original app is now deployed and up and running

Option 2: using AWS services. If I split the app into individual parts, I can incrementally take on one or more of these options (a rough sketch of what this could look like follows the list):

  • Route 53 for DNS (September 2019: COMPLETE!)
  • Serve AngularJS static content from AWS S3 (next easiest change) (December 2019: COMPLETE!)
  • AWS API Gateway for the log file upload endpoint and REST APIs for data lookups
  • AWS Lambdas for handling uploads and the REST APIs
  • Rely on on-demand scaling of Lambdas to handle upload and parsing requests, removing the need for the queue
  • Refactor data store from MongoDB to DynamoDB
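
As a rough sketch of where Option 2 could end up, here’s a hypothetical AWS SAM template fragment wiring an API Gateway endpoint to a Lambda for the spot query REST API, backed by a DynamoDB table. The handler class, table name and key are placeholders of my own, not the actual implementation:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Sketch only - spot query API as API Gateway + Lambda + DynamoDB

Resources:
  # DynamoDB table standing in for the MongoDB spots collection (table/key names are placeholders)
  SpotsTable:
    Type: AWS::Serverless::SimpleTable
    Properties:
      TableName: spots
      PrimaryKey:
        Name: spotter
        Type: String

  # Lambda fronted by API Gateway, covering the REST API used by the AngularJS frontend
  QuerySpotsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: org.spotviz.QuerySpotsHandler::handleRequest   # placeholder class name
      Runtime: java8
      MemorySize: 256
      Timeout: 30
      Environment:
        Variables:
          SPOTS_TABLE: !Ref SpotsTable
      Policies:
        - DynamoDBReadPolicy:
            TableName: !Ref SpotsTable
      Events:
        QuerySpots:
          Type: Api
          Properties:
            Path: /spots
            Method: get

The AngularJS static content would then just be an S3 bucket with static website hosting enabled, served under the same domain.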

Option 3: Other variations:

  • Replace the WildFly queue with AWS SQS (see the sketch after this list)
  • Replace the queue approach with a stream processing approach, using either AWS Kinesis or AWS MSK
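
Here’s a similarly hypothetical SAM fragment for the SQS variation, where a Lambda triggered from an SQS queue takes over from the MDB that currently consumes the WildFly upload queue (queue and handler names are placeholders):

Resources:
  # Queue standing in for the WildFly JMS upload queue (name is a placeholder)
  SpotUploadQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: spot-upload-queue

  # Lambda replacing the MDB, invoked as messages arrive on the queue
  ProcessUploadFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: org.spotviz.ProcessUploadHandler::handleRequest   # placeholder class name
      Runtime: java8
      Timeout: 60
      Events:
        UploadMessages:
          Type: SQS
          Properties:
            Queue: !GetAtt SpotUploadQueue.Arn
            BatchSize: 10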

More updates coming later.

Moving GoDaddy DNS to AWS Route 53

Before changing the DNS settings on GoDaddy, set up Route 53 to manage DNS for your domain, because you’ll need the AWS DNS server names when updating the GoDaddy DNS config.

In the AWS Console, go to Route 53, then Hosted Zones, and press the ‘Create Hosted Zone’ button:

Enter your domain name, select ‘Public Hosted Zone’ and press ‘Create’:

At this point Route 53 will have assigned a list of DNS nameservers for your domain – make a note of these for when you update the GoDaddy config.

Next, press ‘Create Record Set’, select record Type A, enter the name of your subdomain, e.g. www, and enter the IP address of the server that this name should resolve to:

Now let’s update the GoDaddy config. By default, here are the DNS entries managed by GoDaddy when you register a domain with them:

First step: switch away from the default GoDaddy nameservers. Do this from the ‘Change’ button:

Change the type to Custom:

Then enter the DNS server names from Route 53 and press ‘Save’. You should now be done. Assuming the setup on Route 53 has propagated, hit your domain name in a browser and hopefully you’re up and running.
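
For reference, the same hosted zone and A record can also be captured as code. Here’s a rough CloudFormation equivalent of the console steps above (the IP address is a placeholder):

Resources:
  # Public hosted zone for the domain
  SpotvizHostedZone:
    Type: AWS::Route53::HostedZone
    Properties:
      Name: spotviz.info

  # A record pointing www at the VPS (placeholder IP address)
  WwwARecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref SpotvizHostedZone
      Name: www.spotviz.info
      Type: A
      TTL: '300'
      ResourceRecords:
        - 192.0.2.10

Once the GoDaddy nameservers are switched over, Route 53 is authoritative for the whole domain, so any future records only need to be added on the AWS side.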

Next steps: From NGINX WordPress and MySQL running on Docker, to Kubernetes

This blog has been running in Docker containers on a small-ish 4GB VPS for the past 9 months, pretty much issue free. You can follow my journey to migrate this site to Docker in posts here and here.

Since I’ve been spending time recently getting up to speed with Kubernetes, the next logical step would be to deploy to a Kubernetes cluster, which would give me an opportunity to find out what it takes to run Kubernetes. I’ve looked at managed offerings on Google and AWS, but the cost for a few small personal projects is a little more than I want to spend.

I have an 8GB VPS ready to go, so far installed with Docker and Kubernetes running as a single-node cluster, and I’m starting to plan my strategy for migrating to this new server. The first thing I’ve been thinking about is whether I should take my existing Docker images and just deploy them to Kubernetes as is. Where I’ve got stuck so far is that I don’t know enough about how to run NGINX with WordPress and MySQL across multiple pods, so I think I might install the WordPress chart using Helm and, for the time being, not worry about how to do this myself.
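
Even if I end up using the Helm chart, deploying one of my existing images as is only needs a Deployment and a Service. Here’s a minimal sketch, with the image name and labels as placeholders for my existing WordPress/NGINX image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress-nginx
  template:
    metadata:
      labels:
        app: wordpress-nginx
    spec:
      containers:
        # existing image from the current Docker deployment (placeholder name)
        - name: wordpress-nginx
          image: registry.example.com/wordpress-nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-nginx
spec:
  selector:
    app: wordpress-nginx
  ports:
    - port: 80
      targetPort: 80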

The next thing I’ve been looking at is how to configure an Ingress to access different deployed services via different URLs. I’m planning to set up Traefik as the Ingress Controller to do this, and will be sharing a post about that config shortly. What I’m interested in is being able to deploy a number of different projects, including my WordPress site, and have them accessed via different URLs, and it looks like Traefik will handle this fine.
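
As a rough idea of the kind of routing I’m after, here’s a sketch of a Kubernetes Ingress that Traefik would pick up, sending different host names to different Services (the hosts here are placeholders, and this assumes Traefik is already installed as the Ingress Controller):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sites-ingress
  annotations:
    # tell Traefik, rather than any other controller, to handle this Ingress
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    # route the blog hostname to the WordPress/NGINX service
    - host: blog.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: wordpress-nginx
              servicePort: 80
    # route another project's hostname to its own service
    - host: spotviz.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: spotviz-web
              servicePort: 80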

I plan on writing some further posts as I make this transition over the next couple of weeks.

Building and deploying Docker containers using GitLab CI Pipelines

As part of migrating this blog to Docker containers to move to a different VPS provider (here, here and here), I found myself repeating a number of steps manually, which is always a good indication that there’s an opportunity to automate some or all of those steps.

Each time I changed the configuration or the content to be deployed, I found myself rebuilding the Docker image, running it locally, pushing it to my test server, and eventually pushing it to my prod VPS and running it there.

I’m using a locally running GitLab instance for my version control, so using its build pipeline features was a natural next step. I talked about setting up a GitLab runner previously here – this is what performs the work for your pipeline.

You configure your pipeline with a .gitlab-ci.yml file in the root of your repo. I defined 2 stages, build and deploy:

stages:
  - build
  - deploy

For my build stage, I have a single task which is to build my images using my docker-compose.yml:

build:
  stage: build
  script:
    - docker-compose build
  tags:
    - docker-test

For my deploy steps, I defined one for deploying to my test server and one for deploying to my production VPS. This first one deploys to my locally running test Docker server. It changes DOCKER_HOST to point to the test server, then uses the same docker-compose.yml to bring down the running containers and bring up new containers with the updated images:

deploy-containers:
  stage: deploy
  script:
    # point the Docker CLI at the test server's Docker daemon
    - export DOCKER_HOST=tcp://192.x.x.x:2375
    # stop the running containers, then start new ones from the updated images
    - docker-compose down
    - docker-compose up -d
  tags:
    - docker-test

And one for my deploy to production. Note that this step is defined with ‘when: manual’, which tells GitLab the task is only run manually (when you click on the ‘>’ run icon):

prod-deploy-containers:
  stage: deploy
  script:
    - pwd && ls -l
    # scripts that bring the prod containers down and back up with the new images
    - ./docker-compose-vps-down.sh
    - ./docker-compose-vps-up.sh
  when: manual
  tags:
    - docker-prod

Here’s what the complete pipeline looks like in GitLab:

With this in place, any changes committed to the repo now result in a new image being built and pushed to my test server automatically, and when I’ve finished testing the changes I can optionally deploy them to my prod VPS hosted server.