Moving GoDaddy DNS to AWS Route 53

Before changing the DNS settings on GoDaddy, set up Route 53 to manage DNS for your domain, because you’ll need the AWS nameserver names when updating the GoDaddy DNS config.

In the AWS Console, go to Route 53, then Hosted Zones, and press the ‘Create Hosted Zone’ button:

Enter your domain name, select ‘Public Hosted Zone’ and press ‘Create’:

At this point Route 53 will have assigned a list of DNS nameservers for your domain – make a note of this list for when you update the GoDaddy config.

Next, press ‘Create Record Set’, select record type A, enter the name of your subdomain (e.g. www), and enter the IP address of your server that this name should resolve to:
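If you’d rather script these steps than click through the console, here’s a minimal sketch using boto3 (the AWS SDK for Python) that creates the hosted zone and the A record. The domain name and IP address are placeholders, so substitute your own values:

```python
import uuid
import boto3

route53 = boto3.client("route53")

# Create a public hosted zone for the domain (CallerReference must be unique)
zone = route53.create_hosted_zone(
    Name="example.com",               # placeholder domain
    CallerReference=str(uuid.uuid4()),
)

# The nameservers Route 53 assigned -- these go into the GoDaddy config later
print(zone["DelegationSet"]["NameServers"])

# Add an A record for the www subdomain pointing at the server's IP
route53.change_resource_record_sets(
    HostedZoneId=zone["HostedZone"]["Id"],
    ChangeBatch={
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],  # placeholder IP
            },
        }]
    },
)
```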

Now let’s update the GoDaddy config. By default, here are the DNS entries managed by GoDaddy when you register a domain with them:

The first step is to stop using the GoDaddy nameservers. Do this from the ‘Change’ button:

Change the type to Custom:

Then enter the DNS server names from Route 53 and press ‘Save’. You should now be done. Once the Route 53 setup has propagated, hit your domain name in a browser and hopefully you’re up and running.
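If you want to check propagation without a browser, a quick sketch along these lines works; the domain and expected IP are placeholders:

```python
import socket

# Placeholder domain and expected server IP -- substitute your own values
DOMAIN = "www.example.com"
EXPECTED_IP = "203.0.113.10"

resolved = socket.gethostbyname(DOMAIN)
if resolved == EXPECTED_IP:
    print(f"{DOMAIN} resolves to {resolved} -- the nameserver change has propagated")
else:
    print(f"{DOMAIN} still resolves to {resolved} -- wait for DNS propagation")
```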

Revisiting my spotviz.info webapp: visualizing WSJT-X FT8 spots over time

A few years back I built an AngularJS webapp for visualizing JT65/JT9 spots over a period of time, logged as you run the WSJT-X app to decode FT8 signals received at your station. At the time I had it deployed on Red Hat’s OpenShift, running in a couple of ‘gears’: one running JBoss, hosting a REST API backend plus a queue and MDB for processing uploaded spots, and one running a MongoDB database to store the collected spot info. Unfortunately the ‘bronze’ basic plan, which was an incredibly good deal for hosting apps at the time (around $1 a month if I remember right), was discontinued, and the replacement plans cost several times as much, so I shut the app down and didn’t redeploy it after that.

At some point though I was planning on taking another look at deploying it elsewhere, and if I’m going to pick it up again, I might as well refresh the architecture too. Here’s what the original v1 deployment looked like:

I typically build personal projects using some API or technology I want to get more familiar with. At the time I needed to refresh myself on JAX-WS SOAP-based webservices, so the client that monitors the WSJT-X log file and uploads spots to the server side is a generated JAX-WS client talking to a webservice deployed in front of a queue; the webservice receives the messages sent from the client and adds them to the queue for processing. There’s no real need for this part to be as heavy as JAX-WS; it could be a simple REST API instead, so that’s probably how I’ll update it.
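As a rough idea of how lightweight the replacement could be, here’s a sketch of what a simple REST upload client might look like in Python using requests. The endpoint URL and spot fields are hypothetical and would only be defined when I rework the backend:

```python
import requests

# Hypothetical REST endpoint replacing the JAX-WS webservice
UPLOAD_URL = "https://spotviz.info/api/spots"

def upload_spots(spots):
    """POST a batch of decoded spots parsed from the WSJT-X log file."""
    response = requests.post(UPLOAD_URL, json={"spots": spots}, timeout=10)
    response.raise_for_status()
    return response.status_code

# Example spot record -- field names are illustrative only
upload_spots([{
    "timestamp": "2019-01-01T00:00:00Z",
    "frequency": 14074000,
    "callsign": "K1ABC",
    "snr": -12,
}])
```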

I’m interested in how something like Apache Kafka can be used to process high volumes of incoming data, so this might be a good replacement for the server-side queue and MDB.
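For a sense of what that might look like, here’s a minimal producer sketch using the kafka-python package; the broker address, topic name, and message fields are placeholders:

```python
import json
from kafka import KafkaProducer

# Minimal producer sketch -- broker address and topic name are placeholders
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Each uploaded spot becomes a message on the topic; a consumer (taking the
# place of the MDB) would read from the same topic and write to MongoDB
producer.send("wsjtx-spots", {"callsign": "K1ABC", "snr": -12})
producer.flush()
```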

I remember putting a lot of time into building the animated map display of received spots in the AngularJS app. This was back in 2015 I think, so it could probably do with an update to at least the current Angular version, which would be a considerable rewrite. I’ll take a look.

Anyway, I haven’t run any part of this even locally in development for years, so my first step will be to get it up and running again, and then start incrementally updating some of the parts.

This project is related to my recent experiments getting WSJT-X and an SDRplay RSP2 running on a Raspberry Pi as a low-cost FT8 monitoring station. Now that part is up and running, it’s time to work on the software side again.

Migrating an existing WordPress + nginx + php5-fpm + mysql website to Docker containers: lessons learned

I’ve covered in previous posts why I wanted to Dockerize my site and move it to containers; you can read about it in my other posts shared here. Having played with Docker on personal projects for several months at this point, I thought the migration was going to be easy, but I ran into several issues and unexpected decisions along the way. In this post I’ll summarize a few of these issues and lessons learned.

Realizing the meaning of ‘containers are ephemeral’, or ‘where do I put my application data’?

Docker images are the blueprint for a container, while a container is a running instance of an image. It’s clear from the Docker docs and elsewhere that you should treat your containers as ‘ephemeral’: they only exist while they’re running, their state is temporary, and once they are discarded that state is lost.

This is an easy concept to grasp at a high level, but in practice it leads to important and valid questions, like ‘so where does my data go?’ This became very apparent to me when transferring my existing WordPress data. First, I have data in MySQL tables that needs to be imported into the new MySQL server running in a container. Second, where does wordpress/wp-content go, which in my case contains nearly 500MB of uploaded images from my 2,000+ posts?

The MySQL data was easy to address, as the official MySQL Docker image is already set up to use Docker’s data volume feature by default, keeping your MySQL data files outside the running container.
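For example, using the Docker SDK for Python (equivalent to the corresponding docker run flags), running the MySQL image with a named volume might look like the sketch below; the password and volume name are placeholders:

```python
import docker

client = docker.from_env()

# Run the official MySQL image with a named volume mounted at /var/lib/mysql,
# so the database files live outside the container and survive its removal.
# Container name, password, and volume name are placeholders.
client.containers.run(
    "mysql:5.7",
    name="blog-mysql",
    detach=True,
    environment={"MYSQL_ROOT_PASSWORD": "changeme"},
    volumes={"mysql-data": {"bind": "/var/lib/mysql", "mode": "rw"}},
)
```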

The question of where to put my WordPress wp-content, with its 500MB of uploaded files, is what caused my aha moment with data volumes. Naively, you can create an image and use the COPY instruction to copy any number of files into it, even 500MB of images, but as soon as you start moving that image around, pushing it to a repository or a remote server, you realize you’ve created something impractical. Make an incremental change to an image containing that many files and you’ll find you can’t push it anywhere quickly.

To address this, I created an image with nginx and php5-fpm installed, but used a Docker bind mount to load my static content from outside the container.
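A sketch of the equivalent, again with the Docker SDK for Python, with the image name and host path as placeholders, mounting wp-content from the host:

```python
import docker

client = docker.from_env()

# Run the nginx + php5-fpm image with wp-content bind-mounted from the host,
# so the ~500MB of uploads stays out of the image itself. Image name, host
# path, and container path are placeholders.
client.containers.run(
    "myblog/nginx-php:latest",
    name="blog-web",
    detach=True,
    ports={"80/tcp": 80},
    volumes={"/data/wp-content": {"bind": "/var/www/html/wp-content", "mode": "rw"}},
)
```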

Now I have my app in containers, how do I actually deploy to different servers?

Up until this point I’d built and run containers locally, and I’d set up a local Docker registry to push images to for testing, but the main things I wanted this migration to enable were:

  • building and testing the containers locally
  • testing deployment to a VM server with an identical setup to my production KVM hosted server
  • pushing to my production server when I was ready to deploy to my live site

Before the native Docker installations for Windows and macOS, I thought docker-machine was just a way to deploy to a locally running Docker install in a VM. It never occurred to me that you can also use the docker-machine command to act on any remote Docker install.

It turns out that even just setting the env var DOCKER_HOST to point to the IP of a remote Docker server lets you direct commands to that remote server. I believe part of the ‘docker-machine create’ setup helps automate setting up the TLS certs for communicating with your remote server, but you can also do this manually following the steps here. I took the manual approach because I wanted to use the certs from my dev machine as well as my GitLab build machine.
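As a sketch, the Docker SDK for Python picks up the same env vars as the docker CLI, so pointing at a remote daemon over TLS looks something like this; the host IP and cert path are placeholders:

```python
import os
import docker

# Point the client at a remote Docker daemon over TLS -- host, port, and cert
# path are placeholders. The same env vars work for the docker CLI itself.
os.environ["DOCKER_HOST"] = "tcp://203.0.113.20:2376"
os.environ["DOCKER_TLS_VERIFY"] = "1"
os.environ["DOCKER_CERT_PATH"] = "/home/me/.docker/remote-certs"

client = docker.from_env()
print(client.info()["Name"])  # confirms we're talking to the remote daemon
```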

I used this approach to build my images locally, and then set up a CI pipeline so that on committing my Dockerfile and source changes to my GitLab repo, the same commands run and push automatically to a locally running test VM server, with a manual step to push to my production server.

I’ll cover my GitLab CI Pipeline setup in an upcoming post.

How do you monitor an application running in containers?

I’ve been looking at a number of approaches. Prometheus looks like a great option, and I’ve been setting it up on my test server to take a look. I’m still evaluating a few related options, possibly using Grafana to visualize the metrics. I’ll cover this in a future post too.
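As a taste of what instrumenting an app for Prometheus involves, here’s a minimal sketch using the official prometheus_client Python package; the metric name and port are placeholders:

```python
from prometheus_client import Counter, start_http_server

# A simple counter metric -- the name and help text are placeholders
REQUESTS = Counter("blog_requests_total", "Requests handled by the app")

# Expose a /metrics endpoint on port 8000 for Prometheus to scrape
start_http_server(8000)

def handle_request():
    # Increment the counter each time the app handles a request
    REQUESTS.inc()
```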

Migration to new VPS running my blog in Docker containers now complete!

After many more hours than I expected or planned, I’ve migrated this site to a new VPS provider, running in a larger KVM-based VPS. The site now runs with nginx and php5-fpm in one Docker container and MySQL in another, linked together with docker-compose.

Along the way I ran into several issues around performance and firewall configuration, which led to setting up a GitLab CI/CD pipeline (here and here) so I could iterate faster and deploy changes to a local test VM on my ESXi rack server. I set up this test VM to mirror the configuration of my VPS KVM, and then used a GitLab pipeline to push the containers to the test server, with a manual push to my production VPS server when ready to deploy.

The good news is I learned plenty along the way, but I also went down several rabbit holes chasing performance issues that turned out to be caused more by my misconfiguration of Ubuntu’s UFW and Docker’s interaction with iptables, which led to some weirdness.

The other good news is that I have plenty of RAM and CPU to spare in this KVM-based VPS where I’m running Docker, so I’ll be able to take advantage of that to deploy some other projects too (this was one of my other reasons for migrating to a new server/provider). I’ll share some additional posts about the specifics of the GitLab CI/CD config, Dockerfile, and docker-compose configurations in the next few days.