Site update: Migrating hosting providers – automating deployment with Terraform, Ansible and GitLab CI Pipelines

Over the past couple of years I’ve been working on and off on a personal project to migrate and update the GitLab CI pipeline on my self-hosted GitLab that builds and deploys this site. Unfortunately, my self-hosted GitLab used to run on an e-waste HP DL380 G7 rack server that I no longer have after moving house, so I’ve gone back to using my old 2008 Mac Pro 3,1 as a Proxmox server, where I now run GitLab (which, oddly, is what I first used this Mac for several years ago).

As part of the update, I wanted to achieve a few goals:

  • update the GitLab pipeline to deploy to a staging server for testing, and then deploy to the live server
  • template any deployment files that are server/domain specific
  • update my Docker images for WordPress, updating the plugins and anything else that needs to be in the image to support the runtime, e.g. nginx, the PHP modules nginx needs, etc.
  • move to a new cloud provider that would allow me to provision VMs with Terraform
  • automate renewing SSL certs with Let’s Encrypt’s certbot (see the sketch after this list)
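
As a rough illustration of that last goal, here’s a minimal sketch of automating renewal with certbot. It assumes the webroot plugin, and the domain, email, webroot path and reload command are placeholders rather than my actual setup:

# Issue (or re-issue) a certificate using certbot's webroot plugin; domain, email and path are placeholders
certbot certonly --webroot -w /var/www/html -d example.com -m admin@example.com --agree-tos --non-interactive

# Renew anything close to expiry and reload nginx so it picks up the new certs;
# typically run on a schedule (cron or a scheduled CI job)
certbot renew --quiet --deploy-hook "nginx -s reload"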

I won’t share my completed pipeline, because I don’t want to share specifics about how my WordPress site is configured, but here’s an overview of what I used to automate the various parts of it.

While I’ve ended up with a working solution that meets my goals (I can run the pipeline to deploy to my test server, or deploy the latest build to my new live server), I still have a few areas I could improve:

  • GitLab CI environments and parameterization – I don’t feel I’ve taken enough advantage of these yet. The jobs that deploy to my test server run automatically, while the deploy to my live site is the same set of jobs run manually and configured to target a different server. I feel there’s more I can parameterize here and need to do some more experimentation in this area (there’s a rough sketch of what I mean below).
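
As a rough sketch of the kind of parameterization I mean (not my actual pipeline), the test and live deploy jobs could share a single script and take the target purely from variables set per GitLab CI environment. The hostnames, paths and commands below are made up for illustration:

#!/bin/sh
# Hypothetical shared deploy script; DEPLOY_HOST and DEPLOY_PATH would be set
# per GitLab CI environment (test vs live) rather than hard-coded into each job
set -eu
: "${DEPLOY_HOST:?set per environment, e.g. test.example.com or www.example.com}"
: "${DEPLOY_PATH:=/opt/site}"

# Push the built site to the target server and restart its containers
rsync -az --delete ./build/ "deploy@${DEPLOY_HOST}:${DEPLOY_PATH}/"
ssh "deploy@${DEPLOY_HOST}" "cd ${DEPLOY_PATH} && docker compose up -d"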

Although this effort was spread over a couple of years before I got to a point of completion, it was a great opportunity to gain some more experience across all these tools.

Running and connecting to different MySQL servers in Docker containers

Running MySQL and exposing the default port (3306):

docker run -e MYSQL_ROOT_PASSWORD=mypassword -p 3306:3306 -d mysql
mysql -u root -p

Running with a different exposed port on the host:

docker run -e MYSQL_ROOT_PASSWORD=mypassword -p 3307:3306 -d mysql

Connecting to a server on a non-default port needs the -P option, but also -h to specify the host; without -h the client appears to ignore -P and connects to whatever is running on the default 3306:

mysql -h 127.0.0.1 -P 3307 -u root -p
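
Putting the two together, both servers can run side by side on different host ports, and you connect to each one by port (the container names here are just examples):

# Two MySQL servers side by side, each mapped to a different host port
docker run --name mysql-one -e MYSQL_ROOT_PASSWORD=mypassword -p 3306:3306 -d mysql
docker run --name mysql-two -e MYSQL_ROOT_PASSWORD=mypassword -p 3307:3306 -d mysql

# -h 127.0.0.1 forces a TCP connection to the published port
mysql -h 127.0.0.1 -P 3306 -u root -p
mysql -h 127.0.0.1 -P 3307 -u root -p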

Website down for 24 hours: SSL certificate update failed – checking the contents of your certificate bundle

My SSL certificate for this site was about to expire this week, so I paid to renew it for another year and then uploaded the new certificate bundle to my server. Having been through this process a few times, I have a couple of posts describing the steps for configuring nginx with SSL certs here:

… and how to create a certificate bundle here:

I normally concatenate the root, intermediate and my site certificate manually before uploading, using the steps in the post above. This time, though, I noticed the updated certificate had a bundle download, so I downloaded this, uploaded it straight to my site and then restarted…

Unfortunately, since I run nginx in a Docker container, on restarting the container it failed and went into a restart loop. While it’s constantly failing and restarting like this, it’s not possible (that I know of) to ‘docker exec -it bash’ into the container, since it never fully starts. In hindsight, maybe ‘docker logs’ would have told me what I needed to know, but I wanted to look at /var/log/nginx/error.log inside the container to see what the issue was. I found a neat trick to do this which I’ll cover in another post.
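
For reference, the recent output of a crash-looping container can still be read from the host even though you can’t exec into it (the container name here is just an example):

# Show the most recent output from the restarting container
docker logs --tail 100 my-nginx

# Or follow the output live while it restarts
docker logs -f my-nginx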

In the meantime, I found the error in the nginx error.log was this:

2024/06/12 16:31:02 [emerg] 56#56: SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl-certs/my-site.key") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch)

This seemed odd, since I generated the CSR for the new certificate on the server and had the key for both the request and the new certificate. Luckily, this post about the error had suggestions to look at the contents of the bundle, using:

openssl x509 -noout -text -in yourcert.cert

And exactly as one of the answers suggested, the ‘Subject:’ field in the certificate was not for my domain, it was for the CA instead. The bundle I downloaded after purchasing my new certificate contained the CA and intermediate certs, but not the cert for my domain… I should have followed my own instructions for combining all three, including my own site certificate.
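
For anyone hitting the same thing, this is roughly how to rebuild and sanity-check the bundle before uploading. File names are placeholders, the site certificate goes first (which is the order nginx expects), and the last two commands assume an RSA key:

# Site certificate first, then the intermediate and root certs
cat my-site.crt intermediate.crt root.crt > my-site-bundle.crt

# The Subject of the first cert in the bundle should be your domain, not the CA
openssl x509 -noout -subject -in my-site-bundle.crt

# The certificate and private key match if these two hashes are identical
openssl x509 -noout -modulus -in my-site-bundle.crt | openssl md5
openssl rsa -noout -modulus -in my-site.key | openssl md5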

I created my new bundle by hand, uploaded to my server and now everything is back to normal with the new SSL certificate.

In hindsight, I should have tested my updates on my test server before uploading directly to my live server, but since moving house recently I no longer have the HP rack server I had before, on which I used to run a test server that mirrored the config of my live site. Lesson learned: I need to set up a new test server…