Site update: Migrating hosting providers – automating deployment with Terraform, Ansible and GitLab CI Pipelines

Over the past couple of years I’ve been working, on and off, on a personal project to migrate and update the GitLab CI pipeline on my self-hosted GitLab that builds and deploys this site. Unfortunately, my self-hosted GitLab used to run on an e-waste HP DL380 G7 rack server that I no longer have after moving house, so I’ve gone back to using my old 2008 MacPro 3,1 as a Proxmox server, where I now run GitLab (which, oddly, is what I first used this Mac for several years ago).

As part of the update, I wanted to achieve a couple of goals:

  • update the GitLab pipeline to deploy to a staging server for testing, and then deploy to the live server (the sketch after this list shows the rough shape of this)
  • template any deployment files that are server/domain specific
  • update my Docker images for WordPress, updating the plugins and anything else that needs to be in the image to support the runtime, e.g. nginx, the PHP packages nginx needs, etc.
  • move to a new cloud provider that would allow me to provision VMs with Terraform
  • automate updating SSL certs with Let’s Encrypt certbot
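
For illustration, here’s a minimal sketch of the overall pipeline shape this gives you, with a Terraform job up front and a manual gate on the live deploy. The job names, the iac/terraform path, and the deploy.sh script are placeholders for illustration, not my actual config (and the certbot renewal isn’t shown):

stages:
  - provision
  - generate-templates
  - build
  - deploy-test
  - deploy-live

provision-vms:
  stage: provision
  script:
    - cd iac/terraform        # hypothetical path
    - terraform init
    - terraform apply -auto-approve

deploy test:
  stage: deploy-test
  environment: test
  script:
    - ./deploy.sh test        # placeholder for the real deploy steps

deploy live:
  stage: deploy-live
  environment: live
  when: manual                # live deploys only run when triggered by hand
  script:
    - ./deploy.sh live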

I won’t share my completed pipeline, because I don’t want to share specifics about how my WordPress site is configured, but at a high level: Terraform provisions the VMs at the new provider, Ansible templates the server/domain-specific files, and the GitLab CI pipeline ties the build and deployment steps together.

While I’ve ended up with a working solution that meets my goals (I can run the pipeline to deploy to my test server, or deploy the latest build to my new live server), there are still a few areas I could improve:

  • GitLab CI Environments and parameterization – I don’t feel I’ve taken enough advantage of these yet. The jobs that deploy to my test server run automatically, but the deploy to my live server is the same set of jobs, run manually and configured to target a different server – I feel there’s more I can parameterize here and need to do some more experimentation in this area (the sketch below shows the direction I have in mind)
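
As a rough sketch of what that parameterization could look like, a hidden template job plus per-environment variables would collapse the duplicated jobs into one definition. The template name, playbook, and inventory file names here are illustrative assumptions, not my real files:

.deploy-template:
  stage: deploy
  script:
    - cd iac/ansible
    - ansible-playbook -i "$INVENTORY" deploy-playbook.yml   # hypothetical playbook

deploy test2:
  extends: .deploy-template
  environment: test2
  variables:
    INVENTORY: test2.yml

deploy live:
  extends: .deploy-template
  environment: live
  when: manual              # live stays a manual action
  variables:
    INVENTORY: live.yml     # hypothetical inventory name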

Although this effort was spread over a couple of years before I got to a point of completion, it was a great opportunity to gain some more experience across all these tools.

Preserving generated files as artifacts in GitLab CI Pipelines

Today I learned, after spending a while trying to debug why a later job in my pipeline couldn’t see a file from a previous job, that GitLab does not preserve files on the filesystem between stages, or even between jobs. This makes sense when you consider that each job runs against what is currently in your repo, not untracked files that were created by earlier jobs in your pipeline.

If you are generating new files (for example, Ansible rendering files from templates) in one job and expect to use them in a later job in the pipeline, you need to tell GitLab that the files are ‘artifacts’ so they are preserved.

In the case of generated files, they will be untracked files in git. Tell GitLab to publish them as artifacts with the following config:

generate-nginx-config test2:
  stage: generate-templates
  environment: test2
  script:
    - cd iac/ansible
    - ansible-playbook -i test2.yml nginx-playbook.yml
  # keep the file generated from ansible template, which is now
  # untracked, so it can be used in following jobs
  artifacts:
    untracked: true
    paths:
      - nginx/config/etc/nginx/sites-available
  tags:
    - docker-test

This is a job in my pipeline that generates my nginx config based on the environment I’m deploying to. Note the untracked: true, which tells GitLab to preserve the untracked files as artifacts, alongside the paths entry that picks up the generated config.
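
A later job then picks the artifact up automatically, since artifacts from earlier stages are downloaded by default; you can also make the dependency explicit with needs. A hypothetical downstream job might look like this (push-config.sh is a placeholder for the real deploy step):

deploy-nginx-config test2:
  stage: deploy
  environment: test2
  # fetch the artifacts from the generate job, so the rendered
  # config is present in this job’s workspace
  needs:
    - job: generate-nginx-config test2
      artifacts: true
  script:
    - ls nginx/config/etc/nginx/sites-available   # file from the previous job
    - ./push-config.sh test2                      # placeholder
  tags:
    - docker-test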