Preserving generated files as artifacts in GitLab CI Pipelines

Today I learned, after spending a while trying to debug why a later job in my pipeline couldn’t see a file from a previous job, that GitLab does not preserve files on the filesystem between stages, or even between jobs. I guess this makes sense, as your pipeline runs against what is currently in your repo, not untracked files that have been created by your pipeline.

If you are generating new files, for example with Ansible generating files from templates, and the files are generated in one job but you expect to use them in a later job in the pipeline, you need to tell GitLab that the files are ‘artifacts’ so it preserves them.

In the case of generated files, they will be untracked files in git. Tell GitLab to publish them as artifacts with the following config:

generate-nginx-config test2:
  stage: generate-templates
  environment: test2
  script:
    - cd iac/ansible
    - ansible-playbook -i test2.yml nginx-playbook.yml
  # keep the files generated from the ansible template, which are now
  # untracked, so they can be used in following jobs
  artifacts:
    untracked: true
    paths:
      - nginx/config/etc/nginx/sites-available
  tags:
    - docker-test

This is a job in my pipeline for generating my nginx config based on the environment I’m deploying to. Note the untracked: true, which tells GitLab to preserve the untracked files as artifacts.
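Jobs in later stages download artifacts from all earlier stages by default, so the generated config simply reappears on the filesystem for the next job. You can also restrict which artifacts a job fetches with dependencies:. A minimal sketch of a follow-on job (the stage and script here are illustrative):

deploy-nginx test2:
  stage: deploy
  environment: test2
  # only fetch artifacts from the job that generated the config
  dependencies:
    - generate-nginx-config test2
  script:
    # the generated files are restored at the same paths
    - ls nginx/config/etc/nginx/sites-available
  tags:
    - docker-test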

Resetting WordPress user passwords in the database

After restoring a backup mysqldump file from one server to another, I found that my WordPress user passwords were no longer working, despite knowing I was using the correct password. WordPress historically hashed passwords with MD5 in the users table (and still accepts MD5 hashes there), and apparently something about the restore left the stored hashes failing to validate on the new server. The simple fix was to reset passwords in the wp_users table with a fresh MD5 hash, using:

update wp_users set user_pass = md5('NEW_PASSWORD') where user_login = 'USER-TO-UPDATE';
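This works because WordPress still accepts an old-style 32-character MD5 value in user_pass, and transparently re-hashes it with its stronger scheme on the user’s next successful login. If WP-CLI happens to be installed on the server, the same reset can be done without touching the database directly (the username here is illustrative):

wp user update USER-TO-UPDATE --user_pass='NEW_PASSWORD'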

rsync files between servers with bandwidth throttling

As part of moving from one hosting provider to another, I needed to move a large amount of uploaded files to my local machine and then out to a VM on a new provider. Looking at rsync to do this, for the first step of pulling the content down locally I didn’t want to eat up all my home bandwidth, so I was glad to find there is a --bwlimit parameter that takes a rate in KB/s, e.g. --bwlimit=500 would limit the transfer to 500 KB/s:

rsync --bwlimit=[rate in KB/s] -a --progress [user]@[host]:/source/path .

Run this from the parent folder where you want ‘path’ to be created, not from inside an existing ‘path’ folder (otherwise you’ll end up with path/path/[files here]).
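This is down to rsync’s trailing-slash rule: without a trailing slash on the source, the directory itself is copied into the destination; with one, only its contents are. A quick sketch, with illustrative paths:

# creates ./path/[files here] under the current folder
rsync -a --progress user@host:/source/path .

# copies just the contents: [files here] land directly in the current folder
rsync -a --progress user@host:/source/path/ .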

Terraform error provisioning new VMs on Hetzner: error during placement (resource_unavailable)

I’ve been moving to use Terraform to provision VMs on Hetzner, and recently while adding a minor change and doing a ‘terraform apply’ I got this error:

Error: error during placement (resource_unavailable, ...

I hadn’t seen this before, but according to this post, ‘resource_unavailable’ means that Hetzner are unable to provision a new VM of the selected type (I’m using cx22) at that moment. It may be a temporary issue, or switching to one of their other locations may work.
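For reference, both the server type and the location are attributes on the server resource, so trying another location is a one-line change. A minimal sketch, assuming the hetznercloud/hcloud provider (the resource name and values here are illustrative):

resource "hcloud_server" "web" {
  name        = "web-1"
  image       = "debian-12"
  server_type = "cx22"
  # if cx22 is unavailable in one location, another
  # (e.g. fsn1 or hel1) may have capacity
  location    = "nbg1"
}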

Re-running a few minutes later, the provisioning was successful.