2 years later: 2 years of running WordPress and MySQL on Docker in a VPS

It’s been 2 years since I migrated this site from a native install on a VPS to another VPS running Docker. I covered my migration in a number of posts, the first of which is here:

The surprising thing (maybe? maybe not?) is that the site has been up and running for the past 2 years with no issues. I think I rebooted the VPS a couple of times for reasons I can’t remember, but other than that it’s been up reliably the whole time.

It’s also been 2 years since I last renewed my SSL certificate, so time to do a couple of updates. More to come later.

WordPress error uploading photos on nginx: “client intended to send too large body”

While attempting to upload some new photos of around 3 to 5MB each, I found the majority of the uploads failed, and this error was occurring repeatedly in my nginx error.log:

2018/08/06 07:18:37 [error] 43#0: *5 client intended to send too large body: 3125530 bytes

Based on this tip from here, the solution was to increase the default POST body size limit with this setting in my nginx.conf:

client_max_body_size 5M;
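
For context, this is roughly where the directive sits – the surrounding config is an illustrative sketch, not my exact nginx.conf. The directive can go in the http, server or location block, and nginx needs a reload (or a restart of the container it’s running in) to pick up the change:

http {
    # ... other settings ...

    # allow POST bodies (e.g. photo uploads) up to 5MB; the nginx default is 1MB
    client_max_body_size 5M;
}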

Building and deploying Docker containers using GitLab CI Pipelines

As part of migrating this blog to Docker containers to move to a different VPS provider (here, here and here), I found myself repeating a number of steps manually, which is always a good indication that there’s an opportunity to automate some or all of those steps.

Each time I made a change to the configuration or to the content to be deployed, I found myself rebuilding the Docker image and then either running it locally, pushing it to my test server, or eventually pushing it to my prod VPS and running it there.

I’m using a locally running GitLab install for my version control, so using its build pipeline features was a natural next step. I talked about setting up a GitLab runner previously here – this is what performs the work for your pipeline.
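
If you don’t already have a runner registered, that’s a one-off step on the machine that will run the jobs. As a rough sketch (the URL, token and description are placeholders, and I’m assuming a shell executor here so the jobs can call docker-compose directly):

gitlab-runner register \
  --non-interactive \
  --url http://gitlab.example.com/ \
  --registration-token YOUR_TOKEN \
  --executor shell \
  --description "docker test build server" \
  --tag-list docker-test

The tag-list is worth noting – it’s what the ‘tags:’ entries in the .gitlab-ci.yml below are matched against to decide which runner executes each job.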

You configure your pipeline with a .gitlab-ci.yml file in the root of your repo. I defined 2 stages, build and deploy:

stages:
 - build
 - deploy

For my build stage, I have a single task which is to build my images using my docker-compose.yml:

build:
 stage: build
 script:
 - docker-compose build
 tags:
 - docker-test

For my deploy steps, I defined one for deploying to my test server, and one for deploying to my production VPS. This is the deploy to my locally running Docker server. It changes DOCKER_HOST to point to my test server, and then uses the docker-compose.yml again to bring down the running containers, and bring up the new containers with the updated images:

deploy-containers:
 stage: deploy
 script:
 - export DOCKER_HOST=tcp://192.x.x.x:2375
 - docker-compose down
 - docker-compose up -d
 tags:
 - docker-test

And this is the one for deploying to production. Note that this step is defined with ‘when: manual’, which tells GitLab the task should only be run manually (when you click on the ‘>’ run icon):

prod-deploy-containers:
 stage: deploy
 script:
 - pwd && ls -l
 - ./docker-compose-vps-down.sh
 - ./docker-compose-vps-up.sh
 when: manual
 tags:
 - docker-prod
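
I haven’t shown the contents of those two wrapper scripts here – as a hedged sketch (the hostname, port and cert path are placeholders rather than my actual values), docker-compose-vps-up.sh just points docker-compose at the prod Docker daemon over TLS and brings the containers up, and docker-compose-vps-down.sh does the same with ‘docker-compose down’:

#!/bin/bash
# sketch of docker-compose-vps-up.sh: run docker-compose against the remote prod daemon

# 2376 is the conventional port for a TLS-enabled Docker daemon
export DOCKER_HOST=tcp://my-prod-vps.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker/prod-certs

# bring up the containers from the newly built images
docker-compose up -d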

Here’s what the complete pipeline looks like in GitLab:

With this in place, any change committed to the repo results in the images being rebuilt and the containers redeployed on my test server automatically, and when I’ve completed testing I can optionally deploy the same changes to my prod VPS hosted server.

Migrating an existing WordPress + nginx + php5-fpm + mysql website to Docker containers: lessons learned

I’ve covered in previous posts why I wanted to Dockerize my site and move it to containers; you can read about it in the other posts shared here. Having played with Docker on personal projects for several months at this point, I thought this was going to be easy, but I ran into several issues and unexpected decisions that I needed to make. In this post I’ll summarize a few of these issues and learning points.

Realizing the meaning of ‘containers are ephemeral’, or ‘where do I put my application data’?

Docker images are the blueprint for a container, while the container is a running instance of an image. It’s clear from the Docker docs and elsewhere that you should treat your containers as ‘ephemeral’, meaning they only exist while they’re up and running, their state is temporary, and once they are discarded their state is also lost.

This is an easy concept to grasp at a high level, but in practice it leads to important and valid questions, like ‘so where does my data go?’ This became very apparent to me when transferring my existing WordPress data. First, I have data in MySQL tables that needs to get imported into the new MySQL server running in a container. Second, where does the wordpress/wp-content directory go, which in my case contains nearly 500MB of uploaded images from my 2,000+ posts?

The data for MySQL was easy to address, as the official MySQL docker image is already set up to use Docker’s data volume feature by default to externalize your MySQL data files outside of your running container.
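
As an illustration (this isn’t my exact docker-compose.yml – the image tag, password and volume name are placeholders), a named volume keeps the MySQL data files on the host under Docker’s control, so the container itself stays disposable:

version: '2'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: changeme
    volumes:
      # named data volume: /var/lib/mysql survives the container being removed and recreated
      - mysql-data:/var/lib/mysql

volumes:
  mysql-data: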

The issue of where to put my WordPress wp-content directory containing 500MB of uploaded files is what caused my ‘aha’ moment with data volumes. Naively, you can create an image and use the COPY command to copy any number of files into it, even 500MB of images, but when you start to move that image around, like pushing it to a repository or a remote server, you quickly realize you’ve created something impractical. Making incremental changes to an image containing that quantity of files, you soon find you’re unable to push it anywhere in a reasonable time.

To address this, I created an image with nginx and php5-fpm installed, but used Docker’s bind mount feature to reference my static content from outside the container.
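
In docker-compose terms that looks something like this (the build path, port and host directory are illustrative, not my exact config) – the image is built with nginx and php5-fpm, while wp-content is bind mounted from a directory on the host:

version: '2'
services:
  web:
    build: ./nginx-php5-fpm
    ports:
      - "80:80"
    volumes:
      # bind mount: ~500MB of uploads stay on the host instead of being baked into the image
      - /data/wordpress/wp-content:/var/www/html/wp-content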

Now I have my app in containers, how do I actually deploy to different servers?

Up until this point I had built and run containers locally, and I’d set up a local Docker repository for pushing images to for testing, but the main reasons I was interested in this migration were to enable:

  • building and testing the containers locally
  • testing deployment to a VM server with an identical setup to my production KVM hosted server
  • pushing to my production server when I was ready to deploy to my live site

Before the native Windows and MacOS Docker installations, I thought docker-machine was just a way to deploy to a locally running Docker install in a VM. It never occurred to me that you can also use the docker-machine command to act on any remote Docker install.

It turns out even setting an env var DOCKER_HOST to point to the IP of any remote Docker server will enable you to direct commands to that remote server. I believe part of the ‘docker-machine create’ setup helps automate setting up TLS certs for communicating with your remote server, but you can also do this manually by following the steps here. I took this approach because I wanted to use the certs from my dev machine as well as from my GitLab build machine.
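
For example, with the certs generated and copied into place, something like this is enough for the local docker and docker-compose commands to act on the remote daemon (the IP and cert path here are placeholders):

# point the Docker CLI at the remote daemon; 2376 is the conventional TLS-enabled port
export DOCKER_HOST=tcp://192.x.x.x:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker/remote-certs

# these now run against the remote server, not the local daemon
docker ps
docker-compose up -d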

I used this approach to build my images locally, and then set up a CI pipeline so that on committing my Dockerfile and source changes to my GitLab repo, the same commands run and deploy automatically to a locally running test VM server, with a manual step to push to my production server.

I’ll cover my GitLab CI Pipeline setup in an upcoming post.

How do you monitor an application running in containers?

I’ve been looking at a number of approaches. Prometheus looks like a great option, and I’ve started setting it up on my test server to evaluate it. I’m still considering a few related options, maybe even using Grafana to visualize the metrics. I’ll cover this in a future post too.
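
As a rough sketch of the kind of setup I’ve been trying on the test server (image tags, ports and file paths are illustrative, and this isn’t a finished config), cAdvisor exposes per-container metrics and Prometheus scrapes them:

version: '2'
services:
  cadvisor:
    image: google/cadvisor
    volumes:
      # standard cAdvisor mounts so it can read container stats from the host
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    ports:
      - "8080:8080"

  prometheus:
    image: prom/prometheus
    volumes:
      # prometheus.yml contains a scrape job pointing at cadvisor on port 8080
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"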