Installing and Configuring Atlassian Confluence with MySQL in Docker Containers

Atlassian Confluence is already available as a Docker image on Docker Hub, but you still need to provide a database instance for a production setup. Let’s build a docker-compose file that creates a container from this image together with a container running MySQL.

First, per the docs on the Docker Hub page, create an external folder /data/confluence that will get mounted as a volume by the container.

This is my first version to get things working (keep reading for how this gets refined to include a JDBC driver):

[code]

version: '3'
services:
  confluence:
    image: atlassian/confluence-server
    restart: always
    volumes:
      - /data/confluence:/var/atlassian/application-data/confluence
    ports:
      - 8090:8090
      - 8091:8091
  confl-mysql:
    build: ./mysql
    restart: always
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=yes
      - MYSQL_DATABASE=confluence
      - MYSQL_USER=confluence
      - MYSQL_PASSWORD=your-password
[/code]

After hitting your-ip:8090 for the first time, you can pick the ‘My own database’ option.

To connect to a MySQL database you need to drop a MySQL JDBC driver into /opt/atlassian/confluence/confluence/WEB-INF/lib, so at this point we’ve got a couple of options. We could copy the JDBC driver into the running container (but since containers are ephemeral we’d lose this change as soon as we started a new container from the image), or take a step back and rebuild the image with the driver included.

The right thing to do would be to rebuild a custom image including the driver. So let’s do that.

Download the MySQL Connector/J driver from the MySQL site.

Let’s commit it into our project and add a new Dockerfile to build a modified version of the official Confluence image, which is just these two lines:

[code]

FROM atlassian/confluence-server
COPY mysql-connector-java-5.1.46.jar /opt/atlassian/confluence/confluence/WEB-INF/lib

[/code]

Update the docker-compose file to build this new image instead of using the provided one from Docker Hub. Replace:

[code]

image: atlassian/confluence-server

[/code]

with

[code]

build: ./confl-mysql

[/code]

(or whatever you named the directory containing the above Dockerfile)
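For clarity, the confluence service in the compose file now looks like this, with ./confl-mysql being the directory holding the custom Dockerfile and the committed connector jar:

[code]

confluence:
  build: ./confl-mysql
  restart: always
  volumes:
    - /data/confluence:/var/atlassian/application-data/confluence
  ports:
    - 8090:8090
    - 8091:8091

[/code]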

Now when we start up this container and hit the app, the JDBC driver is recognized and we’re on to the next config page for our database connection params.

Entering our credentials and pressing Test, we get an error about the default encoding.

To address this, the Confluence setup docs describe editing the my.cnf file in MySQL, or alternatively I could pass the settings as startup params. The MySQL docs have a chapter on configuring and running MySQL in Docker, and this Q&A on Stack Overflow describes passing the optional params in a command section in your docker-compose file.
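For reference, the my.cnf route could look something like the sketch below: the official MySQL image picks up extra config files from /etc/mysql/conf.d, so a custom file could be mounted into the confl-mysql service (the confluence.cnf name and location are my own choice, not from the Confluence docs). I went with the command-line params instead.

[code]

confl-mysql:
  build: ./mysql
  restart: always
  volumes:
    # confluence.cnf (a made-up filename) would hold the [mysqld] settings
    # from the Confluence setup docs: character-set-server, collation-server, etc.
    - ./mysql/confluence.cnf:/etc/mysql/conf.d/confluence.cnf:ro

[/code]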

My first attempt was to add this:

[code]

confl-mysql:
  build: ./mysql
  restart: always
  command: character-set-server=utf8 collation-server=utf8_bin
[/code]

but the syntax was not quite right yet, leaving the container stuck in a restart loop, with this error appearing in the container logs:

/usr/local/bin/docker-entrypoint.sh: line 202: exec: character-set-server=utf8: not found

Reading the docs for the command option, I learned that the command in the docker-compose file needs to include the command that starts the app in the container as well as the optional params. So now I’m here:

[code]

confl-mysql:
  build: ./mysql
  restart: always
  command: [mysqld, --character-set-server=utf8 --collation-server=utf8_bin]
[/code]

Now we’re getting closer. Logs from my MySQL container are now showing:

ERROR: mysqld failed while attempting to check config

command was: "mysqld --character-set-server=utf8 --collation-server=utf8_bin --verbose --help"

mysqld: Character set 'utf8 --collation-server=utf8_bin' is not a compiled character set and is not specified in the '/usr/share/mysql/charsets/Index.xml' file

Some Googling made me realize each of the params needs to be comma separated, so the next update is:

[code]
confl-mysql:
  build: ./mysql
  restart: always
  command: [mysqld, --character-set-server=utf8, --collation-server=utf8_bin]
[/code]

and now we’ve got both containers starting up. The list of params should be updated to include all the optional params listed in the Confluence MySQL setup docs, otherwise you’ll get an error for each missing one. The complete list is:

command: [mysqld, --character-set-server=utf8, --collation-server=utf8_bin, --default-storage-engine=INNODB, --max_allowed_packet=256M, --innodb_log_file_size=2GB, --transaction-isolation=READ-COMMITTED, --binlog_format=row]
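The same command can also be written as a YAML list, which is easier to read once the flags pile up. The confl-mysql service then ends up looking like this (same settings as above, environment values repeated from the first compose file):

[code]

confl-mysql:
  build: ./mysql
  restart: always
  environment:
    - MYSQL_RANDOM_ROOT_PASSWORD=yes
    - MYSQL_DATABASE=confluence
    - MYSQL_USER=confluence
    - MYSQL_PASSWORD=your-password
  command:
    - mysqld
    - --character-set-server=utf8
    - --collation-server=utf8_bin
    - --default-storage-engine=INNODB
    - --max_allowed_packet=256M
    - --innodb_log_file_size=2GB
    - --transaction-isolation=READ-COMMITTED
    - --binlog_format=row

[/code]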

… and my VM has run out of disk space, so time to expand my disk. Back shortly.

OK, back. Restarted, and now we’re in business.

Config complete, and now the containers are up!

Changing a GitLab Runner from ‘Locked to a Project’ to Shared

I have a GitLab Runner assigned to a project that I’d like to share with another similar project. Currently it looks like this:

Pressing the small edit icon, I can see these options:

I want to reuse this same runner, so I unchecked the ‘Lock to current projects’ checkbox.

Now if I go to the CI/CD settings for my other project I can see it is available, so I click ‘Enable for this project’.

Now the pending job that was triggered by my first push to the repo has kicked in and is deploying to my test Docker server. Cool.

Building and deploying Docker containers using GitLab CI Pipelines

As part of migrating this blog to Docker containers to move to a different VPS provider (here, here and here), I found myself repeating a number of steps manually, which is always a good indication that there’s an opportunity to automate some or all of those steps.

Each time I made a change to the configuration or to the content to be deployed, I found myself rebuilding the Docker image and then either running it locally, pushing it to my test server, or eventually pushing it to my prod VPS and running it there.

I’m using a locally running GitLab for my version control, so using its build pipeline features was a natural next step. I talked about setting up a GitLab Runner previously here – the runner is what performs the work for your pipeline.

You configure your pipeline with a .gitlab-ci.yml file in the root of your repo. I defined 2 stages, build and deploy:

stages:
 - build
 - deploy

For my build stage, I have a single task which is to build my images using my docker-compose.yml:

build:
 stage: build
 script:
 - docker-compose build
 tags:
 - docker-test

For my deploy steps, I defined one for deploying to my test server, and one for deploying to my production VPS. This is the deploy to my locally running test Docker server. It changes DOCKER_HOST to point to that server, then uses the docker-compose.yml again to bring down the running containers and bring up new containers with the updated images:

deploy-containers:
 stage: deploy
 script:
 - export DOCKER_HOST=tcp://192.x.x.x:2375
 - docker-compose down
 - docker-compose up -d
 tags:
 - docker-test

And one for my deploy to production. Note that this step is defined with ‘when: manual’, which tells GitLab the task is only run manually (when you click on the ‘>’ run icon):

prod-deploy-containers:
 stage: deploy
 script:
 - pwd && ls -l
 - ./docker-compose-vps-down.sh
 - ./docker-compose-vps-up.sh
 when: manual
 tags:
 - docker-prod
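The two docker-compose-vps-*.sh scripts aren’t shown here; if all they do is point docker-compose at the prod VPS’s Docker daemon and cycle the containers, the same job could be written inline in the same style as the test deploy. Here’s a sketch of that alternative, with the prod host address as a placeholder:

prod-deploy-containers:
 stage: deploy
 script:
 # placeholder address; assumes the prod VPS's Docker daemon is reachable over tcp
 - export DOCKER_HOST=tcp://your-prod-vps:2375
 - docker-compose down
 - docker-compose up -d
 when: manual
 tags:
 - docker-prod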

Here’s what the complete pipeline looks like in GitLab.

With this in place, any changes committed to the repo now result in a new image being created and pushed to my test server automatically, and when I’ve completed testing the changes I can optionally deploy them to my prod VPS hosted server.