Exporting select result from MySQL and getting the error ‘ERROR 1290 (HY000): The MySQL server is running with the --secure-file-priv option’

To use ‘select … into outfile’ on MySQL, the user needs to have the FILE privilege, as described here.
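For example, the privilege can be granted like this (the user name is illustrative; FILE is a global privilege, so it has to be granted ON *.*):

-- grant the global FILE privilege to an example user
GRANT FILE ON *.* TO 'exampleuser'@'localhost';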

Even with this privilege, however, if the server is running with the --secure-file-priv option you’ll see this error:

ERROR 1290 (HY000): The MySQL server is running with the --secure-file-priv option

When this option is enabled, there are usually one or more configured paths that you can import from and export to; if secure_file_priv is NULL, import and export are disabled entirely. You can find the configured path using:

mysql> SHOW VARIABLES LIKE "secure_file_priv";
+------------------+-----------------------+
| Variable_name    | Value                 |
+------------------+-----------------------+
| secure_file_priv | /var/lib/mysql-files/ |
+------------------+-----------------------+
1 row in set (0.02 sec)

Knowing where our trusted location for importing/exporting is, let’s try again:

select examplecol from exampletable 
order by createdate desc limit 1 
into outfile '/var/lib/mysql-files/mysqlout.txt';

Success!
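To confirm the export worked, you can read the file back on the server (it is written by the mysqld process, so it will be owned by the mysql user):

sudo cat /var/lib/mysql-files/mysqlout.txt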

This is discussed in an answer to this question here.

Migrating an existing WordPress + nginx + php5-fpm + mysql website to Docker containers: lessons learned

I’ve covered in previous posts why I wanted to Dockerize my site and move it to containers; you can read about it in my other posts shared here. Having played with Docker on personal projects for several months at this point, I thought this migration was going to be easy, but I ran into several issues and unexpected decisions that I needed to make. In this post I’ll summarize a few of these issues and learning points.

Realizing the meaning of ‘containers are ephemeral’, or ‘where do I put my application data’?

Docker images are the blueprint for a container, while a container is a running instance of an image. It’s clear from the Docker docs and elsewhere that you should treat your containers as ‘ephemeral’: they only exist while they’re up and running, their state is temporary, and once a container is discarded its state is lost.

This is an easy concept to grasp at a high level, but in practice it leads to important and valid questions, like ‘so where does my data go?’ This became very apparent to me when transferring my existing WordPress data. First, I have data in MySQL tables that needs to be imported into the new MySQL server running in a container. Second, where does wordpress/wp-content go, which in my case contains nearly 500MB of uploaded images from my 2,000+ posts?
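Getting the existing data out of the old server is the easy half; a plain mysqldump produces a file you can load into the new server later (the database name is illustrative):

# dump the existing WordPress database from the old, non-containerized server
mysqldump -uroot -p wordpress > wordpress-dump.sql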

The data for MySQL was easy to address, as the official MySQL docker image is already set up to use Docker’s data volume feature by default to externalize your MySQL data files outside of your running container.
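As a sketch of what that looks like (the image tag, names, and password are illustrative), you can make the volume explicit and then load the dump from above into the containerized server:

# create a named volume and run the official MySQL image with its data
# directory (/var/lib/mysql) stored in that volume, outside the container
docker volume create mysql-data
docker run -d --name site-mysql \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -e MYSQL_DATABASE=wordpress \
  -v mysql-data:/var/lib/mysql \
  mysql:5.7

# import the dump taken earlier into the new server
docker exec -i site-mysql mysql -uroot -pchangeme wordpress < wordpress-dump.sql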

The issue of where to put my WordPress wp-content with its 500MB of uploaded files is what caused my aha moment with data volumes. Naively, you can create an image and use the COPY instruction to copy any number of files into it, even 500MB of images, but when you start to move this image around, like pushing it to a repository or a remote server, you quickly realize you’ve created something impractical. Make an incremental change to an image containing that quantity of files and you’ll find you’re unable to push it anywhere quickly.

To address this, I created an image with nginx and php5-fpm installed, but used a Docker bind mount to bring my static content into the container from the host at runtime.
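A minimal sketch of that run command; the image name and the host and container paths are illustrative:

# run the web image with wp-content bind-mounted from the host, so the
# 500MB of uploads never gets baked into the image itself
docker run -d --name my-site \
  -p 80:80 \
  -v /srv/wordpress/wp-content:/var/www/html/wp-content \
  my-nginx-php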

Now that I have my app in containers, how do I actually deploy it to different servers?

Up until this point I had built and run containers locally and set up a local Docker registry to push images to for testing, but the main reasons I was interested in this migration were to enable:

  • building and testing the containers locally
  • testing deployment to a VM server with an identical setup to my production KVM hosted server
  • pushing to my production server when I was ready to deploy to my live site

Before the native Windows and macOS Docker installations, I thought docker-machine was just a way to deploy to a locally running Docker install in a VM. It never occurred to me that you can also use the docker-machine command to act on any remote Docker install.
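For example, docker-machine’s generic driver can register an existing remote host over SSH and then point your local docker CLI at it (the IP, SSH user, and machine name are illustrative):

# register an existing remote host with docker-machine over SSH
docker-machine create --driver generic \
  --generic-ip-address=203.0.113.10 \
  --generic-ssh-user=deploy \
  prod-server

# point the local docker CLI at the remote daemon
eval $(docker-machine env prod-server)
docker ps    # now lists containers on prod-server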

It turns out that even setting the env var DOCKER_HOST to point to the IP of any remote Docker server will let you direct commands to that remote server. I believe part of the ‘docker-machine create’ setup helps automate creating the TLS certs for communicating with your remote server, but you can also do this manually by following the steps here. I took this approach because I wanted to use the certs from my dev machine as well as from my GitLab build machine.
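Without docker-machine, the same thing looks like this; the IP and cert directory are illustrative, and the directory is expected to contain the ca.pem, cert.pem, and key.pem generated during the manual TLS setup:

# point the docker CLI at a remote daemon over TLS
export DOCKER_HOST=tcp://203.0.113.10:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker/remote-certs

docker ps    # runs against the remote server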

I used this approach to build my images locally; then, on committing my Dockerfile and source changes to my GitLab repo, a CI pipeline runs the same commands, pushing automatically to a locally running test VM server and then, on a manual step, to my production server.
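The core of those commands is just an ordinary build, tag, and push against a registry (the registry host and image name are illustrative):

# build and tag the image, then push it to a private registry
docker build -t registry.example.com/mysite/web:latest .
docker push registry.example.com/mysite/web:latest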

I’ll cover my GitLab CI Pipeline setup in an upcoming post.

How do you monitor an application running in containers?

I’ve been looking at a number of approaches. Prometheus looks like a great option, and I’ve been setting it up on my test server to take a look. I’m still evaluating a few related options, maybe even using Grafana to visualize the metrics. I’ll cover this in a future post too.
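Prometheus itself runs happily as a container; a minimal sketch, assuming a config file already exists on the host (the path is illustrative):

# run Prometheus in a container, bind-mounting its config from the host
docker run -d --name prometheus \
  -p 9090:9090 \
  -v /srv/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus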

Setting up a local production Jira installation with MySQL

Download the Jira server install from here.

Follow the steps here to download and add the JDBC driver to your Jira server: download the MySQL Connector/J jar and copy it to

/opt/atlassian/jira/lib/
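In shell terms, that amounts to something like the following; the connector version is illustrative, and Jira needs a restart to pick up the driver:

# copy the MySQL Connector/J jar into Jira's lib directory, then restart Jira
sudo cp mysql-connector-java-5.1.48-bin.jar /opt/atlassian/jira/lib/
sudo /opt/atlassian/jira/bin/stop-jira.sh
sudo /opt/atlassian/jira/bin/start-jira.sh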

Choose the ‘set up myself’ option, and complete the form to point to your already installed MySQL db instance.
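If you haven’t already created a database and user for Jira, a minimal sketch (the names and password are illustrative; check Atlassian’s docs for the recommended character set and collation for your MySQL version):

-- create a dedicated database and user for Jira
CREATE DATABASE jiradb CHARACTER SET utf8 COLLATE utf8_bin;
CREATE USER 'jirauser'@'localhost' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON jiradb.* TO 'jirauser'@'localhost';
FLUSH PRIVILEGES;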

Next, configure your install properties.

On the next screen you’ll be prompted for your license key; head over to the Atlassian site to purchase one or create a trial key. When you enter your organization name on the license site, it will pick up your site id.

Create an admin user, continue, and you’re up and running!