nginx + php5-fpm response lag on first requests

I’m in the middle of migrating this existing site to Docker containers and moving to a new VPS host. Part of my motivation for the move is to capture the customized configs for each of the servers, so I can easily move the whole deployment between a test environment and a production deployment. What’s prompting this is the realization that most of the performance tweaks I made during the first native install are captured in various blog posts here, but to recreate those install steps I would need to go back to each of those articles and pull out the details in order to repeat them elsewhere. That’s not a particularly repeatable process.

I’m close to switching from my current non-Docker install (here, as of 2/27/18) to my test install now running on Docker. I’ll share more about how that is configured in future posts, but I just wanted to capture one nginx + php5-fpm specific config that had me stumped for a few days.

There are many options for configuring the worker processes for nginx and php5-fpm. php5-fpm itself has a number of modes that control how it manages its worker processes. By default the process manager is ‘dynamic’ (pm = dynamic), which creates processes to handle incoming requests based on the other related config options (pm.max_children, pm.start_servers, pm.min_spare_servers, pm.max_spare_servers, etc.).

On my current site, based on recommendations, I changed this to pm = ondemand in order to minimize memory usage on my 512MB VPS. One other param, though, had an interesting effect:

pm.process_idle_timeout = 10s;

This keeps a worker process alive for an additional 10s after it has finished the current request. That has a real impact on the responsiveness of the WordPress site: without it there’s a noticeable lag of 3-4 seconds before responses start to come back to the browser, presumably because new worker processes have to be started up to handle the next request. By keeping workers alive after the last request, there’s no lag from spinning up a new process.
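For reference, the relevant bit of the pool config (typically /etc/php5/fpm/pool.d/www.conf on Ubuntu) ends up looking something like the snippet below. The values are just illustrative, tuned for a small 512MB VPS, not a recommendation:

; process manager settings for the php5-fpm pool
pm = ondemand
; hard cap on the number of worker processes
pm.max_children = 5
; keep an idle worker alive for 10s after its last request, so the next
; request doesn't have to wait for a new worker process to be forked
pm.process_idle_timeout = 10s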

I was almost at the point of making a no-go decision based on the laggy performance, but adding this one param fixed the laggy behavior, and now I’m all set. Given that I’ve jumped from a 512MB VPS to a 4GB VPS, I’m less concerned about keeping memory usage to a minimum this time, so I haven’t changed from dynamic to ondemand in the Docker config for my new nginx + php5-fpm setup, but this one param is worth knowing about.

Moving my nginx+mysql WordPress VPS native install to Docker containers on a KVM VPS

My WordPress blog that you’re reading right now is running on nginx and MySQL installed on a cheap OpenVZ VPS. I’ve been running on a $2.50 VPS from Virmach for the past 6 months or so and have been very happy with the service. I spent a bunch of time tweaking the nginx and MySQL config params to run in < 512MB, which it does comfortably, but nginx and MySQL are both installed directly on the Ubuntu VM instance and it would be great if I could make this setup more easily movable between cloud providers (or even to have a local copy of the setup for testing, vs the live site).

I’ve been spending a lot of time playing with Docker and Kubernetes, so it seems logical to move the site into containers, which will then let me explore other deployment options.

Migration Steps – find a KVM VPS

As far as I know you can’t install Docker inside an OpenVZ virtualized VPS container, so the first step is to move to a KVM based VPS where I can install Docker (and possibly Kubernetes). I’ve been shopping the deals on lowendbox.com and there are plenty of reasonable deals at around $5/month for various combinations of 2 to 4GB RAM and 2 to 4 vCPUs.

Dockerize nginx, MySQL and WordPress

I’ve been playing with this already. I’ve picked up my own combo of favorite/useful WordPress plugins, so I’ll probably share a generic set of Dockerfiles and then leave it up to anyone who wants to use them to customize their own WordPress install in the container.
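As a rough sketch of the shape of this (the image tags, volume names and passwords below are placeholders/assumptions, not my actual config), a docker-compose file using the official images looks roughly like:

version: '3'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: wordpress
    volumes:
      - db_data:/var/lib/mysql
  wordpress:
    image: wordpress:fpm            # php-fpm variant, no web server bundled
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: change-me
    volumes:
      - wp_data:/var/www/html
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
      - wp_data:/var/www/html       # nginx serves static files and passes .php requests to wordpress:9000
volumes:
  db_data:
  wp_data: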

Configure a local dev/test environment Docker setup vs production environment Docker setup on my VPS

This makes a lot of sense and is a benefit of using containers. It will allow me to test my config locally, and then push to my production node. I’ve been looking at using Rancher to help with this, but I’ve still got lots to learn.
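One pattern I’m looking at (just a sketch of the idea, not necessarily what I’ll settle on if I go the Rancher route) is a base docker-compose.yml plus a small per-environment override file, e.g. a hypothetical docker-compose.dev.yml:

version: '3'
services:
  db:
    ports:
      - "3306:3306"                              # expose MySQL on the host for local debugging only
  wordpress:
    volumes:
      - ./wp-content:/var/www/html/wp-content    # mount local content for quick edits

Locally that layers on top of the base file with ‘docker-compose -f docker-compose.yml -f docker-compose.dev.yml up’, while the production VPS runs from the base file alone.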

More updates to come as my project progresses.

Revisiting AWS ECS: deploying Docker containers to ECS

A few months back I walked through the steps to build, tag and deploy Docker containers to AWS ECS. It’s been a while and I need to revisit the steps.

Although you can use the AWS Console, the aws cli works well and complements the common steps with the docker cli.

The steps you need to connect and log the docker cli in to AWS are listed in the AWS ECS dashboard, on the Repositories tab. Press the ‘Push Commands’ button and it will show you the login command, which looks like this:

aws ecr get-login --no-include-email --region us-east-1

The output of this is a ‘docker login’ command. Copy it and run it in the shell where you use your docker cli, which logs Docker in to your ECR registry.
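If you’re on a Linux or macOS shell you can also skip the copy/paste and execute the returned login command directly with command substitution:

$(aws ecr get-login --no-include-email --region us-east-1)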

Assuming you already have a docker image built (‘docker build -t yourimage .’), you can tag it ready to push with the next command listed in the ‘Push Commands’ output:

docker tag yourimage:latest id-of-your-ecs-registry.dkr.ecr.us-east-1.amazonaws.com/yourimage:latest

Now push with:

docker push id-of-your-ecs-registry.dkr.ecr.us-east-1.amazonaws.com/yourimage:latest
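If you want to confirm the image made it into the registry, you can list the repository contents with the aws cli too (‘yourimage’ here is the same hypothetical repository name as above):

aws ecr list-images --repository-name yourimage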


Adding a 2.5″ drive to a Mac Pro 2008

I have a couple of spare 500GB 2.5″ drives that were going to go into my HP DL380 rack server, but for the reasons described here, I ended up replacing them with some WD drives instead. So I wanted to install them in my Mac Pro to at least get some use out of them.

The pre ‘trash can’ Mac Pro towers have 4 slide-out drive trays (see here, across the center of the case) that allow you to easily install or remove 3.5″ drives without messing with any cables. Attach a drive to a drive sled with the 4 screws on the sled and then push it in.

2.5″ drives, however, obviously won’t fit the drive sleds directly. There are a number of adapter options if you just Google for “mac pro 2.5 drive adapter” or similar, and prices are all over the place, from $5 to $30 or more.

I went for a cheaper $5 option on Amazon. When the adapter arrived, what was interesting is that it looks like it was 3D printed:

The kit came with easy-to-follow instructions and the needed screws.


Following the instructions and attaching the drive, here’s what it looks like with the adapter fitted into one of the drive sleds:


On booting up, the top drive is the original disk that came in this used Mac Pro; it has an HFS+ partition with El Capitan installed and a partition with Windows 10. The second is the newly added 2.5″ 500GB disk. Great!