Migration to new VPS running my blog in Docker containers now complete!

After many more hours than I expected or planned, I’ve migrated this site to a new provider, running on a larger KVM-based VPS. The site now runs with nginx and php5-fpm in one Docker container and MySQL in another, linked together with docker-compose.
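For anyone curious, here’s a minimal docker-compose sketch of a two-container layout like this. This isn’t my actual config (the specifics are coming in a follow-up post); the service names, image tag, and password are illustrative placeholders:

version: '2'
services:
  web:
    build: ./web                      # image with nginx + php5-fpm
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: mysql:5.7                  # placeholder tag
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder, use a real secret
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata: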

Along the way I ran into several issues with performance and firewall configuration, which led me to set up a GitLab CI/CD pipeline (here and here) so I could iterate more quickly, deploying changes to a local test VM on my ESXi rack server. I set up this test VM to mirror the configuration of my VPS KVM, used a GitLab pipeline to push the containers to the test server, and then manually pushed to my production VPS when ready to deploy.

The good news is I learned plenty along the way, but I also went down several rabbit holes chasing performance issues that turned out to be caused by my misconfiguration of Ubuntu’s UFW and by Docker’s interaction with iptables, which together caused some weirdness.
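For context on the firewall side: Docker manages its own iptables rules (the DOCKER chain), which take effect ahead of UFW’s rules, so published container ports can be reachable even when UFW appears to block them. When debugging, it helps to compare both views:

# rules Docker has added for published container ports
sudo iptables -L DOCKER -n -v

# what UFW thinks is allowed
sudo ufw status verbose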

The other good news is that I have plenty of RAM and CPU to spare in this KVM-based VPS where I’m running Docker, so I’ll be able to take advantage of that to deploy some other projects too (this was one of my other reasons for migrating to another server/provider). I’ll share some additional posts about the specifics of the GitLab CI/CD config, Dockerfile, and docker-compose configurations in the next few days.

Enabling the Docker service to listen on a port

By default, the Docker daemon listens on a local Unix socket. If you want to access the Docker API remotely, you need to configure the service to listen on a TCP port as well.

On Ubuntu 16.04, edit /lib/systemd/system/docker.service and change this line:

ExecStart=/usr/bin/dockerd -H fd://

to

ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376

Reload the systemd config:

sudo systemctl daemon-reload

and restart the service:

sudo systemctl restart docker.service
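To confirm it’s listening (the <server-ip> below is a placeholder for your Docker host’s address):

# locally, check dockerd is bound to the port
sudo netstat -tlnp | grep 2376

# from a remote machine with the docker client installed
docker -H tcp://<server-ip>:2376 version

Note that configured this way the API is exposed without TLS, so only do this on a trusted network or restrict access with your firewall.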

More info here.

Kubernetes: useful kubectl commands

Working through the interactive tutorial here is a good way to get familiar with kubectl usage.

A few notes for reference:

Copy the master node config to a remote machine (from here):

scp root@<master ip>:/etc/kubernetes/admin.conf .

If you’re running kubectl on a remote machine, any of these commands can use the copied conf file by passing: --kubeconfig ./admin.conf
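For example, to query the nodes from the remote machine using the copied file:

kubectl --kubeconfig ./admin.conf get nodes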

Query nodes in the cluster:

kubectl get nodes


Show current cluster info:

kubectl cluster-info

Kubernetes master is running at https://192.168.1.80:6443

KubeDNS is running at https://192.168.1.80:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

From the interactive tutorial:

Run kubernetes-bootcamp:

kubectl run kubernetes-bootcamp --image=docker.io/jocatalin/kubernetes-bootcamp:v1 --port=8080

Pods:

kubectl get pods
kubectl describe pod podname
kubectl delete pod podname

Deployments:

kubectl get deployments
kubectl describe deployment deploymentname
kubectl delete deployment deploymentname

Get logs:

kubectl logs podname
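The same bootcamp tutorial also covers exposing and scaling the deployment; for quick reference (using the deployment name from above):

kubectl expose deployment kubernetes-bootcamp --type=NodePort --port=8080
kubectl scale deployment kubernetes-bootcamp --replicas=3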

All disks are not equal, especially cheap consumer disks without temperature sensors (a.k.a. HP DL380 G7 fan speeds running like a 747 on takeoff)

I’ve recently been enjoying the freedom of a homelab, creating VMs on VMware ESXi and installing all-the-things. I’ve had my HP ProLiant DL380 G7 for a couple of weeks now and have already accumulated a number of VMs for investigating a collection of things.

Prior to my DL380 arriving, I was pondering what type of disks to put in it: whether to go with cheap consumer laptop hard disks, more expensive NAS disks (e.g. WD Red), or brand-name HP disks. I went with a pair of HGST 500GB disks to start with, and ran into an issue with the cooling fans spinning up like a 747 taking off. Googling for “dl380 disk fans” turns up many related posts, and it turns out that some non-HP drives in ProLiant servers may not report their internal temperature correctly, so the server assumes the drives are overheating and cranks up the fans to compensate.
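As an aside, if you want to see what temperature data a drive actually reports, smartmontools can dump the SMART attributes (assuming the drive exposes SMART and isn’t hidden behind a RAID controller; /dev/sda below is a placeholder for your device):

# install smartmontools if needed
sudo apt-get install smartmontools

# dump SMART attributes and look for a temperature reading
sudo smartctl -A /dev/sda | grep -i temp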

Here are a couple of screenshots (from the iLO, Integrated Lights-Out) showing the fans ramping up to near-unbearable noise levels over about 5 minutes:

  • iLO reporting the drives overheating:

  • Only a couple of minutes in, but already running faster than they probably should at idle:

  • Starting to ramp up:
  • Getting unbearable now:

If I’d spent some more time reading around I would have found this excellent article detailing the issue, and more specifically which drives are known to work in the DL380 and which are known to have this problem. It turns out most of the WD disks do work, so I replaced the HGST drives with two WD Black 750GB drives. Now the server at idle runs with the fans at 10-13% and is actually no louder than a regular desktop.

Back to creating some more VMs 🙂