Expanding the disk size for an Ubuntu guest on VMware ESXi

Stop the guest VM.

Change the attached disk size in VM settings:

Attach a GParted live ISO, or alternatively the Ubuntu desktop ISO you originally installed from.

Change the Boot Options for your VM to ‘Force BIOS setup’ so you can enter the guest’s BIOS and move the CD-ROM to the top of the boot order (by default the VM only boots from the attached CD-ROM if the attached disk fails to boot first):

With the gparted iso or Ubuntu desktop install iso attached, restart the VM, and then run gparted.

Use gparted to expand the partition into the free space.

Once resized, reboot the Ubuntu guest (reset the boot order or detach the CD-ROM ISO image first).
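
Back in the guest, you can confirm the partition now spans the larger disk before touching LVM (the device names here match the pvdisplay output below; yours may differ):

# sda should show the new disk size, with sda5 (the LVM partition) now filling the added space
lsblk /dev/sda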

Use pvdisplay to get the Volume Group name:

$ sudo pvdisplay

  --- Physical volume ---

  PV Name               /dev/sda5

  VG Name               ubuntu-vg

  PV Size               39.76 GiB / not usable 2.00 MiB

  Allocatable           yes (but full)

  PE Size               4.00 MiB

  Total PE              10178

  Free PE               0

  Allocated PE          10178

Next, make LVM see the extra space. Since GParted expanded the existing /dev/sda5 partition, which is already the physical volume behind ubuntu-vg, resize the PV in place with pvresize (vgextend is only needed if you instead add a brand-new partition to the volume group):

sudo pvresize /dev/sda5

Use lvextend with the “-l+100%FREE” option to expand the logical volume into the newly added free extents:

sudo lvextend -l+100%FREE /dev/ubuntu-vg/root

Finally, use resize2fs to grow the filesystem to fill the resized logical volume:

sudo resize2fs /dev/mapper/ubuntu--vg-root
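
To confirm everything took effect, check the logical volume and the root filesystem (same LV path as above):

sudo lvdisplay /dev/ubuntu-vg/root   # LV Size should now reflect the expanded disk
df -h /                              # the root filesystem should show the additional space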

Done!

More info on using gparted here. Info on resizing LVM disks in this article here.

GitLab not restarting, PostgreSQL service not running

After restarting my GitLab server I kept getting the 502 “GitLab is taking too much time to respond” error.

Taking a look at the running services, I get this:

$ sudo gitlab-ctl status

run: gitaly: (pid 1048) 382s; run: log: (pid 1046) 382s

run: gitlab-monitor: (pid 1035) 382s; run: log: (pid 1033) 382s

run: gitlab-workhorse: (pid 1047) 382s; run: log: (pid 1045) 382s

run: logrotate: (pid 1029) 382s; run: log: (pid 1028) 382s

run: nginx: (pid 3900) 15s; run: log: (pid 1026) 382s

run: node-exporter: (pid 1031) 382s; run: log: (pid 1030) 382s

run: postgres-exporter: (pid 1039) 382s; run: log: (pid 1038) 382s

down: postgresql: 0s, normally up, want up; run: log: (pid 1041) 382s

run: prometheus: (pid 3919) 15s; run: log: (pid 1032) 382s

run: redis: (pid 1053) 382s; run: log: (pid 1050) 382s

run: redis-exporter: (pid 1037) 382s; run: log: (pid 1036) 382s

run: sidekiq: (pid 3931) 14s; run: log: (pid 1049) 382s

run: unicorn: (pid 3937) 14s; run: log: (pid 1044) 382s

Everything is up apart from PostgreSQL. Stopping all the services and restarting, or even rebooting the server, still gives the same error. GitLab’s PostgreSQL logs show:

2018-03-13_04:04:45.73226 FATAL:  pre-existing shared memory block (key 5432001, ID 0) is still in use

2018-03-13_04:04:45.73232 HINT:  If you're sure there are no old server processes still running, remove the shared memory block or just delete the file "postmaster.pid".

A quick search turned up this identical question. Following the steps in the first answer:

sudo gitlab-ctl stop
sudo systemctl stop gitlab-runsvdir.service
ps aux | grep postgre   # check if there are any postgres processes; there shouldn't be
sudo rm /var/opt/gitlab/postgresql/data/postmaster.pid
sudo systemctl start gitlab-runsvdir.service
sudo gitlab-ctl reconfigure

Then run ‘sudo gitlab-ctl start’, and everything is back up and running cleanly.
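
To double-check that PostgreSQL actually stays up this time, you can ask for just that service (gitlab-ctl takes an optional service name):

sudo gitlab-ctl status postgresql
# should now report "run: postgresql: ..." rather than "down: postgresql: ..."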

Migrating an existing WordPress + nginx + php5-fpm + mysql website to Docker containers: lessons learned

I’ve covered in previous posts why I wanted to Dockerize my site and move to containers; you can read about it in my other posts shared here. Having played with Docker on personal projects for several months at this point, I thought the migration was going to be easy, but I ran into several issues and unexpected decisions along the way. In this post I’ll summarize a few of these issues and learning points.

Realizing the meaning of ‘containers are ephemeral’, or ‘where do I put my application data’?

Docker images are the blueprint for a container, while the container is a running instance of an image. It’s clear from the Docker docs and elsewhere that you should treat your containers as ‘ephemeral’, meaning they only exist while they’re up and running, their state is temporary, and once they are discarded their state is also lost.

This is an easy concept to grasp at a high level, but in practice it leads to important and valid questions, like ‘so where does my data go?’ This became very apparent to me when transferring my existing WordPress data. First, I have data in MySQL tables that needs to be imported into the new MySQL server running in a container. Second, where does the wordpress/wp-content directory go, which in my case contains nearly 500MB of uploaded images from my 2,000+ posts?

The MySQL data was easy to address, as the official MySQL Docker image is already set up to use Docker’s data volume feature by default, keeping the MySQL data files outside of your running container.
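
As an illustration rather than my exact setup (the container name, volume name, password and dump file below are placeholders), running the official image with a named volume keeps the data files outside the container and makes importing an existing dump straightforward:

# the official image declares /var/lib/mysql as a volume; naming it makes it easy to manage
docker run -d --name wordpress-db \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v wordpress-db-data:/var/lib/mysql \
  mysql:5.7

# import the existing WordPress dump (assuming it includes its CREATE DATABASE statement)
docker exec -i wordpress-db mysql -uroot -pchangeme < wordpress-dump.sql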

The issue of where to put my WordPress wp-content containing 500MB of upload files is what caused my aha moment with data volumes. Naively, you can create an image and use the COPY instruction to copy any number of files into it, even 500MB of images, but when you start to move that image around, like pushing it to a repository or a remote server, you quickly realize you’ve created something impractical. Make incremental changes to an image containing that quantity of files and you’ll find you can’t push it anywhere quickly.

To address this, I created an image with nginx and php5-fpm installed, but used Docker’s bind mount to reference and load my static content outside the container.
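
A minimal sketch of that approach (the image name and host path are placeholders; the path inside the container depends on how nginx and php5-fpm are configured in the image):

# keep the large wp-content directory on the host and bind-mount it into the container
docker run -d --name wordpress-web \
  -p 80:80 \
  -v /var/www/wp-content:/var/www/html/wp-content \
  my-nginx-php5-fpm-image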

Now I have my app in containers, how do I actually deploy to different servers?

Up until this point I had built and run containers locally and set up a local Docker registry to push images to for testing, but the main things I wanted to enable with this migration were:

  • building and testing the containers locally
  • testing deployment to a VM server with an identical setup to my production KVM hosted server
  • pushing to my production server when I was ready to deploy to my live site

Before the Windows and MacOS native Docker installations, I thought docker-machine was just a way to deploy to a locally running Docker install in a VM. It never occurred to me that you can also use the docker-machine command to act on any remote Docker install too.

It turns out that even just setting an env var, DOCKER_HOST, to point at the IP of any remote Docker server will let you direct commands to that remote server. I believe part of the ‘docker-machine create’ setup helps automate setting up TLS certs for communicating with your remote server, but you can also do this manually following the steps here. I took this approach because I wanted to use the certs from my dev machine as well as my GitLab build machine.
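
For example (the address and cert path are placeholders), once the remote daemon is listening with TLS enabled, pointing the local client at it is just a few environment variables:

# direct the local docker CLI at a remote Docker daemon over TLS
export DOCKER_HOST=tcp://203.0.113.10:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker/remote-certs   # ca.pem, cert.pem and key.pem for that server

docker ps   # now lists containers on the remote server, not the local machine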

I used this approach to build my images locally, and then, on committing my Dockerfile and source changes to my GitLab repo, I set up a CI pipeline to run the same commands: pushing automatically to a locally running test VM server, and then, as a manual step, to my production server. A rough sketch of the deploy commands is below.
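
As a rough idea of what the deploy steps run (the registry address, machine name, image tag and paths are all placeholders; my actual pipeline differs in the details):

# build and push the image to the private registry
docker build -t registry.example.com/my-site:latest .
docker push registry.example.com/my-site:latest

# point the docker CLI at the test VM (or the production server) and roll out the new image
eval $(docker-machine env test-vm)
docker pull registry.example.com/my-site:latest
docker stop wordpress-web && docker rm wordpress-web
docker run -d --name wordpress-web -p 80:80 \
  -v /var/www/wp-content:/var/www/html/wp-content \
  registry.example.com/my-site:latest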

I’ll cover my GitLab CI Pipeline setup in an upcoming post.

How do you monitor an application running in containers?

I’ve been evaluating a number of approaches. Prometheus looks like a great option, and I’ve been setting it up on my test server to take a look. I’m still considering a few related options, maybe even using Grafana to visualize the metrics. I’ll cover this in a future post too.
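
As a starting point, Prometheus itself runs nicely as a container; a minimal sketch (the host path for the config file is a placeholder):

# run Prometheus in a container, with its config bind-mounted from the host
docker run -d --name prometheus \
  -p 9090:9090 \
  -v /opt/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus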

Burning ISOs to disk on MacOS El Cap and later

Prior to El Cap, Disk Utility on MacOS had an icon to burn an ISO to disk when you mounted the ISO. For whatever reason this was removed in El Cap and later, but the ability to burn ISOs has always been provided from the Finder.

Right-click an ISO file in the Finder and you’ll see a burn option. More info in this article here.
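
If you prefer the command line, the same thing can be done with hdiutil (assuming a blank disc is in the drive; the file name is a placeholder):

# burn an ISO image to the disc in the optical drive
hdiutil burn ~/Downloads/example.iso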