Next steps: From NGINX, WordPress and MySQL running on Docker, to Kubernetes

The website running this blog has been running in Docker containers on a small-ish 4GB VPS for the past 9 months, pretty much issue free. You can follow my journey to migrate this site to Docker in posts here and here.

Since I’ve been spending time recently getting up to speed with Kubernetes, the next logical step would be to deploy this site to a Kubernetes cluster, which would give me an opportunity to find out what it takes to run Kubernetes myself. I’ve looked at the managed offerings on Google and AWS, but the cost for a few small personal projects is a little more than I want to spend.

I have an 8GB VPS ready to go, so far installed with Docker and Kubernetes running as a single node cluster, and I’m starting to plan my strategy for migrating to this new server. The first thing I’ve been thinking about is whether I should take my existing Docker images and deploy them to Kubernetes as is. Where I’ve got stuck so far is that I don’t know enough about how to run NGINX with WordPress and MySQL across multiple pods, so I think I’ll install the WordPress Chart using Helm and, for the time being, not worry about how to do this myself.
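As a rough sketch of what that Helm install might look like (the chart repo and release name here are assumptions, and the exact flags depend on your Helm version):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Helm 3 syntax; with Helm 2 this would be 'helm install --name blog bitnami/wordpress'
helm install blog bitnami/wordpress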

The next thing I’ve been looking at is how to configure an Ingress to access different deployed services via different URLs. I’ve been looking at setting up Traefik as the Ingress Controller to do this, and will be sharing a post about that config shortly. What I’m interested in is being able to deploy a number of different projects, including my WordPress site, and have them accessed via different URLs, and it looks like Traefik will handle this fine.
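As a sketch of the kind of host-based routing I have in mind (the hostnames and service names below are placeholders, and the apiVersion will vary with your Kubernetes version):

kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress-blog
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: blog.example.com
    http:
      paths:
      - backend:
          serviceName: wordpress
          servicePort: 80
  - host: otherproject.example.com
    http:
      paths:
      - backend:
          serviceName: otherproject
          servicePort: 8080
EOF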

I plan on writing some further posts as I make this transition over the next couple of weeks.

InfoQ interview with Martin Fowler: 2nd Edition of his 1999 Classic, Refactoring, is now shipping!

Martin Fowler’s classic software development book, Refactoring, was first released in 1999. It’s been a staple on my bookshelf since I got a copy in 2000, something that I regularly refer back to for advice on how to improve the structure and maintainability of my code.

The 2nd edition of the book has now been released, updated for 2018, with all the code examples in the book, previously in Java, now replaced with equivalent examples in JavaScript. I have my copy on order with Amazon and should receive it before the end of the year.

InfoQ have a great podcast interview with Martin, discussing the motivations for releasing an updated edition for 2018, 19 years after the 1st edition was released. Check out the interview here.

AWS EKS: Kubernetes clusters provisioned with CloudFormation templates

AWS was the last of the major cloud providers to offer a managed Kubernetes service (GA announced June this year). All the others have had offerings up and available for some time (Google Kubernetes Engine – GKE, Microsoft Azure Kubernetes Service – AKS, IBM Cloud Kubernetes Service, even Oracle Cloud has its Container Engine for Kubernetes). At the point when AWS announced their Kubernetes service last year, many people declared that the container orchestration wars between Kubernetes, Mesos and Docker Swarm (and others?) were over. At this point, Kubernetes has become a common runtime platform for running microservices on any of the major cloud platforms.

The great thing about the pay-as-you-go approach of the cloud is that it’s easy to spin up anything on demand and kick the tires. I’ve been experimenting with Kubernetes running on my HomeLab ESXi server for a while, and have been bouncing around the idea of moving some personal hobby projects, currently running in Docker containers on cheap VPSes, to my own Kubernetes cluster in the cloud.

I had a couple of attempts at walking through the rather extensive EKS setup instructions. On my first attempt I didn’t manage to get a working cluster running, but I learned enough about what I was supposed to do and where I’d gone wrong that on my next attempt I got my cluster up and running ok.

From my limited experience so far, there’s little in the way of being able to ‘one click provision’ a new EKS cluster on AWS. It takes about an hour to walk through the setup instructions, which, although well written, offer too little automation and rely too heavily on the provided CloudFormation scripts. At the other end of the ease-of-provisioning spectrum, take a look at Google’s Kubernetes Engine offering. While on a road trip, I created a Google Cloud account on my phone as a passenger in a car and created a GKE cluster with multiple nodes in less time than it took to create my Google Cloud billing account and enter my credit card details. Google’s provisioning via their web console has simplified the whole setup to the point where it only takes a couple of button clicks and you’re up and running. In comparison, AWS EKS is far from this point; it would be impossible to follow and run their setup scripts on your phone as a passenger in a car.
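For comparison, creating a GKE cluster from the command line is a single command (the cluster name, node count and zone below are just examples):

gcloud container clusters create my-cluster \
  --num-nodes 3 \
  --zone us-central1-a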

The other problem with the current approach on AWS is the extensive use of CloudFormation templates to create EKS clusters – there’s little connection between the bare-bones EKS console web page and the resources you provision via the CloudFormation scripts. This lack of connection between the console page and the scripts resulted in a rather unpleasant monthly bill.

I created a test EKS cluster to do some testing, and when I’d finished, I deleted the cluster with the delete button on the EKS console page. I expected that this would delete all the resources created and associated with the cluster. Apparently though, if you delete your cluster from the EKS Console, only the master nodes are destroyed (which, at the current price of 20c/hour, is expensive compared to GKE and AKS, which run your cluster master nodes for free), but any other provisioned resources like Auto Scaling groups and your EC2 worker nodes are left active.

So if you delete your cluster from the Console and then forget about it for a couple of weeks, the cost of leaving a couple of t2.medium EC2 instances up and running is around $50. Ouch.

What makes this issue worse is that the Auto Scaling Group created by the CloudFormation templates for the nodes will keep recreating your EC2 nodes if you manually terminate them. So if you attempt to shut them down and you’re not paying attention, they’ll automatically get recreated within a few minutes.
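The cleanup that actually works is to delete the CloudFormation stack that created the worker nodes, rather than terminating the EC2 instances directly. A rough sketch with the AWS CLI (the stack name below is a placeholder for whatever you called the worker node stack during the EKS walkthrough):

# list the stacks created during the EKS walkthrough
aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE

# delete the worker node stack - this removes the Auto Scaling Group and the EC2 nodes it manages
aws cloudformation delete-stack --stack-name eks-worker-nodes

# double-check nothing is still running
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"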

Luckily, after creating a support ticket with AWS to explain that these nodes were left up running even though I’d deleted the EKS cluster from the Console, they gave me a full refund for these unexpected charges. AWS your customer support is awesome 🙂

So, lessons learned so far:

  • set up an AWS Budget with alarms so that if your monthly costs unexpectedly increase beyond what you plan to spend, you’ll be alerted and can take corrective action (see the sketch after this list)
  • don’t take CloudFormation templates for granted – check the resources they create, and keep an eye on the resources as they’re running
  • it’s great that you now have the option of a common runtime platform on every major cloud provider, but some of the other providers offer a much better user experience in terms of provisioning and tooling (although I expect AWS will catch up soon)
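For the first point, a budget with an email alert can be created from the AWS CLI as well as from the console. A minimal sketch, assuming a $20/month limit and an alert at 80% of that (the account id and email address are placeholders):

aws budgets create-budget \
  --account-id 111111111111 \
  --budget '{"BudgetName": "monthly-cost-cap", "BudgetLimit": {"Amount": "20", "Unit": "USD"}, "TimeUnit": "MONTHLY", "BudgetType": "COST"}' \
  --notifications-with-subscribers '[{"Notification": {"NotificationType": "ACTUAL", "ComparisonOperator": "GREATER_THAN", "Threshold": 80}, "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "you@example.com"}]}]'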

Running GitLab in a Docker container on a different port

GitLab by default runs on port 80. GitLab in a Docker container runs the same as when natively installed, but to change the port you need to change the config and change the exposed ports on the container.

First, per steps here, start the container with:

docker run --detach \
--hostname gitlab.example.com \
--publish host-https-port:container-https-port \
--publish host-http-port:container-http-port \
--publish 22:22 \
--name gitlab \
--restart always \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest

By default host-http-port and container-http-port are 80, and host-https-port and container-https-port are 443. Change these to whatever port you want to run on, but keep each pair the same (e.g. host-http-port and container-http-port = 8090).
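For example, to run GitLab’s HTTP port on 8090 and HTTPS on 8443 (the hostname and volume paths are the same placeholders as above):

docker run --detach \
--hostname gitlab.example.com \
--publish 8443:8443 \
--publish 8090:8090 \
--publish 22:22 \
--name gitlab \
--restart always \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest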

When the container is up, start a shell into the running container:

docker exec -it containerid sh

and then edit the config file:

vi /etc/gitlab/gitlab.rb

and add this line (at the top is ok):

external_url 'http://localhost:your-new-port-here'

setting your-new-port-here to the new port.
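Continuing the 8090 example from above, that line would be:

external_url 'http://localhost:8090'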

Reconfigure the server:

gitlab-ctl reconfigure
gitlab-ctl restart

Done!