Migration to new VPS running my blog in Docker containers now complete!

After many more hours than I expected or planned, I’ve migrated this site to a new VPS provider, running on a larger KVM-based VPS. The site now runs nginx and php5-fpm in one Docker container and MySQL in another, linked together with docker-compose.
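To give an idea of the structure, a setup along these lines might look like the following minimal docker-compose sketch. This is for illustration only, not my actual config: the image names, ports, and credentials are all placeholders.

version: '2'
services:
  web:
    build: ./web                      # image bundling nginx + php5-fpm
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: mysql:5.7                  # placeholder tag
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder, keep real credentials out of the file
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data: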

Along the way I ran into several issues around performance and firewall configuration, which led to setting up a GitLab CI/CD pipeline (here and here) so I could iterate and deploy changes more quickly to a local test VM on my ESXi rack server. I set up this test VM to mirror the configuration of my VPS KVM, used a GitLab pipeline to push the containers to the test server, and then manually pushed to my production VPS server when ready to deploy.
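The pipeline itself is the subject of the follow-up posts, but in outline the workflow looks something like this rough .gitlab-ci.yml sketch (the stage names and deploy script are made up for illustration):

stages:
  - build
  - deploy-test
  - deploy-prod

build:
  stage: build
  script:
    - docker-compose build

deploy-test:
  stage: deploy-test
  script:
    - ./deploy.sh test-vm          # hypothetical deploy script targeting the ESXi test VM

deploy-prod:
  stage: deploy-prod
  when: manual                     # the push to production stays a manual step
  script:
    - ./deploy.sh production-vps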

The good news is I learned plenty along the way, but I also went down several rabbit holes chasing performance issues that turned out to be caused by my misconfiguration of Ubuntu’s UFW and by Docker’s interaction with iptables, which caused some weirdness.
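For anyone hitting the same weirdness: Docker inserts its own iptables NAT rules for published ports, and those rules bypass UFW’s INPUT chain, so a published port can be reachable from outside even when UFW appears to block it. Recent Docker versions provide the DOCKER-USER chain for adding your own restrictions on container traffic. A minimal sketch, assuming eth0 is the external interface (each -I inserts at the top, so the rules below are written in reverse and evaluate as: established, port 80, drop):

# drop anything else arriving on the external interface destined for containers
iptables -I DOCKER-USER -i eth0 -j DROP
# allow new inbound connections to port 80; note DNAT happens before this chain,
# so this matches the container port (fine here if the container also listens on 80)
iptables -I DOCKER-USER -i eth0 -p tcp --dport 80 -j ACCEPT
# allow reply traffic for established connections
iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT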

The other good news is I have plenty of RAM and CPU to spare in this KVM-based VPS where I’m running Docker, so I’ll be able to take advantage of this to deploy some other projects too (this was one of my other reasons for migrating to another server/provider). I’ll share additional posts on the specifics of the GitLab CI/CD config, Dockerfile, and docker-compose configurations in the next few days.

AWS Lambda cost calculator webapp

AWS Lambda usage costs are a little tricky to understand, because usage is billed in GB-seconds: the execution time of your Lambda multiplied by the GB of memory it is configured to use. For example, a single request to a Lambda configured with 1 GB that executes for 1 second is 1 GB-sec.

AWS offers a free tier that includes the first 400,000 GB-s and the first 1,000,000 requests each month for free. Above those you’re charged $0.00001667 for each GB-s and $0.20 for every 1M additional requests. Check the details here.
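As a worked example using the rates above (the workload numbers are made up): a Lambda configured with 512 MB that averages 0.2 sec per invocation and handles 5,000,000 requests a month uses 5,000,000 x 0.2 sec x 0.5 GB = 500,000 GB-s. The first 400,000 GB-s are free, so compute costs 100,000 x $0.00001667 ≈ $1.67, and the 4M requests over the free 1M cost 4 x $0.20 = $0.80, for a total of roughly $2.47/month.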

I put together a simple webapp that allows you to play with the numbers and see what your costs are going to look like. You can check it out (served from AWS S3) here:


https://s3-us-west-1.amazonaws.com/awslambdacostcalc/index.html

If you’re interested in taking a look at the source for the React app, it’s here on GitHub. Open an issue if you find any problems.

AWS CloudWatch default metrics for EC2 instances

AWS CloudWatch allows you to monitor events and logs from the services you are running. A set of default metrics is provided, and you can also create your own custom metrics.

For a running EC2 instance, let’s look at the metrics displayed beneath your selected instance, on the Monitoring tab:

By default we get metrics displayed for the following (the same metrics can also be queried from the AWS CLI, as sketched after this list):

  • CPU utilization
  • Disk reads/writes bytes and operations
  • Network in/out bytes and packets
  • Status checks failed
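For example, to pull CPU utilization from the command line (assuming the AWS CLI is installed and configured; the instance id and times here are placeholders):

aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time 2018-02-03T00:00:00Z \
    --end-time 2018-02-04T00:00:00Z \
    --period 300 \
    --statistics Average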

Now let’s create a new CloudWatch dashboard and add some metrics. Press ‘Create Dashboard’:

Next you can select a chart type, and then select from the available metrics. For EC2 there are 105 metrics to pick from:

Let’s see what options we have – you can enter filter values in the entry field. Let’s say I’m interested in disk reads/writes:

Notice the second column groups metrics by InstanceId, so if you have many instances (including, it seems, terminated instances, which show up in my list), make sure you pick the stats for the instance you want to monitor – here I’ve added widgets for disk read/write bytes and CPU utilization:
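A similar dashboard can also be created from the CLI with put-dashboard. A rough sketch only, with the dashboard name, region, and instance id as placeholders:

aws cloudwatch put-dashboard --dashboard-name my-ec2-dashboard --dashboard-body '{
  "widgets": [
    {
      "type": "metric",
      "properties": {
        "metrics": [["AWS/EC2", "CPUUtilization", "InstanceId", "i-0123456789abcdef0"]],
        "period": 300,
        "stat": "Average",
        "region": "us-west-1",
        "title": "CPU utilization"
      }
    }
  ]
}'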


Troubleshooting User Data scripts when creating AWS EC2 instances

When an AWS EC2 User Data script fails, you’ll see something like this in /var/log/cloud-init.log on your instance:

2018-02-03 06:08:16,536 - util.py[DEBUG]: Failed running /var/lib/cloud/instance/scripts/part-001 [127]
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 806, in runparts
    subp(prefix + [exe_path], capture=False)
  File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 1847, in subp
    cmd=args)
cloudinit.util.ProcessExecutionError: Unexpected error while running command.
Command: ['/var/lib/cloud/instance/scripts/part-001']
Exit code: 127
Reason: -
Stdout: -
Stderr: -
2018-02-03 06:08:16,541 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
2018-02-03 06:08:16,541 - handlers.py[DEBUG]: finish: modules-final/config-scripts-user: FAIL: running config-scripts-user with frequency once-per-instance

It tells you something failed, but not what. The trouble is that output from your User Data script does not go to cloud-init.log by default.

One of the answers in this post suggests piping your script commands and output through logger into a separate log file, like this:

# echo each command as it runs (the '+' prefixed lines in the output below)
set -x
# send all stdout/stderr from here on to /var/log/user-data.log and to syslog (tagged 'user-data')
exec > >(tee /var/log/user-data.log | logger -t user-data) 2>&1
echo BEGIN
date '+%Y-%m-%d %H:%M:%S'

Now running my script, which includes an ‘apt-get update -y’, produces output like this:

+ echo BEGIN
BEGIN
+ date '+%Y-%m-%d %H:%M:%S'
2018-02-03 23:37:55
+ apt-get update -y
... output continues here

And further down, here’s the specific error I was looking for:

+ java -Xmx1024M -Xms1024M -jar minecraft_server.1.12.2.jar nogui
/var/lib/cloud/instance/scripts/part-001: line 11: java: command not found

My EC2 instance running the Ubuntu AMI does not have Java installed by default, so I need to install it by adding this to my User Data script:

apt-get install openjdk-8-jre-headless -y

… and now my script runs as expected.