SSH to AWS EC2: ‘permissions 0644 are too open’ error

When connecting to an EC2 instance over SSH, if the permissions on your .pem key file are too broad you’ll see this error:

Permissions 0644 for ‘keypair.pem’ are too open.

It is required that your private key files are NOT accessible by others.

This private key will be ignored.

chmod the .pem file to 0400 and then you should be good. This is described here.
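For example, assuming your key file is keypair.pem and you’re connecting to an Amazon Linux instance (substitute your own key name, user and host):

chmod 400 keypair.pem

ssh -i keypair.pem ec2-user@your-instance-public-dns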

Testing apps using AWS DynamoDB locally with the AWS CLI and JavaScript AWS SDK

One challenge when developing and testing code locally is how to test against resources that are provisioned in the cloud. You can point directly at your cloud resources, but in some cases it’s easier if you can run and test everything locally before you deploy.

DynamoDB has a local runtime that you can download from here and run on your own machine.

Once downloaded, run it with:

java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb

You can use the AWS CLI to execute any of the DynamoDB commands against your local db by passing the --endpoint-url option like this:

aws dynamodb list-tables --endpoint-url http://localhost:8000

Docs for the available DynamoDB commands are here.

Other useful CLI commands (append --endpoint-url http://localhost:8000 to use the local db):

List all tables:

aws dynamodb list-tables 

Delete a table:

aws dynamodb delete-table --table-name tablename

Scan a table (with no criteria):

aws dynamodb scan --table-name tablename
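The JavaScript AWS SDK can point at the local DynamoDB in the same way, by overriding the endpoint on the client. Here’s a minimal sketch using the v2 aws-sdk package; the region value is arbitrary for the local db and only needs to be a valid region string:

const AWS = require('aws-sdk');

// Point the client at the local DynamoDB rather than the real AWS endpoint
const dynamodb = new AWS.DynamoDB({
  region: 'us-east-1',
  endpoint: 'http://localhost:8000'
});

// List tables in the local db, equivalent to the CLI list-tables call above
dynamodb.listTables({}, (err, data) => {
  if (err) console.error(err);
  else console.log(data.TableNames);
});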


Running AWS Lambda functions on a timed schedule

AWS Lambdas can be called directly or triggered to execute based on a configured event. If you take a look at the ‘CloudWatch Events’ section when configuring a trigger, there is a configuration option for a Rule that can take a cron expression to trigger a Lambda on a timed schedule. This could be useful for scheduling maintenance tasks or anything that needs to run on a periodic basis:

Scroll down to the ‘Configure trigger’ section and you’ll see the ‘Schedule Expression’ field, where you can enter an expression like ‘rate(x minutes)’ (there’s a typo in the screenshot below: it should be ‘minutes’, not ‘minute’) or define a cron-style pattern:
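For example, both of these are valid Schedule Expressions (the first runs the Lambda every 5 minutes, the second at 12:00 UTC every day):

rate(5 minutes)

cron(0 12 * * ? *)

Note that the CloudWatch cron format has six fields (minutes, hours, day of month, month, day of week, year) and one of the two day fields must be ‘?’.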

Scroll further down and there’s an option to enable the schedule rule straight away, or leave it disabled for testing and enable it later when you’re ready to start the schedule:

After you’ve created the Rule, if you need to edit it later you’ll find it in the CloudWatch console, under Events / Rules.
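If you’d rather script this than click through the console, the same thing can be done with the AWS CLI. This is just a rough sketch; the rule name, function name, account id and region are placeholders for your own values:

aws events put-rule --name my-scheduled-rule --schedule-expression "rate(5 minutes)"

aws lambda add-permission --function-name my-function --statement-id my-scheduled-rule --action lambda:InvokeFunction --principal events.amazonaws.com --source-arn arn:aws:events:us-east-1:123456789012:rule/my-scheduled-rule

aws events put-targets --rule my-scheduled-rule --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:my-function"

The put-rule call creates the schedule rule, add-permission allows CloudWatch Events to invoke the function, and put-targets points the rule at your Lambda.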

Migration to new VPS running my blog in Docker containers now complete!

After many more hours than I expected or planned, I’ve migrated this site to a new provider, running on a larger KVM-based VPS. The site now runs with nginx and php5-fpm in one Docker container and MySQL in another, linked together with docker-compose.

Along the way I ran into several issues around performance and firewall configuration, which led to setting up a GitLab CI/CD pipeline (here and here) so I could iterate and deploy changes more quickly to a local test VM on my ESXi rack server. I set up this test VM to mirror the configuration of my VPS KVM, used the GitLab pipeline to push the containers to the test server, and then manually pushed to my production VPS server when ready to deploy.

The good news is I learned plenty along the way, but I also went down several rabbit holes chasing performance issues that turned out to be related more to my misconfiguration of Ubuntu’s UFW and Docker’s interaction with iptables, which caused some weirdness.

The other good news is that I have plenty of RAM and CPU to spare in this KVM-based VPS where I’m running Docker, so I’ll be able to take advantage of this to deploy some other projects too (this was one of my other reasons for migrating to another server/provider). I’ll share some additional posts about the specifics of the GitLab CI/CD config and the Dockerfile and docker-compose configurations in the next few days.