Provisioning a Kubernetes cluster on AWS EKS (part 1, not up yet)

tl;dr: AWS introduced their version of managed Kubernetes as a Service, EKS, earlier this year. A link to the setup guide and my experience following those instructions is below. After running into a couple of issues (using the IAM role in the wrong place, and running the CloudFormation stack creation and cluster creation steps as the root user instead of an IAM user), I spent at least a couple of hours trying to get an EKS cluster up, and then wanted to find out how easy or otherwise it is to provision a Kubernetes cluster on the other major cloud vendors. On Google Cloud, it turns out it's incredibly easy: it took less than 5 minutes using their web console on my phone while in a car (as a passenger, of course 🙂 ). From reading similar articles it sounds like the experience on Azure is comparable. AWS clearly have some work to do in this area to get their provisioning closer to Google's couple-of-button-clicks-and-you're-done approach. If you're still interested in the steps to provision EKS then continue reading; otherwise, I'm off to play with Google's Kubernetes Engine 🙂

The full Getting Started guide for AWS EKS is here, but here are the condensed steps required to deploy a Kubernetes cluster on AWS:

Create a Role in IAM for EKS:
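If you prefer the CLI to the console, here's a minimal sketch of this step (the role name eks_role is just an example; the two managed policies are the ones the guide attaches):

# Trust policy that lets the EKS service assume the role
cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach the EKS managed policies
aws iam create-role --role-name eks_role \
    --assume-role-policy-document file://eks-trust-policy.json
aws iam attach-role-policy --role-name eks_role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name eks_role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy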

Create a Stack using the linked CloudFormation template in the docs – I kept all the defaults and used the role I created above.

At this point I attempted to create the Stack, but got this error:

Template validation error: Role arn:aws:iam::xxx:role/eks_role is invalid or cannot be assumed

I assumed that the Role created in the earlier step was to be used here, but it's used later when creating your cluster, not for running the CloudFormation template. Don't enter it here; leave the IAM Role field blank.
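For reference, creating the Stack from the CLI looks something like this (the stack name eks-vpc is just an example, and the template URL is a placeholder for the sample VPC template linked in the guide). Note there's no --role-arn option passed, since the EKS service role isn't used for this step:

aws cloudformation create-stack \
    --stack-name eks-vpc \
    --template-url https://<vpc-sample-template-url-from-the-guide>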

When the Stack creation completes you'll see it reach the CREATE_COMPLETE status in the CloudFormation console.

Back to the setup steps:

Install kubectl if you don’t already have it

Download and install aws-iam-authenticator, and add it to your PATH
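As a rough sketch, on Linux or macOS the installs look like this (the download URLs are placeholders; use the platform-specific links from the Getting Started guide):

# Download the two binaries (URLs are placeholders)
curl -o kubectl https://<kubectl-download-url-from-the-guide>
curl -o aws-iam-authenticator https://<authenticator-download-url-from-the-guide>

# Make them executable and move them somewhere on your PATH
chmod +x kubectl aws-iam-authenticator
sudo mv kubectl aws-iam-authenticator /usr/local/bin/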

Back to the AWS Console, head to EKS and create your cluster.

For the VPC selection, the first/default VPC selected was my default VPC and not the VPC created during the Stack creation, so I changed it to use the Stack's VPC.

Since I had run and re-run the CloudFormation template a few times until I got it working, I ended up with a number of VPCs and SecurityGroups with the same name as the Stack. To work out which ones were currently in use, I went back to CloudFormation and checked the Outputs tab to get the list of SecurityGroupIds, VPCIds and SubnetIds in use by the current Stack. Using this info I then selected the matching values for the VPC and SecurityGroup (the one with ControlPlaneSecurityGroup in the name).
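Rather than hunting through the console, you can also pull the same Outputs with the CLI (assuming the example stack name eks-vpc from earlier):

# Show the Outputs (security group, VPC and subnet ids) for the stack
aws cloudformation describe-stacks --stack-name eks-vpc \
    --query 'Stacks[0].Outputs'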

Cluster is up!

Initialize your kubectl connection config for the cluster using the AWS CLI:

aws eks update-kubeconfig --name cluster_name
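You can confirm the new context was added and is selected with:

kubectl config current-context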

At this point you have a running cluster with master nodes, but no worker EC2 nodes provisioned. That’s the next step in the Getting Started Guide.

Now check running services:

kubectl get svc

At this point I was prompted for credentials, and wondered what credentials it needed since my AWS CLI was already configured and logged in:

$ kubectl get svc
Please enter Username: my-iam-user
Please enter Password:

This post suggested that there's a step in the guide that requires you to create the cluster with an IAM user and not your root user. I obviously missed this and used my root user. I'll delete the cluster, log on as an IAM user and try again.
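As a side note, a quick way to check the IAM authentication piece in isolation is to ask aws-iam-authenticator for a token directly (substitute your cluster's name):

# If this prints a token, the authenticator can reach your AWS credentials
aws-iam-authenticator token -i cluster_name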

Created a new cluster with an Admin IAM user, and now I can watch the cluster creation status with:

aws eks describe-cluster --name test1

{
    "cluster": {
        "name": "test1",
        ...
        "status": "CREATING",
        ...
    }
}

Once the Cluster is up, continue with the instructions to add a worker node using the CloudFormation template file.

At this point, more errors: ‘Unauthorized’.

Searching around I found this post, which implies that not only should you not create the cluster with the root user, but the Stack also needs to be created with the same IAM user.
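Before re-running everything it's worth confirming exactly which identity your AWS CLI is using, so that the Stack and the cluster end up created by the same IAM user:

# Shows the account id and the user/role ARN your CLI calls run as
aws sts get-caller-identity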

Back to the start, trying one more time.

At this point I got distracted by the fact that it only takes 5 minutes and a couple of button clicks to provision a Kubernetes cluster on Google Cloud… so I'll come back to getting this set up on AWS at a later point. In the meantime I'm off to kick the tires on Google Kubernetes Engine.

SSH to AWS EC2: ‘permissions 0644 are too open’ error

To connect to an EC2 instance over SSH, if the permissions on your .pem file are too broad then you’ll see this error:

Permissions 0644 for 'keypair.pem' are too open.

It is required that your private key files are NOT accessible by others.

This private key will be ignored.

chmod the .pem file to 0400 and then you should be good. This is described here.
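For example (ec2-user is the default user for Amazon Linux AMIs; substitute your instance's hostname):

chmod 400 keypair.pem
ssh -i keypair.pem ec2-user@your-ec2-hostname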

Testing apps using AWS DynamoDB locally with the AWS CLI and JavaScript AWS SDK

One challenge when developing and testing code locally before you deploy is how you test against resources that are provisioned in the cloud. You can point directly to your cloud resources, but in some cases it’s easier if you can run and test code locally before you deploy.

DynamoDB has a local runtime that you can download and run locally from here.

Once downloaded, run it with:

java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb

You can use the AWS CLI to execute any of the DynamoDB commands against your local db by passing the --endpoint-url option like this:

aws dynamodb list-tables --endpoint-url http://localhost:8000

Docs for the available DynamoDB commands are here.

Other useful CLI commands (append --endpoint-url http://localhost:8000 to use the local db):

List all tables:

aws dynamodb list-tables 

Delete a table:

aws dynamodb delete-table --table-name tablename

Scan a table (with no criteria):

aws dynamodb scan --table-name tablename
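To round out the examples, here's a minimal sketch of creating a table and inserting an item against the local db (the table name, key and item are just for illustration):

# Create a simple table with a string hash key
aws dynamodb create-table --table-name tablename \
    --attribute-definitions AttributeName=id,AttributeType=S \
    --key-schema AttributeName=id,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 \
    --endpoint-url http://localhost:8000

# Put a test item
aws dynamodb put-item --table-name tablename \
    --item '{"id": {"S": "1"}, "name": {"S": "test"}}' \
    --endpoint-url http://localhost:8000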


Running AWS Lambda functions on a timed schedule

AWS Lambdas can be called directly or triggered to execute based on a configured event. If you take a look at the ‘CloudWatch Events’ section when configuring a trigger, there is a configuration option for a Rule that can take a cron expression to trigger a Lambda on a timed schedule. This could be useful for scheduling maintenance tasks or anything that needs to run on a periodic basis.

Scroll down to the ‘Configure trigger’ section and you'll see the ‘Schedule Expression’ field, where you can enter an expression like ‘rate(x minutes)’ or define a cron-style pattern.

Scroll further down and there's an option to enable the schedule rule straight away, or disable it for testing so you can enable it later when you're ready to start the schedule.

After you've created the Rule, if you need to edit it you'll find it over in the CloudWatch console, under Events / Rules.
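If you'd rather script this than click through the console, a rough CLI equivalent looks like this (the rule and function names, and the region and account ids in the ARNs, are placeholders):

# Create a rule that fires every 5 minutes
aws events put-rule --name my-schedule --schedule-expression 'rate(5 minutes)'

# Allow CloudWatch Events to invoke the function
aws lambda add-permission --function-name my-function \
    --statement-id my-schedule --action 'lambda:InvokeFunction' \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:us-east-1:123456789012:rule/my-schedule

# Point the rule at the function
aws events put-targets --rule my-schedule \
    --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:my-function'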