tl;dr: AWS introduced their version of a managed Kubernetes as a Service earlier this year, EKS. A link to the setup guide and my experience following the instructions are below. After running into a couple of issues (using the IAM role in the wrong place, and running the CloudFormation stack creation and cluster creation steps as a root user instead of an IAM user), I spent at least a couple of hours trying to get an EKS cluster up, and then wanted to find out how easy or otherwise it is to provision a Kubernetes cluster on the other major cloud vendors. On Google Cloud, it turns out it’s incredibly easy – it took less than 5 minutes using their web console on my phone while in a car (as a passenger of course 🙂 ). From reading similar articles it sounds like the experience on Azure is similar. AWS clearly have some work to do in this area to get their provisioning closer to Google’s couple-of-button-clicks-and-you’re-done approach. If you’re still interested in the steps to provision EKS then continue reading; otherwise, in the meantime I’m off to play with Google’s Kubernetes Engine 🙂
The full Getting Started guide for AWS EKS is here, but here are the condensed steps required to deploy a Kubernetes cluster on AWS:
Create a Role in IAM for EKS:
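I did this through the console, but for reference the CLI equivalent is roughly the sketch below (eks_role matches the role name that shows up in my error message further down; the two AmazonEKS* managed policies are the ones the guide has you attach):

cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# create the role the EKS control plane will assume
aws iam create-role --role-name eks_role \
  --assume-role-policy-document file://eks-trust-policy.json

# attach the two EKS managed policies
aws iam attach-role-policy --role-name eks_role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name eks_role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy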
Create a Stack using the linked CloudFormation template in the docs – I kept all the defaults and used the role I created above.
At this point I attempted to create the Stack, but got this error:
Template validation error: Role arn:aws:iam::xxx:role/eks_role is invalid or cannot be assumed
I assumed that the Role created in the earlier step was to be used here, but it’s actually used later when creating your cluster, not for running the CloudFormation template. Don’t enter it here; leave this field blank.
When the Stack creation completes, you’ll see the Stack status change to CREATE_COMPLETE.
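If you’d rather script this step too, the same thing via the CLI looks roughly like this (the template URL is the amazon-eks-vpc-sample template linked in the guide, and eks-vpc is just my example Stack name):

aws cloudformation create-stack --stack-name eks-vpc \
  --template-url <amazon-eks-vpc-sample template URL from the guide>

# poll until this returns CREATE_COMPLETE
aws cloudformation describe-stacks --stack-name eks-vpc \
  --query 'Stacks[0].StackStatus'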
Back to the setup steps:
Install kubectl if you don’t already have it
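For example, on Linux something like this works (macOS and Windows binaries are linked from the Kubernetes docs):

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client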
Download and install aws-iam-authenticator, and add it to your PATH.
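Something like this, assuming Linux (grab the actual download URL for your platform from the guide):

curl -o aws-iam-authenticator <download URL from the guide>
chmod +x ./aws-iam-authenticator
mkdir -p $HOME/bin && mv ./aws-iam-authenticator $HOME/bin/
export PATH=$HOME/bin:$PATH    # add this line to your shell profile to make it stick
aws-iam-authenticator help    # sanity check it's found on your PATH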
Back in the AWS Console, head to EKS and create your cluster.
For the VPC selection, the VPC selected by default was my account’s default VPC and not the VPC created during the Stack creation, so I changed it to the Stack’s VPC.
Since I had run and re-run the CloudFormation template a few times until I got it working, I ended up with a number of VPCs and SecurityGroups with the same name as the Stack. To work out which ones were currently in use, I went back to CloudFormation and checked the Outputs tab to get the SecurityGroupIds, VPCIds and SubnetIds in use by the current Stack. Using this info I then selected the matching values for the VPC and SecurityGroup (the one with ControlPlaneSecurityGroup in the name).
Cluster is up!
Initialize the aws cli and kubectl connection config to your cluster:
aws eks update-kubeconfig --name cluster_name
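This adds a new context to your ~/.kube/config; you can double-check that kubectl is pointing at the new cluster with:

kubectl config current-context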
At this point you have a running cluster with master nodes, but no worker EC2 nodes provisioned. That’s the next step in the Getting Started Guide.
Now check running services:
kubectl get svc
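On a working cluster you should just get back the default kubernetes service, something like this (the IP and age below are made-up values):

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   2m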
Instead, at this point I was prompted for credentials, and wondered what credentials it needed since my aws cli was already configured and logged in:
$ kubectl get svc
Please enter Username: my-iam-user
Please enter Password:
This post suggested that there’s a step in the guide that requires you to create the cluster with an IAM user and not your root user. I had obviously missed this and used my root user. I’ll delete the cluster, log on as an IAM user, and try again.
Created a new cluster with an Admin IAM user, and now I can see the cluster starting with:
aws eks describe-cluster --name test1

{
    "cluster": {
        "name": "test1",
        ...
        "status": "CREATING",
        ...
    }
}
Once the Cluster is up, continue with the instructions to add a worker node using the CloudFormation template file.
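Once the worker Stack completes, the guide has you apply an aws-auth ConfigMap so the worker nodes can register with the cluster. Roughly this, with the NodeInstanceRole ARN copied from the worker Stack’s Outputs tab:

cat > aws-auth-cm.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <NodeInstanceRole ARN from the Stack Outputs>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF

kubectl apply -f aws-auth-cm.yaml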
At this point, more errors: ‘Unauthorized’.
Searching around, I found this post, which implies that not only should you not create the cluster with your root user, but the stack also needs to be created with the same IAM user.
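A quick way to sanity-check which identity your aws cli is actually using before retrying (a root user shows up with an Arn ending in :root, rather than :user/your-name):

aws sts get-caller-identity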
Back to the start, trying one more time.
At this point I got distracted by the fact that it only takes 5 minutes and a couple of button clicks on Google Cloud to provision a Kubernetes cluster… so I’ll come back to getting this set up on AWS at a later point … in the meantime I’m off to kick the tires on Google Kubernetes Engine.