After following the steps in the guide to create the certs:
$ cp ca.cert.pem ~/.helm/ca.pem
$ cp helm.cert.pem ~/.helm/cert.pem
$ cp helm.key.pem ~/.helm/key.pem
Then use the --tls option:
$ helm ls --tls
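As a sketch, assuming a Helm 2 client, you should also be able to point at the cert files explicitly instead of copying them into ~/.helm (the file paths here are placeholders for wherever your certs live):
$ helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem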
Attempting to run a pod on the master node, I get this error:
Warning FailedScheduling 14m (x2 over 14m) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
From the issue here, you can configure the master node to run pods with:
kubectl taint nodes --all node-role.kubernetes.io/master-
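Alternatively, if you'd rather leave the master taint in place, the pod itself can declare a toleration for it instead. A minimal sketch (the pod name and image are just examples):
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  # tolerate the master taint so the scheduler will place this pod on the master node
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  containers:
  - name: nginx
    image: nginx
EOF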
The website running this blog has been running in Docker containers on a small-ish 4GB VPS for the past 9 months, pretty much issue free. You can follow my journey migrating this site to Docker in posts here and here.
Since I’ve been spending time recently getting up to speed with Kubernetes, the next logical step would be to deploy this site to a Kubernetes cluster, which would give me an opportunity to find out what it takes to run Kubernetes. I’ve looked at the managed offerings on Google and AWS, but the cost for a few small personal projects is a little more than I want to spend.
I have an 8GB VPS ready to go, so far installed with Docker and Kubernetes running as a single node cluster, and I’m starting to plan my strategy for migrating to this new server. The first thing I’ve been thinking about is whether I should take my existing Docker images and just deploy them to Kubernetes as is. Where I’ve got stuck so far is that I don’t know enough about how to run NGINX with WordPress and MySQL in multiple pods, so I think I might install the WordPress Chart using Helm and for the time being not worry about how to do this myself.
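If I do go the Helm route, the install itself should be something like the following sketch (the release name is made up, and the stable/wordpress chart’s default values would likely need tuning for a small VPS):
$ helm repo update
$ helm install stable/wordpress --name blog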
The next thing I’ve been looking at is how to configure an Ingress to access the different deployed services via different URLs. I’ve been looking at setting up Traefik as the Ingress Controller to do this, and will be sharing a post about that config shortly. What I’m interested in is being able to deploy a number of different projects, including my WordPress site, and have them accessed via different URLs, and it looks like Traefik will handle this fine.
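The rough idea, as a sketch, is one Ingress per project with a host rule per URL, with Traefik picking them up via the ingress class annotation (the hostname and service name below are placeholders):
$ kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blog-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  # route blog.example.com to the WordPress service
  - host: blog.example.com
    http:
      paths:
      - backend:
          serviceName: blog-wordpress
          servicePort: 80
EOF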
I plan on writing some further posts as I make this transition over the next couple of weeks.
AWS was the last of the major cloud providers to offer a managed Kubernetes service (GA announced June this year). All the others have had offerings up and available for some time (Google Kubernetes Engine – GKE, Microsoft Azure Kubernetes Service – AKS, IBM Cloud Kubernetes Service, even Oracle Cloud has its Container Engine for Kubernetes). At the point when AWS announced their Kubernetes service last year, many people declared that the container orchestration wars between Kubernetes, Mesos and Docker Swarm (and others?) were over. At this point Kubernetes has become a common runtime platform for running microservices on any of the major cloud platforms.
The great thing about the pay-as-you-go approach of the cloud is that it’s easy to spin up anything on demand and kick the tires. I’ve been experimenting with Kubernetes running on my HomeLab ESXi server for a while, and have been bouncing around the idea of moving some personal hobby projects from running in Docker containers on cheap VPSes to my own Kubernetes cluster in the cloud.
I had a couple of attempts at walking through the rather extensive EKS setup instructions. On my first attempt I didn’t manage to get a working cluster running, but I learned enough about what I was supposed to do and where I’d gone wrong that on my next attempt I got my cluster up and running ok.
From my limited experience so far, there’s little in the way of being able to ‘one click provision’ a new EKS cluster on AWS. It takes about an hour to walk through the setup scripts, which, although well written, don’t offer enough automation and rely too heavily on the provided CloudFormation scripts. At the other end of the ease-of-provisioning spectrum, take a look at Google’s Kubernetes Engine offering. While on a road trip, I created a Google Cloud account on my phone as a passenger in a car, and created a GKE cluster with multiple nodes in less time than it took to create my Google Cloud billing account and enter my credit card details. Google’s provisioning via their web console has simplified the whole setup to the point where it only takes a couple of button clicks and you’re up and running. In comparison, AWS EKS is far from this point; it would be impossible to follow and run their setup scripts on your phone as a passenger in a car.
The other problem with the current approach on AWS is the extensive use of CloudFormation templates to create EKS clusters. There seems to be little connection between the bare bones EKS console web page and the resources you provision via the CloudFormation scripts, and this lack of connection resulted in a rather unpleasant monthly bill.
I created a test EKS cluster to do some testing, and when I’d finished, I deleted the cluster with the delete button on the EKS console page. I expected that this would have deleted all the resources created and associated with the cluster. Apparently though, if you delete your cluster from the EKS Console, only the master nodes are destroyed (at current prices 20c/hour, which is expensive compared to GKE and AKS, which run your cluster master nodes for free), but any other provisioned resources like Auto Scaling groups and your EC2 worker nodes are left active.
So if you delete your cluster from the Console and then forget about it for a couple of weeks, the cost of leaving a couple of t2.medium EC2 instances up that long comes to around $50. Ouch.
What makes this issue worse is that the Auto Scaling Group created from the CloudFormation templates for the worker nodes will keep recreating your EC2 nodes if you try to manually terminate them. So if you attempt to shut them down and you’re not paying attention, they’ll automatically get recreated within a few minutes.
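The fix, assuming your worker nodes came from the walkthrough’s CloudFormation stack, seems to be to either delete that stack or scale the Auto Scaling Group down to zero, rather than terminating the instances directly (the stack and group names below are placeholders for whatever you called yours):
$ aws cloudformation delete-stack --stack-name my-eks-worker-nodes
or:
$ aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-eks-node-group --min-size 0 --max-size 0 --desired-capacity 0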
Luckily, after creating a support ticket with AWS to explain that these nodes were left up and running even though I’d deleted the EKS cluster from the Console, they gave me a full refund for these unexpected charges. AWS, your customer support is awesome 🙂
So, lessons learned so far: