Updating an rke-created Kubernetes cluster from 1.11.3 to 1.11.5

A vulnerability was found today in some older Kubernetes versions, and patched versions are already available. If you have 1.11.3 installed with rke, you can update to 1.11.5 by editing your cluster.yml and replacing the kubernetes image:

kubernetes: rancher/hyperkube:v1.11.3-rancher1

with

kubernetes: rancher/hyperkube:v1.11.5-rancher1

And then run ‘rke up’ again.
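For context, this is roughly where that image line sits in the generated cluster.yml. This is just a sketch of the system_images section; the other image entries in your file stay as they are:

#cluster.yml (excerpt)
system_images:
  kubernetes: rancher/hyperkube:v1.11.5-rancher1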

This is from this GitHub issue.

Rancher RKE Kubernetes install notes

Rancher’s RKE is a Kubernetes cluster installer – see more here.

Pre-reqs:

  • Docker must be running on the client machine where you are going to run the rke setup tool
  • The docs are not obvious about this, but the rke tool is run on a client machine to provision your cluster; it is not run on any of the target cluster nodes

These notes are based on Ubuntu 16.04 server.

Remove prior Docker installs:

sudo apt-get remove docker docker-engine docker.io

Create docker group and add user to docker group:

sudo groupadd docker
sudo usermod -aG docker <user_name>
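Log out and back in (or start a new shell) so the docker group change takes effect, then check that you can talk to the daemon without sudo:

docker ps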

Install Docker per the Docker CE install steps here, or use the Rancher-provided install script here

Supported Docker versions for RKE (as of Dec 2018) are: 1.11.x, 1.12.x, 1.13.x and 17.03.x
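If you need one of these specific versions, Rancher publishes install scripts per Docker version. As far as I know they follow this pattern (double-check the URL for your version in the Rancher docs), e.g. for 17.03.x:

curl https://releases.rancher.com/install-docker/17.03.sh | sh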

Configure the Docker daemon to listen for incoming requests on port 2376, as per the steps here.
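On a systemd-based distro like Ubuntu 16.04, one common way to do this (a sketch of the general approach, not the exact steps from the linked doc) is a systemd drop-in that adds a TCP listener alongside the default unix socket:

#/etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376

#reload systemd and restart Docker to pick up the change
sudo systemctl daemon-reload
sudo systemctl restart docker

Note that 2376 is conventionally the TLS port, so also follow the TLS/cert parts of the linked steps rather than leaving the daemon exposed unauthenticated.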

Use ‘rke config’ to generate a default/minimal cluster.yml (see here), then install/set up the cluster with ‘rke up’
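As an illustration, a minimal cluster.yml for a single node running all roles looks roughly like this (the address, user and key path are placeholders for your own environment):

nodes:
  - address: 192.168.1.100
    user: ubuntu
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_rsa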

If you didn’t change the name of the cluster.yml file, after the install completes you’ll have a kube_config_cluster.yml file in the same dir. You can use this with kubectl to interact with your cluster, or merge it into your existing ~/.kube/config file
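For example, to point kubectl at the new cluster without touching your existing config:

#use the generated file directly
kubectl --kubeconfig kube_config_cluster.yml get nodes

#or export it for the current shell session
export KUBECONFIG=$(pwd)/kube_config_cluster.yml
kubectl get nodes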

Updating Kubernetes master node and worker node config if an IP address changes

I have a test Kubernetes cluster running with a CentOS 7 master node and 4 CentOS 7 worker nodes, under VMware ESXi. The IP addresses of the VMs are assigned by DHCP, and since I hadn’t booted these VMs for a while, they all got new IP addresses when I recently started them up. As a result the cluster would not start, and all the .kube/config files were referring to incorrect IP addresses. Note to self: this is a good reason to use DNS names for the nodes in your cluster instead of IP addresses, especially IP addresses that can change.

Anyway, to restore my cluster back to a working state, I reinitialized the master node and then joined the workers to the new master.

First on the master:

sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

#take a copy of the kubeadm join command to run on the workers

#copy kube config for local kubectl (chown so your own user can read it)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

#apply networking overlay
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml

#for each of the worker nodes, scp the config file to each node for local kubectl use
scp /etc/kubernetes/admin.conf kev@192.168.1.86:~/.kube/config

On each of the worker nodes:

sudo kubeadm reset

#then run the kubeadm join command shown from the master when you ran kubeadm init
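The join command will look something like the following (the master address, token and hash are placeholders here; use the exact command printed by kubeadm init on your master):

sudo kubeadm join 192.168.1.80:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>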

Kubernetes Rolling Updates: implementing a Readiness Probe with Spring Boot Actuator

During a Rolling Update on Kubernetes, if a service has a Readiness Probe defined, Kubernetes will use the result of calling this healthcheck to determine when updated pods are ready to start accepting traffic.

Kubernetes supports two probes to determine the health of a pod:

  • the Readiness Probe: used to determine if a pod is ready to accept traffic
  • the Liveness Probe: used to determine if a pod is still responding appropriately, and if not, the pod is killed and a new one started (a sketch of declaring one follows this list)
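A Liveness Probe is declared the same way as the Readiness Probe shown further down, just under a livenessProbe key. A minimal sketch (the path and port here assume the example service used later in this post):

livenessProbe:
  httpGet:
    path: /example-b/v1/actuator/health
    port: 8080
  initialDelaySeconds: 5
  timeoutSeconds: 5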

Spring Boot Actuator’s default healthcheck, which indicates whether a service is up and ready for traffic, can be used as a Kubernetes Readiness Probe. To include it in an existing Spring Boot service, add the Actuator Maven dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

This adds the default healthcheck at /actuator/health, which returns a 200 (and the JSON response { "status" : "UP" }) if the service is up and running.
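You can check this locally before wiring it into Kubernetes, for example (the port and context path here assume the example service used below; a plain Spring Boot app would just be http://localhost:8080/actuator/health):

curl http://localhost:8080/example-b/v1/actuator/health
{"status":"UP"}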

To configure Kubernetes to call this Actuator healthcheck to determine the health of a pod, add a readinessProbe to the container spec section of your deployment.yaml:

spec:
  containers:
  - name: exampleservice-b
    image: kevinhooke/examplespringboot-b:latest
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /example-b/v1/actuator/health
        port: 8080
      initialDelaySeconds: 5
      timeoutSeconds: 5

Kubernetes will call this endpoint to check when the pod is deployed and ready for traffic. During a rolling update, as new pods are created with an updated image, you’ll see their status go from 0/1 available to 1/1 available as soon as the Spring Boot service has completed startup and the healthcheck is responding.
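To kick off and watch a rolling update, assuming the Deployment and container are both named exampleservice-b as in the spec above (the new image tag is a placeholder):

#update the container image, which starts a rolling update
kubectl set image deployment/exampleservice-b exampleservice-b=kevinhooke/examplespringboot-b:1.1

#watch the rollout progress and the pods move from 0/1 to 1/1 READY
kubectl rollout status deployment/exampleservice-b
kubectl get pods -w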

The gif below shows deployment of an updated image to a pod. Notice how, as new pods are created, they move from 0/1 to 1/1, and then once they are ready, the old pods are destroyed: