Updating Kubernetes master node and worker node config if an IP address changes

I have a test Kubernetes cluster running with a CentOS7 master node and 4 CentOS7 worker nodes, under VMware ESXi. The IP address of each of the VMs is assigned by DHCP, and as I hadn’t booted these VMs for a while, when I recently started them up they all got new IP addresses, so the cluster would not start up, and all the .kube/config files were now referring to incorrect IP addresses. Note to self: this is a good reason why you should use DNS names for the nodes in your cluster instead of IP addresses, especially IP addresses that can change.

Anyway, to restore my cluster back to a working state, I reinitialized the master node, and then joined the workers to the new master.

First on the master:

sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

#take a copy of the kubeadm join command to run on the workers

#copy kube config for local kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

#apply networking overlay
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml

#for each of the worker nodes, scp the config file to each node for local kubectl use
scp /etc/kubernetes/admin.conf kev@192.168.1.86:~/.kube/config
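With the config copied, kubectl on that worker node should now be talking to the new master; a quick sanity check (the worker nodes won't appear until they've rejoined in the next step):

kubectl get nodes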

On each of the worker nodes:

sudo kubeadm reset

#then run the kubeadm join command shown from the master when you ran kubeadm init
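The join command printed by kubeadm init looks something like the following; the master address, token and cert hash here are placeholders, so use the exact command from your own init output:

sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>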

The best way to learn anything new in software development is to try it out yourself

You can read books, watch YouTube videos and listen to as many podcasts as you like, but the best way to learn anything new in software development is to try it out for yourself. Why? Because you’ll learn far more from hands-on experimentation with a new tech/library/api when you try to use it than can ever be transferred as knowledge and experience from a single book/video/article/podcast. A single 1 hour podcast can give you a high level overview of a topic, but you can never learn as much as you will from trying it out yourself.

Part of the learning experience is working out how to solve the problems you run into. The ‘huh, it never said that in the manual’ experience. Once you’ve worked through the unexpected issues along the way, you’ll have built a much deeper understanding of what it actually takes to use a new technology. It’s where the rubber meets the road that counts.

Kubernetes Rolling Updates: implementing a Readiness Probe with Spring Boot Actuator

During a Rolling Update on Kubernetes, if a service has a Readiness Probe defined, Kubernetes will use the results of calling this healthcheck to determine when updated pods are ready to start accepting traffic.

Kubernetes supports two probes to determine the health of a pod:

  • the Readiness Probe: used to determine if a pod is ready to accept traffic
  • the Liveness Probe: used to determine if a pod is still responding appropriately; if it isn’t, the pod is killed and a new pod is started

Spring Boot Actuator’s default healthcheck, which indicates whether a service is up and ready for traffic, can be used as a Kubernetes Readiness Probe. To include it in an existing Spring Boot service, add the Actuator Maven dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

This adds the default healthcheck accessible at /actuator/health, which returns a 200 (and a JSON response { "status" : "UP" }) if the service is up and running.
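You can check the endpoint yourself once the service is running; assuming the default port of 8080 and no custom context path, something like:

curl -i http://localhost:8080/actuator/health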

To configure Kubernetes to call this Actuator healthcheck to determine the health of a pod, add a readinessProbe to the container spec in your deployment.yaml:

spec:
  containers:
  - name: exampleservice-b
    image: kevinhooke/examplespringboot-b:latest
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /example-b/v1/actuator/health
        port: 8080
      initialDelaySeconds: 5
      timeoutSeconds: 5
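A Liveness Probe can be added alongside the readinessProbe in the same way; this is just a sketch reusing the same Actuator endpoint, with assumed delay and period values:

    #kill and restart the container if the healthcheck stops responding
    livenessProbe:
      httpGet:
        path: /example-b/v1/actuator/health
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10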

Kubernetes will call this endpoint to check when the pod is deployed and ready for traffic. During a rolling update, as new pods are created with an updated image, you’ll see their status go from 0/1 available to 1/1 available as soon as the Spring Boot service has completed startup and the healthcheck is responding.

The gif below shows deployment of an updated image to a pod. Notice how, as new pods are created, they move from 0/1 to 1/1, and then when they are ready the old pods are destroyed:
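If you want to try this yourself, a rolling update can be triggered and watched with something like the following; the deployment and container names are assumed to match the example spec above:

#update the container image, which triggers a rolling update
kubectl set image deployment/exampleservice-b exampleservice-b=kevinhooke/examplespringboot-b:latest

#watch the rollout and the pods transitioning from 0/1 to 1/1
kubectl rollout status deployment/exampleservice-b
kubectl get pods -w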

Checking iptables filtering for bridge networking on Ubuntu (for Kubernetes setup)

If you’re installing and configuring a Kubernetes cluster yourself on bare metal or in a VM, one of the install steps in the kubeadm docs says to check iptables filtering for bridge networking, but it doesn’t exactly say how to do this per distro.

The setting required is:

net.bridge.bridge-nf-call-iptables=1

There are specific steps in the kubeadm docs above for RHEL/CentOS to add this setting. For Ubuntu it seems this is set by default, but you can confirm with:

sysctl net.bridge.bridge-nf-call-iptables

and the expected setting is 1:

net.bridge.bridge-nf-call-iptables = 1

It seems on Ubuntu 16.04 server this is set to 1 by default, but if it’s 0, you can set this property in /etc/sysctl.conf.
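A minimal sketch of making the change, assuming you have sudo access: set it immediately with sysctl -w, then persist it in /etc/sysctl.conf so it survives a reboot:

#set the value for the running system
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1

#persist it across reboots, then reload
echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p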