Kubernetes Rolling Updates: implementing a Readiness Probe with Spring Boot Actuator

During a Rolling Update on Kubernetes, if a service has a Readiness Probe defined, Kubernetes will use the results of calling this healthcheck to determine when updated pods are ready to start accepting traffic.

Kubernetes supports two probes to determine the health of a pod:

  • the Readiness Probe: used to determine if a pod is able to accept traffic
  • the Liveness Probe: used to determine if a pod is responding appropriately, and if not, Kubernetes will kill and restart its container (a sketch of a Liveness Probe config follows this list)
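
As a sketch of how a Liveness Probe could be defined (the Readiness Probe for this post's example service is configured further below), the probe section of a container spec might look like the following; the path and timing values here are illustrative assumptions, not taken from the example service:

livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10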

Spring Boot Actuator's default healthcheck, which indicates whether a service is up and ready for traffic, can be used as a Kubernetes Readiness Probe. To include it in an existing Spring Boot service, add the Actuator Maven dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

This adds the default healthcheck accessible at /actuator/health, which returns a 200 with the JSON response { "status" : "UP" } if the service is up and running.
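
For example, assuming the service is running locally on port 8080, you can verify the endpoint with curl; the response shown here is the default, and may include more detail depending on your Actuator configuration:

curl -i http://localhost:8080/actuator/health

HTTP/1.1 200
{"status":"UP"}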

To configure Kubernetes to call this Actuator healthcheck to determine the health of a pod, add a readinessProbe section to the container spec in your deployment.yaml:

spec:
  containers:
  - name: exampleservice-b
    image: kevinhooke/examplespringboot-b:latest
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /example-b/v1/actuator/health
        port: 8080
      initialDelaySeconds: 5
      timeoutSeconds: 5

Kubernetes will call this endpoint to check when the pod is deployed and ready for traffic. During a rolling update, as new pods are created with an updated image, you’ll see their status go from 0/1 available to 1/1 available as soon as the Spring Boot service has completed startup and the healthcheck is responding.
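
You can watch this progression with kubectl; as a sketch, assuming the deployment is named exampleservice-b (matching the container name above):

# watch pod status change as the rollout progresses
kubectl get pods -w

# or follow the rollout status of the deployment directly
kubectl rollout status deployment/exampleservice-b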

The gif below shows deployment of an updated image to a pod. Notice how as new pods are created, they move from 0/1 to 1/1, and then once they are ready, the old pods are destroyed:

Checking iptables filtering for bridge networking on Ubuntu (for Kubernetes setup)

If you’re installing and configuring a Kubernetes cluster yourself on bare metal or in a VM, one of the kubeadm install steps says to check iptables filtering for bridge networking, but it doesn’t say exactly how to do this for each distro.

The setting required is:

net.bridge.bridge-nf-call-iptables=1

There are specific steps in the kubeadm docs for RHEL/CentOS to add this setting. For Ubuntu it seems this is set by default, but you can confirm with:

sysctl net.bridge.bridge-nf-call-iptables

and the expected setting is 1:

net.bridge.bridge-nf-call-iptables = 1

It seems on Ubuntu 16.04 server this is set to 1 by default, but if it’s 0, you can set this property in /etc/sysctl.conf.
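
As a sketch of setting it manually, assuming the br_netfilter kernel module provides this setting on your distro:

# load the bridge netfilter module if it isn't already loaded
sudo modprobe br_netfilter

# persist the setting (the file name k8s.conf is an arbitrary choice)
echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee /etc/sysctl.d/k8s.conf

# apply settings from all sysctl config files
sudo sysctl --system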

kubernetes: switching kubectl contexts

Info about your currently configured clusters, and the contexts that refer to each of them, is stored in ~/.kube/config. You can view this config with:

kubectl config view

For contexts, scroll down to the contexts section.
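
As an illustration, an entry in the contexts section looks something like this (the cluster, user, and context names here are hypothetical):

contexts:
- context:
    cluster: mycluster
    user: mycluster-admin
  name: mycluster-context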

For your currently configured context:

 kubectl config current-context

To switch to another context:

kubectl config use-context contextname
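
For example, switching to the hypothetical context name from the sketch above:

# list available contexts (the current one is marked with a '*')
kubectl config get-contexts

# switch; kubectl responds with: Switched to context "mycluster-context".
kubectl config use-context mycluster-context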

Related info on kubectl: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/