Updating Kubernetes master node and worker node config if an IP address changes

I have a test Kubernetes cluster running with a CentOS7 master node and 4 CentOS7 worker nodes, under VMware ESXi. The IP address of each of the VMs comes from DHCP, and as I hadn’t booted these VMs for a while, they all got new IP addresses when I recently started them up, so the cluster would not start and all the .kube/config files were now referring to incorrect IP addresses. Note to self – this is a good reason why you should use DNS names for the nodes in your cluster instead of IP addresses, especially IP addresses that can change.
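
A quick way to confirm this is the problem is to compare the API server address in the kubeconfig with the address the master actually has now (assuming the config is in the default ~/.kube/config location):

#the api server address kubectl is currently trying to use
grep 'server:' ~/.kube/config

#the address the master node actually has now
ip addr show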

Anyway, to restore my cluster to a working state, I reinitialized the master node, and then joined the workers to the new master.

First on the master:

sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

#take a copy of the kubeadm join command to run on the workers
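#the join command looks roughly like the line below (the IP, token and hash are
#placeholders - use exactly what your own kubeadm init output shows, as the flags
#can vary between kubeadm versions):
#kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>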

#copy kube config for local kubectl
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
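
#on a machine without an existing ~/.kube directory, create it before running the
#cp above, and chown the copied file to your own user (the usual kubeadm post-init steps):
mkdir -p $HOME/.kube
sudo chown $(id -u):$(id -g) $HOME/.kube/config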

#apply networking overlay
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml
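
#check the master reports Ready and the flannel pods start up (standard kubectl
#checks, nothing specific to this setup)
kubectl get nodes
kubectl get pods --all-namespaces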

#for each of the worker nodes, scp the config file to each node for local kubectl use
scp /etc/kubernetes/admin.conf kev@192.168.1.86:~/.kube/config
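
With 4 workers, a quick loop saves repeating that scp by hand for each one (the first IP is the node from above, the other three are just placeholders for whatever addresses DHCP handed out):

for ip in 192.168.1.86 192.168.1.87 192.168.1.88 192.168.1.89; do
  scp /etc/kubernetes/admin.conf kev@$ip:~/.kube/config
done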

On each of the worker nodes:

sudo kubeadm reset

#then run the kubeadm join command that was output on the master when you ran kubeadm init
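
Back on the master, once each worker has run its kubeadm join, they should all show up as Ready with their new IP addresses:

kubectl get nodes -o wide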
