To redeploy a service after an image has been updated:
kubectl rollout restart deployment [deployment-name]
I’m deploying an app to Kubernetes that references a Kubernetes Secret, exported as an env var on the pod. I couldn’t work out why I kept getting this error when the pod was starting up:
FATAL: password authentication failed for user "admin"
but if I exec’d into the pod to check the value of the env var, it was the correct value that I expected.
Eventually I did stumble across this clue – running ‘printenv’ inside the pod shows:
DB_PASSWORD=[value here]

KUBERNETES_SERVICE_PORT_HTTPS=443
[... other values here]
Between DB_PASSWORD and the next value there’s a blank line, followed by a long list of other env var values, with no other blank lines.
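You can also see the stray byte by decoding the stored Secret value locally and dumping the raw bytes (the base64 string below is a placeholder – substitute the value from your own Secret):

```shell
# 'c2VjcmV0Cg==' is "secret" encoded *with* a trailing newline.
# od -c prints each byte, so the stray \n at the end is visible.
echo 'c2VjcmV0Cg==' | base64 -d | od -c
```

If the last character shown is \n, the Secret was encoded with the newline included.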
From this question, the issue is how I originally encoded the base64 value with:
echo your-value-here | base64
which is not the same as:
echo -n your-value-here | base64
echo includes a trailing newline by default, so you need to use it as above with the -n option to suppress it.
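A quick side-by-side shows the difference (using the throwaway value "secret"; printf is an alternative that never appends a newline):

```shell
# Plain echo appends \n, so the newline gets encoded into the value:
echo secret | base64      # -> c2VjcmV0Cg==
# With -n the newline is suppressed:
echo -n secret | base64   # -> c2VjcmV0
# printf '%s' also encodes exactly the bytes you give it:
printf '%s' secret | base64   # -> c2VjcmV0
```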
‘kubectl describe nodes’ is showing
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
for each node.
This looks like the same issue as here – I followed the quickstart guide and used the ‘--flannel-backend none’ option, which means I set up a single-node cluster with no networking…
Following the network docs here, I changed my setup line, removing the ‘none’ option, and now everything is good:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - --token [my token]
Per the k3s docs here, copy the /etc/rancher/k3s/k3s.yaml from your controller to your local machine at ~/.kube/config.
Change the server IP to its actual IP, and then you should be able to use kubectl against the remote cluster.
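The edit can be done with sed; k3s writes the server address as https://127.0.0.1:6443. This is a self-contained sketch on a demo file (192.168.1.50 is a placeholder – in practice run the sed against ~/.kube/config and use your controller’s IP):

```shell
# Sample of the line k3s writes into k3s.yaml:
printf 'server: https://127.0.0.1:6443\n' > /tmp/kubeconfig-demo
# Rewrite the loopback address to the controller's real IP:
sed -i 's|https://127\.0\.0\.1:6443|https://192.168.1.50:6443|' /tmp/kubeconfig-demo
cat /tmp/kubeconfig-demo   # -> server: https://192.168.1.50:6443
```

After pointing ~/.kube/config at the real IP, kubectl get nodes should reach the remote cluster.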