CCS Kubernetes Certificates Expire
Introduction
This document describes an outage that customers can face when a Kubernetes-based system has been
installed for more than 365 days. It also walks through the steps needed to fix the situation and get
the Kubernetes-based system back up and running.
Problem
One year after a default Kubernetes cluster installation, the client certificates expire and you will
no longer be able to access Cisco CloudCenter Suite (CCS). Although the suite still appears to be up, you
will not be able to log in. If you use the kubectl CLI, you will see this error: "Unable to connect to
the server: x509: certificate has expired or is not yet valid."
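For example, a basic kubectl command on a master fails with the same expired-certificate error:
kubectl get nodes
Unable to connect to the server: x509: certificate has expired or is not yet valid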
You can run this bash script to see the expiration dates of the certificates:
https://github.com/cisco-cx-workflows/cx-ao-shared-workflows/tree/master/CCSCheckKubernetesExpiration__definition_workflow_01E01VIRWZDE24mWlsHrqCGB9xUix0f9ZxG
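If you want to check manually instead, you can read the expiration date of a certificate directly on a master with openssl (a minimal sketch that assumes the default kubeadm certificate location):
# Print the expiration date of the apiserver certificate
openssl x509 -enddate -noout -in /etc/kubernetes/pki/apiserver.crt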
Solution
New certificates must be re-issued with kubeadm across the cluster, and the worker nodes must then be
re-joined to the masters.
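The later steps reference a kubeadm configuration file named kubeadmCERT.yaml; its contents look like this (the advertiseAddress placeholder is filled in from Step 2):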
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: <IP ADDRESS FROM STEP 2>
kubernetesVersion: v1.11.6
# NOTE: If the customer is running a load balancer VM, then you must add these lines after...
# apiServerCertSANs:
# - <load balancer IP>
6. Back up your old certificates and keys. This is not required but recommended. Make a
backup directory and copy these files into it.
# Files to back up:
# apiserver.crt
# apiserver.key
# apiserver-kubelet-client.crt
# apiserver-kubelet-client.key
# front-proxy-client.crt
# front-proxy-client.key
# For example:
cd /etc/kubernetes/pki
mkdir backup
mv apiserver.key backup/apiserver.key.bak
7. If you skipped Step 6, you can simply delete the previously mentioned files with the rm command,
for example rm apiserver.crt (see the sketch after this step).
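A minimal sketch of Step 7, removing all six files listed in Step 6:
cd /etc/kubernetes/pki
rm apiserver.crt apiserver.key
rm apiserver-kubelet-client.crt apiserver-kubelet-client.key
rm front-proxy-client.crt front-proxy-client.key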
8. Navigate back to where your kubeadmCERT.yaml file is located. Generate a new apiserver
cert via kubeadm --config kubeadmCERT.yaml alpha phase certs apiserver.
10. Generate a new front-proxy-client cert via kubeadm --config kubeadmCERT.yaml alpha
phase certs front-proxy-client (the regeneration commands are sketched together below).
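Taken together, the certificate regeneration commands look like this; the apiserver-kubelet-client line is an assumption based on the files backed up in Step 6, since that sub-command also exists in kubeadm v1.11:
# Run from the directory that contains kubeadmCERT.yaml
kubeadm --config kubeadmCERT.yaml alpha phase certs apiserver
kubeadm --config kubeadmCERT.yaml alpha phase certs apiserver-kubelet-client
kubeadm --config kubeadmCERT.yaml alpha phase certs front-proxy-client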
11. In the /etc/kubernetes folder, back up the .conf files. This is not required but recommended. You
should have kubelet.conf, controller-manager.conf, scheduler.conf, and possibly
admin.conf. You can delete them if you do not want to back them up.
12. Generate new configuration files via kubeadm --config kubeadmCERT.yaml alpha phase
kubeconfig all.
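Then point kubectl on the master at the regenerated admin credentials: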
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
chmod 777 $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
14. Reboot the master node via shutdown -r now.
15. Once the master is back up, check whether kubelet is running via systemctl status
kubelet.
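A quick check looks like this; the status output should report active (running), and the restart is only needed if it does not:
systemctl status kubelet
# Only if kubelet is not running:
systemctl restart kubelet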
18. On one master, generate a new join token via kubeadm token create --print-join-command.
Copy that command for later use.
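The printed join command looks roughly like this (all values are placeholders, not real tokens or hashes):
kubeadm join <master IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>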
22. Remove the old configuration files; these include ca.crt, kubelet.conf, and bootstrap-kubelet.conf.
rm /etc/kubernetes/pki/ca.crt
rm /etc/kubernetes/kubelet.conf
rm /etc/kubernetes/bootstrap-kubelet.conf
23. Grab the name of the node from Step 19.
24. Issue the join command on the worker in order to rejoin it to the cluster. Use the command from
Step 18, but add --node-name <name of node> to the end (a sketch follows below). When the join
succeeds, the output ends with:
Run 'kubectl get nodes' on the master to see this node join the cluster.
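A sketch of the rejoin command, using the placeholders from Step 18 and the node name from Step 19:
kubeadm join <master IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --node-name <name of node>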
25. Exit the worker and check the status on a master via kubectl get nodes. It should be in
Ready status.
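The output should look similar to this (node names, ages, and versions are illustrative only):
NAME       STATUS   ROLES    AGE    VERSION
master-1   Ready    master   366d   v1.11.6
worker-1   Ready    <none>   366d   v1.11.6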
27. The last kubectl get nodes should show that all nodes are in "Ready" status, back online,
and joined to the cluster.