Upgrading Homelab Kubernetes Cluster from 1.19 to 1.20

Calico 3.18 has been released with support for Kubernetes 1.20, therefore it’s time to upgrade!

The Upgrade Path

Our cluster was built with Ansible using kubeadm, so we could simply delete the existing cluster and have Ansible build a new one with the latest version of Kubernetes. That, however, does not sound like much fun, so we will go the kubeadm upgrade route instead.

We will be upgrading from:

  1. kubeadm 1.19.7
  2. kubelet 1.19.7
  3. kubectl 1.19.7
  4. calico 3.17

to:

  1. kubeadm 1.20.5
  2. kubelet 1.20.5
  3. kubectl 1.20.5
  4. calico 3.18
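
To double-check what is currently installed on a node before starting (assuming the packages come from the Kubernetes yum repository), you can query the RPM database and the client/server versions:

$ rpm -q kubeadm kubelet kubectl
$ kubectl version --short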

Backup the Cluster

Kubernetes nodes run on KVM, therefore we have taken KVM snapshots of each virtual machine before starting the upgrade.
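
If the virtual machines are managed with libvirt and the libvirt domain names match the node names (an assumption in this sketch; internal snapshots also require qcow2 disk images), a snapshot can be taken for each node with virsh, for example:

$ sudo virsh snapshot-create-as srv31 pre-1.20-upgrade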

Upgrade Control Plane Nodes

Cluster node status before proceeding:

$ kubectl get no -o wide
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
srv31   Ready    master   40d   v1.19.7   10.11.1.31    <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://19.3.14
srv32   Ready    master   40d   v1.19.7   10.11.1.32    <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://19.3.14
srv33   Ready    master   40d   v1.19.7   10.11.1.33    <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://19.3.14
srv34   Ready    <none>   40d   v1.19.7   10.11.1.34    <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://19.3.14
srv35   Ready    <none>   12d   v1.19.7   10.11.1.35    <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://19.3.14
srv36   Ready    <none>   12d   v1.19.7   10.11.1.36    <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://19.3.14

Perform kubeadm upgrade

The upgrade procedure on the control plane nodes should be executed one node at a time. We will start with the first control plane node, srv31.
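
If you are unsure which 1.20 patch release is available in the repository, you can list the candidates first (this is the check suggested in the kubeadm upgrade documentation):

$ yum list --showduplicates kubeadm --disableexcludes=kubernetes

Upgrade kubeadm on srv31 and confirm the new version: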

$ sudo yum install -y kubeadm-1.20.5-0 --disableexcludes=kubernetes
$ kubeadm version

Verify the upgrade plan:

$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.19.7
[upgrade/versions] kubeadm version: v1.20.5
[upgrade/versions] Latest stable version: v1.20.5
[upgrade/versions] Latest stable version: v1.20.5
[upgrade/versions] Latest version in the v1.19 series: v1.19.9
[upgrade/versions] Latest version in the v1.19 series: v1.19.9

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     6 x v1.19.7   v1.19.9

Upgrade to the latest version in the v1.19 series:

COMPONENT                 CURRENT    AVAILABLE
kube-apiserver            v1.19.7    v1.19.9
kube-controller-manager   v1.19.7    v1.19.9
kube-scheduler            v1.19.7    v1.19.9
kube-proxy                v1.19.7    v1.19.9
CoreDNS                   1.7.0      1.7.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.19.9

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     6 x v1.19.7   v1.20.5

Upgrade to the latest stable version:

COMPONENT                 CURRENT    AVAILABLE
kube-apiserver            v1.19.7    v1.20.5
kube-controller-manager   v1.19.7    v1.20.5
kube-scheduler            v1.19.7    v1.20.5
kube-proxy                v1.19.7    v1.20.5
CoreDNS                   1.7.0      1.7.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.20.5

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

Upgrade the cluster:

$ sudo kubeadm upgrade apply v1.20.5
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.20.5"
[upgrade/versions] Cluster version: v1.19.7
[upgrade/versions] kubeadm version: v1.20.5
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.20.5"...
Static pod: kube-apiserver-srv31 hash: 158f4bd8a79d7db4eaf164381af8c83a
Static pod: kube-controller-manager-srv31 hash: bb3e6e68f1ad86e0ce0348be29608543
Static pod: kube-scheduler-srv31 hash: 57b58b3eb5589cb745c50233392349fb
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-srv31 hash: a266fbf1b7121936cd47439fa08ca01c
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-04-02-21-27-24/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-srv31 hash: a266fbf1b7121936cd47439fa08ca01c
Static pod: etcd-srv31 hash: a266fbf1b7121936cd47439fa08ca01c
Static pod: etcd-srv31 hash: 831009cc8a4aa20cc490b2988d4374fc
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests425684888"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-04-02-21-27-24/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-srv31 hash: 158f4bd8a79d7db4eaf164381af8c83a
Static pod: kube-apiserver-srv31 hash: a84b19f9229a51278d5980293549c7c0
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-04-02-21-27-24/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-srv31 hash: 2b33741afc25a93663d57a0c33b469e4
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-04-02-21-27-24/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-srv31 hash: 57b58b3eb5589cb745c50233392349fb
Static pod: kube-scheduler-srv31 hash: 9e23b1a40191518b4ea2c75208418b49
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.20.5". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

With the control plane components upgraded on srv31, we can upgrade the CNI plugin. We are going to upgrade from Calico 3.17 to Calico 3.18, which has been tested against Kubernetes 1.20.

$ kubectl apply -f https://docs.projectcalico.org/archive/v3.18/manifests/calico.yaml
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers configured
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
poddisruptionbudget.policy/calico-kube-controllers unchanged
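
To confirm the DaemonSet is now rolling out the 3.18 image, you can check the image of the calico-node container (a quick sanity check; this assumes the default manifest layout, where calico-node is the first container in the pod spec):

$ kubectl -n kube-system get ds calico-node -o jsonpath='{.spec.template.spec.containers[0].image}'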

For the other control plane nodes:

$ sudo yum install -y kubeadm-1.20.5-0 --disableexcludes=kubernetes
$ kubeadm version
$ sudo kubeadm upgrade node

According to the Kubernetes documentation, calling kubeadm upgrade plan and upgrading the CNI provider plugin again are not needed on the remaining control plane nodes.

Drain the Nodes and Upgrade kubelet and kubectl

$ export CONTROL_PLANE="srv31"
$ kubectl drain ${CONTROL_PLANE} --ignore-daemonsets
$ sudo yum install -y kubelet-1.20.5-0 kubectl-1.20.5-0 --disableexcludes=kubernetes
$ sudo systemctl daemon-reload && sudo systemctl restart kubelet
$ kubectl uncordon ${CONTROL_PLANE}
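
Before moving on to the next node, it is worth confirming that srv31 is back in the Ready state and reports the new version:

$ kubectl get no ${CONTROL_PLANE}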

Repeat the process for control plane nodes srv32 and srv33.

Upgrade Worker Nodes

We will start with the worker node srv34.

Upgrade kubeadm:

$ sudo yum install -y kubeadm-1.20.5-0 --disableexcludes=kubernetes
$ sudo kubeadm upgrade node

Drain the worker node:

$ export WORKER_NODE="srv34"
$ kubectl drain ${WORKER_NODE} --ignore-daemonsets

Upgrade kubelet and kubectl:

$ sudo yum install -y kubelet-1.20.5-0 kubectl-1.20.5-0 --disableexcludes=kubernetes
$ sudo systemctl daemon-reload && sudo systemctl restart kubelet
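
After the restart, a quick status check confirms that the kubelet came back up cleanly:

$ sudo systemctl status kubelet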

Uncordon the worker node:

$ kubectl uncordon ${WORKER_NODE}

Repeat the process for worker nodes srv35 and srv36.

Verify Cluster Status

Check cluster node status:

$ kubectl get no
NAME    STATUS   ROLES                  AGE   VERSION
srv31   Ready    control-plane,master   41d   v1.20.5
srv32   Ready    control-plane,master   41d   v1.20.5
srv33   Ready    control-plane,master   41d   v1.20.5
srv34   Ready    <none>                 41d   v1.20.5
srv35   Ready    <none>                 12d   v1.20.5
srv36   Ready    <none>                 12d   v1.20.5

Check Calico pods:

$ kubectl -n kube-system get po -l k8s-app=calico-node
NAME                READY   STATUS    RESTARTS   AGE
calico-node-6bplg   1/1     Running   0          63m
calico-node-868fb   1/1     Running   0          51m
calico-node-8z7ns   1/1     Running   0          53m
calico-node-c7sd8   1/1     Running   0          55m
calico-node-jfzn4   1/1     Running   0          55m
calico-node-ztqzc   1/1     Running   0          52m
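
The calico-kube-controllers deployment can be verified the same way (the k8s-app=calico-kube-controllers label is what the default Calico manifest applies):

$ kubectl -n kube-system get po -l k8s-app=calico-kube-controllers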

References

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
