Kubernetes upgrade notes: 1.21.x to 1.22.x

If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe also for others who manage a K8s cluster on their own). I’ll only mention changes that are either interesting for most K8s administrators anyway (even if they run a fully managed Kubernetes deployment) or relevant if you manage your own bare-metal/VM-based on-prem Kubernetes deployment. I normally skip changes that are only relevant for GKE, AWS EKS, Azure or other cloud providers.

I have a general upgrade guide, Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes, that has worked quite well for me for the past K8s upgrades. So please read that guide if you want to know HOW the components get updated. This post is specifically about the 1.21.x to 1.22.x upgrade and WHAT was interesting for me.

First: As usual I don’t update a production system before the .2 release of a new major version is out. In my experience the .0 and .1 releases are just too buggy (and to be honest, sometimes it’s even better to wait for the .5 release ;-) ). Of course it is still important to test new releases early in development or integration systems and to report bugs!

Second: I only upgrade from the latest version of the former major release. In my case I was running 1.21.4, and at the time of writing 1.21.8 was the latest 1.21.x release. After reading the 1.21.x changelog to see if any important changes were made between 1.21.4 and 1.21.8, I didn’t see anything that prevented me from updating and I didn’t need to change anything. So I did the 1.21.4 to 1.21.8 upgrade first. If you use my Ansible roles this basically only means changing the k8s_release variable from 1.21.4 to 1.21.8 and rolling the changes out to the control plane and worker nodes as described in my upgrade guide. After that everything still worked as expected, so I continued with the next step.
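
Just as a tiny sketch of what that step looks like in practice (the file name is only an example, the variable can live in whatever inventory/group_vars file you use):

bash

# Example only: check/adjust the Kubernetes release the Ansible roles should deploy.
# Adjust the path to wherever you keep the k8s_release variable.
grep k8s_release group_vars/all.yml
# Expected after the change:
# k8s_release: "1.21.8"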

Here are a few links that might be interesting regarding what’s new in Kubernetes 1.22:

Kubernetes 1.22 CHANGELOG
Kubernetes 1.22: Reaching New Peaks
What’s new in Kubernetes 1.22 - SysDig blog

Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that area. Quite nice! :-)

As it is normally no problem to have a kubectl utility that is one version ahead of the server version, I also updated kubectl from 1.21.x to 1.22.x using my kubectl Ansible role.
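
A quick sanity check that client and server stay within the supported version skew:

bash

# Shows the client (kubectl) and server (kube-apiserver) versions in short form.
kubectl version --short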

As always before a major upgrade, read the Urgent Upgrade Notes! If you used my Ansible roles to install Kubernetes and kept most of the default settings, then there should be no need to adjust anything. For the K8s 1.22 release I actually couldn’t find any urgent notes that were relevant for my Ansible roles or my own on-prem setup.

In What’s New (Major Themes) I found the following highlights that look most important to me:

  • Quite a few beta API versions have been REMOVED (not just deprecated!): ValidatingWebhookConfiguration, MutatingWebhookConfiguration, CustomResourceDefinition, APIService, TokenReview, SubjectAccessReview, CertificateSigningRequest, Lease, Ingress, and IngressClass. So before upgrading to K8s 1.22 make sure you don’t use the beta versions of these APIs anymore. Especially Ingress and IngressClass are most probably in use in many clusters. See Deprecated API Migration Guide and Kubernetes API and Feature Removals In 1.22: Here’s What You Need To Know

To check which API versions you currently use, you can run the following commands:

bash

kubectl get ingresses.extensions -A
kubectl get ingress.networking.k8s.io -A
kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io -A
kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io -A
kubectl get customresourcedefinitions.apiextensions.k8s.io -A
kubectl get apiservices.apiregistration.k8s.io -A
kubectl get certificatesigningrequests.certificates.k8s.io -A
kubectl get leases.coordination.k8s.io -A

For ingresses.extensions e.g. you get output like this:

bash

Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
No resources found
  • The Kubernetes release cadence has been reduced from 4 releases per year to 3 releases per year
  • etcd moves to version 3.5. For more information see below.
  • Kubernetes Node system swap support
  • Cluster-wide seccomp defaults. This is an Alpha feature. Nevertheless it is a very interesting one as it can increase workload security a lot.

A few interesting things I found in the Deprecation section:

  • K8s Controller Manager: The flags --port and --address have no effect and will be removed in 1.24
  • K8s Controller Manager: --authorization-kubeconfig and --authentication-kubeconfig MUST be specified and correctly set to get authentication/authorization working. Normally that means setting these parameters to the same value you supply to the --kubeconfig parameter. This also requires --requestheader-client-ca-file to be set. Its value is normally the same as the one you supply to --root-ca-file.
  • Liveness/readiness probes to the K8s Controller Manager MUST use HTTPS now, and the default port has been changed to 10257 (see the quick check after this list)
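
A minimal check (run on a control plane node) to verify that kube-controller-manager really answers via HTTPS on the new default port:

bash

# The health endpoint is only served via HTTPS on port 10257 now.
# -k skips certificate verification, which is good enough for a quick manual check.
curl -k https://127.0.0.1:10257/healthz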

A few interesting API Changes also took place of course:

  • As already mentioned above, the Ingress v1beta1 API has been removed.
  • IndexedJob has been promoted to Beta.
  • Alpha feature (already mentioned above): Added new kubelet alpha feature SeccompDefault. This feature enables falling back to the RuntimeDefault (formerly runtime/default) seccomp profile if nothing else is specified in the pod/container SecurityContext or via the pod annotations. To use it, enable the feature gate as well as set the kubelet configuration option SeccompDefault (--seccomp-default) to true (see the sketch after this list).
  • Alpha feature: Swap support can now be enabled on Kubernetes nodes with the NodeSwap feature gate.
  • Enable MaxSurge for DaemonSet by default.
  • Ephemeral containers are now allowed to configure a securityContext that differs from that of the Pod. Cluster administrators should ensure that security policy controllers support EphemeralContainers before enabling this feature in clusters.
  • The minReadySeconds API was introduced for StatefulSets.
  • Kube-apiserver: --service-account-issuer can be specified multiple times now, to enable non-disruptive change of issuer.
  • The scheduler can be configured to consider resources besides CPU and memory (GPUs, for example) in the score plugin of NodeResourcesBalancedAllocation.
  • Suspend Job feature graduated to beta.
  • NetworkPolicyEndPort graduated to beta and is enabled by default.
  • The podAffinity NamespaceSelector field and the associated CrossNamespaceAffinity quota scope features graduated to beta and are now enabled by default.
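
Regarding the SeccompDefault item above, here is a sketch of the two kubelet settings the release notes mention. It is only an excerpt on top of your existing kubelet flags (the --config path is an example), and alpha features should of course not be enabled blindly on production clusters:

bash

# Excerpt of a kubelet invocation with the alpha SeccompDefault feature enabled.
# Both the feature gate and the kubelet option need to be set (see the release note above).
kubelet \
  --feature-gates=SeccompDefault=true \
  --seccomp-default=true \
  --config=/var/lib/kubelet/kubelet-config.yaml
  # ... plus all the other flags you already pass to the kubelet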

And finally the interesting Features:

  • A system-cluster-critical pod no longer gets a very low OOM score. Action required: if you want such a pod to be OOMKilled last and it has the system-cluster-critical priority class, it has to be changed to the system-node-critical priority class to preserve the existing behavior.
  • Kube-apiserver: The alpha PodSecurity feature can be enabled by passing --feature-gates=PodSecurity=true, and enables controlling allowed pods using namespace labels (see the example after this list).
  • Update CoreDNS to 1.8.4. Grant CoreDNS permissions to “list” and “watch” EndpointSlice objects to accommodate dual-stack support.
  • The CronJob storage version was promoted to batch/v1.
  • Secret values are now masked by default in kubectl diff output.
  • The ServiceInternalTrafficPolicy feature graduated to beta and is enabled by default, which enables the internalTrafficPolicy field of Service by default.
  • Services with externalTrafficPolicy: Local now support graceful termination when using the iptables or ipvs mode of kube-proxy with EndpointSlices enabled. Specifically, if a connection for such a service arrives on a node when there are no “Ready” endpoints for the service, but there is at least one Terminating pod for that service on the node, then kube-proxy will send the traffic to the Terminating pod rather than dropping it. This patches up a race condition between when a pod is killed and when the external load balancer notices that it has been killed.
  • The SetHostnameAsFQDN feature graduated to GA and thus is unconditionally enabled (the feature gate can no longer be turned off).
  • The WarningHeader feature is now GA and is unconditionally enabled. This adds warning output to a few kubectl subcommands when a deprecated API is used.
  • Warnings for the use of deprecated and known-bad values in pod specs are now sent.
  • kubectl debug is able to create ephemeral containers in pre-1.22 clusters with the EphemeralContainers feature enabled. Note that versions of kubectl prior to 1.22 are unable to create ephemeral containers in clusters at version 1.22 and greater due to an API change.
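
For the PodSecurity item above: the allowed pods are controlled via namespace labels. A hedged example, assuming the pod-security.kubernetes.io labels from the upstream Pod Security admission documentation (the namespace name is made up):

bash

# Enforce the "baseline" Pod Security level for all pods in namespace "my-namespace".
# On 1.22 this requires kube-apiserver to run with --feature-gates=PodSecurity=true.
kubectl label namespace my-namespace pod-security.kubernetes.io/enforce=baseline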

As mentioned above, CoreDNS was upgraded to 1.8.x, so this one can be upgraded first. I already adjusted my CoreDNS playbook accordingly. There is a very handy tool that helps you upgrade CoreDNS’s configuration file (the Corefile). Read more about it at CoreDNS Corefile Migration for Kubernetes. The binary releases of that tool can be downloaded here: corefile-migration.
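
A sketch of how such a Corefile migration could look, assuming the corefile-tool binary from the corefile-migration releases (version numbers and path are examples):

bash

# Prints a Corefile migrated from CoreDNS 1.8.0 syntax to 1.8.4 syntax.
corefile-tool migrate --from 1.8.0 --to 1.8.4 --corefile /path/to/Corefile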

Also mentioned above, etcd was upgraded to 3.5.x. My blog post Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes contains a paragraph that describes how to do this upgrade.
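
After the etcd upgrade it is worth verifying that all members report the new version (endpoint and certificate paths are examples, adjust them to your setup):

bash

# Shows the status (incl. the version) of every etcd cluster member.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/path/to/ca-etcd.pem \
  --cert=/path/to/cert-etcd.pem \
  --key=/path/to/cert-etcd-key.pem \
  endpoint status --cluster -w table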

Before I upgraded to Kubernetes 1.22 this time I actually did another change. As you most probably know, Docker/dockershim has been deprecated since K8s 1.20 and will be removed in 1.24. So I took the opportunity while upgrading to the latest 1.21 (before upgrading to the latest 1.22 release) to remove Docker/dockershim and replace it with containerd and runc. The whole process is documented in its own blog post: Kubernetes: Replace dockershim with containerd and runc. I recommend getting rid of Docker/dockershim with the upgrade to K8s 1.23 at the latest! But the earlier you do it the less pressure you have at the end ;-) And to be honest, with containerd in place it now looks like Pods are starting a little bit faster than before ;-)
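
After the switch you can quickly verify which container runtime every node reports:

bash

# The CONTAINER-RUNTIME column should now show containerd://... instead of docker://...
kubectl get nodes -o wide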

If you use CSI then also check the CSI Sidecar Containers documentation. Every sidecar container has a compatibility matrix that shows the minimum, maximum and recommended version to use with a given K8s version.
Even if your K8s update to v1.22 worked fine, I would recommend updating the CSI sidecar containers sooner or later, because a) lots of changes are happening in this area at the moment and b) you might require the newer versions for the next K8s version anyways.
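
To get a quick overview of which CSI sidecar image versions are currently deployed, something like the following can help (the grep pattern covers the common sidecar image names and may need adjustment for your CSI driver):

bash

# Lists all container images in the cluster and filters for the usual CSI sidecars.
kubectl get pods -A -o jsonpath='{.items[*].spec.containers[*].image}' \
  | tr ' ' '\n' | sort -u \
  | grep -E 'csi-(provisioner|attacher|resizer|snapshotter|node-driver-registrar)'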

In my case I needed to add three new settings for kube-controller-manager (this is already included in my Ansible kubernetes-controller role): authentication-kubeconfig, authorization-kubeconfig and requestheader-client-ca-file needed to be added to k8s_controller_manager_settings (see the K8s 1.22 deprecations above). The value for the first two is basically the same as for kubeconfig (which is the kube-controller-manager.kubeconfig file). For requestheader-client-ca-file the value needs to be the same as for the already present root-ca-file setting. It points to the certificate authority file that kube-apiserver uses.
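
Just as an illustration (the file paths are examples, use whatever your setup or Ansible role generates), the relevant kube-controller-manager flags end up looking roughly like this:

bash

# Excerpt of the kube-controller-manager flags relevant for the change described above.
# The authentication/authorization kubeconfigs reuse the existing kubeconfig file and
# requestheader-client-ca-file reuses the CA already passed to --root-ca-file.
kube-controller-manager \
  --kubeconfig=/var/lib/kube-controller-manager/kube-controller-manager.kubeconfig \
  --authentication-kubeconfig=/var/lib/kube-controller-manager/kube-controller-manager.kubeconfig \
  --authorization-kubeconfig=/var/lib/kube-controller-manager/kube-controller-manager.kubeconfig \
  --root-ca-file=/var/lib/kubernetes/ca-k8s-apiserver.pem \
  --requestheader-client-ca-file=/var/lib/kubernetes/ca-k8s-apiserver.pem
  # ... plus all the other flags you already pass to kube-controller-manager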

For the kubelet I changed the default of the cgroupDriver value in the KubeletConfiguration to systemd, as the kubelet runs as a systemd service. See configure-cgroup-driver for more details. Before that the default was cgroupfs. Also see Migrating to the systemd driver. This change is also already baked into my Ansible role for kubernetes-worker.
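
A quick way to double-check which cgroup driver the kubelet is actually configured with (the configuration file path is an example, adjust it to wherever your KubeletConfiguration lives):

bash

# Should print "cgroupDriver: systemd" after the change (the old default was "cgroupfs").
grep -i cgroupdriver /var/lib/kubelet/kubelet-config.yaml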

Finally I updated the K8s controller and worker nodes to version 1.22.x as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.

That’s it for today! Happy upgrading! ;-)