Kubernetes upgrade notes: 1.23.x to 1.24.x
Introduction
If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe for others who manage a K8s cluster on their own). I'll only mention changes that are relevant either because they are interesting for most K8s administrators anyway (even if they run a fully managed Kubernetes deployment) or because they matter if you manage your own bare-metal/VM based on-prem Kubernetes deployment. I normally skip changes that are only relevant for GKE, AWS EKS, Azure or other cloud providers.
I have a general upgrade guide, Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes, that has worked quite well for me over the past few K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the `1.23.x` to `1.24.x` upgrade and WHAT was interesting for me.
As usual I don't update a production system before the `.2` release of a new major version is out. In my experience the `.0` and `.1` releases are just too buggy. Nevertheless it's important to test new releases (and even betas or release candidates if possible) early in development or integration systems and report bugs!
Update to latest current release
I only upgrade from the latest version of the former major release. At the time of writing this blog post, `1.23.10` was the latest `1.23.x` release. After reading the 1.23 CHANGELOG to figure out if any important changes were made between my current `1.23.x` version and the latest `1.23.10` release, I didn't see anything that prevented me from updating, and I didn't need to change anything.
So I did the `1.23.10` update first. If you use my Ansible roles, that basically only means changing the `k8s_release` variable from `1.23.x` to `1.23.10` and deploying the changes for the control plane and worker nodes as described in my upgrade guide.
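With my Ansible roles that is just a one-line variable change; a minimal sketch, assuming the variable lives in a `group_vars` file (the exact file and path depend on your inventory layout):

```yaml
# group_vars/all.yml (example location; adjust to your inventory layout)
k8s_release: "1.23.10"
```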
After that everything still worked as expected so I continued with the next step.
Upgrading kubectl
As it's normally no problem to have a newer `kubectl` utility that is only one version ahead of the server version, I updated `kubectl` from `1.23.x` to the latest `1.24.x` using my kubectl Ansible role.
Release notes
Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! :-)
Urgent Upgrade Notes
As always, before a major upgrade read the Urgent Upgrade Notes! If you used my Ansible roles to install Kubernetes and kept most of the default settings, there should be no need to adjust any settings. For the K8s `1.24` release I actually couldn't find any urgent notes that were relevant for my Ansible roles or my own on-prem setup. Nevertheless, there are three notes that might be relevant for some people:
- Kubernetes `1.23.x` was the last release that supported the Docker runtime via dockershim in the kubelet. If you still have that one running then there is no way to upgrade to Kubernetes `1.24.x` yet. In that case you need to replace dockershim first. Please read my blog post Kubernetes: Replace dockershim with containerd and runc and act accordingly. Afterwards you can continue with the Kubernetes `1.24.x` upgrade.
- The `LegacyServiceAccountTokenNoAutoGeneration` feature gate is beta and enabled by default. When enabled, Secret API objects containing service account tokens are no longer auto-generated for every ServiceAccount. Also see Service account token Secrets.
- The calculations for Pod topology spread skew now exclude nodes that don't match the node affinity/selector. This may lead to unschedulable pods if you previously had pods matching the spreading selector on those excluded nodes (not matching the node affinity/selector), especially when the `topologyKey` is not node-level. Revisit the node affinity and/or pod selector in the topology spread constraints to avoid this scenario.
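If a workload still depends on a long-lived token Secret after the `LegacyServiceAccountTokenNoAutoGeneration` change, such a Secret can still be created manually; a minimal sketch (the ServiceAccount name `build-robot` is a made-up example):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-token           # hypothetical Secret name
  annotations:
    # the ServiceAccount this token should belong to (must already exist)
    kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token
```

The token controller then fills in the token data for the referenced ServiceAccount.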
What’s New (Major Themes)
In What's New (Major Themes) I found the following highlights that look most important to me:
- Already mentioned above, but very important, so I mention it once more: Dockershim Removed from kubelet.
- Beta APIs Off by Default. That was a little bit surprising for me, as Beta APIs had been enabled by default for quite a few releases and only Alpha APIs were disabled. So if you want to use a newly introduced Beta API with K8s v1.24 or higher you need to enable it explicitly (Beta APIs that were already enabled stay enabled).
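Enabling such a Beta API happens via the kube-apiserver `--runtime-config` flag; a minimal sketch (the API group/version is a hypothetical placeholder, not a real 1.24 API):

```shell
# kube-apiserver flag sketch; example.k8s.io/v1beta1 is a hypothetical placeholder
kube-apiserver --runtime-config=example.k8s.io/v1beta1=true
```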
Deprecation
A few interesting things I’ve found in Deprecation:
- `kube-apiserver`: the insecure address flags `--address`, `--insecure-bind-address`, `--port` and `--insecure-port` were removed.
- The insecure address flags `--address` and `--port` in `kube-controller-manager` have had no effect since v1.20 and are removed in v1.24.
- Removed `kube-scheduler` insecure flags. You can use `--bind-address` and `--secure-port` instead.
- The `node.k8s.io/v1alpha1` RuntimeClass API is no longer served. Use the `node.k8s.io/v1` API version, available since v1.20.
- The cluster addon for dashboard was removed. To install dashboard, see here.
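Migrating is usually just a matter of bumping the `apiVersion`; a minimal sketch of a RuntimeClass using the `v1` API (name and handler are hypothetical examples):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor    # hypothetical name, referenced by pods via runtimeClassName
handler: runsc    # must match a handler configured in your CRI runtime
```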
API changes
A few interesting API Changes also took place of course:
- Indexed Jobs graduated to stable.
- `MaxUnavailable` for `StatefulSets` allows a faster RollingUpdate by taking down more than one pod at a time. The number of pods you want to take down during a RollingUpdate is configurable using the `maxUnavailable` parameter.
- Pod affinity namespace selector and cross-namespace quota graduated to GA.
- The `ServerSideFieldValidation` feature has graduated to beta and is now enabled by default. kubectl `1.24` and newer will use server-side validation instead of client-side validation when writing to API servers with the feature enabled.
- The `DynamicKubeletConfig` feature has been removed from the kubelet.
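A sketch of what the `maxUnavailable` setting could look like in a StatefulSet manifest (name and numbers are made-up examples; as far as I can tell this is still gated behind the alpha `MaxUnavailableStatefulSet` feature gate in v1.24, so it has to be enabled explicitly):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web              # hypothetical example
spec:
  replicas: 5
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2  # take down up to 2 pods at a time during the update
  # selector, serviceName and pod template omitted for brevity
```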
Features
And finally the interesting Features:
- Added completion for `kubectl config set-context`.
- Added a label selector flag to all `kubectl rollout` commands.
- Added support for kubectl commands (`kubectl exec` and `kubectl port-forward`) via a SOCKS5 proxy.
- kubectl now supports shell completion for the `<type>/<name>` format for specifying resources. kubectl now provides shell completion for container names following the `--container`/`-c` flag of the `exec` command. kubectl's shell completion now suggests resource types for commands that only apply to pods.
- Kubelet: the following dockershim related flags are also removed along with dockershim: `--experimental-dockershim-root-directory`, `--docker-endpoint`, `--image-pull-progress-deadline`, `--network-plugin`, `--cni-conf-dir`, `--cni-bin-dir`, `--cni-cache-dir`, `--network-plugin-mtu`.
- Kubernetes `1.24` is built with Go `1.18`, which will no longer validate certificates signed with a SHA-1 hash algorithm by default. See https://golang.org/doc/go1.18#sha1 for more details.
- Moved the volume expansion feature to GA.
- Promoted graceful shutdown based on pod priority to beta.
- `kubectl logs` will now warn and default to the first container in a pod. This new behavior brings it in line with `kubectl exec`.
- The kubelet now creates an iptables chain named `KUBE-IPTABLES-HINT` in the `mangle` table. Containerized components that need to modify iptables rules in the host network namespace can use the existence of this chain to more reliably determine whether the system is using `iptables-legacy` or `iptables-nft`.
- The output of `kubectl describe ingress` now includes an `IngressClass` name if available.
- Updated `kubectl kustomize` and `kubectl apply -k` to Kustomize `v4.5.4`.
- `kubectl create token` can now be used to request a service account token, and permission to request service account tokens is added to the `edit` and `admin` RBAC roles.
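A few of these features can be tried directly from a shell; a quick sketch (service account name and proxy address are made-up examples, and the iptables commands need to run as root on a node):

```shell
# Request a short-lived token for a service account (name is an example):
kubectl create token build-robot --duration=1h

# Run kubectl through a SOCKS5 proxy (proxy address is an example):
HTTPS_PROXY=socks5://localhost:1080 kubectl get pods

# Check which iptables backend can see the KUBE-IPTABLES-HINT chain:
iptables-legacy -t mangle -S KUBE-IPTABLES-HINT >/dev/null 2>&1 && echo "iptables-legacy"
iptables-nft -t mangle -S KUBE-IPTABLES-HINT >/dev/null 2>&1 && echo "iptables-nft"
```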
Bug or Regression
- Reverted graceful node shutdown to match 1.21 behavior of setting pods that have not yet successfully completed to "Failed" phase if the `GracefulNodeShutdown` feature is enabled in the kubelet. The `GracefulNodeShutdown` feature is beta and must be explicitly configured via kubelet config to be enabled in 1.21+. This changes 1.22 and 1.23 behavior on node shutdown to match 1.21. If you do not want pods to be marked terminated on node shutdown in 1.22 and 1.23, disable the `GracefulNodeShutdown` feature.
- Removed the deprecated flags `--serviceaccount`, `--hostport`, `--requests` and `--limits` from `kubectl run`.
- Updated `cri-tools` to `v1.23.0`.
- Updated runc to `1.1.1`.
- Users who look at iptables dumps will see some changes in the naming and structure of rules.
Uncategorized
- Deprecated the `kubectl version` long output; it will be replaced with the `kubectl version --short` output. Users requiring the full output should use `--output=yaml|json` instead.
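So to get the full version information in a stable, structured format you would run for example:

```shell
kubectl version --output=json
```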
etcd
The latest supported and tested `etcd` version for Kubernetes `v1.24` is `v3.5.3`.
CSI
If you use CSI then also check the CSI Sidecar Containers documentation. Every sidecar container has a compatibility matrix that shows which version you need at a minimum and maximum, and which version is recommended for a given K8s version.
Nevertheless, if your K8s update to `v1.24` worked fine I would recommend also updating the CSI sidecar containers sooner or later.
Additional resources
Here are a few links that might be interesting regarding the new features in Kubernetes `1.24`:
Kubernetes 1.24 CHANGELOG
Kubernetes 1.24: Stargazer
Kubernetes 1.24 – What’s new? - SysDig blog
Kubernetes 1.24 - What’s New in Kubernetes Version 1.24 - Aqua Sec Blog
Upgrade Kubernetes
Now I finally upgraded the K8s controller and worker nodes to version 1.24.x
as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.
That’s it for today! Happy upgrading! ;-)