Kubernetes upgrade notes: 1.29.x to 1.30.x

If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe also for others who manage their own K8s cluster). I’ll only mention changes that might be relevant, either because they’re interesting for most K8s administrators anyway (even if they run a fully managed Kubernetes deployment) or because they matter if you manage your own bare-metal/VM-based on-prem Kubernetes deployment. I normally skip changes that are only relevant for GKE, AWS EKS, Azure or other cloud providers.

I have a general upgrade guide, Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes, that has worked quite well for me for the past few K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the 1.29.x to 1.30.x upgrade and WHAT was interesting for me.

As usual, I don’t update a production system before the .2 release of a new major version is out. In my experience the .0 and .1 releases are just too buggy. Nevertheless it’s important to test new releases (and even betas or release candidates if possible) in development environments early and to report bugs!

I only upgrade from the latest version of the former major release. At the time of writing this blog post, 1.29.9 was the latest 1.29.x release. After reading the 1.29 CHANGELOG to figure out if any important changes were made between my currently installed 1.29.x release and the latest 1.29.9 release, I didn’t see anything that prevented me from updating, and I didn’t need to change anything.

So I did the 1.29.9 update first. If you use my Ansible roles, that basically just means changing the k8s_ctl_release variable from 1.29.x to 1.29.9 (for the controller nodes) and doing the same for k8s_worker_release (for the worker nodes). Then deploy the changes for the control plane and worker nodes as described in my upgrade guide.
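
For illustration, a minimal sketch of that change and rollout (the group_vars file names, playbook name and tags below are only placeholders; adjust them to whatever your inventory and playbooks actually look like):

```bash
# Hypothetical example: bump the release variables in the Ansible inventory
# (file names are placeholders for your group_vars layout).
sed -i 's/^k8s_ctl_release:.*/k8s_ctl_release: "1.29.9"/' group_vars/k8s_controller.yml
sed -i 's/^k8s_worker_release:.*/k8s_worker_release: "1.29.9"/' group_vars/k8s_worker.yml

# Roll out the control plane first, then the worker nodes (playbook name
# and tags are placeholders - use whatever your setup defines).
ansible-playbook k8s.yml --tags=controller
ansible-playbook k8s.yml --tags=worker
```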

After that everything still worked as expected so I continued with the next step.

As it’s normally no problem to have a kubectl utility that is one release ahead of the server version, I updated kubectl from 1.29.x to the latest 1.30.x using my kubectl Ansible role.
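
Afterwards a quick check that the client/server version skew is within the supported range (at this point the client should be one release ahead of the server):

```bash
# Client should report v1.30.x while the cluster still runs v1.29.x
kubectl version
```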

Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! 😉

Funnily enough, I think the Kubernetes v1.30 release is the first one that doesn’t have any urgent upgrade notes 😉

All important stuff is listed in the Kubernetes v1.30: Uwubernetes release announcement.

The following list of changes and features only contains stuff that I found useful and interesting. See the full Kubernetes v1.30 Changelog for all changes.

  • Node log query That’s one of the more interesting features IMHO. To help with debugging issues on nodes, Kubernetes v1.27 introduced a feature that allows fetching logs of services running on the node. Following the v1.30 release, this is now beta (you still need to enable the feature to use it, though). For Linux it is assumed that the service logs are available via journald. To get the logs of the kubelet on a node one can use this command:

    ```bash
    kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"
    ```

    For more information, see the log query documentation.
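
    The endpoint also accepts a few additional parameters to narrow down the output, for example tailLines and pattern (see the log query documentation for the full list; the node name is just an example):

    ```bash
    # Only fetch the last 100 kubelet log lines that match "error"
    kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&tailLines=100&pattern=error"
    ```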

  • Beta Support For Pods With User Namespaces This feature definitely increases the security of workloads quite a bit. User namespaces are a Linux feature that isolates the UIDs and GIDs of the containers from the ones on the host. The identifiers in the container can be mapped to identifiers on the host in a way where the host UIDs/GIDs used for different containers never overlap. Furthermore, the identifiers can be mapped to unprivileged, non-overlapping UIDs and GIDs on the host. This feature needs Linux kernel >= 6.3. Sadly, containerd v1.7 currently isn’t supported, so one has to wait for containerd v2.0 (at least…). Basically only CRI-O with crun is currently supported, but not CRI-O with runc. Currently it looks like using runc with this feature is a general problem. Hopefully it’ll work some day, as this is a really interesting feature. A minimal pod sketch follows after this list.

  • CRD validation ratcheting

  • Contextual logging

  • Make Kubernetes aware of the LoadBalancer behaviour

  • Structured Authentication Configuration

  • kubelet now allows specifying a custom root directory for pod logs (instead of the default /var/log/pods) using the podLogsDir key in the kubelet configuration (see the kubelet configuration sketch after this list).
  • AppArmor profiles can now be configured through fields on the PodSecurityContext and container SecurityContext. The beta AppArmor annotations are deprecated, and AppArmor status is no longer included in the node ready condition (see the AppArmor sketch after this list).
  • Contextual logging is now in beta and enabled by default.
  • In the kubelet configuration, the .memorySwap.swapBehavior field now accepts a new value NoSwap, which becomes the default if unspecified. The previously accepted UnlimitedSwap value has been dropped (also shown in the kubelet configuration sketch after this list).
  • ValidatingAdmissionPolicy was promoted to GA and will be enabled by default.
  • Added Timezone column in the output of the kubectl get cronjob command.
  • Changed --nodeport-addresses behavior to default to “primary node IP(s) only” rather than “all node IPs”.
  • kubectl get job now displays the status for the listed jobs.
  • kubectl port-forward over websockets (tunneling SPDY) can now be enabled using an Alpha feature flag environment variable: KUBECTL_PORT_FORWARD_WEBSOCKETS=true. The API server being talked to must also have the Alpha feature gate PortForwardWebsockets enabled (see the example after this list).
  • Graduated “Forensic Container Checkpointing” (KEP #2008) from Alpha to Beta.
  • Graduated HorizontalPodAutoscaler support for per-container metrics to stable.
  • Introduced a new alpha feature gate, SELinuxMount, which can now be enabled to accelerate SELinux relabeling.
  • kubelet now supports configuring the IDs used to create user namespaces.
  • Promoted KubeProxyDrainingTerminatingNodes to beta
  • The kubelet now rejects creating the pod if hostUsers=false and the CRI runtime does not support user namespaces.
  • etcd: Updated to v3.5.11
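
Regarding the user namespaces feature mentioned above: to opt a pod into its own user namespace you set hostUsers: false in the pod spec. This is only a minimal sketch, assuming the UserNamespacesSupport feature gate is enabled and kernel plus container runtime support it; the pod name and image are placeholders:

```bash
# Minimal sketch: a pod running in its own user namespace (hostUsers: false).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false
  containers:
  - name: shell
    image: busybox:1.36
    command: ["sleep", "3600"]
EOF
```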
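
The two kubelet configuration items from the list (custom pod log directory and the new NoSwap swap behaviour) would look roughly like this in a KubeletConfiguration. Just a sketch: the file path and log directory are placeholders, and NoSwap is the default anyways if .memorySwap.swapBehavior isn’t set:

```bash
# Sketch: relevant KubeletConfiguration snippet (merge it into your existing
# kubelet config file; path and log directory are placeholders).
cat <<'EOF' > /tmp/kubelet-config-snippet.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Custom root directory for pod logs (default is /var/log/pods)
podLogsDir: "/data/log/pods"
memorySwap:
  # NoSwap is the new default in v1.30; UnlimitedSwap has been dropped
  swapBehavior: NoSwap
EOF
```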
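
The new AppArmor fields replace the old container.apparmor.security.beta.kubernetes.io/... annotations. A minimal sketch of a container using the new securityContext field (pod name, image and profile type are just examples):

```bash
# Sketch: AppArmor profile set via the container securityContext
# (instead of the deprecated beta annotations).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    securityContext:
      appArmorProfile:
        type: RuntimeDefault
EOF
```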
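
And the kubectl port-forward over WebSockets feature can be tried like this (pod name and ports are placeholders; the API server needs the PortForwardWebsockets feature gate enabled as well):

```bash
# Opt in to WebSocket-based port forwarding on the client side
KUBECTL_PORT_FORWARD_WEBSOCKETS=true kubectl port-forward pod/my-pod 8080:80
```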

If you use CSI then also check the CSI Sidecar Containers documentation. For every sidecar container there is a matrix that tells you which version you need at a minimum, which is the maximum supported one and which version is recommended for a given K8s version. Nevertheless, if your K8s update to v1.30 worked fine, I’d recommend also updating the CSI sidecar containers sooner or later.

Now I finally upgraded the K8s controller and worker nodes to version 1.30.x as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.
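
After the rollout a quick check that all nodes report the new kubelet version and are Ready:

```bash
# The VERSION column should now show v1.30.x on every node
kubectl get nodes -o wide
```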

That’s it for today! Happy upgrading! 😉