Kubernetes upgrade notes: 1.15.x to 1.16.x
If you used my Kubernetes the Not So Hard Way With Ansible blog posts to set up a Kubernetes (K8s) cluster, these notes might be helpful for you (and maybe also for others who manage a K8s cluster on their own).
I have a general upgrade guide Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes that worked quite well for me for the past few K8s upgrades. So please read that guide if you want to know HOW the components are updated. This post is specifically about the 1.15.x to 1.16.x upgrade and WHAT I changed.
First: As usual I don't upgrade before the .2 release of a new major version is out. In my experience the .0 and .1 releases are just too buggy.
Second: I only upgrade from the latest version of the former major release. In my case I was running 1.15.3, and at the time of writing 1.15.6 was the latest 1.15.x release. After reading the 1.15.x changelog to see if any important changes were made between 1.15.3 and 1.15.6, I didn't see anything that prevented me from updating and I didn't need to change anything. So I did the 1.15.3 to 1.15.6 upgrade first. If you use my Ansible roles, that basically only means changing the k8s_release variable from 1.15.3 to 1.15.6 (see the snippet below) and rolling the changes out for the control plane and worker nodes as described in my upgrade guide. After that everything still worked as expected, so I continued with the next step.
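For reference, with my roles this is literally a one-line change (where you keep the variable depends on your inventory layout; group_vars/all.yml is just an example):

```yaml
# group_vars/all.yml (example location, adjust to your inventory layout)
k8s_release: "1.15.6"
```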
As it is normally no problem to have a kubectl utility that is only one major version ahead of the server version, I also updated kubectl from 1.14.x to 1.15.2 using my kubectl Ansible role.
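To verify the version skew between kubectl and the API server you can run:

```bash
kubectl version --short
```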
Since K8s 1.14 there are also searchable release notes available. You can specify the K8s version and a K8s area/component (e.g. kubelet, apiserver, …) and immediately get an overview of what changed in that regard. Quite nice! :-)
One interesting Alpha feature was added in Kubernetes 1.16: Ephemeral containers. These temporary containers can be added to running pods for purposes such as debugging, similar to how kubectl exec runs a process in an existing container. As this is an Alpha feature it needs to be enabled via a feature gate parameter. Also pod spread constraints have been added in alpha. You can use these constraints to control how Pods are spread across the cluster among failure domains.
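Both features are Alpha in 1.16, so they first need to be enabled via feature gates (EphemeralContainers and EvenPodsSpread) on the relevant control plane components. Here's a minimal sketch of a Pod using spread constraints, assuming the EvenPodsSpread feature gate is enabled on kube-apiserver and kube-scheduler (all names and the zone label are just examples and depend on your environment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-example        # placeholder name
  labels:
    app: spread-example
spec:
  topologySpreadConstraints:
    # allow at most a difference of 1 pod between failure domains (zones here)
    - maxSkew: 1
      topologyKey: failure-domain.beta.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: spread-example
  containers:
    - name: app
      image: nginx:1.16
```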
Before moving on there is one VERY important thing mentioned in the K8s 1.16.x changelog (it was already announced in the former two major releases). If you have a look at the Deprecations and Removals you'll notice that a few important objects are no longer served at some endpoints in K8s 1.16.x. If you're still using some of these old APIs you should REALLY change a few kinds before upgrading to K8s 1.16, as they're gone in K8s 1.16 or they'll go away for some endpoints in K8s 1.17 or 1.19 at the latest. So this should be changed before you upgrade:
- Ingress: migrate to networking.k8s.io/v1beta1
- NetworkPolicy: migrate to networking.k8s.io/v1
- PodSecurityPolicy: migrate to policy/v1beta1
- DaemonSet: migrate to apps/v1
- Deployment: migrate to apps/v1
- ReplicaSet: migrate to apps/v1
- PriorityClass: migrate to scheduling.k8s.io/v1
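For most workloads this migration boils down to changing the apiVersion and adding the fields the new API requires; e.g. for Deployments under apps/v1 the spec.selector field is mandatory and must match the pod template labels. A minimal sketch with placeholder names:

```yaml
apiVersion: apps/v1          # was: extensions/v1beta1 (no longer served in 1.16)
kind: Deployment
metadata:
  name: my-app               # placeholder
spec:
  replicas: 2
  selector:                  # required with apps/v1
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.16  # placeholder
```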
To check which API versions you currently use, run the following commands:
```bash
kubectl get ingresses.extensions -A
kubectl get ingress.networking.k8s.io -A
kubectl get networkpolicies.extensions -A
kubectl get networkpolicies.networking.k8s.io -A
kubectl get podsecuritypolicies.extensions -A
kubectl get podsecuritypolicies.policy -A
kubectl get daemonsets.extensions -A
kubectl get daemonsets.apps -A
kubectl get deployment.extensions -A
kubectl get deployment.apps -A
kubectl get replicasets.extensions -A
kubectl get replicasets.apps -A
kubectl get priorityclasses.scheduling.k8s.io -A
```
A final tip on this topic: there is a video, TGI Kubernetes 084: Kubernetes API removal and you, that covers the API removals.
The Known Issues section of the CHANGELOG contained no important information for me. There is an issue when running iptables 1.8.0 or newer, but as I'm using Ubuntu 18.04, which ships iptables 1.6.x, it doesn't affect me. So that's maybe something people should take care of who use Ubuntu > 18.04 or maybe also newer versions of Debian or CentOS.
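To quickly check which iptables version a node is running:

```bash
iptables --version
```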
In the Urgent Upgrade Notes I saw nothing important for my setup.
As already mentioned above, the Deprecations and Removals section contains VERY important information about APIs that are no longer served by default.
On the Dependencies list we see etcd has been updated to v3.3.15. This was needed because of a bug in the etcd client library. As it currently doesn't affect me I'll stay with etcd 3.3.13 for now. I already know that Kubernetes v1.17 will default to etcd 3.4.x, so I'll upgrade next time ;-) CoreDNS was upgraded to 1.6.2. If CoreDNS needs to be upgraded I normally do this after all Kubernetes controller and worker nodes are upgraded. The other mentioned dependencies are not of interest for me as I don't use them, so that section is done.
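By the way, to double check which etcd version is currently running, you can run this on one of the etcd hosts:

```bash
etcd --version
```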
If you use CSI then also check the CSI Sidecar Containers documentation. Every sidecar container has a matrix that shows which version you need at a minimum and maximum, and which version is recommended for use with a given K8s version. Since this is quite new stuff, basically all CSI sidecar containers work with K8s 1.13 to 1.16. The first releases of these sidecar containers only needed K8s 1.10, but I wouldn't use such old versions. So there is at least no urgent need to upgrade the CSI sidecar containers at the moment. Nevertheless, if your K8s update to v1.16 worked fine I would recommend updating the CSI sidecar containers sooner or later, because a) lots of changes are happening in this area at the moment and b) you might need the newer versions for the next K8s release anyway.
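To get a quick overview of which CSI sidecar images (and thus versions) are currently deployed in a cluster, something like this rough sketch should do (the grep pattern may need adjusting to your image names):

```bash
kubectl get pods -A -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | grep -i csi | sort -u
```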
While upgrading my worker nodes I figured out that the flannel configuration was missing cniVersion: "0.3.1" in /etc/cni/net.d/10-flannel.conflist. Without this setting Pods won't start anymore after you upgrade to K8s v1.16.x. I updated my Ansible flannel role accordingly. So make sure to roll out that role before upgrading a worker node (also see Support Kubernetes 1.16 and Linkerd #1181).
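For reference, this is roughly what the file should look like with the setting in place (the plugin list is just a common flannel + portmap example and depends on your setup):

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```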
Now I finally updated the K8s controller and worker nodes to version 1.16.3
as described in Kubernetes the Not So Hard Way With Ansible - Upgrading Kubernetes.
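A quick sanity check afterwards to confirm that every node reports the new kubelet version:

```bash
kubectl get nodes -o wide
```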
That’s it for today! Happy upgrading! ;-)