Changelog for Kubernetes 1.26

Versions

The deployed Kubernetes version depends on when your cluster is deployed. We aim to deploy clusters using the latest patch release of Kubernetes.

The current release is Kubernetes 1.26.13.

Major changes

  • Added support for node autoscaling
  • Removed the Flow control API flowcontrol.apiserver.k8s.io/v1beta1. The replacement flowcontrol.apiserver.k8s.io/v1beta2 was introduced in Kubernetes 1.23
  • Removed the HorizontalPodAutoscaler API autoscaling/v2beta2. The replacement autoscaling/v2 was introduced in Kubernetes 1.23; see the example after this list
  • We no longer deploy NodeLocal DNSCache for new clusters
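
Most of the removed APIs can be migrated by changing only the apiVersion field, since the replacement schemas are compatible. As an illustration, a minimal HorizontalPodAutoscaler manifest on the replacement autoscaling/v2 API (the workload name web and the thresholds are placeholders):

    # Previously apiVersion: autoscaling/v2beta2, removed in Kubernetes 1.26.
    # autoscaling/v2 has been available since Kubernetes 1.23.
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web                    # placeholder workload
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 80 # placeholder threshold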

Deprecations

Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to make any changes right now. However, we recommend that you start migrating your applications in order to avoid issues in future releases.

  • In Kubernetes 1.26 the storage class 4k will be removed from all clusters. This only affects clusters created prior to Kubernetes 1.23. Use v1-dynamic-40 instead, which has been the default storage class since Kubernetes 1.23. This change was originally planned for Kubernetes 1.25 but was pushed back to 1.26 to allow extra time for migrations. See the example after this list.
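
Note that storageClassName is immutable on an existing PersistentVolumeClaim, so moving off the 4k class means creating a new claim against v1-dynamic-40 and copying the data across. A minimal sketch of the replacement claim (the name and size are placeholders):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-migrated              # placeholder name
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: v1-dynamic-40  # default storage class since Kubernetes 1.23
      resources:
        requests:
          storage: 10Gi                # placeholder; match the old volume's size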

APIs removed in Kubernetes 1.27

More details can be found in the official Kubernetes documentation.

  • CSIStorageCapacity storage.k8s.io/v1beta1. The replacement storage.k8s.io/v1 was introduced in Kubernetes 1.24; see the example below
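
CSIStorageCapacity objects are normally written by CSI drivers rather than by hand, but if you have tooling that reads or creates them, only the apiVersion should need to change since the schema is unchanged in v1. A minimal sketch on the v1 API (all names and values are illustrative):

    apiVersion: storage.k8s.io/v1      # was storage.k8s.io/v1beta1
    kind: CSIStorageCapacity
    metadata:
      name: example-capacity           # illustrative name
    storageClassName: v1-dynamic-40
    nodeTopology:
      matchLabels:
        topology.kubernetes.io/zone: zone-a   # illustrative zone
    capacity: 100Gi                    # illustrative capacity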

APIs removed in Kubernetes 1.29

More details can be found in the official Kubernetes documentation.

  • Flow control flowcontrol.apiserver.k8s.io/v1beta2. The replacement flowcontrol.apiserver.k8s.io/v1beta3 was introduced in Kubernetes 1.26; see the example below
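
For FlowSchema objects the migration is again just the apiVersion; for PriorityLevelConfiguration objects note that v1beta3 also renames spec.limited.assuredConcurrencyShares to nominalConcurrencyShares. A minimal sketch (the name and share count are placeholders):

    apiVersion: flowcontrol.apiserver.k8s.io/v1beta3   # was .../v1beta2
    kind: PriorityLevelConfiguration
    metadata:
      name: example-priority-level     # placeholder name
    spec:
      type: Limited
      limited:
        nominalConcurrencyShares: 30   # was assuredConcurrencyShares in v1beta2
        limitResponse:
          type: Reject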

Other noteworthy deprecations

Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. The list below shows which labels are being replaced, and an example of updating a node selector follows the list:

Please note: these changes do not yet have a set Kubernetes removal release; however, the replacement labels are already available.

  • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  • beta.kubernetes.io/arch -> kubernetes.io/arch
  • beta.kubernetes.io/os -> kubernetes.io/os
  • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
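
In workload manifests this usually means updating nodeSelector and nodeAffinity terms. A minimal sketch of a pod spec fragment using the replacement labels (the values are placeholders):

    # Before:
    #   nodeSelector:
    #     beta.kubernetes.io/arch: amd64
    #     failure-domain.beta.kubernetes.io/zone: zone-a
    # After:
    nodeSelector:
      kubernetes.io/arch: amd64               # was beta.kubernetes.io/arch
      topology.kubernetes.io/zone: zone-a     # was failure-domain.beta.kubernetes.io/zone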

Is downtime expected?

The cluster is expected to be up and running during the upgrade; however, pods will restart when they are migrated to new nodes.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on worker and control-plane nodes are lost during the upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology aware.
