Changelog for Kubernetes 1.23

Versions

  • Kubernetes 1.23.7
  • Nginx-ingress: 1.3.0
  • cert-manager: 1.9.1

Major changes

  • A new storage class v1-dynamic-40 is introduced and set as the default storage class. All information about this storage class can be found here.
  • Worker and control plane nodes now use v1-c2-m8-d80 as their default flavor. You can find a complete list of all available flavors here.
  • All nodes will be migrated to the new flavors during the upgrade. The new flavors have the same specifications; however, the flavor ID changes. This affects customers that rely on the node.kubernetes.io/instance-type label found on nodes (see the example after this list).
  • Control plane nodes will have their disk migrated from the deprecated 4k storage class to v1-dynamic-40.
  • Starting with Kubernetes 1.23, we require 3 control plane (master) nodes.
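
To see which flavor a node currently reports, you can query the label directly. A minimal sketch, assuming kubectl access to the cluster:

    kubectl get nodes -L node.kubernetes.io/instance-type

Workloads that pin to an old flavor ID through a nodeSelector or node affinity on this label (for example v1-standard-2) need to be updated to the new ID (here v1-c2-m8-d80) after the migration.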

Flavor mapping

Old flavor        New flavor
v1-standard-2     v1-c2-m8-d80
v1-standard-4     v1-c4-m16-d160
v1-standard-8     v1-c8-m32-d320
v1-dedicated-8    d1-c8-m58-d800
v2-dedicated-8    d2-c8-m120-d1.6k

Changes affecting new clusters

What happened to the metrics/monitoring node?

Previously, when creating new clusters or upgrading clusters to Kubernetes 1.23, we added an extra node that handled monitoring. This node is no longer needed: all of its services have been converted to run inside the Kubernetes cluster, so clusters created or upgraded from now on won’t get an extra node. Clusters that currently have the monitoring node will be migrated to the new setup within the upcoming weeks (the change does not affect service).

Deprecations

Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to make any changes right now. However, we recommend that you start migrating your applications in order to avoid issues in future releases.

  • In Kubernetes 1.25 the storage class 4k will be removed from all clusters created prior to Kubernetes 1.23 (see the sketch below for finding affected volumes).
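
To find volumes that still use the deprecated storage class before it is removed, you can list PersistentVolumeClaims and filter on the storage class name. A minimal sketch, assuming the class is literally named 4k and that kubectl is available:

    kubectl get pvc -A -o custom-columns='NS:.metadata.namespace,NAME:.metadata.name,SC:.spec.storageClassName' \
      | awk 'NR==1 || $3=="4k"'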

APIs removed in Kubernetes 1.25

More details can be found in the official Kubernetes documentation. A sketch for checking your own manifests follows the list below.

  • Pod Security Policies will be removed in Kubernetes 1.25
  • CronJob batch/v1beta1. The new API batch/v1 was implemented in Kubernetes 1.21 (this is a drop-in replacement)
  • EndpointSlice discovery.k8s.io/v1beta1. The new API discovery.k8s.io/v1 was implemented in Kubernetes 1.21
  • Event events.k8s.io/v1beta1. The new API events.k8s.io/v1 was implemented in Kubernetes 1.19
  • PodDisruptionBudget policy/v1beta1. The new API policy/v1 was implemented in Kubernetes 1.21
  • RuntimeClass node.k8s.io/v1beta1. The new API node.k8s.io/v1 was implemented in Kubernetes 1.20
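
One way to prepare is to request resources through the fully qualified replacement group/versions and to scan your stored manifests for the old ones. A hedged sketch, assuming kubectl access; the ./manifests path is a placeholder for wherever your YAML is kept:

    # Fetch resources through the fully qualified replacement APIs;
    # success confirms the server serves the new group/version.
    kubectl get cronjobs.v1.batch -A
    kubectl get poddisruptionbudgets.v1.policy -A

    # Scan source manifests for apiVersions removed in Kubernetes 1.25.
    grep -rnE 'apiVersion: *(batch|policy|discovery\.k8s\.io|events\.k8s\.io|node\.k8s\.io)/v1beta1' ./manifests

Note that the API server returns live objects in whichever version you request, regardless of the version they were created with, so your source manifests are the place to look for lingering v1beta1 declarations.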

Other noteworthy deprecations

Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. The list below shows which labels are being replaced; an example of updating a workload follows the list.

Please note: the following changes do not have a set Kubernetes release for removal. However, the replacement labels are already implemented.

  • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  • beta.kubernetes.io/arch -> kubernetes.io/arch
  • beta.kubernetes.io/os -> kubernetes.io/os
  • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
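
Updating a workload is usually a one-line change in the pod template. The Deployment below is a hypothetical example (name, image, and labels are illustrative) showing a nodeSelector moved from a beta label to its replacement:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-amd64-only        # hypothetical name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: example
      template:
        metadata:
          labels:
            app: example
        spec:
          nodeSelector:
            kubernetes.io/arch: amd64   # was: beta.kubernetes.io/arch: amd64
          containers:
          - name: app
            image: nginx
    EOF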

Is downtime expected?

The upgrade drains (moves all workloads from) one node at a time, patches that node, and brings it back into the cluster. Only after all Deployments and StatefulSets are running again do we continue with the next node.
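
Since nodes are drained one at a time, workloads with more than one replica can stay available throughout the upgrade. A PodDisruptionBudget makes that explicit by bounding how many pods a drain may evict at once. A minimal sketch, assuming a hypothetical workload labeled app=example:

    kubectl apply -f - <<'EOF'
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: example-pdb          # hypothetical name
    spec:
      minAvailable: 1
      selector:
        matchLabels:
          app: example           # hypothetical label
    EOF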

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on worker and control plane nodes are lost during the upgrade.
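
Until this is fixed, you can save your custom labels and taints before the upgrade and re-apply them afterwards. A minimal sketch, assuming kubectl and jq are installed:

    # Save every node's labels and taints to a local file.
    kubectl get nodes -o json \
      | jq '[.items[] | {name: .metadata.name, labels: .metadata.labels, taints: .spec.taints}]' \
      > node-labels-taints-backup.json

    # After the upgrade, re-apply what you need, for example:
    #   kubectl label node <node> my-team/pool=batch
    #   kubectl taint node <node> dedicated=true:NoSchedule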

Custom changes to non-customer security groups will be lost

All changes to security groups not suffixed with “-customer” will be lost during the upgrade.
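
If you have made such changes, record the rules before the upgrade so they can be restored afterwards. A hedged sketch, assuming the underlying cloud exposes the standard OpenStack CLI:

    # Dump the rules of every security group to local JSON files.
    for sg in $(openstack security group list -f value -c Name); do
      openstack security group rule list "$sg" -f json > "sg-rules-${sg}.json"
    done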

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology aware.
