1 - Changelog for Kubernetes 1.34

Versions

The deployed Kubernetes patch version varies based on when your cluster is deployed or upgraded. We strive to use the latest versions available.

Current release leverages Kubernetes 1.34. The official release blog post can be found here, along with the corresponding official changelog.

Optional addons

  • ingress-nginx is provided with version v1.14.3
  • cert-manager is provided with version v1.18.2

Major changes

  • The 4k, 8k, 16k, and v1-dynamic-40 storage classes are removed in this version. Existing volumes will not be affected, but the ability to create new legacy volumes is removed. Please migrate manifests that specify these storage classes to the storage classes prefixed with v2-, which have been available since Kubernetes 1.26 and have been the default since 2024-06-28 (see the announcement).
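For example, a PersistentVolumeClaim manifest that names a legacy class can be updated by swapping the storageClassName (a sketch; v2-1k is the default v2 class mentioned in the 1.30 changelog, and the claim name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data          # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  # Before: storageClassName: v1-dynamic-40  (legacy; creation removed in 1.34)
  storageClassName: v2-1k
  resources:
    requests:
      storage: 40Gi
```

Existing volumes keep their current class; only the creation of new legacy volumes is affected.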

Noteworthy changes in upcoming versions

Announcement of changes in future versions.

Scheduled for upcoming releases:

  • We’ll remove the legacy nodelocaldns where it is still deployed. This is relevant only if the cluster was created before v1.26.
  • The ingress-nginx controller will be fully deprecated from our management, following the upstream news.
  • We will not handle migrations of ingresses, but we aim to provide an API Gateway controller as an addon.

Is downtime expected?

The cluster is expected to remain up and running during the upgrade; however, pods will restart when they are migrated to new nodes.

Known issues

Custom node taints and labels lost during upgrade

Custom taints and labels on worker and control-plane nodes may be lost during the upgrade. We recommend auditing and reapplying any critical custom taints/labels via automation (e.g., cluster bootstrap, configuration management, or a post-upgrade job).
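As a sketch of the post-upgrade-job approach (the Job name, service account, image, and label/taint values are all hypothetical; the nodegroup label is the persistent one documented in this section):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: reapply-node-metadata        # hypothetical name
spec:
  template:
    spec:
      restartPolicy: OnFailure
      serviceAccountName: node-labeler   # needs RBAC permission to patch nodes
      containers:
        - name: kubectl
          image: bitnami/kubectl:latest  # any image containing kubectl works
          command:
            - sh
            - -c
            - |
              # Example values; --overwrite makes reapplication idempotent
              kubectl label nodes -l nodegroup.node.cluster.x-k8s.io=worker1 \
                workload-tier=batch --overwrite
              kubectl taint nodes -l nodegroup.node.cluster.x-k8s.io=worker1 \
                dedicated=batch:NoSchedule --overwrite
```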

A label that persists across upgrades can be used to direct workloads to particular node groups. Example of how to use it:

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nodegroup.node.cluster.x-k8s.io
                operator: In
                values:
                - worker1
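When a single hard requirement is all you need, the shorter nodeSelector form is equivalent to the affinity rule above:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        nodegroup.node.cluster.x-k8s.io: worker1
```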

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology-aware. As a result, snapshot behavior may be unreliable for topology-sensitive volumes. Avoid depending on snapshots for cross-zone/region recovery until a topology-aware snapshot controller is available or confirm your storage driver’s snapshot semantics.

2 - Changelog for Kubernetes 1.33

Versions

The deployed Kubernetes patch version varies based on when your cluster is deployed or upgraded. We strive to use the latest versions available.

Current release leverages Kubernetes 1.33. The official release blog post can be found here, along with the corresponding official changelog.

Optional addons

  • ingress-nginx is provided with version v1.14.3
  • cert-manager is provided with version v1.18.2

Major changes

  • Base Ubuntu image upgraded from 22.04 to 24.04.

Kubelet configurations provided by Elastx, not configurable by client

  • NodeDrainTimeout and NodeVolumeDetachTimeout: 5 → 15 min

    • Increased to 15 minutes to allow more time for graceful shutdown and controlled startup of workloads on new nodes, while respecting PodDisruptionBudgets.
  • podPidsLimit: 0 → 4096

    • Adds a safety net: a per-pod maximum number of PIDs (process IDs), limited and enforced by the kubelet. Previously there was no limit. Setting this to 4096 limits how many PIDs a single pod may create, which helps mitigate runaway processes and fork bombs.
  • serializeImagePulls: true → false

    • Allows the kubelet to pull multiple images in parallel, speeding up startup times.
  • maxParallelImagePulls: 0 → 10

    • Controls the maximum number of image pulls the kubelet will perform in parallel.
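In KubeletConfiguration terms, the last three settings map to the following fields (illustrative only; these values are managed by Elastx and cannot be changed per cluster). The drain timeouts are set at the machine lifecycle level rather than in the kubelet configuration:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 4096          # per-pod PID ceiling
serializeImagePulls: false  # pull images in parallel
maxParallelImagePulls: 10   # cap on concurrent pulls (requires serializeImagePulls: false)
```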

Introducing resource reservations on worker nodes

To improve stability and predictability of the core Kubernetes functionality during heavy load, we introduce node reservations for CPU, memory, and ephemeral storage.

The reservation model follows proven hyperscaler formulas but is tuned conservatively, leaving more allocatable resources.

Hyperscalers tend not to distinguish between systemReserved and kubeReserved, bundling all reservations into kubeReserved. We use both, skewed towards kube reservations to align more closely with the hyperscalers while still maintaining reservations for the system. The reservation settings are calculated dynamically from the CPU cores, memory, and storage of each flavor.

Here we’ve provided a sample of what to expect:

CPU Reservations Table

Cores (int)  System reserved (millicores)  Kube reserved (millicores)  Allocatable of node (%)
2            35                            120                         92%
4            41                            180                         94%
8            81                            240                         96%
16           83                            320                         97%
32           88                            480                         98%
64           98                            800                         99%

Memory Reservations

Memory (Gi)  System reserved (Gi)  Kube reserved (Gi)  Reserved total (Gi)  Eviction Soft (Gi)  Eviction Hard (Gi)  Allocatable of node (%)
8            0.4                   1.0                 1.4                  0.00                0.25                79%
16           0.4                   1.8                 2.2                  0.00                0.25                85%
32           0.4                   3.4                 3.8                  0.00                0.25                87%
64           0.4                   3.7                 4.1                  0.00                0.25                93%
120          0.4                   4.3                 4.7                  0.00                0.25                96%
240          0.4                   4.5                 4.9                  0.00                0.25                98%
384          0.4                   6.9                 7.3                  0.00                0.25                98%
512          0.4                   8.2                 8.6                  0.00                0.25                98%

Ephemeral Disk Reservations

NOTE: We use the default of nodefs.available at 10%.

Storage (Gi)  System reserved (Gi)  Kube reserved (Gi)  Reserved total (Gi)  Eviction Soft (Gi)  Eviction Hard (Gi)  Allocatable of node (%)
60            12.0                  1.0                 13.0                 0.0                 6.0                 68%
80            12.0                  1.0                 13.0                 0.0                 8.0                 74%
120           12.0                  1.0                 13.0                 0.0                 12.0                79%
240           12.0                  1.0                 13.0                 0.0                 24.0                85%
1600          12.0                  1.0                 13.0                 0.0                 160.0               89%
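For illustration, the 4-core / 16 Gi / 80 Gi rows above translate roughly into this kubelet reservation fragment (a sketch using the standard systemReserved / kubeReserved / evictionHard fields; exact values are computed per flavor):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: 41m
  memory: 400Mi              # 0.4 Gi
  ephemeral-storage: 12Gi
kubeReserved:
  cpu: 180m
  memory: 1843Mi             # ~1.8 Gi
  ephemeral-storage: 1Gi
evictionHard:
  memory.available: "256Mi"  # 0.25 Gi
  nodefs.available: "10%"    # 8 Gi of an 80 Gi disk
```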

Noteworthy changes in upcoming versions

Announcement of changes in future versions.

Kubernetes v1.34

The 4k, 8k, 16k, and v1-dynamic-40 storage classes are scheduled to be removed. Existing volumes will not be affected, but the ability to create those legacy volumes will be removed. Please migrate manifests that specify these storage classes to the storage classes prefixed with v2-, which have been available since Kubernetes 1.26 and have been the default since 2024-06-28 (see the announcement). The v1 storage platform was announced as deprecated 2023-12-20 (see the announcement).

Scheduled for upcoming releases:

  • We’ll remove the legacy nodelocaldns where it is still deployed. This is relevant only if the cluster was created before v1.26.
  • The ingress-nginx controller will be fully deprecated from our management, following the upstream news.
  • We will not handle migrations of ingresses, but we aim to provide an API Gateway controller as an addon.

Is downtime expected?

The cluster control plane should remain available during the upgrade; however, pods will be restarted when workloads are migrated to new nodes. Plan for short pod restarts during the upgrade.

Known issues

Custom node taints and labels lost during upgrade

Custom taints and labels on worker and control-plane nodes may be lost during the upgrade. We recommend auditing and reapplying any critical custom taints/labels via automation (e.g., cluster bootstrap, configuration management, or a post-upgrade job).

A label that persists across upgrades can be used to direct workloads to particular node groups. Example of how to use it:

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nodegroup.node.cluster.x-k8s.io
                operator: In
                values:
                - worker1

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology-aware. As a result, snapshot behavior may be unreliable for topology-sensitive volumes. Avoid depending on snapshots for cross-zone/region recovery until a topology-aware snapshot controller is available or confirm your storage driver’s snapshot semantics.

3 - Changelog for Kubernetes 1.32

Versions

The deployed Kubernetes patch version varies based on when your cluster is deployed or upgraded. We strive to use the latest versions available.

Current release leverages Kubernetes 1.32. The official release blog post can be found here, along with the corresponding official changelog.

Optional addons

  • ingress-nginx is provided with version v1.12.1
  • cert-manager is provided with version v1.16.3

Major changes

  • We previously announced the removal of the legacy storage classes in v1.32. This has been postponed to v1.34.

  • Flow control flowcontrol.apiserver.k8s.io/v1beta3 has been removed. The replacement flowcontrol.apiserver.k8s.io/v1 was implemented in Kubernetes 1.29.

  • More details can be found in Kubernetes official documentation.

Noteworthy changes in coming versions

Kubernetes v1.34

  • The 4k, 8k, 16k, and v1-dynamic-40 storage classes are scheduled to be removed. Existing volumes will not be affected, but the ability to create new legacy volumes will be removed. Please migrate manifests that specify these storage classes to the storage classes prefixed with v2-, which have been available since Kubernetes 1.26 and have been the default since 2024-06-28 (see the announcement).

Is downtime expected?

The cluster is expected to remain up and running during the upgrade; however, pods will restart when they are migrated to new nodes.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on worker and control-plane nodes are lost during upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology-aware.

4 - Changelog for Kubernetes 1.31

Versions

The deployed Kubernetes patch version varies based on when your cluster is deployed or upgraded. We strive to use the latest versions available.

Current release leverages Kubernetes 1.31. The official release blog post can be found here, along with the corresponding official changelog.

Major changes

Major changes that impact Elastx Kubernetes cluster deployments will be listed here.

Noteworthy API changes in the upcoming Kubernetes 1.32

  • Flow control flowcontrol.apiserver.k8s.io/v1beta3 will be removed. The replacement flowcontrol.apiserver.k8s.io/v1 was implemented in Kubernetes 1.29

  • The 4k, 8k, 16k, and v1-dynamic-40 storage classes will be removed (this has been postponed to v1.34). Please migrate to the v2 storage classes, which have been available since Kubernetes 1.26 and have been the default since Kubernetes 1.30.

  • More details can be found in Kubernetes official documentation.

Other noteworthy deprecations

  • Please migrate to the v2 storage classes, which have been available since Kubernetes 1.26, the default for existing clusters since the announcement, and the default for new clusters starting with Kubernetes v1.30.

Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:

Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.

  • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  • beta.kubernetes.io/arch -> kubernetes.io/arch
  • beta.kubernetes.io/os -> kubernetes.io/os
  • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
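For example, a workload pinned to a zone should move from the beta label to its replacement (the zone name sto1 is a placeholder):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        # Before: failure-domain.beta.kubernetes.io/zone: sto1
        topology.kubernetes.io/zone: sto1
```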

Is downtime expected?

The cluster is expected to remain up and running during the upgrade; however, pods will restart when they are migrated to new nodes.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on worker and control-plane nodes are lost during upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology-aware.

5 - Changelog for Kubernetes 1.30

Versions

The deployed Kubernetes version varies based on when your cluster is deployed. We aim to deploy clusters using the latest patch release of Kubernetes.

Current release is Kubernetes 1.30.1

Major changes

  • New default storageclass v2-1k
  • New clusters will only have v2 storage classes available.
  • nodelocaldns will be removed for all clusters where it’s still deployed. This change affects only clusters created prior to Kubernetes 1.26, as the feature was deprecated in that version.
  • Clusters created before Kubernetes 1.26 will have their public domains removed. In Kubernetes 1.26, we migrated to using a LoadBalancer and its IP instead. If you are using an old kubeconfig with an active domain, please fetch a new one.

APIs removed in Kubernetes 1.32

More details can be found in Kubernetes official documentation.

  • Flow control flowcontrol.apiserver.k8s.io/v1beta3. The replacement flowcontrol.apiserver.k8s.io/v1 was implemented in Kubernetes 1.29
  • The 4k, 8k, 16k, and v1-dynamic-40 storage classes will be removed. Please migrate to the v2 storage classes, which have been available since Kubernetes 1.26 and have been the default since Kubernetes 1.30.

Other noteworthy deprecations

Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:

Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.

  • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  • beta.kubernetes.io/arch -> kubernetes.io/arch
  • beta.kubernetes.io/os -> kubernetes.io/os
  • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone

Is downtime expected?

The cluster is expected to remain up and running during the upgrade; however, pods will restart when they are migrated to new nodes.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on worker and control-plane nodes are lost during upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology-aware.

6 - Changelog for Kubernetes 1.29

Versions

The deployed Kubernetes version varies based on when your cluster is deployed. We aim to deploy clusters using the latest patch release of Kubernetes.

Current release is Kubernetes 1.29.1

Major changes

  • Removed API Flow control flowcontrol.apiserver.k8s.io/v1beta2. The replacement flowcontrol.apiserver.k8s.io/v1beta3 was implemented in Kubernetes 1.26

APIs removed in Kubernetes 1.32

More details can be found in Kubernetes official documentation.

  • Flow control flowcontrol.apiserver.k8s.io/v1beta3. The replacement flowcontrol.apiserver.k8s.io/v1 was implemented in Kubernetes 1.29

Other noteworthy deprecations

Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:

Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.

  • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  • beta.kubernetes.io/arch -> kubernetes.io/arch
  • beta.kubernetes.io/os -> kubernetes.io/os
  • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone

Is downtime expected?

The cluster is expected to remain up and running during the upgrade; however, pods will restart when they are migrated to new nodes.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on worker and control-plane nodes are lost during upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology-aware.

7 - Changelog for Kubernetes 1.28

Versions

The deployed Kubernetes version varies based on when your cluster is deployed. We aim to deploy clusters using the latest patch release of Kubernetes.

Current release is Kubernetes 1.28.6

Major changes

  • No major changes

APIs removed in Kubernetes 1.29

More details can be found in Kubernetes official documentation.

  • Flow control flowcontrol.apiserver.k8s.io/v1beta2. The replacement flowcontrol.apiserver.k8s.io/v1beta3 was implemented in Kubernetes 1.26

Other noteworthy deprecations

Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:

Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.

  • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  • beta.kubernetes.io/arch -> kubernetes.io/arch
  • beta.kubernetes.io/os -> kubernetes.io/os
  • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone

Is downtime expected?

The cluster is expected to remain up and running during the upgrade; however, pods will restart when they are migrated to new nodes.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on worker and control-plane nodes are lost during upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology-aware.

8 - Changelog for Kubernetes 1.27

Versions

The deployed Kubernetes version varies based on when your cluster is deployed. We aim to deploy clusters using the latest patch release of Kubernetes.

Current release is Kubernetes 1.27.10

Major changes

  • Removed API CSIStorageCapacity storage.k8s.io/v1beta1. The replacement storage.k8s.io/v1 was implemented in Kubernetes 1.24

APIs removed in Kubernetes 1.29

More details can be found in Kubernetes official documentation.

  • Flow control flowcontrol.apiserver.k8s.io/v1beta2. The replacement flowcontrol.apiserver.k8s.io/v1beta3 was implemented in Kubernetes 1.26

Other noteworthy deprecations

Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:

Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.

  • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  • beta.kubernetes.io/arch -> kubernetes.io/arch
  • beta.kubernetes.io/os -> kubernetes.io/os
  • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone

Is downtime expected?

The cluster is expected to remain up and running during the upgrade; however, pods will restart when they are migrated to new nodes.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on worker and control-plane nodes are lost during upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology-aware.

9 - Changelog for Kubernetes 1.26

Versions

The deployed Kubernetes version varies based on when your cluster is deployed. We aim to deploy clusters using the latest patch release of Kubernetes.

Current release is Kubernetes 1.26.13

Major changes

  • Added support for node autoscaling
  • Removed API Flow control resources flowcontrol.apiserver.k8s.io/v1beta1. The replacement flowcontrol.apiserver.k8s.io/v1beta2 was implemented in Kubernetes 1.23
  • Removed API HorizontalPodAutoscaler autoscaling/v2beta2. The replacement autoscaling/v2 was introduced in Kubernetes 1.23
  • We no longer deploy NodeLocal DNSCache for new clusters

Deprecations

Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to perform any changes right now. However, we recommend that you start migrating your applications in order to avoid issues in future releases.

  • In Kubernetes 1.26 the storage class 4k will be removed from all clusters. This only affects clusters created prior to Kubernetes 1.23. Instead, use v1-dynamic-40, which has been the default storage class since Kubernetes 1.23. This change was originally planned for Kubernetes 1.25 but has been pushed back to 1.26 to allow some extra time for migrations.

APIs removed in Kubernetes 1.27

More details can be found in Kubernetes official documentation.

  • CSIStorageCapacity storage.k8s.io/v1beta1. The replacement storage.k8s.io/v1 was implemented in Kubernetes 1.24

APIs removed in Kubernetes 1.29

More details can be found in Kubernetes official documentation.

  • Flow control flowcontrol.apiserver.k8s.io/v1beta2. The replacement flowcontrol.apiserver.k8s.io/v1beta3 was implemented in Kubernetes 1.26

Other noteworthy deprecations

Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:

Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.

  • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  • beta.kubernetes.io/arch -> kubernetes.io/arch
  • beta.kubernetes.io/os -> kubernetes.io/os
  • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone

Is downtime expected?

The cluster is expected to remain up and running during the upgrade; however, pods will restart when they are migrated to new nodes.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on worker and control-plane nodes are lost during upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology-aware.

10 - Changelog for Kubernetes 1.25

Versions

  • Kubernetes 1.25.6
  • ingress-nginx: 1.4.0
  • cert-manager: 1.11.0

Major changes

  • Pod Security Policies have been removed.
  • CronJob API batch/v1beta1 has been removed and is replaced with batch/v1 that was implemented in Kubernetes 1.21
  • EndpointSlice API discovery.k8s.io/v1beta1 has been removed and is replaced with discovery.k8s.io/v1 that was implemented in Kubernetes 1.21
  • Event API events.k8s.io/v1beta1 has been removed and is replaced with events.k8s.io/v1 that was implemented in Kubernetes 1.19
  • PodDisruptionBudget API policy/v1beta1 has been removed and is replaced with policy/v1 that was implemented in Kubernetes 1.21
  • RuntimeClass API node.k8s.io/v1beta1 has been removed and is replaced with node.k8s.io/v1 that was implemented in Kubernetes 1.20
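For most of these, the migration is a one-line apiVersion change. A CronJob sketch (name, schedule, and image are placeholders):

```yaml
apiVersion: batch/v1   # before: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-job
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: job
              image: busybox:1.36
              command: ["sh", "-c", "echo done"]
```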

Deprecations

Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to perform any changes right now. However, we recommend that you start migrating your applications in order to avoid issues in future releases.

  • In Kubernetes 1.26 the storage class 4k will be removed from all clusters. This only affects clusters created prior to Kubernetes 1.23. Instead, use v1-dynamic-40, which has been the default storage class since Kubernetes 1.23. This change was originally planned for Kubernetes 1.25 but has been pushed back to 1.26 to allow some extra time for migrations.

APIs removed in Kubernetes 1.26

More details can be found in Kubernetes official documentation.

  • Flow control resources flowcontrol.apiserver.k8s.io/v1beta1. The replacement flowcontrol.apiserver.k8s.io/v1beta2 was implemented in Kubernetes 1.23
  • HorizontalPodAutoscaler autoscaling/v2beta2. The replacement autoscaling/v2 was introduced in Kubernetes 1.23

APIs removed in Kubernetes 1.27

More details can be found in Kubernetes official documentation.

  • CSIStorageCapacity storage.k8s.io/v1beta1. The replacement storage.k8s.io/v1 was implemented in Kubernetes 1.24

Other noteworthy deprecations

Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:

Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.

  • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  • beta.kubernetes.io/arch -> kubernetes.io/arch
  • beta.kubernetes.io/os -> kubernetes.io/os
  • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone

Is downtime expected?

The upgrade drains (moves all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again do we continue with the next node.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on worker and control-plane nodes are lost during upgrade.

Custom changes to non-customer security groups will be lost

All changes to security groups not suffixed with “-customer” will be lost during the upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology-aware.

11 - Changelog for Kubernetes 1.24

Versions

  • Kubernetes 1.24.6
  • ingress-nginx: 1.4.0
  • cert-manager: 1.10.0

Major changes

  • The node-role.kubernetes.io/master= label is removed from all control plane nodes, instead use the node-role.kubernetes.io/control-plane= label.
  • The taint node-role.kubernetes.io/control-plane:NoSchedule has been added to all control plane nodes.
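Workloads that intentionally need to run on control plane nodes must now tolerate the new taint, for example:

```yaml
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
```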

Deprecations

Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to perform any changes right now. However, we recommend that you start migrating your applications in order to avoid issues in future releases.

  • In Kubernetes 1.25 the storage class 4k will be removed from all clusters. This only affects clusters created prior to Kubernetes 1.23. Instead, use v1-dynamic-40, which has been the default storage class since Kubernetes 1.23.

APIs removed in Kubernetes 1.25

More details can be found in Kubernetes official documentation.

  • Pod Security Policies will be removed in Kubernetes 1.25
  • CronJob batch/v1beta1. The new API batch/v1 was implemented in Kubernetes 1.21 (this is a drop-in replacement)
  • EndpointSlice discovery.k8s.io/v1beta1. The new API discovery.k8s.io/v1 was implemented in Kubernetes 1.21
  • Event events.k8s.io/v1beta1. The new API events.k8s.io/v1 was implemented in Kubernetes 1.19
  • PodDisruptionBudget policy/v1beta1. The new API policy/v1 was implemented in Kubernetes 1.21
  • RuntimeClass node.k8s.io/v1beta1. The new API node.k8s.io/v1 was implemented in Kubernetes 1.20

APIs removed in Kubernetes 1.26

More details can be found in Kubernetes official documentation.

  • Flow control resources flowcontrol.apiserver.k8s.io/v1beta1. The replacement flowcontrol.apiserver.k8s.io/v1beta2 was implemented in Kubernetes 1.23
  • HorizontalPodAutoscaler autoscaling/v2beta2. The replacement autoscaling/v2 was introduced in Kubernetes 1.23

Other noteworthy deprecations

Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:

Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.

  • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  • beta.kubernetes.io/arch -> kubernetes.io/arch
  • beta.kubernetes.io/os -> kubernetes.io/os
  • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone

Is downtime expected?

The upgrade drains (moves all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again do we continue with the next node.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on worker and control-plane nodes are lost during upgrade.

Custom changes to non-customer security groups will be lost

All changes to security groups not suffixed with “-customer” will be lost during the upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology-aware.

12 - Changelog for Kubernetes 1.23

Versions

  • Kubernetes 1.23.7
  • ingress-nginx: 1.3.0
  • cert-manager: 1.9.1

Major changes

  • A new storage class v1-dynamic-40 is introduced and set as the default storage class. All information about this storage class can be found here.
  • Worker and control plane nodes now use v1-c2-m8-d80 as their default flavor. You can find a complete list of all available flavors here.
  • All nodes will be migrated to the updated flavors during the upgrade. All new flavors have the same specifications; however, the flavor ID will change. This affects customers that use the node.kubernetes.io/instance-type label found on nodes.
  • Control plane nodes will have their disk migrated from the deprecated 4k storage class to v1-dynamic-40.
  • Starting with Kubernetes 1.23, we require 3 control plane (master) nodes.

Flavor mapping

Old flavor      New flavor
v1-standard-2   v1-c2-m8-d80
v1-standard-4   v1-c4-m16-d160
v1-standard-8   v1-c8-m32-d320
v1-dedicated-8  d1-c8-m58-d800
v2-dedicated-8  d2-c8-m120-d1.6k

Changes affecting new clusters:

What happened to the metrics/monitoring node?

Previously, when creating new clusters or upgrading clusters to Kubernetes 1.23, we added an extra node that handled monitoring. This node is no longer needed, and all services have been converted to run inside the Kubernetes cluster. This means that clusters upgraded or created from now on won’t get an extra node added. Clusters that currently have the monitoring node will be migrated to the new setup within the upcoming weeks (the change is not service affecting).

Deprecations

Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to perform any changes right now. However, we recommend that you start migrating your applications in order to avoid issues in future releases.

  • In Kubernetes 1.25 the storage class 4k will be removed from all clusters created prior to Kubernetes 1.23.

APIs removed in Kubernetes 1.25

More details can be found in Kubernetes official documentation.

  • Pod Security Policies will be removed in Kubernetes 1.25
  • CronJob batch/v1beta1. The new API batch/v1 was implemented in Kubernetes 1.21 (this is a drop-in replacement)
  • EndpointSlice discovery.k8s.io/v1beta1. The new API discovery.k8s.io/v1 was implemented in Kubernetes 1.21
  • Event events.k8s.io/v1beta1. The new API events.k8s.io/v1 was implemented in Kubernetes 1.19
  • PodDisruptionBudget policy/v1beta1. The new API policy/v1 was implemented in Kubernetes 1.21
  • RuntimeClass node.k8s.io/v1beta1. The new API node.k8s.io/v1 was implemented in Kubernetes 1.20

Other noteworthy deprecations

Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:

Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.

  • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  • beta.kubernetes.io/arch -> kubernetes.io/arch
  • beta.kubernetes.io/os -> kubernetes.io/os
  • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
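As a minimal sketch, a workload pinned to a zone via the beta label only needs its nodeSelector key swapped (the zone value below is hypothetical):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        # Before: failure-domain.beta.kubernetes.io/zone: sto1
        topology.kubernetes.io/zone: sto1   # hypothetical zone name
```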

Is downtime expected

The upgrade drains (moves all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again will we continue with the next node.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on worker and control-plane nodes are lost during upgrade.

Custom changes to non-customer security groups will be lost

All changes to security groups not suffixed with “-customer” will be lost during the upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology aware.

13 - Changelog for Kubernetes 1.22

Changelog for Kubernetes 1.22

Versions

  • Kubernetes 1.22.8
  • Nginx-ingress: 1.1.1
  • Certmanager: 1.6.3

Major changes

  • When our ingress is installed we set it as the default ingress, meaning it will be used unless a custom ingress class is specified
  • Clusters are now running containerd instead of Docker. This should not affect your workload at all
  • We reserve 5% of RAM on all nodes, making it easier to calculate how much is left for your workload
  • All components deployed by Elastx have tolerations for NoSchedule taints by default
  • Certmanager cert-manager.io/v1alpha2, cert-manager.io/v1alpha3, cert-manager.io/v1beta1, acme.cert-manager.io/v1alpha2, acme.cert-manager.io/v1alpha3 and acme.cert-manager.io/v1beta1 APIs are no longer served. All existing resources will be converted automatically to cert-manager.io/v1 and acme.cert-manager.io/v1; however, you will still need to update your local manifests
  • Several old APIs are no longer served. A complete list can be found in Kubernetes documentation
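Updating local cert-manager manifests typically just means bumping the apiVersion. A minimal sketch of a cert-manager.io/v1 Certificate (all names and the domain below are hypothetical):

```yaml
apiVersion: cert-manager.io/v1   # previously e.g. cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: example-tls        # hypothetical name
  namespace: default
spec:
  secretName: example-tls
  dnsNames:
    - example.com          # hypothetical domain
  issuerRef:
    name: letsencrypt      # hypothetical issuer
    kind: ClusterIssuer
```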

Changes affecting new clusters:

  • All new clusters will have the cluster domain cluster.local by default
  • The encrypted *-enc storage classes (4k-enc, 8k-enc and 16k-enc) are no longer available to new clusters since they are deprecated for removal in OpenStack. Do not worry: all our other storage classes (4k, 8k, 16k and future classes) are now encrypted by default. Read our full announcement here

Deprecations

Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to make any changes right now. However, we recommend that you start migrating your applications now to avoid issues in future releases.

APIs removed in Kubernetes 1.25

More details can be found in Kubernetes official documentation.

  • Pod Security Policies will be removed in Kubernetes 1.25
  • CronJob batch/v1beta1. The new API batch/v1 was implemented in Kubernetes 1.21 (this is a drop-in replacement)
  • EndpointSlice discovery.k8s.io/v1beta1. The new API discovery.k8s.io/v1 was implemented in Kubernetes 1.21
  • Event events.k8s.io/v1beta1. The new API events.k8s.io/v1 was implemented in Kubernetes 1.19
  • PodDisruptionBudget policy/v1beta1. The new API policy/v1 was implemented in Kubernetes 1.21
  • RuntimeClass node.k8s.io/v1beta1. The new API node.k8s.io/v1 was implemented in Kubernetes 1.20

Other noteworthy deprecations

Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:

Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.

  • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  • beta.kubernetes.io/arch -> kubernetes.io/arch
  • beta.kubernetes.io/os -> kubernetes.io/os
  • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone

Is downtime expected

The upgrade drains (moves all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again will we continue with the next node.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on nodes are lost during upgrade.

Custom changes to non-customer security groups will be lost

All changes to security groups not suffixed with “-customer” will be lost during the upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology aware.

14 - Changelog for Kubernetes 1.21

Changelog for Kubernetes 1.21

Versions

  • Kubernetes 1.21.5
  • Nginx-ingress: 1.0.1
  • Certmanager: 1.5.3

Major changes

  • Load Balancers are by default allowed to talk to all TCP ports on worker nodes.

New Kubernetes features:

  • The ability to create immutable secrets and configmaps.
  • Cronjobs are now stable and the new API batch/v1 is implemented.
  • Graceful node shutdown: when worker nodes are shut down, this is detected by Kubernetes and pods will be evicted.
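As a minimal sketch of the immutability feature (the name and data below are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config     # hypothetical name
immutable: true        # the object can no longer be updated, only deleted and recreated
data:
  LOG_LEVEL: info
```

The same `immutable: true` field applies to Secrets. Marking objects immutable also reduces load on the API server, since the kubelet no longer has to watch them for changes.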

Deprecations

Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to make any changes right now. However, we recommend that you start migrating your applications now to avoid issues in future releases.

APIs removed in Kubernetes 1.22

A guide on how to migrate from affected APIs can be found in the Kubernetes upstream documentation.

  • Ingress extensions/v1beta1 and networking.k8s.io/v1beta1
  • ValidatingWebhookConfiguration and MutatingWebhookConfiguration admissionregistration.k8s.io/v1beta1
  • CustomResourceDefinition apiextensions.k8s.io/v1beta1
  • CertificateSigningRequest certificates.k8s.io/v1beta1
  • APIService apiregistration.k8s.io/v1beta1
  • TokenReview authentication.k8s.io/v1beta1
  • Lease coordination.k8s.io/v1beta1
  • SubjectAccessReview, LocalSubjectAccessReview and SelfSubjectAccessReview authorization.k8s.io/v1beta1
  • Certmanager api v1alpha2, v1alpha3 and v1beta1
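As a sketch of the Ingress migration, the networking.k8s.io/v1 API restructures the backend and adds a required pathType (all names and the host below are hypothetical):

```yaml
# Before: apiVersion: extensions/v1beta1 (or networking.k8s.io/v1beta1)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example          # hypothetical name
spec:
  ingressClassName: nginx
  rules:
  - host: example.com    # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix           # required in v1
        backend:
          service:                 # v1 nests the backend under "service"
            name: example-svc      # hypothetical Service
            port:
              number: 80
```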

Other noteworthy deprecations

Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. Follow the list below to see which labels are being replaced:

Please note: the following changes do not have a set Kubernetes release for removal; however, the replacement labels are already implemented.

  • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  • beta.kubernetes.io/arch -> kubernetes.io/arch
  • beta.kubernetes.io/os -> kubernetes.io/os
  • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone

APIs removed in Kubernetes 1.25

More details can be found in Kubernetes official documentation.

  • Pod Security Policies will be removed in Kubernetes 1.25.
  • CronJob batch/v1beta1. The new API batch/v1 was implemented in Kubernetes 1.21 (this is a drop-in replacement)
  • EndpointSlice discovery.k8s.io/v1beta1. The new API discovery.k8s.io/v1 was implemented in Kubernetes 1.21
  • Event events.k8s.io/v1beta1. The new API events.k8s.io/v1 was implemented in Kubernetes 1.19
  • PodDisruptionBudget policy/v1beta1. The new API policy/v1 was implemented in Kubernetes 1.21
  • RuntimeClass node.k8s.io/v1beta1. The new API node.k8s.io/v1 was implemented in Kubernetes 1.20

Is downtime expected

The upgrade drains (moves all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again will we continue with the next node.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on nodes are lost during upgrade.

Custom changes to non-customer security groups will be lost

All changes to security groups not suffixed with “-customer” will be lost during the upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology aware.

15 - Changelog for Kubernetes 1.20

Changelog for Kubernetes 1.20

Versions

  • Kubernetes 1.20.7
  • Nginx-ingress: 0.46.0
  • Certmanager: 1.3.1

Major changes

  • RBAC api rbac.authorization.k8s.io/v1alpha1 has been removed. Use the replacement rbac.authorization.k8s.io/v1 instead.
  • We no longer support creating new clusters with Pod Security Policy enabled. Instead we recommend using OPA Gatekeeper; if you have any questions regarding this, contact our support and we will help you out.
  • The built-in Cinder Volume Provider has gone from deprecated to disabled. Any volumes that are still using it will have to be migrated, see Known Issues.

Deprecations

  • Ingress api extensions/v1beta1 will be removed in Kubernetes 1.22.
  • Kubernetes beta labels on nodes are deprecated and will be removed in a future release; follow the list below to see which label replaces the old one:
    • beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
    • beta.kubernetes.io/arch -> kubernetes.io/arch
    • beta.kubernetes.io/os -> kubernetes.io/os
    • failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
    • failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
  • Certmanager api v1alpha2, v1alpha3 and v1beta1 will be removed in a future release. We strongly recommend that you upgrade to the new v1 API.
  • RBAC api rbac.authorization.k8s.io/v1beta1 will be removed in an upcoming release. The API is replaced with rbac.authorization.k8s.io/v1.
  • Pod Security Policies will be removed in Kubernetes 1.25 in all clusters having the feature enabled. Instead we recommend OPA Gatekeeper.

Is downtime expected

The upgrade drains (moves all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again will we continue with the next node.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on nodes are lost during upgrade.

Custom changes to non-customer security groups will be lost

All changes to security groups not suffixed with “-customer” will be lost during the upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology aware.

Volumes using built-in Cinder Volume Provider will be converted

During the upgrade to 1.20 Elastx staff will upgrade any volumes still being managed by the built-in Cinder Volume Provider. No action is needed on the customer side, but it will produce events and possibly log events that may raise concern.

To get a list of Persistent Volumes that are affected you can run this command before the upgrade:

$ kubectl get pv -o json | jq -r '.items[] | select (.spec.cinder != null) | .metadata.name'

Volumes that have been converted will show an event under the Persistent Volume Claim object asserting that data has been lost. This message is false: it appears because the underlying Persistent Volume was disconnected for a brief moment while it was being attached to the new CSI-based Cinder Volume Provider.

Bitnami (and possibly other) images and runAsGroup

Some Bitnami images silently assume they are run with the equivalent of runAsGroup: 0. This was the Kubernetes default until 1.20.x. The result is strange-looking permission errors on startup, which can cause workloads to fail.

At least the Bitnami PostgreSQL and RabbitMQ images have been confirmed as having these issues.

To find out if there are problematic workloads in your cluster you can run the following command:

    kubectl get pods -A -o yaml | grep image: | sort | uniq | grep bitnami

If any images turn up, there may be issues. N.B. Other images may have been built using Bitnami images as a base; these will not show up using the above command.

Solution without PSP

On clusters not running PSP it should suffice to add:

    runAsGroup: 0

to the securityContext for the affected containers.
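A minimal sketch of what that looks like in a Deployment (the container name is hypothetical; the image is one of the affected examples mentioned above):

```yaml
spec:
  template:
    spec:
      containers:
      - name: postgresql             # hypothetical container name
        image: bitnami/postgresql    # example of an affected image
        securityContext:
          runAsGroup: 0              # restore the pre-1.20 default group
```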

Solution with PSP

On clusters running PSP some more actions need to be taken. The restricted PSP forbids running as group 0, so a new one needs to be created, such as:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default,runtime/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
  name: restricted-runasgroup0
spec:
  allowPrivilegeEscalation: false
  fsGroup:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  requiredDropCapabilities:
  - ALL
  runAsGroup:
    ranges:
    - max: 65535
      min: 0
    rule: MustRunAs
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim

Furthermore, a ClusterRole allowing the use of said PSP is needed:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
  name: psp:restricted-runasgroup0
rules:
- apiGroups:
  - policy
  resourceNames:
  - restricted-runasgroup0
  resources:
  - podsecuritypolicies
  verbs:
  - use

And finally you need to bind the ServiceAccounts that need to run as group 0 to the ClusterRole with a ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp:restricted-runasgroup0
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:restricted-runasgroup0
subjects:
- kind: ServiceAccount
  name: default
  namespace: keycloak
- kind: ServiceAccount
  name: XXX
  namespace: YYY

Then it’s just a matter of adding:

runAsGroup: 0

to the securityContext for the affected containers.

16 - Changelog for Kubernetes 1.19

Changelog for Kubernetes 1.19

Versions

  • Kubernetes 1.19.7
  • Nginx-ingress: 0.43.0
  • Certmanager: 1.2.0

Major changes

  • New security groups are implemented where you can store all your firewall rules. The new security groups will be persistent between upgrades and are called CLUSTERNAME-k8s-worker-customer and CLUSTERNAME-k8s-master-customer (CLUSTERNAME will be replaced with the actual cluster name). With this change we will remove our previous default firewall rules that allowed public traffic to the Kubernetes cluster; this includes the following services:

    • Master API (port 6443)
    • Ingress (port 80 & 443)
    • Nodeports (ports 30000 to 32767)

    If you currently have any of the mentioned ports open you either need to add them to the new security groups (created during the upgrade) or mention this during the planning discussion and we will assist you with this. Please be aware that any rules added to the new security groups are not managed by us, and you are responsible for keeping them up to date.

Deprecations

  • Ingress api extensions/v1beta1 will be removed in Kubernetes 1.22
  • RBAC api rbac.authorization.k8s.io/v1alpha1 and rbac.authorization.k8s.io/v1beta1 will be removed in Kubernetes 1.20. The APIs are replaced with rbac.authorization.k8s.io/v1.
  • The node label beta.kubernetes.io/instance-type will be removed in an upcoming release. Use node.kubernetes.io/instance-type instead.
  • Certmanager api v1alpha2, v1alpha3 and v1beta1 will be removed in a future release. We strongly recommend that you upgrade to the new v1 API

Is downtime expected

The upgrade drains (moves all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again will we continue with the next node.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on nodes are lost during upgrade.

Custom security groups will be lost during upgrade

All custom security groups bound inside openstack will be detached during upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology aware.

17 - Changelog for Kubernetes 1.18

Changelog for Kubernetes 1.18

Versions

  • Kubernetes 1.18.9
  • Nginx-ingress: 0.40.0
  • Certmanager: 1.0.3

Major changes

  • Moved the tcp-services configmap used by our ingress controller to the default namespace.

Deprecations

  • Ingress api extensions/v1beta1 will be removed in Kubernetes 1.22
  • RBAC api rbac.authorization.k8s.io/v1alpha1 and rbac.authorization.k8s.io/v1beta1 will be removed in Kubernetes 1.20. The APIs are replaced with rbac.authorization.k8s.io/v1.
  • The node label beta.kubernetes.io/instance-type will be removed in an upcoming release. Use node.kubernetes.io/instance-type instead.
  • Certmanager api v1alpha2, v1alpha3 and v1beta1 will be removed in a future release. We strongly recommend that you upgrade to the new v1 API
  • Accessing the Kubernetes dashboard over the Kubernetes API. This feature will not be added to new clusters; however, if your cluster already has this available it will continue working until Kubernetes 1.19

Removals

  • Some older deprecated metrics, more information regarding this can be found in the official Kubernetes changelog: Link to Kubernetes changelog

Is downtime expected

For this upgrade we expect a short downtime on the ingress. The downtime on the ingress should be no longer than 5 minutes, and hopefully even under 1 minute.

The upgrade drains (moves all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again will we continue with the next node.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on nodes are lost during upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology aware.

Resize problem on volumes created before Kubernetes 1.16

Volume expansion sometimes fails on volumes created before Kubernetes 1.16.

A workaround exists by adding an annotation to the affected volumes; an example command:

kubectl annotate --overwrite pvc PVCNAME volume.kubernetes.io/storage-resizer=cinder.csi.openstack.org

18 - Changelog for Kubernetes 1.17

Changelog for Kubernetes 1.17

Versions

  • Kubernetes 1.17.9
  • Nginx-ingress: 0.32.0
  • Certmanager: 0.15.0

Major changes

  • We can now combine nodes of multiple different flavors within one cluster
  • Fixed a bug where some external network connections got stuck (MTU mismatch, Calico)
  • Enabled Calico’s metrics endpoint
  • New and improved monitoring system
  • Ingress now only supports serving HTTP on port 80 and HTTPS on port 443
  • Cert-manager using new APIs: Cert-manager info

Deprecations

  • Ingress api extensions/v1beta1 will be removed in Kubernetes 1.22
  • RBAC api rbac.authorization.k8s.io/v1alpha1 and rbac.authorization.k8s.io/v1beta1 will be removed in Kubernetes 1.20. The APIs are replaced with rbac.authorization.k8s.io/v1.
  • The node label beta.kubernetes.io/instance-type will be removed in an upcoming release. Use node.kubernetes.io/instance-type instead.

Removals

Custom ingress ports

We no longer support custom ingress ports. From 1.17, HTTP traffic will be received on port 80 and HTTPS on port 443.

You can check what ports you are using with the following command:

kubectl get service -n elx-nginx-ingress elx-nginx-ingress-controller

If you aren’t using ports 80 and 443, please be aware that the ports your ingress listens on will change during the upgrade to Kubernetes 1.17. The ELASTX team will contact you before the upgrade takes place, and together we can come up with a solution.

Old Kubernetes APIs

A complete list of APIs that will be removed in this version:

  • NetworkPolicy
    • extensions/v1beta1
  • PodSecurityPolicy
    • extensions/v1beta1
  • DaemonSet
    • extensions/v1beta1
    • apps/v1beta2
  • Deployment
    • extensions/v1beta1
    • apps/v1beta1
    • apps/v1beta2
  • StatefulSet
    • apps/v1beta1
    • apps/v1beta2
  • ReplicaSet
    • extensions/v1beta1
    • apps/v1beta1
    • apps/v1beta2

Is downtime expected

For this upgrade we expect a short downtime on the ingress. The downtime on the ingress should be no longer than 5 minutes, and hopefully even under 1 minute.

The upgrade drains (moves all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again will we continue with the next node.

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on nodes are lost during upgrade.

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology aware.

Resize problem on volumes created before Kubernetes 1.16

Volume expansion sometimes fails on volumes created before Kubernetes 1.16.

A workaround exists by adding an annotation to the affected volumes; an example command:

kubectl annotate --overwrite pvc PVCNAME volume.kubernetes.io/storage-resizer=cinder.csi.openstack.org