Changelog
- 1: Changelog for Kubernetes 1.32
- 2: Changelog for Kubernetes 1.31
- 3: Changelog for Kubernetes 1.30
- 4: Changelog for Kubernetes 1.29
- 5: Changelog for Kubernetes 1.28
- 6: Changelog for Kubernetes 1.27
- 7: Changelog for Kubernetes 1.26
- 8: Changelog for Kubernetes 1.25
- 9: Changelog for Kubernetes 1.24
- 10: Changelog for Kubernetes 1.23
- 11: Changelog for Kubernetes 1.22
- 12: Changelog for Kubernetes 1.21
- 13: Changelog for Kubernetes 1.20
- 14: Changelog for Kubernetes 1.19
- 15: Changelog for Kubernetes 1.18
- 16: Changelog for Kubernetes 1.17
1 - Changelog for Kubernetes 1.32
Versions
The deployed Kubernetes patch version varies based on when your cluster is deployed or upgraded. We strive to use the latest versions available.
The current release is based on Kubernetes 1.32. The official release blog post can be found here, along with the corresponding official changelog.
Optional addons
Major changes
- We have announced the deprecation of legacy storage classes in v1.32. This is postponed to v1.34.
- Flow control: `flowcontrol.apiserver.k8s.io/v1beta3` will be removed. The replacement `flowcontrol.apiserver.k8s.io/v1` was implemented in Kubernetes 1.29 (see the commands below for a quick check).

More details can be found in the Kubernetes official documentation.
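If you want to check that your own manifests and tooling already use the replacement API, a couple of read-only commands can help. This is only a sketch; it assumes you have kubectl access to the cluster:

# List the flowcontrol API versions the cluster still serves
kubectl api-versions | grep flowcontrol
# Read the existing objects through the v1 API, available since Kubernetes 1.29
kubectl get flowschemas.v1.flowcontrol.apiserver.k8s.io
kubectl get prioritylevelconfigurations.v1.flowcontrol.apiserver.k8s.io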
Noteworthy changes in coming versions
v1.34
- The 4k, 8k, 16k, and v1-dynamic-40 storage classes are scheduled to be removed. Existing volumes will not be affected, but the ability to create new volumes with the legacy classes will be removed. Please migrate manifests that specify storage classes to the storage classes prefixed with v2-, which have been available since Kubernetes 1.26 and have been the default since 2024-06-28, as made public in the announcement. A minimal migration example follows below.
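As a minimal sketch of that migration, a PersistentVolumeClaim pinned to one of the v2- classes could look like the following. The claim name and size are hypothetical; v2-1k is the default class introduced in Kubernetes 1.30 (see further down in this changelog):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example              # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: v2-1k         # a v2- prefixed class, replacing legacy classes such as 4k or v1-dynamic-40
  resources:
    requests:
      storage: 10Gi               # hypothetical size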
Is downtime expected
The cluster is expected to be up and running during the upgrade; however, pods will restart when they are migrated to a new node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on worker and control-plane nodes are lost during upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
2 - Changelog for Kubernetes 1.31
Versions
The deployed Kubernetes patch version varies based on when your cluster is deployed or upgraded. We strive to use the latest versions available.
The current release is based on Kubernetes 1.31. The official release blog post can be found here, along with the corresponding official changelog.
Major changes
If there are major changes that impact Elastx Kubernetes cluster deployments, they will be listed here.
Noteworthy API changes in the coming version, Kubernetes 1.32
- Flow control: `flowcontrol.apiserver.k8s.io/v1beta3` will be removed. The replacement `flowcontrol.apiserver.k8s.io/v1` was implemented in Kubernetes 1.29.
- The 4k, 8k, 16k, and v1-dynamic-40 storage classes will be removed (this is postponed to v1.34). Please migrate to the v2 storage classes, which have been available since Kubernetes 1.26 and have been the default since Kubernetes 1.30.

More details can be found in the Kubernetes official documentation.
Other noteworthy deprecations
- Please migrate to the v2 storage classes, which have been available since Kubernetes 1.26. They have been the default for existing clusters since the announcement, and the default for new clusters starting with Kubernetes v1.30.
- Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced (a workload example follows the list).
Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.
- beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
- beta.kubernetes.io/arch -> kubernetes.io/arch
- beta.kubernetes.io/os -> kubernetes.io/os
- failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
- failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
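As a sketch of what the label migration looks like in a workload manifest, a pod that was pinned to a zone via the old beta label would switch to the replacement label like this. The pod name, image and zone value are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-example                    # hypothetical name
spec:
  nodeSelector:
    topology.kubernetes.io/zone: zone-1        # replaces failure-domain.beta.kubernetes.io/zone; value is hypothetical
  containers:
  - name: app
    image: nginx                               # placeholder image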
Is downtime expected
The cluster is expected to be up and running during the upgrade; however, pods will restart when they are migrated to a new node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on worker and control-plane nodes are lost during upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
3 - Changelog for Kubernetes 1.30
Versions
The deployed Kubernetes version varies based on when your cluster is deployed. We try to deploy clusters using the latest patch release of Kubernetes.
Current release is Kubernetes 1.30.1
Major changes
- New default storage class: v2-1k.
- New clusters will only have v2 storage classes available.
- nodelocaldns will be removed for all clusters where it's still deployed. This change affects only clusters created prior to Kubernetes 1.26, as the feature was deprecated in that version.
- Clusters created before Kubernetes 1.26 will have their public domains removed. In Kubernetes 1.26, we migrated to using a LoadBalancer and its IP instead. If you are using an old kubeconfig with an active domain, please fetch a new one.
APIs removed in Kubernetes 1.32
More details can be found in Kubernetes official documentation.
- Flow control: `flowcontrol.apiserver.k8s.io/v1beta3`. The replacement `flowcontrol.apiserver.k8s.io/v1` was implemented in Kubernetes 1.29.
- The 4k, 8k, 16k, and v1-dynamic-40 storage classes will be removed. Please migrate to the v2 storage classes, which have been available since Kubernetes 1.26 and have been the default since Kubernetes 1.30.
Other noteworthy deprecations
Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:
Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.
- beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
- beta.kubernetes.io/arch -> kubernetes.io/arch
- beta.kubernetes.io/os -> kubernetes.io/os
- failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
- failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
Is downtime expected
The cluster is expected to be up and running during the upgrade; however, pods will restart when they are migrated to a new node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on worker and control-plane nodes are lost during upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
4 - Changelog for Kubernetes 1.29
Versions
The deployed Kubernetes version varies based on when your cluster is deployed. We try to deploy clusters using the latest patch release of Kubernetes.
Current release is Kubernetes 1.29.1
Major changes
- Flow control: `flowcontrol.apiserver.k8s.io/v1beta2`. The replacement `flowcontrol.apiserver.k8s.io/v1beta3` was implemented in Kubernetes 1.26.
APIs removed in Kubernetes 1.32
More details can be found in Kubernetes official documentation.
- Flow control: `flowcontrol.apiserver.k8s.io/v1beta3`. The replacement `flowcontrol.apiserver.k8s.io/v1` was implemented in Kubernetes 1.29.
Other noteworthy deprecations
Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:
Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.
- beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
- beta.kubernetes.io/arch -> kubernetes.io/arch
- beta.kubernetes.io/os -> kubernetes.io/os
- failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
- failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
Is downtime expected
The cluster is expected to be up and running during the upgrade; however, pods will restart when they are migrated to a new node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on worker and control-plane nodes are lost during upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
5 - Changelog for Kubernetes 1.28
Versions
The deployed Kubernetes version varies based on when your cluster is deployed. We try to deploy clusters using the latest patch release of Kubernetes.
Current release is Kubernetes 1.28.6
Major changes
- No major changes
APIs removed in Kubernetes 1.29
More details can be found in Kubernetes official documentation.
- Flow control: `flowcontrol.apiserver.k8s.io/v1beta2`. The replacement `flowcontrol.apiserver.k8s.io/v1beta3` was implemented in Kubernetes 1.26.
Other noteworthy deprecations
Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:
Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.
- beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
- beta.kubernetes.io/arch -> kubernetes.io/arch
- beta.kubernetes.io/os -> kubernetes.io/os
- failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
- failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
Is downtime expected
The cluster is expected to be up and running during the upgrade; however, pods will restart when they are migrated to a new node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on worker and control-plane nodes are lost during upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
6 - Changelog for Kubernetes 1.27
Versions
The deployed Kubernetes version varies based on when your cluster is deployed. We try to deploy clusters using the latest patch release of Kubernetes.
Current release is Kubernetes 1.27.10
Major changes
- Removed API: CSIStorageCapacity `storage.k8s.io/v1beta1`. The replacement `storage.k8s.io/v1` was implemented in Kubernetes 1.24.
APIs removed in Kubernetes 1.29
More details can be found in Kubernetes official documentation.
- Flow control: `flowcontrol.apiserver.k8s.io/v1beta2`. The replacement `flowcontrol.apiserver.k8s.io/v1beta3` was implemented in Kubernetes 1.26.
Other noteworthy deprecations
Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:
Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.
- beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
- beta.kubernetes.io/arch -> kubernetes.io/arch
- beta.kubernetes.io/os -> kubernetes.io/os
- failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
- failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
Is downtime expected
The cluster is expected to be up and running during the upgrade; however, pods will restart when they are migrated to a new node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on worker and control-plane nodes are lost during upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
7 - Changelog for Kubernetes 1.26
Versions
The deployed Kubernetes version varies based on when your cluster is deployed. We try to deploy clusters using the latest patch release of Kubernetes.
Current release is Kubernetes 1.26.13
Major changes
- Added support for node autoscaling
- Removed API: Flow control resources `flowcontrol.apiserver.k8s.io/v1beta1`. The replacement `flowcontrol.apiserver.k8s.io/v1beta2` was implemented in Kubernetes 1.23.
- Removed API: HorizontalPodAutoscaler `autoscaling/v2beta2`. The replacement `autoscaling/v2` was introduced in Kubernetes 1.23.
- We no longer deploy NodeLocal DNSCache for new clusters.
Deprecations
Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to perform any changes right now. However, we recommend you to start migrating your applications in order to avoid issues in future releases.
- In Kubernetes 1.26 the storage class `4k` will be removed from all clusters. This only affects clusters created prior to Kubernetes 1.23. Instead use v1-dynamic-40, which is the default storage class since Kubernetes 1.23. This change was originally planned for Kubernetes 1.25 but has been pushed back to 1.26 to allow some extra time for migrations. A command to find affected claims is shown below.
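To find out whether any PersistentVolumeClaims still reference the old class before the removal, a read-only check along these lines can help (a sketch in the same style as the jq command used elsewhere in this changelog; adjust the class name if needed):

kubectl get pvc -A -o json | jq -r '.items[] | select(.spec.storageClassName == "4k") | "\(.metadata.namespace)/\(.metadata.name)"'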
APIs removed in Kubernetes 1.27
More details can be found in Kubernetes official documentation.
- CSIStorageCapacity `storage.k8s.io/v1beta1`. The replacement `storage.k8s.io/v1` was implemented in Kubernetes 1.24.
APIs removed in Kubernetes 1.29
More details can be found in Kubernetes official documentation.
- Flow control: `flowcontrol.apiserver.k8s.io/v1beta2`. The replacement `flowcontrol.apiserver.k8s.io/v1beta3` was implemented in Kubernetes 1.26.
Other noteworthy deprecations
Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:
Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.
- beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
- beta.kubernetes.io/arch -> kubernetes.io/arch
- beta.kubernetes.io/os -> kubernetes.io/os
- failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
- failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
Is downtime expected
The cluster is expected to be up and running during the upgrade; however, pods will restart when they are migrated to a new node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on worker and control-plane nodes are lost during upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
8 - Changelog for Kubernetes 1.25
Versions
- Kubernetes 1.25.6
- Nginx-ingress: 1.4.0
- Certmanager: 1.11.0
Major changes
- Pod Security Policies have been removed.
- CronJob API `batch/v1beta1` has been removed and is replaced with `batch/v1`, which was implemented in Kubernetes 1.21 (a minimal example follows this list).
- EndpointSlice API `discovery.k8s.io/v1beta1` has been removed and is replaced with `discovery.k8s.io/v1`, which was implemented in Kubernetes 1.21.
- Event API `events.k8s.io/v1beta1` has been removed and is replaced with `events.k8s.io/v1`, which was implemented in Kubernetes 1.19.
- PodDisruptionBudget API `policy/v1beta1` has been removed and is replaced with `policy/v1`, which was implemented in Kubernetes 1.21.
- RuntimeClass API `node.k8s.io/v1beta1` has been removed and is replaced with `node.k8s.io/v1`, which was implemented in Kubernetes 1.20.
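Since batch/v1 is a drop-in replacement, updating a CronJob is normally just a matter of bumping the apiVersion. A minimal sketch, where the name, schedule and image are placeholders:

apiVersion: batch/v1              # previously batch/v1beta1
kind: CronJob
metadata:
  name: cleanup-example           # hypothetical name
spec:
  schedule: "0 3 * * *"           # placeholder schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: busybox        # placeholder image
            command: ["sh", "-c", "echo cleanup"]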
Deprecations
Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to perform any changes right now. However, we recommend you to start migrating your applications in order to avoid issues in future releases.
- In Kubernetes 1.26 the storage class `4k` will be removed from all clusters. This only affects clusters created prior to Kubernetes 1.23. Instead use v1-dynamic-40, which is the default storage class since Kubernetes 1.23. This change was originally planned for Kubernetes 1.25 but has been pushed back to 1.26 to allow some extra time for migrations.
APIs removed in Kubernetes 1.26
More details can be found in Kubernetes official documentation.
- Flow control resources `flowcontrol.apiserver.k8s.io/v1beta1`. The replacement `flowcontrol.apiserver.k8s.io/v1beta2` was implemented in Kubernetes 1.23.
- HorizontalPodAutoscaler `autoscaling/v2beta2`. The replacement `autoscaling/v2` was introduced in Kubernetes 1.23.
APIs removed in Kubernetes 1.27
More details can be found in Kubernetes official documentation.
- CSIStorageCapacity `storage.k8s.io/v1beta1`. The replacement `storage.k8s.io/v1` was implemented in Kubernetes 1.24.
Other noteworthy deprecations
Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:
Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.
- beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
- beta.kubernetes.io/arch -> kubernetes.io/arch
- beta.kubernetes.io/os -> kubernetes.io/os
- failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
- failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
Is downtime expected
The upgrade drains (moves all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again do we continue with the next node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on worker and control-plane nodes are lost during upgrade.
Custom changes to non-customer security groups will be lost
All changes to security groups not suffixed with “-customer” will be lost during the upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
9 - Changelog for Kubernetes 1.24
Versions
- Kubernetes 1.24.6
- Nginx-ingress: 1.4.0
- Certmanager: 1.10.0
Major changes
- The `node-role.kubernetes.io/master=` label is removed from all control plane nodes; instead use the `node-role.kubernetes.io/control-plane=` label.
- The taint `node-role.kubernetes.io/control-plane:NoSchedule` has been added to all control plane nodes.
Deprecations
Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to perform any changes right now. However, we recommend you to start migrating your applications in order to avoid issues in future releases.
- In Kubernetes 1.25 the storage class `4k` will be removed from all clusters. This only affects clusters created prior to Kubernetes 1.23. Instead use v1-dynamic-40, which is the default storage class since Kubernetes 1.23.
APIs removed in Kubernetes 1.25
More details can be found in Kubernetes official documentation.
- Pod Security Policies will be removed in Kubernetes 1.25
- CronJob `batch/v1beta1`. The new API `batch/v1` was implemented in Kubernetes 1.21 (this is a drop-in replacement).
- EndpointSlice `discovery.k8s.io/v1beta1`. The new API `discovery.k8s.io/v1` was implemented in Kubernetes 1.21.
- Event `events.k8s.io/v1beta1`. The new API `events.k8s.io/v1` was implemented in Kubernetes 1.19.
- PodDisruptionBudget `policy/v1beta1`. The new API `policy/v1` was implemented in Kubernetes 1.21.
- RuntimeClass `node.k8s.io/v1beta1`. The new API `node.k8s.io/v1` was implemented in Kubernetes 1.20.
APIs removed in Kubernetes 1.26
More details can be found in Kubernetes official documentation.
- Flow control resources `flowcontrol.apiserver.k8s.io/v1beta1`. The replacement `flowcontrol.apiserver.k8s.io/v1beta2` was implemented in Kubernetes 1.23.
- HorizontalPodAutoscaler `autoscaling/v2beta2`. The replacement `autoscaling/v2` was introduced in Kubernetes 1.23.
Other noteworthy deprecations
Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:
Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.
- beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
- beta.kubernetes.io/arch -> kubernetes.io/arch
- beta.kubernetes.io/os -> kubernetes.io/os
- failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
- failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
Is downtime expected
The upgrade drains (moves all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again do we continue with the next node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on worker and control-plane nodes are lost during upgrade.
Custom changes to non-customer security groups will be lost
All changes to security groups not suffixed with “-customer” will be lost during the upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
10 - Changelog for Kubernetes 1.23
Versions
- Kubernetes 1.23.7
- Nginx-ingress: 1.3.0
- Certmanager: 1.9.1
Major changes
- A new storage class `v1-dynamic-40` is introduced and set as the default storage class. All information about this storage class can be found here.
- Worker and control plane nodes now use `v1-c2-m8-d80` as their default flavor. You can find a complete list of all available flavors here.
- All nodes will be migrated to the updated flavors during the upgrade. All new flavors have the same specification; however, the flavor ID will change. This affects customers that use the `node.kubernetes.io/instance-type` label that can be located on nodes (see the check after the flavor mapping table below).
- Control plane nodes will have their disk migrated from the deprecated `4k` storage class to `v1-dynamic-40`.
- Starting from Kubernetes 1.23 we will require 3 control plane (master) nodes.
Flavor mapping
| Old flavor | New flavor |
|---|---|
| v1-standard-2 | v1-c2-m8-d80 |
| v1-standard-4 | v1-c4-m16-d160 |
| v1-standard-8 | v1-c8-m32-d320 |
| v1-dedicated-8 | d1-c8-m58-d800 |
| v2-dedicated-8 | d2-c8-m120-d1.6k |
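If you rely on the node.kubernetes.io/instance-type label, you can check what your nodes report before and after the flavor migration with a read-only command; a simple sketch:

kubectl get nodes -L node.kubernetes.io/instance-type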
Changes affecting new clusters:
- The storage class `4k` will no longer be set up, due to it being deprecated in OpenStack. The full announcement can be found here.
What happened to the metrics/monitoring node?
Previously, when creating new clusters or upgrading clusters to Kubernetes 1.23, we added an extra node that handled monitoring. This node is no longer needed and all services have been converted to run inside the Kubernetes cluster. This means that clusters being upgraded or created from now on won't get an extra node added. Clusters that currently have the monitoring node will be migrated to the new setup within the upcoming weeks (the change is non-service affecting).
Deprecations
Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to perform any changes right now. However, we recommend you to start migrating your applications in order to avoid issues in future releases.
- In Kubernetes 1.25 the storage class `4k` will be removed from all clusters created prior to Kubernetes 1.23.
APIs removed in Kubernetes 1.25
More details can be found in Kubernetes official documentation.
- Pod Security Policies will be removed in Kubernetes 1.25
- CronJob `batch/v1beta1`. The new API `batch/v1` was implemented in Kubernetes 1.21 (this is a drop-in replacement).
- EndpointSlice `discovery.k8s.io/v1beta1`. The new API `discovery.k8s.io/v1` was implemented in Kubernetes 1.21.
- Event `events.k8s.io/v1beta1`. The new API `events.k8s.io/v1` was implemented in Kubernetes 1.19.
- PodDisruptionBudget `policy/v1beta1`. The new API `policy/v1` was implemented in Kubernetes 1.21.
- RuntimeClass `node.k8s.io/v1beta1`. The new API `node.k8s.io/v1` was implemented in Kubernetes 1.20.
Other noteworthy deprecations
Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:
Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.
- beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
- beta.kubernetes.io/arch -> kubernetes.io/arch
- beta.kubernetes.io/os -> kubernetes.io/os
- failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
- failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
Is downtime expected
The upgrade drains (moves all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again do we continue with the next node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on worker and control-plane nodes are lost during upgrade.
Custom changes to non-customer security groups will be lost
All changes to security groups not suffixed with “-customer” will be lost during the upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
11 - Changelog for Kubernetes 1.22
Versions
- Kubernetes 1.22.8
- Nginx-ingress: 1.1.1
- Certmanager: 1.6.3
Major changes
- When our ingress is installed we set it as the default ingress, meaning it will be used unless a custom ingress class is used/specified
- Clusters are now running containerd instead of docker. This should not affect your workload at all
- We reserve 5% RAM on all nodes making it easier to calculate how much is left for your workload
- All components deployed by Elastx have tolerations for `NoSchedule` taints by default.
- Certmanager: the `cert-manager.io/v1alpha2`, `cert-manager.io/v1alpha3`, `cert-manager.io/v1beta1`, `acme.cert-manager.io/v1alpha2`, `acme.cert-manager.io/v1alpha3` and `acme.cert-manager.io/v1beta1` APIs are no longer served. All existing resources will be converted automatically to `cert-manager.io/v1` and `acme.cert-manager.io/v1`; however, you will still need to update your local manifests.
- Several old APIs are no longer served. A complete list can be found in the Kubernetes documentation.
Changes affecting new clusters:
- All new clusters will have the cluster domain `cluster.local` by default.
- The encrypted `*-enc` storage classes (`4k-enc`, `8k-enc` and `16k-enc`) are no longer available to new clusters since they are deprecated for removal in OpenStack. Do not worry: all our other storage classes (`4k`, `8k`, `16k` and future classes) are now encrypted by default. Read our full announcement here.
Deprecations
Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to perform any changes right now. However, we recommend you to start migrating your applications in order to avoid issues in future releases.
APIs removed in Kubernetes 1.25
More details can be found in Kubernetes official documentation.
- Pod Security Policies will be removed in Kubernetes 1.25
- CronJob `batch/v1beta1`. The new API `batch/v1` was implemented in Kubernetes 1.21 (this is a drop-in replacement).
- EndpointSlice `discovery.k8s.io/v1beta1`. The new API `discovery.k8s.io/v1` was implemented in Kubernetes 1.21.
- Event `events.k8s.io/v1beta1`. The new API `events.k8s.io/v1` was implemented in Kubernetes 1.19.
- PodDisruptionBudget `policy/v1beta1`. The new API `policy/v1` was implemented in Kubernetes 1.21.
- RuntimeClass `node.k8s.io/v1beta1`. The new API `node.k8s.io/v1` was implemented in Kubernetes 1.20.
Other noteworthy deprecations
Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. You can follow the list below to see which labels are being replaced:
Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.
- beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
- beta.kubernetes.io/arch -> kubernetes.io/arch
- beta.kubernetes.io/os -> kubernetes.io/os
- failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
- failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
Is downtime expected
The upgrade drains (moves all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again do we continue with the next node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on nodes are lost during upgrade.
Custom changes to non-customer security groups will be lost
All changes to security groups not suffixed with “-customer” will be lost during the upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
12 - Changelog for Kubernetes 1.21
Versions
- Kubernetes 1.21.5
- Nginx-ingress: 1.0.1
- Certmanager: 1.5.3
Major changes
- Load Balancers are by default allowed to talk to all tcp ports on worker nodes.
New Kubernetes features:
- The ability to create immutable secrets and configmaps.
- Cronjobs are now stable and the new API `batch/v1` is implemented.
- Graceful node shutdown: when worker nodes are shut down, this is detected by Kubernetes and pods will be evicted.
Deprecations
Note that all deprecations will be removed in a future Kubernetes release. This does not mean you need to perform any changes now; however, we recommend that you start migrating your applications to avoid issues in future releases.
APIs removed in Kubernetes 1.22
A guide on how to migrate from affected APIs can be found in the Kubernetes upstream documentation.
- Ingress `extensions/v1beta1` and `networking.k8s.io/v1beta1` (a migration example follows this list)
- ValidatingWebhookConfiguration and MutatingWebhookConfiguration `admissionregistration.k8s.io/v1beta1`
- CustomResourceDefinition `apiextensions.k8s.io/v1beta1`
- CertificateSigningRequest `certificates.k8s.io/v1beta1`
- APIService `apiregistration.k8s.io/v1beta1`
- TokenReview `authentication.k8s.io/v1beta1`
- Lease `coordination.k8s.io/v1beta1`
- SubjectAccessReview, LocalSubjectAccessReview and SelfSubjectAccessReview `authorization.k8s.io/v1beta1`
- Certmanager API `v1alpha2`, `v1alpha3` and `v1beta1`
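For the Ingress removal in particular, manifests need to move to networking.k8s.io/v1, which also changes the path and backend fields. A minimal sketch where the host, service and class names are hypothetical:

apiVersion: networking.k8s.io/v1        # replaces extensions/v1beta1 and networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-example                     # hypothetical name
spec:
  ingressClassName: nginx               # hypothetical ingress class
  rules:
  - host: web.example.com               # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix                # required in networking.k8s.io/v1
        backend:
          service:
            name: web-example           # hypothetical service
            port:
              number: 80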
Other noteworthy deprecations
Kubernetes beta topology labels on nodes are deprecated and will be removed in a future release. Follow the list below to see which labels are being replaced:
Please note: The following changes do not have a set Kubernetes release. However, the replacement labels are already implemented.
- beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
- beta.kubernetes.io/arch -> kubernetes.io/arch
- beta.kubernetes.io/os -> kubernetes.io/os
- failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
- failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
APIs removed in Kubernetes 1.25
More details can be found in the Kubernetes official documentation.
- Pod Security Policies will be removed in Kubernetes 1.25.
- CronJob `batch/v1beta1`. The new API `batch/v1` was implemented in Kubernetes 1.21 (this is a drop-in replacement).
- EndpointSlice `discovery.k8s.io/v1beta1`. The new API `discovery.k8s.io/v1` was implemented in Kubernetes 1.21.
- Event `events.k8s.io/v1beta1`. The new API `events.k8s.io/v1` was implemented in Kubernetes 1.19.
- PodDisruptionBudget `policy/v1beta1`. The new API `policy/v1` was implemented in Kubernetes 1.21.
- RuntimeClass `node.k8s.io/v1beta1`. The new API `node.k8s.io/v1` was implemented in Kubernetes 1.20.
Is downtime expected
The upgrade drains (moving all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again do we continue with the next node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on nodes are lost during upgrade.
Custom changes to non-customer security groups will be lost
All changes to security groups not suffixed with “-customer” will be lost during the upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
13 - Changelog for Kubernetes 1.20
Versions
- Kubernetes 1.20.7
- Nginx-ingress: 0.46.0
- Certmanager: 1.3.1
Major changes
- RBAC API `rbac.authorization.k8s.io/v1alpha1` has been removed. Instead use the replacement `rbac.authorization.k8s.io/v1`.
- We no longer support new clusters being created with pod security policy enabled. Instead we recommend using OPA Gatekeeper. If you have any questions regarding this, contact our support and we will help you out.
- The built-in Cinder Volume Provider has gone from deprecated to disabled. Any volumes that are still using it will have to be migrated; see Known Issues.
Deprecations
- Ingress API `extensions/v1beta1` will be removed in Kubernetes 1.22.
- Kubernetes beta labels on nodes are deprecated and will be removed in a future release. Follow the list below to see which label replaces the old one:
  - beta.kubernetes.io/instance-type -> node.kubernetes.io/instance-type
  - beta.kubernetes.io/arch -> kubernetes.io/arch
  - beta.kubernetes.io/os -> kubernetes.io/os
  - failure-domain.beta.kubernetes.io/region -> topology.kubernetes.io/region
  - failure-domain.beta.kubernetes.io/zone -> topology.kubernetes.io/zone
- Certmanager API `v1alpha2`, `v1alpha3` and `v1beta1` will be removed in a future release. We strongly recommend that you upgrade to the new `v1` API.
- RBAC API `rbac.authorization.k8s.io/v1beta1` will be removed in an upcoming release. The APIs are replaced with `rbac.authorization.k8s.io/v1`.
- Pod Security Policies will be removed in Kubernetes 1.25 in all clusters having the feature enabled. Instead we recommend OPA Gatekeeper.
Is downtime expected
The upgrade drains (moving all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again do we continue with the next node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on nodes are lost during upgrade.
Custom changes to non-customer security groups will be lost
All changes to security groups not suffixed with “-customer” will be lost during the upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
Volumes using built-in Cinder Volume Provider will be converted
During the upgrade to 1.20 Elastx staff will upgrade any volumes still being managed by the built-in Cinder Volume Provider. No action is needed on the customer side, but it will produce events and possibly log events that may raise concern.
To get a list of Persistent Volumes that are affected you can run this command before the upgrade:
$ kubectl get pv -o json | jq -r '.items[] | select (.spec.cinder != null) | .metadata.name'
Volumes that have been converted will show an event under the Persistent Volume Claim object asserting that data has been lost - this is a false statement and is due to the fact that the underlying Persistent Volume was disconnected for a brief moment while it was being attached to the new CSI-based Cinder Volume Provider.
Bitnami (and possibly other) images and runAsGroup
Some Bitnami images silently assume they are run with the equivalent of runAsGroup: 0. This was the Kubernetes default until 1.20.x. The result is strange looking permission errors on startup and can cause workloads to fail.
At least the Bitnami PostgreSQL and RabbitMQ images have been confirmed as having these issues.
To find out if there are problematic workloads in your cluster you can run the following command:
kubectl get pods -A -o yaml | grep image: | sort | uniq | grep bitnami
If any images turn up, there may be issues. NB: other images may have been built using Bitnami images as a base; these will not show up using the above command.
Solution without PSP
On clusters not running PSP it should suffice to just add:
runAsGroup: 0
To the securityContext for the affected containers.
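In context, the container spec then ends up looking roughly like this; the Deployment and container names are hypothetical and the image is only one of the examples mentioned above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql-example            # hypothetical name
spec:
  selector:
    matchLabels:
      app: postgresql-example
  template:
    metadata:
      labels:
        app: postgresql-example
    spec:
      containers:
      - name: postgresql
        image: bitnami/postgresql     # one of the images known to assume group 0
        securityContext:
          runAsGroup: 0               # restores the pre-1.20 default the image expects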
Solution with PSP
On clusters running PSP some more actions need to be taken. The restricted PSP forbids running as group 0 so a new one needs to be created, such as:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default,runtime/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
  name: restricted-runasgroup0
spec:
  allowPrivilegeEscalation: false
  fsGroup:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  requiredDropCapabilities:
  - ALL
  runAsGroup:
    ranges:
    - max: 65535
      min: 0
    rule: MustRunAs
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim
Furthermore a ClusterRole allowing the use of said PSP is needed:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
  name: psp:restricted-runasgroup0
rules:
- apiGroups:
  - policy
  resourceNames:
  - restricted-runasgroup0
  resources:
  - podsecuritypolicies
  verbs:
  - use
And finally you need to bind the ServiceAccounts that need to run as group 0 to the ClusterRole with a ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp:restricted-runasgroup0
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:restricted-runasgroup0
subjects:
- kind: ServiceAccount
  name: default
  namespace: keycloak
- kind: ServiceAccount
  name: XXX
  namespace: YYY
Then it's just a matter of adding:
runAsGroup: 0
To the securityContext for the affected containers.
14 - Changelog for Kubernetes 1.19
Versions
- Kubernetes 1.19.7
- Nginx-ingress: 0.43.0
- Certmanager: 1.2.0
Major changes
- New security groups are implemented where you can store all your firewall rules. The new security groups are persistent between upgrades and are called CLUSTERNAME-k8s-worker-customer and CLUSTERNAME-k8s-master-customer (CLUSTERNAME will be replaced with the actual cluster name). With this change we will remove our previous default firewall rules that allowed public traffic to the Kubernetes cluster. This includes the following services (see the example below for re-opening a port in the new customer group):
  - Master API (port 6443)
  - Ingress (port 80 & 443)
  - Nodeports (ports 30000 to 32676)
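If you need to keep any of these reachable from the Internet, the corresponding rule now has to be added to your customer security group. A sketch using the OpenStack CLI; replace CLUSTERNAME, the port and the CIDR with your own values:

# Example: re-allow public HTTPS traffic to the worker nodes
openstack security group rule create \
  --protocol tcp --dst-port 443 \
  --remote-ip 0.0.0.0/0 \
  CLUSTERNAME-k8s-worker-customer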
Deprecations
- Ingress API `extensions/v1beta1` will be removed in Kubernetes 1.22.
- RBAC APIs `rbac.authorization.k8s.io/v1alpha1` and `rbac.authorization.k8s.io/v1beta1` will be removed in Kubernetes 1.20. The APIs are replaced with `rbac.authorization.k8s.io/v1`.
- The node label `beta.kubernetes.io/instance-type` will be removed in an upcoming release. Use `node.kubernetes.io/instance-type` instead.
- Certmanager API `v1alpha2`, `v1alpha3` and `v1beta1` will be removed in a future release. We strongly recommend that you upgrade to the new `v1` API.
Is downtime expected
The upgrade drains (moving all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again do we continue with the next node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on nodes are lost during upgrade.
Custom security groups will be lost during upgrade
All custom security groups bound inside openstack will be detached during upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
15 - Changelog for Kubernetes 1.18
Versions
- Kubernetes 1.18.9
- Nginx-ingress: 0.40.0
- Certmanager: 1.0.3
Major changes
- Moved the `tcp-services` configmap used by our ingress controller to the default namespace (a quick check is shown below).
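If you expose extra TCP services through the ingress controller, you can confirm where the configmap now lives with a read-only check; a sketch, assuming the configmap keeps the name used above:

kubectl get configmap tcp-services -n default -o yaml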
Deprecations
- Ingress API `extensions/v1beta1` will be removed in Kubernetes 1.22.
- RBAC APIs `rbac.authorization.k8s.io/v1alpha1` and `rbac.authorization.k8s.io/v1beta1` will be removed in Kubernetes 1.20. The APIs are replaced with `rbac.authorization.k8s.io/v1`.
- The node label `beta.kubernetes.io/instance-type` will be removed in an upcoming release. Use `node.kubernetes.io/instance-type` instead.
- Certmanager API `v1alpha2`, `v1alpha3` and `v1beta1` will be removed in a future release. We strongly recommend that you upgrade to the new `v1` API.
- Accessing the Kubernetes dashboard over the Kubernetes API. This feature will not be added to new clusters; however, if your cluster already has this available it will continue working until Kubernetes 1.19.
Removals
- Some older deprecated metrics. More information regarding this can be found in the official Kubernetes changelog: Link to Kubernetes changelog
Is downtime expected
For this upgrade we expect a shorter downtime on the ingress. The downtime on the ingress should be no longer than 5 minutes and hopefully even under 1 minute in length.
The upgrade drains (moving all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again do we continue with the next node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on nodes are lost during upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
Resize problem on volumes created before Kubernetes 1.16
Volume expansion sometimes fails on volumes created before Kubernetes 1.16.
A workaround exists: add an annotation on the affected volumes. An example command:
kubectl annotate --overwrite pvc PVCNAME volume.kubernetes.io/storage-resizer=cinder.csi.openstack.org
16 - Changelog for Kubernetes 1.17
Versions
- Kubernetes 1.17.9
- Nginx-ingress: 0.32.0
- Certmanager: 0.15.0
Major changes
- We can now combine nodes with multiple different flavors within one cluster
- Fixed a bug where some external network connections got stuck (MTU mismatch, Calico)
- Enabled Calico's metrics endpoint
- New and improved monitoring system
- Ingress only supports serving HTTP over port 80 and HTTPS over port 443
- Cert-manager using new APIs: Cert-manager info
Deprecations
- Ingress API `extensions/v1beta1` will be removed in Kubernetes 1.22.
- RBAC APIs `rbac.authorization.k8s.io/v1alpha1` and `rbac.authorization.k8s.io/v1beta1` will be removed in Kubernetes 1.20. The APIs are replaced with `rbac.authorization.k8s.io/v1`.
- The node label `beta.kubernetes.io/instance-type` will be removed in an upcoming release. Use `node.kubernetes.io/instance-type` instead.
Removals
Custom ingress ports
We no longer support using custom ingress ports. From 1.17, HTTP traffic will be received on port 80 and HTTPS on port 443.
You can check what ports you are using with the following command:
kubectl get service -n elx-nginx-ingress elx-nginx-ingress-controller
If you aren’t using port 80 and 443 please be aware that the ports your ingress listen on will change during the upgrade to Kubernetes 1.17. ELASTX team will contact you before the upgrade takes place and we can together come up with a solution.
Old Kubernetes APIs
A complete list of APIs that will be removed in this version:
- NetworkPolicy `extensions/v1beta1`
- PodSecurityPolicy `extensions/v1beta1`
- DaemonSet `extensions/v1beta1`, `apps/v1beta2`
- Deployment `extensions/v1beta1`, `apps/v1beta1`, `apps/v1beta2` (see the sketch after this list)
- StatefulSet `apps/v1beta1`, `apps/v1beta2`
- ReplicaSet `extensions/v1beta1`, `apps/v1beta1`, `apps/v1beta2`
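Workloads that still reference one of the removed groups need their manifests moved to the current APIs; for Deployments that is apps/v1, which also requires an explicit selector. A minimal sketch where the names and image are placeholders:

apiVersion: apps/v1                  # replaces extensions/v1beta1, apps/v1beta1 and apps/v1beta2
kind: Deployment
metadata:
  name: web-example                  # hypothetical name
spec:
  replicas: 2
  selector:                          # required in apps/v1
    matchLabels:
      app: web-example
  template:
    metadata:
      labels:
        app: web-example
    spec:
      containers:
      - name: web
        image: nginx                 # placeholder image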
Is downtime expected
For this upgrade we expect a short downtime on the ingress. The downtime on the ingress should be no longer than 5 minutes, and hopefully even under 1 minute in length.
The upgrade drains (moving all workload from) one node at a time, patches that node, and brings it back into the cluster. Only after all deployments and statefulsets are running again do we continue with the next node.
Known issues
Custom node taints and labels lost during upgrade
All custom taints and labels on nodes are lost during upgrade.
Snapshots are not working
There is currently a limitation in the snapshot controller: it is not topology aware.
Resize problem on volumes created before Kubernetes 1.16
Volume expansion sometimes fails on volumes created before Kubernetes 1.16.
A workaround exists: add an annotation on the affected volumes. An example command:
kubectl annotate --overwrite pvc PVCNAME volume.kubernetes.io/storage-resizer=cinder.csi.openstack.org