Changelog for Kubernetes 1.17

Versions

  • Kubernetes 1.17.9
  • Nginx-ingress: 0.32.0
  • Cert-manager: 0.15.0

Major changes

  • Nodes with multiple different flavors can now be combined within one cluster
  • Fixed a bug where some external network connections got stuck (MTU mismatch, Calico)
  • Enabled Calico's metrics endpoint (see the example after this list)
  • New and improved monitoring system
  • Ingress now only supports serving HTTP on port 80 and HTTPS on port 443
  • Cert-manager is using new APIs: Cert-manager info
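If you want to verify that the new Calico metrics endpoint responds, a minimal check is sketched below. It assumes Calico's default Felix Prometheus metrics port (9091) and the standard k8s-app=calico-node pod label, so adjust both if your setup differs. Port-forward to one calico-node pod, then query it from a second terminal:

kubectl port-forward -n kube-system $(kubectl get pod -n kube-system -l k8s-app=calico-node -o name | head -n 1) 9091:9091

curl http://localhost:9091/metrics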

Deprecations

  • The Ingress API extensions/v1beta1 will be removed in Kubernetes 1.22
  • The RBAC APIs rbac.authorization.k8s.io/v1alpha1 and rbac.authorization.k8s.io/v1beta1 will be removed in Kubernetes 1.20. They are replaced by rbac.authorization.k8s.io/v1.
  • The node label beta.kubernetes.io/instance-type will be removed in an upcoming release. Use node.kubernetes.io/instance-type instead (see the example after this list).
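To see which instance-type value each node reports under the old and the new label, you can print both label columns with kubectl's -L flag:

kubectl get nodes -L beta.kubernetes.io/instance-type -L node.kubernetes.io/instance-type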

Removals

Custom ingress ports

We no longer support custom ingress ports. From 1.17, HTTP traffic will be received on port 80 and HTTPS on port 443.

You can check what ports you are using with the following command:

kubectl get service -n elx-nginx-ingress elx-nginx-ingress-controller

If you aren’t using ports 80 and 443, please be aware that the ports your ingress listens on will change during the upgrade to Kubernetes 1.17. The ELASTX team will contact you before the upgrade takes place so that we can come up with a solution together.

Old Kubernetes APIs

A complete list of APIs that will be removed in this version (a verification example follows the list):

  • NetworkPolicy
    • extensions/v1beta1
  • PodSecurityPolicy
    • extensions/v1beta1
  • DaemonSet
    • extensions/v1beta1
    • apps/v1beta2
  • Deployment
    • extensions/v1beta1
    • apps/v1beta1
    • apps/v1beta2
  • StatefulSet
    • apps/v1beta1
    • apps/v1beta2
  • ReplicaSet
    • extensions/v1beta1
    • apps/v1beta1
    • apps/v1beta2
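From 1.17 on, the workload resources above are served by apps/v1, NetworkPolicy by networking.k8s.io/v1, and PodSecurityPolicy by policy/v1beta1, so remember to update the apiVersion field in your stored manifests accordingly. One way to confirm that existing objects are readable through the new API group is to query the fully qualified resource, for example:

kubectl get deployments.v1.apps --all-namespaces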

Is downtime expected?

For this upgrade we expect a short downtime on the ingress. It should be no longer than 5 minutes, and hopefully under 1 minute.

The upgrade drains (moves all load from) one node at a time, patches that node, and brings it back into the cluster. Only after all Deployments and StatefulSets are running again do we continue with the next node. The drain step is roughly what the commands below do.
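A sketch of the manual equivalent, assuming the upgrade uses the standard drain mechanism (NODENAME is a placeholder for the node being patched):

kubectl drain NODENAME --ignore-daemonsets --delete-local-data

kubectl uncordon NODENAME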

Known issues

Custom node taints and labels lost during upgrade

All custom taints and labels on nodes are lost during upgrade.
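If you rely on custom labels or taints, you may want to record them before the upgrade so they can be re-applied afterwards, for example:

kubectl get nodes --show-labels

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.taints}{"\n"}{end}'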

Snapshots are not working

There is currently a limitation in the snapshot controller: it is not topology aware.

Resize problem on volumes created before Kubernetes 1.16

Volume expansion sometimes fails on volumes created before Kubernetes 1.16.

A workaround is to add an annotation to the affected volumes. An example command (PVCNAME is a placeholder for the claim's name):

kubectl annotate --overwrite pvc PVCNAME volume.kubernetes.io/storage-resizer=cinder.csi.openstack.org
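To verify that the annotation was set before retrying the resize, you can inspect the claim:

kubectl describe pvc PVCNAME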