Persistent volumes

Using persistent volumes

Persistent volumes in our Elastx Kubernetes CaaS service are provided by OpenStack Cinder. Volumes are dynamically provisioned by Kubernetes Cloud Provider OpenStack.

Storage classes

The storage class names refer to provisioned IOPS: 8k provides 8000 IOPS and 16k provides 16000 IOPS.

See our pricing page under the table Storage to calculate your costs.

The following storage classes are provided in clusters running Kubernetes versions later than 1.23.

$ kubectl get storageclasses
NAME                       PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
16k                        cinder.csi.openstack.org   Delete          WaitForFirstConsumer   true                   167d
8k                         cinder.csi.openstack.org   Delete          WaitForFirstConsumer   true                   167d
v1-dynamic-40 (default)    cinder.csi.openstack.org   Delete          WaitForFirstConsumer   true                   167d

Example of PersistentVolumeClaim

A quick example of how to create a 1Gi persistent volume claim named example, not yet used by any pod:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: 16k
Apply the manifest, then verify the claim:

$ kubectl get persistentvolumeclaim
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
example   Bound    pvc-f8b1dc7f-db84-11e8-bda5-fa163e3803b4   1Gi        RWO            16k            18s
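
A claim is only useful once a pod mounts it. The following is a minimal sketch; the pod name, image, and mount path are illustrative, while claimName refers to the PVC created above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # illustrative name
spec:
  containers:
    - name: app
      image: nginx         # any image that needs persistent storage
      volumeMounts:
        - name: data
          mountPath: /data # illustrative mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example # the PVC created above
```

Note that with the WaitForFirstConsumer binding mode used by these storage classes, the underlying Cinder volume is provisioned only when the first pod using the claim is scheduled.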

Good to know

Cross mounting of volumes between nodes

Cross mounting of volumes between availability zones is not supported. That is, a volume can only be mounted by a node residing in the same availability zone as the volume. Plan accordingly to ensure high availability.
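
Because a pod using a volume can only run in the volume's availability zone, you may want to pin a workload to a zone explicitly. A sketch using a nodeSelector; the zone label value is an assumption, check your nodes with `kubectl get nodes --show-labels`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod                 # illustrative name
spec:
  nodeSelector:
    topology.kubernetes.io/zone: sto1   # hypothetical zone name
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example              # the PVC created above
```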

Limit of volumes and pods per node

Kubernetes version   Max pods/node   Max volumes/node
v1.25 and lower      110             25
v1.26 and higher     110             125

If a higher number of volumes or pods per node is required, consider adding additional worker nodes.
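
You can inspect the effective limits on a given node; this is a sketch assuming a reachable cluster, with <node-name> to be replaced by one of your worker nodes:

```shell
# Pod capacity as reported by the node itself:
kubectl get node <node-name> -o jsonpath='{.status.allocatable.pods}'

# The per-node volume limit is reported on the CSINode object
# for the Cinder CSI driver:
kubectl get csinode <node-name> \
  -o jsonpath='{.spec.drivers[?(@.name=="cinder.csi.openstack.org")].allocatable.count}'
```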

Encryption

All volumes are encrypted at rest in hardware.

Volume type hostPath

A volume of type hostPath is in reality just a local directory on a specific node, mounted into a pod. This means data is stored locally on that node and will be unavailable if the pod is ever rescheduled to another node. Rescheduling is expected during cluster upgrades or maintenance, but it can also happen for other reasons, for example if a pod crashes or a node is malfunctioning.
If you are looking for a way to store persistent data, we recommend using PVCs instead. PVCs can move between nodes within one data center, meaning any stored data will still be present even if the pod is recreated.
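
For illustration, this is what a hostPath volume looks like; the pod name and directory path are assumptions, and the warning in the comments is the point:

```yaml
# hostPath example: data lives on the node's local disk and becomes
# unreachable if the pod is rescheduled to another node.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-example          # illustrative name
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: local-data
          mountPath: /data
  volumes:
    - name: local-data
      hostPath:
        path: /var/local/app-data # directory on the node itself; path is an assumption
        type: DirectoryOrCreate
```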

Known issues

Resizing encrypted volumes

Legacy encrypted volumes do not resize properly. Please contact our support if you wish to resize such a volume.
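
For volumes not affected by this legacy issue, resizing is done by raising the request on the claim, since the storage classes above allow volume expansion. A sketch using the example claim from earlier:

```shell
# Grow the "example" claim from 1Gi to 2Gi:
kubectl patch pvc example --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'

# Watch until the new size is reflected in CAPACITY:
kubectl get pvc example
```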

Last modified April 22, 2024: added useful options (#171) (7e11b10)