Persistent volumes and zoning guide#
The following sections provide operational guidance for provisioning PersistentVolumes when running Aspen Mesh.
Persistent Volumes#
Aspen Mesh utilizes PersistentVolumes to store various kinds of persistent data such as metrics, events, and trace data (dependent on which features are enabled).
The Kubernetes PersistentVolume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. Applications or administrators can create PersistentVolumeClaims (PVCs) to request PersistentVolume (PV) resources without having specific knowledge of the underlying storage infrastructure. A PersistentVolume is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes, managed by a storage controller.
Aspen Mesh uses this approach instead of allocating or managing storage itself. Aspen Mesh creates PersistentVolumeClaims to request storage from a platform-provided Kubernetes storage controller.
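For illustration, here is a minimal sketch of a PersistentVolumeClaim of the kind Aspen Mesh creates on your behalf; the name and requested size are assumptions for the example, not the actual values Aspen Mesh uses.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-metrics-claim      # illustrative name only
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                # illustrative size only
  # storageClassName is omitted, so the cluster's default StorageClass is used
Because no storageClassName is set, the claim is satisfied by the cluster's default StorageClass, which is why the default class matters in the sections below.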
Note
Your platform or cloud provider supports storage and provides instructions for configuring it; for example, if you are a user of RedHat OpenShift 4.5, refer to its storage documentation. We provide pointers and links to storage options here to assist in getting started, but we don't guarantee that following these instructions will meet your criteria for redundancy or performance. Please work with your platform or cloud provider to configure production-ready storage.
Dynamic provisioning#
Dynamic volume provisioning allows storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage. Aspen Mesh recommends using dynamic provisioning.
Depending on the installation method, your Kubernetes cluster may be deployed with an existing StorageClass that is marked as the default. This default StorageClass is then used to dynamically provision storage for PersistentVolumeClaims that do not request any specific storage class. The pre-installed default StorageClass may not fit your expected workload well; in that case, you need to change the default storage class. Consult the documentation for your installation for detailed instructions (for example, the RedHat OpenShift 4.5 documentation covers changing the default storage class).
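As a generic Kubernetes example (platform-specific tooling may differ), the default StorageClass can be changed by updating the storageclass.kubernetes.io/is-default-class annotation; old-default and new-default below are placeholder class names.
kubectl patch storageclass old-default -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass new-default -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Run kubectl get storageclass afterwards to confirm that exactly one class is marked (default).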
Kubernetes dynamic host path provisioning#
Note
Locally-provisioned volumes may only be suitable for demonstration environments. Aspen Mesh has positive experience with the controller described in this section but doesn’t officially endorse or support it.
If you have set up a local or on-premises cluster for demo or trial purposes, you can quickly add support for dynamic provisioning using the local path provisioner from Rancher, which dynamically provisions hostPath-backed volumes.
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Verify that the StorageClass is successfully created.
kubectl get storageclass
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  7d17h
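Once Aspen Mesh is installed, a quick way to confirm that dynamic provisioning is working is to list the PersistentVolumeClaims it created; this is a generic Kubernetes check rather than an Aspen Mesh-specific command.
kubectl get pvc --all-namespaces
Claims should report a STATUS of Bound once their consuming pods are scheduled; with WaitForFirstConsumer binding, a claim stays Pending until a pod that uses it is created.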
Running in multiple zones#
When a deployment to a cloud provider spans multiple availability zones, certain steps are required in order to use PersistentVolumes. These steps vary depending on your cloud provider and the version of Kubernetes it provides.
Use a multi-zoned storage class as the default storage class
To list all available storage classes, use the following command:
kubectl get storageclass
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   true                   35d
Use a multi-zoned storage class
When creating volumes across multiple availability zones, you need to ensure that the volumes are bound in the same zone as the pod. This can be accomplished by creating a StorageClass with volumeBindingMode set to WaitForFirstConsumer. Under this binding mode, the pod determines which node (and therefore which zone) the volumes bind to, since the pod is considered their first consumer. This is also necessary because the default binding mode of Immediate often results in scheduling conflicts when the pod is created in a different zone than its volumes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    k8s-addon: storage-aws.addons.k8s.io
  name: gp2
parameters:
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Notice that volumeBindingMode is set to WaitForFirstConsumer. Everything else can be the same as your other storage classes.
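If you save the manifest above as, for example, gp2-storageclass.yaml (a hypothetical filename), you can apply it and confirm its binding mode with standard kubectl commands.
kubectl apply -f gp2-storageclass.yaml
kubectl get storageclass gp2 -o jsonpath='{.volumeBindingMode}'
The second command should print WaitForFirstConsumer. If a class with the same name already exists with a different binding mode, you may need to delete and recreate it, since most StorageClass fields cannot be changed in place.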