Usage
Dynamic provisioning
Dynamic provisioning simplifies the management of storage resources in containerized environments by automatically provisioning storage volumes on-demand. This eliminates the need for manual configuration, allowing administrators to dynamically assign storage resources based on application requirements.
To provision a volume backed by SaunaFS, you need to:
- Create a secret storing SaunaFS connection info:
  - `stringData.masterAddress` - hostname of the SaunaFS master
  - `stringData.masterPort` - port number on which the SaunaFS master is listening (optional, default: `9421`)
  - `stringData.rootDir` - path of the SaunaFS export root directory (optional, default: `/`)
  - `stringData.password` - password required to access the export (optional)
```yaml
kind: Secret
apiVersion: v1
metadata:
  name: saunafs-secret
  namespace: saunafs-csi
type: Opaque
stringData:
  masterAddress: saunafs-cluster-internal.saunafs-operator.svc.cluster.local
  masterPort: "9421"
  rootDir: /
  password: S4unaF5#85&32
```
SaunaFS Password
By default, SaunaFS exports are password protected. The export password is stored in a Kubernetes secret named `<CLUSTER_NAME>-default-exports-secret`; you can retrieve it using kubectl:
```sh
$ kubectl get secrets saunafs-test-cluster-default-exports-secret -o jsonpath={.data.saunafs-export-password} | base64 --decode
```
- Create a storage class:
  - `parameters.replicationGoalName` - name of the replication mode defined in `mfsgoals.cfg` or in the SaunaFS Cluster (if the SaunaFS Operator is used)
  - `parameters.*-secret-name` - name of the connection info secret
  - `parameters.*-secret-namespace` - namespace of the connection info secret
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: saunafs-sc
provisioner: saunafs.csi.sarkan.io
allowVolumeExpansion: true
parameters:
  replicationGoalName: ec21
  csi.storage.k8s.io/provisioner-secret-name: saunafs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: saunafs-csi
  csi.storage.k8s.io/node-publish-secret-name: saunafs-secret
  csi.storage.k8s.io/node-publish-secret-namespace: saunafs-csi
  csi.storage.k8s.io/controller-expand-secret-name: saunafs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: saunafs-csi
```
Custom file system
If you want volumes to be formatted on creation with one of the `ext3`, `ext4`, or `xfs` file systems, specify the `csi.storage.k8s.io/fstype` parameter in the storage class. Volumes are formatted only in the `Filesystem` volume mode.
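For example, a storage class that formats new volumes with ext4 could add the parameter as follows (a minimal sketch; the class name is illustrative, and the secret parameters mirror the storage class example above):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: saunafs-sc-ext4   # illustrative name
provisioner: saunafs.csi.sarkan.io
allowVolumeExpansion: true
parameters:
  replicationGoalName: ec21
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: saunafs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: saunafs-csi
  csi.storage.k8s.io/node-publish-secret-name: saunafs-secret
  csi.storage.k8s.io/node-publish-secret-namespace: saunafs-csi
  csi.storage.k8s.io/controller-expand-secret-name: saunafs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: saunafs-csi
```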
- Create a Persistent Volume Claim (PVC) utilizing the created storage class:
  - `spec.resources.requests.storage` - requested storage size
  - `spec.storageClassName` - name of the storage class created in step 2
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-saunafs-csi-test
  namespace: saunafs-csi
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Ti
  storageClassName: saunafs-sc
```
AccessModes
`saunafs.csi.sarkan.io` supports all available access modes.
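Once the PVC is bound, a Pod can consume it like any other volume. A minimal sketch (the Pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: saunafs-test-pod        # illustrative name
  namespace: saunafs-csi
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data      # illustrative mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-saunafs-csi-test
```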
Block volume mode
You can configure the `volumeMode` parameter as `Block` to utilize a volume as a raw block device. This type of volume is provided to a Pod as a block device, without any filesystem on it.
- `spec.resources.requests.storage` - requested storage size
- `spec.storageClassName` - name of the storage class
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-saunafs-csi-test
  namespace: saunafs-csi
spec:
  volumeMode: Block
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Ti
  storageClassName: saunafs-sc
```
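A Pod consumes a block-mode volume through `volumeDevices` instead of `volumeMounts`. A minimal sketch (the Pod name, image, and device path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: saunafs-block-test-pod   # illustrative name
  namespace: saunafs-csi
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda  # illustrative device path inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-saunafs-csi-test
```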
Volume expansion
If you want to expand the volume, edit the requested storage value:

```sh
kubectl edit pvc <PVC_NAME>
```
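Alternatively, you can patch the PVC non-interactively; a sketch, assuming the PVC lives in the `saunafs-csi` namespace and the new size is `2Ti`:

```sh
kubectl patch pvc <PVC_NAME> -n saunafs-csi --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"2Ti"}}}}'
```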
Block mode
If the volume mode is set to `Block`, the pods that have this volume mounted need to be restarted in order to apply the size change.
Block mode
For already formatted devices, remember to extend the filesystem after the volume expansion, for example using the `resize2fs` command for ext3/ext4 or `xfs_growfs` for xfs.
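A sketch, assuming the device is published inside the Pod at `/dev/xvda` and an xfs volume is mounted at `/data` (both paths illustrative):

```sh
resize2fs /dev/xvda   # ext3/ext4: takes the block device
xfs_growfs /data      # xfs: takes the mount point
```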
Volume shrinking
Volume shrinking isn't supported in Kubernetes. For further information regarding volume resizing, see the Kubernetes documentation on expanding Persistent Volume Claims.
Static provisioning
You can expose an already existing SaunaFS directory by creating a static Persistent Volume.
- Create a secret storing SaunaFS connection info:
  - `stringData.masterAddress` - hostname of the SaunaFS master
  - `stringData.masterPort` - port number on which the SaunaFS master is listening (optional, default: `9421`)
  - `stringData.rootDir` - path of the SaunaFS export root directory (optional, default: `/`)
  - `stringData.password` - password required to access the export (optional)
```yaml
kind: Secret
apiVersion: v1
metadata:
  name: saunafs-secret
  namespace: saunafs-csi
type: Opaque
stringData:
  masterAddress: saunafs-cluster-internal.saunafs-operator.svc.cluster.local
  masterPort: "9421"
  rootDir: /
  password: S4unaF5#85&32
```
- Create a static Persistent Volume:
  - `spec.capacity.storage` - requested storage size
  - `spec.csi.volumeHandle` - directory path relative to the SaunaFS root directory specified in the connection info secret; the directory must already exist when the Persistent Volume is created
  - `spec.csi.nodePublishSecretRef.name` - name of the connection info secret
  - `spec.csi.nodePublishSecretRef.namespace` - namespace of the connection info secret
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-static-saunafs-csi
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  csi:
    driver: saunafs.csi.sarkan.io
    volumeHandle: /volumes/pv-saunafs-csi
    nodePublishSecretRef:
      name: saunafs-secret
      namespace: saunafs-csi
```
AccessModes
`saunafs.csi.sarkan.io` supports all available access modes.
- Create a Persistent Volume Claim (PVC):
  - `spec.volumeName` - name of the persistent volume to which the PVC will be bound
  - `spec.resources.requests.storage` - requested storage size
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-saunafs-csi-test
  namespace: saunafs-csi
spec:
  storageClassName: ""
  volumeName: pv-static-saunafs-csi
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Ti
```
Volume Snapshot
You can take a snapshot of a volume. You can only create a snapshot and restore from it within the same namespace.
- Create a `VolumeSnapshotClass`:
  - `deletionPolicy` - determines what happens on deletion of a `VolumeSnapshot` object. If it is set to `Delete`, the underlying storage snapshot will be deleted along with the `VolumeSnapshotContent` object. If the `deletionPolicy` is `Retain`, both the underlying snapshot and the `VolumeSnapshotContent` remain.
  - `parameters.*-secret-name` - name of the connection info secret
  - `parameters.*-secret-namespace` - namespace of the connection info secret
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: saunafs-snapshotclass
driver: saunafs.csi.sarkan.io
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: saunafs-cluster
  csi.storage.k8s.io/snapshotter-secret-namespace: default
```
- Create a `VolumeSnapshot` object to perform the snapshot:
  - `spec.volumeSnapshotClassName` - name of the snapshot class created in the previous step
  - `spec.source.persistentVolumeClaimName` - name of the PersistentVolumeClaim you want to snapshot
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pvc-saunafs-csi-test-snapshot
spec:
  volumeSnapshotClassName: saunafs-snapshotclass
  source:
    persistentVolumeClaimName: pvc-saunafs-csi-test
```
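Before restoring, you can check that the snapshot is ready to use (assuming it was created in the `saunafs-csi` namespace); the `READYTOUSE` column should show `true`:

```sh
kubectl get volumesnapshot pvc-saunafs-csi-test-snapshot -n saunafs-csi
```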
- Restore the volume snapshot by creating a new PVC and specifying `spec.dataSource`:
  - `spec.resources.requests.storage` - requested storage size; must be equal to or larger than the snapshot size
  - `spec.storageClassName` - name of a storage class
  - `spec.dataSource.name` - name of a volume snapshot
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-saunafs-csi-test-restore
spec:
  storageClassName: saunafs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: pvc-saunafs-csi-test-snapshot
```
TIP
For more information about volume snapshots, refer to the Kubernetes documentation.
WARNING
If you are using SaunaFS CSI as a separate component, make sure your Kubernetes distribution comes with the Snapshot CRDs and the Snapshot Controller. Check the CSI Snapshotter repository for more information.
Volume Cloning
SaunaFS volumes can be cloned. A cloned volume has exactly the same content as the source volume. The destination volume may use a different storage class than the source volume.
You can only clone a PVC when it exists in the same namespace as the destination PVC.
To clone a volume, create a `PersistentVolumeClaim` with `spec.dataSource` pointing to the existing source PVC:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-of-pvc-saunafs-csi-test
  namespace: saunafs-csi
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: saunafs-sc
  resources:
    requests:
      storage: 5Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: pvc-saunafs-csi-test
```
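The clone behaves like any other PVC; you can check that it binds successfully:

```sh
kubectl get pvc clone-of-pvc-saunafs-csi-test -n saunafs-csi
```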