Usage

Dynamic provisioning

Dynamic provisioning simplifies the management of storage resources in containerized environments by automatically provisioning storage volumes on demand. This eliminates the need for manual configuration, allowing administrators to dynamically assign storage resources based on application requirements.

To provision a volume backed by SaunaFS, you need to:

  1. Create a secret storing the SaunaFS connection info:
    • stringData.masterAddress - hostname of SaunaFS master
    • stringData.masterPort - port number on which the SaunaFS master is listening (optional, default: 9421)
    • stringData.rootDir - path of SaunaFS export root directory (optional, default: /)
    • stringData.password - password required to access the export (optional)
    kind: Secret
    apiVersion: v1
    metadata:
      name: saunafs-secret
      namespace: saunafs-csi
    type: Opaque
    stringData:
      masterAddress: saunafs-cluster-internal.saunafs-operator.svc.cluster.local
      masterPort: "9421"
      rootDir: /
      password: S4unaF5#85&32
    

SaunaFS Password

By default, SaunaFS exports are password protected. The export password is stored in a Kubernetes Secret named <CLUSTER_NAME>-default-exports-secret; you can retrieve it using kubectl:

  • $ kubectl get secrets saunafs-test-cluster-default-exports-secret -o jsonpath='{.data.saunafs-export-password}' | base64 --decode
    
  2. Create a storage class:
    • parameters.replicationGoalName - name of replication mode defined in mfsgoals.cfg or in SaunaFS Cluster (if SaunaFS Operator is used)
    • parameters.*-secret-name - name of the connection info secret
    • parameters.*-secret-namespace - namespace of the connection info secret
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: saunafs-sc
    provisioner: saunafs.csi.sarkan.io
    allowVolumeExpansion: true
    parameters:
      replicationGoalName: ec21
      csi.storage.k8s.io/provisioner-secret-name: saunafs-secret
      csi.storage.k8s.io/provisioner-secret-namespace: saunafs-csi
      csi.storage.k8s.io/node-publish-secret-name: saunafs-secret
      csi.storage.k8s.io/node-publish-secret-namespace: saunafs-csi
      csi.storage.k8s.io/controller-expand-secret-name: saunafs-secret
      csi.storage.k8s.io/controller-expand-secret-namespace: saunafs-csi
    

Custom file system

If you want volumes to be formatted on creation with one of the ext3, ext4, or xfs file systems, specify the csi.storage.k8s.io/fstype parameter in the storage class, as shown below. Volumes are formatted only in the Filesystem volume mode.
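
For example, a storage class that formats new volumes as ext4 adds one parameter to the definition from step 2 (a minimal sketch; only the parameters block is shown, the other fields stay as above):

parameters:
  replicationGoalName: ec21
  csi.storage.k8s.io/fstype: ext4
  # ...plus the csi.storage.k8s.io/*-secret parameters from step 2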

  3. Create a Persistent Volume Claim (PVC) using the created storage class:
    • spec.resources.requests.storage - requested storage size
    • spec.storageClassName - name of the storage class created in step 2
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-saunafs-csi-test
      namespace: saunafs-csi
    spec:
      volumeMode: Filesystem
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Ti
      storageClassName: saunafs-sc
    

AccessModes

saunafs.csi.sarkan.io supports all available access modes.
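
To consume the claim, reference it from a Pod (a minimal sketch; the Pod name, image, command, and mount path are illustrative):

kind: Pod
apiVersion: v1
metadata:
  name: saunafs-csi-test-pod
  namespace: saunafs-csi
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data    # the SaunaFS volume is mounted here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-saunafs-csi-test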

Block volume mode

You can set the volumeMode parameter to Block to use a volume as a raw block device. This type of volume is provided to a Pod as a block device, without any filesystem on it.

  • spec.resources.requests.storage - requested storage size
  • spec.storageClassName - name of the storage class
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-saunafs-csi-test
  namespace: saunafs-csi
spec:
  volumeMode: Block
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Ti
  storageClassName: saunafs-sc
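
A Pod consumes a Block claim through volumeDevices rather than volumeMounts (a minimal sketch; the Pod name, image, command, and device path are illustrative):

kind: Pod
apiVersion: v1
metadata:
  name: saunafs-csi-block-test-pod
  namespace: saunafs-csi
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda    # raw device exposed inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-saunafs-csi-test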

Volume expansion

If you want to expand the volume, you must edit the requested storage value:

kubectl edit pvc <PVC_NAME>
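
Alternatively, the same change can be applied non-interactively with kubectl patch (the 2Ti size is illustrative):

kubectl patch pvc <PVC_NAME> -p '{"spec":{"resources":{"requests":{"storage":"2Ti"}}}}'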

Block mode

If the volume mode is set to Block, the pods that have this volume mounted need to be restarted in order to apply the size change.
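
If the pods are managed by a Deployment, one way to restart them is (the resource name is illustrative):

kubectl rollout restart deployment <DEPLOYMENT_NAME>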

Block mode

For already formatted devices, remember to extend the filesystem after the volume expansion, for example with resize2fs for ext3/ext4 or xfs_growfs for xfs.
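
For instance, run inside a pod with access to the device (the device path and mount point are illustrative):

resize2fs /dev/xvda    # ext3/ext4 - takes the block device
xfs_growfs /mnt/data   # xfs - takes the mount point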

Volume shrinking

Volume shrinking isn't supported in Kubernetes. For further information, refer to the Kubernetes documentation on volume resizing.

Static provisioning

You can expose an already existing SaunaFS directory by creating a static Persistent Volume.

  1. Create a secret storing the SaunaFS connection info:

    • stringData.masterAddress - hostname of SaunaFS master
    • stringData.masterPort - port number on which the SaunaFS master is listening (optional, default: 9421)
    • stringData.rootDir - path of SaunaFS export root directory (optional, default: /)
    • stringData.password - password required to access the export (optional)
    kind: Secret
    apiVersion: v1
    metadata:
      name: saunafs-secret
      namespace: saunafs-csi
    type: Opaque
    stringData:
      masterAddress: saunafs-cluster-internal.saunafs-operator.svc.cluster.local
      masterPort: "9421"
      rootDir: /
      password: S4unaF5#85&32
    
  2. Create a static Persistent Volume:

    • spec.capacity.storage - requested storage size
    • spec.csi.volumeHandle - directory path relative to the SaunaFS root directory specified in the connection info secret; the directory must already exist when the Persistent Volume is created
    • spec.csi.nodePublishSecretRef.name - name of the connection info secret
    • spec.csi.nodePublishSecretRef.namespace - namespace of the connection info secret
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-static-saunafs-csi
    spec:
      capacity:
        storage: 1Ti
      accessModes:
        - ReadWriteMany
      csi:
        driver: saunafs.csi.sarkan.io
        volumeHandle: /volumes/pv-saunafs-csi
        nodePublishSecretRef:
          name: saunafs-secret
          namespace: saunafs-csi
    

AccessModes

saunafs.csi.sarkan.io supports all available access modes.

  3. Create a Persistent Volume Claim (PVC):
    • spec.volumeName - name of the persistent volume to which the PVC will be bound
    • spec.resources.requests.storage - requested storage size
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-saunafs-csi-test
      namespace: saunafs-csi
    spec:
      storageClassName: ""
      volumeName: pv-static-saunafs-csi
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Ti
    
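You can then confirm that the claim bound to the static volume (names match the examples above):

kubectl get pvc pvc-saunafs-csi-test -n saunafs-csi

The STATUS column should show Bound.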

Volume Snapshot

You can take a snapshot of a volume. A snapshot can only be created, and restored from, within the same namespace.

  1. Create VolumeSnapshotClass:
    • deletionPolicy - determines what happens on deletion of the VolumeSnapshot object. If set to Delete, the underlying storage snapshot is deleted along with the VolumeSnapshotContent object. If set to Retain, both the underlying snapshot and the VolumeSnapshotContent remain.
    • parameters.*-secret-name - name of the connection info secret
    • parameters.*-secret-namespace - namespace of the connection info secret
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: saunafs-snapshotclass
    driver: saunafs.csi.sarkan.io
    deletionPolicy: Delete
    parameters:
      csi.storage.k8s.io/snapshotter-secret-name: saunafs-secret
      csi.storage.k8s.io/snapshotter-secret-namespace: saunafs-csi
    
  2. Create a VolumeSnapshot object to take the snapshot (a readiness check is shown after this list):
    • spec.volumeSnapshotClassName - name of the snapshot class created in previous step
    • spec.source.persistentVolumeClaimName - name of the PersistentVolumeClaim which you want to snapshot
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: pvc-saunafs-csi-test-snapshot
      namespace: saunafs-csi
    spec:
      volumeSnapshotClassName: saunafs-snapshotclass
      source:
        persistentVolumeClaimName: pvc-saunafs-csi-test
    
  3. Restore the volume snapshot by creating a new PVC that specifies spec.dataSource:
    • spec.resources.requests.storage - requested storage size; must be equal to or larger than the snapshot size
    • spec.storageClassName - name of a storage class
    • spec.dataSource.name - name of a volume snapshot
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-saunafs-csi-test-restore
      namespace: saunafs-csi
    spec:
      storageClassName: saunafs-sc
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Ti
      dataSource:
        apiGroup: snapshot.storage.k8s.io
        kind: VolumeSnapshot
        name: pvc-saunafs-csi-test-snapshot
    
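Before restoring, you can check that the snapshot created in step 2 is ready to use (names match the examples above):

kubectl get volumesnapshot pvc-saunafs-csi-test-snapshot -n saunafs-csi

The READYTOUSE column should show true before you create the restore PVC.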

TIP

For more information about volume snapshots, refer to the Kubernetes documentation.

WARNING

If using SaunaFS CSI as a separate component, make sure your Kubernetes distribution comes with the Snapshot CRDs and the Snapshot Controller. Check the CSI Snapshotter repository for more information.
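
One way to check whether the Snapshot CRDs are present (these are the standard CRD names from the external-snapshotter project):

kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
  volumesnapshotclasses.snapshot.storage.k8s.io \
  volumesnapshotcontents.snapshot.storage.k8s.io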

Volume Cloning

SaunaFS volumes can be cloned. The cloned volume will have exactly the same content as the source volume. The destination volume may use a different storage class than the source volume.

You can only clone a PVC when it exists in the same namespace as the destination PVC.

To clone a volume, create a PersistentVolumeClaim whose dataSource references the existing PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-of-pvc-saunafs-csi-test
  namespace: saunafs-csi
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: saunafs-sc
  resources:
    requests:
      storage: 1Ti
  dataSource:
    kind: PersistentVolumeClaim
    name: pvc-saunafs-csi-test
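
The clone is provisioned like any other dynamic volume; you can watch it bind (names match the example above):

kubectl get pvc clone-of-pvc-saunafs-csi-test -n saunafs-csi -w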