Usage

The SaunaFS CSI driver allows Kubernetes workloads to use SaunaFS volumes as persistent storage, supporting both dynamic and static provisioning. Volumes can be provisioned dynamically by creating PersistentVolumeClaims (PVCs) associated with appropriate StorageClasses, or pre-existing SaunaFS directories can be exposed as static volumes.

The driver supports volume expansion, allowing for the resizing of volumes to accommodate growing storage needs. Additionally, it offers snapshot and cloning capabilities, facilitating data backup, recovery, and duplication processes within the Kubernetes environment.

  • Dynamic provisioning
    • Block volume mode
    • Volume expansion
  • Static provisioning
  • Volume Snapshot
  • Volume Cloning

Dynamic provisioning

Dynamic provisioning simplifies the management of storage resources in containerized environments by automatically provisioning storage volumes on-demand. This eliminates the need for manual configuration, allowing administrators to dynamically assign storage resources based on application requirements.

To provision a volume backed by SaunaFS, you need to:

  1. Create a secret storing SaunaFS connection info:

    • stringData.masterAddress - hostname of SaunaFS master
    • stringData.masterPort - port number on which the SaunaFS master is listening (optional, default: 9421)
    • stringData.rootDir - path of SaunaFS export root directory (optional, default: /)
    • stringData.password - password required to access the export (optional)
    kind: Secret
    apiVersion: v1
    metadata:
      name: saunafs-secret
      namespace: saunafs-csi
    type: Opaque
    stringData:
      masterAddress: saunafs-cluster-internal.saunafs-operator.svc.cluster.local
      masterPort: "9421"
      rootDir: /
      password: S4unaF5#85&32
    

    SaunaFS Password

    By default, when deploying a SaunaFS cluster using the SaunaFS Operator, the root export is secured with a password. This password is stored in a Kubernetes Secret named <CLUSTER_NAME>-default-exports-secret; you can retrieve it using kubectl:

    $ kubectl get secrets saunafs-test-cluster-default-exports-secret -o jsonpath={.data.saunafs-export-password} | base64 --decode
    
  2. Create a StorageClass that specifies the saunafs.csi.sarkan.io provisioner and includes parameters such as replication goal, trash time, and filesystem type to enable dynamic provisioning of persistent volumes backed by SaunaFS.

    • parameters.replicationGoalName - name of the replication goal defined in mfsgoals.cfg or in the SaunaFS Cluster (if the SaunaFS Operator is used)
    • parameters.trashTime - trash time in seconds; if not set, the default value is 0, meaning deleted files are removed immediately
    • parameters.volumeDirectory - this parameter specifies the directory path where volumes will be stored. If not set, the default value is '/volumes'
    • parameters.*-secret-name - name of the connection info secret
    • parameters.*-secret-namespace - namespace of the connection info secret
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: saunafs-sc
    provisioner: saunafs.csi.sarkan.io
    allowVolumeExpansion: true
    parameters:
      replicationGoalName: ec21
      trashTime: "0"
      volumeDirectory: "/volumes"
      csi.storage.k8s.io/provisioner-secret-name: saunafs-secret
      csi.storage.k8s.io/provisioner-secret-namespace: saunafs-csi
      csi.storage.k8s.io/node-publish-secret-name: saunafs-secret
      csi.storage.k8s.io/node-publish-secret-namespace: saunafs-csi
      csi.storage.k8s.io/controller-expand-secret-name: saunafs-secret
      csi.storage.k8s.io/controller-expand-secret-namespace: saunafs-csi
    

    TrashTime

    The trash time determines how long deleted files are kept before being permanently removed. You can control it by setting the trashTime parameter, specified in seconds, in the storage class.

    Custom file system

    To provision volumes with a filesystem other than the default used by SaunaFS, such as ext3, ext4 or xfs, you need to specify the desired filesystem type using the csi.storage.k8s.io/fstype parameter in your StorageClass definition. This parameter directs the CSI driver to format the volume with the specified filesystem during creation.

    It's important to note that volumes are formatted only when volumeMode is set to Filesystem; this parameter has no effect if the volume is provisioned in Block mode.
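
    For example, a StorageClass that provisions xfs-formatted volumes might look like the sketch below (the name saunafs-sc-xfs is illustrative; the secret references reuse the saunafs-secret from step 1):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: saunafs-sc-xfs
    provisioner: saunafs.csi.sarkan.io
    allowVolumeExpansion: true
    parameters:
      csi.storage.k8s.io/fstype: xfs
      replicationGoalName: ec21
      csi.storage.k8s.io/provisioner-secret-name: saunafs-secret
      csi.storage.k8s.io/provisioner-secret-namespace: saunafs-csi
      csi.storage.k8s.io/node-publish-secret-name: saunafs-secret
      csi.storage.k8s.io/node-publish-secret-namespace: saunafs-csi
      csi.storage.k8s.io/controller-expand-secret-name: saunafs-secret
      csi.storage.k8s.io/controller-expand-secret-namespace: saunafs-csi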

VolumeDirectory

Make sure that the specified volumeDirectory and the snapshotDirectory from the VolumeSnapshotClass are separate paths and don't contain any pre-existing data. Reusing a directory that was previously used for snapshots or volumes may lead to incorrect controller behavior.

  3. Create a Persistent Volume Claim (PVC) that uses the created storage class:
    • spec.resources.requests.storage - requested storage size
    • spec.storageClassName - name of the storage class created in step 2
    • spec.volumeMode - SaunaFS CSI driver supports both Filesystem and Block modes, allowing volumes to be mounted as filesystem directories or exposed as raw block devices within pods
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-saunafs-csi-test
      namespace: saunafs-csi
    spec:
      volumeMode: Filesystem
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Ti
      storageClassName: saunafs-sc
    

AccessModes

saunafs.csi.sarkan.io supports all available access modes.
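
Once the PVC is bound, any workload in the same namespace can mount it like any other persistent volume. A minimal sketch of a pod using the claim (the pod name, image, and mount path are illustrative):

kind: Pod
apiVersion: v1
metadata:
  name: saunafs-csi-test-pod
  namespace: saunafs-csi
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep infinity"]
    volumeMounts:
    - name: data
      mountPath: /data   # the SaunaFS volume is mounted here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-saunafs-csi-test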

Block volume mode

You can configure the volumeMode parameter as Block to utilize a volume as a raw block device. This type of volume is provided to a Pod as a block device, without any filesystem on it.

  • spec.resources.requests.storage - requested storage size
  • spec.storageClassName - name of the storage class
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-saunafs-csi-test
  namespace: saunafs-csi
spec:
  volumeMode: Block
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Ti
  storageClassName: saunafs-sc
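
In Block mode, a pod attaches the volume through spec.containers[].volumeDevices instead of volumeMounts, and the raw device appears inside the container at the given devicePath. A minimal sketch (the pod name, image, and device path are illustrative):

kind: Pod
apiVersion: v1
metadata:
  name: saunafs-csi-block-test-pod
  namespace: saunafs-csi
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep infinity"]
    volumeDevices:
    - name: data
      devicePath: /dev/xvda   # raw block device, no filesystem
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-saunafs-csi-test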

Volume expansion

If you want to expand the volume, you need to edit the spec.resources.requests.storage field in the PVC specification to request a larger storage size. Ensure that the associated StorageClass has allowVolumeExpansion set to true.

kubectl edit pvc <PVC_NAME>
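
Alternatively, you can request the new size non-interactively with kubectl patch; for example, to grow the claim to 2Ti:

kubectl patch pvc <PVC_NAME> -p '{"spec":{"resources":{"requests":{"storage":"2Ti"}}}}'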

Block mode

If the volume mode is set to Block, the pods that have this volume mounted need to be restarted in order to apply the size change.

For already formatted devices, remember to extend the filesystem after the volume expansion, e.g. with resize2fs for ext3/ext4 or xfs_growfs for xfs, as shown below.
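
A sketch of the commands, assuming the device is exposed in the pod at /dev/xvda and the xfs volume is mounted at /data (both paths are illustrative):

resize2fs /dev/xvda   # ext3/ext4: grows the filesystem to the new device size
xfs_growfs /data      # xfs: takes the mount point, not the device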

Volume shrinking

Volume shrinking isn't supported in Kubernetes. For further information, refer to the Kubernetes documentation on resizing persistent volumes.

Static provisioning

To expose an existing SaunaFS directory to Kubernetes workloads, you can create a static PersistentVolume (PV) that references the specific directory within your SaunaFS cluster. This approach allows you to integrate pre-existing data into your Kubernetes environment without modifying the underlying storage.

  1. Create a secret storing SaunaFS connection info:

    • stringData.masterAddress - hostname of SaunaFS master
    • stringData.masterPort - port number on which the SaunaFS master is listening (optional, default: 9421)
    • stringData.rootDir - path of SaunaFS export root directory (optional, default: /)
    • stringData.password - password required to access the export (optional)
    kind: Secret
    apiVersion: v1
    metadata:
      name: saunafs-secret
      namespace: saunafs-csi
    type: Opaque
    stringData:
      masterAddress: saunafs-cluster-internal.saunafs-operator.svc.cluster.local
      masterPort: "9421"
      rootDir: /
      password: S4unaF5#85&32
    
  2. Create a Static Persistent Volume:

    • spec.capacity.storage - requested storage size
    • spec.csi.volumeHandle - directory path relative to the SaunaFS root directory specified in the connection info secret; the directory must already exist when the PersistentVolume is created
    • spec.csi.nodePublishSecretRef.name - name of the connection info secret
    • spec.csi.nodePublishSecretRef.namespace - namespace of the connection info secret
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-static-saunafs-csi
    spec:
      capacity:
        storage: 1Ti
      accessModes:
        - ReadWriteMany
      csi:
        driver: saunafs.csi.sarkan.io
        volumeHandle: /volumes/pv-saunafs-csi
        nodePublishSecretRef:
          name: saunafs-secret
          namespace: saunafs-csi
    

    AccessModes

    saunafs.csi.sarkan.io supports all available access modes.

  3. Create a Persistent Volume Claim (PVC):

    • spec.volumeName - name of the persistent volume to which the PVC will be bound
    • spec.resources.requests.storage - requested storage size
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-saunafs-csi-test
      namespace: saunafs-csi
    spec:
      storageClassName: ""
      volumeName: pv-static-saunafs-csi
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Ti
    
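    You can verify that the claim has bound to the static volume before using it in a pod (the STATUS column should show Bound):

    $ kubectl get pvc pvc-saunafs-csi-test -n saunafs-csi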

Volume Snapshot

You can create a snapshot of a volume using the SaunaFS CSI driver. However, both the VolumeSnapshot and the associated PersistentVolumeClaim (PVC) must reside in the same Kubernetes namespace. Similarly, restoring a volume from a snapshot is only supported within the same namespace.

For more information, refer to the official Kubernetes documentation on Volume Snapshots.

  1. Create VolumeSnapshotClass:
    • deletionPolicy - determines what happens when a VolumeSnapshot object is deleted. If it is set to Delete, the underlying storage snapshot is deleted along with the VolumeSnapshotContent object. If it is set to Retain, both the underlying snapshot and the VolumeSnapshotContent remain.
    • parameters.snapshotDirectory - this parameter specifies the directory path where snapshots will be stored. If not set, the default value is '/snapshots'
    • parameters.*-secret-name - name of the connection info secret
    • parameters.*-secret-namespace - namespace of the connection info secret
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: saunafs-snapshotclass
    driver: saunafs.csi.sarkan.io
    deletionPolicy: Delete
    parameters:
      snapshotDirectory: "/snapshots"
      csi.storage.k8s.io/snapshotter-secret-name: saunafs-secret
      csi.storage.k8s.io/snapshotter-secret-namespace: saunafs-csi
    
  2. Create a VolumeSnapshot resource that references the existing PersistentVolumeClaim (PVC) you want to snapshot, ensuring both resources are within the same namespace.
    • spec.volumeSnapshotClassName - name of the snapshot class created in previous step
    • spec.source.persistentVolumeClaimName - name of the PersistentVolumeClaim which you want to snapshot
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: pvc-saunafs-csi-test-snapshot
      namespace: saunafs-csi
    spec:
      volumeSnapshotClassName: saunafs-snapshotclass
      source:
        persistentVolumeClaimName: pvc-saunafs-csi-test
    
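    Before restoring, you can check that the snapshot is ready to use (the READYTOUSE column should show true):

    $ kubectl get volumesnapshot pvc-saunafs-csi-test-snapshot -n saunafs-csi
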
  3. Restore the volume snapshot by creating a new PVC and specifying spec.dataSource:
    • spec.resources.requests.storage - requested storage size; must be equal to or larger than the snapshot size
    • spec.storageClassName - name of a storage class
    • spec.dataSource.name - name of a volume snapshot
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-saunafs-csi-test-restore
      namespace: saunafs-csi
    spec:
      storageClassName: saunafs-sc
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Ti
      dataSource:
        apiGroup: snapshot.storage.k8s.io
        kind: VolumeSnapshot
        name: pvc-saunafs-csi-test-snapshot
    

Warning

If you use SaunaFS CSI as a separate component, make sure your Kubernetes distribution comes with the Snapshot CRDs and the Snapshot Controller. Check the CSI Snapshotter repository for more information.

Volume Cloning

SaunaFS volumes can be cloned. A cloned volume has exactly the same content as the source volume, and the destination volume may use a different storage class than the source.

You can only clone a PVC when it exists in the same namespace as the destination PVC.

To clone a volume, create a PersistentVolumeClaim whose dataSource references the existing PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-of-pvc-saunafs-csi-test
  namespace: saunafs-csi
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: saunafs-sc
  resources:
    requests:
      storage: 1Ti
  dataSource:
    kind: PersistentVolumeClaim
    name: pvc-saunafs-csi-test
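
Once provisioned, the clone behaves like any other PVC; you can verify that it has bound before mounting it in a workload:

$ kubectl get pvc clone-of-pvc-saunafs-csi-test -n saunafs-csi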