Usage
To deploy a SaunaFS cluster within a Kubernetes environment, you need to create a SaunafsCluster resource and define at least one metadata volume and one chunk volume. Based on the specifications of these resources, the SaunaFS Operator will automatically deploy the appropriate components, such as metadata servers, chunkservers, services, and secrets.
The SaunaFS Operator simplifies and automates many configuration processes, while still offering full flexibility in customizing the cluster setup. This enables users to tailor the deployment to their specific requirements, benefiting from automation without sacrificing control.
SaunaFS Cluster creation
The primary source of configuration for a SaunaFS cluster is the SaunafsCluster custom resource. It enables the configuration of metadata servers and chunkservers, the exposure of the cluster outside the Kubernetes environment, and the definition of selectors that automate the creation of metadata and chunkserver volumes.
You can deploy more than one SaunaFS cluster in a single Kubernetes cluster. A comprehensive description of all available configuration options can be found in the SaunafsCluster CRD section.
Create a simple SaunaFS cluster resource:
apiVersion: saunafs.sarkan.io/v1beta1
kind: SaunafsCluster
metadata:
  name: saunafs-cluster
  namespace: saunafs-operator
spec:
  exposeExternally: true
  pvSelectors:
    metadataStorage: example-cluster=metadata
    chunkStorage: example-cluster=chunks
  pvcSelectors:
    metadataStorage: example-cluster=metadata
    chunkStorage: example-cluster=chunks
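Assuming the manifest above is saved as saunafs-cluster.yaml (the filename is arbitrary), it can be applied in the usual way:
kubectl apply -f saunafs-cluster.yaml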
Warning
Two SaunaFS clusters shouldn't have the same pvSelectors within a Kubernetes cluster, nor the same pvcSelectors within a single namespace.
Display SaunaFS cluster information:
kubectl get saunafscluster saunafs-cluster --namespace saunafs-operator
For a SaunaFS cluster to work, you must create at least one Metadata volume and one Chunk volume. If you have defined selectors in the SaunafsCluster specification, you can use them to label the selected Persistent Volumes or Persistent Volume Claims and let the operator create volumes automatically, as described in the sections below.
If you prefer to create the Metadata and Chunk volumes manually for your SaunaFS cluster, please refer to the Manual volumes creation section.
Tips
If you intend to deploy a SaunaFS cluster in a namespace other than saunafs-operator
, you must create a Kubernetes Secret containing Sarkan registry credentials within that specific namespace. You can use the following command:
kubectl create secret docker-registry saunafs-operator-image-pull-secret \
--namespace <your-namespace> \
--docker-server=registry.indevops.com \
--docker-username=<your-username> \
--docker-password=<your-password>
Tips
Metadata servers should run on different nodes for better failure tolerance. The Metadata server node is chosen by the kube-scheduler based on the PVC used by the metadata volume, so it's recommended to set proper node affinity for the PVCs used for metadata volumes.
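For illustration, the sketch below shows one way to pin a metadata volume to a particular node by using nodeAffinity on a local PersistentVolume; the node name node-1, the path, the capacity, and the label are placeholder assumptions, not values required by the operator:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: metadata-pv-node-1
  labels:
    example-cluster: metadata
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /mnt/metadata
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1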
SaunaFS volumes creation from persistent volumes
If both PV and PVC selectors are defined in the SaunaFS cluster resource, you can label PersistentVolumes to match those selectors. The SaunaFS Operator will automatically create a matching PVC and deploy the appropriate SaunaFS volumes based on the specified selectors.
- List available PersistentVolumes:
kubectl get persistentvolume
- Add the appropriate label to the PersistentVolume to designate it for use by a Metadata server or Chunkserver:
kubectl label pv <pv_name>... <pv_selector>
- Display created volumes status:
kubectl get saunafschunkvolumes,saunafsmetadatavolumes --namespace saunafs-operator
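For example, using the chunk selector from the cluster manifest shown earlier and a hypothetical PersistentVolume named pv-chunk-01, the labeling command could look like this:
kubectl label pv pv-chunk-01 example-cluster=chunks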
Chunkserver mount options
When provisioning volumes with Hostdisk CSI, ensure that PersistentVolumes for Chunkservers include the mount options recommended by SaunaFS. You can use the following command for that:
kubectl patch pv <PVs>... -p '{"spec":{"mountOptions":["noatime","nodiratime","nosuid","nodev","noexec","nofail","largeio","allocsize=33554432","inode64"]}}'
Important
It's recommended to use separate PersistentVolumes and PersistentVolumeClaims for the Chunkserver and Metadata server. Sharing volumes between them may lead to uncontrolled memory and CPU resource consumption.
SaunaFS volumes creation from persistent volume claims
When the SaunaFS cluster resource is configured with a PVC selector, you can manually label a PersistentVolumeClaim to match that selector. The corresponding SaunaFS volumes will automatically use the matching PVC. It works in a similar way to the PV selector mechanism, but without the need to label the PersistentVolume.
- List available PersistentVolumeClaims:
kubectl get persistentvolumeclaims
- Add the appropriate label to the PersistentVolumeClaim to designate it for use by a Metadata server or Chunkserver:
kubectl label pvc <pvc_name>... <pvc_selector>
- Display created volumes status:
kubectl get saunafschunkvolumes,saunafsmetadatavolumes --namespace saunafs-operator
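For example, assuming a PVC named metadata-pvc-1 in the saunafs-operator namespace and the metadata selector from the earlier cluster manifest:
kubectl label pvc metadata-pvc-1 example-cluster=metadata --namespace saunafs-operator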
Manual SaunaFS volumes creation
Alternatively, you can manually create SaunaFS volumes by providing the names of the SaunaFS cluster and the PVC, without relying on selectors.
- Create SaunaFS Metadata Volume or SaunaFS Chunk Volume resources with a properly set persistent volume claim and cluster name.
clusterName - Name of the SaunaFS cluster this metadata or chunk volume belongs to.
persistentVolumeClaimName - Name of the PVC to use.
apiVersion: saunafs.sarkan.io/v1beta1
kind: SaunafsMetadataVolume
metadata:
  name: example-metadata-volume
  namespace: saunafs-operator
spec:
  clusterName: saunafs-cluster
  persistentVolumeClaimName: pvc-1

apiVersion: saunafs.sarkan.io/v1beta1
kind: SaunafsChunkVolume
metadata:
  name: example-chunks-volume
  namespace: saunafs-operator
spec:
  clusterName: saunafs-cluster
  persistentVolumeClaimName: pvc-2
Namespace
SaunaFS Volumes must be in the same namespace as the cluster to which they're assigned.
SaunaFS resources status
To ensure successful deployment, use kubectl
to inspect the status of all SaunaFS-related resources within the target namespace:
kubectl get saunafsmetadatavolumes,saunafschunkvolumes,saunafsclusters --namespace saunafs-operator
kubectl get saunafs --namespace saunafs-operator # Same as above
SaunaFS Common Gateway Interface (CGI)
SaunaFS CGI offers a web-based GUI that presents SaunaFS status and various statistics.
To access SaunaFS CGI, find its load balancer IP:
$ kubectl get svc -n saunafs-operator
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)
saunafs-cgi-service        LoadBalancer   10.102.184.175   10.20.32.97   9425:30224/TCP
saunafs-cluster-internal   ClusterIP      10.105.20.190    <none>        9419/TCP,9420/TCP,9421/TCP
- In a web browser, enter "<CGI EXTERNAL IP>:9425".
- In the master address field, specify the name of the internal cluster service, for example, saunafs-cluster-internal.
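If no external LoadBalancer IP is available in your environment, a quick alternative is to port-forward the CGI service (using the service name from the example output above) and open localhost:9425 instead:
kubectl port-forward svc/saunafs-cgi-service 9425:9425 -n saunafs-operator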
Mounting SaunaFS
There are several ways to mount a SaunaFS cluster: you can use the native SaunaFS client, available for Linux, macOS, and Windows, or access the cluster via NFS or S3-compatible protocols.
- To mount a SaunaFS cluster using the native client, you need to install the SaunaFS client appropriate for your operating system, such as Linux or Windows.
- The SaunaFS master address is exposed via a Kubernetes service located in the same namespace as the cluster. The service name is prefixed with the cluster's name.
kubectl get svc -n saunafs-operator
- Get the SaunaFS cluster's internal IP for use from inside the Kubernetes cluster.
- Get the SaunaFS cluster's external IP for use from outside the Kubernetes cluster (requires setting the exposeExternally field in the SaunafsCluster resource).
- By default, the SaunaFS Operator creates a password-protected export of the root directory named <CLUSTER-NAME>-root-export. The export password is stored in a Kubernetes Secret named the same as the export, which can be retrieved using kubectl:
$ kubectl get secrets <CLUSTER-NAME>-root-export -o jsonpath={.data.password} | base64 --decode
- Use the SaunaFS client to mount the filesystem:
$ mkdir -p /mnt/sarkan
$ sfsmount -o askpassword -H "<MASTER IP>" /mnt/sarkan
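As a convenience, the master address can also be read directly from the service with kubectl; the sketch below assumes the internal service is named saunafs-cluster-internal, as in the example output above:
$ MASTER_IP=$(kubectl get svc saunafs-cluster-internal -n saunafs-operator -o jsonpath='{.spec.clusterIP}')
$ sfsmount -o askpassword -H "$MASTER_IP" /mnt/sarkan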
External access
You need to set spec.exposeExternally in the SaunafsCluster resource for an external load balancer to be created. More about access from outside the Kubernetes cluster can be found here.
SaunaFS Exports
SaunaFS Exports serve as access control for sfsmounts.
To define custom exports, create a SaunafsExport
custom resource. Note that the specified export path must exist within the directory structure of the SaunaFS cluster. You can create the path manually, for example, by mounting the root directory and creating the necessary subdirectories.
- Example SaunaFS Export for the /volumes directory:
spec.clusterName - Name of the SaunaFS cluster this export applies to.
spec.address - IP address, network or address range this export can be accessed from.
spec.path - Path to be exported, relative to your SaunaFS root.
spec.options - Comma-separated list of export options; refer to the SaunaFS documentation for the list of possible options. Defaults to 'readonly'.
apiVersion: saunafs.sarkan.io/v1beta1
kind: SaunafsExport
metadata:
  name: saunafs-export-volumes
spec:
  clusterName: saunafs-cluster
  address: "*"
  path: "/volumes"
  options: "rw"
Default exports
Root (/) and metadata (.) exports are created by default, so you don't have to create them.
Password Protection
When you define a custom SaunafsExport, a new Secret is automatically generated for it, named the same as the export. This Secret contains a generated password and is used to protect access to the export; you can retrieve it using kubectl:
$ kubectl get secrets <EXPORT-NAME> -o jsonpath={.data.password} | base64 --decode
If you want to change the generated password, you can manually edit the corresponding Kubernetes Secret. Use the stringData section to provide a new password; Kubernetes will handle the base64 encoding automatically.
apiVersion: v1
data:
  masterAddress: c2F1bmFmcy10ZXN0LWNsdXN0ZXItaW50ZXJuYWwuc2F1bmFmcy1vcGVyYXRvci5zdmMuY2x1c3Rlci5sb2NhbA==
  masterPort: OTQyMQ==
  rootDir: Lw==
stringData:
  password: "newPassword#123"
kind: Secret
metadata:
  name: saunafs-cluster-root-export
  namespace: saunafs-operator
type: Opaque
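Equivalently, assuming the Secret name from the example above, the password can be changed in place with a merge patch; the API server converts the stringData field to base64-encoded data automatically:
kubectl patch secret saunafs-cluster-root-export -n saunafs-operator --type merge -p '{"stringData":{"password":"newPassword#123"}}'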
Mounting a specific subfolder
To mount a subfolder of a SaunaFS export, provide the target path using the -S <path> flag or the -o sfssubfolder=<path> option with sfsmount.
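For example, to mount the /volumes subdirectory used by the export defined earlier (the path and mount point are illustrative):
$ sfsmount -o askpassword -H "<MASTER IP>" -S /volumes /mnt/sarkan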
SaunaFS admin password
To perform administrative tasks using the SaunaFS administration tool saunafs-adm, you must obtain the administrator password. For detailed instructions on manual administration of SaunaFS components, please refer to the official SaunaFS documentation.
The SaunaFS admin password is created automatically when the cluster is created. Use kubectl to obtain the admin password for the SaunaFS cluster:
kubectl get secret "<SAUNAFS-CLUSTER-NAME>-admin-secret" --template={{.data.password}} | base64 --decode
SaunaFS Cluster scaling
There are several ways to scale up the SaunaFS cluster. If you have defined selectors in the SaunafsCluster specification, you can use them to label the selected Persistent Volumes or Persistent Volume Claims and let the operator create new volumes automatically, as described in the volume creation sections above.
If you prefer to create the Metadata and Chunk volumes manually for your SaunaFS cluster, please refer to the Manual volumes creation section.
Note that the number of labeled Persistent Volumes or Persistent Volume Claims doesn't directly correspond to the number of Chunkserver pods. For more details, refer to the design section.
Cluster downsizing
It's possible to reduce the number of Metadata and Chunk volumes in a SaunaFS cluster. However, caution is required: improper removal can result in data loss.
If you intend to remove a Metadata or Chunk volume that was created using label selectors, you must remove the label from the corresponding Persistent Volume and Persistent Volume Claim.
Important
If you manually delete a Metadata or Chunk volume without removing the appropriate label, the SaunaFS Operator will interpret this as an unexpected failure and automatically recreate the volume.
To unlabel a Persistent Volume, use the kubectl label
command with the label key followed by a hyphen (-), which tells Kubernetes to remove the label. For example, to remove a label with the key example-cluster
, run:
kubectl label pv <pv-name> example-cluster-
If you didn't use selectors and created the volumes manually, you can also delete them manually.
Caution
Ensure that the corresponding PVC isn't removed before the associated Metadata Server or Chunkserver pods and volumes are fully deleted. Removing the PVC prematurely may cause issues with proper chunk migration and could lead to data loss.
Volume Finalizers and Deletion Protection in SaunaFS Cluster
It's not possible to delete all Metadata Servers and Chunkservers in a SaunaFS cluster. At least one Metadata Server and one Chunkserver must remain operational. This behavior is enforced through volume finalizers, which prevent uncontrolled deletion of critical volumes.
A metadata volume with the highest metadata version is protected by a finalizer to prevent accidental removal, thereby safeguarding against potential data loss.
A similar mechanism applies to chunk volumes. If a chunk volume still stores any chunks, it's protected from deletion. When a chunk volume is marked for removal, the chunks it contains are first evicted and redistributed to other Chunkservers in the cluster. The chunk volume is only deleted once all data has been successfully migrated.
These safeguards ensure cluster stability and data integrity during scaling down or maintenance operations.
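If a volume appears stuck in deletion, you can inspect its finalizers with a standard kubectl query; the volume name below is a placeholder:
kubectl get saunafschunkvolumes <volume-name> -n saunafs-operator -o jsonpath='{.metadata.finalizers}'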
SaunaFS Cluster deletion
The only supported way to remove all Metadata Servers and Chunkservers is by deleting the SaunafsCluster resource. This triggers the SaunaFS Operator to clean up all resources associated with the cluster, including services, secrets, exports, volumes, and pods. However, certain resources, such as Persistent Volumes and Persistent Volume Claims, aren't deleted and must be removed manually if no longer needed.
Before deleting the cluster, make sure:
- All important data is backed up or migrated.
- No running workloads are still using SaunaFS volumes.
- Chunk data has been safely evicted and volumes are ready to be removed.
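Assuming the example cluster name used throughout this page, deletion then comes down to removing the SaunafsCluster resource:
kubectl delete saunafscluster saunafs-cluster --namespace saunafs-operator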