This tutorial will show you how to set up democratic-csi
as a storage class on a k8s cluster.
We will show you how to set it up on ZoL (ZFS on Linux) or TrueNAS storage.
democratic-csi implements the csi (container storage interface) spec, providing storage for various container orchestration systems (ie: Kubernetes). In this tutorial, the storage backends are TrueNAS and ZoL on Ubuntu.

<aside> ⚠️
Do not use democratic-csi on microk8s, since there is an issue when mounting the persistent volume.
</aside>

Make sure your storage server is democratic-csi compatible. (Refer to the link in the reference for more details.)

Set up the democratic-csi helm chart repo
helm repo add democratic-csi https://democratic-csi.github.io/charts/
helm repo update
Generate an ssh key pair
ssh-keygen
Generating public/private rsa key pair.
The public key file $HOME/.ssh/id_rsa.pub will look like this:
ssh-rsa .....
.....
..... $USER@$HOST
Paste the public key into the authorized_keys file inside the $HOME/.ssh folder of the root user on the storage server (ZoL or TrueNAS).
Copy the contents of the private key file $HOME/.ssh/id_rsa for later use.
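If the root user on the storage server still accepts password logins, ssh-copy-id can append the key for you; this is just a convenience sketch, and zfs.example.com is a placeholder for your own storage host:

# copy the public key into root's authorized_keys on the storage server
# (zfs.example.com is a placeholder for your storage host's FQDN or IP)
ssh-copy-id -i $HOME/.ssh/id_rsa.pub root@zfs.example.com

# confirm key-based login works before putting the private key into the helm values
ssh -i $HOME/.ssh/id_rsa root@zfs.example.com "hostname"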
Prepare the helm values yaml file
For ZOL storage, use the helm values file zfs-nfs-csi.yaml:
csiDriver:
  name: "org.democratic-csi.nfs"
  fsGroupPolicy: File

storageClasses:
  - name: zfs-nfs-csi
    defaultClass: true
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      fsType: nfs
    mountOptions:
      - noatime
      - nfsvers=4.2
    secrets:
      provisioner-secret:
      controller-publish-secret:
      node-stage-secret:
      node-publish-secret:
      controller-expand-secret:

driver:
  config:
    driver: zfs-generic-nfs
    sshConnection:
      host: <Put the storage host fqdn here>
      port: 22
      username: root
      # use either password or key
      password: ""
      privateKey: |
        <Paste the ssh private key of your authorized server here>
    zfs:
      datasetProperties:
        compression: lz4
        "primehub:pvc": "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      datasetParentName: <Use the dataset name you created on the storage server>
      # do NOT comment this option out even if you don't plan to use snapshots, just leave it with dummy value
      detachedSnapshotsDatasetParentName: <Use the dataset name you created on the storage server>
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: 0
      datasetPermissionsGroup: 0
    nfs:
      shareHost: <Put the storage host fqdn here>
      shareStrategy: "setDatasetProperties"
      shareStrategySetDatasetProperties:
        properties:
          sharenfs: "rw,no_subtree_check,no_root_squash"
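The datasetParentName and detachedSnapshotsDatasetParentName values above refer to datasets you have to create yourself. Below is a minimal sketch for a ZoL server, assuming an existing pool named tank and an Ubuntu host; adjust the names to your own pool and layout:

# on the ZoL storage server, assuming an existing pool named "tank"
zfs create tank/k8s
zfs create tank/k8s/nfs         # use as datasetParentName
zfs create tank/k8s/snapshots   # use as detachedSnapshotsDatasetParentName

# the zfs-generic-nfs driver shares datasets over NFS via the sharenfs property,
# so the storage host also needs an NFS server installed, e.g. on Ubuntu:
sudo apt install -y nfs-kernel-server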
For TrueNAS, use the helm values file truenas-csi.yaml:
csiDriver:
  name: "org.democratic-csi.nfs"

storageClasses:
  - name: truenas-nfs-csi
    defaultClass: true
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      fsType: nfs
    mountOptions:
      - noatime
      - nfsvers=4.2
    secrets:
      provisioner-secret:
      controller-publish-secret:
      node-stage-secret:
      node-publish-secret:
      controller-expand-secret:

# if your cluster supports snapshots you may enable below
volumeSnapshotClasses: []

driver:
  config:
    # please see the most up-to-date example of the corresponding config here:
    # https://github.com/democratic-csi/democratic-csi/tree/master/examples
    # YOU MUST COPY THE DATA HERE INLINE!
    driver: freenas-nfs
    instance_id:
    httpConnection:
      protocol: http
      host: <Put your TrueNAS FQDN or fixed IP here>
      port: 80
      apiKey: <Use the api key retrieved from the TrueNAS server>
      username: root
      allowInsecure: true
      apiVersion: 2
    sshConnection:
      host: <Put your TrueNAS FQDN or fixed IP here>
      port: 22
      username: root
      # use either password or key
      privateKey: |
        <Paste the ssh private key of your authorized server here>
    zfs:
      datasetParentName: <Use the dataset name you created on the storage server, ie: hdd/nfs>
      detachedSnapshotsDatasetParentName: <Use the dataset name you created on the storage server, ie: hdd/snapshots>
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: 0
      datasetPermissionsGroup: 0
    nfs:
      shareHost: <Put your TrueNAS FQDN or fixed IP here>
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      shareMaprootUser: root
      shareMaprootGroup: wheel
      shareMapallUser: ""
      shareMapallGroup: ""
For both types of storage, you need to modify the following options to match your environment:

Option | Description |
---|---|
reclaimPolicy | Set to Retain if you want to keep the pv after the pvc is deleted. |
volumeBindingMode | Set to Immediate if you want the pv created as soon as the pvc is created. |
allowVolumeExpansion | Set to true if you want to expand the pvc size later. |
mountOptions | Adjust the options used when the server shares the volume to the cluster. |
host | Put the fqdn or fixed IP of your storage server here. |
privateKey | Put the ssh private key of your authorized server here. (root-like access is recommended) |
compression | Use lz4 to reach a balance between storage size and performance; other algorithms are also viable (such as zstd, but performance may be reduced with that algorithm). |
datasetParentName | Use the zfs pool and dataset name you created on the storage server, for example: $zfspool_name/$datasetname |
detachedSnapshotsDatasetParentName | This option is not used here, but it must be filled in because the csi driver requires it, for example: $zfspool_name/snapshots |
shareHost | Put the fqdn or fixed IP of your storage server here. |
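For TrueNAS, you can optionally sanity-check the apiKey against the TrueNAS v2 REST API before installing the chart. This assumes plain HTTP on port 80 as configured above; truenas.example.com and the key are placeholders:

# a JSON response with system information means the API key and host settings are usable
curl -s -H "Authorization: Bearer <your api key>" \
  http://truenas.example.com/api/v2.0/system/info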
Install democratic-csi helm chart
For ZOL storage:
helm search repo democratic-csi/
helm upgrade \
  --install \
  --create-namespace \
  --values zfs-nfs-csi.yaml \
  --namespace democratic-csi \
  zfs-nfs democratic-csi/democratic-csi
For TrueNAS:
helm search repo democratic-csi/
helm upgrade \
  --install \
  --create-namespace \
  --values truenas-csi.yaml \
  --namespace democratic-csi \
  zfs-nfs democratic-csi/democratic-csi
Check the helm chart status after the installation is completed:
helm ls -Aa
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
zfs-nfs democratic-csi 2 2024-10-01 14:11:07.695916112 +0800 CST deployed democratic-csi-0.14.6
Check if the pods are running:
kubectl -n democratic-csi get pods
NAME READY STATUS RESTARTS AGE
zfs-nfs-democratic-csi-controller-xxxxxxxxxx-xxxxx 6/6 Running 0 7d14h
zfs-nfs-democratic-csi-node-xxxxx 4/4 Running 0 7d14h
Get the storage class information
kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
zfs-nfs-csi org.democratic-csi.nfs Retain Immediate true 7d14h
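Before (or instead of) making the class the default, you can verify dynamic provisioning with a throwaway PVC. The sketch below uses the ZOL class name zfs-nfs-csi; substitute truenas-nfs-csi if that is the class you created:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-test-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: zfs-nfs-csi
  resources:
    requests:
      storage: 1Gi
EOF

# the PVC should become Bound and a matching PV should appear
kubectl get pvc csi-test-pvc
kubectl get pv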
Set the democratic-csi storage as the default storage class
For ZOL storage:
kubectl patch storageclass zfs-nfs-csi -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
For TrueNAS:
kubectl patch storageclass truenas-nfs-csi -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
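To confirm that the nodes can actually mount the NFS share (the step that breaks on microk8s, per the warning above), you can run a short-lived pod against the test PVC from the previous step:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: csi-test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: csi-test-pvc
EOF

# once the pod is Running, the write above landed on the ZFS-backed NFS share
kubectl delete pod csi-test-pod
kubectl delete pvc csi-test-pvc

Note that with reclaimPolicy set to Retain, deleting the test PVC leaves a Released PV (and the dataset behind it) on the storage server until you remove them manually.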