Moving to TrueNAS and Democratic CSI for Kubernetes Persistent Storage

I read an article about TrueNAS enabling container storage for Kubernetes by using the Democratic CSI driver to provide direct access to the storage system, and jumped right in.

Until now I was using my own DIY NAS server to provide various services to the homelab environment, including NFS. It worked well, to be honest, but it had no CSI driver.


We are using our Kubernetes homelab to deploy democratic-csi.

You will need a TrueNAS Core server. Note that installation of TrueNAS is beyond the scope of this article.

The Plan

In this article, we are going to do the following:

  1. Configure TrueNAS Core 12.0-U3 to provide NFS services.
  2. Configure democratic-csi for Kubernetes using Helm.
  3. Create Kubernetes persistent volumes.

Make a note of the IP address of your TrueNAS server; it will be needed in the driver configuration below.

Our Kubernetes nodes are pre-configured to use NFS, therefore no change is required. If you’re deploying a new set of CentOS servers, make sure to install the package nfs-utils.
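If you are starting from a fresh node, the NFS client packages can be installed with something like the following (a sketch for CentOS/RHEL; package names differ on other distributions):

```shell
# Install the NFS client utilities on a CentOS/RHEL node
sudo yum install -y nfs-utils

# Confirm the NFSv4 mount helper is available
mount.nfs4 -V
```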

Configure TrueNAS Core

A shout-out to Jonathan Gazeley and his blog post that helped me to get TrueNAS configured in no time.

Create a Storage Pool

We’ve created a dataset called homelab-hdd/k8s/nfs (the pool homelab-hdd with nested datasets k8s and nfs).
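For reference, the same layout can be created from the TrueNAS shell with ZFS commands (a sketch, assuming the pool homelab-hdd already exists; creating the datasets via the web UI achieves the same result):

```shell
# Create nested datasets under the existing pool homelab-hdd
zfs create homelab-hdd/k8s
zfs create homelab-hdd/k8s/nfs

# Confirm the layout
zfs list -r homelab-hdd/k8s
```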

Enable NFS and SSH Services

We are interested in NFS and SSH; no other services are required. Note that S.M.A.R.T. should be enabled by default.

Configure NFS Service

Make sure to enable the following:

  1. Enable NFSv4.
  2. NFSv3 ownership model for NFSv4.
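Once the service is running, an NFSv4 mount can be tested from any Linux client (a sketch; replace the placeholder server address and export path with your own):

```shell
# List exports advertised by the TrueNAS server
showmount -e <truenas-ip>

# Test an NFSv4 mount of the dataset, then clean up
sudo mount -t nfs4 <truenas-ip>:/mnt/homelab-hdd/k8s/nfs /mnt/test
sudo umount /mnt/test
```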

Configure SSH Passwordless Authentication

Kubernetes will require access to the TrueNAS API with a privileged user. For the homelab server, we will use the root user with passwordless authentication.

Generate an SSH keypair:

$ ssh-keygen -t rsa -C [email protected] -f truenas_rsa
$ cat ./ 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5uztUHLU+dYDtj+23MQEpRt/ov4JZG+pw9bKRCbkBKC8aDhxYtJrNyoViGSR4diXORDDH8KA3JCbKVfHKDQhrXy+13aUGOVA/k/oCP/IgQH9spU1QHPCJOCMhIAVzp2lePLzC2ZKcFusFk0mpkCbGTbklt+uLs96+IsrOyhifBgOdmAt7o2FK8H6hl/Wddgk5ARSjPrc10aPxgGo/Gwg4RjGpopOtuRQeHNCC7/RAXzRxJLS7l7BYr/4yI+Gi4kas8sFWx2D0df0c3d/+SM2mccdNRCySywXlgD9tFhf6uCFpfdsnarzMxmH3P0LnxMDWwhisoohIHR3ErzkY4RgX [email protected]

Navigate to Accounts > Users > root and add the public SSH key. Also change the shell to bash.

Verify that you can SSH into the TrueNAS server using the SSH key and the root account:

$ ssh -i ./truenas_rsa [email protected]
Last login: Fri Apr 23 19:47:07 2021
FreeBSD 12.2-RELEASE-p6 f2858df162b(HEAD) TRUENAS 

	TrueNAS (c) 2009-2021, iXsystems, Inc.
	All rights reserved.
	TrueNAS code is released under the modified BSD license with some
	files copyrighted by (c) iXsystems, Inc.

	For more information, documentation, help or support, go here:
Welcome to TrueNAS

Generate a TrueNAS API Key

Navigate to Settings (cog icon) > API Keys and generate a key. Give it a name, e.g. root. The key will be used to authenticate with the TrueNAS HTTP server.
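The key can be verified against the TrueNAS v2 REST API with curl, for example by querying system information (a sketch; substitute your own server address and API key):

```shell
# Query the TrueNAS v2 API using the key as a bearer token
curl -s -H "Authorization: Bearer <api-key>" \
  http://<truenas-ip>/api/v2.0/system/info
```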

Configure Kubernetes democratic-csi

Helm Installation

$ helm repo add democratic-csi https://democratic-csi.github.io/charts/
$ helm repo update
$ helm search repo democratic-csi/

Configure Helm Values File

The content of our file freenas-nfs.yaml can be seen below. Example configuration can be found in democratic-csi’s GitHub repository.

csiDriver:
  name: "org.democratic-csi.nfs"

storageClasses:
- name: freenas-nfs-csi
  defaultClass: false
  reclaimPolicy: Retain
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  parameters:
    fsType: nfs
  mountOptions:
  - noatime
  - nfsvers=4

driver:
  config:
    driver: freenas-nfs
    httpConnection:
      protocol: http
      host: # the TrueNAS server IP address
      port: 80
      # This is the API key that we generated previously
      apiKey: 1-fAP3JzEaXXLGyKam8ZnotarealkeyIKJ6nnKUX5ARd5v0pw0cADEkqnH1S079v
      username: root
      allowInsecure: true
      apiVersion: 2
    sshConnection:
      host: # the TrueNAS server IP address
      port: 22
      username: root
      # This is the SSH key that we generated for passwordless authentication
      privateKey: |
        -----BEGIN RSA PRIVATE KEY-----
        -----END RSA PRIVATE KEY-----
    zfs:
      # Make sure to use the storage pool that was created previously
      datasetParentName: homelab-hdd/k8s/nfs/vols
      detachedSnapshotsDatasetParentName: homelab-hdd/k8s/nfs/snaps
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: root
      datasetPermissionsGroup: wheel
    nfs:
      shareHost: # the TrueNAS server IP address
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      shareMaprootUser: root
      shareMaprootGroup: wheel
      shareMapallUser: ""
      shareMapallGroup: ""

Install the democratic-csi Helm Chart

$ helm upgrade \
  --install \
  --create-namespace \
  --values freenas-nfs.yaml \
  --namespace democratic-csi \
  zfs-nfs democratic-csi/democratic-csi

Verify that pods are up and running:

$ kubectl -n democratic-csi get pods
NAME                                                 READY   STATUS    RESTARTS   AGE
zfs-nfs-democratic-csi-controller-5dbfcb7896-89tqv   4/4     Running   0          39h
zfs-nfs-democratic-csi-node-6nz29                    3/3     Running   0          39h
zfs-nfs-democratic-csi-node-bdt47                    3/3     Running   0          39h
zfs-nfs-democratic-csi-node-c7p6h                    3/3     Running   0          39h
$ kubectl get sc
NAME              PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
freenas-nfs-csi   org.democratic-csi.nfs   Retain          Immediate           true                   39h

Create Persistent Volumes

We are going to create a persistent volume for Grafana. See the content of the file grafana-pvc.yml below.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-grafana
  namespace: monitoring
  labels:
    app: grafana
  annotations:
    volume.beta.kubernetes.io/storage-class: "freenas-nfs-csi"
spec:
  storageClassName: freenas-nfs-csi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

Create a persistent volume claim:

$ kubectl apply -f ./grafana-pvc.yml

Verify that the claim has been created and bound:

$ kubectl -n monitoring get pvc -l app=grafana
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
nfs-pvc-grafana        Bound    pvc-371089f8-3c6e-49c6-b6d7-bf2b8ec16108   500Mi      RWO            freenas-nfs-csi   18h
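A minimal sketch of how a Grafana deployment could consume this claim (the volume name and mount path here are illustrative, not taken from an actual manifest in this post):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana
        volumeMounts:
        - name: grafana-data
          mountPath: /var/lib/grafana   # Grafana's default data directory
      volumes:
      - name: grafana-data
        persistentVolumeClaim:
          claimName: nfs-pvc-grafana    # the PVC created above
```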


7 thoughts on “Moving to TrueNAS and Democratic CSI for Kubernetes Persistent Storage”

  1. It’s nice,
    and I followed the guide.
    But at the end i get:

    kubectl logs -n democratic-csi zfs-nfs-democratic-csi-controller-ffbb878db-w7ptj external-provisioner
    I0523 04:01:17.938148       1 feature_gate.go:243] feature gates: &{map[]}
    I0523 04:01:17.938209       1 csi-provisioner.go:132] Version: v2.1.0
    I0523 04:01:17.938222       1 csi-provisioner.go:155] Building kube configs for running in cluster...
    I0523 04:01:17.954599       1 connection.go:153] Connecting to unix:///csi-data/csi.sock
    I0523 04:01:17.957244       1 common.go:111] Probing CSI driver for readiness
    I0523 04:01:17.957255       1 connection.go:182] GRPC call: /csi.v1.Identity/Probe
    I0523 04:01:17.957268       1 connection.go:183] GRPC request: {}
    I0523 04:01:19.033063       1 connection.go:185] GRPC response: {}
    I0523 04:01:19.033243       1 connection.go:186] GRPC error: rpc error: code = Internal desc = Error: connect EHOSTUNREACH
    E0523 04:01:19.033271       1 csi-provisioner.go:193] CSI driver probe failed: rpc error: code = Internal desc = Error: connect EHOSTUNREACH

    Any idea what could be wrong?

    • Hi, the error message suggests that the TrueNAS host is unreachable. Can you check the following:

      1. Is the IP address of the TrueNAS server correct in your values file?
      2. Is SSH service on the TrueNAS server enabled?
      3. Is firewall on the TrueNAS server configured to allow incoming SSH traffic?
      4. Can you SSH into the TrueNAS server as the root user from a Kubernetes host?
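      Items 3 and 4 can be checked from a Kubernetes node, for example (a sketch; substitute your own TrueNAS address):

```shell
# From a Kubernetes node: check that the TrueNAS SSH and HTTP ports are reachable
nc -zv <truenas-ip> 22
nc -zv <truenas-ip> 80

# Try logging in with the same key the driver uses
ssh -i ./truenas_rsa root@<truenas-ip> uname -a
```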

  2. Hi Lisenet, everything works as indicated. Thank you.

    quick question, what if I have two freenas/truenas and I want them both as PV for the cluster that way I do not have to put every deployments on one NFS server? what values should I replace on the freenas-nfs.yml? Appreciate your help.

    • Hi, thanks, I’m glad it worked for you.

      May I ask what is it that you are trying to achieve? If you need high availability for storage, then I’d suggest to use multiple disks with TrueNAS in a RAID. If you need high availability for the NFS server, then you should use one that has two separate power circuits and two UPS systems, two network card with a bonded interface for redundancy etc.

      If you want to use two instances of TrueNAS, then you will have to create two PVs, one on each storage array. Please see Kubernetes documentation for more info about PVs.

  3. Hi Lisenet,
    Thanks very much for your guide, it worked well!
    But I still have a problem. I tried to apply the PVC that we created as an “existingClaim” in a configuration file for mariadb, and I also indicated the StorageClass, but nothing was applied to the existing PVC; meanwhile, a new PVC was pending…
    Perhaps you can help me with that ?

  4. Hello everyone

    Thank you for the documentation! I’m stuck with container creation. The pods are always in a state of “ContainerCreating”.

    [email protected]:~$ kubectl -n democratic-csi get pods
    zfs-nfs-democratic-csi-node-q8pfr 0/3 ContainerCreating 0 4m36s
    zfs-nfs-democratic-csi-node-z4vgd 0/3 ContainerCreating 0 4m36s
    zfs-nfs-democratic-csi-node-rgqzp 0/3 ContainerCreating 0 4m36s
    zfs-nfs-democratic-csi-controller-5c4f449c6d-mjv2k 4/4 Running 0 4m36s

    I0622 05:39:36.845632 1 csi-provisioner.go:132] Version: v2.1.0
    I0622 05:39:36.845649 1 csi-provisioner.go:155] Building kube configs for running in cluster…
    I0622 05:39:36.853723 1 connection.go:153] Connecting to unix:///csi-data/csi.sock
    I0622 05:39:39.942724 1 common.go:111] Probing CSI driver for readiness
    I0622 05:39:39.942739 1 connection.go:182] GRPC call: /csi.v1.Identity/Probe
    I0622 05:39:39.942743 1 connection.go:183] GRPC request: {}
    I0622 05:39:40.053698 1 connection.go:185] GRPC response: {"ready":{"value":true}}
    I0622 05:39:40.053801 1 connection.go:186] GRPC error:
    I0622 05:39:40.053809 1 connection.go:182] GRPC call: /csi.v1.Identity/GetPluginInfo
    I0622 05:39:40.053812 1 connection.go:183] GRPC request: {}
    I0622 05:39:40.056092 1 connection.go:185] GRPC response: {"name":"org.democratic-csi.nfs","vendor_version":"1.2.0"}
    I0622 05:39:40.056134 1 connection.go:186] GRPC error:
    I0622 05:39:40.056142 1 csi-provisioner.go:202] Detected CSI driver org.democratic-csi.nfs
    I0622 05:39:40.056151 1 connection.go:182] GRPC call: /csi.v1.Identity/GetPluginCapabilities
    I0622 05:39:40.056158 1 connection.go:183] GRPC request: {}
    I0622 05:39:40.059424 1 connection.go:185] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]}
    I0622 05:39:40.059558 1 connection.go:186] GRPC error:
    I0622 05:39:40.059568 1 connection.go:182] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
    I0622 05:39:40.059573 1 connection.go:183] GRPC request: {}
    I0622 05:39:40.061972 1 connection.go:185] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":9}}}]}
    I0622 05:39:40.062058 1 connection.go:186] GRPC error:
    I0622 05:39:40.062111 1 csi-provisioner.go:244] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
    I0622 05:39:40.062779 1 controller.go:753] Using saving PVs to API server in background
    I0622 05:39:40.063639 1 leaderelection.go:243] attempting to acquire leader lease democratic-csi/org-democratic-csi-nfs…

    Any advice how to troubleshoot this issue?
