Moving to TrueNAS and Democratic CSI for Kubernetes Persistent Storage

I read an article about TrueNAS enabling container storage for Kubernetes by using the Democratic CSI driver to provide direct access to the storage system, and jumped right in.

Until now I had been using my own DIY NAS server to provide various services to the homelab environment, including NFS. It worked well, to be honest, but it did not have a CSI driver.

Prerequisites

We are using our Kubernetes homelab to deploy democratic-csi.

You will need a TrueNAS Core server. Note that installation of TrueNAS is beyond the scope of this article.

The Plan

In this article, we are going to do the following:

  1. Configure TrueNAS Core 12.0-U3 to provide NFS services.
  2. Configure democratic-csi for Kubernetes using Helm.
  3. Create Kubernetes persistent volumes.

The IP address of the TrueNAS server is 10.11.1.5.

Our Kubernetes nodes are pre-configured to use NFS, therefore no change is required. If you’re deploying a new set of CentOS servers, make sure to install the nfs-utils package.
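For reference, on a fresh CentOS node the NFS client utilities can be installed as follows (a minimal sketch; the package manager will differ on other distributions):

```shell
# Install the NFS client utilities on a CentOS/RHEL node
sudo yum install -y nfs-utils

# Optional sanity check: the NFSv4 mount helper should now be present
command -v mount.nfs4
```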

Configure TrueNAS Core

A shout-out to Jonathan Gazeley and his blog post that helped me to get TrueNAS configured in no time.

Create a Storage Pool

We’ve created a storage pool called homelab-hdd with nested datasets k8s/nfs.
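If you prefer the shell to the web UI, the nested datasets can also be created from the TrueNAS console (a sketch, assuming the pool homelab-hdd already exists):

```shell
# Create the k8s/nfs dataset hierarchy under the existing pool (-p creates parents)
zfs create -p homelab-hdd/k8s/nfs

# List the datasets to confirm
zfs list -r homelab-hdd/k8s
```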

Enable NFS and SSH Services

We are only interested in NFS and SSH; no other service is required. Note that S.M.A.R.T. should be enabled by default.

Configure NFS Service

Make sure to enable the following:

  1. Enable NFSv4.
  2. NFSv3 ownership model for NFSv4.

Configure SSH Passwordless Authentication

Kubernetes will require access to the TrueNAS API with a privileged user. For the homelab server, we will use the root user with passwordless authentication.

Generate an SSH keypair:

$ ssh-keygen -t rsa -C [email protected] -f truenas_rsa
$ cat ./truenas_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5uztUHLU+dYDtj+23MQEpRt/ov4JZG+pw9bKRCbkBKC8aDhxYtJrNyoViGSR4diXORDDH8KA3JCbKVfHKDQhrXy+13aUGOVA/k/oCP/IgQH9spU1QHPCJOCMhIAVzp2lePLzC2ZKcFusFk0mpkCbGTbklt+uLs96+IsrOyhifBgOdmAt7o2FK8H6hl/Wddgk5ARSjPrc10aPxgGo/Gwg4RjGpopOtuRQeHNCC7/RAXzRxJLS7l7BYr/4yI+Gi4kas8sFWx2D0df0c3d/+SM2mccdNRCySywXlgD9tFhf6uCFpfdsnarzMxmH3P0LnxMDWwhisoohIHR3ErzkY4RgX [email protected]

Navigate to Accounts > Users > root and add the public SSH key. Also change the shell to bash.

Verify that you can SSH into the TrueNAS server using the SSH key and the root account:

$ ssh -i ./truenas_rsa root@10.11.1.5
Last login: Fri Apr 23 19:47:07 2021
FreeBSD 12.2-RELEASE-p6 f2858df162b(HEAD) TRUENAS 

	TrueNAS (c) 2009-2021, iXsystems, Inc.
	All rights reserved.
	TrueNAS code is released under the modified BSD license with some
	files copyrighted by (c) iXsystems, Inc.

	For more information, documentation, help or support, go here:
	http://truenas.com
Welcome to TrueNAS
truenas#

Generate a TrueNAS API Key

Navigate to Settings (cog icon) > API Keys and generate a key. Give it a name, e.g. root. The key will be used to authenticate with the TrueNAS HTTP server.
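As an optional sanity check (my own habit, not required by democratic-csi), you can verify the key against the TrueNAS v2.0 REST API with curl, which accepts it as a Bearer token. Replace the placeholder with your own key:

```shell
# Hypothetical API check against the TrueNAS server from this article (10.11.1.5)
API_KEY="1-fAP3Jz..."   # your generated API key

curl -s -H "Authorization: Bearer ${API_KEY}" \
  "http://10.11.1.5/api/v2.0/system/info"
```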

Configure Kubernetes democratic-csi

Helm Installation

$ helm repo add democratic-csi https://democratic-csi.github.io/charts/
$ helm repo update
$ helm search repo democratic-csi/

Configure Helm Values File

The content of our file freenas-nfs.yaml can be seen below. Example configuration can be found in democratic-csi’s GitHub repository.

csiDriver:
  name: "org.democratic-csi.nfs"

storageClasses:
- name: freenas-nfs-csi
  defaultClass: false
  reclaimPolicy: Retain
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  parameters:
    fsType: nfs
      
  mountOptions:
  - noatime
  - nfsvers=4
  secrets:
    provisioner-secret:
    controller-publish-secret:
    node-stage-secret:
    node-publish-secret:
    controller-expand-secret:

driver:
  config:
    driver: freenas-nfs
    instance_id:
    httpConnection:
      protocol: http
      host: 10.11.1.5
      port: 80
      # This is the API key that we generated previously
      apiKey: 1-fAP3JzEaXXLGyKam8ZnotarealkeyIKJ6nnKUX5ARd5v0pw0cADEkqnH1S079v
      username: root
      allowInsecure: true
      apiVersion: 2
    sshConnection:
      host: 10.11.1.5
      port: 22
      username: root
      # This is the SSH key that we generated for passwordless authentication
      privateKey: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEogIBAAKCAQEAubs7VBy1PnWA7Y/ttzEBKUbf6L+CWRvqcPWykQm5ASgvGg4c
        [...]
        tl4biLpseFQgV3INtM0NNW4+LlTSAnjApDtNzttX/h5HTBLHyoc=
        -----END RSA PRIVATE KEY-----
    zfs:
      # Make sure to use the storage pool that was created previously
      datasetParentName: homelab-hdd/k8s/nfs/vols
      detachedSnapshotsDatasetParentName: homelab-hdd/k8s/nfs/snaps
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: root
      datasetPermissionsGroup: wheel
    nfs:
      shareHost: 10.11.1.5
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      shareMaprootUser: root
      shareMaprootGroup: wheel
      shareMapallUser: ""
      shareMapallGroup: ""

Install the democratic-csi Helm Chart

$ helm upgrade \
  --install \
  --create-namespace \
  --values freenas-nfs.yaml \
  --namespace democratic-csi \
  zfs-nfs democratic-csi/democratic-csi

Verify that pods are up and running:

$ kubectl -n democratic-csi get pods
NAME                                                 READY   STATUS    RESTARTS   AGE
zfs-nfs-democratic-csi-controller-5dbfcb7896-89tqv   4/4     Running   0          39h
zfs-nfs-democratic-csi-node-6nz29                    3/3     Running   0          39h
zfs-nfs-democratic-csi-node-bdt47                    3/3     Running   0          39h
zfs-nfs-democratic-csi-node-c7p6h                    3/3     Running   0          39h
$ kubectl get sc
NAME              PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
freenas-nfs-csi   org.democratic-csi.nfs   Retain          Immediate           true                   39h

Create Persistent Volumes

We are going to create a persistent volume claim for Grafana. See the content of the file grafana-pvc.yml below.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-grafana
  namespace: monitoring
  labels:
    app: grafana
  annotations:
    volume.beta.kubernetes.io/storage-class: "freenas-nfs-csi"
spec:
  storageClassName: freenas-nfs-csi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

Create a persistent volume claim:

$ kubectl apply -f ./grafana-pvc.yml

Verify:

$ kubectl -n monitoring get pvc -l app=grafana
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
nfs-pvc-grafana        Bound    pvc-371089f8-3c6e-49c6-b6d7-bf2b8ec16108   500Mi      RWO            freenas-nfs-csi   18h
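To consume the claim, reference it from a pod spec. Below is a minimal sketch (the Grafana image tag and mount path are illustrative assumptions, not part of our actual monitoring deployment):

```yaml
# Minimal pod fragment mounting the PVC created above
apiVersion: v1
kind: Pod
metadata:
  name: grafana
  namespace: monitoring
spec:
  containers:
  - name: grafana
    image: grafana/grafana:7.5.5   # example tag
    volumeMounts:
    - name: grafana-data
      mountPath: /var/lib/grafana
  volumes:
  - name: grafana-data
    persistentVolumeClaim:
      claimName: nfs-pvc-grafana
```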


References

https://github.com/democratic-csi/democratic-csi

17 thoughts on “Moving to TrueNAS and Democratic CSI for Kubernetes Persistent Storage”

  1. It’s nice, and I followed the guide. But at the end I get:

    kubectl logs -n democratic-csi zfs-nfs-democratic-csi-controller-ffbb878db-w7ptj external-provisioner
    
    I0523 04:01:17.938148       1 feature_gate.go:243] feature gates: &{map[]}
    I0523 04:01:17.938209       1 csi-provisioner.go:132] Version: v2.1.0
    I0523 04:01:17.938222       1 csi-provisioner.go:155] Building kube configs for running in cluster...
    I0523 04:01:17.954599       1 connection.go:153] Connecting to unix:///csi-data/csi.sock
    I0523 04:01:17.957244       1 common.go:111] Probing CSI driver for readiness
    I0523 04:01:17.957255       1 connection.go:182] GRPC call: /csi.v1.Identity/Probe
    I0523 04:01:17.957268       1 connection.go:183] GRPC request: {}
    I0523 04:01:19.033063       1 connection.go:185] GRPC response: {}
    I0523 04:01:19.033243       1 connection.go:186] GRPC error: rpc error: code = Internal desc = Error: connect EHOSTUNREACH 172.17.3.20:22
    E0523 04:01:19.033271       1 csi-provisioner.go:193] CSI driver probe failed: rpc error: code = Internal desc = Error: connect EHOSTUNREACH 172.17.3.20:22
    

    Any idea what could be wrong?

    • Hi, the error message suggests that the TrueNAS host is unreachable. Can you check the following:

      1. Is this 172.17.3.20 the IP address of the TrueNAS server?
      2. Is SSH service on the TrueNAS server enabled?
      3. Is firewall on the TrueNAS server configured to allow incoming SSH traffic?
      4. Can you SSH into the TrueNAS server as the root user from a Kubernetes host?

  2. Hi Lisenet, everything works as indicated. Thank you.

    Quick question: what if I have two FreeNAS/TrueNAS servers and I want them both to provide PVs for the cluster, so that I do not have to put every deployment on one NFS server? What values should I replace in freenas-nfs.yaml? Appreciate your help.

    • Hi, thanks, I’m glad it worked for you.

      May I ask what it is that you are trying to achieve? If you need high availability for storage, then I’d suggest using multiple disks with TrueNAS in a RAID. If you need high availability for the NFS server, then you should use one that has two separate power circuits and two UPS systems, two network cards with a bonded interface for redundancy, etc.

      If you want to use two instances of TrueNAS, then you will have to create two PVs, one on each storage array. Please see Kubernetes documentation for more info about PVs.

  3. Hi Lisenet,
    Thanks very much for your guide, it worked well!
    But I still have a problem. I tried to use the PVC that we created as an “existingClaim” in a MariaDB configuration file, and I also specified the StorageClass, but nothing was applied to the existing PVC; meanwhile, a new PVC was stuck pending...
    Perhaps you can help me with that?

  4. Hello everyone

    Thank you for the documentation! I’m stuck with container creation. The pods are always in the “ContainerCreating” state.

    [email protected]:~$ kubectl -n democratic-csi get pods
    NAME READY STATUS RESTARTS AGE
    zfs-nfs-democratic-csi-node-q8pfr 0/3 ContainerCreating 0 4m36s
    zfs-nfs-democratic-csi-node-z4vgd 0/3 ContainerCreating 0 4m36s
    zfs-nfs-democratic-csi-node-rgqzp 0/3 ContainerCreating 0 4m36s
    zfs-nfs-democratic-csi-controller-5c4f449c6d-mjv2k 4/4 Running 0 4m36s

    I0622 05:39:36.845632 1 csi-provisioner.go:132] Version: v2.1.0
    I0622 05:39:36.845649 1 csi-provisioner.go:155] Building kube configs for running in cluster...
    I0622 05:39:36.853723 1 connection.go:153] Connecting to unix:///csi-data/csi.sock
    I0622 05:39:39.942724 1 common.go:111] Probing CSI driver for readiness
    I0622 05:39:39.942739 1 connection.go:182] GRPC call: /csi.v1.Identity/Probe
    I0622 05:39:39.942743 1 connection.go:183] GRPC request: {}
    I0622 05:39:40.053698 1 connection.go:185] GRPC response: {"ready":{"value":true}}
    I0622 05:39:40.053801 1 connection.go:186] GRPC error:
    I0622 05:39:40.053809 1 connection.go:182] GRPC call: /csi.v1.Identity/GetPluginInfo
    I0622 05:39:40.053812 1 connection.go:183] GRPC request: {}
    I0622 05:39:40.056092 1 connection.go:185] GRPC response: {"name":"org.democratic-csi.nfs","vendor_version":"1.2.0"}
    I0622 05:39:40.056134 1 connection.go:186] GRPC error:
    I0622 05:39:40.056142 1 csi-provisioner.go:202] Detected CSI driver org.democratic-csi.nfs
    I0622 05:39:40.056151 1 connection.go:182] GRPC call: /csi.v1.Identity/GetPluginCapabilities
    I0622 05:39:40.056158 1 connection.go:183] GRPC request: {}
    I0622 05:39:40.059424 1 connection.go:185] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]}
    I0622 05:39:40.059558 1 connection.go:186] GRPC error:
    I0622 05:39:40.059568 1 connection.go:182] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
    I0622 05:39:40.059573 1 connection.go:183] GRPC request: {}
    I0622 05:39:40.061972 1 connection.go:185] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":9}}}]}
    I0622 05:39:40.062058 1 connection.go:186] GRPC error:
    I0622 05:39:40.062111 1 csi-provisioner.go:244] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
    I0622 05:39:40.062779 1 controller.go:753] Using saving PVs to API server in background
    I0622 05:39:40.063639 1 leaderelection.go:243] attempting to acquire leader lease democratic-csi/org-democratic-csi-nfs...

    Any advice how to troubleshoot this issue?

  5. Hi Lisenet,
    It was working fine for me but for the past few days now I have been getting the following error in the logs:

    I0728 14:48:13.040870 1 feature_gate.go:243] feature gates: &{map[]}
    I0728 14:48:13.040926 1 csi-provisioner.go:138] Version: v2.2.2
    I0728 14:48:13.040951 1 csi-provisioner.go:161] Building kube configs for running in cluster...
    I0728 14:48:13.048174 1 connection.go:153] Connecting to unix:///csi-data/csi.sock
    I0728 14:48:13.048711 1 common.go:111] Probing CSI driver for readiness
    I0728 14:48:13.048734 1 connection.go:182] GRPC call: /csi.v1.Identity/Probe
    I0728 14:48:13.048740 1 connection.go:183] GRPC request: {}
    I0728 14:48:13.050782 1 connection.go:185] GRPC response: {}
    I0728 14:48:13.051093 1 connection.go:186] GRPC error: rpc error: code = Unavailable desc = Bad Gateway: HTTP status code 502; transport: missing content-type field
    E0728 14:48:13.051126 1 csi-provisioner.go:203] CSI driver probe failed: rpc error: code = Unavailable desc = Bad Gateway: HTTP status code 502; transport: missing content-type field

  6. GREAT walkthrough! Worked perfectly... however, how do I add ANOTHER storage class using a different disk? I created a 2nd pool on a different disk array... so do I create ANOTHER Helm chart, or can I update the original and just append a new ZFS and NFS block with a different path?

    • Thanks Eric. While I’ve not tried this, I think you should be able to define it in the same Helm chart under storageClasses:. Make sure to give it a different name though.

  7. Hi Lisenet, it’s not working.

    freenas-nfs.yaml 
    --------
    csiDriver:
      name: "org.democratic-csi.nfs"
    
    storageClasses:
    - name: freenas-nfs-csi
      defaultClass: false
      reclaimPolicy: Delete
      volumeBindingMode: Immediate
      allowVolumeExpansion: true
      parameters:
        fsType: nfs
    
      mountOptions:
      - noatime
      - nfsvers=4
      secrets:
        provisioner-secret:
        controller-publish-secret:
        node-stage-secret:
        node-publish-secret:
        controller-expand-secret:
    
    driver:
      config:
        driver: freenas-nfs
        instance_id:
        httpConnection:
          protocol: http
          host: 192.168.30.13
          port: 80
          username: root
          password: "password"
          allowInsecure: true
        sshConnection:
          host: 192.168.30.13
          port: 22
          username: root
          # use either password or key
          password: "pwassword"
        zfs:
          datasetParentName: default/k8s/nfs/v         #pool/dataset/dataset/dataset/dataset
          detachedSnapshotsDatasetParentName: default/k8s/nfs/s       #pool/dataset/dataset/dataset/dataset
          datasetEnableQuotas: true
          datasetEnableReservation: false
          datasetPermissionsMode: "0777"
          datasetPermissionsUser: root
          datasetPermissionsGroup: wheel
        nfs:
          shareHost: 192.168.30.13
          shareAlldirs: false
          shareAllowedHosts: []
          shareAllowedNetworks: []
          shareMaprootUser: root
          shareMaprootGroup: wheel
          shareMapallUser: ""
          shareMapallGroup: ""
    

    – truenas ssh connection test

    $ ssh root@192.168.30.13
    root@192.168.30.13's password:
    Last login: Mon Oct 24 10:49:11 2022 from 192.168.30.160
    FreeBSD 12.2-RELEASE-p3 7851f4a452d(HEAD) TRUENAS
    
            TrueNAS (c) 2009-2021, iXsystems, Inc.
            All rights reserved.
            TrueNAS code is released under the modified BSD license with some
            files copyrighted by (c) iXsystems, Inc.
    
            For more information, documentation, help or support, go here:
            http://truenas.com
    Welcome to TrueNAS
    truenas#
    

    – freenas NFS Service settings

    $ sudo showmount -e 192.168.30.13
    [sudo] password for ubuntu:
    $ k get po -n democratic-csi
    NAME                                                 READY   STATUS             RESTARTS      AGE
    zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2   2/5     CrashLoopBackOff   23 (8s ago)   7m4s
    zfs-nfs-democratic-csi-node-dhx58                    4/4     Running            0             7m4s
    zfs-nfs-democratic-csi-node-j2m2b                    4/4     Running            0             7m4s
    zfs-nfs-democratic-csi-node-ptnvf                    4/4     Running            0             7m4s
    $ k logs po/zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2 -n democratic-csi
    error: a container name must be specified for pod zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2, choose one of: [external-provisioner external-resizer external-snapshotter csi-driver csi-proxy]
    $ k describe po/zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2 -n democratic-csi
    Name:         zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2
    Namespace:    democratic-csi
    Priority:     0
    Node:         kube-node03/192.168.30.63
    Start Time:   Mon, 24 Oct 2022 13:41:49 +0900
    Labels:       app.kubernetes.io/component=controller-linux
                  app.kubernetes.io/csi-role=controller
                  app.kubernetes.io/instance=zfs-nfs
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=democratic-csi
                  pod-template-hash=6db5558c48
    Annotations:  checksum/configmap: 4c738d703c418eb6a75fa3a097249ef9bf02c0678a385c5bb41bd6c2416beef9
                  checksum/secret: a31e63d7cfb6b568f3328d3ad9e3a71ee33725559a9f2dc5c64a1cfe601f4ffd
                  cni.projectcalico.org/containerID: 7ea282f3f6c20e2b814d6154e219e6166b7b617df4d279eb749a54487d549df1
                  cni.projectcalico.org/podIP: 192.168.161.46/32
                  cni.projectcalico.org/podIPs: 192.168.161.46/32
    Status:       Running
    IP:           192.168.161.46
    IPs:
      IP:           192.168.161.46
    Controlled By:  ReplicaSet/zfs-nfs-democratic-csi-controller-6db5558c48
    Containers:
      external-provisioner:
        Container ID:  docker://db54ec6363dfeed2118d347dea9969ba2e2e844ca85af7eee867d0e5d732800b
        Image:         k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
        Image ID:      docker-pullable://k8s.gcr.io/sig-storage/[email protected]:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119
        Port:          
        Host Port:     
        Args:
          --v=5
          --leader-election
          --leader-election-namespace=democratic-csi
          --timeout=90s
          --worker-threads=10
          --extra-create-metadata
          --csi-address=/csi-data/csi.sock
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Error
          Exit Code:    1
          Started:      Mon, 24 Oct 2022 13:48:45 +0900
          Finished:     Mon, 24 Oct 2022 13:48:45 +0900
        Ready:          False
        Restart Count:  6
        Environment:
          NODE_NAME:   (v1:spec.nodeName)
          NAMESPACE:  democratic-csi (v1:metadata.namespace)
          POD_NAME:   zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2 (v1:metadata.name)
        Mounts:
          /csi-data from socket-dir (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dsq8c (ro)
      external-resizer:
        Container ID:  docker://0a129107c481e5630a58a2ee0c4c8fe8e0e91bfd4108d1f3b399c9f174792179
        Image:         k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
        Image ID:      docker-pullable://k8s.gcr.io/sig-storage/[email protected]:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4
        Port:          
        Host Port:     
        Args:
          --v=5
          --leader-election
          --leader-election-namespace=democratic-csi
          --timeout=90s
          --workers=10
          --csi-address=/csi-data/csi.sock
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Error
          Exit Code:    255
          Started:      Mon, 24 Oct 2022 13:48:45 +0900
          Finished:     Mon, 24 Oct 2022 13:48:45 +0900
        Ready:          False
        Restart Count:  6
        Environment:
          NODE_NAME:   (v1:spec.nodeName)
          NAMESPACE:  democratic-csi (v1:metadata.namespace)
          POD_NAME:   zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2 (v1:metadata.name)
        Mounts:
          /csi-data from socket-dir (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dsq8c (ro)
      external-snapshotter:
        Container ID:  docker://689c7ad05a56cd045cc467f928e8148a8557d803df88fb8f5a328fde09d26d35
        Image:         k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1
        Image ID:      docker-pullable://k8s.gcr.io/sig-storage/[email protected]:89e900a160a986a1a7a4eba7f5259e510398fa87ca9b8a729e7dec59e04c7709
        Port:          
        Host Port:     
        Args:
          --v=5
          --leader-election
          --leader-election-namespace=democratic-csi
          --timeout=90s
          --worker-threads=10
          --csi-address=/csi-data/csi.sock
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Error
          Exit Code:    1
          Started:      Mon, 24 Oct 2022 13:48:45 +0900
          Finished:     Mon, 24 Oct 2022 13:48:45 +0900
        Ready:          False
        Restart Count:  6
        Environment:
          NODE_NAME:   (v1:spec.nodeName)
          NAMESPACE:  democratic-csi (v1:metadata.namespace)
          POD_NAME:   zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2 (v1:metadata.name)
        Mounts:
          /csi-data from socket-dir (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dsq8c (ro)
      csi-driver:
        Container ID:  docker://310f97ba3898a8e020a082cbeeba3a62a8e3a6c75be1088a3984d4e6bf8da5f3
        Image:         docker.io/democraticcsi/democratic-csi:latest
        Image ID:      docker-pullable://democraticcsi/[email protected]:9633b08bf21d93dec186e8c4b7a39177fb6d59fd4371c88700097b9cc0aa4712
        Port:          
        Host Port:     
        Args:
          --csi-version=1.5.0
          --csi-name=org.democratic-csi.nfs
          --driver-config-file=/config/driver-config-file.yaml
          --log-level=info
          --csi-mode=controller
          --server-socket=/csi-data/csi.sock.internal
        State:          Running
          Started:      Mon, 24 Oct 2022 13:48:48 +0900
        Last State:     Terminated
          Reason:       Error
          Exit Code:    1
          Started:      Mon, 24 Oct 2022 13:46:05 +0900
          Finished:     Mon, 24 Oct 2022 13:47:13 +0900
        Ready:          True
        Restart Count:  5
        Liveness:       exec [bin/liveness-probe --csi-version=1.5.0 --csi-address=/csi-data/csi.sock.internal] delay=10s timeout=15s period=60s #success=1 #failure=3
        Environment:
          NODE_EXTRA_CA_CERTS:  /tmp/certs/extra-ca-certs.crt
        Mounts:
          /config from config (rw)
          /csi-data from socket-dir (rw)
          /tmp/certs from extra-ca-certs (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dsq8c (ro)
      csi-proxy:
        Container ID:   docker://4fd59cc00d890dcbd4998a690578ebb064bd42e8cbf4b7df1b3163e95c0efef1
        Image:          docker.io/democraticcsi/csi-grpc-proxy:v0.5.3
        Image ID:       docker-pullable://democraticcsi/[email protected]:4d65ca1cf17d941a8df668b8fe2f1c0cfa512c8b0dbef3ff89a4cd405e076923
        Port:           
        Host Port:      
        State:          Running
          Started:      Mon, 24 Oct 2022 13:41:55 +0900
        Ready:          True
        Restart Count:  0
        Environment:
          BIND_TO:   unix:///csi-data/csi.sock
          PROXY_TO:  unix:///csi-data/csi.sock.internal
        Mounts:
          /csi-data from socket-dir (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dsq8c (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             False
      ContainersReady   False
      PodScheduled      True
    Volumes:
      socket-dir:
        Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:
        SizeLimit:  
      config:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  zfs-nfs-democratic-csi-driver-config
        Optional:    false
      extra-ca-certs:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      zfs-nfs-democratic-csi
        Optional:  false
      kube-api-access-dsq8c:
        Type:                    Projected (a volume that contains injected data from multiple sources)
        TokenExpirationSeconds:  3607
        ConfigMapName:           kube-root-ca.crt
        ConfigMapOptional:       
        DownwardAPI:             true
    QoS Class:                   BestEffort
    Node-Selectors:              kubernetes.io/os=linux
    Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason     Age                     From               Message
      ----     ------     ----                    ----               -------
      Normal   Scheduled  7m54s                   default-scheduler  Successfully assigned democratic-csi/zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2 to kube-node03
      Normal   Pulling    7m52s                   kubelet            Pulling image "docker.io/democraticcsi/democratic-csi:latest"
      Normal   Created    7m48s                   kubelet            Created container csi-proxy
      Normal   Pulled     7m48s                   kubelet            Container image "docker.io/democraticcsi/csi-grpc-proxy:v0.5.3" already present on machine
      Normal   Started    7m48s                   kubelet            Started container csi-driver
      Normal   Created    7m48s                   kubelet            Created container csi-driver
      Normal   Pulled     7m48s                   kubelet            Successfully pulled image "docker.io/democraticcsi/democratic-csi:latest" in 4.198052133s
      Normal   Started    7m48s                   kubelet            Started container csi-proxy
      Normal   Started    7m25s (x2 over 7m52s)   kubelet            Started container external-snapshotter
      Normal   Created    7m25s (x2 over 7m53s)   kubelet            Created container external-snapshotter
      Normal   Pulled     7m25s (x2 over 7m53s)   kubelet            Container image "k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1" already present on machine
      Normal   Started    7m25s (x2 over 7m53s)   kubelet            Started container external-resizer
      Normal   Created    7m25s (x2 over 7m53s)   kubelet            Created container external-resizer
      Normal   Pulled     7m25s (x2 over 7m53s)   kubelet            Container image "k8s.gcr.io/sig-storage/csi-resizer:v1.4.0" already present on machine
      Normal   Started    7m25s (x2 over 7m53s)   kubelet            Started container external-provisioner
      Normal   Created    7m25s (x2 over 7m53s)   kubelet            Created container external-provisioner
      Normal   Pulled     7m25s (x2 over 7m53s)   kubelet            Container image "k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0" already present on machine
      Warning  BackOff    2m41s (x24 over 7m21s)  kubelet            Back-off restarting failed container

    – kube-master node log

    Oct 24 13:50:09 kube-master01 kernel: [1138938.491714] NFS: state manager: check lease failed on NFSv4 server 192.168.30.13 with error 10021
    Oct 24 13:50:15 kube-master01 kernel: [1138943.611783] NFS: state manager: check lease failed on NFSv4 server 192.168.30.13 with error 10021
    Oct 24 13:50:20 kube-master01 kernel: [1138948.735392] NFS: state manager: check lease failed on NFSv4 server 192.168.30.13 with error 10021
    $ k get po -n democratic-csi -owide
    NAME                                                 READY   STATUS             RESTARTS       AGE    IP               NODE          NOMINATED NODE   READINESS GATES
    zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2   1/5     CrashLoopBackOff   23 (45s ago)   9m9s   192.168.161.46   kube-node03              
    zfs-nfs-democratic-csi-node-dhx58                    4/4     Running            0              9m9s   192.168.30.81    kube-node01              
    zfs-nfs-democratic-csi-node-j2m2b                    4/4     Running            0              9m9s   192.168.30.63    kube-node03              
    zfs-nfs-democratic-csi-node-ptnvf                    4/4     Running            0              9m9s   192.168.30.84    kube-node02              
    $ ssh kube-node03
    $ cat /var/log/syslog 
    
    Oct 24 13:51:09 kube-node03 systemd[4146954]: run-docker-runtime\x2drunc-moby-132c7a72de1ee1cbb09938602ceae90e7eb142da14c55d1069fc034734bd1c27-runc.RDzUwG.mount: Succeeded.
    Oct 24 13:51:09 kube-node03 systemd[1]: run-docker-runtime\x2drunc-moby-132c7a72de1ee1cbb09938602ceae90e7eb142da14c55d1069fc034734bd1c27-runc.RDzUwG.mount: Succeeded.
    Oct 24 13:51:13 kube-node03 systemd[4146954]: run-docker-runtime\x2drunc-moby-5aaea4b2c9c29b850cde63c841be7a667f9a01df69235d02006293c6900d4611-runc.ulIQnl.mount: Succeeded.
    Oct 24 13:51:13 kube-node03 systemd[1]: run-docker-runtime\x2drunc-moby-5aaea4b2c9c29b850cde63c841be7a667f9a01df69235d02006293c6900d4611-runc.ulIQnl.mount: Succeeded.
    Oct 24 13:51:13 kube-node03 systemd[1]: run-docker-runtime\x2drunc-moby-5aaea4b2c9c29b850cde63c841be7a667f9a01df69235d02006293c6900d4611-runc.crfkz4.mount: Succeeded.
    Oct 24 13:51:15 kube-node03 kubelet[4355]: I1024 13:51:15.906313    4355 scope.go:110] "RemoveContainer" containerID="db54ec6363dfeed2118d347dea9969ba2e2e844ca85af7eee867d0e5d732800b"
    Oct 24 13:51:15 kube-node03 kubelet[4355]: I1024 13:51:15.906423    4355 scope.go:110] "RemoveContainer" containerID="0a129107c481e5630a58a2ee0c4c8fe8e0e91bfd4108d1f3b399c9f174792179"
    Oct 24 13:51:15 kube-node03 kubelet[4355]: I1024 13:51:15.906461    4355 scope.go:110] "RemoveContainer" containerID="689c7ad05a56cd045cc467f928e8148a8557d803df88fb8f5a328fde09d26d35"
    Oct 24 13:51:15 kube-node03 kubelet[4355]: I1024 13:51:15.906495    4355 scope.go:110] "RemoveContainer" containerID="310f97ba3898a8e020a082cbeeba3a62a8e3a6c75be1088a3984d4e6bf8da5f3"
    Oct 24 13:51:15 kube-node03 kubelet[4355]: E1024 13:51:15.910221    4355 pod_workers.go:951] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"external-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=external-provisioner pod=zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2_democratic-csi(ffc4ed90-43c0-48cc-8d98-2e3acd00160d)\", failed to \"StartContainer\" for \"external-resizer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=external-resizer pod=zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2_democratic-csi(ffc4ed90-43c0-48cc-8d98-2e3acd00160d)\", failed to \"StartContainer\" for \"external-snapshotter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=external-snapshotter pod=zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2_democratic-csi(ffc4ed90-43c0-48cc-8d98-2e3acd00160d)\", failed to \"StartContainer\" for \"csi-driver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=csi-driver pod=zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2_democratic-csi(ffc4ed90-43c0-48cc-8d98-2e3acd00160d)\"]" pod="democratic-csi/zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2" podUID=ffc4ed90-43c0-48cc-8d98-2e3acd00160d
    • Hi, can you provide container logs please?

      $ k logs po/zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2 -n democratic-csi
      error: a container name must be specified for pod zfs-nfs-democratic-csi-controller-6db5558c48-fp9n2, choose one of: [external-provisioner external-resizer external-snapshotter csi-driver csi-proxy]
      
    • Thank you Lisenet.
      My k8s cluster uses Calico CNI with 192.168.0.0/16, and TrueNAS is 192.168.30.13, so the pod network overlapped the TrueNAS address. It was my fault.
      With k8s on 192.168.30.0/24 and TrueNAS on 10.10.10.0/24 (network segmentation), it is now working!

      $ k get po -n democratic-csi
      NAME READY STATUS RESTARTS AGE
      zfs-nfs-democratic-csi-controller-55cb498d9c-d95sr 5/5 Running 0 22m
      zfs-nfs-democratic-csi-node-6rmrg 4/4 Running 0 22m
      zfs-nfs-democratic-csi-node-7rfqf 4/4 Running 0 22m
      zfs-nfs-democratic-csi-node-qk7vv 4/4 Running 0 22m

      $ k get pvc
      NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
      test-claim-nfs Bound pvc-1dcf25e6-9c19-48c6-9262-5f1c8ce66cc5 1Gi RWO freenas-nfs-csi 10m
