ceph-csi/examples/nfs
Latest commit: 254699cb1a "nfs: add support for clients in the StorageClass" (Sachin Prabhu, 2023-07-06)
The clients parameter in the StorageClass is used to limit access to the export to the specified set of hostnames, networks or IP addresses.
Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>

File                 Last change                                                  Date
pod-clone.yaml       deploy: add nfs pvc-clone & pod-clone example yaml           2022-05-24 18:13:02 +00:00
pod-restore.yaml     deploy: add support for nfs snapshot                         2022-05-24 18:13:02 +00:00
pod-rwop.yaml        e2e: add RWOP examples for NFS-provisioning                  2022-05-10 00:43:43 +00:00
pod.yaml             doc: example for PVC and Pod using a NFS-volume              2022-03-28 11:58:42 +00:00
pvc-clone.yaml       deploy: add nfs pvc-clone & pod-clone example yaml           2022-05-24 18:13:02 +00:00
pvc-restore.yaml     deploy: add support for nfs snapshot                         2022-05-24 18:13:02 +00:00
pvc-rwop.yaml        e2e: add RWOP examples for NFS-provisioning                  2022-05-10 00:43:43 +00:00
pvc.yaml             doc: example for PVC and Pod using a NFS-volume              2022-03-28 11:58:42 +00:00
README.md            doc: initial/partial instructions for using NFS examples     2022-03-28 11:58:42 +00:00
rook-nfs.yaml        doc: use .nfs as default pool for NFS-export configs         2022-05-10 00:43:43 +00:00
snapshot.yaml        deploy: add support for nfs snapshot                         2022-05-24 18:13:02 +00:00
snapshotclass.yaml   deploy: remove snapshot v1beta1 references from manifests    2022-11-17 10:05:01 +00:00
storageclass.yaml    nfs: add support for clients in the StorageClass             2023-07-06 06:24:11 +00:00

Dynamic provisioning with NFS

The easiest way to try out the examples for dynamic provisioning with NFS is to use Rook Ceph with CephNFS. Rook can be used to deploy a Ceph cluster, and Ceph can run an NFS-Ganesha service with a few commands, making the configuration of the Ceph cluster a minimal effort.

Enabling the Ceph NFS-service

Ceph does not enable the NFS-service by default. In order for Rook Ceph to be able to configure NFS-exports, the NFS-service needs to be enabled first.

In the Rook Toolbox, run the following commands:

# pool used by NFS-Ganesha for its configuration and recovery objects
ceph osd pool create nfs-ganesha
# enable the Rook orchestrator module and the NFS management module
ceph mgr module enable rook
ceph mgr module enable nfs
# let the NFS module manage exports through the Rook orchestrator
ceph orch set backend rook

Create an NFS-cluster

In the directory where this README is located, there is an example rook-nfs.yaml file. It can be used to create a Ceph-managed NFS-cluster with the name "my-nfs".
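
The CephNFS resource in that file looks roughly like the sketch below; the namespace and the number of active servers are assumptions here, and rook-nfs.yaml itself remains the authoritative version:

---
apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: my-nfs
  # assumed namespace of the Rook Ceph cluster
  namespace: rook-ceph
spec:
  server:
    # number of active NFS-Ganesha servers (assumption)
    active: 1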

$ kubectl create -f rook-nfs.yaml
cephnfs.ceph.rook.io/my-nfs created

The CephNFS resource will create an NFS-Ganesha Pod and Service with the label app=rook-ceph-nfs:

$ kubectl get pods -l app=rook-ceph-nfs
NAME                                      READY   STATUS    RESTARTS   AGE
rook-ceph-nfs-my-nfs-a-5d47f66977-sc2rk   2/2     Running   0          61s
$ kubectl get service -l app=rook-ceph-nfs
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
rook-ceph-nfs-my-nfs-a   ClusterIP   172.30.218.195   <none>        2049/TCP   2m58s

Create a StorageClass

Most of the StorageClass parameters reflect what CephFS requires to connect to the Ceph cluster. All required options are clearly commented in the storageclass.yaml file.

In addition to the CephFS parameters, there are (see the sketch after this list):

  • nfsCluster: name of the Ceph-managed NFS-cluster (here my-nfs)
  • server: hostname/IP/service of the NFS-server (here 172.30.218.195)
  • clients (optional): limits access to the export to the specified hostnames, networks or IP addresses
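
A minimal sketch of such a StorageClass is shown below, assuming the ceph-csi NFS driver is registered as nfs.csi.ceph.com; the CephFS-side values (clusterID, fsName) are placeholders, and storageclass.yaml in this directory remains the authoritative example:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-nfs-sc
# assumed provisioner/driver name of the ceph-csi NFS driver
provisioner: nfs.csi.ceph.com
parameters:
  # name of the Ceph-managed NFS-cluster created above
  nfsCluster: my-nfs
  # ClusterIP of the rook-ceph-nfs-my-nfs-a Service
  server: 172.30.218.195
  # optional: limit access to the export (hypothetical value)
  # clients: 192.168.0.0/24
  # CephFS parameters, placeholders to be taken from storageclass.yaml
  clusterID: <cluster-id>
  fsName: myfs
reclaimPolicy: Delete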

Edit storageclass.yaml, and create the resource:

$ kubectl create -f storageclass.yaml
storageclass.storage.k8s.io/csi-nfs-sc created
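
Once the components listed under the next steps below are deployed, a PersistentVolumeClaim can request a volume from this StorageClass. The pvc.yaml in this directory is the real example; a minimal sketch with a made-up claim name:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # hypothetical name, see pvc.yaml for the actual example
  name: csi-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-nfs-sc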

TODO: next steps

  • deploy the NFS-provisioner
  • deploy the kubernetes-csi/csi-driver-nfs
  • create the CSIDriver object
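
For the last item, a CSIDriver object for an NFS CSI driver would look roughly like the sketch below; the driver name is assumed to match the provisioner used in the StorageClass, and the manifests shipped in the ceph-csi deploy directory are authoritative:

---
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  # assumed driver name, must match the provisioner of the StorageClass
  name: nfs.csi.ceph.com
spec:
  # NFS volumes are mounted directly, no separate attach step is needed
  attachRequired: false
  volumeLifecycleModes:
    - Persistent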