# How to test RBD and CephFS plugins with Kubernetes 1.14+

## Deploying Ceph-CSI services

Create the [ceph-config](../deploy/ceph-conf.yaml) ConfigMap using the following command:

```bash
kubectl apply -f ../deploy/ceph-conf.yaml
```

Both `rbd` and `cephfs` directories contain `plugin-deploy.sh` and
`plugin-teardown.sh` helper scripts. You can use those to deploy and tear down
the RBACs, sidecar containers and the plugin in one go.
By default, they look for the YAML manifests in
`../deploy/{rbd,cephfs}/kubernetes`.
You can override this path by running:

```bash
./plugin-deploy.sh /path/to/my/manifests
```

## Creating CSI configuration

The CSI plugin requires configuration information about the Ceph cluster(s)
that will host the dynamically or statically provisioned volumes. This
is provided by adding a per-cluster identifier (referred to as the clusterID)
and the required monitor details for that cluster, as shown in the provided
[sample config map](../deploy/csi-config-map-sample.yaml).

Gather the following information from the Ceph cluster(s) of choice:

* Ceph monitor list
  * Typically in the output of `ceph mon dump`
  * Used to prepare a list of `monitors` in the CSI configuration file
* Ceph cluster fsid
  * If choosing to use the Ceph cluster fsid as the unique value of clusterID,
    use the output of `ceph fsid`
  * Alternatively, choose a `<cluster-id>` value that is distinct per Ceph
    cluster in use by this Kubernetes cluster

Update the [sample configmap](../deploy/csi-config-map-sample.yaml) with values
from a Ceph cluster and replace `<cluster-id>` with the chosen clusterID to
create the manifest for the ConfigMap, which can then be updated in the cluster
using the following command:

```bash
kubectl replace -f ../deploy/csi-config-map-sample.yaml
```
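
For reference, each entry in that ConfigMap maps a `clusterID` to its monitor
list. The sketch below only illustrates the shape of such an entry; the
ConfigMap name, the fsid and the monitor addresses are placeholders, and the
linked sample file is authoritative.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Name assumed to match the sample config map; adjust if yours differs.
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "<cluster-id>",
        "monitors": [
          "<monitor-1-ip>:6789",
          "<monitor-2-ip>:6789",
          "<monitor-3-ip>:6789"
        ]
      }
    ]
```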

Storage class and snapshot class, using `<cluster-id>` as the value for the
option `clusterID`, can now be created on the cluster.

## Running CephCSI with pod networking

The current problem with pod networking is that when a CephFS/RBD/NFS volume is
mounted in a pod using Ceph CSI and the CSI CephFS/RBD/NFS plugin is then
restarted or terminated (e.g. by restarting or deleting its DaemonSet), all
operations on the volume become blocked, even after the CSI pods are restarted.

The only workaround is to restart the node where the Ceph CSI plugin pod was
restarted. This can be mitigated by running the `rbd map`/`mount -t` commands
in a different network namespace which does not get deleted when the CSI
CephFS/RBD/NFS plugin is restarted or terminated.

If you want to run CephCSI with pod networking, you can still do so by setting
`netNamespaceFilePath`. If this path is set, CephCSI will execute the
`rbd map`/`mount -t` commands after entering the [network
namespace](https://man7.org/linux/man-pages/man7/network_namespaces.7.html)
specified by `netNamespaceFilePath` with the
[nsenter](https://man7.org/linux/man-pages/man1/nsenter.1.html) command.

`netNamespaceFilePath` should point to the network namespace of some
long-running process; typically it is a symlink to
`/proc/<long running process id>/ns/net`.

The long-running process can also be another pod managed by a DaemonSet which
never restarts. This pod should only be stopped and restarted when a node is
stopped, so that volume operations do not become blocked. The new DaemonSet pod
can contain a single container, responsible for holding its pod network alive.
It is used as a passthrough by the CephCSI plugin pod, which, when mounting or
mapping, will use the network namespace of this pod.

Once the pod is created, get its PID, create a symlink to
`/proc/<PID>/ns/net` in the hostPath volume shared with the csi-plugin pod, and
specify that path in the `netNamespaceFilePath` option.

*Note* This Pod should have `hostPID: true` in the Pod Spec.
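
As a rough illustration only (the hostPath location, the way you look up the
PID, and the final file name are assumptions, not part of the shipped
manifests), the symlink could be created on the node along these lines:

```bash
# PID of the long-running (network-holder) container's process on this node;
# look it up via your container runtime, for example the output of
# `crictl inspect` for that container includes its PID.
PID=<pid-of-long-running-process>

# Create the symlink inside the hostPath directory that is also mounted into
# the csi-plugin pod; netNamespaceFilePath must then point at this file.
ln -sf /proc/${PID}/ns/net /var/lib/kubelet/plugins/ceph-csi/net
```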

## Deploying the storage class

Once the plugin is successfully deployed, you'll need to customize the
`storageclass.yaml` and `secret.yaml` manifests to reflect your Ceph cluster
setup.
Please consult the documentation for info about available parameters.
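
For orientation, an RBD storage class usually carries at least the `clusterID`,
a `pool` and the Ceph secret references. The sketch below is a minimal
illustration with placeholder and assumed values (secret names, namespace,
pool, image features); the shipped `storageclass.yaml` remains the
authoritative reference.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  # Must match an entry in the CSI ConfigMap created earlier.
  clusterID: <cluster-id>
  # RBD pool in which images get created; it must already exist.
  pool: <rbd-pool-name>
  # RBD image features; "layering" is the commonly used baseline.
  imageFeatures: layering
  # Secrets holding the Ceph credentials (names and namespace assumed).
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
```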

After configuring the secrets, monitors, etc., you can deploy a
testing Pod that mounts an RBD image / CephFS volume:

```bash
kubectl create -f secret.yaml
kubectl create -f storageclass.yaml
kubectl create -f pvc.yaml
kubectl create -f pod.yaml
```
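
To confirm that provisioning worked, check that the PVC becomes `Bound` and the
test pod reaches `Running` (resource names come from the example manifests;
adjust them if you changed the files):

```bash
# The PVC should become Bound once the volume has been provisioned.
kubectl get pvc

# The test pod should reach Running once the volume is attached and mounted.
kubectl get pod
```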

Other helper scripts:

* `logs.sh` shows the log output of the plugin
* `exec-bash.sh` logs into the plugin's container and runs bash

### How to test RBD Snapshot feature

Before continuing, make sure you have enabled the required
`VolumeSnapshotDataSource=true` feature gate in your Kubernetes cluster.

In the `examples/rbd` directory you will find two files related to snapshots:
[snapshotclass.yaml](./rbd/snapshotclass.yaml) and
[snapshot.yaml](./rbd/snapshot.yaml).

Once you have created your RBD volume, you'll need to customize at least
`snapshotclass.yaml` and make sure the `clusterID` parameter matches
your Ceph cluster setup.
If you followed the documentation to create the rbdplugin, you shouldn't
have to edit any other file.

Note that it is recommended to create a volume snapshot or a PVC clone
only when the PVC is not in use.

After configuring everything you need, deploy the snapshot class:

```bash
kubectl create -f snapshotclass.yaml
```

Verify that the snapshot class was created:

```console
$ kubectl get volumesnapshotclass
NAME                      AGE
csi-rbdplugin-snapclass   4s
```

Create a snapshot from the existing PVC:

```bash
kubectl create -f snapshot.yaml
```
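
For reference, `snapshot.yaml` is essentially a VolumeSnapshot that references
the snapshot class and the source PVC. The sketch below uses the
`snapshot.storage.k8s.io/v1` API; older releases (such as the `v1alpha1` output
shown further below) use a slightly different schema, so the file shipped in
`examples/rbd` is authoritative.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  # The snapshot class deployed in the previous step.
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    # The existing PVC to snapshot.
    persistentVolumeClaimName: rbd-pvc
```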

To verify that the volume snapshot was created successfully, run the following:

```console
$ kubectl get volumesnapshot
NAME               AGE
rbd-pvc-snapshot   6s
```

To check the status of the snapshot, run the following:

```console
$ kubectl describe volumesnapshot rbd-pvc-snapshot
Name:         rbd-pvc-snapshot
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  snapshot.storage.k8s.io/v1alpha1
Kind:         VolumeSnapshot
Metadata:
  Creation Timestamp:  2019-02-06T08:52:34Z
  Finalizers:
    snapshot.storage.kubernetes.io/volumesnapshot-protection
  Generation:        5
  Resource Version:  84239
  Self Link:         /apis/snapshot.storage.k8s.io/v1alpha1/namespaces/default/volumesnapshots/rbd-pvc-snapshot
  UID:               8b9b5740-29ec-11e9-8e0f-b8ca3aad030b
Spec:
  Snapshot Class Name:    csi-rbdplugin-snapclass
  Snapshot Content Name:  snapcontent-8b9b5740-29ec-11e9-8e0f-b8ca3aad030b
  Source:
    API Group:  <nil>
    Kind:       PersistentVolumeClaim
    Name:       rbd-pvc
Status:
  Creation Time:  2019-02-06T08:52:34Z
  Ready To Use:   true
  Restore Size:   1Gi
Events:           <none>
```

To be sure everything is OK, you can run `rbd snap ls [your-pvc-name]` inside
one of your Ceph pods.
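
Keep in mind that the backing RBD image is named by ceph-csi rather than after
the PVC, so you may need to look the image name up first. For example, from a
Ceph pod (the pool name is a placeholder):

```bash
# List the RBD images in the pool used by the storage class.
rbd ls <rbd-pool-name>

# List the snapshots of a specific image.
rbd snap ls <rbd-pool-name>/<image-name>
```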

To restore the snapshot to a new PVC, deploy
[pvc-restore.yaml](./rbd/pvc-restore.yaml) and a testing pod
[pod-restore.yaml](./rbd/pod-restore.yaml):

```bash
kubectl create -f pvc-restore.yaml
kubectl create -f pod-restore.yaml
```
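
For reference, the restore PVC is an ordinary claim whose `dataSource` points
at the VolumeSnapshot created above. A minimal sketch follows; the claim name
and size are illustrative, and the shipped `pvc-restore.yaml` is authoritative.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore
spec:
  storageClassName: csi-rbd-sc
  dataSource:
    # Restore from the snapshot created above.
    name: rbd-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```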

### How to test RBD MULTI_NODE_MULTI_WRITER BLOCK feature

Requires feature-gates: `BlockVolume=true` `CSIBlockVolume=true`

*NOTE* The MULTI_NODE_MULTI_WRITER capability is only available for
Volumes that are of access_type `block`

*WARNING* This feature is strictly for workloads that know how to deal
with concurrent access to the Volume (e.g. Active/Passive applications).
Using RWX modes on non-clustered file systems with applications trying
to simultaneously access the Volume will likely result in data corruption!

The following examples show how to issue a request for a `Block`
`ReadWriteMany` Claim, and how to use the resultant Claim in a POD.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
```

Create a POD that uses this PVC:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: docker.io/library/debian:latest
      command: ["/bin/bash", "-c"]
      args: ["tail -f /dev/null"]
      volumeDevices:
        - devicePath: /dev/rbdblock
          name: my-volume
      imagePullPolicy: IfNotPresent
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: block-pvc
```

Now we can create a second POD (ensure the POD is scheduled on a different
node; multiwriter on a single node works without this feature) that also uses
this PVC at the same time. Again, wait for the pod to enter the Running state,
and verify that the block device is available.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: another-pod
spec:
  containers:
    - name: my-container
      image: docker.io/library/debian:latest
      command: ["/bin/bash", "-c"]
      args: ["tail -f /dev/null"]
      volumeDevices:
        - devicePath: /dev/rbdblock
          name: my-volume
      imagePullPolicy: IfNotPresent
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: block-pvc
```

Wait for the PODs to enter the Running state, then check that the block device
is available at `/dev/rbdblock` in both containers:

```console
$ kubectl exec -it my-pod -- fdisk -l /dev/rbdblock
Disk /dev/rbdblock: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
```

```console
$ kubectl exec -it another-pod -- fdisk -l /dev/rbdblock
Disk /dev/rbdblock: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
```

### How to create CephFS Snapshot and Restore

In the `examples/cephfs` directory you will find two files related to snapshots:
[snapshotclass.yaml](./cephfs/snapshotclass.yaml) and
[snapshot.yaml](./cephfs/snapshot.yaml).

Once you have created your CephFS volume, you'll need to customize at least
`snapshotclass.yaml` and make sure the `clusterID` parameter matches
your Ceph cluster setup.

Note that it is recommended to create a volume snapshot or a PVC clone
only when the PVC is not in use.
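
For orientation, the CephFS snapshot class looks roughly like this (the secret
names are assumptions and the apiVersion may differ on older clusters; the
shipped `snapshotclass.yaml` is authoritative):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-cephfsplugin-snapclass
driver: cephfs.csi.ceph.com
parameters:
  # Must match the clusterID used in the CSI ConfigMap and storage class.
  clusterID: <cluster-id>
  # Secret with the Ceph credentials used while snapshotting (assumed names).
  csi.storage.k8s.io/snapshotter-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: default
deletionPolicy: Delete
```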

After configuring everything you need, create the snapshot class:

```bash
kubectl create -f ../examples/cephfs/snapshotclass.yaml
```

Verify that the snapshot class was created:

```console
$ kubectl get volumesnapshotclass
NAME                         DRIVER                DELETIONPOLICY   AGE
csi-cephfsplugin-snapclass   cephfs.csi.ceph.com   Delete           24m
```

Create a snapshot from the existing PVC:

```bash
kubectl create -f ../examples/cephfs/snapshot.yaml
```

To verify that your volume snapshot was created successfully and to get its
details, run the following:

```console
$ kubectl get volumesnapshot
NAME                  READYTOUSE   SOURCEPVC        SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                SNAPSHOTCONTENT                                    CREATIONTIME   AGE
cephfs-pvc-snapshot   true         csi-cephfs-pvc                           1Gi           csi-cephfsplugin-snapclass   snapcontent-34476204-a14a-4d59-bfbc-2bbba695652c   3s             6s
```

To be sure everything is OK, you can run
`ceph fs subvolume snapshot ls <vol_name> <sub_name> [<group_name>]`
inside one of your Ceph pods.
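
If you are unsure of the volume and subvolume names, you can list them first.
By default ceph-csi keeps its subvolumes in the `csi` subvolume group; the
volume and subvolume names below are placeholders:

```bash
# List the CephFS volumes (file systems) known to the cluster.
ceph fs ls

# List the ceph-csi managed subvolumes; "csi" is the default subvolume group.
ceph fs subvolume ls <vol_name> csi

# List the snapshots of a particular subvolume.
ceph fs subvolume snapshot ls <vol_name> <sub_name> csi
```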

To restore the snapshot to a new PVC, deploy
[pvc-restore.yaml](./cephfs/pvc-restore.yaml) and a testing pod
[pod-restore.yaml](./cephfs/pod-restore.yaml):

```bash
kubectl create -f pvc-restore.yaml
kubectl create -f pod-restore.yaml
```

### Cleanup for CephFS Snapshot and Restore

Delete the testing pod and the restored PVC:

```bash
kubectl delete pod <pod-restore name>
kubectl delete pvc <pvc-restore name>
```

Now that the snapshot is no longer in use, delete the volume snapshot
and the volume snapshot class:

```bash
kubectl delete volumesnapshot <snapshot name>
kubectl delete volumesnapshotclass <snapshotclass name>
```

### How to Clone CephFS Volumes

Create the clone from an existing CephFS PVC:

```bash
kubectl create -f ../examples/cephfs/pvc-clone.yaml
kubectl create -f ../examples/cephfs/pod-clone.yaml
```
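
For reference, `pvc-clone.yaml` is an ordinary PVC whose `dataSource` refers to
the source PVC. A minimal sketch using the names that appear in the example
output below; the shipped file is authoritative.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-clone
spec:
  storageClassName: csi-cephfs-sc
  dataSource:
    # The existing CephFS PVC to clone.
    name: csi-cephfs-pvc
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```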

To verify that the clone was created successfully, run the following:

```console
$ kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
csi-cephfs-pvc     Bound    pvc-1ea51547-a88b-4ab0-8b4a-812caeaf025d   1Gi        RWX            csi-cephfs-sc   20h
cephfs-pvc-clone   Bound    pvc-b575bc35-d521-4c41-b4f9-1d733cd28fdf   1Gi        RWX            csi-cephfs-sc   39s
```

### Cleanup

Delete the cloned pod and PVC:

```bash
kubectl delete pod <pod-clone name>
kubectl delete pvc <pvc-clone name>
```