ceph/ceph-csi commit 7043b3839a (parent 0fc294ae5b)

Fix markdown style issue

Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
Committed by mergify[bot]

# How to test RBD and CephFS plugins with Kubernetes 1.13

Both `rbd` and `cephfs` directories contain `plugin-deploy.sh` and
`plugin-teardown.sh` helper scripts. You can use those to help you
deploy/tear down RBACs, sidecar containers and the plugin in one go.
By default, they look for the YAML manifests in
`../../deploy/{rbd,cephfs}/kubernetes`.
You can override this path by running `$ ./plugin-deploy.sh /path/to/my/manifests`.
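
For example, a typical cycle from the `rbd` directory looks like this (the
`cephfs` directory works the same way):

```bash
# Deploy RBACs, sidecar containers and the plugin using the default
# manifests path (../../deploy/rbd/kubernetes):
./plugin-deploy.sh

# Or point the script at your own manifests:
./plugin-deploy.sh /path/to/my/manifests

# Tear everything down again:
./plugin-teardown.sh
```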

Once the plugin is successfully deployed, you'll need to customize
`storageclass.yaml` and `secret.yaml` manifests to reflect your Ceph cluster
setup.
Please consult the documentation for info about available parameters.
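
As a rough illustration, a customized `storageclass.yaml` could end up looking
like the sketch below. All values here are placeholders, and the exact set of
parameters and the driver name come from the manifest shipped alongside these
examples, so treat the shipped file as authoritative:

```yaml
# A sketch only: verify every field against the storageclass.yaml shipped
# with this repository.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd
# Placeholder: the CSI driver name registered by your rbdplugin deployment.
provisioner: rbd.csi.ceph.com
parameters:
  # Comma-separated addresses of your Ceph monitors.
  monitors: 192.168.0.10:6789,192.168.0.11:6789
  # An existing RBD pool in your Ceph cluster.
  pool: rbd
reclaimPolicy: Delete
```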

After configuring the secrets, monitors, etc. you can deploy a testing Pod
mounting an RBD image / CephFS volume:

```bash
kubectl create -f secret.yaml
kubectl create -f storageclass.yaml
kubectl create -f pvc.yaml
kubectl create -f pod.yaml
```

Other helper scripts:

* `logs.sh` output of the plugin
* `exec-bash.sh` logs into the plugin's container and runs bash

## How to test RBD Snapshot feature

Before continuing, make sure you enabled the required
feature gate `VolumeSnapshotDataSource=true` in your Kubernetes cluster.
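
How you enable the gate depends on how your cluster is brought up. A minimal
sketch, assuming a cluster started from the Kubernetes source tree; for other
setups the same value goes to the kube-apiserver `--feature-gates` flag:

```bash
# Assumption: running from a Kubernetes source checkout, where
# local-up-cluster.sh reads the FEATURE_GATES environment variable.
FEATURE_GATES="VolumeSnapshotDataSource=true" hack/local-up-cluster.sh
```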

In the `examples/rbd` directory you will find two files related to snapshots:
[snapshotclass.yaml](./rbd/snapshotclass.yaml) and
[snapshot.yaml](./rbd/snapshot.yaml).

Once you have created your RBD volume, you'll need to customize at least
`snapshotclass.yaml` and make sure the `monitors` and `pool` parameters match
your Ceph cluster setup.
If you followed the documentation to create the rbdplugin, you shouldn't
have to edit any other file.
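
For illustration, a customized `snapshotclass.yaml` might look roughly like
this sketch. Kubernetes 1.13 serves `VolumeSnapshotClass` from the `v1alpha1`
snapshot API; the driver name and all values below are placeholders, so check
them against the file shipped in `examples/rbd`:

```yaml
# A sketch only: verify every field against the snapshotclass.yaml shipped
# with this repository.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbd-snapclass
# Placeholder: the CSI driver responsible for taking the snapshots.
snapshotter: rbd.csi.ceph.com
parameters:
  # Must match your Ceph cluster setup, as described above.
  monitors: 192.168.0.10:6789,192.168.0.11:6789
  pool: rbd
```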

After configuring everything you need, deploy the snapshot class:

```bash
kubectl create -f snapshotclass.yaml
```

Create a snapshot from the existing PVC:

```bash
kubectl create -f snapshot.yaml
```

To verify that your volume snapshot has been created successfully, run the
following:

```console
...
Status:
...
Events:  <none>
```

To be sure everything is OK you can run `rbd snap ls [your-pvc-name]` inside
one of your Ceph pods.
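
For instance, with a hypothetical pod named `ceph-mon-0` that has the `rbd`
CLI available (both the pod name and the image name are placeholders for your
deployment):

```bash
# Placeholder names: substitute your own Ceph pod and RBD image name.
kubectl exec -it ceph-mon-0 -- rbd snap ls [your-pvc-name]
```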

To restore the snapshot to a new PVC, deploy
[pvc-restore.yaml](./rbd/pvc-restore.yaml) and a testing pod
[pod-restore.yaml](./rbd/pod-restore.yaml):

```bash
kubectl create -f pvc-restore.yaml
kubectl create -f pod-restore.yaml
```
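
Once the restore has finished, the new PVC should report a `Bound` status and
the testing pod should come up; a quick check (assuming everything was created
in the default namespace):

```bash
kubectl get pvc
kubectl get pod
```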