cleanup: stick to standards when using dollar-sign in md

MD014 - Dollar signs used before commands without showing output
The dollar signs are unnecessary: commands are easier to copy and paste,
and the text is less noisy, when the dollar signs are omitted. This applies
especially when a command is shown without its output. When a command is
followed by its output, we use `$ ` (dollar+space), mainly to
differentiate the command from its output.

scenario 1: when the command is shown without its output
```console
cd ~/work
```

scenario 2: when the command is followed by its output (use dollar+space)
```console
$ ls ~/work
file1 file2 dir1 dir2 ...
```
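
To check this convention automatically, one option is the following sketch
(assuming the Node.js `markdownlint-cli` tool, which implements MD014 as the
`commands-show-output` rule; the `grep` pattern is only a rough heuristic
that also matches legitimate dollar+space usage):

```console
# lint all markdown files; MD014/commands-show-output is enabled by default
npm install -g markdownlint-cli
markdownlint '**/*.md'

# roughly list "$ "-prefixed lines for manual review
grep -rn --include='*.md' '^\$ ' .
```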

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Author: Prasanna Kumar Kalever, 2020-11-11 12:52:54 +05:30
Committed by: mergify[bot]
Commit 2945f7b669 (parent fcaa332921)
10 changed files with 159 additions and 104 deletions


@@ -15,7 +15,7 @@ helm repo add ceph-csi https://ceph.github.io/csi-charts
 we need to enter into the directory where all charts are present
 ```console
-[$]cd charts
+cd charts
 ```
 **Note:** charts directory is present in root of the ceph-csi project


@@ -15,7 +15,7 @@ helm repo add ceph-csi https://ceph.github.io/csi-charts
 we need to enter into the directory where all charts are present
 ```console
-[$]cd charts
+cd charts
 ```
 **Note:** charts directory is present in root of the ceph-csi project


@@ -87,9 +87,9 @@ compatibility support and without prior notice.
 git checkout v3.1.0 tag
 ```bash
-[$] git clone https://github.com/ceph/ceph-csi.git
-[$] cd ./ceph-csi
-[$] git checkout v3.1.0
+git clone https://github.com/ceph/ceph-csi.git
+cd ./ceph-csi
+git checkout v3.1.0
 ```
 **Note:** While upgrading please ignore warning messages from kubectl output
@@ -112,7 +112,7 @@ Provisioner deployment
 ##### 1.1 Update the CephFS Provisioner RBAC
 ```bash
-[$] kubectl apply -f deploy/cephfs/kubernetes/csi-provisioner-rbac.yaml
+$ kubectl apply -f deploy/cephfs/kubernetes/csi-provisioner-rbac.yaml
 serviceaccount/cephfs-csi-provisioner configured
 clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner configured
 clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner-rules configured
@@ -124,7 +124,7 @@ rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg configured
 ##### 1.2 Update the CephFS Provisioner deployment
 ```bash
-[$]kubectl apply -f deploy/cephfs/kubernetes/csi-cephfsplugin-provisioner.yaml
+$ kubectl apply -f deploy/cephfs/kubernetes/csi-cephfsplugin-provisioner.yaml
 service/csi-cephfsplugin-provisioner configured
 deployment.apps/csi-cephfsplugin-provisioner configured
 ```
@@ -132,7 +132,7 @@ deployment.apps/csi-cephfsplugin-provisioner configured
 wait for the deployment to complete
 ```bash
-[$]kubectl get deployment
+$ kubectl get deployment
 NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
 csi-cephfsplugin-provisioner   3/3     1            3           104m
 ```
@@ -147,7 +147,7 @@ nodeplugin daemonset
 ##### 2.1 Update the CephFS Nodeplugin RBAC
 ```bash
-[$]kubectl apply -f deploy/cephfs/kubernetes/csi-nodeplugin-rbac.yaml
+$ kubectl apply -f deploy/cephfs/kubernetes/csi-nodeplugin-rbac.yaml
 serviceaccount/cephfs-csi-nodeplugin configured
 clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin configured
 clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin-rules configured
@@ -188,7 +188,7 @@ daemonset spec
 ##### 2.2 Update the CephFS Nodeplugin daemonset
 ```bash
-[$]kubectl apply -f deploy/cephfs/kubernetes/csi-cephfsplugin.yaml
+$ kubectl apply -f deploy/cephfs/kubernetes/csi-cephfsplugin.yaml
 daemonset.apps/csi-cephfsplugin configured
 service/csi-metrics-cephfsplugin configured
 ```
@@ -230,7 +230,7 @@ Provisioner deployment
 ##### 3.1 Update the RBD Provisioner RBAC
 ```bash
-[$]kubectl apply -f deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
+$ kubectl apply -f deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
 serviceaccount/rbd-csi-provisioner configured
 clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner configured
 clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner-rules configured
@@ -242,7 +242,7 @@ rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg configured
 ##### 3.2 Update the RBD Provisioner deployment
 ```bash
-[$]kubectl apply -f deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
+$ kubectl apply -f deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
 service/csi-rbdplugin-provisioner configured
 deployment.apps/csi-rbdplugin-provisioner configured
 ```
@@ -250,7 +250,7 @@ deployment.apps/csi-rbdplugin-provisioner configured
 wait for the deployment to complete
 ```bash
-[$]kubectl get deployments
+$ kubectl get deployments
 NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
 csi-rbdplugin-provisioner   3/3     3            3           139m
 ```
@@ -265,7 +265,7 @@ nodeplugin daemonset
 ##### 4.1 Update the RBD Nodeplugin RBAC
 ```bash
-[$]kubectl apply -f deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
+$ kubectl apply -f deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
 serviceaccount/rbd-csi-nodeplugin configured
 clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin configured
 clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin-rules configured
@@ -306,7 +306,7 @@ daemonset spec
 ##### 4.2 Update the RBD Nodeplugin daemonset
 ```bash
-[$]kubectl apply -f deploy/rbd/kubernetes/csi-rbdplugin.yaml
+$ kubectl apply -f deploy/rbd/kubernetes/csi-rbdplugin.yaml
 daemonset.apps/csi-rbdplugin configured
 service/csi-metrics-rbdplugin configured
 ```


@@ -22,20 +22,20 @@
 - To install snapshot controller and CRD
 ```console
-$./scripts/install-snapshot.sh install
+./scripts/install-snapshot.sh install
 ```
 To install from specific external-snapshotter version, you can leverage
 `SNAPSHOT_VERSION` variable, for example:
 ```console
-$SNAPSHOT_VERSION="v3.0.1" ./scripts/install-snapshot.sh install
+SNAPSHOT_VERSION="v3.0.1" ./scripts/install-snapshot.sh install
 ```
 - In the future, you can choose to cleanup by running
 ```console
-$./scripts/install-snapshot.sh cleanup
+./scripts/install-snapshot.sh cleanup
 ```
 **NOTE: At present, there is a limit of 400 snapshots per cephFS filesystem.
@@ -93,8 +93,10 @@ snapcontent-34476204-a14a-4d59-bfbc-2bbba695652c true 1073741824 De
 ## Restore snapshot to a new PVC
 ```console
-$ kubectl create -f ../examples/cephfs/pvc-restore.yaml
+kubectl create -f ../examples/cephfs/pvc-restore.yaml
+```
+```console
 $ kubectl get pvc
 NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
 csi-cephfs-pvc       Bound    pvc-1ea51547-a88b-4ab0-8b4a-812caeaf025d   1Gi        RWX            csi-cephfs-sc   20h
@@ -104,7 +106,10 @@ cephfs-pvc-restore Bound pvc-95308c75-6c93-4928-a551-6b5137192209 1Gi
 ## Clone PVC
 ```console
-$ kubectl create -f ../examples/cephfs/pvc-clone.yaml
+kubectl create -f ../examples/cephfs/pvc-clone.yaml
+```
+```console
 $ kubectl get pvc
 NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
 csi-cephfs-pvc       Bound    pvc-1ea51547-a88b-4ab0-8b4a-812caeaf025d   1Gi        RWX            csi-cephfs-sc   20h


@@ -40,23 +40,26 @@ for more information related to Volume cloning in kubernetes.
 ### RBD CLI commands to create snapshot
-```bash
-[$] rbd snap ls <RBD image for src k8s volume> --all
+```
+rbd snap ls <RBD image for src k8s volume> --all
 // If the parent has more snapshots than the configured `maxsnapshotsonimage`
-add backgound tasks to flatten the temporary cloned images( temporary cloned
-image names will be same as snapshot names)
-[$] ceph rbd task add flatten <RBD image for temporary snap images>
-[$] rbd snap create <RBD image for src k8s volume>@<random snap name>
-[$] rbd clone --rbd-default-clone-format 2 --image-feature
+// add background tasks to flatten the temporary cloned images (temporary cloned
+// image names will be same as snapshot names)
+ceph rbd task add flatten <RBD image for temporary snap images>
+rbd snap create <RBD image for src k8s volume>@<random snap name>
+rbd clone --rbd-default-clone-format 2 --image-feature
 layering,deep-flatten <RBD image for src k8s volume>@<random snap>
 <RBD image for temporary snap image>
-[$] rbd snap rm <RBD image for src k8s volume>@<random snap name>
-[$] rbd snap create <RBD image for temporary snap image>@<random snap name>
+rbd snap rm <RBD image for src k8s volume>@<random snap name>
+rbd snap create <RBD image for temporary snap image>@<random snap name>
 // check the depth, if the depth is greater than configured hardlimit add a
 // task to flatten the cloned image, return snapshot status ready as `false`,
 // if the depth is greater than softlimit add a task to flatten the image
 // and return success
-[$] ceph rbd task add flatten <RBD image for temporary snap image>
+ceph rbd task add flatten <RBD image for temporary snap image>
 ```
 ## Create PVC from a snapshot (datasource snapshot)
@@ -69,17 +72,18 @@ image names will be same as snapshot names)
 ### RBD CLI commands to create clone from snapshot
-```bash
+```
 // check the depth, if the depth is greater than configured (hardlimit)
 // Add a task to value flatten the cloned image
-[$] ceph rbd task add flatten <RBD image for temporary snap image>
-[$] rbd clone --rbd-default-clone-format 2 --image-feature <k8s dst vol config>
+ceph rbd task add flatten <RBD image for temporary snap image>
+rbd clone --rbd-default-clone-format 2 --image-feature <k8s dst vol config>
 <RBD image for temporary snap image>@<random snap name>
 <RBD image for k8s dst vol>
 // check the depth, if the depth is greater than configured hardlimit add a task
 // to flatten the cloned image return ABORT error, if the depth is greater than
 // softlimit add a task to flatten the image and return success
-[$] ceph rbd task add flatten <RBD image for k8s dst vol>
+ceph rbd task add flatten <RBD image for k8s dst vol>
 ```
 ## Delete a snapshot
@@ -92,10 +96,10 @@ image names will be same as snapshot names)
 ### RBD CLI commands to delete a snapshot
-```bash
-[$] rbd snap create <RBD image for temporary snap image>@<random snap name>
-[$] rbd trash mv <RBD image for temporary snap image>
-[$] ceph rbd task trash remove <RBD image for temporary snap image ID>
+```
+rbd snap create <RBD image for temporary snap image>@<random snap name>
+rbd trash mv <RBD image for temporary snap image>
+ceph rbd task trash remove <RBD image for temporary snap image ID>
 ```
 ## Delete a Volume (PVC)
@@ -112,7 +116,7 @@ image(this will be applicable for both normal image and cloned image)
 ### RBD CLI commands to delete a volume
-```bash
+```
 1) rbd trash mv <image>
 2) ceph rbd task trash remove <image>
 ```
@@ -136,19 +140,20 @@ for more information related to Volume cloning in kubernetes.
 ### RBD CLI commands to create a Volume from Volume
-```bash
+```
 // check the image depth of the parent image if flatten required add a
 // task to flatten image and return ABORT to avoid leak (same hardlimit and
 // softlimit check will be done)
-[$] ceph rbd task add flatten <RBD image for src k8s volume>
-[$] rbd snap create <RBD image for src k8s volume>@<random snap name>
-[$] rbd clone --rbd-default-clone-format 2 --image-feature
+ceph rbd task add flatten <RBD image for src k8s volume>
+rbd snap create <RBD image for src k8s volume>@<random snap name>
+rbd clone --rbd-default-clone-format 2 --image-feature
 layering,deep-flatten <RBD image for src k8s volume>@<random snap>
 <RBD image for temporary snap image>
-[$] rbd snap rm <RBD image for src k8s volume>@<random snap name>
-[$] rbd snap create <RBD image for temporary snap image>@<random snap name>
-[$] rbd clone --rbd-default-clone-format 2 --image-feature <k8s dst vol config>
+rbd snap rm <RBD image for src k8s volume>@<random snap name>
+rbd snap create <RBD image for temporary snap image>@<random snap name>
+rbd clone --rbd-default-clone-format 2 --image-feature <k8s dst vol config>
 <RBD image for temporary snap image>@<random snap name>
 <RBD image for k8s dst vol>
-[$] rbd snap rm <RBD image for src k8s volume>@<random snap name>
+rbd snap rm <RBD image for src k8s volume>@<random snap name>
 ```


@@ -28,17 +28,27 @@ it is **highly** encouraged to:
 distributions. See the [go-ceph installation
 instructions](https://github.com/ceph/go-ceph#installation) for more
 details.
-* Run `$ go get -d github.com/ceph/ceph-csi`
+* Run
+```console
+go get -d github.com/ceph/ceph-csi
+```
 This will just download the source and not build it. The downloaded source
 will be at `$GOPATH/src/github.com/ceph/ceph-csi`
 * Fork the [ceph-csi repo](https://github.com/ceph/ceph-csi) on Github.
 * Add your fork as a git remote:
-`$ git remote add fork https://github.com/<your-github-username>/ceph-csi`
+```console
+git remote add fork https://github.com/<your-github-username>/ceph-csi
+```
 * Set up a pre-commit hook to catch issues locally.
-`$ pip install pre-commit==2.5.1`
-`$ pre-commit install`
+```console
+pip install pre-commit==2.5.1
+pre-commit install
+```
 See the [pre-commit installation
 instructions](https://pre-commit.com/#installation) for more
@@ -54,10 +64,16 @@ it is **highly** encouraged to:
 ### Building Ceph-CSI
 To build ceph-csi locally run:
-`$ make`
+```console
+make
+```
 To build ceph-csi in a container:
-`$ make containerized-build`
+```console
+make containerized-build
+```
 The built binary will be present under `_output/` directory.
@@ -68,13 +84,17 @@ that validate the style and other basics of the source code. Execute the unit
 tests (in the `*_test.go` files) and check the formatting of YAML files,
 MarkDown documents and shell scripts:
-`$ make containerized-test`
+```console
+make containerized-test
+```
 It is also possible to run only selected tests; these are the targets in the
 `Makefile` in the root of the project. For example, run the different static
 checks with:
-`$ make containerized-test TARGET=static-check`
+```console
+make containerized-test TARGET=static-check
+```
 In addition to running tests locally, each Pull Request that is created will
 trigger Continuous Integration tests that include the `containerized-test`, but
@@ -168,16 +188,20 @@ git tools.
 Here is a short guide on how to work on a new patch. In this example, we will
 work on a patch called *hellopatch*:
-* `$ git checkout master`
-* `$ git pull`
-* `$ git checkout -b hellopatch`
+```console
+git checkout master
+git pull
+git checkout -b hellopatch
+```
 Do your work here and commit.
 Run the test suite, which includes linting checks, static code check, and unit
 tests:
-`$ make test`
+```console
+make test
+```
 Certain unit tests may require extended permissions or other external resources
 that are not available by default. To run these tests as well, export the
@@ -188,7 +212,9 @@ wherever applicable.
 Once you are ready to push, you will type the following:
-`$ git push fork hellopatch`
+```console
+git push fork hellopatch
+```
 **Creating A Pull Request:**
 When you are satisfied with your changes, you will then need to go to your repo


@@ -66,8 +66,8 @@ metadata:
 - mounted Filesystem size in pod using this PVC
 ```bash
-[$]kubectl exec -it csi-rbd-demo-pod sh
-# df -h /var/lib/www/html
+$ kubectl exec -it csi-rbd-demo-pod sh
+sh-4.4# df -h /var/lib/www/html
 Filesystem      Size  Used  Avail  Use%  Mounted on
 /dev/rbd0       976M  2.6M  958M   1%    /var/lib/www/html
 ```
@@ -75,7 +75,7 @@ Filesystem Size Used Avail Use% Mounted on
 - Now expand the PVC by editing the PVC (pvc.spec.resource.requests.storage)
 ```bash
-[$]kubectl edit pvc rbd-pvc
+kubectl edit pvc rbd-pvc
 ```
 Check PVC status after editing the pvc storage
@@ -131,7 +131,7 @@ calls the NodeExpandVolume to expand the PVC on node, the `status conditions`
 and `status` will be updated
 ```bash
-[$]kubectl get pvc
+$ kubectl get pvc
 NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
 rbd-pvc   Bound    pvc-efe688d6-a420-4041-900e-c5e19fd73ebf   10Gi       RWO            csi-rbd-sc     7m6s
 ```
@@ -139,8 +139,8 @@ rbd-pvc Bound pvc-efe688d6-a420-4041-900e-c5e19fd73ebf 10Gi RWO
 - Now let us check the directory size inside the pod where PVC is mounted
 ```bash
-[$]kubectl exec -it csi-rbd-demo-pod sh
-# df -h /var/lib/www/html
+$ kubectl exec -it csi-rbd-demo-pod sh
+sh-4.4# df -h /var/lib/www/html
 Filesystem      Size  Used  Avail  Use%  Mounted on
 /dev/rbd0       9.9G  4.5M  9.8G   1%    /var/lib/www/html
 ```
@@ -150,7 +150,7 @@ now you can see the size of `/var/lib/www/html` is updated from 976M to 9.9G
 #### Expand RBD Block PVC
 ```bash
-[$]kubectl get pvc raw-block-pvc -o yaml
+$ kubectl get pvc raw-block-pvc -o yaml
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
@@ -186,7 +186,7 @@ status:
 - Device size in pod using this PVC
 ```bash
-[$]kubectl exec -it pod-with-raw-block-volume sh
+$ kubectl exec -it pod-with-raw-block-volume sh
 sh-4.4# blockdev --getsize64 /dev/xvda
 1073741824
 ```
@@ -200,7 +200,7 @@ bytes which is equal to `1GiB`
 which should be greater than the current size.
 ```bash
-[$]kubectl edit pvc raw-block-pvc
+kubectl edit pvc raw-block-pvc
 ```
 Check PVC status after editing the pvc storage
@@ -250,7 +250,7 @@ the NodeExpandVolume to expand the PVC on node, the status conditions will be up
 and `status.capacity.storage` will be updated.
 ```bash
-[$]kubectl get pvc
+$ kubectl get pvc
 NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
 raw-block-pvc   Bound    pvc-efe688d6-a420-4041-900e-c5e19fd73ebf   10Gi       RWO            csi-rbd-sc     7m6s
 ```
@@ -258,7 +258,7 @@ raw-block-pvc Bound pvc-efe688d6-a420-4041-900e-c5e19fd73ebf 10Gi R
 Device size in pod using this PVC
 ```bash
-[$]kubectl exec -it pod-with-raw-block-volume sh
+$ kubectl exec -it pod-with-raw-block-volume sh
 sh-4.4# blockdev --getsize64 /dev/xvda
 10737418240
 ```
@@ -314,8 +314,8 @@ metadata:
 - mounted Filesystem size in pod using this PVC
 ```bash
-[$]kubectl exec -it csi-cephfs-demo-pod sh
-# df -h /var/lib/www
+$ kubectl exec -it csi-cephfs-demo-pod sh
+sh-4.4# df -h /var/lib/www
 Filesystem                                                                     Size  Used  Avail  Use%  Mounted on
 10.108.149.216:6789:/volumes/csi/csi-vol-b0a1bc79-38fe-11ea-adb6-1a2797ee96de  5.0G  0     5.0G   0%    /var/lib/www
 ```
@@ -323,7 +323,7 @@ Filesystem S
 - Now expand the PVC by editing the PVC (pvc.spec.resource.requests.storage)
 ```bash
-[$]kubectl edit pvc csi-cephfs-pvc
+kubectl edit pvc csi-cephfs-pvc
 ```
 Check PVC status after editing the PVC storage
@@ -370,7 +370,7 @@ metadata:
 Now you can see the PVC status capacity storage is updated with request size
 ```bash
-[$]kubectl get pvc
+$ kubectl get pvc
 NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
 csi-cephfs-pvc   Bound    pvc-b84d07c9-ea67-40b4-96b9-4a79669b1ccc   10Gi       RWX            csi-cephfs-sc   6m26s
 ```
@@ -378,8 +378,8 @@ csi-cephfs-pvc Bound pvc-b84d07c9-ea67-40b4-96b9-4a79669b1ccc 10Gi
 - Now let us check the directory size inside the pod where PVC is mounted
 ```bash
-[$]kubectl exec -it csi-cephfs-demo-pod sh
-# df -h /var/lib/www
+$ kubectl exec -it csi-cephfs-demo-pod sh
+sh-4.4# df -h /var/lib/www
 Filesystem                                                                     Size  Used  Avail  Use%  Mounted on
 10.108.149.216:6789:/volumes/csi/csi-vol-b0a1bc79-38fe-11ea-adb6-1a2797ee96de  10G   0     10G    0%    /var/lib/www
 ```


@@ -12,7 +12,9 @@ It is required to cleanup metadata and image separately.
 a. get pv_name
-`[$]kubectl get pvc pvc_name -n namespace -owide`
+```
+kubectl get pvc pvc_name -n namespace -owide
+```
 ```bash
 $ kubectl get pvc mysql-pvc -owide -n prometheus
@@ -28,7 +30,9 @@ a. get pv_name
 a. get omapkey (suffix of csi.volumes.default is value used for the CLI option
 [--instanceid](deploy-rbd.md#configuration) in the provisioner deployment.)
-`[$]rados listomapkeys csi.volumes.default -p pool_name | grep pv_name`
+```
+rados listomapkeys csi.volumes.default -p pool_name | grep pv_name
+```
 ```bash
 $ rados listomapkeys csi.volumes.default -p kube_csi | grep pvc-bc537af8-67fc-4963-99c4-f40b3401686a
@@ -37,11 +41,12 @@ a. get omapkey (suffix of csi.volumes.default is value used for the CLI option
 b. get omapval
-`[$]rados getomapval csi.volumes.default omapkey -p pool_name`
+```
+rados getomapval csi.volumes.default omapkey -p pool_name
+```
 ```bash
-$ rados getomapval csi.volumes.default csi.volume.pvc-bc537af8-67fc-4963-99c4-f40b3401686a
--p kube_csi
+$ rados getomapval csi.volumes.default csi.volume.pvc-bc537af8-67fc-4963-99c4-f40b3401686a -p kube_csi
 value (36 bytes) :
 00000000 64 64 32 34 37 33 64 30 2d 36 61 38 63 2d 31 31 |dd2473d0-6a8c-11|
 00000010 65 61 2d 39 31 31 33 2d 30 61 64 35 39 64 39 39 |ea-9113-0ad59d99|
@@ -53,7 +58,9 @@ b. get omapval
 a. remove rbd image(csi-vol-omapval, the prefix csi-vol is value of [volumeNamePrefix](deploy-rbd.md#configuration))
-`[$]rbd remove rbd_image_name -p pool_name`
+```
+rbd remove rbd_image_name -p pool_name
+```
 ```bash
 $ rbd remove csi-vol-dd2473d0-6a8c-11ea-9113-0ad59d995ce7 -p kube_csi
@@ -62,36 +69,43 @@ a. remove rbd image(csi-vol-omapval, the prefix csi-vol is value of [volumeNameP
 b. remove cephfs subvolume(csi-vol-omapval)
-`[$]ceph fs subvolume rm volume_name subvolume_name group_name`
+```
+ceph fs subvolume rm volume_name subvolume_name group_name
+```
 ```bash
-$ ceph fs subvolume rm cephfs csi-vol-340daf84-5e8f-11ea-8560-6e87b41d7a6e csi
+ceph fs subvolume rm cephfs csi-vol-340daf84-5e8f-11ea-8560-6e87b41d7a6e csi
 ```
 ### 4. Delete omap object and omapkey
 a. delete omap object
-`[$]rados rm csi.volume.omapval -p pool_name`
+```
+rados rm csi.volume.omapval -p pool_name
+```
 ```bash
-$ rados rm csi.volume.dd2473d0-6a8c-11ea-9113-0ad59d995ce7 -p kube_csi
+rados rm csi.volume.dd2473d0-6a8c-11ea-9113-0ad59d995ce7 -p kube_csi
 ```
 b. delete omapkey
-`[$]rados rmomapkey csi.volumes.default csi.volume.omapkey -p pool_name`
+```
+rados rmomapkey csi.volumes.default csi.volume.omapkey -p pool_name
+```
 ```bash
-$ rados rmomapkey csi.volumes.default csi.volume.pvc-bc537af8-67fc-4963-99c4-f40b3401686a
--p kube_csi
+rados rmomapkey csi.volumes.default csi.volume.pvc-bc537af8-67fc-4963-99c4-f40b3401686a -p kube_csi
 ```
 ### 5. Delete PV
 a. delete pv
-`[$] kubectl delete pv pv_name -n namespace`
+```
+kubectl delete pv pv_name -n namespace
+```
 ```bash
 $ kubectl delete pv pvc-bc537af8-67fc-4963-99c4-f40b3401686a -n prometheus


@@ -34,7 +34,7 @@ Let's create a new rbd image in ceph cluster which we are going to use for
 static PVC
 ```console
-[$]rbd create static-image --size=1024 --pool=replicapool
+rbd create static-image --size=1024 --pool=replicapool
 ```
 ### Create RBD static PV
@@ -90,7 +90,7 @@ static RBD PV
 delete attempt in csi-provisioner.
 ```bash
-[$] kubectl create -f fs-static-pv.yaml
+$ kubectl create -f fs-static-pv.yaml
 persistentvolume/fs-static-pv created
 ```
@@ -118,7 +118,7 @@ spec:
 ```
 ```bash
-[$] kubectl create -f fs-static-pvc.yaml
+$ kubectl create -f fs-static-pvc.yaml
 persistentvolumeclaim/fs-static-pvc created
 ```
@@ -141,11 +141,11 @@ the subvolumegroup. **myfs** is the filesystem name(volume name) inside
 which subvolume should be created.
 ```console
-[$]ceph fs subvolumegroup create myfs testGroup
+ceph fs subvolumegroup create myfs testGroup
 ```
 ```console
-[$]ceph fs subvolume create myfs testSubVolume testGroup --size=1073741824
+ceph fs subvolume create myfs testSubVolume testGroup --size=1073741824
 ```
 **Note:** volume here refers to the filesystem.
@@ -156,7 +156,7 @@ To create the CephFS PV you need to know the `volume rootpath`, and `clusterID`,
 here is the command to get the root path in ceph cluster
 ```bash
-[$]ceph fs subvolume getpath myfs testSubVolume testGroup
+$ ceph fs subvolume getpath myfs testSubVolume testGroup
 /volumes/testGroup/testSubVolume
 ```
@@ -213,7 +213,7 @@ static CephFS PV
 delete attempt in csi-provisioner.
 ```bash
-[$] kubectl create -f cephfs-static-pv.yaml
+$ kubectl create -f cephfs-static-pv.yaml
 persistentvolume/cephfs-static-pv created
 ```
@@ -239,7 +239,7 @@ spec:
 ```
 ```bash
-[$] kubectl create -f cephfs-static-pvc.yaml
+$ kubectl create -f cephfs-static-pvc.yaml
 persistentvolumeclaim/cephfs-static-pvc created
 ```


@@ -7,7 +7,11 @@ Both `rbd` and `cephfs` directories contain `plugin-deploy.sh` and
 deploy/teardown RBACs, sidecar containers and the plugin in one go.
 By default, they look for the YAML manifests in
 `../../deploy/{rbd,cephfs}/kubernetes`.
-You can override this path by running `$ ./plugin-deploy.sh /path/to/my/manifests`.
+You can override this path by running
+```bash
+./plugin-deploy.sh /path/to/my/manifests
+```
 ## Creating CSI configuration
@@ -33,7 +37,9 @@ from a Ceph cluster and replace `<cluster-id>` with the chosen clusterID, to
 create the manifest for the configmap which can be updated in the cluster
 using the following command,
-* `kubectl replace -f ./csi-config-map-sample.yaml`
+```bash
+kubectl replace -f ./csi-config-map-sample.yaml
+```
 Storage class and snapshot class, using `<cluster-id>` as the value for the
 option `clusterID`, can now be created on the cluster.
@@ -110,7 +116,6 @@ To check the status of the snapshot, run the following:
 ```bash
 $ kubectl describe volumesnapshot rbd-pvc-snapshot
 Name:         rbd-pvc-snapshot
 Namespace:    default
 Labels:       <none>