doc: added docs for snapshot-backed CephFS volumes

Signed-off-by: Robert Vasek <robert.vasek@cern.ch>
This commit is contained in:
Robert Vasek 2022-04-06 17:48:27 +02:00 committed by mergify[bot]
parent 79cb1b3849
commit f59806caff
3 changed files with 67 additions and 0 deletions


@@ -0,0 +1,61 @@
# Provisioning and mounting CephFS snapshot-backed volumes
Snapshot-backed volumes allow CephFS subvolume snapshots to be exposed as
regular read-only PVCs. No data cloning is performed and provisioning such
volumes is done in constant time.
For more details, please refer to the [Snapshots as shallow read-only volumes](./design/proposals/cephfs-snapshot-shallow-ro-vol.md)
design document.
## Prerequisites
Prerequisites for this feature are the same as for creating PVCs with a snapshot
volume source. See [Create snapshot and Clone Volume](./snap-clone.md) for more
information.
## Usage
### Provisioning a snapshot-backed volume from a volume snapshot
To provision new snapshot-backed volumes, the following configuration must be
set for the storage class(es) and their PVCs respectively (a sketch follows
the list below):
* StorageClass:
* Specify `backingSnapshot: "true"` parameter.
* PersistentVolumeClaim:
* Set `storageClassName` to point to your storage class with backing
snapshots enabled.
* Define `spec.dataSource` for your desired source volume snapshot.
* Set `spec.accessModes` to `ReadOnlyMany`. This is the only access mode that
is supported by this feature.
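A minimal sketch of such a StorageClass and PVC is shown below. It assumes the
default driver name `cephfs.csi.ceph.com` and an existing VolumeSnapshot; all
object names, the `clusterID`, `fsName`, and the secret references are
placeholders that must match your deployment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-snapshot-backed-sc         # placeholder name
provisioner: cephfs.csi.ceph.com              # default CephFS CSI driver name
parameters:
  clusterID: <cluster-id>                     # must match your ceph-csi configuration
  fsName: <cephfs-filesystem-name>
  # Back PVCs by the snapshot given in their data source, without cloning.
  # The `pool` parameter must not be set together with backingSnapshot.
  backingSnapshot: "true"
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-snapshot-backed-pvc            # placeholder name
spec:
  storageClassName: csi-cephfs-snapshot-backed-sc
  accessModes:
    - ReadOnlyMany                            # the only supported access mode
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: cephfs-pvc-snapshot                 # an existing VolumeSnapshot
  resources:
    requests:
      storage: 1Gi
```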
### Mounting snapshots from pre-provisioned volumes
Steps for defining a PersistentVolume and PersistentVolumeClaim for
pre-provisioned CephFS subvolumes are identical to those described in
[Static PVC with ceph-csi](./static-pvc.md), except that one additional
parameter must be specified: `backingSnapshotID`. The CephFS CSI driver will
retrieve the snapshot identified by the given ID from within the specified
subvolume and expose it to workloads in read-only mode. The volume access mode
must be set to `ReadOnlyMany` (see the sketch at the end of this section).
Note that the snapshot is looked up by traversing `<rootPath>/.snap` and
searching for a directory whose name contains the `backingSnapshotID` value.
The specified snapshot ID does not need to be the complete directory name
inside `<rootPath>/.snap`, but it must be long enough to identify that
directory uniquely.
Example:
```
$ ls .snap
_f279df14-6729-4342-b82f-166f45204233_1099511628283
_a364870e-6729-4342-b82f-166f45204233_1099635085072
```
`f279df14-6729-4342-b82f-166f45204233` would be a valid value for the
`backingSnapshotID` volume parameter, whereas `6729-4342-b82f-166f45204233`
would not, as it matches both directories and is therefore ambiguous.
If the given snapshot ID is ambiguous, or no matching snapshot is found,
mounting the PVC will fail with the `INVALID_ARGUMENT` error code.
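For illustration, a pre-provisioned PV/PVC pair might look as follows. This is
a sketch following the conventions of [Static PVC with ceph-csi](./static-pvc.md);
all object names, the `clusterID`, `fsName`, `rootPath`, and secret references
are placeholders, and `backingSnapshotID` uses the first snapshot from the
listing above:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-static-snapshot-pv             # placeholder name
spec:
  accessModes:
    - ReadOnlyMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com               # default CephFS CSI driver name
    nodeStageSecretRef:
      name: csi-cephfs-secret
      namespace: default
    volumeAttributes:
      clusterID: <cluster-id>
      fsName: <cephfs-filesystem-name>
      staticVolume: "true"
      rootPath: /volumes/<group>/<subvolume>  # path to the pre-provisioned subvolume
      # Snapshot inside <rootPath>/.snap to be exposed read-only.
      backingSnapshotID: f279df14-6729-4342-b82f-166f45204233
    volumeHandle: cephfs-static-snapshot-pv   # any unique value
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-static-snapshot-pvc            # placeholder name
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  volumeMode: Filesystem
  volumeName: cephfs-static-snapshot-pv
```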


@@ -81,6 +81,7 @@ you're running it inside a k8s cluster and find the config itself).
| `pool` | no | Ceph pool into which volume data shall be stored |
| `volumeNamePrefix` | no | Prefix to use for naming subvolumes (defaults to `csi-vol-`). |
| `snapshotNamePrefix` | no | Prefix to use for naming snapshots (defaults to `csi-snap-`) |
| `backingSnapshot` | no | Boolean value. The PVC shall be backed by the CephFS snapshot specified in its data source. `pool` parameter must not be specified. (defaults to `false`) |
| `kernelMountOptions` | no | Comma separated string of mount options accepted by cephfs kernel mounter, by default no options are passed. Check man mount.ceph for options. |
| `fuseMountOptions` | no | Comma separated string of mount options accepted by ceph-fuse mounter, by default no options are passed. |
| `csi.storage.k8s.io/provisioner-secret-name`, `csi.storage.k8s.io/node-stage-secret-name` | for Kubernetes | Name of the Kubernetes Secret object containing Ceph client credentials. Both parameters should have the same value |


@@ -47,6 +47,11 @@ parameters:
# If omitted, defaults to "csi-vol-".
# volumeNamePrefix: "foo-bar-"
# (optional) Boolean value. The PVC shall be backed by the CephFS snapshot
# specified in its data source. `pool` parameter must not be specified.
# (defaults to `false`)
# backingSnapshot: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions: