Instead of adding a single volume to the
group journal, support adding a map of
multiple volumeIDs to the group journal,
which is required for RBD as well.
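
A minimal sketch of the idea, with hypothetical names (the real
ceph-csi journal types and methods differ): the group journal entry
keeps a map of volumeIDs to snapshotIDs instead of a single volume
entry, so the same journal can serve both CephFS and RBD groups.

    package journal

    // groupJournalEntry is a hypothetical stand-in for a group journal
    // record; ceph-csi persists this data in a RADOS omap.
    type groupJournalEntry struct {
        // VolumeMap maps each member volumeID to the snapshotID
        // reserved for it within the group.
        VolumeMap map[string]string
    }

    // addVolumesMapping records multiple volumeID entries at once,
    // rather than a single volume per call.
    func (e *groupJournalEntry) addVolumesMapping(mapping map[string]string) {
        if e.VolumeMap == nil {
            e.VolumeMap = make(map[string]string, len(mapping))
        }
        for volID, snapID := range mapping {
            e.VolumeMap[volID] = snapID
        }
    }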
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Adjusted the method names so they are not
specific to volumesnapshot, as we want to
reuse the same journal for volumegroup as
well.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The "slices" package has been introduced in Go 1.21 and can be used
instead of the Kubernetes package that will be replaced by the standard
package at one point too.
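
For illustration, a minimal example of the standard library
replacement (the variable names are made up, not taken from the code):

    package main

    import (
        "fmt"
        "slices" // standard library since Go 1.21
    )

    func main() {
        volumeIDs := []string{"vol-1", "vol-2"}

        // This kind of membership check previously used the Kubernetes
        // helper (k8s.io/utils/strings/slices); the standard library
        // now provides the same functionality.
        fmt.Println(slices.Contains(volumeIDs, "vol-2")) // true
    }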
Signed-off-by: Niels de Vos <ndevos@ibm.com>
Implemented the DeleteVolumeGroupSnapshot RPC,
which performs the operations below; a sketch
of the flow follows the list.
* Basic request validation
* Get the snapshotID and volumeID mapping
  reserved for the UUID
* Delete each snapshot and remove its mapping
  from the omap
* Repeat the above steps until all the mappings
  are removed
* Remove the reserved UUID from the omap
* Reset the filesystem quiesce. This might be
  required as CephFS does not provide any option
  to remove the quiesce; if we get another request
  with the same ID we can reuse the quiesce API
  for the same set-id
* Return success if the received error is
  "pool not found" or "key not found"
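
The sketch below follows the steps above; the interfaces and error
values are hypothetical stand-ins for the ceph-csi journal and CephFS
admin APIs, not the actual code.

    package cephfs

    import (
        "context"
        "errors"
    )

    // Hypothetical stand-ins for the real journal and CephFS clients.
    var (
        errPoolNotFound = errors.New("pool not found")
        errKeyNotFound  = errors.New("key not found")
    )

    type groupJournal interface {
        GetVolumeSnapshotMapping(ctx context.Context, groupUUID string) (map[string]string, error)
        RemoveVolumeMapping(ctx context.Context, groupUUID, volID string) error
        UndoReservation(ctx context.Context, groupUUID string) error
    }

    type fsAdmin interface {
        DeleteSnapshot(ctx context.Context, volID, snapID string) error
        ResetQuiesce(ctx context.Context, setID string) error
    }

    func deleteVolumeGroupSnapshot(ctx context.Context, groupUUID string, j groupJournal, fs fsAdmin) error {
        // Get the snapshotID/volumeID mapping reserved for the UUID.
        mapping, err := j.GetVolumeSnapshotMapping(ctx, groupUUID)
        if err != nil {
            // Treat "pool not found" and "key not found" as success;
            // there is nothing left to clean up.
            if errors.Is(err, errPoolNotFound) || errors.Is(err, errKeyNotFound) {
                return nil
            }
            return err
        }

        // Delete each snapshot and remove its mapping from the omap.
        for volID, snapID := range mapping {
            if err := fs.DeleteSnapshot(ctx, volID, snapID); err != nil {
                return err
            }
            if err := j.RemoveVolumeMapping(ctx, groupUUID, volID); err != nil {
                return err
            }
        }

        // Reset the filesystem quiesce so a retried request with the
        // same set-id can reuse the quiesce API, then remove the
        // reserved UUID from the omap.
        if err := fs.ResetQuiesce(ctx, groupUUID); err != nil {
            return err
        }
        return j.UndoReservation(ctx, groupUUID)
    }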
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Implemented the CreateVolumeGroupSnapshot RPC,
which performs the operations below; a sketch
of the flow follows the list.
* Basic request validation
* Reserve a UUID for the group name
* Quiesce the filesystem for all the subvolumes
  of the input volumeIDs
* Take a snapshot of each input volumeID
* Add the mapping between volumeIDs and
  snapshotIDs in the omap
* Release the filesystem quiesce for all the
  subvolumes of the input volumeIDs
Undo all of the operations if anything fails.
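
The sketch below follows the steps above, including the undo path; the
interfaces are hypothetical stand-ins for the ceph-csi journal and
CephFS admin APIs, not the actual code.

    package cephfs

    import "context"

    type groupReservation interface {
        Reserve(ctx context.Context, groupName string) (string, error)
        AddVolumeMapping(ctx context.Context, groupUUID, volID, snapID string) error
        UndoReservation(ctx context.Context, groupUUID string) error
    }

    type quiesceAdmin interface {
        Quiesce(ctx context.Context, setID string, volIDs []string) error
        ReleaseQuiesce(ctx context.Context, setID string) error
        CreateSnapshot(ctx context.Context, volID string) (string, error)
        DeleteSnapshot(ctx context.Context, volID, snapID string) error
    }

    func createVolumeGroupSnapshot(ctx context.Context, groupName string, volIDs []string, j groupReservation, fs quiesceAdmin) (err error) {
        // Reserve a UUID for the group name.
        groupUUID, err := j.Reserve(ctx, groupName)
        if err != nil {
            return err
        }

        created := map[string]string{} // volumeID -> snapshotID, kept for undo

        // Undo all the operations if anything fails.
        defer func() {
            if err == nil {
                return
            }
            for volID, snapID := range created {
                _ = fs.DeleteSnapshot(ctx, volID, snapID)
            }
            _ = fs.ReleaseQuiesce(ctx, groupUUID)
            _ = j.UndoReservation(ctx, groupUUID)
        }()

        // Quiesce the filesystem for all subvolumes of the input volumeIDs.
        if err = fs.Quiesce(ctx, groupUUID, volIDs); err != nil {
            return err
        }

        // Take a snapshot of each input volumeID and record the
        // volumeID/snapshotID mapping in the omap.
        for _, volID := range volIDs {
            var snapID string
            snapID, err = fs.CreateSnapshot(ctx, volID)
            if err != nil {
                return err
            }
            created[volID] = snapID
            if err = j.AddVolumeMapping(ctx, groupUUID, volID, snapID); err != nil {
                return err
            }
        }

        // Release the quiesce once all snapshots have been taken.
        err = fs.ReleaseQuiesce(ctx, groupUUID)
        return err
    }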
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added validateCreateVolumeGroupSnapshotRequest
to validate the CreateVolumeGroupSnapshotRequest
and ensure that all the required options are
set; if not, reject the RPC request.
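
A minimal sketch of the kind of checks involved; the request type
below is a simplified stand-in for the CSI
CreateVolumeGroupSnapshotRequest message, and the real RPC would
return a gRPC InvalidArgument status instead of plain errors.

    package cephfs

    import "errors"

    // createVolumeGroupSnapshotRequest mirrors only the fields that the
    // validation below needs; the field names are illustrative.
    type createVolumeGroupSnapshotRequest struct {
        Name            string
        SourceVolumeIDs []string
        Secrets         map[string]string
    }

    func validateCreateVolumeGroupSnapshotRequest(req *createVolumeGroupSnapshotRequest) error {
        if req.Name == "" {
            return errors.New("group snapshot name cannot be empty")
        }
        if len(req.SourceVolumeIDs) == 0 {
            return errors.New("source volume IDs cannot be empty")
        }
        if len(req.Secrets) == 0 {
            return errors.New("secrets cannot be empty")
        }
        return nil
    }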
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>