flatten cloned images to remove the link
with the snapshot, so that the parent snapshot
can be removed from the trash.
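As a rough illustration of the flatten step, here is a minimal sketch using go-ceph; it is not the actual cephcsi code path, and the function and image names are made up:
```go
package example

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
	"github.com/ceph/go-ceph/rbd"
)

// flattenClonedImage opens a cloned rbd image and flattens it, copying all
// data from the parent so the clone no longer references the parent
// snapshot; once no clones reference it, the snapshot can leave the trash.
func flattenClonedImage(ioctx *rados.IOContext, name string) error {
	img, err := rbd.OpenImage(ioctx, name, "") // "" = open the image head, no snapshot
	if err != nil {
		return fmt.Errorf("failed to open image %q: %w", name, err)
	}
	defer img.Close()

	if err := img.Flatten(); err != nil {
		return fmt.Errorf("failed to flatten image %q: %w", name, err)
	}
	return nil
}
```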
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added the maxsnapshotsonimage flag to flatten
the older rbd images in the chain, to avoid an
issue in krbd. The limit exists in krbd because it
allocates only a single 4KiB page to hold all the
snapshot IDs for an image.
The maximum is 510, as per
https://github.com/torvalds/linux/blob/aaa2faab4ed8e5fe0111e04d6e168c028fe2987f/drivers/block/rbd.c#L98
In cephcsi we keep the default at 450, reserving
roughly 10% of headroom to avoid issues.
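A minimal sketch of how such a limit can be wired up and checked; the flag name and the 450 default come from this change, while the helper below is hypothetical and not the cephcsi implementation:
```go
package example

import "flag"

// krbd allocates a single 4KiB page for all snapshot IDs of an image, which
// caps it at 510 IDs; default to 450 to keep roughly 10% of headroom.
var maxSnapshotsOnImage = flag.Uint("maxsnapshotsonimage", 450,
	"maximum number of snapshots allowed on an rbd image before flattening")

// needsFlattening reports whether older cloned images in the chain should be
// flattened before creating another snapshot; snapCount would come from a
// snapshot listing such as the listsnapshots helper added later.
func needsFlattening(snapCount uint) bool {
	return snapCount >= *maxSnapshotsOnImage
}
```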
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
added a listsnapshots function for an
rbd image to list all the snapshots
created from an rbd image. This also lists
the snapshots which are in the trash.
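A sketch of such a listing with go-ceph; this is illustrative only, how snapshots in the trash namespace are surfaced depends on librbd, and the real cephcsi helper may differ:
```go
package example

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
	"github.com/ceph/go-ceph/rbd"
)

// listSnapshots returns the snapshot names of an rbd image via go-ceph.
// Distinguishing snapshots that live in the trash namespace may need extra
// namespace queries that are omitted from this sketch.
func listSnapshots(ioctx *rados.IOContext, imageName string) ([]string, error) {
	img, err := rbd.OpenImageReadOnly(ioctx, imageName, "")
	if err != nil {
		return nil, fmt.Errorf("failed to open image %q: %w", imageName, err)
	}
	defer img.Close()

	snaps, err := img.GetSnapshotNames()
	if err != nil {
		return nil, fmt.Errorf("failed to list snapshots of %q: %w", imageName, err)
	}

	names := make([]string, 0, len(snaps))
	for _, s := range snaps {
		names = append(names, s.Name)
	}
	return names, nil
}
```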
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Several places in the code compared errors directly with the go-ceph
sentinel errors. This change uses the errors.Is() function from
Go 1.13 instead. The err113 linter reported this issue as:
err113: do not compare errors directly, use errors.Is() instead
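For example, using go-ceph's rados.ErrNotFound sentinel (the helper is illustrative, not an excerpt from ceph-csi):
```go
package example

import (
	"errors"

	"github.com/ceph/go-ceph/rados"
)

// deleteIfPresent shows the errors.Is() pattern against a go-ceph sentinel.
func deleteIfPresent(ioctx *rados.IOContext, oid string) error {
	err := ioctx.Delete(oid)
	// Before: `if err == rados.ErrNotFound` -- flagged by err113.
	// After: errors.Is() also matches when the sentinel has been wrapped.
	if errors.Is(err, rados.ErrNotFound) {
		return nil // already gone, nothing to do
	}
	return err
}
```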
Signed-off-by: Sven Anderson <sven@redhat.com>
"github.com/pkg/errors" does not offer more functionlity than that we
need from the standard "errors" package. With Golang v1.13 errors can be
wrapped with `fmt.Errorf("... %w", err)`. `errors.Is()` and
`errors.As()` are available as well.
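A small example of the standard-library replacement (illustrative, not ceph-csi code; ioutil is used to match the Go 1.13 era):
```go
package example

import (
	"errors"
	"fmt"
	"io/ioutil"
	"os"
)

// loadKeyring wraps the underlying error with %w, so callers can still
// inspect it with errors.Is()/errors.As() from the standard library.
func loadKeyring(path string) ([]byte, error) {
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read keyring %q: %w", path, err)
	}
	return data, nil
}

// isMissingKeyring digs through the wrapped chain without pkg/errors.
func isMissingKeyring(err error) bool {
	var pathErr *os.PathError
	return errors.As(err, &pathErr) && errors.Is(err, os.ErrNotExist)
}
```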
See-also: https://tip.golang.org/doc/go1.13#error_wrapping
Signed-off-by: Niels de Vos <ndevos@redhat.com>
By fixing the golangci-lint runs, this now gets reported as a problem.
Instead of addressing the complexity of the DeleteVolume() method here,
mark it as a TODO.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Currently, various calls to deleteImage do not
need the rbdStatus check; that is only required
when called from DeleteVolume. This PR optimizes
rbd image deletion by removing the status check.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
added a skipForceFlatten flag to skip
the image depth check and image flattening.
This is very useful when the running kernel
supports the deep-flatten feature but is not
listed in cephcsi.
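Roughly the intended control flow, as a hypothetical sketch whose names do not match the cephcsi code:
```go
package example

// forceFlattenRequired sketches the effect of the flag: when skipForceFlatten
// is set, the depth lookup and forced flatten are bypassed, which helps when
// the running kernel supports deep-flatten but is missing from cephcsi's
// known-kernel list.
func forceFlattenRequired(skipForceFlatten bool, depth, hardLimit uint) bool {
	if skipForceFlatten {
		return false
	}
	return depth >= hardLimit
}
```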
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This adds support for creating and deleting snapshots,
and for creating a new rbd image from a snapshot.
* Create a snapshot
  * Create a temporary snapshot from the parent volume
  * Clone a new image from the temporary snapshot with options
    --rbd-default-clone-format 2 --image-feature layering,deep-flatten
  * Delete the temporary snapshot that was created
  * Create a new snapshot from the cloned image
  * Check the image chain depth: if the soft limit is reached, add a
    task to flatten the cloned image and return success; if the hard
    limit is reached, add a task to flatten the cloned image and
    return the snapshot ready status as false
```bash
1) rbd snap create <RBD image for src k8s volume>@<random snap name>
2) rbd clone --rbd-default-clone-format 2 --image-feature
layering,deep-flatten <RBD image for src k8s volume>@<random snap>
<RBD image for temporary snap image>
3) rbd snap rm <RBD image for src k8s volume>@<random snap name>
4) rbd snap create <RBD image for temporary snap image>@<random snap name>
5) check the depth; if the depth is greater than the configured hard
limit, add a task to flatten the cloned image and return the snapshot
ready status as false; if the depth is greater than the soft limit,
add a task to flatten the image and return success
```
* Create a clone from a snapshot
  * Clone a new image from the snapshot with user-provided options
  * Check the depth (n) of the cloned image; if n >= hard limit, add a
    task to flatten the image and return ABORT (to avoid an image leak)
```bash
1) rbd clone --rbd-default-clone-format 2 --image-feature
<k8s dst vol config> <RBD image for temporary snap image>@<random snap name>
<RBD image for k8s dst vol>
2) check the depth; if the depth is greater than the configured hard limit,
add a task to flatten the cloned image and return an ABORT error; if the
depth is greater than the soft limit, add a task to flatten the image and
return success
```
* Delete a snapshot or PVC
  * Move the temporary cloned image to the trash
  * Add a task to remove the image from the trash
```bash
1) rbd trash mv <cloned image>
2) ceph rbd task add trash remove <cloned image>
```
With the earlier implementation, to delete an image we used to add
a task to remove the image. With the new changes this cannot be done,
as the image may contain snapshots or links, so we will be
following the steps below to delete an image (this is
applicable to both normal images and cloned images):
* Move the rbd image to the trash
* Add task to remove the image from the trash
```bash
1) rbd trash mv <image>
2) ceph rbd task add trash remove <image>
```
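A hedged sketch of these two steps with go-ceph plus the ceph CLI; this is illustrative only, cephcsi has its own helpers for issuing the manager command, and newer go-ceph releases expose an rbd task admin API:
```go
package example

import (
	"fmt"
	"os/exec"

	"github.com/ceph/go-ceph/rados"
	"github.com/ceph/go-ceph/rbd"
)

// trashAndScheduleRemoval moves an rbd image to the trash and queues an
// asynchronous removal with the rbd_support mgr module.
func trashAndScheduleRemoval(ioctx *rados.IOContext, pool, name string) error {
	img, err := rbd.OpenImage(ioctx, name, "")
	if err != nil {
		return fmt.Errorf("failed to open image %q: %w", name, err)
	}
	// The image ID is the only way to address the image once it is in the
	// trash, so fetch it before moving the image.
	imageID, err := img.GetId()
	closeErr := img.Close()
	if err != nil {
		return fmt.Errorf("failed to get image id of %q: %w", name, err)
	}
	if closeErr != nil {
		return fmt.Errorf("failed to close image %q: %w", name, closeErr)
	}

	// 1) rbd trash mv <image>
	if err := rbd.GetImage(ioctx, name).Trash(0); err != nil {
		return fmt.Errorf("failed to move image %q to trash: %w", name, err)
	}

	// 2) ceph rbd task add trash remove <pool>/<image-id>
	out, err := exec.Command("ceph", "rbd", "task", "add", "trash", "remove",
		pool+"/"+imageID).CombinedOutput()
	if err != nil {
		return fmt.Errorf("failed to add trash remove task: %w (%s)", err, out)
	}
	return nil
}
```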
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
we don't need to call the snapshot CLI functions
to get snapshot details, as these details are not
required with the new snapshot design.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
cephcsi needs to store and retrieve the rbd image ID
in the omap, as we need the image ID to add a
task to remove the image from the trash.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
With the snapshot and cloning implementation,
rbd images cannot be deleted with rbd
remove, nor by adding a task to delete the rbd image,
as they might contain snapshots and clones.
We need to make use of rbd trash mv and
add a task to remove the image from the trash
once all of its clone and snapshot links
are broken and there is no longer any
dependency between parent and child images.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
add a new function called getImageID to fetch
the image ID of an image, which needs to be stored
and retrieved for the Delete operation.
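A sketch of such a helper with go-ceph; illustrative only, the real function also persists the ID in the journal omap:
```go
package example

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
	"github.com/ceph/go-ceph/rbd"
)

// getImageID fetches the rbd image ID. The ID identifies the image even
// after an "rbd trash mv", when the name is no longer usable.
func getImageID(ioctx *rados.IOContext, imageName string) (string, error) {
	img, err := rbd.OpenImageReadOnly(ioctx, imageName, "")
	if err != nil {
		return "", fmt.Errorf("failed to open image %q: %w", imageName, err)
	}
	defer img.Close()

	id, err := img.GetId()
	if err != nil {
		return "", fmt.Errorf("failed to get image id of %q: %w", imageName, err)
	}
	return id, nil
}
```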
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
added Hardlimit and Softlimit flags to the cephcsi
arguments. When the Softlimit is reached, cephcsi
will start a background task to flatten the rbd
image and return success; if the Hardlimit
is reached, it will start a background task
to flatten the rbd image and return ready
to use as false, to make sure that the image
will not be used until it is flattened.
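The decision roughly looks like this; a hypothetical sketch where the flatten task helper is a placeholder, not cephcsi's function:
```go
package example

// addFlattenTask is a placeholder for queueing a background flatten,
// e.g. via "ceph rbd task add flatten <image>".
func addFlattenTask(image string) {}

// checkDepth sketches the Softlimit/Hardlimit behaviour: over the soft limit
// a flatten task is queued but the caller proceeds; over the hard limit a
// flatten task is queued and ready-to-use is reported as false so the image
// is not used until the flatten completes.
func checkDepth(image string, depth, softLimit, hardLimit uint) (readyToUse bool) {
	switch {
	case depth >= hardLimit:
		addFlattenTask(image)
		return false
	case depth >= softLimit:
		addFlattenTask(image)
		return true
	default:
		return true
	}
}
```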
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As we are using v2 cloning, we don't need to
protect the snapshot before cloning and
unprotect it before deleting.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As we need to reuse the same code for both cephfs
and rbd, move the supported-version check function
to the util package; for better readability, rename
the function.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
A new version of gosec insists on handling errors returned by Close():
[/go/src/github.com/ceph/ceph-csi/internal/util/pidlimit.go:44] - G307 (CWE-): Deferring unsafe method "*os.File" on type "Close" (Confidence: HIGH, Severity: MEDIUM)
> defer cgroup.Close()
[/go/src/github.com/ceph/ceph-csi/internal/util/pidlimit.go:78] - G307 (CWE-): Deferring unsafe method "*os.File" on type "Close" (Confidence: HIGH, Severity: MEDIUM)
> defer f.Close()
[/go/src/github.com/ceph/ceph-csi/internal/util/pidlimit.go:113] - G307 (CWE-): Deferring unsafe method "*os.File" on type "Close" (Confidence: HIGH, Severity: MEDIUM)
> defer f.Close()
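The fix is to replace the bare defer with one that checks and logs the error from Close(); something along these lines (illustrative, not the exact pidlimit.go code):
```go
package example

import (
	"io/ioutil"
	"log"
	"os"
)

// readCgroupFile shows the pattern that satisfies gosec G307: the deferred
// function handles the error returned by Close() instead of discarding it.
func readCgroupFile(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer func() {
		if cerr := f.Close(); cerr != nil {
			log.Printf("failed to close %q: %v", path, cerr)
		}
	}()

	return ioutil.ReadAll(f)
}
```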
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The generated ceph.conf does not need to be readable by the group; there is
only one (system) user consuming the configuration file.
This addresses the following gosec warning:
[/go/src/github.com/ceph/ceph-csi/internal/util/cephconf.go:52] - G306 (CWE-): Expect WriteFile permissions to be 0600 or less (Confidence: HIGH, Severity: MEDIUM)
> ioutil.WriteFile(CephConfigPath, cephConfig, 0640)
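The change itself is a one-liner; roughly, with names matching the warning above:
```go
package example

import "io/ioutil"

// writeCephConfig writes the generated ceph.conf readable only by its owner
// (0600 instead of 0640), which is all the single consuming user needs.
func writeCephConfig(cephConfigPath string, cephConfig []byte) error {
	return ioutil.WriteFile(cephConfigPath, cephConfig, 0600)
}
```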
Signed-off-by: Niels de Vos <ndevos@redhat.com>
gosec-2.3.0 complains about the following:
[/go/src/github.com/ceph/ceph-csi/internal/util/cephcmds.go:146] - G307 (CWE-): Deferring unsafe method "*os.File" on type "Close" (Confidence: HIGH, Severity: MEDIUM)
> defer tmpFile.Close()
By logging the error from Close(), the warning is gone.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
cephcsi needs to mount the cephfs subvolume
as read-only when the PVC access mode is ROX, to
provide only read-only access to the users.
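The gist of the change, as a sketch against the CSI access-mode types (not the actual cephfs node-server code):
```go
package example

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
)

// appendReadOnlyOption adds "ro" to the cephfs mount options when the
// requested access mode is ROX (MULTI_NODE_READER_ONLY), so the subvolume is
// mounted read-only.
func appendReadOnlyOption(mode csi.VolumeCapability_AccessMode_Mode, options []string) []string {
	if mode == csi.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY {
		return append(options, "ro")
	}
	return options
}
```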
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
With the current code base, the subvolumegroup is
created only once, and even for a different cluster,
subvolumegroup creation is not attempted again.
Added support for creating multiple subvolumegroups by
validating subvolumegroup creation once per cluster.
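A sketch of the per-cluster bookkeeping; the names are illustrative, not the cephfs driver's:
```go
package example

import "sync"

// Track, per cluster ID, whether the subvolumegroup has already been created,
// so that a second cluster still gets its own subvolumegroup.
var (
	groupCreatedLock sync.Mutex
	groupCreated     = make(map[string]bool) // clusterID -> created?
)

func ensureSubVolumeGroup(clusterID string, create func() error) error {
	groupCreatedLock.Lock()
	defer groupCreatedLock.Unlock()

	if groupCreated[clusterID] {
		return nil // already created for this cluster
	}
	if err := create(); err != nil {
		return err
	}
	groupCreated[clusterID] = true
	return nil
}
```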
Fixes: #1123
Signed-off-by: Yug Gupta <ygupta@redhat.com>
The previous function used to remove omap keys apparently did not
return errors when removing omap keys from a missing omap (oid).
Mimic that behavior when using the API.
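With go-ceph that roughly means swallowing rados.ErrNotFound (illustrative sketch):
```go
package example

import (
	"errors"

	"github.com/ceph/go-ceph/rados"
)

// removeOMapKeys mimics the old CLI behaviour: removing keys from an omap
// object that does not exist is not treated as an error.
func removeOMapKeys(ioctx *rados.IOContext, oid string, keys []string) error {
	err := ioctx.RmOmapKeys(oid, keys)
	if errors.Is(err, rados.ErrNotFound) {
		// the oid (and therefore its omap) is missing; nothing to remove
		return nil
	}
	return err
}
```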
Signed-off-by: John Mulligan <jmulligan@redhat.com>
For any function that sets more than one key on a single oid, setting
them as a batch will be more efficient.
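For example, with go-ceph's IOContext (a sketch of the idea, not the journal code itself):
```go
package example

import "github.com/ceph/go-ceph/rados"

// setOMapKeys writes all key/value pairs for one oid in a single librados
// call instead of one call per key.
func setOMapKeys(ioctx *rados.IOContext, oid string, pairs map[string]string) error {
	bp := make(map[string][]byte, len(pairs))
	for k, v := range pairs {
		bp[k] = []byte(v)
	}
	return ioctx.SetOmap(oid, bp)
}
```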
Signed-off-by: John Mulligan <jmulligan@redhat.com>
For any function that removes more than one key on a single oid, removing
them as a batch will be more efficient.
Signed-off-by: John Mulligan <jmulligan@redhat.com>
Taking this approach means that any function that must get more than one
key's value from the same oid can be more efficient by calling out to
Ceph only once.
To be cautious and avoid missing things, we always request that Ceph return
more keys than we actually expect to be set on the oid. If unexpected keys
are present, limiting ourselves to iterating over only the number of keys
we expect to be on the object could make us hit an unexpected key first
and miss the keys we want.
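A sketch of the idea with go-ceph; illustrative only, and the padding amount is arbitrary rather than taken from the code:
```go
package example

import "github.com/ceph/go-ceph/rados"

// getOMapValues fetches the wanted keys of one oid with a single call. The
// maxReturn passed to ceph is padded well beyond len(want) so that unexpected
// keys stored on the object cannot crowd out the ones we care about.
func getOMapValues(ioctx *rados.IOContext, oid string, want []string) (map[string]string, error) {
	const extraKeys = 32
	all, err := ioctx.GetOmapValues(oid, "", "", int64(len(want)+extraKeys))
	if err != nil {
		return nil, err
	}

	result := make(map[string]string, len(want))
	for _, k := range want {
		if v, ok := all[k]; ok {
			result[k] = string(v)
		}
	}
	return result, nil
}
```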
Signed-off-by: John Mulligan <jmulligan@redhat.com>
Convert the business logic of the journal to use the new go-ceph-based
omap manipulation functions.
Signed-off-by: John Mulligan <jmulligan@redhat.com>