This commit makes sure sparsify() is not run when the rbd
image is in use.
Running rbd sparsify while a workload is doing I/O, or running
it too frequently, is not desirable.
When an image is in use, fstrim is run instead; sparsify is
run only when the image is not mapped.
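A minimal sketch of that decision, assuming a CLI-based check (the
helper names and the watcher-based test are illustrative, not the
driver's actual code):
```go
package rbdutil

import (
	"os/exec"
	"strings"
)

// isImageInUse is a hypothetical helper: `rbd status` lists watchers,
// and an image that is mapped somewhere has at least one watcher.
func isImageInUse(spec string) (bool, error) {
	out, err := exec.Command("rbd", "status", spec).CombinedOutput()
	if err != nil {
		return false, err
	}
	return !strings.Contains(string(out), "Watchers: none"), nil
}

// reclaimSpace runs sparsify only when the image is not in use; for a
// mapped image, fstrim on the mounted filesystem reclaims space instead.
func reclaimSpace(spec string) error {
	inUse, err := isImageInUse(spec)
	if err != nil || inUse {
		return err
	}
	return exec.Command("rbd", "sparsify", spec).Run()
}
```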
Signed-off-by: Rakshith R <rar@redhat.com>
(cherry picked from commit 98fdadfde7)
# Conflicts:
# internal/rbd/errors.go
As per the csi-addons spec, last sync time is a
required parameter in the GetVolumeReplicationInfo
response. If we fail to parse the description, return
a NotFound error message instead of nil,
which would be an empty response.
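A minimal sketch of that behavior using gRPC status codes
(parseDescription and the message text are hypothetical):
```go
package replication

import (
	"errors"
	"time"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"google.golang.org/protobuf/types/known/timestamppb"
)

// parseDescription is a hypothetical stand-in for extracting the
// last-sync-time from the mirror image status description.
func parseDescription(description string) (time.Time, error) {
	// ... parsing elided ...
	return time.Time{}, errors.New("no timestamp in description")
}

// lastSyncTime returns NotFound instead of a nil (empty) response when
// the description cannot be parsed.
func lastSyncTime(description string) (*timestamppb.Timestamp, error) {
	t, err := parseDescription(description)
	if err != nil {
		return nil, status.Errorf(codes.NotFound, "failed to get last sync time: %v", err)
	}
	return timestamppb.New(t), nil
}
```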
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
After failover of workloads to the secondary
cluster when the primary cluster is down,
the RBD image is not marked healthy, and VR
resources are not promoted to Primary:
in VolumeReplication, the `CURRENT STATE`
remains Unknown and doesn't change to Primary.
This happens because the primary cluster went down
and we have force-promoted the image on the
secondary cluster, and the image stays in
up+stopping_replay or could be in any other state.
The assumption so far was that the image will
always be `up+stopped`. But the image will be in
`up+stopped` only for a planned failover, and it
could be in any other state if it is a forced
failover. For this reason, remove
checkHealthyPrimary from the PromoteVolume RPC call.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
rbd mirroring CLI calls are async and do not wait
for the operation to complete. For example, `rbd mirror image enable`
enables mirroring on the image, but it doesn't
ensure that the image has mirroring enabled and is a healthy
primary. The same goes for promoting a volume.
This commit adds a check in PromoteVolume to make sure
the image is in a healthy state, i.e. `up+stopped`.
Note: no intermediate states are considered, to make
sure the image is completely healthy before responding
success to the RPC call.
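A minimal sketch of such a check (CLI-based and simplified; the real
driver may inspect the mirror status differently):
```go
package promote

import (
	"fmt"
	"os/exec"
	"strings"
)

// isHealthyPrimary reports whether the mirror status of pool/image shows
// the fully promoted, healthy state `up+stopped`. Any intermediate state
// (e.g. up+stopping_replay) is treated as not yet healthy.
func isHealthyPrimary(pool, image string) (bool, error) {
	spec := fmt.Sprintf("%s/%s", pool, image)
	out, err := exec.Command("rbd", "mirror", "image", "status", spec).CombinedOutput()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), "up+stopped"), nil
}
```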
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit adds the logic to detect whether a passed-in volumeID
is a migrated volume ID; if yes, the driver connects to the
backend cluster and cleans up/deletes the image. The logic is
applied only if it is a migration volume ID. The migration volume ID
carries information like the mons, pool, and image name, which is
good enough for the driver to identify and connect to the backend
cluster for its operations.
migration volID format:
<mig>_mons-<monsHash>_image-<imageUID>_<poolHash>
Details on the hash values:
* MonsHash: this carries a hash value (md5sum) which acts as the
`clusterID` for the operations in this context.
* ImageUID: this is the unique UUID generated by Kubernetes for the
created volume.
* PoolHash: this is an encoded string of the pool name.
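A minimal sketch of splitting such a volume ID (the type and helper
names are illustrative; it assumes the leading token is the literal
`mig` and that the hash segments contain no `_`):
```go
package migration

import (
	"errors"
	"strings"
)

// migrationVolID holds the pieces encoded in
// mig_mons-<monsHash>_image-<imageUID>_<poolHash>.
type migrationVolID struct {
	monsHash string // md5sum of the mons, used as the clusterID
	imageUID string // UUID Kubernetes generated for the volume
	poolHash string // encoded pool name
}

func parseMigrationVolID(volID string) (*migrationVolID, error) {
	parts := strings.Split(volID, "_")
	if len(parts) != 4 || parts[0] != "mig" ||
		!strings.HasPrefix(parts[1], "mons-") ||
		!strings.HasPrefix(parts[2], "image-") {
		return nil, errors.New("not a migration volume ID")
	}
	return &migrationVolID{
		monsHash: strings.TrimPrefix(parts[1], "mons-"),
		imageUID: strings.TrimPrefix(parts[2], "image-"),
		poolHash: parts[3],
	}, nil
}
```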
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Make sure to operate within the namespace, if any is given,
when dealing with rbd images and snapshots and their journals.
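A minimal sketch, assuming the go-ceph bindings: the rados namespace is
selected on the IOContext before any image, snapshot, or journal
operation (the function name is illustrative):
```go
package rbdutil

import (
	"github.com/ceph/go-ceph/rados"
)

// openPoolWithNamespace opens an IOContext for the pool and, when a
// namespace is given, scopes every subsequent operation on it to that
// namespace.
func openPoolWithNamespace(conn *rados.Conn, pool, namespace string) (*rados.IOContext, error) {
	ioctx, err := conn.OpenIOContext(pool)
	if err != nil {
		return nil, err
	}
	if namespace != "" {
		ioctx.SetNamespace(namespace)
	}
	return ioctx, nil
}
```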
Signed-off-by: Mehdy Khoshnoody <mehdy.khoshnoody@gmail.com>
This change replaces the sentinel errors in the rbd module with
standard errors created with errors.New().
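The pattern, in a minimal sketch (the sentinel name is illustrative,
not the module's actual error set):
```go
package rbderrors

import (
	"errors"
	"fmt"
)

// ErrImageNotFound is a package-level sentinel created with errors.New().
var ErrImageNotFound = errors.New("image not found")

func getImage(name string) error {
	// ... lookup elided ...
	return fmt.Errorf("rbd: %q: %w", name, ErrImageNotFound)
}

// callers match wrapped errors with errors.Is instead of comparing
// custom error types
func imageExists(name string) (bool, error) {
	err := getImage(name)
	if errors.Is(err, ErrImageNotFound) {
		return false, nil
	}
	return err == nil, err
}
```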
Related: #1203
Signed-off-by: Sven Anderson <sven@redhat.com>
This adds support for creating and deleting snapshots,
and for creating a new rbd image from a snapshot.
* Create a snapshot
  * Create a temporary snapshot from the parent volume
  * Clone a new image from the temporary snapshot with the options
    --rbd-default-clone-format 2 --image-feature layering,deep-flatten
  * Delete the temporary snapshot created
  * Create a new snapshot from the cloned image
  * Check the image chain depth: if the soft limit is reached, add a
    task to flatten the cloned image and return success; if the depth
    reaches the hard limit, add a task to flatten the cloned image and
    return snapshot status ready as false
```bash
1) rbd snap create <RBD image for src k8s volume>@<random snap name>
2) rbd clone --rbd-default-clone-format 2 --image-feature
layering,deep-flatten <RBD image for src k8s volume>@<random snap>
<RBD image for temporary snap image>
3) rbd snap rm <RBD image for src k8s volume>@<random snap name>
4) rbd snap create <RBD image for temporary snap image>@<random snap name>
5) check the depth; if the depth is greater than the configured hard
   limit, add a task to flatten the cloned image and return snapshot
   status ready as false; if the depth is greater than the soft limit,
   add a task to flatten the image and return success
```
* Create a clone from a snapshot
  * Clone a new image from the snapshot with user-provided options
  * Check the depth (n) of the cloned image: if n >= hard limit,
    add a task to flatten the image and return ABORT (to avoid an
    image leak); see the sketch after the code block below
```bash
1) rbd clone --rbd-default-clone-format 2 --image-feature
<k8s dst vol config> <RBD image for temporary snap image>@<random snap name>
<RBD image for k8s dst vol>
2) check the depth; if the depth is greater than the configured hard
   limit, add a task to flatten the cloned image and return an ABORT
   error; if the depth is greater than the soft limit, add a task to
   flatten the image and return success
```
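A minimal sketch of the soft/hard limit decision used in both flows
above (the helper and limit parameters are illustrative):
```go
package clone

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// addFlattenTask is a hypothetical stand-in for scheduling a ceph
// manager task that flattens the cloned image in the background.
func addFlattenTask(image string) { /* elided */ }

// checkCloneDepth flattens in the background once the soft limit is
// reached, and aborts the request at the hard limit to avoid leaking
// long clone chains.
func checkCloneDepth(image string, depth, softLimit, hardLimit uint) error {
	switch {
	case depth >= hardLimit:
		addFlattenTask(image)
		return status.Error(codes.Aborted, "image flattening in progress")
	case depth >= softLimit:
		addFlattenTask(image)
	}
	return nil
}
```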
* Delete snapshot or PVC
  * Move the temporary cloned image to the trash
  * Add a task to remove the image from the trash
```bash
1) rbd trash mv <cloned image>
2) ceph rbd task add trash remove <cloned image>
```
With the earlier implementation, to delete the image we used to add
a task to remove the image. With the new changes this cannot be done,
as the image may contain snapshots or linked clones, so we will be
doing the steps below to delete an image (this is
applicable to both normal images and cloned images):
* Move the rbd image to the trash
* Add a task to remove the image from the trash
```bash
1) rbd trash mv <image>
2) ceph rbd task add trash remove <image>
```
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The internal/ directory in Go has a special meaning, and indicates that
those packages are not meant for external consumption. Ceph-CSI does
not provide public APIs for other projects to consume, and there is no
plan to keep the API of the internally used packages stable.
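For illustration, an import like the one below compiles only from code
inside the ceph-csi module itself; the Go toolchain rejects it from any
external module:
```go
package main

// The Go compiler enforces the internal/ rule at build time: outside the
// ceph-csi source tree this import fails with
// "use of internal package github.com/ceph/ceph-csi/internal/rbd not allowed".
import (
	_ "github.com/ceph/ceph-csi/internal/rbd"
)

func main() {}
```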
Closes: #903
Signed-off-by: Niels de Vos <ndevos@redhat.com>