Using os.RemoveAll will remove everything
in the directory. After the Unmount we should
be using os.Remove, which only removes the
empty directory.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit cd09266870)
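A minimal sketch of the resulting pattern, with a hypothetical
cleanupMountPoint helper (the real ceph-csi code differs):

```go
package sketch

import (
	"os"

	"golang.org/x/sys/unix"
)

// cleanupMountPoint unmounts targetPath and then removes the now-empty
// mount-point directory. os.Remove fails if the directory is not empty,
// guarding against deleting data that os.RemoveAll would wipe silently.
func cleanupMountPoint(targetPath string) error {
	if err := unix.Unmount(targetPath, 0); err != nil {
		return err
	}
	// Only the empty directory is removed; unexpected leftover content
	// makes this call fail instead of being destroyed.
	return os.Remove(targetPath)
}
```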
We should not depend on the CO to serialize
requests; instead we need our own internal
locks to ensure that we don't perform
concurrent operations for the same request.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 7cfeae579f)
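The idea, sketched with an illustrative keyed try-lock (not the actual
ceph-csi lock type):

```go
package sketch

import "sync"

// OperationLocks serializes operations per volume ID, so two requests
// for the same volume never run concurrently, regardless of whether
// the CO serializes its calls.
type OperationLocks struct {
	mu    sync.Mutex
	inUse map[string]struct{}
}

func NewOperationLocks() *OperationLocks {
	return &OperationLocks{inUse: map[string]struct{}{}}
}

// TryAcquire returns false if an operation for id is already running.
func (l *OperationLocks) TryAcquire(id string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if _, busy := l.inUse[id]; busy {
		return false
	}
	l.inUse[id] = struct{}{}
	return true
}

func (l *OperationLocks) Release(id string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	delete(l.inUse, id)
}
```

When TryAcquire fails, a CSI driver can reject the RPC with an Aborted
status so the CO retries later.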
There was a discrepancy between the objectId
used when creating the lock and when releasing
the lock, which caused every lock to hang.
Signed-off-by: NymanRobin <robin.nyman@est.tech>
This patch avoids a hanging mutex lock scenario when
fscrypt fails to unlock, preventing unnecessary delays.
Signed-off-by: Sunnatillo <sunnat.samadov@est.tech>
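A minimal illustration of the pattern that prevents the hang, with
illustrative names: the mutex is released on every return path,
including the fscrypt failure path:

```go
package sketch

import (
	"fmt"
	"sync"
)

// unlockWithFscrypt acquires the per-volume mutex and guarantees it is
// released on every return path, including when fscrypt fails to
// unlock, so later requests for the volume no longer hang on the mutex.
func unlockWithFscrypt(mu *sync.Mutex, fscryptUnlock func() error) error {
	mu.Lock()
	defer mu.Unlock() // also runs on the error path below
	if err := fscryptUnlock(); err != nil {
		return fmt.Errorf("fscrypt unlock failed: %w", err)
	}
	return nil
}
```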
The way the fscrypt client handles metadata and policy
creation causes errors when multiple instances start simultaneously.
This commit adds a lock to ensure the initial setup
completes correctly, preventing race conditions and
mismatches.
Signed-off-by: Sunnatillo <sunnat.samadov@est.tech>
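A sketch of the idea, with hypothetical setup callbacks: a single
process-wide mutex serializes the initial metadata and policy creation:

```go
package sketch

import "sync"

// setupLock serializes fscrypt metadata and policy creation; without
// it, two instances starting at the same time can race and end up with
// mismatched policies.
var setupLock sync.Mutex

func initializeFscrypt(createMetadata, createPolicy func() error) error {
	setupLock.Lock()
	defer setupLock.Unlock()
	if err := createMetadata(); err != nil {
		return err
	}
	return createPolicy()
}
```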
This commit resolves the govet issue -
`copylocks: call of append copies lock value ... contains sync.Mutex`
Embedding DoNotCopy in a struct is a convention to signal and prevent
shallow copies, as recommended in Go's best practices. This does not
rely on a language feature but is instead a special case within the vet
checker.
For more details, see https://golang.org/issues/8005
Signed-off-by: Praveen M <m.praveen@ibm.com>
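The convention looks roughly like this (a sketch, not the exact
ceph-csi definition): a zero-length array of sync.Mutex adds no runtime
overhead, but it makes vet's copylocks check flag shallow copies:

```go
package sketch

import "sync"

// DoNotCopy is a zero-sized type whose only purpose is to trip go vet's
// copylocks checker: copying a struct that embeds it copies a
// sync.Mutex, which vet reports. As a zero-length array it occupies no
// memory at runtime.
type DoNotCopy [0]sync.Mutex

// Config embeds DoNotCopy, so `c2 := c` on a Config value is flagged by
// `go vet` with the copylocks warning quoted above.
type Config struct {
	DoNotCopy
	Monitors []string
}
```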
Instead of adding single volumes to the
group journal, support adding a map of
multiple volumeIDs to the group journal,
which is required for RBD as well.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Adjusted method names so they are not
specific to volumesnapshot, as we want
to reuse the same journal for
volumegroup as well.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit removes the `VOLUME_ACCESSIBILITY_CONSTRAINTS` capability
from CephFS, as topology-based volume provisioning is not yet supported.
Signed-off-by: Praveen M <m.praveen@ibm.com>
golangci-lint reports these:
The copy of the 'for' variable "kmsID" can be deleted (Go 1.22+)
(copyloopvar)
Signed-off-by: Niels de Vos <ndevos@ibm.com>
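An illustration of the change (the loop body here is invented for the
example; only the kmsID variable name comes from the lint report):

```go
package main

import "fmt"

func main() {
	kmsIDs := []string{"kms-1", "kms-2", "kms-3"}

	handlers := make([]func(), 0, len(kmsIDs))
	for _, kmsID := range kmsIDs {
		// Before Go 1.22 a copy was needed so each closure captured
		// its own value:
		//     kmsID := kmsID
		// Since Go 1.22 the loop variable is scoped per iteration,
		// making the copy dead code that copyloopvar flags.
		handlers = append(handlers, func() { fmt.Println(kmsID) })
	}
	for _, h := range handlers {
		h() // prints kms-1, kms-2, kms-3
	}
}
```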
The "slices" package has been introduced in Go 1.21 and can be used
instead of the Kubernetes package that will be replaced by the standard
package at one point too.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
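For example, a membership check can now use the standard library
directly (the snippet is illustrative; the replaced Kubernetes package
provided an equivalent Contains helper):

```go
package main

import (
	"fmt"
	"slices" // standard library since Go 1.21
)

func main() {
	monitors := []string{"mon-a", "mon-b"}

	// Previously a Kubernetes utility package supplied this helper;
	// the standard library now covers the same need.
	fmt.Println(slices.Contains(monitors, "mon-b")) // true
}
```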
Added a unit test for the
validateVolumeGroupSnapshotRequest API, which
validates the input
VolumeGroupSnapshotRequest.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Implemented the DeleteVolumeGroupSnapshot RPC, which
performs the operations below; see the sketch after
this commit message:
* Basic request validation
* Get the snapshotId and volumeId
mappings reserved for the UUID
* Delete each snapshot and remove its mapping
from the omap
* Repeat the above steps until all the mappings
are removed
* Remove the reserved UUID from the omap
* Reset the filesystem quiesce. This might be
required as CephFS doesn't provide any option to
remove the quiesce; if we get another request with
the same ID we can reuse the quiesce API for the
same set-id
* Return success if the received error is
"pool not found" or "key not found".
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
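In outline, the flow reads like this sketch; every name and signature
is illustrative pseudocode, not the actual ceph-csi implementation:

```go
package sketch

// journal is an illustrative interface; the real ceph-csi journal API
// differs.
type journal interface {
	SnapshotMappings(groupID string) (map[string]string, error) // snapID -> volID
	RemoveMapping(groupID, snapID string) error
	RemoveReservedUUID(groupID string) error
}

func deleteVolumeGroupSnapshot(
	groupID string,
	j journal,
	deleteSnapshot func(volID, snapID string) error,
	resetQuiesce func(groupID string) error,
	ignorable func(error) bool, // "pool not found" / "key not found"
) error {
	pairs, err := j.SnapshotMappings(groupID)
	if err != nil {
		if ignorable(err) {
			return nil // nothing left to clean up: report success
		}
		return err
	}
	// Delete each snapshot and drop its omap mapping until none remain.
	for snapID, volID := range pairs {
		if err := deleteSnapshot(volID, snapID); err != nil {
			return err
		}
		if err := j.RemoveMapping(groupID, snapID); err != nil {
			return err
		}
	}
	if err := j.RemoveReservedUUID(groupID); err != nil {
		return err
	}
	// CephFS has no call to remove a quiesce, so reset it; a retried
	// request with the same ID can then reuse the quiesce API for the
	// same set-id.
	return resetQuiesce(groupID)
}
```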
Implemented the CreateVolumeGroupSnapshot RPC, which
performs the operations below; see the sketch after
this commit message:
* Basic request validation
* Reserve the UUID for the group name
* Quiesce the filesystem for all the subvolumes
from the input volumeIds
* Take a snapshot for each of the input volumeIds
* Add the mapping between volumeIds and snapshot
Ids in the omap
* Release the filesystem quiesce for
all the subvolumes from the input volumeIds
Undo all the operations if anything fails.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
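A sketch of the flow including the undo path; again, all names and
signatures are illustrative stand-ins for the real ceph-csi journal and
CephFS calls:

```go
package sketch

func createVolumeGroupSnapshot(
	reserveUUID func() (string, error),
	undoReserve func(groupID string),
	quiesce, releaseQuiesce func(volIDs []string) error,
	takeSnapshot func(volID string) (string, error),
	addMapping func(groupID, volID, snapID string) error,
	volIDs []string,
) (groupID string, err error) {
	groupID, err = reserveUUID()
	if err != nil {
		return "", err
	}
	// Undo the reservation (and whatever else the undo helper cleans
	// up) if any later step fails.
	defer func() {
		if err != nil {
			undoReserve(groupID)
		}
	}()

	if err = quiesce(volIDs); err != nil {
		return "", err
	}
	for _, volID := range volIDs {
		var snapID string
		if snapID, err = takeSnapshot(volID); err != nil {
			return "", err
		}
		if err = addMapping(groupID, volID, snapID); err != nil {
			return "", err
		}
	}
	// Release the filesystem quiesce for all subvolumes once snapshots
	// and omap mappings are in place.
	if err = releaseQuiesce(volIDs); err != nil {
		return "", err
	}
	return groupID, nil
}
```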
volumegroup.go holds all the helpers
to extract the group details from the request
and also to extract the group details from the
groupID.
It also provides helpers to reserve a group
for the request name, and an undo function
in case something goes wrong and we need to
clean up the reserved omap entries.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Adding a lock for the volumegroup so
that we can serialize identical requests
and ensure the same request is not
served in parallel.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added validateCreateVolumeGroupSnapshotRequest
to validate the CreateVolumeGroupSnapshotRequest
and ensure that all the required
options are set. If not, reject the RPC request.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The only encoding version that exists is `1`. There is no need to have
multiple constants for that version across different packages. Because
there is only one version, `GenerateVolID()` does not really require it,
and it can use a default version.
If there is a need in the future to support another encoding version,
this can be revisited with a cleaner solution.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
By reading the contents of /proc/filesystems, and checking if "ceph" is
included there, running "modprobe ceph" can be skipped.
Fixes: #4376
Signed-off-by: Niels de Vos <ndevos@ibm.com>
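A minimal sketch of the check; in /proc/filesystems the filesystem name
is the last field on each line (e.g. "nodev\tsysfs" or "\text4"):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cephKernelModuleLoaded reports whether the "ceph" filesystem is
// already registered with the kernel, in which case "modprobe ceph"
// can be skipped.
func cephKernelModuleLoaded() (bool, error) {
	data, err := os.ReadFile("/proc/filesystems")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) > 0 && fields[len(fields)-1] == "ceph" {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	loaded, err := cephKernelModuleLoaded()
	fmt.Println(loaded, err)
}
```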
Consider fsName optional for static volumes,
as it is not required to be set during the mount
operation with the fuse and kernel clients.
fixes: #4311
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This pre-emptively adds a check for the EAGAIN error returned from
Ceph (as part of https://github.com/ceph/ceph/pull/52670) when all the
clone threads are busy, and returns a CSI-compatible error.
Fixes: #3996
Signed-off-by: karthik-us <ksubrahm@redhat.com>
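A hedged sketch of the mapping; codes.ResourceExhausted is one
plausible CSI-compatible choice here, not necessarily the exact gRPC
code ceph-csi returns:

```go
package main

import (
	"errors"
	"fmt"

	"golang.org/x/sys/unix"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// toCSIError maps the EAGAIN that Ceph returns when all clone threads
// are busy to a gRPC status, so the CO retries instead of failing hard.
func toCSIError(err error) error {
	if errors.Is(err, unix.EAGAIN) {
		return status.Error(codes.ResourceExhausted,
			"all clone threads are busy, retry later")
	}
	return err
}

func main() {
	fmt.Println(toCSIError(unix.EAGAIN))
}
```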
The ceph fs subvolume resize support is available
in all the active Ceph releases. Hence removing the
code that checks for the availability of the feature.
Signed-off-by: karthik-us <ksubrahm@redhat.com>
This commit makes use of CRUSH location information derived
from node labels to supply the `crush_location` and
`read_from_replica=localize` options during mount. Using
these options, CephFS will be able to redirect reads to
the closest OSD, improving performance.
Signed-off-by: Praveen M <m.praveen@ibm.com>
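A sketch of deriving the options from node labels; the label keys and
the exact crush_location value syntax shown here are assumptions, so
check the kernel client documentation for the precise format:

```go
package sketch

import (
	"fmt"
	"sort"
	"strings"
)

// crushLocationOptions builds mount options from node labels. Both the
// topology label keys and the "bucket:value|bucket:value" joining shown
// here are illustrative guesses, not the verified ceph-csi behavior.
func crushLocationOptions(nodeLabels map[string]string) string {
	crushMap := map[string]string{
		"topology.kubernetes.io/region": "region",
		"topology.kubernetes.io/zone":   "zone",
	}
	pairs := make([]string, 0, len(crushMap))
	for label, bucket := range crushMap {
		if v, ok := nodeLabels[label]; ok {
			pairs = append(pairs, fmt.Sprintf("%s:%s", bucket, v))
		}
	}
	if len(pairs) == 0 {
		return ""
	}
	sort.Strings(pairs) // deterministic option ordering
	// With these options the kernel client can localize reads to the
	// closest OSD.
	return "crush_location=" + strings.Join(pairs, "|") +
		",read_from_replica=localize"
}
```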
Implemented the capability to include kernel mount options and
fuse mount options for individual clusters within the ceph-csi-config
ConfigMap. This allows users to configure the kernel/fuse mount options
for each cluster separately. The mount options specified in the ConfigMap
will supersede those provided via command line arguments.
Signed-off-by: Praveen M <m.praveen@ibm.com>
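The precedence rule reduces to something like this (parameter names are
illustrative):

```go
package sketch

// pickMountOptions illustrates the precedence rule described above:
// per-cluster options from the ceph-csi-config ConfigMap win over the
// command-line defaults.
func pickMountOptions(configMapOpts, cmdlineOpts string) string {
	if configMapOpts != "" {
		return configMapOpts
	}
	return cmdlineOpts
}
```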
If any operation like Resize or Delete
snapshot fails, we need to remove
both the snapshot and the clone to avoid
a resource leak.
closes: #4218
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The ReplicationServer is not used anymore; the functionality has moved
to CSI-Addons and the `internal/csi-addons/rbd` package. These last
references were not activated anywhere, so they can be removed without
any impact.
See-also: #3314
Signed-off-by: Niels de Vos <ndevos@ibm.com>
The HealthChecker is configured to use the Staging path of the volume,
with a `.csi/` subdirectory. In the future this directory could be a
directory that is not under the Published directory.
Fixes: #4219
Signed-off-by: Niels de Vos <ndevos@ibm.com>
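A minimal sketch of the path construction (the helper name is
illustrative):

```go
package sketch

import "path/filepath"

// healthCheckPath returns the directory the HealthChecker probes: a
// ".csi/" subdirectory under the volume's staging path, kept separate
// from the published directory.
func healthCheckPath(stagingPath string) string {
	return filepath.Join(stagingPath, ".csi")
}
```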