If any operation like Resize or Delete snapshot fails, we need to
remove both the snapshot and the clone to avoid a resource leak.
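A minimal sketch of the intended cleanup flow; resizeClone,
deleteSnapshot and deleteClone are hypothetical stand-ins for the real
cephcsi helpers:

```
package example

import (
	"context"
	"log"
)

// Hypothetical stand-ins for the real cephcsi helpers.
func resizeClone(ctx context.Context) error    { return nil }
func deleteSnapshot(ctx context.Context) error { return nil }
func deleteClone(ctx context.Context) error    { return nil }

// createCloneFromSnapshot shows the intent: if an operation such as Resize
// fails after the snapshot and clone were created, remove both of them so
// that no stale resources are leaked.
func createCloneFromSnapshot(ctx context.Context) error {
	if err := resizeClone(ctx); err != nil {
		if delErr := deleteSnapshot(ctx); delErr != nil {
			log.Printf("failed to delete snapshot: %v", delErr)
		}
		if delErr := deleteClone(ctx); delErr != nil {
			log.Printf("failed to delete clone: %v", delErr)
		}

		return err
	}

	return nil
}
```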
closes: #4218
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The ReplicationServer is not used anymore; the functionality has moved
to CSI-Addons and the `internal/csi-addons/rbd` package. These last
references were not activated anywhere, so they can be removed without
any impact.
See-also: #3314
Signed-off-by: Niels de Vos <ndevos@ibm.com>
The HealthChecker is configured to use the Staging path of the volume,
with a `.csi/` subdirectory. In the future this could be a directory
that is not under the Published directory.
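A small sketch of how the checker directory could be derived from the
staging path (the function name is illustrative, not the cephcsi
symbol):

```
package example

import "path/filepath"

// healthCheckPath returns the directory used by the HealthChecker: a ".csi/"
// subdirectory under the volume's staging path, kept separate from the
// published directory.
func healthCheckPath(stagingTargetPath string) string {
	return filepath.Join(stagingTargetPath, ".csi")
}
```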
Fixes: #4219
Signed-off-by: Niels de Vos <ndevos@ibm.com>
This commit removes the code for protecting and unprotecting
snapshots, as that functionality is being deprecated.
Signed-off-by: Praveen M <m.praveen@ibm.com>
Multiple go-routines may simultaneously create the
subVolumeGroupCreated map or write into it for a particular group.
This commit safeguards the subVolumeGroupCreated map from concurrent
creation and writes while still allowing multiple readers.
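A minimal sketch of the locking scheme with a sync.RWMutex; the
variable and function names are illustrative:

```
package example

import "sync"

// subVolumeGroupCreated tracks which subvolume groups have already been
// created. A sync.RWMutex guards it so concurrent writers are serialized
// while multiple readers can proceed in parallel.
var (
	subVolumeGroupCreatedMutex sync.RWMutex
	subVolumeGroupCreated      = map[string]bool{}
)

func isSubVolumeGroupCreated(group string) bool {
	subVolumeGroupCreatedMutex.RLock()
	defer subVolumeGroupCreatedMutex.RUnlock()

	return subVolumeGroupCreated[group]
}

func markSubVolumeGroupCreated(group string) {
	subVolumeGroupCreatedMutex.Lock()
	defer subVolumeGroupCreatedMutex.Unlock()

	subVolumeGroupCreated[group] = true
}
```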
Signed-off-by: Rakshith R <rar@redhat.com>
Multiple go-routines may simultaneously check for a clusterID's
presence in clusterAdditionalInfo and create an entry if it is absent.
This set of operations needs to be serialized.
Therefore, this commit safeguards the clusterAdditionalInfo map from
concurrent writes with a mutex to prevent the above problem.
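A minimal sketch of the serialized check-and-create, assuming a
simplified localClusterState value:

```
package example

import "sync"

// localClusterState is a placeholder for the per-cluster info cephcsi keeps.
type localClusterState struct{}

var (
	clusterAdditionalInfoMutex sync.Mutex
	clusterAdditionalInfo      = map[string]*localClusterState{}
)

// getClusterAdditionalInfo serializes the "check for the clusterID, create an
// entry if absent" sequence, so two go-routines cannot both observe a missing
// entry and race to create it.
func getClusterAdditionalInfo(clusterID string) *localClusterState {
	clusterAdditionalInfoMutex.Lock()
	defer clusterAdditionalInfoMutex.Unlock()

	if _, ok := clusterAdditionalInfo[clusterID]; !ok {
		clusterAdditionalInfo[clusterID] = &localClusterState{}
	}

	return clusterAdditionalInfo[clusterID]
}
```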
Signed-off-by: Rakshith R <rar@redhat.com>
Add support to create an RWX clone from a ROX clone. In Ceph, no
subvolume clone is created when a ROX clone is created from a
snapshot; only an internal ref counter is added. This PR allows
creating an RWX clone from a ROX clone, which lets users create an RW
copy of a PVC: cephcsi identifies the snapshot created for the ROX
volume and creates a subvolume from that CephFS snapshot.
updates: #3603
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Not sure why, but go-lint is failing with the below error, and this
fix is required to make it pass:
```
directive `//nolint:staticcheck // See comment above.`
is unused for linter "staticcheck" (nolintlint)
```
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Set the VolumeOptions.Pool parameter to empty for snapshot-backed
volumes. The Pool parameter is optional and is only used as the
'pool-layout' parameter during subvolume and subvolume clone create
requests in cephcsi; it is not used for snapshot-backed volumes at all.
It is also not saved anywhere for use in subsequent operations after
create. Therefore, we can set it to empty and not error out.
Signed-off-by: rakshith-r <rar@redhat.com>
We should not call ExpandVolume for a BackingSnapshot subvolume, as no
real subvolume is created for it, and even if we did call it,
ExpandVolume would fail because no real subvolume exists.
This commit fixes that by adjusting the `if` check to ensure that
ExpandVolume is only called when the volume request is to create from
a snapshot or a volume and BackingSnapshot is not true.
Sample code here: https://go.dev/play/p/PI2tNii5tTg
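The adjusted condition boils down to something like the following
sketch (field names simplified):

```
package example

// shouldExpand mirrors the adjusted `if` check: ExpandVolume is only called
// when the volume is being created from a snapshot or from another volume,
// and the volume is not backed by a snapshot (no real subvolume exists for a
// snapshot-backed volume, so expanding it would always fail).
func shouldExpand(fromSnapshot, fromVolume, backingSnapshot bool) bool {
	return (fromSnapshot || fromVolume) && !backingSnapshot
}
```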
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Add Ceph FS fscrypt support, similar to the RBD/ext4 fscrypt
integration. Supports encrypted PVCs, snapshots and clones.
Requires kernel and Ceph MDS support that is currently not in any
stable release.
Signed-off-by: Marcel Lauhoff <marcel.lauhoff@suse.com>
This commit removes the protobuf dependency locking in the module
description.
Also, ptypes.TimestampProto is deprecated, so this commit makes use of
timestamppb.New() for the construction.
The ParseTime() function has been removed and callers adjusted
accordingly.
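A small sketch of the replacement construction using timestamppb.New():

```
package example

import (
	"time"

	"google.golang.org/protobuf/types/known/timestamppb"
)

// toProtoTimestamp replaces the deprecated ptypes.TimestampProto call:
// timestamppb.New converts a time.Time directly and does not return an error.
func toProtoTimestamp(t time.Time) *timestamppb.Timestamp {
	return timestamppb.New(t)
}
```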
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
When we stat the targetpath and there is an error, we can check
whether it is due to corruption. If so, cephcsi can return abnormal in
the NodeGetVolumeStats reply so that the consumer (CO/admin) can
detect it and take further action.
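A rough sketch of the reply shape using the CSI VolumeCondition
message; the real change inspects the stat error more closely to
decide whether the mount is actually corrupted:

```
package example

import (
	"os"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// volumeCondition sketches the idea: if stat on the target path fails,
// report the volume as abnormal so the CO/admin can detect it and act.
func volumeCondition(targetPath string) *csi.VolumeCondition {
	if _, err := os.Stat(targetPath); err != nil {
		return &csi.VolumeCondition{
			Abnormal: true,
			Message:  err.Error(),
		}
	}

	return &csi.VolumeCondition{
		Abnormal: false,
		Message:  "volume is in a healthy condition",
	}
}
```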
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
To avoid subvolume leaks, delete the subvolume if the SetAllMetadata
operation fails. If any operation fails after creating the subvolume,
we remove the omap; as the omap gets removed, we also need to remove
the subvolume to avoid stale resources.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As we need to compare the error type instead of the error value, we
need to use errors.As to check whether the API is implemented or not.
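A minimal sketch of the errors.As usage; notImplementedError is a
hypothetical error type standing in for the one returned when the API
is missing:

```
package example

import (
	"errors"
	"fmt"
)

// notImplementedError is a hypothetical error type standing in for the error
// cephcsi gets back when the Ceph cluster does not implement an API.
type notImplementedError struct{ op string }

func (e notImplementedError) Error() string {
	return fmt.Sprintf("%s is not implemented", e.op)
}

// isNotImplemented compares the error *type* rather than the error value:
// errors.As walks the wrapped chain and matches any notImplementedError,
// which a plain value comparison with errors.Is cannot do here.
func isNotImplemented(err error) bool {
	var target notImplementedError

	return errors.As(err, &target)
}
```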
fixes: #3347
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
CephFS does not have a concept of "free inodes", inodes get allocated
on-demand in the filesystem.
This confuses alerting managers that expect a (high) number of free
inodes, and warnings get produced if the number of free inodes is not
high enough. This causes alerts to always get reported for CephFS.
To prevent the false-positive alerts from happening, the
NodeGetVolumeStats procedure for CephFS (and CephNFS) will not contain
inodes in the reply anymore.
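A sketch of the reduced reply, reporting only byte usage:

```
package example

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
)

// cephfsVolumeStats sketches the reply for CephFS/CephNFS: only byte usage is
// reported, inode usage is left out because CephFS allocates inodes on demand
// and a "free inodes" figure would only trigger false-positive alerts.
func cephfsVolumeStats(available, total, used int64) *csi.NodeGetVolumeStatsResponse {
	return &csi.NodeGetVolumeStatsResponse{
		Usage: []*csi.VolumeUsage{
			{
				Unit:      csi.VolumeUsage_BYTES,
				Available: available,
				Total:     total,
				Used:      used,
			},
		},
	}
}
```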
See-also: https://bugzilla.redhat.com/2128263
Signed-off-by: Niels de Vos <ndevos@redhat.com>
In case the subvolumegroup is deleted and recreated, we need to
restart the cephcsi provisioner pod to clear the cache that cephcsi
maintains. With this PR, if cephcsi sees a NotFound error during
subvolume creation it will reset the cache for that filesystem, so
that in the next RPC call cephcsi will try to create the
subvolumegroup again.
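A minimal sketch of the cache reset, with a hypothetical errNotFound
sentinel and simplified names:

```
package example

import (
	"errors"
	"sync"
)

// errNotFound is a hypothetical sentinel for the NotFound error returned by
// Ceph when the subvolumegroup has been deleted out-of-band.
var errNotFound = errors.New("not found")

var (
	groupCacheMutex sync.Mutex
	// cache of filesystems for which the subvolumegroup is known to exist.
	subVolumeGroupCreated = map[string]bool{}
)

// resetGroupCacheOnNotFound drops the cached entry for the filesystem when
// subvolume creation fails with NotFound, so the next RPC call attempts to
// create the subvolumegroup again instead of trusting the stale cache.
func resetGroupCacheOnNotFound(fsName string, err error) {
	if errors.Is(err, errNotFound) {
		groupCacheMutex.Lock()
		delete(subVolumeGroupCreated, fsName)
		groupCacheMutex.Unlock()
	}
}
```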
Ref: https://github.com/rook/rook/issues/10623
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
In a cluster we can have multiple filesystems; for that we need a map
of subvolumegroups to check whether the subvolumegroup for a
filesystem is created or not.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If the Ceph cluster is of an older version and does not support
metadata operations, return success instead of failing the request
when the metadata operation is not supported.
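A minimal sketch of the behaviour, with a hypothetical
errMetadataNotSupported sentinel:

```
package example

import "errors"

// errMetadataNotSupported is a hypothetical sentinel for the error returned
// when the Ceph cluster is too old to support subvolume metadata operations.
var errMetadataNotSupported = errors.New("metadata operation not supported")

// setMetadataIfSupported treats an unsupported metadata operation as success
// instead of failing the whole request on older clusters.
func setMetadataIfSupported(setMetadata func() error) error {
	err := setMetadata()
	if err != nil && errors.Is(err, errMetadataNotSupported) {
		return nil
	}

	return err
}
```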
fixes: #3347
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Getting an `is unused for linter "staticcheck" (nolintlint)` error
message due to a wrong comment format. The format is now
`//directive // comment`.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
`--setmetadata` is false by default; honoring it will keep the
metadata disabled by default.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
`--setmetadata` is false by default; honoring it will keep the
metadata disabled by default.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Make sure to set metadata when the subvolume snapshot already exists,
i.e. if the provisioner pod is restarted while createSnapshot is in
progress, say it created the subvolume snapshot but didn't yet set the
metadata.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Set snapshot-name/snapshot-namespace/snapshotcontent-name details
on subvolume snapshots as metadata on create.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This change helps read the cluster name from the cmdline args; the
provisioner will set the same on the subvolume.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Make sure to set metadata when the subvolume already exists, i.e. if
the provisioner pod is restarted while createVolume is in progress,
say it created the subvolume but didn't yet set the metadata.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This helps monitoring solutions without access to the Kubernetes
cluster to display the details of the PV/PVC/Namespace in their
dashboards.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Recently the k8s.io/mount-utils package added more runtime detection.
When creating a new Mounter, the detection is run every time. This is
unfortunate, as it logs a message like the following:
```
mount_linux.go:283] Detected umount with safe 'not mounted' behavior
```
This message might be useful, so it is probably good to keep it.
In Ceph-CSI there are various locations where Mounter instances are
created. Moving that to the DefaultNodeServer type reduces it to a
single place. Some utility functions need to accept the additional
parameter too, so that has been modified as well.
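A simplified sketch of keeping a single Mounter on the node-server
type (field and constructor names are abbreviated):

```
package example

import (
	mount "k8s.io/mount-utils"
)

// DefaultNodeServer sketches the change: the Mounter is created once when the
// node server is constructed, instead of calling mount.New() at every call
// site, so the runtime detection (and its log message) only happens once.
type DefaultNodeServer struct {
	Mounter mount.Interface
}

func NewDefaultNodeServer() *DefaultNodeServer {
	return &DefaultNodeServer{
		Mounter: mount.New(""),
	}
}
```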
See-also: kubernetes/kubernetes#109676
Signed-off-by: Niels de Vos <ndevos@redhat.com>