When we stat() the targetPath, if there is any error we can check
whether it is due to corruption. If it is, cephcsi can report the
volume as abnormal in the NodeGetVolumeStats response so that the
consumer (CO/admin) can detect it and take further action.
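A minimal sketch of the check, using the IsCorruptedMnt helper from
k8s.io/mount-utils; the function name and messages are illustrative,
not the exact cephcsi code:
```
package health

import (
	"fmt"
	"os"

	mount "k8s.io/mount-utils"
)

// isVolumeHealthy stats targetPath and classifies a failure either as
// corruption or as a generic error.
func isVolumeHealthy(targetPath string) (bool, string) {
	if _, err := os.Stat(targetPath); err != nil {
		if mount.IsCorruptedMnt(err) {
			// Report VolumeCondition{Abnormal: true, ...} in the
			// NodeGetVolumeStats response so the CO/admin can react.
			return false, fmt.Sprintf("corrupted mount detected at %q: %v", targetPath, err)
		}
		return false, fmt.Sprintf("health check of %q failed: %v", targetPath, err)
	}
	return true, ""
}
```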
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
To avoid subvolume leaks, delete the subvolume if the SetAllMetadata
operation fails. If any operation fails after creating the subvolume,
the omap gets removed; as the omap gets removed, we also need to
remove the subvolume to avoid stale resources.
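Roughly the shape of the cleanup path; setAllMetadata and
purgeSubvolume are assumed stand-ins for the real cephcsi operations:
```
package cleanup

import (
	"context"
	"log"
)

// Assumed stand-ins for the real metadata and delete operations.
func setAllMetadata(ctx context.Context, volID string) error { return nil }
func purgeSubvolume(ctx context.Context, volID string) error { return nil }

// finishCreate shows the idea: if metadata cannot be set, the omap is
// already cleaned up on failure, so the subvolume must be deleted too
// or it leaks as a stale resource.
func finishCreate(ctx context.Context, volID string) error {
	if err := setAllMetadata(ctx, volID); err != nil {
		if purgeErr := purgeSubvolume(ctx, volID); purgeErr != nil {
			log.Printf("failed to delete subvolume %s: %v", volID, purgeErr)
		}
		return err
	}
	return nil
}
```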
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As we need to compare the error type instead of the error value, we
need to use errors.As to check whether the API is implemented or not.
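A self-contained illustration; NotImplementedError here is a
hypothetical stand-in for the concrete "not implemented" error type
the library returns, not the exact go-ceph type:
```
package main

import (
	"errors"
	"fmt"
)

// NotImplementedError stands in for the error *type* returned when
// the cluster lacks an API (an assumption for this example).
type NotImplementedError struct{ Op string }

func (e NotImplementedError) Error() string {
	return e.Op + ": not implemented"
}

func main() {
	err := fmt.Errorf("setting metadata failed: %w",
		NotImplementedError{Op: "SetAllMetadata"})

	// errors.Is matches a specific error value and cannot match a
	// wrapped error by type alone; errors.As walks the wrap chain
	// and matches on the concrete type, which is what we need here.
	var notImpl NotImplementedError
	if errors.As(err, &notImpl) {
		fmt.Println("API not implemented, skipping:", notImpl.Op)
	}
}
```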
fixes: #3347
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
CephFS does not have a concept of "free inodes", inodes get allocated
on-demand in the filesystem.
This confuses alerting managers that expect a (high) number of free
inodes, and warnings get produced if the number of free inodes is not
high enough. This causes alerts to always get reported for CephFS.
To prevent the false-positive alerts from happening, the
NodeGetVolumeStats procedure for CephFS (and CephNFS) will not contain
inodes in the reply anymore.
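A sketch of the reply, assuming the usual statfs-derived byte counts;
the helper name is illustrative:
```
package stats

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
)

// cephfsVolumeStats builds a NodeGetVolumeStats reply that reports
// byte usage only.
func cephfsVolumeStats(total, used, available int64) *csi.NodeGetVolumeStatsResponse {
	return &csi.NodeGetVolumeStatsResponse{
		Usage: []*csi.VolumeUsage{
			{
				Unit:      csi.VolumeUsage_BYTES,
				Total:     total,
				Used:      used,
				Available: available,
			},
			// Deliberately no csi.VolumeUsage_INODES entry: CephFS
			// allocates inodes on demand, so a "free inodes" number
			// only produces false-positive alerts.
		},
	}
}
```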
See-also: https://bugzilla.redhat.com/2128263
Signed-off-by: Niels de Vos <ndevos@redhat.com>
In case the subvolumegroup is deleted and recreated, we need to
restart the cephcsi provisioner pod to clear the cache that cephcsi
maintains. With this PR, if cephcsi sees a NotFound error during
subvolume creation it will reset the cache for that filesystem, so
that in the next RPC call cephcsi will try to create the
subvolumegroup again.
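A sketch of the reset, assuming a per-filesystem map guarded by a
mutex (names are illustrative) and go-ceph's rados.ErrNotFound:
```
package cache

import (
	"errors"
	"sync"

	"github.com/ceph/go-ceph/rados"
)

var (
	mu sync.Mutex
	// subVolumeGroupCreated caches, per filesystem name, whether the
	// subvolumegroup is known to exist.
	subVolumeGroupCreated = map[string]bool{}
)

// resetCacheOnNotFound drops the cached flag when subvolume creation
// reports NotFound, so the next RPC recreates the subvolumegroup.
func resetCacheOnNotFound(fsName string, err error) error {
	if errors.Is(err, rados.ErrNotFound) {
		mu.Lock()
		delete(subVolumeGroupCreated, fsName)
		mu.Unlock()
	}
	return err
}
```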
Ref: https://github.com/rook/rook/issues/10623
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
In a cluster we can have multiple filesystems; for that we need a map
of subvolumegroups to check whether the subvolumegroup is created for
a given filesystem or not.
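The assumed shape of the cache entry, a per-filesystem map instead of
a single boolean:
```
package cache

// clusterAdditionalInfo is a sketch of the per-cluster cache entry;
// the field records, per CephFS filesystem name, whether the
// subvolumegroup has already been created.
type clusterAdditionalInfo struct {
	subVolumeGroupCreated map[string]bool
}
```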
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If the Ceph cluster is of an older version and does not support
metadata operations, return success instead of failing the request
when a metadata operation is not supported.
fixes: #3347
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
We are getting the `is unused for linter "staticcheck"` (nolintlint)
error message due to a wrong comment format. The directives now
follow the `//directive // comment` format.
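For example (the suppressed lint and the explanation text are
illustrative):
```
package lintdemo

// Deprecated: use a newer helper instead.
func deprecatedFunction() int { return 0 }

// Wrong: nolintlint reports the directive as unused because the
// explanation is not separated from the directive by ` // `:
//nolint:staticcheck this usage is intentional

// Right, in the `//directive // comment` format:
//nolint:staticcheck // this usage is intentional.
var legacyValue = deprecatedFunction()
```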
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
`--setmetadata` is false by default, honoring it
will keep the metadata disabled by default
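A sketch of the wiring; only the flag name and its default come from
this change:
```
package main

import (
	"flag"
	"fmt"
)

func main() {
	// --setmetadata defaults to false, so metadata stays disabled
	// unless the admin explicitly opts in.
	setMetadata := flag.Bool("setmetadata", false,
		"set metadata on the subvolume")
	flag.Parse()

	if *setMetadata {
		fmt.Println("would set PV/PVC/namespace metadata on the subvolume")
	}
}
```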
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
`--setmetadata` is false by default, honoring it
will keep the metadata disabled by default
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Make sure to set metadata when the subvolume snapshot exists, i.e. if the
provisioner pod is restarted while createSnapShot is in progress, say it
created the subvolume snapshot but didn't yet set the metadata.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Set snapshot-name/snapshot-namespace/snapshotcontent-name details
on subvolume snapshots as metadata on create.
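The details stored as snapshot metadata, keyed by the parameters the
external-snapshotter passes when --extra-create-metadata is enabled
(a sketch; the helper name is illustrative):
```
package snapmeta

// buildSnapshotMetadata picks the snapshot details to store on the
// subvolume snapshot.
func buildSnapshotMetadata(params map[string]string) map[string]string {
	keys := []string{
		"csi.storage.k8s.io/volumesnapshot/name",
		"csi.storage.k8s.io/volumesnapshot/namespace",
		"csi.storage.k8s.io/volumesnapshotcontent/name",
	}
	md := make(map[string]string, len(keys))
	for _, k := range keys {
		if v, ok := params[k]; ok {
			md[k] = v
		}
	}
	return md
}
```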
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This change helps read the cluster name from the cmdline args; the
provisioner will set the same on the subvolume.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Make sure to set metadata when the subvolume exists, i.e. if the provisioner pod
is restarted while createVolume is in progress, say it created the subvolume
but didn't yet set the metadata.
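The idempotent path, sketched with assumed helper names:
```
package provision

import "context"

// Assumed stand-ins for the real existence check and metadata call.
func subvolumeExists(ctx context.Context, volID string) bool { return true }
func setAllMetadata(ctx context.Context, volID string, md map[string]string) error {
	return nil
}

// ensureMetadata (re)applies the metadata even when the subvolume
// already exists from an interrupted CreateVolume, before returning
// success.
func ensureMetadata(ctx context.Context, volID string, md map[string]string) error {
	if subvolumeExists(ctx, volID) {
		return setAllMetadata(ctx, volID, md)
	}
	return nil
}
```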
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This helps monitoring solutions without access to the Kubernetes
cluster to display the details of the PV/PVC/Namespace in their
dashboards.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Recently the k8s.io/mount-utils package added more runtime detection.
When creating a new Mounter, the detection is run every time. This is
unfortunate, as it logs a message like the following:
```
mount_linux.go:283] Detected umount with safe 'not mounted' behavior
```
This message might be useful, so it is probably good to keep it.
In Ceph-CSI there are various locations where Mounter instances are
created. Moving that to the DefaultNodeServer type reduces it to a
single place. Some utility functions need to accept the additional
parameter too, so that has been modified as well.
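A sketch of the single place where the Mounter is now created; the
real struct carries more fields:
```
package csicommon

import (
	mount "k8s.io/mount-utils"
)

// DefaultNodeServer holds the one Mounter instance shared by the
// node-service operations.
type DefaultNodeServer struct {
	Mounter mount.Interface
}

// NewDefaultNodeServer creates the Mounter exactly once, so the
// runtime detection in k8s.io/mount-utils (and its log message)
// happens here instead of at every call site.
func NewDefaultNodeServer() *DefaultNodeServer {
	return &DefaultNodeServer{
		Mounter: mount.New(""),
	}
}
```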
See-also: kubernetes/kubernetes#109676
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Looks like the CephFS snapshot size is buggy and it is getting
removed in CephFS. We cannot get the size of the snapshot during the
CreateVolume call, so we cannot do any size check at CreateVolume to
verify whether the restore size is smaller or not.
As we are removing this check it also fixes #3147, but as we don't
have any validation at the CSI level for smaller restores, we need to
depend on the kubernetes external-provisioner for it.
fixes: #3147
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Due to the bug in the df stat we need to round off the subvolume size
to align with 4MiB.
Note: the minimum supported size in cephcsi is 1MiB, so we don't need
to take care of KiB.
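The rounding itself, as a sketch (the helper name may differ from
cephcsi's):
```
package util

// roundOffToFourMiB rounds sizeBytes up to the nearest multiple of
// 4MiB, e.g. a 5MiB request becomes 8MiB.
func roundOffToFourMiB(sizeBytes int64) int64 {
	const fourMiB = int64(4 * 1024 * 1024)
	blocks := (sizeBytes + fourMiB - 1) / fourMiB // round up
	return blocks * fourMiB
}
```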
fixes: #3240
More details at https://github.com/ceph/ceph/pull/46905
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
While creating subvolumes, the CephFS driver set the mode to `777`
and passed it along to the go-ceph APIs, which caused the subvolume
permissions to be 777. However, if we create a subvolume directly in
the Ceph cluster, the default permission bits are set, which is 755
for the subvolume. This commit tries to stick to the default
behaviour even while creating the subvolume.
This also means that we can work with fsGroupPolicy set to `File` in
the CSIDriver object, which is also addressed in this commit.
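Sketched with go-ceph's SubVolumeOptions (other option fields
omitted):
```
package volume

import (
	"github.com/ceph/go-ceph/cephfs/admin"
)

// newSubVolumeOptions leaves Mode unset so Ceph applies its default
// (0755), matching subvolumes created directly on the cluster.
func newSubVolumeOptions(sizeBytes int64) *admin.SubVolumeOptions {
	return &admin.SubVolumeOptions{
		Size: admin.ByteCount(sizeBytes),
		// Mode is intentionally no longer set to 0777.
	}
}
```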
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
go-ceph provides a new GetFailure() method to retrieve detailed
errors when cloning fails. This is now included in the `cephFSCloneState`
struct, which was a simple string before.
While modifying the `cephFSCloneState` struct, the constants have been
removed, as go-ceph provides them as well.
Fixes: #3140
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit implements most of the
docs/design/proposals/cephfs-snapshot-shallow-ro-vol.md design document;
specifically (de-)provisioning of snapshot-backed volumes, mounting such
volumes as well as mounting pre-provisioned snapshot-backed volumes.
Signed-off-by: Robert Vasek <robert.vasek@cern.ch>
In case of a pre-provisioned volume the clusterID is not set in the
volume context; as the clusterID is missing we cannot extract the
NetNamespaceFilePath from the configuration file. For static volumes
and dynamically provisioned volumes the clusterID is set.
Note: this is a special case to support mounting a PV without the
clusterID parameter.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As the same host directory is not shared between the cephfs and the
rbd plugin pods, we need to keep the netNamespaceFilePath separately
for both cephfs and rbd. The CephFS plugin will use this path to
execute `mount -t` commands.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As the netNamespaceFilePath can be separate for cephfs and rbd, add
the netNamespaceFilePath for RBD. This will help us keep the RBD- and
CephFS-specific options separate.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Add support to run `rbd map` and `mount -t` commands with nsenter.
The complete design of pod/multus networking is described at
https://github.com/rook/rook/blob/master/design/ceph/multus-network.md#csi-pods
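A sketch of how the commands can be wrapped; the namespace file path
comes from the cephcsi configuration:
```
package netns

import "os/exec"

// nsenterCmd builds a command that executes inside the pod's network
// namespace identified by netNamespaceFilePath.
func nsenterCmd(netNamespaceFilePath string, args ...string) *exec.Cmd {
	nsenterArgs := append([]string{"--net=" + netNamespaceFilePath}, args...)
	return exec.Command("nsenter", nsenterArgs...)
}

// Usage (illustrative arguments):
//   nsenterCmd(path, "rbd", "map", "pool/image")
//   nsenterCmd(path, "mount", "-t", "ceph", "mon:/dir", "/target")
```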
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The omap is stored with the requested snapshot name, not with the
subvolume snapshot name. This fix uses the correct snapshot request
name to clean up the omap once the subvolume snapshot is deleted.
fixes: #2974
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Remove the Kubernetes CSI-prefixed parameters from the volumeContext,
as we don't want to store them in the PV VolumeAttributes.
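A sketch of the filtering; the `csi.storage.k8s.io/` prefix is the
one the Kubernetes sidecars use for the injected parameters:
```
package volcontext

import "strings"

// stripCSIPrefixedParameters returns volumeContext without the
// sidecar-injected parameters, so they do not end up in the PV
// VolumeAttributes.
func stripCSIPrefixedParameters(volumeContext map[string]string) map[string]string {
	out := make(map[string]string, len(volumeContext))
	for k, v := range volumeContext {
		if strings.HasPrefix(k, "csi.storage.k8s.io/") {
			continue
		}
		out[k] = v
	}
	return out
}
```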
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As per the CSI standard the size is an optional parameter. As we
already allow cloning to a bigger size, we need to block cloning to a
smaller size, as it has side effects like data corruption.
Note: even though this check is present in the kubernetes sidecar, as
CSI is CO-independent we add the check here as well.
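The check, sketched as a standalone helper:
```
package clone

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// validateCloneSize rejects a clone/restore to a smaller size; a
// requestedBytes of 0 means no size was requested, which is allowed
// since the size is optional in CSI.
func validateCloneSize(requestedBytes, parentBytes int64) error {
	if requestedBytes != 0 && requestedBytes < parentBytes {
		return status.Error(codes.InvalidArgument,
			"cannot clone or restore to a size smaller than the source")
	}
	return nil
}
```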
fixes: #2718
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Mounts managed by ceph-fuse may get corrupted by e.g. the ceph-fuse process
exiting abruptly, or its parent container being terminated, taking down its
child processes with it.
This commit adds checks to the NodeStageVolume and NodePublishVolume
procedures to detect whether a mountpoint in staging_target_path
and/or target_path is corrupted, and a remount is performed if
corruption is detected.
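The core of the detection, sketched with k8s.io/mount-utils (the
helper name is illustrative):
```
package nodeserver

import (
	mount "k8s.io/mount-utils"
)

// needsRemount reports whether path must be (re)mounted: either
// nothing is mounted there, or the previous ceph-fuse mount is
// corrupted, e.g. "transport endpoint is not connected".
func needsRemount(mounter mount.Interface, path string) (bool, error) {
	notMnt, err := mounter.IsLikelyNotMountPoint(path)
	if err != nil {
		if mount.IsCorruptedMnt(err) {
			// Unmount the corrupted mountpoint and mount afresh.
			if uerr := mounter.Unmount(path); uerr != nil {
				return false, uerr
			}
			return true, nil
		}
		return false, err
	}
	return notMnt, nil
}
```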
Signed-off-by: Robert Vasek <robert.vasek@cern.ch>
Refactored a couple of helper functions for easier reuse.
* Code for building store.VolumeOptions is factored out into a separate function.
* Changed the args of getCredentialsForVolume() and NodeServer.mount() so that
instead of passing in the whole csi.NodeStageVolumeRequest, only the necessary
properties are passed explicitly. This is to allow these functions to be
called outside of NodeStageVolume(), where a NodeStageVolumeRequest is not
available.
Signed-off-by: Robert Vasek <robert.vasek@cern.ch>
This commit refactors the cephfs core functions with interfaces. This
helps in better code structuring and in writing the unit test cases.
update #852
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
At present we are node staging with world-wide permissions, which is
not correct. We should allow the CO to take care of it and make the
decision. This commit also removes `fuseMountOptions` and
`KernelMountOptions` as they are no longer needed.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The omap is stored with the requested snapshot name, not with the
subvolume snapshot name. This fix uses the correct snapshot request
name to clean up the omap once the subvolume snapshot is deleted.
fixes: #2832
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Currently, all the methods make a network call to fetch details from
the Ceph cluster, and it is difficult to write test cases for these
functions. If we move to interfaces, we can make use of mocks to
write unit tests for the caller functions.
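The shape of the refactor, with illustrative method names and a
hand-written fake as a test double:
```
package core

import "context"

// SubVolumeClient gathers the network-bound operations behind an
// interface so callers can be tested without a cluster.
type SubVolumeClient interface {
	GetVolumeRootPathCeph(ctx context.Context) (string, error)
	CreateVolume(ctx context.Context) error
}

// fakeSubVolumeClient satisfies the interface without contacting a
// Ceph cluster.
type fakeSubVolumeClient struct{ rootPath string }

func (f *fakeSubVolumeClient) GetVolumeRootPathCeph(ctx context.Context) (string, error) {
	return f.rootPath, nil
}

func (f *fakeSubVolumeClient) CreateVolume(ctx context.Context) error {
	return nil
}
```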
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
During CreateVolume from a snapshot/volume, it is difficult to
identify whether the clone failed and a new clone was created. In
case of a clone failure, log the error message for better debugging.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Currently, as a workaround, we are calling resize volume on the
cloned/restored volumes to adjust their size.
With this fix, we call resize volume only if there is a size mismatch
between the request and the volume from which the new volume needs to
be created.
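The conditional, sketched with an assumed resize helper:
```
package clone

import "context"

// Assumed stand-in for the real resize call.
func resizeVolume(ctx context.Context, volID string, sizeBytes int64) error {
	return nil
}

// maybeResize resizes only when the requested size differs from the
// parent's, replacing the earlier unconditional resize workaround.
func maybeResize(ctx context.Context, volID string, requestedBytes, parentBytes int64) error {
	if requestedBytes != parentBytes {
		return resizeVolume(ctx, volID, requestedBytes)
	}
	return nil
}
```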
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The SINGLE_NODE_WRITER capability ambiguity has been fixed in CSI
spec v1.5, which allows the SP drivers to declare more granular WRITE
capabilities. These are not really new capabilities, but rather
capabilities introduced to get the desired functionality from the CO
side based on the capabilities the SP driver supports for various CSI
operations.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The ceph changes are done on both the server and the client side;
this change alone is not enough to remove setting the size of cloned
volumes.
This caused regressions like #2719, #2720, #2721, #2722.
This reverts commit 3565a342d5.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>