We need to unset the metadata on the clone
and restored PVC if the parent PVC was created
while setmetadata was set to true, and it was
set to false when the restore and clone PVCs
were created.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit d12400aa9c)
If a PV is reattached to a new PVC in a different
namespace, we need to update the namespace name
in the rados object.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 07aa9dea5c)
If a PV is reattached to a new PVC in a different
namespace, we need to update the namespace name
in the rbd image metadata.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 019628c8c2)
The error message returned from the gRPC call
should be a gRPC status error, not a plain
Go error. This commit returns a gRPC error
if setAllMetadata fails.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 69eb6e40dc)
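A minimal sketch of wrapping a failed metadata call in a gRPC status error; the setAllMetadata callback here is a hypothetical stand-in for the real helper.
```go
package rbd

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// setMetadataOrGRPCError converts a failure from the (hypothetical)
// setAllMetadata callback into a gRPC status error, so CSI callers get a
// proper gRPC code instead of a plain Go error.
func setMetadataOrGRPCError(ctx context.Context, setAllMetadata func(context.Context) error) error {
	if err := setAllMetadata(ctx); err != nil {
		// codes.Internal marks this as a server-side failure for the caller.
		return status.Error(codes.Internal, err.Error())
	}

	return nil
}
```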
If any operation fails after the volume creation
we clean up the omap objects, but this cleanup is
missing if setAllMetadata fails. This commit adds
the code to clean up the rbd image if the metadata
operation fails.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 01d4a614c3)
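A rough sketch of the cleanup pattern described above, assuming hypothetical create/setMetadata/remove callbacks rather than the real rbd helpers.
```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// createWithCleanup creates an image and then sets its metadata; if the
// metadata step fails, the freshly created image is removed so no stale
// image is left behind. The callbacks are stand-ins for the real helpers.
func createWithCleanup(ctx context.Context, create, setMetadata, remove func(context.Context) error) error {
	if err := create(ctx); err != nil {
		return err
	}

	if err := setMetadata(ctx); err != nil {
		// Best-effort cleanup of the image created above.
		if delErr := remove(ctx); delErr != nil {
			return fmt.Errorf("metadata failed: %w, cleanup failed: %v", err, delErr)
		}

		return err
	}

	return nil
}

func main() {
	err := createWithCleanup(context.Background(),
		func(context.Context) error { return nil },                           // create succeeds
		func(context.Context) error { return errors.New("metadata failed") }, // metadata fails
		func(context.Context) error { return nil },                           // cleanup succeeds
	)
	fmt.Println(err) // metadata failed
}
```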
If mirroring is enabled on the image and it is
primary, consider it for mapping. If mirroring
is enabled but the image is not primary yet,
return an error message until the image is
marked as primary.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 71dbc7dbb4)
The csi-addons server will advertise the replication
capability, and the replication service will run with
the csi-addons server too.
Signed-off-by: Rakshith R <rar@redhat.com>
The current code incorrectly uses a !A && !B condition
to test A: Up and B: status for a remote peer image.
This should be !A || !B, as we require both conditions
to be in the specified state (Up: true, and status Unknown).
This commit corrects that, and further fixes:
- check and return ready only when a remote site is
found in the status output
- check if all peer sites are ready when multiple are
found, and return ready appropriately
Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
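A tiny sketch of the corrected negation, with a simplified siteStatus type standing in for the real mirror peer-site status.
```go
package main

import "fmt"

// siteStatus is a simplified stand-in for a peer-site entry in the mirror
// status output.
type siteStatus struct {
	Up    bool
	State string
}

// remoteNotReady reports that the remote peer site is not yet in the
// required state. Both conditions must hold (Up: true and State unknown),
// so the negated guard needs ||, not &&.
func remoteNotReady(s siteStatus) bool {
	return !s.Up || s.State != "unknown"
}

func main() {
	fmt.Println(remoteNotReady(siteStatus{Up: true, State: "unknown"}))  // false: ready
	fmt.Println(remoteNotReady(siteStatus{Up: false, State: "unknown"})) // true: not ready
}
```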
During ResyncVolume we check if the image
is in an error state, and we resync.
After resync, the image will move to
either the `Error` or the `Resyncing` state.
If the image is in either of these two
states, we return a successful response
with Ready=false so that the consumer can
wait until the volume is ready to use. If
the image is in any other state we return
an error message to indicate that syncing
is not in progress.
The whole resync and image state change
depends on the rbd mirror daemon. If the
mirror daemon is not running, the image
can be in the Resyncing or Unknown state.
Ramen marks the volume replication as
secondary, and once the resync starts, it
deletes the VolumeReplication CR as a
cleanup step.
As we don't have a check for the rbd mirror
daemon, we return a successful resync
response with Ready=false. Due to this false
response, Ramen assumes the resync started
and deletes the VolumeReplication CR, and
because of this the cluster goes into a bad
state that needs manual intervention.
Fixes: #3289
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
IsNotMountPoint() is deprecated and Mounter.IsMountPoint() is
recommended to be used instead.
Reported-by: golangci/staticcheck
Signed-off-by: Niels de Vos <ndevos@redhat.com>
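A minimal sketch of the recommended call from k8s.io/mount-utils; note that IsMountPoint() answers the inverse question of the deprecated IsNotMountPoint() helper, and the path is illustrative.
```go
package main

import (
	"fmt"

	mount "k8s.io/mount-utils"
)

func main() {
	mounter := mount.New("") // default mount binary lookup

	// Preferred over the deprecated package-level IsNotMountPoint().
	isMnt, err := mounter.IsMountPoint("/var/lib/kubelet")
	if err != nil {
		fmt.Println("failed to check mount point:", err)
		return
	}
	fmt.Println("is mount point:", isMnt)
}
```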
Previously, it was a requirement to have the attacher sidecar for CSI
drivers, and it had an implementation of a dummy mode of operation.
However, the skipAttach implementation has been stabilized and the dummy
mode of operation is going to be removed from the external-attacher.
Considering this driver works on VolumeAttachment objects for NBD driver
use cases, we have to implement dummy ControllerPublish and
ControllerUnpublish calls and thus keep supporting our operations even in
the absence of the dummy mode of operation in the sidecar.
This commit makes ControllerPublish and ControllerUnpublish NOOPs for the
RBD driver. The CephFS driver does not require the attacher and has
already been made free of the attachment operations.
Ref# https://github.com/ceph/ceph-csi/pull/3149
Ref# https://github.com/kubernetes-csi/external-attacher/issues/226
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
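A sketch of what the NOOP ControllerPublish/ControllerUnpublish pair can look like with the CSI spec Go bindings; the controllerServer type is a placeholder, not the actual ceph-csi implementation.
```go
package main

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// controllerServer is a placeholder for the RBD driver's controller service.
type controllerServer struct{}

// ControllerPublishVolume is a NOOP: nothing is attached here, an empty
// response just keeps the external-attacher and VolumeAttachment flow happy.
func (cs *controllerServer) ControllerPublishVolume(
	ctx context.Context, req *csi.ControllerPublishVolumeRequest,
) (*csi.ControllerPublishVolumeResponse, error) {
	return &csi.ControllerPublishVolumeResponse{}, nil
}

// ControllerUnpublishVolume is the matching NOOP for detach requests.
func (cs *controllerServer) ControllerUnpublishVolume(
	ctx context.Context, req *csi.ControllerUnpublishVolumeRequest,
) (*csi.ControllerUnpublishVolumeResponse, error) {
	return &csi.ControllerUnpublishVolumeResponse{}, nil
}

func main() {}
```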
After failover of workloads to the secondary
cluster when the primary cluster is down,
the RBD image is not marked healthy, and the VR
resources are not promoted to Primary.
In VolumeReplication, the `CURRENT STATE`
remains Unknown and doesn't change to Primary.
This happens because the primary cluster went down
and we have force-promoted the image on the
secondary cluster, so the image stays in
up+stopping_replay or could be in any other state.
The current assumption was that the image will
always be `up+stopped`. But the image will be in
`up+stopped` only for a planned failover, and it
could be in any other state if it is a forced
failover. For this reason, remove
checkHealthyPrimary from the PromoteVolume RPC call.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Recently the k8s.io/mount-utils package added more runtime detection.
When creating a new Mounter, the detection is run every time. This is
unfortunate, as it logs a message like the following:
```
mount_linux.go:283] Detected umount with safe 'not mounted' behavior
```
This message might be useful, so it is probably good to keep it.
In Ceph-CSI there are various locations where Mounter instances are
created. Moving that to the DefaultNodeServer type reduces it to a
single place. Some utility functions need to accept the additional
parameter too, so that has been modified as well.
See-also: kubernetes/kubernetes#109676
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When the Ceph user is restricted to a specific namespace in the pool, it is
crucial that every interaction with the cluster is done within that namespace.
This wasn't the case in `getCloneDepth()`.
This issue was causing snapshot creation to fail with
> Failed to check and update snapshot content: failed to take snapshot of the
> volume X: "rpc error: code = Internal desc = rbd: ret=-1, Operation not
> permitted"
Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
There are regular reports that identify a non-error as the cause of
failures. The Kubernetes mount-utils package has detection for systemd
based environments, and if systemd is unavailable, the following error
is logged:
Cannot run systemd-run, assuming non-systemd OS
systemd-run output: System has not been booted with systemd as init
system (PID 1). Can't operate.
Failed to create bus connection: Host is down, failed with: exit status 1
Because of the `failed` and `exit status 1` error message, users might
assume that the mounting failed. This does not need to be the case. The
container-images that the Ceph-CSI project provides do not use
systemd, so the error will get logged with each mount attempt.
By using the newer MountSensitiveWithoutSystemd() function from the
mount-utils package where we can, the number of confusing logs gets
reduced.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
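A sketch of the systemd-free mount call from k8s.io/mount-utils; device, target, fstype and options are illustrative.
```go
package main

import (
	"fmt"

	mount "k8s.io/mount-utils"
)

func main() {
	mounter := mount.New("")

	// MountSensitiveWithoutSystemd skips the systemd-run wrapper, so the
	// confusing "Cannot run systemd-run ... exit status 1" message is not
	// logged in containers that do not run systemd.
	err := mounter.MountSensitiveWithoutSystemd(
		"/dev/rbd0",          // source (illustrative)
		"/mnt/volume",        // target (illustrative)
		"ext4",               // fstype
		[]string{"defaults"}, // options
		nil,                  // sensitiveOptions, e.g. credentials
	)
	if err != nil {
		fmt.Println("mount failed:", err)
	}
}
```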
This commit removes the clone in case
unsetAllMetadata, copyEncryptionConfig or
expand fails for createVolumeFromSnapshot
and CreateSnapshot.
It also removes the clone in case of
any failure in createCloneFromImage.
issue: #3103
Signed-off-by: Yati Padia <ypadia@redhat.com>
As we added support to set the metadata on the rbd images created for
the PVC and volume snapshot, by default metadata is set on all the images.
As we have seen, we hit issue #2327 a lot of times with this and
start to leave a lot of stale images. Currently, we rely on
`--extra-create-metadata=true` to decide whether to set the metadata;
we cannot set this option to false to disable setting metadata because we
use it for encryption too.
This change provides an option to disable setting the image
metadata when starting cephcsi.
Fixes: #3009
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
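A hedged sketch of how such a startup toggle could be wired up with the standard flag package; the flag name and default are assumptions for illustration, not the exact cephcsi option.
```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Hypothetical toggle: when false, the driver skips setting PV/PVC and
	// snapshot metadata on the rbd images it creates.
	setMetadata := flag.Bool("setmetadata", true,
		"set PV/PVC/snapshot metadata on RBD images")
	flag.Parse()

	if *setMetadata {
		fmt.Println("image metadata will be set on created images")
	} else {
		fmt.Println("setting image metadata is disabled")
	}
}
```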
Removed the code in checkHealthyPrimary which
makes the ceph call; the result is passed as input now.
Added a unit test for the checkHealthyPrimary function.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
We need to check that the image is in the up+stopped
state, not in just one of those states; for that we
need to use an OR check, not an AND check.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
When the image is force-promoted to primary on the
cluster, the remote image might not be in the
replaying state due to the split-brain state. This
PR reverts commit c3c87f2ef3, which we added
to check the remote image status.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Kubernetes 1.24 and newer use a different path for staging the volume.
That means the CSI-driver is requested to mount the volume at a different
location, compared to previous versions of Kubernetes. CSI-drivers
implementing the volumeHealer must receive the correct path, otherwise
after a nodeplugin restart the NBD mounts will bail out when attempting
the NodeStageVolume() call and return an error.
See-also: kubernetes/kubernetes#107065
Fixes: #3176
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
During failover we demote the volume on the primary.
As the image is not yet promoted on the remote cluster,
there are spurious split-brain errors reported by RBD,
and the Cephcsi resync will attempt to resync from the
"known" secondary, which will cause data loss.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
RBD supports creating rbd images with
object size, stripe unit and stripe count
to support striping. This PR adds support
for the same.
More details about striping:
https://docs.ceph.com/en/quincy/man/8/rbd/#striping
Fixes: #3124
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
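A sketch of creating a striped image with go-ceph; the pool name, image name and striping parameters are illustrative.
```go
package main

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
	librbd "github.com/ceph/go-ceph/rbd"
)

func main() {
	conn, err := rados.NewConn()
	if err != nil {
		panic(err)
	}
	if err = conn.ReadDefaultConfigFile(); err != nil {
		panic(err)
	}
	if err = conn.Connect(); err != nil {
		panic(err)
	}
	defer conn.Shutdown()

	ioctx, err := conn.OpenIOContext("replicapool") // illustrative pool
	if err != nil {
		panic(err)
	}
	defer ioctx.Destroy()

	options := librbd.NewRbdImageOptions()
	defer options.Destroy()

	// Object size 4 MiB (order 22), striped in 1 MiB units across 4 objects.
	_ = options.SetUint64(librbd.RbdImageOptionOrder, 22)
	_ = options.SetUint64(librbd.RbdImageOptionStripeUnit, 1<<20)
	_ = options.SetUint64(librbd.RbdImageOptionStripeCount, 4)

	if err = librbd.CreateImage(ioctx, "striped-image", 1<<30, options); err != nil {
		panic(err)
	}
	fmt.Println("created striped-image with custom striping")
}
```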
This change helps read the cluster name from the cmdline args,
the provisioner will set the same on the RBD images.
Fixes: #2973
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Continue running the rbd driver when the /sys/bus/rbd/supported_features
file is missing; do not bail out.
Fixes: #2678
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
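A small sketch of tolerating the missing sysfs file instead of bailing out; only the path comes from the commit above.
```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// If the file is absent (older kernels), continue without krbd feature
	// detection instead of treating it as a fatal error.
	data, err := os.ReadFile("/sys/bus/rbd/supported_features")
	switch {
	case os.IsNotExist(err):
		fmt.Println("supported_features not present, continuing without it")
	case err != nil:
		fmt.Println("failed to read supported_features:", err)
	default:
		fmt.Println("krbd supported_features:", strings.TrimSpace(string(data)))
	}
}
```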
krbdFeatures is set to zero when the kernel version is < 3.8, i.e. in the
case where /sys/bus/rbd/supported_features is absent and we are unable to
prepare the krbd attributes based on the kernel version.
When krbdFeatures is set to zero, fall back to NBD only when autofallback
is turned ON.
Fixes: #2678
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Upstream, /sys/bus/rbd/supported_features is part of Linux kernel v4.11.0.
Prepare the attributes and use them in case
/sys/bus/rbd/supported_features is missing.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Move k8s.GetVolumeMetadata() out of setVolumeMetadata() and rename it to
setAllMetadata() so that the same can be used for setting volume and
snapshot metadata.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
For the default mounter the mounter option
will not be set in the storageclass, and as it
is not available in the storageclass it will not
be set in the volume context either. Because of
this the mapOptions are getting discarded. If the
mounter is not set, assume it's an rbd mounter.
Note: if the mounter is not set in the storageclass
we could set it in the volume context explicitly.
However, doing this check in the node server supports
existing (backward) volumes, and as the check is
minimal we are not altering the volume context.
fixes: #3076
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
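A minimal sketch of the fallback described above; the volume-context key and default mounter name are assumptions for illustration.
```go
package main

import "fmt"

const defaultMounter = "rbd" // assume krbd is the default mounter

// getMounter falls back to the default mounter when the storageclass (and
// therefore the volume context) did not set one, so mapOptions are honoured.
func getMounter(volumeContext map[string]string) string {
	if m := volumeContext["mounter"]; m != "" {
		return m
	}

	return defaultMounter
}

func main() {
	fmt.Println(getMounter(map[string]string{}))                     // rbd
	fmt.Println(getMounter(map[string]string{"mounter": "rbd-nbd"})) // rbd-nbd
}
```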
This commit makes modifications to allow pvc-pvc clone
with different storageclasses having different encryption
configs.
This commit also modifies `copyEncryptionConfig()` to
include an `isEncrypted()` check within the function.
Signed-off-by: Rakshith R <rar@redhat.com>
As the netNamespaceFilePath can be separate for
cephfs and rbd, add the netNamespaceFilePath
option for RBD. This will help us keep RBD- and
CephFS-specific options separate.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
To consider the image healthy during the Promote
operation, currently we are checking only the image
state on the primary site. If the network is flaky
or the remote site is down, the image health is
not as expected. To make sure the image is healthy
across the clusters, check the state on both the
local and the remote clusters.
Some details:
https://bugzilla.redhat.com/show_bug.cgi?id=2014495
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Calling setRbdNbdToolFeatures inside an init
function means it gets called in main.go for both
the cephfs and rbd drivers. Instead of calling it
in the init function, call it in rbd driver.go as
this is specific to rbd.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Set snapshot-name/snapshot-namespace/snapshotcontent-name details
on RBD backend snapshot image as metadata on snapshot
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Make sure to set metadata when the image exists, i.e. if the provisioner
pod is restarted while createVolume is in progress, say it created the
image but didn't yet set the metadata.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
For example, if a PVC was deleted with `persistentVolumeReclaimPolicy` set
to `Retain` on the PV, and the PV is reattached to a new PVC, we make sure
to update the PV/PVC image metadata on a PV reattach.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This helps Monitoring solutions without access to Kubernetes clusters to
display the details of the PV/PVC/NameSpace in their dashboard.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Define and use PV and PVC metadata keys used by external provisioner.
The CSI external-provisioner (v1.6.0+) introduces the
--extra-create-metadata flag, which automatically sets map<string, string>
parameters in the CSI CreateVolumeRequest.
Add utility functions to set/get PV/PVC/PVCNamespace metadata on the image.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
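A short sketch of reading those keys from the CreateVolumeRequest parameters; the key names follow the external-provisioner's --extra-create-metadata convention, and the helper itself is illustrative.
```go
package main

import "fmt"

// Parameter keys added by the external-provisioner when
// --extra-create-metadata is enabled.
const (
	pvcNameKey      = "csi.storage.k8s.io/pvc/name"
	pvcNamespaceKey = "csi.storage.k8s.io/pvc/namespace"
	pvNameKey       = "csi.storage.k8s.io/pv/name"
)

// metadataFromParams picks the PV/PVC details out of the CreateVolumeRequest
// parameters so they can be stored as image metadata.
func metadataFromParams(parameters map[string]string) map[string]string {
	md := map[string]string{}
	for _, k := range []string{pvcNameKey, pvcNamespaceKey, pvNameKey} {
		if v, ok := parameters[k]; ok {
			md[k] = v
		}
	}

	return md
}

func main() {
	fmt.Println(metadataFromParams(map[string]string{
		pvcNameKey:      "my-pvc",
		pvcNamespaceKey: "default",
		pvNameKey:       "pvc-1234",
	}))
}
```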
Add support to run rbd map and mount -t
commands with nsenter.
The complete design of the pod/multus network
is added here:
https://github.com/rook/rook/blob/master/design/ceph/multus-network.md#csi-pods
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
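A rough sketch of running rbd map inside a network namespace with nsenter; the netns path and image name are illustrative, not the exact cephcsi invocation.
```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	netNamespaceFilePath := "/var/run/netns/csi" // illustrative path

	// Run "rbd map" inside the network namespace referenced by the file, so
	// the mapping uses the pod/multus network instead of the host network.
	cmd := exec.Command(
		"nsenter", "--net="+netNamespaceFilePath, "--",
		"rbd", "map", "replicapool/myimage",
	)

	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("rbd map failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("mapped device: %s", out)
}
```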
Currently we only check if the rbd-nbd tool supports the cookie feature.
This change will also guard the cookie addition based on the kernel version.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>