Commit Graph

963 Commits

Author SHA1 Message Date
Madhu Rajanna
27d0d3c0c5 cephfs: skip expand for BackingSnapshot volume
We should not call ExpandVolume for a BackingSnapshot
subvolume, as there won't be any real subvolume created
for it, and even if we call it, ExpandVolume will fail
because no real subvolume exists.

This commit fixes the issue by adjusting the `if` check
to ensure that ExpandVolume is only called when the
VolumeRequest is to create from a snapshot or a volume
and BackingSnapshot is not true.

sample code here https://go.dev/play/p/PI2tNii5tTg
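
A minimal sketch of the adjusted check; the types and field names below are illustrative stand-ins, not the actual ceph-csi identifiers.
```
type volumeOptions struct {
	BackingSnapshot bool
}

type createRequest struct {
	FromSnapshot bool
	FromVolume   bool
}

// needsExpand mirrors the adjusted `if` check: expand only when the volume
// is created from a snapshot or another volume AND a real subvolume backs it.
func needsExpand(req createRequest, vol volumeOptions) bool {
	return (req.FromSnapshot || req.FromVolume) && !vol.BackingSnapshot
}
```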

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit f7796081d3)
2022-12-21 14:15:17 +00:00
Madhu Rajanna
90d76e9c8a rbd: unset metadata if setmetadata is false
We need to unset the metadata on the clone
and restored PVC if the parent PVC was created
while setmetadata was set to true, but it was
set to false when the restored and cloned PVC
were created.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit d12400aa9c)
2022-11-28 17:13:10 +00:00
Rakshith R
18f0fc1f4a rbd: ignore stdErr for ceph osd blocklist when there is no error
The `ceph osd blocklist range add/rm <ip>` cmd incorrectly
outputs "blocklisting cidr:10.1.114.75:0/32 until 202..."
messages to stdErr. This commit ignores stdErr when err
is nil.
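
A rough sketch of the idea, assuming a hypothetical execCommand helper that returns stdout, stderr and an error:
```
import "fmt"

func addRangeToBlocklist(cidr string) error {
	stdout, stderr, err := execCommand("ceph", "osd", "blocklist", "range", "add", cidr)
	if err != nil {
		return fmt.Errorf("blocklisting %q failed: %w (stderr: %s)", cidr, err, stderr)
	}
	// err == nil: ignore stderr entirely; the cluster writes informational
	// "blocklisting cidr:... until ..." text there even on success.
	_ = stdout

	return nil
}
```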

Signed-off-by: Rakshith R <rar@redhat.com>
(cherry picked from commit eb21d75ef7)
2022-11-14 08:52:40 +00:00
Madhu Rajanna
b33d9ee583 rbd: update namespace name in rados object
If a PV is reattached to a new PVC in a different
namespace we need to update the namespace name
in the rados object.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 07aa9dea5c)
2022-10-28 19:49:30 +00:00
Madhu Rajanna
787d54fa6a rbd: update namespace name in metadata
If a PV is reattached to a new PVC in a different
namespace we need to update the namespace name
in the rbd image metadata.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 019628c8c2)
2022-10-28 19:49:30 +00:00
Madhu Rajanna
d39b61334a cephfs: delete subvolume if SetAllMetadata fails
To avoid subvolume leaks, delete the subvolume
if the SetAllMetadata operation fails.
If any operation fails after creating the subvolume
we remove the omap; as the omap gets
removed, we also need to remove the subvolume to
avoid stale resources.
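
A rough sketch of the cleanup-on-failure pattern, with an illustrative subvolumeClient interface standing in for the real ceph-csi types:
```
import (
	"context"
	"fmt"
)

type subvolumeClient interface {
	CreateVolume(ctx context.Context) error
	SetAllMetadata(metadata map[string]string) error
	DeleteVolume(ctx context.Context) error
}

func createWithMetadata(ctx context.Context, vol subvolumeClient, md map[string]string) error {
	if err := vol.CreateVolume(ctx); err != nil {
		return err
	}
	if err := vol.SetAllMetadata(md); err != nil {
		// The omap is cleaned up on failure, so the subvolume has to go too,
		// otherwise it would be left behind as a stale, unreferenced resource.
		if delErr := vol.DeleteVolume(ctx); delErr != nil {
			return fmt.Errorf("SetAllMetadata failed (%v), cleanup failed: %w", err, delErr)
		}
		return err
	}

	return nil
}
```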

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 302fead713)
2022-10-19 07:14:43 +00:00
Madhu Rajanna
a3a0730900 rbd: return GRPC error message
The errors returned from GRPC calls
should be GRPC errors only, not
plain Go errors. This commit
returns a GRPC error if setAllMetadata
fails.
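
A minimal sketch using google.golang.org/grpc/status; the wrapper function is illustrative.
```
import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func setMetadataOrGRPCError(setAllMetadata func() error) error {
	if err := setAllMetadata(); err != nil {
		// Wrap the failure in a gRPC status error with a code instead of
		// returning the raw Go error to the CSI sidecar.
		return status.Errorf(codes.Internal, "failed to set metadata: %v", err)
	}

	return nil
}
```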

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 69eb6e40dc)
2022-10-18 10:21:06 +00:00
Madhu Rajanna
42bed5a346 rbd: delete volume if setallmetadata fails
If any operation fails after volume creation
we clean up the omap objects, but this was missing
when setAllMetadata fails. This commit adds code
to clean up the rbd image if the metadata operation fails.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 01d4a614c3)
2022-10-18 10:21:06 +00:00
Madhu Rajanna
454bcc466a cephfs: use errors.As instead of errors.Is
As we need to compare the error type instead
of the error value, we need to use errors.As
to check whether the API is implemented or not.

fixes: #3347
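
A generic illustration of the difference; the error type below is a stand-in, not the actual go-ceph/ceph-csi type.
```
import "errors"

// notImplementedError is a stand-in for a typed "API not implemented" error.
type notImplementedError struct{ op string }

func (e notImplementedError) Error() string { return e.op + " is not implemented" }

func isNotImplemented(err error) bool {
	var ni notImplementedError
	// errors.As matches by *type* anywhere in the wrapped chain;
	// errors.Is would only match a specific sentinel *value*, which is
	// not what a typed error from the cluster gives us.
	return errors.As(err, &ni)
}
```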

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit b40e8894f8)
2022-10-17 11:41:18 +00:00
Rakshith R
acfe22efed rbd: use blocklist range cmd, fallback if it fails
This commit adds the blocklist range cmd feature,
falling back to the old one-IP-at-a-time blocklist
if the cmd is invalid (not available).
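
A hedged sketch of the fallback, assuming a hypothetical execCommand helper and that an unsupported subcommand can be recognised from stderr:
```
import "strings"

func blocklistAddress(addr string) error {
	_, stderr, err := execCommand("ceph", "osd", "blocklist", "range", "add", addr)
	if err == nil {
		return nil
	}
	if strings.Contains(stderr, "invalid command") {
		// Older clusters don't know the "range" subcommand; fall back to
		// blocklisting one address at a time.
		_, _, err = execCommand("ceph", "osd", "blocklist", "add", addr)
	}

	return err
}
```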

Signed-off-by: Rakshith R <rar@redhat.com>
(cherry picked from commit a57859dfa4)
2022-09-13 13:55:17 +00:00
Prashanth Dintyala
e5e949d94a rbd: create token and use it for vault SA every time possible
use TokenRequest API by default for vault SA even with K8s versions < 1.24
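
A hedged sketch of requesting a ServiceAccount token through the TokenRequest API with client-go; the function name and expiration value are illustrative.
```
import (
	"context"

	authv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func requestSAToken(ctx context.Context, c kubernetes.Interface, ns, sa string) (string, error) {
	expiry := int64(3600) // 1 hour, illustrative
	tr, err := c.CoreV1().ServiceAccounts(ns).CreateToken(ctx, sa, &authv1.TokenRequest{
		Spec: authv1.TokenRequestSpec{ExpirationSeconds: &expiry},
	}, metav1.CreateOptions{})
	if err != nil {
		return "", err
	}

	return tr.Status.Token, nil
}
```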

Signed-off-by: Prashanth Dintyala <vdintyala@nvidia.com>
(cherry picked from commit 2a6487cbf5)
2022-09-09 16:07:31 +00:00
Madhu Rajanna
4face6a7b3 cephfs: retry subvolumegroup creation
In case the subvolumegroup is deleted
and recreated, we previously needed to restart
the cephcsi provisioner pod to clear the cache
that cephcsi maintains. With this PR,
if cephcsi sees a NotFound error during
subvolume creation it will reset the cache
for that filesystem so that in the next RPC
call cephcsi will try to create the
subvolumegroup again.

Ref: https://github.com/rook/rook/issues/10623
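
A simplified sketch of the cache reset; the variable and function names are illustrative, not the actual ceph-csi code.
```
import "sync"

var (
	groupCreated   = map[string]bool{} // keyed by filesystem name
	groupCreatedMu sync.Mutex
)

func resetGroupCache(fsName string) {
	groupCreatedMu.Lock()
	defer groupCreatedMu.Unlock()
	delete(groupCreated, fsName)
}

// createSubVolume resets the cache when the group has vanished, so the next
// RPC recreates the subvolumegroup before retrying.
func createSubVolume(fsName string, doCreate func() error, isNotFound func(error) bool) error {
	err := doCreate()
	if err != nil && isNotFound(err) {
		resetGroupCache(fsName)
	}

	return err
}
```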

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 76064d8e34)
2022-09-08 11:37:05 +00:00
Madhu Rajanna
e08143a88b cephfs: fix subvolumegroup creation for multiple fs
In a cluster we can have multiple filesystems;
for that we need to have a map of
subvolumegroups to check whether the group for
a filesystem has been created or not.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit e56621cd66)
2022-09-08 11:37:05 +00:00
Madhu Rajanna
04879bbb33 rbd: map only primary image
If the image has mirroring enabled
and is primary, consider it for mapping;
if the image has mirroring enabled but
is not primary yet, return an error
until the image is marked as primary.
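
A small sketch of the gate; the booleans are assumed to come from the image's mirror status.
```
import "errors"

func allowMapping(mirroringEnabled, isPrimary bool) error {
	if !mirroringEnabled {
		return nil // non-mirrored images are mapped as usual
	}
	if !isPrimary {
		// Still being promoted; refuse until the image is marked primary.
		return errors.New("image has mirroring enabled but is not primary yet")
	}

	return nil
}
```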

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 71dbc7dbb4)
2022-09-06 14:05:08 +00:00
Madhu Rajanna
3092b46774 cephfs: return success if metadata operation not supported
If the ceph cluster is of an older version and does not
support the metadata operation, return success
instead of failing the request when the metadata
operation is not supported.

fixes #3347
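
A minimal sketch of the behaviour, assuming a helper that classifies the "not supported" error:
```
func setMetadataBestEffort(set func() error, isUnsupported func(error) bool) error {
	err := set()
	if err != nil && isUnsupported(err) {
		// Older clusters have no subvolume metadata API; don't fail the
		// whole request because of that.
		return nil
	}

	return err
}
```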

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 038462ff43)
2022-08-30 03:43:33 +00:00
Rakshith R
3a4c7c9d79 rbd: modify stripSecret mechanism in logGRPC()
This commit updates the csi-addons spec version
and modifies logging to strip replication
request secrets using csi.StripSecret first, then
with replication.protosanitizer if the former
fails. This is done to make sure
we strip both the csi and replication formats of secrets.

Signed-off-by: Rakshith R <rar@redhat.com>
(cherry picked from commit 40134772a7)
2022-08-29 14:55:24 +00:00
Rakshith R
2f393f24b7 rbd: improve kmip verifyResponse() error message
This commit uses %q instead of %v in error messages
and adds the result reason and message in kmip
verifyResponse().

Signed-off-by: Rakshith R <rar@redhat.com>
(cherry picked from commit f47839d73d)
2022-08-24 08:32:50 +00:00
Rakshith R
f3675f4f28 rbd: fix bug in kmip kms Decrypt function
This commit fixes a bug in the kmip kms Decrypt
function, where emd.DEK was mistakenly passed
as the Nonce instead of emd.Nonce.

Signed-off-by: Rakshith R <rar@redhat.com>
(cherry picked from commit eaa0e14cb2)
2022-08-24 08:32:50 +00:00
Rakshith R
19e4146fab rbd: add replication capability & service to csiaddons server
csi-addons server will advertise replication capability and
replication service will run with csi-addons server too.

Signed-off-by: Rakshith R <rar@redhat.com>
2022-08-18 08:19:20 +00:00
Rakshith R
0c33a33d5c rbd: add kmip encryption type
The Key Management Interoperability Protocol (KMIP)
is an extensible communication protocol
that defines message formats for the manipulation
of cryptographic keys on a key management server.
Ceph-CSI can now be configured to connect to
various KMS using KMIP for encrypting RBD volumes.

https://en.wikipedia.org/wiki/Key_Management_Interoperability_Protocol

Signed-off-by: Rakshith R <rar@redhat.com>
2022-08-18 07:41:42 +00:00
Madhu Rajanna
dde21543bd cephfs: fix staticcheck comment
We were getting an `is unused for linter "staticcheck"`
(nolintlint) error message due to the wrong
comment format. The format is now
`//directive // comment`.
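
A hedged example of the accepted form (the linter name and reason are illustrative):
```
// Correct form accepted by nolintlint: machine-readable directive, a space,
// then a regular "//" comment explaining why.
//nolint:staticcheck // SA1019: the deprecated call is kept for compatibility
func useDeprecatedAPI() {}
```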

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-08-10 17:51:26 +00:00
Rakshith R
d39d2cffcc cleanup: use index instead of value while iterating
This commit cleans up a for loop to use the index to
access the value instead of copying the value into a
new variable while iterating.
```
internal/util/csiconfig.go:103:2: rangeValCopy: each \
iteration copies 136 bytes (consider pointers or indexing) \
(gocritic)
        for _, cluster := range config {
```
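
A small sketch of the indexed form, with an illustrative struct:
```
type clusterInfo struct {
	// imagine several large fields here; copying the whole struct on every
	// iteration is what gocritic's rangeValCopy check flags
	Monitors []string
	Options  map[string]string
}

func iterate(config []clusterInfo, use func(*clusterInfo)) {
	// index into the slice instead of `for _, cluster := range config`
	for i := range config {
		use(&config[i])
	}
}
```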

Signed-off-by: Rakshith R <rar@redhat.com>
2022-08-09 13:36:03 +00:00
Rakshith R
3d3c029471 nfs: add nodeserver within cephcsi
This commit adds an nfs nodeserver capable of
mounting nfs volumes, even with pod networking,
using an NSenter design similar to rbd and cephfs.
NodePublish, NodeUnpublish, NodeGetVolumeStats
and NodeGetCapabilities have been implemented.

The nodeserver implementation has been inspired
by https://github.com/kubernetes-csi/csi-driver-nfs,
which was previously used to mount cephcsi-exported
nfs volumes. The current implementation is also
backward compatible with previously created
PVCs.

Signed-off-by: Rakshith R <rar@redhat.com>
2022-08-09 13:36:03 +00:00
Shyamsundar Ranganathan
c2280011d1 rbd: Report remote peer readiness if Up and status.Unknown
The current code incorrectly uses an !A && !B condition to
test A:Up and B:status for a remote peer image.

This should be !A || !B, as we require both conditions to
be in the specified state (Up: true, and status Unknown).

This commit corrects that, and further fixes:
- check and return ready only when a remote site is
found in the status output
- check if all peer sites are ready, if multiple are found,
and return ready appropriately
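
A toy illustration of the corrected guard; the state string value is illustrative.
```
// peerReady reports ready only when the peer is Up AND its status is Unknown,
// so the early return must trigger when either condition fails (!A || !B),
// not only when both fail (!A && !B).
func peerReady(up bool, state string) bool {
	if !up || state != "unknown" {
		return false
	}

	return true
}
```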

Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
2022-08-09 05:32:15 +00:00
Madhu Rajanna
8d7b6ee59f rbd: consider mirror daemon state for ResyncVolume
During ResyncVolume we check if the image
is in an error state, and we resync.
After the resync, the image will move to
either the `Error` or the `Resyncing` state.
If the image is in either of those two
states, we return a successful
response and Ready=false so that the
consumer can wait until the volume is
ready to use. If the image is in any
other state we return an error message
to indicate that syncing is not in progress.
The whole resync and image state change
depend on the rbd mirror daemon. If the
mirror daemon is not running, the image
can be in the Resyncing or Unknown state.
Ramen marks the volume replication as
secondary, and once the resync starts, it
will delete the volume replication CR as a
cleanup process.

As we don't have a check for the rbd mirror
daemon, we are returning a resync success
response and Ready=false. Due to this false
response, Ramen assumes the resync started
and deletes the volume replication CR, and
because of this the cluster goes into a bad
state and needs manual intervention.

fixes #3289

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-08-08 13:26:15 +00:00
Niels de Vos
83df1eae53 rebase: k8s.io/mount-utils/IsNotMountPoint() is deprecated
IsNotMountPoint() is deprecated and Mounter.IsMountPoint() is
recommended to be used instead.
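
A hedged sketch of the replacement, based on the k8s.io/mount-utils interface; note the inverted sense compared to the deprecated helper.
```
import mount "k8s.io/mount-utils"

func isMountPoint(m mount.Interface, path string) (bool, error) {
	// deprecated form: notMnt, err := mount.IsNotMountPoint(m, path)
	return m.IsMountPoint(path)
}
```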

Reported-by: golangci/staticcheck
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-08-04 09:53:07 +00:00
Niels de Vos
10b2277330 util: use k8s.io/mount-utils/NewWithoutSystemd() to prevent logging
NewWithoutSystemd() has been introduced in the k8s.io/mount-utils
package so that systemd is not called while executing functions. This
offers consumers the ability to prevent confusing and scary messages
from getting logged.

See-also: kubernetes/kubernetes#111218
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-08-04 09:53:07 +00:00
Niels de Vos
3a200b6976 rbd: use IsLikelyNotMountPoint() to prevent systemd log messages
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-08-04 09:53:07 +00:00
Niels de Vos
0a173a8a9e nfs: make DeleteVolume (more) idempotent
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-08-03 19:43:16 +00:00
Humble Chirammal
bc9ad3d9f1 rbd: add dummy attacher implementation
Previously, it was a requirement to have the attacher sidecar for CSI
drivers, and it had an implementation of a dummy mode of operation.
However, the skipAttach implementation has been stabilized and the dummy
mode of operation is going to be removed from the external-attacher.
Considering this driver works on volumeattachment objects for NBD driver
use cases, we have to implement dummy controllerpublish and unpublish
and thus keep supporting our operations even in the absence of the dummy
mode of operation in the sidecar.

This commit makes ControllerPublish and ControllerUnpublish a NOOP for
the RBD driver.

The CephFS driver does not require the attacher and has already been made
free of the attachment operations.

    Ref# https://github.com/ceph/ceph-csi/pull/3149
    Ref# https://github.com/kubernetes-csi/external-attacher/issues/226
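
A hedged sketch of a NOOP ControllerPublish/Unpublish pair using the CSI spec types; the real ceph-csi methods may validate the request before returning.
```
import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type controllerServer struct{}

// ControllerPublishVolume: nothing to attach for RBD here; returning an empty
// response keeps the external-attacher's VolumeAttachment flow working.
func (cs *controllerServer) ControllerPublishVolume(
	ctx context.Context, req *csi.ControllerPublishVolumeRequest,
) (*csi.ControllerPublishVolumeResponse, error) {
	return &csi.ControllerPublishVolumeResponse{}, nil
}

func (cs *controllerServer) ControllerUnpublishVolume(
	ctx context.Context, req *csi.ControllerUnpublishVolumeRequest,
) (*csi.ControllerUnpublishVolumeResponse, error) {
	return &csi.ControllerUnpublishVolumeResponse{}, nil
}
```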

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-08-03 00:25:49 +00:00
Prasanna Kumar Kalever
30244bf11b cephfs: snapshots honor --setmetadata option
`--setmetadata` is false by default; honoring it
will keep the metadata disabled by default

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-08-01 07:15:29 +00:00
Prasanna Kumar Kalever
14d6211d6d cephfs: subvolumes honor --setmetadata option
`--setmetadata` is false by default; honoring it
will keep the metadata disabled by default

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-08-01 07:15:29 +00:00
Prasanna Kumar Kalever
de7128b3a2 cephfs: Add clusterName as metadata on snapshots
Example:
sh-4.4$ ceph fs subvolume snapshot metadata ls myfs csi-vol-ba248f9e-0e75-11ed-b774-8e97192ff5ec \
			csi-snap-ce24e3bb-0e75-11ed-b774-8e97192ff5ec --group_name csi
{
    "csi.ceph.com/cluster/name": "\"K8s-cluster-1\"",
    "csi.storage.k8s.io/volumesnapshot/name": "cephfs-pvc-snapshot",
    "csi.storage.k8s.io/volumesnapshot/namespace": "rook-ceph",
    "csi.storage.k8s.io/volumesnapshotcontent/name": "snapcontent-2e89e1b2-e6e9-48fe-b365-edb493d7022e"
}

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-08-01 07:15:29 +00:00
Prasanna Kumar Kalever
856d7c264c cephfs: handle metadata op-failures with unsupported ceph versions
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 19:37:23 +00:00
Prasanna Kumar Kalever
5f36f7e8bd cephfs: update subvolume snapshot metadata if snapshot already exists.
Make sure to set the metadata when the subvolume snapshot already exists,
i.e. if the provisioner pod was restarted while createSnapShot was in
progress and it created the subvolume snapshot but didn't yet set the metadata.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 19:37:23 +00:00
Prasanna Kumar Kalever
7c9259a45e cephfs: set metadata on the subvolume snapshot on create
Set snapshot-name/snapshot-namespace/snapshotcontent-name details
on subvolume snapshots as metadata on create.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 19:37:23 +00:00
Prasanna Kumar Kalever
8c0dd482fa cephfs: add set/Remove subvolume snapshot metadata utility functions
Add utility functions to set/Remove
snapshot-name/snapshot-namespace/snapshotcontent-name metadata on
subvolume snapshots.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 19:37:23 +00:00
Prasanna Kumar Kalever
51099d60fe cephfs: handle metadata op-failures with unsupported ceph versions
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
11d51ed9b0 cephfs: unset cluster Name metadata
unsets the cluster name metadata key and value on the subvolume

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
21d811096b cephfs: set cluster Name as metadata on the subvolume
This change reads the cluster name from the cmdline args;
the provisioner will set the same on the subvolume.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
466bdf97b2 cephfs: set metadata on restart of provisioner pod
Make sure to set the metadata when the subvolume already exists, i.e. if the
provisioner pod was restarted while createVolume was in progress and it
created the subvolume but didn't yet set the metadata.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
6bcb8ecc68 cephfs: set PV/PVC details on the subvolume as metadata on create
This helps monitoring solutions without access to the Kubernetes cluster to
display the details of the PV/PVC/Namespace in their dashboards.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
ecf03eb6ae cephfs: add set/Get/List/Remove metadata utility functions
Add utility functions to set/Get/List/Remove PV/PVC/PVCNamespace metadata
on subvolume.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Madhu Rajanna
8c5563a9bc rbd: remove checkHealthyPrimary check
After failover of workloads to the secondary
cluster when the primary cluster is down,
the RBD image is not marked healthy, and VR
resources are not promoted to Primary;
in VolumeReplication, the `CURRENT STATE`
remains Unknown and doesn't change to Primary.

This happens because the primary cluster went down
and we have force-promoted the image on the
secondary cluster, and the image stays in
up+stopping_replay or could be in any other state.
The current assumption was that the image will
always be `up+stopped`. But the image will be in
`up+stopped` only for a planned failover, and it
could be in any other state if it is a forced
failover. For this reason, checkHealthyPrimary is
removed from the PromoteVolume RPC call.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-07-27 09:04:27 +00:00
Niels de Vos
011d4fc81c cleanup: create k8s.io/mount-utils Mounter only once
Recently the k8s.io/mount-utils package added more runtime detection.
When creating a new Mounter, the detection is run every time. This is
unfortunate, as it logs a message like the following:

```
mount_linux.go:283] Detected umount with safe 'not mounted' behavior
```

This message might be useful, so it is probably good to keep it.

In Ceph-CSI there are various locations where Mounter instances are
created. Moving that to the DefaultNodeServer type reduces it to a
single place. Some utility functions need to accept the additional
parameter too, so that has been modified as well.

See-also: kubernetes/kubernetes#109676
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-07-21 07:14:43 +00:00
takeaki-matsumoto
1025871021 cephfs: Support mount option on nodeplugin
add mount options on nodeplugin side

Signed-off-by: takeaki-matsumoto <takeaki.matsumoto@linecorp.com>
2022-07-18 22:04:12 +00:00
Madhu Rajanna
ceb88d6498 cephfs: remove extra check for restore size
It looks like the cephfs snapshot size is buggy and it is
getting removed in ceph fs. We cannot get the size
of the snapshot during the CreateVolume call, so we cannot
do any size check at CreateVolume to check whether the
restore size is smaller or not.

As we are removing this check it also fixes #3147,
but we don't have any validation at the CSI level for
a smaller restore; we need to depend on the kubernetes
external-provisioner for it.

fixes: #3147

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-07-18 10:04:14 +00:00
Madhu Rajanna
f171143135 cephfs: round cephfs size to a multiple of 4MiB
Due to the bug in the df stat we need to round off
the subvolume size to align with 4MiB.

Note: the minimum supported size in cephcsi is 1MiB,
so we don't need to take care of KiB.

fixes #3240

More details at https://github.com/ceph/ceph/pull/46905
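
A minimal sketch of the rounding; the helper name is illustrative.
```
const fourMiB = int64(4 * 1024 * 1024)

// roundUpTo4MiB rounds a byte count up to the next 4 MiB boundary, e.g.
// 5 MiB (5242880) becomes 8 MiB (8388608).
func roundUpTo4MiB(sizeBytes int64) int64 {
	if sizeBytes%fourMiB == 0 {
		return sizeBytes
	}

	return ((sizeBytes / fourMiB) + 1) * fourMiB
}
```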

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-07-13 18:32:40 +00:00
Humble Chirammal
1856647506 cephfs: go with default permissions while creating subvolumes
While creating subvolumes, the CephFS driver set the mode to `777`
and passed it along to the go-ceph APIs, which caused the subvolume
permissions to be 777. However, if we create a subvolume
directly in the ceph cluster, the default permission bits are
set, which is 755 for the subvolume. This commit tries to stick
to the default behaviour even while creating the subvolume.

This also means that we can work with fsGroupPolicy set to
`File` in the csiDriver object, which is also addressed in this commit.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-07-13 06:49:58 +00:00
Benoît Knecht
507844c9b1 rbd: Use rados namespace when getting clone depth
When the Ceph user is restricted to a specific namespace in the pool, it is
crucial that every interaction with the cluster is done within that namespace.
This wasn't the case in `getCloneDepth()`.

This issue was causing snapshot creation to fail with

> Failed to check and update snapshot content: failed to take snapshot of the
> volume X: "rpc error: code = Internal desc = rbd: ret=-1, Operation not
> permitted"
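
A hedged sketch using go-ceph's rados bindings, showing the namespace being set on the IOContext before use; the wrapper function is illustrative.
```
import "github.com/ceph/go-ceph/rados"

// openIoctx opens the pool and pins the IOContext to the rados namespace so
// every subsequent image operation (including clone-depth walks) stays inside
// the namespace the restricted user is allowed to use.
func openIoctx(conn *rados.Conn, pool, namespace string) (*rados.IOContext, error) {
	ioctx, err := conn.OpenIOContext(pool)
	if err != nil {
		return nil, err
	}
	if namespace != "" {
		ioctx.SetNamespace(namespace)
	}

	return ioctx, nil
}
```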

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
2022-07-07 22:20:29 +00:00