Commit Graph

955 Commits

Author SHA1 Message Date
Rakshith R
a57859dfa4 rbd: use blocklist range cmd, fallback if it fails
This commit adds the blocklist range command feature,
falling back to the old behaviour of blocklisting one IP
at a time if the command is invalid (not available).

Signed-off-by: Rakshith R <rar@redhat.com>
2022-09-13 13:10:32 +00:00
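A minimal Go sketch of the fallback pattern described above; blocklistRange, blocklistIP and errInvalidCommand are hypothetical stand-ins, not the driver's actual helpers:

```go
package example

import (
	"errors"
	"fmt"
)

// errInvalidCommand stands in for the error an older cluster returns when
// the range-based blocklist subcommand is not available (hypothetical).
var errInvalidCommand = errors.New("invalid command")

// blocklistRange and blocklistIP are hypothetical stand-ins for the real
// "osd blocklist" mon command invocations.
func blocklistRange(cidr string) error { return errInvalidCommand }
func blocklistIP(ip string) error      { return nil }

// addToBlocklist tries the range command first and falls back to
// blocklisting addresses one at a time when the command is unsupported.
func addToBlocklist(cidr string, ips []string) error {
	err := blocklistRange(cidr)
	if err == nil {
		return nil
	}
	if !errors.Is(err, errInvalidCommand) {
		return err
	}
	for _, ip := range ips {
		if err := blocklistIP(ip); err != nil {
			return fmt.Errorf("failed to blocklist %q: %w", ip, err)
		}
	}
	return nil
}
```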
Prashanth Dintyala
2a6487cbf5 rbd: create token and use it for vault SA whenever possible
Use the TokenRequest API by default for the Vault SA, even with K8s versions < 1.24.

Signed-off-by: Prashanth Dintyala <vdintyala@nvidia.com>
2022-09-09 10:13:32 +00:00
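A hedged sketch of requesting a short-lived token through the TokenRequest API with client-go; the function name and the 10-minute expiry are illustrative, not the driver's actual code:

```go
package example

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// requestServiceAccountToken asks the TokenRequest API for a short-lived
// token instead of reading a long-lived service-account secret.
func requestServiceAccountToken(ctx context.Context, client kubernetes.Interface,
	namespace, serviceAccount string) (string, error) {
	expiry := int64(600) // 10 minutes; example value only
	tr, err := client.CoreV1().ServiceAccounts(namespace).CreateToken(ctx, serviceAccount,
		&authenticationv1.TokenRequest{
			Spec: authenticationv1.TokenRequestSpec{ExpirationSeconds: &expiry},
		}, metav1.CreateOptions{})
	if err != nil {
		return "", fmt.Errorf("failed to create token for %s/%s: %w", namespace, serviceAccount, err)
	}
	return tr.Status.Token, nil
}
```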
Madhu Rajanna
76064d8e34 cephfs: retry subvolumegroup creation
In case the subvolumegroup is deleted
and recreated, we need to restart the
cephcsi provisioner pod to clear the cache
that cephcsi maintains. With this PR,
if cephcsi sees a NotFound error during
subvolume creation it resets the cache
for that filesystem, so that in the next RPC
call cephcsi will try to create the
subvolumegroup again.

Ref: https://github.com/rook/rook/issues/10623

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-09-07 18:24:30 +00:00
Madhu Rajanna
e56621cd66 cephfs: fix subvolumegroup creation for multiple fs
In a cluster we can have multiple filesystems;
for that we need a map of
subvolumegroups to track whether the
subvolumegroup has been created for each filesystem.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-09-07 18:24:30 +00:00
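A minimal sketch combining the two changes above: a per-filesystem map remembers whether the subvolumegroup exists, and a NotFound error during subvolume creation drops the cache entry so the next RPC recreates the group. The errNotFound sentinel and the callback parameters are hypothetical:

```go
package example

import (
	"errors"
	"sync"
)

// errNotFound stands in for the "subvolume group not found" error returned
// by the cluster (hypothetical sentinel for this sketch).
var errNotFound = errors.New("not found")

// cache keyed by filesystem name: true means the group was already created.
var (
	groupCreatedLock sync.Mutex
	groupCreated     = map[string]bool{}
)

// createSubVolume creates the subvolumegroup once per filesystem and resets
// the cached flag when the cluster reports NotFound, so a retry recreates it.
func createSubVolume(fsName string, createGroup, createVolume func() error) error {
	groupCreatedLock.Lock()
	if !groupCreated[fsName] {
		if err := createGroup(); err != nil {
			groupCreatedLock.Unlock()
			return err
		}
		groupCreated[fsName] = true
	}
	groupCreatedLock.Unlock()

	err := createVolume()
	if errors.Is(err, errNotFound) {
		// The group was deleted behind our back; forget it so the next
		// RPC call creates the subvolumegroup again.
		groupCreatedLock.Lock()
		delete(groupCreated, fsName)
		groupCreatedLock.Unlock()
	}
	return err
}
```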
Madhu Rajanna
71dbc7dbb4 rbd: map only primary image
If mirroring is enabled on the image
and it is primary, consider it for mapping;
if mirroring is enabled but the image is
not primary yet, return an error message
until the image is marked as primary.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-09-06 10:40:12 +00:00
Madhu Rajanna
038462ff43 cephfs: return success if metadata operation not supported
If the Ceph cluster is an older version that
does not support the metadata operation,
return success instead of failing
the request.

fixes #3347

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-08-29 18:37:53 +00:00
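A small sketch of that behaviour; the errOpNotSupported sentinel is a hypothetical stand-in for the error an older cluster returns:

```go
package example

import "errors"

// errOpNotSupported stands in for the error an older Ceph cluster returns
// when subvolume metadata commands are not implemented (hypothetical).
var errOpNotSupported = errors.New("operation not supported")

// setMetadataIfSupported treats "not supported" as success so provisioning
// does not fail against clusters that predate the metadata commands.
func setMetadataIfSupported(set func() error) error {
	err := set()
	if errors.Is(err, errOpNotSupported) {
		return nil
	}
	return err
}
```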
Rakshith R
40134772a7 rbd: modify stripSecret mechanism in logGRPC()
This commit updates the csi-addons spec version
and modifies logging to strip replication
request secrets using csi.StripSecret first, then
with replication.protosanitizer if the former
fails. This is done to make sure
we strip both CSI- and replication-format secrets.

Signed-off-by: Rakshith R <rar@redhat.com>
2022-08-29 11:18:15 +00:00
Rakshith R
f47839d73d rbd: improve kmip verifyResponse() error message
This commit uses %q instead of %v in error messages
and adds the result reason and message in the KMIP
verifyResponse().

Signed-off-by: Rakshith R <rar@redhat.com>
2022-08-24 07:58:57 +00:00
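A tiny illustration of the difference (not the driver's actual messages):

```go
package example

import "fmt"

// formatExamples shows why %q is preferred: it quotes the value, which makes
// empty strings and stray whitespace visible in error messages.
func formatExamples() {
	status := ""
	fmt.Println(fmt.Errorf("unexpected result %v", status)) // unexpected result
	fmt.Println(fmt.Errorf("unexpected result %q", status)) // unexpected result ""
}
```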
Rakshith R
eaa0e14cb2 rbd: fix bug in kmip kms Decrypt function
This commit fixes a bug in the KMIP KMS Decrypt
function, where emd.DEK was mistakenly passed as
the nonce instead of emd.Nonce.

Signed-off-by: Rakshith R <rar@redhat.com>
2022-08-24 07:58:57 +00:00
Niels de Vos
b697b9b0d9 cleanup: replace github.com/pborman/uuid with github.com/google/uuid
The github.com/google/uuid package is used by Kubernetes, and it is part
of the vendor/ directory already. Our usage of github.com/pborman/uuid
can be replaced by github.com/google/uuid, so that
github.com/pborman/uuid can be removed as a dependency.

Closes: #3315
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-08-22 14:34:25 +00:00
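A short sketch of the github.com/google/uuid calls that replace the pborman helpers; the function name is illustrative:

```go
package example

import "github.com/google/uuid"

// newVolumeUUID generates and validates a random (version 4) UUID using the
// google/uuid package.
func newVolumeUUID() (string, error) {
	id := uuid.New().String()
	if _, err := uuid.Parse(id); err != nil {
		return "", err
	}
	return id, nil
}
```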
Rakshith R
19e4146fab rbd: add replication capability & service to csiaddons server
The csi-addons server will advertise the replication capability,
and the replication service will run alongside the csi-addons server.

Signed-off-by: Rakshith R <rar@redhat.com>
2022-08-18 08:19:20 +00:00
Rakshith R
0c33a33d5c rbd: add kmip encryption type
The Key Management Interoperability Protocol (KMIP)
is an extensible communication protocol
that defines message formats for the manipulation
of cryptographic keys on a key management server.
Ceph-CSI can now be configured to connect to
various KMS using KMIP for encrypting RBD volumes.

https://en.wikipedia.org/wiki/Key_Management_Interoperability_Protocol

Signed-off-by: Rakshith R <rar@redhat.com>
2022-08-18 07:41:42 +00:00
Madhu Rajanna
dde21543bd cephfs: fix staticcheck comment
The `is unused for linter "staticcheck"`
(nolintlint) error message was triggered by a wrong
comment format. The directive now uses the format
`//directive // comment`.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-08-10 17:51:26 +00:00
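For reference, a sketch of the comment format nolintlint expects; the function is a placeholder:

```go
package example

// nolintlint only accepts the machine-readable form when the human
// explanation follows a second "//", i.e. `//nolint:<linter> // <reason>`.

//nolint:staticcheck // kept for backward compatibility with older callers
func legacyHelper() {}
```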
Rakshith R
d39d2cffcc cleanup: use index instead of value while iterating
This commit cleans up a for loop to use the index to
access the value instead of copying the value into a
new variable while iterating.
```
internal/util/csiconfig.go:103:2: rangeValCopy: each \
iteration copies 136 bytes (consider pointers or indexing) \
(gocritic)
        for _, cluster := range config {
```

Signed-off-by: Rakshith R <rar@redhat.com>
2022-08-09 13:36:03 +00:00
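A sketch of the indexed iteration the gocritic hint asks for; the trimmed-down clusterInfo struct is illustrative:

```go
package example

// clusterInfo is a trimmed-down stand-in for the ~136-byte configuration
// entry that gocritic flagged when copied on every iteration.
type clusterInfo struct {
	ClusterID string
	Monitors  []string
}

// findCluster iterates by index so each element is read in place instead of
// being copied into a loop variable.
func findCluster(config []clusterInfo, id string) *clusterInfo {
	for i := range config {
		if config[i].ClusterID == id {
			return &config[i]
		}
	}
	return nil
}
```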
Rakshith R
3d3c029471 nfs: add nodeserver within cephcsi
This commit adds an NFS nodeserver capable of
mounting NFS volumes, even with pod networking,
using the NSenter design similar to RBD and CephFS.
NodePublish, NodeUnpublish, NodeGetVolumeStats
and NodeGetCapabilities have been implemented.

The nodeserver implementation has been inspired
by https://github.com/kubernetes-csi/csi-driver-nfs,
which was previously used for mounting cephcsi-exported
NFS volumes. The current implementation is also
backward compatible with the previously created
PVCs.

Signed-off-by: Rakshith R <rar@redhat.com>
2022-08-09 13:36:03 +00:00
Shyamsundar Ranganathan
c2280011d1 rbd: Report remote peer readiness if Up and status.Unknown
The current code incorrectly uses an !A && !B condition to
test A: Up and B: status for a remote peer image.

This should be !A || !B, as we require both conditions to
be in the specified state (Up: true, and status Unknown).

This is corrected by this commit, and further fixes:
- check and return ready only when a remote site is
found in the status output
- check if all peer sites are ready, if multiple are found
and return ready appropriately

Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
2022-08-09 05:32:15 +00:00
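A condensed sketch of the corrected condition; siteStatus and the "unknown" state string are illustrative stand-ins for the mirror status fields:

```go
package example

// siteStatus is a trimmed-down stand-in for a peer site entry in the
// mirror image status output.
type siteStatus struct {
	Up    bool
	State string
}

// remoteSiteReady requires both conditions to hold (Up: true, state
// "unknown"); the old `!Up && State != ...` guard only rejected a peer when
// both checks failed, letting half-matching peers through.
func remoteSiteReady(s siteStatus) bool {
	if !s.Up || s.State != "unknown" {
		return false
	}
	return true
}
```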
Madhu Rajanna
8d7b6ee59f rbd: consider mirror daemon state for ResyncVolume
During ResyncVolume we check if the image
is in an error state, and we resync.
After the resync, the image will move to
either the `Error` or the `Resyncing` state.
If the image is in one of those two
states, we return a successful
response and Ready=false so that the
consumer can wait until the volume is
ready to use. If the image is in any
other state we return an error message
to indicate the syncing is not progressing.
The whole resync and image state change
depends on the rbd mirror daemon. If the
mirror daemon is not running, the image
can be in the Resyncing or Unknown state.
Ramen marks the volume replication as
secondary, and once the resync starts, it
deletes the volume replication CR as a
cleanup step.

As we don't have a check for the rbd mirror
daemon, we return a resync success
response and Ready=false. Due to this false
response, Ramen assumes the resync started,
deletes the volume replication CR, and
because of this the cluster goes into a bad
state and needs manual intervention.

fixes #3289

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-08-08 13:26:15 +00:00
Niels de Vos
83df1eae53 rebase: k8s.io/mount-utils/IsNotMountPoint() is deprecated
IsNotMountPoint() is deprecated and Mounter.IsMountPoint() is
recommended to be used instead.

Reported-by: golangci/staticcheck
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-08-04 09:53:07 +00:00
Niels de Vos
10b2277330 util: use k8s.io/mount-utils/NewWithoutSystemd() to prevent logging
NewWithoutSystemd() has been introduced in the k8s.io/mount-utils
package so that systemd is not called while executing functions. This
offers consumers the ability to prevent confusing and scary messages
from getting logged.

See-also: kubernetes/kubernetes#111218
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-08-04 09:53:07 +00:00
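A minimal sketch, assuming a k8s.io/mount-utils version that ships NewWithoutSystemd():

```go
package example

import (
	mount "k8s.io/mount-utils"
)

// newMounter returns a mounter that never shells out to systemd-run, so no
// "systemd not available" messages are logged in systemd-less containers.
// An empty mounter path selects the default mount binary.
func newMounter() mount.Interface {
	return mount.NewWithoutSystemd("")
}
```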
Niels de Vos
3a200b6976 rbd: use IsLikelyNotMountPoint() to prevent systemd log messages
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-08-04 09:53:07 +00:00
Niels de Vos
0a173a8a9e nfs: make DeleteVolume (more) idempotent
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-08-03 19:43:16 +00:00
Humble Chirammal
bc9ad3d9f1 rbd: add dummy attacher implementation
Previously, it was a requirement to have the attacher sidecar for CSI
drivers, and it had an implementation of a dummy mode of operation.
However, the skipAttach implementation has been stabilized and the dummy
mode of operation is going to be removed from the external-attacher.
Considering this driver works on VolumeAttachment objects for NBD driver
use cases, we have to implement dummy ControllerPublish and ControllerUnpublish
and thus keep supporting our operations even in the absence of the dummy mode
of operation in the sidecar.

This commit makes ControllerPublish and ControllerUnpublish a NOOP for the
RBD driver.

The CephFS driver does not require the attacher and has already been made
free from the attachment operations.

    Ref# https://github.com/ceph/ceph-csi/pull/3149
    Ref# https://github.com/kubernetes-csi/external-attacher/issues/226

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-08-03 00:25:49 +00:00
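A condensed sketch of no-op publish/unpublish handlers using the CSI spec Go bindings; the controllerServer struct is illustrative, not the driver's actual type:

```go
package example

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// controllerServer is an illustrative stand-in for the RBD controller server.
type controllerServer struct{}

// ControllerPublishVolume is a NOOP: the driver only needs the
// VolumeAttachment objects to exist, so the request is acknowledged
// without doing any work.
func (cs *controllerServer) ControllerPublishVolume(
	ctx context.Context,
	req *csi.ControllerPublishVolumeRequest,
) (*csi.ControllerPublishVolumeResponse, error) {
	return &csi.ControllerPublishVolumeResponse{}, nil
}

// ControllerUnpublishVolume is the matching NOOP for detach.
func (cs *controllerServer) ControllerUnpublishVolume(
	ctx context.Context,
	req *csi.ControllerUnpublishVolumeRequest,
) (*csi.ControllerUnpublishVolumeResponse, error) {
	return &csi.ControllerUnpublishVolumeResponse{}, nil
}
```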
Prasanna Kumar Kalever
30244bf11b cephfs: snapshots honor --setmetadata option
`--setmetadata` is false by default; honoring it
will keep metadata disabled by default.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-08-01 07:15:29 +00:00
Prasanna Kumar Kalever
14d6211d6d cephfs: subvolumes honor --setmetadata option
`--setmetadata` is false by default; honoring it
will keep metadata disabled by default.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-08-01 07:15:29 +00:00
Prasanna Kumar Kalever
de7128b3a2 cephfs: Add clusterName as metadata on snapshots
Example:
sh-4.4$ ceph fs subvolume snapshot metadata ls myfs csi-vol-ba248f9e-0e75-11ed-b774-8e97192ff5ec \
			csi-snap-ce24e3bb-0e75-11ed-b774-8e97192ff5ec --group_name csi
{
    "csi.ceph.com/cluster/name": "\"K8s-cluster-1\"",
    "csi.storage.k8s.io/volumesnapshot/name": "cephfs-pvc-snapshot",
    "csi.storage.k8s.io/volumesnapshot/namespace": "rook-ceph",
    "csi.storage.k8s.io/volumesnapshotcontent/name": "snapcontent-2e89e1b2-e6e9-48fe-b365-edb493d7022e"
}

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-08-01 07:15:29 +00:00
Prasanna Kumar Kalever
856d7c264c cephfs: handle metadata op-failures with unsupported ceph versions
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 19:37:23 +00:00
Prasanna Kumar Kalever
5f36f7e8bd cephfs: update subvolume snapshot metadata if snapshot already exists.
Make sure to set metadata when the subvolume snapshot already exists, i.e. if the
provisioner pod is restarted while CreateSnapshot is in progress, say it
created the subvolume snapshot but didn't yet set the metadata.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 19:37:23 +00:00
Prasanna Kumar Kalever
7c9259a45e cephfs: set metadata on the subvolume snapshot on create
Set snapshot-name/snapshot-namespace/snapshotcontent-name details
on subvolume snapshots as metadata on create.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 19:37:23 +00:00
Prasanna Kumar Kalever
8c0dd482fa cephfs: add set/Remove subvolume snapshot metadata utility functions
Add utility functions to set/Remove
snapshot-name/snapshot-namespace/snapshotcontent-name metadata on
subvolume snapshots.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 19:37:23 +00:00
Prasanna Kumar Kalever
51099d60fe cephfs: handle metadata op-failures with unsupported ceph versions
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
11d51ed9b0 cephfs: unset cluster Name metadata
unsets the cluster name metadata key and value on the subvolume

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
21d811096b cephfs: set cluster Name as metadata on the subvolume
This change reads the cluster name from the command-line args;
the provisioner will set the same on the subvolume.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
466bdf97b2 cephfs: set metadata on restart of provisioner pod
Make sure to set metadata when the subvolume already exists, i.e. if the provisioner pod
is restarted while createVolume is in progress, say it created the subvolume
but didn't yet set the metadata.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
6bcb8ecc68 cephfs: set PV/PVC details on the subvolume as metadata on create
This helps monitoring solutions without access to the Kubernetes cluster to
display the details of the PV/PVC/namespace in their dashboards.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
ecf03eb6ae cephfs: add set/Get/List/Remove metadata utility functions
Add utility functions to set/Get/List/Remove PV/PVC/PVCNamespace metadata
on subvolume.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Madhu Rajanna
8c5563a9bc rbd: remove checkHealthyPrimary check
After failover of workloads to the secondary
cluster when the primary cluster is down,
the RBD image is not marked healthy, and VR
resources are not promoted to Primary:
in VolumeReplication, the `CURRENT STATE`
remains Unknown and doesn't change to Primary.

This happens because the primary cluster went down
and we have force-promoted the image on the
secondary cluster, and the image stays in
up+stopping_replay or could be in any other state.
The assumption so far was that the image will
always be `up+stopped`. But the image will be in
`up+stopped` only for a planned failover, and it
could be in any other state if it is a forced
failover. For this reason, removing
checkHealthyPrimary from the PromoteVolume RPC call.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-07-27 09:04:27 +00:00
Niels de Vos
011d4fc81c cleanup: create k8s.io/mount-utils Mounter only once
Recently the k8s.io/mount-utils package added more runtime detection.
When creating a new Mounter, the detection is run every time. This is
unfortunate, as it logs a message like the following:

```
mount_linux.go:283] Detected umount with safe 'not mounted' behavior
```

This message might be useful, so it is probably good to keep it.

In Ceph-CSI there are various locations where Mounter instances are
created. Moving that to the DefaultNodeServer type reduces it to a
single place. Some utility functions need to accept the additional
parameter too, so that has been modified as well.

See-also: kubernetes/kubernetes#109676
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-07-21 07:14:43 +00:00
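A minimal sketch, with trimmed-down type names, of creating the mounter once and sharing it through the node server:

```go
package example

import (
	mount "k8s.io/mount-utils"
)

// DefaultNodeServer keeps a single Mounter instance so the runtime
// detection (and its log message) happens only once per process.
type DefaultNodeServer struct {
	Mounter mount.Interface
}

// NewDefaultNodeServer creates the shared mounter up front; helpers take it
// as a parameter instead of constructing their own.
func NewDefaultNodeServer() *DefaultNodeServer {
	return &DefaultNodeServer{Mounter: mount.New("")}
}
```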
takeaki-matsumoto
1025871021 cephfs: Support mount option on nodeplugin
add mount options on nodeplugin side

Signed-off-by: takeaki-matsumoto <takeaki.matsumoto@linecorp.com>
2022-07-18 22:04:12 +00:00
Madhu Rajanna
ceb88d6498 cephfs: remove extra check for restore size
It looks like the CephFS snapshot size is buggy and it is
being removed in CephFS. We cannot get the size
of the snapshot during the CreateVolume call, so we cannot
do any size check at CreateVolume to verify whether the
restore size is smaller or not.

As we are removing this check it also fixes #3147,
but since we don't have any validation at the CSI level for
a smaller restore, we need to depend on the Kubernetes
external-provisioner for it.

fixes: #3147

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-07-18 10:04:14 +00:00
Madhu Rajanna
f171143135 cephfs: round cephfs size to a multiple of 4 MiB
Due to a bug in the df stat we need to round off
the subvolume size to align with 4 MiB.

Note: the minimum supported size in cephcsi is 1 MiB,
so we don't need to take care of KiB.

fixes #3240

More details at https://github.com/ceph/ceph/pull/46905

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-07-13 18:32:40 +00:00
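A sketch of the rounding described above; the function name is illustrative:

```go
package example

// roundOffCephFSSize rounds a requested size up to the next multiple of
// 4 MiB; anything at or below zero is clamped to a single 4 MiB unit.
func roundOffCephFSSize(bytes int64) int64 {
	const fourMiB = 4 * 1024 * 1024
	if bytes <= 0 {
		return fourMiB
	}
	return ((bytes + fourMiB - 1) / fourMiB) * fourMiB
}
```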
Humble Chirammal
1856647506 cephfs: go with default permissions while creating subvolumes
While creating subvolumes, the CephFS driver set the mode to `777`
and passed it along to the go-ceph APIs, which caused the subvolume
permissions to be 777. However, if we create a subvolume
directly in the Ceph cluster, the default permission bits are
set, which is 755 for the subvolume. This commit tries to stick
to the default behaviour even while creating the subvolume.

This also means that we can work with fsGroupPolicy set to
`File` in the CSIDriver object, which is also addressed in this commit.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-07-13 06:49:58 +00:00
Benoît Knecht
507844c9b1 rbd: Use rados namespace when getting clone depth
When the Ceph user is restricted to a specific namespace in the pool, it is
crucial that every interaction with the cluster is done within that namespace.
This wasn't the case in `getCloneDepth()`.

This issue was causing snapshot creation to fail with

> Failed to check and update snapshot content: failed to take snapshot of the
> volume X: "rpc error: code = Internal desc = rbd: ret=-1, Operation not
> permitted"

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
2022-07-07 22:20:29 +00:00
Niels de Vos
14ba1498bf util: reduce systemd related errors while mounting
There are regular reports that identify a non-error as the cause of
failures. The Kubernetes mount-utils package has detection for systemd
based environments, and if systemd is unavailable, the following error
is logged:

    Cannot run systemd-run, assuming non-systemd OS
    systemd-run output: System has not been booted with systemd as init
    system (PID 1). Can't operate.
    Failed to create bus connection: Host is down, failed with: exit status 1

Because of the `failed` and `exit status 1` error message, users might
assume that the mounting failed, which does not need to be the case. The
container images that the Ceph-CSI project provides do not use
systemd, so the error gets logged with each mount attempt.

By using the newer MountSensitiveWithoutSystemd() function from the
mount-utils package where we can, the number of confusing logs gets
reduced.

Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-07-04 10:02:54 +00:00
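A minimal sketch of calling MountSensitiveWithoutSystemd(); the wrapper function is illustrative:

```go
package example

import (
	mount "k8s.io/mount-utils"
)

// mountWithoutSystemd mounts without wrapping the command in systemd-run,
// avoiding the misleading "failed ... exit status 1" log lines quoted above
// on container images that do not run systemd.
func mountWithoutSystemd(m mount.Interface, source, target, fstype string, options []string) error {
	// The last argument carries sensitive options that must not be logged;
	// none are needed in this sketch.
	return m.MountSensitiveWithoutSystemd(source, target, fstype, options, nil)
}
```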
Niels de Vos
a1ed6207f6 cephfs: report detailed error message on clone failure
go-ceph provides a new GetFailure() method to retrieve detailed errors
when cloning fails. This is now included in the `cephFSCloneState`
struct, which was a simple string before.

While modifying the `cephFSCloneState` struct, the constants have been
removed, as go-ceph provides them as well.

Fixes: #3140
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-06-30 19:33:41 +00:00
Yati Padia
5c40f1ef33 rbd: remove the clone in case of failure
This commit removes the clone in case
unsetAllMetadata, copyEncryptionConfig or
expand fails for createVolumeFromSnapshot
and CreateSnapshot.
It also removes the clone in case of
any failure in createCloneFromImage.

issue: #3103

Signed-off-by: Yati Padia <ypadia@redhat.com>
2022-06-30 05:50:16 +00:00
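A condensed sketch of the cleanup-on-failure pattern; the function-valued parameters are stand-ins for the real metadata, encryption and resize steps:

```go
package example

// createFromSnapshot deletes the partially created clone if any step after
// the clone fails, so no stale image is left behind.
func createFromSnapshot(clone, setMetadata, expand, deleteClone func() error) (err error) {
	if err = clone(); err != nil {
		return err
	}
	defer func() {
		if err != nil {
			// best-effort removal of the partially created clone
			_ = deleteClone()
		}
	}()
	if err = setMetadata(); err != nil {
		return err
	}
	return expand()
}
```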
Prasanna Kumar Kalever
9fa3c8382b cleanup: reduce struct padding
internal/rbd/rbd_util.go:89:15: struct of size 312 bytes could be of
size 304 bytes:
``
struct{
	RbdImageName   	string,
	ImageID        	string,
	VolID          	string,
	Monitors       	string,
	JournalPool    	string,
	Pool           	string,
	RadosNamespace 	string,
	ClusterID      	string,
	RequestName    	string,
	NamePrefix     	string,
	ParentName     	string,
	ParentPool     	string,
	ClusterName    	string,
	Owner          	string,
	VolSize        	int64,
	StripeCount    	uint64,
	StripeUnit     	uint64,
	ObjectSize     	uint64,
	ImageFeatureSet	github.com/ceph/go-ceph/rbd.FeatureSet,
	encryption
*github.com/ceph/ceph-csi/internal/util.VolumeEncryption,
	CreatedAt
*google.golang.org/protobuf/types/known/timestamppb.Timestamp,
	conn
*github.com/ceph/ceph-csi/internal/util.ClusterConnection,
	ioctx          	*github.com/ceph/go-ceph/rados.IOContext,
	Primary        	bool,
	EnableMetadata 	bool,
}
`` (maligned)
type rbdImage struct {
              ^}`
make: *** [Makefile:118: go-lint] Error 1

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-06-28 19:12:53 +00:00
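A toy illustration (not the real rbdImage layout) of the maligned fix: keeping the two booleans adjacent at the end avoids the extra padding:

```go
package example

// compactExample groups the word-sized fields first and the two booleans
// together at the end, so the compiler does not insert padding between them.
type compactExample struct {
	RbdImageName   string
	VolSize        int64
	StripeCount    uint64
	Primary        bool
	EnableMetadata bool
}
```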
Prasanna Kumar Kalever
29a3f4acf6 cleanup: ReconcilePersistentVolume consider passing it by pointer
Address: hugeParam linter

internal/controller/persistentvolume/persistentvolume.go:59:7:
hugeParam: r is heavy (80 bytes); consider passing it by pointer
(gocritic)
[...]
internal/controller/persistentvolume/persistentvolume.go:135:7:
hugeParam: r is heavy (80 bytes); consider passing it by pointer
(gocritic)
func (r ReconcilePersistentVolume) reconcilePV(ctx context.Context, obj
runtime.Object) error {}

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-06-28 19:12:53 +00:00
Prasanna Kumar Kalever
caf4090657 rbd: provide option to disable setting metadata on rbd images
As we added support to set metadata on the RBD images created for
PVCs and volume snapshots, by default metadata is set on all the images.

As we have seen, we hit issue #2327 a lot of times with this,
and we start to leave a lot of stale images. Currently, we rely on
`--extra-create-metadata=true` to decide whether to set the metadata or not;
we cannot set this option to false to disable setting metadata because we
use this for encryption too.

This change provides an option to disable setting the image
metadata when starting cephcsi.

Fixes: #3009
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-06-28 19:12:53 +00:00
Madhu Rajanna
8a47904e8f rbd: add unit test for checkHealthyPrimary
Removed the code in checkHealthyPrimary that
makes the Ceph call; it is now passed as input.
Added a unit test for the checkHealthyPrimary function.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-06-28 13:17:11 +00:00
Madhu Rajanna
53e76fab69 rbd: fix checkHealthyPrimary to consider up+stopped state
We need to check that the image is in the up+stopped state,
not in just one of those states; for that we need to use
an OR check, not an AND check.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-06-28 13:17:11 +00:00