Commit Graph

992 Commits

Author SHA1 Message Date
Rakshith R
d39d2cffcc cleanup: use index instead of value while iterating
This commit cleans up the for loop to use the index to access
the value instead of copying the value into a new variable
while iterating.
```
internal/util/csiconfig.go:103:2: rangeValCopy: each \
iteration copies 136 bytes (consider pointers or indexing) \
(gocritic)
        for _, cluster := range config {
```
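
A minimal sketch of the indexing pattern (the element type here is illustrative, not the real csiconfig struct):
```
package main

import "fmt"

// clusterInfo is a stand-in for the (much larger) struct iterated over in
// csiconfig.go; the real type is not shown here.
type clusterInfo struct {
	ClusterID string
	Monitors  []string
}

func main() {
	config := []clusterInfo{
		{ClusterID: "ceph-a", Monitors: []string{"10.0.0.1"}},
		{ClusterID: "ceph-b", Monitors: []string{"10.0.0.2"}},
	}

	// Indexing into the slice avoids copying each element into a loop
	// variable on every iteration.
	for i := range config {
		fmt.Println(config[i].ClusterID)
	}
}
```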

Signed-off-by: Rakshith R <rar@redhat.com>
2022-08-09 13:36:03 +00:00
Rakshith R
3d3c029471 nfs: add nodeserver within cephcsi
This commit adds an NFS nodeserver capable of
mounting NFS volumes, even with pod networking,
using the nsenter design similar to rbd and cephfs.
NodePublish, NodeUnpublish, NodeGetVolumeStats
and NodeGetCapabilities have been implemented.

The nodeserver implementation is inspired by
https://github.com/kubernetes-csi/csi-driver-nfs,
which was previously used for mounting cephcsi-exported
NFS volumes. The current implementation is also
backward compatible with previously created
PVCs.

Signed-off-by: Rakshith R <rar@redhat.com>
2022-08-09 13:36:03 +00:00
Shyamsundar Ranganathan
c2280011d1 rbd: Report remote peer readiness if Up and status.Unknown
The current code incorrectly uses an !A && !B condition to
test A:Up and B:status for a remote peer image.

This should be !A || !B, as we require both conditions to
be in the specified state (Up: true, and status Unknown).
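
A small sketch of the corrected check with plain booleans (not the actual cephcsi types):
```
package main

import "fmt"

// ready requires both conditions to hold, so the early-return guard must
// negate the pair with OR: !(A && B) == !A || !B.
func ready(up, statusMatches bool) bool {
	if !up || !statusMatches {
		return false
	}
	return true
}

func main() {
	fmt.Println(ready(true, true))  // true
	fmt.Println(ready(true, false)) // false; an && guard would wrongly report true here
	fmt.Println(ready(false, true)) // false
}
```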

This is corrected by this commit, and further fixes:
- check and return ready only when a remote site is
found in the status output
- check if all peer sites are ready, if multiple are found
and return ready appropriately

Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
2022-08-09 05:32:15 +00:00
Madhu Rajanna
8d7b6ee59f rbd: consider mirror daemon state for ResyncVolume
During ResyncVolume we check whether the image
is in an error state and, if so, we resync it.
After the resync, the image will move to
either the `Error` or the `Resyncing` state.
If the image is in one of these two
states, we return a successful
response and Ready=false so that the
consumer can wait until the volume is
ready to use. If the image is in any
other state we return an error message
to indicate that syncing is not in progress.
The whole resync and image state change
depends on the rbd mirror daemon. If the
mirror daemon is not running, the image
can be in the Resyncing or Unknown state.
Ramen marks the volume replication as
secondary, and once the resync starts, it
deletes the volume replication CR as part
of the cleanup process.

As we don't have a check for the rbd mirror
daemon, we return a successful resync
response with Ready=false. Due to this false
response Ramen assumes the resync started
and deletes the volume replication CR, and
because of this the cluster goes into a bad
state and needs manual intervention.

fixes #3289

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-08-08 13:26:15 +00:00
Niels de Vos
83df1eae53 rebase: k8s.io/mount-utils/IsNotMountPoint() is deprecated
IsNotMountPoint() is deprecated and Mounter.IsMountPoint() is
recommended to be used instead.

Reported-by: golangci/staticcheck
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-08-04 09:53:07 +00:00
Niels de Vos
10b2277330 util: use k8s.io/mount-utils/NewWithoutSystemd() to prevent logging
NewWithoutSystemd() has been introduced in the k8s.io/mount-utils
package so that systemd is not called while executing functions. This
offers consumers the ability to prevent confusing and scary messages
from getting logged.
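
A minimal sketch of adopting it, assuming NewWithoutSystemd() mirrors mount.New()'s signature:
```
package main

import (
	"fmt"

	mount "k8s.io/mount-utils"
)

func main() {
	// NewWithoutSystemd returns a Mounter that never shells out to
	// systemd-run, so mounts in containers without systemd stay quiet.
	mounter := mount.NewWithoutSystemd("")

	mounts, err := mounter.List()
	fmt.Println(len(mounts), err)
}
```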

See-also: kubernetes/kubernetes#111218
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-08-04 09:53:07 +00:00
Niels de Vos
3a200b6976 rbd: use IsLikelyNotMountPoint() to prevent systemd log messages
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-08-04 09:53:07 +00:00
Niels de Vos
0a173a8a9e nfs: make DeleteVolume (more) idempotent
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-08-03 19:43:16 +00:00
Humble Chirammal
bc9ad3d9f1 rbd: add dummy attacher implementation
Previously, it was a requirement to have the attacher sidecar for CSI
drivers, and it had an implementation of a dummy mode of operation.
However, the skipAttach implementation has been stabilized and the dummy
mode of operation is going to be removed from the external-attacher.
Considering this driver works on VolumeAttachment objects for the NBD
driver use cases, we have to implement dummy ControllerPublish and
ControllerUnpublish calls and thus keep supporting our operations even
in the absence of the dummy mode of operation in the sidecar.

This commit makes ControllerPublish and ControllerUnpublish NOOPs for
the RBD driver.

The CephFS driver does not require the attacher and has already been
made free of the attachment operations.

    Ref# https://github.com/ceph/ceph-csi/pull/3149
    Ref# https://github.com/kubernetes-csi/external-attacher/issues/226
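
A sketch of what such NOOPs could look like (the receiver type is a placeholder, not the actual cephcsi ControllerServer):
```
package rbd

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// ControllerServer is a placeholder receiver for this sketch.
type ControllerServer struct{}

// ControllerPublishVolume is a no-op: attaching is not needed, but returning
// success lets the external-attacher mark the VolumeAttachment as attached.
func (cs *ControllerServer) ControllerPublishVolume(
	ctx context.Context,
	req *csi.ControllerPublishVolumeRequest,
) (*csi.ControllerPublishVolumeResponse, error) {
	return &csi.ControllerPublishVolumeResponse{}, nil
}

// ControllerUnpublishVolume is the matching no-op for detach.
func (cs *ControllerServer) ControllerUnpublishVolume(
	ctx context.Context,
	req *csi.ControllerUnpublishVolumeRequest,
) (*csi.ControllerUnpublishVolumeResponse, error) {
	return &csi.ControllerUnpublishVolumeResponse{}, nil
}
```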

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-08-03 00:25:49 +00:00
Prasanna Kumar Kalever
30244bf11b cephfs: snapshots honor --setmetadata option
`--setmetadata` is false by default; honoring it
keeps the metadata disabled by default.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-08-01 07:15:29 +00:00
Prasanna Kumar Kalever
14d6211d6d cephfs: subvolumes honor --setmetadata option
`--setmetadata` is false by default; honoring it
keeps the metadata disabled by default.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-08-01 07:15:29 +00:00
Prasanna Kumar Kalever
de7128b3a2 cephfs: Add clusterName as metadata on snapshots
Example:
sh-4.4$ ceph fs subvolume snapshot metadata ls myfs csi-vol-ba248f9e-0e75-11ed-b774-8e97192ff5ec \
			csi-snap-ce24e3bb-0e75-11ed-b774-8e97192ff5ec --group_name csi
{
    "csi.ceph.com/cluster/name": "\"K8s-cluster-1\"",
    "csi.storage.k8s.io/volumesnapshot/name": "cephfs-pvc-snapshot",
    "csi.storage.k8s.io/volumesnapshot/namespace": "rook-ceph",
    "csi.storage.k8s.io/volumesnapshotcontent/name": "snapcontent-2e89e1b2-e6e9-48fe-b365-edb493d7022e"
}

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-08-01 07:15:29 +00:00
Prasanna Kumar Kalever
856d7c264c cephfs: handle metadata op-failures with unsupported ceph versions
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 19:37:23 +00:00
Prasanna Kumar Kalever
5f36f7e8bd cephfs: update subvolume snapshot metadata if snapshot already exists.
Make sure to set metadata when the subvolume snapshot already exists,
i.e. if the provisioner pod is restarted while createSnapShot is in
progress and it created the subvolume snapshot but didn't yet set the
metadata.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 19:37:23 +00:00
Prasanna Kumar Kalever
7c9259a45e cephfs: set metadata on the subvolume snapshot on create
Set snapshot-name/snapshot-namespace/snapshotcontent-name details
on subvolume snapshots as metadata on create.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 19:37:23 +00:00
Prasanna Kumar Kalever
8c0dd482fa cephfs: add set/Remove subvolume snapshot metadata utility functions
Add utility functions to set/Remove
snapshot-name/snapshot-namespace/snapshotcontent-name metadata on
subvolume snapshots.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 19:37:23 +00:00
Prasanna Kumar Kalever
51099d60fe cephfs: handle metadata op-failures with unsupported ceph versions
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
11d51ed9b0 cephfs: unset cluster Name metadata
unsets the cluster name metadata key and value on the subvolume

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
21d811096b cephfs: set cluster Name as metadata on the subvolume
This change reads the cluster name from the cmdline args;
the provisioner will set the same on the subvolume.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
466bdf97b2 cephfs: set metadata on restart of provisioner pod
Make sure to set metadata when the subvolume already exists, i.e. if the
provisioner pod is restarted while createVolume is in progress and it
created the subvolume but didn't yet set the metadata.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
6bcb8ecc68 cephfs: set PV/PVC details on the subvolume as metadata on create
This helps monitoring solutions without access to the Kubernetes cluster
to display the PV/PVC/namespace details in their dashboards.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Prasanna Kumar Kalever
ecf03eb6ae cephfs: add set/Get/List/Remove metadata utility functions
Add utility functions to set/Get/List/Remove PV/PVC/PVCNamespace metadata
on subvolume.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-07-28 04:07:52 +00:00
Madhu Rajanna
8c5563a9bc rbd: remove checkHealthyPrimary check
After failover of workloads to the secondary
cluster when the primary cluster is down,
the RBD image is not marked healthy, and the VR
resources are not promoted to Primary;
in VolumeReplication, the `CURRENT STATE`
remains Unknown and doesn't change to Primary.

This happens because the primary cluster went down
and we have force-promoted the image on the
secondary cluster, so the image stays in
up+stopping_replay or could be in any other state.
The current assumption was that the image will
always be `up+stopped`. But the image will be in
`up+stopped` only for a planned failover, and it
could be in any other state if it is a forced
failover. For this reason, remove
checkHealthyPrimary from the PromoteVolume RPC call.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-07-27 09:04:27 +00:00
Niels de Vos
011d4fc81c cleanup: create k8s.io/mount-utils Mounter only once
Recently the k8s.io/mount-utils package added more runtime detection.
When creating a new Mounter, the detection is run every time. This is
unfortunate, as it logs a message like the following:

```
mount_linux.go:283] Detected umount with safe 'not mounted' behavior
```

This message might be useful, so it is probably good to keep it.

In Ceph-CSI there are various locations where Mounter instances are
created. Moving that to the DefaultNodeServer type reduces it to a
single place. Some utility functions need to accept the additional
parameter too, so that has been modified as well.
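
A rough sketch of the idea (simplified types, not the actual csi-common code):
```
package csicommon

import mount "k8s.io/mount-utils"

// DefaultNodeServer carries a single Mounter instance, so the runtime
// detection (and its log message) happens only once per driver.
type DefaultNodeServer struct {
	Mounter mount.Interface
}

// NewDefaultNodeServer creates the Mounter exactly once; node servers and
// utility helpers receive it instead of calling mount.New() themselves.
func NewDefaultNodeServer() *DefaultNodeServer {
	return &DefaultNodeServer{
		Mounter: mount.New(""),
	}
}
```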

See-also: kubernetes/kubernetes#109676
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-07-21 07:14:43 +00:00
takeaki-matsumoto
1025871021 cephfs: Support mount option on nodeplugin
add mount options on nodeplugin side

Signed-off-by: takeaki-matsumoto <takeaki.matsumoto@linecorp.com>
2022-07-18 22:04:12 +00:00
Madhu Rajanna
ceb88d6498 cephfs: remove extra check for restore size
It looks like the CephFS snapshot size is buggy and is
getting removed in CephFS. We cannot get the size
of the snapshot during the CreateVolume call, so we cannot
do any size check at CreateVolume to verify whether the
restore size is smaller or not.

As we are removing this check it also fixes #3147,
but as we don't have any validation at the CSI level for
a smaller restore, we need to depend on the Kubernetes
external-provisioner for it.

fixes: #3147

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-07-18 10:04:14 +00:00
Madhu Rajanna
f171143135 cephfs: round cephfs size to a multiple of 4MiB
Due to the bug in the df stat we need to round off
the subvolume size to align with 4MiB.

Note: the minimum supported size in cephcsi is 1MiB,
so we don't need to take care of KiB.

fixes #3240

More details at https://github.com/ceph/ceph/pull/46905
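
A small sketch of the 4 MiB alignment (not the exact cephcsi helper):
```
package main

import "fmt"

const fourMiB = int64(4 * 1024 * 1024)

// roundUpTo4MiB rounds size up to the nearest multiple of 4 MiB.
func roundUpTo4MiB(size int64) int64 {
	return ((size + fourMiB - 1) / fourMiB) * fourMiB
}

func main() {
	fmt.Println(roundUpTo4MiB(5 * 1024 * 1024)) // 8388608 (8 MiB)
	fmt.Println(roundUpTo4MiB(4 * 1024 * 1024)) // 4194304 (unchanged)
}
```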

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-07-13 18:32:40 +00:00
Humble Chirammal
1856647506 cephfs: go with default permissions while creating subvolumes
While creating subvolumes, the CephFS driver set the mode to `777`
and passed it along to the go-ceph APIs, which caused the subvolume
permissions to be 777. However, if we create a subvolume
directly in the Ceph cluster, the default permission bits are
set, which is 755 for the subvolume. This commit sticks
to the default behaviour even while creating the subvolume.

This also means that we can work with fsGroupPolicy set to
`File` in the CSIDriver object, which is also addressed in this commit.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-07-13 06:49:58 +00:00
Benoît Knecht
507844c9b1 rbd: Use rados namespace when getting clone depth
When the Ceph user is restricted to a specific namespace in the pool, it is
crucial that every interaction with the cluster is done within that namespace.
This wasn't the case in `getCloneDepth()`.

This issue was causing snapshot creation to fail with

> Failed to check and update snapshot content: failed to take snapshot of the
> volume X: "rpc error: code = Internal desc = rbd: ret=-1, Operation not
> permitted"
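
A minimal go-ceph sketch of scoping the IOContext to the rados namespace before any image operations (pool and namespace names are illustrative; error handling trimmed):
```
package main

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
)

func main() {
	conn, _ := rados.NewConn()
	_ = conn.ReadDefaultConfigFile()
	_ = conn.Connect()
	defer conn.Shutdown()

	ioctx, err := conn.OpenIOContext("replicapool")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer ioctx.Destroy()

	// Every operation on this IOContext, including walking the clone
	// chain, must happen inside the restricted rados namespace.
	ioctx.SetNamespace("tenant-a")
}
```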

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
2022-07-07 22:20:29 +00:00
Niels de Vos
14ba1498bf util: reduce systemd related errors while mounting
There are regular reports that identify a non-error as the cause of
failures. The Kubernetes mount-utils package has detection for systemd
based environments, and if systemd is unavailable, the following error
is logged:

    Cannot run systemd-run, assuming non-systemd OS
    systemd-run output: System has not been booted with systemd as init
    system (PID 1). Can't operate.
    Failed to create bus connection: Host is down, failed with: exit status 1

Because of the `failed` and `exit status 1` error message, users might
assume that the mounting failed. This does not need to be the case. The
container-images that the Ceph-CSI project provides do not use
systemd, so the error will get logged with each mount attempt.

By using the newer MountSensitiveWithoutSystemd() function from the
mount-utils package where we can, the number of confusing logs gets
reduced.

Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-07-04 10:02:54 +00:00
Niels de Vos
a1ed6207f6 cephfs: report detailed error message on clone failure
go-ceph provides a new GetFailure() method to retrieve detailed errors
when cloning fails. This is now included in the `cephFSCloneState`
struct, which was a simple string before.

While modifying the `cephFSCloneState` struct, the constants have been
removed, as go-ceph provides them as well.

Fixes: #3140
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-06-30 19:33:41 +00:00
Yati Padia
5c40f1ef33 rbd: remove the clone in case of failure
This commit removes the clone in case
unsetAllMetadata, copyEncryptionConfig or
expand fails for createVolumeFromSnapshot
and CreateSnapshot.
It also removes the clone in case of
any failure in createCloneFromImage.

issue: #3103

Signed-off-by: Yati Padia <ypadia@redhat.com>
2022-06-30 05:50:16 +00:00
Prasanna Kumar Kalever
9fa3c8382b cleanup: reduce struct padding
internal/rbd/rbd_util.go:89:15: struct of size 312 bytes could be of
size 304 bytes:
```
struct{
	RbdImageName   	string,
	ImageID        	string,
	VolID          	string,
	Monitors       	string,
	JournalPool    	string,
	Pool           	string,
	RadosNamespace 	string,
	ClusterID      	string,
	RequestName    	string,
	NamePrefix     	string,
	ParentName     	string,
	ParentPool     	string,
	ClusterName    	string,
	Owner          	string,
	VolSize        	int64,
	StripeCount    	uint64,
	StripeUnit     	uint64,
	ObjectSize     	uint64,
	ImageFeatureSet	github.com/ceph/go-ceph/rbd.FeatureSet,
	encryption
*github.com/ceph/ceph-csi/internal/util.VolumeEncryption,
	CreatedAt
*google.golang.org/protobuf/types/known/timestamppb.Timestamp,
	conn
*github.com/ceph/ceph-csi/internal/util.ClusterConnection,
	ioctx          	*github.com/ceph/go-ceph/rados.IOContext,
	Primary        	bool,
	EnableMetadata 	bool,
}
(maligned)
type rbdImage struct {
              ^
make: *** [Makefile:118: go-lint] Error 1
```
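
The general idea behind such a fix is to group small fields so padding shrinks; a toy example, not the actual rbdImage layout:
```
package main

import (
	"fmt"
	"unsafe"
)

// padded interleaves small and large fields, forcing alignment padding.
type padded struct {
	a bool
	b int64
	c bool
}

// packed keeps the large field first and the bools together.
type packed struct {
	b int64
	a bool
	c bool
}

func main() {
	fmt.Println(unsafe.Sizeof(padded{}), unsafe.Sizeof(packed{})) // 24 16
}
```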

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-06-28 19:12:53 +00:00
Prasanna Kumar Kalever
29a3f4acf6 cleanup: ReconcilePersistentVolume consider passing it by pointer
Address: hugeParam linter

internal/controller/persistentvolume/persistentvolume.go:59:7:
hugeParam: r is heavy (80 bytes); consider passing it by pointer
(gocritic)
[...]
internal/controller/persistentvolume/persistentvolume.go:135:7:
hugeParam: r is heavy (80 bytes); consider passing it by pointer
(gocritic)
func (r ReconcilePersistentVolume) reconcilePV(ctx context.Context, obj
runtime.Object) error {}

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-06-28 19:12:53 +00:00
Prasanna Kumar Kalever
caf4090657 rbd: provide option to disable setting metadata on rbd images
As we added support to set the metadata on the rbd images created for
the PVC and volume snapshot, by default metadata is set on all the images.

As we have seen, we hit issue #2327 a lot of times with this and
start to leave a lot of stale images. Currently, we rely on
`--extra-create-metadata=true` to decide whether to set the metadata;
we cannot set this option to false to disable setting metadata because we
use it for encryption too.

This change provides an option to disable setting the image
metadata when starting cephcsi.

Fixes: #3009
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-06-28 19:12:53 +00:00
Madhu Rajanna
8a47904e8f rbd: add unit test for checkHealthyPrimary
Removed the code in checkHealthyPrimary which
makes the Ceph call; it is passed as input now.
Added a unit test for the checkHealthyPrimary function.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-06-28 13:17:11 +00:00
Madhu Rajanna
53e76fab69 rbd: fix checkHealthyPrimary to consider up+stopped state
We need to check that the image is in the up+stopped state,
not in just any one of those states; for that we need to use
an OR check, not an AND check.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-06-28 13:17:11 +00:00
Madhu Rajanna
704cb5c941 revert: rbd: consider remote image health for primary
When the image is force-promoted to primary on the
cluster, the remote image might not be in the replaying
state due to a split-brain situation. This
reverts commit
c3c87f2ef3, which we added
to check the remote image status.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-06-28 13:17:11 +00:00
Prasanna Kumar Kalever
1da446d2f2 rbd: healer detect Kubernetes version for right StagingTargetPath
Kubernetes 1.24 and newer use a different path for staging the volume.
That means the CSI-driver is requested to mount the volume at another
location, compared to previous versions of Kubernetes. CSI-drivers
implementing the volumeHealer must receive the correct path, otherwise
after a nodeplugin restart the NBD mounts will bail out when attempting
the NodeStageVolume() call and return an error.

See-also: kubernetes/kubernetes#107065

Fixes: #3176
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-06-24 12:23:29 +00:00
Madhu Rajanna
3acaa018db rbd: issue resync only if the force flag is set
During failover we demote the volume on the primary;
as the image is not yet promoted on the remote cluster,
there are spurious split-brain errors reported by RBD,
and the cephcsi resync will attempt to resync from the
"known" secondary, which will cause data loss.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-06-23 13:28:18 +00:00
Madhu Rajanna
7a2dd4c3cf rbd: create token and use it for vault SA
Create the token if the Kubernetes version is
1.24+ and use it for the Vault SA.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Rakshith R <rar@redhat.com>
2022-06-17 11:37:59 +00:00
Robert Vasek
fd7559a903 cephfs: added support for snapshot-backed volumes
This commit implements most of
docs/design/proposals/cephfs-snapshot-shallow-ro-vol.md design document;
specifically (de-)provisioning of snapshot-backed volumes, mounting such
volumes as well as mounting pre-provisioned snapshot-backed volumes.

Signed-off-by: Robert Vasek <robert.vasek@cern.ch>
2022-06-16 09:44:27 +00:00
Robert Vasek
0807fd2e6c journal: added csi.volume.backingsnapshotid image attribute
Signed-off-by: Robert Vasek <robert.vasek@cern.ch>
2022-06-16 09:44:27 +00:00
Madhu Rajanna
4b57cc3ec5 rbd: add support for rbd striping
RBD supports creating rbd images with
object size, stripe unit and stripe count
to support striping. This PR adds the support
for the same.

More details about striping at
https://docs.ceph.com/en/quincy/man/8/rbd/#striping

fixes: #3124

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-06-09 18:59:00 +00:00
Prasanna Kumar Kalever
09a8e5e9e6 rbd: unset cluster Name metadata
unsets the cluster name metadata key and value on the RBD image

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-06-08 16:23:59 +00:00
Prasanna Kumar Kalever
2880c25fd6 rbd: set cluster Name as metadata on the image
This change reads the cluster name from the cmdline args;
the provisioner will set the same on the RBD images.

Fixes: #2973
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-06-08 16:23:59 +00:00
Prasanna Kumar Kalever
deb003e605 cleanup: use prefix instead of hardcoding csiParameterPrefix
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-06-08 16:23:59 +00:00
Madhu Rajanna
1952a9b4b3 ci: fix all linter errors found in golangci-lint
Fixing all the linter errors found in golang-ci
lint v1.46.2

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-06-03 12:55:54 +00:00
Madhu Rajanna
c9943320ac cephfs: skip NetNamespaceFilePath if the volume is pre-provisioned
In the case of a pre-provisioned volume the clusterID is
not set in the volume context; as the clusterID is missing
we cannot extract the NetNamespaceFilePath from the
configuration file. For static volumes and dynamically
provisioned volumes the clusterID is set.

Note: this is a special case to support mounting a PV
without the clusterID parameter.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-06-03 07:25:25 +00:00
Rakshith R
7688306f87 rbd: use vaultAuthPath variable name in error msg
Before the change, the error msg was the following:
```
failed to set VAULT_AUTH_MOUNT_PATH in Vault config: path is empty
```
`vaultAuthPath` is the actual variable name set by the
user. The error message will now be the following:
```
failed to set "vaultAuthPath" in vault config: path is empty
```

Signed-off-by: Rakshith R <rar@redhat.com>
2022-05-26 07:37:48 +00:00
Rakshith R
894c20f792 nfs: add support for pvc-pvc clone
This commit adds support for pvc-pvc clone.
Only the capability needed to be advertised; the
underlying support is already provided by the cephfs
backend.

Signed-off-by: Rakshith R <rar@redhat.com>
2022-05-24 18:13:02 +00:00
Rakshith R
24515b509f nfs: add support for create & delete snapshot
This commit adds support for creation and
deletion of nfs snapshots based on cephfs.

Signed-off-by: Rakshith R <rar@redhat.com>
2022-05-24 18:13:02 +00:00
Prasanna Kumar Kalever
6470cf3343 rbd: fix bug handling GetKrbdSupportedFeatures()
Continue running the rbd driver when the /sys/bus/rbd/supported_features
file is missing; do not bail out.
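
A sketch of treating the missing sysfs file as a non-error (not the actual cephcsi helper):
```
package main

import (
	"errors"
	"fmt"
	"os"
	"strconv"
	"strings"
)

const krbdSupportedFeaturesFile = "/sys/bus/rbd/supported_features"

// krbdFeatures returns the kernel rbd feature bitmask, or 0 when the sysfs
// file does not exist (older kernels); only unexpected errors are returned.
func krbdFeatures() (uint64, error) {
	data, err := os.ReadFile(krbdSupportedFeaturesFile)
	if errors.Is(err, os.ErrNotExist) {
		return 0, nil // missing file: fall back, do not bail out
	}
	if err != nil {
		return 0, err
	}
	// The file contains a hex value such as 0x3fffffffffffffff.
	return strconv.ParseUint(strings.TrimSpace(string(data)), 0, 64)
}

func main() {
	features, err := krbdFeatures()
	fmt.Println(features, err)
}
```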

Fixes: #2678
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-05-15 15:10:08 +00:00
Prasanna Kumar Kalever
83cc1b0e58 rbd: handle when krbdFeatures is zero
krbdFeatures is set to zero when the kernel version is < 3.8, i.e. in the
case where /sys/bus/rbd/supported_features is absent and we are unable to
prepare the krbd attributes based on the kernel version.

When krbdFeatures is set to zero, fall back to NBD only when autofallback
is turned ON.

Fixes: #2678
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-05-15 15:10:08 +00:00
Prasanna Kumar Kalever
e53fd87154 rbd: prepare krbd feature attrs if supported_features file is absent
Upstream, /sys/bus/rbd/supported_features is part of Linux kernel v4.11.0.
Prepare the attributes and use them in case
/sys/bus/rbd/supported_features is missing.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-05-15 15:10:08 +00:00
Prasanna Kumar Kalever
27f503c144 rbd: unset parent PVC metadata on CreateVolume From Volume
Unset the parent PVC metadata on the temp clone rbd image

Fixes: #2970
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-05-12 15:54:09 +00:00
Prasanna Kumar Kalever
e0f34a6d60 rbd: unset snapshot metadata on CreateVolume From snapshot
Unset the snapshot metadata from the rbd image created from the snapshot

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-05-12 15:54:09 +00:00
Prasanna Kumar Kalever
d89c5fb39f rbd: unset PVC metadata on CreateSnapshot
Unset the PVC metadata on the rbd image created for the snapshot

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-05-12 15:54:09 +00:00
Prasanna Kumar Kalever
bac33262ae rbd: add unset volume/snapshot metadata utility functions
Added
GetVolumeMetadataKeys()
GetSnaoshotMetadataKeys()
unsetVolumeMetadata() and
unsetSnapshotMetadata()

functions.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-05-12 15:54:09 +00:00
Prasanna Kumar Kalever
1fd5277b3c cleanup: simplify setVolumeMetadata and rename it
Move k8s.GetVolumeMetadata() out of setVolumeMetadata() and rename it to
setAllMetadata() so that the same can be used for setting volume and
snapshot metadata.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-05-12 15:54:09 +00:00
Niels de Vos
36e51402cb nfs: support ExpandVolume CSI procedure
There is not much the NFS-provisioner needs to do to expand a volume,
everything is handled by the CephFS components.

NFS does not need a resize on the node, so only ControllerExpandVolume
is required.

Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-05-10 17:43:59 +00:00
Madhu Rajanna
70674565df rbd: consider rbd as default mounter if not set
For the default mounter the mounter option
will not be set in the storageclass, and as it is
not available in the storageclass the same will not
be set in the volume context; because of this the
mapOptions are getting discarded. If the mounter
is not set, assume it's an rbd mounter.

Note: if the mounter is not set in the storageclass
we could set it in the volume context explicitly.
Doing this check in the node server supports
existing volumes (backward compatibility), and as the
check is minimal we are not altering the volume context.

fixes: #3076

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-05-09 20:00:11 +00:00
Marcus Röder
a95a6213eb util: support systems using the new cgroup v2 structure
With cgroup v2, the location of the pids.max file changed and so did the
/proc/self/cgroup file

new /proc/self/cgroup file
`
0::/user.slice/user-500.slice/session-14.scope
`

old file:
`
11:pids:/user.slice/user-500.slice/session-2.scope
10:blkio:/user.slice
9:net_cls,net_prio:/
8:perf_event:/
...
`

There is no directory per subsystem (e.g. /sys/fs/cgroup/pids) anymore; all
files are now in one directory.
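
A simplified sketch of the distinction when locating pids.max (the paths assume the default /sys/fs/cgroup mount point):
```
package main

import (
	"fmt"
	"os"
	"strings"
)

// pidsMaxPath derives the pids.max location from a /proc/self/cgroup line.
func pidsMaxPath(cgroupLine string) string {
	fields := strings.SplitN(strings.TrimSpace(cgroupLine), ":", 3)
	if len(fields) != 3 {
		return ""
	}
	if fields[0] == "0" && fields[1] == "" {
		// cgroup v2: single unified hierarchy, no per-controller directory.
		return "/sys/fs/cgroup" + fields[2] + "/pids.max"
	}
	if strings.Contains(fields[1], "pids") {
		// cgroup v1: the pids controller has its own directory.
		return "/sys/fs/cgroup/pids" + fields[2] + "/pids.max"
	}
	return ""
}

func main() {
	data, _ := os.ReadFile("/proc/self/cgroup")
	for _, line := range strings.Split(string(data), "\n") {
		if p := pidsMaxPath(line); p != "" {
			fmt.Println(p)
			break
		}
	}
}
```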

fixes: https://github.com/ceph/ceph-csi/issues/3085

Signed-off-by: Marcus Röder <m.roeder@yieldlab.de>
2022-05-07 20:38:48 +00:00
Rakshith R
f1ccc4eced rbd: support pvc-pvc clone with different sc & encryption
This commit makes modifications to allow pvc-pvc clone
with different storageclasses having different encryption
configs.
It also modifies `copyEncryptionConfig()` to
include an `isEncrypted()` check within the function.

Signed-off-by: Rakshith R <rar@redhat.com>
2022-05-06 10:32:21 +00:00
Rakshith R
bd57feb26e rbd: use vaultAuthPath variable name in error msg
Before the change, the error msg was the following:
```
failed to set VAULT_AUTH_MOUNT_PATH in Vault config: path is empty
```
`vaultAuthPath` is the actual variable name set by the
user. The error message will now be the following:
```
failed to set "vaultAuthPath" in vault config: path is empty
```

Signed-off-by: Rakshith R <rar@redhat.com>
2022-05-05 05:49:31 +00:00
Niels de Vos
9d7faf850f nfs: delete the CephFS volume when the export is already removed
In case the NFS-export has already been removed from the NFS-server, but
the CSI Controller was restarted, a retry to remove the NFS-volume will
fail with an error like:

> GRPC error: ....: response status not empty: "Export does not exist"

When this error is reported, assume the NFS-export was already removed
from the NFS-server configuration, and continue with deleting the
backend volume.

Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-05-04 21:31:06 +00:00
Madhu Rajanna
d2bc9743f7 cephfs: add netNamespaceFilePath for CephFS
As the same host directory is not shared between
the cephfs and the rbd plugin pods, we need
to keep the netNamespaceFilePath separate
for cephfs and rbd. The CephFS plugin will
use this path to execute mount -t commands.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-04-19 12:28:46 +00:00
Madhu Rajanna
eb4bfb7326 cleanup: use block comment for ClusterInfo example
Adjusted the mix of tabs and spaces and also
used a block comment for better readability.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-04-19 12:28:46 +00:00
Madhu Rajanna
b4acbd08a5 rbd: move radosNamespace to RBD section
radosNamespace is more specific to
RBD, not the general Ceph configuration. We
now introduce a new RBD section for RBD-specific
options, moving radosNamespace to the RBD section
while keeping radosNamespace under the
global Ceph-level configuration for backward
compatibility.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-04-19 12:28:46 +00:00
Madhu Rajanna
766346868e util: Add RBD specific options in clusterInfo
As the netNamespaceFilePath can be separate for
cephfs and rbd, add the netNamespaceFilePath
for RBD. This will help us keep RBD- and
CephFS-specific options separate.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-04-19 12:28:46 +00:00
Niels de Vos
2b71aac752 nfs: return gRPC status from CephFS CreateVolume failure
The NFS Controller returns a non-gRPC error in case the CreateVolume
call for the CephFS volume fails. It is better to return the gRPC-error
that the CephFS Controller passed along.

Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-04-19 08:23:16 +00:00
Humble Chirammal
fcd0f4713a cleanup: correct typos in test description and source code
This commit corrects typos in various places.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-04-18 10:29:08 +00:00
Humble Chirammal
4c4879ba8b cleanup: remove import alias for fence library
This commit removes an unneeded import alias of the fence library
from the network_fence test.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-04-18 10:29:08 +00:00
Madhu Rajanna
c245436ec4 util: fix logging in ExecuteCommandWithNSEnter
log the nsenter and its argument after executing
the command with the nsenter CLI.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-04-14 12:17:21 +00:00
Niels de Vos
28369702d2 nfs: use go-ceph API for creating/deleting exports
Recent versions of Ceph allow calling the NFS-export management
functions over the go-ceph API.

This seems incompatible with older versions that have been tested with
the `ceph nfs` commands that this commit replaces.

Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-04-14 08:01:45 +00:00
Madhu Rajanna
d886ab0d66 rbd: use leases for leader election
Use leases for leader election instead
of the deprecated configmap-based leader
election.

This PR makes leases the default leader election
mechanism, see
https://github.com/kubernetes-sigs/controller-runtime/pull/1773;
the default change from configmap to configmap+leases was done with
https://github.com/kubernetes-sigs/controller-runtime/pull/1144.

Release notes:
https://github.com/kubernetes-sigs/controller-runtime/releases/tag/v0.7.0

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-04-14 06:46:50 +00:00
Madhu Rajanna
64a9b1fa59 rbd: consider remote image health for primary
To consider the image healthy during the Promote
operation, we currently check only the image
state on the primary site. If the network is flaky
or the remote site is down, the image health is
not as expected. To make sure the image is healthy
across the clusters, check the state on both the local
and the remote cluster.

some details:
https://bugzilla.redhat.com/show_bug.cgi?id=2014495

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-04-13 08:37:23 +00:00
Madhu Rajanna
dffb6e72c2 rbd: check nbd tool features only for rbd driver
Calling setRbdNbdToolFeatures inside an init()
means it gets called in main.go for both the cephfs and rbd
drivers. Instead of calling it in the init function,
call it in the rbd driver.go, as this is specific
to rbd.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-04-11 21:18:27 +00:00
Humble Chirammal
959df4dbac doc: correct typos in struct field comments and release.md
corrected strings in the release guide and util server.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-04-11 06:23:25 +00:00
Prasanna Kumar Kalever
41fe2c7dda rbd: set metadata on the snapshot
Set snapshot-name/snapshot-namespace/snapshotcontent-name details
on RBD backend snapshot image as metadata on snapshot

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-04-08 15:43:14 +00:00
Prasanna Kumar Kalever
0ef79c6fc0 rbd: set metadata on restart of provisioner pod
Make sure to set metadata when the image already exists, i.e. if the
provisioner pod is restarted while createVolume is in progress and it
created the image but didn't yet set the metadata.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-04-08 15:43:14 +00:00
Prasanna Kumar Kalever
ae5925f04c rbd: update PV/PVC metadata on a reattach of PV
For example, if a PVC was deleted by setting `persistentVolumeReclaimPolicy`
to `Retain` on the PV, and the PV is reattached to a new PVC, we make sure
to update the PV/PVC image metadata on a PV reattach.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-04-08 15:43:14 +00:00
Prasanna Kumar Kalever
0119d69ab2 rbd: set PV/PVC details on the image as metadata on create
This helps monitoring solutions without access to the Kubernetes cluster
to display the PV/PVC/namespace details in their dashboards.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-04-08 15:43:14 +00:00
Prasanna Kumar Kalever
4d750ed0e5 rbd: add set/Get VolumeMetadata() utility function
Define and use PV and PVC metadata keys used by external provisioner.
The CSI external-provisioner (v1.6.0+) introduces the
--extra-create-metadata flag, which automatically sets map<string, string>
parameters in the CSI CreateVolumeRequest.

Add utility functions to set/Get PV/PVC/PVCNamespace metadata on image
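
A hedged sketch of setting those keys on an image with go-ceph's Image.SetMetadata(); the helper name is illustrative and error handling is trimmed:
```
package main

import (
	"fmt"

	"github.com/ceph/go-ceph/rbd"
)

// Metadata keys populated by the external-provisioner when
// --extra-create-metadata is enabled.
var pvcMetadataKeys = []string{
	"csi.storage.k8s.io/pvc/name",
	"csi.storage.k8s.io/pvc/namespace",
	"csi.storage.k8s.io/pv/name",
}

// setImageMetadata copies the known keys from the CreateVolume parameters
// onto the rbd image.
func setImageMetadata(img *rbd.Image, params map[string]string) error {
	for _, key := range pvcMetadataKeys {
		if value, ok := params[key]; ok {
			if err := img.SetMetadata(key, value); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	// Without a cluster connection, just show the keys that would be set.
	fmt.Println(pvcMetadataKeys)
}
```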

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-04-08 15:43:14 +00:00
Madhu Rajanna
7b2aef0d81 util: add support for the nsenter
Add support to run rbd map and mount -t
commands with nsenter.

The complete design of pod/multus networking
is described at
https://github.com/rook/rook/blob/master/design/ceph/multus-network.md#csi-pods
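
A rough sketch of wrapping a command with nsenter and a network-namespace file (generic nsenter usage, not the exact cephcsi invocation):
```
package main

import (
	"fmt"
	"os/exec"
)

// runInNetNamespace executes cmd with its arguments inside the network
// namespace referenced by netNSFilePath.
func runInNetNamespace(netNSFilePath, cmd string, args ...string) ([]byte, error) {
	nsenterArgs := append([]string{"--net=" + netNSFilePath, "--", cmd}, args...)

	return exec.Command("nsenter", nsenterArgs...).CombinedOutput()
}

func main() {
	out, err := runInNetNamespace("/run/netns/example", "ip", "addr")
	fmt.Println(string(out), err)
}
```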

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-04-08 10:23:21 +00:00
Prasanna Kumar Kalever
d760d0ab6d rbd: check for cookie support from kernel
Currently we only check whether the rbd-nbd tool supports the cookie
feature. This change also gates cookie addition on the kernel version.

Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
2022-04-04 09:51:13 +00:00
Madhu Rajanna
f8bbd2f60f cephfs: fix omap deletion in DeleteSnapshot
The omap is stored with the requested
snapshot name, not with the subvolume
snapshot name. This fix uses the correct
snapshot request name to clean up the omap
once the subvolume snapshot is deleted.

fixes: #2974

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-03-31 13:46:03 +00:00
Niels de Vos
1da19680b4 nfs: support new and old NFS-management commands
The `ceph nfs export ...` commands have changed in recent Ceph releases.
Use the most recent command as a default, fall back to the older command
when an error is reported.

This should make the NFS-provisioner work on any current Ceph version.

Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-03-31 11:28:40 +00:00
Madhu Rajanna
f90408be4d rbd: increase force promote timeout to 2 minutes
Increase the timeout to 2 minutes to give enough time
for the rollback to complete.
As the rollback is performed by the force-promote command,
it may at times take more than a minute
(depending roughly on the dirty blocks that need to be
rolled back) to complete.

The extra minute is useful to avoid
multiple calls to complete the rollback and, in
extreme corner cases, to avoid failures in the
first instance of the call when the mirror watcher
is not yet removed (after scaling down the
RBD mirror instance).

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-03-30 13:46:27 +00:00
Thibaut Blanchard
e874c9c11b rbd: fix topology snapshot pool
Restoring a snapshot to a new PVC results in a wrong
dataPoolName when the initial volume is linked
to a storageClass with topology constraints and erasure coding.

Signed-off-by: Thibaut Blanchard <thibaut.blanchard@gmail.com>
2022-03-30 04:40:30 +00:00
Niels de Vos
885295fcc9 nfs: store the NFS-cluster name in the journal
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-03-28 11:23:17 +00:00
Niels de Vos
3b4d193ca8 journal: add StoreAttribute/FetchAttribute
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-03-28 11:23:17 +00:00
Niels de Vos
010fd816dd nfs: store the calling Context in NFSVolume
NFSVolume instances are short-lived; they only exist for a certain gRPC
procedure. It is easier to store the calling Context in the NFSVolume
struct than to pass it to some of the functions that require it.

Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-03-28 11:23:17 +00:00
Niels de Vos
6d83df9cc9 nfs: add basic provisioner with create/delete procedures
These NFS Controller and Identity servers are the base for the new
provisioner. The functionality is currently extremely limited, follow-up
PRs will implement various CSI procedures.

CreateVolume is implemented with the bare minimum. This makes it
possible to create a volume, and mount it with the
kubernetes-csi/csi-driver-nfs NodePlugin.

DeleteVolume unexports the volume from the Ceph managed NFS-Ganesha
service. In case the Ceph cluster provides multiple NFS-Ganesha
deployments, things might not work as expected. This is going to be
addressed in follow-up improvements.

Lots of TODO comments need to be resolved before this can be declared
"production ready". Unit- and e2e-tests are missing as well.

Signed-off-by: Niels de Vos <ndevos@redhat.com>
2022-03-28 11:23:17 +00:00
Robert Vasek
f6ae612003 util: added reference tracker
RT, reference tracker, is a key-based implementation of a reference counter.
Unlike an integer-based counter, RT counts references by tracking unique
keys. This allows accounting in situations where idempotency must be
preserved. It guarantees there will be no duplicate increments or decrements
of the counter.
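
A condensed sketch of the key-based counting idea (the actual implementation is more involved):
```
package main

import "fmt"

// refTracker counts references by unique key, so repeating the same
// increment or decrement is a no-op and idempotency is preserved.
type refTracker struct {
	refs map[string]struct{}
}

func newRefTracker() *refTracker {
	return &refTracker{refs: make(map[string]struct{})}
}

// add returns true only the first time a key is added.
func (rt *refTracker) add(key string) bool {
	if _, ok := rt.refs[key]; ok {
		return false
	}
	rt.refs[key] = struct{}{}
	return true
}

// remove returns true when no references remain after removing the key.
func (rt *refTracker) remove(key string) bool {
	delete(rt.refs, key)
	return len(rt.refs) == 0
}

func main() {
	rt := newRefTracker()
	fmt.Println(rt.add("volume-a"))    // true
	fmt.Println(rt.add("volume-a"))    // false: duplicate increment ignored
	fmt.Println(rt.remove("volume-a")) // true: no references left
}
```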

Signed-off-by: Robert Vasek <robert.vasek@cern.ch>
2022-03-27 19:24:26 +00:00
Rakshith R
40de75e0db rbd: modify oidc token file path according to FHS 3.0
OIDC token file path has been modified from
`/var/run/secrets/token` to `/run/secrets/tokens`.
This has been done to ensure compliance with
FHS 3.0.

refer:
https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch05s13.html

Signed-off-by: Rakshith R <rar@redhat.com>
2022-03-23 13:29:35 +00:00
Madhu Rajanna
8c5e414d53 rbd: do not read pvc namespace from volume attributes
Below are the 3 different cases where we need
the PVC namespace for encryption

* CreateVolume:- Read the namespace from the
createVolume parameters and store it in the omap
* NodeStage:- Read the namespace from the omap
not from the volumeContext
* Regenerate:- Read the pvc namespace from the claimRef
not from the volumeAttributes.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-03-21 08:54:43 +00:00
Madhu Rajanna
77011fbc61 cephfs: remove kubernetes csi prefixed parameters
Remove kubernetes csi prefixed parameters
from the volumeContext, as we don't want
to store them in the PV VolumeAttributes.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-03-21 08:54:43 +00:00
Madhu Rajanna
a7315a04c1 rbd: remove kubernetes csi prefixed parameters
Remove kubernetes csi prefixed parameters
from the volumeContext, as we don't want
to store them in the PV VolumeAttributes.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-03-21 08:54:43 +00:00
Madhu Rajanna
366c2ace31 util: add helper to get pvcnamespace from input
Added a helper function to return the PVC namespace
name from the input parameters.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2022-03-21 08:54:43 +00:00