NFS volumes, similar to CephFS volumes, can support fsGroupPolicy set to
File. Kubernetes may then use fsGroup to change the permissions and
ownership of the volume to match the user-requested fsGroup in the pod's
SecurityPolicy, regardless of fstype or access mode.
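As a rough illustration, a CSIDriver object with this policy could look
like the sketch below; the driver name `nfs.csi.ceph.com` and the use of
the `k8s.io/api/storage/v1` types are assumptions for this example, not
taken from this change.
```
package nfs

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newNFSCSIDriver sketches a CSIDriver object with fsGroupPolicy set to
// File, so the kubelet applies the pod's fsGroup to the mounted volume.
func newNFSCSIDriver() *storagev1.CSIDriver {
	fsGroupPolicy := storagev1.FileFSGroupPolicy

	return &storagev1.CSIDriver{
		// assumed driver name, for illustration only
		ObjectMeta: metav1.ObjectMeta{Name: "nfs.csi.ceph.com"},
		Spec: storagev1.CSIDriverSpec{
			// File: permissions and ownership are changed to match the
			// requested fsGroup, regardless of fstype or access mode
			FSGroupPolicy: &fsGroupPolicy,
		},
	}
}
```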
Signed-off-by: Rakshith R <rar@redhat.com>
This commit cleans up a for loop to use the index to access values
instead of copying each value into a new variable while iterating.
```
internal/util/csiconfig.go:103:2: rangeValCopy: each \
iteration copies 136 bytes (consider pointers or indexing) \
(gocritic)
for _, cluster := range config {
```
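A minimal sketch of the index-based form; `ClusterInfo` and its fields
are stand-ins for the actual config entry type in
`internal/util/csiconfig.go`.
```
package util

// ClusterInfo is a stand-in for the config entry type flagged by gocritic.
type ClusterInfo struct {
	ClusterID string
	Monitors  []string
}

// findCluster accesses each entry in place through the slice index instead
// of copying the struct on every iteration.
func findCluster(config []ClusterInfo, clusterID string) *ClusterInfo {
	for i := range config {
		if config[i].ClusterID == clusterID {
			return &config[i]
		}
	}

	return nil
}
```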
Signed-off-by: Rakshith R <rar@redhat.com>
This commit modifies the nfs daemonset to use the nfs nodeserver. An
`nfs.NetNamespaceFilePath` example is added.
Signed-off-by: Rakshith R <rar@redhat.com>
This commit adds an nfs nodeserver capable of mounting nfs volumes, even
with pod networking, using an NSenter design similar to rbd and cephfs.
NodePublish, NodeUnpublish, NodeGetVolumeStats and NodeGetCapabilities
have been implemented.
The nodeserver implementation is inspired by
https://github.com/kubernetes-csi/csi-driver-nfs, which was previously
used for mounting cephcsi-exported nfs volumes. The current
implementation is also backward compatible with previously created
PVCs.
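A minimal sketch of how the advertised node capabilities could look,
assuming the CSI spec Go bindings; the `NodeServer` type and the exact
capability set are illustrative, not copied from this change.
```
package nfs

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// NodeServer is a stand-in for the NFS node server type.
type NodeServer struct{}

// NodeGetCapabilities advertises the node RPCs this server supports; only
// GET_VOLUME_STATS is advertised in this sketch.
func (ns *NodeServer) NodeGetCapabilities(
	ctx context.Context,
	req *csi.NodeGetCapabilitiesRequest,
) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{
			{
				Type: &csi.NodeServiceCapability_Rpc{
					Rpc: &csi.NodeServiceCapability_RPC{
						Type: csi.NodeServiceCapability_RPC_GET_VOLUME_STATS,
					},
				},
			},
		},
	}, nil
}
```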
Signed-off-by: Rakshith R <rar@redhat.com>
The current code uses an !A && !B condition incorrectly to test A:Up and
B:status for a remote peer image. This should be !A || !B, as we require
both conditions to be in the specified state (Up: true, and status
Unknown).
This commit corrects the condition and further fixes:
- check and return ready only when a remote site is
  found in the status output
- check if all peer sites are ready when multiple are found,
  and return ready appropriately
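A minimal sketch of the corrected check, with `peerSiteStatus` and the
"unknown" state string as stand-ins for the actual mirror status types.
```
package rbd

// peerSiteStatus is a stand-in for the per-site entry parsed from the
// mirror image status output.
type peerSiteStatus struct {
	Up    bool
	State string
}

// allPeersReady returns ready only when at least one remote site is present
// and every remote site is in the required state (Up: true, state unknown).
func allPeersReady(peers []peerSiteStatus) bool {
	if len(peers) == 0 {
		// no remote site found in the status output: not ready
		return false
	}
	for _, p := range peers {
		// either condition failing means this peer is not ready,
		// hence || instead of the earlier &&
		if !p.Up || p.State != "unknown" {
			return false
		}
	}

	return true
}
```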
Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
During ResyncVolume we check whether the image is in an error state, and
if so we resync. After the resync, the image will move to either the
`Error` or the `Resyncing` state. If the image is in either of these two
states, we return a successful response with Ready=false so that the
consumer can wait until the volume is ready to use. If the image is in
any other state, we return an error to indicate that the syncing is not
in progress.
The whole resync and image state change depends on the rbd mirror
daemon. If the mirror daemon is not running, the image can be in the
Resyncing or Unknown state. Ramen marks the volume replication as
secondary, and once the resync starts, it deletes the volume replication
CR as a cleanup step.
As we don't have a check for the rbd mirror daemon, we are returning a
resync success response with Ready=false. Due to this false response,
Ramen assumes the resync has started and deletes the volume replication
CR, and because of this the cluster goes into a bad state and needs
manual intervention.
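A minimal sketch of the intended decision flow; the state strings and the
`daemonRunning` input are simplified stand-ins for the actual mirror
daemon and image status queries.
```
package rbd

import (
	"errors"
	"fmt"
)

// resyncReady returns success with ready=false only while the resync is
// actually able to progress; otherwise it returns an error so the caller
// does not assume the resync has started.
func resyncReady(state string, daemonRunning bool) (ready bool, err error) {
	// without a running rbd mirror daemon the image state cannot progress,
	// so do not report the resync as started
	if !daemonRunning {
		return false, errors.New("rbd mirror daemon is not running")
	}

	switch state {
	case "error", "resyncing":
		// resync is starting or in progress: success, but not ready yet
		return false, nil
	default:
		return false, fmt.Errorf("image is in %q state, resync is not in progress", state)
	}
}
```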
Fixes: #3289
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Comment out the `comment: ` setting, since it does not have any options
set and otherwise throws the following error.
```
The current Mergify configuration is invalid
required key not provided @ defaults → actions → comment → message
```
Signed-off-by: Rakshith R <rar@redhat.com>
At present, a check is performed to validate that the kube version is
v1.17, and this commit removes it.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
IsNotMountPoint() is deprecated and Mounter.IsMountPoint() is
recommended instead.
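A minimal sketch of the replacement call; `mountPointChecker` is a local
stand-in interface (which the mount-utils Mounter satisfies) so the
example stays self-contained. Note the boolean has the opposite meaning
of the old IsNotMountPoint() helper.
```
package util

// mountPointChecker captures just the method used in this sketch.
type mountPointChecker interface {
	IsMountPoint(file string) (bool, error)
}

// isMountPoint uses the recommended Mounter.IsMountPoint() method instead
// of the deprecated IsNotMountPoint() package helper.
func isMountPoint(mounter mountPointChecker, path string) (bool, error) {
	return mounter.IsMountPoint(path)
}
```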
Reported-by: golangci/staticcheck
Signed-off-by: Niels de Vos <ndevos@redhat.com>
NewWithoutSystemd() has been introduced in the k8s.io/mount-utils
package so that systemd is not called while executing functions. This
offers consumers the ability to prevent confusing and scary messages
from getting logged.
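A minimal sketch of how the new constructor could be used; the empty
mounterPath argument mirroring mount.New("") is an assumption for this
example.
```
package util

import (
	mount "k8s.io/mount-utils"
)

// newMounter returns a mounter that does not call out to systemd while
// executing mount functions, so no confusing systemd messages get logged.
func newMounter() mount.Interface {
	// an empty mounterPath selects the default mount binary
	return mount.NewWithoutSystemd("")
}
```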
See-also: kubernetes/kubernetes#111218
Signed-off-by: Niels de Vos <ndevos@redhat.com>
kubernetes/kubernetes#111083 has been merged and synced into
k8s.io/mount-utils. This should remove any systemd log messages while
calling NodeStageVolume and NodeGetVolumeStats.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Some of the steps still refer to CephFS, likely missed some replacements
while copy/pasting. The logging is a little confusing when messages
claim something with CephFS failed, but the test is about NFS.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Previously, it was a requirement to have the attacher sidecar for CSI
drivers, and there was an implementation of a dummy mode of operation.
However, the skipAttach implementation has been stabilized and the dummy
mode of operation is going to be removed from the external-attacher.
Considering this driver works on volumeattachment objects for NBD driver
use cases, we have to implement dummy ControllerPublish and
ControllerUnpublish calls and thus keep supporting our operations even
in the absence of the dummy mode of operation in the sidecar.
This commit makes ControllerPublish and ControllerUnpublish NOOPs for
the RBD driver. The CephFS driver does not require the attacher and has
already been made free of attachment operations.
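A minimal sketch of such NOOP implementations, assuming the CSI spec Go
bindings; the `ControllerServer` type here is a stand-in, not the actual
cephcsi type.
```
package rbd

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// ControllerServer is a stand-in for the RBD controller server type.
type ControllerServer struct{}

// ControllerPublishVolume is a no-op: it only keeps volumeattachment
// handling working once the external-attacher drops its dummy mode.
func (cs *ControllerServer) ControllerPublishVolume(
	ctx context.Context,
	req *csi.ControllerPublishVolumeRequest,
) (*csi.ControllerPublishVolumeResponse, error) {
	return &csi.ControllerPublishVolumeResponse{}, nil
}

// ControllerUnpublishVolume is the matching no-op for detach.
func (cs *ControllerServer) ControllerUnpublishVolume(
	ctx context.Context,
	req *csi.ControllerUnpublishVolumeRequest,
) (*csi.ControllerUnpublishVolumeResponse, error) {
	return &csi.ControllerUnpublishVolumeResponse{}, nil
}
```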
Ref# https://github.com/ceph/ceph-csi/pull/3149
Ref# https://github.com/kubernetes-csi/external-attacher/issues/226
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
`--setmetadata` is false by default; honoring it will keep the metadata
disabled by default.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
`--setmetadata` is false by default; honoring it will keep the metadata
disabled by default.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Sometimes executing a command in a Pod fails with "unable to upgrade
connection". This is most likely a temporary situation, and retrying
hopefully reduces the number of spurious failures because of it.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This argument to the csi-snapshotter sidecar allows us to receive
snapshot-name/snapshot-namespace/snapshotcontent-name metadata in the
CreateSnapshot() request.
For example:
csi.storage.k8s.io/volumesnapshot/name
csi.storage.k8s.io/volumesnapshot/namespace
csi.storage.k8s.io/volumesnapshotcontent/name
This is useful information which can be consumed depending on the use
case at our driver. Features like adding metadata to the snapshot image
can consume this as needed.
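A minimal sketch of how a driver could read these keys from the request
parameters; the helper and constants below are illustrative, only the
key names themselves come from the list above.
```
package cephfs

// parameter keys injected by the csi-snapshotter sidecar when the extra
// snapshot metadata is enabled
const (
	volSnapNameKey        = "csi.storage.k8s.io/volumesnapshot/name"
	volSnapNamespaceKey   = "csi.storage.k8s.io/volumesnapshot/namespace"
	volSnapContentNameKey = "csi.storage.k8s.io/volumesnapshotcontent/name"
)

// snapshotMetadata pulls the injected keys out of the CreateSnapshot
// request parameters map (a stand-in for req.GetParameters()).
func snapshotMetadata(parameters map[string]string) (name, namespace, contentName string) {
	return parameters[volSnapNameKey],
		parameters[volSnapNamespaceKey],
		parameters[volSnapContentNameKey]
}
```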
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Make sure to set metadata when the subvolume snapshot exists, i.e. if
the provisioner pod is restarted while createSnapShot is in progress, it
may have created the subvolume snapshot but not yet set the metadata.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Set snapshot-name/snapshot-namespace/snapshotcontent-name details
on subvolume snapshots as metadata on create.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
At present we have a single log level configuration for all the
containers running in our CSI pods, which has been defaulted to log
level 5. However, this causes many logs to be emitted in a cluster and
leads to log spamming to an extent. This commit introduces one more log
level control for CSI pods, called sidecarLogLevel, which defaults to
log level 1.
The sidecar controllers like snapshotter, resizer, attacher, etc. have
been configured with this new log level, and the driver pods keep the
old configuration value. This allows us to have different configuration
options for sidecar controllers and driver pods. With this, we also have
a choice of different configuration settings instead of locking onto one
variable for the containers deployed via the CSI driver.
To summarize: the CSI containers maintained by the Ceph CSI driver have
log level 5, and the controllers/sidecars not maintained by the Ceph CSI
driver have log level 1.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This change helps read the cluster name from the cmdline args; the
provisioner will set the same on the subvolume.
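A minimal sketch of reading such a value; the flag name "clustername" is
an assumption for illustration, not necessarily the flag added by this
change.
```
package main

import "flag"

// the flag name "clustername" is assumed here for illustration
var clusterName = flag.String("clustername", "",
	"name of the cluster, to be set on the subvolume as metadata")

func main() {
	flag.Parse()

	// the provisioner would pass *clusterName along when creating the
	// subvolume and store it there as metadata
	_ = *clusterName
}
```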
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Make sure to set metadata when the subvolume exists, i.e. if the
provisioner pod is restarted while createVolume is in progress, it may
have created the subvolume but not yet set the metadata.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This helps monitoring solutions without access to the Kubernetes cluster
to display the PV/PVC/namespace details in their dashboards.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
After failover of workloads to the secondary cluster when the primary
cluster is down, the RBD image is not marked healthy and the VR
resources are not promoted to Primary; in VolumeReplication, the
`CURRENT STATE` remains Unknown and doesn't change to Primary.
This happens because the primary cluster went down and we have force
promoted the image on the secondary cluster, so the image stays in
up+stopping_replay or could be in any other state. The current
assumption was that the image will always be `up+stopped`, but the image
will be in `up+stopped` only for a planned failover and could be in any
other state for a forced failover. For this reason, this commit removes
checkHealthyPrimary from the PromoteVolume RPC call.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The Ceph v17.2.2 container-image fails to start Ceph Mgr. This causes
issues while the e2e test suite is running. It is better to check if
Ceph Mgr is available, before continuing with the rest of the CI job.
Updates: #3259
Signed-off-by: Niels de Vos <ndevos@redhat.com>