Instead of patching the PV to update both the
persistentVolumeReclaimPolicy and the claimRef before deleting the
PVC:
* Patch the PV persistentVolumeReclaimPolicy to Retain, so that the
  PV is retained after the PVC is deleted.
* Remove the claimRef on the PV after deleting the PVC, so that the
  PV can be bound to a new PVC.
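A minimal sketch of the two patches with client-go (the helper name,
clientset wiring and imports from k8s.io/apimachinery and
k8s.io/client-go are assumptions, not the actual e2e code):

    // retainAndReleasePV keeps the PV alive across a PVC delete and
    // then frees it up for a new claim.
    func retainAndReleasePV(ctx context.Context, c kubernetes.Interface, pvName string) error {
        // Before deleting the PVC: switch the reclaim policy to Retain.
        retain := []byte(`{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}`)
        if _, err := c.CoreV1().PersistentVolumes().Patch(ctx, pvName,
            types.MergePatchType, retain, metav1.PatchOptions{}); err != nil {
            return err
        }
        // ... delete the PVC here ...
        // After the delete: clear claimRef so a new PVC can bind the PV.
        unclaim := []byte(`{"spec":{"claimRef":null}}`)
        _, err := c.CoreV1().PersistentVolumes().Patch(ctx, pvName,
            types.MergePatchType, unclaim, metav1.PatchOptions{})
        return err
    }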
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
To consider an image healthy during the Promote operation, we
currently check only the image state on the primary site. If the
network is flaky or the remote site is down, the image health is not
as expected. To make sure the image is healthy across the clusters,
check the state on both the local and the remote cluster.
Some details:
https://bugzilla.redhat.com/show_bug.cgi?id=2014495
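A rough sketch of the intended check with the go-ceph rbd bindings
(exact type and constant names are assumptions, error handling
trimmed):

    status, err := img.GetGlobalMirrorStatus()
    if err != nil {
        return err
    }
    for _, site := range status.SiteStatuses {
        // every site, local and remote, must be up and not in error
        if !site.Up || site.State == librbd.MirrorImageStatusStateError {
            return fmt.Errorf("image unhealthy on site %q: %s",
                site.MirrorUUID, site.Description)
        }
    }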
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
We no longer require the Kubernetes validation for clone tests in
the e2e tests. This commit removes it for CephFS.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
We no longer require the Kubernetes validation for clone tests in
the e2e tests. This commit removes it for RBD.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
On OpenShift it is not possible for the Rook toolbox to get the
metrics from Kubelet (without additional configuration). When
-is-openshift is passed, the metrics are not checked and the e2e
suite does not fail on that particular check.
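Illustrative wiring of the new option (names are assumptions):

    var isOpenShift = flag.Bool("is-openshift", false,
        "running on OpenShift, skip the kubelet metrics validation")

    func shouldValidateMetrics() bool {
        // kubelet metrics need extra configuration on OpenShift, so
        // skip the check there instead of failing the whole suite
        return !*isOpenShift
    }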
Signed-off-by: Niels de Vos <ndevos@redhat.com>
A new -filesystem=... option has been added so that the e2e tests can
run against environments that do not have a "myfs" CephFS filesystem.
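Illustrative sketch; the previously hardcoded name becomes the
default value of the new option:

    var fileSystemName = flag.String("filesystem", "myfs",
        "name of the CephFS filesystem to use in the e2e tests")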
Signed-off-by: Niels de Vos <ndevos@redhat.com>
In case the toolbox pod is not available, the error message states
that no Pods are found, but there is no hint about the toolbox.
Mentioning the toolbox in the error message suggests a good place to
start troubleshooting the environment.
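The shape of the improved message (wording and surrounding code are
illustrative):

    if len(pods) == 0 {
        // mention the toolbox so the reader knows where to start
        return fmt.Errorf("no toolbox pods found in namespace %q, "+
            "is the Rook toolbox deployed?", rookNamespace)
    }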
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit corrects the release version used by the upgrade tests
from the unsupported 3.3.1 to a supported version.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
setRbdNbdToolFeatures was called inside an init function, which runs
from main.go for both the CephFS and the RBD driver. Instead of
calling it in the init function, call it in the RBD driver.go, as
the check is specific to RBD.
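Sketch of the move (the Run signature is an approximation of the
driver's startup hook):

    // before: ran for every driver, via package init in main.go
    func init() {
        setRbdNbdToolFeatures()
    }

    // after: only when the RBD driver starts, in rbd driver.go
    func (r *Driver) Run(conf *util.Config) {
        setRbdNbdToolFeatures()
        // ...
    }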
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Since the snapshot controller has been GA from Kubernetes version
1.20, we no longer need to mention its beta version in our
deployment.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This argument in the csi-snapshotter sidecar allows us to receive
snapshot-name/snapshot-namespace/snapshotcontent-name metadata in the
CreateSnapshot() request, for example:
csi.storage.k8s.io/volumesnapshot/name
csi.storage.k8s.io/volumesnapshot/namespace
csi.storage.k8s.io/volumesnapshotcontent/name
This information is useful in the driver, depending on the use case.
Features like adding metadata to the snapshot image can consume it as
needed.
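Reading these parameters in the driver could look like this (the
helper name is an assumption; the keys are set by the sidecar):

    func snapshotMetadata(req *csi.CreateSnapshotRequest) (name, ns, content string) {
        p := req.GetParameters()
        return p["csi.storage.k8s.io/volumesnapshot/name"],
            p["csi.storage.k8s.io/volumesnapshot/namespace"],
            p["csi.storage.k8s.io/volumesnapshotcontent/name"]
    }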
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Set the snapshot-name/snapshot-namespace/snapshotcontent-name details
as metadata on the RBD backend snapshot image.
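A sketch with go-ceph (the variable names are assumptions;
Image.SetMetadata is the relevant call):

    for k, v := range map[string]string{
        "csi.storage.k8s.io/volumesnapshot/name":        snapName,
        "csi.storage.k8s.io/volumesnapshot/namespace":   snapNamespace,
        "csi.storage.k8s.io/volumesnapshotcontent/name": contentName,
    } {
        if err := img.SetMetadata(k, v); err != nil {
            return err
        }
    }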
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
* create a PVC and check the PVC/PV metadata on the RBD image
* create and delete a PVC, attach the old PV to a new PVC and check
  whether the PVC metadata is updated on the RBD image
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Make sure to set the metadata when the image already exists, e.g. if
the provisioner pod was restarted while CreateVolume was in progress
and had created the image but not yet set the metadata.
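Sketch of the repair in the "image already exists" path of
CreateVolume (helper names are assumptions):

    if found {
        // a restarted provisioner may have created the image but
        // crashed before the metadata step, so (re)apply it here
        if err := rv.setAllMetadata(req.GetParameters()); err != nil {
            return nil, status.Error(codes.Internal, err.Error())
        }
        return buildCreateVolumeResponse(rv), nil
    }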
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
For example, if a PVC was deleted after setting
`persistentVolumeReclaimPolicy` to `Retain` on the PV, and the PV is
reattached to a new PVC, we make sure to update the PV/PVC image
metadata on the reattach.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This helps monitoring solutions that have no access to the Kubernetes
cluster to display the PV/PVC/namespace details in their dashboards.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Define and use the PV and PVC metadata keys used by the external
provisioner. The CSI external-provisioner (v1.6.0+) introduces the
--extra-create-metadata flag, which automatically sets
map<string, string> parameters in the CSI CreateVolumeRequest.
Add utility functions to set/get PV/PVC/PVCNamespace metadata on the
image.
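Sketch of the keys and one of the helpers (the keys are the ones set
by the external-provisioner; the helper name is an assumption, with
go-ceph's rbd package imported as librbd):

    const (
        pvcNameKey      = "csi.storage.k8s.io/pvc/name"
        pvcNamespaceKey = "csi.storage.k8s.io/pvc/namespace"
        pvNameKey       = "csi.storage.k8s.io/pv/name"
    )

    // setVolumeMetadata copies the externally provided PV/PVC
    // details onto the RBD image as image metadata.
    func setVolumeMetadata(img *librbd.Image, params map[string]string) error {
        for _, k := range []string{pvcNameKey, pvcNamespaceKey, pvNameKey} {
            if v, ok := params[k]; ok {
                if err := img.SetMetadata(k, v); err != nil {
                    return err
                }
            }
        }
        return nil
    }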
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Add support to run the rbd map and mount -t commands with nsenter.
The complete design of the pod/multus network is described at
https://github.com/rook/rook/blob/master/design/ceph/multus-network.md#csi-pods
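The basic pattern (a sketch; the netns path handling in the driver
is more involved):

    // run a command, e.g. "rbd map" or "mount -t ceph", inside the
    // network namespace that nsenter joins before exec'ing it
    func execWithNetNS(netNSPath, program string, args ...string) ([]byte, error) {
        nsArgs := append([]string{"--net=" + netNSPath, "--", program}, args...)
        return exec.Command("nsenter", nsArgs...).CombinedOutput()
    }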
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
We already have generic `automatic merge` rules to merge PRs in the
devel and release branches. Remove the duplicate release-3.5 rule.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit adds upgrade documentation for release 3.6.0 and also
updates the support matrix for v3.6.0.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The `scripts/golangci.yml.buildtags.in` file is generated from the
`Makefile`, so there is no need to include it in the repository. By
adding the file to the `.gitignore` list, the output of `git status`
will not show the file anymore.
Fixes: 8fb5739f2
"build: more flexible handling of go build tags; added ceph_preview"
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Currently we only check whether the rbd-nbd tool supports the cookie
feature. This change also guards the cookie addition based on the
kernel version.
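Sketch of the double guard (the helper names and the exact kernel
threshold are assumptions):

    // the cookie needs support in both the rbd-nbd tool and the
    // running kernel, so gate the argument on both checks
    if hasNBDCookieSupport && kernelSupportsNBDCookie(kernelRelease) {
        args = append(args, "--cookie", cookie)
    }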
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
These deployment files are heavily based on the CephFS deployment.
Deploying an environment with these files works for me in minikube.
This should make it possible to add e2e testing as well.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
There are currently no e2e tests, unit-tests or a Helm Chart for the
NFS support. Until the functionality is confirmed to work on a
regular basis, support for NFS-provisioned volumes will be Alpha.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The omap is stored with the requested snapshot name, not with the
subvolume snapshot name. This fix uses the correct snapshot request
name to clean up the omap once the subvolume snapshot is deleted.
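The gist of the fix (identifier names are approximate, not the
actual code):

    // before: cleanup keyed on the backend subvolume snapshot name,
    // which is not the key the omap was stored under
    err = undoSnapReservation(ctx, volOptions, *snapID, subvolSnapName, cr)

    // after: cleanup keyed on the snapshot request name
    err = undoSnapReservation(ctx, volOptions, *snapID, snapReqName, cr)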
fixes: #2974
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The `ceph nfs export ...` commands have changed in recent Ceph
releases. Use the most recent command as a default, and fall back to
the older command when an error is reported.
This should make the NFS-provisioner work on any current Ceph version.
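The fallback pattern (a sketch; the concrete ceph CLI arguments are
left out, and runCephCommand is a hypothetical helper):

    out, err := runCephCommand(newStyleExportArgs...)
    if err != nil {
        // the newer syntax was rejected, retry with the old one
        out, err = runCephCommand(oldStyleExportArgs...)
    }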
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Increase the timeout to 2 minutes to give the rollback enough time to
complete. As the rollback is performed by the force-promote command,
it may at times take more than a minute (roughly proportional to the
number of dirty blocks that need to be rolled back).
The extra minute also helps to avoid multiple calls to complete the
rollback and, in extreme corner cases, to avoid failures on the first
call when the mirror watcher is not yet removed (after scaling down
the RBD mirror instance).
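Illustrative only, the shape of the change:

    // was 1 * time.Minute; the force-promote based rollback can take
    // longer, roughly proportional to the dirty blocks involved
    const rollbackTimeout = 2 * time.Minute

    ctx, cancel := context.WithTimeout(context.Background(), rollbackTimeout)
    defer cancel()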
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Restoring a snapshot to a new PVC results in a wrong dataPoolName
when the initial volume is linked to a storageClass with topology
constraints and erasure coding.
Signed-off-by: Thibaut Blanchard <thibaut.blanchard@gmail.com>
The README explains some of the requirements and basic configuration for
using the NFS-provisioner. When more deployment artifacts are added, the
README will get extended.
The Rook CephNFS example is included, as it is the easiest to get
started with dynamic provisioning of NFS-volumes.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
NFSVolume instances are short-lived, they only exist for a single
gRPC procedure. It is easier to store the calling Context in the
NFSVolume struct than to pass it to each of the functions that
require it.
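Sketch of the pattern (field names approximate):

    type NFSVolume struct {
        // set when the volume object is created for a procedure,
        // reused by its helper methods instead of being passed in
        ctx context.Context

        volumeID string
        // ...
    }

Storing a Context in a struct is usually discouraged in Go, but for
an object that lives no longer than a single gRPC call it keeps the
helper signatures simple.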
Signed-off-by: Niels de Vos <ndevos@redhat.com>
These NFS Controller and Identity servers are the base for the new
provisioner. The functionality is currently extremely limited, follow-up
PRs will implement various CSI procedures.
CreateVolume is implemented with the bare minimum. This makes it
possible to create a volume, and mount it with the
kubernetes-csi/csi-driver-nfs NodePlugin.
DeleteVolume unexports the volume from the Ceph managed NFS-Ganesha
service. In case the Ceph cluster provides multiple NFS-Ganesha
deployments, things might not work as expected. This is going to be
addressed in follow-up improvements.
Lots of TODO comments need to be resolved before this can be declared
"production ready". Unit- and e2e-tests are missing as well.
Signed-off-by: Niels de Vos <ndevos@redhat.com>