When testing NFS-provisioning on a cluster that has an NFS-provisioner
and node-plugins deployed with a different driver-name, it is very
useful to have a commandline option to change the name of the
provisioner that is placed in the StorageClass.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
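A minimal sketch of how such an option could be wired up in the e2e suite; the flag name, default driver name and helper are assumptions for illustration, not the actual implementation:

```go
package e2e

import (
	"flag"

	scv1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nfsDriverName holds the provisioner name written into generated
// StorageClasses; override it when the cluster registered the NFS driver
// under a different name.
var nfsDriverName = flag.String("nfs-driver", "nfs.csi.ceph.com",
	"name of the provisioner to set in NFS StorageClasses")

// newNFSStorageClass builds a StorageClass that points at the configured
// provisioner instead of a hard-coded driver name.
func newNFSStorageClass(name string, params map[string]string) *scv1.StorageClass {
	return &scv1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: name},
		Provisioner: *nfsDriverName,
		Parameters:  params,
	}
}
```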
NFS testing will automatically be enabled when CephFS is enabled. This
makes sure the NFS tests run in the CI where there are different jobs
for CephFS and RBD. With a dedicated testNFS variable, it is still
possible to only run the NFS tests, when both CephFS and RBD are
disabled.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
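A rough sketch of the toggle logic described here; the flag names mirror the commit text, but the exact wiring in the suite is an assumption:

```go
package e2e

import "flag"

var (
	testCephFS bool
	testNFS    bool
)

func init() {
	flag.BoolVar(&testCephFS, "test-cephfs", true, "test CephFS CSI driver")
	flag.BoolVar(&testNFS, "test-nfs", false, "test NFS CSI driver")
}

// handleFlags runs after flag.Parse(). NFS re-uses the CephFS deployment,
// so the NFS tests follow the CephFS toggle, while -test-nfs on its own
// still works when both CephFS and RBD are disabled.
func handleFlags() {
	if testCephFS {
		testNFS = true
	}
}
```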
The tests for the NFS-provisioner can be run by passing -deploy-nfs and
-test-nfs as parameters to the `go test` or `e2e.test` command.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Added a getPersistentVolume helper function
to get the PV and retry on transient API
errors, to improve CI stability.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added a getPersistentVolumeClaim helper function
to get the PVC and retry on transient API
errors, to improve CI stability.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
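A hedged sketch of what such a helper might look like; the poll constants are placeholders, `isRetryableAPIError` is assumed to be the suite's retry predicate (sketched further below), and the getPersistentVolume variant is analogous:

```go
package e2e

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

const (
	poll    = 2 * time.Second
	timeout = 5 * time.Minute
)

// getPersistentVolumeClaim fetches a PVC and retries transient API errors
// instead of failing the test on the first Get.
func getPersistentVolumeClaim(c kubernetes.Interface, namespace, name string) (*v1.PersistentVolumeClaim, error) {
	var pvc *v1.PersistentVolumeClaim
	err := wait.PollImmediate(poll, timeout, func() (bool, error) {
		var getErr error
		pvc, getErr = c.CoreV1().PersistentVolumeClaims(namespace).Get(
			context.TODO(), name, metav1.GetOptions{})
		if getErr != nil {
			// isRetryableAPIError is assumed to exist elsewhere in the suite
			if isRetryableAPIError(getErr) {
				return false, nil // transient, try again
			}
			return false, fmt.Errorf("failed to get PVC %q: %w", name, getErr)
		}
		return true, nil
	})
	return pvc, err
}
```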
The Ceph cluster-id is usually detected with `ceph fsid`. This is not
always correct, as the Ceph cluster can also be configured by name.
If -clusterid=... is passed, its value will be used instead of trying to
detect it with `ceph fsid`.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
There are many locations where the cluster-id (`ceph fsid`) is obtained
from the Rook Toolbox. Instead of duplicating the code everywhere, use a
new helper function getClusterID().
Signed-off-by: Niels de Vos <ndevos@redhat.com>
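A simplified sketch of such a helper; execCommandInToolBoxPod, rookNamespace and the framework argument are assumptions about helpers already present in the suite:

```go
// getClusterID runs `ceph fsid` inside the Rook toolbox pod and returns
// the trimmed cluster ID; execCommandInToolBoxPod and rookNamespace are
// assumed to exist elsewhere in the e2e package.
func getClusterID(f *framework.Framework) (string, error) {
	fsID, stdErr, err := execCommandInToolBoxPod(f, "ceph fsid", rookNamespace)
	if err != nil {
		return "", fmt.Errorf("failed to get cluster ID: %w", err)
	}
	if stdErr != "" {
		return "", fmt.Errorf("`ceph fsid` returned an error: %s", stdErr)
	}
	return strings.TrimSpace(fsID), nil
}
```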
A new -filesystem=... option has been added so that the e2e tests can
run against environments that do not have a "myfs" CephFS filesystem.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
StorageClasses are cluster-scoped resources, not namespaced; there is no
need to log the namespace of a StorageClass.
When creating a StorageClass, NotFound is not an error that will be
returned, so there is no need to check for it.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
On occasion the creation of the StorageClass can fail due to an
etcdserver timeout. If that happens, the creation can be attempted after
a delay.
This has already been done for CephFS StorageClasses, but was missed for
RBD.
See-also: ceph/ceph-csi@8a0377ef02
Signed-off-by: Niels de Vos <ndevos@redhat.com>
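A hedged sketch of the retry, mirroring what the CephFS path already does; the function name, intervals and error handling are assumptions:

```go
import (
	"context"
	"strings"
	"time"

	scv1 "k8s.io/api/storage/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createRBDStorageClass retries the Create call when etcd reports a
// timeout, instead of failing the test run on the first attempt.
func createRBDStorageClass(c kubernetes.Interface, sc *scv1.StorageClass) error {
	return wait.PollImmediate(5*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := c.StorageV1().StorageClasses().Create(context.TODO(), sc, metav1.CreateOptions{})
		if err == nil {
			return true, nil
		}
		if apierrors.IsAlreadyExists(err) {
			// a previous attempt may have succeeded despite the timeout
			return true, nil
		}
		if strings.Contains(err.Error(), "etcdserver: request timed out") {
			return false, nil // transient etcd hiccup, retry after the poll interval
		}
		return false, err
	})
}
```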
Some parts of the Context() seem to get executed, even when BeforeEach()
did a Skip() for the test. By adding a return inside the Context(), the
tests should not get executed at all.
This was noticed in a failed test, where the upgrade was running even though
the job was executed as a normal non-upgrade one.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
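A minimal Ginkgo sketch of the pattern; the upgradeTesting toggle and the spec names are assumptions about the suite's variables:

```go
var _ = Describe("CephFS Upgrade Testing", func() {
	BeforeEach(func() {
		if !upgradeTesting {
			Skip("Skipping CephFS upgrade tests")
		}
	})

	Context("Upgrade CephFS", func() {
		if !upgradeTesting {
			// Skip() in BeforeEach only skips individual specs; the Context
			// body is still evaluated while the spec tree is built, so bail
			// out of the closure here as well.
			return
		}

		It("upgrades the driver and verifies mounted volumes", func() {
			// ... upgrade steps ...
		})
	})
})
```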
This commit changes the image registry URL for sidecars in the
deployment from `k8s.gcr.io` to `registry.k8s.io`, as
the migration is happening from the former to the latter. This commit
also corrects the e2e README for the change.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
radosNamespace is specific to RBD rather
than the general Ceph configuration. A new
RBD section has been introduced for RBD-specific
options, so radosNamespace moves to the RBD
section, while it is still accepted under the
global Ceph-level configuration for backward
compatibility.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Instead of patching the PV to update both
the persistentVolumeReclaimPolicy and
the claimRef before deleting the PVC,
do it in two steps:
patch the PV's persistentVolumeReclaimPolicy to Retain
so the PV is kept after deleting the PVC, then
remove the claimRef on the PV after deleting
the PVC so that the PV can be bound to a new PVC.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
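A rough sketch of the two patches as strategic merge patches against the PV; the helper names are hypothetical:

```go
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// retainPV switches the reclaim policy before the PVC is deleted so the
// PV is not removed along with the claim.
func retainPV(c kubernetes.Interface, pvName string) error {
	patch := []byte(`{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}`)
	_, err := c.CoreV1().PersistentVolumes().Patch(context.TODO(), pvName,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}

// clearClaimRef drops the claimRef after the PVC is gone so the retained
// PV can be bound to a newly created PVC.
func clearClaimRef(c kubernetes.Interface, pvName string) error {
	patch := []byte(`{"spec":{"claimRef":null}}`)
	_, err := c.CoreV1().PersistentVolumes().Patch(context.TODO(), pvName,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```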
We no longer require the kubernetes validation for clone tests in
the e2e tests. This commit removes it for CephFS.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
We no longer require the kubernetes validation for clone tests in
the e2e tests. This commit removes it for RBD.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
On OpenShift it is not possible for the Rook toolbox to get the metrics
from Kubelet (without additional configuration). By passing
-is-openshift, the metrics are not checked, and the e2e suite does not
fail on that particular piece.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
A new -filesystem=... option has been added so that the e2e tests can
run against environments that do not have a "myfs" CephFS filesystem.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
In case the toolbox pod is not available, the error message lists that
no Pods are found, but there is no hint about the toolbox. By mentioning
the toolbox in the error message, it suggests a good place to start
troubleshooting the environment.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit corrects the release version used in the upgrade tests from
the unsupported 3.3.1 to a supported version.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Considering snapshot controllers have been GA since
Kubernetes 1.20, we no longer need to mention the beta
version in our deployment.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
* create a PVC and check PVC/PV metadata on RBD image
* create and delete a PVC, attach the old PV to a new PVC and check if
PVC metadata is updated on RBD image
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Makes the RBD image features in the StorageClass
optional so that the default image features of librbd
can be used, while keeping the option for the user
to specify the image features in the StorageClass.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As deep-flatten has long been supported in Ceph and is
enabled by default in librbd, provide an option
to enable it in cephcsi for the RBD images we
create.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
It might take some time for the deployment to
get created; consider NotFound a retryable
error and try again.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
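A hedged sketch of waiting for a Deployment where NotFound is treated as "not created yet" rather than a failure; the function name and intervals are assumptions:

```go
import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentCreated polls until the Deployment exists; NotFound is
// expected while the object is still being created, so it is retried.
func waitForDeploymentCreated(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet, keep polling
			}
			return false, err
		}
		return true, nil
	})
}
```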
On occasion deploying the CephFS components fails due to errors like these:
failed to delete provisioner rbac .../csi-provisioner-rbac.yaml
By using the deleteResource() helper, a retry is done in case of a
failure.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
There have been errors while the CephFS tests were running, like:
failed to create storageclass: etcdserver: request timed out
By retrying the creation of the StorageClass, the e2e tests are expected to
continue and (hopefully) succeed.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The CentOS Stream 8 base container image does not have `ps` installed.
This causes CI jobs to fail when checking for a restarted rbd-nbd
process.
Instead of using `ps`, the `pstree` command can be used. This will add
some ASCII-tree symbols in front of the command that is logged by the
e2e tests, but that output is only used for manual review and does not
harm the running test.
Fixes: #2850
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit removes the thick provisioning
code as thick provisioning is deprecated in
cephcsi 3.5.0.
fixes: #2795
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The CephFS data pool name changed from filesystem-data0
to filesystem-replicated in Rook 1.8. Update the
cephcsi helper functions as well to use the new
pool name.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As ioutil.ReadFile is deprecated and the
suggestion is to use os.ReadFile (as
per https://pkg.go.dev/io/ioutil), update
accordingly.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit makes the recreateCSIRBDPods function more general
so that it can be consumed by more callers.
Updates https://github.com/ceph/ceph-csi/issues/2509
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Added e2e tests for the cases below.
Normal PVC clone to a bigger size PVC (without encryption):
* Filesystem PVC clone to a bigger size
* Block PVC clone to a bigger size
Encrypted PVC clone to a bigger size PVC:
* Filesystem PVC clone to a bigger size
* Block PVC clone to a bigger size
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added e2e tests for the cases below.
Normal PVC snapshot restore to a bigger size PVC (without encryption):
* Filesystem PVC restore to a bigger size
* Block PVC restore to a bigger size
Encrypted PVC snapshot restore to a bigger size PVC:
* Filesystem PVC restore to a bigger size
* Block PVC restore to a bigger size
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit adjusts the existing migration e2e tests into a couple of tests
that cover the scenarios. The separate filesystem and block tests have
been shrunk into a single one, and a couple of helper functions were
introduced to set up and tear down the migration-specific secret, configmap
and StorageClass. The static PV function has been renamed to a more general
name while the tests were adjusted.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This `unparam` linter exception is no longer needed and CI fails
if we keep it there. This commit removes it and makes CI happy.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Adding an e2e test case to validate that the workflow
of PVC creation and attaching it to a pod works for
new image features like fast-diff, obj-map, exclusive-lock
and layering.
fixes: #2695
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Currently, we skip the generic ephemeral volume
testing if the Kubernetes version is less than
1.21; because of this, the whole test suite gets
skipped and e2e is marked as successful within
2 minutes. This commit runs the ephemeral tests
only when Kubernetes is 1.21 or newer, so that
the other tests still run on lower versions.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
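A minimal sketch of gating only the ephemeral specs instead of the whole suite; the version helper and spec name are assumptions:

```go
It("validates a generic ephemeral volume", func() {
	// skip just this spec, not the whole suite, on older clusters;
	// k8sVersionGreaterEquals is assumed to exist in the suite
	if !k8sVersionGreaterEquals(f.ClientSet, 1, 21) {
		Skip("generic ephemeral volumes require Kubernetes 1.21+")
	}
	// ... create the pod with an ephemeral volume and validate it ...
})
```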
The e2e tests sometimes fail to get objects like PVCs from the Kubernetes API
server, and log the following error:
Error getting pvc "rbd-6940" in namespace "rbd-694": rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field
By checking the error message and initiating a retry on this failure,
CI jobs should fail less regularly.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
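A sketch of the added check in the retry predicate; only the new branch is shown, and the real function is assumed to cover more retryable conditions:

```go
import "strings"

// isRetryableAPIError reports whether an API error is worth retrying; only
// the newly added branch is shown here.
func isRetryableAPIError(err error) bool {
	// seen in CI: the API server answered HTTP 200 but the gRPC transport
	// dropped the content-type header; treat it as transient
	if strings.Contains(err.Error(), "transport: missing content-type field") {
		return true
	}
	// ... other retryable conditions (timeouts, too-many-requests, ...) ...
	return false
}
```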
Add tests for RWX and ROX accessModes for Block and FileSystem Mode
PVCs.
Fixes: #2262
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
To make the error return consistent across the e2e tests, we have decided
to remove the `with error` wording from the logs; this commit
does that for e2e/snapshot.go.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
To make the error return consistent across the e2e tests, we have decided
to remove the `with error` wording from the logs; this commit
does that for e2e/cephfs_helper.go.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
To make the error return consistent across the e2e tests, we have decided
to remove the `with error` wording from the logs; this commit
does that for e2e/upgrade-rbd.go.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
To make the error return consistent across the e2e tests, we have decided
to remove the `with error` wording from the logs; this commit
does that for e2e/upgrade-cephfs.go.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
To make the error return consistent across the e2e tests, we have decided
to remove the `with error` wording from the logs; this commit
does that for e2e/rbd_helper.go.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
To make the error return consistent across the e2e tests, we have decided
to remove the `with error` wording from the logs; this commit
does that for e2e/ceph_user.go.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
To make the error return consistent across the e2e tests, we have decided
to remove the `with error` wording from the logs; this commit
does that for e2e/utils.go.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
To make the error return consistent across the e2e tests, we have decided
to remove the `with error` wording from the logs; this commit
does that for the CephFS tests.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
To make the error return consistent across the e2e tests, we have decided
to remove the `with error` wording from the logs; this commit
does that for the RBD tests.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit adds validation that the CSI CephFS driver works with
ephemeral volume support. With ephemeral volume support, a user can
specify ephemeral volumes in the pod spec and tie the lifecycle
of the PVC to the pod.
An example pod spec is also included in this commit.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit adds validation that the CSI RBD driver works with
ephemeral volume support. With ephemeral volume support, a user can
specify ephemeral volumes in the pod spec and tie the lifecycle
of the PVC to the pod.
An example pod spec is also included in this commit.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Considering we are well past these releases and only care about
Kubernetes releases from v1.20, there is no need to keep this
version check in place for the tests.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Considering we are well past these releases and only care about
Kubernetes releases from v1.20, there is no need to keep this
version check in place for the tests.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Considering we are well past these releases and only care about
Kubernetes releases from v1.20, there is no need to keep this
version check in place for the tests.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Considering we are well past this release and only care about
Kubernetes releases from v1.20, there is no need to keep this
version check in place for the tests.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Considering we are well past this release and only care about
Kubernetes releases from v1.20, there is no need to keep this
version check in place for the tests.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Considering we are well past this release and only care about
Kubernetes releases from v1.20, there is no need to keep this
version check in place for the tests.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
There have been occasional CI job failures due to "transport is closing"
errors. Adding this error to the isRetryableAPIError() function should
make sure to retry the request until the connection is restored.
Fixes: #2613
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Currently, the mountType validation of the encrypted volume is done in
the application; we should rather validate this inside the nodeplugin
pod.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Currently, at "perform IO on rbd-nbd volume after nodeplugin restart"
test we are performing write on the rbd-nbd based mount after nodeplugin
restart. But due to a bug in NBD driver the writes are failing, please
note NBD zero cmd timeout handling is fixed with kernel >= 5.4 and hence
we should defend on writes based on kernel version to avoid unnecessary
CI failures.
For more information see
https://github.com/ceph/ceph-csi/issues/2204#issuecomment-930941047
updates: #2204
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This commit creates and makes use of a migration secret in the requests and
validates various CSI operations.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This is to preserve the rbd-nbd logs post unmap, so that the CI can dump
the available logs from logdir.
Fixes: #2451
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
For a static volume, the user manually mounts an
already existing image as a volume to the application
pods. As it is an RBD image, if the PVC is of type
filesystem, the image will be mapped, formatted
and mounted on the node.
If the user resizes the image on the Ceph cluster,
the filesystem created on the RBD image cannot be
resized automatically. Even if the Kubernetes objects
are deleted and recreated, the new size
will not be visible on the node.
With this change, during the NodeStageVolumeRequest
the nodeplugin will check the size of the mapped RBD
image on the node using the devicePath, and also
the RBD image size on the Ceph cluster.
If the sizes do not match, it will resize the
filesystem on the node as part of the
NodeStageVolumeRequest RPC call.
The user needs to do the operations below to see the new size:
* Resize the RBD image in the Ceph cluster
* Scale down all the application pods using the static
PVC.
* Make sure no application pods which are using the
static PVC are running on a node.
* Scale up all the application pods.
Validate the new size in the volume mounted in the
application pod.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
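A conceptual sketch of the resize step during NodeStageVolume, using the Kubernetes mount-utils resizer rather than the driver's exact code; the function name is an assumption:

```go
import (
	mountutils "k8s.io/mount-utils"
	utilexec "k8s.io/utils/exec"
)

// resizeStagedFilesystem grows the filesystem on the staged device when the
// underlying (already resized) rbd image is larger than the filesystem.
func resizeStagedFilesystem(devicePath, stagingTargetPath string) error {
	resizer := mountutils.NewResizeFs(utilexec.New())

	// NeedResize compares the block-device size with the filesystem size.
	needed, err := resizer.NeedResize(devicePath, stagingTargetPath)
	if err != nil || !needed {
		return err
	}

	// Resize runs resize2fs / xfs_growfs depending on the filesystem.
	_, err = resizer.Resize(devicePath, stagingTargetPath)
	return err
}
```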
This commit adds a test for the migration delete volID detection scenario
by passing a custom volID, with the entries in the configmap changed
to simulate the situation. The staticPV function was also changed to
accept an annotation map, which makes it more generally usable.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>