In the function validatePVCSnapshot(...), we don't need the
validateEncryption variable: the kms value we already pass is enough
to decide whether encryption needs to be validated.
Hence, we can avoid using that variable.
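Roughly, the check reduces to inspecting the kms value itself
(illustrative sketch, not the exact e2e code):

    // an empty kms value means there is no encryption to validate,
    // so a separate validateEncryption boolean adds nothing
    validateEncryption := kms != ""
    if validateEncryption {
        // verify that the encryption key exists in the KMS
    }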
Signed-off-by: Yati Padia <ypadia@redhat.com>
Bring up the rbd-nbd map/attach process on the rbd node plugin and expect the
IO to continue uninterrupted.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This is a negative test case to show that, as per the current design,
the IO will fail because of the missing mappings.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Added an E2E test to cover the below case (sketched in code after the list):
* Create PVC
* Create Snapshot from PVC
* Delete PVC
* Create Clone from Snapshot
* Delete Snapshot
* Mount clone to Application
* Delete Application and PVC Clone
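Roughly, the flow in Go, using hypothetical helper names (the real
e2e helpers are named differently):

    // hypothetical helpers; error handling elided for brevity
    pvc := createPVC(f)                      // Create PVC
    snap := createSnapshot(f, pvc)           // Create Snapshot from PVC
    deletePVC(f, pvc)                        // Delete PVC
    clone := restorePVCFromSnapshot(f, snap) // Create Clone from Snapshot
    deleteSnapshot(f, snap)                  // Delete Snapshot
    app := mountPVCToApp(f, clone)           // Mount clone to Application
    deleteApp(f, app)                        // Delete Application...
    deletePVC(f, clone)                      // ...and PVC Clone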
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
When a Snapshot is encrypted during a CreateSnapshot
operation, an encryption key gets created in the KMS.
When we delete the Snapshot, that key should also get
deleted from the KMS.
When we create a volume from a snapshot, we copy the
required information, but we missed copying the
encryption information. This commit adds the missing
information so that the encryption key can be deleted.
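A sketch of the kind of change involved, with illustrative field
names (not the exact rbdVolume/rbdSnapshot fields):

    // carry the encryption details from the snapshot over to the
    // new volume, so DeleteVolume can later remove the key from
    // the KMS
    vol.Encrypted = snap.Encrypted
    vol.KMSID = snap.KMSID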
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The default number of volumes for cloning and snapshot/restore tests
is 10. This adds to the time the test suite takes. There is no need
to validate 10 copies of the encrypted volume; a single copy is
sufficient.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This moves validatePVCSnapshot() into its own function, so that it
follows the same format as validatePVCClone() already does.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
A CSIDriver object can be created on Kubernetes
for the below reason.
If a CSI driver creates a CSIDriver object,
Kubernetes users can easily discover the CSI
drivers installed on their cluster
(simply by issuing kubectl get CSIDriver).
Ref: https://kubernetes-csi.github.io/docs/csi-driver-object.html#what-is-the-csidriver-object
attachRequired always needs to be set to
true to avoid issues on RWO PVCs.
More details about it at https://github.com/rook/rook/pull/4332
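For reference, a minimal sketch of creating the object with
client-go (on older clusters the object lives in
storage.k8s.io/v1beta1 instead of v1):

    import (
        "context"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func createCSIDriverObject(ctx context.Context, c kubernetes.Interface) error {
        attachRequired := true // required to avoid issues on RWO PVCs
        driver := &storagev1.CSIDriver{
            ObjectMeta: metav1.ObjectMeta{Name: "rbd.csi.ceph.com"},
            Spec: storagev1.CSIDriverSpec{
                AttachRequired: &attachRequired,
            },
        }
        _, err := c.StorageV1().CSIDrivers().Create(ctx, driver, metav1.CreateOptions{})
        return err
    }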
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The rbd plugin currently only supports the layering feature
for rbd images. Add the exclusive-lock and journaling image
features for rbd.
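In the StorageClass this maps to the imageFeatures parameter,
roughly:

    // sketch of the relevant StorageClass parameter; other required
    // parameters (clusterID, pool, secrets, ...) are omitted
    parameters := map[string]string{
        "imageFeatures": "layering,exclusive-lock,journaling",
    }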
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: woohhan <woohyung_han@tmax.co.kr>
Currently, in the rbd snapshot restore and volume clone E2E we
are not checking any data consistency after doing a snapshot
restore or volume clone. Hence, this PR writes data to the
PVC, computes the checksum of the file, and verifies it against
the snapshot-restored or cloned PVC.
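The consistency check boils down to comparing file checksums; a
self-contained sketch of that step (the e2e may use a different
hash or run it inside the pod):

    import (
        "crypto/sha512"
        "encoding/hex"
        "io"
        "os"
    )

    // checksum returns the hex-encoded SHA-512 digest of a file.
    func checksum(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        h := sha512.New()
        if _, err := io.Copy(h, f); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }

    // later: fail if checksum(original) != checksum(clone)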
Signed-off-by: Yati Padia <ypadia@redhat.com>
The stripe-size is the most efficient size to write to RBD images.
However, not all images are a multiple of the stripe-size in size.
That means thick-provisioning would not allocate the full image, and
the process might even fail.
This adds a 50 MB PVC to test the process: 100 MB is coincidentally a
multiple of the (default 4 MB) stripe-size, whereas 50 MB is not.
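The failure mode comes from the last chunk: with a 4 MB stripe-size,
a 50 MB image leaves a 2 MB tail that a naive stripe-sized write loop
would miss or overshoot. A generic sketch of chunked allocation that
handles the tail:

    import "io"

    // writeFull zero-fills an image of imageSize bytes in chunks of
    // stripeSize, shrinking the final chunk when imageSize is not a
    // multiple of stripeSize (e.g. 50 MB with a 4 MB stripe-size).
    func writeFull(w io.WriterAt, imageSize, stripeSize int64) error {
        zeros := make([]byte, stripeSize)
        for off := int64(0); off < imageSize; off += stripeSize {
            chunk := stripeSize
            if imageSize-off < stripeSize {
                chunk = imageSize - off // the 2 MB tail
            }
            if _, err := w.WriteAt(zeros[:chunk], off); err != nil {
                return err
            }
        }
        return nil
    }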
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When tests run and something goes wrong during deployment, not all
information is available. Logging the events from the namespace where
Ceph-CSI (and Vault) is deployed might help with troubleshooting.
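Roughly, the extra logging amounts to dumping the namespace events
via client-go (assuming a clientset c, a context ctx, and the
deployment namespace):

    // sketch: list and log all events in the deployment namespace
    events, err := c.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{})
    if err == nil {
        for _, ev := range events.Items {
            fmt.Printf("%s %s/%s: %s\n",
                ev.Type, ev.InvolvedObject.Kind, ev.InvolvedObject.Name, ev.Message)
        }
    }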
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The csi.volume.owner should get stored when the csi-provisioner sidecar
passes additional metadata. This option is now enabled by default, so
the owner (Kubernetes Namespace) of RBD images is expected to be
available.
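With extra metadata enabled on the csi-provisioner, the PVC
namespace arrives in the CreateVolume parameters; a sketch of
picking it up (key name as documented for the external-provisioner):

    // sketch: the provisioner passes the PVC's namespace as a
    // CreateVolume parameter when extra metadata is enabled;
    // it is then stored with the image as csi.volume.owner
    owner := req.GetParameters()["csi.storage.k8s.io/pvc/namespace"]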
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit adds an E2E test
to verify the metadata created by the controller.
We are not checking the generated omap data,
but we do verify PVC resize and binding the
PVC to an application.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
There are several go-routines where Failf() is called, which will cause
a Golang panic inside the Ginkgo test framework. Instead of aborting the
go-routine, capture the error and check for failures once all
go-routines have finished.
The CephFS tests have been updated already; this change only affects the
RBD tests.
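A self-contained sketch of the pattern (doTestStep stands in for
whatever each goroutine does):

    import (
        "fmt"
        "sync"
    )

    func runParallel(workers int, doTestStep func(int) error) error {
        var (
            wg   sync.WaitGroup
            mu   sync.Mutex
            errs []error
        )
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                // record the error instead of calling Failf() here,
                // which would panic inside the goroutine
                if err := doTestStep(id); err != nil {
                    mu.Lock()
                    errs = append(errs, err)
                    mu.Unlock()
                }
            }(i)
        }
        wg.Wait()
        if len(errs) != 0 {
            // a single Failf() on the main goroutine is safe
            return fmt.Errorf("%d operations failed: %v", len(errs), errs)
        }
        return nil
    }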
Updates: #1359
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The added anti-affinity rules prevent provisioner pods from being
scheduled on the same node. The Kubernetes scheduler will spread the
pods across nodes to improve availability during node failures.
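The rule looks roughly like this in the deployment spec (the label
selector values here are illustrative):

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // sketch: require provisioner replicas to land on different nodes
    affinity := &corev1.Affinity{
        PodAntiAffinity: &corev1.PodAntiAffinity{
            RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                LabelSelector: &metav1.LabelSelector{
                    MatchLabels: map[string]string{"app": "csi-rbdplugin-provisioner"},
                },
                TopologyKey: "kubernetes.io/hostname",
            }},
        },
    }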
Signed-off-by: Nico Berlee <nico.berlee@on2it.net>
Validate the backend rbd image count in each
of the E2E test cases. This helps a lot to catch
issues in each test case.
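One way to count the backend images, sketched with the rbd CLI
(the e2e runs such commands inside the toolbox pod; shown here as a
local invocation for brevity):

    import (
        "encoding/json"
        "os/exec"
    )

    // imageCount lists the images in a pool via `rbd ls` and
    // returns how many there are.
    func imageCount(pool string) (int, error) {
        out, err := exec.Command("rbd", "ls", "--format=json", pool).Output()
        if err != nil {
            return 0, err
        }
        var images []string
        if err := json.Unmarshal(out, &images); err != nil {
            return 0, err
        }
        return len(images), nil
    }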
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
These test cases will be executed against a rados namespace.
- Create a PVC and bind it to an app.
- Resize block PVC and check device size.
- Create a PVC clone and bind it to an app.
Signed-off-by: Mehdy Khoshnoody <mehdy.khoshnoody@gmail.com>
In rbd E2E testing, we need to create snapshots and clones
as parallel operations.
This helps us ensure that the functionality works when
we have parallel delete and create operations, and it also
helps to catch bugs when we get parallel requests.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added two new parameters to the e2e tests to skip
the rbd and cephfs tests. This will help us
run more tests in Travis CI.
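A sketch of the kind of flags added (the names here are
hypothetical; the real e2e flags may be spelled differently):

    import "flag"

    var (
        // hypothetical flag names
        testRBD    = flag.Bool("test-rbd", true, "run the rbd e2e tests")
        testCephFS = flag.Bool("test-cephfs", true, "run the cephfs e2e tests")
    )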
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Reduced the number of pods created
in the ROX E2E to save some time in E2E,
and changed the waiting time from 2 to 1
min.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
With the new implementation, when a user creates a snapshot
we create an rbd image in the backend, so we need to
validate the total image count in the backend when
creating snapshots and clones.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added an E2E test to mount an rbd PVC as readonly
in an application pod and try to create some
files in the readonly PVC; when we try to create
files on an RO PVC, we should get an error.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If the PVC access mode is ReadOnlyMany
or single-node readonly, mount the rbd
device path to the staging path as readonly
to avoid write operations.
If the PVC access mode is readonly, also map
the rbd image as readonly.
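A sketch of the mapping side (imageSpec and isReadOnly come from the
staging request; `--read-only` is a real `rbd map` option):

    // build the `rbd map` arguments, mapping read-only when the
    // PVC access mode demands it
    args := []string{"map", imageSpec}
    if isReadOnly { // e.g. access mode ReadOnlyMany
        args = append(args, "--read-only")
    }
    out, err := exec.Command("rbd", args...).CombinedOutput()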
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added a test case to cover the unmap of an rbd image
if the mounting fails. If we pass an invalid
mount option, the expectation is that mounting
the rbd image to the staging path fails. As the unmap
then happens, it should not block rbd PVC deletion
with an "rbd image is in-use" error.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As kube is the short form of kubernetes,
it is expected to mention the full form, kubernetes,
in the e2e tests.
Also updated a few wordings in the e2e.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The static check is failing as replicapool
is used in 3 or more places; we need to define
a variable and use it instead.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The snapshot beta CRD won't work if the
Kubernetes version is less than 1.17.0.
As the snapshot CRD won't be installed,
we cannot test snapshots, so disable
those tests if the Kubernetes version is less than 1.17.
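A sketch of the gating check using the client-go discovery client
(the comparison is kept deliberately simple):

    import (
        "strconv"
        "strings"

        "k8s.io/client-go/kubernetes"
    )

    // snapshotSupported returns true when the server is v1.17+.
    func snapshotSupported(c kubernetes.Interface) (bool, error) {
        v, err := c.Discovery().ServerVersion()
        if err != nil {
            return false, err
        }
        major, err := strconv.Atoi(v.Major)
        if err != nil {
            return false, err
        }
        // some providers report Minor like "17+", so trim the suffix
        minor, err := strconv.Atoi(strings.TrimSuffix(v.Minor, "+"))
        if err != nil {
            return false, err
        }
        return major > 1 || (major == 1 && minor >= 17), nil
    }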
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit adds support for the dataPool parameter in the
topology-constrained pools of the StorageClass, which can be
leveraged to specify erasure-coded pool names to use for RBD
data instead of the replica pools.
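The shape of the parameter, with placeholder pool names (JSON layout
as in the ceph-csi topology documentation):

    // sketch: StorageClass parameters with an erasure-coded
    // dataPool per topology segment
    parameters := map[string]string{
        "topologyConstrainedPools": `[
          {
            "poolName": "replicapool-zone1",
            "dataPool": "ec-datapool-zone1",
            "domainSegments": [
              {"domainLabel": "zone", "value": "zone1"}
            ]
          }
        ]`,
    }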
Signed-off-by: ShyamsundarR <srangana@redhat.com>