The logErr function logs every error that occurred, together
with a message that is passed for each occurrence.
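A minimal sketch of such a helper, using the standard "log" package
(the e2e suite logs through its own framework helpers):

    import "log"

    // logErr logs err together with a message describing the occurrence.
    func logErr(err error, msg string) {
        if err != nil {
            log.Printf("%s: %v", msg, err)
        }
    }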
Co-authored-by: Niels de Vos <ndevos@redhat.com>
Co-authored-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Yug <yuggupta27@gmail.com>
added a helper function to test clone creation
in a different pool.
Co-authored-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Yug <yuggupta27@gmail.com>
made pool an argument of listRBDImages to support
listing RBD images in different pools.
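A sketch of the reworked helper, assuming the rbd CLI is directly
reachable (in the e2e suite the command would run inside a toolbox
pod instead):

    import (
        "encoding/json"
        "os/exec"
    )

    // listRBDImages returns the names of the RBD images in the given pool.
    func listRBDImages(pool string) ([]string, error) {
        // `rbd ls --format=json` prints the image names as a JSON array
        out, err := exec.Command("rbd", "ls", "--format=json", "--pool", pool).Output()
        if err != nil {
            return nil, err
        }
        var images []string
        if err := json.Unmarshal(out, &images); err != nil {
            return nil, err
        }
        return images, nil
    }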
Co-authored-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Yug <yuggupta27@gmail.com>
In the function validatePVCSnapshot(...), we don't need
the validateEncryption variable: the kms value that is already
passed in tells us whether encryption needs to be validated.
Hence, we can avoid using that variable.
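Roughly (the kms argument and the noKMS value are illustrative names):

    // derive the encryption check from the kms argument instead of
    // passing a separate validateEncryption boolean
    if kms != noKMS {
        // the snapshot is encrypted, validate the key in the KMS
    }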
Signed-off-by: Yati Padia <ypadia@redhat.com>
This commit calls `waitForDaemonSets` and `waitForDeploymentComplete`
after upgrading, to wait for the CSI driver pods to be in a Running
state in both the RBD and CephFS upgrade tests.
Signed-off-by: Rakshith R <rar@redhat.com>
Bring up the rbd-nbd map/attach process on the rbd node plugin and expect the
IO to continue uninterrupted.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This is a negative test case to show that, as per the current design,
the IO will fail because of the missing mappings.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This commit addresses ifshort linter issues; the linter checks
whether the short syntax for if-statements can be used.
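For example, the linter flags a variable that is only used in the
if-condition:

    // before
    err := doSomething()
    if err != nil {
        return err
    }

    // after: short if-statement syntax
    if err := doSomething(); err != nil {
        return err
    }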
Updates: #1586
Signed-off-by: Rakshith R <rar@redhat.com>
Test if metrics are available at all. The actual values are a little
difficult to validate.
BlockMode volumes support metrics since Kubernetes 1.22.
See-also: kubernetes/kubernetes#97972
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Added an E2E test for the case below (see the sketch after the list):
* Create PVC
* Create Snapshot from PVC
* Delete PVC
* Create Clone from Snapshot
* Delete Snapshot
* Mount clone to Application
* Delete Application and PVC Clone
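A rough outline of the flow; all helper names are hypothetical:

    pvc := createPVC(f)                     // Create PVC
    snap := createSnapshot(f, pvc)          // Create Snapshot from PVC
    deletePVC(f, pvc)                       // Delete PVC
    clone := createPVCFromSnapshot(f, snap) // Create Clone from Snapshot
    deleteSnapshot(f, snap)                 // Delete Snapshot
    app := createApp(f, clone)              // Mount clone to Application
    deleteApp(f, app)                       // Delete Application
    deletePVC(f, clone)                     // Delete PVC Clone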
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
When a Snapshot is encrypted during a CreateSnapshot
operation, the encryption key gets created in the KMS;
when we delete the Snapshot, the key should also get
deleted from the KMS.
When we create a volume from a snapshot, we copy the
required information, but we missed copying the
encryption information. This commit adds the missing
information so that the encryption key can be deleted.
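Conceptually (the field names are illustrative, not the actual ones):

    // carry the encryption settings over from the snapshot to the
    // new volume, so that a later delete can find and remove the
    // key in the KMS
    vol.Encrypted = snap.Encrypted
    vol.KmsID = snap.KmsID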
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The default number of volumes for cloning and snapshot/restore is 10.
This adds to the time the test suite runs. There is no need to validate
10 copies of the encrypted volume; a single copy is sufficient.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This moves validatePVCSnapshot() into its own function, so that it
follows the same format as validatePVCClone() does already.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
A CSIDriver object can be created on Kubernetes for the
reason below:
If a CSI driver creates a CSIDriver object,
Kubernetes users can easily discover the CSI
Drivers installed on their cluster
(simply by issuing kubectl get CSIDriver)
Ref: https://kubernetes-csi.github.io/docs/csi-driver-object.html#what-is-the-csidriver-object
attachRequired always needs to be set to true to avoid
issues on RWO PVCs.
More details at https://github.com/rook/rook/pull/4332
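A sketch of the object with the storage/v1 API from client-go:

    import (
        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    attachRequired := true
    driver := &storagev1.CSIDriver{
        ObjectMeta: metav1.ObjectMeta{Name: "rbd.csi.ceph.com"},
        Spec: storagev1.CSIDriverSpec{
            // must be true to avoid issues on RWO PVCs
            AttachRequired: &attachRequired,
        },
    }
    // create it with:
    // clientset.StorageV1().CSIDrivers().Create(ctx, driver, metav1.CreateOptions{})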
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Key existence and removal are only checked for the VaultKMS provider.
This should also be done for the VaultTokensKMS provider.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Checks app deletion when the CephFS volume is already unmounted.
Creates an app and a PVC and binds them. Unmounts the volume through
the umount command in the CephFS plugin pod and checks app deletion.
Signed-off-by: Rakshith R <rar@redhat.com>
execCommandInDaemonsetPod() executes commands inside a given
container of a daemonset pod on a particular node.
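The mechanics are roughly (selector handling simplified, names
illustrative):

    // find the daemonset pod scheduled on the given node ...
    pods, err := c.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{
        LabelSelector: selector,
        FieldSelector: "spec.nodeName=" + nodeName,
    })
    // ... then exec the command in the requested container of pods.Items[0]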
Signed-off-by: Rakshith R <rar@redhat.com>
getDaemonSetLabelSelector dynamically returns the label selector of a
daemonset, given its name and namespace. This is needed since the
labels are not the same for helm and non-helm deployments.
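Roughly:

    // read the selector from the daemonset itself instead of
    // hard-coding labels that differ between deployment methods
    ds, err := c.AppsV1().DaemonSets(namespace).Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return "", err
    }
    return metav1.FormatLabelSelector(ds.Spec.Selector), nil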
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Signed-off-by: Rakshith R <rar@redhat.com>
Currently, the rbd plugin only supports the layering feature
for RBD images. Add the exclusive-lock and journaling image
features for RBD.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: woohhan <woohyung_han@tmax.co.kr>
Deleting a VolumeSnapshot failed when the backend subvolume
(PVC) and the CephFS subvolume snapshot were already deleted.
Fixes: #1647
Signed-off-by: Yati Padia <ypadia@redhat.com>
Currently, the RBD snapshot restore and volume clone E2E tests
do not check data consistency after the snapshot restore or
volume clone. Hence, this PR writes data to the PVC, calculates
the checksum of the file, and verifies it against the restored
or cloned PVC.
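The check boils down to a helper like this (using SHA-256 for
illustration; the e2e code computes the checksum inside the
application pod):

    import (
        "crypto/sha256"
        "encoding/hex"
        "io"
        "os"
    )

    // fileChecksum returns the hex-encoded SHA-256 digest of a file.
    func fileChecksum(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }

The checksum recorded on the source PVC must match the checksum of
the same file on the restored or cloned PVC.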
Signed-off-by: Yati Padia <ypadia@redhat.com>
The stripe-size is the most efficient size to write to RBD images.
However, not every image size is a multiple of the stripe-size. That
means thick-provisioning would not allocate the full image, and the
process might even fail.
This adds a 50 MB PVC to test the process: 100 MB is coincidentally a
multiple of the (default 4 MB) stripe-size (25 x 4 MB), whereas 50 MB
is not (50 / 4 = 12.5).
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When a user provides the VolumeNamePrefix option, create the
subvolume with that prefix, which makes it easy for the user
to identify the subvolumes that belong to the storageclass.
Added an E2E test to verify that the subvolume name contains
the prefix provided in the storageclass.
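The verification is essentially (prefix value and logging helper
illustrative):

    import "strings"

    // every subvolume created through the storageclass must carry the prefix
    for _, name := range subvolumes {
        if !strings.HasPrefix(name, "my-prefix-") {
            e2elog.Failf("subvolume %s does not have prefix %q", name, "my-prefix-")
        }
    }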
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
When one Quantity is in GiB and the other in Dec (bytes), the values
should compare as equal. However, when using ==, this is not the case.
Equals() needs to be used for that.
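A minimal illustration with resource.MustParse (Cmp() is shown here;
the value-comparing helper does the same comparison):

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    gib := resource.MustParse("1Gi")
    dec := resource.MustParse("1073741824")

    fmt.Println(gib == dec)        // false: the structs cache the original strings
    fmt.Println(gib.Cmp(dec) == 0) // true: compares the actual values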
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When deleting a PVC fails, the following messages are repeated until a
timeout is hit:
cephfs-80811 in state &PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},} to be deleted (600 seconds elapsed)
Because the Phase is not set, the PVC seems to be in a strange state. In
case this happens, log all details from the PVC so that we can identify
additional conditions to check for completed deletion.
Updates: #1874
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When passing a pointer to a PVC and PV, the status of the deleted
objects is not logged correctly. The `PersistentVolumeClaim.Status` and
`PersistentVolume.Status` that are added to the logs contain the status
of the initially created object (the reference to the PVC/PV). When the
PVC/PV is removed, there is no guarantee that the object is updated.
The logs show an empty (nullified) `PersistentVolumeClaim.Status`, which
is not helpful. Instead, use the PVC/PV returned by the `Get()` function
for further logging. Even when the `.Status` struct of the original
PVC/PV gets wiped, the returned object should have the correct details.
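Sketch with client-go (the logging helper is illustrative):

    // use the freshly returned object for logging, not the stale pointer
    pvc, err := c.CoreV1().PersistentVolumeClaims(namespace).Get(ctx, name, metav1.GetOptions{})
    if err == nil {
        e2elog.Logf("PVC %s (status: %v) still waiting for deletion", pvc.Name, pvc.Status)
    }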
Updates: #1874
Signed-off-by: Niels de Vos <ndevos@redhat.com>