Ceph’s logging levels operate on a scale of 1 to 20, where 1 is terse
and 20 is verbose.
Format:
debug-{subsystem} = {log-level}
Set the `rbd` log level to 20 in our e2e tests.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
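A minimal sketch of how the level might be raised, assuming the setting is applied through the ceph CLI (the sketch shells out directly; the e2e suite may go through the rook toolbox pod instead):

```go
package main

import (
	"fmt"
	"os/exec"
)

// setCephDebugLevel applies "debug-{subsystem} = {log-level}" cluster-wide
// by shelling out to the ceph CLI.
func setCephDebugLevel(subsystem string, level int) error {
	option := fmt.Sprintf("debug_%s", subsystem)
	out, err := exec.Command("ceph", "config", "set", "global",
		option, fmt.Sprintf("%d", level)).CombinedOutput()
	if err != nil {
		return fmt.Errorf("setting %s=%d failed: %w: %s", option, level, err, out)
	}
	return nil
}

func main() {
	// As in the e2e tests: rbd at the most verbose level.
	if err := setCephDebugLevel("rbd", 20); err != nil {
		fmt.Println(err)
	}
}
```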
rbd-nbd volume resize support via its netlink interface needs Linux
kernel version >= v5.3.0.
Hence, add a defensive check for the supported kernel version.
Fixes: #2234
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
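A sketch of what such a defensive check could look like (the parsing below is illustrative, not the in-tree implementation):

```go
package e2e

import (
	"strconv"
	"strings"
)

// kernelAtLeast reports whether a kernel release string such as
// "5.4.0-77-generic" is at least major.minor; release suffixes are ignored.
func kernelAtLeast(release string, major, minor int) bool {
	parts := strings.SplitN(release, ".", 3)
	if len(parts) < 2 {
		return false
	}
	maj, errMaj := strconv.Atoi(parts[0])
	mnr, errMnr := strconv.Atoi(parts[1])
	if errMaj != nil || errMnr != nil {
		return false
	}
	return maj > major || (maj == major && mnr >= minor)
}

// Tests that resize an rbd-nbd backed volume would be skipped when
// kernelAtLeast(release, 5, 3) is false.
```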
By using the retryKubectl helper function,
the kubectl command is retried and known transient
error messages are skipped.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
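Roughly, the helper behaves like the sketch below (the error list and timings are illustrative):

```go
package e2e

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// knownTransientErrors are messages that can safely be retried
// (illustrative list, not the in-tree one).
var knownTransientErrors = []string{
	"Error from server (InternalError)",
	"etcdserver: request timed out",
}

// retryKubectl runs kubectl, retries on known transient errors, and gives
// up once the timeout has passed.
func retryKubectl(timeout time.Duration, args ...string) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		transient := false
		for _, msg := range knownTransientErrors {
			if strings.Contains(string(out), msg) {
				transient = true
				break
			}
		}
		if !transient || time.Now().After(deadline) {
			return string(out), fmt.Errorf("kubectl %v failed: %w: %s", args, err, out)
		}
		time.Sleep(5 * time.Second)
	}
}
```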
Until we have a real fix, sync data from the application pod to avoid
the file system occasionally entering read-only mode on nodeplugin
restart.
Updates: #2204
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
The nlreturn linter requires a blank line before return
and branch statements, except when the return is alone
inside a statement group (such as an if statement), to
increase code clarity. This commit addresses such issues.
Updates: #1586
Signed-off-by: Rakshith R <rar@redhat.com>
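For example, nlreturn flags the first function below and accepts the second:

```go
package e2e

// Flagged: the final return directly follows the loop.
func indexOfBad(names []string, want string) int {
	for i, n := range names {
		if n == want {
			return i // allowed: the return is alone inside the if block
		}
	}
	return -1
}

// Accepted: a blank line separates the loop from the return.
func indexOfGood(names []string, want string) int {
	for i, n := range names {
		if n == want {
			return i
		}
	}

	return -1
}
```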
Now that the healer functionality for mounter processes is available,
let's start using it.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This commit resolves the godot linter issue
which says "Comment should end in a period (godot)".
Updates: #1586
Signed-off-by: Yati Padia <ypadia@redhat.com>
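For example:

```go
package e2e

// Flagged by godot (no trailing period):
//   // cleanupPVC removes the claim created by the test
//
// Accepted:
//   // cleanupPVC removes the claim created by the test.
func cleanupPVC() {}
```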
This adds a new `kmsConfig` interface that can be used to validate
different KMS services and settings. It makes checking for the available
support easier and fetching the passphrase simpler.
The basicKMS mirrors the current validation of the KMS implementations
that use secrets and metadata. vaultKMS can be used to validate the
passphrase stored in a Vault service.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
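A minimal sketch of the idea; the method names below are assumptions, not necessarily the ones used in the e2e code:

```go
package e2e

// kmsConfig abstracts how a KMS used in the tests is validated.
type kmsConfig interface {
	// canGetPassphrase reports whether the passphrase can be read back
	// from the KMS for validation.
	canGetPassphrase() bool
	// getPassphrase returns the passphrase stored for a volume handle.
	getPassphrase(volumeHandle string) (string, error)
}

// basicKMS mirrors the validation done for secrets/metadata based KMS
// implementations, where the passphrase cannot be fetched directly.
type basicKMS struct{}

func (b *basicKMS) canGetPassphrase() bool               { return false }
func (b *basicKMS) getPassphrase(string) (string, error) { return "", nil }

// vaultKMS validates the passphrase stored in a Vault service.
type vaultKMS struct {
	backendPath string
}

func (v *vaultKMS) canGetPassphrase() bool { return true }
func (v *vaultKMS) getPassphrase(volumeHandle string) (string, error) {
	// Would query Vault under backendPath for the volume's key; omitted here.
	return "", nil
}
```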
This commit adds e2e tests for user-secret-based metadata encryption,
adds user-secret.yaml, and makes the required changes in the
kms-connection-details and kms-config YAMLs.
Signed-off-by: Rakshith R <rar@redhat.com>
This commit validates that the encryption keys are
created and deleted properly during PVC-PVC clone
of encrypted images.
Updates: #2022
Signed-off-by: Yati Padia <ypadia@redhat.com>
We have many declarations, invocations, etc. with long lines which are
very difficult to follow while reading code. This addresses the issue
in the 'e2e/rbd*.go' files by restricting the line length to 120 chars.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Add a case to create a new PVC with VolumeContentSource set to a
thick-provisioned PVC. This should result in a new thick-provisioned PVC
once the cloning is done.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When all the PVCs and associated images are deleted,
the images should also get deleted from the trash.
This commit adds a validation check for the same.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The variable for rbd mount options has been renamed
to rbdMountOptions to be consistent with the other variable naming schema.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Made pool an argument of listRBDImages to support
listing of rbd images in different pools.
Co-authored-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Yug <yuggupta27@gmail.com>
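A sketch of the changed signature, with a stand-in for the toolbox exec helper:

```go
package e2e

import "encoding/json"

// execInToolbox stands in for the e2e helper that runs a command inside
// the rook toolbox pod (hypothetical here).
var execInToolbox func(cmd string) (string, error)

// listRBDImages now takes the pool to list instead of assuming a fixed one.
func listRBDImages(pool string) ([]string, error) {
	out, err := execInToolbox("rbd ls --format=json --pool=" + pool)
	if err != nil {
		return nil, err
	}
	var images []string
	if err := json.Unmarshal([]byte(out), &images); err != nil {
		return nil, err
	}
	return images, nil
}
```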
In the function validatePVCSnapshot(...), we don't need the
validateEncryption variable, as the kms value we pass already
tells us whether encryption should be validated.
Hence, we can avoid using it.
Signed-off-by: Yati Padia <ypadia@redhat.com>
Bring up the rbd-nbd map/attach process on the rbd node plugin and expect the
IO to continue uninterrupted.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This is a negative test case to show that, as per the current design,
the IO will fail because of the missing mappings.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Added an E2E test to cover the case below:
* Create PVC
* Create Snapshot from PVC
* Delete PVC
* Create Clone from Snapshot
* Delete Snapshot
* Mount clone to Application
* Delete Application and PVC Clone
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
When a Snapshot is encrypted during a CreateSnapshot
operation, the encryption key gets created in the KMS;
when we delete the Snapshot, the key should also get
deleted from the KMS.
When we create a volume from a snapshot we copy the
required information, but we missed copying the
encryption information. This commit adds the missing
information needed to delete the encryption key.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The default number for cloning and snapshot/restore is 10 volumes. This
adds to the time the test suite runs. There is no need to validate 10
copies of the encrypted volume; a single copy is sufficient.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This moves validatePVCSnapshot() into its own function, so that it
follows the same format as validatePVCClone() does already.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
A CSIDriver object can be created on Kubernetes
for the reason below:
if a CSI driver creates a CSIDriver object,
Kubernetes users can easily discover the CSI
drivers installed on their cluster
(simply by issuing kubectl get CSIDriver).
Ref: https://kubernetes-csi.github.io/docs/csi-driver-object.html#what-is-the-csidriver-object
attachRequired always needs to be set to
true to avoid issues with RWO PVCs.
More details at https://github.com/rook/rook/pull/4332
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
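A sketch of the object described above using the storage/v1 API types (the deployed YAML may differ in which optional fields it sets):

```go
package e2e

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rbdCSIDriver builds the CSIDriver object for the rbd plugin with
// attachRequired kept at true to avoid issues with RWO PVCs.
func rbdCSIDriver() *storagev1.CSIDriver {
	attachRequired := true
	return &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "rbd.csi.ceph.com"},
		Spec: storagev1.CSIDriverSpec{
			AttachRequired: &attachRequired,
		},
	}
}
```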
The current rbd plugin only supports the layering feature
for rbd images. Add the exclusive-lock and journaling image
features for rbd.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: woohhan <woohyung_han@tmax.co.kr>
Currently, in the rbd snapshot restore and volume clone E2E we
are not checking any data consistency after doing a snapshot
restore or volume clone. Hence, this PR writes data to
the PVC, records the checksum of the file, and verifies it against
the snapshot-restored or cloned PVC.
Signed-off-by: Yati Padia <ypadia@redhat.com>
The stripe-size is the most efficient size to write to RBD images.
However, not all image sizes are a multiple of the stripe-size. That means
thick-provisioning would not allocate the full image, and the process
might even fail.
This adds a 50 MB PVC to test the process; 100 MB is coincidentally a
multiple of the (default 4 MB) stripe-size, 50 MB is not.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When tests run and something goes wrong during deployment, not all
information is available. Logging the events from the namespace where
Ceph-CSI (and Vault) is deployed, might help with troubleshooting.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The csi.volume.owner should get stored when the csi-provisioner sidecar
passes additional metadata. This option is now enabled by default, so
the owner (Kubernetes Namespace) of RBD images is expected to be
available.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit adds E2E testing
to verify the metadata created by the controller.
We are not checking the generated omap data,
but we will verify PVC resize and binding
the PVC to an application.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
There are several go-routines where Failf() is called, which will cause
a Golang panic inside the Ginkgo test framework. Instead of aborting the
go-routine, capture the error and check for failures once all
go-routines have finished.
The CephFS tests have been updated already; this change only affects the
RBD tests.
Updates: #1359
Signed-off-by: Niels de Vos <ndevos@redhat.com>
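The pattern, in short: collect errors under a mutex and fail once after all workers are done, e.g.:

```go
package e2e

import (
	"fmt"
	"sync"
)

// runInParallel runs count workers and returns the collected errors,
// instead of calling Failf() inside the go-routines.
func runInParallel(count int, work func(i int) error) error {
	var (
		wg   sync.WaitGroup
		mu   sync.Mutex
		errs []error
	)
	for i := 0; i < count; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			if err := work(i); err != nil {
				mu.Lock()
				errs = append(errs, fmt.Errorf("worker %d: %w", i, err))
				mu.Unlock()
			}
		}(i)
	}
	wg.Wait()
	// Check for failures only after every go-routine has finished.
	if len(errs) != 0 {
		return fmt.Errorf("%d of %d operations failed: %v", len(errs), count, errs)
	}
	return nil
}
```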
The added anti-affinity rules prevent provisioner pods from being scheduled on
the same node. The kubernetes scheduler will spread the pods across nodes to
improve availability during node failures.
Signed-off-by: Nico Berlee <nico.berlee@on2it.net>
Validate the backend rbd image count in each
E2E test case. This helps a lot to catch
issues in each test case.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
These test cases will be executed against a rados namespace:
- Create a PVC and bind it to an app.
- Resize block PVC and check device size.
- Create a PVC clone and bind it to an app.
Signed-off-by: Mehdy Khoshnoody <mehdy.khoshnoody@gmail.com>
In rbd E2E testing, we need to create snapshots and clones
as parallel operations.
This helps us ensure that the functionality works when
we have parallel delete and create operations, and it also
helps to catch bugs when we get parallel requests.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added two new parameters to the e2e tests to skip
the rbd and cephfs tests. This will help us
run more tests in Travis CI.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Reduced the number of pods created
in the ROX E2E to save some time in E2E,
and changed the waiting time from 2 minutes
to 1 minute.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
With the new implementation, when a user creates a snapshot
we create an rbd image in the backend, so we need to
validate the total image count in the backend when
creating snapshots and clones.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added an E2E test to mount an rbd PVC as read-only
in an application pod and try to create a
file in the read-only PVC; when we try to create
files on the RO PVC, we should get an error.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If the PVC access mode is ReadOnlyMany
or single-node read-only, mount the rbd
device path to the staging path as read-only
to avoid write operations.
If the PVC access mode is read-only, map the
rbd image as read-only.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added a test case to cover unmap of the rbd image
if mounting fails. If we pass an invalid
mount option, the expectation is that mounting
of the rbd image to the staging path fails. As the unmap
happens, it should not block rbd PVC deletion
with an "rbd image is in-use" error.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As kube is the short form of kubernetes,
it is expected to use the full form, kubernetes,
in the e2e tests.
Updated a few wordings in the e2e.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The static check is failing as the replicapool
string is used in 3 or more places; we need to define
a variable and use it.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The snapshot beta CRD won't work if the
kubernetes version is less than 1.17.0.
As the snapshot CRD won't be installed,
we cannot test snapshots, so disable the tests
if the kubernetes version is less than 1.17.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit adds support for a dataPool parameter for the
topology-constrained pools in the StorageClass, which can be
leveraged to specify erasure-coded pool names to use for RBD
data instead of the replica pools.
Signed-off-by: ShyamsundarR <srangana@redhat.com>
- This commit adds tests only for RBD, as CephFS still needs
an enhancement in CephFS subvolume commands to effectively use
topology based provisioning
Signed-off-by: ShyamsundarR <srangana@redhat.com>
With client-go v1.18.0 there is a change where signatures of methods
in generated clientsets, dynamic, metadata, and scale clients have been
modified to accept context.Context as a first argument.
Signatures of Create, Update, and Patch methods have been updated to accept
CreateOptions, UpdateOptions and PatchOptions respectively.
Signatures of Delete and DeleteCollection methods now accept DeleteOptions
by value instead of by reference.
framework.RunKubectlInput now accepts the namespace as the first parameter,
which is also accommodated in this PR.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
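For illustration, the updated call pattern looks like this:

```go
package e2e

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createAndDeletePVC shows the new signatures: context.Context first,
// explicit CreateOptions, and DeleteOptions passed by value.
func createAndDeletePVC(c kubernetes.Interface, ns string, pvc *v1.PersistentVolumeClaim) error {
	ctx := context.TODO()
	if _, err := c.CoreV1().PersistentVolumeClaims(ns).Create(ctx, pvc, metav1.CreateOptions{}); err != nil {
		return err
	}

	return c.CoreV1().PersistentVolumeClaims(ns).Delete(ctx, pvc.Name, metav1.DeleteOptions{})
}
```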
This PR adds support for helm
installation, cephcsi helm chart
deployment and teardown, and also runs the E2E
suite against the helm charts.
Add socat to provide port-forwarding access for helm.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
To test helm charts in CI we need to skip the ceph-csi
deployment in E2E. This PR provides an option in E2E
to enable/disable the cephcsi deployment.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Use mount options when mounting rbd to the staging path
in the StageVolume request; add E2E for mount options.
fixes: #846
updates: #757
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This allows administrators to override the naming prefix for both volumes and snapshots
created by the rbd plugin.
Signed-off-by: Reinier Schoof <reinier@skoef.nl>
If the backend rbd or cephfs pool is already deleted
we need to return success to the DeleteVolume RPC
call to make it idempotent.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
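Sketch of the idempotent path (the error name and lookup helper are illustrative):

```go
package rbd

import (
	"errors"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// errPoolNotFound stands in for the error seen when the backend pool has
// already been removed.
var errPoolNotFound = errors.New("pool not found")

// deleteVolumeIdempotent returns success when the pool backing the volume
// is already gone, so repeated DeleteVolume calls stay idempotent.
func deleteVolumeIdempotent(lookupPool func() error) (*csi.DeleteVolumeResponse, error) {
	if err := lookupPool(); err != nil {
		if errors.Is(err, errPoolNotFound) {
			// Nothing left to delete.
			return &csi.DeleteVolumeResponse{}, nil
		}

		return nil, err
	}
	// ...normal image deletion continues here...
	return &csi.DeleteVolumeResponse{}, nil
}
```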
* moves the KMS type from the StorageClass into the KMS configuration itself
* updates the omapval used to identify the KMS to only its ID, without the type
Why?
1. when using multiple KMS configurations (not currently supported),
automated parsing of the KMS configuration would fail because some
entries in the configs won't comply with the requested type
2. fewer options are needed in the StorageClass and less data is used to
identify the KMS
Signed-off-by: Vasyl Purchel <vasyl.purchel@workday.com>
Signed-off-by: Andrea Baglioni <andrea.baglioni@workday.com>
- adds a proposal document for PVC encryption from PR448
- adds per-volume encryption by generating an encryption passphrase
for each volume and storing it in a KMS
- adds HashiCorp Vault integration as a KMS for encryption passphrases
- avoids encrypting a volume a second time if it was already encrypted but
no file system was created
- avoids unnecessary checks of whether the volume is a mapped device when
encryption was not requested
- prevents resizing encrypted volumes (it is not currently supported)
- prevents creating snapshots from encrypted volumes to prevent attacks
on the encryption key (a security guard until re-encryption of volumes
is implemented)
Signed-off-by: Vasyl Purchel <vasyl.purchel@workday.com>
Signed-off-by: Andrea Baglioni <andrea.baglioni@workday.com>
Fixes: #420
Fixes: #744
Adds encryption as a StorageClass parameter. The encryption passphrase is
stored in kubernetes secrets per StorageClass. Implements rbd volume
encryption relying on dm-crypt and cryptsetup using the LUKS extension.
The change is related to the proposal made earlier. This is the first part of
the full feature that adds encryption with a passphrase stored in secrets.
Signed-off-by: Vasyl Purchel <vasyl.purchel@workday.com>
Signed-off-by: Andrea Baglioni <andrea.baglioni@workday.com>
Signed-off-by: Ioannis Papaioannou <ioannis.papaioannou@workday.com>
Signed-off-by: Paul Mc Auley <paul.mcauley@workday.com>
Signed-off-by: Sergio de Carvalho <sergio.carvalho@workday.com>
We have the e2e test with --deploy-rook=true that sets up the whole test
environment. It works fine, but it does not seem to be the role of the
e2e test. In addition, when developing the code we need to run the full
test scenario, deploying rook every time, or we need to build the
rook environment by hand. Move the rook-deploy code to minikube.sh.
Currently the rbd CSI plugin uses FormatAndMount of
mount.SafeFormatAndMount. This does not allow passing or using
specific formatting arguments with it. This patch introduces
RBD-specific formatting options for both xfs and ext4,
for example: -E nodiscard with ext4 and the -K option with
xfs, to boost formatting performance of the RBD device.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
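Roughly, the per-filesystem arguments look like this (the exact flags used by the plugin may differ; these are the standard mkfs options for skipping discard):

```go
package rbd

// formatArgs returns extra mkfs arguments for the requested filesystem.
func formatArgs(fsType, devicePath string) []string {
	switch fsType {
	case "ext4":
		// -E nodiscard: do not discard blocks at mkfs time.
		return []string{"-E", "nodiscard", devicePath}
	case "xfs":
		// -K: do not attempt to discard blocks at mkfs time.
		return []string{"-K", devicePath}
	default:
		return []string{devicePath}
	}
}
```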
In the toolbox the mon endpoints are not
updated properly; this is causing an issue in E2E.
This PR is a workaround to fix that issue.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Use a Deployment with leader election instead of a StatefulSet.
A Deployment behaves better when a node gets disconnected
from the rest of the cluster: a new provisioner leader
is elected in ~15 seconds, while it may take up to
5 minutes for a StatefulSet to start a new replica.
Refer: kubernetes-csi/external-provisioner@52d1fbc
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
In the NodeStage RPC call we have to map the
device on the node plugin and make sure
the device is mounted to the global path.
In the NodeUnstage request, unmount the device from
the global path and unmap the device.
If the volume mode is block, we create
a file inside the stageTargetPath and it will be
considered the global path.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
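For the block-mode case, the staging step boils down to something like the sketch below (simplified; the real NodeStageVolume does more validation):

```go
package rbd

import "os"

// ensureBlockStagingFile creates the plain file that acts as the global
// (staging) path; the mapped rbd device is then bind-mounted onto it.
func ensureBlockStagingFile(stagingTargetPath string) error {
	f, err := os.OpenFile(stagingTargetPath, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}

	return f.Close()
}
```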
* Enable all static-checks in golangci-lint
* Update golangci-lint version
* Fix issues found by golangci-lint
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
A Deployment behaves better when a node gets disconnected from the rest of
the cluster: a new provisioner leader is elected in ~15 seconds, while
it may take up to 5 minutes for a StatefulSet to start a new replica.
Refer: 52d1fbcf9d
Fixes: #335
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>