This function was wrongly declared with the name initResouces() in the e2e
utils package; this patch addresses the typo in the name.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The e2e bootstrap does not make use of these declarations, or they are
declared in it unnecessarily; this commit removes them.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Wrapcheck is a simple Go linter to check that errors
from external packages are wrapped during return to
help identify the error source during debugging.
This commit addresses the wrapcheck errors.
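A hedged sketch of the kind of change wrapcheck asks for (the called
function is illustrative, not taken from this commit):

    // before: the error from the external package is returned as-is
    if err := conn.Connect(); err != nil {
        return err
    }

    // after: wrap the error so its source is identifiable while debugging
    if err := conn.Connect(); err != nil {
        return fmt.Errorf("failed to connect to the cluster: %w", err)
    }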
Updates: #2025
Signed-off-by: Yati Padia <ypadia@redhat.com>
Add a case to create a new PVC with VolumeContentSource set to a
thick-provisioned PVC. This should result in a new thick-provisioned PVC
once the cloning is done.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
when all the PVCs and associated images are deleted,
the images should also get deleted from the trash.
This commit adds a validation check for that.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The variable for rbd mount options has been renamed
to rbdMountOptions to be consistent with the other variable naming scheme.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The logErr function logs all the errors that occurred,
along with a message that is passed for each occurrence
of an error.
Co-authored-by: Niels de Vos <ndevos@redhat.com>
Co-authored-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Yug <yuggupta27@gmail.com>
added a helper function to test clone creation
in a different pool.
Co-authored-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Yug <yuggupta27@gmail.com>
made pool an argument of listRBDImages to support
listing rbd images in different pools.
Co-authored-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Yug <yuggupta27@gmail.com>
In the function validatePVCSnapshot(...), we don't need
the validateEncryption variable, as the kms value we pass in
already tells us whether encryption needs to be validated.
Hence, we can avoid using it.
Signed-off-by: Yati Padia <ypadia@redhat.com>
This commit calls `waitForDaemonSets` and `waitForDeploymentComplete`
after upgrading to wait for csi driver pods to be in running state
for both rbd and cephfs upgrade tests.
Signed-off-by: Rakshith R <rar@redhat.com>
Bring up the rbd-nbd map/attach process on the rbd node plugin and expect the
IO to continue uninterrupted.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This is a negative test case to show that, as per the current design,
the IO will fail because of the missing mappings.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This commit addresses ifshort linter issues; the linter
checks whether short syntax for if-statements is possible.
updates: #1586
Signed-off-by: Rakshith R <rar@redhat.com>
Test if metrics are available at all. The actual values are a little
difficult to validate.
BlockMode volumes support metrics since Kubernetes 1.22.
See-also: kubernetes/kubernetes#97972
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Added an E2E test for the case below:
* Create PVC
* Create Snapshot from PVC
* Delete PVC
* Create Clone from Snapshot
* Delete Snapshot
* Mount clone to Application
* Delete Application and PVC Clone
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
when a Snapshot is encrypted during a CreateSnapshot
operation, the encryption key gets created in the KMS.
When we delete the Snapshot, the key should also get
deleted from the KMS.
When we create a volume from a snapshot, we copy the
required information, but we missed copying the
encryption information. This commit adds the missing
information needed to delete the encryption key.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The default number for cloning and snapshot/restore is 10 volumes. This
adds to the time the test suite runs. There is no need to validate 10
copies of the encrypted volume, a single copy is sufficient.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This moves validatePVCSnapshot() into its own function, so that it
follows the same format as validatePVCClone() does already.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
A CSIDriver object can be created in Kubernetes
for the reason below:
if a CSI driver creates a CSIDriver object,
Kubernetes users can easily discover the CSI
drivers installed on their cluster
(simply by issuing kubectl get CSIDriver).
Ref: https://kubernetes-csi.github.io/docs/csi-driver-object.html#what-is-the-csidriver-object
attachRequired always needs to be set to
true to avoid issues with RWO PVCs.
more details about it at https://github.com/rook/rook/pull/4332
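A minimal sketch of creating such an object with client-go, assuming the
k8s.io/api/storage/v1 types (driver name and clientset wiring are
illustrative):

    attachRequired := true
    driver := &storagev1.CSIDriver{
        ObjectMeta: metav1.ObjectMeta{Name: "rbd.csi.ceph.com"},
        Spec: storagev1.CSIDriverSpec{
            // always true, to avoid issues with RWO PVCs
            AttachRequired: &attachRequired,
        },
    }
    _, err := clientset.StorageV1().CSIDrivers().Create(
        context.TODO(), driver, metav1.CreateOptions{})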
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Key existence and removal is only checked for the VaultKMS provider. It
should also be done for the VaultTokensKMS provider.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Checks app deletion when the CephFS volume is already unmounted.
Creates an app and PVC and binds them, unmounts the volume through
the umount command in the CephFS plugin, and checks app deletion.
Signed-off-by: Rakshith R <rar@redhat.com>
execCommandInDaemonsetPod() executes commands inside a given
container of a daemonset pod on a particular node.
Signed-off-by: Rakshith R <rar@redhat.com>
getDaemonSetLabelSelector dynamically returns the labels of a daemonset
given its name and namespace; this is needed since the labels are not the
same for helm and non-helm deployments.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Signed-off-by: Rakshith R <rar@redhat.com>
The current rbd plugin only supports the layering feature
for rbd images. Add the exclusive-lock and journaling image
features for rbd.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: woohhan <woohyung_han@tmax.co.kr>
Deleting a volumesnapshot failed when the backend subvolume
(pvc) and the ceph fs subvolume snapshot were already deleted.
Fixes: #1647
Signed-off-by: Yati Padia <ypadia@redhat.com>
Currently, in the rbd snapshot restore and volume clone E2E we
are not checking any data consistency after doing a snapshot
restore or volume clone. Hence, this PR writes data to
the PVC, checks the checksum of the file, and verifies it against
the snapshot or cloned PVC.
Signed-off-by: Yati Padia <ypadia@redhat.com>
The stripe-size is the most efficient size to write to RBD images.
However, not all images are a multiple of stripe-size large. That means
thick-provisioning would not allocate the full image, and the process
might even fail.
This adds a 50 MB PVC to test the process; 100 MB is coincidentally a
multiple of the (default 4 MB) stripe-size, 50 MB is not.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
when the user provides a VolumeNamePrefix option,
create the subvolume with that prefix, which makes it easy
for the user to identify the subvolumes that belong to
the storageclass. Added an E2E test to verify
that the subvolume name contains the prefix
provided in the storageclass.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
When one Quantity is in GiB and the other in decimal (bytes), the values
should be equal. However, by using ==, this is not the case;
Equals() needs to be used for that.
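A short sketch of the difference, using Cmp() from
k8s.io/apimachinery/pkg/api/resource (the value-based comparison that the
equality helper builds on):

    gib := resource.MustParse("1Gi")
    dec := resource.MustParse("1073741824")
    fmt.Println(gib == dec)        // false: compares the structs, including the cached string form
    fmt.Println(gib.Cmp(dec) == 0) // true: compares only the numeric value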
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When deleting a PVC fails, the following messages are repeated until a
timeout is hit:
cephfs-80811 in state &PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},} to be deleted (600 seconds elapsed)
Because the Phase is not set, the PVC seems to be in a strange state. In
case this happens, log all details from the PVC so that we can identify
additional conditions to check for completed deletion.
Updates: #1874
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When passing a pointer to a PVC and PV, the status of the deleted
objects is not logged correctly. The `PersistentVolumeClaim.Status` and
`PersistedVolume.Status` that is added to the logs contain the status of
the initially created object (reference to the PVC/PV). When the PVC/PV
is removed, there is no guarantee that the object is updated.
Logs show an empty (nullified) `PersistentVolumeClaim.Status`, which is
not helpful. Instead, use the returned PVC/PV from the `Get()` function
and use that for further logging. Even when the `.Status` struct from
the PVC/PV gets wiped, the returned object should have correct details.
Updates: #1874
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Failures when deploying Hashicorp Vault are logged as informative. This
means that testing will continue, even if Vault will not be available.
Instead of logging the errors as INFO, use FAIL so that tests are not
run and the problems are identified early and obviously.
Updates: #1795
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The e2e tests create a Secret for use with the RBD StorageClass.
However, this Secret was not used; instead, the Rook-generated Secret was
linked in the StorageClass.
By using our own Secret from the examples, Rook should not touch it when
we make modifications. In addition, no modifications are needed for
encryption anymore, as these are included in the example.
Updates: #1795
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Added more examples of running the e2e and functional tests using `go test` and
`make` commands.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
IsRetryableAPIError is not available in the latest
kubernetes release, i.e. 1.20.0, so an internal
function called isRetryableAPIError was created for the same purpose.
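A sketch of what such an internal helper can look like, modelled on the
upstream framework function (import aliases assumed):

    import (
        apierrs "k8s.io/apimachinery/pkg/api/errors"
        utilnet "k8s.io/apimachinery/pkg/util/net"
    )

    func isRetryableAPIError(err error) bool {
        // these errors may indicate a transient condition, so a retry may succeed
        if apierrs.IsInternalError(err) || apierrs.IsTimeout(err) ||
            apierrs.IsServerTimeout(err) || apierrs.IsTooManyRequests(err) ||
            utilnet.IsProbableEOF(err) || utilnet.IsConnectionReset(err) {
            return true
        }
        // a Retry-After header is an explicit hint that retrying is fine
        if _, shouldRetry := apierrs.SuggestsClientDelay(err); shouldRetry {
            return true
        }
        return false
    }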
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
When tests run and something goes wrong during deployment, not all
information is available. Logging the events from the namespace where
Ceph-CSI (and Vault) is deployed, might help with troubleshooting.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The csi.volume.owner should get stored when the csi-provisioner sidecar
passes additional metadata. This option is now enabled by default, so
the owner (Kubernetes Namespace) of RBD images is expected to be
available.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Once the Vault API removed a secret, the contents will have been wiped.
The key is still available, until it gets destroyed. This causes the e2e
test to detect an empty secret, and assume that it has not been deleted
yet.
By requesting the `data` field from the secret, an error is thrown in
case the secret has been wiped. This makes it possible for the e2e test
to detect that the secret has been removed and scheduled for destroying.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit adds an E2E test
to verify the metadata created by the controller.
We are not checking the generated omap data,
but we will verify PVC resize and binding the
PVC to an application.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Reduce the number of images that get pulled from Docker Hub. Use the
official CentOS container registry instead.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If the imagePullPolicy is not set and the image
tag is empty or latest, the image is always pulled.
This commit sets the policy to pull the image only
if it is not present.
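For illustration, with the k8s.io/api/core/v1 types the policy looks like
this (image name is illustrative):

    container := v1.Container{
        Name:  "cephcsi",
        Image: "quay.io/cephcsi/cephcsi:canary",
        // pull only when the image is not already present on the node
        ImagePullPolicy: v1.PullIfNotPresent,
    }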
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
There are several go-routines where Failf() is called, which will cause
a Golang panic inside the Ginko test framework. Instead of aborting the
go-routine, capture the error and check for failures once all
go-routines have finished.
The CephFS tests have been updated already; this change only affects the
validatePVCClone() utility function.
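A minimal sketch of the pattern (helper names are illustrative):

    errs := make([]error, total)
    var wg sync.WaitGroup
    wg.Add(total)
    for i := 0; i < total; i++ {
        go func(n int) {
            defer wg.Done()
            // record the error instead of calling Failf() in the go-routine
            errs[n] = createPVCAndApp(f, pvc, app)
        }(i)
    }
    wg.Wait()
    for _, err := range errs {
        if err != nil {
            e2elog.Failf("creating PVC and application failed: %v", err)
        }
    }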
Updates: #1359
Signed-off-by: Niels de Vos <ndevos@redhat.com>
There are several go-routines where Failf() is called, which will cause
a Golang panic inside the Ginko test framework. Instead of aborting the
go-routine, capture the error and check for failures once all
go-routines have finished.
The CephFS tests have been updated already; this change only affects the
RBD tests.
Updates: #1359
Signed-off-by: Niels de Vos <ndevos@redhat.com>
There are several go-routines where Failf() is called, which will cause
a Golang panic inside the Ginko test framework. Instead of aborting the
go-routine, capture the error and check for failures once all
go-routines have finished.
Updates: #1359
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The upgrade-tests-cephfs job fails relatively regularly with the following
error during initial deployment:
timeout waiting for deployment csi-cephfsplugin-provisioner with error error waiting for deployment "csi-cephfsplugin-provisioner" status to match expectation: etcdserver: request timed out
By detecting if the API-server returned a non-fatal error, the test does
not need to abort, but can wait for completion. PollImmediate() will
still return ErrWaitTimeout once the timeout elapsed.
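A sketch of the waiting loop with the retry check added (the deployment
lookup details are illustrative):

    err := wait.PollImmediate(poll, timeout, func() (bool, error) {
        d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            if isRetryableAPIError(err) {
                // e.g. "etcdserver: request timed out": keep polling
                return false, nil
            }
            return false, err
        }
        return d.Status.ReadyReplicas == *d.Spec.Replicas, nil
    })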
Signed-off-by: Niels de Vos <ndevos@redhat.com>
If loadPVC() fails, it returns an error and we expect the PVC object
to be nil too. In many places we check the error and exit;
however, in a few places we are looking at the PVC object.
This commit makes the condition check `err` instead of the `PVC`
object for consistency.
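The consistent pattern then looks like this (a sketch):

    pvc, err := loadPVC(pvcPath)
    if err != nil {
        // check the error, not whether the PVC object is nil
        e2elog.Failf("failed to load PVC: %v", err)
    }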
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The added anti-affinity rules prevent the provisioner pods from being scheduled
on the same nodes. The kubernetes scheduler will spread the pods across nodes to
improve availability during node failures.
Signed-off-by: Nico Berlee <nico.berlee@on2it.net>
validate the backend rbd image count in each
E2E test case. This helps a lot in catching
issues in each test case.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
These test cases will be executed against a rados namespace.
- Create a PVC and bind it to an app.
- Resize block PVC and check device size.
- Create a PVC clone and bind it to an app.
Signed-off-by: Mehdy Khoshnoody <mehdy.khoshnoody@gmail.com>
As we are populating the volume in the other two test cases, for the clone and
snapshot operations, we don't need a specific test case now.
The WriteDataInPod() function is also changed to take the pod spec and write
some data to it.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
writeDataInPod() writes data to the attached PVC using the `dd` command.
It leaves the pod and PVC state as it is.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
In rbd E2E testing, we need to create snapshots and clones
as parallel operations.
This helps us to ensure that the functionality works when
we have parallel delete and create operations, and
it also helps to catch bugs when we get parallel requests.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
In Go 1.13, the fmt.Errorf function supports a new %w verb.
When this verb is present, the error returned by fmt.Errorf
will have an Unwrap method returning the argument of %w,
which must be an error. In all other ways, %w is identical to %v.
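For example (a small sketch using only the errors and fmt packages):

    base := errors.New("file missing")
    err := fmt.Errorf("loading config: %w", base)
    fmt.Println(errors.Unwrap(err) == base) // true: %w makes the cause retrievable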
Updates: #1227
Signed-off-by: Yug <yuggupta27@gmail.com>
We had "ns" as a parameter and then trying to
declare it also as a local variable, which is what
the complaint about "shadowing" refers to.
Issue reported:
shadow: declaration of "ns" shadows declaration at line 57 (govet)
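A reduced, self-contained sketch of the problem and the fix (getNS is
hypothetical; imports fmt):

    func getNS() (string, bool) { return "test-ns", true }

    func report(ns string) {
        if ns == "" {
            // before: `ns, ok := getNS()` here declared a new "ns" that
            // shadowed the parameter, which govet reports
            var ok bool
            ns, ok = getNS() // after: plain assignment reuses the parameter
            _ = ok
        }
        fmt.Println(ns)
    }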
Signed-off-by: Yug <yuggupta27@gmail.com>
Added two new parameters to the e2e tests to skip
the rbd and cephfs tests. This will help us
run more tests in Travis CI.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Reduced the number of pods created
in the ROX E2E to save some time in E2E,
and changed the waiting time from 2 minutes
to 1.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
with the new implementation, when a user creates a snapshot
we create an rbd image in the backend. We need to
validate the total image count in the backend when
creating snapshots and clones.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added E2E testing for the creation
and mounting of a ROX PVC; if the
PVC access mode is ReadOnlyMany,
the application pod should not get
write access to it.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If the mount option in the app
pod is readonly, the pod should not get
write access to the mounted cephfs subvolume.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
With the current code base, the subvolumegroup will
be created once, and even for a different cluster,
subvolumegroup creation is not allowed again.
Added support for creating multiple subvolumegroups by
validating one subvolumegroup creation per cluster.
Fixes: #1123
Signed-off-by: Yug Gupta <ygupta@redhat.com>
Added an E2E test to mount an rbd PVC as readonly
in an application pod and try to create a
file in the readonly PVC; when we try to create
files on a RO PVC, we should get an error.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
if the PVC access mode is ReadOnlyMany
or single-node readonly, mount the rbd
device path to the staging path as readonly
to avoid write operations.
If the PVC access mode is readonly, map the
rbd image as readonly.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Go 1.13 contains support for error wrapping. To support wrapping,
fmt.Errorf now has a %w verb for creating wrapped errors, and three
new functions in the errors package (errors.Unwrap, errors.Is and
errors.As) simplify unwrapping and inspecting wrapped errors.
With this change, if we currently compare errors using ==, we have to
use errors.Is instead. Example:
if err == io.ErrUnexpectedEOF
becomes
if errors.Is(err, io.ErrUnexpectedEOF)
https://tip.golang.org/doc/go1.13#error_wrapping
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Added a test case to cover the unmap of an rbd image
when mounting fails. If we pass an invalid
mount option, the expectation is that mounting
the rbd image to the staging path fails. As the unmap
happens, it should not block rbd PVC deletion
by claiming the rbd image is in use.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
util: golint warns about exported methods to have a
comment or to unexport them.
e2e: golint warns about package comment to be of the form
"Package e2e ..."
Reported-by: https://goreportcard.com/report/github.com/ceph/ceph-csi
Updates: #975
Signed-off-by: Yug Gupta <ygupta@redhat.com>
as kube is the short form of kubernetes,
it is expected to use the full form kubernetes
in the e2e tests.
Updated a few wordings in the e2e.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
we don't need to explicitly set the kind
and apiVersion in the snapshot class object,
as they are already set in the snapshotclass.yaml file.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
the static check is failing as replicapool
is used in 3 or more places; we need to define
a variable and use it.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The snapshot beta CRD won't work if the
kubernetes version is less than 1.17.0.
As the snapshot CRD won't be installed,
we cannot test snapshots, so disable the tests
if the kube version is less than 1.17.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The name of the CephFS SubvolumeGroup for the CSI volumes was hardcoded to "csi". To make permission management in multi tenancy environments easier, this commit makes it possible to configure the CSI SubvolumeGroup.
related to #798 and #931
IneffAssign warns about the following two statements:
Line 1342: warning: ineffectual assignment to rFound (ineffassign)
Line 1350: warning: ineffectual assignment to zFound (ineffassign)
rFound and zFound should be set before entering the loop, otherwise the
initial value will overwrite the updated value on each iteration.
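In reduced form the fix looks like this (the loop contents are
illustrative):

    // after: initialize once before the loop so updates survive iterations;
    // previously the flags were reset inside the loop on every pass
    rFound, zFound := false, false
    for _, t := range topologies {
        if t.region == wantRegion {
            rFound = true
        }
        if t.zone == wantZone {
            zFound = true
        }
    }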
Reported-by: https://goreportcard.com/report/github.com/ceph/ceph-csi
Updates: #975
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit adds support to mention dataPool parameter for the
topology constrained pools in the StorageClass, that can be
leveraged to mention erasure coded pool names to use for RBD
data instead of the replica pools.
Signed-off-by: ShyamsundarR <srangana@redhat.com>
- This commit adds tests only for RBD, as CephFS still needs
an enhancement in CephFS subvolume commands to effectively use
topology based provisioning
Signed-off-by: ShyamsundarR <srangana@redhat.com>
With client-go v1.18.0 there is a change where signatures on methods
in generated clientsets, dynamic, metadata, and scale clients have been
modified to accept context.Context as a first argument.
Signatures of Create, Update, and Patch methods have been updated to accept
CreateOptions, UpdateOptions and PatchOptions respectively.
Signatures of Delete and DeleteCollection methods now accept DeleteOptions
by value instead of by reference.
The framework.RunKubectlInput function now accepts namespace as the first
parameter, which is also accommodated in this PR.
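The change in practice, shown on PVC Get/Delete calls (a sketch):

    // client-go < v0.18.0
    pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(name, metav1.GetOptions{})
    err = c.CoreV1().PersistentVolumeClaims(ns).Delete(name, &metav1.DeleteOptions{})

    // client-go >= v0.18.0: context first, DeleteOptions passed by value
    pvc, err = c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
    err = c.CoreV1().PersistentVolumeClaims(ns).Delete(context.TODO(), name, metav1.DeleteOptions{})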
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Updated E2E to use a normal user instead of the admin user,
one who has access to create and mount cephfs PVCs and
to create and map rbd PVCs.
We will use the user created by rook, who has the above
access.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This PR adds a test case for #904
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
as we need to include the encryption
secret key inside the secret created by
rook, this PR adds the key and value required
for encryption inside the secrets.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This PR adds support for helm
installation, cephcsi helm chart
deployment and teardown, and also runs the E2E
for the helm charts.
Add socat to provide port-forwarding access for helm.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
in e2e, if the configmap is already present,
we need to update it to make life simpler
for the helm chart e2e.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
To test helm charts in CI we need to skip the ceph-csi
deployment in E2E; this PR provides an option in E2E
to enable/disable the cephcsi deployment.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
use mount options when mounting rbd to the staging path
in the StageVolume request, and add E2E for mount options.
Fixes: #846
Updates: #757
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This allows administrators to override the naming prefix for both volumes and
snapshots created by the rbd plugin.
Signed-off-by: Reinier Schoof <reinier@skoef.nl>
If the backend rbd or cephfs pool is already deleted,
we need to return success to the DeleteVolume RPC
call to make it idempotent.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
* moves KMS type from StorageClass into KMS configuration itself
* updates the omapval used to identify the KMS to only its ID, without the type
why?
1. when using multiple KMS configurations (not currently supported),
automated parsing of the kms configuration would fail because some
entries in the configs won't comply with the requested type
2. fewer options are needed in the StorageClass and less data is used to
identify the KMS
Signed-off-by: Vasyl Purchel <vasyl.purchel@workday.com>
Signed-off-by: Andrea Baglioni <andrea.baglioni@workday.com>
- adds a proposal document for PVC encryption from PR448
- adds per-volume encryption by generating an encryption passphrase
for each volume and storing it in a KMS
- adds HashiCorp Vault integration as a KMS for encryption passphrases
- avoids encrypting a volume a second time if it was already encrypted but
no file system was created
- avoids unnecessary checks for whether a volume is a mapped device when
encryption was not requested
- prevents resizing encrypted volumes (it is not currently supported)
- prevents creating snapshots from encrypted volumes to prevent attacks
on the encryption key (a security guard until re-encryption of volumes
is implemented)
Signed-off-by: Vasyl Purchel <vasyl.purchel@workday.com>
Signed-off-by: Andrea Baglioni <andrea.baglioni@workday.com>
Fixes: #420
Fixes: #744
If a backend volume is deleted, the DeleteVolume call for the same should
succeed, detecting that the image is missing and deleting the related OMaps.
This commit adds a test case to ensure this occurs correctly.
Updates: #474
Signed-off-by: ShyamsundarR <srangana@redhat.com>
and its functions in E2E.
update vendor packages
log dismounter command output
use kube v1.17.1 in dependency
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Adds encryption in StorageClass as a parameter. Encryption passphrase is
stored in kubernetes secrets per StorageClass. Implements rbd volume
encryption relying on dm-crypt and cryptsetup using LUKS extension
The change is related to a proposal made earlier. This is the first part of
the full feature that adds encryption with passphrases stored in secrets.
Signed-off-by: Vasyl Purchel <vasyl.purchel@workday.com>
Signed-off-by: Andrea Baglioni <andrea.baglioni@workday.com>
Signed-off-by: Ioannis Papaioannou <ioannis.papaioannou@workday.com>
Signed-off-by: Paul Mc Auley <paul.mcauley@workday.com>
Signed-off-by: Sergio de Carvalho <sergio.carvalho@workday.com>
We have the e2e test with --deploy-rook=true that sets up the whole test
environment. It works fine, but it does not seem to be the role of the
e2e test. In addition, when developing the code we need to run the full
test scenario, deploying rook every time, or we need to build the rook
environment by hand. Move the rook-deploy code to minikube.sh.
If the kube version is 1.13.x, the cephfs
and rbd provisioners are deployed as a statefulset,
and if the kube version is > 1.13.x, the cephfs and
rbd provisioners are deployed as a deployment.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
When rook-ceph is upgraded and some feature changes, e2e can
fail. Change the rook-ceph default version explicitly to 'v1.1.2',
which works fine with the current code.
Currently the rbd CSI plugin uses formatAndMount of
mount.SafeFormatAndMount. This does not allow passing or using
specific formatting arguments with it. This patch introduces
RBD-specific formatting options for both xfs and ext4,
for example: -E nodiscard with ext4 and the -K option with
XFS, to boost formatting performance of the RBD device.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
in the toolbox, mon endpoints are not
updated properly; this is causing an issue in E2E.
This PR is a workaround to fix the issue.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
I think it now takes time to discover the mon IP
from the svc name in the toolbox; this is a workaround
to fix it.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Sometimes the tests fail during cleanup due to unavailable resources that are
listed in the .yaml files. Deleting the missing resources returns
"resource not found". By passing --ignore-not-found to kubectl, this
problem should not happen anymore (and it possibly makes it more obvious
where tests do go wrong).
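For example (the file path is illustrative):

    kubectl delete -f pvc.yaml --ignore-not-found=true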
rook master deploys ceph-csi
by default now; this causes
ceph-csi testing failures. This PR
removes the ceph-csi resources created by rook.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Use a Deployment with leader election instead of a StatefulSet.
A Deployment behaves better when a node gets disconnected
from the rest of the cluster: a new provisioner leader
is elected in ~15 seconds, while it may take up to
5 minutes for a StatefulSet to start a new replica.
Refer: kubernetes-csi/external-provisioner@52d1fbc
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
in the NodeStage RPC call we have to map the
device on the node plugin and make sure
the device is mounted to the global path.
In the NodeUnstage request, unmount the device from
the global path and unmap the device.
If the volume mode is block, we create
a file inside the stageTargetPath and it is
considered the global path.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Currently the cephfs PVC create/delete and all other operations
related to cephfs are failing. One of the recent commits in rook
900abbc967e108ad622648b740a7c57f1268209f has modified ceph-mgr
to run as ceph user rather than root user. The ceph user currently
has no permission to write to the root of the cephfs filesystem.
The fix will be external to CSI itself, but until that lands, sending
a workaround patch so the CSI CI is unblocked.
In this patch, we are setting permission 777 on the root of the cephfs
filesystem, so the ceph user will be able to modify the cephfs filesystem.
Signed-off-by: Poornima G <pgurusid@redhat.com>
Currently the CephFS provisioner mounts the ceph filesystem
and creates a subdirectory as part of provisioning the
volume. Ceph now supports commands to provision fs subvolumes,
hence modify the provisioner to use ceph mgr commands to
(de)provision fs subvolumes.
Signed-off-by: Poornima G <pgurusid@redhat.com>
The RBD plugin needs only a single ID to manage images and operations against a
pool, mentioned in the storage class. The current scheme of 2 IDs is hence not
needed and is removed in this commit.
Further, unlike the CephFS plugin, the RBD plugin splits the user id and the key
between the storage class and the secret respectively. Also, the parameter name
for the key in the secret is noted in the storageclass, making it a variant and
hampering usability/comprehension. This is also fixed by moving the id and the
key to the secret and not retaining them in the storage class, like CephFS.
Fixes: #270
Testing done:
- Basic PVC creation and mounting
Signed-off-by: ShyamsundarR <srangana@redhat.com>
* Enable all static-checks in golangci-lint
* Update golangci-lint version
* Fix issue found in golangci-lint
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Deployment behaves better when a node gets disconnected from the rest of
the cluster - new provisioner leader is elected in ~15 seconds, while
it may take up to 5 minutes for StatefulSet to start a new replica.
Refer: 52d1fbcf9d
Fixes: #335
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>