Previously we were retrieving the clusterID from the monitors field
of the volume context in the node stage code path. However, the
clusterID can be retrieved directly from the volume context. This
commit also removes the previously used
getClusterIDFromMigrationVolume() function and its tests.
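A minimal sketch of the new lookup, assuming the usual `clusterID`
key in the volume context (not the driver's exact code):

```go
package main

import (
	"errors"
	"fmt"
)

// clusterIDFromVolumeContext reads the clusterID directly from the CSI
// volume context map instead of deriving it from the "monitors" field.
func clusterIDFromVolumeContext(volCtx map[string]string) (string, error) {
	id, ok := volCtx["clusterID"]
	if !ok || id == "" {
		return "", errors.New("clusterID missing in volume context")
	}
	return id, nil
}

func main() {
	id, err := clusterIDFromVolumeContext(map[string]string{"clusterID": "ceph-1"})
	fmt.Println(id, err)
}
```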
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
At present we reuse (overload) the same variable name across test
runs. This commit uses a distinct variable name, freshly initialized
in each run.
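An illustrative (hypothetical) example of the pattern; the real test
table and names differ:

```go
package example

import "testing"

func TestEachRun(t *testing.T) {
	testCases := []struct{ name, input string }{
		{"first", "a"},
		{"second", "b"},
	}
	for _, tc := range testCases {
		tc := tc // fresh variable per run instead of reusing the loop variable
		t.Run(tc.name, func(t *testing.T) {
			if tc.input == "" {
				t.Fatalf("empty input in %q", tc.name)
			}
		})
	}
}
```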
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This is to preserve the rbd-nbd logs post-unmap, so that the CI can
dump the available logs from the logdir.
Fixes: #2451
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
For a static volume, the user manually mounts an already existing
image as a volume to the application pods. As it is an RBD image, if
the PVC is of Filesystem mode the image will be mapped, formatted
and mounted on the node. If the user resizes the image on the ceph
cluster, the filesystem created on the rbd image is not resized
automatically. Even if the user deletes and recreates the kubernetes
objects, the new size will not be visible on the node.
With this change, during the NodeStageVolumeRequest the nodeplugin
checks the size of the mapped rbd image on the node (using the
devicePath) against the rbd image size on the ceph cluster. If the
sizes do not match, it resizes the filesystem on the node as part of
the NodeStageVolumeRequest RPC call.
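A rough sketch of that check, with assumed helper names and an ext4
filesystem; the cluster-side image size would be fetched from ceph
elsewhere:

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// mappedDeviceSize returns the size in bytes of the mapped rbd device,
// read on the node with blockdev(8).
func mappedDeviceSize(devicePath string) (uint64, error) {
	out, err := exec.Command("blockdev", "--getsize64", devicePath).Output()
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(out)), 10, 64)
}

// resizeIfNeeded compares the device size with the image size reported
// by the ceph cluster and grows the filesystem when they differ.
func resizeIfNeeded(devicePath string, clusterImageSize uint64) error {
	devSize, err := mappedDeviceSize(devicePath)
	if err != nil {
		return err
	}
	if devSize == clusterImageSize {
		return nil // sizes match, nothing to do
	}
	// resize2fs expands the ext4 filesystem to fill the device.
	return exec.Command("resize2fs", devicePath).Run()
}

func main() {
	fmt.Println(resizeIfNeeded("/dev/rbd0", 2<<30))
}
```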
The user needs to do the below operations to see the new size:
* Resize the rbd image in the ceph cluster.
* Scale down all the application pods using the static PVC.
* Make sure no application pod using the static PVC is running
on any node.
* Scale up all the application pods.
Validate the new size in the volume mounted in the application
pod.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
In the NodeStage operation we flatten the image to support mounting
on older clients. This commit moves that logic into a helper
function to reduce code complexity.
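A sketch of the shape of that extraction (type, fields and function
name are assumptions, not the driver's actual code):

```go
package main

import (
	"context"
	"fmt"
)

// rbdImage is a stand-in type for this sketch.
type rbdImage struct {
	name     string
	depth    int // length of the clone/parent chain
	maxDepth int // deepest chain older clients can handle
}

// flattenImageBeforeMapping is the extracted helper: NodeStage calls
// it instead of carrying the flatten logic inline.
func flattenImageBeforeMapping(ctx context.Context, img *rbdImage) error {
	if img.depth <= img.maxDepth {
		return nil // older clients can use the image as-is
	}
	// A real implementation would issue an rbd flatten here.
	fmt.Printf("flattening %s (depth %d > %d)\n", img.name, img.depth, img.maxDepth)
	return nil
}

func main() {
	_ = flattenImageBeforeMapping(context.Background(), &rbdImage{name: "vol-1", depth: 3, maxDepth: 2})
}
```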
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This initial version of yamlgen generates deploy/scc.yaml based on the
deployment artifact that is provided by the new api/deploy/ocp package.
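In outline, the generator does something like the following; the
embedded artifact stands in for what api/deploy/ocp provides, and
this is not the package's real API:

```go
package main

import (
	"log"
	"os"
)

// sccYAML stands in for the SecurityContextConstraints artifact that
// the api/deploy/ocp package renders.
const sccYAML = `apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: ceph-csi
`

func main() {
	// yamlgen's job in a nutshell: write the rendered artifact into
	// the deploy/ tree so it can be committed alongside the code.
	if err := os.WriteFile("deploy/scc.yaml", []byte(sccYAML), 0o600); err != nil {
		log.Fatal(err)
	}
}
```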
Signed-off-by: Niels de Vos <ndevos@redhat.com>
During PVC snapshot/clone both the kms config and the passphrase
need to be copied, while for PVC restore only the passphrase needs
to be copied to the destination rbdvol, since the destination
storageclass may have a different kms config.
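A minimal sketch of the rule, with stand-in types rather than the
actual rbdVolume fields:

```go
package main

import "fmt"

// volEncryption is a stand-in for per-volume encryption state.
type volEncryption struct {
	kmsID      string
	passphrase string
}

// copyEncryption always copies the passphrase; the KMS config is
// copied only for snapshot/clone, since on restore the destination
// storageclass may carry its own KMS config.
func copyEncryption(dst, src *volEncryption, isSnapshotOrClone bool) {
	dst.passphrase = src.passphrase
	if isSnapshotOrClone {
		dst.kmsID = src.kmsID
	}
}

func main() {
	src := &volEncryption{kmsID: "vault", passphrase: "s3cret"}
	dst := &volEncryption{kmsID: "metadata"} // from the destination storageclass
	copyEncryption(dst, src, false)          // PVC restore path
	fmt.Println(dst.kmsID, dst.passphrase)
}
```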
Signed-off-by: Rakshith R <rar@redhat.com>
This commit adds logic to detect whether a passed-in volumeID is a
migration volume ID; if so, the driver connects to the backend
cluster and cleans up/deletes the image. The logic is applied only
to migration volume IDs. A migration volume ID carries information
like the mons, pool and image name, which is enough for the driver
to identify and connect to the backend cluster for its operations.
migration volID format:
<mig>_mons-<monsHash>_image-<imageUID>_<poolHash>
Details on the hash values (a parsing sketch follows the list):
* MonsHash: carries a hash value (md5sum) that acts as the
`clusterID` for the operations in this context.
* ImageUID: the unique UUID generated by kubernetes for the created
volume.
* PoolHash: an encoded string of the pool name.
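A simplified parsing sketch of that format (type and function names
are assumptions):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// migVolID holds the fields recovered from a migration volume ID of
// the form mig_mons-<monsHash>_image-<imageUID>_<poolHash>.
type migVolID struct {
	monsHash string // md5sum acting as the clusterID
	imageUID string // kubernetes-generated UUID of the volume
	poolHash string // encoded pool name
}

// parseMigVolID splits a migration volume ID into its components and
// rejects anything that does not match the expected layout.
func parseMigVolID(volID string) (*migVolID, error) {
	parts := strings.Split(volID, "_")
	if len(parts) != 4 || parts[0] != "mig" ||
		!strings.HasPrefix(parts[1], "mons-") ||
		!strings.HasPrefix(parts[2], "image-") {
		return nil, errors.New("not a migration volume ID")
	}
	return &migVolID{
		monsHash: strings.TrimPrefix(parts[1], "mons-"),
		imageUID: strings.TrimPrefix(parts[2], "image-"),
		poolHash: parts[3],
	}, nil
}

func main() {
	id := "mig_mons-7982de2b_image-c20495a2-1d45-4b3c-8a6e-1d3f6a2b5c7e_706f6f6c"
	fmt.Println(parseMigVolID(id))
}
```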
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit adds a test for the migration delete volID detection
scenario, by passing a custom volID with the configmap entries
changed to simulate the situation. The staticPV function was also
changed to accept an annotation map, which makes it more generally
usable.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
createCustomConfigmap helps to create a custom cluster entry in the
configmap; however, it was coupled with filling the subvolumegroup
in the cluster configuration. This commit makes it more general: the
subvolumegroup filling is now controlled with a flag.
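Roughly, the helper now has this shape (an illustrative sketch, not
the exact e2e code):

```go
package main

import "fmt"

// clusterInfo is a stand-in for one cluster entry in the configmap.
type clusterInfo struct {
	ClusterID      string
	SubvolumeGroup string
}

// createCustomConfigmap builds a custom cluster entry; filling the
// subvolumegroup is now opt-in via the flag instead of always
// happening.
func createCustomConfigmap(clusterID string, fillSubvolumeGroup bool) clusterInfo {
	entry := clusterInfo{ClusterID: clusterID}
	if fillSubvolumeGroup {
		entry.SubvolumeGroup = "csi"
	}
	return entry
}

func main() {
	fmt.Println(createCustomConfigmap("ceph-1", false)) // general usage, no subvolumegroup
}
```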
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
As we are refactoring the cephfs code, move all the core functions
into a new package called core. This will make things easier to
implement. From now on, all core functionality will be added to the
core package.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
We don't need a securityContext for the cephfs provisioner pod, as
it does not perform any special operations like mounts, selinux
operations, etc.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The cephfs deployment does not need extra permissions like
privileged and Capabilities; also reduce unwanted volumes.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The cephfs deployment does not need extra permissions like
privileged and Capabilities; also remove unwanted volumes.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The rbd deployment does not need extra permissions like privileged
and Capabilities; also remove unwanted volumes.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
We don't need a securityContext for the rbd provisioner pod, as it
does not perform any special operations like map/unmap, selinux,
etc.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
We don't need a securityContext for the cephfs provisioner pod, as
it does not perform any special operations.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>