Commit Graph

9 Commits

Author SHA1 Message Date
Yati Padia
847b996501 cleanup: Modifies Wrapcheck linter
Wrapcheck is a simple Go linter that checks that
errors from external packages are wrapped when they
are returned, to help identify the error source
during debugging. This commit addresses the
wrapcheck errors.
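
A minimal sketch of the pattern wrapcheck enforces (the function and
path below are illustrative, not taken from this repository):

    package util

    import (
        "fmt"
        "os"
    )

    // statSize calls into an external package (the standard library
    // here) and wraps the returned error with %w, so the failing call
    // site is identifiable when the error is inspected during debugging.
    func statSize(path string) (int64, error) {
        fi, err := os.Stat(path)
        if err != nil {
            return 0, fmt.Errorf("failed to stat %q: %w", path, err)
        }
        return fi.Size(), nil
    }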

Updates: #2025

Signed-off-by: Yati Padia <ypadia@redhat.com>
2021-06-22 08:47:55 +00:00
Humble Chirammal
0fae0e53b6 cleanup: various source code comment corrections
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2021-04-16 10:22:35 +00:00
Madhu Rajanna
e2fa84357a rbd: take lock when reconciling the PV
There is a chance that we reconcile the same
PV in parallel and end up generating and
deleting multiple omap keys. To be on the
safer side, take a lock to process one
volumeHandle at a time.
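
A minimal sketch of the per-volumeHandle locking idea (illustrative;
the actual ceph-csi helper may differ):

    package util

    import "sync"

    // VolumeLocks tracks volumeHandles that are currently being
    // processed, so the same PV is never reconciled in parallel.
    type VolumeLocks struct {
        mu   sync.Mutex
        busy map[string]struct{}
    }

    func NewVolumeLocks() *VolumeLocks {
        return &VolumeLocks{busy: make(map[string]struct{})}
    }

    // TryAcquire returns false when the volumeHandle is already held;
    // the caller can then requeue the PV and retry later.
    func (v *VolumeLocks) TryAcquire(volumeHandle string) bool {
        v.mu.Lock()
        defer v.mu.Unlock()
        if _, held := v.busy[volumeHandle]; held {
            return false
        }
        v.busy[volumeHandle] = struct{}{}
        return true
    }

    func (v *VolumeLocks) Release(volumeHandle string) {
        v.mu.Lock()
        defer v.mu.Unlock()
        delete(v.busy, volumeHandle)
    }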

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2021-04-07 11:46:27 +00:00
Madhu Rajanna
0f8813d89f rbd: store/read volumeID in/from PV annotation
In the case of async DR, the volumeID will
not be the same if the clusterID or the poolID
is different. With the earlier implementation,
the new volumeID mapping was expected to be
stored in the rados omap pool. In the case of a
ControllerExpand or DeleteVolume request,
only the volumeID is sent, so it is not possible
to find the corresponding poolID in the new cluster.

With this change, it works as below:

The csi-rbdplugin-controller watches for PV
objects. When a PV object is created, it checks
whether the omap already exists. If the omap does
not exist, it generates the new volumeID, checks
for a volumeID mapping entry in the PV annotation,
and, if the mapping does not exist, adds the new
entry to the PV annotation.

If the omap does not exist, cephcsi checks the
PV annotations; if the mapping exists in the PV
annotation, it uses the new volumeID for further
operations.
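
A minimal sketch of that annotation round-trip; the annotation key and
helper names are assumptions, not the actual ceph-csi identifiers:

    package controller

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // volumeIDAnnotation is a hypothetical key for the volumeID mapping.
    const volumeIDAnnotation = "rbd.csi.ceph.com/volume-id"

    // storeVolumeIDMapping adds the newly generated volumeID to the PV
    // annotation when no mapping exists yet.
    func storeVolumeIDMapping(ctx context.Context, c kubernetes.Interface,
        pv *corev1.PersistentVolume, newVolumeID string) error {
        if _, ok := pv.Annotations[volumeIDAnnotation]; ok {
            return nil // mapping already present
        }
        if pv.Annotations == nil {
            pv.Annotations = map[string]string{}
        }
        pv.Annotations[volumeIDAnnotation] = newVolumeID
        if _, err := c.CoreV1().PersistentVolumes().Update(ctx, pv, metav1.UpdateOptions{}); err != nil {
            return fmt.Errorf("failed to update PV %q: %w", pv.Name, err)
        }
        return nil
    }

    // lookupVolumeID prefers the mapped volumeID from the annotation and
    // falls back to the volumeHandle stored in the PV spec.
    func lookupVolumeID(pv *corev1.PersistentVolume) string {
        if id, ok := pv.Annotations[volumeIDAnnotation]; ok {
            return id
        }
        if pv.Spec.CSI == nil {
            return ""
        }
        return pv.Spec.CSI.VolumeHandle
    }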

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2021-04-07 11:46:27 +00:00
Rakshith R
0f7b653b4e cleanup: refactor deeply nested if statements in persistentvolume.go
Refactored deeply nested if statements in persistentvolume.go to
reduce cognitive complexity.
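
An illustrative before/after of the pattern (the types and conditions
are made up, not the actual persistentvolume.go code):

    package controller

    type pvInfo struct {
        driver string
        static bool
    }

    // Before: three levels of nesting hide the actual condition.
    func shouldReconcileNested(pv *pvInfo, driverName string) bool {
        if pv != nil {
            if pv.driver == driverName {
                if !pv.static {
                    return true
                }
            }
        }
        return false
    }

    // After: guard clauses keep the happy path at one indentation level.
    func shouldReconcile(pv *pvInfo, driverName string) bool {
        if pv == nil || pv.driver != driverName {
            return false
        }
        return !pv.static
    }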

Signed-off-by: Rakshith R <rar@redhat.com>
2021-04-07 02:31:41 +00:00
Niels de Vos
37471c7a5f cleanup: return error type in ReconcilePersistentVolume.getCredentials()
Signed-off-by: Niels de Vos <ndevos@redhat.com>
2020-12-09 08:35:35 +00:00
Madhu Rajanna
b2fb43b335 cleanup: reduce the code complexity of controller
Created a new getCredentials helper function.
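
A minimal sketch of the idea: hide the secret lookup behind one call so
the reconcile path stays flat. The secret keys and signature here are
assumptions, not the actual helper:

    package controller

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // credentials is an illustrative stand-in for the driver's
    // credential type.
    type credentials struct {
        ID  string
        Key string
    }

    // getCredentials reads the secret referenced by the PV and turns it
    // into a credentials value.
    func getCredentials(ctx context.Context, c kubernetes.Interface,
        name, namespace string) (*credentials, error) {
        secret, err := c.CoreV1().Secrets(namespace).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return nil, fmt.Errorf("failed to get secret %s/%s: %w", namespace, name, err)
        }
        return &credentials{
            ID:  string(secret.Data["userID"]),
            Key: string(secret.Data["userKey"]),
        }, nil
    }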

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2020-12-07 11:03:27 +00:00
Madhu Rajanna
e243c0006b rbd: don't generate OMAP data for static volume
If the user has created a static PV for an RBD
image that was not created by the CSI driver, don't
generate the OMAP data.
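
A minimal sketch of the guard, assuming a hypothetical "staticVolume"
volume attribute marks pre-provisioned PVs:

    package controller

    import corev1 "k8s.io/api/core/v1"

    // isStaticVolume reports whether the PV was pre-provisioned by the
    // user rather than created by the CSI driver. The "staticVolume"
    // attribute key is an assumption for illustration.
    func isStaticVolume(pv *corev1.PersistentVolume) bool {
        if pv.Spec.CSI == nil {
            return false
        }
        return pv.Spec.CSI.VolumeAttributes["staticVolume"] == "true"
    }

In the reconcile path, static volumes would then be skipped before any
OMAP data is generated.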

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2020-12-07 11:03:27 +00:00
Madhu Rajanna
68bd44beba rbd: add new controller to regenerate omap data
In the case of Disaster Recovery failover, the
user is expected to create static PVCs. We have
planned not to go with the PVC name and namespace
for many reasons (in Kubernetes it is planned to
support transferring a PVC to a new namespace
with a different name, and new features such as
data populators are coming). For now, we are
planning to go with static PVCs to support
async mirroring.

During async mirroring only the RBD images are
mirrored to the secondary site, and when the
user creates the static PVCs on failover
we need to regenerate the omap data. The
volumeHandle in the PV spec is an encoded string
which contains the clusterID, poolID, and image
UUID. The clusterID and poolID won't remain the
same on both clusters, so cephcsi needs to
generate a new volume handle and create a mapping
between the new volume handle and the old one.
With that, whenever cephcsi gets a CSI request,
it checks whether the mapping exists; if it does,
it pulls the new volume handle and continues with
the other operations.
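
A minimal sketch of the handle composition; the layout is illustrative
only, not the exact ceph-csi encoding:

    package controller

    import "fmt"

    // composeVolumeHandle shows why the handle cannot survive a
    // failover: it embeds the clusterID and poolID, which differ on the
    // secondary site even though the image UUID is the same. The real
    // encoding is versioned and fixed-width; this layout is made up.
    func composeVolumeHandle(clusterID string, poolID int64, imageUUID string) string {
        return fmt.Sprintf("%s-%d-%s", clusterID, poolID, imageUUID)
    }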

The new controller watches for PVs being created.
It checks whether the omap exists; if it doesn't,
it regenerates the entire omap data.
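
A minimal sketch of that watch-and-regenerate loop, using the
controller-runtime Reconcile signature; the omap helpers are
placeholders, not the real implementation:

    package controller

    import (
        "context"

        ctrl "sigs.k8s.io/controller-runtime"
    )

    // pvReconciler sketches the controller described above.
    type pvReconciler struct{}

    func (r *pvReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        exists, err := r.omapExists(ctx, req.Name)
        if err != nil {
            return ctrl.Result{}, err
        }
        if exists {
            return ctrl.Result{}, nil // nothing to regenerate
        }
        // omap is missing, e.g. after a DR failover: rebuild all of it.
        return ctrl.Result{}, r.regenerateOMAP(ctx, req.Name)
    }

    func (r *pvReconciler) omapExists(ctx context.Context, pvName string) (bool, error) {
        return false, nil // placeholder
    }

    func (r *pvReconciler) regenerateOMAP(ctx context.Context, pvName string) error {
        return nil // placeholder
    }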

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2020-11-28 18:50:00 +00:00