ceph-csi/internal
Madhu Rajanna 17d47a4c31 rbd: remove checkHealthyPrimary check
After failover of workloads to the secondary
cluster when the primary cluster is down,
the RBD image is not marked healthy and the VR
resources are not promoted to Primary:
in VolumeReplication, the `CURRENT STATE`
remains Unknown and never changes to Primary.

This happens because the primary cluster went down
and the image was force-promoted on the
secondary cluster, so the image can stay in
up+stopping_replay or any other state.
The previous assumption was that the image would
always be `up+stopped`, but that only holds for a
planned failover; after a forced failover the
image can be in any other state. For this reason,
remove checkHealthyPrimary from the PromoteVolume
RPC call.
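
For illustration only, a minimal Go sketch (not the actual ceph-csi
code; the status type, field name, and helper name below are made up)
of why a promotion gate that insists on `up+stopped` can never pass
after a forced failover:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // mirrorImageState is a hypothetical stand-in for the mirror image
    // status reported by RBD mirroring, e.g. "up+stopped",
    // "up+stopping_replay", "up+error".
    type mirrorImageState string

    // checkHealthyPrimary sketches the removed pre-promotion check: it
    // only accepts the state seen after a *planned* failover.
    func checkHealthyPrimary(state mirrorImageState) error {
    	if state != "up+stopped" {
    		return errors.New("image is not in up+stopped state: " + string(state))
    	}
    	return nil
    }

    func main() {
    	// Planned failover: the check passes and promotion proceeds.
    	fmt.Println(checkHealthyPrimary("up+stopped"))

    	// Forced failover with the primary cluster down: the image can be
    	// left in up+stopping_replay (or another state), so the check keeps
    	// failing and VolumeReplication stays in CURRENT STATE: Unknown.
    	fmt.Println(checkHealthyPrimary("up+stopping_replay"))
    }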

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 8c5563a9bc)
2022-07-27 09:39:56 +00:00
cephfs cephfs: skip NetNamespaceFilePath if the volume is pre-provisioned 2022-06-03 12:47:27 +00:00
controller rbd: use leases for leader election 2022-04-15 10:24:19 +00:00
csi-addons rbd: return unimplemented error for block-mode reclaimspace req 2022-03-03 19:00:49 +00:00
csi-common rbd: remove unimplemented responses for node operations 2022-03-16 15:27:48 +00:00
journal journal: add StoreAttribute/FetchAttribute 2022-03-28 11:23:17 +00:00
kms rbd: create token and use it for vault SA 2022-06-17 14:10:18 +00:00
liveness cleanup: move log functions to new internal/util/log package 2021-08-26 09:34:05 +00:00
nfs nfs: delete the CephFS volume when the export is already removed 2022-05-05 03:27:30 +00:00
rbd rbd: remove checkHealthyPrimary check 2022-07-27 09:39:56 +00:00
util rbd: create token and use it for vault SA 2022-06-17 14:10:18 +00:00