rbd: add new controller to regenerate omap data

In the case of Disaster Recovery failover, the
user is expected to create static PVCs. We have
planned not to go with the PVC name and namespace
for many reasons (in kubernetes it is planned to
support transferring a PVC to a new namespace
with a different name, and new features like
data populators are coming in). For now, we are
planning to go with static PVCs to support
async mirroring.

During async mirroring only the RBD images are
mirrored to the secondary site, and when the
user creates the static PVCs on failover we
need to regenerate the omap data. The
volumeHandle in the PV spec is an encoded string
which contains the clusterID, poolID and image
UUID. The clusterID and poolID won't remain the
same on both clusters, so cephcsi needs to
generate a new volume handle and create a
mapping between the new volume handle and the
old one. With that, whenever cephcsi gets a CSI
request it checks if the mapping exists, pulls
the new volume handle and continues with the
other operations.
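
As an illustrative (hypothetical) example of such
an encoded handle, a primary-cluster volume handle
could look like

  0001-0009-rook-ceph-0000000000000002-<image UUID>

where "rook-ceph" is the clusterID and the hex
field is the poolID; on the secondary cluster
those two fields can differ, which is why the
same image needs a freshly generated handle plus
the old-to-new mapping described above.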

The new controller watches for PVs being
created. It checks if the omap exists and, if
it doesn't, regenerates the entire omap data.
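
A minimal sketch of what such a PV watcher could
look like (assuming controller-runtime v0.7+; the
ReconcilePV type and the regenerateJournal helper
are illustrative names, not the actual
implementation):

package rbd

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ReconcilePV regenerates the omap data for statically created PVs
// that are missing it, for example after a DR failover.
type ReconcilePV struct {
	client client.Client
}

func (r *ReconcilePV) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	pv := &corev1.PersistentVolume{}
	if err := r.client.Get(ctx, req.NamespacedName, pv); err != nil {
		if apierrors.IsNotFound(err) {
			// PV was deleted, nothing to do.
			return ctrl.Result{}, nil
		}
		return ctrl.Result{}, err
	}
	// Only CSI volumes provisioned by the RBD driver are of interest.
	if pv.Spec.CSI == nil || pv.Spec.CSI.Driver != "rbd.csi.ceph.com" {
		return ctrl.Result{}, nil
	}
	// regenerateJournal stands in for the omap regeneration logic.
	if err := regenerateJournal(ctx, pv.Spec.CSI.VolumeHandle, pv.Spec.CSI.VolumeAttributes); err != nil {
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}

// regenerateJournal is a placeholder: it would check whether the omap
// data for the given volume handle exists and, if not, recreate it and
// record the old->new volume-handle mapping.
func regenerateJournal(ctx context.Context, volumeHandle string, volumeAttributes map[string]string) error {
	return nil
}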

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>

@@ -434,9 +434,12 @@ func (conn *Connection) UndoReservation(ctx context.Context,
return err
}
// reserveOMapName creates an omap with passed in oMapNamePrefix and a generated <uuid>.
// It ensures generated omap name does not already exist and if conflicts are detected, a set
// number of retires with newer uuids are attempted before returning an error.
// reserveOMapName creates an omap with passed in oMapNamePrefix and a
// generated <uuid>. If the passed volUUID is not empty it will use it instead
// of generating its own UUID and it will return an error immediately if the
// omap already exists. If the passed volUUID is empty it ensures the generated
// omap name does not already exist and, if conflicts are detected, a set
// number of retries with newer uuids are attempted before returning an error.
func reserveOMapName(ctx context.Context, monitors string, cr *util.Credentials, pool, namespace, oMapNamePrefix, volUUID string) (string, error) {
var iterUUID string
@@ -489,8 +492,8 @@ Input arguments:
- namePrefix: Prefix to use when generating the image/subvolume name (suffix is an auto-generated UUID)
- parentName: Name of the parent image/subvolume if reservation is for a snapshot (optional)
- kmsConf: Name of the key management service used to encrypt the image (optional)
- volUUID: UUID that needs to be reserved instead of auto-generating one (this is
  useful for mirroring and metro-DR)
Return values:
- string: Contains the UUID that was reserved for the passed in reqName
@@ -689,3 +692,43 @@ func (conn *Connection) Destroy() {
conn.monitors = ""
conn.cr = nil
}
// CheckNewUUIDMapping checks if there is any UUID mapping between the old
// volumeHandle and the newly generated volumeHandle.
func (conn *Connection) CheckNewUUIDMapping(ctx context.Context,
journalPool, volumeHandle string) (string, error) {
var cj = conn.config
// check if the volume handle is already present in the directory omap
fetchKeys := []string{
cj.csiNameKeyPrefix + volumeHandle,
}
values, err := getOMapValues(
ctx, conn, journalPool, cj.namespace, cj.csiDirectory,
cj.commonPrefix, fetchKeys)
if err != nil {
if errors.Is(err, util.ErrKeyNotFound) || errors.Is(err, util.ErrPoolNotFound) {
// pool or omap (oid) was not present
// stop processing but without an error for no reservation exists
return "", nil
}
return "", err
}
return values[cj.csiNameKeyPrefix+volumeHandle], nil
}
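
As a usage sketch (illustrative only; the helper name and the calling
convention are assumptions, not the actual call site), a CSI request on
the secondary cluster would resolve the handle through this mapping
before carrying on with the rest of the operation:

// resolveVolumeHandle returns the newly generated volume handle if a
// mapping was recorded for the handle found in a statically created
// PV, otherwise it returns the handle unchanged.
func resolveVolumeHandle(ctx context.Context, conn *Connection, journalPool, volumeHandle string) (string, error) {
	mappedHandle, err := conn.CheckNewUUIDMapping(ctx, journalPool, volumeHandle)
	if err != nil {
		return "", err
	}
	if mappedHandle != "" {
		// a mapping exists; continue with the new handle.
		return mappedHandle, nil
	}
	return volumeHandle, nil
}
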
// ReserveNewUUIDMapping creates the omap mapping between the oldVolumeHandle
// and the newVolumeHandle. In case of async mirroring the PV is statically
// created and it will have the oldVolumeHandle. The volumeHandle is composed
// of the clusterID, poolID etc.; as the poolID and clusterID might be
// different at the secondary cluster, cephcsi will generate the new mapping
// and keep it for internal reference.
func (conn *Connection) ReserveNewUUIDMapping(ctx context.Context,
journalPool, oldVolumeHandle, newVolumeHandle string) error {
var cj = conn.config
setKeys := map[string]string{
cj.csiNameKeyPrefix + oldVolumeHandle: newVolumeHandle,
}
return setOMapKeys(ctx, conn, journalPool, cj.namespace, cj.csiDirectory, setKeys)
}
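
And a sketch of the write side (again illustrative; ensureHandleMapping
is a hypothetical helper, not part of this change): while regenerating
the omap data for a statically created PV, the mapping is recorded once
so that later requests can be resolved as shown above:

// ensureHandleMapping records the old->new volume-handle mapping
// unless one has already been reserved.
func ensureHandleMapping(ctx context.Context, conn *Connection, journalPool, oldVolumeHandle, newVolumeHandle string) error {
	existing, err := conn.CheckNewUUIDMapping(ctx, journalPool, oldVolumeHandle)
	if err != nil {
		return err
	}
	if existing != "" {
		// mapping already present, nothing to do.
		return nil
	}
	return conn.ReserveNewUUIDMapping(ctx, journalPool, oldVolumeHandle, newVolumeHandle)
}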