# In-tree storage plugin to CSI Driver Migration

Prior to CSI, Kubernetes provided a powerful volume plugin system. These volume plugins were "in-tree", meaning their code was part of the core Kubernetes code and shipped with the core Kubernetes binaries. The CSI migration effort enables the replacement of existing in-tree storage plugins, such as `kubernetes.io/rbd`, with a corresponding CSI driver. This will be an opt-in feature, turned on at cluster creation time, that redirects in-tree plugin operations to a corresponding CSI driver. The main mechanism we will use to migrate plugins is redirecting in-tree operation calls to the CSI driver instead of the in-tree driver; the external components will pick up these in-tree PVs and use a translation library to translate them to a CSI source. For more information on the CSI migration effort, refer to its design doc.

## RBD in-tree plugin to CSI migration

This feature is tracked in Kubernetes as alpha for v1.23 under the RBD in-tree migration KEP. Below are the changes identified in the design discussions to enable CSI migration for the RBD plugin:

### ClusterID field in the migration request

The clusterID field is required for CSI, while the in-tree StorageClass instead requires the monitors field. The clusterID will therefore be derived from the monitors field and sent by the migration library like in any other CSI request. The Kubernetes storage admin is expected to create a clusterID based on the monitors hash (for example, `md5sum <<< "<monaddress:port>"`) in the CSI config map and keep the monitors under that configuration before enabling the migration. When the CSI driver receives the volume ID, it looks up the config map to figure out the monitors and then performs operations such as creating a volume. Thus, CSI operations are unchanged and untouched with respect to the clusterID field; in other words, for Ceph CSI this is just another request coming in from a client with the required parameters. This is an opaque/NOOP change from the Ceph CSI side for its operations, but it is mentioned here as a reference.
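
For illustration, here is a minimal Go sketch of deriving such a clusterID, assuming the hash is a plain md5 over the joined monitor addresses (the exact canonical string hashed by the translation library may differ):

```go
package main

import (
	"crypto/md5"
	"fmt"
	"strings"
)

// clusterIDFromMonitors sketches how a clusterID can be derived from the
// in-tree monitors field. The ordering and separator of the hashed string
// are assumptions in this example.
func clusterIDFromMonitors(monitors []string) string {
	return fmt.Sprintf("%x", md5.Sum([]byte(strings.Join(monitors, ","))))
}

func main() {
	// The admin would store this ID, together with the monitors, in the
	// CSI config map before enabling the migration.
	fmt.Println(clusterIDFromMonitors([]string{"192.168.1.10:6789"}))
}
```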

### New Volume ID for existing volume operations

An existing volume will have a volume ID passed in, in a format similar to the following:

`mig_mons-<monitors-hash>_image-<imageUUID>_<pool-hash>`

Details on the hash values:

- Monitors Hash: carries a hash value (md5sum) that acts as the clusterID for the operations in this context.
- ImageUUID: the unique UUID generated by Kubernetes for the created volume.
- PoolHash: an encoded string of the pool name.

This volume handle is introduced because an existing in-tree volume does not have the volume ID format (e.g. `0001-0020-b7f67366bb43f32e07d8a261a7840da9-0000000000000002-c867ff35-3f04-11ec-b823-0242ac110003`) which Ceph CSI uses by default. However, Ceph CSI needs certain information, such as the clusterID, pool, and image name, to perform deletion and expansion of volumes. The existing in-tree volume's NodeStageVolume, NodePublishVolume, NodeUnstageVolume, and NodeUnpublishVolume operations should work without any issues, as those will be tracked as static volumes by the CSI driver. For DeleteVolume and ControllerExpandVolume operations, the CSI driver can derive the required details from the above migration-specific volume ID (as it carries information like the monitors, pool, and image name), connect to the backend cluster, and then delete or resize the image.
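
A self-contained sketch of parsing this handle follows. It is an illustration only: the struct and function names are hypothetical, and it assumes the pool hash is a hex encoding of the pool name.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// migrationVolID mirrors the fields the text says must be recovered from
// the handle; the names here are illustrative.
type migrationVolID struct {
	clusterID string // monitors hash, acting as the clusterID
	imageUUID string
	poolName  string
}

// parseMigVolID sketches the parsing of
// mig_mons-<monitors-hash>_image-<imageUUID>_<pool-hash>.
func parseMigVolID(id string) (migrationVolID, error) {
	parts := strings.Split(id, "_")
	if len(parts) != 4 || parts[0] != "mig" ||
		!strings.HasPrefix(parts[1], "mons-") ||
		!strings.HasPrefix(parts[2], "image-") {
		return migrationVolID{}, fmt.Errorf("not a migration volume ID: %q", id)
	}
	// Assumption: the pool hash is hex-encoded.
	pool, err := hex.DecodeString(parts[3])
	if err != nil {
		return migrationVolID{}, fmt.Errorf("bad pool hash: %w", err)
	}
	return migrationVolID{
		clusterID: strings.TrimPrefix(parts[1], "mons-"),
		imageUUID: strings.TrimPrefix(parts[2], "image-"),
		poolName:  string(pool),
	}, nil
}

func main() {
	v, err := parseMigVolID(
		"mig_mons-b7f67366bb43f32e07d8a261a7840da9" +
			"_image-c867ff35-3f04-11ec-b823-0242ac110003" +
			"_7265706c696361706f6f6c") // hex of "replicapool"
	fmt.Println(v, err)
}
```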

### Migration secret for CSI operations

The in-tree secret has a data field called `key`, which holds the base64 admin secret key. The Ceph CSI driver currently expects the secret to contain the equivalent in a data field called `UserKey`. The CSI driver also expects a `UserID` field, which is not available in the in-tree secret by default. This missing user ID will be filled in from the `adminId` field of the migration secret in the migration request (if the username differs from `admin`), i.e.:

the `key` field value from the migration secret is mapped to the `UserKey` field, and the `adminId` field value from the migration secret is mapped to the `UserID` field.

If the `adminId` field is nil or not set, the `UserID` field is filled with the default value, i.e. `admin`. The above logic is activated only when the secret is a migration secret; otherwise it is skipped in favor of the normal workflow we have today. CreateVolume, DeleteVolume, NodeStage, and ExpandVolume have to validate the secret and use it for their operations, as sketched below:
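
A minimal sketch of this field mapping, assuming a plain map-to-map adjustment (the field names follow the text above; the actual ceph-csi key names and helper name may differ):

```go
// adjustMigrationSecret sketches the adjustment described above: the
// in-tree "key" becomes "UserKey", and "adminId" (when present) becomes
// "UserID", defaulting to "admin" otherwise.
func adjustMigrationSecret(in map[string]string) map[string]string {
	out := map[string]string{"UserKey": in["key"]}
	if id := in["adminId"]; id != "" {
		out["UserID"] = id
	} else {
		out["UserID"] = "admin" // default user when adminId is not set
	}
	return out
}
```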

### Helper functions

A few helper/utility functions will be introduced as part of this effort to handle the migration-specific logic.

#### internal/rbd/migration.go

- `isMigrationVolID()` - identifies whether a volume ID is in the above-mentioned migration volume ID form.
- `parseMigrationVolID()` - parses the migration volume ID and populates the required information (pool, image, clusterID, etc.) into a `migrationVolID` structure.
- `deleteMigratedVolume()` - deletes the volume from the cluster using the `migrationVolID` structure fields (pool, image, clusterID); a usage sketch follows this list.
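
A hedged sketch of how these helpers could chain together in the DeleteVolume path; the signatures and the `Credentials` type are assumptions for this example, not the exact ceph-csi ones:

```go
// deleteVolumeIfMigrated sketches the DeleteVolume branch for migration
// volume IDs, chaining the helpers listed above (signatures assumed).
// It reports whether the ID was a migration ID and was handled here.
func deleteVolumeIfMigrated(volumeID string, cr *Credentials) (bool, error) {
	if !isMigrationVolID(volumeID) {
		return false, nil // not a migration volume; normal flow applies
	}
	mVolID, err := parseMigrationVolID(volumeID)
	if err != nil {
		return false, err
	}
	// pool, image, and clusterID from the parsed structure are enough to
	// connect to the backend cluster and remove the image.
	return true, deleteMigratedVolume(mVolID, cr)
}
```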

#### internal/util/credentials.go

- `isMigrationSecret()` - identifies whether the passed-in secret is a migration secret, based on the map values (e.g. the `key` field) of the request secret.
- `NewUserCredentialsWithMigration()` - takes the request secret map as input, validates whether it is a migration secret with the help of `isMigrationSecret()`, and returns credentials by calling `NewUserCredentials()`; see the sketch after this list.
- `ParseAndSetSecretMapFromMigSecret()` - parses the secret map from the migration request and returns a map with adjusted secret fields, so that the CSI driver can continue performing its secret-specific operations.
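
Roughly, the credentials flow described above could look like this (a sketch only; error handling and the exact signatures of the helpers are simplified assumptions):

```go
// NewUserCredentialsWithMigration, as described above: adjust the secret
// map when it is a migration secret, then fall through to the normal
// credentials constructor in both cases.
func NewUserCredentialsWithMigration(secrets map[string]string) (*Credentials, error) {
	if isMigrationSecret(secrets) {
		adjusted, err := ParseAndSetSecretMapFromMigSecret(secrets)
		if err != nil {
			return nil, err
		}
		secrets = adjusted
	}
	return NewUserCredentials(secrets)
}
```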

## CephFS in-tree plugin to CSI migration

This is yet to be done and is tracked in the CephFS in-tree migration KEP.

## Additional Reference

- Kubernetes 1.17 Feature: Kubernetes In-Tree to CSI Volume Migration Moves to Beta
- Tracker Issue in Ceph CSI
- In-tree storage plugin to CSI Driver Migration KEP