doc: few corrections or typo fixing in design documentation

- Fixes spelling mistakes.
- Grammatical error correction.
- Wrapping the text at 80 line count, etc.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>

commit 3196b798cc (parent 12e8e46bcf), committed by mergify[bot]
@@ -1,11 +1,11 @@
 # RBD MIRRORING
 
-RBD mirroring is a process of replication of RBD images between two or more
-Ceph clusters. Mirroring ensures point-in-time, crash-consistent RBD images
-between clusters, RBD mirroring is mainly used for disaster recovery (i.e.
-having a secondary site as a failover). See [Ceph
-documentation](https://docs.ceph.com/en/latest/rbd/rbd-mirroring) on RBD
-mirroring for complete information.
+RBD mirroring is a process of replication of RBD images between two or more Ceph
+clusters. Mirroring ensures point-in-time, crash-consistent RBD images between
+clusters, RBD mirroring is mainly used for disaster recovery (i.e. having a
+secondary site as a failover).
+See [Ceph documentation](https://docs.ceph.com/en/latest/rbd/rbd-mirroring) on
+RBD mirroring for complete information.
 
 ## Architecture
 
@@ -28,8 +28,8 @@ PersistentVolumeClaim (PVC) on the secondary site during the failover.
 VolumeHandle to identify the OMAP data nor the image anymore because as we have
 only PoolID and ClusterID in the VolumeHandle. We cannot identify the correct
 pool name from the PoolID because pool name will remain the same on both
-clusters but not the PoolID even the ClusterID can be different on the
-secondary cluster.
+clusters but not the PoolID even the ClusterID can be different on the secondary
+cluster.
 
 > Sample PV spec which will be used by rbdplugin controller to regenerate OMAP
 > data
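The sample PV spec referenced in the context above sits in an unchanged part of
the document, so it is not visible in this diff. Purely as an illustration of
the fields involved, a trimmed-down sketch is shown below; the driver name is
the Ceph-CSI RBD driver, but every value and the exact attribute set are
assumptions, not the document's sample.

```yaml
# Illustrative sketch only (not the document's sample spec): a static RBD PV
# whose spec carries the information the rbdplugin controller needs to
# regenerate OMAP data on the secondary cluster. All values are made up.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example-mirrored
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  csi:
    driver: rbd.csi.ceph.com
    # Handle generated on the primary cluster; the PoolID/ClusterID encoded
    # in it may not match the secondary cluster.
    volumeHandle: <clusterID>-<poolID>-<volumeuniqueID>
    volumeAttributes:
      clusterID: <primary-clusterID>
      pool: <pool-name>
      imageName: <rbd-image-name>
  persistentVolumeReclaimPolicy: Retain
```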
@@ -56,10 +56,10 @@ csi:
 ```
 
 > **VolumeHandle** is the unique volume name returned by the CSI volume plugin’s
-CreateVolume to refer to the volume on all subsequent calls.
+> CreateVolume to refer to the volume on all subsequent calls.
 
-Once the static PVC is created on the secondary cluster, the Kubernetes User
-can try delete the PVC,expand the PVC or mount the PVC. In case of mounting
+Once the static PVC is created on the secondary cluster, the Kubernetes User can
+try to delete the PVC,expand the PVC or mount the PVC. In case of mounting
 (NodeStageVolume) we will get the volume context in RPC call but not in the
 Delete/Expand Request. In Delete/Expand RPC request only the VolumeHandle
 (`clusterID-poolID-volumeuniqueID`) will be sent where it contains the encoded
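To make the Delete/Expand limitation concrete: only the VolumeHandle string
arrives in those requests, so everything else must be derived from it. A
hypothetical decomposition of the simplified `clusterID-poolID-volumeuniqueID`
form named above is sketched below; the real Ceph-CSI handle encoding is more
involved, so treat every value here as an illustration.

```yaml
# Hypothetical illustration of the simplified handle layout described above.
# In a Delete/Expand request only the volumeHandle string is available.
volumeHandle: <clusterID>-<poolID>-<volumeuniqueID>
decoded:
  clusterID: site1-storage   # illustrative; may not exist on the secondary cluster
  poolID: "1"                # numeric pool ID; differs across clusters even when
                             # the pool name is identical
  volumeuniqueID: <uuid>     # per-image identifier used to look up OMAP data
```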
@@ -73,17 +73,17 @@ secondary cluster as the PoolID and ClusterID always may not be the same.
 
 To solve this problem, We will have a new controller(rbdplugin controller)
 running as part of provisioner pod which watches for the PV objects. When a PV
-is created it will extract the required information from the PV spec and it
+is created it will extract the required information from the PV spec, and it
 will regenerate the OMAP data. Whenever Ceph-CSI gets a RPC request with older
 VolumeHandle, it will check if any new VolumeHandle exists for the old
 VolumeHandle. If yes, it uses the new VolumeHandle for internal operations (to
 get pool name, Ceph monitor details from the ClusterID etc).
 
 Currently, We are making use of watchers in node stage request to make sure
-ReadWriteOnce (RWO) PVC is mounted on a single node at a given point in time.
-We need to change the watchers logic in the node stage request as when we
-enable the RBD mirroring on an image, a watcher will be added on a RBD image by
-the rbd mirroring daemon.
+ReadWriteOnce (RWO) PVC is mounted on a single node at a given point in time. We
+need to change the watchers logic in the node stage request as when we enable
+the RBD mirroring on an image, a watcher will be added on a RBD image by the rbd
+mirroring daemon.
 
 To solve the ClusterID problem, If the ClusterID is different on the second
 cluster, the admin has to create a new ConfigMap for the mapped ClusterID's.
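The exact shape of that mapping ConfigMap is not part of this excerpt, so the
sketch below is a hypothetical illustration of what an admin-supplied mapping
between the two sites could contain; the ConfigMap name, key, and JSON layout
are assumptions, not the committed design.

```yaml
# Hypothetical sketch only: a ConfigMap the admin could create to map the
# primary cluster's ClusterID (and pool IDs) to their counterparts on the
# secondary cluster. Name, key, and structure are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-cluster-mapping   # illustrative name
  namespace: ceph-csi
data:
  cluster-mapping.json: |-
    [
      {
        "clusterIDMapping": {
          "<primary-clusterID>": "<secondary-clusterID>"
        },
        "poolIDMapping": [
          { "<primary-poolID>": "<secondary-poolID>" }
        ]
      }
    ]
```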