During a resync operation the local image gets deleted and a new image is recreated by RBD mirroring. The new image will have a new image ID. Once the resync is completed, update the image ID in the OMAP so that the image is removed from the trash during DeleteVolume.

Before resyncing:

```
sh-4.4# rbd info replicapool/csi-vol-0c25bdd3-485f-11ec-bd30-0242ac110004
rbd image 'csi-vol-0c25bdd3-485f-11ec-bd30-0242ac110004':
	size 1 GiB in 256 objects
	order 22 (4 MiB objects)
	snapshot_count: 1
	id: 1efcc6b7a769
	block_name_prefix: rbd_data.1efcc6b7a769
	format: 2
	features: layering
	op_features:
	flags:
	create_timestamp: Thu Nov 18 11:02:40 2021
	access_timestamp: Thu Nov 18 11:02:40 2021
	modify_timestamp: Thu Nov 18 11:02:40 2021
	mirroring state: enabled
	mirroring mode: snapshot
	mirroring global id: 9c4c236d-8a47-4779-b4f6-94e05da70dbd
	mirroring primary: true
```

```
sh-4.4# rados listomapvals csi.volume.0c25bdd3-485f-11ec-bd30-0242ac110004 --pool=replicapool
csi.imageid
value (12 bytes) :
00000000  31 65 66 63 63 36 62 37  61 37 36 39              |1efcc6b7a769|
0000000c

csi.imagename
value (44 bytes) :
00000000  63 73 69 2d 76 6f 6c 2d  30 63 32 35 62 64 64 33  |csi-vol-0c25bdd3|
00000010  2d 34 38 35 66 2d 31 31  65 63 2d 62 64 33 30 2d  |-485f-11ec-bd30-|
00000020  30 32 34 32 61 63 31 31  30 30 30 34              |0242ac110004|
0000002c

csi.volname
value (40 bytes) :
00000000  70 76 63 2d 32 36 38 39  33 66 30 38 2d 66 66 32  |pvc-26893f08-ff2|
00000010  62 2d 34 61 30 66 2d 61  35 63 33 2d 38 38 34 62  |b-4a0f-a5c3-884b|
00000020  37 32 30 66 66 62 32 63                           |720ffb2c|
00000028

csi.volume.owner
value (7 bytes) :
00000000  64 65 66 61 75 6c 74                              |default|
00000007
```

After resyncing:

```
sh-4.4# rbd info replicapool/csi-vol-0c25bdd3-485f-11ec-bd30-0242ac110004
rbd image 'csi-vol-0c25bdd3-485f-11ec-bd30-0242ac110004':
	size 1 GiB in 256 objects
	order 22 (4 MiB objects)
	snapshot_count: 1
	id: 10b183a48a97
	block_name_prefix: rbd_data.10b183a48a97
	format: 2
	features: layering, non-primary
	op_features:
	flags:
	create_timestamp: Thu Nov 18 11:09:39 2021
	access_timestamp: Thu Nov 18 11:09:39 2021
	modify_timestamp: Thu Nov 18 11:09:39 2021
	mirroring state: enabled
	mirroring mode: snapshot
	mirroring global id: 9c4c236d-8a47-4779-b4f6-94e05da70dbd
	mirroring primary: false
```

```
sh-4.4# rados listomapvals csi.volume.0c25bdd3-485f-11ec-bd30-0242ac110004 --pool=replicapool
csi.imageid
value (12 bytes) :
00000000  31 30 62 31 38 33 61 34  38 61 39 37              |10b183a48a97|
0000000c

csi.imagename
value (44 bytes) :
00000000  63 73 69 2d 76 6f 6c 2d  30 63 32 35 62 64 64 33  |csi-vol-0c25bdd3|
00000010  2d 34 38 35 66 2d 31 31  65 63 2d 62 64 33 30 2d  |-485f-11ec-bd30-|
00000020  30 32 34 32 61 63 31 31  30 30 30 34              |0242ac110004|
0000002c

csi.volname
value (40 bytes) :
00000000  70 76 63 2d 32 36 38 39  33 66 30 38 2d 66 66 32  |pvc-26893f08-ff2|
00000010  62 2d 34 61 30 66 2d 61  35 63 33 2d 38 38 34 62  |b-4a0f-a5c3-884b|
00000020  37 32 30 66 66 62 32 63                           |720ffb2c|
00000028

csi.volume.owner
value (7 bytes) :
00000000  64 65 66 61 75 6c 74                              |default|
00000007
```

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
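Conceptually, the fix does the equivalent of the following CLI sketch: read the new image ID after the resync and write it back over the stale `csi.imageid` OMAP value. Ceph-CSI performs this update internally; the names below are taken from the example output above, and the cluster-touching commands are guarded since they only work against a live Ceph cluster.

```shell
# Names from the example output above (illustrative only).
IMAGE="csi-vol-0c25bdd3-485f-11ec-bd30-0242ac110004"
POOL="replicapool"

# The OMAP object name is the image name with the "csi-vol-" prefix
# replaced by "csi.volume." (see the rados output above):
OMAP_OBJ="csi.volume.${IMAGE#csi-vol-}"
echo "$OMAP_OBJ"

# Only attempt the cluster operations when the Ceph CLIs are available:
if command -v rbd >/dev/null 2>&1 && command -v rados >/dev/null 2>&1; then
  # New image ID assigned by rbd-mirror after the resync:
  NEW_ID="$(rbd info "$POOL/$IMAGE" --format json |
    python3 -c 'import json,sys; print(json.load(sys.stdin)["id"])')"
  # Overwrite the stale csi.imageid value in the OMAP:
  rados -p "$POOL" setomapval "$OMAP_OBJ" csi.imageid "$NEW_ID"
fi
```

With the updated `csi.imageid`, DeleteVolume can locate the recreated image (and its trash entry) instead of the deleted one.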
# Ceph CSI

This repository contains the Ceph Container Storage Interface (CSI) drivers for RBD and CephFS, along with the Kubernetes sidecar deployment YAMLs for the provisioner, attacher, resizer, driver-registrar, and snapshotter that support CSI functionality.
## Overview

Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and a Ceph cluster. They allow dynamic provisioning of Ceph volumes and attaching them to workloads.
Independent CSI plugins are provided to support RBD and CephFS backed volumes:

- For details about configuration and deployment of the RBD plugin, please refer to the rbd doc, and for CephFS plugin configuration and deployment, please refer to the cephFS doc.
- For example usage of the RBD and CephFS CSI plugins, see the examples in `examples/`.
- For stale resource cleanup, please refer to the cleanup doc.

NOTE:

- Ceph CSI `Arm64` support is experimental.
## Project status
Status: GA
## Known to work CO platforms
Ceph CSI drivers are currently developed and tested exclusively on Kubernetes environments.
Ceph CSI Version | Container Orchestrator Name | Version Tested |
---|---|---|
v3.4.0 | Kubernetes | v1.20, v1.21, v1.22 |
v3.3.0 | Kubernetes | v1.20, v1.21, v1.22 |
There is work in progress to make this CO independent and thus support other orchestration environments (Nomad, Mesos, etc.) in the future.
NOTE:

The supported window of Ceph CSI versions is known as "N.(x-1)": (N (latest major release) . (x (latest minor release) - 1)).

For example, if the latest Ceph CSI major version is `3.4.0` today, support is provided for the versions above `3.3.0`. If users are running an unsupported Ceph CSI version, they will be asked to upgrade when requesting support for the cluster.
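The "N.(x-1)" rule can be sketched as a small shell helper that, for a given latest release, prints the oldest supported series. The `latest` value below is a hard-coded illustrative example, not queried from anywhere:

```shell
# Compute the oldest supported Ceph CSI series under the N.(x-1) rule.
latest="3.4.0"                            # example version, hard-coded
major="${latest%%.*}"                     # -> 3
minor="$(echo "$latest" | cut -d. -f2)"   # -> 4
oldest_supported="${major}.$((minor - 1)).x"
echo "supported: >= ${oldest_supported}"  # prints: supported: >= 3.3.x
```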
## Support Matrix

### Ceph-CSI features and available versions

Please refer to the rbd nbd mounter doc for its support details.
Plugin | Features | Feature Status | CSI Driver Version | CSI Spec Version | Ceph Cluster Version | Kubernetes Version |
---|---|---|---|---|---|---|
RBD | Dynamically provision, de-provision Block mode RWO volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.14.0 |
Dynamically provision, de-provision Block mode RWX volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.14.0 | |
Dynamically provision, de-provision File mode RWO volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.14.0 | |
Provision File Mode ROX volume from snapshot | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.17.0 | |
Provision File Mode ROX volume from another volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.16.0 | |
Provision Block Mode ROX volume from snapshot | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.17.0 | |
Provision Block Mode ROX volume from another volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.16.0 | |
Creating and deleting snapshot | Beta | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.17.0 | |
Provision volume from snapshot | Beta | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.17.0 | |
Provision volume from another volume | Beta | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.16.0 | |
Expand volume | Beta | >= v2.0.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.15.0 | |
Volume/PV Metrics of File Mode Volume | Beta | >= v1.2.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.15.0 | |
Volume/PV Metrics of Block Mode Volume | Beta | >= v1.2.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.21.0 | |
Topology Aware Provisioning Support | Alpha | >= v2.1.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.14.0 | |
CephFS | Dynamically provision, de-provision File mode RWO volume | Beta | >= v1.1.0 | >= v1.0.0 | Nautilus (>=14.2.2) | >= v1.14.0 |
Dynamically provision, de-provision File mode RWX volume | Beta | >= v1.1.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 | |
Dynamically provision, de-provision File mode ROX volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 | |
Creating and deleting snapshot | Beta | >= v3.1.0 | >= v1.0.0 | Octopus (>=v15.2.3) | >= v1.17.0 | |
Provision volume from snapshot | Beta | >= v3.1.0 | >= v1.0.0 | Octopus (>=v15.2.3) | >= v1.17.0 | |
Provision volume from another volume | Beta | >= v3.1.0 | >= v1.0.0 | Octopus (>=v15.2.3) | >= v1.16.0 | |
Expand volume | Beta | >= v2.0.0 | >= v1.1.0 | Nautilus (>=v14.2.2) | >= v1.15.0 | |
Volume/PV Metrics of File Mode Volume | Beta | >= v1.2.0 | >= v1.1.0 | Nautilus (>=v14.2.2) | >= v1.15.0 |
NOTE: The `Alpha` status reflects possible non-backward compatible changes in the future, and is thus not recommended for production use.
### CSI spec and Kubernetes version compatibility
Please refer to the matrix in the Kubernetes documentation.
### Ceph CSI Container images and release compatibility
Ceph CSI Release/Branch | Container image name | Image Tag |
---|---|---|
devel (Branch) | quay.io/cephcsi/cephcsi | canary |
v3.4.0 (Release) | quay.io/cephcsi/cephcsi | v3.4.0 |
v3.3.1 (Release) | quay.io/cephcsi/cephcsi | v3.3.1 |
v3.3.0 (Release) | quay.io/cephcsi/cephcsi | v3.3.0 |
Deprecated Ceph CSI Release/Branch | Container image name | Image Tag |
---|---|---|
v3.2.2 (Release) | quay.io/cephcsi/cephcsi | v3.2.2 |
v3.2.1 (Release) | quay.io/cephcsi/cephcsi | v3.2.1 |
v3.2.0 (Release) | quay.io/cephcsi/cephcsi | v3.2.0 |
v3.1.2 (Release) | quay.io/cephcsi/cephcsi | v3.1.2 |
v3.1.1 (Release) | quay.io/cephcsi/cephcsi | v3.1.1 |
v3.1.0 (Release) | quay.io/cephcsi/cephcsi | v3.1.0 |
v3.0.0 (Release) | quay.io/cephcsi/cephcsi | v3.0.0 |
v2.1.2 (Release) | quay.io/cephcsi/cephcsi | v2.1.2 |
v2.1.1 (Release) | quay.io/cephcsi/cephcsi | v2.1.1 |
v2.1.0 (Release) | quay.io/cephcsi/cephcsi | v2.1.0 |
v2.0.1 (Release) | quay.io/cephcsi/cephcsi | v2.0.1 |
v2.0.0 (Release) | quay.io/cephcsi/cephcsi | v2.0.0 |
v1.2.2 (Release) | quay.io/cephcsi/cephcsi | v1.2.2 |
v1.2.1 (Release) | quay.io/cephcsi/cephcsi | v1.2.1 |
v1.2.0 (Release) | quay.io/cephcsi/cephcsi | v1.2.0 |
v1.1.0 (Release) | quay.io/cephcsi/cephcsi | v1.1.0 |
v1.0.0 (Branch) | quay.io/cephcsi/cephfsplugin | v1.0.0 |
v1.0.0 (Branch) | quay.io/cephcsi/rbdplugin | v1.0.0 |
## Contributing to this repo

Please follow the development-guide and coding style guidelines if you are interested in contributing to this repo.
## Troubleshooting
Please submit an issue at: Issues
## Weekly Bug Triage call

We conduct weekly bug triage calls in our Slack channel on Tuesdays. More details are available here.
## Dev standup

A regular dev standup takes place every Monday, Tuesday, and Thursday at 12:00 PM UTC. Convert to your local timezone by executing the command `date -d "12:00 UTC"` in a terminal.
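The conversion can be made explicit with an output format (GNU `date`, as commonly available on Linux):

```shell
# Show the 12:00 UTC standup time in the local timezone:
date -d "12:00 UTC" "+%H:%M %Z"

# Sanity check against a fixed zone for comparison:
TZ=UTC date -d "12:00 UTC" "+%H:%M %Z"   # prints: 12:00 UTC
```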
Any changes to the meeting schedule will be added to the agenda doc.
Anyone who wants to discuss the direction of the project, design and implementation reviews, or general questions with the broader community is welcome and encouraged to join.
- Meeting link: https://meet.google.com/nnn-txfp-cge
- Current agenda
## Contact
Please use the following to reach members of the community:
- Slack: Join our slack channel to discuss anything related to this project. You can join the slack by this invite link
- Forums: ceph-csi
- Twitter: @CephCsi