
Ceph CSI


This repository contains the Ceph Container Storage Interface (CSI) drivers for RBD and CephFS, along with Kubernetes sidecar deployment YAMLs for the provisioner, attacher, resizer, driver-registrar, and snapshotter that support CSI functionality.

Overview

Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and a Ceph cluster. They allow Ceph volumes to be dynamically provisioned and attached to workloads.

Independent CSI plugins are provided to support RBD and CephFS backed volumes:

  • For details about configuration and deployment of the RBD plugin, please refer to the rbd doc; for CephFS plugin configuration and deployment, please refer to the cephfs doc.
  • For example usage of the RBD and CephFS CSI plugins, see the examples in examples/.
  • For stale resource cleanup, please refer to the cleanup doc.
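To illustrate the dynamic provisioning flow, the sketch below shows a StorageClass wired to the RBD provisioner and a PVC that requests a volume from it. This is a minimal, hypothetical example — the resource names, `clusterID`, and `pool` values are placeholders; refer to the rbd doc and the manifests in examples/ for the actual, complete configuration (including the required secrets):

```yaml
# Hypothetical sketch: StorageClass for the RBD driver plus a PVC using it.
# clusterID and pool are placeholders; real deployments also need
# provisioner/node secrets — see examples/ for complete manifests.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc             # illustrative name
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>      # placeholder: cluster ID from the ceph-csi config
  pool: <rbd-pool-name>        # placeholder: RBD pool to provision images from
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc                # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
```

Applying the PVC triggers the external-provisioner sidecar to call the driver, which creates an RBD image in the named pool and binds a PersistentVolume to the claim.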

NOTE:

  • Ceph CSI Arm64 support is experimental.

Project status

Status: GA

Supported CO platforms

Ceph CSI drivers are currently developed and tested exclusively in Kubernetes environments. Work is in progress to make them CO-independent and thus support other orchestration environments in the future.

NOTE:

  • CSI v0.3 is deprecated with the release of CSI spec v1.1.0.

Support Matrix

Ceph-CSI features and available versions

| Plugin | Features | Feature Status | CSI Driver Version | CSI Spec Version | Ceph Cluster Version | Kubernetes Version |
| ------ | -------- | -------------- | ------------------ | ---------------- | -------------------- | ------------------ |
| RBD | Dynamically provision, de-provision Block mode RWO volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.14.0 |
| | Dynamically provision, de-provision Block mode RWX volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode RWO volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.14.0 |
| | Provision File Mode ROX volume from snapshot | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.17.0 |
| | Provision File Mode ROX volume from another volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.16.0 |
| | Provision Block Mode ROX volume from snapshot | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.17.0 |
| | Provision Block Mode ROX volume from another volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.16.0 |
| | Creating and deleting snapshot | Alpha | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.17.0 |
| | Provision volume from snapshot | Alpha | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.17.0 |
| | Provision volume from another volume | Alpha | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.16.0 |
| | Expand volume | Beta | >= v2.0.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.15.0 |
| | Metrics Support | Beta | >= v1.2.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.15.0 |
| | Topology Aware Provisioning Support | Alpha | >= v2.1.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.14.0 |
| CephFS | Dynamically provision, de-provision File mode RWO volume | Beta | >= v1.1.0 | >= v1.0.0 | Nautilus (>=14.2.2) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode RWX volume | Beta | >= v1.1.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode ROX volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 |
| | Creating and deleting snapshot | Alpha | >= v3.1.0 | >= v1.0.0 | Octopus (>=v15.2.3) | >= v1.17.0 |
| | Provision volume from snapshot | Alpha | >= v3.1.0 | >= v1.0.0 | Octopus (>=v15.2.3) | >= v1.17.0 |
| | Provision volume from another volume | Alpha | >= v3.1.0 | >= v1.0.0 | Octopus (>=v15.2.3) | >= v1.16.0 |
| | Expand volume | Beta | >= v2.0.0 | >= v1.1.0 | Nautilus (>=v14.2.2) | >= v1.15.0 |
| | Metrics | Beta | >= v1.2.0 | >= v1.1.0 | Nautilus (>=v14.2.2) | >= v1.15.0 |

NOTE: The Alpha status reflects possible non-backward compatible changes in the future, and is thus not recommended for production use.

CSI spec and Kubernetes version compatibility

Please refer to the matrix in the Kubernetes documentation.

Ceph CSI Container images and release compatibility

| Ceph CSI Release/Branch | Container image name | Image Tag |
| ----------------------- | -------------------- | --------- |
| Master (Branch) | quay.io/cephcsi/cephcsi | canary |
| v3.2.0 (Release) | quay.io/cephcsi/cephcsi | v3.2.0 |
| v3.1.2 (Release) | quay.io/cephcsi/cephcsi | v3.1.2 |
| v3.1.1 (Release) | quay.io/cephcsi/cephcsi | v3.1.1 |
| v3.1.0 (Release) | quay.io/cephcsi/cephcsi | v3.1.0 |
| v3.0.0 (Release) | quay.io/cephcsi/cephcsi | v3.0.0 |
| v2.1.2 (Release) | quay.io/cephcsi/cephcsi | v2.1.2 |
| v2.1.1 (Release) | quay.io/cephcsi/cephcsi | v2.1.1 |
| v2.1.0 (Release) | quay.io/cephcsi/cephcsi | v2.1.0 |
| v2.0.1 (Release) | quay.io/cephcsi/cephcsi | v2.0.1 |
| v2.0.0 (Release) | quay.io/cephcsi/cephcsi | v2.0.0 |
| v1.2.2 (Release) | quay.io/cephcsi/cephcsi | v1.2.2 |
| v1.2.1 (Release) | quay.io/cephcsi/cephcsi | v1.2.1 |
| v1.2.0 (Release) | quay.io/cephcsi/cephcsi | v1.2.0 |
| v1.1.0 (Release) | quay.io/cephcsi/cephcsi | v1.1.0 |
| v1.0.0 (Branch) | quay.io/cephcsi/cephfsplugin | v1.0.0 |
| v1.0.0 (Branch) | quay.io/cephcsi/rbdplugin | v1.0.0 |
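In a deployment manifest, the image tag selects which release runs. As a hedged sketch (the container name and surrounding fields are illustrative; image name and tag come from the table above), pinning a release rather than the floating `canary` tag looks like:

```yaml
# Fragment of a plugin pod spec: pin a released tag from the table above.
# "canary" tracks the development branch and is not recommended for production.
containers:
  - name: csi-rbdplugin        # illustrative container name
    image: quay.io/cephcsi/cephcsi:v3.2.0
```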

Contributing to this repo

Please follow the development-guide and coding style guidelines if you are interested in contributing to this repo.

Troubleshooting

Please submit an issue at the project's Issues page.

Weekly Bug Triage call

We conduct a weekly bug triage call in our Slack channel on Tuesdays. More details are available here.

Dev standup

A regular dev standup takes place every other Monday, Tuesday, and Thursday at 2:00 PM UTC. Convert to your local timezone by executing `date -d "14:00 UTC"` in a terminal.

Any changes to the meeting schedule will be added to the agenda doc.

Anyone who wants to discuss the direction of the project, design and implementation reviews, or general questions with the broader community is welcome and encouraged to join.

Contact

Please use the following to reach members of the community: