
Ceph CSI


This repo contains the Ceph Container Storage Interface (CSI) driver for RBD and CephFS, together with Kubernetes sidecar deployment YAMLs for the provisioner, attacher, resizer, driver-registrar, and snapshotter needed to support CSI functionality.

Overview

Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and a Ceph cluster. They allow dynamic provisioning of Ceph volumes and attaching them to workloads.

Independent CSI plugins are provided to support RBD and CephFS backed volumes:

  • For details about configuration and deployment of the RBD plugin, please refer to the rbd doc; for CephFS plugin configuration and deployment, please refer to the cephFS doc.
  • For example usage of the RBD and CephFS CSI plugins, see the examples in examples/ and the sketch after this list.
  • For stale resource cleanup, please refer to the cleanup doc.
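As a quick illustration of dynamic provisioning, below is a minimal sketch of a StorageClass and PVC that request an RBD-backed volume. The clusterID, pool, and secret names are placeholders for a hypothetical deployment; refer to the rbd doc and the examples/ directory for the authoritative parameter list.

```yaml
# Minimal sketch of dynamic RBD provisioning. clusterID, pool and the
# secret names are placeholders and must match your actual deployment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>        # Ceph cluster fsid, as listed in the CSI config map
  pool: <rbd-pool-name>          # RBD pool to create images in
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
---
# PVC that triggers provisioning of an RBD image via the StorageClass above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
```

The resulting PersistentVolume can then be mounted by a workload Pod like any other volume.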

NOTE:

  • Ceph CSI Arm64 support is experimental.

Project status

Status: GA

Known to work CO platforms

Ceph CSI drivers are currently developed and tested exclusively in Kubernetes environments.

| Ceph CSI Version | Container Orchestrator Name | Version Tested |
| ---------------- | --------------------------- | -------------- |
| v3.6.1 | Kubernetes | v1.21, v1.22, v1.23 |
| v3.6.0 | Kubernetes | v1.21, v1.22, v1.23 |
| v3.5.1 | Kubernetes | v1.21, v1.22, v1.23 |
| v3.5.0 | Kubernetes | v1.21, v1.22, v1.23 |
| v3.4.0 | Kubernetes | v1.20, v1.21, v1.22 |

There is work in progress to make this CO-independent and thus support other orchestration environments (Nomad, Mesos, etc.) in the future.

NOTE:

The supported window of Ceph CSI versions is "N.(x-1)": (N (Latest major release) . (x (Latest minor release) - 1)).

For example, if the latest Ceph CSI version is 3.6.0 today, support is provided for versions 3.5.0 and above. Users running an unsupported Ceph CSI version will be asked to upgrade when requesting support for the cluster.

Support Matrix

Ceph-CSI features and available versions

Please refer to the rbd-nbd mounter doc for its support details.
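For illustration only, choosing rbd-nbd is a StorageClass-level decision via the mounter parameter. The fragment below is a sketch that assumes the StorageClass from the earlier example; secrets and other parameters are omitted here.

```yaml
# Sketch: a StorageClass selecting the rbd-nbd userspace mounter; secrets and
# remaining parameters as in the earlier RBD StorageClass sketch.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-nbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: <rbd-pool-name>
  mounter: rbd-nbd   # default is the kernel rbd (krbd) mounter
reclaimPolicy: Delete
```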

| Plugin | Features | Feature Status | CSI Driver Version | CSI Spec Version | Ceph Cluster Version | Kubernetes Version |
| ------ | -------- | -------------- | ------------------ | ---------------- | -------------------- | ------------------ |
| RBD | Dynamically provision, de-provision Block mode RWO volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.14.0 |
| | Dynamically provision, de-provision Block mode RWX volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.14.0 |
| | Dynamically provision, de-provision Block mode RWOP volume | Alpha | >= v3.5.0 | >= v1.5.0 | Nautilus (>=14.0.0) | >= v1.22.0 |
| | Dynamically provision, de-provision File mode RWO volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode RWOP volume | Alpha | >= v3.5.0 | >= v1.5.0 | Nautilus (>=14.0.0) | >= v1.22.0 |
| | Provision File Mode ROX volume from snapshot | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.17.0 |
| | Provision File Mode ROX volume from another volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.16.0 |
| | Provision Block Mode ROX volume from snapshot | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.17.0 |
| | Provision Block Mode ROX volume from another volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.16.0 |
| | Creating and deleting snapshot | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.17.0 |
| | Provision volume from snapshot | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.17.0 |
| | Provision volume from another volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.16.0 |
| | Expand volume | Beta | >= v2.0.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.15.0 |
| | Volume/PV Metrics of File Mode Volume | GA | >= v1.2.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.15.0 |
| | Volume/PV Metrics of Block Mode Volume | GA | >= v1.2.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.21.0 |
| | Topology Aware Provisioning Support | Alpha | >= v2.1.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.14.0 |
| CephFS | Dynamically provision, de-provision File mode RWO volume | GA | >= v1.1.0 | >= v1.0.0 | Nautilus (>=14.2.2) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode RWX volume | GA | >= v1.1.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode ROX volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode RWOP volume | Alpha | >= v3.5.0 | >= v1.5.0 | Nautilus (>=14.0.0) | >= v1.22.0 |
| | Creating and deleting snapshot | GA | >= v3.1.0 | >= v1.0.0 | Octopus (>=v15.2.4) | >= v1.17.0 |
| | Provision volume from snapshot | GA | >= v3.1.0 | >= v1.0.0 | Octopus (>=v15.2.4) | >= v1.17.0 |
| | Provision volume from another volume | GA | >= v3.1.0 | >= v1.0.0 | Octopus (>=v15.2.4) | >= v1.16.0 |
| | Expand volume | Beta | >= v2.0.0 | >= v1.1.0 | Nautilus (>=v14.2.2) | >= v1.15.0 |
| | Volume/PV Metrics of File Mode Volume | GA | >= v1.2.0 | >= v1.1.0 | Nautilus (>=v14.2.2) | >= v1.15.0 |
| NFS | Dynamically provision, de-provision File mode RWO volume | Alpha | >= v3.6.0 | >= v1.0.0 | Pacific (>=16.2.0) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode RWX volume | Alpha | >= v3.6.0 | >= v1.0.0 | Pacific (>=16.2.0) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode ROX volume | Alpha | >= v3.6.0 | >= v1.0.0 | Pacific (>=16.2.0) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode RWOP volume | Alpha | >= v3.6.0 | >= v1.5.0 | Pacific (>=16.2.0) | >= v1.22.0 |

NOTE: The Alpha status reflects possible non-backward compatible changes in the future, and is thus not recommended for production use.

CSI spec and Kubernetes version compatibility

Please refer to the matrix in the Kubernetes documentation.

Ceph CSI Container images and release compatibility

| Ceph CSI Release/Branch | Container image name | Image Tag |
| ----------------------- | -------------------- | --------- |
| devel (Branch) | quay.io/cephcsi/cephcsi | canary |
| v3.6.1 (Release) | quay.io/cephcsi/cephcsi | v3.6.1 |
| v3.6.0 (Release) | quay.io/cephcsi/cephcsi | v3.6.0 |
| v3.5.1 (Release) | quay.io/cephcsi/cephcsi | v3.5.1 |
| v3.5.0 (Release) | quay.io/cephcsi/cephcsi | v3.5.0 |
| v3.4.0 (Release) | quay.io/cephcsi/cephcsi | v3.4.0 |

| Deprecated Ceph CSI Release/Branch | Container image name | Image Tag |
| ---------------------------------- | -------------------- | --------- |
| v3.3.1 (Release) | quay.io/cephcsi/cephcsi | v3.3.1 |
| v3.3.0 (Release) | quay.io/cephcsi/cephcsi | v3.3.0 |
| v3.2.2 (Release) | quay.io/cephcsi/cephcsi | v3.2.2 |
| v3.2.1 (Release) | quay.io/cephcsi/cephcsi | v3.2.1 |
| v3.2.0 (Release) | quay.io/cephcsi/cephcsi | v3.2.0 |
| v3.1.2 (Release) | quay.io/cephcsi/cephcsi | v3.1.2 |
| v3.1.1 (Release) | quay.io/cephcsi/cephcsi | v3.1.1 |
| v3.1.0 (Release) | quay.io/cephcsi/cephcsi | v3.1.0 |
| v3.0.0 (Release) | quay.io/cephcsi/cephcsi | v3.0.0 |
| v2.1.2 (Release) | quay.io/cephcsi/cephcsi | v2.1.2 |
| v2.1.1 (Release) | quay.io/cephcsi/cephcsi | v2.1.1 |
| v2.1.0 (Release) | quay.io/cephcsi/cephcsi | v2.1.0 |
| v2.0.1 (Release) | quay.io/cephcsi/cephcsi | v2.0.1 |
| v2.0.0 (Release) | quay.io/cephcsi/cephcsi | v2.0.0 |
| v1.2.2 (Release) | quay.io/cephcsi/cephcsi | v1.2.2 |
| v1.2.1 (Release) | quay.io/cephcsi/cephcsi | v1.2.1 |
| v1.2.0 (Release) | quay.io/cephcsi/cephcsi | v1.2.0 |
| v1.1.0 (Release) | quay.io/cephcsi/cephcsi | v1.1.0 |
| v1.0.0 (Branch) | quay.io/cephcsi/cephfsplugin | v1.0.0 |
| v1.0.0 (Branch) | quay.io/cephcsi/rbdplugin | v1.0.0 |
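For illustration, the image name and tag from the table above are what the plugin container spec references. The fragment below is a hypothetical snippet; the container name and the surrounding DaemonSet/Deployment fields depend on the deployment YAMLs you use.

```yaml
# Hypothetical container spec fragment pinning a released cephcsi image;
# use the canary tag only to track the devel branch.
containers:
  - name: csi-rbdplugin
    image: quay.io/cephcsi/cephcsi:v3.6.1   # <container image name>:<image tag>
```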

Contributing to this repo

Please follow the development-guide and coding style guidelines if you are interested in contributing to this repo.

Troubleshooting

Please submit an issue at: Issues

Weekly Bug Triage call

We conduct weekly bug triage calls in our Slack channel on Tuesdays. More details are available here.

Dev standup

A regular dev standup takes place every Monday, Tuesday, and Thursday at 12:00 PM UTC. Convert to your local timezone by executing the command date -d "12:00 UTC" in a terminal.

Any changes to the meeting schedule will be added to the agenda doc.

Anyone who wants to discuss the direction of the project, design and implementation reviews, or general questions with the broader community is welcome and encouraged to join.

Contact

Please use the following to reach members of the community: