
Ceph CSI


This repo contains the Ceph Container Storage Interface (CSI) driver for RBD and CephFS, along with the Kubernetes sidecar deployment YAMLs (provisioner, attacher, node-driver-registrar, and snapshotter) that support CSI functionality.

Overview

Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and a Ceph cluster. They allow dynamic provisioning of Ceph volumes and attaching them to workloads.

Independent CSI plugins are provided to support RBD and CephFS backed volumes:

  • For details on configuring and deploying the RBD plugin, refer to the rbd doc; for the CephFS plugin, refer to the cephfs doc.
  • For example usage of the RBD and CephFS CSI plugins, see the examples in examples/.
  • For cleaning up stale resources, refer to the cleanup doc.
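
To give a flavor of what the examples cover, a minimal sketch of dynamic RBD provisioning is shown below. The cluster ID, pool name, and secret names here are placeholders, not values from this repo; consult the manifests under examples/rbd/ for complete, current versions.

```yaml
# Hypothetical StorageClass for the RBD plugin; clusterID, pool, and the
# secret names/namespaces must match your own ceph-csi configuration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>            # placeholder: ID from the ceph-csi ConfigMap
  pool: replicapool                  # placeholder: an existing RBD pool
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
---
# A PVC referencing the StorageClass; creating it triggers dynamic
# provisioning of an RBD image backing the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
```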

NOTE:

  • Ceph CSI Arm64 support is experimental.

Project status

Status: GA

Supported CO platforms

Ceph CSI drivers are currently developed and tested exclusively on Kubernetes. Work is in progress to make the drivers CO-independent so that other orchestration environments can be supported in the future.

NOTE:

  • CSI v0.3 is deprecated with the release of CSI v1.1.0.

Support Matrix

Ceph-CSI features and available versions

| Plugin | Features                                                  | Feature Status | CSI Driver Version | CSI Spec Version | Ceph Cluster Version  | Kubernetes Version |
| ------ | --------------------------------------------------------- | -------------- | ------------------ | ---------------- | --------------------- | ------------------ |
| RBD    | Dynamically provision, de-provision Block mode RWO volume | GA             | >= v1.0.0          | >= v1.0.0        | Mimic (>= v13.0.0)    | >= v1.14.0         |
| RBD    | Dynamically provision, de-provision Block mode RWX volume | GA             | >= v1.0.0          | >= v1.0.0        | Mimic (>= v13.0.0)    | >= v1.14.0         |
| RBD    | Dynamically provision, de-provision File mode RWO volume  | GA             | >= v1.0.0          | >= v1.0.0        | Mimic (>= v13.0.0)    | >= v1.14.0         |
| RBD    | Creating and deleting snapshot                            | Alpha          | >= v1.0.0          | >= v1.0.0        | Mimic (>= v13.0.0)    | >= v1.14.0         |
| RBD    | Provision volume from snapshot                            | Alpha          | >= v1.0.0          | >= v1.0.0        | Mimic (>= v13.0.0)    | >= v1.14.0         |
| RBD    | Provision volume from another volume                      | -              | -                  | -                | -                     | -                  |
| RBD    | Resize volume                                             | Beta           | >= v2.0.0          | >= v1.1.0        | Mimic (>= v13.0.0)    | >= v1.15.0         |
| RBD    | Metrics Support                                           | Beta           | >= v1.2.0          | >= v1.1.0        | Mimic (>= v13.0.0)    | >= v1.15.0         |
| RBD    | Topology Aware Provisioning Support                       | Alpha          | >= v2.1.0          | >= v1.1.0        | Mimic (>= v13.0.0)    | >= v1.14.0         |
| CephFS | Dynamically provision, de-provision File mode RWO volume  | Beta           | >= v1.1.0          | >= v1.0.0        | Nautilus (>= v14.2.2) | >= v1.14.0         |
| CephFS | Dynamically provision, de-provision File mode RWX volume  | Beta           | >= v1.1.0          | >= v1.0.0        | Nautilus (>= v14.2.2) | >= v1.14.0         |
| CephFS | Creating and deleting snapshot                            | -              | -                  | -                | -                     | -                  |
| CephFS | Provision volume from snapshot                            | -              | -                  | -                | -                     | -                  |
| CephFS | Provision volume from another volume                      | -              | -                  | -                | -                     | -                  |
| CephFS | Resize volume                                             | Beta           | >= v2.0.0          | >= v1.1.0        | Nautilus (>= v14.2.2) | >= v1.15.0         |
| CephFS | Metrics                                                   | Beta           | >= v1.2.0          | >= v1.1.0        | Nautilus (>= v14.2.2) | >= v1.15.0         |

NOTE: Alpha status indicates that non-backward-compatible changes may still occur; Alpha features are therefore not recommended for production use.

CSI spec and Kubernetes version compatibility

Please refer to the matrix in the Kubernetes documentation.

Ceph CSI Container images and release compatibility

| Ceph CSI Release/Branch | Container image name         | Image Tag |
| ----------------------- | ---------------------------- | --------- |
| Master (Branch)         | quay.io/cephcsi/cephcsi      | canary    |
| v2.1.1 (Release)        | quay.io/cephcsi/cephcsi      | v2.1.1    |
| v2.1.0 (Release)        | quay.io/cephcsi/cephcsi      | v2.1.0    |
| v2.0.1 (Release)        | quay.io/cephcsi/cephcsi      | v2.0.1    |
| v2.0.0 (Release)        | quay.io/cephcsi/cephcsi      | v2.0.0    |
| v1.2.2 (Release)        | quay.io/cephcsi/cephcsi      | v1.2.2    |
| v1.2.1 (Release)        | quay.io/cephcsi/cephcsi      | v1.2.1    |
| v1.2.0 (Release)        | quay.io/cephcsi/cephcsi      | v1.2.0    |
| v1.1.0 (Release)        | quay.io/cephcsi/cephcsi      | v1.1.0    |
| v1.0.0 (Branch)         | quay.io/cephcsi/cephfsplugin | v1.0.0    |
| v1.0.0 (Branch)         | quay.io/cephcsi/rbdplugin    | v1.0.0    |

Contributing to this repo

Please follow the development-guide and coding style guidelines if you are interested in contributing to this repo.

Troubleshooting

Please submit an issue at: Issues

Weekly Bug Triage call

We conduct weekly bug triage calls in our Slack channel on Tuesdays. More details are available here.

Dev standup

A regular dev standup takes place every other Monday, Tuesday, and Thursday at 2:00 PM UTC. Convert to your local timezone by running the command date -d "14:00 UTC" in a terminal.
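
For example, with GNU date (as found on most Linux systems) the conversion looks like this:

```shell
# Print 14:00 UTC (the standup time) in your local timezone.
date -d "14:00 UTC"
```

On systems with BSD date (e.g. macOS), the -d flag behaves differently, so this exact invocation applies to GNU date only.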

Any changes to the meeting schedule will be added to the agenda doc.

Anyone who wants to discuss the direction of the project, design and implementation reviews, or general questions with the broader community is welcome and encouraged to join.

Contact

Please use the following to reach members of the community: