Ceph CSI


This repository contains the Ceph Container Storage Interface (CSI) driver for RBD and CephFS, along with the Kubernetes sidecar deployment YAMLs for the provisioner, attacher, resizer, driver-registrar and snapshotter required to support CSI functionality.

Overview

Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and a Ceph cluster. They allow Ceph volumes to be dynamically provisioned and attached to workloads.

Independent CSI plugins are provided to support RBD and CephFS backed volumes:

  • For details about configuration and deployment of the RBD plugin, please refer to the rbd doc; for CephFS plugin configuration and deployment, please refer to the cephFS doc.
  • For example usage of the RBD and CephFS CSI plugins, see the examples in examples/ (a minimal sketch follows this list).
  • For stale resource cleanup, please refer to the cleanup doc.
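
As an illustration, dynamic provisioning with the RBD plugin is driven by a StorageClass that points at a Ceph cluster and pool, plus a PersistentVolumeClaim that references it. The sketch below is only an outline with placeholder values: the clusterID, pool and secret names must be adapted to your environment, and the authoritative, complete manifests live in examples/.

```yaml
# Illustrative sketch only -- see examples/ for the complete, up-to-date manifests.
# clusterID, pool and the secret names are placeholders for your environment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>   # cluster ID from the Ceph CSI config map
  pool: <rbd-pool>          # RBD pool in which images are created
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
```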

NOTE:

  • Ceph CSI Arm64 support is experimental.

Project status

Status: GA

Known to work CO platforms

Ceph CSI drivers are currently developed and tested exclusively in Kubernetes environments.

| Ceph CSI Version | Container Orchestrator Name | Version Tested       |
| ---------------- | --------------------------- | -------------------- |
| v3.6.0           | Kubernetes                  | v1.21, v1.22, v1.23  |
| v3.5.1           | Kubernetes                  | v1.21, v1.22, v1.23  |
| v3.5.0           | Kubernetes                  | v1.21, v1.22, v1.23  |
| v3.4.0           | Kubernetes                  | v1.20, v1.21, v1.22  |

Work is in progress to make the drivers CO-independent and thus support other orchestration environments (Nomad, Mesos, etc.) in the future.

NOTE:

The supported window of Ceph CSI versions is "N.(x-1)", where N is the latest major release and x is the latest minor release; that is, the current minor release and the one before it are supported.

For example, if the latest Ceph CSI version is 3.6.0 today, support is provided for versions 3.5.0 and above. Users running an unsupported Ceph CSI version will be asked to upgrade when requesting support for their cluster.

Support Matrix

Ceph-CSI features and available versions

Please refer to the rbd-nbd mounter doc for its support details.

| Plugin | Features | Feature Status | CSI Driver Version | CSI Spec Version | Ceph Cluster Version | Kubernetes Version |
| ------ | -------- | -------------- | ------------------ | ---------------- | -------------------- | ------------------ |
| RBD    | Dynamically provision, de-provision Block mode RWO volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.14.0 |
|        | Dynamically provision, de-provision Block mode RWX volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.14.0 |
|        | Dynamically provision, de-provision Block mode RWOP volume | Alpha | >= v3.5.0 | >= v1.5.0 | Nautilus (>=14.0.0) | >= v1.22.0 |
|        | Dynamically provision, de-provision File mode RWO volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.14.0 |
|        | Dynamically provision, de-provision File mode RWOP volume | Alpha | >= v3.5.0 | >= v1.5.0 | Nautilus (>=14.0.0) | >= v1.22.0 |
|        | Provision File Mode ROX volume from snapshot | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.17.0 |
|        | Provision File Mode ROX volume from another volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.16.0 |
|        | Provision Block Mode ROX volume from snapshot | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.17.0 |
|        | Provision Block Mode ROX volume from another volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.16.0 |
|        | Creating and deleting snapshot | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.17.0 |
|        | Provision volume from snapshot | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.17.0 |
|        | Provision volume from another volume | GA | >= v1.0.0 | >= v1.0.0 | Nautilus (>=14.0.0) | >= v1.16.0 |
|        | Expand volume | Beta | >= v2.0.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.15.0 |
|        | Volume/PV Metrics of File Mode Volume | GA | >= v1.2.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.15.0 |
|        | Volume/PV Metrics of Block Mode Volume | GA | >= v1.2.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.21.0 |
|        | Topology Aware Provisioning Support | Alpha | >= v2.1.0 | >= v1.1.0 | Nautilus (>=14.0.0) | >= v1.14.0 |
| CephFS | Dynamically provision, de-provision File mode RWO volume | GA | >= v1.1.0 | >= v1.0.0 | Nautilus (>=14.2.2) | >= v1.14.0 |
|        | Dynamically provision, de-provision File mode RWX volume | GA | >= v1.1.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 |
|        | Dynamically provision, de-provision File mode ROX volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 |
|        | Dynamically provision, de-provision File mode RWOP volume | Alpha | >= v3.5.0 | >= v1.5.0 | Nautilus (>=14.0.0) | >= v1.22.0 |
|        | Creating and deleting snapshot | GA | >= v3.1.0 | >= v1.0.0 | Octopus (>=v15.2.4) | >= v1.17.0 |
|        | Provision volume from snapshot | GA | >= v3.1.0 | >= v1.0.0 | Octopus (>=v15.2.4) | >= v1.17.0 |
|        | Provision volume from another volume | GA | >= v3.1.0 | >= v1.0.0 | Octopus (>=v15.2.4) | >= v1.16.0 |
|        | Expand volume | Beta | >= v2.0.0 | >= v1.1.0 | Nautilus (>=v14.2.2) | >= v1.15.0 |
|        | Volume/PV Metrics of File Mode Volume | GA | >= v1.2.0 | >= v1.1.0 | Nautilus (>=v14.2.2) | >= v1.15.0 |
| NFS    | Dynamically provision, de-provision File mode RWO volume | Alpha | >= v3.6.0 | >= v1.0.0 | Pacific (>=16.2.0) | >= v1.14.0 |
|        | Dynamically provision, de-provision File mode RWX volume | Alpha | >= v3.6.0 | >= v1.0.0 | Pacific (>=16.2.0) | >= v1.14.0 |
|        | Dynamically provision, de-provision File mode ROX volume | Alpha | >= v3.6.0 | >= v1.0.0 | Pacific (>=16.2.0) | >= v1.14.0 |
|        | Dynamically provision, de-provision File mode RWOP volume | Alpha | >= v3.6.0 | >= v1.5.0 | Pacific (>=16.2.0) | >= v1.22.0 |

NOTE: Features in Alpha status may see non-backward compatible changes in future releases and are thus not recommended for production use.

CSI spec and Kubernetes version compatibility

Please refer to the matrix in the Kubernetes documentation.

Ceph CSI Container images and release compatibility

| Ceph CSI Release/Branch | Container image name    | Image Tag |
| ----------------------- | ----------------------- | --------- |
| devel (Branch)          | quay.io/cephcsi/cephcsi | canary    |
| v3.6.0 (Release)        | quay.io/cephcsi/cephcsi | v3.6.0    |
| v3.5.1 (Release)        | quay.io/cephcsi/cephcsi | v3.5.1    |
| v3.5.0 (Release)        | quay.io/cephcsi/cephcsi | v3.5.0    |

| Deprecated Ceph CSI Release/Branch | Container image name        | Image Tag |
| ---------------------------------- | --------------------------- | --------- |
| v3.4.0 (Release)                   | quay.io/cephcsi/cephcsi     | v3.4.0    |
| v3.3.1 (Release)                   | quay.io/cephcsi/cephcsi     | v3.3.1    |
| v3.3.0 (Release)                   | quay.io/cephcsi/cephcsi     | v3.3.0    |
| v3.2.2 (Release)                   | quay.io/cephcsi/cephcsi     | v3.2.2    |
| v3.2.1 (Release)                   | quay.io/cephcsi/cephcsi     | v3.2.1    |
| v3.2.0 (Release)                   | quay.io/cephcsi/cephcsi     | v3.2.0    |
| v3.1.2 (Release)                   | quay.io/cephcsi/cephcsi     | v3.1.2    |
| v3.1.1 (Release)                   | quay.io/cephcsi/cephcsi     | v3.1.1    |
| v3.1.0 (Release)                   | quay.io/cephcsi/cephcsi     | v3.1.0    |
| v3.0.0 (Release)                   | quay.io/cephcsi/cephcsi     | v3.0.0    |
| v2.1.2 (Release)                   | quay.io/cephcsi/cephcsi     | v2.1.2    |
| v2.1.1 (Release)                   | quay.io/cephcsi/cephcsi     | v2.1.1    |
| v2.1.0 (Release)                   | quay.io/cephcsi/cephcsi     | v2.1.0    |
| v2.0.1 (Release)                   | quay.io/cephcsi/cephcsi     | v2.0.1    |
| v2.0.0 (Release)                   | quay.io/cephcsi/cephcsi     | v2.0.0    |
| v1.2.2 (Release)                   | quay.io/cephcsi/cephcsi     | v1.2.2    |
| v1.2.1 (Release)                   | quay.io/cephcsi/cephcsi     | v1.2.1    |
| v1.2.0 (Release)                   | quay.io/cephcsi/cephcsi     | v1.2.0    |
| v1.1.0 (Release)                   | quay.io/cephcsi/cephcsi     | v1.1.0    |
| v1.0.0 (Branch)                    | quay.io/cephcsi/cephfsplugin | v1.0.0   |
| v1.0.0 (Branch)                    | quay.io/cephcsi/rbdplugin   | v1.0.0    |
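
As a usage note, the image name and tag from the tables above are what a deployment references in its container spec. The fragment below is a hypothetical excerpt, not a complete manifest; the full DaemonSet/Deployment definitions live under deploy/ and charts/.

```yaml
# Hypothetical fragment: how a released image tag from the table above is consumed.
# See deploy/ and charts/ for the complete, supported manifests.
containers:
  - name: csi-cephcsi-plugin
    image: quay.io/cephcsi/cephcsi:v3.6.0   # pin to a supported release tag, or "canary" for devel
    args:
      - "--type=rbd"
      - "--nodeserver=true"
```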

Contributing to this repo

Please follow the development-guide and coding style guidelines if you are interested in contributing to this repository.

Troubleshooting

Please submit an issue at: Issues

Weekly Bug Triage call

We conduct weekly bug triage calls in our Slack channel on Tuesdays. More details are available here.

Dev standup

A regular dev standup takes place every Monday, Tuesday and Thursday at 12:00 PM UTC. Convert to your local timezone by executing the command `date -d "12:00 UTC"` in a terminal.

Any changes to the meeting schedule will be added to the agenda doc.

Anyone who wants to discuss the direction of the project, design and implementation reviews, or general questions with the broader community is welcome and encouraged to join.

Contact

Please use the following to reach members of the community: