
Ceph CSI

This repo contains the Ceph Container Storage Interface (CSI) driver for RBD and CephFS, along with Kubernetes sidecar deployment YAMLs for the provisioner, attacher, node-driver-registrar, and snapshotter that support CSI functionality.

Overview

Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and a Ceph cluster. They allow dynamic provisioning of Ceph volumes and attaching them to workloads.

Independent CSI plugins are provided to support RBD and CephFS backed volumes:

  • For details about configuration and deployment of the RBD plugin, please refer to the rbd doc; for CephFS plugin configuration and deployment, please refer to the cephfs doc.
  • For example usage of the RBD and CephFS CSI plugins, see the examples in examples/.
  • For stale resource cleanup, please refer to the cleanup doc.
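As a hedged illustration of dynamic provisioning with the RBD plugin, a StorageClass and PersistentVolumeClaim might look like the sketch below. The parameter values (clusterID, pool, and secret names) are placeholders, not authoritative; consult the rbd doc and the manifests in examples/ for the exact fields your release expects.

```yaml
# Hypothetical sketch -- see examples/ for the authoritative manifests.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>   # Ceph cluster fsid, as listed in the CSI ConfigMap (placeholder)
  pool: rbd                 # RBD pool to provision images from (placeholder)
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce       # RWO File/Block mode volumes are GA for RBD
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
```

Applying both manifests causes the provisioner sidecar to create an RBD image in the named pool and bind it to the claim.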

NOTE:

  • Ceph CSI Arm64 support is experimental.

Project status

Status: GA

Supported CO platforms

Ceph CSI drivers are currently developed and tested exclusively on Kubernetes environments. There is work in progress to make this CO independent and thus support other orchestration environments in the future.

NOTE:

  • CSI spec v0.3 is deprecated with the release of CSI spec v1.1.0

Support Matrix

Ceph-CSI features and available versions

| Plugin | Features                                                  | Feature Status | CSI Driver Version | CSI Spec Version | Ceph Cluster Version | Kubernetes Version |
| ------ | --------------------------------------------------------- | -------------- | ------------------ | ---------------- | -------------------- | ------------------ |
| RBD    | Dynamically provision, de-provision Block mode RWO volume | GA             | >= v1.0.0          | >= v1.0.0        | Mimic (>=v13.0.0)    | >= v1.14.0         |
|        | Dynamically provision, de-provision Block mode RWX volume | GA             | >= v1.0.0          | >= v1.0.0        | Mimic (>=v13.0.0)    | >= v1.14.0         |
|        | Dynamically provision, de-provision File mode RWO volume  | GA             | >= v1.0.0          | >= v1.0.0        | Mimic (>=v13.0.0)    | >= v1.14.0         |
|        | Creating and deleting snapshot                            | Alpha          | >= v1.0.0          | >= v1.0.0        | Mimic (>=v13.0.0)    | >= v1.14.0         |
|        | Provision volume from snapshot                            | Alpha          | >= v1.0.0          | >= v1.0.0        | Mimic (>=v13.0.0)    | >= v1.14.0         |
|        | Provision volume from another volume                      | -              | -                  | -                | -                    | -                  |
|        | Resize volume                                             | Beta           | >= v2.0.0          | >= v1.1.0        | Mimic (>=v13.0.0)    | >= v1.15.0         |
|        | Metrics Support                                           | Beta           | >= v1.2.0          | >= v1.1.0        | Mimic (>=v13.0.0)    | >= v1.15.0         |
|        | Topology Aware Provisioning Support                       | Alpha          | >= v2.1.0          | >= v1.1.0        | Mimic (>=v13.0.0)    | >= v1.14.0         |
| CephFS | Dynamically provision, de-provision File mode RWO volume  | Beta           | >= v1.1.0          | >= v1.0.0        | Nautilus (>=v14.2.2) | >= v1.14.0         |
|        | Dynamically provision, de-provision File mode RWX volume  | Beta           | >= v1.1.0          | >= v1.0.0        | Nautilus (>=v14.2.2) | >= v1.14.0         |
|        | Creating and deleting snapshot                            | -              | -                  | -                | -                    | -                  |
|        | Provision volume from snapshot                            | -              | -                  | -                | -                    | -                  |
|        | Provision volume from another volume                      | -              | -                  | -                | -                    | -                  |
|        | Resize volume                                             | Beta           | >= v2.0.0          | >= v1.1.0        | Nautilus (>=v14.2.2) | >= v1.15.0         |
|        | Metrics                                                   | Beta           | >= v1.2.0          | >= v1.1.0        | Nautilus (>=v14.2.2) | >= v1.15.0         |

NOTE: The Alpha status reflects possible non-backward compatible changes in the future, and is thus not recommended for production use.
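As one sketch of how the Beta resize feature is typically consumed (assuming an existing claim named rbd-pvc bound through a ceph-csi StorageClass that has allowVolumeExpansion enabled; both names are assumptions, not fixed by this repo), expansion is requested by raising the storage request on the claim and re-applying it:

```yaml
# Hypothetical sketch: expanding an existing claim. Requires
# allowVolumeExpansion: true on the StorageClass and, per the
# matrix above, CSI driver >= v2.0.0 and Kubernetes >= v1.15.0.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc                # assumed existing claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-rbd-sc # assumed expandable StorageClass
  resources:
    requests:
      storage: 2Gi             # raised from the original request to trigger resize
```

The external-resizer sidecar observes the changed request and grows the backing RBD image or CephFS quota accordingly.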

CSI spec and Kubernetes version compatibility

Please refer to the matrix in the Kubernetes documentation.

Ceph CSI Container images and release compatibility

| Ceph CSI Release/Branch | Container image name         | Image Tag |
| ----------------------- | ---------------------------- | --------- |
| Master (Branch)         | quay.io/cephcsi/cephcsi      | canary    |
| v2.1.0 (Release)        | quay.io/cephcsi/cephcsi      | v2.1.0    |
| v2.0.1 (Release)        | quay.io/cephcsi/cephcsi      | v2.0.1    |
| v2.0.0 (Release)        | quay.io/cephcsi/cephcsi      | v2.0.0    |
| v1.2.2 (Release)        | quay.io/cephcsi/cephcsi      | v1.2.2    |
| v1.2.1 (Release)        | quay.io/cephcsi/cephcsi      | v1.2.1    |
| v1.2.0 (Release)        | quay.io/cephcsi/cephcsi      | v1.2.0    |
| v1.1.0 (Release)        | quay.io/cephcsi/cephcsi      | v1.1.0    |
| v1.0.0 (Branch)         | quay.io/cephcsi/cephfsplugin | v1.0.0    |
| v1.0.0 (Branch)         | quay.io/cephcsi/rbdplugin    | v1.0.0    |

Contributing to this repo

Please follow the development guide and coding style guidelines if you are interested in contributing to this repo.

Troubleshooting

Please submit an issue at: Issues

Weekly Bug Triage call

We conduct weekly bug triage calls in our Slack channel on Tuesdays. More details are available here.

Contact

Please use the following to reach members of the community: