
Ceph CSI


This repo contains the [Container Storage Interface (CSI)](https://github.com/container-storage-interface/) driver, provisioner, and attacher for Ceph RBD and CephFS.

Overview

Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and a Ceph cluster. They allow Ceph volumes to be dynamically provisioned and attached to workloads.

Independent CSI plugins are provided to support RBD- and CephFS-backed volumes:

  • For details about configuration and deployment of RBD and CephFS CSI plugins, see documentation in docs/.
  • For example usage of RBD and CephFS CSI plugins, see examples in examples/.
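To illustrate the dynamic provisioning flow, the sketch below pairs a StorageClass pointing at the RBD driver with a PersistentVolumeClaim. This is a minimal, illustrative sketch only: the cluster-specific parameters and secrets are placeholders here, and the complete, authoritative manifests live in examples/.

```yaml
# Minimal sketch of dynamic RBD provisioning. The provisioner name matches the
# RBD driver; the parameters below are illustrative placeholders, and the full
# set (monitors/pools/secrets) is documented in examples/rbd/.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd
provisioner: rbd.csi.ceph.com
parameters:
  pool: rbd          # placeholder: name of the Ceph pool to provision from
reclaimPolicy: Delete
---
# A claim against the class above; the driver creates an RBD image on demand
# and binds it to this PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce      # Block mode RWO, per the support matrix below
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd
```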

Project status

Status: Alpha

Alpha status means that non-backward-compatible changes may still be introduced, so the drivers are not recommended for production use. Work is in progress that would change on-disk metadata for certain operations, possibly breaking backward compatibility.

Supported CO platforms

The Ceph CSI drivers are currently developed and tested exclusively in Kubernetes environments. Work is in progress to make them CO-independent so that other container orchestration environments can be supported in the future.

For Kubernetes versions 1.11 and 1.12, please use 0.3 images and deployments.

For Kubernetes versions 1.13 and above, please use 1.0 images and deployments.
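For example, when deploying on Kubernetes 1.13 or above, the plugin containers should run a 1.0 image tag. The fragment below is a hedged sketch; the image path is assumed for illustration, and the actual image reference comes from the manifests under deploy/.

```yaml
# Illustrative fragment of a plugin pod spec: pin the image tag to the release
# matching your Kubernetes version (0.3.x for 1.11-1.12, 1.0.x for 1.13+).
# The image path is an assumption; use the one referenced under deploy/.
containers:
  - name: csi-rbdplugin
    image: quay.io/cephcsi/cephcsi:v1.0.0
```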

Support Matrix

Ceph-CSI features and available versions

| Plugin | Features                                                   | CSI driver version |
| ------ | ---------------------------------------------------------- | ------------------ |
| CephFS | Dynamically provision, de-provision File mode RWO volume   | >=v0.3.0           |
| CephFS | Dynamically provision, de-provision File mode RWX volume   | >=v0.3.0           |
| CephFS | Creating and deleting snapshot                              | -                  |
| CephFS | Provision volume from snapshot                              | -                  |
| CephFS | Provision volume from another volume                        | -                  |
| CephFS | Resize volume                                               | -                  |
| RBD    | Dynamically provision, de-provision Block mode RWO volume  | >=v0.3.0           |
| RBD    | Dynamically provision, de-provision Block mode RWX volume  | >=v0.3.0           |
| RBD    | Dynamically provision, de-provision File mode RWO volume   | v1.0.0             |
| RBD    | Creating and deleting snapshot                              | >=v0.3.0           |
| RBD    | Provision volume from snapshot                              | v1.0.0             |
| RBD    | Provision volume from another volume                        | -                  |
| RBD    | Resize volume                                               | -                  |
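As a concrete example of the snapshot rows above, the following is a minimal sketch using the Kubernetes v1alpha1 snapshot API (available from Kubernetes 1.13 with the external-snapshotter sidecar). Class parameters and secrets are omitted; complete manifests are in examples/.

```yaml
# Minimal sketch: snapshot an RBD-backed PVC via the CSI external-snapshotter
# (snapshot.storage.k8s.io/v1alpha1, Kubernetes 1.13+).
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbd-snapclass
snapshotter: rbd.csi.ceph.com
---
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  snapshotClassName: csi-rbd-snapclass
  source:
    kind: PersistentVolumeClaim
    name: rbd-pvc        # the PVC to snapshot (from the earlier sketch)
```

With the v1.0.0 driver, a new volume can then be provisioned from the snapshot by setting the new claim's `dataSource` to the VolumeSnapshot, as shown in examples/.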

Ceph-CSI versions and CSI spec compatibility

| Ceph CSI driver version | CSI spec version |
| ----------------------- | ---------------- |
| v0.3.0                  | v0.3             |
| v1.0.0                  | v1.0.0           |

CSI spec and Kubernetes version compatibility

Please refer to the matrix in the Kubernetes documentation.

Contributing to this repo

Please follow the [development guide](https://github.com/ceph/ceph-csi/tree/master/docs/development-guide.md) and coding style guidelines if you are interested in contributing to this repo.

Troubleshooting

Please submit an issue at: [Issues](https://github.com/ceph/ceph-csi/issues)

Contact

Please use the following to reach members of the community: