Ceph CSI
This repo contains the [Container Storage Interface (CSI)](https://github.com/container-storage-interface/) Ceph CSI drivers for RBD and CephFS, along with the Kubernetes sidecar deployment YAMLs for the provisioner, attacher, node-driver-registrar, and snapshotter that support CSI functionality.
Overview
Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and a Ceph cluster. They allow Ceph volumes to be dynamically provisioned and attached to workloads.
Independent CSI plugins are provided to support RBD and CephFS backed volumes:
- For details about configuration and deployment of the RBD plugin, please refer to the rbd doc; for CephFS plugin configuration and deployment, please refer to the cephfs doc.
- For example usage of the RBD and CephFS CSI plugins, see the examples in examples/ (a minimal sketch also follows below).
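As a rough sketch of the dynamic provisioning flow described above, the claim below requests an RBD-backed volume through a PersistentVolumeClaim. The StorageClass name used here is an assumption and must match one created from the manifests in examples/; refer to those manifests for the actual, supported definitions.

```yaml
# Illustrative sketch only: a PVC that requests a volume from a CSI-backed
# StorageClass. The StorageClass name "csi-rbd" is an assumption and must
# match one created from the manifests under examples/.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd
```

Once the claim is bound, it can be mounted into a Pod like any other PersistentVolumeClaim.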
Project status
Status: Alpha
The alpha status reflects possible non-backward-compatible changes in the future; the drivers are therefore not recommended for production use. There is work in progress that would change on-disk metadata for certain operations, possibly breaking backward compatibility.
Supported CO platforms
Ceph CSI drivers are currently developed and tested exclusively in Kubernetes environments. There is work in progress to make them CO-independent and thus support other orchestration environments in the future.
For Kubernetes versions 1.11 and 1.12, please use 0.3 images and deployments.
For Kubernetes versions 1.13 and above, please use 1.0 images and deployments.
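Purely as an illustration of this guidance (not taken from this repo's manifests), the fragment below shows where the version choice appears in practice: the plugin container image tag in the deployment tracks the Kubernetes version. The image name is a placeholder; use the actual images and manifests under deploy/ for the release you choose.

```yaml
# Hypothetical excerpt of a plugin deployment, shown only to illustrate the
# version guidance above: the image tag tracks the Kubernetes version
# (a v0.3.x tag for Kubernetes 1.11/1.12, a v1.0.x tag for 1.13 and above).
# The image name is a placeholder; take the real one from deploy/.
spec:
  containers:
    - name: csi-rbdplugin
      image: <ceph-csi-rbd-plugin-image>:v1.0.0
```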
Support Matrix
Ceph-CSI features and available versions
Plugin | Features | CSI driver Version |
---|---|---|
CephFS | Dynamically provision, de-provision File mode RWO volume | >=v0.3.0 |
CephFS | Dynamically provision, de-provision File mode RWX volume | >=v0.3.0 |
CephFS | Creating and deleting snapshot | - |
CephFS | Provision volume from snapshot | - |
CephFS | Provision volume from another volume | - |
CephFS | Resize volume | - |
RBD | Dynamically provision, de-provision Block mode RWO volume | >=v0.3.0 |
RBD | Dynamically provision, de-provision Block mode RWX volume | >=v0.3.0 |
RBD | Dynamically provision, de-provision File mode RWO volume | v1.0.0 |
RBD | Creating and deleting snapshot | >=v0.3.0 |
RBD | Provision volume from snapshot | v1.0.0 |
RBD | Provision volume from another volume | - |
RBD | Resize volume | - |
Ceph-CSI versions and CSI spec compatibility
Ceph CSI driver Version | CSI spec version |
---|---|
v0.3.0 | v0.3 |
v1.0.0 | v1.0.0 |
CSI spec and Kubernetes version compatibility
Please refer to the matrix in the Kubernetes documentation.
Contributing to this repo
Please follow the [development guide](https://github.com/ceph/ceph-csi/tree/master/docs/development-guide.md) and coding style guidelines if you are interested in contributing to this repo.
Troubleshooting
Please submit an issue at: Issues
Contact
Please use the following to reach members of the community:
- Slack: Join our slack channel
- Forums: ceph-csi
- Twitter: @CephCsi