CSI driver for Ceph
Madhu Rajanna a0fd805a8b rbd: Add support for smart cloning
Added support for RBD PVC-to-PVC cloning. The following
commands are executed on the RBD side to create a PVC-PVC
clone.

* Check the depth (n) of the cloned image; if n >= (hard limit - 2)
or n >= (soft limit - 2), add a task to flatten the image and return
ABORT (to avoid an image leak). **Note**: the temp clone image in the
chain will be flattened first, if available
* Reserve the keys and values in omap (this helps avoid a leak, as
nothing was reserved earlier when ABORT was returned and the request
may not come back)
* Create a snapshot of the source RBD image
* Clone the snapshot (temp clone)
* Delete the snapshot
* Snapshot the temp clone
* Clone the snapshot (final clone)
* Delete the snapshot

```bash
1) Check the clone depth of the parent image; if a flatten is
   required, add a task to flatten the image and return ABORT to
   avoid a leak (the hardlimit-2 and softlimit-2 checks happen here)
2) Reserve omap keys
3) rbd snap create <RBD image for src k8s volume>@<random snap name>
4) rbd clone --rbd-default-clone-format 2 --image-feature
   layering,deep-flatten <RBD image for src k8s volume>@<random snap name>
   <RBD image for temporary snap image>
5) rbd snap rm <RBD image for src k8s volume>@<random snap name>
6) rbd snap create <RBD image for temporary snap image>@<random snap name>
7) rbd clone --rbd-default-clone-format 2 --image-feature <k8s dst vol config>
   <RBD image for temporary snap image>@<random snap name> <RBD image for k8s dst vol>
8) rbd snap rm <RBD image for temporary snap image>@<random snap name>
```

* Delete the temporary clone image created as part of the clone (if present)
* Delete the RBD image
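From the user's side, a PVC-PVC clone is requested with a `dataSource` of kind `PersistentVolumeClaim`. A minimal sketch follows; the PVC and StorageClass names are hypothetical, see the manifests in examples/ for real usage:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone        # hypothetical name for the cloned PVC
spec:
  dataSource:
    kind: PersistentVolumeClaim
    name: rbd-pvc            # hypothetical name of the existing source PVC
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc   # hypothetical StorageClass backed by the RBD plugin
```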

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2020-07-10 14:02:12 +00:00

Ceph CSI


This repo contains the Ceph Container Storage Interface (CSI) driver for RBD and CephFS, along with Kubernetes sidecar deployment YAMLs for the provisioner, attacher, resizer, driver-registrar, and snapshotter to support CSI functionality.

Overview

Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and a Ceph cluster. They allow dynamically provisioning Ceph volumes and attaching them to workloads.

Independent CSI plugins are provided to support RBD- and CephFS-backed volumes:

  • For details about configuration and deployment of the RBD plugin, please refer to the rbd doc; for CephFS plugin configuration and deployment, please refer to the cephfs doc.
  • For example usage of the RBD and CephFS CSI plugins, see the examples in examples/.
  • For stale resource cleanup, please refer to the cleanup doc.
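To illustrate dynamic provisioning, a minimal sketch of an RBD-backed block-mode PVC follows; the names are hypothetical, and the real manifests live in examples/:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-block-pvc        # hypothetical PVC name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block          # request a raw block device instead of a filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc   # hypothetical StorageClass using the RBD CSI driver
```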

NOTE:

  • Ceph CSI Arm64 support is experimental.

Project status

Status: GA

Supported CO platforms

Ceph CSI drivers are currently developed and tested exclusively in Kubernetes environments. Work is in progress to make the drivers CO-independent and thus support other orchestration environments in the future.

NOTE:

  • CSI v0.3 is deprecated with the release of CSI v1.1.0

Support Matrix

Ceph-CSI features and available versions

| Plugin | Features | Feature Status | CSI Driver Version | CSI Spec Version | Ceph Cluster Version | Kubernetes Version |
| ------ | -------- | -------------- | ------------------ | ---------------- | -------------------- | ------------------ |
| RBD | Dynamically provision, de-provision Block mode RWO volume | GA | >= v1.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Dynamically provision, de-provision Block mode RWX volume | GA | >= v1.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode RWO volume | GA | >= v1.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Dynamically provision (from snapshot or volume), de-provision File mode ROX volume | Alpha | >= v3.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Dynamically provision (from snapshot or volume), de-provision Block mode ROX volume | Alpha | >= v3.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Creating and deleting snapshot | Alpha | >= v1.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Provision volume from snapshot | Alpha | >= v1.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Provision volume from another volume | - | - | - | - | - |
| | Expand volume | Beta | >= v2.0.0 | >= v1.1.0 | Mimic (>=v13.0.0) | >= v1.15.0 |
| | Metrics Support | Beta | >= v1.2.0 | >= v1.1.0 | Mimic (>=v13.0.0) | >= v1.15.0 |
| | Topology Aware Provisioning Support | Alpha | >= v2.1.0 | >= v1.1.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| CephFS | Dynamically provision, de-provision File mode RWO volume | Beta | >= v1.1.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode RWX volume | Beta | >= v1.1.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode ROX volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 |
| | Creating and deleting snapshot | - | - | - | - | - |
| | Provision volume from snapshot | - | - | - | - | - |
| | Provision volume from another volume | - | - | - | - | - |
| | Expand volume | Beta | >= v2.0.0 | >= v1.1.0 | Nautilus (>=v14.2.2) | >= v1.15.0 |
| | Metrics | Beta | >= v1.2.0 | >= v1.1.0 | Nautilus (>=v14.2.2) | >= v1.15.0 |

NOTE: The Alpha status reflects possible non-backward compatible changes in the future, and is thus not recommended for production use.

CSI spec and Kubernetes version compatibility

Please refer to the matrix in the Kubernetes documentation.

Ceph CSI Container images and release compatibility

| Ceph CSI Release/Branch | Container image name | Image Tag |
| ----------------------- | -------------------- | --------- |
| Master (Branch) | quay.io/cephcsi/cephcsi | canary |
| v2.1.2 (Release) | quay.io/cephcsi/cephcsi | v2.1.2 |
| v2.1.1 (Release) | quay.io/cephcsi/cephcsi | v2.1.1 |
| v2.1.0 (Release) | quay.io/cephcsi/cephcsi | v2.1.0 |
| v2.0.1 (Release) | quay.io/cephcsi/cephcsi | v2.0.1 |
| v2.0.0 (Release) | quay.io/cephcsi/cephcsi | v2.0.0 |
| v1.2.2 (Release) | quay.io/cephcsi/cephcsi | v1.2.2 |
| v1.2.1 (Release) | quay.io/cephcsi/cephcsi | v1.2.1 |
| v1.2.0 (Release) | quay.io/cephcsi/cephcsi | v1.2.0 |
| v1.1.0 (Release) | quay.io/cephcsi/cephcsi | v1.1.0 |
| v1.0.0 (Branch) | quay.io/cephcsi/cephfsplugin | v1.0.0 |
| v1.0.0 (Branch) | quay.io/cephcsi/rbdplugin | v1.0.0 |

Contributing to this repo

Please follow the development-guide and coding style guidelines if you are interested in contributing to this repo.

Troubleshooting

Please submit an issue at: Issues

Weekly Bug Triage call

We conduct weekly bug triage calls in our Slack channel on Tuesdays. More details are available here.

Dev standup

A regular dev standup takes place every other Monday, Tuesday, and Thursday at 2:00 PM UTC. Convert to your local timezone by executing the command `date -d "14:00 UTC"` in a terminal.
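For example, assuming GNU date (Linux coreutils; BSD/macOS date takes different flags), the output depends on your local timezone:

```shell
#!/usr/bin/env bash
# Print the local date/time corresponding to the 14:00 UTC standup slot.
# Quoting keeps "14:00 UTC" together as a single date string.
date -d "14:00 UTC"

# Or just the local HH:MM:
date -d "14:00 UTC" +%H:%M
```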

Any changes to the meeting schedule will be added to the agenda doc.

Anyone who wants to discuss the direction of the project, design and implementation reviews, or general questions with the broader community is welcome and encouraged to join.

Contact

Please use the following to reach members of the community: