e3a63029a3
This adds support for creating and deleting snapshots, and for creating a new RBD image from a snapshot.

* Create a snapshot
  * Create a temporary snapshot from the parent volume
  * Clone a new image from the temporary snapshot with the options `--rbd-default-clone-format 2 --image-feature layering,deep-flatten`
  * Delete the temporary snapshot
  * Create a new snapshot from the cloned image
  * Check the image chain depth: if the soft limit is reached, add a task to flatten the cloned image and return success; if the hard limit is reached, add a task to flatten the cloned image and return the snapshot status `ready` as false

```bash
1) rbd snap create <RBD image for src k8s volume>@<random snap name>
2) rbd clone --rbd-default-clone-format 2 --image-feature layering,deep-flatten <RBD image for src k8s volume>@<random snap> <RBD image for temporary snap image>
3) rbd snap rm <RBD image for src k8s volume>@<random snap name>
4) rbd snap create <RBD image for temporary snap image>@<random snap name>
5) check the depth; if it is greater than the configured hard limit, add a task to flatten the cloned image and return the snapshot status ready as false; if it is greater than the soft limit, add a task to flatten the image and return success
```

* Create a clone from a snapshot
  * Clone a new image from the snapshot with the user-provided options
  * Check the depth (n) of the cloned image; if n >= hard limit, add a task to flatten the image and return ABORT (to avoid an image leak)

```bash
1) rbd clone --rbd-default-clone-format 2 --image-feature <k8s dst vol config> <RBD image for temporary snap image>@<random snap name> <RBD image for k8s dst vol>
2) check the depth; if it is greater than the configured hard limit, add a task to flatten the cloned image and return an ABORT error; if it is greater than the soft limit, add a task to flatten the image and return success
```

* Delete a snapshot or PVC
  * Move the temporary cloned image to the trash
  * Add a task to remove the image from the trash

```bash
1) rbd trash mv <cloned image>
2) ceph rbd task add trash remove <cloned image>
```

With the earlier implementation, deleting an image simply added a task to remove it. With these changes that is no longer possible, as the image may have snapshots or clone links, so the following steps are used to delete an image (applicable to both normal and cloned images):

* Move the RBD image to the trash
* Add a task to remove the image from the trash

```bash
1) rbd trash mv <image>
2) ceph rbd task add trash remove <image>
```

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
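The soft/hard depth decision above can be sketched as a small shell function. This is an illustrative sketch only, not ceph-csi source: the limit values and the `check_depth` helper name are assumptions, and the echoed strings stand in for scheduling a flatten task and for the returned status.

```shell
#!/bin/sh
# Hypothetical sketch of the clone-depth decision described above.
# SOFT_LIMIT and HARD_LIMIT stand in for the configurable soft/hard
# max clone depth settings; the values here are illustrative.
SOFT_LIMIT=4
HARD_LIMIT=8

check_depth() {
  depth=$1
  if [ "$depth" -ge "$HARD_LIMIT" ]; then
    # schedule a flatten task, then abort / report snapshot not ready
    echo "flatten-and-abort"
  elif [ "$depth" -ge "$SOFT_LIMIT" ]; then
    # schedule a flatten task in the background, but still return success
    echo "flatten-in-background"
  else
    # chain is short enough; nothing to do
    echo "ok"
  fi
}

check_depth 2   # -> ok
check_depth 5   # -> flatten-in-background
check_depth 9   # -> flatten-and-abort
```

The key design point is that only the hard limit causes a user-visible failure; the soft limit merely schedules background flattening so chains never grow unbounded.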
Ceph CSI
This repo contains the Ceph Container Storage Interface (CSI) driver for RBD and CephFS, along with Kubernetes sidecar deployment YAMLs for the provisioner, attacher, resizer, driver-registrar, and snapshotter that support the CSI functionality.
Overview
Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and a Ceph cluster. They allow dynamic provisioning of Ceph volumes and attaching them to workloads.
Independent CSI plugins are provided to support RBD and CephFS backed volumes:
- For details about configuration and deployment of the RBD plugin, please refer to the rbd doc; for CephFS plugin configuration and deployment, please refer to the cephfs doc.
- For example usage of the RBD and CephFS CSI plugins, see the examples in examples/.
- For stale resource cleanup, please refer to the cleanup doc.
NOTE:
- Ceph CSI Arm64 support is experimental.
Project status
Status: GA
Supported CO platforms
Ceph CSI drivers are currently developed and tested exclusively on Kubernetes environments. There is work in progress to make this CO independent and thus support other orchestration environments in the future.
NOTE:
csi v0.3 is deprecated with the release of csi v1.1.0.
Support Matrix
Ceph-CSI features and available versions
| Plugin | Features | Feature Status | CSI Driver Version | CSI Spec Version | Ceph Cluster Version | Kubernetes Version |
| --- | --- | --- | --- | --- | --- | --- |
| RBD | Dynamically provision, de-provision Block mode RWO volume | GA | >= v1.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Dynamically provision, de-provision Block mode RWX volume | GA | >= v1.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode RWO volume | GA | >= v1.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Dynamically provision (from snapshot or volume), de-provision File mode ROX volume | Alpha | >= v3.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Dynamically provision (from snapshot or volume), de-provision Block mode ROX volume | Alpha | >= v3.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Creating and deleting snapshot | Alpha | >= v1.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Provision volume from snapshot | Alpha | >= v1.0.0 | >= v1.0.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| | Provision volume from another volume | - | - | - | - | - |
| | Expand volume | Beta | >= v2.0.0 | >= v1.1.0 | Mimic (>=v13.0.0) | >= v1.15.0 |
| | Metrics Support | Beta | >= v1.2.0 | >= v1.1.0 | Mimic (>=v13.0.0) | >= v1.15.0 |
| | Topology Aware Provisioning Support | Alpha | >= v2.1.0 | >= v1.1.0 | Mimic (>=v13.0.0) | >= v1.14.0 |
| CephFS | Dynamically provision, de-provision File mode RWO volume | Beta | >= v1.1.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode RWX volume | Beta | >= v1.1.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 |
| | Dynamically provision, de-provision File mode ROX volume | Alpha | >= v3.0.0 | >= v1.0.0 | Nautilus (>=v14.2.2) | >= v1.14.0 |
| | Creating and deleting snapshot | - | - | - | - | - |
| | Provision volume from snapshot | - | - | - | - | - |
| | Provision volume from another volume | - | - | - | - | - |
| | Expand volume | Beta | >= v2.0.0 | >= v1.1.0 | Nautilus (>=v14.2.2) | >= v1.15.0 |
| | Metrics | Beta | >= v1.2.0 | >= v1.1.0 | Nautilus (>=v14.2.2) | >= v1.15.0 |
NOTE: The Alpha status reflects possible non-backward compatible changes in the future, and is thus not recommended for production use.
CSI spec and Kubernetes version compatibility
Please refer to the matrix in the Kubernetes documentation.
Ceph CSI Container images and release compatibility
Ceph CSI Release/Branch | Container image name | Image Tag |
---|---|---|
Master (Branch) | quay.io/cephcsi/cephcsi | canary |
v2.1.2 (Release) | quay.io/cephcsi/cephcsi | v2.1.2 |
v2.1.1 (Release) | quay.io/cephcsi/cephcsi | v2.1.1 |
v2.1.0 (Release) | quay.io/cephcsi/cephcsi | v2.1.0 |
v2.0.1 (Release) | quay.io/cephcsi/cephcsi | v2.0.1 |
v2.0.0 (Release) | quay.io/cephcsi/cephcsi | v2.0.0 |
v1.2.2 (Release) | quay.io/cephcsi/cephcsi | v1.2.2 |
v1.2.1 (Release) | quay.io/cephcsi/cephcsi | v1.2.1 |
v1.2.0 (Release) | quay.io/cephcsi/cephcsi | v1.2.0 |
v1.1.0 (Release) | quay.io/cephcsi/cephcsi | v1.1.0 |
v1.0.0 (Branch) | quay.io/cephcsi/cephfsplugin | v1.0.0 |
v1.0.0 (Branch) | quay.io/cephcsi/rbdplugin | v1.0.0 |
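Putting the table above to use, an image reference is simply the repository name joined to a tag; a minimal sketch (the `docker pull` step is commented out because it needs a container runtime and network access):

```shell
#!/bin/sh
# Compose a cephcsi image reference from the compatibility table above.
REPO=quay.io/cephcsi/cephcsi
TAG=canary            # master branch; use e.g. v2.1.2 for a pinned release
IMAGE="$REPO:$TAG"
echo "$IMAGE"         # -> quay.io/cephcsi/cephcsi:canary
# docker pull "$IMAGE"
```

Pinning a release tag rather than canary is the safer choice for production deployments, since the canary tag tracks the master branch.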
Contributing to this repo
Please follow the development-guide and coding style guidelines if you are interested in contributing to this repo.
Troubleshooting
Please submit an issue at: Issues
Weekly Bug Triage call
We conduct weekly bug triage calls in our Slack channel on Tuesdays. More details are available here.
Dev standup
A regular dev standup takes place every other Monday, Tuesday, and Thursday at 2:00 PM UTC. Convert to your local timezone by running date -d "14:00 UTC" in a terminal.
Any changes to the meeting schedule will be added to the agenda doc.
Anyone who wants to discuss the direction of the project, design and implementation reviews, or general questions with the broader community is welcome and encouraged to join.
- Meeting link: https://redhat.bluejeans.com/702977652
- Current agenda
Contact
Please use the following to reach members of the community:
- Slack: Join our Slack channel to discuss anything related to this project. You can join the Slack via this invite link.
- Forums: ceph-csi
- Twitter: @CephCsi