Merge pull request #59 from gman0/v0.3.0-docs

[CSI 0.3.0 2/4] Makefile, manifests, docs, examples
Huamin Chen 2018-08-07 09:47:32 -04:00 committed by GitHub
commit 78a7185e37
47 changed files with 860 additions and 837 deletions

Makefile

@ -15,10 +15,10 @@
.PHONY: all rbdplugin
RBD_IMAGE_NAME=quay.io/cephcsi/rbdplugin
-RBD_IMAGE_VERSION=v0.2.0
+RBD_IMAGE_VERSION=v0.3.0
CEPHFS_IMAGE_NAME=quay.io/cephcsi/cephfsplugin
-CEPHFS_IMAGE_VERSION=v0.2.0
+CEPHFS_IMAGE_VERSION=v0.3.0
all: rbdplugin cephfsplugin
@ -30,7 +30,7 @@ rbdplugin:
	if [ ! -d ./vendor ]; then dep ensure; fi
	CGO_ENABLED=0 GOOS=linux go build -a -ldflags '-extldflags "-static"' -o _output/rbdplugin ./rbd
-rbdplugin-container: rbdplugin
+image-rbdplugin: rbdplugin
	cp _output/rbdplugin deploy/rbd/docker
	docker build -t $(RBD_IMAGE_NAME):$(RBD_IMAGE_VERSION) deploy/rbd/docker
@ -38,14 +38,14 @@ cephfsplugin:
	if [ ! -d ./vendor ]; then dep ensure; fi
	CGO_ENABLED=0 GOOS=linux go build -a -ldflags '-extldflags "-static"' -o _output/cephfsplugin ./cephfs
-cephfsplugin-container: cephfsplugin
+image-cephfsplugin: cephfsplugin
	cp _output/cephfsplugin deploy/cephfs/docker
	docker build -t $(CEPHFS_IMAGE_NAME):$(CEPHFS_IMAGE_VERSION) deploy/cephfs/docker
-push-rbdplugin-container: rbdplugin-container
+push-image-rbdplugin: image-rbdplugin
	docker push $(RBD_IMAGE_NAME):$(RBD_IMAGE_VERSION)
-push-cephfsplugin-container: cephfsplugin-container
+push-image-cephfsplugin: image-cephfsplugin
	docker push $(CEPHFS_IMAGE_NAME):$(CEPHFS_IMAGE_VERSION)
clean:

README.md

@ -1,308 +1,16 @@
+# Ceph CSI 0.3.0
+[Container Storage Interface (CSI)](https://github.com/container-storage-interface/) driver, provisioner, and attacher for Ceph RBD and CephFS.
+## Overview
+Ceph CSI plugins implement an interface between CSI enabled Container Orchestrator (CO) and CEPH cluster. It allows dynamically provisioning CEPH volumes and attaching them to workloads. Current implementation of Ceph CSI plugins was tested in Kubernetes environment (requires Kubernetes 1.11+), but the code does not rely on any Kubernetes specific calls (WIP to make it k8s agnostic) and should be able to run with any CSI enabled CO.
+For details about configuration and deployment of RBD and CephFS CSI plugins, see documentation in `docs/`.
+For example usage of RBD and CephFS CSI plugins, see examples in `examples/`.
# Ceph CSI
## Overview
Ceph CSI plugins implement an interface between CSI enabled Container
Orchestrator and CEPH cluster. It allows dynamically provision CEPH
volumes and attach it to workloads.
Current implementation of Ceph CSI plugins was tested in Kubernetes environment (requires Kubernetes 1.10+),
but the code does not rely on any Kubernetes specific calls (WIP to make it k8s agnostic)
and should be able to run with any CSI enabled CO (Containers Orchestration).
[Container Storage Interface (CSI)](https://github.com/container-storage-interface/) driver, provisioner, and attacher for Ceph RBD and CephFS
## RBD Plugin
An RBD CSI plugin is available to help simplify storage management.
Once user creates PVC with the reference to a RBD storage class, rbd image and
corresponding PV object gets dynamically created and becomes ready to be used by
workloads.
### Configuration Requirements
* Secret object with the authentication key for ceph cluster
* StorageClass with rbdplugin (default CSI RBD plugin name) as a provisioner name
and information about ceph cluster (monitors, pool, etc)
* Service Accounts with required RBAC permissions
### Feature Status
### 1.9: Alpha
**Important:** `CSIPersistentVolume` and `MountPropagation`
[feature gates must be enabled starting in 1.9](#enabling-the-alpha-feature-gates).
Also API server must run with running config set to: `storage.k8s.io/v1alpha1`
### Compiling
CSI RBD plugin can be compiled in a form of a binary file or in a form of a container. When compiled
as a binary file, it gets stored in \_output folder with the name rbdplugin. When compiled as a container,
the resulting image is stored in a local docker's image store.
To compile just a binary file:
```
$ make rbdplugin
```
To build a container:
```
$ make rbdplugin-container
```
By running:
```
$ docker images | grep rbdplugin
```
You should see the following line in the output:
```
quay.io/cephcsi/rbdplugin v0.2.0 76369a8f8528 15 minutes ago 372.5 MB
```
### Testing
#### Prerequisite
##### Enable Mount Propagation in Docker
Comment out `MountFlags=slave` in docker systemd service then restart docker service.
```bash
# systemctl daemon-reload
# systemctl restart docker
```
##### Enable Kubernetes Feature Gates
Enable features `MountPropagation=true,CSIPersistentVolume=true` and runtime config `storage.k8s.io/v1alpha1=true`
#### Step 1: Create Secret
```
$ kubectl create -f ./deploy/rbd/kubernetes/rbd-secrets.yaml
```
**Important:** rbd-secrets.yaml, must be customized to match your ceph environment.
#### Step 2: Create StorageClass
```
$ kubectl create -f ./deploy/rbd/kubernetes/rbd-storage-class.yaml
```
**Important:** rbd-storage-class.yaml, must be customized to match your ceph environment.
#### Step 3: Start CSI CEPH RBD plugin
```
$ kubectl create -f ./deploy/rbd/kubernetes/rbdplugin.yaml
```
#### Step 4: Start CSI External Attacher
```
$ kubectl create -f ./deploy/rbd/kubernetes/csi-attacher.yaml
```
#### Step 5: Start CSI External Provisioner
```
$ kubectl create -f ./deploy/rbd/kubernetes/csi-provisioner.yaml
```
**Important:** Deployment yaml files includes required Service Account definitions and
required RBAC rules.
#### Step 6: Check status of CSI RBD plugin
```
$ kubectl get pods | grep csi
```
The following output should be displayed:
```
NAMESPACE NAME READY STATUS RESTARTS AGE
default csi-attacher-0 1/1 Running 0 1d
default csi-rbdplugin-qxqtl 2/2 Running 0 1d
default csi-provisioner-0 1/1 Running 0 1d
```
#### Step 7: Create PVC
```
$ kubectl create -f ./deploy/rbd/kubernetes/pvc.yaml
```
#### Step 8: Check status of provisioner PV
```
$ kubectl get pv
```
The following output should be displayed:
```
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
kubernetes-dynamic-pvc-1b19ddf1-0047-11e8-85ab-760f2eed12ea 5Gi RWO Delete Bound default/csi-pvc rbdv2 10s
```
```
$ kubectl describe pv kubernetes-dynamic-pvc-1b19ddf1-0047-11e8-85ab-760f2eed12ea
Name: kubernetes-dynamic-pvc-1b19ddf1-0047-11e8-85ab-760f2eed12ea
Annotations: csi.volume.kubernetes.io/volume-attributes={"monitors":"192.168.80.233:6789","pool":"kubernetes"}
csiProvisionerIdentity=1516716490787-8081-rbdplugin <------ !!!
pv.kubernetes.io/provisioned-by=rbdplugin
StorageClass: rbdv2 <------ !!!
Status: Bound <------ !!!
Claim: default/csi-pvc <------ !!!
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 5Gi
Message:
Source:
Type: CSI <------ !!!
```
#### Step 9: Create a test pod
```bash
# kubectl create -f ./deploy/rbd/pod.yaml
```
## CephFS plugin
A CephFS CSI plugin is available to help simplify storage management.
Once user creates PVC with the reference to a CephFS CSI storage class, corresponding
PV object gets dynamically created and becomes ready to be used by workloads.
### Configuration Requirements
* Secret object with the authentication user ID `userID` and key `userKey` for ceph cluster
* StorageClass with csi-cephfsplugin (default CSI CephFS plugin name) as a provisioner name
and information about ceph cluster (monitors, pool, rootPath, ...)
* Service Accounts with required RBAC permissions
Mounter options: specifies whether to use FUSE or ceph kernel client for mounting. By default, the plugin will probe for `ceph-fuse`. If this fails, the kernel client will be used instead. Command line argument `--volumemounter=[fuse|kernel]` overrides this behaviour.
StorageClass options:
* `provisionVolume: "bool"`: if set to true, the plugin will provision and mount a new volume. Admin credentials `adminID` and `adminKey` are required in the secret object, since this also creates a dedicated RADOS user used for mounting the volume.
* `rootPath: /path-in-cephfs`: required field if `provisionVolume=true`. CephFS is mounted from the specified path. User credentials `userID` and `userKey` are required in the secret object.
* `mounter: "kernel" or "fuse"`: (optional) per-StorageClass mounter configuration. Overrides the default mounter.
### Feature Status
### 1.9: Alpha
**Important:** `CSIPersistentVolume` and `MountPropagation`
[feature gates must be enabled starting in 1.9](#enabling-the-alpha-feature-gates).
Also API server must run with running config set to: `storage.k8s.io/v1alpha1`
* `kube-apiserver` must be launched with `--feature-gates=CSIPersistentVolume=true,MountPropagation=true`
and `--runtime-config=storage.k8s.io/v1alpha1=true`
* `kube-controller-manager` must be launched with `--feature-gates=CSIPersistentVolume=true`
* `kubelet` must be launched with `--feature-gates=CSIPersistentVolume=true,MountPropagation=true`
### Compiling
CSI CephFS plugin can be compiled in a form of a binary file or in a form of a container. When compiled
as a binary file, it gets stored in \_output folder with the name cephfsplugin. When compiled as a container,
the resulting image is stored in a local docker's image store.
To compile just a binary file:
```
$ make cephfsplugin
```
To build a container:
```
$ make cephfsplugin-container
```
By running:
```
$ docker images | grep cephfsplugin
```
You should see the following line in the output:
```
quay.io/cephcsi/cephfsplugin v0.2.0 79482e644593 4 minutes ago 305MB
```
### Testing
#### Prerequisite
##### Enable Mount Propagation in Docker
Comment out `MountFlags=slave` in docker systemd service then restart docker service.
```
# systemctl daemon-reload
# systemctl restart docker
```
##### Enable Kubernetes Feature Gates
Enable features `MountPropagation=true,CSIPersistentVolume=true` and runtime config `storage.k8s.io/v1alpha1=true`
#### Step 1: Create Secret
```
$ kubectl create -f ./deploy/cephfs/kubernetes/secret.yaml
```
**Important:** secret.yaml, must be customized to match your ceph environment.
#### Step 2: Create StorageClass
```
$ kubectl create -f ./deploy/cephfs/kubernetes/cephfs-storage-class.yaml
```
**Important:** cephfs-storage-class.yaml, must be customized to match your ceph environment.
#### Step 3: Start CSI CEPH CephFS plugin
```
$ kubectl create -f ./deploy/cephfs/kubernetes/cephfsplugin.yaml
```
#### Step 4: Start CSI External Attacher
```
$ kubectl create -f ./deploy/cephfs/kubernetes/csi-attacher.yaml
```
#### Step 5: Start CSI External Provisioner
```
$ kubectl create -f ./deploy/cephfs/kubernetes/csi-provisioner.yaml
```
**Important:** Deployment yaml files includes required Service Account definitions and
required RBAC rules.
#### Step 6: Check status of CSI CephFS plugin
```
$ kubectl get pods | grep csi
csi-attacher-0 1/1 Running 0 6m
csi-cephfsplugin-hmqpk 2/2 Running 0 6m
csi-provisioner-0 1/1 Running 0 6m
```
#### Step 7: Create PVC
```
$ kubectl create -f ./deploy/cephfs/kubernetes/pvc.yaml
```
#### Step 8: Check status of provisioner PV
```
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
kubernetes-dynamic-pv-715cef0b30d811e8 5Gi RWX Delete Bound default/csi-cephfs-pvc csi-cephfs 5s
```
```
$ kubectl describe pv kubernetes-dynamic-pv-715cef0b30d811e8
Name: kubernetes-dynamic-pv-715cef0b30d811e8
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by=csi-cephfsplugin
StorageClass: csi-cephfs
Status: Bound
Claim: default/csi-cephfs-pvc
Reclaim Policy: Delete
Access Modes: RWX
Capacity: 5Gi
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: ReadOnly: %v
VolumeHandle: csi-cephfsplugin
%!(EXTRA string=csi-cephfs-7182b779-30d8-11e8-bf01-5254007d7491, bool=false)Events: <none>
```
#### Step 9: Create a test pod
```
$ kubectl create -f ./deploy/cephfs/kubernetes/pod.yaml
```
## Troubleshooting
-Please submit an issue at:[Issues](https://github.com/ceph/ceph-csi/issues)
+Please submit an issue at: [Issues](https://github.com/ceph/ceph-csi/issues)


@ -2,5 +2,5 @@
if [ "${TRAVIS_BRANCH}" == "master" ] && [ "${TRAVIS_PULL_REQUEST}" == "false" ]; then
	docker login -u "${QUAY_IO_USERNAME}" -p "${QUAY_IO_PASSWORD}" quay.io
-	make push-rbdplugin-container push-cephfsplugin-container
+	make push-image-rbdplugin push-image-cephfsplugin
fi

deploy/cephfs/kubernetes/csi-attacher-rbac.yaml

@ -0,0 +1,37 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-attacher
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-attacher-runner
rules:
- apiGroups: [""]
resources: ["events"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-attacher-role
subjects:
- kind: ServiceAccount
name: csi-attacher
namespace: default
roleRef:
kind: ClusterRole
name: external-attacher-runner
apiGroup: rbac.authorization.k8s.io

deploy/cephfs/kubernetes/csi-attacher.yaml

@ -1,87 +0,0 @@
# This YAML file contains RBAC API objects,
# which are necessary to run external csi attacher for cinder.
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-attacher
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-attacher-runner
rules:
- apiGroups: [""]
resources: ["events"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-attacher-role
subjects:
- kind: ServiceAccount
name: csi-attacher
namespace: default
roleRef:
kind: ClusterRole
name: external-attacher-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: csi-attacher
labels:
app: csi-attacher
spec:
selector:
app: csi-attacher
ports:
- name: dummy
port: 12345
---
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
name: csi-attacher
spec:
serviceName: "csi-attacher"
replicas: 1
template:
metadata:
labels:
app: csi-attacher
spec:
serviceAccount: csi-attacher
containers:
- name: csi-attacher
image: quay.io/k8scsi/csi-attacher:v0.2.0
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /var/lib/kubelet/plugins/csi-cephfsplugin/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /var/lib/kubelet/plugins/csi-cephfsplugin
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/csi-cephfsplugin
type: DirectoryOrCreate

deploy/cephfs/kubernetes/csi-cephfsplugin-attacher.yaml

@ -0,0 +1,45 @@
kind: Service
apiVersion: v1
metadata:
name: csi-cephfsplugin-attacher
labels:
app: csi-cephfsplugin-attacher
spec:
selector:
app: csi-cephfsplugin-attacher
ports:
- name: dummy
port: 12345
---
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
name: csi-cephfsplugin-attacher
spec:
serviceName: "csi-cephfsplugin-attacher"
replicas: 1
template:
metadata:
labels:
app: csi-cephfsplugin-attacher
spec:
serviceAccount: csi-attacher
containers:
- name: csi-cephfsplugin-attacher
image: quay.io/k8scsi/csi-attacher:v0.3.0
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /var/lib/kubelet/plugins/csi-cephfsplugin/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /var/lib/kubelet/plugins/csi-cephfsplugin
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/csi-cephfsplugin
type: DirectoryOrCreate

deploy/cephfs/kubernetes/csi-cephfsplugin-provisioner.yaml

@ -0,0 +1,46 @@
kind: Service
apiVersion: v1
metadata:
name: csi-cephfsplugin-provisioner
labels:
app: csi-cephfsplugin-provisioner
spec:
selector:
app: csi-cephfsplugin-provisioner
ports:
- name: dummy
port: 12345
---
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
name: csi-cephfsplugin-provisioner
spec:
serviceName: "csi-cephfsplugin-provisioner"
replicas: 1
template:
metadata:
labels:
app: csi-cephfsplugin-provisioner
spec:
serviceAccount: csi-provisioner
containers:
- name: csi-provisioner
image: quay.io/k8scsi/csi-provisioner:v0.3.0
args:
- "--provisioner=csi-cephfsplugin"
- "--csi-address=$(ADDRESS)"
- "--v=5"
env:
- name: ADDRESS
value: /var/lib/kubelet/plugins/csi-cephfsplugin/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /var/lib/kubelet/plugins/csi-cephfsplugin
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/csi-cephfsplugin
type: DirectoryOrCreate

deploy/cephfs/kubernetes/csi-cephfsplugin.yaml

@ -1,46 +1,3 @@
# This YAML defines all API objects to create RBAC roles for csi node plugin.
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-cephfsplugin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-cephfsplugin
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "update"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-cephfsplugin
subjects:
- kind: ServiceAccount
name: csi-cephfsplugin
namespace: default
roleRef:
kind: ClusterRole
name: csi-cephfsplugin
apiGroup: rbac.authorization.k8s.io
---
# This YAML file contains driver-registrar & csi driver nodeplugin API objects,
# which are necessary to run csi nodeplugin for cephfs.
kind: DaemonSet
apiVersion: apps/v1beta2
metadata:
@ -54,11 +11,11 @@ spec:
labels:
app: csi-cephfsplugin
spec:
-serviceAccount: csi-cephfsplugin
+serviceAccount: csi-nodeplugin
hostNetwork: true
containers:
- name: driver-registrar
-image: quay.io/k8scsi/driver-registrar:v0.2.0
+image: quay.io/k8scsi/driver-registrar:v0.3.0
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
@ -78,7 +35,7 @@ spec:
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
-image: quay.io/cephcsi/cephfsplugin:v0.2.0
+image: quay.io/cephcsi/cephfsplugin:v0.3.0
args :
- "--nodeid=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"

deploy/cephfs/kubernetes/csi-nodeplugin-rbac.yaml

@ -0,0 +1,37 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-nodeplugin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-nodeplugin
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "update"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-nodeplugin
subjects:
- kind: ServiceAccount
name: csi-nodeplugin
namespace: default
roleRef:
kind: ClusterRole
name: csi-nodeplugin
apiGroup: rbac.authorization.k8s.io

deploy/cephfs/kubernetes/csi-provisioner-rbac.yaml

@ -0,0 +1,40 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-provisioner-runner
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-provisioner-role
subjects:
- kind: ServiceAccount
name: csi-provisioner
namespace: default
roleRef:
kind: ClusterRole
name: external-provisioner-runner
apiGroup: rbac.authorization.k8s.io

deploy/cephfs/kubernetes/csi-provisioner.yaml

@ -1,97 +0,0 @@
# This YAML file contains all API objects that are necessary to run external
# CSI provisioner.
#
# In production, this needs to be in separate files, e.g. service account and
# role and role binding needs to be created once, while stateful set may
# require some tuning.
#
# In addition, mock CSI driver is hardcoded as the CSI driver.
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-provisioner-runner
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-provisioner-role
subjects:
- kind: ServiceAccount
name: csi-provisioner
namespace: default
roleRef:
kind: ClusterRole
name: external-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: csi-provisioner
labels:
app: csi-provisioner
spec:
selector:
app: csi-provisioner
ports:
- name: dummy
port: 12345
---
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
name: csi-provisioner
spec:
serviceName: "csi-provisioner"
replicas: 1
template:
metadata:
labels:
app: csi-provisioner
spec:
serviceAccount: csi-provisioner
containers:
- name: csi-provisioner
image: quay.io/k8scsi/csi-provisioner:v0.2.1
args:
- "--provisioner=csi-cephfsplugin"
- "--csi-address=$(ADDRESS)"
- "--v=5"
env:
- name: ADDRESS
value: /var/lib/kubelet/plugins/csi-cephfsplugin/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /var/lib/kubelet/plugins/csi-cephfsplugin
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/csi-cephfsplugin
type: DirectoryOrCreate


@ -1,7 +0,0 @@
#!/bin/bash
objects=(cephfs-storage-class cephfsplugin csi-attacher csi-provisioner)
for obj in ${objects[@]}; do
kubectl create -f "./$obj.yaml"
done


@ -1,4 +0,0 @@
#!/bin/sh
kubectl create -f ./pvc.yaml
kubectl create -f ./pod.yaml


@ -1,3 +0,0 @@
#!/bin/sh
kubectl exec -it $(kubectl get pods -l app=csi-cephfsplugin -o=name | head -n 1 | cut -f2 -d"/") -c csi-cephfsplugin bash


@ -1,3 +0,0 @@
#!/bin/sh
kubectl logs $(kubectl get pods -l app=csi-cephfsplugin -o=name | head -n 1) -c csi-cephfsplugin


@ -1,7 +0,0 @@
#!/bin/bash
objects=(cephfsplugin csi-provisioner csi-attacher cephfs-storage-class)
for obj in ${objects[@]}; do
kubectl delete -f "./$obj.yaml"
done


@ -1,4 +0,0 @@
#!/bin/sh
kubectl delete -f ./pod.yaml
kubectl delete -f ./pvc.yaml

deploy/rbd/kubernetes/csi-attacher-rbac.yaml

@ -0,0 +1,37 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-attacher
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-attacher-runner
rules:
- apiGroups: [""]
resources: ["events"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-attacher-role
subjects:
- kind: ServiceAccount
name: csi-attacher
namespace: default
roleRef:
kind: ClusterRole
name: external-attacher-runner
apiGroup: rbac.authorization.k8s.io

deploy/rbd/kubernetes/csi-attacher.yaml

@ -1,87 +0,0 @@
# This YAML file contains RBAC API objects,
# which are necessary to run external csi attacher for cinder.
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-attacher
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-attacher-runner
rules:
- apiGroups: [""]
resources: ["events"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-attacher-role
subjects:
- kind: ServiceAccount
name: csi-attacher
namespace: default
roleRef:
kind: ClusterRole
name: external-attacher-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: csi-attacher
labels:
app: csi-attacher
spec:
selector:
app: csi-attacher
ports:
- name: dummy
port: 12345
---
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
name: csi-attacher
spec:
serviceName: "csi-attacher"
replicas: 1
template:
metadata:
labels:
app: csi-attacher
spec:
serviceAccount: csi-attacher
containers:
- name: csi-attacher
image: quay.io/k8scsi/csi-attacher:v0.2.0
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /var/lib/kubelet/plugins/csi-rbdplugin/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /var/lib/kubelet/plugins/csi-rbdplugin
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/csi-rbdplugin
type: DirectoryOrCreate

deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml

@ -0,0 +1,37 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-nodeplugin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-nodeplugin
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "update"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-nodeplugin
subjects:
- kind: ServiceAccount
name: csi-nodeplugin
namespace: default
roleRef:
kind: ClusterRole
name: csi-nodeplugin
apiGroup: rbac.authorization.k8s.io

deploy/rbd/kubernetes/csi-provisioner-rbac.yaml

@ -0,0 +1,40 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-provisioner-runner
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-provisioner-role
subjects:
- kind: ServiceAccount
name: csi-provisioner
namespace: default
roleRef:
kind: ClusterRole
name: external-provisioner-runner
apiGroup: rbac.authorization.k8s.io

deploy/rbd/kubernetes/csi-provisioner.yaml

@ -1,97 +0,0 @@
# This YAML file contains all API objects that are necessary to run external
# CSI provisioner.
#
# In production, this needs to be in separate files, e.g. service account and
# role and role binding needs to be created once, while stateful set may
# require some tuning.
#
# In addition, mock CSI driver is hardcoded as the CSI driver.
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-provisioner-runner
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-provisioner-role
subjects:
- kind: ServiceAccount
name: csi-provisioner
namespace: default
roleRef:
kind: ClusterRole
name: external-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: csi-provisioner
labels:
app: csi-provisioner
spec:
selector:
app: csi-provisioner
ports:
- name: dummy
port: 12345
---
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
name: csi-provisioner
spec:
serviceName: "csi-provisioner"
replicas: 1
template:
metadata:
labels:
app: csi-provisioner
spec:
serviceAccount: csi-provisioner
containers:
- name: csi-provisioner
image: quay.io/k8scsi/csi-provisioner:v0.2.0
args:
- "--provisioner=csi-rbdplugin"
- "--csi-address=$(ADDRESS)"
- "--v=5"
env:
- name: ADDRESS
value: /var/lib/kubelet/plugins/csi-rbdplugin/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /var/lib/kubelet/plugins/csi-rbdplugin
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/csi-rbdplugin
type: DirectoryOrCreate

deploy/rbd/kubernetes/csi-rbdplugin-attacher.yaml

@ -0,0 +1,45 @@
kind: Service
apiVersion: v1
metadata:
name: csi-rbdplugin-attacher
labels:
app: csi-rbdplugin-attacher
spec:
selector:
app: csi-rbdplugin-attacher
ports:
- name: dummy
port: 12345
---
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
name: csi-rbdplugin-attacher
spec:
serviceName: "csi-rbdplugin-attacher"
replicas: 1
template:
metadata:
labels:
app: csi-rbdplugin-attacher
spec:
serviceAccount: csi-attacher
containers:
- name: csi-rbdplugin-attacher
image: quay.io/k8scsi/csi-attacher:v0.3.0
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /var/lib/kubelet/plugins/csi-rbdplugin/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /var/lib/kubelet/plugins/csi-rbdplugin
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/csi-rbdplugin
type: DirectoryOrCreate

deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml

@ -0,0 +1,46 @@
kind: Service
apiVersion: v1
metadata:
name: csi-rbdplugin-provisioner
labels:
app: csi-rbdplugin-provisioner
spec:
selector:
app: csi-rbdplugin-provisioner
ports:
- name: dummy
port: 12345
---
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
name: csi-rbdplugin-provisioner
spec:
serviceName: "csi-rbdplugin-provisioner"
replicas: 1
template:
metadata:
labels:
app: csi-rbdplugin-provisioner
spec:
serviceAccount: csi-provisioner
containers:
- name: csi-provisioner
image: quay.io/k8scsi/csi-provisioner:v0.3.0
args:
- "--provisioner=csi-rbdplugin"
- "--csi-address=$(ADDRESS)"
- "--v=5"
env:
- name: ADDRESS
value: /var/lib/kubelet/plugins/csi-rbdplugin/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /var/lib/kubelet/plugins/csi-rbdplugin
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/csi-rbdplugin
type: DirectoryOrCreate

deploy/rbd/kubernetes/csi-rbdplugin.yaml

@ -1,46 +1,3 @@
# This YAML defines all API objects to create RBAC roles for csi node plugin.
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-rbdplugin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-rbdplugin
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "update"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-rbdplugin
subjects:
- kind: ServiceAccount
name: csi-rbdplugin
namespace: default
roleRef:
kind: ClusterRole
name: csi-rbdplugin
apiGroup: rbac.authorization.k8s.io
---
# This YAML file contains driver-registrar & csi driver nodeplugin API objects,
# which are necessary to run csi nodeplugin for rbd.
kind: DaemonSet
apiVersion: apps/v1beta2
metadata:
@ -54,11 +11,11 @@ spec:
labels:
app: csi-rbdplugin
spec:
-serviceAccount: csi-rbdplugin
+serviceAccount: csi-nodeplugin
hostNetwork: true
containers:
- name: driver-registrar
-image: quay.io/k8scsi/driver-registrar:v0.2.0
+image: quay.io/k8scsi/driver-registrar:v0.3.0
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
@ -78,7 +35,7 @@ spec:
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
-image: quay.io/cephcsi/rbdplugin:v0.2.0
+image: quay.io/cephcsi/rbdplugin:v0.3.0
args :
- "--nodeid=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"
@ -126,4 +83,4 @@ spec:
path: /sys
- name: lib-modules
hostPath:
path: /lib/modules

deploy/rbd/kubernetes/rbd-secrets.yaml

@ -1,10 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: csi-ceph-secret
namespace: default
data:
#Please note this value is base64 encoded.
# Key value corresponds to a user name defined in ceph cluster
admin: QVFDZUhPMVpJTFBQRFJBQTd6dzNkNzZicGxrdlR3em9vc3lidkE9PQo=
kubernetes: QVFDZDR1MVoxSDI0QnhBQWFxdmZIRnFuMSs0RFZlK1pRZ0ZmUEE9PQo=

deploy/rbd/kubernetes/rbd-storage-class.yaml

@ -1,13 +0,0 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-rbd
provisioner: csi-rbdplugin
parameters:
monitors: 192.168.80.233:6789
pool: kubernetes
csiProvisionerSecretName: csi-ceph-secret
csiProvisionerSecretNamespace: default
imageFormat: "2"
imageFeatures: layering
reclaimPolicy: Delete

docs/deploy-cephfs.md

@ -0,0 +1,109 @@
# CSI CephFS plugin
The CSI CephFS plugin can both provision new CephFS volumes and attach and mount existing ones to workloads.
## Building
The CSI CephFS plugin can be compiled either as a standalone binary or as a Docker image. When compiled as a binary, the result is stored in the `_output/` directory as `cephfsplugin`. When compiled as an image, it is stored in the local Docker image store.
Building binary:
```bash
$ make cephfsplugin
```
Building Docker image:
```bash
$ make image-cephfsplugin
```
## Configuration
**Available command line arguments:**
Option | Default value | Description
------ | ------------- | -----------
`--endpoint` | `unix://tmp/csi.sock` | CSI endpoint, must be a UNIX socket
`--drivername` | `csi-cephfsplugin` | name of the driver (Kubernetes: `provisioner` field in StorageClass must correspond to this value)
`--nodeid` | _empty_ | This node's ID
`--volumemounter` | _empty_ | Default volume mounter. Available options are `kernel` and `fuse`. This is the mount method used when volume parameters don't specify otherwise. If left unspecified, the driver will first probe for `ceph-fuse` in the system's PATH and fall back to the Ceph kernel client if the probe fails.
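For a quick local smoke test of these flags, an invocation of the freshly built binary could look like the sketch below; the node ID is a placeholder, and the endpoint simply reuses the documented default:
```bash
# A minimal sketch, assuming the binary was built with `make cephfsplugin`;
# "node1" is a placeholder node ID, the endpoint is the documented default.
./_output/cephfsplugin \
    --drivername=csi-cephfsplugin \
    --nodeid=node1 \
    --endpoint=unix://tmp/csi.sock \
    --volumemounter=kernel
```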
**Available volume parameters:**
Parameter | Required | Description
--------- | -------- | -----------
`monitors` | yes | Comma separated list of Ceph monitors (e.g. `192.168.100.1:6789,192.168.100.2:6789,192.168.100.3:6789`)
`mounter` | no | Mount method to be used for this volume. Available options are `kernel` for Ceph kernel client and `fuse` for Ceph FUSE driver. Defaults to "default mounter", see command line arguments.
`provisionVolume` | yes | Mode of operation. BOOL value. If `true`, a new CephFS volume will be provisioned. If `false`, an existing CephFS will be used.
`pool` | for `provisionVolume=true` | Ceph pool into which the volume shall be created
`rootPath` | for `provisionVolume=false` | Root path of an existing CephFS volume
`csiProvisionerSecretName`, `csiNodeStageSecretName` | for Kubernetes | name of the Kubernetes Secret object containing Ceph client credentials. Both parameters should have the same value
`csiProvisionerSecretNamespace`, `csiNodeStageSecretNamespace` | for Kubernetes | namespaces of the above Secret objects
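To see how these parameters fit together, here is a hedged StorageClass sketch for the `provisionVolume=true` case; the monitor addresses and pool name are placeholders, and the full manifest lives in `examples/cephfs/storageclass.yaml`:
```bash
# A minimal sketch only; replace the monitors and pool with values from your
# Ceph cluster. kubectl reads the manifest from stdin.
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: csi-cephfsplugin
parameters:
  monitors: 192.168.100.1:6789,192.168.100.2:6789
  provisionVolume: "true"
  pool: cephfs_data
  csiProvisionerSecretName: csi-cephfs-secret
  csiProvisionerSecretNamespace: default
  csiNodeStageSecretName: csi-cephfs-secret
  csiNodeStageSecretNamespace: default
EOF
```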
**Required secrets for `provisionVolume=true`:**
Admin credentials are required for provisioning new volumes:
* `adminID`: ID of an admin client
* `adminKey`: key of the admin client
**Required secrets for `provisionVolume=false`:**
User credentials with access to an existing volume:
* `userID`: ID of a user client
* `userKey`: key of a user client
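Either set of credentials can be loaded into a Kubernetes Secret without hand-encoding base64; a sketch for the `provisionVolume=true` case (the secret name matches `examples/cephfs/secret.yaml`, and the client name is a placeholder):
```bash
# Sketch for the provisionVolume=true case; kubectl base64-encodes the
# literal values itself when building the Secret object.
kubectl create secret generic csi-cephfs-secret \
    --from-literal=adminID=admin \
    --from-literal=adminKey="$(ceph auth get-key client.admin)"
```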
## Deployment with Kubernetes
Requires Kubernetes 1.11
Your Kubernetes cluster must allow privileged pods (i.e. `--allow-privileged` flag must be set to true for both the API server and the kubelet). Moreover, as stated in the [mount propagation docs](https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation), the Docker daemon of the cluster nodes must allow shared mounts.
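On a systemd-based node, one assumed quick way to verify the Docker side of this is to inspect the unit's mount flags; an empty or `shared` value is fine, while `slave` would block mount propagation:
```bash
# Convenience check only, not part of the deployment: prints the MountFlags
# setting of the docker unit, e.g. "MountFlags=" when nothing restrictive is set.
systemctl show docker --property=MountFlags
```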
YAML manifests are located in `deploy/cephfs/kubernetes`.
**Deploy RBACs for sidecar containers and node plugins:**
```bash
$ kubectl create -f csi-attacher-rbac.yaml
$ kubectl create -f csi-provisioner-rbac.yaml
$ kubectl create -f csi-nodeplugin-rbac.yaml
```
These manifests deploy service accounts, cluster roles, and cluster role bindings. They are shared by the RBD and CephFS CSI plugins, as both require the same permissions.
**Deploy CSI sidecar containers:**
```bash
$ kubectl create -f csi-cephfsplugin-attacher.yaml
$ kubectl create -f csi-cephfsplugin-provisioner.yaml
```
Deploys stateful sets for external-attacher and external-provisioner sidecar containers for CSI CephFS.
**Deploy CSI CephFS driver:**
```bash
$ kubectl create -f csi-cephfsplugin.yaml
```
Deploys a daemon set with two containers: CSI driver-registrar and the CSI CephFS driver.
## Verifying the deployment in Kubernetes
After successfully completing the steps above, you should see output similar to this:
```bash
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/csi-cephfsplugin-attacher-0 1/1 Running 0 26s
pod/csi-cephfsplugin-provisioner-0 1/1 Running 0 25s
pod/csi-cephfsplugin-rljcv 2/2 Running 0 24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/csi-cephfsplugin-attacher ClusterIP 10.104.116.218 <none> 12345/TCP 27s
service/csi-cephfsplugin-provisioner ClusterIP 10.101.78.75 <none> 12345/TCP 26s
...
```
You can try deploying a demo pod from `examples/cephfs` to test the deployment further.
### Notes on volume deletion
Volumes that were provisioned dynamically (i.e. `provisionVolume=true`) may also be deleted by the driver, if the user chooses to do so. Otherwise, the driver is not allowed to delete such volumes; attempting to delete them is a no-op.

docs/deploy-rbd.md

@ -0,0 +1,100 @@
# CSI RBD Plugin
The RBD CSI plugin can provision new RBD images and attach and mount them to workloads.
## Building
The CSI RBD plugin can be compiled either as a standalone binary or as a Docker image. When compiled as a binary, the result is stored in the `_output/` directory as `rbdplugin`. When compiled as an image, it is stored in the local Docker image store.
Building binary:
```bash
$ make rbdplugin
```
Building Docker image:
```bash
$ make image-rbdplugin
```
## Configuration
**Available command line arguments:**
Option | Default value | Description
------ | ------------- | -----------
`--endpoint` | `unix://tmp/csi.sock` | CSI endpoint, must be a UNIX socket
`--drivername` | `csi-rbdplugin` | name of the driver (Kubernetes: `provisioner` field in StorageClass must correspond to this value)
`--nodeid` | _empty_ | This node's ID
**Available volume parameters:**
Parameter | Required | Description
--------- | -------- | -----------
`monitors` | yes | Comma separated list of Ceph monitors (e.g. `192.168.100.1:6789,192.168.100.2:6789,192.168.100.3:6789`)
`pool` | yes | Ceph pool into which the RBD image shall be created
`imageFormat` | no | RBD image format. Defaults to `2`. See [man pages](http://docs.ceph.com/docs/mimic/man/8/rbd/#cmdoption-rbd-image-format)
`imageFeatures` | no | RBD image features. Available for `imageFormat=2`. CSI RBD currently supports only `layering` feature. See [man pages](http://docs.ceph.com/docs/mimic/man/8/rbd/#cmdoption-rbd-image-feature)
`csiProvisionerSecretName`, `csiNodePublishSecretName` | for Kubernetes | name of the Kubernetes Secret object containing Ceph client credentials. Both parameters should have the same value
`csiProvisionerSecretNamespace`, `csiNodePublishSecretNamespace` | for Kubernetes | namespaces of the above Secret objects
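A hedged StorageClass sketch shows how these parameters combine; the monitors and pool are placeholders, and `examples/rbd/storageclass.yaml` carries the full version:
```bash
# A minimal sketch only; substitute your own monitors and pool.
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd
provisioner: csi-rbdplugin
parameters:
  monitors: 192.168.100.1:6789,192.168.100.2:6789
  pool: rbd
  imageFormat: "2"
  imageFeatures: layering
  csiProvisionerSecretName: csi-rbd-secret
  csiProvisionerSecretNamespace: default
  csiNodePublishSecretName: csi-rbd-secret
  csiNodePublishSecretNamespace: default
EOF
```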
**Required secrets:**
Admin credentials are required for provisioning new RBD images:
`ADMIN_NAME`: `ADMIN_PASSWORD`. Note that the key of the key-value pair is the name of the client with admin privileges, and the value is its password.
Also note that CSI RBD expects the admin keyring and Ceph config file in `/etc/ceph`.
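Concretely, for a client named `admin`, the secret maps the client name to its key; a sketch (the secret name follows `examples/rbd/secret.yaml`):
```bash
# The entry's key is the Ceph client name, its value the client's key;
# kubectl handles the base64 encoding.
kubectl create secret generic csi-rbd-secret \
    --from-literal=admin="$(ceph auth get-key client.admin)"
```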
## Deployment with Kubernetes
Requires Kubernetes 1.11
Your Kubernetes cluster must allow privileged pods (i.e. `--allow-privileged` flag must be set to true for both the API server and the kubelet). Moreover, as stated in the [mount propagation docs](https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation), the Docker daemon of the cluster nodes must allow shared mounts.
YAML manifests are located in `deploy/rbd/kubernetes`.
**Deploy RBACs for sidecar containers and node plugins:**
```bash
$ kubectl create -f csi-attacher-rbac.yaml
$ kubectl create -f csi-provisioner-rbac.yaml
$ kubectl create -f csi-nodeplugin-rbac.yaml
```
These manifests deploy service accounts, cluster roles, and cluster role bindings. They are shared by the RBD and CephFS CSI plugins, as both require the same permissions.
**Deploy CSI sidecar containers:**
```bash
$ kubectl create -f csi-rbdplugin-attacher.yaml
$ kubectl create -f csi-rbdplugin-provisioner.yaml
```
Deploys stateful sets for external-attacher and external-provisioner sidecar containers for CSI RBD.
**Deploy RBD CSI driver:**
```bash
$ kubectl create -f csi-rbdplugin.yaml
```
Deploys a daemon set with two containers: CSI driver-registrar and the CSI RBD driver.
## Verifying the deployment in Kubernetes
After successfully completing the steps above, you should see output similar to this:
```bash
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/csi-rbdplugin-attacher-0 1/1 Running 0 23s
pod/csi-rbdplugin-fptqr 2/2 Running 0 21s
pod/csi-rbdplugin-provisioner-0 1/1 Running 0 22s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/csi-rbdplugin-attacher ClusterIP 10.109.15.54 <none> 12345/TCP 26s
service/csi-rbdplugin-provisioner ClusterIP 10.104.2.130 <none> 12345/TCP 23s
...
```
You can try deploying a demo pod from `examples/rbd` to test the deployment further.

examples/README.md

@ -0,0 +1,17 @@
## How to test RBD and CephFS plugins with Kubernetes 1.11
Both `rbd` and `cephfs` directories contain `plugin-deploy.sh` and `plugin-teardown.sh` helper scripts. You can use those to help you deploy/tear down RBACs, sidecar containers and the plugin in one go. By default, they look for the YAML manifests in `../../deploy/{rbd,cephfs}/kubernetes`. You can override this path by running `$ ./plugin-deploy.sh /path/to/my/manifests`.
Once the plugin is successfully deployed, you'll need to customize the `storageclass.yaml` and `secret.yaml` manifests to reflect your Ceph cluster setup. Please consult the documentation in `docs/` for information about available parameters.
After configuring the secrets, monitors, etc., you can deploy a testing Pod that mounts an RBD image or CephFS volume:
```bash
$ kubectl create -f secret.yaml
$ kubectl create -f storageclass.yaml
$ kubectl create -f pvc.yaml
$ kubectl create -f pod.yaml
```
Other helper scripts:
* `logs.sh` follows the log output of the plugin container
* `exec-bash.sh` opens a bash shell inside the plugin's container


@ -0,0 +1,27 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: csicephfs-demo-depl
labels:
app: web-server
spec:
replicas: 1
selector:
matchLabels:
app: web-server
template:
metadata:
labels:
app: web-server
spec:
containers:
- name: web-server
image: nginx
volumeMounts:
- name: mypvc
mountPath: /var/lib/www/html
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: csi-cephfs-pvc
readOnly: false

examples/cephfs/exec-bash.sh

@ -0,0 +1,15 @@
#!/bin/bash
CONTAINER_NAME=csi-cephfsplugin
POD_NAME=$(kubectl get pods -l app=$CONTAINER_NAME -o=name | head -n 1)
function get_pod_status() {
echo -n $(kubectl get $POD_NAME -o jsonpath="{.status.phase}")
}
while [[ "$(get_pod_status)" != "Running" ]]; do
sleep 1
echo "Waiting for $POD_NAME (status $(get_pod_status))"
done
kubectl exec -it ${POD_NAME#*/} -c $CONTAINER_NAME bash

examples/cephfs/logs.sh

@ -0,0 +1,15 @@
#!/bin/bash
CONTAINER_NAME=csi-cephfsplugin
POD_NAME=$(kubectl get pods -l app=$CONTAINER_NAME -o=name | head -n 1)
function get_pod_status() {
echo -n $(kubectl get $POD_NAME -o jsonpath="{.status.phase}")
}
while [[ "$(get_pod_status)" != "Running" ]]; do
sleep 1
echo "Waiting for $POD_NAME (status $(get_pod_status))"
done
kubectl logs -f $POD_NAME -c $CONTAINER_NAME

examples/cephfs/plugin-deploy.sh

@ -0,0 +1,15 @@
#!/bin/bash
deployment_base="${1}"
if [[ -z $deployment_base ]]; then
deployment_base="../../deploy/cephfs/kubernetes"
fi
cd "$deployment_base" || exit 1
objects=(csi-attacher-rbac csi-provisioner-rbac csi-nodeplugin-rbac csi-cephfsplugin-attacher csi-cephfsplugin-provisioner csi-cephfsplugin)
for obj in ${objects[@]}; do
kubectl create -f "./$obj.yaml"
done

examples/cephfs/plugin-teardown.sh

@ -0,0 +1,15 @@
#!/bin/bash
deployment_base="${1}"
if [[ -z $deployment_base ]]; then
deployment_base="../../deploy/cephfs/kubernetes"
fi
cd "$deployment_base" || exit 1
objects=(csi-cephfsplugin-attacher csi-cephfsplugin-provisioner csi-cephfsplugin csi-attacher-rbac csi-provisioner-rbac csi-nodeplugin-rbac)
for obj in ${objects[@]}; do
kubectl delete -f "./$obj.yaml"
done

examples/cephfs/pod.yaml

@ -1,14 +1,14 @@
apiVersion: v1
kind: Pod
metadata:
-name: web-server
+name: csicephfs-demo-pod
spec:
containers:
- name: web-server
image: nginx
volumeMounts:
-- mountPath: /var/lib/www/html
-  name: mypvc
+- name: mypvc
+  mountPath: /var/lib/www
volumes:
- name: mypvc
persistentVolumeClaim:

examples/cephfs/secret.yaml

@ -5,9 +5,9 @@ metadata:
namespace: default
data:
# Required if provisionVolume is set to false
-userID: userID-encoded-by-base64
-userKey: userKey-encoded-by-base64
+userID: BASE64-ENCODED-VALUE
+userKey: BASE64-ENCODED-VALUE
# Required if provisionVolume is set to true
-adminID: adminID-encoded-by-base64
-adminKey: adminKey-encoded-by-base64
+adminID: BASE64-ENCODED-VALUE
+adminKey: BASE64-ENCODED-VALUE

examples/cephfs/storageclass.yaml

@ -4,22 +4,27 @@ metadata:
name: csi-cephfs
provisioner: csi-cephfsplugin
parameters:
-monitors: mon1:port,mon2:port
+# Comma separated list of Ceph monitors
+monitors: mon1:port,mon2:port,...
# If set to true, a new volume will be created along with a RADOS user - this requires admin access.
# If set to false, it is assumed the volume already exists and the user is expected to provide
# a rootPath to a cephfs volume and user credentials.
provisionVolume: "true"
-# Required if provisionVolume is set to false
-# rootPath: /path-in-cephfs
-# Required if provisionVolume is set to true
-# pool: cephfs_data
+# Ceph pool into which the volume shall be created
+# Required for provisionVolume: "true"
+pool: cephfs_data
+# Root path of an existing CephFS volume
+# Required for provisionVolume: "false"
+# rootPath: /absolute/path
-# The secret has to contain user and/or admin credentials.
+# The secrets have to contain user and/or Ceph admin credentials.
csiProvisionerSecretName: csi-cephfs-secret
-csiProvisionerSecretNameSpace: default
+csiProvisionerSecretNamespace: default
+csiNodeStageSecretName: csi-cephfs-secret
+csiNodeStageSecretNamespace: default
# (optional) The driver can use either ceph-fuse (fuse) or ceph kernel client (kernel)
# If left out, default volume mounter will be used - this is determined by probing for ceph-fuse

examples/rbd/exec-bash.sh

@ -0,0 +1,15 @@
#!/bin/bash
CONTAINER_NAME=csi-rbdplugin
POD_NAME=$(kubectl get pods -l app=$CONTAINER_NAME -o=name | head -n 1)
function get_pod_status() {
echo -n $(kubectl get $POD_NAME -o jsonpath="{.status.phase}")
}
while [[ "$(get_pod_status)" != "Running" ]]; do
sleep 1
echo "Waiting for $POD_NAME (status $(get_pod_status))"
done
kubectl exec -it ${POD_NAME#*/} -c $CONTAINER_NAME bash

examples/rbd/logs.sh

@ -0,0 +1,15 @@
#!/bin/bash
CONTAINER_NAME=csi-rbdplugin
POD_NAME=$(kubectl get pods -l app=$CONTAINER_NAME -o=name | head -n 1)
function get_pod_status() {
echo -n $(kubectl get $POD_NAME -o jsonpath="{.status.phase}")
}
while [[ "$(get_pod_status)" != "Running" ]]; do
sleep 1
echo "Waiting for $POD_NAME (status $(get_pod_status))"
done
kubectl logs -f $POD_NAME -c $CONTAINER_NAME

examples/rbd/plugin-deploy.sh

@ -0,0 +1,15 @@
#!/bin/bash
deployment_base="${1}"
if [[ -z $deployment_base ]]; then
deployment_base="../../deploy/rbd/kubernetes"
fi
cd "$deployment_base" || exit 1
objects=(csi-attacher-rbac csi-provisioner-rbac csi-nodeplugin-rbac csi-rbdplugin-attacher csi-rbdplugin-provisioner csi-rbdplugin)
for obj in ${objects[@]}; do
kubectl create -f "./$obj.yaml"
done

examples/rbd/plugin-teardown.sh

@ -0,0 +1,15 @@
#!/bin/bash
deployment_base="${1}"
if [[ -z $deployment_base ]]; then
deployment_base="../../deploy/rbd/kubernetes"
fi
cd "$deployment_base" || exit 1
objects=(csi-rbdplugin-attacher csi-rbdplugin-provisioner csi-rbdplugin csi-attacher-rbac csi-provisioner-rbac csi-nodeplugin-rbac)
for obj in ${objects[@]}; do
kubectl delete -f "./$obj.yaml"
done

examples/rbd/pod.yaml

@ -1,14 +1,14 @@
apiVersion: v1
kind: Pod
metadata:
-name: web-server
+name: csirbd-demo-pod
spec:
containers:
- name: web-server
image: nginx
volumeMounts:
-- mountPath: /var/lib/www/html
-  name: mypvc
+- name: mypvc
+  mountPath: /var/lib/www/html
volumes:
- name: mypvc
persistentVolumeClaim:

examples/rbd/pvc.yaml

@ -7,5 +7,5 @@ spec:
- ReadWriteOnce
resources:
requests:
-storage: 5Gi
+storage: 1Gi
storageClassName: csi-rbd

examples/rbd/secret.yaml

@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: csi-rbd-secret
namespace: default
data:
# Key value corresponds to a user name defined in ceph cluster
admin: BASE64-ENCODED-PASSWORD
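When filling in a manifest like the one above by hand, the base64 value can be generated from an existing Ceph keyring, e.g.:
```bash
# Assumes a host with Ceph admin credentials; prints the base64-encoded key
# for client.admin, ready to paste into the Secret manifest.
ceph auth get-key client.admin | base64
```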

examples/rbd/storageclass.yaml

@ -0,0 +1,24 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-rbd
provisioner: csi-rbdplugin
parameters:
# Comma separated list of Ceph monitors
monitors: mon1:port,mon2:port,...
# Ceph pool into which the RBD image shall be created
pool: rbd
# RBD image format. Defaults to "2".
imageFormat: "2"
# RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
imageFeatures: layering
# The secrets have to contain Ceph admin credentials.
csiProvisionerSecretName: csi-rbd-secret
csiProvisionerSecretNamespace: default
csiNodePublishSecretName: csi-rbd-secret
csiNodePublishSecretNamespace: default
reclaimPolicy: Delete