mirror of https://github.com/ceph/ceph-csi.git
synced 2024-11-09 16:00:22 +00:00

updated README, added docs

parent e8ea0aa713
commit af7824cafa

README.md
@@ -1,308 +1,16 @@
# Ceph CSI 0.3.0

[Container Storage Interface (CSI)](https://github.com/container-storage-interface/) driver, provisioner, and attacher for Ceph RBD and CephFS.

## Overview

Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and a Ceph cluster. They allow dynamically provisioning Ceph volumes and attaching them to workloads. The current implementation of the Ceph CSI plugins has been tested in a Kubernetes environment (requires Kubernetes 1.11+), but the code does not rely on any Kubernetes-specific calls (work to make it k8s agnostic is in progress) and should be able to run with any CSI-enabled CO.

For details about configuration and deployment of the RBD and CephFS CSI plugins, see the documentation in `docs/`.

## RBD Plugin

An RBD CSI plugin is available to help simplify storage management.
Once a user creates a PVC that references an RBD storage class, an RBD image and
the corresponding PV object are dynamically created and become ready to be used by
workloads.

### Configuration Requirements

* Secret object with the authentication key for the Ceph cluster
* StorageClass with rbdplugin (the default CSI RBD plugin name) as the provisioner name,
  and information about the Ceph cluster (monitors, pool, etc.)
* Service accounts with the required RBAC permissions

### Feature Status

### 1.9: Alpha

**Important:** The `CSIPersistentVolume` and `MountPropagation`
[feature gates must be enabled starting in 1.9](#enabling-the-alpha-feature-gates).
The API server must also run with the runtime config set to `storage.k8s.io/v1alpha1`.

### Compiling

The CSI RBD plugin can be compiled as a binary file or as a container. When compiled
as a binary file, it is stored in the `_output` folder with the name `rbdplugin`. When compiled as a container,
the resulting image is stored in the local Docker image store.

To compile just the binary file:

```
$ make rbdplugin
```

To build a container:

```
$ make rbdplugin-container
```

By running:

```
$ docker images | grep rbdplugin
```

you should see a line like the following in the output:

```
quay.io/cephcsi/rbdplugin   v0.2.0   76369a8f8528   15 minutes ago   372.5 MB
```

### Testing

#### Prerequisite

##### Enable Mount Propagation in Docker

Comment out `MountFlags=slave` in the Docker systemd service, then restart the Docker service:
```bash
# systemctl daemon-reload
# systemctl restart docker
```

##### Enable Kubernetes Feature Gates

Enable the feature gates `MountPropagation=true,CSIPersistentVolume=true` and the runtime config `storage.k8s.io/v1alpha1=true`.

#### Step 1: Create Secret

```
$ kubectl create -f ./deploy/rbd/kubernetes/rbd-secrets.yaml
```

**Important:** `rbd-secrets.yaml` must be customized to match your Ceph environment.
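
For illustration only, a hedged sketch of the shape such a Secret might take (the name `csi-rbd-secret` and the key value are placeholders, not the canonical contents of `rbd-secrets.yaml`; consult the file in `deploy/rbd/kubernetes/` for the real field names):

```yaml
# Hypothetical example; field names and values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
stringData:
  # key of a Ceph client authorized for the target pool,
  # e.g. the output of: ceph auth get-key client.admin
  admin: AQD9...placeholder
```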

#### Step 2: Create StorageClass

```
$ kubectl create -f ./deploy/rbd/kubernetes/rbd-storage-class.yaml
```

**Important:** `rbd-storage-class.yaml` must be customized to match your Ceph environment.
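
As a rough sketch of the kind of customization involved (the monitor address, pool, and class name below are taken from the example output later in this document; your values will differ, and the shipped `rbd-storage-class.yaml` is the authoritative template):

```yaml
# Hypothetical example; adjust monitors and pool to your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbdv2
provisioner: rbdplugin            # must match the RBD plugin driver name
parameters:
  monitors: 192.168.80.233:6789   # comma-separated list of Ceph monitors
  pool: kubernetes                # pool in which RBD images are created
```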

#### Step 3: Start the CSI RBD plugin

```
$ kubectl create -f ./deploy/rbd/kubernetes/rbdplugin.yaml
```

#### Step 4: Start CSI External Attacher

```
$ kubectl create -f ./deploy/rbd/kubernetes/csi-attacher.yaml
```

#### Step 5: Start CSI External Provisioner

```
$ kubectl create -f ./deploy/rbd/kubernetes/csi-provisioner.yaml
```

**Important:** The deployment YAML files include the required Service Account definitions and
RBAC rules.

#### Step 6: Check the status of the CSI RBD plugin

```
$ kubectl get pods | grep csi
```

The following output should be displayed:

```
NAMESPACE   NAME                  READY     STATUS    RESTARTS   AGE
default     csi-attacher-0        1/1       Running   0          1d
default     csi-rbdplugin-qxqtl   2/2       Running   0          1d
default     csi-provisioner-0     1/1       Running   0          1d
```

#### Step 7: Create PVC

```
$ kubectl create -f ./deploy/rbd/kubernetes/pvc.yaml
```
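
For orientation, a sketch of the kind of claim `pvc.yaml` describes (the name `csi-pvc`, size `5Gi`, and class `rbdv2` mirror the example output in Step 8; the shipped file is authoritative):

```yaml
# Hypothetical example mirroring the sample output in Step 8.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
    - ReadWriteOnce          # RWO, as shown in the PV listing below
  resources:
    requests:
      storage: 5Gi
  storageClassName: rbdv2    # the StorageClass created in Step 2
```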

#### Step 8: Check the status of the provisioned PV

```
$ kubectl get pv
```

The following output should be displayed:

```
NAME                                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM             STORAGECLASS   REASON    AGE
kubernetes-dynamic-pvc-1b19ddf1-0047-11e8-85ab-760f2eed12ea   5Gi        RWO            Delete           Bound     default/csi-pvc   rbdv2                    10s
```

```
$ kubectl describe pv kubernetes-dynamic-pvc-1b19ddf1-0047-11e8-85ab-760f2eed12ea
Name:            kubernetes-dynamic-pvc-1b19ddf1-0047-11e8-85ab-760f2eed12ea
Annotations:     csi.volume.kubernetes.io/volume-attributes={"monitors":"192.168.80.233:6789","pool":"kubernetes"}
                 csiProvisionerIdentity=1516716490787-8081-rbdplugin <------ !!!
                 pv.kubernetes.io/provisioned-by=rbdplugin
StorageClass:    rbdv2 <------ !!!
Status:          Bound <------ !!!
Claim:           default/csi-pvc <------ !!!
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        5Gi
Message:
Source:
    Type:        CSI <------ !!!
```

#### Step 9: Create a test pod

```bash
# kubectl create -f ./deploy/rbd/pod.yaml
```
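
A minimal sketch of what such a test pod might look like (image and mount path are illustrative placeholders; `deploy/rbd/pod.yaml` is the file actually used):

```yaml
# Hypothetical example of a pod consuming the PVC from Step 7.
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-test-pod
spec:
  containers:
    - name: web
      image: nginx            # placeholder image
      volumeMounts:
        - name: rbd-vol
          mountPath: /var/lib/www
  volumes:
    - name: rbd-vol
      persistentVolumeClaim:
        claimName: csi-pvc    # the claim created in Step 7
```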

## CephFS Plugin

A CephFS CSI plugin is available to help simplify storage management.
Once a user creates a PVC that references a CephFS CSI storage class, the corresponding
PV object is dynamically created and becomes ready to be used by workloads.

### Configuration Requirements

* Secret object with the authentication user ID `userID` and key `userKey` for the Ceph cluster
* StorageClass with csi-cephfsplugin (the default CSI CephFS plugin name) as the provisioner name,
  and information about the Ceph cluster (monitors, pool, rootPath, ...)
* Service accounts with the required RBAC permissions

Mounter options: specify whether to use FUSE or the Ceph kernel client for mounting. By default, the plugin will probe for `ceph-fuse`; if this fails, the kernel client will be used instead. The command line argument `--volumemounter=[fuse|kernel]` overrides this behaviour.

StorageClass options (see the sketch after this list):
* `provisionVolume: "bool"`: if set to true, the plugin will provision and mount a new volume. Admin credentials `adminID` and `adminKey` are required in the secret object, since this also creates a dedicated RADOS user used for mounting the volume.
* `rootPath: /path-in-cephfs`: required field if `provisionVolume=false`. CephFS is mounted from the specified path. User credentials `userID` and `userKey` are required in the secret object.
* `mounter: "kernel" or "fuse"`: (optional) per-StorageClass mounter configuration. Overrides the default mounter.
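
A hedged sketch of a StorageClass using these options (the class name comes from the example output later in this section; the monitor address and pool are placeholders, and `cephfs-storage-class.yaml` in the repo is authoritative):

```yaml
# Hypothetical example; adjust monitors and pool for your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: csi-cephfsplugin     # default CSI CephFS plugin name
parameters:
  monitors: 192.168.80.233:6789   # comma-separated Ceph monitors (placeholder address)
  provisionVolume: "true"         # provision a new volume; requires adminID/adminKey in the secret
  pool: cephfs_data               # placeholder pool name
  # mounter: kernel               # optional; overrides the probed default
```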

### Feature Status

### 1.9: Alpha

**Important:** The `CSIPersistentVolume` and `MountPropagation`
[feature gates must be enabled starting in 1.9](#enabling-the-alpha-feature-gates).
The API server must also run with the runtime config set to `storage.k8s.io/v1alpha1`:

* `kube-apiserver` must be launched with `--feature-gates=CSIPersistentVolume=true,MountPropagation=true`
  and `--runtime-config=storage.k8s.io/v1alpha1=true`
* `kube-controller-manager` must be launched with `--feature-gates=CSIPersistentVolume=true`
* `kubelet` must be launched with `--feature-gates=CSIPersistentVolume=true,MountPropagation=true`

### Compiling

The CSI CephFS plugin can be compiled as a binary file or as a container. When compiled
as a binary file, it is stored in the `_output` folder with the name `cephfsplugin`. When compiled as a container,
the resulting image is stored in the local Docker image store.

To compile just the binary file:

```
$ make cephfsplugin
```

To build a container:

```
$ make cephfsplugin-container
```

By running:

```
$ docker images | grep cephfsplugin
```

you should see a line like the following in the output:

```
quay.io/cephcsi/cephfsplugin   v0.2.0   79482e644593   4 minutes ago   305MB
```

### Testing

#### Prerequisite

##### Enable Mount Propagation in Docker

Comment out `MountFlags=slave` in the Docker systemd service, then restart the Docker service:
```bash
# systemctl daemon-reload
# systemctl restart docker
```

##### Enable Kubernetes Feature Gates

Enable the feature gates `MountPropagation=true,CSIPersistentVolume=true` and the runtime config `storage.k8s.io/v1alpha1=true`.

#### Step 1: Create Secret

```
$ kubectl create -f ./deploy/cephfs/kubernetes/secret.yaml
```

**Important:** `secret.yaml` must be customized to match your Ceph environment.
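
For illustration, a sketch of the credential fields this plugin expects in a Secret (all values are placeholders; the field names come from the Configuration Requirements above, and the shipped `secret.yaml` is the authoritative template):

```yaml
# Hypothetical example using stringData so values stay human-readable.
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
stringData:
  userID: myuser               # placeholder user credentials (provisionVolume=false)
  userKey: AQD4...placeholder
  adminID: admin               # placeholder admin credentials (provisionVolume=true)
  adminKey: AQC1...placeholder
```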

#### Step 2: Create StorageClass

```
$ kubectl create -f ./deploy/cephfs/kubernetes/cephfs-storage-class.yaml
```

**Important:** `cephfs-storage-class.yaml` must be customized to match your Ceph environment.

#### Step 3: Start the CSI CephFS plugin

```
$ kubectl create -f ./deploy/cephfs/kubernetes/cephfsplugin.yaml
```

#### Step 4: Start CSI External Attacher

```
$ kubectl create -f ./deploy/cephfs/kubernetes/csi-attacher.yaml
```

#### Step 5: Start CSI External Provisioner

```
$ kubectl create -f ./deploy/cephfs/kubernetes/csi-provisioner.yaml
```

**Important:** The deployment YAML files include the required Service Account definitions and
RBAC rules.

#### Step 6: Check the status of the CSI CephFS plugin

```
$ kubectl get pods | grep csi
csi-attacher-0           1/1       Running   0         6m
csi-cephfsplugin-hmqpk   2/2       Running   0         6m
csi-provisioner-0        1/1       Running   0         6m
```

#### Step 7: Create PVC

```
$ kubectl create -f ./deploy/cephfs/kubernetes/pvc.yaml
```
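
A sketch of the kind of claim involved (the name, size `5Gi`, `RWX` access mode, and class `csi-cephfs` mirror the sample output in Step 8; the shipped `pvc.yaml` is authoritative):

```yaml
# Hypothetical example mirroring the sample output in Step 8.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany           # RWX; CephFS supports shared writers
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-cephfs
```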

#### Step 8: Check the status of the provisioned PV

```
$ kubectl get pv
NAME                                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                    STORAGECLASS   REASON    AGE
kubernetes-dynamic-pv-715cef0b30d811e8   5Gi        RWX            Delete           Bound     default/csi-cephfs-pvc   csi-cephfs               5s
```

```
$ kubectl describe pv kubernetes-dynamic-pv-715cef0b30d811e8
Name:            kubernetes-dynamic-pv-715cef0b30d811e8
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by=csi-cephfsplugin
StorageClass:    csi-cephfs
Status:          Bound
Claim:           default/csi-cephfs-pvc
Reclaim Policy:  Delete
Access Modes:    RWX
Capacity:        5Gi
Message:
Source:
    Type:          CSI (a Container Storage Interface (CSI) volume source)
    Driver:        ReadOnly:  %v

    VolumeHandle:  csi-cephfsplugin
%!(EXTRA string=csi-cephfs-7182b779-30d8-11e8-bf01-5254007d7491, bool=false)Events:  <none>
```

#### Step 9: Create a test pod

```
$ kubectl create -f ./deploy/cephfs/kubernetes/pod.yaml
```
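
A minimal sketch of what such a pod might look like (image and mount path are illustrative; the shipped `pod.yaml` is authoritative). Note that the claim from Step 7 is `ReadWriteMany`, so several pods may share it:

```yaml
# Hypothetical example of a pod sharing the CephFS-backed claim from Step 7.
apiVersion: v1
kind: Pod
metadata:
  name: csi-cephfs-test-pod
spec:
  containers:
    - name: web
      image: nginx             # placeholder image
      volumeMounts:
        - name: cephfs-vol
          mountPath: /var/lib/www
  volumes:
    - name: cephfs-vol
      persistentVolumeClaim:
        claimName: csi-cephfs-pvc   # the RWX claim created in Step 7
```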

For example usage of the RBD and CephFS CSI plugins, see the examples in `examples/`.

## Troubleshooting

Please submit an issue at: [Issues](https://github.com/ceph/ceph-csi/issues)

docs/deploy-cephfs.md (new file)
@@ -0,0 +1,109 @@
# CSI CephFS plugin

The CSI CephFS plugin is able both to provision new CephFS volumes and to attach and mount existing ones to workloads.

## Building

The CSI CephFS plugin can be compiled as a binary file or as a Docker image. When compiled as a binary file, the result is stored in the `_output/` directory with the name `cephfsplugin`. When compiled as an image, it is stored in the local Docker image store.

Building the binary:
```bash
$ make cephfsplugin
```

Building the Docker image:
```bash
$ make image-cephfsplugin
```

## Configuration

**Available command line arguments:**

Option | Default value | Description
------ | ------------- | -----------
`--endpoint` | `unix://tmp/csi.sock` | CSI endpoint, must be a UNIX socket
`--drivername` | `csi-cephfsplugin` | name of the driver (Kubernetes: `provisioner` field in StorageClass must correspond to this value)
`--nodeid` | _empty_ | This node's ID
`--volumemounter` | _empty_ | default volume mounter. Available options are `kernel` and `fuse`. This is the mount method used if volume parameters don't specify otherwise. If left unspecified, the driver will first probe for `ceph-fuse` in the system's path and will choose the Ceph kernel client if probing fails.

**Available volume parameters:**

Parameter | Required | Description
--------- | -------- | -----------
`monitors` | yes | Comma separated list of Ceph monitors (e.g. `192.168.100.1:6789,192.168.100.2:6789,192.168.100.3:6789`)
`mounter` | no | Mount method to be used for this volume. Available options are `kernel` for the Ceph kernel client and `fuse` for the Ceph FUSE driver. Defaults to the "default mounter"; see command line arguments.
`provisionVolume` | yes | Mode of operation. BOOL value. If `true`, a new CephFS volume will be provisioned. If `false`, an existing CephFS will be used.
`pool` | for `provisionVolume=true` | Ceph pool into which the volume shall be created
`rootPath` | for `provisionVolume=false` | Root path of an existing CephFS volume
`csiProvisionerSecretName`, `csiNodeStageSecretName` | for Kubernetes | name of the Kubernetes Secret object containing Ceph client credentials. Both parameters should have the same value
`csiProvisionerSecretNamespace`, `csiNodeStageSecretNamespace` | for Kubernetes | namespaces of the above Secret objects

**Required secrets for `provisionVolume=true`:**
Admin credentials are required for provisioning new volumes:
* `adminID`: ID of an admin client
* `adminKey`: key of the admin client

**Required secrets for `provisionVolume=false`:**
User credentials with access to an existing volume:
* `userID`: ID of a user client
* `userKey`: key of a user client
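
Putting the parameters and secrets together, a hedged sketch of a StorageClass wired to a credentials Secret (only the parameter keys come from the table above; all names and values are placeholders):

```yaml
# Hypothetical example; parameter keys are from the table above, values are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: csi-cephfsplugin
parameters:
  monitors: 192.168.100.1:6789,192.168.100.2:6789
  provisionVolume: "true"
  pool: cephfs_data                          # required when provisionVolume=true
  csiProvisionerSecretName: csi-cephfs-secret
  csiNodeStageSecretName: csi-cephfs-secret  # same value as above
  csiProvisionerSecretNamespace: default
  csiNodeStageSecretNamespace: default
```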

## Deployment with Kubernetes

Requires Kubernetes 1.11.

Your Kubernetes cluster must allow privileged pods (i.e. the `--allow-privileged` flag must be set to true for both the API server and the kubelet). Moreover, as stated in the [mount propagation docs](https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation), the Docker daemon of the cluster nodes must allow shared mounts.

YAML manifests are located in `deploy/cephfs/kubernetes`.

**Deploy RBACs for sidecar containers and node plugins:**

```bash
$ kubectl create -f csi-attacher-rbac.yaml
$ kubectl create -f csi-provisioner-rbac.yaml
$ kubectl create -f csi-nodeplugin-rbac.yaml
```

These manifests deploy service accounts, cluster roles and cluster role bindings. They are shared by the RBD and CephFS CSI plugins, as both require the same permissions.

**Deploy CSI sidecar containers:**

```bash
$ kubectl create -f csi-cephfsplugin-attacher.yaml
$ kubectl create -f csi-cephfsplugin-provisioner.yaml
```

This deploys stateful sets for the external-attacher and external-provisioner sidecar containers for CSI CephFS.

**Deploy the CSI CephFS driver:**

```bash
$ kubectl create -f csi-cephfsplugin.yaml
```

This deploys a daemon set with two containers: the CSI driver-registrar and the CSI CephFS driver.

## Verifying the deployment in Kubernetes

After successfully completing the steps above, you should see output similar to this:
```bash
$ kubectl get all
NAME                                 READY     STATUS    RESTARTS   AGE
pod/csi-cephfsplugin-attacher-0      1/1       Running   0          26s
pod/csi-cephfsplugin-provisioner-0   1/1       Running   0          25s
pod/csi-cephfsplugin-rljcv           2/2       Running   0          24s

NAME                                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/csi-cephfsplugin-attacher      ClusterIP   10.104.116.218   <none>        12345/TCP   27s
service/csi-cephfsplugin-provisioner   ClusterIP   10.101.78.75     <none>        12345/TCP   26s

...
```

You can try deploying a demo pod from `examples/cephfs` to test the deployment further.

### Notes on volume deletion

Volumes that were provisioned dynamically (i.e. with `provisionVolume=true`) may also be deleted by the driver, if the user chooses to do so. Otherwise, the driver is forbidden from deleting such volumes; attempting to delete them is a no-op.

docs/deploy-rbd.md (new file)
@@ -0,0 +1,100 @@
# CSI RBD Plugin

The RBD CSI plugin is able to provision new RBD images and to attach and mount them to workloads.

## Building

The CSI RBD plugin can be compiled as a binary file or as a Docker image. When compiled as a binary file, the result is stored in the `_output/` directory with the name `rbdplugin`. When compiled as an image, it is stored in the local Docker image store.

Building the binary:
```bash
$ make rbdplugin
```

Building the Docker image:
```bash
$ make image-rbdplugin
```

## Configuration

**Available command line arguments:**

Option | Default value | Description
------ | ------------- | -----------
`--endpoint` | `unix://tmp/csi.sock` | CSI endpoint, must be a UNIX socket
`--drivername` | `csi-rbdplugin` | name of the driver (Kubernetes: `provisioner` field in StorageClass must correspond to this value)
`--nodeid` | _empty_ | This node's ID

**Available volume parameters:**

Parameter | Required | Description
--------- | -------- | -----------
`monitors` | yes | Comma separated list of Ceph monitors (e.g. `192.168.100.1:6789,192.168.100.2:6789,192.168.100.3:6789`)
`pool` | yes | Ceph pool into which the RBD image shall be created
`imageFormat` | no | RBD image format. Defaults to `2`. See the [man pages](http://docs.ceph.com/docs/mimic/man/8/rbd/#cmdoption-rbd-image-format)
`imageFeatures` | no | RBD image features. Available for `imageFormat=2`. CSI RBD currently supports only the `layering` feature. See the [man pages](http://docs.ceph.com/docs/mimic/man/8/rbd/#cmdoption-rbd-image-feature)
`csiProvisionerSecretName`, `csiNodePublishSecretName` | for Kubernetes | name of the Kubernetes Secret object containing Ceph client credentials. Both parameters should have the same value
`csiProvisionerSecretNamespace`, `csiNodePublishSecretNamespace` | for Kubernetes | namespaces of the above Secret objects

**Required secrets:**
Admin credentials are required for provisioning new RBD images:
`ADMIN_NAME`: `ADMIN_PASSWORD` (note that the key of the key-value pair is the name of the client with admin privileges, and the value is its password)

Also note that CSI RBD expects the admin keyring and the Ceph config file in `/etc/ceph`.
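
A hedged sketch tying the pieces together (all names and values are placeholders; only the parameter keys and the secret convention come from the text above):

```yaml
# Hypothetical example; the secret key is the admin client name, the value its password.
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
stringData:
  admin: AQD9...placeholder
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd
provisioner: csi-rbdplugin
parameters:
  monitors: 192.168.100.1:6789,192.168.100.2:6789
  pool: rbd                                   # placeholder pool
  imageFormat: "2"
  imageFeatures: layering                     # only supported feature
  csiProvisionerSecretName: csi-rbd-secret
  csiNodePublishSecretName: csi-rbd-secret    # same value as above
  csiProvisionerSecretNamespace: default
  csiNodePublishSecretNamespace: default
```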

## Deployment with Kubernetes

Requires Kubernetes 1.11.

Your Kubernetes cluster must allow privileged pods (i.e. the `--allow-privileged` flag must be set to true for both the API server and the kubelet). Moreover, as stated in the [mount propagation docs](https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation), the Docker daemon of the cluster nodes must allow shared mounts.

YAML manifests are located in `deploy/rbd/kubernetes`.

**Deploy RBACs for sidecar containers and node plugins:**

```bash
$ kubectl create -f csi-attacher-rbac.yaml
$ kubectl create -f csi-provisioner-rbac.yaml
$ kubectl create -f csi-nodeplugin-rbac.yaml
```

These manifests deploy service accounts, cluster roles and cluster role bindings. They are shared by the RBD and CephFS CSI plugins, as both require the same permissions.

**Deploy CSI sidecar containers:**

```bash
$ kubectl create -f csi-rbdplugin-attacher.yaml
$ kubectl create -f csi-rbdplugin-provisioner.yaml
```

This deploys stateful sets for the external-attacher and external-provisioner sidecar containers for CSI RBD.

**Deploy the RBD CSI driver:**

```bash
$ kubectl create -f csi-rbdplugin.yaml
```

This deploys a daemon set with two containers: the CSI driver-registrar and the CSI RBD driver.

## Verifying the deployment in Kubernetes

After successfully completing the steps above, you should see output similar to this:

```bash
$ kubectl get all
NAME                              READY     STATUS    RESTARTS   AGE
pod/csi-rbdplugin-attacher-0      1/1       Running   0          23s
pod/csi-rbdplugin-fptqr           2/2       Running   0          21s
pod/csi-rbdplugin-provisioner-0   1/1       Running   0          22s

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
service/csi-rbdplugin-attacher      ClusterIP   10.109.15.54   <none>        12345/TCP   26s
service/csi-rbdplugin-provisioner   ClusterIP   10.104.2.130   <none>        12345/TCP   23s

...
```

You can try deploying a demo pod from `examples/rbd` to test the deployment further.