# ceph-csi-cephfs
The ceph-csi-cephfs chart adds cephFS volume support to your cluster.
## Install from release repo
Add the chart repository so Helm can install charts from it:
```console
helm repo add ceph-csi https://ceph.github.io/csi-charts
```
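To verify that the repository was added and to list the chart versions it provides, you can optionally run (Helm 3):
```console
helm repo update
helm search repo ceph-csi
```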
## Install from local Chart
Change into the directory where all the charts are present:
```console
cd charts
```
**Note:** The `charts` directory is located in the root of the ceph-csi project.
### Install Chart
To install the Chart into your Kubernetes cluster
- For helm 2.x
```bash
helm install --namespace "ceph-csi-cephfs" --name "ceph-csi-cephfs" ceph-csi/ceph-csi-cephfs
```
- For helm 3.x
Create the namespace where Helm should install the components with
```bash
kubectl create namespace ceph-csi-cephfs
```
Run the installation
```bash
helm install --namespace "ceph-csi-cephfs" "ceph-csi-cephfs" ceph-csi/ceph-csi-cephfs
```
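If you are installing from the local `charts` directory instead of the release repo, point Helm at the chart path; the sketch below assumes the chart is located at `./ceph-csi-cephfs` relative to your current directory:
```bash
helm install --namespace "ceph-csi-cephfs" "ceph-csi-cephfs" ./ceph-csi-cephfs
```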
After installation succeeds, you can get the status of the Chart:
```bash
helm status --namespace "ceph-csi-cephfs" "ceph-csi-cephfs"
```
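As an additional check, you can list the pods created in the namespace with plain kubectl:
```bash
kubectl get pods --namespace ceph-csi-cephfs
```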
### Upgrade Chart
If you want to upgrade your Chart, use the following commands.
```bash
helm repo update ceph-csi
helm upgrade --namespace ceph-csi-cephfs ceph-csi-cephfs ceph-csi/ceph-csi-cephfs
```
To upgrade to a specific version, provide the `--version` flag with the desired
version.
**Do not forget to include your values** if they differ from the default values.
We recommend not using `--reuse-values` in case new defaults have been added, and
comparing your currently used values with the new default values.
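For example, a sketch of upgrading to a pinned chart version while re-applying your own overrides; the version placeholder and the `values.yaml` file are assumptions for your environment:
```bash
helm upgrade --namespace ceph-csi-cephfs ceph-csi-cephfs ceph-csi/ceph-csi-cephfs \
  --version <x.y.z> -f values.yaml
```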
#### Known Issues Upgrading
- When upgrading to version >=3.7.0, you might encounter an error that the
CephFS CSI Driver cannot be updated. Please refer to
[issue #3397](https://github.com/ceph/ceph-csi/issues/3397) for more details.
This is due to the CSIDriver resource not being updatable. To work around this
you can delete the CSIDriver object by running:
```bash
kubectl delete csidriver cephfs.csi.ceph.com
```
Then rerun your `helm upgrade` command.
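If you are unsure whether your cluster is affected, you can inspect the existing CSIDriver object before deleting it:
```bash
kubectl get csidriver cephfs.csi.ceph.com -o yaml
```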
### Delete Chart
If you want to delete your Chart, use this command
- For helm 2.x
```bash
helm delete --purge "ceph-csi-cephfs"
```
- For helm 3.x
```bash
helm uninstall "ceph-csi-cephfs" --namespace "ceph-csi-cephfs"
```
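You can confirm the release was removed by listing the releases in the namespace:
```bash
helm list --namespace ceph-csi-cephfs
```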
If you want to delete the namespace, use this command
```bash
kubectl delete namespace ceph-csi-cephfs
```
### Configuration
The following table lists the configurable parameters of the ceph-csi-cephfs
charts and their default values.
| Parameter | Description | Default |
| ---------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------- |
| `rbac.create` | Specifies whether RBAC resources should be created | `true` |
| `serviceAccounts.nodeplugin.create` | Specifies whether a nodeplugin ServiceAccount should be created | `true` |
| `serviceAccounts.nodeplugin.name` | The name of the nodeplugin ServiceAccount to use. If not set and create is true, a name is generated using the fullname | "" |
| `serviceAccounts.provisioner.create` | Specifies whether a provisioner ServiceAccount should be created | `true` |
| `serviceAccounts.provisioner.name`              | The name of the provisioner ServiceAccount to use. If not set and create is true, a name is generated using the fullname                              | "" |
| `csiConfig` | Configuration for the CSI to connect to the cluster | [] |
| `commonLabels` | Labels to apply to all resources | `{}` |
| `logLevel` | Set logging level for csi containers. Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity. | `5` |
| `sidecarLogLevel` | Set logging level for csi sidecar containers. Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity. | `1` |
| `nodeplugin.name` | Specifies the nodeplugin name | `nodeplugin` |
| `nodeplugin.updateStrategy`                     | Specifies the update strategy. If you are using the ceph-fuse client, set this value to `OnDelete`                                                    | `RollingUpdate` |
| `nodeplugin.priorityClassName`                  | Set a user-created priorityClassName for the csi plugin pods. The default `system-node-critical` is the highest priority                              | `system-node-critical` |
| `nodeplugin.imagePullSecrets` | Specifies imagePullSecrets for containers | `[]` |
| `nodeplugin.profiling.enabled` | Specifies whether profiling should be enabled | `false` |
| `nodeplugin.registrar.image.repository` | Node-Registrar image repository URL | `registry.k8s.io/sig-storage/csi-node-driver-registrar` |
| `nodeplugin.registrar.image.tag` | Image tag | `v2.11.1` |
| `nodeplugin.registrar.image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `nodeplugin.plugin.image.repository` | Nodeplugin image repository URL | `quay.io/cephcsi/cephcsi` |
| `nodeplugin.plugin.image.tag` | Image tag | `canary` |
| `nodeplugin.plugin.image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `nodeplugin.podSecurityContext` | Specifies pod-level security context. | `{}` |
| `nodeplugin.nodeSelector` | Kubernetes `nodeSelector` to add to the Daemonset | `{}` |
| `nodeplugin.tolerations` | List of Kubernetes `tolerations` to add to the Daemonset | `{}` |
| `nodeplugin.forcecephkernelclient` | Set to true to enable Ceph Kernel clients on kernel < 4.17 which support quotas | `true` |
| `nodeplugin.kernelmountoptions`                 | Comma separated string of mount options accepted by the cephfs kernel mounter                                                                         | `""` |
| `nodeplugin.fusemountoptions`                   | Comma separated string of mount options accepted by the ceph-fuse mounter                                                                             | `""` |
| `nodeplugin.podSecurityPolicy.enabled` | If true, create & use [Pod Security Policy resources ](https://kubernetes.io/docs/concepts/policy/pod-security-policy/ ). | `false` |
| `provisioner.name` | Specifies the name of provisioner | `provisioner` |
| `provisioner.replicaCount` | Specifies the replicaCount | `3` |
| `provisioner.timeout` | GRPC timeout for waiting for creation or deletion of a volume | `60s` |
| `provisioner.clustername` | Cluster name to set on the subvolume | "" |
| `provisioner.setmetadata` | Set metadata on volume | `true` |
| `provisioner.priorityClassName`                 | Set a user-created priorityClassName for the csi provisioner pods. The default `system-cluster-critical` is lower priority than `system-node-critical` | `system-cluster-critical` |
| `provisioner.enableHostNetwork` | Specifies whether hostNetwork is enabled for provisioner pod. | `false` |
| `provisioner.imagePullSecrets` | Specifies imagePullSecrets for containers | `[]` |
| `provisioner.profiling.enabled` | Specifies whether profiling should be enabled | `false` |
| `provisioner.provisioner.image.repository` | Specifies the csi-provisioner image repository URL | `registry.k8s.io/sig-storage/csi-provisioner` |
| `provisioner.provisioner.image.tag` | Specifies image tag | `v5.0.1` |
| `provisioner.provisioner.image.pullPolicy` | Specifies pull policy | `IfNotPresent` |
| `provisioner.provisioner.image.extraArgs` | Specifies extra arguments for the provisioner sidecar | `[]` |
| `provisioner.resizer.image.repository` | Specifies the csi-resizer image repository URL | `registry.k8s.io/sig-storage/csi-resizer` |
| `provisioner.resizer.image.tag` | Specifies image tag | `v1.11.1` |
| `provisioner.resizer.image.pullPolicy` | Specifies pull policy | `IfNotPresent` |
| `provisioner.resizer.image.extraArgs` | Specifies extra arguments for the resizer sidecar | `[]` |
| `provisioner.resizer.name` | Specifies the name of csi-resizer sidecar | `resizer` |
| `provisioner.resizer.enabled` | Specifies whether resizer sidecar is enabled | `true` |
| `provisioner.snapshotter.image.repository` | Specifies the csi-snapshotter image repository URL | `registry.k8s.io/sig-storage/csi-snapshotter` |
| `provisioner.snapshotter.image.tag` | Specifies image tag | `v8.0.1` |
| `provisioner.snapshotter.image.pullPolicy` | Specifies pull policy | `IfNotPresent` |
| `provisioner.snapshotter.image.extraArgs` | Specifies extra arguments for the snapshotter sidecar | `[]` |
| `provisioner.snapshotter.args.enableVolumeGroupSnapshots` | enables the creation of volume group snapshots | `false` |
| `provisioner.nodeSelector` | Specifies the node selector for provisioner deployment | `{}` |
| `provisioner.tolerations` | Specifies the tolerations for provisioner deployment | `{}` |
| `provisioner.affinity` | Specifies the affinity for provisioner deployment | `{}` |
| `provisioner.podSecurityPolicy.enabled` | Specifies whether podSecurityPolicy is enabled | `false` |
| `provisioner.podSecurityContext` | Specifies pod-level security context. | `{}` |
| `provisionerSocketFile` | The filename of the provisioner socket | `csi-provisioner.sock` |
| `pluginSocketFile` | The filename of the plugin socket | `csi.sock` |
| `readAffinity.enabled` | Enable read affinity for CephFS subvolumes. Recommended to set to true if running kernel 5.8 or newer. | `false` |
| `readAffinity.crushLocationLabels` | Define which node labels to use as CRUSH location. This should correspond to the values set in the CRUSH map. For more information, click [here ](https://github.com/ceph/ceph-csi/blob/v3.9.0/docs/deploy-cephfs.md#read-affinity-using-crush-locations-for-cephfs-subvolumes )| `[]` |
| `kubeletDir` | Kubelet working directory | `/var/lib/kubelet` |
| `driverName` | Name of the csi-driver | `cephfs.csi.ceph.com` |
| `configMapName` | Name of the configmap which contains cluster configuration | `ceph-csi-config` |
| `externallyManagedConfigmap` | Specifies the use of an externally provided configmap | `false` |
| `cephConfConfigMapName` | Name of the configmap which contains ceph.conf configuration | `ceph-config` |
| `storageClass.create` | Specifies whether the StorageClass should be created | `false` |
| `storageClass.name` | Specifies the cephFS StorageClass name | `csi-cephfs-sc` |
| `storageClass.annotations` | Specifies the annotations for the cephFS storageClass | `[]` |
| `storageClass.clusterID` | String representing a Ceph cluster to provision storage from | `<cluster-ID>` |
| `storageClass.fsName` | CephFS filesystem name into which the volume shall be created | `myfs` |
| `storageClass.pool` | Ceph pool into which volume data shall be stored | `""` |
| `storageClass.fuseMountOptions` | Comma separated string of Ceph-fuse mount options | `""` |
| `storageClass.kernelMountOptions`               | Comma separated string of CephFS kernel mount options                                                                                                  | `""` |
| `storageClass.mounter` | The driver can use either ceph-fuse (fuse) or ceph kernelclient (kernel) | `""` |
| `storageClass.volumeNamePrefix` | Prefix to use for naming subvolumes | `""` |
| `storageClass.provisionerSecret`                | Specifies the provisioner secret name. The secret must contain user and/or Ceph admin credentials.                                                     | `csi-cephfs-secret` |
| `storageClass.provisionerSecretNamespace` | Specifies the provisioner secret namespace | `""` |
| `storageClass.controllerExpandSecret` | Specifies the controller expand secret name | `csi-cephfs-secret` |
| `storageClass.controllerExpandSecretNamespace` | Specifies the controller expand secret namespace | `""` |
| `storageClass.nodeStageSecret` | Specifies the node stage secret name | `csi-cephfs-secret` |
| `storageClass.nodeStageSecretNamespace` | Specifies the node stage secret namespace | `""` |
| `storageClass.reclaimPolicy` | Specifies the reclaim policy of the StorageClass | `Delete` |
| `storageClass.allowVolumeExpansion` | Specifies whether volume expansion should be allowed | `true` |
| `storageClass.mountOptions` | Specifies the mount options | `[]` |
| `secret.create` | Specifies whether the secret should be created | `false` |
| `secret.name` | Specifies the cephFS secret name | `csi-cephfs-secret` |
| `secret.adminID` | Specifies the admin ID of the cephFS secret | `<plaintext ID>` |
| `secret.adminKey` | Specifies the key that corresponds to the adminID | `<Ceph auth key corresponding to ID above>` |
| `selinuxMount` | Mount the host /etc/selinux inside pods to support selinux-enabled filesystems | `true` |
| `CSIDriver.fsGroupPolicy` | Specifies the fsGroupPolicy for the CSI driver object | `File` |
| `CSIDriver.seLinuxMount` | Specify for efficient SELinux volume relabeling | `true` |
| `instanceID` | Unique ID distinguishing this instance of Ceph CSI among other instances, when sharing Ceph clusters across CSI instances for provisioning. | ` ` |
### Command Line
You can pass settings to the chart with Helm command-line parameters.
Specify each parameter using the `--set key=value` argument to `helm install`.
For example:
```bash
helm install --namespace "ceph-csi-cephfs" "ceph-csi-cephfs" ceph-csi/ceph-csi-cephfs \
  --set configMapName=ceph-csi-config \
  --set provisioner.podSecurityPolicy.enabled=true
```
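For list-valued settings such as `csiConfig`, a values file passed with `-f` is usually easier than `--set`. The snippet below is a minimal sketch, not a complete configuration; the cluster ID, monitor addresses, and credentials are placeholders you must replace with values from your own Ceph cluster:
```bash
# Write a minimal overrides file (placeholder values shown).
cat > values.yaml <<EOF
csiConfig:
  - clusterID: "<cluster-id>"
    monitors:
      - "<monitor1-address>:6789"
      - "<monitor2-address>:6789"
storageClass:
  create: true
  clusterID: "<cluster-id>"
  fsName: myfs
secret:
  create: true
  adminID: "<plaintext ID>"
  adminKey: "<Ceph auth key corresponding to ID above>"
EOF
# Install (or upgrade) the chart with the overrides applied.
helm install --namespace "ceph-csi-cephfs" "ceph-csi-cephfs" ceph-csi/ceph-csi-cephfs -f values.yaml
```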