# CSI CephFS plugin
The CSI CephFS plugin is able to both provision new CephFS volumes and attach and mount existing ones to workloads.
## Building
The CSI plugin can be compiled either as a binary file or as a Docker image.
When compiled as a binary file, the result is stored in the `_output/` directory with the name `cephcsi`.
When compiled as an image, it is stored in the local Docker image store with the name `cephcsi`.
Building binary:

```bash
make cephcsi
```

Building Docker image:

```bash
make image-cephcsi
```
## Configuration
NOTE: To make CephFS CSI driver version >= 1.1.0 work with a Ceph v14.2.2
cluster (not deployed by Rook), you need to add the following settings to the
`[mgr]` section of the `ceph.conf` used by the Ceph manager daemon, and restart the
Ceph manager daemon:

```
[mgr]
client mount uid = 0
client mount gid = 0
```

This is due to an issue in Ceph v14.2.2 that should be resolved in v14.2.3.
Available command line arguments:
| Option | Default value | Description |
| ------ | ------------- | ----------- |
| `--endpoint` | `unix://tmp/csi.sock` | CSI endpoint, must be a UNIX socket |
| `--drivername` | `cephfs.csi.ceph.com` | Name of the driver (Kubernetes: `provisioner` field in StorageClass must correspond to this value) |
| `--nodeid` | empty | This node's ID |
| `--type` | empty | Driver type: `rbd` or `cephfs` |
| `--volumemounter` | empty | Default volume mounter. Available options are `kernel` and `fuse`. This is the mount method used if volume parameters don't specify otherwise. If left unspecified, the driver will first probe for `ceph-fuse` in the system's path and will fall back to the Ceph kernel client if probing fails. |
| `--mountcachedir` | empty | Directory in which volume mount cache info is saved. If left unspecified, the driver will not record mount info; otherwise it saves mount info and remounts cached volumes when the driver restarts. |
| `--instanceid` | "default" | Unique ID distinguishing this instance of Ceph CSI among other instances, when sharing Ceph clusters across CSI instances for provisioning |
| `--pluginpath` | "/var/lib/kubelet/plugins/" | The location of the cephcsi plugin on the host |
| `--metadatastorage` | empty | Points to where metadata about volumes provisioned by older plugin versions (1.0.0 or older) is kept, either as a file or as a Kubernetes ConfigMap (`node` or `k8s_configmap` respectively) |
| `--pidlimit` | `0` | Configure the PID limit in cgroups. The container runtime can restrict the number of processes/tasks, which can cause problems while provisioning (or deleting) a large number of volumes. A value of `-1` configures the limit to the maximum, `0` does not configure limits at all. |
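For orientation, these flags are normally supplied as container arguments in the plugin manifests. The sketch below is a hypothetical excerpt only; the image tag, socket path, and flag values are assumptions, not the shipped defaults:

```yaml
# Hypothetical excerpt from a csi-cephfsplugin container spec;
# image tag and flag values are illustrative assumptions.
containers:
  - name: csi-cephfsplugin
    image: quay.io/cephcsi/cephcsi:v1.1.0   # assumed image tag
    args:
      - "--type=cephfs"
      - "--endpoint=unix:///csi/csi.sock"   # assumed socket path
      - "--drivername=cephfs.csi.ceph.com"
      - "--nodeid=$(NODE_ID)"               # NODE_ID assumed to be injected via the downward API
      - "--pidlimit=-1"                     # -1 raises the PID limit to the maximum
```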
Available environmental variables:

- `KUBERNETES_CONFIG_PATH`: if you use `k8s_configmap` as metadata store, specify the path of your k8s config file (if not specified, the plugin will assume you're running it inside a k8s cluster and find the config itself).
- `POD_NAMESPACE`: if you use `k8s_configmap` as metadata store, `POD_NAMESPACE` is used to define in which namespace you want the configmaps to be stored.
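A minimal sketch of how these variables could be wired into the plugin container when using the `k8s_configmap` metadata store; the kubeconfig path and the downward-API wiring are assumptions, not a required layout:

```yaml
# Illustrative container env settings for the k8s_configmap metadata store;
# the kubeconfig path is a placeholder.
env:
  - name: KUBERNETES_CONFIG_PATH
    value: /etc/kubernetes/kubeconfig     # omit when running inside the cluster
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace     # namespace where the configmaps are stored
```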
Available volume parameters:

| Parameter | Required | Description |
| --------- | -------- | ----------- |
| `clusterID` | yes | String representing a Ceph cluster, must be unique across all Ceph clusters in use for provisioning, cannot be greater than 36 bytes in length, and should remain immutable for the lifetime of the Ceph cluster in use |
| `fsName` | yes | CephFS filesystem name into which the volume shall be created |
| `mounter` | no | Mount method to be used for this volume. Available options are `kernel` for the Ceph kernel client and `fuse` for the Ceph FUSE driver. Defaults to the "default mounter", see command line arguments. |
| `pool` | no | Ceph pool into which volume data shall be stored |
| `csi.storage.k8s.io/provisioner-secret-name`, `csi.storage.k8s.io/node-stage-secret-name` | for Kubernetes | Name of the Kubernetes Secret object containing Ceph client credentials. Both parameters should have the same value |
| `csi.storage.k8s.io/provisioner-secret-namespace`, `csi.storage.k8s.io/node-stage-secret-namespace` | for Kubernetes | Namespaces of the above Secret objects |
NOTE: An accompanying CSI configuration file needs to be provided to the running pods. Refer to Creating CSI configuration for more information.
NOTE: A suggested way to populate and retain uniqueness of the clusterID is to use the output of `ceph fsid` of the Ceph cluster to be used for provisioning.
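As an illustration of how these parameters fit together, a StorageClass might look like the sketch below. The clusterID, filesystem name, and Secret names are placeholders of my choosing, not values shipped with the project:

```yaml
# Hypothetical StorageClass for CephFS provisioning; all values are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: b9127830-b0cc-4e34-aa47-9d1a2e9949a8   # e.g. the output of `ceph fsid`
  fsName: myfs
  # mounter: kernel                                 # optional, overrides the default mounter
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
```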
Required secrets for provisioning: Admin credentials are required for provisioning new volumes

- `adminID`: ID of an admin client
- `adminKey`: key of the admin client
Required secrets for statically provisioned volumes: User credentials with access to an existing volume

- `userID`: ID of a user client
- `userKey`: key of a user client
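A sketch of a Secret holding these credentials; the Secret name and all values are placeholders, and `stringData` is used here only so the values don't have to be base64-encoded by hand:

```yaml
# Hypothetical Secret with Ceph client credentials; all values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: default
stringData:
  # used for dynamically provisioned volumes
  adminID: admin
  adminKey: "<Ceph auth key of the admin client>"
  # used for statically provisioned volumes
  userID: cephfs-user
  userKey: "<Ceph auth key of the user client>"
```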
Notes on volume size: when provisioning a new volume, the `max_bytes` quota attribute for this volume will be set to the requested volume size (see Ceph quota documentation). A request for a zero-sized volume means no quota attribute will be set.
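For example, a claim requesting 5 GiB results in a 5 GiB `max_bytes` quota on the provisioned volume. The PVC below is a sketch that assumes the placeholder StorageClass name used earlier:

```yaml
# Hypothetical PVC; the StorageClass name matches the earlier sketch.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi              # becomes the max_bytes quota on the CephFS volume
  storageClassName: csi-cephfs-sc
```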
## Deployment with Kubernetes
Requires Kubernetes 1.13+
If your cluster version is 1.13.x, please use the cephfs v1.13 templates; otherwise use the cephfs v1.14+ templates.
Your Kubernetes cluster must allow privileged pods (i.e. the `--allow-privileged` flag must be set to true for both the API server and the kubelet). Moreover, as stated in the mount propagation docs, the Docker daemon of the cluster nodes must allow shared mounts.
YAML manifests are located in `deploy/cephfs/kubernetes`.
Deploy RBACs for sidecar containers and node plugins:

```bash
kubectl create -f csi-provisioner-rbac.yaml
kubectl create -f csi-nodeplugin-rbac.yaml
```
Those manifests deploy service accounts, cluster roles and cluster role bindings. These are shared for both RBD and CephFS CSI plugins, as they require the same permissions.
Deploy ConfigMap for CSI plugins:

```bash
kubectl create -f csi-config-map.yaml
```
The ConfigMap deploys an empty CSI configuration that is mounted as a volume within the Ceph CSI plugin pods. To add a specific Ceph cluster's configuration details, refer to Creating CSI configuration for more information.
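To give an idea of what a populated configuration might look like, here is a sketch; the ConfigMap name, key, cluster ID, and monitor addresses are assumptions, and Creating CSI configuration remains the authoritative reference for the format:

```yaml
# Hypothetical populated CSI ConfigMap; all values are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "b9127830-b0cc-4e34-aa47-9d1a2e9949a8",
        "monitors": ["192.168.1.1:6789", "192.168.1.2:6789"]
      }
    ]
```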
Deploy CSI sidecar containers:

```bash
kubectl create -f csi-cephfsplugin-provisioner.yaml
```
Deploys a stateful set of the provisioner, which includes the `external-provisioner` and `external-attacher` sidecars for CSI CephFS.
Deploy CSI CephFS driver:

```bash
kubectl create -f csi-cephfsplugin.yaml
```
Deploys a daemon set with two containers: CSI node-driver-registrar and the CSI CephFS driver.
### Verifying the deployment in Kubernetes
After successfully completing the steps above, you should see output similar to this:
```console
$ kubectl get all
NAME                                 READY   STATUS    RESTARTS   AGE
pod/csi-cephfsplugin-provisioner-0   3/3     Running   0          25s
pod/csi-cephfsplugin-rljcv           2/2     Running   0          24s

NAME                                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
service/csi-cephfsplugin-provisioner   ClusterIP   10.101.78.75   <none>        12345/TCP   26s

...
```
Once the CSI plugin configuration is updated with details from a Ceph cluster of choice, you can try deploying a demo pod from examples/cephfs using the instructions provided to test the deployment further.
## Notes on volume deletion
Dynamically provisioned volumes are deleted by the driver when requested to do so. For statically provisioned volumes from plugin versions 1.0.0 or older, a delete operation is a no-op; such volumes are expected to be deleted on the Ceph cluster by the user.