ceph-csi/deploy/cephfs/kubernetes/v1.13/helm
Niels de Vos 31648c8feb provisioners: add reconfiguring of PID limit
The container runtime CRI-O limits the number of PIDs to 1024 by
default. When many PVCs are requested at the same time, the provisioner
can start too many threads (or goroutines), and executing 'rbd'
commands can start to fail. If a goroutine cannot be started, the
process panics.

The PID limit can be changed by passing an argument to kubelet, but
this affects all processes running on the host. Changing kubelet's
parameters is also not a very elegant solution.

Instead, the provisioner pod can change the configuration itself. The
pod is running in privileged mode and can write to /sys/fs/cgroup where
the limit is configured.

With this change, the limit is configured to 'max', just as if there
were no limit at all. The logs of the csi-rbdplugin in the provisioner
pod will reflect the change it makes when starting the service:

    $ oc -n rook-ceph logs -c csi-rbdplugin csi-rbdplugin-provisioner-0
    ..
    I0726 13:59:19.737678       1 cephcsi.go:127] Initial PID limit is set to 1024
    I0726 13:59:19.737746       1 cephcsi.go:136] Reconfigured PID limit to -1 (max)
    ..

It is possible to pass a different limit on the command line of the
cephcsi executable. The following flag has been added:

    --pidlimit=<int>       the PID limit to configure through cgroups

This accepts the special values -1 (max) and 0 (default, do not
reconfigure). Any other positive integer becomes the limit that gets
configured in cgroups.
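The mapping from the flag value to the string written into pids.max can
be sketched as follows; the function name is hypothetical and the
0-means-skip case is assumed to be handled by the caller simply not
writing anything:

```go
package main

import (
	"fmt"
	"strconv"
)

// limitString converts a --pidlimit flag value into the string written
// to pids.max: -1 maps to "max" (unlimited), any positive value is
// used verbatim. A value of 0 means "do not reconfigure" and should be
// checked by the caller before calling this helper.
func limitString(pidLimit int) string {
	if pidLimit == -1 {
		return "max"
	}
	return strconv.Itoa(pidLimit)
}

func main() {
	for _, l := range []int{-1, 2048} {
		fmt.Printf("--pidlimit=%d -> pids.max = %q\n", l, limitString(l))
	}
}
```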

Signed-off-by: Niels de Vos <ndevos@redhat.com>
2019-08-13 14:43:29 +00:00
Directory contents:

    templates/
    .helmignore
    Chart.yaml
    README.md
    values.yaml

ceph-csi-cephfs

The ceph-csi-cephfs chart adds cephfs volume support to your cluster.

Install Chart

To install the Chart into your Kubernetes cluster:

    helm install --namespace "ceph-csi-cephfs" --name "ceph-csi-cephfs" ceph-csi/ceph-csi-cephfs

After the installation succeeds, you can check the status of the Chart:

    helm status "ceph-csi-cephfs"

If you want to delete your Chart, use this command:

    helm delete --purge "ceph-csi-cephfs"

If you want to delete the namespace, use this command:

    kubectl delete namespace ceph-csi-cephfs