This will help the user to check what the
actual error is, i.e. whether the config file
has an issue or the clusterid is
not valid.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Image deletion takes time proportional to the size of the
image. Hence, the ceph manager has been enhanced to support async
deletion of an image, or rather, passing the task of
deleting an image to the ceph manager.
This commit leverages the ceph manager enhancement in the CSI code.
NOTE: This is tested against a ceph cluster that is running
the Ceph master version of the code. Once other releases
catch up in terms of the feature, the optimization will be
available to the CSI driver as well.
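A minimal sketch of how the CSI code can hand the deletion off to the
manager (function and parameter names here are illustrative, not the
actual driver code), using the `ceph rbd task add remove` command of
the rbd_support manager module:

    package rbd

    import (
        "context"
        "fmt"
        "os/exec"
    )

    // deleteImageAsync queues the image removal with the Ceph manager
    // instead of waiting for a synchronous `rbd rm` to finish.
    func deleteImageAsync(ctx context.Context, user, keyFile, pool, image string) error {
        cmd := exec.CommandContext(ctx, "ceph",
            "--id", user, "--keyfile="+keyFile,
            "rbd", "task", "add", "remove", pool+"/"+image)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("failed to queue deletion of %s/%s: %v (%s)", pool, image, err, out)
        }
        return nil
    }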
Fixes: #523
Signed-off-by: ShyamsundarR <srangana@redhat.com>
We have seen better performance in device
formatting and mounting by setting the fstype to xfs
instead of the default ext4.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The container runtime CRI-O limits the number of PIDs to 1024 by
default. When many PVCs are requested at the same time, it is possible
for the provisioner to start too many threads (or goroutines) and
executing 'rbd' commands can start to fail. If a goroutine cannot
be started, the process panics.
The PID limit can be changed by passing an argument to kubelet, but this
will affect all pods running on the host. Changing the parameters of
kubelet is also not a very elegant solution.
Instead, the provisioner pod can change the configuration itself. The
pod is running in privileged mode and can write to /sys/fs/cgroup where
the limit is configured.
With this change, the limit is configured to 'max', just as if there is
no limit at all. The logs of the csi-rbdplugin in the provisioner pod
will reflect the change it makes when starting the service:
$ oc -n rook-ceph logs -c csi-rbdplugin csi-rbdplugin-provisioner-0
..
I0726 13:59:19.737678 1 cephcsi.go:127] Initial PID limit is set to 1024
I0726 13:59:19.737746 1 cephcsi.go:136] Reconfigured PID limit to -1 (max)
..
It is possible to pass a different limit on the commandline of the
cephcsi executable. The following flag has been added:
--pidlimit=<int> the PID limit to configure through cgroups
This accepts special values -1 (max) and 0 (default, do not
reconfigure). Other integers will be the limit that gets configured in
cgroups.
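A minimal sketch of the reconfiguration under cgroup v1 (the helper
name is illustrative and the real cephcsi code may differ):

    package main

    import (
        "errors"
        "io/ioutil"
        "path/filepath"
        "strings"
    )

    // setPIDLimit writes the requested limit into the pids cgroup of the
    // current process; the value "max" removes the limit entirely.
    func setPIDLimit(limit string) error {
        data, err := ioutil.ReadFile("/proc/self/cgroup")
        if err != nil {
            return err
        }
        var cgroupPath string
        for _, line := range strings.Split(string(data), "\n") {
            // entries look like "7:pids:/kubepods/...": id, controller, path
            parts := strings.SplitN(line, ":", 3)
            if len(parts) == 3 && strings.Contains(parts[1], "pids") {
                cgroupPath = parts[2]
                break
            }
        }
        if cgroupPath == "" {
            return errors.New("pids cgroup not found for the current process")
        }
        pidsMax := filepath.Join("/sys/fs/cgroup/pids", cgroupPath, "pids.max")
        return ioutil.WriteFile(pidsMax, []byte(limit), 0644)
    }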
Signed-off-by: Niels de Vos <ndevos@redhat.com>
In unstage we now adhere to the transaction (or order of steps)
done in Stage. To enable this, we stash the image metadata
into a local file on the staging path for use with the unstage
request.
This helps in unmapping a stale map, in case the mount or
other steps in the transaction are incomplete.
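A minimal sketch of the stashing step (the stash file name and the
struct fields are illustrative):

    package rbd

    import (
        "encoding/json"
        "io/ioutil"
        "path/filepath"
    )

    // stashedImage captures what unstage needs to undo the stage steps
    // in reverse order, even if some of them did not complete.
    type stashedImage struct {
        Pool      string `json:"pool"`
        ImageName string `json:"imageName"`
        NbdAccess bool   `json:"nbdAccess"`
    }

    func stashImageMetadata(stagingPath string, meta stashedImage) error {
        data, err := json.Marshal(meta)
        if err != nil {
            return err
        }
        // the stash file lives within the staging path, next to the staged mount
        return ioutil.WriteFile(filepath.Join(stagingPath, "image-meta.json"), data, 0600)
    }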
Signed-off-by: ShyamsundarR <srangana@redhat.com>
This change also starts mapping nbd based access using the rbd CLI,
as it is a prerequisite for getting device listings for nbd as well.
Signed-off-by: ShyamsundarR <srangana@redhat.com>
This commit moves the mounting of block volumes and filesystems
to a sub-file (already the case) or a sub-dir within the staging
path.
This enables using the staging path to store any additional data
regarding the mount. For example, this will be extended in the
future to store the fsid of the cluster, and maybe the pool name
to map unmap requests to the right image.
Also, this fixes the noted hack in the code to determine, in a
common manner, whether there is a mount on the passed-in staging path.
Signed-off-by: ShyamsundarR <srangana@redhat.com>
Sometimes the tests fail to clean up due to unavailable resources that are
listed in the .yaml files. Deleting the missing resources returns
"resource not found". By passing --ignore-not-found to kubectl, this
problem should not happen anymore (and possibly makes it more obvious
where tests go wrong).
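For example (manifest name is a placeholder):

    kubectl delete -f <manifest>.yaml --ignore-not-found=true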
Rook master deploys ceph-csi
by default now; this causes
ceph-csi test failures. This PR
removes the ceph-csi resources created by Rook.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Once we map the rbd image on a node,
we get the device name it is mapped to
in the map output itself; there is no need to
check the devicepath after rbd mapping.
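A minimal sketch of picking the device path straight from the map
output (the helper name is illustrative):

    package rbd

    import (
        "context"
        "fmt"
        "os/exec"
        "strings"
    )

    // mapImage maps the image and returns the device path that
    // `rbd map` prints on stdout (for example /dev/rbd0).
    func mapImage(ctx context.Context, pool, image string) (string, error) {
        out, err := exec.CommandContext(ctx, "rbd", "map", pool+"/"+image).Output()
        if err != nil {
            return "", fmt.Errorf("rbd map of %s/%s failed: %v", pool, image, err)
        }
        return strings.TrimSpace(string(out)), nil
    }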
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Use Deployment with leader election instead of StatefulSet
A Deployment behaves better when a node gets disconnected
from the rest of the cluster: a new provisioner leader
is elected in ~15 seconds, while it may take up to
5 minutes for a StatefulSet to start a new replica.
Refer: kubernetes-csi/external-provisioner@52d1fbc
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
It is the CO's responsibility to create the
stagingPath as per the CSI spec:
// The CO SHALL ensure
// that the path is directory and that the process serving the
// request has `read` and `write` permission to that directory. The
// CO SHALL be responsible for creating the directory if it does not
// exist.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
... and not that of the FS subvolume group `csi`.
There is no reason for setting the mode of the FS subvolume group `csi`
(a CephFS subdirectory) to 777. Its default mode is 755. It's
sufficient to set the mode of FS subvolumes within the subvolume group
to `777`.
Signed-off-by: Ramana Raja <rraja@redhat.com>
... instead of that of the `csi` subvolume group. The pool layout
specified via the storage class's `pool` setting is a subvolume property
and not a subvolume group property. The `csi` subvolume group
may have subvolumes of different storage classes with different
pool layouts.
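For example, the data pool can be specified per subvolume at creation
time (placeholders, not an exact invocation from the driver):

    ceph fs subvolume create <vol_name> <subvol_name> --group_name csi --pool_layout <data_pool_name>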
Fixes: #499
Signed-off-by: Ramana Raja <rraja@redhat.com>
If mapping of the rbd device succeeds but mounting the
device to the stagingpath fails, or if chmod on the targetpath fails,
a stale mapping may be left behind when
unstage is called.
This is fixed by unmapping the device if something fails.
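A minimal sketch of the rollback; mapFn, mountFn and unmapFn stand in
for the real rbd map/mount/unmap helpers:

    package rbd

    import "fmt"

    // stageTransaction rolls the mapping back when a later step fails,
    // so a stale map is not left behind for unstage to deal with.
    func stageTransaction(
        mapFn func() (string, error),
        mountFn, unmapFn func(device string) error,
    ) error {
        device, err := mapFn()
        if err != nil {
            return err
        }
        if err := mountFn(device); err != nil {
            if unmapErr := unmapFn(device); unmapErr != nil {
                return fmt.Errorf("mount failed: %v; unmap also failed: %v", err, unmapErr)
            }
            return err
        }
        return nil
    }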
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Every Ceph CLI that is invoked at present passes the key via the
--key option, and hence the key is exposed to being displayed on
the host using a ps command or similar means.
This commit addresses the issue by stashing the key in a tmp
file, which is in turn created on a tmpfs (or empty dir backed by
memory), and then using such tmp files as arguments to the --keyfile
option for every CLI that is invoked.
This prevents the key from being visible as part of the argument list
of the invoked program on the system.
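A minimal sketch of the stashing step (directory layout and helper
name are illustrative):

    package util

    import (
        "io/ioutil"
        "path/filepath"
    )

    // storeKey writes the key into a file on a tmpfs backed directory and
    // returns its path, so only the file name appears on the command line.
    func storeKey(tmpfsDir, volumeID, key string) (string, error) {
        keyFile := filepath.Join(tmpfsDir, "keyfile-"+volumeID)
        if err := ioutil.WriteFile(keyFile, []byte(key), 0600); err != nil {
            return "", err
        }
        return keyFile, nil
    }

The Ceph CLIs are then invoked with "--keyfile=" + keyFile instead of
"--key=<secret>", so the secret never shows up in a ps listing.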
Fixes: #318
Signed-off-by: ShyamsundarR <srangana@redhat.com>
Currently, the provisioner creates a user for every volume and the nodeplugin
uses this user to mount that volume. But the nodeplugin and provisioner
already have admin credentials; hence, use the admin credentials
to mount the volume and get rid of user creation for each volume.
Signed-off-by: Poornima G <pgurusid@redhat.com>