When the initial DeleteVolume times out (as it does on slow clusters
due to the low 10-second limit), the external-provisioner calls it
again. The CSI standard requires the second call to succeed if the
volume has been deleted in the meantime. This didn't work because
DeleteVolume returned an error when failing to find the volume info
file:
rbdplugin: E1008 08:05:35.631783 1 utils.go:100] GRPC error: rbd: open err /var/lib/kubelet/plugins/csi-rbdplugin/controller/csi-rbd-622a252c-cad0-11e8-9112-deadbeef0101.json/open /var/lib/kubelet/plugins/csi-rbdplugin/controller/csi-rbd-622a252c-cad0-11e8-9112-deadbeef0101.json: no such file or directory
The fix is to treat a missing volume info file as "volume already
deleted" and return success. To detect this, the original os error
must be wrapped; otherwise the caller of loadVolInfo cannot determine
the root cause.
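A minimal sketch of what such wrapping might look like, assuming a
hypothetical volInfoError wrapper (type, field, and function names are
illustrative, not the actual ceph-csi code); the wrapper keeps the
original os error so DeleteVolume can check os.IsNotExist and return
success:

```go
package rbd

import (
	"encoding/json"
	"fmt"
	"os"
)

// volInfoError keeps the original os error so callers can inspect it.
// (Hypothetical name; the actual wrapper in ceph-csi may differ.)
type volInfoError struct {
	path string
	err  error
}

func (e *volInfoError) Error() string {
	return fmt.Sprintf("rbd: open err %s/%v", e.path, e.err)
}

// loadVolInfo reads persisted volume metadata into dst.
func loadVolInfo(path string, dst interface{}) error {
	f, err := os.Open(path)
	if err != nil {
		// Wrap instead of flattening into a string so the
		// original os error remains inspectable by the caller.
		return &volInfoError{path: path, err: err}
	}
	defer f.Close()
	return json.NewDecoder(f).Decode(dst)
}

// isNotFound reports whether the volume info file is missing,
// i.e. the volume can be treated as already deleted.
func isNotFound(err error) bool {
	if vie, ok := err.(*volInfoError); ok {
		return os.IsNotExist(vie.err)
	}
	return false
}
```

DeleteVolume would then call isNotFound(err) and, if it returns true,
respond with success instead of an error.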
Note that further work may be needed to make the driver truly
resilient; for example, there are probably concurrency issues.
But for now, this fixes: #82
The output from `ceph auth -f json get` contains non-JSON data at the
beginning. The workaround is to search for the start of valid JSON
data (it begins with "[{") and read from there.
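A rough sketch of that workaround, assuming the payload is a JSON
array of objects; the function name parseCephAuthOutput is made up
for illustration:

```go
package cephfs

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// parseCephAuthOutput skips any non-JSON noise at the beginning of the
// `ceph auth` output and unmarshals the JSON array that follows.
func parseCephAuthOutput(out []byte, dst interface{}) error {
	// The valid payload is a JSON array of objects, so it starts with "[{".
	idx := bytes.Index(out, []byte("[{"))
	if idx < 0 {
		return fmt.Errorf("no valid JSON found in ceph auth output")
	}
	return json.Unmarshal(out[idx:], dst)
}
```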
The driver will now probe for either the ceph-fuse or kernel mounter
every time it's about to mount a cephfs volume.
This also affects CreateVolume/DeleteVolume, where the mounting was
hard-coded to the ceph kernel client until now; mounter configuration
and probing are now honored there as well.
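A sketch of how such probing could work; the detection checks and the
constant/function names here are assumptions, not the driver's actual
code:

```go
package cephfs

import "os/exec"

const (
	volumeMounterFuse   = "fuse"
	volumeMounterKernel = "kernel"
)

// probeMounter returns the first usable cephfs mounter, trying the one
// requested in the configuration before falling back to the others.
func probeMounter(preferred string) string {
	for _, m := range []string{preferred, volumeMounterFuse, volumeMounterKernel} {
		switch m {
		case volumeMounterFuse:
			// Is the ceph-fuse binary available in PATH?
			if _, err := exec.LookPath("ceph-fuse"); err == nil {
				return volumeMounterFuse
			}
		case volumeMounterKernel:
			// Can the ceph kernel module be loaded?
			if err := exec.Command("modprobe", "ceph").Run(); err == nil {
				return volumeMounterKernel
			}
		}
	}
	return ""
}
```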
ceph-csi executes several external commands, but users cannot see
which commands were run with which options. This makes it difficult
to debug when a command fails.
This patch adds logging of each command and its options as it is
executed.
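A sketch of a logging exec wrapper; klog is assumed as the logger and
execCommand is an illustrative name rather than the driver's actual
helper:

```go
package util

import (
	"fmt"
	"os/exec"

	"k8s.io/klog"
)

// execCommand runs a command and logs exactly what is being executed,
// so failed invocations can be reconstructed from the logs.
// (klog is assumed here; the driver may use a different logger.)
func execCommand(program string, args ...string) ([]byte, error) {
	klog.V(4).Infof("EXEC %s %v", program, args)
	out, err := exec.Command(program, args...).CombinedOutput()
	if err != nil {
		return out, fmt.Errorf("%s failed with error %v, output: %s", program, err, out)
	}
	return out, nil
}
```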
A fuse mount does not allow mounting a directory that already
contains files. Because of this, scaled pods using cephfs currently
fail to mount via ceph-fuse.
This patch adds the nonempty option to the ceph-fuse command to
support ReadWriteMany with ceph-fuse.
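A sketch of a ceph-fuse invocation with the nonempty option added;
the exact flags and argument layout used by the driver may differ:

```go
package cephfs

import (
	"fmt"
	"os/exec"
)

// mountFuse mounts a cephfs volume with ceph-fuse, passing the FUSE
// "nonempty" option so an already-populated mountpoint (e.g. when a
// scaled pod reuses the same directory) does not make the mount fail.
func mountFuse(mountPoint, monitors, rootPath, keyringPath, id string) error {
	args := []string{
		mountPoint,
		"-m", monitors,
		"-r", rootPath,
		"-k", keyringPath,
		"-n", "client." + id,
		"-o", "nonempty", // allow mounting over a non-empty directory
	}
	out, err := exec.Command("ceph-fuse", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ceph-fuse failed: %v, output: %s", err, out)
	}
	return nil
}
```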