commit 25e3a961c3
When the initial DeleteVolume times out (as it does on slow clusters due to the low 10-second limit), the external-provisioner calls it again. The CSI standard requires the second call to succeed if the volume has been deleted in the meantime. This didn't work because DeleteVolume returned an error when failing to find the volume info file:

```
rbdplugin: E1008 08:05:35.631783 1 utils.go:100] GRPC error: rbd: open err /var/lib/kubelet/plugins/csi-rbdplugin/controller/csi-rbd-622a252c-cad0-11e8-9112-deadbeef0101.json/open /var/lib/kubelet/plugins/csi-rbdplugin/controller/csi-rbd-622a252c-cad0-11e8-9112-deadbeef0101.json: no such file or directory
```

The fix is to treat a missing volume info file as "volume already deleted" and return success. To detect this case, the original os error must be wrapped; otherwise the caller of loadVolInfo cannot determine the root cause.

Note that further work may be needed to make the driver truly resilient; for example, there are probably concurrency issues. But for now this fixes #82.
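The following is a minimal sketch of the idempotency pattern the commit describes, not the actual ceph-csi code. The function names loadVolInfo and deleteVolume mirror the commit message, but their bodies, the volume type, and the use of Go's %w wrapping with errors.Is are illustrative assumptions. The key point is that wrapping preserves the os.ErrNotExist root cause, so the caller can map "file already gone" to success instead of an opaque gRPC error.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

type volume struct{ ID string }

// loadVolInfo reads the per-volume JSON metadata file. It wraps the
// underlying os error with %w so callers can still inspect the root
// cause via errors.Is.
func loadVolInfo(path string) (*volume, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("rbd: open err %s/%w", path, err)
	}
	_ = data // a real driver would unmarshal the JSON volume info here
	return &volume{}, nil
}

// deleteVolume treats a missing volume info file as "volume already
// deleted" and returns success, making a retried DeleteVolume call
// after a timeout idempotent.
func deleteVolume(path string) error {
	if _, err := loadVolInfo(path); err != nil {
		if errors.Is(err, os.ErrNotExist) {
			return nil // info file gone: a previous call already deleted the volume
		}
		return err
	}
	// ... delete the RBD image and remove the volume info file ...
	return nil
}

func main() {
	// Prints <nil>: the missing file is treated as an already-deleted volume.
	fmt.Println(deleteVolume("/nonexistent/csi-rbd-example.json"))
}
```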