This commit provides the option to pass in a Ceph cluster-id instead
of a MON list in the storage class.
This helps in moving towards a stateless CSI implementation.
Tested the following:
- PV provisioning and staging using cluster-id in the storage class
- PV provisioning and staging using a MON list in the storage class
Did not test:
- snapshot operations with either form of the storage class
Signed-off-by: ShyamsundarR <srangana@redhat.com>
issue #217
Goal
When the CSI plugin exits unexpectedly, a pod using a CephFS PV cannot recover automatically, because the mount relation is lost until the pod is killed and rescheduled to another node. This is a problem the plugin can help with: if it remounts the old paths when it restarts, the pod can recover automatically and keep using the old mount path.
Non-Goal
The pod should still exit and restart when the CSI plugin pod exits and the mount point is lost; a pod that does not exit will keep getting **transport endpoint is not connected** errors.
Implementation logic
csi-plugin start (sketched below):
1. Load all MountCacheEntry records from the node-local dir.
2. Check whether the volID still exists in the cluster; if not, ignore the entry, otherwise continue.
3. Check whether the stagingPath exists; if yes, mount the path.
4. Check whether each targetPath exists; if yes, bind-mount it onto the staging path.
NodeServer:
1. NodeStageVolume: add a MountCacheEntry to the local dir, including the readonly attribute and the Ceph secret.
2. NodePublishVolume: add the pod bind-mount path to the MountCacheEntry and persist it to the local dir.
3. NodeUnpublishVolume: remove the pod bind-mount path from the MountCacheEntry and persist it to the local dir.
4. NodeUnstageVolume: remove the MountCacheEntry from the local dir.
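A minimal sketch of the plugin-start recovery flow, assuming a hypothetical MountCacheEntry type; volExistsInCluster, isMountPoint, mountVolume, and bindMount are illustrative stubs, not the actual ceph-csi API:

```go
package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// MountCacheEntry is a hypothetical persisted record of a staged volume.
type MountCacheEntry struct {
	VolID       string            `json:"volID"`
	StagingPath string            `json:"stagingPath"`
	TargetPaths []string          `json:"targetPaths"`
	ReadOnly    bool              `json:"readOnly"`
	Secrets     map[string]string `json:"secrets"`
}

// Illustrative stubs; the real plugin would query Ceph and the mount table.
func volExistsInCluster(volID string) bool     { return true }
func isMountPoint(path string) bool            { return false }
func mountVolume(me MountCacheEntry) error     { return nil }
func bindMount(src, dst string, ro bool) error { return nil }

// remountCachedVolumes implements the four start-up steps listed above.
func remountCachedVolumes(cacheDir string) error {
	files, err := filepath.Glob(filepath.Join(cacheDir, "*.json"))
	if err != nil {
		return err
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue
		}
		// Step 1: load the cached entry from the node-local dir.
		var me MountCacheEntry
		if err := json.Unmarshal(data, &me); err != nil {
			continue
		}
		// Step 2: skip entries whose volume no longer exists in the cluster.
		if !volExistsInCluster(me.VolID) {
			continue
		}
		// Step 3: re-mount the staging path if it is no longer mounted.
		if !isMountPoint(me.StagingPath) {
			if err := mountVolume(me); err != nil {
				continue
			}
		}
		// Step 4: re-create the pod bind mounts onto the staging path.
		for _, target := range me.TargetPaths {
			if !isMountPoint(target) {
				_ = bindMount(me.StagingPath, target, me.ReadOnly)
			}
		}
	}
	return nil
}

func main() {
	// Hypothetical cache location, for illustration only.
	_ = remountCachedVolumes("/var/lib/kubelet/plugins/csi-cephfsplugin/mount-cache")
}
```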
This commit reverts the initial implementation of the
multi-node-multi-writer feature:
commit: b5b8e46460
It replaces that implementation with a more restrictive version that
only allows multi-node-multi-writer for volumes of type `block`.
With this change there are no volume parameters required in the storage
class. We also fail any attempt to create a file-based volume with
multi-node-multi-writer specified, so a user doesn't have to wait until
the publish step to find out it doesn't work (see the sketch below).
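A minimal sketch of that early failure, using the CSI spec's Go types; the function name and wiring are illustrative, not ceph-csi's actual code:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// validateMultiWriter fails CreateVolume early when a file-based (Mount)
// capability requests MULTI_NODE_MULTI_WRITER; only Block volumes may use it.
func validateMultiWriter(caps []*csi.VolumeCapability) error {
	for _, c := range caps {
		multi := c.GetAccessMode().GetMode() ==
			csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER
		if multi && c.GetBlock() == nil {
			return errors.New("multi-node-multi-writer is only supported for block volumes")
		}
	}
	return nil
}

func main() {
	mount := &csi.VolumeCapability{
		AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
		AccessMode: &csi.VolumeCapability_AccessMode{
			Mode: csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER,
		},
	}
	fmt.Println(validateMultiWriter([]*csi.VolumeCapability{mount})) // rejected at create time
}
```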
During volume creation we validate the volume size in
bytes, and during listing of volumes and
snapshots we likewise need to report sizes in bytes (see the sketch below).
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
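A minimal sketch of handling the size in bytes consistently; the CSI CapacityRange is already expressed in bytes, and the 1 GiB fallback here is illustrative:

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// volSizeBytes returns the requested size in bytes; both creation and
// listing should use this same byte value rather than mixing units.
func volSizeBytes(req *csi.CreateVolumeRequest) int64 {
	if cr := req.GetCapacityRange(); cr != nil && cr.GetRequiredBytes() > 0 {
		return cr.GetRequiredBytes()
	}
	return 1 << 30 // illustrative 1 GiB default
}

func main() {
	req := &csi.CreateVolumeRequest{
		CapacityRange: &csi.CapacityRange{RequiredBytes: 5 << 30},
	}
	fmt.Println(volSizeBytes(req)) // 5368709120
}
```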
This change adds the ability to define a `multiNodeWritable` option in
the Storage Class.
This change does a number of things:
1. Allow multi-node-multi-writer access modes if the SC option is
enabled
2. Bypass the watcher checks for MultiNodeMultiWriter volumes
3. Maintain the existing watcher checks for SingleNodeWriter access
modes regardless of the StorageClass option (see the sketch below).
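A minimal sketch of the option and the watcher bypass; allowsMultiWriter and checkWatcher are hypothetical names standing in for the real plumbing:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// allowsMultiWriter reports whether the storage class enabled the option.
func allowsMultiWriter(scParams map[string]string) bool {
	return strings.EqualFold(scParams["multiNodeWritable"], "enabled")
}

// checkWatcher keeps the existing single-writer safety check, but skips
// it when the volume was created with multiNodeWritable enabled.
func checkWatcher(volHasWatcher, multiWriter bool) error {
	if multiWriter {
		return nil // bypass: concurrent writers are expected
	}
	if volHasWatcher {
		return errors.New("volume is already in use by another node")
	}
	return nil
}

func main() {
	params := map[string]string{"multiNodeWritable": "enabled"}
	fmt.Println(checkWatcher(true, allowsMultiWriter(params))) // <nil>: check bypassed
}
```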
fix lint errors
Iteration over a map is not guaranteed
to be ordered.
We need to sort the volume IDs for the
ListVolumes RPC in the rbd plugin (see the sketch below).
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
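A minimal, self-contained sketch of the fix: Go's map iteration order is unspecified, so collect the IDs and sort them before building the ListVolumes reply (the in-memory store here is illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// Hypothetical view of the metadata store: volume ID -> size in bytes.
	vols := map[string]int64{
		"vol-c": 3 << 30,
		"vol-a": 1 << 30,
		"vol-b": 2 << 30,
	}

	// Ranging over the map directly can yield a different order on each
	// run; sorting the IDs makes the ListVolumes reply deterministic.
	ids := make([]string, 0, len(vols))
	for id := range vols {
		ids = append(ids, id)
	}
	sort.Strings(ids)

	for _, id := range ids {
		fmt.Printf("%s: %d bytes\n", id, vols[id])
	}
}
```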
Currently all created volumes are
stored in the metadata store, so we
can use this information to support
the ListVolumes RPC.
Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
During volume creation we were validating
that the volume name cannot be empty; removing
this check as we are never going to hit
this case.
Fixes: #204
Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
pkg/rbd/rbd.go:67:65: exported func NewNodeServer
returns unexported type *rbd.nodeServer, which can be
annoying to use (golint)
Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
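The usual way to silence that golint warning is to export the returned type (or return an interface). A minimal sketch, not the actual ceph-csi change:

```go
package rbd

// The warned-about pattern: an exported constructor returning the
// unexported *nodeServer, which callers outside the package cannot name.
type nodeServer struct{ /* ... */ }

// One idiomatic fix: export the type so the constructor's signature is
// fully usable (and nameable) by callers.
type NodeServer struct{ /* ... */ }

func NewNodeServer() *NodeServer { return &NodeServer{} }
```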
It isn't meaningful to call cephfs.NewcephfsDriver()
to get a new driver; it is better to call
cephfs.GetNewDriver(), which returns the cephfs driver
object.
The same applies to rbd.
Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
The timeout value in external-provisioner is fairly low. It's not
uncommon that it times out and retries before the rbdplugin is done
with CreateVolume. rbdplugin has to serialize calls and ensure that
they are idempotent to deal with this.
When the initial DeleteVolume times out (as it does on slow clusters
due to the low 10 second limit), the external-provisioner calls it
again. The CSI standard requires the second call to succeed if the
volume has been deleted in the meantime. This didn't work because
DeleteVolume returned an error when failing to find the volume info
file:
rbdplugin: E1008 08:05:35.631783 1 utils.go:100] GRPC error: rbd: open err /var/lib/kubelet/plugins/csi-rbdplugin/controller/csi-rbd-622a252c-cad0-11e8-9112-deadbeef0101.json/open /var/lib/kubelet/plugins/csi-rbdplugin/controller/csi-rbd-622a252c-cad0-11e8-9112-deadbeef0101.json: no such file or directory
The fix is to treat a missing volume info file as "volume already
deleted" and return success. To detect this, the original os error
must be wrapped, otherwise the caller of loadVolInfo cannot determine
the root cause.
Note that further work may be needed to make the driver really
resilient, for example there are probably concurrency issues.
But for now this fixes: #82
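A minimal sketch of the idea; the original change predates Go 1.13 error wrapping, but it maps directly onto %w plus errors.Is today (loadVolInfo and deleteVolume here are illustrative stand-ins):

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// loadVolInfo wraps the original os error with %w so callers can still
// detect the root cause with errors.Is.
func loadVolInfo(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("rbd: open err %s: %w", path, err)
	}
	return data, nil
}

// deleteVolume treats a missing volume info file as "already deleted",
// making a retried DeleteVolume call succeed as the CSI spec requires.
func deleteVolume(infoPath string) error {
	_, err := loadVolInfo(infoPath)
	if errors.Is(err, os.ErrNotExist) {
		return nil // volume info already gone: report success
	}
	if err != nil {
		return err
	}
	// ... delete the rbd image and the info file here ...
	return nil
}

func main() {
	fmt.Println(deleteVolume("/nonexistent/csi-rbd-volume.json")) // <nil>
}
```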
The output from `ceph auth -f json get` contains non-JSON data at the beginning.
The workaround is to search for the start of the valid JSON data (it starts with "[{")
and begin reading from there.
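A minimal, self-contained sketch of the workaround (the sample output is fabricated for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Illustrative output: non-JSON preamble followed by the JSON array.
	raw := "exported keyring for client.foo\n[{\"entity\":\"client.foo\"}]"

	// Find the start of the valid JSON data and read from there.
	idx := strings.Index(raw, "[{")
	if idx < 0 {
		panic("no JSON array found in ceph output")
	}

	var entries []map[string]interface{}
	if err := json.Unmarshal([]byte(raw[idx:]), &entries); err != nil {
		panic(err)
	}
	fmt.Println(entries[0]["entity"]) // client.foo
}
```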
The driver will now probe for either the ceph-fuse or the kernel client every time
it's about to mount a cephfs volume.
This also affects CreateVolume/DeleteVolume, where the mounting
was hard-coded to the ceph kernel client until now; mounter configuration
and probing are now honored.
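One plausible shape for such a probe, with illustrative checks (exec.LookPath for ceph-fuse, /proc/filesystems for the kernel client); this is an assumption, not ceph-csi's exact probing code:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// probeMounter picks a mounter at mount time instead of hard-coding one.
// preferred comes from the mounter configuration ("fuse" or "kernel").
func probeMounter(preferred string) (string, error) {
	fuseOK := false
	if _, err := exec.LookPath("ceph-fuse"); err == nil {
		fuseOK = true
	}
	kernelOK := false
	if data, err := os.ReadFile("/proc/filesystems"); err == nil {
		kernelOK = strings.Contains(string(data), "ceph")
	}

	switch {
	case preferred == "fuse" && fuseOK:
		return "fuse", nil
	case preferred == "kernel" && kernelOK:
		return "kernel", nil
	case fuseOK: // fall back to whatever is available
		return "fuse", nil
	case kernelOK:
		return "kernel", nil
	}
	return "", fmt.Errorf("no usable cephfs mounter found")
}

func main() {
	m, err := probeMounter("kernel")
	fmt.Println(m, err)
}
```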
Some commands are executed by ceph-csi, but users do not know which
commands were executed with which options. Hence, it is difficult to
debug when a command does not work as expected.
This patch adds logging of which command and options are executed.
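A minimal sketch of such logging using the standard library log package; ceph-csi's actual logging calls may differ:

```go
package main

import (
	"log"
	"os/exec"
)

// execCommand runs a command and logs exactly what is being executed,
// so failures can be reproduced and debugged by hand.
func execCommand(command string, args ...string) ([]byte, error) {
	log.Printf("exec: %s %v", command, args)
	out, err := exec.Command(command, args...).CombinedOutput()
	if err != nil {
		log.Printf("exec failed: %s %v: %v: %s", command, args, err, out)
	}
	return out, err
}

func main() {
	_, _ = execCommand("rbd", "create", "--size", "1024", "pool/image")
}
```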
A fuse mount does not allow mounting a directory that already contains
files. Due to this, scaled pods with cephfs currently fail to mount
via ceph-fuse.
This patch adds the nonempty option to the ceph-fuse command to support
ReadWriteMany with ceph-fuse.
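A minimal sketch of where the option lands when building the ceph-fuse argument list (the surrounding argument layout is illustrative):

```go
package main

import "fmt"

// fuseMountArgs builds a ceph-fuse invocation; "-o nonempty" lets
// ceph-fuse mount over a directory that already contains files, which
// is what scaled pods sharing a ReadWriteMany volume run into.
func fuseMountArgs(mountPoint, id, keyring string) []string {
	return []string{
		mountPoint,
		"--id", id,
		"--keyring", keyring,
		"-o", "nonempty",
	}
}

func main() {
	fmt.Println(fuseMountArgs("/var/lib/kubelet/pods/x/volume", "admin", "/etc/ceph/keyring"))
}
```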
CephFS doesn't have a feature to provide block volumes, therefore it should return false from ValidateVolumeCapabilities if a block volume is specified.
Fixes #44
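A minimal sketch of the capability check using the CSI spec's Go types; the response wiring around it varies by CSI version and is omitted:

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// supportedCaps returns false if any requested capability asks for a
// block volume, since CephFS cannot provide block volumes.
func supportedCaps(caps []*csi.VolumeCapability) bool {
	for _, c := range caps {
		if c.GetBlock() != nil {
			return false
		}
	}
	return true
}

func main() {
	blockCap := &csi.VolumeCapability{
		AccessType: &csi.VolumeCapability_Block{Block: &csi.VolumeCapability_BlockVolume{}},
	}
	fmt.Println(supportedCaps([]*csi.VolumeCapability{blockCap})) // false
}
```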