Commit Graph

80 Commits

Patrick Ohly
0f9c9061ce rbd: refuse to create block volumes
Without this check, the driver fails one of the E2E storage tests in
Kubernetes 1.13: provisioning a block volume is expected to fail in
e689d515f7/test/e2e/storage/testsuites/volumemode.go (L329-L330)
2018-12-13 10:53:16 +01:00
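A minimal sketch of the kind of check this commit describes, rejecting block-mode capabilities in CreateVolume. It is written against the CSI 0.3-era Go bindings; the helper name and exact error code are assumptions, not necessarily what ceph-csi uses:

```go
package rbd

import (
	csi "github.com/container-storage-interface/spec/lib/go/csi/v0"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// validateNoBlockVolume refuses CreateVolume requests that ask for a raw
// block volume, since the driver here only serves filesystem volumes.
func validateNoBlockVolume(req *csi.CreateVolumeRequest) error {
	for _, c := range req.GetVolumeCapabilities() {
		// AccessType is a oneof; GetBlock() is non-nil for block mode.
		if c.GetBlock() != nil {
			return status.Error(codes.Unimplemented, "rbd: block volumes are not supported")
		}
	}
	return nil
}
```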
gman
ed811e0506 cephfs: don't set quotas for zero-sized volumes 2018-12-01 10:39:09 +01:00
Patrick Ohly
720ad4afeb rbd: protect against concurrent gRPC calls
The timeout value in external-provisioner is fairly low. It's not
uncommon that it times out and retries before the rbdplugin is done
with CreateVolume. rbdplugin has to serialize calls and ensure that
they are idempotent to deal with this.
2018-10-26 15:29:48 +02:00
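One way such serialization can be done is a per-volume lock held for the duration of each call, so a retried CreateVolume waits for the in-flight one instead of racing it. A hedged sketch; the actual locking scheme in the commit may differ:

```go
package rbd

import "sync"

var (
	volLocksMu sync.Mutex
	volLocks   = map[string]*sync.Mutex{}
)

// lockVolume returns the mutex for the named volume, already locked.
// Callers release it when their gRPC call finishes:
//
//	defer lockVolume(req.GetName()).Unlock()
func lockVolume(name string) *sync.Mutex {
	volLocksMu.Lock()
	l, ok := volLocks[name]
	if !ok {
		l = &sync.Mutex{}
		volLocks[name] = l
	}
	volLocksMu.Unlock()
	l.Lock()
	return l
}
```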
Huamin Chen
188cdd1d68
Merge pull request #89 from rootfs/containerized
support nsmounter when running in containerized mode
2018-10-15 20:25:40 -04:00
Huamin Chen
3436a094f7 support nsmounter when running in containerized mode
Signed-off-by: Huamin Chen <hchen@redhat.com>
2018-10-15 14:59:41 +00:00
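When the plugin runs inside a container, mount(8) has to execute in the host's mount namespace rather than the container's. A sketch of the nsenter-style approach the commit subject suggests; the /rootfs path and exact flags are assumptions:

```go
package util

import (
	"os/exec"
	"strings"
)

// hostMount wraps mount(8) with nsenter so it runs in the host's mount
// namespace; PID 1's namespace is reached via a bind-mounted /rootfs.
func hostMount(source, target, fstype string, options []string) error {
	args := []string{"--mount=/rootfs/proc/1/ns/mnt", "--", "mount", "-t", fstype}
	if len(options) > 0 {
		args = append(args, "-o", strings.Join(options, ","))
	}
	args = append(args, source, target)
	return exec.Command("nsenter", args...).Run()
}
```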
Patrick Ohly
25e3a961c3 rbdplugin: idempotent DeleteVolume
When the initial DeleteVolume times out (as it does on slow clusters
due to the low 10 second limit), the external-provisioner calls it
again. The CSI standard requires the second call to succeed if the
volume has been deleted in the meantime. This didn't work because
DeleteVolume returned an error when failing to find the volume info
file:

  rbdplugin: E1008 08:05:35.631783       1 utils.go:100] GRPC error: rbd: open err /var/lib/kubelet/plugins/csi-rbdplugin/controller/csi-rbd-622a252c-cad0-11e8-9112-deadbeef0101.json/open /var/lib/kubelet/plugins/csi-rbdplugin/controller/csi-rbd-622a252c-cad0-11e8-9112-deadbeef0101.json: no such file or directory

The fix is to treat a missing volume info file as "volume already
deleted" and return success. To detect this, the original os error
must be wrapped, otherwise the caller of loadVolInfo cannot determine
the root cause.

Note that further work may be needed to make the driver really
resilient, for example there are probably concurrency issues.
But for now this fixes: #82
2018-10-09 12:08:56 +02:00
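A minimal sketch of the shape of this fix, assuming an illustrative rbdVolume type and helper names (the real code persists more fields and also deletes the rbd image). The key points are preserving the original os error from loadVolInfo and mapping "file not found" to success:

```go
package rbd

import (
	"encoding/json"
	"os"
)

// rbdVolume stands in for the persisted volume info; fields are
// illustrative only.
type rbdVolume struct {
	VolName string `json:"volName"`
	Pool    string `json:"pool"`
}

func loadVolInfo(path string, vi *rbdVolume) error {
	f, err := os.Open(path)
	if err != nil {
		// Hand back the raw *os.PathError instead of flattening it into a
		// new string, so callers can still apply os.IsNotExist to it.
		return err
	}
	defer f.Close()
	return json.NewDecoder(f).Decode(vi)
}

func deleteVolumeIdempotent(infoPath string) error {
	var vi rbdVolume
	if err := loadVolInfo(infoPath, &vi); err != nil {
		if os.IsNotExist(err) {
			// Info file already gone: treat the volume as deleted and
			// succeed, as CSI requires DeleteVolume to be idempotent.
			return nil
		}
		return err
	}
	// ...remove the rbd image and the info file here...
	return nil
}
```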
Huamin Chen
239f295dd1
Merge pull request #79 from rootfs/rbd-nbd
allow monitors to be embedded in the credential secret
2018-09-24 08:58:09 -04:00
Huamin Chen
d5b7543565 allow monitors to be embedded in the credential secret
Signed-off-by: Huamin Chen <hchen@redhat.com>
2018-09-21 14:43:01 +00:00
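A sketch of how such a fallback might look; the "monitors" key names are illustrative, not necessarily the ones ceph-csi settled on:

```go
package rbd

import "errors"

// getMonitors resolves the monitor list from the StorageClass parameters
// or, failing that, from a key embedded in the credential secret.
func getMonitors(params, secrets map[string]string) (string, error) {
	if m := params["monitors"]; m != "" {
		return m, nil
	}
	if m := secrets["monitors"]; m != "" {
		return m, nil
	}
	return "", errors.New("rbd: no monitors in parameters or credential secret")
}
```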
Kenjiro Nakayama
c1e072de0b Fix misspelling of "successfully" 2018-09-21 23:08:23 +09:00
Huamin Chen
30a5d9a6e7 add rbd-nbd mounter in storage class
Signed-off-by: Huamin Chen <hchen@redhat.com>
2018-09-18 14:09:12 +00:00
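The commit subject suggests the mounter is selected per StorageClass; a hedged sketch of that dispatch, with the parameter name assumed from the subject:

```go
package rbd

// attachCommand picks how the image gets mapped on the node: the kernel
// rbd client by default, or rbd-nbd when the StorageClass asks for it.
func attachCommand(params map[string]string) (cmd string, args []string) {
	if params["mounter"] == "rbd-nbd" {
		return "rbd-nbd", []string{"map"}
	}
	return "rbd", []string{"map"}
}
```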
Huamin Chen
6f3625b11e review feedback
Signed-off-by: Huamin Chen <hchen@redhat.com>
2018-09-18 13:10:28 +00:00
Huamin Chen
8955eb03bc support rbd-nbd
Signed-off-by: Huamin Chen <hchen@redhat.com>
2018-09-17 18:12:22 +00:00
gman
3c11129149 cephfs: ceph user is created in CreateVolume and deleted in DeleteVolume 2018-08-28 10:21:11 +02:00
gman
9c3389d784 cephfs/util: log execCommandJson; cache mount.New() instance 2018-08-28 10:19:28 +02:00
gman
12958d0a9a cephfs/cephuser: fixed getCephUser
output from `ceph auth -f json get` contains non-JSON data at the beginning;
the workaround is to search for the start of the valid JSON data (it begins with "[{")
and read from there
2018-08-28 10:13:53 +02:00
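A minimal sketch of that workaround, seeking to the first "[{" before decoding (the function name is illustrative):

```go
package cephfs

import (
	"encoding/json"
	"errors"
	"strings"
)

// decodeCephAuthOutput skips any leading non-JSON noise in the ceph auth
// output and unmarshals from the first "[{" onwards.
func decodeCephAuthOutput(out []byte, v interface{}) error {
	i := strings.Index(string(out), "[{")
	if i < 0 {
		return errors.New("cephfs: no JSON data found in ceph auth output")
	}
	return json.Unmarshal(out[i:], v)
}
```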
gman
6ddf98addf cephfs: cache available volume mounters 2018-08-14 16:48:30 +02:00
gman
c515a013d3 cephfs: volumemounter probe
The driver will now probe for either the ceph-fuse or kernel client every
time it's about to mount a cephfs volume.

This also affects CreateVolume/DeleteVolume, where the mounting
was hard-coded to the ceph kernel client until now - mounter configuration
and probing are now honored.
2018-08-14 11:19:41 +02:00
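One plausible way to probe, assuming the fuse client is detected via its binary and the kernel client via modprobe (the driver's actual probe may differ):

```go
package cephfs

import "os/exec"

// probeMounters reports which cephfs mounters look usable on this node.
func probeMounters() []string {
	var available []string
	if _, err := exec.LookPath("ceph-fuse"); err == nil {
		available = append(available, "fuse")
	}
	if err := exec.Command("modprobe", "ceph").Run(); err == nil {
		available = append(available, "kernel")
	}
	return available
}
```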
Huamin Chen
43b9f9aeaa
Merge pull request #61 from sngchlko/support-snapshot-in-rbdplugin
Support snapshot in rbdplugin
2018-08-09 09:31:31 -04:00
Seungcheol Ko
38aa575925 check snapshot feature 2018-08-09 22:07:13 +09:00
Seungcheol Ko
4312907f7b remove the snapshot if snapshot information can't be stored 2018-08-09 22:07:06 +09:00
Seungcheol Ko
b0e68a52e0 Refactoring using users 2018-08-09 22:07:00 +09:00
Seungcheol Ko
7d90783f03 fix nit 2018-08-09 22:06:51 +09:00
Róbert Vašek
069140e74a
Merge pull request #65 from clkao/execCommandJson-error
Log error output for execCommandJson as well.
2018-08-08 17:58:52 +02:00
Chia-liang Kao
a1de128a81 Log error output for execCommandJson as well. 2018-08-08 23:39:19 +08:00
gman
1c38412e39 cephfs: CSI 0.3.0; NodeStageVolume/NodeUnstageVolume; refactoring 2018-08-08 14:47:25 +02:00
Seungcheol Ko
f0fba1240a Revert "Implement NodeGetInfo for csi spec 3.0"
This reverts commit c93466b009.
2018-08-08 20:22:59 +09:00
Seungcheol Ko
b1ccdbb154 Support snapshot feature in rbdplugin 2018-08-08 17:16:07 +09:00
Seungcheol Ko
c93466b009 Implement NodeGetInfo for csi spec 3.0 2018-08-08 14:41:45 +09:00
Huamin Chen
4331960ab3
Merge pull request #55 from nak3/nonempty
Add nonempty option to ceph-fuse to support ReadWriteMany
2018-08-07 14:14:57 -04:00
Kenjiro Nakayama
e8784ec094 Log commands and options to make debugging easier
Some commands are executed by ceph-csi, but users do not know which
commands were run with which options. Hence, it is difficult to
debug when a command does not work.

This patch logs the command and options being executed.
2018-07-31 15:31:11 +09:00
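A sketch of a command wrapper with such logging, assuming glog as the logger (the commit's actual helper and verbosity level may differ):

```go
package util

import (
	"fmt"
	"os/exec"

	"github.com/golang/glog"
)

// execCommand logs the program and its options before running it, so a
// failing invocation can be reproduced by hand from the logs.
func execCommand(program string, args ...string) ([]byte, error) {
	glog.V(4).Infof("exec: %s %v", program, args)
	out, err := exec.Command(program, args...).CombinedOutput()
	if err != nil {
		return out, fmt.Errorf("%s failed: %v, output: %s", program, err, out)
	}
	return out, nil
}
```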
Kenjiro Nakayama
b649d4f1f6 Add nonempty option to ceph-fuse to support ReadWriteMany
A fuse mount does not allow mounting a directory that already contains
files. Because of this, a scaled pod using cephfs currently fails to be
mounted by ceph-fuse.

This patch adds the nonempty option to the ceph-fuse command to support
ReadWriteMany with ceph-fuse.
2018-07-31 14:44:33 +09:00
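A sketch of the option plumbing, with illustrative names; the essential part is appending nonempty to the ceph-fuse mount options:

```go
package cephfs

import "strings"

// fuseArgs builds the ceph-fuse argument list, always including the
// nonempty option so a non-empty target directory does not abort the mount.
func fuseArgs(mountPoint string, options []string) []string {
	options = append(options, "nonempty")
	return []string{mountPoint, "-o", strings.Join(options, ",")}
}
```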
Seungcheol Ko
bc34bd389e support image features for csi-rbdplugin 2018-07-21 00:59:54 +09:00
Masaki Kimura
753dbc2303 Fix Cephfs plugin to return false to ValidateVolumeCapabilities if Block volume is specified
CephFS has no way to provide a block volume, therefore ValidateVolumeCapabilities should return false if a block volume is specified.

Fixes #44
2018-07-10 16:48:55 +00:00
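A sketch of the capability check behind this fix, against the CSI 0.3-era Go bindings (the helper name is illustrative):

```go
package cephfs

import csi "github.com/container-storage-interface/spec/lib/go/csi/v0"

// capabilitiesSupported returns false as soon as a block-mode capability
// is requested, since CephFS can only provide filesystem volumes.
func capabilitiesSupported(caps []*csi.VolumeCapability) (bool, string) {
	for _, c := range caps {
		if c.GetBlock() != nil {
			return false, "cephfs: block volumes are not supported"
		}
	}
	return true, ""
}
```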
Huamin Chen
0df9e8e794
Merge pull request #42 from gman0/cephfs-delete-policy
cephfs: forbid deletion of shares not provisioned by the driver
2018-06-13 14:43:49 -04:00
gman
675ee93e46 cephfs: DeleteVolume() calls are allowed only for volumes with the provisionVolume=true parameter 2018-06-13 16:29:10 +02:00
malc0lm
f273874f26 rbd: advertises PluginCapability_Service_CONTROLLER_SERVICE 2018-06-13 15:14:15 +08:00
gman
8c53b5eb79 cephfs: Identity Service advertises PluginCapability_Service_CONTROLLER_SERVICE 2018-06-12 17:09:44 +02:00
gman
0cc1e06beb cephfs: createCephUser needs admin credentials 2018-06-12 17:08:14 +02:00
gman
b260bff659 cephfs: CreateVolume() needs ceph config 2018-06-12 17:07:20 +02:00
gman
2fcc252f5c cephfs: pass volume UUIDs where needed 2018-06-12 17:05:42 +02:00
gman
f45ddd7c9d cephfs: cephuser: set config and admin explicitly when creating/deleting users 2018-06-12 17:03:45 +02:00
gman
cc88d2fa09 cephfs: cephconf: include volume UUID in keyrings/secrets 2018-06-12 17:02:14 +02:00
gman
0ba3174bbc cephfs/NodePublishVolume: fix error message 2018-05-23 10:28:25 +02:00
gman
1a7b365b95 cephfs: ceph config filename now incorporates the volume UUID 2018-05-18 18:17:37 +02:00
gman
bf89151b87 cephfs: ceph.conf is created in NodePublishVolume instead of CreateVolume 2018-05-18 18:15:37 +02:00
gman
77469c8370 cephfs/volumecache: fixed error msg 2018-04-20 16:24:13 +02:00
gman
8844452453 cephfs/nodeserver: create a new user if necessary; updated NodeUnpublishVolume 2018-04-13 15:53:43 +02:00
gman
a2160e88a7 cephfs/controllerserver: create volume if provisionVolume=true; implemented DeleteVolume 2018-04-13 14:54:40 +02:00
gman
886fdccb9b cephfs: added mounter probing and --volumemounter cmd arg 2018-04-13 14:53:17 +02:00
gman
b7d856e562 cephfs/volume: added createVolume and purgeVolume 2018-04-13 14:49:49 +02:00