Commit Graph

98 Commits

Author SHA1 Message Date
Madhu Rajanna
9ddc265c10 reject block volume creation in cephfs
Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
2019-01-16 18:17:14 +05:30
Huamin Chen
aed7506d88 fix merge leftovers; use canary driver-registrar image, as v1.0.0 is not hosted in quay.io
Signed-off-by: Huamin Chen <hchen@redhat.com>
2019-01-15 13:31:06 -05:00
Huamin Chen
e46099a504 reconcile merge
Signed-off-by: Huamin Chen <hchen@redhat.com>
2019-01-15 16:20:41 +00:00
Huamin Chen
85b8415024 Merge branch 'master' into master-to-1.0 2019-01-15 16:15:30 +00:00
mickymiek
b387daaabf remove useless comment 2019-01-14 20:15:09 +00:00
mickymiek
32cb974b8c gofmt 2019-01-14 20:15:09 +00:00
mickymiek
62d65ad0cb cm metadata persist for rbd and cephfs 2019-01-14 20:15:09 +00:00
Patrick Ohly
51d6ac6f55 rbd: refuse to create block volumes
Without this check, the driver fails one of the E2E storage tests in
Kubernetes 1.13: provisioning a block volume is expected to fail in
e689d515f7/test/e2e/storage/testsuites/volumemode.go (L329-L330)
2019-01-14 20:15:09 +00:00
gman
29bdeb2261 cephfs: don't set quotas for zero-sized volumes 2019-01-14 20:15:09 +00:00
Patrick Ohly
403cad682c rbd: protect against concurrent gRPC calls
The timeout value in external-provisioner is fairly low. It's not
uncommon that it times out and retries before the rbdplugin is done
with CreateVolume. rbdplugin has to serialize calls and ensure that
they are idempotent to deal with this.
2019-01-14 20:15:09 +00:00
Mike Cronce
2c3961b960 pkg/rbd/rbd.go: Fix "go vet" errors 2018-12-04 21:54:00 -05:00
Mike Cronce
23a4126aed pkg/rbd/controllerserver.go: gofmt 2018-12-04 21:44:04 -05:00
Mike Cronce
d9fbdeb517 pkg/cephfs: Use request name to generate deterministic volume names 2018-12-04 21:39:00 -05:00
Mike Cronce
04872e5ebf Merge branch 'master' of github.com:ceph/ceph-csi into csi-v1.0.0 2018-12-04 16:28:37 -05:00
Mike Cronce
22e23640a4 pkg/rbd/rbd.go: Updated PluginFolder to use new CSI 1.x directory 2018-12-04 15:38:09 -05:00
Mike Cronce
37caeb5b2c pkg/cephfs/driver.go: Updated PluginFolder to use new CSI 1.x directory 2018-12-04 15:38:09 -05:00
gman
ed811e0506 cephfs: don't set quotas for zero-sized volumes 2018-12-01 10:39:09 +01:00
Mike Cronce
af3083f717 pkg: Updated "version" variables from 0.3.0 to 1.0.0 2018-11-29 13:15:52 -05:00
Mike Cronce
93cb8a04d7 pkg/rbd: Updated for new versions of CSI/Kubernetes dependencies 2018-11-24 14:18:24 -05:00
Mike Cronce
41b30eb6c2 pkg/cephfs: Updated for new versions of CSI/Kubernetes dependencies 2018-11-24 13:48:36 -05:00
Patrick Ohly
720ad4afeb rbd: protect against concurrent gRPC calls
The timeout value in external-provisioner is fairly low. It's not
uncommon that it times out and retries before the rbdplugin is done
with CreateVolume. rbdplugin has to serialize calls and ensure that
they are idempotent to deal with this.
2018-10-26 15:29:48 +02:00
Huamin Chen
188cdd1d68 Merge pull request #89 from rootfs/containerized
support nsmounter when running in containerized mode
2018-10-15 20:25:40 -04:00
Huamin Chen
3436a094f7 support nsmounter when running in containerized mode
Signed-off-by: Huamin Chen <hchen@redhat.com>
2018-10-15 14:59:41 +00:00
Patrick Ohly
25e3a961c3 rbdplugin: idempotent DeleteVolume
When the initial DeleteVolume times out (as it does on slow clusters
due to the low 10 second limit), the external-provisioner calls it
again. The CSI standard requires the second call to succeed if the
volume has been deleted in the meantime. This didn't work because
DeleteVolume returned an error when failing to find the volume info
file:

  rbdplugin: E1008 08:05:35.631783       1 utils.go:100] GRPC error: rbd: open err /var/lib/kubelet/plugins/csi-rbdplugin/controller/csi-rbd-622a252c-cad0-11e8-9112-deadbeef0101.json/open /var/lib/kubelet/plugins/csi-rbdplugin/controller/csi-rbd-622a252c-cad0-11e8-9112-deadbeef0101.json: no such file or directory

The fix is to treat a missing volume info file as "volume already
deleted" and return success. To detect this, the original os error
must be wrapped, otherwise the caller of loadVolInfo cannot determine
the root cause.

Note that further work may be needed to make the driver really
resilient, for example there are probably concurrency issues.
But for now this fixes: #82
2018-10-09 12:08:56 +02:00
Huamin Chen
239f295dd1 Merge pull request #79 from rootfs/rbd-nbd
allow monitors to be embedded in credential secret
2018-09-24 08:58:09 -04:00
Huamin Chen
d5b7543565 allow monitors to be embedded in credential secret
Signed-off-by: Huamin Chen <hchen@redhat.com>
2018-09-21 14:43:01 +00:00
Kenjiro Nakayama
c1e072de0b Fix misspelling of "successfully" 2018-09-21 23:08:23 +09:00
Huamin Chen
30a5d9a6e7 add rbd-nbd mounter in storage class
Signed-off-by: Huamin Chen <hchen@redhat.com>
2018-09-18 14:09:12 +00:00
Huamin Chen
6f3625b11e review feedback
Signed-off-by: Huamin Chen <hchen@redhat.com>
2018-09-18 13:10:28 +00:00
Huamin Chen
8955eb03bc support rbd-nbd
Signed-off-by: Huamin Chen <hchen@redhat.com>
2018-09-17 18:12:22 +00:00
gman
3c11129149 cephfs: ceph user is created in CreateVolume and deleted in DeleteVolume 2018-08-28 10:21:11 +02:00
gman
9c3389d784 cephfs/util: log execCommandJson; cache mount.New() instance 2018-08-28 10:19:28 +02:00
gman
12958d0a9a cephfs/cephuser: fixed getCephUser
output from `ceph auth -f json get` contains non-JSON data at the beginning;
the workaround is to search for the start of valid JSON data (it starts with "[{")
and read from there
2018-08-28 10:13:53 +02:00
gman
6ddf98addf cephfs: cache available volume mounters 2018-08-14 16:48:30 +02:00
gman
c515a013d3 cephfs: volumemounter probe
The driver will now probe for either the ceph-fuse or kernel client every
time it's about to mount a cephfs volume.

This also affects CreateVolume/DeleteVolume where the mounting
was hard-coded to ceph kernel client till now - mounter configuration
and probing are now honored.
2018-08-14 11:19:41 +02:00
Huamin Chen
43b9f9aeaa Merge pull request #61 from sngchlko/support-snapshot-in-rbdplugin
Support snapshot in rbdplugin
2018-08-09 09:31:31 -04:00
Seungcheol Ko
38aa575925 check snapshot feature 2018-08-09 22:07:13 +09:00
Seungcheol Ko
4312907f7b remove the snapshot if can't store snapshot information 2018-08-09 22:07:06 +09:00
Seungcheol Ko
b0e68a52e0 Refactoring using users 2018-08-09 22:07:00 +09:00
Seungcheol Ko
7d90783f03 fix nit 2018-08-09 22:06:51 +09:00
Róbert Vašek
069140e74a Merge pull request #65 from clkao/execCommandJson-error
Log error output for execCommandJson as well.
2018-08-08 17:58:52 +02:00
Chia-liang Kao
a1de128a81 Log error output for execCommandJson as well. 2018-08-08 23:39:19 +08:00
gman
1c38412e39 cephfs: CSI 0.3.0; NodeStageVolume/NodeUnstageVolume; refactoring 2018-08-08 14:47:25 +02:00
Seungcheol Ko
f0fba1240a Revert "Implement NodeGetInfo for csi spec 3.0"
This reverts commit c93466b009.
2018-08-08 20:22:59 +09:00
Seungcheol Ko
b1ccdbb154 Support snapshot feature in rbdplugin 2018-08-08 17:16:07 +09:00
Seungcheol Ko
c93466b009 Implement NodeGetInfo for csi spec 3.0 2018-08-08 14:41:45 +09:00
Huamin Chen
4331960ab3 Merge pull request #55 from nak3/nonempty
Add nonempty option to ceph-fuse to support ReadWriteMany
2018-08-07 14:14:57 -04:00
Kenjiro Nakayama
e8784ec094 Logging command and options for debug friendly
Some commands were executed in ceph-csi, but users could not see which
commands were run with which options. That made it difficult to debug
when a command did not work as expected.

This patch adds logging of which command and options are executed.
2018-07-31 15:31:11 +09:00
Kenjiro Nakayama
b649d4f1f6 Add nonempty option to ceph-fuse to support ReadWriteMany
A fuse mount does not allow mounting a directory if it already contains
files. Because of this, a scaled pod using cephfs currently fails to
mount via ceph-fuse.

This patch adds the nonempty option to the ceph-fuse command to support
ReadWriteMany with ceph-fuse.
2018-07-31 14:44:33 +09:00
Seungcheol Ko
bc34bd389e support image features for csi-rbdplugin 2018-07-21 00:59:54 +09:00