Commit Graph

46 Commits

Author SHA1 Message Date
Madhu Rajanna
6aac399075 Change the logic of locking
if any ongoing operation is seen, we
have to return an Aborted error message

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-09-20 07:37:17 +00:00
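The change above boils down to returning the gRPC Aborted code when a request races with an operation already in progress on the same resource, so the CO retries later. A minimal sketch of that pattern using the standard grpc-go status/codes packages; the type and method names are illustrative, not the driver's actual API:

```go
package main

import (
	"sync"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// inFlight tracks operations currently running per volume ID.
type inFlight struct {
	mu      sync.Mutex
	ongoing map[string]bool
}

func newInFlight() *inFlight {
	return &inFlight{ongoing: make(map[string]bool)}
}

// begin returns a gRPC Aborted error if another operation on the same
// volume is already in progress, so the caller (the CO) can retry later.
func (f *inFlight) begin(volumeID string) error {
	f.mu.Lock()
	defer f.mu.Unlock()
	if f.ongoing[volumeID] {
		return status.Errorf(codes.Aborted, "an operation on volume %s is already in progress", volumeID)
	}
	f.ongoing[volumeID] = true
	return nil
}

// end marks the operation on volumeID as finished.
func (f *inFlight) end(volumeID string) {
	f.mu.Lock()
	defer f.mu.Unlock()
	delete(f.ongoing, volumeID)
}
```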
Daniel-Pivonka
01a78cace5 switch cephfs, utils, and csicommon to the new logging system
Signed-off-by: Daniel-Pivonka <dpivonka@redhat.com>
2019-08-29 14:04:31 +00:00
Madhu Rajanna
3af364e7b5 move to standard context package
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-26 06:19:24 +00:00
ShyamsundarR
bd204d7d45 Use --keyfile option to pass keys to all Ceph CLIs
Every Ceph CLI that is invoked at present passes the key via the
--key option, and hence the key is exposed and can be seen on
the host using a ps command or similar means.

This commit addresses the issue by stashing the key in a tmp
file created on a tmpfs (or an emptyDir backed by memory), and
then passing that tmp file to the --keyfile option of every CLI
that is invoked.

This prevents the key from being visible as part of the argument list
of the invoked program on the system.

Fixes: #318

Signed-off-by: ShyamsundarR <srangana@redhat.com>
2019-07-25 12:46:15 +00:00
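A minimal sketch of the approach described above: write the key to a temporary file (expected to live on a tmpfs-backed directory) and hand that file to the CLI via --keyfile, so the key never appears in the process argument list. The helper name, binary, and directory handling are illustrative assumptions, not the commit's actual code:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runRBDWithKeyFile stashes the Ceph key in a temp file under tmpDir
// (assumed to be a tmpfs / memory-backed emptyDir) and appends --keyfile
// to the rbd invocation instead of passing --key on the command line.
func runRBDWithKeyFile(key, tmpDir string, args ...string) ([]byte, error) {
	f, err := os.CreateTemp(tmpDir, "ceph-key-")
	if err != nil {
		return nil, err
	}
	// Remove the stashed key as soon as the CLI returns.
	defer os.Remove(f.Name())

	if _, err := f.WriteString(key); err != nil {
		f.Close()
		return nil, err
	}
	if err := f.Close(); err != nil {
		return nil, err
	}

	args = append(args, "--keyfile="+f.Name())
	out, err := exec.Command("rbd", args...).CombinedOutput()
	if err != nil {
		return out, fmt.Errorf("rbd %v failed: %w", args, err)
	}
	return out, nil
}
```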
Poornima G
c2835183e5 Remove user creation for every volume
Currently, the provisioner creates a user for every volume and the
nodeplugin uses this user to mount that volume. But the nodeplugin and
provisioner already have admin credentials, so use the admin credentials
to mount the volume and get rid of user creation for each volume.

Signed-off-by: Poornima G <pgurusid@redhat.com>
2019-07-25 10:59:42 +00:00
Poornima G
0d566ee30c Backward compatibility for deleting and mounting old volumes
Signed-off-by: Poornima G <pgurusid@redhat.com>
2019-07-12 05:42:41 +00:00
ShyamsundarR
c4a3675cec Move locks to more granular locking than CPU count based
As detailed in issue #279, the current lock scheme uses hash
buckets whose count equals the number of CPUs. This causes a lot
of contention when parallel requests are made to the CSI plugin.
To reduce lock contention, this commit introduces granular locks
per identifier.

The commit also changes the timeout for gRPC requests to Create
and Delete volumes, as the current timeout is 10s (the kubernetes
documentation says 15s, but the code defaults to 10s). A virtual
setup at times takes about 12-15s to complete a request, which leads
to unwanted retries of the same request; hence the timeout is
increased to let operations complete with minimal retries.

Test results for creating PVCs before and after these changes:

Before:
Default master code + sidecar provisioner --timeout option set
to 30 seconds

20 PVCs
Creation: 3 runs, 396/391/400 seconds
Deletion: 3 runs, 218/271/118 seconds
  - One run stalled for more than 8 minutes and was cancelled

After:
Current commit + sidecar provisioner --timeout option set to 30 sec
20 PVCs
Creation: 3 runs, 42/59/65 seconds
Deletion: 3 runs, 32/32/31 seconds

Fixes: #279
Signed-off-by: ShyamsundarR <srangana@redhat.com>
2019-07-01 14:10:14 +00:00
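A minimal sketch of per-identifier locking, as opposed to hashing identifiers into a fixed, CPU-count sized set of buckets. The type and method names are illustrative only, not the commit's actual implementation:

```go
package main

import "sync"

// idLocker hands out one mutex per identifier (volume, snapshot, ...), so
// requests for different identifiers never contend with each other.
type idLocker struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex
}

func newIDLocker() *idLocker {
	return &idLocker{locks: make(map[string]*sync.Mutex)}
}

// lock acquires (creating on first use) the mutex dedicated to id; the
// caller unlocks the returned mutex once its gRPC request completes.
// Entries are never reaped here, which a real implementation would handle.
func (l *idLocker) lock(id string) *sync.Mutex {
	l.mu.Lock()
	m, ok := l.locks[id]
	if !ok {
		m = &sync.Mutex{}
		l.locks[id] = m
	}
	l.mu.Unlock()

	m.Lock()
	return m
}
```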
ShyamsundarR
c5762b6b5c Modify RBD plugin to use a single ID and move the id and key into the secret
The RBD plugin needs only a single ID to manage images and operations against a
pool mentioned in the storage class. The current scheme of 2 IDs is hence not
needed and is removed in this commit.

Further, unlike the CephFS plugin, the RBD plugin splits the user ID and the key
between the storage class and the secret respectively. Also, the parameter name
for the key in the secret is noted in the storage class, making it a variant and
hampering usability/comprehension. This is also fixed by moving both the ID and
the key into the secret and not retaining them in the storage class, as CephFS does.

Fixes #270

Testing done:
- Basic PVC creation and mounting

Signed-off-by: ShyamsundarR <srangana@redhat.com>
2019-06-24 13:46:14 +00:00
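A minimal sketch of what "moving the ID and the key into the secret" looks like on the plugin side: both values are read from the per-request secret map rather than splitting the ID into the storage class. The "userID"/"userKey" field names are assumptions for illustration, not necessarily the parameter names the plugin uses:

```go
package main

import "errors"

// credsFromSecret extracts the single RBD user ID and its key from the
// CSI secret map that accompanies each request. Field names are illustrative.
func credsFromSecret(secrets map[string]string) (id, key string, err error) {
	id = secrets["userID"]
	if id == "" {
		return "", "", errors.New("missing user ID in secret")
	}
	key = secrets["userKey"]
	if key == "" {
		return "", "", errors.New("missing user key in secret")
	}
	return id, key, nil
}
```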
ShyamsundarR
b9cd0e18ad Make CephFS plugin stateless reusing RADOS based journal scheme
This is a part of the stateless set of commits for CephCSI.

This commit removes the dependency on config maps for storing CephFS provisioned
volumes, and instead relies on RADOS based objects and keys, and the required
CSI VolumeID encoding, to detect the provisioned volumes.

Changes:
- Provide backward compatibility for volumes provisioned by older plugin versions (1.0.0 or older)
- Remove Create/Delete support for statically provisioned volumes (fixes #382)
- Added namespace support to RADOS OMaps and used the same to store RADOS CSI objects and keys in the CephFS metadata pool
- Added support to mention fsname for CephFS provisioning (fixes #359)
- Changed field name in CSI Identifier to 'location', to denote a pool or fscid
- Updated mounter cache to use new scheme
- Required Helm manifests are updated
- Required documentation and other manifests are updated
- Made the driver option 'metadatastorage' optional, as fresh installs do not need to specify it

Testing done:
- Create/Mount/Delete PVC
- Create/Delete 5 PVCs
- Mount version 1.0.0 PVC
- Delete version 1.0.0 PV
- Mount Statically defined PV/PVC/Pod
- Mount Statically defined version 1.0.0 PV/PVC/Pod
- Delete Statically defined version 1.0.0 PV/PVC/Pod
- Node restart when mounted to test mountcache
- Use InstanceID other than 'default'
- RBD basic round of tests, as namespace is added to OMaps
- csitest against ceph-fs plugin
  - NOTE: the CephFS plugin still does not detect and address already created
  volumes that are of a different size
- Test not providing any value to the metadata storage parameter

Signed-off-by: ShyamsundarR <srangana@redhat.com>
2019-05-30 06:20:35 -04:00
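The RADOS based journal replaces the config map: volume names and IDs are stored as OMap keys on objects in the metadata pool, inside a dedicated namespace. A minimal sketch of that idea at the CLI level, shelling out to the rados tool; the object, key, and parameter names are illustrative, and the plugin's real journal code differs:

```go
package main

import (
	"fmt"
	"os/exec"
)

// setOMapKey stores a key/value pair as an OMap entry on a RADOS object in
// the given pool and namespace, e.g. mapping a CSI request name to the
// generated volume ID. Authentication flags are kept minimal for brevity.
func setOMapKey(monitors, id, keyFile, pool, namespace, object, key, value string) error {
	args := []string{
		"-m", monitors,
		"--id", id,
		"--keyfile", keyFile,
		"-p", pool,
		"--namespace", namespace,
		"setomapval", object, key, value,
	}
	if out, err := exec.Command("rados", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("rados setomapval failed: %v: %s", err, out)
	}
	return nil
}
```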
Madhu Rajanna
f60a07ae82 update vendor to latest kubernetes 1.14.0
some of the kubernetes-independent
packages have been moved out of the tree into
new projects.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-05-14 06:56:56 +00:00
wilmardo
891daa9375 Replaces the references to the Kubernetes Authors with the Ceph-CSI authors 2019-04-03 11:14:08 +02:00
Madhu Rajanna
fdc0d8255a move csi-common to ceph-csi
kubernetes/driver/csi-common is no
longer maintained.

Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
2019-02-27 14:17:19 +05:30
gman
143003bcfd cephfs: added locks for {Create,Delete}Volume, NodeStageVolume 2019-02-26 11:06:25 +01:00
gman
ce3affcc6a cephfs: DeleteVolume should assume the volume to be already deleted if metadata doesn't exist 2019-02-25 18:07:28 +01:00
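A minimal sketch of the idempotent delete behaviour described in the commit above, assuming a hypothetical metadata lookup helper: if the volume's metadata is gone, DeleteVolume treats the volume as already deleted and reports success instead of an error:

```go
package main

import (
	"errors"
	"fmt"
)

// errMetadataNotFound is returned by the (hypothetical) metadata store when
// no record exists for the volume.
var errMetadataNotFound = errors.New("volume metadata not found")

// deleteVolume removes the backing volume; if its metadata no longer exists,
// the volume is assumed to be already deleted and the call succeeds.
func deleteVolume(volumeID string, loadMetadata func(string) error, purge func(string) error) error {
	if err := loadMetadata(volumeID); err != nil {
		if errors.Is(err, errMetadataNotFound) {
			// Nothing to do: treat the delete as already done.
			return nil
		}
		return fmt.Errorf("loading metadata for %s: %w", volumeID, err)
	}
	return purge(volumeID)
}
```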
gman
96bf4a98bd cephfs: don't need to store keyrings anymore 2019-02-14 13:55:51 +00:00
gman
a63b06a620 cephfs: specify monitors explicitly 2019-02-13 14:09:33 +00:00
Humble Chirammal
c9da8469ad migrate cephfs code to use klog instead of glog
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2019-02-05 12:09:04 +00:00
Madhu Rajanna
7a0c233c27 Fix issues found in gometalinter
Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
2019-01-29 11:20:35 +05:30
Madhu Rajanna
25642fe404 Add method comments
Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
2019-01-29 11:20:35 +05:30
Madhu Rajanna
36f99e36ca Fix unparam issues
Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
2019-01-25 14:16:03 +05:30
Madhu Rajanna
284c5801c3 Fix golint issue
pkg/rbd/rbd.go:67:65: warning: exported func NewNodeServer
returns unexported type *rbd.nodeServer, which can be
annoying to use (golint)

Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
2019-01-25 14:16:03 +05:30
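For context, the golint warning quoted above fires because callers outside the package cannot name the returned type. One common fix, sketched here with placeholder names rather than the actual change made in this commit, is to export the concrete type the constructor returns:

```go
package rbd

// Before: golint warns because the exported constructor returns the
// unexported *nodeServer, which callers outside the package cannot name.
//
//   func NewNodeServer() *nodeServer { ... }

// NodeServer is exported so that NewNodeServer's return type is usable
// (and nameable) from other packages.
type NodeServer struct {
	// fields elided for the sketch
}

// NewNodeServer now returns an exported type, silencing the golint warning.
func NewNodeServer() *NodeServer {
	return &NodeServer{}
}
```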
Madhu Rajanna
5eb1974e38 Fix vetshadow issues
Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
2019-01-25 14:15:25 +05:30
Madhu Rajanna
15b5b0112e rename Id to ID to fix lint issue
Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
2019-01-25 14:14:48 +05:30
Huamin Chen
0151792684 review feedback: make monValueFromSecret override monitors if both are set
Signed-off-by: Huamin Chen <hchen@redhat.com>
2019-01-21 09:21:03 -05:00
Huamin Chen
db463edeef allow ceph mons to be stored in the secret, so that when the mons change, the cephfs driver can get the latest mons and override the old ones
Signed-off-by: Huamin Chen <hchen@redhat.com>
2019-01-18 10:27:48 -05:00
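Taken together, the two commits above let the monitor list live in the secret and, when both sources are set, prefer the secret value. A minimal sketch of that selection logic; the "monitors" field name is illustrative:

```go
package main

// getMonitors prefers the monitor list stored in the secret (so a change of
// Ceph mons only requires updating the secret) and falls back to the value
// from the storage class / volume options otherwise.
func getMonitors(secrets map[string]string, fromStorageClass string) string {
	if mons, ok := secrets["monitors"]; ok && mons != "" {
		return mons
	}
	return fromStorageClass
}
```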
Huamin Chen
85b8415024 Merge branch 'master' into master-to-1.0 2019-01-15 16:15:30 +00:00
mickymiek
62d65ad0cb cm metadata persist for rbd and cephfs 2019-01-14 20:15:09 +00:00
gman
29bdeb2261 cephfs: don't set quotas for zero-sized volumes 2019-01-14 20:15:09 +00:00
Mike Cronce
d9fbdeb517 pkg/cephfs: Use request name to generate deterministic volume names 2018-12-04 21:39:00 -05:00
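A minimal sketch of deriving a deterministic volume name from the CSI request name, so a retried CreateVolume maps back to the same backing volume; the hashing scheme and prefix here are illustrative, not the exact scheme used by the commit:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// makeVolumeName deterministically derives a backend volume name from the
// CSI request name: the same request name always yields the same volume.
func makeVolumeName(prefix, reqName string) string {
	sum := sha256.Sum256([]byte(reqName))
	// Truncate the digest to keep the name short; good enough for a sketch.
	return fmt.Sprintf("%s-%x", prefix, sum[:16])
}
```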
Mike Cronce
04872e5ebf Merge branch 'master' of github.com:ceph/ceph-csi into csi-v1.0.0 2018-12-04 16:28:37 -05:00
gman
ed811e0506 cephfs: don't set quotas for zero-sized volumes 2018-12-01 10:39:09 +01:00
Mike Cronce
41b30eb6c2 pkg/cephfs: Updated for new versions of CSI/Kubernetes dependencies 2018-11-24 13:48:36 -05:00
Kenjiro Nakayama
c1e072de0b Fix misspelling of "successfully" 2018-09-21 23:08:23 +09:00
gman
3c11129149 cephfs: ceph user is created in CreateVolume and deleted in DeleteVolume 2018-08-28 10:21:11 +02:00
gman
1c38412e39 cephfs: CSI 0.3.0; NodeStageVolume/NodeUnstageVolume; refactoring 2018-08-08 14:47:25 +02:00
Masaki Kimura
753dbc2303 Fix Cephfs plugin to return false to ValidateVolumeCapabilities if Block volume is specified
CephFS doesn't have a feature to provide Block Volumes; therefore it should return false from ValidateVolumeCapabilities if a Block Volume is specified.

Fixes #44
2018-07-10 16:48:55 +00:00
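A minimal sketch of the capability check, assuming the CSI spec's Go bindings (the exact package path differs between the 0.x spec in use at the time and 1.x): if any requested capability asks for a raw block volume, the plugin reports the request as unsupported, since CephFS only provides filesystem volumes.

```go
package main

import (
	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// hasBlockCapability reports whether any requested capability asks for a
// raw block volume, which the CephFS plugin cannot provide; the plugin's
// ValidateVolumeCapabilities response would then mark the request unsupported.
func hasBlockCapability(caps []*csi.VolumeCapability) bool {
	for _, c := range caps {
		if c.GetBlock() != nil {
			return true
		}
	}
	return false
}
```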
gman
675ee93e46 cephfs: DeleteVolume() calls are allowed only for volumes with provisionVolume=true parameter 2018-06-13 16:29:10 +02:00
gman
b260bff659 cephfs: CreateVolume() needs ceph config 2018-06-12 17:07:20 +02:00
gman
bf89151b87 cephfs: ceph.conf is created in NodePublishVolume instead of CreateVolume 2018-05-18 18:15:37 +02:00
gman
a2160e88a7 cephfs/controllerserver: create volume if provisionVolume=true; implemented DeleteVolume 2018-04-13 14:54:40 +02:00
gman
4c5c67b8f9 cephfs: check volumeOptions.Mounter and choose ceph-fuse or mount.ceph accordingly 2018-03-22 14:14:57 +01:00
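A minimal sketch of selecting the mount helper based on the volume's Mounter option, as described in the commit above: ceph-fuse for the FUSE client versus mount.ceph for the kernel client. The "fuse"/"kernel" option values and the fallback are assumptions for this sketch:

```go
package main

import "fmt"

// mounterBinary picks the mount helper based on the volume's Mounter option.
// The "fuse"/"kernel" values are assumptions for illustration only.
func mounterBinary(mounter string) (string, error) {
	switch mounter {
	case "fuse":
		return "ceph-fuse", nil
	case "kernel", "":
		return "mount.ceph", nil
	default:
		return "", fmt.Errorf("unknown mounter %q", mounter)
	}
}
```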
gman
9fefc270d8 cephfs/controllerserver: write ceph.conf 2018-03-20 16:40:30 +01:00
gman
66c16e35e6 cephfs: refactoring for CSI 0.2.0 part 1 2018-03-13 10:25:50 +01:00
gman
06f411bbf3 cephfs: volumes are now created for separate ceph users with limited access to fs
Uses a slightly modified version of https://github.com/kubernetes-incubator/external-storage/blob/master/ceph/cephfs/cephfs_provisioner/cephfs_provisioner.py
This should be rewritten properly in Go, but it works for now - for demonstration purposes

TODO:
* readOnly is not taken into account
* controllerServer.DeleteVolume does nothing
2018-03-09 17:05:19 +01:00
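For reference, creating a per-volume Ceph user restricted to the volume's path comes down to a 'ceph auth get-or-create' call with narrow caps, which is what the referenced provisioner script does. A hedged sketch that shells out to the ceph CLI; the capability strings and naming are illustrative, not the exact ones the script uses:

```go
package main

import (
	"fmt"
	"os/exec"
)

// createVolumeUser creates (or fetches) a Ceph client restricted to a single
// CephFS path and data pool. The capability strings below are one plausible
// restriction, not necessarily those used by the provisioner script.
func createVolumeUser(user, path, pool string) ([]byte, error) {
	args := []string{
		"auth", "get-or-create", "client." + user,
		"mon", "allow r",
		"mds", fmt.Sprintf("allow rw path=%s", path),
		"osd", fmt.Sprintf("allow rw pool=%s", pool),
	}
	out, err := exec.Command("ceph", args...).CombinedOutput()
	if err != nil {
		return out, fmt.Errorf("ceph auth get-or-create failed: %v", err)
	}
	return out, nil
}
```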
gman
aa023ea405 cephfs: set access mode to MULTI_NODE_MULTI_WRITER; controller (un)publish is not needed 2018-03-07 14:19:08 +01:00
gman
1c1b0eab1e WIP cephfs CSI plugin 2018-03-05 13:21:30 +01:00