Commit Graph

1869 Commits

Author SHA1 Message Date
Madhu Rajanna
3af364e7b5 move to standard context package
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-26 06:19:24 +00:00
Madhu Rajanna
38ca08bf65 Context based logging for rbd
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-26 06:19:24 +00:00
Daniel-Pivonka
81c28d6cb0 implement klog wrapper
Signed-off-by: Daniel-Pivonka <dpivonka@redhat.com>
2019-08-21 14:36:41 +00:00
Daniel-Pivonka
aa74f8c87f Implement context based logging
Signed-off-by: Daniel-Pivonka <dpivonka@redhat.com>
2019-08-21 14:36:41 +00:00
wilmardo
3111e7712a feat: Adds Ceph logo as icon for Helm charts
Signed-off-by: wilmardo <info@wilmardenouden.nl>
2019-08-20 05:34:28 +00:00
Humble Devassy Chirammal
3f32dea047
Merge pull request #551 from humblec/dockerfile
Fix the vulnerabilities in the image.
2019-08-20 10:28:33 +05:30
Madhu Rajanna
e557438f87 unmap rbd image if connection timeout.
Sometimes rbd images remain mapped even though a
connection timeout error occurred; this change
tries to unmap the image when the received error
message indicates a connection timeout. This fixes
stale maps and the rbd image deletion issue

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-19 10:54:17 +00:00
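
A minimal Go sketch of the unmap-on-timeout behaviour described above; the helper name and the exact "connection timed out" substring are assumptions for illustration, not the actual ceph-csi code:

    package rbdutil

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // mapImage runs "rbd map" and, when the combined output mentions a
    // connection timeout, attempts a best-effort unmap so that no stale
    // mapping is left behind on the node.
    func mapImage(pool, image string) (string, error) {
        spec := pool + "/" + image
        out, err := exec.Command("rbd", "map", spec).CombinedOutput()
        if err != nil {
            if strings.Contains(string(out), "connection timed out") {
                // ignore the unmap error itself; this is only cleanup
                _ = exec.Command("rbd", "unmap", spec).Run()
            }
            return "", fmt.Errorf("rbd map %s failed: %v, output: %s", spec, err, out)
        }
        // rbd map prints the mapped device path, e.g. /dev/rbd0
        return strings.TrimSpace(string(out)), nil
    }
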
Madhu Rajanna
0da4bd5151 start controller or node server based on config
if both the controller and nodeserver flags are
set or unset, cephcsi will start both servers;

if only one flag is set, it will start only the
relevant service.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-19 06:11:43 +00:00
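
A short Go sketch of the flag logic described above (the flag names and start functions are assumptions, not the real cephcsi entry point):

    package main

    import "flag"

    var (
        isControllerServer = flag.Bool("controllerserver", false, "start the controller server")
        isNodeServer       = flag.Bool("nodeserver", false, "start the node server")
    )

    func main() {
        flag.Parse()

        switch {
        case *isControllerServer && !*isNodeServer:
            startControllerServer()
        case *isNodeServer && !*isControllerServer:
            startNodeServer()
        default:
            // both flags set, or both unset: run controller and node service together
            startControllerServer()
            startNodeServer()
        }
    }

    func startControllerServer() { /* placeholder for the controller (provisioner) service */ }
    func startNodeServer()       { /* placeholder for the node service */ }
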
Madhu Rajanna
89732d923f move flag configuration variable to util
remove unwanted checks
stop deriving the drivertype from the binary name

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-19 06:11:43 +00:00
Madhu Rajanna
2b1355061e rename rbd.go to driver.go
to keep the code structure similar to cephfs,
rbd.go is renamed to driver.go

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-19 06:11:43 +00:00
Humble Chirammal
0fc7f4513b Snapshotter update
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2019-08-19 05:06:42 +00:00
wilmardo
0a90762970 fix: Adds liveness sidecar to v1.14+ helm charts
Signed-off-by: wilmardo <info@wilmardenouden.nl>
2019-08-16 08:38:49 +00:00
wilmardo
30fb7de118 feat: Implement helm lint
Signed-off-by: wilmardo <info@wilmardenouden.nl>
2019-08-16 07:38:33 +00:00
Humble Chirammal
6950ad468f Fix the vulnerabilities in the image.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2019-08-14 17:37:33 +05:30
Humble Chirammal
8fddb53931 Bump the kube dependency to 1.15.2
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2019-08-14 07:40:50 +00:00
Humble Chirammal
9a3a04b180 Add container image compatibility matrix.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2019-08-14 06:48:43 +00:00
Daniel-Pivonka
d621a58207 prometheus liveness probe sidecar
Signed-off-by: Daniel-Pivonka <dpivonka@redhat.com>
2019-08-13 17:51:41 +00:00
Madhu Rajanna
2ca575b99d Wrap error if failed to fetch mon
This will help the user check what the
actual error is: whether the config file
has an issue or the clusterid is
not valid.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-13 17:16:27 +00:00
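
A minimal sketch of the kind of error wrapping described above; the function and parameter names are illustrative, not the ceph-csi util API:

    package util

    import "fmt"

    // getMonitors wraps the lookup failure so the user can tell whether the
    // config file is malformed or the clusterID simply is not present in it.
    func getMonitors(clusterID string, lookup func(string) (string, error)) (string, error) {
        mons, err := lookup(clusterID)
        if err != nil {
            return "", fmt.Errorf("failed to fetch monitor list using clusterID (%s): %v", clusterID, err)
        }
        return mons, nil
    }
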
wilmardo
cba6115e30 Fix 1.13 charts
Signed-off-by: wilmardo <info@wilmardenouden.nl>
2019-08-13 16:42:15 +00:00
wilmardo
c739ce9d5e Removes last reference to node-publish-secret
Signed-off-by: wilmardo <info@wilmardenouden.nl>
2019-08-13 16:42:15 +00:00
wilmardo
ca5fbc180c Rework of helm charts
Signed-off-by: wilmardo <info@wilmardenouden.nl>
2019-08-13 16:42:15 +00:00
ShyamsundarR
20d336fca3 Add support to use ceph manager rbd command to delete an image
Image deletion takes time proportional to the size of the
image. Hence, the ceph manager is enhanced to support async
deletion of an image, or rather passing the task of
deleting an image to the ceph manager.

This commit leverages the ceph manager enhancement in the CSI code.

NOTE: This is tested against a ceph cluster that is running
Ceph master version of the code. Once other releases
catch up in terms of the feature, the optimization would be
available to the CSI driver as well.

Fixes: #523
Signed-off-by: ShyamsundarR <srangana@redhat.com>
2019-08-13 16:08:22 +00:00
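
A hedged Go sketch of handing the deletion off to the ceph manager as described above. The "ceph rbd task add remove" manager command is assumed to be available only on sufficiently new Ceph releases (the commit notes it was tested against Ceph master), so the sketch falls back to a plain "rbd rm":

    package rbdutil

    import (
        "fmt"
        "os/exec"
    )

    // deleteImageAsync passes the deletion of an image to the ceph manager so
    // the CSI driver does not block for a time proportional to the image size.
    func deleteImageAsync(pool, image string) error {
        spec := pool + "/" + image
        out, err := exec.Command("ceph", "rbd", "task", "add", "remove", spec).CombinedOutput()
        if err != nil {
            // cluster without the manager enhancement: delete synchronously
            out, err = exec.Command("rbd", "rm", spec).CombinedOutput()
        }
        if err != nil {
            return fmt.Errorf("failed to delete image %s: %v, output: %s", spec, err, out)
        }
        return nil
    }
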
Madhu Rajanna
4ba2d0e10b Add xfs fstype as default type in storageclass
we have seen better performance in device
formatting and mounting by setting the fstype to xfs
instead of the default ext4.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-13 15:19:36 +00:00
Niels de Vos
31648c8feb provisioners: add reconfiguring of PID limit
The container runtime CRI-O limits the number of PIDs to 1024 by
default. When many PVCs are requested at the same time, it is possible
for the provisioner to start too many threads (or goroutines), and
executing 'rbd' commands can start to fail. In case a goroutine cannot
get started, the process panics.

The PID limit can be changed by passing an argument to kubelet, but this
will affect all pids running on a host. Changing the parameters to
kubelet is also not a very elegant solution.

Instead, the provisioner pod can change the configuration itself. The
pod is running in privileged mode and can write to /sys/fs/cgroup where
the limit is configured.

With this change, the limit is configured to 'max', just as if there is
no limit at all. The logs of the csi-rbdplugin in the provisioner pod
will reflect the change it makes when starting the service:

    $ oc -n rook-ceph logs -c csi-rbdplugin csi-rbdplugin-provisioner-0
    ..
    I0726 13:59:19.737678       1 cephcsi.go:127] Initial PID limit is set to 1024
    I0726 13:59:19.737746       1 cephcsi.go:136] Reconfigured PID limit to -1 (max)
    ..

It is possible to pass a different limit on the commandline of the
cephcsi executable. The following flag has been added:

    --pidlimit=<int>       the PID limit to configure through cgroups

This accepts special values -1 (max) and 0 (default, do not
reconfigure). Other integers will be the limit that gets configured in
cgroups.

Signed-off-by: Niels de Vos <ndevos@redhat.com>
2019-08-13 14:43:29 +00:00
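
A minimal Go sketch of the cgroup reconfiguration described above. It assumes the cgroup v1 pids controller at /sys/fs/cgroup/pids/pids.max; the real code would resolve the container's own cgroup path, and the function names here are illustrative:

    package util

    import (
        "os"
        "strconv"
        "strings"
    )

    const pidsMax = "/sys/fs/cgroup/pids/pids.max" // assumed cgroup v1 path

    // getPIDLimit returns the current limit, with -1 meaning "max" (unlimited).
    func getPIDLimit() (int, error) {
        data, err := os.ReadFile(pidsMax)
        if err != nil {
            return 0, err
        }
        v := strings.TrimSpace(string(data))
        if v == "max" {
            return -1, nil
        }
        return strconv.Atoi(v)
    }

    // setPIDLimit writes the new limit; -1 selects "max", 0 means leave it alone.
    func setPIDLimit(limit int) error {
        if limit == 0 {
            return nil
        }
        v := "max"
        if limit > 0 {
            v = strconv.Itoa(limit)
        }
        return os.WriteFile(pidsMax, []byte(v+"\n"), 0644)
    }
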
ShyamsundarR
885ec7049d Update Unstage transaction to undo steps done in Stage
In unstage we now adhere to the transaction (or order of steps)
done in Stage. To enable this we stash the image metadata
into a local file on the staging path for use with the
unstage request.

This helps in unmapping a stale map, in case the mount or
other steps in the transaction are complete.

Signed-off-by: ShyamsundarR <srangana@redhat.com>
2019-08-13 14:07:52 +00:00
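
A sketch of stashing image metadata on the staging path as described above; the struct fields and file name are assumptions, not the actual on-disk format used by ceph-csi:

    package rbdutil

    import (
        "encoding/json"
        "os"
        "path/filepath"
    )

    // stashedImage is a hypothetical subset of the metadata written during
    // NodeStageVolume so that NodeUnstageVolume can undo the staged steps
    // (e.g. unmap a stale mapping) even if later steps already completed.
    type stashedImage struct {
        Pool      string `json:"pool"`
        ImageName string `json:"imageName"`
        NbdAccess bool   `json:"nbdAccess"`
    }

    const stashFileName = "image-meta.json" // illustrative name

    func stashImageMeta(stagingPath string, meta stashedImage) error {
        data, err := json.Marshal(meta)
        if err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(stagingPath, stashFileName), data, 0600)
    }

    func readStashedImageMeta(stagingPath string) (stashedImage, error) {
        var meta stashedImage
        data, err := os.ReadFile(filepath.Join(stagingPath, stashFileName))
        if err != nil {
            return meta, err
        }
        err = json.Unmarshal(data, &meta)
        return meta, err
    }
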
ShyamsundarR
44f7b1fe4b Use "rbd device list" to list and find rbd images and their device paths
This change also starts mapping nbd-based access using the rbd CLI,
as it is a prerequisite to get device listings for nbd as well.

Signed-off-by: ShyamsundarR <srangana@redhat.com>
2019-08-13 14:07:52 +00:00
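
A sketch of finding a mapped device via "rbd device list" as described above, assuming the CLI's JSON output; the JSON field names are assumptions and should be checked against the rbd version in use:

    package rbdutil

    import (
        "encoding/json"
        "os/exec"
    )

    // rbdDevice mirrors one entry of `rbd device list --format=json`.
    type rbdDevice struct {
        Pool   string `json:"pool"`
        Name   string `json:"name"`
        Device string `json:"device"`
    }

    // findDevicePath looks up the local device path of an already mapped
    // image from the rbd CLI instead of probing sysfs.
    func findDevicePath(pool, image string) (string, bool, error) {
        out, err := exec.Command("rbd", "device", "list", "--format=json").Output()
        if err != nil {
            return "", false, err
        }
        var devices []rbdDevice
        if err := json.Unmarshal(out, &devices); err != nil {
            return "", false, err
        }
        for _, d := range devices {
            if d.Pool == pool && d.Name == image {
                return d.Device, true, nil
            }
        }
        return "", false, nil
    }
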
ShyamsundarR
925bda2881 Move mounting staging instance to a sub-path within staging path
This commit moves the mounting of block volumes and filesystems
to a sub-file (already the case) or a sub-dir within the staging
path.

This enables using the staging path to store any additional data
regarding the mount. For example, this will be extended in the
future to store the fsid of the cluster, and maybe the pool name
to map unmap requests to the right image.

Also, this fixes the noted hack in the code to determine in a
common manner if there is a mount on the passed-in staging path.

Signed-off-by: ShyamsundarR <srangana@redhat.com>
2019-08-13 14:07:52 +00:00
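
A sketch of the staging-path layout described above, where the actual mount target is a sub-path inside the CO-provided staging directory; the layout and helper name are assumptions for illustration:

    package rbdutil

    import (
        "os"
        "path/filepath"
    )

    // stagingTargetForVolume returns the sub-directory (or sub-file for block
    // mode) inside the staging path where the volume is mounted, leaving the
    // staging path itself free for bookkeeping files such as stashed metadata.
    func stagingTargetForVolume(stagingPath, volID string, isBlock bool) (string, error) {
        target := filepath.Join(stagingPath, volID)
        if isBlock {
            // block volumes are bind-mounted onto a regular file
            f, err := os.OpenFile(target, os.O_CREATE, 0600)
            if err != nil {
                return "", err
            }
            return target, f.Close()
        }
        return target, os.MkdirAll(target, 0750)
    }
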
Niels de Vos
b812ec26df e2e: do not fail deleting resources when "resource not found"
Sometimes the tests fail during cleanup due to unavailable resources that are
listed in the .yaml files. Deleting the missing resources returns
"resource not found". By passing --ignore-not-found to kubectl, this
problem should not happen anymore (and possibly makes it more obvious
where tests do go wrong).
2019-08-13 12:46:07 +00:00
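
The --ignore-not-found flag is a standard kubectl option; a tiny sketch of how an e2e cleanup helper might use it (the helper itself is hypothetical, not the test framework's actual API):

    package e2e

    import "os/exec"

    // deleteResource deletes the resources in a manifest but tolerates the
    // case where some of them were never created or are already gone.
    func deleteResource(manifest string) ([]byte, error) {
        return exec.Command(
            "kubectl", "delete", "--ignore-not-found=true", "-f", manifest,
        ).CombinedOutput()
    }
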
Madhu Rajanna
17eb9fb47b
Merge pull request #544 from Madhu-1/skip-snap
Skip snapshot testing in CI
2019-08-13 16:54:20 +05:30
Madhu Rajanna
90fef919d5 Skip snapshot testing in CI
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-13 16:22:55 +05:30
Humble Devassy Chirammal
46d3afbe12
Merge pull request #541 from Madhu-1/fix-ci
Fix CI failure in ceph-csi
2019-08-12 15:17:34 +05:30
Madhu Rajanna
ca27001519 Fix CI failure in ceph-csi
rook master now deploys ceph-csi
by default, which causes ceph-csi
testing failures. This PR removes
the ceph-csi resources created by rook

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-12 11:37:00 +05:30
Madhu Rajanna
d631cd6cd9
Merge pull request #533 from Madhu-1/fix-e2e-dep
Fix issues in E2E cleanup
2019-08-12 10:47:10 +05:30
Kshithij Iyer
1884e46642 Removing appendix from license. 2019-08-09 15:16:46 +00:00
mergify[bot]
cdd27c1459
Merge branch 'master' into fix-e2e-dep 2019-08-09 13:54:35 +00:00
William Zhang
44e807c36b Add the description of Deploy ConfigMap for CSI plugins
Signed-off-by: William Zhang <zhang.wanmin@zte.com.cn>
2019-08-08 12:53:31 +00:00
Madhu Rajanna
4ed187eb3f Fix issues in E2E cleanup
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-07 20:26:20 +05:30
Madhu Rajanna
7c2fb6187a remove post validation of rbd device
once we map the rbd image on a node,
we get the device name it is mapped to
in the map output itself, so there is no need to
check the devicepath after rbd mapping

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-07 14:52:13 +00:00
Daniel-Pivonka
0063727199 Make parameter pool optional in CephFS storageclass
Signed-off-by: Daniel-Pivonka <dpivonka@redhat.com>
2019-08-07 13:30:38 +00:00
wilmardo
5b461a0787 docs: Clarify that Nautilus means >=14.2.2
Signed-off-by: wilmardo <info@wilmardenouden.nl>
2019-08-07 05:51:24 +00:00
Madhu Rajanna
02bcb5f16a Enable leader election in v1.14+
Use Deployment with leader election instead of StatefulSet

Deployment behaves better when a node gets disconnected
from the rest of the cluster - new provisioner leader
is elected in ~15 seconds, while it may take up to
5 minutes for StatefulSet to start a new replica.

Refer: kubernetes-csi/external-provisioner@52d1fbc

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-05 07:11:44 +00:00
Humble Chirammal
0786225937 Implement metrics for RBD plugin
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2019-08-01 11:58:54 +00:00
Madhu Rajanna
8a7022cc50 Add recover middleware for grpc server
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-08-01 11:02:27 +00:00
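
A hand-rolled sketch of a recover middleware for the gRPC server; whether the commit uses a helper library or its own interceptor is not shown here, so treat this as an illustration of the pattern rather than the ceph-csi implementation:

    package server

    import (
        "context"
        "runtime/debug"

        "google.golang.org/grpc"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
        "k8s.io/klog"
    )

    // panicRecoveryInterceptor converts a panic inside a handler into a gRPC
    // Internal error instead of crashing the whole plugin process.
    func panicRecoveryInterceptor(
        ctx context.Context,
        req interface{},
        info *grpc.UnaryServerInfo,
        handler grpc.UnaryHandler,
    ) (resp interface{}, err error) {
        defer func() {
            if r := recover(); r != nil {
                klog.Errorf("panic in %s: %v\n%s", info.FullMethod, r, debug.Stack())
                err = status.Errorf(codes.Internal, "panic: %v", r)
            }
        }()
        return handler(ctx, req)
    }

Registered with the server as grpc.NewServer(grpc.UnaryInterceptor(panicRecoveryInterceptor)).
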
Humble Devassy Chirammal
2805135e76
Merge pull request #515 from Madhu-1/fix-readme
Fix kube version in readme
2019-07-30 20:01:33 +05:30
Madhu Rajanna
2f491b2bc3 Fix kube version in readme
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-07-30 19:52:05 +05:30
Madhu Rajanna
dfbdec4b6a add validation to check if stagingPath exists
It is the CO's responsibility to create the
stagingPath as per the CSI spec.

The CO SHALL ensure
// that the path is directory and that the process serving the
// request has `read` and `write` permission to that directory. The
// CO SHALL be responsible for creating the directory if it does not
// exist.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2019-07-29 12:52:10 +00:00
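
A minimal sketch of such a validation check; the function name and the choice of gRPC error code are illustrative assumptions:

    package util

    import (
        "os"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // validateStagingPath returns a gRPC error if the CO did not create the
    // staging directory as required by the CSI spec.
    func validateStagingPath(stagingPath string) error {
        if _, err := os.Stat(stagingPath); err != nil {
            if os.IsNotExist(err) {
                return status.Errorf(codes.InvalidArgument, "staging path %s does not exist on node", stagingPath)
            }
            return status.Error(codes.Internal, err.Error())
        }
        return nil
    }
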
Ramana Raja
5af29662b2 cephfs: set the mode of the FS subvolumes
... and not that of the FS subvolume group `csi`.

There is no reason for setting the mode of FS subvolume group `csi`
(a CephFS subdirectory) to 777. Its default mode is 755. It is
sufficient to set the mode of FS subvolumes within the subvolume group
to `777`.

Signed-off-by: Ramana Raja <rraja@redhat.com>
2019-07-29 10:11:48 +00:00
Ramana Raja
5932fff93e cephfs: set pool layout of the FS subvolumes
... instead of that of the `csi` subvolume group. The pool layout
specified via storage class's `pool` setting is a subvolume property
and not a subvolume group property. The `csi` subvolume group
may have subvolumes of different storage classes with different
pool layouts.

Fixes: #499
Signed-off-by: Ramana Raja <rraja@redhat.com>
2019-07-29 10:11:48 +00:00
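
The two commits above both move a setting from the `csi` subvolume group to the individual subvolume. A hedged sketch of driving this through the Ceph CLI; the flag names (--group_name, --mode, --pool_layout) are assumptions based on the `ceph fs subvolume create` command and should be checked against the Ceph release in use:

    package cephfsutil

    import "os/exec"

    // createSubvolume creates a CephFS subvolume inside the "csi" subvolume
    // group, applying mode and pool layout per subvolume rather than on the
    // group itself.
    func createSubvolume(volName, subvolName, sizeBytes, pool string) error {
        args := []string{
            "fs", "subvolume", "create", volName, subvolName, sizeBytes,
            "--group_name", "csi",
            "--mode", "777",
        }
        if pool != "" {
            args = append(args, "--pool_layout", pool)
        }
        return exec.Command("ceph", args...).Run()
    }
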
Humble Devassy Chirammal
c7d990a96b
Merge pull request #460 from Madhu-1/fix-pluginapath
Fix pluginpath for cephfs
2019-07-29 14:02:18 +05:30
Humble Devassy Chirammal
6367d0f692
Merge pull request #501 from humblec/bug-triage
Add details about weekly bug triage call in the README
2019-07-29 14:01:20 +05:30