Addressed fetching secrets using k8s client APIs

Based on the review comments, addressed the following:
- Moved away from having to update the pod with volumes
when a new Ceph cluster is added for provisioning via the
CSI driver

- The above now uses k8s APIs to fetch secrets
  - TBD: Need to add a watch mechanism such that these
secrets can be cached and updated when they change

- Folded the Ceph configuration and the ID/key config map
and secrets into a single secret (see the sketch after
this list)

- Provided the ability to read the same config via mapped
or created files within the pod
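
A minimal sketch of what the folded per-cluster secret could look like. The
key names (`monitors`, `adminid`, `adminkey`, `userid`, `userkey`) and the
convention of naming the Secret after the cluster fsid are illustrative
assumptions, not the exact format introduced by this change:

```yaml
# Hypothetical illustration only: one Secret per Ceph cluster, folding the
# mon list and the ID/key material together. Key names are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: ceph-cluster-4ae5ae3d-ebfb-4150-bfc8-798970f4e3d9
  namespace: default
type: Opaque
stringData:
  monitors: "192.168.100.1:6789,192.168.100.2:6789,192.168.100.3:6789"
  adminid: admin
  adminkey: <ceph admin key>
  userid: kubernetes
  userkey: <ceph user key>
```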

Tests:
- Ran PV creation/deletion/attach/use using the new scheme
StorageClass
- Ran PV creation/deletion/attach/use using the older scheme
to ensure nothing is broken
- Did not execute snapshot-related tests

Signed-off-by: ShyamsundarR <srangana@redhat.com>

@@ -33,6 +33,7 @@ Option | Default value | Description
`--nodeid` | _empty_ | This node's ID
`--containerized` | true | Whether running in containerized mode
`--metadatastorage` | _empty_ | Whether metadata should be kept on the node as a file or in a k8s configmap (`node` or `k8s_configmap`)
`--configroot` | `/etc/csi-config` | Directory in which CSI-specific Ceph cluster configurations are present, OR the value `k8s_objects` if present as Kubernetes secrets (see the sketch below)
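
A sketch of how the new `--configroot` flag might be wired into the plugin
container spec; the container name and the surrounding Deployment/DaemonSet
fields are placeholders, and only the flag values come from the table above:

```yaml
# Sketch only: plugin container arguments; everything except the flags shown
# in the options table above is a placeholder.
containers:
  - name: csi-rbdplugin
    args:
      - "--nodeid=$(NODE_ID)"
      - "--configroot=k8s_objects"        # fetch per-cluster config from Kubernetes secrets
      # - "--configroot=/etc/csi-config"  # or read it from files present at this path
```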
**Available environment variables:**
@@ -52,7 +53,7 @@ Parameter | Required | Description
--------- | -------- | -----------
`monitors` | one of `monitors`, `clusterID` or `monValueFromSecret` must be set | Comma separated list of Ceph monitors (e.g. `192.168.100.1:6789,192.168.100.2:6789,192.168.100.3:6789`)
`monValueFromSecret` | one of `monitors`, `clusterID` or `monValueFromSecret` must be set | Name of a key in the credential secret whose value is the mon list. This is useful when the monitors' IPs or hostnames change; the secret can be updated to pick up the new monitors.
`clusterID` | one of `monitors`, `clusterID` or `monValueFromSecret` must be set | Value of Ceph cluster fsid, into which RBD images shall be created (e.g. `4ae5ae3d-ebfb-4150-bfc8-798970f4e3d9`)
`clusterID` | one of `monitors`, `clusterID` or `monValueFromSecret` must be set | Value of `ceph fsid`, into which RBD images shall be created (e.g. `4ae5ae3d-ebfb-4150-bfc8-798970f4e3d9`)
`pool` | yes | Ceph pool into which the RBD image shall be created
`imageFormat` | no | RBD image format. Defaults to `2`. See [man pages](http://docs.ceph.com/docs/mimic/man/8/rbd/#cmdoption-rbd-image-format)
`imageFeatures` | no | RBD image features. Available for `imageFormat=2`. CSI RBD currently supports only `layering` feature. See [man pages](http://docs.ceph.com/docs/mimic/man/8/rbd/#cmdoption-rbd-image-feature)
@@ -60,6 +61,11 @@ Parameter | Required | Description
`csi.storage.k8s.io/provisioner-secret-namespace`, `csi.storage.k8s.io/node-publish-secret-namespace` | for Kubernetes | namespaces of the above Secret objects
`mounter`| no | if set to `rbd-nbd`, use `rbd-nbd` on nodes that have `rbd-nbd` and `nbd` kernel modules to map rbd images
NOTE: If the `clusterID` parameter is used, then an accompanying Ceph cluster
configuration secret or config files need to be provided to the running pods.
Refer to the `examples/README.md` section titled "Cluster ID based configuration"
for more information; a sketch is shown below.
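
A sketch of a `clusterID`-based StorageClass under this scheme; the
provisioner name and secret references are assumptions, and
`examples/README.md` remains the canonical reference:

```yaml
# Sketch only: StorageClass using clusterID instead of monitors.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd
provisioner: rbd.csi.ceph.com   # assumed driver name; check the deployed driver
parameters:
  clusterID: 4ae5ae3d-ebfb-4150-bfc8-798970f4e3d9   # Ceph fsid, as in the example above
  pool: rbd
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-publish-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-publish-secret-namespace: default
```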
**Required secrets:**
Admin credentials are required for provisioning new RBD images `ADMIN_NAME`: