Addressed review comments: use k8s client APIs to fetch secrets

Based on the review comments, addressed the following:
- Moved away from having to update the pod with volumes
when a new Ceph cluster is added for provisioning via the
CSI driver

- The above now uses k8s APIs to fetch secrets (see the
first sketch after this list)
  - TBD: Need to add a watch mechanism such that these
secrets can be cached and updated when changed

- Folded the Ceph configuration and the ID/key config map
and secrets into a single secret (second sketch below)

- Provided the ability to read the same configuration via
files mapped or created within the pod (third sketch below)
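
First sketch: a minimal illustration of the secret-fetch direction,
assuming client-go; the namespace ("default"), the secret-name prefix,
and the helper name are hypothetical, not taken from this change:

    // Hypothetical sketch: fetch a per-cluster configuration secret via
    // the Kubernetes API instead of a volume mounted into the pod.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func getClusterSecret(clusterID string) (map[string][]byte, error) {
        // The driver runs as a pod, so in-cluster configuration applies.
        cfg, err := rest.InClusterConfig()
        if err != nil {
            return nil, err
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return nil, err
        }
        // client-go v0.18+ signature; older releases omit the context.
        secret, err := client.CoreV1().Secrets("default").Get(
            context.TODO(), "ceph-cluster-"+clusterID, metav1.GetOptions{})
        if err != nil {
            return nil, err
        }
        return secret.Data, nil
    }

For the TBD watch item, a client-go informer on secrets would be one
way to keep a local cache updated when secrets change.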
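
Second sketch: consuming the folded secret, whose single Data map
carries both the cluster configuration and the ID/key credentials. The
key names ("monitors", "adminid", "adminkey") are assumptions for
illustration only:

    // Hypothetical sketch of the folded secret: one Data map holding
    // both cluster configuration and credentials. Key names assumed.
    import "errors"

    type clusterConfig struct {
        Monitors string // comma-separated Ceph monitor addresses
        AdminID  string
        AdminKey string
    }

    func parseClusterSecret(data map[string][]byte) (*clusterConfig, error) {
        cc := &clusterConfig{
            Monitors: string(data["monitors"]),
            AdminID:  string(data["adminid"]),
            AdminKey: string(data["adminkey"]),
        }
        if cc.Monitors == "" || cc.AdminID == "" || cc.AdminKey == "" {
            return nil, errors.New("folded secret is missing a required key")
        }
        return cc, nil
    }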
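
Third sketch: reading the same keys from files mapped or created within
the pod, e.g. the secret mounted under a per-cluster directory; the
directory layout is assumed, and the clusterConfig type continues the
sketch above:

    // Hypothetical sketch: read the same configuration from files
    // mounted at <configRoot>/ceph-cluster-<clusterID>/.
    import (
        "io/ioutil"
        "path/filepath"
    )

    func readClusterConfigFromFiles(configRoot, clusterID string) (*clusterConfig, error) {
        base := filepath.Join(configRoot, "ceph-cluster-"+clusterID)
        cc := &clusterConfig{}
        for key, dst := range map[string]*string{
            "monitors": &cc.Monitors,
            "adminid":  &cc.AdminID,
            "adminkey": &cc.AdminKey,
        } {
            b, err := ioutil.ReadFile(filepath.Join(base, key))
            if err != nil {
                return nil, err
            }
            *dst = string(b)
        }
        return cc, nil
    }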

Tests:
- Ran PV creation/deletion/attach/use with a StorageClass
using the new scheme
- Ran PV creation/deletion/attach/use with the older scheme
to ensure nothing is broken
- Did not execute snapshot-related tests

Signed-off-by: ShyamsundarR <srangana@redhat.com>
ShyamsundarR
2019-03-07 16:03:33 -05:00
committed by mergify[bot]
parent 97f8c4b677
commit 2064e674a4
20 changed files with 506 additions and 709 deletions

@@ -31,7 +31,7 @@ var (
 	nodeID = flag.String("nodeid", "", "node id")
 	containerized = flag.Bool("containerized", true, "whether run as containerized")
 	metadataStorage = flag.String("metadatastorage", "", "metadata persistence method [node|k8s_configmap]")
-	configRoot = flag.String("configroot", "/etc", "Directory under which Ceph CSI configuration files will be present")
+	configRoot = flag.String("configroot", "/etc/csi-config", "directory in which CSI specific Ceph cluster configurations are present, OR the value \"k8s_objects\" if present as kubernetes secrets")
 )
 func init() {
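
Given the flag description above, a sketch of how the driver could
dispatch between the two configuration sources; this builds on the
hypothetical helpers from the commit-message sketches and is not the
actual code in this change:

    // Hypothetical dispatch on the configroot flag: "k8s_objects"
    // selects the Kubernetes-secrets path, any other value is treated
    // as a directory of per-cluster configuration files.
    func loadClusterConfig(configRoot, clusterID string) (*clusterConfig, error) {
        if configRoot == "k8s_objects" {
            data, err := getClusterSecret(clusterID)
            if err != nil {
                return nil, err
            }
            return parseClusterSecret(data)
        }
        return readClusterConfigFromFiles(configRoot, clusterID)
    }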