ci: create ceph-csi-config ConfigMap for external storage tests

The StorageClasses that get deployed for the Kubernetes e2e external
storage tests reference a ConfigMap that contains the connection details
for the Ceph cluster. Without this ConfigMap, Ceph-CSI will not function
correctly.

Signed-off-by: Niels de Vos <ndevos@redhat.com>


@@ -3,8 +3,14 @@
The files in this directory are used by the k8s-e2e-external-storage CI job.
This job runs the [Kubernetes end-to-end external storage tests][1] with
different driver configurations/manifests (in the `driver-*.yaml` files). Each
driver configuration refers to a StorageClass that is used while testing.
The StorageClasses are created with the `create-storageclass.sh` script and the
`sc-*.yaml.in` templates.

The Ceph-CSI configuration from the `ceph-csi-config` ConfigMap is created with
`create-configmap.sh` after the deployment is finished. The ConfigMap is
referenced in the StorageClasses and contains the connection details for the
Ceph cluster.

[1]: https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/external
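
For reference, the StorageClasses generated from the `sc-*.yaml.in` templates tie back to this ConfigMap through their `clusterID` parameter, which has to match an entry in `config.json`. A minimal sketch of such an RBD StorageClass, written in the same heredoc style the script uses; the StorageClass name, pool and secret names are illustrative and not taken from the actual templates:

cat << EOF | kubectl create -f -
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  # must match a "clusterID" (the Ceph fsid) listed in the ceph-csi-config ConfigMap
  clusterID: 00000000-1111-2222-3333-444444444444
  pool: replicapool
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
EOF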


@@ -0,0 +1,45 @@
#!/bin/sh
#
# Create the Ceph-CSI ConfigMap based on the configuration obtained from the
# Rook deployment.
#
# The ConfigMap is referenced in the StorageClasses that are used by
# driver-*.yaml manifests in the k8s-e2e-external-storage CI job.
#
# Requirements:
# - kubectl in the path
# - working KUBE_CONFIG either in environment, or default config files
# - deployment done with Rook
#
# the namespace where Ceph-CSI is running
NAMESPACE="${1}"
[ -n "${NAMESPACE}" ] || { echo "ERROR: no namespace passed to ${0}"; exit 1; }
# exit on error
set -e
TOOLBOX_POD=$(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o=jsonpath='{.items[0].metadata.name}')
FS_ID=$(kubectl -n rook-ceph exec "${TOOLBOX_POD}" -- ceph fsid)
MONITOR=$(kubectl -n rook-ceph get services -l app=rook-ceph-mon -o=jsonpath='{.items[0].spec.clusterIP}:{.items[0].spec.ports[0].port}')
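# Illustrative examples of the extracted values (not actual CI output):
#   FS_ID   - the Ceph cluster fsid, a UUID like "b9127830-b0cc-4e34-aa47-9d1a2e9949a8"
#   MONITOR - ClusterIP:port of a MON service, e.g. "10.108.42.15:6789"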
# in case the ConfigMap already exists, remove it before recreating
kubectl -n "${NAMESPACE}" delete --ignore-not-found configmap/ceph-csi-config
cat << EOF | kubectl -n "${NAMESPACE}" create -f -
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "${FS_ID}",
        "monitors": [
          "${MONITOR}"
        ]
      }
    ]
EOF
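
A usage sketch: the namespace below is an assumption for illustration; the CI job passes whichever namespace Ceph-CSI was deployed into.

# create the ConfigMap in the (hypothetical) "cephcsi" namespace
./create-configmap.sh cephcsi

# inspect the generated connection details
kubectl -n cephcsi get configmap ceph-csi-config -o yaml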