diff --git a/scripts/k8s-storage/README.md b/scripts/k8s-storage/README.md
index 7c023f9ff..c7a64b509 100644
--- a/scripts/k8s-storage/README.md
+++ b/scripts/k8s-storage/README.md
@@ -3,8 +3,14 @@
 The files in this directory are used by the k8s-e2e-external-storage CI job.
 This job runs the [Kubernetes end-to-end external storage tests][1] with
 different driver configurations/manifests (in the `driver-*.yaml` files). Each
-driver configuration refers to a StorageClass that is used while testing. The
-StorageClasses are created with the `create-storageclass.sh` script and the
+driver configuration refers to a StorageClass that is used while testing.
+
+The StorageClasses are created with the `create-storageclass.sh` script and the
 `sc-*.yaml.in` templates.
+
+The Ceph-CSI configuration from the `ceph-csi-config` ConfigMap is created with
+`create-configmap.sh` after the deployment is finished. The ConfigMap is
+referenced in the StorageClasses and contains the connection details for the
+Ceph cluster.
 
 [1]: https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/external
diff --git a/scripts/k8s-storage/create-configmap.sh b/scripts/k8s-storage/create-configmap.sh
new file mode 100755
index 000000000..c236f3da5
--- /dev/null
+++ b/scripts/k8s-storage/create-configmap.sh
@@ -0,0 +1,45 @@
+#!/bin/sh
+#
+# Create the Ceph-CSI ConfigMap based on the configuration obtained from the
+# Rook deployment.
+#
+# The ConfigMap is referenced in the StorageClasses that are used by
+# driver-*.yaml manifests in the k8s-e2e-external-storage CI job.
+#
+# Requirements:
+# - kubectl in the path
+# - working KUBECONFIG either in the environment, or default config files
+# - deployment done with Rook
+#
+
+# the namespace where Ceph-CSI is running
+NAMESPACE="${1}"
+[ -n "${NAMESPACE}" ] || { echo "ERROR: no namespace passed to ${0}"; exit 1; }
+
+# exit on error
+set -e
+
+TOOLBOX_POD=$(kubectl -n rook-ceph get pods -l app=rook-ceph-tools -o=jsonpath='{.items[0].metadata.name}')
+FS_ID=$(kubectl -n rook-ceph exec "${TOOLBOX_POD}" -- ceph fsid)
+MONITOR=$(kubectl -n rook-ceph get services -l app=rook-ceph-mon -o=jsonpath='{.items[0].spec.clusterIP}:{.items[0].spec.ports[0].port}')
+
+# in case the ConfigMap already exists, remove it before recreating
+kubectl -n "${NAMESPACE}" delete --ignore-not-found configmap/ceph-csi-config
+
+cat << EOF | kubectl -n "${NAMESPACE}" create -f -
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: ceph-csi-config
+data:
+  config.json: |-
+    [
+      {
+        "clusterID": "${FS_ID}",
+        "monitors": [
+          "${MONITOR}"
+        ]
+      }
+    ]
+EOF
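
As a quick sanity check, the new script can also be run by hand against an existing Rook deployment. The sketch below assumes Ceph-CSI runs in a namespace called `cephcsi-e2e`; that name is only illustrative, the CI job passes its own namespace as the first argument.

```sh
# Run the new helper; the namespace argument is a hypothetical example.
./scripts/k8s-storage/create-configmap.sh cephcsi-e2e

# Inspect the generated config.json. The "clusterID" value is the Ceph fsid
# reported by the Rook toolbox, which the StorageClasses rendered from the
# sc-*.yaml.in templates are expected to reference.
kubectl -n cephcsi-e2e get configmap ceph-csi-config \
    -o jsonpath='{.data.config\.json}'
```

Deleting with `--ignore-not-found` before `create` keeps the script idempotent, so repeated CI runs against the same cluster simply regenerate the ConfigMap.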