Mirror of https://github.com/ceph/ceph-csi.git, synced 2024-11-14 02:10:21 +00:00
Updated code and docs to reflect correct terminology
- Updated instances of fsid with clusterid
- Updated instances of credentials/subject with user/key

Signed-off-by: ShyamsundarR <srangana@redhat.com>

parent e1c685ef39
commit fc0cf957be
@@ -53,7 +53,7 @@ Parameter | Required | Description
 --------- | -------- | -----------
 `monitors` | one of `monitors`, `clusterID` or `monValueFromSecret` must be set | Comma separated list of Ceph monitors (e.g. `192.168.100.1:6789,192.168.100.2:6789,192.168.100.3:6789`)
 `monValueFromSecret` | one of `monitors`, `clusterID` or `monValueFromSecret` must be set | a string pointing to the key in the credential secret whose value is the mon list. This is used when the monitors' IPs or hostnames change; the secret can be updated to pick up the new monitors.
-`clusterID` | one of `monitors`, `clusterID` or `monValueFromSecret` must be set | Value of `ceph fsid`, into which RBD images shall be created (e.g. `4ae5ae3d-ebfb-4150-bfc8-798970f4e3d9`)
+`clusterID` | one of `monitors`, `clusterID` or `monValueFromSecret` must be set | String representing a Ceph cluster, must be unique across all Ceph clusters in use for provisioning, cannot be greater than 36 bytes in length, and should remain immutable for the lifetime of the Ceph cluster in use
 `pool` | yes | Ceph pool into which the RBD image shall be created
 `imageFormat` | no | RBD image format. Defaults to `2`. See [man pages](http://docs.ceph.com/docs/mimic/man/8/rbd/#cmdoption-rbd-image-format)
 `imageFeatures` | no | RBD image features. Available for `imageFormat=2`. CSI RBD currently supports only the `layering` feature. See [man pages](http://docs.ceph.com/docs/mimic/man/8/rbd/#cmdoption-rbd-image-feature)
@@ -64,7 +64,8 @@ Parameter | Required | Description
 NOTE: If the `clusterID` parameter is used, then an accompanying Ceph cluster
 configuration secret or config files need to be provided to the running pods.
 Refer to `examples/README.md` section titled "Cluster ID based configuration"
-for more information.
+for more information. A suggested way to populate the clusterID is to use the
+output of `ceph fsid` of the Ceph cluster to be used for provisioning.
 
 **Required secrets:**
 
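The clusterID constraints described above (at most 36 bytes, secret named after the value) can be sketched as a small shell check. The UUID below is a made-up example; on a live cluster the value would come from the output of `ceph fsid`:

```shell
#!/bin/sh
# Hypothetical clusterID; in practice: CLUSTER_ID="$(ceph fsid)"
CLUSTER_ID="4ae5ae3d-ebfb-4150-bfc8-798970f4e3d9"

# clusterID cannot be greater than 36 bytes (a UUID is exactly 36 characters)
if [ "${#CLUSTER_ID}" -gt 36 ]; then
  echo "clusterID too long: ${#CLUSTER_ID} bytes" >&2
  exit 1
fi

# The accompanying configuration secret is named after the clusterID
SECRET_NAME="ceph-cluster-${CLUSTER_ID}"
echo "$SECRET_NAME"
```

A UUID from `ceph fsid` always satisfies the length limit, which is why it is the suggested source for the value.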
@@ -72,10 +73,10 @@ Admin credentials are required for provisioning new RBD images `ADMIN_NAME`:
 `ADMIN_PASSWORD` - note that the key of the key-value pair is the name of the
 client with admin privileges, and the value is its password
 
-If clusterID is specified, then a pair of secrets are required, with keys named
-`subjectid` and `credentials`. Where, `subjectid` is the name of the client
-with admin privileges and `credentials` contain its password. The pair required
-are provisioner and publish secrets, and should contain the same value.
+If clusterID is specified, then a secret with various keys and values as
+specified in `examples/rbd/template-ceph-cluster-ID-secret.yaml` needs to be
+created, with the secret name matching the string value provided as the
+`clusterID`.
 
 ## Deployment with Kubernetes
 
@@ -226,9 +226,6 @@ Ceph cluster, the following actions need to be completed.
 
 Get the following information from the Ceph cluster,
 
-* Ceph Cluster fsid
-  * Output of `ceph fsid`
-  * Used to substitute `<cluster-fsid>` references in the files below
 * Admin ID and key, that has privileges to perform CRUD operations on the Ceph
   cluster and pools of choice
   * Key is typically the output of, `ceph auth get-key client.admin` where
@@ -237,14 +234,19 @@ Get the following information from the Ceph cluster,
 * Ceph monitor list
   * Typically in the output of `ceph mon dump`
   * Used to prepare comma separated MON list where required in the files below
+* Ceph Cluster fsid
+  * If choosing to use the Ceph cluster fsid as the unique value of clusterID,
+    * Output of `ceph fsid`
+  * Used to substitute `<cluster-id>` references in the files below
 
 Update the template `rbd/template-ceph-cluster-ID-secret.yaml` with values from
-a Ceph cluster and create the following secret,
+a Ceph cluster and replace `<cluster-id>` with the chosen clusterID to create
+the following secret,
 
 * `kubectl create -f rbd/template-ceph-cluster-ID-secret.yaml`
 
-Storage class and snapshot class, using `<cluster-fsid>` as the value for the
+Storage class and snapshot class, using `<cluster-id>` as the value for the
 option `clusterID`, can now be created on the cluster.
 
 Remaining steps to test functionality remain the same as mentioned in the
 sections above.
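The template update step above can be sketched as follows. The YAML body here is a stand-in fragment, not the actual contents of `rbd/template-ceph-cluster-ID-secret.yaml`, and the clusterID is a made-up example:

```shell
#!/bin/sh
CLUSTER_ID="4ae5ae3d-ebfb-4150-bfc8-798970f4e3d9"

# Stand-in for rbd/template-ceph-cluster-ID-secret.yaml
cat > /tmp/ceph-cluster-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: ceph-cluster-<cluster-id>
  namespace: default
EOF

# Replace the <cluster-id> placeholder with the chosen clusterID
sed "s/<cluster-id>/${CLUSTER_ID}/g" /tmp/ceph-cluster-secret.yaml \
  > /tmp/ceph-cluster-secret-final.yaml

# The secret would then be created with:
#   kubectl create -f /tmp/ceph-cluster-secret-final.yaml
grep "name: " /tmp/ceph-cluster-secret-final.yaml
```

The secret name must end up matching the `clusterID` string used in the StorageClass, which is what the substitution guarantees.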
@@ -10,8 +10,14 @@ parameters:
   # if using FQDN, make sure csi plugin's dns policy is appropriate.
   monitors: mon1:port,mon2:port,...
   # OR,
-  # Ceph cluster fsid, of the cluster to provision storage from
-  # clusterID: <ceph-fsid>
+  # String representing a Ceph cluster to provision storage from.
+  # Should be unique across all Ceph clusters in use for provisioning,
+  # cannot be greater than 36 bytes in length, and should remain immutable for
+  # the lifetime of the StorageClass in use.
+  # If using clusterID, ensure to create a secret, as in
+  # template-ceph-cluster-ID-secret.yaml, to accompany the string chosen to
+  # represent the Ceph cluster in clusterID
+  # clusterID: <cluster-id>
 
   csi.storage.k8s.io/snapshotter-secret-name: csi-rbd-secret
   csi.storage.k8s.io/snapshotter-secret-namespace: default
@@ -9,11 +9,14 @@ parameters:
   # if using FQDN, make sure csi plugin's dns policy is appropriate.
   monitors: mon1:port,mon2:port,...
   # OR,
-  # Ceph cluster fsid, of the cluster to provision storage from
-  # clusterID: <ceph-fsid>
-  # If using clusterID based configuration, CSI pods need to be passed in a
-  # secret named ceph-cluster-<cluster-fsid> that contains the cluster
-  # information. (as in the provided template-ceph-cluster-ID-secret.yaml)
+  # String representing a Ceph cluster to provision storage from.
+  # Should be unique across all Ceph clusters in use for provisioning,
+  # cannot be greater than 36 bytes in length, and should remain immutable for
+  # the lifetime of the StorageClass in use.
+  # If using clusterID, ensure to create a secret, as in
+  # template-ceph-cluster-ID-secret.yaml, to accompany the string chosen to
+  # represent the Ceph cluster in clusterID
+  # clusterID: <cluster-id>
   # OR,
   # if "monitors" parameter is not set, driver to get monitors from same
   # secret as admin/user credentials. "monValueFromSecret" provides the
@@ -32,7 +35,7 @@ parameters:
 
   # The secrets have to contain Ceph admin credentials.
   # NOTE: If using "clusterID" instead of "monitors" above, the following
-  # secrets MAY be added to the ceph-cluster-<cluster-fsid> secret and skipped
+  # secrets MAY be added to the ceph-cluster-<cluster-id> secret and skipped
   # here
   csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
   csi.storage.k8s.io/provisioner-secret-namespace: default
@@ -41,7 +44,7 @@ parameters:
 
   # Ceph users for operating RBD
   # NOTE: If using "clusterID" instead of "monitors" above, the following
-  # IDs MAY be added to the ceph-cluster-<cluster-fsid> secret and skipped
+  # IDs MAY be added to the ceph-cluster-<cluster-id> secret and skipped
   # here
   adminid: admin
   userid: kubernetes
@@ -6,11 +6,10 @@
 apiVersion: v1
 kind: Secret
 metadata:
-  # The <cluster-fsid> is used by the CSI plugin to uniquely identify and use a
-  # Ceph cluster, hence the value MUST match the output of the following
-  # command.
-  # - Output of: `ceph fsid`
-  name: ceph-cluster-<cluster-fsid>
+  # The <cluster-id> is used by the CSI plugin to uniquely identify and use a
+  # Ceph cluster, the value MUST match the value provided as `clusterID` in the
+  # StorageClass
+  name: ceph-cluster-<cluster-id>
   namespace: default
 data:
   # Base64 encoded and comma separated Ceph cluster monitor list
@@ -24,7 +23,7 @@ data:
   # Substitute the entire string including angle braces, with the base64 value
   adminid: <BASE64-ENCODED-ID>
   # Base64 encoded key of the provisioner admin ID
-  # - Output of: `ceph auth get-key client.admin | base64`
+  # - Output of: `ceph auth get-key client.<admin-id> | base64`
   # Substitute the entire string including angle braces, with the base64 value
   adminkey: <BASE64-ENCODED-PASSWORD>
   # Base64 encoded user ID to use for publishing
@@ -32,6 +31,6 @@ data:
   # Substitute the entire string including angle braces, with the base64 value
   userid: <BASE64-ENCODED-ID>
   # Base64 encoded key of the publisher user ID
-  # - Output of: `ceph auth get-key client.<admin-id> | base64`
+  # - Output of: `ceph auth get-key client.<admin-id> | base64`
   # Substitute the entire string including angle braces, with the base64 value
   userkey: <BASE64-ENCODED-PASSWORD>
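The base64 values expected in the secret's `data:` fields can be produced as below. The monitor list and admin ID are illustrative only; on a real cluster the key would come from `ceph auth get-key client.<admin-id>`:

```shell
#!/bin/sh
# Illustrative values; replace with real cluster data
MON_LIST="192.168.100.1:6789,192.168.100.2:6789,192.168.100.3:6789"
ADMIN_ID="admin"

# printf avoids a trailing newline, which would change the encoded value
B64_MONS="$(printf '%s' "$MON_LIST" | base64 | tr -d '\n')"
B64_ADMIN_ID="$(printf '%s' "$ADMIN_ID" | base64 | tr -d '\n')"

echo "monitors: ${B64_MONS}"
echo "adminid: ${B64_ADMIN_ID}"
```

Encoding with `echo` instead of `printf` would include a newline in the encoded value, which is a common source of authentication failures.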
@@ -1,8 +1,17 @@
 ---
 # This is a patch to the existing daemonset deployment of CSI rbdplugin.
-# This is to be used when adding a new Ceph cluster to the CSI plugin.
+#
+# This is to be used when using `clusterID` instead of monitors or
+# monValueFromSecret in the StorageClass to specify the Ceph cluster to
+# provision storage from, AND when the value of `--configroot` option to the
+# CSI pods is NOT "k8s_objects".
+#
+# This patch file patches in the specified secret for the 'clusterID' as a
+# volume, instead of the Ceph CSI plugin actively fetching and using kubernetes
+# secrets.
+#
 # NOTE: Update csi-rbdplugin-provisioner StatefulSet as well with similar patch
-# Post substituting the <cluster-fsid> in all places execute,
+# Post substituting the <cluster-id> in all places execute,
 # `kubectl patch daemonset csi-rbdplugin --patch\
 # "$(cat template-csi-rbdplugin-patch.yaml)"`
-# to patch the statefulset deployment.
+# to patch the daemonset deployment.
@@ -12,10 +21,10 @@ spec:
       containers:
         - name: csi-rbdplugin
           volumeMounts:
-            - name: ceph-cluster-<cluster-fsid>
-              mountPath: "/etc/csi-config/ceph-cluster-<cluster-fsid>"
+            - name: ceph-cluster-<cluster-id>
+              mountPath: "/etc/csi-config/ceph-cluster-<cluster-id>"
               readOnly: true
       volumes:
-        - name: ceph-cluster-<cluster-fsid>
+        - name: ceph-cluster-<cluster-id>
           secret:
-            secretName: ceph-cluster-<cluster-fsid>
+            secretName: ceph-cluster-<cluster-id>
@@ -1,8 +1,17 @@
 ---
 # This is a patch to the existing statefulset deployment of CSI rbdplugin.
-# This is to be used when adding a new Ceph cluster to the CSI plugin.
+#
+# This is to be used when using `clusterID` instead of monitors or
+# monValueFromSecret in the StorageClass to specify the Ceph cluster to
+# provision storage from, AND when the value of `--configroot` option to the
+# CSI pods is NOT "k8s_objects".
+#
+# This patch file patches in the specified secret for the 'clusterID' as a
+# volume, instead of the Ceph CSI plugin actively fetching and using kubernetes
+# secrets.
+#
 # NOTE: Update csi-rbdplugin DaemonSet as well with similar patch
-# Post substituting the <cluster-fsid> in all places execute,
+# Post substituting the <cluster-id> in all places execute,
 # `kubectl patch statefulset csi-rbdplugin-provisioner --patch\
 # "$(cat template-csi-rbdplugin-provisioner-patch.yaml)"`
 # to patch the statefulset deployment.
@@ -12,10 +21,10 @@ spec:
       containers:
         - name: csi-rbdplugin
          volumeMounts:
-            - name: ceph-cluster-<cluster-fsid>
-              mountPath: "/etc/csi-config/ceph-cluster-<cluster-fsid>"
+            - name: ceph-cluster-<cluster-id>
+              mountPath: "/etc/csi-config/ceph-cluster-<cluster-id>"
               readOnly: true
       volumes:
-        - name: ceph-cluster-<cluster-fsid>
+        - name: ceph-cluster-<cluster-id>
           secret:
-            secretName: ceph-cluster-<cluster-fsid>
+            secretName: ceph-cluster-<cluster-id>
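The substitution-then-patch flow the template comments describe can be sketched as below. The YAML here is a stand-in fragment with the same placeholders; the real files are `template-csi-rbdplugin-patch.yaml` and `template-csi-rbdplugin-provisioner-patch.yaml`:

```shell
#!/bin/sh
CLUSTER_ID="4ae5ae3d-ebfb-4150-bfc8-798970f4e3d9"

# Stand-in fragment using the placeholders from the patch templates
cat > /tmp/rbdplugin-patch.yaml <<'EOF'
volumes:
  - name: ceph-cluster-<cluster-id>
    secret:
      secretName: ceph-cluster-<cluster-id>
EOF

# Substitute every <cluster-id> occurrence in place
sed -i.bak "s/<cluster-id>/${CLUSTER_ID}/g" /tmp/rbdplugin-patch.yaml

# With the real template, the daemonset would then be patched via:
#   kubectl patch daemonset csi-rbdplugin --patch \
#     "$(cat template-csi-rbdplugin-patch.yaml)"
grep -c "ceph-cluster-${CLUSTER_ID}" /tmp/rbdplugin-patch.yaml
```

Both the volume name and the `secretName` must carry the same `ceph-cluster-<cluster-id>` value, since the plugin looks the mounted config up by that name under `/etc/csi-config/`.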
@@ -280,7 +280,7 @@ func createPath(volOpt *rbdVolume, userID string, creds map[string]string) (stri
 	}
 
 	klog.V(5).Infof("rbd: map mon %s", mon)
-	key, err := getRBDKey(volOpt.FsID, userID, creds)
+	key, err := getRBDKey(volOpt.ClusterID, userID, creds)
 	if err != nil {
 		return "", err
 	}
@@ -52,7 +52,7 @@ type rbdVolume struct {
 	UserID             string `json:"userId"`
 	Mounter            string `json:"mounter"`
 	DisableInUseChecks bool   `json:"disableInUseChecks"`
-	FsID               string `json:"fsid"`
+	ClusterID          string `json:"clusterId"`
 }
 
 type rbdSnapshot struct {
@@ -67,7 +67,7 @@ type rbdSnapshot struct {
 	SizeBytes int64  `json:"sizeBytes"`
 	AdminID   string `json:"adminId"`
 	UserID    string `json:"userId"`
-	FsID      string `json:"fsid"`
+	ClusterID string `json:"clusterId"`
 }
 
 var (
@@ -87,17 +87,16 @@ var (
 	supportedFeatures = sets.NewString("layering")
 )
 
-func getRBDKey(fsid string, id string, credentials map[string]string) (string, error) {
+func getRBDKey(clusterid string, id string, credentials map[string]string) (string, error) {
 	var ok bool
 	var err error
 	var key string
 
 	if key, ok = credentials[id]; !ok {
-		if fsid != "" {
-			key, err = confStore.CredentialForUser(fsid, id)
+		if clusterid != "" {
+			key, err = confStore.KeyForUser(clusterid, id)
 			if err != nil {
-				klog.Errorf("failed getting credentials (%s)", err)
-				return "", fmt.Errorf("RBD key for ID: %s not found in config store", id)
+				return "", fmt.Errorf("RBD key for ID: %s not found in config store of clusterID (%s)", id, clusterid)
 			}
 		} else {
 			return "", fmt.Errorf("RBD key for ID: %s not found", id)
@@ -137,7 +136,7 @@ func createRBDImage(pOpts *rbdVolume, volSz int, adminID string, credentials map
 	image := pOpts.VolName
 	volSzMiB := fmt.Sprintf("%dM", volSz)
 
-	key, err := getRBDKey(pOpts.FsID, adminID, credentials)
+	key, err := getRBDKey(pOpts.ClusterID, adminID, credentials)
 	if err != nil {
 		return err
 	}
@@ -168,7 +167,7 @@ func rbdStatus(pOpts *rbdVolume, userID string, credentials map[string]string) (
 	image := pOpts.VolName
 	// If we don't have admin id/secret (e.g. attaching), fallback to user id/secret.
 
-	key, err := getRBDKey(pOpts.FsID, userID, credentials)
+	key, err := getRBDKey(pOpts.ClusterID, userID, credentials)
 	if err != nil {
 		return false, "", err
 	}
@@ -216,7 +215,7 @@ func deleteRBDImage(pOpts *rbdVolume, adminID string, credentials map[string]str
 		klog.Info("rbd is still being used ", image)
 		return fmt.Errorf("rbd %s is still being used", image)
 	}
-	key, err := getRBDKey(pOpts.FsID, adminID, credentials)
+	key, err := getRBDKey(pOpts.ClusterID, adminID, credentials)
 	if err != nil {
 		return err
 	}
@@ -241,22 +240,22 @@ func execCommand(command string, args []string) ([]byte, error) {
 	return cmd.CombinedOutput()
 }
 
-func getMonsAndFsID(options map[string]string) (monitors, fsID, monInSecret string, err error) {
+func getMonsAndClusterID(options map[string]string) (monitors, clusterID, monInSecret string, err error) {
 	var ok bool
 
 	monitors, ok = options["monitors"]
 	if !ok {
 		// if mons are not set in options, check if they are set in secret
 		if monInSecret, ok = options["monValueFromSecret"]; !ok {
-			// if mons are not in secret, check if we have a cluster-fsid
-			if fsID, ok = options["clusterID"]; !ok {
+			// if mons are not in secret, check if we have a cluster-id
+			if clusterID, ok = options["clusterID"]; !ok {
 				err = errors.New("either monitors or monValueFromSecret or clusterID must be set")
 				return
 			}
 
-			if monitors, err = confStore.Mons(fsID); err != nil {
+			if monitors, err = confStore.Mons(clusterID); err != nil {
 				klog.Errorf("failed getting mons (%s)", err)
-				err = fmt.Errorf("failed to fetch monitor list using clusterID (%s)", fsID)
+				err = fmt.Errorf("failed to fetch monitor list using clusterID (%s)", clusterID)
 				return
 			}
 		}
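The parameter precedence this function enforces (explicit monitors first, then the secret-provided mon list, then clusterID lookup) can be mirrored as a quick shell check; the parameter values below are illustrative stand-ins for the StorageClass parameters:

```shell
#!/bin/sh
# Illustrative StorageClass parameters; at least one of the three must be set
MONITORS=""
MON_VALUE_FROM_SECRET=""
CLUSTER_ID="4ae5ae3d-ebfb-4150-bfc8-798970f4e3d9"

# Same validation rule the driver applies, with the same error message
if [ -z "$MONITORS" ] && [ -z "$MON_VALUE_FROM_SECRET" ] && [ -z "$CLUSTER_ID" ]; then
  echo "either monitors or monValueFromSecret or clusterID must be set" >&2
  exit 1
fi
echo "parameters ok"
```

This mirrors only the validation; the actual monitor resolution from the config store happens inside the driver.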
@@ -265,16 +264,16 @@ func getMonsAndFsID(options map[string]string) (monitors, fsID, monInSecret stri
 	return
 }
 
-func getIDs(options map[string]string, fsID string) (adminID, userID string, err error) {
+func getIDs(options map[string]string, clusterID string) (adminID, userID string, err error) {
 	var ok bool
 
 	adminID, ok = options["adminid"]
 	switch {
 	case ok:
-	case fsID != "":
-		if adminID, err = confStore.AdminID(fsID); err != nil {
+	case clusterID != "":
+		if adminID, err = confStore.AdminID(clusterID); err != nil {
 			klog.Errorf("failed getting subject (%s)", err)
-			return "", "", fmt.Errorf("failed to fetch provisioner ID using clusterID (%s)", fsID)
+			return "", "", fmt.Errorf("failed to fetch admin ID for clusterID (%s)", clusterID)
 		}
 	default:
 		adminID = rbdDefaultAdminID
@@ -283,10 +282,10 @@ func getIDs(options map[string]string, fsID string) (adminID, userID string, err
 	userID, ok = options["userid"]
 	switch {
 	case ok:
-	case fsID != "":
-		if userID, err = confStore.UserID(fsID); err != nil {
+	case clusterID != "":
+		if userID, err = confStore.UserID(clusterID); err != nil {
 			klog.Errorf("failed getting subject (%s)", err)
-			return "", "", fmt.Errorf("failed to fetch publisher ID using clusterID (%s)", fsID)
+			return "", "", fmt.Errorf("failed to fetch user ID using clusterID (%s)", clusterID)
 		}
 	default:
 		userID = rbdDefaultUserID
@@ -305,7 +304,7 @@ func getRBDVolumeOptions(volOptions map[string]string, disableInUseChecks bool)
 		return nil, errors.New("missing required parameter pool")
 	}
 
-	rbdVol.Monitors, rbdVol.FsID, rbdVol.MonValueFromSecret, err = getMonsAndFsID(volOptions)
+	rbdVol.Monitors, rbdVol.ClusterID, rbdVol.MonValueFromSecret, err = getMonsAndClusterID(volOptions)
 	if err != nil {
 		return nil, err
 	}
@@ -346,7 +345,7 @@ func getCredsFromVol(rbdVol *rbdVolume, volOptions map[string]string) error {
 	var ok bool
 	var err error
 
-	rbdVol.AdminID, rbdVol.UserID, err = getIDs(volOptions, rbdVol.FsID)
+	rbdVol.AdminID, rbdVol.UserID, err = getIDs(volOptions, rbdVol.ClusterID)
 	if err != nil {
 		return err
 	}
@@ -369,12 +368,12 @@ func getRBDSnapshotOptions(snapOptions map[string]string) (*rbdSnapshot, error)
 		return nil, errors.New("missing required parameter pool")
 	}
 
-	rbdSnap.Monitors, rbdSnap.FsID, rbdSnap.MonValueFromSecret, err = getMonsAndFsID(snapOptions)
+	rbdSnap.Monitors, rbdSnap.ClusterID, rbdSnap.MonValueFromSecret, err = getMonsAndClusterID(snapOptions)
 	if err != nil {
 		return nil, err
 	}
 
-	rbdSnap.AdminID, rbdSnap.UserID, err = getIDs(snapOptions, rbdSnap.FsID)
+	rbdSnap.AdminID, rbdSnap.UserID, err = getIDs(snapOptions, rbdSnap.ClusterID)
 	if err != nil {
 		return nil, err
 	}
@@ -439,7 +438,7 @@ func protectSnapshot(pOpts *rbdSnapshot, adminID string, credentials map[string]
 	image := pOpts.VolName
 	snapID := pOpts.SnapID
 
-	key, err := getRBDKey(pOpts.FsID, adminID, credentials)
+	key, err := getRBDKey(pOpts.ClusterID, adminID, credentials)
 	if err != nil {
 		return err
 	}
@@ -502,7 +501,7 @@ func createSnapshot(pOpts *rbdSnapshot, adminID string, credentials map[string]s
 	image := pOpts.VolName
 	snapID := pOpts.SnapID
 
-	key, err := getRBDKey(pOpts.FsID, adminID, credentials)
+	key, err := getRBDKey(pOpts.ClusterID, adminID, credentials)
 	if err != nil {
 		return err
 	}
@@ -529,7 +528,7 @@ func unprotectSnapshot(pOpts *rbdSnapshot, adminID string, credentials map[strin
 	image := pOpts.VolName
 	snapID := pOpts.SnapID
 
-	key, err := getRBDKey(pOpts.FsID, adminID, credentials)
+	key, err := getRBDKey(pOpts.ClusterID, adminID, credentials)
 	if err != nil {
 		return err
 	}
@@ -556,7 +555,7 @@ func deleteSnapshot(pOpts *rbdSnapshot, adminID string, credentials map[string]s
 	image := pOpts.VolName
 	snapID := pOpts.SnapID
 
-	key, err := getRBDKey(pOpts.FsID, adminID, credentials)
+	key, err := getRBDKey(pOpts.ClusterID, adminID, credentials)
 	if err != nil {
 		return err
 	}
@@ -583,7 +582,7 @@ func restoreSnapshot(pVolOpts *rbdVolume, pSnapOpts *rbdSnapshot, adminID string
 	image := pVolOpts.VolName
 	snapID := pSnapOpts.SnapID
 
-	key, err := getRBDKey(pVolOpts.FsID, adminID, credentials)
+	key, err := getRBDKey(pVolOpts.ClusterID, adminID, credentials)
 	if err != nil {
 		return err
 	}
@@ -27,7 +27,7 @@ import (
 // StoreReader interface enables plugging different stores, that contain the
 // keys and data. (e.g k8s secrets or local files)
 type StoreReader interface {
-	DataForKey(fsid string, key string) (string, error)
+	DataForKey(clusterID string, key string) (string, error)
 }
 
 /* ConfigKeys contents and format,
@@ -55,23 +55,23 @@ type ConfigStore struct {
 }
 
 // dataForKey returns data from the config store for the provided key
-func (dc *ConfigStore) dataForKey(fsid string, key string) (string, error) {
+func (dc *ConfigStore) dataForKey(clusterID string, key string) (string, error) {
 	if dc.StoreReader != nil {
-		return dc.StoreReader.DataForKey(fsid, key)
+		return dc.StoreReader.DataForKey(clusterID, key)
 	}
 
 	err := errors.New("config store location uninitialized")
 	return "", err
 }
 
-// Mons returns a comma separated MON list from the cluster config represented by fsid
-func (dc *ConfigStore) Mons(fsid string) (string, error) {
-	return dc.dataForKey(fsid, csMonitors)
+// Mons returns a comma separated MON list from the cluster config represented by clusterID
+func (dc *ConfigStore) Mons(clusterID string) (string, error) {
+	return dc.dataForKey(clusterID, csMonitors)
 }
 
-// Pools returns a list of pool names from the cluster config represented by fsid
-func (dc *ConfigStore) Pools(fsid string) ([]string, error) {
-	content, err := dc.dataForKey(fsid, csPools)
+// Pools returns a list of pool names from the cluster config represented by clusterID
+func (dc *ConfigStore) Pools(clusterID string) ([]string, error) {
+	content, err := dc.dataForKey(clusterID, csPools)
 	if err != nil {
 		return nil, err
 	}
@@ -79,42 +79,42 @@ func (dc *ConfigStore) Pools(fsid string) ([]string, error) {
 	return strings.Split(content, ","), nil
 }
 
-// AdminID returns the admin ID from the cluster config represented by fsid
-func (dc *ConfigStore) AdminID(fsid string) (string, error) {
-	return dc.dataForKey(fsid, csAdminID)
+// AdminID returns the admin ID from the cluster config represented by clusterID
+func (dc *ConfigStore) AdminID(clusterID string) (string, error) {
+	return dc.dataForKey(clusterID, csAdminID)
 }
 
-// UserID returns the user ID from the cluster config represented by fsid
-func (dc *ConfigStore) UserID(fsid string) (string, error) {
-	return dc.dataForKey(fsid, csUserID)
+// UserID returns the user ID from the cluster config represented by clusterID
+func (dc *ConfigStore) UserID(clusterID string) (string, error) {
+	return dc.dataForKey(clusterID, csUserID)
 }
 
-// CredentialForUser returns the credentials for the requested user ID
-// from the cluster config represented by fsid
-func (dc *ConfigStore) CredentialForUser(fsid, userID string) (data string, err error) {
-	var credkey string
-	user, err := dc.AdminID(fsid)
+// KeyForUser returns the key for the requested user ID from the cluster config
+// represented by clusterID
+func (dc *ConfigStore) KeyForUser(clusterID, userID string) (data string, err error) {
+	var fetchKey string
+	user, err := dc.AdminID(clusterID)
 	if err != nil {
 		return
 	}
 
 	if user == userID {
-		credkey = csAdminKey
+		fetchKey = csAdminKey
 	} else {
-		user, err = dc.UserID(fsid)
+		user, err = dc.UserID(clusterID)
 		if err != nil {
 			return
 		}
 
 		if user != userID {
-			err = fmt.Errorf("requested user (%s) not found in cluster configuration of (%s)", userID, fsid)
+			err = fmt.Errorf("requested user (%s) not found in cluster configuration of (%s)", userID, clusterID)
 			return
 		}
 
-		credkey = csUserKey
+		fetchKey = csUserKey
 	}
 
-	return dc.dataForKey(fsid, credkey)
+	return dc.dataForKey(clusterID, fetchKey)
 }
 
 // NewConfigStore returns a config store based on value of configRoot. If
@@ -14,8 +14,6 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */
 
-// nolint: gocyclo
-
 package util
 
 import (
@@ -26,6 +24,7 @@ import (
 )
 
 var basePath = "./test_artifacts"
+var clusterID = "testclusterid"
 var cs *ConfigStore
 
 func cleanupTestData() {
@@ -51,20 +50,20 @@ func TestConfigStore(t *testing.T) {
 		t.Errorf("Test setup error %s", err)
 	}
 
-	// TEST: Should fail as fsid directory is missing
-	_, err = cs.Mons("testfsid")
+	// TEST: Should fail as clusterid directory is missing
+	_, err = cs.Mons(clusterID)
 	if err == nil {
 		t.Errorf("Failed: expected error due to missing parent directory")
 	}
 
-	testDir = basePath + "/" + "ceph-cluster-testfsid"
+	testDir = basePath + "/" + "ceph-cluster-" + clusterID
 	err = os.MkdirAll(testDir, 0700)
 	if err != nil {
 		t.Errorf("Test setup error %s", err)
 	}
 
 	// TEST: Should fail as mons file is missing
-	_, err = cs.Mons("testfsid")
+	_, err = cs.Mons(clusterID)
 	if err == nil {
 		t.Errorf("Failed: expected error due to missing mons file")
 	}
@@ -76,7 +75,7 @@ func TestConfigStore(t *testing.T) {
 	}
 
 	// TEST: Should fail as MONs is an empty string
-	content, err = cs.Mons("testfsid")
+	content, err = cs.Mons(clusterID)
 	if err == nil {
 		t.Errorf("Failed: want (%s), got (%s)", data, content)
 	}
@@ -88,7 +87,7 @@ func TestConfigStore(t *testing.T) {
 	}
 
 	// TEST: Fetching MONs should succeed
-	content, err = cs.Mons("testfsid")
+	content, err = cs.Mons(clusterID)
 	if err != nil || content != data {
 		t.Errorf("Failed: want (%s), got (%s), err (%s)", data, content, err)
 	}
@@ -100,7 +99,7 @@ func TestConfigStore(t *testing.T) {
 	}
 
 	// TEST: Fetching MONs should succeed
-	listContent, err := cs.Pools("testfsid")
+	listContent, err := cs.Pools(clusterID)
 	if err != nil || strings.Join(listContent, ",") != data {
 		t.Errorf("Failed: want (%s), got (%s), err (%s)", data, content, err)
 	}
@@ -112,7 +111,7 @@ func TestConfigStore(t *testing.T) {
 	}
 
 	// TEST: Fetching provuser should succeed
-	content, err = cs.AdminID("testfsid")
+	content, err = cs.AdminID(clusterID)
 	if err != nil || content != data {
 		t.Errorf("Failed: want (%s), got (%s), err (%s)", data, content, err)
 	}
@@ -124,7 +123,7 @@ func TestConfigStore(t *testing.T) {
 	}
 
 	// TEST: Fetching pubuser should succeed
-	content, err = cs.UserID("testfsid")
+	content, err = cs.UserID(clusterID)
 	if err != nil || content != data {
 		t.Errorf("Failed: want (%s), got (%s), err (%s)", data, content, err)
 	}
@@ -136,7 +135,7 @@ func TestConfigStore(t *testing.T) {
 	}
 
 	// TEST: Fetching provkey should succeed
-	content, err = cs.CredentialForUser("testfsid", "provuser")
+	content, err = cs.KeyForUser(clusterID, "provuser")
 	if err != nil || content != data {
 		t.Errorf("Failed: want (%s), got (%s), err (%s)", data, content, err)
 	}
@@ -148,13 +147,13 @@ func TestConfigStore(t *testing.T) {
 	}
 
 	// TEST: Fetching pubkey should succeed
-	content, err = cs.CredentialForUser("testfsid", "pubuser")
+	content, err = cs.KeyForUser(clusterID, "pubuser")
 	if err != nil || content != data {
 		t.Errorf("Failed: want (%s), got (%s), err (%s)", data, content, err)
 	}
 
 	// TEST: Fetching random user key should fail
-	_, err = cs.CredentialForUser("testfsid", "random")
+	_, err = cs.KeyForUser(clusterID, "random")
 	if err == nil {
 		t.Errorf("Failed: Expected to fail fetching random user key")
 	}
@@ -30,7 +30,8 @@ BasePath defines the directory under which FileConfig will attempt to open and
 read contents of various Ceph cluster configurations.
 
 Each Ceph cluster configuration is stored under a directory named,
-BasePath/ceph-cluster-<fsid>, where <fsid> is the Ceph cluster fsid.
+BasePath/ceph-cluster-<clusterid>, where <clusterid> uniquely identifies and
+separates each Ceph cluster configuration.
 
 Under each Ceph cluster configuration directory, individual files named as per
 the ConfigKeys constants in the ConfigStore interface, store the required
@@ -42,12 +43,12 @@ type FileConfig struct {
 
 // DataForKey reads the appropriate config file, named using key, and returns
 // the contents of the file to the caller
-func (fc *FileConfig) DataForKey(fsid string, key string) (data string, err error) {
-	pathToKey := path.Join(fc.BasePath, "ceph-cluster-"+fsid, key)
+func (fc *FileConfig) DataForKey(clusterid string, key string) (data string, err error) {
+	pathToKey := path.Join(fc.BasePath, "ceph-cluster-"+clusterid, key)
 	// #nosec
 	content, err := ioutil.ReadFile(pathToKey)
 	if err != nil || string(content) == "" {
-		err = fmt.Errorf("error fetching configuration for cluster ID (%s). (%s)", fsid, err)
+		err = fmt.Errorf("error fetching configuration for cluster ID (%s). (%s)", clusterid, err)
 		return
 	}
 
@@ -27,7 +27,8 @@ K8sConfig is a ConfigStore interface implementation that reads configuration
 information from k8s secrets.
 
 Each Ceph cluster configuration secret is expected to be named,
-ceph-cluster-<fsid>, where <fsid> is the Ceph cluster fsid.
+ceph-cluster-<clusterid>, where <clusterid> uniquely identifies and
+separates each Ceph cluster configuration.
 
 The secret is expected to contain keys, as defined by the ConfigKeys constants
 in the ConfigStore interface.
|
|||||||
Namespace string
|
Namespace string
|
||||||
}
|
}
|
||||||
|
|
||||||
// DataForKey reads the appropriate k8s secret, named using fsid, and returns
|
// DataForKey reads the appropriate k8s secret, named using clusterid, and
|
||||||
// the contents of key within the secret
|
// returns the contents of key within the secret
|
||||||
func (kc *K8sConfig) DataForKey(fsid string, key string) (data string, err error) {
|
func (kc *K8sConfig) DataForKey(clusterid string, key string) (data string, err error) {
|
||||||
secret, err := kc.Client.CoreV1().Secrets(kc.Namespace).Get("ceph-cluster-"+fsid, metav1.GetOptions{})
|
secret, err := kc.Client.CoreV1().Secrets(kc.Namespace).Get("ceph-cluster-"+clusterid, metav1.GetOptions{})
|
||||||
if err != nil {
|
if err != nil {
|
||||||
err = fmt.Errorf("error fetching configuration for cluster ID (%s). (%s)", fsid, err)
|
err = fmt.Errorf("error fetching configuration for cluster ID (%s). (%s)", clusterid, err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
content, ok := secret.Data[key]
|
content, ok := secret.Data[key]
|
||||||
if !ok {
|
if !ok {
|
||||||
err = fmt.Errorf("missing data for key (%s) in cluster configuration of (%s)", key, fsid)
|
err = fmt.Errorf("missing data for key (%s) in cluster configuration of (%s)", key, clusterid)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||