cleanup: Move common files to deploy folder

A few common files related to deployments were initially kept
in the examples folder. Move them to the deploy folder and
update the relevant files.

Signed-off-by: karthik-us <ksubrahm@redhat.com>
Author: karthik-us
Date: 2023-05-30 20:47:51 +05:30
Committed by: mergify[bot]
Parent: b5e68c810e
Commit: 6ac3a4dabc
11 changed files with 27 additions and 13 deletions


@@ -2,17 +2,17 @@
## Deploying Ceph-CSI services
Create [ceph-config](./ceph-conf.yaml) configmap using the following command.
Create [ceph-config](../deploy/ceph-conf.yaml) configmap using the following command.
```bash
kubectl apply -f ./ceph-conf.yaml
kubectl apply -f ../deploy/ceph-conf.yaml
```
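As an optional sanity check (not part of the original instructions), you can confirm that the configmap was created:
```bash
# Optional verification step (illustrative, not from the original doc):
# list the configmap and inspect its contents in the current namespace.
kubectl get configmap ceph-config -o yaml
```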
Both `rbd` and `cephfs` directories contain `plugin-deploy.sh` and
`plugin-teardown.sh` helper scripts. You can use those to help you
deploy/teardown RBACs, sidecar containers and the plugin in one go.
By default, they look for the YAML manifests in
`../../deploy/{rbd,cephfs}/kubernetes`.
`../deploy/{rbd,cephfs}/kubernetes`.
You can override this path by running
```bash
@@ -25,7 +25,7 @@ The CSI plugin requires configuration information regarding the Ceph cluster(s),
that would host the dynamically or statically provisioned volumes. This
is provided by adding a per-cluster identifier (referred to as clusterID), and
the required monitor details for the same, as in the provided [sample config
map](./csi-config-map-sample.yaml).
map](../deploy/csi-config-map-sample.yaml).
Gather the following information from the Ceph cluster(s) of choice,
@@ -38,13 +38,13 @@ Gather the following information from the Ceph cluster(s) of choice,
* Alternatively, choose a `<cluster-id>` value that is distinct per Ceph
cluster in use by this kubernetes cluster
Update the [sample configmap](./csi-config-map-sample.yaml) with values
Update the [sample configmap](../deploy/csi-config-map-sample.yaml) with values
from a Ceph cluster and replace `<cluster-id>` with the chosen clusterID, to
create the manifest for the configmap which can be updated in the cluster
using the following command,
```bash
kubectl replace -f ./csi-config-map-sample.yaml
kubectl replace -f ../deploy/csi-config-map-sample.yaml
```
Storage class and snapshot class, using `<cluster-id>` as the value for the
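For illustration only (this sketch is not part of the original README; the pool name, secret names and namespace are placeholders), a minimal RBD StorageClass that consumes the chosen `<cluster-id>` could look like this:
```yaml
# Hedged sketch: wiring <cluster-id> into an RBD StorageClass. The pool and
# secret names below are placeholders, not values from the repository.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>           # must match an entry in the CSI configmap
  pool: <rbd-pool-name>             # placeholder: an existing RBD pool
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
```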


@@ -1,21 +0,0 @@
---
# This is a sample configmap that helps define a Ceph configuration as required
# by the CSI plugins.
# Sample ceph.conf available at
# https://github.com/ceph/ceph/blob/master/src/sample.ceph.conf Detailed
# documentation is available at
# https://docs.ceph.com/en/latest/rados/configuration/ceph-conf/
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
    [global]
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
  # keyring is a required key and its value should be empty
  keyring: |
metadata:
  name: ceph-config
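For context, here is a hedged sketch (not part of the deleted file) of how the `ceph-config` configmap is typically consumed: mounted into a pod so that `ceph.conf` and the empty `keyring` appear under `/etc/ceph`. The pod name and image are illustrative placeholders.
```yaml
# Illustrative only: a throwaway Pod that mounts the ceph-config configmap
# and prints the resulting /etc/ceph/ceph.conf. Name and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: ceph-conf-demo
spec:
  restartPolicy: Never
  containers:
    - name: demo
      image: busybox:1.36            # placeholder image with a shell
      command: ["cat", "/etc/ceph/ceph.conf"]
      volumeMounts:
        - name: ceph-config
          mountPath: /etc/ceph
  volumes:
    - name: ceph-config
      configMap:
        name: ceph-config
```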


@@ -1,89 +0,0 @@
---
# This is a sample configmap that helps define a Ceph cluster configuration
# as required by the CSI plugins.
apiVersion: v1
kind: ConfigMap
# Let's look at the different configuration under the config.json key.
# The <cluster-id> is used by the CSI plugin to uniquely identify and use a
# Ceph cluster; the value MUST match the value provided as `clusterID` in the
# StorageClass.
# The <MONValue#> fields are the various monitor addresses for the Ceph cluster
# identified by the <cluster-id>.
# If a CSI plugin is using more than one Ceph cluster, repeat the section for
# each such cluster in use.
# To add more clusters or edit MON addresses in an existing configmap, use
# the `kubectl replace` command.
# The "rbd.radosNamespace" option is optional and represents a radosNamespace
# in the pool. If it is given, all of the RBD images, snapshots, and other
# metadata will be stored within that radosNamespace.
# NOTE: The given radosNamespace must already exist in the pool.
# NOTE: Make sure you don't add the radosNamespace option to a configuration
# that is currently in use, as it will cause issues.
# The field "cephFS.subvolumeGroup" is optional and defaults to "csi".
# The "cephFS.netNamespaceFilePath" fields are the network namespace paths for
# the Ceph cluster identified by the <cluster-id>. These will be used by the
# CephFS CSI plugin to execute `mount -t` in the network namespace specified
# by "cephFS.netNamespaceFilePath".
# The "nfs.netNamespaceFilePath" fields are the network namespace paths for
# the Ceph cluster identified by the <cluster-id>. These will be used by the
# NFS CSI plugin to execute `mount -t` in the network namespace specified
# by "nfs.netNamespaceFilePath".
# The "rbd.netNamespaceFilePath" fields are the network namespace paths for
# the Ceph cluster identified by the <cluster-id>. These will be used by the
# RBD CSI plugin to execute `rbd map/unmap` in the network namespace specified
# by "rbd.netNamespaceFilePath".
# If a CSI plugin is using more than one Ceph cluster, repeat the section for
# each such cluster in use.
# NOTE: Changes to the configmap are automatically propagated to the running
# pods, so restarting existing pods that use the configmap is NOT required on
# edits to the configmap.
# Let's look at the different configuration under the cluster-mapping.json key.
# This configuration is needed when volumes are mirrored using Ceph-CSI.
# clusterIDMapping holds the mapping between the clusterIDs of two storage
# clusters.
# RBDPoolIDMapping holds the mapping between the poolIDs of two storage
# clusters.
# CephFSFscIDMapping holds the mapping between the FscIDs of two storage
# clusters.
data:
  config.json: |-
    [
      {
        "clusterID": "<cluster-id>",
        "rbd": {
          "netNamespaceFilePath": "<kubeletRootPath>/plugins/rbd.csi.ceph.com/net",
          "radosNamespace": "<rados-namespace>",
        },
        "monitors": [
          "<MONValue1>",
          "<MONValue2>",
          ...
          "<MONValueN>"
        ],
        "cephFS": {
          "subvolumeGroup": "<subvolumegroup for cephFS volumes>"
          "netNamespaceFilePath": "<kubeletRootPath>/plugins/cephfs.csi.ceph.com/net",
        }
        "nfs": {
          "netNamespaceFilePath": "<kubeletRootPath>/plugins/nfs.csi.ceph.com/net",
        }
      }
    ]
  cluster-mapping.json: |-
    [
      {
        "clusterIDMapping": {
          "clusterID on site1": "clusterID on site2"
        },
        "RBDPoolIDMapping": [{
          "poolID on site1": "poolID on site2"
          ...
        }],
        "CephFSFscIDMapping": [{
          "CephFS FscID on site1": "CephFS FscID on site2"
          ...
        }]
      }
    ]
metadata:
  name: ceph-csi-config
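To make the placeholders above concrete, a minimal filled-in variant (not taken from the repository; the cluster ID and monitor addresses are invented example values, and the optional `rbd`/`cephFS`/`nfs` sections are omitted) could look like:
```yaml
# Hedged sketch: a minimal ceph-csi-config with a single cluster entry.
# The clusterID and monitor endpoints below are example values only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "b9127830-b0cc-4e34-aa47-9d1a2e9949a8",
        "monitors": [
          "192.168.1.10:6789",
          "192.168.1.11:6789"
        ]
      }
    ]
```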


@@ -1,22 +0,0 @@
---
# This is a servicemonitor that would be used by a prometheus collector to pick
# up liveness metrics. The label your prometheus instance is looking for will
need to be supplied and the namespace may need to be changed.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: csi-metrics
  namespace: rook-ceph
  labels:
    team: rook
spec:
  namespaceSelector:
    matchNames:
      - default
  selector:
    matchLabels:
      app: csi-metrics
  endpoints:
    - port: http-metrics
      path: /metrics
      interval: 5s
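For the selector above to match anything, a Service labelled `app: csi-metrics` with a port named `http-metrics` must exist. A hedged sketch follows (service name, namespace, pod selector and port numbers are illustrative placeholders, not values from the repository):
```yaml
# Illustrative only: a Service the ServiceMonitor above would select.
# Names, selector labels and port numbers are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: csi-metrics-rbdplugin
  namespace: default                # matches the namespaceSelector above
  labels:
    app: csi-metrics                # matched by spec.selector.matchLabels
spec:
  selector:
    app: csi-rbdplugin              # placeholder: label on the plugin pods
  ports:
    - name: http-metrics            # matched by the ServiceMonitor endpoint port
      port: 8080
      targetPort: 8680
```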