diff --git a/README.md b/README.md
index 6e9d89844..b3393e316 100644
--- a/README.md
+++ b/README.md
@@ -8,18 +8,18 @@ Card](https://goreportcard.com/badge/github.com/ceph/ceph-csi)](https://goreport
 [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/5940/badge)](https://bestpractices.coreinfrastructure.org/projects/5940)

 - [Ceph CSI](#ceph-csi)
-    - [Overview](#overview)
-    - [Project status](#project-status)
-    - [Known to work CO platforms](#known-to-work-co-platforms)
-    - [Support Matrix](#support-matrix)
-        - [Ceph-CSI features and available versions](#ceph-csi-features-and-available-versions)
-        - [CSI spec and Kubernetes version compatibility](#csi-spec-and-kubernetes-version-compatibility)
-        - [Ceph CSI Container images and release compatibility](#ceph-csi-container-images-and-release-compatibility)
-    - [Contributing to this repo](#contributing-to-this-repo)
-    - [Troubleshooting](#troubleshooting)
-    - [Weekly Bug Triage call](#weekly-bug-triage-call)
-    - [Dev standup](#dev-standup)
-    - [Contact](#contact)
+  - [Overview](#overview)
+  - [Project status](#project-status)
+  - [Known to work CO platforms](#known-to-work-co-platforms)
+  - [Support Matrix](#support-matrix)
+    - [Ceph-CSI features and available versions](#ceph-csi-features-and-available-versions)
+    - [CSI spec and Kubernetes version compatibility](#csi-spec-and-kubernetes-version-compatibility)
+    - [Ceph CSI Container images and release compatibility](#ceph-csi-container-images-and-release-compatibility)
+  - [Contributing to this repo](#contributing-to-this-repo)
+  - [Troubleshooting](#troubleshooting)
+  - [Weekly Bug Triage call](#weekly-bug-triage-call)
+  - [Dev standup](#dev-standup)
+  - [Contact](#contact)

 This repo contains the Ceph
 [Container Storage Interface (CSI)](https://github.com/container-storage-interface/)
diff --git a/docs/ceph-csi-upgrade.md b/docs/ceph-csi-upgrade.md
index 72e4c3c40..d62d1fae0 100644
--- a/docs/ceph-csi-upgrade.md
+++ b/docs/ceph-csi-upgrade.md
@@ -1,39 +1,39 @@
 # Ceph-csi Upgrade

 - [Ceph-csi Upgrade](#ceph-csi-upgrade)
-    - [Pre-upgrade considerations](#pre-upgrade-considerations)
-        - [Snapshot-controller and snapshot crd](#snapshot-controller-and-snapshot-crd)
-            - [Snapshot API version support matrix](#snapshot-api-version-support-matrix)
-    - [Upgrading from v3.2 to v3.3](#upgrading-from-v32-to-v33)
-    - [Upgrading from v3.3 to v3.4](#upgrading-from-v33-to-v34)
-    - [Upgrading from v3.4 to v3.5](#upgrading-from-v34-to-v35)
-    - [Upgrading from v3.5 to v3.6](#upgrading-from-v35-to-v36)
-    - [Upgrading from v3.6 to v3.7](#upgrading-from-v36-to-v37)
-        - [Upgrading CephFS](#upgrading-cephfs)
-            - [1. Upgrade CephFS Provisioner resources](#1-upgrade-cephfs-provisioner-resources)
-                - [1.1 Update the CephFS Provisioner RBAC](#11-update-the-cephfs-provisioner-rbac)
-                - [1.2 Update the CephFS Provisioner deployment](#12-update-the-cephfs-provisioner-deployment)
-            - [2. Upgrade CephFS Nodeplugin resources](#2-upgrade-cephfs-nodeplugin-resources)
-                - [2.1 Update the CephFS Nodeplugin RBAC](#21-update-the-cephfs-nodeplugin-rbac)
-                - [2.2 Update the CephFS Nodeplugin daemonset](#22-update-the-cephfs-nodeplugin-daemonset)
-                - [2.3 Manual deletion of CephFS Nodeplugin daemonset pods](#23-manual-deletion-of-cephfs-nodeplugin-daemonset-pods)
-            - [Delete removed CephFS PSP, Role and RoleBinding](#delete-removed-cephfs-psp-role-and-rolebinding)
-        - [Upgrading RBD](#upgrading-rbd)
-            - [3. Upgrade RBD Provisioner resources](#3-upgrade-rbd-provisioner-resources)
-                - [3.1 Update the RBD Provisioner RBAC](#31-update-the-rbd-provisioner-rbac)
-                - [3.2 Update the RBD Provisioner deployment](#32-update-the-rbd-provisioner-deployment)
-            - [4. Upgrade RBD Nodeplugin resources](#4-upgrade-rbd-nodeplugin-resources)
-                - [4.1 Update the RBD Nodeplugin RBAC](#41-update-the-rbd-nodeplugin-rbac)
-                - [4.2 Update the RBD Nodeplugin daemonset](#42-update-the-rbd-nodeplugin-daemonset)
-            - [Delete removed RBD PSP, Role and RoleBinding](#delete-removed-rbd-psp-role-and-rolebinding)
-        - [Upgrading NFS](#upgrading-nfs)
-            - [5. Upgrade NFS Provisioner resources](#5-upgrade-nfs-provisioner-resources)
-                - [5.1 Update the NFS Provisioner RBAC](#51-update-the-nfs-provisioner-rbac)
-                - [5.2 Update the NFS Provisioner deployment](#52-update-the-nfs-provisioner-deployment)
-            - [6. Upgrade NFS Nodeplugin resources](#6-upgrade-nfs-nodeplugin-resources)
-                - [6.1 Update the NFS Nodeplugin RBAC](#61-update-the-nfs-nodeplugin-rbac)
-                - [6.2 Update the NFS Nodeplugin daemonset](#62-update-the-nfs-nodeplugin-daemonset)
-    - [CSI Sidecar containers consideration](#csi-sidecar-containers-consideration)
+  - [Pre-upgrade considerations](#pre-upgrade-considerations)
+    - [Snapshot-controller and snapshot crd](#snapshot-controller-and-snapshot-crd)
+      - [Snapshot API version support matrix](#snapshot-api-version-support-matrix)
+  - [Upgrading from v3.2 to v3.3](#upgrading-from-v32-to-v33)
+  - [Upgrading from v3.3 to v3.4](#upgrading-from-v33-to-v34)
+  - [Upgrading from v3.4 to v3.5](#upgrading-from-v34-to-v35)
+  - [Upgrading from v3.5 to v3.6](#upgrading-from-v35-to-v36)
+  - [Upgrading from v3.6 to v3.7](#upgrading-from-v36-to-v37)
+    - [Upgrading CephFS](#upgrading-cephfs)
+      - [1. Upgrade CephFS Provisioner resources](#1-upgrade-cephfs-provisioner-resources)
+        - [1.1 Update the CephFS Provisioner RBAC](#11-update-the-cephfs-provisioner-rbac)
+        - [1.2 Update the CephFS Provisioner deployment](#12-update-the-cephfs-provisioner-deployment)
+      - [2. Upgrade CephFS Nodeplugin resources](#2-upgrade-cephfs-nodeplugin-resources)
+        - [2.1 Update the CephFS Nodeplugin RBAC](#21-update-the-cephfs-nodeplugin-rbac)
+        - [2.2 Update the CephFS Nodeplugin daemonset](#22-update-the-cephfs-nodeplugin-daemonset)
+        - [2.3 Manual deletion of CephFS Nodeplugin daemonset pods](#23-manual-deletion-of-cephfs-nodeplugin-daemonset-pods)
+      - [Delete removed CephFS PSP, Role and RoleBinding](#delete-removed-cephfs-psp-role-and-rolebinding)
+    - [Upgrading RBD](#upgrading-rbd)
+      - [3. Upgrade RBD Provisioner resources](#3-upgrade-rbd-provisioner-resources)
+        - [3.1 Update the RBD Provisioner RBAC](#31-update-the-rbd-provisioner-rbac)
+        - [3.2 Update the RBD Provisioner deployment](#32-update-the-rbd-provisioner-deployment)
+      - [4. Upgrade RBD Nodeplugin resources](#4-upgrade-rbd-nodeplugin-resources)
+        - [4.1 Update the RBD Nodeplugin RBAC](#41-update-the-rbd-nodeplugin-rbac)
+        - [4.2 Update the RBD Nodeplugin daemonset](#42-update-the-rbd-nodeplugin-daemonset)
+      - [Delete removed RBD PSP, Role and RoleBinding](#delete-removed-rbd-psp-role-and-rolebinding)
+    - [Upgrading NFS](#upgrading-nfs)
+      - [5. Upgrade NFS Provisioner resources](#5-upgrade-nfs-provisioner-resources)
+        - [5.1 Update the NFS Provisioner RBAC](#51-update-the-nfs-provisioner-rbac)
+        - [5.2 Update the NFS Provisioner deployment](#52-update-the-nfs-provisioner-deployment)
+      - [6. Upgrade NFS Nodeplugin resources](#6-upgrade-nfs-nodeplugin-resources)
+        - [6.1 Update the NFS Nodeplugin RBAC](#61-update-the-nfs-nodeplugin-rbac)
+        - [6.2 Update the NFS Nodeplugin daemonset](#62-update-the-nfs-nodeplugin-daemonset)
+  - [CSI Sidecar containers consideration](#csi-sidecar-containers-consideration)

 ## Pre-upgrade considerations
@@ -226,10 +226,10 @@ For each node:

 - Drain your application pods from the node
 - Delete the CSI driver pods on the node
-    - The pods to delete will be named with a csi-cephfsplugin prefix and have a
+  - The pods to delete will be named with a csi-cephfsplugin prefix and have a
     random suffix on each node. However, no need to delete the provisioner
     pods: csi-cephfsplugin-provisioner-* .
-    - The pod deletion causes the pods to be restarted and updated automatically
+  - The pod deletion causes the pods to be restarted and updated automatically
     on the node.

 #### Delete removed CephFS PSP, Role and RoleBinding
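The per-node restart described in the hunk above can be driven with plain
kubectl. A minimal sketch, assuming the nodeplugin pods carry the
`app=csi-cephfsplugin` label and run in the namespace the driver was deployed
to (verify both against your manifests; the node name is a placeholder):

```bash
NODE=worker-1  # placeholder node name

# Drain application pods from the node.
kubectl drain "${NODE}" --ignore-daemonsets --delete-emptydir-data

# Delete only the nodeplugin pods on this node; the DaemonSet recreates
# them with the updated image. Provisioner pods
# (csi-cephfsplugin-provisioner-*) are intentionally left untouched.
kubectl delete pod -l app=csi-cephfsplugin \
  --field-selector "spec.nodeName=${NODE}"

# Allow application pods to schedule on the node again.
kubectl uncordon "${NODE}"
```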
diff --git a/docs/ceph-mount-corruption.md b/docs/ceph-mount-corruption.md
index 7643223bc..476d6d8ab 100644
--- a/docs/ceph-mount-corruption.md
+++ b/docs/ceph-mount-corruption.md
@@ -77,13 +77,16 @@ following errors:
 More details about the error codes can be found
 [here](https://www.gnu.org/software/libc/manual/html_node/Error-Codes.html)

-For such mounts, The CephCSI nodeplugin returns volume_condition as abnormal for `NodeGetVolumeStats` RPC call.
+For such mounts, The CephCSI nodeplugin returns volume_condition as
+abnormal for `NodeGetVolumeStats` RPC call.

 ### kernel client recovery

-Once a mountpoint corruption is detected, Below are the two methods to recover from it.
+Once a mountpoint corruption is detected,
+Below are the two methods to recover from it.

 * Reboot the node where the abnormal volume behavior is observed.
-* Scale down all the applications using the CephFS PVC on the node where abnormal mounts
-  are present. Once all the applications are deleted, scale up the application
+* Scale down all the applications using the CephFS PVC
+  on the node where abnormal mounts are present.
+  Once all the applications are deleted, scale up the application
   to remount the CephFS PVC to application pods.
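For the scale-down/scale-up recovery path above, a rough sketch of the steps,
assuming a single hypothetical Deployment `my-app` is the only workload
mounting the affected CephFS PVC:

```bash
# Identify the pods on the node with the abnormal mount.
kubectl get pods --field-selector spec.nodeName=worker-1 -o wide

# Scale the consuming workload to zero so the corrupted mount is released.
kubectl scale deployment/my-app --replicas=0

# Once its pods are gone, scale back up; the new pod remounts the PVC.
kubectl wait --for=delete pod -l app=my-app --timeout=120s
kubectl scale deployment/my-app --replicas=1
```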
diff --git a/docs/cephfs-snapshot-backed-volumes.md b/docs/cephfs-snapshot-backed-volumes.md
index 618b2bc8f..296a4b30c 100644
--- a/docs/cephfs-snapshot-backed-volumes.md
+++ b/docs/cephfs-snapshot-backed-volumes.md
@@ -21,12 +21,12 @@ For provisioning new snapshot-backed volumes, following configuration must be
 set for storage class(es) and their PVCs respectively:

 * StorageClass:
-    * Specify `backingSnapshot: "true"` parameter.
+  * Specify `backingSnapshot: "true"` parameter.
 * PersistentVolumeClaim:
-    * Set `storageClassName` to point to your storage class with backing
+  * Set `storageClassName` to point to your storage class with backing
     snapshots enabled.
-    * Define `spec.dataSource` for your desired source volume snapshot.
-    * Set `spec.accessModes` to `ReadOnlyMany`. This is the only access mode that
+  * Define `spec.dataSource` for your desired source volume snapshot.
+  * Set `spec.accessModes` to `ReadOnlyMany`. This is the only access mode that
     is supported by this feature.

 ### Mounting snapshots from pre-provisioned volumes
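Combining the StorageClass and PVC requirements listed above into one claim —
a sketch only: the class, snapshot, and size values are placeholders, and the
referenced StorageClass must already set `backingSnapshot: "true"`:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: snap-backed-pvc                 # placeholder name
spec:
  storageClassName: csi-cephfs-backing  # class with backingSnapshot: "true"
  accessModes:
    - ReadOnlyMany                      # the only supported access mode
  resources:
    requests:
      storage: 1Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: cephfs-pvc-snapshot           # placeholder VolumeSnapshot name
EOF
```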
diff --git a/docs/deploy-rbd.md b/docs/deploy-rbd.md
index 67695ac2e..7bb467894 100644
--- a/docs/deploy-rbd.md
+++ b/docs/deploy-rbd.md
@@ -220,9 +220,9 @@ possible to encrypt them with ceph-csi by using LUKS encryption.

 * volume is attached to provisioner container
 * on first time attachment (no file system on the attached device, checked with blkid)
-    * passphrase is retrieved from selected KMS if KMS is in use
-    * device is encrypted with LUKS using a passphrase from K8s Secret or KMS
-    * image-meta updated to "encrypted" in Ceph
+  * passphrase is retrieved from selected KMS if KMS is in use
+  * device is encrypted with LUKS using a passphrase from K8s Secret or KMS
+  * image-meta updated to "encrypted" in Ceph
 * passphrase is retrieved from selected KMS if KMS is in use
 * device is open and device path is changed to use a mapper device
 * mapper device is used instead of original one with usual workflow
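The first-attach branch above is essentially "no filesystem signature means a
fresh device". A rough shell equivalent of that provisioner-side flow —
ceph-csi performs this through Go bindings, and the device path, image name,
and metadata key here are illustrative only:

```bash
DEV=/dev/rbd0  # placeholder for the attached RBD device

# blkid exits non-zero when it finds no filesystem or LUKS signature,
# which marks this as a first-time attachment.
if ! blkid "${DEV}"; then
  # The passphrase would come from a K8s Secret or the configured KMS.
  echo -n "${PASSPHRASE}" | cryptsetup -q luksFormat "${DEV}" -
  # Record the encrypted state in the image metadata (key name is
  # illustrative, not the exact one ceph-csi uses).
  rbd image-meta set mypool/myimage encrypted encrypted
fi
```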
diff --git a/docs/design/proposals/cephfs-fscrypt.md b/docs/design/proposals/cephfs-fscrypt.md
index b96d94fe3..f1b01e668 100644
--- a/docs/design/proposals/cephfs-fscrypt.md
+++ b/docs/design/proposals/cephfs-fscrypt.md
@@ -19,8 +19,8 @@ Work is in progress to add fscrypt support to CephFS for filesystem-level encryp
 - [FSCrypt Kernel Documentation](https://www.kernel.org/doc/html/latest/filesystems/fscrypt.html)
 - Management Tools
-    - [`fscrypt`](https://github.com/google/fscrypt)
-    - [`fscryptctl`](https://github.com/google/fscryptctl)
+  - [`fscrypt`](https://github.com/google/fscrypt)
+  - [`fscryptctl`](https://github.com/google/fscryptctl)
 - [Ceph Feature Tracker: "Add fscrypt support to the kernel CephFS client"](https://tracker.ceph.com/issues/46690)
 - [`fscrypt` design document](https://goo.gl/55cCrI)
diff --git a/docs/design/proposals/clusterid-mapping.md b/docs/design/proposals/clusterid-mapping.md
index 182b87e3a..9e72205f3 100644
--- a/docs/design/proposals/clusterid-mapping.md
+++ b/docs/design/proposals/clusterid-mapping.md
@@ -79,13 +79,13 @@ volume is present in the pool.
 ## Problems with volumeID Replication

 * The clusterID can be different
-    * as the clusterID is the namespace where rook is deployed, the Rook might
+  * as the clusterID is the namespace where rook is deployed, the Rook might
     be deployed in the different namespace on a secondary cluster
-    * In standalone Ceph-CSI the clusterID is fsID and fsID is unique per
+  * In standalone Ceph-CSI the clusterID is fsID and fsID is unique per
     cluster
 * The poolID can be different
-    * PoolID which is encoded in the volumeID won't remain the same across
+  * PoolID which is encoded in the volumeID won't remain the same across
     clusters

 To solve this problem we need to have a new mapping between clusterID's and the
diff --git a/docs/design/proposals/encrypted-pvc.md b/docs/design/proposals/encrypted-pvc.md
index c2ff751f8..5a9d1d5c7 100644
--- a/docs/design/proposals/encrypted-pvc.md
+++ b/docs/design/proposals/encrypted-pvc.md
@@ -33,10 +33,10 @@ requirement by using dm-crypt module through cryptsetup cli interface.
 [here](https://wiki.archlinux.org/index.php/Dm-crypt/Device_encryption#Encrypting_devices_with_cryptsetup)
 Functions to implement necessary interaction are implemented in a separate
 `cryptsetup.go` file.
-    * LuksFormat
-    * LuksOpen
-    * LuksClose
-    * LuksStatus
+  * LuksFormat
+  * LuksOpen
+  * LuksClose
+  * LuksStatus

 * `CreateVolume`: refactored to prepare for encryption (tag image that it
   requires encryption later), before returning, if encrypted volume option is
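The four wrapper functions named in the encrypted-pvc proposal above map
directly onto cryptsetup subcommands; roughly (device and mapping names are
placeholders):

```bash
# LuksFormat: initialize a LUKS header on the device.
echo -n "${PASSPHRASE}" | cryptsetup -q luksFormat /dev/rbd0 -

# LuksOpen: unlock the device under a dm-crypt mapping.
echo -n "${PASSPHRASE}" | cryptsetup luksOpen /dev/rbd0 crypt-rbd0 --key-file -

# LuksStatus: report whether the mapping is active.
cryptsetup status crypt-rbd0

# LuksClose: tear down the mapping.
cryptsetup luksClose crypt-rbd0
```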
diff --git a/docs/design/proposals/encryption-with-vault-tokens.md b/docs/design/proposals/encryption-with-vault-tokens.md
index 9fe832e03..e5d84c609 100644
--- a/docs/design/proposals/encryption-with-vault-tokens.md
+++ b/docs/design/proposals/encryption-with-vault-tokens.md
@@ -54,7 +54,7 @@ Encryption Key (DEK) for PVC encryption:
 - when creating the PVC the Ceph-CSI provisioner needs to store the Kubernetes
   Namespace of the PVC in its metadata
-    - stores the `csi.volume.owner` (name of Tenant) in the metadata of the
+  - stores the `csi.volume.owner` (name of Tenant) in the metadata of the
     volume and sets it as `rbdVolume.Owner`
 - the Ceph-CSI node-plugin needs to request the Vault Token in the NodeStage
   CSI operation and create/get the key for the PVC
@@ -87,8 +87,8 @@ Kubernetes and other Container Orchestration frameworks is tracked in
 - configuration of the VaultTokenKMS can be very similar to VaultKMS for
   common settings
 - the configuration can override the defaults for each Tenant separately
-    - Vault Service connection details (address, TLS options, ...)
-    - name of the Kubernetes Secret that can be looked up per tenant
+  - Vault Service connection details (address, TLS options, ...)
+  - name of the Kubernetes Secret that can be looked up per tenant
 - the configuration points to a Kubernetes Secret per Tenant that contains the
   Vault Token
 - the configuration points to an optional Kubernetes ConfigMap per Tenant that
diff --git a/docs/design/proposals/intree-migrate.md b/docs/design/proposals/intree-migrate.md
index 0d3c4536a..17f399060 100644
--- a/docs/design/proposals/intree-migrate.md
+++ b/docs/design/proposals/intree-migrate.md
@@ -126,4 +126,4 @@ at [CephFS in-tree migration KEP](https://github.com/kubernetes/enhancements/iss
 [Tracker Issue in Ceph CSI](https://github.com/ceph/ceph-csi/issues/2509)

-[In-tree storage plugin to CSI Driver Migration KEP](https://github.com/kubernetes/enhancements/issues/625)
\ No newline at end of file
+[In-tree storage plugin to CSI Driver Migration KEP](https://github.com/kubernetes/enhancements/issues/625)
diff --git a/docs/design/proposals/rbd-snap-clone.md b/docs/design/proposals/rbd-snap-clone.md
index 7085276c0..ce1f0b857 100644
--- a/docs/design/proposals/rbd-snap-clone.md
+++ b/docs/design/proposals/rbd-snap-clone.md
@@ -1,21 +1,21 @@
 # Steps and RBD CLI commands for RBD snapshot and clone operations

 - [Steps and RBD CLI commands for RBD snapshot and clone operations](#steps-and-rbd-cli-commands-for-rbd-snapshot-and-clone-operations)
-    - [Create a snapshot from PVC](#create-a-snapshot-from-pvc)
-        - [steps to create a snapshot](#steps-to-create-a-snapshot)
-        - [RBD CLI commands to create snapshot](#rbd-cli-commands-to-create-snapshot)
-    - [Create PVC from a snapshot (datasource snapshot)](#create-pvc-from-a-snapshot-datasource-snapshot)
-        - [steps to create a pvc from snapshot](#steps-to-create-a-pvc-from-snapshot)
-        - [RBD CLI commands to create clone from snapshot](#rbd-cli-commands-to-create-clone-from-snapshot)
-    - [Delete a snapshot](#delete-a-snapshot)
-        - [steps to delete a snapshot](#steps-to-delete-a-snapshot)
-        - [RBD CLI commands to delete a snapshot](#rbd-cli-commands-to-delete-a-snapshot)
-    - [Delete a Volume (PVC)](#delete-a-volume-pvc)
-        - [steps to delete a volume](#steps-to-delete-a-volume)
-        - [RBD CLI commands to delete a volume](#rbd-cli-commands-to-delete-a-volume)
-    - [Volume cloning (datasource pvc)](#volume-cloning-datasource-pvc)
-        - [steps to create a Volume from Volume](#steps-to-create-a-volume-from-volume)
-        - [RBD CLI commands to create a Volume from Volume](#rbd-cli-commands-to-create-a-volume-from-volume)
+  - [Create a snapshot from PVC](#create-a-snapshot-from-pvc)
+    - [steps to create a snapshot](#steps-to-create-a-snapshot)
+    - [RBD CLI commands to create snapshot](#rbd-cli-commands-to-create-snapshot)
+  - [Create PVC from a snapshot (datasource snapshot)](#create-pvc-from-a-snapshot-datasource-snapshot)
+    - [steps to create a pvc from snapshot](#steps-to-create-a-pvc-from-snapshot)
+    - [RBD CLI commands to create clone from snapshot](#rbd-cli-commands-to-create-clone-from-snapshot)
+  - [Delete a snapshot](#delete-a-snapshot)
+    - [steps to delete a snapshot](#steps-to-delete-a-snapshot)
+    - [RBD CLI commands to delete a snapshot](#rbd-cli-commands-to-delete-a-snapshot)
+  - [Delete a Volume (PVC)](#delete-a-volume-pvc)
+    - [steps to delete a volume](#steps-to-delete-a-volume)
+    - [RBD CLI commands to delete a volume](#rbd-cli-commands-to-delete-a-volume)
+  - [Volume cloning (datasource pvc)](#volume-cloning-datasource-pvc)
+    - [steps to create a Volume from Volume](#steps-to-create-a-volume-from-volume)
+    - [RBD CLI commands to create a Volume from Volume](#rbd-cli-commands-to-create-a-volume-from-volume)

 This document outlines the command used to create RBD snapshot, delete RBD
 snapshot, Restore RBD snapshot and Create new RBD image from existing RBD
 image.
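The renumbered sections above describe the snapshot and clone flows; for
orientation, the underlying CLI sequence looks roughly like the following
(pool and image names are placeholders, and ceph-csi drives these operations
through librbd rather than the rbd binary):

```bash
# Snapshot an image, then clone the snapshot into a new image
# (clone v2 avoids having to protect the snapshot first).
rbd snap create mypool/myimage@mysnap
rbd clone --rbd-default-clone-format 2 \
  mypool/myimage@mysnap mypool/myclone

# Flatten the clone to detach it from its parent, then remove the snapshot.
rbd flatten mypool/myclone
rbd snap rm mypool/myimage@mysnap
```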
diff --git a/docs/design/proposals/rbd-volume-healer.md b/docs/design/proposals/rbd-volume-healer.md
index f5bb7d823..4bd734fb7 100644
--- a/docs/design/proposals/rbd-volume-healer.md
+++ b/docs/design/proposals/rbd-volume-healer.md
@@ -85,16 +85,16 @@ Volume healer does the below,
   NodeStage, NodeUnstage, NodePublish, NodeUnPublish operations. Hence none
   of the operations happen in parallel.
 - Any issues if the NodeUnstage is issued by kubelet?
-    - This can not be a problem as we take a lock at the Ceph-CSI level
-    - If the NodeUnstage success, Ceph-CSI will return StagingPath not found
+  - This can not be a problem as we take a lock at the Ceph-CSI level
+  - If the NodeUnstage success, Ceph-CSI will return StagingPath not found
     error, we can then skip
-    - If the NodeUnstage fails with an operation already going on, in the next
+  - If the NodeUnstage fails with an operation already going on, in the next
     NodeUnstage the volume gets unmounted
 - What if the PVC is deleted?
-    - If the PVC is deleted, the volume attachment list might already get
+  - If the PVC is deleted, the volume attachment list might already get
     refreshed and entry will be skipped/deleted at the healer.
-    - For any reason, If the request bails out with Error NotFound, skip the
+  - For any reason, If the request bails out with Error NotFound, skip the
     PVC, assuming it might have deleted or the NodeUnstage might have already
     happened.
-    - The Volume healer currently works with rbd-nbd, but the design can
-      accommodate other userspace mounters (may be ceph-fuse).
\ No newline at end of file
+  - The Volume healer currently works with rbd-nbd, but the design can
+    accommodate other userspace mounters (may be ceph-fuse).
diff --git a/docs/disaster-recovery.md b/docs/disaster-recovery.md
index f27711b5a..3bda07ec1 100644
--- a/docs/disaster-recovery.md
+++ b/docs/disaster-recovery.md
@@ -226,13 +226,13 @@ status:

 * Take a backup of PVC and PV object on primary cluster(cluster-1)

-    * Take backup of the PVC `rbd-pvc`
+  * Take backup of the PVC `rbd-pvc`

    ```bash
    kubectl get pvc rbd-pvc -oyaml >pvc-backup.yaml
    ```

-    * Take a backup of the PV, corresponding to the PVC
+  * Take a backup of the PV, corresponding to the PVC

    ```bash
    kubectl get pv/pvc-65dc0aac-5e15-4474-90f4-7a3532c621ec -oyaml >pv_backup.yaml
    ```
@@ -243,7 +243,7 @@ status:

 * Restoring on the secondary cluster(cluster-2)

-    * Create storageclass on the secondary cluster
+  * Create storageclass on the secondary cluster

    ```bash
    kubectl create -f examples/rbd/storageclass.yaml --context=cluster-2

    storageclass.storage.k8s.io/csi-rbd-sc created
    ```
@@ -251,7 +251,7 @@ status:

-    * Create VolumeReplicationClass on the secondary cluster
+  * Create VolumeReplicationClass on the secondary cluster

    ```bash
    cat <
    ```
diff --git a/examples/README.md b/examples/README.md
--- a/examples/README.md
+++ b/examples/README.md
-  * If choosing to use the Ceph cluster fsid as the unique value of clusterID,
-    * Output of `ceph fsid`
-  * Alternatively, choose a `<cluster-id>` value that is distinct per Ceph
+* If choosing to use the Ceph cluster fsid as the unique value of clusterID,
+  * Output of `ceph fsid`
+* Alternatively, choose a `<cluster-id>` value that is distinct per Ceph
   cluster in use by this kubernetes cluster

 Update the [sample configmap](./csi-config-map-sample.yaml) with values
diff --git a/scripts/mdl-style.rb b/scripts/mdl-style.rb
index 99a4f3d00..34685a2be 100644
--- a/scripts/mdl-style.rb
+++ b/scripts/mdl-style.rb
@@ -3,13 +3,8 @@ all
 #Refer below url for more information about the markdown rules.
 #https://github.com/markdownlint/markdownlint/blob/master/docs/RULES.md

-rule 'MD013', :ignore_code_blocks => false, :tables => false, :line_length => 80
+rule 'MD013', :ignore_code_blocks => true, :tables => false, :line_length => 80
 exclude_rule 'MD033' # In-line HTML: GitHub style markdown adds HTML tags
 exclude_rule 'MD040' # Fenced code blocks should have a language specified
 exclude_rule 'MD041' # First line in file should be a top level header
-# TODO: Enable the rules after making required changes.
-exclude_rule 'MD007' # Unordered list indentation
-exclude_rule 'MD012' # Multiple consecutive blank lines
-exclude_rule 'MD013' # Line length
-exclude_rule 'MD047' # File should end with a single newline character
\ No newline at end of file
diff --git a/tools/README.md b/tools/README.md
index 7d64eee99..043367ee4 100644
--- a/tools/README.md
+++ b/tools/README.md
@@ -5,4 +5,3 @@
 `yamlgen` reads deployment configurations from the `api/` package and
 generates YAML files that can be used for deploying without advanced
 automation like Rook. The generated files are located under `deploy/`.
-
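With the mdl-style.rb change above, MD007, MD012, MD013, and MD047 are
enforced again (and MD013 now skips code blocks). To check the docs locally
before pushing — a sketch assuming the `mdl` gem is available; the exact
invocation used by CI may differ:

```bash
gem install mdl
mdl --style scripts/mdl-style.rb docs/ README.md tools/README.md
```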