ci: fix mdl related failures

This commit addresses the issue:
https://github.com/ceph/ceph-csi/issues/3448.

Signed-off-by: riya-singhal31 <rsinghal@redhat.com>
riya-singhal31 2022-11-09 19:07:26 +05:30 committed by mergify[bot]
parent d721ed6c5c
commit 539686329f
24 changed files with 166 additions and 170 deletions


@ -8,18 +8,18 @@ Card](https://goreportcard.com/badge/github.com/ceph/ceph-csi)](https://goreport
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/5940/badge)](https://bestpractices.coreinfrastructure.org/projects/5940)
- [Ceph CSI](#ceph-csi)
- [Overview](#overview)
- [Project status](#project-status)
- [Known to work CO platforms](#known-to-work-co-platforms)
- [Support Matrix](#support-matrix)
- [Ceph-CSI features and available versions](#ceph-csi-features-and-available-versions)
- [CSI spec and Kubernetes version compatibility](#csi-spec-and-kubernetes-version-compatibility)
- [Ceph CSI Container images and release compatibility](#ceph-csi-container-images-and-release-compatibility)
- [Contributing to this repo](#contributing-to-this-repo)
- [Troubleshooting](#troubleshooting)
- [Weekly Bug Triage call](#weekly-bug-triage-call)
- [Dev standup](#dev-standup)
- [Contact](#contact)
This repo contains the Ceph
[Container Storage Interface (CSI)](https://github.com/container-storage-interface/)


@ -1,39 +1,39 @@
# Ceph-csi Upgrade
- [Ceph-csi Upgrade](#ceph-csi-upgrade)
- [Pre-upgrade considerations](#pre-upgrade-considerations)
- [Snapshot-controller and snapshot crd](#snapshot-controller-and-snapshot-crd)
- [Snapshot API version support matrix](#snapshot-api-version-support-matrix)
- [Upgrading from v3.2 to v3.3](#upgrading-from-v32-to-v33)
- [Upgrading from v3.3 to v3.4](#upgrading-from-v33-to-v34)
- [Upgrading from v3.4 to v3.5](#upgrading-from-v34-to-v35)
- [Upgrading from v3.5 to v3.6](#upgrading-from-v35-to-v36)
- [Upgrading from v3.6 to v3.7](#upgrading-from-v36-to-v37)
- [Upgrading CephFS](#upgrading-cephfs)
- [1. Upgrade CephFS Provisioner resources](#1-upgrade-cephfs-provisioner-resources)
- [1.1 Update the CephFS Provisioner RBAC](#11-update-the-cephfs-provisioner-rbac)
- [1.2 Update the CephFS Provisioner deployment](#12-update-the-cephfs-provisioner-deployment)
- [2. Upgrade CephFS Nodeplugin resources](#2-upgrade-cephfs-nodeplugin-resources)
- [2.1 Update the CephFS Nodeplugin RBAC](#21-update-the-cephfs-nodeplugin-rbac)
- [2.2 Update the CephFS Nodeplugin daemonset](#22-update-the-cephfs-nodeplugin-daemonset)
- [2.3 Manual deletion of CephFS Nodeplugin daemonset pods](#23-manual-deletion-of-cephfs-nodeplugin-daemonset-pods)
- [Delete removed CephFS PSP, Role and RoleBinding](#delete-removed-cephfs-psp-role-and-rolebinding)
- [Upgrading RBD](#upgrading-rbd)
- [3. Upgrade RBD Provisioner resources](#3-upgrade-rbd-provisioner-resources)
- [3.1 Update the RBD Provisioner RBAC](#31-update-the-rbd-provisioner-rbac)
- [3.2 Update the RBD Provisioner deployment](#32-update-the-rbd-provisioner-deployment)
- [4. Upgrade RBD Nodeplugin resources](#4-upgrade-rbd-nodeplugin-resources)
- [4.1 Update the RBD Nodeplugin RBAC](#41-update-the-rbd-nodeplugin-rbac)
- [4.2 Update the RBD Nodeplugin daemonset](#42-update-the-rbd-nodeplugin-daemonset)
- [Delete removed RBD PSP, Role and RoleBinding](#delete-removed-rbd-psp-role-and-rolebinding)
- [Upgrading NFS](#upgrading-nfs)
- [5. Upgrade NFS Provisioner resources](#5-upgrade-nfs-provisioner-resources)
- [5.1 Update the NFS Provisioner RBAC](#51-update-the-nfs-provisioner-rbac)
- [5.2 Update the NFS Provisioner deployment](#52-update-the-nfs-provisioner-deployment)
- [6. Upgrade NFS Nodeplugin resources](#6-upgrade-nfs-nodeplugin-resources)
- [6.1 Update the NFS Nodeplugin RBAC](#61-update-the-nfs-nodeplugin-rbac)
- [6.2 Update the NFS Nodeplugin daemonset](#62-update-the-nfs-nodeplugin-daemonset)
- [CSI Sidecar containers consideration](#csi-sidecar-containers-consideration)
## Pre-upgrade considerations
@ -226,10 +226,10 @@ For each node:
- Drain your application pods from the node
- Delete the CSI driver pods on the node
- The pods to delete will be named with a csi-cephfsplugin prefix and have a
random suffix on each node. However, there is no need to delete the
provisioner pods: csi-cephfsplugin-provisioner-*.
- The pod deletion causes the pods to be restarted and updated automatically
on the node.
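For illustration, a minimal sketch of the per-node sequence, assuming a
hypothetical node name `worker-1` and that the plugin pods carry the
`app=csi-cephfsplugin` label in the `ceph-csi` namespace (adjust names to your
deployment):

```bash
# Drain the node, delete only the plugin pods scheduled on it (the DaemonSet
# recreates them with the updated template), then make the node schedulable again.
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data
kubectl delete pod -n ceph-csi -l app=csi-cephfsplugin \
  --field-selector spec.nodeName=worker-1
kubectl uncordon worker-1
```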
#### Delete removed CephFS PSP, Role and RoleBinding


@ -77,13 +77,16 @@ following errors:
More details about the error codes can be found [here](https://www.gnu.org/software/libc/manual/html_node/Error-Codes.html)
For such mounts, the CephCSI nodeplugin returns volume_condition as
abnormal for the `NodeGetVolumeStats` RPC call.
### kernel client recovery
Once a mountpoint corruption is detected, the two methods below can be
used to recover from it.
* Reboot the node where the abnormal volume behavior is observed.
* Scale down all the applications using the CephFS PVC
on the node where abnormal mounts are present.
Once all the applications are deleted, scale up the application
to remount the CephFS PVC to application pods.
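As a sketch, for a hypothetical Deployment `my-cephfs-app` that mounts the
affected PVC:

```bash
# Scale the workload down so the corrupted mount is released, then scale it
# back up to get a fresh mount of the CephFS PVC.
kubectl scale deployment my-cephfs-app --replicas=0
# wait until the pods are gone, then:
kubectl scale deployment my-cephfs-app --replicas=1
```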


@ -21,12 +21,12 @@ For provisioning new snapshot-backed volumes, following configuration must be
set for storage class(es) and their PVCs respectively:
* StorageClass:
* Specify `backingSnapshot: "true"` parameter.
* PersistentVolumeClaim:
* Set `storageClassName` to point to your storage class with backing
snapshots enabled.
* Define `spec.dataSource` for your desired source volume snapshot.
* Set `spec.accessModes` to `ReadOnlyMany`. This is the only access mode that
is supported by this feature.
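For illustration, a minimal PVC sketch using hypothetical names
(`csi-cephfs-sc-backingsnap`, `cephfs-snapshot-1`); the referenced storage
class is assumed to set `backingSnapshot: "true"` next to the usual CephFS
parameters:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-snap-backed-pvc
spec:
  storageClassName: csi-cephfs-sc-backingsnap   # class with backingSnapshot: "true"
  dataSource:
    name: cephfs-snapshot-1
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadOnlyMany            # the only supported access mode for this feature
  resources:
    requests:
      storage: 1Gi
EOF
```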
### Mounting snapshots from pre-provisioned volumes


@ -220,9 +220,9 @@ possible to encrypt them with ceph-csi by using LUKS encryption.
* volume is attached to provisioner container
* on first time attachment
(no file system on the attached device, checked with blkid)
* passphrase is retrieved from selected KMS if KMS is in use
* device is encrypted with LUKS using a passphrase from K8s Secret or KMS
* image-meta updated to "encrypted" in Ceph
* passphrase is retrieved from selected KMS if KMS is in use
* device is open and device path is changed to use a mapper device
* mapper device is used instead of original one with usual workflow
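A rough manual approximation of that first-attach flow (ceph-csi performs
these steps internally; the device path and passphrase handling below are
simplified placeholders):

```bash
DEV=/dev/rbd0
if ! blkid "$DEV" >/dev/null; then      # no filesystem yet: first attachment
  # the passphrase would come from the K8s Secret or the configured KMS
  echo -n "$PASSPHRASE" | cryptsetup -q luksFormat "$DEV" -
  # ceph-csi also marks the image-meta as "encrypted" in Ceph at this point
fi
echo -n "$PASSPHRASE" | cryptsetup luksOpen "$DEV" crypt-rbd0 --key-file -
# the /dev/mapper/crypt-rbd0 device is what gets formatted and mounted
```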


@ -19,8 +19,8 @@ Work is in progress to add fscrypt support to CephFS for filesystem-level encryp
- [FSCrypt Kernel Documentation](https://www.kernel.org/doc/html/latest/filesystems/fscrypt.html)
- Management Tools
- [`fscrypt`](https://github.com/google/fscrypt)
- [`fscryptctl`](https://github.com/google/fscryptctl)
- [Ceph Feature Tracker: "Add fscrypt support to the kernel CephFS client"](https://tracker.ceph.com/issues/46690)
- [`fscrypt` design document](https://goo.gl/55cCrI)


@ -79,13 +79,13 @@ volume is present in the pool.
## Problems with volumeID Replication
* The clusterID can be different
* as the clusterID is the namespace where Rook is deployed, Rook might
be deployed in a different namespace on the secondary cluster
* In standalone Ceph-CSI the clusterID is fsID and fsID is unique per
cluster
* The poolID can be different
* PoolID which is encoded in the volumeID won't remain the same across
clusters
To solve this problem we need to have a new mapping between clusterIDs and the


@ -33,10 +33,10 @@ requirement by using dm-crypt module through cryptsetup cli interface.
[here](https://wiki.archlinux.org/index.php/Dm-crypt/Device_encryption#Encrypting_devices_with_cryptsetup)
Functions to implement necessary interaction are implemented in a separate
`cryptsetup.go` file.
* LuksFormat
* LuksOpen
* LuksClose
* LuksStatus
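Roughly, these helpers wrap the following cryptsetup invocations (illustrative
device and mapping names only):

```bash
echo -n "$PASSPHRASE" | cryptsetup -q luksFormat /dev/rbd0 -           # LuksFormat
echo -n "$PASSPHRASE" | cryptsetup luksOpen /dev/rbd0 crypt-rbd0 -d -  # LuksOpen
cryptsetup status crypt-rbd0                                           # LuksStatus
cryptsetup luksClose crypt-rbd0                                        # LuksClose
```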
* `CreateVolume`: refactored to prepare for encryption (tag image that it
requires encryption later), before returning, if encrypted volume option is


@ -54,7 +54,7 @@ Encryption Key (DEK) for PVC encryption:
- when creating the PVC the Ceph-CSI provisioner needs to store the Kubernetes
Namespace of the PVC in its metadata
- stores the `csi.volume.owner` (name of Tenant) in the metadata of the
volume and sets it as `rbdVolume.Owner`
- the Ceph-CSI node-plugin needs to request the Vault Token in the NodeStage
CSI operation and create/get the key for the PVC
@ -87,8 +87,8 @@ Kubernetes and other Container Orchestration frameworks is tracked in
- configuration of the VaultTokenKMS can be very similar to VaultKMS for common
settings
- the configuration can override the defaults for each Tenant separately
- Vault Service connection details (address, TLS options, ...)
- name of the Kubernetes Secret that can be looked up per tenant
- the configuration points to a Kubernetes Secret per Tenant that contains the
Vault Token
- the configuration points to an optional Kubernetes ConfigMap per Tenant that


@ -126,4 +126,4 @@ at [CephFS in-tree migration KEP](https://github.com/kubernetes/enhancements/iss
[Tracker Issue in Ceph CSI](https://github.com/ceph/ceph-csi/issues/2509)
[In-tree storage plugin to CSI Driver Migration KEP](https://github.com/kubernetes/enhancements/issues/625)


@ -1,21 +1,21 @@
# Steps and RBD CLI commands for RBD snapshot and clone operations
- [Steps and RBD CLI commands for RBD snapshot and clone operations](#steps-and-rbd-cli-commands-for-rbd-snapshot-and-clone-operations)
- [Create a snapshot from PVC](#create-a-snapshot-from-pvc)
- [steps to create a snapshot](#steps-to-create-a-snapshot)
- [RBD CLI commands to create snapshot](#rbd-cli-commands-to-create-snapshot)
- [Create PVC from a snapshot (datasource snapshot)](#create-pvc-from-a-snapshot-datasource-snapshot)
- [steps to create a pvc from snapshot](#steps-to-create-a-pvc-from-snapshot)
- [RBD CLI commands to create clone from snapshot](#rbd-cli-commands-to-create-clone-from-snapshot)
- [Delete a snapshot](#delete-a-snapshot)
- [steps to delete a snapshot](#steps-to-delete-a-snapshot)
- [RBD CLI commands to delete a snapshot](#rbd-cli-commands-to-delete-a-snapshot)
- [Delete a Volume (PVC)](#delete-a-volume-pvc)
- [steps to delete a volume](#steps-to-delete-a-volume)
- [RBD CLI commands to delete a volume](#rbd-cli-commands-to-delete-a-volume)
- [Volume cloning (datasource pvc)](#volume-cloning-datasource-pvc)
- [steps to create a Volume from Volume](#steps-to-create-a-volume-from-volume)
- [RBD CLI commands to create a Volume from Volume](#rbd-cli-commands-to-create-a-volume-from-volume)
This document outlines the commands used to create, delete, and restore RBD
snapshots, and to create a new RBD image from an existing RBD image.
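For a quick feel of the underlying CLI, a sketch with hypothetical pool/image
names (shown with the classic protect/clone sequence; the exact flags used by
ceph-csi differ):

```bash
rbd snap create replicapool/csi-vol-1@snap-1      # snapshot the image
rbd snap protect replicapool/csi-vol-1@snap-1     # protect it before cloning
rbd clone replicapool/csi-vol-1@snap-1 replicapool/csi-vol-1-clone
rbd flatten replicapool/csi-vol-1-clone           # detach the clone from its parent
rbd snap unprotect replicapool/csi-vol-1@snap-1
rbd snap rm replicapool/csi-vol-1@snap-1          # delete the snapshot
```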


@ -85,16 +85,16 @@ Volume healer does the below,
NodeStage, NodeUnstage, NodePublish, NodeUnPublish operations. Hence none of
the operations happen in parallel.
- Any issues if the NodeUnstage is issued by kubelet?
- This cannot be a problem as we take a lock at the Ceph-CSI level
- If the NodeUnstage succeeds, Ceph-CSI will return a StagingPath not found
error, and we can then skip the volume
- If the NodeUnstage fails because an operation is already in progress, the
volume gets unmounted in the next NodeUnstage
- What if the PVC is deleted?
- If the PVC is deleted, the volume attachment list might already get
refreshed and the entry will be skipped/deleted at the healer.
- If the request bails out with an Error NotFound for any reason, skip the
PVC, assuming it might have been deleted or the NodeUnstage might have
already happened.
- The Volume healer currently works with rbd-nbd, but the design can
accommodate other userspace mounters (for example, ceph-fuse).


@ -226,13 +226,13 @@ status:
* Take a backup of the PVC and PV objects on the primary cluster (cluster-1)
* Take a backup of the PVC `rbd-pvc`
```bash
kubectl get pvc rbd-pvc -oyaml >pvc-backup.yaml
```
* Take a backup of the PV corresponding to the PVC
```bash
kubectl get pv/pvc-65dc0aac-5e15-4474-90f4-7a3532c621ec -oyaml >pv_backup.yaml
@ -243,7 +243,7 @@ status:
* Restoring on the secondary cluster (cluster-2)
* Create storageclass on the secondary cluster
```bash
kubectl create -f examples/rbd/storageclass.yaml --context=cluster-2
@ -251,7 +251,7 @@ status:
storageclass.storage.k8s.io/csi-rbd-sc created
```
* Create VolumeReplicationClass on the secondary cluster
```bash
cat <<EOF | kubectl --context=cluster-2 apply -f -
@ -270,7 +270,7 @@ status:
volumereplicationclass.replication.storage.openshift.io/rbd-volumereplicationclass created
```
* If Persistent Volumes and Claims are created manually
on the secondary cluster, remove the `claimRef` on the
backed-up PV objects in the yaml files, so that the PV can
get bound to the new claim on the secondary cluster.
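One possible way to strip the `claimRef` from the backed-up PV before applying
it (assuming mikefarah's yq v4 is installed):

```bash
yq eval -i 'del(.spec.claimRef)' pv_backup.yaml
kubectl create -f pv_backup.yaml --context=cluster-2
```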
@ -350,7 +350,7 @@ Follow the below steps for planned migration of workload from primary
* Create the VolumeReplicationClass on the secondary site.
* Create the VolumeReplications for all the PVCs for which mirroring
is enabled
* `replicationState` should be `primary` for all the PVCs on
the secondary site.
* Check whether the image is marked `primary` on the secondary site
by verifying it in VolumeReplication CR status.


@ -1,12 +1,12 @@
# Dynamically Expand Volume
- [Dynamically Expand Volume](#dynamically-expand-volume)
- [Prerequisite](#prerequisite)
- [Expand RBD PVCs](#expand-rbd-pvcs)
- [Expand RBD Filesystem PVC](#expand-rbd-filesystem-pvc)
- [Expand RBD Block PVC](#expand-rbd-block-pvc)
- [Expand CephFS PVC](#expand-cephfs-pvc)
- [Expand CephFS Filesystem PVC](#expand-cephfs-filesystem-pvc)
## Prerequisite


@ -10,11 +10,11 @@ corresponding CSI (`rbd.csi.ceph.com`) driver.
- [Prerequisite](#prerequisite)
- [Volume operations after enabling CSI migration](#volume-operations-after-enabling-csi-migration)
- [Create volume](#create-volume)
- [Mount volume to a POD](#mount-volume-to-a-pod)
- [Resize volume](#resize-volume)
- [Unmount volume](#unmount-volume)
- [Delete volume](#delete-volume)
- [References](#additional-references)
### Prerequisite
@ -140,4 +140,3 @@ To know more about in-tree to CSI migration:
- [design doc](./design/proposals/intree-migrate.md)
- [Kubernetes 1.17 Feature: Kubernetes In-Tree to CSI Volume Migration Moves to Beta](https://Kubernetes.io/blog/2019/12/09/Kubernetes-1-17-feature-csi-migration-beta/)


@ -1,7 +1,7 @@
# Metrics
- [Metrics](#metrics)
- [Liveness](#liveness)
## Liveness


@ -1,12 +1,12 @@
# RBD NBD Mounter
- [RBD NBD Mounter](#rbd-nbd-mounter)
- [Overview](#overview)
- [Configuration](#configuration)
- [Configuring logging path](#configuring-logging-path)
- [Status](#status)
- [Support Matrix](#support-matrix)
- [CSI spec and Kubernetes version compatibility](#csi-spec-and-kubernetes-version-compatibility)
## Overview
@ -42,29 +42,29 @@ under the `cephLogDir` path on NodeStage(map) and removed the same on
the respective NodeUnstage(unmap).
- There are different strategies to maintain the logs
- `remove`: delete log file on unmap/detach (default behaviour)
- `compress`: compress the log file to gzip on unmap/detach, in case there
exists a `.gz` file from previous map/unmap of the same volume, then
override the previous log with new log.
- `preserve`: preserve the log file in text format
You can tweak the log strategy through the `cephLogStrategy` option in the
storageclass yaml.
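For illustration, a minimal storage class sketch showing only the
logging-related parameters (the usual `clusterID`, pool and secret parameters
are omitted and the names are placeholders):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc-nbd
provisioner: rbd.csi.ceph.com
parameters:
  mounter: rbd-nbd
  cephLogDir: /var/log/ceph
  cephLogStrategy: compress   # remove | compress | preserve
reclaimPolicy: Delete
EOF
```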
- In case you need a customized log path, you should do the following:
- Edit the DaemonSet templates to change the ceph log directory host-path
- If you are using helm charts, then you can use key `cephLogDirHostPath`
```
helm install --set cephLogDirHostPath=/var/log/ceph-csi/my-dir
```
- For standard templates edit [csi-rbdplugin.yaml](../deploy/rbd/kubernetes/csi-rbdplugin.yaml)
to update the `hostPath` for `ceph-logdir` and any `pathPrefix` spec entries.
- Update the StorageClass with the customized log directory path
- Now update rbd StorageClass for `cephLogDir`, for example
```
cephLogDir: "/var/log/prod-A-logs"


@ -1,10 +1,10 @@
# Ceph CSI driver Release Process
- [Ceph CSI driver Release Process](#ceph-csi-driver-release-process)
- [Introduction](#introduction)
- [Versioning](#versioning)
- [Tagging repositories](#tagging-repositories)
- [Release process [TBD]](#release-process-tbd)
## Introduction


@ -2,15 +2,15 @@
- [Prerequisite](#prerequisite)
- [Create CephFS Snapshot and Clone Volume](#create-cephfs-snapshot-and-clone-volume)
- [Create CephFS SnapshotClass](#create-cephfs-snapshotclass)
- [Create CephFS Snapshot](#create-cephfs-snapshot)
- [Restore CephFS Snapshot to a new PVC](#restore-cephfs-snapshot)
- [Clone CephFS PVC](#clone-cephfs-pvc)
- [Create RBD Snapshot and Clone Volume](#create-rbd-snapshot-and-clone-volume)
- [Create RBD SnapshotClass](#create-rbd-snapshotclass)
- [Create RBD Snapshot](#create-rbd-snapshot)
- [Restore RBD Snapshot to a new PVC](#restore-rbd-snapshot)
- [Clone RBD PVC](#clone-rbd-pvc)
## Prerequisite
@ -23,7 +23,7 @@
be a `volumesnapshotclass` object present in the cluster
for snapshot request to be satisfied.
- To install snapshot controller and CRD
```console
./scripts/install-snapshot.sh install
@ -36,7 +36,7 @@
SNAPSHOT_VERSION="v5.0.1" ./scripts/install-snapshot.sh install
```
- In the future, you can choose to clean up by running
```console
./scripts/install-snapshot.sh cleanup


@ -1,18 +1,18 @@
# Static PVC with ceph-csi
- [Static PVC with ceph-csi](#static-pvc-with-ceph-csi)
- [RBD static PVC](#rbd-static-pvc)
- [Create RBD image](#create-rbd-image)
- [Create RBD static PV](#create-rbd-static-pv)
- [RBD Volume Attributes in PV](#rbd-volume-attributes-in-pv)
- [Create RBD static PVC](#create-rbd-static-pvc)
- [Resize RBD image](#resize-rbd-image)
- [CephFS static PVC](#cephfs-static-pvc)
- [Create CephFS subvolume](#create-cephfs-subvolume)
- [Create CephFS static PV](#create-cephfs-static-pv)
- [Node stage secret ref in CephFS PV](#node-stage-secret-ref-in-cephfs-pv)
- [CephFS volume attributes in PV](#cephfs-volume-attributes-in-pv)
- [Create CephFS static PVC](#create-cephfs-static-pvc)
This document outlines how to create a static PV and static PVC from an
existing rbd image/cephFS volume.
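For example, the first step of an RBD static PVC is creating the image by hand
(hypothetical pool/image names); the image name and, typically, the output of
`ceph fsid` are then referenced from the static PV definition:

```bash
rbd create static-image --size=2G --pool=replicapool   # image referenced by the static PV
ceph fsid                                              # typically used as the clusterID
```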


@ -1,12 +1,12 @@
# End-to-End Testing
- [End-to-End Testing](#end-to-end-testing)
- [Introduction](#introduction)
- [Install Kubernetes](#install-kubernetes)
- [Deploy Rook](#deploy-rook)
- [Test parameters](#test-parameters)
- [E2E for snapshot](#e2e-for-snapshot)
- [Running E2E](#running-e2e)
## Introduction


@ -30,12 +30,12 @@ the required monitor details for the same, as in the provided [sample config
Gather the following information from the Ceph cluster(s) of choice,
* Ceph monitor list
* Typically in the output of `ceph mon dump`
* Used to prepare a list of `monitors` in the CSI configuration file
* Ceph Cluster fsid
* If choosing to use the Ceph cluster fsid as the unique value of clusterID,
* Output of `ceph fsid`
* Alternatively, choose a `<cluster-id>` value that is distinct per Ceph
cluster in use by this kubernetes cluster
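For example, both values can be gathered directly from the Ceph cluster in
question:

```bash
ceph mon dump   # monitor addresses for the "monitors" list
ceph fsid       # candidate value for the clusterID
```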
Update the [sample configmap](./csi-config-map-sample.yaml) with values


@ -3,13 +3,8 @@ all
#Refer below url for more information about the markdown rules.
#https://github.com/markdownlint/markdownlint/blob/master/docs/RULES.md
-rule 'MD013', :ignore_code_blocks => false, :tables => false, :line_length => 80
+rule 'MD013', :ignore_code_blocks => true, :tables => false, :line_length => 80
exclude_rule 'MD033' # In-line HTML: GitHub style markdown adds HTML tags
exclude_rule 'MD040' # Fenced code blocks should have a language specified
exclude_rule 'MD041' # First line in file should be a top level header
-# TODO: Enable the rules after making required changes.
-exclude_rule 'MD007' # Unordered list indentation
-exclude_rule 'MD012' # Multiple consecutive blank lines
-exclude_rule 'MD013' # Line length
-exclude_rule 'MD047' # File should end with a single newline character
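To reproduce the markdown lint locally, one option is the Ruby `mdl` gem (it
picks up the `.mdlrc` shown above when run from the repository root):

```bash
gem install mdl
mdl README.md docs/
```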


@ -5,4 +5,3 @@
`yamlgen` reads deployment configurations from the `api/` package and generates
YAML files that can be used for deploying without advanced automation like
Rook. The generated files are located under `deploy/`.