In csi-external-provisioner v5.0.1, topology-aware
provisioning is enabled by default. As a result, the provisioner
now expects topologyKeys to be present in the CSINode object, which
the user must set via the `--domainlabels` flag in the RBD nodeplugin.
Issue: Users upgrading to v3.12.0 who were not previously using
topology-aware provisioning may encounter issues when provisioning
RBD PVCs, as the `--domainlabels` flag might not be set.
Fix: To address this, add `--immediate-topology=false` to disable
topology-aware provisioning. Users requiring topology-aware
provisioning should set the volumeBindingMode to
`WaitForFirstConsumer` and `TopologyConstrainedPools` as required in
the StorageClass, and configure the `--domainlabels` flag in the RBD nodeplugin.
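As a minimal sketch, the fix amounts to adding the flag to the csi-provisioner
container in the RBD provisioner deployment; the container name, image, and the
other args below are illustrative, not taken from this change:

```yaml
# Excerpt from the RBD provisioner Deployment (illustrative).
containers:
  - name: csi-provisioner
    image: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1
    args:
      - "--csi-address=$(ADDRESS)"
      # Per this fix: disable topology-aware provisioning so that
      # topologyKeys on the CSINode object are not required when
      # --domainlabels is not set on the RBD nodeplugin.
      - "--immediate-topology=false"
```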
Signed-off-by: Praveen M <m.praveen@ibm.com>
This commit removes the Topology feature gate as it is now enabled by default
and will be removed in a future release. It is the CSI driver's responsibility to
report the `VOLUME_ACCESSIBILITY_CONSTRAINTS` capability so that topology gets
enabled in the external-provisioner. When the driver doesn't report it, the
external-provisioner disables topology support.
As of this change, only the RBD driver supports topology-based volume provisioning
and it reports the `VOLUME_ACCESSIBILITY_CONSTRAINTS` capability,
enabling topology support in the external-provisioner.
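For reference, the change effectively drops the explicit gate from the
csi-provisioner sidecar args; the surrounding args are illustrative:

```yaml
# csi-provisioner sidecar args (illustrative excerpt).
args:
  - "--csi-address=$(ADDRESS)"
  # No longer needed: Topology is enabled by default in the
  # external-provisioner and the gate will be removed upstream.
  # - "--feature-gates=Topology=true"
```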
Signed-off-by: Praveen M <m.praveen@ibm.com>
When issues or bugs are reported, users often share the logs of the
default container in a Pod. These logs do not contain the required
information, as it can mostly only be found in the logs of the
Ceph-CSI container (named csi-cephfsplugin or csi-rbdplugin).
By moving the Ceph-CSI containers to the first position in the Pods' container list,
they become the default container for commands like `kubectl logs`.
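The reordering looks roughly like the sketch below; only container names are
shown, and the non-Ceph-CSI names are illustrative:

```yaml
# RBD nodeplugin Pod spec (illustrative excerpt, images and other fields omitted).
containers:
  - name: csi-rbdplugin        # first in the list, so the default for `kubectl logs`
  - name: driver-registrar
  - name: liveness-prometheus
```

With this ordering, `kubectl logs <pod>` shows the Ceph-CSI logs without needing
an explicit `-c csi-rbdplugin`.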
Signed-off-by: Niels de Vos <ndevos@ibm.com>
The image is now available in the release repository and can be fetched from
there instead of the staging repository.
Signed-off-by: Sebastian Hoß <seb@xn--ho-hia.de>
The following sidecars are updated with this commit:
csi-provisioner: v3.3.0
csi-snapshotter: v6.1.0
This commit changes the sidecar versions in the build.env setup.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
At present we have a single log level configuration for all the containers
running in our CSI pods, which defaults to log level 5.
However, this causes many logs to be emitted in a cluster and leads to log
spamming to an extent. This commit introduces one more log level control
for CSI pods called sidecarLogLevel, which defaults to log level 1.
The sidecar controllers like the snapshotter, resizer, attacher, etc. are
configured with this new log level, and the driver pods keep the old
configuration value.
This allows us to have different configuration options for sidecar
controllers and driver pods.
With this, we also have a choice of different configuration settings
instead of locking onto one variable for the containers deployed via the CSI driver.
To summarize: the CSI containers maintained by the Ceph CSI driver have log
level 5, and the controllers/sidecars not maintained by the Ceph CSI driver
have log level 1.
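In Helm chart terms this roughly corresponds to two values; the exact
placement in values.yaml is illustrative:

```yaml
# values.yaml sketch: separate verbosity for Ceph-CSI containers and sidecars.
logLevel: 5          # Ceph-CSI containers (csi-rbdplugin / csi-cephfsplugin)
sidecarLogLevel: 1   # provisioner, snapshotter, resizer, attacher, ...
```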
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit is added to use the canary csi-provisioner image
to test the PVC-PVC cloning feature across different StorageClasses,
which is not yet present in released versions.
refer:
https://github.com/kubernetes-csi/external-provisioner/pull/699
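Concretely this only swaps the provisioner image tag, roughly as below; the
registry path shown is illustrative rather than taken from this change:

```yaml
# csi-provisioner container switched to a canary build to pick up
# the unreleased cross-StorageClass PVC-PVC clone support.
image: gcr.io/k8s-staging-sig-storage/csi-provisioner:canary
```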
Signed-off-by: Rakshith R <rar@redhat.com>
This commit changes the image registry URL for sidecars in the
RBD deployment from `k8s.gcr.io` to `registry.k8s.io`, as
the migration is happening from the former to the latter.
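For example, an image reference changes along these lines (the sidecar name and
version are illustrative):

```yaml
# Before:
# image: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
# After:
image: registry.k8s.io/sig-storage/csi-provisioner:v3.1.0
```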
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This argument in csi-snapshotter sidecar allows us to receive
snapshot-name/snapshot-namespace/snapshotcontent-name metadata in the
CreateSnapshot() request.
For ex:
csi.storage.k8s.io/volumesnapshot/name
csi.storage.k8s.io/volumesnapshot/namespace
csi.storage.k8s.io/volumesnapshotcontent/name
This is useful information which can be used depending on the use case we
have in our driver. Features like adding metadata to the snapshot image
can consume this based on the need.
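The commit message does not spell the argument out; assuming it is the
csi-snapshotter `--extra-create-metadata` flag, the sidecar args would look
roughly like:

```yaml
# csi-snapshotter sidecar args (illustrative excerpt).
args:
  - "--csi-address=$(ADDRESS)"
  # Pass the VolumeSnapshot name/namespace and the VolumeSnapshotContent
  # name as parameters in the CSI CreateSnapshot() request.
  - "--extra-create-metadata=true"
```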
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
OIDC token file path has been modified from
`/var/run/secrets/token` to `/run/secrets/tokens`.
This has been done to ensure compliance with
FHS 3.0.
refer:
https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch05s13.html
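The token itself is a projected service account token, so the mount ends up
under `/run/secrets/tokens`, roughly as sketched below; the volume name and
audience are illustrative:

```yaml
# Projected ServiceAccount token mounted at the FHS 3.0 compliant path.
volumeMounts:
  - name: oidc-token
    mountPath: /run/secrets/tokens
volumes:
  - name: oidc-token
    projected:
      sources:
        - serviceAccountToken:
            path: oidc-token
            expirationSeconds: 3600
            audience: ceph-csi-kms   # illustrative audience
```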
Signed-off-by: Rakshith R <rar@redhat.com>
With Amazon STS, and the Kubernetes cluster configured with an
OIDC identity provider, credentials to access Amazon KMS
can be fetched using the OIDC token (service account token).
Each tenant/namespace needs to create a secret with the AWS region,
role and CMK ARN.
Ceph-CSI will assume the given role with the OIDC token and access
AWS KMS with the given CMK to encrypt/decrypt the DEK, which will be
stored in the image metadata.
Refer: https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html
Resolves: #2879
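A per-tenant secret would look roughly like the following; the secret name and
key names are illustrative, not taken verbatim from this change:

```yaml
# Per-tenant/namespace secret carrying the AWS STS details (illustrative).
apiVersion: v1
kind: Secret
metadata:
  name: ceph-csi-aws-credentials   # illustrative name
  namespace: tenant-a
stringData:
  awsRegion: "us-east-1"
  awsRoleARN: "arn:aws:iam::111122223333:role/ceph-csi-kms"   # role assumed via STS
  awsCMKARN: "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"   # CMK for DEK encrypt/decrypt
```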
Signed-off-by: Rakshith R <rar@redhat.com>
Updating the external-resizer image version
from 1.3.0 to the latest available release, i.e.
1.4.0.
1.4.0 changelog link:
https://github.com/kubernetes-csi/external-resizer/blob/master/CHANGELOG/CHANGELOG-1.4.md
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit updates the sidecars to the latest available versions,
which are compatible with Kubernetes 1.23 and CSI spec 1.5.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Deployments place all sockets for communicating with CSI components in
the shared `/csi` directory. The CSI-Addons socket was introduced
recently, but not configured to be in the same location (by default
placed in `/tmp`).
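The adjustment essentially points the CSI-Addons endpoint at the shared
directory; the `--csi-addons-endpoint` flag name is assumed here, and the other
arg is illustrative:

```yaml
# Ceph-CSI plugin container args (illustrative excerpt).
args:
  - "--endpoint=unix:///csi/csi.sock"
  # Keep the CSI-Addons socket next to the other sockets in the
  # shared /csi volume instead of the default /tmp location.
  - "--csi-addons-endpoint=unix:///csi/csi-addons.sock"
```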
Signed-off-by: Niels de Vos <ndevos@redhat.com>
We don't need a securityContext for the RBD provisioner
pod as it's not doing any special operations like map,
unmap, SELinux, etc.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Use the latest version of the csi-snapshotter sidecar image in the
provisioner templates.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Set the system-cluster-critical priorityClass on the
provisioner pods. The system-cluster-critical class
has a lower priority than system-node-critical.
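In the provisioner Deployment this is a single field in the Pod spec:

```yaml
# Provisioner Pod spec (illustrative excerpt).
spec:
  priorityClassName: system-cluster-critical
```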
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If the KMS encryption ConfigMap is not mounted
as a volume to the CSI pods, add the code to
read the configuration from Kubernetes. Later,
the code to fetch the ConfigMap will be moved to
the new sidecar, which will talk to the respective
CO to fetch the encryption configurations.
The k8s ConfigMap uses the standard Vault-specific
names to add the configurations; these will be converted
back to the CSI configurations.
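As an illustration only (the key names below are hypothetical stand-ins for the
standard Vault-specific names mentioned above), the ConfigMap read via the
Kubernetes API could look like:

```yaml
# Hypothetical KMS configuration ConfigMap fetched from Kubernetes
# when it is not mounted into the CSI pods.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-kms-config   # hypothetical name
data:
  vaultAddress: "https://vault.example.com:8200"   # hypothetical key
  vaultBackendPath: "secret/"                      # hypothetical key
  vaultTLSServerName: "vault.example.com"          # hypothetical key
```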
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This argument in the csi-provisioner sidecar allows us to receive PV/PVC
name/namespace metadata in the CreateVolume() request.
For ex:
csi.storage.k8s.io/pvc/name
csi.storage.k8s.io/pvc/namespace
csi.storage.k8s.io/pv/name
This is useful information which can be used depending on the use case we
have in our driver. Features like Vault token enablement for multi-tenancy,
RBD mirroring, etc. can consume this based on the need.
Refer: #1305
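Assuming the argument referred to is `--extra-create-metadata` on the
csi-provisioner sidecar, the change is along these lines:

```yaml
# csi-provisioner sidecar args (illustrative excerpt).
args:
  - "--csi-address=$(ADDRESS)"
  # Pass the PV name and the PVC name/namespace as parameters
  # in the CSI CreateVolume() request.
  - "--extra-create-metadata=true"
```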
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The external-provisioner exposes a new argument
to set the default fstype when starting the provisioner
sidecar. If the fstype is not specified in the StorageClass,
the default fstype will be applied to PVCs created from
the StorageClass.
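Assuming the flag is the external-provisioner's `--default-fstype`, the sidecar
invocation would look roughly like:

```yaml
# csi-provisioner sidecar args (illustrative excerpt).
args:
  - "--csi-address=$(ADDRESS)"
  # Applied when the StorageClass does not set csi.storage.k8s.io/fstype.
  - "--default-fstype=ext4"
```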
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>