Implemented the capability to include read affinity options
for individual clusters within the ceph-csi-config ConfigMap.
This allows users to configure the crush location for each
cluster separately. The read affinity options specified in
the ConfigMap will supersede those provided via command line arguments.
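A minimal sketch of what a per-cluster entry could look like; the field names (`readAffinity`, `crushLocationLabels`) and values are illustrative and should be checked against the csi-config-map sample for the release in use:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "cluster-1",
        "monitors": ["10.0.0.1:6789"],
        "readAffinity": {
          "enabled": true,
          "crushLocationLabels": [
            "topology.kubernetes.io/region",
            "topology.kubernetes.io/zone"
          ]
        }
      }
    ]
```

These per-cluster settings would take precedence over any crush location passed on the command line.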
Signed-off-by: Praveen M <m.praveen@ibm.com>
An e2e test case is added to verify that read affinity is enabled by
checking that the read_from_replica=localize option is passed.
Signed-off-by: Praveen M <m.praveen@ibm.com>
Update ceph-csi-rbd helm chart to use the released image
repo for csi-provisioner instead of the staging repo.
Fixes: #3976
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Currently the Helm chart does not provide an imagePullSecrets option,
which is very inconvenient when using a private container registry.
This PR adds the option for both CephFS and RBD.
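A hedged sketch of how this could be set in the chart values; the exact key layout may differ between chart versions:

```yaml
# values.yaml for ceph-csi-rbd or ceph-csi-cephfs (sketch)
# Assumes the chart exposes a top-level imagePullSecrets list.
imagePullSecrets:
  - name: private-registry-credentials
```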
Signed-off-by: Garen Fang <fungaren@qq.com>
Fix a bug that caused the provisioner to get duplicate affinities
when deploying the ceph-csi-rbd and ceph-csi-cephfs Helm charts.
Signed-off-by: DashJay <45532257+dashjay@users.noreply.github.com>
Without this patch the READMEs for the Helm Charts do not provide any
documentation on how to upgrade to a newer version. There is at least
one known issue when upgrading to a newer version that is unavoidable as
of this writing. There is a workaround for the issue which should be
documented in the upgrade section.
This is a problem because currently the only way to find this workaround
is to go through closed GitHub issues. These might not be around at the
time someone needs this information. Furthermore the issue should be
communicated to the operator before it occurs.
This patch adds basic documentation for updating the Helm repository,
and upgrading the installed release of the Helm Chart. How values can be
set is not part of the documentation. If an operator used custom values,
e.g. for the secret, they probably already know how to deal with setting
values. However, the docs still remind the reader to take values into
account.
Reusing the installed values (`--reuse-values`) has led to problems in
the past, which is why it is explicitly discouraged. An example of this
would be the value `logLevel`, which was changed to `sidecarLogLevel`.
Reusing values led to `.Values.sidecarLogLevel` being empty and the
`csi-provisioner` not being started due to the invalid value `-v=""`.
Comparing new values with set values is encouraged.
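For illustration only, the `logLevel`/`sidecarLogLevel` split could show up in the values roughly like this (compare against the actual values.yaml of the target chart version):

```yaml
# Before the rename, a single knob covered all containers:
# logLevel: 5
# After the rename, driver and sidecar verbosity are separate values:
logLevel: 5
sidecarLogLevel: 1
```

Setting the new key explicitly avoids the empty `-v=""` argument described above.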
The workaround for GitHub issue #3397 is addressed in the
section Known Issues Upgrading.
Signed-off-by: Christian Kugler <syphdias+git@gmail.com>
deploy: remove beta storage group mention from csidriver yaml
The Kubernetes-version-based enablement of the storage API group
is no longer required, as it is already on v1 for all
supported Kubernetes versions.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The following sidecars are updated with this commit:
csi-provisioner: v3.3.0
csi-snapshotter: v6.1.0
This commit changes the sidecar versions in the build.env setup.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit changes the default fsGroupPolicy for the CSIDriver object
to "File", which is the better/correct setting for CSI volumes.
We have been using the default value, which is "ReadWriteOnceWithFSType".
With this change, backward compatibility should be preserved.
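As a sketch, the CSIDriver object carries the new policy roughly as follows (driver name shown for the RBD driver as an example; other fields omitted):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: rbd.csi.ceph.com
spec:
  # The Kubernetes default is "ReadWriteOnceWithFSType"; "File" lets the
  # kubelet always apply fsGroup ownership to the mounted volume.
  fsGroupPolicy: File
```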
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
As PSP is deprecated in Kubernetes 1.21
and will be removed in Kubernetes 1.25,
remove the existing PSP-related templates
from the repo and update the required documents.
Fixes: #1988
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
At present we have a single log level configuration for all the containers
running in our CSI pods, which defaults to log level 5.
However, this causes many logs to be emitted in a cluster and leads to log
spamming to an extent. This commit introduces one more log level control
for CSI pods, called sidecarLogLevel, which defaults to log level 1.
The sidecar containers such as the snapshotter, resizer and attacher have
been configured with this new log level, while the driver containers keep
the old configuration value.
This allows us to have different configuration options for sidecar
containers and driver containers.
With this, we also have a choice of different configuration settings
instead of locking onto one variable for all containers deployed via the CSI driver.
To summarize, the CSI containers maintained by the Ceph CSI driver have log
level 5, and the controllers/sidecars not maintained by the Ceph CSI driver
have log level 1.
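A rough sketch of how the two knobs end up wired into the pods; the exact flag rendering depends on the chart templates:

```yaml
# values.yaml (sketch)
logLevel: 5          # ceph-csi driver containers (csi-rbdplugin, csi-cephfsplugin)
sidecarLogLevel: 1   # provisioner, snapshotter, resizer, attacher, registrar
# Rendered effect (illustrative): driver containers get --v=5,
# sidecar containers get --v=1.
```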
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Create the token if the Kubernetes version is
1.24+ and use it for the Vault service account.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Rakshith R <rar@redhat.com>
Image tags were not updated in the README; update
the image tags in the README to match the tags in
values.yaml.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit enables the mentioned feature gate, which helps to
recover from volume expansion failures.
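The commit body does not name the gate; assuming it is RecoverVolumeExpansionFailure on the csi-resizer sidecar, the deployment change would look roughly like this sketch:

```yaml
# csi-resizer sidecar container (sketch; image tag illustrative)
- name: csi-resizer
  image: registry.k8s.io/sig-storage/csi-resizer:v1.7.0
  args:
    - "--csi-address=$(ADDRESS)"
    - "--v=1"
    # Assumed gate name, inferred from the "recover from volume
    # expansion failures" wording above.
    - "--feature-gates=RecoverVolumeExpansionFailure=true"
```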
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The Helm chart template value has been updated to the latest
version of the node driver registrar container.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit is added to use the canary csi-provisioner image
to test PVC-PVC cloning across different StorageClasses, a feature
which is not yet present in released versions.
refer:
https://github.com/kubernetes-csi/external-provisioner/pull/699
Signed-off-by: Rakshith R <rar@redhat.com>
When running a Kubernetes cluster with one single privileged
PodSecurityPolicy that allows everything, the nodeplugin
daemonset can fail to start. To be precise, the problem is the
defaultAllowPrivilegeEscalation: false configuration in the PSP.
Containers of the nodeplugin daemonset won't start when they
have privileged: true but no allowPrivilegeEscalation in their
container securityContext.
Kubernetes will not schedule the pods if this mismatch exists:
allowPrivilegeEscalation cannot be set to false while privileged is true.
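A sketch of the nodeplugin container securityContext that avoids the mismatch:

```yaml
# nodeplugin daemonset container (sketch)
securityContext:
  privileged: true
  # Set explicitly: if left unset, a PSP with
  # defaultAllowPrivilegeEscalation: false mutates it to false,
  # which is invalid in combination with privileged: true.
  allowPrivilegeEscalation: true
```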
Signed-off-by: Silvan Loser <silvan.loser@hotmail.ch>
Signed-off-by: Silvan Loser <33911078+losil@users.noreply.github.com>
This commit changes the image registry URL for sidecars in the
RBD deployment from `k8s.gcr.io` to `registry.k8s.io` as
the migration is happening from former to the latter.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
As the netNamespaceFilePath can be separate for
CephFS and RBD, add a netNamespaceFilePath
option for RBD. This will help us keep RBD- and
CephFS-specific options separate.
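A sketch of how the split could look in the ceph-csi-config ConfigMap; the key names and paths are illustrative and should be verified against the csi-config-map sample:

```yaml
data:
  config.json: |-
    [
      {
        "clusterID": "cluster-1",
        "monitors": ["10.0.0.1:6789"],
        "rbd": {
          "netNamespaceFilePath": "/var/run/netns/rbd-net-ns"
        },
        "cephFS": {
          "netNamespaceFilePath": "/var/run/netns/cephfs-net-ns"
        }
      }
    ]
```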
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This argument in csi-snapshotter sidecar allows us to receive
snapshot-name/snapshot-namespace/snapshotcontent-name metadata in the
CreateSnapshot() request.
For example:
csi.storage.k8s.io/volumesnapshot/name
csi.storage.k8s.io/volumesnapshot/namespace
csi.storage.k8s.io/volumesnapshotcontent/name
This is useful information which can be consumed depending on the use case
we have at our driver. Features like adding metadata to the snapshot image
can consume this based on the need.
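The argument is not named in the body above; assuming it is the csi-snapshotter `--extra-create-metadata` flag, the sidecar args would look roughly like:

```yaml
# csi-snapshotter sidecar container (sketch; image tag illustrative)
- name: csi-snapshotter
  image: registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
  args:
    - "--csi-address=$(ADDRESS)"
    - "--v=1"
    # Passes the csi.storage.k8s.io/volumesnapshot{,content} parameters
    # listed above in the CreateSnapshot() request.
    - "--extra-create-metadata=true"
```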
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Add support to run the rbd map and mount -t
commands with nsenter.
The complete design of the pod/multus network
is added here:
https://github.com/rook/rook/blob/master/design/ceph/multus-network.md#csi-pods
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
OIDC token file path has been modified from
`/var/run/secrets/token` to `/run/secrets/tokens`.
This has been done to ensure compliance with
FHS 3.0.
refer:
https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch05s13.html
Signed-off-by: Rakshith R <rar@redhat.com>
With Amazon STS, and a Kubernetes cluster configured with an
OIDC identity provider, credentials to access Amazon KMS
can be fetched using the OIDC token (service account token).
Each tenant/namespace needs to create a secret with the AWS region,
role and CMK ARN.
Ceph-CSI will assume the given role with the OIDC token and access
AWS KMS, with the given CMK, to encrypt/decrypt the DEK, which will be
stored in the image metadata.
Refer: https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html
Resolves: #2879
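A sketch of the per-tenant secret; the secret name and key names (`awsRoleARN`, `awsCMKARN`, `awsRegion`) are assumptions here and should be checked against the ceph-csi KMS documentation:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-csi-aws-credentials   # assumed name; created per tenant namespace
  namespace: tenant-a
stringData:
  awsRoleARN: "arn:aws:iam::111122223333:role/ceph-csi-kms"        # role assumed via STS
  awsCMKARN: "arn:aws:kms:us-east-1:111122223333:key/example-key"  # CMK used for the DEK
  awsRegion: "us-east-1"
```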
Signed-off-by: Rakshith R <rar@redhat.com>
Avoid specifying the image feature dependencies
and add a link to the official RBD documentation for
reference on the image feature dependencies.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Make the RBD image features in the StorageClass
optional so that the default image features of librbd
can be used, while also keeping the option for the user
to specify the image features in the StorageClass.
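As a sketch, the StorageClass parameter stays available but can simply be omitted (secret references and other parameters trimmed for brevity):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: cluster-1
  pool: rbd
  # Optional: when omitted, librbd's default image features are used.
  # imageFeatures: layering
reclaimPolicy: Delete
```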
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>