Create the token if the Kubernetes version is 1.24+ and use it for the Vault service account.
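A minimal sketch of such a token Secret on 1.24+, where ServiceAccount token Secrets are no longer auto-created (the names here are illustrative):
```
apiVersion: v1
kind: Secret
metadata:
  name: ceph-csi-vault-sa-token
  annotations:
    # illustrative ServiceAccount name
    kubernetes.io/service-account.name: ceph-csi-vault-sa
type: kubernetes.io/service-account-token
```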
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Rakshith R <rar@redhat.com>
As the attacher sidecar is no longer required, we have to reflect this in the CSIDriver object parameters.
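A sketch of the resulting CSIDriver parameter, assuming the CephFS driver name:
```
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: cephfs.csi.ceph.com
spec:
  # no attacher sidecar is deployed, so attaching is not required
  attachRequired: false
```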
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The CephFS CSI driver does not need the attacher sidecar for its operations. This commit removes it; the RBAC has also been adjusted accordingly.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The image tags in the README are out of date; update them to match the tags in values.yaml.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit enables the feature gate that helps to recover from volume expansion failures.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The Helm chart template value has been updated to the latest version of the node-driver-registrar container.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
As we are removing the topology configuration from the deployment, this commit removes it from the documentation too.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit is added to use the canary csi-provisioner image to test the PVC-PVC cloning feature across different storage classes, which is not yet present in released versions.
refer:
https://github.com/kubernetes-csi/external-provisioner/pull/699
Signed-off-by: Rakshith R <rar@redhat.com>
When running a Kubernetes cluster with a single privileged PodSecurityPolicy that allows everything, the nodeplugin daemonset can fail to start. To be precise, the problem is the `defaultAllowPrivilegeEscalation: false` configuration in the PSP: containers of the nodeplugin daemonset won't start when they have `privileged: true` but no `allowPrivilegeEscalation` in their container securityContext. Kubernetes will not schedule them if this mismatch exists: `cannot set allowPrivilegeEscalation to false and privileged to true`.
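A minimal sketch of the fix, making the escalation setting explicit in the container securityContext:
```
securityContext:
  privileged: true
  # explicitly allow escalation so the PSP default does not conflict
  allowPrivilegeEscalation: true
```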
Signed-off-by: Silvan Loser <silvan.loser@hotmail.ch>
Signed-off-by: Silvan Loser <33911078+losil@users.noreply.github.com>
This commit changes the image registry URL for the sidecars in the RBD deployment from `k8s.gcr.io` to `registry.k8s.io`, as the migration from the former to the latter is in progress.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit changes the image registry URL for the sidecars in the CephFS deployment from `k8s.gcr.io` to `registry.k8s.io`, as the migration from the former to the latter is in progress.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
As the same host directory is not shared between the CephFS and the RBD plugin pods, we need to keep netNamespaceFilePath separate for CephFS and RBD. The CephFS plugin will use this path to execute `mount -t` commands.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As netNamespaceFilePath can be separate for CephFS and RBD, add the netNamespaceFilePath option for RBD. This will help us keep the RBD- and CephFS-specific options separate.
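A hedged sketch of how the per-driver paths might look in the CSI ConfigMap (cluster ID, monitors, and paths are placeholders):
```
data:
  config.json: |-
    [
      {
        "clusterID": "<cluster-id>",
        "monitors": ["<mon-endpoint>"],
        "cephFS": {
          "netNamespaceFilePath": "/var/lib/kubelet/plugins/cephfs.csi.ceph.com/net"
        },
        "rbd": {
          "netNamespaceFilePath": "/var/lib/kubelet/plugins/rbd.csi.ceph.com/net"
        }
      }
    ]
```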
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This argument in csi-snapshotter sidecar allows us to receive
snapshot-name/snapshot-namespace/snapshotcontent-name metadata in the
CreateSnapshot() request.
For example:
csi.storage.k8s.io/volumesnapshot/name
csi.storage.k8s.io/volumesnapshot/namespace
csi.storage.k8s.io/volumesnapshotcontent/name
This is useful information that can be consumed depending on the use case at our driver. Features like adding metadata to the snapshot image can make use of it as needed.
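A sketch of enabling this on the csi-snapshotter sidecar; the argument is presumably `--extra-create-metadata` (the image tag is illustrative):
```
- name: csi-snapshotter
  image: registry.k8s.io/sig-storage/csi-snapshotter:v6.0.1
  args:
    - "--csi-address=$(ADDRESS)"
    # pass snapshot name/namespace metadata to CreateSnapshot()
    - "--extra-create-metadata=true"
```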
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Add support to run `rbd map` and `mount -t` commands with nsenter.
The complete design of the pod/multus network is described at
https://github.com/rook/rook/blob/master/design/ceph/multus-network.md#csi-pods
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The OIDC token file path has been modified from `/var/run/secrets/token` to `/run/secrets/tokens`. This has been done to ensure compliance with FHS 3.0.
refer:
https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch05s13.html
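A minimal sketch of mounting a projected ServiceAccount token at the new location (volume name and audience are illustrative):
```
volumeMounts:
  - name: oidc-token
    mountPath: /run/secrets/tokens
    readOnly: true
volumes:
  - name: oidc-token
    projected:
      sources:
        - serviceAccountToken:
            path: oidc-token
            expirationSeconds: 3600
            audience: ceph-csi-kms
```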
Signed-off-by: Rakshith R <rar@redhat.com>
With Amazon STS, and a Kubernetes cluster configured with an OIDC identity provider, credentials to access Amazon KMS can be fetched using the OIDC token (a ServiceAccount token).
Each tenant/namespace needs to create a secret with the AWS region, role and CMK ARN.
Ceph-CSI will assume the given role with the OIDC token and access AWS KMS, with the given CMK, to encrypt/decrypt the DEK, which will be stored in the image metadata.
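A hedged sketch of the per-tenant secret; the key names and ARNs are assumptions for illustration:
```
apiVersion: v1
kind: Secret
metadata:
  name: ceph-csi-aws-credentials
  namespace: tenant-ns
stringData:
  awsRoleARN: "arn:aws:iam::111122223333:role/ceph-csi-kms"    # assumed key name
  awsCMKARN: "arn:aws:kms:us-east-1:111122223333:key/<key-id>" # assumed key name
  awsRegion: "us-east-1"                                       # assumed key name
```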
Refer: https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html
Resolves: #2879
Signed-off-by: Rakshith R <rar@redhat.com>
Mounts managed by ceph-fuse may get corrupted by, e.g., the ceph-fuse process exiting abruptly, or its parent container being terminated, taking its child processes down with it.
This commit adds checks to the NodeStageVolume and NodePublishVolume procedures to detect whether a mountpoint in staging_target_path and/or target_path is corrupted; a remount is performed if corruption is detected.
Signed-off-by: Robert Vasek <robert.vasek@cern.ch>
It was decided that the latest Ceph-CSI versions would drop support for older Kubernetes versions, making this check useless, so it was removed.
Removing this version check allows the CephFS resizer component to be deployed when using the Helm chart on non-vanilla Kubernetes clusters whose API server versions are of the form `1.x.y-abc+def-ghi`.
Signed-off-by: Benjamin Guillon <benjamin.guillon@cc.in2p3.fr>
Avoid specifying the image feature dependencies, and add a link to the official RBD documentation as a reference for the image feature dependencies.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Make the RBD image features in the storageclass optional so that librbd's default image features can be used, while keeping the option for the user to specify image features in the storageclass.
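A minimal storageclass sketch with the now-optional parameter (pool and cluster settings elided):
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  # optional; when omitted, librbd's default image features are used
  imageFeatures: layering
```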
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As deep-flatten has long been supported in Ceph and is enabled by default in librbd, provide an option to enable it in cephcsi for the RBD images we are creating.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Add a selinuxMount flag to enable/disable the /etc/selinux host mount inside pods, to support SELinux-enabled filesystems.
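A sketch of the corresponding Helm values toggle, assuming the flag sits at the top level of values.yaml:
```
# values.yaml
# mount /etc/selinux from the host into the plugin pods
selinuxMount: true
```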
Signed-off-by: Francesco Astegiano <francesco.astegiano@gmail.com>
To show what ports the containers are exposing, add ports sections to the nodeplugin and provisioner Helm templates.
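A sketch of such a ports section (the port number and name are illustrative):
```
ports:
  - name: http-metrics
    containerPort: 8080
    protocol: TCP
```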
Signed-off-by: Deividas Burškaitis <deividas.burskaitis@oxylabs.io>
Removes the namespace from the non-namespaced StorageClass object.
fixes: #2714
Replacement for #2715, as we didn't receive any update and that PR is already closed.
Co-authored-by: jhrcz-ls
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit removes the thick provisioning code, as thick provisioning is deprecated in cephcsi 3.5.0.
fixes: #2795
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Update the external-resizer image version from 1.3.0 to the latest available release, i.e. 1.4.0.
1.4.0 changelog:
https://github.com/kubernetes-csi/external-resizer/blob/master/CHANGELOG/CHANGELOG-1.4.md
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit updates the sidecars to the latest available versions, which are compatible with Kubernetes 1.23 and CSI spec 1.5.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Deployments place all sockets for communicating with CSI components in the shared `/csi` directory. The CSI-Addons socket was introduced recently, but was not configured to be in the same location (by default it is placed in `/tmp`).
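A hedged sketch of pointing the CSI-Addons socket at the shared directory; the flag name is an assumption here:
```
args:
  - "--csi-addons-endpoint=unix:///csi/csi-addons.sock"  # assumed flag name
volumeMounts:
  - name: socket-dir
    mountPath: /csi
```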
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When topology is disabled, the ClusterRoleBinding is not created in the Helm
chart. However, the nodeplugin needs access to volumeattachments for the volume
healer.
Signed-off-by: Steven Reitsma <steven@properchaos.nl>
When generating the CSI configuration from values, the config.json key gets merged with cluster-mapping.json, as the config.json toYaml element suppresses a newline.
This fixes the situation where the configuration is generated as shown:
```
data:
  config.json: |-
    [{"clusterID":"....","monitors":["..."]}]cluster-mapping.json: |-
    []
```
Signed-off-by: Toby Jackson <toby@warmfusion.co.uk>
The version field in the Helm Chart.yaml needs to have a SemVer 2 compatible value; therefore, use "<MAJOR-VERSION>-canary" on the "devel" branch.
Refer: https://helm.sh/docs/topics/charts/#the-chartyaml-file
Signed-off-by: Rakshith R <rar@redhat.com>
This change allows the user to choose not to fall back to the NBD mounter when some ImageFeatures are absent with the krbd driver, and instead just fail the NodeStage call.
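A sketch of the storageclass knob this presumably introduces; the parameter name is an assumption:
```
parameters:
  imageFeatures: layering,exclusive-lock
  # do not fall back to rbd-nbd when krbd lacks a requested feature
  tryOtherMounters: "false"  # assumed parameter name
```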
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
We don't need a securityContext for the CephFS provisioner pod, as it's not doing any special operations like mounts, SELinux operations, etc.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The CephFS deployment does not need extra permissions like privileged mode or added capabilities; also reduce unwanted volumes.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
We don't need a securityContext for the CephFS provisioner pod, as it's not doing any special operations.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Currently, we delete the ceph client log file on unmap/detach.
This patch provides additional alternatives for users who would like to
persist the log files.
Strategies:
-----------
`remove`: delete log file on unmap/detach
`compress`: compress the log file to gzip on unmap/detach
`preserve`: preserve the log file in text format
Note that the default strategy will be `remove` on unmap; these options can be tweaked via the storage class.
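A sketch of selecting a strategy through the storageclass parameters, assuming the key is named cephLogStrategy:
```
parameters:
  mounter: rbd-nbd
  cephLogDir: /var/log/ceph
  cephLogStrategy: compress  # one of remove | compress | preserve
```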
Compression size details example:
On Map: (with debug-rbd=20)
---------
$ ls -lh
-rw-r--r-- 1 root root 526K Sep 1 18:15
rbd-nbd-0001-0024-fed5480a-f00f-417a-a51d-31d8a8144c03-0000000000000003-d2e89c87-0b4d-11ec-8ea6-160f128e682d.log
On unmap:
---------
$ ls -lh
-rw-r--r-- 1 root root 33K Sep 1 18:15
rbd-nbd-0001-0024-fed5480a-f00f-417a-a51d-31d8a8144c03-0000000000000003-d2e89c87-0b4d-11ec-8ea6-160f128e682d.gz
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
cephLogDir: a storage class option that is passed to the rbd-nbd daemon.
cephLogDirHostPath: a nodeplugin daemonset-level option that helps in using the right host path while bind-mounting.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Problem:
--------
1. rbd-nbd by default logs to /var/log/ceph/ceph-client.admin.log; unfortunately, the container doesn't have a /var/log/ceph directory, hence rbd-nbd is not logging at the moment.
2. rbd-nbd logs are not persistent across nodeplugin restarts.
Solution:
--------
Provide a host path so that the log directory is made available and the logs persist on the host node across container restarts.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
- mount the host's /etc/selinux in node plugins
- process mount options in all code paths for CephFS volume options
Signed-off-by: Alexandre Lossent <alexandre.lossent@cern.ch>
This change resolves a typo in installing the CSIDriver resource on Kubernetes clusters before 1.18, where the apiVersion was incorrect.
See also:
https://kubernetes-csi.github.io/docs/csi-driver-object.html
[ndevos: replace v1betav1 in examples with v1beta1]
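For reference, a minimal CSIDriver manifest for pre-1.18 clusters (the driver name is illustrative):
```
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: cephfs.csi.ceph.com
```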
Signed-off-by: Thomas Kooi <t.j.kooi@avisi.nl>
Problem:
-------
For rbd-nbd userspace mounter backends, after a restart of the nodeplugin all the mounts will start seeing IO errors. This is because, for rbd-nbd backends, there is a userspace mount daemon running per volume, and after a restart of the nodeplugin pod there is no way to restore the daemons back to life.
Solution:
--------
The volume healer is a one-time activity that is triggered at the startup
time of the rbd nodeplugin. It navigates through the list of volume
attachments on the node and acts accordingly.
For now, it is limited to nbd type storage only, but it is flexible and
can be extended in the future for other backend types as needed.
From a few feet above:
This solves a severe problem for nbd-backed CSI volumes. While going through the list of volume attachments on the node, if the healer finds a volume that is in attached state and is of type nbd, it will attempt to fix the rbd-nbd volume by sending a NodeStageVolume request with the required volume attributes (secrets, device name, image attributes, etc.), which will finally help start the required rbd-nbd daemons in the nodeplugin csi-rbdplugin container. This allows reattaching the backend images with the right nbd device, thus allowing the applications to perform IO without any interruption even after a nodeplugin restart.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
The nodeplugin needs the following cluster roles:
persistentvolumes: get
volumeattachments: list, get
These additional permissions are needed by the volume healer, which aims at fixing volume health issues at the very startup time of the nodeplugin. As part of its operations, the volume healer has to run through the list of volume attachments and understand details about each persistentvolume.
Later commits will use these additional cluster roles.
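A minimal sketch of the corresponding ClusterRole rules (the role name is illustrative):
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rbd-csi-nodeplugin
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["list", "get"]
```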
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
The provisioner and node-plugin have the capability to connect to Hashicorp Vault with a ServiceAccount from the Namespace where the PVC is created. This requires permissions to read the contents of the ServiceAccount from a Namespace other than the one where Ceph-CSI is deployed.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit updates the Helm chart documentation with the configurations available while deploying these Helm charts.
Signed-off-by: Yati Padia <ypadia@redhat.com>
The current implementation of semvercompare fails against pre-release versions. This commit fixes it by using the full version string at which the csidriver API became GA:
s|">=1.18"|">=1.18.0-beta.1"|
Fixes: #2039
Signed-off-by: Rakshith R <rar@redhat.com>
A CSIDriver object can be created on Kubernetes for the below reason:
if a CSI driver creates a CSIDriver object, Kubernetes users can easily discover the CSI drivers installed on their cluster (simply by issuing `kubectl get csidriver`).
Ref: https://kubernetes-csi.github.io/docs/csi-driver-object.html#what-is-the-csidriver-object
attachRequired always needs to be set to true to avoid issues on RWO PVCs.
More details about it at https://github.com/rook/rook/pull/4332
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Set the system-cluster-critical priorityclass on provisioner pods. system-cluster-critical has a lower priority than system-node-critical.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Set the system-node-critical priorityclass on the plugin pods; it is the highest priority and needs to be applied on the plugin pods, as they are critical for storage in the cluster.
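A minimal sketch of where this lands in the pod template:
```
spec:
  template:
    spec:
      # provisioner deployment: system-cluster-critical
      # nodeplugin daemonset: system-node-critical
      priorityClassName: system-node-critical
```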
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As the provisioner needs to get ConfigMaps from a different namespace to check the tenant configuration, add get access for ConfigMaps to the clusterrole.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
PR #1736 made the kubelet path configurable. It also introduced a change in the path to the CSI socket: by default, the path is now `/var/lib/kubelet/cephfs.csi.ceph.com/csi.sock` instead of `/var/lib/kubelet/plugins/cephfs.csi.ceph.com/csi.sock`. This PR restores the old default.
Signed-off-by: Matthias Neugebauer <matthias.neugebauer@uni-muenster.de>