This commit removes the Topology feature gate, as it is now enabled
by default and the gate will be removed in a future release.
Signed-off-by: Praveen M <m.praveen@ibm.com>
When issues or bugs are reported, users often share the logs of the
default container in a Pod. These logs do not contain the required
information, which can mostly only be found in the logs of the
Ceph-CSI container (named csi-cephfsplugin or csi-rbdplugin).
By moving the Ceph-CSI containers in the Pods to the 1st in the list,
they become the default container for commands like `kubectl logs`.
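For example, a minimal pod template sketch (container and image names are illustrative):

```yaml
spec:
  containers:
    - name: csi-rbdplugin        # Ceph-CSI container, now 1st in the list:
      image: quay.io/cephcsi/cephcsi:canary   # the `kubectl logs` default
    - name: csi-provisioner      # sidecars follow
      image: registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
```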
Signed-off-by: Niels de Vos <ndevos@ibm.com>
Currently the Helm chart does not contain an
imagePullSecrets option for use with a
private container registry, which is very inconvenient.
This PR adds this option for both CephFS and RBD.
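A minimal sketch of the new values, assuming the chart exposes the standard `imagePullSecrets` list:

```yaml
# values.yaml sketch; the secret name is illustrative
imagePullSecrets:
  - name: private-registry-credentials
```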
Signed-off-by: Garen Fang <fungaren@qq.com>
Fix a bug that makes the provisioner get duplicate affinities
when deploying the ceph-csi-rbd and ceph-csi-cephfs Helm charts.
Signed-off-by: DashJay <45532257+dashjay@users.noreply.github.com>
At present we have a single log level configuration for all the containers
running in our CSI pods, which has been defaulted to log level 5.
However, this causes many logs to be emitted in a cluster and leads to log
spamming to an extent. This commit introduces one more log level control
for CSI pods, called sidecarLogLevel, which defaults to log level 1.
The sidecar containers like the snapshotter, resizer, attacher, etc. have
been configured with this new log level, and the driver containers keep the
old configuration value.
This allows us to have different configuration options for sidecar
containers and driver containers.
With this, we also have a choice of different configuration settings
instead of locking onto one variable for the containers deployed via the CSI driver.
To summarize: the CSI containers maintained by the Ceph CSI driver have log
level 5, and the controllers/sidecars not maintained by the Ceph CSI driver
have log level 1.
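A sketch of the resulting Helm values, using the names and defaults described above:

```yaml
# values.yaml sketch
logLevel: 5          # Ceph-CSI driver containers, verbose for tracing errors
sidecarLogLevel: 1   # snapshotter/resizer/attacher and other sidecars
```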
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit enables the mentioned feature gate, which helps to
recover from volume expansion failures.
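The message does not name the gate; assuming it is `RecoverVolumeExpansionFailure` on the external-resizer sidecar, enabling it could look like:

```yaml
# csi-resizer container args sketch; the gate name is an assumption
- name: csi-resizer
  args:
    - "--csi-address=$(ADDRESS)"
    - "--feature-gates=RecoverVolumeExpansionFailure=true"
```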
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This argument in csi-snapshotter sidecar allows us to receive
snapshot-name/snapshot-namespace/snapshotcontent-name metadata in the
CreateSnapshot() request.
For example:
csi.storage.k8s.io/volumesnapshot/name
csi.storage.k8s.io/volumesnapshot/namespace
csi.storage.k8s.io/volumesnapshotcontent/name
This is useful information that can be used depending on the use case
at hand in our driver. Features like adding metadata to the snapshot
image can consume this as needed.
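This presumably refers to the `--extra-create-metadata` flag; a sketch of the sidecar arguments:

```yaml
# csi-snapshotter container args sketch
- name: csi-snapshotter
  args:
    - "--csi-address=$(ADDRESS)"
    - "--extra-create-metadata=true"   # injects the keys listed above
```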
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
OIDC token file path has been modified from
`/var/run/secrets/token` to `/run/secrets/tokens`.
This has been done to ensure compliance with
FHS 3.0.
Refer:
https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch05s13.html
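A sketch of projecting the service account token at the new location (volume name, audience and expiry are illustrative):

```yaml
volumes:
  - name: oidc-token
    projected:
      sources:
        - serviceAccountToken:
            path: oidc-token
            expirationSeconds: 3600
            audience: ceph-csi-kms   # illustrative audience
# and in the container:
volumeMounts:
  - name: oidc-token
    mountPath: /run/secrets/tokens
```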
Signed-off-by: Rakshith R <rar@redhat.com>
With Amazon STS, and a Kubernetes cluster configured with an
OIDC identity provider, credentials to access Amazon KMS
can be fetched using the OIDC token (service account token).
Each tenant/namespace needs to create a secret with the AWS region,
role and CMK ARN.
Ceph-CSI will assume the given role with the OIDC token and access
AWS KMS, with the given CMK, to encrypt/decrypt the DEK, which will
be stored in the image metadata.
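A sketch of the per-tenant secret (the key names are assumptions based on this description):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-csi-aws-credentials   # illustrative name
  namespace: tenant-namespace
stringData:
  awsRegion: "us-east-1"
  awsRoleARN: "arn:aws:iam::111122223333:role/ceph-csi-kms"       # illustrative
  awsCMKARN: "arn:aws:kms:us-east-1:111122223333:key/mrk-example" # illustrative
```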
Refer: https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html
Resolves: #2879
Signed-off-by: Rakshith R <rar@redhat.com>
To show what ports the containers are exposing, add port sections to the
nodeplugin and provisioner Helm templates.
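For instance, a container's port section could look like (name and number are illustrative):

```yaml
ports:
  - name: metrics
    containerPort: 8080
    protocol: TCP
```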
Signed-off-by: Deividas Burškaitis <deividas.burskaitis@oxylabs.io>
Deployments place all sockets for communicating with CSI components in
the shared `/csi` directory. The CSI-Addons socket was introduced
recently, but not configured to be in the same location (by default
placed in `/tmp`).
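A sketch of moving the socket into the shared directory (the exact flag name is an assumption):

```yaml
# cephcsi container args sketch
- "--csi-addons-endpoint=unix:///csi/csi-addons.sock"
```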
Signed-off-by: Niels de Vos <ndevos@redhat.com>
We don't need a securityContext for the CephFS provisioner
pod as it's not doing any special operations like mounts,
SELinux operations, etc.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Set the system-cluster-critical priorityClass on
provisioner pods. system-cluster-critical has a
lower priority than system-node-critical.
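In the provisioner pod spec this amounts to:

```yaml
spec:
  priorityClassName: system-cluster-critical
```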
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This argument in the csi-provisioner sidecar allows us to receive pv/pvc
name/namespace metadata in the CreateVolume() request.
For example:
csi.storage.k8s.io/pvc/name
csi.storage.k8s.io/pvc/namespace
csi.storage.k8s.io/pv/name
This is useful information that can be used depending on the use case
at hand in our driver. Features like Vault token enablement for
multi-tenancy, RBD mirroring, etc. can consume this as needed.
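As with the csi-snapshotter, this presumably refers to the `--extra-create-metadata` flag, here on the csi-provisioner sidecar:

```yaml
# csi-provisioner container args sketch
- name: csi-provisioner
  args:
    - "--csi-address=$(ADDRESS)"
    - "--extra-create-metadata=true"   # adds the pv/pvc keys listed above
```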
Refer: #1305
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
external-provisioner exposes a new argument
to set the default fstype when starting the provisioner
sidecar; if the fstype is not specified in the StorageClass,
the default fstype will be applied to PVCs created from
that StorageClass.
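A sketch of the sidecar argument (the flag is `--default-fstype` in external-provisioner; the value is illustrative):

```yaml
# csi-provisioner container args sketch
- "--default-fstype=ext4"   # applied when the StorageClass sets no fstype
```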
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This PR makes the changes in the CSI templates and
upgrade documentation required for updating the
CSI sidecar images.
Signed-off-by: Mudit Agarwal <muagarwa@redhat.com>
Updated the deployment template for the new controller and
also added `update` ConfigMap RBAC for the controller,
as the controller uses a ConfigMap for leader
election.
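A sketch of the RBAC rule (verbs beyond `update` are illustrative):

```yaml
# Role rule for the leader-election ConfigMap
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
```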
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
An rbd image can have a maximum number of
snapshots, defined by maxsnapshotsonimage.
Once the limit is reached, cephcsi starts
flattening the older snapshots and returns
an ABORT error message; requests that come
in after this have to wait until all the
images are flattened (this increases the
PVC creation time). Instead of waiting until
the maximum number of snapshots on an RBD
image is reached, we can have a soft limit:
once the soft limit is reached, cephcsi
starts flattening tasks to break the chain.
With this, PVC creation time is only affected
when the hard limit (maxsnapshotsonimage) is
reached.
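A sketch of the two limits as cephcsi arguments (the soft-limit value is illustrative; 450 is the default mentioned elsewhere in this log):

```yaml
# cephcsi container args sketch
- "--minsnapshotsonimage=250"   # soft limit: flatten in background, still succeed
- "--maxsnapshotsonimage=450"   # hard limit: flatten and return ABORT
```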
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Instead of keeping the log level at 5, which
is required only for tracing errors, this commit
adds an option for users to configure the log level
for all containers.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
When the replication count of the provisioner is >1, the added anti-affinity
rules will prevent provisioner pods from being scheduled on the same nodes.
The Kubernetes scheduler will spread the pods across nodes to improve
availability during node failures.
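A sketch of the added rule (the label selector is illustrative):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: csi-provisioner   # illustrative label
        topologyKey: kubernetes.io/hostname
```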
Signed-off-by: Nico Berlee <nico.berlee@on2it.net>
As v1.0.0 is deprecated, we need to remove support
for it in the upcoming (v3.0.0) release. This PR
removes that support.
Closes: #882
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added the maxsnapshotsonimage flag to flatten
the older rbd images in the chain to avoid
issues in krbd. The limit is in krbd since it
only allocates one 4KiB page to handle all the
snapshot ids for an image.
The max limit is 510 as per
https://github.com/torvalds/linux/blob/aaa2faab4ed8e5fe0111e04d6e168c028fe2987f/drivers/block/rbd.c#L98
In cephcsi we are keeping the default at 450 to reserve 10%
headroom to avoid issues.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added the skipForceFlatten flag to skip checking
the image depth and to skip image flattening.
This will be very useful if the kernel is
not listed in cephcsi as supporting the deep-flatten
feature.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added Hardlimit and Softlimit flags to the cephcsi
arguments. When the Softlimit is reached, cephcsi
will start a background task to flatten the rbd
image and return success; if the Hardlimit
is reached, it will start a background task
to flatten the rbd image and return readyToUse
as false, to make sure that the image
will not be used until it is flattened.
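Assuming these map to rbd clone-depth arguments (the flag names and values are assumptions), the configuration could look like:

```yaml
# cephcsi container args sketch
- "--rbdsoftmaxclonedepth=4"   # soft limit: flatten in background, return success
- "--rbdhardmaxclonedepth=8"   # hard limit: flatten, report readyToUse=false
```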
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
--retry-interval-start:
This is the initial retry interval for failures; 1 second is used by default.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
As the Kubernetes CSI sidecars expose
GRPC metrics, we can make use of those in
ceph-csi; we don't need to expose our own.
Updates: #881
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
- adds proposal document for PVC encryption from PR448
- adds per-volume encryption by generating an encryption passphrase
  for each volume and storing it in a KMS (see the sketch after this list)
- adds HashiCorp Vault integration as a KMS for encryption passphrases
- avoids encrypting a volume a second time if it was already encrypted but
  no file system was created
- avoids unnecessary checks for whether a volume is a mapped device when
  encryption was not requested
- prevents resizing encrypted volumes (it is not currently supported)
- prevents creating snapshots from encrypted volumes to prevent attacks
  on the encryption key (a security guard until re-encryption of volumes
  is implemented)
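A sketch of how a StorageClass could request encryption (parameter names are assumptions):

```yaml
# StorageClass parameters sketch
parameters:
  encrypted: "true"
  encryptionKMSID: "vault-kms"   # illustrative id of the Vault KMS config
```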
Signed-off-by: Vasyl Purchel vasyl.purchel@workday.com
Signed-off-by: Andrea Baglioni andrea.baglioni@workday.com
Fixes: #420
Fixes: #744
Currently, we are making use of a hostPath directory
to store the provisioner socket. As the socket
is not needed by anything other than the
containers inside the provisioner pod, using an
emptyDir to store this socket is the best option.
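A sketch of the resulting volume definition:

```yaml
# provisioner pod spec: emptyDir instead of hostPath for the socket directory
volumes:
  - name: socket-dir
    emptyDir: {}
```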
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>