Without commit [1], the kernel doesn't handle io-timeout=0 correctly.
Hence we recommend kernel version 5.4 or higher, which includes commit [1].
[1] https://bit.ly/34CFh06
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
(cherry picked from commit 1c153b120c)
This commit adds the documentation for upgrading from v3.4 to v3.5.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
(cherry picked from commit b151325871)
This commit adds optional BaseURL and TokenURL configuration to the
Key Protect/HPCS configuration and client connections; if they are
not provided, default values are used.
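A minimal sketch of the fallback logic, assuming hypothetical field
and constant names (the authoritative defaults live in the IBM kp-go
SDK):

```go
package kms

// kmsConfig holds the optional endpoints; the field names here are
// assumptions for the sketch.
type kmsConfig struct {
	BaseURL  string
	TokenURL string
}

// Assumed default endpoints.
const (
	defaultBaseURL  = "https://us-south.kms.cloud.ibm.com"
	defaultTokenURL = "https://iam.cloud.ibm.com/oidc/token"
)

// applyDefaults falls back to the defaults when the user did not
// provide explicit endpoints.
func (c *kmsConfig) applyDefaults() {
	if c.BaseURL == "" {
		c.BaseURL = defaultBaseURL
	}
	if c.TokenURL == "" {
		c.TokenURL = defaultTokenURL
	}
}
```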
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Considering IBM has different crypto services (e.g. SKLM) in place,
it is good to keep the configmap key names in the format
`IBM_KP_...` instead of `KP_...`, so that in the future, if we add
more crypto services from IBM, we can keep a similar schema specific
to each IBM service, e.g. `IBM_SKLM_...`; a sketch of the schema
follows.
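A small sketch of the service-prefixed schema; the key names below
are hypothetical examples, not the complete set:

```go
package kms

// Hypothetical configmap key names following the IBM_<SERVICE>_ schema.
const (
	kpServiceInstanceID = "IBM_KP_SERVICE_INSTANCE_ID"
	kpBaseURL           = "IBM_KP_BASE_URL"
	kpTokenURL          = "IBM_KP_TOKEN_URL"
)
```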
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit adds the design considerations for integrating the IBM
Key Protect KMS service with Ceph CSI.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
- Fixes spelling mistakes.
- Corrects grammatical errors.
- Wraps the text at the 80-character line limit, etc.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit adds more details, such as the helper or utility
functions that will be introduced as part of this effort, and also
adds details about the CSI operations that each identified change
touches.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
We added clusterID mapping in #1946 to identify volumes in case of a
failover in disaster recovery. With #2314 we are moving to a
configuration in a configmap for the clusterID and poolID mapping,
and with that we have all the required information to identify the
image mappings. This commit removes the workaround implementation
done in #1946; a sketch of the configmap-driven mapping follows.
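A sketch of the mapping entries read from the configmap; the type,
field and JSON key names are assumptions based on the description
above, not the exact schema:

```go
package util

// clusterMappingInfo is a hypothetical shape for one mapping entry:
// IDs from the failed cluster map to IDs on the new cluster.
type clusterMappingInfo struct {
	ClusterIDMapping map[string]string   `json:"clusterIDMapping"`
	RBDPoolIDMapping []map[string]string `json:"RBDPoolIDMapping"`
}
```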
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This change allows the user to choose not to fall back to the NBD
mounter when some ImageFeatures are absent with the krbd driver, and
instead just fail the NodeStage call.
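A minimal sketch of the decision, assuming a hypothetical
tryOtherMounters flag parsed from the StorageClass parameters:

```go
package rbd

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// pickKrbdOrFail fails the NodeStage call instead of silently falling
// back to rbd-nbd when krbd cannot handle the requested image features.
func pickKrbdOrFail(krbdSupportsFeatures, tryOtherMounters bool) error {
	if krbdSupportsFeatures {
		return nil // krbd can be used directly
	}
	if !tryOtherMounters {
		return status.Error(codes.Internal,
			"krbd does not support the requested image features")
	}
	return nil // caller falls back to the rbd-nbd mounter
}
```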
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Thick-provisioning was introduced to make accounting of assigned space
for volumes easier. When thick-provisioned volumes are the only consumer
of the Ceph cluster, this works fine. However, it is unlikely that this
is the case. Instead, accounting of the requested (thin-provisioned)
size of volumes is much more practical as different types of volumes can
be tracked.
OpenShift already provides cluster-wide quotas, which can combine
accounting of requested volumes by grouping different StorageClasses.
In addition to the difficult practice of allowing only
thick-provisioned RBD-backed volumes, the performance makes
thick-provisioning troublesome. As volumes need to be completely
allocated, data needs to be written to the volume. This can take a
long time, depending on the size of the volume. Provisioning, cloning
and snapshotting become noticeably slow, and because of the
additional time consumption, more prone to failures.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
For static volumes, the user manually mounts an already existing
image as a volume to the application pods. As it is an RBD image, if
the PVC is of type Filesystem, the image will be mapped, formatted
and mounted on the node.
If the user resizes the image on the Ceph cluster, the filesystem
created on the RBD image cannot be resized automatically. Even if the
user deletes and recreates the Kubernetes objects, the new size will
not be visible on the node.
With this change, during the NodeStageVolumeRequest the nodeplugin
checks the size of the mapped RBD image on the node (using the
devicePath) against the RBD image size on the Ceph cluster. If the
sizes do not match, it resizes the filesystem on the node as part of
the NodeStageVolumeRequest RPC call; see the sketch after the steps
below.
The user needs to do the below operations to see the new size:
* Resize the RBD image in the Ceph cluster.
* Scale down all the application pods using the static PVC.
* Make sure no application pods using the static PVC are running on
  any node.
* Scale up all the application pods.
Validate the new size in the volume mounted in the application pod.
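A minimal sketch of the resize step, assuming the device and image
sizes were already fetched by hypothetical helpers; the filesystem
resize itself uses k8s.io/mount-utils:

```go
package rbd

import (
	"fmt"

	mount "k8s.io/mount-utils"
	utilexec "k8s.io/utils/exec"
)

// resizeIfRequired grows the filesystem when the size of the mapped
// device on the node does not match the image size on the Ceph
// cluster. devSize and imgSize are assumed to come from hypothetical
// getDeviceSize/getImageSize helpers.
func resizeIfRequired(devicePath, stagingPath string, devSize, imgSize uint64) error {
	if devSize == imgSize {
		return nil // sizes match, nothing to do
	}
	resizer := mount.NewResizeFs(utilexec.New())
	if _, err := resizer.Resize(devicePath, stagingPath); err != nil {
		return fmt.Errorf("failed to resize filesystem on %s: %w",
			devicePath, err)
	}
	return nil
}
```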
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
cephLogDir: a StorageClass option that is passed to the rbd-nbd
daemon.
cephLogDirHostPath: a nodeplugin daemonset-level option that helps in
using the right host path while bind-mounting; a sketch follows.
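A sketch of how the StorageClass option could feed the rbd-nbd log
location; the parameter lookup and the /var/log/ceph default are
assumptions:

```go
package rbd

import "fmt"

// logFileArgs builds the rbd-nbd --log-file argument from the
// StorageClass parameters; the /var/log/ceph default is an assumption.
func logFileArgs(volParams map[string]string, volID string) []string {
	logDir := volParams["cephLogDir"]
	if logDir == "" {
		logDir = "/var/log/ceph"
	}
	return []string{"--log-file",
		fmt.Sprintf("%s/rbd-nbd-%s.log", logDir, volID)}
}
```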
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
As we have deprecated versions earlier than v3.3.0, it is not
required to keep the upgrade docs for them. The upgrade doc for
v3.2.0 to v3.3.0 has been kept intact.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Added a design doc to handle volumeID mapping in case of failover in
disaster recovery.
Updates: #2118
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Mainly removed the rbd-nbd mounter note from the pre-upgrade
considerations affecting restarts.
Also updated the 3.3 tags to 3.4.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Add documentation for disaster recovery, describing the steps to
failover and failback in case of a planned migration or a disaster.
Signed-off-by: Yug Gupta <yuggupta27@gmail.com>
In addition to the single-ServiceAccount KMS support for Hashicorp
Vault, Ceph-CSI can now use a ServiceAccount per tenant as well. This
adds the user documentation with references to the example deployment
files.
Closes: #2222
Signed-off-by: Niels de Vos <ndevos@redhat.com>
A new KMS that supports Hashicorp Vault with the Kubernetes Auth backend
and ServiceAccounts per Tenant (Kubernetes Namespace).
Updates: #2222
Signed-off-by: Niels de Vos <ndevos@redhat.com>
As Travis CI (https://travis-ci.org/) is shutting down on June 15th,
we either need to move to the new https://www.travis-ci.com/ or
switch to a GitHub Action to push the image and the Helm charts when
a PR is merged.
fixes: #1781
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
At present the cert keys are not unique, which is not correct. The
keys in the secret should be unique, and this patch addresses that.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
As v3.3 is the latest release, update the upgrade doc in the devel
branch to point to it.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
In the case of async DR, the volumeID will not be the same if the
clusterID or the poolID is different. With the earlier
implementation, the new volumeID mapping was expected to be stored in
the rados omap pool. In the case of a ControllerExpand or
DeleteVolume request, only the volumeID is sent, so it is not
possible to find the corresponding poolID in the new cluster.
With this change, it works as below:
The csi-rbdplugin-controller watches for PV objects; when a PV object
is created, it checks whether the omap already exists. If the omap
does not exist, it generates the new volumeID and checks for a
volumeID mapping entry in the PV annotation; if the mapping does not
exist, it adds the new entry to the PV annotation.
Cephcsi checks the PV annotations: if the omap does not exist but the
mapping exists in the PV annotation, it uses the new volumeID for
further operations; a sketch of the controller step follows.
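A minimal sketch of the controller's reconcile step; the annotation
key and the omapExists/generateNewVolumeID helpers are hypothetical
stand-ins for the real implementation:

```go
package controller

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// volIDAnnotation is an assumed annotation key, not the real one.
const volIDAnnotation = "csi.ceph.io/new-volume-id"

// Hypothetical helpers, stubbed so the sketch compiles.
func omapExists(ctx context.Context, volID string) bool { return false }
func generateNewVolumeID(ctx context.Context, oldID string) (string, error) {
	return oldID, nil
}

// reconcilePV records the new volumeID on the PV when the omap from
// the original cluster is missing after a failover.
func reconcilePV(ctx context.Context, c client.Client, pv *corev1.PersistentVolume) error {
	oldID := pv.Spec.CSI.VolumeHandle
	if omapExists(ctx, oldID) {
		return nil // no failover happened, nothing to map
	}
	if _, ok := pv.Annotations[volIDAnnotation]; ok {
		return nil // the mapping is already recorded on the PV
	}
	newID, err := generateNewVolumeID(ctx, oldID)
	if err != nil {
		return err
	}
	if pv.Annotations == nil {
		pv.Annotations = map[string]string{}
	}
	pv.Annotations[volIDAnnotation] = newID
	return c.Update(ctx, pv) // persist the new volumeID mapping
}
```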
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The current rbd plugin only supports the layering feature for RBD
images. Add the exclusive-lock and journaling image features for RBD;
a sketch of the resulting feature set follows.
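A small sketch of the expanded feature set; note that in RBD the
journaling feature requires exclusive-lock to be enabled as well:

```go
package rbd

// Image features the plugin accepts after this change.
var supportedFeatures = map[string]bool{
	"layering":       true,
	"exclusive-lock": true,
	"journaling":     true,
}
```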
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: woohhan <woohyung_han@tmax.co.kr>