This change allows the user to choose not to fall back to the NBD
mounter when some ImageFeatures are absent with the krbd driver, and
instead just fail the NodeStage call.
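A minimal StorageClass sketch of how this can be used; the option name
tryOtherMounters is illustrative only and not confirmed by this change:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: rbd
  imageFeatures: layering,exclusive-lock
  mounter: krbd
  # illustrative option name: when "false", NodeStage fails instead
  # of falling back to the rbd-nbd mounter on missing image features
  tryOtherMounters: "false"
reclaimPolicy: Delete
```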
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Thick-provisioning was introduced to make accounting of assigned space
for volumes easier. When thick-provisioned volumes are the only consumer
of the Ceph cluster, this works fine. However, it is unlikely that this
is the case. Instead, accounting of the requested (thin-provisioned)
size of volumes is much more practical as different types of volumes can
be tracked.
OpenShift already provides cluster-wide quotas, which can combine
accounting of requested volumes by grouping different StorageClasses.
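For example, a plain Kubernetes ResourceQuota can already account for
the requested capacity per StorageClass (names below are placeholders):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: rbd-storage-quota
  namespace: tenant-a
spec:
  hard:
    # total requested (thin-provisioned) capacity for one StorageClass
    csi-rbd-sc.storageclass.storage.k8s.io/requests.storage: 500Gi
```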
In addition to the difficult practice of allowing only
thick-provisioned RBD-backed volumes, performance makes
thick-provisioning troublesome. As volumes need to be completely
allocated, data needs to be written to the whole volume. This can take
a long time, depending on the size of the volume. Provisioning,
cloning and snapshotting become very noticeable and, because of the
additional time consumption, more prone to failures.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
For a static volume, the user manually mounts an already
existing image as a volume to the application pods. As it
is an RBD image, if the PVC is of type filesystem, the
image will be mapped, formatted and mounted on the node.
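A trimmed sketch of such a static PV, following the usual static PVC
layout; names and IDs are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-rbd-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  csi:
    driver: rbd.csi.ceph.com
    fsType: ext4
    # volumeHandle is the name of the pre-created RBD image
    volumeHandle: static-rbd-image
    volumeAttributes:
      clusterID: <cluster-id>
      pool: rbd
      staticVolume: "true"
      imageFeatures: layering
    nodeStageSecretRef:
      name: csi-rbd-secret
      namespace: ceph-csi
  persistentVolumeReclaimPolicy: Retain
```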
If the user resizes the image on the Ceph cluster, the
filesystem created on the RBD image cannot be resized
automatically. Even if the user deletes and recreates the
Kubernetes objects, the new size will not be visible on
the node.
With this change, during the NodeStageVolumeRequest the
nodeplugin will check the size of the mapped RBD image on
the node using the devicePath, as well as the RBD image
size on the Ceph cluster. If the sizes do not match, it
will resize the filesystem on the node as part of the
NodeStageVolumeRequest RPC call.
The user needs to do the below operations to see the new size:
* Resize the rbd image in the Ceph cluster.
* Scale down all the application pods using the static
  PVC.
* Make sure no application pods using the static PVC are
  running on a node.
* Scale up all the application pods.
Validate the new size in the volume mounted in the
application pod.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
cephLogDir: a StorageClass option that is passed to the rbd-nbd daemon.
cephLogDirHostPath: a nodeplugin daemonset-level option that helps in
using the right host path while bind-mounting.
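A sketch of how the two options fit together; the /var/log/ceph paths
are assumed defaults:

```yaml
# StorageClass parameters: log directory as seen inside the
# nodeplugin container, passed to the rbd-nbd daemon
parameters:
  mounter: rbd-nbd
  cephLogDir: /var/log/ceph
---
# nodeplugin daemonset (e.g. via the Helm value cephLogDirHostPath):
# host path that gets bind-mounted to cephLogDir
volumes:
  - name: ceph-logdir
    hostPath:
      path: /var/log/ceph
      type: DirectoryOrCreate
```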
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
As we have deprecated versions earlier than v3.3.0, it is not required
to keep the upgrade docs for them. The upgrade doc for v3.2.0 to
v3.3.0 has been kept intact.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Added a design doc to handle volumeID mapping in case
of failover during Disaster Recovery.
Updates: #2118
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Mainly removed the rbd-nbd mounter from the pre-upgrade
considerations affecting restarts.
Also updated the v3.3 tags to v3.4.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Add documentation for Disaster Recovery, describing the
steps to failover and failback in case of a planned
migration or a disaster.
Signed-off-by: Yug Gupta <yuggupta27@gmail.com>
In addition to the single ServiceAccount KMS support for Hashicorp
Vault, Ceph-CSI can now use a ServiceAccount per Tenant as well. This
adds the user-documentation with references to the example deployment
files.
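For illustration, a tenant then only needs a ServiceAccount in its own
Namespace; the name ceph-csi-vault-sa is the assumed default:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  # ServiceAccount that Ceph-CSI uses to authenticate against
  # Vault on behalf of this tenant
  name: ceph-csi-vault-sa
  namespace: tenant-a
```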
Closes: #2222
Signed-off-by: Niels de Vos <ndevos@redhat.com>
A new KMS that supports Hashicorp Vault with the Kubernetes Auth backend
and ServiceAccounts per Tenant (Kubernetes Namespace).
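A hedged sketch of a matching KMS configuration entry, assuming the
vaulttenantsa type name and the option names used in the Ceph-CSI
examples:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-encryption-kms-config
data:
  config.json: |-
    {
      "vault-tenant-sa": {
        "encryptionKMSType": "vaulttenantsa",
        "vaultAddress": "http://vault.default.svc.cluster.local:8200",
        "vaultBackendPath": "secret/",
        "tenantSAName": "ceph-csi-vault-sa"
      }
    }
```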
Updates: #2222
Signed-off-by: Niels de Vos <ndevos@redhat.com>
As Travis CI (`https://travis-ci.org/`) is being shut down
on June 15th, we either need to move to the new place,
https://www.travis-ci.com/, or switch to GitHub Actions
to push the image and the Helm charts when a PR is merged.
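A minimal workflow sketch for the GitHub Actions route; the trigger,
job layout and make target are illustrative only:

```yaml
name: publish-artifacts
on:
  push:
    branches: [devel]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # illustrative target; the real Makefile target may differ
      - name: push container image and helm charts
        run: make push-image push-helm-charts
```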
fixes: #1781
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
At present the cert keys are not unique, which is not correct.
The keys in the secret should be unique, and this patch
addresses that.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
As v3.3 is the latest release, update the upgrade doc
in the devel branch to point to it.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
In the case of async DR, the volumeID will not be the
same if the clusterID or the PoolID is different. With
the earlier implementation, it was expected that the new
volumeID mapping is stored in the rados omap pool. In the
case of a ControllerExpand or a DeleteVolume request,
only the volumeID will be sent, and it is not possible
to find the corresponding poolID in the new cluster.
With this change, it works as below:
The csi-rbdplugin-controller will watch for PV objects.
When a PV object is created, it will check whether the
omap already exists. If the omap doesn't exist, it will
generate the new volumeID and check for the volumeID
mapping entry in the PV annotation; if the mapping does
not exist, it will add the new entry to the PV annotation.
Cephcsi will check the PV annotations if the omap does
not exist; if the mapping exists in the PV annotation,
it will use the new volumeID for further operations.
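A sketch of what such a PV annotation could look like; the annotation
key and the mapping format are illustrative, not taken from this
change:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    # illustrative key/format: maps the volumeID from the failed-over
    # cluster to the newly generated one
    rbd.csi.ceph.com/volume-id-mapping: '{"<old-volumeID>": "<new-volumeID>"}'
```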
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The current RBD plugin only supports the layering feature
for RBD images. Add the exclusive-lock and journaling
image features for RBD.
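With this change a StorageClass can request the additional features,
for example:

```yaml
parameters:
  # journaling requires exclusive-lock to be enabled as well
  imageFeatures: layering,exclusive-lock,journaling
```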
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: woohhan <woohyung_han@tmax.co.kr>
Update the encrypted PVC implementation doc with references to the new
EncryptedKMS, DEKStore and VolumeEncryption types.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Add an option to the StorageClass to support creating fully allocated
(thick-provisioned) RBD images.
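Assuming the thickProvision parameter name from the examples, the
StorageClass option is set like:

```yaml
parameters:
  # allocate the full image size at creation time
  thickProvision: "true"
```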
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When a volume was provisioned by an old Ceph-CSI provisioner, the
metadata of the RBD image will contain `requiresEncryption` to indicate
a passphrase needs to be created. New Ceph-CSI provisioners create the
passphrase in the CreateVolume request, and set `encryptionPrepared`
instead.
When a new node-plugin detects that `requiresEncryption` is set in the
RBD image metadata, it will fall back to the old behaviour.
In case `encryptionPrepared` is read from the RBD image metadata, the
passphrase is used to cryptsetup/format the image.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Currently, the keys for KMS certificates/keys in a
secret are ca.cert, tls.cert and tls.key. This commit
changes the keys ca.cert and tls.cert to cert, and
tls.key to key.
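Per the commit text, a secret that previously used the
ca.cert/tls.cert and tls.key data keys now looks like this (name and
contents are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vault-tls-secret
stringData:
  # was "ca.cert" / "tls.cert"
  cert: <PEM-encoded certificate>
  # was "tls.key"
  key: <PEM-encoded private key>
```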
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added an option to pass the client certificate
and the client certificate key for the Vault
token-based encryption.
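A sketch of the corresponding KMS configuration fragment; the option
and secret names follow the Vault examples and are assumptions here:

```yaml
# fragment of the Vault token-based KMS configuration
vaultAddress: "https://vault.example.com:8200"
# secrets holding the client certificate and its key
vaultClientCertFromSecret: "vault-client-cert"
vaultClientCertKeyFromSecret: "vault-client-cert-key"
```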
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The YAML files for RBD encryption are located in examples/kms/vault,
not in the examples/rbd directory.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
In addition to the Vault KMS support (uses Kubernetes ServiceAccount),
there is the new Vault Tokens KMS feature.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Design for adding a new KMS type "VaultTokens" that can be used to
configure a Hashicorp Vault service where each tenant has their own
personal token to manage encryption keys for PVCs.
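For illustration, each tenant's personal token would live in a Secret
in the tenant's Namespace; the ceph-csi-kms-token name and the token
key are assumed defaults:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-csi-kms-token
  namespace: tenant-a
stringData:
  # Vault token the tenant manages for its own encryption keys
  token: <vault-token>
```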
Signed-off-by: Niels de Vos <ndevos@redhat.com>