The omap is stored with the requested
snapshot name, not with the subvolume
snapshot name. This fix uses the correct
snapshot request name to clean up the omap
once the subvolume snapshot is deleted.
fixes: #2974
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The `ceph nfs export ...` commands have changed in recent Ceph releases.
Use the most recent command by default, and fall back to the older command
when an error is reported.
This should make the NFS-provisioner work on any current Ceph version.
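A minimal sketch of the fallback pattern, assuming plain os/exec; the function name, the service name and the exact argument order shown here are illustrative, not the actual driver code or the real `ceph nfs export` syntax:

```go
import (
	"context"
	"os/exec"
)

// createExport tries the newer `ceph nfs export create cephfs` syntax first
// and retries with an older argument order when the command reports an error.
// Both argument layouts below are placeholders for the real old/new syntax.
func createExport(ctx context.Context, cluster, fs, export, path string) error {
	newArgs := []string{"nfs", "export", "create", "cephfs", fs, cluster, export, path}
	if err := exec.CommandContext(ctx, "ceph", newArgs...).Run(); err == nil {
		return nil
	}

	// older Ceph releases expect a different argument order
	oldArgs := []string{"nfs", "export", "create", "cephfs", cluster, export, fs, path}
	return exec.CommandContext(ctx, "ceph", oldArgs...).Run()
}
```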
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Increase the timeout to 2 minutes to give the
rollback enough time to complete.
As the rollback is performed by the force-promote
command, it may at times take more than a minute
(roughly depending on the number of dirty blocks
that need to be rolled back).
The extra minute also helps to avoid multiple
calls to complete the rollback and, in extreme
corner cases, to avoid failures on the first
call when the mirror watcher is not yet removed
(after scaling down the RBD mirror instance).
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Restoring a snapshot to a new PVC results in a wrong
dataPoolName when the initial volume is linked
to a storageClass with topology constraints and erasure coding.
Signed-off-by: Thibaut Blanchard <thibaut.blanchard@gmail.com>
NFSVolume instances are short lived, they only exist for the duration of a
single gRPC procedure. It is easier to store the calling Context in the
NFSVolume struct than to pass it to each of the functions that require it.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
These NFS Controller and Identity servers are the base for the new
provisioner. The functionality is currently extremely limited; follow-up
PRs will implement various CSI procedures.
CreateVolume is implemented with the bare minimum. This makes it
possible to create a volume, and mount it with the
kubernetes-csi/csi-driver-nfs NodePlugin.
DeleteVolume unexports the volume from the Ceph managed NFS-Ganesha
service. In case the Ceph cluster provides multiple NFS-Ganesha
deployments, things might not work as expected. This is going to be
addressed in follow-up improvements.
Many TODO comments need to be resolved before this can be declared
"production ready". Unit tests and e2e tests are missing as well.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
RT, the reference tracker, is a key-based implementation of a reference
counter. Unlike an integer-based counter, RT counts references by tracking
unique keys. This allows accounting in situations where idempotency must be
preserved: it guarantees there will be no duplicate increments or decrements
of the counter.
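A minimal sketch of the idea, with hypothetical names (the real RT type in this commit persists its state and handles errors):

```go
// refTracker counts references by remembering unique keys, so repeated
// Add/Remove calls with the same key stay idempotent.
type refTracker struct {
	refs map[string]struct{}
}

func newRefTracker() *refTracker {
	return &refTracker{refs: make(map[string]struct{})}
}

// Add records a reference for key; adding the same key twice has no effect.
func (rt *refTracker) Add(key string) {
	rt.refs[key] = struct{}{}
}

// Remove drops the reference for key; removing an unknown key is a no-op.
func (rt *refTracker) Remove(key string) {
	delete(rt.refs, key)
}

// Count returns the number of unique keys currently referenced.
func (rt *refTracker) Count() int {
	return len(rt.refs)
}
```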
Signed-off-by: Robert Vasek <robert.vasek@cern.ch>
The OIDC token file path has been changed from
`/var/run/secrets/token` to `/run/secrets/tokens`.
This has been done to ensure compliance with
FHS 3.0.
Refer:
https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch05s13.html
Signed-off-by: Rakshith R <rar@redhat.com>
Below are the three cases where we need
the PVC namespace for encryption:
* CreateVolume: read the namespace from the
createVolume parameters and store it in the omap.
* NodeStage: read the namespace from the omap,
not from the volumeContext.
* Regenerate: read the PVC namespace from the claimRef,
not from the volumeAttributes.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Remove the Kubernetes CSI prefixed parameters
from the volumeContext as we don't want
to store them in the PV VolumeAttributes.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Remove the Kubernetes CSI prefixed parameters
from the volumeContext as we don't want
to store them in the PV VolumeAttributes.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added a helper function to strip the Kubernetes-specific
parameters from the volumeContext, as the volumeContext
is stored in the PV volumeAttributes.
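A rough sketch of such a helper, assuming the usual `csi.storage.k8s.io/` key prefix used by the Kubernetes CSI sidecars (the function name is illustrative, not the actual helper):

```go
import "strings"

// stripKubernetesParameters returns a copy of the volumeContext without the
// Kubernetes CSI sidecar parameters, so that they are not persisted in the
// PV volumeAttributes.
func stripKubernetesParameters(volumeContext map[string]string) map[string]string {
	stripped := make(map[string]string, len(volumeContext))
	for key, value := range volumeContext {
		if strings.HasPrefix(key, "csi.storage.k8s.io/") {
			continue
		}
		stripped[key] = value
	}
	return stripped
}
```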
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit ensures that the parent image is flattened before
creating a volume.
- If the data source is a PVC, the underlying image's parent
is flattened (which would be a temp clone or snapshot).
The hard & soft limits are reduced by 2 to account for the depth
that will be added by the temp & final clones.
- If the data source is a Snapshot, the underlying image itself
is flattened.
The hard & soft limits are reduced by 1 to account for the depth
that will be added by the clone which will be restored from the
snapshot.
The flattening step for the resulting PVC image restored from a snapshot is
removed.
The flattening step for the temp clone & final image is removed when a PVC
clone is being created.
Fixes: #2190
Signed-off-by: Rakshith R <rar@redhat.com>
As per the CSI standard, the size is an optional parameter.
As we already allow cloning to a bigger size,
we need to block cloning to a smaller size,
as it has side effects like data corruption.
Note: even though this check is present in the Kubernetes
sidecar, CSI is CO independent, so the check is added
here as well.
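An illustrative sketch of the size check (names are hypothetical, not the actual driver code):

```go
import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// validateCloneSize rejects a clone request that is smaller than the source
// volume; equal or larger sizes (or an omitted size of 0) are allowed.
func validateCloneSize(requestedBytes, sourceBytes int64) error {
	if requestedBytes != 0 && requestedBytes < sourceBytes {
		return status.Errorf(codes.InvalidArgument,
			"cannot clone source of size %d to a smaller size %d",
			sourceBytes, requestedBytes)
	}
	return nil
}
```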
fixes: #2718
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
These RPCs (nodestage, unstage, volumestats) are
implemented for our drivers at the moment. This commit removes
the `unimplemented` responses from the common/default
server initialization routines.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
These RPCs (controller expand, create and delete snapshots) are
no longer unimplemented, so we don't have to declare them with
`unimplemented` states. This commit removes the same.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
With Amazon STS, and a Kubernetes cluster configured with an
OIDC identity provider, credentials to access Amazon KMS
can be fetched using the OIDC token (serviceaccount token).
Each tenant/namespace needs to create a secret with the AWS region,
role and CMK ARN.
Ceph-CSI will assume the given role with the OIDC token and access
AWS KMS with the given CMK to encrypt/decrypt the DEK, which will be
stored in the image metadata.
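A very rough sketch of the flow using aws-sdk-go v1 (the function name, session name and overall structure are assumptions for illustration; the role ARN, region and CMK ARN would come from the tenant secret and the token from the projected serviceaccount token file):

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
	"github.com/aws/aws-sdk-go/service/sts"
)

// encryptDEK exchanges the OIDC serviceaccount token for temporary AWS
// credentials and uses them to encrypt the DEK with the tenant's CMK.
func encryptDEK(token, roleARN, region, cmkARN string, dek []byte) ([]byte, error) {
	sess := session.Must(session.NewSession())

	// assume the tenant-provided role with the web identity (OIDC) token
	out, err := sts.New(sess).AssumeRoleWithWebIdentity(&sts.AssumeRoleWithWebIdentityInput{
		RoleArn:          aws.String(roleARN),
		RoleSessionName:  aws.String("ceph-csi-aws-kms"), // illustrative name
		WebIdentityToken: aws.String(token),
	})
	if err != nil {
		return nil, err
	}

	creds := credentials.NewStaticCredentials(
		*out.Credentials.AccessKeyId,
		*out.Credentials.SecretAccessKey,
		*out.Credentials.SessionToken)

	// encrypt the DEK with the customer master key (CMK)
	res, err := kms.New(sess, aws.NewConfig().WithRegion(region).WithCredentials(creds)).
		Encrypt(&kms.EncryptInput{
			KeyId:     aws.String(cmkARN),
			Plaintext: dek,
		})
	if err != nil {
		return nil, err
	}

	return res.CiphertextBlob, nil
}
```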
Refer: https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html
Resolves: #2879
Signed-off-by: Rakshith R <rar@redhat.com>
Currently, we support
mapOption: "krbd:v1,v2,v3;nbd:v1,v2,v3"
- By omitting `krbd:` or `nbd:`, the option(s) apply to
rbdDefaultMounter which is krbd.
- A user can _override_ the options for a mounter by specifying `krbd:`
or `nbd:`.
mapOption: "v1,v2,v3;nbd:v1,v2,v3"
is effectively the same as the 1st example.
- Sections are split by `;`.
- If users want to specify common options for both `krbd` and `nbd`,
they should mention them twice.
But if the krbd or nbd specific options contain `:` within them,
the parsing currently fails.
E0301 10:19:13.615111 7348 utils.go:200] ID: 63 Req-ID:
0001-0009-rook-ceph-0000000000000001-fd37c41b-9948-11ec-ad32-0242ac110004
GRPC error: badly formatted map/unmap options:
"krbd:read_from_replica=localize,crush_location=zone:zone1;"
This patch fixes the above case where the options themselves contain the `:`
delimiter,
ex: krbd:v1,v2,v3=v31:v32;nbd:v1,v2,v3
Please note that if you are using options which contain the `:` delimiter,
it is mandatory to specify the mounter type.
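A rough sketch of the fixed parsing, with illustrative names: each `;`-separated section is split on the first `:` only, and the prefix is treated as a mounter type only when it is `krbd` or `nbd`; otherwise the section needs an explicit mounter prefix because the `:` belongs to the option value itself.

```go
import (
	"fmt"
	"strings"
)

// parseMapOptions splits "krbd:...;nbd:..." style map options, keeping any
// ':' that belongs to the option values (e.g. crush_location=zone:zone1).
func parseMapOptions(mapOptions string) (krbdOpts, nbdOpts string, err error) {
	for _, section := range strings.Split(mapOptions, ";") {
		if section == "" {
			continue
		}
		mounter, options := "krbd", section // krbd is the default mounter
		if parts := strings.SplitN(section, ":", 2); len(parts) == 2 {
			switch strings.TrimSpace(parts[0]) {
			case "krbd", "nbd":
				mounter, options = strings.TrimSpace(parts[0]), parts[1]
			default:
				// a ':' without a recognised mounter prefix is ambiguous
				return "", "", fmt.Errorf(
					"badly formatted section %q: options containing ':' need a krbd:/nbd: prefix",
					section)
			}
		}
		if mounter == "krbd" {
			krbdOpts = options
		} else {
			nbdOpts = options
		}
	}

	return krbdOpts, nbdOpts, nil
}
```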
Fixes: #2910
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Mounts managed by ceph-fuse may get corrupted by e.g. the ceph-fuse process
exiting abruptly, or its parent container being terminated, taking down its
child processes with it.
This commit adds checks to the NodeStageVolume and NodePublishVolume procedures
to detect whether a mountpoint in staging_target_path and/or target_path is
corrupted, and a remount is performed if corruption is detected.
Signed-off-by: Robert Vasek <robert.vasek@cern.ch>
Refactored a couple of helper functions for easier reuse.
* Code for building store.VolumeOptions is factored out into a separate function.
* Changed the arguments of getCredentailsForVolume() and NodeServer.mount() so that
  instead of passing in the whole csi.NodeStageVolumeRequest, only the necessary
  properties are passed explicitly. This is to allow these functions to be
  called outside of NodeStageVolume(), where the NodeStageVolumeRequest is not
  available.
Signed-off-by: Robert Vasek <robert.vasek@cern.ch>
The blkdiscard command discards all data on the block device, which
is not desired. Hence, return an Unimplemented code if the
volume access mode is block.
Signed-off-by: Rakshith R <rar@redhat.com>
When a tenant provides a configuration that includes the
`vaultNamespace` option, the `vaultAuthNamespace` option is still taken
from the global configuration. This is not wanted in all cases, as the
`vaultAuthNamespace` option defaults to the `vaultNamespace` option, which
the tenant may want to override as well.
The following behaviour is now better defined:
1. no `vaultAuthNamespace` in the global configuration:
A tenant can override the `vaultNamespace` option and that will also
set the `vaultAuthNamespace` option to the same value.
2. `vaultAuthNamespace` and `vaultNamespace` in the global configuration:
When both options are set to different values in the global
configuration, the tenant `vaultNamespace` option will not override
the global `vaultAuthNamespace` option. The tenant can configure
`vaultAuthNamespace` with a different value if required.
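A small sketch of the defaulting rule described above; the struct and field names are illustrative only:

```go
type vaultConfig struct {
	vaultNamespace     string
	vaultAuthNamespace string
}

// applyVaultNamespaces merges the global and tenant options: the tenant's
// vaultNamespace only cascades into vaultAuthNamespace when the global
// configuration did not pin a vaultAuthNamespace of its own.
func applyVaultNamespaces(global, tenant map[string]string, cfg *vaultConfig) {
	cfg.vaultNamespace = global["vaultNamespace"]
	if v := tenant["vaultNamespace"]; v != "" {
		cfg.vaultNamespace = v
	}

	cfg.vaultAuthNamespace = global["vaultAuthNamespace"]
	if cfg.vaultAuthNamespace == "" {
		// case 1: no global vaultAuthNamespace, follow the (possibly
		// tenant-overridden) vaultNamespace
		cfg.vaultAuthNamespace = cfg.vaultNamespace
	}
	if v := tenant["vaultAuthNamespace"]; v != "" {
		// case 2: the tenant sets an explicit value of its own
		cfg.vaultAuthNamespace = v
	}
}
```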
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Makes the rbd image features in the storageclass
optional so that the default image features of librbd
can be used. The option for the user to specify the
image features in the storageclass is kept as well.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As deep-flatten has long been supported in Ceph and is
enabled by default in librbd, provide an option
to enable it in cephcsi for the rbd images we are
creating.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit refactors the cephfs core
functions with interfaces. This helps in
better code structuring and in writing
unit test cases.
update #852
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Logging the error is not user-friendly as
it contains the system error message. Log the
stderr instead, which is a user-friendly error
message for identifying the problem.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Return the dataPool used to create the image instead of the default one
provided by the createVolumeRequest.
In case of topologyConstrainedDataPools, they may differ.
Don't add the datapool if it's not present.
Signed-off-by: Sébastien Bernard <sebastien.bernard@sfr.com>
At present we are node staging with world-accessible permissions, which is
not correct. We should allow the CO to take care of it and make
the decision. This commit also removes `fuseMountOptions` and
`KernelMountOptions` as they are no longer needed.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The omap is stored with the requested
snapshot name, not with the subvolume
snapshot name. This fix uses the correct
snapshot request name to clean up the omap
once the subvolume snapshot is deleted.
fixes: #2832
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit removes `kp-metadata` registration from existing HPCS
or Key Protect code as per the plan.
Fixes: #2816
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
As we use optional additional authentication data while wrapping
the DEK, we have to send the same data while unwrapping.
Error:
```
failed to unwrap the DEK: kp.Error: ..(INVALID_FIELD_ERR)',
reasons='[INVALID_FIELD_ERR: The field `ciphertext` must be: the
original base64 encoded ciphertext from the wrap operation
```
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
When a tenant configures `vaultNamespace` in their own ConfigMap, it is
not applied to the Vault configuration, unless `vaultAuthNamespace` is
set as well. This is unexpected, as the `vaultAuthNamespace` usually is
something configured globally, and not per tenant.
The `vaultAuthNamespace` is an advanced option, that is often not needed
to be configured. Only when tenants have to configure their own
`vaultNamespace`, it is possible that they need to use a different
`vaultAuthNamespace`. The default for the `vaultAuthNamespace` is now
the `vaultNamespace` value from the global configuration. Tenants can
still set it to something else in their own ConfigMap if needed.
Note that Hashicorp Vault Namespaces are only functional in the
Enterprise version of the product. Therefore this cannot be tested in
the Ceph-CSI e2e with the Open Source version of Vault.
Fixes: https://bugzilla.redhat.com/2050056
Reported-by: Rachael George <rgeorge@redhat.com>
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit removes the thick provisioning
code as thick provisioning is deprecated in
cephcsi 3.5.0.
fixes: #2795
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
At present the KMS structs are exported; ideally we should be
able to work without exporting them.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
At present the KMS structs are exported; ideally we should be
able to work without exporting them.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Currently we are using methods, and all of these
methods make a network call to fetch details from
the Ceph cluster. This makes it difficult to write
test cases for these functions. If we move to
interfaces, we can make use of mocks to write unit
tests for the caller functions.
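An illustrative sketch of the pattern (interface and method names are hypothetical): expose the cluster-facing calls through a small interface so callers can be unit-tested with a mock instead of a live Ceph cluster.

```go
import "context"

// SubVolumeClient abstracts the calls that would otherwise hit the cluster.
type SubVolumeClient interface {
	GetSize(ctx context.Context) (int64, error)
	Resize(ctx context.Context, size int64) error
}

// mockSubVolume satisfies the interface without any network calls, for tests.
type mockSubVolume struct {
	size int64
}

func (m *mockSubVolume) GetSize(_ context.Context) (int64, error) {
	return m.size, nil
}

func (m *mockSubVolume) Resize(_ context.Context, size int64) error {
	m.size = size
	return nil
}
```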
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Introduce this change to be consistent with other components and to
explicitly state that it belongs to the `ibm keyprotect` service.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Currently we are overriding the permissions to `0o777` at node
stage time, which is not the correct action. In addition, this permission
change causes an extra permission correction at node staging time
by the CO when the FSGroup change policy has been set to
`OnRootMismatch`.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
As ioutil.ReadFile is deprecated and
the suggestion is to use os.ReadFile as
per https://pkg.go.dev/io/ioutil, update
the usage accordingly.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As ioutil.WriteFile is deprecated and
the suggestion is to use os.WriteFile as
per https://pkg.go.dev/io/ioutil, update
the usage accordingly.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Currently most of the internal methods have
rbdVolume as the receiver. As these methods
are completely internal and require only
the fields of rbdImage, use rbdImage
as the receiver instead of rbdVolume.
updates #2742
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
During CreateVolume from a snapshot/volume,
it is difficult to identify whether the clone
failed and a new clone was created. In case
of clone failure, log the error message
for better debugging.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As per the CSI standard, the size is an optional parameter.
As we already allow cloning to a bigger size,
we need to block cloning to a smaller size,
as it has side effects like data corruption.
Note: even though this check is present in the Kubernetes
sidecar, CSI is CO independent, so the check is added
here as well.
updates: #2718
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As per the CSI standard, the size is an optional parameter.
As we already allow restoring to a bigger size,
we need to block restoring to a smaller size,
as it has side effects like data corruption.
Note: even though this check is present in the Kubernetes
sidecar, CSI is CO independent, so the check is added
here as well.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Currently, as a workaround, we are calling
resize on the cloned and restored volumes
to adjust their size.
With this fix, we call resize
only if there is a size mismatch between the requested
size and the volume from which the new volume needs
to be created.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The SINGLE_NODE_WRITER capability ambiguity has been fixed in CSI spec v1.5,
which allows SP drivers to declare more granular WRITE capabilities in the form
of SINGLE_NODE_SINGLE_WRITER or SINGLE_NODE_MULTI_WRITER.
These are not really new capabilities; rather, they were introduced to
get the desired functionality from the CO side based on the capabilities the SP
driver supports for various CSI operations. The new capabilities also help
to address the new access mode RWOP (ReadWriteOncePod).
This commit adds a helper function which identifies whether the request is in
multi-writer mode and also validates whether it is filesystem mode or
block mode. Based on this inspection it rejects multi-writer
requests for filesystem mode and only allows multi-writer requests for
block mode.
This commit also adds unit tests for the isMultiWriterBlock function which
validate various access types and access modes.
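A sketch of such a check using the CSI spec Go bindings (github.com/container-storage-interface/spec/lib/go/csi); the real helper in this commit may differ in naming and detail:

```go
import "github.com/container-storage-interface/spec/lib/go/csi"

// inspectWriteMode reports whether any capability requests a multi-writer
// access mode and whether any capability requests block access; callers can
// then reject multi-writer requests that are not block mode.
func inspectWriteMode(caps []*csi.VolumeCapability) (multiWriter, block bool) {
	for _, c := range caps {
		switch c.GetAccessMode().GetMode() {
		case csi.VolumeCapability_AccessMode_SINGLE_NODE_MULTI_WRITER,
			csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER:
			multiWriter = true
		}
		if c.GetBlock() != nil {
			block = true
		}
	}

	return multiWriter, block
}
```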
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The SINGLE_NODE_WRITER capability ambiguity has been fixed in CSI spec v1.5,
which allows SP drivers to declare more granular WRITE capabilities.
These are not really new capabilities; rather, they were introduced to
get the desired functionality from the CO side based on the capabilities the SP
driver supports for various CSI operations.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit adds optional BaseURL and TokenURL settings to the
Key Protect/HPCS configuration and client connections; if not
provided, default values are used.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Implement the UnfenceClusterNetwork gRPC call,
which unblocks access to a
CIDR block by removing it from the network fence.
Signed-off-by: Yug Gupta <yuggupta27@gmail.com>
Implement the FenceClusterNetwork gRPC call, which
blocks access to a CIDR block by
creating a network fence.
Signed-off-by: Yug Gupta <yuggupta27@gmail.com>
Convert the CIDR block into a range of IPs,
and then add network fencing via "ceph osd blocklist"
for each IP in that range.
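A minimal sketch of expanding a CIDR into individual IPs with the standard net package (the helper names are illustrative); the blocklisting itself then runs `ceph osd blocklist add <ip>` for each address:

```go
import "net"

// ipsInCIDR lists every address covered by the given CIDR block.
func ipsInCIDR(cidr string) ([]string, error) {
	ip, ipNet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}

	var ips []string
	for ip := ip.Mask(ipNet.Mask); ipNet.Contains(ip); incIP(ip) {
		ips = append(ips, ip.String())
	}

	return ips, nil
}

// incIP increments an IP address in place, carrying across octets.
func incIP(ip net.IP) {
	for i := len(ip) - 1; i >= 0; i-- {
		ip[i]++
		if ip[i] != 0 {
			break
		}
	}
}
```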
Signed-off-by: Yug Gupta <yuggupta27@gmail.com>
This commit removes the rbdVol.getTrashPath() function
since it is no longer used after the introduction
of the go-ceph rbd admin task API for deletion.
Signed-off-by: Rakshith R <rar@redhat.com>
With the introduction of the go-ceph rbd admin task API, credentials no
longer need to be passed, as no CLI command is invoked.
Signed-off-by: Rakshith R <rar@redhat.com>
This commit removes `rv.Connect(cr)` since the rbdVolume should
already have an active connection at this stage of the function call.
`rv.getCloneDepth(ctx)` only works after connecting to the cluster.
Signed-off-by: Rakshith R <rar@redhat.com>
This commit adds support for the go-ceph rbd task API
`trash remove` and `flatten` operations instead of using
CLI commands.
Fixes: #2186
Signed-off-by: Rakshith R <rar@redhat.com>
Considering IBM has different crypto services (e.g. SKLM) in place, it is
good to keep the configmap key names in the format
`IBM_KP_...` instead of `KP_..`,
so that if we add more IBM crypto services in the future we can keep
a similar schema specific to each service from IBM,
e.g. `IBM_SKLM_...`.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The CSI Controller (provisioner) can call `rbd sparsify` to reduce the
space consumption of the volume.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Use ExecCommandWithTimeout with a timeout
of 1 minute for the promote operation.
If the command does not return an error/response
within 1 minute, the process will be killed
and an error will be returned to the user.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added the ExecCommandWithTimeout helper function
to execute commands with a timeout option.
If the command does not return any response
within the timeout, the process will be terminated
and an error will be returned to the user.
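A rough sketch of such a helper based on exec.CommandContext, assuming simplified return values compared to the real function:

```go
import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// execCommandWithTimeout runs program with args and kills the process if it
// does not finish within the given timeout, returning an error in that case.
func execCommandWithTimeout(ctx context.Context, timeout time.Duration,
	program string, args ...string) (string, error) {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	out, err := exec.CommandContext(ctx, program, args...).CombinedOutput()
	if errors.Is(ctx.Err(), context.DeadlineExceeded) {
		// the process was killed because it did not respond in time
		return string(out), fmt.Errorf("%s timed out after %v", program, timeout)
	}

	return string(out), err
}
```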
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
After creating the rbd image, log the image
details corresponding to the request along
with the request name.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As getImageInfo is already called inside the
cloneRbdImageFromSnapshot function right
after creating the clone, remove the extra
API call to get the details again.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
After creating the clone, get the current
image details like size, creationTime and
imageFeatures from the Ceph cluster.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Moved the ParentName, ParentPool and ImageFeatureSet
fields to the rbdImage struct as these are
first-class citizens of rbdImage.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If a volume with a bigger size is created
from a snapshot or from another volume, we
need to expand the filesystem in the CSI driver
as well, because no NodeExpand request is triggered
for it. During NodeStageVolume we can
expand the filesystem by checking whether the
filesystem needs expansion or not.
If it is an encrypted device, check the sizes
of the rbd device and the LUKS device; if required,
the device will be expanded before
expanding the filesystem.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If the requested volume size is greater than
the snapshot size, resize the cloned volume
after creating a clone from a snapshot.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If the requested volume size is greater than
the parent volume size, resize the cloned volume
after creating a final clone from a parent volume.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added a check to consider the ErrImageNotFound error
during the DeleteSnapshot operation. If the error
is ErrImageNotFound, we need to ensure that the image
is removed from the trash and that the rados
OMAP data is removed as well.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
We need the actual size of the rbdVolume
created for the snapshot. As we are not
storing the size of the snapshot in the OMAP,
we need to fetch the size from the Ceph cluster
and update it on the rbdSnapshot struct.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As we are moving VolSize to the rbdImage struct,
we should reuse it instead of maintaining
one more field in the rbdSnapshot struct.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Move VolSize to the rbdImage struct,
as the size is more applicable to rbdImage,
which is used for both rbdVolume
and rbdSnapshot.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As we no longer support the v1.x
version of cephcsi, remove the json tags
used to store rbd volume details in the configmap.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>