This commit:
- modifies GetMonsAndClusterID() to take clusterID instead of options.
- moves the validation of whether clusterID is set out of GetMonsAndClusterID().
- defines a new error, ErrClusterIDNotSet, for reusability.
- adds GetClusterID() to obtain the clusterID from options (see the sketch below).
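A minimal sketch of the new helper and error, assuming the options are a plain map keyed by "clusterID":
```
package util

import "errors"

// ErrClusterIDNotSet is returned when the clusterID is missing from the
// request options, so callers can reuse the same error value.
var ErrClusterIDNotSet = errors.New("clusterID must be set")

// GetClusterID obtains the clusterID from the options.
func GetClusterID(options map[string]string) (string, error) {
	clusterID, ok := options["clusterID"]
	if !ok || clusterID == "" {
		return "", ErrClusterIDNotSet
	}

	return clusterID, nil
}
```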
Signed-off-by: Rakshith R <rar@redhat.com>
(cherry picked from commit 82d09d81cf)
This commit adds the capability to genVolFromVolumeOptions() to fetch
the mapped cluster-id & mon ips for a mirrored PVC on a secondary cluster
which may have a different cluster-id.
This is required for NodeStageVolume().
We also don't need to check for the mapping during volume create requests,
so it can be disabled by passing the bool checkClusterIDMapping as false.
GetMonsAndClusterID() is modified to accept the bool checkClusterIDMapping,
based on which the cluster mapping is checked to fetch the mapped cluster-id
and mon-ips.
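Roughly, the reworked lookup behaves like this sketch (the mapping and monitor lookups are stubbed out as plain maps here; the real code reads them from the mounted config files):
```
package util

import "fmt"

// clusterIDMapping stands in for the mapping read from the mounted
// cluster-mapping file (local cluster-id -> peer cluster-id).
var clusterIDMapping = map[string]string{}

// monitorsByClusterID stands in for the monitor lookup from the CSI config.
var monitorsByClusterID = map[string]string{}

// GetMonsAndClusterID returns the monitors and the (possibly mapped)
// clusterID. checkClusterIDMapping is false for volume create requests and
// true for NodeStageVolume on a mirrored secondary cluster.
func GetMonsAndClusterID(clusterID string, checkClusterIDMapping bool) (string, string, error) {
	if checkClusterIDMapping {
		if mapped, ok := clusterIDMapping[clusterID]; ok {
			clusterID = mapped
		}
	}

	mons, ok := monitorsByClusterID[clusterID]
	if !ok {
		return "", "", fmt.Errorf("failed to fetch monitor list using clusterID (%s)", clusterID)
	}

	return mons, clusterID, nil
}
```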
Signed-off-by: Rakshith R <rar@redhat.com>
(cherry picked from commit 9d1e98ca60)
The configurations in ceph.conf are not picked up by the rados connection
automatically, hence we need to call conn.ReadConfigFile() before calling
Connect().
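For illustration, the go-ceph connection setup then looks roughly like this (the config path is an assumption):
```
package main

import (
	"log"

	"github.com/ceph/go-ceph/rados"
)

func main() {
	conn, err := rados.NewConn()
	if err != nil {
		log.Fatal(err)
	}

	// Settings from ceph.conf are not applied automatically; read the
	// file explicitly before connecting.
	if err := conn.ReadConfigFile("/etc/ceph/ceph.conf"); err != nil {
		log.Fatal(err)
	}

	if err := conn.Connect(); err != nil {
		log.Fatal(err)
	}
	defer conn.Shutdown()
}
```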
Signed-off-by: Rakshith R <rar@redhat.com>
(cherry picked from commit e99dd3dea4)
When we read the csi-kms-connection-details configmap,
vaultAuthNamespace might not be set; after the conversion,
vaultAuthNamespace can end up as an empty value. This commit
checks for an empty vaultAuthNamespace value and sets
vaultAuthNamespace to vaultNamespace.
The empty vaultAuthNamespace value comes from the marshalling at
https://github.com/ceph/ceph-csi/blob/devel/internal/kms/vault_tokens.go#L136-L139.
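A minimal sketch of the fallback (the helper name is made up; the configuration keys match the ones above):
```
package kms

// setAuthNamespaceDefault is a sketch of the fix: if the conversion left
// vaultAuthNamespace empty, fall back to vaultNamespace.
func setAuthNamespaceDefault(config map[string]string) {
	if config["vaultAuthNamespace"] == "" {
		config["vaultAuthNamespace"] = config["vaultNamespace"]
	}
}
```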
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 8c8f34cf7a)
When NewK8sClient() detects an error, it used to call FatalLogMsg(),
which causes a panic. There are additional features that can be used on
Kubernetes clusters, but these are not a requirement for most
functionalities of the driver.
Instead of causing a panic, returning an error should suffice. This
allows using the driver on non-Kubernetes clusters again.
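A sketch of the error-returning variant, assuming the usual in-cluster client-go setup:
```
package k8s

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// NewK8sClient returns an error instead of panicking, so callers on
// non-Kubernetes clusters can treat the failure as non-fatal.
func NewK8sClient() (*kubernetes.Clientset, error) {
	config, err := rest.InClusterConfig()
	if err != nil {
		return nil, fmt.Errorf("failed to get cluster config: %w", err)
	}

	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, fmt.Errorf("failed to create client: %w", err)
	}

	return client, nil
}
```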
Fixes: #2452
Signed-off-by: Niels de Vos <ndevos@redhat.com>
(cherry picked from commit 60c2afbcca)
This commit adds fetchMappedClusterIDAndMons(), which returns
the monitors and clusterID after checking the cluster mapping info.
This is required for regenerating omap entries in a mirrored cluster
with a different clusterID.
Signed-off-by: Rakshith R <rar@redhat.com>
(cherry picked from commit 99168dc822)
This commit moves getMappedID() from the rbd package to the util
package, since it is not rbd specific, and exports
it from there.
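The exported helper is essentially a small bidirectional lookup; a sketch:
```
package util

// GetMappedID returns the counterpart of id in a key/value mapping pair:
// the value when id matches the key, the key when id matches the value,
// and "" when neither side matches.
func GetMappedID(key, value, id string) string {
	if key == id {
		return value
	}
	if value == id {
		return key
	}

	return ""
}
```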
Signed-off-by: Rakshith R <rar@redhat.com>
(cherry picked from commit 496bcba85c)
Unit-testing often fails due to a race condition while writing the
clusterMappingConfigFile from multiple go-routines at the same time.
Failures from `make containerized-test` look like this:
=== CONT TestGetClusterMappingInfo/site2-storage_cluster-id_mapping
cluster_mapping_test.go:153: GetClusterMappingInfo() = <nil>, expected data &[{map[site1-storage:site2-storage] [map[1:3]] [map[11:5]]} {map[site3-storage:site2-storage] [map[8:3]] [map[10:5]]}]
=== CONT TestGetClusterMappingInfo/site3-storage_cluster-id_mapping
cluster_mapping_test.go:153: GetClusterMappingInfo() = <nil>, expected data &[{map[site3-storage:site2-storage] [map[8:3]] [map[10:5]]}]
--- FAIL: TestGetClusterMappingInfo (0.01s)
--- PASS: TestGetClusterMappingInfo/mapping_file_not_found (0.00s)
--- PASS: TestGetClusterMappingInfo/mapping_file_found_with_empty_data (0.00s)
--- PASS: TestGetClusterMappingInfo/cluster-id_mapping_not_found (0.00s)
--- FAIL: TestGetClusterMappingInfo/site2-storage_cluster-id_mapping (0.00s)
--- FAIL: TestGetClusterMappingInfo/site3-storage_cluster-id_mapping (0.00s)
--- PASS: TestGetClusterMappingInfo/site1-storage_cluster-id_mapping (0.00s)
By splitting the public GetClusterMappingInfo() function into an
internal getClusterMappingInfo() that takes a filename, unit-testing can
use different files for each go-routine, and testing becomes more
predictable.
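A sketch of the split; the public wrapper keeps the fixed path, while the internal function is what the tests call with per-goroutine files (the path and exact signatures are assumptions):
```
package util

// ClusterMappingInfo is a placeholder here; the real type carries the
// clusterID, RBD poolID and CephFS fscID mappings (sketched further below).
type ClusterMappingInfo struct{}

// clusterMappingConfigFile is the fixed path used at runtime.
const clusterMappingConfigFile = "/etc/ceph-csi-config/cluster-mapping.json"

// GetClusterMappingInfo remains the public entry point used by the driver.
func GetClusterMappingInfo(clusterID string) (*[]ClusterMappingInfo, error) {
	return getClusterMappingInfo(clusterID, clusterMappingConfigFile)
}

// getClusterMappingInfo accepts the filename so each unit-test goroutine
// can write and read its own temporary file, avoiding the write race.
func getClusterMappingInfo(clusterID, filename string) (*[]ClusterMappingInfo, error) {
	// ... read filename, unmarshal, filter entries matching clusterID ...
	return nil, nil
}
```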
Signed-off-by: Niels de Vos <ndevos@redhat.com>
(cherry picked from commit 8b71671b42665789fac4a4aa1453b0b107f475c6)
This commit adds the functionality of extracting the encryption kmsID and
owner from volumeAttributes in RegenerateJournal() and adds the utility
functions ParseEncryptionOpts and FetchEncryptionKMSID.
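A sketch of pulling the kmsID out of the volume attributes; the function name and the attribute keys ("encrypted", "encryptionKMSID") are illustrative assumptions:
```
package rbd

import (
	"fmt"
	"strconv"
)

// fetchEncryptionKMSIDFromAttributes extracts the encryption settings
// from a PV's volumeAttributes during journal regeneration.
func fetchEncryptionKMSIDFromAttributes(volumeAttributes map[string]string) (string, error) {
	encrypted, ok := volumeAttributes["encrypted"]
	if !ok {
		return "", nil // volume is not encrypted
	}

	isEncrypted, err := strconv.ParseBool(encrypted)
	if err != nil || !isEncrypted {
		return "", err
	}

	kmsID := volumeAttributes["encryptionKMSID"]
	if kmsID == "" {
		return "", fmt.Errorf("encrypted volume is missing encryptionKMSID")
	}

	return kmsID, nil
}
```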
Signed-off-by: Rakshith R <rar@redhat.com>
(cherry picked from commit f05ac2b25d)
The rbd flatten functions are CLI calls and expect
the creds as input, so copying of the creds is
required when we generate the temp clone image.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 916c97b4a8)
Added a helper function to read the clusterID mapping
from the mounted file (see the sketch below).
The clusterID mapping contains the below mappings:
* ClusterID mapping (the cluster to which we are failing over
  and the cluster from which the failover happened)
* RBD PoolID mapping between the clusters.
* CephFS FscID mapping between the clusters.
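A sketch of the mapping structure implied by the unit-test output earlier in this series (field names and JSON tags are assumptions):
```
package util

// ClusterMappingInfo mirrors one entry of the mounted cluster-mapping file.
type ClusterMappingInfo struct {
	// ClusterIDMapping maps the local clusterID to the peer clusterID
	// (failover source to failover destination).
	ClusterIDMapping map[string]string `json:"clusterIDMapping"`
	// RBDpoolIDMappingInfo maps RBD pool IDs between the clusters.
	RBDpoolIDMappingInfo []map[string]string `json:"RBDPoolIDMapping"`
	// CephFSFscIDMappingInfo maps CephFS filesystem IDs between the clusters.
	CephFSFscIDMappingInfo []map[string]string `json:"CephFSFscIDMapping"`
}
```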
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit ac11d71e19)
The VAULT_AUTH_MOUNT_PATH is a Vault configuration parameter that allows
a user to set a non-default path for the Kubernetes ServiceAccount
integration. This can already be configured for the Vault KMS, and is
now added to the Vault Tenant SA KMS as well.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
(cherry picked from commit 4859f2dfdb)
The new `vaultAuthNamespace` configuration parameter can be set to the
Vault Namespace where the authentication is setup in the service. Some
Hashicorp Vault deployments use sub-namespaces for their users/tenants,
with a 'root' namespace where the authentication is configured. This
requires passing of different Vault namespaces for different operations.
Example:
- the Kubernetes Auth mechanism is configured in the Vault
Namespace called 'devops'
- a user/tenant has a sub-namespace called 'devops/website' where the
encryption passphrases can be placed in the key-value store
The configuration for this then looks like:
    vaultAuthNamespace: devops
    vaultNamespace: devops/homepage
Note that Vault Namespaces are a feature of the Hashicorp Vault
Enterprise product, and not part of the Open Source version. This
prevents adding e2e tests that validate the Vault Namespace
configuration.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
(cherry picked from commit f2d5c2e0df)
This commit uses `strings.SplitN` instead of `strings.Split`.
The cgroup path for pids.max has extra `:` symbols in it, due to which
getCgroupPidsFile() splits the string into 5 tokens instead of
3, leading to the loss of part of the path.
As a result, the below error is reported:
`Failed to get the PID limit, can not reconfigure: open
/sys/fs/cgroup/pids/system.slice/containerd.service/
kubepods-besteffort-pod183b9d14_aed1_4b66_a696_da0c738bc012.slice/pids.max:
no such file or directory`
SplitN takes an argument n and splits the string
accordingly, which helps us to get the desired
file path.
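For illustration, splitting a /proc/self/cgroup line whose path part itself contains `:` (the container ID below is made up):
```
package main

import (
	"fmt"
	"strings"
)

func main() {
	// A cgroup line where the path component itself contains ':'.
	line := "11:pids:/system.slice/containerd.service/kubepods-besteffort-pod1234.slice:cri-containerd:abcdef"

	// Split produces 5 tokens, so token[2] loses the tail of the path.
	fmt.Println(len(strings.Split(line, ":"))) // 5
	fmt.Println(strings.Split(line, ":")[2])   // truncated path

	// SplitN with n=3 keeps the remaining ':' inside the path token.
	parts := strings.SplitN(line, ":", 3)
	fmt.Println(parts[2]) // full path, colons included
}
```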
Fixes: #2337
Co-authored-by: Yati Padia <ypadia@redhat.com>
Signed-off-by: Yati Padia <ypadia@redhat.com>
(cherry picked from commit 16ec97d8f7)
Previously, in ControllerExpandVolume() we had a check for encrypted
volumes and we used to fail all expand requests on an encrypted
volume. Also, for Block VolumeMode PVCs, NodeExpandVolume used to be
ignored/skipped.
With these changes, we add support for the expansion of encrypted volumes.
Also, for raw Block VolumeMode PVCs with encryption, we call NodeExpandVolume.
That said,
with LUKS1, the cryptsetup utility doesn't prompt for a passphrase on resizing
the crypto mapper device. This is because LUKS1 devices don't use the kernel
keyring for volume keys.
LUKS2 devices, on the other hand, use the kernel keyring for the volume key by
default, i.e. the cryptsetup utility asks for a passphrase if it detects that
the volume key was previously passed to dm-crypt via the kernel keyring
service. We override the default with the --disable-keyring option during the
cryptsetup open command, so that at the time of the crypto mapper device
resize we will not be prompted for any passphrase.
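As an illustration, the open call with the keyring override might look like this sketch (the exact arguments and key handling in the driver may differ):
```
package util

import (
	"os/exec"
	"strings"
)

// openCryptDevice sketches the cryptsetup open invocation with the kernel
// keyring disabled, so a later resize of the mapper device does not prompt
// for the passphrase (the LUKS2 default behavior otherwise).
func openCryptDevice(devicePath, mapperName, passphrase string) error {
	cmd := exec.Command("cryptsetup", "luksOpen", devicePath, mapperName,
		"--disable-keyring", "--key-file=-")
	cmd.Stdin = strings.NewReader(passphrase)

	return cmd.Run()
}
```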
Fixes: #1469
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
With a LUKS1 device:
$ cryptsetup status /dev/mapper/crypto-rbd0
/dev/mapper/crypto-rbd0 is active and is in use.
type: LUKS1
cipher: aes-xts-plain64
keysize: 512 bits
key location: dm-crypt
device: /dev/rbd0
sector size: 512
offset: 4096 sectors
size: 4190208 sectors
mode: read/write
With a LUKS2 device:
$ cryptsetup status /dev/mapper/crypto-rbd0
/dev/mapper/crypto-rbd0 is active and is in use.
type: LUKS2
cipher: aes-xts-plain64
keysize: 512 bits
key location: dm-crypt
device: /dev/rbd0
sector size: 512
offset: 32768 sectors
size: 4161536 sectors
mode: read/write
This could lead to failures with unmap in the NodeUnstageVolume path
for the encrypted volumes.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
This commit fixes the godot, cyclop and
paralleltest linter errors caused by merged PRs.
Updates: #1586
Signed-off-by: Yati Padia <ypadia@redhat.com>
At present, while acquiring the deleteLock on the volume, we check
for ongoing clone and snapshot creation operations on the same volume.
Since the snapshot and clone controllers do not allow parent
volume deletion while those operations are in progress, we can drop this
extra check.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
It seems that the version of the key/value engine can not always be
detected for Hashicorp Vault. In certain cases, it is required to
configure the `VAULT_BACKEND` (or `vaultBackend`) option so that a
successful connection to the service can be made.
The `kv-v2` is the current default for development deployments of
Hashicorp Vault (what we use for automated testing). Production
deployments default to version 1 for now.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
nlreturn linter requires a new line before return
and branch statements except when the return is alone
inside a statement group (such as an if statement) to
increase code clarity. This commit addresses such issues.
Updates: #1586
Signed-off-by: Rakshith R <rar@redhat.com>
This commit resolves errorlint issues,
which check for code that will cause
problems with the error wrapping scheme.
Updates: #1586
Signed-off-by: Yati Padia <ypadia@redhat.com>
The snapshot controller makes sure the PVC which is the source for the
snapshot request won't get deleted while the snapshot is getting created,
so we don't need to check for any ongoing delete operation on the
volume here.
Subjected code path in snapshot controller:
```
pvc, err := ctrl.getClaimFromVolumeSnapshot(snapshot)
.
..
pvcClone.ObjectMeta.Finalizers = append(pvcClone.ObjectMeta.Finalizers, utils.PVCFinalizer)
_, err = ctrl.client.CoreV1().PersistentVolumeClaims(pvcClone.Namespace).Update(..)
```
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Problem:
-------
For rbd-nbd userspace mounter backends, after a restart of the nodeplugin
all the mounts will start seeing IO errors. This is because, for rbd-nbd
backends, there will be a userspace mount daemon running per volume; post
restart of the nodeplugin pod, there is no way to restore the daemons
back to life.
Solution:
--------
The volume healer is a one-time activity that is triggered at the startup
time of the rbd nodeplugin. It navigates through the list of volume
attachments on the node and acts accordingly.
For now, it is limited to nbd type storage only, but it is flexible and
can be extended in the future for other backend types as needed.
From a few feet above:
This solves a severe problem for nbd backed csi volumes. While going
through the list of volume attachments on the node, if the healer finds a
volume that is in attached state and is of type nbd, it will attempt to
fix the rbd-nbd volumes by sending a NodeStageVolume request with the
required volume attributes like secrets, device name, image attributes,
etc., which will finally help start the required rbd-nbd daemons in
the nodeplugin csi-rbdplugin container. This will allow reattaching the
backend images with the right nbd device, thus allowing the applications
to perform IO without any interruptions even after a nodeplugin restart.
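A sketch of the healer's outer loop; the driver name, node ID, and the re-staging call are placeholders:
```
package rbd

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// runVolumeHealer sketches the one-time startup pass: walk the
// VolumeAttachments on this node and re-stage the attached nbd volumes.
func runVolumeHealer(ctx context.Context, client *kubernetes.Clientset, driverName, nodeID string) error {
	vaList, err := client.StorageV1().VolumeAttachments().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}

	for i := range vaList.Items {
		va := &vaList.Items[i]
		if va.Spec.Attacher != driverName || va.Spec.NodeName != nodeID || !va.Status.Attached {
			continue
		}
		// For nbd-mounter volumes, rebuild the NodeStageVolume request
		// (secrets, staging path, image attributes, ...) and send it so
		// the rbd-nbd daemon is started again.
	}

	return nil
}
```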
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
The clone controller makes sure no delete operation happens
on the source PVC which has been referred to as the datasource of a
clone PVC, so we are safe to operate without looking at the delete
operation lock in this case.
Subjected code in the controller:
```
...
if claim.Spec.DataSource != nil && rc.clone {
	err = p.setCloneFinalizer(ctx, claim)
	...
}
if !checkFinalizer(claim, pvcCloneFinalizer) {
	claim.Finalizers = append(claim.Finalizers, pvcCloneFinalizer)
	_, err := p.client.CoreV1().PersistentVolumeClaims(claim.Namespace).Update(..claim..)
}
```
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
parseTenantConfig() only allowed configuring a defined set of options,
and KMSs were not able to re-use the implementation. Now, the function
parses the ConfigMap from the Tenants Namespace and returns a map with
options that the KMS supports.
The map that parseTenantConfig() returns can be inspected by the KMS,
and applied to the vaultTenantConnection type by calling parseConfig().
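A sketch of the reworked behavior; the ConfigMap name, option filtering, and exact signature are assumptions:
```
package kms

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// parseTenantConfig reads the ConfigMap from the tenant's Namespace and
// returns only the options the calling KMS supports; the caller can then
// apply them via parseConfig().
func parseTenantConfig(ctx context.Context, client *kubernetes.Clientset,
	tenant, configMapName string, supported []string) (map[string]interface{}, error) {
	cm, err := client.CoreV1().ConfigMaps(tenant).Get(ctx, configMapName, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}

	config := make(map[string]interface{})
	for _, key := range supported {
		if value, ok := cm.Data[key]; ok {
			config[key] = value
		}
	}

	return config, nil
}
```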
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This new KMS uses a Kubernetes ServiceAccount from a Tenant (Namespace)
to connect to Hashicorp Vault. The provisioner and node-plugin will
check for the configured ServiceAccount and use the token that is
located in one of the linked Secrets. Subsequently the Vault connection
is configured to use the Kubernetes token from the Tenant.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This makes the Tenant configuration for Hashicorp Vault KMS connections
more modular. Additional KMS implementations that use Hashicorp Vault
with per-Tenant options can re-use the new vaultTenantConnection.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit resolves parallel test issues
and also excludes internal/util/conn_pool_test.go,
as those tests can't run in parallel.
Updates: #1586
Signed-off-by: Yati Padia <ypadia@redhat.com>
This commit resolves the godot linter issue
which says "Comment should end in a period (godot)".
Updates: #1586
Signed-off-by: Yati Padia <ypadia@redhat.com>
This commit adds t.Helper() to the test helper
functions. With this call, go test prints the correct
lines of code for failed tests. Otherwise, the
printed lines would be inside the helper functions.
For more details check: https://github.com/kulti/thelper
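A minimal example of the pattern (hypothetical helper and test):
```
package util

import "testing"

// checkEqual reports failures at the caller's line, thanks to t.Helper().
func checkEqual(t *testing.T, got, want string) {
	t.Helper()

	if got != want {
		t.Errorf("got %q, want %q", got, want)
	}
}

func TestExample(t *testing.T) {
	t.Parallel()

	// On failure, go test points at this line, not inside checkEqual.
	checkEqual(t, "a", "a")
}
```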
Updates: #1586
Signed-off-by: Yati Padia <ypadia@redhat.com>
* Make kernel version parsing support more (valid) version strings
* Put version string parsing into a separate, testable function (see the
  sketch below)
* Fixes: #2248 (Kernel Subversion Parsing Failure)
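A sketch of a tolerant parser; the regex and accepted formats are assumptions, not the exact implementation:
```
package util

import (
	"fmt"
	"regexp"
	"strconv"
)

// versionRegex accepts strings such as "5.13.0", "5.4.0-66-generic" or
// "4.19.202+"; only the leading "major.minor[.sub]" part is significant.
var versionRegex = regexp.MustCompile(`^(\d+)\.(\d+)(?:\.(\d+))?`)

// parseKernelVersion extracts major, minor and sub-level from a kernel
// release string, tolerating vendor suffixes after the numeric prefix.
func parseKernelVersion(release string) (int, int, int, error) {
	m := versionRegex.FindStringSubmatch(release)
	if m == nil {
		return 0, 0, 0, fmt.Errorf("unexpected kernel version: %q", release)
	}

	major, _ := strconv.Atoi(m[1])
	minor, _ := strconv.Atoi(m[2])
	sub := 0
	if m[3] != "" {
		sub, _ = strconv.Atoi(m[3])
	}

	return major, minor, sub, nil
}
```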
Signed-off-by: Jonas Zeiger <jonas.zeiger@talpidae.net>
This commit adds the capability for `metadata` encryption
to fetch the `encryptionPassphrase` from a user-specified
secret name and namespace (if not specified, it
defaults to the namespace where the PVC was created).
This behavior is followed if the `secretName` key is found
in the encryption configuration; otherwise it defaults to fetching
the `encryptionPassphrase` from the storageclass secrets.
Closes: #2107
Signed-off-by: Rakshith R <rar@redhat.com>
The new go-ceph admin package APIs expect
the rados connection to be passed as an argument.
Added a new method called GetRBDAdmin
to get an admin connection to administer
rbd volumes.
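A sketch of the new accessor, assuming the connection wrapper already holds an established *rados.Conn (the wrapper type here is trimmed down for illustration):
```
package util

import (
	"errors"

	"github.com/ceph/go-ceph/rados"
	ra "github.com/ceph/go-ceph/rbd/admin"
)

// ClusterConnection is a trimmed-down stand-in for the driver's
// connection wrapper.
type ClusterConnection struct {
	conn *rados.Conn
}

// GetRBDAdmin returns an rbd admin API handle built from the existing
// rados connection, as the new go-ceph admin APIs expect.
func (cc *ClusterConnection) GetRBDAdmin() (*ra.RBDAdmin, error) {
	if cc.conn == nil {
		return nil, errors.New("cluster is not connected yet")
	}

	return ra.NewFromConn(cc.conn), nil
}
```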
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
We have many declarations, invocations, etc. with long lines which are
very difficult to follow while reading the code. This addresses the issue
in the 'internal/util' package files by restricting the line length to 120 chars.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The Go linter paralleltest checks that t.Parallel()
gets called for the test method and for the range of
test cases within the test.
Updates: #2025
Signed-off-by: Yati Padia <ypadia@redhat.com>
This commit addresses the following issue:
'nolint:gocyclo // complexity needs to be reduced.'
is unused for linter "gocyclo" (nolintlint)
Updates: #2025
Signed-off-by: Yati Padia <ypadia@redhat.com>
This commit returns the actual error returned by the go-ceph API
to the function GetPoolName(..) instead of just returning
ErrPoolNotFound every time there is an error getting the pool id.
There is an issue reported in which the snapshot creation
takes much more time to reach the True state
(i.e., between 2-7 mins) and keeps retrying creation with the
below error even though the pool is present:
rpc error: code = NotFound desc = pool not found: pool ID (21)
not found in Ceph cluster.
Since we cannot interpret the actual error for the delay in
snapshot creation, it is required to return the actual error
as well so that we can understand the reason.
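A sketch of keeping ErrPoolNotFound for matching while preserving the underlying go-ceph error in the message (the lookup function and wrapping details are assumptions):
```
package util

import (
	"errors"
	"fmt"
)

// ErrPoolNotFound is the sentinel error callers match against.
var ErrPoolNotFound = errors.New("pool not found")

// getPoolName wraps the lookup; rawLookup stands in for the go-ceph call.
func getPoolName(poolID int64, rawLookup func(int64) (string, error)) (string, error) {
	name, err := rawLookup(poolID)
	if err != nil {
		// Keep the sentinel for errors.Is() checks, but include the
		// real go-ceph error so the actual reason stays visible.
		return "", fmt.Errorf("%w: pool ID (%d): %v", ErrPoolNotFound, poolID, err)
	}

	return name, nil
}
```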
Signed-off-by: Yati Padia <ypadia@redhat.com>
As callers are already taking care of returning
the GRPC error code, return the actual error
from the IsMountPoint function.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This commit addresses ifshort linter issues, which
check whether a short syntax for if-statements is possible.
Updates: #1586
Signed-off-by: Rakshith R <rar@redhat.com>
By default, the write buffer size in libfuse2 is 2KiB;
the `fuse_big_writes = true` option is used to override this limit.
This commit makes `fuse_big_writes = true` the default
in ceph.conf.
Closes: #1928
Signed-off-by: Rakshith R <rar@redhat.com>