The clusterAdditionalInfo map holds a localClusterState per cluster, which
records whether the Ceph cluster supports resize and whether the
subvolumegroup has been created. Currently we check whether the key is
present in the map and whether localClusterState.resizeSupported is set to
false before calling `ceph fs subvolume resize` to probe whether the command
is supported. However, when a structure is initialized all of its members are
set to their zero values, so a plain bool cannot distinguish "not yet
checked" from "not supported". As a result we never actually check whether
the backend supports the `ceph fs subvolume resize` command and always use
`ceph fs subvolume create` to resize the subvolume; in some Ceph versions
`ceph fs subvolume create` does not work for resizing a subvolume.
This commit changes resizeSupported from bool to *bool so this scenario can
be handled properly.
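A minimal sketch of the tri-state check this enables (the map and field names follow the commit; the probe helper is an assumption):

```go
// localClusterState caches per-cluster capabilities.
type localClusterState struct {
	// nil: not checked yet, false: not supported, true: supported
	resizeSupported       *bool
	subVolumeGroupCreated bool
}

var clusterAdditionalInfo = make(map[string]*localClusterState)

func clusterSupportsResize(clusterID string, probe func() bool) bool {
	state, ok := clusterAdditionalInfo[clusterID]
	if !ok {
		state = &localClusterState{}
		clusterAdditionalInfo[clusterID] = state
	}
	if state.resizeSupported == nil {
		// first time for this cluster: probe `ceph fs subvolume resize`
		supported := probe()
		state.resizeSupported = &supported
	}
	return *state.resizeSupported
}
```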
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
go-ceph returns a NotImplementedError for invalid (unknown) commands;
cephcsi uses errors.As to detect this error.
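A hedged sketch of the check (the package exposing NotImplementedError is assumed to be go-ceph's CephFS admin package, imported here as fsAdmin):

```go
// resizeSupported returns false when the cluster rejected the resize call
// because the command does not exist in this Ceph version.
func resizeSupported(err error) bool {
	// go-ceph wraps "invalid command" responses in a NotImplementedError,
	// so errors.As() finds it anywhere in the wrapped error chain.
	var notImplemented fsAdmin.NotImplementedError
	return !errors.As(err, &notImplemented)
}
```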
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
In order to re-use the configuration of Vault, split a new
vaultConnection type from the VaultKMS type.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
getCloneInfo() does not need to return a full CloneStatus struct that
only has one member. Instead, it can just return the value of the single
member, so the JSON type/struct does not need to be exposed.
This makes the API for getCloneInfo() a little simpler, so it can be
replaced by a go-ceph implementation later on.
As the function no longer returns any of the unused attributes, it is
renamed to getCloneStatus() as well.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Instead of the hand-rolled Vault usage, use the libopenstorage/secrets
package that provides a nice API. The support for Vault becomes much
simpler and more maintainable that way.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Earlier, if the depth check failed, the complete vol struct was logged.
This commit logs only the pool and image name.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
In the case of Disaster Recovery failover, the
user is expected to create static PVCs. We have
planned not to go with the PVC name and namespace
for many reasons (in Kubernetes it is planned to
support transferring a PVC to a new namespace with a
different name, and new features like the data
populator are coming in). For now, we are
planning to go with static PVCs to support
async mirroring.
During async mirroring only the RBD images are
mirrored to the secondary site, and when the
user creates the static PVCs on failover
we need to regenerate the omap data. The
volumeHandle in the PV spec is an encoded string
which contains the clusterID, poolID and image UUID.
The clusterID and poolID won't remain the same on both
clusters, so cephcsi needs to generate a
new volume handle and create a mapping
between the new volume handle and the old one.
With that, whenever cephcsi gets a CSI request it
checks whether the mapping exists, pulls the new
volume handle and continues with the other operations.
The new controller watches for created PVs.
It checks whether the omap exists, and if it does not,
it regenerates the entire omap data.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
In the case of a mirrored image, if the image is
primary, a watcher is added by the rbd-mirror
daemon on the rbd image.
We have to account for 2 watchers when checking
whether the image is in use.
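A minimal sketch of the adjusted check with go-ceph (rv.open() and the mirroring field are assumptions):

```go
// isInUse reports whether the RBD image has more watchers than expected.
func (rv *rbdVolume) isInUse() (bool, error) {
	image, err := rv.open() // assumed helper returning an open *librbd.Image
	if err != nil {
		return false, err
	}
	defer image.Close()

	watchers, err := image.ListWatchers()
	if err != nil {
		return false, err
	}

	// A primary mirrored image always carries one watcher from the
	// rbd-mirror daemon, so the threshold moves from 1 to 2 watchers.
	expected := 0
	if rv.mirroringPrimary { // assumed field
		expected = 1
	}
	return len(watchers) > expected, nil
}
```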
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
In the case of async mirroring, the volume UUID is
retrieved from the volume name. Instead of cephcsi
generating a new UUID, it should reserve the passed
UUID. This will be useful when we support both metro DR
and async mirroring on a Kubernetes cluster.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
An rbd image can have a maximum number of
snapshots defined by maxsnapshotsonimage.
Once that limit is reached, cephcsi
starts flattening the older snapshots and
returns an ABORT error message; the request
that comes after this has to wait until all the
images are flattened, which increases the
PVC creation time. Instead of waiting until
the maximum number of snapshots on an RBD image
is reached, we can have a soft limit: once the
soft limit is reached, cephcsi starts the flatten
tasks to break the chain. With this, PVC creation
time is only affected when the hard limit
(maxsnapshotsonimage) is reached.
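A rough sketch of the intended flow (flag and helper names are assumptions, not the exact cephcsi code):

```go
// checkSnapshotLimits kicks off flattening at the soft limit and only makes
// the caller wait (ABORT) once the hard limit is hit.
func checkSnapshotLimits(snapCount, softLimit, hardLimit int, startFlatten func() error) error {
	if snapCount >= softLimit {
		// soft limit: start flatten tasks to break the chain, but let the
		// request continue so PVC creation time is not affected.
		if err := startFlatten(); err != nil {
			return err
		}
	}
	if snapCount >= hardLimit {
		// hard limit: flattening is already in progress, ask the caller to retry.
		return status.Error(codes.Aborted, "rbd image has too many snapshots, flattening in progress")
	}
	return nil
}
```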
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The function isCloneRetryError verifies
whether the clone error is a `pending` or
`in-progress` error.
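A minimal sketch of the helper, assuming the cephfs module defines ErrClonePending and ErrCloneInProgress sentinel errors:

```go
// isCloneRetryError returns true when the clone is still pending or in
// progress, i.e. the request should be retried later.
func isCloneRetryError(err error) bool {
	return errors.Is(err, ErrClonePending) || errors.Is(err, ErrCloneInProgress)
}
```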
Co-authored-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Yug <yuggupta27@gmail.com>
In certain cases, the clone status can be 'pending'.
In that case, an abort error message should be
returned, similar to the one returned during the
'in-progress' state.
Co-authored-by: Madhu Rajanna <madhupr007@gmail.com>
Signed-off-by: Yug <yuggupta27@gmail.com>
There is a type-check on BytesQuota after calling SubVolumeInfo() to see
if the value is supported. In case no quota is configured, the value
Infinite is returned. This can not be converted to an int64, so the
original code returned an error.
It seems that attaching/mounting sometimes fails with the following
error:
FailedMount: MountVolume.MountDevice failed for volume "pvc-0e8fdd18-873b-4420-bd27-fa6c02a49496" : rpc error: code = Internal desc = subvolume csi-vol-0d68d71a-1f5f-11eb-96d2-0242ac110012 has unsupported quota: infinite
By ignoring the quota of Infinite, and not setting a quota in the
Subvolume object, this problem should not happen again.
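A hedged sketch of the adjusted type-check (go-ceph identifiers as recalled; treat the exact names as assumptions):

```go
// setSubvolumeSize records the quota as the size, but treats an Infinite
// (unset) quota as "no size" instead of an error.
func setSubvolumeSize(fsa *fsAdmin.FSAdmin, fsName, group, name string, vol *volumeOptions) error {
	info, err := fsa.SubVolumeInfo(fsName, group, name)
	if err != nil {
		return err
	}

	bc, ok := info.BytesQuota.(fsAdmin.ByteCount)
	if !ok {
		if info.BytesQuota != fsAdmin.Infinite {
			return fmt.Errorf("subvolume %s has unsupported quota: %v", name, info.BytesQuota)
		}
		return nil // no quota configured, leave the size unset
	}
	vol.Size = int64(bc)
	return nil
}
```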
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The implementation of getOMapValues assumed that the number of key-value
pairs assigned to the object would be close to the number of keys
being requested. When the number of keys on the object exceeded the
"listExcess" value the function would fail to read additional keys
even if they existed in the omap.
This change sets a large fixed "chunk size" value and keeps reading
key-value pairs as long as the callback gets called and increments
the numKeys counter.
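Roughly, the read loop becomes the following (a sketch on top of go-ceph's rados.IOContext.ListOmapValues; surrounding names are assumptions):

```go
// getOMapValues reads the wanted keys from one omap object, paging through
// the omap in large chunks until the callback stops being invoked.
func getOMapValues(ioctx *rados.IOContext, oid string, keys []string) (map[string]string, error) {
	const chunkSize int64 = 512 // large fixed chunk, not tied to len(keys)

	wanted := make(map[string]bool, len(keys))
	for _, k := range keys {
		wanted[k] = true
	}

	results := map[string]string{}
	startAfter := ""
	for {
		numKeys := 0
		err := ioctx.ListOmapValues(oid, startAfter, "", chunkSize,
			func(key string, value []byte) {
				numKeys++
				startAfter = key
				if wanted[key] {
					results[key] = string(value)
				}
			})
		if err != nil {
			return nil, err
		}
		// a short read means there are no more key-value pairs to fetch
		if numKeys < int(chunkSize) {
			break
		}
	}
	return results, nil
}
```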
Signed-off-by: John Mulligan <jmulligan@redhat.com>
When using go-ceph and the volumeOptions.Connect() call, the credentials
are not needed once the connection is established.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reduce the number of calls to the `ceph fs` executable to improve
performance of CephFS volume resizing.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This prepares resizeVolume() so that the volumeOptions.conn can be used
for connecting with go-ceph and use the connection cache.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The exported function below in credentials.go is not
called from anywhere in the repo, so remove it.
```
// NewCredentials generates new credentials when id and key
// are provided.
func NewCredentials(id, key string) (*Credentials, error) {
...
```
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The `cap` builtin function returns the capacity of a type. It is not
good practice to shadow this builtin with another variable
name, so it is removed here.
Ref# https://golang.org/pkg/builtin/#cap
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
golang-ci suddenly complains about the following issue
internal/cephfs/util.go:41:1: directive `// nolint:unparam // todo:program values has to be revisited later` is unused for linter unparam (nolintlint)
// nolint:unparam // todo:program values has to be revisited later
^
Dropping the comment completely seems to fix it. Ideally
execCommandJSON() will get removed once the migration to go-ceph is
complete.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
volJournal.Connect() stored its error in the err2 variable, however
the return used the err variable, which held the error returned by
DecomposeCSIID(). This caused the wrong error to be
parsed and propagated to the caller. From now on, we reuse the
err variable to hold and return the error of volJournal.Connect().
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Previously the purgeVolume error was ignored due to a wrong error variable
check in createVolume. With this change it checks the proper error.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The allocated, and potentially connected, volumeOptions object in
newVolumeOptionsFromVolID() is not cleaned-up in case of errors. This
could cause resource leaks.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Without connection, follow-up operations on the volumeOptions object
will cause a panic. This should fix a regression in CephFS testing.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
All the previous condition checks exit from the function, and
when we reach this block it is obvious that the error is non-nil,
so we don't need an extra check here.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
There is no need to pass all secrets on to newVolumeOptions(), it only
needs the credentials. As the caller of newVolumeOptions() already has
the credentials generated, just pass them along instead of the raw
secrets.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The credentials are not used anymore, the volume object is already
connected to the cluster when createVolume() is called.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The go-ceph CephFS Admin API needs a rados.Conn to connect the Admin
interface to the Ceph cluster. As the rados.Conn is internal to the
ClusterConnection type, add GetFSAdmin() to create a new FSAdmin
instance for a specific connection.
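A minimal sketch of the new helper (assuming ClusterConnection keeps its rados.Conn in a private conn field):

```go
// GetFSAdmin returns a go-ceph FSAdmin bound to this cluster connection.
func (cc *ClusterConnection) GetFSAdmin() (*fsAdmin.FSAdmin, error) {
	if cc.conn == nil {
		return nil, errors.New("cluster is not connected yet")
	}
	// fsAdmin is the go-ceph cephfs admin package; NewFromConn reuses the
	// already established rados connection.
	return fsAdmin.NewFromConn(cc.conn), nil
}
```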
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Add the ClusterConnection to the volumeOptions type, so that future use
of go-ceph can connect to the Ceph cluster.
Once a volumeOptions object is not needed anymore, it needs to get
destroyed to free associated resources like the ClusterConnection.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The error check condition in genVolFromID() is always false as far as
code reading/workflow goes. Removing it for the same reason.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
cephcsi uses the CLI for fetching the snapshot list as well as for checking
the snapshot namespace; replace that with go-ceph calls.
Signed-off-by: Mudit Agarwal <muagarwa@redhat.com>
Return a proper error message to the user when
the subvolume has snapshots and cannot
be removed until the snapshots on the subvolume
have been deleted.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
When passing
fuseMountOptions: debug
in the StorageClass, the mount options passed on the ceph-fuse
commandline result in "-o nonempty ,debug". The additional space before
the ",debug" causes the mount command to fail.
Fixes: 1485
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The subvolume features consist of a list of features; if it includes
"snapshot-autoprotect", a query-based approach is used to
detect the need for protect/unprotect.
If the "snapshot-autoprotect" feature is present, the UnprotectSnapshot
call should be treated as a no-op.
Signed-off-by: Yug <yuggupta27@gmail.com>
The subvolume features consist of a list of features; if it includes
"snapshot-autoprotect", a query-based approach is used to
detect the need for protect/unprotect.
If the "snapshot-autoprotect" feature is present, the ProtectSnapshot
call should be treated as a no-op.
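A minimal sketch of the feature check shared by both changes (the Features field on volumeOptions is an assumption):

```go
// supportsSnapshotAutoProtect reports whether the subvolume advertises the
// "snapshot-autoprotect" feature.
func (vo *volumeOptions) supportsSnapshotAutoProtect() bool {
	for _, feature := range vo.Features {
		if feature == "snapshot-autoprotect" {
			return true
		}
	}
	return false
}
```

When it returns true, the ProtectSnapshot and UnprotectSnapshot calls simply return without contacting the cluster.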
Signed-off-by: Yug <yuggupta27@gmail.com>
Use subvolume info to fetch the subvolume path.
If the `subvolume info` command is not available,
use the `getpath` command instead.
Signed-off-by: Yug <yuggupta27@gmail.com>
Snapshots can be retained even after subvolume deletion in
Ceph 14.2.12. Adding support for the same in ceph-csi.
Signed-off-by: Yug <yuggupta27@gmail.com>
It does not make sense to allow the creation of empty
volumes with readonly access. This commit allows the
creation of a volume with readonly capabilities
only if a content source is set for the volume.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
It does not make sense to allow the creation of empty
volumes which are going to be accessed in readonly mode.
This commit allows the creation of a volume with
readonly capabilities only if a content source is set
for the volume.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This spec adds the extra capability for the node and controller
to report the volume condition of a PV, etc.
Refer # https://github.com/ceph/ceph-csi/issues/1356
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
From Kubernetes v1.19 onwards NodeRequest gets the volume path
in StagingTargetPath instead of VolumePath; cephcsi should also
use the same.
Signed-off-by: Mudit Agarwal <muagarwa@redhat.com>
updateVolWithImageInfo() is currently executing an rbd command
to get details about an RBD image. Replaced it with the
required go-ceph functions.
Signed-off-by: Mudit Agarwal <muagarwa@redhat.com>
in case of clone failure, we need to first delete
the clone and the snapshot from which we created
the clone, then as part of cleanup we need to remove
the temporary cloned image and the temporary snapshot
created on the parent image.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
We should not return CLI errors as GRPC errors;
we need to return proper, readable error messages
to the user for better understanding and easier
debugging.
updates #1242
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
In CSI spec < 1.2.0, there was no VolumeCapability in the
expand request. However, it is available from v1.2+, which allows
us to declare the node operations based on the volume mode.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Currently the lock taken on the request name is not
released. This is causing issues
when deletion of the subvolume is requested.
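A sketch of the missing release, assuming the usual cephcsi VolumeLocks helper with TryAcquire/Release:

```go
func (cs *ControllerServer) deleteSubVolume(ctx context.Context, requestName string) error {
	if acquired := cs.VolumeLocks.TryAcquire(requestName); !acquired {
		return status.Errorf(codes.Aborted, "an operation on %q is already in progress", requestName)
	}
	// this deferred Release was missing; without it, a later DeleteVolume for
	// the same request name is always rejected as "already in progress".
	defer cs.VolumeLocks.Release(requestName)

	// ... actual subvolume deletion ...
	return nil
}
```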
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
execCommandErr returns both the error and the stderr
message. Checking strings.HasPrefix is not helpful
as the stderr will be the first string; it is better
to do a string comparison to find out whether the error
is a volume-not-found error.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Rename FatalLog to FatalLogMsg to keep the functions
in parity: a function name ends with Msg if it only does
message logging.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As we have 2 functions for logging, one for logging
with a message and another for logging with
context, rename ErrorLog to ErrorLogMsg for logging
with messages.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If the image is created without specifying image features,
the image gets a few image features by default, and deep-flatten
is one of them. If the image does not have any parent,
rbd image flattening will fail. This commit discards the
error if the image does not have any parent.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Merging https://github.com/ceph/ceph-csi/pull/1035
broke the cephcsi build. This commit fixes
the build issue.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Make sure to operate within the namespace, if any is given,
when dealing with rbd images, snapshots and their journals.
Signed-off-by: Mehdy Khoshnoody <mehdy.khoshnoody@gmail.com>
Also add functionality to generate snap from request
and to get mon and clusterID from the request
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
From the provided CSI volume ID, this populates volumeOptions and the
snapshot identifier after connecting to the snapJournal.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Compared to the previous version of the error strings, this change
depends on error codes like ENOENT, EEXIST, EINVAL, etc.
The format of the error strings changes across cluster versions,
while the returned error code should not change. This also adds extra
error strings for snapshot and clone operation outputs.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
checkContentSource() validates the data source in the request
and then populates volumeOptions and the snapshot identifier in
case of a snapshot source. If the data source is a volume, then
parentVolumeOption and pvID are populated.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
adjust createBackingVolume() to create a subvolume from snapshot
or existing subvolume by taking restore or clone operation locks
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
If we are not able to fetch the cluster ID from
the CreateSnapshot request, or we are
not able to get the monitor information from
the cluster ID, return an error instead of using
the parent image information.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Replaced command execution with the go-ceph Resize() function.
VolSize was being updated before waiting for resize() to return;
fixed it so that it is updated only after resize() succeeds.
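A minimal sketch of the ordering fix (rv.open() is an assumed helper returning an open go-ceph image handle):

```go
// resize grows the RBD image and only updates the cached size once the
// librbd call has succeeded.
func (rv *rbdVolume) resize(newSize int64) error {
	image, err := rv.open()
	if err != nil {
		return err
	}
	defer image.Close()

	if err := image.Resize(uint64(newSize)); err != nil {
		return err
	}
	// safe to record the new size only after a successful Resize()
	rv.VolSize = newSize
	return nil
}
```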
Signed-off-by: Mudit Agarwal <muagarwa@redhat.com>
While adding the context.Context to the resizeRBDimage() function, it
became a little ugly. So renaming the function to resize() and making it
a method of the rbdVolume type.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Most consumers of util.ExecCommand() need to convert the returned []byte
format of stdout and/or stderr to string. By having util.ExecCommand()
return strings instead, the code gets a little simpler.
A few commands return JSON that needs to be parsed. These commands will
be replaced by go-ceph implementations later on. For now, convert the
strings back to []byte when needed.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
All calls to util.ExecCommand() now pass the context.Context. In some
cases this is not possible or needed, and util.ExecCommand() will not
log the command.
This should make debugging easier when command executions fail.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Current versions of the mkfs.xfs binary enable reflink support by
default. This causes problems on systems where the kernel does not
support this feature. When the kernel does not support the feature, but
the filesystem has it enabled, the following error is logged in `dmesg`:
XFS: Superblock has unknown read-only compatible features (0x4) enabled
Introduce a check to see if mkfs.xfs supports the `-m reflink=` option.
In case it does, pass `-m reflink=0` while creating the filesystem.
The check is executed once during the first XFS filesystem creation. The
result of the check is cached until the nodeserver restarts.
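A hedged sketch of the cached check (the exact help-text match and the ExecCommand signature are assumptions):

```go
var xfsHasReflink = -1 // -1: unchecked, 0: unsupported, 1: supported

// xfsSupportsReflink probes mkfs.xfs once and caches the result until the
// nodeserver restarts.
func xfsSupportsReflink() bool {
	if xfsHasReflink != -1 {
		return xfsHasReflink == 1
	}
	// mkfs.xfs exits non-zero without a device, but still prints its usage,
	// which lists the "-m reflink=" option when it is supported.
	_, stderr, _ := util.ExecCommand(context.TODO(), "mkfs.xfs")
	if strings.Contains(stderr, "reflink=") {
		xfsHasReflink = 1
		return true
	}
	xfsHasReflink = 0
	return false
}
```

When it returns true, `-m reflink=0` is appended to the mkfs.xfs arguments.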
Fixes: #966
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The new rbdVolume.isInUse() method will replace the rbdStatus()
function. This removes one more rbd command execution in the
DeleteVolume path.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This change replaces the sentinel errors in rbd module with
standard errors created with errors.New().
Related: #1203
Signed-off-by: Sven Anderson <sven@redhat.com>
This change replaces the sentinel errors in cephfs module with
standard errors created with errors.New().
Related: #1203
Signed-off-by: Sven Anderson <sven@redhat.com>
The sentinel error code had additional fields in the errors that are
used nowhere. This leads to unnecessarily complicated code. This
change replaces the sentinel errors in utils with standard errors
created with errors.New() and adds a simple JoinErrors() function to
be able to combine sentinel errors from different code tiers.
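A rough sketch of what such a JoinErrors helper can look like (not the exact cephcsi code): it keeps errors.Is() working for the sentinel while still unwrapping to the detailed error.

```go
type joinedErrors struct {
	sentinel error
	detailed error
}

// JoinErrors combines a package-level sentinel with the detailed lower-tier error.
func JoinErrors(sentinel, detailed error) error {
	return joinedErrors{sentinel: sentinel, detailed: detailed}
}

func (e joinedErrors) Error() string {
	return e.sentinel.Error() + ": " + e.detailed.Error()
}

// Is lets errors.Is(err, SomeSentinel) keep matching after joining.
func (e joinedErrors) Is(target error) bool {
	return target == e.sentinel
}

// Unwrap exposes the detailed error for further errors.Is/errors.As checks.
func (e joinedErrors) Unwrap() error {
	return e.detailed
}
```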
Related: #1203
Signed-off-by: Sven Anderson <sven@redhat.com>
Take operation locks on the resources before operating
on them. This allows us to do parallel operations
for some RPC calls such as Clone and Restore of a PVC.
These operations will only be blocked if the image is
expanding, or the snapshot or RBD image is getting deleted.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As we are adding new functionalities like Create/Delete
snapshot, Clone from Snapshot and Clone from Volume,
and the current implementation only allows serial
operations for these functionalities, for some
functions we can allow parallel operations, like
Clone from Snapshot, Clone from Volume and creating
`N` snapshots on a single volume.
Delete Volume: need to ensure that there is no clone,
snapshot create or expand volume in progress.
Expand Volume: need to ensure that there is no clone,
snapshot create or cloning in progress.
Delete Snapshot: need to ensure that there is no
cloning in progress.
Restore Volume/Snapshot: need to ensure that there is
no expand or delete operation in progress.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Direct usage of numbers should be avoided.
Issue reported:
mnd: Magic number: X, in <argument> detected (gomnd)
Signed-off-by: Yug <yuggupta27@gmail.com>
Add explanation to nolint directives.
Issue reported:
whyNoLint: include an explanation for nolint directive (gocritic)
Signed-off-by: Yug <yuggupta27@gmail.com>
In Go 1.13, the fmt.Errorf function supports a new %w verb.
When this verb is present, the error returned by fmt.Errorf
will have an Unwrap method returning the argument of %w,
which must be an error. In all other ways, %w is identical to %v.
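For example, wrapping a sentinel with %w keeps it detectable:

```go
var ErrImageNotFound = errors.New("image not found") // illustrative sentinel

func openImage(name string) error {
	// %w wraps ErrImageNotFound so callers can still match it
	return fmt.Errorf("failed to open image %q: %w", name, ErrImageNotFound)
}

// errors.Is(openImage("csi-vol-x"), ErrImageNotFound) == true
```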
Updates: #1227
Signed-off-by: Yug <yuggupta27@gmail.com>
In some Ceph versions, if the subvolume is not present,
Ceph returns a "does not exist" error message, and in other
versions a "not found" error message. This commit fixes the
issue for both error checks.
Only checking for `Error ENOENT:` seems good enough for the
"does not exist" case; even if the error message changes in Ceph,
ceph-csi won't be affected.
```bash
sh-4.2# ceph version
ceph version 14.2.10 (b340acf629a010a74d90da5782a2c5fe0b54ac20) nautilus (stable)
sh-4.2# ceph fs subvolume getpath myfs csi-vol-a24a3d97-c7f4-11ea-8cfc-0242ac110012 --group_name csi
Error ENOENT: subvolume 'csi-vol-a24a3d97-c7f4-11ea-8cfc-0242ac110012' does not exist
```
```bash
sh-4.2# ceph version
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable)
sh-4.2# ceph fs subvolume getpath myfs testing --group_name=csi
Error ENOENT: Subvolume 'testing' not found
```
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If the requested volume size is not equal to the size of
the snapshot or the parent volume from which the clone
is to be created, cephcsi returns an error message.
updates #1188
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If the image in the chain is moved to trash, we
cannot get the image details. We need to return the
found depth to the caller.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
If the number of snapshots on the parent image exceeds
the maxSnapshotsOnImage count, we need to flatten
all the temporary cloned images to overcome the
krbd limit on the maximum number of snapshots on
an image.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Create the temporary snapshot on the parent image with the same
name as the temporary clone rbd image. Naming the snapshot
the same as the temporary cloned image helps to flatten the temporary
cloned images when the number of snapshots on the parent image exceeds
the configured maxSnapshotsOnImage.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Rename newVolumeOptionsFromVersion1Context to newVolumeOptionsFromMonitorList
to provide more clarity to readers of the function, and also fix the comments.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As v1.0.0 is deprecated, we need to remove the support
for it in the upcoming (v3.0.0) release. This PR
removes the support for the same.
closes #882
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added support for RBD PVC to PVC cloning. The below
commands are executed to create a PVC-PVC clone on the
RBD side.
* Check the depth (n) of the cloned image: if n >= (hard limit - 2)
or n >= (soft limit - 2), add a task to flatten the image and return
ABORT (to avoid an image leak). **Note**: this will try to flatten the
temp clone image in the chain if available
* Reserve the key and values in the omap (this helps us to
avoid a leak, as it was not reserved earlier when we returned
ABORT and the request may not come back)
* Create a snapshot of rbd image
* Clone the snapshot (temp clone)
* Delete the snapshot
* Snapshot the temp clone
* Clone the snapshot (final clone)
* Delete the snapshot
```bash
1) check the image depth of the parent image, if flattening is required
add a task to flatten the image and return ABORT to avoid a leak
(hardlimit-2 and softlimit-2 checks will be done)
2) Reserve omap keys
3) rbd snap create <RBD image for src k8s volume>@<random snap name>
4) rbd clone --rbd-default-clone-format 2 --image-feature
layering,deep-flatten <RBD image for src k8s volume>@<random snap>
<RBD image for temporary snap image>
5) rbd snap rm <RBD image for src k8s volume>@<random snap name>
6) rbd snap create <cloned RBD image created in snapshot process>@<random snap name>
7) rbd clone --rbd-default-clone-format 2 --image-feature <k8s dst vol config>
<RBD image for temporary snap image>@<random snap name> <RBD image for k8s dst vol>
8) rbd snap rm <RBD image for src k8s volume>@<random snap name>
```
* Delete the temporary clone image created as part of the clone (delete if present)
* Delete rbd image
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
During checkSnapCloneExists we check the image; if the
image is not found, we delete the snapshot
on the parent image. This PR corrects
the comparison: instead of snapshotNotFound we need
to check for the ImageNotFound error.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Updated the unit tests for the kernel check
for the deep-flatten feature, for both the supported
upstream kernel version (5.1.0+) and the RHEL
8.2 backport.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As RHEL 8.2 supports the deep-flatten
feature, add it to the list so rbd images
can be mapped on the node without flattening.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As kernel v5.1.0 supports the deep-flatten feature, lower
the required version to map rbd images which
have the deep-flatten feature.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
We need to take a lock on the parent rbd image when
we are creating a snapshot from it; if the user
tries to delete/resize the rbd image while we are
taking snapshots, we may face issues. If the volume
lock is present on the rbd image, the user can neither
resize nor delete the rbd image.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Flatten cloned images to remove the link
with the snapshot, so that the parent snapshot can
be removed from the trash.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added the maxsnapshotsonimage flag to flatten
the older rbd images in the chain to avoid an
issue in krbd. The limit exists in krbd since it
only allocates a single 4KiB page to handle all the
snapshot ids for an image.
The max limit is 510 as per
https://github.com/torvalds/linux/blob/aaa2faab4ed8e5fe0111e04d6e168c028fe2987f/drivers/block/rbd.c#L98
In cephcsi we are keeping the default at 450 to reserve 10%
headroom and avoid issues.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added a listsnapshots function for an
rbd image to list all the snapshots
created from an rbd image. This will also
list the snapshots which are in the trash.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Several places in the code compared errors directly with the go-ceph
sentinel errors. This change uses the errors.Is() function of go
1.13 instead. The err113 linter reported this issue as:
err113: do not compare errors directly, use errors.Is() instead
Signed-off-by: Sven Anderson <sven@redhat.com>
"github.com/pkg/errors" does not offer more functionlity than that we
need from the standard "errors" package. With Golang v1.13 errors can be
wrapped with `fmt.Errorf("... %w", err)`. `errors.Is()` and
`errors.As()` are available as well.
See-also: https://tip.golang.org/doc/go1.13#error_wrapping
Signed-off-by: Niels de Vos <ndevos@redhat.com>
By fixing the golangci-lint runs, this now gets reported as a problem.
Instead of addressing the complexity of the DeleteVolume() method here,
mark it as a TODO.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Currently, various calls to deleteImage do not
need the rbdStatus check; that is only required
when calling from DeleteVolume. This PR optimizes
rbd image deletion by removing the status check.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added the skipForceFlatten flag to skip
the image depth check and skip image flattening.
This will be very useful if the kernel is
not listed in cephcsi as one which supports the
deep-flatten feature.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This adds support for create/delete snapshot
and creating a new rbd image from a snapshot.
* Create a snapshot
* Create a temporary snapshot from the parent volume
* Clone a new image from the temporary snapshot with the options
--rbd-default-clone-format 2 --image-feature layering,deep-flatten
* Delete the temporary snapshot created
* Create a new snapshot from the cloned image
* Check the image chain depth: if the soft limit is reached, add a
task to flatten the cloned image and return success; if the depth
reaches the hard limit, add a task to flatten the cloned image and
return the snapshot status with ready as false
```bash
1) rbd snap create <RBD image for src k8s volume>@<random snap name>
2) rbd clone --rbd-default-clone-format 2 --image-feature
layering,deep-flatten <RBD image for src k8s volume>@<random snap>
<RBD image for temporary snap image>
3) rbd snap rm <RBD image for src k8s volume>@<random snap name>
4) rbd snap rm <RBD image for temporary snap image>@<random snap name>
5) check the depth, if the depth is greater than configured hard
limit add a task to flatten the cloned image return snapshot status
ready as false if the depth is greater than soft limit add a task
to flatten the image and return success
```
* Create a clone from snapshot
* Clone a new image from the snapshot with user-provided options
* Check the depth (n) of the cloned image: if n >= (hard limit),
add a task to flatten the image and return ABORT (to avoid an image leak)
```bash
1) rbd clone --rbd-default-clone-format 2 --image-feature
<k8s dst vol config> <RBD image for temporary snap image>@<random snap name>
<RBD image for k8s dst vol>
2) check the depth, if the depth is greater than configured hard limit
add a task to flatten the cloned image return ABORT error if the depth is
greater than soft limit add a task to flatten the image and return success
```
* Delete snapshot or pvc
* Move the temporary cloned image to the trash
* Add task to remove the image from the trash
```bash
1) rbd trash mv <cloned image>
2) ceph rbd task trash remove <cloned image>
```
With the earlier implementation, to delete the image we used to add
a task to remove the image. With the new changes this cannot be done,
as the image may contain snapshots or links, so we will be
doing the below steps to delete an image (this is
applicable for both normal and cloned images):
* Move the rbd image to the trash
* Add task to remove the image from the trash
```bash
1) rbd trash mv <image>
2) ceph rbd task trash remove <image>
```
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
We don't need to call the snapshot CLI functions
to get snapshot details, as these details are not
required with the new snapshot design.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
cephcsi needs to store and retrieve the rbd image ID
in the omap, as we need the image ID to add a
task to remove the image from the trash.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
With the snapshot and cloning implementation,
rbd images cannot be deleted with rbd
remove, nor by adding a task to delete the rbd image,
as the image might contain snapshots and clones.
We need to make use of rbd trash mv and
add a task to remove the image from the trash
once all its clone and snapshot links
are broken and there is no longer any
dependency between the parent and child images.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Add a new function called getImageID to fetch
the image ID of an image, which needs to be stored
and retrieved for the Delete operation.
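A minimal sketch using go-ceph (rv.open() and the ImageID field are assumptions):

```go
// getImageID fetches and caches the RBD image ID, which is later stored in
// the journal omap and needed when adding a trash-remove task.
func (rv *rbdVolume) getImageID() error {
	if rv.ImageID != "" {
		return nil
	}
	image, err := rv.open()
	if err != nil {
		return err
	}
	defer image.Close()

	id, err := image.GetId()
	if err != nil {
		return err
	}
	rv.ImageID = id
	return nil
}
```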
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added Hardlimit and Softlimit flags to the cephcsi
arguments. When the Softlimit is reached, cephcsi
starts a background task to flatten the rbd
image and returns success, and if the Hardlimit
is reached it starts a background task
to flatten the rbd image and returns ready
to use as false, to make sure that the image
will not be used until it is flattened.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As we are using v2 cloning, we don't need to
protect the snapshot before cloning and
unprotect it before deleting.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As we need to reuse the same code for both cephfs
and rbd, move the supported-version check function
to the util package; for better readability the
function is renamed.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
A new version of gosec insists on handling errors returned by Close():
[/go/src/github.com/ceph/ceph-csi/internal/util/pidlimit.go:44] - G307 (CWE-): Deferring unsafe method "*os.File" on type "Close" (Confidence: HIGH, Severity: MEDIUM)
> defer cgroup.Close()
[/go/src/github.com/ceph/ceph-csi/internal/util/pidlimit.go:78] - G307 (CWE-): Deferring unsafe method "*os.File" on type "Close" (Confidence: HIGH, Severity: MEDIUM)
> defer f.Close()
[/go/src/github.com/ceph/ceph-csi/internal/util/pidlimit.go:113] - G307 (CWE-): Deferring unsafe method "*os.File" on type "Close" (Confidence: HIGH, Severity: MEDIUM)
> defer f.Close()
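The pattern gosec accepts is to handle the Close() error explicitly, for example (file name and logging call are illustrative):

```go
func readPIDLimit(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer func() {
		if cerr := f.Close(); cerr != nil {
			log.Printf("failed to close %s: %v", path, cerr)
		}
	}()

	data, err := ioutil.ReadAll(f)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}
```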
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The generated ceph.conf does not need to be readable by the group, there is
only one (system) user consuming the configuration file.
This addresses the following gosec warning:
[/go/src/github.com/ceph/ceph-csi/internal/util/cephconf.go:52] - G306 (CWE-): Expect WriteFile permissions to be 0600 or less (Confidence: HIGH, Severity: MEDIUM)
> ioutil.WriteFile(CephConfigPath, cephConfig, 0640)
Signed-off-by: Niels de Vos <ndevos@redhat.com>
gosec-2.3.0 complains about the following:
[/go/src/github.com/ceph/ceph-csi/internal/util/cephcmds.go:146] - G307 (CWE-): Deferring unsafe method "*os.File" on type "Close" (Confidence: HIGH, Severity: MEDIUM)
> defer tmpFile.Close()
By logging the error from Close(), the warning is gone.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
cephcsi needs to mount the cephfs subvolume
as readonly when the PVC type is ROX, to
provide only readonly access to the users.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
With the current code base, the subvolumegroup will
be created only once, and even for a different cluster,
subvolumegroup creation is not attempted again.
Added support for creating multiple subvolumegroups by
validating subvolumegroup creation once per cluster.
Fixes: #1123
Signed-off-by: Yug Gupta <ygupta@redhat.com>
The previous function used to remove omap keys apparently did not
return errors when removing omap keys from a missing omap (oid).
Mimic that behavior when using the api.
Signed-off-by: John Mulligan <jmulligan@redhat.com>
For any function that sets more than one key on a single oid setting
them as a batch will be more efficient.
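A minimal sketch with the go-ceph rados API (the oid and key names are illustrative):

```go
// setOMapKeys stores both journal keys with a single SetOmap() call instead
// of one call per key.
func setOMapKeys(ioctx *rados.IOContext, oid, reqName, imageID string) error {
	pairs := map[string][]byte{
		"csi.volname": []byte(reqName),
		"csi.imageid": []byte(imageID),
	}
	return ioctx.SetOmap(oid, pairs)
}
```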
Signed-off-by: John Mulligan <jmulligan@redhat.com>
For any function that removes more than one key on a single oid removing
them as a batch will be more efficient.
Signed-off-by: John Mulligan <jmulligan@redhat.com>
Taking this approach means that any function that must get more than one
key's value from the same oid can be more efficient by calling out to
ceph only once.
To be cautious and avoid missing things, we always request that ceph return
more keys than we actually expect to be set on the oid. If we limited
ourselves to iterating only over the number of keys we expect to be on the
object, unexpected keys could cause us to stop early and miss the keys we
actually want.
Signed-off-by: John Mulligan <jmulligan@redhat.com>
Convert the business-logic of the journal to use the new go-ceph based
omap manipulation functions.
Signed-off-by: John Mulligan <jmulligan@redhat.com>
These new omap manipulation functions (get/set/remove) are roughly
equivalent to the previous command-line based approach but rely
on direct api calls to ceph.
Signed-off-by: John Mulligan <jmulligan@redhat.com>
These types have private fields but we need to construct them outside of
the util package. Add New* methods for both.
Signed-off-by: John Mulligan <jmulligan@redhat.com>
If the PVC access mode is ReadOnlyMany
or single-node readonly, mount the rbd
device path to the staging path as readonly
to avoid write operations.
If the PVC access mode is readonly, map the
rbd image as readonly.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
go-ceph v0.3 adds constants for ImageFeature values and their names.
Instead of hardcoding "layering" in several places, use the constant
given by librbd.
The rbdVolume.ImageFeatures does not seem to be used anywhere after the
conversion. Stashing the image metadata does include the ImageFeatures
as these are retrieved when getting the image information. It is safe to
drop ImageFeatures altogether and only use the imageFeatureSet instead.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
It seems that converting the release component from the unix.Utsrelease
type leaves some trailing "\x00" characters.
While splitting the string to compare kernel versions, these additional
characters might prevent converting the string to an int. Strip the
additional characters before returning the string.
Note:
"\x00" characters are not visible when printing to a file or screen.
They can be seen in hex-editors, or sending the output through 'xxd'.
Fixes: #1167
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Go 1.13 contains support for error wrapping. To support wrapping,
fmt.Errorf now has a %w verb for creating wrapped errors, and three
new functions in the errors package ( errors.Unwrap, errors.Is and
errors.As) simplify unwrapping and inspecting wrapped errors.
With this change, if we currently compare errors using ==, we have to
use errors.Is instead. Example:
if err == io.ErrUnexpectedEOF
becomes
if errors.Is(err, io.ErrUnexpectedEOF)
https://tip.golang.org/doc/go1.13#error_wrapping
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
IneffAssign warns about the two following statements:
Line 147: warning: ineffectual assignment to supported (ineffassign)
Line 148: warning: ineffectual assignment to ok (ineffassign)
Reported-by: https://goreportcard.com/report/github.com/ceph/ceph-csi
Updates: #975
Signed-off-by: Yug Gupta <ygupta@redhat.com>
This prevents the need to open the IOContext for additional operations
on the image.
It also addresses a leak of the IOContext in case `rbdVolume.open()` was
called. The method only returned the `rbd.Image` without the possibility
to close the related IOContext.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
There is a bug in the current code where the devicePath
is always empty and the rbd image unmap never
happens if the nodeplugin fails to mount the rbd image
to the staging path.
This fix unmaps the rbd image if some issue
occurs after the rbd image is mapped.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>