`ControllerExpandVolume` creates the credentials from
secrets but never actually uses them for anything.
The secrets map is passed on to `NewVolumeOptionsFromVolID`
which does the same check again. This patch removes the
extraneous step.
Signed-off-by: Niraj Yadav <niryadav@redhat.com>
Using os.RemoveAll will remove everything
in the directory. After the Umount we should
be using os.Remove, which only removes the
empty directory.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
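A minimal sketch of the intended cleanup, assuming a hypothetical
unmountAndClean() helper and stagingPath argument (not the actual
node-server code):

    import (
        "os"
        "syscall"
    )

    // unmountAndClean unmounts the staging path and removes the directory
    // only when it is empty. os.Remove fails on a non-empty directory,
    // unlike os.RemoveAll which would silently delete leftover data.
    func unmountAndClean(stagingPath string) error {
        if err := syscall.Unmount(stagingPath, 0); err != nil {
            return err
        }
        return os.Remove(stagingPath)
    }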
We should not be dependent on the CO to ensure
that it will serialize the requests. Instead,
we need to have our own internal locks to ensure
that we don't do concurrent operations for the
same request.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
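A sketch of the per-ID locking pattern this refers to; the type and
method names here are illustrative, not necessarily the ones used in
the tree:

    import "sync"

    // VolumeLocks serializes operations per request ID, so the plugin
    // does not rely on the CO to avoid concurrent requests for the same
    // volume.
    type VolumeLocks struct {
        mu    sync.Mutex
        locks map[string]struct{}
    }

    func NewVolumeLocks() *VolumeLocks {
        return &VolumeLocks{locks: make(map[string]struct{})}
    }

    // TryAcquire returns false if an operation for id is already running.
    func (vl *VolumeLocks) TryAcquire(id string) bool {
        vl.mu.Lock()
        defer vl.mu.Unlock()
        if _, busy := vl.locks[id]; busy {
            return false
        }
        vl.locks[id] = struct{}{}
        return true
    }

    func (vl *VolumeLocks) Release(id string) {
        vl.mu.Lock()
        defer vl.mu.Unlock()
        delete(vl.locks, id)
    }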
During PVC-PVC clone creation, the parent of the datasource
image is flattened after checking the clone depth.
We need to account for the datasource image as well, since
we're calculating the depth from the parent image.
depthToAvoidFlatten = 3 (datasource image + temp + final clone)
Signed-off-by: Rakshith R <rar@redhat.com>
CephCSI should not flatten an image that can be mounted
for use by the user.
`checkFlatten()` was called in a recovery code flow
of a PVC restored from snapshot and was missed while
refactoring in https://github.com/ceph/ceph-csi/pull/2900
refer: #2900
Signed-off-by: Rakshith R <rar@redhat.com>
Add VolumeGroupLocks in the CSI Controller Server so that operations are
protected against concurrent requests for the same VolumeGroupSnapshot.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
`reserveSnap()` can potentially fail halfway through, in that case it
needs to undo the snapshot reservation and restore modified attributes
of the snapshot.
Fixes: #4945
Signed-off-by: Niels de Vos <ndevos@ibm.com>
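The undo pattern sketched generically (none of these names come from
the codebase):

    // reserveWithUndo runs reserve, then followUp; when followUp fails
    // the reservation is rolled back so a retry starts from a clean state.
    func reserveWithUndo(reserve, followUp, undo func() error) (err error) {
        if err = reserve(); err != nil {
            return err
        }
        defer func() {
            if err != nil {
                // best-effort rollback of the reservation and any
                // attributes that were already modified
                _ = undo()
            }
        }()
        return followUp()
    }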
Without the SnapshotGroupID in the Snapshot object, Kubernetes CSI does
not know that the Snapshot belongs to a group. In that case, it allows
the deletion of the Snapshot, which should be denied.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
When the GroupSnapGetInfo go-ceph function is supported by librbd, the
Group Controller Service and VolumeGroupSnapshot capabilities can be
exposed to the Container Orchestrator.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
When creating a Snapshot with the new NewSnapshotByID() function, the
name of the RBD-image that is created is the same as the name of the
Snapshot. The `RbdImageName` points to the name of the parent image, which
causes deleting the Snapshot to delete the parent image instead.
Correcting the `RbdImageName` and setting it to the `RbdSnapName` makes
sure that upon deletion, the Snapshot RBD-image is removed, and not the
parent image.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
The Group Controller Server may need to fetch a VolumeGroupSnapshot that
was statically provisioned. In that case, only the name of the
VolumeGroupSnapshot is known and should be resolved to an object.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
The GetVolumeGroupSnapshotByID function makes it possible to get a
VolumeGroupSnapshot object from the Manager by passing a request-id.
This makes it simple for the Group Controller Server to check if a
VolumeGroupSnapshot already exists, so it does not need to try to
re-create an existing one.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
Implement the CreateVolumeGroupSnapshot for the rbd.Manager. A Group
Controller Server can use the rbd.Manager to create VolumeGroupSnapshots
in an easy and idempotent way.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
A (CSI) VolumeGroupSnapshot object contains references to Snapshot IDs
(or CSI Snapshot handles). In order to work with a VolumeGroupSnapshot
struct, the Snapshot IDs need to be resolved into rbdSnapshot structs.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
The VolumeGroupSnapshot type will be used by the rbd.Manager to create,
inspect and delete VolumeGroupSnapshots.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
When the rbd.Manager creates a VolumeGroupSnapshot, each RBD-snapshot
that is created as part of the RBD-group needs to be cloned into its own
RBD-image that will be used as a CSI Snapshot.
The VolumeGroup.CreateSnapshots() creates the RBD-group snapshot and
returns a list of the Snapshot structs.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
The NewSnapshotByID() function makes it possible to clone a new Snapshot
from an existing RBD-image and the ID of an RBD-snapshot on that image.
This will be used by the VolumeGroupSnapshot feature, where the ID is
obtained for each RBD-snapshot on the RBD-images.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
Each object is responsible for maintaining a connection to the journal.
By sharing a single journal, cleanup of objects becomes more complex as
the journal is used in deferred functions and only the last should
destroy the journal connection resources.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
Commit 95733b3a9 introduced the `StoreGroupID()` function, but that
unfortunately set an empty key in the journal.
Passing the `csiGroupIDKey` key (with value `csi.groupid`) caused
setting `csi.csi.groupid` as a key. Reading the value back with the
right `csi.groupid` key always returned an empty value.
Fixes: 95733b3a9 "journal: add option to store the groupID"
Signed-off-by: Niels de Vos <ndevos@ibm.com>
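A toy illustration of the double-prefix mistake, assuming (as described
above) a journal setter that already prepends "csi." to the key it is
given:

    const csiGroupIDKey = "csi.groupid"

    // storeKey mimics a journal setter that prepends "csi." to its key.
    func storeKey(key, value string) string {
        return "csi." + key + "=" + value
    }

    func demo() {
        // wrong: the constant already carries the prefix, the journal
        // ends up with "csi.csi.groupid" and reads of "csi.groupid"
        // always return an empty value
        _ = storeKey(csiGroupIDKey, "group-1")

        // right: pass only the suffix so the stored key is "csi.groupid"
        _ = storeKey("groupid", "group-1")
    }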
When the image is not closed, it keeps a watch open. This prevents the
CSI Controller from deleting the Volume, as there is still a user of it.
Fixes: f9ab14e826 "rbd: check if an image is part of a group before adding it"
Signed-off-by: Niels de Vos <ndevos@ibm.com>
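A sketch of the fix's shape using the go-ceph rbd API; the surrounding
group-membership check is elided:

    import (
        "github.com/ceph/go-ceph/rados"
        librbd "github.com/ceph/go-ceph/rbd"
    )

    // inspectImage opens an RBD-image and guarantees the handle is
    // released again, so no watch is left behind that would block
    // deleting the volume later on.
    func inspectImage(ioctx *rados.IOContext, name string) error {
        img, err := librbd.OpenImage(ioctx, name, librbd.NoSnapshot)
        if err != nil {
            return err
        }
        // without this Close() the open handle keeps a watch on the image
        defer img.Close()

        // ... check group membership of the image here ...
        return nil
    }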
The address we get from Ceph
contains the IP in the format
of 10.244.0.1:0/2686266785. We
need to extract the client IP
from this address. We already
have a helper to extract it;
this change makes the helper
more generic so it can be reused
by multiple packages in the
fence controller.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
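Illustrative only (the repository already has its own helper): parsing
the client IP out of an address like 10.244.0.1:0/2686266785 can be
done as follows.

    import (
        "net"
        "strings"
    )

    // parseClientIP extracts the IP from a Ceph client address of the
    // form "ip:port/nonce", e.g. "10.244.0.1:0/2686266785".
    func parseClientIP(addr string) (string, error) {
        hostPort := strings.SplitN(addr, "/", 2)[0]
        host, _, err := net.SplitHostPort(hostPort)
        if err != nil {
            return "", err
        }
        return host, nil
    }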
Implemented GetFenceClients, which
connects to the Ceph cluster and
returns the Ceph clusterID and the
client address that is used for the
rados connection.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added GetAddrs to get the client
address of the rados connection,
which is helpful for NetworkFencing.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Register the Capability_NetworkFence_GET_CLIENTS_TO_FENCE
capability and start a NetworkFence controller
as part of the rbd nodeplugin.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This PR modifies execCryptSetupCommand so that
the process is killed in the event of a lock timeout.
This is useful in cases where the volume lock is
released but the command is still running.
Signed-off-by: Niraj Yadav <niryadav@redhat.com>
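A generic sketch of killing a child process once its time budget is
gone; this is not the actual execCryptSetupCommand, and the helper name
is made up:

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // runWithTimeout kills the spawned process when the timeout expires,
    // so the command cannot keep running after the volume lock is
    // released.
    func runWithTimeout(ctx context.Context, timeout time.Duration,
        name string, args ...string) ([]byte, error) {
        ctx, cancel := context.WithTimeout(ctx, timeout)
        defer cancel()

        out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
        if ctx.Err() == context.DeadlineExceeded {
            return out, fmt.Errorf("%q timed out and was killed: %w", name, ctx.Err())
        }
        return out, err
    }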
This commit adds the support for storing the CephFS omap data
in a namespace specified in the ceph-csi-config ConfigMap under
cephFS.radosNamespace field.
If the radosNamespace is not set, the default radosNamespace
will be used, i.e. csi.
Signed-off-by: Praveen M <m.praveen@ibm.com>
This commit adds `GetCephFSRadosNamespace` util method that returns
the `RadosNamespace` specified in ceph-csi-config ConfigMap under
cephFS.radosNamespace.
If not specified, the method returns the default RadosNamespace,
i.e. csi.
Signed-off-by: Praveen M <m.praveen@ibm.com>
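A rough sketch of the lookup and fallback, with a hypothetical config
struct modelled on the field names above:

    // cephFSConfig mirrors the cephFS section of a ceph-csi-config entry.
    type cephFSConfig struct {
        RadosNamespace string `json:"radosNamespace"`
    }

    // radosNamespaceOrDefault returns cephFS.radosNamespace, falling back
    // to the default namespace "csi" when it is not set.
    func radosNamespaceOrDefault(cfg cephFSConfig) string {
        if cfg.RadosNamespace == "" {
            return "csi"
        }
        return cfg.RadosNamespace
    }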
The rbdSnapshot/rbdImage object implements all functions for a useful
Snapshot interface. The rbd.Manager will be able to use this for
providing VolumeGroupSnapshot support.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
Prevent re-use of a destroyed connection by setting it to `nil`. This
way it is also safe to call `Destroy()` multiple times without causing a
panic.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
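A sketch of the guard, with a hypothetical wrapper type around the
rados connection:

    import "github.com/ceph/go-ceph/rados"

    type clusterConnection struct {
        conn *rados.Conn
    }

    // Destroy releases the connection and resets the pointer, so calling
    // Destroy() again, or using the connection afterwards, cannot touch
    // a connection that librados has already torn down.
    func (cc *clusterConnection) Destroy() {
        if cc.conn == nil {
            return
        }
        cc.conn.Shutdown()
        cc.conn = nil
    }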
The go-ceph rbd package provides the GroupSnapGetInfo function, but it
may return ErrUnsupported when called. Returning this error after
advertising the support for VolumeGroupSnapshot seems ugly.
In order to advertise support for VolumeGroupSnapshot,
SupportsGroupSnapGetInfo() can be used, which detects the required C
function of librbd.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
`ensureImageCleanup()` can cause a panic when an image was deleted, but
the journal still contained a reference. By opening the IOContext before
using it, an error may be returned instead of a panic when using a `nil` or
freed IOContext.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
The go-ceph rbd.GroupCreate() now returns ErrExist in case the group
that is created already exists. The previous check only ever matched
on a string comparison, which is prone to errors in case the contents
are modified by go-ceph.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
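The shape of the updated check, per the description above (names and
surrounding code simplified):

    import (
        "errors"

        "github.com/ceph/go-ceph/rados"
        librbd "github.com/ceph/go-ceph/rbd"
    )

    // createGroupIdempotent treats an already existing group as success,
    // using errors.Is on the sentinel error instead of comparing strings.
    func createGroupIdempotent(ioctx *rados.IOContext, name string) error {
        err := librbd.GroupCreate(ioctx, name)
        if err != nil && !errors.Is(err, librbd.ErrExist) {
            return err
        }
        return nil
    }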
In case of RDR with restricted access, the
ceph user will not have access to all the objects
or all the pools where the mapping exists.
This commit adds a check to continue to get
the volume if there is a permission error.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The `repairImageID()` function is useful for the `rbdSnapshot` objects
as well. Move it to the `rbdImage` struct that is the base for both
`rbdVolume` and `rbdSnapshot`.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
There is no need for the `Manager.DeleteVolumeGroup()` function as
`VolumeGroup.Delete()` should cover everything too.
By moving the `.Delete()` functionality of removing the group from the
journal to the shared `commonVolumeGroup` type, a volume group snapshot
can use it as well.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
For core K8s API objects like Pods, Nodes, etc., we
can use protobuf encoding which reduces CPU consumption
related to (de)serialization, reduces overall latency
of the API call, reduces memory footprint, reduces the
amount of work performed by the GC and results in quicker
propagation of objects to event handlers of shared informers.
Signed-off-by: Nikhil-Ladha <nikhilladha1999@gmail.com>
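One common way to opt in with client-go (a sketch, not necessarily how
this change wires it up):

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func newCoreClient() (*kubernetes.Clientset, error) {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            return nil, err
        }
        // request protobuf encoding; this only works for built-in API
        // types, custom resources still need JSON
        cfg.ContentType = "application/vnd.kubernetes.protobuf"
        cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"

        return kubernetes.NewForConfig(cfg)
    }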
When `.Destroy()` is called on an rbdImage (or rbdVolume or
rbdSnapshot), the IOContext, Connection and other attributes are
invalid. When using a destroyed resource that points to an object that
was allocated through librbd, the process most likely ends with a panic.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
This commit adds a gRPC middleware that logs calls that
keep running after their deadline.
Adds --logslowopinterval cmdline argument to pass the log rate.
Signed-off-by: Robert Vasek <robert.vasek@clyso.com>
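A sketch of one possible interceptor, not the actual middleware added
here:

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
    )

    // slowOpInterceptor periodically logs calls that are still running
    // after their context deadline has passed.
    func slowOpInterceptor(interval time.Duration) grpc.UnaryServerInterceptor {
        return func(ctx context.Context, req interface{},
            info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
            if deadline, ok := ctx.Deadline(); ok {
                done := make(chan struct{})
                defer close(done)
                go func() {
                    ticker := time.NewTicker(interval)
                    defer ticker.Stop()
                    for {
                        select {
                        case <-done:
                            return
                        case now := <-ticker.C:
                            if now.After(deadline) {
                                log.Printf("%s still running %v past its deadline",
                                    info.FullMethod, now.Sub(deadline))
                            }
                        }
                    }
                }()
            }
            return handler(ctx, req)
        }
    }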
When an `rbdVolume` or `rbdSnapshot` is not connected with credentials
to the Ceph cluster, operations may try to get the IOContext which then
causes a panic.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
A function called `setImageOptions()` is expected to set the passed
options on the volume. However, the passed options parameter is only
filled with the options that should get set on the RBD-image at the time
of creation.
The naming of the function and its parameter is confusing. Rename the
function to `constructImageOptions()` and return the ImageOptions to
make it easier to understand.
Signed-off-by: Niels de Vos <ndevos@ibm.com>
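A rough sketch of the renamed shape, with simplified parameters (the
real function takes more settings):

    import librbd "github.com/ceph/go-ceph/rbd"

    // constructImageOptions returns the librbd ImageOptions to use when
    // the RBD-image is created, instead of mutating a caller supplied
    // parameter. The caller is expected to Destroy() the returned options.
    func constructImageOptions(order, features uint64) (*librbd.ImageOptions, error) {
        opts := librbd.NewRbdImageOptions()
        if err := opts.SetUint64(librbd.RbdImageOptionOrder, order); err != nil {
            return nil, err
        }
        if err := opts.SetUint64(librbd.RbdImageOptionFeatures, features); err != nil {
            return nil, err
        }
        return opts, nil
    }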