This commit adds a gRPC middleware that logs calls that
keep running after their deadline.
Adds the --logslowopinterval command-line argument to configure the log rate.
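A minimal sketch of what such an interceptor could look like; the ticker-driven loop and log wording are assumptions, not the exact ceph-csi implementation:

```go
package middleware

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
)

// logSlowCalls returns a unary interceptor that periodically logs calls
// still running after their deadline has passed. The interval would come
// from the --logslowopinterval argument.
func logSlowCalls(interval time.Duration) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo,
		handler grpc.UnaryHandler) (interface{}, error) {
		done := make(chan struct{})
		if deadline, ok := ctx.Deadline(); ok {
			go func() {
				ticker := time.NewTicker(interval)
				defer ticker.Stop()
				for {
					select {
					case <-done:
						return
					case <-ticker.C:
						if time.Now().After(deadline) {
							log.Printf("gRPC call %s is still running past its deadline", info.FullMethod)
						}
					}
				}
			}()
		}
		resp, err := handler(ctx, req)
		close(done)
		return resp, err
	}
}
```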
Signed-off-by: Robert Vasek <robert.vasek@clyso.com>
There has been some confusion about using different variables for the
InstanceID of the RBD-driver. By removing the global variable
CSIInstanceID, there should no longer be any confusion about which variable
to use.
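As an illustration of the intended pattern (names other than CSIInstanceID are made up for this sketch), the ID becomes a field that is passed around explicitly instead of a package-level variable:

```go
package rbd

// Driver holds per-instance configuration. The instanceID field replaces
// what the global CSIInstanceID used to provide; other fields are omitted.
type Driver struct {
	instanceID string
}

// NewDriver stores the instance ID once, so every consumer reads the same value.
func NewDriver(instanceID string) *Driver {
	return &Driver{instanceID: instanceID}
}

// InstanceID is the single accessor callers are expected to use.
func (d *Driver) InstanceID() string {
	return d.instanceID
}
```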
Signed-off-by: Niels de Vos <ndevos@ibm.com>
golangci-lint reports these:
The copy of the 'for' variable "kmsID" can be deleted (Go 1.22+)
(copyloopvar)
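For context, this is the Go 1.22+ pattern the linter is pointing at, sketched with an illustrative parallel subtest:

```go
package kms

import "testing"

var kmsIDs = []string{"vault", "secrets-metadata"} // illustrative IDs

// Before Go 1.22 the loop variable had to be copied before being captured:
//
//	for _, kmsID := range kmsIDs {
//		kmsID := kmsID // copyloopvar: this copy can be deleted
//		t.Run(kmsID, func(t *testing.T) { ... })
//	}
//
// With Go 1.22+ every iteration gets its own kmsID, so the copy is dropped:
func TestKMS(t *testing.T) {
	for _, kmsID := range kmsIDs {
		t.Run(kmsID, func(t *testing.T) {
			t.Parallel()
			_ = kmsID // the per-iteration variable is captured safely
		})
	}
}
```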
Signed-off-by: Niels de Vos <ndevos@ibm.com>
This commit uses the CRUSH location labels from the node labels to supply
the `crush_location` and `read_from_replica=localize` options during mount.
With these options, CephFS can redirect reads to the closest OSD,
improving performance.
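A hedged sketch of how the options could be assembled from node labels; the label prefix and the `type:value|type:value` separator syntax are assumptions:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// buildCrushLocation turns topology-style node labels into a crush_location
// mount option value. The label prefix below is an assumption.
func buildCrushLocation(nodeLabels map[string]string) string {
	const prefix = "topology.kubernetes.io/"
	parts := []string{}
	for key, value := range nodeLabels {
		if strings.HasPrefix(key, prefix) {
			crushType := strings.TrimPrefix(key, prefix) // e.g. "zone", "region"
			parts = append(parts, crushType+":"+value)
		}
	}
	sort.Strings(parts) // deterministic option string
	return "crush_location=" + strings.Join(parts, "|")
}

func main() {
	labels := map[string]string{
		"topology.kubernetes.io/region": "region-1",
		"topology.kubernetes.io/zone":   "zone-a",
	}
	mountOptions := []string{buildCrushLocation(labels), "read_from_replica=localize"}
	fmt.Println(strings.Join(mountOptions, ",")) // appended to the CephFS mount options
}
```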
Signed-off-by: Praveen M <m.praveen@ibm.com>
The `clients` parameter in the StorageClass limits access to the export to
the specified set of hostnames, networks or IP addresses.
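As a sketch, the parameter could be read from the StorageClass parameters as a comma-separated list before being applied to the export (the parameter handling shown here is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// parseClients splits the optional "clients" parameter into the hostnames,
// networks or IP addresses that are allowed to access the export.
func parseClients(parameters map[string]string) []string {
	raw := parameters["clients"]
	if raw == "" {
		return nil // no restriction requested
	}
	var clients []string
	for _, c := range strings.Split(raw, ",") {
		if c = strings.TrimSpace(c); c != "" {
			clients = append(clients, c)
		}
	}
	return clients
}

func main() {
	params := map[string]string{"clients": "192.168.10.0/24, nfs-client.example.com"}
	fmt.Println(parseClients(params)) // [192.168.10.0/24 nfs-client.example.com]
}
```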
Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
CephNFS can enable different security flavours for exported volumes.
This can be configured in the optional `secTypes` parameter in the
StorageClass.
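A sketch of validating the parameter; the accepted flavour names listed here (sys, krb5, krb5i, krb5p) are an assumption and not necessarily the exact set ceph-csi checks:

```go
package main

import (
	"fmt"
	"strings"
)

// validSecTypes is an assumed list of NFS security flavours.
var validSecTypes = map[string]bool{
	"sys": true, "krb5": true, "krb5i": true, "krb5p": true,
}

// parseSecTypes splits and validates the optional secTypes parameter.
func parseSecTypes(parameters map[string]string) ([]string, error) {
	raw := parameters["secTypes"]
	if raw == "" {
		return nil, nil // optional, keep the NFS-cluster default
	}
	secTypes := strings.Split(raw, ",")
	for i, st := range secTypes {
		secTypes[i] = strings.TrimSpace(st)
		if !validSecTypes[secTypes[i]] {
			return nil, fmt.Errorf("unsupported security flavour %q", st)
		}
	}
	return secTypes, nil
}

func main() {
	fmt.Println(parseSecTypes(map[string]string{"secTypes": "krb5i,sys"}))
}
```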
Signed-off-by: Niels de Vos <ndevos@redhat.com>
CephFS does not have a concept of "free inodes"; inodes are allocated
on-demand in the filesystem.
This confuses alerting managers that expect a (high) number of free
inodes, and warnings get produced if the number of free inodes is not
high enough. This causes alerts to always get reported for CephFS.
To prevent the false-positive alerts from happening, the
NodeGetVolumeStats procedure for CephFS (and CephNFS) will not contain
inodes in the reply anymore.
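A simplified sketch of the resulting reply: only a BYTES usage entry is returned and no INODES entry is added (the statfs wiring and the Used calculation are simplified assumptions):

```go
package nodeserver

import (
	"golang.org/x/sys/unix"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// bytesOnlyVolumeStats builds a NodeGetVolumeStats reply for CephFS/CephNFS
// that intentionally omits inode usage, because CephFS allocates inodes
// on-demand and a "free inodes" number would be misleading.
func bytesOnlyVolumeStats(targetPath string) (*csi.NodeGetVolumeStatsResponse, error) {
	var st unix.Statfs_t
	if err := unix.Statfs(targetPath, &st); err != nil {
		return nil, err
	}
	total := int64(st.Blocks) * int64(st.Bsize)
	available := int64(st.Bavail) * int64(st.Bsize)

	return &csi.NodeGetVolumeStatsResponse{
		Usage: []*csi.VolumeUsage{
			{
				Unit:      csi.VolumeUsage_BYTES,
				Total:     total,
				Available: available,
				Used:      total - available,
			},
			// no csi.VolumeUsage_INODES entry on purpose
		},
	}, nil
}
```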
See-also: https://bugzilla.redhat.com/2128263
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit adds an NFS nodeserver capable of mounting NFS volumes, even
with pod networking, using an nsenter-based design similar to the RBD and
CephFS drivers.
NodePublish, NodeUnpublish, NodeGetVolumeStats
and NodeGetCapabilities have been implemented.
The nodeserver implementation is inspired by
https://github.com/kubernetes-csi/csi-driver-nfs,
which was previously used to mount the NFS volumes exported by cephcsi.
The current implementation is also backward compatible with previously
created PVCs.
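A heavily reduced sketch of the NodePublishVolume flow; the volume-context keys, the mounter wiring and the omitted nsenter handling are assumptions:

```go
package nodeserver

import (
	"context"
	"fmt"
	"os"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	mount "k8s.io/mount-utils"
)

// NodeServer is a trimmed-down stand-in for the NFS node server.
type NodeServer struct {
	mounter mount.Interface // in the real driver this may run through nsenter
}

// NodePublishVolume mounts the exported NFS share at the target path.
func (ns *NodeServer) NodePublishVolume(ctx context.Context,
	req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
	volCtx := req.GetVolumeContext()
	server, share := volCtx["server"], volCtx["share"] // keys assumed, csi-driver-nfs style
	if server == "" || share == "" {
		return nil, status.Error(codes.InvalidArgument, "missing server or share in volume context")
	}

	target := req.GetTargetPath()
	if err := os.MkdirAll(target, 0o750); err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}

	source := fmt.Sprintf("%s:%s", server, share)
	opts := req.GetVolumeCapability().GetMount().GetMountFlags()
	if err := ns.mounter.Mount(source, target, "nfs", opts); err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}

	return &csi.NodePublishVolumeResponse{}, nil
}
```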
Signed-off-by: Rakshith R <rar@redhat.com>
This commit adds support for pvc-pvc clone.
Only the capability needs to be advertised; the underlying support is
already provided by the CephFS backend.
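In practice this amounts to one more entry in the advertised controller capabilities; a sketch with otherwise illustrative wiring:

```go
package controller

import "github.com/container-storage-interface/spec/lib/go/csi"

// controllerCapabilities lists the advertised controller RPCs; adding
// CLONE_VOLUME is what enables pvc-pvc clone, the other entries are illustrative.
var controllerCapabilities = []csi.ControllerServiceCapability_RPC_Type{
	csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME,
	csi.ControllerServiceCapability_RPC_EXPAND_VOLUME,
	csi.ControllerServiceCapability_RPC_CLONE_VOLUME, // new
}

// capabilityList converts the RPC types for a ControllerGetCapabilities reply.
func capabilityList() []*csi.ControllerServiceCapability {
	caps := make([]*csi.ControllerServiceCapability, 0, len(controllerCapabilities))
	for _, c := range controllerCapabilities {
		caps = append(caps, &csi.ControllerServiceCapability{
			Type: &csi.ControllerServiceCapability_Rpc{
				Rpc: &csi.ControllerServiceCapability_RPC{Type: c},
			},
		})
	}
	return caps
}
```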
Signed-off-by: Rakshith R <rar@redhat.com>
There is not much the NFS-provisioner needs to do to expand a volume;
everything is handled by the CephFS components.
NFS does not need a resize on the node, so only ControllerExpandVolume
is required.
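A sketch of the controller-side expansion; resizeBackendVolume is a made-up stand-in for the delegation to the CephFS components, and the point is that NodeExpansionRequired stays false:

```go
package controller

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Server is a trimmed-down stand-in for the NFS controller server.
type Server struct{}

// resizeBackendVolume stands in for growing the backing CephFS subvolume.
func resizeBackendVolume(ctx context.Context, volumeID string, sizeBytes int64) error {
	return nil // assumption: handled entirely by the CephFS components
}

func (cs *Server) ControllerExpandVolume(ctx context.Context,
	req *csi.ControllerExpandVolumeRequest) (*csi.ControllerExpandVolumeResponse, error) {
	size := req.GetCapacityRange().GetRequiredBytes()
	if err := resizeBackendVolume(ctx, req.GetVolumeId(), size); err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}

	return &csi.ControllerExpandVolumeResponse{
		CapacityBytes:         size,
		NodeExpansionRequired: false, // NFS clients see the new size, no NodeExpandVolume needed
	}, nil
}
```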
Signed-off-by: Niels de Vos <ndevos@redhat.com>
In case the NFS-export has already been removed from the NFS-server, but
the CSI Controller was restarted, a retry to remove the NFS-volume will
fail with an error like:
> GRPC error: ....: response status not empty: "Export does not exist"
When this error is reported, assume the NFS-export was already removed
from the NFS-server configuration, and continue with deleting the
backend volume.
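A sketch of the error handling; the string match mirrors the error quoted above, while the function shapes are illustrative:

```go
package controller

import "strings"

// isExportAlreadyDeleted reports whether the removal failed only because the
// NFS-server no longer knows the export.
func isExportAlreadyDeleted(err error) bool {
	return err != nil && strings.Contains(err.Error(), "Export does not exist")
}

// deleteVolume treats a missing export as already cleaned up and continues
// with removing the backend CephFS volume.
func deleteVolume(unexport, deleteBackendVolume func() error) error {
	if err := unexport(); err != nil && !isExportAlreadyDeleted(err) {
		return err
	}
	return deleteBackendVolume()
}
```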
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The NFS Controller returns a non-gRPC error in case the CreateVolume
call for the CephFS volume fails. It is better to return the gRPC-error
that the CephFS Controller passed along.
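A sketch of the distinction: an error that already carries a gRPC status is returned unchanged, and only plain errors get wrapped (illustrative, not the exact ceph-csi code):

```go
package controller

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// passThroughError keeps the gRPC code and message from the CephFS Controller
// when present, instead of flattening everything into a new error.
func passThroughError(err error) error {
	if err == nil {
		return nil
	}
	if _, ok := status.FromError(err); ok {
		return err // already a gRPC error, return it as-is
	}
	return status.Error(codes.Internal, fmt.Sprintf("creating the CephFS volume failed: %v", err))
}
```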
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Recent versions of Ceph allow calling the NFS-export management
functions over the go-ceph API.
This may not be compatible with older Ceph versions, which were tested with
the `ceph nfs` commands that this commit replaces.
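A hedged sketch of the go-ceph based path; the package path, the CephFSExportSpec field names and the RemoveExport signature are assumptions about the go-ceph NFS admin API rather than a copy of the real code:

```go
package controller

import (
	"github.com/ceph/go-ceph/common/admin/nfs"
	"github.com/ceph/go-ceph/rados"
)

// createExport sketches creating an NFS-export through go-ceph instead of
// shelling out to `ceph nfs export ...`.
func createExport(conn *rados.Conn, clusterID, fsName, pseudoPath string) error {
	nfsAdmin := nfs.NewFromConn(conn)
	_, err := nfsAdmin.CreateCephFSExport(nfs.CephFSExportSpec{
		FileSystemName: fsName,
		ClusterID:      clusterID,
		PseudoPath:     pseudoPath,
	})
	return err
}

// deleteExport sketches the matching removal call.
func deleteExport(conn *rados.Conn, clusterID, pseudoPath string) error {
	return nfs.NewFromConn(conn).RemoveExport(clusterID, pseudoPath)
}
```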
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The `ceph nfs export ...` commands have changed in recent Ceph releases.
Use the most recent command by default, and fall back to the older command
when an error is reported.
This should make the NFS-provisioner work on any current Ceph version.
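The fallback itself is a small pattern; a sketch that leaves the actual command strings abstract, since the exact `ceph nfs export` syntax differs between releases:

```go
package controller

// runCephCommand stands in for sending a `ceph nfs export ...` command to the
// cluster; the real code goes through the mgr/mon command interface.
type runCephCommand func(args []string) error

// createExportWithFallback tries the most recent command syntax first and
// retries with the older syntax when the cluster rejects it.
func createExportWithFallback(run runCephCommand, newArgs, oldArgs []string) error {
	if err := run(newArgs); err != nil {
		// older Ceph releases do not understand the new syntax
		return run(oldArgs)
	}
	return nil
}
```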
Signed-off-by: Niels de Vos <ndevos@redhat.com>
NFSVolume instances are short-lived; they only exist for the duration of a
single gRPC procedure. It is easier to store the calling Context in the
NFSVolume struct than to pass it to each of the functions that require it.
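A sketch of the pattern; apart from the stored Context the fields and methods are illustrative:

```go
package controller

import "context"

// NFSVolume bundles the state for a single gRPC procedure. Because an
// instance never outlives the request, storing the calling Context here is
// acceptable despite the usual "do not store Contexts in structs" advice.
type NFSVolume struct {
	ctx      context.Context
	volumeID string // illustrative field
}

func NewNFSVolume(ctx context.Context, volumeID string) *NFSVolume {
	return &NFSVolume{ctx: ctx, volumeID: volumeID}
}

// connect shows a method that needs the Context without it being passed in.
func (nv *NFSVolume) connect() error {
	_ = nv.ctx // used for the Ceph calls made on behalf of this request
	return nil
}
```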
Signed-off-by: Niels de Vos <ndevos@redhat.com>
These NFS Controller and Identity servers are the base for the new
provisioner. The functionality is currently extremely limited; follow-up
PRs will implement various CSI procedures.
CreateVolume is implemented with the bare minimum. This makes it
possible to create a volume, and mount it with the
kubernetes-csi/csi-driver-nfs NodePlugin.
DeleteVolume unexports the volume from the Ceph managed NFS-Ganesha
service. In case the Ceph cluster provides multiple NFS-Ganesha
deployments, things might not work as expected. This is going to be
addressed in follow-up improvements.
Lots of TODO comments need to be resolved before this can be declared
"production ready". Unit- and e2e-tests are missing as well.
Signed-off-by: Niels de Vos <ndevos@redhat.com>