We make heavy use of the service that Mergify offers to Open Source
communities. It is probably nice to promote them a little in our README.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit removes the ClusterRole and ClusterRoleBinding of the CephFS node
plugin, as the node RBAC is not needed for CephFS.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Depending on the Kubernetes version, the following warning is reported
regularly:
> Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+,
> unavailable in v1.25+
The warning is written to stderr, so skipping AlreadyExists or NotFound
is not sufficient to trigger a retry. Ignoring '^Warning:' in the stderr
output should prevent unneeded failures while deploying Rook or other
components.
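For illustration, a retry helper along these lines could strip the warning
lines before inspecting stderr; the function name and the manifest file are
hypothetical, not the actual e2e code:

```go
package main

import (
	"fmt"
	"os/exec"
	"regexp"
	"strings"
)

// warningLine matches the deprecation notices that kubectl prints to stderr,
// e.g. "Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+".
var warningLine = regexp.MustCompile(`(?m)^Warning:.*$`)

// runKubectl executes kubectl and returns stdout plus the stderr output with
// all "Warning:" lines removed, so callers only see real errors.
func runKubectl(args ...string) (string, string, error) {
	cmd := exec.Command("kubectl", args...)
	var stdout, stderr strings.Builder
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()

	cleaned := strings.TrimSpace(warningLine.ReplaceAllString(stderr.String(), ""))
	return stdout.String(), cleaned, err
}

func main() {
	out, errOut, err := runKubectl("apply", "-f", "deploy.yaml")
	if err != nil && errOut != "" &&
		!strings.Contains(errOut, "AlreadyExists") &&
		!strings.Contains(errOut, "NotFound") {
		fmt.Println("kubectl failed, a retry is needed:", errOut)
		return
	}
	fmt.Print(out)
}
```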
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Rook deployments fail quite regularly in the CI environment now. It is
not clear what the cause is, hopefully a little better logging will
guide us to the issue.
Now executing `kubectl` in a sub-shell, ensuring that the redirection of
the command lands in the right files.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The project is currently at 54% of the best practices. Hopefully this
badge creates some interest in increasing the grade.
See-also: https://bestpractices.coreinfrastructure.org/projects/5940
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When running the Kubernetes cluster with one single privileged
PodSecurityPolicy which allows everything, the nodeplugin
daemonset can fail to start. To be precise, the problem is the
defaultAllowPrivilegeEscalation: false configuration in the PSP.
Containers of the nodeplugin daemonset won't start when they
have privileged: true but no allowPrivilegeEscalation in their
container securityContext.
Kubernetes will not schedule the pods if this mismatch exists and reports:
cannot set allowPrivilegeEscalation to false and privileged to true
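The fix is to set allowPrivilegeEscalation explicitly wherever privileged is
set. Expressed with the k8s.io/api/core/v1 types for illustration only (the
actual change lands in the daemonset YAML):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	privileged := true
	// A privileged container must also allow privilege escalation explicitly,
	// otherwise a PSP with defaultAllowPrivilegeEscalation: false injects
	// allowPrivilegeEscalation: false and the API server rejects the pod with
	// "cannot set `allowPrivilegeEscalation` to false and `privileged` to true".
	sc := &corev1.SecurityContext{
		Privileged:               &privileged,
		AllowPrivilegeEscalation: &privileged,
	}
	fmt.Printf("privileged=%v allowPrivilegeEscalation=%v\n",
		*sc.Privileged, *sc.AllowPrivilegeEscalation)
}
```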
Signed-off-by: Silvan Loser <silvan.loser@hotmail.ch>
Signed-off-by: Silvan Loser <33911078+losil@users.noreply.github.com>
When running the Kubernetes cluster with one single privileged
PodSecurityPolicy which allows everything, the nodeplugin
daemonset can fail to start. To be precise, the problem is the
defaultAllowPrivilegeEscalation: false configuration in the PSP.
Containers of the nodeplugin daemonset won't start when they
have privileged: true but no allowPrivilegeEscalation in their
container securityContext.
Kubernetes will not schedule the pods if this mismatch exists and reports:
cannot set allowPrivilegeEscalation to false and privileged to true
Signed-off-by: Silvan Loser <silvan.loser@hotmail.ch>
Signed-off-by: Silvan Loser <33911078+losil@users.noreply.github.com>
Updated doc for the 3.6.1 release. This will
be backported to the release-v3.6 branch, and
we will make the deployment changes and do the release.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The Ceph cluster-id is usually detected with `ceph fsid`. This is not
always correct, as the Ceph cluster can also be configured by name.
If the -clusterid=... option is passed, it will be used instead of trying to
detect it with `ceph fsid`.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
There are many locations where the cluster-id (`ceph fsid`) is obtained
from the Rook Toolbox. Instead of duplicating the code everywhere, use a
new helper function getClusterID().
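A minimal sketch of such a helper, combining the -clusterid override with the
`ceph fsid` fallback; the kubectl invocation, namespace and toolbox deployment
name are assumptions, not the exact e2e implementation:

```go
package main

import (
	"flag"
	"fmt"
	"os/exec"
	"strings"
)

// -clusterid lets the caller override the detected Ceph cluster ID.
var clusterID = flag.String("clusterid", "", "override the detected Ceph cluster-id")

// getClusterID returns the configured cluster ID, or falls back to running
// `ceph fsid` inside the Rook toolbox pod.
func getClusterID() (string, error) {
	if *clusterID != "" {
		return *clusterID, nil
	}
	out, err := exec.Command("kubectl", "-n", "rook-ceph",
		"exec", "deploy/rook-ceph-tools", "--", "ceph", "fsid").Output()
	if err != nil {
		return "", fmt.Errorf("failed to detect cluster-id with `ceph fsid`: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	flag.Parse()
	id, err := getClusterID()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("cluster-id:", id)
}
```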
Signed-off-by: Niels de Vos <ndevos@redhat.com>
A new -filesystem=... option has been added so that the e2e tests can
run against environments that do not have a "myfs" CephFS filesystem.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
StorageClasses are cluster resources, not namespaced; there is no need
to log the namespace of a StorageClass.
When creating a StorageClass, NotFound is not an error that will be
returned, so there is no need to check for it.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
On occasion the creation of the StorageClass can fail due to an
etcdserver timeout. If that happens, the creation can be attempted after
a delay.
This has already been done for CephFS StorageClasses, but was missed for
RBD.
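A hedged sketch of the retry, assuming the transient failure surfaces as an
"etcdserver" error string and using client-go directly; the helper name is
hypothetical and the real e2e code may differ:

```go
package e2e

import (
	"context"
	"fmt"
	"strings"
	"time"

	storagev1 "k8s.io/api/storage/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createStorageClassWithRetry retries the create when the apiserver reports a
// transient etcdserver timeout; AlreadyExists is treated as success.
func createStorageClassWithRetry(c kubernetes.Interface, sc *storagev1.StorageClass) error {
	return wait.PollImmediate(10*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := c.StorageV1().StorageClasses().Create(context.TODO(), sc, metav1.CreateOptions{})
		if err == nil || apierrors.IsAlreadyExists(err) {
			return true, nil
		}
		if strings.Contains(err.Error(), "etcdserver") {
			fmt.Println("transient etcdserver error, retrying:", err)
			return false, nil // retry after the poll interval
		}
		return false, err // give up on other errors
	})
}
```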
See-also: ceph/ceph-csi@8a0377ef02
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Some parts of the Context() seem to get executed, even when BeforeEach()
did a Skip() for the test. By adding a return inside the Context(), the
tests should not get executed at all.
This was noticed in a failed test, where upgrade was running, even though
the job was executed as a normal non-upgrade one.
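A rough Ginkgo sketch of the pattern; the upgradeTesting/testCephFS variables
stand in for the real e2e command-line flags:

```go
package e2e

import (
	. "github.com/onsi/ginkgo/v2"
)

// upgradeTesting and testCephFS stand in for the e2e command-line flags; the
// exact names in the real test suite may differ.
var (
	upgradeTesting bool
	testCephFS     bool
)

var _ = Describe("CephFS Upgrade Testing", func() {
	BeforeEach(func() {
		if !upgradeTesting || !testCephFS {
			Skip("Skipping CephFS Upgrade Test")
		}
	})

	Context("Cephfs Upgrade Test", func() {
		// Skip() inside BeforeEach() only prevents the It() bodies from
		// running; the function passed to Context() is still executed while
		// the spec tree is built. Returning early here avoids that too.
		if !upgradeTesting || !testCephFS {
			return
		}

		It("runs the upgrade steps", func() {
			// ... upgrade steps ...
		})
	})
})
```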
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The current version of Mergify provides a `requeue` command in addition
to `refresh`. After a CI job failed, the PR needs to be re-added to the
queue, so the `requeue` command is more appropriate.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit changes the image registry URL for the sidecars in the
deployment from `k8s.gcr.io` to `registry.k8s.io`, as
the migration is happening from the former to the latter. This commit
also corrects the e2e README for the change.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit changes the image registry URL for the sidecars in the
NFS deployment from `k8s.gcr.io` to `registry.k8s.io`, as
the migration is happening from the former to the latter.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit changes the image registry URL for the sidecars in the
RBD deployment from `k8s.gcr.io` to `registry.k8s.io`, as
the migration is happening from the former to the latter.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
This commit changes the image registry URL for the sidecars in the
CephFS deployment from `k8s.gcr.io` to `registry.k8s.io`, as
the migration is happening from the former to the latter.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
As the same host directory is not shared between
the CephFS and the RBD plugin pods, we need
to keep the netNamespaceFilePath separately
for both CephFS and RBD. The CephFS plugin will
use this path to execute `mount -t` commands.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As radosNamespace is more specific to
RBD than to the general Ceph configuration, we
introduced a new RBD section for RBD-specific
options. The radosNamespace moves to the RBD section,
while it is still kept under the
global Ceph-level configuration for backward
compatibility.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As the netNamespaceFilePath can be separate for
CephFS and RBD, add a netNamespaceFilePath option
for RBD as well. This helps keep RBD- and
CephFS-specific options separate.
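Purely illustrative (the field and JSON key names approximate the
csi-config-map layout rather than the exact ceph-csi types), the per-driver
sections could look like this:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ClusterInfo approximates one entry of the csi-config-map: driver-specific
// options live in their own cephFS/rbd sections, while radosNamespace is also
// kept at the top level for backward compatibility.
type ClusterInfo struct {
	ClusterID      string   `json:"clusterID"`
	Monitors       []string `json:"monitors"`
	RadosNamespace string   `json:"radosNamespace,omitempty"` // deprecated location
	CephFS         struct {
		NetNamespaceFilePath string `json:"netNamespaceFilePath,omitempty"`
	} `json:"cephFS,omitempty"`
	RBD struct {
		NetNamespaceFilePath string `json:"netNamespaceFilePath,omitempty"`
		RadosNamespace       string `json:"radosNamespace,omitempty"`
	} `json:"rbd,omitempty"`
}

func main() {
	ci := ClusterInfo{ClusterID: "my-cluster", Monitors: []string{"10.0.0.1:6789"}}
	ci.CephFS.NetNamespaceFilePath = "/var/lib/kubelet/plugins/cephfs.ceph.com/net"
	ci.RBD.NetNamespaceFilePath = "/var/lib/kubelet/plugins/rbd.ceph.com/net"
	ci.RBD.RadosNamespace = "tenant-a"

	out, _ := json.MarshalIndent([]ClusterInfo{ci}, "", "  ")
	fmt.Println(string(out))
}
```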
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The NFS Controller returns a non-gRPC error in case the CreateVolume
call for the CephFS volume fails. It is better to return the gRPC-error
that the CephFS Controller passed along.
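A sketch of the idea with a hypothetical helper: if the wrapped error already
carries a gRPC status, return it unchanged instead of producing a non-gRPC
error:

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// passGRPCError returns err unchanged when it already carries a gRPC status
// (for example, from the CephFS CreateVolume call), so the caller sees the
// original code instead of codes.Unknown; otherwise it wraps it as Internal.
func passGRPCError(err error) error {
	if err == nil {
		return nil
	}
	if _, ok := status.FromError(err); ok {
		return err
	}
	return status.Error(codes.Internal, err.Error())
}

func main() {
	orig := status.Error(codes.InvalidArgument, "missing volume parameters")
	fmt.Println(status.Code(passGRPCError(orig)))                       // InvalidArgument
	fmt.Println(status.Code(passGRPCError(errors.New("plain error")))) // Internal
}
```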
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Kubernetes CSI now hosts the container-image for the NFS-nodeplugin in
the k8s.gcr.io registry instead of the Microsoft registry.
See-also: kubernetes-csi/csi-driver-nfs@7b5b6f344
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The NFS-Admin API has been added to go-ceph v0.15.0. As the API can not
be tested in the go-ceph CI, it requires build-tag `ceph_ci_untested`.
This additional build-tag has been added to the `Makefile` and should be
removed when the API does not require the build-tag anymore.
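For reference, a file guarded by such a build-tag is only compiled when the
tag is passed (for example `go build -tags ceph_ci_untested ./...`); the
package shown here is illustrative:

```go
//go:build ceph_ci_untested
// +build ceph_ci_untested

// Package nfs wraps the go-ceph NFS-Admin API; this file is only part of the
// build when the ceph_ci_untested build-tag is set, which the Makefile's
// GO_TAGS now provides.
package nfs
```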
See-also: ceph/go-ceph#655
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Recent versions of Ceph allow calling the NFS-export management
functions over the go-ceph API.
This seems incompatible with older versions that have been tested with
the `ceph nfs` commands that this commit replaces.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Use Leases for leader election instead
of the deprecated ConfigMap-based leader
election.
This PR makes Leases the default leader election resource lock, see
https://github.com/kubernetes-sigs/controller-runtime/pull/1773.
The default switch from ConfigMaps to ConfigMapsLeases was done with
https://github.com/kubernetes-sigs/controller-runtime/pull/1144.
Release notes: https://github.com/kubernetes-sigs/controller-runtime/releases/tag/v0.7.0
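A sketch of selecting the Leases resource lock with controller-runtime; the
manager options and the lock name are assumptions for illustration, not the
exact ceph-csi wiring:

```go
package main

import (
	"os"

	"k8s.io/client-go/tools/leaderelection/resourcelock"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), manager.Options{
		LeaderElection:   true,
		LeaderElectionID: "ceph-csi-example-lock", // illustrative name
		// Use Lease objects for the lock instead of the deprecated
		// ConfigMap-based lock (controller-runtime's old default).
		LeaderElectionResourceLock: resourcelock.LeasesResourceLock,
	})
	if err != nil {
		os.Exit(1)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```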
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Instead of patching the PV to update both
the persistentVolumeReclaimPolicy and
the claimRef before deleting the PVC:
Patch the PV persistentVolumeReclaimPolicy to Retain,
so that the PV is retained after deleting the PVC.
Remove the claimRef on the PV after deleting
the PVC, so that the PV can be bound to a new PVC.
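A hedged client-go sketch of the two patches around the PVC deletion; the
helper name is hypothetical:

```go
package e2e

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// retainAndReleasePV first switches the reclaim policy to Retain so the PV
// survives deleting its PVC, and afterwards clears the claimRef so the
// released PV can be bound by a new PVC. Both steps use a strategic-merge patch.
func retainAndReleasePV(ctx context.Context, c kubernetes.Interface, pvName string) error {
	retain := []byte(`{"spec":{"persistentVolumeReclaimPolicy":"` +
		string(v1.PersistentVolumeReclaimRetain) + `"}}`)
	if _, err := c.CoreV1().PersistentVolumes().Patch(ctx, pvName,
		types.StrategicMergePatchType, retain, metav1.PatchOptions{}); err != nil {
		return err
	}

	// ... delete the PVC here ...

	clearClaim := []byte(`{"spec":{"claimRef":null}}`)
	_, err := c.CoreV1().PersistentVolumes().Patch(ctx, pvName,
		types.StrategicMergePatchType, clearClaim, metav1.PatchOptions{})
	return err
}
```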
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>