This commit reverts the changes made for the v3.3.0 release. With this
change, a canary-tagged release image and the Helm charts will get
pushed.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
BlockVolume, CSIBlockVolume (GA since k8s v1.18) and VolumeSnapshotDataSource
(GA since k8s v1.20) default to true and no longer need to be enabled
explicitly in the feature-gates settings.
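For illustration, a minimal sketch assuming the test cluster is started
through minikube; the actual gate list used by our scripts may differ:

    # before: the GA'd gates were still listed explicitly
    minikube start --feature-gates="BlockVolume=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true"
    # after: these gates default to true, so they can simply be dropped
    minikube start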
Signed-off-by: Rakshith R <rar@redhat.com>
The nestif linter reports deeply nested if statements.
This commit enables the nestif linter and sets
min-complexity to 7.
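To see locally which places the linter flags, something like the
following should work (the min-complexity threshold itself is set in
the golangci-lint configuration file):

    # run only the nestif linter against the whole tree
    golangci-lint run --disable-all --enable=nestif ./...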
Closes: #1229
Signed-off-by: Rakshith R <rar@redhat.com>
There is a new release, v4.0.0, of
https://github.com/kubernetes-csi/external-snapshotter.
Adjusting SNAPSHOT_VERSION will pull in the latest snapshot-controller
and CRDs.
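Roughly, the only change needed is the version bump in build.env; the
snapshot installation script derives the manifest URLs from it (sketch,
the exact URLs may differ):

    # build.env
    SNAPSHOT_VERSION=v4.0.0
    # install-snapshot.sh then fetches the CRDs/controller for that tag, e.g.
    kubectl create -f "https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOT_VERSION}/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml"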
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Changes:
1. Add a variable in build.env for the Rook Ceph cluster version.
2. Modify rook.sh so that it can also deploy a Ceph cluster with a
   desired version, rather than only the one which Rook installs by
   default (see the sketch below).
3. Remove code which is no longer required:
   a. Code which was added to test the snapshot feature.
   b. Code which was only required because
      https://github.com/rook/rook/pull/5925 was not fixed.
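A rough sketch of how change 2. could look, assuming a
ROOK_CEPH_CLUSTER_VERSION variable in build.env (the variable name and
the sed expression are illustrative):

    # rook.sh: override the default Ceph image before applying the manifest
    if [ -n "${ROOK_CEPH_CLUSTER_VERSION}" ]; then
        sed -i "s|image: ceph/ceph:.*|image: ceph/ceph:${ROOK_CEPH_CLUSTER_VERSION}|g" cluster-test.yaml
    fi
    kubectl_retry create -f cluster-test.yaml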
Signed-off-by: Mudit Agarwal <muagarwa@redhat.com>
It seems that recent minikube versions changed something in the
networking, and that prevents
$ ceph fs subvolumegroup create myfs testGroup
from working. Strangely RBD is not impacted. Possibly something is
confusing the CephMgr pod that handles the CephFS admin commands.
Using the "bridge" CNI seems to help, CephFS admin commands work with
this in minikube.
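In the minikube script this boils down to selecting the CNI when
starting the VM, roughly:

$ minikube start --cni=bridge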
Signed-off-by: Niels de Vos <ndevos@redhat.com>
While deploying Rook, there can be issues when the environment is not
completely settled yet. On occasion the 1st kubectl command fails with
The connection to the server ... was refused - did you specify the right host or port?
This would set the 'ret' variable to a non-zero value, before the next
retry of the kubectl command is done. If the retried kubectl command
succeeds, the 'ret' variable still contains the old non-zero value, and
kubectl_retry returns an incorrect result.
By setting the 'ret' variable to 0 before calling kubectl again, this
problem is prevented.
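A minimal sketch of the fixed loop (the real kubectl_retry does more,
such as recording the actual exit status and the command output):

    kubectl_retry() {
        local retries=0 ret=0
        while ! kubectl "${@}"; do
            ret=1
            retries=$((retries + 1))
            if [ ${retries} -ge 5 ]; then
                break
            fi
            sleep 1
            # the fix: reset ret before the next attempt, otherwise a
            # successful retry still makes kubectl_retry return the old
            # non-zero value
            ret=0
        done
        return ${ret}
    }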
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This PR makes the changes in the CSI templates and the upgrade
documentation that are required for updating the CSI sidecar images.
Signed-off-by: Mudit Agarwal <muagarwa@redhat.com>
Depending on the local changes, running 'make containerized-test' fails
with an error like:
level=error msg="Running error: gofmt: error computing diff: exec: \"diff\": executable file not found in $PATH"
Installing the diffutils package makes sure gofmt can find the 'diff'
executable.
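A minimal sketch of the fix, assuming the test container image uses a
dnf-based distribution (the 'diff' binary is shipped in the diffutils
package there):

    # in the test container image build
    dnf install -y diffutils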
Signed-off-by: Niels de Vos <ndevos@redhat.com>
It seems that the new log_errors() function does not get triggered when
the script hits `exit 1` conditions in functions. The functions should
return a non-zero value instead of exiting the whole script.
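A minimal sketch of the intended pattern, assuming the script checks
the return value of the function and calls log_errors() on failure
(function names are illustrative):

    deploy_rook() {
        kubectl_retry create -f common.yaml || return 1   # not: exit 1
    }

    deploy_rook
    ret=${?}
    if [ ${ret} -ne 0 ]; then
        log_errors
    fi
    exit ${ret}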
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Log the output of a few commands that help with troubleshooting Rook
deployment issues. This might need to be extended with more commands
later.
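Examples of the kind of commands whose output gets logged (the exact
list in the script may differ):

    kubectl -n rook-ceph get pods -o wide
    kubectl -n rook-ceph describe pods
    kubectl -n rook-ceph logs deploy/rook-ceph-operator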
Updates: #1636
Signed-off-by: Niels de Vos <ndevos@redhat.com>
An RBD image can have a maximum number of snapshots, defined by
maxsnapshotsonimage. Once that limit is reached, cephcsi starts
flattening the older snapshots and returns an ABORT error message; the
request then has to wait until all the images are flattened, which
increases the PVC creation time. Instead of waiting until the maximum
number of snapshots on an RBD image is reached, we can have a soft
limit (minsnapshotsonimage): once that limit is reached, cephcsi starts
a flattening task to break the chain. With this, PVC creation time is
only affected when the hard limit (maxsnapshotsonimage) is reached.
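Both limits are configurable on the rbd provisioner command line; a
sketch with illustrative values:

    # soft limit: start flattening snapshots in the chain in the background
    # hard limit: return ABORT and force the caller to retry later
    cephcsi --type=rbd --controllerserver=true \
        --minsnapshotsonimage=250 \
        --maxsnapshotsonimage=450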
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The GitHub style for Pull Request and Issue templates adds HTML tags
for some advanced usage. The Markdown linter should not give warnings
when these are used.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The StorageClasses that get deployed for the Kubernetes e2e external
storage tests reference a ConfigMap that contains the connection details
for the Ceph cluster. Without this ConfigMap, Ceph-CSI will not function
correctly.
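For reference, a rough sketch of creating such a ConfigMap from the
shell; the name ceph-csi-config and the config.json layout follow the
usual Ceph-CSI deployment, and the clusterID and monitor address are
placeholders:

    kubectl create configmap ceph-csi-config \
        --from-literal=config.json='[{"clusterID":"rook-ceph","monitors":["192.168.39.10:6789"]}]'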
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Currently the scripts/install-snapshot.sh script needs to be called
depending on the Kubernetes version. It would be much easier to use the
script if it were intelligent enough to decide by itself whether the
k8s snapshot-controller needs to be installed or not.
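A minimal sketch of such a check; the exact version threshold and the
install_snapshot_controller helper are illustrative:

    # take the server minor version, e.g. "19" from "v1.19.2"
    minor=$(kubectl version --short | awk -F'.' '/Server Version/ {print $2}')
    if [ "${minor}" -ge 17 ]; then
        install_snapshot_controller
    else
        echo "snapshot-controller is not needed on this Kubernetes version"
    fi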
Fixes: #1139
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Just like deploy-rook and teardown-rook, this patch adds
install-snapshotter and cleanup-snapshotter options to the minikube
script.
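Usage then looks like this (assuming the option names follow the
existing deploy-rook/teardown-rook naming):

$ ./scripts/minikube.sh install-snapshotter
$ ./scripts/minikube.sh cleanup-snapshotter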
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
When the replica count of the provisioner is >1, the added
anti-affinity rules will prevent the provisioner pods from being
scheduled on the same node. The Kubernetes scheduler will spread the
pods across nodes to improve availability during node failures.
Signed-off-by: Nico Berlee <nico.berlee@on2it.net>
Instead of using the Docker command to push the image to the minikube
VM, read the image from stdin over ssh and load it with the Docker
command that is available inside the VM.
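Roughly, the idea is the following (image name simplified; the script
derives the ssh key and address from minikube):

    docker image save cephcsi/cephcsi:canary | \
        ssh -i "$(minikube ssh-key)" docker@"$(minikube ip)" docker image load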
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Allow passing:
$ CONTAINER_CMD="sudo docker" ./scripts/minikube.sh cephcsi
or
$ CONTAINER_CMD="sudo podman" ./scripts/minikube.sh cephcsi
This is needed because the container images may only be listed by
'# sudo docker images' or '# sudo podman images' in case the Makefile
target image-cephcsi was run with sudo.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Add a way to supply the local CONTAINER_CMD of choice to minikube.sh
via an environment variable.
Note: we still use the Docker daemon environment in the minikube box;
in the future we can switch to the Podman service environment
('# minikube podman-env') if needed.
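In minikube.sh this is a simple fallback, roughly:

    # use plain docker unless the caller specified something else,
    # e.g. "sudo docker" or "sudo podman"
    CONTAINER_CMD=${CONTAINER_CMD:-"docker"}
    # later invocations use the variable instead of calling docker directly
    ${CONTAINER_CMD} images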
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Problem:
-------
$ minikube version
minikube version: v1.12.2
commit: be7c19d391302656d27f1f213657d925c4e1cfc2-dirty
$ ./scripts/minikube.sh up
installed minikube version v1.12.2 is not matching requested version latest
Here v1.12.2 is the latest version of minikube, but the script simply bails out.
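One way to handle this is to resolve "latest" to the actual release tag
before comparing versions, roughly (the MINIKUBE_VERSION variable name
is illustrative):

    if [ "${MINIKUBE_VERSION}" = "latest" ]; then
        # the GitHub "latest" URL redirects to the newest release tag
        MINIKUBE_VERSION=$(curl -sIL -o /dev/null -w '%{url_effective}' \
            https://github.com/kubernetes/minikube/releases/latest | sed 's|.*/||')
    fi
    installed=$(minikube version | awk '/minikube version/ {print $3}')
    if [ "${installed}" != "${MINIKUBE_VERSION}" ]; then
        echo "installed minikube version ${installed} is not matching requested version ${MINIKUBE_VERSION}"
    fi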
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
In this script all the functions are defined at the top, followed by
the executable commands (the entry point of the script).
Only this function is the odd one out: unlike the rest, it is defined
in the middle of the execution sequence, which hurts readability.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
We cannot depend on the master branch of external-snapshotter in
cephcsi, as the master branch can change at any time. It is better to
use released tags for our E2E.
fixes: #1416
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
There can be spurious failures in the CI when running kubectl create. On
occasion, the command returns with an error, but the api-server did
receive and process the request. This causes a 2nd create action to fail
with messages like:
cephcluster.ceph.rook.io/my-cluster created
Error from server: error when creating "/tmp/tmp.Ur1ZPG85o9/cluster-test.yaml": etcdserver: request timed out
Error from server (AlreadyExists): error when creating "/tmp/tmp.Ur1ZPG85o9/cluster-test.yaml": configmaps "rook-config-override" already exists
Error from server (AlreadyExists): error when creating "/tmp/tmp.Ur1ZPG85o9/cluster-test.yaml": cephclusters.ceph.rook.io "my-cluster" already exists
Error from server (AlreadyExists): error when creating "/tmp/tmp.Ur1ZPG85o9/cluster-test.yaml": configmaps "rook-config-override" already exists
Error from server (AlreadyExists): error when creating "/tmp/tmp.Ur1ZPG85o9/cluster-test.yaml": cephclusters.ceph.rook.io "my-cluster" already exists
Error from server (AlreadyExists): error when creating "/tmp/tmp.Ur1ZPG85o9/cluster-test.yaml": configmaps "rook-config-override" already exists
Error from server (AlreadyExists): error when creating "/tmp/tmp.Ur1ZPG85o9/cluster-test.yaml": cephclusters.ceph.rook.io "my-cluster" already exists
Error from server (AlreadyExists): error when creating "/tmp/tmp.Ur1ZPG85o9/cluster-test.yaml": configmaps "rook-config-override" already exists
Error from server (AlreadyExists): error when creating "/tmp/tmp.Ur1ZPG85o9/cluster-test.yaml": cephclusters.ceph.rook.io "my-cluster" already exists
By handling the create action differently and checking for the word
AlreadyExists in the stderr output, it is possible to detect repeated
creates that are not needed.
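A minimal sketch of the special-casing inside kubectl_retry (stderr
capturing simplified):

    stderr=$(kubectl create "${@}" 2>&1 >/dev/null)
    ret=${?}
    if [ ${ret} -ne 0 ]; then
        # if every reported error is an AlreadyExists, the earlier
        # (seemingly failed) create actually succeeded: treat as success
        if ! echo "${stderr}" | grep -v 'AlreadyExists' | grep -q 'Error'; then
            ret=0
        fi
    fi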
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Add retries to prevent the CI from failing instantly. The command
execution will now be retried up to 5 times, to avoid failures in some
runs.
Signed-off-by: Yug <yuggupta27@gmail.com>
By default the install-helm.sh script uses "latest" as the version for
Helm. Unfortunately this version does not exist. The HELM_VERSION
variable is already set in build.env, so source the configuration file
as one of the first actions in install-helm.sh.
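The fix is essentially just sourcing build.env near the top of the
script, something like:

    # scripts/install-helm.sh (sketch)
    SCRIPT_DIR="$(dirname "${0}")"
    # shellcheck disable=SC1091
    source "${SCRIPT_DIR}/../build.env"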
Signed-off-by: Niels de Vos <ndevos@redhat.com>