Due to the quotes, the job is accepting
both --deploy-sc and --deploy-secret as
a single parameter.
Remove the quotes so that they are treated
as different parameters.
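As a rough illustration (the variable and script names here are
hypothetical, not the actual job code):

  DEPLOY_ARGS="--deploy-sc --deploy-secret"
  run-e2e.sh "${DEPLOY_ARGS}"   # quoted: one argument "--deploy-sc --deploy-secret"
  run-e2e.sh ${DEPLOY_ARGS}     # unquoted: split into --deploy-sc and --deploy-secret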
Signed-off-by: Yug <yuggupta27@gmail.com>
For release 3.3, we will not be deploying the storageclass
and secret on Helm installation in the CI.
Moving forward, devel and all future releases will
have the deployment enabled by default in the CI.
Signed-off-by: Yug <yuggupta27@gmail.com>
Ceph-CSI does not support (inline) ephemeral volumes. Testing this will
continue to fail. The driver configuration cannot be used to disable
testing of this feature, so it is done by skipping the tests by pattern
matching.
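As an illustration of the skip-by-pattern approach (the exact flag and
regex used in the job may differ), the Kubernetes e2e binary accepts a
Ginkgo skip expression:

  e2e.test -ginkgo.focus='External.Storage' -ginkgo.skip='Ephemeral'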
Updates: #2017
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Kubernetes 1.22 is in the release process and can be used for testing
already. The CI jobs will be available and can be triggered by leaving a
comment in the PRs like
/test ci/centos/mini-e2e-helm/k8s-1.22
See-also: https://github.com/kubernetes/sig-release/tree/master/releases/release-1.22
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Kubernetes 1.21 is the latest stable release, and Ceph-CSI should be
tested with that. Currently 1.19 is still supported too, so we will need
to run the CI jobs with 1.19, 1.20 and 1.21.
See-also: https://kubernetes.io/releases/
Signed-off-by: Niels de Vos <ndevos@redhat.com>
To identify if a test runs on Ceph-CSI deployed
via Helm charts, pass the --helm-test parameter with
the E2E args.
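A minimal sketch of an invocation (the go test form and surrounding
options are illustrative; only --helm-test is introduced by this change):

  go test ./e2e -v -helm-test=true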
Signed-off-by: Yug <yuggupta27@gmail.com>
Since we are adding support for deployment of the storageclass and
secret via flags, use the --namespace flag before the namespace value,
so that the script can recognize when an unknown string is passed as
a namespace.
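A hypothetical before/after of the calling convention (script and
namespace names are placeholders):

  ./deploy.sh cephcsi-e2e               # before: positional, ambiguous with new flags
  ./deploy.sh --namespace cephcsi-e2e   # after: explicit --namespace flag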
Signed-off-by: Yug <yuggupta27@gmail.com>
The JSON formatting will return the name of the first pod, not all pods
in the list. By using `items[*]` the names of all pods will be
returned. This causes all logs to be fetched, instead of only the logs
from the first pod.
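Roughly the difference (the exact selector in the script may differ):

  kubectl get pods -o jsonpath='{.items[0].metadata.name}'   # first pod only
  kubectl get pods -o jsonpath='{.items[*].metadata.name}'   # all pod names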
Signed-off-by: Niels de Vos <ndevos@redhat.com>
If the pod has crashed and restarted, the current logs
are not helpful. Logging with `-p` might help us
get the logs of the previous container.
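For example (pod and container names are placeholders):

  kubectl logs -p <pod-name> -c <container-name>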
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
When the `ci/centos` branch has been merged, the container images that
are built from it may need to be rebuilt. Instead of conditionally
rebuilding images only on a change, just rebuild them always, as there
are very few changes in the CI branch anyway.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The ImageStream and BuildConfig use the Containerfile to build a
container image in the OpenShift registry so that automated image
mirroring can be configured.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This script reads images.txt and copies the images from docker.io (or
other registries) to the local registry in the CI.
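A minimal sketch of what such a loop could look like, assuming one image
reference per line in images.txt and skopeo as the copy tool (the real
script may differ; the registry name is a placeholder):

  while read -r image; do
      skopeo copy "docker://${image}" "docker://registry.ci.local/${image}"
  done < images.txt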
Signed-off-by: Niels de Vos <ndevos@redhat.com>
It seems that `/var/log/rook` inside the VM does not contain any files.
Getting the logs from the Pods through kubectl may not be as stable, but
it should get some logs when minikube is still/again available.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
More details of the Rook (and Ceph) deployment should be useful when
troubleshooting CI failures. This now includes the status of the most
important Kubernetes objects, and all the logs Ceph stores on the host.
Updates: #1969
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Without the `-w` argument, the output of `top` gets truncated, and the
commandline of the processes is not complete. It would be useful to be
able to tell which command uses 100% CPU in an output like:
17377 root 20 0 110.8m 8.2m 0.0 0.1 0:00.89 S `- containerd+
17414 167 20 0 1036.7m 59.6m 0.0 0.4 0:03.47 S `- ceph-o+
40875 root 20 0 283.9m 30.4m 100.0 0.2 0:00.23 R `- ceph
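The kind of invocation this refers to (the exact options in the job
script may differ):

  top -b -c -w 512 -n 1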
Updates: #1969
Signed-off-by: Niels de Vos <ndevos@redhat.com>
It seems that it is required to re-throw the error after a catch{..}
block. Without this, and a successful execution of system-status.sh, the
CI jobs get marked as SUCCESS, even when there was a failure.
Fixes: e36155283 "ci: run system-status.sh in case a job fails"
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Without the script on the node, it can not be executed...
Fixes: e36155283 "ci: run system-status.sh in case a job fails"
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The new `system-status.sh` script logs the status of the host and the
minikube VM. This gets executed when a CI job fails, and should aid in
troubleshooting spurious failures.
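A rough idea of the kind of commands such a script can run (illustrative
only, not the actual script contents):

  uptime
  free -m
  df -h
  minikube status
  kubectl get pods --all-namespaces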
Updates: #1969
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The e2e tests very regularly hit a timeout where the Kubernetes API
becomes unreachable for 3 minutes. Hopefully making more RAM available
to the VM helps.
Updates: #1969
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The Kubernetes e2e external storage tests from v1.21 do not work yet
with Ceph-CSI. In order to address the issues, the job is now provided
and can be run with:
/test ci/centos/k8s-e2e-external-storage/1.21
The job for v1.20 is enabled by default, and identified by the
ci/centos/k8s-e2e-external-storage/1.20 context in PRs.
Updates: #2017
Signed-off-by: Niels de Vos <ndevos@redhat.com>
k8s-e2e-external-storage fails with error
`./podman2minikube.sh: line 16: minikube: command not found`.
This commit fixes it by starting minikube before calling
./podman2minikube.sh.
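The ordering after the fix, roughly (minikube options are omitted and
the image name is a placeholder):

  minikube start
  ./podman2minikube.sh <image>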
Signed-off-by: Rakshith R <rar@redhat.com>
added code to pre-pull the required container
images to run the k8s-e2e-external-storage E2E.
fixes: #2023
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
By default a version of Helm is selected that fails to install.
Using the same version as the devel branch makes the testing
work again.
See-also: helm/helm#9617
Signed-off-by: Niels de Vos <ndevos@redhat.com>
To test a PR with Kubernetes 1.21, leave a comment in the PR like:
/test ci/centos/mini-e2e-helm/k8s-1.21
The status of the job will be recorded in the PR, but running this job
is not required (yet).
Updates: #1963
Signed-off-by: Niels de Vos <ndevos@redhat.com>
In case a job has been started without a PR (manual, or timed), the
currently checked-out branch matches the original as there are no
additional changes in the tree. There is no need to abort the jobs when
the skip-doc-change.sh script did not detect any non-doc changes, as
there are no changes at all.
Updates: #1963
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When tests are started manually (through the Jenkins webui), there is no
PR associated with the job. That means the `git_since` and `ref` are
equal. Trying to create a new branch named `ref` will not work, as the
branch was already created when cloning the repository with `git_since`.
With this change, Jenkins jobs can be started manually. This makes it
possible to run regular/nightly jobs as well.
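A minimal sketch of the kind of guard this implies, assuming the values
are available as shell variables (the actual change lives in the Jenkins
job definitions):

  # only create the branch when it differs from the base the clone started from
  if [ "${ref}" != "${git_since}" ]; then
      git checkout -b "${ref}"
  fi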
Signed-off-by: Niels de Vos <ndevos@redhat.com>
This commit enables pre-pulling of ROOK_CEPH_CLUSTER_IMAGE, similar to
mini-e2e and mini-e2e-helm, to overcome the Docker Hub pull rate limits.
Signed-off-by: Rakshith R <rar@redhat.com>
After the introduction of ROOK_CEPH_CLUSTER_IMAGE in build.env, the
additional image needs to get pulled from the CI registry mirror and
pushed into the minikube VM.
Without this addition, the Docker Hub pull limits may prevent deploying
Rook.
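Roughly what the addition amounts to, assuming build.env can be sourced
from the shell (the mirror handling and tagging in the job scripts
differ from this sketch):

  source build.env
  # pull through the CI registry mirror (exact mirror handling differs in the job)
  podman pull "${ROOK_CEPH_CLUSTER_IMAGE}"
  # make the image available inside the minikube VM
  ./podman2minikube.sh "${ROOK_CEPH_CLUSTER_IMAGE}"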
Signed-off-by: Niels de Vos <ndevos@redhat.com>
When the container image needs to be rebuilt, two parallel jobs will
both attempt that. With recent versions of Podman, this now fails.
When the image needs to be rebuilt, do so in the stage where it would
otherwise get pulled. This makes sure the image gets built only once.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
There are timeouts happening where the logs do not show sufficient
output to diagnose the issue. These timeouts suggest that something
inside the minikube VM is not running as expected. Increasing the RAM
to 12GB might help.
The bare-metal systems in the CentOS CI have a minimum of 16GB, so
running a single VM with 12GB should be possible.
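The change boils down to something like the following (value in MiB; the
exact invocation lives in the job scripts):

  minikube start --memory=12288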
See-also: https://wiki.centos.org/QaWiki/PubHardware
Updates: #1867
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Kubernetes v1.20 has been released, so let's use that for testing. Note
that v1.18 is still maintained, but our CI jobs will not consume it
anymore.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
New Kubernetes versions are now prefixed with "Kubernetes", like:
$ ./scripts/get_patch_release.py
Kubernetes v1.18.13
Kubernetes v1.17.15
Kubernetes v1.19.5
Kubernetes v1.20.0
Kubernetes v1.20.0-rc.0
v1.20.0-beta.2
v1.18.12
v1.19.4
v1.17.14
v1.20.0-beta.1
v1.20.0-beta.0
v1.20.0-alpha.3
v1.18.10
v1.17.13
The new "Kubernetes" prefix prevents the current logic to not match the
version. By splitting the returned version string on words, and
returning the last component in get_releases(), the script works as
intended again.
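The fix itself is in the Python script; as a plain-shell illustration of
the same idea (take the last whitespace-separated word of the line):

  $ echo "Kubernetes v1.20.0" | awk '{ print $NF }'
  v1.20.0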
Signed-off-by: Niels de Vos <ndevos@redhat.com>