
End-to-End Testing

Introduction

End-to-end (e2e) testing in cephcsi provides a mechanism to test the end-to-end behavior of the system. These tests interact with live instances of a ceph cluster, just as a user would.

The primary objectives of the e2e tests are to ensure a consistent and reliable behavior of the cephcsi code base and to catch hard-to-test bugs before users do when unit and integration tests are insufficient.

The test framework is designed to install Rook, run cephcsi tests, and uninstall Rook.

The e2e tests are built on top of Ginkgo and Gomega.

Install Kubernetes

cephcsi also provides a script for starting Kubernetes using minikube, so users can quickly spin up a Kubernetes cluster.

The following parameters are available to configure the kubernetes cluster:

flag description
up Starts a local kubernetes cluster and prepares a disk for rook
down Stops a running local kubernetes cluster
clean Deletes a local kubernetes cluster
ssh Logs into or runs a command on a minikube machine with SSH
deploy-rook Deploys rook to minikube
create-block-pool Creates a rook block pool (named $ROOK_BLOCK_POOL_NAME)
delete-block-pool Deletes a rook block pool (named $ROOK_BLOCK_POOL_NAME)
clean-rook Deletes rook from minikube
cephcsi Copies built docker images to the kubernetes cluster
k8s-sidecar Copies kubernetes sidecar docker images to the kubernetes cluster

The following environment variables can be exported to customize the kubernetes deployment:

ENV Description Default
MINIKUBE_VERSION minikube version to install latest
KUBE_VERSION kubernetes version to install latest
MEMORY Amount of RAM allocated to the minikube VM in MB 4096
VM_DRIVER VM driver to create virtual machine virtualbox
CEPHCSI_IMAGE_REPO Repo URL to pull cephcsi images quay.io/cephcsi
K8S_IMAGE_REPO Repo URL to pull kubernetes sidecar images k8s.gcr.io/sig-storage
K8S_FEATURE_GATES Feature gates to enable on kubernetes cluster BlockVolume=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true
ROOK_BLOCK_POOL_NAME Block pool name to create in the rook instance newrbdpool
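For example, these variables can be exported before invoking the script (a sketch with illustrative values; kvm2 is only an assumption about which hypervisor your host provides, and the minikube.sh call is left commented because it needs a working hypervisor):

```shell
# Illustrative values; see the table above for the defaults.
export KUBE_VERSION=latest
export VM_DRIVER=kvm2            # assumption: kvm2 is available on your host
export MEMORY=8192               # give the VM 8 GiB instead of the 4096 default
export ROOK_BLOCK_POOL_NAME=newrbdpool

# Then, from the ceph-csi root directory:
# ./scripts/minikube.sh up
```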
  • Creating a kubernetes cluster

    From the ceph-csi root directory, run:

    ./scripts/minikube.sh up
    
  • Tearing down the kubernetes cluster

    ./scripts/minikube.sh clean
    

Deploy Rook

The cephcsi E2E tests expect that rook is already running in your cluster.

The minikube script provides a handy deploy-rook option for this:

./scripts/minikube.sh deploy-rook

Test parameters

In addition to standard go tests parameters, the following custom parameters are available while running tests:

flag description
deploy-timeout Timeout to wait for created kubernetes resources (default: 10 minutes)
deploy-cephfs Deploy cephfs csi driver as part of E2E (default: true)
deploy-rbd Deploy rbd csi driver as part of E2E (default: true)
test-cephfs Test cephfs csi driver as part of E2E (default: true)
upgrade-testing Perform upgrade testing (default: false)
upgrade-version Target version for upgrade testing (default: "v3.3.1")
test-rbd Test rbd csi driver as part of E2E (default: true)
cephcsi-namespace The namespace in which cephcsi driver will be created (default: "default")
rook-namespace The namespace in which rook operator is installed (default: "rook-ceph")
kubeconfig Path to kubeconfig containing embedded authinfo (default: $HOME/.kube/config)
timeout Panic test binary after duration d (default 0, timeout disabled)
v Verbose: print additional output
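
For example, several of these flags can be combined into a single invocation (a sketch; the command is only echoed here because actually running it needs a live cluster, and the namespace values simply restate the defaults from the table above):

```shell
# Combine custom flags from the table above into one go test run.
E2E_ARGS="--deploy-timeout=10 --cephcsi-namespace=default --rook-namespace=rook-ceph"
echo "go test ./e2e/ -timeout=30m -v ${E2E_ARGS}"
```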

E2E for snapshot

Since support for snapshot/clone has been added to ceph-csi, you need to follow these steps before running e2e. Please note that snapshot operations work only if the Kubernetes version is greater than or equal to 1.17.0.

  • Delete Alpha snapshot CRD created by ceph-csi in rook.

    • Check if you have any v1alpha1 CRDs created in your Kubernetes cluster

      $ kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io -o yaml |grep v1alpha1
        - name: v1alpha1
        - v1alpha1
      $ kubectl get crd volumesnapshotcontents.snapshot.storage.k8s.io -o yaml |grep v1alpha1
        - name: v1alpha1
        - v1alpha1
      $ kubectl get crd volumesnapshots.snapshot.storage.k8s.io -o yaml |grep v1alpha1
        - name: v1alpha1
        - v1alpha1
      
    • If you have an Alpha CRD, delete it, since from Kubernetes 1.17.0+ the snapshot CRDs should be v1beta1

      ./scripts/install-snapshot.sh delete-crd
      
  • Install snapshot controller and Beta snapshot CRD

    ./scripts/install-snapshot.sh install
    

    Once you are done running e2e, perform the cleanup by running the following:

    ./scripts/install-snapshot.sh cleanup
    

Running E2E

Note: Prior to running the tests, you may need to copy the kubernetes configuration file to $HOME/.kube/config, which is required to communicate with the kubernetes cluster, or you can pass the kubeconfig flag while running the tests.
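
For example, the flag form can be used instead of copying the file (a sketch; the command is echoed rather than executed because it needs a running cluster):

```shell
# Point the e2e tests at an explicit kubeconfig instead of the default path.
KUBECONFIG_PATH="${HOME}/.kube/config"
echo "go test ./e2e/ -timeout=20m -v --kubeconfig=${KUBECONFIG_PATH}"
```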

Functional tests are run by the go test command.

go test ./e2e/ -timeout=20m -v -mod=vendor

To run specific tests, you can specify options:

go test ./e2e/ --test-cephfs=false --test-rbd=false --upgrade-testing=true

To run e2e for specific tests with make, use

make run-e2e E2E_ARGS="--test-cephfs=false --test-rbd=true --upgrade-testing=false"

You can also invoke functional tests with the make command:

make func-test TESTOPTIONS="-deploy-timeout=10 -timeout=30m -v"
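
Putting it all together, a full local run looks roughly like the following (a sketch; every step is a command taken from the sections above, printed rather than executed since each one needs the corresponding tool installed):

```shell
# Full local workflow, assembled from the commands in this document.
# printf lists the steps; run them directly on a suitably equipped host.
steps="./scripts/minikube.sh up
./scripts/minikube.sh deploy-rook
./scripts/install-snapshot.sh install
make run-e2e E2E_ARGS=\"--test-rbd=true --test-cephfs=true\"
./scripts/install-snapshot.sh cleanup
./scripts/minikube.sh clean"
printf '%s\n' "$steps"
```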