and hence stop updating the same 1.0.0 image for every PR.
At some point we will need to decide how to release 1.0.1.
Signed-off-by: ShyamsundarR <srangana@redhat.com>
Plugin images were using CentOS 7 images as the base image. This
has now been moved to the Ceph container image, which provides the
required content since version 14.2.
Fixes #344
Signed-off-by: ShyamsundarR <srangana@redhat.com>
Add rules and variables to the Makefile so that the unified binary
and container image can be built.
Signed-off-by: John Mulligan <jmulligan@redhat.com>
Based on the review comments, addressed the following:
- Moved away from having to update the pod with volumes
  when a new Ceph cluster is added for provisioning via the
  CSI driver
- The above now uses Kubernetes APIs to fetch secrets
  (see the sketch after this list)
- TBD: Need to add a watch mechanism so that these
  secrets can be cached and updated when changed
- Folded the Ceph configuration and ID/key config map
  and secrets into a single secret
- Provided the ability to read the same config via mapped
  or created files within the pod
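A minimal sketch of fetching such a secret through the Kubernetes API with client-go (assuming a recent client-go); the namespace, secret name, and the "userID"/"userKey" data keys are illustrative assumptions, not the actual ceph-csi layout:

```go
package util

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// getClusterCredentials fetches the per-cluster Ceph ID/key from a single
// Kubernetes secret instead of relying on volumes mounted into the pod.
// The secret name, namespace, and key names here are illustrative.
func getClusterCredentials(namespace, secretName string) (string, string, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return "", "", err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return "", "", err
	}
	secret, err := client.CoreV1().Secrets(namespace).Get(context.TODO(), secretName, metav1.GetOptions{})
	if err != nil {
		return "", "", err
	}
	id, ok := secret.Data["userID"]
	if !ok {
		return "", "", fmt.Errorf("secret %q has no userID", secretName)
	}
	key, ok := secret.Data["userKey"]
	if !ok {
		return "", "", fmt.Errorf("secret %q has no userKey", secretName)
	}
	return string(id), string(key), nil
}
```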
Tests:
- Ran PV creation/deletion/attach/use with the new scheme
  StorageClass
- Ran PV creation/deletion/attach/use with the older scheme
  to ensure nothing is broken
- Did not execute snapshot-related tests
Signed-off-by: ShyamsundarR <srangana@redhat.com>
This commit provides the option to pass in a Ceph cluster-id instead
of a MON list in the storage class.
This helps in moving towards a stateless CSI implementation.
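A minimal sketch of how the MON list might be resolved from the cluster-id at request time instead of being stored in the storage class; the config directory, file naming, and JSON shape are assumptions for illustration:

```go
package util

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// clusterInfo is an assumed per-cluster config record, keyed by cluster-id
// and mounted into the plugin pod (e.g. from a ConfigMap); the path and
// field names are illustrative.
type clusterInfo struct {
	Monitors []string `json:"monitors"`
}

// getMonitors resolves the MON list for the cluster-id passed in the
// StorageClass parameters, so the StorageClass itself stays free of
// cluster state.
func getMonitors(configDir, clusterID string) ([]string, error) {
	data, err := os.ReadFile(filepath.Join(configDir, clusterID+".json"))
	if err != nil {
		return nil, fmt.Errorf("no config for cluster %q: %w", clusterID, err)
	}
	var ci clusterInfo
	if err := json.Unmarshal(data, &ci); err != nil {
		return nil, err
	}
	return ci.Monitors, nil
}
```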
Tested the following:
- PV provisioning and staging using cluster-id in the storage class
- PV provisioning and staging using a MON list in the storage class
Did not test:
- snapshot operations with either form of the storage class
Signed-off-by: ShyamsundarR <srangana@redhat.com>
issue #217
Goal
We try to solve the problem that, when the CSI plugin exits unexpectedly, a pod using a CephFS PV cannot recover automatically because the mount relation is lost, until the pod is killed and rescheduled to another node. The CSI plugin can do more here: remount the old path, so that when the plugin exits and restarts, the old mount path is usable again and the pod can recover automatically.
Non-goal
The pod should exit and restart when the CSI plugin pod exits and the mount point is lost. If the pod does not exit, it will get a **transport endpoint is not connected** error.
Implementation logic
csi-plugin start:
1. Load all MountCachEntry records from the node-local dir.
2. Check whether the volID exists in the cluster; if not, ignore this entry, otherwise continue.
3. Check whether the stagingPath exists; if yes, mount the path.
4. Check whether all targetPaths exist; if yes, bind mount them to the staging path.
NodeServer:
1. NodeStageVolume: add a MountCachEntry to the local dir, including the readonly attr and the Ceph secret.
2. NodePublishVolume: add the pod bind-mount path to the MountCachEntry and persist it to the local dir.
3. NodeUnpublishVolume: remove the pod bind-mount path from the MountCachEntry and persist it to the local dir.
4. NodeUnstageVolume: remove the MountCachEntry from the local dir.
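A minimal sketch of the cache record and the restart-time remount loop described above; MountCacheEntry, the on-disk layout, and the helper functions are hypothetical names, not the actual ceph-csi code:

```go
package mountcache

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"
)

// MountCacheEntry records what NodeStageVolume/NodePublishVolume set up,
// so the plugin can re-establish the mounts after a restart.
type MountCacheEntry struct {
	VolID       string            `json:"volID"`
	StagingPath string            `json:"stagingPath"`
	TargetPaths []string          `json:"targetPaths"`
	ReadOnly    bool              `json:"readOnly"`
	Secrets     map[string]string `json:"secrets"`
}

// loadMountCache reads every cached entry persisted in the node-local dir.
func loadMountCache(dir string) ([]MountCacheEntry, error) {
	files, err := filepath.Glob(filepath.Join(dir, "*.json"))
	if err != nil {
		return nil, err
	}
	entries := make([]MountCacheEntry, 0, len(files))
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			return nil, err
		}
		var e MountCacheEntry
		if err := json.Unmarshal(data, &e); err != nil {
			log.Printf("skipping corrupt cache entry %s: %v", f, err)
			continue
		}
		entries = append(entries, e)
	}
	return entries, nil
}

// remountAll runs at plugin start: re-stage each cached volume and
// re-bind-mount its target paths. volumeExists, mountStagingPath and
// bindMount are placeholders for the real cluster check and mount calls.
func remountAll(cacheDir string) {
	entries, err := loadMountCache(cacheDir)
	if err != nil {
		log.Printf("failed to load mount cache: %v", err)
		return
	}
	for _, e := range entries {
		if !volumeExists(e.VolID) {
			continue // volume no longer exists in the cluster; ignore entry
		}
		if err := mountStagingPath(e); err != nil {
			log.Printf("restage of %s failed: %v", e.VolID, err)
			continue
		}
		for _, target := range e.TargetPaths {
			if err := bindMount(e.StagingPath, target, e.ReadOnly); err != nil {
				log.Printf("bind mount %s -> %s failed: %v", e.StagingPath, target, err)
			}
		}
	}
}

// Placeholder implementations so the sketch compiles.
func volumeExists(volID string) bool                        { return true }
func mountStagingPath(e MountCacheEntry) error              { return nil }
func bindMount(staging, target string, readOnly bool) error { return nil }
```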
As the socket directory will be created
inside the container, there is no need to include
the plugin name in the directory path; this will
also reduce the code changes needed if we want
to change the driver name.
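A minimal sketch of listening on a fixed in-container socket path that does not embed the driver name; the path is an illustrative assumption, and the CSI gRPC services are left out:

```go
package main

import (
	"log"
	"net"
	"os"
	"path/filepath"
)

func main() {
	// Fixed socket path inside the container; the directory kubelet sees is
	// wired up in the pod spec, so this code does not change with the driver name.
	endpoint := "/csi/csi.sock" // illustrative default
	if err := os.MkdirAll(filepath.Dir(endpoint), 0o755); err != nil {
		log.Fatalf("failed to create socket dir: %v", err)
	}
	_ = os.Remove(endpoint) // clean up a stale socket from a previous run
	l, err := net.Listen("unix", endpoint)
	if err != nil {
		log.Fatalf("failed to listen on %s: %v", endpoint, err)
	}
	defer l.Close()
	log.Printf("listening on %s", endpoint)
	// a gRPC server exposing the CSI services would be registered on l here
}
```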
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
This change allows the use of alternatives to or wrappers around
the normal docker command when running the deploy.sh script.
Example: CONTAINER_CMD=podman ./deploy.sh
Signed-off-by: John Mulligan <jmulligan@redhat.com>
From now on, each PR will be merged automatically if:
* there is no DNM label on the PR AND
* the PR has at least one approval AND
* the Travis CI run passed successfully
Closes: https://github.com/ceph/ceph-csi/issues/154
Signed-off-by: Sébastien Han <seb@redhat.com>
This change allows the use of alternatives to or wrappers around
the normal docker command for container builds.
Example 1: make image-rbdplugin CONTAINER_CMD=podman
Example 2: CONTAINER_CMD=podman make image-rbdplugin
Signed-off-by: John Mulligan <jmulligan@redhat.com>