Docker Hub offers a way to pull official images without any project
prefix, like "docker.io/vault:latest". This triggers a redirect to the
images located under "docker.io/library".
By using the fully qualified image name, the redirect is avoided while
pulling the images. This reduces the likelihood of hitting Docker Hub
pull rate-limits.
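As a sketch, using the "vault" image from this message, the difference
looks like:
```bash
# Unqualified/short name: Docker Hub redirects this to docker.io/library/vault
docker pull vault:latest

# Fully qualified name: no redirect involved when pulling
docker pull docker.io/library/vault:latest
```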
Signed-off-by: Niels de Vos <ndevos@redhat.com>
(cherry picked from commit 1f18e876f0)
Images that have an unqualified name (no explicit registry) come from
Docker Hub. This can be made explicit by adding docker.io as a prefix.
In addition, the default :latest tag has been added too.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
(cherry picked from commit eaeee8ac3d)
The BlockVolume PVC tests consume the example files that refer to
"centos:latest" without a registry. This means that the images will get
pulled from Docker Hub, which has rate limits that prevent CI jobs from
pulling images.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
(cherry picked from commit 4fd0294eb7)
This allows the image to be reused instead of pulling
it again.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
(cherry picked from commit 7d229c2369)
On occasion, deploying Vault fails. It seems the vault-init-job batch job
does not use a fully qualified image name for the "vault" container.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
(cherry picked from commit 1845f2b77d)
The intention here is to keep the CephFS example YAMLs at the
recommended access mode for CephFS, which is RWX instead of RWO.
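A minimal sketch of such an example PVC (the claim and class names are
illustrative, not taken from this change):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs-sc
EOF
```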
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Currently, the script does not deploy the driver on its own,
as the Vault creation needs to be done prior to that.
The script now includes the Vault creation so that
one script is sufficient to deploy the rbd driver.
Signed-off-by: Yug <yuggupta27@gmail.com>
Added support for RBD PVC to PVC cloning. The commands below
are executed to create a PVC-PVC clone from
the RBD side.
* Check the depth (n) of the cloned image; if n >= (hard limit - 2)
or n >= (soft limit - 2), add a task to flatten the image and return
ABORT (to avoid an image leak). **Note** this will try to flatten the
temp clone image in the chain if available.
* Reserve the key and values in omap (this will help us to
avoid the leak, as nothing was reserved earlier because we returned
ABORT and the request may not come back)
* Create a snapshot of rbd image
* Clone the snapshot (temp clone)
* Delete the snapshot
* Snapshot the temp clone
* Clone the snapshot (final clone)
* Delete the snapshot
```bash
1) Check the image depth of the parent image; if flattening is required,
   add a task to flatten the image and return ABORT to avoid a leak
   (the hardlimit-2 and softlimit-2 checks are done here)
2) Reserve omap keys
3) rbd snap create <RBD image for src k8s volume>@<random snap name>
4) rbd clone --rbd-default-clone-format 2 --image-feature
   layering,deep-flatten <RBD image for src k8s volume>@<random snap>
   <RBD image for temporary snap image>
5) rbd snap rm <RBD image for src k8s volume>@<random snap name>
6) rbd snap create <RBD image for temporary snap image>@<random snap name>
7) rbd clone --rbd-default-clone-format 2 --image-feature <k8s dst vol config>
   <RBD image for temporary snap image>@<random snap name> <RBD image for k8s dst vol>
8) rbd snap rm <RBD image for temporary snap image>@<random snap name>
```
* Delete the temporary clone image created as part of the clone (delete if present)
* Delete the rbd image
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Updated storageclass and snapshotclass
to include the name prefix for naming
subvolumes and snapshots.
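A rough sketch, assuming the volumeNamePrefix and snapshotNamePrefix
parameter names introduced by this change (other values are
placeholders):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  fsName: <filesystem-name>
  # prefix used when naming CephFS subvolumes
  volumeNamePrefix: "myprefix-vol-"
EOF
cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-cephfs-snapclass
driver: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  # prefix used when naming snapshots
  snapshotNamePrefix: "myprefix-snap-"
deletionPolicy: Delete
EOF
```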
Fixes: #1087
Signed-off-by: Yug Gupta <ygupta@redhat.com>
The name of the CephFS SubvolumeGroup for the CSI volumes was hardcoded
to "csi". To make permission management in multi-tenancy environments
easier, this commit makes it possible to configure the CSI
SubvolumeGroup.
related to #798 and #931
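A sketch of how this could look in the ceph-csi config map, assuming a
per-cluster "subvolumeGroup" key (all values are placeholders):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "<cluster-id>",
        "cephFS": {
          "subvolumeGroup": "<tenant-group>"
        }
      }
    ]
EOF
```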
This commit adds support to specify the dataPool parameter for the
topology-constrained pools in the StorageClass, which can be
leveraged to specify erasure-coded pool names to use for RBD
data instead of the replica pools.
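A sketch of a StorageClass using this, assuming the
topologyConstrainedPools parameter takes a JSON list with poolName,
dataPool, and domainSegments entries (names and values are
placeholders):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-topology-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  topologyConstrainedPools: |
    [
      {
        "poolName": "<replica-pool>",
        "dataPool": "<ec-data-pool>",
        "domainSegments": [
          { "domainLabel": "zone", "value": "zone1" }
        ]
      }
    ]
EOF
```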
Signed-off-by: ShyamsundarR <srangana@redhat.com>
We need to provide the service account created
with the Helm charts access to
the Vault service.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
As the Kubernetes CSI sidecar is exposing the
GRPC metrics, we can make use of the same in
ceph-csi; we don't need to expose our own.
Updates: #881
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
While running the 'make test' target with 'yamllint' available, the
test fails with the following exception:
```bash
yamllint -s -d {extends: default, rules: {line-length: {allow-non-breakable-inline-mappings: true}},ignore: charts/*/templates/*.yaml} ./examples/rbd/storageclass.yaml
Traceback (most recent call last):
  File "/usr/local/bin/yamllint", line 11, in <module>
    sys.exit(run())
  File "/usr/local/lib/python3.6/site-packages/yamllint/cli.py", line 181, in run
    problems = linter.run(f, conf, filepath)
  File "/usr/local/lib/python3.6/site-packages/yamllint/linter.py", line 237, in run
    content = input.read()
  File "/usr/lib64/python3.6/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 1947: ordinal not in range(128)
```
The quotes used in the comments seem to be non-ASCII characters.
Replacing these with a standard " makes the test pass again.
This problem occurred while running tests in a container based on the
Ceph image (CentOS-7) with Python 3. Travis CI might still use Python 2
for yamllint, hiding the problem.
Signed-off-by: Niels de Vos <ndevos@redhat.com>
* moves the KMS type from the StorageClass into the KMS configuration itself
* updates the omapval used to identify the KMS to only its ID, without the type
Why?
1. when using multiple KMS configurations (not currently supported),
automated parsing of the KMS configuration would fail because some
entries in the configs won't comply with the requested type
2. fewer options are needed in the StorageClass and less data is used to
identify the KMS
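A rough sketch of the split, assuming a KMS config map keyed by the KMS
ID, with the type living in the configuration rather than in the
StorageClass (the key names and values here are assumptions based on
this description):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-encryption-kms-config
data:
  config.json: |-
    {
      "<kms-id>": {
        "encryptionKMSType": "vault",
        "vaultAddress": "http://vault.default.svc:8200"
      }
    }
EOF
```
The StorageClass then only needs to reference `<kms-id>`.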
Signed-off-by: Vasyl Purchel vasyl.purchel@workday.com
Signed-off-by: Andrea Baglioni andrea.baglioni@workday.com
- adds the proposal document for PVC encryption from PR448
- adds per-volume encryption by generating an encryption passphrase
for each volume and storing it in a KMS
- adds HashiCorp Vault integration as a KMS for encryption passphrases
- avoids encrypting a volume a second time if it was already encrypted but
no file system was created
- avoids unnecessary checks whether a volume is a mapped device when encryption
was not requested
- prevents resizing encrypted volumes (it is not currently supported)
- prevents creating snapshots from encrypted volumes to prevent an attack
on the encryption key (a security guard until re-encryption of volumes is
implemented)
Signed-off-by: Vasyl Purchel vasyl.purchel@workday.com
Signed-off-by: Andrea Baglioni andrea.baglioni@workday.com
Fixes: #420
Fixes: #744
Fedora 26 has been End-Of-Life for a long time already; it should not be
used anymore. Instead, use the latest CentOS image, which gets regular
updates. The Ceph base image used by Ceph-CSI is also based on CentOS,
so users would be familiar with the available tools.
Also use "sleep infinity" instead of the weird 'tail -f /dev/null' hack.
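A minimal sketch of such a test pod (names are illustrative):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: csi-test-pod
spec:
  containers:
    - name: centos
      image: docker.io/library/centos:latest
      # keep the container running without the 'tail -f /dev/null' hack
      command: ["sleep", "infinity"]
EOF
```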
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Adds encryption in the StorageClass as a parameter. The encryption
passphrase is stored in Kubernetes secrets per StorageClass. Implements
rbd volume encryption relying on dm-crypt and cryptsetup using the LUKS
extension.
The change is related to the proposal made earlier. This is the first
part of the full feature that adds encryption with a passphrase stored
in secrets.
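Conceptually, the node-side steps resemble the following (a simplified
sketch; the device name and passphrase handling are illustrative, not
the actual implementation):
```bash
# format the mapped RBD device as a LUKS container
echo -n "$PASSPHRASE" | cryptsetup -q luksFormat /dev/rbd0 -
# open it as a dm-crypt device
echo -n "$PASSPHRASE" | cryptsetup -d - luksOpen /dev/rbd0 luks-rbd0
# create the file system on the decrypted mapper device
mkfs.ext4 /dev/mapper/luks-rbd0
```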
Signed-off-by: Vasyl Purchel vasyl.purchel@workday.com
Signed-off-by: Andrea Baglioni andrea.baglioni@workday.com
Signed-off-by: Ioannis Papaioannou ioannis.papaioannou@workday.com
Signed-off-by: Paul Mc Auley paul.mcauley@workday.com
Signed-off-by: Sergio de Carvalho sergio.carvalho@workday.com
However, further testing shows that ext4 should be
the default or preferred fs for RBD devices.
This patch brings that change.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The storage class already takes MountOptions (MountFlags); these are the
bind mount options. Some of these options may not be recognized by the
CephFS mount. Hence, new parameters were added in the StorageClass for
- cephfs kernel mount options,
- ceph-fuse mount options
CephFS kernel mount options are different from ceph-fuse options, hence
two different parameters were added.
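A sketch with both parameters (the parameter names are assumed from
this change; the option values are only examples):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  fsName: <filesystem-name>
  # options passed to the cephfs kernel mounter
  kernelMountOptions: "readdir_max_bytes=1048576,norbytes"
  # options passed to ceph-fuse
  fuseMountOptions: "debug"
EOF
```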
Signed-off-by: Poornima G <pgurusid@redhat.com>
We have seen better performance in device
formatting and mounting by setting the fstype to xfs
instead of the default ext4.
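A sketch of an RBD StorageClass selecting xfs via the standard CSI
fstype parameter (other values are placeholders):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: <pool-name>
  csi.storage.k8s.io/fstype: xfs
EOF
```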
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Use a Deployment with leader election instead of a StatefulSet.
A Deployment behaves better when a node gets disconnected
from the rest of the cluster: a new provisioner leader
is elected in ~15 seconds, while it may take up to
5 minutes for a StatefulSet to start a new replica.
Refer: kubernetes-csi/external-provisioner@52d1fbc
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
In the NodeStage RPC call we have to map the
device on the node plugin and make sure the
device is mounted to the global path.
In the NodeUnstage request, unmount the device from the
global path and unmap the device.
If the volume mode is block, we create a file
inside the stageTargetPath and it is
considered as the global path.
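Approximately, and ignoring error handling, the node-side flow looks
like this (a sketch with placeholder paths, not the actual code):
```bash
# NodeStage, filesystem mode: map the image and mount it at the
# global (staging) path
rbd map <pool>/<image>
mount /dev/rbd0 <staging-path>

# NodeStage, block mode: a file inside the staging path acts as the
# global path, and the device is bind-mounted onto it
touch <staging-path>/<volume-id>
mount --bind /dev/rbd0 <staging-path>/<volume-id>

# NodeUnstage: unmount from the global path and unmap the device
umount <staging-path>/<volume-id>
rbd unmap /dev/rbd0
```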
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The RBD plugin needs only a single ID to manage images and operations against a
pool mentioned in the storage class. The current scheme of 2 IDs is hence not
needed and is removed in this commit.
Further, unlike the CephFS plugin, the RBD plugin splits the user ID and the key
into the storage class and the secret respectively. Also, the parameter name
for the key in the secret is noted in the storage class, making it a variant and
hampering usability/comprehension. This is also fixed by moving the ID and the key
to the secret and not retaining the same in the storage class, like CephFS.
Fixes: #270
Testing done:
- Basic PVC creation and mounting
Signed-off-by: ShyamsundarR <srangana@redhat.com>
Update the example rbd PVC from ReadWriteMany
to ReadWriteOnce, as rbd supports ReadWriteMany
only if the PVC mode is block.
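For comparison, a block-mode PVC is the case where ReadWriteMany
remains valid (a sketch; names are illustrative):
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF
```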
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
The SnapshotClass for RBD requires a pool parameter. This is redundant,
as a snapshot is not created on a different pool than the source image
of the snapshot (refer to the rbd man page).
Further, when a snapshot needs to be created, its source CSI VolumeID
is passed to the creation call, and hence the source volume's pool
needs to be reused to create the snapshot.
Similarly, to clone a snapshot, the create request would come in with a
SnapshotID to help identify the snapshot pool, and the same create
request parameters would contain the storage-class-based pool parameter
to create the clone into (as clones can be in different pools as
compared to their parent snapshots).
Thus, the pool parameter seems redundant in the snapshot class and
should be removed to improve ease of use.
Fixes: #379
Signed-off-by: ShyamsundarR <srangana@redhat.com>
This is a part of the stateless set of commits for CephCSI.
This commit removes the dependency on config maps to store CephFS provisioned
volumes, and instead relies on RADOS-based objects and keys, and the required
CSI VolumeID encoding, to detect the provisioned volumes.
Changes:
- Provide backward compatibility to provisioned volumes by older plugin versions (1.0.0 or older)
- Remove Create/Delete support for statically provisioned volumes (fixes #382)
- Added namespace support to RADOS OMaps and used the same to store RADOS CSI objects and keys in the CephFS metadata pool
- Added support to mention the fsname for CephFS provisioning (fixes #359)
- Changed field name in CSI Identifier to 'location', to denote a pool or fscid
- Updated mounter cache to use new scheme
- Required Helm manifests are updated
- Required documentation and other manifests are updated
- Made the driver option 'metadatastorage' optional, as fresh installs do not need to specify it
Testing done:
- Create/Mount/Delete PVC
- Create/Delete 5 PVCs
- Mount version 1.0.0 PVC
- Delete version 1.0.0 PV
- Mount Statically defined PV/PVC/Pod
- Mount Statically defined version 1.0.0 PV/PVC/Pod
- Delete Statically defined version 1.0.0 PV/PVC/Pod
- Node restart when mounted to test mountcache
- Use InstanceID other than 'default'
- RBD basic round of tests, as namespace is added to OMaps
- csitest against ceph-fs plugin
- NOTE: the CephFS plugin still does not detect and address already created
volumes that are of a different size
- Test not providing any value to the metadata storage parameter
Signed-off-by: ShyamsundarR <srangana@redhat.com>
Existing config maps are now replaced with RADOS omaps that help
store information regarding the requested volume names and the rbd
image names backing the same.
Further, to detect which cluster, pool, and image a volume ID refers
to, changes to the volume ID encoding have been done as per the
design specification provided in the stateless ceph-csi proposal.
Additional changes and updates:
- Updated documentation
- Updated manifests
- Updated Helm chart
- Addressed a few csi-test failures
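For illustration, such omap-backed mappings can be inspected with the
rados CLI (the object and key names here are hypothetical):
```bash
# list the volume-name -> image-name mappings stored in an omap object
rados -p <pool> listomapvals <csi-volumes-object>
# fetch a single mapping by key
rados -p <pool> getomapval <csi-volumes-object> <volume-name-key>
```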
Signed-off-by: ShyamsundarR <srangana@redhat.com>