Merge pull request #39 from ceph/devel

sync upstream devel to downstream devel
This commit is contained in:
OpenShift Merge Robot 2021-10-27 06:18:56 +00:00 committed by GitHub
commit ccf5fd9058
GPG Key ID: 4AEE18F83AFDEB23
306 changed files with 31881 additions and 550 deletions


@@ -0,0 +1,272 @@
# Snapshots as shallow read-only volumes

The CSI spec doesn't have a notion of "mounting a snapshot". Instead, the
idiomatic way of accessing snapshot contents is first to create a volume
populated with snapshot contents and then mount that volume to workloads.

CephFS exposes snapshots as special, read-only directories of a subvolume
located in `<subvolume>/.snap`. cephfs-csi can already provision writable
volumes with snapshots as their data source, where snapshot contents are
cloned to the newly created volume. However, cloning a snapshot into a volume
is a very expensive operation in CephFS, as the data needs to be fully copied.
When the need is only to read snapshot contents, snapshot cloning is extremely
inefficient and wasteful.

This proposal describes a way for cephfs-csi to expose CephFS snapshots
as shallow, read-only volumes, without needing to clone the underlying
snapshot data.
## Use-cases
What's the point of such read-only volumes?
* **Restore snapshots selectively:** users may want to traverse snapshots,
restoring data to a writable volume more selectively instead of restoring
the whole snapshot.
* **Volume backup:** users can't back up a live volume; they first need
  to snapshot it. Once a snapshot is taken, it still can't be backed up,
  as backup tools usually work with volumes (that are exposed as file-systems)
  and not snapshots (which might have a backend-specific format). What this
  means is that in order to create a snapshot backup, users have to clone
  snapshot data twice:
  1. first, when restoring the snapshot into a temporary volume from
     where the data will be read,
  1. and second, when transferring that volume into some backup/archive
     storage (e.g. object store).

  The temporary volume will most likely be thrown away after the
  backup transfer is finished. That's a lot of wasted work for what we
  originally wanted to do! Having the ability to create volumes from
  snapshots cheaply would be a big improvement for this use case.
## Alternatives

* _Snapshots are stored in `<subvolume>/.snap`. Users could simply visit this
  directory by themselves._

  `.snap` is a CephFS-specific detail of how snapshots are exposed.
  Users / tools may not be aware of this special directory, or it may not fit
  their workflow. At the moment, the idiomatic way of accessing snapshot
  contents in CSI drivers is by creating a new volume and populating it
  with the snapshot contents.
## Design
Key points:
* Volume source is a snapshot, volume access mode is `*_READER_ONLY`.
* No actual new subvolumes are created in CephFS.
* The resulting volume is a reference to the source subvolume snapshot.
This reference would be stored in the `Volume.volume_context` map. In order
to reference a snapshot, we need the subvolume name and the snapshot name;
a sketch of such a volume context follows this list.
* Mounting such a volume means mounting the respective CephFS subvolume
and exposing the snapshot to workloads.
* Let's call a *shallow read-only volume with a subvolume snapshot
as its data source* just a *shallow volume* from here on out for brevity.
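For illustration, such a volume context might look like the sketch below. The
key names (`isShallow`, `subvolumeName`, `snapshotName`) are placeholders
invented for this example, not an existing cephfs-csi contract:

```go
// Hypothetical volume_context of a shallow volume; all key names are
// illustrative placeholders only.
var shallowVolumeContext = map[string]string{
	"isShallow":     "true",
	"subvolumeName": "csi-vol-0001",  // subvolume that owns the snapshot
	"snapshotName":  "csi-snap-0001", // directory under <subvolume>/.snap
}
```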
### Controller operations
Care must be taken when handling the lifetimes of the relevant storage
resources. When a shallow volume is created, what would happen if:
* _Parent subvolume of the snapshot is removed while the shallow volume
still exists?_
This shouldn't be a problem. The parent subvolume either has the
`snapshot-retention` subvolume feature, in which case its snapshots remain
available even after the subvolume itself is deleted, or it doesn't have
that feature, in which case deleting the subvolume fails because it still
has snapshots associated with it.
* _Source snapshot from which the shallow volume originates is removed while
that shallow volume still exists?_
We need to make sure this doesn't happen, so some book-keeping is
necessary. Ideally, we could employ some kind of reference counting.
#### Reference counting for shallow volumes
As mentioned above, this is to protect shallow volumes, should their source
snapshot be requested for deletion.
When creating a volume snapshot, a reference tracker (RT), represented by a
RADOS object, would be created for that snapshot. It would store information
required to track the references for the backing subvolume snapshot. Upon a
`CreateSnapshot` call, the reference tracker (RT) would be initialized with a
single reference record, where the CSI snapshot itself is the first reference
to the backing snapshot. Each subsequent shallow volume creation would add a
new reference record to the RT object. Each shallow volume deletion would
remove that reference from the RT object. Calling `DeleteSnapshot` would remove
the reference record that was previously added in `CreateSnapshot`.
The subvolume snapshot would be removed from the Ceph cluster only once the RT
object holds no references. Note that this behavior would permit calling
`DeleteSnapshot` even if it is still referenced by shallow volumes.
* `DeleteSnapshot`:
* RT holds no references or the RT object doesn't exist:
delete the backing snapshot too.
* RT holds at least one reference: keep the backing snapshot.
* `DeleteVolume`:
* RT holds no references: delete the backing snapshot too.
* RT holds at least one reference: keep the backing snapshot.
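To make the bookkeeping concrete, here is a minimal sketch of the deletion
logic both `DeleteSnapshot` and `DeleteVolume` could share. The `refTracker`
interface and its methods are hypothetical names for this proposal, not
existing cephfs-csi code:

```go
// refTracker is a hypothetical handle to the RADOS object holding the
// reference records of one backing subvolume snapshot.
type refTracker interface {
	Remove(ref string) error // drop a single reference record
	Count() (int, error)     // number of remaining reference records
}

// releaseRef drops one reference and deletes the backing snapshot once no
// references remain. DeleteSnapshot would call it with the CSI snapshot's
// own reference; DeleteVolume (for a shallow volume) with the volume's.
func releaseRef(rt refTracker, ref string, deleteBackingSnapshot func() error) error {
	if err := rt.Remove(ref); err != nil {
		return err
	}
	remaining, err := rt.Count()
	if err != nil {
		return err
	}
	if remaining == 0 {
		return deleteBackingSnapshot()
	}
	return nil
}
```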
To enable creating shallow volumes from snapshots that were provisioned by
older versions of cephfs-csi (i.e. before this feature is introduced),
`CreateVolume` for shallow volumes would also create an RT object in case it's
missing. It would be initialized with two reference records: one for the
source snapshot and one for the newly created shallow volume.
##### Concurrent access to RT objects
RADOS API provides access to compound atomic read and write operations. These
will be used to implement reference tracking functionality, protecting
modifications of reference records.
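As a sketch of the pattern (not actual go-ceph API), adding a reference record
could be an optimistic retry loop around a single compound operation that
asserts the RT object's version before appending:

```go
// addReference sketches optimistic concurrency over the RT object.
// readVersion, appendRecordIfVersion and errStaleVersion are hypothetical
// names; they exist neither in cephfs-csi nor in go-ceph.
func addReference(rt *refTrackerObject, ref string) error {
	for {
		version, err := rt.readVersion()
		if err != nil {
			return err
		}
		// A single atomic compound write: assert that the object is still
		// at `version`, then append the reference record.
		err = rt.appendRecordIfVersion(version, ref)
		if errors.Is(err, errStaleVersion) {
			continue // another writer got in first; re-read and retry
		}
		return err
	}
}
```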
#### `CreateVolume`
A read-only volume with snapshot source would be created under these conditions:
1. `CreateVolumeRequest.volume_content_source` is a snapshot,
1. `CreateVolumeRequest.volume_capabilities[*].access_mode` is one of the
   read-only volume access modes,
1. possibly other volume parameters in `CreateVolumeRequest.parameters`
   specific to shallow volumes.
`CreateVolumeResponse.Volume.volume_context` would then contain necessary
information to identify the source subvolume / snapshot.
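A rough sketch of how cephfs-csi might recognize such a request, using the
CSI spec's Go bindings (the `isShallowRequest` helper itself is hypothetical):

```go
import "github.com/container-storage-interface/spec/lib/go/csi"

// isShallowRequest is a hypothetical helper: a CreateVolume request maps to
// a shallow volume when its content source is a snapshot and every requested
// capability carries a read-only access mode.
func isShallowRequest(req *csi.CreateVolumeRequest) bool {
	if req.GetVolumeContentSource().GetSnapshot() == nil {
		return false
	}
	for _, vc := range req.GetVolumeCapabilities() {
		switch vc.GetAccessMode().GetMode() {
		case csi.VolumeCapability_AccessMode_SINGLE_NODE_READER_ONLY,
			csi.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY:
			// read-only, acceptable for a shallow volume
		default:
			return false
		}
	}
	return true
}
```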
Things to look out for:
* _What's the volume size?_

  A shallow volume doesn't consume any space on the filesystem, and
  `Volume.capacity_bytes` is allowed to contain zero, so we could use that.
* _What should be the requested size when creating the volume (specified e.g.
  in PVC)?_

  This one is tricky. The CSI spec allows
  `CreateVolumeRequest.capacity_range.{required_bytes,limit_bytes}` to be
  zero. On the other hand,
  `PersistentVolumeClaim.spec.resources.requests.storage` must be greater
  than zero. cephfs-csi doesn't care about the requested size (the volume
  will be read-only, so it has no usable capacity) and would always set it
  to zero. This shouldn't cause any problems for the time being, but it is
  still something we should keep in mind.
`CreateVolume` and behavior when using volume as volume source (PVC-PVC clone):
| New volume | Source volume | Behavior |
|----------------|----------------|-----------------------------------------------------------------------------------|
| shallow volume | shallow volume | Create a new reference to the parent snapshot of the source shallow volume. |
| regular volume | shallow volume | Equivalent to a request to create a regular volume with a snapshot as its source.  |
| shallow volume | regular volume | Such request doesn't make sense and `CreateVolume` should return an error. |
#### `DeleteVolume`
Volume deletion is trivial.
#### `CreateSnapshot`
Snapshotting read-only volumes doesn't make sense in general, and should
be rejected.
#### `ControllerExpandVolume`
Same as above: expanding read-only volumes doesn't make sense in general,
and should be rejected.
### Node operations
Two cases need to be considered:
* (a) Volume/snapshot provisioning is handled by cephfs-csi
* (b) Volume/snapshot provisioning is handled externally (e.g. pre-provisioned
  manually, or by OpenStack Manila, ...)
#### `NodeStageVolume`, `NodeUnstageVolume`
Here we're mounting the source subvolume onto the node. Subsequent volume
publish calls then use bind mounts to expose the snapshot directory located in
`.snap/<SNAPSHOT DIRECTORY NAME>`. Unfortunately, we cannot mount snapshots
directly, because they are not visible at mount time. We need to mount the
whole subvolume first, and only then bind the snapshot directories to the
target paths.
##### For case (a)
Subvolume paths are normally retrieved by
`ceph fs subvolume info/getpath <VOLUME NAME> <SUBVOLUME NAME> <SUBVOLUMEGROUP NAME>`,
which outputs a path like so:
```
/volumes/<VOLUME NAME>/<SUBVOLUME NAME>/<UUID>
```
Snapshots are then accessible in:
* `/volumes/<VOLUME NAME>/<SUBVOLUME NAME>/.snap` and
* `/volumes/<VOLUME NAME>/<SUBVOLUME NAME>/<UUID>/.snap`.
`/volumes/<VOLUME NAME>/<SUBVOLUME NAME>/<UUID>` may be deleted if the source
subvolume is deleted, but thanks to the `snapshot-retention` feature, snapshots
in `/volumes/<VOLUME NAME>/<SUBVOLUME NAME>/.snap` will remain available.
The CephFS mount should therefore have its root set to the parent of what
`fs subvolume getpath` returns, i.e. `/volumes/<VOLUME NAME>/<SUBVOLUME NAME>`.
That way we will have snapshots available regardless of whether the subvolume
itself still exists or not.
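A sketch of the resulting mount flow for case (a) follows. Real cephfs-csi
goes through its own mounter abstractions and cephx credential handling; the
function below, including its option string, is illustrative only:

```go
import "golang.org/x/sys/unix"

// stageAndPublish illustrates the mount layout for a shallow volume:
// stage the parent of the subvolume path so retained snapshots stay
// reachable, then bind the snapshot directory read-only into the target.
func stageAndPublish(monitors, subvolParent, stagingPath, targetPath, snapName string) error {
	// Stage: mount e.g. /volumes/<VOLUME NAME>/<SUBVOLUME NAME>.
	opts := "name=cephuser,secretfile=/etc/ceph/secret" // placeholder cephx options
	if err := unix.Mount(monitors+":"+subvolParent, stagingPath, "ceph", 0, opts); err != nil {
		return err
	}
	// Publish: bind the snapshot directory into the target path.
	snapDir := stagingPath + "/.snap/" + snapName
	if err := unix.Mount(snapDir, targetPath, "", unix.MS_BIND, ""); err != nil {
		return err
	}
	// A fresh bind mount ignores MS_RDONLY; remount to enforce read-only.
	return unix.Mount("", targetPath, "", unix.MS_REMOUNT|unix.MS_BIND|unix.MS_RDONLY, "")
}
```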
##### For case (b)
For cases where subvolumes are managed externally and not by cephfs-csi, we
must assume that the cephx user we're given can access only
`/volumes/<VOLUME NAME>/<SUBVOLUME NAME>/<UUID>`, so users won't be able to
benefit from snapshot retention. Users will need to be careful not to delete
the parent subvolumes and snapshots while they are still referenced by these
shallow read-only volumes.
#### `NodePublishVolume`, `NodeUnpublishVolume`
Node publish is trivial. We bind-mount the staging path to the target path
read-only.
#### `NodeGetVolumeStats`
`NodeGetVolumeStatsResponse.usage[*].available` should always be zero.
## Volume parameters, volume context
This section discusses what volume parameters and volume context parameters
will be used to convey the necessary information to the cephfs-csi driver in
order to support shallow volumes.

Volume parameters `CreateVolumeRequest.parameters`:
* Should "shallow" be the default mode for all `CreateVolume` calls that have
  (a) a snapshot as their data source and (b) a read-only volume access mode?
  If not, a new volume parameter should be introduced, e.g. `isShallow: <bool>`.
  On the other hand, does it even make sense for users to want to create full
  copies of snapshots and still have them read-only?
Volume context `Volume.volume_context`:
* Here we definitely need `isShallow` or similar. Without it we wouldn't be
able to distinguish between a regular volume that just happens to have
a read-only access mode, and a volume that references a snapshot.
* Currently cephfs-csi recognizes `subvolumePath` for dynamically provisioned
  volumes and `rootPath` for pre-provisioned volumes. As mentioned in the
  [`NodeStageVolume`, `NodeUnstageVolume` section](#NodeStageVolume-NodeUnstageVolume),
  snapshots cannot be mounted directly. How do we pass in the path to the
  parent subvolume?
* a) Path to the snapshot is passed in via `subvolumePath` / `rootPath`,
e.g.
`/volumes/<VOLUME NAME>/<SUBVOLUME NAME>/<UUID>/.snap/<SNAPSHOT NAME>`.
From that we can derive the path to the subvolume: it's the parent of the
`.snap` directory.
* b) Similar to a), path to the snapshot is passed in via `subvolumePath` /
`rootPath`, but instead of trying to derive the right path we introduce
another volume context parameter containing path to the parent subvolume
explicitly.
* c) `subvolumePath` / `rootPath` contains the path to the parent subvolume
and we introduce another volume context parameter containing the name of
the snapshot. The path to the snapshot is then formed by appending
`/.snap/<SNAPSHOT NAME>` to the subvolume path (see the sketch below).
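For example, with option (c) the node service could derive the snapshot path
as sketched below; `snapshotName` as a separate volume context key is exactly
the hypothetical parameter this option proposes:

```go
import "path"

// snapshotPath forms the path to the snapshot under option (c):
// subvolumePath points at the parent subvolume, and the snapshot name
// arrives as its own (hypothetical) volume_context key.
func snapshotPath(subvolumePath, snapshotName string) string {
	// e.g. /volumes/<VOLUME NAME>/<SUBVOLUME NAME>/<UUID>/.snap/<SNAPSHOT NAME>
	return path.Join(subvolumePath, ".snap", snapshotName)
}
```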


@@ -2761,6 +2761,109 @@ var _ = Describe("RBD", func() {
}
})
By("validate image deletion when it is moved to trash", func() {
// make sure pool is empty
validateRBDImageCount(f, 0, defaultRBDPool)
err := createRBDSnapshotClass(f)
if err != nil {
e2elog.Failf("failed to create storageclass: %v", err)
}
defer func() {
err = deleteRBDSnapshotClass()
if err != nil {
e2elog.Failf("failed to delete VolumeSnapshotClass: %v", err)
}
}()
pvc, err := loadPVC(pvcPath)
if err != nil {
e2elog.Failf("failed to load pvc: %v", err)
}
pvc.Namespace = f.UniqueName
err = createPVCAndvalidatePV(f.ClientSet, pvc, deployTimeout)
if err != nil {
e2elog.Failf("failed to create pvc: %v", err)
}
pvcSmartClone, err := loadPVC(pvcSmartClonePath)
if err != nil {
e2elog.Failf("failed to load pvcSmartClone: %v", err)
}
pvcSmartClone.Namespace = f.UniqueName
err = createPVCAndvalidatePV(f.ClientSet, pvcSmartClone, deployTimeout)
if err != nil {
e2elog.Failf("failed to create pvc: %v", err)
}
snap := getSnapshot(snapshotPath)
snap.Namespace = f.UniqueName
snap.Spec.Source.PersistentVolumeClaimName = &pvc.Name
err = createSnapshot(&snap, deployTimeout)
if err != nil {
e2elog.Failf("failed to create snapshot: %v", err)
}
smartCloneImageData, err := getImageInfoFromPVC(pvcSmartClone.Namespace, pvcSmartClone.Name, f)
if err != nil {
e2elog.Failf("failed to get ImageInfo from pvc: %v", err)
}
imageList, err := listRBDImages(f, defaultRBDPool)
if err != nil {
e2elog.Failf("failed to list rbd images: %v", err)
}
for _, imageName := range imageList {
if imageName == smartCloneImageData.imageName {
// do not move smartclone image to trash to test
// temporary image clone cleanup.
continue
}
_, _, err = execCommandInToolBoxPod(f,
fmt.Sprintf("rbd snap purge %s %s", rbdOptions(defaultRBDPool), imageName), rookNamespace)
if err != nil {
e2elog.Failf(
"failed to snap purge %s %s: %v",
imageName,
rbdOptions(defaultRBDPool),
err)
}
_, _, err = execCommandInToolBoxPod(f,
fmt.Sprintf("rbd trash move %s %s", rbdOptions(defaultRBDPool), imageName), rookNamespace)
if err != nil {
e2elog.Failf(
"failed to move rbd image %s %s to trash: %v",
imageName,
rbdOptions(defaultRBDPool),
err)
}
}
err = deleteSnapshot(&snap, deployTimeout)
if err != nil {
e2elog.Failf("failed to delete snapshot: %v", err)
}
err = deletePVCAndValidatePV(f.ClientSet, pvcSmartClone, deployTimeout)
if err != nil {
e2elog.Failf("failed to delete pvc: %v", err)
}
err = deletePVCAndValidatePV(f.ClientSet, pvc, deployTimeout)
if err != nil {
e2elog.Failf("failed to delete pvc: %v", err)
}
validateRBDImageCount(f, 0, defaultRBDPool)
err = waitToRemoveImagesFromTrash(f, defaultRBDPool, deployTimeout)
if err != nil {
e2elog.Failf("failed to validate rbd images in trash %s: %v", rbdOptions(defaultRBDPool), err)
}
})
By("validate stale images in trash", func() { By("validate stale images in trash", func() {
err := waitToRemoveImagesFromTrash(f, defaultRBDPool, deployTimeout) err := waitToRemoveImagesFromTrash(f, defaultRBDPool, deployTimeout)
if err != nil { if err != nil {


@@ -48,6 +48,8 @@ spec:
          value: sample_root_token_id
        - name: SKIP_SETCAP
          value: any
        - name: HOME
          value: /home
      livenessProbe:
        exec:
          command:
@@ -58,6 +60,28 @@ spec:
      ports:
        - containerPort: 8200
          name: vault-api
      volumeMounts:
        - name: home
          mountPath: /home
    - name: monitor
      image: docker.io/library/vault:latest
      imagePullPolicy: "IfNotPresent"
      securityContext:
        runAsUser: 100
      env:
        - name: VAULT_ADDR
          value: http://localhost:8200
        - name: HOME
          value: /home
      command:
        - vault
        - monitor
      volumeMounts:
        - name: home
          mountPath: /home
  volumes:
    - name: home
      emptyDir: {}
---
apiVersion: v1
kind: ConfigMap

go.mod

@@ -3,9 +3,9 @@ module github.com/ceph/ceph-csi
go 1.16
require (
-github.com/aws/aws-sdk-go v1.41.0
+github.com/aws/aws-sdk-go v1.41.10
github.com/ceph/ceph-csi/api v0.0.0-00010101000000-000000000000
-github.com/ceph/go-ceph v0.11.0
+github.com/ceph/go-ceph v0.12.0
github.com/container-storage-interface/spec v1.5.0
github.com/csi-addons/replication-lib-utils v0.2.0
github.com/csi-addons/spec v0.1.1
@@ -13,7 +13,7 @@ require (
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0
github.com/hashicorp/golang-lru v0.5.4 // indirect
-github.com/hashicorp/vault/api v1.1.1
+github.com/hashicorp/vault/api v1.2.0
github.com/kubernetes-csi/csi-lib-utils v0.10.0
github.com/kubernetes-csi/external-snapshotter/client/v4 v4.2.0
github.com/libopenstorage/secrets v0.0.0-20210908194121-a1d19aa9713a
@@ -40,7 +40,6 @@ replace (
code.cloudfoundry.org/gofileutils => github.com/cloudfoundry/gofileutils v0.0.0-20170111115228-4d0c80011a0f
github.com/ceph/ceph-csi/api => ./api
github.com/golang/protobuf => github.com/golang/protobuf v1.4.3
-github.com/hashicorp/vault/sdk => github.com/hashicorp/vault/sdk v0.1.14-0.20201116234512-b4d4137dfe8b
github.com/portworx/sched-ops => github.com/portworx/sched-ops v0.20.4-openstorage-rc3
gomodules.xyz/jsonpatch/v2 => github.com/gomodules/jsonpatch/v2 v2.2.0
//
@@ -73,5 +72,10 @@ replace (
k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.22.2
)
-// This tag doesn't exist, but is imported by github.com/portworx/sched-ops.
-exclude github.com/kubernetes-incubator/external-storage v0.20.4-openstorage-rc2
+exclude (
+// missing tag, referred to by github.com/hashicorp/go-kms-wrapping@v0.5.1
+github.com/hashicorp/vault/sdk v0.1.14-0.20191229212425-c478d00be0d6
+// This tag doesn't exist, but is imported by github.com/portworx/sched-ops.
+github.com/kubernetes-incubator/external-storage v0.20.4-openstorage-rc2
+)

go.sum

@@ -133,8 +133,8 @@ github.com/aws/aws-sdk-go v1.25.41/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpi
github.com/aws/aws-sdk-go v1.30.27/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0=
github.com/aws/aws-sdk-go v1.35.24/go.mod h1:tlPOdRjfxPBpNIwqDj61rmsnA85v9jc0Ps9+muhnW+k=
github.com/aws/aws-sdk-go v1.38.49/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
-github.com/aws/aws-sdk-go v1.41.0 h1:XUzHLFWQVhmFtmKTodnAo5QdooPQfpVfilCxIV3aLoE=
+github.com/aws/aws-sdk-go v1.41.10 h1:smSZe5mNOOWZQ+FL8scmS/t9Iple/CvlQfRAJ46i8vw=
-github.com/aws/aws-sdk-go v1.41.0/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q=
+github.com/aws/aws-sdk-go v1.41.10/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q=
github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f/go.mod h1:AuiFmCCPBSrqvVMvuqFuk0qogytodnVFVSN5CeJB8Gc=
github.com/benbjohnson/clock v1.0.3/go.mod h1:bGMdMPoPVvcYyt1gHDf4J2KE153Yf9BuiUKYMaxlTDM=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
@@ -161,8 +161,8 @@ github.com/cenkalti/backoff/v3 v3.0.0 h1:ske+9nBpD9qZsTBoF41nW5L+AIuFBKMeze18XQ3
github.com/cenkalti/backoff/v3 v3.0.0/go.mod h1:cIeZDE3IrqwwJl6VUwCN6trj1oXrTS4rc0ij+ULvLYs=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/centrify/cloud-golang-sdk v0.0.0-20190214225812-119110094d0f/go.mod h1:C0rtzmGXgN78pYR0tGJFhtHgkbAs0lIbHwkB81VxDQE=
-github.com/ceph/go-ceph v0.11.0 h1:A1pphV40LL8GQKDPpU4XqCa7gkmozsst7rhCC730/nk=
+github.com/ceph/go-ceph v0.12.0 h1:nlFgKQZXOFR4oMnzXsKwTr79Y6EYDwqTrpigICGy/Tw=
-github.com/ceph/go-ceph v0.11.0/go.mod h1:mafFpf5Vg8Ai8Bd+FAMvKBHLmtdpTXdRP/TNq8XWegY=
+github.com/ceph/go-ceph v0.12.0/go.mod h1:mafFpf5Vg8Ai8Bd+FAMvKBHLmtdpTXdRP/TNq8XWegY=
github.com/certifi/gocertifi v0.0.0-20191021191039-0944d244cd40/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/certifi/gocertifi v0.0.0-20200922220541-2c3bb06c6054/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
@@ -299,6 +299,7 @@ github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwo
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0 h1:8xPHl4/q1VyqGIPif1F+1V3Y3lSmrq01EabUW3CoW5s=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
+github.com/fatih/structs v1.1.0 h1:Q7juDM0QtcnhCpeyLGQKyg4TOIghuNXrkL32pHAUMxo=
github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M=
github.com/felixge/httpsnoop v1.0.1 h1:lvB5Jl89CsZtGIWuTcDM1E/vkVs49/Ml7JJe07l8SPQ=
github.com/felixge/httpsnoop v1.0.1/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
@@ -309,8 +310,9 @@ github.com/form3tech-oss/jwt-go v3.2.3+incompatible/go.mod h1:pbq4aXjuKjdthFRnoD
github.com/frankban/quicktest v1.4.0/go.mod h1:36zfPVQyHxymz4cH7wlDmVwDrJuljRB60qkgn7rorfQ=
github.com/frankban/quicktest v1.4.1/go.mod h1:36zfPVQyHxymz4cH7wlDmVwDrJuljRB60qkgn7rorfQ=
github.com/frankban/quicktest v1.10.0/go.mod h1:ui7WezCLWMWxVWr1GETZY3smRy0G4KWq9vcPtJmFl7Y=
-github.com/frankban/quicktest v1.11.3 h1:8sXhOn0uLys67V8EsXLc6eszDs8VXWxL3iRvebPhedY=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
+github.com/frankban/quicktest v1.13.0 h1:yNZif1OkDfNoDfb9zZa9aXIpejNR4F23Wely0c+Qdqk=
+github.com/frankban/quicktest v1.13.0/go.mod h1:qLE0fzW0VuyUAJgPU19zByoIr0HtCHN/r/VLSOOIySU=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
@@ -330,6 +332,7 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
+github.com/go-ldap/ldap v3.0.2+incompatible/go.mod h1:qfd9rJvER9Q0/D/Sqn1DfHRoBp40uXYvFoEVrNEPqRc=
github.com/go-ldap/ldap/v3 v3.1.3/go.mod h1:3rbOH3jRS2u6jg2rJnKAMLE/xQyCKIveG2Sa/Cohzb8=
github.com/go-ldap/ldap/v3 v3.1.10/go.mod h1:5Zun81jBTabRaI8lzN7E1JjyEl1g6zI6u9pd8luAK4Q=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
@@ -482,8 +485,9 @@ github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBt
github.com/hashicorp/consul/api v1.4.0/go.mod h1:xc8u05kyMa3Wjr9eEAsIAo3dg8+LywT5E/Cl7cNS5nU=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/consul/sdk v0.4.0/go.mod h1:fY08Y9z5SvJqevyZNy6WWPXiG3KwBPAvlcdx16zZ0fM=
-github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
+github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.1 h1:dH3aiDG9Jvb5r5+bYHsikaOUIpcM0xvgMXVoDkXMzJM=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
@@ -497,8 +501,8 @@ github.com/hashicorp/go-hclog v0.9.2/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrj
github.com/hashicorp/go-hclog v0.10.1/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-hclog v0.12.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-hclog v0.14.1/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
-github.com/hashicorp/go-hclog v0.16.1 h1:IVQwpTGNRRIHafnTs2dQLIk4ENtneRIEEJWOVDqz99o=
+github.com/hashicorp/go-hclog v0.16.2 h1:K4ev2ib4LdQETX5cSZBG0DVLk1jwGqSPXBjdah3veNs=
-github.com/hashicorp/go-hclog v0.16.1/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
+github.com/hashicorp/go-hclog v0.16.2/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-immutable-radix v1.1.0 h1:vN9wG1D6KG6YHRTWr8512cxGOVgTMEfgEdSj/hr8MPc=
github.com/hashicorp/go-immutable-radix v1.1.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
@@ -512,8 +516,10 @@ github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iP
github.com/hashicorp/go-msgpack v0.5.5 h1:i9R9JSrqIz0QVLz3sz+i3YJdT7TTSLcfLLzJi9aZTuI=
github.com/hashicorp/go-msgpack v0.5.5/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
-github.com/hashicorp/go-multierror v1.1.0 h1:B9UzwGQJehnUY1yNrnwREHc3fGbC2xefo8g4TbElacI=
github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
+github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
+github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
+github.com/hashicorp/go-plugin v1.0.0/go.mod h1:++UyYGoz3o5w9ZzAdZxtQKrWWP+iqPBn3cQptSMzBuY=
github.com/hashicorp/go-plugin v1.0.1 h1:4OtAfUGbnKC6yS48p0CtMX2oFYtzFZVv6rok3cRWgnE=
github.com/hashicorp/go-plugin v1.0.1/go.mod h1:++UyYGoz3o5w9ZzAdZxtQKrWWP+iqPBn3cQptSMzBuY=
github.com/hashicorp/go-raftchunking v0.6.3-0.20191002164813-7e9e8525653a h1:FmnBDwGwlTgugDGbVxwV8UavqSMACbGrUpfc98yFLR4=
@@ -527,6 +533,10 @@ github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa
github.com/hashicorp/go-rootcerts v1.0.1/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
+github.com/hashicorp/go-secure-stdlib/parseutil v0.1.1 h1:78ki3QBevHwYrVxnyVeaEz+7WtifHhauYF23es/0KlI=
+github.com/hashicorp/go-secure-stdlib/parseutil v0.1.1/go.mod h1:QmrqtbKuxxSWTN3ETMPuB+VtEiBJ/A9XhoYGv8E1uD8=
+github.com/hashicorp/go-secure-stdlib/strutil v0.1.1 h1:nd0HIW15E6FG1MsnArYaHfuw9C2zgzM8LxkG5Ty/788=
+github.com/hashicorp/go-secure-stdlib/strutil v0.1.1/go.mod h1:gKOamz3EwoIoJq7mlMIRBpVTAUn8qPCrEclOKKWhD3U=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-sockaddr v1.0.2 h1:ztczhD1jLxIRjVejw8gFomI1BQZOe2WoVOu0SyteCQc=
github.com/hashicorp/go-sockaddr v1.0.2/go.mod h1:rB4wwRAUzs07qva3c5SdrY/NEtAUjGlgmH/UkBUC97A=
@@ -536,6 +546,7 @@ github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/b
github.com/hashicorp/go-uuid v1.0.2-0.20191001231223-f32f5fe8d6a8/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.2 h1:cfejS+Tpcp13yd5nYHWDI6qVCny6wyX2Mt5SGur2IGE=
github.com/hashicorp/go-uuid v1.0.2/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/go-version v1.1.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/go-version v1.2.0 h1:3vNe/fWF5CBgRIguda1meWhsZHy3m8gCJ5wx+dIzX/E=
github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
@@ -589,10 +600,20 @@ github.com/hashicorp/vault/api v1.0.5-0.20200215224050-f6547fa8e820/go.mod h1:3f
github.com/hashicorp/vault/api v1.0.5-0.20200317185738-82f498082f02/go.mod h1:3f12BMfgDGjTsTtIUj+ZKZwSobQpZtYGFIEehOv5z1o=
github.com/hashicorp/vault/api v1.0.5-0.20200519221902-385fac77e20f/go.mod h1:euTFbi2YJgwcju3imEt919lhJKF68nN1cQPq3aA+kBE=
github.com/hashicorp/vault/api v1.0.5-0.20200902155336-f9d5ce5a171a/go.mod h1:R3Umvhlxi2TN7Ex2hzOowyeNb+SfbVWI973N+ctaFMk=
-github.com/hashicorp/vault/api v1.1.1 h1:907ld+Z9cALyvbZK2qUX9cLwvSaEQsMVQB3x2KE8+AI=
+github.com/hashicorp/vault/api v1.2.0 h1:ysGFc6XRGbv05NsWPzuO5VTv68Lj8jtwATxRLFOpP9s=
-github.com/hashicorp/vault/api v1.1.1/go.mod h1:29UXcn/1cLOPHQNMWA7bCz2By4PSd0VKPAydKXS5yN0=
+github.com/hashicorp/vault/api v1.2.0/go.mod h1:dAjw0T5shMnrfH7Q/Mst+LrcTKvStZBVs1PICEDpUqY=
-github.com/hashicorp/vault/sdk v0.1.14-0.20201116234512-b4d4137dfe8b h1:vQeIf4LdAqtYoD3N6KSiYilntYZq0F0vxcBTlx/69wg=
-github.com/hashicorp/vault/sdk v0.1.14-0.20201116234512-b4d4137dfe8b/go.mod h1:cAGI4nVnEfAyMeqt9oB+Mase8DNn3qA/LDNHURiwssY=
+github.com/hashicorp/vault/sdk v0.1.8/go.mod h1:tHZfc6St71twLizWNHvnnbiGFo1aq0eD2jGPLtP8kAU=
+github.com/hashicorp/vault/sdk v0.1.14-0.20190730042320-0dc007d98cc8/go.mod h1:B+hVj7TpuQY1Y/GPbCpffmgd+tSEwvhkWnjtSYCaS2M=
+github.com/hashicorp/vault/sdk v0.1.14-0.20191108161836-82f2b5571044/go.mod h1:PcekaFGiPJyHnFy+NZhP6ll650zEw51Ag7g/YEa+EOU=
+github.com/hashicorp/vault/sdk v0.1.14-0.20200215195600-2ca765f0a500/go.mod h1:WX57W2PwkrOPQ6rVQk+dy5/htHIaB4aBM70EwKThu10=
+github.com/hashicorp/vault/sdk v0.1.14-0.20200305172021-03a3749f220d/go.mod h1:PcekaFGiPJyHnFy+NZhP6ll650zEw51Ag7g/YEa+EOU=
+github.com/hashicorp/vault/sdk v0.1.14-0.20200317185738-82f498082f02/go.mod h1:WX57W2PwkrOPQ6rVQk+dy5/htHIaB4aBM70EwKThu10=
+github.com/hashicorp/vault/sdk v0.1.14-0.20200427170607-03332aaf8d18/go.mod h1:WX57W2PwkrOPQ6rVQk+dy5/htHIaB4aBM70EwKThu10=
+github.com/hashicorp/vault/sdk v0.1.14-0.20200429182704-29fce8f27ce4/go.mod h1:WX57W2PwkrOPQ6rVQk+dy5/htHIaB4aBM70EwKThu10=
+github.com/hashicorp/vault/sdk v0.1.14-0.20200519221530-14615acda45f/go.mod h1:WX57W2PwkrOPQ6rVQk+dy5/htHIaB4aBM70EwKThu10=
+github.com/hashicorp/vault/sdk v0.1.14-0.20200519221838-e0cfd64bc267/go.mod h1:WX57W2PwkrOPQ6rVQk+dy5/htHIaB4aBM70EwKThu10=
+github.com/hashicorp/vault/sdk v0.2.1 h1:S4O6Iv/dyKlE9AUTXGa7VOvZmsCvg36toPKgV4f2P4M=
+github.com/hashicorp/vault/sdk v0.2.1/go.mod h1:WfUiO1vYzfBkz1TmoE4ZGU7HD0T0Cl/rZwaxjBkgN4U=
github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d h1:kJCB4vdITiW1eC1vq2e6IsrXKrZit1bv/TDYFGMp4BQ=
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
@@ -733,8 +754,10 @@ github.com/mitchellh/hashstructure v1.0.0/go.mod h1:QjSHrPWS+BGUVBYkbTZWEnOh3G1D
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
-github.com/mitchellh/mapstructure v1.3.2 h1:mRS76wmkOn3KkKAyXDu42V+6ebnXWIztFSYGN7GeoRg=
github.com/mitchellh/mapstructure v1.3.2/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
+github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
+github.com/mitchellh/mapstructure v1.4.2 h1:6h7AQ0yhTcIsmFmnAwQls75jp2Gzs4iB8W7pjMO+rqo=
+github.com/mitchellh/mapstructure v1.4.2/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/pointerstructure v0.0.0-20190430161007-f252a8fd71c8/go.mod h1:k4XwG94++jLVsSiTxo7qdIfXA9pj9EAeo0QsNNJOLZ8=
github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/mitchellh/reflectwalk v1.0.1 h1:FVzMWA5RllMAKIdUSC8mdWo3XtwoecrH79BY70sEEpE=
@@ -829,6 +852,7 @@ github.com/ory/dockertest v3.3.4+incompatible/go.mod h1:1vX4m9wsvi00u5bseYwXaSnh
github.com/ory/dockertest v3.3.5+incompatible/go.mod h1:1vX4m9wsvi00u5bseYwXaSnhNrne+V0E6LAcBILJdPs=
github.com/oxtoacart/bpool v0.0.0-20150712133111-4e1c5567d7c2/go.mod h1:L3UMQOThbttwfYRNFOWLLVXMhk5Lkio4GGOtw5UrxS0=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
+github.com/pascaldekloe/goe v0.1.0 h1:cBOtyMzM9HTpWjXfbbunk26uA6nG3a8n06Wieeh0MwY=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/patrickmn/go-cache v0.0.0-20180815053127-5633e0862627/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
@@ -1057,6 +1081,7 @@ go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqe
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5/go.mod h1:nmDLcffg48OtT/PSW0Hg7FvpRQsQh5OSqIylirxKC7o=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
+go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.1.10 h1:z+mqJhf6ss6BSfSM671tgKyZBFPTTJM+HLxnhPC3wu0=
@@ -1336,6 +1361,7 @@ golang.org/x/tools v0.0.0-20190718200317-82a3ea8a504c/go.mod h1:jcCCGcm9btYwXyDq
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
@@ -1461,6 +1487,7 @@ google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmE
google.golang.org/grpc v1.16.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
+google.golang.org/grpc v1.19.1/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=


@@ -263,17 +263,30 @@ func FilesystemNodeGetVolumeStats(ctx context.Context, targetPath string) (*csi.
return &csi.NodeGetVolumeStatsResponse{
Usage: []*csi.VolumeUsage{
{
-Available: available,
+Available: requirePositive(available),
-Total: capacity,
+Total: requirePositive(capacity),
-Used: used,
+Used: requirePositive(used),
Unit: csi.VolumeUsage_BYTES,
},
{
-Available: inodesFree,
+Available: requirePositive(inodesFree),
-Total: inodes,
+Total: requirePositive(inodes),
-Used: inodesUsed,
+Used: requirePositive(inodesUsed),
Unit: csi.VolumeUsage_INODES,
},
},
}, nil
}
// requirePositive returns the value for `x` when it is greater than or equal
// to 0, or returns 0 in the case `x` is negative.
//
// This is used for VolumeUsage entries in the NodeGetVolumeStatsResponse. The
// CSI spec does not allow negative values in the VolumeUsage objects.
func requirePositive(x int64) int64 {
if x >= 0 {
return x
}
return 0
}


@@ -17,9 +17,15 @@ limitations under the License.
package csicommon
import (
+"context"
+"os"
+"path/filepath"
+"strings"
"testing"
"github.com/container-storage-interface/spec/lib/go/csi"
+"github.com/stretchr/testify/assert"
+"github.com/stretchr/testify/require"
)
var fakeID = "fake-id"
@@ -70,3 +76,43 @@ func TestGetReqID(t *testing.T) {
t.Errorf("getReqID() = %v, want empty string", got)
}
}
func TestFilesystemNodeGetVolumeStats(t *testing.T) {
t.Parallel()
// ideally this is tested on different filesystems, including CephFS,
// but we'll settle for the filesystem where the project is checked out
cwd, err := os.Getwd()
require.NoError(t, err)
// retry until a mountpoint is found
for {
stats, err := FilesystemNodeGetVolumeStats(context.TODO(), cwd)
if err != nil && cwd != "/" && strings.HasSuffix(err.Error(), "is not mounted") {
// try again with the parent directory
cwd = filepath.Dir(cwd)
continue
}
require.NoError(t, err)
assert.NotEqual(t, len(stats.Usage), 0)
for _, usage := range stats.Usage {
assert.NotEqual(t, usage.Available, -1)
assert.NotEqual(t, usage.Total, -1)
assert.NotEqual(t, usage.Used, -1)
assert.NotEqual(t, usage.Unit, 0)
}
// tests done, no need to retry again
break
}
}
func TestRequirePositive(t *testing.T) {
t.Parallel()
assert.Equal(t, requirePositive(0), int64(0))
assert.Equal(t, requirePositive(-1), int64(0))
assert.Equal(t, requirePositive(1), int64(1))
}


@@ -776,8 +776,13 @@ func (cs *ControllerServer) checkErrAndUndoReserve(
return &csi.DeleteVolumeResponse{}, nil
}
+if errors.Is(err, ErrImageNotFound) {
+err = rbdVol.ensureImageCleanup(ctx)
+if err != nil {
+return nil, status.Error(codes.Internal, err.Error())
+}
+} else {
// All errors other than ErrImageNotFound should return an error back to the caller
-if !errors.Is(err, ErrImageNotFound) {
return nil, status.Error(codes.Internal, err.Error())
}
@@ -924,8 +929,13 @@ func cleanupRBDImage(ctx context.Context,
tempClone := rbdVol.generateTempClone()
err = deleteImage(ctx, tempClone, cr)
if err != nil {
+if errors.Is(err, ErrImageNotFound) {
+err = tempClone.ensureImageCleanup(ctx)
+if err != nil {
+return nil, status.Error(codes.Internal, err.Error())
+}
+} else {
// return error if it is not ErrImageNotFound
-if !errors.Is(err, ErrImageNotFound) {
log.ErrorLog(ctx, "failed to delete rbd image: %s with error: %v",
tempClone, err)
@@ -1373,7 +1383,12 @@ func (cs *ControllerServer) DeleteSnapshot(
err = rbdVol.getImageInfo()
if err != nil {
-if !errors.Is(err, ErrImageNotFound) {
+if errors.Is(err, ErrImageNotFound) {
+err = rbdVol.ensureImageCleanup(ctx)
+if err != nil {
+return nil, status.Error(codes.Internal, err.Error())
+}
+} else {
log.ErrorLog(ctx, "failed to delete rbd image: %s/%s with error: %v", rbdVol.Pool, rbdVol.VolName, err)
return nil, status.Error(codes.Internal, err.Error())

@@ -565,6 +565,26 @@ func (rv *rbdVolume) getTrashPath() string {
return trashPath + "/" + rv.ImageID
}
// ensureImageCleanup finds the image in trash and, if found, removes it
// from trash.
func (rv *rbdVolume) ensureImageCleanup(ctx context.Context) error {
trashInfoList, err := librbd.GetTrashList(rv.ioctx)
if err != nil {
log.ErrorLog(ctx, "failed to list images in trash: %v", err)
return err
}
for _, val := range trashInfoList {
if val.Name == rv.RbdImageName {
rv.ImageID = val.Id
return trashRemoveImage(ctx, rv, rv.conn.Creds)
}
}
return nil
}
// deleteImage deletes a ceph image with provision and volume options.
func deleteImage(ctx context.Context, pOpts *rbdVolume, cr *util.Credentials) error {
image := pOpts.RbdImageName
@@ -597,6 +617,12 @@ func deleteImage(ctx context.Context, pOpts *rbdVolume, cr *util.Credentials) er
return err
}
return trashRemoveImage(ctx, pOpts, cr)
}
// trashRemoveImage adds a task to trash remove an image using ceph manager if supported,
// otherwise removes the image from trash.
func trashRemoveImage(ctx context.Context, pOpts *rbdVolume, cr *util.Credentials) error {
// attempt to use Ceph manager based deletion support if available
args := []string{


@@ -44,18 +44,6 @@ const (
imageMirrorModeSnapshot imageMirroringMode = "snapshot"
)
-// imageMirroringState represents the image mirroring state.
-type imageMirroringState string
-const (
-// If the state is unknown means the rbd-mirror daemon is
-// running and the image is demoted on both the clusters.
-unknown imageMirroringState = "unknown"
-// If the state is error means image need resync.
-errorState imageMirroringState = "error"
-)
const (
// mirroringMode + key to get the imageMirroringMode from parameters.
imageMirroringKey = "mirroringMode"
@@ -523,16 +511,16 @@ func (rs *ReplicationServer) DemoteVolume(ctx context.Context,
func checkRemoteSiteStatus(ctx context.Context, mirrorStatus *librbd.GlobalMirrorImageStatus) bool {
ready := true
for _, s := range mirrorStatus.SiteStatuses {
-if s.MirrorUUID != "" {
-if imageMirroringState(s.State.String()) != unknown && !s.Up {
log.UsefulLog(
ctx,
-"peer site mirrorUUID=%s, mirroring state=%s, description=%s and lastUpdate=%s",
+"peer site mirrorUUID=%q, daemon up=%t, mirroring state=%q, description=%q and lastUpdate=%d",
s.MirrorUUID,
-s.State.String(),
+s.Up,
+s.State,
s.Description,
s.LastUpdate)
+if s.MirrorUUID != "" {
+if s.State != librbd.MirrorImageStatusStateUnknown && !s.Up {
ready = false
}
}
@@ -544,7 +532,6 @@ func checkRemoteSiteStatus(ctx context.Context, mirrorStatus *librbd.GlobalMirro
// ResyncVolume extracts the RBD volume information from the volumeID, If the
// image is present, mirroring is enabled and the image is in demoted state.
// If yes it will resync the image to correct the split-brain.
-// FIXME: reduce complexity.
func (rs *ReplicationServer) ResyncVolume(ctx context.Context,
req *replication.ResyncVolumeRequest,
) (*replication.ResyncVolumeResponse, error) {
@@ -619,7 +606,17 @@ func (rs *ReplicationServer) ResyncVolume(ctx context.Context,
return nil, fmt.Errorf("failed to get local status: %w", err)
}
-state := imageMirroringState(localStatus.State.String())
+// convert the last update time to UTC
+lastUpdateTime := time.Unix(localStatus.LastUpdate, 0).UTC()
+log.UsefulLog(
+ctx,
+"local status: daemon up=%t, image mirroring state=%q, description=%q and lastUpdate=%s",
+localStatus.Up,
+localStatus.State,
+localStatus.Description,
+lastUpdateTime)
// To recover from split brain (up+error) state the image need to be
// demoted and requested for resync on site-a and then the image on site-b
// should be demoted. The volume should be marked to ready=true when the
@@ -630,7 +627,7 @@ func (rs *ReplicationServer) ResyncVolume(ctx context.Context,
// If the image state on both the sites are up+unknown consider that
// complete data is synced as the last snapshot
// gets exchanged between the clusters.
-if state == unknown && localStatus.Up {
+if localStatus.State == librbd.MirrorImageStatusStateUnknown && localStatus.Up {
ready = checkRemoteSiteStatus(ctx, mirrorStatus)
}
@@ -648,14 +645,10 @@ func (rs *ReplicationServer) ResyncVolume(ctx context.Context,
return nil, status.Error(codes.Unavailable, "awaiting initial resync due to split brain")
}
-// convert the last update time to UTC
-lastUpdateTime := time.Unix(localStatus.LastUpdate, 0).UTC()
-log.UsefulLog(
-ctx,
-"image mirroring state=%s, description=%s and lastUpdate=%s",
-localStatus.State.String(),
-localStatus.Description,
-lastUpdateTime)
+err = checkVolumeResyncStatus(localStatus)
+if err != nil {
+return nil, status.Error(codes.Internal, err.Error())
+}
resp := &replication.ResyncVolumeResponse{
Ready: ready,
@@ -664,6 +657,25 @@ func (rs *ReplicationServer) ResyncVolume(ctx context.Context,
return resp, nil
}
func checkVolumeResyncStatus(localStatus librbd.SiteMirrorImageStatus) error {
// we are considering 2 states to check resync started and resync completed
// as below. all other states will be considered as an error state so that
// cephCSI can return an error message and the volume replication operator
// can mark the VolumeReplication status as not resyncing for the volume.
// If the state is Replaying, the resync is still in progress. Once the
// volume on the remote cluster is demoted and the resync is completed,
// the image state moves to UNKNOWN.
if localStatus.State != librbd.MirrorImageStatusStateReplaying &&
localStatus.State != librbd.MirrorImageStatusStateUnknown {
return fmt.Errorf(
"not resyncing. image is in %q state",
localStatus.State)
}
return nil
}
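// Informal summary of the check above (added as a reading aid, not part of
// the change): Replaying or Unknown -> nil (resync still running, or finished
// after the remote image was demoted); Error, Syncing, StartingReplay,
// StoppingReplay, Stopped, or any unrecognized state -> the "not resyncing"
// error above.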
// resyncRequired returns true if local image is in split-brain state and image
// needs resync.
func resyncRequired(localStatus librbd.SiteMirrorImageStatus) bool {
@ -673,7 +685,7 @@ func resyncRequired(localStatus librbd.SiteMirrorImageStatus) bool {
// be in an error state. It would also be worth considering the `description`
// field to make sure about split-brain.
splitBrain := "split-brain"
- if strings.Contains(localStatus.State.String(), string(errorState)) ||
+ if localStatus.State == librbd.MirrorImageStatusStateError ||
strings.Contains(localStatus.Description, splitBrain) {
return true
}
@ -20,6 +20,7 @@ import (
"reflect"
"testing"
+ librbd "github.com/ceph/go-ceph/rbd"
"github.com/ceph/go-ceph/rbd/admin"
)
@ -175,3 +176,78 @@ func TestGetSchedulingDetails(t *testing.T) {
})
}
}
func TestCheckVolumeResyncStatus(t *testing.T) {
t.Parallel()
tests := []struct {
name string
args librbd.SiteMirrorImageStatus
wantErr bool
}{
{
name: "test for unknown state",
args: librbd.SiteMirrorImageStatus{
State: librbd.MirrorImageStatusStateUnknown,
},
wantErr: false,
},
{
name: "test for error state",
args: librbd.SiteMirrorImageStatus{
State: librbd.MirrorImageStatusStateError,
},
wantErr: true,
},
{
name: "test for syncing state",
args: librbd.SiteMirrorImageStatus{
State: librbd.MirrorImageStatusStateSyncing,
},
wantErr: true,
},
{
name: "test for starting_replay state",
args: librbd.SiteMirrorImageStatus{
State: librbd.MirrorImageStatusStateStartingReplay,
},
wantErr: true,
},
{
name: "test for replaying state",
args: librbd.SiteMirrorImageStatus{
State: librbd.MirrorImageStatusStateReplaying,
},
wantErr: false,
},
{
name: "test for stopping_replay state",
args: librbd.SiteMirrorImageStatus{
State: librbd.MirrorImageStatusStateStoppingReplay,
},
wantErr: true,
},
{
name: "test for stopped state",
args: librbd.SiteMirrorImageStatus{
State: librbd.MirrorImageStatusStateStopped,
},
wantErr: true,
},
{
name: "test for invalid state",
args: librbd.SiteMirrorImageStatus{
State: librbd.MirrorImageStatusState(100),
},
wantErr: true,
},
}
for _, tt := range tests {
ts := tt
t.Run(ts.name, func(t *testing.T) {
t.Parallel()
if err := checkVolumeResyncStatus(ts.args); (err != nil) != ts.wantErr {
t.Errorf("checkVolumeResyncStatus() error = %v, expect error = %v", err, ts.wantErr)
}
})
}
}
vendor/github.com/armon/go-metrics/.gitignore generated vendored Normal file
@ -0,0 +1,24 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
/metrics.out
vendor/github.com/armon/go-metrics/.travis.yml generated vendored Normal file
@ -0,0 +1,13 @@
language: go
go:
- "1.x"
env:
- GO111MODULE=on
install:
- go get ./...
script:
- go test ./...
vendor/github.com/armon/go-metrics/LICENSE generated vendored Normal file
@ -0,0 +1,20 @@
The MIT License (MIT)
Copyright (c) 2013 Armon Dadgar
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
vendor/github.com/armon/go-metrics/README.md generated vendored Normal file
@ -0,0 +1,91 @@
go-metrics
==========
This library provides a `metrics` package which can be used to instrument code,
expose application metrics, and profile runtime performance in a flexible manner.
Current API: [![GoDoc](https://godoc.org/github.com/armon/go-metrics?status.svg)](https://godoc.org/github.com/armon/go-metrics)
Sinks
-----
The `metrics` package makes use of a `MetricSink` interface to support delivery
to any type of backend. Currently the following sinks are provided:
* StatsiteSink : Sinks to a [statsite](https://github.com/armon/statsite/) instance (TCP)
* StatsdSink: Sinks to a [StatsD](https://github.com/etsy/statsd/) / statsite instance (UDP)
* PrometheusSink: Sinks to a [Prometheus](http://prometheus.io/) metrics endpoint (exposed via HTTP for scrapes)
* InmemSink : Provides in-memory aggregation, can be used to export stats
* FanoutSink : Sinks to multiple sinks. Enables writing to multiple statsite instances for example.
* BlackholeSink : Sinks to nowhere
In addition to the sinks, the `InmemSignal` can be used to catch a signal,
and dump a formatted output of recent metrics. For example, when a process gets
a SIGUSR1, it can dump to stderr recent performance metrics for debugging.
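As a quick sketch of combining the sinks above (the statsd address is a
placeholder), a `FanoutSink` can mirror every metric to several backends at
once:

```go
// Fan out each metric to both a statsd (UDP) sink and an in-memory sink.
statsd, _ := metrics.NewStatsdSink("127.0.0.1:8125")
inm := metrics.NewInmemSink(10*time.Second, time.Minute)
metrics.NewGlobal(metrics.DefaultConfig("service-name"), metrics.FanoutSink{statsd, inm})
```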
Labels
------
Most metrics have an equivalent variant ending in `WithLabels`; such methods
allow pushing metrics with labels and using some features of the underlying
sinks (e.g. translation into Prometheus labels).
Since some of these labels may greatly increase the cardinality of metrics,
the library allows filtering labels using a blacklist/whitelist system that
is global to all metrics:
* If `Config.AllowedLabels` is not nil, then only labels specified in this value will be sent to underlying Sink, otherwise, all labels are sent by default.
* If `Config.BlockedLabels` is not nil, any label specified in this value will not be sent to underlying Sinks.
By default, both `Config.AllowedLabels` and `Config.BlockedLabels` are nil,
meaning that no tags are filtered at all, but a user can globally block some
high-cardinality tags at the application level.
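For example (a minimal sketch; the metric and label names are made up), one
high-cardinality label can be blocked globally while everything else passes
through:

```go
inm := metrics.NewInmemSink(10*time.Second, time.Minute)
metrics.NewGlobal(metrics.DefaultConfig("service-name"), inm)

// Allow all metrics and labels except the high-cardinality "request_id".
metrics.UpdateFilterAndLabels(nil, nil, nil, []string{"request_id"})

metrics.SetGaugeWithLabels([]string{"queue", "depth"}, 42,
	[]metrics.Label{
		{Name: "shard", Value: "a"},       // kept
		{Name: "request_id", Value: "17"}, // dropped by the label filter
	})
```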
Examples
--------
Here is an example of using the package:
```go
func SlowMethod() {
// Profiling the runtime of a method
defer metrics.MeasureSince([]string{"SlowMethod"}, time.Now())
}
// Configure a statsite sink as the global metrics sink
sink, _ := metrics.NewStatsiteSink("statsite:8125")
metrics.NewGlobal(metrics.DefaultConfig("service-name"), sink)
// Emit a Key/Value pair
metrics.EmitKey([]string{"questions", "meaning of life"}, 42)
```
Here is an example of setting up a signal handler:
```go
// Setup the inmem sink and signal handler
inm := metrics.NewInmemSink(10*time.Second, time.Minute)
sig := metrics.DefaultInmemSignal(inm)
metrics.NewGlobal(metrics.DefaultConfig("service-name"), inm)
// Run some code
inm.SetGauge([]string{"foo"}, 42)
inm.EmitKey([]string{"bar"}, 30)
inm.IncrCounter([]string{"baz"}, 42)
inm.IncrCounter([]string{"baz"}, 1)
inm.IncrCounter([]string{"baz"}, 80)
inm.AddSample([]string{"method", "wow"}, 42)
inm.AddSample([]string{"method", "wow"}, 100)
inm.AddSample([]string{"method", "wow"}, 22)
....
```
When a signal comes in, output like the following will be dumped to stderr:
[2014-01-28 14:57:33.04 -0800 PST][G] 'foo': 42.000
[2014-01-28 14:57:33.04 -0800 PST][P] 'bar': 30.000
[2014-01-28 14:57:33.04 -0800 PST][C] 'baz': Count: 3 Min: 1.000 Mean: 41.000 Max: 80.000 Stddev: 39.509
[2014-01-28 14:57:33.04 -0800 PST][S] 'method.wow': Count: 3 Min: 22.000 Mean: 54.667 Max: 100.000 Stddev: 40.513
vendor/github.com/armon/go-metrics/const_unix.go generated vendored Normal file
@ -0,0 +1,12 @@
// +build !windows
package metrics
import (
"syscall"
)
const (
// DefaultSignal is used with DefaultInmemSignal
DefaultSignal = syscall.SIGUSR1
)
vendor/github.com/armon/go-metrics/const_windows.go generated vendored Normal file
@ -0,0 +1,13 @@
// +build windows
package metrics
import (
"syscall"
)
const (
// DefaultSignal is used with DefaultInmemSignal
// Windows has no SIGUSR1, use SIGBREAK
DefaultSignal = syscall.Signal(21)
)
vendor/github.com/armon/go-metrics/go.mod generated vendored Normal file
@ -0,0 +1,17 @@
module github.com/armon/go-metrics
go 1.12
require (
github.com/DataDog/datadog-go v3.2.0+incompatible
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible
github.com/circonus-labs/circonusllhist v0.1.3 // indirect
github.com/golang/protobuf v1.3.2
github.com/hashicorp/go-immutable-radix v1.0.0
github.com/hashicorp/go-retryablehttp v0.5.3 // indirect
github.com/pascaldekloe/goe v0.1.0
github.com/prometheus/client_golang v1.4.0
github.com/prometheus/client_model v0.2.0
github.com/prometheus/common v0.9.1
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926 // indirect
)
vendor/github.com/armon/go-metrics/go.sum generated vendored Normal file
@ -0,0 +1,125 @@
github.com/DataDog/datadog-go v3.2.0+incompatible h1:qSG2N4FghB1He/r2mFrWKCaL7dXCilEuNEeAn20fdD4=
github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973 h1:xJ4a3vCFaGF/jqvzLMYoU8P317H5OQ+Via4RmuPwCS0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible h1:C29Ae4G5GtYyYMm1aztcyj/J5ckgJm2zwdDajFbx1NY=
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=
github.com/circonus-labs/circonusllhist v0.1.3 h1:TJH+oke8D16535+jHExHj4nQvzlZrj7ug5D7I/orNUA=
github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/hashicorp/go-cleanhttp v0.5.0 h1:wvCrVc9TjDls6+YGAF2hAifE1E5U1+b4tH6KdvN3Gig=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-immutable-radix v1.0.0 h1:AKDB1HM5PWEA7i4nhcpwOrO2byshxBjXVn/J/3+z5/0=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-retryablehttp v0.5.3 h1:QlWt0KvWT0lq8MFppF9tsJGF+ynG7ztc2KIPhzRGk7s=
github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-uuid v1.0.0 h1:RS8zrF7PhGwyNPOtxSClXXj9HA8feRnJzgnI1RJCSnM=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/golang-lru v0.5.0 h1:CL2msUPvZTLb5O648aiLNJw3hnBxN2+1Jq8rCOH9wdo=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/pascaldekloe/goe v0.1.0 h1:cBOtyMzM9HTpWjXfbbunk26uA6nG3a8n06Wieeh0MwY=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.4.0 h1:YVIb/fVcOTMSqtqZWSKnHpSLBxu8DKgxq8z6RuBZwqI=
github.com/prometheus/client_golang v1.4.0/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910 h1:idejC8f05m9MGOsuEi1ATq9shN03HrxNkD/luQvxCv8=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.9.1 h1:KOMtN28tlbam3/7ZKEYKHhKoJZYYj3gMH4uc62x7X7U=
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.8 h1:+fpWZdT24pJBiqJdAwYBjPSk+5YmQzYNPYzQsdzLkt8=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/stretchr/objx v0.1.0 h1:4G4v2dO3VZwixGIRoQ5Lfboy6nUhCyYzaqnIAPPhYs4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1 h1:2vfRuCMp5sSVIDSqO8oNnWJq7mPa6KVP3iPIwFBuy8A=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926 h1:G3dpKMzFDjgEh2q1Z7zUUtKa8ViPtH+ocF0bE0g00O8=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f h1:Bl/8QSvNqXvPGPGXa2z5xUTmV7VDcZyvRZ+QQXkXTZQ=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82 h1:ywK/j/KkyTHcdyYSZNXGjMwgmDSfjglYZ3vStQ/gSCU=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5 h1:ymVxjfMaHvXD8RqPRmzHHsB3VvucivSkIAvJFDI5O3c=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
vendor/github.com/armon/go-metrics/inmem.go generated vendored Normal file
@ -0,0 +1,335 @@
package metrics
import (
"bytes"
"fmt"
"math"
"net/url"
"strings"
"sync"
"time"
)
var spaceReplacer = strings.NewReplacer(" ", "_")
// InmemSink provides a MetricSink that does in-memory aggregation
// without sending metrics over a network. It can be embedded within
// an application to provide profiling information.
type InmemSink struct {
// How long is each aggregation interval
interval time.Duration
// Retain controls how many metrics interval we keep
retain time.Duration
// maxIntervals is the maximum length of intervals.
// It is retain / interval.
maxIntervals int
// intervals is a slice of the retained intervals
intervals []*IntervalMetrics
intervalLock sync.RWMutex
rateDenom float64
}
// IntervalMetrics stores the aggregated metrics
// for a specific interval
type IntervalMetrics struct {
sync.RWMutex
// The start time of the interval
Interval time.Time
// Gauges maps the key to the last set value
Gauges map[string]GaugeValue
// Points maps the string to the list of emitted values
// from EmitKey
Points map[string][]float32
// Counters maps the string key to a sum of the counter
// values
Counters map[string]SampledValue
// Samples maps the key to an AggregateSample,
// which has the rolled up view of a sample
Samples map[string]SampledValue
}
// NewIntervalMetrics creates a new IntervalMetrics for a given interval
func NewIntervalMetrics(intv time.Time) *IntervalMetrics {
return &IntervalMetrics{
Interval: intv,
Gauges: make(map[string]GaugeValue),
Points: make(map[string][]float32),
Counters: make(map[string]SampledValue),
Samples: make(map[string]SampledValue),
}
}
// AggregateSample is used to hold aggregate metrics
// about a sample
type AggregateSample struct {
Count int // The count of emitted pairs
Rate float64 // The values rate per time unit (usually 1 second)
Sum float64 // The sum of values
SumSq float64 `json:"-"` // The sum of squared values
Min float64 // Minimum value
Max float64 // Maximum value
LastUpdated time.Time `json:"-"` // When value was last updated
}
// Computes a Stddev of the values
func (a *AggregateSample) Stddev() float64 {
num := (float64(a.Count) * a.SumSq) - math.Pow(a.Sum, 2)
div := float64(a.Count * (a.Count - 1))
if div == 0 {
return 0
}
return math.Sqrt(num / div)
}
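// (note added for clarity) For n = Count samples this is the sample standard
// deviation computed from running sums: sqrt((n*SumSq - Sum^2) / (n*(n-1))).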
// Computes a mean of the values
func (a *AggregateSample) Mean() float64 {
if a.Count == 0 {
return 0
}
return a.Sum / float64(a.Count)
}
// Ingest is used to update a sample
func (a *AggregateSample) Ingest(v float64, rateDenom float64) {
a.Count++
a.Sum += v
a.SumSq += (v * v)
if v < a.Min || a.Count == 1 {
a.Min = v
}
if v > a.Max || a.Count == 1 {
a.Max = v
}
a.Rate = float64(a.Sum) / rateDenom
a.LastUpdated = time.Now()
}
func (a *AggregateSample) String() string {
if a.Count == 0 {
return "Count: 0"
} else if a.Stddev() == 0 {
return fmt.Sprintf("Count: %d Sum: %0.3f LastUpdated: %s", a.Count, a.Sum, a.LastUpdated)
} else {
return fmt.Sprintf("Count: %d Min: %0.3f Mean: %0.3f Max: %0.3f Stddev: %0.3f Sum: %0.3f LastUpdated: %s",
a.Count, a.Min, a.Mean(), a.Max, a.Stddev(), a.Sum, a.LastUpdated)
}
}
// NewInmemSinkFromURL creates an InmemSink from a URL. It is used
// (and tested) from NewMetricSinkFromURL.
func NewInmemSinkFromURL(u *url.URL) (MetricSink, error) {
params := u.Query()
interval, err := time.ParseDuration(params.Get("interval"))
if err != nil {
return nil, fmt.Errorf("Bad 'interval' param: %s", err)
}
retain, err := time.ParseDuration(params.Get("retain"))
if err != nil {
return nil, fmt.Errorf("Bad 'retain' param: %s", err)
}
return NewInmemSink(interval, retain), nil
}
// NewInmemSink is used to construct a new in-memory sink.
// Uses an aggregation interval and maximum retention period.
func NewInmemSink(interval, retain time.Duration) *InmemSink {
rateTimeUnit := time.Second
i := &InmemSink{
interval: interval,
retain: retain,
maxIntervals: int(retain / interval),
rateDenom: float64(interval.Nanoseconds()) / float64(rateTimeUnit.Nanoseconds()),
}
i.intervals = make([]*IntervalMetrics, 0, i.maxIntervals)
return i
}
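// Example (added for clarity): NewInmemSink(10*time.Second, time.Minute)
// yields maxIntervals = 6 ten-second buckets and rateDenom = 10, so
// AggregateSample.Rate reports a per-second rate within each bucket.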
func (i *InmemSink) SetGauge(key []string, val float32) {
i.SetGaugeWithLabels(key, val, nil)
}
func (i *InmemSink) SetGaugeWithLabels(key []string, val float32, labels []Label) {
k, name := i.flattenKeyLabels(key, labels)
intv := i.getInterval()
intv.Lock()
defer intv.Unlock()
intv.Gauges[k] = GaugeValue{Name: name, Value: val, Labels: labels}
}
func (i *InmemSink) EmitKey(key []string, val float32) {
k := i.flattenKey(key)
intv := i.getInterval()
intv.Lock()
defer intv.Unlock()
vals := intv.Points[k]
intv.Points[k] = append(vals, val)
}
func (i *InmemSink) IncrCounter(key []string, val float32) {
i.IncrCounterWithLabels(key, val, nil)
}
func (i *InmemSink) IncrCounterWithLabels(key []string, val float32, labels []Label) {
k, name := i.flattenKeyLabels(key, labels)
intv := i.getInterval()
intv.Lock()
defer intv.Unlock()
agg, ok := intv.Counters[k]
if !ok {
agg = SampledValue{
Name: name,
AggregateSample: &AggregateSample{},
Labels: labels,
}
intv.Counters[k] = agg
}
agg.Ingest(float64(val), i.rateDenom)
}
func (i *InmemSink) AddSample(key []string, val float32) {
i.AddSampleWithLabels(key, val, nil)
}
func (i *InmemSink) AddSampleWithLabels(key []string, val float32, labels []Label) {
k, name := i.flattenKeyLabels(key, labels)
intv := i.getInterval()
intv.Lock()
defer intv.Unlock()
agg, ok := intv.Samples[k]
if !ok {
agg = SampledValue{
Name: name,
AggregateSample: &AggregateSample{},
Labels: labels,
}
intv.Samples[k] = agg
}
agg.Ingest(float64(val), i.rateDenom)
}
// Data is used to retrieve all the aggregated metrics
// Intervals may be in use, and a read lock should be acquired
func (i *InmemSink) Data() []*IntervalMetrics {
// Get the current interval, forces creation
i.getInterval()
i.intervalLock.RLock()
defer i.intervalLock.RUnlock()
n := len(i.intervals)
intervals := make([]*IntervalMetrics, n)
copy(intervals[:n-1], i.intervals[:n-1])
current := i.intervals[n-1]
// make its own copy for current interval
intervals[n-1] = &IntervalMetrics{}
copyCurrent := intervals[n-1]
current.RLock()
*copyCurrent = *current
copyCurrent.Gauges = make(map[string]GaugeValue, len(current.Gauges))
for k, v := range current.Gauges {
copyCurrent.Gauges[k] = v
}
// saved values will not change, so just copy the reference
copyCurrent.Points = make(map[string][]float32, len(current.Points))
for k, v := range current.Points {
copyCurrent.Points[k] = v
}
copyCurrent.Counters = make(map[string]SampledValue, len(current.Counters))
for k, v := range current.Counters {
copyCurrent.Counters[k] = v.deepCopy()
}
copyCurrent.Samples = make(map[string]SampledValue, len(current.Samples))
for k, v := range current.Samples {
copyCurrent.Samples[k] = v.deepCopy()
}
current.RUnlock()
return intervals
}
func (i *InmemSink) getExistingInterval(intv time.Time) *IntervalMetrics {
i.intervalLock.RLock()
defer i.intervalLock.RUnlock()
n := len(i.intervals)
if n > 0 && i.intervals[n-1].Interval == intv {
return i.intervals[n-1]
}
return nil
}
func (i *InmemSink) createInterval(intv time.Time) *IntervalMetrics {
i.intervalLock.Lock()
defer i.intervalLock.Unlock()
// Check for an existing interval
n := len(i.intervals)
if n > 0 && i.intervals[n-1].Interval == intv {
return i.intervals[n-1]
}
// Add the current interval
current := NewIntervalMetrics(intv)
i.intervals = append(i.intervals, current)
n++
// Truncate the intervals if they are too long
if n >= i.maxIntervals {
copy(i.intervals[0:], i.intervals[n-i.maxIntervals:])
i.intervals = i.intervals[:i.maxIntervals]
}
return current
}
// getInterval returns the current interval to write to
func (i *InmemSink) getInterval() *IntervalMetrics {
intv := time.Now().Truncate(i.interval)
if m := i.getExistingInterval(intv); m != nil {
return m
}
return i.createInterval(intv)
}
// Flattens the key for formatting, removes spaces
func (i *InmemSink) flattenKey(parts []string) string {
buf := &bytes.Buffer{}
joined := strings.Join(parts, ".")
spaceReplacer.WriteString(buf, joined)
return buf.String()
}
// Flattens the key for formatting along with its labels, removes spaces
func (i *InmemSink) flattenKeyLabels(parts []string, labels []Label) (string, string) {
key := i.flattenKey(parts)
buf := bytes.NewBufferString(key)
for _, label := range labels {
spaceReplacer.WriteString(buf, fmt.Sprintf(";%s=%s", label.Name, label.Value))
}
return buf.String(), key
}
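// Example (added for clarity): flattenKeyLabels([]string{"api", "latency"},
// []Label{{Name: "dc", Value: "east"}}) returns ("api.latency;dc=east",
// "api.latency") -- the labeled key used for aggregation plus the bare name
// kept for display.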
vendor/github.com/armon/go-metrics/inmem_endpoint.go generated vendored Normal file
@ -0,0 +1,131 @@
package metrics
import (
"fmt"
"net/http"
"sort"
"time"
)
// MetricsSummary holds a roll-up of metrics info for a given interval
type MetricsSummary struct {
Timestamp string
Gauges []GaugeValue
Points []PointValue
Counters []SampledValue
Samples []SampledValue
}
type GaugeValue struct {
Name string
Hash string `json:"-"`
Value float32
Labels []Label `json:"-"`
DisplayLabels map[string]string `json:"Labels"`
}
type PointValue struct {
Name string
Points []float32
}
type SampledValue struct {
Name string
Hash string `json:"-"`
*AggregateSample
Mean float64
Stddev float64
Labels []Label `json:"-"`
DisplayLabels map[string]string `json:"Labels"`
}
// deepCopy allocates a new instance of AggregateSample
func (source *SampledValue) deepCopy() SampledValue {
dest := *source
if source.AggregateSample != nil {
dest.AggregateSample = &AggregateSample{}
*dest.AggregateSample = *source.AggregateSample
}
return dest
}
// DisplayMetrics returns a summary of the metrics from the most recent finished interval.
func (i *InmemSink) DisplayMetrics(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
data := i.Data()
var interval *IntervalMetrics
n := len(data)
switch {
case n == 0:
return nil, fmt.Errorf("no metric intervals have been initialized yet")
case n == 1:
// Show the current interval if it's all we have
interval = data[0]
default:
// Show the most recent finished interval if we have one
interval = data[n-2]
}
interval.RLock()
defer interval.RUnlock()
summary := MetricsSummary{
Timestamp: interval.Interval.Round(time.Second).UTC().String(),
Gauges: make([]GaugeValue, 0, len(interval.Gauges)),
Points: make([]PointValue, 0, len(interval.Points)),
}
// Format and sort the output of each metric type, so it gets displayed in a
// deterministic order.
for name, points := range interval.Points {
summary.Points = append(summary.Points, PointValue{name, points})
}
sort.Slice(summary.Points, func(i, j int) bool {
return summary.Points[i].Name < summary.Points[j].Name
})
for hash, value := range interval.Gauges {
value.Hash = hash
value.DisplayLabels = make(map[string]string)
for _, label := range value.Labels {
value.DisplayLabels[label.Name] = label.Value
}
value.Labels = nil
summary.Gauges = append(summary.Gauges, value)
}
sort.Slice(summary.Gauges, func(i, j int) bool {
return summary.Gauges[i].Hash < summary.Gauges[j].Hash
})
summary.Counters = formatSamples(interval.Counters)
summary.Samples = formatSamples(interval.Samples)
return summary, nil
}
func formatSamples(source map[string]SampledValue) []SampledValue {
output := make([]SampledValue, 0, len(source))
for hash, sample := range source {
displayLabels := make(map[string]string)
for _, label := range sample.Labels {
displayLabels[label.Name] = label.Value
}
output = append(output, SampledValue{
Name: sample.Name,
Hash: hash,
AggregateSample: sample.AggregateSample,
Mean: sample.AggregateSample.Mean(),
Stddev: sample.AggregateSample.Stddev(),
DisplayLabels: displayLabels,
})
}
sort.Slice(output, func(i, j int) bool {
return output[i].Hash < output[j].Hash
})
return output
}
vendor/github.com/armon/go-metrics/inmem_signal.go generated vendored Normal file
@ -0,0 +1,117 @@
package metrics
import (
"bytes"
"fmt"
"io"
"os"
"os/signal"
"strings"
"sync"
"syscall"
)
// InmemSignal is used to listen for a given signal, and when received,
// to dump the current metrics from the InmemSink to an io.Writer
type InmemSignal struct {
signal syscall.Signal
inm *InmemSink
w io.Writer
sigCh chan os.Signal
stop bool
stopCh chan struct{}
stopLock sync.Mutex
}
// NewInmemSignal creates a new InmemSignal which listens for a given signal,
// and dumps the current metrics out to a writer
func NewInmemSignal(inmem *InmemSink, sig syscall.Signal, w io.Writer) *InmemSignal {
i := &InmemSignal{
signal: sig,
inm: inmem,
w: w,
sigCh: make(chan os.Signal, 1),
stopCh: make(chan struct{}),
}
signal.Notify(i.sigCh, sig)
go i.run()
return i
}
// DefaultInmemSignal returns a new InmemSignal that responds to SIGUSR1
// and writes output to stderr. Windows uses SIGBREAK
func DefaultInmemSignal(inmem *InmemSink) *InmemSignal {
return NewInmemSignal(inmem, DefaultSignal, os.Stderr)
}
// Stop is used to stop the InmemSignal from listening
func (i *InmemSignal) Stop() {
i.stopLock.Lock()
defer i.stopLock.Unlock()
if i.stop {
return
}
i.stop = true
close(i.stopCh)
signal.Stop(i.sigCh)
}
// run is a long running routine that handles signals
func (i *InmemSignal) run() {
for {
select {
case <-i.sigCh:
i.dumpStats()
case <-i.stopCh:
return
}
}
}
// dumpStats is used to dump the data to output writer
func (i *InmemSignal) dumpStats() {
buf := bytes.NewBuffer(nil)
data := i.inm.Data()
// Skip the last period which is still being aggregated
for j := 0; j < len(data)-1; j++ {
intv := data[j]
intv.RLock()
for _, val := range intv.Gauges {
name := i.flattenLabels(val.Name, val.Labels)
fmt.Fprintf(buf, "[%v][G] '%s': %0.3f\n", intv.Interval, name, val.Value)
}
for name, vals := range intv.Points {
for _, val := range vals {
fmt.Fprintf(buf, "[%v][P] '%s': %0.3f\n", intv.Interval, name, val)
}
}
for _, agg := range intv.Counters {
name := i.flattenLabels(agg.Name, agg.Labels)
fmt.Fprintf(buf, "[%v][C] '%s': %s\n", intv.Interval, name, agg.AggregateSample)
}
for _, agg := range intv.Samples {
name := i.flattenLabels(agg.Name, agg.Labels)
fmt.Fprintf(buf, "[%v][S] '%s': %s\n", intv.Interval, name, agg.AggregateSample)
}
intv.RUnlock()
}
// Write out the bytes
i.w.Write(buf.Bytes())
}
// Flattens the key for formatting along with its labels, removes spaces
func (i *InmemSignal) flattenLabels(name string, labels []Label) string {
buf := bytes.NewBufferString(name)
replacer := strings.NewReplacer(" ", "_", ":", "_")
for _, label := range labels {
replacer.WriteString(buf, ".")
replacer.WriteString(buf, label.Value)
}
return buf.String()
}
vendor/github.com/armon/go-metrics/metrics.go generated vendored Normal file
@ -0,0 +1,293 @@
package metrics
import (
"runtime"
"strings"
"time"
"github.com/hashicorp/go-immutable-radix"
)
type Label struct {
Name string
Value string
}
func (m *Metrics) SetGauge(key []string, val float32) {
m.SetGaugeWithLabels(key, val, nil)
}
func (m *Metrics) SetGaugeWithLabels(key []string, val float32, labels []Label) {
if m.HostName != "" {
if m.EnableHostnameLabel {
labels = append(labels, Label{"host", m.HostName})
} else if m.EnableHostname {
key = insert(0, m.HostName, key)
}
}
if m.EnableTypePrefix {
key = insert(0, "gauge", key)
}
if m.ServiceName != "" {
if m.EnableServiceLabel {
labels = append(labels, Label{"service", m.ServiceName})
} else {
key = insert(0, m.ServiceName, key)
}
}
allowed, labelsFiltered := m.allowMetric(key, labels)
if !allowed {
return
}
m.sink.SetGaugeWithLabels(key, val, labelsFiltered)
}
func (m *Metrics) EmitKey(key []string, val float32) {
if m.EnableTypePrefix {
key = insert(0, "kv", key)
}
if m.ServiceName != "" {
key = insert(0, m.ServiceName, key)
}
allowed, _ := m.allowMetric(key, nil)
if !allowed {
return
}
m.sink.EmitKey(key, val)
}
func (m *Metrics) IncrCounter(key []string, val float32) {
m.IncrCounterWithLabels(key, val, nil)
}
func (m *Metrics) IncrCounterWithLabels(key []string, val float32, labels []Label) {
if m.HostName != "" && m.EnableHostnameLabel {
labels = append(labels, Label{"host", m.HostName})
}
if m.EnableTypePrefix {
key = insert(0, "counter", key)
}
if m.ServiceName != "" {
if m.EnableServiceLabel {
labels = append(labels, Label{"service", m.ServiceName})
} else {
key = insert(0, m.ServiceName, key)
}
}
allowed, labelsFiltered := m.allowMetric(key, labels)
if !allowed {
return
}
m.sink.IncrCounterWithLabels(key, val, labelsFiltered)
}
func (m *Metrics) AddSample(key []string, val float32) {
m.AddSampleWithLabels(key, val, nil)
}
func (m *Metrics) AddSampleWithLabels(key []string, val float32, labels []Label) {
if m.HostName != "" && m.EnableHostnameLabel {
labels = append(labels, Label{"host", m.HostName})
}
if m.EnableTypePrefix {
key = insert(0, "sample", key)
}
if m.ServiceName != "" {
if m.EnableServiceLabel {
labels = append(labels, Label{"service", m.ServiceName})
} else {
key = insert(0, m.ServiceName, key)
}
}
allowed, labelsFiltered := m.allowMetric(key, labels)
if !allowed {
return
}
m.sink.AddSampleWithLabels(key, val, labelsFiltered)
}
func (m *Metrics) MeasureSince(key []string, start time.Time) {
m.MeasureSinceWithLabels(key, start, nil)
}
func (m *Metrics) MeasureSinceWithLabels(key []string, start time.Time, labels []Label) {
if m.HostName != "" && m.EnableHostnameLabel {
labels = append(labels, Label{"host", m.HostName})
}
if m.EnableTypePrefix {
key = insert(0, "timer", key)
}
if m.ServiceName != "" {
if m.EnableServiceLabel {
labels = append(labels, Label{"service", m.ServiceName})
} else {
key = insert(0, m.ServiceName, key)
}
}
allowed, labelsFiltered := m.allowMetric(key, labels)
if !allowed {
return
}
now := time.Now()
elapsed := now.Sub(start)
msec := float32(elapsed.Nanoseconds()) / float32(m.TimerGranularity)
m.sink.AddSampleWithLabels(key, msec, labelsFiltered)
}
// UpdateFilter overwrites the existing filter with the given rules.
func (m *Metrics) UpdateFilter(allow, block []string) {
m.UpdateFilterAndLabels(allow, block, m.AllowedLabels, m.BlockedLabels)
}
// UpdateFilterAndLabels overwrites the existing filter with the given rules.
func (m *Metrics) UpdateFilterAndLabels(allow, block, allowedLabels, blockedLabels []string) {
m.filterLock.Lock()
defer m.filterLock.Unlock()
m.AllowedPrefixes = allow
m.BlockedPrefixes = block
if allowedLabels == nil {
// Having a white list means we take only elements from it
m.allowedLabels = nil
} else {
m.allowedLabels = make(map[string]bool)
for _, v := range allowedLabels {
m.allowedLabels[v] = true
}
}
m.blockedLabels = make(map[string]bool)
for _, v := range blockedLabels {
m.blockedLabels[v] = true
}
m.AllowedLabels = allowedLabels
m.BlockedLabels = blockedLabels
m.filter = iradix.New()
for _, prefix := range m.AllowedPrefixes {
m.filter, _, _ = m.filter.Insert([]byte(prefix), true)
}
for _, prefix := range m.BlockedPrefixes {
m.filter, _, _ = m.filter.Insert([]byte(prefix), false)
}
}
// labelIsAllowed returns true if the label should be included in the metric;
// the caller should lock m.filterLock while calling this method
func (m *Metrics) labelIsAllowed(label *Label) bool {
labelName := (*label).Name
if m.blockedLabels != nil {
_, ok := m.blockedLabels[labelName]
if ok {
// If present, let's remove this label
return false
}
}
if m.allowedLabels != nil {
_, ok := m.allowedLabels[labelName]
return ok
}
// Allow by default
return true
}
// filterLabels returns only the allowed labels;
// the caller should lock m.filterLock while calling this method
func (m *Metrics) filterLabels(labels []Label) []Label {
if labels == nil {
return nil
}
toReturn := []Label{}
for _, label := range labels {
if m.labelIsAllowed(&label) {
toReturn = append(toReturn, label)
}
}
return toReturn
}
// Returns whether the metric should be allowed based on configured prefix filters
// Also return the applicable labels
func (m *Metrics) allowMetric(key []string, labels []Label) (bool, []Label) {
m.filterLock.RLock()
defer m.filterLock.RUnlock()
if m.filter == nil || m.filter.Len() == 0 {
return m.Config.FilterDefault, m.filterLabels(labels)
}
_, allowed, ok := m.filter.Root().LongestPrefix([]byte(strings.Join(key, ".")))
if !ok {
return m.Config.FilterDefault, m.filterLabels(labels)
}
return allowed.(bool), m.filterLabels(labels)
}
// Periodically collects runtime stats to publish
func (m *Metrics) collectStats() {
for {
time.Sleep(m.ProfileInterval)
m.emitRuntimeStats()
}
}
// Emits various runtime statistics
func (m *Metrics) emitRuntimeStats() {
// Export number of Goroutines
numRoutines := runtime.NumGoroutine()
m.SetGauge([]string{"runtime", "num_goroutines"}, float32(numRoutines))
// Export memory stats
var stats runtime.MemStats
runtime.ReadMemStats(&stats)
m.SetGauge([]string{"runtime", "alloc_bytes"}, float32(stats.Alloc))
m.SetGauge([]string{"runtime", "sys_bytes"}, float32(stats.Sys))
m.SetGauge([]string{"runtime", "malloc_count"}, float32(stats.Mallocs))
m.SetGauge([]string{"runtime", "free_count"}, float32(stats.Frees))
m.SetGauge([]string{"runtime", "heap_objects"}, float32(stats.HeapObjects))
m.SetGauge([]string{"runtime", "total_gc_pause_ns"}, float32(stats.PauseTotalNs))
m.SetGauge([]string{"runtime", "total_gc_runs"}, float32(stats.NumGC))
// Export info about the last few GC runs
num := stats.NumGC
// Handle wrap around
if num < m.lastNumGC {
m.lastNumGC = 0
}
// Ensure we don't scan more than 256
if num-m.lastNumGC >= 256 {
m.lastNumGC = num - 255
}
for i := m.lastNumGC; i < num; i++ {
pause := stats.PauseNs[i%256]
m.AddSample([]string{"runtime", "gc_pause_ns"}, float32(pause))
}
m.lastNumGC = num
}
// Creates a new slice with the provided string value as the first element
// and the provided slice values as the remaining values.
// Ordering of the values in the provided input slice is kept intact in the output slice.
func insert(i int, v string, s []string) []string {
// Allocate new slice to avoid modifying the input slice
newS := make([]string, len(s)+1)
// Copy s[0, i-1] into newS
for j := 0; j < i; j++ {
newS[j] = s[j]
}
// Insert provided element at index i
newS[i] = v
// Copy s[i, len(s)-1] into newS starting at newS[i+1]
for j := i; j < len(s); j++ {
newS[j+1] = s[j]
}
return newS
}
vendor/github.com/armon/go-metrics/sink.go generated vendored Normal file
@ -0,0 +1,115 @@
package metrics
import (
"fmt"
"net/url"
)
// The MetricSink interface is used to transmit metrics information
// to an external system
type MetricSink interface {
// A Gauge should retain the last value it is set to
SetGauge(key []string, val float32)
SetGaugeWithLabels(key []string, val float32, labels []Label)
// Should emit a Key/Value pair for each call
EmitKey(key []string, val float32)
// Counters should accumulate values
IncrCounter(key []string, val float32)
IncrCounterWithLabels(key []string, val float32, labels []Label)
// Samples are for timing information, where quantiles are used
AddSample(key []string, val float32)
AddSampleWithLabels(key []string, val float32, labels []Label)
}
// BlackholeSink is used to just blackhole messages
type BlackholeSink struct{}
func (*BlackholeSink) SetGauge(key []string, val float32) {}
func (*BlackholeSink) SetGaugeWithLabels(key []string, val float32, labels []Label) {}
func (*BlackholeSink) EmitKey(key []string, val float32) {}
func (*BlackholeSink) IncrCounter(key []string, val float32) {}
func (*BlackholeSink) IncrCounterWithLabels(key []string, val float32, labels []Label) {}
func (*BlackholeSink) AddSample(key []string, val float32) {}
func (*BlackholeSink) AddSampleWithLabels(key []string, val float32, labels []Label) {}
// FanoutSink is used to fan out values to multiple sinks
type FanoutSink []MetricSink
func (fh FanoutSink) SetGauge(key []string, val float32) {
fh.SetGaugeWithLabels(key, val, nil)
}
func (fh FanoutSink) SetGaugeWithLabels(key []string, val float32, labels []Label) {
for _, s := range fh {
s.SetGaugeWithLabels(key, val, labels)
}
}
func (fh FanoutSink) EmitKey(key []string, val float32) {
for _, s := range fh {
s.EmitKey(key, val)
}
}
func (fh FanoutSink) IncrCounter(key []string, val float32) {
fh.IncrCounterWithLabels(key, val, nil)
}
func (fh FanoutSink) IncrCounterWithLabels(key []string, val float32, labels []Label) {
for _, s := range fh {
s.IncrCounterWithLabels(key, val, labels)
}
}
func (fh FanoutSink) AddSample(key []string, val float32) {
fh.AddSampleWithLabels(key, val, nil)
}
func (fh FanoutSink) AddSampleWithLabels(key []string, val float32, labels []Label) {
for _, s := range fh {
s.AddSampleWithLabels(key, val, labels)
}
}
// sinkURLFactoryFunc is a generic interface around the *SinkFromURL() function provided
// by each sink type
type sinkURLFactoryFunc func(*url.URL) (MetricSink, error)
// sinkRegistry supports the generic NewMetricSink function by mapping URL
// schemes to metric sink factory functions
var sinkRegistry = map[string]sinkURLFactoryFunc{
"statsd": NewStatsdSinkFromURL,
"statsite": NewStatsiteSinkFromURL,
"inmem": NewInmemSinkFromURL,
}
// NewMetricSinkFromURL allows a generic URL input to configure any of the
// supported sinks. The scheme of the URL identifies the type of the sink,
// and query parameters are used to set options.
//
// "statsd://" - Initializes a StatsdSink. The host and port are passed through
// as the "addr" of the sink
//
// "statsite://" - Initializes a StatsiteSink. The host and port become the
// "addr" of the sink
//
// "inmem://" - Initializes an InmemSink. The host and port are ignored. The
// "interval" and "duration" query parameters must be specified with valid
// durations, see NewInmemSink for details.
func NewMetricSinkFromURL(urlStr string) (MetricSink, error) {
u, err := url.Parse(urlStr)
if err != nil {
return nil, err
}
sinkURLFactoryFunc := sinkRegistry[u.Scheme]
if sinkURLFactoryFunc == nil {
return nil, fmt.Errorf(
"cannot create metric sink, unrecognized sink name: %q", u.Scheme)
}
return sinkURLFactoryFunc(u)
}
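// Usage sketch (added for illustration; the URL is an example):
//
//	sink, err := NewMetricSinkFromURL("inmem://?interval=10s&retain=1m")
//	if err != nil {
//		// unrecognized scheme or invalid query parameters
//	}
//	_, _ = New(DefaultConfig("service-name"), sink)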
vendor/github.com/armon/go-metrics/start.go generated vendored Normal file
@ -0,0 +1,141 @@
package metrics
import (
"os"
"sync"
"sync/atomic"
"time"
"github.com/hashicorp/go-immutable-radix"
)
// Config is used to configure metrics settings
type Config struct {
ServiceName string // Prefixed with keys to separate services
HostName string // Hostname to use. If not provided and EnableHostname, it will be os.Hostname
EnableHostname bool // Enable prefixing gauge values with hostname
EnableHostnameLabel bool // Enable adding hostname to labels
EnableServiceLabel bool // Enable adding service to labels
EnableRuntimeMetrics bool // Enables profiling of runtime metrics (GC, Goroutines, Memory)
EnableTypePrefix bool // Prefixes key with a type ("counter", "gauge", "timer")
TimerGranularity time.Duration // Granularity of timers.
ProfileInterval time.Duration // Interval to profile runtime metrics
AllowedPrefixes []string // A list of metric prefixes to allow, with '.' as the separator
BlockedPrefixes []string // A list of metric prefixes to block, with '.' as the separator
AllowedLabels []string // A list of metric labels to allow, with '.' as the separator
BlockedLabels []string // A list of metric labels to block, with '.' as the separator
FilterDefault bool // Whether to allow metrics by default
}
// Metrics represents an instance of a metrics sink that can
// be used to emit metrics.
type Metrics struct {
Config
lastNumGC uint32
sink MetricSink
filter *iradix.Tree
allowedLabels map[string]bool
blockedLabels map[string]bool
filterLock sync.RWMutex // Lock filters and allowedLabels/blockedLabels access
}
// Shared global metrics instance
var globalMetrics atomic.Value // *Metrics
func init() {
// Initialize to a blackhole sink to avoid errors
globalMetrics.Store(&Metrics{sink: &BlackholeSink{}})
}
// DefaultConfig provides a sane default configuration
func DefaultConfig(serviceName string) *Config {
c := &Config{
ServiceName: serviceName, // Use client provided service
HostName: "",
EnableHostname: true, // Enable hostname prefix
EnableRuntimeMetrics: true, // Enable runtime profiling
EnableTypePrefix: false, // Disable type prefix
TimerGranularity: time.Millisecond, // Timers are in milliseconds
ProfileInterval: time.Second, // Poll runtime every second
FilterDefault: true, // Don't filter metrics by default
}
// Try to get the hostname
name, _ := os.Hostname()
c.HostName = name
return c
}
// New is used to create a new instance of Metrics
func New(conf *Config, sink MetricSink) (*Metrics, error) {
met := &Metrics{}
met.Config = *conf
met.sink = sink
met.UpdateFilterAndLabels(conf.AllowedPrefixes, conf.BlockedPrefixes, conf.AllowedLabels, conf.BlockedLabels)
// Start the runtime collector
if conf.EnableRuntimeMetrics {
go met.collectStats()
}
return met, nil
}
// NewGlobal is the same as New, but it assigns the metrics object to be
// used globally as well as returning it.
func NewGlobal(conf *Config, sink MetricSink) (*Metrics, error) {
metrics, err := New(conf, sink)
if err == nil {
globalMetrics.Store(metrics)
}
return metrics, err
}
// Proxy all the methods to the globalMetrics instance
func SetGauge(key []string, val float32) {
globalMetrics.Load().(*Metrics).SetGauge(key, val)
}
func SetGaugeWithLabels(key []string, val float32, labels []Label) {
globalMetrics.Load().(*Metrics).SetGaugeWithLabels(key, val, labels)
}
func EmitKey(key []string, val float32) {
globalMetrics.Load().(*Metrics).EmitKey(key, val)
}
func IncrCounter(key []string, val float32) {
globalMetrics.Load().(*Metrics).IncrCounter(key, val)
}
func IncrCounterWithLabels(key []string, val float32, labels []Label) {
globalMetrics.Load().(*Metrics).IncrCounterWithLabels(key, val, labels)
}
func AddSample(key []string, val float32) {
globalMetrics.Load().(*Metrics).AddSample(key, val)
}
func AddSampleWithLabels(key []string, val float32, labels []Label) {
globalMetrics.Load().(*Metrics).AddSampleWithLabels(key, val, labels)
}
func MeasureSince(key []string, start time.Time) {
globalMetrics.Load().(*Metrics).MeasureSince(key, start)
}
func MeasureSinceWithLabels(key []string, start time.Time, labels []Label) {
globalMetrics.Load().(*Metrics).MeasureSinceWithLabels(key, start, labels)
}
func UpdateFilter(allow, block []string) {
globalMetrics.Load().(*Metrics).UpdateFilter(allow, block)
}
// UpdateFilterAndLabels set allow/block prefixes of metrics while allowedLabels
// and blockedLabels - when not nil - allow filtering of labels in order to
// block/allow globally labels (especially useful when having large number of
// values for a given label). See README.md for more information about usage.
func UpdateFilterAndLabels(allow, block, allowedLabels, blockedLabels []string) {
globalMetrics.Load().(*Metrics).UpdateFilterAndLabels(allow, block, allowedLabels, blockedLabels)
}
vendor/github.com/armon/go-metrics/statsd.go generated vendored Normal file
@ -0,0 +1,184 @@
package metrics
import (
"bytes"
"fmt"
"log"
"net"
"net/url"
"strings"
"time"
)
const (
// statsdMaxLen is the maximum size of a packet
// to send to statsd
statsdMaxLen = 1400
)
// StatsdSink provides a MetricSink that can be used
// with a statsite or statsd metrics server. It uses
// only UDP packets, while StatsiteSink uses TCP.
type StatsdSink struct {
addr string
metricQueue chan string
}
// NewStatsdSinkFromURL creates a StatsdSink from a URL. It is used
// (and tested) from NewMetricSinkFromURL.
func NewStatsdSinkFromURL(u *url.URL) (MetricSink, error) {
return NewStatsdSink(u.Host)
}
// NewStatsdSink is used to create a new StatsdSink
func NewStatsdSink(addr string) (*StatsdSink, error) {
s := &StatsdSink{
addr: addr,
metricQueue: make(chan string, 4096),
}
go s.flushMetrics()
return s, nil
}
// Shutdown is used to stop flushing to statsd
func (s *StatsdSink) Shutdown() {
close(s.metricQueue)
}
func (s *StatsdSink) SetGauge(key []string, val float32) {
flatKey := s.flattenKey(key)
s.pushMetric(fmt.Sprintf("%s:%f|g\n", flatKey, val))
}
func (s *StatsdSink) SetGaugeWithLabels(key []string, val float32, labels []Label) {
flatKey := s.flattenKeyLabels(key, labels)
s.pushMetric(fmt.Sprintf("%s:%f|g\n", flatKey, val))
}
func (s *StatsdSink) EmitKey(key []string, val float32) {
flatKey := s.flattenKey(key)
s.pushMetric(fmt.Sprintf("%s:%f|kv\n", flatKey, val))
}
func (s *StatsdSink) IncrCounter(key []string, val float32) {
flatKey := s.flattenKey(key)
s.pushMetric(fmt.Sprintf("%s:%f|c\n", flatKey, val))
}
func (s *StatsdSink) IncrCounterWithLabels(key []string, val float32, labels []Label) {
flatKey := s.flattenKeyLabels(key, labels)
s.pushMetric(fmt.Sprintf("%s:%f|c\n", flatKey, val))
}
func (s *StatsdSink) AddSample(key []string, val float32) {
flatKey := s.flattenKey(key)
s.pushMetric(fmt.Sprintf("%s:%f|ms\n", flatKey, val))
}
func (s *StatsdSink) AddSampleWithLabels(key []string, val float32, labels []Label) {
flatKey := s.flattenKeyLabels(key, labels)
s.pushMetric(fmt.Sprintf("%s:%f|ms\n", flatKey, val))
}
// Flattens the key for formatting, removes spaces
func (s *StatsdSink) flattenKey(parts []string) string {
joined := strings.Join(parts, ".")
return strings.Map(func(r rune) rune {
switch r {
case ':':
fallthrough
case ' ':
return '_'
default:
return r
}
}, joined)
}
// Flattens the key along with labels for formatting, removes spaces
func (s *StatsdSink) flattenKeyLabels(parts []string, labels []Label) string {
for _, label := range labels {
parts = append(parts, label.Value)
}
return s.flattenKey(parts)
}
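// Wire-format example (added for clarity): SetGauge([]string{"foo bar"}, 42)
// is queued as "foo_bar:42.000000|g\n" and sent over UDP on the next flush.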
// Does a non-blocking push to the metrics queue
func (s *StatsdSink) pushMetric(m string) {
select {
case s.metricQueue <- m:
default:
}
}
// Flushes metrics
func (s *StatsdSink) flushMetrics() {
var sock net.Conn
var err error
var wait <-chan time.Time
ticker := time.NewTicker(flushInterval)
defer ticker.Stop()
CONNECT:
// Create a buffer
buf := bytes.NewBuffer(nil)
// Attempt to connect
sock, err = net.Dial("udp", s.addr)
if err != nil {
log.Printf("[ERR] Error connecting to statsd! Err: %s", err)
goto WAIT
}
for {
select {
case metric, ok := <-s.metricQueue:
// Get a metric from the queue
if !ok {
goto QUIT
}
// Check if this would overflow the packet size
if len(metric)+buf.Len() > statsdMaxLen {
_, err := sock.Write(buf.Bytes())
buf.Reset()
if err != nil {
log.Printf("[ERR] Error writing to statsd! Err: %s", err)
goto WAIT
}
}
// Append to the buffer
buf.WriteString(metric)
case <-ticker.C:
if buf.Len() == 0 {
continue
}
_, err := sock.Write(buf.Bytes())
buf.Reset()
if err != nil {
log.Printf("[ERR] Error flushing to statsd! Err: %s", err)
goto WAIT
}
}
}
WAIT:
// Wait for a while
wait = time.After(time.Duration(5) * time.Second)
for {
select {
// Dequeue the messages to avoid backlog
case _, ok := <-s.metricQueue:
if !ok {
goto QUIT
}
case <-wait:
goto CONNECT
}
}
QUIT:
s.metricQueue = nil
}
vendor/github.com/armon/go-metrics/statsite.go generated vendored Normal file
@ -0,0 +1,172 @@
package metrics
import (
"bufio"
"fmt"
"log"
"net"
"net/url"
"strings"
"time"
)
const (
// We force flush the statsite metrics after this period of
// inactivity. Prevents stats from getting stuck in a buffer
// forever.
flushInterval = 100 * time.Millisecond
)
// NewStatsiteSinkFromURL creates a StatsiteSink from a URL. It is used
// (and tested) from NewMetricSinkFromURL.
func NewStatsiteSinkFromURL(u *url.URL) (MetricSink, error) {
return NewStatsiteSink(u.Host)
}
// StatsiteSink provides a MetricSink that can be used with a
// statsite metrics server
type StatsiteSink struct {
addr string
metricQueue chan string
}
// NewStatsiteSink is used to create a new StatsiteSink
func NewStatsiteSink(addr string) (*StatsiteSink, error) {
s := &StatsiteSink{
addr: addr,
metricQueue: make(chan string, 4096),
}
go s.flushMetrics()
return s, nil
}
// Shutdown is used to stop flushing to statsite
func (s *StatsiteSink) Shutdown() {
close(s.metricQueue)
}
func (s *StatsiteSink) SetGauge(key []string, val float32) {
flatKey := s.flattenKey(key)
s.pushMetric(fmt.Sprintf("%s:%f|g\n", flatKey, val))
}
func (s *StatsiteSink) SetGaugeWithLabels(key []string, val float32, labels []Label) {
flatKey := s.flattenKeyLabels(key, labels)
s.pushMetric(fmt.Sprintf("%s:%f|g\n", flatKey, val))
}
func (s *StatsiteSink) EmitKey(key []string, val float32) {
flatKey := s.flattenKey(key)
s.pushMetric(fmt.Sprintf("%s:%f|kv\n", flatKey, val))
}
func (s *StatsiteSink) IncrCounter(key []string, val float32) {
flatKey := s.flattenKey(key)
s.pushMetric(fmt.Sprintf("%s:%f|c\n", flatKey, val))
}
func (s *StatsiteSink) IncrCounterWithLabels(key []string, val float32, labels []Label) {
flatKey := s.flattenKeyLabels(key, labels)
s.pushMetric(fmt.Sprintf("%s:%f|c\n", flatKey, val))
}
func (s *StatsiteSink) AddSample(key []string, val float32) {
flatKey := s.flattenKey(key)
s.pushMetric(fmt.Sprintf("%s:%f|ms\n", flatKey, val))
}
func (s *StatsiteSink) AddSampleWithLabels(key []string, val float32, labels []Label) {
flatKey := s.flattenKeyLabels(key, labels)
s.pushMetric(fmt.Sprintf("%s:%f|ms\n", flatKey, val))
}
// Flattens the key for formatting, removes spaces
func (s *StatsiteSink) flattenKey(parts []string) string {
joined := strings.Join(parts, ".")
return strings.Map(func(r rune) rune {
switch r {
case ':':
fallthrough
case ' ':
return '_'
default:
return r
}
}, joined)
}
// Flattens the key along with labels for formatting, removes spaces
func (s *StatsiteSink) flattenKeyLabels(parts []string, labels []Label) string {
for _, label := range labels {
parts = append(parts, label.Value)
}
return s.flattenKey(parts)
}
// Does a non-blocking push to the metrics queue
func (s *StatsiteSink) pushMetric(m string) {
select {
case s.metricQueue <- m:
default:
}
}
// Flushes metrics
func (s *StatsiteSink) flushMetrics() {
var sock net.Conn
var err error
var wait <-chan time.Time
var buffered *bufio.Writer
ticker := time.NewTicker(flushInterval)
defer ticker.Stop()
CONNECT:
// Attempt to connect
sock, err = net.Dial("tcp", s.addr)
if err != nil {
log.Printf("[ERR] Error connecting to statsite! Err: %s", err)
goto WAIT
}
// Create a buffered writer
buffered = bufio.NewWriter(sock)
for {
select {
case metric, ok := <-s.metricQueue:
// Get a metric from the queue
if !ok {
goto QUIT
}
// Try to send to statsite
_, err := buffered.Write([]byte(metric))
if err != nil {
log.Printf("[ERR] Error writing to statsite! Err: %s", err)
goto WAIT
}
case <-ticker.C:
if err := buffered.Flush(); err != nil {
log.Printf("[ERR] Error flushing to statsite! Err: %s", err)
goto WAIT
}
}
}
WAIT:
// Wait for a while
wait = time.After(time.Duration(5) * time.Second)
for {
select {
// Dequeue the messages to avoid backlog
case _, ok := <-s.metricQueue:
if !ok {
goto QUIT
}
case <-wait:
goto CONNECT
}
}
QUIT:
s.metricQueue = nil
}

vendor/github.com/armon/go-radix/.gitignore generated vendored Normal file

@ -0,0 +1,22 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe

vendor/github.com/armon/go-radix/.travis.yml generated vendored Normal file

@ -0,0 +1,3 @@
language: go
go:
- tip

vendor/github.com/armon/go-radix/LICENSE generated vendored Normal file

@ -0,0 +1,20 @@
The MIT License (MIT)
Copyright (c) 2014 Armon Dadgar
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

vendor/github.com/armon/go-radix/README.md generated vendored Normal file

@ -0,0 +1,38 @@
go-radix [![Build Status](https://travis-ci.org/armon/go-radix.png)](https://travis-ci.org/armon/go-radix)
=========
Provides the `radix` package that implements a [radix tree](http://en.wikipedia.org/wiki/Radix_tree).
The package only provides a single `Tree` implementation, optimized for sparse nodes.
As a radix tree, it provides the following:
* O(k) operations. In many cases, this can be faster than a hash table since
the hash function is an O(k) operation, and hash tables have very poor cache locality.
* Minimum / Maximum value lookups
* Ordered iteration
For an immutable variant, see [go-immutable-radix](https://github.com/hashicorp/go-immutable-radix).
Documentation
=============
The full documentation is available on [Godoc](http://godoc.org/github.com/armon/go-radix).
Example
=======
Below is a simple example of usage
```go
// Create a tree
r := radix.New()
r.Insert("foo", 1)
r.Insert("bar", 2)
r.Insert("foobar", 2)
// Find the longest prefix match
m, _, _ := r.LongestPrefix("foozip")
if m != "foo" {
panic("should be foo")
}
```
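The tree also supports ordered, prefix-scoped iteration and min/max lookups; continuing the example above:

```go
// Visit every key under the "foo" prefix, in order
r.WalkPrefix("foo", func(k string, v interface{}) bool {
	// return true to stop the walk early
	return false
})

// Smallest and largest keys in the tree
first, _, _ := r.Minimum() // "bar"
last, _, _ := r.Maximum()  // "foobar"
```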

vendor/github.com/armon/go-radix/go.mod generated vendored Normal file

@ -0,0 +1 @@
module github.com/armon/go-radix

vendor/github.com/armon/go-radix/radix.go generated vendored Normal file

@ -0,0 +1,540 @@
package radix
import (
"sort"
"strings"
)
// WalkFn is used when walking the tree. Takes a
// key and value, returning if iteration should
// be terminated.
type WalkFn func(s string, v interface{}) bool
// leafNode is used to represent a value
type leafNode struct {
key string
val interface{}
}
// edge is used to represent an edge node
type edge struct {
label byte
node *node
}
type node struct {
// leaf is used to store possible leaf
leaf *leafNode
// prefix is the common prefix we ignore
prefix string
// Edges should be stored in-order for iteration.
// We avoid a fully materialized slice to save memory,
// since in most cases we expect to be sparse
edges edges
}
func (n *node) isLeaf() bool {
return n.leaf != nil
}
func (n *node) addEdge(e edge) {
n.edges = append(n.edges, e)
n.edges.Sort()
}
func (n *node) updateEdge(label byte, node *node) {
num := len(n.edges)
idx := sort.Search(num, func(i int) bool {
return n.edges[i].label >= label
})
if idx < num && n.edges[idx].label == label {
n.edges[idx].node = node
return
}
panic("replacing missing edge")
}
func (n *node) getEdge(label byte) *node {
num := len(n.edges)
idx := sort.Search(num, func(i int) bool {
return n.edges[i].label >= label
})
if idx < num && n.edges[idx].label == label {
return n.edges[idx].node
}
return nil
}
func (n *node) delEdge(label byte) {
num := len(n.edges)
idx := sort.Search(num, func(i int) bool {
return n.edges[i].label >= label
})
if idx < num && n.edges[idx].label == label {
copy(n.edges[idx:], n.edges[idx+1:])
n.edges[len(n.edges)-1] = edge{}
n.edges = n.edges[:len(n.edges)-1]
}
}
type edges []edge
func (e edges) Len() int {
return len(e)
}
func (e edges) Less(i, j int) bool {
return e[i].label < e[j].label
}
func (e edges) Swap(i, j int) {
e[i], e[j] = e[j], e[i]
}
func (e edges) Sort() {
sort.Sort(e)
}
// Tree implements a radix tree. This can be treated as a
// Dictionary abstract data type. The main advantage over
// a standard hash map is prefix-based lookups and
// ordered iteration.
type Tree struct {
root *node
size int
}
// New returns an empty Tree
func New() *Tree {
return NewFromMap(nil)
}
// NewFromMap returns a new tree containing the keys
// from an existing map
func NewFromMap(m map[string]interface{}) *Tree {
t := &Tree{root: &node{}}
for k, v := range m {
t.Insert(k, v)
}
return t
}
// Len is used to return the number of elements in the tree
func (t *Tree) Len() int {
return t.size
}
// longestPrefix finds the length of the shared prefix
// of two strings
func longestPrefix(k1, k2 string) int {
max := len(k1)
if l := len(k2); l < max {
max = l
}
var i int
for i = 0; i < max; i++ {
if k1[i] != k2[i] {
break
}
}
return i
}
// Insert is used to add a new entry or update
// an existing entry. Returns if updated.
func (t *Tree) Insert(s string, v interface{}) (interface{}, bool) {
var parent *node
n := t.root
search := s
for {
// Handle key exhaustion
if len(search) == 0 {
if n.isLeaf() {
old := n.leaf.val
n.leaf.val = v
return old, true
}
n.leaf = &leafNode{
key: s,
val: v,
}
t.size++
return nil, false
}
// Look for the edge
parent = n
n = n.getEdge(search[0])
// No edge, create one
if n == nil {
e := edge{
label: search[0],
node: &node{
leaf: &leafNode{
key: s,
val: v,
},
prefix: search,
},
}
parent.addEdge(e)
t.size++
return nil, false
}
// Determine longest prefix of the search key on match
commonPrefix := longestPrefix(search, n.prefix)
if commonPrefix == len(n.prefix) {
search = search[commonPrefix:]
continue
}
// Split the node
t.size++
child := &node{
prefix: search[:commonPrefix],
}
parent.updateEdge(search[0], child)
// Restore the existing node
child.addEdge(edge{
label: n.prefix[commonPrefix],
node: n,
})
n.prefix = n.prefix[commonPrefix:]
// Create a new leaf node
leaf := &leafNode{
key: s,
val: v,
}
// If the new key is a subset, add it to this node
search = search[commonPrefix:]
if len(search) == 0 {
child.leaf = leaf
return nil, false
}
// Create a new edge for the node
child.addEdge(edge{
label: search[0],
node: &node{
leaf: leaf,
prefix: search,
},
})
return nil, false
}
}
// Delete is used to delete a key, returning the previous
// value and if it was deleted
func (t *Tree) Delete(s string) (interface{}, bool) {
var parent *node
var label byte
n := t.root
search := s
for {
// Check for key exhaustion
if len(search) == 0 {
if !n.isLeaf() {
break
}
goto DELETE
}
// Look for an edge
parent = n
label = search[0]
n = n.getEdge(label)
if n == nil {
break
}
// Consume the search prefix
if strings.HasPrefix(search, n.prefix) {
search = search[len(n.prefix):]
} else {
break
}
}
return nil, false
DELETE:
// Delete the leaf
leaf := n.leaf
n.leaf = nil
t.size--
// Check if we should delete this node from the parent
if parent != nil && len(n.edges) == 0 {
parent.delEdge(label)
}
// Check if we should merge this node
if n != t.root && len(n.edges) == 1 {
n.mergeChild()
}
// Check if we should merge the parent's other child
if parent != nil && parent != t.root && len(parent.edges) == 1 && !parent.isLeaf() {
parent.mergeChild()
}
return leaf.val, true
}
// DeletePrefix is used to delete the subtree under a prefix
// Returns how many nodes were deleted
// Use this to delete large subtrees efficiently
func (t *Tree) DeletePrefix(s string) int {
return t.deletePrefix(nil, t.root, s)
}
// deletePrefix does a recursive deletion
func (t *Tree) deletePrefix(parent, n *node, prefix string) int {
// Check for key exhaustion
if len(prefix) == 0 {
// Remove the leaf node
subTreeSize := 0
// recursively walk from all edges of the node to be deleted
recursiveWalk(n, func(s string, v interface{}) bool {
subTreeSize++
return false
})
if n.isLeaf() {
n.leaf = nil
}
n.edges = nil // deletes the entire subtree
// Check if we should merge the parent's other child
if parent != nil && parent != t.root && len(parent.edges) == 1 && !parent.isLeaf() {
parent.mergeChild()
}
t.size -= subTreeSize
return subTreeSize
}
// Look for an edge
label := prefix[0]
child := n.getEdge(label)
if child == nil || (!strings.HasPrefix(child.prefix, prefix) && !strings.HasPrefix(prefix, child.prefix)) {
return 0
}
// Consume the search prefix
if len(child.prefix) > len(prefix) {
prefix = prefix[len(prefix):]
} else {
prefix = prefix[len(child.prefix):]
}
return t.deletePrefix(n, child, prefix)
}
func (n *node) mergeChild() {
e := n.edges[0]
child := e.node
n.prefix = n.prefix + child.prefix
n.leaf = child.leaf
n.edges = child.edges
}
// Get is used to lookup a specific key, returning
// the value and if it was found
func (t *Tree) Get(s string) (interface{}, bool) {
n := t.root
search := s
for {
// Check for key exhaustion
if len(search) == 0 {
if n.isLeaf() {
return n.leaf.val, true
}
break
}
// Look for an edge
n = n.getEdge(search[0])
if n == nil {
break
}
// Consume the search prefix
if strings.HasPrefix(search, n.prefix) {
search = search[len(n.prefix):]
} else {
break
}
}
return nil, false
}
// LongestPrefix is like Get, but instead of an
// exact match, it will return the longest prefix match.
func (t *Tree) LongestPrefix(s string) (string, interface{}, bool) {
var last *leafNode
n := t.root
search := s
for {
// Look for a leaf node
if n.isLeaf() {
last = n.leaf
}
// Check for key exhaustion
if len(search) == 0 {
break
}
// Look for an edge
n = n.getEdge(search[0])
if n == nil {
break
}
// Consume the search prefix
if strings.HasPrefix(search, n.prefix) {
search = search[len(n.prefix):]
} else {
break
}
}
if last != nil {
return last.key, last.val, true
}
return "", nil, false
}
// Minimum is used to return the minimum value in the tree
func (t *Tree) Minimum() (string, interface{}, bool) {
n := t.root
for {
if n.isLeaf() {
return n.leaf.key, n.leaf.val, true
}
if len(n.edges) > 0 {
n = n.edges[0].node
} else {
break
}
}
return "", nil, false
}
// Maximum is used to return the maximum value in the tree
func (t *Tree) Maximum() (string, interface{}, bool) {
n := t.root
for {
if num := len(n.edges); num > 0 {
n = n.edges[num-1].node
continue
}
if n.isLeaf() {
return n.leaf.key, n.leaf.val, true
}
break
}
return "", nil, false
}
// Walk is used to walk the tree
func (t *Tree) Walk(fn WalkFn) {
recursiveWalk(t.root, fn)
}
// WalkPrefix is used to walk the tree under a prefix
func (t *Tree) WalkPrefix(prefix string, fn WalkFn) {
n := t.root
search := prefix
for {
// Check for key exhaustion
if len(search) == 0 {
recursiveWalk(n, fn)
return
}
// Look for an edge
n = n.getEdge(search[0])
if n == nil {
break
}
// Consume the search prefix
if strings.HasPrefix(search, n.prefix) {
search = search[len(n.prefix):]
} else if strings.HasPrefix(n.prefix, search) {
// Child may be under our search prefix
recursiveWalk(n, fn)
return
} else {
break
}
}
}
// WalkPath is used to walk the tree, but only visiting nodes
// from the root down to a given leaf. Where WalkPrefix walks
// all the entries *under* the given prefix, this walks the
// entries *above* the given prefix.
func (t *Tree) WalkPath(path string, fn WalkFn) {
n := t.root
search := path
for {
// Visit the leaf values if any
if n.leaf != nil && fn(n.leaf.key, n.leaf.val) {
return
}
// Check for key exhaustion
if len(search) == 0 {
return
}
// Look for an edge
n = n.getEdge(search[0])
if n == nil {
return
}
// Consume the search prefix
if strings.HasPrefix(search, n.prefix) {
search = search[len(n.prefix):]
} else {
break
}
}
}
// recursiveWalk is used to do a pre-order walk of a node
// recursively. Returns true if the walk should be aborted
func recursiveWalk(n *node, fn WalkFn) bool {
// Visit the leaf values if any
if n.leaf != nil && fn(n.leaf.key, n.leaf.val) {
return true
}
// Recurse on the children
for _, e := range n.edges {
if recursiveWalk(e.node, fn) {
return true
}
}
return false
}
// ToMap is used to walk the tree and convert it into a map
func (t *Tree) ToMap() map[string]interface{} {
out := make(map[string]interface{}, t.size)
t.Walk(func(k string, v interface{}) bool {
out[k] = v
return false
})
return out
}
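WalkPrefix and WalkPath are easy to confuse; a small sketch of the difference, assuming the package is imported as radix and fmt is available:

```go
t := radix.NewFromMap(map[string]interface{}{
	"f": 1, "fo": 2, "foo": 3, "foobar": 4,
})

// WalkPrefix visits entries *under* the prefix: "foo", "foobar"
t.WalkPrefix("foo", func(k string, v interface{}) bool {
	fmt.Println("under:", k)
	return false
})

// WalkPath visits entries *above* (leading to) the path: "f", "fo", "foo"
t.WalkPath("foo", func(k string, v interface{}) bool {
	fmt.Println("above:", k)
	return false
})
```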


@ -55,6 +55,7 @@ const (
// AWS ISO (US) partition's regions.
const (
UsIsoEast1RegionID = "us-iso-east-1" // US ISO East.
UsIsoWest1RegionID = "us-iso-west-1" // US ISO WEST.
)
// AWS ISOB (US) partition's regions.
@ -881,6 +882,31 @@ var awsPartition = partition{
"us-west-2": endpoint{},
},
},
"applicationinsights": service{
Endpoints: endpoints{
"af-south-1": endpoint{},
"ap-east-1": endpoint{},
"ap-northeast-1": endpoint{},
"ap-northeast-2": endpoint{},
"ap-south-1": endpoint{},
"ap-southeast-1": endpoint{},
"ap-southeast-2": endpoint{},
"ca-central-1": endpoint{},
"eu-central-1": endpoint{},
"eu-north-1": endpoint{},
"eu-south-1": endpoint{},
"eu-west-1": endpoint{},
"eu-west-2": endpoint{},
"eu-west-3": endpoint{},
"me-south-1": endpoint{},
"sa-east-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
"us-west-1": endpoint{},
"us-west-2": endpoint{},
},
},
"appmesh": service{
Endpoints: endpoints{
@ -4231,6 +4257,20 @@ var awsPartition = partition{
"us-west-2": endpoint{},
},
},
"iotsitewise": service{
Endpoints: endpoints{
"ap-northeast-1": endpoint{},
"ap-northeast-2": endpoint{},
"ap-south-1": endpoint{},
"ap-southeast-1": endpoint{},
"ap-southeast-2": endpoint{},
"eu-central-1": endpoint{},
"eu-west-1": endpoint{},
"us-east-1": endpoint{},
"us-west-2": endpoint{},
},
},
"iotthingsgraph": service{
Defaults: endpoint{
CredentialScope: credentialScope{
@ -5033,6 +5073,28 @@ var awsPartition = partition{
"us-west-2": endpoint{},
},
},
"mgn": service{
Endpoints: endpoints{
"ap-east-1": endpoint{},
"ap-northeast-1": endpoint{},
"ap-northeast-2": endpoint{},
"ap-northeast-3": endpoint{},
"ap-south-1": endpoint{},
"ap-southeast-1": endpoint{},
"ap-southeast-2": endpoint{},
"ca-central-1": endpoint{},
"eu-central-1": endpoint{},
"eu-north-1": endpoint{},
"eu-west-1": endpoint{},
"eu-west-2": endpoint{},
"sa-east-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
"us-west-1": endpoint{},
"us-west-2": endpoint{},
},
},
"mobileanalytics": service{
Endpoints: endpoints{
@ -8181,6 +8243,17 @@ var awsPartition = partition{
},
},
},
"wisdom": service{
Endpoints: endpoints{
"ap-northeast-1": endpoint{},
"ap-southeast-2": endpoint{},
"eu-central-1": endpoint{},
"eu-west-2": endpoint{},
"us-east-1": endpoint{},
"us-west-2": endpoint{},
},
},
"workdocs": service{
Endpoints: endpoints{
@ -8392,6 +8465,13 @@ var awscnPartition = partition{
"cn-northwest-1": endpoint{},
},
},
"applicationinsights": service{
Endpoints: endpoints{
"cn-north-1": endpoint{},
"cn-northwest-1": endpoint{},
},
},
"appmesh": service{
Endpoints: endpoints{
@ -8831,6 +8911,12 @@ var awscnPartition = partition{
"cn-northwest-1": endpoint{},
},
},
"iotsitewise": service{
Endpoints: endpoints{
"cn-north-1": endpoint{},
},
},
"kafka": service{
Endpoints: endpoints{
@ -9477,6 +9563,23 @@ var awsusgovPartition = partition{
},
},
},
"applicationinsights": service{
Endpoints: endpoints{
"us-gov-east-1": endpoint{
Hostname: "applicationinsights.us-gov-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-east-1",
},
},
"us-gov-west-1": endpoint{
Hostname: "applicationinsights.us-gov-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-west-1",
},
},
},
},
"appstream2": service{
Defaults: endpoint{
Protocols: []string{"https"},
@ -10373,6 +10476,12 @@ var awsusgovPartition = partition{
"us-gov-west-1": endpoint{},
},
},
"iotsitewise": service{
Endpoints: endpoints{
"us-gov-west-1": endpoint{},
},
},
"kafka": service{
Endpoints: endpoints{
@ -10380,6 +10489,18 @@ var awsusgovPartition = partition{
"us-gov-west-1": endpoint{},
},
},
"kendra": service{
Endpoints: endpoints{
"fips-us-gov-west-1": endpoint{
Hostname: "kendra-fips.us-gov-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-west-1",
},
},
"us-gov-west-1": endpoint{},
},
},
"kinesis": service{
Endpoints: endpoints{
@ -11401,6 +11522,9 @@ var awsisoPartition = partition{
"us-iso-east-1": region{
Description: "US ISO East",
},
"us-iso-west-1": region{
Description: "US ISO WEST",
},
},
Services: services{
"api.ecr": service{
@ -11412,6 +11536,12 @@ var awsisoPartition = partition{
Region: "us-iso-east-1",
},
},
"us-iso-west-1": endpoint{
Hostname: "api.ecr.us-iso-west-1.c2s.ic.gov",
CredentialScope: credentialScope{
Region: "us-iso-west-1",
},
},
},
},
"api.sagemaker": service{
@ -11432,6 +11562,7 @@ var awsisoPartition = partition{
},
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"autoscaling": service{
@ -11440,24 +11571,28 @@ var awsisoPartition = partition{
"us-iso-east-1": endpoint{
Protocols: []string{"http", "https"},
},
"us-iso-west-1": endpoint{},
},
},
"cloudformation": service{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"cloudtrail": service{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"codedeploy": service{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"comprehend": service{
@ -11472,6 +11607,7 @@ var awsisoPartition = partition{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"datapipeline": service{
@ -11484,6 +11620,7 @@ var awsisoPartition = partition{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"dms": service{
@ -11510,24 +11647,34 @@ var awsisoPartition = partition{
"us-iso-east-1": endpoint{
Protocols: []string{"http", "https"},
},
"us-iso-west-1": endpoint{},
},
},
"ebs": service{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
},
},
"ec2": service{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"ecs": service{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"elasticache": service{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"elasticfilesystem": service{
@ -11548,6 +11695,7 @@ var awsisoPartition = partition{
"us-iso-east-1": endpoint{
Protocols: []string{"http", "https"},
},
"us-iso-west-1": endpoint{},
},
},
"elasticmapreduce": service{
@ -11556,6 +11704,7 @@ var awsisoPartition = partition{
"us-iso-east-1": endpoint{
Protocols: []string{"https"},
},
"us-iso-west-1": endpoint{},
},
},
"es": service{
@ -11568,6 +11717,7 @@ var awsisoPartition = partition{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"firehose": service{
@ -11582,6 +11732,7 @@ var awsisoPartition = partition{
"us-iso-east-1": endpoint{
Protocols: []string{"http", "https"},
},
"us-iso-west-1": endpoint{},
},
},
"health": service{
@ -11607,6 +11758,7 @@ var awsisoPartition = partition{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"kms": service{
@ -11619,12 +11771,14 @@ var awsisoPartition = partition{
},
},
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"lambda": service{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"license-manager": service{
@ -11637,6 +11791,7 @@ var awsisoPartition = partition{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"medialive": service{
@ -11655,6 +11810,7 @@ var awsisoPartition = partition{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"outposts": service{
@ -11673,12 +11829,14 @@ var awsisoPartition = partition{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"redshift": service{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"route53": service{
@ -11735,6 +11893,7 @@ var awsisoPartition = partition{
"us-iso-east-1": endpoint{
Protocols: []string{"http", "https"},
},
"us-iso-west-1": endpoint{},
},
},
"sqs": service{
@ -11743,6 +11902,7 @@ var awsisoPartition = partition{
"us-iso-east-1": endpoint{
Protocols: []string{"http", "https"},
},
"us-iso-west-1": endpoint{},
},
},
"ssm": service{
@ -11755,6 +11915,7 @@ var awsisoPartition = partition{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"streams.dynamodb": service{
@ -11774,6 +11935,7 @@ var awsisoPartition = partition{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"support": service{
@ -11792,6 +11954,7 @@ var awsisoPartition = partition{
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
"us-iso-west-1": endpoint{},
},
},
"transcribe": service{
@ -11934,6 +12097,12 @@ var awsisobPartition = partition{
"us-isob-east-1": endpoint{},
},
},
"ebs": service{
Endpoints: endpoints{
"us-isob-east-1": endpoint{},
},
},
"ec2": service{
Defaults: endpoint{
Protocols: []string{"http", "https"},


@ -5,4 +5,4 @@ package aws
const SDKName = "aws-sdk-go"
// SDKVersion is the version of this SDK
const SDKVersion = "1.41.10"


@ -56231,16 +56231,18 @@ type CreateFlowLogsInput struct {
// or LogGroupName.
DeliverLogsPermissionArn *string `type:"string"`
// The destination options.
DestinationOptions *DestinationOptionsRequest `type:"structure"`
// Checks whether you have the required permissions for the action, without
// actually making the request, and provides an error response. If you have
// the required permissions, the error response is DryRunOperation. Otherwise,
// it is UnauthorizedOperation.
DryRun *bool `type:"boolean"`
// The destination to which the flow log data is to be published. Flow log data
// can be published to a CloudWatch Logs log group or an Amazon S3 bucket. The
// value specified for this parameter depends on the value specified for LogDestinationType.
//
// If LogDestinationType is not specified or cloud-watch-logs, specify the Amazon
// Resource Name (ARN) of the CloudWatch Logs log group. For example, to publish
@ -56255,10 +56257,10 @@ type CreateFlowLogsInput struct {
// a subfolder name. This is a reserved term.
LogDestination *string `type:"string"`
// The type of destination to which the flow log data is to be published. Flow
// log data can be published to CloudWatch Logs or Amazon S3. To publish flow
// log data to CloudWatch Logs, specify cloud-watch-logs. To publish flow log
// data to Amazon S3, specify s3.
//
// If you specify LogDestinationType as s3, do not specify DeliverLogsPermissionArn
// or LogGroupName.
@ -56367,6 +56369,12 @@ func (s *CreateFlowLogsInput) SetDeliverLogsPermissionArn(v string) *CreateFlowL
return s
}
// SetDestinationOptions sets the DestinationOptions field's value.
func (s *CreateFlowLogsInput) SetDestinationOptions(v *DestinationOptionsRequest) *CreateFlowLogsInput {
s.DestinationOptions = v
return s
}
// SetDryRun sets the DryRun field's value.
func (s *CreateFlowLogsInput) SetDryRun(v bool) *CreateFlowLogsInput {
s.DryRun = &v
@ -76501,8 +76509,11 @@ type DescribeInstanceTypesInput struct {
// * instance-storage-info.disk.type - The storage technology for the local
// instance storage disks (hdd | ssd).
//
// * instance-storage-info.encryption-supported - Indicates whether data
// is encrypted at rest (required | unsupported).
//
// * instance-storage-info.nvme-support - Indicates whether non-volatile
// memory express (NVMe) is supported for instance store (required | supported
// | unsupported).
//
// * instance-storage-info.total-size-in-gb - The total amount of storage
@ -87935,6 +87946,109 @@ func (s *DescribeVpnGatewaysOutput) SetVpnGateways(v []*VpnGateway) *DescribeVpn
return s
}
// Describes the destination options for a flow log.
type DestinationOptionsRequest struct {
_ struct{} `type:"structure"`
// The format for the flow log. The default is plain-text.
FileFormat *string `type:"string" enum:"DestinationFileFormat"`
// Indicates whether to use Hive-compatible prefixes for flow logs stored in
// Amazon S3. The default is false.
HiveCompatiblePartitions *bool `type:"boolean"`
// Indicates whether to partition the flow log per hour. This reduces the cost
// and response time for queries. The default is false.
PerHourPartition *bool `type:"boolean"`
}
// String returns the string representation.
//
// API parameter values that are decorated as "sensitive" in the API will not
// be included in the string output. The member name will be present, but the
// value will be replaced with "sensitive".
func (s DestinationOptionsRequest) String() string {
return awsutil.Prettify(s)
}
// GoString returns the string representation.
//
// API parameter values that are decorated as "sensitive" in the API will not
// be included in the string output. The member name will be present, but the
// value will be replaced with "sensitive".
func (s DestinationOptionsRequest) GoString() string {
return s.String()
}
// SetFileFormat sets the FileFormat field's value.
func (s *DestinationOptionsRequest) SetFileFormat(v string) *DestinationOptionsRequest {
s.FileFormat = &v
return s
}
// SetHiveCompatiblePartitions sets the HiveCompatiblePartitions field's value.
func (s *DestinationOptionsRequest) SetHiveCompatiblePartitions(v bool) *DestinationOptionsRequest {
s.HiveCompatiblePartitions = &v
return s
}
// SetPerHourPartition sets the PerHourPartition field's value.
func (s *DestinationOptionsRequest) SetPerHourPartition(v bool) *DestinationOptionsRequest {
s.PerHourPartition = &v
return s
}
// Describes the destination options for a flow log.
type DestinationOptionsResponse struct {
_ struct{} `type:"structure"`
// The format for the flow log.
FileFormat *string `locationName:"fileFormat" type:"string" enum:"DestinationFileFormat"`
// Indicates whether to use Hive-compatible prefixes for flow logs stored in
// Amazon S3.
HiveCompatiblePartitions *bool `locationName:"hiveCompatiblePartitions" type:"boolean"`
// Indicates whether to partition the flow log per hour.
PerHourPartition *bool `locationName:"perHourPartition" type:"boolean"`
}
// String returns the string representation.
//
// API parameter values that are decorated as "sensitive" in the API will not
// be included in the string output. The member name will be present, but the
// value will be replaced with "sensitive".
func (s DestinationOptionsResponse) String() string {
return awsutil.Prettify(s)
}
// GoString returns the string representation.
//
// API parameter values that are decorated as "sensitive" in the API will not
// be included in the string output. The member name will be present, but the
// value will be replaced with "sensitive".
func (s DestinationOptionsResponse) GoString() string {
return s.String()
}
// SetFileFormat sets the FileFormat field's value.
func (s *DestinationOptionsResponse) SetFileFormat(v string) *DestinationOptionsResponse {
s.FileFormat = &v
return s
}
// SetHiveCompatiblePartitions sets the HiveCompatiblePartitions field's value.
func (s *DestinationOptionsResponse) SetHiveCompatiblePartitions(v bool) *DestinationOptionsResponse {
s.HiveCompatiblePartitions = &v
return s
}
// SetPerHourPartition sets the PerHourPartition field's value.
func (s *DestinationOptionsResponse) SetPerHourPartition(v bool) *DestinationOptionsResponse {
s.PerHourPartition = &v
return s
}
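These options hang off CreateFlowLogsInput through the generated setters above. A hedged sketch of wiring them together, assuming the usual aws-sdk-go imports ("github.com/aws/aws-sdk-go/aws" and the ec2 package) and with the resource ID and bucket ARN purely illustrative:

```go
// Publish VPC flow logs to S3 as hourly-partitioned Parquet files.
opts := &ec2.DestinationOptionsRequest{}
opts.SetFileFormat(ec2.DestinationFileFormatParquet)
opts.SetHiveCompatiblePartitions(true)
opts.SetPerHourPartition(true)

input := &ec2.CreateFlowLogsInput{}
input.SetResourceType("VPC")
input.SetResourceIds([]*string{aws.String("vpc-0123456789abcdef0")}) // illustrative ID
input.SetTrafficType("ALL")
input.SetLogDestinationType("s3")
input.SetLogDestination("arn:aws:s3:::my-flow-logs") // illustrative bucket
input.SetDestinationOptions(opts)
```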
type DetachClassicLinkVpcInput struct {
_ struct{} `type:"structure"`
@ -90815,7 +90929,7 @@ func (s *DiskImageVolumeDescription) SetSize(v int64) *DiskImageVolumeDescriptio
return s
}
// Describes a disk.
type DiskInfo struct {
_ struct{} `type:"structure"`
@ -95678,22 +95792,25 @@ type FlowLog struct {
// The status of the logs delivery (SUCCESS | FAILED).
DeliverLogsStatus *string `locationName:"deliverLogsStatus" type:"string"`
// The destination options.
DestinationOptions *DestinationOptionsResponse `locationName:"destinationOptions" type:"structure"`
// The flow log ID.
FlowLogId *string `locationName:"flowLogId" type:"string"`
// The status of the flow log (ACTIVE).
FlowLogStatus *string `locationName:"flowLogStatus" type:"string"`
// The destination to which the flow log data is published. Flow log data can
// be published to an CloudWatch Logs log group or an Amazon S3 bucket. If the
// flow log publishes to CloudWatch Logs, this element indicates the Amazon
// Resource Name (ARN) of the CloudWatch Logs log group to which the data is
// published. If the flow log publishes to Amazon S3, this element indicates
// the ARN of the Amazon S3 bucket to which the data is published.
LogDestination *string `locationName:"logDestination" type:"string"`
// The type of destination to which the flow log data is published. Flow log
// data can be published to CloudWatch Logs or Amazon S3.
LogDestinationType *string `locationName:"logDestinationType" type:"string" enum:"LogDestinationType"`
// The format of the flow log record.
@ -95764,6 +95881,12 @@ func (s *FlowLog) SetDeliverLogsStatus(v string) *FlowLog {
return s
}
// SetDestinationOptions sets the DestinationOptions field's value.
func (s *FlowLog) SetDestinationOptions(v *DestinationOptionsResponse) *FlowLog {
s.DestinationOptions = v
return s
}
// SetFlowLogId sets the FlowLogId field's value.
func (s *FlowLog) SetFlowLogId(v string) *FlowLog {
s.FlowLogId = &v
@ -105489,15 +105612,18 @@ func (s *InstanceStatusSummary) SetStatus(v string) *InstanceStatusSummary {
return s
}
// Describes the instance store features that are supported by the instance
// type.
type InstanceStorageInfo struct {
_ struct{} `type:"structure"`
// Describes the disks that are available for the instance type.
Disks []*DiskInfo `locationName:"disks" locationNameList:"item" type:"list"`
// Indicates whether data is encrypted at rest.
EncryptionSupport *string `locationName:"encryptionSupport" type:"string" enum:"InstanceStorageEncryptionSupport"`
// Indicates whether non-volatile memory express (NVMe) is supported.
NvmeSupport *string `locationName:"nvmeSupport" type:"string" enum:"EphemeralNvmeSupport"`
// The total size of the disks, in GB.
@ -105528,6 +105654,12 @@ func (s *InstanceStorageInfo) SetDisks(v []*DiskInfo) *InstanceStorageInfo {
return s
}
// SetEncryptionSupport sets the EncryptionSupport field's value.
func (s *InstanceStorageInfo) SetEncryptionSupport(v string) *InstanceStorageInfo {
s.EncryptionSupport = &v
return s
}
// SetNvmeSupport sets the NvmeSupport field's value.
func (s *InstanceStorageInfo) SetNvmeSupport(v string) *InstanceStorageInfo {
s.NvmeSupport = &v
@ -142104,6 +142236,12 @@ type VpnConnection struct {
// Classic VPN connection.
Category *string `locationName:"category" type:"string"`
// The ARN of the core network.
CoreNetworkArn *string `locationName:"coreNetworkArn" type:"string"`
// The ARN of the core network attachment.
CoreNetworkAttachmentArn *string `locationName:"coreNetworkAttachmentArn" type:"string"`
// The configuration information for the VPN connection's customer gateway (in
// the native XML format). This element is always present in the CreateVpnConnection
// response; however, it's present in the DescribeVpnConnections response only
@ -142113,6 +142251,9 @@ type VpnConnection struct {
// The ID of the customer gateway at your end of the VPN connection.
CustomerGatewayId *string `locationName:"customerGatewayId" type:"string"`
// The current state of the gateway association.
GatewayAssociationState *string `locationName:"gatewayAssociationState" type:"string"`
// The VPN connection options.
Options *VpnConnectionOptions `locationName:"options" type:"structure"`
@ -142166,6 +142307,18 @@ func (s *VpnConnection) SetCategory(v string) *VpnConnection {
return s
}
// SetCoreNetworkArn sets the CoreNetworkArn field's value.
func (s *VpnConnection) SetCoreNetworkArn(v string) *VpnConnection {
s.CoreNetworkArn = &v
return s
}
// SetCoreNetworkAttachmentArn sets the CoreNetworkAttachmentArn field's value.
func (s *VpnConnection) SetCoreNetworkAttachmentArn(v string) *VpnConnection {
s.CoreNetworkAttachmentArn = &v
return s
}
// SetCustomerGatewayConfiguration sets the CustomerGatewayConfiguration field's value.
func (s *VpnConnection) SetCustomerGatewayConfiguration(v string) *VpnConnection {
s.CustomerGatewayConfiguration = &v
@ -142178,6 +142331,12 @@ func (s *VpnConnection) SetCustomerGatewayId(v string) *VpnConnection {
return s
}
// SetGatewayAssociationState sets the GatewayAssociationState field's value.
func (s *VpnConnection) SetGatewayAssociationState(v string) *VpnConnection {
s.GatewayAssociationState = &v
return s
}
// SetOptions sets the Options field's value.
func (s *VpnConnection) SetOptions(v *VpnConnectionOptions) *VpnConnection {
s.Options = v
@ -143185,6 +143344,9 @@ const (
// ArchitectureTypeArm64 is a ArchitectureType enum value
ArchitectureTypeArm64 = "arm64"
// ArchitectureTypeX8664Mac is a ArchitectureType enum value
ArchitectureTypeX8664Mac = "x86_64_mac"
)
// ArchitectureType_Values returns all elements of the ArchitectureType enum
@ -143193,6 +143355,7 @@ func ArchitectureType_Values() []string {
ArchitectureTypeI386,
ArchitectureTypeX8664,
ArchitectureTypeArm64,
ArchitectureTypeX8664Mac,
}
}
@ -144128,6 +144291,22 @@ func DeleteQueuedReservedInstancesErrorCode_Values() []string {
}
}
const (
// DestinationFileFormatPlainText is a DestinationFileFormat enum value
DestinationFileFormatPlainText = "plain-text"
// DestinationFileFormatParquet is a DestinationFileFormat enum value
DestinationFileFormatParquet = "parquet"
)
// DestinationFileFormat_Values returns all elements of the DestinationFileFormat enum
func DestinationFileFormat_Values() []string {
return []string{
DestinationFileFormatPlainText,
DestinationFileFormatParquet,
}
}
const (
// DeviceTypeEbs is a DeviceType enum value
DeviceTypeEbs = "ebs"
@ -145228,6 +145407,22 @@ func InstanceStateName_Values() []string {
}
}
const (
// InstanceStorageEncryptionSupportUnsupported is a InstanceStorageEncryptionSupport enum value
InstanceStorageEncryptionSupportUnsupported = "unsupported"
// InstanceStorageEncryptionSupportRequired is a InstanceStorageEncryptionSupport enum value
InstanceStorageEncryptionSupportRequired = "required"
)
// InstanceStorageEncryptionSupport_Values returns all elements of the InstanceStorageEncryptionSupport enum
func InstanceStorageEncryptionSupport_Values() []string {
return []string{
InstanceStorageEncryptionSupportUnsupported,
InstanceStorageEncryptionSupportRequired,
}
}
const (
// InstanceTypeT1Micro is a InstanceType enum value
InstanceTypeT1Micro = "t1.micro"


@ -1022,7 +1022,7 @@ func (s *LogoutInput) SetAccessToken(v string) *LogoutInput {
}
type LogoutOutput struct {
_ struct{} `type:"structure"`
}
// String returns the string representation.


@ -1,3 +1,4 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic
package admin


@ -1,3 +1,4 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic
package admin


@ -1,3 +1,4 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic
package admin


@ -1,3 +1,4 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic
package admin


@ -1,3 +1,4 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic
package admin


@ -1,3 +1,4 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic
package admin


@ -1,3 +1,4 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic
package admin


@ -1,3 +1,4 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic
package admin


@ -1,3 +1,4 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic
package admin


@ -60,10 +60,10 @@ func (v *PtrGuard) Release() {
// The uintptrPtr() helper function below assumes that uintptr has the same size
// as a pointer, although in theory it could be larger. Therefore we use this
// constant expression to assert size equality as a safeguard at compile time.
// How it works: if sizes are different, either the inner or outer expression is
// negative, which always fails with "constant ... overflows uintptr", because
// unsafe.Sizeof() is a uintptr typed constant.
const _ = -(unsafe.Sizeof(uintptr(0)) - PtrSize) // size assert
func uintptrPtr(p *CPtr) *uintptr {
return (*uintptr)(unsafe.Pointer(p))
}
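The same compile-time size assertion can be reproduced in isolation; a minimal sketch, with ptrSize standing in for the package's PtrSize constant:

```go
package main

import "unsafe"

const ptrSize = 8 // stand-in for PtrSize; assumed equal to unsafe.Sizeof(uintptr(0))

// If the two sizes differ, either the subtraction or its negation is a
// negative constant, and a negative constant of type uintptr is rejected
// at compile time with "constant ... overflows uintptr".
const _ = -(unsafe.Sizeof(uintptr(0)) - ptrSize)

func main() {}
```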


@ -1,3 +1,4 @@
//go:build ptrguard
// +build ptrguard
package cutil


@ -1,3 +1,4 @@
//go:build !ptrguard
// +build !ptrguard
package cutil


@ -14,6 +14,8 @@ import (
var argvPlaceholder = "placeholder"
//revive:disable:var-naming old-yet-exported public api
// ClusterStat represents Ceph cluster statistics.
type ClusterStat struct {
Kb uint64
@ -22,6 +24,8 @@ type ClusterStat struct {
Num_objects uint64
}
//revive:enable:var-naming
// Conn is a connection handle to a Ceph cluster.
type Conn struct {
cluster C.rados_t
@ -67,7 +71,7 @@ func (c *Conn) Connect() error {
// Shutdown disconnects from the cluster.
func (c *Conn) Shutdown() {
if err := c.ensureConnected(); err != nil {
return
}
freeConn(c)
@ -166,7 +170,7 @@ func (c *Conn) WaitForLatestOSDMap() error {
return getError(ret)
}
func (c *Conn) ensureConnected() error {
if c.connected {
return nil
}
@ -176,7 +180,7 @@ func (c *Conn) ensure_connected() error {
// GetClusterStats returns statistics about the cluster associated with the
// connection.
func (c *Conn) GetClusterStats() (stat ClusterStat, err error) {
if err := c.ensureConnected(); err != nil {
return ClusterStat{}, err
}
cStat := C.struct_rados_cluster_stat_t{}
@ -269,7 +273,7 @@ func (c *Conn) MakePool(name string) error {
// DeletePool deletes a pool and all the data inside the pool.
func (c *Conn) DeletePool(name string) error {
if err := c.ensureConnected(); err != nil {
return err
}
cName := C.CString(name)
@ -280,7 +284,7 @@ func (c *Conn) DeletePool(name string) error {
// GetPoolByName returns the ID of the pool with a given name.
func (c *Conn) GetPoolByName(name string) (int64, error) {
if err := c.ensureConnected(); err != nil {
return 0, err
}
cName := C.CString(name)
@ -295,7 +299,7 @@ func (c *Conn) GetPoolByName(name string) (int64, error) {
// GetPoolByID returns the name of a pool by a given ID.
func (c *Conn) GetPoolByID(id int64) (string, error) {
buf := make([]byte, 4096)
if err := c.ensureConnected(); err != nil {
return "", err
}
cid := C.int64_t(id)


@ -45,6 +45,8 @@ const (
CreateIdempotent = C.LIBRADOS_CREATE_IDEMPOTENT
)
//revive:disable:var-naming old-yet-exported public api
// PoolStat represents Ceph pool statistics.
type PoolStat struct {
// space used in bytes
@ -69,6 +71,8 @@ type PoolStat struct {
Num_wr_kb uint64
}
//revive:enable:var-naming
// ObjectStat represents an object stat information
type ObjectStat struct {
// current length in bytes


@ -1,3 +1,4 @@
//go:build nautilus
// +build nautilus
package rados


@ -1,3 +1,4 @@
//go:build !nautilus
// +build !nautilus
package rados


@ -1,3 +1,4 @@
//go:build !mimic
// +build !mimic
package rados


@ -1,3 +1,4 @@
//go:build !nautilus
// +build !nautilus
package admin

vendor/github.com/ceph/go-ceph/rbd/admin/imagespec.go generated vendored Normal file

@ -0,0 +1,46 @@
//go:build !nautilus && ceph_preview
// +build !nautilus,ceph_preview
package admin
import (
"fmt"
)
// ImageSpec values are used to identify an RBD image wherever Ceph APIs
// require an image_spec/image_id_spec using image name/id and optional
// pool and namespace.
// PREVIEW
type ImageSpec struct {
spec string
}
// NewImageSpec is used to construct an ImageSpec given an image name/id
// and optional namespace and pool names.
// PREVIEW
//
// NewImageSpec constructs an ImageSpec to identify an RBD image and thus
// requires image name/id, whereas NewLevelSpec constructs LevelSpec to
// identify an entire pool, a pool namespace, or a single RBD image, all of which
// require a pool name.
func NewImageSpec(pool, namespace, image string) ImageSpec {
var s string
if pool != "" && namespace != "" {
s = fmt.Sprintf("%s/%s/%s", pool, namespace, image)
} else if pool != "" {
s = fmt.Sprintf("%s/%s", pool, image)
} else {
s = image
}
return ImageSpec{s}
}
// NewRawImageSpec returns an ImageSpec directly based on the spec string
// argument without constructing it from component values.
// PREVIEW
//
// This should only be used if NewImageSpec cannot create the imagespec value
// you want to pass to Ceph.
func NewRawImageSpec(spec string) ImageSpec {
return ImageSpec{spec}
}
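For illustration, the three forms a spec can take (comments show the resulting internal spec string):

```go
admin.NewImageSpec("rbd", "ns1", "img") // "rbd/ns1/img"
admin.NewImageSpec("rbd", "", "img")    // "rbd/img"
admin.NewImageSpec("", "", "img")       // "img" (namespace is ignored without a pool)
```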


@ -1,3 +1,4 @@
//go:build !nautilus
// +build !nautilus
package admin

vendor/github.com/ceph/go-ceph/rbd/admin/task.go generated vendored Normal file

@ -0,0 +1,143 @@
//go:build !nautilus && ceph_preview
// +build !nautilus,ceph_preview
package admin
import (
ccom "github.com/ceph/go-ceph/common/commands"
"github.com/ceph/go-ceph/internal/commands"
)
// TaskAdmin encapsulates management functions for ceph rbd task operations.
// PREVIEW
type TaskAdmin struct {
conn ccom.MgrCommander
}
// Task returns a TaskAdmin type for managing ceph rbd task operations.
// PREVIEW
func (ra *RBDAdmin) Task() *TaskAdmin {
return &TaskAdmin{conn: ra.conn}
}
// TaskRefs contains the action name and information about the image.
// PREVIEW
type TaskRefs struct {
Action string `json:"action"`
PoolName string `json:"pool_name"`
PoolNamespace string `json:"pool_namespace"`
ImageName string `json:"image_name"`
ImageID string `json:"image_id"`
}
// TaskResponse contains the information about the task added on an image.
// PREVIEW
type TaskResponse struct {
Sequence int `json:"sequence"`
ID string `json:"id"`
Message string `json:"message"`
Refs TaskRefs `json:"refs"`
InProgress bool `json:"in_progress"`
Progress float64 `json:"progress"`
RetryAttempts int `json:"retry_attempts"`
RetryTime string `json:"retry_time"`
RetryMessage string `json:"retry_message"`
}
func parseTaskResponse(res commands.Response) (TaskResponse, error) {
var taskResponse TaskResponse
err := res.NoStatus().Unmarshal(&taskResponse).End()
return taskResponse, err
}
func parseTaskResponseList(res commands.Response) ([]TaskResponse, error) {
var taskResponseList []TaskResponse
err := res.NoStatus().Unmarshal(&taskResponseList).End()
return taskResponseList, err
}
// AddFlatten adds a background task to flatten a cloned image based on the
// supplied image spec.
// PREVIEW
//
// Similar To:
// rbd task add flatten <image_spec>
func (ta *TaskAdmin) AddFlatten(img ImageSpec) (TaskResponse, error) {
m := map[string]string{
"prefix": "rbd task add flatten",
"image_spec": img.spec,
"format": "json",
}
return parseTaskResponse(commands.MarshalMgrCommand(ta.conn, m))
}
// AddRemove adds a background task to remove an image based on the supplied
// image spec.
// PREVIEW
//
// Similar To:
// rbd task add remove <image_spec>
func (ta *TaskAdmin) AddRemove(img ImageSpec) (TaskResponse, error) {
m := map[string]string{
"prefix": "rbd task add remove",
"image_spec": img.spec,
"format": "json",
}
return parseTaskResponse(commands.MarshalMgrCommand(ta.conn, m))
}
// AddTrashRemove adds a background task to remove an image from the trash based
// on the supplied image id spec.
// PREVIEW
//
// Similar To:
// rbd task add trash remove <image_id_spec>
func (ta *TaskAdmin) AddTrashRemove(img ImageSpec) (TaskResponse, error) {
m := map[string]string{
"prefix": "rbd task add trash remove",
"image_id_spec": img.spec,
"format": "json",
}
return parseTaskResponse(commands.MarshalMgrCommand(ta.conn, m))
}
// List pending or running asynchronous tasks.
// PREVIEW
//
// Similar To:
// rbd task list
func (ta *TaskAdmin) List() ([]TaskResponse, error) {
m := map[string]string{
"prefix": "rbd task list",
"format": "json",
}
return parseTaskResponseList(commands.MarshalMgrCommand(ta.conn, m))
}
// GetTaskByID returns pending or running asynchronous task using id.
// PREVIEW
//
// Similar To:
// rbd task list <task_id>
func (ta *TaskAdmin) GetTaskByID(taskID string) (TaskResponse, error) {
m := map[string]string{
"prefix": "rbd task list",
"task_id": taskID,
"format": "json",
}
return parseTaskResponse(commands.MarshalMgrCommand(ta.conn, m))
}
// Cancel a pending or running asynchronous task.
// PREVIEW
//
// Similar To:
// rbd task cancel <task_id>
func (ta *TaskAdmin) Cancel(taskID string) (TaskResponse, error) {
m := map[string]string{
"prefix": "rbd task cancel",
"task_id": taskID,
"format": "json",
}
return parseTaskResponse(commands.MarshalMgrCommand(ta.conn, m))
}
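
A short sketch of driving the task API end to end; it assumes a connected `*rados.Conn` can be handed to `admin.NewFromConn` (as in this package's admin.go) and uses placeholder pool/image names:

```go
package main

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
	"github.com/ceph/go-ceph/rbd/admin"
)

// flattenInBackground queues a server-side flatten of the named clone and
// then prints whatever tasks are still pending.
func flattenInBackground(conn *rados.Conn) error {
	ra := admin.NewFromConn(conn)
	task, err := ra.Task().AddFlatten(admin.NewImageSpec("rbd", "", "cloned-img"))
	if err != nil {
		return err
	}
	fmt.Println("queued:", task.ID, task.Message)

	pending, err := ra.Task().List()
	if err != nil {
		return err
	}
	for _, t := range pending {
		fmt.Printf("%s in_progress=%v progress=%.0f%%\n", t.ID, t.InProgress, t.Progress)
	}
	return nil
}
```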

View File

@ -1,3 +1,4 @@
//go:build !octopus && !nautilus
// +build !octopus,!nautilus

package rbd

View File

@ -1,3 +1,4 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic

package rbd

View File

@ -21,8 +21,8 @@ func (image *Image) GetMetadata(key string) (string, error) {
return "", err return "", err
} }
c_key := C.CString(key) cKey := C.CString(key)
defer C.free(unsafe.Pointer(c_key)) defer C.free(unsafe.Pointer(cKey))
var ( var (
buf []byte buf []byte
@ -34,7 +34,7 @@ func (image *Image) GetMetadata(key string) (string, error) {
// rbd_metadata_get is a bit quirky and *does not* update the size // rbd_metadata_get is a bit quirky and *does not* update the size
// value if the size passed in >= the needed size. // value if the size passed in >= the needed size.
ret := C.rbd_metadata_get( ret := C.rbd_metadata_get(
image.image, c_key, (*C.char)(unsafe.Pointer(&buf[0])), &csize) image.image, cKey, (*C.char)(unsafe.Pointer(&buf[0])), &csize)
err = getError(ret) err = getError(ret)
return retry.Size(int(csize)).If(err == errRange) return retry.Size(int(csize)).If(err == errRange)
}) })
@ -53,12 +53,12 @@ func (image *Image) SetMetadata(key string, value string) error {
return err return err
} }
c_key := C.CString(key) cKey := C.CString(key)
c_value := C.CString(value) cValue := C.CString(value)
defer C.free(unsafe.Pointer(c_key)) defer C.free(unsafe.Pointer(cKey))
defer C.free(unsafe.Pointer(c_value)) defer C.free(unsafe.Pointer(cValue))
ret := C.rbd_metadata_set(image.image, c_key, c_value) ret := C.rbd_metadata_set(image.image, cKey, cValue)
if ret < 0 { if ret < 0 {
return rbdError(ret) return rbdError(ret)
} }
@ -75,10 +75,10 @@ func (image *Image) RemoveMetadata(key string) error {
return err return err
} }
c_key := C.CString(key) cKey := C.CString(key)
defer C.free(unsafe.Pointer(c_key)) defer C.free(unsafe.Pointer(cKey))
ret := C.rbd_metadata_remove(image.image, c_key) ret := C.rbd_metadata_remove(image.image, cKey)
if ret < 0 { if ret < 0 {
return rbdError(ret) return rbdError(ret)
} }
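
A usage sketch for the metadata accessors above, assuming an already-open `*rbd.Image`; the key and value are placeholders:

```go
package main

import (
	"fmt"

	"github.com/ceph/go-ceph/rbd"
)

// tagOwner attaches a key/value pair to an open image, reads it back,
// and removes it again.
func tagOwner(img *rbd.Image) error {
	if err := img.SetMetadata("owner", "team-a"); err != nil {
		return err
	}
	owner, err := img.GetMetadata("owner")
	if err != nil {
		return err
	}
	fmt.Println("owner:", owner)
	return img.RemoveMetadata("owner")
}
```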

View File

@ -1,3 +1,4 @@
//go:build !nautilus
// +build !nautilus

// Initially, we're only providing mirroring related functions for octopus as
@ -746,7 +747,9 @@ func (iter *MirrorImageGlobalStatusIter) Next() (*GlobalMirrorImageIDAndStatus,
}

// Close terminates iteration regardless if iteration was completed and
-// frees any associated resources. (DEPRECATED)
+// frees any associated resources.
+//
+// Deprecated: not required
func (*MirrorImageGlobalStatusIter) Close() error {
	return nil
}
@ -908,3 +911,132 @@ func (iter *MirrorImageInfoIter) fetch() error {
	}
	return nil
}
// MirrorImageInstanceIDItem contains an ID string for a RBD image and
// its corresponding mirrored image's Instance ID.
type MirrorImageInstanceIDItem struct {
ID string
InstanceID string
}
// MirrorImageInstanceIDList returns a slice of MirrorImageInstanceIDItem. If
// the length of the returned slice equals max, the next chunk of the list can
// be obtained by setting start to the ID of the last item of the returned slice.
// If max is 0 a slice of all items is returned.
//
// Implements:
// int rbd_mirror_image_instance_id_list(
// rados_ioctx_t io_ctx,
// const char *start_id,
// size_t max, char **image_ids,
// char **instance_ids,
// size_t *len)
func MirrorImageInstanceIDList(
ioctx *rados.IOContext, start string,
max int) ([]MirrorImageInstanceIDItem, error) {
var (
result []MirrorImageInstanceIDItem
fetchAll bool
)
if max <= 0 {
max = iterBufSize
fetchAll = true
}
chunk := make([]MirrorImageInstanceIDItem, max)
for {
length, err := mirrorImageInstanceIDList(ioctx, start, chunk)
if err != nil {
return nil, err
}
result = append(result, chunk[:length]...)
if !fetchAll || length < max {
break
}
start = chunk[length-1].ID
}
return result, nil
}
func mirrorImageInstanceIDList(ioctx *rados.IOContext, start string,
results []MirrorImageInstanceIDItem) (int, error) {
cStart := C.CString(start)
defer C.free(unsafe.Pointer(cStart))
var (
max = C.size_t(len(results))
length = C.size_t(0)
ids = make([]*C.char, len(results))
instanceIDs = make([]*C.char, len(results))
)
ret := C.rbd_mirror_image_instance_id_list(
cephIoctx(ioctx),
cStart,
max,
&ids[0],
&instanceIDs[0],
&length,
)
if err := getError(ret); err != nil {
return 0, err
}
for i := 0; i < int(length); i++ {
results[i].ID = C.GoString(ids[i])
results[i].InstanceID = C.GoString(instanceIDs[i])
}
C.rbd_mirror_image_instance_id_list_cleanup(
&ids[0],
&instanceIDs[0],
length)
return int(length), getError(ret)
}
// MirrorImageInstanceIDIter provide methods for iterating over all
// the MirrorImageInstanceIDItem values in a pool.
type MirrorImageInstanceIDIter struct {
ioctx *rados.IOContext
buf []MirrorImageInstanceIDItem
lastID string
}
// NewMirrorImageInstanceIDIter creates a new iterator ready for use.
func NewMirrorImageInstanceIDIter(ioctx *rados.IOContext) *MirrorImageInstanceIDIter {
return &MirrorImageInstanceIDIter{
ioctx: ioctx,
}
}
// Next fetches one MirrorImageInstanceIDItem value or a nil value if iteration is
// exhausted. The error return will be non-nil if an underlying error fetching
// more values occurred.
func (iter *MirrorImageInstanceIDIter) Next() (*MirrorImageInstanceIDItem, error) {
if len(iter.buf) == 0 {
if err := iter.fetch(); err != nil {
return nil, err
}
if len(iter.buf) == 0 {
return nil, nil
}
iter.lastID = iter.buf[len(iter.buf)-1].ID
}
item := iter.buf[0]
iter.buf = iter.buf[1:]
return &item, nil
}
func (iter *MirrorImageInstanceIDIter) fetch() error {
iter.buf = nil
items := make([]MirrorImageInstanceIDItem, iterBufSize)
n, err := mirrorImageInstanceIDList(
iter.ioctx,
iter.lastID,
items)
if err != nil {
return err
}
if n > 0 {
iter.buf = items[:n]
}
return nil
}
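
A sketch of consuming the iterator above; a `nil` item from `Next` marks the end of iteration, mirroring the other iterators in this file:

```go
package main

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
	"github.com/ceph/go-ceph/rbd"
)

// printInstanceIDs lists every mirrored image in the pool together with
// the instance ID of the rbd-mirror daemon servicing it.
func printInstanceIDs(ioctx *rados.IOContext) error {
	iter := rbd.NewMirrorImageInstanceIDIter(ioctx)
	for {
		item, err := iter.Next()
		if err != nil {
			return err
		}
		if item == nil { // iteration exhausted
			break
		}
		fmt.Printf("image %s -> instance %s\n", item.ID, item.InstanceID)
	}
	return nil
}
```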

View File

@ -1,4 +1,6 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic

//
// Ceph Nautilus is the first release that includes rbd_namespace_create(),
// rbd_namespace_remove(), rbd_namespace_exists() and rbd_namespace_list().

View File

@ -120,10 +120,10 @@ func (rio *ImageOptions) Destroy() {
// int rbd_image_options_set_string(rbd_image_options_t opts, int optname,
// const char* optval);
func (rio *ImageOptions) SetString(option ImageOption, value string) error {
-	c_value := C.CString(value)
-	defer C.free(unsafe.Pointer(c_value))
+	cValue := C.CString(value)
+	defer C.free(unsafe.Pointer(cValue))

-	ret := C.rbd_image_options_set_string(rio.options, C.int(option), c_value)
+	ret := C.rbd_image_options_set_string(rio.options, C.int(option), cValue)
	if ret != 0 {
		return fmt.Errorf("%v, could not set option %v to \"%v\"",
			getError(ret), option, value)
@ -156,9 +156,9 @@ func (rio *ImageOptions) GetString(option ImageOption) (string, error) {
// int rbd_image_options_set_uint64(rbd_image_options_t opts, int optname,
// const uint64_t optval);
func (rio *ImageOptions) SetUint64(option ImageOption, value uint64) error {
-	c_value := C.uint64_t(value)
+	cValue := C.uint64_t(value)

-	ret := C.rbd_image_options_set_uint64(rio.options, C.int(option), c_value)
+	ret := C.rbd_image_options_set_uint64(rio.options, C.int(option), cValue)
	if ret != 0 {
		return fmt.Errorf("%v, could not set option %v to \"%v\"",
			getError(ret), option, value)
@ -173,14 +173,14 @@ func (rio *ImageOptions) SetUint64(option ImageOption, value uint64) error {
// int rbd_image_options_get_uint64(rbd_image_options_t opts, int optname,
// uint64_t* optval);
func (rio *ImageOptions) GetUint64(option ImageOption) (uint64, error) {
-	var c_value C.uint64_t
+	var cValue C.uint64_t

-	ret := C.rbd_image_options_get_uint64(rio.options, C.int(option), &c_value)
+	ret := C.rbd_image_options_get_uint64(rio.options, C.int(option), &cValue)
	if ret != 0 {
		return 0, fmt.Errorf("%v, could not get option %v", getError(ret), option)
	}

-	return uint64(c_value), nil
+	return uint64(cValue), nil
}

// IsSet returns a true if the RbdImageOption is set, false otherwise.
@ -189,14 +189,14 @@ func (rio *ImageOptions) GetUint64(option ImageOption) (uint64, error) {
// int rbd_image_options_is_set(rbd_image_options_t opts, int optname,
// bool* is_set);
func (rio *ImageOptions) IsSet(option ImageOption) (bool, error) {
-	var c_set C.bool
+	var cSet C.bool

-	ret := C.rbd_image_options_is_set(rio.options, C.int(option), &c_set)
+	ret := C.rbd_image_options_is_set(rio.options, C.int(option), &cSet)
	if ret != 0 {
		return false, fmt.Errorf("%v, could not check option %v", getError(ret), option)
	}

-	return bool(c_set), nil
+	return bool(cSet), nil
}

// Unset a given RbdImageOption.
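
A short sketch of the setter/getter flow above. It assumes the `NewRbdImageOptions` constructor and the `ImageOptionFormat`/`ImageOptionOrder` constants defined elsewhere in this package (older releases expose them under `RbdImageOption*` names):

```go
package main

import (
	"fmt"

	"github.com/ceph/go-ceph/rbd"
)

// describeOptions sets a couple of image options and reads one back.
func describeOptions() error {
	opts := rbd.NewRbdImageOptions()
	defer opts.Destroy()

	if err := opts.SetUint64(rbd.ImageOptionFormat, 2); err != nil {
		return err
	}
	if err := opts.SetUint64(rbd.ImageOptionOrder, 22); err != nil { // 2^22 = 4 MiB objects
		return err
	}

	if set, err := opts.IsSet(rbd.ImageOptionOrder); err == nil && set {
		order, _ := opts.GetUint64(rbd.ImageOptionOrder)
		fmt.Println("object order:", order)
	}
	return nil
}
```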

View File

@ -1,3 +1,4 @@
//go:build !nautilus
// +build !nautilus

package rbd

View File

@ -1,4 +1,6 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic

//
// Ceph Nautilus is the first release that includes rbd_pool_metadata_get(),
// rbd_pool_metadata_set() and rbd_pool_metadata_remove().

View File

@ -49,6 +49,8 @@ const (
// Timespec is a public type for the internal C 'struct timespec'
type Timespec ts.Timespec

// revive:disable:var-naming old-yet-exported public api

// ImageInfo represents the status information for an image.
type ImageInfo struct {
	Size uint64
@ -58,6 +60,8 @@ type ImageInfo struct {
	Block_name_prefix string
}

// revive:enable:var-naming

// SnapInfo represents the status information for a snapshot.
type SnapInfo struct {
	Id uint64
@ -125,9 +129,9 @@ func (image *Image) validate(req uint32) error {
// Version returns the major, minor, and patch level of the librbd library.
func Version() (int, int, int) {
-	var c_major, c_minor, c_patch C.int
-	C.rbd_version(&c_major, &c_minor, &c_patch)
-	return int(c_major), int(c_minor), int(c_patch)
+	var cMajor, cMinor, cPatch C.int
+	C.rbd_version(&cMajor, &cMinor, &cPatch)
+	return int(cMajor), int(cMinor), int(cPatch)
}

// GetImage gets a reference to a previously created rbd image.
@ -160,13 +164,13 @@ func Create(ioctx *rados.IOContext, name string, size uint64, order int,
	case 1:
		return Create2(ioctx, name, size, args[0], order)
	case 0:
-		c_order := C.int(order)
-		c_name := C.CString(name)
-		defer C.free(unsafe.Pointer(c_name))
+		cOrder := C.int(order)
+		cName := C.CString(name)
+		defer C.free(unsafe.Pointer(cName))
		ret = C.rbd_create(cephIoctx(ioctx),
-			c_name, C.uint64_t(size), &c_order)
+			cName, C.uint64_t(size), &cOrder)
	default:
		return nil, errors.New("Wrong number of argument")
	}
@ -190,13 +194,13 @@ func Create2(ioctx *rados.IOContext, name string, size uint64, features uint64,
	order int) (image *Image, err error) {
	var ret C.int

-	c_order := C.int(order)
-	c_name := C.CString(name)
-	defer C.free(unsafe.Pointer(c_name))
+	cOrder := C.int(order)
+	cName := C.CString(name)
+	defer C.free(unsafe.Pointer(cName))

-	ret = C.rbd_create2(cephIoctx(ioctx), c_name,
-		C.uint64_t(size), C.uint64_t(features), &c_order)
+	ret = C.rbd_create2(cephIoctx(ioctx), cName,
+		C.uint64_t(size), C.uint64_t(features), &cOrder)
	if ret < 0 {
		return nil, rbdError(ret)
	}
@ -215,17 +219,17 @@ func Create2(ioctx *rados.IOContext, name string, size uint64, features uint64,
// uint64_t features, int *order,
// uint64_t stripe_unit, uint64_t stripe_count);
func Create3(ioctx *rados.IOContext, name string, size uint64, features uint64,
-	order int, stripe_unit uint64, stripe_count uint64) (image *Image, err error) {
+	order int, stripeUnit uint64, stripeCount uint64) (image *Image, err error) {
	var ret C.int

-	c_order := C.int(order)
-	c_name := C.CString(name)
-	defer C.free(unsafe.Pointer(c_name))
+	cOrder := C.int(order)
+	cName := C.CString(name)
+	defer C.free(unsafe.Pointer(cName))

-	ret = C.rbd_create3(cephIoctx(ioctx), c_name,
-		C.uint64_t(size), C.uint64_t(features), &c_order,
-		C.uint64_t(stripe_unit), C.uint64_t(stripe_count))
+	ret = C.rbd_create3(cephIoctx(ioctx), cName,
+		C.uint64_t(size), C.uint64_t(features), &cOrder,
+		C.uint64_t(stripeUnit), C.uint64_t(stripeCount))
	if ret < 0 {
		return nil, rbdError(ret)
	}
@ -242,31 +246,35 @@ func Create3(ioctx *rados.IOContext, name string, size uint64, features uint64,
// int rbd_clone(rados_ioctx_t p_ioctx, const char *p_name,
// const char *p_snapname, rados_ioctx_t c_ioctx,
// const char *c_name, uint64_t features, int *c_order);
-func (image *Image) Clone(snapname string, c_ioctx *rados.IOContext, c_name string, features uint64, order int) (*Image, error) {
+func (image *Image) Clone(snapname string, cIoctx *rados.IOContext, cName string, features uint64, order int) (*Image, error) {
	if err := image.validate(imageNeedsIOContext); err != nil {
		return nil, err
	}

-	c_order := C.int(order)
-	c_p_name := C.CString(image.name)
-	c_p_snapname := C.CString(snapname)
-	c_c_name := C.CString(c_name)
-	defer C.free(unsafe.Pointer(c_p_name))
-	defer C.free(unsafe.Pointer(c_p_snapname))
-	defer C.free(unsafe.Pointer(c_c_name))
+	cOrder := C.int(order)
+	cParentName := C.CString(image.name)
+	cParentSnapName := C.CString(snapname)
+	cCloneName := C.CString(cName)
+	defer C.free(unsafe.Pointer(cParentName))
+	defer C.free(unsafe.Pointer(cParentSnapName))
+	defer C.free(unsafe.Pointer(cCloneName))

-	ret := C.rbd_clone(cephIoctx(image.ioctx),
-		c_p_name, c_p_snapname,
-		cephIoctx(c_ioctx),
-		c_c_name, C.uint64_t(features), &c_order)
+	ret := C.rbd_clone(
+		cephIoctx(image.ioctx),
+		cParentName,
+		cParentSnapName,
+		cephIoctx(cIoctx),
+		cCloneName,
+		C.uint64_t(features),
+		&cOrder)
	if ret < 0 {
		return nil, rbdError(ret)
	}

	return &Image{
-		ioctx: c_ioctx,
-		name:  c_name,
+		ioctx: cIoctx,
+		name:  cName,
	}, nil
}
@ -288,10 +296,10 @@ func (image *Image) Trash(delay time.Duration) error {
		return err
	}

-	c_name := C.CString(image.name)
-	defer C.free(unsafe.Pointer(c_name))
+	cName := C.CString(image.name)
+	defer C.free(unsafe.Pointer(cName))

-	return getError(C.rbd_trash_move(cephIoctx(image.ioctx), c_name,
+	return getError(C.rbd_trash_move(cephIoctx(image.ioctx), cName,
		C.uint64_t(delay.Seconds())))
}
@ -304,14 +312,14 @@ func (image *Image) Rename(destname string) error {
		return err
	}

-	c_srcname := C.CString(image.name)
-	c_destname := C.CString(destname)
-	defer C.free(unsafe.Pointer(c_srcname))
-	defer C.free(unsafe.Pointer(c_destname))
+	cSrcName := C.CString(image.name)
+	cDestName := C.CString(destname)
+	defer C.free(unsafe.Pointer(cSrcName))
+	defer C.free(unsafe.Pointer(cDestName))

	err := rbdError(C.rbd_rename(cephIoctx(image.ioctx),
-		c_srcname, c_destname))
+		cSrcName, cDestName))
	if err == 0 {
		image.name = destname
		return nil
@ -321,9 +329,7 @@ func (image *Image) Rename(destname string) error {
// Open the rbd image.
//
-// Deprecated: The Open function was provided in earlier versions of the API
-// and now exists to support older code. The use of OpenImage and
-// OpenImageReadOnly is preferred.
+// Deprecated: use OpenImage and OpenImageReadOnly instead
func (image *Image) Open(args ...interface{}) error {
	if err := image.validate(imageNeedsIOContext | imageNeedsName); err != nil {
		return err
@ -399,37 +405,37 @@ func (image *Image) Stat() (info *ImageInfo, err error) {
		return nil, err
	}

-	var c_stat C.rbd_image_info_t
-	if ret := C.rbd_stat(image.image, &c_stat, C.size_t(unsafe.Sizeof(info))); ret < 0 {
+	var cStat C.rbd_image_info_t
+	if ret := C.rbd_stat(image.image, &cStat, C.size_t(unsafe.Sizeof(info))); ret < 0 {
		return info, rbdError(ret)
	}

	return &ImageInfo{
-		Size:              uint64(c_stat.size),
-		Obj_size:          uint64(c_stat.obj_size),
-		Num_objs:          uint64(c_stat.num_objs),
-		Order:             int(c_stat.order),
-		Block_name_prefix: C.GoString((*C.char)(&c_stat.block_name_prefix[0]))}, nil
+		Size:              uint64(cStat.size),
+		Obj_size:          uint64(cStat.obj_size),
+		Num_objs:          uint64(cStat.num_objs),
+		Order:             int(cStat.order),
+		Block_name_prefix: C.GoString((*C.char)(&cStat.block_name_prefix[0]))}, nil
}

// IsOldFormat returns true if the rbd image uses the old format.
//
// Implements:
// int rbd_get_old_format(rbd_image_t image, uint8_t *old);
-func (image *Image) IsOldFormat() (old_format bool, err error) {
+func (image *Image) IsOldFormat() (bool, error) {
	if err := image.validate(imageIsOpen); err != nil {
		return false, err
	}

-	var c_old_format C.uint8_t
+	var cOldFormat C.uint8_t
	ret := C.rbd_get_old_format(image.image,
-		&c_old_format)
+		&cOldFormat)
	if ret < 0 {
		return false, rbdError(ret)
	}

-	return c_old_format != 0, nil
+	return cOldFormat != 0, nil
}

// GetSize returns the size of the rbd image.
@ -452,32 +458,34 @@ func (image *Image) GetSize() (size uint64, err error) {
//
// Implements:
// int rbd_get_stripe_unit(rbd_image_t image, uint64_t *stripe_unit);
-func (image *Image) GetStripeUnit() (stripe_unit uint64, err error) {
+func (image *Image) GetStripeUnit() (uint64, error) {
	if err := image.validate(imageIsOpen); err != nil {
		return 0, err
	}

-	if ret := C.rbd_get_stripe_unit(image.image, (*C.uint64_t)(&stripe_unit)); ret < 0 {
+	var stripeUnit uint64
+	if ret := C.rbd_get_stripe_unit(image.image, (*C.uint64_t)(&stripeUnit)); ret < 0 {
		return 0, rbdError(ret)
	}

-	return stripe_unit, nil
+	return stripeUnit, nil
}

// GetStripeCount returns the stripe-count value for the rbd image.
//
// Implements:
// int rbd_get_stripe_count(rbd_image_t image, uint64_t *stripe_count);
-func (image *Image) GetStripeCount() (stripe_count uint64, err error) {
+func (image *Image) GetStripeCount() (uint64, error) {
	if err := image.validate(imageIsOpen); err != nil {
		return 0, err
	}

-	if ret := C.rbd_get_stripe_count(image.image, (*C.uint64_t)(&stripe_count)); ret < 0 {
+	var stripeCount uint64
+	if ret := C.rbd_get_stripe_count(image.image, (*C.uint64_t)(&stripeCount)); ret < 0 {
		return 0, rbdError(ret)
	}

-	return stripe_count, nil
+	return stripeCount, nil
}

// GetOverlap returns the overlapping bytes between the rbd image and its
@ -510,11 +518,11 @@ func (image *Image) Copy(ioctx *rados.IOContext, destname string) error {
		return ErrNoName
	}

-	c_destname := C.CString(destname)
-	defer C.free(unsafe.Pointer(c_destname))
+	cDestName := C.CString(destname)
+	defer C.free(unsafe.Pointer(cDestName))

	return getError(C.rbd_copy(image.image,
-		cephIoctx(ioctx), c_destname))
+		cephIoctx(ioctx), cDestName))
}

// Copy2 copies one rbd image to another, using an image handle.
@ -583,55 +591,55 @@ func (image *Image) ListLockers() (tag string, lockers []Locker, err error) {
		return "", nil, err
	}

-	var c_exclusive C.int
-	var c_tag_len, c_clients_len, c_cookies_len, c_addrs_len C.size_t
-	var c_locker_cnt C.ssize_t
+	var cExclusive C.int
+	var cTagLen, cClientsLen, cCookiesLen, cAddrsLen C.size_t
+	var cLockerCount C.ssize_t

-	C.rbd_list_lockers(image.image, &c_exclusive,
-		nil, (*C.size_t)(&c_tag_len),
-		nil, (*C.size_t)(&c_clients_len),
-		nil, (*C.size_t)(&c_cookies_len),
-		nil, (*C.size_t)(&c_addrs_len))
+	C.rbd_list_lockers(image.image, &cExclusive,
+		nil, (*C.size_t)(&cTagLen),
+		nil, (*C.size_t)(&cClientsLen),
+		nil, (*C.size_t)(&cCookiesLen),
+		nil, (*C.size_t)(&cAddrsLen))

	// no locker held on rbd image when either c_clients_len,
	// c_cookies_len or c_addrs_len is *0*, so just quickly returned
-	if int(c_clients_len) == 0 || int(c_cookies_len) == 0 ||
-		int(c_addrs_len) == 0 {
+	if int(cClientsLen) == 0 || int(cCookiesLen) == 0 ||
+		int(cAddrsLen) == 0 {
		lockers = make([]Locker, 0)
		return "", lockers, nil
	}

-	tag_buf := make([]byte, c_tag_len)
-	clients_buf := make([]byte, c_clients_len)
-	cookies_buf := make([]byte, c_cookies_len)
-	addrs_buf := make([]byte, c_addrs_len)
+	tagBuf := make([]byte, cTagLen)
+	clientsBuf := make([]byte, cClientsLen)
+	cookiesBuf := make([]byte, cCookiesLen)
+	addrsBuf := make([]byte, cAddrsLen)

-	c_locker_cnt = C.rbd_list_lockers(image.image, &c_exclusive,
-		(*C.char)(unsafe.Pointer(&tag_buf[0])), (*C.size_t)(&c_tag_len),
-		(*C.char)(unsafe.Pointer(&clients_buf[0])), (*C.size_t)(&c_clients_len),
-		(*C.char)(unsafe.Pointer(&cookies_buf[0])), (*C.size_t)(&c_cookies_len),
-		(*C.char)(unsafe.Pointer(&addrs_buf[0])), (*C.size_t)(&c_addrs_len))
+	cLockerCount = C.rbd_list_lockers(image.image, &cExclusive,
+		(*C.char)(unsafe.Pointer(&tagBuf[0])), (*C.size_t)(&cTagLen),
+		(*C.char)(unsafe.Pointer(&clientsBuf[0])), (*C.size_t)(&cClientsLen),
+		(*C.char)(unsafe.Pointer(&cookiesBuf[0])), (*C.size_t)(&cCookiesLen),
+		(*C.char)(unsafe.Pointer(&addrsBuf[0])), (*C.size_t)(&cAddrsLen))

	// rbd_list_lockers returns negative value for errors
	// and *0* means no locker held on rbd image.
	// but *0* is unexpected here because first rbd_list_lockers already
	// dealt with no locker case
-	if int(c_locker_cnt) <= 0 {
-		return "", nil, rbdError(c_locker_cnt)
+	if int(cLockerCount) <= 0 {
+		return "", nil, rbdError(cLockerCount)
	}

-	clients := cutil.SplitSparseBuffer(clients_buf)
-	cookies := cutil.SplitSparseBuffer(cookies_buf)
-	addrs := cutil.SplitSparseBuffer(addrs_buf)
+	clients := cutil.SplitSparseBuffer(clientsBuf)
+	cookies := cutil.SplitSparseBuffer(cookiesBuf)
+	addrs := cutil.SplitSparseBuffer(addrsBuf)

-	lockers = make([]Locker, c_locker_cnt)
-	for i := 0; i < int(c_locker_cnt); i++ {
+	lockers = make([]Locker, cLockerCount)
+	for i := 0; i < int(cLockerCount); i++ {
		lockers[i] = Locker{Client: clients[i],
			Cookie: cookies[i],
			Addr:   addrs[i]}
	}

-	return string(tag_buf), lockers, nil
+	return string(tagBuf), lockers, nil
}

// LockExclusive acquires an exclusive lock on the rbd image.
@ -643,10 +651,10 @@ func (image *Image) LockExclusive(cookie string) error {
		return err
	}

-	c_cookie := C.CString(cookie)
-	defer C.free(unsafe.Pointer(c_cookie))
+	cCookie := C.CString(cookie)
+	defer C.free(unsafe.Pointer(cCookie))

-	return getError(C.rbd_lock_exclusive(image.image, c_cookie))
+	return getError(C.rbd_lock_exclusive(image.image, cCookie))
}

// LockShared acquires a shared lock on the rbd image.
@ -658,12 +666,12 @@ func (image *Image) LockShared(cookie string, tag string) error {
		return err
	}

-	c_cookie := C.CString(cookie)
-	c_tag := C.CString(tag)
-	defer C.free(unsafe.Pointer(c_cookie))
-	defer C.free(unsafe.Pointer(c_tag))
+	cCookie := C.CString(cookie)
+	cTag := C.CString(tag)
+	defer C.free(unsafe.Pointer(cCookie))
+	defer C.free(unsafe.Pointer(cTag))

-	return getError(C.rbd_lock_shared(image.image, c_cookie, c_tag))
+	return getError(C.rbd_lock_shared(image.image, cCookie, cTag))
}

// Unlock releases a lock on the image.
@ -675,10 +683,10 @@ func (image *Image) Unlock(cookie string) error {
		return err
	}

-	c_cookie := C.CString(cookie)
-	defer C.free(unsafe.Pointer(c_cookie))
+	cCookie := C.CString(cookie)
+	defer C.free(unsafe.Pointer(cCookie))

-	return getError(C.rbd_unlock(image.image, c_cookie))
+	return getError(C.rbd_unlock(image.image, cCookie))
}

// BreakLock forces the release of a lock held by another client.
@ -690,12 +698,12 @@ func (image *Image) BreakLock(client string, cookie string) error {
		return err
	}

-	c_client := C.CString(client)
-	c_cookie := C.CString(cookie)
-	defer C.free(unsafe.Pointer(c_client))
-	defer C.free(unsafe.Pointer(c_cookie))
+	cClient := C.CString(client)
+	cCookie := C.CString(cookie)
+	defer C.free(unsafe.Pointer(cClient))
+	defer C.free(unsafe.Pointer(cCookie))

-	return getError(C.rbd_break_lock(image.image, c_client, c_cookie))
+	return getError(C.rbd_break_lock(image.image, cClient, cCookie))
}

// ssize_t rbd_read(rbd_image_t image, uint64_t ofs, size_t len, char *buf);
@ -891,26 +899,26 @@ func (image *Image) GetSnapshotNames() (snaps []SnapInfo, err error) {
		return nil, err
	}

-	var c_max_snaps C.int
-	ret := C.rbd_snap_list(image.image, nil, &c_max_snaps)
+	var cMaxSnaps C.int
+	ret := C.rbd_snap_list(image.image, nil, &cMaxSnaps)

-	c_snaps := make([]C.rbd_snap_info_t, c_max_snaps)
-	snaps = make([]SnapInfo, c_max_snaps)
+	cSnaps := make([]C.rbd_snap_info_t, cMaxSnaps)
+	snaps = make([]SnapInfo, cMaxSnaps)

	ret = C.rbd_snap_list(image.image,
-		&c_snaps[0], &c_max_snaps)
+		&cSnaps[0], &cMaxSnaps)
	if ret < 0 {
		return nil, rbdError(ret)
	}

-	for i, s := range c_snaps {
+	for i, s := range cSnaps {
		snaps[i] = SnapInfo{Id: uint64(s.id),
			Size: uint64(s.size),
			Name: C.GoString(s.name)}
	}

-	C.rbd_snap_list_end(&c_snaps[0])
+	C.rbd_snap_list_end(&cSnaps[0])
	return snaps[:len(snaps)-1], nil
}
@ -953,10 +961,10 @@ func (image *Image) SetSnapshot(snapname string) error {
		return err
	}

-	c_snapname := C.CString(snapname)
-	defer C.free(unsafe.Pointer(c_snapname))
+	cSnapName := C.CString(snapname)
+	defer C.free(unsafe.Pointer(cSnapName))

-	return getError(C.rbd_snap_set(image.image, c_snapname))
+	return getError(C.rbd_snap_set(image.image, cSnapName))
}

// GetTrashList returns a slice of TrashInfo structs, containing information about all RBD images
@ -994,21 +1002,21 @@ func GetTrashList(ioctx *rados.IOContext) ([]TrashInfo, error) {
// TrashRemove permanently deletes the trashed RBD with the specified id.
func TrashRemove(ioctx *rados.IOContext, id string, force bool) error {
-	c_id := C.CString(id)
-	defer C.free(unsafe.Pointer(c_id))
+	cid := C.CString(id)
+	defer C.free(unsafe.Pointer(cid))

-	return getError(C.rbd_trash_remove(cephIoctx(ioctx), c_id, C.bool(force)))
+	return getError(C.rbd_trash_remove(cephIoctx(ioctx), cid, C.bool(force)))
}

// TrashRestore restores the trashed RBD with the specified id back to the pool from whence it
// came, with the specified new name.
func TrashRestore(ioctx *rados.IOContext, id, name string) error {
-	c_id := C.CString(id)
-	c_name := C.CString(name)
-	defer C.free(unsafe.Pointer(c_id))
-	defer C.free(unsafe.Pointer(c_name))
+	cid := C.CString(id)
+	cName := C.CString(name)
+	defer C.free(unsafe.Pointer(cid))
+	defer C.free(unsafe.Pointer(cName))

-	return getError(C.rbd_trash_restore(cephIoctx(ioctx), c_id, c_name))
+	return getError(C.rbd_trash_restore(cephIoctx(ioctx), cid, cName))
}

// OpenImage will open an existing rbd image by name and snapshot name,
@ -1199,10 +1207,10 @@ func CreateImage(ioctx *rados.IOContext, name string, size uint64, rio *ImageOpt
		return rbdError(C.EINVAL)
	}

-	c_name := C.CString(name)
-	defer C.free(unsafe.Pointer(c_name))
+	cName := C.CString(name)
+	defer C.free(unsafe.Pointer(cName))

-	ret := C.rbd_create4(cephIoctx(ioctx), c_name,
+	ret := C.rbd_create4(cephIoctx(ioctx), cName,
		C.uint64_t(size), C.rbd_image_options_t(rio.options))

	return getError(ret)
}
@ -1219,9 +1227,9 @@ func RemoveImage(ioctx *rados.IOContext, name string) error {
		return ErrNoName
	}

-	c_name := C.CString(name)
-	defer C.free(unsafe.Pointer(c_name))
+	cName := C.CString(name)
+	defer C.free(unsafe.Pointer(cName))

-	return getError(C.rbd_remove(cephIoctx(ioctx), c_name))
+	return getError(C.rbd_remove(cephIoctx(ioctx), cName))
}

// CloneImage creates a clone of the image from the named snapshot in the
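
Tying the renamed helpers together, a sketch that creates and opens an image; the image name is a placeholder and `rbd.NoSnapshot` is the package's no-snapshot sentinel:

```go
package main

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
	"github.com/ceph/go-ceph/rbd"
)

// createAndOpen provisions a 1 GiB format-2 image and opens it.
func createAndOpen(ioctx *rados.IOContext) error {
	opts := rbd.NewRbdImageOptions()
	defer opts.Destroy()
	if err := opts.SetUint64(rbd.ImageOptionFormat, 2); err != nil {
		return err
	}

	if err := rbd.CreateImage(ioctx, "img1", 1<<30, opts); err != nil {
		return err
	}
	img, err := rbd.OpenImage(ioctx, "img1", rbd.NoSnapshot)
	if err != nil {
		return err
	}
	defer img.Close()

	size, err := img.GetSize()
	if err != nil {
		return err
	}
	fmt.Println("size:", size)
	return nil
}
```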

View File

@ -1,4 +1,6 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic

//
// Ceph Nautilus is the first release that includes rbd_list2() and
// rbd_get_create_timestamp().

View File

@ -27,10 +27,10 @@ func (image *Image) CreateSnapshot(snapname string) (*Snapshot, error) {
		return nil, err
	}

-	c_snapname := C.CString(snapname)
-	defer C.free(unsafe.Pointer(c_snapname))
+	cSnapName := C.CString(snapname)
+	defer C.free(unsafe.Pointer(cSnapName))

-	ret := C.rbd_snap_create(image.image, c_snapname)
+	ret := C.rbd_snap_create(image.image, cSnapName)
	if ret < 0 {
		return nil, rbdError(ret)
	}
@ -72,10 +72,10 @@ func (snapshot *Snapshot) Remove() error {
		return err
	}

-	c_snapname := C.CString(snapshot.name)
-	defer C.free(unsafe.Pointer(c_snapname))
+	cSnapName := C.CString(snapshot.name)
+	defer C.free(unsafe.Pointer(cSnapName))

-	return getError(C.rbd_snap_remove(snapshot.image.image, c_snapname))
+	return getError(C.rbd_snap_remove(snapshot.image.image, cSnapName))
}

// Rollback the image to the snapshot.
@ -87,10 +87,10 @@ func (snapshot *Snapshot) Rollback() error {
		return err
	}

-	c_snapname := C.CString(snapshot.name)
-	defer C.free(unsafe.Pointer(c_snapname))
+	cSnapName := C.CString(snapshot.name)
+	defer C.free(unsafe.Pointer(cSnapName))

-	return getError(C.rbd_snap_rollback(snapshot.image.image, c_snapname))
+	return getError(C.rbd_snap_rollback(snapshot.image.image, cSnapName))
}

// Protect a snapshot from unwanted deletion.
@ -102,10 +102,10 @@ func (snapshot *Snapshot) Protect() error {
		return err
	}

-	c_snapname := C.CString(snapshot.name)
-	defer C.free(unsafe.Pointer(c_snapname))
+	cSnapName := C.CString(snapshot.name)
+	defer C.free(unsafe.Pointer(cSnapName))

-	return getError(C.rbd_snap_protect(snapshot.image.image, c_snapname))
+	return getError(C.rbd_snap_protect(snapshot.image.image, cSnapName))
}

// Unprotect stops protecting the snapshot.
@ -117,10 +117,10 @@ func (snapshot *Snapshot) Unprotect() error {
		return err
	}

-	c_snapname := C.CString(snapshot.name)
-	defer C.free(unsafe.Pointer(c_snapname))
+	cSnapName := C.CString(snapshot.name)
+	defer C.free(unsafe.Pointer(cSnapName))

-	return getError(C.rbd_snap_unprotect(snapshot.image.image, c_snapname))
+	return getError(C.rbd_snap_unprotect(snapshot.image.image, cSnapName))
}

// IsProtected returns true if the snapshot is currently protected.
@ -133,23 +133,24 @@ func (snapshot *Snapshot) IsProtected() (bool, error) {
		return false, err
	}

-	var c_is_protected C.int
+	var cIsProtected C.int

-	c_snapname := C.CString(snapshot.name)
-	defer C.free(unsafe.Pointer(c_snapname))
+	cSnapName := C.CString(snapshot.name)
+	defer C.free(unsafe.Pointer(cSnapName))

-	ret := C.rbd_snap_is_protected(snapshot.image.image, c_snapname,
-		&c_is_protected)
+	ret := C.rbd_snap_is_protected(snapshot.image.image, cSnapName,
+		&cIsProtected)
	if ret < 0 {
		return false, rbdError(ret)
	}

-	return c_is_protected != 0, nil
+	return cIsProtected != 0, nil
}

-// Set updates the rbd image (not the Snapshot) such that the snapshot
-// is the source of readable data.
-// This method is deprecated. Refer the SetSnapshot method of the Image type instead.
+// Set updates the rbd image (not the Snapshot) such that the snapshot is the
+// source of readable data.
+//
+// Deprecated: use the SetSnapshot method of the Image type instead
//
// Implements:
// int rbd_snap_set(rbd_image_t image, const char *snapname);
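
A sketch of the snapshot lifecycle using the calls above, assuming an open `*rbd.Image`; the snapshot name is a placeholder:

```go
package main

import "github.com/ceph/go-ceph/rbd"

// snapshotCycle takes a protected snapshot and then tears it down again.
func snapshotCycle(img *rbd.Image) error {
	snap, err := img.CreateSnapshot("before-upgrade")
	if err != nil {
		return err
	}
	if err := snap.Protect(); err != nil {
		return err
	}
	// ... use the snapshot ...
	if err := snap.Unprotect(); err != nil {
		return err
	}
	return snap.Remove()
}
```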

View File

@ -1,4 +1,6 @@
//go:build !luminous
// +build !luminous

//
// Ceph Mimic introduced rbd_snap_get_namespace_type().

View File

@ -1,4 +1,6 @@
//go:build !luminous && !mimic
// +build !luminous,!mimic

//
// Ceph Nautilus introduced rbd_get_parent() and deprecated rbd_get_parent_info().
// Ceph Nautilus introduced rbd_list_children3() and deprecated rbd_list_children().

View File

@ -1,3 +1,4 @@
//go:build !nautilus
// +build !nautilus

package rbd

View File

@ -44,6 +44,8 @@ func Wrap(outer, inner error) error {
//
// format is the format of the error message. The string '{{err}}' will
// be replaced with the original error message.
//
// Deprecated: Use fmt.Errorf()
func Wrapf(format string, err error) error {
	outerMsg := "<nil>"
	if err != nil {
@ -148,6 +150,9 @@ func Walk(err error, cb WalkFunc) {
		for _, err := range e.WrappedErrors() {
			Walk(err, cb)
		}
	case interface{ Unwrap() error }:
		cb(err)
		Walk(e.Unwrap(), cb)
	default:
		cb(err)
	}
@ -167,3 +172,7 @@ func (w *wrappedError) Error() string {
func (w *wrappedError) WrappedErrors() []error {
	return []error{w.Outer, w.Inner}
}

func (w *wrappedError) Unwrap() error {
	return w.Inner
}
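
With `Unwrap` in place, wrapped errors become transparent to the standard library's `errors` helpers; a minimal demonstration:

```go
package main

import (
	"errors"
	"fmt"
	"os"

	"github.com/hashicorp/errwrap"
)

func main() {
	wrapped := errwrap.Wrapf("loading config: {{err}}", os.ErrNotExist)
	// errors.Is walks the chain via the new Unwrap method.
	fmt.Println(errors.Is(wrapped, os.ErrNotExist)) // true
}
```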

View File

@ -295,6 +295,9 @@ func (l *intLogger) logPlain(t time.Time, name string, level Level, msg string,
			continue FOR
		case Format:
			val = fmt.Sprintf(st[0].(string), st[1:]...)
		case Quote:
			raw = true
			val = strconv.Quote(string(st))
		default:
			v := reflect.ValueOf(st)
			if v.Kind() == reflect.Slice {

View File

@ -67,6 +67,12 @@ type Octal int
// text output. For example: L.Info("bits", Binary(17))
type Binary int

// A simple shortcut to format strings with Go quoting. Control and
// non-printable characters will be escaped with their backslash equivalents in
// output. Intended for untrusted or multiline strings which should be logged
// as concisely as possible.
type Quote string

// ColorOption expresses how the output should be colored, if at all.
type ColorOption uint8
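
A minimal sketch of `Quote` in use; without it, the embedded newline would be emitted verbatim into the log stream:

```go
package main

import "github.com/hashicorp/go-hclog"

func main() {
	logger := hclog.New(&hclog.LoggerOptions{Name: "demo"})
	// The value is rendered as a single escaped Go string.
	logger.Info("command finished", "stdout", hclog.Quote("line1\nline2"))
}
```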

View File

@ -64,7 +64,7 @@ func (s *stdlogAdapter) pickLevel(str string) (Level, string) {
case strings.HasPrefix(str, "[INFO]"): case strings.HasPrefix(str, "[INFO]"):
return Info, strings.TrimSpace(str[6:]) return Info, strings.TrimSpace(str[6:])
case strings.HasPrefix(str, "[WARN]"): case strings.HasPrefix(str, "[WARN]"):
return Warn, strings.TrimSpace(str[7:]) return Warn, strings.TrimSpace(str[6:])
case strings.HasPrefix(str, "[ERROR]"): case strings.HasPrefix(str, "[ERROR]"):
return Error, strings.TrimSpace(str[7:]) return Error, strings.TrimSpace(str[7:])
case strings.HasPrefix(str, "[ERR]"): case strings.HasPrefix(str, "[ERR]"):

View File

@ -0,0 +1,24 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof

View File

@ -0,0 +1,3 @@
language: go
go:
- tip

View File

@ -0,0 +1,9 @@
# 1.1.0 (May 22nd, 2019)
FEATURES
* Add `SeekLowerBound` to allow for range scans. [[GH-24](https://github.com/hashicorp/go-immutable-radix/pull/24)]
# 1.0.0 (August 30th, 2018)
* go mod adopted

vendor/github.com/hashicorp/go-immutable-radix/LICENSE (generated, vendored, new file, 363 lines)
View File

@ -0,0 +1,363 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. "Contributor"
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. "Incompatible With Secondary Licenses"
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the terms of
a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in a
separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible, whether
at the time of the initial grant or subsequently, any and all of the
rights conveyed by this License.
1.10. "Modifications"
means any of the following:
a. any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the License,
by the making, using, selling, offering for sale, having made, import,
or transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, "control" means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights to
grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter the
recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty, or
limitations of liability) contained within the Source Code Form of the
Covered Software, except that You may alter any license notices to the
extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute,
judicial order, or regulation then You must: (a) comply with the terms of
this License to the maximum extent possible; and (b) describe the
limitations and the code they affect. Such description must be placed in a
text file included with all distributions of the Covered Software under
this License. Except to the extent prohibited by statute or regulation,
such description must be sufficiently detailed for a recipient of ordinary
skill to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing
basis, if such Contributor fails to notify You of the non-compliance by
some reasonable means prior to 60 days after You have come back into
compliance. Moreover, Your grants from a particular Contributor are
reinstated on an ongoing basis if such Contributor notifies You of the
non-compliance by some reasonable means, this is the first time You have
received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt
of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an "as is" basis,
without warranty of any kind, either expressed, implied, or statutory,
including, without limitation, warranties that the Covered Software is free
of defects, merchantable, fit for a particular purpose or non-infringing.
The entire risk as to the quality and performance of the Covered Software
is with You. Should any Covered Software prove defective in any respect,
You (not any Contributor) assume the cost of any necessary servicing,
repair, or correction. This disclaimer of warranty constitutes an essential
part of this License. No use of any Covered Software is authorized under
this License except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from
such party's negligence to the extent applicable law prohibits such
limitation. Some jurisdictions do not allow the exclusion or limitation of
incidental or consequential damages, so this exclusion and limitation may
not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts
of a jurisdiction where the defendant maintains its principal place of
business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions. Nothing
in this Section shall prevent a party's ability to bring cross-claims or
counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides that
the language of a contract shall be construed against the drafter shall not
be used to construe this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses If You choose to distribute Source Code Form that is
Incompatible With Secondary Licenses under the terms of this version of
the License, the notice described in Exhibit B of this License must be
attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file,
then You may include the notice in a location (such as a LICENSE file in a
relevant directory) where a recipient would be likely to look for such a
notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible
With Secondary Licenses", as defined by
the Mozilla Public License, v. 2.0.

View File

@ -0,0 +1,66 @@
go-immutable-radix [![Build Status](https://travis-ci.org/hashicorp/go-immutable-radix.png)](https://travis-ci.org/hashicorp/go-immutable-radix)
=========
Provides the `iradix` package that implements an immutable [radix tree](http://en.wikipedia.org/wiki/Radix_tree).
The package only provides a single `Tree` implementation, optimized for sparse nodes.
As a radix tree, it provides the following:
* O(k) operations. In many cases, this can be faster than a hash table since
the hash function is an O(k) operation, and hash tables have very poor cache locality.
* Minimum / Maximum value lookups
* Ordered iteration
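For example, here is a minimal sketch of the minimum/maximum lookups and ordered iteration, using the `Minimum`, `Maximum`, and `Iterator` methods defined on the root `Node`:

```go
r := iradix.New()
r, _, _ = r.Insert([]byte("a"), 1)
r, _, _ = r.Insert([]byte("b"), 2)

// Minimum and Maximum return the smallest and largest keys present.
minKey, _, _ := r.Root().Minimum()
maxKey, _, _ := r.Root().Maximum()
fmt.Println(string(minKey), string(maxKey)) // a b

// The iterator walks keys in lexicographic order.
it := r.Root().Iterator()
for key, _, ok := it.Next(); ok; key, _, ok = it.Next() {
	fmt.Println(string(key))
}
```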
A tree supports using a transaction to batch multiple updates (insert, delete)
in a more efficient manner than performing each operation one at a time.
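For instance, here is a minimal sketch of the transaction flow (`Txn`, staged `Insert`/`Delete` calls, then `Commit`):

```go
r := iradix.New()

// Stage several updates; readers of the old tree are unaffected.
txn := r.Txn()
txn.Insert([]byte("x"), 1)
txn.Insert([]byte("y"), 2)
txn.Delete([]byte("x"))

// Commit materializes a new immutable tree containing the whole batch.
r = txn.Commit()
fmt.Println(r.Len()) // 1
```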
For a mutable variant, see [go-radix](https://github.com/armon/go-radix).
Documentation
=============
The full documentation is available on [Godoc](http://godoc.org/github.com/hashicorp/go-immutable-radix).
Example
=======
Below is a simple example of usage
```go
// Create a tree
r := iradix.New()
r, _, _ = r.Insert([]byte("foo"), 1)
r, _, _ = r.Insert([]byte("bar"), 2)
r, _, _ = r.Insert([]byte("foobar"), 2)
// Find the longest prefix match
m, _, _ := r.Root().LongestPrefix([]byte("foozip"))
if string(m) != "foo" {
	panic("should be foo")
}
```
Here is an example of performing a range scan of the keys.
```go
// Create a tree
r := iradix.New()
r, _, _ = r.Insert([]byte("001"), 1)
r, _, _ = r.Insert([]byte("002"), 2)
r, _, _ = r.Insert([]byte("005"), 5)
r, _, _ = r.Insert([]byte("010"), 10)
r, _, _ = r.Insert([]byte("100"), 10)
// Range scan over the keys that sort lexicographically between [003, 050)
it := r.Root().Iterator()
it.SeekLowerBound([]byte("003"))
for key, _, ok := it.Next(); ok; key, _, ok = it.Next() {
	if string(key) >= "050" {
		break
	}
	fmt.Println(string(key))
}
// Output:
// 005
// 010
```

View File

@ -0,0 +1,21 @@
package iradix
import "sort"
type edges []edge
func (e edges) Len() int {
return len(e)
}
func (e edges) Less(i, j int) bool {
return e[i].label < e[j].label
}
func (e edges) Swap(i, j int) {
e[i], e[j] = e[j], e[i]
}
func (e edges) Sort() {
sort.Sort(e)
}

View File

@ -0,0 +1,6 @@
module github.com/hashicorp/go-immutable-radix
require (
github.com/hashicorp/go-uuid v1.0.0
github.com/hashicorp/golang-lru v0.5.0
)

View File

@ -0,0 +1,4 @@
github.com/hashicorp/go-uuid v1.0.0 h1:RS8zrF7PhGwyNPOtxSClXXj9HA8feRnJzgnI1RJCSnM=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/golang-lru v0.5.0 h1:CL2msUPvZTLb5O648aiLNJw3hnBxN2+1Jq8rCOH9wdo=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=

View File

@ -0,0 +1,662 @@
package iradix
import (
"bytes"
"strings"
"github.com/hashicorp/golang-lru/simplelru"
)
const (
// defaultModifiedCache is the default size of the modified node
// cache used per transaction. This is used to cache the updates
// to the nodes near the root, while the leaves do not need to be
// cached. This is important for very large transactions to prevent
// the modified cache from growing to be enormous. This is also used
// to set the max size of the mutation notify maps since those should
// also be bounded in a similar way.
defaultModifiedCache = 8192
)
// Tree implements an immutable radix tree. This can be treated as a
// Dictionary abstract data type. The main advantage over a standard
// hash map is prefix-based lookups and ordered iteration. The immutability
// means that it is safe to concurrently read from a Tree without any
// coordination.
type Tree struct {
root *Node
size int
}
// New returns an empty Tree
func New() *Tree {
t := &Tree{
root: &Node{
mutateCh: make(chan struct{}),
},
}
return t
}
// Len is used to return the number of elements in the tree
func (t *Tree) Len() int {
return t.size
}
// Txn is a transaction on the tree. This transaction is applied
// atomically and returns a new tree when committed. A transaction
// is not thread safe, and should only be used by a single goroutine.
type Txn struct {
// root is the modified root for the transaction.
root *Node
// snap is a snapshot of the root node for use if we have to run the
// slow notify algorithm.
snap *Node
// size tracks the size of the tree as it is modified during the
// transaction.
size int
// writable is a cache of writable nodes that have been created during
// the course of the transaction. This allows us to re-use the same
// nodes for further writes and avoid unnecessary copies of nodes that
// have never been exposed outside the transaction. This will only hold
// up to defaultModifiedCache number of entries.
writable *simplelru.LRU
// trackChannels is used to hold channels that need to be notified to
// signal mutation of the tree. This will only hold up to
// defaultModifiedCache number of entries, after which we will set the
// trackOverflow flag, which will cause us to use a more expensive
// algorithm to perform the notifications. Mutation tracking is only
// performed if trackMutate is true.
trackChannels map[chan struct{}]struct{}
trackOverflow bool
trackMutate bool
}
// Txn starts a new transaction that can be used to mutate the tree
func (t *Tree) Txn() *Txn {
txn := &Txn{
root: t.root,
snap: t.root,
size: t.size,
}
return txn
}
// TrackMutate can be used to toggle if mutations are tracked. If this is enabled
// then notifications will be issued for affected internal nodes and leaves when
// the transaction is committed.
func (t *Txn) TrackMutate(track bool) {
t.trackMutate = track
}
// trackChannel safely attempts to track the given mutation channel, setting the
// overflow flag if we can no longer track any more. This limits the amount of
// state that will accumulate during a transaction and we have a slower algorithm
// to switch to if we overflow.
func (t *Txn) trackChannel(ch chan struct{}) {
// In overflow, make sure we don't store any more objects.
if t.trackOverflow {
return
}
// If this would overflow the state we reject it and set the flag (since
// we aren't tracking everything that's required any longer).
if len(t.trackChannels) >= defaultModifiedCache {
// Mark that we are in the overflow state
t.trackOverflow = true
// Clear the map so that the channels can be garbage collected. It is
// safe to do this since we have already overflowed and will be using
// the slow notify algorithm.
t.trackChannels = nil
return
}
// Create the map on the fly when we need it.
if t.trackChannels == nil {
t.trackChannels = make(map[chan struct{}]struct{})
}
// Otherwise we are good to track it.
t.trackChannels[ch] = struct{}{}
}
// writeNode returns a node to be modified. If the current node has already been
// modified during the course of the transaction, it is used in-place. Set
// forLeafUpdate to true if you are getting a write node to update the leaf,
// which will set leaf mutation tracking appropriately as well.
func (t *Txn) writeNode(n *Node, forLeafUpdate bool) *Node {
// Ensure the writable set exists.
if t.writable == nil {
lru, err := simplelru.NewLRU(defaultModifiedCache, nil)
if err != nil {
panic(err)
}
t.writable = lru
}
// If this node has already been modified, we can continue to use it
// during this transaction. We know that we don't need to track it for
// a node update since the node is writable, but if this is for a leaf
// update we track it, in case the initial write to this node didn't
// update the leaf.
if _, ok := t.writable.Get(n); ok {
if t.trackMutate && forLeafUpdate && n.leaf != nil {
t.trackChannel(n.leaf.mutateCh)
}
return n
}
// Mark this node as being mutated.
if t.trackMutate {
t.trackChannel(n.mutateCh)
}
// Mark its leaf as being mutated, if appropriate.
if t.trackMutate && forLeafUpdate && n.leaf != nil {
t.trackChannel(n.leaf.mutateCh)
}
// Copy the existing node. If you have set forLeafUpdate it will be
// safe to replace this leaf with another after you get your node for
// writing. You MUST replace it, because the channel associated with
// this leaf will be closed when this transaction is committed.
nc := &Node{
mutateCh: make(chan struct{}),
leaf: n.leaf,
}
if n.prefix != nil {
nc.prefix = make([]byte, len(n.prefix))
copy(nc.prefix, n.prefix)
}
if len(n.edges) != 0 {
nc.edges = make([]edge, len(n.edges))
copy(nc.edges, n.edges)
}
// Mark this node as writable.
t.writable.Add(nc, nil)
return nc
}
// Visit all the nodes in the tree under n, and add their mutateChannels to the transaction
// Returns the size of the subtree visited
func (t *Txn) trackChannelsAndCount(n *Node) int {
// Count only leaf nodes
leaves := 0
if n.leaf != nil {
leaves = 1
}
// Mark this node as being mutated.
if t.trackMutate {
t.trackChannel(n.mutateCh)
}
// Mark its leaf as being mutated, if appropriate.
if t.trackMutate && n.leaf != nil {
t.trackChannel(n.leaf.mutateCh)
}
// Recurse on the children
for _, e := range n.edges {
leaves += t.trackChannelsAndCount(e.node)
}
return leaves
}
// mergeChild is called to collapse the given node with its child. This is only
// called when the given node is not a leaf and has a single edge.
func (t *Txn) mergeChild(n *Node) {
// Mark the child node as being mutated since we are about to abandon
// it. We don't need to mark the leaf since we are retaining it if it
// is there.
e := n.edges[0]
child := e.node
if t.trackMutate {
t.trackChannel(child.mutateCh)
}
// Merge the nodes.
n.prefix = concat(n.prefix, child.prefix)
n.leaf = child.leaf
if len(child.edges) != 0 {
n.edges = make([]edge, len(child.edges))
copy(n.edges, child.edges)
} else {
n.edges = nil
}
}
// insert does a recursive insertion
func (t *Txn) insert(n *Node, k, search []byte, v interface{}) (*Node, interface{}, bool) {
// Handle key exhaustion
if len(search) == 0 {
var oldVal interface{}
didUpdate := false
if n.isLeaf() {
oldVal = n.leaf.val
didUpdate = true
}
nc := t.writeNode(n, true)
nc.leaf = &leafNode{
mutateCh: make(chan struct{}),
key: k,
val: v,
}
return nc, oldVal, didUpdate
}
// Look for the edge
idx, child := n.getEdge(search[0])
// No edge, create one
if child == nil {
e := edge{
label: search[0],
node: &Node{
mutateCh: make(chan struct{}),
leaf: &leafNode{
mutateCh: make(chan struct{}),
key: k,
val: v,
},
prefix: search,
},
}
nc := t.writeNode(n, false)
nc.addEdge(e)
return nc, nil, false
}
// Determine longest prefix of the search key on match
commonPrefix := longestPrefix(search, child.prefix)
if commonPrefix == len(child.prefix) {
search = search[commonPrefix:]
newChild, oldVal, didUpdate := t.insert(child, k, search, v)
if newChild != nil {
nc := t.writeNode(n, false)
nc.edges[idx].node = newChild
return nc, oldVal, didUpdate
}
return nil, oldVal, didUpdate
}
// Split the node
nc := t.writeNode(n, false)
splitNode := &Node{
mutateCh: make(chan struct{}),
prefix: search[:commonPrefix],
}
nc.replaceEdge(edge{
label: search[0],
node: splitNode,
})
// Restore the existing child node
modChild := t.writeNode(child, false)
splitNode.addEdge(edge{
label: modChild.prefix[commonPrefix],
node: modChild,
})
modChild.prefix = modChild.prefix[commonPrefix:]
// Create a new leaf node
leaf := &leafNode{
mutateCh: make(chan struct{}),
key: k,
val: v,
}
// If the new key is a subset, add it to this node
search = search[commonPrefix:]
if len(search) == 0 {
splitNode.leaf = leaf
return nc, nil, false
}
// Create a new edge for the node
splitNode.addEdge(edge{
label: search[0],
node: &Node{
mutateCh: make(chan struct{}),
leaf: leaf,
prefix: search,
},
})
return nc, nil, false
}
// delete does a recursive deletion
func (t *Txn) delete(parent, n *Node, search []byte) (*Node, *leafNode) {
// Check for key exhaustion
if len(search) == 0 {
if !n.isLeaf() {
return nil, nil
}
// Copy the pointer in case we are in a transaction that already
// modified this node since the node will be reused. Any changes
// made to the node will not affect returning the original leaf
// value.
oldLeaf := n.leaf
// Remove the leaf node
nc := t.writeNode(n, true)
nc.leaf = nil
// Check if this node should be merged
if n != t.root && len(nc.edges) == 1 {
t.mergeChild(nc)
}
return nc, oldLeaf
}
// Look for an edge
label := search[0]
idx, child := n.getEdge(label)
if child == nil || !bytes.HasPrefix(search, child.prefix) {
return nil, nil
}
// Consume the search prefix
search = search[len(child.prefix):]
newChild, leaf := t.delete(n, child, search)
if newChild == nil {
return nil, nil
}
// Copy this node. WATCH OUT - it's safe to pass "false" here because we
// will only ADD a leaf via nc.mergeChild() if there isn't one due to
// the !nc.isLeaf() check in the logic just below. This is pretty subtle,
// so be careful if you change any of the logic here.
nc := t.writeNode(n, false)
// Delete the edge if the node has no edges
if newChild.leaf == nil && len(newChild.edges) == 0 {
nc.delEdge(label)
if n != t.root && len(nc.edges) == 1 && !nc.isLeaf() {
t.mergeChild(nc)
}
} else {
nc.edges[idx].node = newChild
}
return nc, leaf
}
// deletePrefix does a recursive deletion of all keys under the given prefix
func (t *Txn) deletePrefix(parent, n *Node, search []byte) (*Node, int) {
// Check for key exhaustion
if len(search) == 0 {
nc := t.writeNode(n, true)
if n.isLeaf() {
nc.leaf = nil
}
nc.edges = nil
return nc, t.trackChannelsAndCount(n)
}
// Look for an edge
label := search[0]
idx, child := n.getEdge(label)
// We make sure that either the child node's prefix starts with the search term, or the search term starts with the child node's prefix
// Need to do both so that we can delete prefixes that don't correspond to any node in the tree
if child == nil || (!bytes.HasPrefix(child.prefix, search) && !bytes.HasPrefix(search, child.prefix)) {
return nil, 0
}
// Consume the search prefix
if len(child.prefix) > len(search) {
search = []byte("")
} else {
search = search[len(child.prefix):]
}
newChild, numDeletions := t.deletePrefix(n, child, search)
if newChild == nil {
return nil, 0
}
// Copy this node. WATCH OUT - it's safe to pass "false" here because we
// will only ADD a leaf via nc.mergeChild() if there isn't one due to
// the !nc.isLeaf() check in the logic just below. This is pretty subtle,
// so be careful if you change any of the logic here.
nc := t.writeNode(n, false)
// Delete the edge if the node has no edges
if newChild.leaf == nil && len(newChild.edges) == 0 {
nc.delEdge(label)
if n != t.root && len(nc.edges) == 1 && !nc.isLeaf() {
t.mergeChild(nc)
}
} else {
nc.edges[idx].node = newChild
}
return nc, numDeletions
}
// Insert is used to add or update a given key. The return provides
// the previous value and a bool indicating if any was set.
func (t *Txn) Insert(k []byte, v interface{}) (interface{}, bool) {
newRoot, oldVal, didUpdate := t.insert(t.root, k, k, v)
if newRoot != nil {
t.root = newRoot
}
if !didUpdate {
t.size++
}
return oldVal, didUpdate
}
// Delete is used to delete a given key. Returns the old value if any,
// and a bool indicating if the key was set.
func (t *Txn) Delete(k []byte) (interface{}, bool) {
newRoot, leaf := t.delete(nil, t.root, k)
if newRoot != nil {
t.root = newRoot
}
if leaf != nil {
t.size--
return leaf.val, true
}
return nil, false
}
// DeletePrefix is used to delete an entire subtree that matches the prefix
// This will delete all nodes under that prefix
func (t *Txn) DeletePrefix(prefix []byte) bool {
newRoot, numDeletions := t.deletePrefix(nil, t.root, prefix)
if newRoot != nil {
t.root = newRoot
t.size = t.size - numDeletions
return true
}
return false
}
// Root returns the current root of the radix tree within this
// transaction. The root is not safe across insert and delete operations,
// but can be used to read the current state during a transaction.
func (t *Txn) Root() *Node {
return t.root
}
// Get is used to lookup a specific key, returning
// the value and if it was found
func (t *Txn) Get(k []byte) (interface{}, bool) {
return t.root.Get(k)
}
// GetWatch is used to lookup a specific key, returning
// the watch channel, value and if it was found
func (t *Txn) GetWatch(k []byte) (<-chan struct{}, interface{}, bool) {
return t.root.GetWatch(k)
}
// Commit is used to finalize the transaction and return a new tree. If mutation
// tracking is turned on then notifications will also be issued.
func (t *Txn) Commit() *Tree {
nt := t.CommitOnly()
if t.trackMutate {
t.Notify()
}
return nt
}
// CommitOnly is used to finalize the transaction and return a new tree, but
// does not issue any notifications until Notify is called.
func (t *Txn) CommitOnly() *Tree {
nt := &Tree{t.root, t.size}
t.writable = nil
return nt
}
// slowNotify does a complete comparison of the before and after trees in order
// to trigger notifications. This doesn't require any additional state but it
// is very expensive to compute.
func (t *Txn) slowNotify() {
snapIter := t.snap.rawIterator()
rootIter := t.root.rawIterator()
for snapIter.Front() != nil || rootIter.Front() != nil {
// If we've exhausted the nodes in the old snapshot, we know
// there's nothing remaining to notify.
if snapIter.Front() == nil {
return
}
snapElem := snapIter.Front()
// If we've exhausted the nodes in the new root, we know we need
// to invalidate everything that remains in the old snapshot. We
// know from the loop condition there's something in the old
// snapshot.
if rootIter.Front() == nil {
close(snapElem.mutateCh)
if snapElem.isLeaf() {
close(snapElem.leaf.mutateCh)
}
snapIter.Next()
continue
}
// Do one string compare so we can check the various conditions
// below without repeating the compare.
cmp := strings.Compare(snapIter.Path(), rootIter.Path())
// If the snapshot is behind the root, then we must have deleted
// this node during the transaction.
if cmp < 0 {
close(snapElem.mutateCh)
if snapElem.isLeaf() {
close(snapElem.leaf.mutateCh)
}
snapIter.Next()
continue
}
// If the snapshot is ahead of the root, then we must have added
// this node during the transaction.
if cmp > 0 {
rootIter.Next()
continue
}
// If we have the same path, then we need to see if we mutated a
// node and possibly the leaf.
rootElem := rootIter.Front()
if snapElem != rootElem {
close(snapElem.mutateCh)
if snapElem.leaf != nil && (snapElem.leaf != rootElem.leaf) {
close(snapElem.leaf.mutateCh)
}
}
snapIter.Next()
rootIter.Next()
}
}
// Notify is used along with TrackMutate to trigger notifications. This must
// only be done once a transaction is committed via CommitOnly, and it is called
// automatically by Commit.
func (t *Txn) Notify() {
if !t.trackMutate {
return
}
// If we've overflowed the tracking state we can't use it in any way and
// need to do a full tree compare.
if t.trackOverflow {
t.slowNotify()
} else {
for ch := range t.trackChannels {
close(ch)
}
}
// Clean up the tracking state so that a re-notify is safe (will trigger
// the else clause above which will be a no-op).
t.trackChannels = nil
t.trackOverflow = false
}
// Insert is used to add or update a given key. The return provides
// the new tree, previous value and a bool indicating if any was set.
func (t *Tree) Insert(k []byte, v interface{}) (*Tree, interface{}, bool) {
txn := t.Txn()
old, ok := txn.Insert(k, v)
return txn.Commit(), old, ok
}
// Delete is used to delete a given key. Returns the new tree,
// old value if any, and a bool indicating if the key was set.
func (t *Tree) Delete(k []byte) (*Tree, interface{}, bool) {
txn := t.Txn()
old, ok := txn.Delete(k)
return txn.Commit(), old, ok
}
// DeletePrefix is used to delete all nodes starting with a given prefix. Returns the new tree,
// and a bool indicating if the prefix matched any nodes
func (t *Tree) DeletePrefix(k []byte) (*Tree, bool) {
txn := t.Txn()
ok := txn.DeletePrefix(k)
return txn.Commit(), ok
}
// Root returns the root node of the tree which can be used for richer
// query operations.
func (t *Tree) Root() *Node {
return t.root
}
// Get is used to lookup a specific key, returning
// the value and if it was found
func (t *Tree) Get(k []byte) (interface{}, bool) {
return t.root.Get(k)
}
// longestPrefix finds the length of the shared prefix
// of two strings
func longestPrefix(k1, k2 []byte) int {
max := len(k1)
if l := len(k2); l < max {
max = l
}
var i int
for i = 0; i < max; i++ {
if k1[i] != k2[i] {
break
}
}
return i
}
// concat two byte slices, returning a third new copy
func concat(a, b []byte) []byte {
c := make([]byte, len(a)+len(b))
copy(c, a)
copy(c[len(a):], b)
return c
}

188
vendor/github.com/hashicorp/go-immutable-radix/iter.go generated vendored Normal file
View File

@ -0,0 +1,188 @@
package iradix
import (
"bytes"
)
// Iterator is used to iterate over a set of nodes
// in pre-order
type Iterator struct {
node *Node
stack []edges
}
// SeekPrefixWatch is used to seek the iterator to a given prefix
// and returns the watch channel of the finest granularity
func (i *Iterator) SeekPrefixWatch(prefix []byte) (watch <-chan struct{}) {
// Wipe the stack
i.stack = nil
n := i.node
watch = n.mutateCh
search := prefix
for {
// Check for key exhaustion
if len(search) == 0 {
i.node = n
return
}
// Look for an edge
_, n = n.getEdge(search[0])
if n == nil {
i.node = nil
return
}
// Update to the finest granularity as the search makes progress
watch = n.mutateCh
// Consume the search prefix
if bytes.HasPrefix(search, n.prefix) {
search = search[len(n.prefix):]
} else if bytes.HasPrefix(n.prefix, search) {
i.node = n
return
} else {
i.node = nil
return
}
}
}
// SeekPrefix is used to seek the iterator to a given prefix
func (i *Iterator) SeekPrefix(prefix []byte) {
i.SeekPrefixWatch(prefix)
}
func (i *Iterator) recurseMin(n *Node) *Node {
// Traverse to the minimum child
if n.leaf != nil {
return n
}
if len(n.edges) > 0 {
// Add all the other edges to the stack (the min node will be added as
// we recurse)
i.stack = append(i.stack, n.edges[1:])
return i.recurseMin(n.edges[0].node)
}
// Shouldn't be possible
return nil
}
// SeekLowerBound is used to seek the iterator to the smallest key that is
// greater or equal to the given key. There is no watch variant as it's hard to
// predict based on the radix structure which node(s) changes might affect the
// result.
func (i *Iterator) SeekLowerBound(key []byte) {
// Wipe the stack. Unlike Prefix iteration, we need to build the stack as we
// go because we need only a subset of edges of many nodes in the path to the
// leaf with the lower bound.
i.stack = []edges{}
n := i.node
search := key
found := func(n *Node) {
i.node = n
i.stack = append(i.stack, edges{edge{node: n}})
}
for {
// Compare current prefix with the search key's same-length prefix.
var prefixCmp int
if len(n.prefix) < len(search) {
prefixCmp = bytes.Compare(n.prefix, search[0:len(n.prefix)])
} else {
prefixCmp = bytes.Compare(n.prefix, search)
}
if prefixCmp > 0 {
// Prefix is larger, that means the lower bound is greater than the search
// and from now on we need to follow the minimum path to the smallest
// leaf under this subtree.
n = i.recurseMin(n)
if n != nil {
found(n)
}
return
}
if prefixCmp < 0 {
// Prefix is smaller than search prefix, that means there is no lower
// bound
i.node = nil
return
}
// Prefix is equal, we are still heading for an exact match. If this is a
// leaf we're done.
if n.leaf != nil {
if bytes.Compare(n.leaf.key, key) < 0 {
i.node = nil
return
}
found(n)
return
}
// Consume the search prefix
if len(n.prefix) > len(search) {
search = []byte{}
} else {
search = search[len(n.prefix):]
}
// Otherwise, take the lower bound next edge.
idx, lbNode := n.getLowerBoundEdge(search[0])
if lbNode == nil {
i.node = nil
return
}
// Create stack edges for all strictly higher edges in this node.
if idx+1 < len(n.edges) {
i.stack = append(i.stack, n.edges[idx+1:])
}
i.node = lbNode
// Recurse
n = lbNode
}
}
// Next returns the next node in order
func (i *Iterator) Next() ([]byte, interface{}, bool) {
// Initialize our stack if needed
if i.stack == nil && i.node != nil {
i.stack = []edges{
edges{
edge{node: i.node},
},
}
}
for len(i.stack) > 0 {
// Inspect the last element of the stack
n := len(i.stack)
last := i.stack[n-1]
elem := last[0].node
// Update the stack
if len(last) > 1 {
i.stack[n-1] = last[1:]
} else {
i.stack = i.stack[:n-1]
}
// Push the edges onto the frontier
if len(elem.edges) > 0 {
i.stack = append(i.stack, elem.edges)
}
// Return the leaf values if any
if elem.leaf != nil {
return elem.leaf.key, elem.leaf.val, true
}
}
return nil, nil, false
}

304
vendor/github.com/hashicorp/go-immutable-radix/node.go generated vendored Normal file
View File

@ -0,0 +1,304 @@
package iradix
import (
"bytes"
"sort"
)
// WalkFn is used when walking the tree. Takes a
// key and value, returning if iteration should
// be terminated.
type WalkFn func(k []byte, v interface{}) bool
// leafNode is used to represent a value
type leafNode struct {
mutateCh chan struct{}
key []byte
val interface{}
}
// edge is used to represent an edge node
type edge struct {
label byte
node *Node
}
// Node is an immutable node in the radix tree
type Node struct {
// mutateCh is closed if this node is modified
mutateCh chan struct{}
// leaf is used to store possible leaf
leaf *leafNode
// prefix is the common prefix we ignore
prefix []byte
// Edges should be stored in-order for iteration.
// We avoid a fully materialized slice to save memory,
// since in most cases we expect to be sparse
edges edges
}
func (n *Node) isLeaf() bool {
return n.leaf != nil
}
func (n *Node) addEdge(e edge) {
num := len(n.edges)
idx := sort.Search(num, func(i int) bool {
return n.edges[i].label >= e.label
})
n.edges = append(n.edges, e)
if idx != num {
copy(n.edges[idx+1:], n.edges[idx:num])
n.edges[idx] = e
}
}
func (n *Node) replaceEdge(e edge) {
num := len(n.edges)
idx := sort.Search(num, func(i int) bool {
return n.edges[i].label >= e.label
})
if idx < num && n.edges[idx].label == e.label {
n.edges[idx].node = e.node
return
}
panic("replacing missing edge")
}
func (n *Node) getEdge(label byte) (int, *Node) {
num := len(n.edges)
idx := sort.Search(num, func(i int) bool {
return n.edges[i].label >= label
})
if idx < num && n.edges[idx].label == label {
return idx, n.edges[idx].node
}
return -1, nil
}
func (n *Node) getLowerBoundEdge(label byte) (int, *Node) {
num := len(n.edges)
idx := sort.Search(num, func(i int) bool {
return n.edges[i].label >= label
})
// we want lower bound behavior so return even if it's not an exact match
if idx < num {
return idx, n.edges[idx].node
}
return -1, nil
}
func (n *Node) delEdge(label byte) {
num := len(n.edges)
idx := sort.Search(num, func(i int) bool {
return n.edges[i].label >= label
})
if idx < num && n.edges[idx].label == label {
copy(n.edges[idx:], n.edges[idx+1:])
n.edges[len(n.edges)-1] = edge{}
n.edges = n.edges[:len(n.edges)-1]
}
}
func (n *Node) GetWatch(k []byte) (<-chan struct{}, interface{}, bool) {
search := k
watch := n.mutateCh
for {
// Check for key exhaustion
if len(search) == 0 {
if n.isLeaf() {
return n.leaf.mutateCh, n.leaf.val, true
}
break
}
// Look for an edge
_, n = n.getEdge(search[0])
if n == nil {
break
}
// Update to the finest granularity as the search makes progress
watch = n.mutateCh
// Consume the search prefix
if bytes.HasPrefix(search, n.prefix) {
search = search[len(n.prefix):]
} else {
break
}
}
return watch, nil, false
}
func (n *Node) Get(k []byte) (interface{}, bool) {
_, val, ok := n.GetWatch(k)
return val, ok
}
// LongestPrefix is like Get, but instead of an
// exact match, it will return the longest prefix match.
func (n *Node) LongestPrefix(k []byte) ([]byte, interface{}, bool) {
var last *leafNode
search := k
for {
// Look for a leaf node
if n.isLeaf() {
last = n.leaf
}
// Check for key exhaustion
if len(search) == 0 {
break
}
// Look for an edge
_, n = n.getEdge(search[0])
if n == nil {
break
}
// Consume the search prefix
if bytes.HasPrefix(search, n.prefix) {
search = search[len(n.prefix):]
} else {
break
}
}
if last != nil {
return last.key, last.val, true
}
return nil, nil, false
}
// Minimum is used to return the minimum value in the tree
func (n *Node) Minimum() ([]byte, interface{}, bool) {
for {
if n.isLeaf() {
return n.leaf.key, n.leaf.val, true
}
if len(n.edges) > 0 {
n = n.edges[0].node
} else {
break
}
}
return nil, nil, false
}
// Maximum is used to return the maximum value in the tree
func (n *Node) Maximum() ([]byte, interface{}, bool) {
for {
if num := len(n.edges); num > 0 {
n = n.edges[num-1].node
continue
}
if n.isLeaf() {
return n.leaf.key, n.leaf.val, true
} else {
break
}
}
return nil, nil, false
}
// Iterator is used to return an iterator at
// the given node to walk the tree
func (n *Node) Iterator() *Iterator {
return &Iterator{node: n}
}
// rawIterator is used to return a raw iterator at the given node to walk the
// tree.
func (n *Node) rawIterator() *rawIterator {
iter := &rawIterator{node: n}
iter.Next()
return iter
}
// Walk is used to walk the tree
func (n *Node) Walk(fn WalkFn) {
recursiveWalk(n, fn)
}
// WalkPrefix is used to walk the tree under a prefix
func (n *Node) WalkPrefix(prefix []byte, fn WalkFn) {
search := prefix
for {
// Check for key exhaustion
if len(search) == 0 {
recursiveWalk(n, fn)
return
}
// Look for an edge
_, n = n.getEdge(search[0])
if n == nil {
break
}
// Consume the search prefix
if bytes.HasPrefix(search, n.prefix) {
search = search[len(n.prefix):]
} else if bytes.HasPrefix(n.prefix, search) {
// Child may be under our search prefix
recursiveWalk(n, fn)
return
} else {
break
}
}
}
// WalkPath is used to walk the tree, but only visiting nodes
// from the root down to a given leaf. Where WalkPrefix walks
// all the entries *under* the given prefix, this walks the
// entries *above* the given prefix.
func (n *Node) WalkPath(path []byte, fn WalkFn) {
search := path
for {
// Visit the leaf values if any
if n.leaf != nil && fn(n.leaf.key, n.leaf.val) {
return
}
// Check for key exhaustion
if len(search) == 0 {
return
}
// Look for an edge
_, n = n.getEdge(search[0])
if n == nil {
return
}
// Consume the search prefix
if bytes.HasPrefix(search, n.prefix) {
search = search[len(n.prefix):]
} else {
break
}
}
}
// recursiveWalk is used to do a pre-order walk of a node
// recursively. Returns true if the walk should be aborted
func recursiveWalk(n *Node, fn WalkFn) bool {
// Visit the leaf values if any
if n.leaf != nil && fn(n.leaf.key, n.leaf.val) {
return true
}
// Recurse on the children
for _, e := range n.edges {
if recursiveWalk(e.node, fn) {
return true
}
}
return false
}

View File

@ -0,0 +1,78 @@
package iradix
// rawIterator visits each of the nodes in the tree, even the ones that are not
// leaves. It keeps track of the effective path (what a leaf at a given node
// would be called), which is useful for comparing trees.
type rawIterator struct {
// node is the starting node in the tree for the iterator.
node *Node
// stack keeps track of edges in the frontier.
stack []rawStackEntry
// pos is the current position of the iterator.
pos *Node
// path is the effective path of the current iterator position,
// regardless of whether the current node is a leaf.
path string
}
// rawStackEntry is used to keep track of the cumulative common path as well as
// its associated edges in the frontier.
type rawStackEntry struct {
path string
edges edges
}
// Front returns the current node that has been iterated to.
func (i *rawIterator) Front() *Node {
return i.pos
}
// Path returns the effective path of the current node, even if it's not actually
// a leaf.
func (i *rawIterator) Path() string {
return i.path
}
// Next advances the iterator to the next node.
func (i *rawIterator) Next() {
// Initialize our stack if needed.
if i.stack == nil && i.node != nil {
i.stack = []rawStackEntry{
rawStackEntry{
edges: edges{
edge{node: i.node},
},
},
}
}
for len(i.stack) > 0 {
// Inspect the last element of the stack.
n := len(i.stack)
last := i.stack[n-1]
elem := last.edges[0].node
// Update the stack.
if len(last.edges) > 1 {
i.stack[n-1].edges = last.edges[1:]
} else {
i.stack = i.stack[:n-1]
}
// Push the edges onto the frontier.
if len(elem.edges) > 0 {
path := last.path + string(elem.prefix)
i.stack = append(i.stack, rawStackEntry{path, elem.edges})
}
i.pos = elem
i.path = last.path + string(elem.prefix)
return
}
i.pos = nil
i.path = ""
}

View File

@ -1,12 +0,0 @@
sudo: false
language: go
go:
- 1.x
branches:
only:
- master
script: env GO111MODULE=on make test testrace

View File

@ -1,10 +1,11 @@
# go-multierror
[![CircleCI](https://img.shields.io/circleci/build/github/hashicorp/go-multierror/master)](https://circleci.com/gh/hashicorp/go-multierror)
[![Go Reference](https://pkg.go.dev/badge/github.com/hashicorp/go-multierror.svg)](https://pkg.go.dev/github.com/hashicorp/go-multierror)
![GitHub go.mod Go version](https://img.shields.io/github/go-mod/go-version/hashicorp/go-multierror)
[circleci]: https://app.circleci.com/pipelines/github/hashicorp/go-multierror
[godocs]: https://pkg.go.dev/github.com/hashicorp/go-multierror
`go-multierror` is a package for Go that provides a mechanism for
representing a list of `error` values as a single `error`.
@ -24,7 +25,25 @@ for introspecting on error values.
Install using `go get github.com/hashicorp/go-multierror`.
Full documentation is available at
https://pkg.go.dev/github.com/hashicorp/go-multierror
### Requires go version 1.13 or newer
`go-multierror` requires go version 1.13 or newer. Go 1.13 introduced
[error wrapping](https://golang.org/doc/go1.13#error_wrapping), which
this library takes advantage of.
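As an illustration (a sketch added here, not part of the upstream README), `errors.Is` unwraps through the errors accumulated in a multierror:

```go
package main

import (
	"errors"
	"fmt"
	"os"

	"github.com/hashicorp/go-multierror"
)

func main() {
	var result error
	result = multierror.Append(result, os.ErrNotExist)
	result = multierror.Append(result, errors.New("unrelated failure"))

	// errors.Is walks the wrapped errors inside the multierror.
	fmt.Println(errors.Is(result, os.ErrNotExist)) // true
}
```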
If you need to use an earlier version of go, you can use the
[v1.0.0](https://github.com/hashicorp/go-multierror/tree/v1.0.0)
tag, which doesn't rely on features in go 1.13.
If you see compile errors that look like the below, it's likely that
you're on an older version of go:
```
/go/src/github.com/hashicorp/go-multierror/multierror.go:112:9: undefined: errors.As
/go/src/github.com/hashicorp/go-multierror/multierror.go:117:9: undefined: errors.Is
```
## Usage

View File

@ -6,6 +6,8 @@ package multierror
// If err is not a multierror.Error, then it will be turned into
// one. If any of the errs are multierr.Error, they will be flattened
// one level into err.
// Any nil errors within errs will be ignored. If err is nil, a new
// *Error will be returned.
func Append(err error, errs ...error) *Error {
	switch err := err.(type) {
	case *Error:

View File

@ -1,5 +1,5 @@
module github.com/hashicorp/go-multierror
go 1.13
require github.com/hashicorp/errwrap v1.0.0

View File

@ -40,14 +40,17 @@ func (e *Error) GoString() string {
return fmt.Sprintf("*%#v", *e)
}
// WrappedErrors returns the list of errors that this Error is wrapping. It is
// an implementation of the errwrap.Wrapper interface so that multierror.Error
// can be used with that library.
//
// This method is not safe to be called concurrently. Unlike accessing the
// Errors field directly, this function also checks if the multierror is nil to
// prevent a null-pointer panic. It satisfies the errwrap.Wrapper interface.
func (e *Error) WrappedErrors() []error {
	if e == nil {
		return nil
	}
	return e.Errors
}

2
vendor/github.com/hashicorp/go-plugin/.gitignore generated vendored Normal file
View File

@ -0,0 +1,2 @@
.DS_Store
.idea

353
vendor/github.com/hashicorp/go-plugin/LICENSE generated vendored Normal file
View File

@ -0,0 +1,353 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. “Contributor”
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. “Contributor Version”
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. “Contribution”
means Covered Software of a particular Contributor.
1.4. “Covered Software”
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. “Incompatible With Secondary Licenses”
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of version
1.1 or earlier of the License, but not also under the terms of a
Secondary License.
1.6. “Executable Form”
means any form of the work other than Source Code Form.
1.7. “Larger Work”
means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.
1.8. “License”
means this document.
1.9. “Licensable”
means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by
this License.
1.10. “Modifications”
means any of the following:
a. any file in Source Code Form that results from an addition to, deletion
from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. “Patent Claims” of a Contributor
means any patent claim(s), including without limitation, method, process,
and apparatus claims, in any patent Licensable by such Contributor that
would be infringed, but for the grant of the License, by the making,
using, selling, offering for sale, having made, import, or transfer of
either its Contributions or its Contributor Version.
1.12. “Secondary License”
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”
means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)
means an individual or a legal entity exercising rights under this
License. For legal entities, “You” includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, “control” means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or as
part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its Contributions
or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become
effective for each Contribution on the date the Contributor first distributes
such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this
License. No additional rights or licenses will be implied from the distribution
or licensing of Covered Software under this License. Notwithstanding Section
2.1(b) above, no patent license is granted by a Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of its
Contributions.
This License does not grant any rights in the trademarks, service marks, or
logos of any Contributor (except as may be necessary to comply with the
notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this License
(see Section 10.2) or under the terms of a Secondary License (if permitted
under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions
are its original creation(s) or it has sufficient rights to grant the
rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable
copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under the
terms of this License. You must inform recipients that the Source Code Form
of the Covered Software is governed by the terms of this License, and how
they can obtain a copy of this License. You may not attempt to alter or
restrict the recipient's rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this License,
or sublicense it under different terms, provided that the license for
the Executable Form does not attempt to limit or alter the recipient's
rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for the
Covered Software. If the Larger Work is a combination of Covered Software
with a work governed by one or more Secondary Licenses, and the Covered
Software is not Incompatible With Secondary Licenses, this License permits
You to additionally distribute such Covered Software under the terms of
such Secondary License(s), so that the recipient of the Larger Work may, at
their option, further distribute the Covered Software under the terms of
either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including
copyright notices, patent notices, disclaimers of warranty, or limitations
of liability) contained within the Source Code Form of the Covered
Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on behalf
of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You
alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute, judicial
order, or regulation then You must: (a) comply with the terms of this License
to the maximum extent possible; and (b) describe the limitations and the code
they affect. Such description must be placed in a text file included with all
distributions of the Covered Software under this License. Except to the
extent prohibited by statute or regulation, such description must be
sufficiently detailed for a recipient of ordinary skill to be able to
understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis,
if such Contributor fails to notify You of the non-compliance by some
reasonable means prior to 60 days after You have come back into compliance.
Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by
some reasonable means, this is the first time You have received notice of
non-compliance with this License from such Contributor, and You become
compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions, counter-claims,
and cross-claims) alleging that a Contributor Version directly or
indirectly infringes any patent, then the rights granted to You by any and
all Contributors for the Covered Software under Section 2.1 of this License
shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an “as is” basis, without
warranty of any kind, either expressed, implied, or statutory, including,
without limitation, warranties that the Covered Software is free of defects,
merchantable, fit for a particular purpose or non-infringing. The entire
risk as to the quality and performance of the Covered Software is with You.
Should any Covered Software prove defective in any respect, You (not any
Contributor) assume the cost of any necessary servicing, repair, or
correction. This disclaimer of warranty constitutes an essential part of this
License. No use of any Covered Software is authorized under this License
except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from such
party's negligence to the extent applicable law prohibits such limitation.
Some jurisdictions do not allow the exclusion or limitation of incidental or
consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of
a jurisdiction where the defendant maintains its principal place of business
and such litigation shall be governed by laws of that jurisdiction, without
reference to its conflict-of-law provisions. Nothing in this Section shall
prevent a party's ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter
hereof. If any provision of this License is held to be unenforceable, such
provision shall be reformed only to the extent necessary to make it
enforceable. Any law or regulation which provides that the language of a
contract shall be construed against the drafter shall not be used to construe
this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of
the License under which You originally received the Covered Software, or
under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a modified
version of this License if you rename the license and remove any
references to the name of the license steward (except to note that such
modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then
You may include the notice in a location (such as a LICENSE file in a relevant
directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice
This Source Code Form is “Incompatible
With Secondary Licenses”, as defined by
the Mozilla Public License, v. 2.0.

168
vendor/github.com/hashicorp/go-plugin/README.md generated vendored Normal file
View File

@ -0,0 +1,168 @@
# Go Plugin System over RPC
`go-plugin` is a Go (golang) plugin system over RPC. It is the plugin system
that has been in use by HashiCorp tooling for over 4 years. While initially
created for [Packer](https://www.packer.io), it is additionally in use by
[Terraform](https://www.terraform.io), [Nomad](https://www.nomadproject.io), and
[Vault](https://www.vaultproject.io).
While the plugin system is over RPC, it is currently only designed to work
over a local [reliable] network. Plugins over a real network are not supported
and will lead to unexpected behavior.
This plugin system has been used on millions of machines across many different
projects and has proven to be battle hardened and ready for production use.
## Features
The HashiCorp plugin system supports a number of features:
**Plugins are Go interface implementations.** This makes writing and consuming
plugins feel very natural. To a plugin author: you just implement an
interface as if it were going to run in the same process. For a plugin user:
you just use and call functions on an interface as if it were in the same
process. This plugin system handles the communication in between.
**Cross-language support.** Plugins can be written (and consumed) by
almost every major language. This library supports serving plugins via
[gRPC](http://www.grpc.io). gRPC-based plugins enable plugins to be written
in any language.
**Complex arguments and return values are supported.** This library
provides APIs for handling complex arguments and return values such
as interfaces, `io.Reader/Writer`, etc. We do this by giving you a library
(`MuxBroker`) for creating new connections between the client/server to
serve additional interfaces or transfer raw data.
**Bidirectional communication.** Because the plugin system supports
complex arguments, the host process can send it interface implementations
and the plugin can call back into the host process.
**Built-in Logging.** Any plugins that use the `log` standard library
will have log data automatically sent to the host process. The host
process will mirror this output prefixed with the path to the plugin
binary. This makes debugging with plugins simple. If the host system
uses [hclog](https://github.com/hashicorp/go-hclog) then the log data
will be structured. If the plugin also uses hclog, logs from the plugin
will be sent to the host hclog and be structured.
**Protocol Versioning.** A very basic "protocol version" is supported that
can be incremented to invalidate any previous plugins. This is useful when
interface signatures are changing, protocol level changes are necessary,
etc. When a protocol version is incompatible, a human friendly error
message is shown to the end user.
**Stdout/Stderr Syncing.** While plugins are subprocesses, they can continue
to use stdout/stderr as usual and the output will get mirrored back to
the host process. The host process can control what `io.Writer` these
streams go to in order to prevent this from happening.
**TTY Preservation.** Plugin subprocesses are connected to the identical
stdin file descriptor as the host process, allowing software that requires
a TTY to work. For example, a plugin can execute `ssh` and even though there
are multiple subprocesses and RPC happening, it will look and act perfectly
to the end user.
**Host upgrade while a plugin is running.** Plugins can be "reattached"
so that the host process can be upgraded while the plugin is still running.
This requires the host/plugin to know this is possible and daemonize
properly. `NewClient` takes a `ReattachConfig` to determine if and how to
reattach.
**Cryptographically Secure Plugins.** Plugins can be verified with an expected
checksum and RPC communications can be configured to use TLS. The host process
must be properly secured to protect this configuration.
## Architecture
The HashiCorp plugin system works by launching subprocesses and communicating
over RPC (using standard `net/rpc` or [gRPC](http://www.grpc.io)). A single
connection is made between any plugin and the host process. For net/rpc-based
plugins, we use a [connection multiplexing](https://github.com/hashicorp/yamux)
library to multiplex any other connections on top. For gRPC-based plugins,
the HTTP2 protocol handles multiplexing.
This architecture has a number of benefits:
* Plugins can't crash your host process: A panic in a plugin doesn't
panic the plugin user.
* Plugins are very easy to write: just write a Go application and `go build`.
Or use any other language to write a gRPC server with a tiny amount of
boilerplate to support go-plugin.
* Plugins are very easy to install: just put the binary in a location where
the host will find it (depends on the host but this library also provides
helpers), and the plugin host handles the rest.
* Plugins can be relatively secure: The plugin only has access to the
interfaces and args given to it, not to the entire memory space of the
process. Additionally, go-plugin can communicate with the plugin over
TLS.
## Usage
To use the plugin system, you must take the following high-level steps.
Examples are available in the `examples/` directory.
1. Choose the interface(s) you want to expose for plugins.
2. For each interface, write an implementation of that interface that
communicates over a `net/rpc` connection, a
[gRPC](http://www.grpc.io) connection, or both. You'll have to
implement both a client and a server side.
3. Create a `Plugin` implementation that knows how to create the RPC
client/server for a given plugin type.
4. Plugin authors call `plugin.Serve` to serve a plugin from the
`main` function.
5. Plugin users use `plugin.Client` to launch a subprocess and request
an interface implementation over RPC.
That's it! In practice, step 2 is the most tedious and time-consuming step.
Even so, it isn't very difficult, and you can see examples in the `examples/`
directory as well as throughout our various open source projects.
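To make the steps concrete, here is a hedged end-to-end sketch over `net/rpc`; the `Greeter` interface, cookie values, and binary path are hypothetical, and the real `examples/` directory remains the authoritative reference:

```go
// main.go — a hedged end-to-end sketch over net/rpc. The Greeter
// interface, cookie values, and binary path are hypothetical.
package main

import (
	"log"
	"net/rpc"
	"os/exec"

	"github.com/hashicorp/go-plugin"
)

// Step 1: the interface exposed to plugins.
type Greeter interface {
	Greet(name string) string
}

// Step 2: RPC shims for the interface. The client side forwards calls
// over the connection; the server side invokes the real implementation.
type GreeterRPCClient struct{ client *rpc.Client }

func (g *GreeterRPCClient) Greet(name string) string {
	var resp string
	if err := g.client.Call("Plugin.Greet", name, &resp); err != nil {
		log.Fatal(err)
	}
	return resp
}

type GreeterRPCServer struct{ Impl Greeter }

func (s *GreeterRPCServer) Greet(name string, resp *string) error {
	*resp = s.Impl.Greet(name)
	return nil
}

// Step 3: a plugin.Plugin that knows how to build both shims.
type GreeterPlugin struct{ Impl Greeter }

func (p *GreeterPlugin) Server(*plugin.MuxBroker) (interface{}, error) {
	return &GreeterRPCServer{Impl: p.Impl}, nil
}

func (*GreeterPlugin) Client(b *plugin.MuxBroker, c *rpc.Client) (interface{}, error) {
	return &GreeterRPCClient{client: c}, nil
}

var handshake = plugin.HandshakeConfig{
	ProtocolVersion:  1,
	MagicCookieKey:   "EXAMPLE_PLUGIN", // placeholder
	MagicCookieValue: "hello",          // placeholder
}

// Step 5: the host launches the plugin binary and dispenses the interface.
func main() {
	client := plugin.NewClient(&plugin.ClientConfig{
		HandshakeConfig: handshake,
		Plugins:         map[string]plugin.Plugin{"greeter": &GreeterPlugin{}},
		Cmd:             exec.Command("./greeter-plugin"), // placeholder path
	})
	defer client.Kill()

	rpcClient, err := client.Client()
	if err != nil {
		log.Fatal(err)
	}
	raw, err := rpcClient.Dispense("greeter")
	if err != nil {
		log.Fatal(err)
	}
	log.Println(raw.(Greeter).Greet("world"))
}
```

The plugin binary's side (step 4) is the mirror image: its `main` calls `plugin.Serve` with the same handshake and a `GreeterPlugin` wrapping a concrete `Greeter` implementation.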
For complete API documentation, see [GoDoc](https://godoc.org/github.com/hashicorp/go-plugin).
## Roadmap
Our plugin system is constantly evolving. As we use the plugin system for
new projects or for new features in existing projects, we constantly find
improvements we can make.
At this point in time, the roadmap for the plugin system is:
**Semantic Versioning.** Plugins will be able to implement a semantic version.
This plugin system will give host processes a system for constraining
versions. This is in addition to the protocol versioning already present
which is more for larger underlying changes.
**Plugin fetching.** We will integrate with [go-getter](https://github.com/hashicorp/go-getter)
to support automatic download + install of plugins. Paired with cryptographically
secure plugins (above), we can make this a safe operation for an amazing
user experience.
## What About Shared Libraries?
When we started using plugins (late 2012, early 2013), plugins over RPC
were the only option since Go didn't support dynamic library loading. Today,
Go supports the [plugin](https://golang.org/pkg/plugin/) standard library with
a number of limitations. Since 2012, our plugin system has been hardened by
tens of millions of users, and it has many benefits we've come to
value greatly.
For example, we use this plugin system in
[Vault](https://www.vaultproject.io), where dynamic library loading is
not acceptable for security reasons. That is an extreme
example, but we believe our plugin system has more upsides than downsides
compared to dynamic library loading, and since we've had it built and tested
for years, we'll continue to use it.
Shared libraries have one major advantage over our system: much
higher performance. In real-world scenarios across our various tools,
we've never needed more performance out of our plugin system, and it
has sustained very high throughput, so this isn't a concern for us at the moment.

vendor/github.com/hashicorp/go-plugin/client.go generated vendored Normal file
File diff suppressed because it is too large

vendor/github.com/hashicorp/go-plugin/discover.go generated vendored Normal file
@ -0,0 +1,28 @@
package plugin
import (
"path/filepath"
)
// Discover discovers plugins that are in a given directory.
//
// The directory doesn't need to be absolute. For example, "." will work fine.
//
// This currently assumes any file matching the glob is a plugin.
// In the future this may be smarter about checking that a file is
// executable and so on.
//
// TODO: test
func Discover(glob, dir string) ([]string, error) {
var err error
// Make the directory absolute if it isn't already
if !filepath.IsAbs(dir) {
dir, err = filepath.Abs(dir)
if err != nil {
return nil, err
}
}
return filepath.Glob(filepath.Join(dir, glob))
}
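// Usage sketch (the glob pattern and directory below are placeholder
// values, not part of the vendored file):
//
//	paths, err := Discover("myapp-plugin-*", "/usr/local/lib/myapp")
//	if err != nil {
//		// handle error
//	}
//	// Each returned path can then be handed to exec.Command when
//	// constructing a plugin.ClientConfig.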

vendor/github.com/hashicorp/go-plugin/error.go generated vendored Normal file
@ -0,0 +1,24 @@
package plugin
// BasicError wraps error types so that they can be messaged across RPC
// channels. Since "error" is an interface, we can't always gob-encode the
// underlying structure. BasicError is a valid error-interface implementer
// that we will push across.
type BasicError struct {
Message string
}
// NewBasicError is used to create a BasicError.
//
// err is allowed to be nil.
func NewBasicError(err error) *BasicError {
if err == nil {
return nil
}
return &BasicError{err.Error()}
}
// Error implements the error interface by returning the wrapped message.
func (e *BasicError) Error() string {
return e.Message
}

vendor/github.com/hashicorp/go-plugin/go.mod generated vendored Normal file
@ -0,0 +1,17 @@
module github.com/hashicorp/go-plugin
require (
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b // indirect
github.com/golang/protobuf v1.2.0
github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd
github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb
github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77
github.com/oklog/run v1.0.0
github.com/stretchr/testify v1.3.0 // indirect
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 // indirect
golang.org/x/sys v0.0.0-20190129075346-302c3dd5f1cc // indirect
golang.org/x/text v0.3.0 // indirect
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8 // indirect
google.golang.org/grpc v1.14.0
)

Some files were not shown because too many files have changed in this diff.