mirror of https://github.com/ceph/ceph-csi.git, synced 2025-06-13 10:33:35 +00:00
vendor/github.com/peterbourgon/diskv/README.md (141 lines, generated, vendored)
@@ -1,141 +0,0 @@
# What is diskv?

Diskv (disk-vee) is a simple, persistent key-value store written in the Go
language. It starts with an incredibly simple API for storing arbitrary data on
a filesystem by key, and builds several layers of performance-enhancing
abstraction on top. The end result is a conceptually simple, but highly
performant, disk-backed storage system.

[![Build Status][1]][2]

[1]: https://drone.io/github.com/peterbourgon/diskv/status.png
[2]: https://drone.io/github.com/peterbourgon/diskv/latest

# Installing

Install [Go 1][3], either [from source][4] or [with a prepackaged binary][5].
Then,

```bash
$ go get github.com/peterbourgon/diskv
```

[3]: http://golang.org
[4]: http://golang.org/doc/install/source
[5]: http://golang.org/doc/install
# Usage

```go
package main

import (
	"fmt"

	"github.com/peterbourgon/diskv"
)

func main() {
	// Simplest transform function: put all the data files into the base dir.
	flatTransform := func(s string) []string { return []string{} }

	// Initialize a new diskv store, rooted at "my-data-dir", with a 1MB cache.
	d := diskv.New(diskv.Options{
		BasePath:     "my-data-dir",
		Transform:    flatTransform,
		CacheSizeMax: 1024 * 1024,
	})

	// Write three bytes to the key "alpha".
	key := "alpha"
	d.Write(key, []byte{'1', '2', '3'})

	// Read the value back out of the store.
	value, _ := d.Read(key)
	fmt.Printf("%v\n", value)

	// Erase the key+value from the store (and the disk).
	d.Erase(key)
}
```

More complex examples can be found in the "examples" subdirectory.
# Theory

## Basic idea

At its core, diskv is a map of a key (`string`) to arbitrary data (`[]byte`).
The data is written to a single file on disk, with the same name as the key.
The key determines where that file will be stored, via a user-provided
`TransformFunc`, which takes a key and returns a slice (`[]string`)
corresponding to a path list where the key file will be stored. The simplest
TransformFunc,

```go
func SimpleTransform(key string) []string {
	return []string{}
}
```

will place all keys in the same base directory. The design is inspired by
[Redis diskstore][6]; a TransformFunc which emulates the default diskstore
behavior is available in the content-addressable-storage example.

[6]: http://groups.google.com/group/redis-db/browse_thread/thread/d444bc786689bde9?pli=1

**Note** that your TransformFunc should ensure that one valid key doesn't
transform to a subset of another valid key. That is, it shouldn't be possible
to construct valid keys that resolve to directory names. As a concrete example,
if your TransformFunc splits on every 3 characters, then

```go
d.Write("abcabc", val) // OK: written to <base>/abc/abc/abcabc
d.Write("abc", val)    // Error: attempted write to <base>/abc/abc, but it's a directory
```

This will be addressed in an upcoming version of diskv.

Probably the most important design principle behind diskv is that your data is
always flatly available on the disk. diskv will never do anything that would
prevent you from accessing, copying, backing up, or otherwise interacting with
your data via common UNIX commandline tools.
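To make the collision concrete, here is a self-contained sketch of a fixed-width transform (a hypothetical helper, not part of diskv's API). With a block size of 3, the key "abcabc" yields the path list `["abc", "abc"]`, so its file lives at `<base>/abc/abc/abcabc`; the key "abc" yields `["abc"]` and would need `<base>/abc/abc` as a *file*, colliding with that directory.

```go
package main

import "fmt"

// blockTransform splits a key into fixed-width path components.
// Characters beyond the last full block stay in the file name itself.
func blockTransform(blockSize int) func(string) []string {
	return func(key string) []string {
		n := len(key) / blockSize
		path := make([]string, n)
		for i := 0; i < n; i++ {
			path[i] = key[i*blockSize : (i+1)*blockSize]
		}
		return path
	}
}

func main() {
	t := blockTransform(3)
	fmt.Println(t("abcabc")) // path list [abc abc]: file at <base>/abc/abc/abcabc
	fmt.Println(t("abc"))    // path list [abc]: file at <base>/abc/abc -- the directory above
}
```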
## Adding a cache

An in-memory caching layer is provided by combining the BasicStore
functionality with a simple map structure, and keeping it up-to-date as
appropriate. Since the map structure in Go is not threadsafe, it's combined
with a RWMutex to provide safe concurrent access.
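The map-plus-RWMutex combination described above can be sketched in a few lines of plain Go (illustrative only; this is not diskv's actual cache type):

```go
package main

import (
	"fmt"
	"sync"
)

// memCache is a minimal sketch of the caching idea: a plain map guarded
// by a RWMutex, since Go maps are not safe for concurrent use.
type memCache struct {
	mu    sync.RWMutex
	items map[string][]byte
}

func newMemCache() *memCache {
	return &memCache{items: map[string][]byte{}}
}

func (c *memCache) get(key string) ([]byte, bool) {
	c.mu.RLock() // many readers may hold the lock at once
	defer c.mu.RUnlock()
	v, ok := c.items[key]
	return v, ok
}

func (c *memCache) set(key string, val []byte) {
	c.mu.Lock() // writers take the lock exclusively
	defer c.mu.Unlock()
	c.items[key] = val
}

func main() {
	c := newMemCache()
	c.set("alpha", []byte("123"))
	v, ok := c.get("alpha")
	fmt.Println(string(v), ok) // 123 true
}
```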
## Adding order

diskv is a key-value store and therefore inherently unordered. An ordering
system can be injected into the store by passing something which satisfies the
diskv.Index interface. (A default implementation, using Google's
[btree][7] package, is provided.) Basically, diskv keeps an ordered (by a
user-provided Less function) index of the keys, which can be queried.

[7]: https://github.com/google/btree
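As a conceptual sketch of such an index (a sorted slice standing in for the B-tree; the type and method names here are hypothetical, not diskv's real Index API): keys stay ordered by a user-provided Less function, and a query returns up to n keys starting from a given key.

```go
package main

import (
	"fmt"
	"sort"
)

// keyIndex keeps keys sorted by a user-provided Less function and
// supports range queries from a starting key.
type keyIndex struct {
	less func(a, b string) bool
	keys []string
}

func (ix *keyIndex) insert(k string) {
	// Find the first position whose key is not less than k.
	i := sort.Search(len(ix.keys), func(i int) bool { return !ix.less(ix.keys[i], k) })
	ix.keys = append(ix.keys, "")
	copy(ix.keys[i+1:], ix.keys[i:])
	ix.keys[i] = k
}

// from returns up to n keys at or after k, in order.
func (ix *keyIndex) from(k string, n int) []string {
	i := sort.Search(len(ix.keys), func(i int) bool { return !ix.less(ix.keys[i], k) })
	if i+n > len(ix.keys) {
		n = len(ix.keys) - i
	}
	return ix.keys[i : i+n]
}

func main() {
	ix := &keyIndex{less: func(a, b string) bool { return a < b }}
	for _, k := range []string{"m", "a", "z", "b"} {
		ix.insert(k)
	}
	fmt.Println(ix.from("b", 2)) // [b m]
}
```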
## Adding compression

Something which implements the diskv.Compression interface may be passed
during store creation, so that all Writes and Reads are filtered through
a compression/decompression pipeline. Several default implementations,
using stdlib compression algorithms, are provided. Note that data is cached
compressed; the cost of decompression is borne with each Read.
## Streaming

diskv also provides ReadStream and WriteStream methods, which allow very large
data to be handled efficiently.
# Future plans

* Needs plenty of robust testing: huge datasets, etc...
* More thorough benchmarking
* Your suggestions for use-cases I haven't thought of
vendor/github.com/peterbourgon/diskv/basic_test.go (336 lines, generated, vendored)
@@ -1,336 +0,0 @@
package diskv

import (
	"bytes"
	"errors"
	"testing"
	"time"
)

func cmpBytes(a, b []byte) bool {
	if len(a) != len(b) {
		return false
	}
	for i := 0; i < len(a); i++ {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

func (d *Diskv) isCached(key string) bool {
	d.mu.RLock()
	defer d.mu.RUnlock()
	_, ok := d.cache[key]
	return ok
}

func TestWriteReadErase(t *testing.T) {
	d := New(Options{
		BasePath:     "test-data",
		CacheSizeMax: 1024,
	})
	defer d.EraseAll()
	k, v := "a", []byte{'b'}
	if err := d.Write(k, v); err != nil {
		t.Fatalf("write: %s", err)
	}
	if readVal, err := d.Read(k); err != nil {
		t.Fatalf("read: %s", err)
	} else if bytes.Compare(v, readVal) != 0 {
		t.Fatalf("read: expected %s, got %s", v, readVal)
	}
	if err := d.Erase(k); err != nil {
		t.Fatalf("erase: %s", err)
	}
}

func TestWRECache(t *testing.T) {
	d := New(Options{
		BasePath:     "test-data",
		CacheSizeMax: 1024,
	})
	defer d.EraseAll()
	k, v := "xxx", []byte{' ', ' ', ' '}
	if d.isCached(k) {
		t.Fatalf("key cached before Write and Read")
	}
	if err := d.Write(k, v); err != nil {
		t.Fatalf("write: %s", err)
	}
	if d.isCached(k) {
		t.Fatalf("key cached before Read")
	}
	if readVal, err := d.Read(k); err != nil {
		t.Fatalf("read: %s", err)
	} else if bytes.Compare(v, readVal) != 0 {
		t.Fatalf("read: expected %s, got %s", v, readVal)
	}
	for i := 0; i < 10 && !d.isCached(k); i++ {
		time.Sleep(10 * time.Millisecond)
	}
	if !d.isCached(k) {
		t.Fatalf("key not cached after Read")
	}
	if err := d.Erase(k); err != nil {
		t.Fatalf("erase: %s", err)
	}
	if d.isCached(k) {
		t.Fatalf("key cached after Erase")
	}
}

func TestStrings(t *testing.T) {
	d := New(Options{
		BasePath:     "test-data",
		CacheSizeMax: 1024,
	})
	defer d.EraseAll()

	keys := map[string]bool{"a": false, "b": false, "c": false, "d": false}
	v := []byte{'1'}
	for k := range keys {
		if err := d.Write(k, v); err != nil {
			t.Fatalf("write: %s: %s", k, err)
		}
	}

	for k := range d.Keys(nil) {
		if _, present := keys[k]; present {
			t.Logf("got: %s", k)
			keys[k] = true
		} else {
			t.Fatalf("strings() returns unknown key: %s", k)
		}
	}

	for k, found := range keys {
		if !found {
			t.Errorf("never got %s", k)
		}
	}
}

func TestZeroByteCache(t *testing.T) {
	d := New(Options{
		BasePath:     "test-data",
		CacheSizeMax: 0,
	})
	defer d.EraseAll()

	k, v := "a", []byte{'1', '2', '3'}
	if err := d.Write(k, v); err != nil {
		t.Fatalf("Write: %s", err)
	}

	if d.isCached(k) {
		t.Fatalf("key cached, expected not-cached")
	}

	if _, err := d.Read(k); err != nil {
		t.Fatalf("Read: %s", err)
	}

	if d.isCached(k) {
		t.Fatalf("key cached, expected not-cached")
	}
}

func TestOneByteCache(t *testing.T) {
	d := New(Options{
		BasePath:     "test-data",
		CacheSizeMax: 1,
	})
	defer d.EraseAll()

	k1, k2, v1, v2 := "a", "b", []byte{'1'}, []byte{'1', '2'}
	if err := d.Write(k1, v1); err != nil {
		t.Fatal(err)
	}

	if v, err := d.Read(k1); err != nil {
		t.Fatal(err)
	} else if !cmpBytes(v, v1) {
		t.Fatalf("Read: expected %s, got %s", string(v1), string(v))
	}

	for i := 0; i < 10 && !d.isCached(k1); i++ {
		time.Sleep(10 * time.Millisecond)
	}
	if !d.isCached(k1) {
		t.Fatalf("expected 1-byte value to be cached, but it wasn't")
	}

	if err := d.Write(k2, v2); err != nil {
		t.Fatal(err)
	}
	if _, err := d.Read(k2); err != nil {
		t.Fatalf("--> %s", err)
	}

	for i := 0; i < 10 && (!d.isCached(k1) || d.isCached(k2)); i++ {
		time.Sleep(10 * time.Millisecond) // just wait for lazy-cache
	}
	if !d.isCached(k1) {
		t.Fatalf("1-byte value was uncached for no reason")
	}

	if d.isCached(k2) {
		t.Fatalf("2-byte value was cached, but cache max size is 1")
	}
}

func TestStaleCache(t *testing.T) {
	d := New(Options{
		BasePath:     "test-data",
		CacheSizeMax: 1,
	})
	defer d.EraseAll()

	k, first, second := "a", "first", "second"
	if err := d.Write(k, []byte(first)); err != nil {
		t.Fatal(err)
	}

	v, err := d.Read(k)
	if err != nil {
		t.Fatal(err)
	}
	if string(v) != first {
		t.Errorf("expected '%s', got '%s'", first, v)
	}

	if err := d.Write(k, []byte(second)); err != nil {
		t.Fatal(err)
	}

	v, err = d.Read(k)
	if err != nil {
		t.Fatal(err)
	}

	if string(v) != second {
		t.Errorf("expected '%s', got '%s'", second, v)
	}
}

func TestHas(t *testing.T) {
	d := New(Options{
		BasePath:     "test-data",
		CacheSizeMax: 1024,
	})
	defer d.EraseAll()

	for k, v := range map[string]string{
		"a":      "1",
		"foo":    "2",
		"012345": "3",
	} {
		d.Write(k, []byte(v))
	}

	d.Read("foo") // cache one of them
	if !d.isCached("foo") {
		t.Errorf("'foo' didn't get cached")
	}

	for _, tuple := range []struct {
		key      string
		expected bool
	}{
		{"a", true},
		{"b", false},
		{"foo", true},
		{"bar", false},
		{"01234", false},
		{"012345", true},
		{"0123456", false},
	} {
		if expected, got := tuple.expected, d.Has(tuple.key); expected != got {
			t.Errorf("Has(%s): expected %v, got %v", tuple.key, expected, got)
		}
	}
}

type BrokenReader struct{}

func (BrokenReader) Read(p []byte) (n int, err error) {
	return 0, errors.New("failed to read")
}

func TestRemovesIncompleteFiles(t *testing.T) {
	opts := Options{
		BasePath:     "test-data",
		CacheSizeMax: 1024,
	}
	d := New(opts)
	defer d.EraseAll()

	key, stream, sync := "key", BrokenReader{}, false

	if err := d.WriteStream(key, stream, sync); err == nil {
		t.Fatalf("Expected i/o copy error, none received.")
	}

	if _, err := d.Read(key); err == nil {
		t.Fatal("Could read the key, but it shouldn't exist")
	}
}

func TestTempDir(t *testing.T) {
	opts := Options{
		BasePath:     "test-data",
		TempDir:      "test-data-temp",
		CacheSizeMax: 1024,
	}
	d := New(opts)
	defer d.EraseAll()

	k, v := "a", []byte{'b'}
	if err := d.Write(k, v); err != nil {
		t.Fatalf("write: %s", err)
	}
	if readVal, err := d.Read(k); err != nil {
		t.Fatalf("read: %s", err)
	} else if bytes.Compare(v, readVal) != 0 {
		t.Fatalf("read: expected %s, got %s", v, readVal)
	}
	if err := d.Erase(k); err != nil {
		t.Fatalf("erase: %s", err)
	}
}

type CrashingReader struct{}

func (CrashingReader) Read(p []byte) (n int, err error) {
	panic("System has crashed while reading the stream")
}

func TestAtomicWrite(t *testing.T) {
	opts := Options{
		BasePath: "test-data",
		// Test would fail if TempDir is not set here.
		TempDir:      "test-data-temp",
		CacheSizeMax: 1024,
	}
	d := New(opts)
	defer d.EraseAll()

	key := "key"
	func() {
		defer func() {
			recover() // Ignore panicking error
		}()

		stream := CrashingReader{}
		d.WriteStream(key, stream, false)
	}()

	if d.Has(key) {
		t.Fatal("Has key, but it shouldn't exist")
	}
	if _, ok := <-d.Keys(nil); ok {
		t.Fatal("Store isn't empty")
	}
}
vendor/github.com/peterbourgon/diskv/compression_test.go (72 lines, generated, vendored)
@@ -1,72 +0,0 @@
package diskv

import (
	"compress/flate"
	"fmt"
	"math/rand"
	"os"
	"testing"
	"time"
)

func init() {
	rand.Seed(time.Now().UnixNano())
}

func testCompressionWith(t *testing.T, c Compression, name string) {
	d := New(Options{
		BasePath:     "compression-test",
		CacheSizeMax: 0,
		Compression:  c,
	})
	defer d.EraseAll()

	sz := 4096
	val := make([]byte, sz)
	for i := 0; i < sz; i++ {
		val[i] = byte('a' + rand.Intn(26)) // {a-z}; should compress some
	}

	key := "a"
	if err := d.Write(key, val); err != nil {
		t.Fatalf("write failed: %s", err)
	}

	targetFile := fmt.Sprintf("%s%c%s", d.BasePath, os.PathSeparator, key)
	fi, err := os.Stat(targetFile)
	if err != nil {
		t.Fatalf("%s: %s", targetFile, err)
	}

	if fi.Size() >= int64(sz) {
		t.Fatalf("%s: size=%d, expected smaller", targetFile, fi.Size())
	}
	t.Logf("%s compressed %d to %d", name, sz, fi.Size())

	readVal, err := d.Read(key)
	if err != nil {
		t.Fatalf("read failed: %s", err)
	}
	if len(readVal) != sz {
		t.Fatalf("read: expected size=%d, got size=%d", sz, len(readVal))
	}

	for i := 0; i < sz; i++ {
		if readVal[i] != val[i] {
			t.Fatalf("i=%d: expected %v, got %v", i, val[i], readVal[i])
		}
	}
}

func TestGzipDefault(t *testing.T) {
	testCompressionWith(t, NewGzipCompression(), "gzip")
}

func TestGzipBestCompression(t *testing.T) {
	testCompressionWith(t, NewGzipCompressionLevel(flate.BestCompression), "gzip-max")
}

func TestGzipBestSpeed(t *testing.T) {
	testCompressionWith(t, NewGzipCompressionLevel(flate.BestSpeed), "gzip-min")
}

func TestZlib(t *testing.T) {
	testCompressionWith(t, NewZlibCompression(), "zlib")
}
vendor/github.com/peterbourgon/diskv/examples/content-addressable-store/cas.go (63 lines, generated, vendored)
@@ -1,63 +0,0 @@
package main

import (
	"crypto/md5"
	"fmt"
	"io"

	"github.com/peterbourgon/diskv"
)

const transformBlockSize = 2 // grouping of chars per directory depth

func blockTransform(s string) []string {
	var (
		sliceSize = len(s) / transformBlockSize
		pathSlice = make([]string, sliceSize)
	)
	for i := 0; i < sliceSize; i++ {
		from, to := i*transformBlockSize, (i*transformBlockSize)+transformBlockSize
		pathSlice[i] = s[from:to]
	}
	return pathSlice
}

func main() {
	d := diskv.New(diskv.Options{
		BasePath:     "data",
		Transform:    blockTransform,
		CacheSizeMax: 1024 * 1024, // 1MB
	})

	for _, valueStr := range []string{
		"I am the very model of a modern Major-General",
		"I've information vegetable, animal, and mineral",
		"I know the kings of England, and I quote the fights historical",
		"From Marathon to Waterloo, in order categorical",
		"I'm very well acquainted, too, with matters mathematical",
		"I understand equations, both the simple and quadratical",
		"About binomial theorem I'm teeming with a lot o' news",
		"With many cheerful facts about the square of the hypotenuse",
	} {
		d.Write(md5sum(valueStr), []byte(valueStr))
	}

	var keyCount int
	for key := range d.Keys(nil) {
		val, err := d.Read(key)
		if err != nil {
			panic(fmt.Sprintf("key %s had no value", key))
		}
		fmt.Printf("%s: %s\n", key, val)
		keyCount++
	}
	fmt.Printf("%d total keys\n", keyCount)

	// d.EraseAll() // leave it commented out to see how data is kept on disk
}

func md5sum(s string) string {
	h := md5.New()
	io.WriteString(h, s)
	return fmt.Sprintf("%x", h.Sum(nil))
}
vendor/github.com/peterbourgon/diskv/examples/super-simple-store/super-simple-store.go (30 lines, generated, vendored)
@@ -1,30 +0,0 @@
package main

import (
	"fmt"

	"github.com/peterbourgon/diskv"
)

func main() {
	d := diskv.New(diskv.Options{
		BasePath:     "my-diskv-data-directory",
		Transform:    func(s string) []string { return []string{} },
		CacheSizeMax: 1024 * 1024, // 1MB
	})

	key := "alpha"
	if err := d.Write(key, []byte{'1', '2', '3'}); err != nil {
		panic(err)
	}

	value, err := d.Read(key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%v\n", value)

	if err := d.Erase(key); err != nil {
		panic(err)
	}
}
vendor/github.com/peterbourgon/diskv/import_test.go (76 lines, generated, vendored)
@@ -1,76 +0,0 @@
package diskv_test

import (
	"bytes"
	"io/ioutil"
	"os"
	"testing"

	"github.com/peterbourgon/diskv"
)

func TestImportMove(t *testing.T) {
	b := []byte(`0123456789`)
	f, err := ioutil.TempFile("", "temp-test")
	if err != nil {
		t.Fatal(err)
	}
	if _, err := f.Write(b); err != nil {
		t.Fatal(err)
	}
	f.Close()

	d := diskv.New(diskv.Options{
		BasePath: "test-import-move",
	})
	defer d.EraseAll()

	key := "key"

	if err := d.Write(key, []byte(`TBD`)); err != nil {
		t.Fatal(err)
	}

	if err := d.Import(f.Name(), key, true); err != nil {
		t.Fatal(err)
	}

	if _, err := os.Stat(f.Name()); err == nil || !os.IsNotExist(err) {
		t.Errorf("expected temp file to be gone, but err = %v", err)
	}

	if !d.Has(key) {
		t.Errorf("%q not present", key)
	}

	if buf, err := d.Read(key); err != nil || bytes.Compare(b, buf) != 0 {
		t.Errorf("want %q, have %q (err = %v)", string(b), string(buf), err)
	}
}

func TestImportCopy(t *testing.T) {
	b := []byte(`¡åéîòü!`)

	f, err := ioutil.TempFile("", "temp-test")
	if err != nil {
		t.Fatal(err)
	}
	if _, err := f.Write(b); err != nil {
		t.Fatal(err)
	}
	f.Close()

	d := diskv.New(diskv.Options{
		BasePath: "test-import-copy",
	})
	defer d.EraseAll()

	if err := d.Import(f.Name(), "key", false); err != nil {
		t.Fatal(err)
	}

	if _, err := os.Stat(f.Name()); err != nil {
		t.Errorf("expected temp file to remain, but got err = %v", err)
	}
}
vendor/github.com/peterbourgon/diskv/index_test.go (148 lines, generated, vendored)
@@ -1,148 +0,0 @@
package diskv

import (
	"bytes"
	"reflect"
	"testing"
	"time"
)

func strLess(a, b string) bool { return a < b }

func cmpStrings(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for i := 0; i < len(a); i++ {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

func (d *Diskv) isIndexed(key string) bool {
	if d.Index == nil {
		return false
	}

	for _, got := range d.Index.Keys("", 1000) {
		if got == key {
			return true
		}
	}
	return false
}

func TestIndexOrder(t *testing.T) {
	d := New(Options{
		BasePath:     "index-test",
		Transform:    func(string) []string { return []string{} },
		CacheSizeMax: 1024,
		Index:        &BTreeIndex{},
		IndexLess:    strLess,
	})
	defer d.EraseAll()

	v := []byte{'1', '2', '3'}
	d.Write("a", v)
	if !d.isIndexed("a") {
		t.Fatalf("'a' not indexed after write")
	}
	d.Write("1", v)
	d.Write("m", v)
	d.Write("-", v)
	d.Write("A", v)

	expectedKeys := []string{"-", "1", "A", "a", "m"}
	keys := []string{}
	for _, key := range d.Index.Keys("", 100) {
		keys = append(keys, key)
	}

	if !cmpStrings(keys, expectedKeys) {
		t.Fatalf("got %s, expected %s", keys, expectedKeys)
	}
}

func TestIndexLoad(t *testing.T) {
	d1 := New(Options{
		BasePath:     "index-test",
		Transform:    func(string) []string { return []string{} },
		CacheSizeMax: 1024,
	})
	defer d1.EraseAll()

	val := []byte{'1', '2', '3'}
	keys := []string{"a", "b", "c", "d", "e", "f", "g"}
	for _, key := range keys {
		d1.Write(key, val)
	}

	d2 := New(Options{
		BasePath:     "index-test",
		Transform:    func(string) []string { return []string{} },
		CacheSizeMax: 1024,
		Index:        &BTreeIndex{},
		IndexLess:    strLess,
	})
	defer d2.EraseAll()

	// check d2 has properly loaded existing d1 data
	for _, key := range keys {
		if !d2.isIndexed(key) {
			t.Fatalf("key '%s' not indexed on secondary", key)
		}
	}

	// cache one
	if readValue, err := d2.Read(keys[0]); err != nil {
		t.Fatalf("%s", err)
	} else if bytes.Compare(val, readValue) != 0 {
		t.Fatalf("%s: got %s, expected %s", keys[0], readValue, val)
	}

	// make sure it got cached
	for i := 0; i < 10 && !d2.isCached(keys[0]); i++ {
		time.Sleep(10 * time.Millisecond)
	}
	if !d2.isCached(keys[0]) {
		t.Fatalf("key '%s' not cached", keys[0])
	}

	// kill the disk
	d1.EraseAll()

	// cached value should still be there in the second
	if readValue, err := d2.Read(keys[0]); err != nil {
		t.Fatalf("%s", err)
	} else if bytes.Compare(val, readValue) != 0 {
		t.Fatalf("%s: got %s, expected %s", keys[0], readValue, val)
	}

	// but not in the original
	if _, err := d1.Read(keys[0]); err == nil {
		t.Fatalf("expected error reading from flushed store")
	}
}

func TestIndexKeysEmptyFrom(t *testing.T) {
	d := New(Options{
		BasePath:     "index-test",
		Transform:    func(string) []string { return []string{} },
		CacheSizeMax: 1024,
		Index:        &BTreeIndex{},
		IndexLess:    strLess,
	})
	defer d.EraseAll()

	for _, k := range []string{"a", "c", "z", "b", "x", "b", "y"} {
		d.Write(k, []byte("1"))
	}

	want := []string{"a", "b", "c", "x", "y", "z"}
	have := d.Index.Keys("", 99)
	if !reflect.DeepEqual(want, have) {
		t.Errorf("want %v, have %v", want, have)
	}
}
vendor/github.com/peterbourgon/diskv/issues_test.go (121 lines, generated, vendored)
@@ -1,121 +0,0 @@
package diskv

import (
	"bytes"
	"io/ioutil"
	"sync"
	"testing"
	"time"
)

// ReadStream from cache shouldn't panic on a nil dereference from a nonexistent
// Compression :)
func TestIssue2A(t *testing.T) {
	d := New(Options{
		BasePath:     "test-issue-2a",
		Transform:    func(string) []string { return []string{} },
		CacheSizeMax: 1024,
	})
	defer d.EraseAll()

	input := "abcdefghijklmnopqrstuvwxy"
	key, writeBuf, sync := "a", bytes.NewBufferString(input), false
	if err := d.WriteStream(key, writeBuf, sync); err != nil {
		t.Fatal(err)
	}

	for i := 0; i < 2; i++ {
		began := time.Now()
		rc, err := d.ReadStream(key, false)
		if err != nil {
			t.Fatal(err)
		}
		buf, err := ioutil.ReadAll(rc)
		if err != nil {
			t.Fatal(err)
		}
		if !cmpBytes(buf, []byte(input)) {
			t.Fatalf("read #%d: '%s' != '%s'", i+1, string(buf), input)
		}
		rc.Close()
		t.Logf("read #%d in %s", i+1, time.Since(began))
	}
}

// ReadStream on a key that resolves to a directory should return an error.
func TestIssue2B(t *testing.T) {
	blockTransform := func(s string) []string {
		transformBlockSize := 3
		sliceSize := len(s) / transformBlockSize
		pathSlice := make([]string, sliceSize)
		for i := 0; i < sliceSize; i++ {
			from, to := i*transformBlockSize, (i*transformBlockSize)+transformBlockSize
			pathSlice[i] = s[from:to]
		}
		return pathSlice
	}

	d := New(Options{
		BasePath:     "test-issue-2b",
		Transform:    blockTransform,
		CacheSizeMax: 0,
	})
	defer d.EraseAll()

	v := []byte{'1', '2', '3'}
	if err := d.Write("abcabc", v); err != nil {
		t.Fatal(err)
	}

	_, err := d.ReadStream("abc", false)
	if err == nil {
		t.Fatal("ReadStream('abc') should return error")
	}
	t.Logf("ReadStream('abc') returned error: %v", err)
}

// Ensure ReadStream with direct=true isn't racy.
func TestIssue17(t *testing.T) {
	var (
		basePath = "test-data"
	)

	dWrite := New(Options{
		BasePath:     basePath,
		CacheSizeMax: 0,
	})
	defer dWrite.EraseAll()

	dRead := New(Options{
		BasePath:     basePath,
		CacheSizeMax: 50,
	})

	cases := map[string]string{
		"a": `1234567890`,
		"b": `2345678901`,
		"c": `3456789012`,
		"d": `4567890123`,
		"e": `5678901234`,
	}

	for k, v := range cases {
		if err := dWrite.Write(k, []byte(v)); err != nil {
			t.Fatalf("during write: %s", err)
		}
		dRead.Read(k) // ensure it's added to cache
	}

	var wg sync.WaitGroup
	start := make(chan struct{})
	for k, v := range cases {
		wg.Add(1)
		go func(k, v string) {
			<-start
			dRead.ReadStream(k, true)
			wg.Done()
		}(k, v)
	}
	close(start)
	wg.Wait()
}
vendor/github.com/peterbourgon/diskv/keys_test.go (231 lines, generated, vendored)
@ -1,231 +0,0 @@
|
||||
package diskv_test
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"runtime"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/peterbourgon/diskv"
|
||||
)
|
||||
|
||||
var (
|
||||
keysTestData = map[string]string{
|
||||
"ab01cd01": "When we started building CoreOS",
|
||||
"ab01cd02": "we looked at all the various components available to us",
|
||||
"ab01cd03": "re-using the best tools",
|
||||
"ef01gh04": "and building the ones that did not exist",
|
||||
"ef02gh05": "We believe strongly in the Unix philosophy",
|
||||
"xxxxxxxx": "tools should be independently useful",
|
||||
}
|
||||
|
||||
prefixes = []string{
|
||||
"", // all
|
||||
"a",
|
||||
"ab",
|
||||
"ab0",
|
||||
"ab01",
|
||||
"ab01cd0",
|
||||
"ab01cd01",
|
||||
"ab01cd01x", // none
|
||||
"b", // none
|
||||
"b0", // none
|
||||
"0", // none
|
||||
"01", // none
|
||||
"e",
|
||||
"ef",
|
||||
"efx", // none
|
||||
"ef01gh0",
|
||||
"ef01gh04",
|
||||
"ef01gh05",
|
||||
"ef01gh06", // none
|
||||
}
|
||||
)
|
||||
|
||||
func TestKeysFlat(t *testing.T) {
	transform := func(s string) []string {
		if s == "" {
			t.Fatalf(`transform should not be called with ""`)
		}
		return []string{}
	}
	d := diskv.New(diskv.Options{
		BasePath:  "test-data",
		Transform: transform,
	})
	defer d.EraseAll()

	for k, v := range keysTestData {
		d.Write(k, []byte(v))
	}

	checkKeys(t, d.Keys(nil), keysTestData)
}

func TestKeysNested(t *testing.T) {
	d := diskv.New(diskv.Options{
		BasePath:  "test-data",
		Transform: blockTransform(2),
	})
	defer d.EraseAll()

	for k, v := range keysTestData {
		d.Write(k, []byte(v))
	}

	checkKeys(t, d.Keys(nil), keysTestData)
}

func TestKeysPrefixFlat(t *testing.T) {
	d := diskv.New(diskv.Options{
		BasePath: "test-data",
	})
	defer d.EraseAll()

	for k, v := range keysTestData {
		d.Write(k, []byte(v))
	}

	for _, prefix := range prefixes {
		checkKeys(t, d.KeysPrefix(prefix, nil), filterPrefix(keysTestData, prefix))
	}
}

func TestKeysPrefixNested(t *testing.T) {
	d := diskv.New(diskv.Options{
		BasePath:  "test-data",
		Transform: blockTransform(2),
	})
	defer d.EraseAll()

	for k, v := range keysTestData {
		d.Write(k, []byte(v))
	}

	for _, prefix := range prefixes {
		checkKeys(t, d.KeysPrefix(prefix, nil), filterPrefix(keysTestData, prefix))
	}
}

func TestKeysCancel(t *testing.T) {
	d := diskv.New(diskv.Options{
		BasePath: "test-data",
	})
	defer d.EraseAll()

	for k, v := range keysTestData {
		d.Write(k, []byte(v))
	}

	var (
		cancel      = make(chan struct{})
		received    = 0
		cancelAfter = len(keysTestData) / 2
	)

	for key := range d.Keys(cancel) {
		received++

		if received >= cancelAfter {
			close(cancel)
			runtime.Gosched() // allow walker to detect cancel
		}

		t.Logf("received %d: %q", received, key)
	}

	if want, have := cancelAfter, received; want != have {
		t.Errorf("want %d, have %d", want, have)
	}
}
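
The cancel channel passed to `d.Keys(cancel)` above follows the common Go close-to-cancel pattern: the consumer closes a signal channel, and the producer selects on it to stop early. A minimal standalone sketch of that pattern (the `emit` generator here is hypothetical, not part of diskv):

```go
package main

import "fmt"

// emit streams items on the returned channel until cancel is closed,
// mirroring the cancellation contract of Keys(cancel) above.
func emit(items []string, cancel <-chan struct{}) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for _, it := range items {
			// Prefer the cancel signal when it is already observable.
			select {
			case <-cancel:
				return
			default:
			}
			select {
			case out <- it:
			case <-cancel:
				return
			}
		}
	}()
	return out
}

func main() {
	cancel := make(chan struct{})
	received := 0
	for item := range emit([]string{"a", "b", "c", "d"}, cancel) {
		received++
		if received == 2 {
			close(cancel) // stop the generator early
		}
		fmt.Println(item)
	}
	// received ends up as 2 or 3: the generator may already have
	// committed to one more send before it observes the close.
}
```

This is also why TestKeysCancel calls `runtime.Gosched()` after closing the channel: it gives the walker goroutine a chance to observe the close before the next send.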

func checkKeys(t *testing.T, c <-chan string, want map[string]string) {
	for k := range c {
		if _, ok := want[k]; !ok {
			t.Errorf("%q yielded but not expected", k)
			continue
		}

		delete(want, k)
		t.Logf("%q yielded OK", k)
	}

	if len(want) != 0 {
		t.Errorf("%d expected key(s) not yielded: %s", len(want), strings.Join(flattenKeys(want), ", "))
	}
}

func blockTransform(blockSize int) func(string) []string {
	return func(s string) []string {
		var (
			sliceSize = len(s) / blockSize
			pathSlice = make([]string, sliceSize)
		)
		for i := 0; i < sliceSize; i++ {
			from, to := i*blockSize, (i*blockSize)+blockSize
			pathSlice[i] = s[from:to]
		}
		return pathSlice
	}
}

func filterPrefix(in map[string]string, prefix string) map[string]string {
	out := map[string]string{}
	for k, v := range in {
		if strings.HasPrefix(k, prefix) {
			out[k] = v
		}
	}
	return out
}

func TestFilterPrefix(t *testing.T) {
	input := map[string]string{
		"all":        "",
		"and":        "",
		"at":         "",
		"available":  "",
		"best":       "",
		"building":   "",
		"components": "",
		"coreos":     "",
		"did":        "",
		"exist":      "",
		"looked":     "",
		"not":        "",
		"ones":       "",
		"re-using":   "",
		"started":    "",
		"that":       "",
		"the":        "",
		"to":         "",
		"tools":      "",
		"us":         "",
		"various":    "",
		"we":         "",
		"when":       "",
	}

	for prefix, want := range map[string]map[string]string{
		"a":    {"all": "", "and": "", "at": "", "available": ""},
		"al":   {"all": ""},
		"all":  {"all": ""},
		"alll": {},
		"c":    {"components": "", "coreos": ""},
		"co":   {"components": "", "coreos": ""},
		"com":  {"components": ""},
	} {
		have := filterPrefix(input, prefix)
		if !reflect.DeepEqual(want, have) {
			t.Errorf("%q: want %v, have %v", prefix, flattenKeys(want), flattenKeys(have))
		}
	}
}

func flattenKeys(m map[string]string) []string {
	a := make([]string, 0, len(m))
	for k := range m {
		a = append(a, k)
	}
	return a
}
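
For reference, `blockTransform` above is what turns a flat key into nested directory segments; any trailing partial block is simply dropped from the path. A small standalone sketch of the same scheme (the key and helper name here are illustrative):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// blockTransform2 splits a key into 2-character path segments, the
// same scheme as blockTransform(2) above; a trailing partial block
// is left out of the directory path.
func blockTransform2(s string) []string {
	const blockSize = 2
	sliceSize := len(s) / blockSize
	pathSlice := make([]string, sliceSize)
	for i := 0; i < sliceSize; i++ {
		pathSlice[i] = s[i*blockSize : (i+1)*blockSize]
	}
	return pathSlice
}

func main() {
	key := "ab01cd01"
	segs := blockTransform2(key)
	fmt.Println(segs) // [ab 01 cd 01]

	// The resulting subdirectory for this key ("ab/01/cd/01" on
	// Unix-like systems), rooted under the store's BasePath.
	fmt.Println(filepath.Join(segs...))
}
```

This is why the prefix test data above uses keys like "ab01cd01": they exercise prefixes that fall both on and between block boundaries.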
153
vendor/github.com/peterbourgon/diskv/speed_test.go
generated
vendored
@ -1,153 +0,0 @@
package diskv

import (
	"fmt"
	"math/rand"
	"testing"
)

func shuffle(keys []string) {
	ints := rand.Perm(len(keys))
	for i := range keys {
		keys[i], keys[ints[i]] = keys[ints[i]], keys[i]
	}
}

func genValue(size int) []byte {
	v := make([]byte, size)
	for i := 0; i < size; i++ {
		v[i] = uint8((rand.Int() % 26) + 97) // a-z
	}
	return v
}

const (
	keyCount = 1000
)

func genKeys() []string {
	keys := make([]string, keyCount)
	for i := 0; i < keyCount; i++ {
		keys[i] = fmt.Sprintf("%d", i)
	}
	return keys
}

func (d *Diskv) load(keys []string, val []byte) {
	for _, key := range keys {
		d.Write(key, val)
	}
}

func benchRead(b *testing.B, size, cachesz int) {
	b.StopTimer()
	d := New(Options{
		BasePath:     "speed-test",
		Transform:    func(string) []string { return []string{} },
		CacheSizeMax: uint64(cachesz),
	})
	defer d.EraseAll()

	keys := genKeys()
	value := genValue(size)
	d.load(keys, value)
	shuffle(keys)
	b.SetBytes(int64(size))

	b.StartTimer()
	for i := 0; i < b.N; i++ {
		_, _ = d.Read(keys[i%len(keys)])
	}
	b.StopTimer()
}

func benchWrite(b *testing.B, size int, withIndex bool) {
	b.StopTimer()

	options := Options{
		BasePath:     "speed-test",
		Transform:    func(string) []string { return []string{} },
		CacheSizeMax: 0,
	}
	if withIndex {
		options.Index = &BTreeIndex{}
		options.IndexLess = strLess
	}

	d := New(options)
	defer d.EraseAll()
	keys := genKeys()
	value := genValue(size)
	shuffle(keys)
	b.SetBytes(int64(size))

	b.StartTimer()
	for i := 0; i < b.N; i++ {
		d.Write(keys[i%len(keys)], value)
	}
	b.StopTimer()
}

func BenchmarkWrite__32B_NoIndex(b *testing.B) {
	benchWrite(b, 32, false)
}

func BenchmarkWrite__1KB_NoIndex(b *testing.B) {
	benchWrite(b, 1024, false)
}

func BenchmarkWrite__4KB_NoIndex(b *testing.B) {
	benchWrite(b, 4096, false)
}

func BenchmarkWrite_10KB_NoIndex(b *testing.B) {
	benchWrite(b, 10240, false)
}

func BenchmarkWrite__32B_WithIndex(b *testing.B) {
	benchWrite(b, 32, true)
}

func BenchmarkWrite__1KB_WithIndex(b *testing.B) {
	benchWrite(b, 1024, true)
}

func BenchmarkWrite__4KB_WithIndex(b *testing.B) {
	benchWrite(b, 4096, true)
}

func BenchmarkWrite_10KB_WithIndex(b *testing.B) {
	benchWrite(b, 10240, true)
}

func BenchmarkRead__32B_NoCache(b *testing.B) {
	benchRead(b, 32, 0)
}

func BenchmarkRead__1KB_NoCache(b *testing.B) {
	benchRead(b, 1024, 0)
}

func BenchmarkRead__4KB_NoCache(b *testing.B) {
	benchRead(b, 4096, 0)
}

func BenchmarkRead_10KB_NoCache(b *testing.B) {
	benchRead(b, 10240, 0)
}

func BenchmarkRead__32B_WithCache(b *testing.B) {
	benchRead(b, 32, keyCount*32*2)
}

func BenchmarkRead__1KB_WithCache(b *testing.B) {
	benchRead(b, 1024, keyCount*1024*2)
}

func BenchmarkRead__4KB_WithCache(b *testing.B) {
	benchRead(b, 4096, keyCount*4096*2)
}

func BenchmarkRead_10KB_WithCache(b *testing.B) {
	benchRead(b, 10240, keyCount*10240*2) // cache sized to hold all values, matching the other WithCache cases
}
117
vendor/github.com/peterbourgon/diskv/stream_test.go
generated
vendored
@ -1,117 +0,0 @@
package diskv

import (
	"bytes"
	"io/ioutil"
	"testing"
)

func TestBasicStreamCaching(t *testing.T) {
	d := New(Options{
		BasePath:     "test-data",
		CacheSizeMax: 1024,
	})
	defer d.EraseAll()

	input := "a1b2c3"
	key, writeBuf, sync := "a", bytes.NewBufferString(input), true
	if err := d.WriteStream(key, writeBuf, sync); err != nil {
		t.Fatal(err)
	}

	if d.isCached(key) {
		t.Fatalf("'%s' cached, but shouldn't be (yet)", key)
	}

	rc, err := d.ReadStream(key, false)
	if err != nil {
		t.Fatal(err)
	}

	readBuf, err := ioutil.ReadAll(rc)
	if err != nil {
		t.Fatal(err)
	}

	if !cmpBytes(readBuf, []byte(input)) {
		t.Fatalf("'%s' != '%s'", string(readBuf), input)
	}

	if !d.isCached(key) {
		t.Fatalf("'%s' isn't cached, but should be", key)
	}
}

func TestReadStreamDirect(t *testing.T) {
	var (
		basePath = "test-data"
	)
	dWrite := New(Options{
		BasePath:     basePath,
		CacheSizeMax: 0,
	})
	defer dWrite.EraseAll()
	dRead := New(Options{
		BasePath:     basePath,
		CacheSizeMax: 1024,
	})

	// Write
	key, val1, val2 := "a", []byte(`1234567890`), []byte(`aaaaaaaaaa`)
	if err := dWrite.Write(key, val1); err != nil {
		t.Fatalf("during first write: %s", err)
	}

	// First, caching read.
	val, err := dRead.Read(key)
	if err != nil {
		t.Fatalf("during initial read: %s", err)
	}
	t.Logf("read 1: %s => %s", key, string(val))
	if !cmpBytes(val1, val) {
		t.Errorf("expected %q, got %q", string(val1), string(val))
	}
	if !dRead.isCached(key) {
		t.Errorf("%q should be cached, but isn't", key)
	}

	// Write a different value.
	if err := dWrite.Write(key, val2); err != nil {
		t.Fatalf("during second write: %s", err)
	}

	// Second read, should hit cache and get the old value.
	val, err = dRead.Read(key)
	if err != nil {
		t.Fatalf("during second (cache-hit) read: %s", err)
	}
	t.Logf("read 2: %s => %s", key, string(val))
	if !cmpBytes(val1, val) {
		t.Errorf("expected %q, got %q", string(val1), string(val))
	}

	// Third, direct read, should get the updated value.
	rc, err := dRead.ReadStream(key, true)
	if err != nil {
		t.Fatalf("during third (direct) read, ReadStream: %s", err)
	}
	defer rc.Close()
	val, err = ioutil.ReadAll(rc)
	if err != nil {
		t.Fatalf("during third (direct) read, ReadAll: %s", err)
	}
	t.Logf("read 3: %s => %s", key, string(val))
	if !cmpBytes(val2, val) {
		t.Errorf("expected %q, got %q", string(val2), string(val))
	}

	// Fourth read, should hit cache and get the new value.
	val, err = dRead.Read(key)
	if err != nil {
		t.Fatalf("during fourth (cache-hit) read: %s", err)
	}
	t.Logf("read 4: %s => %s", key, string(val))
	if !cmpBytes(val2, val) {
		t.Errorf("expected %q, got %q", string(val2), string(val))
	}
}