rebase: bump github.com/onsi/ginkgo/v2 from 2.1.6 to 2.3.1

Bumps [github.com/onsi/ginkgo/v2](https://github.com/onsi/ginkgo) from 2.1.6 to 2.3.1.
- [Release notes](https://github.com/onsi/ginkgo/releases)
- [Changelog](https://github.com/onsi/ginkgo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/ginkgo/compare/v2.1.6...v2.3.1)

---
updated-dependencies:
- dependency-name: github.com/onsi/ginkgo/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] 2022-10-17 17:38:08 +00:00 committed by mergify[bot]
parent 53bb28e0d9
commit 3a490a4df0
46 changed files with 2752 additions and 939 deletions

go.mod

@@ -22,8 +22,8 @@ require (
github.com/kubernetes-csi/csi-lib-utils v0.11.0
github.com/kubernetes-csi/external-snapshotter/client/v6 v6.0.1
github.com/libopenstorage/secrets v0.0.0-20210908194121-a1d19aa9713a
github.com/onsi/ginkgo/v2 v2.1.6
github.com/onsi/gomega v1.20.1
github.com/onsi/ginkgo/v2 v2.3.1
github.com/onsi/gomega v1.22.0
github.com/pkg/xattr v0.4.7
github.com/prometheus/client_golang v1.12.2
github.com/stretchr/testify v1.8.0

go.sum

@@ -937,8 +937,9 @@ github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
github.com/onsi/ginkgo/v2 v2.1.3/go.mod h1:vw5CSIxN1JObi/U8gcbwft7ZxR2dgaR70JSE3/PpL4c=
github.com/onsi/ginkgo/v2 v2.1.4/go.mod h1:um6tUpWM/cxCK3/FK8BXqEiUMUwRgSM4JXG47RKZmLU=
github.com/onsi/ginkgo/v2 v2.1.6 h1:Fx2POJZfKRQcM1pH49qSZiYeu319wji004qX+GDovrU=
github.com/onsi/ginkgo/v2 v2.1.6/go.mod h1:MEH45j8TBi6u9BMogfbp0stKC5cdGjumZj5Y7AG4VIk=
github.com/onsi/ginkgo/v2 v2.3.1 h1:8SbseP7qM32WcvE6VaN6vfXxv698izmsJ1UQX9ve7T8=
github.com/onsi/ginkgo/v2 v2.3.1/go.mod h1:Sv4yQXwG5VmF7tm3Q5Z+RWUpPo24LF1mpnz2crUb8Ys=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.4.2/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
@@ -947,8 +948,9 @@ github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7J
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/onsi/gomega v1.19.0/go.mod h1:LY+I3pBVzYsTBU1AnDwOSxaYi9WoWiqgwooUqq9yPro=
github.com/onsi/gomega v1.20.1 h1:PA/3qinGoukvymdIDV8pii6tiZgC8kbmJO6Z5+b002Q=
github.com/onsi/gomega v1.20.1/go.mod h1:DtrZpjmvpn2mPm4YWQa0/ALMDj9v4YxLgojwPeREyVo=
github.com/onsi/gomega v1.22.0 h1:AIg2/OntwkBiCg5Tt1ayyiF1ArFrWFoCSMtMi/wdApk=
github.com/onsi/gomega v1.22.0/go.mod h1:iYAIXgPSaDHak0LCMA+AWBpIKBr8WZicMxnE8luStNc=
github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=

CHANGELOG.md

@@ -1,3 +1,76 @@
## 2.3.1
## Fixes
Several users were invoking `ginkgo` by installing the latest version of the CLI via `go install github.com/onsi/ginkgo/v2/ginkgo@latest`. When 2.3.0 was released this resulted in an influx of issues as CI systems failed due to a change in the internal contract between the Ginkgo CLI and the Ginkgo library. Ginkgo only supports running the same version of the library as the CLI (which is why both are packaged in the same repository).
With this patch release, the ginkgo CLI can now identify a version mismatch and emit a helpful error message.
- Ginkgo cli can identify version mismatches and emit a helpful error message [bc4ae2f]
- further emphasize that a version match is required when running Ginkgo on CI and/or locally [2691dd8]
## Maintenance
- bump gomega to v1.22.0 [822a937]
## 2.3.0
### Interruptible Nodes and Timeouts
Ginkgo now supports per-node and per-spec timeouts on interruptible nodes. Check out the [documentation for all the details](https://onsi.github.io/ginkgo/#spec-timeouts-and-interruptible-nodes) but the gist is you can now write specs like this:
```go
It("is interruptible", func(ctx SpecContext) { // or context.Context instead of SpecContext, both are valid.
// do things until `ctx.Done()` is closed, for example:
req, err := http.NewRequestWithContext(ctx, "POST", "/build-widgets", nil)
Expect(err).NotTo(HaveOccurred())
_, err = http.DefaultClient.Do(req)
Expect(err).NotTo(HaveOccurred())
Eventually(client.WidgetCount).WithContext(ctx).Should(Equal(17))
}, NodeTimeout(time.Second*20), GracePeriod(5*time.Second))
```
and have Ginkgo ensure that the node completes before the timeout elapses. If it does elapse, or if an external interrupt is received (e.g. `^C`) then Ginkgo will cancel the context and wait for the Grace Period for the node to exit before proceeding with any cleanup nodes associated with the spec. The `ctx` provided by Ginkgo can also be passed down to Gomega's `Eventually` to have all assertions within the node governed by a single deadline.
### Features
- Ginkgo now records any additional failures that occur during the cleanup of a failed spec. In prior versions this information was quietly discarded, but the introduction of a more rigorous approach to timeouts and interruptions allows Ginkgo to better track subsequent failures.
- `SpecContext` also provides a mechanism for third-party libraries to provide additional information when a Progress Report is generated. Gomega uses this to provide the current state of an `Eventually().WithContext()` assertion when a Progress Report is requested.
- DescribeTable now exits with an error if it is not passed any Entries [a4c9865]
## Fixes
- fixes crashes on newer Ruby 3 installations by upgrading github-pages gem dependency [92c88d5]
- Make the outline command able to use the DSL import [1be2427]
## Maintenance
- chore(docs): delete no meaning d [57c373c]
- chore(docs): Fix hyperlinks [30526d5]
- chore(docs): fix code blocks without language settings [cf611c4]
- fix intra-doc link [b541bcb]
## 2.2.0
### Generate real-time Progress Reports [f91377c]
Ginkgo can now generate Progress Reports to point users at the current running line of code (including a preview of the actual source code) and a best guess at the most relevant subroutines.
These Progress Reports allow users to debug stuck or slow tests without exiting the Ginkgo process. A Progress Report can be generated at any time by sending Ginkgo a `SIGINFO` (`^T` on MacOS/BSD) or `SIGUSR1`.
In addition, the user can specify `--poll-progress-after` and `--poll-progress-interval` to have Ginkgo start periodically emitting progress reports if a given node takes too long. These can be overridden/set on a per-node basis with the `PollProgressAfter` and `PollProgressInterval` decorators.
Progress Reports are emitted to stdout, and also stored in the machine-readable report formats that Ginkgo supports.
Ginkgo also uses this progress reporting infrastructure under the hood when handling timeouts and interrupts. This yields much more focused, useful, and informative stack traces than previously.
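For example, a long-running spec can opt in to faster progress reporting with the new decorators (a minimal sketch assuming the usual ginkgo and `time` imports; `processLargeBatch` is a hypothetical helper):
```go
It("processes a large batch", func() {
	// If this node is still running after 30s, Ginkgo emits a Progress
	// Report, then another every 10s until the node completes.
	processLargeBatch()
}, PollProgressAfter(30*time.Second), PollProgressInterval(10*time.Second))
```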
### Features
- `BeforeSuite`, `AfterSuite`, `SynchronizedBeforeSuite`, `SynchronizedAfterSuite`, and `ReportAfterSuite` now support (the relevant subset of) decorators. These can be passed in _after_ the callback functions that are usually passed into these nodes.
As a result the **signature of these methods has changed** and now includes a trailing `args ...interface{}`. For most users simply using the DSL, this change is transparent. However if you were assigning one of these functions to a custom variable (or passing it around) then your code may need to change to reflect the new signature.
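For example, a suite setup node can now carry decorators and an interruptible body (a minimal sketch; `connectToDatabase` is a hypothetical helper):
```go
var _ = BeforeSuite(func(ctx SpecContext) {
	// ctx is cancelled if suite setup exceeds the NodeTimeout below
	connectToDatabase(ctx)
}, NodeTimeout(time.Minute))
```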
### Maintenance
- Modernize the invocation of Ginkgo in github actions [0ffde58]
- Update recommended CI settings in docs [896bbb9]
- Speed up unnecessarily slow integration test [6d3a90e]
## 2.1.6
### Fixes
@@ -77,7 +150,7 @@ See [https://onsi.github.io/ginkgo/MIGRATING_TO_V2](https://onsi.github.io/ginkg
Ginkgo 2.0 now has a Release Candidate. 1.16.5 advertises the existence of the RC.
1.16.5 deprecates GinkgoParallelNode in favor of GinkgoParallelProcess
You can silence the RC advertisement by setting an `ACK_GINKG_RC=true` environment variable or creating a file in your home directory called `.ack-ginkgo-rc`
You can silence the RC advertisement by setting an `ACK_GINKGO_RC=true` environment variable or creating a file in your home directory called `.ack-ginkgo-rc`
## 1.16.4
@@ -184,7 +257,7 @@ You can silence the RC advertisement by setting an `ACK_GINKG_RC=true` environme
- replace tail package with maintained one. this fixes go get errors (#667) [4ba33d4]
- improve ginkgo performance - makes progress on #644 [a14f98e]
- fix convert integration tests [1f8ba69]
- fix typo succesful -> successful (#663) [1ea49cf]
- fix typo successful -> successful (#663) [1ea49cf]
- Fix invalid link (#658) [b886136]
- convert utility : Include comments from source (#657) [1077c6d]
- Explain what BDD means [d79e7fb]
@@ -278,7 +351,7 @@ You can silence the RC advertisement by setting an `ACK_GINKG_RC=true` environme
- Make generated Junit file compatible with "Maven Surefire" (#488) [e51bee6]
- all: gofmt [000d317]
- Increase eventually timeout to 30s [c73579c]
- Clarify asynchronous test behaviour [294d8f4]
- Clarify asynchronous test behavior [294d8f4]
- Travis badge should only show master [26d2143]
## 1.5.0 5/10/2018
@@ -296,13 +369,13 @@ You can silence the RC advertisement by setting an `ACK_GINKG_RC=true` environme
- When running a test and calculating the coverage using the `-coverprofile` and `-outputdir` flags, Ginkgo fails with an error if the directory does not exist. This is due to an [issue in go 1.10](https://github.com/golang/go/issues/24588) (#446) [b36a6e0]
- `unfocus` command ignores vendor folder (#459) [e5e551c, c556e43, a3b6351, 9a820dd]
- Ignore packages whose tests are all ignored by go (#456) [7430ca7, 6d8be98]
- Increase the threshold when checking time measuments (#455) [2f714bf, 68f622c]
- Increase the threshold when checking time measurements (#455) [2f714bf, 68f622c]
- Fix race condition in coverage tests (#423) [a5a8ff7, ab9c08b]
- Add an extra new line after reporting spec run completion for test2json [874520d]
- added name name field to junit reported testsuite [ae61c63]
- Do not set the run time of a spec when the dryRun flag is used (#438) [457e2d9, ba8e856]
- Process FWhen and FSpecify when unfocusing (#434) [9008c7b, ee65bd, df87dfe]
- Synchronise the access to the state of specs to avoid race conditions (#430) [7d481bc, ae6829d]
- Synchronies the access to the state of specs to avoid race conditions (#430) [7d481bc, ae6829d]
- Added Duration on GinkgoTestDescription (#383) [5f49dad, 528417e, 0747408, 329d7ed]
- Fix Ginkgo stack trace on failure for Specify (#415) [b977ede, 65ca40e, 6c46eb8]
- Update README with Go 1.6+, Golang -> Go (#409) [17f6b97, bc14b66, 20d1598]

CONTRIBUTING.md

@@ -8,6 +8,6 @@ Your contributions to Ginkgo are essential for its long-term maintenance and imp
- When adding to the Ginkgo CLI, note that there are very few unit tests. Please add an integration test.
- Make sure all the tests succeed via `ginkgo -r -p`
- Vet your changes via `go vet ./...`
- Update the documentation. Ginko uses `godoc` comments and documentation in `docs/index.md`. You can run `bundle exec jekyll serve` in the `docs` directory to preview your changes.
- Update the documentation. Ginkgo uses `godoc` comments and documentation in `docs/index.md`. You can run `bundle exec jekyll serve` in the `docs` directory to preview your changes.
Thanks for supporting Ginkgo!

RELEASING.md

@@ -1,7 +1,13 @@
A Ginkgo release is a tagged git sha and a GitHub release. To cut a release:
1. Ensure CHANGELOG.md is up to date.
- Use `git log --pretty=format:'- %s [%h]' HEAD...vX.X.X` to list all the commits since the last release
- Use
```bash
LAST_VERSION=$(git tag --sort=version:refname | tail -n1)
CHANGES=$(git log --pretty=format:'- %s [%h]' HEAD...$LAST_VERSION)
echo -e "## NEXT\n\n$CHANGES\n\n### Features\n\n## Fixes\n\n## Maintenance\n\n$(cat CHANGELOG.md)" > CHANGELOG.md
```
to update the changelog
- Categorize the changes into
- Breaking Changes (requires a major version)
- New Features (minor version)

core_dsl.go

@@ -77,7 +77,7 @@ func exitIfErrors(errors []error) {
}
}
//The interface implemented by GinkgoWriter
// The interface implemented by GinkgoWriter
type GinkgoWriterInterface interface {
io.Writer
@@ -89,6 +89,15 @@ type GinkgoWriterInterface interface {
ClearTeeWriters()
}
/*
SpecContext is the context object passed into nodes that are subject to a timeout or need to be notified of an interrupt. It implements the standard context.Context interface but also contains additional helpers to provide an extensibility point for Ginkgo. (As an example, Gomega's Eventually can use the methods defined on SpecContext to provide deeper integration with Ginkgo).
You can do anything with SpecContext that you do with a typical context.Context including wrapping it with any of the context.With* methods.
Ginkgo will cancel the SpecContext when a node is interrupted (e.g. by the user sending an interrupt signal) or when a node has exceeded its allowed run-time. Note, however, that even in cases where a node has a deadline, SpecContext will not return a deadline via .Deadline(). This is because Ginkgo does not use a WithDeadline() context to model node deadlines as Ginkgo needs control over the precise timing of the context cancellation to ensure it can provide an accurate progress report at the moment of cancellation.
*/
type SpecContext = internal.SpecContext
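For illustration, a spec can treat the SpecContext like any other context and derive children from it (a minimal sketch; `fetchWidget` is a hypothetical helper returning an error):
```go
It("fetches a widget", func(ctx SpecContext) {
	// derive a shorter-lived child context from the Ginkgo-managed one
	shortCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()
	Expect(fetchWidget(shortCtx)).To(Succeed()) // fetchWidget is hypothetical
}, NodeTimeout(30*time.Second))
```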
/*
GinkgoWriter implements a GinkgoWriterInterface and io.Writer
@@ -103,7 +112,7 @@ You can learn more at https://onsi.github.io/ginkgo/#logging-output
*/
var GinkgoWriter GinkgoWriterInterface
//The interface by which Ginkgo receives *testing.T
// The interface by which Ginkgo receives *testing.T
type GinkgoTestingT interface {
Fail()
}
@@ -168,7 +177,7 @@ func PauseOutputInterception() {
outputInterceptor.PauseIntercepting()
}
//ResumeOutputInterception() - see docs for PauseOutputInterception()
// ResumeOutputInterception() - see docs for PauseOutputInterception()
func ResumeOutputInterception() {
if outputInterceptor == nil {
return
@@ -277,7 +286,7 @@ func RunSpecs(t GinkgoTestingT, description string, args ...interface{}) bool {
suitePath, err = filepath.Abs(suitePath)
exitIfErr(err)
passed, hasFocusedTests := global.Suite.Run(description, suiteLabels, suitePath, global.Failer, reporter, writer, outputInterceptor, interrupt_handler.NewInterruptHandler(suiteConfig.Timeout, client), client, suiteConfig)
passed, hasFocusedTests := global.Suite.Run(description, suiteLabels, suitePath, global.Failer, reporter, writer, outputInterceptor, interrupt_handler.NewInterruptHandler(client), client, internal.RegisterForProgressSignal, suiteConfig)
outputInterceptor.Shutdown()
flagSet.ValidateDeprecations(deprecationTracker)
@@ -444,6 +453,8 @@ It nodes are Subject nodes that contain your spec code and assertions.
Each It node corresponds to an individual Ginkgo spec. You cannot nest any other Ginkgo nodes within an It node's closure.
You can pass It nodes bare functions (func() {}) or functions that receive a SpecContext or context.Context: func(ctx SpecContext) {} and func (ctx context.Context) {}. If the function takes a context then the It is deemed interruptible and Ginkgo will cancel the context in the event of a timeout (configured via the SpecTimeout() or NodeTimeout() decorators) or of an interrupt signal.
You can learn more at https://onsi.github.io/ginkgo/#spec-subjects-it
In addition, subject nodes can be decorated with a variety of decorators. You can learn more here: https://onsi.github.io/ginkgo/#decorator-reference
*/
@@ -504,6 +515,11 @@ func By(text string, callback ...func()) {
Text: text,
}
t := time.Now()
global.Suite.SetProgressStepCursor(internal.ProgressStepCursor{
Text: text,
CodeLocation: types.NewCodeLocation(1),
StartTime: t,
})
AddReportEntry("By Step", ReportEntryVisibilityNever, Offset(1), &value, t)
formatter := formatter.NewWithNoColorBool(reporterConfig.NoColor)
GinkgoWriter.Println(formatter.F("{{bold}}STEP:{{/}} %s {{gray}}%s{{/}}", text, t.Format(types.GINKGO_TIME_FORMAT)))
@@ -522,11 +538,15 @@ When running in parallel, each parallel process will call BeforeSuite.
You may only register *one* BeforeSuite handler per test suite. You typically do so in your bootstrap file at the top level.
BeforeSuite can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.
You cannot nest any other Ginkgo nodes within a BeforeSuite node's closure.
You can learn more here: https://onsi.github.io/ginkgo/#suite-setup-and-cleanup-beforesuite-and-aftersuite
*/
func BeforeSuite(body func()) bool {
return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeBeforeSuite, "", body))
func BeforeSuite(body interface{}, args ...interface{}) bool {
combinedArgs := []interface{}{body}
combinedArgs = append(combinedArgs, args...)
return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeBeforeSuite, "", combinedArgs...))
}
/*
@@ -537,11 +557,15 @@ When running in parallel, each parallel process will call AfterSuite.
You may only register *one* AfterSuite handler per test suite. You typically do so in your bootstrap file at the top level.
AfterSuite can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.
You cannot nest any other Ginkgo nodes within an AfterSuite node's closure.
You can learn more here: https://onsi.github.io/ginkgo/#suite-setup-and-cleanup-beforesuite-and-aftersuite
*/
func AfterSuite(body func()) bool {
return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeAfterSuite, "", body))
func AfterSuite(body interface{}, args ...interface{}) bool {
combinedArgs := []interface{}{body}
combinedArgs = append(combinedArgs, args...)
return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeAfterSuite, "", combinedArgs...))
}
/*
@@ -552,19 +576,34 @@ information from that setup to all parallel processes.
SynchronizedBeforeSuite accomplishes this by taking *two* function arguments and passing data between them.
The first function is only run on parallel process #1. The second is run on all processes, but *only* after the first function completes successfully. The functions have the following signatures:
The first function (which only runs on process #1) has the signature:
The first function (which only runs on process #1) can have any of the following signatures:
func()
func(ctx context.Context)
func(ctx SpecContext)
func() []byte
func(ctx context.Context) []byte
func(ctx SpecContext) []byte
The byte array returned by the first function is then passed to the second function, which has the signature:
The byte array returned by the first function (if present) is then passed to the second function, which can have any of the following signatures:
func()
func(ctx context.Context)
func(ctx SpecContext)
func(data []byte)
func(ctx context.Context, data []byte)
func(ctx SpecContext, data []byte)
If either function receives a context.Context/SpecContext it is considered interruptible.
You cannot nest any other Ginkgo nodes within a SynchronizedBeforeSuite node's closure.
You can learn more, and see some examples, here: https://onsi.github.io/ginkgo/#parallel-suite-setup-and-cleanup-synchronizedbeforesuite-and-synchronizedaftersuite
*/
func SynchronizedBeforeSuite(process1Body func() []byte, allProcessBody func([]byte)) bool {
return pushNode(internal.NewSynchronizedBeforeSuiteNode(process1Body, allProcessBody, types.NewCodeLocation(1)))
func SynchronizedBeforeSuite(process1Body interface{}, allProcessBody interface{}, args ...interface{}) bool {
combinedArgs := []interface{}{process1Body, allProcessBody}
combinedArgs = append(combinedArgs, args...)
return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeSynchronizedBeforeSuite, "", combinedArgs...))
}
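With the new variadic signature both bodies can be interruptible and decorated, for example (a sketch; the server helpers are hypothetical):
```go
var _ = SynchronizedBeforeSuite(func(ctx SpecContext) []byte {
	// runs only on parallel process #1
	return []byte(startSharedServer(ctx)) // hypothetical: returns an address
}, func(ctx SpecContext, data []byte) {
	// runs on every process once process #1 has succeeded
	connectToSharedServer(ctx, string(data)) // hypothetical helper
}, NodeTimeout(time.Minute))
```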
/*
@@ -573,21 +612,26 @@ and a piece that must only run once - on process #1.
SynchronizedAfterSuite accomplishes this by taking *two* function arguments. The first runs on all processes. The second runs only on parallel process #1
and *only* after all other processes have finished and exited. This ensures that process #1, and any resources it is managing, remain alive until
all other processes are finished.
all other processes are finished. These two functions can be bare functions (func()) or interruptible (func(context.Context)/func(SpecContext))
Note that you can also use DeferCleanup() in SynchronizedBeforeSuite to accomplish similar results.
You cannot nest any other Ginkgo nodes within a SynchronizedAfterSuite node's closure.
You can learn more, and see some examples, here: https://onsi.github.io/ginkgo/#parallel-suite-setup-and-cleanup-synchronizedbeforesuite-and-synchronizedaftersuite
*/
func SynchronizedAfterSuite(allProcessBody func(), process1Body func()) bool {
return pushNode(internal.NewSynchronizedAfterSuiteNode(allProcessBody, process1Body, types.NewCodeLocation(1)))
func SynchronizedAfterSuite(allProcessBody interface{}, process1Body interface{}, args ...interface{}) bool {
combinedArgs := []interface{}{allProcessBody, process1Body}
combinedArgs = append(combinedArgs, args...)
return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeSynchronizedAfterSuite, "", combinedArgs...))
}
/*
BeforeEach nodes are Setup nodes whose closures run before It node closures. When multiple BeforeEach nodes
are defined in nested Container nodes the outermost BeforeEach node closures are run first.
BeforeEach can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.
You cannot nest any other Ginkgo nodes within a BeforeEach node's closure.
You can learn more here: https://onsi.github.io/ginkgo/#extracting-common-setup-beforeeach
*/
@@ -599,6 +643,8 @@ func BeforeEach(args ...interface{}) bool {
JustBeforeEach nodes are similar to BeforeEach nodes, however they are guaranteed to run *after* all BeforeEach node closures - just before the It node closure.
This can allow you to separate configuration from creation of resources for a spec.
JustBeforeEach can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.
You cannot nest any other Ginkgo nodes within a JustBeforeEach node's closure.
You can learn more and see some examples here: https://onsi.github.io/ginkgo/#separating-creation-and-configuration-justbeforeeach
*/
@@ -612,6 +658,8 @@ are defined in nested Container nodes the innermost AfterEach node closures are
Note that you can also use DeferCleanup() in other Setup or Subject nodes to accomplish similar results.
AfterEach can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.
You cannot nest any other Ginkgo nodes within an AfterEach node's closure.
You can learn more here: https://onsi.github.io/ginkgo/#spec-cleanup-aftereach-and-defercleanup
*/
@@ -622,6 +670,8 @@ func AfterEach(args ...interface{}) bool {
/*
JustAfterEach nodes are similar to AfterEach nodes, however they are guaranteed to run *before* all AfterEach node closures - just after the It node closure. This can allow you to separate diagnostics collection from teardown for a spec.
JustAfterEach can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.
You cannot nest any other Ginkgo nodes within a JustAfterEach node's closure.
You can learn more and see some examples here: https://onsi.github.io/ginkgo/#separating-diagnostics-collection-and-teardown-justaftereach
*/
@@ -634,6 +684,9 @@ BeforeAll nodes are Setup nodes that can occur inside Ordered containers. They
Multiple BeforeAll nodes can be defined in a given Ordered container however they cannot be nested inside any other container.
BeforeAll can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.
You cannot nest any other Ginkgo nodes within a BeforeAll node's closure.
You can learn more about Ordered Containers at: https://onsi.github.io/ginkgo/#ordered-containers
And you can learn more about BeforeAll at: https://onsi.github.io/ginkgo/#setup-in-ordered-containers-beforeall-and-afterall
@@ -649,6 +702,8 @@ Multiple AfterAll nodes can be defined in a given Ordered container however they
Note that you can also use DeferCleanup() in a BeforeAll node to accomplish similar behavior.
AfterAll can take a func() body, or an interruptible func(SpecContext)/func(context.Context) body.
You cannot nest any other Ginkgo nodes within an AfterAll node's closure.
You can learn more about Ordered Containers at: https://onsi.github.io/ginkgo/#ordered-containers
And you can learn more about AfterAll at: https://onsi.github.io/ginkgo/#setup-in-ordered-containers-beforeall-and-afterall
@@ -663,15 +718,32 @@ DeferCleanup can be called within any Setup or Subject node to register a cleanu
DeferCleanup can be passed:
1. A function that takes no arguments and returns no values.
2. A function that returns an error (in which case it will assert that the returned error was nil, or it will fail the spec).
3. A function that takes arguments (and optionally returns an error) followed by a list of arguments to passe to the function. For example:
3. A function that takes a context.Context or SpecContext (and optionally returns an error). The resulting cleanup node is deemed interruptible and the passed-in context will be cancelled in the event of a timeout or interrupt.
4. A function that takes arguments (and optionally returns an error) followed by a list of arguments to pass to the function.
5. A function that takes SpecContext and a list of arguments (and optionally returns an error) followed by a list of arguments to pass to the function.
BeforeEach(func() {
DeferCleanup(os.SetEnv, "FOO", os.GetEnv("FOO"))
os.SetEnv("FOO", "BAR")
})
For example:
BeforeEach(func() {
DeferCleanup(os.Setenv, "FOO", os.Getenv("FOO"))
os.Setenv("FOO", "BAR")
})
will register a cleanup handler that restores the environment variable "FOO" to its original value (captured via os.Getenv("FOO")) after the spec runs, and then sets the environment variable "FOO" to "BAR" for the current spec.
Similarly:
BeforeEach(func() {
DeferCleanup(func(ctx SpecContext, path string) {
req, err := http.NewRequestWithContext(ctx, "POST", path, nil)
Expect(err).NotTo(HaveOccurred())
_, err = http.DefaultClient.Do(req)
Expect(err).NotTo(HaveOccurred())
}, "example.com/cleanup", NodeTimeout(time.Second*3))
})
will register a cleanup handler that will have three seconds to successfully complete a request to the specified path. Note that we do not specify a context in the list of arguments passed to DeferCleanup - only in the signature of the function we pass in. Ginkgo will detect the requested context and supply a SpecContext when it invokes the cleanup node. If you want to pass in your own context in addition to the Ginkgo-provided SpecContext you must specify the SpecContext as the first argument (e.g. func(ctx SpecContext, otherCtx context.Context)).
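A sketch of that last form, combining the Ginkgo-supplied SpecContext with a caller-supplied argument (`teardownDatabase`, `Database`, and `db` are hypothetical):
```go
DeferCleanup(func(ctx SpecContext, db *Database) error {
	// ctx is supplied by Ginkgo; db is the argument passed below
	return teardownDatabase(ctx, db) // hypothetical helper
}, db, NodeTimeout(10*time.Second))
```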
When DeferCleanup is called in BeforeEach, JustBeforeEach, It, AfterEach, or JustAfterEach the registered callback will be invoked when the spec completes (i.e. it will behave like an AfterEach node)
When DeferCleanup is called in BeforeAll or AfterAll the registered callback will be invoked when the ordered container completes (i.e. it will behave like an AfterAll node)
When DeferCleanup is called in BeforeSuite, SynchronizedBeforeSuite, AfterSuite, or SynchronizedAfterSuite the registered callback will be invoked when the suite completes (i.e. it will behave like an AfterSuite node)
@@ -683,5 +755,5 @@ func DeferCleanup(args ...interface{}) {
fail := func(message string, cl types.CodeLocation) {
global.Failer.Fail(message, cl)
}
pushNode(internal.NewCleanupNode(fail, args...))
pushNode(internal.NewCleanupNode(deprecationTracker, fail, args...))
}

decorator_dsl.go

@@ -59,7 +59,7 @@ OncePerOrdered is a decorator that allows you to mark outer BeforeEach, AfterEac
per ordered context. Normally these setup nodes run around each individual spec, with OncePerOrdered they will run once around the set of specs in an ordered container.
The behavior for non-Ordered containers/specs is unchanged.
You can learh more here: https://onsi.github.io/ginkgo/#setup-around-ordered-containers-the-onceperordered-decorator
You can learn more here: https://onsi.github.io/ginkgo/#setup-around-ordered-containers-the-onceperordered-decorator
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
const OncePerOrdered = internal.OncePerOrdered
@@ -81,6 +81,43 @@ You can learn more here: https://onsi.github.io/ginkgo/#spec-labels
*/
type Labels = internal.Labels
/*
PollProgressAfter allows you to override the configured value for --poll-progress-after for a particular node.
Ginkgo will start emitting node progress if the node is still running after a duration of PollProgressAfter. This allows you to get quicker feedback about the state of a long-running spec.
*/
type PollProgressAfter = internal.PollProgressAfter
/*
PollProgressInterval allows you to override the configured value for --poll-progress-interval for a particular node.
Once a node has been running for longer than PollProgressAfter, Ginkgo will emit node progress periodically at an interval of PollProgressInterval.
*/
type PollProgressInterval = internal.PollProgressInterval
/*
NodeTimeout allows you to specify a timeout for an individual node. The node cannot be a container and must be interruptible (i.e. it must be passed a function that accepts a SpecContext or context.Context).
If the node does not exit within the specified NodeTimeout its context will be cancelled. The node will then have a period of time controlled by the GracePeriod decorator (or global --grace-period command-line argument) to exit. If the node does not exit within GracePeriod Ginkgo will leak the node and proceed to any clean-up nodes associated with the current spec.
*/
type NodeTimeout = internal.NodeTimeout
/*
SpecTimeout allows you to specify a timeout for an individual spec. SpecTimeout can only decorate interruptible It nodes.
All nodes associated with the It node will need to complete before the SpecTimeout has elapsed. Individual nodes (e.g. BeforeEach) may be decorated with different NodeTimeouts - but these can only serve to provide a more stringent deadline for the node in question; they cannot extend the deadline past the SpecTimeout.
If the spec does not complete within the specified SpecTimeout the currently running node will have its context cancelled. The node will then have a period of time controlled by that node's GracePeriod decorator (or global --grace-period command-line argument) to exit. If the node does not exit within GracePeriod Ginkgo will leak the node and proceed to any clean-up nodes associated with the current spec.
*/
type SpecTimeout = internal.SpecTimeout
/*
GracePeriod denotes the period of time Ginkgo will wait for an interruptible node to exit once an interruption (whether due to a timeout or a user-invoked signal) has occurred. If both the global --grace-period cli flag and a GracePeriod decorator are specified the value in the decorator will take precedence.
Nodes that do not finish within a GracePeriod will be leaked and Ginkgo will proceed to run subsequent nodes. In the event of a timeout, such leaks will be reported to the user.
*/
type GracePeriod = internal.GracePeriod
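Taken together, the timeout decorators compose like this (a minimal sketch; node bodies are elided):
```go
var _ = Describe("widget builds", func() {
	BeforeEach(func(ctx SpecContext) {
		// must return within 10s; after cancellation the node gets the
		// 2s GracePeriod below before Ginkgo leaks it and moves on
	}, NodeTimeout(10*time.Second), GracePeriod(2*time.Second))

	It("builds a widget", func(ctx SpecContext) {
		// the spec as a whole must complete within 30s
	}, SpecTimeout(30*time.Second))
})
```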
/*
SuppressProgressReporting is a decorator that allows you to disable progress reporting of a particular node. This is useful if `ginkgo -v -progress` is generating too much noise; particularly
if you have a `ReportAfterEach` node that is running for every skipped spec and is generating lots of progress reports.

deprecated_dsl.go

@@ -13,7 +13,7 @@ import (
Deprecated: Done Channel for asynchronous testing
The Done channel pattern is no longer supported in Ginkgo 2.0.
See here for better patterns for asynchronouse testing: https://onsi.github.io/ginkgo/#patterns-for-asynchronous-testing
See here for better patterns for asynchronous testing: https://onsi.github.io/ginkgo/#patterns-for-asynchronous-testing
For a migration guide see: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-async-testing
*/

internal/group.go

@@ -128,7 +128,10 @@ func (g *group) evaluateSkipStatus(spec Spec) (types.SpecState, types.Failure) {
if spec.Skip {
return types.SpecStateSkipped, types.Failure{}
}
if g.suite.interruptHandler.Status().Interrupted || g.suite.skipAll {
if g.suite.interruptHandler.Status().Interrupted() || g.suite.skipAll {
return types.SpecStateSkipped, types.Failure{}
}
if !g.suite.deadline.IsZero() && g.suite.deadline.Before(time.Now()) {
return types.SpecStateSkipped, types.Failure{}
}
if !g.succeeded {
@@ -163,8 +166,6 @@ func (g *group) isLastSpecWithPair(specID uint, pair runOncePair) bool {
}
func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
interruptStatus := g.suite.interruptHandler.Status()
pairs := g.runOncePairs[spec.SubjectID()]
nodes := spec.Nodes.WithType(types.NodeTypeBeforeAll)
@@ -173,12 +174,17 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
nodes = append(nodes, spec.Nodes.FirstNodeWithType(types.NodeTypeIt))
terminatingNode, terminatingPair := Node{}, runOncePair{}
deadline := time.Time{}
if spec.SpecTimeout() > 0 {
deadline = time.Now().Add(spec.SpecTimeout())
}
for _, node := range nodes {
oncePair := pairs.runOncePairFor(node.ID)
if !oncePair.isZero() && g.runOnceTracker[oncePair].Is(types.SpecStatePassed) {
continue
}
g.suite.currentSpecReport.State, g.suite.currentSpecReport.Failure = g.suite.runNode(node, interruptStatus.Channel, spec.Nodes.BestTextFor(node))
g.suite.currentSpecReport.State, g.suite.currentSpecReport.Failure = g.suite.runNode(node, deadline, spec.Nodes.BestTextFor(node))
g.suite.currentSpecReport.RunTime = time.Since(g.suite.currentSpecReport.StartTime)
if !oncePair.isZero() {
g.runOnceTracker[oncePair] = g.suite.currentSpecReport.State
@@ -260,11 +266,13 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
for _, node := range nodes {
afterNodeWasRun[node.ID] = true
state, failure := g.suite.runNode(node, g.suite.interruptHandler.Status().Channel, spec.Nodes.BestTextFor(node))
state, failure := g.suite.runNode(node, deadline, spec.Nodes.BestTextFor(node))
g.suite.currentSpecReport.RunTime = time.Since(g.suite.currentSpecReport.StartTime)
if g.suite.currentSpecReport.State == types.SpecStatePassed || state == types.SpecStateAborted {
g.suite.currentSpecReport.State = state
g.suite.currentSpecReport.Failure = failure
} else if state.Is(types.SpecStateFailureStates) {
g.suite.currentSpecReport.AdditionalFailures = append(g.suite.currentSpecReport.AdditionalFailures, types.AdditionalFailure{State: state, Failure: failure})
}
}
includeDeferCleanups = true
@@ -279,7 +287,10 @@ func (g *group) run(specs Specs) {
}
for _, spec := range g.specs {
g.suite.selectiveLock.Lock()
g.suite.currentSpecReport = g.initialReportForSpec(spec)
g.suite.selectiveLock.Unlock()
g.suite.currentSpecReport.State, g.suite.currentSpecReport.Failure = g.evaluateSkipStatus(spec)
g.suite.reporter.WillRun(g.suite.currentSpecReport)
g.suite.reportEach(spec, types.NodeTypeReportBeforeEach)
@@ -318,227 +329,8 @@ func (g *group) run(specs Specs) {
if g.suite.currentSpecReport.State.Is(types.SpecStateFailureStates) {
g.succeeded = false
}
g.suite.selectiveLock.Lock()
g.suite.currentSpecReport = types.SpecReport{}
}
}
func (g *group) oldRun(specs Specs) {
var suite = g.suite
nodeState := map[uint]types.SpecState{}
groupSucceeded := true
indexOfLastSpecContainingNodeID := func(id uint) int {
lastIdx := -1
for idx := range specs {
if specs[idx].Nodes.ContainsNodeID(id) && !specs[idx].Skip {
lastIdx = idx
}
}
return lastIdx
}
for i, spec := range specs {
suite.currentSpecReport = types.SpecReport{
ContainerHierarchyTexts: spec.Nodes.WithType(types.NodeTypeContainer).Texts(),
ContainerHierarchyLocations: spec.Nodes.WithType(types.NodeTypeContainer).CodeLocations(),
ContainerHierarchyLabels: spec.Nodes.WithType(types.NodeTypeContainer).Labels(),
LeafNodeLocation: spec.FirstNodeWithType(types.NodeTypeIt).CodeLocation,
LeafNodeType: types.NodeTypeIt,
LeafNodeText: spec.FirstNodeWithType(types.NodeTypeIt).Text,
LeafNodeLabels: []string(spec.FirstNodeWithType(types.NodeTypeIt).Labels),
ParallelProcess: suite.config.ParallelProcess,
IsSerial: spec.Nodes.HasNodeMarkedSerial(),
IsInOrderedContainer: !spec.Nodes.FirstNodeMarkedOrdered().IsZero(),
}
skip := spec.Skip
if spec.Nodes.HasNodeMarkedPending() {
skip = true
suite.currentSpecReport.State = types.SpecStatePending
} else {
if suite.interruptHandler.Status().Interrupted || suite.skipAll {
skip = true
}
if !groupSucceeded {
skip = true
suite.currentSpecReport.Failure = suite.failureForLeafNodeWithMessage(spec.FirstNodeWithType(types.NodeTypeIt),
"Spec skipped because an earlier spec in an ordered container failed")
}
for _, node := range spec.Nodes.WithType(types.NodeTypeBeforeAll) {
if nodeState[node.ID] == types.SpecStateSkipped {
skip = true
suite.currentSpecReport.Failure = suite.failureForLeafNodeWithMessage(spec.FirstNodeWithType(types.NodeTypeIt),
"Spec skipped because Skip() was called in BeforeAll")
break
}
}
if skip {
suite.currentSpecReport.State = types.SpecStateSkipped
}
}
if suite.config.DryRun && !skip {
skip = true
suite.currentSpecReport.State = types.SpecStatePassed
}
suite.reporter.WillRun(suite.currentSpecReport)
//send the spec report to any attached ReportBeforeEach blocks - this will update suite.currentSpecReport if failures occur in these blocks
suite.reportEach(spec, types.NodeTypeReportBeforeEach)
if suite.currentSpecReport.State.Is(types.SpecStateFailureStates) {
//the reportEach failed, skip this spec
skip = true
}
suite.currentSpecReport.StartTime = time.Now()
maxAttempts := max(1, spec.FlakeAttempts())
if suite.config.FlakeAttempts > 0 {
maxAttempts = suite.config.FlakeAttempts
}
for attempt := 0; !skip && (attempt < maxAttempts); attempt++ {
suite.currentSpecReport.NumAttempts = attempt + 1
suite.writer.Truncate()
suite.outputInterceptor.StartInterceptingOutput()
if attempt > 0 {
fmt.Fprintf(suite.writer, "\nGinkgo: Attempt #%d Failed. Retrying...\n", attempt)
}
isFinalAttempt := (attempt == maxAttempts-1)
interruptStatus := suite.interruptHandler.Status()
deepestNestingLevelAttained := -1
var nodes = spec.Nodes.WithType(types.NodeTypeBeforeAll).Filter(func(n Node) bool {
return nodeState[n.ID] != types.SpecStatePassed
})
nodes = nodes.CopyAppend(spec.Nodes.WithType(types.NodeTypeBeforeEach)...).SortedByAscendingNestingLevel()
nodes = nodes.CopyAppend(spec.Nodes.WithType(types.NodeTypeJustBeforeEach).SortedByAscendingNestingLevel()...)
nodes = nodes.CopyAppend(spec.Nodes.WithType(types.NodeTypeIt)...)
var terminatingNode Node
for j := range nodes {
deepestNestingLevelAttained = max(deepestNestingLevelAttained, nodes[j].NestingLevel)
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(nodes[j], interruptStatus.Channel, spec.Nodes.BestTextFor(nodes[j]))
suite.currentSpecReport.RunTime = time.Since(suite.currentSpecReport.StartTime)
nodeState[nodes[j].ID] = suite.currentSpecReport.State
if suite.currentSpecReport.State != types.SpecStatePassed {
terminatingNode = nodes[j]
break
}
}
afterAllNodesThatRan := map[uint]bool{}
// pull out some shared code so we aren't repeating ourselves down below. this just runs after and cleanup nodes
runAfterAndCleanupNodes := func(nodes Nodes) {
for j := range nodes {
state, failure := suite.runNode(nodes[j], suite.interruptHandler.Status().Channel, spec.Nodes.BestTextFor(nodes[j]))
suite.currentSpecReport.RunTime = time.Since(suite.currentSpecReport.StartTime)
nodeState[nodes[j].ID] = state
if suite.currentSpecReport.State == types.SpecStatePassed || state == types.SpecStateAborted {
suite.currentSpecReport.State = state
suite.currentSpecReport.Failure = failure
if state != types.SpecStatePassed {
terminatingNode = nodes[j]
}
}
if nodes[j].NodeType.Is(types.NodeTypeAfterAll) {
afterAllNodesThatRan[nodes[j].ID] = true
}
}
}
// pull out a helper that captures the logic of whether or not we should run a given After node.
// there is complexity here stemming from the fact that we allow nested ordered contexts and flakey retries
shouldRunAfterNode := func(n Node) bool {
if n.NodeType.Is(types.NodeTypeAfterEach | types.NodeTypeJustAfterEach) {
return true
}
var id uint
if n.NodeType.Is(types.NodeTypeAfterAll) {
id = n.ID
if afterAllNodesThatRan[id] { //we've already run on this attempt. don't run again.
return false
}
}
if n.NodeType.Is(types.NodeTypeCleanupAfterAll) {
id = n.NodeIDWhereCleanupWasGenerated
}
isLastSpecWithNode := indexOfLastSpecContainingNodeID(id) == i
switch suite.currentSpecReport.State {
case types.SpecStatePassed: //we've passed so far...
return isLastSpecWithNode //... and we're the last spec with this AfterNode, so we should run it
case types.SpecStateSkipped: //the spec was skipped by the user...
if isLastSpecWithNode {
return true //...we're the last spec, so we should run the AfterNode
}
if terminatingNode.NodeType.Is(types.NodeTypeBeforeAll) && terminatingNode.NestingLevel == n.NestingLevel {
return true //...or, a BeforeAll was skipped and it's at our nesting level, so our subgroup is going to skip
}
case types.SpecStateFailed, types.SpecStatePanicked: // the spec has failed...
if isFinalAttempt {
return true //...if this was the last attempt then we're the last spec to run and so the AfterNode should run
}
if terminatingNode.NodeType.Is(types.NodeTypeBeforeAll) {
//...we'll be rerunning a BeforeAll so we should cleanup after it if...
if n.NodeType.Is(types.NodeTypeAfterAll) && terminatingNode.NestingLevel == n.NestingLevel {
return true //we're at the same nesting level
}
if n.NodeType.Is(types.NodeTypeCleanupAfterAll) && terminatingNode.ID == n.NodeIDWhereCleanupWasGenerated {
return true //we're a DeferCleanup generated by it
}
}
if terminatingNode.NodeType.Is(types.NodeTypeAfterAll) {
//...we'll be rerunning an AfterAll so we should cleanup after it if...
if n.NodeType.Is(types.NodeTypeCleanupAfterAll) && terminatingNode.ID == n.NodeIDWhereCleanupWasGenerated {
return true //we're a DeferCleanup generated by it
}
}
case types.SpecStateInterrupted, types.SpecStateAborted: // ...we've been interrupted and/or aborted
return true //...that means the test run is over and we should clean up the stack. Run the AfterNode
}
return false
}
// first pass - run all the JustAfterEach, Aftereach, and AfterAlls. Our shoudlRunAfterNode filter function will clean up the AfterAlls for us.
afterNodes := spec.Nodes.WithType(types.NodeTypeJustAfterEach).SortedByDescendingNestingLevel()
afterNodes = afterNodes.CopyAppend(spec.Nodes.WithType(types.NodeTypeAfterEach).CopyAppend(spec.Nodes.WithType(types.NodeTypeAfterAll)...).SortedByDescendingNestingLevel()...)
afterNodes = afterNodes.WithinNestingLevel(deepestNestingLevelAttained)
afterNodes = afterNodes.Filter(shouldRunAfterNode)
runAfterAndCleanupNodes(afterNodes)
// second-pass perhaps we didn't run the AfterAlls but a state change due to an AfterEach now requires us to run the AfterAlls:
afterNodes = spec.Nodes.WithType(types.NodeTypeAfterAll).WithinNestingLevel(deepestNestingLevelAttained).Filter(shouldRunAfterNode)
runAfterAndCleanupNodes(afterNodes)
// now we run any DeferCleanups
afterNodes = suite.cleanupNodes.WithType(types.NodeTypeCleanupAfterEach).Reverse()
afterNodes = append(afterNodes, suite.cleanupNodes.WithType(types.NodeTypeCleanupAfterAll).Filter(shouldRunAfterNode).Reverse()...)
runAfterAndCleanupNodes(afterNodes)
// third-pass, perhaps a DeferCleanup failed and now we need to run the AfterAlls.
afterNodes = spec.Nodes.WithType(types.NodeTypeAfterAll).WithinNestingLevel(deepestNestingLevelAttained).Filter(shouldRunAfterNode)
runAfterAndCleanupNodes(afterNodes)
// and finally - running AfterAlls may have generated some new DeferCleanup nodes, let's run them to finish up
afterNodes = suite.cleanupNodes.WithType(types.NodeTypeCleanupAfterAll).Reverse().Filter(shouldRunAfterNode)
runAfterAndCleanupNodes(afterNodes)
suite.currentSpecReport.EndTime = time.Now()
suite.currentSpecReport.RunTime = suite.currentSpecReport.EndTime.Sub(suite.currentSpecReport.StartTime)
suite.currentSpecReport.CapturedGinkgoWriterOutput += string(suite.writer.Bytes())
suite.currentSpecReport.CapturedStdOutErr += suite.outputInterceptor.StopInterceptingAndReturnOutput()
if suite.currentSpecReport.State.Is(types.SpecStatePassed | types.SpecStateSkipped | types.SpecStateAborted | types.SpecStateInterrupted) {
break
}
}
//send the spec report to any attached ReportAfterEach blocks - this will update suite.currentSpecReport if failures occur in these blocks
suite.reportEach(spec, types.NodeTypeReportAfterEach)
suite.processCurrentSpecReport()
if suite.currentSpecReport.State.Is(types.SpecStateFailureStates) {
groupSucceeded = false
}
suite.currentSpecReport = types.SpecReport{}
g.suite.selectiveLock.Unlock()
}
}

internal/interrupt_handler/interrupt_handler.go

@@ -1,39 +1,38 @@
package interrupt_handler
import (
"fmt"
"os"
"os/signal"
"runtime"
"sync"
"syscall"
"time"
"github.com/onsi/ginkgo/v2/formatter"
"github.com/onsi/ginkgo/v2/internal/parallel_support"
)
const TIMEOUT_REPEAT_INTERRUPT_MAXIMUM_DURATION = 30 * time.Second
const TIMEOUT_REPEAT_INTERRUPT_FRACTION_OF_TIMEOUT = 10
const ABORT_POLLING_INTERVAL = 500 * time.Millisecond
const ABORT_REPEAT_INTERRUPT_DURATION = 30 * time.Second
type InterruptCause uint
const (
InterruptCauseInvalid InterruptCause = iota
InterruptCauseSignal
InterruptCauseTimeout
InterruptCauseAbortByOtherProcess
)
type InterruptLevel uint
const (
InterruptLevelUninterrupted InterruptLevel = iota
InterruptLevelCleanupAndReport
InterruptLevelReportOnly
InterruptLevelBailOut
)
func (ic InterruptCause) String() string {
switch ic {
case InterruptCauseSignal:
return "Interrupted by User"
case InterruptCauseTimeout:
return "Interrupted by Timeout"
case InterruptCauseAbortByOtherProcess:
return "Interrupted by Other Ginkgo Process"
}
@@ -41,37 +40,49 @@ func (ic InterruptCause) String() string {
}
type InterruptStatus struct {
Interrupted bool
Channel chan interface{}
Cause InterruptCause
Channel chan interface{}
Level InterruptLevel
Cause InterruptCause
}
func (s InterruptStatus) Interrupted() bool {
return s.Level != InterruptLevelUninterrupted
}
func (s InterruptStatus) Message() string {
return s.Cause.String()
}
func (s InterruptStatus) ShouldIncludeProgressReport() bool {
return s.Cause != InterruptCauseAbortByOtherProcess
}
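For orientation, a minimal sketch of how suite code consumes the new InterruptStatus (the function name and the `nodeDone` channel are hypothetical):
```go
// waitForNodeOrEscalation blocks until either the running node returns
// (nodeDone is closed) or the interrupt level escalates.
func waitForNodeOrEscalation(handler *InterruptHandler, nodeDone <-chan interface{}) {
	status := handler.Status()
	if status.Interrupted() {
		// at least one interrupt has occurred; Level distinguishes
		// cleanup-and-report from report-only from bail-out
	}
	select {
	case <-status.Channel:
		// closed when the interrupt level escalates (e.g. a second ^C)
	case <-nodeDone:
		// the node finished on its own
	}
}
```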
type InterruptHandlerInterface interface {
Status() InterruptStatus
SetInterruptPlaceholderMessage(string)
ClearInterruptPlaceholderMessage()
InterruptMessageWithStackTraces() string
}
type InterruptHandler struct {
c chan interface{}
lock *sync.Mutex
interrupted bool
interruptPlaceholderMessage string
interruptCause InterruptCause
client parallel_support.Client
stop chan interface{}
c chan interface{}
lock *sync.Mutex
level InterruptLevel
cause InterruptCause
client parallel_support.Client
stop chan interface{}
signals []os.Signal
}
func NewInterruptHandler(timeout time.Duration, client parallel_support.Client) *InterruptHandler {
handler := &InterruptHandler{
c: make(chan interface{}),
lock: &sync.Mutex{},
interrupted: false,
stop: make(chan interface{}),
client: client,
func NewInterruptHandler(client parallel_support.Client, signals ...os.Signal) *InterruptHandler {
if len(signals) == 0 {
signals = []os.Signal{os.Interrupt, syscall.SIGTERM}
}
handler.registerForInterrupts(timeout)
handler := &InterruptHandler{
c: make(chan interface{}),
lock: &sync.Mutex{},
stop: make(chan interface{}),
client: client,
signals: signals,
}
handler.registerForInterrupts()
return handler
}
@@ -79,30 +90,22 @@ func (handler *InterruptHandler) Stop() {
close(handler.stop)
}
func (handler *InterruptHandler) registerForInterrupts(timeout time.Duration) {
func (handler *InterruptHandler) registerForInterrupts() {
// os signal handling
signalChannel := make(chan os.Signal, 1)
signal.Notify(signalChannel, os.Interrupt, syscall.SIGTERM)
// timeout handling
var timeoutChannel <-chan time.Time
var timeoutTimer *time.Timer
if timeout > 0 {
timeoutTimer = time.NewTimer(timeout)
timeoutChannel = timeoutTimer.C
}
signal.Notify(signalChannel, handler.signals...)
// cross-process abort handling
var abortChannel chan bool
var abortChannel chan interface{}
if handler.client != nil {
abortChannel = make(chan bool)
abortChannel = make(chan interface{})
go func() {
pollTicker := time.NewTicker(ABORT_POLLING_INTERVAL)
for {
select {
case <-pollTicker.C:
if handler.client.ShouldAbort() {
abortChannel <- true
close(abortChannel)
pollTicker.Stop()
return
}
@@ -114,55 +117,37 @@ func (handler *InterruptHandler) registerForInterrupts(timeout time.Duration) {
}()
}
// listen for any interrupt signals
// note that some (timeouts, cross-process aborts) will only trigger once
// for these we set up a ticker to keep interrupting the suite until it ends
// this ensures any `AfterEach` or `AfterSuite`s that get stuck cleaning up
// get interrupted eventually
go func() {
go func(abortChannel chan interface{}) {
var interruptCause InterruptCause
var repeatChannel <-chan time.Time
var repeatTicker *time.Ticker
for {
select {
case <-signalChannel:
interruptCause = InterruptCauseSignal
case <-timeoutChannel:
interruptCause = InterruptCauseTimeout
repeatInterruptTimeout := timeout / time.Duration(TIMEOUT_REPEAT_INTERRUPT_FRACTION_OF_TIMEOUT)
if repeatInterruptTimeout > TIMEOUT_REPEAT_INTERRUPT_MAXIMUM_DURATION {
repeatInterruptTimeout = TIMEOUT_REPEAT_INTERRUPT_MAXIMUM_DURATION
}
timeoutTimer.Stop()
repeatTicker = time.NewTicker(repeatInterruptTimeout)
repeatChannel = repeatTicker.C
case <-abortChannel:
interruptCause = InterruptCauseAbortByOtherProcess
repeatTicker = time.NewTicker(ABORT_REPEAT_INTERRUPT_DURATION)
repeatChannel = repeatTicker.C
case <-repeatChannel:
//do nothing, just interrupt again using the same interruptCause
case <-handler.stop:
if timeoutTimer != nil {
timeoutTimer.Stop()
}
if repeatTicker != nil {
repeatTicker.Stop()
}
signal.Stop(signalChannel)
return
}
abortChannel = nil
handler.lock.Lock()
handler.interruptCause = interruptCause
if handler.interruptPlaceholderMessage != "" {
fmt.Println(handler.interruptPlaceholderMessage)
oldLevel := handler.level
handler.cause = interruptCause
if handler.level == InterruptLevelUninterrupted {
handler.level = InterruptLevelCleanupAndReport
} else if handler.level == InterruptLevelCleanupAndReport {
handler.level = InterruptLevelReportOnly
} else if handler.level == InterruptLevelReportOnly {
handler.level = InterruptLevelBailOut
}
if handler.level != oldLevel {
close(handler.c)
handler.c = make(chan interface{})
}
handler.interrupted = true
close(handler.c)
handler.c = make(chan interface{})
handler.lock.Unlock()
}
}()
}(abortChannel)
}
func (handler *InterruptHandler) Status() InterruptStatus {
@@ -170,43 +155,8 @@ func (handler *InterruptHandler) Status() InterruptStatus {
defer handler.lock.Unlock()
return InterruptStatus{
Interrupted: handler.interrupted,
Channel: handler.c,
Cause: handler.interruptCause,
Level: handler.level,
Channel: handler.c,
Cause: handler.cause,
}
}
func (handler *InterruptHandler) SetInterruptPlaceholderMessage(message string) {
handler.lock.Lock()
defer handler.lock.Unlock()
handler.interruptPlaceholderMessage = message
}
func (handler *InterruptHandler) ClearInterruptPlaceholderMessage() {
handler.lock.Lock()
defer handler.lock.Unlock()
handler.interruptPlaceholderMessage = ""
}
func (handler *InterruptHandler) InterruptMessageWithStackTraces() string {
handler.lock.Lock()
out := fmt.Sprintf("%s\n\n", handler.interruptCause.String())
defer handler.lock.Unlock()
if handler.interruptCause == InterruptCauseAbortByOtherProcess {
return out
}
out += "Here's a stack trace of all running goroutines:\n"
buf := make([]byte, 8192)
for {
n := runtime.Stack(buf, true)
if n < len(buf) {
buf = buf[:n]
break
}
buf = make([]byte, 2*len(buf))
}
out += formatter.Fi(1, "%s", string(buf))
return out
}

internal/node.go

@@ -1,9 +1,11 @@
package internal
import (
"context"
"fmt"
"reflect"
"sort"
"time"
"sync"
@@ -27,15 +29,20 @@ type Node struct {
NodeType types.NodeType
Text string
Body func()
Body func(SpecContext)
CodeLocation types.CodeLocation
NestingLevel int
HasContext bool
SynchronizedBeforeSuiteProc1Body func() []byte
SynchronizedBeforeSuiteAllProcsBody func([]byte)
SynchronizedBeforeSuiteProc1Body func(SpecContext) []byte
SynchronizedBeforeSuiteProc1BodyHasContext bool
SynchronizedBeforeSuiteAllProcsBody func(SpecContext, []byte)
SynchronizedBeforeSuiteAllProcsBodyHasContext bool
SynchronizedAfterSuiteAllProcsBody func()
SynchronizedAfterSuiteProc1Body func()
SynchronizedAfterSuiteAllProcsBody func(SpecContext)
SynchronizedAfterSuiteAllProcsBodyHasContext bool
SynchronizedAfterSuiteProc1Body func(SpecContext)
SynchronizedAfterSuiteProc1BodyHasContext bool
ReportEachBody func(types.SpecReport)
ReportAfterSuiteBody func(types.Report)
@@ -48,6 +55,11 @@ type Node struct {
MarkedSuppressProgressReporting bool
FlakeAttempts int
Labels Labels
PollProgressAfter time.Duration
PollProgressInterval time.Duration
NodeTimeout time.Duration
SpecTimeout time.Duration
GracePeriod time.Duration
NodeIDWhereCleanupWasGenerated uint
}
@@ -71,6 +83,11 @@ type FlakeAttempts uint
type Offset uint
type Done chan<- interface{} // Deprecated Done Channel for asynchronous testing
type Labels []string
type PollProgressInterval time.Duration
type PollProgressAfter time.Duration
type NodeTimeout time.Duration
type SpecTimeout time.Duration
type GracePeriod time.Duration
func UnionOfLabels(labels ...Labels) Labels {
out := Labels{}
@@ -123,6 +140,16 @@ func isDecoration(arg interface{}) bool {
return true
case t == reflect.TypeOf(Labels{}):
return true
case t == reflect.TypeOf(PollProgressInterval(0)):
return true
case t == reflect.TypeOf(PollProgressAfter(0)):
return true
case t == reflect.TypeOf(NodeTimeout(0)):
return true
case t == reflect.TypeOf(SpecTimeout(0)):
return true
case t == reflect.TypeOf(GracePeriod(0)):
return true
case t.Kind() == reflect.Slice && isSliceOfDecorations(arg):
return true
default:
@@ -143,16 +170,23 @@ func isSliceOfDecorations(slice interface{}) bool {
return true
}
var contextType = reflect.TypeOf(new(context.Context)).Elem()
var specContextType = reflect.TypeOf(new(SpecContext)).Elem()
func NewNode(deprecationTracker *types.DeprecationTracker, nodeType types.NodeType, text string, args ...interface{}) (Node, []error) {
baseOffset := 2
node := Node{
ID: UniqueNodeID(),
NodeType: nodeType,
Text: text,
Labels: Labels{},
CodeLocation: types.NewCodeLocation(baseOffset),
NestingLevel: -1,
ID: UniqueNodeID(),
NodeType: nodeType,
Text: text,
Labels: Labels{},
CodeLocation: types.NewCodeLocation(baseOffset),
NestingLevel: -1,
PollProgressAfter: -1,
PollProgressInterval: -1,
GracePeriod: -1,
}
errors := []error{}
appendError := func(err error) {
if err != nil {
@@ -219,6 +253,31 @@ func NewNode(deprecationTracker *types.DeprecationTracker, nodeType types.NodeTy
if !nodeType.Is(types.NodeTypesForContainerAndIt) {
appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "FlakeAttempts"))
}
case t == reflect.TypeOf(PollProgressAfter(0)):
node.PollProgressAfter = time.Duration(arg.(PollProgressAfter))
if nodeType.Is(types.NodeTypeContainer) {
appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "PollProgressAfter"))
}
case t == reflect.TypeOf(PollProgressInterval(0)):
node.PollProgressInterval = time.Duration(arg.(PollProgressInterval))
if nodeType.Is(types.NodeTypeContainer) {
appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "PollProgressInterval"))
}
case t == reflect.TypeOf(NodeTimeout(0)):
node.NodeTimeout = time.Duration(arg.(NodeTimeout))
if nodeType.Is(types.NodeTypeContainer) {
appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "NodeTimeout"))
}
case t == reflect.TypeOf(SpecTimeout(0)):
node.SpecTimeout = time.Duration(arg.(SpecTimeout))
if !nodeType.Is(types.NodeTypeIt) {
appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "SpecTimeout"))
}
case t == reflect.TypeOf(GracePeriod(0)):
node.GracePeriod = time.Duration(arg.(GracePeriod))
if nodeType.Is(types.NodeTypeContainer) {
appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "GracePeriod"))
}
case t == reflect.TypeOf(Labels{}):
if !nodeType.Is(types.NodeTypesForContainerAndIt) {
appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "Label"))
@ -232,35 +291,85 @@ func NewNode(deprecationTracker *types.DeprecationTracker, nodeType types.NodeTy
}
}
case t.Kind() == reflect.Func:
if nodeType.Is(types.NodeTypeReportBeforeEach | types.NodeTypeReportAfterEach) {
if node.ReportEachBody != nil {
if nodeType.Is(types.NodeTypeContainer) {
if node.Body != nil {
appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
trackedFunctionError = true
break
}
//we can trust that the function is valid because the compiler has our back here
node.ReportEachBody = arg.(func(types.SpecReport))
break
}
if node.Body != nil {
appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
trackedFunctionError = true
break
}
isValid := (t.NumOut() == 0) && (t.NumIn() <= 1) && (t.NumIn() == 0 || t.In(0) == reflect.TypeOf(make(Done)))
if !isValid {
appendError(types.GinkgoErrors.InvalidBodyType(t, node.CodeLocation, nodeType))
trackedFunctionError = true
break
}
if t.NumIn() == 0 {
node.Body = arg.(func())
if t.NumOut() > 0 || t.NumIn() > 0 {
appendError(types.GinkgoErrors.InvalidBodyTypeForContainer(t, node.CodeLocation, nodeType))
trackedFunctionError = true
break
}
body := arg.(func())
node.Body = func(SpecContext) { body() }
} else if nodeType.Is(types.NodeTypeReportBeforeEach | types.NodeTypeReportAfterEach) {
if node.ReportEachBody == nil {
node.ReportEachBody = arg.(func(types.SpecReport))
} else {
appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
trackedFunctionError = true
break
}
} else if nodeType.Is(types.NodeTypeReportAfterSuite) {
if node.ReportAfterSuiteBody == nil {
node.ReportAfterSuiteBody = arg.(func(types.Report))
} else {
appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
trackedFunctionError = true
break
}
} else if nodeType.Is(types.NodeTypeSynchronizedBeforeSuite) {
if node.SynchronizedBeforeSuiteProc1Body != nil && node.SynchronizedBeforeSuiteAllProcsBody != nil {
appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
trackedFunctionError = true
break
}
if node.SynchronizedBeforeSuiteProc1Body == nil {
body, hasContext := extractSynchronizedBeforeSuiteProc1Body(arg)
if body == nil {
appendError(types.GinkgoErrors.InvalidBodyTypeForSynchronizedBeforeSuiteProc1(t, node.CodeLocation))
trackedFunctionError = true
}
node.SynchronizedBeforeSuiteProc1Body, node.SynchronizedBeforeSuiteProc1BodyHasContext = body, hasContext
} else if node.SynchronizedBeforeSuiteAllProcsBody == nil {
body, hasContext := extractSynchronizedBeforeSuiteAllProcsBody(arg)
if body == nil {
appendError(types.GinkgoErrors.InvalidBodyTypeForSynchronizedBeforeSuiteAllProcs(t, node.CodeLocation))
trackedFunctionError = true
}
node.SynchronizedBeforeSuiteAllProcsBody, node.SynchronizedBeforeSuiteAllProcsBodyHasContext = body, hasContext
}
} else if nodeType.Is(types.NodeTypeSynchronizedAfterSuite) {
if node.SynchronizedAfterSuiteAllProcsBody != nil && node.SynchronizedAfterSuiteProc1Body != nil {
appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
trackedFunctionError = true
break
}
body, hasContext := extractBodyFunction(deprecationTracker, node.CodeLocation, arg)
if body == nil {
appendError(types.GinkgoErrors.InvalidBodyType(t, node.CodeLocation, nodeType))
trackedFunctionError = true
break
}
if node.SynchronizedAfterSuiteAllProcsBody == nil {
node.SynchronizedAfterSuiteAllProcsBody, node.SynchronizedAfterSuiteAllProcsBodyHasContext = body, hasContext
} else if node.SynchronizedAfterSuiteProc1Body == nil {
node.SynchronizedAfterSuiteProc1Body, node.SynchronizedAfterSuiteProc1BodyHasContext = body, hasContext
}
} else {
deprecationTracker.TrackDeprecation(types.Deprecations.Async(), node.CodeLocation)
deprecatedAsyncBody := arg.(func(Done))
node.Body = func() { deprecatedAsyncBody(make(Done)) }
if node.Body != nil {
appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
trackedFunctionError = true
break
}
node.Body, node.HasContext = extractBodyFunction(deprecationTracker, node.CodeLocation, arg)
if node.Body == nil {
appendError(types.GinkgoErrors.InvalidBodyType(t, node.CodeLocation, nodeType))
trackedFunctionError = true
break
}
}
default:
remainingArgs = append(remainingArgs, arg)
@ -272,9 +381,24 @@ func NewNode(deprecationTracker *types.DeprecationTracker, nodeType types.NodeTy
appendError(types.GinkgoErrors.InvalidDeclarationOfFocusedAndPending(node.CodeLocation, nodeType))
}
if node.Body == nil && node.ReportEachBody == nil && !node.MarkedPending && !trackedFunctionError {
hasContext := node.HasContext || node.SynchronizedAfterSuiteProc1BodyHasContext || node.SynchronizedAfterSuiteAllProcsBodyHasContext || node.SynchronizedBeforeSuiteProc1BodyHasContext || node.SynchronizedBeforeSuiteAllProcsBodyHasContext
if !hasContext && (node.NodeTimeout > 0 || node.SpecTimeout > 0 || node.GracePeriod > 0) && len(errors) == 0 {
appendError(types.GinkgoErrors.InvalidTimeoutOrGracePeriodForNonContextNode(node.CodeLocation, nodeType))
}
if !node.NodeType.Is(types.NodeTypeReportBeforeEach|types.NodeTypeReportAfterEach|types.NodeTypeSynchronizedBeforeSuite|types.NodeTypeSynchronizedAfterSuite|types.NodeTypeReportAfterSuite) && node.Body == nil && !node.MarkedPending && !trackedFunctionError {
appendError(types.GinkgoErrors.MissingBodyFunction(node.CodeLocation, nodeType))
}
if node.NodeType.Is(types.NodeTypeSynchronizedBeforeSuite) && !trackedFunctionError && (node.SynchronizedBeforeSuiteProc1Body == nil || node.SynchronizedBeforeSuiteAllProcsBody == nil) {
appendError(types.GinkgoErrors.MissingBodyFunction(node.CodeLocation, nodeType))
}
if node.NodeType.Is(types.NodeTypeSynchronizedAfterSuite) && !trackedFunctionError && (node.SynchronizedAfterSuiteProc1Body == nil || node.SynchronizedAfterSuiteAllProcsBody == nil) {
appendError(types.GinkgoErrors.MissingBodyFunction(node.CodeLocation, nodeType))
}
for _, arg := range remainingArgs {
appendError(types.GinkgoErrors.UnknownDecorator(node.CodeLocation, nodeType, arg))
}
@ -286,76 +410,151 @@ func NewNode(deprecationTracker *types.DeprecationTracker, nodeType types.NodeTy
return node, errors
}
func NewSynchronizedBeforeSuiteNode(proc1Body func() []byte, allProcsBody func([]byte), codeLocation types.CodeLocation) (Node, []error) {
return Node{
ID: UniqueNodeID(),
NodeType: types.NodeTypeSynchronizedBeforeSuite,
SynchronizedBeforeSuiteProc1Body: proc1Body,
SynchronizedBeforeSuiteAllProcsBody: allProcsBody,
CodeLocation: codeLocation,
}, nil
}
var doneType = reflect.TypeOf(make(Done))
func NewSynchronizedAfterSuiteNode(allProcsBody func(), proc1Body func(), codeLocation types.CodeLocation) (Node, []error) {
return Node{
ID: UniqueNodeID(),
NodeType: types.NodeTypeSynchronizedAfterSuite,
SynchronizedAfterSuiteAllProcsBody: allProcsBody,
SynchronizedAfterSuiteProc1Body: proc1Body,
CodeLocation: codeLocation,
}, nil
}
func NewReportAfterSuiteNode(text string, body func(types.Report), codeLocation types.CodeLocation) (Node, []error) {
return Node{
ID: UniqueNodeID(),
Text: text,
NodeType: types.NodeTypeReportAfterSuite,
ReportAfterSuiteBody: body,
CodeLocation: codeLocation,
}, nil
}
func NewCleanupNode(fail func(string, types.CodeLocation), args ...interface{}) (Node, []error) {
baseOffset := 2
node := Node{
ID: UniqueNodeID(),
NodeType: types.NodeTypeCleanupInvalid,
CodeLocation: types.NewCodeLocation(baseOffset),
NestingLevel: -1,
func extractBodyFunction(deprecationTracker *types.DeprecationTracker, cl types.CodeLocation, arg interface{}) (func(SpecContext), bool) {
t := reflect.TypeOf(arg)
if t.NumOut() > 0 || t.NumIn() > 1 {
return nil, false
}
remainingArgs := []interface{}{}
for _, arg := range args {
if t.NumIn() == 1 {
if t.In(0) == doneType {
deprecationTracker.TrackDeprecation(types.Deprecations.Async(), cl)
deprecatedAsyncBody := arg.(func(Done))
return func(SpecContext) { deprecatedAsyncBody(make(Done)) }, false
} else if t.In(0).Implements(specContextType) {
return arg.(func(SpecContext)), true
} else if t.In(0).Implements(contextType) {
body := arg.(func(context.Context))
return func(c SpecContext) { body(c) }, true
}
return nil, false
}
body := arg.(func())
return func(SpecContext) { body() }, false
}
var byteType = reflect.TypeOf([]byte{})
func extractSynchronizedBeforeSuiteProc1Body(arg interface{}) (func(SpecContext) []byte, bool) {
t := reflect.TypeOf(arg)
v := reflect.ValueOf(arg)
if t.NumOut() > 1 || t.NumIn() > 1 {
return nil, false
} else if t.NumOut() == 1 && t.Out(0) != byteType {
return nil, false
} else if t.NumIn() == 1 && !t.In(0).Implements(contextType) {
return nil, false
}
hasContext := t.NumIn() == 1
return func(c SpecContext) []byte {
var out []reflect.Value
if hasContext {
out = v.Call([]reflect.Value{reflect.ValueOf(c)})
} else {
out = v.Call([]reflect.Value{})
}
if len(out) == 1 {
return (out[0].Interface()).([]byte)
} else {
return []byte{}
}
}, hasContext
}
func extractSynchronizedBeforeSuiteAllProcsBody(arg interface{}) (func(SpecContext, []byte), bool) {
t := reflect.TypeOf(arg)
v := reflect.ValueOf(arg)
hasContext, hasByte := false, false
if t.NumOut() > 0 || t.NumIn() > 2 {
return nil, false
} else if t.NumIn() == 2 && t.In(0).Implements(contextType) && t.In(1) == byteType {
hasContext, hasByte = true, true
} else if t.NumIn() == 1 && t.In(0).Implements(contextType) {
hasContext = true
} else if t.NumIn() == 1 && t.In(0) == byteType {
hasByte = true
} else if t.NumIn() != 0 {
return nil, false
}
return func(c SpecContext, b []byte) {
in := []reflect.Value{}
if hasContext {
in = append(in, reflect.ValueOf(c))
}
if hasByte {
in = append(in, reflect.ValueOf(b))
}
v.Call(in)
}, hasContext
}
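Per the reflection checks in the two extractors above, both halves of SynchronizedBeforeSuite may now optionally take a context; a minimal sketch of the richest accepted shape:
var _ = SynchronizedBeforeSuite(func(ctx SpecContext) []byte {
	return []byte("shared-address") // runs on process 1 only
}, func(ctx SpecContext, data []byte) {
	// runs on every process, receiving process 1's data
})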
func NewCleanupNode(deprecationTracker *types.DeprecationTracker, fail func(string, types.CodeLocation), args ...interface{}) (Node, []error) {
decorations, remainingArgs := PartitionDecorations(args...)
baseOffset := 2
cl := types.NewCodeLocation(baseOffset)
finalArgs := []interface{}{}
for _, arg := range decorations {
switch t := reflect.TypeOf(arg); {
case t == reflect.TypeOf(Offset(0)):
node.CodeLocation = types.NewCodeLocation(baseOffset + int(arg.(Offset)))
cl = types.NewCodeLocation(baseOffset + int(arg.(Offset)))
case t == reflect.TypeOf(types.CodeLocation{}):
node.CodeLocation = arg.(types.CodeLocation)
cl = arg.(types.CodeLocation)
default:
remainingArgs = append(remainingArgs, arg)
finalArgs = append(finalArgs, arg)
}
}
finalArgs = append(finalArgs, cl)
if len(remainingArgs) == 0 {
return Node{}, []error{types.GinkgoErrors.DeferCleanupInvalidFunction(node.CodeLocation)}
return Node{}, []error{types.GinkgoErrors.DeferCleanupInvalidFunction(cl)}
}
callback := reflect.ValueOf(remainingArgs[0])
if !(callback.Kind() == reflect.Func && callback.Type().NumOut() <= 1) {
return Node{}, []error{types.GinkgoErrors.DeferCleanupInvalidFunction(node.CodeLocation)}
return Node{}, []error{types.GinkgoErrors.DeferCleanupInvalidFunction(cl)}
}
callArgs := []reflect.Value{}
for _, arg := range remainingArgs[1:] {
callArgs = append(callArgs, reflect.ValueOf(arg))
}
cl := node.CodeLocation
node.Body = func() {
out := callback.Call(callArgs)
hasContext := false
t := callback.Type()
if t.NumIn() > 0 {
if t.In(0).Implements(specContextType) {
hasContext = true
} else if t.In(0).Implements(contextType) && (len(callArgs) == 0 || !callArgs[0].Type().Implements(contextType)) {
hasContext = true
}
}
handleFailure := func(out []reflect.Value) {
if len(out) == 1 && !out[0].IsNil() {
fail(fmt.Sprintf("DeferCleanup callback returned error: %v", out[0]), cl)
}
}
return node, nil
if hasContext {
finalArgs = append(finalArgs, func(c SpecContext) {
out := callback.Call(append([]reflect.Value{reflect.ValueOf(c)}, callArgs...))
handleFailure(out)
})
} else {
finalArgs = append(finalArgs, func() {
out := callback.Call(callArgs)
handleFailure(out)
})
}
return NewNode(deprecationTracker, types.NodeTypeCleanupInvalid, "", finalArgs...)
}
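Reading the reflection above: a DeferCleanup callback may now take a leading context argument and may return an error, which fails the cleanup node. A sketch, where `teardownDatabase` is a hypothetical helper:
DeferCleanup(func(ctx SpecContext) error {
	return teardownDatabase(ctx) // a non-nil error is reported via fail(...)
})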
func (n Node) IsZero() bool {

View File

@ -143,7 +143,7 @@ func (interceptor *genericOutputInterceptor) ResumeIntercepting() {
go startPipeFactory(interceptor.pipeChannel, interceptor.shutdown)
}
// Now we make a pipe, we'll use this to redirect the input to the 1 and 2 file descriptors (this is how everything else in the world is tring to log to stdout and stderr)
// Now we make a pipe, we'll use this to redirect the input to the 1 and 2 file descriptors (this is how everything else in the world is trying to log to stdout and stderr)
// we get the pipe from our pipe factory. it runs in the background so we can request the next pipe while the spec being intercepted is running
interceptor.pipe = <-interceptor.pipeChannel

View File

@ -28,7 +28,7 @@ func (impl *dupSyscallOutputInterceptorImpl) CreateStdoutStderrClones() (*os.Fil
// And then wrap the clone file descriptors in files.
// One benefit of this (that we don't use yet) is that we can actually write
// to these files to emit output to the console evne though we're intercepting output
// to these files to emit output to the console even though we're intercepting output
stdoutClone := os.NewFile(uintptr(stdoutCloneFD), "stdout-clone")
stderrClone := os.NewFile(uintptr(stderrCloneFD), "stderr-clone")

View File

@ -49,6 +49,7 @@ type Client interface {
FetchNextCounter() (int, error)
PostAbort() error
ShouldAbort() bool
PostEmitProgressReport(report types.ProgressReport) error
Write(p []byte) (int, error)
}

View File

@ -94,6 +94,10 @@ func (client *httpClient) PostSuiteDidEnd(report types.Report) error {
return client.post("/suite-did-end", report)
}
func (client *httpClient) PostEmitProgressReport(report types.ProgressReport) error {
return client.post("/progress-report", report)
}
func (client *httpClient) PostSynchronizedBeforeSuiteCompleted(state types.SpecState, data []byte) error {
beforeSuiteState := BeforeSuiteState{
State: state,

View File

@ -49,6 +49,7 @@ func (server *httpServer) Start() {
mux.HandleFunc("/did-run", server.didRun)
mux.HandleFunc("/suite-did-end", server.specSuiteDidEnd)
mux.HandleFunc("/emit-output", server.emitOutput)
mux.HandleFunc("/progress-report", server.emitProgressReport)
//synchronization endpoints
mux.HandleFunc("/before-suite-completed", server.handleBeforeSuiteCompleted)
@ -155,6 +156,14 @@ func (server *httpServer) emitOutput(writer http.ResponseWriter, request *http.R
server.handleError(server.handler.EmitOutput(output, &n), writer)
}
func (server *httpServer) emitProgressReport(writer http.ResponseWriter, request *http.Request) {
var report types.ProgressReport
if !server.decode(writer, request, &report) {
return
}
server.handleError(server.handler.EmitProgressReport(report, voidReceiver), writer)
}
func (server *httpServer) handleBeforeSuiteCompleted(writer http.ResponseWriter, request *http.Request) {
var beforeSuiteState BeforeSuiteState
if !server.decode(writer, request, &beforeSuiteState) {

View File

@ -72,6 +72,10 @@ func (client *rpcClient) Write(p []byte) (int, error) {
return n, err
}
func (client *rpcClient) PostEmitProgressReport(report types.ProgressReport) error {
return client.client.Call("Server.EmitProgressReport", report, voidReceiver)
}
func (client *rpcClient) PostSynchronizedBeforeSuiteCompleted(state types.SpecState, data []byte) error {
beforeSuiteState := BeforeSuiteState{
State: state,

View File

@ -108,6 +108,13 @@ func (handler *ServerHandler) EmitOutput(output []byte, n *int) error {
return err
}
func (handler *ServerHandler) EmitProgressReport(report types.ProgressReport, _ *Void) error {
handler.lock.Lock()
defer handler.lock.Unlock()
handler.reporter.EmitProgressReport(report)
return nil
}
func (handler *ServerHandler) registerAlive(proc int, alive func() bool) {
handler.lock.Lock()
defer handler.lock.Unlock()

View File

@ -0,0 +1,291 @@
package internal
import (
"bufio"
"bytes"
"context"
"fmt"
"io"
"os"
"os/signal"
"path/filepath"
"runtime"
"strconv"
"strings"
"time"
"github.com/onsi/ginkgo/v2/types"
)
var _SOURCE_CACHE = map[string][]string{}
type ProgressSignalRegistrar func(func()) context.CancelFunc
func RegisterForProgressSignal(handler func()) context.CancelFunc {
signalChannel := make(chan os.Signal, 1)
if len(PROGRESS_SIGNALS) > 0 {
signal.Notify(signalChannel, PROGRESS_SIGNALS...)
}
ctx, cancel := context.WithCancel(context.Background())
go func() {
for {
select {
case <-signalChannel:
handler()
case <-ctx.Done():
signal.Stop(signalChannel)
return
}
}
}()
return cancel
}
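A minimal usage sketch, assuming the suite registers its handler this way; on the platforms defined further below, sending SIGUSR1 to the test process invokes the handler, and the returned CancelFunc detaches it:
cancel := RegisterForProgressSignal(func() { fmt.Println("progress requested") })
defer cancel()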
type ProgressStepCursor struct {
Text string
CodeLocation types.CodeLocation
StartTime time.Time
}
func NewProgressReport(isRunningInParallel bool, report types.SpecReport, currentNode Node, currentNodeStartTime time.Time, currentStep ProgressStepCursor, gwOutput string, additionalReports []string, sourceRoots []string, includeAll bool) (types.ProgressReport, error) {
pr := types.ProgressReport{
ParallelProcess: report.ParallelProcess,
RunningInParallel: isRunningInParallel,
Time: time.Now(),
ContainerHierarchyTexts: report.ContainerHierarchyTexts,
LeafNodeText: report.LeafNodeText,
LeafNodeLocation: report.LeafNodeLocation,
SpecStartTime: report.StartTime,
CurrentNodeType: currentNode.NodeType,
CurrentNodeText: currentNode.Text,
CurrentNodeLocation: currentNode.CodeLocation,
CurrentNodeStartTime: currentNodeStartTime,
CurrentStepText: currentStep.Text,
CurrentStepLocation: currentStep.CodeLocation,
CurrentStepStartTime: currentStep.StartTime,
AdditionalReports: additionalReports,
CapturedGinkgoWriterOutput: gwOutput,
GinkgoWriterOffset: len(gwOutput),
}
goroutines, err := extractRunningGoroutines()
if err != nil {
return pr, err
}
pr.Goroutines = goroutines
// now we want to try to find goroutines of interest. these will be goroutines that have any function calls with code in packagesOfInterest:
packagesOfInterest := map[string]bool{}
packageFromFilename := func(filename string) string {
return filepath.Dir(filename)
}
addPackageFor := func(filename string) {
if filename != "" {
packagesOfInterest[packageFromFilename(filename)] = true
}
}
isPackageOfInterest := func(filename string) bool {
stackPackage := packageFromFilename(filename)
for packageOfInterest := range packagesOfInterest {
if strings.HasPrefix(stackPackage, packageOfInterest) {
return true
}
}
return false
}
for _, location := range report.ContainerHierarchyLocations {
addPackageFor(location.FileName)
}
addPackageFor(report.LeafNodeLocation.FileName)
addPackageFor(currentNode.CodeLocation.FileName)
addPackageFor(currentStep.CodeLocation.FileName)
//First, we find the SpecGoroutine - this will be the goroutine that includes `runNode`
specGoRoutineIdx := -1
runNodeFunctionCallIdx := -1
OUTER:
for goroutineIdx, goroutine := range pr.Goroutines {
for functionCallIdx, functionCall := range goroutine.Stack {
if strings.Contains(functionCall.Function, "ginkgo/v2/internal.(*Suite).runNode.func") {
specGoRoutineIdx = goroutineIdx
runNodeFunctionCallIdx = functionCallIdx
break OUTER
}
}
}
//Now, we find the first non-Ginkgo function call
if specGoRoutineIdx > -1 {
for runNodeFunctionCallIdx >= 0 {
fn := goroutines[specGoRoutineIdx].Stack[runNodeFunctionCallIdx].Function
file := goroutines[specGoRoutineIdx].Stack[runNodeFunctionCallIdx].Filename
// these are all things that could potentially happen from within ginkgo
if strings.Contains(fn, "ginkgo/v2/internal") || strings.Contains(fn, "reflect.Value") || strings.Contains(file, "ginkgo/table_dsl") || strings.Contains(file, "ginkgo/core_dsl") {
runNodeFunctionCallIdx--
continue
}
//found it! let's add its package of interest
addPackageFor(goroutines[specGoRoutineIdx].Stack[runNodeFunctionCallIdx].Filename)
break
}
}
ginkgoEntryPointIdx := -1
OUTER_GINKGO_ENTRY_POINT:
for goroutineIdx, goroutine := range pr.Goroutines {
for _, functionCall := range goroutine.Stack {
if strings.Contains(functionCall.Function, "ginkgo/v2.RunSpecs") {
ginkgoEntryPointIdx = goroutineIdx
break OUTER_GINKGO_ENTRY_POINT
}
}
}
// Now we go through all goroutines and highlight any lines with packages in `packagesOfInterest`
// Any goroutines with highlighted lines end up in HighlightedGoroutines
for goroutineIdx, goroutine := range pr.Goroutines {
if goroutineIdx == ginkgoEntryPointIdx {
continue
}
if goroutineIdx == specGoRoutineIdx {
pr.Goroutines[goroutineIdx].IsSpecGoroutine = true
}
for functionCallIdx, functionCall := range goroutine.Stack {
if isPackageOfInterest(functionCall.Filename) {
goroutine.Stack[functionCallIdx].Highlight = true
goroutine.Stack[functionCallIdx].Source, goroutine.Stack[functionCallIdx].SourceHighlight = fetchSource(functionCall.Filename, functionCall.Line, 2, sourceRoots)
}
}
}
if !includeAll {
goroutines := []types.Goroutine{pr.SpecGoroutine()}
goroutines = append(goroutines, pr.HighlightedGoroutines()...)
pr.Goroutines = goroutines
}
return pr, nil
}
func extractRunningGoroutines() ([]types.Goroutine, error) {
var stack []byte
for size := 64 * 1024; ; size *= 2 {
stack = make([]byte, size)
if n := runtime.Stack(stack, true); n < size {
stack = stack[:n]
break
}
}
r := bufio.NewReader(bytes.NewReader(stack))
out := []types.Goroutine{}
idx := -1
for {
line, err := r.ReadString('\n')
if err == io.EOF {
break
}
line = strings.TrimSuffix(line, "\n")
//skip blank lines
if line == "" {
continue
}
//parse headers for new goroutine frames
if strings.HasPrefix(line, "goroutine") {
out = append(out, types.Goroutine{})
idx = len(out) - 1
line = strings.TrimPrefix(line, "goroutine ")
line = strings.TrimSuffix(line, ":")
fields := strings.SplitN(line, " ", 2)
if len(fields) != 2 {
return nil, types.GinkgoErrors.FailedToParseStackTrace(fmt.Sprintf("Invalid goroutine frame header: %s", line))
}
out[idx].ID, err = strconv.ParseUint(fields[0], 10, 64)
if err != nil {
return nil, types.GinkgoErrors.FailedToParseStackTrace(fmt.Sprintf("Invalid goroutine ID: %s", fields[0]))
}
out[idx].State = strings.TrimSuffix(strings.TrimPrefix(fields[1], "["), "]")
continue
}
//if we are here we must be at a function call entry in the stack
functionCall := types.FunctionCall{
Function: strings.TrimPrefix(line, "created by "), // no need to track 'created by'
}
line, err = r.ReadString('\n')
line = strings.TrimSuffix(line, "\n")
if err == io.EOF {
return nil, types.GinkgoErrors.FailedToParseStackTrace(fmt.Sprintf("Invalid function call: %s -- missing file name and line number", functionCall.Function))
}
line = strings.TrimLeft(line, " \t")
fields := strings.SplitN(line, ":", 2)
if len(fields) != 2 {
return nil, types.GinkgoErrors.FailedToParseStackTrace(fmt.Sprintf("Invalid filename and line number: %s", line))
}
functionCall.Filename = fields[0]
line = strings.Split(fields[1], " ")[0]
lineNumber, err := strconv.ParseInt(line, 10, 64)
functionCall.Line = int(lineNumber)
if err != nil {
return nil, types.GinkgoErrors.FailedToParseStackTrace(fmt.Sprintf("Invalid function call line number: %s\n%s", line, err.Error()))
}
out[idx].Stack = append(out[idx].Stack, functionCall)
}
return out, nil
}
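For reference, the parser above walks the canonical runtime.Stack layout: a header such as `goroutine 7 [select]:`, then for each frame a function line like `main.work()` followed by an indented location line like `/app/main.go:42 +0x1f` — which is why the code splits the location on the first `:` and then takes the field before the ` +0x…` offset.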
func fetchSource(filename string, lineNumber int, span int, configuredSourceRoots []string) ([]string, int) {
if filename == "" {
return []string{}, 0
}
var lines []string
var ok bool
if lines, ok = _SOURCE_CACHE[filename]; !ok {
sourceRoots := []string{""}
sourceRoots = append(sourceRoots, configuredSourceRoots...)
var data []byte
var err error
var found bool
for _, root := range sourceRoots {
data, err = os.ReadFile(filepath.Join(root, filename))
if err == nil {
found = true
break
}
}
if !found {
return []string{}, 0
}
lines = strings.Split(string(data), "\n")
_SOURCE_CACHE[filename] = lines
}
startIndex := lineNumber - span - 1
endIndex := startIndex + span + span + 1
if startIndex < 0 {
startIndex = 0
}
if endIndex > len(lines) {
endIndex = len(lines)
}
highlightIndex := lineNumber - 1 - startIndex
return lines[startIndex:endIndex], highlightIndex
}
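A quick worked example of the windowing arithmetic: with lineNumber = 10 and span = 2, startIndex = 7 and endIndex = 12, so lines 8 through 12 (1-based) are returned and highlightIndex = 2 points back at line 10 within that slice.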

View File

@ -0,0 +1,11 @@
//go:build freebsd || openbsd || netbsd || darwin || dragonfly
// +build freebsd openbsd netbsd darwin dragonfly
package internal
import (
"os"
"syscall"
)
var PROGRESS_SIGNALS = []os.Signal{syscall.SIGINFO, syscall.SIGUSR1}

View File

@ -0,0 +1,11 @@
//go:build linux || solaris
// +build linux solaris
package internal
import (
"os"
"syscall"
)
var PROGRESS_SIGNALS = []os.Signal{syscall.SIGUSR1}

View File

@ -0,0 +1,8 @@
//go:build windows
// +build windows
package internal
import "os"
var PROGRESS_SIGNALS = []os.Signal{}

View File

@ -2,6 +2,7 @@ package internal
import (
"strings"
"time"
"github.com/onsi/ginkgo/v2/types"
)
@ -40,6 +41,10 @@ func (s Spec) FlakeAttempts() int {
return flakeAttempts
}
func (s Spec) SpecTimeout() time.Duration {
return s.FirstNodeWithType(types.NodeTypeIt).SpecTimeout
}
type Specs []Spec
func (s Specs) HasAnySpecsMarkedPending() bool {

View File

@ -0,0 +1,90 @@
package internal
import (
"context"
"sort"
"sync"
"github.com/onsi/ginkgo/v2/types"
)
type SpecContext interface {
context.Context
SpecReport() types.SpecReport
AttachProgressReporter(func() string) func()
}
type specContext struct {
context.Context
cancel context.CancelFunc
lock *sync.Mutex
progressReporters map[int]func() string
prCounter int
suite *Suite
}
/*
SpecContext includes a reference to `suite` and embeds itself in the context it wraps as a "GINKGO_SPEC_CONTEXT" value. This allows users to create child Contexts without having downstream consumers (e.g. Gomega) lose access to the SpecContext and its methods. This allows us to build extensions on top of Ginkgo that simply take an all-encompassing context.
Note that while SpecContext is used to enforce deadlines by Ginkgo it is not configured as a context.WithDeadline. Instead, Ginkgo owns responsibility for cancelling the context when the deadline elapses.
This is because Ginkgo needs finer control over when the context is canceled. Specifically, Ginkgo needs to generate a ProgressReport before it cancels the context to ensure progress is captured where the spec is currently running. The only way to avoid a race here is to manually control the cancellation.
*/
func NewSpecContext(suite *Suite) *specContext {
ctx, cancel := context.WithCancel(context.Background())
sc := &specContext{
cancel: cancel,
suite: suite,
lock: &sync.Mutex{},
prCounter: 0,
progressReporters: map[int]func() string{},
}
ctx = context.WithValue(ctx, "GINKGO_SPEC_CONTEXT", sc) //yes, yes, the go docs say don't use a string for a key... but we'd rather avoid a circular dependency between Gomega and Ginkgo
sc.Context = ctx //thank goodness for garbage collectors that can handle circular dependencies
return sc
}
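Because the context carries itself as the "GINKGO_SPEC_CONTEXT" value, a consumer holding only a derived context.Context can recover the SpecContext; a sketch:
if sc, ok := ctx.Value("GINKGO_SPEC_CONTEXT").(SpecContext); ok {
	_ = sc.SpecReport() // the full SpecContext API is reachable again
}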
func (sc *specContext) SpecReport() types.SpecReport {
return sc.suite.CurrentSpecReport()
}
func (sc *specContext) AttachProgressReporter(reporter func() string) func() {
sc.lock.Lock()
defer sc.lock.Unlock()
sc.prCounter += 1
prCounter := sc.prCounter
sc.progressReporters[prCounter] = reporter
return func() {
sc.lock.Lock()
defer sc.lock.Unlock()
delete(sc.progressReporters, prCounter)
}
}
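A usage sketch: attach a reporter for the duration of a slow operation and detach it on the way out:
detach := sc.AttachProgressReporter(func() string { return "waiting for server to come up..." })
defer detach()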
func (sc *specContext) QueryProgressReporters() []string {
sc.lock.Lock()
keys := []int{}
for key := range sc.progressReporters {
keys = append(keys, key)
}
sort.Ints(keys)
reporters := []func() string{}
for _, key := range keys {
reporters = append(reporters, sc.progressReporters[key])
}
sc.lock.Unlock()
if len(reporters) == 0 {
return nil
}
out := []string{}
for _, reporter := range reporters {
out = append(out, reporter())
}
return out
}

View File

@ -5,7 +5,6 @@ import (
"sync"
"time"
"github.com/onsi/ginkgo/v2/formatter"
"github.com/onsi/ginkgo/v2/internal/interrupt_handler"
"github.com/onsi/ginkgo/v2/internal/parallel_support"
"github.com/onsi/ginkgo/v2/reporters"
@ -35,21 +34,39 @@ type Suite struct {
outputInterceptor OutputInterceptor
interruptHandler interrupt_handler.InterruptHandlerInterface
config types.SuiteConfig
deadline time.Time
skipAll bool
report types.Report
currentSpecReport types.SpecReport
currentSpecReportUserAccessLock *sync.Mutex
currentNode Node
skipAll bool
report types.Report
currentSpecReport types.SpecReport
currentNode Node
currentNodeStartTime time.Time
currentSpecContext *specContext
progressStepCursor ProgressStepCursor
/*
We don't need to lock around all operations. Just those that *could* happen concurrently.
Suite, generally, only runs one node at a time - and so the possibility for races is small. In fact, the presence of a race usually indicates the user has launched a goroutine that has leaked past the node it was launched in.
However, there are some operations that can happen concurrently:
- AddReportEntry and CurrentSpecReport can be accessed at any point by the user - including in goroutines that outlive the node intentionally (see, e.g. #1020). They both form a self-contained read-write pair and so a lock in them is sufficient.
- generateProgressReport can be invoked at any point in time by an interrupt or a progress poll. Moreover, it requires access to currentSpecReport, currentNode, currentNodeStartTime, and progressStepCursor. To make it threadsafe we need to lock around generateProgressReport when we read those variables _and_ everywhere those variables are *written*. In general we don't need to worry about all possible field writes to these variables as what `generateProgressReport` does with these variables is fairly selective (hence the name of the lock). Specifically, we don't need to lock around state and failure message changes on `currentSpecReport` - just the setting of the variable itself.
*/
selectiveLock *sync.Mutex
client parallel_support.Client
}
func NewSuite() *Suite {
return &Suite{
tree: &TreeNode{},
phase: PhaseBuildTopLevel,
currentSpecReportUserAccessLock: &sync.Mutex{},
tree: &TreeNode{},
phase: PhaseBuildTopLevel,
selectiveLock: &sync.Mutex{},
}
}
@ -66,7 +83,7 @@ func (suite *Suite) BuildTree() error {
return nil
}
func (suite *Suite) Run(description string, suiteLabels Labels, suitePath string, failer *Failer, reporter reporters.Reporter, writer WriterInterface, outputInterceptor OutputInterceptor, interruptHandler interrupt_handler.InterruptHandlerInterface, client parallel_support.Client, suiteConfig types.SuiteConfig) (bool, bool) {
func (suite *Suite) Run(description string, suiteLabels Labels, suitePath string, failer *Failer, reporter reporters.Reporter, writer WriterInterface, outputInterceptor OutputInterceptor, interruptHandler interrupt_handler.InterruptHandlerInterface, client parallel_support.Client, progressSignalRegistrar ProgressSignalRegistrar, suiteConfig types.SuiteConfig) (bool, bool) {
if suite.phase != PhaseBuildTree {
panic("cannot run before building the tree = call suite.BuildTree() first")
}
@ -83,8 +100,16 @@ func (suite *Suite) Run(description string, suiteLabels Labels, suitePath string
suite.interruptHandler = interruptHandler
suite.config = suiteConfig
if suite.config.Timeout > 0 {
suite.deadline = time.Now().Add(suite.config.Timeout)
}
cancelProgressHandler := progressSignalRegistrar(suite.handleProgressSignal)
success := suite.runSpecs(description, suiteLabels, suitePath, hasProgrammaticFocus, specs)
cancelProgressHandler()
return success, hasProgrammaticFocus
}
@ -146,7 +171,7 @@ func (suite *Suite) PushNode(node Node) error {
err = types.GinkgoErrors.CaughtPanicDuringABuildPhase(e, node.CodeLocation)
}
}()
node.Body()
node.Body(nil)
return err
}()
suite.tree = parentTree
@ -211,12 +236,23 @@ func (suite *Suite) pushCleanupNode(node Node) error {
return nil
}
/*
Pushing and popping the Step Cursor stack
*/
func (suite *Suite) SetProgressStepCursor(cursor ProgressStepCursor) {
suite.selectiveLock.Lock()
defer suite.selectiveLock.Unlock()
suite.progressStepCursor = cursor
}
/*
Spec Running methods - used during PhaseRun
*/
func (suite *Suite) CurrentSpecReport() types.SpecReport {
suite.currentSpecReportUserAccessLock.Lock()
defer suite.currentSpecReportUserAccessLock.Unlock()
suite.selectiveLock.Lock()
defer suite.selectiveLock.Unlock()
report := suite.currentSpecReport
if suite.writer != nil {
report.CapturedGinkgoWriterOutput = string(suite.writer.Bytes())
@ -227,8 +263,8 @@ func (suite *Suite) CurrentSpecReport() types.SpecReport {
}
func (suite *Suite) AddReportEntry(entry ReportEntry) error {
suite.currentSpecReportUserAccessLock.Lock()
defer suite.currentSpecReportUserAccessLock.Unlock()
suite.selectiveLock.Lock()
defer suite.selectiveLock.Unlock()
if suite.phase != PhaseRun {
return types.GinkgoErrors.AddReportEntryNotDuringRunPhase(entry.Location)
}
@ -236,6 +272,45 @@ func (suite *Suite) AddReportEntry(entry ReportEntry) error {
return nil
}
func (suite *Suite) generateProgressReport(fullReport bool) types.ProgressReport {
suite.selectiveLock.Lock()
defer suite.selectiveLock.Unlock()
var additionalReports []string
if suite.currentSpecContext != nil {
additionalReports = suite.currentSpecContext.QueryProgressReporters()
}
stepCursor := suite.progressStepCursor
gwOutput := suite.currentSpecReport.CapturedGinkgoWriterOutput + string(suite.writer.Bytes())
pr, err := NewProgressReport(suite.isRunningInParallel(), suite.currentSpecReport, suite.currentNode, suite.currentNodeStartTime, stepCursor, gwOutput, additionalReports, suite.config.SourceRoots, fullReport)
if err != nil {
fmt.Printf("{{red}}Failed to generate progress report:{{/}}\n%s\n", err.Error())
}
return pr
}
func (suite *Suite) handleProgressSignal() {
report := suite.generateProgressReport(false)
report.Message = "{{bold}}You've requested a progress report:{{/}}"
suite.emitProgressReport(report)
}
func (suite *Suite) emitProgressReport(report types.ProgressReport) {
suite.selectiveLock.Lock()
suite.currentSpecReport.ProgressReports = append(suite.currentSpecReport.ProgressReports, report.WithoutCapturedGinkgoWriterOutput())
suite.selectiveLock.Unlock()
suite.reporter.EmitProgressReport(report)
if suite.isRunningInParallel() {
err := suite.client.PostEmitProgressReport(report)
if err != nil {
fmt.Println(err.Error())
}
}
}
func (suite *Suite) isRunningInParallel() bool {
return suite.config.ParallelTotal > 1
}
@ -322,12 +397,16 @@ func (suite *Suite) runSpecs(description string, suiteLabels Labels, suitePath s
suite.runAfterSuiteCleanup(numSpecsThatWillBeRun)
interruptStatus := suite.interruptHandler.Status()
if interruptStatus.Interrupted {
if interruptStatus.Interrupted() {
suite.report.SpecialSuiteFailureReasons = append(suite.report.SpecialSuiteFailureReasons, interruptStatus.Cause.String())
suite.report.SuiteSucceeded = false
}
suite.report.EndTime = time.Now()
suite.report.RunTime = suite.report.EndTime.Sub(suite.report.StartTime)
if !suite.deadline.IsZero() && suite.report.EndTime.After(suite.deadline) {
suite.report.SpecialSuiteFailureReasons = append(suite.report.SpecialSuiteFailureReasons, "Suite Timeout Elapsed")
suite.report.SuiteSucceeded = false
}
if suite.config.ParallelProcess == 1 {
suite.runReportAfterSuite()
@ -341,16 +420,18 @@ func (suite *Suite) runSpecs(description string, suiteLabels Labels, suitePath s
}
func (suite *Suite) runBeforeSuite(numSpecsThatWillBeRun int) {
interruptStatus := suite.interruptHandler.Status()
beforeSuiteNode := suite.suiteNodes.FirstNodeWithType(types.NodeTypeBeforeSuite | types.NodeTypeSynchronizedBeforeSuite)
if !beforeSuiteNode.IsZero() && !interruptStatus.Interrupted && numSpecsThatWillBeRun > 0 {
if !beforeSuiteNode.IsZero() && numSpecsThatWillBeRun > 0 {
suite.selectiveLock.Lock()
suite.currentSpecReport = types.SpecReport{
LeafNodeType: beforeSuiteNode.NodeType,
LeafNodeLocation: beforeSuiteNode.CodeLocation,
ParallelProcess: suite.config.ParallelProcess,
}
suite.selectiveLock.Unlock()
suite.reporter.WillRun(suite.currentSpecReport)
suite.runSuiteNode(beforeSuiteNode, interruptStatus.Channel)
suite.runSuiteNode(beforeSuiteNode)
if suite.currentSpecReport.State.Is(types.SpecStateSkipped) {
suite.report.SpecialSuiteFailureReasons = append(suite.report.SpecialSuiteFailureReasons, "Suite skipped in BeforeSuite")
suite.skipAll = true
@ -362,26 +443,32 @@ func (suite *Suite) runBeforeSuite(numSpecsThatWillBeRun int) {
func (suite *Suite) runAfterSuiteCleanup(numSpecsThatWillBeRun int) {
afterSuiteNode := suite.suiteNodes.FirstNodeWithType(types.NodeTypeAfterSuite | types.NodeTypeSynchronizedAfterSuite)
if !afterSuiteNode.IsZero() && numSpecsThatWillBeRun > 0 {
suite.selectiveLock.Lock()
suite.currentSpecReport = types.SpecReport{
LeafNodeType: afterSuiteNode.NodeType,
LeafNodeLocation: afterSuiteNode.CodeLocation,
ParallelProcess: suite.config.ParallelProcess,
}
suite.selectiveLock.Unlock()
suite.reporter.WillRun(suite.currentSpecReport)
suite.runSuiteNode(afterSuiteNode, suite.interruptHandler.Status().Channel)
suite.runSuiteNode(afterSuiteNode)
suite.processCurrentSpecReport()
}
afterSuiteCleanup := suite.cleanupNodes.WithType(types.NodeTypeCleanupAfterSuite).Reverse()
if len(afterSuiteCleanup) > 0 {
for _, cleanupNode := range afterSuiteCleanup {
suite.selectiveLock.Lock()
suite.currentSpecReport = types.SpecReport{
LeafNodeType: cleanupNode.NodeType,
LeafNodeLocation: cleanupNode.CodeLocation,
ParallelProcess: suite.config.ParallelProcess,
}
suite.selectiveLock.Unlock()
suite.reporter.WillRun(suite.currentSpecReport)
suite.runSuiteNode(cleanupNode, suite.interruptHandler.Status().Channel)
suite.runSuiteNode(cleanupNode)
suite.processCurrentSpecReport()
}
}
@ -389,12 +476,15 @@ func (suite *Suite) runAfterSuiteCleanup(numSpecsThatWillBeRun int) {
func (suite *Suite) runReportAfterSuite() {
for _, node := range suite.suiteNodes.WithType(types.NodeTypeReportAfterSuite) {
suite.selectiveLock.Lock()
suite.currentSpecReport = types.SpecReport{
LeafNodeType: node.NodeType,
LeafNodeLocation: node.CodeLocation,
LeafNodeText: node.Text,
ParallelProcess: suite.config.ParallelProcess,
}
suite.selectiveLock.Unlock()
suite.reporter.WillRun(suite.currentSpecReport)
suite.runReportAfterSuiteNode(node, suite.report)
suite.processCurrentSpecReport()
@ -417,16 +507,11 @@ func (suite *Suite) reportEach(spec Spec, nodeType types.NodeType) {
suite.writer.Truncate()
suite.outputInterceptor.StartInterceptingOutput()
report := suite.currentSpecReport
nodes[i].Body = func() {
nodes[i].Body = func(SpecContext) {
nodes[i].ReportEachBody(report)
}
suite.interruptHandler.SetInterruptPlaceholderMessage(formatter.Fiw(0, formatter.COLS,
"{{yellow}}Ginkgo received an interrupt signal but is currently running a %s node. To avoid an invalid report the %s node will not be interrupted however subsequent tests will be skipped.{{/}}\n\n{{bold}}The running %s node is at:\n%s.{{/}}",
nodeType, nodeType, nodeType,
nodes[i].CodeLocation,
))
state, failure := suite.runNode(nodes[i], nil, spec.Nodes.BestTextFor(nodes[i]))
suite.interruptHandler.ClearInterruptPlaceholderMessage()
state, failure := suite.runNode(nodes[i], time.Time{}, spec.Nodes.BestTextFor(nodes[i]))
// If the spec is not in a failure state (i.e. it's Passed/Skipped/Pending) and the reporter has failed, override the state.
// Also, if the reporter is ever aborted - always override the state to propagate the abort
if (!suite.currentSpecReport.State.Is(types.SpecStateFailureStates) && state.Is(types.SpecStateFailureStates)) || state.Is(types.SpecStateAborted) {
@ -438,7 +523,7 @@ func (suite *Suite) reportEach(spec Spec, nodeType types.NodeType) {
}
}
func (suite *Suite) runSuiteNode(node Node, interruptChannel chan interface{}) {
func (suite *Suite) runSuiteNode(node Node) {
if suite.config.DryRun {
suite.currentSpecReport.State = types.SpecStatePassed
return
@ -451,13 +536,13 @@ func (suite *Suite) runSuiteNode(node Node, interruptChannel chan interface{}) {
var err error
switch node.NodeType {
case types.NodeTypeBeforeSuite, types.NodeTypeAfterSuite:
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(node, interruptChannel, "")
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(node, time.Time{}, "")
case types.NodeTypeCleanupAfterSuite:
if suite.config.ParallelTotal > 1 && suite.config.ParallelProcess == 1 {
err = suite.client.BlockUntilNonprimaryProcsHaveFinished()
}
if err == nil {
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(node, interruptChannel, "")
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(node, time.Time{}, "")
}
case types.NodeTypeSynchronizedBeforeSuite:
var data []byte
@ -467,8 +552,9 @@ func (suite *Suite) runSuiteNode(node Node, interruptChannel chan interface{}) {
suite.outputInterceptor.StopInterceptingAndReturnOutput()
suite.outputInterceptor.StartInterceptingOutputAndForwardTo(suite.client)
}
node.Body = func() { data = node.SynchronizedBeforeSuiteProc1Body() }
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(node, interruptChannel, "")
node.Body = func(c SpecContext) { data = node.SynchronizedBeforeSuiteProc1Body(c) }
node.HasContext = node.SynchronizedBeforeSuiteProc1BodyHasContext
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(node, time.Time{}, "")
if suite.config.ParallelTotal > 1 {
suite.currentSpecReport.CapturedStdOutErr += suite.outputInterceptor.StopInterceptingAndReturnOutput()
suite.outputInterceptor.StartInterceptingOutput()
@ -485,19 +571,21 @@ func (suite *Suite) runSuiteNode(node Node, interruptChannel chan interface{}) {
switch proc1State {
case types.SpecStatePassed:
runAllProcs = true
case types.SpecStateFailed, types.SpecStatePanicked:
case types.SpecStateFailed, types.SpecStatePanicked, types.SpecStateTimedout:
err = types.GinkgoErrors.SynchronizedBeforeSuiteFailedOnProc1()
case types.SpecStateInterrupted, types.SpecStateAborted, types.SpecStateSkipped:
suite.currentSpecReport.State = proc1State
}
}
if runAllProcs {
node.Body = func() { node.SynchronizedBeforeSuiteAllProcsBody(data) }
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(node, interruptChannel, "")
node.Body = func(c SpecContext) { node.SynchronizedBeforeSuiteAllProcsBody(c, data) }
node.HasContext = node.SynchronizedBeforeSuiteAllProcsBodyHasContext
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(node, time.Time{}, "")
}
case types.NodeTypeSynchronizedAfterSuite:
node.Body = node.SynchronizedAfterSuiteAllProcsBody
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(node, interruptChannel, "")
node.HasContext = node.SynchronizedAfterSuiteAllProcsBodyHasContext
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(node, time.Time{}, "")
if suite.config.ParallelProcess == 1 {
if suite.config.ParallelTotal > 1 {
err = suite.client.BlockUntilNonprimaryProcsHaveFinished()
@ -509,7 +597,8 @@ func (suite *Suite) runSuiteNode(node Node, interruptChannel chan interface{}) {
}
node.Body = node.SynchronizedAfterSuiteProc1Body
state, failure := suite.runNode(node, interruptChannel, "")
node.HasContext = node.SynchronizedAfterSuiteProc1BodyHasContext
state, failure := suite.runNode(node, time.Time{}, "")
if suite.currentSpecReport.State.Is(types.SpecStatePassed) {
suite.currentSpecReport.State, suite.currentSpecReport.Failure = state, failure
}
@ -543,13 +632,8 @@ func (suite *Suite) runReportAfterSuiteNode(node Node, report types.Report) {
report = report.Add(aggregatedReport)
}
node.Body = func() { node.ReportAfterSuiteBody(report) }
suite.interruptHandler.SetInterruptPlaceholderMessage(formatter.Fiw(0, formatter.COLS,
"{{yellow}}Ginkgo received an interrupt signal but is currently running a ReportAfterSuite node. To avoid an invalid report the ReportAfterSuite node will not be interrupted.{{/}}\n\n{{bold}}The running ReportAfterSuite node is at:\n%s.{{/}}",
node.CodeLocation,
))
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(node, nil, "")
suite.interruptHandler.ClearInterruptPlaceholderMessage()
node.Body = func(SpecContext) { node.ReportAfterSuiteBody(report) }
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(node, time.Time{}, "")
suite.currentSpecReport.EndTime = time.Now()
suite.currentSpecReport.RunTime = suite.currentSpecReport.EndTime.Sub(suite.currentSpecReport.StartTime)
@ -559,14 +643,32 @@ func (suite *Suite) runReportAfterSuiteNode(node Node, report types.Report) {
return
}
func (suite *Suite) runNode(node Node, interruptChannel chan interface{}, text string) (types.SpecState, types.Failure) {
func (suite *Suite) runNode(node Node, specDeadline time.Time, text string) (types.SpecState, types.Failure) {
if node.NodeType.Is(types.NodeTypeCleanupAfterEach | types.NodeTypeCleanupAfterAll | types.NodeTypeCleanupAfterSuite) {
suite.cleanupNodes = suite.cleanupNodes.WithoutNode(node)
}
interruptStatus := suite.interruptHandler.Status()
if interruptStatus.Level == interrupt_handler.InterruptLevelBailOut {
return types.SpecStateSkipped, types.Failure{}
}
if interruptStatus.Level == interrupt_handler.InterruptLevelReportOnly && !node.NodeType.Is(types.NodeTypesAllowedDuringReportInterrupt) {
return types.SpecStateSkipped, types.Failure{}
}
if interruptStatus.Level == interrupt_handler.InterruptLevelCleanupAndReport && !node.NodeType.Is(types.NodeTypesAllowedDuringReportInterrupt|types.NodeTypesAllowedDuringCleanupInterrupt) {
return types.SpecStateSkipped, types.Failure{}
}
suite.selectiveLock.Lock()
suite.currentNode = node
suite.currentNodeStartTime = time.Now()
suite.progressStepCursor = ProgressStepCursor{}
suite.selectiveLock.Unlock()
defer func() {
suite.selectiveLock.Lock()
suite.currentNode = Node{}
suite.currentNodeStartTime = time.Time{}
suite.selectiveLock.Unlock()
}()
if suite.config.EmitSpecProgress && !node.MarkedSuppressProgressReporting {
@ -586,6 +688,49 @@ func (suite *Suite) runNode(node Node, interruptChannel chan interface{}, text s
} else {
failure.FailureNodeContext, failure.FailureNodeContainerIndex = types.FailureNodeInContainer, node.NestingLevel-1
}
var outcome types.SpecState
gracePeriod := suite.config.GracePeriod
if node.GracePeriod >= 0 {
gracePeriod = node.GracePeriod
}
now := time.Now()
deadline := suite.deadline
if deadline.IsZero() || (!specDeadline.IsZero() && specDeadline.Before(deadline)) {
deadline = specDeadline
}
if node.NodeTimeout > 0 && (deadline.IsZero() || deadline.Sub(now) > node.NodeTimeout) {
deadline = now.Add(node.NodeTimeout)
}
if (!deadline.IsZero() && deadline.Before(now)) || interruptStatus.Interrupted() {
//we're out of time already. let's wait for a NodeTimeout if we have it, or GracePeriod if we don't
if node.NodeTimeout > 0 {
deadline = now.Add(node.NodeTimeout)
} else {
deadline = now.Add(gracePeriod)
}
}
if !node.HasContext {
// this maps onto the pre-context behavior:
// - an interrupted node exits immediately. with this, context-less nodes that are in a spec with a SpecTimeout and/or are interrupted by other means will simply exit immediately after the timeout/interrupt
// - cleanup nodes have up to GracePeriod (formerly hard-coded at 30s) to complete before they are interrupted
gracePeriod = 0
}
sc := NewSpecContext(suite)
defer sc.cancel()
suite.selectiveLock.Lock()
suite.currentSpecContext = sc
suite.selectiveLock.Unlock()
var deadlineChannel <-chan time.Time
if !deadline.IsZero() {
deadlineChannel = time.After(deadline.Sub(now))
}
var gracePeriodChannel <-chan time.Time
outcomeC := make(chan types.SpecState)
failureC := make(chan types.Failure)
@ -597,26 +742,125 @@ func (suite *Suite) runNode(node Node, interruptChannel chan interface{}, text s
suite.failer.Panic(types.NewCodeLocationWithStackTrace(2), e)
}
outcome, failureFromRun := suite.failer.Drain()
outcomeC <- outcome
outcomeFromRun, failureFromRun := suite.failer.Drain()
outcomeC <- outcomeFromRun
failureC <- failureFromRun
}()
node.Body()
node.Body(sc)
finished = true
}()
select {
case outcome := <-outcomeC:
failureFromRun := <-failureC
if outcome == types.SpecStatePassed {
return outcome, types.Failure{}
// progress polling timer and channel
var emitProgressNow <-chan time.Time
var progressPoller *time.Timer
var pollProgressAfter, pollProgressInterval = suite.config.PollProgressAfter, suite.config.PollProgressInterval
if node.PollProgressAfter >= 0 {
pollProgressAfter = node.PollProgressAfter
}
if node.PollProgressInterval >= 0 {
pollProgressInterval = node.PollProgressInterval
}
if pollProgressAfter > 0 {
progressPoller = time.NewTimer(pollProgressAfter)
emitProgressNow = progressPoller.C
defer progressPoller.Stop()
}
// now we wait for an outcome, an interrupt, a timeout, or a progress poll
for {
select {
case outcomeFromRun := <-outcomeC:
failureFromRun := <-failureC
if outcome == types.SpecStateInterrupted {
// we've already been interrupted. we just managed to actually exit
// before the grace period elapsed
return outcome, failure
} else if outcome == types.SpecStateTimedout {
// we've already timed out. we just managed to actually exit
// before the grace period elapsed. if we have a failure message we should include it
if outcomeFromRun != types.SpecStatePassed {
failure.Location, failure.ForwardedPanic = failureFromRun.Location, failureFromRun.ForwardedPanic
failure.Message = "This spec timed out and reported the following failure after the timeout:\n\n" + failureFromRun.Message
}
return outcome, failure
}
if outcomeFromRun.Is(types.SpecStatePassed) {
return outcomeFromRun, types.Failure{}
} else {
failure.Message, failure.Location, failure.ForwardedPanic = failureFromRun.Message, failureFromRun.Location, failureFromRun.ForwardedPanic
return outcomeFromRun, failure
}
case <-gracePeriodChannel:
if node.HasContext && outcome.Is(types.SpecStateTimedout) {
report := suite.generateProgressReport(false)
report.Message = "{{bold}}{{orange}}A running node failed to exit in time{{/}}\nGinkgo is moving on but a node has timed out and failed to exit before its grace period elapsed. The node has now leaked and is running in the background.\nHere's a current progress report:"
suite.emitProgressReport(report)
}
return outcome, failure
case <-deadlineChannel:
// we're out of time - the outcome is a timeout and we capture the failure and progress report
outcome = types.SpecStateTimedout
failure.Message, failure.Location = "Timedout", node.CodeLocation
failure.ProgressReport = suite.generateProgressReport(false).WithoutCapturedGinkgoWriterOutput()
failure.ProgressReport.Message = "{{bold}}This is the Progress Report generated when the timeout occurred:{{/}}"
deadlineChannel = nil
// tell the spec to stop. it's important we generate the progress report first to make sure we capture where
// the spec is actually stuck
sc.cancel()
//and now we wait for the grace period
gracePeriodChannel = time.After(gracePeriod)
case <-interruptStatus.Channel:
interruptStatus = suite.interruptHandler.Status()
deadlineChannel = nil // don't worry about deadlines, time's up now
if outcome == types.SpecStateInvalid {
outcome = types.SpecStateInterrupted
failure.Message, failure.Location = interruptStatus.Message(), node.CodeLocation
if interruptStatus.ShouldIncludeProgressReport() {
failure.ProgressReport = suite.generateProgressReport(true).WithoutCapturedGinkgoWriterOutput()
failure.ProgressReport.Message = "{{bold}}This is the Progress Report generated when the interrupt was received:{{/}}"
}
}
var report types.ProgressReport
if interruptStatus.ShouldIncludeProgressReport() {
report = suite.generateProgressReport(false)
}
sc.cancel()
if interruptStatus.Level == interrupt_handler.InterruptLevelBailOut {
if interruptStatus.ShouldIncludeProgressReport() {
report.Message = fmt.Sprintf("{{bold}}{{orange}}%s{{/}}\n{{bold}}{{red}}Final interrupt received{{/}}; Ginkgo will not run any cleanup or reporting nodes and will terminate as soon as possible.\nHere's a current progress report:", interruptStatus.Message())
suite.emitProgressReport(report)
}
return outcome, failure
}
if interruptStatus.ShouldIncludeProgressReport() {
if interruptStatus.Level == interrupt_handler.InterruptLevelCleanupAndReport {
report.Message = fmt.Sprintf("{{bold}}{{orange}}%s{{/}}\nFirst interrupt received; Ginkgo will run any cleanup and reporting nodes but will skip all remaining specs. {{bold}}Interrupt again to skip cleanup{{/}}.\nHere's a current progress report:", interruptStatus.Message())
} else if interruptStatus.Level == interrupt_handler.InterruptLevelReportOnly {
report.Message = fmt.Sprintf("{{bold}}{{orange}}%s{{/}}\nSecond interrupt received; Ginkgo will run any reporting nodes but will skip all remaining specs and cleanup nodes. {{bold}}Interrupt again to bail immediately{{/}}.\nHere's a current progress report:", interruptStatus.Message())
}
suite.emitProgressReport(report)
}
if gracePeriodChannel == nil {
// we haven't given grace yet... so let's
gracePeriodChannel = time.After(gracePeriod)
} else {
// we've already given grace. time's up. now.
return outcome, failure
}
case <-emitProgressNow:
report := suite.generateProgressReport(false)
report.Message = "{{bold}}Automatically polling progress:{{/}}"
suite.emitProgressReport(report)
if pollProgressInterval > 0 {
progressPoller.Reset(pollProgressInterval)
}
}
failure.Message, failure.Location, failure.ForwardedPanic = failureFromRun.Message, failureFromRun.Location, failureFromRun.ForwardedPanic
return outcome, failure
case <-interruptChannel:
failure.Message, failure.Location = suite.interruptHandler.InterruptMessageWithStackTraces(), node.CodeLocation
return types.SpecStateInterrupted, failure
}
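To make the select loop above concrete, a worked example under stated assumptions: a node declares NodeTimeout(1*time.Second) and runs with the configured GracePeriod. When the deadline fires, the outcome becomes Timedout, a progress report is captured, and sc.cancel() signals the body. If the body honors its context and returns within the grace period, the outcomeC case returns the timed-out outcome, folding in any failure the body reported; if it does not, the gracePeriodChannel case emits the "failed to exit in time" report and the goroutine is left leaked in the background.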
}

View File

@ -12,6 +12,7 @@ import (
"io"
"runtime"
"strings"
"time"
"github.com/onsi/ginkgo/v2/formatter"
"github.com/onsi/ginkgo/v2/types"
@ -134,9 +135,11 @@ func (r *DefaultReporter) DidRun(report types.SpecReport) {
denoter = fmt.Sprintf("[%s]", report.LeafNodeType)
}
highlightColor = r.highlightColorForState(report.State)
switch report.State {
case types.SpecStatePassed:
highlightColor, succinctLocationBlock = "{{green}}", v.LT(types.VerbosityLevelVerbose)
succinctLocationBlock = v.LT(types.VerbosityLevelVerbose)
emitGinkgoWriterOutput = (r.conf.AlwaysEmitGinkgoWriter || v.GTE(types.VerbosityLevelVerbose)) && hasGW
if report.LeafNodeType.Is(types.NodeTypesForSuiteLevelNodes) {
if v.GTE(types.VerbosityLevelVerbose) || hasStd || hasEmittableReports {
@ -157,7 +160,6 @@ func (r *DefaultReporter) DidRun(report types.SpecReport) {
stream = false
}
case types.SpecStatePending:
highlightColor = "{{yellow}}"
includeRuntime, emitGinkgoWriterOutput = false, false
if v.Is(types.VerbosityLevelSuccinct) {
header, stream = "P", true
@ -165,20 +167,21 @@ func (r *DefaultReporter) DidRun(report types.SpecReport) {
header, succinctLocationBlock = "P [PENDING]", v.LT(types.VerbosityLevelVeryVerbose)
}
case types.SpecStateSkipped:
highlightColor = "{{cyan}}"
if report.Failure.Message != "" || v.Is(types.VerbosityLevelVeryVerbose) {
header = "S [SKIPPED]"
} else {
header, stream = "S", true
}
case types.SpecStateFailed:
highlightColor, header = "{{red}}", fmt.Sprintf("%s [FAILED]", denoter)
header = fmt.Sprintf("%s [FAILED]", denoter)
case types.SpecStateTimedout:
header = fmt.Sprintf("%s [TIMEDOUT]", denoter)
case types.SpecStatePanicked:
highlightColor, header = "{{magenta}}", fmt.Sprintf("%s! [PANICKED]", denoter)
header = fmt.Sprintf("%s! [PANICKED]", denoter)
case types.SpecStateInterrupted:
highlightColor, header = "{{orange}}", fmt.Sprintf("%s! [INTERRUPTED]", denoter)
header = fmt.Sprintf("%s! [INTERRUPTED]", denoter)
case types.SpecStateAborted:
highlightColor, header = "{{coral}}", fmt.Sprintf("%s! [ABORTED]", denoter)
header = fmt.Sprintf("%s! [ABORTED]", denoter)
}
// Emit stream and return
@ -208,9 +211,7 @@ func (r *DefaultReporter) DidRun(report types.SpecReport) {
//Emit Captured GinkgoWriter Output
if emitGinkgoWriterOutput && hasGW {
r.emitBlock("\n")
r.emitBlock(r.fi(1, "{{gray}}Begin Captured GinkgoWriter Output >>{{/}}"))
r.emitBlock(r.fi(2, "%s", report.CapturedGinkgoWriterOutput))
r.emitBlock(r.fi(1, "{{gray}}<< End Captured GinkgoWriter Output{{/}}"))
r.emitGinkgoWriterOutput(1, report.CapturedGinkgoWriterOutput, 0)
}
if hasEmittableReports {
@ -232,23 +233,87 @@ func (r *DefaultReporter) DidRun(report types.SpecReport) {
// Emit Failure Message
if !report.Failure.IsZero() {
r.emitBlock("\n")
r.emitBlock(r.fi(1, highlightColor+"%s{{/}}", report.Failure.Message))
r.emitBlock(r.fi(1, highlightColor+"In {{bold}}[%s]{{/}}"+highlightColor+" at: {{bold}}%s{{/}}\n", report.Failure.FailureNodeType, report.Failure.Location))
if report.Failure.ForwardedPanic != "" {
r.emitBlock("\n")
r.emitBlock(r.fi(1, highlightColor+"%s{{/}}", report.Failure.ForwardedPanic))
}
r.EmitFailure(1, report.State, report.Failure, false)
}
if r.conf.FullTrace || report.Failure.ForwardedPanic != "" {
if len(report.AdditionalFailures) > 0 {
if v.GTE(types.VerbosityLevelVerbose) {
r.emitBlock("\n")
r.emitBlock(r.fi(1, highlightColor+"Full Stack Trace{{/}}"))
r.emitBlock(r.fi(2, "%s", report.Failure.Location.FullStackTrace))
r.emitBlock(r.fi(1, "{{bold}}There were additional failures detected after the initial failure:{{/}}"))
for i, additionalFailure := range report.AdditionalFailures {
r.EmitFailure(2, additionalFailure.State, additionalFailure.Failure, true)
if i < len(report.AdditionalFailures)-1 {
r.emitBlock(r.fi(2, "{{gray}}%s{{/}}", strings.Repeat("-", 10)))
}
}
} else {
r.emitBlock("\n")
r.emitBlock(r.fi(1, "{{bold}}There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode:{{/}}"))
for _, additionalFailure := range report.AdditionalFailures {
r.emitBlock(r.fi(2, r.highlightColorForState(additionalFailure.State)+"[%s]{{/}} in [%s] at %s",
r.humanReadableState(additionalFailure.State),
additionalFailure.Failure.FailureNodeType,
additionalFailure.Failure.Location,
))
}
}
}
r.emitDelimiter()
}
func (r *DefaultReporter) highlightColorForState(state types.SpecState) string {
switch state {
case types.SpecStatePassed:
return "{{green}}"
case types.SpecStatePending:
return "{{yellow}}"
case types.SpecStateSkipped:
return "{{cyan}}"
case types.SpecStateFailed:
return "{{red}}"
case types.SpecStateTimedout:
return "{{orange}}"
case types.SpecStatePanicked:
return "{{magenta}}"
case types.SpecStateInterrupted:
return "{{orange}}"
case types.SpecStateAborted:
return "{{coral}}"
default:
return "{{gray}}"
}
}
func (r *DefaultReporter) humanReadableState(state types.SpecState) string {
return strings.ToUpper(state.String())
}
func (r *DefaultReporter) EmitFailure(indent uint, state types.SpecState, failure types.Failure, includeState bool) {
highlightColor := r.highlightColorForState(state)
if includeState {
r.emitBlock(r.fi(indent, highlightColor+"[%s]{{/}}", r.humanReadableState(state)))
}
r.emitBlock(r.fi(indent, highlightColor+"%s{{/}}", failure.Message))
r.emitBlock(r.fi(indent, highlightColor+"In {{bold}}[%s]{{/}}"+highlightColor+" at: {{bold}}%s{{/}}\n", failure.FailureNodeType, failure.Location))
if failure.ForwardedPanic != "" {
r.emitBlock("\n")
r.emitBlock(r.fi(indent, highlightColor+"%s{{/}}", failure.ForwardedPanic))
}
if r.conf.FullTrace || failure.ForwardedPanic != "" {
r.emitBlock("\n")
r.emitBlock(r.fi(indent, highlightColor+"Full Stack Trace{{/}}"))
r.emitBlock(r.fi(indent+1, "%s", failure.Location.FullStackTrace))
}
if !failure.ProgressReport.IsZero() {
r.emitBlock("\n")
r.emitProgressReport(indent, false, failure.ProgressReport)
}
}
func (r *DefaultReporter) SuiteDidEnd(report types.Report) {
failures := report.SpecReports.WithState(types.SpecStateFailureStates)
if len(failures) > 0 {
@ -265,6 +330,8 @@ func (r *DefaultReporter) SuiteDidEnd(report types.Report) {
highlightColor, heading = "{{magenta}}", "[PANICKED!]"
case types.SpecStateAborted:
highlightColor, heading = "{{coral}}", "[ABORTED]"
case types.SpecStateTimedout:
highlightColor, heading = "{{orange}}", "[TIMEDOUT]"
case types.SpecStateInterrupted:
highlightColor, heading = "{{orange}}", "[INTERRUPTED]"
}
@ -314,6 +381,161 @@ func (r *DefaultReporter) SuiteDidEnd(report types.Report) {
}
}
func (r *DefaultReporter) EmitProgressReport(report types.ProgressReport) {
r.emitDelimiter()
if report.RunningInParallel {
r.emit(r.f("{{coral}}Progress Report for Ginkgo Process #{{bold}}%d{{/}}\n", report.ParallelProcess))
}
r.emitProgressReport(0, true, report)
r.emitDelimiter()
}
func (r *DefaultReporter) emitProgressReport(indent uint, emitGinkgoWriterOutput bool, report types.ProgressReport) {
if report.Message != "" {
r.emitBlock(r.fi(indent, report.Message+"\n"))
indent += 1
}
if report.LeafNodeText != "" {
subjectIndent := indent
if len(report.ContainerHierarchyTexts) > 0 {
r.emit(r.fi(indent, r.cycleJoin(report.ContainerHierarchyTexts, " ")))
r.emit(" ")
subjectIndent = 0
}
r.emit(r.fi(subjectIndent, "{{bold}}{{orange}}%s{{/}} (Spec Runtime: %s)\n", report.LeafNodeText, report.Time.Sub(report.SpecStartTime).Round(time.Millisecond)))
r.emit(r.fi(indent+1, "{{gray}}%s{{/}}\n", report.LeafNodeLocation))
indent += 1
}
if report.CurrentNodeType != types.NodeTypeInvalid {
r.emit(r.fi(indent, "In {{bold}}{{orange}}[%s]{{/}}", report.CurrentNodeType))
if report.CurrentNodeText != "" && !report.CurrentNodeType.Is(types.NodeTypeIt) {
r.emit(r.f(" {{bold}}{{orange}}%s{{/}}", report.CurrentNodeText))
}
r.emit(r.f(" (Node Runtime: %s)\n", report.Time.Sub(report.CurrentNodeStartTime).Round(time.Millisecond)))
r.emit(r.fi(indent+1, "{{gray}}%s{{/}}\n", report.CurrentNodeLocation))
indent += 1
}
if report.CurrentStepText != "" {
r.emit(r.fi(indent, "At {{bold}}{{orange}}[By Step] %s{{/}} (Step Runtime: %s)\n", report.CurrentStepText, report.Time.Sub(report.CurrentStepStartTime).Round(time.Millisecond)))
r.emit(r.fi(indent+1, "{{gray}}%s{{/}}\n", report.CurrentStepLocation))
indent += 1
}
if indent > 0 {
indent -= 1
}
if emitGinkgoWriterOutput && report.CapturedGinkgoWriterOutput != "" && (report.RunningInParallel || r.conf.Verbosity().LT(types.VerbosityLevelVerbose)) {
r.emit("\n")
r.emitGinkgoWriterOutput(indent, report.CapturedGinkgoWriterOutput, 10)
}
if !report.SpecGoroutine().IsZero() {
r.emit("\n")
r.emit(r.fi(indent, "{{bold}}{{underline}}Spec Goroutine{{/}}\n"))
r.emitGoroutines(indent, report.SpecGoroutine())
}
if len(report.AdditionalReports) > 0 {
r.emit("\n")
r.emitBlock(r.fi(indent, "{{gray}}Begin Additional Progress Reports >>{{/}}"))
for i, additionalReport := range report.AdditionalReports {
r.emit(r.fi(indent+1, additionalReport))
if i < len(report.AdditionalReports)-1 {
r.emitBlock(r.fi(indent+1, "{{gray}}%s{{/}}", strings.Repeat("-", 10)))
}
}
r.emitBlock(r.fi(indent, "{{gray}}<< End Additional Progress Reports{{/}}"))
}
highlightedGoroutines := report.HighlightedGoroutines()
if len(highlightedGoroutines) > 0 {
r.emit("\n")
r.emit(r.fi(indent, "{{bold}}{{underline}}Goroutines of Interest{{/}}\n"))
r.emitGoroutines(indent, highlightedGoroutines...)
}
otherGoroutines := report.OtherGoroutines()
if len(otherGoroutines) > 0 {
r.emit("\n")
r.emit(r.fi(indent, "{{gray}}{{bold}}{{underline}}Other Goroutines{{/}}\n"))
r.emitGoroutines(indent, otherGoroutines...)
}
}
func (r *DefaultReporter) emitGinkgoWriterOutput(indent uint, output string, limit int) {
r.emitBlock(r.fi(indent, "{{gray}}Begin Captured GinkgoWriter Output >>{{/}}"))
if limit == 0 {
r.emitBlock(r.fi(indent+1, "%s", output))
} else {
lines := strings.Split(output, "\n")
if len(lines) <= limit {
r.emitBlock(r.fi(indent+1, "%s", output))
} else {
r.emitBlock(r.fi(indent+1, "{{gray}}...{{/}}"))
for _, line := range lines[len(lines)-limit-1:] {
r.emitBlock(r.fi(indent+1, "%s", line))
}
}
}
r.emitBlock(r.fi(indent, "{{gray}}<< End Captured GinkgoWriter Output{{/}}"))
}
func (r *DefaultReporter) emitGoroutines(indent uint, goroutines ...types.Goroutine) {
for idx, g := range goroutines {
color := "{{gray}}"
if g.HasHighlights() {
color = "{{orange}}"
}
r.emit(r.fi(indent, color+"goroutine %d [%s]{{/}}\n", g.ID, g.State))
for _, fc := range g.Stack {
if fc.Highlight {
r.emit(r.fi(indent, color+"{{bold}}> %s{{/}}\n", fc.Function))
r.emit(r.fi(indent+2, color+"{{bold}}%s:%d{{/}}\n", fc.Filename, fc.Line))
r.emitSource(indent+3, fc)
} else {
r.emit(r.fi(indent+1, "{{gray}}%s{{/}}\n", fc.Function))
r.emit(r.fi(indent+2, "{{gray}}%s:%d{{/}}\n", fc.Filename, fc.Line))
}
}
if idx+1 < len(goroutines) {
r.emit("\n")
}
}
}
func (r *DefaultReporter) emitSource(indent uint, fc types.FunctionCall) {
lines := fc.Source
if len(lines) == 0 {
return
}
lTrim := 100000
for _, line := range lines {
lTrimLine := len(line) - len(strings.TrimLeft(line, " \t"))
if lTrimLine < lTrim && len(line) > 0 {
lTrim = lTrimLine
}
}
if lTrim == 100000 {
lTrim = 0
}
for idx, line := range lines {
if len(line) > lTrim {
line = line[lTrim:]
}
if idx == fc.SourceHighlight {
r.emit(r.fi(indent, "{{bold}}{{orange}}> %s{{/}}\n", line))
} else {
r.emit(r.fi(indent, "| %s\n", line))
}
}
}
/* Emitting to the writer */
func (r *DefaultReporter) emit(s string) {
if len(s) > 0 {

View File

@ -171,8 +171,8 @@ func GenerateJUnitReport(report types.Report, dst string) error {
Classname: report.SuiteDescription,
Status: spec.State.String(),
Time: spec.RunTime.Seconds(),
SystemOut: systemOutForUnstructureReporters(spec),
SystemErr: spec.CapturedGinkgoWriterOutput,
SystemOut: systemOutForUnstructuredReporters(spec),
SystemErr: systemErrForUnstructuredReporters(spec),
}
suite.Tests += 1
@ -191,28 +191,35 @@ func GenerateJUnitReport(report types.Report, dst string) error {
test.Failure = &JUnitFailure{
Message: spec.Failure.Message,
Type: "failed",
Description: fmt.Sprintf("%s\n%s", spec.Failure.Location.String(), spec.Failure.Location.FullStackTrace),
Description: failureDescriptionForUnstructuredReporters(spec),
}
suite.Failures += 1
case types.SpecStateTimedout:
test.Failure = &JUnitFailure{
Message: spec.Failure.Message,
Type: "timedout",
Description: failureDescriptionForUnstructuredReporters(spec),
}
suite.Failures += 1
case types.SpecStateInterrupted:
test.Error = &JUnitError{
Message: "interrupted",
Message: spec.Failure.Message,
Type: "interrupted",
Description: spec.Failure.Message,
Description: failureDescriptionForUnstructuredReporters(spec),
}
suite.Errors += 1
case types.SpecStateAborted:
test.Failure = &JUnitFailure{
Message: spec.Failure.Message,
Type: "aborted",
Description: fmt.Sprintf("%s\n%s", spec.Failure.Location.String(), spec.Failure.Location.FullStackTrace),
Description: failureDescriptionForUnstructuredReporters(spec),
}
suite.Errors += 1
case types.SpecStatePanicked:
test.Error = &JUnitError{
Message: spec.Failure.ForwardedPanic,
Type: "panicked",
Description: fmt.Sprintf("%s\n%s", spec.Failure.Location.String(), spec.Failure.Location.FullStackTrace),
Description: failureDescriptionForUnstructuredReporters(spec),
}
suite.Errors += 1
}
@ -278,7 +285,51 @@ func MergeAndCleanupJUnitReports(sources []string, dst string) ([]string, error)
return messages, f.Close()
}
func systemOutForUnstructureReporters(spec types.SpecReport) string {
func failureDescriptionForUnstructuredReporters(spec types.SpecReport) string {
out := &strings.Builder{}
out.WriteString(spec.Failure.Location.String() + "\n")
out.WriteString(spec.Failure.Location.FullStackTrace)
if !spec.Failure.ProgressReport.IsZero() {
out.WriteString("\n")
NewDefaultReporter(types.ReporterConfig{NoColor: true}, out).EmitProgressReport(spec.Failure.ProgressReport)
}
if len(spec.AdditionalFailures) > 0 {
out.WriteString("\nThere were additional failures detected after the initial failure:\n")
for i, additionalFailure := range spec.AdditionalFailures {
NewDefaultReporter(types.ReporterConfig{NoColor: true}, out).EmitFailure(0, additionalFailure.State, additionalFailure.Failure, true)
if i < len(spec.AdditionalFailures)-1 {
out.WriteString("----------\n")
}
}
}
return out.String()
}
func systemErrForUnstructuredReporters(spec types.SpecReport) string {
out := &strings.Builder{}
gw := spec.CapturedGinkgoWriterOutput
cursor := 0
for _, pr := range spec.ProgressReports {
if cursor < pr.GinkgoWriterOffset {
if pr.GinkgoWriterOffset < len(gw) {
out.WriteString(gw[cursor:pr.GinkgoWriterOffset])
cursor = pr.GinkgoWriterOffset
} else if cursor < len(gw) {
out.WriteString(gw[cursor:])
cursor = len(gw)
}
}
NewDefaultReporter(types.ReporterConfig{NoColor: true}, out).EmitProgressReport(pr)
}
if cursor < len(gw) {
out.WriteString(gw[cursor:])
}
return out.String()
}
func systemOutForUnstructuredReporters(spec types.SpecReport) string {
systemOut := spec.CapturedStdOutErr
if len(spec.ReportEntries) > 0 {
systemOut += "\nReport Entries:\n"

View File

@ -9,11 +9,13 @@ type Reporter interface {
WillRun(report types.SpecReport)
DidRun(report types.SpecReport)
SuiteDidEnd(report types.Report)
EmitProgressReport(progressReport types.ProgressReport)
}
type NoopReporter struct{}
func (n NoopReporter) SuiteWillBegin(report types.Report) {}
func (n NoopReporter) WillRun(report types.SpecReport) {}
func (n NoopReporter) DidRun(report types.SpecReport) {}
func (n NoopReporter) SuiteDidEnd(report types.Report) {}
func (n NoopReporter) SuiteWillBegin(report types.Report) {}
func (n NoopReporter) WillRun(report types.SpecReport) {}
func (n NoopReporter) DidRun(report types.SpecReport) {}
func (n NoopReporter) SuiteDidEnd(report types.Report) {}
func (n NoopReporter) EmitProgressReport(progressReport types.ProgressReport) {}
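// Downstream reporters must now also implement EmitProgressReport. A minimal
// sketch of a conforming custom reporter (countingReporter and its behavior
// are illustrative, not part of Ginkgo; assumes the types package is imported):
//
//	type countingReporter struct{ specs int }
//
//	func (c *countingReporter) SuiteWillBegin(report types.Report)         {}
//	func (c *countingReporter) WillRun(report types.SpecReport)            {}
//	func (c *countingReporter) DidRun(report types.SpecReport)             { c.specs++ }
//	func (c *countingReporter) SuiteDidEnd(report types.Report)            {}
//	func (c *countingReporter) EmitProgressReport(pr types.ProgressReport) {}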

View File

@ -60,20 +60,24 @@ func GenerateTeamcityReport(report types.Report, dst string) error {
}
fmt.Fprintf(f, "##teamcity[testIgnored name='%s' message='%s']\n", name, tcEscape(message))
case types.SpecStateFailed:
details := fmt.Sprintf("%s\n%s", spec.Failure.Location.String(), spec.Failure.Location.FullStackTrace)
details := failureDescriptionForUnstructuredReporters(spec)
fmt.Fprintf(f, "##teamcity[testFailed name='%s' message='failed - %s' details='%s']\n", name, tcEscape(spec.Failure.Message), tcEscape(details))
case types.SpecStatePanicked:
details := fmt.Sprintf("%s\n%s", spec.Failure.Location.String(), spec.Failure.Location.FullStackTrace)
details := failureDescriptionForUnstructuredReporters(spec)
fmt.Fprintf(f, "##teamcity[testFailed name='%s' message='panicked - %s' details='%s']\n", name, tcEscape(spec.Failure.ForwardedPanic), tcEscape(details))
case types.SpecStateTimedout:
details := failureDescriptionForUnstructuredReporters(spec)
fmt.Fprintf(f, "##teamcity[testFailed name='%s' message='timedout - %s' details='%s']\n", name, tcEscape(spec.Failure.Message), tcEscape(details))
case types.SpecStateInterrupted:
fmt.Fprintf(f, "##teamcity[testFailed name='%s' message='interrupted' details='%s']\n", name, tcEscape(spec.Failure.Message))
details := failureDescriptionForUnstructuredReporters(spec)
fmt.Fprintf(f, "##teamcity[testFailed name='%s' message='interrupted - %s' details='%s']\n", name, tcEscape(spec.Failure.Message), tcEscape(details))
case types.SpecStateAborted:
details := fmt.Sprintf("%s\n%s", spec.Failure.Location.String(), spec.Failure.Location.FullStackTrace)
details := failureDescriptionForUnstructuredReporters(spec)
fmt.Fprintf(f, "##teamcity[testFailed name='%s' message='aborted - %s' details='%s']\n", name, tcEscape(spec.Failure.Message), tcEscape(details))
}
fmt.Fprintf(f, "##teamcity[testStdOut name='%s' out='%s']\n", name, tcEscape(systemOutForUnstructureReporters(spec)))
fmt.Fprintf(f, "##teamcity[testStdErr name='%s' out='%s']\n", name, tcEscape(spec.CapturedGinkgoWriterOutput))
fmt.Fprintf(f, "##teamcity[testStdOut name='%s' out='%s']\n", name, tcEscape(systemOutForUnstructuredReporters(spec)))
fmt.Fprintf(f, "##teamcity[testStdErr name='%s' out='%s']\n", name, tcEscape(systemErrForUnstructuredReporters(spec)))
fmt.Fprintf(f, "##teamcity[testFinished name='%s' duration='%d']\n", name, int(spec.RunTime.Seconds()*1000.0))
}
fmt.Fprintf(f, "##teamcity[testSuiteFinished name='%s']\n", tcEscape(report.SuiteDescription))

View File

@ -115,8 +115,10 @@ You cannot nest any other Ginkgo nodes within a ReportAfterSuite node's closure.
You can learn more about ReportAfterSuite here: https://onsi.github.io/ginkgo/#generating-reports-programmatically
You can learn more about Ginkgo's reporting infrastructure, including generating reports with the CLI here: https://onsi.github.io/ginkgo/#generating-machine-readable-reports
*/
func ReportAfterSuite(text string, body func(Report)) bool {
return pushNode(internal.NewReportAfterSuiteNode(text, body, types.NewCodeLocation(1)))
func ReportAfterSuite(text string, body func(Report), args ...interface{}) bool {
combinedArgs := []interface{}{body}
combinedArgs = append(combinedArgs, args...)
return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeReportAfterSuite, text, combinedArgs...))
}
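// Call sites are unchanged in the common case; the variadic args simply let
// decorators flow through to pushNode. A sketch of typical usage (assumes a
// dot-imported Ginkgo suite; the JSON destination is illustrative):
//
//	var _ = ReportAfterSuite("persist suite report", func(report Report) {
//		data, err := json.Marshal(report)
//		if err != nil {
//			return
//		}
//		_ = os.WriteFile("suite-report.json", data, 0o644)
//	})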
func registerReportAfterSuiteNodeForAutogeneratedReports(reporterConfig types.ReporterConfig) {
@ -151,7 +153,8 @@ func registerReportAfterSuiteNodeForAutogeneratedReports(reporterConfig types.Re
if reporterConfig.TeamcityReport != "" {
flags = append(flags, "--teamcity-report")
}
pushNode(internal.NewReportAfterSuiteNode(
pushNode(internal.NewNode(
deprecationTracker, types.NodeTypeReportAfterSuite,
fmt.Sprintf("Autogenerated ReportAfterSuite for %s", strings.Join(flags, " ")),
body,
types.NewCustomCodeLocation("autogenerated by Ginkgo"),

View File

@ -1,6 +1,7 @@
package ginkgo
import (
"context"
"fmt"
"reflect"
"strings"
@ -89,6 +90,8 @@ Subsequent arguments accept any Ginkgo decorators. These are filtered out and t
Each Entry ends up generating an individual Ginkgo It. The body of the It is the Table Body function with the Entry parameters passed in.
If you want to generate interruptible specs, simply write a Table function that accepts a SpecContext as its first argument. You can then decorate individual Entrys with the NodeTimeout and SpecTimeout decorators.
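For instance, this sketch (fetch is a hypothetical helper that honors context cancellation) generates an interruptible table spec:

	DescribeTable("lookups", func(ctx SpecContext, id int) {
		Expect(fetch(ctx, id)).To(Succeed())
	},
		Entry("first record", 1, NodeTimeout(time.Second)),
	)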
You can learn more about Entry here: https://onsi.github.io/ginkgo/#table-specs
*/
func Entry(description interface{}, args ...interface{}) TableEntry {
@ -119,12 +122,16 @@ You can mark a particular entry as pending with XEntry. This is equivalent to X
*/
var XEntry = PEntry
var contextType = reflect.TypeOf(new(context.Context)).Elem()
var specContextType = reflect.TypeOf(new(SpecContext)).Elem()
func generateTable(description string, args ...interface{}) {
cl := types.NewCodeLocation(2)
containerNodeArgs := []interface{}{cl}
entries := []TableEntry{}
var itBody interface{}
var itBodyType reflect.Type
var tableLevelEntryDescription interface{}
tableLevelEntryDescription = func(args ...interface{}) string {
@ -135,6 +142,10 @@ func generateTable(description string, args ...interface{}) {
return "Entry: " + strings.Join(out, ", ")
}
if len(args) == 1 {
exitIfErr(types.GinkgoErrors.MissingParametersForTableFunction(cl))
}
for i, arg := range args {
switch t := reflect.TypeOf(arg); {
case t == nil:
@ -152,6 +163,7 @@ func generateTable(description string, args ...interface{}) {
exitIfErr(types.GinkgoErrors.MultipleEntryBodyFunctionsForTable(cl))
}
itBody = arg
itBodyType = reflect.TypeOf(itBody)
default:
containerNodeArgs = append(containerNodeArgs, arg)
}
@ -164,7 +176,7 @@ func generateTable(description string, args ...interface{}) {
var description string
switch t := reflect.TypeOf(entry.description); {
case t == nil:
err = validateParameters(tableLevelEntryDescription, entry.parameters, "Entry Description function", entry.codeLocation)
err = validateParameters(tableLevelEntryDescription, entry.parameters, "Entry Description function", entry.codeLocation, false)
if err == nil {
description = invokeFunction(tableLevelEntryDescription, entry.parameters)[0].String()
}
@ -173,7 +185,7 @@ func generateTable(description string, args ...interface{}) {
case t == reflect.TypeOf(""):
description = entry.description.(string)
case t.Kind() == reflect.Func && t.NumOut() == 1 && t.Out(0) == reflect.TypeOf(""):
err = validateParameters(entry.description, entry.parameters, "Entry Description function", entry.codeLocation)
err = validateParameters(entry.description, entry.parameters, "Entry Description function", entry.codeLocation, false)
if err == nil {
description = invokeFunction(entry.description, entry.parameters)[0].String()
}
@ -181,17 +193,37 @@ func generateTable(description string, args ...interface{}) {
err = types.GinkgoErrors.InvalidEntryDescription(entry.codeLocation)
}
if err == nil {
err = validateParameters(itBody, entry.parameters, "Table Body function", entry.codeLocation)
}
itNodeArgs := []interface{}{entry.codeLocation}
itNodeArgs = append(itNodeArgs, entry.decorations...)
itNodeArgs = append(itNodeArgs, func() {
if err != nil {
panic(err)
hasContext := false
if itBodyType.NumIn() > 0 {
if itBodyType.In(0).Implements(specContextType) {
hasContext = true
} else if itBodyType.In(0).Implements(contextType) && (len(entry.parameters) == 0 || !reflect.TypeOf(entry.parameters[0]).Implements(contextType)) {
hasContext = true
}
invokeFunction(itBody, entry.parameters)
})
}
if err == nil {
err = validateParameters(itBody, entry.parameters, "Table Body function", entry.codeLocation, hasContext)
}
if hasContext {
itNodeArgs = append(itNodeArgs, func(c SpecContext) {
if err != nil {
panic(err)
}
invokeFunction(itBody, append([]interface{}{c}, entry.parameters...))
})
} else {
itNodeArgs = append(itNodeArgs, func() {
if err != nil {
panic(err)
}
invokeFunction(itBody, entry.parameters)
})
}
pushNode(internal.NewNode(deprecationTracker, types.NodeTypeIt, description, itNodeArgs...))
}
@ -223,9 +255,14 @@ func invokeFunction(function interface{}, parameters []interface{}) []reflect.Va
return reflect.ValueOf(function).Call(inValues)
}
func validateParameters(function interface{}, parameters []interface{}, kind string, cl types.CodeLocation) error {
func validateParameters(function interface{}, parameters []interface{}, kind string, cl types.CodeLocation, hasContext bool) error {
funcType := reflect.TypeOf(function)
limit := funcType.NumIn()
offset := 0
if hasContext {
limit = limit - 1
offset = 1
}
if funcType.IsVariadic() {
limit = limit - 1
}
@ -238,13 +275,13 @@ func validateParameters(function interface{}, parameters []interface{}, kind str
var i = 0
for ; i < limit; i++ {
actual := reflect.TypeOf(parameters[i])
expected := funcType.In(i)
expected := funcType.In(i + offset)
if !(actual == nil) && !actual.AssignableTo(expected) {
return types.GinkgoErrors.IncorrectParameterTypeToTableFunction(i+1, expected, actual, kind, cl)
}
}
if funcType.IsVariadic() {
expected := funcType.In(limit).Elem()
expected := funcType.In(limit + offset).Elem()
for ; i < len(parameters); i++ {
actual := reflect.TypeOf(parameters[i])
if !(actual == nil) && !actual.AssignableTo(expected) {

View File

@ -1,5 +1,4 @@
package types
import (
"fmt"
"os"

View File

@ -28,8 +28,12 @@ type SuiteConfig struct {
FlakeAttempts int
EmitSpecProgress bool
DryRun bool
PollProgressAfter time.Duration
PollProgressInterval time.Duration
Timeout time.Duration
OutputInterceptorMode string
SourceRoots []string
GracePeriod time.Duration
ParallelProcess int
ParallelTotal int
@ -42,6 +46,7 @@ func NewDefaultSuiteConfig() SuiteConfig {
Timeout: time.Hour,
ParallelProcess: 1,
ParallelTotal: 1,
GracePeriod: 30 * time.Second,
}
}
@ -272,8 +277,16 @@ var SuiteConfigFlags = GinkgoFlags{
Usage: "If set, ginkgo will walk the test hierarchy without actually running anything. Best paired with -v."},
{KeyPath: "S.EmitSpecProgress", Name: "progress", SectionKey: "debug",
Usage: "If set, ginkgo will emit progress information as each spec runs to the GinkgoWriter."},
{KeyPath: "S.PollProgressAfter", Name: "poll-progress-after", SectionKey: "debug", UsageDefaultValue: "0",
Usage: "Emit node progress reports periodically if node hasn't completed after this duration."},
{KeyPath: "S.PollProgressInterval", Name: "poll-progress-interval", SectionKey: "debug", UsageDefaultValue: "10s",
Usage: "The rate at which to emit node progress reports after poll-progress-after has elapsed."},
{KeyPath: "S.SourceRoots", Name: "source-root", SectionKey: "debug",
Usage: "The location to look for source code when generating progress reports. You can pass multiple --source-root flags."},
{KeyPath: "S.Timeout", Name: "timeout", SectionKey: "debug", UsageDefaultValue: "1h",
Usage: "Test suite fails if it does not complete within the specified timeout."},
{KeyPath: "S.GracePeriod", Name: "grace-period", SectionKey: "debug", UsageDefaultValue: "30s",
Usage: "When interrupted, Ginkgo will wait for GracePeriod for the current running node to exit before moving on to the next one."},
{KeyPath: "S.OutputInterceptorMode", Name: "output-interceptor-mode", SectionKey: "debug", UsageArgument: "dup, swap, or none",
Usage: "If set, ginkgo will use the specified output interception strategy when running in parallel. Defaults to dup on unix and swap on windows."},
@ -381,6 +394,10 @@ func VetConfig(flagSet GinkgoFlagSet, suiteConfig SuiteConfig, reporterConfig Re
errors = append(errors, GinkgoErrors.DryRunInParallelConfiguration())
}
if suiteConfig.GracePeriod <= 0 {
errors = append(errors, GinkgoErrors.GracePeriodCannotBeZero())
}
if len(suiteConfig.FocusFiles) > 0 {
_, err := ParseFileFilters(suiteConfig.FocusFiles)
if err != nil {

View File

@ -189,20 +189,55 @@ func (g ginkgoErrors) UnknownDecorator(cl CodeLocation, nodeType NodeType, decor
}
}
func (g ginkgoErrors) InvalidBodyTypeForContainer(t reflect.Type, cl CodeLocation, nodeType NodeType) error {
return GinkgoError{
Heading: "Invalid Function",
Message: formatter.F(`[%s] node must be passed {{bold}}func(){{/}} - i.e. functions that take nothing and return nothing. You passed {{bold}}%s{{/}} instead.`, nodeType, t),
CodeLocation: cl,
DocLink: "node-decorators-overview",
}
}
func (g ginkgoErrors) InvalidBodyType(t reflect.Type, cl CodeLocation, nodeType NodeType) error {
mustGet := "{{bold}}func(){{/}}, {{bold}}func(ctx SpecContext){{/}}, or {{bold}}func(ctx context.Context){{/}}"
if nodeType.Is(NodeTypeContainer) {
mustGet = "{{bold}}func(){{/}}"
}
return GinkgoError{
Heading: "Invalid Function",
Message: formatter.F(`[%s] node must be passed {{bold}}func(){{/}} - i.e. functions that take nothing and return nothing.
Message: formatter.F(`[%s] node must be passed `+mustGet+`.
You passed {{bold}}%s{{/}} instead.`, nodeType, t),
CodeLocation: cl,
DocLink: "node-decorators-overview",
}
}
func (g ginkgoErrors) InvalidBodyTypeForSynchronizedBeforeSuiteProc1(t reflect.Type, cl CodeLocation) error {
mustGet := "{{bold}}func() []byte{{/}}, {{bold}}func(ctx SpecContext) []byte{{/}}, or {{bold}}func(ctx context.Context) []byte{{/}}, {{bold}}func(){{/}}, {{bold}}func(ctx SpecContext){{/}}, or {{bold}}func(ctx context.Context){{/}}"
return GinkgoError{
Heading: "Invalid Function",
Message: formatter.F(`[SynchronizedBeforeSuite] node must be passed `+mustGet+` for its first function.
You passed {{bold}}%s{{/}} instead.`, t),
CodeLocation: cl,
DocLink: "node-decorators-overview",
}
}
func (g ginkgoErrors) InvalidBodyTypeForSynchronizedBeforeSuiteAllProcs(t reflect.Type, cl CodeLocation) error {
mustGet := "{{bold}}func(){{/}}, {{bold}}func(ctx SpecContext){{/}}, or {{bold}}func(ctx context.Context){{/}}, {{bold}}func([]byte){{/}}, {{bold}}func(ctx SpecContext, []byte){{/}}, or {{bold}}func(ctx context.Context, []byte){{/}}"
return GinkgoError{
Heading: "Invalid Function",
Message: formatter.F(`[SynchronizedBeforeSuite] node must be passed `+mustGet+` for its second function.
You passed {{bold}}%s{{/}} instead.`, t),
CodeLocation: cl,
DocLink: "node-decorators-overview",
}
}
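// The signatures enumerated above now include context-taking variants. A
// sketch of the fully contextual form (the payload contents are illustrative):
//
//	var _ = SynchronizedBeforeSuite(func(ctx SpecContext) []byte {
//		return []byte("shared-setup") // runs once, on parallel process #1
//	}, func(ctx SpecContext, data []byte) {
//		// runs on every parallel process with the payload from process #1
//	})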
func (g ginkgoErrors) MultipleBodyFunctions(cl CodeLocation, nodeType NodeType) error {
return GinkgoError{
Heading: "Multiple Functions",
Message: formatter.F(`[%s] node must be passed a single {{bold}}func(){{/}} - but more than one was passed in.`, nodeType),
Message: formatter.F(`[%s] node must be passed a single function - but more than one was passed in.`, nodeType),
CodeLocation: cl,
DocLink: "node-decorators-overview",
}
@ -211,12 +246,30 @@ func (g ginkgoErrors) MultipleBodyFunctions(cl CodeLocation, nodeType NodeType)
func (g ginkgoErrors) MissingBodyFunction(cl CodeLocation, nodeType NodeType) error {
return GinkgoError{
Heading: "Missing Functions",
Message: formatter.F(`[%s] node must be passed a single {{bold}}func(){{/}} - but none was passed in.`, nodeType),
Message: formatter.F(`[%s] node must be passed a single function - but none was passed in.`, nodeType),
CodeLocation: cl,
DocLink: "node-decorators-overview",
}
}
func (g ginkgoErrors) InvalidTimeoutOrGracePeriodForNonContextNode(cl CodeLocation, nodeType NodeType) error {
return GinkgoError{
Heading: "Invalid NodeTimeout SpecTimeout, or GracePeriod",
Message: formatter.F(`[%s] was passed NodeTimeout, SpecTimeout, or GracePeriod but does not have a callback that accepts a {{bold}}SpecContext{{/}} or {{bold}}context.Context{{/}}. You must accept a context to enable timeouts and grace periods`, nodeType),
CodeLocation: cl,
DocLink: "spec-timeouts-and-interruptible-nodes",
}
}
func (g ginkgoErrors) InvalidTimeoutOrGracePeriodForNonContextCleanupNode(cl CodeLocation) error {
return GinkgoError{
Heading: "Invalid NodeTimeout SpecTimeout, or GracePeriod",
Message: formatter.F(`[DeferCleanup] was passed NodeTimeout or GracePeriod but does not have a callback that accepts a {{bold}}SpecContext{{/}} or {{bold}}context.Context{{/}}. You must accept a context to enable timeouts and grace periods`),
CodeLocation: cl,
DocLink: "spec-timeouts-and-interruptible-nodes",
}
}
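// In other words, timeout decorators are only valid on nodes whose callbacks
// can observe cancellation. A sketch (doWork is hypothetical and is expected
// to return early once ctx is cancelled):
//
//	It("completes its work in time", func(ctx SpecContext) {
//		Expect(doWork(ctx)).To(Succeed()) // doWork must honor ctx.Done()
//	}, NodeTimeout(time.Second))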
/* Ordered Container errors */
func (g ginkgoErrors) InvalidSerialNodeInNonSerialOrderedContainer(cl CodeLocation, nodeType NodeType) error {
return GinkgoError{
@ -380,6 +433,15 @@ func (g ginkgoErrors) InvalidEntryDescription(cl CodeLocation) error {
}
}
func (g ginkgoErrors) MissingParametersForTableFunction(cl CodeLocation) error {
return GinkgoError{
Heading: "No parameters have been passed to the Table Function",
Message: "The Table Function expected at least 1 parameter",
CodeLocation: cl,
DocLink: "table-specs",
}
}
func (g ginkgoErrors) IncorrectParameterTypeForTable(i int, name string, cl CodeLocation) error {
return GinkgoError{
Heading: "DescribeTable passed incorrect parameter type",
@ -498,6 +560,13 @@ func (g ginkgoErrors) DryRunInParallelConfiguration() error {
}
}
func (g ginkgoErrors) GracePeriodCannotBeZero() error {
return GinkgoError{
Heading: "Ginkgo requires a positive --grace-period.",
Message: "Please set --grace-period to a positive duration. The default is 30s.",
}
}
func (g ginkgoErrors) ConflictingVerbosityConfiguration() error {
return GinkgoError{
Heading: "Conflicting reporter verbosity settings.",
@ -532,3 +601,12 @@ func (g ginkgoErrors) BothRepeatAndUntilItFails() error {
Message: "--until-it-fails directs Ginkgo to rerun specs indefinitely until they fail. --repeat directs Ginkgo to rerun specs a set number of times. You can't set both... which would you like?",
}
}
/* Stack-Trace parsing errors */
func (g ginkgoErrors) FailedToParseStackTrace(message string) error {
return GinkgoError{
Heading: "Failed to Parse Stack Trace",
Message: message,
}
}

View File

@ -50,7 +50,6 @@ func (rev ReportEntryValue) MarshalJSON() ([]byte, error) {
}{
Representation: rev.String(),
}
asJSON, err := json.Marshal(rev.raw)
if err != nil {
return nil, err
@ -98,7 +97,7 @@ type ReportEntry struct {
Value ReportEntryValue
}
// ColorableStringer is an interface that ReportEntry values can satisfy. If they do then ColorableStirng() is used to generate their representation.
// ColorableStringer is an interface that ReportEntry values can satisfy. If they do then ColorableString() is used to generate their representation.
type ColorableStringer interface {
ColorableString() string
}
@ -121,6 +120,8 @@ func (entry ReportEntry) GetRawValue() interface{} {
return entry.Value.GetRawValue()
}
type ReportEntries []ReportEntry
func (re ReportEntries) HasVisibility(visibilities ...ReportEntryVisibility) bool {

View File

@ -165,6 +165,12 @@ type SpecReport struct {
// ReportEntries contains any reports added via `AddReportEntry`
ReportEntries ReportEntries
// ProgressReports contains any progress reports generated during this spec. These can either be manually triggered, or automatically generated by Ginkgo via the PollProgressAfter() decorator
ProgressReports []ProgressReport
// AdditionalFailures contains any failures that occurred after the initial spec failure. These typically occur in cleanup nodes after the initial failure and are only emitted when running in verbose mode.
AdditionalFailures []AdditionalFailure
}
func (report SpecReport) MarshalJSON() ([]byte, error) {
@ -184,9 +190,11 @@ func (report SpecReport) MarshalJSON() ([]byte, error) {
ParallelProcess int
Failure *Failure `json:",omitempty"`
NumAttempts int
CapturedGinkgoWriterOutput string `json:",omitempty"`
CapturedStdOutErr string `json:",omitempty"`
ReportEntries ReportEntries `json:",omitempty"`
CapturedGinkgoWriterOutput string `json:",omitempty"`
CapturedStdOutErr string `json:",omitempty"`
ReportEntries ReportEntries `json:",omitempty"`
ProgressReports []ProgressReport `json:",omitempty"`
AdditionalFailures []AdditionalFailure `json:",omitempty"`
}{
ContainerHierarchyTexts: report.ContainerHierarchyTexts,
ContainerHierarchyLocations: report.ContainerHierarchyLocations,
@ -213,6 +221,12 @@ func (report SpecReport) MarshalJSON() ([]byte, error) {
if len(report.ReportEntries) > 0 {
out.ReportEntries = report.ReportEntries
}
if len(report.ProgressReports) > 0 {
out.ProgressReports = report.ProgressReports
}
if len(report.AdditionalFailures) > 0 {
out.AdditionalFailures = report.AdditionalFailures
}
return json.Marshal(out)
}
@ -379,7 +393,7 @@ type Failure struct {
// FailureNodeContext - one of three contexts describing the node in which the failure occurred:
// FailureNodeIsLeafNode means the failure occurred in the leaf node of the associated SpecReport. None of the other FailureNode fields will be populated
// FailureNodeAtTopLevel means the failure occurred in a non-leaf node that is defined at the top-level of the spec (i.e. not in a container). FailureNodeType and FailureNodeLocation will be populated.
// FailureNodeInContainer means the failure occurred in a non-leaf node that is defined within a container. FailureNodeType, FailureNodeLocaiton, and FailureNodeContainerIndex will be populated.
// FailureNodeInContainer means the failure occurred in a non-leaf node that is defined within a container. FailureNodeType, FailureNodeLocation, and FailureNodeContainerIndex will be populated.
//
// FailureNodeType will contain the NodeType of the node in which the failure occurred.
// FailureNodeLocation will contain the CodeLocation of the node in which the failure occurred.
@ -388,10 +402,13 @@ type Failure struct {
FailureNodeType NodeType
FailureNodeLocation CodeLocation
FailureNodeContainerIndex int
//ProgressReport is populated if the spec was interrupted or timed out
ProgressReport ProgressReport
}
func (f Failure) IsZero() bool {
return f == Failure{}
return f.Message == "" && (f.Location == CodeLocation{})
}
// FailureNodeContext captures the location context for the node containing the failing line of code
@ -424,6 +441,14 @@ func (fnc FailureNodeContext) MarshalJSON() ([]byte, error) {
return fncEnumSupport.MarshJSON(uint(fnc))
}
// AdditionalFailure captures any additional failures that occur after the initial failure of a spec
// these typically occur in cleanup nodes after the spec has failed.
// We can't simply use Failure as we want to track the SpecState to know what kind of failure this is
type AdditionalFailure struct {
State SpecState
Failure Failure
}
// SpecState captures the state of a spec
// To determine if a given `state` represents a failure state, use `state.Is(SpecStateFailureStates)`
type SpecState uint
@ -438,6 +463,7 @@ const (
SpecStateAborted
SpecStatePanicked
SpecStateInterrupted
SpecStateTimedout
)
var ssEnumSupport = NewEnumSupport(map[uint]string{
@ -449,6 +475,7 @@ var ssEnumSupport = NewEnumSupport(map[uint]string{
uint(SpecStateAborted): "aborted",
uint(SpecStatePanicked): "panicked",
uint(SpecStateInterrupted): "interrupted",
uint(SpecStateTimedout): "timedout",
})
func (ss SpecState) String() string {
@ -463,12 +490,113 @@ func (ss SpecState) MarshalJSON() ([]byte, error) {
return ssEnumSupport.MarshJSON(uint(ss))
}
var SpecStateFailureStates = SpecStateFailed | SpecStateAborted | SpecStatePanicked | SpecStateInterrupted
var SpecStateFailureStates = SpecStateFailed | SpecStateTimedout | SpecStateAborted | SpecStatePanicked | SpecStateInterrupted
func (ss SpecState) Is(states SpecState) bool {
return ss&states != 0
}
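// Because SpecStateFailureStates now includes SpecStateTimedout, existing
// bitmask checks pick up timeouts automatically. A sketch (report is any
// SpecReport):
//
//	if report.State.Is(types.SpecStateFailureStates) {
//		// covers failed, timedout, aborted, panicked, and interrupted specs
//	}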
// ProgressReport captures the progress of the current spec. It is, effectively, a structured Ginkgo-aware stack trace
type ProgressReport struct {
Message string
ParallelProcess int
RunningInParallel bool
Time time.Time
ContainerHierarchyTexts []string
LeafNodeText string
LeafNodeLocation CodeLocation
SpecStartTime time.Time
CurrentNodeType NodeType
CurrentNodeText string
CurrentNodeLocation CodeLocation
CurrentNodeStartTime time.Time
CurrentStepText string
CurrentStepLocation CodeLocation
CurrentStepStartTime time.Time
AdditionalReports []string
CapturedGinkgoWriterOutput string `json:",omitempty"`
GinkgoWriterOffset int
Goroutines []Goroutine
}
func (pr ProgressReport) IsZero() bool {
return pr.CurrentNodeType == NodeTypeInvalid
}
func (pr ProgressReport) SpecGoroutine() Goroutine {
for _, goroutine := range pr.Goroutines {
if goroutine.IsSpecGoroutine {
return goroutine
}
}
return Goroutine{}
}
func (pr ProgressReport) HighlightedGoroutines() []Goroutine {
out := []Goroutine{}
for _, goroutine := range pr.Goroutines {
if goroutine.IsSpecGoroutine || !goroutine.HasHighlights() {
continue
}
out = append(out, goroutine)
}
return out
}
func (pr ProgressReport) OtherGoroutines() []Goroutine {
out := []Goroutine{}
for _, goroutine := range pr.Goroutines {
if goroutine.IsSpecGoroutine || goroutine.HasHighlights() {
continue
}
out = append(out, goroutine)
}
return out
}
func (pr ProgressReport) WithoutCapturedGinkgoWriterOutput() ProgressReport {
out := pr
out.CapturedGinkgoWriterOutput = ""
return out
}
type Goroutine struct {
ID uint64
State string
Stack []FunctionCall
IsSpecGoroutine bool
}
func (g Goroutine) IsZero() bool {
return g.ID == 0
}
func (g Goroutine) HasHighlights() bool {
for _, fc := range g.Stack {
if fc.Highlight {
return true
}
}
return false
}
type FunctionCall struct {
Function string
Filename string
Line int
Highlight bool `json:",omitempty"`
Source []string `json:",omitempty"`
SourceHighlight int `json:",omitempty"`
}
// NodeType captures the type of a given Ginkgo Node
type NodeType uint
@ -503,6 +631,8 @@ const (
var NodeTypesForContainerAndIt = NodeTypeContainer | NodeTypeIt
var NodeTypesForSuiteLevelNodes = NodeTypeBeforeSuite | NodeTypeSynchronizedBeforeSuite | NodeTypeAfterSuite | NodeTypeSynchronizedAfterSuite | NodeTypeReportAfterSuite | NodeTypeCleanupAfterSuite
var NodeTypesAllowedDuringCleanupInterrupt = NodeTypeAfterEach | NodeTypeJustAfterEach | NodeTypeAfterAll | NodeTypeAfterSuite | NodeTypeSynchronizedAfterSuite | NodeTypeCleanupAfterEach | NodeTypeCleanupAfterAll | NodeTypeCleanupAfterSuite
var NodeTypesAllowedDuringReportInterrupt = NodeTypeReportBeforeEach | NodeTypeReportAfterEach | NodeTypeReportAfterSuite
var ntEnumSupport = NewEnumSupport(map[uint]string{
uint(NodeTypeInvalid): "INVALID NODE TYPE",
@ -521,8 +651,8 @@ var ntEnumSupport = NewEnumSupport(map[uint]string{
uint(NodeTypeReportBeforeEach): "ReportBeforeEach",
uint(NodeTypeReportAfterEach): "ReportAfterEach",
uint(NodeTypeReportAfterSuite): "ReportAfterSuite",
uint(NodeTypeCleanupInvalid): "INVALID CLEANUP NODE",
uint(NodeTypeCleanupAfterEach): "DeferCleanup",
uint(NodeTypeCleanupInvalid): "DeferCleanup",
uint(NodeTypeCleanupAfterEach): "DeferCleanup (Each)",
uint(NodeTypeCleanupAfterAll): "DeferCleanup (All)",
uint(NodeTypeCleanupAfterSuite): "DeferCleanup (Suite)",
})

View File

@ -1,3 +1,3 @@
package types
const VERSION = "2.1.6"
const VERSION = "2.3.1"

View File

@ -1,3 +1,56 @@
## 1.22.0
### Features
Several improvements have been made to `Eventually` and `Consistently` in this and the most recent releases:
- Eventually and Consistently can take a context.Context [65c01bc]
This enables integration with Ginkgo 2.3.0's interruptible nodes and node timeouts.
- Eventually and Consistently that are passed a SpecContext can provide reports when an interrupt occurs [0d063c9]
- Eventually/Consistently will forward an attached context to functions that ask for one [e2091c5]
- Eventually/Consistently supports passing arguments to functions via WithArguments() [a2dc7c3]
- Eventually and Consistently can now be stopped early with StopTrying(message) and StopTrying(message).Now() [52976bb]
These improvements are all documented in [Gomega's docs](https://onsi.github.io/gomega/#making-asynchronous-assertions)
## Fixes
## Maintenance
## 1.21.1
### Features
- Eventually and Consistently that are passed a SpecContext can provide reports when an interrupt occurs [0d063c9]
## 1.21.0
### Features
- Eventually and Consistently can take a context.Context [65c01bc]
This enables integration with Ginkgo 2.3.0's interruptible nodes and node timeouts.
- Introduces Eventually.Within.ProbeEvery with tests and documentation (#591) [f633800]
- New BeKeyOf matcher with documentation and unit tests (#590) [fb586b3]
## Fixes
- Cover the entire gmeasure suite with leak detection [8c54344]
- Fix gmeasure leak [119d4ce]
- Ignore new Ginkgo ProgressSignal goroutine in gleak [ba548e2]
## Maintenance
- Fixes crashes on newer Ruby 3 installations by upgrading github-pages gem dependency (#596) [12469a0]
## 1.20.2
## Fixes
- label specs that rely on remote access; bump timeout on short-circuit test to make it less flaky [35eeadf]
- gexec: allow more headroom for SIGABRT-related unit tests (#581) [5b78f40]
- Enable reading from a closed gbytes.Buffer (#575) [061fd26]
## Maintenance
- Bump github.com/onsi/ginkgo/v2 from 2.1.5 to 2.1.6 (#583) [55d895b]
- Bump github.com/onsi/ginkgo/v2 from 2.1.4 to 2.1.5 (#582) [346de7c]
## 1.20.1
## Fixes

View File

@ -1,7 +1,13 @@
A Gomega release is a tagged sha and a GitHub release. To cut a release:
1. Ensure CHANGELOG.md is up to date.
- Use `git log --pretty=format:'- %s [%h]' HEAD...vX.X.X` to list all the commits since the last release
- Use
```bash
LAST_VERSION=$(git tag --sort=version:refname | tail -n1)
CHANGES=$(git log --pretty=format:'- %s [%h]' HEAD...$LAST_VERSION)
echo -e "## NEXT\n\n$CHANGES\n\n### Features\n\n## Fixes\n\n## Maintenance\n\n$(cat CHANGELOG.md)" > CHANGELOG.md
```
to update the changelog
- Categorize the changes into
- Breaking Changes (requires a major version)
- New Features (minor version)

View File

@ -22,7 +22,7 @@ import (
"github.com/onsi/gomega/types"
)
const GOMEGA_VERSION = "1.20.1"
const GOMEGA_VERSION = "1.22.0"
const nilGomegaPanic = `You are trying to make an assertion, but haven't registered Gomega's fail handler.
If you're using Ginkgo then you probably forgot to put your assertion in an It().
@ -233,7 +233,7 @@ func ExpectWithOffset(offset int, actual interface{}, extra ...interface{}) Asse
Eventually enables making assertions on asynchronous behavior.
Eventually checks that an assertion *eventually* passes. Eventually blocks when called and attempts an assertion periodically until it passes or a timeout occurs. Both the timeout and polling interval are configurable as optional arguments.
The first optional argument is the timeout (which defaults to 1s), the second is the polling interval (which defaults to 10ms). Both intervals can be specified as time.Duration, parsable duration strings or floats/integers (in which case they are interpreted as seconds).
The first optional argument is the timeout (which defaults to 1s), the second is the polling interval (which defaults to 10ms). Both intervals can be specified as time.Duration, parsable duration strings or floats/integers (in which case they are interpreted as seconds). In addition an optional context.Context can be passed in - Eventually will keep trying until either the timeout expires or the context is cancelled, whichever comes first.
Eventually works with any Gomega compatible matcher and supports making assertions against three categories of actual value:
@ -266,7 +266,7 @@ this will trigger Go's race detector as the goroutine polling via Eventually wil
**Category 2: Make Eventually assertions on functions**
Eventually can be passed functions that **take no arguments** and **return at least one value**. When configured this way, Eventually will poll the function repeatedly and pass the first returned value to the matcher.
Eventually can be passed functions that **return at least one value**. When configured this way, Eventually will poll the function repeatedly and pass the first returned value to the matcher.
For example:
@ -286,7 +286,27 @@ Then
will pass only if and when the returned error is nil *and* the returned string satisfies the matcher.
It is important to note that the function passed into Eventually is invoked *synchronously* when polled. Eventually does not (in fact, it cannot) kill the function if it takes longer to return than Eventually's configured timeout. You should design your functions with this in mind.
Eventually can also accept functions that take arguments, however you must provide those arguments using .WithArguments(). For example, consider a function that takes a user-id and makes a network request to fetch a full name:
func FetchFullName(userId int) (string, error)
You can poll this function like so:
Eventually(FetchFullName).WithArguments(1138).Should(Equal("Wookie"))
It is important to note that the function passed into Eventually is invoked *synchronously* when polled. Eventually does not (in fact, it cannot) kill the function if it takes longer to return than Eventually's configured timeout. A common practice here is to use a context. Here's an example that combines Ginkgo's spec timeout support with Eventually:
It("fetches the correct count", func(ctx SpecContext) {
Eventually(func() int {
return client.FetchCount(ctx, "/users")
}, ctx).Should(BeNumerically(">=", 17))
}, SpecTimeout(time.Second))
You can also use Eventually().WithContext(ctx) to pass in the context. Passed-in contexts play nicely with passed-in arguments as long as the context appears first. You can rewrite the above example as:
It("fetches the correct count", func(ctx SpecContext) {
Eventually(client.FetchCount).WithContext(ctx).WithArguments("/users").Should(BeNumerically(">=", 17))
}, SpecTimeout(time.Second))
Either way the context passed to Eventually is also passed to the underlying function. Now, when Ginkgo cancels the context both the FetchCount client and Gomega will be informed and can exit.
**Category 3: Making assertions _in_ the function passed into Eventually**
@ -316,12 +336,28 @@ For example:
will rerun the function until all assertions pass.
Calling `Eventually` with a timeout interval (and an optional polling interval) is the same as calling
`Eventually(...).WithTimeout` or `Eventually(...).WithTimeout(...).WithPolling`.
You can also pass additional arguments to functions that take a Gomega. The only rule is that the Gomega argument must be first. If you also want to pass the context attached to Eventually you must ensure that is the second argument. For example:
Eventually(func(g Gomega, ctx context.Context, path string, expected ...string){
tok, err := client.GetToken(ctx)
g.Expect(err).NotTo(HaveOccurred())
elements, err := client.Fetch(ctx, tok, path)
g.Expect(err).NotTo(HaveOccurred())
g.Expect(elements).To(ConsistOf(expected))
}).WithContext(ctx).WithArguments("/names", "Joe", "Jane", "Sam").Should(Succeed())
Finally, in addition to passing timeouts and a context to Eventually you can be more explicit with Eventually's chaining configuration methods:
Eventually(..., "1s", "2s", ctx).Should(...)
is equivalent to
Eventually(...).WithTimeout(time.Second).WithPolling(2*time.Second).WithContext(ctx).Should(...)
*/
func Eventually(actual interface{}, intervals ...interface{}) AsyncAssertion {
func Eventually(actual interface{}, args ...interface{}) AsyncAssertion {
ensureDefaultGomegaIsConfigured()
return Default.Eventually(actual, intervals...)
return Default.Eventually(actual, args...)
}
// EventuallyWithOffset operates like Eventually but takes an additional
@ -333,9 +369,9 @@ func Eventually(actual interface{}, intervals ...interface{}) AsyncAssertion {
// `EventuallyWithOffset` specifying a timeout interval (and an optional polling interval) are
// the same as `Eventually(...).WithOffset(...).WithTimeout` or
// `Eventually(...).WithOffset(...).WithTimeout(...).WithPolling`.
func EventuallyWithOffset(offset int, actual interface{}, intervals ...interface{}) AsyncAssertion {
func EventuallyWithOffset(offset int, actual interface{}, args ...interface{}) AsyncAssertion {
ensureDefaultGomegaIsConfigured()
return Default.EventuallyWithOffset(offset, actual, intervals...)
return Default.EventuallyWithOffset(offset, actual, args...)
}
/*
@ -343,7 +379,7 @@ Consistently, like Eventually, enables making assertions on asynchronous behavio
Consistently blocks when called for a specified duration. During that duration Consistently repeatedly polls its matcher and ensures that it is satisfied. If the matcher is consistently satisfied, then Consistently will pass. Otherwise Consistently will fail.
Both the total waiting duration and the polling interval are configurable as optional arguments. The first optional argument is the duration that Consistently will run for (defaults to 100ms), and the second argument is the polling interval (defaults to 10ms). As with Eventually, these intervals can be passed in as time.Duration, parsable duration strings or an integer or float number of seconds.
Both the total waiting duration and the polling interval are configurable as optional arguments. The first optional argument is the duration that Consistently will run for (defaults to 100ms), and the second argument is the polling interval (defaults to 10ms). As with Eventually, these intervals can be passed in as time.Duration, parsable duration strings or an integer or float number of seconds. You can also pass in an optional context.Context - Consistently will exit early (with a failure) if the context is cancelled before the waiting duration expires.
Consistently accepts the same three categories of actual as Eventually, check the Eventually docs to learn more.
@ -353,9 +389,9 @@ Consistently is useful in cases where you want to assert that something *does no
This will block for 200 milliseconds and repeatedly check the channel and ensure nothing has been received.
*/
func Consistently(actual interface{}, intervals ...interface{}) AsyncAssertion {
func Consistently(actual interface{}, args ...interface{}) AsyncAssertion {
ensureDefaultGomegaIsConfigured()
return Default.Consistently(actual, intervals...)
return Default.Consistently(actual, args...)
}
// ConsistentlyWithOffset operates like Consistently but takes an additional
@ -364,11 +400,44 @@ func Consistently(actual interface{}, intervals ...interface{}) AsyncAssertion {
//
// `ConsistentlyWithOffset` is the same as `Consistently(...).WithOffset` and
// optional `WithTimeout` and `WithPolling`.
func ConsistentlyWithOffset(offset int, actual interface{}, intervals ...interface{}) AsyncAssertion {
func ConsistentlyWithOffset(offset int, actual interface{}, args ...interface{}) AsyncAssertion {
ensureDefaultGomegaIsConfigured()
return Default.ConsistentlyWithOffset(offset, actual, intervals...)
return Default.ConsistentlyWithOffset(offset, actual, args...)
}
/*
StopTrying can be used to signal to Eventually and Consistently that the polled function will not change
and that they should stop trying. In the case of Eventually, if a match does not occur in this final iteration then a failure will result. In the case of Consistently, as long as this last iteration satisfies the match, the assertion will be considered successful.
You can send the StopTrying signal either by returning StopTrying("message") as an error from your passed-in function _or_ by calling StopTrying("message").Now() to trigger a panic and end execution.
Here are a couple of examples. This is how you might use StopTrying() as an error to signal that Eventually should stop:
playerIndex, numPlayers := 0, 11
Eventually(func() (string, error) {
name := client.FetchPlayer(playerIndex)
playerIndex += 1
if playerIndex == numPlayers {
return name, StopTrying("No more players left")
} else {
return name, nil
}
}).Should(Equal("Patrick Mahomes"))
Note that the final `name` returned alongside `StopTrying()` will be processed.
And here's an example where `StopTrying().Now()` is called to halt execution immediately:
Eventually(func() []string {
names, err := client.FetchAllPlayers()
if err == client.IRRECOVERABLE_ERROR {
StopTrying("Irrecoverable error occurred").Now()
}
return names
}).Should(ContainElement("Patrick Mahomes"))
*/
var StopTrying = internal.StopTrying
// SetDefaultEventuallyTimeout sets the default timeout duration for Eventually. Eventually will repeatedly poll your condition until it succeeds, or until this timeout elapses.
func SetDefaultEventuallyTimeout(t time.Duration) {
Default.SetDefaultEventuallyTimeout(t)

View File

@ -1,15 +1,46 @@
package internal
import (
"errors"
"context"
"fmt"
"reflect"
"runtime"
"sync"
"time"
"github.com/onsi/gomega/types"
)
type StopTryingError interface {
error
Now()
wasViaPanic() bool
}
type stopTryingError struct {
message string
viaPanic bool
}
func (s *stopTryingError) Error() string {
return s.message
}
func (s *stopTryingError) Now() {
s.viaPanic = true
panic(s)
}
func (s *stopTryingError) wasViaPanic() bool {
return s.viaPanic
}
var stopTryingErrorType = reflect.TypeOf(&stopTryingError{})
var StopTrying = func(message string) StopTryingError {
return &stopTryingError{message: message}
}
type AsyncAssertionType uint
const (
@ -17,71 +48,43 @@ const (
AsyncAssertionTypeConsistently
)
func (at AsyncAssertionType) String() string {
switch at {
case AsyncAssertionTypeEventually:
return "Eventually"
case AsyncAssertionTypeConsistently:
return "Consistently"
}
return "INVALID ASYNC ASSERTION TYPE"
}
type AsyncAssertion struct {
asyncType AsyncAssertionType
actualIsFunc bool
actualValue interface{}
actualFunc func() ([]reflect.Value, error)
actualIsFunc bool
actual interface{}
argsToForward []interface{}
timeoutInterval time.Duration
pollingInterval time.Duration
ctx context.Context
offset int
g *Gomega
}
func NewAsyncAssertion(asyncType AsyncAssertionType, actualInput interface{}, g *Gomega, timeoutInterval time.Duration, pollingInterval time.Duration, offset int) *AsyncAssertion {
func NewAsyncAssertion(asyncType AsyncAssertionType, actualInput interface{}, g *Gomega, timeoutInterval time.Duration, pollingInterval time.Duration, ctx context.Context, offset int) *AsyncAssertion {
out := &AsyncAssertion{
asyncType: asyncType,
timeoutInterval: timeoutInterval,
pollingInterval: pollingInterval,
offset: offset,
ctx: ctx,
g: g,
}
switch actualType := reflect.TypeOf(actualInput); {
case actualInput == nil || actualType.Kind() != reflect.Func:
out.actualValue = actualInput
case actualType.NumIn() == 0 && actualType.NumOut() > 0:
out.actual = actualInput
if actualInput != nil && reflect.TypeOf(actualInput).Kind() == reflect.Func {
out.actualIsFunc = true
out.actualFunc = func() ([]reflect.Value, error) {
return reflect.ValueOf(actualInput).Call([]reflect.Value{}), nil
}
case actualType.NumIn() == 1 && actualType.In(0).Implements(reflect.TypeOf((*types.Gomega)(nil)).Elem()):
out.actualIsFunc = true
out.actualFunc = func() (values []reflect.Value, err error) {
var assertionFailure error
assertionCapturingGomega := NewGomega(g.DurationBundle).ConfigureWithFailHandler(func(message string, callerSkip ...int) {
skip := 0
if len(callerSkip) > 0 {
skip = callerSkip[0]
}
_, file, line, _ := runtime.Caller(skip + 1)
assertionFailure = fmt.Errorf("Assertion in callback at %s:%d failed:\n%s", file, line, message)
panic("stop execution")
})
defer func() {
if actualType.NumOut() == 0 {
if assertionFailure == nil {
values = []reflect.Value{reflect.Zero(reflect.TypeOf((*error)(nil)).Elem())}
} else {
values = []reflect.Value{reflect.ValueOf(assertionFailure)}
}
} else {
err = assertionFailure
}
if e := recover(); e != nil && assertionFailure == nil {
panic(e)
}
}()
values = reflect.ValueOf(actualInput).Call([]reflect.Value{reflect.ValueOf(assertionCapturingGomega)})
return
}
default:
msg := fmt.Sprintf("The function passed to Gomega's async assertions should either take no arguments and return values, or take a single Gomega interface that it can use to make assertions within the body of the function. When taking a Gomega interface the function can optionally return values or return nothing. The function you passed takes %d arguments and returns %d values.", actualType.NumIn(), actualType.NumOut())
g.Fail(msg, offset+4)
}
return out
@ -102,6 +105,26 @@ func (assertion *AsyncAssertion) WithPolling(interval time.Duration) types.Async
return assertion
}
func (assertion *AsyncAssertion) Within(timeout time.Duration) types.AsyncAssertion {
assertion.timeoutInterval = timeout
return assertion
}
func (assertion *AsyncAssertion) ProbeEvery(interval time.Duration) types.AsyncAssertion {
assertion.pollingInterval = interval
return assertion
}
func (assertion *AsyncAssertion) WithContext(ctx context.Context) types.AsyncAssertion {
assertion.ctx = ctx
return assertion
}
func (assertion *AsyncAssertion) WithArguments(argsToForward ...interface{}) types.AsyncAssertion {
assertion.argsToForward = argsToForward
return assertion
}
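// Chained together, the new builder methods read naturally. A sketch
// (fetchCount is a hypothetical func(ctx context.Context, path string) int):
//
//	Eventually(fetchCount).
//		WithContext(ctx).
//		WithArguments("/users").
//		Within(5 * time.Second).
//		ProbeEvery(100 * time.Millisecond).
//		Should(BeNumerically(">=", 17))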
func (assertion *AsyncAssertion) Should(matcher types.GomegaMatcher, optionalDescription ...interface{}) bool {
assertion.g.THelper()
vetOptionalDescription("Asynchronous assertion", optionalDescription...)
@ -126,50 +149,199 @@ func (assertion *AsyncAssertion) buildDescription(optionalDescription ...interfa
return fmt.Sprintf(optionalDescription[0].(string), optionalDescription[1:]...) + "\n"
}
func (assertion *AsyncAssertion) pollActual() (interface{}, error) {
if !assertion.actualIsFunc {
return assertion.actualValue, nil
}
func (assertion *AsyncAssertion) processReturnValues(values []reflect.Value) (interface{}, error, StopTryingError) {
var err error
var stopTrying StopTryingError
values, err := assertion.actualFunc()
if err != nil {
return nil, err
if len(values) == 0 {
return nil, fmt.Errorf("No values were returned by the function passed to Gomega"), stopTrying
}
extras := []interface{}{nil}
for _, value := range values[1:] {
extras = append(extras, value.Interface())
actual := values[0].Interface()
if actual != nil && reflect.TypeOf(actual) == stopTryingErrorType {
stopTrying = actual.(StopTryingError)
}
success, message := vetActuals(extras, 0)
if !success {
return nil, errors.New(message)
for i, extraValue := range values[1:] {
extra := extraValue.Interface()
if extra == nil {
continue
}
extraType := reflect.TypeOf(extra)
if extraType == stopTryingErrorType {
stopTrying = extra.(StopTryingError)
continue
}
zero := reflect.Zero(extraType).Interface()
if reflect.DeepEqual(extra, zero) {
continue
}
if err == nil {
err = fmt.Errorf("Unexpected non-nil/non-zero argument at index %d:\n\t<%T>: %#v", i+1, extra, extra)
}
}
return values[0].Interface(), nil
return actual, err, stopTrying
}
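
In caller terms, processReturnValues means a polled function's first return value is what the matcher sees, while any non-zero trailing return (typically an error) fails that single poll and forces a retry. A sketch, assuming a test with gomega dot-imported and fmt available:

```go
counter := 0
Eventually(func() (int, error) {
	counter++
	if counter < 3 {
		// A non-nil trailing error fails this poll; Eventually retries.
		return counter, fmt.Errorf("not ready yet (attempt %d)", counter)
	}
	// Zero-valued trailing returns are ignored; only the int is matched.
	return counter, nil
}).Should(BeNumerically(">=", 3))
```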
func (assertion *AsyncAssertion) matcherMayChange(matcher types.GomegaMatcher, value interface{}) bool {
if assertion.actualIsFunc {
return true
var gomegaType = reflect.TypeOf((*types.Gomega)(nil)).Elem()
var contextType = reflect.TypeOf(new(context.Context)).Elem()
func (assertion *AsyncAssertion) invalidFunctionError(t reflect.Type) error {
return fmt.Errorf(`The function passed to %s had an invalid signature of %s. Functions passed to %s must either:
(a) have return values or
(b) take a Gomega interface as their first argument and use that Gomega instance to make assertions.
You can learn more at https://onsi.github.io/gomega/#eventually
`, assertion.asyncType, t, assertion.asyncType)
}
func (assertion *AsyncAssertion) noConfiguredContextForFunctionError() error {
return fmt.Errorf(`The function passed to %s requested a context.Context, but no context has been provided. Please pass one in using %s().WithContext().
You can learn more at https://onsi.github.io/gomega/#eventually
`, assertion.asyncType, assertion.asyncType)
}
func (assertion *AsyncAssertion) argumentMismatchError(t reflect.Type, numProvided int) error {
have := "have"
if numProvided == 1 {
have = "has"
}
return types.MatchMayChangeInTheFuture(matcher, value)
return fmt.Errorf(`The function passed to %s has signature %s and takes %d arguments, but %d %s been provided. Please use %s().WithArguments() to pass the correct set of arguments.
You can learn more at https://onsi.github.io/gomega/#eventually
`, assertion.asyncType, t, t.NumIn(), numProvided, have, assertion.asyncType)
}
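
The count check that feeds argumentMismatchError compares the function's parameter list (after any leading Gomega and context parameters) against what WithArguments supplied. A sketch of both sides (ping is illustrative):

```go
ping := func(host string, attempts int) error {
	return nil // hypothetical probe
}

// OK: two forwarded arguments for two parameters.
Eventually(ping).WithArguments("10.0.0.1", 3).Should(Succeed())

// Would fail immediately with the argument-mismatch error above,
// since ping takes 2 arguments but only 1 has been provided:
//   Eventually(ping).WithArguments("10.0.0.1").Should(Succeed())
```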
func (assertion *AsyncAssertion) buildActualPoller() (func() (interface{}, error, StopTryingError), error) {
if !assertion.actualIsFunc {
return func() (interface{}, error, StopTryingError) { return assertion.actual, nil, nil }, nil
}
actualValue := reflect.ValueOf(assertion.actual)
actualType := reflect.TypeOf(assertion.actual)
numIn, numOut, isVariadic := actualType.NumIn(), actualType.NumOut(), actualType.IsVariadic()
if numIn == 0 && numOut == 0 {
return nil, assertion.invalidFunctionError(actualType)
} else if numIn == 0 {
return func() (actual interface{}, err error, stopTrying StopTryingError) {
defer func() {
if e := recover(); e != nil {
if reflect.TypeOf(e) == stopTryingErrorType {
stopTrying = e.(StopTryingError)
} else {
panic(e)
}
}
}()
actual, err, stopTrying = assertion.processReturnValues(actualValue.Call([]reflect.Value{}))
return
}, nil
}
takesGomega, takesContext := actualType.In(0).Implements(gomegaType), actualType.In(0).Implements(contextType)
if takesGomega && numIn > 1 && actualType.In(1).Implements(contextType) {
takesContext = true
}
if takesContext && len(assertion.argsToForward) > 0 && reflect.TypeOf(assertion.argsToForward[0]).Implements(contextType) {
takesContext = false
}
if !takesGomega && numOut == 0 {
return nil, assertion.invalidFunctionError(actualType)
}
if takesContext && assertion.ctx == nil {
return nil, assertion.noConfiguredContextForFunctionError()
}
var assertionFailure error
inValues := []reflect.Value{}
if takesGomega {
inValues = append(inValues, reflect.ValueOf(NewGomega(assertion.g.DurationBundle).ConfigureWithFailHandler(func(message string, callerSkip ...int) {
skip := 0
if len(callerSkip) > 0 {
skip = callerSkip[0]
}
_, file, line, _ := runtime.Caller(skip + 1)
assertionFailure = fmt.Errorf("Assertion in callback at %s:%d failed:\n%s", file, line, message)
panic("stop execution")
})))
}
if takesContext {
inValues = append(inValues, reflect.ValueOf(assertion.ctx))
}
for _, arg := range assertion.argsToForward {
inValues = append(inValues, reflect.ValueOf(arg))
}
if !isVariadic && numIn != len(inValues) {
return nil, assertion.argumentMismatchError(actualType, len(inValues))
} else if isVariadic && len(inValues) < numIn-1 {
return nil, assertion.argumentMismatchError(actualType, len(inValues))
}
return func() (actual interface{}, err error, stopTrying StopTryingError) {
var values []reflect.Value
assertionFailure = nil
defer func() {
if numOut == 0 {
actual = assertionFailure
} else {
actual, err, stopTrying = assertion.processReturnValues(values)
if assertionFailure != nil {
err = assertionFailure
}
}
if e := recover(); e != nil {
if reflect.TypeOf(e) == stopTryingErrorType {
stopTrying = e.(StopTryingError)
} else if assertionFailure == nil {
panic(e)
}
}
}()
values = actualValue.Call(inValues)
return
}, nil
}
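
For functions whose first parameter is a Gomega, the poller built above injects an instance whose fail handler records the failure and unwinds with panic("stop execution"), so a failed in-callback assertion merely fails that poll instead of aborting the test. The resulting idiom, sketched (server is illustrative, gomega dot-imported):

```go
Eventually(func(g Gomega) {
	resp, err := http.Get(server.URL)
	g.Expect(err).NotTo(HaveOccurred())
	g.Expect(resp.StatusCode).To(Equal(http.StatusOK))
}).Should(Succeed())
```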
func (assertion *AsyncAssertion) matcherSaysStopTrying(matcher types.GomegaMatcher, value interface{}) StopTryingError {
if assertion.actualIsFunc || types.MatchMayChangeInTheFuture(matcher, value) {
return nil
}
return StopTrying("No future change is possible. Bailing out early")
}
type contextWithAttachProgressReporter interface {
AttachProgressReporter(func() string) func()
}
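
This duck-typed interface lets gomega cooperate with Ginkgo's SpecContext (looked up below under the "GINKGO_SPEC_CONTEXT" key) without importing Ginkgo; any context that stores a compatible value under that key gets live progress reports. An illustrative stand-in, not taken from either library:

```go
type specCtx struct {
	context.Context
}

// AttachProgressReporter satisfies contextWithAttachProgressReporter.
// A real implementation would surface report() in progress output;
// the returned func detaches the reporter.
func (c *specCtx) AttachProgressReporter(report func() string) func() {
	return func() {}
}

func (c *specCtx) Value(key interface{}) interface{} {
	if key == "GINKGO_SPEC_CONTEXT" {
		return c // hand the assertion our AttachProgressReporter
	}
	return c.Context.Value(key)
}
```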
func (assertion *AsyncAssertion) match(matcher types.GomegaMatcher, desiredMatch bool, optionalDescription ...interface{}) bool {
timer := time.Now()
timeout := time.After(assertion.timeoutInterval)
lock := sync.Mutex{}
var matches bool
var err error
mayChange := true
value, err := assertion.pollActual()
if err == nil {
mayChange = assertion.matcherMayChange(matcher, value)
matches, err = matcher.Match(value)
}
assertion.g.THelper()
fail := func(preamble string) {
pollActual, err := assertion.buildActualPoller()
if err != nil {
assertion.g.Fail(err.Error(), 2+assertion.offset)
return false
}
value, err, stopTrying := pollActual()
if err == nil {
if stopTrying == nil {
stopTrying = assertion.matcherSaysStopTrying(matcher, value)
}
matches, err = matcher.Match(value)
}
messageGenerator := func() string {
// can be called out of band by Ginkgo if the user requests a progress report
lock.Lock()
defer lock.Unlock()
errMsg := ""
message := ""
if err != nil {
@ -181,9 +353,22 @@ func (assertion *AsyncAssertion) match(matcher types.GomegaMatcher, desiredMatch
message = matcher.NegatedFailureMessage(value)
}
}
assertion.g.THelper()
description := assertion.buildDescription(optionalDescription...)
assertion.g.Fail(fmt.Sprintf("%s after %.3fs.\n%s%s%s", preamble, time.Since(timer).Seconds(), description, message, errMsg), 3+assertion.offset)
return fmt.Sprintf("%s%s%s", description, message, errMsg)
}
fail := func(preamble string) {
assertion.g.THelper()
assertion.g.Fail(fmt.Sprintf("%s after %.3fs.\n%s", preamble, time.Since(timer).Seconds(), messageGenerator()), 3+assertion.offset)
}
var contextDone <-chan struct{}
if assertion.ctx != nil {
contextDone = assertion.ctx.Done()
if v, ok := assertion.ctx.Value("GINKGO_SPEC_CONTEXT").(contextWithAttachProgressReporter); ok {
detach := v.AttachProgressReporter(messageGenerator)
defer detach()
}
}
if assertion.asyncType == AsyncAssertionTypeEventually {
@ -192,18 +377,35 @@ func (assertion *AsyncAssertion) match(matcher types.GomegaMatcher, desiredMatch
return true
}
if !mayChange {
fail("No future change is possible. Bailing out early")
if stopTrying != nil {
fail(stopTrying.Error() + " -")
return false
}
select {
case <-time.After(assertion.pollingInterval):
value, err = assertion.pollActual()
if err == nil {
mayChange = assertion.matcherMayChange(matcher, value)
matches, err = matcher.Match(value)
v, e, st := pollActual()
if st != nil && st.wasViaPanic() {
// we were told to stop trying via panic - which means we don't have reasonable new values
// we should simply use the old values and exit now
fail(st.Error() + " -")
return false
}
lock.Lock()
value, err, stopTrying = v, e, st
lock.Unlock()
if err == nil {
if stopTrying == nil {
stopTrying = assertion.matcherSaysStopTrying(matcher, value)
}
matches, e = matcher.Match(value)
lock.Lock()
err = e
lock.Unlock()
}
case <-contextDone:
fail("Context was cancelled")
return false
case <-timeout:
fail("Timed out")
return false
@ -216,17 +418,32 @@ func (assertion *AsyncAssertion) match(matcher types.GomegaMatcher, desiredMatch
return false
}
if !mayChange {
if stopTrying != nil {
return true
}
select {
case <-time.After(assertion.pollingInterval):
value, err = assertion.pollActual()
if err == nil {
mayChange = assertion.matcherMayChange(matcher, value)
matches, err = matcher.Match(value)
v, e, st := pollActual()
if st != nil && st.wasViaPanic() {
// we were told to stop trying via panic - which means we made it this far and should return successfully
return true
}
lock.Lock()
value, err, stopTrying = v, e, st
lock.Unlock()
if err == nil {
if stopTrying == nil {
stopTrying = assertion.matcherSaysStopTrying(matcher, value)
}
matches, e = matcher.Match(value)
lock.Lock()
err = e
lock.Unlock()
}
case <-contextDone:
fail("Context was cancelled")
return false
case <-timeout:
return true
}
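
With contextDone wired into both polling loops, cancelling the supplied context ends the assertion at the next select rather than waiting out the timeout. A sketch:

```go
ctx, cancel := context.WithCancel(context.Background())
go func() {
	time.Sleep(100 * time.Millisecond)
	cancel()
}()

// Fails with "Context was cancelled" after roughly 100ms,
// not after the one-minute timeout.
Eventually(func() bool { return false }).
	WithTimeout(time.Minute).
	WithContext(ctx).
	Should(BeTrue())
```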


@ -1,6 +1,7 @@
package internal
import (
"context"
"time"
"github.com/onsi/gomega/types"
@ -55,9 +56,19 @@ func (g *Gomega) Eventually(actual interface{}, intervals ...interface{}) types.
return g.EventuallyWithOffset(0, actual, intervals...)
}
func (g *Gomega) EventuallyWithOffset(offset int, actual interface{}, intervals ...interface{}) types.AsyncAssertion {
func (g *Gomega) EventuallyWithOffset(offset int, actual interface{}, args ...interface{}) types.AsyncAssertion {
timeoutInterval := g.DurationBundle.EventuallyTimeout
pollingInterval := g.DurationBundle.EventuallyPollingInterval
intervals := []interface{}{}
var ctx context.Context
for _, arg := range args {
switch v := arg.(type) {
case context.Context:
ctx = v
default:
intervals = append(intervals, arg)
}
}
if len(intervals) > 0 {
timeoutInterval = toDuration(intervals[0])
}
@ -65,16 +76,26 @@ func (g *Gomega) EventuallyWithOffset(offset int, actual interface{}, intervals
pollingInterval = toDuration(intervals[1])
}
return NewAsyncAssertion(AsyncAssertionTypeEventually, actual, g, timeoutInterval, pollingInterval, offset)
return NewAsyncAssertion(AsyncAssertionTypeEventually, actual, g, timeoutInterval, pollingInterval, ctx, offset)
}
func (g *Gomega) Consistently(actual interface{}, intervals ...interface{}) types.AsyncAssertion {
return g.ConsistentlyWithOffset(0, actual, intervals...)
}
func (g *Gomega) ConsistentlyWithOffset(offset int, actual interface{}, intervals ...interface{}) types.AsyncAssertion {
func (g *Gomega) ConsistentlyWithOffset(offset int, actual interface{}, args ...interface{}) types.AsyncAssertion {
timeoutInterval := g.DurationBundle.ConsistentlyDuration
pollingInterval := g.DurationBundle.ConsistentlyPollingInterval
intervals := []interface{}{}
var ctx context.Context
for _, arg := range args {
switch v := arg.(type) {
case context.Context:
ctx = v
default:
intervals = append(intervals, arg)
}
}
if len(intervals) > 0 {
timeoutInterval = toDuration(intervals[0])
}
@ -82,7 +103,7 @@ func (g *Gomega) ConsistentlyWithOffset(offset int, actual interface{}, interval
pollingInterval = toDuration(intervals[1])
}
return NewAsyncAssertion(AsyncAssertionTypeConsistently, actual, g, timeoutInterval, pollingInterval, offset)
return NewAsyncAssertion(AsyncAssertionTypeConsistently, actual, g, timeoutInterval, pollingInterval, ctx, offset)
}
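
Because both entry points now sift a context.Context out of their trailing arguments, a context can be supplied positionally, mixed with interval arguments, or via WithContext; all three routes reach NewAsyncAssertion identically. Illustrative calls (fetch stands in for any func(context.Context) error):

```go
ctx := context.Background()

Eventually(fetch, ctx).Should(Succeed())
Eventually(fetch, "10s", "200ms", ctx).Should(Succeed()) // intervals plus context
Eventually(fetch).WithContext(ctx).Should(Succeed())
Consistently(fetch, "2s", ctx).Should(Succeed())
```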
func (g *Gomega) SetDefaultEventuallyTimeout(t time.Duration) {


@ -8,27 +8,27 @@ import (
"github.com/onsi/gomega/types"
)
//Equal uses reflect.DeepEqual to compare actual with expected. Equal is strict about
//types when performing comparisons.
//It is an error for both actual and expected to be nil. Use BeNil() instead.
// Equal uses reflect.DeepEqual to compare actual with expected. Equal is strict about
// types when performing comparisons.
// It is an error for both actual and expected to be nil. Use BeNil() instead.
func Equal(expected interface{}) types.GomegaMatcher {
return &matchers.EqualMatcher{
Expected: expected,
}
}
//BeEquivalentTo is more lax than Equal, allowing equality between different types.
//This is done by converting actual to have the type of expected before
//attempting equality with reflect.DeepEqual.
//It is an error for actual and expected to be nil. Use BeNil() instead.
// BeEquivalentTo is more lax than Equal, allowing equality between different types.
// This is done by converting actual to have the type of expected before
// attempting equality with reflect.DeepEqual.
// It is an error for actual and expected to be nil. Use BeNil() instead.
func BeEquivalentTo(expected interface{}) types.GomegaMatcher {
return &matchers.BeEquivalentToMatcher{
Expected: expected,
}
}
//BeComparableTo uses gocmp.Equal to compare. You can pass cmp.Option as options.
//It is an error for actual and expected to be nil. Use BeNil() instead.
// BeComparableTo uses gocmp.Equal to compare. You can pass cmp.Option as options.
// It is an error for actual and expected to be nil. Use BeNil() instead.
func BeComparableTo(expected interface{}, opts ...cmp.Option) types.GomegaMatcher {
return &matchers.BeComparableToMatcher{
Expected: expected,
@ -36,116 +36,124 @@ func BeComparableTo(expected interface{}, opts ...cmp.Option) types.GomegaMatche
}
}
//BeIdenticalTo uses the == operator to compare actual with expected.
//BeIdenticalTo is strict about types when performing comparisons.
//It is an error for both actual and expected to be nil. Use BeNil() instead.
// BeIdenticalTo uses the == operator to compare actual with expected.
// BeIdenticalTo is strict about types when performing comparisons.
// It is an error for both actual and expected to be nil. Use BeNil() instead.
func BeIdenticalTo(expected interface{}) types.GomegaMatcher {
return &matchers.BeIdenticalToMatcher{
Expected: expected,
}
}
//BeNil succeeds if actual is nil
// BeNil succeeds if actual is nil
func BeNil() types.GomegaMatcher {
return &matchers.BeNilMatcher{}
}
//BeTrue succeeds if actual is true
// BeTrue succeeds if actual is true
func BeTrue() types.GomegaMatcher {
return &matchers.BeTrueMatcher{}
}
//BeFalse succeeds if actual is false
// BeFalse succeeds if actual is false
func BeFalse() types.GomegaMatcher {
return &matchers.BeFalseMatcher{}
}
//HaveOccurred succeeds if actual is a non-nil error
//The typical Go error checking pattern looks like:
// err := SomethingThatMightFail()
// Expect(err).ShouldNot(HaveOccurred())
// HaveOccurred succeeds if actual is a non-nil error
// The typical Go error checking pattern looks like:
//
// err := SomethingThatMightFail()
// Expect(err).ShouldNot(HaveOccurred())
func HaveOccurred() types.GomegaMatcher {
return &matchers.HaveOccurredMatcher{}
}
//Succeed passes if actual is a nil error
//Succeed is intended to be used with functions that return a single error value. Instead of
// err := SomethingThatMightFail()
// Expect(err).ShouldNot(HaveOccurred())
// Succeed passes if actual is a nil error
// Succeed is intended to be used with functions that return a single error value. Instead of
//
//You can write:
// Expect(SomethingThatMightFail()).Should(Succeed())
// err := SomethingThatMightFail()
// Expect(err).ShouldNot(HaveOccurred())
//
//It is a mistake to use Succeed with a function that has multiple return values. Gomega's Ω and Expect
//functions automatically trigger failure if any return values after the first return value are non-zero/non-nil.
//This means that Ω(MultiReturnFunc()).ShouldNot(Succeed()) can never pass.
// You can write:
//
// Expect(SomethingThatMightFail()).Should(Succeed())
//
// It is a mistake to use Succeed with a function that has multiple return values. Gomega's Ω and Expect
// functions automatically trigger failure if any return values after the first return value are non-zero/non-nil.
// This means that Ω(MultiReturnFunc()).ShouldNot(Succeed()) can never pass.
func Succeed() types.GomegaMatcher {
return &matchers.SucceedMatcher{}
}
//MatchError succeeds if actual is a non-nil error that matches the passed in string/error.
// MatchError succeeds if actual is a non-nil error that matches the passed in string/error.
//
//These are valid use-cases:
// Expect(err).Should(MatchError("an error")) //asserts that err.Error() == "an error"
// Expect(err).Should(MatchError(SomeError)) //asserts that err == SomeError (via reflect.DeepEqual)
// These are valid use-cases:
//
//It is an error for err to be nil or an object that does not implement the Error interface
// Expect(err).Should(MatchError("an error")) //asserts that err.Error() == "an error"
// Expect(err).Should(MatchError(SomeError)) //asserts that err == SomeError (via reflect.DeepEqual)
//
// It is an error for err to be nil or an object that does not implement the Error interface
func MatchError(expected interface{}) types.GomegaMatcher {
return &matchers.MatchErrorMatcher{
Expected: expected,
}
}
//BeClosed succeeds if actual is a closed channel.
//It is an error to pass a non-channel to BeClosed, it is also an error to pass nil
// BeClosed succeeds if actual is a closed channel.
// It is an error to pass a non-channel to BeClosed, it is also an error to pass nil
//
//In order to check whether or not the channel is closed, Gomega must try to read from the channel
//(even in the `ShouldNot(BeClosed())` case). You should keep this in mind if you wish to make subsequent assertions about
//values coming down the channel.
// In order to check whether or not the channel is closed, Gomega must try to read from the channel
// (even in the `ShouldNot(BeClosed())` case). You should keep this in mind if you wish to make subsequent assertions about
// values coming down the channel.
//
//Also, if you are testing that a *buffered* channel is closed you must first read all values out of the channel before
//asserting that it is closed (it is not possible to detect that a buffered-channel has been closed until all its buffered values are read).
// Also, if you are testing that a *buffered* channel is closed you must first read all values out of the channel before
// asserting that it is closed (it is not possible to detect that a buffered-channel has been closed until all its buffered values are read).
//
//Finally, as a corollary: it is an error to check whether or not a send-only channel is closed.
// Finally, as a corollary: it is an error to check whether or not a send-only channel is closed.
func BeClosed() types.GomegaMatcher {
return &matchers.BeClosedMatcher{}
}
//Receive succeeds if there is a value to be received on actual.
//Actual must be a channel (and cannot be a send-only channel) -- anything else is an error.
// Receive succeeds if there is a value to be received on actual.
// Actual must be a channel (and cannot be a send-only channel) -- anything else is an error.
//
//Receive returns immediately and never blocks:
// Receive returns immediately and never blocks:
//
//- If there is nothing on the channel `c` then Expect(c).Should(Receive()) will fail and Ω(c).ShouldNot(Receive()) will pass.
// - If there is nothing on the channel `c` then Expect(c).Should(Receive()) will fail and Ω(c).ShouldNot(Receive()) will pass.
//
//- If the channel `c` is closed then Expect(c).Should(Receive()) will fail and Ω(c).ShouldNot(Receive()) will pass.
// - If the channel `c` is closed then Expect(c).Should(Receive()) will fail and Ω(c).ShouldNot(Receive()) will pass.
//
//- If there is something on the channel `c` ready to be read, then Expect(c).Should(Receive()) will pass and Ω(c).ShouldNot(Receive()) will fail.
// - If there is something on the channel `c` ready to be read, then Expect(c).Should(Receive()) will pass and Ω(c).ShouldNot(Receive()) will fail.
//
//If you have a go-routine running in the background that will write to channel `c` you can:
// Eventually(c).Should(Receive())
// If you have a go-routine running in the background that will write to channel `c` you can:
//
//This will timeout if nothing gets sent to `c` (you can modify the timeout interval as you normally do with `Eventually`)
// Eventually(c).Should(Receive())
//
//A similar use-case is to assert that no go-routine writes to a channel (for a period of time). You can do this with `Consistently`:
// Consistently(c).ShouldNot(Receive())
// This will timeout if nothing gets sent to `c` (you can modify the timeout interval as you normally do with `Eventually`)
//
//You can pass `Receive` a matcher. If you do so, it will match the received object against the matcher. For example:
// Expect(c).Should(Receive(Equal("foo")))
// A similar use-case is to assert that no go-routine writes to a channel (for a period of time). You can do this with `Consistently`:
//
//When given a matcher, `Receive` will always fail if there is nothing to be received on the channel.
// Consistently(c).ShouldNot(Receive())
//
//Passing Receive a matcher is especially useful when paired with Eventually:
// You can pass `Receive` a matcher. If you do so, it will match the received object against the matcher. For example:
//
// Eventually(c).Should(Receive(ContainSubstring("bar")))
// Expect(c).Should(Receive(Equal("foo")))
//
//will repeatedly attempt to pull values out of `c` until a value matching "bar" is received.
// When given a matcher, `Receive` will always fail if there is nothing to be received on the channel.
//
//Finally, if you want to have a reference to the value *sent* to the channel you can pass the `Receive` matcher a pointer to a variable of the appropriate type:
// var myThing thing
// Eventually(thingChan).Should(Receive(&myThing))
// Expect(myThing.Sprocket).Should(Equal("foo"))
// Expect(myThing.IsValid()).Should(BeTrue())
// Passing Receive a matcher is especially useful when paired with Eventually:
//
// Eventually(c).Should(Receive(ContainSubstring("bar")))
//
// will repeatedly attempt to pull values out of `c` until a value matching "bar" is received.
//
// Finally, if you want to have a reference to the value *sent* to the channel you can pass the `Receive` matcher a pointer to a variable of the appropriate type:
//
// var myThing thing
// Eventually(thingChan).Should(Receive(&myThing))
// Expect(myThing.Sprocket).Should(Equal("foo"))
// Expect(myThing.IsValid()).Should(BeTrue())
func Receive(args ...interface{}) types.GomegaMatcher {
var arg interface{}
if len(args) > 0 {
@ -157,27 +165,27 @@ func Receive(args ...interface{}) types.GomegaMatcher {
}
}
//BeSent succeeds if a value can be sent to actual.
//Actual must be a channel (and cannot be a receive-only channel) that can sent the type of the value passed into BeSent -- anything else is an error.
//In addition, actual must not be closed.
// BeSent succeeds if a value can be sent to actual.
// Actual must be a channel (and cannot be a receive-only channel) that can send the type of the value passed into BeSent -- anything else is an error.
// In addition, actual must not be closed.
//
//BeSent never blocks:
// BeSent never blocks:
//
//- If the channel `c` is not ready to receive then Expect(c).Should(BeSent("foo")) will fail immediately
//- If the channel `c` is eventually ready to receive then Eventually(c).Should(BeSent("foo")) will succeed.. presuming the channel becomes ready to receive before Eventually's timeout
//- If the channel `c` is closed then Expect(c).Should(BeSent("foo")) and Ω(c).ShouldNot(BeSent("foo")) will both fail immediately
// - If the channel `c` is not ready to receive then Expect(c).Should(BeSent("foo")) will fail immediately
// - If the channel `c` is eventually ready to receive then Eventually(c).Should(BeSent("foo")) will succeed, presuming the channel becomes ready to receive before Eventually's timeout
// - If the channel `c` is closed then Expect(c).Should(BeSent("foo")) and Ω(c).ShouldNot(BeSent("foo")) will both fail immediately
//
//Of course, the value is actually sent to the channel. The point of `BeSent` is less to make an assertion about the availability of the channel (which is typically an implementation detail that your test should not be concerned with).
//Rather, the point of `BeSent` is to make it possible to easily and expressively write tests that can timeout on blocked channel sends.
// Of course, the value is actually sent to the channel. The point of `BeSent` is less to make an assertion about the availability of the channel (which is typically an implementation detail that your test should not be concerned with).
// Rather, the point of `BeSent` is to make it possible to easily and expressively write tests that can timeout on blocked channel sends.
func BeSent(arg interface{}) types.GomegaMatcher {
return &matchers.BeSentMatcher{
Arg: arg,
}
}
//MatchRegexp succeeds if actual is a string or stringer that matches the
//passed-in regexp. Optional arguments can be provided to construct a regexp
//via fmt.Sprintf().
// MatchRegexp succeeds if actual is a string or stringer that matches the
// passed-in regexp. Optional arguments can be provided to construct a regexp
// via fmt.Sprintf().
func MatchRegexp(regexp string, args ...interface{}) types.GomegaMatcher {
return &matchers.MatchRegexpMatcher{
Regexp: regexp,
@ -185,9 +193,9 @@ func MatchRegexp(regexp string, args ...interface{}) types.GomegaMatcher {
}
}
//ContainSubstring succeeds if actual is a string or stringer that contains the
//passed-in substring. Optional arguments can be provided to construct the substring
//via fmt.Sprintf().
// ContainSubstring succeeds if actual is a string or stringer that contains the
// passed-in substring. Optional arguments can be provided to construct the substring
// via fmt.Sprintf().
func ContainSubstring(substr string, args ...interface{}) types.GomegaMatcher {
return &matchers.ContainSubstringMatcher{
Substr: substr,
@ -195,9 +203,9 @@ func ContainSubstring(substr string, args ...interface{}) types.GomegaMatcher {
}
}
//HavePrefix succeeds if actual is a string or stringer that contains the
//passed-in string as a prefix. Optional arguments can be provided to construct
//via fmt.Sprintf().
// HavePrefix succeeds if actual is a string or stringer that contains the
// passed-in string as a prefix. Optional arguments can be provided to construct
// via fmt.Sprintf().
func HavePrefix(prefix string, args ...interface{}) types.GomegaMatcher {
return &matchers.HavePrefixMatcher{
Prefix: prefix,
@ -205,9 +213,9 @@ func HavePrefix(prefix string, args ...interface{}) types.GomegaMatcher {
}
}
//HaveSuffix succeeds if actual is a string or stringer that contains the
//passed-in string as a suffix. Optional arguments can be provided to construct
//via fmt.Sprintf().
// HaveSuffix succeeds if actual is a string or stringer that contains the
// passed-in string as a suffix. Optional arguments can be provided to construct
// via fmt.Sprintf().
func HaveSuffix(suffix string, args ...interface{}) types.GomegaMatcher {
return &matchers.HaveSuffixMatcher{
Suffix: suffix,
@ -215,73 +223,74 @@ func HaveSuffix(suffix string, args ...interface{}) types.GomegaMatcher {
}
}
//MatchJSON succeeds if actual is a string or stringer of JSON that matches
//the expected JSON. The JSONs are decoded and the resulting objects are compared via
//reflect.DeepEqual so things like key-ordering and whitespace shouldn't matter.
// MatchJSON succeeds if actual is a string or stringer of JSON that matches
// the expected JSON. The JSONs are decoded and the resulting objects are compared via
// reflect.DeepEqual so things like key-ordering and whitespace shouldn't matter.
func MatchJSON(json interface{}) types.GomegaMatcher {
return &matchers.MatchJSONMatcher{
JSONToMatch: json,
}
}
//MatchXML succeeds if actual is a string or stringer of XML that matches
//the expected XML. The XMLs are decoded and the resulting objects are compared via
//reflect.DeepEqual so things like whitespaces shouldn't matter.
// MatchXML succeeds if actual is a string or stringer of XML that matches
// the expected XML. The XMLs are decoded and the resulting objects are compared via
// reflect.DeepEqual so things like whitespaces shouldn't matter.
func MatchXML(xml interface{}) types.GomegaMatcher {
return &matchers.MatchXMLMatcher{
XMLToMatch: xml,
}
}
//MatchYAML succeeds if actual is a string or stringer of YAML that matches
//the expected YAML. The YAML's are decoded and the resulting objects are compared via
//reflect.DeepEqual so things like key-ordering and whitespace shouldn't matter.
// MatchYAML succeeds if actual is a string or stringer of YAML that matches
// the expected YAML. The YAML's are decoded and the resulting objects are compared via
// reflect.DeepEqual so things like key-ordering and whitespace shouldn't matter.
func MatchYAML(yaml interface{}) types.GomegaMatcher {
return &matchers.MatchYAMLMatcher{
YAMLToMatch: yaml,
}
}
//BeEmpty succeeds if actual is empty. Actual must be of type string, array, map, chan, or slice.
// BeEmpty succeeds if actual is empty. Actual must be of type string, array, map, chan, or slice.
func BeEmpty() types.GomegaMatcher {
return &matchers.BeEmptyMatcher{}
}
//HaveLen succeeds if actual has the passed-in length. Actual must be of type string, array, map, chan, or slice.
// HaveLen succeeds if actual has the passed-in length. Actual must be of type string, array, map, chan, or slice.
func HaveLen(count int) types.GomegaMatcher {
return &matchers.HaveLenMatcher{
Count: count,
}
}
//HaveCap succeeds if actual has the passed-in capacity. Actual must be of type array, chan, or slice.
// HaveCap succeeds if actual has the passed-in capacity. Actual must be of type array, chan, or slice.
func HaveCap(count int) types.GomegaMatcher {
return &matchers.HaveCapMatcher{
Count: count,
}
}
//BeZero succeeds if actual is the zero value for its type or if actual is nil.
// BeZero succeeds if actual is the zero value for its type or if actual is nil.
func BeZero() types.GomegaMatcher {
return &matchers.BeZeroMatcher{}
}
//ContainElement succeeds if actual contains the passed in element. By default
//ContainElement() uses Equal() to perform the match, however a matcher can be
//passed in instead:
// Expect([]string{"Foo", "FooBar"}).Should(ContainElement(ContainSubstring("Bar")))
// ContainElement succeeds if actual contains the passed in element. By default
// ContainElement() uses Equal() to perform the match, however a matcher can be
// passed in instead:
//
//Actual must be an array, slice or map. For maps, ContainElement searches
//through the map's values.
// Expect([]string{"Foo", "FooBar"}).Should(ContainElement(ContainSubstring("Bar")))
//
//If you want to have a copy of the matching element(s) found you can pass a
//pointer to a variable of the appropriate type. If the variable isn't a slice
//or map, then exactly one match will be expected and returned. If the variable
//is a slice or map, then at least one match is expected and all matches will be
//stored in the variable.
// Actual must be an array, slice or map. For maps, ContainElement searches
// through the map's values.
//
// var findings []string
// Expect([]string{"Foo", "FooBar"}).Should(ContainElement(ContainSubString("Bar", &findings)))
// If you want to have a copy of the matching element(s) found you can pass a
// pointer to a variable of the appropriate type. If the variable isn't a slice
// or map, then exactly one match will be expected and returned. If the variable
// is a slice or map, then at least one match is expected and all matches will be
// stored in the variable.
//
// var findings []string
// Expect([]string{"Foo", "FooBar"}).Should(ContainElement(ContainSubString("Bar", &findings)))
func ContainElement(element interface{}, result ...interface{}) types.GomegaMatcher {
return &matchers.ContainElementMatcher{
Element: element,
@ -289,86 +298,102 @@ func ContainElement(element interface{}, result ...interface{}) types.GomegaMatc
}
}
//BeElementOf succeeds if actual is contained in the passed in elements.
//BeElementOf() always uses Equal() to perform the match.
//When the passed in elements are comprised of a single element that is either an Array or Slice, BeElementOf() behaves
//as the reverse of ContainElement() that operates with Equal() to perform the match.
// Expect(2).Should(BeElementOf([]int{1, 2}))
// Expect(2).Should(BeElementOf([2]int{1, 2}))
//Otherwise, BeElementOf() provides a syntactic sugar for Or(Equal(_), Equal(_), ...):
// Expect(2).Should(BeElementOf(1, 2))
// BeElementOf succeeds if actual is contained in the passed in elements.
// BeElementOf() always uses Equal() to perform the match.
// When the passed in elements are comprised of a single element that is either an Array or Slice, BeElementOf() behaves
// as the reverse of ContainElement() that operates with Equal() to perform the match.
//
//Actual must be typed.
// Expect(2).Should(BeElementOf([]int{1, 2}))
// Expect(2).Should(BeElementOf([2]int{1, 2}))
//
// Otherwise, BeElementOf() provides a syntactic sugar for Or(Equal(_), Equal(_), ...):
//
// Expect(2).Should(BeElementOf(1, 2))
//
// Actual must be typed.
func BeElementOf(elements ...interface{}) types.GomegaMatcher {
return &matchers.BeElementOfMatcher{
Elements: elements,
}
}
//ConsistOf succeeds if actual contains precisely the elements passed into the matcher. The ordering of the elements does not matter.
//By default ConsistOf() uses Equal() to match the elements, however custom matchers can be passed in instead. Here are some examples:
// BeKeyOf succeeds if actual is contained in the keys of the passed in map.
// BeKeyOf() always uses Equal() to perform the match between actual and the map keys.
//
// Expect([]string{"Foo", "FooBar"}).Should(ConsistOf("FooBar", "Foo"))
// Expect([]string{"Foo", "FooBar"}).Should(ConsistOf(ContainSubstring("Bar"), "Foo"))
// Expect([]string{"Foo", "FooBar"}).Should(ConsistOf(ContainSubstring("Foo"), ContainSubstring("Foo")))
// Expect("foo").Should(BeKeyOf(map[string]bool{"foo": true, "bar": false}))
func BeKeyOf(element interface{}) types.GomegaMatcher {
return &matchers.BeKeyOfMatcher{
Map: element,
}
}
// ConsistOf succeeds if actual contains precisely the elements passed into the matcher. The ordering of the elements does not matter.
// By default ConsistOf() uses Equal() to match the elements, however custom matchers can be passed in instead. Here are some examples:
//
//Actual must be an array, slice or map. For maps, ConsistOf matches against the map's values.
// Expect([]string{"Foo", "FooBar"}).Should(ConsistOf("FooBar", "Foo"))
// Expect([]string{"Foo", "FooBar"}).Should(ConsistOf(ContainSubstring("Bar"), "Foo"))
// Expect([]string{"Foo", "FooBar"}).Should(ConsistOf(ContainSubstring("Foo"), ContainSubstring("Foo")))
//
//You typically pass variadic arguments to ConsistOf (as in the examples above). However, if you need to pass in a slice you can provided that it
//is the only element passed in to ConsistOf:
// Actual must be an array, slice or map. For maps, ConsistOf matches against the map's values.
//
// Expect([]string{"Foo", "FooBar"}).Should(ConsistOf([]string{"FooBar", "Foo"}))
// You typically pass variadic arguments to ConsistOf (as in the examples above). However, if you need to pass in a slice you can, provided that it
// is the only element passed in to ConsistOf:
//
//Note that Go's type system does not allow you to write this as ConsistOf([]string{"FooBar", "Foo"}...) as []string and []interface{} are different types - hence the need for this special rule.
// Expect([]string{"Foo", "FooBar"}).Should(ConsistOf([]string{"FooBar", "Foo"}))
//
// Note that Go's type system does not allow you to write this as ConsistOf([]string{"FooBar", "Foo"}...) as []string and []interface{} are different types - hence the need for this special rule.
func ConsistOf(elements ...interface{}) types.GomegaMatcher {
return &matchers.ConsistOfMatcher{
Elements: elements,
}
}
//ContainElements succeeds if actual contains the passed in elements. The ordering of the elements does not matter.
//By default ContainElements() uses Equal() to match the elements, however custom matchers can be passed in instead. Here are some examples:
// ContainElements succeeds if actual contains the passed in elements. The ordering of the elements does not matter.
// By default ContainElements() uses Equal() to match the elements, however custom matchers can be passed in instead. Here are some examples:
//
// Expect([]string{"Foo", "FooBar"}).Should(ContainElements("FooBar"))
// Expect([]string{"Foo", "FooBar"}).Should(ContainElements(ContainSubstring("Bar"), "Foo"))
// Expect([]string{"Foo", "FooBar"}).Should(ContainElements("FooBar"))
// Expect([]string{"Foo", "FooBar"}).Should(ContainElements(ContainSubstring("Bar"), "Foo"))
//
//Actual must be an array, slice or map.
//For maps, ContainElements searches through the map's values.
// Actual must be an array, slice or map.
// For maps, ContainElements searches through the map's values.
func ContainElements(elements ...interface{}) types.GomegaMatcher {
return &matchers.ContainElementsMatcher{
Elements: elements,
}
}
//HaveEach succeeds if actual solely contains elements that match the passed in element.
//Please note that if actual is empty, HaveEach always will succeed.
//By default HaveEach() uses Equal() to perform the match, however a
//matcher can be passed in instead:
// Expect([]string{"Foo", "FooBar"}).Should(HaveEach(ContainSubstring("Foo")))
// HaveEach succeeds if actual solely contains elements that match the passed in element.
// Please note that if actual is empty, HaveEach will always succeed.
// By default HaveEach() uses Equal() to perform the match, however a
// matcher can be passed in instead:
//
//Actual must be an array, slice or map.
//For maps, HaveEach searches through the map's values.
// Expect([]string{"Foo", "FooBar"}).Should(HaveEach(ContainSubstring("Foo")))
//
// Actual must be an array, slice or map.
// For maps, HaveEach searches through the map's values.
func HaveEach(element interface{}) types.GomegaMatcher {
return &matchers.HaveEachMatcher{
Element: element,
}
}
//HaveKey succeeds if actual is a map with the passed in key.
//By default HaveKey uses Equal() to perform the match, however a
//matcher can be passed in instead:
// Expect(map[string]string{"Foo": "Bar", "BazFoo": "Duck"}).Should(HaveKey(MatchRegexp(`.+Foo$`)))
// HaveKey succeeds if actual is a map with the passed in key.
// By default HaveKey uses Equal() to perform the match, however a
// matcher can be passed in instead:
//
// Expect(map[string]string{"Foo": "Bar", "BazFoo": "Duck"}).Should(HaveKey(MatchRegexp(`.+Foo$`)))
func HaveKey(key interface{}) types.GomegaMatcher {
return &matchers.HaveKeyMatcher{
Key: key,
}
}
//HaveKeyWithValue succeeds if actual is a map with the passed in key and value.
//By default HaveKeyWithValue uses Equal() to perform the match, however a
//matcher can be passed in instead:
// Expect(map[string]string{"Foo": "Bar", "BazFoo": "Duck"}).Should(HaveKeyWithValue("Foo", "Bar"))
// Expect(map[string]string{"Foo": "Bar", "BazFoo": "Duck"}).Should(HaveKeyWithValue(MatchRegexp(`.+Foo$`), "Bar"))
// HaveKeyWithValue succeeds if actual is a map with the passed in key and value.
// By default HaveKeyWithValue uses Equal() to perform the match, however a
// matcher can be passed in instead:
//
// Expect(map[string]string{"Foo": "Bar", "BazFoo": "Duck"}).Should(HaveKeyWithValue("Foo", "Bar"))
// Expect(map[string]string{"Foo": "Bar", "BazFoo": "Duck"}).Should(HaveKeyWithValue(MatchRegexp(`.+Foo$`), "Bar"))
func HaveKeyWithValue(key interface{}, value interface{}) types.GomegaMatcher {
return &matchers.HaveKeyWithValueMatcher{
Key: key,
@ -376,27 +401,27 @@ func HaveKeyWithValue(key interface{}, value interface{}) types.GomegaMatcher {
}
}
//HaveField succeeds if actual is a struct and the value at the passed in field
//matches the passed in matcher. By default HaveField used Equal() to perform the match,
//however a matcher can be passed in in stead.
// HaveField succeeds if actual is a struct and the value at the passed in field
// matches the passed in matcher. By default HaveField uses Equal() to perform the match,
// however a matcher can be passed in instead.
//
//The field must be a string that resolves to the name of a field in the struct. Structs can be traversed
//using the '.' delimiter. If the field ends with '()' a method named field is assumed to exist on the struct and is invoked.
//Such methods must take no arguments and return a single value:
// The field must be a string that resolves to the name of a field in the struct. Structs can be traversed
// using the '.' delimiter. If the field ends with '()' a method named field is assumed to exist on the struct and is invoked.
// Such methods must take no arguments and return a single value:
//
// type Book struct {
// Title string
// Author Person
// }
// type Person struct {
// FirstName string
// LastName string
// DOB time.Time
// }
// Expect(book).To(HaveField("Title", "Les Miserables"))
// Expect(book).To(HaveField("Title", ContainSubstring("Les"))
// Expect(book).To(HaveField("Author.FirstName", Equal("Victor"))
// Expect(book).To(HaveField("Author.DOB.Year()", BeNumerically("<", 1900))
// type Book struct {
// Title string
// Author Person
// }
// type Person struct {
// FirstName string
// LastName string
// DOB time.Time
// }
// Expect(book).To(HaveField("Title", "Les Miserables"))
// Expect(book).To(HaveField("Title", ContainSubstring("Les"))
// Expect(book).To(HaveField("Author.FirstName", Equal("Victor"))
// Expect(book).To(HaveField("Author.DOB.Year()", BeNumerically("<", 1900))
func HaveField(field string, expected interface{}) types.GomegaMatcher {
return &matchers.HaveFieldMatcher{
Field: field,
@ -410,7 +435,7 @@ func HaveField(field string, expected interface{}) types.GomegaMatcher {
// HaveExistingField can be combined with HaveField in order to cover use cases
// with optional fields. HaveField alone would trigger an error in such situations.
//
// Expect(MrHarmless).NotTo(And(HaveExistingField("Title"), HaveField("Title", "Supervillain")))
// Expect(MrHarmless).NotTo(And(HaveExistingField("Title"), HaveField("Title", "Supervillain")))
func HaveExistingField(field string) types.GomegaMatcher {
return &matchers.HaveExistingFieldMatcher{
Field: field,
@ -428,26 +453,27 @@ func HaveExistingField(field string) types.GomegaMatcher {
// be a pointer (as gstruct.PointTo does) but instead also accepts non-pointer
// and even interface values.
//
// actual := 42
// Expect(actual).To(HaveValue(42))
// Expect(&actual).To(HaveValue(42))
// actual := 42
// Expect(actual).To(HaveValue(42))
// Expect(&actual).To(HaveValue(42))
func HaveValue(matcher types.GomegaMatcher) types.GomegaMatcher {
return &matchers.HaveValueMatcher{
Matcher: matcher,
}
}
//BeNumerically performs numerical assertions in a type-agnostic way.
//Actual and expected should be numbers, though the specific type of
//number is irrelevant (float32, float64, uint8, etc...).
// BeNumerically performs numerical assertions in a type-agnostic way.
// Actual and expected should be numbers, though the specific type of
// number is irrelevant (float32, float64, uint8, etc...).
//
//There are six, self-explanatory, supported comparators:
// Expect(1.0).Should(BeNumerically("==", 1))
// Expect(1.0).Should(BeNumerically("~", 0.999, 0.01))
// Expect(1.0).Should(BeNumerically(">", 0.9))
// Expect(1.0).Should(BeNumerically(">=", 1.0))
// Expect(1.0).Should(BeNumerically("<", 3))
// Expect(1.0).Should(BeNumerically("<=", 1.0))
// There are six, self-explanatory, supported comparators:
//
// Expect(1.0).Should(BeNumerically("==", 1))
// Expect(1.0).Should(BeNumerically("~", 0.999, 0.01))
// Expect(1.0).Should(BeNumerically(">", 0.9))
// Expect(1.0).Should(BeNumerically(">=", 1.0))
// Expect(1.0).Should(BeNumerically("<", 3))
// Expect(1.0).Should(BeNumerically("<=", 1.0))
func BeNumerically(comparator string, compareTo ...interface{}) types.GomegaMatcher {
return &matchers.BeNumericallyMatcher{
Comparator: comparator,
@ -455,10 +481,11 @@ func BeNumerically(comparator string, compareTo ...interface{}) types.GomegaMatc
}
}
//BeTemporally compares time.Time's like BeNumerically
//Actual and expected must be time.Time. The comparators are the same as for BeNumerically
// Expect(time.Now()).Should(BeTemporally(">", time.Time{}))
// Expect(time.Now()).Should(BeTemporally("~", time.Now(), time.Second))
// BeTemporally compares time.Time's like BeNumerically
// Actual and expected must be time.Time. The comparators are the same as for BeNumerically
//
// Expect(time.Now()).Should(BeTemporally(">", time.Time{}))
// Expect(time.Now()).Should(BeTemporally("~", time.Now(), time.Second))
func BeTemporally(comparator string, compareTo time.Time, threshold ...time.Duration) types.GomegaMatcher {
return &matchers.BeTemporallyMatcher{
Comparator: comparator,
@ -467,58 +494,61 @@ func BeTemporally(comparator string, compareTo time.Time, threshold ...time.Dura
}
}
//BeAssignableToTypeOf succeeds if actual is assignable to the type of expected.
//It will return an error when one of the values is nil.
// Expect(0).Should(BeAssignableToTypeOf(0)) // Same values
// Expect(5).Should(BeAssignableToTypeOf(-1)) // different values same type
// Expect("foo").Should(BeAssignableToTypeOf("bar")) // different values same type
// Expect(struct{ Foo string }{}).Should(BeAssignableToTypeOf(struct{ Foo string }{}))
// BeAssignableToTypeOf succeeds if actual is assignable to the type of expected.
// It will return an error when one of the values is nil.
//
// Expect(0).Should(BeAssignableToTypeOf(0)) // Same values
// Expect(5).Should(BeAssignableToTypeOf(-1)) // different values same type
// Expect("foo").Should(BeAssignableToTypeOf("bar")) // different values same type
// Expect(struct{ Foo string }{}).Should(BeAssignableToTypeOf(struct{ Foo string }{}))
func BeAssignableToTypeOf(expected interface{}) types.GomegaMatcher {
return &matchers.AssignableToTypeOfMatcher{
Expected: expected,
}
}
//Panic succeeds if actual is a function that, when invoked, panics.
//Actual must be a function that takes no arguments and returns no results.
// Panic succeeds if actual is a function that, when invoked, panics.
// Actual must be a function that takes no arguments and returns no results.
func Panic() types.GomegaMatcher {
return &matchers.PanicMatcher{}
}
//PanicWith succeeds if actual is a function that, when invoked, panics with a specific value.
//Actual must be a function that takes no arguments and returns no results.
// PanicWith succeeds if actual is a function that, when invoked, panics with a specific value.
// Actual must be a function that takes no arguments and returns no results.
//
//By default PanicWith uses Equal() to perform the match, however a
//matcher can be passed in instead:
// Expect(fn).Should(PanicWith(MatchRegexp(`.+Foo$`)))
// By default PanicWith uses Equal() to perform the match, however a
// matcher can be passed in instead:
//
// Expect(fn).Should(PanicWith(MatchRegexp(`.+Foo$`)))
func PanicWith(expected interface{}) types.GomegaMatcher {
return &matchers.PanicMatcher{Expected: expected}
}
//BeAnExistingFile succeeds if a file exists.
//Actual must be a string representing the abs path to the file being checked.
// BeAnExistingFile succeeds if a file exists.
// Actual must be a string representing the abs path to the file being checked.
func BeAnExistingFile() types.GomegaMatcher {
return &matchers.BeAnExistingFileMatcher{}
}
//BeARegularFile succeeds if a file exists and is a regular file.
//Actual must be a string representing the abs path to the file being checked.
// BeARegularFile succeeds if a file exists and is a regular file.
// Actual must be a string representing the abs path to the file being checked.
func BeARegularFile() types.GomegaMatcher {
return &matchers.BeARegularFileMatcher{}
}
//BeADirectory succeeds if a file exists and is a directory.
//Actual must be a string representing the abs path to the file being checked.
// BeADirectory succeeds if a file exists and is a directory.
// Actual must be a string representing the abs path to the file being checked.
func BeADirectory() types.GomegaMatcher {
return &matchers.BeADirectoryMatcher{}
}
//HaveHTTPStatus succeeds if the Status or StatusCode field of an HTTP response matches.
//Actual must be either a *http.Response or *httptest.ResponseRecorder.
//Expected must be either an int or a string.
// Expect(resp).Should(HaveHTTPStatus(http.StatusOK)) // asserts that resp.StatusCode == 200
// Expect(resp).Should(HaveHTTPStatus("404 Not Found")) // asserts that resp.Status == "404 Not Found"
// Expect(resp).Should(HaveHTTPStatus(http.StatusOK, http.StatusNoContent)) // asserts that resp.StatusCode == 200 || resp.StatusCode == 204
// HaveHTTPStatus succeeds if the Status or StatusCode field of an HTTP response matches.
// Actual must be either a *http.Response or *httptest.ResponseRecorder.
// Expected must be either an int or a string.
//
// Expect(resp).Should(HaveHTTPStatus(http.StatusOK)) // asserts that resp.StatusCode == 200
// Expect(resp).Should(HaveHTTPStatus("404 Not Found")) // asserts that resp.Status == "404 Not Found"
// Expect(resp).Should(HaveHTTPStatus(http.StatusOK, http.StatusNoContent)) // asserts that resp.StatusCode == 200 || resp.StatusCode == 204
func HaveHTTPStatus(expected ...interface{}) types.GomegaMatcher {
return &matchers.HaveHTTPStatusMatcher{Expected: expected}
}
@ -541,63 +571,70 @@ func HaveHTTPBody(expected interface{}) types.GomegaMatcher {
return &matchers.HaveHTTPBodyMatcher{Expected: expected}
}
//And succeeds only if all of the given matchers succeed.
//The matchers are tried in order, and will fail-fast if one doesn't succeed.
// Expect("hi").To(And(HaveLen(2), Equal("hi"))
// And succeeds only if all of the given matchers succeed.
// The matchers are tried in order, and will fail-fast if one doesn't succeed.
//
//And(), Or(), Not() and WithTransform() allow matchers to be composed into complex expressions.
// Expect("hi").To(And(HaveLen(2), Equal("hi"))
//
// And(), Or(), Not() and WithTransform() allow matchers to be composed into complex expressions.
func And(ms ...types.GomegaMatcher) types.GomegaMatcher {
return &matchers.AndMatcher{Matchers: ms}
}
//SatisfyAll is an alias for And().
// Expect("hi").Should(SatisfyAll(HaveLen(2), Equal("hi")))
// SatisfyAll is an alias for And().
//
// Expect("hi").Should(SatisfyAll(HaveLen(2), Equal("hi")))
func SatisfyAll(matchers ...types.GomegaMatcher) types.GomegaMatcher {
return And(matchers...)
}
//Or succeeds if any of the given matchers succeed.
//The matchers are tried in order and will return immediately upon the first successful match.
// Expect("hi").To(Or(HaveLen(3), HaveLen(2))
// Or succeeds if any of the given matchers succeed.
// The matchers are tried in order and will return immediately upon the first successful match.
//
//And(), Or(), Not() and WithTransform() allow matchers to be composed into complex expressions.
// Expect("hi").To(Or(HaveLen(3), HaveLen(2))
//
// And(), Or(), Not() and WithTransform() allow matchers to be composed into complex expressions.
func Or(ms ...types.GomegaMatcher) types.GomegaMatcher {
return &matchers.OrMatcher{Matchers: ms}
}
//SatisfyAny is an alias for Or().
// Expect("hi").SatisfyAny(Or(HaveLen(3), HaveLen(2))
// SatisfyAny is an alias for Or().
//
// Expect("hi").SatisfyAny(Or(HaveLen(3), HaveLen(2))
func SatisfyAny(matchers ...types.GomegaMatcher) types.GomegaMatcher {
return Or(matchers...)
}
//Not negates the given matcher; it succeeds if the given matcher fails.
// Expect(1).To(Not(Equal(2))
// Not negates the given matcher; it succeeds if the given matcher fails.
//
//And(), Or(), Not() and WithTransform() allow matchers to be composed into complex expressions.
//	Expect(1).To(Not(Equal(2)))
//
// And(), Or(), Not() and WithTransform() allow matchers to be composed into complex expressions.
func Not(matcher types.GomegaMatcher) types.GomegaMatcher {
return &matchers.NotMatcher{Matcher: matcher}
}
//WithTransform applies the `transform` to the actual value and matches it against `matcher`.
//The given transform must be either a function of one parameter that returns one value or a
// WithTransform applies the `transform` to the actual value and matches it against `matcher`.
// The given transform must be either a function of one parameter that returns one value or a
// function of one parameter that returns two values, where the second value must be of the
// error type.
// var plus1 = func(i int) int { return i + 1 }
// Expect(1).To(WithTransform(plus1, Equal(2))
//
// var failingplus1 = func(i int) (int, error) { return 42, "this does not compute" }
// Expect(1).To(WithTransform(failingplus1, Equal(2)))
// var plus1 = func(i int) int { return i + 1 }
//	Expect(1).To(WithTransform(plus1, Equal(2)))
//
//And(), Or(), Not() and WithTransform() allow matchers to be composed into complex expressions.
//	var failingplus1 = func(i int) (int, error) { return 42, errors.New("this does not compute") }
// Expect(1).To(WithTransform(failingplus1, Equal(2)))
//
// And(), Or(), Not() and WithTransform() allow matchers to be composed into complex expressions.
func WithTransform(transform interface{}, matcher types.GomegaMatcher) types.GomegaMatcher {
return matchers.NewWithTransformMatcher(transform, matcher)
}
//Satisfy matches the actual value against the `predicate` function.
//The given predicate must be a function of one paramter that returns bool.
// var isEven = func(i int) bool { return i%2 == 0 }
// Expect(2).To(Satisfy(isEven))
// Satisfy matches the actual value against the `predicate` function.
// The given predicate must be a function of one parameter that returns bool.
//
// var isEven = func(i int) bool { return i%2 == 0 }
// Expect(2).To(Satisfy(isEven))
func Satisfy(predicate interface{}) types.GomegaMatcher {
return matchers.NewSatisfyMatcher(predicate)
}


@ -0,0 +1,45 @@
package matchers
import (
"fmt"
"reflect"
"github.com/onsi/gomega/format"
)
type BeKeyOfMatcher struct {
Map interface{}
}
func (matcher *BeKeyOfMatcher) Match(actual interface{}) (success bool, err error) {
if !isMap(matcher.Map) {
return false, fmt.Errorf("BeKeyOf matcher needs expected to be a map type")
}
if reflect.TypeOf(actual) == nil {
return false, fmt.Errorf("BeKeyOf matcher expects actual to be typed")
}
var lastError error
for _, key := range reflect.ValueOf(matcher.Map).MapKeys() {
matcher := &EqualMatcher{Expected: key.Interface()}
success, err := matcher.Match(actual)
if err != nil {
lastError = err
continue
}
if success {
return true, nil
}
}
return false, lastError
}
func (matcher *BeKeyOfMatcher) FailureMessage(actual interface{}) (message string) {
return format.Message(actual, "to be a key of", presentable(valuesOf(matcher.Map)))
}
func (matcher *BeKeyOfMatcher) NegatedFailureMessage(actual interface{}) (message string) {
return format.Message(actual, "not to be a key of", presentable(valuesOf(matcher.Map)))
}
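
A short usage sketch for the new matcher (gomega dot-imported); note that BeKeyOf matches against the map's keys, whereas ContainElement searches its values:

```go
scores := map[string]int{"alice": 1, "bob": 2}

Expect("alice").To(BeKeyOf(scores))
Expect("carol").NotTo(BeKeyOf(scores))

// Errors rather than matching: expected must be a map and actual must be typed.
//   Expect("alice").To(BeKeyOf([]string{"alice"}))
```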


@ -1,12 +1,13 @@
package types
import (
"context"
"time"
)
type GomegaFailHandler func(message string, callerSkip ...int)
//A simple *testing.T interface wrapper
// A simple *testing.T interface wrapper
type GomegaTestingT interface {
Helper()
Fatalf(format string, args ...interface{})
@ -30,9 +31,9 @@ type Gomega interface {
SetDefaultConsistentlyPollingInterval(time.Duration)
}
//All Gomega matchers must implement the GomegaMatcher interface
// All Gomega matchers must implement the GomegaMatcher interface
//
//For details on writing custom matchers, check out: http://onsi.github.io/gomega/#adding-your-own-matchers
// For details on writing custom matchers, check out: http://onsi.github.io/gomega/#adding-your-own-matchers
type GomegaMatcher interface {
Match(actual interface{}) (success bool, err error)
FailureMessage(actual interface{}) (message string)
@ -70,6 +71,10 @@ type AsyncAssertion interface {
WithOffset(offset int) AsyncAssertion
WithTimeout(interval time.Duration) AsyncAssertion
WithPolling(interval time.Duration) AsyncAssertion
Within(timeout time.Duration) AsyncAssertion
ProbeEvery(interval time.Duration) AsyncAssertion
WithContext(ctx context.Context) AsyncAssertion
WithArguments(argsToForward ...interface{}) AsyncAssertion
}
// Assertions are returned by Ω and Expect and enable assertions against Gomega matchers

vendor/modules.txt

@ -442,7 +442,7 @@ github.com/munnerz/goautoneg
# github.com/oklog/run v1.0.0
## explicit
github.com/oklog/run
# github.com/onsi/ginkgo/v2 v2.1.6
# github.com/onsi/ginkgo/v2 v2.3.1
## explicit; go 1.18
github.com/onsi/ginkgo/v2
github.com/onsi/ginkgo/v2/config
@ -454,7 +454,7 @@ github.com/onsi/ginkgo/v2/internal/parallel_support
github.com/onsi/ginkgo/v2/internal/testingtproxy
github.com/onsi/ginkgo/v2/reporters
github.com/onsi/ginkgo/v2/types
# github.com/onsi/gomega v1.20.1
# github.com/onsi/gomega v1.22.0
## explicit; go 1.18
github.com/onsi/gomega
github.com/onsi/gomega/format