rebase: bump github.com/onsi/ginkgo/v2 from 2.4.0 to 2.8.0

Bumps [github.com/onsi/ginkgo/v2](https://github.com/onsi/ginkgo) from 2.4.0 to 2.8.0.
- [Release notes](https://github.com/onsi/ginkgo/releases)
- [Changelog](https://github.com/onsi/ginkgo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/ginkgo/compare/v2.4.0...v2.8.0)

---
updated-dependencies:
- dependency-name: github.com/onsi/ginkgo/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] 2023-01-30 20:05:01 +00:00 committed by mergify[bot]
parent 5a460e1d03
commit 4c4c170439
44 changed files with 1786 additions and 740 deletions

go.mod

@ -22,8 +22,8 @@ require (
github.com/kubernetes-csi/csi-lib-utils v0.11.0
github.com/kubernetes-csi/external-snapshotter/client/v6 v6.2.0
github.com/libopenstorage/secrets v0.0.0-20210908194121-a1d19aa9713a
github.com/onsi/ginkgo/v2 v2.4.0
github.com/onsi/gomega v1.23.0
github.com/onsi/ginkgo/v2 v2.8.0
github.com/onsi/gomega v1.25.0
github.com/pkg/xattr v0.4.9
github.com/prometheus/client_golang v1.14.0
github.com/stretchr/testify v1.8.1

go.sum

@ -847,8 +847,8 @@ github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo/v2 v2.1.3/go.mod h1:vw5CSIxN1JObi/U8gcbwft7ZxR2dgaR70JSE3/PpL4c=
github.com/onsi/ginkgo/v2 v2.1.4/go.mod h1:um6tUpWM/cxCK3/FK8BXqEiUMUwRgSM4JXG47RKZmLU=
github.com/onsi/ginkgo/v2 v2.1.6/go.mod h1:MEH45j8TBi6u9BMogfbp0stKC5cdGjumZj5Y7AG4VIk=
github.com/onsi/ginkgo/v2 v2.4.0 h1:+Ig9nvqgS5OBSACXNk15PLdp0U9XPYROt9CFzVdFGIs=
github.com/onsi/ginkgo/v2 v2.4.0/go.mod h1:iHkDK1fKGcBoEHT5W7YBq4RFWaQulw+caOMkAt4OrFo=
github.com/onsi/ginkgo/v2 v2.8.0 h1:pAM+oBNPrpXRs+E/8spkeGx9QgekbRVyr74EUvRVOUI=
github.com/onsi/ginkgo/v2 v2.8.0/go.mod h1:6JsQiECmxCa3V5st74AL/AmsV482EDdVrGaVW6z3oYU=
github.com/onsi/gomega v1.4.2/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
@ -857,8 +857,8 @@ github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1y
github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/onsi/gomega v1.19.0/go.mod h1:LY+I3pBVzYsTBU1AnDwOSxaYi9WoWiqgwooUqq9yPro=
github.com/onsi/gomega v1.20.1/go.mod h1:DtrZpjmvpn2mPm4YWQa0/ALMDj9v4YxLgojwPeREyVo=
github.com/onsi/gomega v1.23.0 h1:/oxKu9c2HVap+F3PfKort2Hw5DEU+HGlW8n+tguWsys=
github.com/onsi/gomega v1.23.0/go.mod h1:Z/NWtiqwBrwUt4/2loMmHL63EDLnYHmVbuBpDr2vQAg=
github.com/onsi/gomega v1.25.0 h1:Vw7br2PCDYijJHSfBOWhov+8cAnUf8MfMaIOV323l6Y=
github.com/onsi/gomega v1.25.0/go.mod h1:r+zV744Re+DiYCIPRlYOTxn0YkOLcAnW8k1xXdMPGhM=
github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=


@ -1,3 +1,144 @@
## 2.8.0
### Features
- Introduce GinkgoHelper() to track and exclude helper functions from potential CodeLocations [e19f556]
Modeled after `testing.T.Helper()`. Now, rather than write code like:
```go
func helper(model Model) {
Expect(model).WithOffset(1).To(BeValid())
Expect(model.SerialNumber).WithOffset(1).To(MatchRegexp(`[a-f0-9]*`))
}
```
you can stop tracking offsets (which makes nesting and composing helpers nearly impossible) and simply write:
```go
func helper(model Model) {
GinkgoHelper()
Expect(model).To(BeValid())
Expect(model.SerialNumber).To(MatchRegexp(`[a-f0-9]*`))
}
```
- Introduce GinkgoLabelFilter() and Label().MatchesLabelFilter() to make it possible to programmatically match filters (fixes #1119) [2f6597c]
You can now write code like this:
```go
BeforeSuite(func() {
if Label("slow").MatchesLabelFilter(GinkgoLabelFilter()) {
// do slow setup
}
if Label("fast").MatchesLabelFilter(GinkgoLabelFilter()) {
// do fast setup
}
})
```
to programmatically check whether a given set of labels will match the configured `--label-filter`.
### Maintenance
- Bump webrick from 1.7.0 to 1.8.1 in /docs (#1125) [ea4966e]
- codeql: add ruby language (#1124) [9dd275b]
- dependabot: add bundler package-ecosystem for docs (#1123) [14e7bdd]
## 2.7.1
### Fixes
- Bring back SuiteConfig.EmitSpecProgress to avoid a compilation issue for consumers that set it manually [d2a1cb0]
### Maintenance
- Bump github.com/onsi/gomega from 1.24.2 to 1.25.0 (#1118) [cafece6]
- Bump golang.org/x/tools from 0.4.0 to 0.5.0 (#1111) [eda66c2]
- Bump golang.org/x/sys from 0.3.0 to 0.4.0 (#1112) [ac5ccaa]
- Bump github.com/onsi/gomega from 1.24.1 to 1.24.2 (#1097) [eee6480]
## 2.7.0
### Features
- Introduce ContinueOnFailure for Ordered containers [e0123ca] - Ordered containers that are also decorated with ContinueOnFailure will not stop running specs after the first spec fails.
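A minimal sketch of the decorator combination (the container and spec bodies here are illustrative, not taken from this release's docs):
```go
var _ = Describe("migration steps", Ordered, ContinueOnFailure, func() {
	It("runs step one", func() { /* ... */ })
	// With ContinueOnFailure this spec still runs even if step one fails;
	// without it, the remaining specs in the Ordered container would be skipped.
	It("runs step two", func() { /* ... */ })
})
```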
- Support for bootstrap commands to use custom data for templates (#1110) [7a2b242]
- Support for labels and pending decorator in ginkgo outline output (#1113) [e6e3b98]
- Color aliases for custom color support (#1101) [49fab7a]
### Fixes
- Correctly ensure deterministic spec order, even if specs are generated by iterating over a map [89dda20]
- Fix a bug where timed-out specs were not correctly treated as failures when determining whether or not to run AfterAlls in an Ordered container.
- Ensure go test coverprofile outputs to the expected location (#1105) [b0bd77b]
## 2.6.1
### Features
- Override formatter colors from envvars - this is a new feature but an alternative approach involving config files might be taken in the future (#1095) [60240d1]
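A rough sketch of the override, based on the `GINKGO_CLI_COLOR_*` handling added to the formatter later in this commit (the specific values are illustrative):
```go
import "os"

// Values may be a base/bright color alias ("bright-magenta") or a 256-color
// code ("28"); anything else falls back to the default escape code.
func init() {
	os.Setenv("GINKGO_CLI_COLOR_RED", "bright-magenta")
	os.Setenv("GINKGO_CLI_COLOR_GREEN", "28")
}
```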
### Fixes
- GinkgoRecover now supports ignoring panics that match a specific, hidden interface [301f3e2]
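The user-facing pattern is unchanged - a single deferred GinkgoRecover in any goroutine that makes assertions (a sketch; `doWork` is a hypothetical helper):
```go
It("asserts from a goroutine", func() {
	done := make(chan struct{})
	go func() {
		defer GinkgoRecover() // recovers panics raised by failed assertions in this goroutine
		Expect(doWork()).To(Succeed())
		close(done)
	}()
	Eventually(done).Should(BeClosed())
})
```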
### Maintenance
- Bump github.com/onsi/gomega from 1.24.0 to 1.24.1 (#1077) [3643823]
- Bump golang.org/x/tools from 0.2.0 to 0.4.0 (#1090) [f9f856e]
- Bump nokogiri from 1.13.9 to 1.13.10 in /docs (#1091) [0d7087e]
## 2.6.0
### Features
- `ReportBeforeSuite` provides access to the suite report before the suite begins.
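A sketch of the new node (hedged - the fields shown are standard `types.Report` fields, not taken from this changelog entry):
```go
var _ = ReportBeforeSuite(func(report Report) {
	// Runs once before the suite starts; at this point the report carries
	// suite-level metadata rather than spec results.
	fmt.Printf("starting %q with seed %d\n", report.SuiteDescription, report.SuiteConfig.RandomSeed)
})
```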
- Add junit config option for omitting leafnodetype (#1088) [956e6d2]
- Add support to customize junit report config to omit spec labels (#1087) [de44005]
### Fixes
- Fix stack trace pruning so that it has a chance of working on windows [2165648]
## 2.5.1
### Fixes
- skipped tests only show as 'S' when running with -v [3ab38ae]
- Fix typo in docs/index.md (#1082) [55fc58d]
- Fix typo in docs/index.md (#1081) [8a14f1f]
- Fix link notation in docs/index.md (#1080) [2669612]
- Fix typo in `--progress` deprecation message (#1076) [b4b7edc]
### Maintenance
- chore: Included githubactions in the dependabot config (#976) [baea341]
- Bump golang.org/x/sys from 0.1.0 to 0.2.0 (#1075) [9646297]
## 2.5.0
### Ginkgo output now includes a timeline-view of the spec
This commit changes Ginkgo's default output. Spec details are now
presented as a **timeline** that includes events that occur during the spec
lifecycle interleaved with any GinkgoWriter content. This makes it much easier
to understand the flow of a spec and where a given failure occurs.
The --progress, --slow-spec-threshold, --always-emit-ginkgo-writer flags
and the SuppressProgressReporting decorator have all been deprecated. Instead,
the existing -v and -vv flags better capture the level of verbosity to display. However,
a new --show-node-events flag is added to include node `> Enter` and `< Exit` events
in the spec timeline.
In addition, JUnit reports now include the timeline (rendered with -vv) and custom JUnit
reports can be configured and generated using
`GenerateJUnitReportWithConfig(report types.Report, dst string, config JunitReportConfig)`.
Code should continue to work unchanged with this version of Ginkgo - however if you have tooling that
was relying on Ginkgo's specific console output you _may_ run into issues: that output is not guaranteed to be stable for tooling and automation purposes. You should, instead, build tooling on top of Ginkgo's JSON format,
which has stronger guarantees to be stable from version to version.
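For example, a custom report might be wired up like this (a sketch: `reporters` is the `github.com/onsi/ginkgo/v2/reporters` package, and the two `Omit*` fields are assumptions based on the 2.6.0 entries above):
```go
var _ = ReportAfterSuite("custom junit report", func(report Report) {
	err := reporters.GenerateJUnitReportWithConfig(report, "junit.custom.xml", reporters.JunitReportConfig{
		OmitLeafNodeType: true, // assumed field, see the 2.6.0 notes
		OmitSpecLabels:   true, // assumed field, see the 2.6.0 notes
	})
	Expect(err).NotTo(HaveOccurred())
})
```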
### Features
- Provide details about which timeout expired [0f2fa27]
### Fixes
- Add Support Policy to docs [c70867a]
### Maintenance
- Bump github.com/onsi/gomega from 1.22.1 to 1.23.0 (#1070) [bb3b4e2]
## 2.4.0
### Features


@ -4,11 +4,7 @@
---
# Ginkgo 2.0 is now Generally Available!
You can learn more about 2.0 in the [Migration Guide](https://onsi.github.io/ginkgo/MIGRATING_TO_V2)!
---
# Ginkgo
Ginkgo is a mature testing framework for Go designed to help you write expressive specs. Ginkgo builds on top of Go's `testing` foundation and is complemented by the [Gomega](https://github.com/onsi/gomega) matcher library. Together, Ginkgo and Gomega let you express the intent behind your specs clearly:
@ -33,53 +29,53 @@ Describe("Checking books out of the library", Label("library"), func() {
})
When("the library has the book in question", func() {
BeforeEach(func() {
Expect(library.Store(book)).To(Succeed())
BeforeEach(func(ctx SpecContext) {
Expect(library.Store(ctx, book)).To(Succeed())
})
Context("and the book is available", func() {
It("lends it to the reader", func() {
Expect(valjean.Checkout(library, "Les Miserables")).To(Succeed())
It("lends it to the reader", func(ctx SpecContext) {
Expect(valjean.Checkout(ctx, library, "Les Miserables")).To(Succeed())
Expect(valjean.Books()).To(ContainElement(book))
Expect(library.UserWithBook(book)).To(Equal(valjean))
})
Expect(library.UserWithBook(ctx, book)).To(Equal(valjean))
}, SpecTimeout(time.Second * 5))
})
Context("but the book has already been checked out", func() {
var javert *users.User
BeforeEach(func() {
BeforeEach(func(ctx SpecContext) {
javert = users.NewUser("Javert")
Expect(javert.Checkout(library, "Les Miserables")).To(Succeed())
Expect(javert.Checkout(ctx, library, "Les Miserables")).To(Succeed())
})
It("tells the user", func() {
err := valjean.Checkout(library, "Les Miserables")
It("tells the user", func(ctx SpecContext) {
err := valjean.Checkout(ctx, library, "Les Miserables")
Expect(err).To(MatchError("Les Miserables is currently checked out"))
})
}, SpecTimeout(time.Second * 5))
It("lets the user place a hold and get notified later", func() {
Expect(valjean.Hold(library, "Les Miserables")).To(Succeed())
Expect(valjean.Holds()).To(ContainElement(book))
It("lets the user place a hold and get notified later", func(ctx SpecContext) {
Expect(valjean.Hold(ctx, library, "Les Miserables")).To(Succeed())
Expect(valjean.Holds(ctx)).To(ContainElement(book))
By("when Javert returns the book")
Expect(javert.Return(library, book)).To(Succeed())
Expect(javert.Return(ctx, library, book)).To(Succeed())
By("it eventually informs Valjean")
notification := "Les Miserables is ready for pick up"
Eventually(valjean.Notifications).Should(ContainElement(notification))
Eventually(ctx, valjean.Notifications).Should(ContainElement(notification))
Expect(valjean.Checkout(library, "Les Miserables")).To(Succeed())
Expect(valjean.Books()).To(ContainElement(book))
Expect(valjean.Holds()).To(BeEmpty())
})
Expect(valjean.Checkout(ctx, library, "Les Miserables")).To(Succeed())
Expect(valjean.Books(ctx)).To(ContainElement(book))
Expect(valjean.Holds(ctx)).To(BeEmpty())
}, SpecTimeout(time.Second * 10))
})
})
When("the library does not have the book in question", func() {
It("tells the reader the book is unavailable", func() {
err := valjean.Checkout(library, "Les Miserables")
It("tells the reader the book is unavailable", func(ctx SpecContext) {
err := valjean.Checkout(ctx, library, "Les Miserables")
Expect(err).To(MatchError("Les Miserables is not in the library catalog"))
})
}, SpecTimeout(time.Second * 5))
})
})
```
@ -92,7 +88,7 @@ If you have a question, comment, bug report, feature request, etc. please open a
Whether writing basic unit specs, complex integration specs, or even performance specs - Ginkgo gives you an expressive Domain-Specific Language (DSL) that will be familiar to users coming from frameworks such as [Quick](https://github.com/Quick/Quick), [RSpec](https://rspec.info), [Jasmine](https://jasmine.github.io), and [Busted](https://lunarmodules.github.io/busted/). This style of testing is sometimes referred to as "Behavior-Driven Development" (BDD) though Ginkgo's utility extends beyond acceptance-level testing.
With Ginkgo's DSL you can use nestable [`Describe`, `Context` and `When` container nodes](https://onsi.github.io/ginkgo/#organizing-specs-with-container-nodes) to help you organize your specs. [`BeforeEach` and `AfterEach` setup nodes](https://onsi.github.io/ginkgo/#extracting-common-setup-beforeeach) for setup and cleanup. [`It` and `Specify` subject nodes](https://onsi.github.io/ginkgo/#spec-subjects-it) that hold your assertions. [`BeforeSuite` and `AfterSuite` nodes](https://onsi.github.io/ginkgo/#suite-setup-and-cleanup-beforesuite-and-aftersuite) to prep for and cleanup after a suite... and [much more!](https://onsi.github.io/ginkgo/#writing-specs)
With Ginkgo's DSL you can use nestable [`Describe`, `Context` and `When` container nodes](https://onsi.github.io/ginkgo/#organizing-specs-with-container-nodes) to help you organize your specs. [`BeforeEach` and `AfterEach` setup nodes](https://onsi.github.io/ginkgo/#extracting-common-setup-beforeeach) for setup and cleanup. [`It` and `Specify` subject nodes](https://onsi.github.io/ginkgo/#spec-subjects-it) that hold your assertions. [`BeforeSuite` and `AfterSuite` nodes](https://onsi.github.io/ginkgo/#suite-setup-and-cleanup-beforesuite-and-aftersuite) to prep for and cleanup after a suite... and [much more!](https://onsi.github.io/ginkgo/#writing-specs).
At runtime, Ginkgo can run your specs in reproducibly [random order](https://onsi.github.io/ginkgo/#spec-randomization) and has sophisticated support for [spec parallelization](https://onsi.github.io/ginkgo/#spec-parallelization). In fact, running specs in parallel is as easy as
@ -100,7 +96,7 @@ At runtime, Ginkgo can run your specs in reproducibly [random order](https://ons
ginkgo -p
```
By following [established patterns for writing parallel specs](https://onsi.github.io/ginkgo/#patterns-for-parallel-integration-specs) you can build even large, complex integration suites that parallelize cleanly and run performantly.
By following [established patterns for writing parallel specs](https://onsi.github.io/ginkgo/#patterns-for-parallel-integration-specs) you can build even large, complex integration suites that parallelize cleanly and run performantly. And you don't have to worry about your spec suite hanging or leaving a mess behind - Ginkgo provides a per-node `context.Context` and the capability to interrupt the spec after a set period of time - and then clean up.
As your suites grow Ginkgo helps you keep your specs organized with [labels](https://onsi.github.io/ginkgo/#spec-labels) and lets you easily run [subsets of specs](https://onsi.github.io/ginkgo/#filtering-specs), either [programmatically](https://onsi.github.io/ginkgo/#focused-specs) or on the [command line](https://onsi.github.io/ginkgo/#combining-filters). And Ginkgo's reporting infrastructure generates machine-readable output in a [variety of formats](https://onsi.github.io/ginkgo/#generating-machine-readable-reports) _and_ allows you to build your own [custom reporting infrastructure](https://onsi.github.io/ginkgo/#generating-reports-programmatically).
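In its simplest form, the per-node `context.Context` support mentioned above looks like this (a sketch; `db.Ping` is a stand-in for your own code):
```go
It("talks to the database", func(ctx SpecContext) {
	// Ginkgo cancels ctx when the timeout expires, then runs any cleanup nodes.
	Expect(db.Ping(ctx)).To(Succeed())
}, NodeTimeout(10*time.Second))
```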


@ -21,7 +21,6 @@ import (
"os"
"path/filepath"
"strings"
"time"
"github.com/go-logr/logr"
"github.com/onsi/ginkgo/v2/formatter"
@ -164,6 +163,29 @@ func GinkgoParallelProcess() int {
return suiteConfig.ParallelProcess
}
/*
GinkgoHelper marks the function it's called in as a test helper. When a failure occurs inside a helper function, Ginkgo will skip the helper when analyzing the stack trace to identify where the failure occurred.
This is a simpler alternative to passing in a skip offset when calling Fail or using Gomega.
*/
func GinkgoHelper() {
types.MarkAsHelper(1)
}
/*
GinkgoLabelFilter() returns the label filter configured for this suite via `--label-filter`.
You can use this to manually check if a set of labels would satisfy the filter via:
if (Label("cat", "dog").MatchesLabelFilter(GinkgoLabelFilter())) {
//...
}
*/
func GinkgoLabelFilter() string {
suiteConfig, _ := GinkgoConfiguration()
return suiteConfig.LabelFilter
}
/*
PauseOutputInterception() pauses Ginkgo's output interception. This is only relevant
when running in parallel and output to stdout/stderr is being intercepted. You generally
@ -276,7 +298,7 @@ func RunSpecs(t GinkgoTestingT, description string, args ...interface{}) bool {
}
writer := GinkgoWriter.(*internal.Writer)
if reporterConfig.Verbose && suiteConfig.ParallelTotal == 1 {
if reporterConfig.Verbosity().GTE(types.VerbosityLevelVerbose) && suiteConfig.ParallelTotal == 1 {
writer.SetMode(internal.WriterModeStreamAndBuffer)
} else {
writer.SetMode(internal.WriterModeBufferOnly)
@ -370,6 +392,12 @@ func AbortSuite(message string, callerSkip ...int) {
panic(types.GinkgoErrors.UncaughtGinkgoPanic(cl))
}
/*
ignorablePanic is used by Gomega to signal to GinkgoRecover that Gomega is handling
the error associated with this panic. It is used when Eventually/Consistently are passed a func(g Gomega) and the resulting function launches a goroutine that makes a failed assertion. That failed assertion is registered by Gomega and then panics. Ordinarily the panic is captured by Gomega. In the case of a goroutine Gomega can't capture the panic - so we piggyback on GinkgoRecover so users have a single defer GinkgoRecover() pattern to follow. To do that we need to tell Ginkgo to ignore this panic and not register it as a panic on the global Failer.
*/
type ignorablePanic interface{ GinkgoRecoverShouldIgnoreThisPanic() }
/*
GinkgoRecover should be deferred at the top of any spawned goroutine that (may) call `Fail`.
Since Gomega assertions call fail, you should throw a `defer GinkgoRecover()` at the top of any goroutine that
@ -385,6 +413,9 @@ You can learn more about how Ginkgo manages failures here: https://onsi.github.i
func GinkgoRecover() {
e := recover()
if e != nil {
if _, ok := e.(ignorablePanic); ok {
return
}
global.Failer.Panic(types.NewCodeLocationWithStackTrace(1), e)
}
}
@ -513,31 +544,7 @@ Note that By does not generate a new Ginkgo node - rather it is simply syntactic
You can learn more about By here: https://onsi.github.io/ginkgo/#documenting-complex-specs-by
*/
func By(text string, callback ...func()) {
if !global.Suite.InRunPhase() {
exitIfErr(types.GinkgoErrors.ByNotDuringRunPhase(types.NewCodeLocation(1)))
}
value := struct {
Text string
Duration time.Duration
}{
Text: text,
}
t := time.Now()
global.Suite.SetProgressStepCursor(internal.ProgressStepCursor{
Text: text,
CodeLocation: types.NewCodeLocation(1),
StartTime: t,
})
AddReportEntry("By Step", ReportEntryVisibilityNever, Offset(1), &value, t)
formatter := formatter.NewWithNoColorBool(reporterConfig.NoColor)
GinkgoWriter.Println(formatter.F("{{bold}}STEP:{{/}} %s {{gray}}%s{{/}}", text, t.Format(types.GINKGO_TIME_FORMAT)))
if len(callback) == 1 {
callback[0]()
value.Duration = time.Since(t)
}
if len(callback) > 1 {
panic("just one callback per By, please")
}
exitIfErr(global.Suite.By(text, callback...))
}
/*


@ -46,7 +46,7 @@ const Pending = internal.Pending
/*
Serial is a decorator that allows you to mark a spec or container as serial. These specs will never run in parallel with other specs.
Tests in ordered containers cannot be marked as serial - mark the ordered container instead.
Specs in ordered containers cannot be marked as serial - mark the ordered container instead.
You can learn more here: https://onsi.github.io/ginkgo/#serial-specs
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
@ -54,7 +54,7 @@ You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorat
const Serial = internal.Serial
/*
Ordered is a decorator that allows you to mark a container as ordered. Tests in the container will always run in the order they appear.
Ordered is a decorator that allows you to mark a container as ordered. Specs in the container will always run in the order they appear.
They will never be randomized and they will never run in parallel with one another, though they may run in parallel with other specs.
You can learn more here: https://onsi.github.io/ginkgo/#ordered-containers
@ -62,6 +62,16 @@ You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorat
*/
const Ordered = internal.Ordered
/*
ContinueOnFailure is a decorator that allows you to mark an Ordered container to continue running specs even if failures occur. Ordinarily an ordered container will stop running specs after the first failure occurs. Note that if a BeforeAll or a BeforeEach/JustBeforeEach annotated with OncePerOrdered fails then no specs will run, as the precondition for the Ordered container will be considered failed.
ContinueOnFailure only applies to the outermost Ordered container. Attempting to place ContinueOnFailure in a nested container will result in an error.
You can learn more here: https://onsi.github.io/ginkgo/#ordered-containers
You can learn more about decorators here: https://onsi.github.io/ginkgo/#decorator-reference
*/
const ContinueOnFailure = internal.ContinueOnFailure
/*
OncePerOrdered is a decorator that allows you to mark outer BeforeEach, AfterEach, JustBeforeEach, and JustAfterEach setup nodes to run once
per ordered context. Normally these setup nodes run around each individual spec, with OncePerOrdered they will run once around the set of specs in an ordered container.
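A sketch of the distinction (illustrative spec bodies, not from this file):
```go
var _ = Describe("checkout flow", func() {
	// With OncePerOrdered this runs once around the whole Ordered group below,
	// instead of before each individual spec in it.
	BeforeEach(func() { /* shared setup */ }, OncePerOrdered)

	Context("steps", Ordered, func() {
		It("adds an item", func() { /* ... */ })
		It("pays", func() { /* ... */ })
	})
})
```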


@ -4,6 +4,7 @@ import (
"fmt"
"os"
"regexp"
"strconv"
"strings"
)
@ -50,6 +51,37 @@ func NewWithNoColorBool(noColor bool) Formatter {
}
func New(colorMode ColorMode) Formatter {
colorAliases := map[string]int{
"black": 0,
"red": 1,
"green": 2,
"yellow": 3,
"blue": 4,
"magenta": 5,
"cyan": 6,
"white": 7,
}
for colorAlias, n := range colorAliases {
colorAliases[fmt.Sprintf("bright-%s", colorAlias)] = n + 8
}
getColor := func(color, defaultEscapeCode string) string {
color = strings.ToUpper(strings.ReplaceAll(color, "-", "_"))
envVar := fmt.Sprintf("GINKGO_CLI_COLOR_%s", color)
envVarColor := os.Getenv(envVar)
if envVarColor == "" {
return defaultEscapeCode
}
if colorCode, ok := colorAliases[envVarColor]; ok {
return fmt.Sprintf("\x1b[38;5;%dm", colorCode)
}
colorCode, err := strconv.Atoi(envVarColor)
if err != nil || colorCode < 0 || colorCode > 255 {
return defaultEscapeCode
}
return fmt.Sprintf("\x1b[38;5;%dm", colorCode)
}
f := Formatter{
ColorMode: colorMode,
colors: map[string]string{
@ -57,18 +89,18 @@ func New(colorMode ColorMode) Formatter {
"bold": "\x1b[1m",
"underline": "\x1b[4m",
"red": "\x1b[38;5;9m",
"orange": "\x1b[38;5;214m",
"coral": "\x1b[38;5;204m",
"magenta": "\x1b[38;5;13m",
"green": "\x1b[38;5;10m",
"dark-green": "\x1b[38;5;28m",
"yellow": "\x1b[38;5;11m",
"light-yellow": "\x1b[38;5;228m",
"cyan": "\x1b[38;5;14m",
"gray": "\x1b[38;5;243m",
"light-gray": "\x1b[38;5;246m",
"blue": "\x1b[38;5;12m",
"red": getColor("red", "\x1b[38;5;9m"),
"orange": getColor("orange", "\x1b[38;5;214m"),
"coral": getColor("coral", "\x1b[38;5;204m"),
"magenta": getColor("magenta", "\x1b[38;5;13m"),
"green": getColor("green", "\x1b[38;5;10m"),
"dark-green": getColor("dark-green", "\x1b[38;5;28m"),
"yellow": getColor("yellow", "\x1b[38;5;11m"),
"light-yellow": getColor("light-yellow", "\x1b[38;5;228m"),
"cyan": getColor("cyan", "\x1b[38;5;14m"),
"gray": getColor("gray", "\x1b[38;5;243m"),
"light-gray": getColor("light-gray", "\x1b[38;5;246m"),
"blue": getColor("blue", "\x1b[38;5;12m"),
},
}
colors := []string{}


@ -7,7 +7,7 @@ GinkgoT() implements an interface analogous to *testing.T and can be used with
third-party libraries that accept *testing.T through an interface.
GinkgoT() takes an optional offset argument that can be used to get the
correct line number associated with the failure.
correct line number associated with the failure - though you do not need to use this if you call GinkgoHelper() or GinkgoT().Helper() appropriately.
You can learn more here: https://onsi.github.io/ginkgo/#using-third-party-libraries
*/
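A typical use (a sketch - gomock is shown as an assumed third-party consumer; it is not referenced in this diff):
```go
// gomock's NewController accepts anything implementing its TestReporter
// interface, which the value returned by GinkgoT() satisfies.
ctrl := gomock.NewController(GinkgoT())
defer ctrl.Finish()
```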


@ -94,15 +94,19 @@ type group struct {
runOncePairs map[uint]runOncePairs
runOnceTracker map[runOncePair]types.SpecState
succeeded bool
succeeded bool
failedInARunOnceBefore bool
continueOnFailure bool
}
func newGroup(suite *Suite) *group {
return &group{
suite: suite,
runOncePairs: map[uint]runOncePairs{},
runOnceTracker: map[runOncePair]types.SpecState{},
succeeded: true,
suite: suite,
runOncePairs: map[uint]runOncePairs{},
runOnceTracker: map[runOncePair]types.SpecState{},
succeeded: true,
failedInARunOnceBefore: false,
continueOnFailure: false,
}
}
@ -116,6 +120,7 @@ func (g *group) initialReportForSpec(spec Spec) types.SpecReport {
LeafNodeText: spec.FirstNodeWithType(types.NodeTypeIt).Text,
LeafNodeLabels: []string(spec.FirstNodeWithType(types.NodeTypeIt).Labels),
ParallelProcess: g.suite.config.ParallelProcess,
RunningInParallel: g.suite.isRunningInParallel(),
IsSerial: spec.Nodes.HasNodeMarkedSerial(),
IsInOrderedContainer: !spec.Nodes.FirstNodeMarkedOrdered().IsZero(),
MaxFlakeAttempts: spec.Nodes.GetMaxFlakeAttempts(),
@ -136,10 +141,14 @@ func (g *group) evaluateSkipStatus(spec Spec) (types.SpecState, types.Failure) {
if !g.suite.deadline.IsZero() && g.suite.deadline.Before(time.Now()) {
return types.SpecStateSkipped, types.Failure{}
}
if !g.succeeded {
if !g.succeeded && !g.continueOnFailure {
return types.SpecStateSkipped, g.suite.failureForLeafNodeWithMessage(spec.FirstNodeWithType(types.NodeTypeIt),
"Spec skipped because an earlier spec in an ordered container failed")
}
if g.failedInARunOnceBefore && g.continueOnFailure {
return types.SpecStateSkipped, g.suite.failureForLeafNodeWithMessage(spec.FirstNodeWithType(types.NodeTypeIt),
"Spec skipped because a BeforeAll node failed")
}
beforeOncePairs := g.runOncePairs[spec.SubjectID()].withType(types.NodeTypeBeforeAll | types.NodeTypeBeforeEach | types.NodeTypeJustBeforeEach)
for _, pair := range beforeOncePairs {
if g.runOnceTracker[pair].Is(types.SpecStateSkipped) {
@ -167,7 +176,8 @@ func (g *group) isLastSpecWithPair(specID uint, pair runOncePair) bool {
return lastSpecID == specID
}
func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) bool {
failedInARunOnceBefore := false
pairs := g.runOncePairs[spec.SubjectID()]
nodes := spec.Nodes.WithType(types.NodeTypeBeforeAll)
@ -193,6 +203,7 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
}
if g.suite.currentSpecReport.State != types.SpecStatePassed {
terminatingNode, terminatingPair = node, oncePair
failedInARunOnceBefore = !terminatingPair.isZero()
break
}
}
@ -215,7 +226,7 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
//this node has already been run on this attempt, don't rerun it
return false
}
pair := runOncePair{}
var pair runOncePair
switch node.NodeType {
case types.NodeTypeCleanupAfterEach, types.NodeTypeCleanupAfterAll:
// check if we were generated in an AfterNode that has already run
@ -245,9 +256,13 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
if !terminatingPair.isZero() && terminatingNode.NestingLevel == node.NestingLevel {
return true //...or, a run-once node at our nesting level was skipped which means this is our last chance to run
}
case types.SpecStateFailed, types.SpecStatePanicked: // the spec has failed...
case types.SpecStateFailed, types.SpecStatePanicked, types.SpecStateTimedout: // the spec has failed...
if isFinalAttempt {
return true //...if this was the last attempt then we're the last spec to run and so the AfterNode should run
if g.continueOnFailure {
return isLastSpecWithPair || failedInARunOnceBefore //...we're configured to continue on failures - so we should only run if we're the last spec for this pair or if we failed in a runOnceBefore (which means we _are_ the last spec to run)
} else {
return true //...this was the last attempt and continueOnFailure is false therefore we are the last spec to run and so the AfterNode should run
}
}
if !terminatingPair.isZero() { // ...and it failed in a run-once, which will be running again
if node.NodeType.Is(types.NodeTypeCleanupAfterEach | types.NodeTypeCleanupAfterAll) {
@ -280,10 +295,12 @@ func (g *group) attemptSpec(isFinalAttempt bool, spec Spec) {
includeDeferCleanups = true
}
return failedInARunOnceBefore
}
func (g *group) run(specs Specs) {
g.specs = specs
g.continueOnFailure = specs[0].Nodes.FirstNodeMarkedOrdered().MarkedContinueOnFailure
for _, spec := range g.specs {
g.runOncePairs[spec.SubjectID()] = runOncePairsForSpec(spec)
}
@ -300,8 +317,8 @@ func (g *group) run(specs Specs) {
skip := g.suite.config.DryRun || g.suite.currentSpecReport.State.Is(types.SpecStateFailureStates|types.SpecStateSkipped|types.SpecStatePending)
g.suite.currentSpecReport.StartTime = time.Now()
failedInARunOnceBefore := false
if !skip {
var maxAttempts = 1
if g.suite.currentSpecReport.MaxMustPassRepeatedly > 0 {
@ -319,14 +336,14 @@ func (g *group) run(specs Specs) {
g.suite.outputInterceptor.StartInterceptingOutput()
if attempt > 0 {
if g.suite.currentSpecReport.MaxMustPassRepeatedly > 0 {
fmt.Fprintf(g.suite.writer, "\nGinkgo: Attempt #%d Passed. Repeating...\n", attempt)
g.suite.handleSpecEvent(types.SpecEvent{SpecEventType: types.SpecEventSpecRepeat, Attempt: attempt})
}
if g.suite.currentSpecReport.MaxFlakeAttempts > 0 {
fmt.Fprintf(g.suite.writer, "\nGinkgo: Attempt #%d Failed. Retrying...\n", attempt)
g.suite.handleSpecEvent(types.SpecEvent{SpecEventType: types.SpecEventSpecRetry, Attempt: attempt})
}
}
g.attemptSpec(attempt == maxAttempts-1, spec)
failedInARunOnceBefore = g.attemptSpec(attempt == maxAttempts-1, spec)
g.suite.currentSpecReport.EndTime = time.Now()
g.suite.currentSpecReport.RunTime = g.suite.currentSpecReport.EndTime.Sub(g.suite.currentSpecReport.StartTime)
@ -341,6 +358,10 @@ func (g *group) run(specs Specs) {
if g.suite.currentSpecReport.MaxFlakeAttempts > 0 {
if g.suite.currentSpecReport.State.Is(types.SpecStatePassed | types.SpecStateSkipped | types.SpecStateAborted | types.SpecStateInterrupted) {
break
} else if attempt < maxAttempts-1 {
af := types.AdditionalFailure{State: g.suite.currentSpecReport.State, Failure: g.suite.currentSpecReport.Failure}
af.Failure.Message = fmt.Sprintf("Failure recorded during attempt %d:\n%s", attempt+1, af.Failure.Message)
g.suite.currentSpecReport.AdditionalFailures = append(g.suite.currentSpecReport.AdditionalFailures, af)
}
}
}
@ -350,6 +371,7 @@ func (g *group) run(specs Specs) {
g.suite.processCurrentSpecReport()
if g.suite.currentSpecReport.State.Is(types.SpecStateFailureStates) {
g.succeeded = false
g.failedInARunOnceBefore = g.failedInARunOnceBefore || failedInARunOnceBefore
}
g.suite.selectiveLock.Lock()
g.suite.currentSpecReport = types.SpecReport{}


@ -44,23 +44,23 @@ type Node struct {
SynchronizedAfterSuiteProc1Body func(SpecContext)
SynchronizedAfterSuiteProc1BodyHasContext bool
ReportEachBody func(types.SpecReport)
ReportAfterSuiteBody func(types.Report)
ReportEachBody func(types.SpecReport)
ReportSuiteBody func(types.Report)
MarkedFocus bool
MarkedPending bool
MarkedSerial bool
MarkedOrdered bool
MarkedOncePerOrdered bool
MarkedSuppressProgressReporting bool
FlakeAttempts int
MustPassRepeatedly int
Labels Labels
PollProgressAfter time.Duration
PollProgressInterval time.Duration
NodeTimeout time.Duration
SpecTimeout time.Duration
GracePeriod time.Duration
MarkedFocus bool
MarkedPending bool
MarkedSerial bool
MarkedOrdered bool
MarkedContinueOnFailure bool
MarkedOncePerOrdered bool
FlakeAttempts int
MustPassRepeatedly int
Labels Labels
PollProgressAfter time.Duration
PollProgressInterval time.Duration
NodeTimeout time.Duration
SpecTimeout time.Duration
GracePeriod time.Duration
NodeIDWhereCleanupWasGenerated uint
}
@ -70,6 +70,7 @@ type focusType bool
type pendingType bool
type serialType bool
type orderedType bool
type continueOnFailureType bool
type honorsOrderedType bool
type suppressProgressReporting bool
@ -77,6 +78,7 @@ const Focus = focusType(true)
const Pending = pendingType(true)
const Serial = serialType(true)
const Ordered = orderedType(true)
const ContinueOnFailure = continueOnFailureType(true)
const OncePerOrdered = honorsOrderedType(true)
const SuppressProgressReporting = suppressProgressReporting(true)
@ -91,6 +93,10 @@ type NodeTimeout time.Duration
type SpecTimeout time.Duration
type GracePeriod time.Duration
func (l Labels) MatchesLabelFilter(query string) bool {
return types.MustParseLabelFilter(query)(l)
}
func UnionOfLabels(labels ...Labels) Labels {
out := Labels{}
seen := map[string]bool{}
@ -134,6 +140,8 @@ func isDecoration(arg interface{}) bool {
return true
case t == reflect.TypeOf(Ordered):
return true
case t == reflect.TypeOf(ContinueOnFailure):
return true
case t == reflect.TypeOf(OncePerOrdered):
return true
case t == reflect.TypeOf(SuppressProgressReporting):
@ -242,16 +250,18 @@ func NewNode(deprecationTracker *types.DeprecationTracker, nodeType types.NodeTy
if !nodeType.Is(types.NodeTypeContainer) {
appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "Ordered"))
}
case t == reflect.TypeOf(ContinueOnFailure):
node.MarkedContinueOnFailure = bool(arg.(continueOnFailureType))
if !nodeType.Is(types.NodeTypeContainer) {
appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "ContinueOnFailure"))
}
case t == reflect.TypeOf(OncePerOrdered):
node.MarkedOncePerOrdered = bool(arg.(honorsOrderedType))
if !nodeType.Is(types.NodeTypeBeforeEach | types.NodeTypeJustBeforeEach | types.NodeTypeAfterEach | types.NodeTypeJustAfterEach) {
appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "OncePerOrdered"))
}
case t == reflect.TypeOf(SuppressProgressReporting):
node.MarkedSuppressProgressReporting = bool(arg.(suppressProgressReporting))
if nodeType.Is(types.NodeTypeContainer) {
appendError(types.GinkgoErrors.InvalidDecoratorForNodeType(node.CodeLocation, nodeType, "SuppressProgressReporting"))
}
deprecationTracker.TrackDeprecation(types.Deprecations.SuppressProgressReporting())
case t == reflect.TypeOf(FlakeAttempts(0)):
node.FlakeAttempts = int(arg.(FlakeAttempts))
if !nodeType.Is(types.NodeTypesForContainerAndIt) {
@ -321,9 +331,9 @@ func NewNode(deprecationTracker *types.DeprecationTracker, nodeType types.NodeTy
trackedFunctionError = true
break
}
} else if nodeType.Is(types.NodeTypeReportAfterSuite) {
if node.ReportAfterSuiteBody == nil {
node.ReportAfterSuiteBody = arg.(func(types.Report))
} else if nodeType.Is(types.NodeTypeReportBeforeSuite | types.NodeTypeReportAfterSuite) {
if node.ReportSuiteBody == nil {
node.ReportSuiteBody = arg.(func(types.Report))
} else {
appendError(types.GinkgoErrors.MultipleBodyFunctions(node.CodeLocation, nodeType))
trackedFunctionError = true
@ -390,13 +400,17 @@ func NewNode(deprecationTracker *types.DeprecationTracker, nodeType types.NodeTy
appendError(types.GinkgoErrors.InvalidDeclarationOfFocusedAndPending(node.CodeLocation, nodeType))
}
if node.MarkedContinueOnFailure && !node.MarkedOrdered {
appendError(types.GinkgoErrors.InvalidContinueOnFailureDecoration(node.CodeLocation))
}
hasContext := node.HasContext || node.SynchronizedAfterSuiteProc1BodyHasContext || node.SynchronizedAfterSuiteAllProcsBodyHasContext || node.SynchronizedBeforeSuiteProc1BodyHasContext || node.SynchronizedBeforeSuiteAllProcsBodyHasContext
if !hasContext && (node.NodeTimeout > 0 || node.SpecTimeout > 0 || node.GracePeriod > 0) && len(errors) == 0 {
appendError(types.GinkgoErrors.InvalidTimeoutOrGracePeriodForNonContextNode(node.CodeLocation, nodeType))
}
if !node.NodeType.Is(types.NodeTypeReportBeforeEach|types.NodeTypeReportAfterEach|types.NodeTypeSynchronizedBeforeSuite|types.NodeTypeSynchronizedAfterSuite|types.NodeTypeReportAfterSuite) && node.Body == nil && !node.MarkedPending && !trackedFunctionError {
if !node.NodeType.Is(types.NodeTypeReportBeforeEach|types.NodeTypeReportAfterEach|types.NodeTypeSynchronizedBeforeSuite|types.NodeTypeSynchronizedAfterSuite|types.NodeTypeReportBeforeSuite|types.NodeTypeReportAfterSuite) && node.Body == nil && !node.MarkedPending && !trackedFunctionError {
appendError(types.GinkgoErrors.MissingBodyFunction(node.CodeLocation, nodeType))
}


@ -7,6 +7,58 @@ import (
"github.com/onsi/ginkgo/v2/types"
)
type SortableSpecs struct {
Specs Specs
Indexes []int
}
func NewSortableSpecs(specs Specs) *SortableSpecs {
indexes := make([]int, len(specs))
for i := range specs {
indexes[i] = i
}
return &SortableSpecs{
Specs: specs,
Indexes: indexes,
}
}
func (s *SortableSpecs) Len() int { return len(s.Indexes) }
func (s *SortableSpecs) Swap(i, j int) { s.Indexes[i], s.Indexes[j] = s.Indexes[j], s.Indexes[i] }
func (s *SortableSpecs) Less(i, j int) bool {
a, b := s.Specs[s.Indexes[i]], s.Specs[s.Indexes[j]]
firstOrderedA := a.Nodes.FirstNodeMarkedOrdered()
firstOrderedB := b.Nodes.FirstNodeMarkedOrdered()
if firstOrderedA.ID == firstOrderedB.ID && !firstOrderedA.IsZero() {
// strictly preserve order in ordered containers. ID will track this as IDs are generated monotonically
return a.FirstNodeWithType(types.NodeTypeIt).ID < b.FirstNodeWithType(types.NodeTypeIt).ID
}
aCLs := a.Nodes.WithType(types.NodeTypesForContainerAndIt).CodeLocations()
bCLs := b.Nodes.WithType(types.NodeTypesForContainerAndIt).CodeLocations()
for i := 0; i < len(aCLs) && i < len(bCLs); i++ {
aCL, bCL := aCLs[i], bCLs[i]
if aCL.FileName < bCL.FileName {
return true
} else if aCL.FileName > bCL.FileName {
return false
}
if aCL.LineNumber < bCL.LineNumber {
return true
} else if aCL.LineNumber > bCL.LineNumber {
return false
}
}
// either everything is equal or we have different lengths of CLs
if len(aCLs) < len(bCLs) {
return true
} else if len(aCLs) > len(bCLs) {
return false
}
// ok, now we are sure everything was equal. so we use the spec text to break ties
return a.Text() < b.Text()
}
type GroupedSpecIndices []SpecIndices
type SpecIndices []int
@ -28,12 +80,17 @@ func OrderSpecs(specs Specs, suiteConfig types.SuiteConfig) (GroupedSpecIndices,
// Seed a new random source based on the configured random seed.
r := rand.New(rand.NewSource(suiteConfig.RandomSeed))
// first break things into execution groups
// first, we sort the entire suite to ensure a deterministic order. the sort is performed by filename, then line number, and then spec text. this ensures every parallel process has the exact same spec order and is only necessary to cover the edge case where the user iterates over a map to generate specs.
sortableSpecs := NewSortableSpecs(specs)
sort.Sort(sortableSpecs)
// then we break things into execution groups
// a group represents a single unit of execution and is a collection of SpecIndices
// usually a group is just a single spec, however ordered containers must be preserved as a single group
executionGroupIDs := []uint{}
executionGroups := map[uint]SpecIndices{}
for idx, spec := range specs {
for _, idx := range sortableSpecs.Indexes {
spec := specs[idx]
groupNode := spec.Nodes.FirstNodeMarkedOrdered()
if groupNode.IsZero() {
groupNode = spec.Nodes.FirstNodeWithType(types.NodeTypeIt)
@ -48,7 +105,6 @@ func OrderSpecs(specs Specs, suiteConfig types.SuiteConfig) (GroupedSpecIndices,
// we shuffle outermost containers. so we need to form shufflable groupings of GroupIDs
shufflableGroupingIDs := []uint{}
shufflableGroupingIDToGroupIDs := map[uint][]uint{}
shufflableGroupingsIDToSortKeys := map[uint]string{}
// for each execution group we're going to have to pick a node to represent how the
// execution group is grouped for shuffling:
@ -57,7 +113,7 @@ func OrderSpecs(specs Specs, suiteConfig types.SuiteConfig) (GroupedSpecIndices,
nodeTypesToShuffle = types.NodeTypeIt
}
//so, fo reach execution group:
//so, for each execution group:
for _, groupID := range executionGroupIDs {
// pick out a representative spec
representativeSpec := specs[executionGroups[groupID][0]]
@ -72,22 +128,9 @@ func OrderSpecs(specs Specs, suiteConfig types.SuiteConfig) (GroupedSpecIndices,
if len(shufflableGroupingIDToGroupIDs[shufflableGroupingNode.ID]) == 1 {
// record the shuffleable group ID
shufflableGroupingIDs = append(shufflableGroupingIDs, shufflableGroupingNode.ID)
// and record the sort key to use
shufflableGroupingsIDToSortKeys[shufflableGroupingNode.ID] = shufflableGroupingNode.CodeLocation.String()
}
}
// now we sort the shufflable groups by the sort key. We use the shufflable group nodes code location and break ties using its node id
sort.SliceStable(shufflableGroupingIDs, func(i, j int) bool {
keyA := shufflableGroupingsIDToSortKeys[shufflableGroupingIDs[i]]
keyB := shufflableGroupingsIDToSortKeys[shufflableGroupingIDs[j]]
if keyA == keyB {
return shufflableGroupingIDs[i] < shufflableGroupingIDs[j]
} else {
return keyA < keyB
}
})
// now we permute the sorted shufflable grouping IDs and build the ordered Groups
orderedGroups := GroupedSpecIndices{}
permutation := r.Perm(len(shufflableGroupingIDs))


@ -42,6 +42,8 @@ type Client interface {
PostSuiteWillBegin(report types.Report) error
PostDidRun(report types.SpecReport) error
PostSuiteDidEnd(report types.Report) error
PostReportBeforeSuiteCompleted(state types.SpecState) error
BlockUntilReportBeforeSuiteCompleted() (types.SpecState, error)
PostSynchronizedBeforeSuiteCompleted(state types.SpecState, data []byte) error
BlockUntilSynchronizedBeforeSuiteData() (types.SpecState, []byte, error)
BlockUntilNonprimaryProcsHaveFinished() error


@ -98,6 +98,19 @@ func (client *httpClient) PostEmitProgressReport(report types.ProgressReport) er
return client.post("/progress-report", report)
}
func (client *httpClient) PostReportBeforeSuiteCompleted(state types.SpecState) error {
return client.post("/report-before-suite-completed", state)
}
func (client *httpClient) BlockUntilReportBeforeSuiteCompleted() (types.SpecState, error) {
var state types.SpecState
err := client.poll("/report-before-suite-state", &state)
if err == ErrorGone {
return types.SpecStateFailed, nil
}
return state, err
}
func (client *httpClient) PostSynchronizedBeforeSuiteCompleted(state types.SpecState, data []byte) error {
beforeSuiteState := BeforeSuiteState{
State: state,


@ -26,7 +26,7 @@ type httpServer struct {
handler *ServerHandler
}
//Create a new server, automatically selecting a port
// Create a new server, automatically selecting a port
func newHttpServer(parallelTotal int, reporter reporters.Reporter) (*httpServer, error) {
listener, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
@ -38,7 +38,7 @@ func newHttpServer(parallelTotal int, reporter reporters.Reporter) (*httpServer,
}, nil
}
//Start the server. You don't need to `go s.Start()`, just `s.Start()`
// Start the server. You don't need to `go s.Start()`, just `s.Start()`
func (server *httpServer) Start() {
httpServer := &http.Server{}
mux := http.NewServeMux()
@ -52,6 +52,8 @@ func (server *httpServer) Start() {
mux.HandleFunc("/progress-report", server.emitProgressReport)
//synchronization endpoints
mux.HandleFunc("/report-before-suite-completed", server.handleReportBeforeSuiteCompleted)
mux.HandleFunc("/report-before-suite-state", server.handleReportBeforeSuiteState)
mux.HandleFunc("/before-suite-completed", server.handleBeforeSuiteCompleted)
mux.HandleFunc("/before-suite-state", server.handleBeforeSuiteState)
mux.HandleFunc("/have-nonprimary-procs-finished", server.handleHaveNonprimaryProcsFinished)
@ -63,12 +65,12 @@ func (server *httpServer) Start() {
go httpServer.Serve(server.listener)
}
//Stop the server
// Stop the server
func (server *httpServer) Close() {
server.listener.Close()
}
//The address the server can be reached at. Pass this into the `ForwardingReporter`.
// The address the server can be reached at. Pass this into the `ForwardingReporter`.
func (server *httpServer) Address() string {
return "http://" + server.listener.Addr().String()
}
@ -93,7 +95,7 @@ func (server *httpServer) RegisterAlive(node int, alive func() bool) {
// Streaming Endpoints
//
//The server will forward all received messages to Ginkgo reporters registered with `RegisterReporters`
// The server will forward all received messages to Ginkgo reporters registered with `RegisterReporters`
func (server *httpServer) decode(writer http.ResponseWriter, request *http.Request, object interface{}) bool {
defer request.Body.Close()
if json.NewDecoder(request.Body).Decode(object) != nil {
@ -164,6 +166,23 @@ func (server *httpServer) emitProgressReport(writer http.ResponseWriter, request
server.handleError(server.handler.EmitProgressReport(report, voidReceiver), writer)
}
func (server *httpServer) handleReportBeforeSuiteCompleted(writer http.ResponseWriter, request *http.Request) {
var state types.SpecState
if !server.decode(writer, request, &state) {
return
}
server.handleError(server.handler.ReportBeforeSuiteCompleted(state, voidReceiver), writer)
}
func (server *httpServer) handleReportBeforeSuiteState(writer http.ResponseWriter, request *http.Request) {
var state types.SpecState
if server.handleError(server.handler.ReportBeforeSuiteState(voidSender, &state), writer) {
return
}
json.NewEncoder(writer).Encode(state)
}
func (server *httpServer) handleBeforeSuiteCompleted(writer http.ResponseWriter, request *http.Request) {
var beforeSuiteState BeforeSuiteState
if !server.decode(writer, request, &beforeSuiteState) {


@ -76,6 +76,19 @@ func (client *rpcClient) PostEmitProgressReport(report types.ProgressReport) err
return client.client.Call("Server.EmitProgressReport", report, voidReceiver)
}
func (client *rpcClient) PostReportBeforeSuiteCompleted(state types.SpecState) error {
return client.client.Call("Server.ReportBeforeSuiteCompleted", state, voidReceiver)
}
func (client *rpcClient) BlockUntilReportBeforeSuiteCompleted() (types.SpecState, error) {
var state types.SpecState
err := client.poll("Server.ReportBeforeSuiteState", &state)
if err == ErrorGone {
return types.SpecStateFailed, nil
}
return state, err
}
func (client *rpcClient) PostSynchronizedBeforeSuiteCompleted(state types.SpecState, data []byte) error {
beforeSuiteState := BeforeSuiteState{
State: state,


@ -18,16 +18,17 @@ var voidSender Void
// It handles all the business logic to avoid duplication between the two servers
type ServerHandler struct {
done chan interface{}
outputDestination io.Writer
reporter reporters.Reporter
alives []func() bool
lock *sync.Mutex
beforeSuiteState BeforeSuiteState
parallelTotal int
counter int
counterLock *sync.Mutex
shouldAbort bool
done chan interface{}
outputDestination io.Writer
reporter reporters.Reporter
alives []func() bool
lock *sync.Mutex
beforeSuiteState BeforeSuiteState
reportBeforeSuiteState types.SpecState
parallelTotal int
counter int
counterLock *sync.Mutex
shouldAbort bool
numSuiteDidBegins int
numSuiteDidEnds int
@ -37,11 +38,12 @@ type ServerHandler struct {
func newServerHandler(parallelTotal int, reporter reporters.Reporter) *ServerHandler {
return &ServerHandler{
reporter: reporter,
lock: &sync.Mutex{},
counterLock: &sync.Mutex{},
alives: make([]func() bool, parallelTotal),
beforeSuiteState: BeforeSuiteState{Data: nil, State: types.SpecStateInvalid},
reporter: reporter,
lock: &sync.Mutex{},
counterLock: &sync.Mutex{},
alives: make([]func() bool, parallelTotal),
beforeSuiteState: BeforeSuiteState{Data: nil, State: types.SpecStateInvalid},
parallelTotal: parallelTotal,
outputDestination: os.Stdout,
done: make(chan interface{}),
@ -140,6 +142,29 @@ func (handler *ServerHandler) haveNonprimaryProcsFinished() bool {
return true
}
func (handler *ServerHandler) ReportBeforeSuiteCompleted(reportBeforeSuiteState types.SpecState, _ *Void) error {
handler.lock.Lock()
defer handler.lock.Unlock()
handler.reportBeforeSuiteState = reportBeforeSuiteState
return nil
}
func (handler *ServerHandler) ReportBeforeSuiteState(_ Void, reportBeforeSuiteState *types.SpecState) error {
proc1IsAlive := handler.procIsAlive(1)
handler.lock.Lock()
defer handler.lock.Unlock()
if handler.reportBeforeSuiteState == types.SpecStateInvalid {
if proc1IsAlive {
return ErrorEarly
} else {
return ErrorGone
}
}
*reportBeforeSuiteState = handler.reportBeforeSuiteState
return nil
}
func (handler *ServerHandler) BeforeSuiteCompleted(beforeSuiteState BeforeSuiteState, _ *Void) error {
handler.lock.Lock()
defer handler.lock.Unlock()


@ -48,13 +48,10 @@ type ProgressStepCursor struct {
StartTime time.Time
}
func NewProgressReport(isRunningInParallel bool, report types.SpecReport, currentNode Node, currentNodeStartTime time.Time, currentStep ProgressStepCursor, gwOutput string, additionalReports []string, sourceRoots []string, includeAll bool) (types.ProgressReport, error) {
func NewProgressReport(isRunningInParallel bool, report types.SpecReport, currentNode Node, currentNodeStartTime time.Time, currentStep types.SpecEvent, gwOutput string, timelineLocation types.TimelineLocation, additionalReports []string, sourceRoots []string, includeAll bool) (types.ProgressReport, error) {
pr := types.ProgressReport{
ParallelProcess: report.ParallelProcess,
RunningInParallel: isRunningInParallel,
Time: time.Now(),
ParallelProcess: report.ParallelProcess,
RunningInParallel: isRunningInParallel,
ContainerHierarchyTexts: report.ContainerHierarchyTexts,
LeafNodeText: report.LeafNodeText,
LeafNodeLocation: report.LeafNodeLocation,
@ -65,14 +62,14 @@ func NewProgressReport(isRunningInParallel bool, report types.SpecReport, curren
CurrentNodeLocation: currentNode.CodeLocation,
CurrentNodeStartTime: currentNodeStartTime,
CurrentStepText: currentStep.Text,
CurrentStepText: currentStep.Message,
CurrentStepLocation: currentStep.CodeLocation,
CurrentStepStartTime: currentStep.StartTime,
CurrentStepStartTime: currentStep.TimelineLocation.Time,
AdditionalReports: additionalReports,
CapturedGinkgoWriterOutput: gwOutput,
GinkgoWriterOffset: len(gwOutput),
TimelineLocation: timelineLocation,
}
goroutines, err := extractRunningGoroutines()
@ -186,7 +183,6 @@ func extractRunningGoroutines() ([]types.Goroutine, error) {
break
}
}
r := bufio.NewReader(bytes.NewReader(stack))
out := []types.Goroutine{}
idx := -1
@ -234,12 +230,12 @@ func extractRunningGoroutines() ([]types.Goroutine, error) {
return nil, types.GinkgoErrors.FailedToParseStackTrace(fmt.Sprintf("Invalid function call: %s -- missing file name and line number", functionCall.Function))
}
line = strings.TrimLeft(line, " \t")
fields := strings.SplitN(line, ":", 2)
if len(fields) != 2 {
return nil, types.GinkgoErrors.FailedToParseStackTrace(fmt.Sprintf("Invalid filename nad line number: %s", line))
delimiterIdx := strings.LastIndex(line, ":")
if delimiterIdx == -1 {
return nil, types.GinkgoErrors.FailedToParseStackTrace(fmt.Sprintf("Invalid filename and line number: %s", line))
}
functionCall.Filename = fields[0]
line = strings.Split(fields[1], " ")[0]
functionCall.Filename = line[:delimiterIdx]
line = strings.Split(line[delimiterIdx+1:], " ")[0]
lineNumber, err := strconv.ParseInt(line, 10, 64)
functionCall.Line = int(lineNumber)
if err != nil {


@ -1,7 +1,6 @@
package internal
import (
"reflect"
"time"
"github.com/onsi/ginkgo/v2/types"
@ -13,20 +12,20 @@ func NewReportEntry(name string, cl types.CodeLocation, args ...interface{}) (Re
out := ReportEntry{
Visibility: types.ReportEntryVisibilityAlways,
Name: name,
Time: time.Now(),
Location: cl,
Time: time.Now(),
}
var didSetValue = false
for _, arg := range args {
switch reflect.TypeOf(arg) {
case reflect.TypeOf(types.ReportEntryVisibilityAlways):
out.Visibility = arg.(types.ReportEntryVisibility)
case reflect.TypeOf(types.CodeLocation{}):
out.Location = arg.(types.CodeLocation)
case reflect.TypeOf(Offset(0)):
out.Location = types.NewCodeLocation(2 + int(arg.(Offset)))
case reflect.TypeOf(out.Time):
out.Time = arg.(time.Time)
switch x := arg.(type) {
case types.ReportEntryVisibility:
out.Visibility = x
case types.CodeLocation:
out.Location = x
case Offset:
out.Location = types.NewCodeLocation(2 + int(x))
case time.Time:
out.Time = x
default:
if didSetValue {
return ReportEntry{}, types.GinkgoErrors.TooManyReportEntryValues(out.Location, arg)


@ -44,7 +44,8 @@ type Suite struct {
currentSpecContext *specContext
progressStepCursor ProgressStepCursor
currentByStep types.SpecEvent
timelineOrder int
/*
We don't need to lock around all operations. Just those that *could* happen concurrently.
@ -128,7 +129,7 @@ func (suite *Suite) PushNode(node Node) error {
return suite.pushCleanupNode(node)
}
if node.NodeType.Is(types.NodeTypeBeforeSuite | types.NodeTypeAfterSuite | types.NodeTypeSynchronizedBeforeSuite | types.NodeTypeSynchronizedAfterSuite | types.NodeTypeReportAfterSuite) {
if node.NodeType.Is(types.NodeTypeBeforeSuite | types.NodeTypeAfterSuite | types.NodeTypeSynchronizedBeforeSuite | types.NodeTypeSynchronizedAfterSuite | types.NodeTypeReportBeforeSuite | types.NodeTypeReportAfterSuite) {
return suite.pushSuiteNode(node)
}
@ -150,6 +151,13 @@ func (suite *Suite) PushNode(node Node) error {
}
}
if node.MarkedContinueOnFailure {
firstOrderedNode := suite.tree.AncestorNodeChain().FirstNodeMarkedOrdered()
if !firstOrderedNode.IsZero() {
return types.GinkgoErrors.InvalidContinueOnFailureDecoration(node.CodeLocation)
}
}
if node.NodeType == types.NodeTypeContainer {
// During PhaseBuildTopLevel we only track the top level containers without entering them
// We only enter the top level container nodes during PhaseBuildTree
@ -221,7 +229,7 @@ func (suite *Suite) pushCleanupNode(node Node) error {
node.NodeType = types.NodeTypeCleanupAfterSuite
case types.NodeTypeBeforeAll, types.NodeTypeAfterAll:
node.NodeType = types.NodeTypeCleanupAfterAll
case types.NodeTypeReportBeforeEach, types.NodeTypeReportAfterEach, types.NodeTypeReportAfterSuite:
case types.NodeTypeReportBeforeEach, types.NodeTypeReportAfterEach, types.NodeTypeReportBeforeSuite, types.NodeTypeReportAfterSuite:
return types.GinkgoErrors.PushingCleanupInReportingNode(node.CodeLocation, suite.currentNode.NodeType)
case types.NodeTypeCleanupInvalid, types.NodeTypeCleanupAfterEach, types.NodeTypeCleanupAfterAll, types.NodeTypeCleanupAfterSuite:
return types.GinkgoErrors.PushingCleanupInCleanupNode(node.CodeLocation)
@ -236,19 +244,69 @@ func (suite *Suite) pushCleanupNode(node Node) error {
return nil
}
/*
Pushing and popping the Step Cursor stack
*/
func (suite *Suite) SetProgressStepCursor(cursor ProgressStepCursor) {
func (suite *Suite) generateTimelineLocation() types.TimelineLocation {
suite.selectiveLock.Lock()
defer suite.selectiveLock.Unlock()
suite.progressStepCursor = cursor
suite.timelineOrder += 1
return types.TimelineLocation{
Offset: len(suite.currentSpecReport.CapturedGinkgoWriterOutput) + suite.writer.Len(),
Order: suite.timelineOrder,
Time: time.Now(),
}
}
func (suite *Suite) handleSpecEvent(event types.SpecEvent) types.SpecEvent {
event.TimelineLocation = suite.generateTimelineLocation()
suite.selectiveLock.Lock()
suite.currentSpecReport.SpecEvents = append(suite.currentSpecReport.SpecEvents, event)
suite.selectiveLock.Unlock()
suite.reporter.EmitSpecEvent(event)
return event
}
func (suite *Suite) handleSpecEventEnd(eventType types.SpecEventType, startEvent types.SpecEvent) {
event := startEvent
event.SpecEventType = eventType
event.TimelineLocation = suite.generateTimelineLocation()
event.Duration = event.TimelineLocation.Time.Sub(startEvent.TimelineLocation.Time)
suite.selectiveLock.Lock()
suite.currentSpecReport.SpecEvents = append(suite.currentSpecReport.SpecEvents, event)
suite.selectiveLock.Unlock()
suite.reporter.EmitSpecEvent(event)
}
func (suite *Suite) By(text string, callback ...func()) error {
cl := types.NewCodeLocation(2)
if suite.phase != PhaseRun {
return types.GinkgoErrors.ByNotDuringRunPhase(cl)
}
event := suite.handleSpecEvent(types.SpecEvent{
SpecEventType: types.SpecEventByStart,
CodeLocation: cl,
Message: text,
})
suite.selectiveLock.Lock()
suite.currentByStep = event
suite.selectiveLock.Unlock()
if len(callback) == 1 {
defer func() {
suite.selectiveLock.Lock()
suite.currentByStep = types.SpecEvent{}
suite.selectiveLock.Unlock()
suite.handleSpecEventEnd(types.SpecEventByEnd, event)
}()
callback[0]()
} else if len(callback) > 1 {
panic("just one callback per By, please")
}
return nil
}
/*
Spec Running methods - used during PhaseRun
Spec Running methods - used during PhaseRun
*/
func (suite *Suite) CurrentSpecReport() types.SpecReport {
suite.selectiveLock.Lock()
@ -263,16 +321,20 @@ func (suite *Suite) CurrentSpecReport() types.SpecReport {
}
func (suite *Suite) AddReportEntry(entry ReportEntry) error {
suite.selectiveLock.Lock()
defer suite.selectiveLock.Unlock()
if suite.phase != PhaseRun {
return types.GinkgoErrors.AddReportEntryNotDuringRunPhase(entry.Location)
}
entry.TimelineLocation = suite.generateTimelineLocation()
entry.Time = entry.TimelineLocation.Time
suite.selectiveLock.Lock()
suite.currentSpecReport.ReportEntries = append(suite.currentSpecReport.ReportEntries, entry)
suite.selectiveLock.Unlock()
suite.reporter.EmitReportEntry(entry)
return nil
}
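`AddReportEntry` now stamps each entry with a `TimelineLocation` and emits it through the reporter as soon as it is added. A hedged usage sketch (the entry name and value are illustrative), assuming the dot-imported DSL from the bootstrap above:

```go
package demo_test

import . "github.com/onsi/ginkgo/v2"

var _ = It("records a measurement", func() {
	// emitted immediately and placed on the timeline at the point of the
	// call; with this visibility it is only shown on failure or with -v
	AddReportEntry("db-latency", "37ms", ReportEntryVisibilityFailureOrVerbose)
})
```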
func (suite *Suite) generateProgressReport(fullReport bool) types.ProgressReport {
timelineLocation := suite.generateTimelineLocation()
suite.selectiveLock.Lock()
defer suite.selectiveLock.Unlock()
@ -280,10 +342,8 @@ func (suite *Suite) generateProgressReport(fullReport bool) types.ProgressReport
if suite.currentSpecContext != nil {
additionalReports = suite.currentSpecContext.QueryProgressReporters()
}
stepCursor := suite.progressStepCursor
gwOutput := suite.currentSpecReport.CapturedGinkgoWriterOutput + string(suite.writer.Bytes())
pr, err := NewProgressReport(suite.isRunningInParallel(), suite.currentSpecReport, suite.currentNode, suite.currentNodeStartTime, stepCursor, gwOutput, additionalReports, suite.config.SourceRoots, fullReport)
pr, err := NewProgressReport(suite.isRunningInParallel(), suite.currentSpecReport, suite.currentNode, suite.currentNodeStartTime, suite.currentByStep, gwOutput, timelineLocation, additionalReports, suite.config.SourceRoots, fullReport)
if err != nil {
fmt.Printf("{{red}}Failed to generate progress report:{{/}}\n%s\n", err.Error())
@ -355,7 +415,13 @@ func (suite *Suite) runSpecs(description string, suiteLabels Labels, suitePath s
}
suite.report.SuiteSucceeded = true
suite.runBeforeSuite(numSpecsThatWillBeRun)
suite.runReportSuiteNodesIfNeedBe(types.NodeTypeReportBeforeSuite)
ranBeforeSuite := suite.report.SuiteSucceeded
if suite.report.SuiteSucceeded {
suite.runBeforeSuite(numSpecsThatWillBeRun)
}
if suite.report.SuiteSucceeded {
groupedSpecIndices, serialGroupedSpecIndices := OrderSpecs(specs, suite.config)
@ -394,7 +460,9 @@ func (suite *Suite) runSpecs(description string, suiteLabels Labels, suitePath s
}
}
suite.runAfterSuiteCleanup(numSpecsThatWillBeRun)
if ranBeforeSuite {
suite.runAfterSuiteCleanup(numSpecsThatWillBeRun)
}
interruptStatus := suite.interruptHandler.Status()
if interruptStatus.Interrupted() {
@ -408,9 +476,7 @@ func (suite *Suite) runSpecs(description string, suiteLabels Labels, suitePath s
suite.report.SuiteSucceeded = false
}
if suite.config.ParallelProcess == 1 {
suite.runReportAfterSuite()
}
suite.runReportSuiteNodesIfNeedBe(types.NodeTypeReportAfterSuite)
suite.reporter.SuiteDidEnd(suite.report)
if suite.isRunningInParallel() {
suite.client.PostSuiteDidEnd(suite.report)
@ -424,9 +490,10 @@ func (suite *Suite) runBeforeSuite(numSpecsThatWillBeRun int) {
if !beforeSuiteNode.IsZero() && numSpecsThatWillBeRun > 0 {
suite.selectiveLock.Lock()
suite.currentSpecReport = types.SpecReport{
LeafNodeType: beforeSuiteNode.NodeType,
LeafNodeLocation: beforeSuiteNode.CodeLocation,
ParallelProcess: suite.config.ParallelProcess,
LeafNodeType: beforeSuiteNode.NodeType,
LeafNodeLocation: beforeSuiteNode.CodeLocation,
ParallelProcess: suite.config.ParallelProcess,
RunningInParallel: suite.isRunningInParallel(),
}
suite.selectiveLock.Unlock()
@ -445,9 +512,10 @@ func (suite *Suite) runAfterSuiteCleanup(numSpecsThatWillBeRun int) {
if !afterSuiteNode.IsZero() && numSpecsThatWillBeRun > 0 {
suite.selectiveLock.Lock()
suite.currentSpecReport = types.SpecReport{
LeafNodeType: afterSuiteNode.NodeType,
LeafNodeLocation: afterSuiteNode.CodeLocation,
ParallelProcess: suite.config.ParallelProcess,
LeafNodeType: afterSuiteNode.NodeType,
LeafNodeLocation: afterSuiteNode.CodeLocation,
ParallelProcess: suite.config.ParallelProcess,
RunningInParallel: suite.isRunningInParallel(),
}
suite.selectiveLock.Unlock()
@ -461,9 +529,10 @@ func (suite *Suite) runAfterSuiteCleanup(numSpecsThatWillBeRun int) {
for _, cleanupNode := range afterSuiteCleanup {
suite.selectiveLock.Lock()
suite.currentSpecReport = types.SpecReport{
LeafNodeType: cleanupNode.NodeType,
LeafNodeLocation: cleanupNode.CodeLocation,
ParallelProcess: suite.config.ParallelProcess,
LeafNodeType: cleanupNode.NodeType,
LeafNodeLocation: cleanupNode.CodeLocation,
ParallelProcess: suite.config.ParallelProcess,
RunningInParallel: suite.isRunningInParallel(),
}
suite.selectiveLock.Unlock()
@ -474,23 +543,6 @@ func (suite *Suite) runAfterSuiteCleanup(numSpecsThatWillBeRun int) {
}
}
func (suite *Suite) runReportAfterSuite() {
for _, node := range suite.suiteNodes.WithType(types.NodeTypeReportAfterSuite) {
suite.selectiveLock.Lock()
suite.currentSpecReport = types.SpecReport{
LeafNodeType: node.NodeType,
LeafNodeLocation: node.CodeLocation,
LeafNodeText: node.Text,
ParallelProcess: suite.config.ParallelProcess,
}
suite.selectiveLock.Unlock()
suite.reporter.WillRun(suite.currentSpecReport)
suite.runReportAfterSuiteNode(node, suite.report)
suite.processCurrentSpecReport()
}
}
func (suite *Suite) reportEach(spec Spec, nodeType types.NodeType) {
nodes := spec.Nodes.WithType(nodeType)
if nodeType == types.NodeTypeReportAfterEach {
@ -608,39 +660,80 @@ func (suite *Suite) runSuiteNode(node Node) {
if err != nil && !suite.currentSpecReport.State.Is(types.SpecStateFailureStates) {
suite.currentSpecReport.State, suite.currentSpecReport.Failure = types.SpecStateFailed, suite.failureForLeafNodeWithMessage(node, err.Error())
suite.reporter.EmitFailure(suite.currentSpecReport.State, suite.currentSpecReport.Failure)
}
suite.currentSpecReport.EndTime = time.Now()
suite.currentSpecReport.RunTime = suite.currentSpecReport.EndTime.Sub(suite.currentSpecReport.StartTime)
suite.currentSpecReport.CapturedGinkgoWriterOutput = string(suite.writer.Bytes())
suite.currentSpecReport.CapturedStdOutErr += suite.outputInterceptor.StopInterceptingAndReturnOutput()
return
}
func (suite *Suite) runReportAfterSuiteNode(node Node, report types.Report) {
func (suite *Suite) runReportSuiteNodesIfNeedBe(nodeType types.NodeType) {
nodes := suite.suiteNodes.WithType(nodeType)
// only run ReportAfterSuite on proc 1
if nodeType.Is(types.NodeTypeReportAfterSuite) && suite.config.ParallelProcess != 1 {
return
}
// if we're running ReportBeforeSuite on proc > 1 - we should wait until proc 1 has completed
if nodeType.Is(types.NodeTypeReportBeforeSuite) && suite.config.ParallelProcess != 1 && len(nodes) > 0 {
state, err := suite.client.BlockUntilReportBeforeSuiteCompleted()
if err != nil || state.Is(types.SpecStateFailed) {
suite.report.SuiteSucceeded = false
}
return
}
for _, node := range nodes {
suite.selectiveLock.Lock()
suite.currentSpecReport = types.SpecReport{
LeafNodeType: node.NodeType,
LeafNodeLocation: node.CodeLocation,
LeafNodeText: node.Text,
ParallelProcess: suite.config.ParallelProcess,
RunningInParallel: suite.isRunningInParallel(),
}
suite.selectiveLock.Unlock()
suite.reporter.WillRun(suite.currentSpecReport)
suite.runReportSuiteNode(node, suite.report)
suite.processCurrentSpecReport()
}
// if we're running ReportBeforeSuite and we're running in parallel - we should tell the other procs that we're done
if nodeType.Is(types.NodeTypeReportBeforeSuite) && suite.isRunningInParallel() && len(nodes) > 0 {
if suite.report.SuiteSucceeded {
suite.client.PostReportBeforeSuiteCompleted(types.SpecStatePassed)
} else {
suite.client.PostReportBeforeSuiteCompleted(types.SpecStateFailed)
}
}
}
func (suite *Suite) runReportSuiteNode(node Node, report types.Report) {
suite.writer.Truncate()
suite.outputInterceptor.StartInterceptingOutput()
suite.currentSpecReport.StartTime = time.Now()
if suite.config.ParallelTotal > 1 {
// if we're running a ReportAfterSuite in parallel (on proc 1) we (a) wait until other procs have exited and
// (b) always fetch the latest report as prior ReportAfterSuites will contribute to it
if node.NodeType.Is(types.NodeTypeReportAfterSuite) && suite.isRunningInParallel() {
aggregatedReport, err := suite.client.BlockUntilAggregatedNonprimaryProcsReport()
if err != nil {
suite.currentSpecReport.State, suite.currentSpecReport.Failure = types.SpecStateFailed, suite.failureForLeafNodeWithMessage(node, err.Error())
suite.reporter.EmitFailure(suite.currentSpecReport.State, suite.currentSpecReport.Failure)
return
}
report = report.Add(aggregatedReport)
}
node.Body = func(SpecContext) { node.ReportAfterSuiteBody(report) }
node.Body = func(SpecContext) { node.ReportSuiteBody(report) }
suite.currentSpecReport.State, suite.currentSpecReport.Failure = suite.runNode(node, time.Time{}, "")
suite.currentSpecReport.EndTime = time.Now()
suite.currentSpecReport.RunTime = suite.currentSpecReport.EndTime.Sub(suite.currentSpecReport.StartTime)
suite.currentSpecReport.CapturedGinkgoWriterOutput = string(suite.writer.Bytes())
suite.currentSpecReport.CapturedStdOutErr = suite.outputInterceptor.StopInterceptingAndReturnOutput()
return
}
func (suite *Suite) runNode(node Node, specDeadline time.Time, text string) (types.SpecState, types.Failure) {
@ -662,7 +755,7 @@ func (suite *Suite) runNode(node Node, specDeadline time.Time, text string) (typ
suite.selectiveLock.Lock()
suite.currentNode = node
suite.currentNodeStartTime = time.Now()
suite.progressStepCursor = ProgressStepCursor{}
suite.currentByStep = types.SpecEvent{}
suite.selectiveLock.Unlock()
defer func() {
suite.selectiveLock.Lock()
@ -671,13 +764,18 @@ func (suite *Suite) runNode(node Node, specDeadline time.Time, text string) (typ
suite.selectiveLock.Unlock()
}()
if suite.config.EmitSpecProgress && !node.MarkedSuppressProgressReporting {
if text == "" {
text = "TOP-LEVEL"
}
s := fmt.Sprintf("[%s] %s\n %s\n", node.NodeType.String(), text, node.CodeLocation.String())
suite.writer.Write([]byte(s))
if text == "" {
text = "TOP-LEVEL"
}
event := suite.handleSpecEvent(types.SpecEvent{
SpecEventType: types.SpecEventNodeStart,
NodeType: node.NodeType,
Message: text,
CodeLocation: node.CodeLocation,
})
defer func() {
suite.handleSpecEventEnd(types.SpecEventNodeEnd, event)
}()
var failure types.Failure
failure.FailureNodeType, failure.FailureNodeLocation = node.NodeType, node.CodeLocation
@ -697,18 +795,23 @@ func (suite *Suite) runNode(node Node, specDeadline time.Time, text string) (typ
now := time.Now()
deadline := suite.deadline
timeoutInPlay := "suite"
if deadline.IsZero() || (!specDeadline.IsZero() && specDeadline.Before(deadline)) {
deadline = specDeadline
timeoutInPlay = "spec"
}
if node.NodeTimeout > 0 && (deadline.IsZero() || deadline.Sub(now) > node.NodeTimeout) {
deadline = now.Add(node.NodeTimeout)
timeoutInPlay = "node"
}
if (!deadline.IsZero() && deadline.Before(now)) || interruptStatus.Interrupted() {
//we're out of time already. let's wait for a NodeTimeout if we have it, or GracePeriod if we don't
if node.NodeTimeout > 0 {
deadline = now.Add(node.NodeTimeout)
timeoutInPlay = "node"
} else {
deadline = now.Add(gracePeriod)
timeoutInPlay = "grace period"
}
}
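The deadline selection above picks the tightest of the suite, spec, and node timeouts (falling back to the grace period once time has already run out), and `timeoutInPlay` labels which one fired. A sketch of the user-facing decorators that drive it; `slowOp` is a stand-in, not part of this diff:

```go
package demo_test

import (
	"time"

	. "github.com/onsi/ginkgo/v2"
)

// slowOp is a hypothetical long-running operation used for illustration.
func slowOp(done chan struct{}) {
	time.Sleep(time.Minute)
	close(done)
}

var _ = It("talks to a slow backend", func(ctx SpecContext) {
	done := make(chan struct{})
	go slowOp(done)
	select {
	case <-done:
	case <-ctx.Done():
		// ctx is cancelled when the "node" deadline elapses; the runner
		// then waits at most GracePeriod before abandoning the goroutine
	}
}, NodeTimeout(30*time.Second), GracePeriod(5*time.Second))
```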
@ -743,6 +846,7 @@ func (suite *Suite) runNode(node Node, specDeadline time.Time, text string) (typ
}
outcomeFromRun, failureFromRun := suite.failer.Drain()
failureFromRun.TimelineLocation = suite.generateTimelineLocation()
outcomeC <- outcomeFromRun
failureC <- failureFromRun
}()
@ -772,23 +876,33 @@ func (suite *Suite) runNode(node Node, specDeadline time.Time, text string) (typ
select {
case outcomeFromRun := <-outcomeC:
failureFromRun := <-failureC
if outcome == types.SpecStateInterrupted {
// we've already been interrupted. we just managed to actually exit
if outcome.Is(types.SpecStateInterrupted | types.SpecStateTimedout) {
// we've already been interrupted/timed out. we just managed to actually exit
// before the grace period elapsed
return outcome, failure
} else if outcome == types.SpecStateTimedout {
// we've already timed out. we just managed to actually exit
// before the grace period elapsed. if we have a failure message we should include it
// if we have a failure message we attach it as an additional failure
if outcomeFromRun != types.SpecStatePassed {
failure.Location, failure.ForwardedPanic = failureFromRun.Location, failureFromRun.ForwardedPanic
failure.Message = "This spec timed out and reported the following failure after the timeout:\n\n" + failureFromRun.Message
additionalFailure := types.AdditionalFailure{
State: outcomeFromRun,
Failure: failure, //we make a copy - this will include all the configuration set up above...
}
//...and then we update the failure with the details from failureFromRun
additionalFailure.Failure.Location, additionalFailure.Failure.ForwardedPanic, additionalFailure.Failure.TimelineLocation = failureFromRun.Location, failureFromRun.ForwardedPanic, failureFromRun.TimelineLocation
additionalFailure.Failure.ProgressReport = types.ProgressReport{}
if outcome == types.SpecStateTimedout {
additionalFailure.Failure.Message = fmt.Sprintf("A %s timeout occurred and then the following failure was recorded in the timedout node before it exited:\n%s", timeoutInPlay, failureFromRun.Message)
} else {
additionalFailure.Failure.Message = fmt.Sprintf("An interrupt occurred and then the following failure was recorded in the interrupted node before it exited:\n%s", failureFromRun.Message)
}
suite.reporter.EmitFailure(additionalFailure.State, additionalFailure.Failure)
failure.AdditionalFailure = &additionalFailure
}
return outcome, failure
}
if outcomeFromRun.Is(types.SpecStatePassed) {
return outcomeFromRun, types.Failure{}
} else {
failure.Message, failure.Location, failure.ForwardedPanic = failureFromRun.Message, failureFromRun.Location, failureFromRun.ForwardedPanic
failure.Message, failure.Location, failure.ForwardedPanic, failure.TimelineLocation = failureFromRun.Message, failureFromRun.Location, failureFromRun.ForwardedPanic, failureFromRun.TimelineLocation
suite.reporter.EmitFailure(outcomeFromRun, failure)
return outcomeFromRun, failure
}
case <-gracePeriodChannel:
@ -801,10 +915,12 @@ func (suite *Suite) runNode(node Node, specDeadline time.Time, text string) (typ
case <-deadlineChannel:
// we're out of time - the outcome is a timeout and we capture the failure and progress report
outcome = types.SpecStateTimedout
failure.Message, failure.Location = "Timedout", node.CodeLocation
failure.Message, failure.Location, failure.TimelineLocation = fmt.Sprintf("A %s timeout occurred", timeoutInPlay), node.CodeLocation, suite.generateTimelineLocation()
failure.ProgressReport = suite.generateProgressReport(false).WithoutCapturedGinkgoWriterOutput()
failure.ProgressReport.Message = "{{bold}}This is the Progress Report generated when the timeout occurred:{{/}}"
failure.ProgressReport.Message = fmt.Sprintf("{{bold}}This is the Progress Report generated when the %s timeout occurred:{{/}}", timeoutInPlay)
deadlineChannel = nil
suite.reporter.EmitFailure(outcome, failure)
// tell the spec to stop. it's important we generate the progress report first to make sure we capture where
// the spec is actually stuck
sc.cancel()
@ -814,36 +930,36 @@ func (suite *Suite) runNode(node Node, specDeadline time.Time, text string) (typ
interruptStatus = suite.interruptHandler.Status()
deadlineChannel = nil // don't worry about deadlines, time's up now
failureTimelineLocation := suite.generateTimelineLocation()
progressReport := suite.generateProgressReport(true)
if outcome == types.SpecStateInvalid {
outcome = types.SpecStateInterrupted
failure.Message, failure.Location = interruptStatus.Message(), node.CodeLocation
failure.Message, failure.Location, failure.TimelineLocation = interruptStatus.Message(), node.CodeLocation, failureTimelineLocation
if interruptStatus.ShouldIncludeProgressReport() {
failure.ProgressReport = suite.generateProgressReport(true).WithoutCapturedGinkgoWriterOutput()
failure.ProgressReport = progressReport.WithoutCapturedGinkgoWriterOutput()
failure.ProgressReport.Message = "{{bold}}This is the Progress Report generated when the interrupt was received:{{/}}"
}
suite.reporter.EmitFailure(outcome, failure)
}
var report types.ProgressReport
if interruptStatus.ShouldIncludeProgressReport() {
report = suite.generateProgressReport(false)
}
progressReport = progressReport.WithoutOtherGoroutines()
sc.cancel()
if interruptStatus.Level == interrupt_handler.InterruptLevelBailOut {
if interruptStatus.ShouldIncludeProgressReport() {
report.Message = fmt.Sprintf("{{bold}}{{orange}}%s{{/}}\n{{bold}}{{red}}Final interrupt received{{/}}; Ginkgo will not run any cleanup or reporting nodes and will terminate as soon as possible.\nHere's a current progress report:", interruptStatus.Message())
suite.emitProgressReport(report)
progressReport.Message = fmt.Sprintf("{{bold}}{{orange}}%s{{/}}\n{{bold}}{{red}}Final interrupt received{{/}}; Ginkgo will not run any cleanup or reporting nodes and will terminate as soon as possible.\nHere's a current progress report:", interruptStatus.Message())
suite.emitProgressReport(progressReport)
}
return outcome, failure
}
if interruptStatus.ShouldIncludeProgressReport() {
if interruptStatus.Level == interrupt_handler.InterruptLevelCleanupAndReport {
report.Message = fmt.Sprintf("{{bold}}{{orange}}%s{{/}}\nFirst interrupt received; Ginkgo will run any cleanup and reporting nodes but will skip all remaining specs. {{bold}}Interrupt again to skip cleanup{{/}}.\nHere's a current progress report:", interruptStatus.Message())
progressReport.Message = fmt.Sprintf("{{bold}}{{orange}}%s{{/}}\nFirst interrupt received; Ginkgo will run any cleanup and reporting nodes but will skip all remaining specs. {{bold}}Interrupt again to skip cleanup{{/}}.\nHere's a current progress report:", interruptStatus.Message())
} else if interruptStatus.Level == interrupt_handler.InterruptLevelReportOnly {
report.Message = fmt.Sprintf("{{bold}}{{orange}}%s{{/}}\nSecond interrupt received; Ginkgo will run any reporting nodes but will skip all remaining specs and cleanup nodes. {{bold}}Interrupt again to bail immediately{{/}}.\nHere's a current progress report:", interruptStatus.Message())
progressReport.Message = fmt.Sprintf("{{bold}}{{orange}}%s{{/}}\nSecond interrupt received; Ginkgo will run any reporting nodes but will skip all remaining specs and cleanup nodes. {{bold}}Interrupt again to bail immediately{{/}}.\nHere's a current progress report:", interruptStatus.Message())
}
suite.emitProgressReport(report)
suite.emitProgressReport(progressReport)
}
if gracePeriodChannel == nil {
@ -864,10 +980,12 @@ func (suite *Suite) runNode(node Node, specDeadline time.Time, text string) (typ
}
}
// TODO: search for usages and consider if reporter.EmitFailure() is necessary
func (suite *Suite) failureForLeafNodeWithMessage(node Node, message string) types.Failure {
return types.Failure{
Message: message,
Location: node.CodeLocation,
TimelineLocation: suite.generateTimelineLocation(),
FailureNodeContext: types.FailureNodeIsLeafNode,
FailureNodeType: node.NodeType,
FailureNodeLocation: node.CodeLocation,

View File

@ -81,7 +81,7 @@ func (t *ginkgoTestingTProxy) Fatalf(format string, args ...interface{}) {
}
func (t *ginkgoTestingTProxy) Helper() {
// No-op
types.MarkAsHelper(1)
}
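With the proxy now calling `types.MarkAsHelper(1)`, `GinkgoT().Helper()` behaves like `testing.T.Helper()`: the helper's frames are skipped when computing failure locations. A minimal sketch (the helper name is illustrative):

```go
package demo_test

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// expectPositive is a hypothetical assertion helper; thanks to Helper()
// a failure is reported at the caller's line, not inside this function.
func expectPositive(n int) {
	GinkgoT().Helper()
	Expect(n).To(BeNumerically(">", 0))
}

var _ = It("uses a helper", func() {
	expectPositive(42)
})
```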
func (t *ginkgoTestingTProxy) Log(args ...interface{}) {

View File

@ -22,24 +22,30 @@ type WriterInterface interface {
Truncate()
Bytes() []byte
Len() int
}
//Writer implements WriterInterface and GinkgoWriterInterface
// Writer implements WriterInterface and GinkgoWriterInterface
type Writer struct {
buffer *bytes.Buffer
outWriter io.Writer
lock *sync.Mutex
mode WriterMode
streamIndent []byte
indentNext bool
teeWriters []io.Writer
}
func NewWriter(outWriter io.Writer) *Writer {
return &Writer{
buffer: &bytes.Buffer{},
lock: &sync.Mutex{},
outWriter: outWriter,
mode: WriterModeStreamAndBuffer,
buffer: &bytes.Buffer{},
lock: &sync.Mutex{},
outWriter: outWriter,
mode: WriterModeStreamAndBuffer,
streamIndent: []byte(" "),
indentNext: true,
}
}
@ -49,6 +55,14 @@ func (w *Writer) SetMode(mode WriterMode) {
w.mode = mode
}
func (w *Writer) Len() int {
w.lock.Lock()
defer w.lock.Unlock()
return w.buffer.Len()
}
var newline = []byte("\n")
func (w *Writer) Write(b []byte) (n int, err error) {
w.lock.Lock()
defer w.lock.Unlock()
@ -58,7 +72,21 @@ func (w *Writer) Write(b []byte) (n int, err error) {
}
if w.mode == WriterModeStreamAndBuffer {
w.outWriter.Write(b)
line, remaining, found := []byte{}, b, false
for len(remaining) > 0 {
line, remaining, found = bytes.Cut(remaining, newline)
if len(line) > 0 {
if w.indentNext {
w.outWriter.Write(w.streamIndent)
w.indentNext = false
}
w.outWriter.Write(line)
}
if found {
w.outWriter.Write(newline)
w.indentNext = true
}
}
}
return w.buffer.Write(b)
}
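The streaming branch now splits writes on newlines with `bytes.Cut` and prefixes each fresh line with a two-space indent, so live GinkgoWriter output lines up under the spec header. A standalone sketch of the same loop (names are illustrative):

```go
package main

import (
	"bytes"
	"os"
)

// indentStream mirrors the Writer's streaming branch: whenever a newline
// has just been emitted, the next non-empty line gets a two-space indent.
func indentStream(out *os.File, b []byte, indentNext *bool) {
	newline, indent := []byte("\n"), []byte("  ")
	line, remaining, found := []byte{}, b, false
	for len(remaining) > 0 {
		line, remaining, found = bytes.Cut(remaining, newline)
		if len(line) > 0 {
			if *indentNext {
				out.Write(indent)
				*indentNext = false
			}
			out.Write(line)
		}
		if found {
			out.Write(newline)
			*indentNext = true
		}
	}
}

func main() {
	indentNext := true
	// prints "  hello\n  world\n"
	indentStream(os.Stdout, []byte("hello\nworld\n"), &indentNext)
}
```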
@ -78,7 +106,7 @@ func (w *Writer) Bytes() []byte {
return copied
}
//GinkgoWriterInterface
// GinkgoWriterInterface
func (w *Writer) TeeTo(writer io.Writer) {
w.lock.Lock()
defer w.lock.Unlock()

View File

@ -30,6 +30,8 @@ type DefaultReporter struct {
specDenoter string
retryDenoter string
formatter formatter.Formatter
runningInParallel bool
}
func NewDefaultReporterUnderTest(conf types.ReporterConfig, writer io.Writer) *DefaultReporter {
@ -97,230 +99,10 @@ func (r *DefaultReporter) SuiteWillBegin(report types.Report) {
}
}
func (r *DefaultReporter) WillRun(report types.SpecReport) {
if r.conf.Verbosity().LT(types.VerbosityLevelVerbose) || report.State.Is(types.SpecStatePending|types.SpecStateSkipped) {
return
}
r.emitDelimiter()
indentation := uint(0)
if report.LeafNodeType.Is(types.NodeTypesForSuiteLevelNodes) {
r.emitBlock(r.f("{{bold}}[%s] %s{{/}}", report.LeafNodeType.String(), report.LeafNodeText))
} else {
if len(report.ContainerHierarchyTexts) > 0 {
r.emitBlock(r.cycleJoin(report.ContainerHierarchyTexts, " "))
indentation = 1
}
line := r.fi(indentation, "{{bold}}%s{{/}}", report.LeafNodeText)
labels := report.Labels()
if len(labels) > 0 {
line += r.f(" {{coral}}[%s]{{/}}", strings.Join(labels, ", "))
}
r.emitBlock(line)
}
r.emitBlock(r.fi(indentation, "{{gray}}%s{{/}}", report.LeafNodeLocation))
}
func (r *DefaultReporter) DidRun(report types.SpecReport) {
v := r.conf.Verbosity()
var header, highlightColor string
includeRuntime, emitGinkgoWriterOutput, stream, denoter := true, true, false, r.specDenoter
succinctLocationBlock := v.Is(types.VerbosityLevelSuccinct)
hasGW := report.CapturedGinkgoWriterOutput != ""
hasStd := report.CapturedStdOutErr != ""
hasEmittableReports := report.ReportEntries.HasVisibility(types.ReportEntryVisibilityAlways) || (report.ReportEntries.HasVisibility(types.ReportEntryVisibilityFailureOrVerbose) && (!report.Failure.IsZero() || v.GTE(types.VerbosityLevelVerbose)))
if report.LeafNodeType.Is(types.NodeTypesForSuiteLevelNodes) {
denoter = fmt.Sprintf("[%s]", report.LeafNodeType)
}
highlightColor = r.highlightColorForState(report.State)
switch report.State {
case types.SpecStatePassed:
succinctLocationBlock = v.LT(types.VerbosityLevelVerbose)
emitGinkgoWriterOutput = (r.conf.AlwaysEmitGinkgoWriter || v.GTE(types.VerbosityLevelVerbose)) && hasGW
if report.LeafNodeType.Is(types.NodeTypesForSuiteLevelNodes) {
if v.GTE(types.VerbosityLevelVerbose) || hasStd || hasEmittableReports {
header = fmt.Sprintf("%s PASSED", denoter)
} else {
return
}
} else {
header, stream = denoter, true
if report.NumAttempts > 1 && report.MaxFlakeAttempts > 1 {
header, stream = fmt.Sprintf("%s [FLAKEY TEST - TOOK %d ATTEMPTS TO PASS]", r.retryDenoter, report.NumAttempts), false
}
if report.RunTime > r.conf.SlowSpecThreshold {
header, stream = fmt.Sprintf("%s [SLOW TEST]", header), false
}
}
if hasStd || emitGinkgoWriterOutput || hasEmittableReports {
stream = false
}
case types.SpecStatePending:
includeRuntime, emitGinkgoWriterOutput = false, false
if v.Is(types.VerbosityLevelSuccinct) {
header, stream = "P", true
} else {
header, succinctLocationBlock = "P [PENDING]", v.LT(types.VerbosityLevelVeryVerbose)
}
case types.SpecStateSkipped:
if report.Failure.Message != "" || v.Is(types.VerbosityLevelVeryVerbose) {
header = "S [SKIPPED]"
} else {
header, stream = "S", true
}
case types.SpecStateFailed:
header = fmt.Sprintf("%s [FAILED]", denoter)
case types.SpecStateTimedout:
header = fmt.Sprintf("%s [TIMEDOUT]", denoter)
case types.SpecStatePanicked:
header = fmt.Sprintf("%s! [PANICKED]", denoter)
case types.SpecStateInterrupted:
header = fmt.Sprintf("%s! [INTERRUPTED]", denoter)
case types.SpecStateAborted:
header = fmt.Sprintf("%s! [ABORTED]", denoter)
}
if report.State.Is(types.SpecStateFailureStates) && report.MaxMustPassRepeatedly > 1 {
header, stream = fmt.Sprintf("%s DURING REPETITION #%d", header, report.NumAttempts), false
}
// Emit stream and return
if stream {
r.emit(r.f(highlightColor + header + "{{/}}"))
return
}
// Emit header
r.emitDelimiter()
if includeRuntime {
header = r.f("%s [%.3f seconds]", header, report.RunTime.Seconds())
}
r.emitBlock(r.f(highlightColor + header + "{{/}}"))
// Emit Code Location Block
r.emitBlock(r.codeLocationBlock(report, highlightColor, succinctLocationBlock, false))
//Emit Stdout/Stderr Output
if hasStd {
r.emitBlock("\n")
r.emitBlock(r.fi(1, "{{gray}}Begin Captured StdOut/StdErr Output >>{{/}}"))
r.emitBlock(r.fi(2, "%s", report.CapturedStdOutErr))
r.emitBlock(r.fi(1, "{{gray}}<< End Captured StdOut/StdErr Output{{/}}"))
}
//Emit Captured GinkgoWriter Output
if emitGinkgoWriterOutput && hasGW {
r.emitBlock("\n")
r.emitGinkgoWriterOutput(1, report.CapturedGinkgoWriterOutput, 0)
}
if hasEmittableReports {
r.emitBlock("\n")
r.emitBlock(r.fi(1, "{{gray}}Begin Report Entries >>{{/}}"))
reportEntries := report.ReportEntries.WithVisibility(types.ReportEntryVisibilityAlways)
if !report.Failure.IsZero() || v.GTE(types.VerbosityLevelVerbose) {
reportEntries = report.ReportEntries.WithVisibility(types.ReportEntryVisibilityAlways, types.ReportEntryVisibilityFailureOrVerbose)
}
for _, entry := range reportEntries {
r.emitBlock(r.fi(2, "{{bold}}"+entry.Name+"{{gray}} - %s @ %s{{/}}", entry.Location, entry.Time.Format(types.GINKGO_TIME_FORMAT)))
if representation := entry.StringRepresentation(); representation != "" {
r.emitBlock(r.fi(3, representation))
}
}
r.emitBlock(r.fi(1, "{{gray}}<< End Report Entries{{/}}"))
}
// Emit Failure Message
if !report.Failure.IsZero() {
r.emitBlock("\n")
r.EmitFailure(1, report.State, report.Failure, false)
}
if len(report.AdditionalFailures) > 0 {
if v.GTE(types.VerbosityLevelVerbose) {
r.emitBlock("\n")
r.emitBlock(r.fi(1, "{{bold}}There were additional failures detected after the initial failure:{{/}}"))
for i, additionalFailure := range report.AdditionalFailures {
r.EmitFailure(2, additionalFailure.State, additionalFailure.Failure, true)
if i < len(report.AdditionalFailures)-1 {
r.emitBlock(r.fi(2, "{{gray}}%s{{/}}", strings.Repeat("-", 10)))
}
}
} else {
r.emitBlock("\n")
r.emitBlock(r.fi(1, "{{bold}}There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode:{{/}}"))
for _, additionalFailure := range report.AdditionalFailures {
r.emitBlock(r.fi(2, r.highlightColorForState(additionalFailure.State)+"[%s]{{/}} in [%s] at %s",
r.humanReadableState(additionalFailure.State),
additionalFailure.Failure.FailureNodeType,
additionalFailure.Failure.Location,
))
}
}
}
r.emitDelimiter()
}
func (r *DefaultReporter) highlightColorForState(state types.SpecState) string {
switch state {
case types.SpecStatePassed:
return "{{green}}"
case types.SpecStatePending:
return "{{yellow}}"
case types.SpecStateSkipped:
return "{{cyan}}"
case types.SpecStateFailed:
return "{{red}}"
case types.SpecStateTimedout:
return "{{orange}}"
case types.SpecStatePanicked:
return "{{magenta}}"
case types.SpecStateInterrupted:
return "{{orange}}"
case types.SpecStateAborted:
return "{{coral}}"
default:
return "{{gray}}"
}
}
func (r *DefaultReporter) humanReadableState(state types.SpecState) string {
return strings.ToUpper(state.String())
}
func (r *DefaultReporter) EmitFailure(indent uint, state types.SpecState, failure types.Failure, includeState bool) {
highlightColor := r.highlightColorForState(state)
if includeState {
r.emitBlock(r.fi(indent, highlightColor+"[%s]{{/}}", r.humanReadableState(state)))
}
r.emitBlock(r.fi(indent, highlightColor+"%s{{/}}", failure.Message))
r.emitBlock(r.fi(indent, highlightColor+"In {{bold}}[%s]{{/}}"+highlightColor+" at: {{bold}}%s{{/}}\n", failure.FailureNodeType, failure.Location))
if failure.ForwardedPanic != "" {
r.emitBlock("\n")
r.emitBlock(r.fi(indent, highlightColor+"%s{{/}}", failure.ForwardedPanic))
}
if r.conf.FullTrace || failure.ForwardedPanic != "" {
r.emitBlock("\n")
r.emitBlock(r.fi(indent, highlightColor+"Full Stack Trace{{/}}"))
r.emitBlock(r.fi(indent+1, "%s", failure.Location.FullStackTrace))
}
if !failure.ProgressReport.IsZero() {
r.emitBlock("\n")
r.emitProgressReport(indent, false, failure.ProgressReport)
}
}
func (r *DefaultReporter) SuiteDidEnd(report types.Report) {
failures := report.SpecReports.WithState(types.SpecStateFailureStates)
if len(failures) > 0 {
r.emitBlock("\n\n")
r.emitBlock("\n")
if len(failures) > 1 {
r.emitBlock(r.f("{{red}}{{bold}}Summarizing %d Failures:{{/}}", len(failures)))
} else {
@ -338,7 +120,7 @@ func (r *DefaultReporter) SuiteDidEnd(report types.Report) {
case types.SpecStateInterrupted:
highlightColor, heading = "{{orange}}", "[INTERRUPTED]"
}
locationBlock := r.codeLocationBlock(specReport, highlightColor, true, true)
locationBlock := r.codeLocationBlock(specReport, highlightColor, false, true)
r.emitBlock(r.fi(1, highlightColor+"%s{{/}} %s", heading, locationBlock))
}
}
@ -387,14 +169,271 @@ func (r *DefaultReporter) SuiteDidEnd(report types.Report) {
}
}
func (r *DefaultReporter) WillRun(report types.SpecReport) {
v := r.conf.Verbosity()
if v.LT(types.VerbosityLevelVerbose) || report.State.Is(types.SpecStatePending|types.SpecStateSkipped) || report.RunningInParallel {
return
}
r.emitDelimiter(0)
r.emitBlock(r.f(r.codeLocationBlock(report, "{{/}}", v.Is(types.VerbosityLevelVeryVerbose), false)))
}
func (r *DefaultReporter) DidRun(report types.SpecReport) {
v := r.conf.Verbosity()
inParallel := report.RunningInParallel
header := r.specDenoter
if report.LeafNodeType.Is(types.NodeTypesForSuiteLevelNodes) {
header = fmt.Sprintf("[%s]", report.LeafNodeType)
}
highlightColor := r.highlightColorForState(report.State)
// have we already been streaming the timeline?
timelineHasBeenStreaming := v.GTE(types.VerbosityLevelVerbose) && !inParallel
// should we show the timeline?
var timeline types.Timeline
showTimeline := !timelineHasBeenStreaming && (v.GTE(types.VerbosityLevelVerbose) || report.Failed())
if showTimeline {
timeline = report.Timeline().WithoutHiddenReportEntries()
keepVeryVerboseSpecEvents := v.Is(types.VerbosityLevelVeryVerbose) ||
(v.Is(types.VerbosityLevelVerbose) && r.conf.ShowNodeEvents) ||
(report.Failed() && r.conf.ShowNodeEvents)
if !keepVeryVerboseSpecEvents {
timeline = timeline.WithoutVeryVerboseSpecEvents()
}
if len(timeline) == 0 && report.CapturedGinkgoWriterOutput == "" {
// the timeline is completely empty - don't show it
showTimeline = false
}
if v.LT(types.VerbosityLevelVeryVerbose) && report.CapturedGinkgoWriterOutput == "" && len(timeline) > 0 {
//if we aren't -vv and the timeline only has a single failure, don't show it as it will appear at the end of the report
failure, isFailure := timeline[0].(types.Failure)
if isFailure && (len(timeline) == 1 || (len(timeline) == 2 && failure.AdditionalFailure != nil)) {
showTimeline = false
}
}
}
// should we have a separate section for always-visible reports?
showSeparateVisibilityAlwaysReportsSection := !timelineHasBeenStreaming && !showTimeline && report.ReportEntries.HasVisibility(types.ReportEntryVisibilityAlways)
// should we have a separate section for captured stdout/stderr
showSeparateStdSection := inParallel && (report.CapturedStdOutErr != "")
// given all that - do we have any actual content to show? or are we a single denoter in a stream?
reportHasContent := v.Is(types.VerbosityLevelVeryVerbose) || showTimeline || showSeparateVisibilityAlwaysReportsSection || showSeparateStdSection || report.Failed() || (v.Is(types.VerbosityLevelVerbose) && !report.State.Is(types.SpecStateSkipped))
// should we show a runtime?
includeRuntime := !report.State.Is(types.SpecStateSkipped|types.SpecStatePending) || (report.State.Is(types.SpecStateSkipped) && report.Failure.Message != "")
// should we show the codelocation block?
showCodeLocation := !timelineHasBeenStreaming || !report.State.Is(types.SpecStatePassed)
switch report.State {
case types.SpecStatePassed:
if report.LeafNodeType.Is(types.NodeTypesForSuiteLevelNodes) && !reportHasContent {
return
}
if report.LeafNodeType.Is(types.NodeTypesForSuiteLevelNodes) {
header = fmt.Sprintf("%s PASSED", header)
}
if report.NumAttempts > 1 && report.MaxFlakeAttempts > 1 {
header, reportHasContent = fmt.Sprintf("%s [FLAKEY TEST - TOOK %d ATTEMPTS TO PASS]", r.retryDenoter, report.NumAttempts), true
}
case types.SpecStatePending:
header = "P"
if v.GT(types.VerbosityLevelSuccinct) {
header, reportHasContent = "P [PENDING]", true
}
case types.SpecStateSkipped:
header = "S"
if v.Is(types.VerbosityLevelVeryVerbose) || (v.Is(types.VerbosityLevelVerbose) && report.Failure.Message != "") {
header, reportHasContent = "S [SKIPPED]", true
}
default:
header = fmt.Sprintf("%s [%s]", header, r.humanReadableState(report.State))
if report.MaxMustPassRepeatedly > 1 {
header = fmt.Sprintf("%s DURING REPETITION #%d", header, report.NumAttempts)
}
}
// If we have no content to show, just emit the header and return
if !reportHasContent {
r.emit(r.f(highlightColor + header + "{{/}}"))
return
}
if includeRuntime {
header = r.f("%s [%.3f seconds]", header, report.RunTime.Seconds())
}
// Emit header
if !timelineHasBeenStreaming {
r.emitDelimiter(0)
}
r.emitBlock(r.f(highlightColor + header + "{{/}}"))
if showCodeLocation {
r.emitBlock(r.codeLocationBlock(report, highlightColor, v.Is(types.VerbosityLevelVeryVerbose), false))
}
//Emit Stdout/Stderr Output
if showSeparateStdSection {
r.emitBlock("\n")
r.emitBlock(r.fi(1, "{{gray}}Captured StdOut/StdErr Output >>{{/}}"))
r.emitBlock(r.fi(1, "%s", report.CapturedStdOutErr))
r.emitBlock(r.fi(1, "{{gray}}<< Captured StdOut/StdErr Output{{/}}"))
}
if showSeparateVisibilityAlwaysReportsSection {
r.emitBlock("\n")
r.emitBlock(r.fi(1, "{{gray}}Report Entries >>{{/}}"))
for _, entry := range report.ReportEntries.WithVisibility(types.ReportEntryVisibilityAlways) {
r.emitReportEntry(1, entry)
}
r.emitBlock(r.fi(1, "{{gray}}<< Report Entries{{/}}"))
}
if showTimeline {
r.emitBlock("\n")
r.emitBlock(r.fi(1, "{{gray}}Timeline >>{{/}}"))
r.emitTimeline(1, report, timeline)
r.emitBlock(r.fi(1, "{{gray}}<< Timeline{{/}}"))
}
// Emit Failure Message
if !report.Failure.IsZero() && !v.Is(types.VerbosityLevelVeryVerbose) {
r.emitBlock("\n")
r.emitFailure(1, report.State, report.Failure, true)
if len(report.AdditionalFailures) > 0 {
r.emitBlock(r.fi(1, "\nThere were {{bold}}{{red}}additional failures{{/}} detected. To view them in detail run {{bold}}ginkgo -vv{{/}}"))
}
}
r.emitDelimiter(0)
}
func (r *DefaultReporter) highlightColorForState(state types.SpecState) string {
switch state {
case types.SpecStatePassed:
return "{{green}}"
case types.SpecStatePending:
return "{{yellow}}"
case types.SpecStateSkipped:
return "{{cyan}}"
case types.SpecStateFailed:
return "{{red}}"
case types.SpecStateTimedout:
return "{{orange}}"
case types.SpecStatePanicked:
return "{{magenta}}"
case types.SpecStateInterrupted:
return "{{orange}}"
case types.SpecStateAborted:
return "{{coral}}"
default:
return "{{gray}}"
}
}
func (r *DefaultReporter) humanReadableState(state types.SpecState) string {
return strings.ToUpper(state.String())
}
func (r *DefaultReporter) emitTimeline(indent uint, report types.SpecReport, timeline types.Timeline) {
isVeryVerbose := r.conf.Verbosity().Is(types.VerbosityLevelVeryVerbose)
gw := report.CapturedGinkgoWriterOutput
cursor := 0
for _, entry := range timeline {
tl := entry.GetTimelineLocation()
if tl.Offset < len(gw) {
r.emit(r.fi(indent, "%s", gw[cursor:tl.Offset]))
cursor = tl.Offset
} else if cursor < len(gw) {
r.emit(r.fi(indent, "%s", gw[cursor:]))
cursor = len(gw)
}
switch x := entry.(type) {
case types.Failure:
if isVeryVerbose {
r.emitFailure(indent, report.State, x, false)
} else {
r.emitShortFailure(indent, report.State, x)
}
case types.AdditionalFailure:
if isVeryVerbose {
r.emitFailure(indent, x.State, x.Failure, true)
} else {
r.emitShortFailure(indent, x.State, x.Failure)
}
case types.ReportEntry:
r.emitReportEntry(indent, x)
case types.ProgressReport:
r.emitProgressReport(indent, false, x)
case types.SpecEvent:
if isVeryVerbose || !x.IsOnlyVisibleAtVeryVerbose() || r.conf.ShowNodeEvents {
r.emitSpecEvent(indent, x, isVeryVerbose)
}
}
}
if cursor < len(gw) {
r.emit(r.fi(indent, "%s", gw[cursor:]))
}
}
func (r *DefaultReporter) EmitFailure(state types.SpecState, failure types.Failure) {
if r.conf.Verbosity().Is(types.VerbosityLevelVerbose) {
r.emitShortFailure(1, state, failure)
} else if r.conf.Verbosity().Is(types.VerbosityLevelVeryVerbose) {
r.emitFailure(1, state, failure, true)
}
}
func (r *DefaultReporter) emitShortFailure(indent uint, state types.SpecState, failure types.Failure) {
r.emitBlock(r.fi(indent, r.highlightColorForState(state)+"[%s]{{/}} in [%s] - %s {{gray}}@ %s{{/}}",
r.humanReadableState(state),
failure.FailureNodeType,
failure.Location,
failure.TimelineLocation.Time.Format(types.GINKGO_TIME_FORMAT),
))
}
func (r *DefaultReporter) emitFailure(indent uint, state types.SpecState, failure types.Failure, includeAdditionalFailure bool) {
highlightColor := r.highlightColorForState(state)
r.emitBlock(r.fi(indent, highlightColor+"[%s] %s{{/}}", r.humanReadableState(state), failure.Message))
r.emitBlock(r.fi(indent, highlightColor+"In {{bold}}[%s]{{/}}"+highlightColor+" at: {{bold}}%s{{/}} {{gray}}@ %s{{/}}\n", failure.FailureNodeType, failure.Location, failure.TimelineLocation.Time.Format(types.GINKGO_TIME_FORMAT)))
if failure.ForwardedPanic != "" {
r.emitBlock("\n")
r.emitBlock(r.fi(indent, highlightColor+"%s{{/}}", failure.ForwardedPanic))
}
if r.conf.FullTrace || failure.ForwardedPanic != "" {
r.emitBlock("\n")
r.emitBlock(r.fi(indent, highlightColor+"Full Stack Trace{{/}}"))
r.emitBlock(r.fi(indent+1, "%s", failure.Location.FullStackTrace))
}
if !failure.ProgressReport.IsZero() {
r.emitBlock("\n")
r.emitProgressReport(indent, false, failure.ProgressReport)
}
if failure.AdditionalFailure != nil && includeAdditionalFailure {
r.emitBlock("\n")
r.emitFailure(indent, failure.AdditionalFailure.State, failure.AdditionalFailure.Failure, true)
}
}
func (r *DefaultReporter) EmitProgressReport(report types.ProgressReport) {
r.emitDelimiter()
r.emitDelimiter(1)
if report.RunningInParallel {
r.emit(r.f("{{coral}}Progress Report for Ginkgo Process #{{bold}}%d{{/}}\n", report.ParallelProcess))
r.emit(r.fi(1, "{{coral}}Progress Report for Ginkgo Process #{{bold}}%d{{/}}\n", report.ParallelProcess))
}
r.emitProgressReport(0, true, report)
r.emitDelimiter()
shouldEmitGW := report.RunningInParallel || r.conf.Verbosity().LT(types.VerbosityLevelVerbose)
r.emitProgressReport(1, shouldEmitGW, report)
r.emitDelimiter(1)
}
func (r *DefaultReporter) emitProgressReport(indent uint, emitGinkgoWriterOutput bool, report types.ProgressReport) {
@ -409,7 +448,7 @@ func (r *DefaultReporter) emitProgressReport(indent uint, emitGinkgoWriterOutput
r.emit(" ")
subjectIndent = 0
}
r.emit(r.fi(subjectIndent, "{{bold}}{{orange}}%s{{/}} (Spec Runtime: %s)\n", report.LeafNodeText, report.Time.Sub(report.SpecStartTime).Round(time.Millisecond)))
r.emit(r.fi(subjectIndent, "{{bold}}{{orange}}%s{{/}} (Spec Runtime: %s)\n", report.LeafNodeText, report.Time().Sub(report.SpecStartTime).Round(time.Millisecond)))
r.emit(r.fi(indent+1, "{{gray}}%s{{/}}\n", report.LeafNodeLocation))
indent += 1
}
@ -419,12 +458,12 @@ func (r *DefaultReporter) emitProgressReport(indent uint, emitGinkgoWriterOutput
r.emit(r.f(" {{bold}}{{orange}}%s{{/}}", report.CurrentNodeText))
}
r.emit(r.f(" (Node Runtime: %s)\n", report.Time.Sub(report.CurrentNodeStartTime).Round(time.Millisecond)))
r.emit(r.f(" (Node Runtime: %s)\n", report.Time().Sub(report.CurrentNodeStartTime).Round(time.Millisecond)))
r.emit(r.fi(indent+1, "{{gray}}%s{{/}}\n", report.CurrentNodeLocation))
indent += 1
}
if report.CurrentStepText != "" {
r.emit(r.fi(indent, "At {{bold}}{{orange}}[By Step] %s{{/}} (Step Runtime: %s)\n", report.CurrentStepText, report.Time.Sub(report.CurrentStepStartTime).Round(time.Millisecond)))
r.emit(r.fi(indent, "At {{bold}}{{orange}}[By Step] %s{{/}} (Step Runtime: %s)\n", report.CurrentStepText, report.Time().Sub(report.CurrentStepStartTime).Round(time.Millisecond)))
r.emit(r.fi(indent+1, "{{gray}}%s{{/}}\n", report.CurrentStepLocation))
indent += 1
}
@ -433,9 +472,19 @@ func (r *DefaultReporter) emitProgressReport(indent uint, emitGinkgoWriterOutput
indent -= 1
}
if emitGinkgoWriterOutput && report.CapturedGinkgoWriterOutput != "" && (report.RunningInParallel || r.conf.Verbosity().LT(types.VerbosityLevelVerbose)) {
if emitGinkgoWriterOutput && report.CapturedGinkgoWriterOutput != "" {
r.emit("\n")
r.emitGinkgoWriterOutput(indent, report.CapturedGinkgoWriterOutput, 10)
r.emitBlock(r.fi(indent, "{{gray}}Begin Captured GinkgoWriter Output >>{{/}}"))
limit, lines := 10, strings.Split(report.CapturedGinkgoWriterOutput, "\n")
if len(lines) <= limit {
r.emitBlock(r.fi(indent+1, "%s", report.CapturedGinkgoWriterOutput))
} else {
r.emitBlock(r.fi(indent+1, "{{gray}}...{{/}}"))
for _, line := range lines[len(lines)-limit-1:] {
r.emitBlock(r.fi(indent+1, "%s", line))
}
}
r.emitBlock(r.fi(indent, "{{gray}}<< End Captured GinkgoWriter Output{{/}}"))
}
if !report.SpecGoroutine().IsZero() {
@ -471,22 +520,48 @@ func (r *DefaultReporter) emitProgressReport(indent uint, emitGinkgoWriterOutput
}
}
func (r *DefaultReporter) emitGinkgoWriterOutput(indent uint, output string, limit int) {
r.emitBlock(r.fi(indent, "{{gray}}Begin Captured GinkgoWriter Output >>{{/}}"))
if limit == 0 {
r.emitBlock(r.fi(indent+1, "%s", output))
} else {
lines := strings.Split(output, "\n")
if len(lines) <= limit {
r.emitBlock(r.fi(indent+1, "%s", output))
} else {
r.emitBlock(r.fi(indent+1, "{{gray}}...{{/}}"))
for _, line := range lines[len(lines)-limit-1:] {
r.emitBlock(r.fi(indent+1, "%s", line))
}
}
func (r *DefaultReporter) EmitReportEntry(entry types.ReportEntry) {
if r.conf.Verbosity().LT(types.VerbosityLevelVerbose) || entry.Visibility == types.ReportEntryVisibilityNever {
return
}
r.emitReportEntry(1, entry)
}
func (r *DefaultReporter) emitReportEntry(indent uint, entry types.ReportEntry) {
r.emitBlock(r.fi(indent, "{{bold}}"+entry.Name+"{{gray}} - %s @ %s{{/}}", entry.Location, entry.Time.Format(types.GINKGO_TIME_FORMAT)))
if representation := entry.StringRepresentation(); representation != "" {
r.emitBlock(r.fi(indent+1, representation))
}
}
func (r *DefaultReporter) EmitSpecEvent(event types.SpecEvent) {
v := r.conf.Verbosity()
if v.Is(types.VerbosityLevelVeryVerbose) || (v.Is(types.VerbosityLevelVerbose) && (r.conf.ShowNodeEvents || !event.IsOnlyVisibleAtVeryVerbose())) {
r.emitSpecEvent(1, event, r.conf.Verbosity().Is(types.VerbosityLevelVeryVerbose))
}
}
func (r *DefaultReporter) emitSpecEvent(indent uint, event types.SpecEvent, includeLocation bool) {
location := ""
if includeLocation {
location = fmt.Sprintf("- %s ", event.CodeLocation.String())
}
switch event.SpecEventType {
case types.SpecEventInvalid:
return
case types.SpecEventByStart:
r.emitBlock(r.fi(indent, "{{bold}}STEP:{{/}} %s {{gray}}%s@ %s{{/}}", event.Message, location, event.TimelineLocation.Time.Format(types.GINKGO_TIME_FORMAT)))
case types.SpecEventByEnd:
r.emitBlock(r.fi(indent, "{{bold}}END STEP:{{/}} %s {{gray}}%s@ %s (%s){{/}}", event.Message, location, event.TimelineLocation.Time.Format(types.GINKGO_TIME_FORMAT), event.Duration.Round(time.Millisecond)))
case types.SpecEventNodeStart:
r.emitBlock(r.fi(indent, "> Enter {{bold}}[%s]{{/}} %s {{gray}}%s@ %s{{/}}", event.NodeType.String(), event.Message, location, event.TimelineLocation.Time.Format(types.GINKGO_TIME_FORMAT)))
case types.SpecEventNodeEnd:
r.emitBlock(r.fi(indent, "< Exit {{bold}}[%s]{{/}} %s {{gray}}%s@ %s (%s){{/}}", event.NodeType.String(), event.Message, location, event.TimelineLocation.Time.Format(types.GINKGO_TIME_FORMAT), event.Duration.Round(time.Millisecond)))
case types.SpecEventSpecRepeat:
r.emitBlock(r.fi(indent, "\n{{bold}}Attempt #%d {{green}}Passed{{/}}{{bold}}. Repeating %s{{/}} {{gray}}@ %s{{/}}\n\n", event.Attempt, r.retryDenoter, event.TimelineLocation.Time.Format(types.GINKGO_TIME_FORMAT)))
case types.SpecEventSpecRetry:
r.emitBlock(r.fi(indent, "\n{{bold}}Attempt #%d {{red}}Failed{{/}}{{bold}}. Retrying %s{{/}} {{gray}}@ %s{{/}}\n\n", event.Attempt, r.retryDenoter, event.TimelineLocation.Time.Format(types.GINKGO_TIME_FORMAT)))
}
r.emitBlock(r.fi(indent, "{{gray}}<< End Captured GinkgoWriter Output{{/}}"))
}
func (r *DefaultReporter) emitGoroutines(indent uint, goroutines ...types.Goroutine) {
@ -563,11 +638,11 @@ func (r *DefaultReporter) emitBlock(s string) {
}
}
func (r *DefaultReporter) emitDelimiter() {
func (r *DefaultReporter) emitDelimiter(indent uint) {
if r.lastEmissionWasDelimiter {
return
}
r.emitBlock(r.f("{{gray}}%s{{/}}", strings.Repeat("-", 30)))
r.emitBlock(r.fi(indent, "{{gray}}%s{{/}}", strings.Repeat("-", 30)))
r.lastEmissionWasDelimiter = true
}
@ -584,13 +659,14 @@ func (r *DefaultReporter) cycleJoin(elements []string, joiner string) string {
return r.formatter.CycleJoin(elements, joiner, []string{"{{/}}", "{{gray}}"})
}
func (r *DefaultReporter) codeLocationBlock(report types.SpecReport, highlightColor string, succinct bool, usePreciseFailureLocation bool) string {
func (r *DefaultReporter) codeLocationBlock(report types.SpecReport, highlightColor string, veryVerbose bool, usePreciseFailureLocation bool) string {
texts, locations, labels := []string{}, []types.CodeLocation{}, [][]string{}
texts, locations, labels = append(texts, report.ContainerHierarchyTexts...), append(locations, report.ContainerHierarchyLocations...), append(labels, report.ContainerHierarchyLabels...)
if report.LeafNodeType.Is(types.NodeTypesForSuiteLevelNodes) {
texts = append(texts, r.f("[%s] %s", report.LeafNodeType, report.LeafNodeText))
} else {
texts = append(texts, report.LeafNodeText)
texts = append(texts, r.f(report.LeafNodeText))
}
labels = append(labels, report.LeafNodeLabels)
locations = append(locations, report.LeafNodeLocation)
@ -600,24 +676,58 @@ func (r *DefaultReporter) codeLocationBlock(report types.SpecReport, highlightCo
failureLocation = report.Failure.Location
}
highlightIndex := -1
switch report.Failure.FailureNodeContext {
case types.FailureNodeAtTopLevel:
texts = append([]string{r.f(highlightColor+"{{bold}}TOP-LEVEL [%s]{{/}}", report.Failure.FailureNodeType)}, texts...)
texts = append([]string{fmt.Sprintf("TOP-LEVEL [%s]", report.Failure.FailureNodeType)}, texts...)
locations = append([]types.CodeLocation{failureLocation}, locations...)
labels = append([][]string{{}}, labels...)
highlightIndex = 0
case types.FailureNodeInContainer:
i := report.Failure.FailureNodeContainerIndex
texts[i] = r.f(highlightColor+"{{bold}}%s [%s]{{/}}", texts[i], report.Failure.FailureNodeType)
texts[i] = fmt.Sprintf("%s [%s]", texts[i], report.Failure.FailureNodeType)
locations[i] = failureLocation
highlightIndex = i
case types.FailureNodeIsLeafNode:
i := len(texts) - 1
texts[i] = r.f(highlightColor+"{{bold}}[%s] %s{{/}}", report.LeafNodeType, report.LeafNodeText)
texts[i] = fmt.Sprintf("[%s] %s", report.LeafNodeType, report.LeafNodeText)
locations[i] = failureLocation
highlightIndex = i
default:
// there is no failure, so we highlight the leaf node
highlightIndex = len(texts) - 1
}
out := ""
if succinct {
out += r.f("%s", r.cycleJoin(texts, " "))
if veryVerbose {
for i := range texts {
if i == highlightIndex {
out += r.fi(uint(i), highlightColor+"{{bold}}%s{{/}}", texts[i])
} else {
out += r.fi(uint(i), "%s", texts[i])
}
if len(labels[i]) > 0 {
out += r.f(" {{coral}}[%s]{{/}}", strings.Join(labels[i], ", "))
}
out += "\n"
out += r.fi(uint(i), "{{gray}}%s{{/}}\n", locations[i])
}
} else {
for i := range texts {
style := "{{/}}"
if i%2 == 1 {
style = "{{gray}}"
}
if i == highlightIndex {
style = highlightColor + "{{bold}}"
}
out += r.f(style+"%s", texts[i])
if i < len(texts)-1 {
out += " "
} else {
out += r.f("{{/}}")
}
}
flattenedLabels := report.Labels()
if len(flattenedLabels) > 0 {
out += r.f(" {{coral}}[%s]{{/}}", strings.Join(flattenedLabels, ", "))
@ -626,17 +736,15 @@ func (r *DefaultReporter) codeLocationBlock(report types.SpecReport, highlightCo
if usePreciseFailureLocation {
out += r.f("{{gray}}%s{{/}}", failureLocation)
} else {
out += r.f("{{gray}}%s{{/}}", locations[len(locations)-1])
}
} else {
for i := range texts {
out += r.fi(uint(i), "%s", texts[i])
if len(labels[i]) > 0 {
out += r.f(" {{coral}}[%s]{{/}}", strings.Join(labels[i], ", "))
leafLocation := locations[len(locations)-1]
if (report.Failure.FailureNodeLocation != types.CodeLocation{}) && (report.Failure.FailureNodeLocation != leafLocation) {
out += r.fi(1, highlightColor+"[%s]{{/}} {{gray}}%s{{/}}\n", report.Failure.FailureNodeType, report.Failure.FailureNodeLocation)
out += r.fi(1, "{{gray}}[%s] %s{{/}}", report.LeafNodeType, leafLocation)
} else {
out += r.f("{{gray}}%s{{/}}", leafLocation)
}
out += "\n"
out += r.fi(uint(i), "{{gray}}%s{{/}}\n", locations[i])
}
}
return out
}

View File

@ -35,7 +35,7 @@ func ReportViaDeprecatedReporter(reporter DeprecatedReporter, report types.Repor
FailOnPending: report.SuiteConfig.FailOnPending,
FailFast: report.SuiteConfig.FailFast,
FlakeAttempts: report.SuiteConfig.FlakeAttempts,
EmitSpecProgress: report.SuiteConfig.EmitSpecProgress,
EmitSpecProgress: false,
DryRun: report.SuiteConfig.DryRun,
ParallelNode: report.SuiteConfig.ParallelProcess,
ParallelTotal: report.SuiteConfig.ParallelTotal,

View File

@ -15,12 +15,29 @@ import (
"fmt"
"os"
"strings"
"time"
"github.com/onsi/ginkgo/v2/config"
"github.com/onsi/ginkgo/v2/types"
)
type JunitReportConfig struct {
// Spec States for which no timeline should be emitted for system-err
// set this to types.SpecStatePassed|types.SpecStateSkipped|types.SpecStatePending so that timelines are only emitted for failing specs
OmitTimelinesForSpecState types.SpecState
// Enable OmitFailureMessageAttr to prevent failure messages from appearing in the "message" attribute of the Failure and Error tags
OmitFailureMessageAttr bool
// Enable OmitCapturedStdOutErr to prevent captured stdout/stderr from appearing in system-out
OmitCapturedStdOutErr bool
// Enable OmitSpecLabels to prevent labels from appearing in the spec name
OmitSpecLabels bool
// Enable OmitLeafNodeType to prevent the spec leaf node type from appearing in the spec name
OmitLeafNodeType bool
}
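`GenerateJUnitReportWithConfig` and `JunitReportConfig` let callers trim the JUnit output. A hedged sketch of wiring it into a `ReportAfterSuite` node (the destination path is arbitrary):

```go
package demo_test

import (
	. "github.com/onsi/ginkgo/v2"
	"github.com/onsi/ginkgo/v2/reporters"
	"github.com/onsi/ginkgo/v2/types"
)

var _ = ReportAfterSuite("junit report", func(report Report) {
	err := reporters.GenerateJUnitReportWithConfig(report, "report.xml",
		reporters.JunitReportConfig{
			// emit timelines in system-err only for specs that failed
			OmitTimelinesForSpecState: types.SpecStatePassed | types.SpecStateSkipped | types.SpecStatePending,
			OmitLeafNodeType:          true,
		})
	if err != nil {
		Fail(err.Error())
	}
})
```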
type JUnitTestSuites struct {
XMLName xml.Name `xml:"testsuites"`
// Tests maps onto the total number of specs in all test suites (this includes any suite nodes such as BeforeSuite)
@ -128,6 +145,10 @@ type JUnitFailure struct {
}
func GenerateJUnitReport(report types.Report, dst string) error {
return GenerateJUnitReportWithConfig(report, dst, JunitReportConfig{})
}
func GenerateJUnitReportWithConfig(report types.Report, dst string, config JunitReportConfig) error {
suite := JUnitTestSuite{
Name: report.SuiteDescription,
Package: report.SuitePath,
@ -149,7 +170,6 @@ func GenerateJUnitReport(report types.Report, dst string) error {
{"FailOnPending", fmt.Sprintf("%t", report.SuiteConfig.FailOnPending)},
{"FailFast", fmt.Sprintf("%t", report.SuiteConfig.FailFast)},
{"FlakeAttempts", fmt.Sprintf("%d", report.SuiteConfig.FlakeAttempts)},
{"EmitSpecProgress", fmt.Sprintf("%t", report.SuiteConfig.EmitSpecProgress)},
{"DryRun", fmt.Sprintf("%t", report.SuiteConfig.DryRun)},
{"ParallelTotal", fmt.Sprintf("%d", report.SuiteConfig.ParallelTotal)},
{"OutputInterceptorMode", report.SuiteConfig.OutputInterceptorMode},
@ -158,21 +178,29 @@ func GenerateJUnitReport(report types.Report, dst string) error {
}
for _, spec := range report.SpecReports {
name := fmt.Sprintf("[%s]", spec.LeafNodeType)
if config.OmitLeafNodeType {
name = ""
}
if spec.FullText() != "" {
name = name + " " + spec.FullText()
}
labels := spec.Labels()
if len(labels) > 0 {
if len(labels) > 0 && !config.OmitSpecLabels {
name = name + " [" + strings.Join(labels, ", ") + "]"
}
name = strings.TrimSpace(name)
test := JUnitTestCase{
Name: name,
Classname: report.SuiteDescription,
Status: spec.State.String(),
Time: spec.RunTime.Seconds(),
SystemOut: systemOutForUnstructuredReporters(spec),
SystemErr: systemErrForUnstructuredReporters(spec),
}
if !spec.State.Is(config.OmitTimelinesForSpecState) {
test.SystemErr = systemErrForUnstructuredReporters(spec)
}
if !config.OmitCapturedStdOutErr {
test.SystemOut = systemOutForUnstructuredReporters(spec)
}
suite.Tests += 1
@ -193,6 +221,9 @@ func GenerateJUnitReport(report types.Report, dst string) error {
Type: "failed",
Description: failureDescriptionForUnstructuredReporters(spec),
}
if config.OmitFailureMessageAttr {
test.Failure.Message = ""
}
suite.Failures += 1
case types.SpecStateTimedout:
test.Failure = &JUnitFailure{
@ -200,6 +231,9 @@ func GenerateJUnitReport(report types.Report, dst string) error {
Type: "timedout",
Description: failureDescriptionForUnstructuredReporters(spec),
}
if config.OmitFailureMessageAttr {
test.Failure.Message = ""
}
suite.Failures += 1
case types.SpecStateInterrupted:
test.Error = &JUnitError{
@ -207,6 +241,9 @@ func GenerateJUnitReport(report types.Report, dst string) error {
Type: "interrupted",
Description: failureDescriptionForUnstructuredReporters(spec),
}
if config.OmitFailureMessageAttr {
test.Error.Message = ""
}
suite.Errors += 1
case types.SpecStateAborted:
test.Failure = &JUnitFailure{
@ -214,6 +251,9 @@ func GenerateJUnitReport(report types.Report, dst string) error {
Type: "aborted",
Description: failureDescriptionForUnstructuredReporters(spec),
}
if config.OmitFailureMessageAttr {
test.Failure.Message = ""
}
suite.Errors += 1
case types.SpecStatePanicked:
test.Error = &JUnitError{
@ -221,6 +261,9 @@ func GenerateJUnitReport(report types.Report, dst string) error {
Type: "panicked",
Description: failureDescriptionForUnstructuredReporters(spec),
}
if config.OmitFailureMessageAttr {
test.Error.Message = ""
}
suite.Errors += 1
}
@ -287,63 +330,21 @@ func MergeAndCleanupJUnitReports(sources []string, dst string) ([]string, error)
func failureDescriptionForUnstructuredReporters(spec types.SpecReport) string {
out := &strings.Builder{}
out.WriteString(spec.Failure.Location.String() + "\n")
out.WriteString(spec.Failure.Location.FullStackTrace)
if !spec.Failure.ProgressReport.IsZero() {
out.WriteString("\n")
NewDefaultReporter(types.ReporterConfig{NoColor: true}, out).EmitProgressReport(spec.Failure.ProgressReport)
}
NewDefaultReporter(types.ReporterConfig{NoColor: true, VeryVerbose: true}, out).emitFailure(0, spec.State, spec.Failure, true)
if len(spec.AdditionalFailures) > 0 {
out.WriteString("\nThere were additional failures detected after the initial failure:\n")
for i, additionalFailure := range spec.AdditionalFailures {
NewDefaultReporter(types.ReporterConfig{NoColor: true}, out).EmitFailure(0, additionalFailure.State, additionalFailure.Failure, true)
if i < len(spec.AdditionalFailures)-1 {
out.WriteString("----------\n")
}
}
out.WriteString("\nThere were additional failures detected after the initial failure. These are visible in the timeline\n")
}
return out.String()
}
func systemErrForUnstructuredReporters(spec types.SpecReport) string {
out := &strings.Builder{}
gw := spec.CapturedGinkgoWriterOutput
cursor := 0
for _, pr := range spec.ProgressReports {
if cursor < pr.GinkgoWriterOffset {
if pr.GinkgoWriterOffset < len(gw) {
out.WriteString(gw[cursor:pr.GinkgoWriterOffset])
cursor = pr.GinkgoWriterOffset
} else if cursor < len(gw) {
out.WriteString(gw[cursor:])
cursor = len(gw)
}
}
NewDefaultReporter(types.ReporterConfig{NoColor: true}, out).EmitProgressReport(pr)
}
if cursor < len(gw) {
out.WriteString(gw[cursor:])
}
NewDefaultReporter(types.ReporterConfig{NoColor: true, VeryVerbose: true}, out).emitTimeline(0, spec, spec.Timeline())
return out.String()
}
func systemOutForUnstructuredReporters(spec types.SpecReport) string {
systemOut := spec.CapturedStdOutErr
if len(spec.ReportEntries) > 0 {
systemOut += "\nReport Entries:\n"
for i, entry := range spec.ReportEntries {
systemOut += fmt.Sprintf("%s\n%s\n%s\n", entry.Name, entry.Location, entry.Time.Format(time.RFC3339Nano))
if representation := entry.StringRepresentation(); representation != "" {
systemOut += representation + "\n"
}
if i+1 < len(spec.ReportEntries) {
systemOut += "--\n"
}
}
}
return systemOut
return spec.CapturedStdOutErr
}
// Deprecated JUnitReporter (so folks can still compile their suites)

View File

@ -9,13 +9,21 @@ type Reporter interface {
WillRun(report types.SpecReport)
DidRun(report types.SpecReport)
SuiteDidEnd(report types.Report)
//Timeline emission
EmitFailure(state types.SpecState, failure types.Failure)
EmitProgressReport(progressReport types.ProgressReport)
EmitReportEntry(entry types.ReportEntry)
EmitSpecEvent(event types.SpecEvent)
}
type NoopReporter struct{}
func (n NoopReporter) SuiteWillBegin(report types.Report) {}
func (n NoopReporter) WillRun(report types.SpecReport) {}
func (n NoopReporter) DidRun(report types.SpecReport) {}
func (n NoopReporter) SuiteDidEnd(report types.Report) {}
func (n NoopReporter) EmitProgressReport(progressReport types.ProgressReport) {}
func (n NoopReporter) SuiteWillBegin(report types.Report) {}
func (n NoopReporter) WillRun(report types.SpecReport) {}
func (n NoopReporter) DidRun(report types.SpecReport) {}
func (n NoopReporter) SuiteDidEnd(report types.Report) {}
func (n NoopReporter) EmitFailure(state types.SpecState, failure types.Failure) {}
func (n NoopReporter) EmitProgressReport(progressReport types.ProgressReport) {}
func (n NoopReporter) EmitReportEntry(entry types.ReportEntry) {}
func (n NoopReporter) EmitSpecEvent(event types.SpecEvent) {}

View File

@ -35,7 +35,7 @@ func CurrentSpecReport() SpecReport {
}
/*
ReportEntryVisibility governs the visibility of ReportEntries in Ginkgo's console reporter
ReportEntryVisibility governs the visibility of ReportEntries in Ginkgo's console reporter
- ReportEntryVisibilityAlways: the default behavior - the ReportEntry is always emitted.
- ReportEntryVisibilityFailureOrVerbose: the ReportEntry is only emitted if the spec fails or if the tests are run with -v (similar to GinkgoWriter's behavior).
@ -50,9 +50,9 @@ const ReportEntryVisibilityAlways, ReportEntryVisibilityFailureOrVerbose, Report
/*
AddReportEntry generates and adds a new ReportEntry to the current spec's SpecReport.
It can take any of the following arguments:
- A single arbitrary object to attach as the Value of the ReportEntry. This object will be included in any generated reports and will be emitted to the console when the report is emitted.
- A ReportEntryVisibility enum to control the visibility of the ReportEntry
- An Offset or CodeLocation decoration to control the reported location of the ReportEntry
- A single arbitrary object to attach as the Value of the ReportEntry. This object will be included in any generated reports and will be emitted to the console when the report is emitted.
- A ReportEntryVisibility enum to control the visibility of the ReportEntry
- An Offset or CodeLocation decoration to control the reported location of the ReportEntry
If the Value object implements `fmt.Stringer`, its `String()` representation is used when emitting to the console.
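
A short usage sketch combining the arguments listed above (`dbState` is a hypothetical `fmt.Stringer`):

```go
It("records the database state", func() {
	// Only emitted if this spec fails or the suite runs with -v
	AddReportEntry("db-state", ReportEntryVisibilityFailureOrVerbose, dbState)
})
```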
@ -100,6 +100,25 @@ func ReportAfterEach(body func(SpecReport), args ...interface{}) bool {
return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeReportAfterEach, "", combinedArgs...))
}
/*
ReportBeforeSuite nodes are run at the beginning of the suite. ReportBeforeSuite nodes take a function that receives a suite Report.
They are called at the beginning of the suite, before any specs have run and any BeforeSuite or SynchronizedBeforeSuite nodes, and are passed in the initial report for the suite.
ReportBeforeSuite nodes must be created at the top-level (i.e. not nested in a Context/Describe/When node)
# When running in parallel, Ginkgo ensures that only one of the parallel nodes runs the ReportBeforeSuite
You cannot nest any other Ginkgo nodes within a ReportBeforeSuite node's closure.
You can learn more about ReportBeforeSuite here: https://onsi.github.io/ginkgo/#generating-reports-programmatically
You can learn more about Ginkgo's reporting infrastructure, including generating reports with the CLI here: https://onsi.github.io/ginkgo/#generating-machine-readable-reports
*/
func ReportBeforeSuite(body func(Report), args ...interface{}) bool {
combinedArgs := []interface{}{body}
combinedArgs = append(combinedArgs, args...)
return pushNode(internal.NewNode(deprecationTracker, types.NodeTypeReportBeforeSuite, "", combinedArgs...))
}
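
A minimal sketch of a ReportBeforeSuite node; only suite-level metadata such as PreRunStats is populated at this point:

```go
var _ = ReportBeforeSuite(func(report Report) {
	// report.SpecReports is still empty here; the pre-run stats are available
	fmt.Printf("about to run %d of %d specs\n",
		report.PreRunStats.SpecsThatWillRun, report.PreRunStats.TotalSpecs)
})
```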
/*
ReportAfterSuite nodes are run at the end of the suite. ReportAfterSuite nodes take a function that receives a suite Report.
@ -113,6 +132,7 @@ In addition to using ReportAfterSuite to programmatically generate suite reports
You cannot nest any other Ginkgo nodes within a ReportAfterSuite node's closure.
You can learn more about ReportAfterSuite here: https://onsi.github.io/ginkgo/#generating-reports-programmatically
You can learn more about Ginkgo's reporting infrastructure, including generating reports with the CLI here: https://onsi.github.io/ginkgo/#generating-machine-readable-reports
*/
func ReportAfterSuite(text string, body func(Report), args ...interface{}) bool {

View File

@ -13,7 +13,7 @@ import (
/*
The EntryDescription decorator allows you to pass a format string to DescribeTable() and Entry(). This format string is used to generate entry names via:
fmt.Sprintf(formatString, parameters...)
fmt.Sprintf(formatString, parameters...)
where parameters are the parameters passed into the entry.
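
For example, a table-level EntryDescription names any entry whose own description is nil (a sketch):

```go
DescribeTable("addition",
	func(a, b, sum int) {
		Expect(a + b).To(Equal(sum))
	},
	EntryDescription("%d + %d = %d"), // fmt.Sprintf over each entry's parameters
	Entry(nil, 1, 2, 3),              // nil description: falls back to the EntryDescription above
	Entry(nil, 0, 5, 5),
)
```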
@ -32,19 +32,20 @@ DescribeTable describes a table-driven spec.
For example:
DescribeTable("a simple table",
func(x int, y int, expected bool) {
Ω(x > y).Should(Equal(expected))
},
Entry("x > y", 1, 0, true),
Entry("x == y", 0, 0, false),
Entry("x < y", 0, 1, false),
)
DescribeTable("a simple table",
func(x int, y int, expected bool) {
Ω(x > y).Should(Equal(expected))
},
Entry("x > y", 1, 0, true),
Entry("x == y", 0, 0, false),
Entry("x < y", 0, 1, false),
)
You can learn more about DescribeTable here: https://onsi.github.io/ginkgo/#table-specs
And can explore some Table patterns here: https://onsi.github.io/ginkgo/#table-specs-patterns
*/
func DescribeTable(description string, args ...interface{}) bool {
GinkgoHelper()
generateTable(description, args...)
return true
}
@ -53,6 +54,7 @@ func DescribeTable(description string, args ...interface{}) bool {
You can focus a table with `FDescribeTable`. This is equivalent to `FDescribe`.
*/
func FDescribeTable(description string, args ...interface{}) bool {
GinkgoHelper()
args = append(args, internal.Focus)
generateTable(description, args...)
return true
@ -62,6 +64,7 @@ func FDescribeTable(description string, args ...interface{}) bool {
You can mark a table as pending with `PDescribeTable`. This is equivalent to `PDescribe`.
*/
func PDescribeTable(description string, args ...interface{}) bool {
GinkgoHelper()
args = append(args, internal.Pending)
generateTable(description, args...)
return true
@ -95,26 +98,29 @@ If you want to generate interruptible specs simply write a Table function that a
You can learn more about Entry here: https://onsi.github.io/ginkgo/#table-specs
*/
func Entry(description interface{}, args ...interface{}) TableEntry {
GinkgoHelper()
decorations, parameters := internal.PartitionDecorations(args...)
return TableEntry{description: description, decorations: decorations, parameters: parameters, codeLocation: types.NewCodeLocation(1)}
return TableEntry{description: description, decorations: decorations, parameters: parameters, codeLocation: types.NewCodeLocation(0)}
}
/*
You can focus a particular entry with FEntry. This is equivalent to FIt.
*/
func FEntry(description interface{}, args ...interface{}) TableEntry {
GinkgoHelper()
decorations, parameters := internal.PartitionDecorations(args...)
decorations = append(decorations, internal.Focus)
return TableEntry{description: description, decorations: decorations, parameters: parameters, codeLocation: types.NewCodeLocation(1)}
return TableEntry{description: description, decorations: decorations, parameters: parameters, codeLocation: types.NewCodeLocation(0)}
}
/*
You can mark a particular entry as pending with PEntry. This is equivalent to PIt.
*/
func PEntry(description interface{}, args ...interface{}) TableEntry {
GinkgoHelper()
decorations, parameters := internal.PartitionDecorations(args...)
decorations = append(decorations, internal.Pending)
return TableEntry{description: description, decorations: decorations, parameters: parameters, codeLocation: types.NewCodeLocation(1)}
return TableEntry{description: description, decorations: decorations, parameters: parameters, codeLocation: types.NewCodeLocation(0)}
}
/*
@ -126,7 +132,8 @@ var contextType = reflect.TypeOf(new(context.Context)).Elem()
var specContextType = reflect.TypeOf(new(SpecContext)).Elem()
func generateTable(description string, args ...interface{}) {
cl := types.NewCodeLocation(2)
GinkgoHelper()
cl := types.NewCodeLocation(0)
containerNodeArgs := []interface{}{cl}
entries := []TableEntry{}

View File

@ -1,4 +1,5 @@
package types
import (
"fmt"
"os"
@ -6,6 +7,7 @@ import (
"runtime"
"runtime/debug"
"strings"
"sync"
)
type CodeLocation struct {
@ -37,6 +39,73 @@ func (codeLocation CodeLocation) ContentsOfLine() string {
return lines[codeLocation.LineNumber-1]
}
type codeLocationLocator struct {
pcs map[uintptr]bool
helpers map[string]bool
lock *sync.Mutex
}
func (c *codeLocationLocator) addHelper(pc uintptr) {
c.lock.Lock()
defer c.lock.Unlock()
if c.pcs[pc] {
return
}
c.lock.Unlock()
f := runtime.FuncForPC(pc)
c.lock.Lock()
if f == nil {
return
}
c.helpers[f.Name()] = true
c.pcs[pc] = true
}
func (c *codeLocationLocator) hasHelper(name string) bool {
c.lock.Lock()
defer c.lock.Unlock()
return c.helpers[name]
}
func (c *codeLocationLocator) getCodeLocation(skip int) CodeLocation {
pc := make([]uintptr, 40)
n := runtime.Callers(skip+2, pc)
if n == 0 {
return CodeLocation{}
}
pc = pc[:n]
frames := runtime.CallersFrames(pc)
for {
frame, more := frames.Next()
if !c.hasHelper(frame.Function) {
return CodeLocation{FileName: frame.File, LineNumber: frame.Line}
}
if !more {
break
}
}
return CodeLocation{}
}
var clLocator = &codeLocationLocator{
pcs: map[uintptr]bool{},
helpers: map[string]bool{},
lock: &sync.Mutex{},
}
// MarkAsHelper is used by GinkgoHelper to mark the caller (appropriately offset by skip) as a helper. You can use this directly if you need to provide an optional `skip` to mark functions further up the call stack as helpers.
func MarkAsHelper(optionalSkip ...int) {
skip := 1
if len(optionalSkip) > 0 {
skip += optionalSkip[0]
}
pc, _, _, ok := runtime.Caller(skip)
if ok {
clLocator.addHelper(pc)
}
}
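
The public GinkgoHelper() reduces to a call to this function. A sketch of a helper that, once marked, reports failures at its caller's line rather than its own (`expectValidJSON` is hypothetical):

```go
func expectValidJSON(data []byte) {
	GinkgoHelper() // delegates to types.MarkAsHelper(); this frame is skipped by getCodeLocation
	var v map[string]interface{}
	Expect(json.Unmarshal(data, &v)).To(Succeed()) // requires encoding/json
}
```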
func NewCustomCodeLocation(message string) CodeLocation {
return CodeLocation{
CustomMessage: message,
@ -44,14 +113,13 @@ func NewCustomCodeLocation(message string) CodeLocation {
}
func NewCodeLocation(skip int) CodeLocation {
_, file, line, _ := runtime.Caller(skip + 1)
return CodeLocation{FileName: file, LineNumber: line}
return clLocator.getCodeLocation(skip + 1)
}
func NewCodeLocationWithStackTrace(skip int) CodeLocation {
_, file, line, _ := runtime.Caller(skip + 1)
stackTrace := PruneStack(string(debug.Stack()), skip+1)
return CodeLocation{FileName: file, LineNumber: line, FullStackTrace: stackTrace}
cl := clLocator.getCodeLocation(skip + 1)
cl.FullStackTrace = PruneStack(string(debug.Stack()), skip+1)
return cl
}
// PruneStack removes references to functions that are internal to Ginkgo

View File

@ -26,11 +26,11 @@ type SuiteConfig struct {
FailOnPending bool
FailFast bool
FlakeAttempts int
EmitSpecProgress bool
DryRun bool
PollProgressAfter time.Duration
PollProgressInterval time.Duration
Timeout time.Duration
EmitSpecProgress bool // this is deprecated but its removal is causing compile issues for some users who were setting it manually
OutputInterceptorMode string
SourceRoots []string
GracePeriod time.Duration
@ -81,13 +81,12 @@ func (vl VerbosityLevel) LT(comp VerbosityLevel) bool {
// Configuration for Ginkgo's reporter
type ReporterConfig struct {
NoColor bool
SlowSpecThreshold time.Duration
Succinct bool
Verbose bool
VeryVerbose bool
FullTrace bool
AlwaysEmitGinkgoWriter bool
NoColor bool
Succinct bool
Verbose bool
VeryVerbose bool
FullTrace bool
ShowNodeEvents bool
JSONReport string
JUnitReport string
@ -110,9 +109,7 @@ func (rc ReporterConfig) WillGenerateReport() bool {
}
func NewDefaultReporterConfig() ReporterConfig {
return ReporterConfig{
SlowSpecThreshold: 5 * time.Second,
}
return ReporterConfig{}
}
// Configuration for the Ginkgo CLI
@ -235,6 +232,9 @@ type deprecatedConfig struct {
SlowSpecThresholdWithFLoatUnits float64
Stream bool
Notify bool
EmitSpecProgress bool
SlowSpecThreshold time.Duration
AlwaysEmitGinkgoWriter bool
}
// Flags
@ -275,8 +275,6 @@ var SuiteConfigFlags = GinkgoFlags{
{KeyPath: "S.DryRun", Name: "dry-run", SectionKey: "debug", DeprecatedName: "dryRun", DeprecatedDocLink: "changed-command-line-flags",
Usage: "If set, ginkgo will walk the test hierarchy without actually running anything. Best paired with -v."},
{KeyPath: "S.EmitSpecProgress", Name: "progress", SectionKey: "debug",
Usage: "If set, ginkgo will emit progress information as each spec runs to the GinkgoWriter."},
{KeyPath: "S.PollProgressAfter", Name: "poll-progress-after", SectionKey: "debug", UsageDefaultValue: "0",
Usage: "Emit node progress reports periodically if node hasn't completed after this duration."},
{KeyPath: "S.PollProgressInterval", Name: "poll-progress-interval", SectionKey: "debug", UsageDefaultValue: "10s",
@ -303,6 +301,8 @@ var SuiteConfigFlags = GinkgoFlags{
{KeyPath: "D.RegexScansFilePath", DeprecatedName: "regexScansFilePath", DeprecatedDocLink: "removed--regexscansfilepath", DeprecatedVersion: "2.0.0"},
{KeyPath: "D.DebugParallel", DeprecatedName: "debug", DeprecatedDocLink: "removed--debug", DeprecatedVersion: "2.0.0"},
{KeyPath: "D.EmitSpecProgress", DeprecatedName: "progress", SectionKey: "debug",
DeprecatedVersion: "2.5.0", Usage: ". The functionality provided by --progress was confusing and is no longer needed. Use --show-node-events instead to see node entry and exit events included in the timeline of failed and verbose specs. Or you can run with -vv to always see all node events. Lastly, --poll-progress-after and the PollProgressAfter decorator now provide a better mechanism for debugging specs that tend to get stuck."},
}
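
The decorators the deprecation message points to can also be applied per node; a sketch (`callService` is a stand-in):

```go
It("talks to a slow service", func(ctx SpecContext) {
	Expect(callService(ctx)).To(Succeed())
}, PollProgressAfter(30*time.Second), PollProgressInterval(10*time.Second))
```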
// ParallelConfigFlags provides flags for the Ginkgo test process (not the CLI)
@ -319,8 +319,6 @@ var ParallelConfigFlags = GinkgoFlags{
var ReporterConfigFlags = GinkgoFlags{
{KeyPath: "R.NoColor", Name: "no-color", SectionKey: "output", DeprecatedName: "noColor", DeprecatedDocLink: "changed-command-line-flags",
Usage: "If set, suppress color output in default reporter."},
{KeyPath: "R.SlowSpecThreshold", Name: "slow-spec-threshold", SectionKey: "output", UsageArgument: "duration", UsageDefaultValue: "5s",
Usage: "Specs that take longer to run than this threshold are flagged as slow by the default reporter."},
{KeyPath: "R.Verbose", Name: "v", SectionKey: "output",
Usage: "If set, emits more output including GinkgoWriter contents."},
{KeyPath: "R.VeryVerbose", Name: "vv", SectionKey: "output",
@ -329,8 +327,8 @@ var ReporterConfigFlags = GinkgoFlags{
Usage: "If set, default reporter prints out a very succinct report"},
{KeyPath: "R.FullTrace", Name: "trace", SectionKey: "output",
Usage: "If set, default reporter prints out the full stack trace when a failure occurs"},
{KeyPath: "R.AlwaysEmitGinkgoWriter", Name: "always-emit-ginkgo-writer", SectionKey: "output", DeprecatedName: "reportPassed", DeprecatedDocLink: "renamed--reportpassed",
Usage: "If set, default reporter prints out captured output of passed tests."},
{KeyPath: "R.ShowNodeEvents", Name: "show-node-events", SectionKey: "output",
Usage: "If set, default reporter prints node > Enter and < Exit events when specs fail"},
{KeyPath: "R.JSONReport", Name: "json-report", UsageArgument: "filename.json", SectionKey: "output",
Usage: "If set, Ginkgo will generate a JSON-formatted test report at the specified location."},
@ -343,6 +341,8 @@ var ReporterConfigFlags = GinkgoFlags{
Usage: "use --slow-spec-threshold instead and pass in a duration string (e.g. '5s', not '5.0')"},
{KeyPath: "D.NoisyPendings", DeprecatedName: "noisyPendings", DeprecatedDocLink: "removed--noisypendings-and--noisyskippings", DeprecatedVersion: "2.0.0"},
{KeyPath: "D.NoisySkippings", DeprecatedName: "noisySkippings", DeprecatedDocLink: "removed--noisypendings-and--noisyskippings", DeprecatedVersion: "2.0.0"},
{KeyPath: "D.SlowSpecThreshold", DeprecatedName: "slow-spec-threshold", SectionKey: "output", Usage: "--slow-spec-threshold has been deprecated and will be removed in a future version of Ginkgo. This feature has proved to be more noisy than useful. You can use --poll-progress-after, instead, to get more actionable feedback about potentially slow specs and understand where they might be getting stuck.", DeprecatedVersion: "2.5.0"},
{KeyPath: "D.AlwaysEmitGinkgoWriter", DeprecatedName: "always-emit-ginkgo-writer", SectionKey: "output", Usage: " - use -v instead, or one of Ginkgo's machine-readable report formats to get GinkgoWriter output for passing specs."},
}
// BuildTestSuiteFlagSet attaches to the CommandLine flagset and provides flags for the Ginkgo test process

View File

@ -83,6 +83,13 @@ func (d deprecations) Nodot() Deprecation {
}
}
func (d deprecations) SuppressProgressReporting() Deprecation {
return Deprecation{
Message: "Improvements to how reporters emit timeline information means that SuppressProgressReporting is no longer necessary and has been deprecated.",
Version: "2.5.0",
}
}
type DeprecationTracker struct {
deprecations map[Deprecation][]CodeLocation
lock *sync.Mutex

View File

@ -108,8 +108,8 @@ Please ensure all assertions are inside leaf nodes such as {{bold}}BeforeEach{{/
func (g ginkgoErrors) SuiteNodeInNestedContext(nodeType NodeType, cl CodeLocation) error {
docLink := "suite-setup-and-cleanup-beforesuite-and-aftersuite"
if nodeType.Is(NodeTypeReportAfterSuite) {
docLink = "reporting-nodes---reportaftersuite"
if nodeType.Is(NodeTypeReportBeforeSuite | NodeTypeReportAfterSuite) {
docLink = "reporting-nodes---reportbeforesuite-and-reportaftersuite"
}
return GinkgoError{
@ -125,8 +125,8 @@ func (g ginkgoErrors) SuiteNodeInNestedContext(nodeType NodeType, cl CodeLocatio
func (g ginkgoErrors) SuiteNodeDuringRunPhase(nodeType NodeType, cl CodeLocation) error {
docLink := "suite-setup-and-cleanup-beforesuite-and-aftersuite"
if nodeType.Is(NodeTypeReportAfterSuite) {
docLink = "reporting-nodes---reportaftersuite"
if nodeType.Is(NodeTypeReportBeforeSuite | NodeTypeReportAfterSuite) {
docLink = "reporting-nodes---reportbeforesuite-and-reportaftersuite"
}
return GinkgoError{
@ -298,6 +298,15 @@ func (g ginkgoErrors) SetupNodeNotInOrderedContainer(cl CodeLocation, nodeType N
}
}
func (g ginkgoErrors) InvalidContinueOnFailureDecoration(cl CodeLocation) error {
return GinkgoError{
Heading: "ContinueOnFailure not decorating an outermost Ordered Container",
Message: "ContinueOnFailure can only decorate an Ordered container, and this Ordered container must be the outermost Ordered container.",
CodeLocation: cl,
DocLink: "ordered-containers",
}
}
/* DeferCleanup errors */
func (g ginkgoErrors) DeferCleanupInvalidFunction(cl CodeLocation) error {
return GinkgoError{
@ -320,7 +329,7 @@ func (g ginkgoErrors) PushingCleanupNodeDuringTreeConstruction(cl CodeLocation)
func (g ginkgoErrors) PushingCleanupInReportingNode(cl CodeLocation, nodeType NodeType) error {
return GinkgoError{
Heading: fmt.Sprintf("DeferCleanup cannot be called in %s", nodeType),
Message: "Please inline your cleanup code - Ginkgo won't run cleanup code after a ReportAfterEach or ReportAfterSuite.",
Message: "Please inline your cleanup code - Ginkgo won't run cleanup code after a Reporting node.",
CodeLocation: cl,
DocLink: "cleaning-up-our-cleanup-code-defercleanup",
}

View File

@ -272,12 +272,23 @@ func tokenize(input string) func() (*treeNode, error) {
}
}
func MustParseLabelFilter(input string) LabelFilter {
filter, err := ParseLabelFilter(input)
if err != nil {
panic(err)
}
return filter
}
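
A quick sketch of the new entry point, including the empty-input handling added to ParseLabelFilter below:

```go
filter := types.MustParseLabelFilter("integration && !slow")
filter([]string{"integration", "fast"}) // true
filter([]string{"integration", "slow"}) // false
types.MustParseLabelFilter("")(nil)     // true: an empty filter matches every spec
```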
func ParseLabelFilter(input string) (LabelFilter, error) {
if DEBUG_LABEL_FILTER_PARSING {
fmt.Println("\n==============")
fmt.Println("Input: ", input)
fmt.Print("Tokens: ")
}
if input == "" {
return func(_ []string) bool { return true }, nil
}
nextToken := tokenize(input)
root := &treeNode{token: lfTokenRoot}

View File

@ -6,8 +6,8 @@ import (
"time"
)
//ReportEntryValue wraps a report entry's value ensuring it can be encoded and decoded safely into reports
//and across the network connection when running in parallel
// ReportEntryValue wraps a report entry's value ensuring it can be encoded and decoded safely into reports
// and across the network connection when running in parallel
type ReportEntryValue struct {
raw interface{} //unexported to prevent gob from freaking out about unregistered structs
AsJSON string
@ -85,10 +85,12 @@ func (rev *ReportEntryValue) GobDecode(data []byte) error {
type ReportEntry struct {
// Visibility captures the visibility policy for this ReportEntry
Visibility ReportEntryVisibility
// Time captures the time the AddReportEntry was called
Time time.Time
// Location captures the location of the AddReportEntry call
Location CodeLocation
Time time.Time //need this for backwards compatibility
TimelineLocation TimelineLocation
// Name captures the name of this report
Name string
// Value captures the (optional) object passed into AddReportEntry - this can be
@ -120,7 +122,9 @@ func (entry ReportEntry) GetRawValue() interface{} {
return entry.Value.GetRawValue()
}
func (entry ReportEntry) GetTimelineLocation() TimelineLocation {
return entry.TimelineLocation
}
type ReportEntries []ReportEntry

View File

@ -2,6 +2,8 @@ package types
import (
"encoding/json"
"fmt"
"sort"
"strings"
"time"
)
@ -56,19 +58,20 @@ type Report struct {
SuiteConfig SuiteConfig
//SpecReports is a list of all SpecReports generated by this test run
//It is empty when the SuiteReport is provided to ReportBeforeSuite
SpecReports SpecReports
}
//PreRunStats contains a set of stats captured before the test run begins. This is primarily used
//by Ginkgo's reporter to tell the user how many specs are in the current suite (PreRunStats.TotalSpecs)
//and how many it intends to run (PreRunStats.SpecsThatWillRun) after applying any relevant focus or skip filters.
// PreRunStats contains a set of stats captured before the test run begins. This is primarily used
// by Ginkgo's reporter to tell the user how many specs are in the current suite (PreRunStats.TotalSpecs)
// and how many it intends to run (PreRunStats.SpecsThatWillRun) after applying any relevant focus or skip filters.
type PreRunStats struct {
TotalSpecs int
SpecsThatWillRun int
}
//Add is used by Ginkgo's parallel aggregation mechanisms to combine test run reports from individual parallel processes
//to form a complete final report.
// Add is used by Ginkgo's parallel aggregation mechanisms to combine test run reports from individual parallel processes
// to form a complete final report.
func (report Report) Add(other Report) Report {
report.SuiteSucceeded = report.SuiteSucceeded && other.SuiteSucceeded
@ -147,6 +150,9 @@ type SpecReport struct {
// ParallelProcess captures the parallel process that this spec ran on
ParallelProcess int
// RunningInParallel captures whether this spec is part of a suite that ran in parallel
RunningInParallel bool
//Failure is populated if a spec has failed, panicked, been interrupted, or skipped by the user (e.g. calling Skip())
//It includes detailed information about the Failure
Failure Failure
@ -178,6 +184,9 @@ type SpecReport struct {
// AdditionalFailures contains any failures that occurred after the initial spec failure. These typically occur in cleanup nodes after the initial failure and are only emitted when running in verbose mode.
AdditionalFailures []AdditionalFailure
// SpecEvents capture additional events that occur during the spec run
SpecEvents SpecEvents
}
func (report SpecReport) MarshalJSON() ([]byte, error) {
@ -204,6 +213,7 @@ func (report SpecReport) MarshalJSON() ([]byte, error) {
ReportEntries ReportEntries `json:",omitempty"`
ProgressReports []ProgressReport `json:",omitempty"`
AdditionalFailures []AdditionalFailure `json:",omitempty"`
SpecEvents SpecEvents `json:",omitempty"`
}{
ContainerHierarchyTexts: report.ContainerHierarchyTexts,
ContainerHierarchyLocations: report.ContainerHierarchyLocations,
@ -238,6 +248,9 @@ func (report SpecReport) MarshalJSON() ([]byte, error) {
if len(report.AdditionalFailures) > 0 {
out.AdditionalFailures = report.AdditionalFailures
}
if len(report.SpecEvents) > 0 {
out.SpecEvents = report.SpecEvents
}
return json.Marshal(out)
}
@ -255,13 +268,13 @@ func (report SpecReport) CombinedOutput() string {
return report.CapturedStdOutErr + "\n" + report.CapturedGinkgoWriterOutput
}
//Failed returns true if report.State is one of the SpecStateFailureStates
// Failed returns true if report.State is one of the SpecStateFailureStates
// (SpecStateFailed, SpecStatePanicked, SpecStateInterrupted, SpecStateAborted)
func (report SpecReport) Failed() bool {
return report.State.Is(SpecStateFailureStates)
}
//FullText returns a concatenation of all the report.ContainerHierarchyTexts and report.LeafNodeText
// FullText returns a concatenation of all the report.ContainerHierarchyTexts and report.LeafNodeText
func (report SpecReport) FullText() string {
texts := []string{}
texts = append(texts, report.ContainerHierarchyTexts...)
@ -271,7 +284,7 @@ func (report SpecReport) FullText() string {
return strings.Join(texts, " ")
}
//Labels returns a deduped set of all the spec's Labels.
// Labels returns a deduped set of all the spec's Labels.
func (report SpecReport) Labels() []string {
out := []string{}
seen := map[string]bool{}
@ -293,7 +306,7 @@ func (report SpecReport) Labels() []string {
return out
}
//MatchesLabelFilter returns true if the spec satisfies the passed in label filter query
// MatchesLabelFilter returns true if the spec satisfies the passed in label filter query
func (report SpecReport) MatchesLabelFilter(query string) (bool, error) {
filter, err := ParseLabelFilter(query)
if err != nil {
@ -302,29 +315,54 @@ func (report SpecReport) MatchesLabelFilter(query string) (bool, error) {
return filter(report.Labels()), nil
}
//FileName() returns the name of the file containing the spec
// FileName() returns the name of the file containing the spec
func (report SpecReport) FileName() string {
return report.LeafNodeLocation.FileName
}
//LineNumber() returns the line number of the leaf node
// LineNumber() returns the line number of the leaf node
func (report SpecReport) LineNumber() int {
return report.LeafNodeLocation.LineNumber
}
//FailureMessage() returns the failure message (or empty string if the test hasn't failed)
// FailureMessage() returns the failure message (or empty string if the test hasn't failed)
func (report SpecReport) FailureMessage() string {
return report.Failure.Message
}
//FailureLocation() returns the location of the failure (or an empty CodeLocation if the test hasn't failed)
// FailureLocation() returns the location of the failure (or an empty CodeLocation if the test hasn't failed)
func (report SpecReport) FailureLocation() CodeLocation {
return report.Failure.Location
}
// Timeline() returns a timeline view of the report
func (report SpecReport) Timeline() Timeline {
timeline := Timeline{}
if !report.Failure.IsZero() {
timeline = append(timeline, report.Failure)
if report.Failure.AdditionalFailure != nil {
timeline = append(timeline, *(report.Failure.AdditionalFailure))
}
}
for _, additionalFailure := range report.AdditionalFailures {
timeline = append(timeline, additionalFailure)
}
for _, reportEntry := range report.ReportEntries {
timeline = append(timeline, reportEntry)
}
for _, progressReport := range report.ProgressReports {
timeline = append(timeline, progressReport)
}
for _, specEvent := range report.SpecEvents {
timeline = append(timeline, specEvent)
}
sort.Sort(timeline)
return timeline
}
type SpecReports []SpecReport
//WithLeafNodeType returns the subset of SpecReports with LeafNodeType matching one of the requested NodeTypes
// WithLeafNodeType returns the subset of SpecReports with LeafNodeType matching one of the requested NodeTypes
func (reports SpecReports) WithLeafNodeType(nodeTypes NodeType) SpecReports {
count := 0
for i := range reports {
@ -344,7 +382,7 @@ func (reports SpecReports) WithLeafNodeType(nodeTypes NodeType) SpecReports {
return out
}
//WithState returns the subset of SpecReports with State matching one of the requested SpecStates
// WithState returns the subset of SpecReports with State matching one of the requested SpecStates
func (reports SpecReports) WithState(states SpecState) SpecReports {
count := 0
for i := range reports {
@ -363,7 +401,7 @@ func (reports SpecReports) WithState(states SpecState) SpecReports {
return out
}
//CountWithState returns the number of SpecReports with State matching one of the requested SpecStates
// CountWithState returns the number of SpecReports with State matching one of the requested SpecStates
func (reports SpecReports) CountWithState(states SpecState) int {
n := 0
for i := range reports {
@ -374,7 +412,7 @@ func (reports SpecReports) CountWithState(states SpecState) int {
return n
}
//If the Spec passes, CountOfFlakedSpecs returns the number of SpecReports that failed after multiple attempts.
// If the Spec passes, CountOfFlakedSpecs returns the number of SpecReports that failed after multiple attempts.
func (reports SpecReports) CountOfFlakedSpecs() int {
n := 0
for i := range reports {
@ -385,7 +423,7 @@ func (reports SpecReports) CountOfFlakedSpecs() int {
return n
}
//If the Spec fails, CountOfRepeatedSpecs returns the number of SpecReports that passed after multiple attempts
// If the Spec fails, CountOfRepeatedSpecs returns the number of SpecReports that passed after multiple attempts
func (reports SpecReports) CountOfRepeatedSpecs() int {
n := 0
for i := range reports {
@ -396,6 +434,53 @@ func (reports SpecReports) CountOfRepeatedSpecs() int {
return n
}
// TimelineLocation captures the location of an event in the spec's timeline
type TimelineLocation struct {
//Offset is the offset (in bytes) of the event relative to the GinkgoWriter stream
Offset int `json:",omitempty"`
//Order is the order of the event with respect to other events. The absolute value of Order
//is irrelevant. All that matters is that an event with a lower Order occurs before an event with a higher Order
Order int `json:",omitempty"`
Time time.Time
}
// TimelineEvent represents an event on the timeline
// consumers of Timeline will need to check the concrete type of each entry to determine how to handle it
type TimelineEvent interface {
GetTimelineLocation() TimelineLocation
}
type Timeline []TimelineEvent
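
As the comment above says, consumers type-switch on the concrete event types; a sketch over a types.SpecReport named `report`:

```go
for _, event := range report.Timeline() {
	switch ev := event.(type) {
	case types.Failure:
		fmt.Println("failure at", ev.Location)
	case types.AdditionalFailure:
		fmt.Println("additional failure at", ev.Failure.Location)
	case types.ReportEntry:
		fmt.Println("report entry:", ev.Name)
	case types.ProgressReport:
		fmt.Println("progress report at offset", ev.TimelineLocation.Offset)
	case types.SpecEvent:
		fmt.Println("spec event:", ev.SpecEventType)
	}
}
```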
func (t Timeline) Len() int { return len(t) }
func (t Timeline) Less(i, j int) bool {
return t[i].GetTimelineLocation().Order < t[j].GetTimelineLocation().Order
}
func (t Timeline) Swap(i, j int) { t[i], t[j] = t[j], t[i] }
func (t Timeline) WithoutHiddenReportEntries() Timeline {
out := Timeline{}
for _, event := range t {
if reportEntry, isReportEntry := event.(ReportEntry); isReportEntry && reportEntry.Visibility == ReportEntryVisibilityNever {
continue
}
out = append(out, event)
}
return out
}
func (t Timeline) WithoutVeryVerboseSpecEvents() Timeline {
out := Timeline{}
for _, event := range t {
if specEvent, isSpecEvent := event.(SpecEvent); isSpecEvent && specEvent.IsOnlyVisibleAtVeryVerbose() {
continue
}
out = append(out, event)
}
return out
}
// Failure captures failure information for an individual test
type Failure struct {
// Message - the failure message passed into Fail(...). When using a matcher library
@ -408,6 +493,8 @@ type Failure struct {
// This CodeLocation will include a fully-populated StackTrace
Location CodeLocation
TimelineLocation TimelineLocation
// ForwardedPanic - if the failure represents a captured panic (i.e. Summary.State == SpecStatePanicked)
// then ForwardedPanic will be populated with a string representation of the captured panic.
ForwardedPanic string `json:",omitempty"`
@ -420,19 +507,29 @@ type Failure struct {
// FailureNodeType will contain the NodeType of the node in which the failure occurred.
// FailureNodeLocation will contain the CodeLocation of the node in which the failure occurred.
// If populated, FailureNodeContainerIndex will be the index into SpecReport.ContainerHierarchyTexts and SpecReport.ContainerHierarchyLocations that represents the parent container of the node in which the failure occurred.
FailureNodeContext FailureNodeContext
FailureNodeType NodeType
FailureNodeLocation CodeLocation
FailureNodeContainerIndex int
FailureNodeContext FailureNodeContext `json:",omitempty"`
FailureNodeType NodeType `json:",omitempty"`
FailureNodeLocation CodeLocation `json:",omitempty"`
FailureNodeContainerIndex int `json:",omitempty"`
//ProgressReport is populated if the spec was interrupted or timed out
ProgressReport ProgressReport
ProgressReport ProgressReport `json:",omitempty"`
//AdditionalFailure is non-nil if a follow-on failure occurred within the same node after the primary failure. This only happens when a node has timed out or been interrupted. In such cases the AdditionalFailure can include information about where/why the spec was stuck.
AdditionalFailure *AdditionalFailure `json:",omitempty"`
}
func (f Failure) IsZero() bool {
return f.Message == "" && (f.Location == CodeLocation{})
}
func (f Failure) GetTimelineLocation() TimelineLocation {
return f.TimelineLocation
}
// FailureNodeContext captures the location context for the node containing the failing line of code
type FailureNodeContext uint
@ -471,6 +568,10 @@ type AdditionalFailure struct {
Failure Failure
}
func (f AdditionalFailure) GetTimelineLocation() TimelineLocation {
return f.Failure.TimelineLocation
}
// SpecState captures the state of a spec
// To determine if a given `state` represents a failure state, use `state.Is(SpecStateFailureStates)`
type SpecState uint
@ -503,6 +604,9 @@ var ssEnumSupport = NewEnumSupport(map[uint]string{
func (ss SpecState) String() string {
return ssEnumSupport.String(uint(ss))
}
func (ss SpecState) GomegaString() string {
return ssEnumSupport.String(uint(ss))
}
func (ss *SpecState) UnmarshalJSON(b []byte) error {
out, err := ssEnumSupport.UnmarshJSON(b)
*ss = SpecState(out)
@ -520,38 +624,40 @@ func (ss SpecState) Is(states SpecState) bool {
// ProgressReport captures the progress of the current spec. It is, effectively, a structured Ginkgo-aware stack trace
type ProgressReport struct {
Message string
ParallelProcess int
RunningInParallel bool
Message string `json:",omitempty"`
ParallelProcess int `json:",omitempty"`
RunningInParallel bool `json:",omitempty"`
Time time.Time
ContainerHierarchyTexts []string `json:",omitempty"`
LeafNodeText string `json:",omitempty"`
LeafNodeLocation CodeLocation `json:",omitempty"`
SpecStartTime time.Time `json:",omitempty"`
ContainerHierarchyTexts []string
LeafNodeText string
LeafNodeLocation CodeLocation
SpecStartTime time.Time
CurrentNodeType NodeType `json:",omitempty"`
CurrentNodeText string `json:",omitempty"`
CurrentNodeLocation CodeLocation `json:",omitempty"`
CurrentNodeStartTime time.Time `json:",omitempty"`
CurrentNodeType NodeType
CurrentNodeText string
CurrentNodeLocation CodeLocation
CurrentNodeStartTime time.Time
CurrentStepText string `json:",omitempty"`
CurrentStepLocation CodeLocation `json:",omitempty"`
CurrentStepStartTime time.Time `json:",omitempty"`
CurrentStepText string
CurrentStepLocation CodeLocation
CurrentStepStartTime time.Time
AdditionalReports []string `json:",omitempty"`
AdditionalReports []string
CapturedGinkgoWriterOutput string `json:",omitempty"`
TimelineLocation TimelineLocation `json:",omitempty"`
CapturedGinkgoWriterOutput string `json:",omitempty"`
GinkgoWriterOffset int
Goroutines []Goroutine
Goroutines []Goroutine `json:",omitempty"`
}
func (pr ProgressReport) IsZero() bool {
return pr.CurrentNodeType == NodeTypeInvalid
}
func (pr ProgressReport) Time() time.Time {
return pr.TimelineLocation.Time
}
func (pr ProgressReport) SpecGoroutine() Goroutine {
for _, goroutine := range pr.Goroutines {
if goroutine.IsSpecGoroutine {
@ -589,6 +695,22 @@ func (pr ProgressReport) WithoutCapturedGinkgoWriterOutput() ProgressReport {
return out
}
func (pr ProgressReport) WithoutOtherGoroutines() ProgressReport {
out := pr
filteredGoroutines := []Goroutine{}
for _, goroutine := range pr.Goroutines {
if goroutine.IsSpecGoroutine || goroutine.HasHighlights() {
filteredGoroutines = append(filteredGoroutines, goroutine)
}
}
out.Goroutines = filteredGoroutines
return out
}
func (pr ProgressReport) GetTimelineLocation() TimelineLocation {
return pr.TimelineLocation
}
type Goroutine struct {
ID uint64
State string
@ -643,6 +765,7 @@ const (
NodeTypeReportBeforeEach
NodeTypeReportAfterEach
NodeTypeReportBeforeSuite
NodeTypeReportAfterSuite
NodeTypeCleanupInvalid
@ -652,9 +775,9 @@ const (
)
var NodeTypesForContainerAndIt = NodeTypeContainer | NodeTypeIt
var NodeTypesForSuiteLevelNodes = NodeTypeBeforeSuite | NodeTypeSynchronizedBeforeSuite | NodeTypeAfterSuite | NodeTypeSynchronizedAfterSuite | NodeTypeReportAfterSuite | NodeTypeCleanupAfterSuite
var NodeTypesForSuiteLevelNodes = NodeTypeBeforeSuite | NodeTypeSynchronizedBeforeSuite | NodeTypeAfterSuite | NodeTypeSynchronizedAfterSuite | NodeTypeReportBeforeSuite | NodeTypeReportAfterSuite | NodeTypeCleanupAfterSuite
var NodeTypesAllowedDuringCleanupInterrupt = NodeTypeAfterEach | NodeTypeJustAfterEach | NodeTypeAfterAll | NodeTypeAfterSuite | NodeTypeSynchronizedAfterSuite | NodeTypeCleanupAfterEach | NodeTypeCleanupAfterAll | NodeTypeCleanupAfterSuite
var NodeTypesAllowedDuringReportInterrupt = NodeTypeReportBeforeEach | NodeTypeReportAfterEach | NodeTypeReportAfterSuite
var NodeTypesAllowedDuringReportInterrupt = NodeTypeReportBeforeEach | NodeTypeReportAfterEach | NodeTypeReportBeforeSuite | NodeTypeReportAfterSuite
var ntEnumSupport = NewEnumSupport(map[uint]string{
uint(NodeTypeInvalid): "INVALID NODE TYPE",
@ -672,6 +795,7 @@ var ntEnumSupport = NewEnumSupport(map[uint]string{
uint(NodeTypeSynchronizedAfterSuite): "SynchronizedAfterSuite",
uint(NodeTypeReportBeforeEach): "ReportBeforeEach",
uint(NodeTypeReportAfterEach): "ReportAfterEach",
uint(NodeTypeReportBeforeSuite): "ReportBeforeSuite",
uint(NodeTypeReportAfterSuite): "ReportAfterSuite",
uint(NodeTypeCleanupInvalid): "DeferCleanup",
uint(NodeTypeCleanupAfterEach): "DeferCleanup (Each)",
@ -694,3 +818,99 @@ func (nt NodeType) MarshalJSON() ([]byte, error) {
func (nt NodeType) Is(nodeTypes NodeType) bool {
return nt&nodeTypes != 0
}
/*
SpecEvent captures a variety of events that can occur when specs run. See SpecEventType for the list of available events.
*/
type SpecEvent struct {
SpecEventType SpecEventType
CodeLocation CodeLocation
TimelineLocation TimelineLocation
Message string `json:",omitempty"`
Duration time.Duration `json:",omitempty"`
NodeType NodeType `json:",omitempty"`
Attempt int `json:",omitempty"`
}
func (se SpecEvent) GetTimelineLocation() TimelineLocation {
return se.TimelineLocation
}
func (se SpecEvent) IsOnlyVisibleAtVeryVerbose() bool {
return se.SpecEventType.Is(SpecEventByEnd | SpecEventNodeStart | SpecEventNodeEnd)
}
func (se SpecEvent) GomegaString() string {
out := &strings.Builder{}
out.WriteString("[" + se.SpecEventType.String() + " SpecEvent] ")
if se.Message != "" {
out.WriteString("Message=")
out.WriteString(`"` + se.Message + `",`)
}
if se.Duration != 0 {
out.WriteString("Duration=" + se.Duration.String() + ",")
}
if se.NodeType != NodeTypeInvalid {
out.WriteString("NodeType=" + se.NodeType.String() + ",")
}
if se.Attempt != 0 {
out.WriteString(fmt.Sprintf("Attempt=%d", se.Attempt) + ",")
}
out.WriteString("CL=" + se.CodeLocation.String() + ",")
out.WriteString(fmt.Sprintf("TL.Offset=%d", se.TimelineLocation.Offset))
return out.String()
}
type SpecEvents []SpecEvent
func (se SpecEvents) WithType(seType SpecEventType) SpecEvents {
out := SpecEvents{}
for _, event := range se {
if event.SpecEventType.Is(seType) {
out = append(out, event)
}
}
return out
}
type SpecEventType uint
const (
SpecEventInvalid SpecEventType = 0
SpecEventByStart SpecEventType = 1 << iota
SpecEventByEnd
SpecEventNodeStart
SpecEventNodeEnd
SpecEventSpecRepeat
SpecEventSpecRetry
)
var seEnumSupport = NewEnumSupport(map[uint]string{
uint(SpecEventInvalid): "INVALID SPEC EVENT",
uint(SpecEventByStart): "By",
uint(SpecEventByEnd): "By (End)",
uint(SpecEventNodeStart): "Node",
uint(SpecEventNodeEnd): "Node (End)",
uint(SpecEventSpecRepeat): "Repeat",
uint(SpecEventSpecRetry): "Retry",
})
func (se SpecEventType) String() string {
return seEnumSupport.String(uint(se))
}
func (se *SpecEventType) UnmarshalJSON(b []byte) error {
out, err := seEnumSupport.UnmarshJSON(b)
*se = SpecEventType(out)
return err
}
func (se SpecEventType) MarshalJSON() ([]byte, error) {
return seEnumSupport.MarshJSON(uint(se))
}
func (se SpecEventType) Is(specEventTypes SpecEventType) bool {
return se&specEventTypes != 0
}

View File

@ -1,3 +1,3 @@
package types
const VERSION = "2.4.0"
const VERSION = "2.8.0"

View File

@ -1,3 +1,51 @@
## 1.25.0
### Features
- add `MustPassRepeatedly(int)` to asyncAssertion (#619) [4509f72]
- compare unwrapped errors using DeepEqual (#617) [aaeaa5d]
### Maintenance
- Bump golang.org/x/net from 0.4.0 to 0.5.0 (#614) [c7cfea4]
- Bump github.com/onsi/ginkgo/v2 from 2.6.1 to 2.7.0 (#615) [71b8adb]
- Docs: Fix typo "MUltiple" -> "Multiple" (#616) [9351dda]
- clean up go.sum [cd1dc1d]
## 1.24.2
### Fixes
- Correctly handle assertion failure panics for eventually/consistently "g Gomega"s in a goroutine [78f1660]
- docs:Fix typo "you an" -> "you can" (#607) [3187c1f]
- fixes issue #600 (#606) [808d192]
### Maintenance
- Bump golang.org/x/net from 0.2.0 to 0.4.0 (#611) [6ebc0bf]
- Bump nokogiri from 1.13.9 to 1.13.10 in /docs (#612) [258cfc8]
- Bump github.com/onsi/ginkgo/v2 from 2.5.0 to 2.5.1 (#609) [e6c3eb9]
## 1.24.1
### Fixes
- maintain backward compatibility for Eventually and Consistently's signatures [4c7df5e]
- fix small typo (#601) [ea0ebe6]
### Maintenance
- Bump golang.org/x/net from 0.1.0 to 0.2.0 (#603) [1ba8372]
- Bump github.com/onsi/ginkgo/v2 from 2.4.0 to 2.5.0 (#602) [f9426cb]
- fix label-filter in test.yml [d795db6]
- stop running flakey tests and rely on external network dependencies in CI [7133290]
## 1.24.0
### Features
Introducing [gcustom](https://onsi.github.io/gomega/#gcustom-a-convenient-mechanism-for-buildling-custom-matchers) - a convenient mechanism for building custom matchers.
This is an RC release for `gcustom`. The external API may be tweaked in response to feedback however it is expected to remain mostly stable.
### Maintenance
- Update BeComparableTo documentation [756eaa0]
## 1.23.0
### Features

View File

@ -5,7 +5,7 @@ A Gomega release is a tagged sha and a GitHub release. To cut a release:
```bash
LAST_VERSION=$(git tag --sort=version:refname | tail -n1)
CHANGES=$(git log --pretty=format:'- %s [%h]' HEAD...$LAST_VERSION)
echo -e "## NEXT\n\n$CHANGES\n\n### Features\n\n## Fixes\n\n## Maintenance\n\n$(cat CHANGELOG.md)" > CHANGELOG.md
echo -e "## NEXT\n\n$CHANGES\n\n### Features\n\n### Fixes\n\n### Maintenance\n\n$(cat CHANGELOG.md)" > CHANGELOG.md
```
to update the changelog
- Categorize the changes into

View File

@ -22,7 +22,7 @@ import (
"github.com/onsi/gomega/types"
)
const GOMEGA_VERSION = "1.23.0"
const GOMEGA_VERSION = "1.25.0"
const nilGomegaPanic = `You are trying to make an assertion, but haven't registered Gomega's fail handler.
If you're using Ginkgo then you probably forgot to put your assertion in an It().
You can also pass additional arguments to functions that take a Gomega. The only rule is that the Gomega argument must come first.
g.Expect(elements).To(ConsistOf(expected))
}).WithContext(ctx).WithArguments("/names", "Joe", "Jane", "Sam").Should(Succeed())
You can ensure that you get a number of consecutive successful tries before succeeding using `MustPassRepeatedly(int)`. For Example:
count := 0
Eventually(func() bool {
count++
return count > 2
}).MustPassRepeatedly(2).Should(BeTrue())
// Because we had to wait for 2 calls that returned true (after 2 that returned false)
Expect(count).To(Equal(4))
Finally, in addition to passing timeouts and a context to Eventually you can be more explicit with Eventually's chaining configuration methods:
Eventually(..., "1s", "2s", ctx).Should(...)
@ -368,9 +378,9 @@ is equivalent to
Eventually(...).WithTimeout(time.Second).WithPolling(2*time.Second).WithContext(ctx).Should(...)
*/
func Eventually(args ...interface{}) AsyncAssertion {
func Eventually(actualOrCtx interface{}, args ...interface{}) AsyncAssertion {
ensureDefaultGomegaIsConfigured()
return Default.Eventually(args...)
return Default.Eventually(actualOrCtx, args...)
}
// EventuallyWithOffset operates like Eventually but takes an additional
@ -382,9 +392,9 @@ func Eventually(args ...interface{}) AsyncAssertion {
// `EventuallyWithOffset` specifying a timeout interval (and an optional polling interval) are
// the same as `Eventually(...).WithOffset(...).WithTimeout` or
// `Eventually(...).WithOffset(...).WithTimeout(...).WithPolling`.
func EventuallyWithOffset(offset int, args ...interface{}) AsyncAssertion {
func EventuallyWithOffset(offset int, actualOrCtx interface{}, args ...interface{}) AsyncAssertion {
ensureDefaultGomegaIsConfigured()
return Default.EventuallyWithOffset(offset, args...)
return Default.EventuallyWithOffset(offset, actualOrCtx, args...)
}
/*
@ -402,9 +412,9 @@ Consistently is useful in cases where you want to assert that something *does no
This will block for 200 milliseconds and repeatedly check the channel and ensure nothing has been received.
*/
func Consistently(args ...interface{}) AsyncAssertion {
func Consistently(actualOrCtx interface{}, args ...interface{}) AsyncAssertion {
ensureDefaultGomegaIsConfigured()
return Default.Consistently(args...)
return Default.Consistently(actualOrCtx, args...)
}
// ConsistentlyWithOffset operates like Consistently but takes an additional
@ -413,9 +423,9 @@ func Consistently(args ...interface{}) AsyncAssertion {
//
// `ConsistentlyWithOffset` is the same as `Consistently(...).WithOffset` and
// optional `WithTimeout` and `WithPolling`.
func ConsistentlyWithOffset(offset int, args ...interface{}) AsyncAssertion {
func ConsistentlyWithOffset(offset int, actualOrCtx interface{}, args ...interface{}) AsyncAssertion {
ensureDefaultGomegaIsConfigured()
return Default.ConsistentlyWithOffset(offset, args...)
return Default.ConsistentlyWithOffset(offset, actualOrCtx, args...)
}
/*

View File

@ -20,6 +20,17 @@ type contextWithAttachProgressReporter interface {
AttachProgressReporter(func() string) func()
}
type asyncGomegaHaltExecutionError struct{}
func (a asyncGomegaHaltExecutionError) GinkgoRecoverShouldIgnoreThisPanic() {}
func (a asyncGomegaHaltExecutionError) Error() string {
return `An assertion has failed in a goroutine. You should call
defer GinkgoRecover()
at the top of the goroutine that caused this panic. This will allow Ginkgo and Gomega to correctly capture and manage this panic.`
}
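
The error text prescribes the standard pattern; a sketch (`doWork` is a stand-in):

```go
It("asserts from a goroutine", func() {
	done := make(chan struct{})
	go func() {
		defer GinkgoRecover() // recognizes this halt-execution panic and records the failure
		defer close(done)
		Expect(doWork()).To(Succeed())
	}()
	Eventually(done).Should(BeClosed())
})
```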
type AsyncAssertionType uint
const (
@ -44,21 +55,23 @@ type AsyncAssertion struct {
actual interface{}
argsToForward []interface{}
timeoutInterval time.Duration
pollingInterval time.Duration
ctx context.Context
offset int
g *Gomega
timeoutInterval time.Duration
pollingInterval time.Duration
mustPassRepeatedly int
ctx context.Context
offset int
g *Gomega
}
func NewAsyncAssertion(asyncType AsyncAssertionType, actualInput interface{}, g *Gomega, timeoutInterval time.Duration, pollingInterval time.Duration, ctx context.Context, offset int) *AsyncAssertion {
func NewAsyncAssertion(asyncType AsyncAssertionType, actualInput interface{}, g *Gomega, timeoutInterval time.Duration, pollingInterval time.Duration, mustPassRepeatedly int, ctx context.Context, offset int) *AsyncAssertion {
out := &AsyncAssertion{
asyncType: asyncType,
timeoutInterval: timeoutInterval,
pollingInterval: pollingInterval,
offset: offset,
ctx: ctx,
g: g,
asyncType: asyncType,
timeoutInterval: timeoutInterval,
pollingInterval: pollingInterval,
mustPassRepeatedly: mustPassRepeatedly,
offset: offset,
ctx: ctx,
g: g,
}
out.actual = actualInput
@ -104,6 +117,11 @@ func (assertion *AsyncAssertion) WithArguments(argsToForward ...interface{}) typ
return assertion
}
func (assertion *AsyncAssertion) MustPassRepeatedly(count int) types.AsyncAssertion {
assertion.mustPassRepeatedly = count
return assertion
}
func (assertion *AsyncAssertion) Should(matcher types.GomegaMatcher, optionalDescription ...interface{}) bool {
assertion.g.THelper()
vetOptionalDescription("Asynchronous assertion", optionalDescription...)
@ -191,6 +209,13 @@ You can learn more at https://onsi.github.io/gomega/#eventually
`, assertion.asyncType, t, t.NumIn(), numProvided, have, assertion.asyncType)
}
func (assertion *AsyncAssertion) invalidMustPassRepeatedlyError(reason string) error {
return fmt.Errorf(`Invalid use of MustPassRepeatedly with %s %s
You can learn more at https://onsi.github.io/gomega/#eventually
`, assertion.asyncType, reason)
}
func (assertion *AsyncAssertion) buildActualPoller() (func() (interface{}, error), error) {
if !assertion.actualIsFunc {
return func() (interface{}, error) { return assertion.actual, nil }, nil
@ -229,7 +254,8 @@ func (assertion *AsyncAssertion) buildActualPoller() (func() (interface{}, error
}
_, file, line, _ := runtime.Caller(skip + 1)
assertionFailure = fmt.Errorf("Assertion in callback at %s:%d failed:\n%s", file, line, message)
panic("stop execution")
// we throw an asyncGomegaHaltExecutionError so that defer GinkgoRecover() can catch this error if the user makes an assertion in a goroutine
panic(asyncGomegaHaltExecutionError{})
})))
}
if takesContext {
@ -245,6 +271,13 @@ func (assertion *AsyncAssertion) buildActualPoller() (func() (interface{}, error
return nil, assertion.argumentMismatchError(actualType, len(inValues))
}
if assertion.mustPassRepeatedly != 1 && assertion.asyncType != AsyncAssertionTypeEventually {
return nil, assertion.invalidMustPassRepeatedlyError("it can only be used with Eventually")
}
if assertion.mustPassRepeatedly < 1 {
return nil, assertion.invalidMustPassRepeatedlyError("parameter can't be < 1")
}
return func() (actual interface{}, err error) {
var values []reflect.Value
assertionFailure = nil
@ -384,6 +417,8 @@ func (assertion *AsyncAssertion) match(matcher types.GomegaMatcher, desiredMatch
}
}
// Used to count the number of times in a row a step passed
passedRepeatedlyCount := 0
for {
var nextPoll <-chan time.Time = nil
var isTryAgainAfterError = false
@ -401,13 +436,18 @@ func (assertion *AsyncAssertion) match(matcher types.GomegaMatcher, desiredMatch
if err == nil && matches == desiredMatch {
if assertion.asyncType == AsyncAssertionTypeEventually {
return true
passedRepeatedlyCount += 1
if passedRepeatedlyCount == assertion.mustPassRepeatedly {
return true
}
}
} else if !isTryAgainAfterError {
if assertion.asyncType == AsyncAssertionTypeConsistently {
fail("Failed")
return false
}
// Reset the consecutive pass count
passedRepeatedlyCount = 0
}
if oracleMatcherSaysStop {

View File

@ -2,7 +2,6 @@ package internal
import (
"context"
"fmt"
"time"
"github.com/onsi/gomega/types"
@ -53,42 +52,38 @@ func (g *Gomega) ExpectWithOffset(offset int, actual interface{}, extra ...inter
return NewAssertion(actual, g, offset, extra...)
}
func (g *Gomega) Eventually(args ...interface{}) types.AsyncAssertion {
return g.makeAsyncAssertion(AsyncAssertionTypeEventually, 0, args...)
func (g *Gomega) Eventually(actualOrCtx interface{}, args ...interface{}) types.AsyncAssertion {
return g.makeAsyncAssertion(AsyncAssertionTypeEventually, 0, actualOrCtx, args...)
}
func (g *Gomega) EventuallyWithOffset(offset int, args ...interface{}) types.AsyncAssertion {
return g.makeAsyncAssertion(AsyncAssertionTypeEventually, offset, args...)
func (g *Gomega) EventuallyWithOffset(offset int, actualOrCtx interface{}, args ...interface{}) types.AsyncAssertion {
return g.makeAsyncAssertion(AsyncAssertionTypeEventually, offset, actualOrCtx, args...)
}
func (g *Gomega) Consistently(args ...interface{}) types.AsyncAssertion {
return g.makeAsyncAssertion(AsyncAssertionTypeConsistently, 0, args...)
func (g *Gomega) Consistently(actualOrCtx interface{}, args ...interface{}) types.AsyncAssertion {
return g.makeAsyncAssertion(AsyncAssertionTypeConsistently, 0, actualOrCtx, args...)
}
func (g *Gomega) ConsistentlyWithOffset(offset int, args ...interface{}) types.AsyncAssertion {
return g.makeAsyncAssertion(AsyncAssertionTypeConsistently, offset, args...)
func (g *Gomega) ConsistentlyWithOffset(offset int, actualOrCtx interface{}, args ...interface{}) types.AsyncAssertion {
return g.makeAsyncAssertion(AsyncAssertionTypeConsistently, offset, actualOrCtx, args...)
}
func (g *Gomega) makeAsyncAssertion(asyncAssertionType AsyncAssertionType, offset int, args ...interface{}) types.AsyncAssertion {
func (g *Gomega) makeAsyncAssertion(asyncAssertionType AsyncAssertionType, offset int, actualOrCtx interface{}, args ...interface{}) types.AsyncAssertion {
baseOffset := 3
timeoutInterval := -time.Duration(1)
pollingInterval := -time.Duration(1)
intervals := []interface{}{}
var ctx context.Context
if len(args) == 0 {
g.Fail(fmt.Sprintf("Call to %s is missing a value or function to poll", asyncAssertionType), offset+baseOffset)
return nil
}
actual := args[0]
startingIndex := 1
if _, isCtx := args[0].(context.Context); isCtx && len(args) > 1 {
actual := actualOrCtx
startingIndex := 0
if _, isCtx := actualOrCtx.(context.Context); isCtx && len(args) > 0 {
// the first argument is a context, we should accept it as the context _only if_ it is **not** the only argument **and** the second argument is not a parseable duration
// this is due to an unfortunate ambiguity in early versions of Gomega in which multi-type durations are allowed after the actual
if _, err := toDuration(args[1]); err != nil {
ctx = args[0].(context.Context)
actual = args[1]
startingIndex = 2
if _, err := toDuration(args[0]); err != nil {
ctx = actualOrCtx.(context.Context)
actual = args[0]
startingIndex = 1
}
}
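
In practice the disambiguation plays out like this (`fetch` is a hypothetical polled function):

```go
ctx := context.Background()
Eventually(ctx, fetch).Should(Succeed())  // second arg is not a duration, so ctx is the context
Eventually(ctx, "1s").Should(Succeed())   // "1s" parses as a duration, so ctx is polled as the actual
Eventually(fetch, "1s").Should(Succeed()) // classic form: actual value plus timeout
```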
@ -114,7 +109,7 @@ func (g *Gomega) makeAsyncAssertion(asyncAssertionType AsyncAssertionType, offse
}
}
return NewAsyncAssertion(asyncAssertionType, actual, g, timeoutInterval, pollingInterval, ctx, offset)
return NewAsyncAssertion(asyncAssertionType, actual, g, timeoutInterval, pollingInterval, 1, ctx, offset)
}
func (g *Gomega) SetDefaultEventuallyTimeout(t time.Duration) {

View File

@ -27,7 +27,8 @@ func BeEquivalentTo(expected interface{}) types.GomegaMatcher {
}
}
// BeComparableTo uses gocmp.Equal to compare. You can pass cmp.Option as options.
// BeComparableTo uses gocmp.Equal from github.com/google/go-cmp (instead of reflect.DeepEqual) to perform a deep comparison.
// You can pass cmp.Option as options.
// It is an error for actual and expected to be nil. Use BeNil() instead.
func BeComparableTo(expected interface{}, opts ...cmp.Option) types.GomegaMatcher {
return &matchers.BeComparableToMatcher{

View File

@ -25,7 +25,17 @@ func (matcher *MatchErrorMatcher) Match(actual interface{}) (success bool, err e
expected := matcher.Expected
if isError(expected) {
return reflect.DeepEqual(actualErr, expected) || errors.Is(actualErr, expected.(error)), nil
// first try the built-in errors.Is
if errors.Is(actualErr, expected.(error)) {
return true, nil
}
// if not, try DeepEqual along the error chain
for unwrapped := actualErr; unwrapped != nil; unwrapped = errors.Unwrap(unwrapped) {
if reflect.DeepEqual(unwrapped, expected) {
return true, nil
}
}
return false, nil
}
if isString(expected) {

View File

@ -19,11 +19,11 @@ type Gomega interface {
Expect(actual interface{}, extra ...interface{}) Assertion
ExpectWithOffset(offset int, actual interface{}, extra ...interface{}) Assertion
Eventually(args ...interface{}) AsyncAssertion
EventuallyWithOffset(offset int, args ...interface{}) AsyncAssertion
Eventually(actualOrCtx interface{}, args ...interface{}) AsyncAssertion
EventuallyWithOffset(offset int, actualOrCtx interface{}, args ...interface{}) AsyncAssertion
Consistently(args ...interface{}) AsyncAssertion
ConsistentlyWithOffset(offset int, args ...interface{}) AsyncAssertion
Consistently(actualOrCtx interface{}, args ...interface{}) AsyncAssertion
ConsistentlyWithOffset(offset int, actualOrCtx interface{}, args ...interface{}) AsyncAssertion
SetDefaultEventuallyTimeout(time.Duration)
SetDefaultEventuallyPollingInterval(time.Duration)
@ -75,6 +75,7 @@ type AsyncAssertion interface {
ProbeEvery(interval time.Duration) AsyncAssertion
WithContext(ctx context.Context) AsyncAssertion
WithArguments(argsToForward ...interface{}) AsyncAssertion
MustPassRepeatedly(count int) AsyncAssertion
}
// Assertions are returned by Ω and Expect and enable assertions against Gomega matchers

4
vendor/modules.txt vendored
View File

@ -438,7 +438,7 @@ github.com/munnerz/goautoneg
# github.com/oklog/run v1.0.0
## explicit
github.com/oklog/run
# github.com/onsi/ginkgo/v2 v2.4.0
# github.com/onsi/ginkgo/v2 v2.8.0
## explicit; go 1.18
github.com/onsi/ginkgo/v2
github.com/onsi/ginkgo/v2/config
@ -450,7 +450,7 @@ github.com/onsi/ginkgo/v2/internal/parallel_support
github.com/onsi/ginkgo/v2/internal/testingtproxy
github.com/onsi/ginkgo/v2/reporters
github.com/onsi/ginkgo/v2/types
# github.com/onsi/gomega v1.23.0
# github.com/onsi/gomega v1.25.0
## explicit; go 1.18
github.com/onsi/gomega
github.com/onsi/gomega/format