build: move e2e dependencies into e2e/go.mod

Several packages are only used while running the e2e suite. These
packages are less important to update, as they cannot influence the
final executable that is part of the Ceph-CSI container image.

By moving these dependencies out of the main Ceph-CSI go.mod, it becomes
easier to identify whether a reported CVE affects Ceph-CSI itself, or only
the test suite (as is the case for most Kubernetes CVEs).
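
For illustration, the resulting split looks roughly like the sketch below; the module path and versions are placeholders, not the actual contents of e2e/go.mod:

// e2e/go.mod (hypothetical sketch, not the real file)
module github.com/ceph/ceph-csi/e2e

go 1.22

require (
	github.com/onsi/ginkgo/v2 v2.22.0 // e2e-only (version illustrative)
	github.com/onsi/gomega v1.36.0    // e2e-only (version illustrative)
	k8s.io/kubernetes v1.32.0         // e2e-only (version illustrative)
)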

Signed-off-by: Niels de Vos <ndevos@ibm.com>
Niels de Vos
2025-03-04 08:57:28 +01:00
committed by mergify[bot]
parent 15da101b1b
commit bec6090996
8047 changed files with 1407827 additions and 3453 deletions


@@ -0,0 +1,27 @@
# Contributing to go.opentelemetry.io/auto/sdk
The `go.opentelemetry.io/auto/sdk` module is a purpose-built OpenTelemetry SDK.
It is designed to be:
0. An OpenTelemetry compliant SDK
1. Instrumented by auto-instrumentation (serializable into OTLP JSON)
2. Lightweight
3. User-friendly
These design choices are listed in the order of their importance.
The primary design goal of this module is to be an OpenTelemetry SDK.
This means that it needs to implement the Go APIs found in `go.opentelemetry.io/otel`.
Having met the requirement of SDK compliance, this module needs to provide code that the `go.opentelemetry.io/auto` module can instrument.
The chosen approach to meet this goal is to ensure the telemetry from the SDK is serializable into JSON encoded OTLP.
This, in turn, ensures that the serialized form is compatible with other OpenTelemetry systems, and that the auto-instrumentation can use those systems to deserialize any telemetry it is sent.
Outside of these first two goals, the intended use becomes relevant.
This package is intended to be used in the `go.opentelemetry.io/otel` global API as a default when the auto-instrumentation is running.
Because of this, the package must not add unnecessary dependencies to that API.
Ideally, it adds none.
It also needs to operate efficiently.
Finally, this module is designed to be user-friendly to Go development.
It hides complexity in order to provide simpler APIs when the previous goals can all still be met.
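
As a rough illustration of the first two goals, a span built from the internal telemetry types vendored later in this commit marshals directly to OTLP JSON (a minimal sketch, assuming it is compiled inside the internal/telemetry package; the span name and attribute values are made up):

// Illustrative sketch, not part of the upstream documentation.
package telemetry

import (
	"encoding/json"
	"fmt"
	"time"
)

func ExampleSpan_otlpJSON() {
	s := Span{
		Name:      "example-operation",
		Kind:      SpanKindClient,
		StartTime: time.Unix(0, 1700000000000000000),
		EndTime:   time.Unix(0, 1700000001000000000),
		Attrs:     []Attr{String("peer.service", "backend")},
	}

	b, err := json.Marshal(s)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // OTLP/JSON encoding of the span
}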

201 e2e/vendor/go.opentelemetry.io/auto/sdk/LICENSE generated vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

15 e2e/vendor/go.opentelemetry.io/auto/sdk/VERSIONING.md generated vendored Normal file

@@ -0,0 +1,15 @@
# Versioning
This document describes the versioning policy for this module.
This policy is designed so the following goals can be achieved.
**Users are provided a codebase of value that is stable and secure.**
## Policy
* Versioning of this module will be idiomatic of a Go project using [Go modules](https://github.com/golang/go/wiki/Modules).
* [Semantic import versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning) will be used.
* Versions will comply with [semver 2.0](https://semver.org/spec/v2.0.0.html).
* Any `v2` or higher version of this module will be included as a `/vN` at the end of the module path used in `go.mod` files and in the package import path.
* GitHub releases will be made for all releases.
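
The `/vN` rule above means, for example, that a hypothetical v2 release would be required and imported with a `/v2` suffix (version number illustrative):

require go.opentelemetry.io/auto/sdk/v2 v2.0.0 // in go.mod

import "go.opentelemetry.io/auto/sdk/v2" // in Go source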

14 e2e/vendor/go.opentelemetry.io/auto/sdk/doc.go generated vendored Normal file

@@ -0,0 +1,14 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package sdk provides an auto-instrumentable OpenTelemetry SDK.
An [go.opentelemetry.io/auto.Instrumentation] can be configured to target the
process running this SDK. In that case, all telemetry the SDK produces will be
processed and handled by that [go.opentelemetry.io/auto.Instrumentation].
By default, if there is no [go.opentelemetry.io/auto.Instrumentation] set to
auto-instrument the SDK, the SDK will not generate any telemetry.
*/
package sdk


@@ -0,0 +1,58 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry
// Attr is a key-value pair.
type Attr struct {
Key string `json:"key,omitempty"`
Value Value `json:"value,omitempty"`
}
// String returns an Attr for a string value.
func String(key, value string) Attr {
return Attr{key, StringValue(value)}
}
// Int64 returns an Attr for an int64 value.
func Int64(key string, value int64) Attr {
return Attr{key, Int64Value(value)}
}
// Int returns an Attr for an int value.
func Int(key string, value int) Attr {
return Int64(key, int64(value))
}
// Float64 returns an Attr for a float64 value.
func Float64(key string, value float64) Attr {
return Attr{key, Float64Value(value)}
}
// Bool returns an Attr for a bool value.
func Bool(key string, value bool) Attr {
return Attr{key, BoolValue(value)}
}
// Bytes returns an Attr for a []byte value.
// The passed slice must not be changed after it is passed.
func Bytes(key string, value []byte) Attr {
return Attr{key, BytesValue(value)}
}
// Slice returns an Attr for a []Value value.
// The passed slice must not be changed after it is passed.
func Slice(key string, value ...Value) Attr {
return Attr{key, SliceValue(value...)}
}
// Map returns an Attr for a map value.
// The passed slice must not be changed after it is passed.
func Map(key string, value ...Attr) Attr {
return Attr{key, MapValue(value...)}
}
// Equal returns if a is equal to b.
func (a Attr) Equal(b Attr) bool {
return a.Key == b.Key && a.Value.Equal(b.Value)
}
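// Illustrative sketch (not part of the vendored file): building and comparing
// attributes with the constructors above; the key and value strings are made up.
package telemetry

import "fmt"

func ExampleAttr() {
	a := String("service.name", "csi-e2e")
	b := Int64("retry.count", 3)
	nested := Map("resource", a, b) // an Attr whose value is a key-value list

	fmt.Println(a.Equal(a)) // true
	fmt.Println(a.Equal(b)) // false
	fmt.Println(nested.Key) // resource
}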


@@ -0,0 +1,8 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package telemetry provides lightweight representations of OpenTelemetry
telemetry that are compatible with the OTLP JSON protobuf encoding.
*/
package telemetry


@@ -0,0 +1,103 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry
import (
"encoding/hex"
"errors"
"fmt"
)
const (
traceIDSize = 16
spanIDSize = 8
)
// TraceID is a custom data type that is used for all trace IDs.
type TraceID [traceIDSize]byte
// String returns the hex string representation form of a TraceID.
func (tid TraceID) String() string {
return hex.EncodeToString(tid[:])
}
// IsEmpty returns false if id contains at least one non-zero byte.
func (tid TraceID) IsEmpty() bool {
return tid == [traceIDSize]byte{}
}
// MarshalJSON converts the trace ID into a hex string enclosed in quotes.
func (tid TraceID) MarshalJSON() ([]byte, error) {
if tid.IsEmpty() {
return []byte(`""`), nil
}
return marshalJSON(tid[:])
}
// UnmarshalJSON inflates the trace ID from hex string, possibly enclosed in
// quotes.
func (tid *TraceID) UnmarshalJSON(data []byte) error {
*tid = [traceIDSize]byte{}
return unmarshalJSON(tid[:], data)
}
// SpanID is a custom data type that is used for all span IDs.
type SpanID [spanIDSize]byte
// String returns the hex string representation form of a SpanID.
func (sid SpanID) String() string {
return hex.EncodeToString(sid[:])
}
// IsEmpty returns true if the span ID contains only zero bytes.
func (sid SpanID) IsEmpty() bool {
return sid == [spanIDSize]byte{}
}
// MarshalJSON converts span ID into a hex string enclosed in quotes.
func (sid SpanID) MarshalJSON() ([]byte, error) {
if sid.IsEmpty() {
return []byte(`""`), nil
}
return marshalJSON(sid[:])
}
// UnmarshalJSON decodes span ID from hex string, possibly enclosed in quotes.
func (sid *SpanID) UnmarshalJSON(data []byte) error {
*sid = [spanIDSize]byte{}
return unmarshalJSON(sid[:], data)
}
// marshalJSON converts id into a hex string enclosed in quotes.
func marshalJSON(id []byte) ([]byte, error) {
// Plus 2 quote chars at the start and end.
hexLen := hex.EncodedLen(len(id)) + 2
b := make([]byte, hexLen)
hex.Encode(b[1:hexLen-1], id)
b[0], b[hexLen-1] = '"', '"'
return b, nil
}
// unmarshalJSON inflates trace id from hex string, possibly enclosed in quotes.
func unmarshalJSON(dst []byte, src []byte) error {
if l := len(src); l >= 2 && src[0] == '"' && src[l-1] == '"' {
src = src[1 : l-1]
}
nLen := len(src)
if nLen == 0 {
return nil
}
if len(dst) != hex.DecodedLen(nLen) {
return errors.New("invalid length for ID")
}
_, err := hex.Decode(dst, src)
if err != nil {
return fmt.Errorf("cannot unmarshal ID from string '%s': %w", string(src), err)
}
return nil
}
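// Illustrative sketch (not part of the vendored file): trace IDs round-trip
// through the quoted hex encoding used by OTLP/JSON; the hex string below is
// an arbitrary example value.
package telemetry

import (
	"encoding/json"
	"fmt"
)

func ExampleTraceID_json() {
	var tid TraceID
	if err := json.Unmarshal([]byte(`"4bf92f3577b34da6a3ce929d0e0e4736"`), &tid); err != nil {
		panic(err)
	}
	fmt.Println(tid.String()) // 4bf92f3577b34da6a3ce929d0e0e4736

	out, _ := json.Marshal(tid)
	fmt.Println(string(out)) // "4bf92f3577b34da6a3ce929d0e0e4736"
}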


@@ -0,0 +1,67 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry
import (
"encoding/json"
"strconv"
)
// protoInt64 represents the protobuf encoding of integers which can be either
// strings or integers.
type protoInt64 int64
// Int64 returns the protoInt64 as an int64.
func (i *protoInt64) Int64() int64 { return int64(*i) }
// UnmarshalJSON decodes both strings and integers.
func (i *protoInt64) UnmarshalJSON(data []byte) error {
if data[0] == '"' {
var str string
if err := json.Unmarshal(data, &str); err != nil {
return err
}
parsedInt, err := strconv.ParseInt(str, 10, 64)
if err != nil {
return err
}
*i = protoInt64(parsedInt)
} else {
var parsedInt int64
if err := json.Unmarshal(data, &parsedInt); err != nil {
return err
}
*i = protoInt64(parsedInt)
}
return nil
}
// protoUint64 represents the protobuf encoding of integers which can be either
// strings or integers.
type protoUint64 uint64
// Uint64 returns the protoUint64 as a uint64.
func (i *protoUint64) Uint64() uint64 { return uint64(*i) }
// UnmarshalJSON decodes both strings and integers.
func (i *protoUint64) UnmarshalJSON(data []byte) error {
if data[0] == '"' {
var str string
if err := json.Unmarshal(data, &str); err != nil {
return err
}
parsedUint, err := strconv.ParseUint(str, 10, 64)
if err != nil {
return err
}
*i = protoUint64(parsedUint)
} else {
var parsedUint uint64
if err := json.Unmarshal(data, &parsedUint); err != nil {
return err
}
*i = protoUint64(parsedUint)
}
return nil
}
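// Illustrative sketch (not part of the vendored file): OTLP/JSON allows 64-bit
// integers to be encoded either as JSON numbers or as decimal strings; both
// forms decode to the same value.
package telemetry

import (
	"encoding/json"
	"fmt"
)

func ExampleProtoInt64() {
	var fromString, fromNumber protoInt64
	_ = json.Unmarshal([]byte(`"1700000000000000000"`), &fromString)
	_ = json.Unmarshal([]byte(`1700000000000000000`), &fromNumber)
	fmt.Println(fromString.Int64() == fromNumber.Int64()) // true
}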


@ -0,0 +1,66 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
)
// Resource information.
type Resource struct {
// Attrs are the set of attributes that describe the resource. Attribute
// keys MUST be unique (it is not allowed to have more than one attribute
// with the same key).
Attrs []Attr `json:"attributes,omitempty"`
// DroppedAttrs is the number of dropped attributes. If the value
// is 0, then no attributes were dropped.
DroppedAttrs uint32 `json:"droppedAttributesCount,omitempty"`
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into r.
func (r *Resource) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid Resource type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid Resource field: %#v", keyIface)
}
switch key {
case "attributes":
err = decoder.Decode(&r.Attrs)
case "droppedAttributesCount", "dropped_attributes_count":
err = decoder.Decode(&r.DroppedAttrs)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}


@@ -0,0 +1,67 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
)
// Scope is the identifying values of the instrumentation scope.
type Scope struct {
Name string `json:"name,omitempty"`
Version string `json:"version,omitempty"`
Attrs []Attr `json:"attributes,omitempty"`
DroppedAttrs uint32 `json:"droppedAttributesCount,omitempty"`
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into r.
func (s *Scope) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid Scope type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid Scope field: %#v", keyIface)
}
switch key {
case "name":
err = decoder.Decode(&s.Name)
case "version":
err = decoder.Decode(&s.Version)
case "attributes":
err = decoder.Decode(&s.Attrs)
case "droppedAttributesCount", "dropped_attributes_count":
err = decoder.Decode(&s.DroppedAttrs)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}


@@ -0,0 +1,456 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry
import (
"bytes"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"io"
"time"
)
// A Span represents a single operation performed by a single component of the
// system.
type Span struct {
// A unique identifier for a trace. All spans from the same trace share
// the same `trace_id`. The ID is a 16-byte array. An ID with all zeroes OR
// of length other than 16 bytes is considered invalid (empty string in OTLP/JSON
// is zero-length and thus is also invalid).
//
// This field is required.
TraceID TraceID `json:"traceId,omitempty"`
// A unique identifier for a span within a trace, assigned when the span
// is created. The ID is an 8-byte array. An ID with all zeroes OR of length
// other than 8 bytes is considered invalid (empty string in OTLP/JSON
// is zero-length and thus is also invalid).
//
// This field is required.
SpanID SpanID `json:"spanId,omitempty"`
// trace_state conveys information about request position in multiple distributed tracing graphs.
// It is a trace_state in w3c-trace-context format: https://www.w3.org/TR/trace-context/#tracestate-header
// See also https://github.com/w3c/distributed-tracing for more details about this field.
TraceState string `json:"traceState,omitempty"`
// The `span_id` of this span's parent span. If this is a root span, then this
// field must be empty. The ID is an 8-byte array.
ParentSpanID SpanID `json:"parentSpanId,omitempty"`
// Flags, a bit field.
//
// Bits 0-7 (8 least significant bits) are the trace flags as defined in W3C Trace
// Context specification. To read the 8-bit W3C trace flag, use
// `flags & SPAN_FLAGS_TRACE_FLAGS_MASK`.
//
// See https://www.w3.org/TR/trace-context-2/#trace-flags for the flag definitions.
//
// Bits 8 and 9 represent the 3 states of whether a span's parent
// is remote. The states are (unknown, is not remote, is remote).
// To read whether the value is known, use `(flags & SPAN_FLAGS_CONTEXT_HAS_IS_REMOTE_MASK) != 0`.
// To read whether the span is remote, use `(flags & SPAN_FLAGS_CONTEXT_IS_REMOTE_MASK) != 0`.
//
// When creating span messages, if the message is logically forwarded from another source
// with an equivalent flags fields (i.e., usually another OTLP span message), the field SHOULD
// be copied as-is. If creating from a source that does not have an equivalent flags field
// (such as a runtime representation of an OpenTelemetry span), the high 22 bits MUST
// be set to zero.
// Readers MUST NOT assume that bits 10-31 (22 most significant bits) will be zero.
//
// [Optional].
Flags uint32 `json:"flags,omitempty"`
// A description of the span's operation.
//
// For example, the name can be a qualified method name or a file name
// and a line number where the operation is called. A best practice is to use
// the same display name at the same call point in an application.
// This makes it easier to correlate spans in different traces.
//
// This field is semantically required to be set to non-empty string.
// Empty value is equivalent to an unknown span name.
//
// This field is required.
Name string `json:"name"`
// Distinguishes between spans generated in a particular context. For example,
// two spans with the same name may be distinguished using `CLIENT` (caller)
// and `SERVER` (callee) to identify queueing latency associated with the span.
Kind SpanKind `json:"kind,omitempty"`
// start_time_unix_nano is the start time of the span. On the client side, this is the time
// kept by the local machine where the span execution starts. On the server side, this
// is the time when the server's application handler starts running.
// Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January 1970.
//
// This field is semantically required and it is expected that end_time >= start_time.
StartTime time.Time `json:"startTimeUnixNano,omitempty"`
// end_time_unix_nano is the end time of the span. On the client side, this is the time
// kept by the local machine where the span execution ends. On the server side, this
// is the time when the server application handler stops running.
// Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January 1970.
//
// This field is semantically required and it is expected that end_time >= start_time.
EndTime time.Time `json:"endTimeUnixNano,omitempty"`
// attributes is a collection of key/value pairs. Note, global attributes
// like server name can be set using the resource API. Examples of attributes:
//
// "/http/user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
// "/http/server_latency": 300
// "example.com/myattribute": true
// "example.com/score": 10.239
//
// The OpenTelemetry API specification further restricts the allowed value types:
// https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/common/README.md#attribute
// Attribute keys MUST be unique (it is not allowed to have more than one
// attribute with the same key).
Attrs []Attr `json:"attributes,omitempty"`
// dropped_attributes_count is the number of attributes that were discarded. Attributes
// can be discarded because their keys are too long or because there are too many
// attributes. If this value is 0, then no attributes were dropped.
DroppedAttrs uint32 `json:"droppedAttributesCount,omitempty"`
// events is a collection of Event items.
Events []*SpanEvent `json:"events,omitempty"`
// dropped_events_count is the number of dropped events. If the value is 0, then no
// events were dropped.
DroppedEvents uint32 `json:"droppedEventsCount,omitempty"`
// links is a collection of Links, which are references from this span to a span
// in the same or different trace.
Links []*SpanLink `json:"links,omitempty"`
// dropped_links_count is the number of dropped links after the maximum size was
// enforced. If this value is 0, then no links were dropped.
DroppedLinks uint32 `json:"droppedLinksCount,omitempty"`
// An optional final status for this span. Semantically when Status isn't set, it means
// span's status code is unset, i.e. assume STATUS_CODE_UNSET (code = 0).
Status *Status `json:"status,omitempty"`
}
// MarshalJSON encodes s into OTLP formatted JSON.
func (s Span) MarshalJSON() ([]byte, error) {
startT := s.StartTime.UnixNano()
if s.StartTime.IsZero() || startT < 0 {
startT = 0
}
endT := s.EndTime.UnixNano()
if s.EndTime.IsZero() || endT < 0 {
endT = 0
}
// Override non-empty default SpanID marshal and omitempty.
var parentSpanId string
if !s.ParentSpanID.IsEmpty() {
b := make([]byte, hex.EncodedLen(spanIDSize))
hex.Encode(b, s.ParentSpanID[:])
parentSpanId = string(b)
}
type Alias Span
return json.Marshal(struct {
Alias
ParentSpanID string `json:"parentSpanId,omitempty"`
StartTime uint64 `json:"startTimeUnixNano,omitempty"`
EndTime uint64 `json:"endTimeUnixNano,omitempty"`
}{
Alias: Alias(s),
ParentSpanID: parentSpanId,
StartTime: uint64(startT),
EndTime: uint64(endT),
})
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into s.
func (s *Span) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid Span type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid Span field: %#v", keyIface)
}
switch key {
case "traceId", "trace_id":
err = decoder.Decode(&s.TraceID)
case "spanId", "span_id":
err = decoder.Decode(&s.SpanID)
case "traceState", "trace_state":
err = decoder.Decode(&s.TraceState)
case "parentSpanId", "parent_span_id":
err = decoder.Decode(&s.ParentSpanID)
case "flags":
err = decoder.Decode(&s.Flags)
case "name":
err = decoder.Decode(&s.Name)
case "kind":
err = decoder.Decode(&s.Kind)
case "startTimeUnixNano", "start_time_unix_nano":
var val protoUint64
err = decoder.Decode(&val)
s.StartTime = time.Unix(0, int64(val.Uint64()))
case "endTimeUnixNano", "end_time_unix_nano":
var val protoUint64
err = decoder.Decode(&val)
s.EndTime = time.Unix(0, int64(val.Uint64()))
case "attributes":
err = decoder.Decode(&s.Attrs)
case "droppedAttributesCount", "dropped_attributes_count":
err = decoder.Decode(&s.DroppedAttrs)
case "events":
err = decoder.Decode(&s.Events)
case "droppedEventsCount", "dropped_events_count":
err = decoder.Decode(&s.DroppedEvents)
case "links":
err = decoder.Decode(&s.Links)
case "droppedLinksCount", "dropped_links_count":
err = decoder.Decode(&s.DroppedLinks)
case "status":
err = decoder.Decode(&s.Status)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}
// SpanFlags represents constants used to interpret the
// Span.flags field, which is protobuf 'fixed32' type and is to
// be used as bit-fields. Each non-zero value defined in this enum is
// a bit-mask. To extract the bit-field, for example, use an
// expression like:
//
// (span.flags & SPAN_FLAGS_TRACE_FLAGS_MASK)
//
// See https://www.w3.org/TR/trace-context-2/#trace-flags for the flag definitions.
//
// Note that Span flags were introduced in version 1.1 of the
// OpenTelemetry protocol. Older Span producers do not set this
// field, consequently consumers should not rely on the absence of a
// particular flag bit to indicate the presence of a particular feature.
type SpanFlags int32
const (
// Bits 0-7 are used for trace flags.
SpanFlagsTraceFlagsMask SpanFlags = 255
// Bits 8 and 9 are used to indicate that the parent span or link span is remote.
// Bit 8 (`HAS_IS_REMOTE`) indicates whether the value is known.
// Bit 9 (`IS_REMOTE`) indicates whether the span or link is remote.
SpanFlagsContextHasIsRemoteMask SpanFlags = 256
// SpanFlagsContextIsRemoteMask indicates the Span is remote.
SpanFlagsContextIsRemoteMask SpanFlags = 512
)
// SpanKind is the type of span. Can be used to specify additional relationships between spans
// in addition to a parent/child relationship.
type SpanKind int32
const (
// Indicates that the span represents an internal operation within an application,
// as opposed to an operation happening at the boundaries. Default value.
SpanKindInternal SpanKind = 1
// Indicates that the span covers server-side handling of an RPC or other
// remote network request.
SpanKindServer SpanKind = 2
// Indicates that the span describes a request to some remote service.
SpanKindClient SpanKind = 3
// Indicates that the span describes a producer sending a message to a broker.
// Unlike CLIENT and SERVER, there is often no direct critical path latency relationship
// between producer and consumer spans. A PRODUCER span ends when the message was accepted
// by the broker while the logical processing of the message might span a much longer time.
SpanKindProducer SpanKind = 4
// Indicates that the span describes consumer receiving a message from a broker.
// Like the PRODUCER kind, there is often no direct critical path latency relationship
// between producer and consumer spans.
SpanKindConsumer SpanKind = 5
)
// Event is a time-stamped annotation of the span, consisting of user-supplied
// text description and key-value pairs.
type SpanEvent struct {
// time_unix_nano is the time the event occurred.
Time time.Time `json:"timeUnixNano,omitempty"`
// name of the event.
// This field is semantically required to be set to non-empty string.
Name string `json:"name,omitempty"`
// attributes is a collection of attribute key/value pairs on the event.
// Attribute keys MUST be unique (it is not allowed to have more than one
// attribute with the same key).
Attrs []Attr `json:"attributes,omitempty"`
// dropped_attributes_count is the number of dropped attributes. If the value is 0,
// then no attributes were dropped.
DroppedAttrs uint32 `json:"droppedAttributesCount,omitempty"`
}
// MarshalJSON encodes e into OTLP formatted JSON.
func (e SpanEvent) MarshalJSON() ([]byte, error) {
t := e.Time.UnixNano()
if e.Time.IsZero() || t < 0 {
t = 0
}
type Alias SpanEvent
return json.Marshal(struct {
Alias
Time uint64 `json:"timeUnixNano,omitempty"`
}{
Alias: Alias(e),
Time: uint64(t),
})
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into se.
func (se *SpanEvent) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid SpanEvent type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid SpanEvent field: %#v", keyIface)
}
switch key {
case "timeUnixNano", "time_unix_nano":
var val protoUint64
err = decoder.Decode(&val)
se.Time = time.Unix(0, int64(val.Uint64()))
case "name":
err = decoder.Decode(&se.Name)
case "attributes":
err = decoder.Decode(&se.Attrs)
case "droppedAttributesCount", "dropped_attributes_count":
err = decoder.Decode(&se.DroppedAttrs)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}
// A pointer from the current span to another span in the same trace or in a
// different trace. For example, this can be used in batching operations,
// where a single batch handler processes multiple requests from different
// traces or when the handler receives a request from a different project.
type SpanLink struct {
// A unique identifier of a trace that this linked span is part of. The ID is a
// 16-byte array.
TraceID TraceID `json:"traceId,omitempty"`
// A unique identifier for the linked span. The ID is an 8-byte array.
SpanID SpanID `json:"spanId,omitempty"`
// The trace_state associated with the link.
TraceState string `json:"traceState,omitempty"`
// attributes is a collection of attribute key/value pairs on the link.
// Attribute keys MUST be unique (it is not allowed to have more than one
// attribute with the same key).
Attrs []Attr `json:"attributes,omitempty"`
// dropped_attributes_count is the number of dropped attributes. If the value is 0,
// then no attributes were dropped.
DroppedAttrs uint32 `json:"droppedAttributesCount,omitempty"`
// Flags, a bit field.
//
// Bits 0-7 (8 least significant bits) are the trace flags as defined in W3C Trace
// Context specification. To read the 8-bit W3C trace flag, use
// `flags & SPAN_FLAGS_TRACE_FLAGS_MASK`.
//
// See https://www.w3.org/TR/trace-context-2/#trace-flags for the flag definitions.
//
// Bits 8 and 9 represent the 3 states of whether the link is remote.
// The states are (unknown, is not remote, is remote).
// To read whether the value is known, use `(flags & SPAN_FLAGS_CONTEXT_HAS_IS_REMOTE_MASK) != 0`.
// To read whether the link is remote, use `(flags & SPAN_FLAGS_CONTEXT_IS_REMOTE_MASK) != 0`.
//
// Readers MUST NOT assume that bits 10-31 (22 most significant bits) will be zero.
// When creating new spans, bits 10-31 (most-significant 22-bits) MUST be zero.
//
// [Optional].
Flags uint32 `json:"flags,omitempty"`
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into sl.
func (sl *SpanLink) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid SpanLink type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid SpanLink field: %#v", keyIface)
}
switch key {
case "traceId", "trace_id":
err = decoder.Decode(&sl.TraceID)
case "spanId", "span_id":
err = decoder.Decode(&sl.SpanID)
case "traceState", "trace_state":
err = decoder.Decode(&sl.TraceState)
case "attributes":
err = decoder.Decode(&sl.Attrs)
case "droppedAttributesCount", "dropped_attributes_count":
err = decoder.Decode(&sl.DroppedAttrs)
case "flags":
err = decoder.Decode(&sl.Flags)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}


@@ -0,0 +1,40 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry
// For the semantics of status codes see
// https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/api.md#set-status
type StatusCode int32
const (
// The default status.
StatusCodeUnset StatusCode = 0
// The Span has been validated by an Application developer or Operator to
// have completed successfully.
StatusCodeOK StatusCode = 1
// The Span contains an error.
StatusCodeError StatusCode = 2
)
var statusCodeStrings = []string{
"Unset",
"OK",
"Error",
}
func (s StatusCode) String() string {
if s >= 0 && int(s) < len(statusCodeStrings) {
return statusCodeStrings[s]
}
return "<unknown telemetry.StatusCode>"
}
// The Status type defines a logical error model that is suitable for different
// programming environments, including REST APIs and RPC APIs.
type Status struct {
// A developer-facing human readable error message.
Message string `json:"message,omitempty"`
// The status code.
Code StatusCode `json:"code,omitempty"`
}


@@ -0,0 +1,189 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package telemetry
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
)
// Traces represents the traces data that can be stored in a persistent storage,
// OR can be embedded by other protocols that transfer OTLP traces data but do
// not implement the OTLP protocol.
//
// The main difference between this message and collector protocol is that
// in this message there will not be any "control" or "metadata" specific to
// OTLP protocol.
//
// When new fields are added into this message, the OTLP request MUST be updated
// as well.
type Traces struct {
// An array of ResourceSpans.
// For data coming from a single resource this array will typically contain
// one element. Intermediary nodes that receive data from multiple origins
// typically batch the data before forwarding further and in that case this
// array will contain multiple elements.
ResourceSpans []*ResourceSpans `json:"resourceSpans,omitempty"`
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into td.
func (td *Traces) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid TracesData type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid TracesData field: %#v", keyIface)
}
switch key {
case "resourceSpans", "resource_spans":
err = decoder.Decode(&td.ResourceSpans)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}
// A collection of ScopeSpans from a Resource.
type ResourceSpans struct {
// The resource for the spans in this message.
// If this field is not set then no resource info is known.
Resource Resource `json:"resource"`
// A list of ScopeSpans that originate from a resource.
ScopeSpans []*ScopeSpans `json:"scopeSpans,omitempty"`
// This schema_url applies to the data in the "resource" field. It does not apply
// to the data in the "scope_spans" field which have their own schema_url field.
SchemaURL string `json:"schemaUrl,omitempty"`
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into rs.
func (rs *ResourceSpans) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid ResourceSpans type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid ResourceSpans field: %#v", keyIface)
}
switch key {
case "resource":
err = decoder.Decode(&rs.Resource)
case "scopeSpans", "scope_spans":
err = decoder.Decode(&rs.ScopeSpans)
case "schemaUrl", "schema_url":
err = decoder.Decode(&rs.SchemaURL)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}
// A collection of Spans produced by an InstrumentationScope.
type ScopeSpans struct {
// The instrumentation scope information for the spans in this message.
// Semantically when InstrumentationScope isn't set, it is equivalent with
// an empty instrumentation scope name (unknown).
Scope *Scope `json:"scope"`
// A list of Spans that originate from an instrumentation scope.
Spans []*Span `json:"spans,omitempty"`
// The Schema URL, if known. This is the identifier of the Schema that the span data
// is recorded in. To learn more about Schema URL see
// https://opentelemetry.io/docs/specs/otel/schemas/#schema-url
// This schema_url applies to all spans and span events in the "spans" field.
SchemaURL string `json:"schemaUrl,omitempty"`
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into ss.
func (ss *ScopeSpans) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid ScopeSpans type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid ScopeSpans field: %#v", keyIface)
}
switch key {
case "scope":
err = decoder.Decode(&ss.Scope)
case "spans":
err = decoder.Decode(&ss.Spans)
case "schemaUrl", "schema_url":
err = decoder.Decode(&ss.SchemaURL)
default:
// Skip unknown.
}
if err != nil {
return err
}
}
return nil
}
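// Illustrative sketch (not part of the vendored file): decoding an OTLP/JSON
// document into the Traces -> ResourceSpans -> ScopeSpans -> Span hierarchy
// defined above. Both camelCase ("resourceSpans") and snake_case
// ("scope_spans") field names are accepted; the payload values are made up.
package telemetry

import (
	"encoding/json"
	"fmt"
)

func ExampleTraces_unmarshal() {
	payload := []byte(`{
		"resourceSpans": [{
			"resource": {"attributes": [{"key": "service.name", "value": {"stringValue": "csi-e2e"}}]},
			"scope_spans": [{
				"scope": {"name": "manual"},
				"spans": [{"name": "example-operation", "kind": 3}]
			}]
		}]
	}`)

	var td Traces
	if err := json.Unmarshal(payload, &td); err != nil {
		panic(err)
	}

	span := td.ResourceSpans[0].ScopeSpans[0].Spans[0]
	fmt.Println(span.Name, span.Kind == SpanKindClient) // example-operation true
}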


@@ -0,0 +1,452 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
//go:generate stringer -type=ValueKind -trimprefix=ValueKind
package telemetry
import (
"bytes"
"cmp"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"math"
"slices"
"strconv"
"unsafe"
)
// A Value represents a structured value.
// A zero value is valid and represents an empty value.
type Value struct {
// Ensure forward compatibility by explicitly making this not comparable.
noCmp [0]func() //nolint: unused // This is indeed used.
// num holds the value for Int64, Float64, and Bool. It holds the length
// for String, Bytes, Slice, Map.
num uint64
// any holds either the KindBool, KindInt64, KindFloat64, stringptr,
// bytesptr, sliceptr, or mapptr. If KindBool, KindInt64, or KindFloat64
// then the value of Value is in num as described above. Otherwise, it
// contains the value wrapped in the appropriate type.
any any
}
type (
// stringptr represents a value in Value.any for KindString Values.
stringptr *byte
// bytesptr represents a value in Value.any for KindBytes Values.
bytesptr *byte
// sliceptr represents a value in Value.any for KindSlice Values.
sliceptr *Value
// mapptr represents a value in Value.any for KindMap Values.
mapptr *Attr
)
// ValueKind is the kind of a [Value].
type ValueKind int
// ValueKind values.
const (
ValueKindEmpty ValueKind = iota
ValueKindBool
ValueKindFloat64
ValueKindInt64
ValueKindString
ValueKindBytes
ValueKindSlice
ValueKindMap
)
var valueKindStrings = []string{
"Empty",
"Bool",
"Float64",
"Int64",
"String",
"Bytes",
"Slice",
"Map",
}
func (k ValueKind) String() string {
if k >= 0 && int(k) < len(valueKindStrings) {
return valueKindStrings[k]
}
return "<unknown telemetry.ValueKind>"
}
// StringValue returns a new [Value] for a string.
func StringValue(v string) Value {
return Value{
num: uint64(len(v)),
any: stringptr(unsafe.StringData(v)),
}
}
// IntValue returns a [Value] for an int.
func IntValue(v int) Value { return Int64Value(int64(v)) }
// Int64Value returns a [Value] for an int64.
func Int64Value(v int64) Value {
return Value{num: uint64(v), any: ValueKindInt64}
}
// Float64Value returns a [Value] for a float64.
func Float64Value(v float64) Value {
return Value{num: math.Float64bits(v), any: ValueKindFloat64}
}
// BoolValue returns a [Value] for a bool.
func BoolValue(v bool) Value { //nolint:revive // Not a control flag.
var n uint64
if v {
n = 1
}
return Value{num: n, any: ValueKindBool}
}
// BytesValue returns a [Value] for a byte slice. The passed slice must not be
// changed after it is passed.
func BytesValue(v []byte) Value {
return Value{
num: uint64(len(v)),
any: bytesptr(unsafe.SliceData(v)),
}
}
// SliceValue returns a [Value] for a slice of [Value]. The passed slice must
// not be changed after it is passed.
func SliceValue(vs ...Value) Value {
return Value{
num: uint64(len(vs)),
any: sliceptr(unsafe.SliceData(vs)),
}
}
// MapValue returns a new [Value] for a slice of key-value pairs. The passed
// slice must not be changed after it is passed.
func MapValue(kvs ...Attr) Value {
return Value{
num: uint64(len(kvs)),
any: mapptr(unsafe.SliceData(kvs)),
}
}
// AsString returns the value held by v as a string.
func (v Value) AsString() string {
if sp, ok := v.any.(stringptr); ok {
return unsafe.String(sp, v.num)
}
// TODO: error handle
return ""
}
// asString returns the value held by v as a string. It will panic if the Value
// is not KindString.
func (v Value) asString() string {
return unsafe.String(v.any.(stringptr), v.num)
}
// AsInt64 returns the value held by v as an int64.
func (v Value) AsInt64() int64 {
if v.Kind() != ValueKindInt64 {
// TODO: error handle
return 0
}
return v.asInt64()
}
// asInt64 returns the value held by v as an int64. If v is not of KindInt64,
// this will return garbage.
func (v Value) asInt64() int64 {
// Assumes v.num was a valid int64 (overflow not checked).
return int64(v.num) // nolint: gosec
}
// AsBool returns the value held by v as a bool.
func (v Value) AsBool() bool {
if v.Kind() != ValueKindBool {
// TODO: error handle
return false
}
return v.asBool()
}
// asBool returns the value held by v as a bool. If v is not of KindBool, this
// will return garbage.
func (v Value) asBool() bool { return v.num == 1 }
// AsFloat64 returns the value held by v as a float64.
func (v Value) AsFloat64() float64 {
if v.Kind() != ValueKindFloat64 {
// TODO: error handle
return 0
}
return v.asFloat64()
}
// asFloat64 returns the value held by v as a float64. If v is not of
// KindFloat64, this will return garbage.
func (v Value) asFloat64() float64 { return math.Float64frombits(v.num) }
// AsBytes returns the value held by v as a []byte.
func (v Value) AsBytes() []byte {
if sp, ok := v.any.(bytesptr); ok {
return unsafe.Slice((*byte)(sp), v.num)
}
// TODO: error handle
return nil
}
// asBytes returns the value held by v as a []byte. It will panic if the Value
// is not KindBytes.
func (v Value) asBytes() []byte {
return unsafe.Slice((*byte)(v.any.(bytesptr)), v.num)
}
// AsSlice returns the value held by v as a []Value.
func (v Value) AsSlice() []Value {
if sp, ok := v.any.(sliceptr); ok {
return unsafe.Slice((*Value)(sp), v.num)
}
// TODO: error handle
return nil
}
// asSlice returns the value held by v as a []Value. It will panic if the Value
// is not KindSlice.
func (v Value) asSlice() []Value {
return unsafe.Slice((*Value)(v.any.(sliceptr)), v.num)
}
// AsMap returns the value held by v as a []Attr.
func (v Value) AsMap() []Attr {
if sp, ok := v.any.(mapptr); ok {
return unsafe.Slice((*Attr)(sp), v.num)
}
// TODO: error handle
return nil
}
// asMap returns the value held by v as a []Attr. It will panic if the
// Value is not KindMap.
func (v Value) asMap() []Attr {
return unsafe.Slice((*Attr)(v.any.(mapptr)), v.num)
}
// Kind returns the Kind of v.
func (v Value) Kind() ValueKind {
switch x := v.any.(type) {
case ValueKind:
return x
case stringptr:
return ValueKindString
case bytesptr:
return ValueKindBytes
case sliceptr:
return ValueKindSlice
case mapptr:
return ValueKindMap
default:
return ValueKindEmpty
}
}
// Empty returns if v does not hold any value.
func (v Value) Empty() bool { return v.Kind() == ValueKindEmpty }
// Equal returns if v is equal to w.
func (v Value) Equal(w Value) bool {
k1 := v.Kind()
k2 := w.Kind()
if k1 != k2 {
return false
}
switch k1 {
case ValueKindInt64, ValueKindBool:
return v.num == w.num
case ValueKindString:
return v.asString() == w.asString()
case ValueKindFloat64:
return v.asFloat64() == w.asFloat64()
case ValueKindSlice:
return slices.EqualFunc(v.asSlice(), w.asSlice(), Value.Equal)
case ValueKindMap:
sv := sortMap(v.asMap())
sw := sortMap(w.asMap())
return slices.EqualFunc(sv, sw, Attr.Equal)
case ValueKindBytes:
return bytes.Equal(v.asBytes(), w.asBytes())
case ValueKindEmpty:
return true
default:
// TODO: error handle
return false
}
}
func sortMap(m []Attr) []Attr {
sm := make([]Attr, len(m))
copy(sm, m)
slices.SortFunc(sm, func(a, b Attr) int {
return cmp.Compare(a.Key, b.Key)
})
return sm
}
// String returns Value's value as a string, formatted like [fmt.Sprint].
//
// The returned string is meant for debugging;
// the string representation is not stable.
func (v Value) String() string {
switch v.Kind() {
case ValueKindString:
return v.asString()
case ValueKindInt64:
// Assumes v.num was a valid int64 (overflow not checked).
return strconv.FormatInt(int64(v.num), 10) // nolint: gosec
case ValueKindFloat64:
return strconv.FormatFloat(v.asFloat64(), 'g', -1, 64)
case ValueKindBool:
return strconv.FormatBool(v.asBool())
case ValueKindBytes:
return fmt.Sprint(v.asBytes())
case ValueKindMap:
return fmt.Sprint(v.asMap())
case ValueKindSlice:
return fmt.Sprint(v.asSlice())
case ValueKindEmpty:
return "<nil>"
default:
// Try to handle this as gracefully as possible.
//
// Don't panic here. The goal is for developers to find this first if a
// ValueKind is not handled. It is preferable to have users open an issue
// asking why their attributes have an "unhandled" prefix than to have
// their code panic.
return fmt.Sprintf("<unhandled telemetry.ValueKind: %s>", v.Kind())
}
}
// MarshalJSON encodes v into OTLP formatted JSON.
func (v *Value) MarshalJSON() ([]byte, error) {
switch v.Kind() {
case ValueKindString:
return json.Marshal(struct {
Value string `json:"stringValue"`
}{v.asString()})
case ValueKindInt64:
return json.Marshal(struct {
Value string `json:"intValue"`
}{strconv.FormatInt(int64(v.num), 10)})
case ValueKindFloat64:
return json.Marshal(struct {
Value float64 `json:"doubleValue"`
}{v.asFloat64()})
case ValueKindBool:
return json.Marshal(struct {
Value bool `json:"boolValue"`
}{v.asBool()})
case ValueKindBytes:
return json.Marshal(struct {
Value []byte `json:"bytesValue"`
}{v.asBytes()})
case ValueKindMap:
return json.Marshal(struct {
Value struct {
Values []Attr `json:"values"`
} `json:"kvlistValue"`
}{struct {
Values []Attr `json:"values"`
}{v.asMap()}})
case ValueKindSlice:
return json.Marshal(struct {
Value struct {
Values []Value `json:"values"`
} `json:"arrayValue"`
}{struct {
Values []Value `json:"values"`
}{v.asSlice()}})
case ValueKindEmpty:
return nil, nil
default:
return nil, fmt.Errorf("unknown Value kind: %s", v.Kind().String())
}
}
// UnmarshalJSON decodes the OTLP formatted JSON contained in data into v.
func (v *Value) UnmarshalJSON(data []byte) error {
decoder := json.NewDecoder(bytes.NewReader(data))
t, err := decoder.Token()
if err != nil {
return err
}
if t != json.Delim('{') {
return errors.New("invalid Value type")
}
for decoder.More() {
keyIface, err := decoder.Token()
if err != nil {
if errors.Is(err, io.EOF) {
// Empty.
return nil
}
return err
}
key, ok := keyIface.(string)
if !ok {
return fmt.Errorf("invalid Value key: %#v", keyIface)
}
switch key {
case "stringValue", "string_value":
var val string
err = decoder.Decode(&val)
*v = StringValue(val)
case "boolValue", "bool_value":
var val bool
err = decoder.Decode(&val)
*v = BoolValue(val)
case "intValue", "int_value":
var val protoInt64
err = decoder.Decode(&val)
*v = Int64Value(val.Int64())
case "doubleValue", "double_value":
var val float64
err = decoder.Decode(&val)
*v = Float64Value(val)
case "bytesValue", "bytes_value":
var val64 string
if err := decoder.Decode(&val64); err != nil {
return err
}
var val []byte
val, err = base64.StdEncoding.DecodeString(val64)
*v = BytesValue(val)
case "arrayValue", "array_value":
var val struct{ Values []Value }
err = decoder.Decode(&val)
*v = SliceValue(val.Values...)
case "kvlistValue", "kvlist_value":
var val struct{ Values []Attr }
err = decoder.Decode(&val)
*v = MapValue(val.Values...)
default:
// Skip unknown.
continue
}
// Use first valid. Ignore the rest.
return err
}
// Only unknown fields. Return nil without unmarshaling any value.
return nil
}
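// Illustrative sketch (not part of the vendored file): a Value marshals into
// the OTLP/JSON tagged-union form and decodes back to an equal Value.
package telemetry

import (
	"encoding/json"
	"fmt"
)

func ExampleValue_json() {
	v := Int64Value(42)

	b, _ := json.Marshal(&v) // MarshalJSON has a pointer receiver
	fmt.Println(string(b)) // {"intValue":"42"}

	var back Value
	_ = json.Unmarshal(b, &back)
	fmt.Println(v.Equal(back)) // true
}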

94 e2e/vendor/go.opentelemetry.io/auto/sdk/limit.go generated vendored Normal file

@@ -0,0 +1,94 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package sdk
import (
"log/slog"
"os"
"strconv"
)
// maxSpan are the span limits resolved during startup.
var maxSpan = newSpanLimits()
type spanLimits struct {
// Attrs is the number of allowed attributes for a span.
//
// This is resolved from the environment variable value for the
// OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT key if it exists. Otherwise, the
// environment variable value for OTEL_ATTRIBUTE_COUNT_LIMIT, or 128 if
// that is not set, is used.
Attrs int
// AttrValueLen is the maximum attribute value length allowed for a span.
//
// This is resolved from the environment variable value for the
// OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT key if it exists. Otherwise, the
// environment variable value for OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT, or -1
// if that is not set, is used.
AttrValueLen int
// Events is the number of allowed events for a span.
//
// This is resolved from the environment variable value for the
// OTEL_SPAN_EVENT_COUNT_LIMIT key, or 128 is used if that is not set.
Events int
// EventAttrs is the number of allowed attributes for a span event.
//
// This is resolved from the environment variable value for the
// OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT key, or 128 is used if that is not set.
EventAttrs int
// Links is the number of allowed Links for a span.
//
// This is resolved from the environment variable value for the
// OTEL_SPAN_LINK_COUNT_LIMIT, or 128 is used if that is not set.
Links int
// LinkAttrs is the number of allowed attributes for a span link.
//
// This is resolved from the environment variable value for the
// OTEL_LINK_ATTRIBUTE_COUNT_LIMIT, or 128 is used if that is not set.
LinkAttrs int
}
func newSpanLimits() spanLimits {
return spanLimits{
Attrs: firstEnv(
128,
"OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT",
"OTEL_ATTRIBUTE_COUNT_LIMIT",
),
AttrValueLen: firstEnv(
-1, // Unlimited.
"OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT",
"OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT",
),
Events: firstEnv(128, "OTEL_SPAN_EVENT_COUNT_LIMIT"),
EventAttrs: firstEnv(128, "OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT"),
Links: firstEnv(128, "OTEL_SPAN_LINK_COUNT_LIMIT"),
LinkAttrs: firstEnv(128, "OTEL_LINK_ATTRIBUTE_COUNT_LIMIT"),
}
}
// firstEnv returns the parsed integer value of the first matching environment
// variable from keys. The defaultVal is returned if the value is not an
// integer or no match is found.
func firstEnv(defaultVal int, keys ...string) int {
for _, key := range keys {
strV := os.Getenv(key)
if strV == "" {
continue
}
v, err := strconv.Atoi(strV)
if err == nil {
return v
}
slog.Warn(
"invalid limit environment variable",
"error", err,
"key", key,
"value", strV,
)
}
return defaultVal
}
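As a sketch of the resolution order described in the comments above (written as a hypothetical test in this sdk package), the span-specific variable wins over the general one, and values that do not parse as integers are skipped. Note that maxSpan itself is resolved once at package initialization, so these calls only exercise firstEnv directly.
package sdk
import "testing"
// TestFirstEnvSketch is a hypothetical sketch of firstEnv's precedence rules.
func TestFirstEnvSketch(t *testing.T) {
	t.Setenv("OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT", "")
	t.Setenv("OTEL_ATTRIBUTE_COUNT_LIMIT", "64")
	if got := firstEnv(128, "OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT", "OTEL_ATTRIBUTE_COUNT_LIMIT"); got != 64 {
		t.Errorf("got %d, want 64", got)
	}
	// The span-specific variable takes precedence when both are set.
	t.Setenv("OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT", "32")
	if got := firstEnv(128, "OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT", "OTEL_ATTRIBUTE_COUNT_LIMIT"); got != 32 {
		t.Errorf("got %d, want 32", got)
	}
	// Values that do not parse as integers are skipped; with no valid value
	// remaining, the default is returned.
	t.Setenv("OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT", "not-a-number")
	t.Setenv("OTEL_ATTRIBUTE_COUNT_LIMIT", "also-not-a-number")
	if got := firstEnv(128, "OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT", "OTEL_ATTRIBUTE_COUNT_LIMIT"); got != 128 {
		t.Errorf("got %d, want 128", got)
	}
}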

432
e2e/vendor/go.opentelemetry.io/auto/sdk/span.go generated vendored Normal file
View File

@ -0,0 +1,432 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package sdk
import (
"encoding/json"
"fmt"
"reflect"
"runtime"
"strings"
"sync"
"sync/atomic"
"time"
"unicode/utf8"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
"go.opentelemetry.io/otel/trace"
"go.opentelemetry.io/otel/trace/noop"
"go.opentelemetry.io/auto/sdk/internal/telemetry"
)
type span struct {
noop.Span
spanContext trace.SpanContext
sampled atomic.Bool
mu sync.Mutex
traces *telemetry.Traces
span *telemetry.Span
}
func (s *span) SpanContext() trace.SpanContext {
if s == nil {
return trace.SpanContext{}
}
// s.spanContext is immutable, do not acquire lock s.mu.
return s.spanContext
}
func (s *span) IsRecording() bool {
if s == nil {
return false
}
return s.sampled.Load()
}
func (s *span) SetStatus(c codes.Code, msg string) {
if s == nil || !s.sampled.Load() {
return
}
s.mu.Lock()
defer s.mu.Unlock()
if s.span.Status == nil {
s.span.Status = new(telemetry.Status)
}
s.span.Status.Message = msg
switch c {
case codes.Unset:
s.span.Status.Code = telemetry.StatusCodeUnset
case codes.Error:
s.span.Status.Code = telemetry.StatusCodeError
case codes.Ok:
s.span.Status.Code = telemetry.StatusCodeOK
}
}
func (s *span) SetAttributes(attrs ...attribute.KeyValue) {
if s == nil || !s.sampled.Load() {
return
}
s.mu.Lock()
defer s.mu.Unlock()
limit := maxSpan.Attrs
if limit == 0 {
// No attributes allowed.
s.span.DroppedAttrs += uint32(len(attrs))
return
}
m := make(map[string]int)
for i, a := range s.span.Attrs {
m[a.Key] = i
}
for _, a := range attrs {
val := convAttrValue(a.Value)
if val.Empty() {
s.span.DroppedAttrs++
continue
}
if idx, ok := m[string(a.Key)]; ok {
s.span.Attrs[idx] = telemetry.Attr{
Key: string(a.Key),
Value: val,
}
} else if limit < 0 || len(s.span.Attrs) < limit {
s.span.Attrs = append(s.span.Attrs, telemetry.Attr{
Key: string(a.Key),
Value: val,
})
m[string(a.Key)] = len(s.span.Attrs) - 1
} else {
s.span.DroppedAttrs++
}
}
}
// convCappedAttrs converts up to limit attrs into a []telemetry.Attr. The
// number of dropped attributes is also returned.
func convCappedAttrs(limit int, attrs []attribute.KeyValue) ([]telemetry.Attr, uint32) {
if limit == 0 {
return nil, uint32(len(attrs))
}
if limit < 0 {
// Unlimited.
return convAttrs(attrs), 0
}
limit = min(len(attrs), limit)
return convAttrs(attrs[:limit]), uint32(len(attrs) - limit)
}
func convAttrs(attrs []attribute.KeyValue) []telemetry.Attr {
if len(attrs) == 0 {
// Avoid allocations if not necessary.
return nil
}
out := make([]telemetry.Attr, 0, len(attrs))
for _, attr := range attrs {
key := string(attr.Key)
val := convAttrValue(attr.Value)
if val.Empty() {
continue
}
out = append(out, telemetry.Attr{Key: key, Value: val})
}
return out
}
func convAttrValue(value attribute.Value) telemetry.Value {
switch value.Type() {
case attribute.BOOL:
return telemetry.BoolValue(value.AsBool())
case attribute.INT64:
return telemetry.Int64Value(value.AsInt64())
case attribute.FLOAT64:
return telemetry.Float64Value(value.AsFloat64())
case attribute.STRING:
v := truncate(maxSpan.AttrValueLen, value.AsString())
return telemetry.StringValue(v)
case attribute.BOOLSLICE:
slice := value.AsBoolSlice()
out := make([]telemetry.Value, 0, len(slice))
for _, v := range slice {
out = append(out, telemetry.BoolValue(v))
}
return telemetry.SliceValue(out...)
case attribute.INT64SLICE:
slice := value.AsInt64Slice()
out := make([]telemetry.Value, 0, len(slice))
for _, v := range slice {
out = append(out, telemetry.Int64Value(v))
}
return telemetry.SliceValue(out...)
case attribute.FLOAT64SLICE:
slice := value.AsFloat64Slice()
out := make([]telemetry.Value, 0, len(slice))
for _, v := range slice {
out = append(out, telemetry.Float64Value(v))
}
return telemetry.SliceValue(out...)
case attribute.STRINGSLICE:
slice := value.AsStringSlice()
out := make([]telemetry.Value, 0, len(slice))
for _, v := range slice {
v = truncate(maxSpan.AttrValueLen, v)
out = append(out, telemetry.StringValue(v))
}
return telemetry.SliceValue(out...)
}
return telemetry.Value{}
}
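A small sketch (again a hypothetical test in this package) of the capping behaviour of the helpers above: with a limit of two, the third attribute is counted as dropped rather than converted, while a negative limit converts everything.
package sdk
import (
	"testing"
	"go.opentelemetry.io/otel/attribute"
)
// TestConvCappedAttrsSketch is a hypothetical sketch of convCappedAttrs.
func TestConvCappedAttrsSketch(t *testing.T) {
	attrs := []attribute.KeyValue{
		attribute.String("a", "1"),
		attribute.String("b", "2"),
		attribute.String("c", "3"),
	}
	kept, dropped := convCappedAttrs(2, attrs)
	if len(kept) != 2 || dropped != 1 {
		t.Errorf("kept %d dropped %d, want 2 kept and 1 dropped", len(kept), dropped)
	}
	// A negative limit means unlimited: everything is converted.
	kept, dropped = convCappedAttrs(-1, attrs)
	if len(kept) != 3 || dropped != 0 {
		t.Errorf("kept %d dropped %d, want 3 kept and 0 dropped", len(kept), dropped)
	}
}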
// truncate returns a truncated version of s such that it contains at most the
// limit number of characters. Truncation is applied by returning the limit
// number of valid characters contained in s.
//
// If limit is negative, it returns the original string.
//
// UTF-8 is supported. When truncating, all invalid characters are dropped
// before applying truncation.
//
// If s already contains no more than the limit number of bytes, it is returned
// unchanged. No invalid characters are removed.
func truncate(limit int, s string) string {
// This prioritizes performance in the following order based on the most
// common expected use-cases.
//
// - Short values less than the default limit (128).
// - Strings with valid encodings that exceed the limit.
// - No limit.
// - Strings with invalid encodings that exceed the limit.
if limit < 0 || len(s) <= limit {
return s
}
// Optimistically, assume all valid UTF-8.
var b strings.Builder
count := 0
for i, c := range s {
if c != utf8.RuneError {
count++
if count > limit {
return s[:i]
}
continue
}
_, size := utf8.DecodeRuneInString(s[i:])
if size == 1 {
// Invalid encoding.
b.Grow(len(s) - 1)
_, _ = b.WriteString(s[:i])
s = s[i:]
break
}
}
// Fast-path, no invalid input.
if b.Cap() == 0 {
return s
}
// Truncate while validating UTF-8.
for i := 0; i < len(s) && count < limit; {
c := s[i]
if c < utf8.RuneSelf {
// Optimization for single byte runes (common case).
_ = b.WriteByte(c)
i++
count++
continue
}
_, size := utf8.DecodeRuneInString(s[i:])
if size == 1 {
// We checked for all 1-byte runes above, this is a RuneError.
i++
continue
}
_, _ = b.WriteString(s[i : i+size])
i += size
count++
}
return b.String()
}
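A hypothetical test sketch of the truncate behaviour documented above: the limit counts valid characters rather than bytes, and invalid UTF-8 bytes are dropped without counting toward the limit.
package sdk
import "testing"
// TestTruncateSketch is a hypothetical sketch of truncate, not vendored code.
func TestTruncateSketch(t *testing.T) {
	if got := truncate(-1, "unlimited"); got != "unlimited" {
		t.Errorf("negative limit should return the input, got %q", got)
	}
	// "é" is two bytes but one character; three characters are kept.
	if got := truncate(3, "héllo"); got != "hél" {
		t.Errorf("want %q, got %q", "hél", got)
	}
	// The invalid byte 0xff is dropped and does not count toward the limit.
	if got := truncate(3, "a\xffbcd"); got != "abc" {
		t.Errorf("want %q, got %q", "abc", got)
	}
}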
func (s *span) End(opts ...trace.SpanEndOption) {
if s == nil || !s.sampled.Swap(false) {
return
}
// s.end exists so the lock (s.mu) is not held while s.ended is called.
s.ended(s.end(opts))
}
func (s *span) end(opts []trace.SpanEndOption) []byte {
s.mu.Lock()
defer s.mu.Unlock()
cfg := trace.NewSpanEndConfig(opts...)
if t := cfg.Timestamp(); !t.IsZero() {
s.span.EndTime = cfg.Timestamp()
} else {
s.span.EndTime = time.Now()
}
b, _ := json.Marshal(s.traces) // TODO: do not ignore this error.
return b
}
// Expected to be implemented in eBPF.
//
//go:noinline
func (*span) ended(buf []byte) { ended(buf) }
// ended is used for testing.
var ended = func([]byte) {}
func (s *span) RecordError(err error, opts ...trace.EventOption) {
if s == nil || err == nil || !s.sampled.Load() {
return
}
cfg := trace.NewEventConfig(opts...)
attrs := cfg.Attributes()
attrs = append(attrs,
semconv.ExceptionType(typeStr(err)),
semconv.ExceptionMessage(err.Error()),
)
if cfg.StackTrace() {
buf := make([]byte, 2048)
n := runtime.Stack(buf, false)
attrs = append(attrs, semconv.ExceptionStacktrace(string(buf[0:n])))
}
s.mu.Lock()
defer s.mu.Unlock()
s.addEvent(semconv.ExceptionEventName, cfg.Timestamp(), attrs)
}
func typeStr(i any) string {
t := reflect.TypeOf(i)
if t.PkgPath() == "" && t.Name() == "" {
// Likely a builtin type.
return t.String()
}
return fmt.Sprintf("%s.%s", t.PkgPath(), t.Name())
}
func (s *span) AddEvent(name string, opts ...trace.EventOption) {
if s == nil || !s.sampled.Load() {
return
}
cfg := trace.NewEventConfig(opts...)
s.mu.Lock()
defer s.mu.Unlock()
s.addEvent(name, cfg.Timestamp(), cfg.Attributes())
}
// addEvent adds an event with name and attrs at tStamp to the span. The span
// lock (s.mu) needs to be held by the caller.
func (s *span) addEvent(name string, tStamp time.Time, attrs []attribute.KeyValue) {
limit := maxSpan.Events
if limit == 0 {
s.span.DroppedEvents++
return
}
if limit > 0 && len(s.span.Events) == limit {
// Drop head while avoiding allocation of more capacity.
copy(s.span.Events[:limit-1], s.span.Events[1:])
s.span.Events = s.span.Events[:limit-1]
s.span.DroppedEvents++
}
e := &telemetry.SpanEvent{Time: tStamp, Name: name}
e.Attrs, e.DroppedAttrs = convCappedAttrs(maxSpan.EventAttrs, attrs)
s.span.Events = append(s.span.Events, e)
}
func (s *span) AddLink(link trace.Link) {
if s == nil || !s.sampled.Load() {
return
}
l := maxSpan.Links
s.mu.Lock()
defer s.mu.Unlock()
if l == 0 {
s.span.DroppedLinks++
return
}
if l > 0 && len(s.span.Links) == l {
// Drop head while avoiding allocation of more capacity.
copy(s.span.Links[:l-1], s.span.Links[1:])
s.span.Links = s.span.Links[:l-1]
s.span.DroppedLinks++
}
s.span.Links = append(s.span.Links, convLink(link))
}
func convLinks(links []trace.Link) []*telemetry.SpanLink {
out := make([]*telemetry.SpanLink, 0, len(links))
for _, link := range links {
out = append(out, convLink(link))
}
return out
}
func convLink(link trace.Link) *telemetry.SpanLink {
l := &telemetry.SpanLink{
TraceID: telemetry.TraceID(link.SpanContext.TraceID()),
SpanID: telemetry.SpanID(link.SpanContext.SpanID()),
TraceState: link.SpanContext.TraceState().String(),
Flags: uint32(link.SpanContext.TraceFlags()),
}
l.Attrs, l.DroppedAttrs = convCappedAttrs(maxSpan.LinkAttrs, link.Attributes)
return l
}
func (s *span) SetName(name string) {
if s == nil || !s.sampled.Load() {
return
}
s.mu.Lock()
defer s.mu.Unlock()
s.span.Name = name
}
func (*span) TracerProvider() trace.TracerProvider { return TracerProvider() }

124
e2e/vendor/go.opentelemetry.io/auto/sdk/tracer.go generated vendored Normal file
View File

@ -0,0 +1,124 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package sdk
import (
"context"
"time"
"go.opentelemetry.io/otel/trace"
"go.opentelemetry.io/otel/trace/noop"
"go.opentelemetry.io/auto/sdk/internal/telemetry"
)
type tracer struct {
noop.Tracer
name, schemaURL, version string
}
var _ trace.Tracer = tracer{}
func (t tracer) Start(ctx context.Context, name string, opts ...trace.SpanStartOption) (context.Context, trace.Span) {
var psc trace.SpanContext
sampled := true
span := new(span)
// Ask eBPF for sampling decision and span context info.
t.start(ctx, span, &psc, &sampled, &span.spanContext)
span.sampled.Store(sampled)
ctx = trace.ContextWithSpan(ctx, span)
if sampled {
// Only build traces if sampled.
cfg := trace.NewSpanStartConfig(opts...)
span.traces, span.span = t.traces(name, cfg, span.spanContext, psc)
}
return ctx, span
}
// Expected to be implemented in eBPF.
//
//go:noinline
func (t *tracer) start(
ctx context.Context,
spanPtr *span,
psc *trace.SpanContext,
sampled *bool,
sc *trace.SpanContext,
) {
start(ctx, spanPtr, psc, sampled, sc)
}
// start is used for testing.
var start = func(context.Context, *span, *trace.SpanContext, *bool, *trace.SpanContext) {}
func (t tracer) traces(name string, cfg trace.SpanConfig, sc, psc trace.SpanContext) (*telemetry.Traces, *telemetry.Span) {
span := &telemetry.Span{
TraceID: telemetry.TraceID(sc.TraceID()),
SpanID: telemetry.SpanID(sc.SpanID()),
Flags: uint32(sc.TraceFlags()),
TraceState: sc.TraceState().String(),
ParentSpanID: telemetry.SpanID(psc.SpanID()),
Name: name,
Kind: spanKind(cfg.SpanKind()),
}
span.Attrs, span.DroppedAttrs = convCappedAttrs(maxSpan.Attrs, cfg.Attributes())
links := cfg.Links()
if limit := maxSpan.Links; limit == 0 {
span.DroppedLinks = uint32(len(links))
} else {
if limit > 0 {
n := max(len(links)-limit, 0)
span.DroppedLinks = uint32(n)
links = links[n:]
}
span.Links = convLinks(links)
}
if t := cfg.Timestamp(); !t.IsZero() {
span.StartTime = cfg.Timestamp()
} else {
span.StartTime = time.Now()
}
return &telemetry.Traces{
ResourceSpans: []*telemetry.ResourceSpans{
{
ScopeSpans: []*telemetry.ScopeSpans{
{
Scope: &telemetry.Scope{
Name: t.name,
Version: t.version,
},
Spans: []*telemetry.Span{span},
SchemaURL: t.schemaURL,
},
},
},
},
}, span
}
func spanKind(kind trace.SpanKind) telemetry.SpanKind {
switch kind {
case trace.SpanKindInternal:
return telemetry.SpanKindInternal
case trace.SpanKindServer:
return telemetry.SpanKindServer
case trace.SpanKindClient:
return telemetry.SpanKindClient
case trace.SpanKindProducer:
return telemetry.SpanKindProducer
case trace.SpanKindConsumer:
return telemetry.SpanKindConsumer
}
return telemetry.SpanKind(0) // undefined.
}

View File

@ -0,0 +1,33 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package sdk
import (
"go.opentelemetry.io/otel/trace"
"go.opentelemetry.io/otel/trace/noop"
)
// TracerProvider returns an auto-instrumentable [trace.TracerProvider].
//
// If an [go.opentelemetry.io/auto.Instrumentation] is configured to instrument
// the process using the returned TracerProvider, all of the telemetry it
// produces will be processed and handled by that Instrumentation. By default,
// if no Instrumentation instruments the TracerProvider it will not generate
// any trace telemetry.
func TracerProvider() trace.TracerProvider { return tracerProviderInstance }
var tracerProviderInstance = new(tracerProvider)
type tracerProvider struct{ noop.TracerProvider }
var _ trace.TracerProvider = tracerProvider{}
func (p tracerProvider) Tracer(name string, opts ...trace.TracerOption) trace.Tracer {
cfg := trace.NewTracerConfig(opts...)
return tracer{
name: name,
version: cfg.InstrumentationVersion(),
schemaURL: cfg.SchemaURL(),
}
}
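For context, a minimal sketch of how application code would obtain a tracer from this provider; the instrumentation scope name and version are placeholders. Without the eBPF auto-instrumentation attached, spans are still built, but their serialized form is discarded by the no-op ended hook shown earlier, so no telemetry is emitted.
package main
import (
	"context"
	"go.opentelemetry.io/auto/sdk"
	"go.opentelemetry.io/otel/trace"
)
func main() {
	tracer := sdk.TracerProvider().Tracer(
		"example/instrumentation", // hypothetical scope name
		trace.WithInstrumentationVersion("v0.1.0"),
	)
	ctx, span := tracer.Start(context.Background(), "do-work",
		trace.WithSpanKind(trace.SpanKindInternal),
	)
	defer span.End()
	_ = ctx // pass ctx on so downstream spans are parented to this one
}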

View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,305 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelgrpc // import "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
import (
"google.golang.org/grpc/stats"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/metric/noop"
"go.opentelemetry.io/otel/propagation"
semconv "go.opentelemetry.io/otel/semconv/v1.17.0"
"go.opentelemetry.io/otel/trace"
)
const (
// ScopeName is the instrumentation scope name.
ScopeName = "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
// GRPCStatusCodeKey is the convention for the numeric status code of a gRPC request.
GRPCStatusCodeKey = attribute.Key("rpc.grpc.status_code")
)
// InterceptorFilter is a predicate used to determine whether a given request,
// described by its interceptor info, should be instrumented. An InterceptorFilter must return true if
// the request should be traced.
//
// Deprecated: Use stats handlers instead.
type InterceptorFilter func(*InterceptorInfo) bool
// Filter is a predicate used to determine whether a given request, described by
// the attached RPC tag info, should be instrumented.
// A Filter must return true if the request should be instrumented.
type Filter func(*stats.RPCTagInfo) bool
// config is a group of options for this instrumentation.
type config struct {
Filter Filter
InterceptorFilter InterceptorFilter
Propagators propagation.TextMapPropagator
TracerProvider trace.TracerProvider
MeterProvider metric.MeterProvider
SpanStartOptions []trace.SpanStartOption
SpanAttributes []attribute.KeyValue
MetricAttributes []attribute.KeyValue
ReceivedEvent bool
SentEvent bool
tracer trace.Tracer
meter metric.Meter
rpcDuration metric.Float64Histogram
rpcInBytes metric.Int64Histogram
rpcOutBytes metric.Int64Histogram
rpcInMessages metric.Int64Histogram
rpcOutMessages metric.Int64Histogram
}
// Option applies an option value for a config.
type Option interface {
apply(*config)
}
// newConfig returns a config configured with all the passed Options.
func newConfig(opts []Option, role string) *config {
c := &config{
Propagators: otel.GetTextMapPropagator(),
TracerProvider: otel.GetTracerProvider(),
MeterProvider: otel.GetMeterProvider(),
}
for _, o := range opts {
o.apply(c)
}
c.tracer = c.TracerProvider.Tracer(
ScopeName,
trace.WithInstrumentationVersion(SemVersion()),
)
c.meter = c.MeterProvider.Meter(
ScopeName,
metric.WithInstrumentationVersion(Version()),
metric.WithSchemaURL(semconv.SchemaURL),
)
var err error
c.rpcDuration, err = c.meter.Float64Histogram("rpc."+role+".duration",
metric.WithDescription("Measures the duration of inbound RPC."),
metric.WithUnit("ms"))
if err != nil {
otel.Handle(err)
if c.rpcDuration == nil {
c.rpcDuration = noop.Float64Histogram{}
}
}
rpcRequestSize, err := c.meter.Int64Histogram("rpc."+role+".request.size",
metric.WithDescription("Measures size of RPC request messages (uncompressed)."),
metric.WithUnit("By"))
if err != nil {
otel.Handle(err)
if rpcRequestSize == nil {
rpcRequestSize = noop.Int64Histogram{}
}
}
rpcResponseSize, err := c.meter.Int64Histogram("rpc."+role+".response.size",
metric.WithDescription("Measures size of RPC response messages (uncompressed)."),
metric.WithUnit("By"))
if err != nil {
otel.Handle(err)
if rpcResponseSize == nil {
rpcResponseSize = noop.Int64Histogram{}
}
}
rpcRequestsPerRPC, err := c.meter.Int64Histogram("rpc."+role+".requests_per_rpc",
metric.WithDescription("Measures the number of messages received per RPC. Should be 1 for all non-streaming RPCs."),
metric.WithUnit("{count}"))
if err != nil {
otel.Handle(err)
if rpcRequestsPerRPC == nil {
rpcRequestsPerRPC = noop.Int64Histogram{}
}
}
rpcResponsesPerRPC, err := c.meter.Int64Histogram("rpc."+role+".responses_per_rpc",
metric.WithDescription("Measures the number of messages received per RPC. Should be 1 for all non-streaming RPCs."),
metric.WithUnit("{count}"))
if err != nil {
otel.Handle(err)
if rpcResponsesPerRPC == nil {
rpcResponsesPerRPC = noop.Int64Histogram{}
}
}
switch role {
case "client":
c.rpcInBytes = rpcResponseSize
c.rpcInMessages = rpcResponsesPerRPC
c.rpcOutBytes = rpcRequestSize
c.rpcOutMessages = rpcRequestsPerRPC
case "server":
c.rpcInBytes = rpcRequestSize
c.rpcInMessages = rpcRequestsPerRPC
c.rpcOutBytes = rpcResponseSize
c.rpcOutMessages = rpcResponsesPerRPC
default:
c.rpcInBytes = noop.Int64Histogram{}
c.rpcInMessages = noop.Int64Histogram{}
c.rpcOutBytes = noop.Int64Histogram{}
c.rpcOutMessages = noop.Int64Histogram{}
}
return c
}
type propagatorsOption struct{ p propagation.TextMapPropagator }
func (o propagatorsOption) apply(c *config) {
if o.p != nil {
c.Propagators = o.p
}
}
// WithPropagators returns an Option to use the Propagators when extracting
// and injecting trace context from requests.
func WithPropagators(p propagation.TextMapPropagator) Option {
return propagatorsOption{p: p}
}
type tracerProviderOption struct{ tp trace.TracerProvider }
func (o tracerProviderOption) apply(c *config) {
if o.tp != nil {
c.TracerProvider = o.tp
}
}
// WithInterceptorFilter returns an Option to use the request filter.
//
// Deprecated: Use stats handlers instead.
func WithInterceptorFilter(f InterceptorFilter) Option {
return interceptorFilterOption{f: f}
}
type interceptorFilterOption struct {
f InterceptorFilter
}
func (o interceptorFilterOption) apply(c *config) {
if o.f != nil {
c.InterceptorFilter = o.f
}
}
// WithFilter returns an Option to use the request filter.
func WithFilter(f Filter) Option {
return filterOption{f: f}
}
type filterOption struct {
f Filter
}
func (o filterOption) apply(c *config) {
if o.f != nil {
c.Filter = o.f
}
}
// WithTracerProvider returns an Option to use the TracerProvider when
// creating a Tracer.
func WithTracerProvider(tp trace.TracerProvider) Option {
return tracerProviderOption{tp: tp}
}
type meterProviderOption struct{ mp metric.MeterProvider }
func (o meterProviderOption) apply(c *config) {
if o.mp != nil {
c.MeterProvider = o.mp
}
}
// WithMeterProvider returns an Option to use the MeterProvider when
// creating a Meter. If this option is not provided, the global MeterProvider will be used.
func WithMeterProvider(mp metric.MeterProvider) Option {
return meterProviderOption{mp: mp}
}
// Event type that can be recorded, see WithMessageEvents.
type Event int
// Different types of events that can be recorded, see WithMessageEvents.
const (
ReceivedEvents Event = iota
SentEvents
)
type messageEventsProviderOption struct {
events []Event
}
func (m messageEventsProviderOption) apply(c *config) {
for _, e := range m.events {
switch e {
case ReceivedEvents:
c.ReceivedEvent = true
case SentEvents:
c.SentEvent = true
}
}
}
// WithMessageEvents configures the Handler to record the specified events
// (span.AddEvent) on spans. By default only summary attributes are added at the
// end of the request.
//
// Valid events are:
// - ReceivedEvents: Record the number of bytes read after every gRPC read operation.
// - SentEvents: Record the number of bytes written after every gRPC write operation.
func WithMessageEvents(events ...Event) Option {
return messageEventsProviderOption{events: events}
}
type spanStartOption struct{ opts []trace.SpanStartOption }
func (o spanStartOption) apply(c *config) {
c.SpanStartOptions = append(c.SpanStartOptions, o.opts...)
}
// WithSpanOptions configures an additional set of
// trace.SpanOptions, which are applied to each new span.
func WithSpanOptions(opts ...trace.SpanStartOption) Option {
return spanStartOption{opts}
}
type spanAttributesOption struct{ a []attribute.KeyValue }
func (o spanAttributesOption) apply(c *config) {
if o.a != nil {
c.SpanAttributes = o.a
}
}
// WithSpanAttributes returns an Option to add custom attributes to the spans.
func WithSpanAttributes(a ...attribute.KeyValue) Option {
return spanAttributesOption{a: a}
}
type metricAttributesOption struct{ a []attribute.KeyValue }
func (o metricAttributesOption) apply(c *config) {
if o.a != nil {
c.MetricAttributes = o.a
}
}
// WithMetricAttributes returns an Option to add custom attributes to the metrics.
func WithMetricAttributes(a ...attribute.KeyValue) Option {
return metricAttributesOption{a: a}
}
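As an illustrative sketch of composing the options above with a handler (NewClientHandler is defined in stats_handler.go, further down in this diff); the attribute value is a placeholder, and the explicit providers shown here are only to make the defaults visible:
package main
import (
	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)
func main() {
	// Explicit providers and extra attributes; by default the otel globals are used.
	handler := otelgrpc.NewClientHandler(
		otelgrpc.WithTracerProvider(otel.GetTracerProvider()),
		otelgrpc.WithMeterProvider(otel.GetMeterProvider()),
		otelgrpc.WithMessageEvents(otelgrpc.ReceivedEvents, otelgrpc.SentEvents),
		otelgrpc.WithSpanAttributes(attribute.String("peer.service", "csi-provisioner")), // placeholder value
	)
	_ = handler
}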

View File

@ -0,0 +1,11 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package otelgrpc is the instrumentation library for [google.golang.org/grpc].
Use [NewClientHandler] with [grpc.WithStatsHandler] to instrument a gRPC client.
Use [NewServerHandler] with [grpc.StatsHandler] to instrument a gRPC server.
*/
package otelgrpc // import "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
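A minimal wiring sketch for the two handlers mentioned above; the target address is a placeholder and insecure credentials are used only to keep the example short.
package main
import (
	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)
func main() {
	// Client: trace and measure outgoing RPCs on this connection.
	conn, err := grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// Server: trace and measure incoming RPCs.
	srv := grpc.NewServer(grpc.StatsHandler(otelgrpc.NewServerHandler()))
	_ = srv
}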

View File

@ -0,0 +1,530 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelgrpc // import "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
// gRPC tracing middleware
// https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/rpc.md
import (
"context"
"errors"
"io"
"net"
"strconv"
"time"
"google.golang.org/grpc"
grpc_codes "google.golang.org/grpc/codes"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/peer"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/proto"
"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc/internal"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/metric"
semconv "go.opentelemetry.io/otel/semconv/v1.17.0"
"go.opentelemetry.io/otel/trace"
)
type messageType attribute.KeyValue
// Event adds an event of the messageType to the span associated with the
// passed context with a message id.
func (m messageType) Event(ctx context.Context, id int, _ interface{}) {
span := trace.SpanFromContext(ctx)
if !span.IsRecording() {
return
}
span.AddEvent("message", trace.WithAttributes(
attribute.KeyValue(m),
RPCMessageIDKey.Int(id),
))
}
var (
messageSent = messageType(RPCMessageTypeSent)
messageReceived = messageType(RPCMessageTypeReceived)
)
// UnaryClientInterceptor returns a grpc.UnaryClientInterceptor suitable
// for use in a grpc.NewClient call.
//
// Deprecated: Use [NewClientHandler] instead.
func UnaryClientInterceptor(opts ...Option) grpc.UnaryClientInterceptor {
cfg := newConfig(opts, "client")
tracer := cfg.TracerProvider.Tracer(
ScopeName,
trace.WithInstrumentationVersion(Version()),
)
return func(
ctx context.Context,
method string,
req, reply interface{},
cc *grpc.ClientConn,
invoker grpc.UnaryInvoker,
callOpts ...grpc.CallOption,
) error {
i := &InterceptorInfo{
Method: method,
Type: UnaryClient,
}
if cfg.InterceptorFilter != nil && !cfg.InterceptorFilter(i) {
return invoker(ctx, method, req, reply, cc, callOpts...)
}
name, attr, _ := telemetryAttributes(method, cc.Target())
startOpts := append([]trace.SpanStartOption{
trace.WithSpanKind(trace.SpanKindClient),
trace.WithAttributes(attr...),
},
cfg.SpanStartOptions...,
)
ctx, span := tracer.Start(
ctx,
name,
startOpts...,
)
defer span.End()
ctx = inject(ctx, cfg.Propagators)
if cfg.SentEvent {
messageSent.Event(ctx, 1, req)
}
err := invoker(ctx, method, req, reply, cc, callOpts...)
if cfg.ReceivedEvent {
messageReceived.Event(ctx, 1, reply)
}
if err != nil {
s, _ := status.FromError(err)
span.SetStatus(codes.Error, s.Message())
span.SetAttributes(statusCodeAttr(s.Code()))
} else {
span.SetAttributes(statusCodeAttr(grpc_codes.OK))
}
return err
}
}
// clientStream wraps around the embedded grpc.ClientStream, and intercepts the RecvMsg and
// SendMsg method call.
type clientStream struct {
grpc.ClientStream
desc *grpc.StreamDesc
span trace.Span
receivedEvent bool
sentEvent bool
receivedMessageID int
sentMessageID int
}
var _ = proto.Marshal
func (w *clientStream) RecvMsg(m interface{}) error {
err := w.ClientStream.RecvMsg(m)
if err == nil && !w.desc.ServerStreams {
w.endSpan(nil)
} else if errors.Is(err, io.EOF) {
w.endSpan(nil)
} else if err != nil {
w.endSpan(err)
} else {
w.receivedMessageID++
if w.receivedEvent {
messageReceived.Event(w.Context(), w.receivedMessageID, m)
}
}
return err
}
func (w *clientStream) SendMsg(m interface{}) error {
err := w.ClientStream.SendMsg(m)
w.sentMessageID++
if w.sentEvent {
messageSent.Event(w.Context(), w.sentMessageID, m)
}
if err != nil {
w.endSpan(err)
}
return err
}
func (w *clientStream) Header() (metadata.MD, error) {
md, err := w.ClientStream.Header()
if err != nil {
w.endSpan(err)
}
return md, err
}
func (w *clientStream) CloseSend() error {
err := w.ClientStream.CloseSend()
if err != nil {
w.endSpan(err)
}
return err
}
func wrapClientStream(s grpc.ClientStream, desc *grpc.StreamDesc, span trace.Span, cfg *config) *clientStream {
return &clientStream{
ClientStream: s,
span: span,
desc: desc,
receivedEvent: cfg.ReceivedEvent,
sentEvent: cfg.SentEvent,
}
}
func (w *clientStream) endSpan(err error) {
if err != nil {
s, _ := status.FromError(err)
w.span.SetStatus(codes.Error, s.Message())
w.span.SetAttributes(statusCodeAttr(s.Code()))
} else {
w.span.SetAttributes(statusCodeAttr(grpc_codes.OK))
}
w.span.End()
}
// StreamClientInterceptor returns a grpc.StreamClientInterceptor suitable
// for use in a grpc.NewClient call.
//
// Deprecated: Use [NewClientHandler] instead.
func StreamClientInterceptor(opts ...Option) grpc.StreamClientInterceptor {
cfg := newConfig(opts, "client")
tracer := cfg.TracerProvider.Tracer(
ScopeName,
trace.WithInstrumentationVersion(Version()),
)
return func(
ctx context.Context,
desc *grpc.StreamDesc,
cc *grpc.ClientConn,
method string,
streamer grpc.Streamer,
callOpts ...grpc.CallOption,
) (grpc.ClientStream, error) {
i := &InterceptorInfo{
Method: method,
Type: StreamClient,
}
if cfg.InterceptorFilter != nil && !cfg.InterceptorFilter(i) {
return streamer(ctx, desc, cc, method, callOpts...)
}
name, attr, _ := telemetryAttributes(method, cc.Target())
startOpts := append([]trace.SpanStartOption{
trace.WithSpanKind(trace.SpanKindClient),
trace.WithAttributes(attr...),
},
cfg.SpanStartOptions...,
)
ctx, span := tracer.Start(
ctx,
name,
startOpts...,
)
ctx = inject(ctx, cfg.Propagators)
s, err := streamer(ctx, desc, cc, method, callOpts...)
if err != nil {
grpcStatus, _ := status.FromError(err)
span.SetStatus(codes.Error, grpcStatus.Message())
span.SetAttributes(statusCodeAttr(grpcStatus.Code()))
span.End()
return s, err
}
stream := wrapClientStream(s, desc, span, cfg)
return stream, nil
}
}
// UnaryServerInterceptor returns a grpc.UnaryServerInterceptor suitable
// for use in a grpc.NewServer call.
//
// Deprecated: Use [NewServerHandler] instead.
func UnaryServerInterceptor(opts ...Option) grpc.UnaryServerInterceptor {
cfg := newConfig(opts, "server")
tracer := cfg.TracerProvider.Tracer(
ScopeName,
trace.WithInstrumentationVersion(Version()),
)
return func(
ctx context.Context,
req interface{},
info *grpc.UnaryServerInfo,
handler grpc.UnaryHandler,
) (interface{}, error) {
i := &InterceptorInfo{
UnaryServerInfo: info,
Type: UnaryServer,
}
if cfg.InterceptorFilter != nil && !cfg.InterceptorFilter(i) {
return handler(ctx, req)
}
ctx = extract(ctx, cfg.Propagators)
name, attr, metricAttrs := telemetryAttributes(info.FullMethod, peerFromCtx(ctx))
startOpts := append([]trace.SpanStartOption{
trace.WithSpanKind(trace.SpanKindServer),
trace.WithAttributes(attr...),
},
cfg.SpanStartOptions...,
)
ctx, span := tracer.Start(
trace.ContextWithRemoteSpanContext(ctx, trace.SpanContextFromContext(ctx)),
name,
startOpts...,
)
defer span.End()
if cfg.ReceivedEvent {
messageReceived.Event(ctx, 1, req)
}
before := time.Now()
resp, err := handler(ctx, req)
s, _ := status.FromError(err)
if err != nil {
statusCode, msg := serverStatus(s)
span.SetStatus(statusCode, msg)
if cfg.SentEvent {
messageSent.Event(ctx, 1, s.Proto())
}
} else {
if cfg.SentEvent {
messageSent.Event(ctx, 1, resp)
}
}
grpcStatusCodeAttr := statusCodeAttr(s.Code())
span.SetAttributes(grpcStatusCodeAttr)
// Use floating point division here for higher precision (instead of Millisecond method).
elapsedTime := float64(time.Since(before)) / float64(time.Millisecond)
metricAttrs = append(metricAttrs, grpcStatusCodeAttr)
cfg.rpcDuration.Record(ctx, elapsedTime, metric.WithAttributeSet(attribute.NewSet(metricAttrs...)))
return resp, err
}
}
// serverStream wraps around the embedded grpc.ServerStream, and intercepts the RecvMsg and
// SendMsg method call.
type serverStream struct {
grpc.ServerStream
ctx context.Context
receivedMessageID int
sentMessageID int
receivedEvent bool
sentEvent bool
}
func (w *serverStream) Context() context.Context {
return w.ctx
}
func (w *serverStream) RecvMsg(m interface{}) error {
err := w.ServerStream.RecvMsg(m)
if err == nil {
w.receivedMessageID++
if w.receivedEvent {
messageReceived.Event(w.Context(), w.receivedMessageID, m)
}
}
return err
}
func (w *serverStream) SendMsg(m interface{}) error {
err := w.ServerStream.SendMsg(m)
w.sentMessageID++
if w.sentEvent {
messageSent.Event(w.Context(), w.sentMessageID, m)
}
return err
}
func wrapServerStream(ctx context.Context, ss grpc.ServerStream, cfg *config) *serverStream {
return &serverStream{
ServerStream: ss,
ctx: ctx,
receivedEvent: cfg.ReceivedEvent,
sentEvent: cfg.SentEvent,
}
}
// StreamServerInterceptor returns a grpc.StreamServerInterceptor suitable
// for use in a grpc.NewServer call.
//
// Deprecated: Use [NewServerHandler] instead.
func StreamServerInterceptor(opts ...Option) grpc.StreamServerInterceptor {
cfg := newConfig(opts, "server")
tracer := cfg.TracerProvider.Tracer(
ScopeName,
trace.WithInstrumentationVersion(Version()),
)
return func(
srv interface{},
ss grpc.ServerStream,
info *grpc.StreamServerInfo,
handler grpc.StreamHandler,
) error {
ctx := ss.Context()
i := &InterceptorInfo{
StreamServerInfo: info,
Type: StreamServer,
}
if cfg.InterceptorFilter != nil && !cfg.InterceptorFilter(i) {
return handler(srv, wrapServerStream(ctx, ss, cfg))
}
ctx = extract(ctx, cfg.Propagators)
name, attr, _ := telemetryAttributes(info.FullMethod, peerFromCtx(ctx))
startOpts := append([]trace.SpanStartOption{
trace.WithSpanKind(trace.SpanKindServer),
trace.WithAttributes(attr...),
},
cfg.SpanStartOptions...,
)
ctx, span := tracer.Start(
trace.ContextWithRemoteSpanContext(ctx, trace.SpanContextFromContext(ctx)),
name,
startOpts...,
)
defer span.End()
err := handler(srv, wrapServerStream(ctx, ss, cfg))
if err != nil {
s, _ := status.FromError(err)
statusCode, msg := serverStatus(s)
span.SetStatus(statusCode, msg)
span.SetAttributes(statusCodeAttr(s.Code()))
} else {
span.SetAttributes(statusCodeAttr(grpc_codes.OK))
}
return err
}
}
// telemetryAttributes returns a span name and span and metric attributes from
// the gRPC method and peer address.
func telemetryAttributes(fullMethod, peerAddress string) (string, []attribute.KeyValue, []attribute.KeyValue) {
name, methodAttrs := internal.ParseFullMethod(fullMethod)
peerAttrs := peerAttr(peerAddress)
attrs := make([]attribute.KeyValue, 0, 1+len(methodAttrs)+len(peerAttrs))
attrs = append(attrs, RPCSystemGRPC)
attrs = append(attrs, methodAttrs...)
metricAttrs := attrs[:1+len(methodAttrs)]
attrs = append(attrs, peerAttrs...)
return name, attrs, metricAttrs
}
// peerAttr returns attributes about the peer address.
func peerAttr(addr string) []attribute.KeyValue {
host, p, err := net.SplitHostPort(addr)
if err != nil {
return nil
}
if host == "" {
host = "127.0.0.1"
}
port, err := strconv.Atoi(p)
if err != nil {
return nil
}
var attr []attribute.KeyValue
if ip := net.ParseIP(host); ip != nil {
attr = []attribute.KeyValue{
semconv.NetSockPeerAddr(host),
semconv.NetSockPeerPort(port),
}
} else {
attr = []attribute.KeyValue{
semconv.NetPeerName(host),
semconv.NetPeerPort(port),
}
}
return attr
}
// peerFromCtx returns a peer address from a context, if one exists.
func peerFromCtx(ctx context.Context) string {
p, ok := peer.FromContext(ctx)
if !ok {
return ""
}
return p.Addr.String()
}
// statusCodeAttr returns a status code attribute based on the given gRPC code.
func statusCodeAttr(c grpc_codes.Code) attribute.KeyValue {
return GRPCStatusCodeKey.Int64(int64(c))
}
// serverStatus returns a span status code and message for a given gRPC
// status code. It maps specific gRPC status codes to a corresponding span
// status code and message. This function is intended for use on the server
// side of a gRPC connection.
//
// If the gRPC status code is Unknown, DeadlineExceeded, Unimplemented,
// Internal, Unavailable, or DataLoss, it returns a span status code of Error
// and the message from the gRPC status. Otherwise, it returns a span status
// code of Unset and an empty message.
func serverStatus(grpcStatus *status.Status) (codes.Code, string) {
switch grpcStatus.Code() {
case grpc_codes.Unknown,
grpc_codes.DeadlineExceeded,
grpc_codes.Unimplemented,
grpc_codes.Internal,
grpc_codes.Unavailable,
grpc_codes.DataLoss:
return codes.Error, grpcStatus.Message()
default:
return codes.Unset, ""
}
}
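A small hypothetical test sketch of the mapping described above: server-side Internal errors mark the span as Error with the gRPC message, while codes such as NotFound leave the span status Unset.
package otelgrpc
import (
	"testing"
	grpc_codes "google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"go.opentelemetry.io/otel/codes"
)
// TestServerStatusSketch is a hypothetical sketch, not part of the vendored code.
func TestServerStatusSketch(t *testing.T) {
	c, msg := serverStatus(status.New(grpc_codes.Internal, "boom"))
	if c != codes.Error || msg != "boom" {
		t.Errorf("Internal: got (%v, %q), want (Error, \"boom\")", c, msg)
	}
	c, msg = serverStatus(status.New(grpc_codes.NotFound, "missing"))
	if c != codes.Unset || msg != "" {
		t.Errorf("NotFound: got (%v, %q), want (Unset, \"\")", c, msg)
	}
}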

View File

@ -0,0 +1,39 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelgrpc // import "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
import (
"google.golang.org/grpc"
)
// InterceptorType is the flag to define which gRPC interceptor
// the InterceptorInfo object is.
type InterceptorType uint8
const (
// UndefinedInterceptor is the type for interceptor information that is not
// properly initialized or does not fall into any of the other categories.
UndefinedInterceptor InterceptorType = iota
// UnaryClient is the type for grpc.UnaryClient interceptor.
UnaryClient
// StreamClient is the type for grpc.StreamClient interceptor.
StreamClient
// UnaryServer is the type for grpc.UnaryServer interceptor.
UnaryServer
// StreamServer is the type for grpc.StreamServer interceptor.
StreamServer
)
// InterceptorInfo is the union of some arguments to four types of
// gRPC interceptors.
type InterceptorInfo struct {
// Method is the method name registered to UnaryClient and StreamClient
Method string
// UnaryServerInfo is the metadata for UnaryServer
UnaryServerInfo *grpc.UnaryServerInfo
// StreamServerInfo is the metadata for StreamServer
StreamServerInfo *grpc.StreamServerInfo
// Type is the type for interceptor
Type InterceptorType
}

View File

@ -0,0 +1,40 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package internal // import "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc/internal"
import (
"strings"
"go.opentelemetry.io/otel/attribute"
semconv "go.opentelemetry.io/otel/semconv/v1.17.0"
)
// ParseFullMethod returns a span name following the OpenTelemetry semantic
// conventions as well as all applicable span attribute.KeyValue attributes based
// on a gRPC's FullMethod.
//
// Parsing is consistent with grpc-go implementation:
// https://github.com/grpc/grpc-go/blob/v1.57.0/internal/grpcutil/method.go#L26-L39
func ParseFullMethod(fullMethod string) (string, []attribute.KeyValue) {
if !strings.HasPrefix(fullMethod, "/") {
// Invalid format, does not follow `/package.service/method`.
return fullMethod, nil
}
name := fullMethod[1:]
pos := strings.LastIndex(name, "/")
if pos < 0 {
// Invalid format, does not follow `/package.service/method`.
return name, nil
}
service, method := name[:pos], name[pos+1:]
var attrs []attribute.KeyValue
if service != "" {
attrs = append(attrs, semconv.RPCService(service))
}
if method != "" {
attrs = append(attrs, semconv.RPCMethod(method))
}
return name, attrs
}
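As a sketch of the parsing above (a hypothetical test in this internal package; the full method name is just an example):
package internal
import "testing"
// TestParseFullMethodSketch is a hypothetical sketch of ParseFullMethod.
func TestParseFullMethodSketch(t *testing.T) {
	name, attrs := ParseFullMethod("/csi.v1.Controller/CreateVolume")
	if name != "csi.v1.Controller/CreateVolume" {
		t.Errorf("unexpected span name %q", name)
	}
	// One rpc.service and one rpc.method attribute are expected.
	if len(attrs) != 2 {
		t.Errorf("expected 2 attributes, got %d", len(attrs))
	}
	// Input that does not follow `/package.service/method` is returned as-is
	// with no attributes.
	name, attrs = ParseFullMethod("not-a-method")
	if name != "not-a-method" || attrs != nil {
		t.Errorf("unexpected result %q, %v", name, attrs)
	}
}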

View File

@ -0,0 +1,87 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelgrpc // import "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
import (
"context"
"google.golang.org/grpc/metadata"
"go.opentelemetry.io/otel/baggage"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/trace"
)
type metadataSupplier struct {
metadata *metadata.MD
}
// assert that metadataSupplier implements the TextMapCarrier interface.
var _ propagation.TextMapCarrier = &metadataSupplier{}
func (s *metadataSupplier) Get(key string) string {
values := s.metadata.Get(key)
if len(values) == 0 {
return ""
}
return values[0]
}
func (s *metadataSupplier) Set(key string, value string) {
s.metadata.Set(key, value)
}
func (s *metadataSupplier) Keys() []string {
out := make([]string, 0, len(*s.metadata))
for key := range *s.metadata {
out = append(out, key)
}
return out
}
// Inject injects correlation context and span context into the gRPC
// metadata object. This function is meant to be used on outgoing
// requests.
// Deprecated: Unnecessary public func.
func Inject(ctx context.Context, md *metadata.MD, opts ...Option) {
c := newConfig(opts, "")
c.Propagators.Inject(ctx, &metadataSupplier{
metadata: md,
})
}
func inject(ctx context.Context, propagators propagation.TextMapPropagator) context.Context {
md, ok := metadata.FromOutgoingContext(ctx)
if !ok {
md = metadata.MD{}
}
propagators.Inject(ctx, &metadataSupplier{
metadata: &md,
})
return metadata.NewOutgoingContext(ctx, md)
}
// Extract returns the correlation context and span context that
// another service encoded in the gRPC metadata object with Inject.
// This function is meant to be used on incoming requests.
// Deprecated: Unnecessary public func.
func Extract(ctx context.Context, md *metadata.MD, opts ...Option) (baggage.Baggage, trace.SpanContext) {
c := newConfig(opts, "")
ctx = c.Propagators.Extract(ctx, &metadataSupplier{
metadata: md,
})
return baggage.FromContext(ctx), trace.SpanContextFromContext(ctx)
}
func extract(ctx context.Context, propagators propagation.TextMapPropagator) context.Context {
md, ok := metadata.FromIncomingContext(ctx)
if !ok {
md = metadata.MD{}
}
return propagators.Extract(ctx, &metadataSupplier{
metadata: &md,
})
}
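For illustration, a short sketch of how trace context ends up in outgoing gRPC metadata. It uses the deprecated exported Inject for brevity, a manually constructed span context, and the W3C trace-context propagator; all of these are choices of this example, not requirements of the package, and the stats handlers perform this injection automatically.
package main
import (
	"context"
	"fmt"
	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
	"go.opentelemetry.io/otel/trace"
	"google.golang.org/grpc/metadata"
)
func main() {
	// The propagator decides which metadata keys are written (here: traceparent).
	otel.SetTextMapPropagator(propagation.TraceContext{})
	sc := trace.NewSpanContext(trace.SpanContextConfig{
		TraceID:    trace.TraceID{0x01},
		SpanID:     trace.SpanID{0x01},
		TraceFlags: trace.FlagsSampled,
	})
	ctx := trace.ContextWithSpanContext(context.Background(), sc)
	md := metadata.MD{}
	otelgrpc.Inject(ctx, &md) // Deprecated; stats handlers do this automatically.
	fmt.Println(md.Get("traceparent"))
}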

View File

@ -0,0 +1,41 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelgrpc // import "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
import (
"go.opentelemetry.io/otel/attribute"
semconv "go.opentelemetry.io/otel/semconv/v1.17.0"
)
// Semantic conventions for attribute keys for gRPC.
const (
// Name of message transmitted or received.
RPCNameKey = attribute.Key("name")
// Type of message transmitted or received.
RPCMessageTypeKey = attribute.Key("message.type")
// Identifier of message transmitted or received.
RPCMessageIDKey = attribute.Key("message.id")
// The compressed size of the message transmitted or received in bytes.
RPCMessageCompressedSizeKey = attribute.Key("message.compressed_size")
// The uncompressed size of the message transmitted or received in
// bytes.
RPCMessageUncompressedSizeKey = attribute.Key("message.uncompressed_size")
)
// Semantic conventions for common RPC attributes.
var (
// Semantic convention for gRPC as the remoting system.
RPCSystemGRPC = semconv.RPCSystemGRPC
// Semantic convention for a message named message.
RPCNameMessage = RPCNameKey.String("message")
// Semantic conventions for RPC message types.
RPCMessageTypeSent = RPCMessageTypeKey.String("SENT")
RPCMessageTypeReceived = RPCMessageTypeKey.String("RECEIVED")
)

View File

@ -0,0 +1,223 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelgrpc // import "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
import (
"context"
"sync/atomic"
"time"
grpc_codes "google.golang.org/grpc/codes"
"google.golang.org/grpc/peer"
"google.golang.org/grpc/stats"
"google.golang.org/grpc/status"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/metric"
semconv "go.opentelemetry.io/otel/semconv/v1.17.0"
"go.opentelemetry.io/otel/trace"
"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc/internal"
)
type gRPCContextKey struct{}
type gRPCContext struct {
inMessages int64
outMessages int64
metricAttrs []attribute.KeyValue
record bool
}
type serverHandler struct {
*config
}
// NewServerHandler creates a stats.Handler for a gRPC server.
func NewServerHandler(opts ...Option) stats.Handler {
h := &serverHandler{
config: newConfig(opts, "server"),
}
return h
}
// TagConn can attach some information to the given context.
func (h *serverHandler) TagConn(ctx context.Context, info *stats.ConnTagInfo) context.Context {
return ctx
}
// HandleConn processes the Conn stats.
func (h *serverHandler) HandleConn(ctx context.Context, info stats.ConnStats) {
}
// TagRPC can attach some information to the given context.
func (h *serverHandler) TagRPC(ctx context.Context, info *stats.RPCTagInfo) context.Context {
ctx = extract(ctx, h.config.Propagators)
name, attrs := internal.ParseFullMethod(info.FullMethodName)
attrs = append(attrs, RPCSystemGRPC)
ctx, _ = h.tracer.Start(
trace.ContextWithRemoteSpanContext(ctx, trace.SpanContextFromContext(ctx)),
name,
trace.WithSpanKind(trace.SpanKindServer),
trace.WithAttributes(append(attrs, h.config.SpanAttributes...)...),
)
gctx := gRPCContext{
metricAttrs: append(attrs, h.config.MetricAttributes...),
record: true,
}
if h.config.Filter != nil {
gctx.record = h.config.Filter(info)
}
return context.WithValue(ctx, gRPCContextKey{}, &gctx)
}
// HandleRPC processes the RPC stats.
func (h *serverHandler) HandleRPC(ctx context.Context, rs stats.RPCStats) {
isServer := true
h.handleRPC(ctx, rs, isServer)
}
type clientHandler struct {
*config
}
// NewClientHandler creates a stats.Handler for a gRPC client.
func NewClientHandler(opts ...Option) stats.Handler {
h := &clientHandler{
config: newConfig(opts, "client"),
}
return h
}
// TagRPC can attach some information to the given context.
func (h *clientHandler) TagRPC(ctx context.Context, info *stats.RPCTagInfo) context.Context {
name, attrs := internal.ParseFullMethod(info.FullMethodName)
attrs = append(attrs, RPCSystemGRPC)
ctx, _ = h.tracer.Start(
ctx,
name,
trace.WithSpanKind(trace.SpanKindClient),
trace.WithAttributes(append(attrs, h.config.SpanAttributes...)...),
)
gctx := gRPCContext{
metricAttrs: append(attrs, h.config.MetricAttributes...),
record: true,
}
if h.config.Filter != nil {
gctx.record = h.config.Filter(info)
}
return inject(context.WithValue(ctx, gRPCContextKey{}, &gctx), h.config.Propagators)
}
// HandleRPC processes the RPC stats.
func (h *clientHandler) HandleRPC(ctx context.Context, rs stats.RPCStats) {
isServer := false
h.handleRPC(ctx, rs, isServer)
}
// TagConn can attach some information to the given context.
func (h *clientHandler) TagConn(ctx context.Context, info *stats.ConnTagInfo) context.Context {
return ctx
}
// HandleConn processes the Conn stats.
func (h *clientHandler) HandleConn(context.Context, stats.ConnStats) {
// no-op
}
func (c *config) handleRPC(ctx context.Context, rs stats.RPCStats, isServer bool) { // nolint: revive // isServer is not a control flag.
span := trace.SpanFromContext(ctx)
var metricAttrs []attribute.KeyValue
var messageId int64
gctx, _ := ctx.Value(gRPCContextKey{}).(*gRPCContext)
if gctx != nil {
if !gctx.record {
return
}
metricAttrs = make([]attribute.KeyValue, 0, len(gctx.metricAttrs)+1)
metricAttrs = append(metricAttrs, gctx.metricAttrs...)
}
switch rs := rs.(type) {
case *stats.Begin:
case *stats.InPayload:
if gctx != nil {
messageId = atomic.AddInt64(&gctx.inMessages, 1)
c.rpcInBytes.Record(ctx, int64(rs.Length), metric.WithAttributeSet(attribute.NewSet(metricAttrs...)))
}
if c.ReceivedEvent {
span.AddEvent("message",
trace.WithAttributes(
semconv.MessageTypeReceived,
semconv.MessageIDKey.Int64(messageId),
semconv.MessageCompressedSizeKey.Int(rs.CompressedLength),
semconv.MessageUncompressedSizeKey.Int(rs.Length),
),
)
}
case *stats.OutPayload:
if gctx != nil {
messageId = atomic.AddInt64(&gctx.outMessages, 1)
c.rpcOutBytes.Record(ctx, int64(rs.Length), metric.WithAttributeSet(attribute.NewSet(metricAttrs...)))
}
if c.SentEvent {
span.AddEvent("message",
trace.WithAttributes(
semconv.MessageTypeSent,
semconv.MessageIDKey.Int64(messageId),
semconv.MessageCompressedSizeKey.Int(rs.CompressedLength),
semconv.MessageUncompressedSizeKey.Int(rs.Length),
),
)
}
case *stats.OutTrailer:
case *stats.OutHeader:
if p, ok := peer.FromContext(ctx); ok {
span.SetAttributes(peerAttr(p.Addr.String())...)
}
case *stats.End:
var rpcStatusAttr attribute.KeyValue
if rs.Error != nil {
s, _ := status.FromError(rs.Error)
if isServer {
statusCode, msg := serverStatus(s)
span.SetStatus(statusCode, msg)
} else {
span.SetStatus(codes.Error, s.Message())
}
rpcStatusAttr = semconv.RPCGRPCStatusCodeKey.Int(int(s.Code()))
} else {
rpcStatusAttr = semconv.RPCGRPCStatusCodeKey.Int(int(grpc_codes.OK))
}
span.SetAttributes(rpcStatusAttr)
span.End()
metricAttrs = append(metricAttrs, rpcStatusAttr)
// Allocate vararg slice once.
recordOpts := []metric.RecordOption{metric.WithAttributeSet(attribute.NewSet(metricAttrs...))}
// Use floating point division here for higher precision (instead of Millisecond method).
// Measure right before calling Record() to capture as much elapsed time as possible.
elapsedTime := float64(rs.EndTime.Sub(rs.BeginTime)) / float64(time.Millisecond)
c.rpcDuration.Record(ctx, elapsedTime, recordOpts...)
if gctx != nil {
c.rpcInMessages.Record(ctx, atomic.LoadInt64(&gctx.inMessages), recordOpts...)
c.rpcOutMessages.Record(ctx, atomic.LoadInt64(&gctx.outMessages), recordOpts...)
}
default:
return
}
}

View File

@ -0,0 +1,17 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelgrpc // import "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
// Version is the current release version of the gRPC instrumentation.
func Version() string {
return "0.58.0"
// This string is updated by the pre_release.sh script during release
}
// SemVersion is the semantic version to be supplied to tracer/meter creation.
//
// Deprecated: Use [Version] instead.
func SemVersion() string {
return Version()
}

View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,50 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"io"
"net/http"
"net/url"
"strings"
)
// DefaultClient is the default Client and is used by Get, Head, Post and PostForm.
// Please be careful of initialization order - for example, if you change
// the global propagator, the DefaultClient might still be using the old one.
var DefaultClient = &http.Client{Transport: NewTransport(http.DefaultTransport)}
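// newIsolatedClient is an illustrative sketch (not part of the upstream file):
// callers that do not want to share the package-level DefaultClient can build
// their own *http.Client with an instrumented Transport. The function name is
// an assumption for the example.
func newIsolatedClient() *http.Client {
	return &http.Client{Transport: NewTransport(http.DefaultTransport)}
}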
// Get is a convenient replacement for http.Get that adds a span around the request.
func Get(ctx context.Context, targetURL string) (resp *http.Response, err error) {
req, err := http.NewRequestWithContext(ctx, "GET", targetURL, nil)
if err != nil {
return nil, err
}
return DefaultClient.Do(req)
}
// Head is a convenient replacement for http.Head that adds a span around the request.
func Head(ctx context.Context, targetURL string) (resp *http.Response, err error) {
req, err := http.NewRequestWithContext(ctx, "HEAD", targetURL, nil)
if err != nil {
return nil, err
}
return DefaultClient.Do(req)
}
// Post is a convenient replacement for http.Post that adds a span around the request.
func Post(ctx context.Context, targetURL, contentType string, body io.Reader) (resp *http.Response, err error) {
req, err := http.NewRequestWithContext(ctx, "POST", targetURL, body)
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", contentType)
return DefaultClient.Do(req)
}
// PostForm is a convenient replacement for http.PostForm that adds a span around the request.
func PostForm(ctx context.Context, targetURL string, data url.Values) (resp *http.Response, err error) {
return Post(ctx, targetURL, "application/x-www-form-urlencoded", strings.NewReader(data.Encode()))
}
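// fetchExample is an illustrative sketch (not part of the upstream file)
// showing the helpers above in use; the URL is a placeholder and error
// handling is intentionally minimal.
func fetchExample(ctx context.Context) error {
	resp, err := Get(ctx, "https://example.com/status")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// Drain the body so the underlying connection can be reused.
	_, err = io.Copy(io.Discard, resp.Body)
	return err
}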

View File

@ -0,0 +1,41 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"net/http"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
)
// Attribute keys that can be added to a span.
const (
ReadBytesKey = attribute.Key("http.read_bytes") // if anything was read from the request body, the total number of bytes read
ReadErrorKey = attribute.Key("http.read_error") // if an error occurred while reading a request, the string of the error (io.EOF is not recorded)
WroteBytesKey = attribute.Key("http.wrote_bytes") // if anything was written to the response writer, the total number of bytes written
WriteErrorKey = attribute.Key("http.write_error") // if an error occurred while writing a reply, the string of the error (io.EOF is not recorded)
)
// Server HTTP metrics.
const (
serverRequestSize = "http.server.request.size" // Incoming request bytes total
serverResponseSize = "http.server.response.size" // Incoming response bytes total
serverDuration = "http.server.duration" // Incoming end to end duration, milliseconds
)
// Client HTTP metrics.
const (
clientRequestSize = "http.client.request.size" // Outgoing request bytes total
clientResponseSize = "http.client.response.size" // Outgoing response bytes total
clientDuration = "http.client.duration" // Outgoing end to end duration, milliseconds
)
// Filter is a predicate used to determine whether a given http.Request should
// be traced. A Filter must return true if the request should be traced.
type Filter func(*http.Request) bool
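// skipHealthChecks is an illustrative Filter (an editorial sketch, not part of
// the upstream file): it excludes health-check traffic from tracing. The path
// and the variable name are assumptions.
var skipHealthChecks Filter = func(r *http.Request) bool {
	// Return true to trace the request, false to skip it.
	return r.URL.Path != "/healthz"
}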
func newTracer(tp trace.TracerProvider) trace.Tracer {
return tp.Tracer(ScopeName, trace.WithInstrumentationVersion(Version()))
}

View File

@ -0,0 +1,196 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"net/http"
"net/http/httptrace"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/trace"
)
// ScopeName is the instrumentation scope name.
const ScopeName = "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
// config represents the configuration options available for the http.Handler
// and http.Transport types.
type config struct {
ServerName string
Tracer trace.Tracer
Meter metric.Meter
Propagators propagation.TextMapPropagator
SpanStartOptions []trace.SpanStartOption
PublicEndpoint bool
PublicEndpointFn func(*http.Request) bool
ReadEvent bool
WriteEvent bool
Filters []Filter
SpanNameFormatter func(string, *http.Request) string
ClientTrace func(context.Context) *httptrace.ClientTrace
TracerProvider trace.TracerProvider
MeterProvider metric.MeterProvider
}
// Option interface used for setting optional config properties.
type Option interface {
apply(*config)
}
type optionFunc func(*config)
func (o optionFunc) apply(c *config) {
o(c)
}
// newConfig creates a new config struct and applies opts to it.
func newConfig(opts ...Option) *config {
c := &config{
Propagators: otel.GetTextMapPropagator(),
MeterProvider: otel.GetMeterProvider(),
}
for _, opt := range opts {
opt.apply(c)
}
// Tracer is only initialized if manually specified. Otherwise, it can be passed with the tracing context.
if c.TracerProvider != nil {
c.Tracer = newTracer(c.TracerProvider)
}
c.Meter = c.MeterProvider.Meter(
ScopeName,
metric.WithInstrumentationVersion(Version()),
)
return c
}
// WithTracerProvider specifies a tracer provider to use for creating a tracer.
// If none is specified, the global provider is used.
func WithTracerProvider(provider trace.TracerProvider) Option {
return optionFunc(func(cfg *config) {
if provider != nil {
cfg.TracerProvider = provider
}
})
}
// WithMeterProvider specifies a meter provider to use for creating a meter.
// If none is specified, the global provider is used.
func WithMeterProvider(provider metric.MeterProvider) Option {
return optionFunc(func(cfg *config) {
if provider != nil {
cfg.MeterProvider = provider
}
})
}
// WithPublicEndpoint configures the Handler to link the span with an incoming
// span context. If this option is not provided, then the association is a child
// association instead of a link.
func WithPublicEndpoint() Option {
return optionFunc(func(c *config) {
c.PublicEndpoint = true
})
}
// WithPublicEndpointFn runs with every request, and allows conditionally
// configuring the Handler to link the span with an incoming span context. If
// this option is not provided or returns false, then the association is a
// child association instead of a link.
// Note: WithPublicEndpoint takes precedence over WithPublicEndpointFn.
func WithPublicEndpointFn(fn func(*http.Request) bool) Option {
return optionFunc(func(c *config) {
c.PublicEndpointFn = fn
})
}
// WithPropagators configures specific propagators. If this
// option isn't specified, then the global TextMapPropagator is used.
func WithPropagators(ps propagation.TextMapPropagator) Option {
return optionFunc(func(c *config) {
if ps != nil {
c.Propagators = ps
}
})
}
// WithSpanOptions configures an additional set of
// trace.SpanOptions, which are applied to each new span.
func WithSpanOptions(opts ...trace.SpanStartOption) Option {
return optionFunc(func(c *config) {
c.SpanStartOptions = append(c.SpanStartOptions, opts...)
})
}
// WithFilter adds a filter to the list of filters used by the handler.
// If any filter indicates to exclude a request then the request will not be
// traced. All filters must allow a request to be traced for a Span to be created.
// If no filters are provided then all requests are traced.
// Filters will be invoked for each processed request; it is advised to make them
// simple and fast.
func WithFilter(f Filter) Option {
return optionFunc(func(c *config) {
c.Filters = append(c.Filters, f)
})
}
type event int
// Different types of events that can be recorded, see WithMessageEvents.
const (
ReadEvents event = iota
WriteEvents
)
// WithMessageEvents configures the Handler to record the specified events
// (span.AddEvent) on spans. By default only summary attributes are added at the
// end of the request.
//
// Valid events are:
// - ReadEvents: Record the number of bytes read after every http.Request.Body.Read
// using the ReadBytesKey
// - WriteEvents: Record the number of bytes written after every http.ResponseWriter.Write
// using the WroteBytesKey
func WithMessageEvents(events ...event) Option {
return optionFunc(func(c *config) {
for _, e := range events {
switch e {
case ReadEvents:
c.ReadEvent = true
case WriteEvents:
c.WriteEvent = true
}
}
})
}
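// exampleMessageEvents is an illustrative sketch (not part of the upstream
// file): enabling both read and write events on a wrapped handler. The
// operation name is an assumption.
func exampleMessageEvents(next http.Handler) http.Handler {
	return NewHandler(next, "http.server",
		WithMessageEvents(ReadEvents, WriteEvents),
	)
}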
// WithSpanNameFormatter takes a function that will be called on every
// request and the returned string will become the Span Name.
func WithSpanNameFormatter(f func(operation string, r *http.Request) string) Option {
return optionFunc(func(c *config) {
c.SpanNameFormatter = f
})
}
// WithClientTrace takes a function that returns client trace instance that will be
// applied to the requests sent through the otelhttp Transport.
func WithClientTrace(f func(context.Context) *httptrace.ClientTrace) Option {
return optionFunc(func(c *config) {
c.ClientTrace = f
})
}
// WithServerName returns an Option that sets the name of the (virtual) server
// handling requests.
func WithServerName(server string) Option {
return optionFunc(func(c *config) {
c.ServerName = server
})
}

View File

@ -0,0 +1,7 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package otelhttp provides an http.Handler and functions that are intended
// to be used to add tracing by wrapping existing handlers (with Handler) and
// routes (with WithRouteTag).
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"

View File

@ -0,0 +1,258 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"net/http"
"time"
"github.com/felixge/httpsnoop"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconv"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/trace"
)
// middleware is an http middleware which wraps the next handler in a span.
type middleware struct {
operation string
server string
tracer trace.Tracer
meter metric.Meter
propagators propagation.TextMapPropagator
spanStartOptions []trace.SpanStartOption
readEvent bool
writeEvent bool
filters []Filter
spanNameFormatter func(string, *http.Request) string
publicEndpoint bool
publicEndpointFn func(*http.Request) bool
traceSemconv semconv.HTTPServer
requestBytesCounter metric.Int64Counter
responseBytesCounter metric.Int64Counter
serverLatencyMeasure metric.Float64Histogram
}
func defaultHandlerFormatter(operation string, _ *http.Request) string {
return operation
}
// NewHandler wraps the passed handler in a span named after the operation and
// enriches it with metrics.
func NewHandler(handler http.Handler, operation string, opts ...Option) http.Handler {
return NewMiddleware(operation, opts...)(handler)
}
// NewMiddleware returns a tracing and metrics instrumentation middleware.
// The handler returned by the middleware wraps a handler
// in a span named after the operation and enriches it with metrics.
func NewMiddleware(operation string, opts ...Option) func(http.Handler) http.Handler {
h := middleware{
operation: operation,
traceSemconv: semconv.NewHTTPServer(),
}
defaultOpts := []Option{
WithSpanOptions(trace.WithSpanKind(trace.SpanKindServer)),
WithSpanNameFormatter(defaultHandlerFormatter),
}
c := newConfig(append(defaultOpts, opts...)...)
h.configure(c)
h.createMeasures()
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
h.serveHTTP(w, r, next)
})
}
}
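// exampleServeMux is an illustrative sketch (not part of the upstream file):
// wrapping an existing mux once so every request gets a server span and the
// request/response size and duration metrics. The path, the operation name,
// and the handler body are assumptions.
func exampleServeMux() http.Handler {
	mux := http.NewServeMux()
	mux.HandleFunc("/hello", func(w http.ResponseWriter, _ *http.Request) {
		_, _ = w.Write([]byte("hello"))
	})
	return NewMiddleware("http.server")(mux)
}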
func (h *middleware) configure(c *config) {
h.tracer = c.Tracer
h.meter = c.Meter
h.propagators = c.Propagators
h.spanStartOptions = c.SpanStartOptions
h.readEvent = c.ReadEvent
h.writeEvent = c.WriteEvent
h.filters = c.Filters
h.spanNameFormatter = c.SpanNameFormatter
h.publicEndpoint = c.PublicEndpoint
h.publicEndpointFn = c.PublicEndpointFn
h.server = c.ServerName
}
func handleErr(err error) {
if err != nil {
otel.Handle(err)
}
}
func (h *middleware) createMeasures() {
var err error
h.requestBytesCounter, err = h.meter.Int64Counter(
serverRequestSize,
metric.WithUnit("By"),
metric.WithDescription("Measures the size of HTTP request messages."),
)
handleErr(err)
h.responseBytesCounter, err = h.meter.Int64Counter(
serverResponseSize,
metric.WithUnit("By"),
metric.WithDescription("Measures the size of HTTP response messages."),
)
handleErr(err)
h.serverLatencyMeasure, err = h.meter.Float64Histogram(
serverDuration,
metric.WithUnit("ms"),
metric.WithDescription("Measures the duration of inbound HTTP requests."),
)
handleErr(err)
}
// serveHTTP sets up tracing and calls the given next http.Handler with the span
// context injected into the request context.
func (h *middleware) serveHTTP(w http.ResponseWriter, r *http.Request, next http.Handler) {
requestStartTime := time.Now()
for _, f := range h.filters {
if !f(r) {
// Simply pass through to the handler if a filter rejects the request
next.ServeHTTP(w, r)
return
}
}
ctx := h.propagators.Extract(r.Context(), propagation.HeaderCarrier(r.Header))
opts := []trace.SpanStartOption{
trace.WithAttributes(h.traceSemconv.RequestTraceAttrs(h.server, r)...),
}
opts = append(opts, h.spanStartOptions...)
if h.publicEndpoint || (h.publicEndpointFn != nil && h.publicEndpointFn(r.WithContext(ctx))) {
opts = append(opts, trace.WithNewRoot())
// Link the incoming span context, if any, for a public endpoint.
if s := trace.SpanContextFromContext(ctx); s.IsValid() && s.IsRemote() {
opts = append(opts, trace.WithLinks(trace.Link{SpanContext: s}))
}
}
tracer := h.tracer
if tracer == nil {
if span := trace.SpanFromContext(r.Context()); span.SpanContext().IsValid() {
tracer = newTracer(span.TracerProvider())
} else {
tracer = newTracer(otel.GetTracerProvider())
}
}
ctx, span := tracer.Start(ctx, h.spanNameFormatter(h.operation, r), opts...)
defer span.End()
readRecordFunc := func(int64) {}
if h.readEvent {
readRecordFunc = func(n int64) {
span.AddEvent("read", trace.WithAttributes(ReadBytesKey.Int64(n)))
}
}
var bw bodyWrapper
// If the request body is nil or http.NoBody, don't wrap it: callers may
// compare the body against nil or http.NoBody, or assert its concrete
// ReadCloser type, and replacing it would change that identity.
if r.Body != nil && r.Body != http.NoBody {
bw.ReadCloser = r.Body
bw.record = readRecordFunc
r.Body = &bw
}
writeRecordFunc := func(int64) {}
if h.writeEvent {
writeRecordFunc = func(n int64) {
span.AddEvent("write", trace.WithAttributes(WroteBytesKey.Int64(n)))
}
}
rww := &respWriterWrapper{
ResponseWriter: w,
record: writeRecordFunc,
ctx: ctx,
props: h.propagators,
statusCode: http.StatusOK, // default status code in case the Handler doesn't write anything
}
// Wrap w to use our ResponseWriter methods while also exposing
// other interfaces that w may implement (http.CloseNotifier,
// http.Flusher, http.Hijacker, http.Pusher, io.ReaderFrom).
w = httpsnoop.Wrap(w, httpsnoop.Hooks{
Header: func(httpsnoop.HeaderFunc) httpsnoop.HeaderFunc {
return rww.Header
},
Write: func(httpsnoop.WriteFunc) httpsnoop.WriteFunc {
return rww.Write
},
WriteHeader: func(httpsnoop.WriteHeaderFunc) httpsnoop.WriteHeaderFunc {
return rww.WriteHeader
},
Flush: func(httpsnoop.FlushFunc) httpsnoop.FlushFunc {
return rww.Flush
},
})
labeler, found := LabelerFromContext(ctx)
if !found {
ctx = ContextWithLabeler(ctx, labeler)
}
next.ServeHTTP(w, r.WithContext(ctx))
span.SetStatus(semconv.ServerStatus(rww.statusCode))
span.SetAttributes(h.traceSemconv.ResponseTraceAttrs(semconv.ResponseTelemetry{
StatusCode: rww.statusCode,
ReadBytes: bw.read.Load(),
ReadError: bw.err,
WriteBytes: rww.written,
WriteError: rww.err,
})...)
// Add metrics
attributes := append(labeler.Get(), semconvutil.HTTPServerRequestMetrics(h.server, r)...)
if rww.statusCode > 0 {
attributes = append(attributes, semconv.HTTPStatusCode(rww.statusCode))
}
o := metric.WithAttributeSet(attribute.NewSet(attributes...))
addOpts := []metric.AddOption{o} // Allocate vararg slice once.
h.requestBytesCounter.Add(ctx, bw.read.Load(), addOpts...)
h.responseBytesCounter.Add(ctx, rww.written, addOpts...)
// Use floating point division here for higher precision (instead of Millisecond method).
elapsedTime := float64(time.Since(requestStartTime)) / float64(time.Millisecond)
h.serverLatencyMeasure.Record(ctx, elapsedTime, o)
}
// WithRouteTag annotates spans and metrics with the provided route name
// with HTTP route attribute.
func WithRouteTag(route string, h http.Handler) http.Handler {
attr := semconv.NewHTTPServer().Route(route)
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
span := trace.SpanFromContext(r.Context())
span.SetAttributes(attr)
labeler, _ := LabelerFromContext(r.Context())
labeler.Add(attr)
h.ServeHTTP(w, r)
})
}
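// exampleRouteTag is an illustrative sketch (not part of the upstream file):
// tagging one route so its span and metric attributes carry the route name.
// The route pattern and handler are assumptions.
func exampleRouteTag(mux *http.ServeMux) {
	mux.Handle("/users/", WithRouteTag("/users/:id", http.HandlerFunc(
		func(w http.ResponseWriter, _ *http.Request) {
			w.WriteHeader(http.StatusOK)
		},
	)))
}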

View File

@ -0,0 +1,82 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconv // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconv"
import (
"fmt"
"net/http"
"os"
"strings"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
)
type ResponseTelemetry struct {
StatusCode int
ReadBytes int64
ReadError error
WriteBytes int64
WriteError error
}
type HTTPServer struct {
duplicate bool
}
// RequestTraceAttrs returns trace attributes for an HTTP request received by a
// server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
func (s HTTPServer) RequestTraceAttrs(server string, req *http.Request) []attribute.KeyValue {
if s.duplicate {
return append(oldHTTPServer{}.RequestTraceAttrs(server, req), newHTTPServer{}.RequestTraceAttrs(server, req)...)
}
return oldHTTPServer{}.RequestTraceAttrs(server, req)
}
// ResponseTraceAttrs returns trace attributes for telemetry from an HTTP response.
//
// If any of the fields in the ResponseTelemetry are not set the attribute will be omitted.
func (s HTTPServer) ResponseTraceAttrs(resp ResponseTelemetry) []attribute.KeyValue {
if s.duplicate {
return append(oldHTTPServer{}.ResponseTraceAttrs(resp), newHTTPServer{}.ResponseTraceAttrs(resp)...)
}
return oldHTTPServer{}.ResponseTraceAttrs(resp)
}
// Route returns the attribute for the route.
func (s HTTPServer) Route(route string) attribute.KeyValue {
return oldHTTPServer{}.Route(route)
}
func NewHTTPServer() HTTPServer {
env := strings.ToLower(os.Getenv("OTEL_HTTP_CLIENT_COMPATIBILITY_MODE"))
return HTTPServer{duplicate: env == "http/dup"}
}
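// Illustrative opt-in (an editorial note, not part of the upstream file):
// the code above only checks for the value "http/dup"; with the variable set
// in the process environment, e.g.
//
//	OTEL_HTTP_CLIENT_COMPATIBILITY_MODE=http/dup
//
// the server emits both the old and the new HTTP semantic conventions; any
// other value keeps only the old ones.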
// ServerStatus returns a span status code and message for an HTTP status code
// value returned by a server. Status codes in the 400-499 range are not
// returned as errors.
func ServerStatus(code int) (codes.Code, string) {
if code < 100 || code >= 600 {
return codes.Error, fmt.Sprintf("Invalid HTTP status code %d", code)
}
if code >= 500 {
return codes.Error, ""
}
return codes.Unset, ""
}
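// Illustrative mapping (an editorial sketch, not part of the upstream file):
//
//	ServerStatus(200) // codes.Unset, ""
//	ServerStatus(404) // codes.Unset, "" (4xx is not a server-side error)
//	ServerStatus(503) // codes.Error, ""
//	ServerStatus(42)  // codes.Error, "Invalid HTTP status code 42"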

View File

@ -0,0 +1,91 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconv // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconv"
import (
"net"
"net/http"
"strconv"
"strings"
"go.opentelemetry.io/otel/attribute"
semconvNew "go.opentelemetry.io/otel/semconv/v1.24.0"
)
// splitHostPort splits a network address hostport of the form "host",
// "host%zone", "[host]", "[host%zone], "host:port", "host%zone:port",
// "[host]:port", "[host%zone]:port", or ":port" into host or host%zone and
// port.
//
// An empty host is returned if it is not provided or unparsable. A negative
// port is returned if it is not provided or unparsable.
func splitHostPort(hostport string) (host string, port int) {
port = -1
if strings.HasPrefix(hostport, "[") {
addrEnd := strings.LastIndex(hostport, "]")
if addrEnd < 0 {
// Invalid hostport.
return
}
if i := strings.LastIndex(hostport[addrEnd:], ":"); i < 0 {
host = hostport[1:addrEnd]
return
}
} else {
if i := strings.LastIndex(hostport, ":"); i < 0 {
host = hostport
return
}
}
host, pStr, err := net.SplitHostPort(hostport)
if err != nil {
return
}
p, err := strconv.ParseUint(pStr, 10, 16)
if err != nil {
return
}
return host, int(p)
}
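// Illustrative behaviour (an editorial sketch, not part of the upstream file):
//
//	splitHostPort("example.com:8080") // "example.com", 8080
//	splitHostPort("example.com")      // "example.com", -1
//	splitHostPort("[::1]:9090")       // "::1", 9090
//	splitHostPort(":8080")            // "", 8080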
func requiredHTTPPort(https bool, port int) int { // nolint:revive
if https {
if port > 0 && port != 443 {
return port
}
} else {
if port > 0 && port != 80 {
return port
}
}
return -1
}
func serverClientIP(xForwardedFor string) string {
if idx := strings.Index(xForwardedFor, ","); idx >= 0 {
xForwardedFor = xForwardedFor[:idx]
}
return xForwardedFor
}
func netProtocol(proto string) (name string, version string) {
name, version, _ = strings.Cut(proto, "/")
name = strings.ToLower(name)
return name, version
}
var methodLookup = map[string]attribute.KeyValue{
http.MethodConnect: semconvNew.HTTPRequestMethodConnect,
http.MethodDelete: semconvNew.HTTPRequestMethodDelete,
http.MethodGet: semconvNew.HTTPRequestMethodGet,
http.MethodHead: semconvNew.HTTPRequestMethodHead,
http.MethodOptions: semconvNew.HTTPRequestMethodOptions,
http.MethodPatch: semconvNew.HTTPRequestMethodPatch,
http.MethodPost: semconvNew.HTTPRequestMethodPost,
http.MethodPut: semconvNew.HTTPRequestMethodPut,
http.MethodTrace: semconvNew.HTTPRequestMethodTrace,
}

View File

@ -0,0 +1,74 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconv // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconv"
import (
"errors"
"io"
"net/http"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
"go.opentelemetry.io/otel/attribute"
semconv "go.opentelemetry.io/otel/semconv/v1.20.0"
)
type oldHTTPServer struct{}
// RequestTraceAttrs returns trace attributes for an HTTP request received by a
// server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
func (o oldHTTPServer) RequestTraceAttrs(server string, req *http.Request) []attribute.KeyValue {
return semconvutil.HTTPServerRequest(server, req)
}
// ResponseTraceAttrs returns trace attributes for telemetry from an HTTP response.
//
// If any of the fields in the ResponseTelemetry are not set the attribute will be omitted.
func (o oldHTTPServer) ResponseTraceAttrs(resp ResponseTelemetry) []attribute.KeyValue {
attributes := []attribute.KeyValue{}
if resp.ReadBytes > 0 {
attributes = append(attributes, semconv.HTTPRequestContentLength(int(resp.ReadBytes)))
}
if resp.ReadError != nil && !errors.Is(resp.ReadError, io.EOF) {
// This is not in the semantic conventions, but is historically provided
attributes = append(attributes, attribute.String("http.read_error", resp.ReadError.Error()))
}
if resp.WriteBytes > 0 {
attributes = append(attributes, semconv.HTTPResponseContentLength(int(resp.WriteBytes)))
}
if resp.StatusCode > 0 {
attributes = append(attributes, semconv.HTTPStatusCode(resp.StatusCode))
}
if resp.WriteError != nil && !errors.Is(resp.WriteError, io.EOF) {
// This is not in the semantic conventions, but is historically provided
attributes = append(attributes, attribute.String("http.write_error", resp.WriteError.Error()))
}
return attributes
}
// Route returns the attribute for the route.
func (o oldHTTPServer) Route(route string) attribute.KeyValue {
return semconv.HTTPRoute(route)
}
// HTTPStatusCode returns the attribute for the HTTP status code.
// This is a temporary function needed by metrics. This will be removed when MetricsRequest is added.
func HTTPStatusCode(status int) attribute.KeyValue {
return semconv.HTTPStatusCode(status)
}

View File

@ -0,0 +1,197 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconv // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconv"
import (
"net/http"
"strings"
"go.opentelemetry.io/otel/attribute"
semconvNew "go.opentelemetry.io/otel/semconv/v1.24.0"
)
type newHTTPServer struct{}
// TraceRequest returns trace attributes for an HTTP request received by a
// server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
func (n newHTTPServer) RequestTraceAttrs(server string, req *http.Request) []attribute.KeyValue {
count := 3 // ServerAddress, Method, Scheme
var host string
var p int
if server == "" {
host, p = splitHostPort(req.Host)
} else {
// Prioritize the primary server name.
host, p = splitHostPort(server)
if p < 0 {
_, p = splitHostPort(req.Host)
}
}
hostPort := requiredHTTPPort(req.TLS != nil, p)
if hostPort > 0 {
count++
}
method, methodOriginal := n.method(req.Method)
if methodOriginal != (attribute.KeyValue{}) {
count++
}
scheme := n.scheme(req.TLS != nil)
if peer, peerPort := splitHostPort(req.RemoteAddr); peer != "" {
// The Go HTTP server sets RemoteAddr to "IP:port"; this will not be a
// file-path that would be interpreted with a sock family.
count++
if peerPort > 0 {
count++
}
}
useragent := req.UserAgent()
if useragent != "" {
count++
}
clientIP := serverClientIP(req.Header.Get("X-Forwarded-For"))
if clientIP != "" {
count++
}
if req.URL != nil && req.URL.Path != "" {
count++
}
protoName, protoVersion := netProtocol(req.Proto)
if protoName != "" && protoName != "http" {
count++
}
if protoVersion != "" {
count++
}
attrs := make([]attribute.KeyValue, 0, count)
attrs = append(attrs,
semconvNew.ServerAddress(host),
method,
scheme,
)
if hostPort > 0 {
attrs = append(attrs, semconvNew.ServerPort(hostPort))
}
if methodOriginal != (attribute.KeyValue{}) {
attrs = append(attrs, methodOriginal)
}
if peer, peerPort := splitHostPort(req.RemoteAddr); peer != "" {
// The Go HTTP server sets RemoteAddr to "IP:port"; this will not be a
// file-path that would be interpreted with a sock family.
attrs = append(attrs, semconvNew.NetworkPeerAddress(peer))
if peerPort > 0 {
attrs = append(attrs, semconvNew.NetworkPeerPort(peerPort))
}
}
if useragent := req.UserAgent(); useragent != "" {
attrs = append(attrs, semconvNew.UserAgentOriginal(useragent))
}
if clientIP != "" {
attrs = append(attrs, semconvNew.ClientAddress(clientIP))
}
if req.URL != nil && req.URL.Path != "" {
attrs = append(attrs, semconvNew.URLPath(req.URL.Path))
}
if protoName != "" && protoName != "http" {
attrs = append(attrs, semconvNew.NetworkProtocolName(protoName))
}
if protoVersion != "" {
attrs = append(attrs, semconvNew.NetworkProtocolVersion(protoVersion))
}
return attrs
}
func (n newHTTPServer) method(method string) (attribute.KeyValue, attribute.KeyValue) {
if method == "" {
return semconvNew.HTTPRequestMethodGet, attribute.KeyValue{}
}
if attr, ok := methodLookup[method]; ok {
return attr, attribute.KeyValue{}
}
orig := semconvNew.HTTPRequestMethodOriginal(method)
if attr, ok := methodLookup[strings.ToUpper(method)]; ok {
return attr, orig
}
return semconvNew.HTTPRequestMethodGet, orig
}
func (n newHTTPServer) scheme(https bool) attribute.KeyValue { // nolint:revive
if https {
return semconvNew.URLScheme("https")
}
return semconvNew.URLScheme("http")
}
// TraceResponse returns trace attributes for telemetry from an HTTP response.
//
// If any of the fields in the ResponseTelemetry are not set the attribute will be omitted.
func (n newHTTPServer) ResponseTraceAttrs(resp ResponseTelemetry) []attribute.KeyValue {
var count int
if resp.ReadBytes > 0 {
count++
}
if resp.WriteBytes > 0 {
count++
}
if resp.StatusCode > 0 {
count++
}
attributes := make([]attribute.KeyValue, 0, count)
if resp.ReadBytes > 0 {
attributes = append(attributes,
semconvNew.HTTPRequestBodySize(int(resp.ReadBytes)),
)
}
if resp.WriteBytes > 0 {
attributes = append(attributes,
semconvNew.HTTPResponseBodySize(int(resp.WriteBytes)),
)
}
if resp.StatusCode > 0 {
attributes = append(attributes,
semconvNew.HTTPResponseStatusCode(resp.StatusCode),
)
}
return attributes
}
// Route returns the attribute for the route.
func (n newHTTPServer) Route(route string) attribute.KeyValue {
return semconvNew.HTTPRoute(route)
}

View File

@ -0,0 +1,10 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconvutil // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
// Generate semconvutil package:
//go:generate gotmpl --body=../../../../../../internal/shared/semconvutil/httpconv_test.go.tmpl "--data={}" --out=httpconv_test.go
//go:generate gotmpl --body=../../../../../../internal/shared/semconvutil/httpconv.go.tmpl "--data={}" --out=httpconv.go
//go:generate gotmpl --body=../../../../../../internal/shared/semconvutil/netconv_test.go.tmpl "--data={}" --out=netconv_test.go
//go:generate gotmpl --body=../../../../../../internal/shared/semconvutil/netconv.go.tmpl "--data={}" --out=netconv.go

View File

@ -0,0 +1,575 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/semconvutil/httpconv.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconvutil // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
import (
"fmt"
"net/http"
"strings"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
semconv "go.opentelemetry.io/otel/semconv/v1.20.0"
)
// HTTPClientResponse returns trace attributes for an HTTP response received by a
// client from a server. It will return the following attributes if the related
// values are defined in resp: "http.status_code",
// "http.response_content_length".
//
// This does not add all OpenTelemetry required attributes for an HTTP event;
// it assumes ClientRequest was used to create the span with a complete set of
// attributes. A complete set of attributes can be generated using the
// request contained in resp. For example:
//
// append(HTTPClientResponse(resp), ClientRequest(resp.Request)...)
func HTTPClientResponse(resp *http.Response) []attribute.KeyValue {
return hc.ClientResponse(resp)
}
// HTTPClientRequest returns trace attributes for an HTTP request made by a client.
// The following attributes are always returned: "http.url", "http.method",
// "net.peer.name". The following attributes are returned if the related values
// are defined in req: "net.peer.port", "user_agent.original",
// "http.request_content_length".
func HTTPClientRequest(req *http.Request) []attribute.KeyValue {
return hc.ClientRequest(req)
}
// HTTPClientRequestMetrics returns metric attributes for an HTTP request made by a client.
// The following attributes are always returned: "http.method", "net.peer.name".
// The following attributes are returned if the
// related values are defined in req: "net.peer.port".
func HTTPClientRequestMetrics(req *http.Request) []attribute.KeyValue {
return hc.ClientRequestMetrics(req)
}
// HTTPClientStatus returns a span status code and message for an HTTP status code
// value received by a client.
func HTTPClientStatus(code int) (codes.Code, string) {
return hc.ClientStatus(code)
}
// HTTPServerRequest returns trace attributes for an HTTP request received by a
// server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
//
// The following attributes are always returned: "http.method", "http.scheme",
// "http.target", "net.host.name". The following attributes are returned if
// their related values are defined in req: "net.host.port", "net.sock.peer.addr",
// "net.sock.peer.port", "user_agent.original", "http.client_ip".
func HTTPServerRequest(server string, req *http.Request) []attribute.KeyValue {
return hc.ServerRequest(server, req)
}
// HTTPServerRequestMetrics returns metric attributes for an HTTP request received by a
// server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
//
// The following attributes are always returned: "http.method", "http.scheme",
// "net.host.name". The following attributes are returned if they related
// values are defined in req: "net.host.port".
func HTTPServerRequestMetrics(server string, req *http.Request) []attribute.KeyValue {
return hc.ServerRequestMetrics(server, req)
}
// HTTPServerStatus returns a span status code and message for an HTTP status code
// value returned by a server. Status codes in the 400-499 range are not
// returned as errors.
func HTTPServerStatus(code int) (codes.Code, string) {
return hc.ServerStatus(code)
}
// httpConv are the HTTP semantic convention attributes defined for a version
// of the OpenTelemetry specification.
type httpConv struct {
NetConv *netConv
HTTPClientIPKey attribute.Key
HTTPMethodKey attribute.Key
HTTPRequestContentLengthKey attribute.Key
HTTPResponseContentLengthKey attribute.Key
HTTPRouteKey attribute.Key
HTTPSchemeHTTP attribute.KeyValue
HTTPSchemeHTTPS attribute.KeyValue
HTTPStatusCodeKey attribute.Key
HTTPTargetKey attribute.Key
HTTPURLKey attribute.Key
UserAgentOriginalKey attribute.Key
}
var hc = &httpConv{
NetConv: nc,
HTTPClientIPKey: semconv.HTTPClientIPKey,
HTTPMethodKey: semconv.HTTPMethodKey,
HTTPRequestContentLengthKey: semconv.HTTPRequestContentLengthKey,
HTTPResponseContentLengthKey: semconv.HTTPResponseContentLengthKey,
HTTPRouteKey: semconv.HTTPRouteKey,
HTTPSchemeHTTP: semconv.HTTPSchemeHTTP,
HTTPSchemeHTTPS: semconv.HTTPSchemeHTTPS,
HTTPStatusCodeKey: semconv.HTTPStatusCodeKey,
HTTPTargetKey: semconv.HTTPTargetKey,
HTTPURLKey: semconv.HTTPURLKey,
UserAgentOriginalKey: semconv.UserAgentOriginalKey,
}
// ClientResponse returns attributes for an HTTP response received by a client
// from a server. The following attributes are returned if the related values
// are defined in resp: "http.status_code", "http.response_content_length".
//
// This does not add all OpenTelemetry required attributes for an HTTP event;
// it assumes ClientRequest was used to create the span with a complete set of
// attributes. A complete set of attributes can be generated using the
// request contained in resp. For example:
//
// append(ClientResponse(resp), ClientRequest(resp.Request)...)
func (c *httpConv) ClientResponse(resp *http.Response) []attribute.KeyValue {
/* The following semantic conventions are returned if present:
http.status_code int
http.response_content_length int
*/
var n int
if resp.StatusCode > 0 {
n++
}
if resp.ContentLength > 0 {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
if resp.StatusCode > 0 {
attrs = append(attrs, c.HTTPStatusCodeKey.Int(resp.StatusCode))
}
if resp.ContentLength > 0 {
attrs = append(attrs, c.HTTPResponseContentLengthKey.Int(int(resp.ContentLength)))
}
return attrs
}
// ClientRequest returns attributes for an HTTP request made by a client. The
// following attributes are always returned: "http.url", "http.method",
// "net.peer.name". The following attributes are returned if the related values
// are defined in req: "net.peer.port", "user_agent.original",
// "http.request_content_length".
func (c *httpConv) ClientRequest(req *http.Request) []attribute.KeyValue {
/* The following semantic conventions are returned if present:
http.method string
user_agent.original string
http.url string
net.peer.name string
net.peer.port int
http.request_content_length int
*/
/* The following semantic conventions are not returned:
http.status_code This requires the response. See ClientResponse.
http.response_content_length This requires the response. See ClientResponse.
net.sock.family This requires the socket used.
net.sock.peer.addr This requires the socket used.
net.sock.peer.name This requires the socket used.
net.sock.peer.port This requires the socket used.
http.resend_count This is something outside of a single request.
net.protocol.name The value in the Request is ignored, and the Go client will always use "http".
net.protocol.version The value in the Request is ignored, and the Go client will always use 1.1 or 2.0.
*/
n := 3 // URL, peer name, and method.
var h string
if req.URL != nil {
h = req.URL.Host
}
peer, p := firstHostPort(h, req.Header.Get("Host"))
port := requiredHTTPPort(req.URL != nil && req.URL.Scheme == "https", p)
if port > 0 {
n++
}
useragent := req.UserAgent()
if useragent != "" {
n++
}
if req.ContentLength > 0 {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.method(req.Method))
var u string
if req.URL != nil {
// Remove any username/password info that may be in the URL.
userinfo := req.URL.User
req.URL.User = nil
u = req.URL.String()
// Restore any username/password info that was removed.
req.URL.User = userinfo
}
attrs = append(attrs, c.HTTPURLKey.String(u))
attrs = append(attrs, c.NetConv.PeerName(peer))
if port > 0 {
attrs = append(attrs, c.NetConv.PeerPort(port))
}
if useragent != "" {
attrs = append(attrs, c.UserAgentOriginalKey.String(useragent))
}
if l := req.ContentLength; l > 0 {
attrs = append(attrs, c.HTTPRequestContentLengthKey.Int64(l))
}
return attrs
}
// ClientRequestMetrics returns metric attributes for an HTTP request made by a client. The
// following attributes are always returned: "http.method", "net.peer.name".
// The following attributes are returned if the related values
// are defined in req: "net.peer.port".
func (c *httpConv) ClientRequestMetrics(req *http.Request) []attribute.KeyValue {
/* The following semantic conventions are returned if present:
http.method string
net.peer.name string
net.peer.port int
*/
n := 2 // method, peer name.
var h string
if req.URL != nil {
h = req.URL.Host
}
peer, p := firstHostPort(h, req.Header.Get("Host"))
port := requiredHTTPPort(req.URL != nil && req.URL.Scheme == "https", p)
if port > 0 {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.method(req.Method), c.NetConv.PeerName(peer))
if port > 0 {
attrs = append(attrs, c.NetConv.PeerPort(port))
}
return attrs
}
// ServerRequest returns attributes for an HTTP request received by a server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
//
// The following attributes are always returned: "http.method", "http.scheme",
// "http.target", "net.host.name". The following attributes are returned if they
// related values are defined in req: "net.host.port", "net.sock.peer.addr",
// "net.sock.peer.port", "user_agent.original", "http.client_ip",
// "net.protocol.name", "net.protocol.version".
func (c *httpConv) ServerRequest(server string, req *http.Request) []attribute.KeyValue {
/* The following semantic conventions are returned if present:
http.method string
http.scheme string
net.host.name string
net.host.port int
net.sock.peer.addr string
net.sock.peer.port int
user_agent.original string
http.client_ip string
net.protocol.name string Note: not set if the value is "http".
net.protocol.version string
http.target string Note: doesn't include the query parameter.
*/
/* The following semantic conventions are not returned:
http.status_code This requires the response.
http.request_content_length This requires the len() of body, which can mutate it.
http.response_content_length This requires the response.
http.route This is not available.
net.sock.peer.name This would require a DNS lookup.
net.sock.host.addr The request doesn't have access to the underlying socket.
net.sock.host.port The request doesn't have access to the underlying socket.
*/
n := 4 // Method, scheme, proto, and host name.
var host string
var p int
if server == "" {
host, p = splitHostPort(req.Host)
} else {
// Prioritize the primary server name.
host, p = splitHostPort(server)
if p < 0 {
_, p = splitHostPort(req.Host)
}
}
hostPort := requiredHTTPPort(req.TLS != nil, p)
if hostPort > 0 {
n++
}
peer, peerPort := splitHostPort(req.RemoteAddr)
if peer != "" {
n++
if peerPort > 0 {
n++
}
}
useragent := req.UserAgent()
if useragent != "" {
n++
}
clientIP := serverClientIP(req.Header.Get("X-Forwarded-For"))
if clientIP != "" {
n++
}
var target string
if req.URL != nil {
target = req.URL.Path
if target != "" {
n++
}
}
protoName, protoVersion := netProtocol(req.Proto)
if protoName != "" && protoName != "http" {
n++
}
if protoVersion != "" {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.method(req.Method))
attrs = append(attrs, c.scheme(req.TLS != nil))
attrs = append(attrs, c.NetConv.HostName(host))
if hostPort > 0 {
attrs = append(attrs, c.NetConv.HostPort(hostPort))
}
if peer != "" {
// The Go HTTP server sets RemoteAddr to "IP:port"; this will not be a
// file-path that would be interpreted with a sock family.
attrs = append(attrs, c.NetConv.SockPeerAddr(peer))
if peerPort > 0 {
attrs = append(attrs, c.NetConv.SockPeerPort(peerPort))
}
}
if useragent != "" {
attrs = append(attrs, c.UserAgentOriginalKey.String(useragent))
}
if clientIP != "" {
attrs = append(attrs, c.HTTPClientIPKey.String(clientIP))
}
if target != "" {
attrs = append(attrs, c.HTTPTargetKey.String(target))
}
if protoName != "" && protoName != "http" {
attrs = append(attrs, c.NetConv.NetProtocolName.String(protoName))
}
if protoVersion != "" {
attrs = append(attrs, c.NetConv.NetProtocolVersion.String(protoVersion))
}
return attrs
}
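// Illustrative note (not part of the upstream file): for a request such as
// "GET /cart HTTP/1.1" received over TLS with Host "example.com:8443",
// ServerRequest("example.com:8443", req) would be expected to return attributes
// along the lines of http.method=GET, http.scheme=https,
// net.host.name=example.com, net.host.port=8443, and http.target=/cart, plus
// net.protocol.version=1.1 (net.protocol.name is omitted because the name is
// "http"). Peer, user-agent, and client-IP attributes are added only when the
// corresponding request fields are present.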
// ServerRequestMetrics returns metric attributes for an HTTP request received
// by a server.
//
// The server must be the primary server name if it is known. For example this
// would be the ServerName directive
// (https://httpd.apache.org/docs/2.4/mod/core.html#servername) for an Apache
// server, and the server_name directive
// (http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name) for an
// nginx server. More generically, the primary server name would be the host
// header value that matches the default virtual host of an HTTP server. It
// should include the host identifier and if a port is used to route to the
// server that port identifier should be included as an appropriate port
// suffix.
//
// If the primary server name is not known, server should be an empty string.
// The req Host will be used to determine the server instead.
//
// The following attributes are always returned: "http.method", "http.scheme",
// "net.host.name". The following attributes are returned if they related
// values are defined in req: "net.host.port".
func (c *httpConv) ServerRequestMetrics(server string, req *http.Request) []attribute.KeyValue {
/* The following semantic conventions are returned if present:
http.scheme string
http.route string
http.method string
http.status_code int
net.host.name string
net.host.port int
net.protocol.name string Note: not set if the value is "http".
net.protocol.version string
*/
n := 3 // Method, scheme, and host name.
var host string
var p int
if server == "" {
host, p = splitHostPort(req.Host)
} else {
// Prioritize the primary server name.
host, p = splitHostPort(server)
if p < 0 {
_, p = splitHostPort(req.Host)
}
}
hostPort := requiredHTTPPort(req.TLS != nil, p)
if hostPort > 0 {
n++
}
protoName, protoVersion := netProtocol(req.Proto)
if protoName != "" {
n++
}
if protoVersion != "" {
n++
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.methodMetric(req.Method))
attrs = append(attrs, c.scheme(req.TLS != nil))
attrs = append(attrs, c.NetConv.HostName(host))
if hostPort > 0 {
attrs = append(attrs, c.NetConv.HostPort(hostPort))
}
if protoName != "" {
attrs = append(attrs, c.NetConv.NetProtocolName.String(protoName))
}
if protoVersion != "" {
attrs = append(attrs, c.NetConv.NetProtocolVersion.String(protoVersion))
}
return attrs
}
func (c *httpConv) method(method string) attribute.KeyValue {
if method == "" {
return c.HTTPMethodKey.String(http.MethodGet)
}
return c.HTTPMethodKey.String(method)
}
func (c *httpConv) methodMetric(method string) attribute.KeyValue {
method = strings.ToUpper(method)
switch method {
case http.MethodConnect, http.MethodDelete, http.MethodGet, http.MethodHead, http.MethodOptions, http.MethodPatch, http.MethodPost, http.MethodPut, http.MethodTrace:
default:
method = "_OTHER"
}
return c.HTTPMethodKey.String(method)
}
func (c *httpConv) scheme(https bool) attribute.KeyValue { // nolint:revive
if https {
return c.HTTPSchemeHTTPS
}
return c.HTTPSchemeHTTP
}
func serverClientIP(xForwardedFor string) string {
if idx := strings.Index(xForwardedFor, ","); idx >= 0 {
xForwardedFor = xForwardedFor[:idx]
}
return xForwardedFor
}
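// Illustrative example (not part of the upstream file):
//
//	serverClientIP("203.0.113.7, 198.51.100.2") == "203.0.113.7"
//
// Only the first (client-most) entry of X-Forwarded-For is kept.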
func requiredHTTPPort(https bool, port int) int { // nolint:revive
if https {
if port > 0 && port != 443 {
return port
}
} else {
if port > 0 && port != 80 {
return port
}
}
return -1
}
// Return the request host and port from the first non-empty source.
func firstHostPort(source ...string) (host string, port int) {
for _, hostport := range source {
host, port = splitHostPort(hostport)
if host != "" || port > 0 {
break
}
}
return
}
// ClientStatus returns a span status code and message for an HTTP status code
// value received by a client.
func (c *httpConv) ClientStatus(code int) (codes.Code, string) {
if code < 100 || code >= 600 {
return codes.Error, fmt.Sprintf("Invalid HTTP status code %d", code)
}
if code >= 400 {
return codes.Error, ""
}
return codes.Unset, ""
}
// ServerStatus returns a span status code and message for an HTTP status code
// value returned by a server. Status codes in the 400-499 range are not
// returned as errors.
func (c *httpConv) ServerStatus(code int) (codes.Code, string) {
if code < 100 || code >= 600 {
return codes.Error, fmt.Sprintf("Invalid HTTP status code %d", code)
}
if code >= 500 {
return codes.Error, ""
}
return codes.Unset, ""
}
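// Illustrative examples (not part of the upstream file):
//
//	ClientStatus(200) -> (codes.Unset, "")   ClientStatus(404) -> (codes.Error, "")
//	ServerStatus(404) -> (codes.Unset, "")   ServerStatus(503) -> (codes.Error, "")
//
// Any code outside 100-599 yields codes.Error with an "Invalid HTTP status code" message.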

View File

@ -0,0 +1,205 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/semconvutil/netconv.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package semconvutil // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
import (
"net"
"strconv"
"strings"
"go.opentelemetry.io/otel/attribute"
semconv "go.opentelemetry.io/otel/semconv/v1.20.0"
)
// NetTransport returns a trace attribute describing the transport protocol of the
// passed network. See net.Dial for information about acceptable network
// values.
func NetTransport(network string) attribute.KeyValue {
return nc.Transport(network)
}
// netConv are the network semantic convention attributes defined for a version
// of the OpenTelemetry specification.
type netConv struct {
NetHostNameKey attribute.Key
NetHostPortKey attribute.Key
NetPeerNameKey attribute.Key
NetPeerPortKey attribute.Key
NetProtocolName attribute.Key
NetProtocolVersion attribute.Key
NetSockFamilyKey attribute.Key
NetSockPeerAddrKey attribute.Key
NetSockPeerPortKey attribute.Key
NetSockHostAddrKey attribute.Key
NetSockHostPortKey attribute.Key
NetTransportOther attribute.KeyValue
NetTransportTCP attribute.KeyValue
NetTransportUDP attribute.KeyValue
NetTransportInProc attribute.KeyValue
}
var nc = &netConv{
NetHostNameKey: semconv.NetHostNameKey,
NetHostPortKey: semconv.NetHostPortKey,
NetPeerNameKey: semconv.NetPeerNameKey,
NetPeerPortKey: semconv.NetPeerPortKey,
NetProtocolName: semconv.NetProtocolNameKey,
NetProtocolVersion: semconv.NetProtocolVersionKey,
NetSockFamilyKey: semconv.NetSockFamilyKey,
NetSockPeerAddrKey: semconv.NetSockPeerAddrKey,
NetSockPeerPortKey: semconv.NetSockPeerPortKey,
NetSockHostAddrKey: semconv.NetSockHostAddrKey,
NetSockHostPortKey: semconv.NetSockHostPortKey,
NetTransportOther: semconv.NetTransportOther,
NetTransportTCP: semconv.NetTransportTCP,
NetTransportUDP: semconv.NetTransportUDP,
NetTransportInProc: semconv.NetTransportInProc,
}
func (c *netConv) Transport(network string) attribute.KeyValue {
switch network {
case "tcp", "tcp4", "tcp6":
return c.NetTransportTCP
case "udp", "udp4", "udp6":
return c.NetTransportUDP
case "unix", "unixgram", "unixpacket":
return c.NetTransportInProc
default:
// "ip:*", "ip4:*", and "ip6:*" all are considered other.
return c.NetTransportOther
}
}
// Host returns attributes for a network host address.
func (c *netConv) Host(address string) []attribute.KeyValue {
h, p := splitHostPort(address)
var n int
if h != "" {
n++
if p > 0 {
n++
}
}
if n == 0 {
return nil
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.HostName(h))
if p > 0 {
attrs = append(attrs, c.HostPort(p))
}
return attrs
}
func (c *netConv) HostName(name string) attribute.KeyValue {
return c.NetHostNameKey.String(name)
}
func (c *netConv) HostPort(port int) attribute.KeyValue {
return c.NetHostPortKey.Int(port)
}
func family(network, address string) string {
switch network {
case "unix", "unixgram", "unixpacket":
return "unix"
default:
if ip := net.ParseIP(address); ip != nil {
if ip.To4() == nil {
return "inet6"
}
return "inet"
}
}
return ""
}
// Peer returns attributes for a network peer address.
func (c *netConv) Peer(address string) []attribute.KeyValue {
h, p := splitHostPort(address)
var n int
if h != "" {
n++
if p > 0 {
n++
}
}
if n == 0 {
return nil
}
attrs := make([]attribute.KeyValue, 0, n)
attrs = append(attrs, c.PeerName(h))
if p > 0 {
attrs = append(attrs, c.PeerPort(p))
}
return attrs
}
func (c *netConv) PeerName(name string) attribute.KeyValue {
return c.NetPeerNameKey.String(name)
}
func (c *netConv) PeerPort(port int) attribute.KeyValue {
return c.NetPeerPortKey.Int(port)
}
func (c *netConv) SockPeerAddr(addr string) attribute.KeyValue {
return c.NetSockPeerAddrKey.String(addr)
}
func (c *netConv) SockPeerPort(port int) attribute.KeyValue {
return c.NetSockPeerPortKey.Int(port)
}
// splitHostPort splits a network address hostport of the form "host",
// "host%zone", "[host]", "[host%zone], "host:port", "host%zone:port",
// "[host]:port", "[host%zone]:port", or ":port" into host or host%zone and
// port.
//
// An empty host is returned if it is not provided or unparsable. A negative
// port is returned if it is not provided or unparsable.
func splitHostPort(hostport string) (host string, port int) {
port = -1
if strings.HasPrefix(hostport, "[") {
addrEnd := strings.LastIndex(hostport, "]")
if addrEnd < 0 {
// Invalid hostport.
return
}
if i := strings.LastIndex(hostport[addrEnd:], ":"); i < 0 {
host = hostport[1:addrEnd]
return
}
} else {
if i := strings.LastIndex(hostport, ":"); i < 0 {
host = hostport
return
}
}
host, pStr, err := net.SplitHostPort(hostport)
if err != nil {
return
}
p, err := strconv.ParseUint(pStr, 10, 16)
if err != nil {
return
}
return host, int(p)
}
func netProtocol(proto string) (name string, version string) {
name, version, _ = strings.Cut(proto, "/")
name = strings.ToLower(name)
return name, version
}
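// Illustrative examples (not part of the upstream file):
//
//	splitHostPort("example.com:8080") -> ("example.com", 8080)
//	splitHostPort("[::1]:443")        -> ("::1", 443)
//	splitHostPort("example.com")      -> ("example.com", -1)
//	netProtocol("HTTP/1.1")           -> ("http", "1.1")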

View File

@ -0,0 +1,58 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"sync"
"go.opentelemetry.io/otel/attribute"
)
// Labeler is used to allow instrumented HTTP handlers to add custom attributes to
// the metrics recorded by the net/http instrumentation.
type Labeler struct {
mu sync.Mutex
attributes []attribute.KeyValue
}
// Add attributes to a Labeler.
func (l *Labeler) Add(ls ...attribute.KeyValue) {
l.mu.Lock()
defer l.mu.Unlock()
l.attributes = append(l.attributes, ls...)
}
// Get returns a copy of the attributes added to the Labeler.
func (l *Labeler) Get() []attribute.KeyValue {
l.mu.Lock()
defer l.mu.Unlock()
ret := make([]attribute.KeyValue, len(l.attributes))
copy(ret, l.attributes)
return ret
}
type labelerContextKeyType int
const lablelerContextKey labelerContextKeyType = 0
// ContextWithLabeler returns a new context with the provided Labeler instance.
// Attributes added to the specified labeler will be injected into metrics
// emitted by the instrumentation. Only one labeler can be injected into the
// context. Injecting it multiple times will override previous calls.
func ContextWithLabeler(parent context.Context, l *Labeler) context.Context {
return context.WithValue(parent, lablelerContextKey, l)
}
// LabelerFromContext retrieves a Labeler instance from the provided context if
// one is available. If no Labeler was found in the provided context a new, empty
// Labeler is returned and the second return value is false. In this case it is
// safe to use the Labeler but any attributes added to it will not be used.
func LabelerFromContext(ctx context.Context) (*Labeler, bool) {
l, ok := ctx.Value(lablelerContextKey).(*Labeler)
if !ok {
l = &Labeler{}
}
return l, ok
}
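// Hypothetical usage sketch (not part of the upstream file); the handler and
// the attribute key/value below are illustrative only:
//
//	func handler(w http.ResponseWriter, r *http.Request) {
//		labeler, _ := otelhttp.LabelerFromContext(r.Context())
//		labeler.Add(attribute.String("user.tier", "gold"))
//		// Attributes added here are included in the metrics the otelhttp
//		// instrumentation records for this request.
//	}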

View File

@ -0,0 +1,277 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"io"
"net/http"
"net/http/httptrace"
"sync/atomic"
"time"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/propagation"
semconv "go.opentelemetry.io/otel/semconv/v1.20.0"
"go.opentelemetry.io/otel/trace"
)
// Transport implements the http.RoundTripper interface and wraps
// outbound HTTP(S) requests with a span and enriches it with metrics.
type Transport struct {
rt http.RoundTripper
tracer trace.Tracer
meter metric.Meter
propagators propagation.TextMapPropagator
spanStartOptions []trace.SpanStartOption
filters []Filter
spanNameFormatter func(string, *http.Request) string
clientTrace func(context.Context) *httptrace.ClientTrace
requestBytesCounter metric.Int64Counter
responseBytesCounter metric.Int64Counter
latencyMeasure metric.Float64Histogram
}
var _ http.RoundTripper = &Transport{}
// NewTransport wraps the provided http.RoundTripper with one that
// starts a span, injects the span context into the outbound request headers,
// and enriches it with metrics.
//
// If the provided http.RoundTripper is nil, http.DefaultTransport will be used
// as the base http.RoundTripper.
func NewTransport(base http.RoundTripper, opts ...Option) *Transport {
if base == nil {
base = http.DefaultTransport
}
t := Transport{
rt: base,
}
defaultOpts := []Option{
WithSpanOptions(trace.WithSpanKind(trace.SpanKindClient)),
WithSpanNameFormatter(defaultTransportFormatter),
}
c := newConfig(append(defaultOpts, opts...)...)
t.applyConfig(c)
t.createMeasures()
return &t
}
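// Hypothetical usage sketch (not part of the upstream file); the client and the
// URL below are illustrative only:
//
//	client := http.Client{Transport: otelhttp.NewTransport(http.DefaultTransport)}
//	resp, err := client.Get("https://example.com/")
//	// Each outbound request is traced, its span context is injected into the
//	// request headers, and request/response sizes and latency are recorded.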
func (t *Transport) applyConfig(c *config) {
t.tracer = c.Tracer
t.meter = c.Meter
t.propagators = c.Propagators
t.spanStartOptions = c.SpanStartOptions
t.filters = c.Filters
t.spanNameFormatter = c.SpanNameFormatter
t.clientTrace = c.ClientTrace
}
func (t *Transport) createMeasures() {
var err error
t.requestBytesCounter, err = t.meter.Int64Counter(
clientRequestSize,
metric.WithUnit("By"),
metric.WithDescription("Measures the size of HTTP request messages."),
)
handleErr(err)
t.responseBytesCounter, err = t.meter.Int64Counter(
clientResponseSize,
metric.WithUnit("By"),
metric.WithDescription("Measures the size of HTTP response messages."),
)
handleErr(err)
t.latencyMeasure, err = t.meter.Float64Histogram(
clientDuration,
metric.WithUnit("ms"),
metric.WithDescription("Measures the duration of outbound HTTP requests."),
)
handleErr(err)
}
func defaultTransportFormatter(_ string, r *http.Request) string {
return "HTTP " + r.Method
}
// RoundTrip creates a Span and propagates its context via the provided request's headers
// before handing the request to the configured base RoundTripper. The created span will
// end when the response body is closed or when a read from the body returns io.EOF.
func (t *Transport) RoundTrip(r *http.Request) (*http.Response, error) {
requestStartTime := time.Now()
for _, f := range t.filters {
if !f(r) {
// Simply pass through to the base RoundTripper if a filter rejects the request
return t.rt.RoundTrip(r)
}
}
tracer := t.tracer
if tracer == nil {
if span := trace.SpanFromContext(r.Context()); span.SpanContext().IsValid() {
tracer = newTracer(span.TracerProvider())
} else {
tracer = newTracer(otel.GetTracerProvider())
}
}
opts := append([]trace.SpanStartOption{}, t.spanStartOptions...) // start with the configured options
ctx, span := tracer.Start(r.Context(), t.spanNameFormatter("", r), opts...)
if t.clientTrace != nil {
ctx = httptrace.WithClientTrace(ctx, t.clientTrace(ctx))
}
labeler, found := LabelerFromContext(ctx)
if !found {
ctx = ContextWithLabeler(ctx, labeler)
}
r = r.Clone(ctx) // According to the RoundTripper spec, we shouldn't modify the original request.
// use a body wrapper to determine the request size
var bw bodyWrapper
// If the request body is nil or NoBody, we don't want to mutate it: doing so
// would affect its identity in an unforeseeable way, because we assert the
// ReadCloser fulfills a certain interface and that it is indeed nil or NoBody.
if r.Body != nil && r.Body != http.NoBody {
bw.ReadCloser = r.Body
// No-op to prevent a nil panic; this record func is not used yet.
bw.record = func(int64) {}
r.Body = &bw
}
span.SetAttributes(semconvutil.HTTPClientRequest(r)...)
t.propagators.Inject(ctx, propagation.HeaderCarrier(r.Header))
res, err := t.rt.RoundTrip(r)
if err != nil {
span.RecordError(err)
span.SetStatus(codes.Error, err.Error())
span.End()
return res, err
}
// metrics
metricAttrs := append(labeler.Get(), semconvutil.HTTPClientRequestMetrics(r)...)
if res.StatusCode > 0 {
metricAttrs = append(metricAttrs, semconv.HTTPStatusCode(res.StatusCode))
}
o := metric.WithAttributeSet(attribute.NewSet(metricAttrs...))
addOpts := []metric.AddOption{o} // Allocate vararg slice once.
t.requestBytesCounter.Add(ctx, bw.read.Load(), addOpts...)
// For handling response bytes we leverage a callback when the client reads the http response
readRecordFunc := func(n int64) {
t.responseBytesCounter.Add(ctx, n, addOpts...)
}
// traces
span.SetAttributes(semconvutil.HTTPClientResponse(res)...)
span.SetStatus(semconvutil.HTTPClientStatus(res.StatusCode))
res.Body = newWrappedBody(span, readRecordFunc, res.Body)
// Use floating point division here for higher precision (instead of Millisecond method).
elapsedTime := float64(time.Since(requestStartTime)) / float64(time.Millisecond)
t.latencyMeasure.Record(ctx, elapsedTime, o)
return res, err
}
// newWrappedBody returns a new and appropriately scoped *wrappedBody as an
// io.ReadCloser. If the passed body implements io.Writer, the returned value
// will implement io.ReadWriteCloser.
func newWrappedBody(span trace.Span, record func(n int64), body io.ReadCloser) io.ReadCloser {
// Successful protocol switch responses will have a body that
// implements io.ReadWriteCloser. Ensure this interface type continues
// to be satisfied if that is the case.
if _, ok := body.(io.ReadWriteCloser); ok {
return &wrappedBody{span: span, record: record, body: body}
}
// Remove the implementation of the io.ReadWriteCloser and only implement
// the io.ReadCloser.
return struct{ io.ReadCloser }{&wrappedBody{span: span, record: record, body: body}}
}
// wrappedBody is the response body type returned by the transport
// instrumentation to complete a span. Errors encountered when using the
// response body are recorded in the span tracking the response.
//
// The span tracking the response is ended when this body is closed.
//
// If the response body implements the io.Writer interface (i.e. for
// successful protocol switches), the wrapped body also will.
type wrappedBody struct {
span trace.Span
recorded atomic.Bool
record func(n int64)
body io.ReadCloser
read atomic.Int64
}
var _ io.ReadWriteCloser = &wrappedBody{}
func (wb *wrappedBody) Write(p []byte) (int, error) {
// This will not panic given the guard in newWrappedBody.
n, err := wb.body.(io.Writer).Write(p)
if err != nil {
wb.span.RecordError(err)
wb.span.SetStatus(codes.Error, err.Error())
}
return n, err
}
func (wb *wrappedBody) Read(b []byte) (int, error) {
n, err := wb.body.Read(b)
// Record the number of bytes read
wb.read.Add(int64(n))
switch err {
case nil:
// nothing to do here but fall through to the return
case io.EOF:
wb.recordBytesRead()
wb.span.End()
default:
wb.span.RecordError(err)
wb.span.SetStatus(codes.Error, err.Error())
}
return n, err
}
// recordBytesRead is a function that ensures the number of bytes read is recorded once and only once.
func (wb *wrappedBody) recordBytesRead() {
// note: it is more performant (and equally correct) to use atomic.Bool over sync.Once here. In the event that
// two goroutines are racing to call this method, the number of bytes read will no longer increase. Using
// CompareAndSwap allows later goroutines to return quickly and not block waiting for the race winner to finish
// calling wb.record(wb.read.Load()).
if wb.recorded.CompareAndSwap(false, true) {
// Record the total number of bytes read
wb.record(wb.read.Load())
}
}
func (wb *wrappedBody) Close() error {
wb.recordBytesRead()
wb.span.End()
if wb.body != nil {
return wb.body.Close()
}
return nil
}

View File

@ -0,0 +1,17 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
// Version is the current release version of the otelhttp instrumentation.
func Version() string {
return "0.53.0"
// This string is updated by the pre_release.sh script during release
}
// SemVersion is the semantic version to be supplied to tracer/meter creation.
//
// Deprecated: Use [Version] instead.
func SemVersion() string {
return Version()
}

View File

@ -0,0 +1,99 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
import (
"context"
"io"
"net/http"
"sync/atomic"
"go.opentelemetry.io/otel/propagation"
)
var _ io.ReadCloser = &bodyWrapper{}
// bodyWrapper wraps a http.Request.Body (an io.ReadCloser) to track the number
// of bytes read and the last error.
type bodyWrapper struct {
io.ReadCloser
record func(n int64) // must not be nil
read atomic.Int64
err error
}
func (w *bodyWrapper) Read(b []byte) (int, error) {
n, err := w.ReadCloser.Read(b)
n1 := int64(n)
w.read.Add(n1)
w.err = err
w.record(n1)
return n, err
}
func (w *bodyWrapper) Close() error {
return w.ReadCloser.Close()
}
var _ http.ResponseWriter = &respWriterWrapper{}
// respWriterWrapper wraps a http.ResponseWriter in order to track the number of
// bytes written, the last error, and to catch the first written statusCode.
// TODO: The wrapped http.ResponseWriter doesn't implement any of the optional
// types (http.Hijacker, http.Pusher, http.CloseNotifier, http.Flusher, etc.)
// that may be useful when using it in real-life situations.
type respWriterWrapper struct {
http.ResponseWriter
record func(n int64) // must not be nil
// used to inject the header
ctx context.Context
props propagation.TextMapPropagator
written int64
statusCode int
err error
wroteHeader bool
}
func (w *respWriterWrapper) Header() http.Header {
return w.ResponseWriter.Header()
}
func (w *respWriterWrapper) Write(p []byte) (int, error) {
if !w.wroteHeader {
w.WriteHeader(http.StatusOK)
}
n, err := w.ResponseWriter.Write(p)
n1 := int64(n)
w.record(n1)
w.written += n1
w.err = err
return n, err
}
// WriteHeader persists initial statusCode for span attribution.
// All calls to WriteHeader will be propagated to the underlying ResponseWriter
// and will persist the statusCode from the first call.
// Consecutive calls are intentionally not blocked: doing so would alter the
// expected behavior and suppress the net/http warning logs that help developers
// notice incorrect handler implementations.
func (w *respWriterWrapper) WriteHeader(statusCode int) {
if !w.wroteHeader {
w.wroteHeader = true
w.statusCode = statusCode
}
w.ResponseWriter.WriteHeader(statusCode)
}
func (w *respWriterWrapper) Flush() {
if !w.wroteHeader {
w.WriteHeader(http.StatusOK)
}
if f, ok := w.ResponseWriter.(http.Flusher); ok {
f.Flush()
}
}

9
e2e/vendor/go.opentelemetry.io/otel/.codespellignore generated vendored Normal file
View File

@ -0,0 +1,9 @@
ot
fo
te
collison
consequentially
ans
nam
valu
thirdparty

10
e2e/vendor/go.opentelemetry.io/otel/.codespellrc generated vendored Normal file
View File

@ -0,0 +1,10 @@
# https://github.com/codespell-project/codespell
[codespell]
builtin = clear,rare,informal
check-filenames =
check-hidden =
ignore-words = .codespellignore
interactive = 1
skip = .git,go.mod,go.sum,go.work,go.work.sum,semconv,venv,.tools
uri-ignore-words-list = *
write =

3
e2e/vendor/go.opentelemetry.io/otel/.gitattributes generated vendored Normal file
View File

@ -0,0 +1,3 @@
* text=auto eol=lf
*.{cmd,[cC][mM][dD]} text eol=crlf
*.{bat,[bB][aA][tT]} text eol=crlf

14
e2e/vendor/go.opentelemetry.io/otel/.gitignore generated vendored Normal file
View File

@ -0,0 +1,14 @@
.DS_Store
Thumbs.db
.tools/
venv/
.idea/
.vscode/
*.iml
*.so
coverage.*
go.work
go.work.sum
gen/

325
e2e/vendor/go.opentelemetry.io/otel/.golangci.yml generated vendored Normal file
View File

@ -0,0 +1,325 @@
# See https://github.com/golangci/golangci-lint#config-file
run:
issues-exit-code: 1 #Default
tests: true #Default
linters:
# Disable everything by default so upgrades do not include new "default
# enabled" linters.
disable-all: true
# Specifically enable linters we want to use.
enable:
- asasalint
- bodyclose
- depguard
- errcheck
- errorlint
- godot
- gofumpt
- goimports
- gosec
- gosimple
- govet
- ineffassign
- misspell
- perfsprint
- revive
- staticcheck
- tenv
- testifylint
- typecheck
- unconvert
- unused
- unparam
- usestdlibvars
issues:
# Maximum issues count per one linter.
# Set to 0 to disable.
# Default: 50
# Setting to unlimited so the linter only is run once to debug all issues.
max-issues-per-linter: 0
# Maximum count of issues with the same text.
# Set to 0 to disable.
# Default: 3
# Setting to unlimited so the linter only is run once to debug all issues.
max-same-issues: 0
# Excluding configuration per-path, per-linter, per-text and per-source.
exclude-rules:
# TODO: Having appropriate comments for exported objects helps development,
# even for objects in internal packages. Appropriate comments for all
# exported objects should be added and this exclusion removed.
- path: '.*internal/.*'
text: "exported (method|function|type|const) (.+) should have comment or be unexported"
linters:
- revive
# Yes, they are, but it's okay in a test.
- path: _test\.go
text: "exported func.*returns unexported type.*which can be annoying to use"
linters:
- revive
# Example test functions should be treated like main.
- path: example.*_test\.go
text: "calls to (.+) only in main[(][)] or init[(][)] functions"
linters:
- revive
# It's okay to not run gosec and perfsprint in a test.
- path: _test\.go
linters:
- gosec
- perfsprint
# Ignoring gosec G404: Use of weak random number generator (math/rand instead of crypto/rand)
# as we commonly use it in tests and examples.
- text: "G404:"
linters:
- gosec
# Ignoring gosec G402: TLS MinVersion too low
# as the https://pkg.go.dev/crypto/tls#Config handles MinVersion default well.
- text: "G402: TLS MinVersion too low."
linters:
- gosec
include:
# revive exported should have comment or be unexported.
- EXC0012
# revive package comment should be of the form ...
- EXC0013
linters-settings:
depguard:
rules:
non-tests:
files:
- "!$test"
- "!**/*test/*.go"
- "!**/internal/matchers/*.go"
deny:
- pkg: "testing"
- pkg: "github.com/stretchr/testify"
- pkg: "crypto/md5"
- pkg: "crypto/sha1"
- pkg: "crypto/**/pkix"
auto/sdk:
files:
- "!internal/global/trace.go"
- "~internal/global/trace_test.go"
deny:
- pkg: "go.opentelemetry.io/auto/sdk"
desc: Do not use SDK from automatic instrumentation.
otlp-internal:
files:
- "!**/exporters/otlp/internal/**/*.go"
deny:
- pkg: "go.opentelemetry.io/otel/exporters/otlp/internal"
desc: Do not use cross-module internal packages.
otlptrace-internal:
files:
- "!**/exporters/otlp/otlptrace/*.go"
- "!**/exporters/otlp/otlptrace/internal/**.go"
deny:
- pkg: "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal"
desc: Do not use cross-module internal packages.
otlpmetric-internal:
files:
- "!**/exporters/otlp/otlpmetric/internal/*.go"
- "!**/exporters/otlp/otlpmetric/internal/**/*.go"
deny:
- pkg: "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/internal"
desc: Do not use cross-module internal packages.
otel-internal:
files:
- "**/sdk/*.go"
- "**/sdk/**/*.go"
- "**/exporters/*.go"
- "**/exporters/**/*.go"
- "**/schema/*.go"
- "**/schema/**/*.go"
- "**/metric/*.go"
- "**/metric/**/*.go"
- "**/bridge/*.go"
- "**/bridge/**/*.go"
- "**/trace/*.go"
- "**/trace/**/*.go"
- "**/log/*.go"
- "**/log/**/*.go"
deny:
- pkg: "go.opentelemetry.io/otel/internal$"
desc: Do not use cross-module internal packages.
- pkg: "go.opentelemetry.io/otel/internal/attribute"
desc: Do not use cross-module internal packages.
- pkg: "go.opentelemetry.io/otel/internal/internaltest"
desc: Do not use cross-module internal packages.
- pkg: "go.opentelemetry.io/otel/internal/matchers"
desc: Do not use cross-module internal packages.
godot:
exclude:
# Exclude links.
- '^ *\[[^]]+\]:'
# Exclude sentence fragments for lists.
- '^[ ]*[-•]'
# Exclude sentences prefixing a list.
- ':$'
goimports:
local-prefixes: go.opentelemetry.io
misspell:
locale: US
ignore-words:
- cancelled
perfsprint:
err-error: true
errorf: true
int-conversion: true
sprintf1: true
strconcat: true
revive:
# Sets the default failure confidence.
# This means that linting errors with less than 0.8 confidence will be ignored.
# Default: 0.8
confidence: 0.01
rules:
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#blank-imports
- name: blank-imports
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#bool-literal-in-expr
- name: bool-literal-in-expr
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#constant-logical-expr
- name: constant-logical-expr
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#context-as-argument
# TODO (#3372) re-enable linter when it is compatible. https://github.com/golangci/golangci-lint/issues/3280
- name: context-as-argument
disabled: true
arguments:
allowTypesBefore: "*testing.T"
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#context-keys-type
- name: context-keys-type
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#deep-exit
- name: deep-exit
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#defer
- name: defer
disabled: false
arguments:
- ["call-chain", "loop"]
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#dot-imports
- name: dot-imports
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#duplicated-imports
- name: duplicated-imports
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#early-return
- name: early-return
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#empty-block
- name: empty-block
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#empty-lines
- name: empty-lines
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#error-naming
- name: error-naming
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#error-return
- name: error-return
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#error-strings
- name: error-strings
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#errorf
- name: errorf
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#exported
- name: exported
disabled: false
arguments:
- "sayRepetitiveInsteadOfStutters"
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#flag-parameter
- name: flag-parameter
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#identical-branches
- name: identical-branches
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#if-return
- name: if-return
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#increment-decrement
- name: increment-decrement
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#indent-error-flow
- name: indent-error-flow
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#import-shadowing
- name: import-shadowing
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#package-comments
- name: package-comments
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#range
- name: range
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#range-val-in-closure
- name: range-val-in-closure
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#range-val-address
- name: range-val-address
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#redefines-builtin-id
- name: redefines-builtin-id
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#string-format
- name: string-format
disabled: false
arguments:
- - panic
- '/^[^\n]*$/'
- must not contain line breaks
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#struct-tag
- name: struct-tag
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#superfluous-else
- name: superfluous-else
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#time-equal
- name: time-equal
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#var-naming
- name: var-naming
disabled: false
arguments:
- ["ID"] # AllowList
- ["Otel", "Aws", "Gcp"] # DenyList
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#var-declaration
- name: var-declaration
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unconditional-recursion
- name: unconditional-recursion
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unexported-return
- name: unexported-return
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unhandled-error
- name: unhandled-error
disabled: false
arguments:
- "fmt.Fprint"
- "fmt.Fprintf"
- "fmt.Fprintln"
- "fmt.Print"
- "fmt.Printf"
- "fmt.Println"
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#unnecessary-stmt
- name: unnecessary-stmt
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#useless-break
- name: useless-break
disabled: false
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md#waitgroup-by-value
- name: waitgroup-by-value
disabled: false
testifylint:
enable-all: true
disable:
- float-compare
- go-require
- require-error

6
e2e/vendor/go.opentelemetry.io/otel/.lycheeignore generated vendored Normal file
View File

@ -0,0 +1,6 @@
http://localhost
http://jaeger-collector
https://github.com/open-telemetry/opentelemetry-go/milestone/
https://github.com/open-telemetry/opentelemetry-go/projects
file:///home/runner/work/opentelemetry-go/opentelemetry-go/libraries
file:///home/runner/work/opentelemetry-go/opentelemetry-go/manual

29
e2e/vendor/go.opentelemetry.io/otel/.markdownlint.yaml generated vendored Normal file
View File

@ -0,0 +1,29 @@
# Default state for all rules
default: true
# ul-style
MD004: false
# hard-tabs
MD010: false
# line-length
MD013: false
# no-duplicate-header
MD024:
siblings_only: true
#single-title
MD025: false
# ol-prefix
MD029:
style: ordered
# no-inline-html
MD033: false
# fenced-code-language
MD040: false

3289
e2e/vendor/go.opentelemetry.io/otel/CHANGELOG.md generated vendored Normal file

File diff suppressed because it is too large

17
e2e/vendor/go.opentelemetry.io/otel/CODEOWNERS generated vendored Normal file
View File

@ -0,0 +1,17 @@
#####################################################
#
# List of approvers for this repository
#
#####################################################
#
# Learn about membership in OpenTelemetry community:
# https://github.com/open-telemetry/community/blob/main/guides/contributor/membership.md
#
#
# Learn about CODEOWNERS file format:
# https://help.github.com/en/articles/about-code-owners
#
* @MrAlias @XSAM @dashpole @pellared @dmathieu
CODEOWNERS @MrAlias @pellared @dashpole @XSAM @dmathieu

664
e2e/vendor/go.opentelemetry.io/otel/CONTRIBUTING.md generated vendored Normal file
View File

@ -0,0 +1,664 @@
# Contributing to opentelemetry-go
The Go special interest group (SIG) meets regularly. See the
OpenTelemetry
[community](https://github.com/open-telemetry/community#golang-sdk)
repo for information on this and other language SIGs.
See the [public meeting
notes](https://docs.google.com/document/d/1E5e7Ld0NuU1iVvf-42tOBpu2VBBLYnh73GJuITGJTTU/edit)
for a summary description of past meetings. To request edit access,
join the meeting or get in touch on
[Slack](https://cloud-native.slack.com/archives/C01NPAXACKT).
## Development
You can view and edit the source code by cloning this repository:
```sh
git clone https://github.com/open-telemetry/opentelemetry-go.git
```
Run `make test` to run the tests instead of `go test`.
There are some generated files checked into the repo. To make sure
that the generated files are up-to-date, run `make` (or `make
precommit` - the `precommit` target is the default).
The `precommit` target also fixes the formatting of the code and
checks the status of the go module files.
Additionally, there is a `codespell` target that checks for common
typos in the code. It is not run by default, but you can run it
manually with `make codespell`. It will set up a virtual environment
in `venv` and install `codespell` there.
If after running `make precommit` the output of `git status` contains
`nothing to commit, working tree clean` then it means that everything
is up-to-date and properly formatted.
## Pull Requests
### How to Send Pull Requests
Everyone is welcome to contribute code to `opentelemetry-go` via
GitHub pull requests (PRs).
To create a new PR, fork the project in GitHub and clone the upstream
repo:
```sh
go get -d go.opentelemetry.io/otel
```
(This may print some warning about "build constraints exclude all Go
files", just ignore it.)
This will put the project in `${GOPATH}/src/go.opentelemetry.io/otel`. You
can alternatively use `git` directly with:
```sh
git clone https://github.com/open-telemetry/opentelemetry-go
```
(Note that `git clone` is *not* using the `go.opentelemetry.io/otel` name -
that name is a kind of a redirector to GitHub that `go get` can
understand, but `git` does not.)
This would put the project in the `opentelemetry-go` directory in
current working directory.
Enter the newly created directory and add your fork as a new remote:
```sh
git remote add <YOUR_FORK> git@github.com:<YOUR_GITHUB_USERNAME>/opentelemetry-go
```
Check out a new branch, make modifications, run linters and tests, update
`CHANGELOG.md`, and push the branch to your fork:
```sh
git checkout -b <YOUR_BRANCH_NAME>
# edit files
# update changelog
make precommit
git add -p
git commit
git push <YOUR_FORK> <YOUR_BRANCH_NAME>
```
Open a pull request against the main `opentelemetry-go` repo. Be sure to add the pull
request ID to the entry you added to `CHANGELOG.md`.
Avoid rebasing and force-pushing to your branch to facilitate reviewing the pull request.
Rewriting Git history makes it difficult to keep track of iterations during code review.
All pull requests are squashed to a single commit upon merge to `main`.
### How to Receive Comments
* If the PR is not ready for review, please put `[WIP]` in the title,
tag it as `work-in-progress`, or mark it as
[`draft`](https://github.blog/2019-02-14-introducing-draft-pull-requests/).
* Make sure CLA is signed and CI is clear.
### How to Get PRs Merged
A PR is considered **ready to merge** when:
* It has received two qualified approvals[^1].
This is not enforced through automation, but needs to be validated by the
maintainer merging.
* The qualified approvals need to be from [Approver]s/[Maintainer]s
affiliated with different companies. Two qualified approvals from
[Approver]s or [Maintainer]s affiliated with the same company counts as a
single qualified approval.
* PRs introducing changes that have already been discussed and consensus
reached only need one qualified approval. The discussion and resolution
needs to be linked to the PR.
* Trivial changes[^2] only need one qualified approval.
* All feedback has been addressed.
* All PR comments and suggestions are resolved.
* All GitHub Pull Request reviews with a status of "Request changes" have
been addressed. Another review by the objecting reviewer with a different
status can be submitted to clear the original review, or the review can be
dismissed by a [Maintainer] when the issues from the original review have
been addressed.
* Any comments or reviews that cannot be resolved between the PR author and
reviewers can be submitted to the community [Approver]s and [Maintainer]s
during the weekly SIG meeting. If consensus is reached among the
[Approver]s and [Maintainer]s during the SIG meeting the objections to the
PR may be dismissed or resolved or the PR closed by a [Maintainer].
* Any substantive changes to the PR require existing Approval reviews be
cleared unless the approver explicitly states that their approval persists
across changes. This includes changes resulting from other feedback.
[Approver]s and [Maintainer]s can help in clearing reviews and they should
be consulted if there are any questions.
* The PR branch is up to date with the base branch it is merging into.
* To ensure this does not block the PR, it should be configured to allow
maintainers to update it.
* It has been open for review for at least one working day. This gives people
reasonable time to review.
* Trivial changes[^2] do not have to wait for one day and may be merged with
a single [Maintainer]'s approval.
* All required GitHub workflows have succeeded.
* Urgent fix can take exception as long as it has been actively communicated
among [Maintainer]s.
Any [Maintainer] can merge the PR once the above criteria have been met.
[^1]: A qualified approval is a GitHub Pull Request review with "Approve"
status from an OpenTelemetry Go [Approver] or [Maintainer].
[^2]: Trivial changes include: typo corrections, cosmetic non-substantive
changes, documentation corrections or updates, dependency updates, etc.
## Design Choices
As with other OpenTelemetry clients, opentelemetry-go follows the
[OpenTelemetry Specification](https://opentelemetry.io/docs/specs/otel).
It's especially valuable to read through the [library
guidelines](https://opentelemetry.io/docs/specs/otel/library-guidelines).
### Focus on Capabilities, Not Structure Compliance
OpenTelemetry is an evolving specification, one where the desires and
use cases are clear, but the method to satisfy those uses cases are
not.
As such, contributions should provide functionality and behavior that
conform to the specification, but the interface and structure are
flexible.
It is preferable to have contributions follow the idioms of the
language rather than conform to specific API names or argument
patterns in the spec.
For a deeper discussion, see
[this](https://github.com/open-telemetry/opentelemetry-specification/issues/165).
## Documentation
Each (non-internal, non-test) package must be documented using
[Go Doc Comments](https://go.dev/doc/comment),
preferably in a `doc.go` file.
Prefer using [Examples](https://pkg.go.dev/testing#hdr-Examples)
instead of putting code snippets in Go doc comments.
In some cases, you can even create [Testable Examples](https://go.dev/blog/examples).
You can install and run a "local Go Doc site" in the following way:
```sh
go install golang.org/x/pkgsite/cmd/pkgsite@latest
pkgsite
```
[`go.opentelemetry.io/otel/metric`](https://pkg.go.dev/go.opentelemetry.io/otel/metric)
is an example of a very well-documented package.
### README files
Each (non-internal, non-test, non-documentation) package must contain a
`README.md` file containing at least a title, and a `pkg.go.dev` badge.
The README should not be a repetition of Go doc comments.
You can verify the presence of all README files with the `make verify-readmes`
command.
## Style Guide
One of the primary goals of this project is that it is actually used by
developers. With this goal in mind the project strives to build
user-friendly and idiomatic Go code adhering to the Go community's best
practices.
For a non-comprehensive but foundational overview of these best practices
the [Effective Go](https://golang.org/doc/effective_go.html) documentation
is an excellent starting place.
As a convenience for developers building this project the `make precommit`
will format, lint, validate, and in some cases fix the changes you plan to
submit. This check will need to pass for your changes to be able to be
merged.
In addition to idiomatic Go, the project has adopted certain standards for
implementations of common patterns. These standards should be followed as a
default, and if they are not followed documentation needs to be included as
to the reasons why.
### Configuration
When creating an instantiation function for a complex `type T struct`, it is
useful to allow a variable number of options to be applied. However, the strong
type system of Go restricts the function design options. There are a few ways
to solve this problem, but we have landed on the following design.
#### `config`
Configuration should be held in a `struct` named `config`, or prefixed with the
specific type name this configuration applies to if there are multiple
`config` types in the package. This type must contain configuration options.
```go
// config contains configuration options for a thing.
type config struct {
// options ...
}
```
In general the `config` type will not need to be used externally to the
package and should be unexported. If, however, it is expected that the user
will likely want to build custom options for the configuration, the `config`
should be exported. Please, include in the documentation for the `config`
how the user can extend the configuration.
It is important that internal `config` types are not shared across package boundaries,
meaning a `config` from one package should not be directly used by another. The
one exception is the API packages. The configs from the base API, e.g.
`go.opentelemetry.io/otel/trace.TracerConfig` and
`go.opentelemetry.io/otel/metric.InstrumentConfig`, are intended to be consumed
by the SDK therefore it is expected that these are exported.
When a config is exported we want to maintain forward and backward
compatibility, to achieve this no fields should be exported but should
instead be accessed by methods.
Optionally, it is common to include a `newConfig` function (with the same
naming scheme). This function wraps any defaults setting and looping over
all options to create a configured `config`.
```go
// newConfig returns an appropriately configured config.
func newConfig(options ...Option) config {
// Set default values for config.
config := config{/* […] */}
for _, option := range options {
config = option.apply(config)
}
// Perform any validation here.
return config
}
```
If validation of the `config` options is also performed this can return an
error as well that is expected to be handled by the instantiation function
or propagated to the user.
Given the design goal of not having the user need to work with the `config`,
the `newConfig` function should also be unexported.
#### `Option`
To set the value of the options a `config` contains, a corresponding
`Option` interface type should be used.
```go
type Option interface {
apply(config) config
}
```
Having `apply` unexported makes sure that it will not be used externally.
Moreover, the interface becomes sealed so the user cannot easily implement
the interface on its own.
The `apply` method should return a modified version of the passed config.
This approach, instead of passing a pointer, is used to prevent the config from being allocated to the heap.
The name of the interface should be prefixed in the same way the
corresponding `config` is (if at all).
#### Options
All user configurable options for a `config` must have a related unexported
implementation of the `Option` interface and an exported configuration
function that wraps this implementation.
The wrapping function name should be prefixed with `With*` (or in the
special case of boolean options, `Without*`) and should have the following
function signature.
```go
func With*(…) Option { … }
```
##### `bool` Options
```go
type defaultFalseOption bool
func (o defaultFalseOption) apply(c config) config {
c.Bool = bool(o)
return c
}
// WithOption sets a T to have an option included.
func WithOption() Option {
return defaultFalseOption(true)
}
```
```go
type defaultTrueOption bool
func (o defaultTrueOption) apply(c config) config {
c.Bool = bool(o)
return c
}
// WithoutOption sets a T to have the Bool option excluded.
func WithoutOption() Option {
return defaultTrueOption(false)
}
```
##### Declared Type Options
```go
type myTypeOption struct {
MyType MyType
}
func (o myTypeOption) apply(c config) config {
c.MyType = o.MyType
return c
}
// WithMyType sets T to include MyType.
func WithMyType(t MyType) Option {
return myTypeOption{t}
}
```
##### Functional Options
```go
type optionFunc func(config) config
func (fn optionFunc) apply(c config) config {
return fn(c)
}
// WithMyType sets t as MyType.
func WithMyType(t MyType) Option {
return optionFunc(func(c config) config {
c.MyType = t
return c
})
}
```
#### Instantiation
Use this configuration pattern to configure instantiation with a `NewT`
function.
```go
func NewT(options ...Option) T {…}
```
Any required parameters can be declared before the variadic `options`.
#### Dealing with Overlap
Sometimes there are multiple complex `struct` types that share common
configuration and also have distinct configuration. To avoid repeated
portions of `config`s, a common `config` can be used with the union of
options being handled with the `Option` interface.
For example.
```go
// config holds options for all animals.
type config struct {
Weight float64
Color string
MaxAltitude float64
}
// DogOption applies Dog-specific options.
type DogOption interface {
applyDog(config) config
}
// BirdOption applies Bird-specific options.
type BirdOption interface {
applyBird(config) config
}
// Option applies options for all animals.
type Option interface {
BirdOption
DogOption
}
type weightOption float64
func (o weightOption) applyDog(c config) config {
c.Weight = float64(o)
return c
}
func (o weightOption) applyBird(c config) config {
c.Weight = float64(o)
return c
}
func WithWeight(w float64) Option { return weightOption(w) }
type furColorOption string
func (o furColorOption) applyDog(c config) config {
c.Color = string(o)
return c
}
func WithFurColor(c string) DogOption { return furColorOption(c) }
type maxAltitudeOption float64
func (o maxAltitudeOption) applyBird(c config) config {
c.MaxAltitude = float64(o)
return c
}
func WithMaxAltitude(a float64) BirdOption { return maxAltitudeOption(a) }
func NewDog(name string, o ...DogOption) Dog {…}
func NewBird(name string, o ...BirdOption) Bird {…}
```
### Interfaces
To allow other developers to better comprehend the code, it is important
to ensure it is sufficiently documented. One simple measure that contributes
to this aim is self-documentation through named method parameters. Therefore,
where appropriate, methods of every exported interface type should have
their parameters appropriately named.
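As a hedged sketch (the interface, method, and types below are illustrative, not
an existing API), named parameters make the intent of each argument clear:

```go
type Exporter interface {
	// ExportSpans exports a batch of spans.
	//
	// Naming ctx and spans documents what each argument is for, even though
	// Go does not require parameter names in interface method declarations.
	ExportSpans(ctx context.Context, spans []ReadOnlySpan) error
}
```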
#### Interface Stability
All exported stable interfaces that include the following warning in their
documentation are allowed to be extended with additional methods.
> Warning: methods may be added to this interface in minor releases.
These interfaces are defined by the OpenTelemetry specification and will be
updated as the specification evolves.
Otherwise, stable interfaces MUST NOT be modified.
#### How to Change Specification Interfaces
When an API change must be made, we will update the SDK with the new method one
release before the API change. This will allow the SDK one version before the
API change to work seamlessly with the new API.
If an incompatible version of the SDK is used with the new API the application
will fail to compile.
#### How Not to Change Specification Interfaces
We have explored using a v2 of the API to change interfaces and found that there
was no way to introduce a v2 and have it work seamlessly with the v1 of the API.
Problems happened with libraries that upgraded to v2 when an application did not,
and would not produce any telemetry.
More detail of the approaches considered and their limitations can be found in
the [Use a V2 API to evolve interfaces](https://github.com/open-telemetry/opentelemetry-go/issues/3920)
issue.
#### How to Change Other Interfaces
If new functionality is needed for an interface that cannot be changed it MUST
be added by including an additional interface. That added interface can be a
simple interface for the specific functionality that you want to add or it can
be a super-set of the original interface. For example, if you wanted to add a
`Close` method to the `Exporter` interface:
```go
type Exporter interface {
Export()
}
```
A new interface, `Closer`, can be added:
```go
type Closer interface {
Close()
}
```
Code that is passed the `Exporter` interface can now check to see if the passed
value also satisfies the new interface. E.g.
```go
func caller(e Exporter) {
/* ... */
if c, ok := e.(Closer); ok {
c.Close()
}
/* ... */
}
```
Alternatively, a new type that is the super-set of an `Exporter` can be created.
```go
type ClosingExporter struct {
Exporter
Close()
}
```
This new type can be used similarly to the simple interface above in that a
passed `Exporter` type can be asserted to satisfy the `ClosingExporter` type
and the `Close` method called.
This super-set approach can be useful if there is explicit behavior that needs
to be coupled with the original type and passed as a unified type to a new
function, but, because of this coupling, it also limits the applicability of
the added functionality. If there exist other interfaces where this
functionality should be added, each one will need their own super-set
interfaces and will duplicate the pattern. For this reason, the simple targeted
interface that defines the specific functionality should be preferred.
See also:
[Keeping Your Modules Compatible: Working with interfaces](https://go.dev/blog/module-compatibility#working-with-interfaces).
### Testing
The tests should never leak goroutines.
Use the term `ConcurrentSafe` in the test name when it aims to verify the
absence of race conditions. The top-level tests with this term will be run
many times in the `test-concurrent-safe` CI job to increase the chance of
catching concurrency issues. This does not apply to subtests when this term
is not in their root name.
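A minimal sketch of such a test, assuming a hypothetical `thing` type whose
`Add` method is meant to be safe for concurrent use:

```go
func TestThingAddConcurrentSafe(t *testing.T) {
	th := newThing() // hypothetical constructor

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			th.Add(1)
		}()
	}
	wg.Wait() // All goroutines finish; none are leaked past the test.
}
```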
### Internal packages
The use of internal packages should be scoped to a single module. A sub-module
should never import from a parent internal package. This creates a coupling
between the two modules where a user can upgrade the parent without the child
and if the internal package API has changed it will fail to upgrade[^3].
There are two known exceptions to this rule:
- `go.opentelemetry.io/otel/internal/global`
- This package manages global state for all of opentelemetry-go. It needs to
be a single package in order to ensure the uniqueness of the global state.
- `go.opentelemetry.io/otel/internal/baggage`
- This package provides values in a `context.Context` that need to be
recognized by `go.opentelemetry.io/otel/baggage` and
`go.opentelemetry.io/otel/bridge/opentracing` but remain private.
If you have duplicate code in multiple modules, make that code into a Go
template stored in `go.opentelemetry.io/otel/internal/shared` and use [gotmpl]
to render the templates in the desired locations. See [#4404] for an example of
this.
[^3]: https://github.com/open-telemetry/opentelemetry-go/issues/3548
### Ignoring context cancellation
OpenTelemetry API implementations need to ignore the cancellation of the context that is
passed when recording a value (e.g. starting a span, recording a measurement, emitting a log).
Recording methods should not return an error describing the cancellation state of the context
when they complete, nor should they abort any work.
This rule may not apply if the OpenTelemetry specification defines a timeout mechanism for
the method. In that case the context cancellation can be used for the timeout with the
restriction that this behavior is documented for the method. Otherwise, timeouts
are expected to be handled by the user calling the API, not the implementation.
Stoppage of the telemetry pipeline is handled by calling the appropriate `Shutdown` method
of a provider. It is assumed the context passed from a user is not used for this purpose.
Outside of the direct recording of telemetry from the API (e.g. exporting telemetry,
force flushing telemetry, shutting down a signal provider) the context cancellation
should be honored. This means all work done on behalf of the user provided context
should be canceled.
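A hedged sketch of what this means in practice (the tracer type and the newSpan
helper are hypothetical; the Start signature follows the trace API):

```go
func (t *tracer) Start(ctx context.Context, name string, opts ...trace.SpanStartOption) (context.Context, trace.Span) {
	// The cancellation state of ctx is deliberately not checked: the span is
	// started and recorded even if ctx was already canceled, and no error
	// describing that cancellation is returned to the caller.
	s := t.newSpan(ctx, name, opts) // hypothetical helper
	return trace.ContextWithSpan(ctx, s), s
}
```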
## Approvers and Maintainers
### Triagers
- [Cheng-Zhen Yang](https://github.com/scorpionknifes), Independent
### Approvers
### Maintainers
- [Damien Mathieu](https://github.com/dmathieu), Elastic
- [David Ashpole](https://github.com/dashpole), Google
- [Robert Pająk](https://github.com/pellared), Splunk
- [Sam Xie](https://github.com/XSAM), Cisco/AppDynamics
- [Tyler Yahn](https://github.com/MrAlias), Splunk
### Emeritus
- [Aaron Clawson](https://github.com/MadVikingGod)
- [Anthony Mirabella](https://github.com/Aneurysm9)
- [Chester Cheung](https://github.com/hanyuancheung)
- [Evan Torrie](https://github.com/evantorrie)
- [Gustavo Silva Paiva](https://github.com/paivagustavo)
- [Josh MacDonald](https://github.com/jmacd)
- [Liz Fong-Jones](https://github.com/lizthegrey)
### Become an Approver or a Maintainer
See the [community membership document in OpenTelemetry community
repo](https://github.com/open-telemetry/community/blob/main/guides/contributor/membership.md).
[Approver]: #approvers
[Maintainer]: #maintainers
[gotmpl]: https://pkg.go.dev/go.opentelemetry.io/build-tools/gotmpl
[#4404]: https://github.com/open-telemetry/opentelemetry-go/pull/4404

201
e2e/vendor/go.opentelemetry.io/otel/LICENSE generated vendored Normal file
View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

307
e2e/vendor/go.opentelemetry.io/otel/Makefile generated vendored Normal file
View File

@ -0,0 +1,307 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
TOOLS_MOD_DIR := ./internal/tools
ALL_DOCS := $(shell find . -name '*.md' -type f | sort)
ALL_GO_MOD_DIRS := $(shell find . -type f -name 'go.mod' -exec dirname {} \; | sort)
OTEL_GO_MOD_DIRS := $(filter-out $(TOOLS_MOD_DIR), $(ALL_GO_MOD_DIRS))
ALL_COVERAGE_MOD_DIRS := $(shell find . -type f -name 'go.mod' -exec dirname {} \; | grep -E -v '^./example|^$(TOOLS_MOD_DIR)' | sort)
GO = go
TIMEOUT = 60
.DEFAULT_GOAL := precommit
.PHONY: precommit ci
precommit: generate toolchain-check license-check misspell go-mod-tidy golangci-lint-fix verify-readmes verify-mods test-default
ci: generate toolchain-check license-check lint vanity-import-check verify-readmes verify-mods build test-default check-clean-work-tree test-coverage
# Tools
TOOLS = $(CURDIR)/.tools
$(TOOLS):
@mkdir -p $@
$(TOOLS)/%: $(TOOLS_MOD_DIR)/go.mod | $(TOOLS)
cd $(TOOLS_MOD_DIR) && \
$(GO) build -o $@ $(PACKAGE)
MULTIMOD = $(TOOLS)/multimod
$(TOOLS)/multimod: PACKAGE=go.opentelemetry.io/build-tools/multimod
SEMCONVGEN = $(TOOLS)/semconvgen
$(TOOLS)/semconvgen: PACKAGE=go.opentelemetry.io/build-tools/semconvgen
CROSSLINK = $(TOOLS)/crosslink
$(TOOLS)/crosslink: PACKAGE=go.opentelemetry.io/build-tools/crosslink
SEMCONVKIT = $(TOOLS)/semconvkit
$(TOOLS)/semconvkit: PACKAGE=go.opentelemetry.io/otel/$(TOOLS_MOD_DIR)/semconvkit
GOLANGCI_LINT = $(TOOLS)/golangci-lint
$(TOOLS)/golangci-lint: PACKAGE=github.com/golangci/golangci-lint/cmd/golangci-lint
MISSPELL = $(TOOLS)/misspell
$(TOOLS)/misspell: PACKAGE=github.com/client9/misspell/cmd/misspell
GOCOVMERGE = $(TOOLS)/gocovmerge
$(TOOLS)/gocovmerge: PACKAGE=github.com/wadey/gocovmerge
STRINGER = $(TOOLS)/stringer
$(TOOLS)/stringer: PACKAGE=golang.org/x/tools/cmd/stringer
PORTO = $(TOOLS)/porto
$(TOOLS)/porto: PACKAGE=github.com/jcchavezs/porto/cmd/porto
GOTMPL = $(TOOLS)/gotmpl
$(GOTMPL): PACKAGE=go.opentelemetry.io/build-tools/gotmpl
GORELEASE = $(TOOLS)/gorelease
$(GORELEASE): PACKAGE=golang.org/x/exp/cmd/gorelease
GOVULNCHECK = $(TOOLS)/govulncheck
$(TOOLS)/govulncheck: PACKAGE=golang.org/x/vuln/cmd/govulncheck
.PHONY: tools
tools: $(CROSSLINK) $(GOLANGCI_LINT) $(MISSPELL) $(GOCOVMERGE) $(STRINGER) $(PORTO) $(SEMCONVGEN) $(MULTIMOD) $(SEMCONVKIT) $(GOTMPL) $(GORELEASE)
# Virtualized python tools via docker
# The directory where the virtual environment is created.
VENVDIR := venv
# The directory where the python tools are installed.
PYTOOLS := $(VENVDIR)/bin
# The pip executable in the virtual environment.
PIP := $(PYTOOLS)/pip
# The directory in the docker image where the current directory is mounted.
WORKDIR := /workdir
# The python image to use for the virtual environment.
PYTHONIMAGE := python:3.11.3-slim-bullseye
# Run the python image with the current directory mounted.
DOCKERPY := docker run --rm -v "$(CURDIR):$(WORKDIR)" -w $(WORKDIR) $(PYTHONIMAGE)
# Create a virtual environment for Python tools.
$(PYTOOLS):
# The `--upgrade` flag is needed to ensure that the virtual environment is
# created with the latest pip version.
@$(DOCKERPY) bash -c "python3 -m venv $(VENVDIR) && $(PIP) install --upgrade pip"
# Install python packages into the virtual environment.
$(PYTOOLS)/%: $(PYTOOLS)
@$(DOCKERPY) $(PIP) install -r requirements.txt
CODESPELL = $(PYTOOLS)/codespell
$(CODESPELL): PACKAGE=codespell
# Generate
.PHONY: generate
generate: go-generate vanity-import-fix
.PHONY: go-generate
go-generate: $(OTEL_GO_MOD_DIRS:%=go-generate/%)
go-generate/%: DIR=$*
go-generate/%: $(STRINGER) $(GOTMPL)
@echo "$(GO) generate $(DIR)/..." \
&& cd $(DIR) \
&& PATH="$(TOOLS):$${PATH}" $(GO) generate ./...
.PHONY: vanity-import-fix
vanity-import-fix: $(PORTO)
@$(PORTO) --include-internal -w .
# Generate go.work file for local development.
.PHONY: go-work
go-work: $(CROSSLINK)
$(CROSSLINK) work --root=$(shell pwd)
# Build
.PHONY: build
build: $(OTEL_GO_MOD_DIRS:%=build/%) $(OTEL_GO_MOD_DIRS:%=build-tests/%)
build/%: DIR=$*
build/%:
@echo "$(GO) build $(DIR)/..." \
&& cd $(DIR) \
&& $(GO) build ./...
build-tests/%: DIR=$*
build-tests/%:
@echo "$(GO) build tests $(DIR)/..." \
&& cd $(DIR) \
&& $(GO) list ./... \
| grep -v third_party \
| xargs $(GO) test -vet=off -run xxxxxMatchNothingxxxxx >/dev/null
# Tests
TEST_TARGETS := test-default test-bench test-short test-verbose test-race test-concurrent-safe
.PHONY: $(TEST_TARGETS) test
test-default test-race: ARGS=-race
test-bench: ARGS=-run=xxxxxMatchNothingxxxxx -test.benchtime=1ms -bench=.
test-short: ARGS=-short
test-verbose: ARGS=-v -race
test-concurrent-safe: ARGS=-run=ConcurrentSafe -count=100 -race
test-concurrent-safe: TIMEOUT=120
$(TEST_TARGETS): test
test: $(OTEL_GO_MOD_DIRS:%=test/%)
test/%: DIR=$*
test/%:
@echo "$(GO) test -timeout $(TIMEOUT)s $(ARGS) $(DIR)/..." \
&& cd $(DIR) \
&& $(GO) list ./... \
| grep -v third_party \
| xargs $(GO) test -timeout $(TIMEOUT)s $(ARGS)
COVERAGE_MODE = atomic
COVERAGE_PROFILE = coverage.out
.PHONY: test-coverage
test-coverage: $(GOCOVMERGE)
@set -e; \
printf "" > coverage.txt; \
for dir in $(ALL_COVERAGE_MOD_DIRS); do \
echo "$(GO) test -coverpkg=go.opentelemetry.io/otel/... -covermode=$(COVERAGE_MODE) -coverprofile="$(COVERAGE_PROFILE)" $${dir}/..."; \
(cd "$${dir}" && \
$(GO) list ./... \
| grep -v third_party \
| grep -v 'semconv/v.*' \
| xargs $(GO) test -coverpkg=./... -covermode=$(COVERAGE_MODE) -coverprofile="$(COVERAGE_PROFILE)" && \
$(GO) tool cover -html=coverage.out -o coverage.html); \
done; \
$(GOCOVMERGE) $$(find . -name coverage.out) > coverage.txt
.PHONY: benchmark
benchmark: $(OTEL_GO_MOD_DIRS:%=benchmark/%)
benchmark/%:
@echo "$(GO) test -run=xxxxxMatchNothingxxxxx -bench=. $*..." \
&& cd $* \
&& $(GO) list ./... \
| grep -v third_party \
| xargs $(GO) test -run=xxxxxMatchNothingxxxxx -bench=.
.PHONY: golangci-lint golangci-lint-fix
golangci-lint-fix: ARGS=--fix
golangci-lint-fix: golangci-lint
golangci-lint: $(OTEL_GO_MOD_DIRS:%=golangci-lint/%)
golangci-lint/%: DIR=$*
golangci-lint/%: $(GOLANGCI_LINT)
@echo 'golangci-lint $(if $(ARGS),$(ARGS) ,)$(DIR)' \
&& cd $(DIR) \
&& $(GOLANGCI_LINT) run --allow-serial-runners $(ARGS)
.PHONY: crosslink
crosslink: $(CROSSLINK)
@echo "Updating intra-repository dependencies in all go modules" \
&& $(CROSSLINK) --root=$(shell pwd) --prune
.PHONY: go-mod-tidy
go-mod-tidy: $(ALL_GO_MOD_DIRS:%=go-mod-tidy/%)
go-mod-tidy/%: DIR=$*
go-mod-tidy/%: crosslink
@echo "$(GO) mod tidy in $(DIR)" \
&& cd $(DIR) \
&& $(GO) mod tidy -compat=1.21
.PHONY: lint-modules
lint-modules: go-mod-tidy
.PHONY: lint
lint: misspell lint-modules golangci-lint govulncheck
.PHONY: vanity-import-check
vanity-import-check: $(PORTO)
@$(PORTO) --include-internal -l . || ( echo "(run: make vanity-import-fix)"; exit 1 )
.PHONY: misspell
misspell: $(MISSPELL)
@$(MISSPELL) -w $(ALL_DOCS)
.PHONY: govulncheck
govulncheck: $(OTEL_GO_MOD_DIRS:%=govulncheck/%)
govulncheck/%: DIR=$*
govulncheck/%: $(GOVULNCHECK)
@echo "govulncheck ./... in $(DIR)" \
&& cd $(DIR) \
&& $(GOVULNCHECK) ./...
.PHONY: codespell
codespell: $(CODESPELL)
@$(DOCKERPY) $(CODESPELL)
.PHONY: toolchain-check
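# Fail if any go.mod in the repository declares a `toolchain` directive,
# so the build never pins a specific Go toolchain.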
toolchain-check:
@toolchainRes=$$(for f in $(ALL_GO_MOD_DIRS); do \
awk '/^toolchain/ { found=1; next } END { if (found) print FILENAME }' $$f/go.mod; \
done); \
if [ -n "$${toolchainRes}" ]; then \
echo "toolchain checking failed:"; echo "$${toolchainRes}"; \
exit 1; \
fi
.PHONY: license-check
license-check:
@licRes=$$(for f in $$(find . -type f \( -iname '*.go' -o -iname '*.sh' \) ! -path '**/third_party/*' ! -path './.git/*' ) ; do \
awk '/Copyright The OpenTelemetry Authors|generated|GENERATED/ && NR<=4 { found=1; next } END { if (!found) print FILENAME }' $$f; \
done); \
if [ -n "$${licRes}" ]; then \
echo "license header checking failed:"; echo "$${licRes}"; \
exit 1; \
fi
.PHONY: check-clean-work-tree
check-clean-work-tree:
@if ! git diff --quiet; then \
echo; \
echo 'Working tree is not clean, did you forget to run "make precommit"?'; \
echo; \
git status; \
exit 1; \
fi
SEMCONVPKG ?= "semconv/"
.PHONY: semconv-generate
semconv-generate: $(SEMCONVGEN) $(SEMCONVKIT)
[ "$(TAG)" ] || ( echo "TAG unset: missing opentelemetry semantic-conventions tag"; exit 1 )
[ "$(OTEL_SEMCONV_REPO)" ] || ( echo "OTEL_SEMCONV_REPO unset: missing path to opentelemetry semantic-conventions repo"; exit 1 )
$(SEMCONVGEN) -i "$(OTEL_SEMCONV_REPO)/model/." --only=attribute_group -p conventionType=trace -f attribute_group.go -z "$(SEMCONVPKG)/capitalizations.txt" -t "$(SEMCONVPKG)/template.j2" -s "$(TAG)"
$(SEMCONVGEN) -i "$(OTEL_SEMCONV_REPO)/model/." --only=metric -f metric.go -t "$(SEMCONVPKG)/metric_template.j2" -s "$(TAG)"
$(SEMCONVKIT) -output "$(SEMCONVPKG)/$(TAG)" -tag "$(TAG)"
.PHONY: gorelease
gorelease: $(OTEL_GO_MOD_DIRS:%=gorelease/%)
gorelease/%: DIR=$*
gorelease/%:| $(GORELEASE)
@echo "gorelease in $(DIR):" \
&& cd $(DIR) \
&& $(GORELEASE) \
|| echo ""
.PHONY: verify-mods
verify-mods: $(MULTIMOD)
$(MULTIMOD) verify
.PHONY: prerelease
prerelease: verify-mods
@[ "${MODSET}" ] || ( echo ">> env var MODSET is not set"; exit 1 )
$(MULTIMOD) prerelease -m ${MODSET}
COMMIT ?= "HEAD"
.PHONY: add-tags
add-tags: verify-mods
@[ "${MODSET}" ] || ( echo ">> env var MODSET is not set"; exit 1 )
$(MULTIMOD) tag -m ${MODSET} -c ${COMMIT}
.PHONY: lint-markdown
lint-markdown:
docker run -v "$(CURDIR):$(WORKDIR)" avtodev/markdown-lint:v1 -c $(WORKDIR)/.markdownlint.yaml $(WORKDIR)/**/*.md
.PHONY: verify-readmes
verify-readmes:
./verify_readmes.sh

111
e2e/vendor/go.opentelemetry.io/otel/README.md generated vendored Normal file
View File

@ -0,0 +1,111 @@
# OpenTelemetry-Go
[![CI](https://github.com/open-telemetry/opentelemetry-go/workflows/ci/badge.svg)](https://github.com/open-telemetry/opentelemetry-go/actions?query=workflow%3Aci+branch%3Amain)
[![codecov.io](https://codecov.io/gh/open-telemetry/opentelemetry-go/coverage.svg?branch=main)](https://app.codecov.io/gh/open-telemetry/opentelemetry-go?branch=main)
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel)](https://pkg.go.dev/go.opentelemetry.io/otel)
[![Go Report Card](https://goreportcard.com/badge/go.opentelemetry.io/otel)](https://goreportcard.com/report/go.opentelemetry.io/otel)
[![Slack](https://img.shields.io/badge/slack-@cncf/otel--go-brightgreen.svg?logo=slack)](https://cloud-native.slack.com/archives/C01NPAXACKT)
OpenTelemetry-Go is the [Go](https://golang.org/) implementation of [OpenTelemetry](https://opentelemetry.io/).
It provides a set of APIs to directly measure performance and behavior of your software and send this data to observability platforms.
## Project Status
| Signal | Status |
|---------|--------------------|
| Traces | Stable |
| Metrics | Stable |
| Logs | Beta[^1] |
Progress and status specific to this repository is tracked in our
[project boards](https://github.com/open-telemetry/opentelemetry-go/projects)
and
[milestones](https://github.com/open-telemetry/opentelemetry-go/milestones).
Project versioning information and stability guarantees can be found in the
[versioning documentation](VERSIONING.md).
[^1]: https://github.com/orgs/open-telemetry/projects/43
### Compatibility
OpenTelemetry-Go ensures compatibility with the current supported versions of
the [Go language](https://golang.org/doc/devel/release#policy):
> Each major Go release is supported until there are two newer major releases.
> For example, Go 1.5 was supported until the Go 1.7 release, and Go 1.6 was supported until the Go 1.8 release.
For versions of Go that are no longer supported upstream, opentelemetry-go will
stop ensuring compatibility with these versions in the following manner:
- A minor release of opentelemetry-go will be made to add support for the new
supported release of Go.
- The following minor release of opentelemetry-go will remove compatibility
testing for the oldest (now archived upstream) version of Go. This, and
future, releases of opentelemetry-go may include features only supported by
the currently supported versions of Go.
Currently, this project supports the following environments.
| OS | Go Version | Architecture |
|----------|------------|--------------|
| Ubuntu | 1.23 | amd64 |
| Ubuntu | 1.22 | amd64 |
| Ubuntu | 1.23 | 386 |
| Ubuntu | 1.22 | 386 |
| Linux | 1.23 | arm64 |
| Linux | 1.22 | arm64 |
| macOS 13 | 1.23 | amd64 |
| macOS 13 | 1.22 | amd64 |
| macOS | 1.23 | arm64 |
| macOS | 1.22 | arm64 |
| Windows | 1.23 | amd64 |
| Windows | 1.22 | amd64 |
| Windows | 1.23 | 386 |
| Windows | 1.22 | 386 |
While this project should work for other systems, no compatibility guarantees
are made for those systems currently.
## Getting Started
You can find a getting started guide on [opentelemetry.io](https://opentelemetry.io/docs/languages/go/getting-started/).
OpenTelemetry's goal is to provide a single set of APIs to capture distributed
traces and metrics from your application and send them to an observability
platform. This project allows you to do just that for applications written in
Go. There are two steps to this process: instrument your application, and
configure an exporter.
### Instrumentation
To start capturing distributed traces and metric events from your application
it first needs to be instrumented. The easiest way to do this is by using an
instrumentation library for your code. Be sure to check out [the officially
supported instrumentation
libraries](https://github.com/open-telemetry/opentelemetry-go-contrib/tree/main/instrumentation).
If you need to extend the telemetry an instrumentation library provides or want
to build your own instrumentation for your application directly you will need
to use the
[Go otel](https://pkg.go.dev/go.opentelemetry.io/otel)
package. The [examples](https://github.com/open-telemetry/opentelemetry-go-contrib/tree/main/examples)
are a good way to see some practical uses of this process.
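As a rough orientation (the tracer name, span name, and attribute below are illustrative), manual instrumentation with the `otel` API looks roughly like this:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

func main() {
	// Obtain a tracer from the globally registered provider (a no-op
	// implementation until an SDK is installed) and record a span.
	tracer := otel.Tracer("example.com/app")
	ctx, span := tracer.Start(context.Background(), "do-work")
	defer span.End()

	doWork(ctx)
	span.SetAttributes(attribute.String("app.result", "ok"))
}

func doWork(context.Context) {}
```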
### Export
Now that your application is instrumented to collect telemetry, it needs an
export pipeline to send that telemetry to an observability platform.
All officially supported exporters for the OpenTelemetry project are contained in the [exporters directory](./exporters).
| Exporter | Logs | Metrics | Traces |
|---------------------------------------|:----:|:-------:|:------:|
| [OTLP](./exporters/otlp/) | ✓ | ✓ | ✓ |
| [Prometheus](./exporters/prometheus/) | | ✓ | |
| [stdout](./exporters/stdout/) | ✓ | ✓ | ✓ |
| [Zipkin](./exporters/zipkin/) | | | ✓ |
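To give a feel for the export step above, here is a minimal sketch wiring the stdout trace exporter into an SDK tracer provider (options are omitted; consult the exporter's documentation for the exact configuration):

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Create an exporter that writes spans to stdout.
	exp, err := stdouttrace.New()
	if err != nil {
		log.Fatal(err)
	}

	// Install an SDK tracer provider that batches spans to the exporter.
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(context.Background()) }()
	otel.SetTracerProvider(tp)

	// From here on, otel.Tracer(...) returns tracers backed by this pipeline.
}
```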
## Contributing
See the [contributing documentation](CONTRIBUTING.md).

135
e2e/vendor/go.opentelemetry.io/otel/RELEASING.md generated vendored Normal file
View File

@ -0,0 +1,135 @@
# Release Process
## Semantic Convention Generation
New versions of the [OpenTelemetry Semantic Conventions] mean new versions of the `semconv` package need to be generated.
The `semconv-generate` make target is used for this.
1. Checkout a local copy of the [OpenTelemetry Semantic Conventions] to the desired release tag.
2. Pull the latest `otel/semconvgen` image: `docker pull otel/semconvgen:latest`
3. Run the `make semconv-generate ...` target from this repository.
For example,
```sh
export TAG="v1.21.0" # Change to the release version you are generating.
export OTEL_SEMCONV_REPO="/absolute/path/to/opentelemetry/semantic-conventions"
docker pull otel/semconvgen:latest
make semconv-generate # Uses the exported TAG and OTEL_SEMCONV_REPO.
```
This should create a new sub-package of [`semconv`](./semconv).
Ensure things look correct before submitting a pull request to include the addition.
## Breaking changes validation
You can run `make gorelease` that runs [gorelease](https://pkg.go.dev/golang.org/x/exp/cmd/gorelease) to ensure that there are no unwanted changes done in the public API.
You can check/report problems with `gorelease` [here](https://golang.org/issues/26420).
## Verify changes for contrib repository
If the changes in the main repository are going to affect the contrib repository, it is important to verify that the changes are compatible with the contrib repository.
Follow [the steps](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/RELEASING.md#verify-otel-changes) in the contrib repository to verify OTel changes.
## Pre-Release
First, decide which module sets will be released and update their versions
in `versions.yaml`. Commit this change to a new branch.
Update go.mod for submodules to depend on the new release which will happen in the next step.
1. Run the `prerelease` make target. It creates a branch
`prerelease_<module set>_<new tag>` that will contain all release changes.
```
make prerelease MODSET=<module set>
```
2. Verify the changes.
```
git diff ...prerelease_<module set>_<new tag>
```
This should have changed the version for all modules to be `<new tag>`.
If these changes look correct, merge them into your pre-release branch:
```
git merge prerelease_<module set>_<new tag>
```
3. Update the [Changelog](./CHANGELOG.md).
- Make sure all relevant changes for this release are included and are in language that non-contributors to the project can understand.
To verify this, you can look directly at the commits since the `<last tag>`.
```
git --no-pager log --pretty=oneline "<last tag>..HEAD"
```
- Move all the `Unreleased` changes into a new section following the title scheme (`[<new tag>] - <date of release>`).
- Make sure the new section is under the comment for released section, like `<!-- Released section -->`, so it is protected from being overwritten in the future.
- Update all the appropriate links at the bottom.
4. Push the changes to upstream and create a Pull Request on GitHub.
Be sure to include the curated changes from the [Changelog](./CHANGELOG.md) in the description.
## Tag
Once the Pull Request with all the version changes has been approved and merged it is time to tag the merged commit.
***IMPORTANT***: It is critical you use the same tag that you used in the Pre-Release step!
Failure to do so will leave things in a broken state. As long as you do not
change `versions.yaml` between pre-release and this step, things should be fine.
***IMPORTANT***: [There is currently no way to remove an incorrectly tagged version of a Go module](https://github.com/golang/go/issues/34189).
It is critical you make sure the version you push upstream is correct.
[Failure to do so will lead to minor emergencies and tough to work around](https://github.com/open-telemetry/opentelemetry-go/issues/331).
1. For each module set that will be released, run the `add-tags` make target
using the `<commit-hash>` of the commit on the main branch for the merged Pull Request.
```
make add-tags MODSET=<module set> COMMIT=<commit hash>
```
It should only be necessary to provide an explicit `COMMIT` value if the
current `HEAD` of your working directory is not the correct commit.
2. Push tags to the upstream remote (not your fork: `github.com/open-telemetry/opentelemetry-go.git`).
Make sure you push all sub-modules as well.
```
git push upstream <new tag>
git push upstream <submodules-path/new tag>
...
```
## Release
Finally create a Release for the new `<new tag>` on GitHub.
The release body should include all the release notes from the Changelog for this release.
## Post-Release
### Contrib Repository
Once verified be sure to [make a release for the `contrib` repository](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/RELEASING.md) that uses this release.
### Website Documentation
Update the [Go instrumentation documentation] in the OpenTelemetry website under [content/en/docs/languages/go].
Importantly, bump any package versions referenced to be the latest one you just released and ensure all code examples still compile and are accurate.
[OpenTelemetry Semantic Conventions]: https://github.com/open-telemetry/semantic-conventions
[Go instrumentation documentation]: https://opentelemetry.io/docs/languages/go/
[content/en/docs/languages/go]: https://github.com/open-telemetry/opentelemetry.io/tree/main/content/en/docs/languages/go
### Demo Repository
Bump the dependencies in the following Go services:
- [`accountingservice`](https://github.com/open-telemetry/opentelemetry-demo/tree/main/src/accountingservice)
- [`checkoutservice`](https://github.com/open-telemetry/opentelemetry-demo/tree/main/src/checkoutservice)
- [`productcatalogservice`](https://github.com/open-telemetry/opentelemetry-demo/tree/main/src/productcatalogservice)

224
e2e/vendor/go.opentelemetry.io/otel/VERSIONING.md generated vendored Normal file
View File

@ -0,0 +1,224 @@
# Versioning
This document describes the versioning policy for this repository. This policy
is designed so the following goals can be achieved.
**Users are provided a codebase of value that is stable and secure.**
## Policy
* Versioning of this project will be idiomatic of a Go project using [Go
modules](https://github.com/golang/go/wiki/Modules).
* [Semantic import
versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning)
will be used.
* Versions will comply with [semver
2.0](https://semver.org/spec/v2.0.0.html) with the following exceptions.
* New methods may be added to exported API interfaces. All exported
interfaces that fall within this exception will include the following
paragraph in their public documentation.
> Warning: methods may be added to this interface in minor releases.
* If a module is version `v2` or higher, the major version of the module
must be included as a `/vN` at the end of the module paths used in
`go.mod` files (e.g., `module go.opentelemetry.io/otel/v2`, `require
go.opentelemetry.io/otel/v2 v2.0.1`) and in the package import path
(e.g., `import "go.opentelemetry.io/otel/v2/trace"`). This includes the
paths used in `go get` commands (e.g., `go get
go.opentelemetry.io/otel/v2@v2.0.1`). Note there is both a `/v2` and a
`@v2.0.1` in that example. One way to think about it is that the module
name now includes the `/v2`, so include `/v2` whenever you are using the
module name.
* If a module is version `v0` or `v1`, do not include the major version in
either the module path or the import path.
* Modules will be used to encapsulate signals and components.
* Experimental modules still under active development will be versioned at
`v0` to imply the stability guarantee defined by
[semver](https://semver.org/spec/v2.0.0.html#spec-item-4).
> Major version zero (0.y.z) is for initial development. Anything MAY
> change at any time. The public API SHOULD NOT be considered stable.
* Mature modules for which we guarantee a stable public API will be versioned
with a major version greater than `v0`.
* The decision to make a module stable will be made on a case-by-case
basis by the maintainers of this project.
* Experimental modules will start their versioning at `v0.0.0` and will
increment their minor version when backwards incompatible changes are
released and increment their patch version when backwards compatible
changes are released.
* All stable modules that use the same major version number will use the
same entire version number.
* Stable modules may be released with an incremented minor or patch
version even though that module has not been changed, but rather so
that it will remain at the same version as other stable modules that
did undergo change.
* When an experimental module becomes stable a new stable module version
will be released and will include this now stable module. The new
stable module version will be an increment of the minor version number
and will be applied to all existing stable modules as well as the newly
stable module being released.
* Versioning of the associated [contrib
repository](https://github.com/open-telemetry/opentelemetry-go-contrib) of
this project will be idiomatic of a Go project using [Go
modules](https://github.com/golang/go/wiki/Modules).
* [Semantic import
versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning)
will be used.
* Versions will comply with [semver 2.0](https://semver.org/spec/v2.0.0.html).
* If a module is version `v2` or higher, the
major version of the module must be included as a `/vN` at the end of the
module paths used in `go.mod` files (e.g., `module
go.opentelemetry.io/contrib/instrumentation/host/v2`, `require
go.opentelemetry.io/contrib/instrumentation/host/v2 v2.0.1`) and in the
package import path (e.g., `import
"go.opentelemetry.io/contrib/instrumentation/host/v2"`). This includes
the paths used in `go get` commands (e.g., `go get
go.opentelemetry.io/contrib/instrumentation/host/v2@v2.0.1`). Note there
is both a `/v2` and a `@v2.0.1` in that example. One way to think about
it is that the module name now includes the `/v2`, so include `/v2`
whenever you are using the module name.
* If a module is version `v0` or `v1`, do not include the major version
in either the module path or the import path.
* In addition to public APIs, telemetry produced by stable instrumentation
will remain stable and backwards compatible. This is to avoid breaking
alerts and dashboards.
* Modules will be used to encapsulate instrumentation, detectors, exporters,
propagators, and any other independent sets of related components.
* Experimental modules still under active development will be versioned at
`v0` to imply the stability guarantee defined by
[semver](https://semver.org/spec/v2.0.0.html#spec-item-4).
> Major version zero (0.y.z) is for initial development. Anything MAY
> change at any time. The public API SHOULD NOT be considered stable.
* Mature modules for which we guarantee a stable public API and telemetry will
be versioned with a major version greater than `v0`.
* Experimental modules will start their versioning at `v0.0.0` and will
increment their minor version when backwards incompatible changes are
released and increment their patch version when backwards compatible
changes are released.
* Stable contrib modules cannot depend on experimental modules from this
project.
* All stable contrib modules of the same major version with this project
will use the same entire version as this project.
* Stable modules may be released with an incremented minor or patch
version even though that module's code has not been changed. Instead
the only change that will have been included is to have updated that
module's dependency on this project's stable APIs.
* When an experimental module in contrib becomes stable a new stable
module version will be released and will include this now stable
module. The new stable module version will be an increment of the minor
version number and will be applied to all existing stable contrib
modules, this project's modules, and the newly stable module being
released.
* Contrib modules will be kept up to date with this project's releases.
* Due to the dependency contrib modules will implicitly have on this
project's modules the release of stable contrib modules to match the
released version number will be staggered after this project's release.
There is no explicit time guarantee for how long after this projects
release the contrib release will be. Effort should be made to keep them
as close in time as possible.
* No additional stable release in this project can be made until the
contrib repository has a matching stable release.
* No release can be made in the contrib repository after this project's
stable release except for a stable release of the contrib repository.
* GitHub releases will be made for all releases.
* Go modules will be made available at Go package mirrors.
## Example Versioning Lifecycle
To better understand the implementation of the above policy the following
example is provided. This project is simplified to include only the following
modules and their versions:
* `otel`: `v0.14.0`
* `otel/trace`: `v0.14.0`
* `otel/metric`: `v0.14.0`
* `otel/baggage`: `v0.14.0`
* `otel/sdk/trace`: `v0.14.0`
* `otel/sdk/metric`: `v0.14.0`
These modules have been developed to a point where the `otel/trace`,
`otel/baggage`, and `otel/sdk/trace` modules have reached a point that they
should be considered for a stable release. The `otel/metric` and
`otel/sdk/metric` modules are still under active development, and the `otel` module
depends on both `otel/trace` and `otel/metric`.
The `otel` package is refactored to remove its dependencies on `otel/metric` so
it can be released as stable as well. With that done the following release
candidates are made:
* `otel`: `v1.0.0-RC1`
* `otel/trace`: `v1.0.0-RC1`
* `otel/baggage`: `v1.0.0-RC1`
* `otel/sdk/trace`: `v1.0.0-RC1`
The `otel/metric` and `otel/sdk/metric` modules remain at `v0.14.0`.
A few minor issues are discovered in the `otel/trace` package. These issues are
resolved with some minor, but backwards incompatible, changes and are released
as a second release candidate:
* `otel`: `v1.0.0-RC2`
* `otel/trace`: `v1.0.0-RC2`
* `otel/baggage`: `v1.0.0-RC2`
* `otel/sdk/trace`: `v1.0.0-RC2`
Notice that all module version numbers are incremented to adhere to our
versioning policy.
After these release candidates have been evaluated to satisfaction, they are
released as version `v1.0.0`.
* `otel`: `v1.0.0`
* `otel/trace`: `v1.0.0`
* `otel/baggage`: `v1.0.0`
* `otel/sdk/trace`: `v1.0.0`
Since both the `go` utility and the Go module system support [the semantic
versioning definition of
precedence](https://semver.org/spec/v2.0.0.html#spec-item-11), this release
will correctly be interpreted as the successor to the previous release
candidates.
Active development of this project continues. The `otel/metric` module now has
backwards incompatible changes to its API that need to be released and the
`otel/baggage` module has a minor bug fix that needs to be released. The
following release is made:
* `otel`: `v1.0.1`
* `otel/trace`: `v1.0.1`
* `otel/metric`: `v0.15.0`
* `otel/baggage`: `v1.0.1`
* `otel/sdk/trace`: `v1.0.1`
* `otel/sdk/metric`: `v0.15.0`
Notice that, again, all stable module versions are incremented in unison and
the `otel/sdk/metric` package, which depends on the `otel/metric` package, also
bumped its version. This bump of the `otel/sdk/metric` package makes sense
given their coupling, though it is not explicitly required by our versioning
policy.
As we progress, the `otel/metric` and `otel/sdk/metric` packages have reached a
point where they should be evaluated for stability. The `otel` module is
reintegrated with the `otel/metric` package and the following release is made:
* `otel`: `v1.1.0-RC1`
* `otel/trace`: `v1.1.0-RC1`
* `otel/metric`: `v1.1.0-RC1`
* `otel/baggage`: `v1.1.0-RC1`
* `otel/sdk/trace`: `v1.1.0-RC1`
* `otel/sdk/metric`: `v1.1.0-RC1`
All the modules are evaluated and determined to be viable for a stable release. They
are then released as version `v1.1.0` (the minor version is incremented to
indicate the addition of a new signal).
* `otel`: `v1.1.0`
* `otel/trace`: `v1.1.0`
* `otel/metric`: `v1.1.0`
* `otel/baggage`: `v1.1.0`
* `otel/sdk/trace`: `v1.1.0`
* `otel/sdk/metric`: `v1.1.0`

View File

@ -0,0 +1,3 @@
# Attribute
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/attribute)](https://pkg.go.dev/go.opentelemetry.io/otel/attribute)

5
e2e/vendor/go.opentelemetry.io/otel/attribute/doc.go generated vendored Normal file
View File

@ -0,0 +1,5 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
// Package attribute provides key and value attributes.
package attribute // import "go.opentelemetry.io/otel/attribute"

View File

@ -0,0 +1,135 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
import (
"bytes"
"sync"
"sync/atomic"
)
type (
// Encoder is a mechanism for serializing an attribute set into a specific
// string representation that supports caching, to avoid repeated
// serialization. An example could be an exporter encoding the attribute
// set into a wire representation.
Encoder interface {
// Encode returns the serialized encoding of the attribute set using
// its Iterator. This result may be cached by an attribute.Set.
Encode(iterator Iterator) string
// ID returns a value that is unique for each class of attribute
// encoder. Attribute encoders allocate these using `NewEncoderID`.
ID() EncoderID
}
// EncoderID is used to identify distinct Encoder
// implementations, for caching encoded results.
EncoderID struct {
value uint64
}
// defaultAttrEncoder uses a sync.Pool of buffers to reduce the number of
// allocations used in encoding attributes. This implementation encodes a
// comma-separated list of key=value, with '/'-escaping of '=', ',', and
// '\'.
defaultAttrEncoder struct {
// pool is a pool of attribute set builders. The buffers in this pool
// grow to a size that most attribute encodings will not allocate new
// memory.
pool sync.Pool // *bytes.Buffer
}
)
// escapeChar is used to ensure uniqueness of the attribute encoding where
// keys or values contain either '=' or ','. Since there is no parser needed
// for this encoding and its only requirement is to be unique, this choice is
// arbitrary. Users will see these in some exporters (e.g., stdout), so the
// backslash ('\') is used as a conventional choice.
const escapeChar = '\\'
var (
_ Encoder = &defaultAttrEncoder{}
// encoderIDCounter is for generating IDs for other attribute encoders.
encoderIDCounter uint64
defaultEncoderOnce sync.Once
defaultEncoderID = NewEncoderID()
defaultEncoderInstance *defaultAttrEncoder
)
// NewEncoderID returns a unique attribute encoder ID. It should be called
// once per each type of attribute encoder. Preferably in init() or in var
// definition.
func NewEncoderID() EncoderID {
return EncoderID{value: atomic.AddUint64(&encoderIDCounter, 1)}
}
// DefaultEncoder returns an attribute encoder that encodes attributes in such
// a way that each escaped attribute's key is followed by an equal sign and
// then by an escaped attribute's value. All key-value pairs are separated by
// a comma.
//
// Escaping is done by prepending a backslash before either a backslash, equal
// sign or a comma.
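//
// As a rough usage sketch (assuming the package's NewSet constructor is used
// to build the set), callers encode a set's iterator with this encoder:
//
//	set := attribute.NewSet(attribute.String("k", "v"))
//	enc := attribute.DefaultEncoder().Encode(set.Iter()) // "k=v"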
func DefaultEncoder() Encoder {
defaultEncoderOnce.Do(func() {
defaultEncoderInstance = &defaultAttrEncoder{
pool: sync.Pool{
New: func() interface{} {
return &bytes.Buffer{}
},
},
}
})
return defaultEncoderInstance
}
// Encode is part of an implementation of the Encoder interface.
func (d *defaultAttrEncoder) Encode(iter Iterator) string {
buf := d.pool.Get().(*bytes.Buffer)
defer d.pool.Put(buf)
buf.Reset()
for iter.Next() {
i, keyValue := iter.IndexedAttribute()
if i > 0 {
_, _ = buf.WriteRune(',')
}
copyAndEscape(buf, string(keyValue.Key))
_, _ = buf.WriteRune('=')
if keyValue.Value.Type() == STRING {
copyAndEscape(buf, keyValue.Value.AsString())
} else {
_, _ = buf.WriteString(keyValue.Value.Emit())
}
}
return buf.String()
}
// ID is part of an implementation of the Encoder interface.
func (*defaultAttrEncoder) ID() EncoderID {
return defaultEncoderID
}
// copyAndEscape escapes `=`, `,` and its own escape character (`\`),
// making the default encoding unique.
func copyAndEscape(buf *bytes.Buffer, val string) {
for _, ch := range val {
switch ch {
case '=', ',', escapeChar:
_, _ = buf.WriteRune(escapeChar)
}
_, _ = buf.WriteRune(ch)
}
}
// Valid returns true if this encoder ID was allocated by
// `NewEncoderID`. Invalid encoder IDs will not be cached.
func (id EncoderID) Valid() bool {
return id.value != 0
}

View File

@ -0,0 +1,49 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
// Filter supports removing certain attributes from attribute sets. When
// the filter returns true, the attribute will be kept in the filtered
// attribute set. When the filter returns false, the attribute is excluded
// from the filtered attribute set, and the attribute instead appears in
// the removed list of excluded attributes.
type Filter func(KeyValue) bool
// NewAllowKeysFilter returns a Filter that only allows attributes with one of
// the provided keys.
//
// If keys is empty a deny-all filter is returned.
func NewAllowKeysFilter(keys ...Key) Filter {
if len(keys) <= 0 {
return func(kv KeyValue) bool { return false }
}
allowed := make(map[Key]struct{})
for _, k := range keys {
allowed[k] = struct{}{}
}
return func(kv KeyValue) bool {
_, ok := allowed[kv.Key]
return ok
}
}
// NewDenyKeysFilter returns a Filter that only allows attributes
// that do not have one of the provided keys.
//
// If keys is empty an allow-all filter is returned.
func NewDenyKeysFilter(keys ...Key) Filter {
if len(keys) <= 0 {
return func(kv KeyValue) bool { return true }
}
forbid := make(map[Key]struct{})
for _, k := range keys {
forbid[k] = struct{}{}
}
return func(kv KeyValue) bool {
_, ok := forbid[kv.Key]
return !ok
}
}

View File

@ -0,0 +1,150 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
// Iterator allows iterating over the set of attributes in order, sorted by
// key.
type Iterator struct {
storage *Set
idx int
}
// MergeIterator supports iterating over two sets of attributes while
// eliminating duplicate values from the combined set. The first iterator
// value takes precedence.
type MergeIterator struct {
one oneIterator
two oneIterator
current KeyValue
}
type oneIterator struct {
iter Iterator
done bool
attr KeyValue
}
// Next moves the iterator to the next position. Returns false if there are no
// more attributes.
func (i *Iterator) Next() bool {
i.idx++
return i.idx < i.Len()
}
// Label returns current KeyValue. Must be called only after Next returns
// true.
//
// Deprecated: Use Attribute instead.
func (i *Iterator) Label() KeyValue {
return i.Attribute()
}
// Attribute returns the current KeyValue of the Iterator. It must be called
// only after Next returns true.
func (i *Iterator) Attribute() KeyValue {
kv, _ := i.storage.Get(i.idx)
return kv
}
// IndexedLabel returns current index and attribute. Must be called only
// after Next returns true.
//
// Deprecated: Use IndexedAttribute instead.
func (i *Iterator) IndexedLabel() (int, KeyValue) {
return i.idx, i.Attribute()
}
// IndexedAttribute returns current index and attribute. Must be called only
// after Next returns true.
func (i *Iterator) IndexedAttribute() (int, KeyValue) {
return i.idx, i.Attribute()
}
// Len returns the number of attributes in the iterated set.
func (i *Iterator) Len() int {
return i.storage.Len()
}
// ToSlice is a convenience function that creates a slice of attributes from
// the passed iterator. The iterator is set up to start from the beginning
// before creating the slice.
func (i *Iterator) ToSlice() []KeyValue {
l := i.Len()
if l == 0 {
return nil
}
i.idx = -1
slice := make([]KeyValue, 0, l)
for i.Next() {
slice = append(slice, i.Attribute())
}
return slice
}
// NewMergeIterator returns a MergeIterator for merging two attribute sets.
// Duplicates are resolved by taking the value from the first set.
func NewMergeIterator(s1, s2 *Set) MergeIterator {
mi := MergeIterator{
one: makeOne(s1.Iter()),
two: makeOne(s2.Iter()),
}
return mi
}
func makeOne(iter Iterator) oneIterator {
oi := oneIterator{
iter: iter,
}
oi.advance()
return oi
}
func (oi *oneIterator) advance() {
if oi.done = !oi.iter.Next(); !oi.done {
oi.attr = oi.iter.Attribute()
}
}
// Next returns true if there is another attribute available.
func (m *MergeIterator) Next() bool {
if m.one.done && m.two.done {
return false
}
if m.one.done {
m.current = m.two.attr
m.two.advance()
return true
}
if m.two.done {
m.current = m.one.attr
m.one.advance()
return true
}
if m.one.attr.Key == m.two.attr.Key {
m.current = m.one.attr // first iterator attribute value wins
m.one.advance()
m.two.advance()
return true
}
if m.one.attr.Key < m.two.attr.Key {
m.current = m.one.attr
m.one.advance()
return true
}
m.current = m.two.attr
m.two.advance()
return true
}
// Label returns the current value after Next() returns true.
//
// Deprecated: Use Attribute instead.
func (m *MergeIterator) Label() KeyValue {
return m.current
}
// Attribute returns the current value after Next() returns true.
func (m *MergeIterator) Attribute() KeyValue {
return m.current
}

123
e2e/vendor/go.opentelemetry.io/otel/attribute/key.go generated vendored Normal file
View File

@ -0,0 +1,123 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
// Key represents the key part in key-value pairs. It's a string. The
// allowed character set in the key depends on the use of the key.
type Key string
// Bool creates a KeyValue instance with a BOOL Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- Bool(name, value).
func (k Key) Bool(v bool) KeyValue {
return KeyValue{
Key: k,
Value: BoolValue(v),
}
}
// BoolSlice creates a KeyValue instance with a BOOLSLICE Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- BoolSlice(name, value).
func (k Key) BoolSlice(v []bool) KeyValue {
return KeyValue{
Key: k,
Value: BoolSliceValue(v),
}
}
// Int creates a KeyValue instance with an INT64 Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- Int(name, value).
func (k Key) Int(v int) KeyValue {
return KeyValue{
Key: k,
Value: IntValue(v),
}
}
// IntSlice creates a KeyValue instance with an INT64SLICE Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- IntSlice(name, value).
func (k Key) IntSlice(v []int) KeyValue {
return KeyValue{
Key: k,
Value: IntSliceValue(v),
}
}
// Int64 creates a KeyValue instance with an INT64 Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- Int64(name, value).
func (k Key) Int64(v int64) KeyValue {
return KeyValue{
Key: k,
Value: Int64Value(v),
}
}
// Int64Slice creates a KeyValue instance with an INT64SLICE Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- Int64Slice(name, value).
func (k Key) Int64Slice(v []int64) KeyValue {
return KeyValue{
Key: k,
Value: Int64SliceValue(v),
}
}
// Float64 creates a KeyValue instance with a FLOAT64 Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- Float64(name, value).
func (k Key) Float64(v float64) KeyValue {
return KeyValue{
Key: k,
Value: Float64Value(v),
}
}
// Float64Slice creates a KeyValue instance with a FLOAT64SLICE Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- Float64Slice(name, value).
func (k Key) Float64Slice(v []float64) KeyValue {
return KeyValue{
Key: k,
Value: Float64SliceValue(v),
}
}
// String creates a KeyValue instance with a STRING Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- String(name, value).
func (k Key) String(v string) KeyValue {
return KeyValue{
Key: k,
Value: StringValue(v),
}
}
// StringSlice creates a KeyValue instance with a STRINGSLICE Value.
//
// If creating both a key and value at the same time, use the provided
// convenience function instead -- StringSlice(name, value).
func (k Key) StringSlice(v []string) KeyValue {
return KeyValue{
Key: k,
Value: StringSliceValue(v),
}
}
// Defined returns true for non-empty keys.
func (k Key) Defined() bool {
return len(k) != 0
}

75
e2e/vendor/go.opentelemetry.io/otel/attribute/kv.go generated vendored Normal file
View File

@ -0,0 +1,75 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
import (
"fmt"
)
// KeyValue holds a key and value pair.
type KeyValue struct {
Key Key
Value Value
}
// Valid returns if kv is a valid OpenTelemetry attribute.
func (kv KeyValue) Valid() bool {
return kv.Key.Defined() && kv.Value.Type() != INVALID
}
// Bool creates a KeyValue with a BOOL Value type.
func Bool(k string, v bool) KeyValue {
return Key(k).Bool(v)
}
// BoolSlice creates a KeyValue with a BOOLSLICE Value type.
func BoolSlice(k string, v []bool) KeyValue {
return Key(k).BoolSlice(v)
}
// Int creates a KeyValue with an INT64 Value type.
func Int(k string, v int) KeyValue {
return Key(k).Int(v)
}
// IntSlice creates a KeyValue with an INT64SLICE Value type.
func IntSlice(k string, v []int) KeyValue {
return Key(k).IntSlice(v)
}
// Int64 creates a KeyValue with an INT64 Value type.
func Int64(k string, v int64) KeyValue {
return Key(k).Int64(v)
}
// Int64Slice creates a KeyValue with an INT64SLICE Value type.
func Int64Slice(k string, v []int64) KeyValue {
return Key(k).Int64Slice(v)
}
// Float64 creates a KeyValue with a FLOAT64 Value type.
func Float64(k string, v float64) KeyValue {
return Key(k).Float64(v)
}
// Float64Slice creates a KeyValue with a FLOAT64SLICE Value type.
func Float64Slice(k string, v []float64) KeyValue {
return Key(k).Float64Slice(v)
}
// String creates a KeyValue with a STRING Value type.
func String(k, v string) KeyValue {
return Key(k).String(v)
}
// StringSlice creates a KeyValue with a STRINGSLICE Value type.
func StringSlice(k string, v []string) KeyValue {
return Key(k).StringSlice(v)
}
// Stringer creates a new key-value pair with a passed name and a string
// value generated by the passed Stringer interface.
func Stringer(k string, v fmt.Stringer) KeyValue {
return Key(k).String(v.String())
}

411
e2e/vendor/go.opentelemetry.io/otel/attribute/set.go generated vendored Normal file
View File

@ -0,0 +1,411 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
import (
"cmp"
"encoding/json"
"reflect"
"slices"
"sort"
)
type (
// Set is the representation for a distinct attribute set. It manages an
// immutable set of attributes, with an internal cache for storing
// attribute encodings.
//
// This type will remain comparable for backwards compatibility. The
// equivalence of Sets across versions is not guaranteed to be stable.
// Prior versions may find two Sets to be equal or not when compared
// directly (i.e. ==), but subsequent versions may not. Users should use
// the Equals method to ensure stable equivalence checking.
//
// Users should also use the Distinct returned from Equivalent as a map key
// instead of a Set directly. In addition to that type providing guarantees
// on stable equivalence, it may also provide performance improvements.
Set struct {
equivalent Distinct
}
// Distinct is a unique identifier of a Set.
//
	// Distinct is designed to ensure equivalence stability: comparisons
	// will return the same value across versions. For this reason, Distinct
// should always be used as a map key instead of a Set.
Distinct struct {
iface interface{}
}
// Sortable implements sort.Interface, used for sorting KeyValue.
//
// Deprecated: This type is no longer used. It was added as a performance
// optimization for Go < 1.21 that is no longer needed (Go < 1.21 is no
// longer supported by the module).
Sortable []KeyValue
)
var (
// keyValueType is used in computeDistinctReflect.
keyValueType = reflect.TypeOf(KeyValue{})
// emptySet is returned for empty attribute sets.
emptySet = &Set{
equivalent: Distinct{
iface: [0]KeyValue{},
},
}
)
// EmptySet returns a reference to a Set with no elements.
//
// This is a convenience provided for optimized calling utility.
func EmptySet() *Set {
return emptySet
}
// reflectValue abbreviates reflect.ValueOf(d).
func (d Distinct) reflectValue() reflect.Value {
return reflect.ValueOf(d.iface)
}
// Valid returns true if this value refers to a valid Set.
func (d Distinct) Valid() bool {
return d.iface != nil
}
// Len returns the number of attributes in this set.
func (l *Set) Len() int {
if l == nil || !l.equivalent.Valid() {
return 0
}
return l.equivalent.reflectValue().Len()
}
// Get returns the KeyValue at ordered position idx in this set.
func (l *Set) Get(idx int) (KeyValue, bool) {
if l == nil || !l.equivalent.Valid() {
return KeyValue{}, false
}
value := l.equivalent.reflectValue()
if idx >= 0 && idx < value.Len() {
// Note: The Go compiler successfully avoids an allocation for
// the interface{} conversion here:
return value.Index(idx).Interface().(KeyValue), true
}
return KeyValue{}, false
}
// Value returns the value of a specified key in this set.
func (l *Set) Value(k Key) (Value, bool) {
if l == nil || !l.equivalent.Valid() {
return Value{}, false
}
rValue := l.equivalent.reflectValue()
vlen := rValue.Len()
idx := sort.Search(vlen, func(idx int) bool {
return rValue.Index(idx).Interface().(KeyValue).Key >= k
})
if idx >= vlen {
return Value{}, false
}
keyValue := rValue.Index(idx).Interface().(KeyValue)
if k == keyValue.Key {
return keyValue.Value, true
}
return Value{}, false
}
// HasValue tests whether a key is defined in this set.
func (l *Set) HasValue(k Key) bool {
if l == nil {
return false
}
_, ok := l.Value(k)
return ok
}
// Iter returns an iterator for visiting the attributes in this set.
func (l *Set) Iter() Iterator {
return Iterator{
storage: l,
idx: -1,
}
}
// ToSlice returns the set of attributes belonging to this set, sorted, where
// keys appear no more than once.
func (l *Set) ToSlice() []KeyValue {
iter := l.Iter()
return iter.ToSlice()
}
// Equivalent returns a value that may be used as a map key. The Distinct type
// guarantees that the result will equal the Distinct value of any
// attribute set with the same elements as this, where sets are made unique by
// choosing the last value in the input for any given key.
func (l *Set) Equivalent() Distinct {
if l == nil || !l.equivalent.Valid() {
return emptySet.equivalent
}
return l.equivalent
}
// Equals returns true if the argument set is equivalent to this set.
func (l *Set) Equals(o *Set) bool {
return l.Equivalent() == o.Equivalent()
}
// Encoded returns the encoded form of this set, according to encoder.
func (l *Set) Encoded(encoder Encoder) string {
if l == nil || encoder == nil {
return ""
}
return encoder.Encode(l.Iter())
}
func empty() Set {
return Set{
equivalent: emptySet.equivalent,
}
}
// NewSet returns a new Set. See the documentation for
// NewSetWithSortableFiltered for more details.
//
// Except for empty sets, this method adds an additional allocation compared
// with calls that include a Sortable.
func NewSet(kvs ...KeyValue) Set {
s, _ := NewSetWithFiltered(kvs, nil)
return s
}
// NewSetWithSortable returns a new Set. See the documentation for
// NewSetWithSortableFiltered for more details.
//
// This call includes a Sortable option as a memory optimization.
//
// Deprecated: Use [NewSet] instead.
func NewSetWithSortable(kvs []KeyValue, _ *Sortable) Set {
s, _ := NewSetWithFiltered(kvs, nil)
return s
}
// NewSetWithFiltered returns a new Set. See the documentation for
// NewSetWithSortableFiltered for more details.
//
// This call includes a Filter to include/exclude attribute keys from the
// return value. Excluded keys are returned as a slice of attribute values.
func NewSetWithFiltered(kvs []KeyValue, filter Filter) (Set, []KeyValue) {
// Check for empty set.
if len(kvs) == 0 {
return empty(), nil
}
// Stable sort so the following de-duplication can implement
// last-value-wins semantics.
slices.SortStableFunc(kvs, func(a, b KeyValue) int {
return cmp.Compare(a.Key, b.Key)
})
position := len(kvs) - 1
offset := position - 1
// The requirements stated above require that the stable
// result be placed in the end of the input slice, while
// overwritten values are swapped to the beginning.
//
// De-duplicate with last-value-wins semantics. Preserve
// duplicate values at the beginning of the input slice.
for ; offset >= 0; offset-- {
if kvs[offset].Key == kvs[position].Key {
continue
}
position--
kvs[offset], kvs[position] = kvs[position], kvs[offset]
}
kvs = kvs[position:]
if filter != nil {
if div := filteredToFront(kvs, filter); div != 0 {
return Set{equivalent: computeDistinct(kvs[div:])}, kvs[:div]
}
}
return Set{equivalent: computeDistinct(kvs)}, nil
}
// NewSetWithSortableFiltered returns a new Set.
//
// Duplicate keys are eliminated by taking the last value. This
// re-orders the input slice so that unique last-values are contiguous
// at the end of the slice.
//
// This ensures the following:
//
// - Last-value-wins semantics
// - Caller sees the reordering, but doesn't lose values
// - Repeated call preserve last-value wins.
//
// Note that methods are defined on Set, although this returns Set. Callers
// can avoid memory allocations by:
//
// - allocating a Sortable for use as a temporary in this method
// - allocating a Set for storing the return value of this constructor.
//
// The result maintains a cache of encoded attributes, by attribute.EncoderID.
// This value should not be copied after its first use.
//
// The second []KeyValue return value is a list of attributes that were
// excluded by the Filter (if non-nil).
//
// Deprecated: Use [NewSetWithFiltered] instead.
func NewSetWithSortableFiltered(kvs []KeyValue, _ *Sortable, filter Filter) (Set, []KeyValue) {
return NewSetWithFiltered(kvs, filter)
}
// filteredToFront filters slice in-place using keep function. All KeyValues that need to
// be removed are moved to the front. All KeyValues that need to be kept are
// moved (in-order) to the back. The index for the first KeyValue to be kept is
// returned.
func filteredToFront(slice []KeyValue, keep Filter) int {
n := len(slice)
j := n
for i := n - 1; i >= 0; i-- {
if keep(slice[i]) {
j--
slice[i], slice[j] = slice[j], slice[i]
}
}
return j
}
// Filter returns a filtered copy of this Set. See the documentation for
// NewSetWithSortableFiltered for more details.
func (l *Set) Filter(re Filter) (Set, []KeyValue) {
if re == nil {
return *l, nil
}
// Iterate in reverse to the first attribute that will be filtered out.
n := l.Len()
first := n - 1
for ; first >= 0; first-- {
kv, _ := l.Get(first)
if !re(kv) {
break
}
}
// No attributes will be dropped, return the immutable Set l and nil.
if first < 0 {
return *l, nil
}
// Copy now that we know we need to return a modified set.
//
// Do not do this in-place on the underlying storage of *Set l. Sets are
// immutable and filtering should not change this.
slice := l.ToSlice()
// Don't re-iterate the slice if only slice[0] is filtered.
if first == 0 {
// It is safe to assume len(slice) >= 1 given we found at least one
// attribute above that needs to be filtered out.
return Set{equivalent: computeDistinct(slice[1:])}, slice[:1]
}
// Move the filtered slice[first] to the front (preserving order).
kv := slice[first]
copy(slice[1:first+1], slice[:first])
slice[0] = kv
// Do not re-evaluate re(slice[first+1:]).
div := filteredToFront(slice[1:first+1], re) + 1
return Set{equivalent: computeDistinct(slice[div:])}, slice[:div]
}
// computeDistinct returns a Distinct using either the fixed- or
// reflect-oriented code path, depending on the size of the input. The input
// slice is assumed to already be sorted and de-duplicated.
func computeDistinct(kvs []KeyValue) Distinct {
iface := computeDistinctFixed(kvs)
if iface == nil {
iface = computeDistinctReflect(kvs)
}
return Distinct{
iface: iface,
}
}
// computeDistinctFixed computes a Distinct for small slices. It returns nil
// if the input is too large for this code path.
func computeDistinctFixed(kvs []KeyValue) interface{} {
switch len(kvs) {
case 1:
return [1]KeyValue(kvs)
case 2:
return [2]KeyValue(kvs)
case 3:
return [3]KeyValue(kvs)
case 4:
return [4]KeyValue(kvs)
case 5:
return [5]KeyValue(kvs)
case 6:
return [6]KeyValue(kvs)
case 7:
return [7]KeyValue(kvs)
case 8:
return [8]KeyValue(kvs)
case 9:
return [9]KeyValue(kvs)
case 10:
return [10]KeyValue(kvs)
default:
return nil
}
}
// computeDistinctReflect computes a Distinct using reflection, works for any
// size input.
func computeDistinctReflect(kvs []KeyValue) interface{} {
at := reflect.New(reflect.ArrayOf(len(kvs), keyValueType)).Elem()
for i, keyValue := range kvs {
*(at.Index(i).Addr().Interface().(*KeyValue)) = keyValue
}
return at.Interface()
}
// MarshalJSON returns the JSON encoding of the Set.
func (l *Set) MarshalJSON() ([]byte, error) {
return json.Marshal(l.equivalent.iface)
}
// MarshalLog is the marshaling function used by the logging system to represent this Set.
func (l Set) MarshalLog() interface{} {
kvs := make(map[string]string)
for _, kv := range l.ToSlice() {
kvs[string(kv.Key)] = kv.Value.Emit()
}
return kvs
}
// Len implements sort.Interface.
func (l *Sortable) Len() int {
return len(*l)
}
// Swap implements sort.Interface.
func (l *Sortable) Swap(i, j int) {
(*l)[i], (*l)[j] = (*l)[j], (*l)[i]
}
// Less implements sort.Interface.
func (l *Sortable) Less(i, j int) bool {
return (*l)[i].Key < (*l)[j].Key
}
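A brief usage sketch (editorial, not part of the vendored file) of the Set semantics described above: last-value-wins de-duplication in NewSet, Distinct as a stable map key, and Filter splitting kept from dropped attributes.

package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
)

func main() {
	// NewSet sorts by key and de-duplicates with last-value-wins semantics.
	set := attribute.NewSet(
		attribute.String("env", "dev"),
		attribute.Int("replicas", 3),
		attribute.String("env", "prod"), // wins over "dev"
	)

	// Distinct is the stable, comparable identity of a Set; use it as a map key.
	seen := map[attribute.Distinct]int{}
	seen[set.Equivalent()]++

	// Filter returns the kept Set and the dropped attributes.
	kept, dropped := set.Filter(func(kv attribute.KeyValue) bool {
		return kv.Key != "env"
	})
	fmt.Println(set.Len(), kept.Len(), len(dropped)) // 2 1 1
}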

View File

@ -0,0 +1,31 @@
// Code generated by "stringer -type=Type"; DO NOT EDIT.
package attribute
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[INVALID-0]
_ = x[BOOL-1]
_ = x[INT64-2]
_ = x[FLOAT64-3]
_ = x[STRING-4]
_ = x[BOOLSLICE-5]
_ = x[INT64SLICE-6]
_ = x[FLOAT64SLICE-7]
_ = x[STRINGSLICE-8]
}
const _Type_name = "INVALIDBOOLINT64FLOAT64STRINGBOOLSLICEINT64SLICEFLOAT64SLICESTRINGSLICE"
var _Type_index = [...]uint8{0, 7, 11, 16, 23, 29, 38, 48, 60, 71}
func (i Type) String() string {
if i < 0 || i >= Type(len(_Type_index)-1) {
return "Type(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _Type_name[_Type_index[i]:_Type_index[i+1]]
}

271
e2e/vendor/go.opentelemetry.io/otel/attribute/value.go generated vendored Normal file
View File

@ -0,0 +1,271 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package attribute // import "go.opentelemetry.io/otel/attribute"
import (
"encoding/json"
"fmt"
"reflect"
"strconv"
"go.opentelemetry.io/otel/internal"
"go.opentelemetry.io/otel/internal/attribute"
)
//go:generate stringer -type=Type
// Type describes the type of the data Value holds.
type Type int // nolint: revive // redefines builtin Type.
// Value represents the value part in key-value pairs.
type Value struct {
vtype Type
numeric uint64
stringly string
slice interface{}
}
const (
// INVALID is used for a Value with no value set.
INVALID Type = iota
// BOOL is a boolean Type Value.
BOOL
// INT64 is a 64-bit signed integral Type Value.
INT64
// FLOAT64 is a 64-bit floating point Type Value.
FLOAT64
// STRING is a string Type Value.
STRING
// BOOLSLICE is a slice of booleans Type Value.
BOOLSLICE
// INT64SLICE is a slice of 64-bit signed integral numbers Type Value.
INT64SLICE
// FLOAT64SLICE is a slice of 64-bit floating point numbers Type Value.
FLOAT64SLICE
// STRINGSLICE is a slice of strings Type Value.
STRINGSLICE
)
// BoolValue creates a BOOL Value.
func BoolValue(v bool) Value {
return Value{
vtype: BOOL,
numeric: internal.BoolToRaw(v),
}
}
// BoolSliceValue creates a BOOLSLICE Value.
func BoolSliceValue(v []bool) Value {
return Value{vtype: BOOLSLICE, slice: attribute.BoolSliceValue(v)}
}
// IntValue creates an INT64 Value.
func IntValue(v int) Value {
return Int64Value(int64(v))
}
// IntSliceValue creates an INT64SLICE Value.
func IntSliceValue(v []int) Value {
var int64Val int64
cp := reflect.New(reflect.ArrayOf(len(v), reflect.TypeOf(int64Val)))
for i, val := range v {
cp.Elem().Index(i).SetInt(int64(val))
}
return Value{
vtype: INT64SLICE,
slice: cp.Elem().Interface(),
}
}
// Int64Value creates an INT64 Value.
func Int64Value(v int64) Value {
return Value{
vtype: INT64,
numeric: internal.Int64ToRaw(v),
}
}
// Int64SliceValue creates an INT64SLICE Value.
func Int64SliceValue(v []int64) Value {
return Value{vtype: INT64SLICE, slice: attribute.Int64SliceValue(v)}
}
// Float64Value creates a FLOAT64 Value.
func Float64Value(v float64) Value {
return Value{
vtype: FLOAT64,
numeric: internal.Float64ToRaw(v),
}
}
// Float64SliceValue creates a FLOAT64SLICE Value.
func Float64SliceValue(v []float64) Value {
return Value{vtype: FLOAT64SLICE, slice: attribute.Float64SliceValue(v)}
}
// StringValue creates a STRING Value.
func StringValue(v string) Value {
return Value{
vtype: STRING,
stringly: v,
}
}
// StringSliceValue creates a STRINGSLICE Value.
func StringSliceValue(v []string) Value {
return Value{vtype: STRINGSLICE, slice: attribute.StringSliceValue(v)}
}
// Type returns a type of the Value.
func (v Value) Type() Type {
return v.vtype
}
// AsBool returns the bool value. Make sure that the Value's type is
// BOOL.
func (v Value) AsBool() bool {
return internal.RawToBool(v.numeric)
}
// AsBoolSlice returns the []bool value. Make sure that the Value's type is
// BOOLSLICE.
func (v Value) AsBoolSlice() []bool {
if v.vtype != BOOLSLICE {
return nil
}
return v.asBoolSlice()
}
func (v Value) asBoolSlice() []bool {
return attribute.AsBoolSlice(v.slice)
}
// AsInt64 returns the int64 value. Make sure that the Value's type is
// INT64.
func (v Value) AsInt64() int64 {
return internal.RawToInt64(v.numeric)
}
// AsInt64Slice returns the []int64 value. Make sure that the Value's type is
// INT64SLICE.
func (v Value) AsInt64Slice() []int64 {
if v.vtype != INT64SLICE {
return nil
}
return v.asInt64Slice()
}
func (v Value) asInt64Slice() []int64 {
return attribute.AsInt64Slice(v.slice)
}
// AsFloat64 returns the float64 value. Make sure that the Value's
// type is FLOAT64.
func (v Value) AsFloat64() float64 {
return internal.RawToFloat64(v.numeric)
}
// AsFloat64Slice returns the []float64 value. Make sure that the Value's type is
// FLOAT64SLICE.
func (v Value) AsFloat64Slice() []float64 {
if v.vtype != FLOAT64SLICE {
return nil
}
return v.asFloat64Slice()
}
func (v Value) asFloat64Slice() []float64 {
return attribute.AsFloat64Slice(v.slice)
}
// AsString returns the string value. Make sure that the Value's type
// is STRING.
func (v Value) AsString() string {
return v.stringly
}
// AsStringSlice returns the []string value. Make sure that the Value's type is
// STRINGSLICE.
func (v Value) AsStringSlice() []string {
if v.vtype != STRINGSLICE {
return nil
}
return v.asStringSlice()
}
func (v Value) asStringSlice() []string {
return attribute.AsStringSlice(v.slice)
}
type unknownValueType struct{}
// AsInterface returns Value's data as interface{}.
func (v Value) AsInterface() interface{} {
switch v.Type() {
case BOOL:
return v.AsBool()
case BOOLSLICE:
return v.asBoolSlice()
case INT64:
return v.AsInt64()
case INT64SLICE:
return v.asInt64Slice()
case FLOAT64:
return v.AsFloat64()
case FLOAT64SLICE:
return v.asFloat64Slice()
case STRING:
return v.stringly
case STRINGSLICE:
return v.asStringSlice()
}
return unknownValueType{}
}
// Emit returns a string representation of Value's data.
func (v Value) Emit() string {
switch v.Type() {
case BOOLSLICE:
return fmt.Sprint(v.asBoolSlice())
case BOOL:
return strconv.FormatBool(v.AsBool())
case INT64SLICE:
j, err := json.Marshal(v.asInt64Slice())
if err != nil {
return fmt.Sprintf("invalid: %v", v.asInt64Slice())
}
return string(j)
case INT64:
return strconv.FormatInt(v.AsInt64(), 10)
case FLOAT64SLICE:
j, err := json.Marshal(v.asFloat64Slice())
if err != nil {
return fmt.Sprintf("invalid: %v", v.asFloat64Slice())
}
return string(j)
case FLOAT64:
return fmt.Sprint(v.AsFloat64())
case STRINGSLICE:
j, err := json.Marshal(v.asStringSlice())
if err != nil {
return fmt.Sprintf("invalid: %v", v.asStringSlice())
}
return string(j)
case STRING:
return v.stringly
default:
return "unknown"
}
}
// MarshalJSON returns the JSON encoding of the Value.
func (v Value) MarshalJSON() ([]byte, error) {
var jsonVal struct {
Type string
Value interface{}
}
jsonVal.Type = v.Type().String()
jsonVal.Value = v.AsInterface()
return json.Marshal(jsonVal)
}
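A small sketch (editorial, not part of the vendored file) showing how a slice Value is created, inspected, and emitted with the constructors and accessors above.

package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
)

func main() {
	v := attribute.Int64SliceValue([]int64{1, 2, 3})

	fmt.Println(v.Type())         // INT64SLICE
	fmt.Println(v.Emit())         // [1,2,3] (JSON-encoded for slice types)
	fmt.Println(v.AsInt64Slice()) // [1 2 3]
}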

View File

@ -0,0 +1,3 @@
# Baggage
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/baggage)](https://pkg.go.dev/go.opentelemetry.io/otel/baggage)

1018
e2e/vendor/go.opentelemetry.io/otel/baggage/baggage.go generated vendored Normal file

File diff suppressed because it is too large

28
e2e/vendor/go.opentelemetry.io/otel/baggage/context.go generated vendored Normal file
View File

@ -0,0 +1,28 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package baggage // import "go.opentelemetry.io/otel/baggage"
import (
"context"
"go.opentelemetry.io/otel/internal/baggage"
)
// ContextWithBaggage returns a copy of parent with baggage.
func ContextWithBaggage(parent context.Context, b Baggage) context.Context {
// Delegate so any hooks for the OpenTracing bridge are handled.
return baggage.ContextWithList(parent, b.list)
}
// ContextWithoutBaggage returns a copy of parent with no baggage.
func ContextWithoutBaggage(parent context.Context) context.Context {
// Delegate so any hooks for the OpenTracing bridge are handled.
return baggage.ContextWithList(parent, nil)
}
// FromContext returns the baggage contained in ctx.
func FromContext(ctx context.Context) Baggage {
// Delegate so any hooks for the OpenTracing bridge are handled.
return Baggage{list: baggage.ListFromContext(ctx)}
}
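A minimal sketch (editorial, not part of the vendored file) of carrying baggage through a context with this package; it assumes the public baggage.NewMember and baggage.New constructors.

package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/baggage"
)

func main() {
	m, _ := baggage.NewMember("tenant", "acme")
	b, _ := baggage.New(m)

	// Store the baggage in the context and read it back further down the call chain.
	ctx := baggage.ContextWithBaggage(context.Background(), b)
	fmt.Println(baggage.FromContext(ctx).Member("tenant").Value()) // acme
}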

9
e2e/vendor/go.opentelemetry.io/otel/baggage/doc.go generated vendored Normal file
View File

@ -0,0 +1,9 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package baggage provides functionality for storing and retrieving
baggage items in Go context. For propagating the baggage, see the
go.opentelemetry.io/otel/propagation package.
*/
package baggage // import "go.opentelemetry.io/otel/baggage"

3
e2e/vendor/go.opentelemetry.io/otel/codes/README.md generated vendored Normal file
View File

@ -0,0 +1,3 @@
# Codes
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/codes)](https://pkg.go.dev/go.opentelemetry.io/otel/codes)

106
e2e/vendor/go.opentelemetry.io/otel/codes/codes.go generated vendored Normal file
View File

@ -0,0 +1,106 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package codes // import "go.opentelemetry.io/otel/codes"
import (
"encoding/json"
"errors"
"fmt"
"strconv"
)
const (
// Unset is the default status code.
Unset Code = 0
// Error indicates the operation contains an error.
//
// NOTE: The error code in OTLP is 2.
// The value of this enum is only relevant to the internals
// of the Go SDK.
Error Code = 1
	// Ok indicates the operation has been validated by an application developer
	// or operator to have completed successfully, or to contain no error.
//
// NOTE: The Ok code in OTLP is 1.
// The value of this enum is only relevant to the internals
// of the Go SDK.
Ok Code = 2
maxCode = 3
)
// Code is an 32-bit representation of a status state.
type Code uint32
var codeToStr = map[Code]string{
Unset: "Unset",
Error: "Error",
Ok: "Ok",
}
var strToCode = map[string]Code{
`"Unset"`: Unset,
`"Error"`: Error,
`"Ok"`: Ok,
}
// String returns the Code as a string.
func (c Code) String() string {
return codeToStr[c]
}
// UnmarshalJSON unmarshals b into the Code.
//
// This is based on the functionality in the gRPC codes package:
// https://github.com/grpc/grpc-go/blob/bb64fee312b46ebee26be43364a7a966033521b1/codes/codes.go#L218-L244
func (c *Code) UnmarshalJSON(b []byte) error {
// From json.Unmarshaler: By convention, to approximate the behavior of
// Unmarshal itself, Unmarshalers implement UnmarshalJSON([]byte("null")) as
// a no-op.
if string(b) == "null" {
return nil
}
if c == nil {
return errors.New("nil receiver passed to UnmarshalJSON")
}
var x interface{}
if err := json.Unmarshal(b, &x); err != nil {
return err
}
switch x.(type) {
case string:
if jc, ok := strToCode[string(b)]; ok {
*c = jc
return nil
}
return fmt.Errorf("invalid code: %q", string(b))
case float64:
if ci, err := strconv.ParseUint(string(b), 10, 32); err == nil {
if ci >= maxCode {
return fmt.Errorf("invalid code: %q", ci)
}
*c = Code(ci) // nolint: gosec // Bit size of 32 check above.
return nil
}
return fmt.Errorf("invalid code: %q", string(b))
default:
return fmt.Errorf("invalid code: %q", string(b))
}
}
// MarshalJSON returns c as the JSON encoding of c.
func (c *Code) MarshalJSON() ([]byte, error) {
if c == nil {
return []byte("null"), nil
}
str, ok := codeToStr[*c]
if !ok {
return nil, fmt.Errorf("invalid code: %d", *c)
}
return []byte(fmt.Sprintf("%q", str)), nil
}
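A short sketch (editorial, not part of the vendored file) of the JSON round-trip implemented above; note that MarshalJSON is defined on *Code, so a pointer is marshaled.

package main

import (
	"encoding/json"
	"fmt"

	"go.opentelemetry.io/otel/codes"
)

func main() {
	c := codes.Error

	out, _ := json.Marshal(&c) // MarshalJSON has a pointer receiver
	fmt.Println(string(out))   // "Error"

	var decoded codes.Code
	_ = json.Unmarshal([]byte(`"Ok"`), &decoded)
	fmt.Println(decoded) // Ok
}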

10
e2e/vendor/go.opentelemetry.io/otel/codes/doc.go generated vendored Normal file
View File

@ -0,0 +1,10 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package codes defines the canonical error codes used by OpenTelemetry.
It conforms to [the OpenTelemetry
specification](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.20.0/specification/trace/api.md#set-status).
*/
package codes // import "go.opentelemetry.io/otel/codes"

25
e2e/vendor/go.opentelemetry.io/otel/doc.go generated vendored Normal file
View File

@ -0,0 +1,25 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package otel provides global access to the OpenTelemetry API. The subpackages of
the otel package provide an implementation of the OpenTelemetry API.
The provided API is used to instrument code and measure data about that code's
performance and operation. The measured data, by default, is not processed or
transmitted anywhere. An implementation of the OpenTelemetry SDK, like the
default SDK implementation (go.opentelemetry.io/otel/sdk), and associated
exporters are used to process and transport this data.
To read the getting started guide, see https://opentelemetry.io/docs/languages/go/getting-started/.
To read more about tracing, see go.opentelemetry.io/otel/trace.
To read more about metrics, see go.opentelemetry.io/otel/metric.
To read more about logs, see go.opentelemetry.io/otel/log.
To read more about propagation, see go.opentelemetry.io/otel/propagation and
go.opentelemetry.io/otel/baggage.
*/
package otel // import "go.opentelemetry.io/otel"

27
e2e/vendor/go.opentelemetry.io/otel/error_handler.go generated vendored Normal file
View File

@ -0,0 +1,27 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otel // import "go.opentelemetry.io/otel"
// ErrorHandler handles irremediable events.
type ErrorHandler interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Handle handles any error deemed irremediable by an OpenTelemetry
// component.
Handle(error)
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
// ErrorHandlerFunc is a convenience adapter to allow the use of a function
// as an ErrorHandler.
type ErrorHandlerFunc func(error)
var _ ErrorHandler = ErrorHandlerFunc(nil)
// Handle handles the irremediable error by calling the ErrorHandlerFunc itself.
func (f ErrorHandlerFunc) Handle(err error) {
f(err)
}
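A usage sketch (editorial, not part of the vendored file) wiring ErrorHandlerFunc into the global handler via otel.SetErrorHandler and otel.Handle.

package main

import (
	"errors"
	"log"

	"go.opentelemetry.io/otel"
)

func main() {
	// Route otherwise-unhandled OpenTelemetry errors to the standard logger.
	otel.SetErrorHandler(otel.ErrorHandlerFunc(func(err error) {
		log.Printf("otel error: %v", err)
	}))

	otel.Handle(errors.New("example")) // delegates to the registered handler
}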

View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,3 @@
# OTLP Trace Exporter
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/exporters/otlp/otlptrace)](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace)

View File

@ -0,0 +1,43 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlptrace // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace"
import (
"context"
tracepb "go.opentelemetry.io/proto/otlp/trace/v1"
)
// Client manages connections to the collector, handles the
// transformation of data into wire format, and the transmission of that
// data to the collector.
type Client interface {
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Start should establish connection(s) to endpoint(s). It is
// called just once by the exporter, so the implementation
// does not need to worry about idempotence and locking.
Start(ctx context.Context) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// Stop should close the connections. The function is called
// only once by the exporter, so the implementation does not
// need to worry about idempotence, but it may be called
// concurrently with UploadTraces, so proper
// locking is required. The function serves as a
// synchronization point - after the function returns, the
// process of closing connections is assumed to be finished.
Stop(ctx context.Context) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
// UploadTraces should transform the passed traces to the wire
// format and send it to the collector. May be called
// concurrently.
UploadTraces(ctx context.Context, protoSpans []*tracepb.ResourceSpans) error
// DO NOT CHANGE: any modification will not be backwards compatible and
// must never be done outside of a new major release.
}
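A minimal sketch (editorial, not part of the vendored file) of satisfying this interface, e.g. for tests; the package and type names are illustrative.

package noopclient

import (
	"context"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
	tracepb "go.opentelemetry.io/proto/otlp/trace/v1"
)

// Client is an illustrative otlptrace.Client that drops all spans.
type Client struct{}

func (Client) Start(context.Context) error { return nil }
func (Client) Stop(context.Context) error  { return nil }
func (Client) UploadTraces(context.Context, []*tracepb.ResourceSpans) error {
	return nil
}

// Compile-time check that Client satisfies the interface.
var _ otlptrace.Client = Client{}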

View File

@ -0,0 +1,10 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package otlptrace contains abstractions for OTLP span exporters.
See the official OTLP span exporter implementations:
- [go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc],
- [go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp].
*/
package otlptrace // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace"

View File

@ -0,0 +1,105 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlptrace // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace"
import (
"context"
"errors"
"fmt"
"sync"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
tracesdk "go.opentelemetry.io/otel/sdk/trace"
)
var errAlreadyStarted = errors.New("already started")
// Exporter exports trace data in the OTLP wire format.
type Exporter struct {
client Client
mu sync.RWMutex
started bool
startOnce sync.Once
stopOnce sync.Once
}
// ExportSpans exports a batch of spans.
func (e *Exporter) ExportSpans(ctx context.Context, ss []tracesdk.ReadOnlySpan) error {
protoSpans := tracetransform.Spans(ss)
if len(protoSpans) == 0 {
return nil
}
err := e.client.UploadTraces(ctx, protoSpans)
if err != nil {
return fmt.Errorf("traces export: %w", err)
}
return nil
}
// Start establishes a connection to the receiving endpoint.
func (e *Exporter) Start(ctx context.Context) error {
err := errAlreadyStarted
e.startOnce.Do(func() {
e.mu.Lock()
e.started = true
e.mu.Unlock()
err = e.client.Start(ctx)
})
return err
}
// Shutdown flushes all exports and closes all connections to the receiving endpoint.
func (e *Exporter) Shutdown(ctx context.Context) error {
e.mu.RLock()
started := e.started
e.mu.RUnlock()
if !started {
return nil
}
var err error
e.stopOnce.Do(func() {
err = e.client.Stop(ctx)
e.mu.Lock()
e.started = false
e.mu.Unlock()
})
return err
}
var _ tracesdk.SpanExporter = (*Exporter)(nil)
// New constructs a new Exporter and starts it.
func New(ctx context.Context, client Client) (*Exporter, error) {
exp := NewUnstarted(client)
if err := exp.Start(ctx); err != nil {
return nil, err
}
return exp, nil
}
// NewUnstarted constructs a new Exporter and does not start it.
func NewUnstarted(client Client) *Exporter {
return &Exporter{
client: client,
}
}
// MarshalLog is the marshaling function used by the logging system to represent this Exporter.
func (e *Exporter) MarshalLog() interface{} {
return struct {
Type string
Client Client
}{
Type: "otlptrace",
Client: e.client,
}
}
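A usage sketch (editorial, not part of the vendored file) constructing the Exporter with the gRPC client and plugging it into the trace SDK; the endpoint is an assumed local collector address.

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// otlptracegrpc.NewClient provides the Client; New starts the Exporter.
	exp, err := otlptrace.New(ctx, otlptracegrpc.NewClient(
		otlptracegrpc.WithEndpoint("localhost:4317"), // assumed collector address
		otlptracegrpc.WithInsecure(),
	))
	if err != nil {
		log.Fatal(err)
	}
	defer func() { _ = exp.Shutdown(ctx) }()

	// Batch spans from the SDK through the exporter.
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(ctx) }()
}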

View File

@ -0,0 +1,147 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package tracetransform // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
import (
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/sdk/resource"
commonpb "go.opentelemetry.io/proto/otlp/common/v1"
)
// KeyValues transforms a slice of attribute KeyValues into OTLP key-values.
func KeyValues(attrs []attribute.KeyValue) []*commonpb.KeyValue {
if len(attrs) == 0 {
return nil
}
out := make([]*commonpb.KeyValue, 0, len(attrs))
for _, kv := range attrs {
out = append(out, KeyValue(kv))
}
return out
}
// Iterator transforms an attribute iterator into OTLP key-values.
func Iterator(iter attribute.Iterator) []*commonpb.KeyValue {
l := iter.Len()
if l == 0 {
return nil
}
out := make([]*commonpb.KeyValue, 0, l)
for iter.Next() {
out = append(out, KeyValue(iter.Attribute()))
}
return out
}
// ResourceAttributes transforms a Resource into OTLP key-values.
func ResourceAttributes(res *resource.Resource) []*commonpb.KeyValue {
return Iterator(res.Iter())
}
// KeyValue transforms an attribute KeyValue into an OTLP key-value.
func KeyValue(kv attribute.KeyValue) *commonpb.KeyValue {
return &commonpb.KeyValue{Key: string(kv.Key), Value: Value(kv.Value)}
}
// Value transforms an attribute Value into an OTLP AnyValue.
func Value(v attribute.Value) *commonpb.AnyValue {
av := new(commonpb.AnyValue)
switch v.Type() {
case attribute.BOOL:
av.Value = &commonpb.AnyValue_BoolValue{
BoolValue: v.AsBool(),
}
case attribute.BOOLSLICE:
av.Value = &commonpb.AnyValue_ArrayValue{
ArrayValue: &commonpb.ArrayValue{
Values: boolSliceValues(v.AsBoolSlice()),
},
}
case attribute.INT64:
av.Value = &commonpb.AnyValue_IntValue{
IntValue: v.AsInt64(),
}
case attribute.INT64SLICE:
av.Value = &commonpb.AnyValue_ArrayValue{
ArrayValue: &commonpb.ArrayValue{
Values: int64SliceValues(v.AsInt64Slice()),
},
}
case attribute.FLOAT64:
av.Value = &commonpb.AnyValue_DoubleValue{
DoubleValue: v.AsFloat64(),
}
case attribute.FLOAT64SLICE:
av.Value = &commonpb.AnyValue_ArrayValue{
ArrayValue: &commonpb.ArrayValue{
Values: float64SliceValues(v.AsFloat64Slice()),
},
}
case attribute.STRING:
av.Value = &commonpb.AnyValue_StringValue{
StringValue: v.AsString(),
}
case attribute.STRINGSLICE:
av.Value = &commonpb.AnyValue_ArrayValue{
ArrayValue: &commonpb.ArrayValue{
Values: stringSliceValues(v.AsStringSlice()),
},
}
default:
av.Value = &commonpb.AnyValue_StringValue{
StringValue: "INVALID",
}
}
return av
}
func boolSliceValues(vals []bool) []*commonpb.AnyValue {
converted := make([]*commonpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &commonpb.AnyValue{
Value: &commonpb.AnyValue_BoolValue{
BoolValue: v,
},
}
}
return converted
}
func int64SliceValues(vals []int64) []*commonpb.AnyValue {
converted := make([]*commonpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &commonpb.AnyValue{
Value: &commonpb.AnyValue_IntValue{
IntValue: v,
},
}
}
return converted
}
func float64SliceValues(vals []float64) []*commonpb.AnyValue {
converted := make([]*commonpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &commonpb.AnyValue{
Value: &commonpb.AnyValue_DoubleValue{
DoubleValue: v,
},
}
}
return converted
}
func stringSliceValues(vals []string) []*commonpb.AnyValue {
converted := make([]*commonpb.AnyValue, len(vals))
for i, v := range vals {
converted[i] = &commonpb.AnyValue{
Value: &commonpb.AnyValue_StringValue{
StringValue: v,
},
}
}
return converted
}
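Since this package is internal to the otlptrace module, external code cannot call Value directly; the sketch below (editorial, not part of the vendored file) builds the equivalent OTLP key-value by hand to show the mapping these helpers produce.

package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
	commonpb "go.opentelemetry.io/proto/otlp/common/v1"
)

func main() {
	kv := attribute.Int("retry.count", 3)

	// An INT64 attribute maps to an AnyValue_IntValue, as in Value above.
	pb := &commonpb.KeyValue{
		Key:   string(kv.Key),
		Value: &commonpb.AnyValue{Value: &commonpb.AnyValue_IntValue{IntValue: kv.Value.AsInt64()}},
	}
	fmt.Println(pb.Key, pb.Value.GetIntValue()) // retry.count 3
}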

View File

@ -0,0 +1,19 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package tracetransform // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
import (
"go.opentelemetry.io/otel/sdk/instrumentation"
commonpb "go.opentelemetry.io/proto/otlp/common/v1"
)
func InstrumentationScope(il instrumentation.Scope) *commonpb.InstrumentationScope {
if il == (instrumentation.Scope{}) {
return nil
}
return &commonpb.InstrumentationScope{
Name: il.Name,
Version: il.Version,
}
}

View File

@ -0,0 +1,17 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package tracetransform // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
import (
"go.opentelemetry.io/otel/sdk/resource"
resourcepb "go.opentelemetry.io/proto/otlp/resource/v1"
)
// Resource transforms a Resource into an OTLP Resource.
func Resource(r *resource.Resource) *resourcepb.Resource {
if r == nil {
return nil
}
return &resourcepb.Resource{Attributes: ResourceAttributes(r)}
}

View File

@ -0,0 +1,207 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package tracetransform // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform"
import (
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/sdk/instrumentation"
tracesdk "go.opentelemetry.io/otel/sdk/trace"
"go.opentelemetry.io/otel/trace"
tracepb "go.opentelemetry.io/proto/otlp/trace/v1"
)
// Spans transforms a slice of OpenTelemetry spans into a slice of OTLP
// ResourceSpans.
func Spans(sdl []tracesdk.ReadOnlySpan) []*tracepb.ResourceSpans {
if len(sdl) == 0 {
return nil
}
rsm := make(map[attribute.Distinct]*tracepb.ResourceSpans)
type key struct {
r attribute.Distinct
is instrumentation.Scope
}
ssm := make(map[key]*tracepb.ScopeSpans)
var resources int
for _, sd := range sdl {
if sd == nil {
continue
}
rKey := sd.Resource().Equivalent()
k := key{
r: rKey,
is: sd.InstrumentationScope(),
}
scopeSpan, iOk := ssm[k]
if !iOk {
			// Either the resource or instrumentation scope was unknown.
scopeSpan = &tracepb.ScopeSpans{
Scope: InstrumentationScope(sd.InstrumentationScope()),
Spans: []*tracepb.Span{},
SchemaUrl: sd.InstrumentationScope().SchemaURL,
}
}
scopeSpan.Spans = append(scopeSpan.Spans, span(sd))
ssm[k] = scopeSpan
rs, rOk := rsm[rKey]
if !rOk {
resources++
// The resource was unknown.
rs = &tracepb.ResourceSpans{
Resource: Resource(sd.Resource()),
ScopeSpans: []*tracepb.ScopeSpans{scopeSpan},
SchemaUrl: sd.Resource().SchemaURL(),
}
rsm[rKey] = rs
continue
}
		// The resource has been seen before. Check if the instrumentation
		// scope was also unknown; if so, it needs to be added to the
		// ResourceSpans. Otherwise, the instrumentation scope has already
		// been seen and the append above has already included it in the
		// ScopeSpans reference.
if !iOk {
rs.ScopeSpans = append(rs.ScopeSpans, scopeSpan)
}
}
// Transform the categorized map into a slice
rss := make([]*tracepb.ResourceSpans, 0, resources)
for _, rs := range rsm {
rss = append(rss, rs)
}
return rss
}
// span transforms a Span into an OTLP span.
func span(sd tracesdk.ReadOnlySpan) *tracepb.Span {
if sd == nil {
return nil
}
tid := sd.SpanContext().TraceID()
sid := sd.SpanContext().SpanID()
s := &tracepb.Span{
TraceId: tid[:],
SpanId: sid[:],
TraceState: sd.SpanContext().TraceState().String(),
Status: status(sd.Status().Code, sd.Status().Description),
StartTimeUnixNano: uint64(sd.StartTime().UnixNano()),
EndTimeUnixNano: uint64(sd.EndTime().UnixNano()),
Links: links(sd.Links()),
Kind: spanKind(sd.SpanKind()),
Name: sd.Name(),
Attributes: KeyValues(sd.Attributes()),
Events: spanEvents(sd.Events()),
DroppedAttributesCount: uint32(sd.DroppedAttributes()),
DroppedEventsCount: uint32(sd.DroppedEvents()),
DroppedLinksCount: uint32(sd.DroppedLinks()),
}
if psid := sd.Parent().SpanID(); psid.IsValid() {
s.ParentSpanId = psid[:]
}
s.Flags = buildSpanFlags(sd.Parent())
return s
}
// status transforms a span code and message into an OTLP span status.
func status(status codes.Code, message string) *tracepb.Status {
var c tracepb.Status_StatusCode
switch status {
case codes.Ok:
c = tracepb.Status_STATUS_CODE_OK
case codes.Error:
c = tracepb.Status_STATUS_CODE_ERROR
default:
c = tracepb.Status_STATUS_CODE_UNSET
}
return &tracepb.Status{
Code: c,
Message: message,
}
}
// links transforms span Links to OTLP span links.
func links(links []tracesdk.Link) []*tracepb.Span_Link {
if len(links) == 0 {
return nil
}
sl := make([]*tracepb.Span_Link, 0, len(links))
for _, otLink := range links {
// This redefinition is necessary to prevent otLink.*ID[:] copies
// being reused -- in short we need a new otLink per iteration.
otLink := otLink
tid := otLink.SpanContext.TraceID()
sid := otLink.SpanContext.SpanID()
flags := buildSpanFlags(otLink.SpanContext)
sl = append(sl, &tracepb.Span_Link{
TraceId: tid[:],
SpanId: sid[:],
Attributes: KeyValues(otLink.Attributes),
DroppedAttributesCount: uint32(otLink.DroppedAttributeCount),
Flags: flags,
})
}
return sl
}
func buildSpanFlags(sc trace.SpanContext) uint32 {
flags := tracepb.SpanFlags_SPAN_FLAGS_CONTEXT_HAS_IS_REMOTE_MASK
if sc.IsRemote() {
flags |= tracepb.SpanFlags_SPAN_FLAGS_CONTEXT_IS_REMOTE_MASK
}
return uint32(flags)
}
// spanEvents transforms span Events to OTLP span events.
func spanEvents(es []tracesdk.Event) []*tracepb.Span_Event {
if len(es) == 0 {
return nil
}
events := make([]*tracepb.Span_Event, len(es))
// Transform message events
for i := 0; i < len(es); i++ {
events[i] = &tracepb.Span_Event{
Name: es[i].Name,
TimeUnixNano: uint64(es[i].Time.UnixNano()),
Attributes: KeyValues(es[i].Attributes),
DroppedAttributesCount: uint32(es[i].DroppedAttributeCount),
}
}
return events
}
// spanKind transforms a SpanKind to an OTLP span kind.
func spanKind(kind trace.SpanKind) tracepb.Span_SpanKind {
switch kind {
case trace.SpanKindInternal:
return tracepb.Span_SPAN_KIND_INTERNAL
case trace.SpanKindClient:
return tracepb.Span_SPAN_KIND_CLIENT
case trace.SpanKindServer:
return tracepb.Span_SPAN_KIND_SERVER
case trace.SpanKindProducer:
return tracepb.Span_SPAN_KIND_PRODUCER
case trace.SpanKindConsumer:
return tracepb.Span_SPAN_KIND_CONSUMER
default:
return tracepb.Span_SPAN_KIND_UNSPECIFIED
}
}

View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,3 @@
# OTLP Trace gRPC Exporter
[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc)](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc)

View File

@ -0,0 +1,295 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlptracegrpc // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
import (
"context"
"errors"
"sync"
"time"
"google.golang.org/genproto/googleapis/rpc/errdetails"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/status"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/retry"
coltracepb "go.opentelemetry.io/proto/otlp/collector/trace/v1"
tracepb "go.opentelemetry.io/proto/otlp/trace/v1"
)
type client struct {
endpoint string
dialOpts []grpc.DialOption
metadata metadata.MD
exportTimeout time.Duration
requestFunc retry.RequestFunc
// stopCtx is used as a parent context for all exports. Therefore, when it
// is canceled with the stopFunc all exports are canceled.
stopCtx context.Context
// stopFunc cancels stopCtx, stopping any active exports.
stopFunc context.CancelFunc
// ourConn keeps track of where conn was created: true if created here on
// Start, or false if passed with an option. This is important on Shutdown
// as the conn should only be closed if created here on start. Otherwise,
// it is up to the processes that passed the conn to close it.
ourConn bool
conn *grpc.ClientConn
tscMu sync.RWMutex
tsc coltracepb.TraceServiceClient
}
// Compile-time check that *client implements otlptrace.Client.
var _ otlptrace.Client = (*client)(nil)
// NewClient creates a new gRPC trace client.
func NewClient(opts ...Option) otlptrace.Client {
return newClient(opts...)
}
func newClient(opts ...Option) *client {
cfg := otlpconfig.NewGRPCConfig(asGRPCOptions(opts)...)
ctx, cancel := context.WithCancel(context.Background())
c := &client{
endpoint: cfg.Traces.Endpoint,
exportTimeout: cfg.Traces.Timeout,
requestFunc: cfg.RetryConfig.RequestFunc(retryable),
dialOpts: cfg.DialOptions,
stopCtx: ctx,
stopFunc: cancel,
conn: cfg.GRPCConn,
}
if len(cfg.Traces.Headers) > 0 {
c.metadata = metadata.New(cfg.Traces.Headers)
}
return c
}
// Start establishes a gRPC connection to the collector.
func (c *client) Start(context.Context) error {
if c.conn == nil {
// If the caller did not provide a ClientConn when the client was
// created, create one using the configuration they did provide.
conn, err := grpc.NewClient(c.endpoint, c.dialOpts...)
if err != nil {
return err
}
// Keep track that we own the lifecycle of this conn and need to close
// it on Shutdown.
c.ourConn = true
c.conn = conn
}
// The otlptrace.Client interface states this method is called just once,
// so no need to check if already started.
c.tscMu.Lock()
c.tsc = coltracepb.NewTraceServiceClient(c.conn)
c.tscMu.Unlock()
return nil
}
var errAlreadyStopped = errors.New("the client is already stopped")
// Stop shuts down the client.
//
// Any active connections to a remote endpoint are closed if they were created
// by the client. Any gRPC connection passed during creation using
// WithGRPCConn will not be closed. It is the caller's responsibility to
// handle cleanup of that resource.
//
// This method synchronizes with the UploadTraces method of the client. It
// will wait for any active calls to that method to complete unimpeded, or it
// will cancel any active calls if ctx expires. If ctx expires, the context
// error will be forwarded as the returned error. All client held resources
// will still be released in this situation.
//
// If the client has already stopped, an error will be returned describing
// this.
func (c *client) Stop(ctx context.Context) error {
// Make sure to return context error if the context is done when calling this method.
err := ctx.Err()
// Acquire the c.tscMu lock within the ctx lifetime.
acquired := make(chan struct{})
go func() {
c.tscMu.Lock()
close(acquired)
}()
select {
case <-ctx.Done():
// The Stop timeout is reached. Kill any remaining exports to force
// the clear of the lock and save the timeout error to return and
// signal the shutdown timed out before cleanly stopping.
c.stopFunc()
err = ctx.Err()
// To ensure the client is not left in a dirty state c.tsc needs to be
// set to nil. To avoid the race condition when doing this, ensure
// that all the exports are killed (initiated by c.stopFunc).
<-acquired
case <-acquired:
}
// Hold the tscMu lock for the rest of the function to ensure no new
// exports are started.
defer c.tscMu.Unlock()
// The otlptrace.Client interface states this method is called only
// once, but there is no guarantee it is called after Start. Ensure the
// client is started before doing anything and let the caller know if they
// made a mistake.
if c.tsc == nil {
return errAlreadyStopped
}
// Clear c.tsc to signal the client is stopped.
c.tsc = nil
if c.ourConn {
closeErr := c.conn.Close()
// A context timeout error takes precedence over this error.
if err == nil && closeErr != nil {
err = closeErr
}
}
return err
}
var errShutdown = errors.New("the client is shutdown")
// UploadTraces sends a batch of spans.
//
// Retryable errors from the server will be handled according to any
// RetryConfig the client was created with.
func (c *client) UploadTraces(ctx context.Context, protoSpans []*tracepb.ResourceSpans) error {
// Hold a read lock to ensure a shut down initiated after this starts does
// not abandon the export. This read lock acquire has less priority than a
// write lock acquire (i.e. Stop), meaning if the client is shutting down
// this will come after the shut down.
c.tscMu.RLock()
defer c.tscMu.RUnlock()
if c.tsc == nil {
return errShutdown
}
ctx, cancel := c.exportContext(ctx)
defer cancel()
return c.requestFunc(ctx, func(iCtx context.Context) error {
resp, err := c.tsc.Export(iCtx, &coltracepb.ExportTraceServiceRequest{
ResourceSpans: protoSpans,
})
if resp != nil && resp.PartialSuccess != nil {
msg := resp.PartialSuccess.GetErrorMessage()
n := resp.PartialSuccess.GetRejectedSpans()
if n != 0 || msg != "" {
err := internal.TracePartialSuccessError(n, msg)
otel.Handle(err)
}
}
// nil is converted to OK.
if status.Code(err) == codes.OK {
// Success.
return nil
}
return err
})
}
// exportContext returns a copy of parent with an appropriate deadline and
// cancellation function.
//
// It is the caller's responsibility to cancel the returned context once its
// use is complete, via the parent or directly with the returned CancelFunc, to
// ensure all resources are correctly released.
func (c *client) exportContext(parent context.Context) (context.Context, context.CancelFunc) {
var (
ctx context.Context
cancel context.CancelFunc
)
if c.exportTimeout > 0 {
ctx, cancel = context.WithTimeout(parent, c.exportTimeout)
} else {
ctx, cancel = context.WithCancel(parent)
}
if c.metadata.Len() > 0 {
ctx = metadata.NewOutgoingContext(ctx, c.metadata)
}
// Unify the client stopCtx with the parent.
go func() {
select {
case <-ctx.Done():
case <-c.stopCtx.Done():
// Cancel the export as the shutdown has timed out.
cancel()
}
}()
return ctx, cancel
}
// retryable returns if err identifies a request that can be retried and a
// duration to wait for if an explicit throttle time is included in err.
func retryable(err error) (bool, time.Duration) {
s := status.Convert(err)
return retryableGRPCStatus(s)
}
func retryableGRPCStatus(s *status.Status) (bool, time.Duration) {
switch s.Code() {
case codes.Canceled,
codes.DeadlineExceeded,
codes.Aborted,
codes.OutOfRange,
codes.Unavailable,
codes.DataLoss:
// Additionally handle RetryInfo.
_, d := throttleDelay(s)
return true, d
case codes.ResourceExhausted:
// Retry only if the server signals that the recovery from resource exhaustion is possible.
return throttleDelay(s)
}
// Not a retry-able error.
return false, 0
}
// throttleDelay returns whether the status carries a RetryInfo detail
// and, if so, the duration to wait before retrying.
func throttleDelay(s *status.Status) (bool, time.Duration) {
for _, detail := range s.Details() {
if t, ok := detail.(*errdetails.RetryInfo); ok {
return true, t.RetryDelay.AsDuration()
}
}
return false, 0
}
// MarshalLog is the marshaling function used by the logging system to represent this Client.
func (c *client) MarshalLog() interface{} {
return struct {
Type string
Endpoint string
}{
Type: "otlphttpgrpc",
Endpoint: c.endpoint,
}
}
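A minimal sketch, not taken from the vendored sources: the retry classification above treats a gRPC status carrying a RetryInfo detail as retryable with the server-suggested delay. The snippet builds such a status and scans its details the same way the unexported throttleDelay helper does; the message text and two-second delay are made-up values.
package main
import (
	"fmt"
	"time"
	"google.golang.org/genproto/googleapis/rpc/errdetails"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"google.golang.org/protobuf/types/known/durationpb"
)
func main() {
	// A ResourceExhausted status with a RetryInfo detail: the only case in
	// which the exporter retries ResourceExhausted responses.
	st := status.New(codes.ResourceExhausted, "slow down")
	st, err := st.WithDetails(&errdetails.RetryInfo{
		RetryDelay: durationpb.New(2 * time.Second),
	})
	if err != nil {
		panic(err)
	}
	// Mirrors throttleDelay: look for a RetryInfo detail and report its delay.
	for _, detail := range st.Details() {
		if t, ok := detail.(*errdetails.RetryInfo); ok {
			fmt.Println("retryable, wait", t.RetryDelay.AsDuration())
		}
	}
}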

View File

@ -0,0 +1,66 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
/*
Package otlptracegrpc provides an OTLP span exporter using gRPC.
By default the telemetry is sent to https://localhost:4317.
Exporter should be created using [New].
The environment variables described below can be used for configuration.
OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_EXPORTER_OTLP_TRACES_ENDPOINT (default: "https://localhost:4317") -
target to which the exporter sends telemetry.
The target syntax is defined in https://github.com/grpc/grpc/blob/master/doc/naming.md.
The value must contain a host.
The value may additionally contain a port, a scheme, and a path.
The value accepts "http" and "https" scheme.
The value should not contain a query string or fragment.
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT takes precedence over OTEL_EXPORTER_OTLP_ENDPOINT.
The configuration can be overridden by [WithEndpoint], [WithEndpointURL], [WithInsecure], and [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_INSECURE, OTEL_EXPORTER_OTLP_TRACES_INSECURE (default: "false") -
setting "true" disables client transport security for the exporter's gRPC connection.
You can use this only when an endpoint is provided without the http or https scheme.
The scheme set via OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
takes precedence over this setting.
OTEL_EXPORTER_OTLP_TRACES_INSECURE takes precedence over OTEL_EXPORTER_OTLP_INSECURE.
The configuration can be overridden by [WithInsecure], [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_HEADERS, OTEL_EXPORTER_OTLP_TRACES_HEADERS (default: none) -
key-value pairs used as gRPC metadata associated with gRPC requests.
The value is expected to be represented in a format matching the [W3C Baggage HTTP Header Content Format],
except that additional semi-colon delimited metadata is not supported.
Example value: "key1=value1,key2=value2".
OTEL_EXPORTER_OTLP_TRACES_HEADERS takes precedence over OTEL_EXPORTER_OTLP_HEADERS.
The configuration can be overridden by [WithHeaders] option.
OTEL_EXPORTER_OTLP_TIMEOUT, OTEL_EXPORTER_OTLP_TRACES_TIMEOUT (default: "10000") -
maximum time in milliseconds the OTLP exporter waits for each batch export.
OTEL_EXPORTER_OTLP_TRACES_TIMEOUT takes precedence over OTEL_EXPORTER_OTLP_TIMEOUT.
The configuration can be overridden by [WithTimeout] option.
OTEL_EXPORTER_OTLP_COMPRESSION, OTEL_EXPORTER_OTLP_TRACES_COMPRESSION (default: none) -
the gRPC compressor the exporter uses.
Supported value: "gzip".
OTEL_EXPORTER_OTLP_TRACES_COMPRESSION takes precedence over OTEL_EXPORTER_OTLP_COMPRESSION.
The configuration can be overridden by [WithCompressor], [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_CERTIFICATE, OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE (default: none) -
the filepath to the trusted certificate to use when verifying a server's TLS credentials.
OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CERTIFICATE.
The configuration can be overridden by [WithTLSCredentials], [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE, OTEL_EXPORTER_OTLP_TRACES_CLIENT_CERTIFICATE (default: none) -
the filepath to the client certificate/chain trust for client's private key to use in mTLS communication in PEM format.
OTEL_EXPORTER_OTLP_TRACES_CLIENT_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE.
The configuration can be overridden by [WithTLSCredentials], [WithGRPCConn] options.
OTEL_EXPORTER_OTLP_CLIENT_KEY, OTEL_EXPORTER_OTLP_TRACES_CLIENT_KEY (default: none) -
the filepath to the client's private key to use in mTLS communication in PEM format.
OTEL_EXPORTER_OTLP_TRACES_CLIENT_KEY takes precedence over OTEL_EXPORTER_OTLP_CLIENT_KEY.
The configuration can be overridden by [WithTLSCredentials], [WithGRPCConn] option.
[W3C Baggage HTTP Header Content Format]: https://www.w3.org/TR/baggage/#header-content
*/
package otlptracegrpc // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
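A minimal sketch, not taken from the vendored sources: the environment variables documented above can be overridden programmatically with the options this package exports; the header value and timeout below are placeholders.
package main
import (
	"context"
	"time"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
)
func main() {
	ctx := context.Background()
	// Programmatic options override the OTEL_EXPORTER_OTLP_* variables
	// documented above; the header value is a placeholder.
	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithInsecure(),
		otlptracegrpc.WithTimeout(5*time.Second),
		otlptracegrpc.WithCompressor("gzip"),
		otlptracegrpc.WithHeaders(map[string]string{"api-key": "example"}),
	)
	if err != nil {
		panic(err)
	}
	defer func() { _ = exp.Shutdown(ctx) }()
}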

View File

@ -0,0 +1,20 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlptracegrpc // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
import (
"context"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
)
// New constructs a new Exporter and starts it.
func New(ctx context.Context, opts ...Option) (*otlptrace.Exporter, error) {
return otlptrace.New(ctx, NewClient(opts...))
}
// NewUnstarted constructs a new Exporter and does not start it.
func NewUnstarted(opts ...Option) *otlptrace.Exporter {
return otlptrace.NewUnstarted(NewClient(opts...))
}
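A minimal sketch, not taken from the vendored sources, of the usual wiring: New starts the exporter, the SDK's batch span processor feeds it, and registering the provider globally is optional.
package main
import (
	"context"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)
func main() {
	ctx := context.Background()
	// New starts the exporter immediately; NewUnstarted defers Start.
	exp, err := otlptracegrpc.New(ctx)
	if err != nil {
		panic(err)
	}
	// Batch spans before export and install the provider globally.
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	otel.SetTracerProvider(tp)
	defer func() { _ = tp.Shutdown(ctx) }()
}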

View File

@ -0,0 +1,191 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/envconfig/envconfig.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package envconfig // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig"
import (
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"net/url"
"strconv"
"strings"
"time"
"go.opentelemetry.io/otel/internal/global"
)
// ConfigFn is the generic function used to set a config.
type ConfigFn func(*EnvOptionsReader)
// EnvOptionsReader reads the required environment variables.
type EnvOptionsReader struct {
GetEnv func(string) string
ReadFile func(string) ([]byte, error)
Namespace string
}
// Apply runs every ConfigFn.
func (e *EnvOptionsReader) Apply(opts ...ConfigFn) {
for _, o := range opts {
o(e)
}
}
// GetEnvValue gets an OTLP environment variable value of the specified key
// using the GetEnv function.
// This function prepends the OTLP specified namespace to all key lookups.
func (e *EnvOptionsReader) GetEnvValue(key string) (string, bool) {
v := strings.TrimSpace(e.GetEnv(keyWithNamespace(e.Namespace, key)))
return v, v != ""
}
// WithString retrieves the specified config and passes it to ConfigFn as a string.
func WithString(n string, fn func(string)) func(e *EnvOptionsReader) {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
fn(v)
}
}
}
// WithBool returns a ConfigFn that reads the environment variable n and if it exists passes its parsed bool value to fn.
func WithBool(n string, fn func(bool)) ConfigFn {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
b := strings.ToLower(v) == "true"
fn(b)
}
}
}
// WithDuration retrieves the specified config and passes it to ConfigFn as a duration.
func WithDuration(n string, fn func(time.Duration)) func(e *EnvOptionsReader) {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
d, err := strconv.Atoi(v)
if err != nil {
global.Error(err, "parse duration", "input", v)
return
}
fn(time.Duration(d) * time.Millisecond)
}
}
}
// WithHeaders retrieves the specified config and passes it to ConfigFn as a map of HTTP headers.
func WithHeaders(n string, fn func(map[string]string)) func(e *EnvOptionsReader) {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
fn(stringToHeader(v))
}
}
}
// WithURL retrieves the specified config and passes it to ConfigFn as a net/url.URL.
func WithURL(n string, fn func(*url.URL)) func(e *EnvOptionsReader) {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
u, err := url.Parse(v)
if err != nil {
global.Error(err, "parse url", "input", v)
return
}
fn(u)
}
}
}
// WithCertPool returns a ConfigFn that reads the environment variable n as a filepath to a TLS certificate pool. If it exists, it is parsed as a crypto/x509.CertPool and it is passed to fn.
func WithCertPool(n string, fn func(*x509.CertPool)) ConfigFn {
return func(e *EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
b, err := e.ReadFile(v)
if err != nil {
global.Error(err, "read tls ca cert file", "file", v)
return
}
c, err := createCertPool(b)
if err != nil {
global.Error(err, "create tls cert pool")
return
}
fn(c)
}
}
}
// WithClientCert returns a ConfigFn that reads the environment variables nc and nk as filepaths to a client certificate and key pair. If they exist, they are parsed as a crypto/tls.Certificate and the certificate is passed to fn.
func WithClientCert(nc, nk string, fn func(tls.Certificate)) ConfigFn {
return func(e *EnvOptionsReader) {
vc, okc := e.GetEnvValue(nc)
vk, okk := e.GetEnvValue(nk)
if !okc || !okk {
return
}
cert, err := e.ReadFile(vc)
if err != nil {
global.Error(err, "read tls client cert", "file", vc)
return
}
key, err := e.ReadFile(vk)
if err != nil {
global.Error(err, "read tls client key", "file", vk)
return
}
crt, err := tls.X509KeyPair(cert, key)
if err != nil {
global.Error(err, "create tls client key pair")
return
}
fn(crt)
}
}
func keyWithNamespace(ns, key string) string {
if ns == "" {
return key
}
return fmt.Sprintf("%s_%s", ns, key)
}
func stringToHeader(value string) map[string]string {
headersPairs := strings.Split(value, ",")
headers := make(map[string]string)
for _, header := range headersPairs {
n, v, found := strings.Cut(header, "=")
if !found {
global.Error(errors.New("missing '="), "parse headers", "input", header)
continue
}
name, err := url.PathUnescape(n)
if err != nil {
global.Error(err, "escape header key", "key", n)
continue
}
trimmedName := strings.TrimSpace(name)
value, err := url.PathUnescape(v)
if err != nil {
global.Error(err, "escape header value", "value", v)
continue
}
trimmedValue := strings.TrimSpace(value)
headers[trimmedName] = trimmedValue
}
return headers
}
func createCertPool(certBytes []byte) (*x509.CertPool, error) {
cp := x509.NewCertPool()
if ok := cp.AppendCertsFromPEM(certBytes); !ok {
return nil, errors.New("failed to append certificate to the cert pool")
}
return cp, nil
}
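A minimal sketch, not taken from the vendored sources: stringToHeader accepts a comma-separated, URL-escaped list such as "key1=value1,key2=value2". The snippet mirrors, rather than calls, that unexported helper to show the resulting map.
package main
import (
	"fmt"
	"net/url"
	"strings"
)
func main() {
	// Same parsing steps as the vendored helper: split on ",", cut on "=",
	// then URL-unescape and trim each side.
	value := "key1=value1,key2=value%202"
	headers := map[string]string{}
	for _, pair := range strings.Split(value, ",") {
		n, v, found := strings.Cut(pair, "=")
		if !found {
			continue
		}
		name, _ := url.PathUnescape(n)
		val, _ := url.PathUnescape(v)
		headers[strings.TrimSpace(name)] = strings.TrimSpace(val)
	}
	fmt.Println(headers) // map[key1:value1 key2:value 2]
}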

View File

@ -0,0 +1,24 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package internal // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal"
//go:generate gotmpl --body=../../../../../internal/shared/otlp/partialsuccess.go.tmpl "--data={}" --out=partialsuccess.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/partialsuccess_test.go.tmpl "--data={}" --out=partialsuccess_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/retry/retry.go.tmpl "--data={}" --out=retry/retry.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/retry/retry_test.go.tmpl "--data={}" --out=retry/retry_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/envconfig/envconfig.go.tmpl "--data={}" --out=envconfig/envconfig.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/envconfig/envconfig_test.go.tmpl "--data={}" --out=envconfig/envconfig_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlpconfig/envconfig.go.tmpl "--data={\"envconfigImportPath\": \"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig\"}" --out=otlpconfig/envconfig.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlpconfig/options.go.tmpl "--data={\"retryImportPath\": \"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/retry\"}" --out=otlpconfig/options.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlpconfig/options_test.go.tmpl "--data={\"envconfigImportPath\": \"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig\"}" --out=otlpconfig/options_test.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlpconfig/optiontypes.go.tmpl "--data={}" --out=otlpconfig/optiontypes.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlpconfig/tls.go.tmpl "--data={}" --out=otlpconfig/tls.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlptracetest/client.go.tmpl "--data={}" --out=otlptracetest/client.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlptracetest/collector.go.tmpl "--data={}" --out=otlptracetest/collector.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlptracetest/data.go.tmpl "--data={}" --out=otlptracetest/data.go
//go:generate gotmpl --body=../../../../../internal/shared/otlp/otlptrace/otlptracetest/otlptest.go.tmpl "--data={}" --out=otlptracetest/otlptest.go

View File

@ -0,0 +1,142 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlptrace/otlpconfig/envconfig.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlpconfig // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig"
import (
"crypto/tls"
"crypto/x509"
"net/url"
"os"
"path"
"strings"
"time"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig"
)
// DefaultEnvOptionsReader is the default environment variables reader.
var DefaultEnvOptionsReader = envconfig.EnvOptionsReader{
GetEnv: os.Getenv,
ReadFile: os.ReadFile,
Namespace: "OTEL_EXPORTER_OTLP",
}
// ApplyGRPCEnvConfigs applies the env configurations for gRPC.
func ApplyGRPCEnvConfigs(cfg Config) Config {
opts := getOptionsFromEnv()
for _, opt := range opts {
cfg = opt.ApplyGRPCOption(cfg)
}
return cfg
}
// ApplyHTTPEnvConfigs applies the env configurations for HTTP.
func ApplyHTTPEnvConfigs(cfg Config) Config {
opts := getOptionsFromEnv()
for _, opt := range opts {
cfg = opt.ApplyHTTPOption(cfg)
}
return cfg
}
func getOptionsFromEnv() []GenericOption {
opts := []GenericOption{}
tlsConf := &tls.Config{}
DefaultEnvOptionsReader.Apply(
envconfig.WithURL("ENDPOINT", func(u *url.URL) {
opts = append(opts, withEndpointScheme(u))
opts = append(opts, newSplitOption(func(cfg Config) Config {
cfg.Traces.Endpoint = u.Host
// For OTLP/HTTP endpoint URLs without a per-signal
// configuration, the passed endpoint is used as a base URL
// and the signals are sent to these paths relative to that.
cfg.Traces.URLPath = path.Join(u.Path, DefaultTracesPath)
return cfg
}, withEndpointForGRPC(u)))
}),
envconfig.WithURL("TRACES_ENDPOINT", func(u *url.URL) {
opts = append(opts, withEndpointScheme(u))
opts = append(opts, newSplitOption(func(cfg Config) Config {
cfg.Traces.Endpoint = u.Host
// For endpoint URLs for OTLP/HTTP per-signal variables, the
// URL MUST be used as-is without any modification. The only
// exception is that if a URL contains no path part, the root
// path / MUST be used.
path := u.Path
if path == "" {
path = "/"
}
cfg.Traces.URLPath = path
return cfg
}, withEndpointForGRPC(u)))
}),
envconfig.WithCertPool("CERTIFICATE", func(p *x509.CertPool) { tlsConf.RootCAs = p }),
envconfig.WithCertPool("TRACES_CERTIFICATE", func(p *x509.CertPool) { tlsConf.RootCAs = p }),
envconfig.WithClientCert("CLIENT_CERTIFICATE", "CLIENT_KEY", func(c tls.Certificate) { tlsConf.Certificates = []tls.Certificate{c} }),
envconfig.WithClientCert("TRACES_CLIENT_CERTIFICATE", "TRACES_CLIENT_KEY", func(c tls.Certificate) { tlsConf.Certificates = []tls.Certificate{c} }),
withTLSConfig(tlsConf, func(c *tls.Config) { opts = append(opts, WithTLSClientConfig(c)) }),
envconfig.WithBool("INSECURE", func(b bool) { opts = append(opts, withInsecure(b)) }),
envconfig.WithBool("TRACES_INSECURE", func(b bool) { opts = append(opts, withInsecure(b)) }),
envconfig.WithHeaders("HEADERS", func(h map[string]string) { opts = append(opts, WithHeaders(h)) }),
envconfig.WithHeaders("TRACES_HEADERS", func(h map[string]string) { opts = append(opts, WithHeaders(h)) }),
WithEnvCompression("COMPRESSION", func(c Compression) { opts = append(opts, WithCompression(c)) }),
WithEnvCompression("TRACES_COMPRESSION", func(c Compression) { opts = append(opts, WithCompression(c)) }),
envconfig.WithDuration("TIMEOUT", func(d time.Duration) { opts = append(opts, WithTimeout(d)) }),
envconfig.WithDuration("TRACES_TIMEOUT", func(d time.Duration) { opts = append(opts, WithTimeout(d)) }),
)
return opts
}
func withEndpointScheme(u *url.URL) GenericOption {
switch strings.ToLower(u.Scheme) {
case "http", "unix":
return WithInsecure()
default:
return WithSecure()
}
}
func withEndpointForGRPC(u *url.URL) func(cfg Config) Config {
return func(cfg Config) Config {
// For OTLP/gRPC endpoints, this is the target to which the
// exporter is going to send telemetry.
cfg.Traces.Endpoint = path.Join(u.Host, u.Path)
return cfg
}
}
// WithEnvCompression retrieves the specified config and passes it to ConfigFn as a Compression.
func WithEnvCompression(n string, fn func(Compression)) func(e *envconfig.EnvOptionsReader) {
return func(e *envconfig.EnvOptionsReader) {
if v, ok := e.GetEnvValue(n); ok {
cp := NoCompression
if v == "gzip" {
cp = GzipCompression
}
fn(cp)
}
}
}
// revive:disable-next-line:flag-parameter
func withInsecure(b bool) GenericOption {
if b {
return WithInsecure()
}
return WithSecure()
}
func withTLSConfig(c *tls.Config, fn func(*tls.Config)) func(e *envconfig.EnvOptionsReader) {
return func(e *envconfig.EnvOptionsReader) {
if c.RootCAs != nil || len(c.Certificates) > 0 {
fn(c)
}
}
}
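A minimal sketch, not taken from the vendored sources, of how an endpoint URL from the environment is interpreted by the helpers above: the scheme selects secure or insecure transport, gRPC targets join host and path, and OTLP/HTTP appends the default traces path. The URL is a placeholder.
package main
import (
	"fmt"
	"net/url"
	"path"
)
func main() {
	// Example value for OTEL_EXPORTER_OTLP_ENDPOINT; placeholder host.
	u, err := url.Parse("http://collector.example:4317/base")
	if err != nil {
		panic(err)
	}
	insecure := u.Scheme == "http" || u.Scheme == "unix" // withEndpointScheme
	grpcTarget := path.Join(u.Host, u.Path)              // withEndpointForGRPC
	httpPath := path.Join(u.Path, "/v1/traces")          // base URL + DefaultTracesPath
	fmt.Println(insecure, grpcTarget, httpPath)
	// true collector.example:4317/base /base/v1/traces
}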

View File

@ -0,0 +1,353 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlptrace/otlpconfig/options.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlpconfig // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig"
import (
"crypto/tls"
"fmt"
"net/http"
"net/url"
"path"
"strings"
"time"
"google.golang.org/grpc"
"google.golang.org/grpc/backoff"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"google.golang.org/grpc/encoding/gzip"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/retry"
"go.opentelemetry.io/otel/internal/global"
)
const (
// DefaultTracesPath is a default URL path for the endpoint that
// receives spans.
DefaultTracesPath string = "/v1/traces"
// DefaultTimeout is a default max waiting time for the backend to process
// each span batch.
DefaultTimeout time.Duration = 10 * time.Second
)
type (
// HTTPTransportProxyFunc is a function that resolves which URL to use as proxy for a given request.
// This type is compatible with `http.Transport.Proxy` and can be used to set a custom proxy function to the OTLP HTTP client.
HTTPTransportProxyFunc func(*http.Request) (*url.URL, error)
SignalConfig struct {
Endpoint string
Insecure bool
TLSCfg *tls.Config
Headers map[string]string
Compression Compression
Timeout time.Duration
URLPath string
// gRPC configurations
GRPCCredentials credentials.TransportCredentials
Proxy HTTPTransportProxyFunc
}
Config struct {
// Signal specific configurations
Traces SignalConfig
RetryConfig retry.Config
// gRPC configurations
ReconnectionPeriod time.Duration
ServiceConfig string
DialOptions []grpc.DialOption
GRPCConn *grpc.ClientConn
}
)
// NewHTTPConfig returns a new Config with all settings applied from opts and
// any unset setting using the default HTTP config values.
func NewHTTPConfig(opts ...HTTPOption) Config {
cfg := Config{
Traces: SignalConfig{
Endpoint: fmt.Sprintf("%s:%d", DefaultCollectorHost, DefaultCollectorHTTPPort),
URLPath: DefaultTracesPath,
Compression: NoCompression,
Timeout: DefaultTimeout,
},
RetryConfig: retry.DefaultConfig,
}
cfg = ApplyHTTPEnvConfigs(cfg)
for _, opt := range opts {
cfg = opt.ApplyHTTPOption(cfg)
}
cfg.Traces.URLPath = cleanPath(cfg.Traces.URLPath, DefaultTracesPath)
return cfg
}
// cleanPath returns a path with all spaces trimmed and all redundancies
// removed. If urlPath is empty or cleaning it results in an empty string,
// defaultPath is returned instead.
func cleanPath(urlPath string, defaultPath string) string {
tmp := path.Clean(strings.TrimSpace(urlPath))
if tmp == "." {
return defaultPath
}
if !path.IsAbs(tmp) {
tmp = fmt.Sprintf("/%s", tmp)
}
return tmp
}
// NewGRPCConfig returns a new Config with all settings applied from opts and
// any unset setting using the default gRPC config values.
func NewGRPCConfig(opts ...GRPCOption) Config {
userAgent := "OTel OTLP Exporter Go/" + otlptrace.Version()
cfg := Config{
Traces: SignalConfig{
Endpoint: fmt.Sprintf("%s:%d", DefaultCollectorHost, DefaultCollectorGRPCPort),
URLPath: DefaultTracesPath,
Compression: NoCompression,
Timeout: DefaultTimeout,
},
RetryConfig: retry.DefaultConfig,
DialOptions: []grpc.DialOption{grpc.WithUserAgent(userAgent)},
}
cfg = ApplyGRPCEnvConfigs(cfg)
for _, opt := range opts {
cfg = opt.ApplyGRPCOption(cfg)
}
if cfg.ServiceConfig != "" {
cfg.DialOptions = append(cfg.DialOptions, grpc.WithDefaultServiceConfig(cfg.ServiceConfig))
}
// Prioritize GRPCCredentials over Insecure (passing both is an error).
if cfg.Traces.GRPCCredentials != nil {
cfg.DialOptions = append(cfg.DialOptions, grpc.WithTransportCredentials(cfg.Traces.GRPCCredentials))
} else if cfg.Traces.Insecure {
cfg.DialOptions = append(cfg.DialOptions, grpc.WithTransportCredentials(insecure.NewCredentials()))
} else {
// Default to using the host's root CA.
creds := credentials.NewTLS(nil)
cfg.Traces.GRPCCredentials = creds
cfg.DialOptions = append(cfg.DialOptions, grpc.WithTransportCredentials(creds))
}
if cfg.Traces.Compression == GzipCompression {
cfg.DialOptions = append(cfg.DialOptions, grpc.WithDefaultCallOptions(grpc.UseCompressor(gzip.Name)))
}
if cfg.ReconnectionPeriod != 0 {
p := grpc.ConnectParams{
Backoff: backoff.DefaultConfig,
MinConnectTimeout: cfg.ReconnectionPeriod,
}
cfg.DialOptions = append(cfg.DialOptions, grpc.WithConnectParams(p))
}
return cfg
}
type (
// GenericOption applies an option to the HTTP or gRPC driver.
GenericOption interface {
ApplyHTTPOption(Config) Config
ApplyGRPCOption(Config) Config
// A private method to prevent users implementing the
// interface and so future additions to it will not
// violate compatibility.
private()
}
// HTTPOption applies an option to the HTTP driver.
HTTPOption interface {
ApplyHTTPOption(Config) Config
// A private method to prevent users implementing the
// interface and so future additions to it will not
// violate compatibility.
private()
}
// GRPCOption applies an option to the gRPC driver.
GRPCOption interface {
ApplyGRPCOption(Config) Config
// A private method to prevent users implementing the
// interface and so future additions to it will not
// violate compatibility.
private()
}
)
// genericOption is an option that applies the same logic
// for both gRPC and HTTP.
type genericOption struct {
fn func(Config) Config
}
func (g *genericOption) ApplyGRPCOption(cfg Config) Config {
return g.fn(cfg)
}
func (g *genericOption) ApplyHTTPOption(cfg Config) Config {
return g.fn(cfg)
}
func (genericOption) private() {}
func newGenericOption(fn func(cfg Config) Config) GenericOption {
return &genericOption{fn: fn}
}
// splitOption is an option that applies different logic
// for gRPC and HTTP.
type splitOption struct {
httpFn func(Config) Config
grpcFn func(Config) Config
}
func (g *splitOption) ApplyGRPCOption(cfg Config) Config {
return g.grpcFn(cfg)
}
func (g *splitOption) ApplyHTTPOption(cfg Config) Config {
return g.httpFn(cfg)
}
func (splitOption) private() {}
func newSplitOption(httpFn func(cfg Config) Config, grpcFn func(cfg Config) Config) GenericOption {
return &splitOption{httpFn: httpFn, grpcFn: grpcFn}
}
// httpOption is an option that is only applied to the HTTP driver.
type httpOption struct {
fn func(Config) Config
}
func (h *httpOption) ApplyHTTPOption(cfg Config) Config {
return h.fn(cfg)
}
func (httpOption) private() {}
func NewHTTPOption(fn func(cfg Config) Config) HTTPOption {
return &httpOption{fn: fn}
}
// grpcOption is an option that is only applied to the gRPC driver.
type grpcOption struct {
fn func(Config) Config
}
func (h *grpcOption) ApplyGRPCOption(cfg Config) Config {
return h.fn(cfg)
}
func (grpcOption) private() {}
func NewGRPCOption(fn func(cfg Config) Config) GRPCOption {
return &grpcOption{fn: fn}
}
// Generic Options
// WithEndpoint configures the trace host and port only; endpoint should
// resemble "example.com" or "localhost:4317". To configure the scheme and path,
// use WithEndpointURL.
func WithEndpoint(endpoint string) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Endpoint = endpoint
return cfg
})
}
// WithEndpointURL configures the trace scheme, host, port, and path; the
// provided value should resemble "https://example.com:4318/v1/traces".
func WithEndpointURL(v string) GenericOption {
return newGenericOption(func(cfg Config) Config {
u, err := url.Parse(v)
if err != nil {
global.Error(err, "otlptrace: parse endpoint url", "url", v)
return cfg
}
cfg.Traces.Endpoint = u.Host
cfg.Traces.URLPath = u.Path
if u.Scheme != "https" {
cfg.Traces.Insecure = true
}
return cfg
})
}
func WithCompression(compression Compression) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Compression = compression
return cfg
})
}
func WithURLPath(urlPath string) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.URLPath = urlPath
return cfg
})
}
func WithRetry(rc retry.Config) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.RetryConfig = rc
return cfg
})
}
func WithTLSClientConfig(tlsCfg *tls.Config) GenericOption {
return newSplitOption(func(cfg Config) Config {
cfg.Traces.TLSCfg = tlsCfg.Clone()
return cfg
}, func(cfg Config) Config {
cfg.Traces.GRPCCredentials = credentials.NewTLS(tlsCfg)
return cfg
})
}
func WithInsecure() GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Insecure = true
return cfg
})
}
func WithSecure() GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Insecure = false
return cfg
})
}
func WithHeaders(headers map[string]string) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Headers = headers
return cfg
})
}
func WithTimeout(duration time.Duration) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Timeout = duration
return cfg
})
}
func WithProxy(pf HTTPTransportProxyFunc) GenericOption {
return newGenericOption(func(cfg Config) Config {
cfg.Traces.Proxy = pf
return cfg
})
}
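A minimal sketch, not taken from the vendored sources: these internal options back the exporter's public ones, and the WithEndpoint versus WithEndpointURL distinction documented above looks like this from the user side; both hosts are placeholders.
package main
import (
	"context"
	"log"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
)
func main() {
	ctx := context.Background()
	// Host and port only; scheme and path are not part of the value.
	hostOnly, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("collector.example:4317"))
	if err != nil {
		log.Fatal(err)
	}
	defer func() { _ = hostOnly.Shutdown(ctx) }()
	// Full URL; an "http" scheme implies an insecure connection.
	fullURL, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpointURL("http://collector.example:4317"))
	if err != nil {
		log.Fatal(err)
	}
	defer func() { _ = fullURL.Shutdown(ctx) }()
}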

View File

@ -0,0 +1,40 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlptrace/otlpconfig/optiontypes.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlpconfig // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig"
const (
// DefaultCollectorGRPCPort is the default gRPC port of the collector.
DefaultCollectorGRPCPort uint16 = 4317
// DefaultCollectorHTTPPort is the default HTTP port of the collector.
DefaultCollectorHTTPPort uint16 = 4318
// DefaultCollectorHost is the host address the Exporter will attempt
// to connect to if no collector address is provided.
DefaultCollectorHost string = "localhost"
)
// Compression describes the compression used for payloads sent to the
// collector.
type Compression int
const (
// NoCompression tells the driver to send payloads without
// compression.
NoCompression Compression = iota
// GzipCompression tells the driver to send payloads after
// compressing them with gzip.
GzipCompression
)
// Marshaler describes the kind of message format sent to the collector.
type Marshaler int
const (
// MarshalProto tells the driver to send using the protobuf binary format.
MarshalProto Marshaler = iota
// MarshalJSON tells the driver to send using json format.
MarshalJSON
)

View File

@ -0,0 +1,26 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/otlptrace/otlpconfig/tls.go.tmpl
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package otlpconfig // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig"
import (
"crypto/tls"
"crypto/x509"
"errors"
)
// CreateTLSConfig creates a tls.Config from a raw certificate bytes
// to verify a server certificate.
func CreateTLSConfig(certBytes []byte) (*tls.Config, error) {
cp := x509.NewCertPool()
if ok := cp.AppendCertsFromPEM(certBytes); !ok {
return nil, errors.New("failed to append certificate to the cert pool")
}
return &tls.Config{
RootCAs: cp,
}, nil
}
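A minimal sketch, not taken from the vendored sources: CreateTLSConfig is internal, so applications normally supply equivalent TLS credentials through the public option; the CA file path is a placeholder.
package main
import (
	"context"
	"google.golang.org/grpc/credentials"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
)
func main() {
	// Load a CA bundle to verify the collector's certificate;
	// "/etc/ssl/collector-ca.pem" is a placeholder path.
	creds, err := credentials.NewClientTLSFromFile("/etc/ssl/collector-ca.pem", "")
	if err != nil {
		panic(err)
	}
	exp, err := otlptracegrpc.New(context.Background(),
		otlptracegrpc.WithTLSCredentials(creds),
	)
	if err != nil {
		panic(err)
	}
	_ = exp
}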

View File

@ -0,0 +1,56 @@
// Code created by gotmpl. DO NOT MODIFY.
// source: internal/shared/otlp/partialsuccess.go
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0
package internal // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal"
import "fmt"
// PartialSuccess represents the underlying error for all handling
// of OTLP partial success messages. Use `errors.Is(err,
// PartialSuccess{})` to test whether an error passed to the OTel
// error handler belongs to this category.
type PartialSuccess struct {
ErrorMessage string
RejectedItems int64
RejectedKind string
}
var _ error = PartialSuccess{}
// Error implements the error interface.
func (ps PartialSuccess) Error() string {
msg := ps.ErrorMessage
if msg == "" {
msg = "empty message"
}
return fmt.Sprintf("OTLP partial success: %s (%d %s rejected)", msg, ps.RejectedItems, ps.RejectedKind)
}
// Is supports the errors.Is() interface.
func (ps PartialSuccess) Is(err error) bool {
_, ok := err.(PartialSuccess)
return ok
}
// TracePartialSuccessError returns an error describing a partial success
// response for the trace signal.
func TracePartialSuccessError(itemsRejected int64, errorMessage string) error {
return PartialSuccess{
ErrorMessage: errorMessage,
RejectedItems: itemsRejected,
RejectedKind: "spans",
}
}
// MetricPartialSuccessError returns an error describing a partial success
// response for the metric signal.
func MetricPartialSuccessError(itemsRejected int64, errorMessage string) error {
return PartialSuccess{
ErrorMessage: errorMessage,
RejectedItems: itemsRejected,
RejectedKind: "metric data points",
}
}
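A minimal sketch, not taken from the vendored sources: the gRPC client forwards partial-success responses to otel.Handle, so registering a global error handler is how an application observes them.
package main
import (
	"log"
	"go.opentelemetry.io/otel"
)
func main() {
	// The exporter reports OTLP partial-success responses through the
	// global error handler; register one to make them visible.
	otel.SetErrorHandler(otel.ErrorHandlerFunc(func(err error) {
		log.Printf("otel: %v", err)
	}))
}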

Some files were not shown because too many files have changed in this diff.