build: move e2e dependencies into e2e/go.mod

Several packages are only used while running the e2e suite. These
packages are less important to update, as they cannot influence the
final executable that is part of the Ceph-CSI container image.

By moving these dependencies out of the main Ceph-CSI go.mod, it
becomes easier to identify whether a reported CVE affects Ceph-CSI
itself, or only the testing suite (as is the case for most Kubernetes
CVEs).
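
For illustration, a dedicated e2e module could start with a minimal
go.mod along these lines (the module path, Go version, and dependency
versions below are hypothetical, not taken from this commit):

    module github.com/ceph/ceph-csi/v3/e2e

    go 1.22

    require (
        github.com/onsi/ginkgo/v2 v2.22.0 // e2e test framework; version is an assumption
        k8s.io/kubernetes v1.32.0 // test-only dependency; version is an assumption
    )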

Signed-off-by: Niels de Vos <ndevos@ibm.com>
Niels de Vos
2025-03-04 08:57:28 +01:00
committed by mergify[bot]
parent 15da101b1b
commit bec6090996
8047 changed files with 1407827 additions and 3453 deletions


@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,564 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package client
import (
"context"
"errors"
"fmt"
"io"
"math/rand"
"net"
"sync"
"sync/atomic"
"time"
"google.golang.org/grpc"
"k8s.io/klog/v2"
"sigs.k8s.io/apiserver-network-proxy/konnectivity-client/pkg/client/metrics"
commonmetrics "sigs.k8s.io/apiserver-network-proxy/konnectivity-client/pkg/common/metrics"
"sigs.k8s.io/apiserver-network-proxy/konnectivity-client/proto/client"
)
// Tunnel provides ability to dial a connection through a tunnel.
type Tunnel interface {
// Dial connects to the address on the named network, similar to
// what net.Dial does. The only supported protocol is tcp.
DialContext(requestCtx context.Context, protocol, address string) (net.Conn, error)
// Done returns a channel that is closed when the tunnel is no longer serving any connections,
// and can no longer be used.
Done() <-chan struct{}
}
type dialResult struct {
err *dialFailure
connid int64
}
type pendingDial struct {
// resultCh is the channel to send the dial result to
resultCh chan<- dialResult
// cancelCh is the channel closed when resultCh no longer has a receiver
cancelCh <-chan struct{}
}
// TODO: Replace with a generic implementation once it is safe to assume the client is built with go1.18+
type pendingDialManager struct {
pendingDials map[int64]pendingDial
mutex sync.RWMutex
}
func (p *pendingDialManager) add(dialID int64, pd pendingDial) {
p.mutex.Lock()
defer p.mutex.Unlock()
p.pendingDials[dialID] = pd
}
func (p *pendingDialManager) remove(dialID int64) {
p.mutex.Lock()
defer p.mutex.Unlock()
delete(p.pendingDials, dialID)
}
func (p *pendingDialManager) get(dialID int64) (pendingDial, bool) {
p.mutex.RLock()
defer p.mutex.RUnlock()
pd, ok := p.pendingDials[dialID]
return pd, ok
}
// TODO: Replace with a generic implementation once it is safe to assume the client is built with go1.18+
type connectionManager struct {
conns map[int64]*conn
mutex sync.RWMutex
}
func (cm *connectionManager) add(connID int64, c *conn) {
cm.mutex.Lock()
defer cm.mutex.Unlock()
cm.conns[connID] = c
}
func (cm *connectionManager) remove(connID int64) {
cm.mutex.Lock()
defer cm.mutex.Unlock()
delete(cm.conns, connID)
}
func (cm *connectionManager) get(connID int64) (*conn, bool) {
cm.mutex.RLock()
defer cm.mutex.RUnlock()
c, ok := cm.conns[connID]
return c, ok
}
func (cm *connectionManager) closeAll() {
cm.mutex.Lock()
defer cm.mutex.Unlock()
for _, conn := range cm.conns {
close(conn.readCh)
}
}
// grpcTunnel implements Tunnel
type grpcTunnel struct {
stream client.ProxyService_ProxyClient
sendLock sync.Mutex
recvLock sync.Mutex
grpcConn clientConn
pendingDial pendingDialManager
conns connectionManager
// The tunnel will be closed if the caller fails to read via conn.Read()
// more than readTimeoutSeconds after a packet has been received.
readTimeoutSeconds int
// The done channel is closed after the tunnel has cleaned up all connections and is no longer
// serving.
done chan struct{}
// started is an atomic bool represented as a 0 or 1, and set to true when a single-use tunnel has been started (dialed).
// started should only be accessed through atomic methods.
// TODO: switch this to an atomic.Bool once the client is exclusively built with go1.19+
started uint32
// closing is an atomic bool represented as a 0 or 1, and set to true when the tunnel is being closed.
// closing should only be accessed through atomic methods.
// TODO: switch this to an atomic.Bool once the client is exclusively built with go1.19+
closing uint32
// Stores the current metrics.ClientConnectionStatus
prevStatus atomic.Value
}
type clientConn interface {
Close() error
}
var _ clientConn = &grpc.ClientConn{}
var (
// Expose metrics for client to register.
Metrics = metrics.Metrics
)
// CreateSingleUseGrpcTunnel creates a Tunnel to dial to a remote server through a
// gRPC based proxy service.
// Currently, a single tunnel supports a single connection, and the tunnel is closed when the connection is terminated
// The Dial() method of the returned tunnel should only be called once
// Deprecated 2022-06-07: use CreateSingleUseGrpcTunnelWithContext
func CreateSingleUseGrpcTunnel(tunnelCtx context.Context, address string, opts ...grpc.DialOption) (Tunnel, error) {
return CreateSingleUseGrpcTunnelWithContext(context.TODO(), tunnelCtx, address, opts...)
}
// CreateSingleUseGrpcTunnelWithContext creates a Tunnel to dial to a remote server through a
// gRPC based proxy service.
// Currently, a single tunnel supports a single connection.
// The tunnel is normally closed when the connection is terminated.
// If createCtx is cancelled before tunnel creation, an error will be returned.
// If tunnelCtx is cancelled while the tunnel is still in use, the tunnel (and any in flight connections) will be closed.
// The Dial() method of the returned tunnel should only be called once
func CreateSingleUseGrpcTunnelWithContext(createCtx, tunnelCtx context.Context, address string, opts ...grpc.DialOption) (Tunnel, error) {
c, err := grpc.DialContext(createCtx, address, opts...)
if err != nil {
return nil, err
}
grpcClient := client.NewProxyServiceClient(c)
stream, err := grpcClient.Proxy(tunnelCtx)
if err != nil {
c.Close()
return nil, err
}
tunnel := newUnstartedTunnel(stream, c)
go tunnel.serve(tunnelCtx)
return tunnel, nil
}
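// For illustration only (not part of the upstream file): a minimal,
// hypothetical usage sketch of a single-use tunnel. The proxy address,
// dial target, and the insecure credentials are assumptions made to keep
// the sketch short.
//
//	tunnel, err := client.CreateSingleUseGrpcTunnelWithContext(
//		createCtx, tunnelCtx, "proxy-server:8090",
//		grpc.WithTransportCredentials(insecure.NewCredentials()),
//	)
//	if err != nil {
//		return err
//	}
//	// The returned tunnel may be dialed exactly once.
//	conn, err := tunnel.DialContext(requestCtx, "tcp", "10.0.0.1:443")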
func newUnstartedTunnel(stream client.ProxyService_ProxyClient, c clientConn) *grpcTunnel {
t := grpcTunnel{
stream: stream,
grpcConn: c,
pendingDial: pendingDialManager{pendingDials: make(map[int64]pendingDial)},
conns: connectionManager{conns: make(map[int64]*conn)},
readTimeoutSeconds: 10,
done: make(chan struct{}),
started: 0,
}
s := metrics.ClientConnectionStatusCreated
t.prevStatus.Store(s)
metrics.Metrics.GetClientConnectionsMetric().WithLabelValues(string(s)).Inc()
return &t
}
func (t *grpcTunnel) updateMetric(status metrics.ClientConnectionStatus) {
select {
case <-t.Done():
return
default:
}
prevStatus := t.prevStatus.Swap(status).(metrics.ClientConnectionStatus)
m := metrics.Metrics.GetClientConnectionsMetric()
m.WithLabelValues(string(prevStatus)).Dec()
m.WithLabelValues(string(status)).Inc()
}
// closeMetric should be called exactly once to finalize client_connections metric.
func (t *grpcTunnel) closeMetric() {
select {
case <-t.Done():
return
default:
}
prevStatus := t.prevStatus.Load().(metrics.ClientConnectionStatus)
metrics.Metrics.GetClientConnectionsMetric().WithLabelValues(string(prevStatus)).Dec()
}
func (t *grpcTunnel) serve(tunnelCtx context.Context) {
defer func() {
t.grpcConn.Close()
// A connection in t.conns after serve() returns means
// we never received a CLOSE_RSP for it, so we need to
// close any channels remaining for these connections.
t.conns.closeAll()
t.closeMetric()
close(t.done)
}()
for {
pkt, err := t.Recv()
if err == io.EOF {
return
}
isClosing := t.isClosing()
if err != nil || pkt == nil {
if !isClosing {
klog.ErrorS(err, "stream read failure")
}
return
}
if isClosing {
return
}
klog.V(5).InfoS("[tracing] recv packet", "type", pkt.Type)
switch pkt.Type {
case client.PacketType_DIAL_RSP:
resp := pkt.GetDialResponse()
pendingDial, ok := t.pendingDial.get(resp.Random)
if !ok {
// If the DIAL_RSP does not match a pending dial, it means one of two things:
// 1. There was a second DIAL_RSP for the connection request (this is very unlikely but possible)
// 2. grpcTunnel.DialContext() returned early due to a dial timeout or the client canceling the context
//
// In either scenario, we should return here and close the tunnel as it is no longer needed.
kvs := []interface{}{"dialID", resp.Random, "connectionID", resp.ConnectID}
if resp.Error != "" {
kvs = append(kvs, "error", resp.Error)
}
klog.V(1).InfoS("DialResp not recognized; dropped", kvs...)
return
}
result := dialResult{connid: resp.ConnectID}
if resp.Error != "" {
result.err = &dialFailure{resp.Error, metrics.DialFailureEndpoint}
} else {
t.updateMetric(metrics.ClientConnectionStatusOk)
}
select {
// try to send to the result channel
case pendingDial.resultCh <- result:
// unblock if the cancel channel is closed
case <-pendingDial.cancelCh:
// Note: this condition can only be hit by a race condition where the
// DialContext() returns early (timeout) after the pendingDial is already
// fetched here, but before the result is sent.
klog.V(1).InfoS("Pending dial has been cancelled; dropped", "connectionID", resp.ConnectID, "dialID", resp.Random)
return
case <-tunnelCtx.Done():
klog.V(1).InfoS("Tunnel has been closed; dropped", "connectionID", resp.ConnectID, "dialID", resp.Random)
return
}
if resp.Error != "" {
// On dial error, avoid leaking serve goroutine.
return
}
case client.PacketType_DIAL_CLS:
resp := pkt.GetCloseDial()
pendingDial, ok := t.pendingDial.get(resp.Random)
if !ok {
// If the DIAL_CLS does not match a pending dial, it means one of two things:
// 1. There was a DIAL_CLS received after a DIAL_RSP (unlikely but possible)
// 2. grpcTunnel.DialContext() returned early due to a dial timeout or the client canceling the context
//
// In either scenario, we should return here and close the tunnel as it is no longer needed.
klog.V(1).InfoS("DIAL_CLS after dial finished", "dialID", resp.Random)
} else {
result := dialResult{
err: &dialFailure{"dial closed", metrics.DialFailureDialClosed},
}
select {
case pendingDial.resultCh <- result:
case <-pendingDial.cancelCh:
// Note: this condition can only be hit by a race condition where the
// DialContext() returns early (timeout) after the pendingDial is already
// fetched here, but before the result is sent.
case <-tunnelCtx.Done():
}
}
return // Stop serving & close the tunnel.
case client.PacketType_DATA:
resp := pkt.GetData()
if resp.ConnectID == 0 {
klog.ErrorS(nil, "Received packet missing ConnectID", "packetType", "DATA")
continue
}
// TODO: flow control
conn, ok := t.conns.get(resp.ConnectID)
if !ok {
klog.ErrorS(nil, "Connection not recognized", "connectionID", resp.ConnectID, "packetType", "DATA")
t.sendCloseRequest(resp.ConnectID)
continue
}
timer := time.NewTimer((time.Duration)(t.readTimeoutSeconds) * time.Second)
select {
case conn.readCh <- resp.Data:
timer.Stop()
case <-timer.C:
klog.ErrorS(fmt.Errorf("timeout"), "readTimeout has been reached, the grpc connection to the proxy server will be closed", "connectionID", conn.connID, "readTimeoutSeconds", t.readTimeoutSeconds)
return
case <-tunnelCtx.Done():
klog.V(1).InfoS("Tunnel has been closed, the grpc connection to the proxy server will be closed", "connectionID", conn.connID)
}
case client.PacketType_CLOSE_RSP:
resp := pkt.GetCloseResponse()
conn, ok := t.conns.get(resp.ConnectID)
if !ok {
klog.V(1).InfoS("Connection not recognized", "connectionID", resp.ConnectID, "packetType", "CLOSE_RSP")
continue
}
close(conn.readCh)
conn.closeCh <- resp.Error
close(conn.closeCh)
t.conns.remove(resp.ConnectID)
return
}
}
}
// Dial connects to the address on the named network, similar to
// what net.Dial does. The only supported protocol is tcp.
func (t *grpcTunnel) DialContext(requestCtx context.Context, protocol, address string) (net.Conn, error) {
conn, err := t.dialContext(requestCtx, protocol, address)
if err != nil {
_, reason := GetDialFailureReason(err)
metrics.Metrics.ObserveDialFailure(reason)
}
return conn, err
}
func (t *grpcTunnel) dialContext(requestCtx context.Context, protocol, address string) (net.Conn, error) {
prevStarted := atomic.SwapUint32(&t.started, 1)
if prevStarted != 0 {
return nil, &dialFailure{"single-use dialer already dialed", metrics.DialFailureAlreadyStarted}
}
select {
case <-t.done:
return nil, errors.New("tunnel is closed")
default: // Tunnel is open, carry on.
}
if protocol != "tcp" {
return nil, errors.New("protocol not supported")
}
t.updateMetric(metrics.ClientConnectionStatusDialing)
random := rand.Int63() /* #nosec G404 */
// This channel is closed once we're returning and no longer waiting on resultCh
cancelCh := make(chan struct{})
defer close(cancelCh)
// This channel MUST NOT be buffered. The sender needs to know when we are not receiving things, so they can abort.
resCh := make(chan dialResult)
t.pendingDial.add(random, pendingDial{resultCh: resCh, cancelCh: cancelCh})
defer t.pendingDial.remove(random)
req := &client.Packet{
Type: client.PacketType_DIAL_REQ,
Payload: &client.Packet_DialRequest{
DialRequest: &client.DialRequest{
Protocol: protocol,
Address: address,
Random: random,
},
},
}
klog.V(5).InfoS("[tracing] send packet", "type", req.Type)
err := t.Send(req)
if err != nil {
return nil, err
}
klog.V(5).Infoln("DIAL_REQ sent to proxy server")
c := &conn{
tunnel: t,
random: random,
}
select {
case res := <-resCh:
if res.err != nil {
return nil, res.err
}
c.connID = res.connid
c.readCh = make(chan []byte, 10)
c.closeCh = make(chan string, 1)
t.conns.add(res.connid, c)
case <-time.After(30 * time.Second):
klog.V(5).InfoS("Timed out waiting for DialResp", "dialID", random)
go func() {
defer t.closeTunnel()
t.sendDialClose(random)
}()
return nil, &dialFailure{"dial timeout, backstop", metrics.DialFailureTimeout}
case <-requestCtx.Done():
klog.V(5).InfoS("Context canceled waiting for DialResp", "ctxErr", requestCtx.Err(), "dialID", random)
go func() {
defer t.closeTunnel()
t.sendDialClose(random)
}()
return nil, &dialFailure{"dial timeout, context", metrics.DialFailureContext}
case <-t.done:
klog.V(5).InfoS("Tunnel closed while waiting for DialResp", "dialID", random)
return nil, &dialFailure{"tunnel closed", metrics.DialFailureTunnelClosed}
}
return c, nil
}
func (t *grpcTunnel) Done() <-chan struct{} {
return t.done
}
// Send a best-effort CLOSE_REQ for the given connection ID.
func (t *grpcTunnel) sendCloseRequest(connID int64) error {
req := &client.Packet{
Type: client.PacketType_CLOSE_REQ,
Payload: &client.Packet_CloseRequest{
CloseRequest: &client.CloseRequest{
ConnectID: connID,
},
},
}
klog.V(5).InfoS("[tracing] send req", "type", req.Type)
return t.Send(req)
}
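// Send a best-effort DIAL_CLS request for the given dial ID.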
func (t *grpcTunnel) sendDialClose(dialID int64) error {
req := &client.Packet{
Type: client.PacketType_DIAL_CLS,
Payload: &client.Packet_CloseDial{
CloseDial: &client.CloseDial{
Random: dialID,
},
},
}
klog.V(5).InfoS("[tracing] send req", "type", req.Type)
return t.Send(req)
}
func (t *grpcTunnel) closeTunnel() {
atomic.StoreUint32(&t.closing, 1)
t.grpcConn.Close()
}
func (t *grpcTunnel) isClosing() bool {
return atomic.LoadUint32(&t.closing) != 0
}
func (t *grpcTunnel) Send(pkt *client.Packet) error {
t.sendLock.Lock()
defer t.sendLock.Unlock()
const segment = commonmetrics.SegmentFromClient
metrics.Metrics.ObservePacket(segment, pkt.Type)
err := t.stream.Send(pkt)
if err != nil && err != io.EOF {
metrics.Metrics.ObserveStreamError(segment, err, pkt.Type)
}
return err
}
func (t *grpcTunnel) Recv() (*client.Packet, error) {
t.recvLock.Lock()
defer t.recvLock.Unlock()
const segment = commonmetrics.SegmentToClient
pkt, err := t.stream.Recv()
if err != nil {
if err != io.EOF {
metrics.Metrics.ObserveStreamErrorNoPacket(segment, err)
}
return nil, err
}
metrics.Metrics.ObservePacket(segment, pkt.Type)
return pkt, nil
}
func GetDialFailureReason(err error) (isDialFailure bool, reason metrics.DialFailureReason) {
var df *dialFailure
if errors.As(err, &df) {
return true, df.reason
}
return false, metrics.DialFailureUnknown
}
type dialFailure struct {
msg string
reason metrics.DialFailureReason
}
func (df *dialFailure) Error() string {
return df.msg
}


@@ -0,0 +1,157 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package client
import (
"errors"
"io"
"net"
"sync/atomic"
"time"
"k8s.io/klog/v2"
"sigs.k8s.io/apiserver-network-proxy/konnectivity-client/proto/client"
)
// CloseTimeout is the timeout to wait for a CLOSE_RSP packet after a
// successful delivery of CLOSE_REQ.
const CloseTimeout = 10 * time.Second
var errConnTunnelClosed = errors.New("tunnel closed")
var errConnCloseTimeout = errors.New("close timeout")
// conn is an implementation of net.Conn, where the data is transported
// over an established tunnel defined by a gRPC service ProxyService.
type conn struct {
tunnel *grpcTunnel
// connID is set when a successful DIAL_RSP is received
connID int64
// random (dialID) is always initialized
random int64
readCh chan []byte
// On receiving CLOSE_RSP, any error message is sent on closeCh, and closeCh is then closed.
closeCh chan string
rdata []byte
// closing is an atomic bool represented as a 0 or 1, and set to true when the connection is being closed.
// closing should only be accessed through atomic methods.
// TODO: switch this to an atomic.Bool once the client is exclusively built with go1.19+
closing uint32
}
var _ net.Conn = &conn{}
// Write sends the data through the connection over proxy service
func (c *conn) Write(data []byte) (n int, err error) {
req := &client.Packet{
Type: client.PacketType_DATA,
Payload: &client.Packet_Data{
Data: &client.Data{
ConnectID: c.connID,
Data: data,
},
},
}
klog.V(5).InfoS("[tracing] send req", "type", req.Type)
err = c.tunnel.Send(req)
if err != nil {
return 0, err
}
return len(data), err
}
// Read receives data from the connection over proxy service
func (c *conn) Read(b []byte) (n int, err error) {
var data []byte
if c.rdata != nil {
data = c.rdata
} else {
data = <-c.readCh
}
if data == nil {
return 0, io.EOF
}
if len(data) > len(b) {
copy(b, data[:len(b)])
c.rdata = data[len(b):]
return len(b), nil
}
c.rdata = nil
copy(b, data)
return len(data), nil
}
func (c *conn) LocalAddr() net.Addr {
return nil
}
func (c *conn) RemoteAddr() net.Addr {
return nil
}
func (c *conn) SetDeadline(t time.Time) error {
return errors.New("not implemented")
}
func (c *conn) SetReadDeadline(t time.Time) error {
return errors.New("not implemented")
}
func (c *conn) SetWriteDeadline(t time.Time) error {
return errors.New("not implemented")
}
// Close closes the connection, sends best-effort close signal to proxy
// service, and frees resources.
func (c *conn) Close() error {
old := atomic.SwapUint32(&c.closing, 1)
if old != 0 {
// prevent duplicate messages
return nil
}
klog.V(4).InfoS("closing connection", "dialID", c.random, "connectionID", c.connID)
defer c.tunnel.closeTunnel()
if c.connID != 0 {
c.tunnel.sendCloseRequest(c.connID)
} else {
// Never received a DIAL response so no connection ID.
c.tunnel.sendDialClose(c.random)
}
select {
case errMsg := <-c.closeCh:
if errMsg != "" {
return errors.New(errMsg)
}
return nil
case <-c.tunnel.Done():
return errConnTunnelClosed
case <-time.After(CloseTimeout):
}
return errConnCloseTimeout
}


@@ -0,0 +1,164 @@
/*
Copyright 2022 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package metrics
import (
"sync"
"github.com/prometheus/client_golang/prometheus"
commonmetrics "sigs.k8s.io/apiserver-network-proxy/konnectivity-client/pkg/common/metrics"
"sigs.k8s.io/apiserver-network-proxy/konnectivity-client/proto/client"
)
const (
Namespace = "konnectivity_network_proxy"
Subsystem = "client"
)
var (
// Metrics provides access to all client metrics. The client
// application is responsible for registering (via Metrics.RegisterMetrics).
Metrics = newMetrics()
)
// ClientMetrics includes all the metrics of the konnectivity-client.
type ClientMetrics struct {
registerOnce sync.Once
streamPackets *prometheus.CounterVec
streamErrors *prometheus.CounterVec
dialFailures *prometheus.CounterVec
clientConns *prometheus.GaugeVec
}
type DialFailureReason string
const (
DialFailureUnknown DialFailureReason = "unknown"
// DialFailureTimeout indicates the hard 30 second timeout was hit.
DialFailureTimeout DialFailureReason = "timeout"
// DialFailureContext indicates that the context was cancelled or reached its deadline before
// the dial response was returned.
DialFailureContext DialFailureReason = "context"
// DialFailureEndpoint indicates that the konnectivity-agent was unable to reach the backend endpoint.
DialFailureEndpoint DialFailureReason = "endpoint"
// DialFailureDialClosed indicates that the client received a CloseDial response, indicating the
// connection was closed before the dial could complete.
DialFailureDialClosed DialFailureReason = "dialclosed"
// DialFailureTunnelClosed indicates that the client connection was closed before the dial could
// complete.
DialFailureTunnelClosed DialFailureReason = "tunnelclosed"
// DialFailureAlreadyStarted indicates that a single-use tunnel dialer was already used once.
DialFailureAlreadyStarted DialFailureReason = "tunnelstarted"
)
type ClientConnectionStatus string
const (
// The connection is created but has not yet been dialed.
ClientConnectionStatusCreated ClientConnectionStatus = "created"
// The connection is pending dial response.
ClientConnectionStatusDialing ClientConnectionStatus = "dialing"
// The connection is established.
ClientConnectionStatusOk ClientConnectionStatus = "ok"
// The connection is closing.
ClientConnectionStatusClosing ClientConnectionStatus = "closing"
)
func newMetrics() *ClientMetrics {
// The denominator (total dials started) for both
// dial_failure_total and dial_duration_seconds is the
// stream_packets_total (common metric), where segment is
// "from_client" and packet_type is "DIAL_REQ".
dialFailures := prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: Namespace,
Subsystem: Subsystem,
Name: "dial_failure_total",
Help: "Number of dial failures observed, by reason (example: remote endpoint error)",
},
[]string{
"reason",
},
)
clientConns := prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Namespace: Namespace,
Subsystem: Subsystem,
Name: "client_connections",
Help: "Number of open client connections, by status (Example: dialing)",
},
[]string{
"status",
},
)
return &ClientMetrics{
streamPackets: commonmetrics.MakeStreamPacketsTotalMetric(Namespace, Subsystem),
streamErrors: commonmetrics.MakeStreamErrorsTotalMetric(Namespace, Subsystem),
dialFailures: dialFailures,
clientConns: clientConns,
}
}
// RegisterMetrics registers all metrics with the client application.
func (c *ClientMetrics) RegisterMetrics(r prometheus.Registerer) {
c.registerOnce.Do(func() {
r.MustRegister(c.streamPackets)
r.MustRegister(c.streamErrors)
r.MustRegister(c.dialFailures)
r.MustRegister(c.clientConns)
})
}
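// For illustration only (not part of the upstream file): a hypothetical
// sketch of how a consuming application could register these metrics
// with its own Prometheus registry. The registry variable is an
// assumption; registration is idempotent thanks to registerOnce.
//
//	reg := prometheus.NewRegistry()
//	metrics.Metrics.RegisterMetrics(reg)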
// LegacyRegisterMetrics registers all metrics via MustRegister func.
// TODO: remove this once https://github.com/kubernetes/kubernetes/pull/114293 is available.
func (c *ClientMetrics) LegacyRegisterMetrics(mustRegisterFn func(...prometheus.Collector)) {
c.registerOnce.Do(func() {
mustRegisterFn(c.streamPackets)
mustRegisterFn(c.streamErrors)
mustRegisterFn(c.dialFailures)
mustRegisterFn(c.clientConns)
})
}
// Reset resets the metrics.
func (c *ClientMetrics) Reset() {
c.streamPackets.Reset()
c.streamErrors.Reset()
c.dialFailures.Reset()
c.clientConns.Reset()
}
func (c *ClientMetrics) ObserveDialFailure(reason DialFailureReason) {
c.dialFailures.WithLabelValues(string(reason)).Inc()
}
func (c *ClientMetrics) GetClientConnectionsMetric() *prometheus.GaugeVec {
return c.clientConns
}
func (c *ClientMetrics) ObservePacket(segment commonmetrics.Segment, packetType client.PacketType) {
commonmetrics.ObservePacket(c.streamPackets, segment, packetType)
}
func (c *ClientMetrics) ObserveStreamErrorNoPacket(segment commonmetrics.Segment, err error) {
commonmetrics.ObserveStreamErrorNoPacket(c.streamErrors, segment, err)
}
func (c *ClientMetrics) ObserveStreamError(segment commonmetrics.Segment, err error, packetType client.PacketType) {
commonmetrics.ObserveStreamError(c.streamErrors, segment, err, packetType)
}


@@ -0,0 +1,78 @@
/*
Copyright 2022 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package metrics provides metric definitions and helpers used
// across konnectivity client, server, and agent.
package metrics
import (
"github.com/prometheus/client_golang/prometheus"
"google.golang.org/grpc/status"
"sigs.k8s.io/apiserver-network-proxy/konnectivity-client/proto/client"
)
// Segment identifies one of four tunnel segments (e.g. from server to agent).
type Segment string
const (
// SegmentFromClient indicates a packet from client to server.
SegmentFromClient Segment = "from_client"
// SegmentToClient indicates a packet from server to client.
SegmentToClient Segment = "to_client"
// SegmentFromAgent indicates a packet from agent to server.
SegmentFromAgent Segment = "from_agent"
// SegmentToAgent indicates a packet from server to agent.
SegmentToAgent Segment = "to_agent"
)
func MakeStreamPacketsTotalMetric(namespace, subsystem string) *prometheus.CounterVec {
return prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: namespace,
Subsystem: subsystem,
Name: "stream_packets_total",
Help: "Count of packets processed, by segment and packet type (example: from_client, DIAL_REQ)",
},
[]string{"segment", "packet_type"},
)
}
func MakeStreamErrorsTotalMetric(namespace, subsystem string) *prometheus.CounterVec {
return prometheus.NewCounterVec(
prometheus.CounterOpts{
Namespace: namespace,
Subsystem: subsystem,
Name: "stream_errors_total",
Help: "Count of gRPC stream errors, by segment, grpc Code, packet type. (example: from_agent, Code.Unavailable, DIAL_RSP)",
},
[]string{"segment", "code", "packet_type"},
)
}
func ObservePacket(m *prometheus.CounterVec, segment Segment, packetType client.PacketType) {
m.WithLabelValues(string(segment), packetType.String()).Inc()
}
func ObserveStreamErrorNoPacket(m *prometheus.CounterVec, segment Segment, err error) {
code := status.Code(err)
m.WithLabelValues(string(segment), code.String(), "Unknown").Inc()
}
func ObserveStreamError(m *prometheus.CounterVec, segment Segment, err error, packetType client.PacketType) {
code := status.Code(err)
m.WithLabelValues(string(segment), code.String(), packetType.String()).Inc()
}


@@ -0,0 +1,893 @@
// Copyright The Kubernetes Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.27.1
// protoc v3.21.12
// source: konnectivity-client/proto/client/client.proto
package client
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type PacketType int32
const (
PacketType_DIAL_REQ PacketType = 0
PacketType_DIAL_RSP PacketType = 1
PacketType_CLOSE_REQ PacketType = 2
PacketType_CLOSE_RSP PacketType = 3
PacketType_DATA PacketType = 4
PacketType_DIAL_CLS PacketType = 5
PacketType_DRAIN PacketType = 6
)
// Enum value maps for PacketType.
var (
PacketType_name = map[int32]string{
0: "DIAL_REQ",
1: "DIAL_RSP",
2: "CLOSE_REQ",
3: "CLOSE_RSP",
4: "DATA",
5: "DIAL_CLS",
6: "DRAIN",
}
PacketType_value = map[string]int32{
"DIAL_REQ": 0,
"DIAL_RSP": 1,
"CLOSE_REQ": 2,
"CLOSE_RSP": 3,
"DATA": 4,
"DIAL_CLS": 5,
"DRAIN": 6,
}
)
func (x PacketType) Enum() *PacketType {
p := new(PacketType)
*p = x
return p
}
func (x PacketType) String() string {
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}
func (PacketType) Descriptor() protoreflect.EnumDescriptor {
return file_konnectivity_client_proto_client_client_proto_enumTypes[0].Descriptor()
}
func (PacketType) Type() protoreflect.EnumType {
return &file_konnectivity_client_proto_client_client_proto_enumTypes[0]
}
func (x PacketType) Number() protoreflect.EnumNumber {
return protoreflect.EnumNumber(x)
}
// Deprecated: Use PacketType.Descriptor instead.
func (PacketType) EnumDescriptor() ([]byte, []int) {
return file_konnectivity_client_proto_client_client_proto_rawDescGZIP(), []int{0}
}
type Packet struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Type PacketType `protobuf:"varint,1,opt,name=type,proto3,enum=PacketType" json:"type,omitempty"`
// Types that are assignable to Payload:
//
// *Packet_DialRequest
// *Packet_DialResponse
// *Packet_Data
// *Packet_CloseRequest
// *Packet_CloseResponse
// *Packet_CloseDial
// *Packet_Drain
Payload isPacket_Payload `protobuf_oneof:"payload"`
}
func (x *Packet) Reset() {
*x = Packet{}
if protoimpl.UnsafeEnabled {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Packet) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Packet) ProtoMessage() {}
func (x *Packet) ProtoReflect() protoreflect.Message {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Packet.ProtoReflect.Descriptor instead.
func (*Packet) Descriptor() ([]byte, []int) {
return file_konnectivity_client_proto_client_client_proto_rawDescGZIP(), []int{0}
}
func (x *Packet) GetType() PacketType {
if x != nil {
return x.Type
}
return PacketType_DIAL_REQ
}
func (m *Packet) GetPayload() isPacket_Payload {
if m != nil {
return m.Payload
}
return nil
}
func (x *Packet) GetDialRequest() *DialRequest {
if x, ok := x.GetPayload().(*Packet_DialRequest); ok {
return x.DialRequest
}
return nil
}
func (x *Packet) GetDialResponse() *DialResponse {
if x, ok := x.GetPayload().(*Packet_DialResponse); ok {
return x.DialResponse
}
return nil
}
func (x *Packet) GetData() *Data {
if x, ok := x.GetPayload().(*Packet_Data); ok {
return x.Data
}
return nil
}
func (x *Packet) GetCloseRequest() *CloseRequest {
if x, ok := x.GetPayload().(*Packet_CloseRequest); ok {
return x.CloseRequest
}
return nil
}
func (x *Packet) GetCloseResponse() *CloseResponse {
if x, ok := x.GetPayload().(*Packet_CloseResponse); ok {
return x.CloseResponse
}
return nil
}
func (x *Packet) GetCloseDial() *CloseDial {
if x, ok := x.GetPayload().(*Packet_CloseDial); ok {
return x.CloseDial
}
return nil
}
func (x *Packet) GetDrain() *Drain {
if x, ok := x.GetPayload().(*Packet_Drain); ok {
return x.Drain
}
return nil
}
type isPacket_Payload interface {
isPacket_Payload()
}
type Packet_DialRequest struct {
DialRequest *DialRequest `protobuf:"bytes,2,opt,name=dialRequest,proto3,oneof"`
}
type Packet_DialResponse struct {
DialResponse *DialResponse `protobuf:"bytes,3,opt,name=dialResponse,proto3,oneof"`
}
type Packet_Data struct {
Data *Data `protobuf:"bytes,4,opt,name=data,proto3,oneof"`
}
type Packet_CloseRequest struct {
CloseRequest *CloseRequest `protobuf:"bytes,5,opt,name=closeRequest,proto3,oneof"`
}
type Packet_CloseResponse struct {
CloseResponse *CloseResponse `protobuf:"bytes,6,opt,name=closeResponse,proto3,oneof"`
}
type Packet_CloseDial struct {
CloseDial *CloseDial `protobuf:"bytes,7,opt,name=closeDial,proto3,oneof"`
}
type Packet_Drain struct {
Drain *Drain `protobuf:"bytes,8,opt,name=drain,proto3,oneof"`
}
func (*Packet_DialRequest) isPacket_Payload() {}
func (*Packet_DialResponse) isPacket_Payload() {}
func (*Packet_Data) isPacket_Payload() {}
func (*Packet_CloseRequest) isPacket_Payload() {}
func (*Packet_CloseResponse) isPacket_Payload() {}
func (*Packet_CloseDial) isPacket_Payload() {}
func (*Packet_Drain) isPacket_Payload() {}
type DialRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// tcp or udp?
Protocol string `protobuf:"bytes,1,opt,name=protocol,proto3" json:"protocol,omitempty"`
// node:port
Address string `protobuf:"bytes,2,opt,name=address,proto3" json:"address,omitempty"`
// random id for client, maybe should be longer
Random int64 `protobuf:"varint,3,opt,name=random,proto3" json:"random,omitempty"`
}
func (x *DialRequest) Reset() {
*x = DialRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DialRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DialRequest) ProtoMessage() {}
func (x *DialRequest) ProtoReflect() protoreflect.Message {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DialRequest.ProtoReflect.Descriptor instead.
func (*DialRequest) Descriptor() ([]byte, []int) {
return file_konnectivity_client_proto_client_client_proto_rawDescGZIP(), []int{1}
}
func (x *DialRequest) GetProtocol() string {
if x != nil {
return x.Protocol
}
return ""
}
func (x *DialRequest) GetAddress() string {
if x != nil {
return x.Address
}
return ""
}
func (x *DialRequest) GetRandom() int64 {
if x != nil {
return x.Random
}
return 0
}
type DialResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// error failed reason; enum?
Error string `protobuf:"bytes,1,opt,name=error,proto3" json:"error,omitempty"`
// connectID indicates the identifier of the connection
ConnectID int64 `protobuf:"varint,2,opt,name=connectID,proto3" json:"connectID,omitempty"`
// random copied from DialRequest
Random int64 `protobuf:"varint,3,opt,name=random,proto3" json:"random,omitempty"`
}
func (x *DialResponse) Reset() {
*x = DialResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DialResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DialResponse) ProtoMessage() {}
func (x *DialResponse) ProtoReflect() protoreflect.Message {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DialResponse.ProtoReflect.Descriptor instead.
func (*DialResponse) Descriptor() ([]byte, []int) {
return file_konnectivity_client_proto_client_client_proto_rawDescGZIP(), []int{2}
}
func (x *DialResponse) GetError() string {
if x != nil {
return x.Error
}
return ""
}
func (x *DialResponse) GetConnectID() int64 {
if x != nil {
return x.ConnectID
}
return 0
}
func (x *DialResponse) GetRandom() int64 {
if x != nil {
return x.Random
}
return 0
}
type CloseRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// connectID of the stream to close
ConnectID int64 `protobuf:"varint,1,opt,name=connectID,proto3" json:"connectID,omitempty"`
}
func (x *CloseRequest) Reset() {
*x = CloseRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CloseRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CloseRequest) ProtoMessage() {}
func (x *CloseRequest) ProtoReflect() protoreflect.Message {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CloseRequest.ProtoReflect.Descriptor instead.
func (*CloseRequest) Descriptor() ([]byte, []int) {
return file_konnectivity_client_proto_client_client_proto_rawDescGZIP(), []int{3}
}
func (x *CloseRequest) GetConnectID() int64 {
if x != nil {
return x.ConnectID
}
return 0
}
type CloseResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// error message
Error string `protobuf:"bytes,1,opt,name=error,proto3" json:"error,omitempty"`
// connectID indicates the identifier of the connection
ConnectID int64 `protobuf:"varint,2,opt,name=connectID,proto3" json:"connectID,omitempty"`
}
func (x *CloseResponse) Reset() {
*x = CloseResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CloseResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CloseResponse) ProtoMessage() {}
func (x *CloseResponse) ProtoReflect() protoreflect.Message {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CloseResponse.ProtoReflect.Descriptor instead.
func (*CloseResponse) Descriptor() ([]byte, []int) {
return file_konnectivity_client_proto_client_client_proto_rawDescGZIP(), []int{4}
}
func (x *CloseResponse) GetError() string {
if x != nil {
return x.Error
}
return ""
}
func (x *CloseResponse) GetConnectID() int64 {
if x != nil {
return x.ConnectID
}
return 0
}
type CloseDial struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// random id of the DialRequest
Random int64 `protobuf:"varint,1,opt,name=random,proto3" json:"random,omitempty"`
}
func (x *CloseDial) Reset() {
*x = CloseDial{}
if protoimpl.UnsafeEnabled {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CloseDial) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CloseDial) ProtoMessage() {}
func (x *CloseDial) ProtoReflect() protoreflect.Message {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CloseDial.ProtoReflect.Descriptor instead.
func (*CloseDial) Descriptor() ([]byte, []int) {
return file_konnectivity_client_proto_client_client_proto_rawDescGZIP(), []int{5}
}
func (x *CloseDial) GetRandom() int64 {
if x != nil {
return x.Random
}
return 0
}
type Drain struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *Drain) Reset() {
*x = Drain{}
if protoimpl.UnsafeEnabled {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Drain) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Drain) ProtoMessage() {}
func (x *Drain) ProtoReflect() protoreflect.Message {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Drain.ProtoReflect.Descriptor instead.
func (*Drain) Descriptor() ([]byte, []int) {
return file_konnectivity_client_proto_client_client_proto_rawDescGZIP(), []int{6}
}
type Data struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// connectID to connect to
ConnectID int64 `protobuf:"varint,1,opt,name=connectID,proto3" json:"connectID,omitempty"`
// error message if error happens
Error string `protobuf:"bytes,2,opt,name=error,proto3" json:"error,omitempty"`
// stream data
Data []byte `protobuf:"bytes,3,opt,name=data,proto3" json:"data,omitempty"`
}
func (x *Data) Reset() {
*x = Data{}
if protoimpl.UnsafeEnabled {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Data) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Data) ProtoMessage() {}
func (x *Data) ProtoReflect() protoreflect.Message {
mi := &file_konnectivity_client_proto_client_client_proto_msgTypes[7]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Data.ProtoReflect.Descriptor instead.
func (*Data) Descriptor() ([]byte, []int) {
return file_konnectivity_client_proto_client_client_proto_rawDescGZIP(), []int{7}
}
func (x *Data) GetConnectID() int64 {
if x != nil {
return x.ConnectID
}
return 0
}
func (x *Data) GetError() string {
if x != nil {
return x.Error
}
return ""
}
func (x *Data) GetData() []byte {
if x != nil {
return x.Data
}
return nil
}
var File_konnectivity_client_proto_client_client_proto protoreflect.FileDescriptor
var file_konnectivity_client_proto_client_client_proto_rawDesc = []byte{
0x0a, 0x2d, 0x6b, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x76, 0x69, 0x74, 0x79, 0x2d, 0x63,
0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x63, 0x6c, 0x69, 0x65,
0x6e, 0x74, 0x2f, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22,
0xf1, 0x02, 0x0a, 0x06, 0x50, 0x61, 0x63, 0x6b, 0x65, 0x74, 0x12, 0x1f, 0x0a, 0x04, 0x74, 0x79,
0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x0b, 0x2e, 0x50, 0x61, 0x63, 0x6b, 0x65,
0x74, 0x54, 0x79, 0x70, 0x65, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x30, 0x0a, 0x0b, 0x64,
0x69, 0x61, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b,
0x32, 0x0c, 0x2e, 0x44, 0x69, 0x61, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x48, 0x00,
0x52, 0x0b, 0x64, 0x69, 0x61, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x33, 0x0a,
0x0c, 0x64, 0x69, 0x61, 0x6c, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x03, 0x20,
0x01, 0x28, 0x0b, 0x32, 0x0d, 0x2e, 0x44, 0x69, 0x61, 0x6c, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
0x73, 0x65, 0x48, 0x00, 0x52, 0x0c, 0x64, 0x69, 0x61, 0x6c, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
0x73, 0x65, 0x12, 0x1b, 0x0a, 0x04, 0x64, 0x61, 0x74, 0x61, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b,
0x32, 0x05, 0x2e, 0x44, 0x61, 0x74, 0x61, 0x48, 0x00, 0x52, 0x04, 0x64, 0x61, 0x74, 0x61, 0x12,
0x33, 0x0a, 0x0c, 0x63, 0x6c, 0x6f, 0x73, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x18,
0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0d, 0x2e, 0x43, 0x6c, 0x6f, 0x73, 0x65, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x48, 0x00, 0x52, 0x0c, 0x63, 0x6c, 0x6f, 0x73, 0x65, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x12, 0x36, 0x0a, 0x0d, 0x63, 0x6c, 0x6f, 0x73, 0x65, 0x52, 0x65, 0x73,
0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x43, 0x6c,
0x6f, 0x73, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x48, 0x00, 0x52, 0x0d, 0x63,
0x6c, 0x6f, 0x73, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x2a, 0x0a, 0x09,
0x63, 0x6c, 0x6f, 0x73, 0x65, 0x44, 0x69, 0x61, 0x6c, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32,
0x0a, 0x2e, 0x43, 0x6c, 0x6f, 0x73, 0x65, 0x44, 0x69, 0x61, 0x6c, 0x48, 0x00, 0x52, 0x09, 0x63,
0x6c, 0x6f, 0x73, 0x65, 0x44, 0x69, 0x61, 0x6c, 0x12, 0x1e, 0x0a, 0x05, 0x64, 0x72, 0x61, 0x69,
0x6e, 0x18, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x06, 0x2e, 0x44, 0x72, 0x61, 0x69, 0x6e, 0x48,
0x00, 0x52, 0x05, 0x64, 0x72, 0x61, 0x69, 0x6e, 0x42, 0x09, 0x0a, 0x07, 0x70, 0x61, 0x79, 0x6c,
0x6f, 0x61, 0x64, 0x22, 0x5b, 0x0a, 0x0b, 0x44, 0x69, 0x61, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x12, 0x1a, 0x0a, 0x08, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x63, 0x6f, 0x6c, 0x18, 0x01,
0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x63, 0x6f, 0x6c, 0x12, 0x18,
0x0a, 0x07, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52,
0x07, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x16, 0x0a, 0x06, 0x72, 0x61, 0x6e, 0x64,
0x6f, 0x6d, 0x18, 0x03, 0x20, 0x01, 0x28, 0x03, 0x52, 0x06, 0x72, 0x61, 0x6e, 0x64, 0x6f, 0x6d,
0x22, 0x5a, 0x0a, 0x0c, 0x44, 0x69, 0x61, 0x6c, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x12, 0x14, 0x0a, 0x05, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
0x05, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x12, 0x1c, 0x0a, 0x09, 0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63,
0x74, 0x49, 0x44, 0x18, 0x02, 0x20, 0x01, 0x28, 0x03, 0x52, 0x09, 0x63, 0x6f, 0x6e, 0x6e, 0x65,
0x63, 0x74, 0x49, 0x44, 0x12, 0x16, 0x0a, 0x06, 0x72, 0x61, 0x6e, 0x64, 0x6f, 0x6d, 0x18, 0x03,
0x20, 0x01, 0x28, 0x03, 0x52, 0x06, 0x72, 0x61, 0x6e, 0x64, 0x6f, 0x6d, 0x22, 0x2c, 0x0a, 0x0c,
0x43, 0x6c, 0x6f, 0x73, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1c, 0x0a, 0x09,
0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x49, 0x44, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52,
0x09, 0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x49, 0x44, 0x22, 0x43, 0x0a, 0x0d, 0x43, 0x6c,
0x6f, 0x73, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x65,
0x72, 0x72, 0x6f, 0x72, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x65, 0x72, 0x72, 0x6f,
0x72, 0x12, 0x1c, 0x0a, 0x09, 0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x49, 0x44, 0x18, 0x02,
0x20, 0x01, 0x28, 0x03, 0x52, 0x09, 0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x49, 0x44, 0x22,
0x23, 0x0a, 0x09, 0x43, 0x6c, 0x6f, 0x73, 0x65, 0x44, 0x69, 0x61, 0x6c, 0x12, 0x16, 0x0a, 0x06,
0x72, 0x61, 0x6e, 0x64, 0x6f, 0x6d, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x06, 0x72, 0x61,
0x6e, 0x64, 0x6f, 0x6d, 0x22, 0x07, 0x0a, 0x05, 0x44, 0x72, 0x61, 0x69, 0x6e, 0x22, 0x4e, 0x0a,
0x04, 0x44, 0x61, 0x74, 0x61, 0x12, 0x1c, 0x0a, 0x09, 0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74,
0x49, 0x44, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x09, 0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63,
0x74, 0x49, 0x44, 0x12, 0x14, 0x0a, 0x05, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x18, 0x02, 0x20, 0x01,
0x28, 0x09, 0x52, 0x05, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x12, 0x12, 0x0a, 0x04, 0x64, 0x61, 0x74,
0x61, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x04, 0x64, 0x61, 0x74, 0x61, 0x2a, 0x69, 0x0a,
0x0a, 0x50, 0x61, 0x63, 0x6b, 0x65, 0x74, 0x54, 0x79, 0x70, 0x65, 0x12, 0x0c, 0x0a, 0x08, 0x44,
0x49, 0x41, 0x4c, 0x5f, 0x52, 0x45, 0x51, 0x10, 0x00, 0x12, 0x0c, 0x0a, 0x08, 0x44, 0x49, 0x41,
0x4c, 0x5f, 0x52, 0x53, 0x50, 0x10, 0x01, 0x12, 0x0d, 0x0a, 0x09, 0x43, 0x4c, 0x4f, 0x53, 0x45,
0x5f, 0x52, 0x45, 0x51, 0x10, 0x02, 0x12, 0x0d, 0x0a, 0x09, 0x43, 0x4c, 0x4f, 0x53, 0x45, 0x5f,
0x52, 0x53, 0x50, 0x10, 0x03, 0x12, 0x08, 0x0a, 0x04, 0x44, 0x41, 0x54, 0x41, 0x10, 0x04, 0x12,
0x0c, 0x0a, 0x08, 0x44, 0x49, 0x41, 0x4c, 0x5f, 0x43, 0x4c, 0x53, 0x10, 0x05, 0x12, 0x09, 0x0a,
0x05, 0x44, 0x52, 0x41, 0x49, 0x4e, 0x10, 0x06, 0x32, 0x2f, 0x0a, 0x0c, 0x50, 0x72, 0x6f, 0x78,
0x79, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x12, 0x1f, 0x0a, 0x05, 0x50, 0x72, 0x6f, 0x78,
0x79, 0x12, 0x07, 0x2e, 0x50, 0x61, 0x63, 0x6b, 0x65, 0x74, 0x1a, 0x07, 0x2e, 0x50, 0x61, 0x63,
0x6b, 0x65, 0x74, 0x22, 0x00, 0x28, 0x01, 0x30, 0x01, 0x42, 0x46, 0x5a, 0x44, 0x73, 0x69, 0x67,
0x73, 0x2e, 0x6b, 0x38, 0x73, 0x2e, 0x69, 0x6f, 0x2f, 0x61, 0x70, 0x69, 0x73, 0x65, 0x72, 0x76,
0x65, 0x72, 0x2d, 0x6e, 0x65, 0x74, 0x77, 0x6f, 0x72, 0x6b, 0x2d, 0x70, 0x72, 0x6f, 0x78, 0x79,
0x2f, 0x6b, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x76, 0x69, 0x74, 0x79, 0x2d, 0x63, 0x6c,
0x69, 0x65, 0x6e, 0x74, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x63, 0x6c, 0x69, 0x65, 0x6e,
0x74, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_konnectivity_client_proto_client_client_proto_rawDescOnce sync.Once
file_konnectivity_client_proto_client_client_proto_rawDescData = file_konnectivity_client_proto_client_client_proto_rawDesc
)
func file_konnectivity_client_proto_client_client_proto_rawDescGZIP() []byte {
file_konnectivity_client_proto_client_client_proto_rawDescOnce.Do(func() {
file_konnectivity_client_proto_client_client_proto_rawDescData = protoimpl.X.CompressGZIP(file_konnectivity_client_proto_client_client_proto_rawDescData)
})
return file_konnectivity_client_proto_client_client_proto_rawDescData
}
var file_konnectivity_client_proto_client_client_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
var file_konnectivity_client_proto_client_client_proto_msgTypes = make([]protoimpl.MessageInfo, 8)
var file_konnectivity_client_proto_client_client_proto_goTypes = []interface{}{
(PacketType)(0), // 0: PacketType
(*Packet)(nil), // 1: Packet
(*DialRequest)(nil), // 2: DialRequest
(*DialResponse)(nil), // 3: DialResponse
(*CloseRequest)(nil), // 4: CloseRequest
(*CloseResponse)(nil), // 5: CloseResponse
(*CloseDial)(nil), // 6: CloseDial
(*Drain)(nil), // 7: Drain
(*Data)(nil), // 8: Data
}
var file_konnectivity_client_proto_client_client_proto_depIdxs = []int32{
0, // 0: Packet.type:type_name -> PacketType
2, // 1: Packet.dialRequest:type_name -> DialRequest
3, // 2: Packet.dialResponse:type_name -> DialResponse
8, // 3: Packet.data:type_name -> Data
4, // 4: Packet.closeRequest:type_name -> CloseRequest
5, // 5: Packet.closeResponse:type_name -> CloseResponse
6, // 6: Packet.closeDial:type_name -> CloseDial
7, // 7: Packet.drain:type_name -> Drain
1, // 8: ProxyService.Proxy:input_type -> Packet
1, // 9: ProxyService.Proxy:output_type -> Packet
9, // [9:10] is the sub-list for method output_type
8, // [8:9] is the sub-list for method input_type
8, // [8:8] is the sub-list for extension type_name
8, // [8:8] is the sub-list for extension extendee
0, // [0:8] is the sub-list for field type_name
}
func init() { file_konnectivity_client_proto_client_client_proto_init() }
func file_konnectivity_client_proto_client_client_proto_init() {
if File_konnectivity_client_proto_client_client_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_konnectivity_client_proto_client_client_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Packet); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_konnectivity_client_proto_client_client_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DialRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_konnectivity_client_proto_client_client_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DialResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_konnectivity_client_proto_client_client_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CloseRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_konnectivity_client_proto_client_client_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CloseResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_konnectivity_client_proto_client_client_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CloseDial); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_konnectivity_client_proto_client_client_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Drain); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_konnectivity_client_proto_client_client_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Data); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
file_konnectivity_client_proto_client_client_proto_msgTypes[0].OneofWrappers = []interface{}{
(*Packet_DialRequest)(nil),
(*Packet_DialResponse)(nil),
(*Packet_Data)(nil),
(*Packet_CloseRequest)(nil),
(*Packet_CloseResponse)(nil),
(*Packet_CloseDial)(nil),
(*Packet_Drain)(nil),
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_konnectivity_client_proto_client_client_proto_rawDesc,
NumEnums: 1,
NumMessages: 8,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_konnectivity_client_proto_client_client_proto_goTypes,
DependencyIndexes: file_konnectivity_client_proto_client_client_proto_depIdxs,
EnumInfos: file_konnectivity_client_proto_client_client_proto_enumTypes,
MessageInfos: file_konnectivity_client_proto_client_client_proto_msgTypes,
}.Build()
File_konnectivity_client_proto_client_client_proto = out.File
file_konnectivity_client_proto_client_client_proto_rawDesc = nil
file_konnectivity_client_proto_client_client_proto_goTypes = nil
file_konnectivity_client_proto_client_client_proto_depIdxs = nil
}


@ -0,0 +1,104 @@
// Copyright The Kubernetes Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
// Retransmit?
// Sliding windows?
option go_package = "sigs.k8s.io/apiserver-network-proxy/konnectivity-client/proto/client";
service ProxyService {
rpc Proxy(stream Packet) returns (stream Packet) {}
}
enum PacketType {
DIAL_REQ = 0;
DIAL_RSP = 1;
CLOSE_REQ = 2;
CLOSE_RSP = 3;
DATA = 4;
DIAL_CLS = 5;
DRAIN = 6;
}
message Packet {
PacketType type = 1;
oneof payload {
DialRequest dialRequest = 2;
DialResponse dialResponse = 3;
Data data = 4;
CloseRequest closeRequest = 5;
CloseResponse closeResponse = 6;
CloseDial closeDial = 7;
Drain drain = 8;
}
}
message DialRequest {
// tcp or udp?
string protocol = 1;
// node:port
string address = 2;
// random id for client, maybe should be longer
int64 random = 3;
}
message DialResponse {
// error failed reason; enum?
string error = 1;
// connectID indicates the identifier of the connection
int64 connectID = 2;
// random copied from DialRequest
int64 random = 3;
}
message CloseRequest {
// connectID of the stream to close
int64 connectID = 1;
}
message CloseResponse {
// error message
string error = 1;
// connectID indicates the identifier of the connection
int64 connectID = 2;
}
message CloseDial {
// random id of the DialRequest
int64 random = 1;
}
message Drain {
// A hint from an Agent to Server that it is pending termination.
// A Server should prefer non-draining agents for new dials.
}
message Data {
// connectID to connect to
int64 connectID = 1;
// error message if error happens
string error = 2;
// stream data
bytes data = 3;
}
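As a hedged illustration of the protocol defined above: a caller builds a DIAL_REQ packet by wrapping a DialRequest in the Packet payload oneof. The Go wrapper names (Packet_DialRequest, PacketType_DIAL_REQ) come from the generated client.pb.go earlier in this diff; the address and random values here are placeholders.
package main
import (
	client "sigs.k8s.io/apiserver-network-proxy/konnectivity-client/proto/client"
)
// newDialRequest builds a DIAL_REQ packet asking the proxy to open a TCP
// connection to address. The random value lets the caller correlate the
// eventual DIAL_RSP (which copies it back) with this request.
func newDialRequest(address string, random int64) *client.Packet {
	return &client.Packet{
		Type: client.PacketType_DIAL_REQ,
		Payload: &client.Packet_DialRequest{
			DialRequest: &client.DialRequest{
				Protocol: "tcp",
				Address:  address,
				Random:   random,
			},
		},
	}
}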


@ -0,0 +1,150 @@
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.2.0
// - protoc v3.21.12
// source: konnectivity-client/proto/client/client.proto
package client
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.32.0 or later.
const _ = grpc.SupportPackageIsVersion7
// ProxyServiceClient is the client API for ProxyService service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type ProxyServiceClient interface {
Proxy(ctx context.Context, opts ...grpc.CallOption) (ProxyService_ProxyClient, error)
}
type proxyServiceClient struct {
cc grpc.ClientConnInterface
}
func NewProxyServiceClient(cc grpc.ClientConnInterface) ProxyServiceClient {
return &proxyServiceClient{cc}
}
func (c *proxyServiceClient) Proxy(ctx context.Context, opts ...grpc.CallOption) (ProxyService_ProxyClient, error) {
stream, err := c.cc.NewStream(ctx, &ProxyService_ServiceDesc.Streams[0], "/ProxyService/Proxy", opts...)
if err != nil {
return nil, err
}
x := &proxyServiceProxyClient{stream}
return x, nil
}
type ProxyService_ProxyClient interface {
Send(*Packet) error
Recv() (*Packet, error)
grpc.ClientStream
}
type proxyServiceProxyClient struct {
grpc.ClientStream
}
func (x *proxyServiceProxyClient) Send(m *Packet) error {
return x.ClientStream.SendMsg(m)
}
func (x *proxyServiceProxyClient) Recv() (*Packet, error) {
m := new(Packet)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
// ProxyServiceServer is the server API for ProxyService service.
// All implementations should embed UnimplementedProxyServiceServer
// for forward compatibility
type ProxyServiceServer interface {
Proxy(ProxyService_ProxyServer) error
}
// UnimplementedProxyServiceServer should be embedded to have forward compatible implementations.
type UnimplementedProxyServiceServer struct {
}
func (UnimplementedProxyServiceServer) Proxy(ProxyService_ProxyServer) error {
return status.Errorf(codes.Unimplemented, "method Proxy not implemented")
}
// UnsafeProxyServiceServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to ProxyServiceServer will
// result in compilation errors.
type UnsafeProxyServiceServer interface {
mustEmbedUnimplementedProxyServiceServer()
}
func RegisterProxyServiceServer(s grpc.ServiceRegistrar, srv ProxyServiceServer) {
s.RegisterService(&ProxyService_ServiceDesc, srv)
}
func _ProxyService_Proxy_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(ProxyServiceServer).Proxy(&proxyServiceProxyServer{stream})
}
type ProxyService_ProxyServer interface {
Send(*Packet) error
Recv() (*Packet, error)
grpc.ServerStream
}
type proxyServiceProxyServer struct {
grpc.ServerStream
}
func (x *proxyServiceProxyServer) Send(m *Packet) error {
return x.ServerStream.SendMsg(m)
}
func (x *proxyServiceProxyServer) Recv() (*Packet, error) {
m := new(Packet)
if err := x.ServerStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
// ProxyService_ServiceDesc is the grpc.ServiceDesc for ProxyService service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var ProxyService_ServiceDesc = grpc.ServiceDesc{
ServiceName: "ProxyService",
HandlerType: (*ProxyServiceServer)(nil),
Methods: []grpc.MethodDesc{},
Streams: []grpc.StreamDesc{
{
StreamName: "Proxy",
Handler: _ProxyService_Proxy_Handler,
ServerStreams: true,
ClientStreams: true,
},
},
Metadata: "konnectivity-client/proto/client/client.proto",
}
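A usage sketch for the generated service above: dial the konnectivity server, open the bidirectional Proxy stream, and exchange packets. The endpoint and the insecure credentials are placeholder assumptions for brevity, not part of the generated API.
package main
import (
	"context"
	"log"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	client "sigs.k8s.io/apiserver-network-proxy/konnectivity-client/proto/client"
)
func main() {
	// Placeholder endpoint; real deployments use the konnectivity server
	// address and proper transport credentials.
	conn, err := grpc.Dial("127.0.0.1:8090",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	stream, err := client.NewProxyServiceClient(conn).Proxy(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// Send one DIAL_REQ and wait for the server's reply (a DIAL_RSP when
	// the dial succeeds or fails).
	if err := stream.Send(&client.Packet{
		Type: client.PacketType_DIAL_REQ,
		Payload: &client.Packet_DialRequest{DialRequest: &client.DialRequest{
			Protocol: "tcp",
			Address:  "10.0.0.1:443",
			Random:   1,
		}},
	}); err != nil {
		log.Fatal(err)
	}
	pkt, err := stream.Recv()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("received packet type %v", pkt.GetType())
}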

42
e2e/vendor/sigs.k8s.io/json/CONTRIBUTING.md generated vendored Normal file

@ -0,0 +1,42 @@
# Contributing Guidelines
Welcome to Kubernetes. We are excited about the prospect of you joining our [community](https://git.k8s.io/community)! The Kubernetes community abides by the CNCF [code of conduct](code-of-conduct.md). Here is an excerpt:
_As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities._
## Criteria for adding code here
This library adapts the stdlib `encoding/json` decoder to be compatible with
Kubernetes JSON decoding, and is not expected to actively add new features.
It may be updated with changes from the stdlib `encoding/json` decoder.
Any code that is added must:
* Have full unit test and benchmark coverage
* Be backward compatible with the existing exposed go API
* Have zero external dependencies
* Preserve existing benchmark performance
* Preserve compatibility with existing decoding behavior of `UnmarshalCaseSensitivePreserveInts()` or `UnmarshalStrict()`
* Avoid use of `unsafe`
## Getting Started
We have full documentation on how to get started contributing here:
<!---
If your repo has certain guidelines for contribution, put them here ahead of the general k8s resources
-->
- [Contributor License Agreement](https://git.k8s.io/community/CLA.md) Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests
- [Kubernetes Contributor Guide](https://git.k8s.io/community/contributors/guide) - Main contributor documentation, or you can just jump directly to the [contributing section](https://git.k8s.io/community/contributors/guide#contributing)
- [Contributor Cheat Sheet](https://git.k8s.io/community/contributors/guide/contributor-cheatsheet) - Common resources for existing developers
## Community, discussion, contribution, and support
You can reach the maintainers of this project via the
[sig-api-machinery mailing list / channels](https://github.com/kubernetes/community/tree/master/sig-api-machinery#contact).
## Mentorship
- [Mentoring Initiatives](https://git.k8s.io/community/mentoring) - We have a diverse set of mentorship programs available that are always looking for volunteers!

238
e2e/vendor/sigs.k8s.io/json/LICENSE generated vendored Normal file

@ -0,0 +1,238 @@
Files other than internal/golang/* licensed under:
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
------------------
internal/golang/* files licensed under:
Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

35
e2e/vendor/sigs.k8s.io/json/Makefile generated vendored Normal file

@ -0,0 +1,35 @@
.PHONY: default build test benchmark fmt vet
default: build
build:
go build ./...
test:
go test sigs.k8s.io/json/...
benchmark:
go test sigs.k8s.io/json -bench . -benchmem
fmt:
go mod tidy
gofmt -s -w *.go
vet:
go vet sigs.k8s.io/json
@echo "checking for external dependencies"
@deps=$$(go list -f '{{ if not (or .Standard .Module.Main) }}{{.ImportPath}}{{ end }}' -deps sigs.k8s.io/json/... || true); \
if [ -n "$${deps}" ]; then \
echo "only stdlib dependencies allowed, found:"; \
echo "$${deps}"; \
exit 1; \
fi
@echo "checking for unsafe use"
@unsafe=$$(go list -f '{{.ImportPath}} depends on {{.Imports}}' sigs.k8s.io/json/... | grep unsafe || true); \
if [ -n "$${unsafe}" ]; then \
echo "no dependencies on unsafe allowed, found:"; \
echo "$${unsafe}"; \
exit 1; \
fi

6
e2e/vendor/sigs.k8s.io/json/OWNERS generated vendored Normal file

@ -0,0 +1,6 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- deads2k
- jpbetz
- liggitt

40
e2e/vendor/sigs.k8s.io/json/README.md generated vendored Normal file

@ -0,0 +1,40 @@
# sigs.k8s.io/json
[![Go Reference](https://pkg.go.dev/badge/sigs.k8s.io/json.svg)](https://pkg.go.dev/sigs.k8s.io/json)
## Introduction
This library is a subproject of [sig-api-machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery#json).
It provides case-sensitive, integer-preserving JSON unmarshaling functions based on `encoding/json` `Unmarshal()`.
## Compatibility
The `UnmarshalCaseSensitivePreserveInts()` function behaves like `encoding/json#Unmarshal()` with the following differences:
- JSON object keys are treated case-sensitively.
Object keys must exactly match json tag names (for tagged struct fields)
or struct field names (for untagged struct fields).
- JSON integers are unmarshaled into `interface{}` fields as an `int64` instead of a
`float64` when possible, falling back to `float64` on any parse or overflow error.
- Syntax errors do not return an `encoding/json` `*SyntaxError` error.
Instead, they return an error which can be passed to `SyntaxErrorOffset()` to obtain an offset.
## Additional capabilities
The `UnmarshalStrict()` function decodes identically to `UnmarshalCaseSensitivePreserveInts()`,
and also returns non-fatal strict errors encountered while decoding:
- Duplicate fields encountered
- Unknown fields encountered
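A short sketch of both behaviors, assuming the package's exported `UnmarshalStrict(data, v)` and `UnmarshalCaseSensitivePreserveInts(data, v)` entry points (the struct and field names are illustrative):

```go
package main

import (
	"fmt"

	kjson "sigs.k8s.io/json"
)

type Config struct {
	Port int64 `json:"port"`
}

func main() {
	var c Config
	data := []byte(`{"port": 8080, "port": 9090, "ports": true}`)

	// The JSON is well-formed, so err is nil; strictErrs reports the
	// duplicate "port" field and the unknown "ports" field.
	strictErrs, err := kjson.UnmarshalStrict(data, &c)
	fmt.Println(err, strictErrs)

	// Integers decoded into interface{} are preserved as int64.
	var v interface{}
	_ = kjson.UnmarshalCaseSensitivePreserveInts([]byte(`{"n": 3}`), &v)
	fmt.Printf("%T\n", v.(map[string]interface{})["n"]) // int64
}
```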
### Community, discussion, contribution, and support
You can reach the maintainers of this project via the
[sig-api-machinery mailing list / channels](https://github.com/kubernetes/community/tree/master/sig-api-machinery#contact).
### Code of conduct
Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).
[owners]: https://git.k8s.io/community/contributors/guide/owners.md
[Creative Commons 4.0]: https://git.k8s.io/website/LICENSE

22
e2e/vendor/sigs.k8s.io/json/SECURITY.md generated vendored Normal file

@ -0,0 +1,22 @@
# Security Policy
## Security Announcements
Join the [kubernetes-security-announce] group for security and vulnerability announcements.
You can also subscribe to an RSS feed of the above using [this link][kubernetes-security-announce-rss].
## Reporting a Vulnerability
Instructions for reporting a vulnerability can be found on the
[Kubernetes Security and Disclosure Information] page.
## Supported Versions
Information about supported Kubernetes versions can be found on the
[Kubernetes version and version skew support policy] page on the Kubernetes website.
[kubernetes-security-announce]: https://groups.google.com/forum/#!forum/kubernetes-security-announce
[kubernetes-security-announce-rss]: https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50
[Kubernetes version and version skew support policy]: https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions
[Kubernetes Security and Disclosure Information]: https://kubernetes.io/docs/reference/issues-security/security/#report-a-vulnerability

15
e2e/vendor/sigs.k8s.io/json/SECURITY_CONTACTS generated vendored Normal file

@ -0,0 +1,15 @@
# Defined below are the security contacts for this repo.
#
# They are the contact point for the Product Security Committee to reach out
# to for triaging and handling of incoming issues.
#
# The below names agree to abide by the
# [Embargo Policy](https://git.k8s.io/security/private-distributors-list.md#embargo-policy)
# and will be removed and replaced if they violate that agreement.
#
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/
deads2k
lavalamp
liggitt

3
e2e/vendor/sigs.k8s.io/json/code-of-conduct.md generated vendored Normal file

@ -0,0 +1,3 @@
# Kubernetes Community Code of Conduct
Please refer to our [Kubernetes Community Code of Conduct](https://git.k8s.io/community/code-of-conduct.md)

17
e2e/vendor/sigs.k8s.io/json/doc.go generated vendored Normal file

@ -0,0 +1,17 @@
/*
Copyright 2021 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package json // import "sigs.k8s.io/json"

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -0,0 +1,48 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package json
import (
"unicode"
"unicode/utf8"
)
// foldName returns a folded string such that foldName(x) == foldName(y)
// is identical to bytes.EqualFold(x, y).
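// For plain ASCII this amounts to upper-casing: foldName([]byte("json"))
// and foldName([]byte("JSON")) both yield "JSON".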
func foldName(in []byte) []byte {
// This is inlinable to take advantage of "function outlining".
var arr [32]byte // large enough for most JSON names
return appendFoldedName(arr[:0], in)
}
func appendFoldedName(out, in []byte) []byte {
for i := 0; i < len(in); {
// Handle single-byte ASCII.
if c := in[i]; c < utf8.RuneSelf {
if 'a' <= c && c <= 'z' {
c -= 'a' - 'A'
}
out = append(out, c)
i++
continue
}
// Handle multi-byte Unicode.
r, n := utf8.DecodeRune(in[i:])
out = utf8.AppendRune(out, foldRune(r))
i += n
}
return out
}
// foldRune returns the smallest rune for all runes in the same fold set.
func foldRune(r rune) rune {
for {
r2 := unicode.SimpleFold(r)
if r2 <= r {
return r2
}
r = r2
}
}


@ -0,0 +1,42 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build gofuzz
package json
import (
"fmt"
)
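// Fuzz exercises Unmarshal/Marshal round-tripping: any input that
// unmarshals successfully into an empty interface, map, or slice must
// marshal back to JSON that unmarshals again without error.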
func Fuzz(data []byte) (score int) {
for _, ctor := range []func() any{
func() any { return new(any) },
func() any { return new(map[string]any) },
func() any { return new([]any) },
} {
v := ctor()
err := Unmarshal(data, v)
if err != nil {
continue
}
score = 1
m, err := Marshal(v)
if err != nil {
fmt.Printf("v=%#v\n", v)
panic(err)
}
u := ctor()
err = Unmarshal(m, u)
if err != nil {
fmt.Printf("v=%#v\n", v)
fmt.Printf("m=%s\n", m)
panic(err)
}
}
return
}


@ -0,0 +1,182 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package json
import "bytes"
// HTMLEscape appends to dst the JSON-encoded src with <, >, &, U+2028 and U+2029
// characters inside string literals changed to \u003c, \u003e, \u0026, \u2028, \u2029
// so that the JSON will be safe to embed inside HTML <script> tags.
// For historical reasons, web browsers don't honor standard HTML
// escaping within <script> tags, so an alternative JSON encoding must be used.
func HTMLEscape(dst *bytes.Buffer, src []byte) {
dst.Grow(len(src))
dst.Write(appendHTMLEscape(dst.AvailableBuffer(), src))
}
func appendHTMLEscape(dst, src []byte) []byte {
// The characters can only appear in string literals,
// so just scan the string one byte at a time.
start := 0
for i, c := range src {
if c == '<' || c == '>' || c == '&' {
dst = append(dst, src[start:i]...)
dst = append(dst, '\\', 'u', '0', '0', hex[c>>4], hex[c&0xF])
start = i + 1
}
// Convert U+2028 and U+2029 (E2 80 A8 and E2 80 A9).
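		// (&^1 clears the low bit, so the test below matches both the
		// 0xA8 and 0xA9 final bytes, i.e. U+2028 and U+2029.)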
if c == 0xE2 && i+2 < len(src) && src[i+1] == 0x80 && src[i+2]&^1 == 0xA8 {
dst = append(dst, src[start:i]...)
dst = append(dst, '\\', 'u', '2', '0', '2', hex[src[i+2]&0xF])
start = i + len("\u2029")
}
}
return append(dst, src[start:]...)
}
// Compact appends to dst the JSON-encoded src with
// insignificant space characters elided.
func Compact(dst *bytes.Buffer, src []byte) error {
dst.Grow(len(src))
b := dst.AvailableBuffer()
b, err := appendCompact(b, src, false)
dst.Write(b)
return err
}
func appendCompact(dst, src []byte, escape bool) ([]byte, error) {
origLen := len(dst)
scan := newScanner()
defer freeScanner(scan)
start := 0
for i, c := range src {
if escape && (c == '<' || c == '>' || c == '&') {
if start < i {
dst = append(dst, src[start:i]...)
}
dst = append(dst, '\\', 'u', '0', '0', hex[c>>4], hex[c&0xF])
start = i + 1
}
// Convert U+2028 and U+2029 (E2 80 A8 and E2 80 A9).
if escape && c == 0xE2 && i+2 < len(src) && src[i+1] == 0x80 && src[i+2]&^1 == 0xA8 {
if start < i {
dst = append(dst, src[start:i]...)
}
dst = append(dst, '\\', 'u', '2', '0', '2', hex[src[i+2]&0xF])
start = i + 3
}
v := scan.step(scan, c)
if v >= scanSkipSpace {
if v == scanError {
break
}
if start < i {
dst = append(dst, src[start:i]...)
}
start = i + 1
}
}
if scan.eof() == scanError {
return dst[:origLen], scan.err
}
if start < len(src) {
dst = append(dst, src[start:]...)
}
return dst, nil
}
func appendNewline(dst []byte, prefix, indent string, depth int) []byte {
dst = append(dst, '\n')
dst = append(dst, prefix...)
for i := 0; i < depth; i++ {
dst = append(dst, indent...)
}
return dst
}
// indentGrowthFactor specifies the growth factor of indenting JSON input.
// Empirically, the growth factor was measured to be between 1.4x to 1.8x
// for some set of compacted JSON with the indent being a single tab.
// Specify a growth factor slightly larger than what is observed
// to reduce probability of allocation in appendIndent.
// A factor no higher than 2 ensures that wasted space never exceeds 50%.
const indentGrowthFactor = 2
// Indent appends to dst an indented form of the JSON-encoded src.
// Each element in a JSON object or array begins on a new,
// indented line beginning with prefix followed by one or more
// copies of indent according to the indentation nesting.
// The data appended to dst does not begin with the prefix nor
// any indentation, to make it easier to embed inside other formatted JSON data.
// Although leading space characters (space, tab, carriage return, newline)
// at the beginning of src are dropped, trailing space characters
// at the end of src are preserved and copied to dst.
// For example, if src has no trailing spaces, neither will dst;
// if src ends in a trailing newline, so will dst.
func Indent(dst *bytes.Buffer, src []byte, prefix, indent string) error {
dst.Grow(indentGrowthFactor * len(src))
b := dst.AvailableBuffer()
b, err := appendIndent(b, src, prefix, indent)
dst.Write(b)
return err
}
func appendIndent(dst, src []byte, prefix, indent string) ([]byte, error) {
origLen := len(dst)
scan := newScanner()
defer freeScanner(scan)
needIndent := false
depth := 0
for _, c := range src {
scan.bytes++
v := scan.step(scan, c)
if v == scanSkipSpace {
continue
}
if v == scanError {
break
}
if needIndent && v != scanEndObject && v != scanEndArray {
needIndent = false
depth++
dst = appendNewline(dst, prefix, indent, depth)
}
// Emit semantically uninteresting bytes
// (in particular, punctuation in strings) unmodified.
if v == scanContinue {
dst = append(dst, c)
continue
}
// Add spacing around real punctuation.
switch c {
case '{', '[':
// delay indent so that empty object and array are formatted as {} and [].
needIndent = true
dst = append(dst, c)
case ',':
dst = append(dst, c)
dst = appendNewline(dst, prefix, indent, depth)
case ':':
dst = append(dst, c, ' ')
case '}', ']':
if needIndent {
// suppress indent in empty object/array
needIndent = false
} else {
depth--
dst = appendNewline(dst, prefix, indent, depth)
}
dst = append(dst, c)
default:
dst = append(dst, c)
}
}
if scan.eof() == scanError {
return dst[:origLen], scan.err
}
return dst, nil
}
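These exported helpers keep the standard library's Compact and Indent signatures; a brief usage sketch against the identical stdlib API (outputs shown in comments):
package main
import (
	"bytes"
	"encoding/json" // same Compact/Indent signatures as this vendored copy
	"fmt"
)
func main() {
	src := []byte(`{"a": [1, 2], "b": {}}`)
	var compacted bytes.Buffer
	if err := json.Compact(&compacted, src); err != nil {
		panic(err)
	}
	fmt.Println(compacted.String()) // {"a":[1,2],"b":{}}
	// Empty objects and arrays stay on one line, per the needIndent logic.
	var indented bytes.Buffer
	if err := json.Indent(&indented, src, "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(indented.String())
}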


@ -0,0 +1,168 @@
/*
Copyright 2021 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package json
import (
gojson "encoding/json"
"strconv"
"strings"
)
// Type-alias error and data types returned from decoding
type UnmarshalTypeError = gojson.UnmarshalTypeError
type UnmarshalFieldError = gojson.UnmarshalFieldError
type InvalidUnmarshalError = gojson.InvalidUnmarshalError
type Number = gojson.Number
type RawMessage = gojson.RawMessage
type Token = gojson.Token
type Delim = gojson.Delim
type UnmarshalOpt func(*decodeState)
func UseNumber(d *decodeState) {
d.useNumber = true
}
func DisallowUnknownFields(d *decodeState) {
d.disallowUnknownFields = true
}
// CaseSensitive requires json keys to exactly match specified json tags (for tagged struct fields)
// or struct field names (for untagged struct fields), or be treated as an unknown field.
func CaseSensitive(d *decodeState) {
d.caseSensitive = true
}
func (d *Decoder) CaseSensitive() {
d.d.caseSensitive = true
}
// PreserveInts decodes numbers as int64 when decoding to untyped fields,
// if the JSON data does not contain a "." character, parses as an integer successfully,
// and does not overflow int64. Otherwise, it falls back to default float64 decoding behavior.
//
// If UseNumber is also set, it takes precedence over PreserveInts.
func PreserveInts(d *decodeState) {
d.preserveInts = true
}
func (d *Decoder) PreserveInts() {
d.d.preserveInts = true
}
// DisallowDuplicateFields treats duplicate fields encountered while decoding as an error.
func DisallowDuplicateFields(d *decodeState) {
d.disallowDuplicateFields = true
}
func (d *Decoder) DisallowDuplicateFields() {
d.d.disallowDuplicateFields = true
}
func (d *decodeState) newFieldError(errType strictErrType, field string) *strictError {
if len(d.strictFieldStack) > 0 {
return &strictError{
ErrType: errType,
Path: strings.Join(d.strictFieldStack, "") + "." + field,
}
} else {
return &strictError{
ErrType: errType,
Path: field,
}
}
}
// saveStrictError saves a strict decoding error,
// for reporting at the end of the unmarshal if no other errors occurred.
func (d *decodeState) saveStrictError(err *strictError) {
// prevent excessive numbers of accumulated errors
if len(d.savedStrictErrors) >= 100 {
return
}
// dedupe accumulated strict errors
if d.seenStrictErrors == nil {
d.seenStrictErrors = map[strictError]struct{}{}
}
if _, seen := d.seenStrictErrors[*err]; seen {
return
}
// accumulate the error
d.seenStrictErrors[*err] = struct{}{}
d.savedStrictErrors = append(d.savedStrictErrors, err)
}
func (d *decodeState) appendStrictFieldStackKey(key string) {
if !d.disallowDuplicateFields && !d.disallowUnknownFields {
return
}
if len(d.strictFieldStack) > 0 {
d.strictFieldStack = append(d.strictFieldStack, ".", key)
} else {
d.strictFieldStack = append(d.strictFieldStack, key)
}
}
func (d *decodeState) appendStrictFieldStackIndex(i int) {
if !d.disallowDuplicateFields && !d.disallowUnknownFields {
return
}
d.strictFieldStack = append(d.strictFieldStack, "[", strconv.Itoa(i), "]")
}
type strictErrType string
const (
unknownStrictErrType strictErrType = "unknown field"
duplicateStrictErrType strictErrType = "duplicate field"
)
// strictError is a strict decoding error
// It has an ErrType (either unknown or duplicate)
// and a path to the erroneous field
type strictError struct {
ErrType strictErrType
Path string
}
func (e *strictError) Error() string {
return string(e.ErrType) + " " + strconv.Quote(e.Path)
}
func (e *strictError) FieldPath() string {
return e.Path
}
func (e *strictError) SetFieldPath(path string) {
e.Path = path
}
// UnmarshalStrictError holds errors resulting from use of strict disallow___ decoder directives.
// If this is returned from Unmarshal(), it means the decoding was successful in all other respects.
type UnmarshalStrictError struct {
Errors []error
}
func (e *UnmarshalStrictError) Error() string {
var b strings.Builder
b.WriteString("json: ")
for i, err := range e.Errors {
if i > 0 {
b.WriteString(", ")
}
b.WriteString(err.Error())
}
return b.String()
}
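Putting the pieces above together, the aggregated message reads as in this in-package sketch (the field paths are made up; strictError is unexported, so this only works inside the package):
func exampleStrictMessage() string {
	e := &UnmarshalStrictError{Errors: []error{
		&strictError{ErrType: unknownStrictErrType, Path: "spec.replicas"},
		&strictError{ErrType: duplicateStrictErrType, Path: "metadata.name"},
	}}
	// Returns: json: unknown field "spec.replicas", duplicate field "metadata.name"
	return e.Error()
}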


@ -0,0 +1,610 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package json
// JSON value parser state machine.
// Just about at the limit of what is reasonable to write by hand.
// Some parts are a bit tedious, but overall it nicely factors out the
// otherwise common code from the multiple scanning functions
// in this package (Compact, Indent, checkValid, etc).
//
// This file starts with two simple examples using the scanner
// before diving into the scanner itself.
import (
"strconv"
"sync"
)
// Valid reports whether data is a valid JSON encoding.
func Valid(data []byte) bool {
scan := newScanner()
defer freeScanner(scan)
return checkValid(data, scan) == nil
}
// checkValid verifies that data is valid JSON-encoded data.
// scan is passed in for use by checkValid to avoid an allocation.
// checkValid returns nil or a SyntaxError.
func checkValid(data []byte, scan *scanner) error {
scan.reset()
for _, c := range data {
scan.bytes++
if scan.step(scan, c) == scanError {
return scan.err
}
}
if scan.eof() == scanError {
return scan.err
}
return nil
}
// A SyntaxError is a description of a JSON syntax error.
// [Unmarshal] will return a SyntaxError if the JSON can't be parsed.
type SyntaxError struct {
msg string // description of error
Offset int64 // error occurred after reading Offset bytes
}
func (e *SyntaxError) Error() string { return e.msg }
// A scanner is a JSON scanning state machine.
// Callers call scan.reset and then pass bytes in one at a time
// by calling scan.step(&scan, c) for each byte.
// The return value, referred to as an opcode, tells the
// caller about significant parsing events like beginning
// and ending literals, objects, and arrays, so that the
// caller can follow along if it wishes.
// The return value scanEnd indicates that a single top-level
// JSON value has been completed, *before* the byte that
// just got passed in. (The indication must be delayed in order
// to recognize the end of numbers: is 123 a whole value or
// the beginning of 12345e+6?).
type scanner struct {
// The step is a func to be called to execute the next transition.
// Also tried using an integer constant and a single func
// with a switch, but using the func directly was 10% faster
// on a 64-bit Mac Mini, and it's nicer to read.
step func(*scanner, byte) int
// Reached end of top-level value.
endTop bool
// Stack of what we're in the middle of - array values, object keys, object values.
parseState []int
// Error that happened, if any.
err error
// total bytes consumed, updated by decoder.Decode (and deliberately
// not set to zero by scan.reset)
bytes int64
}
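// As a worked example, feeding the bytes of `{"a":1}` through scan.step
// yields, in order: scanBeginObject ('{'), scanBeginLiteral ('"'),
// scanContinue ('a'), scanContinue ('"'), scanObjectKey (':'),
// scanBeginLiteral ('1'), scanEndObject ('}'); a final scan.eof() then
// reports scanEnd.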
var scannerPool = sync.Pool{
New: func() any {
return &scanner{}
},
}
func newScanner() *scanner {
scan := scannerPool.Get().(*scanner)
// scan.reset by design doesn't set bytes to zero
scan.bytes = 0
scan.reset()
return scan
}
func freeScanner(scan *scanner) {
// Avoid hanging on to too much memory in extreme cases.
if len(scan.parseState) > 1024 {
scan.parseState = nil
}
scannerPool.Put(scan)
}
// These values are returned by the state transition functions
// assigned to scanner.state and the method scanner.eof.
// They give details about the current state of the scan that
// callers might be interested to know about.
// It is okay to ignore the return value of any particular
// call to scanner.state: if one call returns scanError,
// every subsequent call will return scanError too.
const (
// Continue.
scanContinue = iota // uninteresting byte
scanBeginLiteral // end implied by next result != scanContinue
scanBeginObject // begin object
scanObjectKey // just finished object key (string)
scanObjectValue // just finished non-last object value
scanEndObject // end object (implies scanObjectValue if possible)
scanBeginArray // begin array
scanArrayValue // just finished array value
scanEndArray // end array (implies scanArrayValue if possible)
scanSkipSpace // space byte; can skip; known to be last "continue" result
// Stop.
scanEnd // top-level value ended *before* this byte; known to be first "stop" result
scanError // hit an error, scanner.err.
)
// These values are stored in the parseState stack.
// They give the current state of a composite value
// being scanned. If the parser is inside a nested value
// the parseState describes the nested state, outermost at entry 0.
const (
parseObjectKey = iota // parsing object key (before colon)
parseObjectValue // parsing object value (after colon)
parseArrayValue // parsing array value
)
// This limits the max nesting depth to prevent stack overflow.
// This is permitted by https://tools.ietf.org/html/rfc7159#section-9
const maxNestingDepth = 10000
// reset prepares the scanner for use.
// It must be called before calling s.step.
func (s *scanner) reset() {
s.step = stateBeginValue
s.parseState = s.parseState[0:0]
s.err = nil
s.endTop = false
}
// eof tells the scanner that the end of input has been reached.
// It returns a scan status just as s.step does.
func (s *scanner) eof() int {
if s.err != nil {
return scanError
}
if s.endTop {
return scanEnd
}
s.step(s, ' ')
if s.endTop {
return scanEnd
}
if s.err == nil {
s.err = &SyntaxError{"unexpected end of JSON input", s.bytes}
}
return scanError
}
// pushParseState pushes a new parse state p onto the parse stack.
// an error state is returned if maxNestingDepth was exceeded, otherwise successState is returned.
func (s *scanner) pushParseState(c byte, newParseState int, successState int) int {
s.parseState = append(s.parseState, newParseState)
if len(s.parseState) <= maxNestingDepth {
return successState
}
return s.error(c, "exceeded max depth")
}
// popParseState pops a parse state (already obtained) off the stack
// and updates s.step accordingly.
func (s *scanner) popParseState() {
n := len(s.parseState) - 1
s.parseState = s.parseState[0:n]
if n == 0 {
s.step = stateEndTop
s.endTop = true
} else {
s.step = stateEndValue
}
}
func isSpace(c byte) bool {
return c <= ' ' && (c == ' ' || c == '\t' || c == '\r' || c == '\n')
}
// stateBeginValueOrEmpty is the state after reading `[`.
func stateBeginValueOrEmpty(s *scanner, c byte) int {
if isSpace(c) {
return scanSkipSpace
}
if c == ']' {
return stateEndValue(s, c)
}
return stateBeginValue(s, c)
}
// stateBeginValue is the state at the beginning of the input.
func stateBeginValue(s *scanner, c byte) int {
if isSpace(c) {
return scanSkipSpace
}
switch c {
case '{':
s.step = stateBeginStringOrEmpty
return s.pushParseState(c, parseObjectKey, scanBeginObject)
case '[':
s.step = stateBeginValueOrEmpty
return s.pushParseState(c, parseArrayValue, scanBeginArray)
case '"':
s.step = stateInString
return scanBeginLiteral
case '-':
s.step = stateNeg
return scanBeginLiteral
case '0': // beginning of 0.123
s.step = state0
return scanBeginLiteral
case 't': // beginning of true
s.step = stateT
return scanBeginLiteral
case 'f': // beginning of false
s.step = stateF
return scanBeginLiteral
case 'n': // beginning of null
s.step = stateN
return scanBeginLiteral
}
if '1' <= c && c <= '9' { // beginning of 1234.5
s.step = state1
return scanBeginLiteral
}
return s.error(c, "looking for beginning of value")
}
// stateBeginStringOrEmpty is the state after reading `{`.
func stateBeginStringOrEmpty(s *scanner, c byte) int {
if isSpace(c) {
return scanSkipSpace
}
if c == '}' {
n := len(s.parseState)
s.parseState[n-1] = parseObjectValue
return stateEndValue(s, c)
}
return stateBeginString(s, c)
}
// stateBeginString is the state after reading `{"key": value,`.
func stateBeginString(s *scanner, c byte) int {
if isSpace(c) {
return scanSkipSpace
}
if c == '"' {
s.step = stateInString
return scanBeginLiteral
}
return s.error(c, "looking for beginning of object key string")
}
// stateEndValue is the state after completing a value,
// such as after reading `{}` or `true` or `["x"`.
func stateEndValue(s *scanner, c byte) int {
n := len(s.parseState)
if n == 0 {
// Completed top-level before the current byte.
s.step = stateEndTop
s.endTop = true
return stateEndTop(s, c)
}
if isSpace(c) {
s.step = stateEndValue
return scanSkipSpace
}
ps := s.parseState[n-1]
switch ps {
case parseObjectKey:
if c == ':' {
s.parseState[n-1] = parseObjectValue
s.step = stateBeginValue
return scanObjectKey
}
return s.error(c, "after object key")
case parseObjectValue:
if c == ',' {
s.parseState[n-1] = parseObjectKey
s.step = stateBeginString
return scanObjectValue
}
if c == '}' {
s.popParseState()
return scanEndObject
}
return s.error(c, "after object key:value pair")
case parseArrayValue:
if c == ',' {
s.step = stateBeginValue
return scanArrayValue
}
if c == ']' {
s.popParseState()
return scanEndArray
}
return s.error(c, "after array element")
}
return s.error(c, "")
}
// stateEndTop is the state after finishing the top-level value,
// such as after reading `{}` or `[1,2,3]`.
// Only space characters should be seen now.
func stateEndTop(s *scanner, c byte) int {
if !isSpace(c) {
// Complain about non-space byte on next call.
s.error(c, "after top-level value")
}
return scanEnd
}
// stateInString is the state after reading `"`.
func stateInString(s *scanner, c byte) int {
if c == '"' {
s.step = stateEndValue
return scanContinue
}
if c == '\\' {
s.step = stateInStringEsc
return scanContinue
}
if c < 0x20 {
return s.error(c, "in string literal")
}
return scanContinue
}
// stateInStringEsc is the state after reading `"\` during a quoted string.
func stateInStringEsc(s *scanner, c byte) int {
switch c {
case 'b', 'f', 'n', 'r', 't', '\\', '/', '"':
s.step = stateInString
return scanContinue
case 'u':
s.step = stateInStringEscU
return scanContinue
}
return s.error(c, "in string escape code")
}
// stateInStringEscU is the state after reading `"\u` during a quoted string.
func stateInStringEscU(s *scanner, c byte) int {
if '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F' {
s.step = stateInStringEscU1
return scanContinue
}
// not a hex digit
return s.error(c, "in \\u hexadecimal character escape")
}
// stateInStringEscU1 is the state after reading `"\u1` during a quoted string.
func stateInStringEscU1(s *scanner, c byte) int {
if '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F' {
s.step = stateInStringEscU12
return scanContinue
}
// not a hex digit
return s.error(c, "in \\u hexadecimal character escape")
}
// stateInStringEscU12 is the state after reading `"\u12` during a quoted string.
func stateInStringEscU12(s *scanner, c byte) int {
if '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F' {
s.step = stateInStringEscU123
return scanContinue
}
// not a hex digit
return s.error(c, "in \\u hexadecimal character escape")
}
// stateInStringEscU123 is the state after reading `"\u123` during a quoted string.
func stateInStringEscU123(s *scanner, c byte) int {
if '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F' {
s.step = stateInString
return scanContinue
}
// not a hex digit
return s.error(c, "in \\u hexadecimal character escape")
}
// stateNeg is the state after reading `-` during a number.
func stateNeg(s *scanner, c byte) int {
if c == '0' {
s.step = state0
return scanContinue
}
if '1' <= c && c <= '9' {
s.step = state1
return scanContinue
}
return s.error(c, "in numeric literal")
}
// state1 is the state after reading a non-zero integer during a number,
// such as after reading `1` or `100` but not `0`.
func state1(s *scanner, c byte) int {
if '0' <= c && c <= '9' {
s.step = state1
return scanContinue
}
return state0(s, c)
}
// state0 is the state after reading `0` during a number.
func state0(s *scanner, c byte) int {
if c == '.' {
s.step = stateDot
return scanContinue
}
if c == 'e' || c == 'E' {
s.step = stateE
return scanContinue
}
return stateEndValue(s, c)
}
// stateDot is the state after reading the integer and decimal point in a number,
// such as after reading `1.`.
func stateDot(s *scanner, c byte) int {
if '0' <= c && c <= '9' {
s.step = stateDot0
return scanContinue
}
return s.error(c, "after decimal point in numeric literal")
}
// stateDot0 is the state after reading the integer, decimal point, and subsequent
// digits of a number, such as after reading `3.14`.
func stateDot0(s *scanner, c byte) int {
if '0' <= c && c <= '9' {
return scanContinue
}
if c == 'e' || c == 'E' {
s.step = stateE
return scanContinue
}
return stateEndValue(s, c)
}
// stateE is the state after reading the mantissa and e in a number,
// such as after reading `314e` or `0.314e`.
func stateE(s *scanner, c byte) int {
if c == '+' || c == '-' {
s.step = stateESign
return scanContinue
}
return stateESign(s, c)
}
// stateESign is the state after reading the mantissa, e, and sign in a number,
// such as after reading `314e-` or `0.314e+`.
func stateESign(s *scanner, c byte) int {
if '0' <= c && c <= '9' {
s.step = stateE0
return scanContinue
}
return s.error(c, "in exponent of numeric literal")
}
// stateE0 is the state after reading the mantissa, e, optional sign,
// and at least one digit of the exponent in a number,
// such as after reading `314e-2` or `0.314e+1` or `3.14e0`.
func stateE0(s *scanner, c byte) int {
if '0' <= c && c <= '9' {
return scanContinue
}
return stateEndValue(s, c)
}
// stateT is the state after reading `t`.
func stateT(s *scanner, c byte) int {
if c == 'r' {
s.step = stateTr
return scanContinue
}
return s.error(c, "in literal true (expecting 'r')")
}
// stateTr is the state after reading `tr`.
func stateTr(s *scanner, c byte) int {
if c == 'u' {
s.step = stateTru
return scanContinue
}
return s.error(c, "in literal true (expecting 'u')")
}
// stateTru is the state after reading `tru`.
func stateTru(s *scanner, c byte) int {
if c == 'e' {
s.step = stateEndValue
return scanContinue
}
return s.error(c, "in literal true (expecting 'e')")
}
// stateF is the state after reading `f`.
func stateF(s *scanner, c byte) int {
if c == 'a' {
s.step = stateFa
return scanContinue
}
return s.error(c, "in literal false (expecting 'a')")
}
// stateFa is the state after reading `fa`.
func stateFa(s *scanner, c byte) int {
if c == 'l' {
s.step = stateFal
return scanContinue
}
return s.error(c, "in literal false (expecting 'l')")
}
// stateFal is the state after reading `fal`.
func stateFal(s *scanner, c byte) int {
if c == 's' {
s.step = stateFals
return scanContinue
}
return s.error(c, "in literal false (expecting 's')")
}
// stateFals is the state after reading `fals`.
func stateFals(s *scanner, c byte) int {
if c == 'e' {
s.step = stateEndValue
return scanContinue
}
return s.error(c, "in literal false (expecting 'e')")
}
// stateN is the state after reading `n`.
func stateN(s *scanner, c byte) int {
if c == 'u' {
s.step = stateNu
return scanContinue
}
return s.error(c, "in literal null (expecting 'u')")
}
// stateNu is the state after reading `nu`.
func stateNu(s *scanner, c byte) int {
if c == 'l' {
s.step = stateNul
return scanContinue
}
return s.error(c, "in literal null (expecting 'l')")
}
// stateNul is the state after reading `nul`.
func stateNul(s *scanner, c byte) int {
if c == 'l' {
s.step = stateEndValue
return scanContinue
}
return s.error(c, "in literal null (expecting 'l')")
}
// stateError is the state after reaching a syntax error,
// such as after reading `[1}` or `5.1.2`.
func stateError(s *scanner, c byte) int {
return scanError
}
// error records an error and switches to the error state.
func (s *scanner) error(c byte, context string) int {
s.step = stateError
s.err = &SyntaxError{"invalid character " + quoteChar(c) + " " + context, s.bytes}
return scanError
}
// quoteChar formats c as a quoted character literal.
func quoteChar(c byte) string {
// special cases - different from quoted strings
if c == '\'' {
return `'\''`
}
if c == '"' {
return `'"'`
}
// use quoted string with different quotation marks
s := strconv.Quote(string(c))
return "'" + s[1:len(s)-1] + "'"
}
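The step functions above form a table-driven state machine: each state is a function that consumes exactly one byte, installs its successor in s.step, and returns a scan opcode. A minimal standalone sketch of the same pattern for a toy grammar (signed integers); all names here are illustrative and not part of the vendored package:

```go
package main

import "fmt"

// toyScanner mirrors the scanner's shape: a single mutable step function.
type toyScanner struct {
	step func(s *toyScanner, c byte) bool // false means "invalid byte here"
}

// stateStart accepts an optional leading '-'.
func stateStart(s *toyScanner, c byte) bool {
	if c == '-' {
		s.step = stateFirstDigit
		return true
	}
	return stateFirstDigit(s, c)
}

// stateFirstDigit requires at least one digit.
func stateFirstDigit(s *toyScanner, c byte) bool {
	if '0' <= c && c <= '9' {
		s.step = stateDigits
		return true
	}
	return false
}

// stateDigits accepts any number of further digits.
func stateDigits(s *toyScanner, c byte) bool {
	return '0' <= c && c <= '9'
}

func validInt(input string) bool {
	s := &toyScanner{step: stateStart}
	for i := 0; i < len(input); i++ {
		if !s.step(s, input[i]) {
			return false
		}
	}
	return input != "" && input != "-" // must end with at least one digit
}

func main() {
	fmt.Println(validInt("-123"), validInt("42"), validInt("1a2")) // true true false
}
```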


@ -0,0 +1,517 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package json
import (
"bytes"
"io"
)
// A Decoder reads and decodes JSON values from an input stream.
type Decoder struct {
r io.Reader
buf []byte
d decodeState
scanp int // start of unread data in buf
scanned int64 // amount of data already scanned
scan scanner
err error
tokenState int
tokenStack []int
}
// NewDecoder returns a new decoder that reads from r.
//
// The decoder introduces its own buffering and may
// read data from r beyond the JSON values requested.
func NewDecoder(r io.Reader) *Decoder {
return &Decoder{r: r}
}
// UseNumber causes the Decoder to unmarshal a number into an interface{} as a
// [Number] instead of as a float64.
func (dec *Decoder) UseNumber() { dec.d.useNumber = true }
// DisallowUnknownFields causes the Decoder to return an error when the destination
// is a struct and the input contains object keys which do not match any
// non-ignored, exported fields in the destination.
func (dec *Decoder) DisallowUnknownFields() { dec.d.disallowUnknownFields = true }
// Decode reads the next JSON-encoded value from its
// input and stores it in the value pointed to by v.
//
// See the documentation for [Unmarshal] for details about
// the conversion of JSON into a Go value.
func (dec *Decoder) Decode(v any) error {
if dec.err != nil {
return dec.err
}
if err := dec.tokenPrepareForDecode(); err != nil {
return err
}
if !dec.tokenValueAllowed() {
return &SyntaxError{msg: "not at beginning of value", Offset: dec.InputOffset()}
}
// Read whole value into buffer.
n, err := dec.readValue()
if err != nil {
return err
}
dec.d.init(dec.buf[dec.scanp : dec.scanp+n])
dec.scanp += n
// Don't save err from unmarshal into dec.err:
// the connection is still usable since we read a complete JSON
// object from it before the error happened.
err = dec.d.unmarshal(v)
// fixup token streaming state
dec.tokenValueEnd()
return err
}
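Decode consumes exactly one JSON value per call and leaves the rest of the stream buffered, so a reader containing concatenated values can be drained in a loop. A short sketch using the standard encoding/json package, whose Decoder API this vendored copy mirrors:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

func main() {
	dec := json.NewDecoder(strings.NewReader(`{"name":"a"} {"name":"b"}`))
	for {
		var v struct{ Name string }
		if err := dec.Decode(&v); err == io.EOF {
			break // clean end of stream
		} else if err != nil {
			panic(err)
		}
		fmt.Println(v.Name) // prints "a", then "b"
	}
}
```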
// Buffered returns a reader of the data remaining in the Decoder's
// buffer. The reader is valid until the next call to [Decoder.Decode].
func (dec *Decoder) Buffered() io.Reader {
return bytes.NewReader(dec.buf[dec.scanp:])
}
// readValue reads a JSON value into dec.buf.
// It returns the length of the encoding.
func (dec *Decoder) readValue() (int, error) {
dec.scan.reset()
scanp := dec.scanp
var err error
Input:
// help the compiler see that scanp is never negative, so it can remove
// some bounds checks below.
for scanp >= 0 {
// Look in the buffer for a new value.
for ; scanp < len(dec.buf); scanp++ {
c := dec.buf[scanp]
dec.scan.bytes++
switch dec.scan.step(&dec.scan, c) {
case scanEnd:
// scanEnd is delayed one byte so we decrement
// the scanner bytes count by 1 to ensure that
// this value is correct in the next call of Decode.
dec.scan.bytes--
break Input
case scanEndObject, scanEndArray:
// scanEnd is delayed one byte.
// We might block trying to get that byte from src,
// so instead invent a space byte.
if stateEndValue(&dec.scan, ' ') == scanEnd {
scanp++
break Input
}
case scanError:
dec.err = dec.scan.err
return 0, dec.scan.err
}
}
// Did the last read have an error?
// Delayed until now to allow buffer scan.
if err != nil {
if err == io.EOF {
if dec.scan.step(&dec.scan, ' ') == scanEnd {
break Input
}
if nonSpace(dec.buf) {
err = io.ErrUnexpectedEOF
}
}
dec.err = err
return 0, err
}
n := scanp - dec.scanp
err = dec.refill()
scanp = dec.scanp + n
}
return scanp - dec.scanp, nil
}
func (dec *Decoder) refill() error {
// Make room to read more into the buffer.
// First slide down data already consumed.
if dec.scanp > 0 {
dec.scanned += int64(dec.scanp)
n := copy(dec.buf, dec.buf[dec.scanp:])
dec.buf = dec.buf[:n]
dec.scanp = 0
}
// Grow buffer if not large enough.
const minRead = 512
if cap(dec.buf)-len(dec.buf) < minRead {
newBuf := make([]byte, len(dec.buf), 2*cap(dec.buf)+minRead)
copy(newBuf, dec.buf)
dec.buf = newBuf
}
// Read. Delay error for next iteration (after scan).
n, err := dec.r.Read(dec.buf[len(dec.buf):cap(dec.buf)])
dec.buf = dec.buf[0 : len(dec.buf)+n]
return err
}
func nonSpace(b []byte) bool {
for _, c := range b {
if !isSpace(c) {
return true
}
}
return false
}
// An Encoder writes JSON values to an output stream.
type Encoder struct {
w io.Writer
err error
escapeHTML bool
indentBuf []byte
indentPrefix string
indentValue string
}
// NewEncoder returns a new encoder that writes to w.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{w: w, escapeHTML: true}
}
// Encode writes the JSON encoding of v to the stream,
// with insignificant space characters elided,
// followed by a newline character.
//
// See the documentation for [Marshal] for details about the
// conversion of Go values to JSON.
func (enc *Encoder) Encode(v any) error {
if enc.err != nil {
return enc.err
}
e := newEncodeState()
defer encodeStatePool.Put(e)
err := e.marshal(v, encOpts{escapeHTML: enc.escapeHTML})
if err != nil {
return err
}
// Terminate each value with a newline.
// This makes the output look a little nicer
// when debugging, and some kind of space
// is required if the encoded value was a number,
// so that the reader knows there aren't more
// digits coming.
e.WriteByte('\n')
b := e.Bytes()
if enc.indentPrefix != "" || enc.indentValue != "" {
enc.indentBuf, err = appendIndent(enc.indentBuf[:0], b, enc.indentPrefix, enc.indentValue)
if err != nil {
return err
}
b = enc.indentBuf
}
if _, err = enc.w.Write(b); err != nil {
enc.err = err
}
return err
}
// SetIndent instructs the encoder to format each subsequent encoded
// value as if indented by the package-level function Indent(dst, src, prefix, indent).
// Calling SetIndent("", "") disables indentation.
func (enc *Encoder) SetIndent(prefix, indent string) {
enc.indentPrefix = prefix
enc.indentValue = indent
}
// SetEscapeHTML specifies whether problematic HTML characters
// should be escaped inside JSON quoted strings.
// The default behavior is to escape &, <, and > to \u0026, \u003c, and \u003e
// to avoid certain safety problems that can arise when embedding JSON in HTML.
//
// In non-HTML settings where the escaping interferes with the readability
// of the output, SetEscapeHTML(false) disables this behavior.
func (enc *Encoder) SetEscapeHTML(on bool) {
enc.escapeHTML = on
}
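Both options only affect subsequent Encode calls. A quick usage sketch, again with the standard encoding/json, which exposes the same Encoder API:

```go
package main

import (
	"encoding/json"
	"os"
)

func main() {
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")  // pretty-print with a two-space indent
	enc.SetEscapeHTML(false) // keep <, >, and & readable
	_ = enc.Encode(map[string]string{"html": "<b>&</b>"})
	// Output:
	// {
	//   "html": "<b>&</b>"
	// }
}
```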
/*
// RawMessage is a raw encoded JSON value.
// It implements [Marshaler] and [Unmarshaler] and can
// be used to delay JSON decoding or precompute a JSON encoding.
type RawMessage []byte
// MarshalJSON returns m as the JSON encoding of m.
func (m RawMessage) MarshalJSON() ([]byte, error) {
if m == nil {
return []byte("null"), nil
}
return m, nil
}
// UnmarshalJSON sets *m to a copy of data.
func (m *RawMessage) UnmarshalJSON(data []byte) error {
if m == nil {
return errors.New("json.RawMessage: UnmarshalJSON on nil pointer")
}
*m = append((*m)[0:0], data...)
return nil
}
*/
var _ Marshaler = (*RawMessage)(nil)
var _ Unmarshaler = (*RawMessage)(nil)
/*
// A Token holds a value of one of these types:
//
// - [Delim], for the four JSON delimiters [ ] { }
// - bool, for JSON booleans
// - float64, for JSON numbers
// - [Number], for JSON numbers
// - string, for JSON string literals
// - nil, for JSON null
type Token any
*/
const (
tokenTopValue = iota
tokenArrayStart
tokenArrayValue
tokenArrayComma
tokenObjectStart
tokenObjectKey
tokenObjectColon
tokenObjectValue
tokenObjectComma
)
// tokenPrepareForDecode advances dec.tokenState from a separator state to a value state.
func (dec *Decoder) tokenPrepareForDecode() error {
// Note: Not calling peek before switch, to avoid
// putting peek into the standard Decode path.
// peek is only called when using the Token API.
switch dec.tokenState {
case tokenArrayComma:
c, err := dec.peek()
if err != nil {
return err
}
if c != ',' {
return &SyntaxError{"expected comma after array element", dec.InputOffset()}
}
dec.scanp++
dec.tokenState = tokenArrayValue
case tokenObjectColon:
c, err := dec.peek()
if err != nil {
return err
}
if c != ':' {
return &SyntaxError{"expected colon after object key", dec.InputOffset()}
}
dec.scanp++
dec.tokenState = tokenObjectValue
}
return nil
}
func (dec *Decoder) tokenValueAllowed() bool {
switch dec.tokenState {
case tokenTopValue, tokenArrayStart, tokenArrayValue, tokenObjectValue:
return true
}
return false
}
func (dec *Decoder) tokenValueEnd() {
switch dec.tokenState {
case tokenArrayStart, tokenArrayValue:
dec.tokenState = tokenArrayComma
case tokenObjectValue:
dec.tokenState = tokenObjectComma
}
}
/*
// A Delim is a JSON array or object delimiter, one of [ ] { or }.
type Delim rune
func (d Delim) String() string {
return string(d)
}
*/
// Token returns the next JSON token in the input stream.
// At the end of the input stream, Token returns nil, [io.EOF].
//
// Token guarantees that the delimiters [ ] { } it returns are
// properly nested and matched: if Token encounters an unexpected
// delimiter in the input, it will return an error.
//
// The input stream consists of basic JSON values—bool, string,
// number, and null—along with delimiters [ ] { } of type [Delim]
// to mark the start and end of arrays and objects.
// Commas and colons are elided.
func (dec *Decoder) Token() (Token, error) {
for {
c, err := dec.peek()
if err != nil {
return nil, err
}
switch c {
case '[':
if !dec.tokenValueAllowed() {
return dec.tokenError(c)
}
dec.scanp++
dec.tokenStack = append(dec.tokenStack, dec.tokenState)
dec.tokenState = tokenArrayStart
return Delim('['), nil
case ']':
if dec.tokenState != tokenArrayStart && dec.tokenState != tokenArrayComma {
return dec.tokenError(c)
}
dec.scanp++
dec.tokenState = dec.tokenStack[len(dec.tokenStack)-1]
dec.tokenStack = dec.tokenStack[:len(dec.tokenStack)-1]
dec.tokenValueEnd()
return Delim(']'), nil
case '{':
if !dec.tokenValueAllowed() {
return dec.tokenError(c)
}
dec.scanp++
dec.tokenStack = append(dec.tokenStack, dec.tokenState)
dec.tokenState = tokenObjectStart
return Delim('{'), nil
case '}':
if dec.tokenState != tokenObjectStart && dec.tokenState != tokenObjectComma {
return dec.tokenError(c)
}
dec.scanp++
dec.tokenState = dec.tokenStack[len(dec.tokenStack)-1]
dec.tokenStack = dec.tokenStack[:len(dec.tokenStack)-1]
dec.tokenValueEnd()
return Delim('}'), nil
case ':':
if dec.tokenState != tokenObjectColon {
return dec.tokenError(c)
}
dec.scanp++
dec.tokenState = tokenObjectValue
continue
case ',':
if dec.tokenState == tokenArrayComma {
dec.scanp++
dec.tokenState = tokenArrayValue
continue
}
if dec.tokenState == tokenObjectComma {
dec.scanp++
dec.tokenState = tokenObjectKey
continue
}
return dec.tokenError(c)
case '"':
if dec.tokenState == tokenObjectStart || dec.tokenState == tokenObjectKey {
var x string
old := dec.tokenState
dec.tokenState = tokenTopValue
err := dec.Decode(&x)
dec.tokenState = old
if err != nil {
return nil, err
}
dec.tokenState = tokenObjectColon
return x, nil
}
fallthrough
default:
if !dec.tokenValueAllowed() {
return dec.tokenError(c)
}
var x any
if err := dec.Decode(&x); err != nil {
return nil, err
}
return x, nil
}
}
}
func (dec *Decoder) tokenError(c byte) (Token, error) {
var context string
switch dec.tokenState {
case tokenTopValue:
context = " looking for beginning of value"
case tokenArrayStart, tokenArrayValue, tokenObjectValue:
context = " looking for beginning of value"
case tokenArrayComma:
context = " after array element"
case tokenObjectKey:
context = " looking for beginning of object key string"
case tokenObjectColon:
context = " after object key"
case tokenObjectComma:
context = " after object key:value pair"
}
return nil, &SyntaxError{"invalid character " + quoteChar(c) + context, dec.InputOffset()}
}
// More reports whether there is another element in the
// current array or object being parsed.
func (dec *Decoder) More() bool {
c, err := dec.peek()
return err == nil && c != ']' && c != '}'
}
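Token and More together support streaming traversal without decoding whole documents. A sketch with the standard encoding/json, whose token API is identical:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

func main() {
	dec := json.NewDecoder(strings.NewReader(`{"xs":[1,2]}`))
	for {
		tok, err := dec.Token()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		// Prints each token with its Go type, in order:
		// json.Delim {, string xs, json.Delim [, float64 1,
		// float64 2, json.Delim ], json.Delim }
		fmt.Printf("%T %v\n", tok, tok)
	}
}
```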
func (dec *Decoder) peek() (byte, error) {
var err error
for {
for i := dec.scanp; i < len(dec.buf); i++ {
c := dec.buf[i]
if isSpace(c) {
continue
}
dec.scanp = i
return c, nil
}
// buffer has been scanned, now report any error
if err != nil {
return 0, err
}
err = dec.refill()
}
}
// InputOffset returns the input stream byte offset of the current decoder position.
// The offset gives the location of the end of the most recently returned token
// and the beginning of the next token.
func (dec *Decoder) InputOffset() int64 {
return dec.scanned + int64(dec.scanp)
}


@ -0,0 +1,218 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package json
import "unicode/utf8"
// safeSet holds the value true if the ASCII character with the given array
// position can be represented inside a JSON string without any further
// escaping.
//
// All values are true except for the ASCII control characters (0-31), the
// double quote ("), and the backslash character ("\").
var safeSet = [utf8.RuneSelf]bool{
' ': true,
'!': true,
'"': false,
'#': true,
'$': true,
'%': true,
'&': true,
'\'': true,
'(': true,
')': true,
'*': true,
'+': true,
',': true,
'-': true,
'.': true,
'/': true,
'0': true,
'1': true,
'2': true,
'3': true,
'4': true,
'5': true,
'6': true,
'7': true,
'8': true,
'9': true,
':': true,
';': true,
'<': true,
'=': true,
'>': true,
'?': true,
'@': true,
'A': true,
'B': true,
'C': true,
'D': true,
'E': true,
'F': true,
'G': true,
'H': true,
'I': true,
'J': true,
'K': true,
'L': true,
'M': true,
'N': true,
'O': true,
'P': true,
'Q': true,
'R': true,
'S': true,
'T': true,
'U': true,
'V': true,
'W': true,
'X': true,
'Y': true,
'Z': true,
'[': true,
'\\': false,
']': true,
'^': true,
'_': true,
'`': true,
'a': true,
'b': true,
'c': true,
'd': true,
'e': true,
'f': true,
'g': true,
'h': true,
'i': true,
'j': true,
'k': true,
'l': true,
'm': true,
'n': true,
'o': true,
'p': true,
'q': true,
'r': true,
's': true,
't': true,
'u': true,
'v': true,
'w': true,
'x': true,
'y': true,
'z': true,
'{': true,
'|': true,
'}': true,
'~': true,
'\u007f': true,
}
// htmlSafeSet holds the value true if the ASCII character with the given
// array position can be safely represented inside a JSON string, embedded
// inside of HTML <script> tags, without any additional escaping.
//
// All values are true except for the ASCII control characters (0-31), the
// double quote ("), the backslash character ("\"), HTML opening and closing
// tags ("<" and ">"), and the ampersand ("&").
var htmlSafeSet = [utf8.RuneSelf]bool{
' ': true,
'!': true,
'"': false,
'#': true,
'$': true,
'%': true,
'&': false,
'\'': true,
'(': true,
')': true,
'*': true,
'+': true,
',': true,
'-': true,
'.': true,
'/': true,
'0': true,
'1': true,
'2': true,
'3': true,
'4': true,
'5': true,
'6': true,
'7': true,
'8': true,
'9': true,
':': true,
';': true,
'<': false,
'=': true,
'>': false,
'?': true,
'@': true,
'A': true,
'B': true,
'C': true,
'D': true,
'E': true,
'F': true,
'G': true,
'H': true,
'I': true,
'J': true,
'K': true,
'L': true,
'M': true,
'N': true,
'O': true,
'P': true,
'Q': true,
'R': true,
'S': true,
'T': true,
'U': true,
'V': true,
'W': true,
'X': true,
'Y': true,
'Z': true,
'[': true,
'\\': false,
']': true,
'^': true,
'_': true,
'`': true,
'a': true,
'b': true,
'c': true,
'd': true,
'e': true,
'f': true,
'g': true,
'h': true,
'i': true,
'j': true,
'k': true,
'l': true,
'm': true,
'n': true,
'o': true,
'p': true,
'q': true,
'r': true,
's': true,
't': true,
'u': true,
'v': true,
'w': true,
'x': true,
'y': true,
'z': true,
'{': true,
'|': true,
'}': true,
'~': true,
'\u007f': true,
}
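The two tables differ only at '&', '<', and '>'. In practice that is the difference between Marshal (always HTML-safe) and an Encoder with SetEscapeHTML(false); a sketch with the standard encoding/json:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	// Marshal consults the HTML-safe table, escaping &, <, and >.
	b, _ := json.Marshal("a<b&c")
	fmt.Println(string(b)) // "a\u003cb\u0026c"

	// SetEscapeHTML(false) falls back to the plain safeSet behavior.
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(false)
	_ = enc.Encode("a<b&c")
	fmt.Print(buf.String()) // "a<b&c"
}
```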


@ -0,0 +1,38 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package json
import (
"strings"
)
// tagOptions is the string following a comma in a struct field's "json"
// tag, or the empty string. It does not include the leading comma.
type tagOptions string
// parseTag splits a struct field's json tag into its name and
// comma-separated options.
func parseTag(tag string) (string, tagOptions) {
tag, opt, _ := strings.Cut(tag, ",")
return tag, tagOptions(opt)
}
// Contains reports whether a comma-separated list of options
// contains the given optionName. The name must appear whole,
// bounded by commas or by the start or end of the string.
func (o tagOptions) Contains(optionName string) bool {
if len(o) == 0 {
return false
}
s := string(o)
for s != "" {
var name string
name, s, _ = strings.Cut(s, ",")
if name == optionName {
return true
}
}
return false
}
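The same split between the tag name and its options can be reproduced from user code with reflect and strings.Cut; a small sketch (the Pod type is illustrative only):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

type Pod struct {
	Name string `json:"name,omitempty"`
}

func main() {
	tag := reflect.TypeOf(Pod{}).Field(0).Tag.Get("json") // "name,omitempty"
	name, opts, _ := strings.Cut(tag, ",")

	// Scan the options the same way tagOptions.Contains does.
	has := false
	for s := opts; s != ""; {
		var o string
		o, s, _ = strings.Cut(s, ",")
		if o == "omitempty" {
			has = true
		}
	}
	fmt.Println(name, has) // name true
}
```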

e2e/vendor/sigs.k8s.io/json/json.go generated vendored Normal file

@ -0,0 +1,150 @@
/*
Copyright 2021 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package json
import (
gojson "encoding/json"
"fmt"
"io"
internaljson "sigs.k8s.io/json/internal/golang/encoding/json"
)
// Decoder describes the decoding API exposed by `encoding/json#Decoder`
type Decoder interface {
Decode(v interface{}) error
Buffered() io.Reader
Token() (gojson.Token, error)
More() bool
InputOffset() int64
}
// NewDecoderCaseSensitivePreserveInts returns a decoder that matches the behavior of encoding/json#NewDecoder, with the following changes:
// - When unmarshaling into a struct, JSON keys must case-sensitively match `json` tag names (for tagged struct fields)
// or struct field names (for untagged struct fields), or they are treated as unknown fields and discarded.
// - When unmarshaling a number into an interface value, it is unmarshaled as an int64 if
// the JSON data does not contain a "." character and parses as an integer successfully and
// does not overflow int64. Otherwise, the number is unmarshaled as a float64.
// - If a syntax error is returned, it will not be of type encoding/json#SyntaxError,
// but will be recognizable by this package's SyntaxErrorOffset() function.
func NewDecoderCaseSensitivePreserveInts(r io.Reader) Decoder {
d := internaljson.NewDecoder(r)
d.CaseSensitive()
d.PreserveInts()
return d
}
// UnmarshalCaseSensitivePreserveInts parses the JSON-encoded data and stores the result in the value pointed to by v.
//
// UnmarshalCaseSensitivePreserveInts matches the behavior of encoding/json#Unmarshal, with the following changes:
// - When unmarshaling into a struct, JSON keys must case-sensitively match `json` tag names (for tagged struct fields)
// or struct field names (for untagged struct fields), or they are treated as unknown fields and discarded.
// - When unmarshaling a number into an interface value, it is unmarshaled as an int64 if
// the JSON data does not contain a "." character and parses as an integer successfully and
// does not overflow int64. Otherwise, the number is unmarshaled as a float64.
// - If a syntax error is returned, it will not be of type encoding/json#SyntaxError,
// but will be recognizable by this package's SyntaxErrorOffset() function.
func UnmarshalCaseSensitivePreserveInts(data []byte, v interface{}) error {
return internaljson.Unmarshal(
data,
v,
internaljson.CaseSensitive,
internaljson.PreserveInts,
)
}
type StrictOption int
const (
// DisallowDuplicateFields returns strict errors if data contains duplicate fields
DisallowDuplicateFields StrictOption = 1
// DisallowUnknownFields returns strict errors if data contains unknown fields when decoding into typed structs
DisallowUnknownFields StrictOption = 2
)
// UnmarshalStrict parses the JSON-encoded data and stores the result in the value pointed to by v.
// Unmarshaling is performed identically to UnmarshalCaseSensitivePreserveInts(), returning an error on failure.
//
// If parsing succeeds, additional strict checks as selected by `strictOptions` are performed
// and a list of the strict failures (if any) is returned. If no `strictOptions` are selected,
// all supported strict checks are performed.
//
// Strict errors returned will implement the FieldError interface for the specific erroneous fields.
//
// Currently supported strict checks are:
// - DisallowDuplicateFields: ensure the data contains no duplicate fields
// - DisallowUnknownFields: ensure the data contains no unknown fields (when decoding into typed structs)
//
// Additional strict checks may be added in the future.
//
// Note that the strict checks do not change what is stored in v.
// For example, if duplicate fields are present, they will be parsed and stored in v,
// and errors about the duplicate fields will be returned in the strict error list.
func UnmarshalStrict(data []byte, v interface{}, strictOptions ...StrictOption) (strictErrors []error, err error) {
if len(strictOptions) == 0 {
err = internaljson.Unmarshal(data, v,
// options matching UnmarshalCaseSensitivePreserveInts
internaljson.CaseSensitive,
internaljson.PreserveInts,
// all strict options
internaljson.DisallowDuplicateFields,
internaljson.DisallowUnknownFields,
)
} else {
opts := make([]internaljson.UnmarshalOpt, 0, 2+len(strictOptions))
// options matching UnmarshalCaseSensitivePreserveInts
opts = append(opts, internaljson.CaseSensitive, internaljson.PreserveInts)
for _, strictOpt := range strictOptions {
switch strictOpt {
case DisallowDuplicateFields:
opts = append(opts, internaljson.DisallowDuplicateFields)
case DisallowUnknownFields:
opts = append(opts, internaljson.DisallowUnknownFields)
default:
return nil, fmt.Errorf("unknown strict option %d", strictOpt)
}
}
err = internaljson.Unmarshal(data, v, opts...)
}
if strictErr, ok := err.(*internaljson.UnmarshalStrictError); ok {
return strictErr.Errors, nil
}
return nil, err
}
// SyntaxErrorOffset reports whether the specified error is a syntax error
// produced by encoding/json or this package and, if so, returns its byte offset.
func SyntaxErrorOffset(err error) (isSyntaxError bool, offset int64) {
switch err := err.(type) {
case *gojson.SyntaxError:
return true, err.Offset
case *internaljson.SyntaxError:
return true, err.Offset
default:
return false, 0
}
}
// FieldError is an error that provides access to the path of the erroneous field
type FieldError interface {
error
// FieldPath provides the full path of the erroneous field within the json object.
FieldPath() string
// SetFieldPath updates the path of the erroneous field output in the error message.
SetFieldPath(path string)
}
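A usage sketch of the strict helpers: duplicate keys still parse (the last value wins), but are reported separately as strict errors. The exact error text is indicative only:

```go
package main

import (
	"fmt"

	sigsjson "sigs.k8s.io/json"
)

func main() {
	var out struct {
		Value int `json:"value"`
	}
	strictErrs, err := sigsjson.UnmarshalStrict(
		[]byte(`{"value":1,"value":2}`), &out,
		sigsjson.DisallowDuplicateFields,
	)
	// out.Value == 2, err == nil, and strictErrs holds one entry
	// describing the duplicate "value" field.
	fmt.Println(out.Value, err, strictErrs)
}
```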

e2e/vendor/sigs.k8s.io/structured-merge-diff/v4/LICENSE generated vendored Normal file

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -0,0 +1,21 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package fieldpath defines a way for referencing path elements (e.g., an
// index in an array, or a key in a map). It provides types for arranging these
// into paths for referencing nested fields, and for grouping those into sets,
// for referencing multiple nested fields.
package fieldpath


@ -0,0 +1,317 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fieldpath
import (
"fmt"
"sort"
"strings"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
// PathElement describes how to select a child field given a containing object.
type PathElement struct {
// Exactly one of the following fields should be non-nil.
// FieldName selects a single field from a map (reminder: this is also
// how structs are represented). The containing object must be a map.
FieldName *string
// Key selects the list element which has fields matching those given.
// The containing object must be an associative list with map typed
// elements. They are sorted alphabetically.
Key *value.FieldList
// Value selects the list element with the given value. The containing
// object must be an associative list with a primitive typed element
// (i.e., a set).
Value *value.Value
// Index selects a list element by its index number. The containing
// object must be an atomic list.
Index *int
}
// Less provides an order for path elements.
func (e PathElement) Less(rhs PathElement) bool {
return e.Compare(rhs) < 0
}
// Compare provides an order for path elements.
func (e PathElement) Compare(rhs PathElement) int {
if e.FieldName != nil {
if rhs.FieldName == nil {
return -1
}
return strings.Compare(*e.FieldName, *rhs.FieldName)
} else if rhs.FieldName != nil {
return 1
}
if e.Key != nil {
if rhs.Key == nil {
return -1
}
return e.Key.Compare(*rhs.Key)
} else if rhs.Key != nil {
return 1
}
if e.Value != nil {
if rhs.Value == nil {
return -1
}
return value.Compare(*e.Value, *rhs.Value)
} else if rhs.Value != nil {
return 1
}
if e.Index != nil {
if rhs.Index == nil {
return -1
}
if *e.Index < *rhs.Index {
return -1
} else if *e.Index == *rhs.Index {
return 0
}
return 1
} else if rhs.Index != nil {
return 1
}
return 0
}
// Equals returns true if both path elements are equal.
func (e PathElement) Equals(rhs PathElement) bool {
if e.FieldName != nil {
if rhs.FieldName == nil {
return false
}
return *e.FieldName == *rhs.FieldName
} else if rhs.FieldName != nil {
return false
}
if e.Key != nil {
if rhs.Key == nil {
return false
}
return e.Key.Equals(*rhs.Key)
} else if rhs.Key != nil {
return false
}
if e.Value != nil {
if rhs.Value == nil {
return false
}
return value.Equals(*e.Value, *rhs.Value)
} else if rhs.Value != nil {
return false
}
if e.Index != nil {
if rhs.Index == nil {
return false
}
return *e.Index == *rhs.Index
} else if rhs.Index != nil {
return false
}
return true
}
// String presents the path element as a human-readable string.
func (e PathElement) String() string {
switch {
case e.FieldName != nil:
return "." + *e.FieldName
case e.Key != nil:
strs := make([]string, len(*e.Key))
for i, k := range *e.Key {
strs[i] = fmt.Sprintf("%v=%v", k.Name, value.ToString(k.Value))
}
// Keys are supposed to be sorted.
return "[" + strings.Join(strs, ",") + "]"
case e.Value != nil:
return fmt.Sprintf("[=%v]", value.ToString(*e.Value))
case e.Index != nil:
return fmt.Sprintf("[%v]", *e.Index)
default:
return "{{invalid path element}}"
}
}
// KeyByFields is a helper function which constructs a key for an associative
// list type. `nameValues` must have an even number of entries, alternating
// names (type must be string) with values (type must be value.Value). If these
// conditions are not met, KeyByFields will panic; it's intended for static
// construction and shouldn't have user-produced values passed to it.
func KeyByFields(nameValues ...interface{}) *value.FieldList {
if len(nameValues)%2 != 0 {
panic("must have a value for every name")
}
out := value.FieldList{}
for i := 0; i < len(nameValues)-1; i += 2 {
out = append(out, value.Field{Name: nameValues[i].(string), Value: value.NewValueInterface(nameValues[i+1])})
}
out.Sort()
return &out
}
// PathElementSet is a set of path elements.
// TODO: serialize as a list.
type PathElementSet struct {
members sortedPathElements
}
func MakePathElementSet(size int) PathElementSet {
return PathElementSet{
members: make(sortedPathElements, 0, size),
}
}
type sortedPathElements []PathElement
// Implement the sort interface; this would permit bulk creation, which would
// be faster than doing it one at a time via Insert.
func (spe sortedPathElements) Len() int { return len(spe) }
func (spe sortedPathElements) Less(i, j int) bool { return spe[i].Less(spe[j]) }
func (spe sortedPathElements) Swap(i, j int) { spe[i], spe[j] = spe[j], spe[i] }
// Insert adds pe to the set.
func (s *PathElementSet) Insert(pe PathElement) {
loc := sort.Search(len(s.members), func(i int) bool {
return !s.members[i].Less(pe)
})
if loc == len(s.members) {
s.members = append(s.members, pe)
return
}
if s.members[loc].Equals(pe) {
return
}
s.members = append(s.members, PathElement{})
copy(s.members[loc+1:], s.members[loc:])
s.members[loc] = pe
}
// Union returns a set containing elements that appear in either s or s2.
func (s *PathElementSet) Union(s2 *PathElementSet) *PathElementSet {
out := &PathElementSet{}
i, j := 0, 0
for i < len(s.members) && j < len(s2.members) {
if s.members[i].Less(s2.members[j]) {
out.members = append(out.members, s.members[i])
i++
} else {
out.members = append(out.members, s2.members[j])
if !s2.members[j].Less(s.members[i]) {
i++
}
j++
}
}
if i < len(s.members) {
out.members = append(out.members, s.members[i:]...)
}
if j < len(s2.members) {
out.members = append(out.members, s2.members[j:]...)
}
return out
}
// Intersection returns a set containing elements which appear in both s and s2.
func (s *PathElementSet) Intersection(s2 *PathElementSet) *PathElementSet {
out := &PathElementSet{}
i, j := 0, 0
for i < len(s.members) && j < len(s2.members) {
if s.members[i].Less(s2.members[j]) {
i++
} else {
if !s2.members[j].Less(s.members[i]) {
out.members = append(out.members, s.members[i])
i++
}
j++
}
}
return out
}
// Difference returns a set containing elements which appear in s but not in s2.
func (s *PathElementSet) Difference(s2 *PathElementSet) *PathElementSet {
out := &PathElementSet{}
i, j := 0, 0
for i < len(s.members) && j < len(s2.members) {
if s.members[i].Less(s2.members[j]) {
out.members = append(out.members, s.members[i])
i++
} else {
if !s2.members[j].Less(s.members[i]) {
i++
}
j++
}
}
if i < len(s.members) {
out.members = append(out.members, s.members[i:]...)
}
return out
}
// Size returns the number of elements in the set.
func (s *PathElementSet) Size() int { return len(s.members) }
// Has returns true if pe is a member of the set.
func (s *PathElementSet) Has(pe PathElement) bool {
loc := sort.Search(len(s.members), func(i int) bool {
return !s.members[i].Less(pe)
})
if loc == len(s.members) {
return false
}
if s.members[loc].Equals(pe) {
return true
}
return false
}
// Equals returns true if s and s2 have exactly the same members.
func (s *PathElementSet) Equals(s2 *PathElementSet) bool {
if len(s.members) != len(s2.members) {
return false
}
for k := range s.members {
if !s.members[k].Equals(s2.members[k]) {
return false
}
}
return true
}
// Iterate calls f for each PathElement in the set. The order is deterministic.
func (s *PathElementSet) Iterate(f func(PathElement)) {
for _, pe := range s.members {
f(pe)
}
}
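A small usage sketch of PathElement and PathElementSet; Iterate visits members in sorted order, so .image precedes .name:

```go
package main

import (
	"fmt"

	"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
)

func main() {
	name, image := "name", "image"

	a := fieldpath.PathElementSet{}
	a.Insert(fieldpath.PathElement{FieldName: &name})

	b := fieldpath.PathElementSet{}
	b.Insert(fieldpath.PathElement{FieldName: &image})

	u := a.Union(&b)
	u.Iterate(func(pe fieldpath.PathElement) {
		fmt.Println(pe.String()) // .image, then .name
	})
	fmt.Println(u.Has(fieldpath.PathElement{FieldName: &name})) // true
}
```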


@ -0,0 +1,134 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fieldpath
import (
"sigs.k8s.io/structured-merge-diff/v4/value"
)
// SetFromValue creates a set containing every leaf field mentioned in v.
func SetFromValue(v value.Value) *Set {
s := NewSet()
w := objectWalker{
path: Path{},
value: v,
allocator: value.NewFreelistAllocator(),
do: func(p Path) { s.Insert(p) },
}
w.walk()
return s
}
type objectWalker struct {
path Path
value value.Value
allocator value.Allocator
do func(Path)
}
func (w *objectWalker) walk() {
switch {
case w.value.IsNull():
case w.value.IsFloat():
case w.value.IsInt():
case w.value.IsString():
case w.value.IsBool():
// All leaf fields handled the same way (after the switch
// statement).
// Descend
case w.value.IsList():
// If the list were atomic, we'd break here, but we don't have
// a schema, so we can't tell.
l := w.value.AsListUsing(w.allocator)
defer w.allocator.Free(l)
iter := l.RangeUsing(w.allocator)
defer w.allocator.Free(iter)
for iter.Next() {
i, value := iter.Item()
w2 := *w
w2.path = append(w.path, w.GuessBestListPathElement(i, value))
w2.value = value
w2.walk()
}
return
case w.value.IsMap():
// If the map/struct were atomic, we'd break here, but we don't
// have a schema, so we can't tell.
m := w.value.AsMapUsing(w.allocator)
defer w.allocator.Free(m)
m.IterateUsing(w.allocator, func(k string, val value.Value) bool {
w2 := *w
w2.path = append(w.path, PathElement{FieldName: &k})
w2.value = val
w2.walk()
return true
})
return
}
// Leaf fields get added to the set.
if len(w.path) > 0 {
w.do(w.path)
}
}
// AssociativeListCandidateFieldNames lists the field names which are
// considered keys if found in a list element.
var AssociativeListCandidateFieldNames = []string{
"key",
"id",
"name",
}
// GuessBestListPathElement guesses whether item is an associative list
// element, which should be referenced by key(s), or if it is not and therefore
// referencing by index is acceptable. Currently this is done by checking
// whether item has any of the fields listed in
// AssociativeListCandidateFieldNames which have scalar values.
func (w *objectWalker) GuessBestListPathElement(index int, item value.Value) PathElement {
if !item.IsMap() {
// Non map items could be parts of sets or regular "atomic"
// lists. We won't try to guess whether something should be a
// set or not.
return PathElement{Index: &index}
}
m := item.AsMapUsing(w.allocator)
defer w.allocator.Free(m)
var keys value.FieldList
for _, name := range AssociativeListCandidateFieldNames {
f, ok := m.Get(name)
if !ok {
continue
}
// only accept primitive/scalar types as keys.
if f.IsNull() || f.IsMap() || f.IsList() {
continue
}
keys = append(keys, value.Field{Name: name, Value: f})
}
if len(keys) > 0 {
keys.Sort()
return PathElement{Key: &keys}
}
return PathElement{Index: &index}
}
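A sketch of SetFromValue over an untyped object; it assumes Set.Iterate from this package's set.go, which is not shown in this hunk. Note how the container element is keyed by its "name" field rather than by index, per GuessBestListPathElement:

```go
package main

import (
	"fmt"

	"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
	"sigs.k8s.io/structured-merge-diff/v4/value"
)

func main() {
	obj := value.NewValueInterface(map[string]interface{}{
		"spec": map[string]interface{}{
			"replicas": 3,
			"containers": []interface{}{
				map[string]interface{}{"name": "app", "image": "busybox"},
			},
		},
	})

	set := fieldpath.SetFromValue(obj)
	set.Iterate(func(p fieldpath.Path) {
		// Prints leaf paths such as .spec.replicas and
		// .spec.containers[name="app"].image
		fmt.Println(p)
	})
}
```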


@ -0,0 +1,144 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fieldpath
import (
"fmt"
"strings"
)
// APIVersion describes the version of an object or of a fieldset.
type APIVersion string
type VersionedSet interface {
Set() *Set
APIVersion() APIVersion
Applied() bool
}
// versionedSet associates a version with a set.
type versionedSet struct {
set *Set
apiVersion APIVersion
applied bool
}
func NewVersionedSet(set *Set, apiVersion APIVersion, applied bool) VersionedSet {
return versionedSet{
set: set,
apiVersion: apiVersion,
applied: applied,
}
}
func (v versionedSet) Set() *Set {
return v.set
}
func (v versionedSet) APIVersion() APIVersion {
return v.apiVersion
}
func (v versionedSet) Applied() bool {
return v.applied
}
// ManagedFields is a map from manager to VersionedSet (what they own in
// what version).
type ManagedFields map[string]VersionedSet
// Equals returns true if the two managedfields are the same, false
// otherwise.
func (lhs ManagedFields) Equals(rhs ManagedFields) bool {
if len(lhs) != len(rhs) {
return false
}
for manager, left := range lhs {
right, ok := rhs[manager]
if !ok {
return false
}
if left.APIVersion() != right.APIVersion() || left.Applied() != right.Applied() {
return false
}
if !left.Set().Equals(right.Set()) {
return false
}
}
return true
}
// Copy returns a copy of the map. This is mostly a shallow copy; the
// VersionedSet values are shared with the original.
func (lhs ManagedFields) Copy() ManagedFields {
copy := ManagedFields{}
for manager, set := range lhs {
copy[manager] = set
}
return copy
}
// Difference returns the symmetric difference between two ManagedFields. If a
// given user's entry has version X in lhs and version Y in rhs, then
// the return value for that user will be from rhs. If the difference for
// a user is an empty set, that user will not be inserted in the map.
func (lhs ManagedFields) Difference(rhs ManagedFields) ManagedFields {
diff := ManagedFields{}
for manager, left := range lhs {
right, ok := rhs[manager]
if !ok {
if !left.Set().Empty() {
diff[manager] = left
}
continue
}
// If we have sets in both but their version
// differs, we don't even diff and keep the
// entire thing.
if left.APIVersion() != right.APIVersion() {
diff[manager] = right
continue
}
newSet := left.Set().Difference(right.Set()).Union(right.Set().Difference(left.Set()))
if !newSet.Empty() {
diff[manager] = NewVersionedSet(newSet, right.APIVersion(), false)
}
}
for manager, set := range rhs {
if _, ok := lhs[manager]; ok {
// Already done
continue
}
if !set.Set().Empty() {
diff[manager] = set
}
}
return diff
}
func (lhs ManagedFields) String() string {
s := strings.Builder{}
for k, v := range lhs {
fmt.Fprintf(&s, "%s:\n", k)
fmt.Fprintf(&s, "- Applied: %v\n", v.Applied())
fmt.Fprintf(&s, "- APIVersion: %v\n", v.APIVersion())
fmt.Fprintf(&s, "- Set: %v\n", v.Set())
}
return s.String()
}
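A sketch tying the pieces together; it assumes NewSet from this package's set.go (not shown in this hunk), and the printed layout follows the String method above:

```go
package main

import (
	"fmt"

	"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
)

func main() {
	owned := fieldpath.NewVersionedSet(
		fieldpath.NewSet(fieldpath.MakePathOrDie("spec", "replicas")),
		fieldpath.APIVersion("apps/v1"),
		true, // applied
	)
	lhs := fieldpath.ManagedFields{"deploy-controller": owned}
	rhs := fieldpath.ManagedFields{}

	// Everything in lhs is absent from rhs, so the whole entry survives.
	diff := lhs.Difference(rhs)
	fmt.Println(diff)
}
```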


@ -0,0 +1,118 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fieldpath
import (
"fmt"
"strings"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
// Path describes how to select a potentially deeply-nested child field given a
// containing object.
type Path []PathElement
func (fp Path) String() string {
strs := make([]string, len(fp))
for i := range fp {
strs[i] = fp[i].String()
}
return strings.Join(strs, "")
}
// Equals returns true if the two paths are equivalent.
func (fp Path) Equals(fp2 Path) bool {
if len(fp) != len(fp2) {
return false
}
for i := range fp {
if !fp[i].Equals(fp2[i]) {
return false
}
}
return true
}
// Compare provides a lexical order for Paths.
func (fp Path) Compare(rhs Path) int {
i := 0
for {
if i >= len(fp) && i >= len(rhs) {
// Paths are the same length and all items are equal.
return 0
}
if i >= len(fp) {
// LHS is shorter.
return -1
}
if i >= len(rhs) {
// RHS is shorter.
return 1
}
if c := fp[i].Compare(rhs[i]); c != 0 {
return c
}
// The items are equal; continue.
i++
}
}
func (fp Path) Copy() Path {
new := make(Path, len(fp))
copy(new, fp)
return new
}
// MakePath constructs a Path. The parts may be PathElements, ints, strings.
func MakePath(parts ...interface{}) (Path, error) {
var fp Path
for _, p := range parts {
switch t := p.(type) {
case PathElement:
fp = append(fp, t)
case int:
// TODO: Understand schema and object and convert this to the
// FieldSpecifier below if appropriate.
fp = append(fp, PathElement{Index: &t})
case string:
fp = append(fp, PathElement{FieldName: &t})
case *value.FieldList:
if len(*t) == 0 {
return nil, fmt.Errorf("associative list key type path elements must have at least one key (got zero)")
}
fp = append(fp, PathElement{Key: t})
case value.Value:
// TODO: understand schema and verify that this is a set type
// TODO: make a copy of t
fp = append(fp, PathElement{Value: &t})
default:
return nil, fmt.Errorf("unable to make %#v into a path element", p)
}
}
return fp, nil
}
// MakePathOrDie panics if parts can't be turned into a path. Good for things
// that are known at compile time.
func MakePathOrDie(parts ...interface{}) Path {
fp, err := MakePath(parts...)
if err != nil {
panic(err)
}
return fp
}
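// Editor's sketch, not part of the upstream file: ints become list indices,
// strings become field names, and unsupported part types surface as errors.
func exampleMakePath() {
p := MakePathOrDie("spec", "containers", 2, "name")
fmt.Println(p) // .spec.containers[2].name
if _, err := MakePath(3.14); err != nil {
fmt.Println(err) // unable to make 3.14 into a path element
}
}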


@ -0,0 +1,114 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fieldpath
import (
"sort"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
// PathElementValueMap is a map from PathElement to value.Value.
//
// TODO(apelisse): We have multiple very similar implementation of this
// for PathElementSet and SetNodeMap, so we could probably share the
// code.
type PathElementValueMap struct {
valueMap PathElementMap
}
func MakePathElementValueMap(size int) PathElementValueMap {
return PathElementValueMap{
valueMap: MakePathElementMap(size),
}
}
type sortedPathElementValues []pathElementValue
// Implement the sort interface; this would permit bulk creation, which would
// be faster than doing it one at a time via Insert.
func (spev sortedPathElementValues) Len() int { return len(spev) }
func (spev sortedPathElementValues) Less(i, j int) bool {
return spev[i].PathElement.Less(spev[j].PathElement)
}
func (spev sortedPathElementValues) Swap(i, j int) { spev[i], spev[j] = spev[j], spev[i] }
// Insert adds the pathelement and associated value in the map.
// If insert is called twice with the same PathElement, the value is replaced.
func (s *PathElementValueMap) Insert(pe PathElement, v value.Value) {
s.valueMap.Insert(pe, v)
}
// Get retrieves the value associated with the given PathElement from the map.
// (nil, false) is returned if there is no such PathElement.
func (s *PathElementValueMap) Get(pe PathElement) (value.Value, bool) {
v, ok := s.valueMap.Get(pe)
if !ok {
return nil, false
}
return v.(value.Value), true
}
// PathElementMap is a map from PathElement to interface{}.
type PathElementMap struct {
members sortedPathElementValues
}
type pathElementValue struct {
PathElement PathElement
Value interface{}
}
func MakePathElementMap(size int) PathElementMap {
return PathElementMap{
members: make(sortedPathElementValues, 0, size),
}
}
// Insert adds the pathelement and associated value in the map.
// If insert is called twice with the same PathElement, the value is replaced.
func (s *PathElementMap) Insert(pe PathElement, v interface{}) {
loc := sort.Search(len(s.members), func(i int) bool {
return !s.members[i].PathElement.Less(pe)
})
if loc == len(s.members) {
s.members = append(s.members, pathElementValue{pe, v})
return
}
if s.members[loc].PathElement.Equals(pe) {
s.members[loc].Value = v
return
}
s.members = append(s.members, pathElementValue{})
copy(s.members[loc+1:], s.members[loc:])
s.members[loc] = pathElementValue{pe, v}
}
// Get retrieves the value associated with the given PathElement from the map.
// (nil, false) is returned if there is no such PathElement.
func (s *PathElementMap) Get(pe PathElement) (interface{}, bool) {
loc := sort.Search(len(s.members), func(i int) bool {
return !s.members[i].PathElement.Less(pe)
})
if loc == len(s.members) {
return nil, false
}
if s.members[loc].PathElement.Equals(pe) {
return s.members[loc].Value, true
}
return nil, false
}
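// Editor's sketch, not part of the upstream file (fmt import assumed): Insert
// keeps entries sorted by PathElement so Get can binary-search, and inserting
// the same PathElement twice replaces the value.
func examplePathElementMap() {
name := "name"
m := MakePathElementMap(1)
m.Insert(PathElement{FieldName: &name}, 1)
m.Insert(PathElement{FieldName: &name}, 2) // replaces the previous value
if v, ok := m.Get(PathElement{FieldName: &name}); ok {
fmt.Println(v) // 2
}
}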


@ -0,0 +1,168 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fieldpath
import (
"errors"
"fmt"
"io"
"strconv"
"strings"
jsoniter "github.com/json-iterator/go"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
var ErrUnknownPathElementType = errors.New("unknown path element type")
const (
// Field indicates that the content of this path element is a field's name
peField = "f"
// Value indicates that the content of this path element is a field's value
peValue = "v"
// Index indicates that the content of this path element is an index in an array
peIndex = "i"
// Key indicates that the content of this path element is a list of key-value
// pairs identifying an associative-list item
peKey = "k"
// Separator separates the type of a path element from the contents
peSeparator = ":"
)
var (
peFieldSepBytes = []byte(peField + peSeparator)
peValueSepBytes = []byte(peValue + peSeparator)
peIndexSepBytes = []byte(peIndex + peSeparator)
peKeySepBytes = []byte(peKey + peSeparator)
peSepBytes = []byte(peSeparator)
)
// DeserializePathElement parses a serialized path element
func DeserializePathElement(s string) (PathElement, error) {
b := []byte(s)
if len(b) < 2 {
return PathElement{}, fmt.Errorf("key must be at least 2 characters long: %v", s)
}
typeSep, b := b[:2], b[2:]
if typeSep[1] != peSepBytes[0] {
return PathElement{}, fmt.Errorf("missing colon: %v", s)
}
switch typeSep[0] {
case peFieldSepBytes[0]:
// Slice s rather than convert b, to save on
// allocations.
str := s[2:]
return PathElement{
FieldName: &str,
}, nil
case peValueSepBytes[0]:
iter := readPool.BorrowIterator(b)
defer readPool.ReturnIterator(iter)
v, err := value.ReadJSONIter(iter)
if err != nil {
return PathElement{}, err
}
return PathElement{Value: &v}, nil
case peKeySepBytes[0]:
iter := readPool.BorrowIterator(b)
defer readPool.ReturnIterator(iter)
fields := value.FieldList{}
iter.ReadObjectCB(func(iter *jsoniter.Iterator, key string) bool {
v, err := value.ReadJSONIter(iter)
if err != nil {
iter.Error = err
return false
}
fields = append(fields, value.Field{Name: key, Value: v})
return true
})
fields.Sort()
return PathElement{Key: &fields}, iter.Error
case peIndexSepBytes[0]:
i, err := strconv.Atoi(s[2:])
if err != nil {
return PathElement{}, err
}
return PathElement{
Index: &i,
}, nil
default:
return PathElement{}, ErrUnknownPathElementType
}
}
var (
readPool = jsoniter.NewIterator(jsoniter.ConfigCompatibleWithStandardLibrary).Pool()
writePool = jsoniter.NewStream(jsoniter.ConfigCompatibleWithStandardLibrary, nil, 1024).Pool()
)
// SerializePathElement serializes a path element
func SerializePathElement(pe PathElement) (string, error) {
buf := strings.Builder{}
err := serializePathElementToWriter(&buf, pe)
return buf.String(), err
}
func serializePathElementToWriter(w io.Writer, pe PathElement) error {
stream := writePool.BorrowStream(w)
defer writePool.ReturnStream(stream)
switch {
case pe.FieldName != nil:
if _, err := stream.Write(peFieldSepBytes); err != nil {
return err
}
stream.WriteRaw(*pe.FieldName)
case pe.Key != nil:
if _, err := stream.Write(peKeySepBytes); err != nil {
return err
}
stream.WriteObjectStart()
for i, field := range *pe.Key {
if i > 0 {
stream.WriteMore()
}
stream.WriteObjectField(field.Name)
value.WriteJSONStream(field.Value, stream)
}
stream.WriteObjectEnd()
case pe.Value != nil:
if _, err := stream.Write(peValueSepBytes); err != nil {
return err
}
value.WriteJSONStream(*pe.Value, stream)
case pe.Index != nil:
if _, err := stream.Write(peIndexSepBytes); err != nil {
return err
}
stream.WriteInt(*pe.Index)
default:
return errors.New("invalid PathElement")
}
b := stream.Buffer()
err := stream.Flush()
// Help jsoniter manage its buffers--without this, the next
// use of the stream is likely to require an allocation. Look
// at the jsoniter stream code to understand why. They were probably
// optimizing for folks using the buffer directly.
stream.SetBuffer(b[:0])
return err
}
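// Editor's sketch, not part of the upstream file: the wire format is a
// one-letter type tag, a colon, then the content.
func examplePathElementSerialization() {
name := "metadata"
s, _ := SerializePathElement(PathElement{FieldName: &name})
fmt.Println(s) // f:metadata
pe, _ := DeserializePathElement("i:3")
fmt.Println(*pe.Index) // 3
}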


@ -0,0 +1,238 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fieldpath
import (
"bytes"
"io"
"unsafe"
jsoniter "github.com/json-iterator/go"
)
func (s *Set) ToJSON() ([]byte, error) {
buf := bytes.Buffer{}
err := s.ToJSONStream(&buf)
if err != nil {
return nil, err
}
return buf.Bytes(), nil
}
func (s *Set) ToJSONStream(w io.Writer) error {
stream := writePool.BorrowStream(w)
defer writePool.ReturnStream(stream)
var r reusableBuilder
stream.WriteObjectStart()
err := s.emitContentsV1(false, stream, &r)
if err != nil {
return err
}
stream.WriteObjectEnd()
return stream.Flush()
}
func manageMemory(stream *jsoniter.Stream) error {
// Help jsoniter manage its buffers--without this, it does a bunch of
// allocations that are not necessary. They were probably optimizing
// for folks using the buffer directly.
b := stream.Buffer()
if len(b) > 4096 || cap(b)-len(b) < 2048 {
if err := stream.Flush(); err != nil {
return err
}
stream.SetBuffer(b[:0])
}
return nil
}
type reusableBuilder struct {
bytes.Buffer
}
func (r *reusableBuilder) unsafeString() string {
b := r.Bytes()
return *(*string)(unsafe.Pointer(&b))
}
func (r *reusableBuilder) reset() *bytes.Buffer {
r.Reset()
return &r.Buffer
}
func (s *Set) emitContentsV1(includeSelf bool, stream *jsoniter.Stream, r *reusableBuilder) error {
mi, ci := 0, 0
first := true
preWrite := func() {
if first {
first = false
return
}
stream.WriteMore()
}
if includeSelf && !(len(s.Members.members) == 0 && len(s.Children.members) == 0) {
preWrite()
stream.WriteObjectField(".")
stream.WriteEmptyObject()
}
for mi < len(s.Members.members) && ci < len(s.Children.members) {
mpe := s.Members.members[mi]
cpe := s.Children.members[ci].pathElement
if c := mpe.Compare(cpe); c < 0 {
preWrite()
if err := serializePathElementToWriter(r.reset(), mpe); err != nil {
return err
}
stream.WriteObjectField(r.unsafeString())
stream.WriteEmptyObject()
mi++
} else if c > 0 {
preWrite()
if err := serializePathElementToWriter(r.reset(), cpe); err != nil {
return err
}
stream.WriteObjectField(r.unsafeString())
stream.WriteObjectStart()
if err := s.Children.members[ci].set.emitContentsV1(false, stream, r); err != nil {
return err
}
stream.WriteObjectEnd()
ci++
} else {
preWrite()
if err := serializePathElementToWriter(r.reset(), cpe); err != nil {
return err
}
stream.WriteObjectField(r.unsafeString())
stream.WriteObjectStart()
if err := s.Children.members[ci].set.emitContentsV1(true, stream, r); err != nil {
return err
}
stream.WriteObjectEnd()
mi++
ci++
}
}
for mi < len(s.Members.members) {
mpe := s.Members.members[mi]
preWrite()
if err := serializePathElementToWriter(r.reset(), mpe); err != nil {
return err
}
stream.WriteObjectField(r.unsafeString())
stream.WriteEmptyObject()
mi++
}
for ci < len(s.Children.members) {
cpe := s.Children.members[ci].pathElement
preWrite()
if err := serializePathElementToWriter(r.reset(), cpe); err != nil {
return err
}
stream.WriteObjectField(r.unsafeString())
stream.WriteObjectStart()
if err := s.Children.members[ci].set.emitContentsV1(false, stream, r); err != nil {
return err
}
stream.WriteObjectEnd()
ci++
}
return manageMemory(stream)
}
// FromJSON clears s and reads a JSON formatted set structure.
func (s *Set) FromJSON(r io.Reader) error {
// The iterator pool is completely useless for memory management, grrr.
iter := jsoniter.Parse(jsoniter.ConfigCompatibleWithStandardLibrary, r, 4096)
found, _ := readIterV1(iter)
if found == nil {
*s = Set{}
} else {
*s = *found
}
return iter.Error
}
// readIterV1 reports whether this subtree is also (or only) a member of the
// parent; children is nil if there are no further children.
func readIterV1(iter *jsoniter.Iterator) (children *Set, isMember bool) {
iter.ReadMapCB(func(iter *jsoniter.Iterator, key string) bool {
if key == "." {
isMember = true
iter.Skip()
return true
}
pe, err := DeserializePathElement(key)
if err == ErrUnknownPathElementType {
// Ignore these-- a future version maybe knows what
// they are. We drop these completely rather than try
// to preserve things we don't understand.
iter.Skip()
return true
} else if err != nil {
iter.ReportError("parsing key as path element", err.Error())
iter.Skip()
return true
}
grandchildren, childIsMember := readIterV1(iter)
if childIsMember {
if children == nil {
children = &Set{}
}
m := &children.Members.members
// Since we expect that most of the time these will have been
// serialized in the right order, we just verify that and append.
appendOK := len(*m) == 0 || (*m)[len(*m)-1].Less(pe)
if appendOK {
*m = append(*m, pe)
} else {
children.Members.Insert(pe)
}
}
if grandchildren != nil {
if children == nil {
children = &Set{}
}
// Since we expect that most of the time these will have been
// serialized in the right order, we just verify that and append.
m := &children.Children.members
appendOK := len(*m) == 0 || (*m)[len(*m)-1].pathElement.Less(pe)
if appendOK {
*m = append(*m, setNode{pe, grandchildren})
} else {
*children.Children.Descend(pe) = *grandchildren
}
}
return true
})
if children == nil {
isMember = true
}
return children, isMember
}
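// Editor's sketch, not part of the upstream file (fmt import assumed): in the
// V1 encoding, members serialize as empty objects, children as nested
// objects, and a "." key marks a child that is also a member.
func exampleSetJSON() {
s := NewSet(
MakePathOrDie("metadata", "name"),
MakePathOrDie("spec"),
)
b, _ := s.ToJSON()
fmt.Println(string(b)) // {"f:metadata":{"f:name":{}},"f:spec":{}}
parsed := &Set{}
_ = parsed.FromJSON(bytes.NewReader(b))
fmt.Println(parsed.Equals(s)) // true
}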


@ -0,0 +1,782 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fieldpath
import (
"fmt"
"sort"
"strings"
"sigs.k8s.io/structured-merge-diff/v4/schema"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
// Set identifies a set of fields.
type Set struct {
// Members lists fields that are part of the set.
// TODO: will be serialized as a list of path elements.
Members PathElementSet
// Children lists child fields which themselves have children that are
// members of the set. Appearance in this list does not imply membership.
// Note: this is a tree, not an arbitrary graph.
Children SetNodeMap
}
// NewSet makes a set from a list of paths.
func NewSet(paths ...Path) *Set {
s := &Set{}
for _, p := range paths {
s.Insert(p)
}
return s
}
// Insert adds the field identified by `p` to the set. Important: parent fields
// are NOT added to the set; if that is desired, they must be added separately.
func (s *Set) Insert(p Path) {
if len(p) == 0 {
// Zero-length path identifies the entire object; we don't
// track top-level ownership.
return
}
for {
if len(p) == 1 {
s.Members.Insert(p[0])
return
}
s = s.Children.Descend(p[0])
p = p[1:]
}
}
// Union returns a Set containing elements which appear in either s or s2.
func (s *Set) Union(s2 *Set) *Set {
return &Set{
Members: *s.Members.Union(&s2.Members),
Children: *s.Children.Union(&s2.Children),
}
}
// Intersection returns a Set containing leaf elements which appear in both s
// and s2. Intersection can be constructed from Union and Difference operations
// (example in the tests) but it's much faster to do it in one pass.
func (s *Set) Intersection(s2 *Set) *Set {
return &Set{
Members: *s.Members.Intersection(&s2.Members),
Children: *s.Children.Intersection(&s2.Children),
}
}
// Difference returns a Set containing elements which:
// * appear in s
// * do not appear in s2
//
// In other words, for leaf fields, this acts like a regular set difference
// operation. When non leaf fields are compared with leaf fields ("parents"
// which contain "children"), the effect is:
// * parent - child = parent
// * child - parent = {empty set}
func (s *Set) Difference(s2 *Set) *Set {
return &Set{
Members: *s.Members.Difference(&s2.Members),
Children: *s.Children.Difference(s2),
}
}
// RecursiveDifference returns a Set containing elements which:
// * appear in s
// * do not appear in s2
//
// Compared to a regular difference,
// this removes every field **and its children** from s that is contained in s2.
//
// For example, with s containing `a.b.c` and s2 containing `a.b`,
// a RecursiveDifference will result in `a`, as the entire node `a.b` gets removed.
func (s *Set) RecursiveDifference(s2 *Set) *Set {
return &Set{
Members: *s.Members.Difference(&s2.Members),
Children: *s.Children.RecursiveDifference(s2),
}
}
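// Editor's sketch, not part of the upstream file: Difference subtracts only
// matching leaves, while RecursiveDifference also prunes whole subtrees whose
// parent appears as a leaf in the subtrahend.
func exampleSetDifference() {
a := NewSet(
MakePathOrDie("metadata", "name"),
MakePathOrDie("spec", "replicas"),
)
parent := NewSet(MakePathOrDie("spec"))
fmt.Println(a.Difference(parent).Has(MakePathOrDie("spec", "replicas"))) // true
fmt.Println(a.RecursiveDifference(parent).Has(MakePathOrDie("spec", "replicas"))) // false
}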
// EnsureNamedFieldsAreMembers returns a Set that contains all the
// fields in s, as well as all the named fields that are typically not
// included. For example, a set made of "a.b.c" will end-up also owning
// "a" if it's a named fields but not "a.b" if it's a map.
func (s *Set) EnsureNamedFieldsAreMembers(sc *schema.Schema, tr schema.TypeRef) *Set {
members := PathElementSet{
members: make(sortedPathElements, 0, s.Members.Size()+len(s.Children.members)),
}
atom, _ := sc.Resolve(tr)
members.members = append(members.members, s.Members.members...)
for _, node := range s.Children.members {
// Only insert named fields.
if node.pathElement.FieldName != nil && atom.Map != nil {
if _, has := atom.Map.FindField(*node.pathElement.FieldName); has {
members.Insert(node.pathElement)
}
}
}
return &Set{
Members: members,
Children: *s.Children.EnsureNamedFieldsAreMembers(sc, tr),
}
}
// MakePrefixMatcherOrDie is the same as PrefixMatcher except it panics if parts can't be
// turned into a SetMatcher.
func MakePrefixMatcherOrDie(parts ...interface{}) *SetMatcher {
result, err := PrefixMatcher(parts...)
if err != nil {
panic(err)
}
return result
}
// PrefixMatcher creates a SetMatcher that matches all field paths prefixed by the given list of matcher path parts.
// The matcher parts may be any of:
//
// - PathElementMatcher - for wildcards, `MatchAnyPathElement()` can be used as well.
// - PathElement - for any path element
// - value.FieldList - for listMap keys
// - value.Value - for scalar list elements
// - string - For field names
// - int - for array indices
func PrefixMatcher(parts ...interface{}) (*SetMatcher, error) {
current := MatchAnySet() // match all field path suffixes
for i := len(parts) - 1; i >= 0; i-- {
part := parts[i]
var pattern PathElementMatcher
switch t := part.(type) {
case PathElementMatcher:
// any path matcher, including wildcard
pattern = t
case PathElement:
// any path element
pattern = PathElementMatcher{PathElement: t}
case *value.FieldList:
// a listMap key
if len(*t) == 0 {
return nil, fmt.Errorf("associative list key type path elements must have at least one key (got zero)")
}
pattern = PathElementMatcher{PathElement: PathElement{Key: t}}
case value.Value:
// a scalar or set-type list element
pattern = PathElementMatcher{PathElement: PathElement{Value: &t}}
case string:
// a plain field name
pattern = PathElementMatcher{PathElement: PathElement{FieldName: &t}}
case int:
// a plain list index
pattern = PathElementMatcher{PathElement: PathElement{Index: &t}}
default:
return nil, fmt.Errorf("unexpected type %T", t)
}
current = &SetMatcher{
members: []*SetMemberMatcher{{
Path: pattern,
Child: current,
}},
}
}
return current, nil
}
// MatchAnyPathElement returns a PathElementMatcher that matches any path element.
func MatchAnyPathElement() PathElementMatcher {
return PathElementMatcher{Wildcard: true}
}
// MatchAnySet returns a SetMatcher that matches any set.
func MatchAnySet() *SetMatcher {
return &SetMatcher{wildcard: true}
}
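// Editor's sketch, not part of the upstream file: keep only paths of the form
// .spec.<anything>.image by putting a wildcard in the middle of the prefix.
func examplePrefixMatcher() {
s := NewSet(
MakePathOrDie("spec", "c1", "image"),
MakePathOrDie("spec", "c1", "name"),
)
only := s.FilterIncludeMatches(MakePrefixMatcherOrDie("spec", MatchAnyPathElement(), "image"))
fmt.Println(only.Has(MakePathOrDie("spec", "c1", "image"))) // true
fmt.Println(only.Has(MakePathOrDie("spec", "c1", "name"))) // false
}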
// NewSetMatcher returns a new SetMatcher.
// Wildcard members take precedence over non-wildcard members;
// all non-wildcard members are ignored if there is a wildcard member.
func NewSetMatcher(wildcard bool, members ...*SetMemberMatcher) *SetMatcher {
sort.Sort(sortedMemberMatcher(members))
return &SetMatcher{wildcard: wildcard, members: members}
}
// SetMatcher defines a matcher that matches fields in a Set.
// SetMatcher is structured much like a Set but with wildcard support.
type SetMatcher struct {
// wildcard indicates that all members and children are included in the match.
// If set, the members field is ignored.
wildcard bool
// members provides patterns to match the members of a Set.
// Wildcard members are sorted before non-wildcards and take precedence over
// non-wildcard members.
members sortedMemberMatcher
}
type sortedMemberMatcher []*SetMemberMatcher
func (s sortedMemberMatcher) Len() int { return len(s) }
func (s sortedMemberMatcher) Less(i, j int) bool { return s[i].Path.Less(s[j].Path) }
func (s sortedMemberMatcher) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s sortedMemberMatcher) Find(p PathElementMatcher) (location int, ok bool) {
return sort.Find(len(s), func(i int) int {
return s[i].Path.Compare(p)
})
}
// Merge merges s and s2 and returns a SetMatcher that matches all field paths matched by either s or s2.
// During the merge, members of s and s2 with the same PathElementMatcher are merged into a single member
// with the children of each merged by calling this function recursively.
func (s *SetMatcher) Merge(s2 *SetMatcher) *SetMatcher {
if s.wildcard || s2.wildcard {
return NewSetMatcher(true)
}
merged := make(sortedMemberMatcher, len(s.members), len(s.members)+len(s2.members))
copy(merged, s.members)
for _, m := range s2.members {
if i, ok := s.members.Find(m.Path); ok {
// since merged is a shallow copy, do not modify elements in place
merged[i] = &SetMemberMatcher{
Path: merged[i].Path,
Child: merged[i].Child.Merge(m.Child),
}
} else {
merged = append(merged, m)
}
}
return NewSetMatcher(false, merged...) // sort happens here
}
// SetMemberMatcher defines a matcher that matches the members of a Set.
// SetMemberMatcher is structured much like the elements of a SetNodeMap, but
// with wildcard support.
type SetMemberMatcher struct {
// Path provides a matcher to match members of a Set.
// If Path is a wildcard, all members of a Set are included in the match.
// Otherwise, if any Path is Equal to a member of a Set, that member is
// included in the match and the children of that member are matched
// against the Child matcher.
Path PathElementMatcher
// Child provides a matcher to use for the children of matched members of a Set.
Child *SetMatcher
}
// PathElementMatcher defines a path matcher for a PathElement.
type PathElementMatcher struct {
// Wildcard indicates that all PathElements are matched by this matcher.
// If set, PathElement is ignored.
Wildcard bool
// PathElement indicates that a PathElement is matched if it is Equal
// to this PathElement.
PathElement
}
func (p PathElementMatcher) Equals(p2 PathElementMatcher) bool {
// Wildcard matchers ignore their PathElement; two matchers are equal when
// their wildcard flags match and, for non-wildcards, their elements match.
if p.Wildcard || p2.Wildcard {
return p.Wildcard == p2.Wildcard
}
return p.PathElement.Equals(p2.PathElement)
}
func (p PathElementMatcher) Less(p2 PathElementMatcher) bool {
if p.Wildcard && !p2.Wildcard {
return true
} else if p2.Wildcard {
return false
}
return p.PathElement.Less(p2.PathElement)
}
func (p PathElementMatcher) Compare(p2 PathElementMatcher) int {
if p.Wildcard && !p2.Wildcard {
return -1
} else if !p.Wildcard && p2.Wildcard {
return 1
}
return p.PathElement.Compare(p2.PathElement)
}
// FilterIncludeMatches returns a Set with only the field paths that match.
func (s *Set) FilterIncludeMatches(pattern *SetMatcher) *Set {
if pattern.wildcard {
return s
}
members := PathElementSet{}
for _, m := range s.Members.members {
for _, pm := range pattern.members {
if pm.Path.Wildcard || pm.Path.PathElement.Equals(m) {
members.Insert(m)
break
}
}
}
return &Set{
Members: members,
Children: *s.Children.FilterIncludeMatches(pattern),
}
}
// Size returns the number of members of the set.
func (s *Set) Size() int {
return s.Members.Size() + s.Children.Size()
}
// Empty returns true if there are no members of the set. It is a separate
// function from Size since it's common to check whether size > 0, and
// potentially much faster to return as soon as a single element is found.
func (s *Set) Empty() bool {
if s.Members.Size() > 0 {
return false
}
return s.Children.Empty()
}
// Has returns true if the field referenced by `p` is a member of the set.
func (s *Set) Has(p Path) bool {
if len(p) == 0 {
// No one owns "the entire object"
return false
}
for {
if len(p) == 1 {
return s.Members.Has(p[0])
}
var ok bool
s, ok = s.Children.Get(p[0])
if !ok {
return false
}
p = p[1:]
}
}
// Equals returns true if s and s2 have exactly the same members.
func (s *Set) Equals(s2 *Set) bool {
return s.Members.Equals(&s2.Members) && s.Children.Equals(&s2.Children)
}
// String returns the set one element per line.
func (s *Set) String() string {
elements := []string{}
s.Iterate(func(p Path) {
elements = append(elements, p.String())
})
return strings.Join(elements, "\n")
}
// Iterate calls f once for each field that is a member of the set (preorder
// DFS). The path passed to f will be reused so make a copy if you wish to keep
// it.
func (s *Set) Iterate(f func(Path)) {
s.iteratePrefix(Path{}, f)
}
func (s *Set) iteratePrefix(prefix Path, f func(Path)) {
s.Members.Iterate(func(pe PathElement) { f(append(prefix, pe)) })
s.Children.iteratePrefix(prefix, f)
}
// WithPrefix returns the subset of paths which begin with the given prefix,
// with the prefix not included.
func (s *Set) WithPrefix(pe PathElement) *Set {
subset, ok := s.Children.Get(pe)
if !ok {
return NewSet()
}
return subset
}
// Leaves returns a set containing only the leaf paths
// of a set.
func (s *Set) Leaves() *Set {
leaves := PathElementSet{}
im := 0
ic := 0
// any members that are not also children are leaves
outer:
for im < len(s.Members.members) {
member := s.Members.members[im]
for ic < len(s.Children.members) {
d := member.Compare(s.Children.members[ic].pathElement)
if d == 0 {
ic++
im++
continue outer
} else if d < 0 {
break
} else /* if d > 0 */ {
ic++
}
}
leaves.members = append(leaves.members, member)
im++
}
return &Set{
Members: leaves,
Children: *s.Children.Leaves(),
}
}
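// Editor's sketch, not part of the upstream file: a path that is both a
// member and a parent of deeper members is not a leaf.
func exampleSetLeaves() {
s := NewSet(MakePathOrDie("a"), MakePathOrDie("a", "b"))
fmt.Println(s.Leaves().Has(MakePathOrDie("a"))) // false: "a" also has children
fmt.Println(s.Leaves().Has(MakePathOrDie("a", "b"))) // true
}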
// setNode is a pair of PathElement / Set, for the purpose of expressing
// nested set membership.
type setNode struct {
pathElement PathElement
set *Set
}
// SetNodeMap is a map of PathElement to subset.
type SetNodeMap struct {
members sortedSetNode
}
type sortedSetNode []setNode
// Implement the sort interface; this would permit bulk creation, which would
// be faster than doing it one at a time via Insert.
func (s sortedSetNode) Len() int { return len(s) }
func (s sortedSetNode) Less(i, j int) bool { return s[i].pathElement.Less(s[j].pathElement) }
func (s sortedSetNode) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
// Descend adds pe to the set if necessary, returning the associated subset.
func (s *SetNodeMap) Descend(pe PathElement) *Set {
loc := sort.Search(len(s.members), func(i int) bool {
return !s.members[i].pathElement.Less(pe)
})
if loc == len(s.members) {
s.members = append(s.members, setNode{pathElement: pe, set: &Set{}})
return s.members[loc].set
}
if s.members[loc].pathElement.Equals(pe) {
return s.members[loc].set
}
s.members = append(s.members, setNode{})
copy(s.members[loc+1:], s.members[loc:])
s.members[loc] = setNode{pathElement: pe, set: &Set{}}
return s.members[loc].set
}
// Size returns the sum of the number of members of all subsets.
func (s *SetNodeMap) Size() int {
count := 0
for _, v := range s.members {
count += v.set.Size()
}
return count
}
// Empty returns false if there's at least one member in some child set.
func (s *SetNodeMap) Empty() bool {
for _, n := range s.members {
if !n.set.Empty() {
return false
}
}
return true
}
// Get returns (the associated set, true) or (nil, false) if there is none.
func (s *SetNodeMap) Get(pe PathElement) (*Set, bool) {
loc := sort.Search(len(s.members), func(i int) bool {
return !s.members[i].pathElement.Less(pe)
})
if loc == len(s.members) {
return nil, false
}
if s.members[loc].pathElement.Equals(pe) {
return s.members[loc].set, true
}
return nil, false
}
// Equals returns true if s and s2 have the same structure (same nested
// child sets).
func (s *SetNodeMap) Equals(s2 *SetNodeMap) bool {
if len(s.members) != len(s2.members) {
return false
}
for i := range s.members {
if !s.members[i].pathElement.Equals(s2.members[i].pathElement) {
return false
}
if !s.members[i].set.Equals(s2.members[i].set) {
return false
}
}
return true
}
// Union returns a SetNodeMap with members that appear in either s or s2.
func (s *SetNodeMap) Union(s2 *SetNodeMap) *SetNodeMap {
out := &SetNodeMap{}
i, j := 0, 0
for i < len(s.members) && j < len(s2.members) {
if s.members[i].pathElement.Less(s2.members[j].pathElement) {
out.members = append(out.members, s.members[i])
i++
} else {
if !s2.members[j].pathElement.Less(s.members[i].pathElement) {
out.members = append(out.members, setNode{pathElement: s.members[i].pathElement, set: s.members[i].set.Union(s2.members[j].set)})
i++
} else {
out.members = append(out.members, s2.members[j])
}
j++
}
}
if i < len(s.members) {
out.members = append(out.members, s.members[i:]...)
}
if j < len(s2.members) {
out.members = append(out.members, s2.members[j:]...)
}
return out
}
// Intersection returns a SetNodeMap with members that appear in both s and s2.
func (s *SetNodeMap) Intersection(s2 *SetNodeMap) *SetNodeMap {
out := &SetNodeMap{}
i, j := 0, 0
for i < len(s.members) && j < len(s2.members) {
if s.members[i].pathElement.Less(s2.members[j].pathElement) {
i++
} else {
if !s2.members[j].pathElement.Less(s.members[i].pathElement) {
res := s.members[i].set.Intersection(s2.members[j].set)
if !res.Empty() {
out.members = append(out.members, setNode{pathElement: s.members[i].pathElement, set: res})
}
i++
}
j++
}
}
return out
}
// Difference returns a SetNodeMap with members that appear in s but not in s2.
func (s *SetNodeMap) Difference(s2 *Set) *SetNodeMap {
out := &SetNodeMap{}
i, j := 0, 0
for i < len(s.members) && j < len(s2.Children.members) {
if s.members[i].pathElement.Less(s2.Children.members[j].pathElement) {
out.members = append(out.members, setNode{pathElement: s.members[i].pathElement, set: s.members[i].set})
i++
} else {
if !s2.Children.members[j].pathElement.Less(s.members[i].pathElement) {
diff := s.members[i].set.Difference(s2.Children.members[j].set)
// We aren't permitted to add nodes with no elements.
if !diff.Empty() {
out.members = append(out.members, setNode{pathElement: s.members[i].pathElement, set: diff})
}
i++
}
j++
}
}
if i < len(s.members) {
out.members = append(out.members, s.members[i:]...)
}
return out
}
// RecursiveDifference returns a SetNodeMap with members that appear in s but not in s2.
//
// Compared to a regular difference,
// this removes every field **and its children** from s that is contained in s2.
//
// For example, with s containing `a.b.c` and s2 containing `a.b`,
// a RecursiveDifference will result in `a`, as the entire node `a.b` gets removed.
func (s *SetNodeMap) RecursiveDifference(s2 *Set) *SetNodeMap {
out := &SetNodeMap{}
i, j := 0, 0
for i < len(s.members) && j < len(s2.Children.members) {
if s.members[i].pathElement.Less(s2.Children.members[j].pathElement) {
if !s2.Members.Has(s.members[i].pathElement) {
out.members = append(out.members, setNode{pathElement: s.members[i].pathElement, set: s.members[i].set})
}
i++
} else {
if !s2.Children.members[j].pathElement.Less(s.members[i].pathElement) {
if !s2.Members.Has(s.members[i].pathElement) {
diff := s.members[i].set.RecursiveDifference(s2.Children.members[j].set)
if !diff.Empty() {
out.members = append(out.members, setNode{pathElement: s.members[i].pathElement, set: diff})
}
}
i++
}
j++
}
}
if i < len(s.members) {
for _, c := range s.members[i:] {
if !s2.Members.Has(c.pathElement) {
out.members = append(out.members, c)
}
}
}
return out
}
// EnsureNamedFieldsAreMembers returns a set that contains all the named fields along with the leaves.
func (s *SetNodeMap) EnsureNamedFieldsAreMembers(sc *schema.Schema, tr schema.TypeRef) *SetNodeMap {
out := make(sortedSetNode, 0, s.Size())
atom, _ := sc.Resolve(tr)
for _, member := range s.members {
tr := schema.TypeRef{}
if member.pathElement.FieldName != nil && atom.Map != nil {
tr = atom.Map.ElementType
if sf, ok := atom.Map.FindField(*member.pathElement.FieldName); ok {
tr = sf.Type
}
} else if member.pathElement.Key != nil && atom.List != nil {
tr = atom.List.ElementType
}
out = append(out, setNode{
pathElement: member.pathElement,
set: member.set.EnsureNamedFieldsAreMembers(sc, tr),
})
}
return &SetNodeMap{
members: out,
}
}
// FilterIncludeMatches returns a SetNodeMap with only the field paths that match the matcher.
func (s *SetNodeMap) FilterIncludeMatches(pattern *SetMatcher) *SetNodeMap {
if pattern.wildcard {
return s
}
var out sortedSetNode
for _, member := range s.members {
for _, c := range pattern.members {
if c.Path.Wildcard || c.Path.PathElement.Equals(member.pathElement) {
childSet := member.set.FilterIncludeMatches(c.Child)
if childSet.Size() > 0 {
out = append(out, setNode{
pathElement: member.pathElement,
set: childSet,
})
}
break
}
}
}
return &SetNodeMap{
members: out,
}
}
// Iterate calls f for each PathElement in the set.
func (s *SetNodeMap) Iterate(f func(PathElement)) {
for _, n := range s.members {
f(n.pathElement)
}
}
func (s *SetNodeMap) iteratePrefix(prefix Path, f func(Path)) {
for _, n := range s.members {
pe := n.pathElement
n.set.iteratePrefix(append(prefix, pe), f)
}
}
// Leaves returns a SetNodeMap containing
// only setNodes with leaf PathElements.
func (s *SetNodeMap) Leaves() *SetNodeMap {
out := &SetNodeMap{}
out.members = make(sortedSetNode, len(s.members))
for i, n := range s.members {
out.members[i] = setNode{
pathElement: n.pathElement,
set: n.set.Leaves(),
}
}
return out
}
// Filter defines an interface for excluding field paths from a set.
// NewExcludeSetFilter can be used to create a filter that removes
// specific field paths and all of their children.
// NewIncludeMatcherFilter can be used to create a filter that removes all fields except
// the fields that match a field path matcher. PrefixMatcher and MakePrefixMatcherOrDie
// can be used to define field path patterns.
type Filter interface {
// Filter returns a filtered copy of the set.
Filter(*Set) *Set
}
// NewExcludeSetFilter returns a filter that removes field paths in the exclude set.
func NewExcludeSetFilter(exclude *Set) Filter {
return excludeFilter{exclude}
}
// NewExcludeFilterSetMap converts a map of APIVersion to exclude set to a map of APIVersion to exclude filters.
func NewExcludeFilterSetMap(resetFields map[APIVersion]*Set) map[APIVersion]Filter {
result := make(map[APIVersion]Filter)
for k, v := range resetFields {
result[k] = excludeFilter{v}
}
return result
}
type excludeFilter struct {
excludeSet *Set
}
func (t excludeFilter) Filter(set *Set) *Set {
return set.RecursiveDifference(t.excludeSet)
}
// NewIncludeMatcherFilter returns a filter that only includes field paths that match.
// If no matchers are provided, the filter includes all field paths.
// PrefixMatcher and MakePrefixMatcherOrDie can help create basic matchers.
func NewIncludeMatcherFilter(matchers ...*SetMatcher) Filter {
if len(matchers) == 0 {
return includeMatcherFilter{&SetMatcher{wildcard: true}}
}
matcher := matchers[0]
for i := 1; i < len(matchers); i++ {
matcher = matcher.Merge(matchers[i])
}
return includeMatcherFilter{matcher}
}
type includeMatcherFilter struct {
matcher *SetMatcher
}
func (pf includeMatcherFilter) Filter(set *Set) *Set {
return set.FilterIncludeMatches(pf.matcher)
}


@ -0,0 +1,121 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package merge
import (
"fmt"
"sort"
"strings"
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
)
// Conflict is a conflict on a specific field with the current manager of
// that field. It implements the error interface so that it can be
// used as an error.
type Conflict struct {
Manager string
Path fieldpath.Path
}
// Conflict is an error.
var _ error = Conflict{}
// Error formats the conflict as an error.
func (c Conflict) Error() string {
return fmt.Sprintf("conflict with %q: %v", c.Manager, c.Path)
}
// Equals returns true if c == c2
func (c Conflict) Equals(c2 Conflict) bool {
if c.Manager != c2.Manager {
return false
}
return c.Path.Equals(c2.Path)
}
// Conflicts accumulates multiple conflicts and aggregates them by managers.
type Conflicts []Conflict
var _ error = Conflicts{}
// Error prints the list of conflicts, grouped by sorted managers.
func (conflicts Conflicts) Error() string {
if len(conflicts) == 1 {
return conflicts[0].Error()
}
m := map[string][]fieldpath.Path{}
for _, conflict := range conflicts {
m[conflict.Manager] = append(m[conflict.Manager], conflict.Path)
}
managers := []string{}
for manager := range m {
managers = append(managers, manager)
}
// Print conflicts by sorted managers.
sort.Strings(managers)
messages := []string{}
for _, manager := range managers {
messages = append(messages, fmt.Sprintf("conflicts with %q:", manager))
for _, path := range m[manager] {
messages = append(messages, fmt.Sprintf("- %v", path))
}
}
return strings.Join(messages, "\n")
}
// Equals returns true if the lists of conflicts are the same.
func (c Conflicts) Equals(c2 Conflicts) bool {
if len(c) != len(c2) {
return false
}
for i := range c {
if !c[i].Equals(c2[i]) {
return false
}
}
return true
}
// ToSet aggregates conflicts for all managers into a single Set.
func (c Conflicts) ToSet() *fieldpath.Set {
set := fieldpath.NewSet()
for _, conflict := range []Conflict(c) {
set.Insert(conflict.Path)
}
return set
}
// ConflictsFromManagers creates a list of conflicts given Managers sets.
func ConflictsFromManagers(sets fieldpath.ManagedFields) Conflicts {
conflicts := []Conflict{}
for manager, set := range sets {
set.Set().Iterate(func(p fieldpath.Path) {
conflicts = append(conflicts, Conflict{
Manager: manager,
Path: p.Copy(),
})
})
}
return conflicts
}
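// Editor's sketch, not part of the upstream file: conflicts print grouped and
// sorted by manager.
func exampleConflicts() {
conflicts := Conflicts{
{Manager: "kubectl", Path: fieldpath.MakePathOrDie("spec", "replicas")},
{Manager: "controller", Path: fieldpath.MakePathOrDie("status")},
}
fmt.Println(conflicts.Error())
// conflicts with "controller":
// - .status
// conflicts with "kubectl":
// - .spec.replicas
}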


@ -0,0 +1,394 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package merge
import (
"fmt"
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
"sigs.k8s.io/structured-merge-diff/v4/typed"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
// Converter is an interface to the conversion logic. The converter
// needs to be able to convert objects from one version to another.
type Converter interface {
Convert(object *typed.TypedValue, version fieldpath.APIVersion) (*typed.TypedValue, error)
IsMissingVersionError(error) bool
}
// UpdaterBuilder allows you to create a new Updater by exposing all of
// the options and setting them once.
type UpdaterBuilder struct {
Converter Converter
IgnoreFilter map[fieldpath.APIVersion]fieldpath.Filter
// IgnoredFields provides a set of fields to ignore for each version.
IgnoredFields map[fieldpath.APIVersion]*fieldpath.Set
// ReturnInputOnNoop stops comparing the new object with the old object after
// applying. The comparison was initially used to avoid spurious etcd updates,
// but since that's vastly inefficient, we've come up with a better
// way of doing that. Set this flag to skip the comparison.
// Comparing has also become more expensive now that we're not using
// `Compare` but `value.Equals`, so this gives an option to avoid it.
ReturnInputOnNoop bool
}
func (u *UpdaterBuilder) BuildUpdater() *Updater {
return &Updater{
Converter: u.Converter,
IgnoreFilter: u.IgnoreFilter,
IgnoredFields: u.IgnoredFields,
returnInputOnNoop: u.ReturnInputOnNoop,
}
}
// Updater is the object used to compute updated FieldSets and also
// merge the object on Apply.
type Updater struct {
// Deprecated: This will eventually become private.
Converter Converter
// Deprecated: This will eventually become private.
IgnoredFields map[fieldpath.APIVersion]*fieldpath.Set
// Deprecated: This will eventually become private.
IgnoreFilter map[fieldpath.APIVersion]fieldpath.Filter
returnInputOnNoop bool
}
func (s *Updater) update(oldObject, newObject *typed.TypedValue, version fieldpath.APIVersion, managers fieldpath.ManagedFields, workflow string, force bool) (fieldpath.ManagedFields, *typed.Comparison, error) {
conflicts := fieldpath.ManagedFields{}
removed := fieldpath.ManagedFields{}
compare, err := oldObject.Compare(newObject)
if err != nil {
return nil, nil, fmt.Errorf("failed to compare objects: %v", err)
}
var versions map[fieldpath.APIVersion]*typed.Comparison
if s.IgnoredFields != nil && s.IgnoreFilter != nil {
return nil, nil, fmt.Errorf("IgnoreFilter and IgnoreFilter may not both be set")
}
if s.IgnoredFields != nil {
versions = map[fieldpath.APIVersion]*typed.Comparison{
version: compare.ExcludeFields(s.IgnoredFields[version]),
}
} else {
versions = map[fieldpath.APIVersion]*typed.Comparison{
version: compare.FilterFields(s.IgnoreFilter[version]),
}
}
for manager, managerSet := range managers {
if manager == workflow {
continue
}
compare, ok := versions[managerSet.APIVersion()]
if !ok {
var err error
versionedOldObject, err := s.Converter.Convert(oldObject, managerSet.APIVersion())
if err != nil {
if s.Converter.IsMissingVersionError(err) {
delete(managers, manager)
continue
}
return nil, nil, fmt.Errorf("failed to convert old object: %v", err)
}
versionedNewObject, err := s.Converter.Convert(newObject, managerSet.APIVersion())
if err != nil {
if s.Converter.IsMissingVersionError(err) {
delete(managers, manager)
continue
}
return nil, nil, fmt.Errorf("failed to convert new object: %v", err)
}
compare, err = versionedOldObject.Compare(versionedNewObject)
if err != nil {
return nil, nil, fmt.Errorf("failed to compare objects: %v", err)
}
if s.IgnoredFields != nil {
versions[managerSet.APIVersion()] = compare.ExcludeFields(s.IgnoredFields[managerSet.APIVersion()])
} else {
versions[managerSet.APIVersion()] = compare.FilterFields(s.IgnoreFilter[managerSet.APIVersion()])
}
}
conflictSet := managerSet.Set().Intersection(compare.Modified.Union(compare.Added))
if !conflictSet.Empty() {
conflicts[manager] = fieldpath.NewVersionedSet(conflictSet, managerSet.APIVersion(), false)
}
if !compare.Removed.Empty() {
removed[manager] = fieldpath.NewVersionedSet(compare.Removed, managerSet.APIVersion(), false)
}
}
if !force && len(conflicts) != 0 {
return nil, nil, ConflictsFromManagers(conflicts)
}
for manager, conflictSet := range conflicts {
managers[manager] = fieldpath.NewVersionedSet(managers[manager].Set().Difference(conflictSet.Set()), managers[manager].APIVersion(), managers[manager].Applied())
}
for manager, removedSet := range removed {
managers[manager] = fieldpath.NewVersionedSet(managers[manager].Set().Difference(removedSet.Set()), managers[manager].APIVersion(), managers[manager].Applied())
}
for manager := range managers {
if managers[manager].Set().Empty() {
delete(managers, manager)
}
}
return managers, compare, nil
}
// Update is the method you should call once you've merged your final
// object on CREATE/UPDATE/PATCH verbs. newObject must be the object
// that you intend to persist (after applying the patch if this is for a
// PATCH call), and liveObject must be the original object (empty if
// this is a CREATE call).
func (s *Updater) Update(liveObject, newObject *typed.TypedValue, version fieldpath.APIVersion, managers fieldpath.ManagedFields, manager string) (*typed.TypedValue, fieldpath.ManagedFields, error) {
var err error
managers, err = s.reconcileManagedFieldsWithSchemaChanges(liveObject, managers)
if err != nil {
return nil, fieldpath.ManagedFields{}, err
}
managers, compare, err := s.update(liveObject, newObject, version, managers, manager, true)
if err != nil {
return nil, fieldpath.ManagedFields{}, err
}
if _, ok := managers[manager]; !ok {
managers[manager] = fieldpath.NewVersionedSet(fieldpath.NewSet(), version, false)
}
set := managers[manager].Set().Difference(compare.Removed).Union(compare.Modified).Union(compare.Added)
if s.IgnoredFields != nil && s.IgnoreFilter != nil {
return nil, nil, fmt.Errorf("IgnoreFilter and IgnoreFilter may not both be set")
}
var ignoreFilter fieldpath.Filter
if s.IgnoredFields != nil {
ignoreFilter = fieldpath.NewExcludeSetFilter(s.IgnoredFields[version])
} else {
ignoreFilter = s.IgnoreFilter[version]
}
if ignoreFilter != nil {
set = ignoreFilter.Filter(set)
}
managers[manager] = fieldpath.NewVersionedSet(
set,
version,
false,
)
if managers[manager].Set().Empty() {
delete(managers, manager)
}
return newObject, managers, nil
}
// Apply should be called when Apply is run, given the current object as
// well as the configuration that is applied. This will merge the object
// and return it.
func (s *Updater) Apply(liveObject, configObject *typed.TypedValue, version fieldpath.APIVersion, managers fieldpath.ManagedFields, manager string, force bool) (*typed.TypedValue, fieldpath.ManagedFields, error) {
var err error
managers, err = s.reconcileManagedFieldsWithSchemaChanges(liveObject, managers)
if err != nil {
return nil, fieldpath.ManagedFields{}, err
}
newObject, err := liveObject.Merge(configObject)
if err != nil {
return nil, fieldpath.ManagedFields{}, fmt.Errorf("failed to merge config: %v", err)
}
lastSet := managers[manager]
set, err := configObject.ToFieldSet()
if err != nil {
return nil, fieldpath.ManagedFields{}, fmt.Errorf("failed to get field set: %v", err)
}
if s.IgnoredFields != nil && s.IgnoreFilter != nil {
return nil, nil, fmt.Errorf("IgnoreFilter and IgnoreFilter may not both be set")
}
var ignoreFilter fieldpath.Filter
if s.IgnoredFields != nil {
ignoreFilter = fieldpath.NewExcludeSetFilter(s.IgnoredFields[version])
} else {
ignoreFilter = s.IgnoreFilter[version]
}
if ignoreFilter != nil {
set = ignoreFilter.Filter(set)
}
managers[manager] = fieldpath.NewVersionedSet(set, version, true)
newObject, err = s.prune(newObject, managers, manager, lastSet)
if err != nil {
return nil, fieldpath.ManagedFields{}, fmt.Errorf("failed to prune fields: %v", err)
}
managers, _, err = s.update(liveObject, newObject, version, managers, manager, force)
if err != nil {
return nil, fieldpath.ManagedFields{}, err
}
if !s.returnInputOnNoop && value.EqualsUsing(value.NewFreelistAllocator(), liveObject.AsValue(), newObject.AsValue()) {
newObject = nil
}
return newObject, managers, nil
}
// prune will remove a field, list or map item, iff:
// * applyingManager applied it last time
// * applyingManager didn't apply it this time
// * no other applier claims to manage it
func (s *Updater) prune(merged *typed.TypedValue, managers fieldpath.ManagedFields, applyingManager string, lastSet fieldpath.VersionedSet) (*typed.TypedValue, error) {
if lastSet == nil || lastSet.Set().Empty() {
return merged, nil
}
version := lastSet.APIVersion()
convertedMerged, err := s.Converter.Convert(merged, version)
if err != nil {
if s.Converter.IsMissingVersionError(err) {
return merged, nil
}
return nil, fmt.Errorf("failed to convert merged object to last applied version: %v", err)
}
sc, tr := convertedMerged.Schema(), convertedMerged.TypeRef()
pruned := convertedMerged.RemoveItems(lastSet.Set().EnsureNamedFieldsAreMembers(sc, tr))
pruned, err = s.addBackOwnedItems(convertedMerged, pruned, version, managers, applyingManager)
if err != nil {
return nil, fmt.Errorf("failed add back owned items: %v", err)
}
pruned, err = s.addBackDanglingItems(convertedMerged, pruned, lastSet)
if err != nil {
return nil, fmt.Errorf("failed add back dangling items: %v", err)
}
return s.Converter.Convert(pruned, managers[applyingManager].APIVersion())
}
// addBackOwnedItems adds back any fields, list and map items that were removed by prune,
// but other appliers or updaters (or the current applier's new config) claim to own.
func (s *Updater) addBackOwnedItems(merged, pruned *typed.TypedValue, prunedVersion fieldpath.APIVersion, managedFields fieldpath.ManagedFields, applyingManager string) (*typed.TypedValue, error) {
var err error
managedAtVersion := map[fieldpath.APIVersion]*fieldpath.Set{}
for _, managerSet := range managedFields {
if _, ok := managedAtVersion[managerSet.APIVersion()]; !ok {
managedAtVersion[managerSet.APIVersion()] = fieldpath.NewSet()
}
managedAtVersion[managerSet.APIVersion()] = managedAtVersion[managerSet.APIVersion()].Union(managerSet.Set())
}
// Add back owned items at pruned version first to avoid conversion failure
// caused by pruned fields which are required for conversion.
if managed, ok := managedAtVersion[prunedVersion]; ok {
merged, pruned, err = s.addBackOwnedItemsForVersion(merged, pruned, prunedVersion, managed)
if err != nil {
return nil, err
}
delete(managedAtVersion, prunedVersion)
}
for version, managed := range managedAtVersion {
merged, pruned, err = s.addBackOwnedItemsForVersion(merged, pruned, version, managed)
if err != nil {
return nil, err
}
}
return pruned, nil
}
// addBackOwnedItemsForVersion adds back any fields, list and map items that were removed by prune with specific managed field path at a version.
// It is an extracted sub-function from addBackOwnedItems for code reuse.
func (s *Updater) addBackOwnedItemsForVersion(merged, pruned *typed.TypedValue, version fieldpath.APIVersion, managed *fieldpath.Set) (*typed.TypedValue, *typed.TypedValue, error) {
var err error
merged, err = s.Converter.Convert(merged, version)
if err != nil {
if s.Converter.IsMissingVersionError(err) {
return merged, pruned, nil
}
return nil, nil, fmt.Errorf("failed to convert merged object at version %v: %v", version, err)
}
pruned, err = s.Converter.Convert(pruned, version)
if err != nil {
if s.Converter.IsMissingVersionError(err) {
return merged, pruned, nil
}
return nil, nil, fmt.Errorf("failed to convert pruned object at version %v: %v", version, err)
}
mergedSet, err := merged.ToFieldSet()
if err != nil {
return nil, nil, fmt.Errorf("failed to create field set from merged object at version %v: %v", version, err)
}
prunedSet, err := pruned.ToFieldSet()
if err != nil {
return nil, nil, fmt.Errorf("failed to create field set from pruned object at version %v: %v", version, err)
}
sc, tr := merged.Schema(), merged.TypeRef()
pruned = merged.RemoveItems(mergedSet.EnsureNamedFieldsAreMembers(sc, tr).Difference(prunedSet.EnsureNamedFieldsAreMembers(sc, tr).Union(managed.EnsureNamedFieldsAreMembers(sc, tr))))
return merged, pruned, nil
}
// addBackDanglingItems makes sure that the fields list and map items removed by prune were
// previously owned by the currently applying manager. This will add back fields list and map items
// that are unowned or that are owned by Updaters and shouldn't be removed.
func (s *Updater) addBackDanglingItems(merged, pruned *typed.TypedValue, lastSet fieldpath.VersionedSet) (*typed.TypedValue, error) {
convertedPruned, err := s.Converter.Convert(pruned, lastSet.APIVersion())
if err != nil {
if s.Converter.IsMissingVersionError(err) {
return merged, nil
}
return nil, fmt.Errorf("failed to convert pruned object to last applied version: %v", err)
}
prunedSet, err := convertedPruned.ToFieldSet()
if err != nil {
return nil, fmt.Errorf("failed to create field set from pruned object in last applied version: %v", err)
}
mergedSet, err := merged.ToFieldSet()
if err != nil {
return nil, fmt.Errorf("failed to create field set from merged object in last applied version: %v", err)
}
sc, tr := merged.Schema(), merged.TypeRef()
prunedSet = prunedSet.EnsureNamedFieldsAreMembers(sc, tr)
mergedSet = mergedSet.EnsureNamedFieldsAreMembers(sc, tr)
last := lastSet.Set().EnsureNamedFieldsAreMembers(sc, tr)
return merged.RemoveItems(mergedSet.Difference(prunedSet).Intersection(last)), nil
}
// reconcileManagedFieldsWithSchemaChanges reconciles the managed fields with any changes to the
// object's schema since the managed fields were written.
//
// Supports:
// - changing types from atomic to granular
// - changing types from granular to atomic
func (s *Updater) reconcileManagedFieldsWithSchemaChanges(liveObject *typed.TypedValue, managers fieldpath.ManagedFields) (fieldpath.ManagedFields, error) {
result := fieldpath.ManagedFields{}
for manager, versionedSet := range managers {
tv, err := s.Converter.Convert(liveObject, versionedSet.APIVersion())
if s.Converter.IsMissingVersionError(err) { // okay to skip, obsolete versions will be deleted automatically anyway
continue
}
if err != nil {
return nil, err
}
reconciled, err := typed.ReconcileFieldSetWithSchema(versionedSet.Set(), tv)
if err != nil {
return nil, err
}
if reconciled != nil {
result[manager] = fieldpath.NewVersionedSet(reconciled, versionedSet.APIVersion(), versionedSet.Applied())
} else {
result[manager] = versionedSet
}
}
return result, nil
}
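// Editor's sketch, not part of the upstream file: a minimal single-version
// setup. Real callers wire in a schema-aware Converter and build TypedValues
// with the typed package; the names below are illustrative only.
type sameVersionConverter struct{}
func (sameVersionConverter) Convert(obj *typed.TypedValue, v fieldpath.APIVersion) (*typed.TypedValue, error) {
return obj, nil
}
func (sameVersionConverter) IsMissingVersionError(error) bool { return false }
func exampleApply(live, config *typed.TypedValue, managed fieldpath.ManagedFields) {
updater := (&UpdaterBuilder{Converter: sameVersionConverter{}}).BuildUpdater()
merged, newManaged, err := updater.Apply(live, config, "v1", managed, "example-manager", false)
if err != nil {
fmt.Println(err) // may be a Conflicts error listing other managers' fields
return
}
_ = newManaged // updated ownership for "example-manager"
_ = merged // nil when the apply changed nothing (default ReturnInputOnNoop)
}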


@ -0,0 +1,28 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package schema defines a targeted schema language which allows one to
// represent all the schema information necessary to perform "structured"
// merges and diffs.
//
// Due to the targeted nature of the data model, the schema language can fit in
// just a few hundred lines of go code, making it much more understandable and
// concise than e.g. OpenAPI.
//
// This schema was derived by observing the API objects used by Kubernetes, and
// formalizing a model which allows certain operations ("apply") to be more
// well defined. It is currently missing one feature: one-of ("unions").
package schema


@ -0,0 +1,375 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package schema
import (
"sync"
)
// Schema is a list of named types.
//
// Schema types are indexed in a map before the first search so this type
// should be considered immutable.
type Schema struct {
Types []TypeDef `yaml:"types,omitempty"`
once sync.Once
m map[string]TypeDef
lock sync.Mutex
// Cached results of resolving type references to atoms. Only stores
// type references which require fields of Atom to be overridden.
resolvedTypes map[TypeRef]Atom
}
// A TypeSpecifier references a particular type in a schema.
type TypeSpecifier struct {
Type TypeRef `yaml:"type,omitempty"`
Schema Schema `yaml:"schema,omitempty"`
}
// TypeDef represents a named type in a schema.
type TypeDef struct {
// Top level types should be named. Every type must have a unique name.
Name string `yaml:"name,omitempty"`
Atom `yaml:"atom,omitempty,inline"`
}
// TypeRef either refers to a named type or declares an inlined type.
type TypeRef struct {
// Either the name or one member of Atom should be set.
NamedType *string `yaml:"namedType,omitempty"`
Inlined Atom `yaml:",inline,omitempty"`
// If this reference refers to a map-type or list-type, this field overrides
// the `ElementRelationship` of the referred type when resolved.
// If this field is nil, then it has no effect.
// See `Map` and `List` for more information about `ElementRelationship`
ElementRelationship *ElementRelationship `yaml:"elementRelationship,omitempty"`
}
// Atom represents the smallest possible pieces of the type system.
// Each set field in the Atom represents a possible type for the object.
// If none of the fields are set, any object will fail validation against the atom.
type Atom struct {
*Scalar `yaml:"scalar,omitempty"`
*List `yaml:"list,omitempty"`
*Map `yaml:"map,omitempty"`
}
// Scalar (AKA "primitive") represents a type which has a single value which is
// either numeric, string, or boolean, or untyped for any of them.
//
// TODO: split numeric into float/int? Something even more fine-grained?
type Scalar string
const (
Numeric = Scalar("numeric")
String = Scalar("string")
Boolean = Scalar("boolean")
Untyped = Scalar("untyped")
)
// ElementRelationship is an enum of the different possible relationships
// between the elements of container types (maps, lists).
type ElementRelationship string
const (
// Associative only applies to lists (see the documentation there).
Associative = ElementRelationship("associative")
// Atomic makes container types (lists, maps) behave
// as scalars / leaf fields
Atomic = ElementRelationship("atomic")
// Separable means the items of the container type have no particular
// relationship (default behavior for maps).
Separable = ElementRelationship("separable")
)
// Map is a key-value pair. Its default semantics are the same as an
// associative list, but:
// - It is serialized differently:
// map: {"k": {"value": "v"}}
// list: [{"key": "k", "value": "v"}]
// - Keys must be string typed.
// - Keys can't have multiple components.
//
// Optionally, maps may be atomic (for example, imagine representing an RGB
// color value--it doesn't make sense to have different actors own the R and G
// values).
//
// Maps may also represent a type which is composed of a number of different fields.
// Each field has a name and a type.
//
// Fields are indexed in a map before the first search so this type
// should be considered immutable.
type Map struct {
// Each struct field appears exactly once in this list. The order in
// this list defines the canonical field ordering.
Fields []StructField `yaml:"fields,omitempty"`
// A Union is a grouping of fields with special rules. It may refer to
// one or more fields in the above list. A given field from the above
// list may be referenced in exactly 0 or 1 places in the below list.
// One can have multiple unions in the same struct, but the fields can't
// overlap between unions.
Unions []Union `yaml:"unions,omitempty"`
// ElementType is the type of the struct's unknown fields.
ElementType TypeRef `yaml:"elementType,omitempty"`
// ElementRelationship states the relationship between the map's items.
// * `separable` (or unset) implies that each element is 100% independent.
// * `atomic` implies that all elements depend on each other, and this
// is effectively a scalar / leaf field; it doesn't make sense for
// separate actors to set the elements. Example: an RGB color struct;
// it would never make sense to "own" only one component of the
// color.
// The default behavior for maps is `separable`; it's permitted to
// leave this unset to get the default behavior.
ElementRelationship ElementRelationship `yaml:"elementRelationship,omitempty"`
once sync.Once
m map[string]StructField
}
// FindField is a convenience function that returns the referenced StructField,
// if it exists, or (StructField{}, false) if it doesn't.
func (m *Map) FindField(name string) (StructField, bool) {
m.once.Do(func() {
m.m = make(map[string]StructField, len(m.Fields))
for _, field := range m.Fields {
m.m[field.Name] = field
}
})
sf, ok := m.m[name]
return sf, ok
}
// CopyInto copies this instance of Map into dst.
// If dst is nil this method does nothing.
// If dst is already initialized, it is overwritten with this instance.
// Warning: Not thread safe
func (m *Map) CopyInto(dst *Map) {
if dst == nil {
return
}
// Map type is considered immutable so sharing references
dst.Fields = m.Fields
dst.ElementType = m.ElementType
dst.Unions = m.Unions
dst.ElementRelationship = m.ElementRelationship
if m.m != nil {
// If cache is non-nil then the once token had been consumed.
// Must reset token and use it again to ensure same semantics.
dst.once = sync.Once{}
dst.once.Do(func() {
dst.m = m.m
})
}
}
// UnionField is the mapping between a field that is part of the union and
// its discriminator value. The discriminator value has to be set, and
// should not conflict with other discriminator values in the list.
type UnionField struct {
// FieldName is the name of the field that is part of the union. This
// is the serialized form of the field.
FieldName string `yaml:"fieldName"`
// DiscriminatorValue is the value of the discriminator to
// select that field. If the union doesn't have a discriminator,
// this field is ignored.
DiscriminatorValue string `yaml:"discriminatorValue"`
}
// Union, or oneof, means that only one of multiple fields of a structure can be
// set at a time. Setting the discriminator helps clear the other fields:
// - If discriminator changed to non-nil, and a new field has been added
// that doesn't match, an error is returned,
// - If discriminator hasn't changed and two fields or more are set, an
// error is returned,
// - If discriminator changed to non-nil, all other fields but the
// discriminated one will be cleared,
// - Otherwise, if only one field is left, update the discriminator to that value.
type Union struct {
// Discriminator, if present, is the name of the field that
// discriminates fields in the union. The mapping between the value of
// the discriminator and the field is done by using the Fields list
// below.
Discriminator *string `yaml:"discriminator,omitempty"`
// DeduceInvalidDiscriminator indicates if the discriminator
// should be updated automatically based on the fields set. This
// typically defaults to false since we don't want to deduce by
// default (the behavior exists to maintain compatibility on
// existing types and shouldn't be used for new types).
DeduceInvalidDiscriminator bool `yaml:"deduceInvalidDiscriminator,omitempty"`
// This is the list of fields that belong to this union. All the
// fields present in here have to be part of the parent
// structure. Discriminator (if oneOf has one), is NOT included in
// this list. The value for field is how we map the name of the field
// to actual value for discriminator.
Fields []UnionField `yaml:"fields,omitempty"`
}
// StructField pairs a field name with a field type.
type StructField struct {
// Name is the field name.
Name string `yaml:"name,omitempty"`
// Type is the field type.
Type TypeRef `yaml:"type,omitempty"`
// Default value for the field, nil if not present.
Default interface{} `yaml:"default,omitempty"`
}
// List represents a type which contains zero or more elements, all of the
// same subtype. Lists may be either associative: each element is more or less
// independent and could be managed by separate entities in the system; or
// atomic, where the elements are heavily dependent on each other: it is not
// sensible to change one element without considering the ramifications on all
// the other elements.
type List struct {
// ElementType is the type of the list's elements.
ElementType TypeRef `yaml:"elementType,omitempty"`
// ElementRelationship states the relationship between the list's elements
// and must have one of these values:
// * `atomic`: the list is treated as a single entity, like a scalar.
// * `associative`:
// - If the list element is a scalar, the list is treated as a set.
// - If the list element is a map, the list is treated as a map.
// There is no default for this value for lists; all schemas must
// explicitly state the element relationship for all lists.
ElementRelationship ElementRelationship `yaml:"elementRelationship,omitempty"`
// Iff ElementRelationship is `associative`, and the element type is
// map, then Keys must have non-zero length, and it lists the fields
// of the element's map type which are to be used as the keys of the
// list.
//
// TODO: change this to "non-atomic struct" above and make the code reflect this.
//
// Each key must refer to a single field name (no nesting, not JSONPath).
Keys []string `yaml:"keys,omitempty"`
}
// FindNamedType is a convenience function that returns the referenced TypeDef,
// if it exists, or (TypeDef{}, false) if it doesn't.
func (s *Schema) FindNamedType(name string) (TypeDef, bool) {
s.once.Do(func() {
s.m = make(map[string]TypeDef, len(s.Types))
for _, t := range s.Types {
s.m[t.Name] = t
}
})
t, ok := s.m[name]
return t, ok
}
func (s *Schema) resolveNoOverrides(tr TypeRef) (Atom, bool) {
result := Atom{}
if tr.NamedType != nil {
t, ok := s.FindNamedType(*tr.NamedType)
if !ok {
return Atom{}, false
}
result = t.Atom
} else {
result = tr.Inlined
}
return result, true
}
// Resolve is a convenience function which returns the atom referenced, whether
// it is inline or named. Returns (Atom{}, false) if the type can't be resolved.
//
// This allows callers to not care about the difference between a (possibly
// inlined) reference and a definition.
func (s *Schema) Resolve(tr TypeRef) (Atom, bool) {
// If this is a plain reference with no overrides, just return the type
if tr.ElementRelationship == nil {
return s.resolveNoOverrides(tr)
}
s.lock.Lock()
defer s.lock.Unlock()
if s.resolvedTypes == nil {
s.resolvedTypes = make(map[TypeRef]Atom)
}
var result Atom
var exists bool
// Return cached result if available
// If not, calculate result and cache it
if result, exists = s.resolvedTypes[tr]; !exists {
if result, exists = s.resolveNoOverrides(tr); exists {
// Allow field-level electives to override the referred type's modifiers
switch {
case result.Map != nil:
mapCopy := Map{}
result.Map.CopyInto(&mapCopy)
mapCopy.ElementRelationship = *tr.ElementRelationship
result.Map = &mapCopy
case result.List != nil:
listCopy := *result.List
listCopy.ElementRelationship = *tr.ElementRelationship
result.List = &listCopy
case result.Scalar != nil:
return Atom{}, false
default:
return Atom{}, false
}
} else {
return Atom{}, false
}
// Save result. If it is nil, that is also recorded as not existing.
s.resolvedTypes[tr] = result
}
return result, true
}
// CopyInto clones this instance of Schema into dst.
// If dst is nil this method does nothing.
// If dst is already initialized, it is overwritten with this instance.
// Warning: Not thread safe
func (s *Schema) CopyInto(dst *Schema) {
if dst == nil {
return
}
// Schema type is considered immutable so sharing references
dst.Types = s.Types
if s.m != nil {
// If cache is non-nil then the once token had been consumed.
// Must reset token and use it again to ensure same semantics.
dst.once = sync.Once{}
dst.once.Do(func() {
dst.m = s.m
})
}
}
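
A minimal sketch (not part of the vendored file) of how FindNamedType and Resolve behave, including the ElementRelationship override handled above. The "color" type name and program scaffolding are illustrative only.

package main

import (
	"fmt"

	"sigs.k8s.io/structured-merge-diff/v4/schema"
)

func main() {
	name := "color"
	s := schema.Schema{
		Types: []schema.TypeDef{{
			Name: name,
			Atom: schema.Atom{Map: &schema.Map{
				ElementRelationship: schema.Separable,
			}},
		}},
	}

	// A plain named reference resolves to the atom as defined.
	atom, ok := s.Resolve(schema.TypeRef{NamedType: &name})
	fmt.Println(ok, atom.Map.ElementRelationship) // true separable

	// The same reference with an override resolves to an atomic copy;
	// the original type definition is left untouched.
	rel := schema.Atomic
	atom, ok = s.Resolve(schema.TypeRef{NamedType: &name, ElementRelationship: &rel})
	fmt.Println(ok, atom.Map.ElementRelationship) // true atomic
}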

View File

@@ -0,0 +1,202 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package schema
import "reflect"
// Equals returns true iff the two Schemas are equal.
func (a *Schema) Equals(b *Schema) bool {
if a == nil || b == nil {
return a == nil && b == nil
}
if len(a.Types) != len(b.Types) {
return false
}
for i := range a.Types {
if !a.Types[i].Equals(&b.Types[i]) {
return false
}
}
return true
}
// Equals returns true iff the two TypeRefs are equal.
//
// Note that two typerefs that have an equivalent type but where one is
// inlined and the other is named, are not considered equal.
func (a *TypeRef) Equals(b *TypeRef) bool {
if a == nil || b == nil {
return a == nil && b == nil
}
if (a.NamedType == nil) != (b.NamedType == nil) {
return false
}
if a.NamedType != nil {
if *a.NamedType != *b.NamedType {
return false
}
//return true
}
if a.ElementRelationship != b.ElementRelationship {
return false
}
return a.Inlined.Equals(&b.Inlined)
}
// Equals returns true iff the two TypeDefs are equal.
func (a *TypeDef) Equals(b *TypeDef) bool {
if a == nil || b == nil {
return a == nil && b == nil
}
if a.Name != b.Name {
return false
}
return a.Atom.Equals(&b.Atom)
}
// Equals returns true iff the two Atoms are equal.
func (a *Atom) Equals(b *Atom) bool {
if a == nil || b == nil {
return a == nil && b == nil
}
if (a.Scalar == nil) != (b.Scalar == nil) {
return false
}
if (a.List == nil) != (b.List == nil) {
return false
}
if (a.Map == nil) != (b.Map == nil) {
return false
}
switch {
case a.Scalar != nil:
return *a.Scalar == *b.Scalar
case a.List != nil:
return a.List.Equals(b.List)
case a.Map != nil:
return a.Map.Equals(b.Map)
}
return true
}
// Equals returns true iff the two Maps are equal.
func (a *Map) Equals(b *Map) bool {
if a == nil || b == nil {
return a == nil && b == nil
}
if !a.ElementType.Equals(&b.ElementType) {
return false
}
if a.ElementRelationship != b.ElementRelationship {
return false
}
if len(a.Fields) != len(b.Fields) {
return false
}
for i := range a.Fields {
if !a.Fields[i].Equals(&b.Fields[i]) {
return false
}
}
if len(a.Unions) != len(b.Unions) {
return false
}
for i := range a.Unions {
if !a.Unions[i].Equals(&b.Unions[i]) {
return false
}
}
return true
}
// Equals returns true iff the two Unions are equal.
func (a *Union) Equals(b *Union) bool {
if a == nil || b == nil {
return a == nil && b == nil
}
if (a.Discriminator == nil) != (b.Discriminator == nil) {
return false
}
if a.Discriminator != nil {
if *a.Discriminator != *b.Discriminator {
return false
}
}
if a.DeduceInvalidDiscriminator != b.DeduceInvalidDiscriminator {
return false
}
if len(a.Fields) != len(b.Fields) {
return false
}
for i := range a.Fields {
if !a.Fields[i].Equals(&b.Fields[i]) {
return false
}
}
return true
}
// Equals returns true iff the two UnionFields are equal.
func (a *UnionField) Equals(b *UnionField) bool {
if a == nil || b == nil {
return a == nil && b == nil
}
if a.FieldName != b.FieldName {
return false
}
if a.DiscriminatorValue != b.DiscriminatorValue {
return false
}
return true
}
// Equals returns true iff the two StructFields are equal.
func (a *StructField) Equals(b *StructField) bool {
if a == nil || b == nil {
return a == nil && b == nil
}
if a.Name != b.Name {
return false
}
if !reflect.DeepEqual(a.Default, b.Default) {
return false
}
return a.Type.Equals(&b.Type)
}
// Equals returns true iff the two Lists are equal.
func (a *List) Equals(b *List) bool {
if a == nil || b == nil {
return a == nil && b == nil
}
if !a.ElementType.Equals(&b.ElementType) {
return false
}
if a.ElementRelationship != b.ElementRelationship {
return false
}
if len(a.Keys) != len(b.Keys) {
return false
}
for i := range a.Keys {
if a.Keys[i] != b.Keys[i] {
return false
}
}
return true
}
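
A short sketch of the caveat noted above on TypeRef.Equals: two references to an equivalent type are not equal when one is named and the other inlined. The name "str" is illustrative.

package main

import (
	"fmt"

	"sigs.k8s.io/structured-merge-diff/v4/schema"
)

func main() {
	name := "str"
	named := schema.TypeRef{NamedType: &name}

	scalar := schema.String
	inlined := schema.TypeRef{Inlined: schema.Atom{Scalar: &scalar}}

	// Even if "str" is defined as a string scalar in some schema, the
	// references themselves do not compare as equal.
	fmt.Println(named.Equals(&inlined)) // false
}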

View File

@@ -0,0 +1,165 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package schema
// SchemaSchemaYAML is a schema against which you can validate other schemas.
// It will validate itself. It can be unmarshalled into a Schema type.
var SchemaSchemaYAML = `types:
- name: schema
map:
fields:
- name: types
type:
list:
elementRelationship: associative
elementType:
namedType: typeDef
keys:
- name
- name: typeDef
map:
fields:
- name: name
type:
scalar: string
- name: scalar
type:
scalar: string
- name: map
type:
namedType: map
- name: list
type:
namedType: list
- name: untyped
type:
namedType: untyped
- name: typeRef
map:
fields:
- name: namedType
type:
scalar: string
- name: scalar
type:
scalar: string
- name: map
type:
namedType: map
- name: list
type:
namedType: list
- name: untyped
type:
namedType: untyped
- name: elementRelationship
type:
scalar: string
- name: scalar
scalar: string
- name: map
map:
fields:
- name: fields
type:
list:
elementType:
namedType: structField
elementRelationship: associative
keys: [ "name" ]
- name: unions
type:
list:
elementType:
namedType: union
elementRelationship: atomic
- name: elementType
type:
namedType: typeRef
- name: elementRelationship
type:
scalar: string
- name: unionField
map:
fields:
- name: fieldName
type:
scalar: string
- name: discriminatorValue
type:
scalar: string
- name: union
map:
fields:
- name: discriminator
type:
scalar: string
- name: deduceInvalidDiscriminator
type:
scalar: boolean
- name: fields
type:
list:
elementRelationship: associative
elementType:
namedType: unionField
keys:
- fieldName
- name: structField
map:
fields:
- name: name
type:
scalar: string
- name: type
type:
namedType: typeRef
- name: default
type:
namedType: __untyped_atomic_
- name: list
map:
fields:
- name: elementType
type:
namedType: typeRef
- name: elementRelationship
type:
scalar: string
- name: keys
type:
list:
elementType:
scalar: string
elementRelationship: atomic
- name: untyped
map:
fields:
- name: elementRelationship
type:
scalar: string
- name: __untyped_atomic_
scalar: untyped
list:
elementType:
namedType: __untyped_atomic_
elementRelationship: atomic
map:
elementType:
namedType: __untyped_atomic_
elementRelationship: atomic
`
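
Because SchemaSchemaYAML validates other schemas (and itself), a schema document can be checked before use. A hedged sketch using the typed package that appears later in this diff; the "widget" schema is made up.

package main

import (
	"fmt"

	"sigs.k8s.io/structured-merge-diff/v4/typed"
)

func main() {
	// NewParser validates the schema against SchemaSchemaYAML before
	// building a parser from it.
	p, err := typed.NewParser(typed.YAMLObject(`types:
- name: widget
  map:
    fields:
    - name: id
      type:
        scalar: string
`))
	if err != nil {
		panic(err)
	}
	fmt.Println(p.TypeNames()) // [widget]
}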

View File

@@ -0,0 +1,470 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package typed
import (
"fmt"
"strings"
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
"sigs.k8s.io/structured-merge-diff/v4/schema"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
// Comparison is the return value of a TypedValue.Compare() operation.
//
// No field will appear in more than one of the three fieldsets. If all of the
// fieldsets are empty, then the objects must have been equal.
type Comparison struct {
// Removed contains any fields removed by rhs (the right-hand-side
// object in the comparison).
Removed *fieldpath.Set
// Modified contains fields present in both objects but different.
Modified *fieldpath.Set
// Added contains any fields added by rhs.
Added *fieldpath.Set
}
// IsSame returns true if the comparison returned no changes (the two
// compared objects are equal).
func (c *Comparison) IsSame() bool {
return c.Removed.Empty() && c.Modified.Empty() && c.Added.Empty()
}
// String returns a human readable version of the comparison.
func (c *Comparison) String() string {
bld := strings.Builder{}
if !c.Modified.Empty() {
bld.WriteString(fmt.Sprintf("- Modified Fields:\n%v\n", c.Modified))
}
if !c.Added.Empty() {
bld.WriteString(fmt.Sprintf("- Added Fields:\n%v\n", c.Added))
}
if !c.Removed.Empty() {
bld.WriteString(fmt.Sprintf("- Removed Fields:\n%v\n", c.Removed))
}
return bld.String()
}
// ExcludeFields recursively removes the given fields
// from the entire comparison
func (c *Comparison) ExcludeFields(fields *fieldpath.Set) *Comparison {
if fields == nil || fields.Empty() {
return c
}
c.Removed = c.Removed.RecursiveDifference(fields)
c.Modified = c.Modified.RecursiveDifference(fields)
c.Added = c.Added.RecursiveDifference(fields)
return c
}
func (c *Comparison) FilterFields(filter fieldpath.Filter) *Comparison {
if filter == nil {
return c
}
c.Removed = filter.Filter(c.Removed)
c.Modified = filter.Filter(c.Modified)
c.Added = filter.Filter(c.Added)
return c
}
type compareWalker struct {
lhs value.Value
rhs value.Value
schema *schema.Schema
typeRef schema.TypeRef
// Current path that we are comparing
path fieldpath.Path
// Resulting comparison.
comparison *Comparison
// internal housekeeping--don't set when constructing.
inLeaf bool // Set to true if we're in a "big leaf"--atomic map/list
// Allocate only as many walkers as needed for the depth by storing them here.
spareWalkers *[]*compareWalker
allocator value.Allocator
}
// compare walks lhs and rhs together, recording any differences in w.comparison.
func (w *compareWalker) compare(prefixFn func() string) (errs ValidationErrors) {
if w.lhs == nil && w.rhs == nil {
// check this condition here instead of everywhere below.
return errorf("at least one of lhs and rhs must be provided")
}
a, ok := w.schema.Resolve(w.typeRef)
if !ok {
return errorf("schema error: no type found matching: %v", *w.typeRef.NamedType)
}
alhs := deduceAtom(a, w.lhs)
arhs := deduceAtom(a, w.rhs)
// deduceAtom does not fix the type for nil values
// nil is a wildcard and will accept whatever form the other operand takes
if w.rhs == nil {
errs = append(errs, handleAtom(alhs, w.typeRef, w)...)
} else if w.lhs == nil || alhs.Equals(&arhs) {
errs = append(errs, handleAtom(arhs, w.typeRef, w)...)
} else {
w2 := *w
errs = append(errs, handleAtom(alhs, w.typeRef, &w2)...)
errs = append(errs, handleAtom(arhs, w.typeRef, w)...)
}
if !w.inLeaf {
if w.lhs == nil {
w.comparison.Added.Insert(w.path)
} else if w.rhs == nil {
w.comparison.Removed.Insert(w.path)
}
}
return errs.WithLazyPrefix(prefixFn)
}
// doLeaf should be called on leaves before descending into children, if there
// will be a descent. It modifies w.inLeaf.
func (w *compareWalker) doLeaf() {
if w.inLeaf {
// We're in a "big leaf", an atomic map or list. Ignore
// subsequent leaves.
return
}
w.inLeaf = true
// We don't recurse into leaf fields for merging.
if w.lhs == nil {
w.comparison.Added.Insert(w.path)
} else if w.rhs == nil {
w.comparison.Removed.Insert(w.path)
} else if !value.EqualsUsing(w.allocator, w.rhs, w.lhs) {
// TODO: Equality is not sufficient for this.
// Need to implement equality check on the value type.
w.comparison.Modified.Insert(w.path)
}
}
func (w *compareWalker) doScalar(t *schema.Scalar) ValidationErrors {
// Make sure at least one side is a valid scalar.
lerrs := validateScalar(t, w.lhs, "lhs: ")
rerrs := validateScalar(t, w.rhs, "rhs: ")
if len(lerrs) > 0 && len(rerrs) > 0 {
return append(lerrs, rerrs...)
}
// All scalars are leaf fields.
w.doLeaf()
return nil
}
func (w *compareWalker) prepareDescent(pe fieldpath.PathElement, tr schema.TypeRef, cmp *Comparison) *compareWalker {
if w.spareWalkers == nil {
// first descent.
w.spareWalkers = &[]*compareWalker{}
}
var w2 *compareWalker
if n := len(*w.spareWalkers); n > 0 {
w2, *w.spareWalkers = (*w.spareWalkers)[n-1], (*w.spareWalkers)[:n-1]
} else {
w2 = &compareWalker{}
}
*w2 = *w
w2.typeRef = tr
w2.path = append(w2.path, pe)
w2.lhs = nil
w2.rhs = nil
w2.comparison = cmp
return w2
}
func (w *compareWalker) finishDescent(w2 *compareWalker) {
// if the descent caused a realloc, ensure that we reuse the buffer
// for the next sibling.
w.path = w2.path[:len(w2.path)-1]
*w.spareWalkers = append(*w.spareWalkers, w2)
}
func (w *compareWalker) derefMap(prefix string, v value.Value) (value.Map, ValidationErrors) {
if v == nil {
return nil, nil
}
m, err := mapValue(w.allocator, v)
if err != nil {
return nil, errorf("%v: %v", prefix, err)
}
return m, nil
}
func (w *compareWalker) visitListItems(t *schema.List, lhs, rhs value.List) (errs ValidationErrors) {
rLen := 0
if rhs != nil {
rLen = rhs.Length()
}
lLen := 0
if lhs != nil {
lLen = lhs.Length()
}
maxLength := rLen
if lLen > maxLength {
maxLength = lLen
}
// Contains all the unique PEs between lhs and rhs, exactly once.
// Order doesn't matter since we're just tracking ownership in a set.
allPEs := make([]fieldpath.PathElement, 0, maxLength)
// Gather all the elements from lhs, indexed by PE, in a list for duplicates.
lValues := fieldpath.MakePathElementMap(lLen)
for i := 0; i < lLen; i++ {
child := lhs.At(i)
pe, err := listItemToPathElement(w.allocator, w.schema, t, child)
if err != nil {
errs = append(errs, errorf("element %v: %v", i, err.Error())...)
// If we can't construct the path element, we can't
// even report errors deeper in the schema, so bail on
// this element.
continue
}
if v, found := lValues.Get(pe); found {
list := v.([]value.Value)
lValues.Insert(pe, append(list, child))
} else {
lValues.Insert(pe, []value.Value{child})
allPEs = append(allPEs, pe)
}
}
// Gather all the elements from rhs, indexed by PE, in a list for duplicates.
rValues := fieldpath.MakePathElementMap(rLen)
for i := 0; i < rLen; i++ {
rValue := rhs.At(i)
pe, err := listItemToPathElement(w.allocator, w.schema, t, rValue)
if err != nil {
errs = append(errs, errorf("element %v: %v", i, err.Error())...)
// If we can't construct the path element, we can't
// even report errors deeper in the schema, so bail on
// this element.
continue
}
if v, found := rValues.Get(pe); found {
list := v.([]value.Value)
rValues.Insert(pe, append(list, rValue))
} else {
rValues.Insert(pe, []value.Value{rValue})
if _, found := lValues.Get(pe); !found {
allPEs = append(allPEs, pe)
}
}
}
for _, pe := range allPEs {
lList := []value.Value(nil)
if l, ok := lValues.Get(pe); ok {
lList = l.([]value.Value)
}
rList := []value.Value(nil)
if l, ok := rValues.Get(pe); ok {
rList = l.([]value.Value)
}
switch {
case len(lList) == 0 && len(rList) == 0:
// We shouldn't be here anyway.
return
// Normal use-case:
// We have no duplicates for this PE, compare items one-to-one.
case len(lList) <= 1 && len(rList) <= 1:
lValue := value.Value(nil)
if len(lList) != 0 {
lValue = lList[0]
}
rValue := value.Value(nil)
if len(rList) != 0 {
rValue = rList[0]
}
errs = append(errs, w.compareListItem(t, pe, lValue, rValue)...)
// Duplicates before & after use-case:
// Compare the duplicates lists as if they were atomic, mark modified if they changed.
case len(lList) >= 2 && len(rList) >= 2:
listEqual := func(lList, rList []value.Value) bool {
if len(lList) != len(rList) {
return false
}
for i := range lList {
if !value.Equals(lList[i], rList[i]) {
return false
}
}
return true
}
if !listEqual(lList, rList) {
w.comparison.Modified.Insert(append(w.path, pe))
}
// Duplicates before & not anymore use-case:
// Recursively add new non-duplicate items, remove the duplicate marker.
case len(lList) >= 2:
if len(rList) != 0 {
errs = append(errs, w.compareListItem(t, pe, nil, rList[0])...)
}
w.comparison.Removed.Insert(append(w.path, pe))
// New duplicates use-case:
// Recursively remove old non-duplicate items, add duplicate marker.
case len(rList) >= 2:
if len(lList) != 0 {
errs = append(errs, w.compareListItem(t, pe, lList[0], nil)...)
}
w.comparison.Added.Insert(append(w.path, pe))
}
}
return
}
func (w *compareWalker) indexListPathElements(t *schema.List, list value.List) ([]fieldpath.PathElement, fieldpath.PathElementValueMap, ValidationErrors) {
var errs ValidationErrors
length := 0
if list != nil {
length = list.Length()
}
observed := fieldpath.MakePathElementValueMap(length)
pes := make([]fieldpath.PathElement, 0, length)
for i := 0; i < length; i++ {
child := list.At(i)
pe, err := listItemToPathElement(w.allocator, w.schema, t, child)
if err != nil {
errs = append(errs, errorf("element %v: %v", i, err.Error())...)
// If we can't construct the path element, we can't
// even report errors deeper in the schema, so bail on
// this element.
continue
}
// Ignore repeated occurrences of `pe`.
if _, found := observed.Get(pe); found {
continue
}
observed.Insert(pe, child)
pes = append(pes, pe)
}
return pes, observed, errs
}
func (w *compareWalker) compareListItem(t *schema.List, pe fieldpath.PathElement, lChild, rChild value.Value) ValidationErrors {
w2 := w.prepareDescent(pe, t.ElementType, w.comparison)
w2.lhs = lChild
w2.rhs = rChild
errs := w2.compare(pe.String)
w.finishDescent(w2)
return errs
}
func (w *compareWalker) derefList(prefix string, v value.Value) (value.List, ValidationErrors) {
if v == nil {
return nil, nil
}
l, err := listValue(w.allocator, v)
if err != nil {
return nil, errorf("%v: %v", prefix, err)
}
return l, nil
}
func (w *compareWalker) doList(t *schema.List) (errs ValidationErrors) {
lhs, _ := w.derefList("lhs: ", w.lhs)
if lhs != nil {
defer w.allocator.Free(lhs)
}
rhs, _ := w.derefList("rhs: ", w.rhs)
if rhs != nil {
defer w.allocator.Free(rhs)
}
// If both lhs and rhs are empty/null, treat it as a
// leaf: this helps preserve the empty/null
// distinction.
emptyPromoteToLeaf := (lhs == nil || lhs.Length() == 0) && (rhs == nil || rhs.Length() == 0)
if t.ElementRelationship == schema.Atomic || emptyPromoteToLeaf {
w.doLeaf()
return nil
}
if lhs == nil && rhs == nil {
return nil
}
errs = w.visitListItems(t, lhs, rhs)
return errs
}
func (w *compareWalker) visitMapItem(t *schema.Map, out map[string]interface{}, key string, lhs, rhs value.Value) (errs ValidationErrors) {
fieldType := t.ElementType
if sf, ok := t.FindField(key); ok {
fieldType = sf.Type
}
pe := fieldpath.PathElement{FieldName: &key}
w2 := w.prepareDescent(pe, fieldType, w.comparison)
w2.lhs = lhs
w2.rhs = rhs
errs = append(errs, w2.compare(pe.String)...)
w.finishDescent(w2)
return errs
}
func (w *compareWalker) visitMapItems(t *schema.Map, lhs, rhs value.Map) (errs ValidationErrors) {
out := map[string]interface{}{}
value.MapZipUsing(w.allocator, lhs, rhs, value.Unordered, func(key string, lhsValue, rhsValue value.Value) bool {
errs = append(errs, w.visitMapItem(t, out, key, lhsValue, rhsValue)...)
return true
})
return errs
}
func (w *compareWalker) doMap(t *schema.Map) (errs ValidationErrors) {
lhs, _ := w.derefMap("lhs: ", w.lhs)
if lhs != nil {
defer w.allocator.Free(lhs)
}
rhs, _ := w.derefMap("rhs: ", w.rhs)
if rhs != nil {
defer w.allocator.Free(rhs)
}
// If both lhs and rhs are empty/null, treat it as a
// leaf: this helps preserve the empty/null
// distinction.
emptyPromoteToLeaf := (lhs == nil || lhs.Empty()) && (rhs == nil || rhs.Empty())
if t.ElementRelationship == schema.Atomic || emptyPromoteToLeaf {
w.doLeaf()
return nil
}
if lhs == nil && rhs == nil {
return nil
}
errs = append(errs, w.visitMapItems(t, lhs, rhs)...)
return errs
}
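
A minimal sketch of the three fieldsets produced by a comparison. It assumes TypedValue.Compare (defined in typed.go, which is not shown in this diff) drives the compareWalker above; the "widget" schema is invented.

package main

import (
	"fmt"

	"sigs.k8s.io/structured-merge-diff/v4/typed"
)

func main() {
	parser, err := typed.NewParser(typed.YAMLObject(`types:
- name: widget
  map:
    fields:
    - name: id
      type:
        scalar: string
    - name: size
      type:
        scalar: numeric
`))
	if err != nil {
		panic(err)
	}
	pt := parser.Type("widget")

	// Error handling elided for brevity.
	lhs, _ := pt.FromYAML(`{"id": "a", "size": 1}`)
	rhs, _ := pt.FromYAML(`{"size": 2}`)

	cmp, err := lhs.Compare(rhs)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmp.IsSame()) // false
	fmt.Print(cmp.String())   // reports .size as modified and .id as removed
}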

View File

@@ -0,0 +1,18 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package typed contains logic for operating on values with given schemas.
package typed

View File

@@ -0,0 +1,259 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package typed
import (
"errors"
"fmt"
"strings"
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
"sigs.k8s.io/structured-merge-diff/v4/schema"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
// ValidationError reports an error about a particular field
type ValidationError struct {
Path string
ErrorMessage string
}
// Error returns a human readable error message.
func (ve ValidationError) Error() string {
if len(ve.Path) == 0 {
return ve.ErrorMessage
}
return fmt.Sprintf("%s: %v", ve.Path, ve.ErrorMessage)
}
// ValidationErrors accumulates multiple validation error messages.
type ValidationErrors []ValidationError
// Error returns a human readable error message reporting each error in the
// list.
func (errs ValidationErrors) Error() string {
if len(errs) == 1 {
return errs[0].Error()
}
messages := []string{"errors:"}
for _, e := range errs {
messages = append(messages, " "+e.Error())
}
return strings.Join(messages, "\n")
}
// WithPath sets the given path on all the validation errors.
func (errs ValidationErrors) WithPath(p string) ValidationErrors {
for i := range errs {
errs[i].Path = p
}
return errs
}
// WithPrefix prefixes the path of every error with the given path element. This
// is useful when unwinding the stack on errors.
func (errs ValidationErrors) WithPrefix(prefix string) ValidationErrors {
for i := range errs {
errs[i].Path = prefix + errs[i].Path
}
return errs
}
// WithLazyPrefix prefixes the path of every error with the given path element.
// This is useful when unwinding the stack on errors. The prefix is
// computed lazily, and only if there is an error.
func (errs ValidationErrors) WithLazyPrefix(fn func() string) ValidationErrors {
if len(errs) == 0 {
return errs
}
prefix := ""
if fn != nil {
prefix = fn()
}
for i := range errs {
errs[i].Path = prefix + errs[i].Path
}
return errs
}
func errorf(format string, args ...interface{}) ValidationErrors {
return ValidationErrors{{
ErrorMessage: fmt.Sprintf(format, args...),
}}
}
type atomHandler interface {
doScalar(*schema.Scalar) ValidationErrors
doList(*schema.List) ValidationErrors
doMap(*schema.Map) ValidationErrors
}
func resolveSchema(s *schema.Schema, tr schema.TypeRef, v value.Value, ah atomHandler) ValidationErrors {
a, ok := s.Resolve(tr)
if !ok {
typeName := "inlined type"
if tr.NamedType != nil {
typeName = *tr.NamedType
}
return errorf("schema error: no type found matching: %v", typeName)
}
a = deduceAtom(a, v)
return handleAtom(a, tr, ah)
}
// deduceAtom determines which of the possible types in atom 'atom' applies to value 'val'.
// If val is of a type allowed by atom, return a copy of atom with all other types set to nil.
// If val is nil, or is not of a type allowed by atom, just return the original atom;
// validation will then fail at a later stage (with a more useful error).
func deduceAtom(atom schema.Atom, val value.Value) schema.Atom {
switch {
case val == nil:
case val.IsFloat(), val.IsInt(), val.IsString(), val.IsBool():
if atom.Scalar != nil {
return schema.Atom{Scalar: atom.Scalar}
}
case val.IsList():
if atom.List != nil {
return schema.Atom{List: atom.List}
}
case val.IsMap():
if atom.Map != nil {
return schema.Atom{Map: atom.Map}
}
}
return atom
}
func handleAtom(a schema.Atom, tr schema.TypeRef, ah atomHandler) ValidationErrors {
switch {
case a.Map != nil:
return ah.doMap(a.Map)
case a.Scalar != nil:
return ah.doScalar(a.Scalar)
case a.List != nil:
return ah.doList(a.List)
}
name := "inlined"
if tr.NamedType != nil {
name = "named type: " + *tr.NamedType
}
return errorf("schema error: invalid atom: %v", name)
}
// Returns the list, or an error. Reminder: nil is a valid list and might be returned.
func listValue(a value.Allocator, val value.Value) (value.List, error) {
if val.IsNull() {
// Null is a valid list.
return nil, nil
}
if !val.IsList() {
return nil, fmt.Errorf("expected list, got %v", val)
}
return val.AsListUsing(a), nil
}
// Returns the map, or an error. Reminder: nil is a valid map and might be returned.
func mapValue(a value.Allocator, val value.Value) (value.Map, error) {
if val == nil {
return nil, fmt.Errorf("expected map, got nil")
}
if val.IsNull() {
// Null is a valid map.
return nil, nil
}
if !val.IsMap() {
return nil, fmt.Errorf("expected map, got %v", val)
}
return val.AsMapUsing(a), nil
}
func getAssociativeKeyDefault(s *schema.Schema, list *schema.List, fieldName string) (interface{}, error) {
atom, ok := s.Resolve(list.ElementType)
if !ok {
return nil, errors.New("invalid elementType for list")
}
if atom.Map == nil {
return nil, errors.New("associative list may not have non-map types")
}
// If the field is not found, we can assume there is no default.
field, _ := atom.Map.FindField(fieldName)
return field.Default, nil
}
func keyedAssociativeListItemToPathElement(a value.Allocator, s *schema.Schema, list *schema.List, child value.Value) (fieldpath.PathElement, error) {
pe := fieldpath.PathElement{}
if child.IsNull() {
// null entries are illegal.
return pe, errors.New("associative list with keys may not have a null element")
}
if !child.IsMap() {
return pe, errors.New("associative list with keys may not have non-map elements")
}
keyMap := value.FieldList{}
m := child.AsMapUsing(a)
defer a.Free(m)
for _, fieldName := range list.Keys {
if val, ok := m.Get(fieldName); ok {
keyMap = append(keyMap, value.Field{Name: fieldName, Value: val})
} else if def, err := getAssociativeKeyDefault(s, list, fieldName); err != nil {
return pe, fmt.Errorf("couldn't find default value for %v: %v", fieldName, err)
} else if def != nil {
keyMap = append(keyMap, value.Field{Name: fieldName, Value: value.NewValueInterface(def)})
} else {
return pe, fmt.Errorf("associative list with keys has an element that omits key field %q (and doesn't have default value)", fieldName)
}
}
keyMap.Sort()
pe.Key = &keyMap
return pe, nil
}
func setItemToPathElement(child value.Value) (fieldpath.PathElement, error) {
pe := fieldpath.PathElement{}
switch {
case child.IsMap():
// TODO: atomic maps should be acceptable.
return pe, errors.New("associative list without keys has an element that's a map type")
case child.IsList():
// Should we support a set of lists? For the moment
// let's say we don't.
// TODO: atomic lists should be acceptable.
return pe, errors.New("not supported: associative list with lists as elements")
case child.IsNull():
return pe, errors.New("associative list without keys has an element that's an explicit null")
default:
// We are a set type.
pe.Value = &child
return pe, nil
}
}
func listItemToPathElement(a value.Allocator, s *schema.Schema, list *schema.List, child value.Value) (fieldpath.PathElement, error) {
if list.ElementRelationship != schema.Associative {
return fieldpath.PathElement{}, errors.New("invalid indexing of non-associative list")
}
if len(list.Keys) > 0 {
return keyedAssociativeListItemToPathElement(a, s, list, child)
}
// If there are no keys, then we must be a set of primitives.
return setItemToPathElement(child)
}
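
A quick sketch of how the exported error types above render; the paths and messages are made up.

package main

import (
	"fmt"

	"sigs.k8s.io/structured-merge-diff/v4/typed"
)

func main() {
	errs := typed.ValidationErrors{
		{Path: ".spec.replicas", ErrorMessage: "expected numeric, got string"},
		{Path: ".spec.name", ErrorMessage: "expected string"},
	}

	// Multiple errors are joined under an "errors:" banner, one per line.
	fmt.Println(errs.Error())

	// Prefixing is how walkers attach context while unwinding.
	fmt.Println(errs.WithPrefix(".template")[0].Path) // .template.spec.replicas
}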

View File

@@ -0,0 +1,427 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package typed
import (
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
"sigs.k8s.io/structured-merge-diff/v4/schema"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
type mergingWalker struct {
lhs value.Value
rhs value.Value
schema *schema.Schema
typeRef schema.TypeRef
// Current path that we are merging
path fieldpath.Path
// How to merge. Called after schema validation for all leaf fields.
rule mergeRule
// If set, called after non-leaf items have been merged. (`out` is
// probably already set.)
postItemHook mergeRule
// output of the merge operation (nil if none)
out *interface{}
// internal housekeeping--don't set when constructing.
inLeaf bool // Set to true if we're in a "big leaf"--atomic map/list
// Allocate only as many walkers as needed for the depth by storing them here.
spareWalkers *[]*mergingWalker
allocator value.Allocator
}
// merge rules examine w.lhs and w.rhs (up to one of which may be nil) and
// optionally set w.out. If lhs and rhs are both set, they will be of
// comparable type.
type mergeRule func(w *mergingWalker)
var (
ruleKeepRHS = mergeRule(func(w *mergingWalker) {
if w.rhs != nil {
v := w.rhs.Unstructured()
w.out = &v
} else if w.lhs != nil {
v := w.lhs.Unstructured()
w.out = &v
}
})
)
// merge sets w.out.
func (w *mergingWalker) merge(prefixFn func() string) (errs ValidationErrors) {
if w.lhs == nil && w.rhs == nil {
// check this condition here instead of everywhere below.
return errorf("at least one of lhs and rhs must be provided")
}
a, ok := w.schema.Resolve(w.typeRef)
if !ok {
return errorf("schema error: no type found matching: %v", *w.typeRef.NamedType)
}
alhs := deduceAtom(a, w.lhs)
arhs := deduceAtom(a, w.rhs)
// deduceAtom does not fix the type for nil values
// nil is a wildcard and will accept whatever form the other operand takes
if w.rhs == nil {
errs = append(errs, handleAtom(alhs, w.typeRef, w)...)
} else if w.lhs == nil || alhs.Equals(&arhs) {
errs = append(errs, handleAtom(arhs, w.typeRef, w)...)
} else {
w2 := *w
errs = append(errs, handleAtom(alhs, w.typeRef, &w2)...)
errs = append(errs, handleAtom(arhs, w.typeRef, w)...)
}
if !w.inLeaf && w.postItemHook != nil {
w.postItemHook(w)
}
return errs.WithLazyPrefix(prefixFn)
}
// doLeaf should be called on leaves before descending into children, if there
// will be a descent. It modifies w.inLeaf.
func (w *mergingWalker) doLeaf() {
if w.inLeaf {
// We're in a "big leaf", an atomic map or list. Ignore
// subsequent leaves.
return
}
w.inLeaf = true
// We don't recurse into leaf fields for merging.
w.rule(w)
}
func (w *mergingWalker) doScalar(t *schema.Scalar) ValidationErrors {
// Make sure at least one side is a valid scalar.
lerrs := validateScalar(t, w.lhs, "lhs: ")
rerrs := validateScalar(t, w.rhs, "rhs: ")
if len(lerrs) > 0 && len(rerrs) > 0 {
return append(lerrs, rerrs...)
}
// All scalars are leaf fields.
w.doLeaf()
return nil
}
func (w *mergingWalker) prepareDescent(pe fieldpath.PathElement, tr schema.TypeRef) *mergingWalker {
if w.spareWalkers == nil {
// first descent.
w.spareWalkers = &[]*mergingWalker{}
}
var w2 *mergingWalker
if n := len(*w.spareWalkers); n > 0 {
w2, *w.spareWalkers = (*w.spareWalkers)[n-1], (*w.spareWalkers)[:n-1]
} else {
w2 = &mergingWalker{}
}
*w2 = *w
w2.typeRef = tr
w2.path = append(w2.path, pe)
w2.lhs = nil
w2.rhs = nil
w2.out = nil
return w2
}
func (w *mergingWalker) finishDescent(w2 *mergingWalker) {
// if the descent caused a realloc, ensure that we reuse the buffer
// for the next sibling.
w.path = w2.path[:len(w2.path)-1]
*w.spareWalkers = append(*w.spareWalkers, w2)
}
func (w *mergingWalker) derefMap(prefix string, v value.Value) (value.Map, ValidationErrors) {
if v == nil {
return nil, nil
}
m, err := mapValue(w.allocator, v)
if err != nil {
return nil, errorf("%v: %v", prefix, err)
}
return m, nil
}
func (w *mergingWalker) visitListItems(t *schema.List, lhs, rhs value.List) (errs ValidationErrors) {
rLen := 0
if rhs != nil {
rLen = rhs.Length()
}
lLen := 0
if lhs != nil {
lLen = lhs.Length()
}
outLen := lLen
if outLen < rLen {
outLen = rLen
}
out := make([]interface{}, 0, outLen)
rhsPEs, observedRHS, rhsErrs := w.indexListPathElements(t, rhs, false)
errs = append(errs, rhsErrs...)
lhsPEs, observedLHS, lhsErrs := w.indexListPathElements(t, lhs, true)
errs = append(errs, lhsErrs...)
if len(errs) != 0 {
return errs
}
sharedOrder := make([]*fieldpath.PathElement, 0, rLen)
for i := range rhsPEs {
pe := &rhsPEs[i]
if _, ok := observedLHS.Get(*pe); ok {
sharedOrder = append(sharedOrder, pe)
}
}
var nextShared *fieldpath.PathElement
if len(sharedOrder) > 0 {
nextShared = sharedOrder[0]
sharedOrder = sharedOrder[1:]
}
mergedRHS := fieldpath.MakePathElementMap(len(rhsPEs))
lLen, rLen = len(lhsPEs), len(rhsPEs)
for lI, rI := 0, 0; lI < lLen || rI < rLen; {
if lI < lLen && rI < rLen {
pe := lhsPEs[lI]
if pe.Equals(rhsPEs[rI]) {
// merge LHS & RHS items
mergedRHS.Insert(pe, struct{}{})
lChild, _ := observedLHS.Get(pe) // may be nil if the PE is duplicated.
rChild, _ := observedRHS.Get(pe)
mergeOut, itemErrs := w.mergeListItem(t, pe, lChild, rChild)
errs = append(errs, itemErrs...)
if mergeOut != nil {
out = append(out, *mergeOut)
}
lI++
rI++
nextShared = nil
if len(sharedOrder) > 0 {
nextShared = sharedOrder[0]
sharedOrder = sharedOrder[1:]
}
continue
}
if _, ok := observedRHS.Get(pe); ok && nextShared != nil && !nextShared.Equals(lhsPEs[lI]) {
// shared item, but not the one we want in this round
lI++
continue
}
}
if lI < lLen {
pe := lhsPEs[lI]
if _, ok := observedRHS.Get(pe); !ok {
// take LHS item using At to make sure we get the right item (observed may not contain the right item).
lChild := lhs.AtUsing(w.allocator, lI)
mergeOut, itemErrs := w.mergeListItem(t, pe, lChild, nil)
errs = append(errs, itemErrs...)
if mergeOut != nil {
out = append(out, *mergeOut)
}
lI++
continue
} else if _, ok := mergedRHS.Get(pe); ok {
// we've already merged it with RHS, we don't want to duplicate it, skip it.
lI++
}
}
if rI < rLen {
// Take the RHS item, merge with matching LHS item if possible
pe := rhsPEs[rI]
mergedRHS.Insert(pe, struct{}{})
lChild, _ := observedLHS.Get(pe) // may be nil if absent or duplicated.
rChild, _ := observedRHS.Get(pe)
mergeOut, itemErrs := w.mergeListItem(t, pe, lChild, rChild)
errs = append(errs, itemErrs...)
if mergeOut != nil {
out = append(out, *mergeOut)
}
rI++
// Advance nextShared, if we are merging nextShared.
if nextShared != nil && nextShared.Equals(pe) {
nextShared = nil
if len(sharedOrder) > 0 {
nextShared = sharedOrder[0]
sharedOrder = sharedOrder[1:]
}
}
}
}
if len(out) > 0 {
i := interface{}(out)
w.out = &i
}
return errs
}
func (w *mergingWalker) indexListPathElements(t *schema.List, list value.List, allowDuplicates bool) ([]fieldpath.PathElement, fieldpath.PathElementValueMap, ValidationErrors) {
var errs ValidationErrors
length := 0
if list != nil {
length = list.Length()
}
observed := fieldpath.MakePathElementValueMap(length)
pes := make([]fieldpath.PathElement, 0, length)
for i := 0; i < length; i++ {
child := list.At(i)
pe, err := listItemToPathElement(w.allocator, w.schema, t, child)
if err != nil {
errs = append(errs, errorf("element %v: %v", i, err.Error())...)
// If we can't construct the path element, we can't
// even report errors deeper in the schema, so bail on
// this element.
continue
}
if _, found := observed.Get(pe); found && !allowDuplicates {
errs = append(errs, errorf("duplicate entries for key %v", pe.String())...)
continue
} else if !found {
observed.Insert(pe, child)
} else {
// Duplicated items are not merged with the new value; make them nil.
observed.Insert(pe, value.NewValueInterface(nil))
}
pes = append(pes, pe)
}
return pes, observed, errs
}
func (w *mergingWalker) mergeListItem(t *schema.List, pe fieldpath.PathElement, lChild, rChild value.Value) (out *interface{}, errs ValidationErrors) {
w2 := w.prepareDescent(pe, t.ElementType)
w2.lhs = lChild
w2.rhs = rChild
errs = append(errs, w2.merge(pe.String)...)
if w2.out != nil {
out = w2.out
}
w.finishDescent(w2)
return
}
func (w *mergingWalker) derefList(prefix string, v value.Value) (value.List, ValidationErrors) {
if v == nil {
return nil, nil
}
l, err := listValue(w.allocator, v)
if err != nil {
return nil, errorf("%v: %v", prefix, err)
}
return l, nil
}
func (w *mergingWalker) doList(t *schema.List) (errs ValidationErrors) {
lhs, _ := w.derefList("lhs: ", w.lhs)
if lhs != nil {
defer w.allocator.Free(lhs)
}
rhs, _ := w.derefList("rhs: ", w.rhs)
if rhs != nil {
defer w.allocator.Free(rhs)
}
// If both lhs and rhs are empty/null, treat it as a
// leaf: this helps preserve the empty/null
// distinction.
emptyPromoteToLeaf := (lhs == nil || lhs.Length() == 0) && (rhs == nil || rhs.Length() == 0)
if t.ElementRelationship == schema.Atomic || emptyPromoteToLeaf {
w.doLeaf()
return nil
}
if lhs == nil && rhs == nil {
return nil
}
errs = w.visitListItems(t, lhs, rhs)
return errs
}
func (w *mergingWalker) visitMapItem(t *schema.Map, out map[string]interface{}, key string, lhs, rhs value.Value) (errs ValidationErrors) {
fieldType := t.ElementType
if sf, ok := t.FindField(key); ok {
fieldType = sf.Type
}
pe := fieldpath.PathElement{FieldName: &key}
w2 := w.prepareDescent(pe, fieldType)
w2.lhs = lhs
w2.rhs = rhs
errs = append(errs, w2.merge(pe.String)...)
if w2.out != nil {
out[key] = *w2.out
}
w.finishDescent(w2)
return errs
}
func (w *mergingWalker) visitMapItems(t *schema.Map, lhs, rhs value.Map) (errs ValidationErrors) {
out := map[string]interface{}{}
value.MapZipUsing(w.allocator, lhs, rhs, value.Unordered, func(key string, lhsValue, rhsValue value.Value) bool {
errs = append(errs, w.visitMapItem(t, out, key, lhsValue, rhsValue)...)
return true
})
if len(out) > 0 {
i := interface{}(out)
w.out = &i
}
return errs
}
func (w *mergingWalker) doMap(t *schema.Map) (errs ValidationErrors) {
lhs, _ := w.derefMap("lhs: ", w.lhs)
if lhs != nil {
defer w.allocator.Free(lhs)
}
rhs, _ := w.derefMap("rhs: ", w.rhs)
if rhs != nil {
defer w.allocator.Free(rhs)
}
// If both lhs and rhs are empty/null, treat it as a
// leaf: this helps preserve the empty/null
// distinction.
emptyPromoteToLeaf := (lhs == nil || lhs.Empty()) && (rhs == nil || rhs.Empty())
if t.ElementRelationship == schema.Atomic || emptyPromoteToLeaf {
w.doLeaf()
return nil
}
if lhs == nil && rhs == nil {
return nil
}
errs = append(errs, w.visitMapItems(t, lhs, rhs)...)
return errs
}
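
A hedged sketch of the merge semantics implemented by ruleKeepRHS and the walker above, assuming TypedValue.Merge (defined in typed.go, not shown in this diff) is the entry point; the "widget" schema is invented.

package main

import (
	"fmt"

	"sigs.k8s.io/structured-merge-diff/v4/typed"
)

func main() {
	parser, err := typed.NewParser(typed.YAMLObject(`types:
- name: widget
  map:
    fields:
    - name: id
      type:
        scalar: string
    - name: size
      type:
        scalar: numeric
`))
	if err != nil {
		panic(err)
	}
	pt := parser.Type("widget")

	// Error handling elided for brevity.
	lhs, _ := pt.FromYAML(`{"id": "a", "size": 1}`)
	rhs, _ := pt.FromYAML(`{"size": 2}`)

	merged, err := lhs.Merge(rhs)
	if err != nil {
		panic(err)
	}
	// RHS leaves win; fields absent from RHS are kept from LHS.
	fmt.Println(merged.AsValue().Unstructured()) // map[id:a size:2]
}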

View File

@@ -0,0 +1,151 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package typed
import (
"fmt"
"sigs.k8s.io/structured-merge-diff/v4/schema"
"sigs.k8s.io/structured-merge-diff/v4/value"
yaml "sigs.k8s.io/yaml/goyaml.v2"
)
// YAMLObject is an object encoded in YAML.
type YAMLObject string
// Parser implements YAMLParser and allows introspecting the schema.
type Parser struct {
Schema schema.Schema
}
// create builds an unvalidated parser.
func create(s YAMLObject) (*Parser, error) {
p := Parser{}
err := yaml.Unmarshal([]byte(s), &p.Schema)
return &p, err
}
func createOrDie(schema YAMLObject) *Parser {
p, err := create(schema)
if err != nil {
panic(fmt.Errorf("failed to create parser: %v", err))
}
return p
}
var ssParser = createOrDie(YAMLObject(schema.SchemaSchemaYAML))
// NewParser will build a YAMLParser from a schema. The schema is validated.
func NewParser(schema YAMLObject) (*Parser, error) {
_, err := ssParser.Type("schema").FromYAML(schema)
if err != nil {
return nil, fmt.Errorf("unable to validate schema: %v", err)
}
p, err := create(schema)
if err != nil {
return nil, err
}
return p, nil
}
// TypeNames returns a list of types this parser understands.
func (p *Parser) TypeNames() (names []string) {
for _, td := range p.Schema.Types {
names = append(names, td.Name)
}
return names
}
// Type returns a helper which can produce objects of the given type. Any
// errors are deferred until a further function is called.
func (p *Parser) Type(name string) ParseableType {
return ParseableType{
Schema: &p.Schema,
TypeRef: schema.TypeRef{NamedType: &name},
}
}
// ParseableType allows for easy production of typed objects.
type ParseableType struct {
TypeRef schema.TypeRef
Schema *schema.Schema
}
// IsValid returns true if p's schema and type name are valid.
func (p ParseableType) IsValid() bool {
_, ok := p.Schema.Resolve(p.TypeRef)
return ok
}
// FromYAML parses a YAML string into an object of the type referenced by p,
// returning an error if validation fails.
func (p ParseableType) FromYAML(object YAMLObject, opts ...ValidationOptions) (*TypedValue, error) {
var v interface{}
err := yaml.Unmarshal([]byte(object), &v)
if err != nil {
return nil, err
}
return AsTyped(value.NewValueInterface(v), p.Schema, p.TypeRef, opts...)
}
// FromUnstructured converts a go "interface{}" type, typically an
// unstructured object in Kubernetes world, to a TypedValue. It returns an
// error if the resulting object fails schema validation.
// The provided interface{} must be one of: map[string]interface{},
// map[interface{}]interface{}, []interface{}, int types, float types,
// string or boolean. Nested interface{} must also be one of these types.
func (p ParseableType) FromUnstructured(in interface{}, opts ...ValidationOptions) (*TypedValue, error) {
return AsTyped(value.NewValueInterface(in), p.Schema, p.TypeRef, opts...)
}
// FromStructured converts a go "interface{}" type, typically a structured object in
// Kubernetes, to a TypedValue. It will return an error if the resulting object fails
// schema validation. The provided "interface{}" value must be a pointer so that the
// value can be modified via reflection. The provided "interface{}" may contain structs
// and types that are converted to Values by the jsonMarshaler interface.
func (p ParseableType) FromStructured(in interface{}, opts ...ValidationOptions) (*TypedValue, error) {
v, err := value.NewValueReflect(in)
if err != nil {
return nil, fmt.Errorf("error creating struct value reflector: %v", err)
}
return AsTyped(v, p.Schema, p.TypeRef, opts...)
}
// DeducedParseableType is a ParseableType that deduces the type from
// the content of the object.
var DeducedParseableType ParseableType = createOrDie(YAMLObject(`types:
- name: __untyped_atomic_
scalar: untyped
list:
elementType:
namedType: __untyped_atomic_
elementRelationship: atomic
map:
elementType:
namedType: __untyped_atomic_
elementRelationship: atomic
- name: __untyped_deduced_
scalar: untyped
list:
elementType:
namedType: __untyped_atomic_
elementRelationship: atomic
map:
elementType:
namedType: __untyped_deduced_
elementRelationship: separable
`)).Type("__untyped_deduced_")
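
A small sketch of DeducedParseableType in use: any well-formed object can be typed without a hand-written schema, with maps treated as separable and lists as atomic. AsValue (defined in typed.go, not shown here) is assumed for printing.

package main

import (
	"fmt"

	"sigs.k8s.io/structured-merge-diff/v4/typed"
)

func main() {
	tv, err := typed.DeducedParseableType.FromUnstructured(map[string]interface{}{
		"metadata": map[string]interface{}{"name": "example"},
		"items":    []interface{}{1, 2, 3},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(tv.AsValue().Unstructured())
}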

View File

@@ -0,0 +1,290 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package typed
import (
"fmt"
"sync"
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
"sigs.k8s.io/structured-merge-diff/v4/schema"
)
var fmPool = sync.Pool{
New: func() interface{} { return &reconcileWithSchemaWalker{} },
}
func (v *reconcileWithSchemaWalker) finished() {
v.fieldSet = nil
v.schema = nil
v.value = nil
v.typeRef = schema.TypeRef{}
v.path = nil
v.toRemove = nil
v.toAdd = nil
fmPool.Put(v)
}
type reconcileWithSchemaWalker struct {
value *TypedValue // root of the live object
schema *schema.Schema // root of the live schema
// state of node being visited by walker
fieldSet *fieldpath.Set
typeRef schema.TypeRef
path fieldpath.Path
isAtomic bool
// the accumulated diff to perform to apply reconciliation
toRemove *fieldpath.Set // paths to remove recursively
toAdd *fieldpath.Set // paths to add after any removals
// Allocate only as many walkers as needed for the depth by storing them here.
spareWalkers *[]*reconcileWithSchemaWalker
}
func (v *reconcileWithSchemaWalker) prepareDescent(pe fieldpath.PathElement, tr schema.TypeRef) *reconcileWithSchemaWalker {
if v.spareWalkers == nil {
// first descent.
v.spareWalkers = &[]*reconcileWithSchemaWalker{}
}
var v2 *reconcileWithSchemaWalker
if n := len(*v.spareWalkers); n > 0 {
v2, *v.spareWalkers = (*v.spareWalkers)[n-1], (*v.spareWalkers)[:n-1]
} else {
v2 = &reconcileWithSchemaWalker{}
}
*v2 = *v
v2.typeRef = tr
v2.path = append(v.path, pe)
v2.value = v.value
return v2
}
func (v *reconcileWithSchemaWalker) finishDescent(v2 *reconcileWithSchemaWalker) {
v2.fieldSet = nil
v2.schema = nil
v2.value = nil
v2.typeRef = schema.TypeRef{}
if cap(v2.path) < 20 { // recycle slices that do not have unexpectedly high capacity
v2.path = v2.path[:0]
} else {
v2.path = nil
}
// merge any accumulated changes into parent walker
if v2.toRemove != nil {
if v.toRemove == nil {
v.toRemove = v2.toRemove
} else {
v.toRemove = v.toRemove.Union(v2.toRemove)
}
}
if v2.toAdd != nil {
if v.toAdd == nil {
v.toAdd = v2.toAdd
} else {
v.toAdd = v.toAdd.Union(v2.toAdd)
}
}
v2.toRemove = nil
v2.toAdd = nil
// if the descent caused a realloc, ensure that we reuse the buffer
// for the next sibling.
*v.spareWalkers = append(*v.spareWalkers, v2)
}
// ReconcileFieldSetWithSchema reconciles a field set with any changes to the
// object's schema since the field set was written. Returns the reconciled field set, or nil if
// no changes were made to the field set.
//
// Supports:
// - changing types from atomic to granular
// - changing types from granular to atomic
func ReconcileFieldSetWithSchema(fieldset *fieldpath.Set, tv *TypedValue) (*fieldpath.Set, error) {
v := fmPool.Get().(*reconcileWithSchemaWalker)
v.fieldSet = fieldset
v.value = tv
v.schema = tv.schema
v.typeRef = tv.typeRef
defer v.finished()
errs := v.reconcile()
if len(errs) > 0 {
return nil, fmt.Errorf("errors reconciling field set with schema: %s", errs.Error())
}
// If there are any accumulated changes, apply them
if v.toAdd != nil || v.toRemove != nil {
out := v.fieldSet
if v.toRemove != nil {
out = out.RecursiveDifference(v.toRemove)
}
if v.toAdd != nil {
out = out.Union(v.toAdd)
}
return out, nil
}
return nil, nil
}
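// exampleReconcile is an illustrative sketch added for this commit review, not
// part of the upstream file; the name is hypothetical. It refreshes a stored
// field set after the object's schema may have changed.
func exampleReconcile(stored *fieldpath.Set, live *TypedValue) (*fieldpath.Set, error) {
	reconciled, err := ReconcileFieldSetWithSchema(stored, live)
	if err != nil {
		return nil, err
	}
	if reconciled == nil {
		// nil means the schema changes required no updates to the set.
		return stored, nil
	}
	return reconciled, nil
}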
func (v *reconcileWithSchemaWalker) reconcile() (errs ValidationErrors) {
a, ok := v.schema.Resolve(v.typeRef)
if !ok {
errs = append(errs, errorf("could not resolve %v", v.typeRef)...)
return
}
return handleAtom(a, v.typeRef, v)
}
func (v *reconcileWithSchemaWalker) doScalar(_ *schema.Scalar) (errs ValidationErrors) {
return errs
}
func (v *reconcileWithSchemaWalker) visitListItems(t *schema.List, element *fieldpath.Set) (errs ValidationErrors) {
handleElement := func(pe fieldpath.PathElement, isMember bool) {
var hasChildren bool
v2 := v.prepareDescent(pe, t.ElementType)
v2.fieldSet, hasChildren = element.Children.Get(pe)
v2.isAtomic = isMember && !hasChildren
errs = append(errs, v2.reconcile()...)
v.finishDescent(v2)
}
element.Children.Iterate(func(pe fieldpath.PathElement) {
if element.Members.Has(pe) {
return
}
handleElement(pe, false)
})
element.Members.Iterate(func(pe fieldpath.PathElement) {
handleElement(pe, true)
})
return errs
}
func (v *reconcileWithSchemaWalker) doList(t *schema.List) (errs ValidationErrors) {
// reconcile lists changed from granular to atomic.
// Note that migrations from atomic to granular are not recommended and will
// be treated as if they were always granular.
//
// In this case, the manager that owned the previously atomic field (and all subfields),
// will now own just the top-level field and none of the subfields.
if !v.isAtomic && t.ElementRelationship == schema.Atomic {
		v.toRemove = fieldpath.NewSet(v.path) // remove the root and all child fields
v.toAdd = fieldpath.NewSet(v.path) // add the root of the atomic
return errs
}
if v.fieldSet != nil {
errs = v.visitListItems(t, v.fieldSet)
}
return errs
}
func (v *reconcileWithSchemaWalker) visitMapItems(t *schema.Map, element *fieldpath.Set) (errs ValidationErrors) {
handleElement := func(pe fieldpath.PathElement, isMember bool) {
var hasChildren bool
if tr, ok := typeRefAtPath(t, pe); ok { // ignore fields not in the schema
v2 := v.prepareDescent(pe, tr)
v2.fieldSet, hasChildren = element.Children.Get(pe)
v2.isAtomic = isMember && !hasChildren
errs = append(errs, v2.reconcile()...)
v.finishDescent(v2)
}
}
element.Children.Iterate(func(pe fieldpath.PathElement) {
if element.Members.Has(pe) {
return
}
handleElement(pe, false)
})
element.Members.Iterate(func(pe fieldpath.PathElement) {
handleElement(pe, true)
})
return errs
}
func (v *reconcileWithSchemaWalker) doMap(t *schema.Map) (errs ValidationErrors) {
// We don't currently reconcile deduced types (unstructured CRDs) or maps that contain only unknown
// fields since deduced types do not yet support atomic or granular tags.
if isUntypedDeducedMap(t) {
return errs
}
// reconcile maps and structs changed from granular to atomic.
// Note that migrations from atomic to granular are not recommended and will
// be treated as if they were always granular.
//
// In this case the manager that owned the previously atomic field (and all subfields),
// will now own just the top-level field and none of the subfields.
if !v.isAtomic && t.ElementRelationship == schema.Atomic {
if v.fieldSet != nil && v.fieldSet.Size() > 0 {
			v.toRemove = fieldpath.NewSet(v.path) // remove the root and all child fields
v.toAdd = fieldpath.NewSet(v.path) // add the root of the atomic
}
return errs
}
if v.fieldSet != nil {
errs = v.visitMapItems(t, v.fieldSet)
}
return errs
}
func fieldSetAtPath(node *fieldpath.Set, path fieldpath.Path) (*fieldpath.Set, bool) {
ok := true
for _, pe := range path {
if node, ok = node.Children.Get(pe); !ok {
break
}
}
return node, ok
}
func descendToPath(node *fieldpath.Set, path fieldpath.Path) *fieldpath.Set {
for _, pe := range path {
node = node.Children.Descend(pe)
}
return node
}
func typeRefAtPath(t *schema.Map, pe fieldpath.PathElement) (schema.TypeRef, bool) {
tr := t.ElementType
if pe.FieldName != nil {
if sf, ok := t.FindField(*pe.FieldName); ok {
tr = sf.Type
}
}
return tr, tr != schema.TypeRef{}
}
// isUntypedDeducedMap returns true if m has no fields defined, but allows untyped elements.
// This is equivalent to an OpenAPI object that has x-kubernetes-preserve-unknown-fields=true
// but does not have any properties defined on the object.
func isUntypedDeducedMap(m *schema.Map) bool {
return isUntypedDeducedRef(m.ElementType) && m.Fields == nil
}
func isUntypedDeducedRef(t schema.TypeRef) bool {
if t.NamedType != nil {
return *t.NamedType == "__untyped_deduced_"
}
atom := t.Inlined
return atom.Scalar != nil && *atom.Scalar == "untyped"
}

View File

@ -0,0 +1,165 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package typed
import (
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
"sigs.k8s.io/structured-merge-diff/v4/schema"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
type removingWalker struct {
value value.Value
out interface{}
schema *schema.Schema
toRemove *fieldpath.Set
allocator value.Allocator
shouldExtract bool
}
// removeItemsWithSchema will walk the given value and look for items from the toRemove set.
// Depending on whether shouldExtract is set true or false, it will return a modified version
// of the input value with either:
// 1. only the items in the toRemove set (when shouldExtract is true) or
// 2. the items from the toRemove set removed from the value (when shouldExtract is false).
func removeItemsWithSchema(val value.Value, toRemove *fieldpath.Set, schema *schema.Schema, typeRef schema.TypeRef, shouldExtract bool) value.Value {
w := &removingWalker{
value: val,
schema: schema,
toRemove: toRemove,
allocator: value.NewFreelistAllocator(),
shouldExtract: shouldExtract,
}
resolveSchema(schema, typeRef, val, w)
return value.NewValueInterface(w.out)
}
func (w *removingWalker) doScalar(t *schema.Scalar) ValidationErrors {
w.out = w.value.Unstructured()
return nil
}
func (w *removingWalker) doList(t *schema.List) (errs ValidationErrors) {
if !w.value.IsList() {
return nil
}
l := w.value.AsListUsing(w.allocator)
defer w.allocator.Free(l)
// If list is null or empty just return
if l == nil || l.Length() == 0 {
return nil
}
// atomic lists should return everything in the case of extract
// and nothing in the case of remove (!w.shouldExtract)
if t.ElementRelationship == schema.Atomic {
if w.shouldExtract {
w.out = w.value.Unstructured()
}
return nil
}
var newItems []interface{}
iter := l.RangeUsing(w.allocator)
defer w.allocator.Free(iter)
for iter.Next() {
_, item := iter.Item()
// Ignore error because we have already validated this list
pe, _ := listItemToPathElement(w.allocator, w.schema, t, item)
path, _ := fieldpath.MakePath(pe)
// save items on the path when we shouldExtract
// but ignore them when we are removing (i.e. !w.shouldExtract)
if w.toRemove.Has(path) {
if w.shouldExtract {
newItems = append(newItems, removeItemsWithSchema(item, w.toRemove, w.schema, t.ElementType, w.shouldExtract).Unstructured())
} else {
continue
}
}
if subset := w.toRemove.WithPrefix(pe); !subset.Empty() {
item = removeItemsWithSchema(item, subset, w.schema, t.ElementType, w.shouldExtract)
} else {
// don't save items not on the path when we shouldExtract.
if w.shouldExtract {
continue
}
}
newItems = append(newItems, item.Unstructured())
}
if len(newItems) > 0 {
w.out = newItems
}
return nil
}
func (w *removingWalker) doMap(t *schema.Map) ValidationErrors {
if !w.value.IsMap() {
return nil
}
m := w.value.AsMapUsing(w.allocator)
if m != nil {
defer w.allocator.Free(m)
}
// If map is null or empty just return
if m == nil || m.Empty() {
return nil
}
// atomic maps should return everything in the case of extract
// and nothing in the case of remove (!w.shouldExtract)
if t.ElementRelationship == schema.Atomic {
if w.shouldExtract {
w.out = w.value.Unstructured()
}
return nil
}
fieldTypes := map[string]schema.TypeRef{}
for _, structField := range t.Fields {
fieldTypes[structField.Name] = structField.Type
}
newMap := map[string]interface{}{}
m.Iterate(func(k string, val value.Value) bool {
pe := fieldpath.PathElement{FieldName: &k}
path, _ := fieldpath.MakePath(pe)
fieldType := t.ElementType
if ft, ok := fieldTypes[k]; ok {
fieldType = ft
}
// save values on the path when we shouldExtract
// but ignore them when we are removing (i.e. !w.shouldExtract)
if w.toRemove.Has(path) {
if w.shouldExtract {
newMap[k] = removeItemsWithSchema(val, w.toRemove, w.schema, fieldType, w.shouldExtract).Unstructured()
}
return true
}
if subset := w.toRemove.WithPrefix(pe); !subset.Empty() {
val = removeItemsWithSchema(val, subset, w.schema, fieldType, w.shouldExtract)
} else {
// don't save values not on the path when we shouldExtract.
if w.shouldExtract {
return true
}
}
newMap[k] = val.Unstructured()
return true
})
if len(newMap) > 0 {
w.out = newMap
}
return nil
}

View File

@ -0,0 +1,190 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package typed
import (
"sync"
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
"sigs.k8s.io/structured-merge-diff/v4/schema"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
var tPool = sync.Pool{
New: func() interface{} { return &toFieldSetWalker{} },
}
func (tv TypedValue) toFieldSetWalker() *toFieldSetWalker {
v := tPool.Get().(*toFieldSetWalker)
v.value = tv.value
v.schema = tv.schema
v.typeRef = tv.typeRef
v.set = &fieldpath.Set{}
v.allocator = value.NewFreelistAllocator()
return v
}
func (v *toFieldSetWalker) finished() {
v.schema = nil
v.typeRef = schema.TypeRef{}
v.path = nil
v.set = nil
tPool.Put(v)
}
type toFieldSetWalker struct {
value value.Value
schema *schema.Schema
typeRef schema.TypeRef
set *fieldpath.Set
path fieldpath.Path
// Allocate only as many walkers as needed for the depth by storing them here.
spareWalkers *[]*toFieldSetWalker
allocator value.Allocator
}
func (v *toFieldSetWalker) prepareDescent(pe fieldpath.PathElement, tr schema.TypeRef) *toFieldSetWalker {
if v.spareWalkers == nil {
// first descent.
v.spareWalkers = &[]*toFieldSetWalker{}
}
var v2 *toFieldSetWalker
if n := len(*v.spareWalkers); n > 0 {
v2, *v.spareWalkers = (*v.spareWalkers)[n-1], (*v.spareWalkers)[:n-1]
} else {
v2 = &toFieldSetWalker{}
}
*v2 = *v
v2.typeRef = tr
v2.path = append(v2.path, pe)
return v2
}
func (v *toFieldSetWalker) finishDescent(v2 *toFieldSetWalker) {
// if the descent caused a realloc, ensure that we reuse the buffer
// for the next sibling.
v.path = v2.path[:len(v2.path)-1]
*v.spareWalkers = append(*v.spareWalkers, v2)
}
func (v *toFieldSetWalker) toFieldSet() ValidationErrors {
return resolveSchema(v.schema, v.typeRef, v.value, v)
}
func (v *toFieldSetWalker) doScalar(t *schema.Scalar) ValidationErrors {
v.set.Insert(v.path)
return nil
}
func (v *toFieldSetWalker) visitListItems(t *schema.List, list value.List) (errs ValidationErrors) {
// Keeps track of the PEs we've seen
seen := fieldpath.MakePathElementSet(list.Length())
	// Keeps track of the PEs we've counted as duplicates
duplicates := fieldpath.MakePathElementSet(list.Length())
for i := 0; i < list.Length(); i++ {
child := list.At(i)
pe, _ := listItemToPathElement(v.allocator, v.schema, t, child)
if seen.Has(pe) {
if duplicates.Has(pe) {
// do nothing
} else {
v.set.Insert(append(v.path, pe))
duplicates.Insert(pe)
}
} else {
seen.Insert(pe)
}
}
for i := 0; i < list.Length(); i++ {
child := list.At(i)
pe, _ := listItemToPathElement(v.allocator, v.schema, t, child)
if duplicates.Has(pe) {
continue
}
v2 := v.prepareDescent(pe, t.ElementType)
v2.value = child
errs = append(errs, v2.toFieldSet()...)
v2.set.Insert(v2.path)
v.finishDescent(v2)
}
return errs
}
func (v *toFieldSetWalker) doList(t *schema.List) (errs ValidationErrors) {
list, _ := listValue(v.allocator, v.value)
if list != nil {
defer v.allocator.Free(list)
}
if t.ElementRelationship == schema.Atomic {
v.set.Insert(v.path)
return nil
}
if list == nil {
return nil
}
errs = v.visitListItems(t, list)
return errs
}
func (v *toFieldSetWalker) visitMapItems(t *schema.Map, m value.Map) (errs ValidationErrors) {
m.Iterate(func(key string, val value.Value) bool {
pe := fieldpath.PathElement{FieldName: &key}
tr := t.ElementType
if sf, ok := t.FindField(key); ok {
tr = sf.Type
}
v2 := v.prepareDescent(pe, tr)
v2.value = val
errs = append(errs, v2.toFieldSet()...)
if val.IsNull() || (val.IsMap() && val.AsMap().Length() == 0) {
v2.set.Insert(v2.path)
} else if _, ok := t.FindField(key); !ok {
v2.set.Insert(v2.path)
}
v.finishDescent(v2)
return true
})
return errs
}
func (v *toFieldSetWalker) doMap(t *schema.Map) (errs ValidationErrors) {
m, _ := mapValue(v.allocator, v.value)
if m != nil {
defer v.allocator.Free(m)
}
if t.ElementRelationship == schema.Atomic {
v.set.Insert(v.path)
return nil
}
if m == nil {
return nil
}
errs = v.visitMapItems(t, m)
return errs
}

View File

@ -0,0 +1,294 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package typed
import (
"sync"
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
"sigs.k8s.io/structured-merge-diff/v4/schema"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
// ValidationOptions is the list of all the options available when running the validation.
type ValidationOptions int
const (
	// AllowDuplicates means that sets and associative lists can have duplicate items.
AllowDuplicates ValidationOptions = iota
)
// extractItemsOptions is the options available when extracting items.
type extractItemsOptions struct {
appendKeyFields bool
}
type ExtractItemsOption func(*extractItemsOptions)
// WithAppendKeyFields configures ExtractItems to include key fields.
// It is exported for use in configuring ExtractItems.
func WithAppendKeyFields() ExtractItemsOption {
return func(opts *extractItemsOptions) {
opts.appendKeyFields = true
}
}
// AsTyped accepts a value and a type and returns a TypedValue. 'v' must have
// type 'typeRef' in the schema. An error is returned if v doesn't conform
// to the schema.
func AsTyped(v value.Value, s *schema.Schema, typeRef schema.TypeRef, opts ...ValidationOptions) (*TypedValue, error) {
tv := &TypedValue{
value: v,
typeRef: typeRef,
schema: s,
}
if err := tv.Validate(opts...); err != nil {
return nil, err
}
return tv, nil
}
// AsTypedUnvalidated is just like AsTyped, but doesn't validate that the type
// conforms to the schema, for cases where that has already been checked or
// where you're going to call a method that validates as a side-effect (like
// ToFieldSet).
//
// Deprecated: This function was initially created because validation
// was expensive. Now that this has been solved, objects should always
// be created as validated, using `AsTyped`.
func AsTypedUnvalidated(v value.Value, s *schema.Schema, typeRef schema.TypeRef) *TypedValue {
tv := &TypedValue{
value: v,
typeRef: typeRef,
schema: s,
}
return tv
}
// TypedValue is a value of some specific type.
type TypedValue struct {
value value.Value
typeRef schema.TypeRef
schema *schema.Schema
}
// TypeRef is the type of the value.
func (tv TypedValue) TypeRef() schema.TypeRef {
return tv.typeRef
}
// AsValue removes the type from the TypedValue and only keeps the value.
func (tv TypedValue) AsValue() value.Value {
return tv.value
}
// Schema gets the schema from the TypedValue.
func (tv TypedValue) Schema() *schema.Schema {
return tv.schema
}
// Validate returns an error with a list of every spec violation.
func (tv TypedValue) Validate(opts ...ValidationOptions) error {
w := tv.walker()
for _, opt := range opts {
switch opt {
case AllowDuplicates:
w.allowDuplicates = true
}
}
defer w.finished()
if errs := w.validate(nil); len(errs) != 0 {
return errs
}
return nil
}
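// exampleAsTyped is an illustrative sketch added for this commit review, not
// part of the upstream file; the name is hypothetical. Passing AllowDuplicates
// makes validation tolerate repeated keys in associative lists and sets.
func exampleAsTyped(v value.Value, s *schema.Schema, tr schema.TypeRef) (*TypedValue, error) {
	return AsTyped(v, s, tr, AllowDuplicates)
}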
// ToFieldSet creates a set containing every leaf field and item mentioned, or
// validation errors, if any were encountered.
func (tv TypedValue) ToFieldSet() (*fieldpath.Set, error) {
w := tv.toFieldSetWalker()
defer w.finished()
if errs := w.toFieldSet(); len(errs) != 0 {
return nil, errs
}
return w.set, nil
}
// Merge returns the result of merging tv and pso ("partially specified
// object") together. Of note:
// - No fields can be removed by this operation.
// - If both tv and pso specify a given leaf field, the result will keep pso's
// value.
// - Container typed elements will have their items ordered:
// 1. like tv, if pso doesn't change anything in the container
// 2. like pso, if pso does change something in the container.
//
// tv and pso must both be of the same type (their Schema and TypeRef must
// match), or an error will be returned. Validation errors will be returned if
// the objects don't conform to the schema.
func (tv TypedValue) Merge(pso *TypedValue) (*TypedValue, error) {
return merge(&tv, pso, ruleKeepRHS, nil)
}
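// exampleMerge is an illustrative sketch added for this commit review, not
// part of the upstream file; the name and object contents are hypothetical.
// Merging keeps rhs values for leaf fields set on both sides and removes
// nothing.
func exampleMerge() (*TypedValue, error) {
	lhs, err := DeducedParseableType.FromUnstructured(
		map[string]interface{}{"spec": map[string]interface{}{"a": "1"}})
	if err != nil {
		return nil, err
	}
	rhs, err := DeducedParseableType.FromUnstructured(
		map[string]interface{}{"spec": map[string]interface{}{"b": "2"}})
	if err != nil {
		return nil, err
	}
	return lhs.Merge(rhs) // spec now holds both "a" and "b"
}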
var cmpwPool = sync.Pool{
New: func() interface{} { return &compareWalker{} },
}
// Compare compares the two objects. See the comments on the `Comparison`
// struct for details on the return value.
//
// tv and rhs must both be of the same type (their Schema and TypeRef must
// match), or an error will be returned. Validation errors will be returned if
// the objects don't conform to the schema.
func (tv TypedValue) Compare(rhs *TypedValue) (c *Comparison, err error) {
lhs := tv
if lhs.schema != rhs.schema {
return nil, errorf("expected objects with types from the same schema")
}
if !lhs.typeRef.Equals(&rhs.typeRef) {
return nil, errorf("expected objects of the same type, but got %v and %v", lhs.typeRef, rhs.typeRef)
}
cmpw := cmpwPool.Get().(*compareWalker)
defer func() {
cmpw.lhs = nil
cmpw.rhs = nil
cmpw.schema = nil
cmpw.typeRef = schema.TypeRef{}
cmpw.comparison = nil
cmpw.inLeaf = false
cmpwPool.Put(cmpw)
}()
cmpw.lhs = lhs.value
cmpw.rhs = rhs.value
cmpw.schema = lhs.schema
cmpw.typeRef = lhs.typeRef
cmpw.comparison = &Comparison{
Removed: fieldpath.NewSet(),
Modified: fieldpath.NewSet(),
Added: fieldpath.NewSet(),
}
if cmpw.allocator == nil {
cmpw.allocator = value.NewFreelistAllocator()
}
errs := cmpw.compare(nil)
if len(errs) > 0 {
return nil, errs
}
return cmpw.comparison, nil
}
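// exampleCompare is an illustrative sketch added for this commit review, not
// part of the upstream file; the name is hypothetical. Compare reports which
// paths were added, removed, or modified between two versions of an object.
func exampleCompare(prev, next *TypedValue) (*fieldpath.Set, error) {
	c, err := prev.Compare(next)
	if err != nil {
		return nil, err
	}
	return c.Modified, nil // paths whose values differ between prev and next
}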
// RemoveItems removes each provided list or map item from the value.
func (tv TypedValue) RemoveItems(items *fieldpath.Set) *TypedValue {
tv.value = removeItemsWithSchema(tv.value, items, tv.schema, tv.typeRef, false)
return &tv
}
// ExtractItems returns a value with only the provided list or map items extracted from the value.
func (tv TypedValue) ExtractItems(items *fieldpath.Set, opts ...ExtractItemsOption) *TypedValue {
options := &extractItemsOptions{}
for _, opt := range opts {
opt(options)
}
if options.appendKeyFields {
tvPathSet, err := tv.ToFieldSet()
if err == nil {
keyFieldPathSet := fieldpath.NewSet()
items.Iterate(func(path fieldpath.Path) {
if !tvPathSet.Has(path) {
return
}
for i, pe := range path {
if pe.Key == nil {
continue
}
for _, keyField := range *pe.Key {
keyName := keyField.Name
// Create a new slice with the same elements as path[:i+1], but set its capacity to len(path[:i+1]).
// This ensures that appending to keyFieldPath creates a new underlying array, avoiding accidental
// modification of the original slice (path).
keyFieldPath := append(path[:i+1:i+1], fieldpath.PathElement{FieldName: &keyName})
keyFieldPathSet.Insert(keyFieldPath)
}
}
})
items = items.Union(keyFieldPathSet)
}
}
tv.value = removeItemsWithSchema(tv.value, items, tv.schema, tv.typeRef, true)
return &tv
}
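// exampleExtract is an illustrative sketch added for this commit review, not
// part of the upstream file; the name is hypothetical. WithAppendKeyFields
// keeps associative-list key fields so extracted items remain addressable.
func exampleExtract(tv *TypedValue, owned *fieldpath.Set) *TypedValue {
	return tv.ExtractItems(owned, WithAppendKeyFields())
}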
func (tv TypedValue) Empty() *TypedValue {
tv.value = value.NewValueInterface(nil)
return &tv
}
var mwPool = sync.Pool{
New: func() interface{} { return &mergingWalker{} },
}
func merge(lhs, rhs *TypedValue, rule, postRule mergeRule) (*TypedValue, error) {
if lhs.schema != rhs.schema {
return nil, errorf("expected objects with types from the same schema")
}
if !lhs.typeRef.Equals(&rhs.typeRef) {
return nil, errorf("expected objects of the same type, but got %v and %v", lhs.typeRef, rhs.typeRef)
}
mw := mwPool.Get().(*mergingWalker)
defer func() {
mw.lhs = nil
mw.rhs = nil
mw.schema = nil
mw.typeRef = schema.TypeRef{}
mw.rule = nil
mw.postItemHook = nil
mw.out = nil
mw.inLeaf = false
mwPool.Put(mw)
}()
mw.lhs = lhs.value
mw.rhs = rhs.value
mw.schema = lhs.schema
mw.typeRef = lhs.typeRef
mw.rule = rule
mw.postItemHook = postRule
if mw.allocator == nil {
mw.allocator = value.NewFreelistAllocator()
}
errs := mw.merge(nil)
if len(errs) > 0 {
return nil, errs
}
out := &TypedValue{
schema: lhs.schema,
typeRef: lhs.typeRef,
}
if mw.out != nil {
out.value = value.NewValueInterface(*mw.out)
}
return out, nil
}

View File

@ -0,0 +1,205 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package typed
import (
"sync"
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
"sigs.k8s.io/structured-merge-diff/v4/schema"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
var vPool = sync.Pool{
New: func() interface{} { return &validatingObjectWalker{} },
}
func (tv TypedValue) walker() *validatingObjectWalker {
v := vPool.Get().(*validatingObjectWalker)
v.value = tv.value
v.schema = tv.schema
v.typeRef = tv.typeRef
v.allowDuplicates = false
if v.allocator == nil {
v.allocator = value.NewFreelistAllocator()
}
return v
}
func (v *validatingObjectWalker) finished() {
v.schema = nil
v.typeRef = schema.TypeRef{}
vPool.Put(v)
}
type validatingObjectWalker struct {
value value.Value
schema *schema.Schema
typeRef schema.TypeRef
// If set to true, duplicates will be allowed in
// associativeLists/sets.
allowDuplicates bool
// Allocate only as many walkers as needed for the depth by storing them here.
spareWalkers *[]*validatingObjectWalker
allocator value.Allocator
}
func (v *validatingObjectWalker) prepareDescent(tr schema.TypeRef) *validatingObjectWalker {
if v.spareWalkers == nil {
// first descent.
v.spareWalkers = &[]*validatingObjectWalker{}
}
var v2 *validatingObjectWalker
if n := len(*v.spareWalkers); n > 0 {
v2, *v.spareWalkers = (*v.spareWalkers)[n-1], (*v.spareWalkers)[:n-1]
} else {
v2 = &validatingObjectWalker{}
}
*v2 = *v
v2.typeRef = tr
return v2
}
func (v *validatingObjectWalker) finishDescent(v2 *validatingObjectWalker) {
// if the descent caused a realloc, ensure that we reuse the buffer
// for the next sibling.
*v.spareWalkers = append(*v.spareWalkers, v2)
}
func (v *validatingObjectWalker) validate(prefixFn func() string) ValidationErrors {
return resolveSchema(v.schema, v.typeRef, v.value, v).WithLazyPrefix(prefixFn)
}
func validateScalar(t *schema.Scalar, v value.Value, prefix string) (errs ValidationErrors) {
if v == nil {
return nil
}
if v.IsNull() {
return nil
}
switch *t {
case schema.Numeric:
if !v.IsFloat() && !v.IsInt() {
// TODO: should the schema separate int and float?
return errorf("%vexpected numeric (int or float), got %T", prefix, v.Unstructured())
}
case schema.String:
if !v.IsString() {
return errorf("%vexpected string, got %#v", prefix, v)
}
case schema.Boolean:
if !v.IsBool() {
return errorf("%vexpected boolean, got %v", prefix, v)
}
case schema.Untyped:
if !v.IsFloat() && !v.IsInt() && !v.IsString() && !v.IsBool() {
return errorf("%vexpected any scalar, got %v", prefix, v)
}
default:
return errorf("%vunexpected scalar type in schema: %v", prefix, *t)
}
return nil
}
func (v *validatingObjectWalker) doScalar(t *schema.Scalar) ValidationErrors {
if errs := validateScalar(t, v.value, ""); len(errs) > 0 {
return errs
}
return nil
}
func (v *validatingObjectWalker) visitListItems(t *schema.List, list value.List) (errs ValidationErrors) {
observedKeys := fieldpath.MakePathElementSet(list.Length())
for i := 0; i < list.Length(); i++ {
child := list.AtUsing(v.allocator, i)
defer v.allocator.Free(child)
var pe fieldpath.PathElement
if t.ElementRelationship != schema.Associative {
pe.Index = &i
} else {
var err error
pe, err = listItemToPathElement(v.allocator, v.schema, t, child)
if err != nil {
errs = append(errs, errorf("element %v: %v", i, err.Error())...)
// If we can't construct the path element, we can't
// even report errors deeper in the schema, so bail on
// this element.
return
}
if observedKeys.Has(pe) && !v.allowDuplicates {
errs = append(errs, errorf("duplicate entries for key %v", pe.String())...)
}
observedKeys.Insert(pe)
}
v2 := v.prepareDescent(t.ElementType)
v2.value = child
errs = append(errs, v2.validate(pe.String)...)
v.finishDescent(v2)
}
return errs
}
func (v *validatingObjectWalker) doList(t *schema.List) (errs ValidationErrors) {
list, err := listValue(v.allocator, v.value)
if err != nil {
return errorf(err.Error())
}
if list == nil {
return nil
}
defer v.allocator.Free(list)
errs = v.visitListItems(t, list)
return errs
}
func (v *validatingObjectWalker) visitMapItems(t *schema.Map, m value.Map) (errs ValidationErrors) {
m.IterateUsing(v.allocator, func(key string, val value.Value) bool {
pe := fieldpath.PathElement{FieldName: &key}
tr := t.ElementType
if sf, ok := t.FindField(key); ok {
tr = sf.Type
} else if (t.ElementType == schema.TypeRef{}) {
errs = append(errs, errorf("field not declared in schema").WithPrefix(pe.String())...)
return false
}
v2 := v.prepareDescent(tr)
v2.value = val
// Giving pe.String as a parameter actually increases the allocations.
errs = append(errs, v2.validate(func() string { return pe.String() })...)
v.finishDescent(v2)
return true
})
return errs
}
func (v *validatingObjectWalker) doMap(t *schema.Map) (errs ValidationErrors) {
m, err := mapValue(v.allocator, v.value)
if err != nil {
return errorf(err.Error())
}
if m == nil {
return nil
}
defer v.allocator.Free(m)
errs = v.visitMapItems(t, m)
return errs
}

View File

@ -0,0 +1,203 @@
/*
Copyright 2020 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
// Allocator provides a value object allocation strategy.
// Value objects can be allocated by passing an allocator to the "Using"
// receiver functions on the value interfaces, e.g. Map.ZipUsing(allocator, ...).
// Value objects returned from "Using" functions should be given back to the allocator
// once no longer needed by calling Allocator.Free(Value).
type Allocator interface {
// Free gives the allocator back any value objects returned by the "Using"
// receiver functions on the value interfaces.
// interface{} may be any of: Value, Map, List or Range.
Free(interface{})
// The unexported functions are for "Using" receiver functions of the value types
// to request what they need from the allocator.
allocValueUnstructured() *valueUnstructured
allocListUnstructuredRange() *listUnstructuredRange
allocValueReflect() *valueReflect
allocMapReflect() *mapReflect
allocStructReflect() *structReflect
allocListReflect() *listReflect
allocListReflectRange() *listReflectRange
}
// HeapAllocator simply allocates objects to the heap. It is the default
// allocator used by receiver functions on the value interfaces that do not accept
// an allocator and should be used whenever allocating objects that will not
// be given back to an allocator by calling Allocator.Free(Value).
var HeapAllocator = &heapAllocator{}
type heapAllocator struct{}
func (p *heapAllocator) allocValueUnstructured() *valueUnstructured {
return &valueUnstructured{}
}
func (p *heapAllocator) allocListUnstructuredRange() *listUnstructuredRange {
return &listUnstructuredRange{vv: &valueUnstructured{}}
}
func (p *heapAllocator) allocValueReflect() *valueReflect {
return &valueReflect{}
}
func (p *heapAllocator) allocStructReflect() *structReflect {
return &structReflect{}
}
func (p *heapAllocator) allocMapReflect() *mapReflect {
return &mapReflect{}
}
func (p *heapAllocator) allocListReflect() *listReflect {
return &listReflect{}
}
func (p *heapAllocator) allocListReflectRange() *listReflectRange {
return &listReflectRange{vr: &valueReflect{}}
}
func (p *heapAllocator) Free(_ interface{}) {}
// NewFreelistAllocator creates a freelist-based allocator.
// This allocator provides fast allocation and freeing of short-lived value objects.
//
// The freelists are bounded in size by freelistMaxSize. If more than this number of value objects is
// allocated at once, the excess will be returned to the heap for garbage collection when freed.
//
// This allocator is unsafe and must not be accessed concurrently by goroutines.
//
// This allocator works well for traversal of value data trees. Typical usage is to acquire
// a freelist at the beginning of the traversal and use it throughout
// for all temporary value access.
func NewFreelistAllocator() Allocator {
return &freelistAllocator{
valueUnstructured: &freelist{new: func() interface{} {
return &valueUnstructured{}
}},
listUnstructuredRange: &freelist{new: func() interface{} {
return &listUnstructuredRange{vv: &valueUnstructured{}}
}},
valueReflect: &freelist{new: func() interface{} {
return &valueReflect{}
}},
mapReflect: &freelist{new: func() interface{} {
return &mapReflect{}
}},
structReflect: &freelist{new: func() interface{} {
return &structReflect{}
}},
listReflect: &freelist{new: func() interface{} {
return &listReflect{}
}},
listReflectRange: &freelist{new: func() interface{} {
return &listReflectRange{vr: &valueReflect{}}
}},
}
}
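// exampleFreelist is an illustrative sketch added for this commit review, not
// part of the upstream file; the name is hypothetical. A freelist allocator is
// acquired once per traversal and every "Using" result is handed back to it.
func exampleFreelist(l List) int {
	a := NewFreelistAllocator() // must not be shared between goroutines
	r := l.RangeUsing(a)
	defer a.Free(r) // recycle the range object for later allocations
	n := 0
	for r.Next() {
		n++
	}
	return n
}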
// Bound memory usage of freelists. This prevents the processing of very large lists from leaking memory.
// This limit is large enough for endpoints objects containing 1000 IP address entries. Freed objects
// that don't fit into the freelist are orphaned on the heap to be garbage collected.
const freelistMaxSize = 1000
type freelistAllocator struct {
valueUnstructured *freelist
listUnstructuredRange *freelist
valueReflect *freelist
mapReflect *freelist
structReflect *freelist
listReflect *freelist
listReflectRange *freelist
}
type freelist struct {
list []interface{}
new func() interface{}
}
func (f *freelist) allocate() interface{} {
var w2 interface{}
if n := len(f.list); n > 0 {
w2, f.list = f.list[n-1], f.list[:n-1]
} else {
w2 = f.new()
}
return w2
}
func (f *freelist) free(v interface{}) {
if len(f.list) < freelistMaxSize {
f.list = append(f.list, v)
}
}
func (w *freelistAllocator) Free(value interface{}) {
switch v := value.(type) {
case *valueUnstructured:
v.Value = nil // don't hold references to unstructured objects
w.valueUnstructured.free(v)
case *listUnstructuredRange:
v.vv.Value = nil // don't hold references to unstructured objects
w.listUnstructuredRange.free(v)
case *valueReflect:
v.ParentMapKey = nil
v.ParentMap = nil
w.valueReflect.free(v)
case *mapReflect:
w.mapReflect.free(v)
case *structReflect:
w.structReflect.free(v)
case *listReflect:
w.listReflect.free(v)
case *listReflectRange:
v.vr.ParentMapKey = nil
v.vr.ParentMap = nil
w.listReflectRange.free(v)
}
}
func (w *freelistAllocator) allocValueUnstructured() *valueUnstructured {
return w.valueUnstructured.allocate().(*valueUnstructured)
}
func (w *freelistAllocator) allocListUnstructuredRange() *listUnstructuredRange {
return w.listUnstructuredRange.allocate().(*listUnstructuredRange)
}
func (w *freelistAllocator) allocValueReflect() *valueReflect {
return w.valueReflect.allocate().(*valueReflect)
}
func (w *freelistAllocator) allocStructReflect() *structReflect {
return w.structReflect.allocate().(*structReflect)
}
func (w *freelistAllocator) allocMapReflect() *mapReflect {
return w.mapReflect.allocate().(*mapReflect)
}
func (w *freelistAllocator) allocListReflect() *listReflect {
return w.listReflect.allocate().(*listReflect)
}
func (w *freelistAllocator) allocListReflectRange() *listReflectRange {
return w.listReflectRange.allocate().(*listReflectRange)
}

View File

@ -0,0 +1,21 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package value defines types for an in-memory representation of YAML or JSON
// objects, organized for convenient comparison with a schema (as defined by
// the sibling schema package). Functions for reading and writing the objects
// are also provided.
package value

View File

@ -0,0 +1,97 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
import (
"sort"
"strings"
)
// Field is an individual key-value pair.
type Field struct {
Name string
Value Value
}
// FieldList is a list of key-value pairs. Each field is expected to
// have a different name.
type FieldList []Field
// Sort sorts the field list by Name.
func (f FieldList) Sort() {
if len(f) < 2 {
return
}
if len(f) == 2 {
if f[1].Name < f[0].Name {
f[0], f[1] = f[1], f[0]
}
return
}
sort.SliceStable(f, func(i, j int) bool {
return f[i].Name < f[j].Name
})
}
// Less compares two lists lexically.
func (f FieldList) Less(rhs FieldList) bool {
return f.Compare(rhs) == -1
}
// Compare compares two lists lexically. The result will be 0 if f==rhs, -1
// if f < rhs, and +1 if f > rhs.
func (f FieldList) Compare(rhs FieldList) int {
i := 0
for {
if i >= len(f) && i >= len(rhs) {
// Maps are the same length and all items are equal.
return 0
}
if i >= len(f) {
// F is shorter.
return -1
}
if i >= len(rhs) {
// RHS is shorter.
return 1
}
if c := strings.Compare(f[i].Name, rhs[i].Name); c != 0 {
return c
}
if c := Compare(f[i].Value, rhs[i].Value); c != 0 {
return c
}
// The items are equal; continue.
i++
}
}
// Equals returns true if the two field lists are equal, false otherwise.
func (f FieldList) Equals(rhs FieldList) bool {
if len(f) != len(rhs) {
return false
}
for i := range f {
if f[i].Name != rhs[i].Name {
return false
}
if !Equals(f[i].Value, rhs[i].Value) {
return false
}
}
return true
}
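// exampleFieldList is an illustrative sketch added for this commit review, not
// part of the upstream file; the name is hypothetical. FieldLists are sorted
// by Name and compared lexically, name first, then value.
func exampleFieldList() int {
	a := FieldList{
		{Name: "b", Value: NewValueInterface(2)},
		{Name: "a", Value: NewValueInterface(1)},
	}
	a.Sort() // orders fields as "a", then "b"
	b := FieldList{
		{Name: "a", Value: NewValueInterface(1)},
		{Name: "b", Value: NewValueInterface(2)},
	}
	return a.Compare(b) // 0: same names and values
}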

View File

@ -0,0 +1,91 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
import (
"fmt"
"reflect"
"strings"
)
// TODO: This implements the same functionality as https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/runtime/converter.go#L236
// but is based on the highly efficient approach from https://golang.org/src/encoding/json/encode.go
func lookupJsonTags(f reflect.StructField) (name string, omit bool, inline bool, omitempty bool) {
tag := f.Tag.Get("json")
if tag == "-" {
return "", true, false, false
}
name, opts := parseTag(tag)
if name == "" {
name = f.Name
}
return name, false, opts.Contains("inline"), opts.Contains("omitempty")
}
func isZero(v reflect.Value) bool {
switch v.Kind() {
case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
return v.Len() == 0
case reflect.Bool:
return !v.Bool()
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return v.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
return v.Uint() == 0
case reflect.Float32, reflect.Float64:
return v.Float() == 0
case reflect.Interface, reflect.Ptr:
return v.IsNil()
case reflect.Chan, reflect.Func:
panic(fmt.Sprintf("unsupported type: %v", v.Type()))
}
return false
}
type tagOptions string
// parseTag splits a struct field's json tag into its name and
// comma-separated options.
func parseTag(tag string) (string, tagOptions) {
if idx := strings.Index(tag, ","); idx != -1 {
return tag[:idx], tagOptions(tag[idx+1:])
}
return tag, tagOptions("")
}
// Contains reports whether a comma-separated list of options
// contains the given option name. The name must be surrounded by a
// string boundary or commas.
func (o tagOptions) Contains(optionName string) bool {
if len(o) == 0 {
return false
}
s := string(o)
for s != "" {
var next string
i := strings.Index(s, ",")
if i >= 0 {
s, next = s[:i], s[i+1:]
}
if s == optionName {
return true
}
s = next
}
return false
}
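// exampleTagParse is an illustrative sketch added for this commit review, not
// part of the upstream file; the name is hypothetical. It shows how the
// helpers above split a json struct tag into a name and its options.
func exampleTagParse() (string, bool) {
	name, opts := parseTag("metadata,omitempty")
	return name, opts.Contains("omitempty") // "metadata", true
}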

View File

@ -0,0 +1,139 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
// List represents a list object.
type List interface {
	// Length returns how many items can be found in the list.
Length() int
	// At returns the item at the given position in the list. It will
// panic if the index is out of range.
At(int) Value
// AtUsing uses the provided allocator and returns the item at the given
	// position in the list. It will panic if the index is out of range.
// The returned Value should be given back to the Allocator when no longer needed
// by calling Allocator.Free(Value).
AtUsing(Allocator, int) Value
// Range returns a ListRange for iterating over the items in the list.
Range() ListRange
// RangeUsing uses the provided allocator and returns a ListRange for
// iterating over the items in the list.
// The returned Range should be given back to the Allocator when no longer needed
// by calling Allocator.Free(Value).
RangeUsing(Allocator) ListRange
	// Equals compares the two lists, and returns true if they are the same, false otherwise.
	// Implementations can use ListEquals as a general implementation for this method.
	Equals(List) bool
	// EqualsUsing uses the provided allocator and compares the two lists, and returns true if
	// they are the same, false otherwise. Implementations can use ListEqualsUsing as a general
	// implementation for this method.
EqualsUsing(Allocator, List) bool
}
// ListRange represents a single iteration across the items of a list.
type ListRange interface {
// Next increments to the next item in the range, if there is one, and returns true, or returns false if there are no more items.
Next() bool
	// Item returns the index and value of the current item in the range, or panics if there is no current item.
	// For efficiency, Item may reuse the values returned by previous Item calls. Callers should be careful to avoid holding
// pointers to the value returned by Item() that escape the iteration loop since they become invalid once either
// Item() or Allocator.Free() is called.
Item() (index int, value Value)
}
var EmptyRange = &emptyRange{}
type emptyRange struct{}
func (_ *emptyRange) Next() bool {
return false
}
func (_ *emptyRange) Item() (index int, value Value) {
panic("Item called on empty ListRange")
}
// ListEquals compares two lists lexically.
// WARN: This is a naive implementation, calling lhs.Equals(rhs) is typically the most efficient.
func ListEquals(lhs, rhs List) bool {
return ListEqualsUsing(HeapAllocator, lhs, rhs)
}
// ListEqualsUsing uses the provided allocator and compares two lists lexically.
// WARN: This is a naive implementation, calling lhs.EqualsUsing(allocator, rhs) is typically the most efficient.
func ListEqualsUsing(a Allocator, lhs, rhs List) bool {
if lhs.Length() != rhs.Length() {
return false
}
lhsRange := lhs.RangeUsing(a)
defer a.Free(lhsRange)
rhsRange := rhs.RangeUsing(a)
defer a.Free(rhsRange)
for lhsRange.Next() && rhsRange.Next() {
_, lv := lhsRange.Item()
_, rv := rhsRange.Item()
if !EqualsUsing(a, lv, rv) {
return false
}
}
return true
}
// ListLess compares two lists lexically.
func ListLess(lhs, rhs List) bool {
return ListCompare(lhs, rhs) == -1
}
// ListCompare compares two lists lexically. The result will be 0 if l==rhs, -1
// if l < rhs, and +1 if l > rhs.
func ListCompare(lhs, rhs List) int {
return ListCompareUsing(HeapAllocator, lhs, rhs)
}
// ListCompareUsing uses the provided allocator and compares two lists lexically. The result will be 0 if l==rhs, -1
// if l < rhs, and +1 if l > rhs.
func ListCompareUsing(a Allocator, lhs, rhs List) int {
lhsRange := lhs.RangeUsing(a)
defer a.Free(lhsRange)
rhsRange := rhs.RangeUsing(a)
defer a.Free(rhsRange)
for {
lhsOk := lhsRange.Next()
rhsOk := rhsRange.Next()
if !lhsOk && !rhsOk {
// Lists are the same length and all items are equal.
return 0
}
if !lhsOk {
// LHS is shorter.
return -1
}
if !rhsOk {
// RHS is shorter.
return 1
}
_, lv := lhsRange.Item()
_, rv := rhsRange.Item()
if c := CompareUsing(a, lv, rv); c != 0 {
return c
}
// The items are equal; continue.
}
}
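// exampleListCompare is an illustrative sketch added for this commit review,
// not part of the upstream file; the name is hypothetical. Lists compare
// item by item, with the shorter list ordered first on a tie.
func exampleListCompare() (bool, int) {
	lhs := NewValueInterface([]interface{}{"a", "b"}).AsList()
	rhs := NewValueInterface([]interface{}{"a", "c"}).AsList()
	return ListEquals(lhs, rhs), ListCompare(lhs, rhs) // false, -1 ("b" < "c")
}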

View File

@ -0,0 +1,98 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
import (
"reflect"
)
type listReflect struct {
Value reflect.Value
}
func (r listReflect) Length() int {
val := r.Value
return val.Len()
}
func (r listReflect) At(i int) Value {
val := r.Value
return mustWrapValueReflect(val.Index(i), nil, nil)
}
func (r listReflect) AtUsing(a Allocator, i int) Value {
val := r.Value
return a.allocValueReflect().mustReuse(val.Index(i), nil, nil, nil)
}
func (r listReflect) Unstructured() interface{} {
l := r.Length()
result := make([]interface{}, l)
for i := 0; i < l; i++ {
result[i] = r.At(i).Unstructured()
}
return result
}
func (r listReflect) Range() ListRange {
return r.RangeUsing(HeapAllocator)
}
func (r listReflect) RangeUsing(a Allocator) ListRange {
length := r.Value.Len()
if length == 0 {
return EmptyRange
}
rr := a.allocListReflectRange()
rr.list = r.Value
rr.i = -1
rr.entry = TypeReflectEntryOf(r.Value.Type().Elem())
return rr
}
func (r listReflect) Equals(other List) bool {
return r.EqualsUsing(HeapAllocator, other)
}
func (r listReflect) EqualsUsing(a Allocator, other List) bool {
if otherReflectList, ok := other.(*listReflect); ok {
return reflect.DeepEqual(r.Value.Interface(), otherReflectList.Value.Interface())
}
return ListEqualsUsing(a, &r, other)
}
type listReflectRange struct {
list reflect.Value
vr *valueReflect
i int
entry *TypeReflectCacheEntry
}
func (r *listReflectRange) Next() bool {
r.i += 1
return r.i < r.list.Len()
}
func (r *listReflectRange) Item() (index int, value Value) {
if r.i < 0 {
panic("Item() called before first calling Next()")
}
if r.i >= r.list.Len() {
panic("Item() called on ListRange with no more items")
}
v := r.list.Index(r.i)
return r.i, r.vr.mustReuse(v, r.entry, nil, nil)
}

View File

@ -0,0 +1,74 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
type listUnstructured []interface{}
func (l listUnstructured) Length() int {
return len(l)
}
func (l listUnstructured) At(i int) Value {
return NewValueInterface(l[i])
}
func (l listUnstructured) AtUsing(a Allocator, i int) Value {
return a.allocValueUnstructured().reuse(l[i])
}
func (l listUnstructured) Equals(other List) bool {
return l.EqualsUsing(HeapAllocator, other)
}
func (l listUnstructured) EqualsUsing(a Allocator, other List) bool {
return ListEqualsUsing(a, &l, other)
}
func (l listUnstructured) Range() ListRange {
return l.RangeUsing(HeapAllocator)
}
func (l listUnstructured) RangeUsing(a Allocator) ListRange {
if len(l) == 0 {
return EmptyRange
}
r := a.allocListUnstructuredRange()
r.list = l
r.i = -1
return r
}
type listUnstructuredRange struct {
list listUnstructured
vv *valueUnstructured
i int
}
func (r *listUnstructuredRange) Next() bool {
r.i += 1
return r.i < len(r.list)
}
func (r *listUnstructuredRange) Item() (index int, value Value) {
if r.i < 0 {
panic("Item() called before first calling Next()")
}
if r.i >= len(r.list) {
panic("Item() called on ListRange with no more items")
}
return r.i, r.vv.reuse(r.list[r.i])
}

View File

@ -0,0 +1,270 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
import (
"sort"
)
// Map represents a map or a Go structure.
type Map interface {
	// Set changes or sets the value of the given key.
Set(key string, val Value)
// Get returns the value for the given key, if present, or (nil, false) otherwise.
Get(key string) (Value, bool)
// GetUsing uses the provided allocator and returns the value for the given key,
// if present, or (nil, false) otherwise.
// The returned Value should be given back to the Allocator when no longer needed
// by calling Allocator.Free(Value).
GetUsing(a Allocator, key string) (Value, bool)
// Has returns true if the key is present, or false otherwise.
Has(key string) bool
// Delete removes the key from the map.
Delete(key string)
	// Equals compares the two maps, and returns true if they are the same, false otherwise.
	// Implementations can use MapEquals as a general implementation for this method.
	Equals(other Map) bool
	// EqualsUsing uses the provided allocator and compares the two maps, and returns true if
	// they are the same, false otherwise. Implementations can use MapEqualsUsing as a general
	// implementation for this method.
EqualsUsing(a Allocator, other Map) bool
// Iterate runs the given function for each key/value in the
// map. Returning false in the closure prematurely stops the
// iteration.
Iterate(func(key string, value Value) bool) bool
// IterateUsing uses the provided allocator and runs the given function for each key/value
// in the map. Returning false in the closure prematurely stops the iteration.
IterateUsing(Allocator, func(key string, value Value) bool) bool
// Length returns the number of items in the map.
Length() int
// Empty returns true if the map is empty.
Empty() bool
// Zip iterates over the entries of two maps together. If both maps contain a value for a given key, fn is called
// with the values from both maps, otherwise it is called with the value of the map that contains the key and nil
// for the map that does not contain the key. Returning false in the closure prematurely stops the iteration.
Zip(other Map, order MapTraverseOrder, fn func(key string, lhs, rhs Value) bool) bool
// ZipUsing uses the provided allocator and iterates over the entries of two maps together. If both maps
// contain a value for a given key, fn is called with the values from both maps, otherwise it is called with
// the value of the map that contains the key and nil for the map that does not contain the key. Returning
// false in the closure prematurely stops the iteration.
ZipUsing(a Allocator, other Map, order MapTraverseOrder, fn func(key string, lhs, rhs Value) bool) bool
}
// MapTraverseOrder defines the map traversal ordering available.
type MapTraverseOrder int
const (
// Unordered indicates that the map traversal has no ordering requirement.
Unordered = iota
// LexicalKeyOrder indicates that the map traversal is ordered by key, lexically.
LexicalKeyOrder
)
// MapZip iterates over the entries of two maps together. If both maps contain a value for a given key, fn is called
// with the values from both maps, otherwise it is called with the value of the map that contains the key and nil
// for the other map. Returning false in the closure prematurely stops the iteration.
func MapZip(lhs, rhs Map, order MapTraverseOrder, fn func(key string, lhs, rhs Value) bool) bool {
return MapZipUsing(HeapAllocator, lhs, rhs, order, fn)
}
// MapZipUsing uses the provided allocator and iterates over the entries of two maps together. If both maps
// contain a value for a given key, fn is called with the values from both maps, otherwise it is called with
// the value of the map that contains the key and nil for the other map. Returning false in the closure
// prematurely stops the iteration.
func MapZipUsing(a Allocator, lhs, rhs Map, order MapTraverseOrder, fn func(key string, lhs, rhs Value) bool) bool {
if lhs != nil {
return lhs.ZipUsing(a, rhs, order, fn)
}
if rhs != nil {
return rhs.ZipUsing(a, lhs, order, func(key string, rhs, lhs Value) bool { // arg positions of lhs and rhs deliberately swapped
return fn(key, lhs, rhs)
})
}
return true
}
// defaultMapZip provides a default implementation of Zip for implementations that do not need to provide
// their own optimized implementation.
func defaultMapZip(a Allocator, lhs, rhs Map, order MapTraverseOrder, fn func(key string, lhs, rhs Value) bool) bool {
switch order {
case Unordered:
return unorderedMapZip(a, lhs, rhs, fn)
case LexicalKeyOrder:
return lexicalKeyOrderedMapZip(a, lhs, rhs, fn)
default:
panic("Unsupported map order")
}
}
func unorderedMapZip(a Allocator, lhs, rhs Map, fn func(key string, lhs, rhs Value) bool) bool {
if (lhs == nil || lhs.Empty()) && (rhs == nil || rhs.Empty()) {
return true
}
if lhs != nil {
ok := lhs.IterateUsing(a, func(key string, lhsValue Value) bool {
var rhsValue Value
if rhs != nil {
if item, ok := rhs.GetUsing(a, key); ok {
rhsValue = item
defer a.Free(rhsValue)
}
}
return fn(key, lhsValue, rhsValue)
})
if !ok {
return false
}
}
if rhs != nil {
return rhs.IterateUsing(a, func(key string, rhsValue Value) bool {
if lhs == nil || !lhs.Has(key) {
return fn(key, nil, rhsValue)
}
return true
})
}
return true
}
func lexicalKeyOrderedMapZip(a Allocator, lhs, rhs Map, fn func(key string, lhs, rhs Value) bool) bool {
var lhsLength, rhsLength int
var orderedLength int // rough estimate of length of union of map keys
if lhs != nil {
lhsLength = lhs.Length()
orderedLength = lhsLength
}
if rhs != nil {
rhsLength = rhs.Length()
if rhsLength > orderedLength {
orderedLength = rhsLength
}
}
if lhsLength == 0 && rhsLength == 0 {
return true
}
ordered := make([]string, 0, orderedLength)
if lhs != nil {
lhs.IterateUsing(a, func(key string, _ Value) bool {
ordered = append(ordered, key)
return true
})
}
if rhs != nil {
rhs.IterateUsing(a, func(key string, _ Value) bool {
if lhs == nil || !lhs.Has(key) {
ordered = append(ordered, key)
}
return true
})
}
sort.Strings(ordered)
for _, key := range ordered {
var litem, ritem Value
if lhs != nil {
litem, _ = lhs.GetUsing(a, key)
}
if rhs != nil {
ritem, _ = rhs.GetUsing(a, key)
}
ok := fn(key, litem, ritem)
if litem != nil {
a.Free(litem)
}
if ritem != nil {
a.Free(ritem)
}
if !ok {
return false
}
}
return true
}
// MapLess compares two maps lexically.
func MapLess(lhs, rhs Map) bool {
return MapCompare(lhs, rhs) == -1
}
// MapCompare compares two maps lexically.
func MapCompare(lhs, rhs Map) int {
return MapCompareUsing(HeapAllocator, lhs, rhs)
}
// MapCompareUsing uses the provided allocator and compares two maps lexically.
func MapCompareUsing(a Allocator, lhs, rhs Map) int {
c := 0
var llength, rlength int
if lhs != nil {
llength = lhs.Length()
}
if rhs != nil {
rlength = rhs.Length()
}
if llength == 0 && rlength == 0 {
return 0
}
i := 0
MapZipUsing(a, lhs, rhs, LexicalKeyOrder, func(key string, lhs, rhs Value) bool {
switch {
case i == llength:
c = -1
case i == rlength:
c = 1
case lhs == nil:
c = 1
case rhs == nil:
c = -1
default:
c = CompareUsing(a, lhs, rhs)
}
i++
return c == 0
})
return c
}
// MapEquals returns true if lhs == rhs, false otherwise. This function
// acts on generic types and should not be used by callers, but can help
// implement Map.Equals.
// WARN: This is a naive implementation, calling lhs.Equals(rhs) is typically the most efficient.
func MapEquals(lhs, rhs Map) bool {
return MapEqualsUsing(HeapAllocator, lhs, rhs)
}
// MapEqualsUsing uses the provided allocator and returns true if lhs == rhs,
// false otherwise. This function acts on generic types and should not be used
// by callers, but can help implement Map.Equals.
// WARN: This is a naive implementation, calling lhs.EqualsUsing(allocator, rhs) is typically the most efficient.
func MapEqualsUsing(a Allocator, lhs, rhs Map) bool {
if lhs == nil && rhs == nil {
return true
}
if lhs == nil || rhs == nil {
return false
}
if lhs.Length() != rhs.Length() {
return false
}
return MapZipUsing(a, lhs, rhs, Unordered, func(key string, lhs, rhs Value) bool {
if lhs == nil || rhs == nil {
return false
}
return EqualsUsing(a, lhs, rhs)
})
}

View File

@ -0,0 +1,209 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
import (
"reflect"
)
type mapReflect struct {
valueReflect
}
func (r mapReflect) Length() int {
val := r.Value
return val.Len()
}
func (r mapReflect) Empty() bool {
val := r.Value
return val.Len() == 0
}
func (r mapReflect) Get(key string) (Value, bool) {
return r.GetUsing(HeapAllocator, key)
}
func (r mapReflect) GetUsing(a Allocator, key string) (Value, bool) {
k, v, ok := r.get(key)
if !ok {
return nil, false
}
return a.allocValueReflect().mustReuse(v, nil, &r.Value, &k), true
}
func (r mapReflect) get(k string) (key, value reflect.Value, ok bool) {
mapKey := r.toMapKey(k)
val := r.Value.MapIndex(mapKey)
return mapKey, val, val.IsValid() && val != reflect.Value{}
}
func (r mapReflect) Has(key string) bool {
var val reflect.Value
val = r.Value.MapIndex(r.toMapKey(key))
if !val.IsValid() {
return false
}
return val != reflect.Value{}
}
func (r mapReflect) Set(key string, val Value) {
r.Value.SetMapIndex(r.toMapKey(key), reflect.ValueOf(val.Unstructured()))
}
func (r mapReflect) Delete(key string) {
val := r.Value
val.SetMapIndex(r.toMapKey(key), reflect.Value{})
}
// TODO: Do we need to support types that implement json.Marshaler and are used as string keys?
func (r mapReflect) toMapKey(key string) reflect.Value {
val := r.Value
return reflect.ValueOf(key).Convert(val.Type().Key())
}
func (r mapReflect) Iterate(fn func(string, Value) bool) bool {
return r.IterateUsing(HeapAllocator, fn)
}
func (r mapReflect) IterateUsing(a Allocator, fn func(string, Value) bool) bool {
if r.Value.Len() == 0 {
return true
}
v := a.allocValueReflect()
defer a.Free(v)
return eachMapEntry(r.Value, func(e *TypeReflectCacheEntry, key reflect.Value, value reflect.Value) bool {
return fn(key.String(), v.mustReuse(value, e, &r.Value, &key))
})
}
func eachMapEntry(val reflect.Value, fn func(*TypeReflectCacheEntry, reflect.Value, reflect.Value) bool) bool {
iter := val.MapRange()
entry := TypeReflectEntryOf(val.Type().Elem())
for iter.Next() {
next := iter.Value()
if !next.IsValid() {
continue
}
if !fn(entry, iter.Key(), next) {
return false
}
}
return true
}
func (r mapReflect) Unstructured() interface{} {
result := make(map[string]interface{}, r.Length())
r.Iterate(func(s string, value Value) bool {
result[s] = value.Unstructured()
return true
})
return result
}
func (r mapReflect) Equals(m Map) bool {
return r.EqualsUsing(HeapAllocator, m)
}
func (r mapReflect) EqualsUsing(a Allocator, m Map) bool {
lhsLength := r.Length()
rhsLength := m.Length()
if lhsLength != rhsLength {
return false
}
if lhsLength == 0 {
return true
}
vr := a.allocValueReflect()
defer a.Free(vr)
entry := TypeReflectEntryOf(r.Value.Type().Elem())
return m.Iterate(func(key string, value Value) bool {
_, lhsVal, ok := r.get(key)
if !ok {
return false
}
return EqualsUsing(a, vr.mustReuse(lhsVal, entry, nil, nil), value)
})
}
func (r mapReflect) Zip(other Map, order MapTraverseOrder, fn func(key string, lhs, rhs Value) bool) bool {
return r.ZipUsing(HeapAllocator, other, order, fn)
}
func (r mapReflect) ZipUsing(a Allocator, other Map, order MapTraverseOrder, fn func(key string, lhs, rhs Value) bool) bool {
if otherMapReflect, ok := other.(*mapReflect); ok && order == Unordered {
return r.unorderedReflectZip(a, otherMapReflect, fn)
}
return defaultMapZip(a, &r, other, order, fn)
}
// unorderedReflectZip provides an optimized unordered zip for mapReflect types.
func (r mapReflect) unorderedReflectZip(a Allocator, other *mapReflect, fn func(key string, lhs, rhs Value) bool) bool {
if r.Empty() && (other == nil || other.Empty()) {
return true
}
lhs := r.Value
lhsEntry := TypeReflectEntryOf(lhs.Type().Elem())
// map lookup via reflection is expensive enough that it is better to keep track of visited keys
visited := map[string]struct{}{}
vlhs, vrhs := a.allocValueReflect(), a.allocValueReflect()
defer a.Free(vlhs)
defer a.Free(vrhs)
if other != nil {
rhs := other.Value
rhsEntry := TypeReflectEntryOf(rhs.Type().Elem())
iter := rhs.MapRange()
for iter.Next() {
key := iter.Key()
keyString := key.String()
next := iter.Value()
if !next.IsValid() {
continue
}
rhsVal := vrhs.mustReuse(next, rhsEntry, &rhs, &key)
visited[keyString] = struct{}{}
var lhsVal Value
if _, v, ok := r.get(keyString); ok {
lhsVal = vlhs.mustReuse(v, lhsEntry, &lhs, &key)
}
if !fn(keyString, lhsVal, rhsVal) {
return false
}
}
}
iter := lhs.MapRange()
for iter.Next() {
key := iter.Key()
if _, ok := visited[key.String()]; ok {
continue
}
next := iter.Value()
if !next.IsValid() {
continue
}
if !fn(key.String(), vlhs.mustReuse(next, lhsEntry, &lhs, &key), nil) {
return false
}
}
return true
}

View File

@ -0,0 +1,190 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
type mapUnstructuredInterface map[interface{}]interface{}
func (m mapUnstructuredInterface) Set(key string, val Value) {
m[key] = val.Unstructured()
}
func (m mapUnstructuredInterface) Get(key string) (Value, bool) {
return m.GetUsing(HeapAllocator, key)
}
func (m mapUnstructuredInterface) GetUsing(a Allocator, key string) (Value, bool) {
if v, ok := m[key]; !ok {
return nil, false
} else {
return a.allocValueUnstructured().reuse(v), true
}
}
func (m mapUnstructuredInterface) Has(key string) bool {
_, ok := m[key]
return ok
}
func (m mapUnstructuredInterface) Delete(key string) {
delete(m, key)
}
func (m mapUnstructuredInterface) Iterate(fn func(key string, value Value) bool) bool {
return m.IterateUsing(HeapAllocator, fn)
}
func (m mapUnstructuredInterface) IterateUsing(a Allocator, fn func(key string, value Value) bool) bool {
if len(m) == 0 {
return true
}
vv := a.allocValueUnstructured()
defer a.Free(vv)
for k, v := range m {
if ks, ok := k.(string); !ok {
continue
} else {
if !fn(ks, vv.reuse(v)) {
return false
}
}
}
return true
}
func (m mapUnstructuredInterface) Length() int {
return len(m)
}
func (m mapUnstructuredInterface) Empty() bool {
return len(m) == 0
}
func (m mapUnstructuredInterface) Equals(other Map) bool {
return m.EqualsUsing(HeapAllocator, other)
}
func (m mapUnstructuredInterface) EqualsUsing(a Allocator, other Map) bool {
lhsLength := m.Length()
rhsLength := other.Length()
if lhsLength != rhsLength {
return false
}
if lhsLength == 0 {
return true
}
vv := a.allocValueUnstructured()
defer a.Free(vv)
return other.IterateUsing(a, func(key string, value Value) bool {
lhsVal, ok := m[key]
if !ok {
return false
}
return EqualsUsing(a, vv.reuse(lhsVal), value)
})
}
func (m mapUnstructuredInterface) Zip(other Map, order MapTraverseOrder, fn func(key string, lhs, rhs Value) bool) bool {
return m.ZipUsing(HeapAllocator, other, order, fn)
}
func (m mapUnstructuredInterface) ZipUsing(a Allocator, other Map, order MapTraverseOrder, fn func(key string, lhs, rhs Value) bool) bool {
return defaultMapZip(a, m, other, order, fn)
}
type mapUnstructuredString map[string]interface{}
func (m mapUnstructuredString) Set(key string, val Value) {
m[key] = val.Unstructured()
}
func (m mapUnstructuredString) Get(key string) (Value, bool) {
return m.GetUsing(HeapAllocator, key)
}
func (m mapUnstructuredString) GetUsing(a Allocator, key string) (Value, bool) {
if v, ok := m[key]; !ok {
return nil, false
} else {
return a.allocValueUnstructured().reuse(v), true
}
}
func (m mapUnstructuredString) Has(key string) bool {
_, ok := m[key]
return ok
}
func (m mapUnstructuredString) Delete(key string) {
delete(m, key)
}
func (m mapUnstructuredString) Iterate(fn func(key string, value Value) bool) bool {
return m.IterateUsing(HeapAllocator, fn)
}
func (m mapUnstructuredString) IterateUsing(a Allocator, fn func(key string, value Value) bool) bool {
if len(m) == 0 {
return true
}
vv := a.allocValueUnstructured()
defer a.Free(vv)
for k, v := range m {
if !fn(k, vv.reuse(v)) {
return false
}
}
return true
}
func (m mapUnstructuredString) Length() int {
return len(m)
}
func (m mapUnstructuredString) Equals(other Map) bool {
return m.EqualsUsing(HeapAllocator, other)
}
func (m mapUnstructuredString) EqualsUsing(a Allocator, other Map) bool {
lhsLength := m.Length()
rhsLength := other.Length()
if lhsLength != rhsLength {
return false
}
if lhsLength == 0 {
return true
}
vv := a.allocValueUnstructured()
defer a.Free(vv)
return other.IterateUsing(a, func(key string, value Value) bool {
lhsVal, ok := m[key]
if !ok {
return false
}
return EqualsUsing(a, vv.reuse(lhsVal), value)
})
}
func (m mapUnstructuredString) Zip(other Map, order MapTraverseOrder, fn func(key string, lhs, rhs Value) bool) bool {
return m.ZipUsing(HeapAllocator, other, order, fn)
}
func (m mapUnstructuredString) ZipUsing(a Allocator, other Map, order MapTraverseOrder, fn func(key string, lhs, rhs Value) bool) bool {
return defaultMapZip(a, m, other, order, fn)
}
func (m mapUnstructuredString) Empty() bool {
return len(m) == 0
}

View File

@ -0,0 +1,484 @@
/*
Copyright 2020 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"reflect"
"sort"
"sync"
"sync/atomic"
)
// UnstructuredConverter defines how a type can be converted directly to unstructured.
// Types that implement json.Marshaler may also optionally implement this interface to provide a more
// direct and more efficient conversion. All types that choose to implement this interface must still
// implement this same conversion via json.Marshaler.
type UnstructuredConverter interface {
json.Marshaler // require that json.Marshaler is implemented
// ToUnstructured returns the unstructured representation.
ToUnstructured() interface{}
}
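// exampleQuantity is an illustrative sketch, not part of the upstream file: a
// minimal type satisfying UnstructuredConverter. It implements json.Marshaler
// as required, and ToUnstructured offers the faster direct path; both must
// agree on the representation.
type exampleQuantity string

func (q exampleQuantity) MarshalJSON() ([]byte, error) {
	return json.Marshal(string(q))
}

func (q exampleQuantity) ToUnstructured() interface{} {
	return string(q) // must match what MarshalJSON produces
}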
// TypeReflectCacheEntry keeps data gathered using reflection about how a type is converted to/from unstructured.
type TypeReflectCacheEntry struct {
isJsonMarshaler bool
ptrIsJsonMarshaler bool
isJsonUnmarshaler bool
ptrIsJsonUnmarshaler bool
isStringConvertable bool
ptrIsStringConvertable bool
structFields map[string]*FieldCacheEntry
orderedStructFields []*FieldCacheEntry
}
// FieldCacheEntry keeps data gathered using reflection about how the field of a struct is converted to/from
// unstructured.
type FieldCacheEntry struct {
// JsonName returns the name of the field according to the json tags on the struct field.
JsonName string
// isOmitEmpty is true if the field has the json 'omitempty' tag.
isOmitEmpty bool
// fieldPath is a list of field indices (see FieldByIndex) to lookup the value of
// a field in a reflect.Value struct. The field indices in the list form a path used
// to traverse through intermediary 'inline' fields.
fieldPath [][]int
fieldType reflect.Type
TypeEntry *TypeReflectCacheEntry
}
func (f *FieldCacheEntry) CanOmit(fieldVal reflect.Value) bool {
return f.isOmitEmpty && (safeIsNil(fieldVal) || isZero(fieldVal))
}
// GetFrom returns the field identified by this FieldCacheEntry from the provided struct.
func (f *FieldCacheEntry) GetFrom(structVal reflect.Value) reflect.Value {
// field might be nested within 'inline' structs
for _, elem := range f.fieldPath {
structVal = dereference(structVal).FieldByIndex(elem)
}
return structVal
}
var marshalerType = reflect.TypeOf(new(json.Marshaler)).Elem()
var unmarshalerType = reflect.TypeOf(new(json.Unmarshaler)).Elem()
var unstructuredConvertableType = reflect.TypeOf(new(UnstructuredConverter)).Elem()
var defaultReflectCache = newReflectCache()
// TypeReflectEntryOf returns the TypeReflectCacheEntry of the provided reflect.Type.
func TypeReflectEntryOf(t reflect.Type) *TypeReflectCacheEntry {
cm := defaultReflectCache.get()
if record, ok := cm[t]; ok {
return record
}
updates := reflectCacheMap{}
result := typeReflectEntryOf(cm, t, updates)
if len(updates) > 0 {
defaultReflectCache.update(updates)
}
return result
}
// typeReflectEntryOf returns all updates needed to add the provided reflect.Type, and the types its fields transitively
// depend on, to the cache.
func typeReflectEntryOf(cm reflectCacheMap, t reflect.Type, updates reflectCacheMap) *TypeReflectCacheEntry {
if record, ok := cm[t]; ok {
return record
}
if record, ok := updates[t]; ok {
return record
}
typeEntry := &TypeReflectCacheEntry{
isJsonMarshaler: t.Implements(marshalerType),
ptrIsJsonMarshaler: reflect.PtrTo(t).Implements(marshalerType),
isJsonUnmarshaler: reflect.PtrTo(t).Implements(unmarshalerType),
isStringConvertable: t.Implements(unstructuredConvertableType),
ptrIsStringConvertable: reflect.PtrTo(t).Implements(unstructuredConvertableType),
}
if t.Kind() == reflect.Struct {
fieldEntries := map[string]*FieldCacheEntry{}
buildStructCacheEntry(t, fieldEntries, nil)
typeEntry.structFields = fieldEntries
sortedByJsonName := make([]*FieldCacheEntry, len(fieldEntries))
i := 0
for _, entry := range fieldEntries {
sortedByJsonName[i] = entry
i++
}
sort.Slice(sortedByJsonName, func(i, j int) bool {
return sortedByJsonName[i].JsonName < sortedByJsonName[j].JsonName
})
typeEntry.orderedStructFields = sortedByJsonName
}
// cyclic type references are allowed, so we must add the typeEntry to the updates map before resolving
// the field.typeEntry references, or creating them if they are not already in the cache
updates[t] = typeEntry
for _, field := range typeEntry.structFields {
if field.TypeEntry == nil {
field.TypeEntry = typeReflectEntryOf(cm, field.fieldType, updates)
}
}
return typeEntry
}
func buildStructCacheEntry(t reflect.Type, infos map[string]*FieldCacheEntry, fieldPath [][]int) {
for i := 0; i < t.NumField(); i++ {
field := t.Field(i)
jsonName, omit, isInline, isOmitempty := lookupJsonTags(field)
if omit {
continue
}
if isInline {
e := field.Type
if field.Type.Kind() == reflect.Ptr {
e = field.Type.Elem()
}
if e.Kind() == reflect.Struct {
buildStructCacheEntry(e, infos, append(fieldPath, field.Index))
}
continue
}
info := &FieldCacheEntry{JsonName: jsonName, isOmitEmpty: isOmitempty, fieldPath: append(fieldPath, field.Index), fieldType: field.Type}
infos[jsonName] = info
}
}
// Fields returns a map of JSON field name to FieldCacheEntry for structs, or nil for non-structs.
func (e TypeReflectCacheEntry) Fields() map[string]*FieldCacheEntry {
return e.structFields
}
// OrderedFields returns a slice of the struct's FieldCacheEntry values, sorted by JSON field name, or nil for non-structs.
func (e TypeReflectCacheEntry) OrderedFields() []*FieldCacheEntry {
return e.orderedStructFields
}
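// exampleTagged is an illustrative sketch, not part of the upstream file: for
// this struct, Fields() has entries keyed "name", "size" and "owner" (the
// json tag names, with the inline struct's fields flattened in), and
// OrderedFields() returns the same entries sorted by JsonName. CanOmit is
// true for the "size" entry whenever Size holds the zero value.
type exampleTagged struct {
	Name string        `json:"name"`
	Size int           `json:"size,omitempty"`
	Meta exampleInline `json:",inline"`
}

type exampleInline struct {
	Owner string `json:"owner"`
}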
// CanConvertToUnstructured returns true if this TypeReflectCacheEntry can convert values of its type to unstructured.
func (e TypeReflectCacheEntry) CanConvertToUnstructured() bool {
return e.isJsonMarshaler || e.ptrIsJsonMarshaler || e.isStringConvertable || e.ptrIsStringConvertable
}
// ToUnstructured converts the provided value to unstructured and returns it.
func (e TypeReflectCacheEntry) ToUnstructured(sv reflect.Value) (interface{}, error) {
// This is based on https://github.com/kubernetes/kubernetes/blob/82c9e5c814eb7acc6cc0a090c057294d0667ad66/staging/src/k8s.io/apimachinery/pkg/runtime/converter.go#L505
// and is intended to replace it.
// Check if the object is a nil pointer.
if sv.Kind() == reflect.Ptr && sv.IsNil() {
// We're done - we don't need to store anything.
return nil, nil
}
// Check if the object has a custom string converter and use it if available, since it is much more efficient
// than round tripping through json.
if converter, ok := e.getUnstructuredConverter(sv); ok {
return converter.ToUnstructured(), nil
}
// Check if the object has a custom JSON marshaller/unmarshaller.
if marshaler, ok := e.getJsonMarshaler(sv); ok {
data, err := marshaler.MarshalJSON()
if err != nil {
return nil, err
}
switch {
case len(data) == 0:
return nil, fmt.Errorf("error decoding from json: empty value")
case bytes.Equal(data, nullBytes):
// We're done - we don't need to store anything.
return nil, nil
case bytes.Equal(data, trueBytes):
return true, nil
case bytes.Equal(data, falseBytes):
return false, nil
case data[0] == '"':
var result string
err := unmarshal(data, &result)
if err != nil {
return nil, fmt.Errorf("error decoding string from json: %v", err)
}
return result, nil
case data[0] == '{':
result := make(map[string]interface{})
err := unmarshal(data, &result)
if err != nil {
return nil, fmt.Errorf("error decoding object from json: %v", err)
}
return result, nil
case data[0] == '[':
result := make([]interface{}, 0)
err := unmarshal(data, &result)
if err != nil {
return nil, fmt.Errorf("error decoding array from json: %v", err)
}
return result, nil
default:
var (
resultInt int64
resultFloat float64
err error
)
if err = unmarshal(data, &resultInt); err == nil {
return resultInt, nil
} else if err = unmarshal(data, &resultFloat); err == nil {
return resultFloat, nil
} else {
return nil, fmt.Errorf("error decoding number from json: %v", err)
}
}
}
return nil, fmt.Errorf("provided type cannot be converted: %v", sv.Type())
}
// CanConvertFromUnstructured returns true if this TypeReflectCacheEntry can convert objects of the type from unstructured.
func (e TypeReflectCacheEntry) CanConvertFromUnstructured() bool {
return e.isJsonUnmarshaler
}
// FromUnstructured converts the provided source value from unstructured into the provided destination value.
func (e TypeReflectCacheEntry) FromUnstructured(sv, dv reflect.Value) error {
// TODO: this could be made much more efficient using direct conversions like
// UnstructuredConverter.ToUnstructured provides.
st := dv.Type()
data, err := json.Marshal(sv.Interface())
if err != nil {
return fmt.Errorf("error encoding %s to json: %v", st.String(), err)
}
if unmarshaler, ok := e.getJsonUnmarshaler(dv); ok {
return unmarshaler.UnmarshalJSON(data)
}
return fmt.Errorf("unable to unmarshal %v into %v", sv.Type(), dv.Type())
}
var (
nullBytes = []byte("null")
trueBytes = []byte("true")
falseBytes = []byte("false")
)
func (e TypeReflectCacheEntry) getJsonMarshaler(v reflect.Value) (json.Marshaler, bool) {
if e.isJsonMarshaler {
return v.Interface().(json.Marshaler), true
}
if e.ptrIsJsonMarshaler {
// Check pointer receivers if v is not a pointer
if v.Kind() != reflect.Ptr && v.CanAddr() {
v = v.Addr()
return v.Interface().(json.Marshaler), true
}
}
return nil, false
}
func (e TypeReflectCacheEntry) getJsonUnmarshaler(v reflect.Value) (json.Unmarshaler, bool) {
if !e.isJsonUnmarshaler {
return nil, false
}
return v.Addr().Interface().(json.Unmarshaler), true
}
func (e TypeReflectCacheEntry) getUnstructuredConverter(v reflect.Value) (UnstructuredConverter, bool) {
if e.isStringConvertable {
return v.Interface().(UnstructuredConverter), true
}
if e.ptrIsStringConvertable {
// Check pointer receivers if v is not a pointer
if v.CanAddr() {
v = v.Addr()
return v.Interface().(UnstructuredConverter), true
}
}
return nil, false
}
type typeReflectCache struct {
// use an atomic and copy-on-write since there are a fixed (typically very small) number of structs compiled into any
// go program using this cache
value atomic.Value
// mu is held by writers when performing load/modify/store operations on the cache, readers do not need to hold a
// read-lock since the atomic value is always read-only
mu sync.Mutex
}
func newReflectCache() *typeReflectCache {
cache := &typeReflectCache{}
cache.value.Store(make(reflectCacheMap))
return cache
}
type reflectCacheMap map[reflect.Type]*TypeReflectCacheEntry
// get returns the reflectCacheMap.
func (c *typeReflectCache) get() reflectCacheMap {
return c.value.Load().(reflectCacheMap)
}
// update merges the provided updates into the cache.
func (c *typeReflectCache) update(updates reflectCacheMap) {
c.mu.Lock()
defer c.mu.Unlock()
currentCacheMap := c.value.Load().(reflectCacheMap)
hasNewEntries := false
for t := range updates {
if _, ok := currentCacheMap[t]; !ok {
hasNewEntries = true
break
}
}
if !hasNewEntries {
// Bail if the updates have been set while waiting for lock acquisition.
// This is safe since setting entries is idempotent.
return
}
newCacheMap := make(reflectCacheMap, len(currentCacheMap)+len(updates))
for k, v := range currentCacheMap {
newCacheMap[k] = v
}
for t, update := range updates {
newCacheMap[t] = update
}
c.value.Store(newCacheMap)
}
// The json unmarshal below is from k8s.io/apimachinery/pkg/util/json
// to handle number conversions as expected by Kubernetes.
// limit recursive depth to prevent stack overflow errors
const maxDepth = 10000
// unmarshal unmarshals the given data
// If v is a *map[string]interface{}, numbers are converted to int64 or float64
func unmarshal(data []byte, v interface{}) error {
// Build a decoder from the given data
decoder := json.NewDecoder(bytes.NewBuffer(data))
// Preserve numbers, rather than casting to float64 automatically
decoder.UseNumber()
// Run the decode
if err := decoder.Decode(v); err != nil {
return err
}
next := decoder.InputOffset()
if _, err := decoder.Token(); !errors.Is(err, io.EOF) {
tail := bytes.TrimLeft(data[next:], " \t\r\n")
return fmt.Errorf("unexpected trailing data at offset %d", len(data)-len(tail))
}
// If the decode succeeds, post-process the object to convert json.Number objects to int64 or float64
switch v := v.(type) {
case *map[string]interface{}:
return convertMapNumbers(*v, 0)
case *[]interface{}:
return convertSliceNumbers(*v, 0)
case *interface{}:
return convertInterfaceNumbers(v, 0)
default:
return nil
}
}
func convertInterfaceNumbers(v *interface{}, depth int) error {
var err error
switch v2 := (*v).(type) {
case json.Number:
*v, err = convertNumber(v2)
case map[string]interface{}:
err = convertMapNumbers(v2, depth+1)
case []interface{}:
err = convertSliceNumbers(v2, depth+1)
}
return err
}
// convertMapNumbers traverses the map, converting any json.Number values to int64 or float64.
// values which are map[string]interface{} or []interface{} are recursively visited
func convertMapNumbers(m map[string]interface{}, depth int) error {
if depth > maxDepth {
return fmt.Errorf("exceeded max depth of %d", maxDepth)
}
var err error
for k, v := range m {
switch v := v.(type) {
case json.Number:
m[k], err = convertNumber(v)
case map[string]interface{}:
err = convertMapNumbers(v, depth+1)
case []interface{}:
err = convertSliceNumbers(v, depth+1)
}
if err != nil {
return err
}
}
return nil
}
// convertSliceNumbers traverses the slice, converting any json.Number values to int64 or float64.
// values which are map[string]interface{} or []interface{} are recursively visited
func convertSliceNumbers(s []interface{}, depth int) error {
if depth > maxDepth {
return fmt.Errorf("exceeded max depth of %d", maxDepth)
}
var err error
for i, v := range s {
switch v := v.(type) {
case json.Number:
s[i], err = convertNumber(v)
case map[string]interface{}:
err = convertMapNumbers(v, depth+1)
case []interface{}:
err = convertSliceNumbers(v, depth+1)
}
if err != nil {
return err
}
}
return nil
}
// convertNumber converts a json.Number to an int64 or float64, or returns an error
func convertNumber(n json.Number) (interface{}, error) {
// Attempt to convert to an int64 first
if i, err := n.Int64(); err == nil {
return i, nil
}
// Return a float64 (default json.Decode() behavior)
// An overflow will return an error
return n.Float64()
}
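// exampleNumbers is an illustrative sketch, not part of the upstream file:
// unlike plain json.Unmarshal, which decodes every JSON number as float64,
// unmarshal keeps integral values as int64 and only falls back to float64
// when the Int64 conversion fails, so 2^53+1 survives without rounding.
func exampleNumbers() (interface{}, interface{}, error) {
	var m map[string]interface{}
	if err := unmarshal([]byte(`{"i": 9007199254740993, "f": 0.5}`), &m); err != nil {
		return nil, nil, err
	}
	return m["i"], m["f"], nil // int64(9007199254740993), float64(0.5)
}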

View File

@ -0,0 +1,50 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
// FloatCompare compares floats. The result will be 0 if lhs==rhs, -1 if lhs <
// rhs, and +1 if lhs > rhs.
func FloatCompare(lhs, rhs float64) int {
if lhs > rhs {
return 1
} else if lhs < rhs {
return -1
}
return 0
}
// IntCompare compares integers. The result will be 0 if lhs==rhs, -1 if lhs <
// rhs, and +1 if lhs > rhs.
func IntCompare(lhs, rhs int64) int {
if lhs > rhs {
return 1
} else if lhs < rhs {
return -1
}
return 0
}
// BoolCompare compares booleans. The result will be 0 if lhs==rhs, -1 if lhs <
// rhs, and +1 if lhs > rhs.
func BoolCompare(lhs, rhs bool) int {
if lhs == rhs {
return 0
} else if !lhs {
return -1
}
return 1
}
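// exampleThreeWay is an illustrative sketch, not part of the upstream file:
// each helper yields the three-way result used by CompareUsing; note that
// false sorts before true.
func exampleThreeWay() (int, int, int) {
	return IntCompare(1, 2), FloatCompare(2.5, 2.5), BoolCompare(true, false) // -1, 0, 1
}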

View File

@ -0,0 +1,208 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
import (
"fmt"
"reflect"
)
type structReflect struct {
valueReflect
}
func (r structReflect) Length() int {
i := 0
eachStructField(r.Value, func(_ *TypeReflectCacheEntry, s string, value reflect.Value) bool {
i++
return true
})
return i
}
func (r structReflect) Empty() bool {
return eachStructField(r.Value, func(_ *TypeReflectCacheEntry, s string, value reflect.Value) bool {
return false // exit early if the struct is non-empty
})
}
func (r structReflect) Get(key string) (Value, bool) {
return r.GetUsing(HeapAllocator, key)
}
func (r structReflect) GetUsing(a Allocator, key string) (Value, bool) {
if val, ok := r.findJsonNameField(key); ok {
return a.allocValueReflect().mustReuse(val, nil, nil, nil), true
}
return nil, false
}
func (r structReflect) Has(key string) bool {
_, ok := r.findJsonNameField(key)
return ok
}
func (r structReflect) Set(key string, val Value) {
fieldEntry, ok := TypeReflectEntryOf(r.Value.Type()).Fields()[key]
if !ok {
panic(fmt.Sprintf("key %s may not be set on struct %T: field does not exist", key, r.Value.Interface()))
}
oldVal := fieldEntry.GetFrom(r.Value)
newVal := reflect.ValueOf(val.Unstructured())
r.update(fieldEntry, key, oldVal, newVal)
}
func (r structReflect) Delete(key string) {
fieldEntry, ok := TypeReflectEntryOf(r.Value.Type()).Fields()[key]
if !ok {
panic(fmt.Sprintf("key %s may not be deleted on struct %T: field does not exist", key, r.Value.Interface()))
}
oldVal := fieldEntry.GetFrom(r.Value)
if oldVal.Kind() != reflect.Ptr && !fieldEntry.isOmitEmpty {
panic(fmt.Sprintf("key %s may not be deleted on struct: %T: value is neither a pointer nor an omitempty field", key, r.Value.Interface()))
}
r.update(fieldEntry, key, oldVal, reflect.Zero(oldVal.Type()))
}
func (r structReflect) update(fieldEntry *FieldCacheEntry, key string, oldVal, newVal reflect.Value) {
if oldVal.CanSet() {
oldVal.Set(newVal)
return
}
// map items are not addressable, so if a struct is contained in a map, the only way to modify it is
// to write a replacement fieldEntry into the map.
if r.ParentMap != nil {
if r.ParentMapKey == nil {
panic("ParentMapKey must not be nil if ParentMap is not nil")
}
replacement := reflect.New(r.Value.Type()).Elem()
fieldEntry.GetFrom(replacement).Set(newVal)
r.ParentMap.SetMapIndex(*r.ParentMapKey, replacement)
return
}
// This should never happen since NewValueReflect ensures that the root object reflected on is a pointer and map
// item replacement is handled above.
panic(fmt.Sprintf("key %s may not be modified on struct: %T: struct is not settable", key, r.Value.Interface()))
}
func (r structReflect) Iterate(fn func(string, Value) bool) bool {
return r.IterateUsing(HeapAllocator, fn)
}
func (r structReflect) IterateUsing(a Allocator, fn func(string, Value) bool) bool {
vr := a.allocValueReflect()
defer a.Free(vr)
return eachStructField(r.Value, func(e *TypeReflectCacheEntry, s string, value reflect.Value) bool {
return fn(s, vr.mustReuse(value, e, nil, nil))
})
}
func eachStructField(structVal reflect.Value, fn func(*TypeReflectCacheEntry, string, reflect.Value) bool) bool {
for _, fieldCacheEntry := range TypeReflectEntryOf(structVal.Type()).OrderedFields() {
fieldVal := fieldCacheEntry.GetFrom(structVal)
if fieldCacheEntry.CanOmit(fieldVal) {
// omit it
continue
}
ok := fn(fieldCacheEntry.TypeEntry, fieldCacheEntry.JsonName, fieldVal)
if !ok {
return false
}
}
return true
}
func (r structReflect) Unstructured() interface{} {
// Use number of struct fields as a cheap way to rough estimate map size
result := make(map[string]interface{}, r.Value.NumField())
r.Iterate(func(s string, value Value) bool {
result[s] = value.Unstructured()
return true
})
return result
}
func (r structReflect) Equals(m Map) bool {
return r.EqualsUsing(HeapAllocator, m)
}
func (r structReflect) EqualsUsing(a Allocator, m Map) bool {
// MapEquals uses zip and is fairly efficient for structReflect
return MapEqualsUsing(a, &r, m)
}
func (r structReflect) findJsonNameFieldAndNotEmpty(jsonName string) (reflect.Value, bool) {
structCacheEntry, ok := TypeReflectEntryOf(r.Value.Type()).Fields()[jsonName]
if !ok {
return reflect.Value{}, false
}
fieldVal := structCacheEntry.GetFrom(r.Value)
return fieldVal, !structCacheEntry.CanOmit(fieldVal)
}
func (r structReflect) findJsonNameField(jsonName string) (val reflect.Value, ok bool) {
structCacheEntry, ok := TypeReflectEntryOf(r.Value.Type()).Fields()[jsonName]
if !ok {
return reflect.Value{}, false
}
fieldVal := structCacheEntry.GetFrom(r.Value)
return fieldVal, !structCacheEntry.CanOmit(fieldVal)
}
func (r structReflect) Zip(other Map, order MapTraverseOrder, fn func(key string, lhs, rhs Value) bool) bool {
return r.ZipUsing(HeapAllocator, other, order, fn)
}
func (r structReflect) ZipUsing(a Allocator, other Map, order MapTraverseOrder, fn func(key string, lhs, rhs Value) bool) bool {
if otherStruct, ok := other.(*structReflect); ok && r.Value.Type() == otherStruct.Value.Type() {
lhsvr, rhsvr := a.allocValueReflect(), a.allocValueReflect()
defer a.Free(lhsvr)
defer a.Free(rhsvr)
return r.structZip(otherStruct, lhsvr, rhsvr, fn)
}
return defaultMapZip(a, &r, other, order, fn)
}
// structZip provides an optimized zip for structReflect types. The zip is always lexical key ordered since there is
// no additional cost to ordering the zip for structured types.
func (r structReflect) structZip(other *structReflect, lhsvr, rhsvr *valueReflect, fn func(key string, lhs, rhs Value) bool) bool {
lhsVal := r.Value
rhsVal := other.Value
for _, fieldCacheEntry := range TypeReflectEntryOf(lhsVal.Type()).OrderedFields() {
lhsFieldVal := fieldCacheEntry.GetFrom(lhsVal)
rhsFieldVal := fieldCacheEntry.GetFrom(rhsVal)
lhsOmit := fieldCacheEntry.CanOmit(lhsFieldVal)
rhsOmit := fieldCacheEntry.CanOmit(rhsFieldVal)
if lhsOmit && rhsOmit {
continue
}
var lhsVal, rhsVal Value
if !lhsOmit {
lhsVal = lhsvr.mustReuse(lhsFieldVal, fieldCacheEntry.TypeEntry, nil, nil)
}
if !rhsOmit {
rhsVal = rhsvr.mustReuse(rhsFieldVal, fieldCacheEntry.TypeEntry, nil, nil)
}
if !fn(fieldCacheEntry.JsonName, lhsVal, rhsVal) {
return false
}
}
return true
}

View File

@ -0,0 +1,347 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
import (
"bytes"
"fmt"
"io"
"strings"
jsoniter "github.com/json-iterator/go"
yaml "sigs.k8s.io/yaml/goyaml.v2"
)
var (
readPool = jsoniter.NewIterator(jsoniter.ConfigCompatibleWithStandardLibrary).Pool()
writePool = jsoniter.NewStream(jsoniter.ConfigCompatibleWithStandardLibrary, nil, 1024).Pool()
)
// A Value corresponds to an 'atom' in the schema. It should return true
// for at least one of the IsXXX methods below, or the value is
// considered "invalid"
type Value interface {
// IsMap returns true if the Value is a Map, false otherwise.
IsMap() bool
// IsList returns true if the Value is a List, false otherwise.
IsList() bool
// IsBool returns true if the Value is a bool, false otherwise.
IsBool() bool
// IsInt returns true if the Value is a int64, false otherwise.
IsInt() bool
// IsFloat returns true if the Value is a float64, false
// otherwise.
IsFloat() bool
// IsString returns true if the Value is a string, false
// otherwise.
IsString() bool
// IsNull returns true if the Value is null, false otherwise.
IsNull() bool
// AsMap converts the Value into a Map (or panic if the type
// doesn't allow it).
AsMap() Map
// AsMapUsing uses the provided allocator and converts the Value
// into a Map (or panic if the type doesn't allow it).
AsMapUsing(Allocator) Map
// AsList converts the Value into a List (or panic if the type
// doesn't allow it).
AsList() List
// AsListUsing uses the provided allocator and converts the Value
// into a List (or panic if the type doesn't allow it).
AsListUsing(Allocator) List
// AsBool converts the Value into a bool (or panic if the type
// doesn't allow it).
AsBool() bool
// AsInt converts the Value into an int64 (or panic if the type
// doesn't allow it).
AsInt() int64
// AsFloat converts the Value into a float64 (or panic if the type
// doesn't allow it).
AsFloat() float64
// AsString converts the Value into a string (or panic if the type
// doesn't allow it).
AsString() string
// Unstructured converts the Value into an Unstructured interface{}.
Unstructured() interface{}
}
// FromJSON is a helper function for reading a JSON document.
func FromJSON(input []byte) (Value, error) {
return FromJSONFast(input)
}
// FromJSONFast is a helper function for reading a JSON document.
func FromJSONFast(input []byte) (Value, error) {
iter := readPool.BorrowIterator(input)
defer readPool.ReturnIterator(iter)
return ReadJSONIter(iter)
}
// ToJSON is a helper function for producing a JSON document.
func ToJSON(v Value) ([]byte, error) {
buf := bytes.Buffer{}
stream := writePool.BorrowStream(&buf)
defer writePool.ReturnStream(stream)
WriteJSONStream(v, stream)
b := stream.Buffer()
err := stream.Flush()
// Help jsoniter manage its buffers--without this, the next
// use of the stream is likely to require an allocation. Look
// at the jsoniter stream code to understand why. They were probably
// optimizing for folks using the buffer directly.
stream.SetBuffer(b[:0])
return buf.Bytes(), err
}
// ReadJSONIter reads a Value from a JSON iterator.
func ReadJSONIter(iter *jsoniter.Iterator) (Value, error) {
v := iter.Read()
if iter.Error != nil && iter.Error != io.EOF {
return nil, iter.Error
}
return NewValueInterface(v), nil
}
// WriteJSONStream writes a value into a JSON stream.
func WriteJSONStream(v Value, stream *jsoniter.Stream) {
stream.WriteVal(v.Unstructured())
}
// ToYAML marshals a value as YAML.
func ToYAML(v Value) ([]byte, error) {
return yaml.Marshal(v.Unstructured())
}
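// exampleRoundTrip is an illustrative sketch, not part of the upstream file:
// a document parsed with FromJSON can be written back with ToJSON. Map key
// order in the output follows the unstructured map, so the bytes are not
// guaranteed to match the input exactly.
func exampleRoundTrip(doc []byte) ([]byte, error) {
	v, err := FromJSON(doc)
	if err != nil {
		return nil, err
	}
	return ToJSON(v)
}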
// Equals returns true iff the two values are equal.
func Equals(lhs, rhs Value) bool {
return EqualsUsing(HeapAllocator, lhs, rhs)
}
// EqualsUsing uses the provided allocator and returns true iff the two values are equal.
func EqualsUsing(a Allocator, lhs, rhs Value) bool {
if lhs.IsFloat() || rhs.IsFloat() {
var lf float64
if lhs.IsFloat() {
lf = lhs.AsFloat()
} else if lhs.IsInt() {
lf = float64(lhs.AsInt())
} else {
return false
}
var rf float64
if rhs.IsFloat() {
rf = rhs.AsFloat()
} else if rhs.IsInt() {
rf = float64(rhs.AsInt())
} else {
return false
}
return lf == rf
}
if lhs.IsInt() {
if rhs.IsInt() {
return lhs.AsInt() == rhs.AsInt()
}
return false
} else if rhs.IsInt() {
return false
}
if lhs.IsString() {
if rhs.IsString() {
return lhs.AsString() == rhs.AsString()
}
return false
} else if rhs.IsString() {
return false
}
if lhs.IsBool() {
if rhs.IsBool() {
return lhs.AsBool() == rhs.AsBool()
}
return false
} else if rhs.IsBool() {
return false
}
if lhs.IsList() {
if rhs.IsList() {
lhsList := lhs.AsListUsing(a)
defer a.Free(lhsList)
rhsList := rhs.AsListUsing(a)
defer a.Free(rhsList)
return lhsList.EqualsUsing(a, rhsList)
}
return false
} else if rhs.IsList() {
return false
}
if lhs.IsMap() {
if rhs.IsMap() {
lhsList := lhs.AsMapUsing(a)
defer a.Free(lhsList)
rhsList := rhs.AsMapUsing(a)
defer a.Free(rhsList)
return lhsList.EqualsUsing(a, rhsList)
}
return false
} else if rhs.IsMap() {
return false
}
if lhs.IsNull() {
if rhs.IsNull() {
return true
}
return false
} else if rhs.IsNull() {
return false
}
// No field is set on either value; both are invalid and thus equal.
return true
}
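// exampleNumericEquals is an illustrative sketch, not part of the upstream
// file: ints and floats are compared numerically, so an int64 and a float64
// holding the same number are equal.
func exampleNumericEquals() bool {
	return Equals(NewValueInterface(int64(2)), NewValueInterface(float64(2))) // true
}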
// ToString returns a human-readable representation of the value.
func ToString(v Value) string {
if v.IsNull() {
return "null"
}
switch {
case v.IsFloat():
return fmt.Sprintf("%v", v.AsFloat())
case v.IsInt():
return fmt.Sprintf("%v", v.AsInt())
case v.IsString():
return fmt.Sprintf("%q", v.AsString())
case v.IsBool():
return fmt.Sprintf("%v", v.AsBool())
case v.IsList():
strs := []string{}
list := v.AsList()
for i := 0; i < list.Length(); i++ {
strs = append(strs, ToString(list.At(i)))
}
return "[" + strings.Join(strs, ",") + "]"
case v.IsMap():
strs := []string{}
v.AsMap().Iterate(func(k string, v Value) bool {
strs = append(strs, fmt.Sprintf("%v=%v", k, ToString(v)))
return true
})
return "{" + strings.Join(strs, ";") + "}"
}
// No field is set; the value is invalid.
return "{{undefined}}"
}
// Less provides a total ordering for Value (so that they can be sorted, even
// if they are of different types).
func Less(lhs, rhs Value) bool {
return Compare(lhs, rhs) == -1
}
// Compare provides a total ordering for Value (so that they can be
// sorted, even if they are of different types). The result will be 0 if
// lhs==rhs, -1 if lhs < rhs, and +1 if lhs > rhs.
func Compare(lhs, rhs Value) int {
return CompareUsing(HeapAllocator, lhs, rhs)
}
// CompareUsing uses the provided allocator and provides a total
// ordering for Value (so that they can be sorted, even if they
// are of different types). The result will be 0 if lhs==rhs, -1
// if lhs < rhs, and +1 if lhs > rhs.
func CompareUsing(a Allocator, lhs, rhs Value) int {
if lhs.IsFloat() {
if !rhs.IsFloat() {
// Extra: compare floats and ints numerically.
if rhs.IsInt() {
return FloatCompare(lhs.AsFloat(), float64(rhs.AsInt()))
}
return -1
}
return FloatCompare(lhs.AsFloat(), rhs.AsFloat())
} else if rhs.IsFloat() {
// Extra: compare floats and ints numerically.
if lhs.IsInt() {
return FloatCompare(float64(lhs.AsInt()), rhs.AsFloat())
}
return 1
}
if lhs.IsInt() {
if !rhs.IsInt() {
return -1
}
return IntCompare(lhs.AsInt(), rhs.AsInt())
} else if rhs.IsInt() {
return 1
}
if lhs.IsString() {
if !rhs.IsString() {
return -1
}
return strings.Compare(lhs.AsString(), rhs.AsString())
} else if rhs.IsString() {
return 1
}
if lhs.IsBool() {
if !rhs.IsBool() {
return -1
}
return BoolCompare(lhs.AsBool(), rhs.AsBool())
} else if rhs.IsBool() {
return 1
}
if lhs.IsList() {
if !rhs.IsList() {
return -1
}
lhsList := lhs.AsListUsing(a)
defer a.Free(lhsList)
rhsList := rhs.AsListUsing(a)
defer a.Free(rhsList)
return ListCompareUsing(a, lhsList, rhsList)
} else if rhs.IsList() {
return 1
}
if lhs.IsMap() {
if !rhs.IsMap() {
return -1
}
lhsMap := lhs.AsMapUsing(a)
defer a.Free(lhsMap)
rhsMap := rhs.AsMapUsing(a)
defer a.Free(rhsMap)
return MapCompareUsing(a, lhsMap, rhsMap)
} else if rhs.IsMap() {
return 1
}
if lhs.IsNull() {
if !rhs.IsNull() {
return -1
}
return 0
} else if rhs.IsNull() {
return 1
}
// Invalid Value-- nothing is set.
return 0
}
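// exampleMinValue is an illustrative sketch, not part of the upstream file:
// because Compare is a total ordering across types (numbers sort before
// strings, then bools, lists, maps, and finally null), even mixed-type
// slices have a well-defined minimum. Assumes vs is non-empty.
func exampleMinValue(vs []Value) Value {
	min := vs[0]
	for _, v := range vs[1:] {
		if Less(v, min) {
			min = v
		}
	}
	return min
}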

View File

@ -0,0 +1,294 @@
/*
Copyright 2019 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
import (
"encoding/base64"
"fmt"
"reflect"
)
// NewValueReflect creates a Value backed by an "interface{}" type,
// typically a structured object in the Kubernetes world that uses reflection to expose its data.
// The provided "interface{}" value must be a pointer so that the value can be modified via reflection.
// The provided "interface{}" may contain structs and types that are converted to Values
// by the json.Marshaler interface.
func NewValueReflect(value interface{}) (Value, error) {
if value == nil {
return NewValueInterface(nil), nil
}
v := reflect.ValueOf(value)
if v.Kind() != reflect.Ptr {
// The root value to reflect on must be a pointer so that map.Set() and map.Delete() operations are possible.
return nil, fmt.Errorf("value provided to NewValueReflect must be a pointer")
}
return wrapValueReflect(v, nil, nil)
}
// wrapValueReflect wraps the provided reflect.Value as a value. If parent in the data tree is a map, parentMap
// and parentMapKey must be provided so that the returned value may be set and deleted.
func wrapValueReflect(value reflect.Value, parentMap, parentMapKey *reflect.Value) (Value, error) {
val := HeapAllocator.allocValueReflect()
return val.reuse(value, nil, parentMap, parentMapKey)
}
// mustWrapValueReflect wraps the provided reflect.Value as a value, and panics if there is an error. If parent in the data
// tree is a map, parentMap and parentMapKey must be provided so that the returned value may be set and deleted.
func mustWrapValueReflect(value reflect.Value, parentMap, parentMapKey *reflect.Value) Value {
v, err := wrapValueReflect(value, parentMap, parentMapKey)
if err != nil {
panic(err)
}
return v
}
// the value interface doesn't care about the type for value.IsNull, so we can use a constant
var nilType = reflect.TypeOf(&struct{}{})
// reuse replaces the value of the valueReflect. If parent in the data tree is a map, parentMap and parentMapKey
// must be provided so that the returned value may be set and deleted.
func (r *valueReflect) reuse(value reflect.Value, cacheEntry *TypeReflectCacheEntry, parentMap, parentMapKey *reflect.Value) (Value, error) {
if cacheEntry == nil {
cacheEntry = TypeReflectEntryOf(value.Type())
}
if cacheEntry.CanConvertToUnstructured() {
u, err := cacheEntry.ToUnstructured(value)
if err != nil {
return nil, err
}
if u == nil {
value = reflect.Zero(nilType)
} else {
value = reflect.ValueOf(u)
}
}
r.Value = dereference(value)
r.ParentMap = parentMap
r.ParentMapKey = parentMapKey
r.kind = kind(r.Value)
return r, nil
}
// mustReuse replaces the value of the valueReflect and panics if there is an error. If parent in the data tree is a
// map, parentMap and parentMapKey must be provided so that the returned value may be set and deleted.
func (r *valueReflect) mustReuse(value reflect.Value, cacheEntry *TypeReflectCacheEntry, parentMap, parentMapKey *reflect.Value) Value {
v, err := r.reuse(value, cacheEntry, parentMap, parentMapKey)
if err != nil {
panic(err)
}
return v
}
func dereference(val reflect.Value) reflect.Value {
kind := val.Kind()
if (kind == reflect.Interface || kind == reflect.Ptr) && !safeIsNil(val) {
return val.Elem()
}
return val
}
type valueReflect struct {
ParentMap *reflect.Value
ParentMapKey *reflect.Value
Value reflect.Value
kind reflectType
}
func (r valueReflect) IsMap() bool {
return r.kind == mapType || r.kind == structMapType
}
func (r valueReflect) IsList() bool {
return r.kind == listType
}
func (r valueReflect) IsBool() bool {
return r.kind == boolType
}
func (r valueReflect) IsInt() bool {
return r.kind == intType || r.kind == uintType
}
func (r valueReflect) IsFloat() bool {
return r.kind == floatType
}
func (r valueReflect) IsString() bool {
return r.kind == stringType || r.kind == byteStringType
}
func (r valueReflect) IsNull() bool {
return r.kind == nullType
}
type reflectType = int
const (
mapType = iota
structMapType
listType
intType
uintType
floatType
stringType
byteStringType
boolType
nullType
)
func kind(v reflect.Value) reflectType {
typ := v.Type()
rk := typ.Kind()
switch rk {
case reflect.Map:
if v.IsNil() {
return nullType
}
return mapType
case reflect.Struct:
return structMapType
case reflect.Int, reflect.Int64, reflect.Int32, reflect.Int16, reflect.Int8:
return intType
case reflect.Uint, reflect.Uint32, reflect.Uint16, reflect.Uint8:
// Uint64 deliberately excluded, see valueUnstructured.Int.
return uintType
case reflect.Float64, reflect.Float32:
return floatType
case reflect.String:
return stringType
case reflect.Bool:
return boolType
case reflect.Slice:
if v.IsNil() {
return nullType
}
elemKind := typ.Elem().Kind()
if elemKind == reflect.Uint8 {
return byteStringType
}
return listType
case reflect.Chan, reflect.Func, reflect.Ptr, reflect.UnsafePointer, reflect.Interface:
if v.IsNil() {
return nullType
}
panic(fmt.Sprintf("unsupported type: %v", v.Type()))
default:
panic(fmt.Sprintf("unsupported type: %v", v.Type()))
}
}
// TODO find a cleaner way to avoid panics from reflect.IsNil()
func safeIsNil(v reflect.Value) bool {
k := v.Kind()
switch k {
case reflect.Chan, reflect.Func, reflect.Map, reflect.Ptr, reflect.UnsafePointer, reflect.Interface, reflect.Slice:
return v.IsNil()
}
return false
}
func (r valueReflect) AsMap() Map {
return r.AsMapUsing(HeapAllocator)
}
func (r valueReflect) AsMapUsing(a Allocator) Map {
switch r.kind {
case structMapType:
v := a.allocStructReflect()
v.valueReflect = r
return v
case mapType:
v := a.allocMapReflect()
v.valueReflect = r
return v
default:
panic("value is not a map or struct")
}
}
func (r valueReflect) AsList() List {
return r.AsListUsing(HeapAllocator)
}
func (r valueReflect) AsListUsing(a Allocator) List {
if r.IsList() {
v := a.allocListReflect()
v.Value = r.Value
return v
}
panic("value is not a list")
}
func (r valueReflect) AsBool() bool {
if r.IsBool() {
return r.Value.Bool()
}
panic("value is not a bool")
}
func (r valueReflect) AsInt() int64 {
if r.kind == intType {
return r.Value.Int()
}
if r.kind == uintType {
return int64(r.Value.Uint())
}
panic("value is not an int")
}
func (r valueReflect) AsFloat() float64 {
if r.IsFloat() {
return r.Value.Float()
}
panic("value is not a float")
}
func (r valueReflect) AsString() string {
switch r.kind {
case stringType:
return r.Value.String()
case byteStringType:
return base64.StdEncoding.EncodeToString(r.Value.Bytes())
}
panic("value is not a string")
}
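// exampleByteString is an illustrative sketch, not part of the upstream
// file: []byte values are classified as byteStringType and surface through
// AsString as their base64 encoding, matching encoding/json's handling of
// byte slices.
func exampleByteString() string {
	v := mustWrapValueReflect(reflect.ValueOf([]byte("hi")), nil, nil)
	return v.AsString() // "aGk=", the base64 encoding of "hi"
}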
func (r valueReflect) Unstructured() interface{} {
val := r.Value
switch {
case r.IsNull():
return nil
case val.Kind() == reflect.Struct:
return structReflect{r}.Unstructured()
case val.Kind() == reflect.Map:
return mapReflect{valueReflect: r}.Unstructured()
case r.IsList():
return listReflect{r.Value}.Unstructured()
case r.IsString():
return r.AsString()
case r.IsInt():
return r.AsInt()
case r.IsBool():
return r.AsBool()
case r.IsFloat():
return r.AsFloat()
default:
panic(fmt.Sprintf("value of type %s is not a supported by value reflector", val.Type()))
}
}

View File

@ -0,0 +1,178 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package value
import (
"fmt"
)
// NewValueInterface creates a Value backed by an "interface{}" type,
// typically an unstructured object in Kubernetes world.
// interface{} must be one of: map[string]interface{}, map[interface{}]interface{}, []interface{}, int types, float types,
// string or boolean. Nested interface{} must also be one of these types.
func NewValueInterface(v interface{}) Value {
return Value(HeapAllocator.allocValueUnstructured().reuse(v))
}
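// exampleUnstructured is an illustrative sketch, not part of the upstream
// file: data already shaped like decoded JSON is wrapped directly, with no
// reflection involved.
func exampleUnstructured() Value {
	return NewValueInterface(map[string]interface{}{
		"name":     "example",
		"replicas": int64(3),
	})
}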
type valueUnstructured struct {
Value interface{}
}
// reuse replaces the value of the valueUnstructured.
func (vi *valueUnstructured) reuse(value interface{}) Value {
vi.Value = value
return vi
}
func (v valueUnstructured) IsMap() bool {
if _, ok := v.Value.(map[string]interface{}); ok {
return true
}
if _, ok := v.Value.(map[interface{}]interface{}); ok {
return true
}
return false
}
func (v valueUnstructured) AsMap() Map {
return v.AsMapUsing(HeapAllocator)
}
func (v valueUnstructured) AsMapUsing(_ Allocator) Map {
if v.Value == nil {
panic("invalid nil")
}
switch t := v.Value.(type) {
case map[string]interface{}:
return mapUnstructuredString(t)
case map[interface{}]interface{}:
return mapUnstructuredInterface(t)
}
panic(fmt.Errorf("not a map: %#v", v))
}
func (v valueUnstructured) IsList() bool {
if v.Value == nil {
return false
}
_, ok := v.Value.([]interface{})
return ok
}
func (v valueUnstructured) AsList() List {
return v.AsListUsing(HeapAllocator)
}
func (v valueUnstructured) AsListUsing(_ Allocator) List {
return listUnstructured(v.Value.([]interface{}))
}
func (v valueUnstructured) IsFloat() bool {
if v.Value == nil {
return false
} else if _, ok := v.Value.(float64); ok {
return true
} else if _, ok := v.Value.(float32); ok {
return true
}
return false
}
func (v valueUnstructured) AsFloat() float64 {
if f, ok := v.Value.(float32); ok {
return float64(f)
}
return v.Value.(float64)
}
func (v valueUnstructured) IsInt() bool {
if v.Value == nil {
return false
} else if _, ok := v.Value.(int); ok {
return true
} else if _, ok := v.Value.(int8); ok {
return true
} else if _, ok := v.Value.(int16); ok {
return true
} else if _, ok := v.Value.(int32); ok {
return true
} else if _, ok := v.Value.(int64); ok {
return true
} else if _, ok := v.Value.(uint); ok {
return true
} else if _, ok := v.Value.(uint8); ok {
return true
} else if _, ok := v.Value.(uint16); ok {
return true
} else if _, ok := v.Value.(uint32); ok {
return true
}
return false
}
func (v valueUnstructured) AsInt() int64 {
if i, ok := v.Value.(int); ok {
return int64(i)
} else if i, ok := v.Value.(int8); ok {
return int64(i)
} else if i, ok := v.Value.(int16); ok {
return int64(i)
} else if i, ok := v.Value.(int32); ok {
return int64(i)
} else if i, ok := v.Value.(uint); ok {
return int64(i)
} else if i, ok := v.Value.(uint8); ok {
return int64(i)
} else if i, ok := v.Value.(uint16); ok {
return int64(i)
} else if i, ok := v.Value.(uint32); ok {
return int64(i)
}
return v.Value.(int64)
}
func (v valueUnstructured) IsString() bool {
if v.Value == nil {
return false
}
_, ok := v.Value.(string)
return ok
}
func (v valueUnstructured) AsString() string {
return v.Value.(string)
}
func (v valueUnstructured) IsBool() bool {
if v.Value == nil {
return false
}
_, ok := v.Value.(bool)
return ok
}
func (v valueUnstructured) AsBool() bool {
return v.Value.(bool)
}
func (v valueUnstructured) IsNull() bool {
return v.Value == nil
}
func (v valueUnstructured) Unstructured() interface{} {
return v.Value
}

24
e2e/vendor/sigs.k8s.io/yaml/.gitignore generated vendored Normal file
View File

@ -0,0 +1,24 @@
# OSX leaves these everywhere on SMB shares
._*
# Eclipse files
.classpath
.project
.settings/**
# Idea files
.idea/**
.idea/
# Emacs save files
*~
# Vim-related files
[._]*.s[a-w][a-z]
[._]s[a-w][a-z]
*.un~
Session.vim
.netrwhist
# Go test binaries
*.test

12
e2e/vendor/sigs.k8s.io/yaml/.travis.yml generated vendored Normal file
View File

@ -0,0 +1,12 @@
language: go
arch: arm64
dist: focal
go: 1.15.x
script:
- diff -u <(echo -n) <(gofmt -d *.go)
- diff -u <(echo -n) <(golint $(go list -e ./...) | grep -v YAMLToJSON)
- GO111MODULE=on go vet .
- GO111MODULE=on go test -v -race ./...
- git diff --exit-code
install:
- GO111MODULE=off go get golang.org/x/lint/golint

31
e2e/vendor/sigs.k8s.io/yaml/CONTRIBUTING.md generated vendored Normal file
View File

@ -0,0 +1,31 @@
# Contributing Guidelines
Welcome to Kubernetes. We are excited about the prospect of you joining our [community](https://github.com/kubernetes/community)! The Kubernetes community abides by the CNCF [code of conduct](code-of-conduct.md). Here is an excerpt:
_As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities._
## Getting Started
We have full documentation on how to get started contributing here:
<!---
If your repo has certain guidelines for contribution, put them here ahead of the general k8s resources
-->
- [Contributor License Agreement](https://git.k8s.io/community/CLA.md) Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests
- [Kubernetes Contributor Guide](http://git.k8s.io/community/contributors/guide) - Main contributor documentation, or you can just jump directly to the [contributing section](http://git.k8s.io/community/contributors/guide#contributing)
- [Contributor Cheat Sheet](https://git.k8s.io/community/contributors/guide/contributor-cheatsheet.md) - Common resources for existing developers
## Mentorship
- [Mentoring Initiatives](https://git.k8s.io/community/mentoring) - We have a diverse set of mentorship programs available that are always looking for volunteers!
<!---
Custom Information - if you're copying this template for the first time you can add custom content here, for example:
## Contact Information
- [Slack channel](https://kubernetes.slack.com/messages/kubernetes-users) - Replace `kubernetes-users` with your slack channel string, this will send users directly to your channel.
- [Mailing list](URL)
-->

306
e2e/vendor/sigs.k8s.io/yaml/LICENSE generated vendored Normal file
View File

@ -0,0 +1,306 @@
The MIT License (MIT)
Copyright (c) 2014 Sam Ghods
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# The forked go-yaml.v3 library under this project is covered by two
different licenses (MIT and Apache):
#### MIT License ####
The following files were ported to Go from C files of libyaml, and thus
are still covered by their original MIT license, with the additional
copyright starting in 2011 when the project was ported over:
apic.go emitterc.go parserc.go readerc.go scannerc.go
writerc.go yamlh.go yamlprivateh.go
Copyright (c) 2006-2010 Kirill Simonov
Copyright (c) 2006-2011 Kirill Simonov
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
### Apache License ###
All the remaining project files are covered by the Apache license:
Copyright (c) 2011-2019 Canonical Ltd
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# The forked go-yaml.v2 library under the project is covered by an
Apache license:
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

23
e2e/vendor/sigs.k8s.io/yaml/OWNERS generated vendored Normal file
View File

@ -0,0 +1,23 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- dims
- jpbetz
- smarterclayton
- deads2k
- sttts
- liggitt
reviewers:
- dims
- thockin
- jpbetz
- smarterclayton
- wojtek-t
- deads2k
- derekwaynecarr
- mikedanese
- liggitt
- sttts
- tallclair
labels:
- sig/api-machinery

123
e2e/vendor/sigs.k8s.io/yaml/README.md generated vendored Normal file
View File

@ -0,0 +1,123 @@
# YAML marshaling and unmarshaling support for Go
[![Build Status](https://travis-ci.org/kubernetes-sigs/yaml.svg)](https://travis-ci.org/kubernetes-sigs/yaml)
kubernetes-sigs/yaml is a permanent fork of [ghodss/yaml](https://github.com/ghodss/yaml).
## Introduction
A wrapper around [go-yaml](https://github.com/go-yaml/yaml) designed to enable a better way of handling YAML when marshaling to and from structs.
In short, this library first converts YAML to JSON using go-yaml and then uses `json.Marshal` and `json.Unmarshal` to convert to or from the struct. This means that it effectively reuses the JSON struct tags as well as the custom JSON methods `MarshalJSON` and `UnmarshalJSON`, unlike go-yaml. For a detailed overview of the rationale behind this method, [see this blog post](http://web.archive.org/web/20190603050330/http://ghodss.com/2014/the-right-way-to-handle-yaml-in-golang/).
## Compatibility
This package uses [go-yaml](https://github.com/go-yaml/yaml) and therefore supports [everything go-yaml supports](https://github.com/go-yaml/yaml#compatibility).
## Caveats
**Caveat #1:** When using `yaml.Marshal` and `yaml.Unmarshal`, binary data should NOT be preceded with the `!!binary` YAML tag. If you do, go-yaml will convert the binary data from base64 to native binary data, which is not compatible with JSON. You can still use binary in your YAML files though - just store them without the `!!binary` tag and decode the base64 in your code (e.g. in the custom JSON methods `MarshalJSON` and `UnmarshalJSON`). This also has the benefit that your YAML and your JSON binary data will be decoded exactly the same way. As an example:
```
BAD:
exampleKey: !!binary gIGC
GOOD:
exampleKey: gIGC
... and decode the base64 data in your code.
```
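As a minimal sketch of the decode-in-your-code approach suggested above (the `Blob` type and its field are invented for illustration), a custom `UnmarshalJSON` can do the base64 decoding, and `yaml.Unmarshal` will invoke it because the YAML is converted to JSON first:
```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"

	"sigs.k8s.io/yaml"
)

// Blob holds binary data stored in YAML as a plain base64 string
// (no !!binary tag), decoded here in a custom JSON method.
type Blob struct {
	Data []byte
}

func (b *Blob) UnmarshalJSON(raw []byte) error {
	var s string
	if err := json.Unmarshal(raw, &s); err != nil {
		return err
	}
	decoded, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return err
	}
	b.Data = decoded
	return nil
}

func main() {
	var b Blob
	if err := yaml.Unmarshal([]byte(`gIGC`), &b); err != nil {
		fmt.Printf("err: %v\n", err)
		return
	}
	fmt.Printf("% x\n", b.Data) // Output: 80 81 82
}
```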
**Caveat #2:** When using `YAMLToJSON` directly, maps with keys that are maps will result in an error since this is not supported by JSON. This error will occur in `Unmarshal` as well, since you can't unmarshal map keys anyway: struct fields can't be keys.
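For illustration, a minimal sketch of that failure mode (the exact error text may differ between versions):
```go
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

func main() {
	// An explicit complex key: the key of the top-level mapping is
	// itself a map, which JSON cannot represent.
	y := []byte("? {a: 1}\n: value\n")
	if _, err := yaml.YAMLToJSON(y); err != nil {
		fmt.Printf("err: %v\n", err) // the conversion fails: a map is not a valid key
	}
}
```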
## Installation and usage
To install, run:
```
$ go get sigs.k8s.io/yaml
```
And import using:
```
import "sigs.k8s.io/yaml"
```
Usage is very similar to the JSON library:
```go
package main
import (
"fmt"
"sigs.k8s.io/yaml"
)
type Person struct {
Name string `json:"name"` // Affects YAML field names too.
Age int `json:"age"`
}
func main() {
// Marshal a Person struct to YAML.
p := Person{"John", 30}
y, err := yaml.Marshal(p)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
fmt.Println(string(y))
/* Output:
age: 30
name: John
*/
// Unmarshal the YAML back into a Person struct.
var p2 Person
err = yaml.Unmarshal(y, &p2)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
fmt.Println(p2)
/* Output:
{John 30}
*/
}
```
`yaml.YAMLToJSON` and `yaml.JSONToYAML` methods are also available:
```go
package main
import (
"fmt"
"sigs.k8s.io/yaml"
)
func main() {
j := []byte(`{"name": "John", "age": 30}`)
y, err := yaml.JSONToYAML(j)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
fmt.Println(string(y))
/* Output:
age: 30
name: John
*/
j2, err := yaml.YAMLToJSON(y)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
fmt.Println(string(j2))
/* Output:
{"age":30,"name":"John"}
*/
}
```

9
e2e/vendor/sigs.k8s.io/yaml/RELEASE.md generated vendored Normal file
View File

@ -0,0 +1,9 @@
# Release Process
The `yaml` Project is released on an as-needed basis. The process is as follows:
1. An issue is opened proposing a new release, with a changelog since the last release
1. All [OWNERS](OWNERS) must LGTM this release
1. An OWNER runs `git tag -s $VERSION`, inserts the changelog in the tag message, and pushes the tag with `git push origin $VERSION`
1. The release issue is closed
1. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kubernetes-template-project $VERSION is released`

17
e2e/vendor/sigs.k8s.io/yaml/SECURITY_CONTACTS generated vendored Normal file
View File

@ -0,0 +1,17 @@
# Defined below are the security contacts for this repo.
#
# They are the contact point for the Product Security Team to reach out
# to for triaging and handling of incoming issues.
#
# The below names agree to abide by the
# [Embargo Policy](https://github.com/kubernetes/sig-release/blob/master/security-release-process-documentation/security-release-process.md#embargo-policy)
# and will be removed and replaced if they violate that agreement.
#
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/
cjcullen
jessfraz
liggitt
philips
tallclair

3
e2e/vendor/sigs.k8s.io/yaml/code-of-conduct.md generated vendored Normal file
View File

@ -0,0 +1,3 @@
# Kubernetes Community Code of Conduct
Please refer to our [Kubernetes Community Code of Conduct](https://git.k8s.io/community/code-of-conduct.md)

501
e2e/vendor/sigs.k8s.io/yaml/fields.go generated vendored Normal file
View File

@ -0,0 +1,501 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package yaml
import (
"bytes"
"encoding"
"encoding/json"
"reflect"
"sort"
"strings"
"sync"
"unicode"
"unicode/utf8"
)
// indirect walks down 'value' allocating pointers as needed,
// until it gets to a non-pointer.
// if it encounters an Unmarshaler, indirect stops and returns that.
// if decodingNull is true, indirect stops at the last pointer so it can be set to nil.
func indirect(value reflect.Value, decodingNull bool) (json.Unmarshaler, encoding.TextUnmarshaler, reflect.Value) {
// If 'value' is a named type and is addressable,
// start with its address, so that if the type has pointer methods,
// we find them.
if value.Kind() != reflect.Ptr && value.Type().Name() != "" && value.CanAddr() {
value = value.Addr()
}
for {
// Load value from interface, but only if the result will be
// usefully addressable.
if value.Kind() == reflect.Interface && !value.IsNil() {
element := value.Elem()
if element.Kind() == reflect.Ptr && !element.IsNil() && (!decodingNull || element.Elem().Kind() == reflect.Ptr) {
value = element
continue
}
}
if value.Kind() != reflect.Ptr {
break
}
if value.Elem().Kind() != reflect.Ptr && decodingNull && value.CanSet() {
break
}
if value.IsNil() {
if value.CanSet() {
value.Set(reflect.New(value.Type().Elem()))
} else {
value = reflect.New(value.Type().Elem())
}
}
if value.Type().NumMethod() > 0 {
if u, ok := value.Interface().(json.Unmarshaler); ok {
return u, nil, reflect.Value{}
}
if u, ok := value.Interface().(encoding.TextUnmarshaler); ok {
return nil, u, reflect.Value{}
}
}
value = value.Elem()
}
return nil, nil, value
}
// A field represents a single field found in a struct.
type field struct {
name string
nameBytes []byte // []byte(name)
equalFold func(s, t []byte) bool // bytes.EqualFold or equivalent
tag bool
index []int
typ reflect.Type
omitEmpty bool
quoted bool
}
func fillField(f field) field {
f.nameBytes = []byte(f.name)
f.equalFold = foldFunc(f.nameBytes)
return f
}
// byName sorts field by name, breaking ties with depth,
// then breaking ties with "name came from json tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts field by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that JSON should recognize for the given type.
// The algorithm is breadth-first search over the set of structs to include - the top struct
// and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
var count map[reflect.Type]int
var nextCount map[reflect.Type]int
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" { // unexported
continue
}
tag := sf.Tag.Get("json")
if tag == "-" {
continue
}
name, opts := parseTag(tag)
if !isValidTag(name) {
name = ""
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := name != ""
if name == "" {
name = sf.Name
}
fields = append(fields, fillField(field{
name: name,
tag: tagged,
index: index,
typ: ft,
omitEmpty: opts.Contains("omitempty"),
quoted: opts.Contains("string"),
}))
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
next = append(next, fillField(field{name: ft.Name(), index: index, typ: ft}))
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with JSON tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// JSON tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}
func isValidTag(s string) bool {
if s == "" {
return false
}
for _, c := range s {
switch {
case strings.ContainsRune("!#$%&()*+-./:<=>?@[]^_{|}~ ", c):
// Backslash and quote chars are reserved, but
// otherwise any punctuation chars are allowed
// in a tag name.
default:
if !unicode.IsLetter(c) && !unicode.IsDigit(c) {
return false
}
}
}
return true
}
const (
caseMask = ^byte(0x20) // Mask to ignore case in ASCII.
kelvin = '\u212a'
smallLongEss = '\u017f'
)
// foldFunc returns one of four different case folding equivalence
// functions, from most general (and slow) to fastest:
//
// 1) bytes.EqualFold, if the key s contains any non-ASCII UTF-8
// 2) equalFoldRight, if s contains special folding ASCII ('k', 'K', 's', 'S')
// 3) asciiEqualFold, no special, but includes non-letters (including _)
// 4) simpleLetterEqualFold, no specials, no non-letters.
//
// The letters S and K are special because they map to 3 runes, not just 2:
// - S maps to s and to U+017F 'ſ' Latin small letter long s
// - k maps to K and to U+212A 'K' Kelvin sign
//
// See http://play.golang.org/p/tTxjOc0OGo
//
// The returned function is specialized for matching against s and
// should only be given s. It's not curried for performance reasons.
func foldFunc(s []byte) func(s, t []byte) bool {
nonLetter := false
special := false // special letter
for _, b := range s {
if b >= utf8.RuneSelf {
return bytes.EqualFold
}
upper := b & caseMask
if upper < 'A' || upper > 'Z' {
nonLetter = true
} else if upper == 'K' || upper == 'S' {
// See above for why these letters are special.
special = true
}
}
if special {
return equalFoldRight
}
if nonLetter {
return asciiEqualFold
}
return simpleLetterEqualFold
}
// equalFoldRight is a specialization of bytes.EqualFold when s is
// known to be all ASCII (including punctuation), but contains an 's',
// 'S', 'k', or 'K', requiring a Unicode fold on the bytes in t.
// See comments on foldFunc.
func equalFoldRight(s, t []byte) bool {
for _, sb := range s {
if len(t) == 0 {
return false
}
tb := t[0]
if tb < utf8.RuneSelf {
if sb != tb {
sbUpper := sb & caseMask
if 'A' <= sbUpper && sbUpper <= 'Z' {
if sbUpper != tb&caseMask {
return false
}
} else {
return false
}
}
t = t[1:]
continue
}
// sb is ASCII and t is not. t must be either kelvin
// sign or long s; sb must be s, S, k, or K.
tr, size := utf8.DecodeRune(t)
switch sb {
case 's', 'S':
if tr != smallLongEss {
return false
}
case 'k', 'K':
if tr != kelvin {
return false
}
default:
return false
}
t = t[size:]
}
return len(t) <= 0
}
// asciiEqualFold is a specialization of bytes.EqualFold for use when
// s is all ASCII (but may contain non-letters) and contains no
// special-folding letters.
// See comments on foldFunc.
func asciiEqualFold(s, t []byte) bool {
if len(s) != len(t) {
return false
}
for i, sb := range s {
tb := t[i]
if sb == tb {
continue
}
if ('a' <= sb && sb <= 'z') || ('A' <= sb && sb <= 'Z') {
if sb&caseMask != tb&caseMask {
return false
}
} else {
return false
}
}
return true
}
// simpleLetterEqualFold is a specialization of bytes.EqualFold for
// use when s is all ASCII letters (no underscores, etc) and also
// doesn't contain 'k', 'K', 's', or 'S'.
// See comments on foldFunc.
func simpleLetterEqualFold(s, t []byte) bool {
if len(s) != len(t) {
return false
}
for i, b := range s {
if b&caseMask != t[i]&caseMask {
return false
}
}
return true
}
// tagOptions is the string following a comma in a struct field's "json"
// tag, or the empty string. It does not include the leading comma.
type tagOptions string
// parseTag splits a struct field's json tag into its name and
// comma-separated options.
func parseTag(tag string) (string, tagOptions) {
if idx := strings.Index(tag, ","); idx != -1 {
return tag[:idx], tagOptions(tag[idx+1:])
}
return tag, tagOptions("")
}
// Contains reports whether a comma-separated list of options
// contains a particular substr flag. substr must be surrounded by a
// string boundary or commas.
func (o tagOptions) Contains(optionName string) bool {
if len(o) == 0 {
return false
}
s := string(o)
for s != "" {
var next string
i := strings.Index(s, ",")
if i >= 0 {
s, next = s[:i], s[i+1:]
}
if s == optionName {
return true
}
s = next
}
return false
}

201
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/LICENSE generated vendored Normal file
View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

31
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/LICENSE.libyaml generated vendored Normal file
View File

@ -0,0 +1,31 @@
The following files were ported to Go from C files of libyaml, and thus
are still covered by their original copyright and license:
apic.go
emitterc.go
parserc.go
readerc.go
scannerc.go
writerc.go
yamlh.go
yamlprivateh.go
Copyright (c) 2006 Kirill Simonov
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

13
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/NOTICE generated vendored Normal file
View File

@ -0,0 +1,13 @@
Copyright 2011-2016 Canonical Ltd.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

24
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/OWNERS generated vendored Normal file
View File

@ -0,0 +1,24 @@
# See the OWNERS docs at https://go.k8s.io/owners
approvers:
- dims
- jpbetz
- smarterclayton
- deads2k
- sttts
- liggitt
- natasha41575
- knverey
reviewers:
- dims
- thockin
- jpbetz
- smarterclayton
- deads2k
- derekwaynecarr
- mikedanese
- liggitt
- sttts
- tallclair
labels:
- sig/api-machinery

143
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/README.md generated vendored Normal file
View File

@ -0,0 +1,143 @@
# go-yaml fork
This package is a fork of the go-yaml library and is intended solely for consumption
by kubernetes projects. In this fork, we plan to support only critical changes required for
kubernetes, such as small bug fixes and regressions. Larger, general-purpose feature requests
should be made in the upstream go-yaml library, and we will reject such changes in this fork
unless we are pulling them from upstream.
This fork is based on v2.4.0: https://github.com/go-yaml/yaml/releases/tag/v2.4.0
# YAML support for the Go language
Introduction
------------
The yaml package enables Go programs to comfortably encode and decode YAML
values. It was developed within [Canonical](https://www.canonical.com) as
part of the [juju](https://juju.ubuntu.com) project, and is based on a
pure Go port of the well-known [libyaml](http://pyyaml.org/wiki/LibYAML)
C library to parse and generate YAML data quickly and reliably.
Compatibility
-------------
The yaml package supports most of YAML 1.1 and 1.2, including support for
anchors, tags, map merging, etc. Multi-document unmarshalling is not yet
implemented, and base-60 floats from YAML 1.1 are purposefully not
supported since they're a poor design and are gone in YAML 1.2.
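As a minimal sketch of the anchor and merge-key support mentioned above (the document and key names are invented for illustration):
```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v2"
)

// defaults is anchored and then merged into development with the
// YAML merge key (<<), so inherited keys need not be repeated.
var data = `
defaults: &defaults
  adapter: postgres
  host: localhost

development:
  <<: *defaults
  database: dev_db
`

func main() {
	m := map[string]map[string]string{}
	if err := yaml.Unmarshal([]byte(data), &m); err != nil {
		log.Fatalf("error: %v", err)
	}
	// adapter is inherited from the defaults anchor; database is local.
	fmt.Println(m["development"]["adapter"], m["development"]["database"])
	// Output: postgres dev_db
}
```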
Installation and usage
----------------------
The import path for the package is *gopkg.in/yaml.v2*.
To install it, run:
go get gopkg.in/yaml.v2
API documentation
-----------------
If opened in a browser, the import path itself leads to the API documentation:
* [https://gopkg.in/yaml.v2](https://gopkg.in/yaml.v2)
API stability
-------------
The package API for yaml v2 will remain stable as described in [gopkg.in](https://gopkg.in).
License
-------
The yaml package is licensed under the Apache License 2.0. Please see the LICENSE file for details.
Example
-------
```Go
package main
import (
"fmt"
"log"
"gopkg.in/yaml.v2"
)
var data = `
a: Easy!
b:
c: 2
d: [3, 4]
`
// Note: struct fields must be public in order for unmarshal to
// correctly populate the data.
type T struct {
A string
B struct {
RenamedC int `yaml:"c"`
D []int `yaml:",flow"`
}
}
func main() {
t := T{}
err := yaml.Unmarshal([]byte(data), &t)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- t:\n%v\n\n", t)
d, err := yaml.Marshal(&t)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- t dump:\n%s\n\n", string(d))
m := make(map[interface{}]interface{})
err = yaml.Unmarshal([]byte(data), &m)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- m:\n%v\n\n", m)
d, err = yaml.Marshal(&m)
if err != nil {
log.Fatalf("error: %v", err)
}
fmt.Printf("--- m dump:\n%s\n\n", string(d))
}
```
This example will generate the following output:
```
--- t:
{Easy! {2 [3 4]}}
--- t dump:
a: Easy!
b:
c: 2
d: [3, 4]
--- m:
map[a:Easy! b:map[c:2 d:[3 4]]]
--- m dump:
a: Easy!
b:
c: 2
d:
- 3
- 4
```

744
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/apic.go generated vendored Normal file
View File

@ -0,0 +1,744 @@
package yaml
import (
"io"
)
func yaml_insert_token(parser *yaml_parser_t, pos int, token *yaml_token_t) {
//fmt.Println("yaml_insert_token", "pos:", pos, "typ:", token.typ, "head:", parser.tokens_head, "len:", len(parser.tokens))
// Check if we can move the queue at the beginning of the buffer.
if parser.tokens_head > 0 && len(parser.tokens) == cap(parser.tokens) {
if parser.tokens_head != len(parser.tokens) {
copy(parser.tokens, parser.tokens[parser.tokens_head:])
}
parser.tokens = parser.tokens[:len(parser.tokens)-parser.tokens_head]
parser.tokens_head = 0
}
parser.tokens = append(parser.tokens, *token)
if pos < 0 {
return
}
copy(parser.tokens[parser.tokens_head+pos+1:], parser.tokens[parser.tokens_head+pos:])
parser.tokens[parser.tokens_head+pos] = *token
}
// Create a new parser object.
func yaml_parser_initialize(parser *yaml_parser_t) bool {
*parser = yaml_parser_t{
raw_buffer: make([]byte, 0, input_raw_buffer_size),
buffer: make([]byte, 0, input_buffer_size),
}
return true
}
// Destroy a parser object.
func yaml_parser_delete(parser *yaml_parser_t) {
*parser = yaml_parser_t{}
}
// String read handler.
func yaml_string_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {
if parser.input_pos == len(parser.input) {
return 0, io.EOF
}
n = copy(buffer, parser.input[parser.input_pos:])
parser.input_pos += n
return n, nil
}
// Reader read handler.
func yaml_reader_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {
return parser.input_reader.Read(buffer)
}
// Set a string input.
func yaml_parser_set_input_string(parser *yaml_parser_t, input []byte) {
if parser.read_handler != nil {
panic("must set the input source only once")
}
parser.read_handler = yaml_string_read_handler
parser.input = input
parser.input_pos = 0
}
// Set a file input.
func yaml_parser_set_input_reader(parser *yaml_parser_t, r io.Reader) {
if parser.read_handler != nil {
panic("must set the input source only once")
}
parser.read_handler = yaml_reader_read_handler
parser.input_reader = r
}
// Set the source encoding.
func yaml_parser_set_encoding(parser *yaml_parser_t, encoding yaml_encoding_t) {
if parser.encoding != yaml_ANY_ENCODING {
panic("must set the encoding only once")
}
parser.encoding = encoding
}
var disableLineWrapping = false
// Create a new emitter object.
func yaml_emitter_initialize(emitter *yaml_emitter_t) {
*emitter = yaml_emitter_t{
buffer: make([]byte, output_buffer_size),
raw_buffer: make([]byte, 0, output_raw_buffer_size),
states: make([]yaml_emitter_state_t, 0, initial_stack_size),
events: make([]yaml_event_t, 0, initial_queue_size),
}
if disableLineWrapping {
emitter.best_width = -1
}
}
// Destroy an emitter object.
func yaml_emitter_delete(emitter *yaml_emitter_t) {
*emitter = yaml_emitter_t{}
}
// String write handler.
func yaml_string_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
*emitter.output_buffer = append(*emitter.output_buffer, buffer...)
return nil
}
// yaml_writer_write_handler uses emitter.output_writer to write the
// emitted text.
func yaml_writer_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
_, err := emitter.output_writer.Write(buffer)
return err
}
// Set a string output.
func yaml_emitter_set_output_string(emitter *yaml_emitter_t, output_buffer *[]byte) {
if emitter.write_handler != nil {
panic("must set the output target only once")
}
emitter.write_handler = yaml_string_write_handler
emitter.output_buffer = output_buffer
}
// Set a file output.
func yaml_emitter_set_output_writer(emitter *yaml_emitter_t, w io.Writer) {
if emitter.write_handler != nil {
panic("must set the output target only once")
}
emitter.write_handler = yaml_writer_write_handler
emitter.output_writer = w
}
// Set the output encoding.
func yaml_emitter_set_encoding(emitter *yaml_emitter_t, encoding yaml_encoding_t) {
if emitter.encoding != yaml_ANY_ENCODING {
panic("must set the output encoding only once")
}
emitter.encoding = encoding
}
// Set the canonical output style.
func yaml_emitter_set_canonical(emitter *yaml_emitter_t, canonical bool) {
emitter.canonical = canonical
}
// Set the indentation increment.
func yaml_emitter_set_indent(emitter *yaml_emitter_t, indent int) {
if indent < 2 || indent > 9 {
indent = 2
}
emitter.best_indent = indent
}
// Set the preferred line width.
func yaml_emitter_set_width(emitter *yaml_emitter_t, width int) {
if width < 0 {
width = -1
}
emitter.best_width = width
}
// Set if unescaped non-ASCII characters are allowed.
func yaml_emitter_set_unicode(emitter *yaml_emitter_t, unicode bool) {
emitter.unicode = unicode
}
// Set the preferred line break character.
func yaml_emitter_set_break(emitter *yaml_emitter_t, line_break yaml_break_t) {
emitter.line_break = line_break
}
///*
// * Destroy a token object.
// */
//
//YAML_DECLARE(void)
//yaml_token_delete(yaml_token_t *token)
//{
// assert(token); // Non-NULL token object expected.
//
// switch (token.type)
// {
// case YAML_TAG_DIRECTIVE_TOKEN:
// yaml_free(token.data.tag_directive.handle);
// yaml_free(token.data.tag_directive.prefix);
// break;
//
// case YAML_ALIAS_TOKEN:
// yaml_free(token.data.alias.value);
// break;
//
// case YAML_ANCHOR_TOKEN:
// yaml_free(token.data.anchor.value);
// break;
//
// case YAML_TAG_TOKEN:
// yaml_free(token.data.tag.handle);
// yaml_free(token.data.tag.suffix);
// break;
//
// case YAML_SCALAR_TOKEN:
// yaml_free(token.data.scalar.value);
// break;
//
// default:
// break;
// }
//
// memset(token, 0, sizeof(yaml_token_t));
//}
//
///*
// * Check if a string is a valid UTF-8 sequence.
// *
// * Check 'reader.c' for more details on UTF-8 encoding.
// */
//
//static int
//yaml_check_utf8(yaml_char_t *start, size_t length)
//{
// yaml_char_t *end = start+length;
// yaml_char_t *pointer = start;
//
// while (pointer < end) {
// unsigned char octet;
// unsigned int width;
// unsigned int value;
// size_t k;
//
// octet = pointer[0];
// width = (octet & 0x80) == 0x00 ? 1 :
// (octet & 0xE0) == 0xC0 ? 2 :
// (octet & 0xF0) == 0xE0 ? 3 :
// (octet & 0xF8) == 0xF0 ? 4 : 0;
// value = (octet & 0x80) == 0x00 ? octet & 0x7F :
// (octet & 0xE0) == 0xC0 ? octet & 0x1F :
// (octet & 0xF0) == 0xE0 ? octet & 0x0F :
// (octet & 0xF8) == 0xF0 ? octet & 0x07 : 0;
// if (!width) return 0;
// if (pointer+width > end) return 0;
// for (k = 1; k < width; k ++) {
// octet = pointer[k];
// if ((octet & 0xC0) != 0x80) return 0;
// value = (value << 6) + (octet & 0x3F);
// }
// if (!((width == 1) ||
// (width == 2 && value >= 0x80) ||
// (width == 3 && value >= 0x800) ||
// (width == 4 && value >= 0x10000))) return 0;
//
// pointer += width;
// }
//
// return 1;
//}
//
// Create STREAM-START.
func yaml_stream_start_event_initialize(event *yaml_event_t, encoding yaml_encoding_t) {
*event = yaml_event_t{
typ: yaml_STREAM_START_EVENT,
encoding: encoding,
}
}
// Create STREAM-END.
func yaml_stream_end_event_initialize(event *yaml_event_t) {
*event = yaml_event_t{
typ: yaml_STREAM_END_EVENT,
}
}
// Create DOCUMENT-START.
func yaml_document_start_event_initialize(
event *yaml_event_t,
version_directive *yaml_version_directive_t,
tag_directives []yaml_tag_directive_t,
implicit bool,
) {
*event = yaml_event_t{
typ: yaml_DOCUMENT_START_EVENT,
version_directive: version_directive,
tag_directives: tag_directives,
implicit: implicit,
}
}
// Create DOCUMENT-END.
func yaml_document_end_event_initialize(event *yaml_event_t, implicit bool) {
*event = yaml_event_t{
typ: yaml_DOCUMENT_END_EVENT,
implicit: implicit,
}
}
///*
// * Create ALIAS.
// */
//
//YAML_DECLARE(int)
//yaml_alias_event_initialize(event *yaml_event_t, anchor *yaml_char_t)
//{
// mark yaml_mark_t = { 0, 0, 0 }
// anchor_copy *yaml_char_t = NULL
//
// assert(event) // Non-NULL event object is expected.
// assert(anchor) // Non-NULL anchor is expected.
//
// if (!yaml_check_utf8(anchor, strlen((char *)anchor))) return 0
//
// anchor_copy = yaml_strdup(anchor)
// if (!anchor_copy)
// return 0
//
// ALIAS_EVENT_INIT(*event, anchor_copy, mark, mark)
//
// return 1
//}
// Create SCALAR.
func yaml_scalar_event_initialize(event *yaml_event_t, anchor, tag, value []byte, plain_implicit, quoted_implicit bool, style yaml_scalar_style_t) bool {
*event = yaml_event_t{
typ: yaml_SCALAR_EVENT,
anchor: anchor,
tag: tag,
value: value,
implicit: plain_implicit,
quoted_implicit: quoted_implicit,
style: yaml_style_t(style),
}
return true
}
// Create SEQUENCE-START.
func yaml_sequence_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_sequence_style_t) bool {
*event = yaml_event_t{
typ: yaml_SEQUENCE_START_EVENT,
anchor: anchor,
tag: tag,
implicit: implicit,
style: yaml_style_t(style),
}
return true
}
// Create SEQUENCE-END.
func yaml_sequence_end_event_initialize(event *yaml_event_t) bool {
*event = yaml_event_t{
typ: yaml_SEQUENCE_END_EVENT,
}
return true
}
// Create MAPPING-START.
func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_mapping_style_t) {
*event = yaml_event_t{
typ: yaml_MAPPING_START_EVENT,
anchor: anchor,
tag: tag,
implicit: implicit,
style: yaml_style_t(style),
}
}
// Create MAPPING-END.
func yaml_mapping_end_event_initialize(event *yaml_event_t) {
*event = yaml_event_t{
typ: yaml_MAPPING_END_EVENT,
}
}
// Destroy an event object.
func yaml_event_delete(event *yaml_event_t) {
*event = yaml_event_t{}
}
///*
// * Create a document object.
// */
//
//YAML_DECLARE(int)
//yaml_document_initialize(document *yaml_document_t,
// version_directive *yaml_version_directive_t,
// tag_directives_start *yaml_tag_directive_t,
// tag_directives_end *yaml_tag_directive_t,
// start_implicit int, end_implicit int)
//{
// struct {
// error yaml_error_type_t
// } context
// struct {
// start *yaml_node_t
// end *yaml_node_t
// top *yaml_node_t
// } nodes = { NULL, NULL, NULL }
// version_directive_copy *yaml_version_directive_t = NULL
// struct {
// start *yaml_tag_directive_t
// end *yaml_tag_directive_t
// top *yaml_tag_directive_t
// } tag_directives_copy = { NULL, NULL, NULL }
// value yaml_tag_directive_t = { NULL, NULL }
// mark yaml_mark_t = { 0, 0, 0 }
//
// assert(document) // Non-NULL document object is expected.
// assert((tag_directives_start && tag_directives_end) ||
// (tag_directives_start == tag_directives_end))
// // Valid tag directives are expected.
//
// if (!STACK_INIT(&context, nodes, INITIAL_STACK_SIZE)) goto error
//
// if (version_directive) {
// version_directive_copy = yaml_malloc(sizeof(yaml_version_directive_t))
// if (!version_directive_copy) goto error
// version_directive_copy.major = version_directive.major
// version_directive_copy.minor = version_directive.minor
// }
//
// if (tag_directives_start != tag_directives_end) {
// tag_directive *yaml_tag_directive_t
// if (!STACK_INIT(&context, tag_directives_copy, INITIAL_STACK_SIZE))
// goto error
// for (tag_directive = tag_directives_start
// tag_directive != tag_directives_end; tag_directive ++) {
// assert(tag_directive.handle)
// assert(tag_directive.prefix)
// if (!yaml_check_utf8(tag_directive.handle,
// strlen((char *)tag_directive.handle)))
// goto error
// if (!yaml_check_utf8(tag_directive.prefix,
// strlen((char *)tag_directive.prefix)))
// goto error
// value.handle = yaml_strdup(tag_directive.handle)
// value.prefix = yaml_strdup(tag_directive.prefix)
// if (!value.handle || !value.prefix) goto error
// if (!PUSH(&context, tag_directives_copy, value))
// goto error
// value.handle = NULL
// value.prefix = NULL
// }
// }
//
// DOCUMENT_INIT(*document, nodes.start, nodes.end, version_directive_copy,
// tag_directives_copy.start, tag_directives_copy.top,
// start_implicit, end_implicit, mark, mark)
//
// return 1
//
//error:
// STACK_DEL(&context, nodes)
// yaml_free(version_directive_copy)
// while (!STACK_EMPTY(&context, tag_directives_copy)) {
// value yaml_tag_directive_t = POP(&context, tag_directives_copy)
// yaml_free(value.handle)
// yaml_free(value.prefix)
// }
// STACK_DEL(&context, tag_directives_copy)
// yaml_free(value.handle)
// yaml_free(value.prefix)
//
// return 0
//}
//
///*
// * Destroy a document object.
// */
//
//YAML_DECLARE(void)
//yaml_document_delete(document *yaml_document_t)
//{
// struct {
// error yaml_error_type_t
// } context
// tag_directive *yaml_tag_directive_t
//
// context.error = YAML_NO_ERROR // Eliminate a compiler warning.
//
// assert(document) // Non-NULL document object is expected.
//
// while (!STACK_EMPTY(&context, document.nodes)) {
// node yaml_node_t = POP(&context, document.nodes)
// yaml_free(node.tag)
// switch (node.type) {
// case YAML_SCALAR_NODE:
// yaml_free(node.data.scalar.value)
// break
// case YAML_SEQUENCE_NODE:
// STACK_DEL(&context, node.data.sequence.items)
// break
// case YAML_MAPPING_NODE:
// STACK_DEL(&context, node.data.mapping.pairs)
// break
// default:
// assert(0) // Should not happen.
// }
// }
// STACK_DEL(&context, document.nodes)
//
// yaml_free(document.version_directive)
// for (tag_directive = document.tag_directives.start
// tag_directive != document.tag_directives.end
// tag_directive++) {
// yaml_free(tag_directive.handle)
// yaml_free(tag_directive.prefix)
// }
// yaml_free(document.tag_directives.start)
//
// memset(document, 0, sizeof(yaml_document_t))
//}
//
///**
// * Get a document node.
// */
//
//YAML_DECLARE(yaml_node_t *)
//yaml_document_get_node(document *yaml_document_t, index int)
//{
// assert(document) // Non-NULL document object is expected.
//
// if (index > 0 && document.nodes.start + index <= document.nodes.top) {
// return document.nodes.start + index - 1
// }
// return NULL
//}
//
///**
// * Get the root object.
// */
//
//YAML_DECLARE(yaml_node_t *)
//yaml_document_get_root_node(document *yaml_document_t)
//{
// assert(document) // Non-NULL document object is expected.
//
// if (document.nodes.top != document.nodes.start) {
// return document.nodes.start
// }
// return NULL
//}
//
///*
// * Add a scalar node to a document.
// */
//
//YAML_DECLARE(int)
//yaml_document_add_scalar(document *yaml_document_t,
// tag *yaml_char_t, value *yaml_char_t, length int,
// style yaml_scalar_style_t)
//{
// struct {
// error yaml_error_type_t
// } context
// mark yaml_mark_t = { 0, 0, 0 }
// tag_copy *yaml_char_t = NULL
// value_copy *yaml_char_t = NULL
// node yaml_node_t
//
// assert(document) // Non-NULL document object is expected.
// assert(value) // Non-NULL value is expected.
//
// if (!tag) {
// tag = (yaml_char_t *)YAML_DEFAULT_SCALAR_TAG
// }
//
// if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error
// tag_copy = yaml_strdup(tag)
// if (!tag_copy) goto error
//
// if (length < 0) {
// length = strlen((char *)value)
// }
//
// if (!yaml_check_utf8(value, length)) goto error
// value_copy = yaml_malloc(length+1)
// if (!value_copy) goto error
// memcpy(value_copy, value, length)
// value_copy[length] = '\0'
//
// SCALAR_NODE_INIT(node, tag_copy, value_copy, length, style, mark, mark)
// if (!PUSH(&context, document.nodes, node)) goto error
//
// return document.nodes.top - document.nodes.start
//
//error:
// yaml_free(tag_copy)
// yaml_free(value_copy)
//
// return 0
//}
//
///*
// * Add a sequence node to a document.
// */
//
//YAML_DECLARE(int)
//yaml_document_add_sequence(document *yaml_document_t,
// tag *yaml_char_t, style yaml_sequence_style_t)
//{
// struct {
// error yaml_error_type_t
// } context
// mark yaml_mark_t = { 0, 0, 0 }
// tag_copy *yaml_char_t = NULL
// struct {
// start *yaml_node_item_t
// end *yaml_node_item_t
// top *yaml_node_item_t
// } items = { NULL, NULL, NULL }
// node yaml_node_t
//
// assert(document) // Non-NULL document object is expected.
//
// if (!tag) {
// tag = (yaml_char_t *)YAML_DEFAULT_SEQUENCE_TAG
// }
//
// if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error
// tag_copy = yaml_strdup(tag)
// if (!tag_copy) goto error
//
// if (!STACK_INIT(&context, items, INITIAL_STACK_SIZE)) goto error
//
// SEQUENCE_NODE_INIT(node, tag_copy, items.start, items.end,
// style, mark, mark)
// if (!PUSH(&context, document.nodes, node)) goto error
//
// return document.nodes.top - document.nodes.start
//
//error:
// STACK_DEL(&context, items)
// yaml_free(tag_copy)
//
// return 0
//}
//
///*
// * Add a mapping node to a document.
// */
//
//YAML_DECLARE(int)
//yaml_document_add_mapping(document *yaml_document_t,
// tag *yaml_char_t, style yaml_mapping_style_t)
//{
// struct {
// error yaml_error_type_t
// } context
// mark yaml_mark_t = { 0, 0, 0 }
// tag_copy *yaml_char_t = NULL
// struct {
// start *yaml_node_pair_t
// end *yaml_node_pair_t
// top *yaml_node_pair_t
// } pairs = { NULL, NULL, NULL }
// node yaml_node_t
//
// assert(document) // Non-NULL document object is expected.
//
// if (!tag) {
// tag = (yaml_char_t *)YAML_DEFAULT_MAPPING_TAG
// }
//
// if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error
// tag_copy = yaml_strdup(tag)
// if (!tag_copy) goto error
//
// if (!STACK_INIT(&context, pairs, INITIAL_STACK_SIZE)) goto error
//
// MAPPING_NODE_INIT(node, tag_copy, pairs.start, pairs.end,
// style, mark, mark)
// if (!PUSH(&context, document.nodes, node)) goto error
//
// return document.nodes.top - document.nodes.start
//
//error:
// STACK_DEL(&context, pairs)
// yaml_free(tag_copy)
//
// return 0
//}
//
///*
// * Append an item to a sequence node.
// */
//
//YAML_DECLARE(int)
//yaml_document_append_sequence_item(document *yaml_document_t,
// sequence int, item int)
//{
// struct {
// error yaml_error_type_t
// } context
//
// assert(document) // Non-NULL document is required.
// assert(sequence > 0
// && document.nodes.start + sequence <= document.nodes.top)
// // Valid sequence id is required.
// assert(document.nodes.start[sequence-1].type == YAML_SEQUENCE_NODE)
// // A sequence node is required.
// assert(item > 0 && document.nodes.start + item <= document.nodes.top)
// // Valid item id is required.
//
// if (!PUSH(&context,
// document.nodes.start[sequence-1].data.sequence.items, item))
// return 0
//
// return 1
//}
//
///*
// * Append a pair of a key and a value to a mapping node.
// */
//
//YAML_DECLARE(int)
//yaml_document_append_mapping_pair(document *yaml_document_t,
// mapping int, key int, value int)
//{
// struct {
// error yaml_error_type_t
// } context
//
// pair yaml_node_pair_t
//
// assert(document) // Non-NULL document is required.
// assert(mapping > 0
// && document.nodes.start + mapping <= document.nodes.top)
// // Valid mapping id is required.
// assert(document.nodes.start[mapping-1].type == YAML_MAPPING_NODE)
// // A mapping node is required.
// assert(key > 0 && document.nodes.start + key <= document.nodes.top)
// // Valid key id is required.
// assert(value > 0 && document.nodes.start + value <= document.nodes.top)
// // Valid value id is required.
//
// pair.key = key
// pair.value = value
//
// if (!PUSH(&context,
// document.nodes.start[mapping-1].data.mapping.pairs, pair))
// return 0
//
// return 1
//}
//
//

815
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/decode.go generated vendored Normal file

@ -0,0 +1,815 @@
package yaml
import (
"encoding"
"encoding/base64"
"fmt"
"io"
"math"
"reflect"
"strconv"
"time"
)
const (
documentNode = 1 << iota
mappingNode
sequenceNode
scalarNode
aliasNode
)
type node struct {
kind int
line, column int
tag string
// For an alias node, alias holds the resolved alias.
alias *node
value string
implicit bool
children []*node
anchors map[string]*node
}
// ----------------------------------------------------------------------------
// Parser, produces a node tree out of a libyaml event stream.
type parser struct {
parser yaml_parser_t
event yaml_event_t
doc *node
doneInit bool
}
func newParser(b []byte) *parser {
p := parser{}
if !yaml_parser_initialize(&p.parser) {
panic("failed to initialize YAML emitter")
}
if len(b) == 0 {
b = []byte{'\n'}
}
yaml_parser_set_input_string(&p.parser, b)
return &p
}
func newParserFromReader(r io.Reader) *parser {
p := parser{}
if !yaml_parser_initialize(&p.parser) {
panic("failed to initialize YAML emitter")
}
yaml_parser_set_input_reader(&p.parser, r)
return &p
}
func (p *parser) init() {
if p.doneInit {
return
}
p.expect(yaml_STREAM_START_EVENT)
p.doneInit = true
}
func (p *parser) destroy() {
if p.event.typ != yaml_NO_EVENT {
yaml_event_delete(&p.event)
}
yaml_parser_delete(&p.parser)
}
// expect consumes an event from the event stream and
// checks that it's of the expected type.
func (p *parser) expect(e yaml_event_type_t) {
if p.event.typ == yaml_NO_EVENT {
if !yaml_parser_parse(&p.parser, &p.event) {
p.fail()
}
}
if p.event.typ == yaml_STREAM_END_EVENT {
failf("attempted to go past the end of stream; corrupted value?")
}
if p.event.typ != e {
p.parser.problem = fmt.Sprintf("expected %s event but got %s", e, p.event.typ)
p.fail()
}
yaml_event_delete(&p.event)
p.event.typ = yaml_NO_EVENT
}
// peek peeks at the next event in the event stream,
// puts the results into p.event and returns the event type.
func (p *parser) peek() yaml_event_type_t {
if p.event.typ != yaml_NO_EVENT {
return p.event.typ
}
if !yaml_parser_parse(&p.parser, &p.event) {
p.fail()
}
return p.event.typ
}
func (p *parser) fail() {
var where string
var line int
if p.parser.problem_mark.line != 0 {
line = p.parser.problem_mark.line
// Scanner errors don't advance the line before returning the error.
if p.parser.error == yaml_SCANNER_ERROR {
line++
}
} else if p.parser.context_mark.line != 0 {
line = p.parser.context_mark.line
}
if line != 0 {
where = "line " + strconv.Itoa(line) + ": "
}
var msg string
if len(p.parser.problem) > 0 {
msg = p.parser.problem
} else {
msg = "unknown problem parsing YAML content"
}
failf("%s%s", where, msg)
}
func (p *parser) anchor(n *node, anchor []byte) {
if anchor != nil {
p.doc.anchors[string(anchor)] = n
}
}
func (p *parser) parse() *node {
p.init()
switch p.peek() {
case yaml_SCALAR_EVENT:
return p.scalar()
case yaml_ALIAS_EVENT:
return p.alias()
case yaml_MAPPING_START_EVENT:
return p.mapping()
case yaml_SEQUENCE_START_EVENT:
return p.sequence()
case yaml_DOCUMENT_START_EVENT:
return p.document()
case yaml_STREAM_END_EVENT:
// Happens when attempting to decode an empty buffer.
return nil
default:
panic("attempted to parse unknown event: " + p.event.typ.String())
}
}
func (p *parser) node(kind int) *node {
return &node{
kind: kind,
line: p.event.start_mark.line,
column: p.event.start_mark.column,
}
}
func (p *parser) document() *node {
n := p.node(documentNode)
n.anchors = make(map[string]*node)
p.doc = n
p.expect(yaml_DOCUMENT_START_EVENT)
n.children = append(n.children, p.parse())
p.expect(yaml_DOCUMENT_END_EVENT)
return n
}
func (p *parser) alias() *node {
n := p.node(aliasNode)
n.value = string(p.event.anchor)
n.alias = p.doc.anchors[n.value]
if n.alias == nil {
failf("unknown anchor '%s' referenced", n.value)
}
p.expect(yaml_ALIAS_EVENT)
return n
}
func (p *parser) scalar() *node {
n := p.node(scalarNode)
n.value = string(p.event.value)
n.tag = string(p.event.tag)
n.implicit = p.event.implicit
p.anchor(n, p.event.anchor)
p.expect(yaml_SCALAR_EVENT)
return n
}
func (p *parser) sequence() *node {
n := p.node(sequenceNode)
p.anchor(n, p.event.anchor)
p.expect(yaml_SEQUENCE_START_EVENT)
for p.peek() != yaml_SEQUENCE_END_EVENT {
n.children = append(n.children, p.parse())
}
p.expect(yaml_SEQUENCE_END_EVENT)
return n
}
func (p *parser) mapping() *node {
n := p.node(mappingNode)
p.anchor(n, p.event.anchor)
p.expect(yaml_MAPPING_START_EVENT)
for p.peek() != yaml_MAPPING_END_EVENT {
n.children = append(n.children, p.parse(), p.parse())
}
p.expect(yaml_MAPPING_END_EVENT)
return n
}
// ----------------------------------------------------------------------------
// Decoder, unmarshals a node into a provided value.
type decoder struct {
doc *node
aliases map[*node]bool
mapType reflect.Type
terrors []string
strict bool
decodeCount int
aliasCount int
aliasDepth int
}
var (
mapItemType = reflect.TypeOf(MapItem{})
durationType = reflect.TypeOf(time.Duration(0))
defaultMapType = reflect.TypeOf(map[interface{}]interface{}{})
ifaceType = defaultMapType.Elem()
timeType = reflect.TypeOf(time.Time{})
ptrTimeType = reflect.TypeOf(&time.Time{})
)
func newDecoder(strict bool) *decoder {
d := &decoder{mapType: defaultMapType, strict: strict}
d.aliases = make(map[*node]bool)
return d
}
func (d *decoder) terror(n *node, tag string, out reflect.Value) {
if n.tag != "" {
tag = n.tag
}
value := n.value
if tag != yaml_SEQ_TAG && tag != yaml_MAP_TAG {
if len(value) > 10 {
value = " `" + value[:7] + "...`"
} else {
value = " `" + value + "`"
}
}
d.terrors = append(d.terrors, fmt.Sprintf("line %d: cannot unmarshal %s%s into %s", n.line+1, shortTag(tag), value, out.Type()))
}
func (d *decoder) callUnmarshaler(n *node, u Unmarshaler) (good bool) {
terrlen := len(d.terrors)
err := u.UnmarshalYAML(func(v interface{}) (err error) {
defer handleErr(&err)
d.unmarshal(n, reflect.ValueOf(v))
if len(d.terrors) > terrlen {
issues := d.terrors[terrlen:]
d.terrors = d.terrors[:terrlen]
return &TypeError{issues}
}
return nil
})
if e, ok := err.(*TypeError); ok {
d.terrors = append(d.terrors, e.Errors...)
return false
}
if err != nil {
fail(err)
}
return true
}
// d.prepare initializes and dereferences pointers and calls UnmarshalYAML
// if a value is found to implement it.
// It returns the initialized and dereferenced out value, whether
// unmarshalling was already done by UnmarshalYAML, and if so whether
// it unmarshalled without type errors.
//
// If n holds a null value, prepare returns before doing anything.
func (d *decoder) prepare(n *node, out reflect.Value) (newout reflect.Value, unmarshaled, good bool) {
if n.tag == yaml_NULL_TAG || n.kind == scalarNode && n.tag == "" && (n.value == "null" || n.value == "~" || n.value == "" && n.implicit) {
return out, false, false
}
again := true
for again {
again = false
if out.Kind() == reflect.Ptr {
if out.IsNil() {
out.Set(reflect.New(out.Type().Elem()))
}
out = out.Elem()
again = true
}
if out.CanAddr() {
if u, ok := out.Addr().Interface().(Unmarshaler); ok {
good = d.callUnmarshaler(n, u)
return out, true, good
}
}
}
return out, false, false
}
const (
// 400,000 decode operations is ~500kb of dense object declarations, or
// ~5kb of dense object declarations with 10000% alias expansion
alias_ratio_range_low = 400000
// 4,000,000 decode operations is ~5MB of dense object declarations, or
// ~4.5MB of dense object declarations with 10% alias expansion
alias_ratio_range_high = 4000000
// alias_ratio_range is the range over which we scale allowed alias ratios
alias_ratio_range = float64(alias_ratio_range_high - alias_ratio_range_low)
)
func allowedAliasRatio(decodeCount int) float64 {
switch {
case decodeCount <= alias_ratio_range_low:
// allow 99% to come from alias expansion for small-to-medium documents
return 0.99
case decodeCount >= alias_ratio_range_high:
// allow 10% to come from alias expansion for very large documents
return 0.10
default:
// scale smoothly from 99% down to 10% over the range.
// this maps to 396,000 - 400,000 allowed alias-driven decodes over the range.
// 400,000 decode operations is ~100MB of allocations in worst-case scenarios (single-item maps).
return 0.99 - 0.89*(float64(decodeCount-alias_ratio_range_low)/alias_ratio_range)
}
}
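// For example, at decodeCount = 2,200,000 (the midpoint of the range) the
// allowed ratio is 0.99 - 0.89*0.5 = 0.545, so just over half of all decode
// operations may still come from alias expansion at that document size.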
func (d *decoder) unmarshal(n *node, out reflect.Value) (good bool) {
d.decodeCount++
if d.aliasDepth > 0 {
d.aliasCount++
}
if d.aliasCount > 100 && d.decodeCount > 1000 && float64(d.aliasCount)/float64(d.decodeCount) > allowedAliasRatio(d.decodeCount) {
failf("document contains excessive aliasing")
}
switch n.kind {
case documentNode:
return d.document(n, out)
case aliasNode:
return d.alias(n, out)
}
out, unmarshaled, good := d.prepare(n, out)
if unmarshaled {
return good
}
switch n.kind {
case scalarNode:
good = d.scalar(n, out)
case mappingNode:
good = d.mapping(n, out)
case sequenceNode:
good = d.sequence(n, out)
default:
panic("internal error: unknown node kind: " + strconv.Itoa(n.kind))
}
return good
}
func (d *decoder) document(n *node, out reflect.Value) (good bool) {
if len(n.children) == 1 {
d.doc = n
d.unmarshal(n.children[0], out)
return true
}
return false
}
func (d *decoder) alias(n *node, out reflect.Value) (good bool) {
if d.aliases[n] {
// TODO this could actually be allowed in some circumstances.
failf("anchor '%s' value contains itself", n.value)
}
d.aliases[n] = true
d.aliasDepth++
good = d.unmarshal(n.alias, out)
d.aliasDepth--
delete(d.aliases, n)
return good
}
var zeroValue reflect.Value
func resetMap(out reflect.Value) {
for _, k := range out.MapKeys() {
out.SetMapIndex(k, zeroValue)
}
}
func (d *decoder) scalar(n *node, out reflect.Value) bool {
var tag string
var resolved interface{}
if n.tag == "" && !n.implicit {
tag = yaml_STR_TAG
resolved = n.value
} else {
tag, resolved = resolve(n.tag, n.value)
if tag == yaml_BINARY_TAG {
data, err := base64.StdEncoding.DecodeString(resolved.(string))
if err != nil {
failf("!!binary value contains invalid base64 data")
}
resolved = string(data)
}
}
if resolved == nil {
if out.Kind() == reflect.Map && !out.CanAddr() {
resetMap(out)
} else {
out.Set(reflect.Zero(out.Type()))
}
return true
}
if resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() {
// We've resolved to exactly the type we want, so use that.
out.Set(resolvedv)
return true
}
// Perhaps we can use the value as a TextUnmarshaler to
// set its value.
if out.CanAddr() {
u, ok := out.Addr().Interface().(encoding.TextUnmarshaler)
if ok {
var text []byte
if tag == yaml_BINARY_TAG {
text = []byte(resolved.(string))
} else {
// We let any value be unmarshaled into TextUnmarshaler.
// That might be more lax than we'd like, but the
// TextUnmarshaler itself should bowl out any dubious values.
text = []byte(n.value)
}
err := u.UnmarshalText(text)
if err != nil {
fail(err)
}
return true
}
}
switch out.Kind() {
case reflect.String:
if tag == yaml_BINARY_TAG {
out.SetString(resolved.(string))
return true
}
if resolved != nil {
out.SetString(n.value)
return true
}
case reflect.Interface:
if resolved == nil {
out.Set(reflect.Zero(out.Type()))
} else if tag == yaml_TIMESTAMP_TAG {
// It looks like a timestamp but for backward compatibility
// reasons we set it as a string, so that code that unmarshals
// timestamp-like values into interface{} will continue to
// see a string and not a time.Time.
// TODO(v3) Drop this.
out.Set(reflect.ValueOf(n.value))
} else {
out.Set(reflect.ValueOf(resolved))
}
return true
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
switch resolved := resolved.(type) {
case int:
if !out.OverflowInt(int64(resolved)) {
out.SetInt(int64(resolved))
return true
}
case int64:
if !out.OverflowInt(resolved) {
out.SetInt(resolved)
return true
}
case uint64:
if resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) {
out.SetInt(int64(resolved))
return true
}
case float64:
if resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) {
out.SetInt(int64(resolved))
return true
}
case string:
if out.Type() == durationType {
d, err := time.ParseDuration(resolved)
if err == nil {
out.SetInt(int64(d))
return true
}
}
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
switch resolved := resolved.(type) {
case int:
if resolved >= 0 && !out.OverflowUint(uint64(resolved)) {
out.SetUint(uint64(resolved))
return true
}
case int64:
if resolved >= 0 && !out.OverflowUint(uint64(resolved)) {
out.SetUint(uint64(resolved))
return true
}
case uint64:
if !out.OverflowUint(uint64(resolved)) {
out.SetUint(uint64(resolved))
return true
}
case float64:
if resolved <= math.MaxUint64 && !out.OverflowUint(uint64(resolved)) {
out.SetUint(uint64(resolved))
return true
}
}
case reflect.Bool:
switch resolved := resolved.(type) {
case bool:
out.SetBool(resolved)
return true
}
case reflect.Float32, reflect.Float64:
switch resolved := resolved.(type) {
case int:
out.SetFloat(float64(resolved))
return true
case int64:
out.SetFloat(float64(resolved))
return true
case uint64:
out.SetFloat(float64(resolved))
return true
case float64:
out.SetFloat(resolved)
return true
}
case reflect.Struct:
if resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() {
out.Set(resolvedv)
return true
}
case reflect.Ptr:
if out.Type().Elem() == reflect.TypeOf(resolved) {
// TODO Does this make sense? When is out a Ptr except when decoding a nil value?
elem := reflect.New(out.Type().Elem())
elem.Elem().Set(reflect.ValueOf(resolved))
out.Set(elem)
return true
}
}
d.terror(n, tag, out)
return false
}
func settableValueOf(i interface{}) reflect.Value {
v := reflect.ValueOf(i)
sv := reflect.New(v.Type()).Elem()
sv.Set(v)
return sv
}
func (d *decoder) sequence(n *node, out reflect.Value) (good bool) {
l := len(n.children)
var iface reflect.Value
switch out.Kind() {
case reflect.Slice:
out.Set(reflect.MakeSlice(out.Type(), l, l))
case reflect.Array:
if l != out.Len() {
failf("invalid array: want %d elements but got %d", out.Len(), l)
}
case reflect.Interface:
// No type hints. Will have to use a generic sequence.
iface = out
out = settableValueOf(make([]interface{}, l))
default:
d.terror(n, yaml_SEQ_TAG, out)
return false
}
et := out.Type().Elem()
j := 0
for i := 0; i < l; i++ {
e := reflect.New(et).Elem()
if ok := d.unmarshal(n.children[i], e); ok {
out.Index(j).Set(e)
j++
}
}
if out.Kind() != reflect.Array {
out.Set(out.Slice(0, j))
}
if iface.IsValid() {
iface.Set(out)
}
return true
}
func (d *decoder) mapping(n *node, out reflect.Value) (good bool) {
switch out.Kind() {
case reflect.Struct:
return d.mappingStruct(n, out)
case reflect.Slice:
return d.mappingSlice(n, out)
case reflect.Map:
// okay
case reflect.Interface:
if d.mapType.Kind() == reflect.Map {
iface := out
out = reflect.MakeMap(d.mapType)
iface.Set(out)
} else {
slicev := reflect.New(d.mapType).Elem()
if !d.mappingSlice(n, slicev) {
return false
}
out.Set(slicev)
return true
}
default:
d.terror(n, yaml_MAP_TAG, out)
return false
}
outt := out.Type()
kt := outt.Key()
et := outt.Elem()
mapType := d.mapType
if outt.Key() == ifaceType && outt.Elem() == ifaceType {
d.mapType = outt
}
if out.IsNil() {
out.Set(reflect.MakeMap(outt))
}
l := len(n.children)
for i := 0; i < l; i += 2 {
if isMerge(n.children[i]) {
d.merge(n.children[i+1], out)
continue
}
k := reflect.New(kt).Elem()
if d.unmarshal(n.children[i], k) {
kkind := k.Kind()
if kkind == reflect.Interface {
kkind = k.Elem().Kind()
}
if kkind == reflect.Map || kkind == reflect.Slice {
failf("invalid map key: %#v", k.Interface())
}
e := reflect.New(et).Elem()
if d.unmarshal(n.children[i+1], e) {
d.setMapIndex(n.children[i+1], out, k, e)
}
}
}
d.mapType = mapType
return true
}
func (d *decoder) setMapIndex(n *node, out, k, v reflect.Value) {
if d.strict && out.MapIndex(k) != zeroValue {
d.terrors = append(d.terrors, fmt.Sprintf("line %d: key %#v already set in map", n.line+1, k.Interface()))
return
}
out.SetMapIndex(k, v)
}
func (d *decoder) mappingSlice(n *node, out reflect.Value) (good bool) {
outt := out.Type()
if outt.Elem() != mapItemType {
d.terror(n, yaml_MAP_TAG, out)
return false
}
mapType := d.mapType
d.mapType = outt
var slice []MapItem
var l = len(n.children)
for i := 0; i < l; i += 2 {
if isMerge(n.children[i]) {
d.merge(n.children[i+1], out)
continue
}
item := MapItem{}
k := reflect.ValueOf(&item.Key).Elem()
if d.unmarshal(n.children[i], k) {
v := reflect.ValueOf(&item.Value).Elem()
if d.unmarshal(n.children[i+1], v) {
slice = append(slice, item)
}
}
}
out.Set(reflect.ValueOf(slice))
d.mapType = mapType
return true
}
func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) {
sinfo, err := getStructInfo(out.Type())
if err != nil {
panic(err)
}
name := settableValueOf("")
l := len(n.children)
var inlineMap reflect.Value
var elemType reflect.Type
if sinfo.InlineMap != -1 {
inlineMap = out.Field(sinfo.InlineMap)
inlineMap.Set(reflect.New(inlineMap.Type()).Elem())
elemType = inlineMap.Type().Elem()
}
var doneFields []bool
if d.strict {
doneFields = make([]bool, len(sinfo.FieldsList))
}
for i := 0; i < l; i += 2 {
ni := n.children[i]
if isMerge(ni) {
d.merge(n.children[i+1], out)
continue
}
if !d.unmarshal(ni, name) {
continue
}
if info, ok := sinfo.FieldsMap[name.String()]; ok {
if d.strict {
if doneFields[info.Id] {
d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s already set in type %s", ni.line+1, name.String(), out.Type()))
continue
}
doneFields[info.Id] = true
}
var field reflect.Value
if info.Inline == nil {
field = out.Field(info.Num)
} else {
field = out.FieldByIndex(info.Inline)
}
d.unmarshal(n.children[i+1], field)
} else if sinfo.InlineMap != -1 {
if inlineMap.IsNil() {
inlineMap.Set(reflect.MakeMap(inlineMap.Type()))
}
value := reflect.New(elemType).Elem()
d.unmarshal(n.children[i+1], value)
d.setMapIndex(n.children[i+1], inlineMap, name, value)
} else if d.strict {
d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s not found in type %s", ni.line+1, name.String(), out.Type()))
}
}
return true
}
func failWantMap() {
failf("map merge requires map or sequence of maps as the value")
}
func (d *decoder) merge(n *node, out reflect.Value) {
switch n.kind {
case mappingNode:
d.unmarshal(n, out)
case aliasNode:
if n.alias != nil && n.alias.kind != mappingNode {
failWantMap()
}
d.unmarshal(n, out)
case sequenceNode:
// Step backwards as earlier nodes take precedence.
for i := len(n.children) - 1; i >= 0; i-- {
ni := n.children[i]
if ni.kind == aliasNode {
if ni.alias != nil && ni.alias.kind != mappingNode {
failWantMap()
}
} else if ni.kind != mappingNode {
failWantMap()
}
d.unmarshal(ni, out)
}
default:
failWantMap()
}
}
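// For example, decoding "a: &anchor {x: 1}\nb: {<<: *anchor, y: 2}" merges
// the anchored map into b, so b ends up with both x and y.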
func isMerge(n *node) bool {
return n.kind == scalarNode && n.value == "<<" && (n.implicit || n.tag == yaml_MERGE_TAG)
}

1685
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/emitterc.go generated vendored Normal file

File diff suppressed because it is too large

390
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/encode.go generated vendored Normal file

@ -0,0 +1,390 @@
package yaml
import (
"encoding"
"fmt"
"io"
"reflect"
"regexp"
"sort"
"strconv"
"strings"
"time"
"unicode/utf8"
)
// jsonNumber is the interface of the encoding/json.Number datatype.
// Repeating the interface here avoids a dependency on encoding/json, and also
// supports other libraries like jsoniter, which use a similar datatype with
// the same interface. Detecting this interface is useful when dealing with
// structures containing json.Number, which is a string under the hood. The
// encoder should prefer the use of Int64(), Float64() and string(), in that
// order, when encoding this type.
type jsonNumber interface {
Float64() (float64, error)
Int64() (int64, error)
String() string
}
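// encoding/json.Number satisfies this interface, as do the equivalent types
// of libraries such as jsoniter, so the type switch in marshal below covers
// all of them without importing encoding/json.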
type encoder struct {
emitter yaml_emitter_t
event yaml_event_t
out []byte
flow bool
// doneInit holds whether the initial stream_start_event has been
// emitted.
doneInit bool
}
func newEncoder() *encoder {
e := &encoder{}
yaml_emitter_initialize(&e.emitter)
yaml_emitter_set_output_string(&e.emitter, &e.out)
yaml_emitter_set_unicode(&e.emitter, true)
return e
}
func newEncoderWithWriter(w io.Writer) *encoder {
e := &encoder{}
yaml_emitter_initialize(&e.emitter)
yaml_emitter_set_output_writer(&e.emitter, w)
yaml_emitter_set_unicode(&e.emitter, true)
return e
}
func (e *encoder) init() {
if e.doneInit {
return
}
yaml_stream_start_event_initialize(&e.event, yaml_UTF8_ENCODING)
e.emit()
e.doneInit = true
}
func (e *encoder) finish() {
e.emitter.open_ended = false
yaml_stream_end_event_initialize(&e.event)
e.emit()
}
func (e *encoder) destroy() {
yaml_emitter_delete(&e.emitter)
}
func (e *encoder) emit() {
// This will internally delete the e.event value.
e.must(yaml_emitter_emit(&e.emitter, &e.event))
}
func (e *encoder) must(ok bool) {
if !ok {
msg := e.emitter.problem
if msg == "" {
msg = "unknown problem generating YAML content"
}
failf("%s", msg)
}
}
func (e *encoder) marshalDoc(tag string, in reflect.Value) {
e.init()
yaml_document_start_event_initialize(&e.event, nil, nil, true)
e.emit()
e.marshal(tag, in)
yaml_document_end_event_initialize(&e.event, true)
e.emit()
}
func (e *encoder) marshal(tag string, in reflect.Value) {
if !in.IsValid() || in.Kind() == reflect.Ptr && in.IsNil() {
e.nilv()
return
}
iface := in.Interface()
switch m := iface.(type) {
case jsonNumber:
integer, err := m.Int64()
if err == nil {
// In this case the json.Number is a valid int64
in = reflect.ValueOf(integer)
break
}
float, err := m.Float64()
if err == nil {
// In this case the json.Number is a valid float64
in = reflect.ValueOf(float)
break
}
// fallback case - no number could be obtained
in = reflect.ValueOf(m.String())
case time.Time, *time.Time:
// Although time.Time implements TextMarshaler,
// we don't want to treat it as a string for YAML
// purposes because YAML has special support for
// timestamps.
case Marshaler:
v, err := m.MarshalYAML()
if err != nil {
fail(err)
}
if v == nil {
e.nilv()
return
}
in = reflect.ValueOf(v)
case encoding.TextMarshaler:
text, err := m.MarshalText()
if err != nil {
fail(err)
}
in = reflect.ValueOf(string(text))
case nil:
e.nilv()
return
}
switch in.Kind() {
case reflect.Interface:
e.marshal(tag, in.Elem())
case reflect.Map:
e.mapv(tag, in)
case reflect.Ptr:
if in.Type() == ptrTimeType {
e.timev(tag, in.Elem())
} else {
e.marshal(tag, in.Elem())
}
case reflect.Struct:
if in.Type() == timeType {
e.timev(tag, in)
} else {
e.structv(tag, in)
}
case reflect.Slice, reflect.Array:
if in.Type().Elem() == mapItemType {
e.itemsv(tag, in)
} else {
e.slicev(tag, in)
}
case reflect.String:
e.stringv(tag, in)
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
if in.Type() == durationType {
e.stringv(tag, reflect.ValueOf(iface.(time.Duration).String()))
} else {
e.intv(tag, in)
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
e.uintv(tag, in)
case reflect.Float32, reflect.Float64:
e.floatv(tag, in)
case reflect.Bool:
e.boolv(tag, in)
default:
panic("cannot marshal type: " + in.Type().String())
}
}
func (e *encoder) mapv(tag string, in reflect.Value) {
e.mappingv(tag, func() {
keys := keyList(in.MapKeys())
sort.Sort(keys)
for _, k := range keys {
e.marshal("", k)
e.marshal("", in.MapIndex(k))
}
})
}
func (e *encoder) itemsv(tag string, in reflect.Value) {
e.mappingv(tag, func() {
slice := in.Convert(reflect.TypeOf([]MapItem{})).Interface().([]MapItem)
for _, item := range slice {
e.marshal("", reflect.ValueOf(item.Key))
e.marshal("", reflect.ValueOf(item.Value))
}
})
}
func (e *encoder) structv(tag string, in reflect.Value) {
sinfo, err := getStructInfo(in.Type())
if err != nil {
panic(err)
}
e.mappingv(tag, func() {
for _, info := range sinfo.FieldsList {
var value reflect.Value
if info.Inline == nil {
value = in.Field(info.Num)
} else {
value = in.FieldByIndex(info.Inline)
}
if info.OmitEmpty && isZero(value) {
continue
}
e.marshal("", reflect.ValueOf(info.Key))
e.flow = info.Flow
e.marshal("", value)
}
if sinfo.InlineMap >= 0 {
m := in.Field(sinfo.InlineMap)
if m.Len() > 0 {
e.flow = false
keys := keyList(m.MapKeys())
sort.Sort(keys)
for _, k := range keys {
if _, found := sinfo.FieldsMap[k.String()]; found {
panic(fmt.Sprintf("Can't have key %q in inlined map; conflicts with struct field", k.String()))
}
e.marshal("", k)
e.flow = false
e.marshal("", m.MapIndex(k))
}
}
}
})
}
func (e *encoder) mappingv(tag string, f func()) {
implicit := tag == ""
style := yaml_BLOCK_MAPPING_STYLE
if e.flow {
e.flow = false
style = yaml_FLOW_MAPPING_STYLE
}
yaml_mapping_start_event_initialize(&e.event, nil, []byte(tag), implicit, style)
e.emit()
f()
yaml_mapping_end_event_initialize(&e.event)
e.emit()
}
func (e *encoder) slicev(tag string, in reflect.Value) {
implicit := tag == ""
style := yaml_BLOCK_SEQUENCE_STYLE
if e.flow {
e.flow = false
style = yaml_FLOW_SEQUENCE_STYLE
}
e.must(yaml_sequence_start_event_initialize(&e.event, nil, []byte(tag), implicit, style))
e.emit()
n := in.Len()
for i := 0; i < n; i++ {
e.marshal("", in.Index(i))
}
e.must(yaml_sequence_end_event_initialize(&e.event))
e.emit()
}
// isBase60 returns whether s is in base 60 notation as defined in YAML 1.1.
//
// The base 60 float notation in YAML 1.1 is a terrible idea and is unsupported
// in YAML 1.2 and by this package, but these should be marshalled quoted for
// the time being for compatibility with other parsers.
func isBase60Float(s string) (result bool) {
// Fast path.
if s == "" {
return false
}
c := s[0]
if !(c == '+' || c == '-' || c >= '0' && c <= '9') || strings.IndexByte(s, ':') < 0 {
return false
}
// Do the full match.
return base60float.MatchString(s)
}
// From http://yaml.org/type/float.html, except the regular expression there
// is bogus. In practice parsers do not enforce the "\.[0-9_]*" suffix.
var base60float = regexp.MustCompile(`^[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+(?:\.[0-9_]*)?$`)
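// For example, "190:20:30" and "-1:30.5" both match and are therefore quoted
// when marshalled, even though this package never parses them as numbers.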
func (e *encoder) stringv(tag string, in reflect.Value) {
var style yaml_scalar_style_t
s := in.String()
canUsePlain := true
switch {
case !utf8.ValidString(s):
if tag == yaml_BINARY_TAG {
failf("explicitly tagged !!binary data must be base64-encoded")
}
if tag != "" {
failf("cannot marshal invalid UTF-8 data as %s", shortTag(tag))
}
// It can't be encoded directly as YAML so use a binary tag
// and encode it as base64.
tag = yaml_BINARY_TAG
s = encodeBase64(s)
case tag == "":
// Check to see if it would resolve to a specific
// tag when encoded unquoted. If it doesn't,
// there's no need to quote it.
rtag, _ := resolve("", s)
canUsePlain = rtag == yaml_STR_TAG && !isBase60Float(s)
}
// Note: it's possible for user code to emit invalid YAML
// if they explicitly specify a tag and a string containing
// text that's incompatible with that tag.
switch {
case strings.Contains(s, "\n"):
style = yaml_LITERAL_SCALAR_STYLE
case canUsePlain:
style = yaml_PLAIN_SCALAR_STYLE
default:
style = yaml_DOUBLE_QUOTED_SCALAR_STYLE
}
e.emitScalar(s, "", tag, style)
}
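// For instance, "hello" is emitted as a plain scalar, "yes" is double-quoted
// because it would otherwise resolve to a boolean, and any string containing
// a newline uses the literal (|) style.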
func (e *encoder) boolv(tag string, in reflect.Value) {
var s string
if in.Bool() {
s = "true"
} else {
s = "false"
}
e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
}
func (e *encoder) intv(tag string, in reflect.Value) {
s := strconv.FormatInt(in.Int(), 10)
e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
}
func (e *encoder) uintv(tag string, in reflect.Value) {
s := strconv.FormatUint(in.Uint(), 10)
e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
}
func (e *encoder) timev(tag string, in reflect.Value) {
t := in.Interface().(time.Time)
s := t.Format(time.RFC3339Nano)
e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
}
func (e *encoder) floatv(tag string, in reflect.Value) {
// Issue #352: When formatting, use the precision of the underlying value
precision := 64
if in.Kind() == reflect.Float32 {
precision = 32
}
s := strconv.FormatFloat(in.Float(), 'g', -1, precision)
switch s {
case "+Inf":
s = ".inf"
case "-Inf":
s = "-.inf"
case "NaN":
s = ".nan"
}
e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
}
func (e *encoder) nilv() {
e.emitScalar("null", "", "", yaml_PLAIN_SCALAR_STYLE)
}
func (e *encoder) emitScalar(value, anchor, tag string, style yaml_scalar_style_t) {
implicit := tag == ""
e.must(yaml_scalar_event_initialize(&e.event, []byte(anchor), []byte(tag), []byte(value), implicit, implicit, style))
e.emit()
}

1095
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/parserc.go generated vendored Normal file

File diff suppressed because it is too large

412
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/readerc.go generated vendored Normal file

@ -0,0 +1,412 @@
package yaml
import (
"io"
)
// Set the reader error and return 0.
func yaml_parser_set_reader_error(parser *yaml_parser_t, problem string, offset int, value int) bool {
parser.error = yaml_READER_ERROR
parser.problem = problem
parser.problem_offset = offset
parser.problem_value = value
return false
}
// Byte order marks.
const (
bom_UTF8 = "\xef\xbb\xbf"
bom_UTF16LE = "\xff\xfe"
bom_UTF16BE = "\xfe\xff"
)
// Determine the input stream encoding by checking the BOM symbol. If no BOM is
// found, the UTF-8 encoding is assumed. Return 1 on success, 0 on failure.
func yaml_parser_determine_encoding(parser *yaml_parser_t) bool {
// Ensure that we had enough bytes in the raw buffer.
for !parser.eof && len(parser.raw_buffer)-parser.raw_buffer_pos < 3 {
if !yaml_parser_update_raw_buffer(parser) {
return false
}
}
// Determine the encoding.
buf := parser.raw_buffer
pos := parser.raw_buffer_pos
avail := len(buf) - pos
if avail >= 2 && buf[pos] == bom_UTF16LE[0] && buf[pos+1] == bom_UTF16LE[1] {
parser.encoding = yaml_UTF16LE_ENCODING
parser.raw_buffer_pos += 2
parser.offset += 2
} else if avail >= 2 && buf[pos] == bom_UTF16BE[0] && buf[pos+1] == bom_UTF16BE[1] {
parser.encoding = yaml_UTF16BE_ENCODING
parser.raw_buffer_pos += 2
parser.offset += 2
} else if avail >= 3 && buf[pos] == bom_UTF8[0] && buf[pos+1] == bom_UTF8[1] && buf[pos+2] == bom_UTF8[2] {
parser.encoding = yaml_UTF8_ENCODING
parser.raw_buffer_pos += 3
parser.offset += 3
} else {
parser.encoding = yaml_UTF8_ENCODING
}
return true
}
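// For example, input that begins with the bytes 0xFF 0xFE is decoded as
// UTF-16LE with the two BOM bytes skipped, while input without a BOM is
// assumed to be UTF-8.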
// Update the raw buffer.
func yaml_parser_update_raw_buffer(parser *yaml_parser_t) bool {
size_read := 0
// Return if the raw buffer is full.
if parser.raw_buffer_pos == 0 && len(parser.raw_buffer) == cap(parser.raw_buffer) {
return true
}
// Return on EOF.
if parser.eof {
return true
}
// Move the remaining bytes in the raw buffer to the beginning.
if parser.raw_buffer_pos > 0 && parser.raw_buffer_pos < len(parser.raw_buffer) {
copy(parser.raw_buffer, parser.raw_buffer[parser.raw_buffer_pos:])
}
parser.raw_buffer = parser.raw_buffer[:len(parser.raw_buffer)-parser.raw_buffer_pos]
parser.raw_buffer_pos = 0
// Call the read handler to fill the buffer.
size_read, err := parser.read_handler(parser, parser.raw_buffer[len(parser.raw_buffer):cap(parser.raw_buffer)])
parser.raw_buffer = parser.raw_buffer[:len(parser.raw_buffer)+size_read]
if err == io.EOF {
parser.eof = true
} else if err != nil {
return yaml_parser_set_reader_error(parser, "input error: "+err.Error(), parser.offset, -1)
}
return true
}
// Ensure that the buffer contains at least `length` characters.
// Return true on success, false on failure.
//
// The length is supposed to be significantly less than the buffer size.
func yaml_parser_update_buffer(parser *yaml_parser_t, length int) bool {
if parser.read_handler == nil {
panic("read handler must be set")
}
// [Go] This function was changed to guarantee the requested length size at EOF.
// The fact we need to do this is pretty awful, but the description above
// implies that to be the case, and there are tests that rely on it.
// If the EOF flag is set and the raw buffer is empty, do nothing.
if parser.eof && parser.raw_buffer_pos == len(parser.raw_buffer) {
// [Go] ACTUALLY! Read the documentation of this function above.
// This is just broken. To return true, we need to have the
// given length in the buffer. Not doing that means every single
// check that calls this function to make sure the buffer has a
// given length ends up either panicking (in Go) or accessing
// invalid memory (in C).
//return true
}
// Return if the buffer contains enough characters.
if parser.unread >= length {
return true
}
// Determine the input encoding if it is not known yet.
if parser.encoding == yaml_ANY_ENCODING {
if !yaml_parser_determine_encoding(parser) {
return false
}
}
// Move the unread characters to the beginning of the buffer.
buffer_len := len(parser.buffer)
if parser.buffer_pos > 0 && parser.buffer_pos < buffer_len {
copy(parser.buffer, parser.buffer[parser.buffer_pos:])
buffer_len -= parser.buffer_pos
parser.buffer_pos = 0
} else if parser.buffer_pos == buffer_len {
buffer_len = 0
parser.buffer_pos = 0
}
// Open the whole buffer for writing, and cut it before returning.
parser.buffer = parser.buffer[:cap(parser.buffer)]
// Fill the buffer until it has enough characters.
first := true
for parser.unread < length {
// Fill the raw buffer if necessary.
if !first || parser.raw_buffer_pos == len(parser.raw_buffer) {
if !yaml_parser_update_raw_buffer(parser) {
parser.buffer = parser.buffer[:buffer_len]
return false
}
}
first = false
// Decode the raw buffer.
inner:
for parser.raw_buffer_pos != len(parser.raw_buffer) {
var value rune
var width int
raw_unread := len(parser.raw_buffer) - parser.raw_buffer_pos
// Decode the next character.
switch parser.encoding {
case yaml_UTF8_ENCODING:
// Decode a UTF-8 character. Check RFC 3629
// (http://www.ietf.org/rfc/rfc3629.txt) for more details.
//
// The following table (taken from the RFC) is used for
// decoding.
//
// Char. number range | UTF-8 octet sequence
// (hexadecimal) | (binary)
// --------------------+------------------------------------
// 0000 0000-0000 007F | 0xxxxxxx
// 0000 0080-0000 07FF | 110xxxxx 10xxxxxx
// 0000 0800-0000 FFFF | 1110xxxx 10xxxxxx 10xxxxxx
// 0001 0000-0010 FFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
//
// Additionally, the characters in the range 0xD800-0xDFFF
// are prohibited as they are reserved for use with UTF-16
// surrogate pairs.
// Determine the length of the UTF-8 sequence.
octet := parser.raw_buffer[parser.raw_buffer_pos]
switch {
case octet&0x80 == 0x00:
width = 1
case octet&0xE0 == 0xC0:
width = 2
case octet&0xF0 == 0xE0:
width = 3
case octet&0xF8 == 0xF0:
width = 4
default:
// The leading octet is invalid.
return yaml_parser_set_reader_error(parser,
"invalid leading UTF-8 octet",
parser.offset, int(octet))
}
// Check if the raw buffer contains an incomplete character.
if width > raw_unread {
if parser.eof {
return yaml_parser_set_reader_error(parser,
"incomplete UTF-8 octet sequence",
parser.offset, -1)
}
break inner
}
// Decode the leading octet.
switch {
case octet&0x80 == 0x00:
value = rune(octet & 0x7F)
case octet&0xE0 == 0xC0:
value = rune(octet & 0x1F)
case octet&0xF0 == 0xE0:
value = rune(octet & 0x0F)
case octet&0xF8 == 0xF0:
value = rune(octet & 0x07)
default:
value = 0
}
// Check and decode the trailing octets.
for k := 1; k < width; k++ {
octet = parser.raw_buffer[parser.raw_buffer_pos+k]
// Check if the octet is valid.
if (octet & 0xC0) != 0x80 {
return yaml_parser_set_reader_error(parser,
"invalid trailing UTF-8 octet",
parser.offset+k, int(octet))
}
// Decode the octet.
value = (value << 6) + rune(octet&0x3F)
}
// Check the length of the sequence against the value.
switch {
case width == 1:
case width == 2 && value >= 0x80:
case width == 3 && value >= 0x800:
case width == 4 && value >= 0x10000:
default:
return yaml_parser_set_reader_error(parser,
"invalid length of a UTF-8 sequence",
parser.offset, -1)
}
// Check the range of the value.
if value >= 0xD800 && value <= 0xDFFF || value > 0x10FFFF {
return yaml_parser_set_reader_error(parser,
"invalid Unicode character",
parser.offset, int(value))
}
case yaml_UTF16LE_ENCODING, yaml_UTF16BE_ENCODING:
var low, high int
if parser.encoding == yaml_UTF16LE_ENCODING {
low, high = 0, 1
} else {
low, high = 1, 0
}
// The UTF-16 encoding is not as simple as one might
// naively think. Check RFC 2781
// (http://www.ietf.org/rfc/rfc2781.txt).
//
// Normally, two subsequent bytes describe a Unicode
// character. However a special technique (called a
// surrogate pair) is used for specifying character
// values larger than 0xFFFF.
//
// A surrogate pair consists of two pseudo-characters:
// high surrogate area (0xD800-0xDBFF)
// low surrogate area (0xDC00-0xDFFF)
//
// The following formulas are used for decoding
// and encoding characters using surrogate pairs:
//
// U = U' + 0x10000 (0x01 00 00 <= U <= 0x10 FF FF)
// U' = yyyyyyyyyyxxxxxxxxxx (0 <= U' <= 0x0F FF FF)
// W1 = 110110yyyyyyyyyy
// W2 = 110111xxxxxxxxxx
//
// where U is the character value, W1 is the high surrogate
// area, W2 is the low surrogate area.
// Check for incomplete UTF-16 character.
if raw_unread < 2 {
if parser.eof {
return yaml_parser_set_reader_error(parser,
"incomplete UTF-16 character",
parser.offset, -1)
}
break inner
}
// Get the character.
value = rune(parser.raw_buffer[parser.raw_buffer_pos+low]) +
(rune(parser.raw_buffer[parser.raw_buffer_pos+high]) << 8)
// Check for unexpected low surrogate area.
if value&0xFC00 == 0xDC00 {
return yaml_parser_set_reader_error(parser,
"unexpected low surrogate area",
parser.offset, int(value))
}
// Check for a high surrogate area.
if value&0xFC00 == 0xD800 {
width = 4
// Check for incomplete surrogate pair.
if raw_unread < 4 {
if parser.eof {
return yaml_parser_set_reader_error(parser,
"incomplete UTF-16 surrogate pair",
parser.offset, -1)
}
break inner
}
// Get the next character.
value2 := rune(parser.raw_buffer[parser.raw_buffer_pos+low+2]) +
(rune(parser.raw_buffer[parser.raw_buffer_pos+high+2]) << 8)
// Check for a low surrogate area.
if value2&0xFC00 != 0xDC00 {
return yaml_parser_set_reader_error(parser,
"expected low surrogate area",
parser.offset+2, int(value2))
}
// Generate the value of the surrogate pair.
value = 0x10000 + ((value & 0x3FF) << 10) + (value2 & 0x3FF)
} else {
width = 2
}
default:
panic("impossible")
}
// Check if the character is in the allowed range:
// #x9 | #xA | #xD | [#x20-#x7E] (8 bit)
// | #x85 | [#xA0-#xD7FF] | [#xE000-#xFFFD] (16 bit)
// | [#x10000-#x10FFFF] (32 bit)
switch {
case value == 0x09:
case value == 0x0A:
case value == 0x0D:
case value >= 0x20 && value <= 0x7E:
case value == 0x85:
case value >= 0xA0 && value <= 0xD7FF:
case value >= 0xE000 && value <= 0xFFFD:
case value >= 0x10000 && value <= 0x10FFFF:
default:
return yaml_parser_set_reader_error(parser,
"control characters are not allowed",
parser.offset, int(value))
}
// Move the raw pointers.
parser.raw_buffer_pos += width
parser.offset += width
// Finally put the character into the buffer.
if value <= 0x7F {
// 0000 0000-0000 007F . 0xxxxxxx
parser.buffer[buffer_len+0] = byte(value)
buffer_len += 1
} else if value <= 0x7FF {
// 0000 0080-0000 07FF . 110xxxxx 10xxxxxx
parser.buffer[buffer_len+0] = byte(0xC0 + (value >> 6))
parser.buffer[buffer_len+1] = byte(0x80 + (value & 0x3F))
buffer_len += 2
} else if value <= 0xFFFF {
// 0000 0800-0000 FFFF . 1110xxxx 10xxxxxx 10xxxxxx
parser.buffer[buffer_len+0] = byte(0xE0 + (value >> 12))
parser.buffer[buffer_len+1] = byte(0x80 + ((value >> 6) & 0x3F))
parser.buffer[buffer_len+2] = byte(0x80 + (value & 0x3F))
buffer_len += 3
} else {
// 0001 0000-0010 FFFF . 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
parser.buffer[buffer_len+0] = byte(0xF0 + (value >> 18))
parser.buffer[buffer_len+1] = byte(0x80 + ((value >> 12) & 0x3F))
parser.buffer[buffer_len+2] = byte(0x80 + ((value >> 6) & 0x3F))
parser.buffer[buffer_len+3] = byte(0x80 + (value & 0x3F))
buffer_len += 4
}
parser.unread++
}
// On EOF, put NUL into the buffer and return.
if parser.eof {
parser.buffer[buffer_len] = 0
buffer_len++
parser.unread++
break
}
}
// [Go] Read the documentation of this function above. To return true,
// we need to have the given length in the buffer. Not doing that means
// every single check that calls this function to make sure the buffer
// has a given length ends up either panicking (in Go) or accessing
// invalid memory (in C).
// This happens here due to the EOF above breaking early.
for buffer_len < length {
parser.buffer[buffer_len] = 0
buffer_len++
}
parser.buffer = parser.buffer[:buffer_len]
return true
}

258
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/resolve.go generated vendored Normal file

@ -0,0 +1,258 @@
package yaml
import (
"encoding/base64"
"math"
"regexp"
"strconv"
"strings"
"time"
)
type resolveMapItem struct {
value interface{}
tag string
}
var resolveTable = make([]byte, 256)
var resolveMap = make(map[string]resolveMapItem)
func init() {
t := resolveTable
t[int('+')] = 'S' // Sign
t[int('-')] = 'S'
for _, c := range "0123456789" {
t[int(c)] = 'D' // Digit
}
for _, c := range "yYnNtTfFoO~" {
t[int(c)] = 'M' // In map
}
t[int('.')] = '.' // Float (potentially in map)
var resolveMapList = []struct {
v interface{}
tag string
l []string
}{
{true, yaml_BOOL_TAG, []string{"y", "Y", "yes", "Yes", "YES"}},
{true, yaml_BOOL_TAG, []string{"true", "True", "TRUE"}},
{true, yaml_BOOL_TAG, []string{"on", "On", "ON"}},
{false, yaml_BOOL_TAG, []string{"n", "N", "no", "No", "NO"}},
{false, yaml_BOOL_TAG, []string{"false", "False", "FALSE"}},
{false, yaml_BOOL_TAG, []string{"off", "Off", "OFF"}},
{nil, yaml_NULL_TAG, []string{"", "~", "null", "Null", "NULL"}},
{math.NaN(), yaml_FLOAT_TAG, []string{".nan", ".NaN", ".NAN"}},
{math.Inf(+1), yaml_FLOAT_TAG, []string{".inf", ".Inf", ".INF"}},
{math.Inf(+1), yaml_FLOAT_TAG, []string{"+.inf", "+.Inf", "+.INF"}},
{math.Inf(-1), yaml_FLOAT_TAG, []string{"-.inf", "-.Inf", "-.INF"}},
{"<<", yaml_MERGE_TAG, []string{"<<"}},
}
m := resolveMap
for _, item := range resolveMapList {
for _, s := range item.l {
m[s] = resolveMapItem{item.v, item.tag}
}
}
}
const longTagPrefix = "tag:yaml.org,2002:"
func shortTag(tag string) string {
// TODO This can easily be made faster and produce less garbage.
if strings.HasPrefix(tag, longTagPrefix) {
return "!!" + tag[len(longTagPrefix):]
}
return tag
}
func longTag(tag string) string {
if strings.HasPrefix(tag, "!!") {
return longTagPrefix + tag[2:]
}
return tag
}
func resolvableTag(tag string) bool {
switch tag {
case "", yaml_STR_TAG, yaml_BOOL_TAG, yaml_INT_TAG, yaml_FLOAT_TAG, yaml_NULL_TAG, yaml_TIMESTAMP_TAG:
return true
}
return false
}
var yamlStyleFloat = regexp.MustCompile(`^[-+]?(\.[0-9]+|[0-9]+(\.[0-9]*)?)([eE][-+]?[0-9]+)?$`)
func resolve(tag string, in string) (rtag string, out interface{}) {
if !resolvableTag(tag) {
return tag, in
}
defer func() {
switch tag {
case "", rtag, yaml_STR_TAG, yaml_BINARY_TAG:
return
case yaml_FLOAT_TAG:
if rtag == yaml_INT_TAG {
switch v := out.(type) {
case int64:
rtag = yaml_FLOAT_TAG
out = float64(v)
return
case int:
rtag = yaml_FLOAT_TAG
out = float64(v)
return
}
}
}
failf("cannot decode %s `%s` as a %s", shortTag(rtag), in, shortTag(tag))
}()
// Any data is accepted as a !!str or !!binary.
// Otherwise, the prefix is enough of a hint about what it might be.
hint := byte('N')
if in != "" {
hint = resolveTable[in[0]]
}
if hint != 0 && tag != yaml_STR_TAG && tag != yaml_BINARY_TAG {
// Handle things we can lookup in a map.
if item, ok := resolveMap[in]; ok {
return item.tag, item.value
}
// Base 60 floats are a bad idea, were dropped in YAML 1.2, and
// are purposefully unsupported here. They're still quoted on
// the way out for compatibility with other parsers, though.
switch hint {
case 'M':
// We've already checked the map above.
case '.':
// Not in the map, so maybe a normal float.
floatv, err := strconv.ParseFloat(in, 64)
if err == nil {
return yaml_FLOAT_TAG, floatv
}
case 'D', 'S':
// Int, float, or timestamp.
// Only try values as a timestamp if the value is unquoted or there's an explicit
// !!timestamp tag.
if tag == "" || tag == yaml_TIMESTAMP_TAG {
t, ok := parseTimestamp(in)
if ok {
return yaml_TIMESTAMP_TAG, t
}
}
plain := strings.Replace(in, "_", "", -1)
intv, err := strconv.ParseInt(plain, 0, 64)
if err == nil {
if intv == int64(int(intv)) {
return yaml_INT_TAG, int(intv)
} else {
return yaml_INT_TAG, intv
}
}
uintv, err := strconv.ParseUint(plain, 0, 64)
if err == nil {
return yaml_INT_TAG, uintv
}
if yamlStyleFloat.MatchString(plain) {
floatv, err := strconv.ParseFloat(plain, 64)
if err == nil {
return yaml_FLOAT_TAG, floatv
}
}
if strings.HasPrefix(plain, "0b") {
intv, err := strconv.ParseInt(plain[2:], 2, 64)
if err == nil {
if intv == int64(int(intv)) {
return yaml_INT_TAG, int(intv)
} else {
return yaml_INT_TAG, intv
}
}
uintv, err := strconv.ParseUint(plain[2:], 2, 64)
if err == nil {
return yaml_INT_TAG, uintv
}
} else if strings.HasPrefix(plain, "-0b") {
intv, err := strconv.ParseInt("-" + plain[3:], 2, 64)
if err == nil {
if true || intv == int64(int(intv)) {
return yaml_INT_TAG, int(intv)
} else {
return yaml_INT_TAG, intv
}
}
}
default:
panic("resolveTable item not yet handled: " + string(rune(hint)) + " (with " + in + ")")
}
}
return yaml_STR_TAG, in
}
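// For example, resolve("", "0x1F") yields (!!int, 31), resolve("", "yes")
// yields (!!bool, true), and resolve("", "hello") falls through to !!str.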
// encodeBase64 encodes s as base64 that is broken up into multiple lines
// as appropriate for the resulting length.
func encodeBase64(s string) string {
const lineLen = 70
encLen := base64.StdEncoding.EncodedLen(len(s))
lines := encLen/lineLen + 1
buf := make([]byte, encLen*2+lines)
in := buf[0:encLen]
out := buf[encLen:]
base64.StdEncoding.Encode(in, []byte(s))
k := 0
for i := 0; i < len(in); i += lineLen {
j := i + lineLen
if j > len(in) {
j = len(in)
}
k += copy(out[k:], in[i:j])
if lines > 1 {
out[k] = '\n'
k++
}
}
return string(out[:k])
}
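// For example, a 100-byte input encodes to 136 base64 characters and is
// returned as a 70-character line and a 66-character line, each followed
// by a newline.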
// This is a subset of the formats allowed by the regular expression
// defined at http://yaml.org/type/timestamp.html.
var allowedTimestampFormats = []string{
"2006-1-2T15:4:5.999999999Z07:00", // RCF3339Nano with short date fields.
"2006-1-2t15:4:5.999999999Z07:00", // RFC3339Nano with short date fields and lower-case "t".
"2006-1-2 15:4:5.999999999", // space separated with no time zone
"2006-1-2", // date only
// Notable exception: time.Parse cannot handle: "2001-12-14 21:59:43.10 -5"
// from the set of examples.
}
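// For example, "2001-12-14t21:59:43.10-05:00" and "2002-12-14" both parse,
// matching the second and fourth formats above respectively.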
// parseTimestamp parses s as a timestamp string and
// returns the timestamp and reports whether it succeeded.
// Timestamp formats are defined at http://yaml.org/type/timestamp.html
func parseTimestamp(s string) (time.Time, bool) {
// TODO write code to check all the formats supported by
// http://yaml.org/type/timestamp.html instead of using time.Parse.
// Quick check: all date formats start with YYYY-.
i := 0
for ; i < len(s); i++ {
if c := s[i]; c < '0' || c > '9' {
break
}
}
if i != 4 || i == len(s) || s[i] != '-' {
return time.Time{}, false
}
for _, format := range allowedTimestampFormats {
if t, err := time.Parse(format, s); err == nil {
return t, true
}
}
return time.Time{}, false
}

2711
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/scannerc.go generated vendored Normal file

File diff suppressed because it is too large

113
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/sorter.go generated vendored Normal file

@ -0,0 +1,113 @@
package yaml
import (
"reflect"
"unicode"
)
type keyList []reflect.Value
func (l keyList) Len() int { return len(l) }
func (l keyList) Swap(i, j int) { l[i], l[j] = l[j], l[i] }
func (l keyList) Less(i, j int) bool {
a := l[i]
b := l[j]
ak := a.Kind()
bk := b.Kind()
for (ak == reflect.Interface || ak == reflect.Ptr) && !a.IsNil() {
a = a.Elem()
ak = a.Kind()
}
for (bk == reflect.Interface || bk == reflect.Ptr) && !b.IsNil() {
b = b.Elem()
bk = b.Kind()
}
af, aok := keyFloat(a)
bf, bok := keyFloat(b)
if aok && bok {
if af != bf {
return af < bf
}
if ak != bk {
return ak < bk
}
return numLess(a, b)
}
if ak != reflect.String || bk != reflect.String {
return ak < bk
}
ar, br := []rune(a.String()), []rune(b.String())
for i := 0; i < len(ar) && i < len(br); i++ {
if ar[i] == br[i] {
continue
}
al := unicode.IsLetter(ar[i])
bl := unicode.IsLetter(br[i])
if al && bl {
return ar[i] < br[i]
}
if al || bl {
return bl
}
var ai, bi int
var an, bn int64
if ar[i] == '0' || br[i] == '0' {
for j := i-1; j >= 0 && unicode.IsDigit(ar[j]); j-- {
if ar[j] != '0' {
an = 1
bn = 1
break
}
}
}
for ai = i; ai < len(ar) && unicode.IsDigit(ar[ai]); ai++ {
an = an*10 + int64(ar[ai]-'0')
}
for bi = i; bi < len(br) && unicode.IsDigit(br[bi]); bi++ {
bn = bn*10 + int64(br[bi]-'0')
}
if an != bn {
return an < bn
}
if ai != bi {
return ai < bi
}
return ar[i] < br[i]
}
return len(ar) < len(br)
}
// keyFloat returns a float value for v if it is a number/bool
// and whether it is a number/bool or not.
func keyFloat(v reflect.Value) (f float64, ok bool) {
switch v.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return float64(v.Int()), true
case reflect.Float32, reflect.Float64:
return v.Float(), true
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
return float64(v.Uint()), true
case reflect.Bool:
if v.Bool() {
return 1, true
}
return 0, true
}
return 0, false
}
// numLess returns whether a < b.
// a and b must necessarily have the same kind.
func numLess(a, b reflect.Value) bool {
switch a.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return a.Int() < b.Int()
case reflect.Float32, reflect.Float64:
return a.Float() < b.Float()
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
return a.Uint() < b.Uint()
case reflect.Bool:
return !a.Bool() && b.Bool()
}
panic("not a number")
}
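The net effect of this comparator is a "natural" ordering of map keys when marshaling: numeric runs compare by value, so "item2" sorts before "item10". A small demo against the public API, using the upstream import path gopkg.in/yaml.v2 for this same code (the vendored copy lives at sigs.k8s.io/yaml/goyaml.v2):

package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	m := map[string]int{"item10": 1, "item2": 2, "item1": 3}
	out, err := yaml.Marshal(m)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
	// item1: 3
	// item2: 2
	// item10: 1
}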

26
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/writerc.go generated vendored Normal file

@@ -0,0 +1,26 @@
package yaml
// Set the writer error and return false.
func yaml_emitter_set_writer_error(emitter *yaml_emitter_t, problem string) bool {
emitter.error = yaml_WRITER_ERROR
emitter.problem = problem
return false
}
// Flush the output buffer.
func yaml_emitter_flush(emitter *yaml_emitter_t) bool {
if emitter.write_handler == nil {
panic("write handler not set")
}
// Check if the buffer is empty.
if emitter.buffer_pos == 0 {
return true
}
if err := emitter.write_handler(emitter, emitter.buffer[:emitter.buffer_pos]); err != nil {
return yaml_emitter_set_writer_error(emitter, "write error: "+err.Error())
}
emitter.buffer_pos = 0
return true
}

478
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/yaml.go generated vendored Normal file

@@ -0,0 +1,478 @@
// Package yaml implements YAML support for the Go language.
//
// Source code and other details for the project are available at GitHub:
//
// https://github.com/go-yaml/yaml
//
package yaml
import (
"errors"
"fmt"
"io"
"reflect"
"strings"
"sync"
)
// MapSlice encodes and decodes as a YAML map.
// The order of keys is preserved when encoding and decoding.
type MapSlice []MapItem
// MapItem is an item in a MapSlice.
type MapItem struct {
Key, Value interface{}
}
// The Unmarshaler interface may be implemented by types to customize their
// behavior when being unmarshaled from a YAML document. The UnmarshalYAML
// method receives a function that may be called to unmarshal the original
// YAML value into a field or variable. It is safe to call the unmarshal
// function parameter more than once if necessary.
type Unmarshaler interface {
UnmarshalYAML(unmarshal func(interface{}) error) error
}
// The Marshaler interface may be implemented by types to customize their
// behavior when being marshaled into a YAML document. The returned value
// is marshaled in place of the original value implementing Marshaler.
//
// If an error is returned by MarshalYAML, the marshaling procedure stops
// and returns with the provided error.
type Marshaler interface {
MarshalYAML() (interface{}, error)
}
// Unmarshal decodes the first document found within the in byte slice
// and assigns decoded values into the out value.
//
// Maps and pointers (to a struct, string, int, etc) are accepted as out
// values. If an internal pointer within a struct is not initialized,
// the yaml package will initialize it if necessary for unmarshalling
// the provided data. The out parameter must not be nil.
//
// The type of the decoded values should be compatible with the respective
// values in out. If one or more values cannot be decoded due to a type
// mismatch, decoding continues partially until the end of the YAML
// content, and a *yaml.TypeError is returned with details for all
// missed values.
//
// Struct fields are only unmarshalled if they are exported (have an
// upper case first letter), and are unmarshalled using the field name
// lowercased as the default key. Custom keys may be defined via the
// "yaml" name in the field tag: the content preceding the first comma
// is used as the key, and the following comma-separated options are
// used to tweak the marshalling process (see Marshal).
// Conflicting names result in a runtime error.
//
// For example:
//
// type T struct {
// F int `yaml:"a,omitempty"`
// B int
// }
// var t T
// yaml.Unmarshal([]byte("a: 1\nb: 2"), &t)
//
// See the documentation of Marshal for the format of tags and a list of
// supported tag options.
//
func Unmarshal(in []byte, out interface{}) (err error) {
return unmarshal(in, out, false)
}
// UnmarshalStrict is like Unmarshal except that any fields that are found
// in the data that do not have corresponding struct members, or mapping
// keys that are duplicates, will result in
// an error.
func UnmarshalStrict(in []byte, out interface{}) (err error) {
return unmarshal(in, out, true)
}
// A Decoder reads and decodes YAML values from an input stream.
type Decoder struct {
strict bool
parser *parser
}
// NewDecoder returns a new decoder that reads from r.
//
// The decoder introduces its own buffering and may read
// data from r beyond the YAML values requested.
func NewDecoder(r io.Reader) *Decoder {
return &Decoder{
parser: newParserFromReader(r),
}
}
// SetStrict sets whether strict decoding behaviour is enabled when
// decoding items in the data (see UnmarshalStrict). By default, decoding is not strict.
func (dec *Decoder) SetStrict(strict bool) {
dec.strict = strict
}
// Decode reads the next YAML-encoded value from its input
// and stores it in the value pointed to by v.
//
// See the documentation for Unmarshal for details about the
// conversion of YAML into a Go value.
func (dec *Decoder) Decode(v interface{}) (err error) {
d := newDecoder(dec.strict)
defer handleErr(&err)
node := dec.parser.parse()
if node == nil {
return io.EOF
}
out := reflect.ValueOf(v)
if out.Kind() == reflect.Ptr && !out.IsNil() {
out = out.Elem()
}
d.unmarshal(node, out)
if len(d.terrors) > 0 {
return &TypeError{d.terrors}
}
return nil
}
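A minimal usage sketch of the Decoder over a multi-document stream (same upstream import path as above):

package main

import (
	"fmt"
	"io"
	"strings"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	dec := yaml.NewDecoder(strings.NewReader("a: 1\n---\na: 2\n"))
	for {
		var doc map[string]int
		err := dec.Decode(&doc)
		if err == io.EOF {
			break // end of stream
		}
		if err != nil {
			panic(err)
		}
		fmt.Println(doc["a"]) // prints 1, then 2
	}
}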
func unmarshal(in []byte, out interface{}, strict bool) (err error) {
defer handleErr(&err)
d := newDecoder(strict)
p := newParser(in)
defer p.destroy()
node := p.parse()
if node != nil {
v := reflect.ValueOf(out)
if v.Kind() == reflect.Ptr && !v.IsNil() {
v = v.Elem()
}
d.unmarshal(node, v)
}
if len(d.terrors) > 0 {
return &TypeError{d.terrors}
}
return nil
}
// Marshal serializes the value provided into a YAML document. The structure
// of the generated document will reflect the structure of the value itself.
// Maps and pointers (to struct, string, int, etc) are accepted as the in value.
//
// Struct fields are only marshalled if they are exported (have an upper case
// first letter), and are marshalled using the field name lowercased as the
// default key. Custom keys may be defined via the "yaml" name in the field
// tag: the content preceding the first comma is used as the key, and the
// following comma-separated options are used to tweak the marshalling process.
// Conflicting names result in a runtime error.
//
// The field tag format accepted is:
//
// `(...) yaml:"[<key>][,<flag1>[,<flag2>]]" (...)`
//
// The following flags are currently supported:
//
// omitempty Only include the field if it's not set to the zero
// value for the type or to empty slices or maps.
// Zero valued structs will be omitted if all their public
// fields are zero, unless they implement an IsZero
// method (see the IsZeroer interface type), in which
// case the field will be excluded if IsZero returns true.
//
// flow Marshal using a flow style (useful for structs,
// sequences and maps).
//
// inline Inline the field, which must be a struct or a map,
// causing all of its fields or keys to be processed as if
// they were part of the outer struct. For maps, keys must
// not conflict with the yaml keys of other struct fields.
//
// In addition, if the key is "-", the field is ignored.
//
// For example:
//
// type T struct {
// F int `yaml:"a,omitempty"`
// B int
// }
// yaml.Marshal(&T{B: 2}) // Returns "b: 2\n"
// yaml.Marshal(&T{F: 1}) // Returns "a: 1\nb: 0\n"
//
func Marshal(in interface{}) (out []byte, err error) {
defer handleErr(&err)
e := newEncoder()
defer e.destroy()
e.marshalDoc("", reflect.ValueOf(in))
e.finish()
out = e.out
return
}
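A short sketch of the flow and inline flags described above (the type names are illustrative):

package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

// Meta's fields are lifted to the top level of Item via ",inline".
type Meta struct {
	Name string `yaml:"name"`
}

type Item struct {
	Meta   `yaml:",inline"`
	Labels map[string]string `yaml:"labels,flow"` // emitted as {key: value}
}

func main() {
	out, err := yaml.Marshal(Item{
		Meta:   Meta{Name: "demo"},
		Labels: map[string]string{"env": "test"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
	// name: demo
	// labels: {env: test}
}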
// An Encoder writes YAML values to an output stream.
type Encoder struct {
encoder *encoder
}
// NewEncoder returns a new encoder that writes to w.
// The Encoder should be closed after use to flush all data
// to w.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
encoder: newEncoderWithWriter(w),
}
}
// Encode writes the YAML encoding of v to the stream.
// If multiple items are encoded to the stream, the
// second and subsequent document will be preceded
// with a "---" document separator, but the first will not.
//
// See the documentation for Marshal for details about the conversion of Go
// values to YAML.
func (e *Encoder) Encode(v interface{}) (err error) {
defer handleErr(&err)
e.encoder.marshalDoc("", reflect.ValueOf(v))
return nil
}
// Close closes the encoder by writing any remaining data.
// It does not write a stream terminating string "...".
func (e *Encoder) Close() (err error) {
defer handleErr(&err)
e.encoder.finish()
return nil
}
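A minimal Encoder sketch showing the "---" separator between the second and later documents:

package main

import (
	"bytes"
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	var buf bytes.Buffer
	enc := yaml.NewEncoder(&buf)
	_ = enc.Encode(map[string]int{"a": 1})
	_ = enc.Encode(map[string]int{"b": 2})
	_ = enc.Close() // flush; no trailing "..." is written
	fmt.Print(buf.String())
	// a: 1
	// ---
	// b: 2
}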
func handleErr(err *error) {
if v := recover(); v != nil {
if e, ok := v.(yamlError); ok {
*err = e.err
} else {
panic(v)
}
}
}
type yamlError struct {
err error
}
func fail(err error) {
panic(yamlError{err})
}
func failf(format string, args ...interface{}) {
panic(yamlError{fmt.Errorf("yaml: "+format, args...)})
}
// A TypeError is returned by Unmarshal when one or more fields in
// the YAML document cannot be properly decoded into the requested
// types. When this error is returned, the value is still
// unmarshaled partially.
type TypeError struct {
Errors []string
}
func (e *TypeError) Error() string {
return fmt.Sprintf("yaml: unmarshal errors:\n %s", strings.Join(e.Errors, "\n "))
}
// --------------------------------------------------------------------------
// Maintain a mapping of keys to structure field indexes
// The code in this section was copied from mgo/bson.
// structInfo holds details for the serialization of fields of
// a given struct.
type structInfo struct {
FieldsMap map[string]fieldInfo
FieldsList []fieldInfo
// InlineMap is the number of the field in the struct that
// contains an ,inline map, or -1 if there's none.
InlineMap int
}
type fieldInfo struct {
Key string
Num int
OmitEmpty bool
Flow bool
// Id holds the unique field identifier, so we can cheaply
// check for field duplicates without maintaining an extra map.
Id int
// Inline holds the field index if the field is part of an inlined struct.
Inline []int
}
var structMap = make(map[reflect.Type]*structInfo)
var fieldMapMutex sync.RWMutex
func getStructInfo(st reflect.Type) (*structInfo, error) {
fieldMapMutex.RLock()
sinfo, found := structMap[st]
fieldMapMutex.RUnlock()
if found {
return sinfo, nil
}
n := st.NumField()
fieldsMap := make(map[string]fieldInfo)
fieldsList := make([]fieldInfo, 0, n)
inlineMap := -1
for i := 0; i != n; i++ {
field := st.Field(i)
if field.PkgPath != "" && !field.Anonymous {
continue // Private field
}
info := fieldInfo{Num: i}
tag := field.Tag.Get("yaml")
if tag == "" && strings.Index(string(field.Tag), ":") < 0 {
tag = string(field.Tag)
}
if tag == "-" {
continue
}
inline := false
fields := strings.Split(tag, ",")
if len(fields) > 1 {
for _, flag := range fields[1:] {
switch flag {
case "omitempty":
info.OmitEmpty = true
case "flow":
info.Flow = true
case "inline":
inline = true
default:
return nil, fmt.Errorf("Unsupported flag %q in tag %q of type %s", flag, tag, st)
}
}
tag = fields[0]
}
if inline {
switch field.Type.Kind() {
case reflect.Map:
if inlineMap >= 0 {
return nil, errors.New("Multiple ,inline maps in struct " + st.String())
}
if field.Type.Key() != reflect.TypeOf("") {
return nil, errors.New("Option ,inline needs a map with string keys in struct " + st.String())
}
inlineMap = info.Num
case reflect.Struct:
sinfo, err := getStructInfo(field.Type)
if err != nil {
return nil, err
}
for _, finfo := range sinfo.FieldsList {
if _, found := fieldsMap[finfo.Key]; found {
msg := "Duplicated key '" + finfo.Key + "' in struct " + st.String()
return nil, errors.New(msg)
}
if finfo.Inline == nil {
finfo.Inline = []int{i, finfo.Num}
} else {
finfo.Inline = append([]int{i}, finfo.Inline...)
}
finfo.Id = len(fieldsList)
fieldsMap[finfo.Key] = finfo
fieldsList = append(fieldsList, finfo)
}
default:
//return nil, errors.New("Option ,inline needs a struct value or map field")
return nil, errors.New("Option ,inline needs a struct value field")
}
continue
}
if tag != "" {
info.Key = tag
} else {
info.Key = strings.ToLower(field.Name)
}
if _, found = fieldsMap[info.Key]; found {
msg := "Duplicated key '" + info.Key + "' in struct " + st.String()
return nil, errors.New(msg)
}
info.Id = len(fieldsList)
fieldsList = append(fieldsList, info)
fieldsMap[info.Key] = info
}
sinfo = &structInfo{
FieldsMap: fieldsMap,
FieldsList: fieldsList,
InlineMap: inlineMap,
}
fieldMapMutex.Lock()
structMap[st] = sinfo
fieldMapMutex.Unlock()
return sinfo, nil
}
// IsZeroer is used to check whether an object is zero to
// determine whether it should be omitted when marshaling
// with the omitempty flag. One notable implementation
// is time.Time.
type IsZeroer interface {
IsZero() bool
}
func isZero(v reflect.Value) bool {
kind := v.Kind()
if z, ok := v.Interface().(IsZeroer); ok {
if (kind == reflect.Ptr || kind == reflect.Interface) && v.IsNil() {
return true
}
return z.IsZero()
}
switch kind {
case reflect.String:
return len(v.String()) == 0
case reflect.Interface, reflect.Ptr:
return v.IsNil()
case reflect.Slice:
return v.Len() == 0
case reflect.Map:
return v.Len() == 0
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return v.Int() == 0
case reflect.Float32, reflect.Float64:
return v.Float() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
return v.Uint() == 0
case reflect.Bool:
return !v.Bool()
case reflect.Struct:
vt := v.Type()
for i := v.NumField() - 1; i >= 0; i-- {
if vt.Field(i).PkgPath != "" {
continue // Private field
}
if !isZero(v.Field(i)) {
return false
}
}
return true
}
return false
}
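A brief sketch of the IsZeroer hook combined with omitempty; time.Time is the notable stdlib implementation mentioned above:

package main

import (
	"fmt"
	"time"

	yaml "gopkg.in/yaml.v2"
)

type Event struct {
	Name string    `yaml:"name"`
	At   time.Time `yaml:"at,omitempty"` // time.Time implements IsZero
}

func main() {
	out, err := yaml.Marshal(Event{Name: "boot"})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // "name: boot\n" — the zero time is omitted
}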
// FutureLineWrap globally disables line wrapping when encoding long strings.
// This is a temporary and thus deprecated method introduced to facilitate
// migration towards v3, which offers more control of line lengths on
// individual encodings, and has a default matching the behavior introduced
// by this function.
//
// The default formatting of v2 was erroneously changed in v2.3.0 and reverted
// in v2.4.0, at which point this function was introduced to help migration.
func FutureLineWrap() {
disableLineWrapping = true
}

739
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/yamlh.go generated vendored Normal file

@@ -0,0 +1,739 @@
package yaml
import (
"fmt"
"io"
)
// The version directive data.
type yaml_version_directive_t struct {
major int8 // The major version number.
minor int8 // The minor version number.
}
// The tag directive data.
type yaml_tag_directive_t struct {
handle []byte // The tag handle.
prefix []byte // The tag prefix.
}
type yaml_encoding_t int
// The stream encoding.
const (
// Let the parser choose the encoding.
yaml_ANY_ENCODING yaml_encoding_t = iota
yaml_UTF8_ENCODING // The default UTF-8 encoding.
yaml_UTF16LE_ENCODING // The UTF-16-LE encoding with BOM.
yaml_UTF16BE_ENCODING // The UTF-16-BE encoding with BOM.
)
type yaml_break_t int
// Line break types.
const (
// Let the parser choose the break type.
yaml_ANY_BREAK yaml_break_t = iota
yaml_CR_BREAK // Use CR for line breaks (Mac style).
yaml_LN_BREAK // Use LN for line breaks (Unix style).
yaml_CRLN_BREAK // Use CR LN for line breaks (DOS style).
)
type yaml_error_type_t int
// Many bad things could happen with the parser and emitter.
const (
// No error is produced.
yaml_NO_ERROR yaml_error_type_t = iota
yaml_MEMORY_ERROR // Cannot allocate or reallocate a block of memory.
yaml_READER_ERROR // Cannot read or decode the input stream.
yaml_SCANNER_ERROR // Cannot scan the input stream.
yaml_PARSER_ERROR // Cannot parse the input stream.
yaml_COMPOSER_ERROR // Cannot compose a YAML document.
yaml_WRITER_ERROR // Cannot write to the output stream.
yaml_EMITTER_ERROR // Cannot emit a YAML stream.
)
// The pointer position.
type yaml_mark_t struct {
index int // The position index.
line int // The position line.
column int // The position column.
}
// Node Styles
type yaml_style_t int8
type yaml_scalar_style_t yaml_style_t
// Scalar styles.
const (
// Let the emitter choose the style.
yaml_ANY_SCALAR_STYLE yaml_scalar_style_t = iota
yaml_PLAIN_SCALAR_STYLE // The plain scalar style.
yaml_SINGLE_QUOTED_SCALAR_STYLE // The single-quoted scalar style.
yaml_DOUBLE_QUOTED_SCALAR_STYLE // The double-quoted scalar style.
yaml_LITERAL_SCALAR_STYLE // The literal scalar style.
yaml_FOLDED_SCALAR_STYLE // The folded scalar style.
)
type yaml_sequence_style_t yaml_style_t
// Sequence styles.
const (
// Let the emitter choose the style.
yaml_ANY_SEQUENCE_STYLE yaml_sequence_style_t = iota
yaml_BLOCK_SEQUENCE_STYLE // The block sequence style.
yaml_FLOW_SEQUENCE_STYLE // The flow sequence style.
)
type yaml_mapping_style_t yaml_style_t
// Mapping styles.
const (
// Let the emitter choose the style.
yaml_ANY_MAPPING_STYLE yaml_mapping_style_t = iota
yaml_BLOCK_MAPPING_STYLE // The block mapping style.
yaml_FLOW_MAPPING_STYLE // The flow mapping style.
)
// Tokens
type yaml_token_type_t int
// Token types.
const (
// An empty token.
yaml_NO_TOKEN yaml_token_type_t = iota
yaml_STREAM_START_TOKEN // A STREAM-START token.
yaml_STREAM_END_TOKEN // A STREAM-END token.
yaml_VERSION_DIRECTIVE_TOKEN // A VERSION-DIRECTIVE token.
yaml_TAG_DIRECTIVE_TOKEN // A TAG-DIRECTIVE token.
yaml_DOCUMENT_START_TOKEN // A DOCUMENT-START token.
yaml_DOCUMENT_END_TOKEN // A DOCUMENT-END token.
yaml_BLOCK_SEQUENCE_START_TOKEN // A BLOCK-SEQUENCE-START token.
yaml_BLOCK_MAPPING_START_TOKEN // A BLOCK-MAPPING-START token.
yaml_BLOCK_END_TOKEN // A BLOCK-END token.
yaml_FLOW_SEQUENCE_START_TOKEN // A FLOW-SEQUENCE-START token.
yaml_FLOW_SEQUENCE_END_TOKEN // A FLOW-SEQUENCE-END token.
yaml_FLOW_MAPPING_START_TOKEN // A FLOW-MAPPING-START token.
yaml_FLOW_MAPPING_END_TOKEN // A FLOW-MAPPING-END token.
yaml_BLOCK_ENTRY_TOKEN // A BLOCK-ENTRY token.
yaml_FLOW_ENTRY_TOKEN // A FLOW-ENTRY token.
yaml_KEY_TOKEN // A KEY token.
yaml_VALUE_TOKEN // A VALUE token.
yaml_ALIAS_TOKEN // An ALIAS token.
yaml_ANCHOR_TOKEN // An ANCHOR token.
yaml_TAG_TOKEN // A TAG token.
yaml_SCALAR_TOKEN // A SCALAR token.
)
func (tt yaml_token_type_t) String() string {
switch tt {
case yaml_NO_TOKEN:
return "yaml_NO_TOKEN"
case yaml_STREAM_START_TOKEN:
return "yaml_STREAM_START_TOKEN"
case yaml_STREAM_END_TOKEN:
return "yaml_STREAM_END_TOKEN"
case yaml_VERSION_DIRECTIVE_TOKEN:
return "yaml_VERSION_DIRECTIVE_TOKEN"
case yaml_TAG_DIRECTIVE_TOKEN:
return "yaml_TAG_DIRECTIVE_TOKEN"
case yaml_DOCUMENT_START_TOKEN:
return "yaml_DOCUMENT_START_TOKEN"
case yaml_DOCUMENT_END_TOKEN:
return "yaml_DOCUMENT_END_TOKEN"
case yaml_BLOCK_SEQUENCE_START_TOKEN:
return "yaml_BLOCK_SEQUENCE_START_TOKEN"
case yaml_BLOCK_MAPPING_START_TOKEN:
return "yaml_BLOCK_MAPPING_START_TOKEN"
case yaml_BLOCK_END_TOKEN:
return "yaml_BLOCK_END_TOKEN"
case yaml_FLOW_SEQUENCE_START_TOKEN:
return "yaml_FLOW_SEQUENCE_START_TOKEN"
case yaml_FLOW_SEQUENCE_END_TOKEN:
return "yaml_FLOW_SEQUENCE_END_TOKEN"
case yaml_FLOW_MAPPING_START_TOKEN:
return "yaml_FLOW_MAPPING_START_TOKEN"
case yaml_FLOW_MAPPING_END_TOKEN:
return "yaml_FLOW_MAPPING_END_TOKEN"
case yaml_BLOCK_ENTRY_TOKEN:
return "yaml_BLOCK_ENTRY_TOKEN"
case yaml_FLOW_ENTRY_TOKEN:
return "yaml_FLOW_ENTRY_TOKEN"
case yaml_KEY_TOKEN:
return "yaml_KEY_TOKEN"
case yaml_VALUE_TOKEN:
return "yaml_VALUE_TOKEN"
case yaml_ALIAS_TOKEN:
return "yaml_ALIAS_TOKEN"
case yaml_ANCHOR_TOKEN:
return "yaml_ANCHOR_TOKEN"
case yaml_TAG_TOKEN:
return "yaml_TAG_TOKEN"
case yaml_SCALAR_TOKEN:
return "yaml_SCALAR_TOKEN"
}
return "<unknown token>"
}
// The token structure.
type yaml_token_t struct {
// The token type.
typ yaml_token_type_t
// The start/end of the token.
start_mark, end_mark yaml_mark_t
// The stream encoding (for yaml_STREAM_START_TOKEN).
encoding yaml_encoding_t
// The alias/anchor/scalar value or tag/tag directive handle
// (for yaml_ALIAS_TOKEN, yaml_ANCHOR_TOKEN, yaml_SCALAR_TOKEN, yaml_TAG_TOKEN, yaml_TAG_DIRECTIVE_TOKEN).
value []byte
// The tag suffix (for yaml_TAG_TOKEN).
suffix []byte
// The tag directive prefix (for yaml_TAG_DIRECTIVE_TOKEN).
prefix []byte
// The scalar style (for yaml_SCALAR_TOKEN).
style yaml_scalar_style_t
// The version directive major/minor (for yaml_VERSION_DIRECTIVE_TOKEN).
major, minor int8
}
// Events
type yaml_event_type_t int8
// Event types.
const (
// An empty event.
yaml_NO_EVENT yaml_event_type_t = iota
yaml_STREAM_START_EVENT // A STREAM-START event.
yaml_STREAM_END_EVENT // A STREAM-END event.
yaml_DOCUMENT_START_EVENT // A DOCUMENT-START event.
yaml_DOCUMENT_END_EVENT // A DOCUMENT-END event.
yaml_ALIAS_EVENT // An ALIAS event.
yaml_SCALAR_EVENT // A SCALAR event.
yaml_SEQUENCE_START_EVENT // A SEQUENCE-START event.
yaml_SEQUENCE_END_EVENT // A SEQUENCE-END event.
yaml_MAPPING_START_EVENT // A MAPPING-START event.
yaml_MAPPING_END_EVENT // A MAPPING-END event.
)
var eventStrings = []string{
yaml_NO_EVENT: "none",
yaml_STREAM_START_EVENT: "stream start",
yaml_STREAM_END_EVENT: "stream end",
yaml_DOCUMENT_START_EVENT: "document start",
yaml_DOCUMENT_END_EVENT: "document end",
yaml_ALIAS_EVENT: "alias",
yaml_SCALAR_EVENT: "scalar",
yaml_SEQUENCE_START_EVENT: "sequence start",
yaml_SEQUENCE_END_EVENT: "sequence end",
yaml_MAPPING_START_EVENT: "mapping start",
yaml_MAPPING_END_EVENT: "mapping end",
}
func (e yaml_event_type_t) String() string {
if e < 0 || int(e) >= len(eventStrings) {
return fmt.Sprintf("unknown event %d", e)
}
return eventStrings[e]
}
// The event structure.
type yaml_event_t struct {
// The event type.
typ yaml_event_type_t
// The start and end of the event.
start_mark, end_mark yaml_mark_t
// The document encoding (for yaml_STREAM_START_EVENT).
encoding yaml_encoding_t
// The version directive (for yaml_DOCUMENT_START_EVENT).
version_directive *yaml_version_directive_t
// The list of tag directives (for yaml_DOCUMENT_START_EVENT).
tag_directives []yaml_tag_directive_t
// The anchor (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT, yaml_ALIAS_EVENT).
anchor []byte
// The tag (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT).
tag []byte
// The scalar value (for yaml_SCALAR_EVENT).
value []byte
// Is the document start/end indicator implicit, or the tag optional?
// (for yaml_DOCUMENT_START_EVENT, yaml_DOCUMENT_END_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT, yaml_SCALAR_EVENT).
implicit bool
// Is the tag optional for any non-plain style? (for yaml_SCALAR_EVENT).
quoted_implicit bool
// The style (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT).
style yaml_style_t
}
func (e *yaml_event_t) scalar_style() yaml_scalar_style_t { return yaml_scalar_style_t(e.style) }
func (e *yaml_event_t) sequence_style() yaml_sequence_style_t { return yaml_sequence_style_t(e.style) }
func (e *yaml_event_t) mapping_style() yaml_mapping_style_t { return yaml_mapping_style_t(e.style) }
// Nodes
const (
yaml_NULL_TAG = "tag:yaml.org,2002:null" // The tag !!null with the only possible value: null.
yaml_BOOL_TAG = "tag:yaml.org,2002:bool" // The tag !!bool with the values: true and false.
yaml_STR_TAG = "tag:yaml.org,2002:str" // The tag !!str for string values.
yaml_INT_TAG = "tag:yaml.org,2002:int" // The tag !!int for integer values.
yaml_FLOAT_TAG = "tag:yaml.org,2002:float" // The tag !!float for float values.
yaml_TIMESTAMP_TAG = "tag:yaml.org,2002:timestamp" // The tag !!timestamp for date and time values.
yaml_SEQ_TAG = "tag:yaml.org,2002:seq" // The tag !!seq is used to denote sequences.
yaml_MAP_TAG = "tag:yaml.org,2002:map" // The tag !!map is used to denote mapping.
// Not in original libyaml.
yaml_BINARY_TAG = "tag:yaml.org,2002:binary"
yaml_MERGE_TAG = "tag:yaml.org,2002:merge"
yaml_DEFAULT_SCALAR_TAG = yaml_STR_TAG // The default scalar tag is !!str.
yaml_DEFAULT_SEQUENCE_TAG = yaml_SEQ_TAG // The default sequence tag is !!seq.
yaml_DEFAULT_MAPPING_TAG = yaml_MAP_TAG // The default mapping tag is !!map.
)
type yaml_node_type_t int
// Node types.
const (
// An empty node.
yaml_NO_NODE yaml_node_type_t = iota
yaml_SCALAR_NODE // A scalar node.
yaml_SEQUENCE_NODE // A sequence node.
yaml_MAPPING_NODE // A mapping node.
)
// An element of a sequence node.
type yaml_node_item_t int
// An element of a mapping node.
type yaml_node_pair_t struct {
key int // The key of the element.
value int // The value of the element.
}
// The node structure.
type yaml_node_t struct {
typ yaml_node_type_t // The node type.
tag []byte // The node tag.
// The node data.
// The scalar parameters (for yaml_SCALAR_NODE).
scalar struct {
value []byte // The scalar value.
length int // The length of the scalar value.
style yaml_scalar_style_t // The scalar style.
}
// The sequence parameters (for YAML_SEQUENCE_NODE).
sequence struct {
items_data []yaml_node_item_t // The stack of sequence items.
style yaml_sequence_style_t // The sequence style.
}
// The mapping parameters (for yaml_MAPPING_NODE).
mapping struct {
pairs_data []yaml_node_pair_t // The stack of mapping pairs (key, value).
pairs_start *yaml_node_pair_t // The beginning of the stack.
pairs_end *yaml_node_pair_t // The end of the stack.
pairs_top *yaml_node_pair_t // The top of the stack.
style yaml_mapping_style_t // The mapping style.
}
start_mark yaml_mark_t // The beginning of the node.
end_mark yaml_mark_t // The end of the node.
}
// The document structure.
type yaml_document_t struct {
// The document nodes.
nodes []yaml_node_t
// The version directive.
version_directive *yaml_version_directive_t
// The list of tag directives.
tag_directives_data []yaml_tag_directive_t
tag_directives_start int // The beginning of the tag directives list.
tag_directives_end int // The end of the tag directives list.
start_implicit int // Is the document start indicator implicit?
end_implicit int // Is the document end indicator implicit?
// The start/end of the document.
start_mark, end_mark yaml_mark_t
}
// The prototype of a read handler.
//
// The read handler is called when the parser needs to read more bytes from the
// source. The handler should write not more than size bytes to the buffer.
// The number of written bytes should be set to the size_read variable.
//
// [in,out] data A pointer to an application data specified by
// yaml_parser_set_input().
// [out] buffer The buffer to write the data from the source.
// [in] size The size of the buffer.
// [out] size_read The actual number of bytes read from the source.
//
// On success, the handler should return 1. If the handler failed,
// the returned value should be 0. On EOF, the handler should set the
// size_read to 0 and return 1.
type yaml_read_handler_t func(parser *yaml_parser_t, buffer []byte) (n int, err error)
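A standalone sketch of the read-handler contract; the type readHandler here is illustrative, not the package's unexported yaml_read_handler_t:

package main

import (
	"fmt"
	"io"
	"strings"
)

// readHandler mirrors the contract above: fill the buffer, return the byte
// count, and signal end of input with io.EOF.
type readHandler func(buffer []byte) (n int, err error)

func fromReader(r io.Reader) readHandler {
	return func(buffer []byte) (int, error) { return r.Read(buffer) }
}

func main() {
	h := fromReader(strings.NewReader("a: 1"))
	buf := make([]byte, 8)
	for {
		n, err := h(buf)
		if n > 0 {
			fmt.Printf("read %d bytes: %q\n", n, buf[:n])
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
	}
}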
// This structure holds information about a potential simple key.
type yaml_simple_key_t struct {
possible bool // Is a simple key possible?
required bool // Is a simple key required?
token_number int // The number of the token.
mark yaml_mark_t // The position mark.
}
// The states of the parser.
type yaml_parser_state_t int
const (
yaml_PARSE_STREAM_START_STATE yaml_parser_state_t = iota
yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE // Expect the beginning of an implicit document.
yaml_PARSE_DOCUMENT_START_STATE // Expect DOCUMENT-START.
yaml_PARSE_DOCUMENT_CONTENT_STATE // Expect the content of a document.
yaml_PARSE_DOCUMENT_END_STATE // Expect DOCUMENT-END.
yaml_PARSE_BLOCK_NODE_STATE // Expect a block node.
yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE // Expect a block node or indentless sequence.
yaml_PARSE_FLOW_NODE_STATE // Expect a flow node.
yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE // Expect the first entry of a block sequence.
yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE // Expect an entry of a block sequence.
yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE // Expect an entry of an indentless sequence.
yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE // Expect the first key of a block mapping.
yaml_PARSE_BLOCK_MAPPING_KEY_STATE // Expect a block mapping key.
yaml_PARSE_BLOCK_MAPPING_VALUE_STATE // Expect a block mapping value.
yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE // Expect the first entry of a flow sequence.
yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE // Expect an entry of a flow sequence.
yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE // Expect a key of an ordered mapping.
yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE // Expect a value of an ordered mapping.
yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE // Expect the end of an ordered mapping entry.
yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE // Expect the first key of a flow mapping.
yaml_PARSE_FLOW_MAPPING_KEY_STATE // Expect a key of a flow mapping.
yaml_PARSE_FLOW_MAPPING_VALUE_STATE // Expect a value of a flow mapping.
yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE // Expect an empty value of a flow mapping.
yaml_PARSE_END_STATE // Expect nothing.
)
func (ps yaml_parser_state_t) String() string {
switch ps {
case yaml_PARSE_STREAM_START_STATE:
return "yaml_PARSE_STREAM_START_STATE"
case yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE:
return "yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE"
case yaml_PARSE_DOCUMENT_START_STATE:
return "yaml_PARSE_DOCUMENT_START_STATE"
case yaml_PARSE_DOCUMENT_CONTENT_STATE:
return "yaml_PARSE_DOCUMENT_CONTENT_STATE"
case yaml_PARSE_DOCUMENT_END_STATE:
return "yaml_PARSE_DOCUMENT_END_STATE"
case yaml_PARSE_BLOCK_NODE_STATE:
return "yaml_PARSE_BLOCK_NODE_STATE"
case yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE:
return "yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE"
case yaml_PARSE_FLOW_NODE_STATE:
return "yaml_PARSE_FLOW_NODE_STATE"
case yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE:
return "yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE"
case yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE:
return "yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE"
case yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE:
return "yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE"
case yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE:
return "yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE"
case yaml_PARSE_BLOCK_MAPPING_KEY_STATE:
return "yaml_PARSE_BLOCK_MAPPING_KEY_STATE"
case yaml_PARSE_BLOCK_MAPPING_VALUE_STATE:
return "yaml_PARSE_BLOCK_MAPPING_VALUE_STATE"
case yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE:
return "yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE"
case yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE:
return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE"
case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE:
return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE"
case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE:
return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE"
case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE:
return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE"
case yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE:
return "yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE"
case yaml_PARSE_FLOW_MAPPING_KEY_STATE:
return "yaml_PARSE_FLOW_MAPPING_KEY_STATE"
case yaml_PARSE_FLOW_MAPPING_VALUE_STATE:
return "yaml_PARSE_FLOW_MAPPING_VALUE_STATE"
case yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE:
return "yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE"
case yaml_PARSE_END_STATE:
return "yaml_PARSE_END_STATE"
}
return "<unknown parser state>"
}
// This structure holds aliases data.
type yaml_alias_data_t struct {
anchor []byte // The anchor.
index int // The node id.
mark yaml_mark_t // The anchor mark.
}
// The parser structure.
//
// All members are internal. Manage the structure using the
// yaml_parser_ family of functions.
type yaml_parser_t struct {
// Error handling
error yaml_error_type_t // Error type.
problem string // Error description.
// The byte about which the problem occurred.
problem_offset int
problem_value int
problem_mark yaml_mark_t
// The error context.
context string
context_mark yaml_mark_t
// Reader stuff
read_handler yaml_read_handler_t // Read handler.
input_reader io.Reader // File input data.
input []byte // String input data.
input_pos int
eof bool // EOF flag
buffer []byte // The working buffer.
buffer_pos int // The current position of the buffer.
unread int // The number of unread characters in the buffer.
raw_buffer []byte // The raw buffer.
raw_buffer_pos int // The current position of the buffer.
encoding yaml_encoding_t // The input encoding.
offset int // The offset of the current position (in bytes).
mark yaml_mark_t // The mark of the current position.
// Scanner stuff
stream_start_produced bool // Have we started to scan the input stream?
stream_end_produced bool // Have we reached the end of the input stream?
flow_level int // The number of unclosed '[' and '{' indicators.
tokens []yaml_token_t // The tokens queue.
tokens_head int // The head of the tokens queue.
tokens_parsed int // The number of tokens fetched from the queue.
token_available bool // Does the tokens queue contain a token ready for dequeueing.
indent int // The current indentation level.
indents []int // The indentation levels stack.
simple_key_allowed bool // May a simple key occur at the current position?
simple_keys []yaml_simple_key_t // The stack of simple keys.
simple_keys_by_tok map[int]int // possible simple_key indexes indexed by token_number
// Parser stuff
state yaml_parser_state_t // The current parser state.
states []yaml_parser_state_t // The parser states stack.
marks []yaml_mark_t // The stack of marks.
tag_directives []yaml_tag_directive_t // The list of TAG directives.
// Dumper stuff
aliases []yaml_alias_data_t // The alias data.
document *yaml_document_t // The currently parsed document.
}
// Emitter Definitions
// The prototype of a write handler.
//
// The write handler is called when the emitter needs to flush the accumulated
// characters to the output. The handler should write @a size bytes of the
// @a buffer to the output.
//
// @param[in,out] data A pointer to an application data specified by
// yaml_emitter_set_output().
// @param[in] buffer The buffer with bytes to be written.
// @param[in] size The size of the buffer.
//
// @returns On success, the handler should return @c 1. If the handler failed,
// the returned value should be @c 0.
//
type yaml_write_handler_t func(emitter *yaml_emitter_t, buffer []byte) error
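A standalone sketch of the write-handler contract; writeHandler is an illustrative stand-in for the unexported yaml_write_handler_t:

package main

import (
	"bytes"
	"fmt"
)

// writeHandler mirrors the contract above: consume the flushed buffer in
// full, or return an error (which would put the emitter into the
// yaml_WRITER_ERROR state).
type writeHandler func(buffer []byte) error

func toBuffer(out *bytes.Buffer) writeHandler {
	return func(buffer []byte) error {
		_, err := out.Write(buffer)
		return err
	}
}

func main() {
	var out bytes.Buffer
	h := toBuffer(&out)
	if err := h([]byte("a: 1\n")); err != nil {
		panic(err)
	}
	fmt.Print(out.String())
}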
type yaml_emitter_state_t int
// The emitter states.
const (
// Expect STREAM-START.
yaml_EMIT_STREAM_START_STATE yaml_emitter_state_t = iota
yaml_EMIT_FIRST_DOCUMENT_START_STATE // Expect the first DOCUMENT-START or STREAM-END.
yaml_EMIT_DOCUMENT_START_STATE // Expect DOCUMENT-START or STREAM-END.
yaml_EMIT_DOCUMENT_CONTENT_STATE // Expect the content of a document.
yaml_EMIT_DOCUMENT_END_STATE // Expect DOCUMENT-END.
yaml_EMIT_FLOW_SEQUENCE_FIRST_ITEM_STATE // Expect the first item of a flow sequence.
yaml_EMIT_FLOW_SEQUENCE_ITEM_STATE // Expect an item of a flow sequence.
yaml_EMIT_FLOW_MAPPING_FIRST_KEY_STATE // Expect the first key of a flow mapping.
yaml_EMIT_FLOW_MAPPING_KEY_STATE // Expect a key of a flow mapping.
yaml_EMIT_FLOW_MAPPING_SIMPLE_VALUE_STATE // Expect a value for a simple key of a flow mapping.
yaml_EMIT_FLOW_MAPPING_VALUE_STATE // Expect a value of a flow mapping.
yaml_EMIT_BLOCK_SEQUENCE_FIRST_ITEM_STATE // Expect the first item of a block sequence.
yaml_EMIT_BLOCK_SEQUENCE_ITEM_STATE // Expect an item of a block sequence.
yaml_EMIT_BLOCK_MAPPING_FIRST_KEY_STATE // Expect the first key of a block mapping.
yaml_EMIT_BLOCK_MAPPING_KEY_STATE // Expect the key of a block mapping.
yaml_EMIT_BLOCK_MAPPING_SIMPLE_VALUE_STATE // Expect a value for a simple key of a block mapping.
yaml_EMIT_BLOCK_MAPPING_VALUE_STATE // Expect a value of a block mapping.
yaml_EMIT_END_STATE // Expect nothing.
)
// The emitter structure.
//
// All members are internal. Manage the structure using the @c yaml_emitter_
// family of functions.
type yaml_emitter_t struct {
// Error handling
error yaml_error_type_t // Error type.
problem string // Error description.
// Writer stuff
write_handler yaml_write_handler_t // Write handler.
output_buffer *[]byte // String output data.
output_writer io.Writer // File output data.
buffer []byte // The working buffer.
buffer_pos int // The current position of the buffer.
raw_buffer []byte // The raw buffer.
raw_buffer_pos int // The current position of the buffer.
encoding yaml_encoding_t // The stream encoding.
// Emitter stuff
canonical bool // If the output is in the canonical style?
best_indent int // The number of indentation spaces.
best_width int // The preferred width of the output lines.
unicode bool // Allow unescaped non-ASCII characters?
line_break yaml_break_t // The preferred line break.
state yaml_emitter_state_t // The current emitter state.
states []yaml_emitter_state_t // The stack of states.
events []yaml_event_t // The event queue.
events_head int // The head of the event queue.
indents []int // The stack of indentation levels.
tag_directives []yaml_tag_directive_t // The list of tag directives.
indent int // The current indentation level.
flow_level int // The current flow level.
root_context bool // Is it the document root context?
sequence_context bool // Is it a sequence context?
mapping_context bool // Is it a mapping context?
simple_key_context bool // Is it a simple mapping key context?
line int // The current line.
column int // The current column.
whitespace bool // If the last character was a whitespace?
indention bool // If the last character was an indentation character (' ', '-', '?', ':')?
open_ended bool // If an explicit document end is required?
// Anchor analysis.
anchor_data struct {
anchor []byte // The anchor value.
alias bool // Is it an alias?
}
// Tag analysis.
tag_data struct {
handle []byte // The tag handle.
suffix []byte // The tag suffix.
}
// Scalar analysis.
scalar_data struct {
value []byte // The scalar value.
multiline bool // Does the scalar contain line breaks?
flow_plain_allowed bool // Can the scalar be expressed in the flow plain style?
block_plain_allowed bool // Can the scalar be expressed in the block plain style?
single_quoted_allowed bool // Can the scalar be expressed in the single quoted style?
block_allowed bool // Can the scalar be expressed in the literal or folded styles?
style yaml_scalar_style_t // The output style.
}
// Dumper stuff
opened bool // If the stream was already opened?
closed bool // If the stream was already closed?
// The information associated with the document nodes.
anchors *struct {
references int // The number of references.
anchor int // The anchor id.
serialized bool // If the node has been emitted?
}
last_anchor_id int // The last assigned anchor id.
document *yaml_document_t // The currently emitted document.
}

173
e2e/vendor/sigs.k8s.io/yaml/goyaml.v2/yamlprivateh.go generated vendored Normal file

@@ -0,0 +1,173 @@
package yaml
const (
// The size of the input raw buffer.
input_raw_buffer_size = 512
// The size of the input buffer.
// It should be possible to decode the whole raw buffer.
input_buffer_size = input_raw_buffer_size * 3
// The size of the output buffer.
output_buffer_size = 128
// The size of the output raw buffer.
// It should be possible to encode the whole output buffer.
output_raw_buffer_size = (output_buffer_size*2 + 2)
// The size of other stacks and queues.
initial_stack_size = 16
initial_queue_size = 16
initial_string_size = 16
)
// Check if the character at the specified position is an alphabetical
// character, a digit, '_', or '-'.
func is_alpha(b []byte, i int) bool {
return b[i] >= '0' && b[i] <= '9' || b[i] >= 'A' && b[i] <= 'Z' || b[i] >= 'a' && b[i] <= 'z' || b[i] == '_' || b[i] == '-'
}
// Check if the character at the specified position is a digit.
func is_digit(b []byte, i int) bool {
return b[i] >= '0' && b[i] <= '9'
}
// Get the value of a digit.
func as_digit(b []byte, i int) int {
return int(b[i]) - '0'
}
// Check if the character at the specified position is a hex-digit.
func is_hex(b []byte, i int) bool {
return b[i] >= '0' && b[i] <= '9' || b[i] >= 'A' && b[i] <= 'F' || b[i] >= 'a' && b[i] <= 'f'
}
// Get the value of a hex-digit.
func as_hex(b []byte, i int) int {
bi := b[i]
if bi >= 'A' && bi <= 'F' {
return int(bi) - 'A' + 10
}
if bi >= 'a' && bi <= 'f' {
return int(bi) - 'a' + 10
}
return int(bi) - '0'
}
// Check if the character is ASCII.
func is_ascii(b []byte, i int) bool {
return b[i] <= 0x7F
}
// Check if the character at the start of the buffer can be printed unescaped.
func is_printable(b []byte, i int) bool {
return ((b[i] == 0x0A) || // . == #x0A
(b[i] >= 0x20 && b[i] <= 0x7E) || // #x20 <= . <= #x7E
(b[i] == 0xC2 && b[i+1] >= 0xA0) || // #xA0 <= . <= #xD7FF
(b[i] > 0xC2 && b[i] < 0xED) ||
(b[i] == 0xED && b[i+1] < 0xA0) ||
(b[i] == 0xEE) ||
(b[i] == 0xEF && // #xE000 <= . <= #xFFFD
!(b[i+1] == 0xBB && b[i+2] == 0xBF) && // && . != #xFEFF
!(b[i+1] == 0xBF && (b[i+2] == 0xBE || b[i+2] == 0xBF))))
}
// Check if the character at the specified position is NUL.
func is_z(b []byte, i int) bool {
return b[i] == 0x00
}
// Check if the beginning of the buffer is a BOM.
func is_bom(b []byte, i int) bool {
return b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF
}
// Check if the character at the specified position is space.
func is_space(b []byte, i int) bool {
return b[i] == ' '
}
// Check if the character at the specified position is tab.
func is_tab(b []byte, i int) bool {
return b[i] == '\t'
}
// Check if the character at the specified position is blank (space or tab).
func is_blank(b []byte, i int) bool {
//return is_space(b, i) || is_tab(b, i)
return b[i] == ' ' || b[i] == '\t'
}
// Check if the character at the specified position is a line break.
func is_break(b []byte, i int) bool {
return (b[i] == '\r' || // CR (#xD)
b[i] == '\n' || // LF (#xA)
b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)
b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)
b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9) // PS (#x2029)
}
func is_crlf(b []byte, i int) bool {
return b[i] == '\r' && b[i+1] == '\n'
}
// Check if the character is a line break or NUL.
func is_breakz(b []byte, i int) bool {
//return is_break(b, i) || is_z(b, i)
return ( // is_break:
b[i] == '\r' || // CR (#xD)
b[i] == '\n' || // LF (#xA)
b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)
b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)
b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029)
// is_z:
b[i] == 0)
}
// Check if the character is a line break, space, or NUL.
func is_spacez(b []byte, i int) bool {
//return is_space(b, i) || is_breakz(b, i)
return ( // is_space:
b[i] == ' ' ||
// is_breakz:
b[i] == '\r' || // CR (#xD)
b[i] == '\n' || // LF (#xA)
b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)
b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)
b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029)
b[i] == 0)
}
// Check if the character is a line break, space, tab, or NUL.
func is_blankz(b []byte, i int) bool {
//return is_blank(b, i) || is_breakz(b, i)
return ( // is_blank:
b[i] == ' ' || b[i] == '\t' ||
// is_breakz:
b[i] == '\r' || // CR (#xD)
b[i] == '\n' || // LF (#xA)
b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)
b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)
b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029)
b[i] == 0)
}
// Determine the width of the character.
func width(b byte) int {
// Don't replace these by a switch without first
// confirming that it is being inlined.
if b&0x80 == 0x00 {
return 1
}
if b&0xE0 == 0xC0 {
return 2
}
if b&0xF0 == 0xE0 {
return 3
}
if b&0xF8 == 0xF0 {
return 4
}
return 0
}
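A runnable sketch of the same first-byte width classification, applied to characters of each UTF-8 length:

package main

import "fmt"

// widthOf mirrors width() above: the UTF-8 sequence length is derived from
// the high bits of the first byte; 0 means a continuation or invalid byte.
func widthOf(b byte) int {
	switch {
	case b&0x80 == 0x00:
		return 1
	case b&0xE0 == 0xC0:
		return 2
	case b&0xF0 == 0xE0:
		return 3
	case b&0xF8 == 0xF0:
		return 4
	}
	return 0
}

func main() {
	for _, s := range []string{"a", "é", "€", "😀"} {
		fmt.Println(s, "->", widthOf(s[0])) // 1, 2, 3, 4
	}
}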

419
e2e/vendor/sigs.k8s.io/yaml/yaml.go generated vendored Normal file

@@ -0,0 +1,419 @@
/*
Copyright 2021 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package yaml
import (
"bytes"
"encoding/json"
"fmt"
"io"
"reflect"
"strconv"
"sigs.k8s.io/yaml/goyaml.v2"
)
// Marshal marshals obj into JSON using stdlib json.Marshal, and then converts JSON to YAML using JSONToYAML (see that method for more reference)
func Marshal(obj interface{}) ([]byte, error) {
jsonBytes, err := json.Marshal(obj)
if err != nil {
return nil, fmt.Errorf("error marshaling into JSON: %w", err)
}
return JSONToYAML(jsonBytes)
}
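A minimal usage sketch; note that because Marshal round-trips through encoding/json, struct fields are selected by their json tags:

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

type Config struct {
	Name     string `json:"name"` // json tags, not yaml tags
	Replicas int    `json:"replicas,omitempty"`
}

func main() {
	out, err := yaml.Marshal(Config{Name: "demo", Replicas: 3})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
	// name: demo
	// replicas: 3
}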
// JSONOpt is a decoding option for decoding from JSON format.
type JSONOpt func(*json.Decoder) *json.Decoder
// Unmarshal first converts the given YAML to JSON, and then unmarshals the JSON into obj. Options for the
// standard library json.Decoder can be optionally specified, e.g. to decode untyped numbers into json.Number instead of float64, or to disallow unknown fields (but for that purpose, see also UnmarshalStrict). obj must be a non-nil pointer.
//
// Important notes about the Unmarshal logic:
//
// - Decoding is case-insensitive, unlike the rest of Kubernetes API machinery, as this is using the stdlib json library. This might be confusing to users.
// - This decodes any number (even if it is an integer) into a float64 if the type of obj is unknown, e.g. *map[string]interface{}, *interface{}, or *[]interface{}. This means integers above +/- 2^53 will lose precision when round-tripping. Make a JSONOpt that calls d.UseNumber() to avoid this.
// - Duplicate fields, including case-insensitive matches, are ignored in an undefined order. Note that the YAML specification forbids duplicate fields, so this logic is more permissive than it needs to be. See UnmarshalStrict for an alternative.
// - Unknown fields, i.e. serialized data that do not map to a field in obj, are ignored. Use d.DisallowUnknownFields() or UnmarshalStrict to override.
// - As per the YAML 1.1 specification, which the underlying yaml.v2 library implements, literal 'yes' and 'no' strings without quotation marks are converted to true/false implicitly.
// - YAML non-string keys, e.g. ints, bools and floats, are converted to strings implicitly during the YAML to JSON conversion process.
// - There are no compatibility guarantees for returned error values.
func Unmarshal(yamlBytes []byte, obj interface{}, opts ...JSONOpt) error {
return unmarshal(yamlBytes, obj, yaml.Unmarshal, opts...)
}
// UnmarshalStrict is similar to Unmarshal (please read its documentation for reference), with the following exceptions:
//
// - Duplicate fields in an object yield an error. This is according to the YAML specification.
// - If obj, or any of its recursive children, is a struct, presence of fields in the serialized data unknown to the struct will yield an error.
func UnmarshalStrict(yamlBytes []byte, obj interface{}, opts ...JSONOpt) error {
return unmarshal(yamlBytes, obj, yaml.UnmarshalStrict, append(opts, DisallowUnknownFields)...)
}
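A short sketch contrasting the two entry points on an unknown field:

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

type Config struct {
	Name string `json:"name"`
}

func main() {
	var c Config
	// Plain Unmarshal ignores the unknown "extra" field...
	if err := yaml.Unmarshal([]byte("name: demo\nextra: 1\n"), &c); err != nil {
		panic(err)
	}
	fmt.Println(c.Name) // demo

	// ...while UnmarshalStrict rejects it.
	err := yaml.UnmarshalStrict([]byte("name: demo\nextra: 1\n"), &c)
	fmt.Println(err != nil) // true
}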
// unmarshal unmarshals the given YAML byte stream into the given interface,
// optionally performing the unmarshalling strictly
func unmarshal(yamlBytes []byte, obj interface{}, unmarshalFn func([]byte, interface{}) error, opts ...JSONOpt) error {
jsonTarget := reflect.ValueOf(obj)
jsonBytes, err := yamlToJSONTarget(yamlBytes, &jsonTarget, unmarshalFn)
if err != nil {
return fmt.Errorf("error converting YAML to JSON: %w", err)
}
err = jsonUnmarshal(bytes.NewReader(jsonBytes), obj, opts...)
if err != nil {
return fmt.Errorf("error unmarshaling JSON: %w", err)
}
return nil
}
// jsonUnmarshal unmarshals the JSON byte stream from the given reader into the
// object, optionally applying decoder options prior to decoding. We are not
// using json.Unmarshal directly as we want the chance to pass in non-default
// options.
func jsonUnmarshal(reader io.Reader, obj interface{}, opts ...JSONOpt) error {
d := json.NewDecoder(reader)
for _, opt := range opts {
d = opt(d)
}
if err := d.Decode(&obj); err != nil {
return fmt.Errorf("while decoding JSON: %v", err)
}
return nil
}
// JSONToYAML converts JSON to YAML. Notable implementation details:
//
// - Duplicate fields are case-sensitively ignored in an undefined order.
// - The sequence indentation style is compact, which means that the "- " marker for a YAML sequence will be on the same indentation level as the sequence field name.
// - Unlike Unmarshal, all integers, up to 64 bits, are preserved during this round-trip.
func JSONToYAML(j []byte) ([]byte, error) {
// Convert the JSON to an object.
var jsonObj interface{}
// We are using yaml.Unmarshal here (instead of json.Unmarshal) because the
// Go JSON library doesn't try to pick the right number type (int, float,
// etc.) when unmarshalling to interface{}, it just picks float64
// universally. go-yaml does go through the effort of picking the right
// number type, so we can preserve number type throughout this process.
err := yaml.Unmarshal(j, &jsonObj)
if err != nil {
return nil, err
}
// Marshal this object into YAML.
yamlBytes, err := yaml.Marshal(jsonObj)
if err != nil {
return nil, err
}
return yamlBytes, nil
}
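A minimal JSONToYAML sketch; map keys come out sorted by the underlying yaml.v2 encoder:

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

func main() {
	y, err := yaml.JSONToYAML([]byte(`{"spec":{"replicas":3,"paused":false}}`))
	if err != nil {
		panic(err)
	}
	fmt.Print(string(y))
	// spec:
	//   paused: false
	//   replicas: 3
}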
// YAMLToJSON converts YAML to JSON. Since JSON is a subset of YAML,
// passing JSON through this method should be a no-op.
//
// Some things YAML can do that are not supported by JSON:
// - In YAML you can have binary and null keys in your maps. These are invalid
// in JSON, and therefore int, bool and float keys are converted to strings implicitly.
// - Binary data in YAML with the !!binary tag is not supported. If you want to
// use binary data with this library, encode the data as base64 as usual but do
// not use the !!binary tag in your YAML. This will ensure the original base64
// encoded data makes it all the way through to the JSON.
// - And more... read the YAML specification for more details.
//
// Notable about the implementation:
//
// - Duplicate fields are case-sensitively ignored in an undefined order. Note that the YAML specification forbids duplicate fields, so this logic is more permissive than it needs to be. See YAMLToJSONStrict for an alternative.
// - As per the YAML 1.1 specification, which the underlying yaml.v2 library implements, literal 'yes' and 'no' strings without quotation marks are converted to true/false implicitly.
// - Unlike Unmarshal, all integers, up to 64 bits, are preserved during this round-trip.
// - There are no compatibility guarantees for returned error values.
func YAMLToJSON(y []byte) ([]byte, error) {
return yamlToJSONTarget(y, nil, yaml.Unmarshal)
}
// YAMLToJSONStrict is like YAMLToJSON but enables strict YAML decoding,
// returning an error on any duplicate field names.
func YAMLToJSONStrict(y []byte) ([]byte, error) {
return yamlToJSONTarget(y, nil, yaml.UnmarshalStrict)
}
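A brief sketch of YAMLToJSON on valid input, plus the strict variant rejecting a duplicate key:

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

func main() {
	j, err := yaml.YAMLToJSON([]byte("name: demo\nports:\n- 80\n- 443\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(j)) // {"name":"demo","ports":[80,443]}

	// Duplicate keys pass through YAMLToJSON but fail YAMLToJSONStrict.
	_, err = yaml.YAMLToJSONStrict([]byte("a: 1\na: 2\n"))
	fmt.Println(err != nil) // true
}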
func yamlToJSONTarget(yamlBytes []byte, jsonTarget *reflect.Value, unmarshalFn func([]byte, interface{}) error) ([]byte, error) {
// Convert the YAML to an object.
var yamlObj interface{}
err := unmarshalFn(yamlBytes, &yamlObj)
if err != nil {
return nil, err
}
// YAML objects are not completely compatible with JSON objects (e.g. you
// can have non-string keys in YAML). So, convert the YAML-compatible object
// to a JSON-compatible object, failing with an error if irrecoverable
// incompatibilities happen along the way.
jsonObj, err := convertToJSONableObject(yamlObj, jsonTarget)
if err != nil {
return nil, err
}
// Convert this object to JSON and return the data.
jsonBytes, err := json.Marshal(jsonObj)
if err != nil {
return nil, err
}
return jsonBytes, nil
}
func convertToJSONableObject(yamlObj interface{}, jsonTarget *reflect.Value) (interface{}, error) {
var err error
// Resolve jsonTarget to a concrete value (i.e. not a pointer or an
// interface). We pass decodingNull as false because we're not actually
// decoding into the value, we're just checking if the ultimate target is a
// string.
if jsonTarget != nil {
jsonUnmarshaler, textUnmarshaler, pointerValue := indirect(*jsonTarget, false)
// We have a JSON or Text Unmarshaler at this level, so we can't be trying
// to decode into a string.
if jsonUnmarshaler != nil || textUnmarshaler != nil {
jsonTarget = nil
} else {
jsonTarget = &pointerValue
}
}
// If yamlObj is a number or a boolean, check if jsonTarget is a string -
// if so, coerce. Else return normal.
// If yamlObj is a map or array, find the field that each key is
// unmarshaling to, and when you recurse pass the reflect.Value for that
// field back into this function.
switch typedYAMLObj := yamlObj.(type) {
case map[interface{}]interface{}:
// JSON does not support arbitrary keys in a map, so we must convert
// these keys to strings.
//
// From my reading of go-yaml v2 (specifically the resolve function),
// keys can only have the types string, int, int64, float64, binary
// (unsupported), or null (unsupported).
strMap := make(map[string]interface{})
for k, v := range typedYAMLObj {
// Resolve the key to a string first.
var keyString string
switch typedKey := k.(type) {
case string:
keyString = typedKey
case int:
keyString = strconv.Itoa(typedKey)
case int64:
// go-yaml will only return an int64 as a key if the system
// architecture is 32-bit and the key's value is between 32-bit
// and 64-bit. Otherwise the key type will simply be int.
keyString = strconv.FormatInt(typedKey, 10)
case float64:
// Stolen from go-yaml to use the same conversion to string as
// the go-yaml library uses to convert float to string when
// Marshaling.
s := strconv.FormatFloat(typedKey, 'g', -1, 32)
switch s {
case "+Inf":
s = ".inf"
case "-Inf":
s = "-.inf"
case "NaN":
s = ".nan"
}
keyString = s
case bool:
if typedKey {
keyString = "true"
} else {
keyString = "false"
}
default:
return nil, fmt.Errorf("unsupported map key of type: %s, key: %+#v, value: %+#v",
reflect.TypeOf(k), k, v)
}
// jsonTarget should be a struct or a map. If it's a struct, find
// the field it's going to map to and pass its reflect.Value. If
// it's a map, find the element type of the map and pass the
// reflect.Value created from that type. If it's neither, just pass
// nil - JSON conversion will error for us if it's a real issue.
if jsonTarget != nil {
t := *jsonTarget
if t.Kind() == reflect.Struct {
keyBytes := []byte(keyString)
// Find the field that the JSON library would use.
var f *field
fields := cachedTypeFields(t.Type())
for i := range fields {
ff := &fields[i]
if bytes.Equal(ff.nameBytes, keyBytes) {
f = ff
break
}
// Do case-insensitive comparison.
if f == nil && ff.equalFold(ff.nameBytes, keyBytes) {
f = ff
}
}
if f != nil {
// Find the reflect.Value of the most preferential
// struct field.
jtf := t.Field(f.index[0])
strMap[keyString], err = convertToJSONableObject(v, &jtf)
if err != nil {
return nil, err
}
continue
}
} else if t.Kind() == reflect.Map {
// Create a zero value of the map's element type to use as
// the JSON target.
jtv := reflect.Zero(t.Type().Elem())
strMap[keyString], err = convertToJSONableObject(v, &jtv)
if err != nil {
return nil, err
}
continue
}
}
strMap[keyString], err = convertToJSONableObject(v, nil)
if err != nil {
return nil, err
}
}
return strMap, nil
case []interface{}:
// We need to recurse into arrays in case they contain any
// map[interface{}]interface{} values, and to convert any
// numbers to strings.
// If jsonTarget is a slice (which it really should be), find the
// element type it maps to. If it's not a slice, just pass nil
// - JSON conversion will error for us if it's a real issue.
var jsonSliceElemValue *reflect.Value
if jsonTarget != nil {
t := *jsonTarget
if t.Kind() == reflect.Slice {
// By default slices point to nil, but we need a reflect.Value
// pointing to a value of the slice type, so we create one here.
ev := reflect.Indirect(reflect.New(t.Type().Elem()))
jsonSliceElemValue = &ev
}
}
// Make and use a new array.
arr := make([]interface{}, len(typedYAMLObj))
for i, v := range typedYAMLObj {
arr[i], err = convertToJSONableObject(v, jsonSliceElemValue)
if err != nil {
return nil, err
}
}
return arr, nil
default:
// If the target type is a string and the YAML value is a number
// or a boolean, convert it to a string.
if jsonTarget != nil && (*jsonTarget).Kind() == reflect.String {
// Based on my reading of go-yaml, it may return int, int64,
// float64, or uint64.
var s string
switch typedVal := typedYAMLObj.(type) {
case int:
s = strconv.FormatInt(int64(typedVal), 10)
case int64:
s = strconv.FormatInt(typedVal, 10)
case float64:
s = strconv.FormatFloat(typedVal, 'g', -1, 32)
case uint64:
s = strconv.FormatUint(typedVal, 10)
case bool:
if typedVal {
s = "true"
} else {
s = "false"
}
}
if len(s) > 0 {
yamlObj = interface{}(s)
}
}
return yamlObj, nil
}
}
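
// A minimal illustration (not part of the vendored file) of what the
// coercion above enables, assuming this package's public YAMLToJSON and
// Unmarshal APIs:
//
// package main
//
// import (
// 	"fmt"
//
// 	"sigs.k8s.io/yaml"
// )
//
// func main() {
// 	// Non-string YAML map keys (int, float, bool) are coerced to JSON
// 	// string keys.
// 	j, err := yaml.YAMLToJSON([]byte("1: a\ntrue: b\n"))
// 	if err != nil {
// 		panic(err)
// 	}
// 	fmt.Println(string(j)) // {"1":"a","true":"b"}
//
// 	// A YAML number is converted to a string when the Go target field
// 	// is a string.
// 	var cfg struct {
// 		Port string `json:"port"`
// 	}
// 	if err := yaml.Unmarshal([]byte("port: 8080"), &cfg); err != nil {
// 		panic(err)
// 	}
// 	fmt.Println(cfg.Port) // 8080
// }
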
// JSONObjectToYAMLObject converts an in-memory JSON object into a YAML in-memory MapSlice,
// without going through a byte representation. A nil or empty map[string]interface{} input is
// converted to an empty map, i.e. yaml.MapSlice(nil).
//
// interface{} slices stay interface{} slices. map[string]interface{} becomes yaml.MapSlice.
//
// int64 and float64 are down-cast following the logic of github.com/go-yaml/yaml:
// - float64s are down-cast as far as possible without data loss to int, int64, or uint64.
// - int64s are down-cast to int if possible without data loss.
//
// Unlike the JSON-to-YAML byte roundtrip, big int/int64/uint64 values do not lose precision.
//
// string, bool and any other types are unchanged.
func JSONObjectToYAMLObject(j map[string]interface{}) yaml.MapSlice {
if len(j) == 0 {
return nil
}
ret := make(yaml.MapSlice, 0, len(j))
for k, v := range j {
ret = append(ret, yaml.MapItem{Key: k, Value: jsonToYAMLValue(v)})
}
return ret
}

func jsonToYAMLValue(j interface{}) interface{} {
switch j := j.(type) {
case map[string]interface{}:
if j == nil {
return interface{}(nil)
}
return JSONObjectToYAMLObject(j)
case []interface{}:
if j == nil {
return interface{}(nil)
}
ret := make([]interface{}, len(j))
for i := range j {
ret[i] = jsonToYAMLValue(j[i])
}
return ret
case float64:
// replicate the logic in https://github.com/go-yaml/yaml/blob/51d6538a90f86fe93ac480b35f37b2be17fef232/resolve.go#L151
if i64 := int64(j); j == float64(i64) {
if i := int(i64); i64 == int64(i) {
return i
}
return i64
}
if ui64 := uint64(j); j == float64(ui64) {
return ui64
}
return j
case int64:
if i := int(j); j == int64(i) {
return i
}
return j
}
return j
}
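
// A short, hypothetical illustration (not part of the vendored file) of the
// down-casting rules above, assuming the exported JSONObjectToYAMLObject API:
//
// package main
//
// import (
// 	"fmt"
//
// 	"sigs.k8s.io/yaml"
// )
//
// func main() {
// 	out := yaml.JSONObjectToYAMLObject(map[string]interface{}{
// 		"a": 3.0,      // no fractional part: down-cast to int
// 		"b": 3.5,      // keeps its fraction: stays float64
// 		"c": int64(7), // fits in int: down-cast to int
// 	})
// 	for _, item := range out {
// 		fmt.Printf("%v: %T\n", item.Key, item.Value)
// 	}
// 	// Possible output (map iteration order is not fixed):
// 	// a: int
// 	// b: float64
// 	// c: int
// }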

e2e/vendor/sigs.k8s.io/yaml/yaml_go110.go generated vendored Normal file

@ -0,0 +1,31 @@
// This file contains changes that are only compatible with go 1.10 and onwards.

//go:build go1.10
// +build go1.10

/*
Copyright 2021 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package yaml

import "encoding/json"

// DisallowUnknownFields configures the JSON decoder to error out if unknown
// fields come along, instead of dropping them by default.
func DisallowUnknownFields(d *json.Decoder) *json.Decoder {
d.DisallowUnknownFields()
return d
}
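
// A hedged usage sketch (not part of the vendored file): the function above
// matches this package's JSONOpt signature, so it can be passed to Unmarshal
// to reject YAML fields that have no matching Go struct field:
//
// package main
//
// import (
// 	"fmt"
//
// 	"sigs.k8s.io/yaml"
// )
//
// func main() {
// 	var cfg struct {
// 		Name string `json:"name"`
// 	}
// 	// "extra" has no corresponding field in cfg, so strict decoding fails.
// 	err := yaml.Unmarshal([]byte("name: a\nextra: b\n"), &cfg, yaml.DisallowUnknownFields)
// 	fmt.Println(err != nil) // true
// }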