vendor files

Serguei Bezverkhi
2018-01-09 13:57:14 -05:00
parent 558bc6c02a
commit 7b24313bd6
16547 changed files with 4527373 additions and 0 deletions

164
vendor/k8s.io/kubernetes/build/BUILD generated vendored Normal file

@ -0,0 +1,164 @@
package(default_visibility = ["//visibility:public"])
load("@io_bazel_rules_docker//docker:docker.bzl", "docker_build", "docker_bundle")
load("@io_kubernetes_build//defs:build.bzl", "release_filegroup")
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
)
filegroup(
name = "all-srcs",
srcs = [
":package-srcs",
"//build/debs:all-srcs",
"//build/release-tars:all-srcs",
"//build/rpms:all-srcs",
"//build/visible_to:all-srcs",
],
tags = ["automanaged"],
)
# This list should roughly match kube::build::get_docker_wrapped_binaries()
# in build/common.sh.
DOCKERIZED_BINARIES = {
"cloud-controller-manager": {
"base": "@official_busybox//image",
"target": "//cmd/cloud-controller-manager:cloud-controller-manager",
},
"kube-apiserver": {
"base": "@official_busybox//image",
"target": "//cmd/kube-apiserver:kube-apiserver",
},
"kube-controller-manager": {
"base": "@official_busybox//image",
"target": "//cmd/kube-controller-manager:kube-controller-manager",
},
"kube-scheduler": {
"base": "@official_busybox//image",
"target": "//plugin/cmd/kube-scheduler:kube-scheduler",
},
"kube-proxy": {
"base": "@debian-iptables-amd64//image",
"target": "//cmd/kube-proxy:kube-proxy",
},
}
[docker_build(
name = binary + "-internal",
base = meta["base"],
cmd = ["/usr/bin/" + binary],
debs = [
"//build/debs:%s.deb" % binary,
],
symlinks = {
# Some cluster startup scripts expect to find the binaries in /usr/local/bin,
# but the debs install the binaries into /usr/bin.
"/usr/local/bin/" + binary: "/usr/bin/" + binary,
},
) for binary, meta in DOCKERIZED_BINARIES.items()]
[docker_bundle(
name = binary,
images = {"gcr.io/google_containers/%s:{STABLE_DOCKER_TAG}" % binary: binary + "-internal"},
stamp = True,
) for binary in DOCKERIZED_BINARIES.keys()]
[genrule(
name = binary + "_docker_tag",
srcs = [meta["target"]],
outs = [binary + ".docker_tag"],
cmd = "grep ^STABLE_DOCKER_TAG bazel-out/stable-status.txt | awk '{print $$2}' >$@",
stamp = 1,
) for binary, meta in DOCKERIZED_BINARIES.items()]
genrule(
name = "os_package_version",
outs = ["version"],
cmd = """
grep ^STABLE_BUILD_SCM_REVISION bazel-out/stable-status.txt \
| awk '{print $$2}' \
| sed -e 's/^v//' -Ee 's/-([a-z]+)/~\\1/' -e 's/-/+/g' \
>$@
""",
stamp = 1,
)
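For illustration, here is how that `sed` pipeline rewrites a (hypothetical) SCM revision into a Debian-style package version — the `~` makes pre-release suffixes sort before the final release in dpkg version ordering:

```bash
# Input value is hypothetical; the pipeline matches the genrule above.
echo "v1.9.0-alpha.2.123+01234abcd" \
  | sed -e 's/^v//' -Ee 's/-([a-z]+)/~\1/' -e 's/-/+/g'
# prints: 1.9.0~alpha.2.123+01234abcd
```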
genrule(
name = "cni_package_version",
outs = ["cni_version"],
cmd = "echo 0.5.1 >$@",
)
release_filegroup(
name = "docker-artifacts",
srcs = [":%s.tar" % binary for binary in DOCKERIZED_BINARIES.keys()] +
[":%s.docker_tag" % binary for binary in DOCKERIZED_BINARIES.keys()],
)
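As a point of reference, a hypothetical invocation of these rules (assuming the package is loaded at `//build` in a configured Bazel workspace) would be:

```bash
# Build all dockerized server images plus their .docker_tag files.
bazel build //build:docker-artifacts
```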
# KUBE_CLIENT_TARGETS
release_filegroup(
name = "client-targets",
srcs = [
"//cmd/kubectl",
],
)
# KUBE_NODE_TARGETS
release_filegroup(
name = "node-targets",
srcs = [
"//cmd/kube-proxy",
"//cmd/kubeadm",
"//cmd/kubelet",
],
)
# KUBE_SERVER_TARGETS
# No need to duplicate CLIENT_TARGETS or NODE_TARGETS here,
# since we include them in the actual build rule.
release_filegroup(
name = "server-targets",
srcs = [
"//cluster/gce/gci/mounter",
"//cmd/cloud-controller-manager",
"//cmd/hyperkube",
"//cmd/kube-apiserver",
"//cmd/kube-controller-manager",
"//plugin/cmd/kube-scheduler",
"//vendor/k8s.io/kube-aggregator",
],
)
# kube::golang::test_targets
filegroup(
name = "test-targets",
srcs = [
"//cmd/gendocs",
"//cmd/genkubedocs",
"//cmd/genman",
"//cmd/genswaggertypedocs",
"//cmd/genyaml",
"//cmd/kubemark", # TODO: server platforms only
"//cmd/linkcheck",
"//test/e2e:e2e.test",
"//test/e2e_node:e2e_node.test", # TODO: server platforms only
"//vendor/github.com/onsi/ginkgo/ginkgo",
],
)
# KUBE_TEST_PORTABLE
filegroup(
name = "test-portable-targets",
srcs = [
"//hack:e2e.go",
"//hack:get-build.sh",
"//hack:ginkgo-e2e.sh",
"//hack/e2e-internal:all-srcs",
"//hack/lib:all-srcs",
"//test/e2e/testing-manifests:all-srcs",
"//test/kubemark:all-srcs",
],
)

15
vendor/k8s.io/kubernetes/build/OWNERS generated vendored Normal file

@ -0,0 +1,15 @@
reviewers:
- cblecker
- ixdy
- jbeda
- lavalamp
- zmerlynn
- spxtr
approvers:
- cblecker
- ixdy
- jbeda
- lavalamp
- zmerlynn
- mikedanese
- spxtr

112
vendor/k8s.io/kubernetes/build/README.md generated vendored Normal file

@ -0,0 +1,112 @@
# Building Kubernetes
Building Kubernetes is easy if you take advantage of the containerized build environment. This document will help you understand this build process.
## Requirements
1. Docker, using one of the following configurations:
   * **Mac OS X** You can either use Docker for Mac or docker-machine. See installation instructions [here](https://docs.docker.com/docker-for-mac/).
     **Note**: You will want to set the Docker VM to have at least 3GB of initial memory or building will likely fail (see [#11852](http://issue.k8s.io/11852)).
   * **Linux with local Docker** Install Docker according to the [instructions](https://docs.docker.com/installation/#installation) for your OS.
   * **Remote Docker engine** Use a big machine in the cloud to build faster. This is a little trickier; see the section on remote Docker engines below.
2. **Optional** [Google Cloud SDK](https://developers.google.com/cloud/sdk/)
   You must install and configure the Google Cloud SDK if you want to upload your release to Google Cloud Storage; otherwise you may safely omit this.
## Overview
While it is possible to build Kubernetes using a local golang installation, we have a build process that runs in a Docker container. This simplifies initial setup and provides a very consistent build and test environment.
## Key scripts
The following scripts are found in the `build/` directory. Note that all scripts must be run from the Kubernetes root directory.
* `build/run.sh`: Run a command in a build docker container. Common invocations:
  * `build/run.sh make`: Build just the Linux binaries in the container. Pass options and packages as necessary.
  * `build/run.sh make cross`: Build all binaries for all platforms.
  * `build/run.sh make test`: Run all unit tests.
  * `build/run.sh make test-integration`: Run integration tests.
  * `build/run.sh make test-cmd`: Run CLI tests.
* `build/copy-output.sh`: This will copy the contents of `_output/dockerized/bin` from the Docker container to the local `_output/dockerized/bin`. It will also copy out specific file patterns that are generated as part of the build process. This is run automatically as part of `build/run.sh`.
* `build/make-clean.sh`: Clean out the contents of `_output`, remove any locally built container images and remove the data container.
* `build/shell.sh`: Drop into a `bash` shell in a build container with a snapshot of the current repo code.
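For day-to-day work, a minimal sketch of an incremental loop looks like this (the `WHAT` variable narrows the build or test run to a single component; treat the exact targets as illustrative):

```bash
# Rebuild one binary inside the build container.
build/run.sh make WHAT=cmd/kubectl
# Re-run the unit tests for one package.
build/run.sh make test WHAT=./pkg/kubelet
```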
## Basic Flow
The scripts directly under `build/` are used to build and test. They will ensure that the `kube-build` Docker image is built (based on `build/build-image/Dockerfile`) and then execute the appropriate command in that container. These scripts will both ensure that the right data is cached from run to run for incremental builds and will copy the results back out of the container.
The `kube-build` container image is built by first creating a "context" directory in `_output/images/build-image`. It is done there instead of at the root of the Kubernetes repo to minimize the amount of data we need to package up when building the image.
There are three different container instances run from this image. The first is a "data" container, which stores all data that needs to persist across builds to support incremental builds. Next there is an "rsync" container, used to transfer data in and out of the data container. Lastly there is a "build" container, used for actually running build actions. The data container persists across runs, while the rsync and build containers are deleted after each use.
`rsync` is used transparently behind the scenes to efficiently move data in and out of the container. This will use an ephemeral port picked by Docker. You can modify this by setting the `KUBE_RSYNC_PORT` env variable.
All Docker names are suffixed with a hash derived from the file path (to allow concurrent usage on things like CI machines) and a version number. When the version number changes, all state is cleared and a clean build is started. This allows the build infrastructure to be changed and signals to CI systems that old artifacts need to be deleted.
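If you need the host-side rsync port to be predictable (for example, to forward it over SSH), you can pin it for a run; `8730` below is just an example value:

```bash
# Pin the port rsync is exposed on instead of letting Docker pick one.
KUBE_RSYNC_PORT=8730 build/run.sh make
```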
## Proxy Settings
If you are behind a proxy and you are letting these scripts use `docker-machine` to set up your local VM for you on macOS, you need to export proxy settings for the Kubernetes build; the following environment variables should be defined.
```
export KUBERNETES_HTTP_PROXY=http://username:password@proxyaddr:proxyport
export KUBERNETES_HTTPS_PROXY=https://username:password@proxyaddr:proxyport
```
Optionally, you can specify addresses that should bypass the proxy for the Kubernetes build, for example
```
export KUBERNETES_NO_PROXY=127.0.0.1
```
If you are using `sudo` to build Kubernetes (for example, `sudo make quick-release`), you need to run `sudo -E make quick-release` to pass the environment variables through.
## Really Remote Docker Engine
It is possible to use a Docker Engine that is running remotely (under your desk or in the cloud). Docker must be configured to connect to that machine and the local rsync port must be forwarded (via SSH or nc) from localhost to the remote machine.
To do this easily with GCE and `docker-machine`, do something like this:
```
# Create the remote docker machine on GCE. This is a pretty beefy machine with SSD disk.
KUBE_BUILD_VM=k8s-build
KUBE_BUILD_GCE_PROJECT=<project>
docker-machine create \
--driver=google \
--google-project=${KUBE_BUILD_GCE_PROJECT} \
--google-zone=us-west1-a \
--google-machine-type=n1-standard-8 \
--google-disk-size=50 \
--google-disk-type=pd-ssd \
${KUBE_BUILD_VM}
# Set up local docker to talk to that machine
eval $(docker-machine env ${KUBE_BUILD_VM})
# Pin down the port that rsync will be exposed on the remote machine
export KUBE_RSYNC_PORT=8730
# forward local 8730 to that machine so that rsync works
docker-machine ssh ${KUBE_BUILD_VM} -L ${KUBE_RSYNC_PORT}:localhost:${KUBE_RSYNC_PORT} -N &
```
Look at `docker-machine stop`, `docker-machine start` and `docker-machine rm` to manage this VM.
## Releasing
The `build/release.sh` script will build a release. It will build binaries, run tests, and (optionally) build runtime Docker images.
The main output is a tar file: `kubernetes.tar.gz`. This includes:
* Cross compiled client utilities.
* Script (`kubectl`) for picking and running the right client binary based on platform.
* Examples.
* Cluster deployment scripts for various clouds.
* Tar file containing all server binaries.
* Tar file containing salt deployment tree shared across multiple cloud deployments.
In addition, there are some other tar files that are created:
* `kubernetes-client-*.tar.gz` Client binaries for a specific platform.
* `kubernetes-server-*.tar.gz` Server binaries for a specific platform.
* `kubernetes-salt.tar.gz` The salt script/tree shared across multiple deployment scripts.
When building final release tars, they are first staged into `_output/release-stage` before being tar'd up and put into `_output/release-tars`.
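A minimal end-to-end sketch, assuming a configured Docker environment:

```bash
# Build a full release, then inspect the staged tarballs described above.
build/release.sh
ls _output/release-tars/
```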
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/build/README.md?pixel)]()

54
vendor/k8s.io/kubernetes/build/build-image/Dockerfile generated vendored Normal file

@ -0,0 +1,54 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file creates a standard build environment for building Kubernetes
FROM gcr.io/google_containers/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG
# Mark this as a kube-build container
RUN touch /kube-build-image
# To run as non-root we sometimes need to rebuild go stdlib packages.
RUN chmod -R a+rwx /usr/local/go/pkg
# For running integration tests /var/run/kubernetes is required
# and should be writable by user
RUN mkdir /var/run/kubernetes && chmod a+rwx /var/run/kubernetes
# The kubernetes source is expected to be mounted here. This will be the base
# of operations.
ENV HOME /go/src/k8s.io/kubernetes
WORKDIR ${HOME}
# Make output from the dockerized build go someplace else
ENV KUBE_OUTPUT_SUBPATH _output/dockerized
# Pick up version stuff here as we don't copy our .git over.
ENV KUBE_GIT_VERSION_FILE ${HOME}/.dockerized-kube-version-defs
# Add system-wide git user information
RUN git config --system user.email "nobody@k8s.io" \
&& git config --system user.name "kube-build-image"
# Fix permissions on gopath
RUN chmod -R a+rwx $GOPATH
# Make log messages use the right timezone
ADD localtime /etc/localtime
RUN chmod a+r /etc/localtime
# Set up rsyncd
ADD rsyncd.password /
RUN chmod a+r /rsyncd.password
ADD rsyncd.sh /
RUN chmod a+rx /rsyncd.sh

1
vendor/k8s.io/kubernetes/build/build-image/VERSION generated vendored Normal file

@ -0,0 +1 @@
5

78
vendor/k8s.io/kubernetes/build/build-image/cross/Dockerfile generated vendored Normal file

@ -0,0 +1,78 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file creates a standard build environment for building cross
# platform go binary for the architecture kubernetes cares about.
FROM golang:1.9.2
ENV GOARM 7
ENV KUBE_DYNAMIC_CROSSPLATFORMS \
armhf \
arm64 \
s390x \
ppc64el
ENV KUBE_CROSSPLATFORMS \
linux/386 \
linux/arm linux/arm64 \
linux/ppc64le \
linux/s390x \
darwin/amd64 darwin/386 \
windows/amd64 windows/386
# Pre-compile the standard go library when cross-compiling. This is much easier now when we have go1.5+
RUN for platform in ${KUBE_CROSSPLATFORMS}; do GOOS=${platform%/*} GOARCH=${platform##*/} go install std; done
# Install g++, then download and install protoc for generating protobuf output
RUN apt-get update \
&& apt-get install -y g++ rsync jq apt-utils file patch \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /usr/local/src/protobuf \
&& cd /usr/local/src/protobuf \
&& curl -sSL https://github.com/google/protobuf/releases/download/v3.0.0-beta-2/protobuf-cpp-3.0.0-beta-2.tar.gz | tar -xzv \
&& cd protobuf-3.0.0-beta-2 \
&& ./configure \
&& make install \
&& ldconfig \
&& cd .. \
&& rm -rf protobuf-3.0.0-beta-2 \
&& protoc --version
# Use dynamic cgo linking for architectures other than amd64 for the server platforms
# To install crossbuild essential for other architectures add the following repository.
RUN echo "deb http://archive.ubuntu.com/ubuntu xenial main universe" > /etc/apt/sources.list.d/cgocrosscompiling.list \
&& apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 40976EAF437D05B5 3B4FE6ACC0B21F32 \
&& apt-get update \
&& apt-get install -y build-essential \
&& for platform in ${KUBE_DYNAMIC_CROSSPLATFORMS}; do apt-get install -y crossbuild-essential-${platform}; done \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# work around 64MB tmpfs size in Docker 1.6
ENV TMPDIR /tmp.k8s
RUN mkdir $TMPDIR \
&& chmod a+rwx $TMPDIR \
&& chmod o+t $TMPDIR
# Get the code coverage tool and goimports
RUN go get golang.org/x/tools/cmd/cover \
golang.org/x/tools/cmd/goimports
# Download and symlink etcd. We need this for our integration tests.
RUN export ETCD_VERSION=v3.1.10; \
mkdir -p /usr/local/src/etcd \
&& cd /usr/local/src/etcd \
&& curl -fsSL https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz | tar -xz \
&& ln -s ../src/etcd/etcd-${ETCD_VERSION}-linux-amd64/etcd /usr/local/bin/

27
vendor/k8s.io/kubernetes/build/build-image/cross/Makefile generated vendored Normal file

@ -0,0 +1,27 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
.PHONY: build push
IMAGE=kube-cross
TAG=$(shell cat VERSION)
all: push

build:
	docker build --pull -t gcr.io/google_containers/$(IMAGE):$(TAG) .

push: build
	gcloud docker --server=gcr.io -- push gcr.io/google_containers/$(IMAGE):$(TAG)

1
vendor/k8s.io/kubernetes/build/build-image/cross/VERSION generated vendored Normal file

@ -0,0 +1 @@
v1.9.2-1

83
vendor/k8s.io/kubernetes/build/build-image/rsyncd.sh generated vendored Executable file

@ -0,0 +1,83 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script will set up and run rsyncd to allow data to move into and out of
# our dockerized build system. This is used for syncing sources and source
# changes into the Docker build container. It is also used to transfer built
# binaries and generated files back out.
#
# When run as root (rare) it will preserve the file IDs as sent from the client.
# Usually it will be run as the non-dockerized user's UID/GID, and all file
# ownership will end up translated to that user.
set -o errexit
set -o nounset
set -o pipefail
# The directory that gets sync'd
VOLUME=${HOME}
# Assume that this is running in Docker on a bridge. Allow connections from
# anything on the local subnet.
ALLOW=$(ip route | awk '/^default via/ { reg = "^[0-9./]+ dev "$5 } ; $0 ~ reg { print $1 }')
CONFDIR="/tmp/rsync.k8s"
PIDFILE="${CONFDIR}/rsyncd.pid"
CONFFILE="${CONFDIR}/rsyncd.conf"
SECRETS="${CONFDIR}/rsyncd.secrets"
mkdir -p "${CONFDIR}"
if [[ -f "${PIDFILE}" ]]; then
PID=$(cat "${PIDFILE}")
echo "Cleaning up old PID file: ${PIDFILE}"
kill $PID &> /dev/null || true
rm "${PIDFILE}"
fi
PASSWORD=$(</rsyncd.password)
cat <<EOF >"${SECRETS}"
k8s:${PASSWORD}
EOF
chmod go= "${SECRETS}"
USER_CONFIG=
if [[ "$(id -u)" == "0" ]]; then
USER_CONFIG=" uid = 0"$'\n'" gid = 0"
fi
cat <<EOF >"${CONFFILE}"
pid file = ${PIDFILE}
use chroot = no
log file = /dev/stdout
reverse lookup = no
munge symlinks = no
port = 8730
[k8s]
numeric ids = true
$USER_CONFIG
hosts deny = *
hosts allow = ${ALLOW} ${ALLOW_HOST-}
auth users = k8s
secrets file = ${SECRETS}
read only = false
path = ${VOLUME}
filter = - /.make/ - /_tmp/
EOF
exec /usr/bin/rsync --no-detach --daemon --config="${CONFFILE}" "$@"
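For reference, a client can check that the daemon is reachable by listing the `k8s` module with the shared password file — this mirrors what `kube::build::rsync_probe` in build/common.sh does (the address and password-file variables below come from that script and are placeholders here):

```bash
rsync "rsync://k8s@${KUBE_RSYNC_ADDR}/" \
    --password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
```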

770
vendor/k8s.io/kubernetes/build/common.sh generated vendored Executable file

@ -0,0 +1,770 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Common utilities, variables and checks for all build scripts.
set -o errexit
set -o nounset
set -o pipefail
USER_ID=$(id -u)
GROUP_ID=$(id -g)
DOCKER_OPTS=${DOCKER_OPTS:-""}
DOCKER=(docker ${DOCKER_OPTS})
DOCKER_HOST=${DOCKER_HOST:-""}
DOCKER_MACHINE_NAME=${DOCKER_MACHINE_NAME:-"kube-dev"}
readonly DOCKER_MACHINE_DRIVER=${DOCKER_MACHINE_DRIVER:-"virtualbox --virtualbox-cpu-count -1"}
# This will canonicalize the path
KUBE_ROOT=$(cd $(dirname "${BASH_SOURCE}")/.. && pwd -P)
source "${KUBE_ROOT}/hack/lib/init.sh"
# Constants
readonly KUBE_BUILD_IMAGE_REPO=kube-build
readonly KUBE_BUILD_IMAGE_CROSS_TAG="$(cat ${KUBE_ROOT}/build/build-image/cross/VERSION)"
# This version number is used to cause everyone to rebuild their data containers
# and build image. This is especially useful for automated build systems like
# Jenkins.
#
# Increment/change this number if you change the build image (anything under
# build/build-image) or change the set of volumes in the data container.
readonly KUBE_BUILD_IMAGE_VERSION_BASE="$(cat ${KUBE_ROOT}/build/build-image/VERSION)"
readonly KUBE_BUILD_IMAGE_VERSION="${KUBE_BUILD_IMAGE_VERSION_BASE}-${KUBE_BUILD_IMAGE_CROSS_TAG}"
# Here we map the output directories across both the local and remote _output
# directories:
#
# *_OUTPUT_ROOT - the base of all output in that environment.
# *_OUTPUT_SUBPATH - location where golang stuff is built/cached. Also
# persisted across docker runs with a volume mount.
# *_OUTPUT_BINPATH - location where final binaries are placed. If the remote
# is really remote, this is the stuff that has to be copied
# back.
# OUT_DIR can come in from the Makefile, so honor it.
readonly LOCAL_OUTPUT_ROOT="${KUBE_ROOT}/${OUT_DIR:-_output}"
readonly LOCAL_OUTPUT_SUBPATH="${LOCAL_OUTPUT_ROOT}/dockerized"
readonly LOCAL_OUTPUT_BINPATH="${LOCAL_OUTPUT_SUBPATH}/bin"
readonly LOCAL_OUTPUT_GOPATH="${LOCAL_OUTPUT_SUBPATH}/go"
readonly LOCAL_OUTPUT_IMAGE_STAGING="${LOCAL_OUTPUT_ROOT}/images"
# This is a symlink to binaries for "this platform" (e.g. build tools).
readonly THIS_PLATFORM_BIN="${LOCAL_OUTPUT_ROOT}/bin"
readonly REMOTE_ROOT="/go/src/${KUBE_GO_PACKAGE}"
readonly REMOTE_OUTPUT_ROOT="${REMOTE_ROOT}/_output"
readonly REMOTE_OUTPUT_SUBPATH="${REMOTE_OUTPUT_ROOT}/dockerized"
readonly REMOTE_OUTPUT_BINPATH="${REMOTE_OUTPUT_SUBPATH}/bin"
readonly REMOTE_OUTPUT_GOPATH="${REMOTE_OUTPUT_SUBPATH}/go"
# This is the port on the workstation host to expose RSYNC on. Set this if you
# are doing something fancy with ssh tunneling.
readonly KUBE_RSYNC_PORT="${KUBE_RSYNC_PORT:-}"
# This is the port that rsync is running on *inside* the container. This may be
# mapped to KUBE_RSYNC_PORT via docker networking.
readonly KUBE_CONTAINER_RSYNC_PORT=8730
# Get the set of master binaries that run in Docker (on Linux)
# Entry format is "<name-of-binary>,<base-image>".
# Binaries are placed in /usr/local/bin inside the image.
#
# $1 - server architecture
kube::build::get_docker_wrapped_binaries() {
debian_iptables_version=v10
### If you change any of these lists, please also update DOCKERIZED_BINARIES
### in build/BUILD.
case $1 in
"amd64")
local targets=(
cloud-controller-manager,busybox
kube-apiserver,busybox
kube-controller-manager,busybox
kube-scheduler,busybox
kube-aggregator,busybox
kube-proxy,gcr.io/google-containers/debian-iptables-amd64:${debian_iptables_version}
);;
"arm")
local targets=(
cloud-controller-manager,arm32v7/busybox
kube-apiserver,arm32v7/busybox
kube-controller-manager,arm32v7/busybox
kube-scheduler,arm32v7/busybox
kube-aggregator,arm32v7/busybox
kube-proxy,gcr.io/google-containers/debian-iptables-arm:${debian_iptables_version}
);;
"arm64")
local targets=(
cloud-controller-manager,arm64v8/busybox
kube-apiserver,arm64v8/busybox
kube-controller-manager,arm64v8/busybox
kube-scheduler,arm64v8/busybox
kube-aggregator,arm64v8/busybox
kube-proxy,gcr.io/google-containers/debian-iptables-arm64:${debian_iptables_version}
);;
"ppc64le")
local targets=(
cloud-controller-manager,ppc64le/busybox
kube-apiserver,ppc64le/busybox
kube-controller-manager,ppc64le/busybox
kube-scheduler,ppc64le/busybox
kube-aggregator,ppc64le/busybox
kube-proxy,gcr.io/google-containers/debian-iptables-ppc64le:${debian_iptables_version}
);;
"s390x")
local targets=(
cloud-controller-manager,s390x/busybox
kube-apiserver,s390x/busybox
kube-controller-manager,s390x/busybox
kube-scheduler,s390x/busybox
kube-aggregator,s390x/busybox
kube-proxy,gcr.io/google-containers/debian-iptables-s390x:${debian_iptables_version}
);;
esac
echo "${targets[@]}"
}
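A sketch of how a caller can split the "<name-of-binary>,<base-image>" pairs this function emits (the loop itself is illustrative, not from this file):

```bash
for wrapped in $(kube::build::get_docker_wrapped_binaries amd64); do
  binary=${wrapped%%,*}     # part before the first comma
  base_image=${wrapped##*,} # part after the last comma
  echo "${binary} is built on top of ${base_image}"
done
```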
# ---------------------------------------------------------------------------
# Basic setup functions
# Verify that the right utilities and such are installed for building Kube. Set
# up some dynamic constants.
# Args:
# $1 - boolean of whether to require functioning docker (default true)
#
# Vars set:
# KUBE_ROOT_HASH
# KUBE_BUILD_IMAGE_TAG_BASE
# KUBE_BUILD_IMAGE_TAG
# KUBE_BUILD_IMAGE
# KUBE_BUILD_CONTAINER_NAME_BASE
# KUBE_BUILD_CONTAINER_NAME
# KUBE_DATA_CONTAINER_NAME_BASE
# KUBE_DATA_CONTAINER_NAME
# KUBE_RSYNC_CONTAINER_NAME_BASE
# KUBE_RSYNC_CONTAINER_NAME
# DOCKER_MOUNT_ARGS
# LOCAL_OUTPUT_BUILD_CONTEXT
function kube::build::verify_prereqs() {
local -r require_docker=${1:-true}
kube::log::status "Verifying Prerequisites...."
kube::build::ensure_tar || return 1
kube::build::ensure_rsync || return 1
if ${require_docker}; then
kube::build::ensure_docker_in_path || return 1
if kube::build::is_osx; then
kube::build::docker_available_on_osx || return 1
fi
kube::util::ensure_docker_daemon_connectivity || return 1
if (( ${KUBE_VERBOSE} > 6 )); then
kube::log::status "Docker Version:"
"${DOCKER[@]}" version | kube::log::info_from_stdin
fi
fi
KUBE_GIT_BRANCH=$(git symbolic-ref --short -q HEAD 2>/dev/null || true)
KUBE_ROOT_HASH=$(kube::build::short_hash "${HOSTNAME:-}:${KUBE_ROOT}:${KUBE_GIT_BRANCH}")
KUBE_BUILD_IMAGE_TAG_BASE="build-${KUBE_ROOT_HASH}"
KUBE_BUILD_IMAGE_TAG="${KUBE_BUILD_IMAGE_TAG_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
KUBE_BUILD_IMAGE="${KUBE_BUILD_IMAGE_REPO}:${KUBE_BUILD_IMAGE_TAG}"
KUBE_BUILD_CONTAINER_NAME_BASE="kube-build-${KUBE_ROOT_HASH}"
KUBE_BUILD_CONTAINER_NAME="${KUBE_BUILD_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
KUBE_RSYNC_CONTAINER_NAME_BASE="kube-rsync-${KUBE_ROOT_HASH}"
KUBE_RSYNC_CONTAINER_NAME="${KUBE_RSYNC_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
KUBE_DATA_CONTAINER_NAME_BASE="kube-build-data-${KUBE_ROOT_HASH}"
KUBE_DATA_CONTAINER_NAME="${KUBE_DATA_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}"
DOCKER_MOUNT_ARGS=(--volumes-from "${KUBE_DATA_CONTAINER_NAME}")
LOCAL_OUTPUT_BUILD_CONTEXT="${LOCAL_OUTPUT_IMAGE_STAGING}/${KUBE_BUILD_IMAGE}"
kube::version::get_version_vars
kube::version::save_version_vars "${KUBE_ROOT}/.dockerized-kube-version-defs"
}
# ---------------------------------------------------------------------------
# Utility functions
function kube::build::docker_available_on_osx() {
if [[ -z "${DOCKER_HOST}" ]]; then
if [[ -S "/var/run/docker.sock" ]]; then
kube::log::status "Using Docker for MacOS"
return 0
fi
kube::log::status "No docker host is set. Checking options for setting one..."
if [[ -z "$(which docker-machine)" ]]; then
kube::log::status "It looks like you're running Mac OS X, yet neither Docker for Mac nor docker-machine can be found."
kube::log::status "See: https://docs.docker.com/engine/installation/mac/ for installation instructions."
return 1
elif [[ -n "$(which docker-machine)" ]]; then
kube::build::prepare_docker_machine
fi
fi
}
function kube::build::prepare_docker_machine() {
kube::log::status "docker-machine was found."
local available_memory_bytes=$(sysctl -n hw.memsize 2>/dev/null)
local bytes_in_mb=1048576
# Give virtualbox 1/2 the system memory. It's necessary to divide by 2, instead
# of multiplying by .5, because bash can only multiply by ints.
local memory_divisor=2
local virtualbox_memory_mb=$(( ${available_memory_bytes} / (${bytes_in_mb} * ${memory_divisor}) ))
docker-machine inspect "${DOCKER_MACHINE_NAME}" &> /dev/null || {
kube::log::status "Creating a machine to build Kubernetes"
docker-machine create --driver ${DOCKER_MACHINE_DRIVER} \
--virtualbox-memory "${virtualbox_memory_mb}" \
--engine-env HTTP_PROXY="${KUBERNETES_HTTP_PROXY:-}" \
--engine-env HTTPS_PROXY="${KUBERNETES_HTTPS_PROXY:-}" \
--engine-env NO_PROXY="${KUBERNETES_NO_PROXY:-127.0.0.1}" \
"${DOCKER_MACHINE_NAME}" > /dev/null || {
kube::log::error "Something went wrong creating a machine."
kube::log::error "Try the following: "
kube::log::error "docker-machine create -d ${DOCKER_MACHINE_DRIVER} --virtualbox-memory ${virtualbox_memory_mb} ${DOCKER_MACHINE_NAME}"
return 1
}
}
docker-machine start "${DOCKER_MACHINE_NAME}" &> /dev/null
# it takes `docker-machine env` a few seconds to work if the machine was just started
local docker_machine_out
while ! docker_machine_out=$(docker-machine env "${DOCKER_MACHINE_NAME}" 2>&1); do
if [[ ${docker_machine_out} =~ "Error checking TLS connection" ]]; then
echo ${docker_machine_out}
docker-machine regenerate-certs ${DOCKER_MACHINE_NAME}
else
sleep 1
fi
done
eval $(docker-machine env "${DOCKER_MACHINE_NAME}")
kube::log::status "A Docker host using docker-machine named '${DOCKER_MACHINE_NAME}' is ready to go!"
return 0
}
function kube::build::is_osx() {
[[ "$(uname)" == "Darwin" ]]
}
function kube::build::is_gnu_sed() {
[[ $(sed --version 2>&1) == *GNU* ]]
}
function kube::build::ensure_rsync() {
if [[ -z "$(which rsync)" ]]; then
kube::log::error "Can't find 'rsync' in PATH, please fix and retry."
return 1
fi
}
function kube::build::update_dockerfile() {
if kube::build::is_gnu_sed; then
sed_opts=(-i)
else
sed_opts=(-i '')
fi
sed "${sed_opts[@]}" "s/KUBE_BUILD_IMAGE_CROSS_TAG/${KUBE_BUILD_IMAGE_CROSS_TAG}/" "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
}
function kube::build::set_proxy() {
if [[ -n "${KUBERNETES_HTTPS_PROXY:-}" ]]; then
echo "ENV https_proxy $KUBERNETES_HTTPS_PROXY" >> "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
fi
if [[ -n "${KUBERNETES_HTTP_PROXY:-}" ]]; then
echo "ENV http_proxy $KUBERNETES_HTTP_PROXY" >> "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
fi
if [[ -n "${KUBERNETES_NO_PROXY:-}" ]]; then
echo "ENV no_proxy $KUBERNETES_NO_PROXY" >> "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
fi
}
function kube::build::ensure_docker_in_path() {
if [[ -z "$(which docker)" ]]; then
kube::log::error "Can't find 'docker' in PATH, please fix and retry."
kube::log::error "See https://docs.docker.com/installation/#installation for installation instructions."
return 1
fi
}
function kube::build::ensure_tar() {
if [[ -n "${TAR:-}" ]]; then
return
fi
# Find gnu tar if it is available, bomb out if not.
TAR=tar
if which gtar &>/dev/null; then
TAR=gtar
else
if which gnutar &>/dev/null; then
TAR=gnutar
fi
fi
if ! "${TAR}" --version | grep -q GNU; then
echo " !!! Cannot find GNU tar. Build on Linux or install GNU tar"
echo " on Mac OS X (brew install gnu-tar)."
return 1
fi
}
function kube::build::has_docker() {
which docker &> /dev/null
}
function kube::build::has_ip() {
which ip &> /dev/null && ip -Version | grep 'iproute2' &> /dev/null
}
# Detect if a specific image exists
#
# $1 - image repo name
# #2 - image tag
function kube::build::docker_image_exists() {
[[ -n $1 && -n $2 ]] || {
kube::log::error "Internal error. Image not specified in docker_image_exists."
exit 2
}
[[ $("${DOCKER[@]}" images -q "${1}:${2}") ]]
}
# Delete all images that match a tag prefix except for the "current" version
#
# $1: The image repo/name
# $2: The tag base. We consider any image that matches $2*
# $3: The current image not to delete if provided
function kube::build::docker_delete_old_images() {
# In Docker 1.12, we can replace this with
# docker images "$1" --format "{{.Tag}}"
for tag in $("${DOCKER[@]}" images ${1} | tail -n +2 | awk '{print $2}') ; do
if [[ "${tag}" != "${2}"* ]] ; then
V=3 kube::log::status "Keeping image ${1}:${tag}"
continue
fi
if [[ -z "${3:-}" || "${tag}" != "${3}" ]] ; then
V=2 kube::log::status "Deleting image ${1}:${tag}"
"${DOCKER[@]}" rmi "${1}:${tag}" >/dev/null
else
V=3 kube::log::status "Keeping image ${1}:${tag}"
fi
done
}
# Stop and delete all containers that match a pattern
#
# $1: The base container prefix
# $2: The current container to keep, if provided
function kube::build::docker_delete_old_containers() {
# In Docker 1.12 we can replace this line with
# docker ps -a --format="{{.Names}}"
for container in $("${DOCKER[@]}" ps -a | tail -n +2 | awk '{print $NF}') ; do
if [[ "${container}" != "${1}"* ]] ; then
V=3 kube::log::status "Keeping container ${container}"
continue
fi
if [[ -z "${2:-}" || "${container}" != "${2}" ]] ; then
V=2 kube::log::status "Deleting container ${container}"
kube::build::destroy_container "${container}"
else
V=3 kube::log::status "Keeping container ${container}"
fi
done
}
# Takes $1 and computes a short hash for it. Useful for unique tag generation
function kube::build::short_hash() {
[[ $# -eq 1 ]] || {
kube::log::error "Internal error. No data based to short_hash."
exit 2
}
local short_hash
if which md5 >/dev/null 2>&1; then
short_hash=$(md5 -q -s "$1")
else
short_hash=$(echo -n "$1" | md5sum)
fi
echo ${short_hash:0:10}
}
# Pedantically kill, wait-on and remove a container. The -f -v options
# to rm don't actually seem to get the job done, so force kill the
# container, wait to ensure it's stopped, then try the remove. This is
# a workaround for bug https://github.com/docker/docker/issues/3968.
function kube::build::destroy_container() {
"${DOCKER[@]}" kill "$1" >/dev/null 2>&1 || true
if [[ $("${DOCKER[@]}" version --format '{{.Server.Version}}') = 17.06.0* ]]; then
# Workaround https://github.com/moby/moby/issues/33948.
# TODO: remove when 17.06.0 is not relevant anymore
DOCKER_API_VERSION=v1.29 "${DOCKER[@]}" wait "$1" >/dev/null 2>&1 || true
else
"${DOCKER[@]}" wait "$1" >/dev/null 2>&1 || true
fi
"${DOCKER[@]}" rm -f -v "$1" >/dev/null 2>&1 || true
}
# ---------------------------------------------------------------------------
# Building
function kube::build::clean() {
if kube::build::has_docker ; then
kube::build::docker_delete_old_containers "${KUBE_BUILD_CONTAINER_NAME_BASE}"
kube::build::docker_delete_old_containers "${KUBE_RSYNC_CONTAINER_NAME_BASE}"
kube::build::docker_delete_old_containers "${KUBE_DATA_CONTAINER_NAME_BASE}"
kube::build::docker_delete_old_images "${KUBE_BUILD_IMAGE_REPO}" "${KUBE_BUILD_IMAGE_TAG_BASE}"
V=2 kube::log::status "Cleaning all untagged docker images"
"${DOCKER[@]}" rmi $("${DOCKER[@]}" images -q --filter 'dangling=true') 2> /dev/null || true
fi
if [[ -d "${LOCAL_OUTPUT_ROOT}" ]]; then
kube::log::status "Removing _output directory"
rm -rf "${LOCAL_OUTPUT_ROOT}"
fi
}
# Set up the context directory for the kube-build image and build it.
function kube::build::build_image() {
mkdir -p "${LOCAL_OUTPUT_BUILD_CONTEXT}"
# Make sure the context directory is owned by the right user for syncing sources to the container.
chown -R ${USER_ID}:${GROUP_ID} "${LOCAL_OUTPUT_BUILD_CONTEXT}"
cp /etc/localtime "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
cp build/build-image/Dockerfile "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
cp build/build-image/rsyncd.sh "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
dd if=/dev/urandom bs=512 count=1 2>/dev/null | LC_ALL=C tr -dc 'A-Za-z0-9' | dd bs=32 count=1 2>/dev/null > "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
chmod go= "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
kube::build::update_dockerfile
kube::build::set_proxy
kube::build::docker_build "${KUBE_BUILD_IMAGE}" "${LOCAL_OUTPUT_BUILD_CONTEXT}" 'false'
# Clean up old versions of everything
kube::build::docker_delete_old_containers "${KUBE_BUILD_CONTAINER_NAME_BASE}" "${KUBE_BUILD_CONTAINER_NAME}"
kube::build::docker_delete_old_containers "${KUBE_RSYNC_CONTAINER_NAME_BASE}" "${KUBE_RSYNC_CONTAINER_NAME}"
kube::build::docker_delete_old_containers "${KUBE_DATA_CONTAINER_NAME_BASE}" "${KUBE_DATA_CONTAINER_NAME}"
kube::build::docker_delete_old_images "${KUBE_BUILD_IMAGE_REPO}" "${KUBE_BUILD_IMAGE_TAG_BASE}" "${KUBE_BUILD_IMAGE_TAG}"
kube::build::ensure_data_container
kube::build::sync_to_container
}
# Build a docker image from a Dockerfile.
# $1 is the name of the image to build
# $2 is the location of the "context" directory, with the Dockerfile at the root.
# $3 is the value to set the --pull flag for docker build; true by default
function kube::build::docker_build() {
local -r image=$1
local -r context_dir=$2
local -r pull="${3:-true}"
local -ra build_cmd=("${DOCKER[@]}" build -t "${image}" "--pull=${pull}" "${context_dir}")
kube::log::status "Building Docker image ${image}"
local docker_output
docker_output=$("${build_cmd[@]}" 2>&1) || {
cat <<EOF >&2
+++ Docker build command failed for ${image}
${docker_output}
To retry manually, run:
${build_cmd[*]}
EOF
return 1
}
}
function kube::build::ensure_data_container() {
# If the data container exists AND exited successfully, we can use it.
# Otherwise nuke it and start over.
local ret=0
local code=$(docker inspect \
-f '{{.State.ExitCode}}' \
"${KUBE_DATA_CONTAINER_NAME}" 2>/dev/null || ret=$?)
if [[ "${ret}" == 0 && "${code}" != 0 ]]; then
kube::build::destroy_container "${KUBE_DATA_CONTAINER_NAME}"
ret=1
fi
if [[ "${ret}" != 0 ]]; then
kube::log::status "Creating data container ${KUBE_DATA_CONTAINER_NAME}"
# We have to ensure the directory exists, or else the docker run will
# create it as root.
mkdir -p "${LOCAL_OUTPUT_GOPATH}"
# We want this to run as root to be able to chown, so non-root users can
# later use the result as a data container. This run both creates the data
# container and chowns the GOPATH.
#
# The data container creates volumes for all of the directories that store
# intermediates for the Go build. This enables incremental builds across
# Docker sessions. The *_cgo paths are re-compiled versions of the go std
# libraries for true static building.
local -ra docker_cmd=(
"${DOCKER[@]}" run
--volume "${REMOTE_ROOT}" # white-out the whole output dir
--volume /usr/local/go/pkg/linux_386_cgo
--volume /usr/local/go/pkg/linux_amd64_cgo
--volume /usr/local/go/pkg/linux_arm_cgo
--volume /usr/local/go/pkg/linux_arm64_cgo
--volume /usr/local/go/pkg/linux_ppc64le_cgo
--volume /usr/local/go/pkg/darwin_amd64_cgo
--volume /usr/local/go/pkg/darwin_386_cgo
--volume /usr/local/go/pkg/windows_amd64_cgo
--volume /usr/local/go/pkg/windows_386_cgo
--name "${KUBE_DATA_CONTAINER_NAME}"
--hostname "${HOSTNAME}"
"${KUBE_BUILD_IMAGE}"
chown -R ${USER_ID}:${GROUP_ID}
"${REMOTE_ROOT}"
/usr/local/go/pkg/
)
"${docker_cmd[@]}"
fi
}
# Run a command in the kube-build image. This assumes that the image has
# already been built.
function kube::build::run_build_command() {
kube::log::status "Running build command..."
kube::build::run_build_command_ex "${KUBE_BUILD_CONTAINER_NAME}" -- "$@"
}
# Run a command in the kube-build image. This assumes that the image has
# already been built.
#
# Arguments are in the form of
# <container name> <extra docker args> -- <command>
function kube::build::run_build_command_ex() {
[[ $# != 0 ]] || { echo "Invalid input - please specify a container name." >&2; return 4; }
local container_name="${1}"
shift
local -a docker_run_opts=(
"--name=${container_name}"
"--user=$(id -u):$(id -g)"
"--hostname=${HOSTNAME}"
"${DOCKER_MOUNT_ARGS[@]}"
)
local detach=false
[[ $# != 0 ]] || { echo "Invalid input - please specify docker arguments followed by --." >&2; return 4; }
# Everything before "--" is an arg to docker
until [ -z "${1-}" ] ; do
if [[ "$1" == "--" ]]; then
shift
break
fi
docker_run_opts+=("$1")
if [[ "$1" == "-d" || "$1" == "--detach" ]] ; then
detach=true
fi
shift
done
# Everything after "--" is the command to run
[[ $# != 0 ]] || { echo "Invalid input - please specify a command to run." >&2; return 4; }
local -a cmd=()
until [ -z "${1-}" ] ; do
cmd+=("$1")
shift
done
docker_run_opts+=(
--env "KUBE_FASTBUILD=${KUBE_FASTBUILD:-false}"
--env "KUBE_BUILDER_OS=${OSTYPE:-notdetected}"
--env "KUBE_VERBOSE=${KUBE_VERBOSE}"
--env "GOFLAGS=${GOFLAGS:-}"
--env "GOLDFLAGS=${GOLDFLAGS:-}"
--env "GOGCFLAGS=${GOGCFLAGS:-}"
)
# If we have stdin we can run interactive. This allows things like 'shell.sh'
# to work. However, if we run this way and don't have stdin, then it ends up
# running in a daemon-ish mode. So if we don't have a stdin, we explicitly
# attach stderr/stdout but don't bother asking for a tty.
if [[ -t 0 ]]; then
docker_run_opts+=(--interactive --tty)
elif [[ "${detach}" == false ]]; then
docker_run_opts+=(--attach=stdout --attach=stderr)
fi
local -ra docker_cmd=(
"${DOCKER[@]}" run "${docker_run_opts[@]}" "${KUBE_BUILD_IMAGE}")
# Clean up container from any previous run
kube::build::destroy_container "${container_name}"
"${docker_cmd[@]}" "${cmd[@]}"
if [[ "${detach}" == false ]]; then
kube::build::destroy_container "${container_name}"
fi
}
function kube::build::rsync_probe {
# Wait until rsync is up and running.
local tries=20
while (( ${tries} > 0 )) ; do
if rsync "rsync://k8s@${1}:${2}/" \
--password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password" \
&> /dev/null ; then
return 0
fi
tries=$(( ${tries} - 1))
sleep 0.1
done
return 1
}
# Start up the rsync container in the background. This should be explicitly
# stopped with kube::build::stop_rsyncd_container.
#
# This will set the global var KUBE_RSYNC_ADDR to the effective port that the
# rsync daemon can be reached on.
function kube::build::start_rsyncd_container() {
IPTOOL=ifconfig
if kube::build::has_ip ; then
IPTOOL="ip address"
fi
kube::build::stop_rsyncd_container
V=3 kube::log::status "Starting rsyncd container"
kube::build::run_build_command_ex \
"${KUBE_RSYNC_CONTAINER_NAME}" -p 127.0.0.1:${KUBE_RSYNC_PORT}:${KUBE_CONTAINER_RSYNC_PORT} -d \
-e ALLOW_HOST="$(${IPTOOL} | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1')" \
-- /rsyncd.sh >/dev/null
local mapped_port
if ! mapped_port=$("${DOCKER[@]}" port "${KUBE_RSYNC_CONTAINER_NAME}" ${KUBE_CONTAINER_RSYNC_PORT} 2> /dev/null | cut -d: -f 2) ; then
kube::log::error "Could not get effective rsync port"
return 1
fi
local container_ip
container_ip=$("${DOCKER[@]}" inspect --format '{{ .NetworkSettings.IPAddress }}' "${KUBE_RSYNC_CONTAINER_NAME}")
# Sometimes we can reach rsync through localhost and a NAT'd port. Other
# times (when we are running in another docker container on the Jenkins
# machines) we have to talk directly to the container IP. There is no one
# strategy that works in all cases so we test to figure out which situation we
# are in.
if kube::build::rsync_probe 127.0.0.1 ${mapped_port}; then
KUBE_RSYNC_ADDR="127.0.0.1:${mapped_port}"
return 0
elif kube::build::rsync_probe "${container_ip}" ${KUBE_CONTAINER_RSYNC_PORT}; then
KUBE_RSYNC_ADDR="${container_ip}:${KUBE_CONTAINER_RSYNC_PORT}"
return 0
fi
kube::log::error "Could not connect to rsync container. See build/README.md for setting up remote Docker engine."
return 1
}
function kube::build::stop_rsyncd_container() {
V=3 kube::log::status "Stopping any currently running rsyncd container"
unset KUBE_RSYNC_ADDR
kube::build::destroy_container "${KUBE_RSYNC_CONTAINER_NAME}"
}
function kube::build::rsync {
local -a rsync_opts=(
--archive
--password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
)
if (( ${KUBE_VERBOSE} >= 6 )); then
rsync_opts+=("-iv")
fi
if (( ${KUBE_RSYNC_COMPRESS} > 0 )); then
rsync_opts+=("--compress-level=${KUBE_RSYNC_COMPRESS}")
fi
V=3 kube::log::status "Running rsync"
rsync "${rsync_opts[@]}" "$@"
}
# This will launch rsyncd in a container and then sync the source tree to the
# container over the local network.
function kube::build::sync_to_container() {
kube::log::status "Syncing sources to container"
kube::build::start_rsyncd_container
# rsync filters are a bit confusing. Here we are syncing everything except
# output only directories and things that are not necessary like the git
# directory and generated files. The '- /' filter prevents rsync
# from trying to set the uid/gid/perms on the root of the sync tree.
# As an exception, we need to sync generated files in staging/, because
# they will not be re-generated by 'make'. Note that the 'H' filtered files
# are hidden from rsync so they will be deleted in the target container if
# they exist. This will allow them to be re-created in the container if
# necessary.
kube::build::rsync \
--delete \
--filter='H /.git' \
--filter='- /.make/' \
--filter='- /_tmp/' \
--filter='- /_output/' \
--filter='- /' \
--filter='H zz_generated.*' \
--filter='H generated.proto' \
"${KUBE_ROOT}/" "rsync://k8s@${KUBE_RSYNC_ADDR}/k8s/"
kube::build::stop_rsyncd_container
}
# Copy all build results back out.
function kube::build::copy_output() {
kube::log::status "Syncing out of container"
kube::build::start_rsyncd_container
local rsync_extra=""
if (( ${KUBE_VERBOSE} >= 6 )); then
rsync_extra="-iv"
fi
# The filter syntax for rsync is a little obscure. It filters on files and
# directories. If you don't go into a directory you won't find any files
# there. Rules are evaluated in order. The last two rules are a little
# magic. '+ */' says to go into every directory and '- /**' says to ignore
# any file or directory that isn't already specifically allowed.
#
# We are looking to copy out all of the built binaries along with various
# generated files.
kube::build::rsync \
--prune-empty-dirs \
--filter='- /_temp/' \
--filter='+ /vendor/' \
--filter='+ /Godeps/' \
--filter='+ /staging/***/Godeps/**' \
--filter='+ /_output/dockerized/bin/**' \
--filter='+ zz_generated.*' \
--filter='+ generated.proto' \
--filter='+ *.pb.go' \
--filter='+ types.go' \
--filter='+ */' \
--filter='- /**' \
"rsync://k8s@${KUBE_RSYNC_ADDR}/k8s/" "${KUBE_ROOT}"
kube::build::stop_rsyncd_container
}

26
vendor/k8s.io/kubernetes/build/copy-output.sh generated vendored Executable file

@ -0,0 +1,26 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Copies any built binaries (and other generated files) out of the Docker build container.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
kube::build::verify_prereqs
kube::build::copy_output

19
vendor/k8s.io/kubernetes/build/debian-base/Dockerfile generated vendored Normal file

@ -0,0 +1,19 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM scratch
ADD rootfs.tar /
CMD ["/bin/sh"]

101
vendor/k8s.io/kubernetes/build/debian-base/Dockerfile.build generated vendored Normal file

@ -0,0 +1,101 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM BASEIMAGE
# If we're building for another architecture than amd64, the CROSS_BUILD_ placeholder is removed so
# e.g. CROSS_BUILD_COPY turns into COPY
# If we're building normally, for amd64, CROSS_BUILD lines are removed
CROSS_BUILD_COPY qemu-ARCH-static /usr/bin/
ENV DEBIAN_FRONTEND=noninteractive
# Smaller package install size.
COPY excludes /etc/dpkg/dpkg.cfg.d/excludes
# Convenience script for building on this base image.
COPY clean-install /usr/local/bin/clean-install
# Update system packages.
RUN apt-get update \
&& apt-get dist-upgrade -y
# Hold required packages to avoid breaking the installation of packages
RUN apt-mark hold apt gnupg adduser passwd libsemanage1 libcap2
# Remove unnecessary packages.
# This list was generated manually by listing the installed packages (`apt list --installed`),
# then running `apt-cache rdepends --installed --no-recommends` to find the "root" packages.
# The root packages were evaluated based on whether they were needed in the container image.
# Several utilities (e.g. ping) were kept for usefulness, but may be removed in later versions.
RUN echo "Yes, do as I say!" | apt-get purge \
bash \
debconf-i18n \
e2fslibs \
e2fsprogs \
init \
initscripts \
libcap2-bin \
libkmod2 \
libmount1 \
libsmartcols1 \
libudev1 \
libblkid1 \
libncursesw5 \
libprocps6 \
libslang2 \
libss2 \
libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl \
ncurses-base \
ncurses-bin \
systemd \
systemd-sysv \
sysv-rc \
tzdata
# No-op stubs replace some unnecessary binaries that may be depended on in the install process (in
# particular we don't run an init process).
WORKDIR /usr/local/bin
RUN touch noop && \
chmod 555 noop && \
ln -s noop runlevel && \
ln -s noop invoke-rc.d && \
ln -s noop update-rc.d
WORKDIR /
# Cleanup cached and unnecessary files.
RUN apt-get autoremove -y && \
apt-get clean -y && \
tar -czf /usr/share/copyrights.tar.gz /usr/share/common-licenses /usr/share/doc/*/copyright && \
rm -rf \
/usr/share/doc \
/usr/share/man \
/usr/share/info \
/usr/share/locale \
/var/lib/apt/lists/* \
/var/log/* \
/var/cache/debconf/* \
/usr/share/common-licenses* \
/usr/share/bash-completion \
~/.bashrc \
~/.profile \
/etc/systemd \
/lib/lsb \
/lib/udev \
/usr/lib/x86_64-linux-gnu/gconv/IBM* \
/usr/lib/x86_64-linux-gnu/gconv/EBC* && \
mkdir -p /usr/share/man/man1 /usr/share/man/man2 \
/usr/share/man/man3 /usr/share/man/man4 \
/usr/share/man/man5 /usr/share/man/man6 \
/usr/share/man/man7 /usr/share/man/man8

79
vendor/k8s.io/kubernetes/build/debian-base/Makefile generated vendored Executable file

@ -0,0 +1,79 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
all: build
REGISTRY ?= gcr.io/google-containers
IMAGE ?= debian-base
BUILD_IMAGE ?= debian-build
TAG ?= 0.3
TAR_FILE ?= rootfs.tar
ARCH?=amd64
TEMP_DIR:=$(shell mktemp -d)
QEMUVERSION=v2.9.1
ifeq ($(ARCH),amd64)
BASEIMAGE?=debian:stretch
endif
ifeq ($(ARCH),arm)
BASEIMAGE?=arm32v7/debian:stretch
QEMUARCH=arm
endif
ifeq ($(ARCH),arm64)
BASEIMAGE?=arm64v8/debian:stretch
QEMUARCH=aarch64
endif
ifeq ($(ARCH),ppc64le)
BASEIMAGE?=ppc64le/debian:stretch
QEMUARCH=ppc64le
endif
ifeq ($(ARCH),s390x)
BASEIMAGE?=s390x/debian:stretch
QEMUARCH=s390x
endif
build: clean
	cp ./* $(TEMP_DIR)
	cat Dockerfile.build \
		| sed "s|BASEIMAGE|$(BASEIMAGE)|g" \
		| sed "s|ARCH|$(QEMUARCH)|g" \
		> $(TEMP_DIR)/Dockerfile.build
ifeq ($(ARCH),amd64)
	# When building "normally" for amd64, remove the whole line, it has no part in the amd64 image
	sed "/CROSS_BUILD_/d" $(TEMP_DIR)/Dockerfile.build > $(TEMP_DIR)/Dockerfile.build.tmp
else
	# When cross-building, only the placeholder "CROSS_BUILD_" should be removed
	# Register /usr/bin/qemu-ARCH-static as the handler for ARM binaries in the kernel
	docker run --rm --privileged multiarch/qemu-user-static:register --reset
	curl -sSL https://github.com/multiarch/qemu-user-static/releases/download/$(QEMUVERSION)/x86_64_qemu-$(QEMUARCH)-static.tar.gz | tar -xz -C $(TEMP_DIR)
	sed "s/CROSS_BUILD_//g" $(TEMP_DIR)/Dockerfile.build > $(TEMP_DIR)/Dockerfile.build.tmp
endif
	mv $(TEMP_DIR)/Dockerfile.build.tmp $(TEMP_DIR)/Dockerfile.build
	docker build --pull -t $(BUILD_IMAGE) -f $(TEMP_DIR)/Dockerfile.build $(TEMP_DIR)
	docker create --name $(BUILD_IMAGE) $(BUILD_IMAGE)
	docker export $(BUILD_IMAGE) > $(TEMP_DIR)/$(TAR_FILE)
	docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG) $(TEMP_DIR)
	rm -rf $(TEMP_DIR)

push: build
	gcloud docker -- push $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG)

clean:
	docker rmi -f $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG) || true
	docker rmi -f $(BUILD_IMAGE) || true
	docker rm -f $(BUILD_IMAGE) || true
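A couple of hypothetical invocations of this Makefile (pushing assumes `gcloud` credentials for the target registry):

```bash
make build ARCH=arm64   # cross-build the arm64 base image
make push ARCH=amd64    # build and push the amd64 base image
```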

12
vendor/k8s.io/kubernetes/build/debian-base/README.md generated vendored Normal file

@ -0,0 +1,12 @@
# Kubernetes Debian Base
The Kubernetes debian-base image provides a common base for Kubernetes system images that require
external dependencies (such as `iptables`, `sh`, or anything that is more than a static Go binary).
This image differs from the standard debian image by removing a lot of packages and files that are
generally not necessary in containers. The end result is an image that is just over 40 MB, down from
123 MB.
The image also provides a convenience script `/usr/local/bin/clean-install` that encapsulates the
process of updating apt repositories, installing the packages, and then cleaning up unnecessary
caches & logs.
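A minimal sketch of extending the image, assuming the `debian-base-amd64:0.3` tag from the Makefile above (the derived image name is hypothetical):

```bash
# Build a derived image whose Dockerfile uses the bundled clean-install helper.
docker build -t example/my-debian-app - <<'EOF'
FROM gcr.io/google-containers/debian-base-amd64:0.3
RUN clean-install iptables conntrack
EOF
```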

36
vendor/k8s.io/kubernetes/build/debian-base/clean-install generated vendored Executable file

@ -0,0 +1,36 @@
#!/bin/sh
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# A script encapsulating a common Dockerimage pattern for installing packages
# and then cleaning up the unnecessary install artifacts.
# e.g. clean-install iptables ebtables conntrack
set -o errexit
if [ $# = 0 ]; then
echo >&2 "No packages specified"
exit 1
fi
apt-get update
apt-get install -y --no-install-recommends $@
apt-get clean -y
rm -rf \
/var/cache/debconf/* \
/var/lib/apt/lists/* \
/var/log/* \
/tmp/* \
/var/tmp/*

10
vendor/k8s.io/kubernetes/build/debian-base/excludes generated vendored Normal file

@ -0,0 +1,10 @@
path-exclude /usr/share/doc/*
path-include /usr/share/doc/*/copyright
path-exclude /usr/share/groff/*
path-exclude /usr/share/i18n/locales/*
path-include /usr/share/i18n/locales/en_US*
path-exclude /usr/share/info/*
path-exclude /usr/share/locale/*
path-include /usr/share/locale/en_US*
path-include /usr/share/locale/locale.alias
path-exclude /usr/share/man/*

1
vendor/k8s.io/kubernetes/build/debian-hyperkube-base/.gitignore generated vendored Normal file

@ -0,0 +1 @@
/cni-tars

43
vendor/k8s.io/kubernetes/build/debian-hyperkube-base/Dockerfile generated vendored Normal file

@ -0,0 +1,43 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM BASEIMAGE
RUN echo CACHEBUST>/dev/null && clean-install \
bash
# The samba-common, cifs-utils, and nfs-common packages depend on
# ucf, which itself depends on /bin/bash.
RUN echo "dash dash/sh boolean false" | debconf-set-selections
RUN DEBIAN_FRONTEND=noninteractive dpkg-reconfigure dash
RUN echo CACHEBUST>/dev/null && clean-install \
ca-certificates \
ceph-common \
cifs-utils \
conntrack \
e2fsprogs \
ebtables \
ethtool \
git \
glusterfs-client \
iptables \
jq \
kmod \
openssh-client \
nfs-common \
socat \
util-linux
COPY cni-bin/bin /opt/cni/bin

60
vendor/k8s.io/kubernetes/build/debian-hyperkube-base/Makefile generated vendored Normal file
View File

@ -0,0 +1,60 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Build the debian-hyperkube-base image. This image serves as the base for the hyperkube image.
#
# Usage:
# [ARCH=amd64] [REGISTRY="gcr.io/google-containers"] make (build|push)
REGISTRY?=gcr.io/google-containers
IMAGE?=debian-hyperkube-base
TAG=0.8
ARCH?=amd64
CACHEBUST?=1
BASEIMAGE=gcr.io/google-containers/debian-base-$(ARCH):0.3
CNI_VERSION=v0.6.0
TEMP_DIR:=$(shell mktemp -d)
CNI_TARBALL=cni-plugins-$(ARCH)-$(CNI_VERSION).tgz
.PHONY: all build push clean
all: push
cni-tars/$(CNI_TARBALL):
mkdir -p cni-tars/
cd cni-tars/ && curl -sSLO --retry 5 https://storage.googleapis.com/kubernetes-release/network-plugins/${CNI_TARBALL}
clean:
rm -rf cni-tars/
build: cni-tars/$(CNI_TARBALL)
cp Dockerfile $(TEMP_DIR)
cd $(TEMP_DIR) && sed -i "s|BASEIMAGE|$(BASEIMAGE)|g" Dockerfile
ifeq ($(CACHEBUST),1)
cd ${TEMP_DIR} && sed -i.back "s|CACHEBUST|$(shell uuidgen)|g" Dockerfile
endif
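# Substituting a fresh UUID for CACHEBUST rewrites the "RUN echo CACHEBUST"
# layers in the Dockerfile, which defeats Docker's layer cache so that
# clean-install always fetches current packages.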
mkdir -p ${TEMP_DIR}/cni-bin/bin
tar -xz -C ${TEMP_DIR}/cni-bin/bin -f "cni-tars/${CNI_TARBALL}"
# Register /usr/bin/qemu-ARCH-static as the handler for non-x86 binaries in the kernel
docker run --rm --privileged multiarch/qemu-user-static:register --reset
docker build --pull -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG) $(TEMP_DIR)
rm -rf $(TEMP_DIR)
push: build
gcloud docker -- push $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG)

33
vendor/k8s.io/kubernetes/build/debian-hyperkube-base/README.md generated vendored Normal file
View File

@ -0,0 +1,33 @@
### debian-hyperkube-base
Serves as the base image for `gcr.io/google-containers/hyperkube-${ARCH}`
images.
This image is compiled for multiple architectures.
#### How to release
If you change the Dockerfile or any other input to the image, bump the `TAG` in the Makefile.
```console
# Build for linux/amd64 (default)
$ make push ARCH=amd64
# ---> gcr.io/google-containers/debian-hyperkube-base-amd64:TAG
$ make push ARCH=arm
# ---> gcr.io/google-containers/debian-hyperkube-base-arm:TAG
$ make push ARCH=arm64
# ---> gcr.io/google-containers/debian-hyperkube-base-arm64:TAG
$ make push ARCH=ppc64le
# ---> gcr.io/google-containers/debian-hyperkube-base-ppc64le:TAG
$ make push ARCH=s390x
# ---> gcr.io/google-containers/debian-hyperkube-base-s390x:TAG
```
If you don't want to push the images, run `make build` instead.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/build/debian-hyperkube-base/README.md?pixel)]()

26
vendor/k8s.io/kubernetes/build/debian-iptables/Dockerfile generated vendored Normal file
View File

@ -0,0 +1,26 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM BASEIMAGE
# If we're building for another architecture than amd64, the CROSS_BUILD_ placeholder is removed so e.g. CROSS_BUILD_COPY turns into COPY
# If we're building normally, for amd64, CROSS_BUILD lines are removed
CROSS_BUILD_COPY qemu-ARCH-static /usr/bin/
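# For example, when building for arm the line above becomes
#   COPY qemu-arm-static /usr/bin/
# while on amd64 the whole line is removed.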
RUN clean-install \
conntrack \
ebtables \
ipset \
iptables \
kmod

60
vendor/k8s.io/kubernetes/build/debian-iptables/Makefile generated vendored Normal file
View File

@ -0,0 +1,60 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
.PHONY: build push
REGISTRY?="gcr.io/google-containers"
IMAGE=debian-iptables
TAG=v10
ARCH?=amd64
TEMP_DIR:=$(shell mktemp -d)
QEMUVERSION=v2.9.1
ifeq ($(ARCH),arm)
QEMUARCH=arm
endif
ifeq ($(ARCH),arm64)
QEMUARCH=aarch64
endif
ifeq ($(ARCH),ppc64le)
QEMUARCH=ppc64le
endif
ifeq ($(ARCH),s390x)
QEMUARCH=s390x
endif
BASEIMAGE=gcr.io/google-containers/debian-base-$(ARCH):0.3
build:
cp ./* $(TEMP_DIR)
cd $(TEMP_DIR) && sed -i "s|BASEIMAGE|$(BASEIMAGE)|g" Dockerfile
cd $(TEMP_DIR) && sed -i "s|ARCH|$(QEMUARCH)|g" Dockerfile
ifeq ($(ARCH),amd64)
# When building "normally" for amd64, remove the whole line, it has no part in the amd64 image
cd $(TEMP_DIR) && sed -i "/CROSS_BUILD_/d" Dockerfile
else
# When cross-building, only the placeholder "CROSS_BUILD_" should be removed
# Register /usr/bin/qemu-ARCH-static as the handler for non-x86 binaries in the kernel
docker run --rm --privileged multiarch/qemu-user-static:register --reset
curl -sSL https://github.com/multiarch/qemu-user-static/releases/download/$(QEMUVERSION)/x86_64_qemu-$(QEMUARCH)-static.tar.gz | tar -xz -C $(TEMP_DIR)
cd $(TEMP_DIR) && sed -i "s/CROSS_BUILD_//g" Dockerfile
endif
docker build --pull -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG) $(TEMP_DIR)
push: build
gcloud docker -- push $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG)
all: push

32
vendor/k8s.io/kubernetes/build/debian-iptables/README.md generated vendored Normal file
View File

@ -0,0 +1,32 @@
### debian-iptables
Serves as the base image for `gcr.io/google_containers/kube-proxy-${ARCH}` and multiarch (not `amd64`) `gcr.io/google_containers/flannel-${ARCH}` images.
This image is compiled for multiple architectures.
#### How to release
If you change the Dockerfile or any other input to the image, bump the `TAG` in the Makefile.
```console
# Build for linux/amd64 (default)
$ make push ARCH=amd64
# ---> gcr.io/google_containers/debian-iptables-amd64:TAG
$ make push ARCH=arm
# ---> gcr.io/google_containers/debian-iptables-arm:TAG
$ make push ARCH=arm64
# ---> gcr.io/google_containers/debian-iptables-arm64:TAG
$ make push ARCH=ppc64le
# ---> gcr.io/google_containers/debian-iptables-ppc64le:TAG
$ make push ARCH=s390x
# ---> gcr.io/google_containers/debian-iptables-s390x:TAG
```
If you don't want to push the images, run `make` or `make build` instead.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/build/debian-iptables/README.md?pixel)]()

13
vendor/k8s.io/kubernetes/build/debs/10-kubeadm.conf generated vendored Normal file
View File

@ -0,0 +1,13 @@
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
# Value should match Docker daemon settings.
# Defaults are "cgroupfs" for Debian/Ubuntu/OpenSUSE and "systemd" for Fedora/CentOS/RHEL
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS

181
vendor/k8s.io/kubernetes/build/debs/BUILD generated vendored Normal file
View File

@ -0,0 +1,181 @@
package(default_visibility = ["//visibility:public"])
load("@bazel_tools//tools/build_defs/pkg:pkg.bzl", "pkg_tar")
load("@io_kubernetes_build//defs:deb.bzl", "k8s_deb", "deb_data")
load("@io_kubernetes_build//defs:build.bzl", "release_filegroup")
# We do not include kube-scheduler, kube-controller-manager,
# kube-apiserver, and kube-proxy in this list even though we
# produce debs for them. We recommend that they be run in docker
# images. We use the debs that we produce here to build those
# images.
release_filegroup(
name = "debs",
srcs = [
":cloud-controller-manager.deb",
":kubeadm.deb",
":kubectl.deb",
":kubelet.deb",
":kubernetes-cni.deb",
],
)
[deb_data(
name = binary,
data = [
{
"files": ["//cmd/" + binary],
"mode": "0755",
"dir": "/usr/bin",
},
],
) for binary in [
"cloud-controller-manager",
"kubectl",
"kube-apiserver",
"kube-controller-manager",
"kube-proxy",
]]
deb_data(
name = "kube-scheduler",
data = [
{
"files": ["//plugin/cmd/kube-scheduler"],
"mode": "0755",
"dir": "/usr/bin",
},
],
)
deb_data(
name = "kubelet",
data = [
{
"files": ["//cmd/kubelet"],
"mode": "0755",
"dir": "/usr/bin",
},
{
"files": ["kubelet.service"],
"mode": "644",
"dir": "/lib/systemd/system",
},
],
)
deb_data(
name = "kubeadm",
data = [
{
"files": ["//cmd/kubeadm"],
"mode": "0755",
"dir": "/usr/bin",
},
{
"files": ["10-kubeadm.conf"],
"mode": "644",
"dir": "/etc/systemd/system/kubelet.service.d",
},
],
)
pkg_tar(
name = "kubernetes-cni-data",
package_dir = "/opt/cni/bin",
deps = ["@kubernetes_cni//file"],
)
k8s_deb(
name = "cloud-controller-manager",
description = "Kubernetes Cloud Controller Manager",
version_file = "//build:os_package_version",
)
k8s_deb(
name = "kubectl",
description = """Kubernetes Command Line Tool
The Kubernetes command line tool for interacting with the Kubernetes API.
""",
version_file = "//build:os_package_version",
)
k8s_deb(
name = "kube-apiserver",
description = "Kubernetes API Server",
version_file = "//build:os_package_version",
)
k8s_deb(
name = "kube-controller-manager",
description = "Kubernetes Controller Manager",
version_file = "//build:os_package_version",
)
k8s_deb(
name = "kube-scheduler",
description = "Kubernetes Scheduler",
version_file = "//build:os_package_version",
)
k8s_deb(
name = "kube-proxy",
depends = [
"iptables (>= 1.4.21)",
"iproute2",
],
description = "Kubernetes Service Proxy",
version_file = "//build:os_package_version",
)
k8s_deb(
name = "kubelet",
depends = [
"iptables (>= 1.4.21)",
"kubernetes-cni (>= 0.5.1)",
"iproute2",
"socat",
"util-linux",
"mount",
"ebtables",
"ethtool",
],
description = """Kubernetes Node Agent
The node agent of Kubernetes, the container cluster manager
""",
version_file = "//build:os_package_version",
)
k8s_deb(
name = "kubeadm",
depends = [
"kubelet (>= 1.8.0)",
"kubectl (>= 1.8.0)",
"kubernetes-cni (>= 0.5.1)",
],
description = """Kubernetes Cluster Bootstrapping Tool
The Kubernetes command line tool for bootstrapping a Kubernetes cluster.
""",
version_file = "//build:os_package_version",
)
k8s_deb(
name = "kubernetes-cni",
description = """Kubernetes Packaging of CNI
The Container Networking Interface tools for provisioning container networks.
""",
version_file = "//build:cni_package_version",
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)

10
vendor/k8s.io/kubernetes/build/debs/OWNERS generated vendored Normal file
View File

@ -0,0 +1,10 @@
reviewers:
- luxas
- jbeda
- mikedanese
- pipejakob
approvers:
- luxas
- jbeda
- mikedanese
- pipejakob

12
vendor/k8s.io/kubernetes/build/debs/kubelet.service generated vendored Normal file
View File

@ -0,0 +1,12 @@
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target

544
vendor/k8s.io/kubernetes/build/lib/release.sh generated vendored Normal file
View File

@ -0,0 +1,544 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file creates release artifacts (tar files, container images) that are
# ready to distribute to end users.
###############################################################################
# Most of the ::release:: namespace functions have been moved to
# github.com/kubernetes/release. Have a look in that repo and specifically in
# lib/releaselib.sh for ::release::-related functionality.
###############################################################################
# This is where the final release artifacts are created locally
readonly RELEASE_STAGE="${LOCAL_OUTPUT_ROOT}/release-stage"
readonly RELEASE_TARS="${LOCAL_OUTPUT_ROOT}/release-tars"
readonly RELEASE_IMAGES="${LOCAL_OUTPUT_ROOT}/release-images"
KUBE_BUILD_HYPERKUBE=${KUBE_BUILD_HYPERKUBE:-n}
if [[ -n "${KUBE_DOCKER_IMAGE_TAG-}" && -n "${KUBE_DOCKER_REGISTRY-}" ]]; then
# retain legacy behavior of automatically building hyperkube during releases
KUBE_BUILD_HYPERKUBE=y
fi
# Validate a ci version
#
# Globals:
# None
# Arguments:
# version
# Returns:
# If version is a valid ci version
# Sets: (e.g. for '1.2.3-alpha.4.56+abcdef12345678')
# VERSION_MAJOR (e.g. '1')
# VERSION_MINOR (e.g. '2')
# VERSION_PATCH (e.g. '3')
# VERSION_PRERELEASE (e.g. 'alpha')
# VERSION_PRERELEASE_REV (e.g. '4')
# VERSION_BUILD_INFO (e.g. '.56+abcdef12345678')
# VERSION_COMMITS (e.g. '56')
function kube::release::parse_and_validate_ci_version() {
# Accept things like "v1.2.3-alpha.4.56+abcdef12345678" or "v1.2.3-beta.4"
local -r version_regex="^v(0|[1-9][0-9]*)\\.(0|[1-9][0-9]*)\\.(0|[1-9][0-9]*)-([a-zA-Z0-9]+)\\.(0|[1-9][0-9]*)(\\.(0|[1-9][0-9]*)\\+[0-9a-f]{7,40})?$"
local -r version="${1-}"
[[ "${version}" =~ ${version_regex} ]] || {
kube::log::error "Invalid ci version: '${version}', must match regex ${version_regex}"
return 1
}
VERSION_MAJOR="${BASH_REMATCH[1]}"
VERSION_MINOR="${BASH_REMATCH[2]}"
VERSION_PATCH="${BASH_REMATCH[3]}"
VERSION_PRERELEASE="${BASH_REMATCH[4]}"
VERSION_PRERELEASE_REV="${BASH_REMATCH[5]}"
VERSION_BUILD_INFO="${BASH_REMATCH[6]}"
VERSION_COMMITS="${BASH_REMATCH[7]}"
}
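# Example (illustrative):
#   kube::release::parse_and_validate_ci_version "v1.2.3-alpha.4.56+abcdef12345678"
#   echo "${VERSION_MAJOR}.${VERSION_MINOR}.${VERSION_PATCH}"   # -> 1.2.3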
# ---------------------------------------------------------------------------
# Build final release artifacts
function kube::release::clean_cruft() {
# Clean out cruft
find ${RELEASE_STAGE} -name '*~' -exec rm {} \;
find ${RELEASE_STAGE} -name '#*#' -exec rm {} \;
find ${RELEASE_STAGE} -name '.DS*' -exec rm {} \;
}
function kube::release::package_tarballs() {
# Clean out any old releases
rm -rf "${RELEASE_STAGE}" "${RELEASE_TARS}" "${RELEASE_IMAGES}"
mkdir -p "${RELEASE_TARS}"
kube::release::package_src_tarball &
kube::release::package_client_tarballs &
kube::release::package_salt_tarball &
kube::release::package_kube_manifests_tarball &
kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }
# _node and _server tarballs depend on _src tarball
kube::release::package_node_tarballs &
kube::release::package_server_tarballs &
kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }
kube::release::package_final_tarball & # _final depends on some of the previous phases
kube::release::package_test_tarball & # _test doesn't depend on anything
kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }
}
# Package the source code we built, for compliance/licensing/audit/yadda.
function kube::release::package_src_tarball() {
local -r src_tarball="${RELEASE_TARS}/kubernetes-src.tar.gz"
kube::log::status "Building tarball: src"
if [[ "${KUBE_GIT_TREE_STATE-}" == "clean" ]]; then
git archive -o "${src_tarball}" HEAD
else
local source_files=(
$(cd "${KUBE_ROOT}" && find . -mindepth 1 -maxdepth 1 \
-not \( \
\( -path ./_\* -o \
-path ./.git\* -o \
-path ./.config\* -o \
-path ./.gsutil\* \
\) -prune \
\))
)
"${TAR}" czf "${src_tarball}" -C "${KUBE_ROOT}" "${source_files[@]}"
fi
}
# Package up all of the cross-compiled clients. Over time this should grow into
# a full SDK.
function kube::release::package_client_tarballs() {
# Find all of the built client binaries
local platform platforms
platforms=($(cd "${LOCAL_OUTPUT_BINPATH}" ; echo */*))
for platform in "${platforms[@]}"; do
local platform_tag=${platform/\//-} # Replace "/" with "-"
kube::log::status "Starting tarball: client $platform_tag"
(
local release_stage="${RELEASE_STAGE}/client/${platform_tag}/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}/client/bin"
local client_bins=("${KUBE_CLIENT_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}")
fi
# This fancy expression will expand to prepend a path
# (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the
# KUBE_CLIENT_BINARIES array.
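# For example (illustrative): bins=(kubectl kubeadm); echo "${bins[@]/#/out/}"
# expands to "out/kubectl out/kubeadm".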
cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/client/bin/"
kube::release::clean_cruft
local package_name="${RELEASE_TARS}/kubernetes-client-${platform_tag}.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
) &
done
kube::log::status "Waiting on tarballs"
kube::util::wait-for-jobs || { kube::log::error "client tarball creation failed"; exit 1; }
}
# Package up all of the node binaries
function kube::release::package_node_tarballs() {
local platform
for platform in "${KUBE_NODE_PLATFORMS[@]}"; do
local platform_tag=${platform/\//-} # Replace "/" with "-"
local arch=$(basename ${platform})
kube::log::status "Building tarball: node $platform_tag"
local release_stage="${RELEASE_STAGE}/node/${platform_tag}/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}/node/bin"
local node_bins=("${KUBE_NODE_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
node_bins=("${KUBE_NODE_BINARIES_WIN[@]}")
fi
# This fancy expression will expand to prepend a path
# (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the
# KUBE_NODE_BINARIES array.
cp "${node_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/node/bin/"
# TODO: Docker images here
# kube::release::create_docker_images_for_server "${release_stage}/server/bin" "${arch}"
# Include the client binaries here too as they are useful debugging tools.
local client_bins=("${KUBE_CLIENT_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}")
fi
cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/node/bin/"
cp "${KUBE_ROOT}/Godeps/LICENSES" "${release_stage}/"
cp "${RELEASE_TARS}/kubernetes-src.tar.gz" "${release_stage}/"
kube::release::clean_cruft
local package_name="${RELEASE_TARS}/kubernetes-node-${platform_tag}.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
done
}
# Package up all of the server binaries
function kube::release::package_server_tarballs() {
local platform
for platform in "${KUBE_SERVER_PLATFORMS[@]}"; do
local platform_tag=${platform/\//-} # Replace "/" with "-"
local arch=$(basename ${platform})
kube::log::status "Building tarball: server $platform_tag"
local release_stage="${RELEASE_STAGE}/server/${platform_tag}/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}/server/bin"
mkdir -p "${release_stage}/addons"
# This fancy expression will expand to prepend a path
# (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the
# KUBE_SERVER_BINARIES array.
cp "${KUBE_SERVER_BINARIES[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/server/bin/"
kube::release::create_docker_images_for_server "${release_stage}/server/bin" "${arch}"
# Include the client binaries here too as they are useful debugging tools.
local client_bins=("${KUBE_CLIENT_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}")
fi
cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/server/bin/"
cp "${KUBE_ROOT}/Godeps/LICENSES" "${release_stage}/"
cp "${RELEASE_TARS}/kubernetes-src.tar.gz" "${release_stage}/"
kube::release::clean_cruft
local package_name="${RELEASE_TARS}/kubernetes-server-${platform_tag}.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
done
}
function kube::release::md5() {
if which md5 >/dev/null 2>&1; then
md5 -q "$1"
else
md5sum "$1" | awk '{ print $1 }'
fi
}
function kube::release::sha1() {
if which sha1sum >/dev/null 2>&1; then
sha1sum "$1" | awk '{ print $1 }'
else
shasum -a1 "$1" | awk '{ print $1 }'
fi
}
function kube::release::build_hyperkube_image() {
local -r arch="$1"
local -r registry="$2"
local -r version="$3"
local -r save_dir="${4-}"
kube::log::status "Building hyperkube image for arch: ${arch}"
ARCH="${arch}" REGISTRY="${registry}" VERSION="${version}" \
make -C cluster/images/hyperkube/ build >/dev/null
local hyperkube_tag="${registry}/hyperkube-${arch}:${version}"
if [[ -n "${save_dir}" ]]; then
"${DOCKER[@]}" save "${hyperkube_tag}" > "${save_dir}/hyperkube-${arch}.tar"
fi
if [[ -z "${KUBE_DOCKER_IMAGE_TAG-}" || -z "${KUBE_DOCKER_REGISTRY-}" ]]; then
# not a release
kube::log::status "Deleting hyperkube image ${hyperkube_tag}"
"${DOCKER[@]}" rmi "${hyperkube_tag}" &>/dev/null || true
fi
}
# This takes the binaries that run on the master and creates Docker images
# that wrap each binary. (One Docker image per binary.)
# Args:
# $1 - binary_dir, the directory to save the tarred images to.
# $2 - arch, the architecture for which we are building Docker images.
function kube::release::create_docker_images_for_server() {
# Create a sub-shell so that we don't pollute the outer environment
(
local binary_dir="$1"
local arch="$2"
local binary_name
local binaries=($(kube::build::get_docker_wrapped_binaries ${arch}))
local images_dir="${RELEASE_IMAGES}/${arch}"
mkdir -p "${images_dir}"
local -r docker_registry="gcr.io/google_containers"
# Docker tags cannot contain '+'
local docker_tag="${KUBE_GIT_VERSION/+/_}"
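# e.g. a version like v1.9.0-beta.1.12+3eb5e56083e374 becomes the tag
# v1.9.0-beta.1.12_3eb5e56083e374 (illustrative value).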
if [[ -z "${docker_tag}" ]]; then
kube::log::error "git version information missing; cannot create Docker tag"
return 1
fi
for wrappable in "${binaries[@]}"; do
local oldifs=$IFS
IFS=","
set $wrappable
IFS=$oldifs
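# e.g. for wrappable="kube-apiserver,busybox", $1 is now "kube-apiserver" and
# $2 is "busybox" (pair format per kube::build::get_docker_wrapped_binaries).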
local binary_name="$1"
local base_image="$2"
local docker_build_path="${binary_dir}/${binary_name}.dockerbuild"
local docker_file_path="${docker_build_path}/Dockerfile"
local binary_file_path="${binary_dir}/${binary_name}"
local docker_image_tag="${docker_registry}"
if [[ ${arch} == "amd64" ]]; then
# If we are building an amd64 docker image, preserve the original
# image name
docker_image_tag+="/${binary_name}:${docker_tag}"
else
# If we are building a docker image for another architecture,
# append the arch in the image tag
docker_image_tag+="/${binary_name}-${arch}:${docker_tag}"
fi
kube::log::status "Starting docker build for image: ${binary_name}-${arch}"
(
rm -rf ${docker_build_path}
mkdir -p ${docker_build_path}
ln ${binary_dir}/${binary_name} ${docker_build_path}/${binary_name}
printf " FROM ${base_image} \n ADD ${binary_name} /usr/local/bin/${binary_name}\n" > ${docker_file_path}
"${DOCKER[@]}" build --pull -q -t "${docker_image_tag}" ${docker_build_path} >/dev/null
"${DOCKER[@]}" save "${docker_image_tag}" > "${binary_dir}/${binary_name}.tar"
echo "${docker_tag}" > ${binary_dir}/${binary_name}.docker_tag
rm -rf ${docker_build_path}
ln "${binary_dir}/${binary_name}.tar" "${images_dir}/"
# If we are building an official/alpha/beta release we want to keep
# docker images and tag them appropriately.
if [[ -n "${KUBE_DOCKER_IMAGE_TAG-}" && -n "${KUBE_DOCKER_REGISTRY-}" ]]; then
local release_docker_image_tag="${KUBE_DOCKER_REGISTRY}/${binary_name}-${arch}:${KUBE_DOCKER_IMAGE_TAG}"
# Only rmi and tag if name is different
if [[ $docker_image_tag != $release_docker_image_tag ]]; then
kube::log::status "Tagging docker image ${docker_image_tag} as ${release_docker_image_tag}"
"${DOCKER[@]}" rmi "${release_docker_image_tag}" 2>/dev/null || true
"${DOCKER[@]}" tag "${docker_image_tag}" "${release_docker_image_tag}" 2>/dev/null
fi
else
# not a release
kube::log::status "Deleting docker image ${docker_image_tag}"
"${DOCKER[@]}" rmi ${docker_image_tag} &>/dev/null || true
fi
) &
done
if [[ "${KUBE_BUILD_HYPERKUBE}" =~ [yY] ]]; then
kube::release::build_hyperkube_image "${arch}" "${docker_registry}" \
"${docker_tag}" "${images_dir}" &
fi
kube::util::wait-for-jobs || { kube::log::error "previous Docker build failed"; return 1; }
kube::log::status "Docker builds done"
)
}
# Package up the salt configuration tree. This is an optional helper for getting
# a cluster up and running.
function kube::release::package_salt_tarball() {
kube::log::status "Building tarball: salt"
local release_stage="${RELEASE_STAGE}/salt/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}"
cp -R "${KUBE_ROOT}/cluster/saltbase" "${release_stage}/"
# TODO(#3579): This is a temporary hack. It gathers up the yaml,
# yaml.in, json files in cluster/addons (minus any demos) and overlays
# them into kube-addons, where we expect them. (This pipeline is a
# fancy copy that keeps only the files we want.)
local objects
objects=$(cd "${KUBE_ROOT}/cluster/addons" && find . \( -name \*.yaml -or -name \*.yaml.in -or -name \*.json \) | grep -v demo)
tar c -C "${KUBE_ROOT}/cluster/addons" ${objects} | tar x -C "${release_stage}/saltbase/salt/kube-addons"
kube::release::clean_cruft
local package_name="${RELEASE_TARS}/kubernetes-salt.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
}
# This packs kube-system manifest files for distros that don't use salt,
# such as GCI and Ubuntu Trusty. We copy manifests directly from
# cluster/addons and cluster/saltbase/salt. The cluster initialization script
# removes the salt configuration and evaluates the variables in the manifests.
function kube::release::package_kube_manifests_tarball() {
kube::log::status "Building tarball: manifests"
local salt_dir="${KUBE_ROOT}/cluster/saltbase/salt"
local release_stage="${RELEASE_STAGE}/manifests/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}"
cp "${salt_dir}/kube-registry-proxy/kube-registry-proxy.yaml" "${release_stage}/"
cp "${salt_dir}/kube-proxy/kube-proxy.manifest" "${release_stage}/"
local gci_dst_dir="${release_stage}/gci-trusty"
mkdir -p "${gci_dst_dir}"
cp "${salt_dir}/cluster-autoscaler/cluster-autoscaler.manifest" "${gci_dst_dir}/"
cp "${salt_dir}/etcd/etcd.manifest" "${gci_dst_dir}"
cp "${salt_dir}/kube-scheduler/kube-scheduler.manifest" "${gci_dst_dir}"
cp "${salt_dir}/kube-apiserver/kube-apiserver.manifest" "${gci_dst_dir}"
cp "${salt_dir}/kube-apiserver/abac-authz-policy.jsonl" "${gci_dst_dir}"
cp "${salt_dir}/kube-controller-manager/kube-controller-manager.manifest" "${gci_dst_dir}"
cp "${salt_dir}/kube-addons/kube-addon-manager.yaml" "${gci_dst_dir}"
cp "${salt_dir}/l7-gcp/glbc.manifest" "${gci_dst_dir}"
cp "${salt_dir}/rescheduler/rescheduler.manifest" "${gci_dst_dir}/"
cp "${salt_dir}/e2e-image-puller/e2e-image-puller.manifest" "${gci_dst_dir}/"
cp "${KUBE_ROOT}/cluster/gce/gci/configure-helper.sh" "${gci_dst_dir}/gci-configure-helper.sh"
cp "${KUBE_ROOT}/cluster/gce/gci/health-monitor.sh" "${gci_dst_dir}/health-monitor.sh"
cp "${KUBE_ROOT}/cluster/gce/container-linux/configure-helper.sh" "${gci_dst_dir}/container-linux-configure-helper.sh"
cp -r "${salt_dir}/kube-admission-controls/limit-range" "${gci_dst_dir}"
local objects
objects=$(cd "${KUBE_ROOT}/cluster/addons" && find . \( -name \*.yaml -or -name \*.yaml.in -or -name \*.json \) | grep -v demo)
tar c -C "${KUBE_ROOT}/cluster/addons" ${objects} | tar x -C "${gci_dst_dir}"
# Merge GCE-specific addons with general purpose addons.
local gce_objects
gce_objects=$(cd "${KUBE_ROOT}/cluster/gce/addons" && find . \( -name \*.yaml -or -name \*.yaml.in -or -name \*.json \) \( -not -name \*demo\* \))
if [[ -n "${gce_objects}" ]]; then
tar c -C "${KUBE_ROOT}/cluster/gce/addons" ${gce_objects} | tar x -C "${gci_dst_dir}"
fi
kube::release::clean_cruft
local package_name="${RELEASE_TARS}/kubernetes-manifests.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
}
# This is the stuff you need to run tests from the binary distribution.
function kube::release::package_test_tarball() {
kube::log::status "Building tarball: test"
local release_stage="${RELEASE_STAGE}/test/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}"
local platform
for platform in "${KUBE_TEST_PLATFORMS[@]}"; do
local test_bins=("${KUBE_TEST_BINARIES[@]}")
if [[ "${platform%/*}" == "windows" ]]; then
test_bins=("${KUBE_TEST_BINARIES_WIN[@]}")
fi
mkdir -p "${release_stage}/platforms/${platform}"
cp "${test_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/platforms/${platform}"
done
for platform in "${KUBE_TEST_SERVER_PLATFORMS[@]}"; do
mkdir -p "${release_stage}/platforms/${platform}"
cp "${KUBE_TEST_SERVER_BINARIES[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
"${release_stage}/platforms/${platform}"
done
# Add the test image files
mkdir -p "${release_stage}/test/images"
cp -fR "${KUBE_ROOT}/test/images" "${release_stage}/test/"
tar c ${KUBE_TEST_PORTABLE[@]} | tar x -C ${release_stage}
kube::release::clean_cruft
local package_name="${RELEASE_TARS}/kubernetes-test.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
}
# This is all the platform-independent stuff you need to run/install kubernetes.
# Arch-specific binaries will need to be downloaded separately (possibly by
# using the bundled cluster/get-kube-binaries.sh script).
# Included in this tarball:
# - Cluster spin up/down scripts and configs for various cloud providers
# - Tarballs for salt configs that are ready to be uploaded
# to master by whatever means appropriate.
# - Examples (which may or may not still work)
# - The remnants of the docs/ directory
function kube::release::package_final_tarball() {
kube::log::status "Building tarball: final"
# This isn't a "full" tarball anymore, but the release lib still expects
# artifacts under "full/kubernetes/"
local release_stage="${RELEASE_STAGE}/full/kubernetes"
rm -rf "${release_stage}"
mkdir -p "${release_stage}"
mkdir -p "${release_stage}/client"
cat <<EOF > "${release_stage}/client/README"
Client binaries are no longer included in the Kubernetes final tarball.
Run cluster/get-kube-binaries.sh to download client and server binaries.
EOF
# We want everything in /cluster except saltbase. That is only needed on the
# server.
cp -R "${KUBE_ROOT}/cluster" "${release_stage}/"
rm -rf "${release_stage}/cluster/saltbase"
mkdir -p "${release_stage}/server"
cp "${RELEASE_TARS}/kubernetes-salt.tar.gz" "${release_stage}/server/"
cp "${RELEASE_TARS}/kubernetes-manifests.tar.gz" "${release_stage}/server/"
cat <<EOF > "${release_stage}/server/README"
Server binary tarballs are no longer included in the Kubernetes final tarball.
Run cluster/get-kube-binaries.sh to download client and server binaries.
EOF
mkdir -p "${release_stage}/third_party"
cp -R "${KUBE_ROOT}/third_party/htpasswd" "${release_stage}/third_party/htpasswd"
# Include hack/lib as a dependency for the cluster/ scripts
mkdir -p "${release_stage}/hack"
cp -R "${KUBE_ROOT}/hack/lib" "${release_stage}/hack/"
cp -R "${KUBE_ROOT}/examples" "${release_stage}/"
cp -R "${KUBE_ROOT}/docs" "${release_stage}/"
cp "${KUBE_ROOT}/README.md" "${release_stage}/"
cp "${KUBE_ROOT}/Godeps/LICENSES" "${release_stage}/"
cp "${KUBE_ROOT}/Vagrantfile" "${release_stage}/"
echo "${KUBE_GIT_VERSION}" > "${release_stage}/version"
kube::release::clean_cruft
local package_name="${RELEASE_TARS}/kubernetes.tar.gz"
kube::release::create_tarball "${package_name}" "${release_stage}/.."
}
# Build a release tarball. $1 is the output tar name. $2 is the base directory
# of the files to be packaged. This assumes that ${2}/kubernetes is what is
# being packaged.
function kube::release::create_tarball() {
kube::build::ensure_tar
local tarfile=$1
local stagingdir=$2
"${TAR}" czf "${tarfile}" -C "${stagingdir}" kubernetes --owner=0 --group=0
}
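# Example (mirrors the call sites above, illustrative):
#   kube::release::create_tarball "${RELEASE_TARS}/kubernetes-test.tar.gz" "${release_stage}/.."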

31
vendor/k8s.io/kubernetes/build/make-build-image.sh generated vendored Executable file
View File

@ -0,0 +1,31 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Build the docker image necessary for building Kubernetes
#
# This script will package the parts of the repo that we need to build
# Kubernetes into a tar file and put it in the right place in the output
# directory. It will then copy over the Dockerfile and build the kube-build
# image.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT="$(dirname "${BASH_SOURCE}")/.."
source "${KUBE_ROOT}/build/common.sh"
kube::build::verify_prereqs
kube::build::build_image

26
vendor/k8s.io/kubernetes/build/make-clean.sh generated vendored Executable file
View File

@ -0,0 +1,26 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Clean out the output directory on the docker host.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
kube::build::verify_prereqs false
kube::build::clean

9
vendor/k8s.io/kubernetes/build/openapi.bzl generated vendored Normal file
View File

@ -0,0 +1,9 @@
# A project wanting to generate openapi code for vendored
# k8s.io/kubernetes will need to set the following variables in
# //build/openapi.bzl in their project and customize the go prefix:
#
# openapi_go_prefix = "k8s.io/myproject/"
# openapi_vendor_prefix = "vendor/k8s.io/kubernetes/"
openapi_go_prefix = "k8s.io/kubernetes/"
openapi_vendor_prefix = ""

27
vendor/k8s.io/kubernetes/build/package-tarballs.sh generated vendored Executable file
View File

@ -0,0 +1,27 @@
#!/bin/bash
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
# Complete the release with the standard env
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
source "${KUBE_ROOT}/build/lib/release.sh"
kube::build::ensure_tar
kube::version::get_version_vars
kube::release::package_tarballs

3
vendor/k8s.io/kubernetes/build/pause/.gitignore generated vendored Normal file
View File

@ -0,0 +1,3 @@
/.container-*
/.push-*
/bin

18
vendor/k8s.io/kubernetes/build/pause/Dockerfile generated vendored Normal file
View File

@ -0,0 +1,18 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM scratch
ARG ARCH
ADD bin/pause-${ARCH} /pause
ENTRYPOINT ["/pause"]

112
vendor/k8s.io/kubernetes/build/pause/Makefile generated vendored Normal file
View File

@ -0,0 +1,112 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
.PHONY: all push push-legacy container clean orphan
REGISTRY ?= gcr.io/google_containers
IMAGE = $(REGISTRY)/pause-$(ARCH)
LEGACY_AMD64_IMAGE = $(REGISTRY)/pause
TAG = 3.0
# Architectures supported: amd64, arm, arm64, ppc64le and s390x
ARCH ?= amd64
ALL_ARCH = amd64 arm arm64 ppc64le s390x
CFLAGS = -Os -Wall -Werror -static
KUBE_CROSS_IMAGE ?= gcr.io/google_containers/kube-cross
KUBE_CROSS_VERSION ?= $(shell cat ../build-image/cross/VERSION)
BIN = pause
SRCS = pause.c
ifeq ($(ARCH),amd64)
TRIPLE ?= x86_64-linux-gnu
endif
ifeq ($(ARCH),arm)
TRIPLE ?= arm-linux-gnueabi
endif
ifeq ($(ARCH),arm64)
TRIPLE ?= aarch64-linux-gnu
endif
ifeq ($(ARCH),ppc64le)
TRIPLE ?= powerpc64le-linux-gnu
endif
ifeq ($(ARCH),s390x)
TRIPLE ?= s390x-linux-gnu
endif
# If you want to build AND push all containers, see the 'all-push' rule.
all: all-container
sub-container-%:
$(MAKE) ARCH=$* container
sub-push-%:
$(MAKE) ARCH=$* push
all-container: $(addprefix sub-container-,$(ALL_ARCH))
all-push: $(addprefix sub-push-,$(ALL_ARCH))
build: bin/$(BIN)-$(ARCH)
bin/$(BIN)-$(ARCH): $(SRCS)
mkdir -p bin
docker run --rm -u $$(id -u):$$(id -g) -v $$(pwd):/build \
$(KUBE_CROSS_IMAGE):$(KUBE_CROSS_VERSION) \
/bin/bash -c "\
cd /build && \
$(TRIPLE)-gcc $(CFLAGS) -o $@ $^ && \
$(TRIPLE)-strip $@"
container: .container-$(ARCH)
.container-$(ARCH): bin/$(BIN)-$(ARCH)
docker build --pull -t $(IMAGE):$(TAG) --build-arg ARCH=$(ARCH) .
ifeq ($(ARCH),amd64)
docker rmi $(LEGACY_AMD64_IMAGE):$(TAG) 2>/dev/null || true
docker tag $(IMAGE):$(TAG) $(LEGACY_AMD64_IMAGE):$(TAG)
endif
touch $@
push: .push-$(ARCH)
.push-$(ARCH): .container-$(ARCH)
gcloud docker -- push $(IMAGE):$(TAG)
touch $@
push-legacy: .push-legacy-$(ARCH)
.push-legacy-$(ARCH): .container-$(ARCH)
ifeq ($(ARCH),amd64)
gcloud docker -- push $(LEGACY_AMD64_IMAGE):$(TAG)
endif
touch $@
# Useful for testing, not automatically included in container image
orphan: bin/orphan-$(ARCH)
bin/orphan-$(ARCH): orphan.c
mkdir -p bin
docker run -u $$(id -u):$$(id -g) -v $$(pwd):/build \
$(KUBE_CROSS_IMAGE):$(KUBE_CROSS_VERSION) \
/bin/bash -c "\
cd /build && \
$(TRIPLE)-gcc $(CFLAGS) -o $@ $^ && \
$(TRIPLE)-strip $@"
clean:
rm -rf .container-* .push-* bin/

36
vendor/k8s.io/kubernetes/build/pause/orphan.c generated vendored Normal file
View File

@ -0,0 +1,36 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/* Creates a zombie to be reaped by init. Useful for testing. */
#include <stdio.h>
#include <unistd.h>
int main() {
pid_t pid;
pid = fork();
if (pid == 0) {
while (getppid() > 1)
;
printf("Child exiting: pid=%d ppid=%d\n", getpid(), getppid());
return 0;
} else if (pid > 0) {
printf("Parent exiting: pid=%d ppid=%d\n", getpid(), getppid());
return 0;
}
perror("Could not create child");
return 1;
}

51
vendor/k8s.io/kubernetes/build/pause/pause.c generated vendored Normal file
View File

@ -0,0 +1,51 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
static void sigdown(int signo) {
psignal(signo, "Shutting down, got signal");
exit(0);
}
static void sigreap(int signo) {
while (waitpid(-1, NULL, WNOHANG) > 0);
}
int main() {
if (getpid() != 1)
/* Not an error because pause sees use outside of infra containers. */
fprintf(stderr, "Warning: pause should be the first process\n");
if (sigaction(SIGINT, &(struct sigaction){.sa_handler = sigdown}, NULL) < 0)
return 1;
if (sigaction(SIGTERM, &(struct sigaction){.sa_handler = sigdown}, NULL) < 0)
return 2;
if (sigaction(SIGCHLD, &(struct sigaction){.sa_handler = sigreap,
.sa_flags = SA_NOCLDSTOP},
NULL) < 0)
return 3;
for (;;)
pause();
fprintf(stderr, "Error: infinite loop terminated\n");
return 42;
}
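/* Quick manual check (illustrative): run pause as PID 1 in a container and
 * send SIGTERM; psignal() prints "Shutting down, got signal: Terminated"
 * and the process exits 0. */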

49
vendor/k8s.io/kubernetes/build/release-in-a-container.sh generated vendored Executable file
View File

@ -0,0 +1,49 @@
#!/bin/bash
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
# Complete the release with the standard env
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
# Check and error if not "in-a-container"
if [[ ! -f /.dockerenv ]]; then
echo
echo "'make release-in-a-container' can only be used from a docker container."
echo
exit 1
fi
# Other dependencies: Your container should contain docker
if ! type -p docker >/dev/null 2>&1; then
echo
echo "'make release-in-a-container' requires a container with" \
"docker installed."
echo
exit 1
fi
# First run make cross-in-a-container
make cross-in-a-container
# At the moment, only 'make test' is supported.
if [[ "${KUBE_RELEASE_RUN_TESTS-}" =~ ^[yY]$ ]]; then
make test
fi
"${KUBE_ROOT}/build/package-tarballs.sh"

241
vendor/k8s.io/kubernetes/build/release-tars/BUILD generated vendored Normal file
View File

@ -0,0 +1,241 @@
package(default_visibility = ["//visibility:public"])
load("@io_bazel//tools/build_defs/pkg:pkg.bzl", "pkg_tar")
load("@io_kubernetes_build//defs:build.bzl", "release_filegroup")
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)
config_setting(
name = "package_src",
values = {
"define": "PACKAGE_SRC=true",
},
visibility = ["//visibility:private"],
)
genrule(
name = "kubernetes-src-readme",
outs = ["README-src.txt"],
cmd = """
echo For build efficiency, the src was not included in this release.>$@
echo The full source code can be viewed at >>$@
echo -n https://github.com/kubernetes/kubernetes/tree/ >>$@
grep ^STABLE_BUILD_GIT_COMMIT bazel-out/stable-status.txt | cut -d' ' -f2 >>$@
""",
stamp = 1,
)
pkg_tar(
name = "kubernetes-src",
build_tar = "@io_kubernetes_build//tools/build_tar",
extension = "tar.gz",
files = select({
":package_src": ["//:all-srcs"],
"//conditions:default": ["README-src.txt"],
}),
package_dir = "kubernetes",
strip_prefix = select({
":package_src": "//",
"//conditions:default": ".",
}),
)
# FIXME: this should be configurable/auto-detected
PLATFORM_ARCH_STRING = "linux-amd64"
# Included in node and server tarballs.
filegroup(
name = "license-targets",
srcs = [
":kubernetes-src.tar.gz",
"//:Godeps/LICENSES",
],
visibility = ["//visibility:private"],
)
pkg_tar(
name = "_client-bin",
build_tar = "@io_kubernetes_build//tools/build_tar",
files = ["//build:client-targets"],
mode = "0755",
package_dir = "client/bin",
visibility = ["//visibility:private"],
)
pkg_tar(
name = "kubernetes-client-%s" % PLATFORM_ARCH_STRING,
build_tar = "@io_kubernetes_build//tools/build_tar",
extension = "tar.gz",
package_dir = "kubernetes",
deps = [
":_client-bin",
],
)
pkg_tar(
name = "_node-bin",
build_tar = "@io_kubernetes_build//tools/build_tar",
files = [
"//build:client-targets",
"//build:node-targets",
],
mode = "0755",
package_dir = "node/bin",
visibility = ["//visibility:private"],
)
pkg_tar(
name = "kubernetes-node-%s" % PLATFORM_ARCH_STRING,
build_tar = "@io_kubernetes_build//tools/build_tar",
extension = "tar.gz",
files = [":license-targets"],
mode = "0644",
package_dir = "kubernetes",
deps = [
":_node-bin",
],
)
pkg_tar(
name = "_server-bin",
build_tar = "@io_kubernetes_build//tools/build_tar",
files = [
"//build:client-targets",
"//build:docker-artifacts",
"//build:node-targets",
"//build:server-targets",
],
mode = "0755",
package_dir = "server/bin",
visibility = ["//visibility:private"],
)
genrule(
name = "dummy",
outs = [".dummy"],
cmd = "touch $@",
visibility = ["//visibility:private"],
)
# Some of the startup scripts fail if there isn't an addons/ directory in the server tarball.
pkg_tar(
name = "_server-addons",
build_tar = "@io_kubernetes_build//tools/build_tar",
files = [
":.dummy",
],
package_dir = "addons",
visibility = ["//visibility:private"],
)
pkg_tar(
name = "kubernetes-server-%s" % PLATFORM_ARCH_STRING,
build_tar = "@io_kubernetes_build//tools/build_tar",
extension = "tar.gz",
files = [":license-targets"],
mode = "0644",
package_dir = "kubernetes",
deps = [
":_server-addons",
":_server-bin",
],
)
pkg_tar(
name = "_test-bin",
build_tar = "@io_kubernetes_build//tools/build_tar",
files = ["//build:test-targets"],
mode = "0755",
package_dir = "platforms/" + PLATFORM_ARCH_STRING.replace("-", "/"),
# TODO: how to make this multiplatform?
visibility = ["//visibility:private"],
)
pkg_tar(
name = "kubernetes-test",
build_tar = "@io_kubernetes_build//tools/build_tar",
extension = "tar.gz",
files = ["//build:test-portable-targets"],
package_dir = "kubernetes",
strip_prefix = "//",
deps = [
# TODO: how to make this multiplatform?
":_test-bin",
],
)
pkg_tar(
name = "_full_server",
build_tar = "@io_kubernetes_build//tools/build_tar",
files = [
":kubernetes-manifests.tar.gz",
":kubernetes-salt.tar.gz",
],
package_dir = "server",
visibility = ["//visibility:private"],
)
pkg_tar(
name = "kubernetes",
build_tar = "@io_kubernetes_build//tools/build_tar",
extension = "tar.gz",
files = [
"//:Godeps/LICENSES",
"//:README.md",
"//:Vagrantfile",
"//:version",
"//cluster:all-srcs",
"//docs:all-srcs",
"//examples:all-srcs",
"//hack/lib:all-srcs",
"//third_party/htpasswd:all-srcs",
],
package_dir = "kubernetes",
strip_prefix = "//",
deps = [
":_full_server",
],
)
pkg_tar(
name = "kubernetes-manifests",
build_tar = "@io_kubernetes_build//tools/build_tar",
extension = "tar.gz",
deps = [
"//cluster:manifests",
],
)
pkg_tar(
name = "kubernetes-salt",
build_tar = "@io_kubernetes_build//tools/build_tar",
extension = "tar.gz",
deps = [
"//cluster/saltbase:salt",
],
)
release_filegroup(
name = "release-tars",
srcs = [
":kubernetes.tar.gz",
":kubernetes-client-%s.tar.gz" % PLATFORM_ARCH_STRING,
":kubernetes-node-%s.tar.gz" % PLATFORM_ARCH_STRING,
":kubernetes-server-%s.tar.gz" % PLATFORM_ARCH_STRING,
":kubernetes-manifests.tar.gz",
":kubernetes-salt.tar.gz",
":kubernetes-src.tar.gz",
":kubernetes-test.tar.gz",
],
)

45
vendor/k8s.io/kubernetes/build/release.sh generated vendored Executable file
View File

@ -0,0 +1,45 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Build a Kubernetes release. This will build the binaries, create the Docker
# images and other build artifacts.
#
# For pushing these artifacts publicly to Google Cloud Storage or to a registry
# please refer to the kubernetes/release repo at
# https://github.com/kubernetes/release.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
source "${KUBE_ROOT}/build/lib/release.sh"
KUBE_RELEASE_RUN_TESTS=${KUBE_RELEASE_RUN_TESTS-y}
kube::build::verify_prereqs
kube::build::build_image
kube::build::run_build_command make cross
if [[ $KUBE_RELEASE_RUN_TESTS =~ ^[yY]$ ]]; then
kube::build::run_build_command make test
kube::build::run_build_command make test-integration
fi
kube::build::copy_output
kube::release::package_tarballs

16
vendor/k8s.io/kubernetes/build/root/.bazelrc generated vendored Normal file
View File

@ -0,0 +1,16 @@
# Show us information about failures.
build --verbose_failures
test --test_output=errors
# Include git version info
build --workspace_status_command hack/print-workspace-status.sh
# Make /tmp hermetic
build --sandbox_tmpfs_path=/tmp
# Ensure that Bazel never runs as root, which can cause unit tests to fail.
# This flag requires Bazel 0.5.0+
build --sandbox_fake_username
# Enable go race detection.
test --features=race
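# With the flags above, a normal build picks up the defaults automatically,
# e.g. (illustrative): bazel build //cmd/kubectl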

8
vendor/k8s.io/kubernetes/build/root/.kazelcfg.json generated vendored Normal file
View File

@ -0,0 +1,8 @@
{
"GoPrefix": "k8s.io/kubernetes",
"SkippedPaths": [
"^_.*"
],
"AddSourcesRules": true,
"K8sOpenAPIGen": true
}

82
vendor/k8s.io/kubernetes/build/root/BUILD.root generated vendored Normal file
View File

@ -0,0 +1,82 @@
# gazelle:exclude _artifacts
# gazelle:exclude _gopath
# gazelle:exclude _output
# gazelle:exclude _tmp
package(default_visibility = ["//visibility:public"])
load("@io_bazel_rules_go//go:def.bzl", "go_prefix")
load("@io_kubernetes_build//defs:build.bzl", "gcs_upload")
go_prefix("k8s.io/kubernetes")
filegroup(
name = "_binary-artifacts-and-hashes",
srcs = [
"//build:client-targets-and-hashes",
"//build:docker-artifacts-and-hashes",
"//build:node-targets-and-hashes",
"//build:server-targets-and-hashes",
"//build/debs:debs-and-hashes",
],
visibility = ["//visibility:private"],
)
gcs_upload(
name = "push-build",
data = [
":_binary-artifacts-and-hashes",
"//build/release-tars:release-tars-and-hashes",
"//cluster/gce:gcs-release-artifacts-and-hashes",
],
tags = ["manual"],
upload_paths = {
"//:_binary-artifacts-and-hashes": "bin/linux/amd64",
"//build/release-tars:release-tars-and-hashes": "",
"//cluster/gce:gcs-release-artifacts-and-hashes": "extra/gce",
},
)
filegroup(
name = "package-srcs",
srcs = glob(
["**"],
exclude = [
"bazel-*/**",
"_*/**",
".config/**",
".git/**",
".gsutil/**",
".make/**",
],
),
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [
":package-srcs",
"//api:all-srcs",
"//build:all-srcs",
"//cluster:all-srcs",
"//cmd:all-srcs",
"//docs:all-srcs",
"//examples:all-srcs",
"//hack:all-srcs",
"//pkg:all-srcs",
"//plugin:all-srcs",
"//staging:all-srcs",
"//test:all-srcs",
"//third_party:all-srcs",
"//vendor:all-srcs",
],
tags = ["automanaged"],
)
genrule(
name = "save_git_version",
outs = ["version"],
cmd = "grep ^STABLE_BUILD_SCM_REVISION bazel-out/stable-status.txt | awk '{print $$2}' >$@",
stamp = 1,
)

595
vendor/k8s.io/kubernetes/build/root/Makefile generated vendored Normal file
View File

@ -0,0 +1,595 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
DBG_MAKEFILE ?=
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** starting Makefile for goal(s) "$(MAKECMDGOALS)")
$(warning ***** $(shell date))
else
# If we're not debugging the Makefile, don't echo recipes.
MAKEFLAGS += -s
endif
# Old-skool build tools.
#
# Commonly used targets (see each target for more information):
# all: Build code.
# test: Run tests.
# clean: Clean up.
# It's necessary to set this because some environments don't link sh -> bash.
SHELL := /bin/bash
# We don't need make's built-in rules.
MAKEFLAGS += --no-builtin-rules
.SUFFIXES:
# Constants used throughout.
.EXPORT_ALL_VARIABLES:
OUT_DIR ?= _output
BIN_DIR := $(OUT_DIR)/bin
PRJ_SRC_PATH := k8s.io/kubernetes
GENERATED_FILE_PREFIX := zz_generated.
# Metadata for driving the build lives here.
META_DIR := .make
ifdef KUBE_GOFLAGS
$(info KUBE_GOFLAGS is now deprecated. Please use GOFLAGS instead.)
ifndef GOFLAGS
GOFLAGS := $(KUBE_GOFLAGS)
unexport KUBE_GOFLAGS
else
$(error Both KUBE_GOFLAGS and GOFLAGS are set. Please use just GOFLAGS)
endif
endif
# Extra options for the release or quick-release options:
KUBE_RELEASE_RUN_TESTS := $(KUBE_RELEASE_RUN_TESTS)
KUBE_FASTBUILD := $(KUBE_FASTBUILD)
# This controls the verbosity of the build. Higher numbers mean more output.
KUBE_VERBOSE ?= 1
define ALL_HELP_INFO
# Build code.
#
# Args:
# WHAT: Directory names to build. If any of these directories has a 'main'
# package, the build will produce executable files under $(OUT_DIR)/go/bin.
# If not specified, "everything" will be built.
# GOFLAGS: Extra flags to pass to 'go' when building.
# GOLDFLAGS: Extra linking flags passed to 'go' when building.
# GOGCFLAGS: Additional go compile flags passed to 'go' when building.
#
# Example:
# make
# make all
# make all WHAT=cmd/kubelet GOFLAGS=-v
# make all GOGCFLAGS="-N -l"
# Note: Use the -N -l options to disable compiler optimizations and inlining.
# Using these build options allows you to subsequently use source
# debugging tools like delve.
endef
.PHONY: all
ifeq ($(PRINT_HELP),y)
all:
@echo "$$ALL_HELP_INFO"
else
all: generated_files
hack/make-rules/build.sh $(WHAT)
endif
define GINKGO_HELP_INFO
# Build ginkgo
#
# Example:
# make ginkgo
endef
.PHONY: ginkgo
ifeq ($(PRINT_HELP),y)
ginkgo:
@echo "$$GINKGO_HELP_INFO"
else
ginkgo:
hack/make-rules/build.sh vendor/github.com/onsi/ginkgo/ginkgo
endif
define VERIFY_HELP_INFO
# Runs all the presubmission verifications.
#
# Args:
# BRANCH: Branch to be passed to verify-godeps.sh script.
#
# Example:
# make verify
# make verify BRANCH=branch_x
endef
.PHONY: verify
ifeq ($(PRINT_HELP),y)
verify:
@echo "$$VERIFY_HELP_INFO"
else
verify: verify_generated_files
KUBE_VERIFY_GIT_BRANCH=$(BRANCH) hack/make-rules/verify.sh -v
endif
define QUICK_VERIFY_HELP_INFO
# Runs only the presubmission verifications that aren't slow.
#
# Example:
# make quick-verify
endef
.PHONY: quick-verify
ifeq ($(PRINT_HELP),y)
quick-verify:
@echo "$$QUICK_VERIFY_HELP_INFO"
else
quick-verify: verify_generated_files
hack/make-rules/verify.sh -v -Q
endif
define UPDATE_HELP_INFO
# Runs all the generated updates.
#
# Example:
# make update
endef
.PHONY: update
ifeq ($(PRINT_HELP),y)
update:
@echo "$$UPDATE_HELP_INFO"
else
update:
hack/update-all.sh
endif
define CHECK_TEST_HELP_INFO
# Build and run tests.
#
# Args:
# WHAT: Directory names to test. All *_test.go files under these
# directories will be run. If not specified, "everything" will be tested.
# TESTS: Same as WHAT.
# KUBE_COVER: Whether to run tests with code coverage. Set to 'y' to enable coverage collection.
# GOFLAGS: Extra flags to pass to 'go' when building.
# GOLDFLAGS: Extra linking flags to pass to 'go' when building.
# GOGCFLAGS: Additional go compile flags passed to 'go' when building.
#
# Example:
# make check
# make test
# make check WHAT=./pkg/kubelet GOFLAGS=-v
endef
.PHONY: check test
ifeq ($(PRINT_HELP),y)
check test:
@echo "$$CHECK_TEST_HELP_INFO"
else
check test: generated_files
hack/make-rules/test.sh $(WHAT) $(TESTS)
endif
define TEST_IT_HELP_INFO
# Build and run integration tests.
#
# Args:
# WHAT: Directory names to test. All *_test.go files under these
# directories will be run. If not specified, "everything" will be tested.
#
# Example:
# make test-integration
endef
.PHONY: test-integration
ifeq ($(PRINT_HELP),y)
test-integration:
@echo "$$TEST_IT_HELP_INFO"
else
test-integration: generated_files
hack/make-rules/test-integration.sh $(WHAT)
endif
define TEST_E2E_HELP_INFO
# Build and run end-to-end tests.
#
# Example:
# make test-e2e
endef
.PHONY: test-e2e
ifeq ($(PRINT_HELP),y)
test-e2e:
@echo "$$TEST_E2E_HELP_INFO"
else
test-e2e: ginkgo generated_files
go run hack/e2e.go -- -v --build --up --test --down
endif
define TEST_E2E_NODE_HELP_INFO
# Build and run node end-to-end tests.
#
# Args:
# FOCUS: Regexp that matches the tests to be run. Defaults to "".
# SKIP: Regexp that matches the tests that need to be skipped. Defaults
# to "".
# RUN_UNTIL_FAILURE: If true, pass --untilItFails to ginkgo so tests are run
# repeatedly until they fail. Defaults to false.
# REMOTE: If true, run the tests on a remote host instance on GCE. Defaults
# to false.
# IMAGES: For REMOTE=true only. Comma delimited list of images for creating
# remote hosts to run tests against. Defaults to a recent image.
# LIST_IMAGES: If true, don't run tests. Just output the list of available
# images for testing. Defaults to false.
# HOSTS: For REMOTE=true only. Comma delimited list of running gce hosts to
# run tests against. Defaults to "".
# DELETE_INSTANCES: For REMOTE=true only. Delete any instances created as
# part of this test run. Defaults to false.
# ARTIFACTS: For REMOTE=true only. Local directory to scp test artifacts into
# from the remote hosts. Defaults to "/tmp/_artifacts".
# REPORT: For REMOTE=false only. Local directory to write junit xml results
# to. Defaults to "/tmp/".
# CLEANUP: For REMOTE=true only. If false, do not stop processes or delete
# test files on remote hosts. Defaults to true.
# IMAGE_PROJECT: For REMOTE=true only. Project containing images provided to
# IMAGES. Defaults to "kubernetes-node-e2e-images".
# INSTANCE_PREFIX: For REMOTE=true only. Instances created from images will
# have the name "${INSTANCE_PREFIX}-${IMAGE_NAME}". Defaults to "test".
# INSTANCE_METADATA: For REMOTE=true and running on GCE only.
# GUBERNATOR: For REMOTE=true only. Produce link to Gubernator to view logs.
# Defaults to false.
# PARALLELISM: The number of ginkgo nodes to run. Defaults to 8.
# RUNTIME: Container runtime to use (e.g. docker, rkt, remote).
# Defaults to "docker".
# CONTAINER_RUNTIME_ENDPOINT: remote container endpoint to connect to.
# Used when RUNTIME is set to "remote".
# IMAGE_SERVICE_ENDPOINT: remote image endpoint to connect to, to prepull images.
# Used when RUNTIME is set to "remote".
# IMAGE_CONFIG_FILE: path to a file containing image configuration.
# SYSTEM_SPEC_NAME: The name of the system spec to be used for validating the
# image in the node conformance test. The specs are located at
# test/e2e_node/system/specs/. For example, "SYSTEM_SPEC_NAME=gke" will use
# the spec at test/e2e_node/system/specs/gke.yaml. If unspecified, the
# default built-in spec (system.DefaultSpec) will be used.
#
# Example:
# make test-e2e-node FOCUS=Kubelet SKIP=container
# make test-e2e-node REMOTE=true DELETE_INSTANCES=true
# make test-e2e-node TEST_ARGS='--kubelet-flags="--cgroups-per-qos=true"'
endef
.PHONY: test-e2e-node
ifeq ($(PRINT_HELP),y)
test-e2e-node:
@echo "$$TEST_E2E_NODE_HELP_INFO"
else
test-e2e-node: ginkgo generated_files
hack/make-rules/test-e2e-node.sh
endif
define TEST_CMD_HELP_INFO
# Build and run cmdline tests.
#
# Example:
# make test-cmd
endef
.PHONY: test-cmd
ifeq ($(PRINT_HELP),y)
test-cmd:
@echo "$$TEST_CMD_HELP_INFO"
else
test-cmd: generated_files
hack/make-rules/test-kubeadm-cmd.sh
hack/make-rules/test-cmd.sh
endif
define CLEAN_HELP_INFO
# Remove all build artifacts.
#
# Example:
# make clean
#
# TODO(thockin): call clean_generated when we stop committing generated code.
endef
.PHONY: clean
ifeq ($(PRINT_HELP),y)
clean:
@echo "$$CLEAN_HELP_INFO"
else
clean: clean_meta
build/make-clean.sh
hack/make-rules/clean.sh
endif
define CLEAN_META_HELP_INFO
# Remove make-related metadata files.
#
# Example:
# make clean_meta
endef
.PHONY: clean_meta
ifeq ($(PRINT_HELP),y)
clean_meta:
@echo "$$CLEAN_META_HELP_INFO"
else
clean_meta:
rm -rf $(META_DIR)
endif
define CLEAN_GENERATED_HELP_INFO
# Remove all auto-generated artifacts. Generated artifacts in the staging
# folder should not be removed, as they are not generated by generated_files.
#
# Example:
# make clean_generated
endef
.PHONY: clean_generated
ifeq ($(PRINT_HELP),y)
clean_generated:
@echo "$$CLEAN_GENERATED_HELP_INFO"
else
clean_generated:
find . -type f -name $(GENERATED_FILE_PREFIX)\* | grep -v "[.]/staging/.*" | xargs rm -f
endif
define VET_HELP_INFO
# Run 'go vet'.
#
# Args:
# WHAT: Directory names to vet. All *.go files under these
# directories will be vetted. If not specified, "everything" will be
# vetted.
#
# Example:
# make vet
# make vet WHAT=./pkg/kubelet
endef
.PHONY: vet
ifeq ($(PRINT_HELP),y)
vet:
@echo "$$VET_HELP_INFO"
else
vet: generated_files
CALLED_FROM_MAIN_MAKEFILE=1 hack/make-rules/vet.sh $(WHAT)
endif
define RELEASE_HELP_INFO
# Build a release
# Use the 'release-in-a-container' target to build the release when already
# inside a container, instead of creating a new build container via the
# 'release' target. Useful for running in GCB.
#
# Example:
# make release
# make release-in-a-container
endef
.PHONY: release release-in-a-container
ifeq ($(PRINT_HELP),y)
release release-in-a-container:
@echo "$$RELEASE_HELP_INFO"
else
release:
build/release.sh
release-in-a-container:
build/release-in-a-container.sh
endif
define RELEASE_SKIP_TESTS_HELP_INFO
# Build a release, but skip tests
#
# Args:
# KUBE_RELEASE_RUN_TESTS: Whether to run tests. Set to 'y' to run tests anyways.
# KUBE_FASTBUILD: Whether to cross-compile for other architectures. Set to 'true' to do so.
#
# Example:
# make release-skip-tests
# make quick-release
endef
.PHONY: release-skip-tests quick-release
ifeq ($(PRINT_HELP),y)
release-skip-tests quick-release:
@echo "$$RELEASE_SKIP_TESTS_HELP_INFO"
else
release-skip-tests quick-release: KUBE_RELEASE_RUN_TESTS = n
release-skip-tests quick-release: KUBE_FASTBUILD = true
release-skip-tests quick-release:
build/release.sh
endif
define PACKAGE_HELP_INFO
# Package tarballs
# Use the 'package-tarballs' target to run the final packaging steps of
# a release.
#
# Example:
# make package-tarballs
endef
.PHONY: package package-tarballs
ifeq ($(PRINT_HELP),y)
package package-tarballs:
@echo "$$PACKAGE_HELP_INFO"
else
package package-tarballs:
build/package-tarballs.sh
endif
define CROSS_HELP_INFO
# Cross-compile for all platforms
# Use the 'cross-in-a-container' target to cross-compile when already inside
# a container, instead of creating a new build container (build-image).
# Useful for running in GCB.
#
# Example:
# make cross
# make cross-in-a-container
endef
.PHONY: cross cross-in-a-container
ifeq ($(PRINT_HELP),y)
cross cross-in-a-container:
@echo "$$CROSS_HELP_INFO"
else
cross:
hack/make-rules/cross.sh
cross-in-a-container: KUBE_OUTPUT_SUBPATH = $(OUT_DIR)/dockerized
cross-in-a-container:
ifeq (,$(wildcard /.dockerenv))
@echo -e "\nThe 'cross-in-a-container' target can only be used from within a docker container.\n"
else
hack/make-rules/cross.sh
endif
endif
define CMD_HELP_INFO
# Add rules for all directories in cmd/
#
# Example:
# make kubectl kube-proxy
endef
#TODO: make EXCLUDE_TARGET auto-generated when there are other files in cmd/
EXCLUDE_TARGET=BUILD OWNERS
.PHONY: $(filter-out %$(EXCLUDE_TARGET),$(notdir $(abspath $(wildcard cmd/*/))))
ifeq ($(PRINT_HELP),y)
$(filter-out %$(EXCLUDE_TARGET),$(notdir $(abspath $(wildcard cmd/*/)))):
@echo "$$CMD_HELP_INFO"
else
$(filter-out %$(EXCLUDE_TARGET),$(notdir $(abspath $(wildcard cmd/*/)))): generated_files
hack/make-rules/build.sh cmd/$@
endif
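# As a sketch of what the filter-out above produces: given a hypothetical
# tree containing cmd/kubectl/, cmd/kubelet/, cmd/BUILD, and cmd/OWNERS, the
# phony targets would be "kubectl" and "kubelet", with BUILD and OWNERS
# excluded, so that `make kubectl` runs hack/make-rules/build.sh cmd/kubectl.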
define PLUGIN_CMD_HELP_INFO
# Add rules for all directories in plugin/cmd/
#
# Example:
# make kube-scheduler
endef
.PHONY: $(notdir $(abspath $(wildcard plugin/cmd/*/)))
ifeq ($(PRINT_HELP),y)
$(notdir $(abspath $(wildcard plugin/cmd/*/))):
@echo "$$PLUGIN_CMD_HELP_INFO"
else
$(notdir $(abspath $(wildcard plugin/cmd/*/))): generated_files
hack/make-rules/build.sh plugin/cmd/$@
endif
define GENERATED_FILES_HELP_INFO
# Produce auto-generated files needed for the build.
#
# Example:
# make generated_files
endef
.PHONY: generated_files
ifeq ($(PRINT_HELP),y)
generated_files:
@echo "$$GENERATED_FILES_HELP_INFO"
else
generated_files:
$(MAKE) -f Makefile.generated_files $@ CALLED_FROM_MAIN_MAKEFILE=1
endif
define VERIFY_GENERATED_FILES_HELP_INFO
# Verify auto-generated files needed for the build.
#
# Example:
# make verify_generated_files
endef
.PHONY: verify_generated_files
ifeq ($(PRINT_HELP),y)
verify_generated_files:
@echo "$$VERIFY_GENERATED_FILES_HELP_INFO"
else
verify_generated_files:
$(MAKE) -f Makefile.generated_files $@ CALLED_FROM_MAIN_MAKEFILE=1
endif
define HELP_INFO
# Print make targets and help info
#
# Example:
# make help
endef
.PHONY: help
ifeq ($(PRINT_HELP),y)
help:
@echo "$$HELP_INFO"
else
help:
hack/make-rules/make-help.sh
endif
# Non-dockerized bazel rules.
.PHONY: bazel-build bazel-test bazel-test-integration bazel-release
ifeq ($(PRINT_HELP),y)
define BAZEL_BUILD_HELP_INFO
# Build with bazel
#
# Example:
# make bazel-build
endef
bazel-build:
@echo "$$BAZEL_BUILD_HELP_INFO"
else
# Some things in vendor don't build due to empty target lists for cross-platform rules.
bazel-build:
bazel build -- //... -//vendor/...
endif
ifeq ($(PRINT_HELP),y)
define BAZEL_TEST_HELP_INFO
# Test with bazel
#
# Example:
# make bazel-test
endef
bazel-test:
@echo "$$BAZEL_TEST_HELP_INFO"
else
# //hack:verify-all is a manual target.
# We don't want to build any of the release artifacts when running tests.
# Some things in vendor don't build due to empty target lists for cross-platform rules.
bazel-test:
bazel test --build_tag_filters=-e2e,-integration --test_tag_filters=-e2e,-integration --flaky_test_attempts=3 -- \
//... \
//hack:verify-all \
-//build/... \
-//vendor/...
endif
ifeq ($(PRINT_HELP),y)
define BAZEL_TEST_INTEGRATION_HELP_INFO
# Integration test with bazel
#
# Example:
# make bazel-test-integration
endef
bazel-test-integration:
@echo "$$BAZEL_TEST_INTEGRATION_HELP_INFO"
else
bazel-test-integration:
bazel test //test/integration/...
endif
ifeq ($(PRINT_HELP),y)
define BAZEL_RELEASE_HELP_INFO
# Build release tars with bazel
#
# Example:
# make bazel-release
endef
bazel-release:
@echo "$$BAZEL_RELEASE_HELP_INFO"
else
bazel-release:
bazel build //build/release-tars
endif

807
vendor/k8s.io/kubernetes/build/root/Makefile.generated_files generated vendored Normal file
View File

@ -0,0 +1,807 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Don't allow users to call this directly. There are too many variables this
# assumes to inherit from the main Makefile. This is not a user-facing file.
ifeq ($(CALLED_FROM_MAIN_MAKEFILE),)
$(error Please use the main Makefile, e.g. `make generated_files`)
endif
# Don't allow an implicit 'all' rule. This is not a user-facing file.
ifeq ($(MAKECMDGOALS),)
$(error This Makefile requires an explicit rule to be specified)
endif
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** starting Makefile.generated_files for goal(s) "$(MAKECMDGOALS)")
$(warning ***** $(shell date))
endif
# It's necessary to set this because some environments don't link sh -> bash.
SHELL := /bin/bash
# This rule collects all the generated file sets into a single rule. Other
# rules should depend on this to ensure generated files are rebuilt.
.PHONY: generated_files
generated_files: gen_deepcopy gen_defaulter gen_conversion gen_openapi
.PHONY: verify_generated_files
verify_generated_files: verify_gen_deepcopy \
verify_gen_defaulter \
verify_gen_conversion
# Code-generation logic.
#
# This stuff can be pretty tricky, and there are probably some corner cases that
# we don't handle well. That said, here's a straightforward test to prove that
# the most common cases work. Sadly, it is manual.
#
# make clean
# find . -name .make\* | xargs rm -f
# find . -name zz_generated\* | xargs rm -f
# # verify `find . -name zz_generated.deepcopy.go | wc -l` is 0
# # verify `find . -name .make | wc -l` is 0
#
# make nonexistent
# # expect "No rule to make target"
# # verify `find .make/ -type f | wc -l` has many files
#
# make gen_deepcopy
# # expect deepcopy-gen is built exactly once
# # expect many files to be regenerated
# # verify `find . -name zz_generated.deepcopy.go | wc -l` has files
# make gen_deepcopy
# # expect nothing to be rebuilt, finish in O(seconds)
# touch pkg/api/types.go
# make gen_deepcopy
# # expect one file to be regenerated
# make gen_deepcopy
# # expect nothing to be rebuilt, finish in O(seconds)
# touch vendor/k8s.io/code-generator/cmd/deepcopy-gen/main.go
# make gen_deepcopy
# # expect deepcopy-gen is built exactly once
# # expect many files to be regenerated
# # verify `find . -name zz_generated.deepcopy.go | wc -l` has files
# make gen_deepcopy
# # expect nothing to be rebuilt, finish in O(seconds)
#
# make gen_conversion
# # expect conversion-gen is built exactly once
# # expect many files to be regenerated
# # verify `find . -name zz_generated.conversion.go | wc -l` has files
# make gen_conversion
# # expect nothing to be rebuilt, finish in O(seconds)
# touch pkg/api/types.go
# make gen_conversion
# # expect one file to be regenerated
# make gen_conversion
# # expect nothing to be rebuilt, finish in O(seconds)
# touch vendor/k8s.io/code-generator/cmd/conversion-gen/main.go
# make gen_conversion
# # expect conversion-gen is built exactly once
# # expect many files to be regenerated
# # verify `find . -name zz_generated.conversion.go | wc -l` has files
# make gen_conversion
# # expect nothing to be rebuilt, finish in O(seconds)
#
# make all
# # expect it to build
#
# make test
# # expect it to pass
#
# make clean
# # verify `find . -name zz_generated.deepcopy.go | wc -l` is 0
# # verify `find . -name .make | wc -l` is 0
#
# make all WHAT=cmd/kube-proxy
# # expect it to build
#
# make clean
# make test WHAT=cmd/kube-proxy
# # expect it to pass
# This variable holds a list of every directory that contains Go files in this
# project. Other rules and variables can use this as a starting point to
# reduce filesystem accesses.
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** finding all *.go dirs)
endif
ALL_GO_DIRS := $(shell \
hack/make-rules/helpers/cache_go_dirs.sh $(META_DIR)/all_go_dirs.mk \
)
# The name of the metadata file which lists *.go files in each pkg.
GOFILES_META := gofiles.mk
# Establish a dependency between the deps file and the dir. Whenever a dir
# changes (files added or removed) the deps file will be considered stale.
#
# The variable value was set in $(GOFILES_META) and included as part of the
# dependency management logic.
#
# This is looser than we really need (e.g. we don't really care about non *.go
# files or even *_test.go files), but this is much easier to represent.
#
# Because we 'sinclude' the deps file, it is considered for rebuilding, as part
# of make's normal evaluation. If it gets rebuilt, make will restart.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
$(foreach dir, $(ALL_GO_DIRS), $(eval \
$(META_DIR)/$(dir)/$(GOFILES_META): $(dir) \
))
# How to rebuild a deps file. When make determines that the deps file is stale
# (see above), it executes this rule, and then re-loads the deps file.
#
# This is looser than we really need (e.g. we don't really care about test
# files), but this is MUCH faster than calling `go list`.
#
# We regenerate the output file in order to satisfy make's "newer than" rules,
# but we only need to rebuild targets if the contents actually changed. That
# is what the .stamp file represents.
$(foreach dir, $(ALL_GO_DIRS), \
$(META_DIR)/$(dir)/$(GOFILES_META)):
FILES=$$(ls $</*.go | grep --color=never -v $(GENERATED_FILE_PREFIX)); \
mkdir -p $(@D); \
echo "gofiles__$< := $$(echo $${FILES})" >$@.tmp; \
if ! cmp -s $@.tmp $@; then \
if [[ "$(DBG_CODEGEN)" == 1 ]]; then \
echo "DBG: gofiles changed for $@"; \
fi; \
touch $@.stamp; \
fi; \
mv $@.tmp $@
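# As a hypothetical illustration of the result: for a directory
# pkg/apis/example, the generated $(META_DIR)/pkg/apis/example/gofiles.mk
# would hold a single assignment such as
#   gofiles__pkg/apis/example := pkg/apis/example/doc.go pkg/apis/example/types.go
# which later deps-only statements expand via $(gofiles__$(dir)).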
# This is required to fill in the DAG, since some cases (e.g. 'make clean all')
# will reference the .stamp file when it doesn't exist. We don't need to
# rebuild it in that case, just keep make happy.
$(foreach dir, $(ALL_GO_DIRS), \
$(META_DIR)/$(dir)/$(GOFILES_META).stamp):
# Include any deps files as additional Makefile rules. This triggers make to
# consider the deps files for rebuild, which makes the whole
# dependency-management logic work. 'sinclude' is "silent include" which does
# not fail if the file does not exist.
$(foreach dir, $(ALL_GO_DIRS), $(eval \
sinclude $(META_DIR)/$(dir)/$(GOFILES_META) \
))
# Generate a list of all files that have a `+k8s:` comment-tag. This will be
# used to derive lists of files/dirs for generation tools.
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** finding all +k8s: tags)
endif
ALL_K8S_TAG_FILES := $(shell \
find $(ALL_GO_DIRS) -maxdepth 1 -type f -name \*.go \
| xargs grep --color=never -l '^// *+k8s:' \
)
#
# Deep-copy generation
#
# Any package that wants deep-copy functions generated must include a
# comment-tag in column 0 of one file of the form:
# // +k8s:deepcopy-gen=<VALUE>
#
# The <VALUE> may be one of:
# generate: generate deep-copy functions into the package
# register: generate deep-copy functions and register them with a
# scheme
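# A minimal sketch, assuming a hypothetical package "example": the tag lives
# in an ordinary Go file, typically doc.go:
#
#   // +k8s:deepcopy-gen=register
#   package example
#
# deepcopy-gen then writes the $(DEEPCOPY_FILENAME) result into that package.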
# The result file, in each pkg, of deep-copy generation.
DEEPCOPY_BASENAME := $(GENERATED_FILE_PREFIX)deepcopy
DEEPCOPY_FILENAME := $(DEEPCOPY_BASENAME).go
# The tool used to generate deep copies.
DEEPCOPY_GEN := $(BIN_DIR)/deepcopy-gen
# Find all the directories that request deep-copy generation.
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** finding all +k8s:deepcopy-gen tags)
endif
DEEPCOPY_DIRS := $(shell \
grep --color=never -l '+k8s:deepcopy-gen=' $(ALL_K8S_TAG_FILES) \
| xargs -n1 dirname \
| LC_ALL=C sort -u \
)
DEEPCOPY_FILES := $(addsuffix /$(DEEPCOPY_FILENAME), $(DEEPCOPY_DIRS))
# Shell function for reuse in rules.
RUN_GEN_DEEPCOPY = \
function run_gen_deepcopy() { \
if [[ -f $(META_DIR)/$(DEEPCOPY_GEN).todo ]]; then \
pkgs=$$(cat $(META_DIR)/$(DEEPCOPY_GEN).todo | paste -sd, -); \
if [[ "$(DBG_CODEGEN)" == 1 ]]; then \
echo "DBG: running $(DEEPCOPY_GEN) for $$pkgs"; \
fi; \
./hack/run-in-gopath.sh $(DEEPCOPY_GEN) \
--v $(KUBE_VERBOSE) \
--logtostderr \
-i "$$pkgs" \
--bounding-dirs $(PRJ_SRC_PATH),"k8s.io/api" \
-O $(DEEPCOPY_BASENAME) \
"$$@"; \
fi \
}; \
run_gen_deepcopy
# This rule aggregates the set of files to generate and then generates them all
# in a single run of the tool.
.PHONY: gen_deepcopy
gen_deepcopy: $(DEEPCOPY_FILES) $(DEEPCOPY_GEN)
$(RUN_GEN_DEEPCOPY)
.PHONY: verify_gen_deepcopy
verify_gen_deepcopy: $(DEEPCOPY_GEN)
$(RUN_GEN_DEEPCOPY) --verify-only
# For each dir in DEEPCOPY_DIRS, this establishes a dependency between the
# output file and the input files that should trigger a rebuild.
#
# Note that this is a deps-only statement, not a full rule (see below). This
# has to be done in a distinct step because wildcards don't work in static
# pattern rules.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
#
# We depend on the $(GOFILES_META).stamp to detect when the set of input files
# has changed. This allows us to detect deleted input files.
$(foreach dir, $(DEEPCOPY_DIRS), $(eval \
$(dir)/$(DEEPCOPY_FILENAME): $(META_DIR)/$(dir)/$(GOFILES_META).stamp \
$(gofiles__$(dir)) \
))
# Unilaterally remove any leftovers from previous runs.
$(shell rm -f $(META_DIR)/$(DEEPCOPY_GEN)*.todo)
# How to regenerate deep-copy code. This is a little slow to run, so we batch
# it up and trigger the batch from the 'generated_files' target.
$(DEEPCOPY_FILES): $(DEEPCOPY_GEN)
mkdir -p $$(dirname $(META_DIR)/$(DEEPCOPY_GEN))
if [[ "$(DBG_CODEGEN)" == 1 ]]; then \
echo "DBG: deepcopy needed $(@D): $?"; \
ls -lf --full-time $@ $? || true; \
fi
echo $(PRJ_SRC_PATH)/$(@D) >> $(META_DIR)/$(DEEPCOPY_GEN).todo
# This calculates the dependencies for the generator tool, so we only rebuild
# it when needed. It is PHONY so that it always runs, but it only updates the
# file if the contents have actually changed. We 'sinclude' this later.
.PHONY: $(META_DIR)/$(DEEPCOPY_GEN).mk
$(META_DIR)/$(DEEPCOPY_GEN).mk:
mkdir -p $(@D); \
(echo -n "$(DEEPCOPY_GEN): "; \
./hack/run-in-gopath.sh go list \
-f '{{.ImportPath}}{{"\n"}}{{range .Deps}}{{.}}{{"\n"}}{{end}}' \
./vendor/k8s.io/code-generator/cmd/deepcopy-gen \
| grep --color=never "^$(PRJ_SRC_PATH)/" \
| xargs ./hack/run-in-gopath.sh go list \
-f '{{$$d := .Dir}}{{$$d}}{{"\n"}}{{range .GoFiles}}{{$$d}}/{{.}}{{"\n"}}{{end}}' \
| paste -sd' ' - \
| sed 's/ / \\=,/g' \
| tr '=,' '\n\t' \
| sed "s|$$(pwd -P)/||"; \
) > $@.tmp; \
if ! cmp -s $@.tmp $@; then \
if [[ "$(DBG_CODEGEN)" == 1 ]]; then \
echo "DBG: $(DEEPCOPY_GEN).mk changed"; \
fi; \
cat $@.tmp > $@; \
rm -f $@.tmp; \
fi
# Include dependency info for the generator tool. This will cause the rule of
# the same name to be considered and if it is updated, make will restart.
sinclude $(META_DIR)/$(DEEPCOPY_GEN).mk
# How to build the generator tool. The deps for this are defined in
# the $(DEEPCOPY_GEN).mk, above.
#
# A word on the need to touch: This rule might trigger if, for example, a
# non-Go file was added or deleted from a directory on which this depends.
# This target needs to be reconsidered, but Go realizes it doesn't actually
# have to be rebuilt. In that case, make will forever see the dependency as
# newer than the binary, and try to rebuild it over and over. So we touch it,
# and make is happy.
$(DEEPCOPY_GEN):
hack/make-rules/build.sh ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
touch $@
#
# Defaulter generation
#
# Any package that wants defaulter functions generated must include a
# comment-tag in column 0 of one file of the form:
# // +k8s:defaulter-gen=<VALUE>
#
# The <VALUE> depends on context:
# on types:
# true: always generate a defaulter for this type
# false: never generate a defaulter for this type
# on functions:
# covers: if the function name matches SetDefaults_NAME, instructs
# the generator not to recurse
# on packages:
# FIELDNAME: any object with a field of this name is a candidate
# for having a defaulter generated
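# A minimal sketch, assuming a hypothetical type Foo: the on-type tag is
# paired with a matching SetDefaults function in the same package:
#
#   package example
#
#   // +k8s:defaulter-gen=true
#   type Foo struct {
#           Replicas *int32
#   }
#
#   // SetDefaults_Foo fills in a default replica count when it is unset.
#   func SetDefaults_Foo(obj *Foo) {
#           if obj.Replicas == nil {
#                   one := int32(1)
#                   obj.Replicas = &one
#           }
#   }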
# The result file, in each pkg, of defaulter generation.
DEFAULTER_BASENAME := $(GENERATED_FILE_PREFIX)defaults
DEFAULTER_FILENAME := $(DEFAULTER_BASENAME).go
# The tool used to generate defaulters.
DEFAULTER_GEN := $(BIN_DIR)/defaulter-gen
# All directories that request any form of defaulter generation.
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** finding all +k8s:defaulter-gen tags)
endif
DEFAULTER_DIRS := $(shell \
grep --color=never -l '+k8s:defaulter-gen=' $(ALL_K8S_TAG_FILES) \
| xargs -n1 dirname \
| LC_ALL=C sort -u \
)
DEFAULTER_FILES := $(addsuffix /$(DEFAULTER_FILENAME), $(DEFAULTER_DIRS))
RUN_GEN_DEFAULTER := \
function run_gen_defaulter() { \
if [[ -f $(META_DIR)/$(DEFAULTER_GEN).todo ]]; then \
pkgs=$$(cat $(META_DIR)/$(DEFAULTER_GEN).todo | paste -sd, -); \
if [[ "$(DBG_CODEGEN)" == 1 ]]; then \
echo "DBG: running $(DEFAULTER_GEN) for $$pkgs"; \
fi; \
./hack/run-in-gopath.sh $(DEFAULTER_GEN) \
--v $(KUBE_VERBOSE) \
--logtostderr \
-i "$$pkgs" \
--extra-peer-dirs $$(echo $(addprefix $(PRJ_SRC_PATH)/, $(DEFAULTER_DIRS)) | sed 's/ /,/g') \
-O $(DEFAULTER_BASENAME) \
"$$@"; \
fi \
}; \
run_gen_defaulter
# This rule aggregates the set of files to generate and then generates them all
# in a single run of the tool.
.PHONY: gen_defaulter
gen_defaulter: $(DEFAULTER_FILES) $(DEFAULTER_GEN)
$(RUN_GEN_DEFAULTER)
.PHONY: verify_gen_defaulter
verify_gen_defaulter: $(DEFAULTER_GEN)
$(RUN_GEN_DEFAULTER) --verify-only
# For each dir in DEFAULTER_DIRS, this establishes a dependency between the
# output file and the input files that should trigger a rebuild.
#
# The variable value was set in $(GOFILES_META) and included as part of the
# dependency management logic.
#
# Note that this is a deps-only statement, not a full rule (see below). This
# has to be done in a distinct step because wildcards don't work in static
# pattern rules.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
#
# We depend on the $(GOFILES_META).stamp to detect when the set of input files
# has changed. This allows us to detect deleted input files.
$(foreach dir, $(DEFAULTER_DIRS), $(eval \
$(dir)/$(DEFAULTER_FILENAME): $(META_DIR)/$(dir)/$(GOFILES_META).stamp \
$(gofiles__$(dir)) \
))
# For each dir in DEFAULTER_DIRS, for each target in $(defaulters__$(dir)),
# this establishes a dependency between the output file and the input files
# that should trigger a rebuild.
#
# The variable value was set in $(GOFILES_META) and included as part of the
# dependency management logic.
#
# Note that this is a deps-only statement, not a full rule (see below). This
# has to be done in a distinct step because wildcards don't work in static
# pattern rules.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
#
# We depend on the $(GOFILES_META).stamp to detect when the set of input files
# has changed. This allows us to detect deleted input files.
$(foreach dir, $(DEFAULTER_DIRS), \
$(foreach tgt, $(defaulters__$(dir)), $(eval \
$(dir)/$(DEFAULTER_FILENAME): $(META_DIR)/$(tgt)/$(GOFILES_META).stamp \
$(gofiles__$(tgt)) \
)) \
)
# Unilaterally remove any leftovers from previous runs.
$(shell rm -f $(META_DIR)/$(DEFAULTER_GEN)*.todo)
# How to regenerate defaulter code. This is a little slow to run, so we batch
# it up and trigger the batch from the 'generated_files' target.
$(DEFAULTER_FILES): $(DEFAULTER_GEN)
mkdir -p $$(dirname $(META_DIR)/$(DEFAULTER_GEN))
if [[ "$(DBG_CODEGEN)" == 1 ]]; then \
echo "DBG: defaulter needed $(@D): $?"; \
ls -lf --full-time $@ $? || true; \
fi
echo $(PRJ_SRC_PATH)/$(@D) >> $(META_DIR)/$(DEFAULTER_GEN).todo
# This calculates the dependencies for the generator tool, so we only rebuild
# it when needed. It is PHONY so that it always runs, but it only updates the
# file if the contents have actually changed. We 'sinclude' this later.
.PHONY: $(META_DIR)/$(DEFAULTER_GEN).mk
$(META_DIR)/$(DEFAULTER_GEN).mk:
mkdir -p $(@D); \
(echo -n "$(DEFAULTER_GEN): "; \
./hack/run-in-gopath.sh go list \
-f '{{.ImportPath}}{{"\n"}}{{range .Deps}}{{.}}{{"\n"}}{{end}}' \
./vendor/k8s.io/code-generator/cmd/defaulter-gen \
| grep --color=never "^$(PRJ_SRC_PATH)/" \
| xargs ./hack/run-in-gopath.sh go list \
-f '{{$$d := .Dir}}{{$$d}}{{"\n"}}{{range .GoFiles}}{{$$d}}/{{.}}{{"\n"}}{{end}}' \
| paste -sd' ' - \
| sed 's/ / \\=,/g' \
| tr '=,' '\n\t' \
| sed "s|$$(pwd -P)/||"; \
) > $@.tmp; \
if ! cmp -s $@.tmp $@; then \
if [[ "$(DBG_CODEGEN)" == 1 ]]; then \
echo "DBG: $(DEFAULTER_GEN).mk changed"; \
fi; \
cat $@.tmp > $@; \
rm -f $@.tmp; \
fi
# Include dependency info for the generator tool. This will cause the rule of
# the same name to be considered and if it is updated, make will restart.
sinclude $(META_DIR)/$(DEFAULTER_GEN).mk
# How to build the generator tool. The deps for this are defined in
# the $(DEFAULTER_GEN).mk, above.
#
# A word on the need to touch: This rule might trigger if, for example, a
# non-Go file was added or deleted from a directory on which this depends.
# This target needs to be reconsidered, but Go realizes it doesn't actually
# have to be rebuilt. In that case, make will forever see the dependency as
# newer than the binary, and try to rebuild it over and over. So we touch it,
# and make is happy.
$(DEFAULTER_GEN):
hack/make-rules/build.sh ./vendor/k8s.io/code-generator/cmd/defaulter-gen
touch $@
#
# Open-api generation
#
# Any package that wants open-api functions generated must include a
# comment-tag in column 0 of one file of the form:
# // +k8s:openapi-gen=true
#
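# A minimal sketch, assuming a hypothetical package "example": the tag is
# usually placed in doc.go so every exported type in the package is covered:
#
#   // +k8s:openapi-gen=true
#   package example
#
# Unlike the other generators, the results for all tagged packages are
# aggregated into the single output file named below.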
# The result file, in each pkg, of open-api generation.
OPENAPI_BASENAME := $(GENERATED_FILE_PREFIX)openapi
OPENAPI_FILENAME := $(OPENAPI_BASENAME).go
OPENAPI_OUTPUT_PKG := pkg/generated/openapi
# The tool used to generate open apis.
OPENAPI_GEN := $(BIN_DIR)/openapi-gen
# Find all the directories that request open-api generation.
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** finding all +k8s:openapi-gen tags)
endif
OPENAPI_DIRS := $(shell \
grep --color=never -l '+k8s:openapi-gen=' $(ALL_K8S_TAG_FILES) \
| xargs -n1 dirname \
| LC_ALL=C sort -u \
)
OPENAPI_OUTFILE := $(OPENAPI_OUTPUT_PKG)/$(OPENAPI_FILENAME)
# This rule is the user-friendly entrypoint for openapi generation.
.PHONY: gen_openapi
gen_openapi: $(OPENAPI_OUTFILE) $(OPENAPI_GEN)
# For each dir in OPENAPI_DIRS, this establishes a dependency between the
# output file and the input files that should trigger a rebuild.
#
# Note that this is a deps-only statement, not a full rule (see below). This
# has to be done in a distinct step because wildcards don't work in static
# pattern rules.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
#
# We depend on the $(GOFILES_META).stamp to detect when the set of input files
# has changed. This allows us to detect deleted input files.
$(foreach dir, $(OPENAPI_DIRS), $(eval \
$(OPENAPI_OUTFILE): $(META_DIR)/$(dir)/$(GOFILES_META).stamp \
$(gofiles__$(dir)) \
))
# How to regenerate open-api code. This emits a single file for all results.
$(OPENAPI_OUTFILE): $(OPENAPI_GEN)
function run_gen_openapi() { \
./hack/run-in-gopath.sh $(OPENAPI_GEN) \
--v $(KUBE_VERBOSE) \
--logtostderr \
-i $$(echo $(addprefix $(PRJ_SRC_PATH)/, $(OPENAPI_DIRS)) | sed 's/ /,/g') \
-p $(PRJ_SRC_PATH)/$(OPENAPI_OUTPUT_PKG) \
-O $(OPENAPI_BASENAME) \
"$$@"; \
}; \
run_gen_openapi
# This calculates the dependencies for the generator tool, so we only rebuild
# it when needed. It is PHONY so that it always runs, but it only updates the
# file if the contents have actually changed. We 'sinclude' this later.
.PHONY: $(META_DIR)/$(OPENAPI_GEN).mk
$(META_DIR)/$(OPENAPI_GEN).mk:
mkdir -p $(@D); \
(echo -n "$(OPENAPI_GEN): "; \
./hack/run-in-gopath.sh go list \
-f '{{.ImportPath}}{{"\n"}}{{range .Deps}}{{.}}{{"\n"}}{{end}}' \
./vendor/k8s.io/code-generator/cmd/openapi-gen \
| grep --color=never "^$(PRJ_SRC_PATH)/" \
| xargs ./hack/run-in-gopath.sh go list \
-f '{{$$d := .Dir}}{{$$d}}{{"\n"}}{{range .GoFiles}}{{$$d}}/{{.}}{{"\n"}}{{end}}' \
| paste -sd' ' - \
| sed 's/ / \\=,/g' \
| tr '=,' '\n\t' \
| sed "s|$$(pwd -P)/||"; \
) > $@.tmp; \
if ! cmp -s $@.tmp $@; then \
if [[ "$(DBG_CODEGEN)" == 1 ]]; then \
echo "DBG: $(OPENAPI_GEN).mk changed"; \
fi; \
cat $@.tmp > $@; \
rm -f $@.tmp; \
fi
# Include dependency info for the generator tool. This will cause the rule of
# the same name to be considered and if it is updated, make will restart.
sinclude $(META_DIR)/$(OPENAPI_GEN).mk
# How to build the generator tool. The deps for this are defined in
# the $(OPENAPI_GEN).mk, above.
#
# A word on the need to touch: This rule might trigger if, for example, a
# non-Go file was added or deleted from a directory on which this depends.
# This target needs to be reconsidered, but Go realizes it doesn't actually
# have to be rebuilt. In that case, make will forever see the dependency as
# newer than the binary, and try to rebuild it over and over. So we touch it,
# and make is happy.
$(OPENAPI_GEN):
hack/make-rules/build.sh ./vendor/k8s.io/code-generator/cmd/openapi-gen
touch $@
#
# Conversion generation
#
# Any package that wants conversion functions generated must include one or
# more comment-tags in any .go file, in column 0, of the form:
# // +k8s:conversion-gen=<CONVERSION_TARGET_DIR>
#
# The CONVERSION_TARGET_DIR is a project-local path to another directory which
# should be considered when evaluating peer types for conversions. Types which
# are found in the source package (where conversions are being generated)
# but do not have a peer in one of the target directories will not have
# conversions generated.
#
# TODO: it might be better in the long term to make peer-types explicit in the
# IDL.
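# A minimal sketch, assuming a hypothetical versioned package: a doc.go
# naming its internal peer directory would read:
#
#   // +k8s:conversion-gen=k8s.io/kubernetes/pkg/apis/example
#   package v1
#
# so conversions are generated between this package's types and their peers
# under pkg/apis/example.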
# The result file, in each pkg, of conversion generation.
CONVERSION_BASENAME := $(GENERATED_FILE_PREFIX)conversion
CONVERSION_FILENAME := $(CONVERSION_BASENAME).go
# The tool used to generate conversions.
CONVERSION_GEN := $(BIN_DIR)/conversion-gen
# The name of the metadata file listing conversion peers for each pkg.
CONVERSIONS_META := conversions.mk
# All directories that request any form of conversion generation.
ifeq ($(DBG_MAKEFILE),1)
$(warning ***** finding all +k8s:conversion-gen tags)
endif
CONVERSION_DIRS := $(shell \
grep --color=never '^// *+k8s:conversion-gen=' $(ALL_K8S_TAG_FILES) \
| cut -f1 -d: \
| xargs -n1 dirname \
| LC_ALL=C sort -u \
)
CONVERSION_FILES := $(addsuffix /$(CONVERSION_FILENAME), $(CONVERSION_DIRS))
CONVERSION_EXTRA_PEER_DIRS := k8s.io/kubernetes/pkg/apis/core,k8s.io/kubernetes/pkg/apis/core/v1,k8s.io/api/core/v1
# Shell function for reuse in rules.
RUN_GEN_CONVERSION = \
function run_gen_conversion() { \
if [[ -f $(META_DIR)/$(CONVERSION_GEN).todo ]]; then \
pkgs=$$(cat $(META_DIR)/$(CONVERSION_GEN).todo | paste -sd, -); \
if [[ "$(DBG_CODEGEN)" == 1 ]]; then \
echo "DBG: running $(CONVERSION_GEN) for $$pkgs"; \
fi; \
./hack/run-in-gopath.sh $(CONVERSION_GEN) \
--extra-peer-dirs $(CONVERSION_EXTRA_PEER_DIRS) \
--v $(KUBE_VERBOSE) \
--logtostderr \
-i "$$pkgs" \
-O $(CONVERSION_BASENAME) \
"$$@"; \
fi \
}; \
run_gen_conversion
# This rule aggregates the set of files to generate and then generates them all
# in a single run of the tool.
.PHONY: gen_conversion
gen_conversion: $(CONVERSION_FILES) $(CONVERSION_GEN)
$(RUN_GEN_CONVERSION)
.PHONY: verify_gen_conversion
verify_gen_conversion: $(CONVERSION_GEN)
$(RUN_GEN_CONVERSION) --verify-only
# Establish a dependency between the deps file and the dir. Whenever a dir
# changes (files added or removed) the deps file will be considered stale.
#
# This is looser than we really need (e.g. we don't really care about non *.go
# files or even *_test.go files), but this is much easier to represent.
#
# Because we 'sinclude' the deps file, it is considered for rebuilding, as part
# of make's normal evaluation. If it gets rebuilt, make will restart.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
$(foreach dir, $(CONVERSION_DIRS), $(eval \
$(META_DIR)/$(dir)/$(CONVERSIONS_META): $(dir) \
))
# How to rebuild a deps file. When make determines that the deps file is stale
# (see above), it executes this rule, and then re-loads the deps file.
#
# This is looser than we really need (e.g. we don't really care about test
# files), but this is MUCH faster than calling `go list`.
#
# We regenerate the output file in order to satisfy make's "newer than" rules,
# but we only need to rebuild targets if the contents actually changed. That
# is what the .stamp file represents.
$(foreach dir, $(CONVERSION_DIRS), \
$(META_DIR)/$(dir)/$(CONVERSIONS_META)):
TAGS=$$(grep --color=never -h '^// *+k8s:conversion-gen=' $</*.go \
| cut -f2- -d= \
| sed 's|$(PRJ_SRC_PATH)/||' \
| sed 's|^k8s.io/|vendor/k8s.io/|'); \
mkdir -p $(@D); \
echo "conversions__$< := $$(echo $${TAGS})" >$@.tmp; \
if ! cmp -s $@.tmp $@; then \
if [[ "$(DBG_CODEGEN)" == 1 ]]; then \
echo "DBG: conversions changed for $@"; \
fi; \
touch $@.stamp; \
fi; \
mv $@.tmp $@
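# Continuing the hypothetical example: the resulting conversions.mk for
# pkg/apis/example/v1 would contain the project-relative peer list
#   conversions__pkg/apis/example/v1 := pkg/apis/example
# which the deps-only statements below expand via $(conversions__$(dir)).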
# Include any deps files as additional Makefile rules. This triggers make to
# consider the deps files for rebuild, which makes the whole
# dependency-management logic work. 'sinclude' is "silent include" which does
# not fail if the file does not exist.
$(foreach dir, $(CONVERSION_DIRS), $(eval \
sinclude $(META_DIR)/$(dir)/$(CONVERSIONS_META) \
))
# For each dir in CONVERSION_DIRS, this establishes a dependency between the
# output file and the input files that should trigger a rebuild.
#
# The variable value was set in $(GOFILES_META) and included as part of the
# dependency management logic.
#
# Note that this is a deps-only statement, not a full rule (see below). This
# has to be done in a distinct step because wildcards don't work in static
# pattern rules.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
#
# We depend on the $(GOFILES_META).stamp to detect when the set of input files
# has changed. This allows us to detect deleted input files.
$(foreach dir, $(CONVERSION_DIRS), $(eval \
$(dir)/$(CONVERSION_FILENAME): $(META_DIR)/$(dir)/$(GOFILES_META).stamp \
$(gofiles__$(dir)) \
))
# For each dir in CONVERSION_DIRS, for each target in $(conversions__$(dir)),
# this establishes a dependency between the output file and the input files
# that should trigger a rebuild.
#
# The variable value was set in $(GOFILES_META) and included as part of the
# dependency management logic.
#
# Note that this is a deps-only statement, not a full rule (see below). This
# has to be done in a distinct step because wildcards don't work in static
# pattern rules.
#
# The '$(eval)' is needed because this has a different RHS for each LHS, and
# would otherwise produce results that make can't parse.
#
# We depend on the $(GOFILES_META).stamp to detect when the set of input files
# has changed. This allows us to detect deleted input files.
$(foreach dir, $(CONVERSION_DIRS), \
$(foreach tgt, $(conversions__$(dir)), $(eval \
$(dir)/$(CONVERSION_FILENAME): $(META_DIR)/$(tgt)/$(GOFILES_META).stamp \
$(gofiles__$(tgt)) \
)) \
)
# Unilaterally remove any leftovers from previous runs.
$(shell rm -f $(META_DIR)/$(CONVERSION_GEN)*.todo)
# How to regenerate conversion code. This is a little slow to run, so we batch
# it up and trigger the batch from the 'generated_files' target.
$(CONVERSION_FILES): $(CONVERSION_GEN)
mkdir -p $$(dirname $(META_DIR)/$(CONVERSION_GEN))
if [[ "$(DBG_CODEGEN)" == 1 ]]; then \
echo "DBG: conversion needed $(@D): $?"; \
ls -lf --full-time $@ $? || true; \
fi
echo $(PRJ_SRC_PATH)/$(@D) >> $(META_DIR)/$(CONVERSION_GEN).todo
# This calculates the dependencies for the generator tool, so we only rebuild
# it when needed. It is PHONY so that it always runs, but it only updates the
# file if the contents have actually changed. We 'sinclude' this later.
.PHONY: $(META_DIR)/$(CONVERSION_GEN).mk
$(META_DIR)/$(CONVERSION_GEN).mk:
mkdir -p $(@D); \
(echo -n "$(CONVERSION_GEN): "; \
./hack/run-in-gopath.sh go list \
-f '{{.ImportPath}}{{"\n"}}{{range .Deps}}{{.}}{{"\n"}}{{end}}' \
./vendor/k8s.io/code-generator/cmd/conversion-gen \
| grep --color=never "^$(PRJ_SRC_PATH)/" \
| xargs ./hack/run-in-gopath.sh go list \
-f '{{$$d := .Dir}}{{$$d}}{{"\n"}}{{range .GoFiles}}{{$$d}}/{{.}}{{"\n"}}{{end}}' \
| paste -sd' ' - \
| sed 's/ / \\=,/g' \
| tr '=,' '\n\t' \
| sed "s|$$(pwd -P)/||"; \
) > $@.tmp; \
if ! cmp -s $@.tmp $@; then \
if [[ "$(DBG_CODEGEN)" == 1 ]]; then \
echo "DBG: $(CONVERSION_GEN).mk changed"; \
fi; \
cat $@.tmp > $@; \
rm -f $@.tmp; \
fi
# Include dependency info for the generator tool. This will cause the rule of
# the same name to be considered and if it is updated, make will restart.
sinclude $(META_DIR)/$(CONVERSION_GEN).mk
# How to build the generator tool. The deps for this are defined in
# the $(CONVERSION_GEN).mk, above.
#
# A word on the need to touch: This rule might trigger if, for example, a
# non-Go file was added or deleted from a directory on which this depends.
# This target needs to be reconsidered, but Go realizes it doesn't actually
# have to be rebuilt. In that case, make will forever see the dependency as
# newer than the binary, and try to rebuild it over and over. So we touch it,
# and make is happy.
$(CONVERSION_GEN):
hack/make-rules/build.sh ./vendor/k8s.io/code-generator/cmd/conversion-gen
touch $@

103
vendor/k8s.io/kubernetes/build/root/WORKSPACE generated vendored Normal file
View File

@ -0,0 +1,103 @@
http_archive(
name = "io_bazel_rules_go",
sha256 = "441e560e947d8011f064bd7348d86940d6b6131ae7d7c4425a538e8d9f884274",
strip_prefix = "rules_go-c72631a220406c4fae276861ee286aaec82c5af2",
urls = ["https://github.com/bazelbuild/rules_go/archive/c72631a220406c4fae276861ee286aaec82c5af2.tar.gz"],
)
http_archive(
name = "io_kubernetes_build",
sha256 = "89788eb30f10258ae0c6ab8b8625a28cb4c101fba93a8a6725ba227bb778ff27",
strip_prefix = "repo-infra-653485c1a6d554513266d55683da451bd41f7d65",
urls = ["https://github.com/kubernetes/repo-infra/archive/653485c1a6d554513266d55683da451bd41f7d65.tar.gz"],
)
ETCD_VERSION = "3.1.10"
new_http_archive(
name = "com_coreos_etcd",
build_file = "third_party/etcd.BUILD",
sha256 = "2d335f298619c6fb02b1124773a56966e448ad9952b26fea52909da4fe80d2be",
strip_prefix = "etcd-v%s-linux-amd64" % ETCD_VERSION,
urls = ["https://github.com/coreos/etcd/releases/download/v%s/etcd-v%s-linux-amd64.tar.gz" % (ETCD_VERSION, ETCD_VERSION)],
)
# This contains a patch to not prepend ./ to tarfiles produced by pkg_tar.
# When merged upstream, we'll no longer need to use ixdy's fork:
# https://bazel-review.googlesource.com/#/c/10390/
http_archive(
name = "io_bazel",
sha256 = "892a84aa1e7c1f99fb57bb056cb648745c513077252815324579a012d263defb",
strip_prefix = "bazel-df2c687c22bdd7c76f3cdcc85f38fefd02f0b844",
urls = ["https://github.com/ixdy/bazel/archive/df2c687c22bdd7c76f3cdcc85f38fefd02f0b844.tar.gz"],
)
http_archive(
name = "io_bazel_rules_docker",
sha256 = "c440717ee9b1b2f4a1e9bf5622539feb5aef9db83fc1fa1517818f13c041b0be",
strip_prefix = "rules_docker-8bbe2a8abd382641e65ff7127a3700a8530f02ce",
urls = ["https://github.com/bazelbuild/rules_docker/archive/8bbe2a8abd382641e65ff7127a3700a8530f02ce.tar.gz"],
)
load("@io_kubernetes_build//defs:bazel_version.bzl", "check_version")
check_version("0.6.0")
load("@io_bazel_rules_go//go:def.bzl", "go_rules_dependencies", "go_register_toolchains", "go_download_sdk")
load("@io_bazel_rules_docker//docker:docker.bzl", "docker_repositories", "docker_pull")
go_rules_dependencies()
# The upstream version of rules_go is broken in a number of ways. Until it's fixed, explicitly download and use go1.9.2 ourselves.
go_download_sdk(
name = "go_sdk",
sdks = {
"darwin_amd64": ("go1.9.2.darwin-amd64.tar.gz", "73fd5840d55f5566d8db6c0ffdd187577e8ebe650c783f68bd27cbf95bde6743"),
"linux_386": ("go1.9.2.linux-386.tar.gz", "574b2c4b1a248e58ef7d1f825beda15429610a2316d9cbd3096d8d3fa8c0bc1a"),
"linux_amd64": ("go1.9.2.linux-amd64.tar.gz", "de874549d9a8d8d8062be05808509c09a88a248e77ec14eb77453530829ac02b"),
"linux_armv6l": ("go1.9.2.linux-armv6l.tar.gz", "8a6758c8d390e28ef2bcea511f62dcb43056f38c1addc06a8bc996741987e7bb"),
"windows_386": ("go1.9.2.windows-386.zip", "35d3be5d7b97c6d11ffb76c1b19e20a824e427805ee918e82c08a2e5793eda20"),
"windows_amd64": ("go1.9.2.windows-amd64.zip", "682ec3626a9c45b657c2456e35cadad119057408d37f334c6c24d88389c2164c"),
"freebsd_386": ("go1.9.2.freebsd-386.tar.gz", "809dcb0a8457c8d0abf954f20311a1ee353486d0ae3f921e9478189721d37677"),
"freebsd_amd64": ("go1.9.2.freebsd-amd64.tar.gz", "8be985c3e251c8e007fa6ecd0189bc53e65cc519f4464ddf19fa11f7ed251134"),
"linux_arm64": ("go1.9.2.linux-arm64.tar.gz", "0016ac65ad8340c84f51bc11dbb24ee8265b0a4597dbfdf8d91776fc187456fa"),
"linux_ppc64le": ("go1.9.2.linux-ppc64le.tar.gz", "adb440b2b6ae9e448c253a20836d8e8aa4236f731d87717d9c7b241998dc7f9d"),
"linux_s390x": ("go1.9.2.linux-s390x.tar.gz", "a7137b4fbdec126823a12a4b696eeee2f04ec616e9fb8a54654c51d5884c1345"),
},
)
go_register_toolchains(
go_version = "overridden by go_download_sdk",
)
docker_repositories()
http_file(
name = "kubernetes_cni",
sha256 = "f04339a21b8edf76d415e7f17b620e63b8f37a76b2f706671587ab6464411f2d",
url = "https://storage.googleapis.com/kubernetes-release/network-plugins/cni-plugins-amd64-v0.6.0.tgz",
)
docker_pull(
name = "debian-iptables-amd64",
digest = "sha256:a3b936c0fb98a934eecd2cfb91f73658d402b29116084e778ce9ddb68e55383e",
registry = "gcr.io",
repository = "google-containers/debian-iptables-amd64",
tag = "v10", # ignored, but kept here for documentation
)
docker_pull(
name = "debian-hyperkube-base-amd64",
digest = "sha256:fc1b461367730660ac5a40c1eb2d1b23221829acf8a892981c12361383b3742b",
registry = "gcr.io",
repository = "google-containers/debian-hyperkube-base-amd64",
tag = "0.8", # ignored, but kept here for documentation
)
docker_pull(
name = "official_busybox",
digest = "sha256:be3c11fdba7cfe299214e46edc642e09514dbb9bbefcd0d3836c05a1e0cd0642",
registry = "index.docker.io",
repository = "library/busybox",
tag = "latest", # ignored, but kept here for documentation
)

13
vendor/k8s.io/kubernetes/build/rpms/10-kubeadm.conf generated vendored Normal file
View File

@ -0,0 +1,13 @@
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
# Value should match Docker daemon settings.
# Defaults are "cgroupfs" for Debian/Ubuntu/OpenSUSE and "systemd" for Fedora/CentOS/RHEL
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS

62
vendor/k8s.io/kubernetes/build/rpms/BUILD generated vendored Normal file
View File

@ -0,0 +1,62 @@
package(default_visibility = ["//visibility:public"])
load("@bazel_tools//tools/build_defs/pkg:rpm.bzl", "pkg_rpm")
pkg_rpm(
name = "kubectl",
architecture = "x86_64",
changelog = "//:CHANGELOG.md",
data = [
"//cmd/kubectl",
],
spec_file = "kubectl.spec",
version_file = "//build:os_package_version",
)
pkg_rpm(
name = "kubelet",
architecture = "x86_64",
changelog = "//:CHANGELOG.md",
data = [
"kubelet.service",
"//cmd/kubelet",
],
spec_file = "kubelet.spec",
version_file = "//build:os_package_version",
)
pkg_rpm(
name = "kubeadm",
architecture = "x86_64",
changelog = "//:CHANGELOG.md",
data = [
"10-kubeadm.conf",
"//cmd/kubeadm",
],
spec_file = "kubeadm.spec",
version_file = "//build:os_package_version",
)
pkg_rpm(
name = "kubernetes-cni",
architecture = "x86_64",
changelog = "//:CHANGELOG.md",
data = [
"@kubernetes_cni//file",
],
spec_file = "kubernetes-cni.spec",
version_file = "//build:cni_package_version",
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)

24
vendor/k8s.io/kubernetes/build/rpms/kubeadm.spec generated vendored Normal file
View File

@ -0,0 +1,24 @@
Name: kubeadm
Version: OVERRIDE_THIS
Release: 00
License: ASL 2.0
Summary: Container Cluster Manager - Kubernetes Cluster Bootstrapping Tool
Requires: kubelet >= 1.8.0
Requires: kubectl >= 1.8.0
Requires: kubernetes-cni >= 0.5.1
URL: https://kubernetes.io
%description
Command-line utility for deploying a Kubernetes cluster.
%install
install -m 755 -d %{buildroot}%{_bindir}
install -m 755 -d %{buildroot}%{_sysconfdir}/systemd/system/
install -m 755 -d %{buildroot}%{_sysconfdir}/systemd/system/kubelet.service.d/
install -p -m 755 -t %{buildroot}%{_bindir} kubeadm
install -p -m 755 -t %{buildroot}%{_sysconfdir}/systemd/system/kubelet.service.d/ 10-kubeadm.conf
%files
%{_bindir}/kubeadm
%{_sysconfdir}/systemd/system/kubelet.service.d/10-kubeadm.conf

18
vendor/k8s.io/kubernetes/build/rpms/kubectl.spec generated vendored Normal file
View File

@ -0,0 +1,18 @@
Name: kubectl
Version: OVERRIDE_THIS
Release: 00
License: ASL 2.0
Summary: Container Cluster Manager - Kubernetes client tools
URL: https://kubernetes.io
%description
Command-line utility for interacting with a Kubernetes cluster.
%install
install -m 755 -d %{buildroot}%{_bindir}
install -p -m 755 -t %{buildroot}%{_bindir} kubectl
%files
%{_bindir}/kubectl

12
vendor/k8s.io/kubernetes/build/rpms/kubelet.service generated vendored Normal file
View File

@ -0,0 +1,12 @@
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target

31
vendor/k8s.io/kubernetes/build/rpms/kubelet.spec generated vendored Normal file
View File

@ -0,0 +1,31 @@
Name: kubelet
Version: OVERRIDE_THIS
Release: 00
License: ASL 2.0
Summary: Container Cluster Manager - Kubernetes Node Agent
URL: https://kubernetes.io
Requires: iptables >= 1.4.21
Requires: kubernetes-cni >= 0.5.1
Requires: socat
Requires: util-linux
Requires: ethtool
Requires: iproute
Requires: ebtables
%description
The node agent of Kubernetes, the container cluster manager.
%install
install -m 755 -d %{buildroot}%{_bindir}
install -m 755 -d %{buildroot}%{_sysconfdir}/systemd/system/
install -m 755 -d %{buildroot}%{_sysconfdir}/kubernetes/manifests/
install -p -m 755 -t %{buildroot}%{_bindir} kubelet
install -p -m 755 -t %{buildroot}%{_sysconfdir}/systemd/system/ kubelet.service
%files
%{_bindir}/kubelet
%{_sysconfdir}/systemd/system/kubelet.service
%{_sysconfdir}/kubernetes/manifests/

24
vendor/k8s.io/kubernetes/build/rpms/kubernetes-cni.spec generated vendored Normal file
View File

@ -0,0 +1,24 @@
Name: kubernetes-cni
Version: OVERRIDE_THIS
Release: 00
License: ASL 2.0
Summary: Container Cluster Manager - CNI plugins
URL: https://kubernetes.io
%description
Binaries required to provision container networking.
%prep
mkdir -p ./bin
tar -C ./bin -xz -f cni-plugins-amd64-v0.6.0.tgz
%install
install -m 755 -d %{buildroot}%{_sysconfdir}/cni/net.d/
install -m 755 -d %{buildroot}/opt/cni
mv bin/ %{buildroot}/opt/cni/
%files
/opt/cni
%{_sysconfdir}/cni/net.d/

43
vendor/k8s.io/kubernetes/build/run.sh generated vendored Executable file
View File

@ -0,0 +1,43 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Run a command in the docker build container. Typically this will be one of
# the commands in `hack/`. When running in the build container the user is sure
# to have a consistent reproducible build environment.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "$KUBE_ROOT/build/common.sh"
KUBE_RUN_COPY_OUTPUT="${KUBE_RUN_COPY_OUTPUT:-y}"
kube::build::verify_prereqs
kube::build::build_image
if [[ ${KUBE_RUN_COPY_OUTPUT} =~ ^[yY]$ ]]; then
kube::log::status "Output from this container will be rsynced out upon completion. Set KUBE_RUN_COPY_OUTPUT=n to disable."
else
kube::log::status "Output from this container will NOT be rsynced out upon completion. Set KUBE_RUN_COPY_OUTPUT=y to enable."
fi
kube::build::run_build_command "$@"
if [[ ${KUBE_RUN_COPY_OUTPUT} =~ ^[yY]$ ]]; then
kube::build::copy_output
fi

28
vendor/k8s.io/kubernetes/build/shell.sh generated vendored Executable file
View File

@ -0,0 +1,28 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Run a bash script in the Docker build image.
#
# This container will have a snapshot of the current sources.
set -o errexit
set -o nounset
set -o pipefail
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
source "${KUBE_ROOT}/build/common.sh"
KUBE_RUN_COPY_OUTPUT="${KUBE_RUN_COPY_OUTPUT:-n}" "${KUBE_ROOT}/build/run.sh" bash "$@"

32
vendor/k8s.io/kubernetes/build/util.sh generated vendored Normal file
View File

@ -0,0 +1,32 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Common utility functions for build scripts
KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
function kube::release::semantic_version() {
# This takes:
# Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-alpha.0.2328+3c0a05de4a38e3", GitCommit:"3c0a05de4a38e355d147dbfb4d85bad6d2d73bb9", GitTreeState:"clean"}
# and spits back the GitVersion piece in a way that is somewhat
# resilient to the other fields changing (we hope)
${KUBE_ROOT}/cluster/kubectl.sh version --client | sed "s/, */\\
/g" | egrep "^GitVersion:" | cut -f2 -d: | cut -f2 -d\"
}
function kube::release::semantic_image_tag_version() {
printf "$(kube::release::semantic_version)" | tr + _
}
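# Illustration with the hypothetical GitVersion shown above: semantic_version
# prints "v1.1.0-alpha.0.2328+3c0a05de4a38e3", and semantic_image_tag_version
# rewrites it to "v1.1.0-alpha.0.2328_3c0a05de4a38e3", since "+" is not valid
# in a docker image tag.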

349
vendor/k8s.io/kubernetes/build/visible_to/BUILD generated vendored Normal file
View File

@ -0,0 +1,349 @@
# Package groups defined for use in kubernetes visibility rules.
#
# See associated README.md for explanation.
#
# Style suggestions:
#
# - Sort package group definitions by name.
#
# - Prefer obvious package group names.
#
# E.g "pkg_kubectl_cmd_util_CONSUMERS" names a group
# of packages allowed to depend on (consume) the
# //pkg/kubectl/cmd/util package.
#
#
# - A group name ending in _BAD wants to be deleted.
#
# Such a group wants to contract, rather than expand.
# It likely exists to permit a legacy unintentional
# dependency that requires more work to remove.
#
# - Prefer defining new groups to expanding groups.
#
# The former permits tight targeting, the latter can
# allow unnecessary visibility and thus bad deps.
#
package_group(
name = "COMMON_generators",
packages = [
"//cmd/gendocs",
"//cmd/genman",
"//cmd/genyaml",
],
)
package_group(
name = "COMMON_testing",
packages = [
"//hack",
"//hack/lib",
"//hack/make-rules",
"//test/e2e",
"//test/e2e/framework",
"//test/e2e/kubectl",
"//test/e2e/workload",
"//test/integration/etcd",
"//test/integration/framework",
"//test/integration/kubectl",
],
)
package_group(
name = "cluster",
packages = [
"//cluster/...",
],
)
package_group(
name = "KUBEADM_BAD",
packages = [
"//cmd/kubeadm/app/cmd",
],
)
package_group(
name = "cmd_kubectl_CONSUMERS",
packages = [
"//cmd",
],
)
package_group(
name = "cmd_kubectl_app_CONSUMERS",
packages = [
"//cmd/kubectl",
],
)
package_group(
name = "pkg_kubectl_CONSUMERS_BAD",
includes = [
":KUBEADM_BAD",
],
packages = [
"//cmd/clicheck",
"//cmd/hyperkube",
"//pkg",
],
)
package_group(
name = "pkg_kubectl_CONSUMERS",
includes = [
":COMMON_generators",
":pkg_kubectl_CONSUMERS_BAD",
],
packages = [
"//cmd/kubectl",
"//cmd/kubectl/app",
"//pkg/kubectl/cmd",
"//pkg/kubectl/cmd/auth",
"//pkg/kubectl/cmd/config",
"//pkg/kubectl/cmd/rollout",
"//pkg/kubectl/cmd/set",
"//pkg/kubectl/cmd/testing",
"//pkg/kubectl/cmd/util",
"//pkg/kubectl/cmd/util/editor",
],
)
package_group(
name = "pkg_kubectl_cmd_CONSUMERS_BAD",
packages = [
"//cmd/clicheck",
"//cmd/hyperkube",
],
)
package_group(
name = "pkg_kubectl_cmd_CONSUMERS",
includes = [
":COMMON_generators",
":pkg_kubectl_cmd_CONSUMERS_BAD",
],
packages = [
"//cmd/kubectl",
"//cmd/kubectl/app",
"//pkg/kubectl",
"//pkg/kubectl/cmd",
],
)
package_group(
name = "pkg_kubectl_cmd_auth_CONSUMERS",
packages = [
"//pkg/kubectl/cmd",
"//pkg/kubectl/cmd/rollout",
],
)
package_group(
name = "pkg_kubectl_cmd_config_CONSUMERS",
packages = [
"//pkg/kubectl/cmd",
],
)
package_group(
name = "pkg_kubectl_cmd_rollout_CONSUMERS",
packages = [
"//pkg/kubectl/cmd",
],
)
package_group(
name = "pkg_kubectl_cmd_set_CONSUMERS",
packages = [
"//pkg/kubectl/cmd",
"//pkg/kubectl/cmd/rollout",
],
)
package_group(
name = "pkg_kubectl_cmd_templates_CONSUMERS",
includes = [
":COMMON_generators",
":COMMON_testing",
],
packages = [
"//cmd/kubectl",
"//cmd/kubectl/app",
"//pkg/kubectl/cmd",
"//pkg/kubectl/cmd/auth",
"//pkg/kubectl/cmd/config",
"//pkg/kubectl/cmd/resource",
"//pkg/kubectl/cmd/rollout",
"//pkg/kubectl/cmd/set",
"//pkg/kubectl/cmd/templates",
"//pkg/kubectl/cmd/util",
"//pkg/kubectl/cmd/util/sanity",
],
)
package_group(
name = "pkg_kubectl_cmd_testdata_edit_CONSUMERS",
packages = [
"//pkg/kubectl/cmd",
],
)
package_group(
name = "pkg_kubectl_cmd_testing_CONSUMERS",
packages = [
"//pkg/kubectl/cmd",
"//pkg/kubectl/cmd/auth",
"//pkg/kubectl/cmd/resource",
"//pkg/kubectl/cmd/set",
"//pkg/kubectl/explain",
],
)
package_group(
name = "pkg_kubectl_cmd_util_CONSUMERS_BAD",
includes = [
":KUBEADM_BAD",
],
packages = [
"//cmd/clicheck",
"//cmd/hyperkube",
"//cmd/kube-proxy/app",
"//plugin/cmd/kube-scheduler/app",
],
)
package_group(
name = "pkg_kubectl_cmd_util_CONSUMERS",
includes = [
":COMMON_generators",
":COMMON_testing",
":pkg_kubectl_cmd_util_CONSUMERS_BAD",
],
packages = [
"//cmd/kubectl",
"//cmd/kubectl/app",
"//pkg/kubectl/cmd",
"//pkg/kubectl/cmd/auth",
"//pkg/kubectl/cmd/config",
"//pkg/kubectl/cmd/resource",
"//pkg/kubectl/cmd/rollout",
"//pkg/kubectl/cmd/set",
"//pkg/kubectl/cmd/testing",
"//pkg/kubectl/cmd/util",
"//pkg/kubectl/cmd/util/editor",
],
)
package_group(
name = "pkg_kubectl_cmd_util_editor_CONSUMERS",
packages = [
"//pkg/kubectl/cmd",
"//pkg/kubectl/cmd/util",
],
)
package_group(
name = "pkg_kubectl_cmd_util_jsonmerge_CONSUMERS",
packages = [
"//pkg/kubectl/cmd",
"//pkg/kubectl/cmd/util",
],
)
package_group(
name = "pkg_kubectl_cmd_util_sanity_CONSUMERS",
packages = [
"//cmd/clicheck",
"//pkg/kubectl/cmd/util",
],
)
package_group(
name = "pkg_kubectl_metricsutil_CONSUMERS_BAD",
packages = [
"//cmd/clicheck",
"//cmd/hyperkube",
],
)
package_group(
name = "pkg_kubectl_metricsutil_CONSUMERS",
includes = [
":COMMON_generators",
":pkg_kubectl_metricsutil_CONSUMERS_BAD",
],
packages = [
"//cmd/kubectl",
"//cmd/kubectl/app",
"//pkg/kubectl",
"//pkg/kubectl/cmd",
],
)
package_group(
name = "pkg_kubectl_resource_CONSUMERS",
includes = [
":COMMON_generators",
":COMMON_testing",
],
packages = [
"//cmd/kubectl",
"//cmd/kubectl/app",
"//pkg/kubectl",
"//pkg/kubectl/cmd",
"//pkg/kubectl/cmd/auth",
"//pkg/kubectl/cmd/config",
"//pkg/kubectl/cmd/resource",
"//pkg/kubectl/cmd/rollout",
"//pkg/kubectl/cmd/set",
"//pkg/kubectl/cmd/testing",
"//pkg/kubectl/cmd/util",
"//pkg/kubectl/cmd/util/editor",
],
)
package_group(
name = "pkg_kubectl_testing_CONSUMERS",
packages = [
"//pkg/kubectl",
"//pkg/printers/internalversion",
],
)
package_group(
name = "pkg_kubectl_util_CONSUMERS",
packages = [
"//pkg/kubectl",
"//pkg/kubectl/cmd",
"//pkg/kubectl/proxy",
],
)
package_group(
name = "pkg_kubectl_validation_CONSUMERS",
packages = [
"//pkg/kubectl",
"//pkg/kubectl/cmd/testing",
"//pkg/kubectl/cmd/util",
"//pkg/kubectl/resource",
],
)
# Added by ./hack/verify-bazel.sh; should be excluded from
# that script since it makes no sense here.
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
)
# Added by ./hack/verify-bazel.sh; should be excluded from
# that script since it makes no sense here.
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

24
vendor/k8s.io/kubernetes/build/visible_to/OWNERS generated vendored Normal file
View File

@ -0,0 +1,24 @@
reviewers:
- brendandburns
- dchen1107
- ixdy
- jbeda
- lavalamp
- mikedanese
- monopole
- pwittrock
- smarterclayton
- thockin
approvers:
- bgrant0607
- brendandburns
- dchen1107
- ixdy
- jbeda
- lavalamp
- mikedanese
- monopole
- pwittrock
- smarterclayton
- thockin
- wojtek-t

184
vendor/k8s.io/kubernetes/build/visible_to/README.md generated vendored Normal file
View File

@ -0,0 +1,184 @@
# Package Groups Used in Kubernetes Visibility Rules
## Background
`BUILD` rules define dependencies, answering the question:
on what packages does _foo_ depend?
The `BUILD` file in this package allows one to define
_allowed_ reverse dependencies, answering the question:
given a package _foo_, what other specific packages are
allowed to depend on it?
This is done via visibility rules.
Visibility rules discourage unintended, spurious
dependencies that blur code boundaries, slow CI/CD queues,
and generally inhibit progress.
#### Facts
* A package is any directory that contains a `BUILD` file.
* A `package_group` is a `BUILD` file rule that defines a named
set of packages for use in other rules, e.g., given
```
package_group(
name = "database_CONSUMERS",
packages = [
"//foo/dbinitializer",
"//foo/backend/...", # `backend` and everything below it
],
)
```
one can specify the following visibility rule in any `BUILD` rule:
```
visibility = [ "//build/visible_to:database_CONSUMERS" ],
```
* A visibility rule takes a list of package groups as its
argument - or one of the pre-defined groups
`//visibility:private` or `//visibility:public`.
* If no visibility is explicitly defined, a package is
_private_ by default (see the sketch after this list for a
way to override that default).
* Violations in visibility cause `make bazel-build` to fail,
which in turn causes the submit queue to fail; that is the
enforcement mechanism.
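For example, a `BUILD` file can widen that private default for every target it defines with a top-of-file `package` declaration; a minimal sketch, reusing the hypothetical group from above:
```
package(default_visibility = ["//build/visible_to:database_CONSUMERS"])
```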
#### Why define all package groups meant for visibility here (in one file)?
* Ease discovery of appropriate groups for use in a rule.
* Ease reuse (inclusions) of commonly used groups.
* Consistent style:
* easy to read `//build/visible_to:math_library_CONSUMERS` rules,
* call out bad dependencies for eventual removal.
* Make it more obvious in code reviews when visibility is being
modified.
* One set of `OWNERS` to manage visibility.
The alternative is to use special [package literals] directly
in visibility rules, e.g.
```
visibility = [
"//foo/dbinitializer:__pkg__",
"//foo/backend:__subpackages__",
],
```
The difference in style is similar to that between using a
named constant like `MAX_NODES` and a bare literal like `12`.
Names are preferable to literals for intent
documentation, search, changing one place rather than _n_,
associating usage in distant code blocks, etc.
## Rule Examples
#### Nobody outside this package can depend on me.
```
visibility = ["//visibility:private"],
```
Since this is the default, the only reason to state it
explicitly is to override, for some specific target, a
broader whole-package visibility rule.
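A minimal sketch of that override pattern (names are hypothetical; assumes the rules_go `go_library` rule):
```
load("@io_bazel_rules_go//go:def.bzl", "go_library")

package(default_visibility = ["//build/visible_to:pkg_foo_CONSUMERS"])

# Visible only within this package, despite the broader package default.
go_library(
    name = "helpers",
    srcs = ["helpers.go"],
    visibility = ["//visibility:private"],
)
```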
#### Anyone can depend on me (eschew this).
```
visibility = ["//visibility:public"],
```
#### Only some servers can depend on me.
Appropriate for, say, backend storage utilities.
```
visibility = ["//visible_to:server_foo","//visible_to:server_bar"].
```
#### Both some client and some server can see me.
Appropriate for shared API definition files and generated code:
```
visibility = ["//visible_to:client_foo,//visible_to:server_foo"],
```
## Handy commands
#### Quickly check for visibility violations
```
bazel build --check_visibility --nobuild \
//cmd/... //pkg/... //plugin/... \
//third_party/... //examples/... //test/... //vendor/k8s.io/...
```
#### Who depends on target _q_?
To create a seed set for a visibility group, one can ask which
packages currently depend on (and must therefore be able to see) a
given Go library target. It's a time-consuming query.
```
q=//pkg/kubectl/cmd:go_default_library
bazel query "rdeps(...,${q})" | \
grep go_default_library | \
sed 's/\(.*\):go_default_library/ "\1",/'
```
#### What targets below _p_ are visible to anyone?
A means to look for things one missed when locking down _p_.
```
p=//pkg/kubectl/cmd
bazel query "visible(...,${p}/...)"
```
#### What packages below _p_ may target _q_ depend on without violating visibility rules?
A means to pinpoint unexpected visibility.
```
p=//pkg/kubectl
q=//cmd/kubelet:kubelet
bazel query "visible(${q},${p}/...)" | more
```
#### What packages does target _q_ need?
```
q=//cmd/kubectl:kubectl
bazel query "buildfiles(deps($q))" | \
grep -v @bazel_tools | \
grep -v @io_bazel_rules | \
grep -v @io_kubernetes_build | \
grep -v @local_config | \
grep -v @local_jdk | \
grep -v //build/visible_to: | \
sed 's/:BUILD//' | \
sort | uniq > ~/KUBECTL_BUILD.txt
```
or try
```
bazel query --nohost_deps --noimplicit_deps \
"kind('source file', deps($q))" | wc -
```
#### How does kubectl depend on pkg/util/parsers?
```
bazel query "somepath(cmd/kubectl:kubectl, pkg/util/parsers:go_default_library)"
```
[package literals]: https://bazel.build/versions/master/docs/be/common-definitions.html#common.visibility