Mirror of https://github.com/ceph/ceph-csi.git, synced 2025-06-13 18:43:34 +00:00

Commit: vendor files

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/README.md (generated, vendored, new file, 100 lines)
@@ -0,0 +1,100 @@
# Kubernetes-master

[Kubernetes](http://kubernetes.io/) is an open source system for managing
application containers across a cluster of hosts. The Kubernetes project was
started by Google in 2014, combining the experience of running production
workloads with best practices from the community.

The Kubernetes project defines some new terms that may be unfamiliar to users
or operators. For more information please refer to the concept guide in the
[getting started guide](https://kubernetes.io/docs/home/).

This charm is an encapsulation of the Kubernetes master processes and the
operations needed to run them on any cloud for the entire lifecycle of the
cluster.

This charm is built from other charm layers using the Juju reactive framework.
The other layers each focus on a specific subset of operations, making this
layer specific to the operations of the Kubernetes master processes.

# Deployment

This charm is not fully functional when deployed by itself. It requires other
charms to model a complete Kubernetes cluster. A Kubernetes cluster needs a
distributed key value store such as [Etcd](https://coreos.com/etcd/) and the
kubernetes-worker charm which delivers the Kubernetes node services. A cluster
requires a Software Defined Network (SDN) and Transport Layer Security (TLS) so
the components in a cluster communicate securely.

Please take a look at the [Canonical Distribution of Kubernetes](https://jujucharms.com/canonical-kubernetes/)
or the [Kubernetes core](https://jujucharms.com/kubernetes-core/) bundles for
examples of complete models of Kubernetes clusters.
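Either reference bundle can be deployed with a single command; a minimal sketch using the bundle names linked above:

    # Deploy the full Canonical Distribution of Kubernetes
    juju deploy canonical-kubernetes

    # Or the smaller kubernetes-core bundle
    juju deploy kubernetes-core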

# Resources

The kubernetes-master charm takes advantage of the [Juju Resources](https://jujucharms.com/docs/2.0/developer-resources)
feature to deliver the Kubernetes software.

In deployments on public clouds the Charm Store provides the resource to the
charm automatically with no user intervention. Some environments with strict
firewall rules may not be able to contact the Charm Store. In these network
restricted environments the resource can be uploaded to the model by the Juju
operator.
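In such network restricted environments the snap resources declared in `metadata.yaml` can be attached by hand; a hedged sketch (the local file names are assumptions, and depending on the Juju release the sub-command is `juju attach` or `juju attach-resource`):

    juju attach-resource kubernetes-master kubectl=./kubectl.snap
    juju attach-resource kubernetes-master kube-apiserver=./kube-apiserver.snap
    juju attach-resource kubernetes-master kube-controller-manager=./kube-controller-manager.snap
    juju attach-resource kubernetes-master kube-scheduler=./kube-scheduler.snap
    juju attach-resource kubernetes-master cdk-addons=./cdk-addons.snap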

# Configuration

This charm supports some configuration options to set up a Kubernetes cluster
that works in your environment:

#### dns_domain

The domain name to use for the Kubernetes cluster for DNS.

#### enable-dashboard-addons

Enables the installation of Kubernetes dashboard, Heapster, Grafana, and
InfluxDB.

#### enable-rbac

Enable RBAC and Node authorisation.
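Options can be inspected and changed at any time with `juju config`; for example (the values shown are only illustrations of the options above):

    # Show the current DNS domain
    juju config kubernetes-master dns_domain

    # Change one or more options on the deployed application
    juju config kubernetes-master dns_domain=cluster.local enable-dashboard-addons=false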

# DNS for the cluster

The DNS add-on allows pods to have DNS names in addition to IP addresses.
The Kubernetes cluster DNS server (based on the SkyDNS library) supports
forward lookups (A records), service lookups (SRV records) and reverse IP
address lookups (PTR records). More information about the DNS can be obtained
from the [Kubernetes DNS admin guide](http://kubernetes.io/docs/admin/dns/).
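Cluster DNS can be spot-checked from any pod once the add-on is running; a minimal sketch, assuming a test pod named `busybox` with `nslookup` available and the default `cluster.local` domain:

    kubectl exec -it busybox -- nslookup kubernetes.default.svc.cluster.local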

# Actions

The kubernetes-master charm models a few one-time operations called
[Juju actions](https://jujucharms.com/docs/stable/actions) that can be run by
Juju users.

#### create-rbd-pv

This action creates a RADOS Block Device (RBD) in Ceph and defines a Persistent
Volume in Kubernetes so the containers can use durable storage. This action
requires a relation to the ceph-mon charm before it can create the volume.

#### restart

This action restarts the master processes `kube-apiserver`,
`kube-controller-manager`, and `kube-scheduler` when the user needs a restart.
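Actions are invoked on a unit with `juju run-action`; a hedged sketch, assuming the unit `kubernetes-master/0` and, for `create-rbd-pv`, an established ceph-mon relation (parameter names come from `actions.yaml`):

    # Create a 50 MB RBD-backed PersistentVolume named "test-pv"
    juju run-action kubernetes-master/0 create-rbd-pv name=test-pv size=50

    # Restart the master services
    juju run-action kubernetes-master/0 restart

    # Inspect the result of a queued action by its id
    juju show-action-output <action-id>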

# More information

- [Kubernetes github project](https://github.com/kubernetes/kubernetes)
- [Kubernetes issue tracker](https://github.com/kubernetes/kubernetes/issues)
- [Kubernetes documentation](http://kubernetes.io/docs/)
- [Kubernetes releases](https://github.com/kubernetes/kubernetes/releases)

# Contact

The kubernetes-master charm is free and open source operations code created
by the containers team at Canonical.

Canonical also offers enterprise support and customization services. Please
refer to the [Kubernetes product page](https://www.ubuntu.com/cloud/kubernetes)
for more details.

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/actions.yaml (generated, vendored, new file, 50 lines)
@@ -0,0 +1,50 @@
restart:
  description: Restart the Kubernetes master services on demand.
create-rbd-pv:
  description: Create a RADOS Block Device (RBD) volume in Ceph and create a PersistentVolume.
  params:
    name:
      type: string
      description: Name the persistent volume.
      minLength: 1
    size:
      type: integer
      description: Size in MB of the RBD volume.
      minimum: 1
    mode:
      type: string
      default: ReadWriteOnce
      description: Access mode for the persistent volume.
    filesystem:
      type: string
      default: xfs
      description: File system type to format the volume.
    skip-size-check:
      type: boolean
      default: false
      description: Allow creation of overprovisioned RBD.
  required:
    - name
    - size
namespace-list:
  description: List existing k8s namespaces
namespace-create:
  description: Create new namespace
  params:
    name:
      type: string
      description: Namespace name, e.g. staging
      minLength: 2
  required:
    - name
namespace-delete:
  description: Delete namespace
  params:
    name:
      type: string
      description: Namespace name, e.g. staging
      minLength: 2
  required:
    - name
upgrade:
  description: Upgrade the kubernetes snaps
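The namespace actions above each take a single `name` parameter; a hedged usage sketch against an assumed unit `kubernetes-master/0`:

    juju run-action kubernetes-master/0 namespace-create name=staging
    juju run-action kubernetes-master/0 namespace-list
    juju run-action kubernetes-master/0 namespace-delete name=staging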

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/actions/create-rbd-pv (generated, vendored, new executable file, 300 lines)
@@ -0,0 +1,300 @@
#!/usr/bin/env python3

# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from charmhelpers.core.templating import render
from charms.reactive import is_state
from charmhelpers.core.hookenv import action_get
from charmhelpers.core.hookenv import action_set
from charmhelpers.core.hookenv import action_fail
from subprocess import check_call
from subprocess import check_output
from subprocess import CalledProcessError
from tempfile import TemporaryDirectory
import json
import re
import os
import sys


os.environ['PATH'] += os.pathsep + os.path.join(os.sep, 'snap', 'bin')


def main():
    ''' Control logic to enlist Ceph RBD volumes as PersistentVolumes in
    Kubernetes. This will invoke the validation steps, and only execute if
    this script thinks the environment is 'sane' enough to provision volumes.
    '''

    # validate relationship pre-reqs before additional steps can be taken
    if not validate_relation():
        print('Failed ceph relationship check')
        action_fail('Failed ceph relationship check')
        return

    if not is_ceph_healthy():
        print('Ceph was not healthy.')
        action_fail('Ceph was not healthy.')
        return

    context = {}

    context['RBD_NAME'] = action_get_or_default('name').strip()
    context['RBD_SIZE'] = action_get_or_default('size')
    context['RBD_FS'] = action_get_or_default('filesystem').strip()
    context['PV_MODE'] = action_get_or_default('mode').strip()

    # Ensure we're not exceeding available space in the pool
    if not validate_space(context['RBD_SIZE']):
        return

    # Ensure our parameters match
    param_validation = validate_parameters(context['RBD_NAME'],
                                           context['RBD_FS'],
                                           context['PV_MODE'])
    if not param_validation == 0:
        return

    if not validate_unique_volume_name(context['RBD_NAME']):
        action_fail('Volume name collision detected. Volume creation aborted.')
        return

    context['monitors'] = get_monitors()

    # Invoke creation and format the mount device
    create_rbd_volume(context['RBD_NAME'],
                      context['RBD_SIZE'],
                      context['RBD_FS'])

    # Create a temporary workspace to render our persistentVolume template, and
    # enlist the RBD-based PV we've just created
    with TemporaryDirectory() as active_working_path:
        temp_template = '{}/pv.yaml'.format(active_working_path)
        render('rbd-persistent-volume.yaml', temp_template, context)

        cmd = ['kubectl', 'create', '-f', temp_template]
        debug_command(cmd)
        check_call(cmd)


def action_get_or_default(key):
    ''' Convenience method to manage defaults since actions don't appear to
    properly support defaults '''

    value = action_get(key)
    if value:
        return value
    elif key == 'filesystem':
        return 'xfs'
    elif key == 'size':
        return 0
    elif key == 'mode':
        return "ReadWriteOnce"
    elif key == 'skip-size-check':
        return False
    else:
        return ''


def create_rbd_volume(name, size, filesystem):
    ''' Create the RBD volume in Ceph. Then mount it locally to format it for
    the requested filesystem.

    :param name - The name of the RBD volume
    :param size - The size in MB of the volume
    :param filesystem - The type of filesystem to format the block device
    '''

    # Create the rbd volume
    # $ rbd create foo --size 50 --image-feature layering
    command = ['rbd', 'create', '--size', '{}'.format(size), '--image-feature',
               'layering', name]
    debug_command(command)
    check_call(command)

    # Lift the validation sequence to determine if we actually created the
    # rbd volume
    if validate_unique_volume_name(name):
        # we failed to create the RBD volume. whoops
        action_fail('RBD Volume not listed after creation.')
        print('Ceph RBD volume {} not found in rbd list'.format(name))
        # hack, needs love if we're killing the process thread this deep in
        # the call stack.
        sys.exit(0)

    mount = ['rbd', 'map', name]
    debug_command(mount)
    device_path = check_output(mount).strip()

    try:
        format_command = ['mkfs.{}'.format(filesystem), device_path]
        debug_command(format_command)
        check_call(format_command)
        unmount = ['rbd', 'unmap', name]
        debug_command(unmount)
        check_call(unmount)
    except CalledProcessError:
        print('Failed to format filesystem and unmount. RBD created but not'
              ' enlisted.')
        action_fail('Failed to format filesystem and unmount.'
                    ' RBD created but not enlisted.')


def is_ceph_healthy():
    ''' Probe the remote ceph cluster for health status '''
    command = ['ceph', 'health']
    debug_command(command)
    health_output = check_output(command)
    if b'HEALTH_OK' in health_output:
        return True
    else:
        return False


def get_monitors():
    ''' Parse the monitors out of /etc/ceph/ceph.conf '''
    found_hosts = []
    # This is kind of hacky. We should be piping this in from juju relations
    with open('/etc/ceph/ceph.conf', 'r') as ceph_conf:
        for line in ceph_conf.readlines():
            if 'mon host' in line:
                # strip out the key definition
                hosts = line.lstrip('mon host = ').split(' ')
                for host in hosts:
                    found_hosts.append(host)
    return found_hosts


def get_available_space():
    ''' Determine the space available in the RBD pool. Throw an exception if
    the RBD pool ('rbd') isn't found. '''
    command = 'ceph df -f json'.split()
    debug_command(command)
    out = check_output(command).decode('utf-8')
    data = json.loads(out)
    for pool in data['pools']:
        if pool['name'] == 'rbd':
            return int(pool['stats']['max_avail'] / (1024 * 1024))
    raise UnknownAvailableSpaceException('Unable to determine available space.')  # noqa


def validate_unique_volume_name(name):
    ''' Poll the CEPH-MON services to determine if we have a unique rbd volume
    name to use. If there are naming collisions, block the request for volume
    provisioning.

    :param name - The name of the RBD volume
    '''

    command = ['rbd', 'list']
    debug_command(command)
    raw_out = check_output(command)

    # Split the output on newlines
    # output spec:
    # $ rbd list
    # foo
    # foobar
    volume_list = raw_out.decode('utf-8').splitlines()

    for volume in volume_list:
        if volume.strip() == name:
            return False

    return True


def validate_relation():
    ''' Determine if we are related to ceph. If we are not, we should
    note this in the action output and fail this action run. We are relying
    on specific files in specific paths to be placed in order for this function
    to work. This method verifies those files are placed. '''

    # TODO: Validate that the ceph-common package is installed
    if not is_state('ceph-storage.available'):
        message = 'Failed to detect connected ceph-mon'
        print(message)
        action_set({'pre-req.ceph-relation': message})
        return False

    if not os.path.isfile('/etc/ceph/ceph.conf'):
        message = 'No Ceph configuration found in /etc/ceph/ceph.conf'
        print(message)
        action_set({'pre-req.ceph-configuration': message})
        return False

    # TODO: Validate ceph key

    return True


def validate_space(size):
    if action_get_or_default('skip-size-check'):
        return True
    available_space = get_available_space()
    if available_space < size:
        msg = 'Unable to allocate RBD of size {}MB, only {}MB are available.'
        action_fail(msg.format(size, available_space))
        return False
    return True


def validate_parameters(name, fs, mode):
    ''' Validate the user inputs to ensure they conform to what the
    action expects. This method will check the naming characters used
    for the rbd volume, ensure they have selected a fstype we are expecting
    and the mode against our whitelist '''
    name_regex = '^[a-zA-Z0-9][a-zA-Z0-9|-]'

    fs_whitelist = ['xfs', 'ext4']

    # see http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
    # for supported operations on RBD volumes.
    mode_whitelist = ['ReadWriteOnce', 'ReadOnlyMany']

    fails = 0

    if not re.match(name_regex, name):
        message = 'Validation failed for RBD volume-name'
        action_fail(message)
        fails = fails + 1
        action_set({'validation.name': message})

    if fs not in fs_whitelist:
        message = 'Validation failed for file system'
        action_fail(message)
        fails = fails + 1
        action_set({'validation.filesystem': message})

    if mode not in mode_whitelist:
        message = "Validation failed for mode"
        action_fail(message)
        fails = fails + 1
        action_set({'validation.mode': message})

    return fails


def debug_command(cmd):
    ''' Print a debug statement of the command invoked '''
    print("Invoking {}".format(cmd))


class UnknownAvailableSpaceException(Exception):
    pass


if __name__ == '__main__':
    main()
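For reference, the happy path of this action is roughly equivalent to the following manual sequence (a hedged sketch; the volume name `foo`, size and device path are placeholders, and the action additionally validates the ceph relation, pool space, parameters and name uniqueness first):

    # Create the RBD image, format it locally, then unmap it
    rbd create foo --size 50 --image-feature layering
    rbd map foo            # prints a device path such as /dev/rbd0
    mkfs.xfs /dev/rbd0     # use the device path returned by `rbd map`
    rbd unmap foo

    # Enlist the volume in Kubernetes using the rendered template
    kubectl create -f pv.yaml    # rendered from rbd-persistent-volume.yaml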

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/actions/namespace-create (generated, vendored, new executable file, 59 lines)
@@ -0,0 +1,59 @@
#!/usr/bin/env python3
import os
from yaml import safe_load as load
from charmhelpers.core.hookenv import (
    action_get,
    action_set,
    action_fail,
    action_name
)
from charmhelpers.core.templating import render
from subprocess import check_output


os.environ['PATH'] += os.pathsep + os.path.join(os.sep, 'snap', 'bin')


def kubectl(args):
    cmd = ['kubectl'] + args
    return check_output(cmd)


def namespace_list():
    y = load(kubectl(['get', 'namespaces', '-o', 'yaml']))
    ns = [i['metadata']['name'] for i in y['items']]
    action_set({'namespaces': ', '.join(ns)+'.'})
    return ns


def namespace_create():
    name = action_get('name')
    if name in namespace_list():
        action_fail('Namespace "{}" already exists.'.format(name))
        return

    render('create-namespace.yaml.j2', '/etc/kubernetes/addons/create-namespace.yaml',
           context={'name': name})
    kubectl(['create', '-f', '/etc/kubernetes/addons/create-namespace.yaml'])
    action_set({'msg': 'Namespace "{}" created.'.format(name)})


def namespace_delete():
    name = action_get('name')
    if name in ['default', 'kube-system']:
        action_fail('Not allowed to delete "{}".'.format(name))
        return
    if name not in namespace_list():
        action_fail('Namespace "{}" does not exist.'.format(name))
        return
    kubectl(['delete', 'ns/'+name])
    action_set({'msg': 'Namespace "{}" deleted.'.format(name)})


action = action_name().replace('namespace-', '')
if action == 'create':
    namespace_create()
elif action == 'list':
    namespace_list()
elif action == 'delete':
    namespace_delete()

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/actions/namespace-delete (generated, vendored, new symbolic link, 1 line)
@@ -0,0 +1 @@
namespace-create

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/actions/namespace-list (generated, vendored, new symbolic link, 1 line)
@@ -0,0 +1 @@
namespace-create

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/actions/restart (generated, vendored, new executable file, 14 lines)
@@ -0,0 +1,14 @@
#!/bin/bash

set +ex

# Restart the apiserver, controller-manager, and scheduler

systemctl restart snap.kube-apiserver.daemon
action-set apiserver.status='restarted'

systemctl restart snap.kube-controller-manager.daemon
action-set controller-manager.status='restarted'

systemctl restart snap.kube-scheduler.daemon
action-set kube-scheduler.status='restarted'

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/actions/upgrade (generated, vendored, new executable file, 5 lines)
@@ -0,0 +1,5 @@
#!/bin/sh
set -eux

charms.reactive set_state kubernetes-master.upgrade-specified
exec hooks/config-changed

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/config.yaml (generated, vendored, new file, 78 lines)
@@ -0,0 +1,78 @@
options:
  enable-dashboard-addons:
    type: boolean
    default: True
    description: Deploy the Kubernetes Dashboard and Heapster addons
  dns_domain:
    type: string
    default: cluster.local
    description: The local domain for cluster dns
  extra_sans:
    type: string
    default: ""
    description: |
      Space-separated list of extra SAN entries to add to the x509 certificate
      created for the master nodes.
  service-cidr:
    type: string
    default: 10.152.183.0/24
    description: CIDR to use for Kubernetes services. Cannot be changed after deployment.
  allow-privileged:
    type: string
    default: "auto"
    description: |
      Allow kube-apiserver to run in privileged mode. Supported values are
      "true", "false", and "auto". If "true", kube-apiserver will run in
      privileged mode by default. If "false", kube-apiserver will never run in
      privileged mode. If "auto", kube-apiserver will not run in privileged
      mode by default, but will switch to privileged mode if gpu hardware is
      detected on a worker node.
  channel:
    type: string
    default: "1.8/stable"
    description: |
      Snap channel to install Kubernetes master services from
  client_password:
    type: string
    default: ""
    description: |
      Password to be used for admin user (leave empty for random password).
  api-extra-args:
    type: string
    default: ""
    description: |
      Space separated list of flags and key=value pairs that will be passed as arguments to
      kube-apiserver. For example a value like this:
        runtime-config=batch/v2alpha1=true profiling=true
      will result in kube-apiserver being run with the following options:
        --runtime-config=batch/v2alpha1=true --profiling=true
  controller-manager-extra-args:
    type: string
    default: ""
    description: |
      Space separated list of flags and key=value pairs that will be passed as arguments to
      kube-controller-manager. For example a value like this:
        runtime-config=batch/v2alpha1=true profiling=true
      will result in kube-controller-manager being run with the following options:
        --runtime-config=batch/v2alpha1=true --profiling=true
  scheduler-extra-args:
    type: string
    default: ""
    description: |
      Space separated list of flags and key=value pairs that will be passed as arguments to
      kube-scheduler. For example a value like this:
        runtime-config=batch/v2alpha1=true profiling=true
      will result in kube-scheduler being run with the following options:
        --runtime-config=batch/v2alpha1=true --profiling=true
  authorization-mode:
    type: string
    default: "AlwaysAllow"
    description: |
      Comma separated authorization modes. Allowed values are
      "RBAC", "Node", "Webhook", "ABAC", "AlwaysDeny" and "AlwaysAllow".
  require-manual-upgrade:
    type: boolean
    default: true
    description: |
      When true, master nodes will not be upgraded until the user triggers
      it manually by running the upgrade action.
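As a sketch of how these options are typically driven from the Juju client (application and unit names are assumptions):

    # Pass extra flags straight through to kube-apiserver
    juju config kubernetes-master api-extra-args="runtime-config=batch/v2alpha1=true profiling=true"

    # With require-manual-upgrade=true, roll the snaps forward explicitly
    juju config kubernetes-master channel=1.8/stable
    juju run-action kubernetes-master/0 upgrade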

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/copyright (generated, vendored, new file, 13 lines)
@@ -0,0 +1,13 @@
Copyright 2016 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/debug-scripts/kubectl (generated, vendored, new executable file, 15 lines)
@@ -0,0 +1,15 @@
#!/bin/sh
set -ux

export PATH=$PATH:/snap/bin

alias kubectl="kubectl --kubeconfig=/home/ubuntu/config"

kubectl cluster-info > $DEBUG_SCRIPT_DIR/cluster-info
kubectl cluster-info dump > $DEBUG_SCRIPT_DIR/cluster-info-dump
for obj in pods svc ingress secrets pv pvc rc; do
    kubectl describe $obj --all-namespaces > $DEBUG_SCRIPT_DIR/describe-$obj
done
for obj in nodes; do
    kubectl describe $obj > $DEBUG_SCRIPT_DIR/describe-$obj
done

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/debug-scripts/kubernetes-master-services (generated, vendored, new executable file, 9 lines)
@@ -0,0 +1,9 @@
#!/bin/sh
set -ux

for service in kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl status snap.$service.daemon > $DEBUG_SCRIPT_DIR/$service-systemctl-status
    journalctl -u snap.$service.daemon > $DEBUG_SCRIPT_DIR/$service-journal
done

# FIXME: grab snap config or something

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/exec.d/vmware-patch/charm-pre-install (generated, vendored, new executable file, 17 lines)
@@ -0,0 +1,17 @@
#!/bin/bash
MY_HOSTNAME=$(hostname)

: ${JUJU_UNIT_NAME:=`uuidgen`}


if [ "${MY_HOSTNAME}" == "ubuntuguest" ]; then
    juju-log "Detected broken vsphere integration. Applying hostname override"

    FRIENDLY_HOSTNAME=$(echo $JUJU_UNIT_NAME | tr / -)
    juju-log "Setting hostname to $FRIENDLY_HOSTNAME"
    if [ ! -f /etc/hostname.orig ]; then
        mv /etc/hostname /etc/hostname.orig
    fi
    echo "${FRIENDLY_HOSTNAME}" > /etc/hostname
    hostname $FRIENDLY_HOSTNAME
fi

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/icon.svg (generated, vendored, new file, 362 lines)
File diff suppressed because one or more lines are too long (SVG icon, 26 KiB).

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/layer.yaml (generated, vendored, new file, 32 lines)
@@ -0,0 +1,32 @@
repo: https://github.com/kubernetes/kubernetes.git
includes:
  - 'layer:basic'
  - 'layer:snap'
  - 'layer:tls-client'
  - 'layer:leadership'
  - 'layer:debug'
  - 'layer:metrics'
  - 'layer:nagios'
  - 'layer:cdk-service-kicker'
  - 'interface:ceph-admin'
  - 'interface:etcd'
  - 'interface:http'
  - 'interface:kubernetes-cni'
  - 'interface:kube-dns'
  - 'interface:kube-control'
  - 'interface:public-address'
options:
  basic:
    packages:
      - socat
  tls-client:
    ca_certificate_path: '/root/cdk/ca.crt'
    server_certificate_path: '/root/cdk/server.crt'
    server_key_path: '/root/cdk/server.key'
    client_certificate_path: '/root/cdk/client.crt'
    client_key_path: '/root/cdk/client.key'
  cdk-service-kicker:
    services:
      - snap.kube-apiserver.daemon
      - snap.kube-controller-manager.daemon
      - snap.kube-scheduler.daemon

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/lib/charms/kubernetes/common.py (generated, vendored, new file, 71 lines)
@@ -0,0 +1,71 @@
#!/usr/bin/env python

# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import re
import subprocess

from time import sleep


def get_version(bin_name):
    """Get the version of an installed Kubernetes binary.

    :param str bin_name: Name of binary
    :return: 3-tuple version (maj, min, patch)

    Example::

        >>> get_version('kubelet')
        (1, 6, 0)

    """
    cmd = '{} --version'.format(bin_name).split()
    version_string = subprocess.check_output(cmd).decode('utf-8')
    return tuple(int(q) for q in re.findall("[0-9]+", version_string)[:3])


def retry(times, delay_secs):
    """ Decorator for retrying a method call.

    Args:
        times: How many times should we retry before giving up
        delay_secs: Delay in secs

    Returns: A callable that would return the last call outcome
    """

    def retry_decorator(func):
        """ Decorator to wrap the function provided.

        Args:
            func: Provided function should return either True or False

        Returns: A callable that would return the last call outcome

        """
        def _wrapped(*args, **kwargs):
            res = func(*args, **kwargs)
            attempt = 0
            while not res and attempt < times:
                sleep(delay_secs)
                res = func(*args, **kwargs)
                if res:
                    break
                attempt += 1
            return res
        return _wrapped

    return retry_decorator

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/metadata.yaml (generated, vendored, new file, 63 lines)
@@ -0,0 +1,63 @@
name: kubernetes-master
summary: The Kubernetes control plane.
maintainers:
  - Tim Van Steenburgh <tim.van.steenburgh@canonical.com>
  - George Kraft <george.kraft@canonical.com>
  - Rye Terrell <rye.terrell@canonical.com>
  - Konstantinos Tsakalozos <kos.tsakalozos@canonical.com>
  - Charles Butler <Chuck@dasroot.net>
  - Matthew Bruzek <mbruzek@ubuntu.com>
description: |
  Kubernetes is an open-source platform for deploying, scaling, and operations
  of application containers across a cluster of hosts. Kubernetes is portable
  in that it works with public, private, and hybrid clouds. Extensible through
  a pluggable infrastructure. Self healing in that it will automatically
  restart and place containers on healthy nodes if a node ever goes away.
tags:
  - infrastructure
  - kubernetes
  - master
subordinate: false
series:
  - xenial
provides:
  kube-api-endpoint:
    interface: http
  cluster-dns:
    # kube-dns is deprecated. Its functionality has been rolled into the
    # kube-control interface. The cluster-dns relation will be removed in
    # a future release.
    interface: kube-dns
  kube-control:
    interface: kube-control
  cni:
    interface: kubernetes-cni
    scope: container
requires:
  etcd:
    interface: etcd
  loadbalancer:
    interface: public-address
  ceph-storage:
    interface: ceph-admin
resources:
  kubectl:
    type: file
    filename: kubectl.snap
    description: kubectl snap
  kube-apiserver:
    type: file
    filename: kube-apiserver.snap
    description: kube-apiserver snap
  kube-controller-manager:
    type: file
    filename: kube-controller-manager.snap
    description: kube-controller-manager snap
  kube-scheduler:
    type: file
    filename: kube-scheduler.snap
    description: kube-scheduler snap
  cdk-addons:
    type: file
    filename: cdk-addons.snap
    description: CDK addons snap
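A hedged sketch of satisfying the `etcd` and `ceph-storage` requirements above (application names are assumptions; the etcd and ceph-mon charms expose matching interfaces):

    juju add-relation kubernetes-master etcd
    juju add-relation kubernetes-master ceph-mon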

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/metrics.yaml (generated, vendored, new file, 34 lines)
@@ -0,0 +1,34 @@
metrics:
  juju-units: {}
  pods:
    type: gauge
    description: number of pods
    command: /snap/bin/kubectl get po --all-namespaces | tail -n+2 | wc -l
  services:
    type: gauge
    description: number of services
    command: /snap/bin/kubectl get svc --all-namespaces | tail -n+2 | wc -l
  replicasets:
    type: gauge
    description: number of replicasets
    command: /snap/bin/kubectl get rs --all-namespaces | tail -n+2 | wc -l
  replicationcontrollers:
    type: gauge
    description: number of replicationcontrollers
    command: /snap/bin/kubectl get rc --all-namespaces | tail -n+2 | wc -l
  nodes:
    type: gauge
    description: number of kubernetes nodes
    command: /snap/bin/kubectl get nodes | tail -n+2 | wc -l
  persistentvolume:
    type: gauge
    description: number of pv
    command: /snap/bin/kubectl get pv | tail -n+2 | wc -l
  persistentvolumeclaims:
    type: gauge
    description: number of claims
    command: /snap/bin/kubectl get pvc --all-namespaces | tail -n+2 | wc -l
  serviceaccounts:
    type: gauge
    description: number of sa
    command: /snap/bin/kubectl get sa --all-namespaces | tail -n+2 | wc -l
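Each gauge simply counts lines of kubectl output. Once metrics have been collected they can be read back from the Juju client; a minimal sketch, assuming the `juju metrics` command is available in the Juju release in use:

    juju metrics kubernetes-master/0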

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py (generated, vendored, new file, 1235 lines)
File diff suppressed because it is too large.

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/templates/ceph-secret.yaml (generated, vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: kubernetes.io/rbd
data:
  key: {{ secret }}

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/templates/ceph.conf (generated, vendored, new file, 18 lines)
@@ -0,0 +1,18 @@
[global]
auth cluster required = {{ auth_supported }}
auth service required = {{ auth_supported }}
auth client required = {{ auth_supported }}
keyring = /etc/ceph/$cluster.$name.keyring
mon host = {{ mon_hosts }}
fsid = {{ fsid }}

log to syslog = {{ use_syslog }}
err to syslog = {{ use_syslog }}
clog to syslog = {{ use_syslog }}
mon cluster log to syslog = {{ use_syslog }}
debug mon = {{ loglevel }}/5
debug osd = {{ loglevel }}/5

[client]
log file = /var/log/ceph.log

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/templates/create-namespace.yaml.j2 (generated, vendored, new file, 6 lines)
@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  name: {{ name }}
  labels:
    name: {{ name }}

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/templates/rbd-persistent-volume.yaml (generated, vendored, new file, 25 lines)
@@ -0,0 +1,25 @@
# JUJU Internal Template used to enlist RBD volumes from the
# `create-rbd-pv` action. This is a temporary file on disk to enlist resources.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ RBD_NAME }}
spec:
  capacity:
    storage: {{ RBD_SIZE }}M
  accessModes:
    - {{ PV_MODE }}
  storageClassName: "rbd"
  rbd:
    monitors:
{% for host in monitors %}
      - {{ host }}
{% endfor %}
    pool: rbd
    image: {{ RBD_NAME }}
    user: admin
    secretRef:
      name: ceph-secret
    fsType: {{ RBD_FS }}
    readOnly: false
  # persistentVolumeReclaimPolicy: Recycle

vendor/k8s.io/kubernetes/cluster/juju/layers/kubernetes-master/tox.ini (generated, vendored, new file, 12 lines)
@@ -0,0 +1,12 @@
[tox]
skipsdist=True
envlist = py34, py35
skip_missing_interpreters = True

[testenv]
commands = py.test -v
deps =
    -r{toxinidir}/requirements.txt

[flake8]
exclude=docs