name,owner,auto-assigned,sig
Addon update should propagate add-on file changes,eparis,1,
AppArmor should enforce an AppArmor profile,derekwaynecarr,0,node
AppArmor when running with AppArmor should enforce a permissive profile,yujuhong,1,node
AppArmor when running with AppArmor should enforce a profile blocking writes,freehan,1,node
AppArmor when running with AppArmor should reject an unloaded profile,rmmh,1,node
AppArmor when running without AppArmor should reject a pod with an AppArmor profile,rrati,0,node
Cadvisor should be healthy on every node.,vishh,0,node
Cassandra should create and scale cassandra,fabioy,1,apps
CassandraStatefulSet should create statefulset,wojtek-t,1,apps
Cluster level logging using Elasticsearch should check that logs from containers are ingested into Elasticsearch,crassirostris,0,instrumentation
Cluster level logging using GCL should check that logs from containers are ingested in GCL,crassirostris,0,instrumentation
Cluster level logging using GCL should create a constant load with long-living pods and ensure logs delivery,crassirostris,0,instrumentation
Cluster level logging using GCL should create a constant load with short-living pods and ensure logs delivery,crassirostris,0,instrumentation
Cluster size autoscaling should add node to the particular mig,spxtr,1,autoscaling
Cluster size autoscaling should correctly scale down after a node is not needed,pmorie,1,autoscaling
Cluster size autoscaling should correctly scale down after a node is not needed when there is non autoscaled pool,krousey,1,autoscaling
Cluster size autoscaling should disable node pool autoscaling,Q-Lee,1,autoscaling
Cluster size autoscaling should increase cluster size if pending pods are small,childsb,1,autoscaling
Cluster size autoscaling should increase cluster size if pending pods are small and there is another node pool that is not autoscaled,apelisse,1,autoscaling
Cluster size autoscaling should increase cluster size if pods are pending due to host port conflict,brendandburns,1,autoscaling
Cluster size autoscaling should scale up correct target pool,mikedanese,1,autoscaling
Cluster size autoscaling shouldn't increase cluster size if pending pod is too large,rrati,0,autoscaling
ClusterDns should create pod that uses dns,sttts,0,network
ConfigMap optional updates should be reflected in volume,timothysc,1,apps
ConfigMap should be consumable from pods in volume,alex-mohr,1,apps
ConfigMap should be consumable from pods in volume as non-root,rrati,0,apps
ConfigMap should be consumable from pods in volume as non-root with FSGroup,roberthbailey,1,apps
ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set,rrati,0,apps
ConfigMap should be consumable from pods in volume with defaultMode set,Random-Liu,1,apps
ConfigMap should be consumable from pods in volume with mappings,rrati,0,apps
ConfigMap should be consumable from pods in volume with mappings and Item mode set,eparis,1,apps
ConfigMap should be consumable from pods in volume with mappings as non-root,apelisse,1,apps
ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup,zmerlynn,1,apps
ConfigMap should be consumable in multiple volumes in the same pod,caesarxuchao,1,apps
ConfigMap should be consumable via environment variable,ncdc,1,apps
ConfigMap should be consumable via the environment,rkouj,0,apps
ConfigMap updates should be reflected in volume,kevin-wangzefeng,1,apps
Container Lifecycle Hook when create a pod with lifecycle hook when it is exec hook should execute poststart exec hook properly,Random-Liu,1,node
Container Lifecycle Hook when create a pod with lifecycle hook when it is exec hook should execute prestop exec hook properly,rrati,0,node
Container Lifecycle Hook when create a pod with lifecycle hook when it is http hook should execute poststart http hook properly,vishh,1,node
Container Lifecycle Hook when create a pod with lifecycle hook when it is http hook should execute prestop http hook properly,freehan,1,node
Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image *,Random-Liu,0,node
Container Runtime Conformance Test container runtime conformance blackbox test when starting a container that exits it should run with the expected status,luxas,1,node
Container Runtime Conformance Test container runtime conformance blackbox test when starting a container that exits should report termination message *,alex-mohr,1,node
ContainerLogPath Pod with a container printed log to stdout should print log to correct log path,resouer,0,node
CronJob should not emit unexpected warnings,soltysh,1,apps
CronJob should not schedule jobs when suspended,soltysh,1,apps
CronJob should not schedule new jobs when ForbidConcurrent,soltysh,1,apps
CronJob should remove from active list jobs that have been deleted,soltysh,1,apps
CronJob should replace jobs when ReplaceConcurrent,soltysh,1,apps
CronJob should schedule multiple jobs concurrently,soltysh,1,apps
DNS config map should be able to change configuration,rkouj,0,network
DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios,MrHohn,0,network
DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods when cluster size changed,MrHohn,0,network
DNS should provide DNS for ExternalName services,rmmh,1,network
DNS should provide DNS for pods for Hostname and Subdomain Annotation,mtaufen,1,network
DNS should provide DNS for services,roberthbailey,1,network
DNS should provide DNS for the cluster,roberthbailey,1,network
Daemon set should retry creating failed daemon pods,yifan-gu,1,apps
Daemon set should run and stop complex daemon,jlowdermilk,1,apps
Daemon set should run and stop complex daemon with node affinity,erictune,1,apps
Daemon set should run and stop simple daemon,mtaufen,1,apps
DaemonRestart Controller Manager should not create/delete replicas across restart,rrati,0,apps
DaemonRestart Kubelet should not restart containers across restart,madhusudancs,1,apps
DaemonRestart Scheduler should continue assigning pods to nodes across restart,lavalamp,1,apps
Density create a batch of pods latency/resource should be within limit when create * pods with * interval,apelisse,1,scalability
Density create a batch of pods with higher API QPS latency/resource should be within limit when create * pods with * interval (QPS *),jlowdermilk,1,scalability
Density create a sequence of pods latency/resource should be within limit when create * pods with * background pods,wojtek-t,1,scalability
Density should allow running maximum capacity pods on nodes,smarterclayton,1,scalability
Density should allow starting * pods per node using * with * secrets and * daemons,rkouj,0,scalability
Deployment RecreateDeployment should delete old pods and create new ones,kargakis,0,apps
Deployment RollingUpdateDeployment should delete old pods and create new ones,kargakis,0,apps
Deployment deployment reaping should cascade to its replica sets and pods,kargakis,1,apps
Deployment deployment should create new pods,kargakis,0,apps
Deployment deployment should delete old replica sets,kargakis,0,apps
Deployment deployment should label adopted RSs and pods,kargakis,0,apps
Deployment deployment should support rollback,kargakis,0,apps
Deployment deployment should support rollback when there's replica set with no revision,kargakis,0,apps
Deployment deployment should support rollover,kargakis,0,apps
Deployment iterative rollouts should eventually progress,kargakis,0,apps
Deployment lack of progress should be reported in the deployment status,kargakis,0,apps
Deployment overlapping deployment should not fight with each other,kargakis,1,apps
Deployment paused deployment should be able to scale,kargakis,1,apps
Deployment paused deployment should be ignored by the controller,kargakis,0,apps
Deployment scaled rollout deployment should not block on annotation check,kargakis,1,apps
DisruptionController evictions: * => *,rkouj,0,scheduling
DisruptionController should create a PodDisruptionBudget,rkouj,0,scheduling
DisruptionController should update PodDisruptionBudget status,rkouj,0,scheduling
Docker Containers should be able to override the image's default arguments (docker cmd),maisem,0,node
Docker Containers should be able to override the image's default command and arguments,maisem,0,node
Docker Containers should be able to override the image's default commmand (docker entrypoint),maisem,0,node
Docker Containers should use the image defaults if command and args are blank,vishh,0,node
Downward API should create a pod that prints his name and namespace,nhlfr,0,node
Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars,deads2k,1,node
Downward API should provide default limits.cpu/memory from node allocatable,derekwaynecarr,0,node
Downward API should provide pod IP as an env var,nhlfr,0,node
Downward API should provide pod name and namespace as env vars,nhlfr,0,node
Downward API volume should provide container's cpu limit,smarterclayton,1,node
Downward API volume should provide container's cpu request,krousey,1,node
Downward API volume should provide container's memory limit,krousey,1,node
Downward API volume should provide container's memory request,mikedanese,1,node
Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set,lavalamp,1,node
Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set,freehan,1,node
Downward API volume should provide podname as non-root with fsgroup,rrati,0,node
Downward API volume should provide podname as non-root with fsgroup and defaultMode,rrati,0,node
Downward API volume should provide podname only,mwielgus,1,node
Downward API volume should set DefaultMode on files,davidopp,1,node
Downward API volume should set mode on item file,mtaufen,1,node
Downward API volume should update annotations on modification,eparis,1,node
Downward API volume should update labels on modification,timothysc,1,node
Dynamic provisioning DynamicProvisioner Alpha should create and delete alpha persistent volumes,rrati,0,storage
Dynamic provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes,jsafrane,0,storage
Dynamic provisioning DynamicProvisioner should create and delete persistent volumes,jsafrane,0,storage
Dynamic provisioning DynamicProvisioner should not provision a volume in an unmanaged GCE zone.,jszczepkowski,1,
DynamicKubeletConfiguration When a configmap called `kubelet-` is added to the `kube-system` namespace The Kubelet on that node should restart to take up the new config,mwielgus,1,storage
ESIPP should handle updates to source ip annotation,MrHohn,1,network
ESIPP should only target nodes with endpoints,MrHohn,0,network
ESIPP should work for type=LoadBalancer,MrHohn,1,network
ESIPP should work for type=NodePort,MrHohn,1,network
ESIPP should work from pods,MrHohn,1,network
Empty starts a pod,childsb,1,
"EmptyDir volumes should support (non-root,0644,default)",tallclair,1,node
"EmptyDir volumes should support (non-root,0644,tmpfs)",spxtr,1,node
"EmptyDir volumes should support (non-root,0666,default)",dchen1107,1,node
"EmptyDir volumes should support (non-root,0666,tmpfs)",apelisse,1,node
"EmptyDir volumes should support (non-root,0777,default)",mwielgus,1,node
"EmptyDir volumes should support (non-root,0777,tmpfs)",smarterclayton,1,node
"EmptyDir volumes should support (root,0644,default)",mtaufen,1,node
"EmptyDir volumes should support (root,0644,tmpfs)",madhusudancs,1,node
"EmptyDir volumes should support (root,0666,default)",brendandburns,1,node
"EmptyDir volumes should support (root,0666,tmpfs)",davidopp,1,node
"EmptyDir volumes should support (root,0777,default)",spxtr,1,node
"EmptyDir volumes should support (root,0777,tmpfs)",alex-mohr,1,node
EmptyDir volumes volume on default medium should have the correct mode,yifan-gu,1,node
EmptyDir volumes volume on tmpfs should have the correct mode,mwielgus,1,node
"EmptyDir volumes when FSGroup is specified files with FSGroup ownership should support (root,0644,tmpfs)",justinsb,1,node
EmptyDir volumes when FSGroup is specified new files should be created with FSGroup ownership when container is non-root,brendandburns,1,node
EmptyDir volumes when FSGroup is specified new files should be created with FSGroup ownership when container is root,childsb,1,node
EmptyDir volumes when FSGroup is specified volume on default medium should have the correct mode using FSGroup,eparis,1,node
EmptyDir volumes when FSGroup is specified volume on tmpfs should have the correct mode using FSGroup,timothysc,1,node
EmptyDir wrapper volumes should not cause race condition when used for configmaps,mtaufen,1,node
EmptyDir wrapper volumes should not cause race condition when used for git_repo,brendandburns,1,node
EmptyDir wrapper volumes should not conflict,deads2k,1,node
Etcd failure should recover from SIGKILL,pmorie,1,api-machinery
Etcd failure should recover from network partition with master,justinsb,1,api-machinery
Events should be sent by kubelets and the scheduler about pods scheduling and running,zmerlynn,1,node
Firewall rule should create valid firewall rules for LoadBalancer type service,MrHohn,0,network
Firewall rule should have correct firewall rules for e2e cluster,MrHohn,0,network
GCP Volumes GlusterFS should be mountable,nikhiljindal,0,storage
GCP Volumes NFSv4 should be mountable for NFSv4,nikhiljindal,0,storage
GKE local SSD should write and read from node local SSD,fabioy,0,storage
GKE node pools should create a cluster with multiple node pools,fabioy,1,cluster-lifecycle
Garbage Collection Test: * Should eventually garbage collect containers when we exceed the number of dead containers per container,Random-Liu,0,cluster-lifecycle
Garbage collector should delete RS created by deployment when not orphaning,rkouj,0,cluster-lifecycle
Garbage collector should delete pods created by rc when not orphaning,justinsb,1,cluster-lifecycle
Garbage collector should orphan RS created by deployment when deleteOptions.OrphanDependents is true,rkouj,0,cluster-lifecycle
Garbage collector should orphan pods created by rc if delete options say so,fabioy,1,cluster-lifecycle
Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil,zmerlynn,1,cluster-lifecycle
"Generated release_1_5 clientset should create pods, delete pods, watch pods",rrati,0,api-machinery
"Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs",soltysh,1,api-machinery
HA-master survive addition/removal replicas different zones,derekwaynecarr,0,api-machinery
HA-master survive addition/removal replicas multizone workers,rkouj,0,api-machinery
HA-master survive addition/removal replicas same zone,derekwaynecarr,0,api-machinery
Hazelcast should create and scale hazelcast,mikedanese,1,big-data
Horizontal pod autoscaling (scale resource: CPU) Deployment Should scale from 1 pod to 3 pods and from 3 to 5,jszczepkowski,0,autoscaling
Horizontal pod autoscaling (scale resource: CPU) Deployment Should scale from 5 pods to 3 pods and from 3 to 1,jszczepkowski,0,autoscaling
Horizontal pod autoscaling (scale resource: CPU) ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5,jszczepkowski,0,autoscaling
Horizontal pod autoscaling (scale resource: CPU) ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1,jszczepkowski,0,autoscaling
Horizontal pod autoscaling (scale resource: CPU) ReplicationController *,jszczepkowski,0,autoscaling
Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods,jszczepkowski,0,autoscaling
Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod,jszczepkowski,0,autoscaling
HostPath should give a volume the correct mode,thockin,1,node
HostPath should support r/w,luxas,1,node
HostPath should support subPath,sttts,1,node
ImageID should be set to the manifest digest (from RepoDigests) when available,rrati,0,node
InitContainer should invoke init containers on a RestartAlways pod,saad-ali,1,node
InitContainer should invoke init containers on a RestartNever pod,rrati,0,node
InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod,maisem,0,node
InitContainer should not start app containers if init containers fail on a RestartAlways pod,maisem,0,node
Initial Resources should set initial resources based on historical data,piosz,0,node
Job should delete a job,soltysh,1,apps
Job should run a job to completion when tasks sometimes fail and are locally restarted,soltysh,1,apps
Job should run a job to completion when tasks sometimes fail and are not locally restarted,soltysh,1,apps
Job should run a job to completion when tasks succeed,soltysh,1,apps
Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive,swagiaal,0,instrumentation
Kubectl alpha client Kubectl run CronJob should create a CronJob,soltysh,1,cli
Kubectl alpha client Kubectl run ScheduledJob should create a ScheduledJob,soltysh,1,cli
Kubectl client Guestbook application should create and stop a working application,pwittrock,0,cli
Kubectl client Kubectl api-versions should check if v1 is in available api versions,pwittrock,0,cli
Kubectl client Kubectl apply should apply a new configuration to an existing RC,pwittrock,0,cli
Kubectl client Kubectl apply should reuse port when apply to an existing SVC,deads2k,0,cli
Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info,pwittrock,0,cli
Kubectl client Kubectl create quota should create a quota with scopes,rrati,0,cli
Kubectl client Kubectl create quota should create a quota without scopes,xiang90,1,cli
Kubectl client Kubectl create quota should reject quota with invalid scopes,brendandburns,1,cli
Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods,pwittrock,0,cli
Kubectl client Kubectl expose should create services for rc,pwittrock,0,cli
Kubectl client Kubectl label should update the label on a resource,pwittrock,0,cli
Kubectl client Kubectl logs should be able to retrieve and filter logs,jlowdermilk,0,cli
Kubectl client Kubectl patch should add annotations for pods in rc,janetkuo,0,cli
Kubectl client Kubectl replace should update a single-container pod's image,rrati,0,cli
Kubectl client Kubectl rolling-update should support rolling-update to same image,janetkuo,0,cli
"Kubectl client Kubectl run --rm job should create a job from an image, then delete the job",soltysh,1,cli
Kubectl client Kubectl run default should create an rc or deployment from an image,janetkuo,0,cli
Kubectl client Kubectl run deployment should create a deployment from an image,janetkuo,0,cli
Kubectl client Kubectl run job should create a job from an image when restart is OnFailure,soltysh,1,cli
Kubectl client Kubectl run pod should create a pod from an image when restart is Never,janetkuo,0,cli
Kubectl client Kubectl run rc should create an rc from an image,janetkuo,0,cli
Kubectl client Kubectl taint should remove all the taints with the same key off a node,erictune,1,cli
"Kubectl client Kubectl taint should update the taint on a node",pwittrock,0,cli
Kubectl client Kubectl version should check is all data is printed,janetkuo,0,cli
Kubectl client Proxy server should support --unix-socket=/path,zmerlynn,1,cli
Kubectl client Proxy server should support proxy with --port 0,ncdc,1,cli
Kubectl client Simple pod should handle in-cluster config,rkouj,0,cli
Kubectl client Simple pod should return command exit codes,yifan-gu,1,cli
Kubectl client Simple pod should support exec,ncdc,0,cli
Kubectl client Simple pod should support exec through an HTTP proxy,ncdc,0,cli
Kubectl client Simple pod should support inline execution and attach,ncdc,0,cli
Kubectl client Simple pod should support port-forward,ncdc,0,cli
Kubectl client Update Demo should create and stop a replication controller,sttts,0,cli
Kubectl client Update Demo should do a rolling update of a replication controller,sttts,0,cli
Kubectl client Update Demo should scale a replication controller,sttts,0,cli
Kubelet Cgroup Manager Pod containers On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup,derekwaynecarr,0,node
Kubelet Cgroup Manager Pod containers On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup,derekwaynecarr,0,node
Kubelet Cgroup Manager Pod containers On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root,derekwaynecarr,0,node
Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created,davidopp,1,node
Kubelet Container Manager Validate OOM score adjustments once the node is setup Kubelet's oom-score-adj should be -999,vishh,1,node
"Kubelet Container Manager Validate OOM score adjustments once the node is setup burstable container's oom-score-adj should be between [2, 1000)",derekwaynecarr,1,node
Kubelet Container Manager Validate OOM score adjustments once the node is setup docker daemon's oom-score-adj should be -999,thockin,1,node
Kubelet Container Manager Validate OOM score adjustments once the node is setup guaranteed container's oom-score-adj should be -998,vishh,1,node
Kubelet Container Manager Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000,timothysc,1,node
Kubelet Eviction Manager hard eviction test pod using the most disk space gets evicted when the node disk usage is above the eviction hard threshold should evict the pod using the most disk space,rkouj,0,node
Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node,rkouj,0,node
Kubelet experimental resource usage tracking resource tracking for * pods per node,yujuhong,0,node
Kubelet regular resource usage tracking resource tracking for * pods per node,yujuhong,0,node
Kubelet when scheduling a busybox command in a pod it should print the output to logs,ixdy,1,node
Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete,smarterclayton,1,node
Kubelet when scheduling a busybox command that always fails in a pod should have an error terminated reason,deads2k,1,node
Kubelet when scheduling a read only busybox container it should not write to root filesystem,timothysc,1,node
KubeletManagedEtcHosts should test kubelet managed /etc/hosts file,Random-Liu,1,node
Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive,wonderfly,0,ui
LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.,cjcullen,1,node
Liveness liveness pods should be automatically restarted,derekwaynecarr,0,node
Load capacity should be able to handle * pods per node * with * secrets and * daemons,rkouj,0,network
Loadbalancing: L7 GCE should conform to Ingress spec,derekwaynecarr,0,network
Loadbalancing: L7 GCE should create ingress with given static-ip,eparis,1,
Loadbalancing: L7 Nginx should conform to Ingress spec,ncdc,1,network
"Logging soak should survive logging 1KB every * seconds, for a duration of *, scaling up to * pods per node",justinsb,1,node
"MemoryEviction when there is memory pressure should evict pods in the correct order (besteffort first, then burstable, then guaranteed)",ixdy,1,node
Mesos applies slave attributes as labels,justinsb,1,apps
Mesos schedules pods annotated with roles on correct slaves,tallclair,1,apps
Mesos starts static pods on every node in the mesos cluster,lavalamp,1,apps
MetricsGrabber should grab all metrics from API server.,gmarek,0,instrumentation
MetricsGrabber should grab all metrics from a ControllerManager.,gmarek,0,instrumentation
MetricsGrabber should grab all metrics from a Kubelet.,gmarek,0,instrumentation
MetricsGrabber should grab all metrics from a Scheduler.,gmarek,0,instrumentation
MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted,roberthbailey,1,node
MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted,justinsb,1,node
MirrorPod when create a mirror pod should be updated when static pod updated,saad-ali,1,node
Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster.,piosz,0,instrumentation
Multi-AZ Clusters should spread the pods of a replication controller across zones,xiang90,1,api-machinery
Multi-AZ Clusters should spread the pods of a service across zones,mwielgus,1,api-machinery
Namespaces should always delete fast (ALL of 100 namespaces in 150 seconds),rmmh,1,api-machinery
Namespaces should delete fast enough (90 percent of 100 namespaces in 150 seconds),kevin-wangzefeng,1,api-machinery
Namespaces should ensure that all pods are removed when a namespace is deleted.,xiang90,1,api-machinery
Namespaces should ensure that all services are removed when a namespace is deleted.,pmorie,1,api-machinery
Network Partition *,foxish,0,network
Network Partition Pods should return to running and ready state after network partition is healed *,foxish,0,network
Network Partition should come back up if node goes down,foxish,0,network
Network Partition should create new pods when node is partitioned,foxish,0,network
Network Partition should eagerly create replacement pod during network partition when termination grace is non-zero,foxish,0,network
Network Partition should not reschedule stateful pods if there is a network partition,brendandburns,0,network
Network should set TCP CLOSE_WAIT timeout,bowei,0,network
Networking Granular Checks: Pods should function for intra-pod communication: http,sttts,0,network
Networking Granular Checks: Pods should function for intra-pod communication: udp,freehan,0,network
Networking Granular Checks: Pods should function for node-pod communication: http,spxtr,1,network
Networking Granular Checks: Pods should function for node-pod communication: udp,wojtek-t,1,network
Networking Granular Checks: Services should function for endpoint-Service: http,bgrant0607,1,network
Networking Granular Checks: Services should function for endpoint-Service: udp,maisem,1,network
Networking Granular Checks: Services should function for node-Service: http,thockin,1,network
Networking Granular Checks: Services should function for node-Service: udp,yifan-gu,1,network
Networking Granular Checks: Services should function for pod-Service: http,childsb,1,network
Networking Granular Checks: Services should function for pod-Service: udp,brendandburns,1,network
Networking Granular Checks: Services should update endpoints: http,rrati,0,network
Networking Granular Checks: Services should update endpoints: udp,freehan,1,network
Networking Granular Checks: Services should update nodePort: http,nikhiljindal,1,network
Networking Granular Checks: Services should update nodePort: udp,smarterclayton,1,network
Networking IPerf should transfer ~ 1GB onto the service endpoint * servers (maximum of * clients),fabioy,1,network
Networking should check kube-proxy urls,lavalamp,1,network
Networking should provide Internet connection for containers,sttts,0,network
"Networking should provide unchanging, static URL paths for kubernetes api services",freehan,0,network
NoExecuteTaintManager doesn't evict pod with tolerations from tainted nodes,freehan,0,scheduling
NoExecuteTaintManager eventually evict pod with finite tolerations from tainted nodes,freehan,0,scheduling
NoExecuteTaintManager evicts pods from tainted nodes,freehan,0,scheduling
NoExecuteTaintManager removing taint cancels eviction,freehan,0,scheduling
NodeProblemDetector KernelMonitor should generate node condition and events for corresponding errors,Random-Liu,0,node
Nodes Resize should be able to add nodes,piosz,1,cluster-lifecycle
Nodes Resize should be able to delete nodes,zmerlynn,1,cluster-lifecycle
Opaque resources should account opaque integer resources in pods with multiple containers.,ConnorDoyle,0,node
Opaque resources should not break pods that do not consume opaque integer resources.,ConnorDoyle,0,node
Opaque resources should not schedule pods that exceed the available amount of opaque integer resource.,ConnorDoyle,0,node
Opaque resources should schedule pods that do consume opaque integer resources.,ConnorDoyle,0,node
PersistentVolumes PersistentVolumes:GCEPD should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach,copejon,0,storage
PersistentVolumes PersistentVolumes:GCEPD should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk,thockin,1,storage
PersistentVolumes PersistentVolumes:GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach,copejon,0,storage
PersistentVolumes PersistentVolumes:NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.,lavalamp,1,
PersistentVolumes PersistentVolumes:NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access,copejon,0,storage
PersistentVolumes PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access,copejon,0,storage
PersistentVolumes PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access,copejon,0,storage
PersistentVolumes PersistentVolumes:NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access,copejon,0,storage
PersistentVolumes PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access,copejon,0,storage
PersistentVolumes PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access,copejon,0,storage
PersistentVolumes PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access,copejon,0,storage
PersistentVolumes Selector-Label Volume Binding:vsphere should bind volume with claim for given label,copejon,0,storage
PersistentVolumes persistentvolumereclaim:vsphere should delete persistent volume when reclaimPolicy set to delete and associated claim is deleted,copejon,0,storage
PersistentVolumes persistentvolumereclaim:vsphere should retain persistent volume when reclaimPolicy set to retain when associated claim is deleted,copejon,0,storage
PersistentVolumes when kubelet restarts *,rkouj,0,storage
PersistentVolumes:vsphere should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach,rkouj,0,storage
PersistentVolumes:vsphere should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach,rkouj,0,storage
Pet Store should scale to persist a nominal number ( * ) of transactions in * seconds,xiang90,1,apps
"Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host",saad-ali,0,storage
"Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully.",saad-ali,0,storage
Pod Disks should be able to detach from a node which was deleted,rkouj,0,storage
Pod Disks should be able to detach from a node whose api object was deleted,rkouj,0,storage
"Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession",saad-ali,0,storage
"Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host",mml,1,storage
"Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully.",saad-ali,1,storage
"Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession",saad-ali,0,storage
Pod garbage collector should handle the creation of 1000 pods,wojtek-t,1,node
Pods Extended Delete Grace Period should be submitted and removed,rkouj,0,node
Pods Extended Pods Set QOS Class should be submitted and removed,rkouj,0,node
Pods should allow activeDeadlineSeconds to be updated,derekwaynecarr,0,node
Pods should be submitted and removed,davidopp,1,node
Pods should be updated,derekwaynecarr,1,node
Pods should cap back-off at MaxContainerBackOff,maisem,1,node
Pods should contain environment variables for services,jlowdermilk,1,node
Pods should get a host IP,xiang90,1,node
Pods should have their auto-restart back-off timer reset on image update,mikedanese,1,node
Pods should support remote command execution over websockets,madhusudancs,1,node
Pods should support retrieving logs from the container over websockets,vishh,0,node
Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets,eparis,1,node
"Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends data, and disconnects",rkouj,0,node
"Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends no data, and disconnects",rkouj,0,node
"Port forwarding With a server listening on 0.0.0.0 that expects no client request should support a client that connects, sends data, and disconnects",rkouj,0,node
Port forwarding With a server listening on localhost should support forwarding over websockets,lavalamp,1,node
"Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends data, and disconnects",rkouj,0,node
"Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends no data, and disconnects",rkouj,0,node
"Port forwarding With a server listening on localhost that expects no client request should support a client that connects, sends data, and disconnects",rkouj,0,node
PreStop should call prestop when killing a pod,ncdc,1,node
PrivilegedPod should enable privileged commands,derekwaynecarr,0,node
Probing container should *not* be restarted with a /healthz http liveness probe,Random-Liu,0,node
"Probing container should *not* be restarted with a exec ""cat /tmp/health"" liveness probe",Random-Liu,0,node
Probing container should be restarted with a /healthz http liveness probe,Random-Liu,0,node
Probing container should be restarted with a docker exec liveness probe with timeout,tallclair,0,node
"Probing container should be restarted with a exec ""cat /tmp/health"" liveness probe",Random-Liu,0,node
Probing container should have monotonically increasing restart count,Random-Liu,0,node
Probing container with readiness probe should not be ready before initial delay and never restart,Random-Liu,0,node
Probing container with readiness probe that fails should never be ready and never restart,Random-Liu,0,node
Projected optional updates should be reflected in volume,pmorie,1,storage
Projected should be able to mount in a volume regardless of a different secret existing with same name in different namespace,Q-Lee,1,
Projected should be consumable from pods in volume,yujuhong,1,storage
Projected should be consumable from pods in volume as non-root,fabioy,1,storage
Projected should be consumable from pods in volume as non-root with FSGroup,timothysc,1,storage
Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set,xiang90,1,storage
Projected should be consumable from pods in volume with defaultMode set,piosz,1,storage
Projected should be consumable from pods in volume with mappings,lavalamp,1,storage
Projected should be consumable from pods in volume with mappings and Item Mode set,dchen1107,1,storage
Projected should be consumable from pods in volume with mappings and Item mode set,kevin-wangzefeng,1,storage
Projected should be consumable from pods in volume with mappings as non-root,roberthbailey,1,storage
Projected should be consumable from pods in volume with mappings as non-root with FSGroup,ixdy,1,storage
Projected should be consumable in multiple volumes in a pod,ixdy,1,storage
Projected should be consumable in multiple volumes in the same pod,luxas,1,storage
Projected should project all components that make up the projection API,fabioy,1,storage
Projected should provide container's cpu limit,justinsb,1,storage
Projected should provide container's cpu request,smarterclayton,1,storage
Projected should provide container's memory limit,cjcullen,1,storage
Projected should provide container's memory request,spxtr,1,storage
Projected should provide node allocatable (cpu) as default cpu limit if the limit is not set,zmerlynn,1,storage
Projected should provide node allocatable (memory) as default memory limit if the limit is not set,mikedanese,1,storage
Projected should provide podname as non-root with fsgroup,fabioy,1,storage
Projected should provide podname as non-root with fsgroup and defaultMode,gmarek,1,storage
Projected should provide podname only,vishh,1,storage
Projected should set DefaultMode on files,tallclair,1,storage
Projected should set mode on item file,gmarek,1,storage
Projected should update annotations on modification,janetkuo,1,storage
Projected should update labels on modification,xiang90,1,storage
Projected updates should be reflected in volume,yujuhong,1,storage
Proxy * should proxy logs on node,rrati,0,node
Proxy * should proxy logs on node using proxy subresource,rrati,0,node
Proxy * should proxy logs on node with explicit kubelet port,ixdy,1,node
Proxy * should proxy logs on node with explicit kubelet port using proxy subresource,dchen1107,1,node
Proxy * should proxy through a service and a pod,rrati,0,node
Proxy * should proxy to cadvisor,jszczepkowski,1,node
Proxy * should proxy to cadvisor using proxy subresource,roberthbailey,1,node
Reboot each node by dropping all inbound packets for a while and ensure they function afterwards,quinton-hoole,0,node
Reboot each node by dropping all outbound packets for a while and ensure they function afterwards,quinton-hoole,0,node
Reboot each node by ordering clean reboot and ensure they function upon restart,quinton-hoole,0,node
Reboot each node by ordering unclean reboot and ensure they function upon restart,quinton-hoole,0,node
Reboot each node by switching off the network interface and ensure they function upon switch on,quinton-hoole,0,node
Reboot each node by triggering kernel panic and ensure they function upon restart,quinton-hoole,0,node
Redis should create and stop redis servers,tallclair,1,apps
ReplicaSet should serve a basic image on each replica with a private image,pmorie,1,apps
ReplicaSet should serve a basic image on each replica with a public image,krousey,0,apps
ReplicaSet should surface a failure condition on a common issue like exceeded quota,kargakis,0,apps
ReplicationController should serve a basic image on each replica with a private image,jbeda,1,apps
ReplicationController should serve a basic image on each replica with a public image,krousey,1,apps
ReplicationController should surface a failure condition on a common issue like exceeded quota,kargakis,0,apps
Rescheduler should ensure that critical pod is scheduled in case there is no resources available,mtaufen,1,apps
Resource-usage regular resource usage tracking resource tracking for * pods per node,janetkuo,1,
ResourceQuota should create a ResourceQuota and capture the life of a configMap.,tallclair,1,api-machinery
ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class.,derekwaynecarr,0,api-machinery
ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim.,bgrant0607,1,api-machinery
ResourceQuota should create a ResourceQuota and capture the life of a pod.,pmorie,1,api-machinery
ResourceQuota should create a ResourceQuota and capture the life of a replication controller.,rrati,0,api-machinery
ResourceQuota should create a ResourceQuota and capture the life of a secret.,ncdc,1,api-machinery
ResourceQuota should create a ResourceQuota and capture the life of a service.,tallclair,1,api-machinery
ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.,krousey,1,api-machinery
ResourceQuota should verify ResourceQuota with best effort scope.,mml,1,api-machinery
ResourceQuota should verify ResourceQuota with terminating scopes.,ncdc,1,api-machinery
Restart Docker Daemon Network should recover from ip leak,bprashanth,0,node
Restart should restart all nodes and ensure all nodes and pods recover,rrati,0,node
RethinkDB should create and stop rethinkdb servers,mwielgus,1,apps
SSH should SSH to all nodes and run commands,quinton-hoole,0,
SchedulerPredicates validates MaxPods limit number of pods that are allowed to run,gmarek,0,scheduling
SchedulerPredicates validates resource limits of pods that are allowed to run,gmarek,0,scheduling
SchedulerPredicates validates that Inter-pod-Affinity is respected if not matching,rrati,0,scheduling
SchedulerPredicates validates that InterPod Affinity and AntiAffinity is respected if matching,yifan-gu,1,scheduling
SchedulerPredicates validates that InterPodAffinity is respected if matching,kevin-wangzefeng,1,scheduling
SchedulerPredicates validates that InterPodAffinity is respected if matching with multiple Affinities,caesarxuchao,1,scheduling
SchedulerPredicates validates that InterPodAntiAffinity is respected if matching 2,sttts,0,scheduling
SchedulerPredicates validates that NodeAffinity is respected if not matching,fgrzadkowski,0,scheduling
SchedulerPredicates validates that NodeSelector is respected if matching,gmarek,0,scheduling
SchedulerPredicates validates that NodeSelector is respected if not matching,gmarek,0,scheduling
SchedulerPredicates validates that a pod with an invalid NodeAffinity is rejected,deads2k,1,scheduling
SchedulerPredicates validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid,smarterclayton,1,scheduling
SchedulerPredicates validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work,rrati,0,scheduling
SchedulerPredicates validates that required NodeAffinity setting is respected if matching,mml,1,scheduling
SchedulerPredicates validates that taints-tolerations is respected if matching,jlowdermilk,1,scheduling
SchedulerPredicates validates that taints-tolerations is respected if not matching,derekwaynecarr,1,scheduling
Secret should create a pod that reads a secret,luxas,1,apps
Secrets optional updates should be reflected in volume,justinsb,1,apps
Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace,rkouj,0,apps
Secrets should be consumable from pods in env vars,mml,1,apps
Secrets should be consumable from pods in volume,rrati,0,apps
Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set,rrati,0,apps
Secrets should be consumable from pods in volume with defaultMode set,derekwaynecarr,1,apps
Secrets should be consumable from pods in volume with mappings,jbeda,1,apps
Secrets should be consumable from pods in volume with mappings and Item Mode set,quinton-hoole,1,apps
Secrets should be consumable in multiple volumes in a pod,alex-mohr,1,apps
Secrets should be consumable via the environment,ixdy,1,apps
Security Context should support container.SecurityContext.RunAsUser,alex-mohr,1,apps
Security Context should support pod.Spec.SecurityContext.RunAsUser,bgrant0607,1,apps
Security Context should support pod.Spec.SecurityContext.SupplementalGroups,rrati,0,apps
Security Context should support seccomp alpha docker/default annotation,freehan,1,apps
Security Context should support seccomp alpha unconfined annotation on the container,childsb,1,apps
Security Context should support seccomp alpha unconfined annotation on the pod,krousey,1,apps
Security Context should support seccomp default which is unconfined,lavalamp,1,apps
Security Context should support volume SELinux relabeling,thockin,1,apps
Security Context should support volume SELinux relabeling when using hostIPC,alex-mohr,1,apps
Security Context should support volume SELinux relabeling when using hostPID,dchen1107,1,apps
Service endpoints latency should not be very high,cjcullen,1,network
ServiceAccounts should allow opting out of API token automount,bgrant0607,1,
ServiceAccounts should ensure a single API token exists,liggitt,0,network
ServiceAccounts should mount an API token into pods,liggitt,0,network
ServiceLoadBalancer should support simple GET on Ingress ips,bprashanth,0,network
Services should be able to change the type and ports of a service,bprashanth,0,network
Services should be able to create a functioning NodePort service,bprashanth,0,network
Services should be able to up and down services,bprashanth,0,network
Services should check NodePort out-of-range,bprashanth,0,network
Services should create endpoints for unready pods,maisem,0,network
Services should only allow access from service loadbalancer source ranges,madhusudancs,0,network
Services should preserve source pod IP for traffic thru service cluster IP,MrHohn,1,network
Services should prevent NodePort collisions,bprashanth,0,network
Services should provide secure master service,bprashanth,0,network
Services should release NodePorts on delete,bprashanth,0,network
Services should serve a basic endpoint from pods,bprashanth,0,network
Services should serve multiport endpoints from pods,bprashanth,0,network
Services should use same NodePort with same port but different protocols,timothysc,1,network
Services should work after restarting apiserver,bprashanth,0,network
Services should work after restarting kube-proxy,bprashanth,0,network
SimpleMount should be able to mount an emptydir on a container,rrati,0,node
"Spark should start spark master, driver and workers",jszczepkowski,1,apps
"Staging client repo client should create pods, delete pods, watch pods",jbeda,1,api-machinery
StatefulSet Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed,derekwaynecarr,0,apps
StatefulSet Basic StatefulSet functionality Scaling should happen in predictable order and halt if any stateful pod is unhealthy,derekwaynecarr,0,apps
StatefulSet Basic StatefulSet functionality Should recreate evicted statefulset,rrati,0,apps
StatefulSet Basic StatefulSet functionality should allow template updates,rkouj,0,apps
StatefulSet Basic StatefulSet functionality should not deadlock when a pod's predecessor fails,rkouj,0,apps
StatefulSet Basic StatefulSet functionality should provide basic identity,bprashanth,1,apps
StatefulSet Deploy clustered applications should creating a working CockroachDB cluster,rkouj,0,apps
StatefulSet Deploy clustered applications should creating a working mysql cluster,yujuhong,1,apps
StatefulSet Deploy clustered applications should creating a working redis cluster,yifan-gu,1,apps
StatefulSet Deploy clustered applications should creating a working zookeeper cluster,pmorie,1,apps
"Storm should create and stop Zookeeper, Nimbus and Storm worker servers",mtaufen,1,apps
Summary API when querying /stats/summary should report resource usage through the stats api,Random-Liu,1,api-machinery
"Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node",madhusudancs,1,node
Sysctls should reject invalid sysctls,davidopp,1,node
Sysctls should support sysctls,Random-Liu,1,node
Sysctls should support unsafe sysctls which are actually whitelisted,deads2k,1,node
Upgrade cluster upgrade should maintain a functioning cluster,luxas,1,cluster-lifecycle
Upgrade master upgrade should maintain a functioning cluster,xiang90,1,cluster-lifecycle
Upgrade node upgrade should maintain a functioning cluster,zmerlynn,1,cluster-lifecycle
Variable Expansion should allow composing env vars into new env vars,derekwaynecarr,0,node
Variable Expansion should allow substituting values in a container's args,dchen1107,1,node
Variable Expansion should allow substituting values in a container's command,mml,1,node
Volume Disk Format verify disk format type - eagerzeroedthick is honored for dynamically provisioned pv using storageclass,piosz,1,
Volume Disk Format verify disk format type - thin is honored for dynamically provisioned pv using storageclass,alex-mohr,1,
Volume Disk Format verify disk format type - zeroedthick is honored for dynamically provisioned pv using storageclass,jlowdermilk,1,
Volume Placement provision pod on node with matching labels should create and delete pod with the same volume source attach/detach to different worker nodes,mml,0,storage
Volume Placement provision pod on node with matching labels should create and delete pod with the same volume source on the same worker node,mml,0,storage
Volumes Ceph RBD should be mountable,fabioy,1,storage
Volumes CephFS should be mountable,Q-Lee,1,storage
Volumes Cinder should be mountable,cjcullen,1,storage
Volumes ConfigMap should be mountable,rkouj,0,storage
Volumes GlusterFS should be mountable,eparis,1,storage
Volumes NFS should be mountable,rrati,0,storage
Volumes PD should be mountable,caesarxuchao,1,storage
Volumes iSCSI should be mountable,jsafrane,1,storage
Volumes vsphere should be mountable,jsafrane,0,storage
k8s.io/kubernetes/cmd/genutils,rmmh,1,
k8s.io/kubernetes/cmd/hyperkube,jbeda,0,
k8s.io/kubernetes/cmd/kube-apiserver/app/options,nikhiljindal,0,
k8s.io/kubernetes/cmd/kube-controller-manager/app,dchen1107,1,
k8s.io/kubernetes/cmd/kube-proxy/app,luxas,1,
k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/install,ixdy,1,
k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/validation,caesarxuchao,1,
k8s.io/kubernetes/cmd/kubeadm/app/cmd,caesarxuchao,1,
k8s.io/kubernetes/cmd/kubeadm/app/discovery,brendandburns,0,
k8s.io/kubernetes/cmd/kubeadm/app/images,davidopp,1,
k8s.io/kubernetes/cmd/kubeadm/app/master,apprenda,0,
k8s.io/kubernetes/cmd/kubeadm/app/node,apprenda,0,
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons,rkouj,0,
k8s.io/kubernetes/cmd/kubeadm/app/phases/certs,rkouj,0,
k8s.io/kubernetes/cmd/kubeadm/app/phases/certs/pkiutil,ixdy,1,
k8s.io/kubernetes/cmd/kubeadm/app/phases/token,pmorie,1,
k8s.io/kubernetes/cmd/kubeadm/app/preflight,apprenda,0,
k8s.io/kubernetes/cmd/kubeadm/app/util,krousey,1,
k8s.io/kubernetes/cmd/kubeadm/app/util/kubeconfig,apelisse,1,
k8s.io/kubernetes/cmd/kubeadm/app/util/token,sttts,1,
k8s.io/kubernetes/cmd/kubeadm/test/cmd,krousey,0,
k8s.io/kubernetes/cmd/kubelet/app,derekwaynecarr,0,
k8s.io/kubernetes/cmd/libs/go2idl/client-gen/types,caesarxuchao,0,
k8s.io/kubernetes/cmd/libs/go2idl/go-to-protobuf/protobuf,smarterclayton,0,
k8s.io/kubernetes/cmd/libs/go2idl/openapi-gen/generators,davidopp,1,
k8s.io/kubernetes/examples,Random-Liu,0,
k8s.io/kubernetes/hack,thockin,1,
k8s.io/kubernetes/hack/cmd/teststale,thockin,1,
k8s.io/kubernetes/pkg/api,Q-Lee,1,
k8s.io/kubernetes/pkg/api/endpoints,cjcullen,1,
k8s.io/kubernetes/pkg/api/events,jlowdermilk,1,
k8s.io/kubernetes/pkg/api/install,timothysc,1,
k8s.io/kubernetes/pkg/api/service,spxtr,1,
k8s.io/kubernetes/pkg/api/testapi,caesarxuchao,1,
k8s.io/kubernetes/pkg/api/util,rkouj,0,
k8s.io/kubernetes/pkg/api/v1,rkouj,0,
k8s.io/kubernetes/pkg/api/v1/endpoints,rkouj,0,
k8s.io/kubernetes/pkg/api/v1/pod,rkouj,0,
k8s.io/kubernetes/pkg/api/v1/service,rkouj,0,
k8s.io/kubernetes/pkg/api/validation,smarterclayton,1,
k8s.io/kubernetes/pkg/apimachinery/tests,rkouj,0,
k8s.io/kubernetes/pkg/apis/abac/v0,liggitt,0,
k8s.io/kubernetes/pkg/apis/abac/v1beta1,liggitt,0,
k8s.io/kubernetes/pkg/apis/apps/validation,derekwaynecarr,1,
k8s.io/kubernetes/pkg/apis/authorization/validation,erictune,0,
k8s.io/kubernetes/pkg/apis/autoscaling/v1,yarntime,0,
k8s.io/kubernetes/pkg/apis/autoscaling/validation,mtaufen,1,
k8s.io/kubernetes/pkg/apis/batch/v1,vishh,1,
k8s.io/kubernetes/pkg/apis/batch/v2alpha1,jlowdermilk,1,
k8s.io/kubernetes/pkg/apis/batch/validation,erictune,0,
k8s.io/kubernetes/pkg/apis/componentconfig,jbeda,1,
k8s.io/kubernetes/pkg/apis/extensions,bgrant0607,1,
k8s.io/kubernetes/pkg/apis/extensions/v1beta1,madhusudancs,1,
k8s.io/kubernetes/pkg/apis/extensions/validation,nikhiljindal,1,
k8s.io/kubernetes/pkg/apis/policy/validation,deads2k,1,
k8s.io/kubernetes/pkg/apis/rbac/v1alpha1,liggitt,0,
k8s.io/kubernetes/pkg/apis/rbac/validation,erictune,0,
|
||
|
k8s.io/kubernetes/pkg/apis/storage/validation,caesarxuchao,1,
|
||
|
k8s.io/kubernetes/pkg/auth/authorizer/abac,liggitt,0,
|
||
|
k8s.io/kubernetes/pkg/client/chaosclient,deads2k,1,
|
||
|
k8s.io/client-go/tools/leaderelection,xiang90,1,
|
||
|
k8s.io/kubernetes/pkg/client/legacylisters,jsafrane,1,
|
||
|
k8s.io/kubernetes/pkg/client/listers/batch/internalversion,mqliang,0,
|
||
|
k8s.io/kubernetes/pkg/client/listers/extensions/internalversion,eparis,1,
|
||
|
k8s.io/kubernetes/pkg/client/listers/extensions/v1beta1,jszczepkowski,1,
|
||
|
k8s.io/kubernetes/pkg/client/retry,caesarxuchao,1,
|
||
|
k8s.io/kubernetes/pkg/client/tests,Q-Lee,1,
|
||
|
k8s.io/kubernetes/pkg/client/unversioned,justinsb,1,
|
||
|
k8s.io/kubernetes/pkg/cloudprovider/providers/aws,eparis,1,
|
||
|
k8s.io/kubernetes/pkg/cloudprovider/providers/azure,saad-ali,1,
|
||
|
k8s.io/kubernetes/pkg/cloudprovider/providers/cloudstack,roberthbailey,1,
|
||
|
k8s.io/kubernetes/pkg/cloudprovider/providers/gce,yifan-gu,1,
|
||
|
k8s.io/kubernetes/pkg/cloudprovider/providers/mesos,mml,1,
|
||
|
k8s.io/kubernetes/pkg/cloudprovider/providers/openstack,Q-Lee,1,
|
||
|
k8s.io/kubernetes/pkg/cloudprovider/providers/ovirt,dchen1107,1,
|
||
|
k8s.io/kubernetes/pkg/cloudprovider/providers/photon,luomiao,0,
|
||
|
k8s.io/kubernetes/pkg/cloudprovider/providers/vsphere,apelisse,1,
|
||
|
k8s.io/kubernetes/pkg/controller,mikedanese,1,
|
||
|
k8s.io/kubernetes/pkg/controller/bootstrap,mikedanese,0,
|
||
|
k8s.io/kubernetes/pkg/controller/certificates,rkouj,0,
|
||
|
k8s.io/kubernetes/pkg/controller/cloud,rkouj,0,
|
||
|
k8s.io/kubernetes/pkg/controller/cronjob,soltysh,1,
|
||
|
k8s.io/kubernetes/pkg/controller/daemon,Q-Lee,1,
|
||
|
k8s.io/kubernetes/pkg/controller/deployment,kargakis,0,
|
||
|
k8s.io/kubernetes/pkg/controller/deployment/util,kargakis,1,
|
||
|
k8s.io/kubernetes/pkg/controller/disruption,fabioy,1,
|
||
|
k8s.io/kubernetes/pkg/controller/endpoint,mwielgus,1,
|
||
|
k8s.io/kubernetes/pkg/controller/garbagecollector,rmmh,1,
|
||
|
k8s.io/kubernetes/pkg/controller/garbagecollector/metaonly,cjcullen,1,
|
||
|
k8s.io/kubernetes/pkg/controller/job,soltysh,1,
|
||
|
k8s.io/kubernetes/pkg/controller/namespace/deletion,nikhiljindal,1,
|
||
|
k8s.io/kubernetes/pkg/controller/node,gmarek,0,
|
||
|
k8s.io/kubernetes/pkg/controller/podautoscaler,piosz,0,
|
||
|
k8s.io/kubernetes/pkg/controller/podautoscaler/metrics,piosz,0,
|
||
|
k8s.io/kubernetes/pkg/controller/podgc,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/controller/replicaset,fgrzadkowski,0,
|
||
|
k8s.io/kubernetes/pkg/controller/replication,fgrzadkowski,0,
|
||
|
k8s.io/kubernetes/pkg/controller/resourcequota,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/controller/route,gmarek,0,
|
||
|
k8s.io/kubernetes/pkg/controller/service,asalkeld,0,
|
||
|
k8s.io/kubernetes/pkg/controller/serviceaccount,liggitt,0,
|
||
|
k8s.io/kubernetes/pkg/controller/statefulset,justinsb,1,
|
||
|
k8s.io/kubernetes/pkg/controller/ttl,wojtek-t,1,
|
||
|
k8s.io/kubernetes/pkg/controller/volume/attachdetach,luxas,1,
|
||
|
k8s.io/kubernetes/pkg/controller/volume/attachdetach/cache,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/controller/volume/attachdetach/reconciler,jsafrane,1,
|
||
|
k8s.io/kubernetes/pkg/controller/volume/persistentvolume,jsafrane,0,
|
||
|
k8s.io/kubernetes/pkg/credentialprovider,justinsb,1,
|
||
|
k8s.io/kubernetes/pkg/credentialprovider/aws,zmerlynn,1,
|
||
|
k8s.io/kubernetes/pkg/credentialprovider/azure,brendandburns,0,
|
||
|
k8s.io/kubernetes/pkg/credentialprovider/gcp,mml,1,
|
||
|
k8s.io/kubernetes/pkg/fieldpath,childsb,1,
|
||
|
k8s.io/kubernetes/pkg/kubeapiserver,piosz,1,
|
||
|
k8s.io/kubernetes/pkg/kubeapiserver/admission,rkouj,0,
|
||
|
k8s.io/kubernetes/pkg/kubeapiserver/authorizer,rkouj,0,
|
||
|
k8s.io/kubernetes/pkg/kubeapiserver/options,thockin,1,
|
||
|
k8s.io/kubernetes/pkg/kubectl,madhusudancs,1,
|
||
|
k8s.io/kubernetes/pkg/kubectl/cmd,rmmh,1,
|
||
|
k8s.io/kubernetes/pkg/kubectl/cmd/config,asalkeld,0,
|
||
|
k8s.io/kubernetes/pkg/kubectl/cmd/set,erictune,1,
|
||
|
k8s.io/kubernetes/pkg/kubectl/cmd/util,asalkeld,0,
|
||
|
k8s.io/kubernetes/pkg/kubectl/cmd/util/editor,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/kubectl/resource,caesarxuchao,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet,vishh,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/cadvisor,sttts,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/certificate,mikedanese,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/client,tallclair,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/cm,vishh,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/config,mikedanese,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/container,yujuhong,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/custommetrics,kevin-wangzefeng,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/dockershim,zmerlynn,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/envvars,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/eviction,childsb,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/images,caesarxuchao,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/kuberuntime,yifan-gu,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/lifecycle,yujuhong,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/network/cni,freehan,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/network/hairpin,freehan,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/network/hostport,erictune,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/network/kubenet,freehan,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/network/testing,spxtr,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/pleg,yujuhong,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/pod,alex-mohr,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/prober,alex-mohr,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/prober/results,krousey,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/qos,vishh,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/rkt,apelisse,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/rktshim,mml,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/secret,kevin-wangzefeng,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/server,tallclair,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/server/portforward,rkouj,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/server/stats,tallclair,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/server/streaming,caesarxuchao,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/status,mwielgus,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/sysctl,piosz,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/types,jlowdermilk,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/util/cache,timothysc,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/util/csr,apelisse,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/util/format,ncdc,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/util/queue,yujuhong,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/volumemanager,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/kubelet/volumemanager/cache,janetkuo,1,
|
||
|
k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler,tallclair,1,
|
||
|
k8s.io/kubernetes/pkg/master,fabioy,1,
|
||
|
k8s.io/kubernetes/pkg/master/tunneler,jsafrane,1,
|
||
|
k8s.io/kubernetes/pkg/probe/exec,bgrant0607,1,
|
||
|
k8s.io/kubernetes/pkg/probe/http,mtaufen,1,
|
||
|
k8s.io/kubernetes/pkg/probe/tcp,mtaufen,1,
|
||
|
k8s.io/kubernetes/pkg/proxy/config,ixdy,1,
|
||
|
k8s.io/kubernetes/pkg/proxy/healthcheck,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/proxy/iptables,freehan,0,
|
||
|
k8s.io/kubernetes/pkg/proxy/userspace,luxas,1,
|
||
|
k8s.io/kubernetes/pkg/proxy/util,knobunc,0,
|
||
|
k8s.io/kubernetes/pkg/proxy/winuserspace,jbhurat,0,
|
||
|
k8s.io/kubernetes/pkg/quota,sttts,1,
|
||
|
k8s.io/kubernetes/pkg/quota/evaluator/core,yifan-gu,1,
|
||
|
k8s.io/kubernetes/pkg/registry/apps/statefulset,kevin-wangzefeng,1,
|
||
|
k8s.io/kubernetes/pkg/registry/apps/statefulset/storage,jlowdermilk,1,
|
||
|
k8s.io/kubernetes/pkg/registry/authorization/subjectaccessreview,liggitt,1,
|
||
|
k8s.io/kubernetes/pkg/registry/authorization/util,liggitt,1,
|
||
|
k8s.io/kubernetes/pkg/registry/autoscaling/horizontalpodautoscaler,bgrant0607,1,
|
||
|
k8s.io/kubernetes/pkg/registry/autoscaling/horizontalpodautoscaler/storage,dchen1107,1,
|
||
|
k8s.io/kubernetes/pkg/registry/batch/cronjob,soltysh,1,
|
||
|
k8s.io/kubernetes/pkg/registry/batch/cronjob/storage,soltysh,1,
|
||
|
k8s.io/kubernetes/pkg/registry/batch/job,soltysh,1,
|
||
|
k8s.io/kubernetes/pkg/registry/batch/job/storage,soltysh,1,
|
||
|
k8s.io/kubernetes/pkg/registry/certificates/certificates,smarterclayton,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/componentstatus,krousey,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/configmap,janetkuo,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/configmap/storage,wojtek-t,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/endpoint,bprashanth,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/endpoint/storage,wojtek-t,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/event,ixdy,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/event/storage,thockin,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/limitrange,yifan-gu,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/limitrange/storage,spxtr,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/namespace,quinton-hoole,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/namespace/storage,jsafrane,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/node,rmmh,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/node/storage,spxtr,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/persistentvolume,lavalamp,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/persistentvolume/storage,alex-mohr,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/persistentvolumeclaim,bgrant0607,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/persistentvolumeclaim/storage,cjcullen,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/pod,Random-Liu,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/pod/rest,jsafrane,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/pod/storage,wojtek-t,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/podtemplate,thockin,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/podtemplate/storage,spxtr,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/replicationcontroller,freehan,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/replicationcontroller/storage,liggitt,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/resourcequota,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/registry/core/resourcequota/storage,childsb,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/rest,deads2k,0,
|
||
|
k8s.io/kubernetes/pkg/registry/core/secret,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/registry/core/secret/storage,childsb,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/service,madhusudancs,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/service/allocator,jbeda,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/service/allocator/storage,spxtr,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/service/ipallocator,eparis,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/service/ipallocator/controller,mtaufen,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/service/ipallocator/storage,xiang90,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/service/portallocator,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/registry/core/service/portallocator/controller,rkouj,0,
|
||
|
k8s.io/kubernetes/pkg/registry/core/service/storage,cjcullen,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/serviceaccount,caesarxuchao,1,
|
||
|
k8s.io/kubernetes/pkg/registry/core/serviceaccount/storage,smarterclayton,1,
|
||
|
k8s.io/kubernetes/pkg/registry/extensions/controller/storage,jsafrane,1,
|
||
|
k8s.io/kubernetes/pkg/registry/extensions/daemonset,nikhiljindal,1,
|
||
|
k8s.io/kubernetes/pkg/registry/extensions/daemonset/storage,kevin-wangzefeng,1,
|
||
|
k8s.io/kubernetes/pkg/registry/extensions/deployment,dchen1107,1,
|
||
|
k8s.io/kubernetes/pkg/registry/extensions/deployment/storage,timothysc,1,
|
||
|
k8s.io/kubernetes/pkg/registry/extensions/ingress,apelisse,1,
|
||
|
k8s.io/kubernetes/pkg/registry/extensions/ingress/storage,luxas,1,
|
||
|
k8s.io/kubernetes/pkg/registry/extensions/podsecuritypolicy/storage,dchen1107,1,
|
||
|
k8s.io/kubernetes/pkg/registry/extensions/replicaset,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/registry/extensions/replicaset/storage,wojtek-t,1,
|
||
|
k8s.io/kubernetes/pkg/registry/extensions/rest,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/registry/policy/poddisruptionbudget,Q-Lee,1,
|
||
|
k8s.io/kubernetes/pkg/registry/policy/poddisruptionbudget/storage,dchen1107,1,
|
||
|
k8s.io/kubernetes/pkg/registry/rbac/reconciliation,roberthbailey,1,
|
||
|
k8s.io/kubernetes/pkg/registry/rbac/validation,rkouj,0,
|
||
|
k8s.io/kubernetes/pkg/registry/storage/storageclass,brendandburns,1,
|
||
|
k8s.io/kubernetes/pkg/registry/storage/storageclass/storage,wojtek-t,1,
|
||
|
k8s.io/kubernetes/pkg/security/apparmor,bgrant0607,1,
|
||
|
k8s.io/kubernetes/pkg/security/podsecuritypolicy,erictune,0,
|
||
|
k8s.io/kubernetes/pkg/security/podsecuritypolicy/apparmor,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/security/podsecuritypolicy/capabilities,erictune,0,
|
||
|
k8s.io/kubernetes/pkg/security/podsecuritypolicy/group,erictune,0,
|
||
|
k8s.io/kubernetes/pkg/security/podsecuritypolicy/seccomp,rmmh,1,
|
||
|
k8s.io/kubernetes/pkg/security/podsecuritypolicy/selinux,erictune,0,
|
||
|
k8s.io/kubernetes/pkg/security/podsecuritypolicy/sysctl,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/security/podsecuritypolicy/user,erictune,0,
|
||
|
k8s.io/kubernetes/pkg/security/podsecuritypolicy/util,erictune,0,
|
||
|
k8s.io/kubernetes/pkg/securitycontext,erictune,1,
|
||
|
k8s.io/kubernetes/pkg/serviceaccount,liggitt,0,
|
||
|
k8s.io/kubernetes/pkg/ssh,jbeda,1,
|
||
|
k8s.io/kubernetes/pkg/util,jbeda,1,
|
||
|
k8s.io/kubernetes/pkg/util/async,spxtr,1,
|
||
|
k8s.io/kubernetes/pkg/util/bandwidth,thockin,1,
|
||
|
k8s.io/kubernetes/pkg/util/config,jszczepkowski,1,
|
||
|
k8s.io/kubernetes/pkg/util/configz,ixdy,1,
|
||
|
k8s.io/kubernetes/pkg/util/dbus,roberthbailey,1,
|
||
|
k8s.io/kubernetes/pkg/util/env,asalkeld,0,
|
||
|
k8s.io/kubernetes/pkg/util/exec,krousey,1,
|
||
|
k8s.io/kubernetes/pkg/util/goroutinemap,saad-ali,0,
|
||
|
k8s.io/kubernetes/pkg/util/hash,timothysc,1,
|
||
|
k8s.io/kubernetes/pkg/util/i18n,brendandburns,0,
|
||
|
k8s.io/kubernetes/pkg/util/io,mtaufen,1,
|
||
|
k8s.io/kubernetes/pkg/util/iptables,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/util/keymutex,saad-ali,0,
|
||
|
k8s.io/kubernetes/pkg/util/labels,rmmh,1,
|
||
|
k8s.io/kubernetes/pkg/util/limitwriter,deads2k,1,
|
||
|
k8s.io/kubernetes/pkg/util/mount,xiang90,1,
|
||
|
k8s.io/kubernetes/pkg/util/net/sets,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/util/node,liggitt,0,
|
||
|
k8s.io/kubernetes/pkg/util/oom,vishh,0,
|
||
|
k8s.io/kubernetes/pkg/util/parsers,derekwaynecarr,1,
|
||
|
k8s.io/kubernetes/pkg/util/procfs,roberthbailey,1,
|
||
|
k8s.io/kubernetes/pkg/util/slice,quinton-hoole,0,
|
||
|
k8s.io/kubernetes/pkg/util/strings,quinton-hoole,0,
|
||
|
k8s.io/kubernetes/pkg/util/system,mwielgus,0,
|
||
|
k8s.io/kubernetes/pkg/util/tail,zmerlynn,1,
|
||
|
k8s.io/kubernetes/pkg/util/taints,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/util/term,davidopp,1,
|
||
|
k8s.io/kubernetes/pkg/util/threading,roberthbailey,1,
|
||
|
k8s.io/kubernetes/pkg/util/version,danwinship,0,
|
||
|
k8s.io/kubernetes/pkg/volume,saad-ali,0,
|
||
|
k8s.io/kubernetes/pkg/volume/aws_ebs,caesarxuchao,1,
|
||
|
k8s.io/kubernetes/pkg/volume/azure_dd,bgrant0607,1,
|
||
|
k8s.io/kubernetes/pkg/volume/azure_file,maisem,1,
|
||
|
k8s.io/kubernetes/pkg/volume/cephfs,eparis,1,
|
||
|
k8s.io/kubernetes/pkg/volume/cinder,jsafrane,1,
|
||
|
k8s.io/kubernetes/pkg/volume/configmap,derekwaynecarr,1,
|
||
|
k8s.io/kubernetes/pkg/volume/downwardapi,mikedanese,1,
|
||
|
k8s.io/kubernetes/pkg/volume/empty_dir,quinton-hoole,1,
|
||
|
k8s.io/kubernetes/pkg/volume/fc,rrati,0,
|
||
|
k8s.io/kubernetes/pkg/volume/flexvolume,Q-Lee,1,
|
||
|
k8s.io/kubernetes/pkg/volume/flocker,jbeda,1,
|
||
|
k8s.io/kubernetes/pkg/volume/gce_pd,saad-ali,0,
|
||
|
k8s.io/kubernetes/pkg/volume/git_repo,davidopp,1,
|
||
|
k8s.io/kubernetes/pkg/volume/glusterfs,tallclair,1,
|
||
|
k8s.io/kubernetes/pkg/volume/host_path,jbeda,1,
|
||
|
k8s.io/kubernetes/pkg/volume/iscsi,cjcullen,1,
|
||
|
k8s.io/kubernetes/pkg/volume/nfs,justinsb,1,
|
||
|
k8s.io/kubernetes/pkg/volume/photon_pd,luomiao,0,
|
||
|
k8s.io/kubernetes/pkg/volume/projected,kevin-wangzefeng,1,
|
||
|
k8s.io/kubernetes/pkg/volume/quobyte,yujuhong,1,
|
||
|
k8s.io/kubernetes/pkg/volume/rbd,piosz,1,
|
||
|
k8s.io/kubernetes/pkg/volume/secret,rmmh,1,
|
||
|
k8s.io/kubernetes/pkg/volume/util,saad-ali,0,
|
||
|
k8s.io/kubernetes/pkg/volume/util/nestedpendingoperations,freehan,1,
|
||
|
k8s.io/kubernetes/pkg/volume/util/operationexecutor,rkouj,0,
|
||
|
k8s.io/kubernetes/pkg/volume/vsphere_volume,deads2k,1,
|
||
|
k8s.io/kubernetes/plugin/cmd/kube-scheduler/app,deads2k,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/admit,piosz,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/alwayspullimages,ncdc,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/antiaffinity,timothysc,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds,luxas,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/deny,eparis,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/exec,deads2k,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/gc,kevin-wangzefeng,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/imagepolicy,apelisse,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/initialresources,piosz,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/limitranger,ncdc,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/namespace/autoprovision,derekwaynecarr,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/namespace/exists,derekwaynecarr,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/namespace/lifecycle,derekwaynecarr,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/persistentvolume/label,rrati,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/podnodeselector,ixdy,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/resourcequota,fabioy,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/resourcequota/apis/resourcequota/validation,cjcullen,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/security/podsecuritypolicy,maisem,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/securitycontext/scdeny,rrati,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/serviceaccount,liggitt,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/admission/storageclass/default,pmorie,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/auth/authorizer/rbac,rrati,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/auth/authorizer/rbac/bootstrappolicy,mml,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/scheduler,fgrzadkowski,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates,fgrzadkowski,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/priorities,fgrzadkowski,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/scheduler/algorithmprovider,fgrzadkowski,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/scheduler/algorithmprovider/defaults,fgrzadkowski,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/scheduler/api/validation,fgrzadkowski,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/scheduler/core,madhusudancs,1,
|
||
|
k8s.io/kubernetes/plugin/pkg/scheduler/factory,fgrzadkowski,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache,fgrzadkowski,0,
|
||
|
k8s.io/kubernetes/plugin/pkg/scheduler/util,wojtek-t,1,
|
||
|
k8s.io/kubernetes/test/e2e,kevin-wangzefeng,1,
|
||
|
k8s.io/kubernetes/test/e2e/chaosmonkey,pmorie,1,
|
||
|
k8s.io/kubernetes/test/e2e_node,mml,1,
|
||
|
k8s.io/kubernetes/test/e2e_node/system,Random-Liu,0,
|
||
|
k8s.io/kubernetes/test/integration/auth,jbeda,1,
|
||
|
k8s.io/kubernetes/test/integration/client,Q-Lee,1,
|
||
|
k8s.io/kubernetes/test/integration/configmap,Q-Lee,1,
|
||
|
k8s.io/kubernetes/test/integration/evictions,brendandburns,0,
|
||
|
k8s.io/kubernetes/test/integration/examples,maisem,1,
|
||
|
k8s.io/kubernetes/test/integration/garbagecollector,jlowdermilk,1,
|
||
|
k8s.io/kubernetes/test/integration/kubeaggregator,deads2k,1,
|
||
|
k8s.io/kubernetes/test/integration/kubectl,rrati,0,
|
||
|
k8s.io/kubernetes/test/integration/master,fabioy,1,
|
||
|
k8s.io/kubernetes/test/integration/metrics,lavalamp,1,
|
||
|
k8s.io/kubernetes/test/integration/objectmeta,janetkuo,1,
|
||
|
k8s.io/kubernetes/test/integration/openshift,kevin-wangzefeng,1,
|
||
|
k8s.io/kubernetes/test/integration/pods,smarterclayton,1,
|
||
|
k8s.io/kubernetes/test/integration/quota,alex-mohr,1,
|
||
|
k8s.io/kubernetes/test/integration/replicaset,janetkuo,1,
|
||
|
k8s.io/kubernetes/test/integration/replicationcontroller,jbeda,1,
|
||
|
k8s.io/kubernetes/test/integration/scheduler,mikedanese,1,
|
||
|
k8s.io/kubernetes/test/integration/scheduler_perf,roberthbailey,1,
|
||
|
k8s.io/kubernetes/test/integration/secrets,rmmh,1,
|
||
|
k8s.io/kubernetes/test/integration/serviceaccount,deads2k,1,
|
||
|
k8s.io/kubernetes/test/integration/storageclasses,rrati,0,
|
||
|
k8s.io/kubernetes/test/integration/thirdparty,davidopp,1,
|
||
|
k8s.io/kubernetes/test/integration/ttlcontroller,wojtek-t,1,
|
||
|
k8s.io/kubernetes/test/integration/volume,rrati,0,
|
||
|
k8s.io/kubernetes/test/list,maisem,1,
|
||
|
kubelet Clean up pods on node kubelet should be able to delete * pods per node in *.,yujuhong,0,node
|
||
|
kubelet host cleanup with volume mounts Host cleanup after pod using NFS mount is deleted *,bgrant0607,1,
|
||
|
"when we run containers that should cause * should eventually see *, and then evict all of the correct pods",Random-Liu,0,node
|