Commit Graph

217 Commits (3d2ea28c965a5cabe12774b130c9070e6bda2aeb)

Author SHA1 Message Date
Rong Zhang 1aee6ec371
Merge pull request #2903 from riverzhang/swap
Add swap management on the worker node
2018-06-21 22:20:23 +08:00
rongzhang 3232e2743e Add swap management on the worker node 2018-06-21 08:15:01 +00:00
Andreas Krüger c3d8b131db
Merge pull request #2801 from dvazar/bugfix/undefined__network_plugin__variable
Fixed "network_plugin" variable
2018-06-19 10:01:06 +02:00
Aivars Sterns cb0a257349
Merge pull request #2819 from oleh-ozimok/fix-cidr-assert
Fix the "enough network address space" assert
2018-06-06 07:32:16 +03:00
Dmitry f912a4ece5 Fix comparison of AnsibleUnsafeText with int (#2828) 2018-06-04 11:34:10 +03:00
Oleg Ozimok 38f7ba2584 Fix the "enough network address space" assert 2018-05-27 18:01:17 +03:00
dvazar b3f9cae820 Fixed a check for unknown network plugins (cilium & contiv) 2018-05-22 16:43:19 +07:00
dvazar 4b8daa22f6 Fixes #2800 2018-05-19 00:57:09 +07:00
Christopher J. Ruwe c1bc4615fe assert that number of pods on node does not exceed CIDR address range
The number of pods on a given node is determined by the --max-pods=k
directive. When the address space is exhausted, no more pods can be
scheduled even if, from the --max-pods perspective, the node still has
capacity.

The special case that a pod is scheduled and uses the node IP in the
host network namespace is too "soft" to derive a guarantee.

Comparing kubelet_max_pods with kube_network_node_prefix, when given,
makes it possible to assert that the pod limit matches the CIDR address space.
2018-05-16 11:55:46 +00:00
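The assertion can be sketched as below; a minimal sketch assuming Kubespray's kubelet_max_pods and kube_network_node_prefix variables, with illustrative task wording rather than the exact committed task:

```yaml
- name: Assert that --max-pods fits into the node CIDR address range
  assert:
    that:
      # a /prefix CIDR holds 2^(32 - prefix) addresses; subtract the
      # network and broadcast addresses
      - kubelet_max_pods | int <= (2 ** (32 - kube_network_node_prefix | int)) - 2
    msg: "kubelet_max_pods exceeds the usable addresses of the node CIDR"
  when: kubelet_max_pods is defined and kube_network_node_prefix is defined
```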
Matthew Mosesohn 07cc981971
refactor vault role (#2733)
* Move front-proxy-client certs back to kube mount

We want the same CA for all k8s certs

* Refactor vault to use a third party module

The module adds idempotency and reduces some of the repetitive
logic in the vault role

Requires ansible-modules-hashivault on the Ansible node and hvac
on the vault hosts themselves

Add upgrade test scenario
Remove bootstrap-os tags from tasks

* fix upgrade issues

* improve unseal logic

* specify ca and fix etcd check

* Fix initialization check

bump machine size
2018-05-11 19:11:38 +03:00
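A hedged sketch of what the improved unseal logic can look like with the third-party ansible-modules-hashivault module (hashivault_status / hashivault_unseal are modules from that package); vault_unseal_keys is a hypothetical variable and the exact module return shape may differ:

```yaml
- name: Check vault seal status
  hashivault_status:
  register: vault_status

- name: Unseal vault when sealed
  hashivault_unseal:
    keys: "{{ vault_unseal_keys | join(' ') }}"  # vault_unseal_keys: hypothetical list of unseal keys
  when: vault_status.status.sealed
```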
mirwan c3c5817af6 sysctl file should be in defaults so that it can be overridden (#2475)
* sysctl file should be in defaults so that it can be overridden

* Change sysctl_file_path to be consistent with roles/kubernetes/preinstall/defaults/main.yml
2018-04-27 18:50:58 +03:00
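The pattern in question, sketched: variables declared in a role's defaults/main.yml have the lowest precedence, so inventory or group_vars entries of the same name win over them (the value shown is illustrative):

```yaml
# roles/kubernetes/preinstall/defaults/main.yml (sketch)
sysctl_file_path: "/etc/sysctl.d/99-sysctl.conf"
```

A group_vars/all.yml entry setting sysctl_file_path then overrides this default without editing the role.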
Markos Chandras 9168c71359 Revert "Revert "Add openSUSE support" (#2697)" (#2699)
This reverts commit 51f4e6585a.
2018-04-26 12:52:06 +03:00
Matthew Mosesohn 51f4e6585a
Revert "Add openSUSE support" (#2697) 2018-04-23 14:28:24 +03:00
Paul Montero 75950344fb
run_once pre_upgrade tasks which execute on localhost 2018-04-19 11:38:13 -05:00
Matthew Mosesohn f73717ea35
Mount local volume provisioner dirs for containerized kubelet (#2648) 2018-04-12 22:55:13 +03:00
Markos Chandras e42203a13e roles: kubernetes: preinstall: Add SUSE support
Add support for installing package dependencies and refreshing metadata
on SUSE distributions

Co-authored-by: Nirmoy Das <ndas@suse.de>
2018-04-11 17:46:14 +01:00
Daniel Hoherd ca40d51bc6 Fix typos (no logic changes) 2018-04-05 15:54:58 -07:00
Andreas Krüger ba24fe3226
Merge pull request #2570 from avoidik/transfer-cloud-configs
Move cloud config configurations to proper location
2018-04-02 10:31:38 +02:00
Matthew Mosesohn 3004791c64
Add pre-upgrade task for moving credentials file (#2394)
* Add pre-upgrade task for moving credentials file

This reverts commit 7ef9f4dfdd.

* add python interpreter workaround for localhost
2018-04-02 11:19:23 +03:00
avoidik 15efdf0c16 Move credential checks 2018-03-31 03:26:37 +03:00
avoidik ab8760cc83 Move credentials pre-check 2018-03-31 03:24:57 +03:00
Matthew Mosesohn 72a4223884 Write cloud-config during kubelet configuration
This file should only be updated during kubelet upgrade so that
master components are not accidentally restarted first during the
preinstall stage.
2018-03-28 16:26:36 +03:00
Chad Swenson 7d33650019
Merge pull request #2462 from woopstar/coredns-patch
Add CoreDNS support
2018-03-16 18:33:36 -05:00
woopstar e40368ae2b Add CoreDNS support with various fixes
Added CoreDNS to downloads

Updated with labels. Should now work without RBAC too

Fix DNS settings on hosts

Rename CoreDNS service from kube-dns to coredns

Add rotate based on http://edgeofsanity.net/rant/2017/12/20/systemd-resolved-is-broken.html

Updated docs with CoreDNS info

Added labels and fixed minor settings from official yaml file: https://github.com/kubernetes/kubernetes/blob/release-1.9/cluster/addons/dns/coredns.yaml.sed

Added a secondary deployment and a secondary service IP. This is to mitigate DNS timeouts and improve resiliency against failures. See discussion at 'https://github.com/coreos/coreos-kubernetes/issues/641#issuecomment-281174806'

Set the DNS list correctly. Thanks to @whereismyjetpack

Only download KubeDNS or CoreDNS if selected

Move dns cleanup to its own file and import tasks based on dns mode

Fix install of KubeDNS when the dnsmasq_kubedns mode is selected

Add new dns option coredns_dual for dual stack deployment. Added a variable to configure the number of replicas deployed. Updated docs for dual stack deployment. Removed the rotate option in resolv.conf.

Run DNS manifests for CoreDNS and KubeDNS

Set skydns servers on dual stack deployment

Use only one template for CoreDNS dual deployment

Set correct cluster ip for the dns server
2018-03-16 21:51:37 +01:00
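The resulting knobs, sketched as inventory variables; the dns_mode values follow the commit text, while the replica-count variable name is a guess:

```yaml
dns_mode: coredns   # per this commit, also: coredns_dual, kubedns, dnsmasq_kubedns
# coredns_dual adds a secondary deployment and a secondary service IP
# to mitigate DNS timeouts.
dns_replicas: 2     # hypothetical name for the replica-count variable
```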
Aivars Sterns 710295bd2f
Merge pull request #2434 from protomech/feature/azure-vnet-resource-group
add support for azure vnetResourceGroup
2018-03-13 17:42:09 +02:00
Wong Hoi Sing Edison 6402004018 FIXUP #2424: local_provisioner directory should be created only if enabled 2018-03-08 11:57:46 +08:00
Michael Beatty 07657aecf4 add support for azure vnetResourceGroup 2018-03-05 13:40:25 -06:00
Wong Hoi Sing Edison d4c61d2628 Fixup for gce_centos7-flannel-addons 2018-02-21 13:41:25 +08:00
Wong Hoi Sing Edison deef47c923 Upgrade Local Volume Provisioner Addon to v2.0.0 2018-02-21 13:41:25 +08:00
melkosoft f13e76d022 Added cilium support (#2236)
* Added cilium support

* Fix typo in debian test config

* Remove empty lines

* Changed cilium version from <latest> to <v1.0.0-rc3>

* Add missing changes for cilium

* Add cilium to CI pipeline

* Fix wrong file name

* Check kernel version for cilium

* fixed ci error

* fixed cilium-ds.j2 template

* added waiting for cilium pods to run

* Fixed missing EOF

* Fixed trailing spaces

* Fixed trailing spaces

* Fixed trailing spaces

* Fixed too many blank lines

* Updated tolerations,annotations in cilium DS template

* Set cilium_version to iptables-1.9 to see if bug is fixed in CI

* Update cilium image tag to v1.0.0-rc4

* Update Cilium test case CI vars filenames

* Add optional prometheus flag, adjust initial readiness delay

* Update README.md with cilium info
2018-02-16 21:37:47 -06:00
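The kernel guard mentioned in the bullets could look like the sketch below, assuming cilium's documented 4.8 minimum kernel and Ansible's version test; this is illustrative, not the exact Kubespray task:

```yaml
- name: Check kernel version for cilium
  assert:
    that:
      - ansible_kernel.split('-')[0] is version('4.8', '>=')
    msg: "cilium requires a Linux kernel >= 4.8"
  when: kube_network_plugin == 'cilium'
```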
Antoine Legrand 76a89039ad
Merge pull request #2285 from jasdeep-hundal/do_not_install_python_apt
Remove redundant python-apt install
2018-02-15 17:04:08 +01:00
jasdeep-hundal f57abae01e Remove redundant python-apt install
Ansible automatically installs the python-apt package when using
the 'apt' Ansible module, if python-apt is not present. This patch
removes the (unneeded) explicit installation in the Kubespray
'preinstall' role.
2018-02-08 18:59:37 -08:00
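In other words, a task shaped like the sketch below can simply be deleted, because the apt module bootstraps python-apt on its own when it is missing:

```yaml
# Redundant: the apt module installs python-apt on demand.
- name: Install python-apt
  apt:
    name: python-apt
    state: present
```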
rong.zhang 47adf4bce6 Disable installing the epel-release rpm on CentOS/RedHat
1. Disable installing the epel-release rpm on CentOS/RedHat
2. Use yum to install epel-release
2018-02-07 14:58:50 +08:00
Matthew Mosesohn 2df4b6c5d2
Rename default_resolver to cloud_resolver (#2209)
Cloud resolvers are mandatory for hosts on GCE and OpenStack
clouds. The 8.8.8.8 alternative resolver was dropped because
there is already a default nameserver. The new var name
reflects the purpose better.

Also restart apiserver when modifying dns settings.
2018-01-31 00:26:07 +03:00
Matthew Mosesohn dc6a17e092
Use include/import tasks (#2192)
import_tasks will consume far less memory, so it should be
used whenever it is compatible.
2018-01-29 14:37:48 +03:00
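The trade-off, sketched: import_tasks is resolved statically at parse time and is cheaper in memory, while include_tasks stays dynamic and is still needed when the file name (or a loop) is only known at runtime; file names here are illustrative:

```yaml
- import_tasks: install.yml   # static: parsed up front, lower memory use

- include_tasks: "{{ ansible_os_family | lower }}.yml"   # dynamic: file name computed at runtime
```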
Brad Beam 0c8bed21ee
Merge pull request #2019 from chadswen/disable-api-insecure-port
Support for disabling apiserver insecure port (the sequel)
2018-01-24 19:58:53 -06:00
Matthew Mosesohn bf1411060e Add optional manual dns_mode (#2178) 2018-01-23 14:28:42 +01:00
Brad Beam fed7b97dcb
Merge pull request #2030 from mattymo/removerbaccheck
Remove RBAC from boolean checks
2017-12-06 23:41:13 -06:00
Spencer Smith c4458c9d9a
Merge pull request #1997 from mrbobbytables/feature-keepalived-cloud-provider
Add minimal keepalived-cloud-provider support
2017-12-06 23:28:27 -05:00
Kuldip Madnani fe036cbe77 Adding changes to handle updating of the yum management cache in RHEL. (#2026)
* Adding changes to handle updating of the yum cache in RHEL.

* Removed the redundant spaces
2017-12-06 09:00:41 +00:00
Matthew Mosesohn 952ec65a40 Remove RBAC from boolean checks 2017-12-06 11:57:40 +03:00
Chad Swenson b8788421d5 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled).

Rework of #1937 with kubeadm support

Also, fixed an issue in `kubeadm-migrate-certs` where the old apiserver cert was copied as the kubeadm key
2017-12-05 09:13:45 -06:00
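As an inventory setting, a sketch using the variable named in the commit message:

```yaml
kube_apiserver_insecure_port: 0   # 0 disables the insecure HTTP listener
```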
Brad Beam c2347db934
Merge pull request #1953 from chadswen/dashboard-refactor
Kubernetes Dashboard v1.7.1 Refactor
2017-12-05 08:50:55 -06:00
unclejack e5d353d0a7 contiv network support (#1914)
* Add Contiv support

Contiv is a network plugin for Kubernetes and Docker. It supports
vlan/vxlan/BGP/Cisco ACI technologies. It supports firewall policies,
multiple networks and bridging pods onto physical networks.

* Update contiv version to 1.1.4

Update contiv version to 1.1.4 and added SVC_SUBNET in contiv-config.

* Load openvswitch module to workaround on CentOS7.4

* Set contiv cni version to 0.1.0

Correct contiv CNI version to 0.1.0.

* Use kube_apiserver_endpoint for K8S_API_SERVER

Use kube_apiserver_endpoint as K8S_API_SERVER so that contiv talks
to an available endpoint whether or not there is a loadbalancer.

* Make contiv use its own etcd

Before this commit, contiv used an etcd proxy mode to the k8s etcd.
This works fine when the etcd hosts are co-located with the contiv etcd
proxy; however, the k8s peering certs exist only in the etcd group, so
the etcd-proxy is not able to peer with the k8s etcd on the etcd group.
In addition, the netplugin always tries to find the etcd endpoint on
localhost, which causes problems for all netplugins not running on
etcd group nodes.
This commit makes contiv use its own etcd, separate from the k8s one.
On kube-master nodes (where net-master runs) it runs in leader mode,
and on all other nodes it runs in proxy mode.

* Use cp instead of rsync to copy cni binaries

Since rsync has been removed from hyperkube, this commit changes it
to use cp instead.

* Make contiv-etcd able to run on master nodes

* Add rbac_enabled flag for contiv pods

* Add contiv into CNI network plugin lists

* migrate contiv test to tests/files

Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>

* Add required rules for contiv netplugin

* Better handling of the json return of fwdMode

* Make contiv etcd port configurable

* Use default var instead of templating

* roles/download/defaults/main.yml: use contiv 1.1.7

Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
2017-11-29 14:24:16 +00:00
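Selecting the plugin, sketched as inventory settings; kube_network_plugin is the standard Kubespray variable, while the etcd port variable name is hypothetical:

```yaml
kube_network_plugin: contiv
contiv_etcd_listen_port: 6666   # hypothetical name for the configurable contiv etcd port
```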
Bogdan Dobrelya 8aafe64397
Defaults for apiserver_loadbalancer_domain_name (#1993)
* Defaults for apiserver_loadbalancer_domain_name

When loadbalancer_apiserver is defined, use the
apiserver_loadbalancer_domain_name with a given default value.

Fix inconsistencies between checking if apiserver_loadbalancer_domain_name
is defined and using it with a default value provided at once.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>

* Define defaults for LB modes in common defaults

Adjust the defaults for apiserver_loadbalancer_domain_name and
loadbalancer_apiserver_localhost to come from a single source,
kubespray-defaults. This removes some confusion and simplifies the code.

Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-11-23 16:15:48 +00:00
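A sketch of the single-source default described above, as it might appear in kubespray-defaults; the values shown are illustrative:

```yaml
# kubespray-defaults (sketch)
apiserver_loadbalancer_domain_name: "lb-apiserver.kubernetes.local"
loadbalancer_apiserver_localhost: true   # local LB unless an external loadbalancer_apiserver is defined
```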
Bob Killen 2140303fcc
add minimal keepalived-cloud-provider support 2017-11-23 08:43:36 -05:00
Chad Swenson 0c6f172e75 Kubernetes Dashboard v1.7.1 Refactor
This version required completely changing the previous access model for the dashboard, but it's a change for the better. Docs were updated.

* New login/auth options that use apiserver auth proxying by default
* Requires RBAC in `authorization_modes`
* Only serves over https
* No longer available at https://first_master:6443/ui until the apiserver is updated with the https proxy URL:
* Can access it at https://first_master:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login, where you will be prompted for credentials
* Or you can run 'kubectl proxy' from your local machine and access the dashboard in your browser at: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
* It is recommended to access the dashboard from behind a gateway that enforces an authentication token; details and other access options here: https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above
2017-11-15 10:05:48 -06:00
Matthew Mosesohn f9b68a5d17
Revert "Support for disabling apiserver insecure port" (#1974) 2017-11-14 13:41:28 +00:00
Chad Swenson 0c7e1889e4 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:

1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
2. Add apiserver client cert/key to the "Wait for apiserver up" checks
3. Update apiserver liveness probe to use HTTPS ports
4. Set `kube_api_anonymous_auth` to true to allow liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key unfortunately)
5. RBAC has to be enabled. Anonymous requests are in the `system:unauthenticated` group which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.

Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check.

Options:

1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
3. Use basic auth headers for the liveness probe (I really don't like this, it adds a new dependency on basic auth which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)

Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
2017-11-06 14:01:10 -06:00
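Option 1 as inventory settings, sketched with the variables named in the commit message:

```yaml
kube_apiserver_insecure_port: 0
kube_api_anonymous_auth: true   # lets the liveness probe reach /healthz over HTTPS
authorization_modes: ["RBAC"]   # a default ClusterRoleBinding grants system:unauthenticated access to /healthz
```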
Stanislav Makar 33adb334cd Fix openstack tenant id variable name (#1932) 2017-11-05 08:40:41 +00:00