Commit Graph

52 Commits (0cb51e753070424703bcc12c1d9026c1dc966d26)

Author SHA1 Message Date
woopstar e40368ae2b Add CoreDNS support with various fixes
Added CoreDNS to downloads

Updated with labels. Should now work without RBAC too

Fix DNS settings on hosts

Rename CoreDNS service from kube-dns to coredns

Add rotate based on http://edgeofsanity.net/rant/2017/12/20/systemd-resolved-is-broken.html

Updated docs with CoreDNS info

Added labels and fixed minor settings from official yaml file: https://github.com/kubernetes/kubernetes/blob/release-1.9/cluster/addons/dns/coredns.yaml.sed

Added a secondary deployment and a secondary service IP. This is to mitigate DNS timeouts and provide higher resiliency against failures. See the discussion at https://github.com/coreos/coreos-kubernetes/issues/641#issuecomment-281174806

Set the DNS list correctly. Thanks to @whereismyjetpack

Only download KubeDNS or CoreDNS if selected

Move dns cleanup to its own file and import tasks based on dns mode

Fix install of KubeDNS when the dnsmasq_kubedns mode is selected

Add new dns option coredns_dual for dual-stack deployment. Added a variable to configure the number of replicas deployed. Updated docs for dual-stack deployment. Removed the rotate option in resolv.conf.

Run DNS manifests for CoreDNS and KubeDNS

Set skydns servers on dual stack deployment

Use only one template for CoreDNS dual deployment

Set the correct cluster IP for the DNS server
2018-03-16 21:51:37 +01:00
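For context on the dual-stack option introduced in the commit above, here is a minimal inventory sketch of what it implies. `dns_mode` and the `coredns_dual` value come from this changeset; the replica-count variable name is hypothetical (the commit only says such a variable was added).

```yaml
# group_vars sketch for the dual CoreDNS deployment described above
dns_mode: coredns_dual        # deploy a primary and a secondary CoreDNS service
coredns_replicas: 2           # hypothetical name for the new replica knob
```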
Antoine Legrand 37cfd289d8
Merge pull request #2248 from hswong3i/dashboard.yml.j2
Dashboard template should not be suffixed with .yml.j2
2018-02-06 11:25:02 +01:00
Wong Hoi Sing Edison 4ad53339f6 KubeDNS template should not be suffixed with .yml.j2 2018-02-05 16:26:54 +08:00
Wong Hoi Sing Edison a4d3da6a8e Dashboard template should not be suffixed with .yml.j2 2018-02-05 16:18:21 +08:00
Matthew Mosesohn dc6a17e092
Use include/import tasks (#2192)
import_tasks will consume far less memory, so it should be
used whenever it is compatible.
2018-01-29 14:37:48 +03:00
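A minimal illustration of the change above (file names are hypothetical): `import_tasks` is resolved statically at parse time and consumes far less memory than the dynamic `include_tasks`, so it is preferred whenever the file name does not depend on runtime data.

```yaml
# Dynamic include, evaluated at runtime; only needed when the file name
# depends on runtime data:
- include_tasks: "{{ dns_mode }}.yml"

# Static import, resolved at parse time (far lower memory footprint):
- import_tasks: kubedns.yml
```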
Chad Swenson b8788421d5 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled).

Rework of #1937 with kubeadm support

Also, fixed an issue in `kubeadm-migrate-certs` where the old apiserver cert was copied as the kubeadm key
2017-12-05 09:13:45 -06:00
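A one-line inventory sketch of the setting this commit enables, using the variable named in the message:

```yaml
# Turn off the apiserver's insecure HTTP listener entirely.
kube_apiserver_insecure_port: 0
```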
Chad Swenson a89ee8c406 Add ability to use custom cert secret instead of init container provisioned self-signed certs 2017-11-15 10:05:52 -06:00
Chad Swenson 0c6f172e75 Kubernetes Dashboard v1.7.1 Refactor
This version required completely changing the previous dashboard access model, but it's a change for the better. Docs were updated.

* New login/auth options that use apiserver auth proxying by default
* Requires RBAC in `authorization_modes`
* Only serves over https
* No longer available at https://first_master:6443/ui until the apiserver is updated with the https proxy URL
* You can access it at https://first_master:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login, where you will be prompted for credentials
* Or you can run 'kubectl proxy' from your local machine and access the dashboard in your browser at: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
* It is recommended to access the dashboard from behind a gateway that enforces an authentication token; details and other access options are here: https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above
2017-11-15 10:05:48 -06:00
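A hedged inventory snippet of what "Requires RBAC in `authorization_modes`" translates to; other authorization modes may be listed alongside it in a real inventory.

```yaml
# The reworked dashboard relies on apiserver auth proxying, so RBAC must be
# among the enabled authorization modes.
authorization_modes: ["RBAC"]
```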
Matthew Mosesohn f9b68a5d17
Revert "Support for disabling apiserver insecure port" (#1974) 2017-11-14 13:41:28 +00:00
Chad Swenson 0c7e1889e4 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:

1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
2. Add apiserver client cert/key to the "Wait for apiserver up" checks
3. Update apiserver liveness probe to use HTTPS ports
4. Set `kube_api_anonymous_auth` to true to allow liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key unfortunately)
5. RBAC has to be enabled. Anonymous requests are in the `system:unauthenticated` group which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.

Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check.

Options:

1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
3. Use basic auth headers for the liveness probe (I really don't like this, it adds a new dependency on basic auth which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)

Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
2017-11-06 14:01:10 -06:00
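A sketch of change 3 from the list above: the apiserver static-pod liveness probe pointed at the secure port over HTTPS. Host and port values are illustrative (the real template uses `kube_apiserver_port`); anonymous auth plus the default RBAC binding is what lets the unauthenticated probe reach /healthz.

```yaml
livenessProbe:
  httpGet:
    host: 127.0.0.1
    port: 6443          # kube_apiserver_port; illustrative value
    path: /healthz
    scheme: HTTPS       # probe the secure port instead of the insecure one
  initialDelaySeconds: 30
  timeoutSeconds: 10
```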
Matthew Mosesohn ec53b8b66a Move cluster roles and system namespace to new role
This should be done after kubeconfig is set for admin and
before network plugins are up.
2017-10-26 14:36:05 +01:00
Matthew Mosesohn 33c4d64b62 Make ClusterRoleBinding to admit all nodes with right cert (#1861)
This is to work around #1856, which can occur when the kubelet
hostname and the resolvable hostname (or cloud instance name)
do not match.
2017-10-24 17:05:58 +01:00
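A hedged sketch of the kind of binding this commit adds (the binding name is hypothetical): any client presenting a certificate in the `system:nodes` group is granted the node role, regardless of whether its hostname matches the Node object name.

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubespray:system:node          # hypothetical binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:nodes                 # matched by the cert's organization field
```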
Aivars Sterns 9c86da1403 Normalize tags in all places to prepare for tag fixing in future (#1739) 2017-10-05 08:43:04 +01:00
Matthew Mosesohn bd272e0b3c Upgrade to kubeadm (#1667)
* Enable upgrade to kubeadm

* fix kubedns upgrade

* try upgrade route

* use init/upgrade strategy for kubeadm and ignore kubedns svc

* Use bin_dir for kubeadm

* delete more secrets

* fix waiting for terminating pods

* Manually enforce kube-proxy for kubeadm deploy

* remove proxy. update to kubeadm 1.8.0rc1
2017-09-26 10:38:58 +01:00
Matthew Mosesohn b294db5aed fix apply for netchecker upgrade (#1659)
* fix apply for netchecker upgrade and graceful upgrade

* Speed up daemonset upgrades. Make check wait for ds upgrades.
2017-09-15 13:19:37 +01:00
Matthew Mosesohn 6744726089 kubeadm support (#1631)
* kubeadm support

* move k8s master to a subtask
* disable k8s secrets when using kubeadm
* fix etcd cert serial var
* move simple auth users to master role
* make a kubeadm-specific env file for kubelet
* add non-ha CI job

* change ci boolean vars to json format

* fixup

* Update create-gce.yml

* Update create-gce.yml

* Update create-gce.yml
2017-09-13 19:00:51 +01:00
Matthew Mosesohn 75b13caf0b Fix kube-apiserver status checks when changing insecure bind addr (#1633) 2017-09-09 23:41:48 +03:00
Matthew Mosesohn 649388188b Fix netchecker update side effect (#1644)
* Fix netchecker update side effect

kubectl apply should only be used on resources created
with kubectl apply. To work around this, we apply
the old manifest before upgrading it.

* Update 030_check-network.yml
2017-09-09 23:38:38 +03:00
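A minimal sketch of the workaround described above (task names and file paths are hypothetical): the previous manifest is applied first so the resource carries the last-applied-configuration annotation that `kubectl apply` relies on, then the updated manifest is applied.

```yaml
- name: Netchecker | Apply the previous manifest so kubectl apply owns the resource
  command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/netchecker-server-old.yml"

- name: Netchecker | Apply the updated manifest
  command: "{{ bin_dir }}/kubectl apply -f {{ kube_config_dir }}/netchecker-server-deployment.yml"
```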
Matthew Mosesohn 9fa1873a65 Add kube dashboard, enabled by default (#1643)
* Add kube dashboard, enabled by default

Also add rbac role for kube user

* Update main.yml
2017-09-09 23:38:03 +03:00
Matthew Mosesohn d279d145d5 Fix non-rbac deployment of resources as a list (#1613)
* Use kubectl apply instead of create/replace

Disable checks for existing resources to speed up execution.

* Fix non-rbac deployment of resources as a list

* Fix autoscaler tolerations field

* set all kube resources to state=latest

* Update netchecker and weave
2017-09-05 08:23:12 +03:00
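A sketch of what "set all kube resources to state=latest" looks like in a task, assuming kubespray's bundled `kube` module behaves as the commit describes; the loop variables and file names are illustrative.

```yaml
- name: Kubernetes Apps | Start resources
  kube:
    name: "{{ item.item.name }}"
    kubectl: "{{ bin_dir }}/kubectl"
    resource: "{{ item.item.type }}"
    filename: "{{ kube_config_dir }}/{{ item.item.file }}"
    state: latest        # applies the manifest instead of create/replace
  with_items: "{{ manifests.results }}"
```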
Matthew Mosesohn 660282e82f Make daemonsets upgradeable (#1606)
Canal will be covered by a separate PR
2017-09-04 11:30:01 +03:00
Brad Beam 8b151d12b9 Adding yamllinter to ci steps (#1556)
* Adding yaml linter to ci check

* Minor linting fixes from yamllint

* Changing CI to install python pkgs from requirements.txt

- adding in a secondary requirements.txt for tests
- moving yamllint to tests requirements
2017-08-24 12:09:52 +03:00
Matthew Mosesohn df28db0066 Fix cert and netchecker upgrade issues (#1543)
* Bump tag for upgrade CI, fix netchecker upgrade

netchecker-server was changed from a pod to a deployment, so
we need an upgrade hook for it.

CI now uses v2.1.1 as a basis for upgrade.

* Fix upgrades for certs from non-rbac to rbac
2017-08-18 15:46:22 +03:00
Brad Beam af007c7189 Fixing netchecker-server type - pod => deployment (#1509) 2017-08-14 18:43:56 +03:00
jwfang a8e6a0763d run netchecker-server with list pods 2017-07-17 19:29:59 +08:00
jwfang e1386ba604 only patch system:kube-dns role for old dns 2017-07-17 19:29:59 +08:00
jwfang 83deecb9e9 Revert "no need to patch system:kube-dns"
This reverts commit c2ea8c588aa5c3879f402811d3599a7bb3ccab24.
2017-07-17 19:29:59 +08:00
jwfang d8dcb8f6e0 no need to patch system:kube-dns 2017-07-17 19:29:59 +08:00
jwfang 092bf07cbf basic rbac support 2017-07-17 19:29:59 +08:00
Gregory Storme 266ca9318d Use the kube_apiserver_insecure_port variable instead of static 8080 2017-06-12 09:20:59 +02:00
Seungkyu Ahn d5516a4ca9 Make kubedns up to date
Update kube-dns version to 1.14.2
https://github.com/kubernetes/kubernetes/pull/45684
2017-06-27 00:57:29 +00:00
Sergii Golovatiuk 45044c2d75 Reschedule netchecker-server in case of HW failure.
A Pod object is not rescheduled by Kubernetes. This means that if the node
running netchecker-server goes down, netchecker-server won't be scheduled
anywhere else. This commit changes netchecker-server to a
Deployment, so it will be scheduled on other nodes in
case of failures.
2017-04-14 10:49:16 +02:00
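A hedged sketch of the shape of this change; the image, port, and labels follow the netchecker project but are not taken verbatim from the commit. Wrapping the server pod in a Deployment lets the controller reschedule it onto a healthy node.

```yaml
apiVersion: extensions/v1beta1   # Deployment API group in use at the time
kind: Deployment
metadata:
  name: netchecker-server
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: netchecker-server
    spec:
      containers:
        - name: netchecker-server
          image: mirantis/k8s-netchecker-server:latest   # illustrative image tag
          ports:
            - containerPort: 8081                        # illustrative port
```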
Sergii Golovatiuk 2670eefcd4 Refactoring resolv.conf
- Renaming templates for netchecker
- Add dnsPolicy: ClusterFirstWithHostNet to kube-proxy

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-04-05 09:28:01 +02:00
Sergii Golovatiuk 1cfe0beac0 Set ClusterFirstWithHostNet for Pods with hostnetwork: true
In Kubernetes 1.6, ClusterFirstWithHostNet was added as an option. With
it, the kubelet generates the pod's resolv.conf based on its own
resolv.conf. However, this doesn't carry over 'options', so the proper
solution requires some investigation.

This patch sets the same resolv.conf for the kubelet as the host

Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
2017-04-04 16:34:13 +02:00
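For reference, a minimal pod-spec fragment showing the option these two commits are about: pods running on the host network need ClusterFirstWithHostNet (available since Kubernetes 1.6) to still resolve cluster-internal names.

```yaml
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # use cluster DNS even with hostNetwork
```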
Aleksandr Didenko 3a39904011 Move calico-policy-controller into separate role
By default, Calico CNI does not create any network access policies
or profiles when 'policy' is enabled in the CNI config, and without any
policies/profiles, network access to/from pods is blocked.

K8s-related policies are created by calico-policy-controller in
that case, so we need to start it as soon as possible, before any
real workloads.

This patch also fixes kube-api port in calico-policy-controller
yaml template.

Closes #1132
2017-03-17 11:21:52 +01:00
Matthew Mosesohn 9cb12cf250 Add autoscalers for dnsmasq and kubedns
By default, kubedns and dnsmasq scale when installed.
Dnsmasq is no longer a daemonset; it is now a deployment.
Kubedns is no longer a ReplicationController; it is now a deployment.
The minimum number of replicas is two (to enable rolling updates).

Reduced memory requirements for dnsmasq and kubedns
2017-03-02 13:44:22 +03:00
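A hedged sketch of the proportional-scaling parameters such an autoscaler consumes, in the ConfigMap format used by the cluster-proportional-autoscaler; only the minimum of two replicas comes from the commit message, the other numbers and the name are illustrative.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubedns-autoscaler          # hypothetical name
  namespace: kube-system
data:
  linear: '{"coresPerReplica": 256, "nodesPerReplica": 16, "min": 2}'
```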
Andrew Greenwood ca9ea097df Cleanup legacy syntax, spacing, files all to yml
Migrate older inline= syntax to pure YAML syntax for module args, so as to be consistent with most of the rest of the tasks
Clean up some spacing in various files
Rename some files from .yaml to .yml for consistency
2017-02-17 16:22:34 -05:00
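An illustration of the syntax migration described above; the module, file name, and destination path are hypothetical.

```yaml
# Old inline form:
#   - copy: src=netchecker-server.yml dest={{ kube_config_dir }}/netchecker-server.yml
# Pure YAML module args, consistent with the rest of the tasks:
- name: Copy manifest
  copy:
    src: netchecker-server.yml
    dest: "{{ kube_config_dir }}/netchecker-server.yml"
```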
Matthew Mosesohn 3c713a3f53 Fix upgrade for all daemonset type resources
Daemonsets cannot be simply upgraded through a single API call,
regardless of any kubectl documentation. The resource must be
purged and then recreated in order to make any changes.
2017-02-08 18:16:00 +03:00
Greg Althaus 61dab8dc0b Should only check for api-server running on the master.
If this runs on other nodes, it will fail the playbook.
2017-01-17 15:57:34 -06:00
Alexander Block 1d2a18b355 Introduce dns_mode and resolvconf_mode and implement docker_dns mode
Also update reset.yml to do more dns/network related cleanup.
2017-01-05 23:38:51 +01:00
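A hedged inventory sketch of the two new knobs; apart from `docker_dns` (named in the commit) and the DNS modes mentioned elsewhere in this log, the example values are assumptions about the role's defaults.

```yaml
dns_mode: kubedns             # e.g. dnsmasq_kubedns or kubedns
resolvconf_mode: docker_dns   # the new mode implemented by this change
```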
Bogdan Dobrelya d8a2941e9e Fix cert paths for flannel/calico policy apps
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
2017-01-03 16:12:54 +01:00
Matthew Mosesohn d314174149 Add wait for kube-apiserver to kubernetes-apps
Fixes #777
2016-12-21 15:39:39 +03:00
Bogdan Dobrelya c75f394707 Address standalone kubelet config case
Also place the kube_*_config_dir and kube_namespace vars in global
vars and do not repeat them, for better code maintainability and UX.

Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
2016-12-13 16:35:53 +01:00
Bogdan Dobrelya 8cc84e132a Add tags
Add tags to allow more granular tasks filtering.
Add generator script for MD formatted tags found.
Add docs for tags how-to.

Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
2016-12-09 12:14:28 +01:00
Bogdan Dobrelya b7692fad09 Add advanced net check for DNS K8s app
* Add an option to deploy a K8s app to test e2e network connectivity
  and cluster DNS resolution via KubeDNS for hostnet/simple pods
  (defaults to false).
* Parametrize existing k8s apps templates with kube_namespace and
  kube_config_dir instead of hardcoding them.
* For CoreOS, ensure nameservers from the inventory are put in
  first place, to allow hostnet pods connectivity via short names
  or FQDN and hostnet agents to pass as well, if netchecker is
  deployed.

Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
2016-11-28 13:23:25 +01:00
Bogdan Dobrelya 764a2fd5a8 Merge pull request #588 from adidenko/canal-support
Adding support for canal network plugin
2016-11-09 10:31:56 +01:00
Aleksandr Didenko 4ece73d432 Fix idempotency of calico-policy-controller rs
We need to specify kube resource type and name in order to avoid
playbook errors related to k8s resource duplication.
2016-11-08 12:59:18 +01:00
Aleksandr Didenko 60a217766f Add ConfigMap for basic configuration options
Container settings were moved from the daemonset yaml to a separate
configmap.
2016-11-08 12:57:34 +01:00
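A hedged sketch of the pattern this commit introduces; the map name is suggested by the surrounding canal changes and the key/value is hypothetical. Container settings live in a ConfigMap and are referenced from the daemonset instead of being hard-coded in its yaml.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: canal-config               # hypothetical name
data:
  masquerade: "true"               # hypothetical key/value
# The daemonset's container spec then references the map rather than
# hard-coded values, e.g.:
#   env:
#     - name: MASQUERADE
#       valueFrom:
#         configMapKeyRef:
#           name: canal-config
#           key: masquerade
```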
Aleksandr Didenko 309240cd6f Adding support for canal network plugin
This patch provides support for Canal network plugin installation
as a self-hosted app; see the following link for details:

https://github.com/tigera/canal/tree/master/k8s-install
2016-11-08 11:04:01 +01:00
Artem Roma 3919d666c1 Add possibility to enable network policy via Calico network controller
The requirements for the network policy feature are described here [1]. In
order to enable it, appropriate configuration must be provided to the CNI
plug-in and the Calico policy controller must be set up. Besides that, the
corresponding extensions need to be enabled in the k8s API.

Now, to turn on the feature, the user can define the `enable_network_policy`
customization variable for Ansible.

[1] http://kubernetes.io/docs/user-guide/networkpolicies/
2016-10-10 17:22:12 +03:00
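A hedged sketch of turning the feature on from the Ansible side; only the variable name comes from the commit, and the comment summarizes what the rest of this changeset wires up.

```yaml
# group_vars: enable the NetworkPolicy feature. With this set, the Calico CNI
# config gains a "policy" section and the Calico policy controller is deployed
# (see the calico-policy-controller role change further up this log).
enable_network_policy: true
```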