Commit Graph

22 Commits (f17f4ff963f3cfd0953ece1bd30036485493e26e)

Author SHA1 Message Date
Matthew Mosesohn 4638acfe81 Retry applying podsecurity policies (#4279) 2019-02-24 22:50:55 -08:00
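The retry above maps onto Ansible's standard `until`/`retries` pattern. A minimal sketch, assuming a plain `kubectl apply` task (task name, file path, and retry counts are illustrative, not the actual kubespray task):

```yaml
- name: Apply PodSecurityPolicy manifests
  command: "kubectl apply -f {{ kube_config_dir }}/podsecuritypolicy.yml"
  register: psp_apply
  until: psp_apply is succeeded   # retry while the apply keeps failing
  retries: 4
  delay: 5
```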
Jeff Bornemann c41c1e771f OCI Cloud Provider Update (#4186)
* OCI subnet AD 2 is not required for CCM >= 0.7.0

Reorganize OCI provider to generate configuration, rather than pull

Add pull secret option to OCI cloud provider

* Updated oci example to document new parameters
2019-02-11 12:08:53 -08:00
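For context on the "generate configuration" change above, a sketch of an oci-cloud-controller-manager `cloud-provider.yaml`; all OCIDs are placeholders and the exact keys kubespray templates may differ:

```yaml
auth:
  region: us-phoenix-1
  tenancy: ocid1.tenancy.oc1..example
  user: ocid1.user.oc1..example
  key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  fingerprint: "aa:bb:cc:dd:..."
compartment: ocid1.compartment.oc1..example
vcn: ocid1.vcn.oc1..example
loadBalancer:
  subnet1: ocid1.subnet.oc1..example
  # subnet2 is only required for CCM versions older than 0.7.0
```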
Chad Swenson 145687a48e Reduce log spam of verbose tasks (#3806)
Added a loop_control label to a few tasks that flood our logs.
2018-12-04 10:35:44 -08:00
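`loop_control: label:` is the Ansible feature used here; a minimal sketch with hypothetical task and variable names:

```yaml
- name: Copy certificates to nodes
  copy:
    content: "{{ item.content }}"
    dest: "{{ item.dest }}"
  loop: "{{ node_certs }}"
  loop_control:
    label: "{{ item.dest }}"   # log only the destination path, not the whole item
```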
Erwan Miran b15e685a0b sysctl-related PodSecurityPolicy spec since 1.12 (#3743) 2018-11-26 00:13:51 -08:00
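The spec-level sysctl controls referenced above look roughly like this in a PodSecurityPolicy manifest (policy name and values are illustrative, not kubespray's actual template):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  forbiddenSysctls:
    - "*"                # forbid all sysctls by default
  # allowedUnsafeSysctls:
  #   - "kernel.msg*"    # or explicitly allow selected unsafe sysctls
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - "*"
```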
Andreas Krüger 730caa3d58 Add PriorityClasses on the last master. (#3706) 2018-11-14 15:59:20 -08:00
Erwan Miran 7bec169d58 Fix ansible syntax to avoid ansible deprecation warnings (#3512)
* failed

* version_compare

* succeeded

* skipped

* success

* version_compare becomes version since ansible 2.5

* ansible minimal version updated in doc and spec

* last version_compare
2018-10-16 15:33:30 -07:00
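The bullet points above correspond to Ansible's move from filter-style tests to `is`-style tests in 2.5. A sketch of the before/after (the conditions themselves are illustrative):

```yaml
# deprecated filter syntax
- debug: msg="Ansible is new enough"
  when: ansible_version.full | version_compare('2.5.0', '>=')

# replacement test syntax (Ansible >= 2.5)
- debug: msg="Ansible is new enough"
  when: ansible_version.full is version('2.5.0', '>=')

# the same change applies to the result tests listed above, e.g.
#   result|failed    ->  result is failed
#   result|succeeded ->  result is succeeded
#   result|skipped   ->  result is skipped
#   result|success   ->  result is success
```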
Kuldip Madnani 36898a2c39 Adding pod priority for all the components. (#3361)
* Changes to assign pod priority to kube components.

* Removed the boolean flag pod_priority_assignment

* Created new priorityclass k8s-cluster-critical

* Fixed the trailing spaces

* Added kube version check while creating Priority Class k8s-cluster-critical

* Moved k8s-cluster-critical.yml

* Moved k8s-cluster-critical.yml to kube_config_dir
2018-09-25 07:50:22 -07:00
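A sketch of what the k8s-cluster-critical PriorityClass and a component referencing it might look like; the apiVersion and value are illustrative, not necessarily kubespray's exact manifest:

```yaml
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: k8s-cluster-critical
value: 1000000000
globalDefault: false
description: "Priority class for cluster-critical components deployed by kubespray"
---
# referenced from a component's pod spec:
# spec:
#   priorityClassName: k8s-cluster-critical
```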
Antoine Legrand 4882531c29
Merge pull request #3115 from oracle/oracle_oci_controller
Cloud provider support for OCI (Oracle Cloud Infrastructure)
2018-08-23 18:22:45 +02:00
Erwan Miran 80cfeea957 psp, roles and rbs for PodSecurityPolicy when podsecuritypolicy_enabled is true 2018-08-22 18:16:13 +02:00
Jeff Bornemann 94df70be98 Cloud provider support for OCI (Oracle Cloud Infrastructure)
Signed-off-by: Jeff Bornemann <jeff.bornemann@oracle.com>
2018-08-21 17:36:42 -04:00
vterdunov 4b98537f79
Properly check vsphere_cloud_provider.rc 2018-04-02 18:45:42 +03:00
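The `.rc` check above (and the conditional clusterrole creation a few commits below) follows the usual register-then-test pattern; a sketch with hypothetical task, file, and resource names:

```yaml
- name: Check whether the vsphere cloud provider clusterrole already exists
  command: kubectl get clusterrole system:vsphere-cloud-provider
  register: vsphere_cloud_provider
  failed_when: false            # keep the return code instead of failing the play

- name: Create the clusterrole only if it is missing
  command: "kubectl apply -f {{ kube_config_dir }}/vsphere-rbac.yml"
  when: vsphere_cloud_provider.rc != 0
```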
Matthew Mosesohn 03bcfa7ff5
Stop templating kube-system namespace and creating it (#2545)
Kubernetes makes this namespace automatically, so there is
no need for kubespray to manage it.
2018-03-30 14:29:13 +03:00
woopstar f1d2f84043 Only apply roles from first master node to fix regression 2018-03-18 16:15:01 +01:00
MQasimSarfraz 1bcc641dae Create vsphere clusterrole only if it doesn't exist 2018-03-14 11:29:35 +00:00
MQasimSarfraz 9a4aa4288c Fix vsphere cloud_provider RBAC permissions 2018-03-12 18:07:08 +00:00
chadswen b0ab92c921 Prefix system:node CRB
Change the name of `system:node` CRB to `kubespray:system:node` to avoid
conflicts with the auto-reconciled CRB also named `system:node`

Fixes #2121
2018-03-08 23:56:46 -06:00
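A sketch of the renamed binding; the roleRef and subject mirror the upstream system:node binding, and details may differ from kubespray's actual template:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubespray:system:node   # does not collide with the auto-reconciled system:node CRB
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:nodes
```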
Jonas Kongslund 585303ad66 Start with three dashes for consistency 2018-03-03 10:05:05 +04:00
Jonas Kongslund a800ed094b Added support for webhook authentication/authorization on the secure kubelet endpoint 2018-03-03 10:00:09 +04:00
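For reference, the same kubelet settings expressed as a KubeletConfiguration (kubespray may pass them as kubelet command-line flags instead; the CA path is illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true        # authenticate bearer tokens against the API server (TokenReview)
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook          # authorize kubelet API requests via SubjectAccessReview
```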
Chad Swenson b8788421d5 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled).

Rework of #1937 with kubeadm support

Also, fixed an issue in `kubeadm-migrate-certs` where the old apiserver cert was copied as the kubeadm key
2017-12-05 09:13:45 -06:00
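The two variables involved, with illustrative values (6443 is the usual `kube_apiserver_port` default):

```yaml
# inventory group_vars
kube_apiserver_insecure_port: 0   # disable the plain-HTTP port entirely
kube_apiserver_port: 6443         # clients and health checks use HTTPS
```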
Matthew Mosesohn f9b68a5d17
Revert "Support for disabling apiserver insecure port" (#1974) 2017-11-14 13:41:28 +00:00
Chad Swenson 0c7e1889e4 Support for disabling apiserver insecure port
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:

1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
2. Add apiserver client cert/key to the "Wait for apiserver up" checks
3. Update apiserver liveness probe to use HTTPS ports
4. Set `kube_api_anonymous_auth` to true to allow liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key unfortunately)
5. RBAC has to be enabled. Anonymous requests are in the `system:unauthenticated` group which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.

Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check.

Options:

1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
3. Use basic auth headers for the liveness probe (I really don't like this, it adds a new dependency on basic auth which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)

Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
2017-11-06 14:01:10 -06:00
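Points 1 through 4 above amount to moving all health checking to HTTPS; the reworked liveness probe has roughly this shape (the port value is illustrative and follows `kube_apiserver_port`):

```yaml
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 6443
    scheme: HTTPS
  initialDelaySeconds: 30
  timeoutSeconds: 10
```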
Matthew Mosesohn ec53b8b66a Move cluster roles and system namespace to new role
This should be done after kubeconfig is set for admin and
before network plugins are up.
2017-10-26 14:36:05 +01:00