This PR ensures that the e2fsprogs and xfsprogs packages are
installed on all Kubernetes nodes and that the packages are
the latest versions. It also ensures that the nodes can
create XFS filesystems when necessary, since not all distros
install xfsprogs by default.
e2fsprogs - ext2/ext3/ext4 file system utilities
xfsprogs - Utilities for managing the XFS filesystem
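A minimal sketch of what such an installation task could look like (the task name and the use of the generic `package` module are assumptions, not necessarily the role's actual task):

```yaml
# Sketch only: ensure the filesystem utility packages are present and current.
- name: Install ext and XFS filesystem utilities
  package:
    name:
      - e2fsprogs
      - xfsprogs
    state: latest
  become: true
```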
- Fixed an issue where storage class host directories were created on more
target hosts than intended
- Fixes examples in the LVP `README.md` to use nested dicts instead of a
list of dicts
* Makes local volume provisioner more dynamic
* Correct variable name in local storage provisioner defaults
* Updates external-provisioner readme
* Updates variable naming to be more clear, more documentation, fixes sample inventory
* Variable refactor, untangled some jinja2 loops
* Corrects variable name
* No variable substitution in dict keys, replaced with anchor
* Fixes default storage_classes dict, inline docs
* Fixes spelling in inline docs
* Addresses comments in review
* Updates all the defaults
* Fix failing CI task
* Fixes external provisioner daemonset
* Remove non-kubeadm deployment
* More cleanup
* More cleanup
* More cleanup
* More cleanup
* Fix gitlab
* Try stopping gce first before setting it absent to make the delete process work
* More cleanup
* Fix bug with checking if kubeadm has already run
* Fix bug with checking if kubeadm has already run
* More fixes
* Fix test
* fix
* Fix gitlab checkout until kubespray 2.8 is on quay
* Fixed
* Add upgrade path from non-kubeadm to kubeadm. Revert ssl path
* Readd secret checking
* Do gitlab checks from v2.7.0 test upgrade path to 2.8.0
* fix typo
* Fix CI jobs to kubeadm again. Fix broken hyperkube path
* Fix gitlab
* Fix rotate tokens
* More fixes
* More fixes
* Fix tokens
When using resolvconf_mode host_resolvconf, there is an early DNS
config stage where the Kubernetes cluster DNS is not yet injected into
the host DNS. Later, the cluster DNS is enabled, but we do not
need to run every task from the kubernetes/preinstall role.
* warning on meta flush_handlers
* avoid rm
* avoid "Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually" warning on subsequent tasks using blockinfile
* is match
* failed
* version_compare
* succeeded
* skipped
* success
* version_compare becomes version since ansible 2.5
* ansible minimal version updated in doc and spec
* last version_compare
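For reference, a hedged sketch of the `version_compare` → `version` change mentioned above (task name and version string are illustrative):

```yaml
# Deprecated spelling (pre-2.5): ansible_version.full | version_compare('2.5', '>=')
- name: Example task gated on the new-style version test
  debug:
    msg: "Ansible is 2.5 or newer"
  when: ansible_version.full is version('2.5', '>=')
```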
* [jjo] add kube-router support
Fixes cloudnativelabs/kube-router#147.
* add kube-router as another network_plugin choice
* support most used kube-router flags via
`kube_router_foo` vars as other plugins
* implement replacing kube-proxy (--run-service-proxy=true) via
`kube_proxy_mode: none`, verified in a _non kubeadm_enabled_
install, should also work for recent kubeadm releases via
`skipKubeProxyInstall: true` config
* [jjo] address PR#3339 review from @woopstar
* add busybox image used by kube-router to downloads
* fix busybox download groups key
* rework kubeadm_enabled + kube_router_run_service_proxy
- verify it works OK with kubeadm_enabled and
  kube_router_run_service_proxy set true or false
- introduce `kube_proxy_remove` fact to decouple the logic
  from kube_proxy_mode (which affects kubeadm configmap
  settings, so it should not be abused by setting it to 'none')
* improve kube-router.md re: kubeadm_enabled and kube_router_run_service_proxy
* address @woopstar latest review
* add inventory/sample/group_vars/k8s-cluster/k8s-net-kube-router.yml
* fix kube_router_run_service_proxy conditional for kube-proxy removal
* fix kube_proxy_remove fact (w/ |bool), add some needed kube-proxy tags on my and existing changes
* update kube-router tolerations for 1.12 compatibility
* add PriorityClass to kube-router DaemonSet
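The vars described above might be set in group_vars roughly like this (values are illustrative; see k8s-net-kube-router.yml for the authoritative names and defaults):

```yaml
# Illustrative group_vars snippet for enabling kube-router
kube_network_plugin: kube-router
kube_router_run_service_proxy: false   # set true to let kube-router replace kube-proxy
```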
The hosts(5) manpage clearly states that the first entry is the
"canonical name", or FQDN (Fully-Qualified Domain Name):
IP_address canonical_hostname [aliases...]
By using the alias as a first entry, `hostname -f` does not return the
correct domain which breaks all sorts of unrelated functionality (it
has impact over email server configuration, for example).
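As an illustration (the IP and hostnames are made up), a task that keeps the canonical FQDN as the first entry could look like this:

```yaml
# Sketch: keep the FQDN as the first (canonical) entry, aliases after it
- name: Ensure the node's own /etc/hosts entry lists the FQDN first
  lineinfile:
    path: /etc/hosts
    regexp: '^10\.0\.0\.5\s'
    line: "10.0.0.5 node1.example.com node1"
```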
* [jjo] add DIND support to contrib/
- add contrib/dind with ansible playbook to
create "node" containers, and setup them to mimic
host nodes as much as possible (using Ubuntu images),
see contrib/dind/README.md
- nodes' /etc/hosts editing via `blockinfile` and
  `lineinfile` needs `unsafe_writes: yes` because /etc/hosts
  is bind-mounted by docker, and thus can't be handled atomically
  (modify copy + rename); see the sketch below
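A minimal sketch of that workaround (the variable holding the entries is hypothetical):

```yaml
# /etc/hosts is a docker bind mount, so the atomic copy+rename strategy fails
- name: Edit /etc/hosts inside a DIND node container
  blockinfile:
    path: /etc/hosts
    block: "{{ dind_extra_hosts | default('') }}"   # hypothetical var with the entries
    unsafe_writes: yes
```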
* dind-host role: set node container hostname on creation
* add "Resulting deployment" section with some CLI outputs
* typo
* selectable node_distro: debian, ubuntu
* some fixes for node_distro: ubuntu
* cpu optimization: add early `pkill -STOP agetty`
* typo
* add centos dind support ;)
* add kubespray-dind.yaml, support fedora
- add kubespray-dind.yaml (former custom.yaml at README.md)
- rework README.md as per above
- use some YAML power to share distros' commonality
- add fedora support
* create unique /etc/machine-id and other updates
- create unique /etc/machine-id in each docker node,
used as seed for e.g. weave mac addresses
- with the above, netchecker now passes 100% 🎉🎉🎉
- updated README.md output (1.12.1, netcheck verified)
* minor typos
* fix centos node creation, needs earlier udevadm removal to avoid flaky facts, also verified netcheck Ok \o/
* add Q&D test-distros.sh, back to manual /etc/machine-id hack
* run-test-distros.sh cosmetics and minor fixes
* run-test-distros.sh: $rc fix and minor formatting changes
* run-test-distros.sh output cosmetics
- Local Volume StorageClass configuration is now managed by `local_volume_provisioner_storage_classes`, a list of maps that specifies local storage classes with `name`, `host_dir`, and `mount_dir` keys per entry
- Tasks and templates updated to loop through local volume storage classes
- Previous defaults for path/class names were not changed
- Fixed an issue where a `kubernetes/preinstall` task was creating directories inconsistently with the `kubernetes-apps/external_provisioner/local_volume_provisioner` task
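Assuming the key names listed above, a configuration entry could look roughly like this (class name and paths are only examples):

```yaml
# Illustrative only; keys follow the description above
local_volume_provisioner_storage_classes:
  - name: local-storage
    host_dir: /mnt/disks
    mount_dir: /mnt/disks
```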
* calico upgrade to v3
* update calico_rr version
* add missing file
* change contents of main.yml as it was left at the old version
* enable network policy by default
* remove unneeded task
* Fix kubelet calico settings
* fix when statement
* switch back to node-kubeconfig.yaml
The number of pods on a given node is determined by the --max-pods=k
directive. When the address space is exhausted, no more pods can be
scheduled even if, from the --max-pods perspective, the node still has
capacity.
The special case where a pod is scheduled and uses the node IP in the
host network namespace is too "soft" to derive a guarantee from.
Comparing kubelet_max_pods with kube_network_node_prefix, when given,
allows asserting that the pod limit matches the CIDR address space.
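For example, a /24 per-node prefix leaves 254 usable addresses, so a kubelet_max_pods above that can never be satisfied. A hedged sketch of such an assertion (not necessarily the exact task added):

```yaml
- name: Assert that kubelet_max_pods fits into the per-node pod CIDR
  assert:
    that:
      - kubelet_max_pods | int <= (2 ** (32 - kube_network_node_prefix | int)) - 2
    msg: >-
      kubelet_max_pods ({{ kubelet_max_pods }}) exceeds the usable addresses of a
      /{{ kube_network_node_prefix }} per-node pod CIDR.
```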
* Move front-proxy-client certs back to kube mount
We want the same CA for all k8s certs
* Refactor vault to use a third party module
The module adds idempotency and reduces some of the repetitive
logic in the vault role
Requires ansible-modules-hashivault on the Ansible node and hvac
on the vault hosts themselves
Add upgrade test scenario
Remove bootstrap-os tags from tasks
* fix upgrade issues
* improve unseal logic
* specify ca and fix etcd check
* Fix initialization check
bump machine size
* sysctl file should be in defaults so that it can be overridden
* Change sysctl_file_path to be consistent with roles/kubernetes/preinstall/defaults/main.yml
The openssl package on Tumbleweed is actually a virtual package covering
the openssl-1.0.0 and openssl-1.1.0 implementations. It defaults to 1.1.0, so
when trying to install it while openssl-1.0.0 is installed, zypper fails
with conflicts. As such, let's explicitly pull the package that we need,
which also updates the virtual one.
Co-authored-by: Markos Chandras <mchandras@suse.de>
Added CoreDNS to downloads
Updated with labels. Should now work without RBAC too
Fix DNS settings on hosts
Rename CoreDNS service from kube-dns to coredns
Add rotate based on http://edgeofsanity.net/rant/2017/12/20/systemd-resolved-is-broken.html
Updated docs with CoreDNS info
Added labels and fixed minor settings from official yaml file: https://github.com/kubernetes/kubernetes/blob/release-1.9/cluster/addons/dns/coredns.yaml.sed
Added a secondary deployment and secondary service IP. This is to mitigate DNS timeouts and provide higher resiliency against failures. See discussion at 'https://github.com/coreos/coreos-kubernetes/issues/641#issuecomment-281174806'
Set dns list correct. Thanks to @whereismyjetpack
Only download KubeDNS or CoreDNS if selected
Move dns cleanup to its own file and import tasks based on dns mode
Fix install of KubeDNS when dnsmasq_kubedns mode is selected
Add new dns option coredns_dual for dual stack deployment. Added variable to configure replicas deployed. Updated docs for dual stack deployment. Removed rotate option in resolv.conf.
Run DNS manifests for CoreDNS and KubeDNS
Set skydns servers on dual stack deployment
Use only one template for CoreDNS dual deployment
Set correct cluster ip for the dns server
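In group_vars the options above might be selected roughly like this (the replica variable name is an assumption):

```yaml
# Illustrative: select CoreDNS with a secondary deployment for resiliency
dns_mode: coredns_dual
# Hypothetical name for the configurable replica count mentioned above
dns_replicas: 2
```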
* Added cilium support
* Fix typo in debian test config
* Remove empty lines
* Changed cilium version from <latest> to <v1.0.0-rc3>
* Add missing changes for cilium
* Add cilium to CI pipeline
* Fix wrong file name
* Check kernel version for cilium
* fixed ci error
* fixed cilium-ds.j2 template
* added waiting for cilium pods to run
* Fixed missing EOF
* Fixed trailing spaces
* Fixed trailing spaces
* Fixed trailing spaces
* Fixed too many blank lines
* Updated tolerations,annotations in cilium DS template
* Set cilium_version to iptables-1.9 to see if bug is fixed in CI
* Update cilium image tag to v1.0.0-rc4
* Update Cilium test case CI vars filenames
* Add optional prometheus flag, adjust initial readiness delay
* Update README.md with cilium info
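Enabling the plugin in group_vars might look like this (the prometheus flag name is an assumption):

```yaml
# Illustrative group_vars values for cilium
kube_network_plugin: cilium
cilium_version: "v1.0.0-rc4"        # matches the image tag mentioned above
cilium_enable_prometheus: true      # assumed name for the optional prometheus flag
```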
Ansible automatically installs the python-apt package when using
the 'apt' Ansible module, if python-apt is not present. This patch
removes the (unneeded) explicit installation in the Kubespray
'preinstall' role.
Cloud resolvers are mandatory for hosts on GCE and OpenStack
clouds. The 8.8.8.8 alternative resolver was dropped because
there is already a default nameserver. The new var name
reflects the purpose better.
Also restart apiserver when modifying dns settings.
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled).
Rework of #1937 with kubeadm support
Also, fixed an issue in `kubeadm-migrate-certs` where the old apiserver cert was copied as the kubeadm key
* Add Contiv support
Contiv is a network plugin for Kubernetes and Docker. It supports
vlan/vxlan/BGP/Cisco ACI technologies. It supports firewall policies,
multiple networks, and bridging pods onto physical networks.
* Update contiv version to 1.1.4
Update contiv version to 1.1.4 and added SVC_SUBNET in contiv-config.
* Load openvswitch module to workaround on CentOS7.4
* Set contiv cni version to 0.1.0
Correct contiv CNI version to 0.1.0.
* Use kube_apiserver_endpoint for K8S_API_SERVER
Use kube_apiserver_endpoint as K8S_API_SERVER to make contiv talk
to an available endpoint whether or not there is a loadbalancer.
* Make contiv use its own etcd
Before this commit, contiv used an etcd proxy mode to the k8s etcd.
This works fine when the etcd hosts are co-located with the contiv etcd
proxy; however, the k8s peering certs are only in the etcd group, so
the etcd-proxy is not able to peer with the k8s etcd on the etcd
group. In addition, the netplugin always tries to find the etcd
endpoint on localhost, which causes problems for all netplugins
not running on etcd group nodes.
This commit makes contiv use its own etcd, separate from the k8s one.
On kube-master nodes (where net-master runs), it will run in leader
mode, and on all remaining nodes it will run in proxy mode.
* Use cp instead of rsync to copy cni binaries
Since rsync has been removed from hyperkube, this commit changes it
to use cp instead.
* Make contiv-etcd able to run on master nodes
* Add rbac_enabled flag for contiv pods
* Add contiv into CNI network plugin lists
* migrate contiv test to tests/files
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
* Add required rules for contiv netplugin
* Better handling of the JSON return of fwdMode
* Make contiv etcd port configurable
* Use default var instead of templating
* roles/download/defaults/main.yml: use contiv 1.1.7
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
* Defaults for apiserver_loadbalancer_domain_name
When loadbalancer_apiserver is defined, use the
apiserver_loadbalancer_domain_name with a given default value.
Fix inconsistencies in checking whether apiserver_loadbalancer_domain_name
is defined AND using it with a default value provided at once.
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
* Define defaults for LB modes in common defaults
Adjust the defaults for apiserver_loadbalancer_domain_name and
loadbalancer_apiserver_localhost to come from a single source, which is
kubespray-defaults. Removes some confusion and simplifies the code.
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
This allows `kube_apiserver_insecure_port` to be set to 0 (disabled). It's working, but so far I have had to:
1. Make the `uri` module "Wait for apiserver up" checks use `kube_apiserver_port` (HTTPS)
2. Add apiserver client cert/key to the "Wait for apiserver up" checks
3. Update apiserver liveness probe to use HTTPS ports
4. Set `kube_api_anonymous_auth` to true to allow liveness probe to hit apiserver's /healthz over HTTPS (livenessProbes can't use client cert/key unfortunately)
5. RBAC has to be enabled. Anonymous requests are in the `system:unauthenticated` group which is granted access to /healthz by one of RBAC's default ClusterRoleBindings. An equivalent ABAC rule could allow this as well.
Changes 1 and 2 should work for everyone, but 3, 4, and 5 require new coupling of currently independent configuration settings. So I also added a new settings check.
Options:
1. The problem goes away if you have both anonymous-auth and RBAC enabled. This is how kubeadm does it. This may be the best way to go since RBAC is already on by default but anonymous auth is not.
2. Include conditional templates to set a different liveness probe for possible combinations of `kube_apiserver_insecure_port = 0`, RBAC, and `kube_api_anonymous_auth` (won't be possible to cover every case without a guaranteed authorizer for the secure port)
3. Use basic auth headers for the liveness probe (I really don't like this, it adds a new dependency on basic auth which I'd also like to leave independently configurable, and it requires encoded passwords in the apiserver manifest)
Option 1 seems like the clear winner to me, but is there a reason we wouldn't want anonymous-auth on by default? The apiserver binary defaults anonymous-auth to true, but kubespray's default was false.
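A sketch of the probe shape implied by points 3 and 4 above (the template variable and timing values are assumptions):

```yaml
# Excerpt of an apiserver static-manifest template probing /healthz over HTTPS
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: {{ kube_apiserver_port }}
    scheme: HTTPS
  initialDelaySeconds: 30
  timeoutSeconds: 10
```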
* Change deprecated vagrant ansible flag 'sudo' to 'become'
* Workaround ansible bug where access var via dict doesn't get real value
When accessing a variable via its name "{{ foo }}", its value is
retrieved. But when the variable value is retrieved via the vars dict
"{{ vars['foo'] }}", the expression of the variable is no longer
resolved due to a bug. So e.g. an expression foo="{{ 1 == 1 }}" is no
longer resolved but just returned as the string "1 == 1".
* Make file yamllint compliant
* Update main.yml
resolv.conf needs to be set up before updating the Yum cache, otherwise no name resolution is available (resolv.conf is empty).
* Update main.yml
Removing trailing spaces
* don't try to install this rpm on fedora atomic
* add docker 1.13.1 for fedora
* built-in docker unit file is sufficient, as tested on both fedora and centos atomic
* Rename dns_server to dnsmasq_dns_server so that it includes role prefix
as the var name is generic and conflicts when integrating with existing ansible automation.
* Enable selinux state to be configurable with new var preinstall_selinux_state
* kubeadm support
* move k8s master to a subtask
* disable k8s secrets when using kubeadm
* fix etcd cert serial var
* move simple auth users to master role
* make a kubeadm-specific env file for kubelet
* add non-ha CI job
* change ci boolean vars to json format
* fixup
* Update create-gce.yml
* Update create-gce.yml
* Update create-gce.yml
* Adding yaml linter to ci check
* Minor linting fixes from yamllint
* Changing CI to install python pkgs from requirements.txt
- adding in a secondary requirements.txt for tests
- moving yamllint to tests requirements
When VPC is used, external DNS might not be available. This patch changes
the behavior to use the metadata service instead of external DNS when
upstream_dns_servers is not specified.
Signed-off-by: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
Until now it was not possible to add an API loadbalancer
without a static IP address. But certain loadbalancers,
like the AWS Elastic Load Balancer, don't have a fixed IP address.
With this commit it is possible to add these kinds of loadbalancers
to the Kargo deployment.
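With this change, an inventory can reference the loadbalancer purely by DNS name, roughly like this (the hostname is only an example):

```yaml
# Illustrative group_vars snippet: no static IP required
apiserver_loadbalancer_domain_name: "elb.some.domain"
loadbalancer_apiserver:
  port: 6443
```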
Updates based on feedback
Simplify checks for file exists
remove invalid char
Review feedback. Use regular systemd file.
Add template for docker systemd atomic
* Leave all.yml to keep only optional vars
* Store groups' specific vars by existing group names
* Fix optional vars casted as mandatory (add default())
* Fix missing defaults for an optional IP var
* Relink group_vars for terraform to reflect changes
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
Sometimes, a sysadmin might outright delete the SELinux rpms and
delete the configuration. This causes the selinux module to fail
with
```
IOError: [Errno 2] No such file or directory: '/etc/selinux/config'\n",
"module_stdout": "", "msg": "MODULE FAILURE"}
```
This simply checks that /etc/selinux/config exists before we try
to set it to Permissive.
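A sketch of the guard being described (task names are illustrative):

```yaml
- name: Check if the SELinux config file exists
  stat:
    path: /etc/selinux/config
  register: selinux_config

- name: Set SELinux to permissive only when the config file is present
  selinux:
    policy: targeted
    state: permissive
  when: selinux_config.stat.exists
```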
Update from feedback
Migrate older inline= syntax to pure yml syntax for module args so as to be consistent with most of the rest of the tasks
Clean up some spacing in various files
Rename some files from yaml to yml for consistency
"shell" step doesn't support check mode, which currently leads to failures,
when Ansible is being run in check mode (because Ansible doesn't run command,
assuming that command might have effect, and no "rc" or "output" is registered).
Setting "check_mode: no" allows to run those "shell" commands in check mode
(which is safe, because those shell commands doesn't have side effects).
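A hedged example of the pattern (the command itself is just an illustration of a side-effect-free shell step):

```yaml
- name: Read-only shell step that also runs in check mode
  shell: kubeadm token list
  register: kubeadm_tokens
  check_mode: no
  changed_when: false
```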
always_run was deprecated in Ansible 2.2 and will be removed in 2.4;
ansible logs contain "[DEPRECATION WARNING]: always_run is deprecated.
Use check_mode = no instead". This patch fixes the deprecation.
* Drop linux capabilities for unprivileged containerized
workloads Kargo configures for deployments.
* Configure required securityContext/user/group/groups for kube
components' static manifests, etcd, calico-rr and k8s apps,
like dnsmasq daemonset.
* Rework cloud-init (etcd) users creation for CoreOS.
* Fix nologin paths, adjust defaults for addusers role and ensure
supplementary groups membership added for users.
* Add netplug user for network plugins (yet unused by privileged
networking containers though).
* Grant the kube and netplug users read access for etcd certs via
the etcd certs group.
* Grant group read access to kube certs via the kube cert group.
* Remove privileged mode for calico-rr and run it under its uid/gid
and supplementary etcd_cert group.
* Adjust docs.
* Align cpu/memory limits and dropped caps with added rkt support
for control plane.
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
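The kind of securityContext settings described above could look roughly like this in a static manifest (uid/gid values are purely illustrative):

```yaml
# Illustrative pod-level securityContext for an unprivileged control-plane component
securityContext:
  runAsUser: 997               # e.g. a dedicated "kube" user
  supplementalGroups: [993]    # e.g. the etcd cert group, for read access to certs
```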
Also remove the check for != "RedHat" when removing the dhclient hook,
as this had also to be done on other distros. Instead, check if the
dhclienthookfile is defined.
The tasks fail because selinux prevents setting ip-forwarding.
Moving the tasks around addresses two issues: it makes sure that
the correct python tools are in place before adjusting selinux,
and makes sure that ip-forwarding is toggled after the selinux adjustments.
The current design expects users to define at least one
nameserver in the nameservers var to back up the host OS DNS config
when the K8s cluster DNS service IP is not available and hosts
still have to resolve external or intranet FQDNs.
Fix undefined nameservers to fall back to the default_resolver.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Add BGP route reflectors support in order to optimize BGP topology
for deployments with Calico network plugin.
Also bump version of calico/ctl for some bug fixes.
Also place in global vars and do not repeat the kube_*_config_dir
and kube_namespace vars for better code maintainability and UX.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Do not repeat options and nameservers in the dhclient hooks.
Do not prepend nameservers for dhclient but supersede them, falling back
to the upstream_dns_servers and then the default_resolver. Fixes the order of
nameserver placement so that the cluster DNS IP always goes first.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
* For Debian/RedHat OS families (with NetworkManager/dhclient/resolvconf
optionally enabled) prepend /etc/resolv.conf with required nameservers,
options, and supersede domain and search domains via the dhclient/resolvconf
hooks.
* Drop (z)nodnsupdate dhclient hook and re-implement it to complement the
resolvconf -u command, which is distro/cloud provider specific.
Update docs as well.
* Enable network restart to apply and persist changes and simplify handlers
to rely on network restart only. This fixes DNS resolve for hostnet K8s
pods for Red Hat OS family. Skip network restart for canal/calico plugins,
unless https://github.com/projectcalico/felix/issues/1185 fixed.
* Replace lineinfile line-plus-with_items usage with block mode as it's faster.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Co-authored-by: Matthew Mosesohn <mmosesohn@mirantis.com>
In order to enable offline/intranet installation cases:
* Move DNS/resolvconf configuration to preinstall role. Remove
skip_dnsmasq_k8s var as not needed anymore.
* Preconfigure DNS stack early, which may be the case when downloading
artifacts from intranet repositories. Do not configure
K8s DNS resolvers for hosts /etc/resolv.conf yet early (as they may be
not existing).
* Reconfigure K8s DNS resolvers for hosts only after kubedns/dnsmasq
was set up and before K8s apps to be created.
* Move docker install task to early stage as well and unbind it from the
etcd role's specific install path. Fix external flannel dependency on
docker role handlers. Also fix the docker restart handlers' steps
ordering to match the expected sequence (the socket then the service).
* Add default resolver fact, which is
the cloud provider specific and remove hardcoded GCE resolver.
* Reduce default ndots for hosts' /etc/resolv.conf to 2. Multiple search
domains combined with high ndots values lead to poor performance of the
DNS stack and make Ansible workers fail very often with the
"Timeout (12s) waiting for privilege escalation prompt:" error.
* Update docs.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
The variable etcd_access_addresses is used to determine
how to address communication from other roles to
the etcd cluster.
It was set to the address that ansible uses to
connect to instances ({{ item }}) and not to the
variable ip_access, which had already been created and could already
be overridden through the access_ip variable.
This change allows ansible to connect to a machine using
a different address than the one used to access etcd.
Override GCE sysctl in /etc/sysctl.d/99-sysctl.conf instead of
/etc/sysctl.d/11-gce-network-security.conf. The latter is recreated
by GCE, e.g. if the gcloud CLI invokes some security related changes,
thus losing customizations we want to be persistent.
Update cloud providers firewall requirements in calico docs.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
* Don't push containers if not changed
* Do preinstall role only once and redistribute defaults to
corresponding roles
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Change the kubelet --hostname-override flag to use the ansible_hostname variable, which should be more consistent with the value required by cloud providers.
Add an ansible_hostname alias to /etc/hosts when it is different from inventory_hostname to overcome node name limitations; see https://github.com/kubernetes/kubernetes/issues/22770
Signed-off-by: Chad Swenson <chadswen@gmail.com>
Also adds all masters by hostname and localhost/127.0.0.1 to
apiserver SSL certificate.
Includes documentation update on how localhost loadbalancer works.
* Add the retry_stagger var to tweak push and retry time strategies.
* Add large deployments related docs.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
etcd facts are generated in kubernetes/preinstall, so etcd nodes need
to be evaluated first before the rest of the deployment.
Moved several directory facts from kubernetes/node to
kubernetes/preinstall because they are not backward dependent.
* Add HA docs for API server.
* Add auto-evaluated internal endpoints and clarify the loadbalancer_apiserver
vars and usecases.
* Use facts for kube_apiserver to not repeat code and enable LB endpoints use.
* Use /healthz check for the wait-for apiserver.
* Use the single endpoint for kubelet instead of the list of apiservers
* Specify kube_apiserver_count for the HA layout
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
* Add auto-evaluated internal endpoints and clarify the loadbalancer_apiserver
vars and usecases.
* Add loadbalancer_apiserver_localhost (default false). If enabled, override
the external LB and expect localhost:443/8080 to be new internal only frontends.
* Add kube_apiserver_multiaccess to ignore loadbalancers, and make clients
access the apiservers as a comma-separated list of access_ip/ip/ansible ip
(the default mode). When disabled, allow clients to use the given loadbalancers.
* Define connections security mode for kube controllers, schedulers, proxies.
It is insecure by default, which is the current deployment choice.
* Rework the groups['kube-master'][0] hardcode defining the apiserver
endpoints.
* Improve grouping of vars and add facts for kube_apiserver.
* Define kube_apiserver_insecure_bind_address as a fact, add more
facts for ease of use.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
Move set_facts to the preinstall scope, so every role
may see it. For example, network plugins to see the etcd_endpoint.
Signed-off-by: Bogdan Dobrelya <bdobrelia@mirantis.com>
This should make things a little more composable:
by making these roles meta roles that perform no
actions by default, we allow each role to own its own
resources.
Currently kubespray does not install kubernetes in a way that allows cinder volumes to be used. This commit provides the necessary cloud configuration file and configures kubelet and kube-apiserver to use it.
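A hedged sketch of how the cloud provider could be wired up in group_vars and on the component command lines (the flag-list variable name is hypothetical; the cloud_config path is only an example):

```yaml
# group_vars sketch
cloud_provider: openstack

# Flags that kubelet and kube-apiserver would then receive (illustrative)
kube_cloud_flags:
  - "--cloud-provider=openstack"
  - "--cloud-config=/etc/kubernetes/cloud_config"
```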