* update documentation for adding and removing nodes
* add information about parameters to change when adding multiple etcd nodes
* add information about reset_nodes
* add documentation about adding existing nodes to etcd masters.
* Add additional network configuration options to external OpenStack CCM (#6083)
* Change the default version of external openstack cloud controller image to v1.18.1 since there was an issue in v1.18.0 where some IPs of the private network were ignored
* Change Network section in external-openstack-cloud-config.j2 to Networking
* Add networking customization information in the openstack documentation
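A hedged sketch of what such a networking override might look like in group_vars; the variable names and values here are illustrative assumptions, not confirmed by these entries:

```yaml
# Hypothetical excerpt: limit which OpenStack networks the external CCM
# considers when resolving node addresses (network names are placeholders).
external_openstack_network_internal_networks:
  - k8s-private-net
external_openstack_network_public_networks:
  - k8s-public-net
```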
* add the mitogen_linear strategy when mitogen is installed
* add small docs
Rename playbook file
The raw action executes as a regular Mitogen connection, which requires Python on the target, so add `strategy: linear` to the bootstrap-os role playbook.
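A minimal sketch of the resulting play layout, assuming a typical Kubespray-style bootstrap playbook:

```yaml
# Force the plain linear strategy for the bootstrap play: the raw module needs
# no Python on the target, but a Mitogen connection does, so bootstrap-os must
# not run under mitogen_linear.
- hosts: all
  strategy: linear
  gather_facts: false
  roles:
    - { role: bootstrap-os, tags: bootstrap-os }
```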
* add mitogen to CI test
fix typo
* enable mitogen test on deploy-part1 tests
change version from master to release
download tar.gz archive
* run all CI tests with mitogen
* disable mitogen with upgrade CI tests
* enable mitogen on CI tests via env vars
* disable mitogen on CI test by default, enable on some different OS
* disable mitogen CI test on centos8
(got the error: /usr/bin/python: No such file or directory)
The cloud_provider option exists in ./inventory/sample/group_vars/all/all.yml.
In addition, the quick start shows creating the configuration by copying
./inventory/sample, so this updates the path of all.yml accordingly.
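For reference, a minimal sketch of the setting in the copied inventory (the inventory name mycluster and the value shown are only examples):

```yaml
# inventory/mycluster/group_vars/all/all.yml
cloud_provider: openstack   # leave unset/empty when no cloud provider is used
```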
The Terraform installation part states that it is for CentOS 7, but the echo command refers to the OS X binary. Updated the echo command to use the Linux version.
* assemble the fallback_ips and no_proxy vars only once on localhost and populate the result to all hosts
* add tag always, fix ansible lint errors
* workaround for mitogen issue dw/mitogen#663
* do not gather facts before installing Python on CoreOS-like distros
* try to pass docker molecule test
* fix upgrade of crio on fcos
- update documents
* install conntrack required by kube-proxy
- like commit 48c41bcbe7
* enable fedora modular repo for crio
* allow overriding crio configuration
- set the cgroup manager to match kubelet_cgroup_driver if defined
- the path of seccomp_profile depends on the distribution
* allow overriding crio configuration
- fix path for ubuntu
* allow overriding crio configuration
- fix cni path for fcos
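A hedged sketch of overriding these settings from group_vars; the variable names are assumptions inferred from the entries above, and the seccomp path is only an illustrative distribution default:

```yaml
# Keep cri-o's cgroup manager in line with the kubelet driver, as described above.
crio_cgroup_manager: "{{ kubelet_cgroup_driver | default('systemd') }}"
# The seccomp profile lives in different places depending on the distribution.
crio_seccomp_profile: /usr/share/containers/seccomp.json
```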
* requirements.txt: Bump versions
Ansible 2.8+ allows ansible_python_interpreter autodetection
Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* tests: do not force ansible_python_interpreter
we do not expect people to set ansible_python_interpreter, so we should not set it in the CI
Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* Add CentOS 8 Calico to CI
Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
* kubernetes-sigs-kubespray #5824
Added support for nodes which are part of Virtual Machine Scale Sets (VMSS)
* kubernetes-sigs-kubespray #5824
* kubernetes-sigs-kubespray #5824
Added comments and updated azure docs.
* kubernetes-sigs-kubespray #5824
Added supported values comments for "azure_vmtype" in azure.yml
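A short sketch of the new setting, assuming the supported values documented in azure.yml are standard (availability sets) and vmss (scale sets):

```yaml
# inventory group_vars azure.yml (excerpt)
azure_vmtype: vmss   # use "standard" for availability-set VMs
```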
* fedora coreos support
- add bootstrap and a new fact
* fedora coreos support
- fix bootstrap condition
* fedora coreos support
- allow customizing packages for the fedora coreos bootstrap
* fedora coreos support
- prevent installing python3 and epel via dnf on fedora coreos
* fedora coreos support
- handle all ostree-like OSes in the same way
* fedora coreos support
- handle all ostree-like OSes in the same way for crio
* fedora coreos support
- add fcos documentation
* add documentation for how to upgrade to the new external cloud provider
* add migrate_openstack_provider playbook
* fix codeblock syntax highlighting
* improve the docs for migrating the cloud provider
* update grammar
* fix typo
* Make sure the code is correct markdown
* remove Fenced code blocks
* fix markdown syntax
* remove extra lines and fix trailing spaces
* download file
* download containers
* fix pushing images to nodes
* pull the image if it is not present on the host
* fix
* improve docker image tag checks.
do not pull already cached images
* rebase fix merge conflict
* add support for download_run_once when upgrading and scaling the cluster
add some tests with download_run_once
* set default values for the temp flag on every download cycle
* add save/load ability for containerd and crio when download_run_once=true
* return the redefined image save/load commands to set_docker_image_facts.yml
* move set command to set_container_facts
* ctr in containerd_bin_dir
* fix order of ctr image export arguments
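An illustrative sketch of the save step these two entries refer to; the image variable names are placeholders, only containerd_bin_dir is an existing Kubespray variable:

```yaml
# containerd: export the cached image to a tar file. Argument order matters:
# the output file comes first, then the image reference.
- name: Save image with ctr
  command: >-
    {{ containerd_bin_dir }}/ctr -n k8s.io image export
    {{ image_cache_path }} {{ image_repo_tag }}
```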
* temporarily disable download_run_once for containerd and crio
due to https://github.com/containerd/containerd/issues/4075
* remove unused files
* fix strict yaml linter warnings and errors
* refactor logical conditions to pull and cache container images
* remove comment due to lint check
* document role
* remove image_load_on_localhost, because cached images are always loaded to docker on remote sites
* remove XXX from debug output
* Fix incorrect assertion comparison for kube_network_node_prefix
* Ignore assertion comparison for kube_network_node_prefix when using calico
* Adding more var docs description for kube_network_node_prefix
* Fixing trailing whitespaces
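Illustrative values for the relationship the assertion checks (a /24 per node leaves 2^(32-24) - 2 = 254 usable addresses, comfortably above the default pod limit); with Calico's own IPAM the node prefix is not used the same way, hence the exemption:

```yaml
kube_network_node_prefix: 24   # size of each node's pod CIDR
kubelet_max_pods: 110          # must fit inside the 254 addresses of a /24
```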
* Introduce kubelet_config_extra_args and kubelet_node_config_extra_args to pass params to kubelet via YAML config
* kubelet_config_extra_args is not the alternative
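A small sketch of passing extra fields through to the kubelet's YAML config; the keys are ordinary KubeletConfiguration fields chosen for illustration, and the split between the two variables follows the naming in the entry above:

```yaml
kubelet_config_extra_args:        # extra fields for the cluster-wide kubelet config
  maxPods: 250
kubelet_node_config_extra_args:   # extra fields for the per-node kubelet config
  kubeReserved:
    cpu: 200m
    memory: 512Mi
```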
* Fix recover-control-plane to work with etcd 3.3.x and add CI
* Set default values for testcase
* Add actual test jobs
* Attempt to satisfy the gitlab ci linter
* Fix ansible targets
* Set etcd_member_name as stated in the docs...
* Recovering from 0 masters is not supported yet
* Add other master to broken_kube-master group as well
* Increase number of retries to see if etcd needs more time to heal
* Make number of retries for ETCD loops configurable, increase it for recovery CI and document it
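A sketch of the knob described in the last entry; the variable name etcd_retries is an assumption based on the wording above, so check the docs it references:

```yaml
# Give the etcd health/join loops more attempts, e.g. during recovery runs.
etcd_retries: 10
```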
Since skipping upgrades is unsupported, I've detailed the steps for upgrading one step at a time and removed some language that indicated skipping should work
* Refactor calico-rr to run in k8s cluster with taint
Change-Id: I75a3169ff5b36ce8302fc7ef1c32d3eb697b5afa
* add preinstall checks
* rework calico/rr role
Change-Id: I2f0a7e6cb77cf91ad4a615923680760d2e5d9ca8
* add empty calico-rr group
Change-Id: I006c0a60db9b72d02245bf8fdfabcf982144a5ad
* add macvlan cni to kubespray
* macvlan: lint yaml files and fix sample config file
* macvlan: add OWNERS file
* add macvlan to README
* macvlan: CI first shot
* macvlan: CI add full masquerade
* delegate retrieving the pod cidr to the master only
* macvlan: add config for CI
* macvlan: add netchecker deployment
* File and container image downloads are now cached locally, so that repeated vagrant up/down runs do not trigger downloading of those files. This is especially useful on laptops with kubernetes running locally on VMs. The total size of the cache, after an ansible run, is currently around 800MB, so bandwidth (=time) savings can be quite significant.
* When download_run_once is false, the default is still not to cache, but setting download_force_cache will still enable caching (see the sketch after these notes).
* The local cache location can be set with download_cache_dir and defaults to /tmp/kubernetes_cache
* A local docker instance is no longer required to cache docker images; images are cached to file. A local docker instance is still required, though, if you wish to download images on localhost.
* Fixed a FIXME, where the argument was that delegate_to doesn't play nice with omit. That is a correct observation and the fix is to use default(inventory_hostname) instead of default(omit). See ansible/ansible#26009
* Removed "Register docker images info" task from download_container and set_docker_image_facts because it was faulty and unused.
* Removed redundant when:download.{container,enabled,run_once} conditions from {sync,download}_container.yml
* All features of commit d6fd0d2aca by Timoses <timosesu@gmail.com>, merged May 1st 2019, are included in this patch. Not all code was included verbatim, but each feature of that commit was checked to be working in this patch. One notable change: the actual downloading of the kubeadm images was moved to {download,sync}_container, to enable caching.
Note 1: I considered splitting this patch, but most changes that are not directly related to caching are a pleasant by-product of implementing the caching code, so splitting would be impractical.
Note 2: I have my doubts about the usefulness of the upload, download and upgrade tags in the download role. Must they remain or can they be removed? If anybody knows, then please speak up.
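As referenced above, a minimal sketch of the caching settings these notes describe (values reflect the defaults and behaviour mentioned; download_localhost is assumed to be the knob that still requires a local docker instance):

```yaml
download_run_once: false
download_force_cache: true                 # cache even when download_run_once is false
download_cache_dir: /tmp/kubernetes_cache  # default local cache location
download_localhost: false                  # true = also load images on localhost (needs local docker)
```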
* Download to delegate and sync files when download_run_once
* Fail on error after saving container image
* Do not set changed status when downloaded container was up to date
* Only sync containers when they are actually required
Previously, non-required images (pull_required=false as
image existed on target host) were synced to the target
hosts. This failed as the image was not downloaded to
the download_delegate and hence was not available for
syncing.
* Sync containers when only missing on some hosts
* Consider images with multiple repo tags
* Enable kubeadm images pull/syncing with download_delegate
* Use kubeadm images list to pull/sync
'kubeadm config images pull' is replaced by collecting the images
list with 'kubeadm config images list' and using the commonly
used method of pull/syncing the images.
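A minimal sketch of the collection step, assuming standard Kubespray variables (bin_dir, kube_version, download_delegate); the registered variable name is a placeholder:

```yaml
- name: Collect the kubeadm image list
  command: >-
    {{ bin_dir }}/kubeadm config images list
    --kubernetes-version {{ kube_version }}
  register: kubeadm_images_list
  run_once: true
  delegate_to: "{{ download_delegate }}"
  changed_when: false
```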
* Ensure containers are downloaded and synced for all hosts
* Fix download/syncing when download_delegate is a kubernetes host
add support for the following properties in azure-credential-check.yml
- azure_loadbalancer_sku: Sku of Load Balancer and Public IP. Candidate values are: basic and standard.
- azure_exclude_master_from_standard_lb: excludes master nodes from standard load balancer.
- azure_disable_outbound_snat: disables the outbound SNAT for public load balancer rules
- useInstanceMetadata: Use instance metadata service where possible
- azure_primary_availability_set: (Optional) The name of the availability set that should be used as the load balancer backend
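A hedged example of these properties as inventory variables (values are illustrative; useInstanceMetadata is the cloud-config field name, and the corresponding variable name shown for it is an assumption):

```yaml
azure_loadbalancer_sku: standard             # "basic" or "standard"
azure_exclude_master_from_standard_lb: true
azure_disable_outbound_snat: false
azure_use_instance_metadata: true            # assumed name for useInstanceMetadata
azure_primary_availability_set: my-agents-as # optional
```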
Ansible 2.0 has deprecated the "ssh" in ansible_ssh_host in favour of ansible_host.
Updating the docs to be more aligned with the Ansible version used in the sample/inventory.ini file as well.
Also adding `[bastion]` group in the docs to avoid confusion.
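A small sketch of the modern form in YAML inventory notation; the repo's sample uses inventory.ini, where the equivalent is a `bastion ansible_host=...` line under a `[bastion]` section, and the host name and IP here are placeholders:

```yaml
all:
  children:
    bastion:
      hosts:
        bastion-1:
          ansible_host: 203.0.113.10   # ansible_ssh_host is deprecated since Ansible 2.0
```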