* network_plugin/custom_cni: add CNI to apply provided manifests
Add a new, simple custom_cni network plugin to install provided Kubernetes manifests.
This can be useful for applying manifests directly provided by a CNI vendor when
they are not supported by Kubespray (e.g. generated from a Helm chart or any other
manifest generation method).
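A minimal sketch of how this might be wired up in the inventory; `custom_cni_manifests` is an assumed variable name and may differ from the role's actual defaults:

```yaml
# group_vars/k8s_cluster/k8s-cluster.yml (illustrative)
kube_network_plugin: custom_cni

# Hypothetical variable: list of pre-rendered manifests to apply,
# e.g. produced beforehand with `helm template`.
custom_cni_manifests:
  - /path/to/rendered/cni.yaml
```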
Co-authored-by: James Landrein <james.landrein@proton.ch>
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
* network_plugin/custom_cni: add test with cilium
Co-authored-by: James Landrein <james.landrein@proton.ch>
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
---------
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
Co-authored-by: James Landrein <james.landrein@proton.ch>
crun_bin_dir was used to specify the destination of the crun binary during the
download process. This path must match the value provided in the CRI-O
configuration file, so changing its value to bin_dir helps avoid mismatch errors.
Signed-off-by: Victor Morales <chipahuac@hotmail.com>
* Fix yq install in argocd role: use download_file instead of get_url
* Fix: use download_file instead of get_url to download the argocd-install manifest in the argocd role
* Fix order and add arm64 checksum
* Fix: Failed to template loop_control.label: 'None'
This fixes the CrashLoopBackOff error that appears because the envoy
configuration has changed a lot and upstream removed the envoy proxy in
favor of using nginx only. Those changes are based on the upstream Cilium Helm chart.
This commit uses a kubeadm join config to pull down the etcd certs on
worker nodes (which is needed in some circumstances, for instance with
Calico or Cilium).
The previous way didn't allow us to pass certain parameters which were
typically given in the config in other kubeadm invocations in Kubespray.
This made kubeadm produce errors in some edge cases.
For example, in our deployment we don't have a default route and, even
though it's only to download the certificates, kubeadm produces the error
`unable to select an IP from default routes` (these commands are kubeadm
control-plane commands, so kubeadm does some additional checks). This is
fixed by specifying `advertiseAddress` within the kubeadm config.
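A minimal sketch of the join configuration involved; the discovery values are placeholders, and `controlPlane.localAPIEndpoint.advertiseAddress` is the field that avoids the default-route lookup during the control-plane-prepare phases:

```yaml
# Illustrative JoinConfiguration; endpoint, token and addresses are placeholders.
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "192.0.2.10:6443"
    token: "abcdef.0123456789abcdef"
    unsafeSkipCAVerification: true
controlPlane:
  localAPIEndpoint:
    # Explicit address so kubeadm does not try to pick one from the
    # default route, which may not exist on the host.
    advertiseAddress: "192.0.2.11"
```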
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
Add a variable `calico_felix_floatingIPs` which permits enabling the Calico `floatingIPs` feature
(disabled by default).
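A minimal sketch of enabling it from the inventory, assuming the value is passed straight through to Felix's `Enabled`/`Disabled` setting:

```yaml
# group_vars/k8s_cluster/k8s-net-calico.yml (illustrative)
calico_felix_floatingIPs: Enabled
```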
Signed-off-by: MatthieuFin <matthieu2717@gmail.com>
#9679
In 6db6c8678c, this was disabled because
Kubespray granted more permissions than were needed. This commit
re-enables this option by default and also removes the extra
permissions that Kubespray granted which were in fact not needed.
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>
* optimize cgroups settings for node reserved
* fix
* set cgroup slice for multiple container engines
* set cgroup slice for crio
* add reserved cgroups variables to sample files
* Make cgroup paths compatible with different container managers
* add cgroups doc
* fix markdown
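A minimal sketch of the kubelet-level settings these variables ultimately feed; the field names are from the KubeletConfiguration API, while the reservation sizes and slice names are placeholders:

```yaml
# Illustrative KubeletConfiguration fragment
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:
  cpu: 100m
  memory: 256Mi
systemReserved:
  cpu: 500m
  memory: 512Mi
# These cgroups must already exist and match the cgroup layout used by
# the container engine (e.g. systemd slices).
kubeReservedCgroup: /kube.slice
systemReservedCgroup: /system.slice
enforceNodeAllocatable:
  - pods
```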
In upstream Calico development, the v3.21 branch has not been updated
for over 2 months. In addition, an unnecessary error message is output during
Kubespray deployment due to the different URLs used for Calico v3.21 and v3.22+.
This drops v3.21 support to solve the issue.
Without a minimal cluster configuration, even on a one-node control plane,
the health check of the ovn-central container always fails as it queries the
cluster/status.
* feat(): Add wireguard backend to flannel cni
As described in the flannel docs:
https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#wireguard
This does not support optional configuration methods such as:
- setting a PSK (autogenerated by default)
- changing the listening ports
- changing the mode (defaults to 'separate')
- changing PersistentKeepaliveInterval (defaults to 0)
(a minimal configuration sketch follows after the bullets below)
* Add supported backends to flannel docs
* Fix markdown in docs
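A minimal sketch of selecting the new backend from the inventory; `flannel_backend_type` is assumed to be the existing Kubespray variable this change extends:

```yaml
# group_vars/k8s_cluster/k8s-net-flannel.yml (illustrative)
# Switch the flannel backend from the default (vxlan) to wireguard.
flannel_backend_type: wireguard
```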
This fixes a task failure in the rescue block that uncordons nodes after an unsuccessful drain. The issue occurs when `kube_override_hostname` is set and does not match `inventory_hostname`.
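A minimal sketch of the kind of uncordon task involved, not the exact Kubespray task; the fallback expression is the key point:

```yaml
# Illustrative rescue task: uncordon using the node's real Kubernetes name,
# falling back to inventory_hostname when no override is set.
- name: Uncordon node after failed drain
  command: >-
    {{ bin_dir }}/kubectl uncordon
    {{ kube_override_hostname | default(inventory_hostname) }}
  delegate_to: "{{ groups['kube_control_plane'] | first }}"
```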
The `containerd-common` role is responsible for gathering OS specific variables from the vars directory of the roles that include or import it. `containerd-common` is imported via role dependency by a total of two roles, `container-engine/docker`, and `container-engine/containerd`.
containerd-common is needed by both the docker and containerd roles as a dependency when:
- containerd is selected as the container engine
- a docker install is detected and needs to be removed
- apt is the package manager
However, by default, roles cannot be invoked more than once in the same play, unless `allow_duplicates: true` is set for that role. This results in the failure of the `containerd | Remove containerd repository` task, since only the docker vars will be loaded in the play, and `containerd_repo_info.repos`, normally populated by containerd/vars, is left empty.
This change sets `allow_duplicates: true` for `containerd-common` which fixes the currently failing containerd tasks if docker was detected and removed in the same play.
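A minimal sketch of the meta change, assuming the role's standard layout (the path is illustrative):

```yaml
# roles/container-engine/containerd-common/meta/main.yml
---
# Let both the docker and containerd roles import this role as a
# dependency within the same play.
allow_duplicates: true
```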