* Setting the {kube,system}ReservedCgroup does not make the kubelet
enforce the limits; adding the corresponding entries to
enforceNodeAllocatable does (see the sketch below).
- Use more explicit variable names.
- Add a warning about enforcing kube and system limits.
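A minimal sketch of the relevant KubeletConfiguration fields (values
are illustrative):

```yaml
# Pointing the kubelet at the cgroups alone does not enforce anything:
kubeReservedCgroup: /kube.slice
systemReservedCgroup: /system.slice
# Enforcement only happens for the entries listed here:
enforceNodeAllocatable:
  - pods
  - kube-reserved
  - system-reserved
```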
* Streamline kubelet resource reservation:
- Remove the "master" variants: those should be handled by group_vars.
- Use empty defaults to fall back to the kubelet's default
configuration (see the sketch after this list).
* Exercise the new semantics in CI.
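For instance, a per-group reservation could live in group_vars along
these lines (variable names and values are illustrative):

```yaml
# group_vars/kube_control_plane.yml -- control-plane-specific values,
# replacing the old "master" variable variants (hypothetical numbers):
kube_memory_reserved: 512Mi
kube_cpu_reserved: 200m

# group_vars/all.yml -- empty defaults defer to the kubelet's own defaults:
kube_memory_reserved:
kube_cpu_reserved:
```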
* We don't need to organize the cgroup hierarchy differently if we
don't use resource reservation, so remove that variability and always
place the kubelet at the same location (defaulting to
/runtime.slice/kubelet.service).
* Same for the container "runtimes" (which in fact means the container
**engines**, i.e. containerd or CRI-O, not runc or kata).
* Accordingly, there is no need for much customization of the cgroup
hierarchy, so reduce it to `kube_slice` and `system_slice`. Everything
else is derived from those and is not user-modifiable.
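A sketch of the reduced knobs (the defaults shown are assumptions):

```yaml
# The only user-facing cgroup variables left:
kube_slice: kube.slice
system_slice: system.slice
# Everything else is derived from those; e.g. the kubelet always lives
# in /runtime.slice/kubelet.service.
```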
* Correct the semantics of kube_reserved and system_reserved:
- kube-reserved and system-reserved do not on their own guarantee
that resources will be available for the respective cgroups; they
only serve to compute NodeAllocatable.
See https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable
For the kubelet slice, we let systemd do it for us by specifying a
slice in the unit file; it's implicitly created on service start.
For the system slice, it's not the kubelet's responsibility to create
it.
See https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved ,
which explicitly states: "Note that kubelet does not create
--system-reserved-cgroup if it doesn't exist".
systemd takes care of creating that for us; we only have to point the
kubelet to it if needed.
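Putting it together, a hedged sketch of the resulting kubelet
configuration (values are illustrative):

```yaml
# Reservations only feed the NodeAllocatable computation:
#   Allocatable = NodeCapacity - kubeReserved - systemReserved - evictionThreshold
kubeReserved:
  cpu: 200m
  memory: 512Mi
systemReserved:
  cpu: 500m
  memory: 1Gi
# The kubelet is merely pointed at cgroups that systemd already manages:
kubeReservedCgroup: /kube.slice
systemReservedCgroup: /system.slice
```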
Testing for group membership with group names makes Kubespray more
tolerant of the structure of the inventory:
`inventory_hostname in groups["some_group"]` fails if "some_group" is
not defined, whereas `"some_group" in group_names` does not.
- Use proper syntax highlighting for config.rb examples
- Consistent shell style ($ as prompt)
- Document only one way to do each thing
- Remove OS specific details
The current way to handle a custom inventory in Vagrant is a bit
hackish: it copies files around and can break Vagrantfile parsing in
corner-case scenarios (removing Vagrant inventories, or the inventory
being copied into the Vagrant inventory).
Instead, simply pass additional inventories to the ansible-playbook
command line as raw `-i` arguments.
This also makes supporting multiple inventories trivial, so we add a
new `$inventories` variable for that purpose.
Specifying one directory for kubeadm patches is not ideal:
1. It does not allow working with multiple inventories easily.
2. It does not support Ansible templating of the patches.
3. Ansible path searching can sometimes be confusing.
Instead, provide the patches directly in a variable, and add some
quality-of-life options to handle component targeting and patch
ordering more explicitly (`target` and `type` keys, which are
translated to the kubeadm scheme based on the patch file name).
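A sketch of the resulting variable (structure and values are
hypothetical); kubeadm derives target, ordering and patch type from
the file name scheme `target[suffix][+patchtype].yaml`, so the
`target` and `type` keys are mapped onto generated file names:

```yaml
kubeadm_patches:
  - target: kube-apiserver        # which component the patch applies to
    type: strategic               # strategic | merge | json
    patch:
      metadata:
        annotations:
          example.io/managed-by: kubespray  # arbitrary example content
```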
kubernetes/control-plane and kubernetes/kubeadm roles both push kubeadm
patches in the same way.
Extract that code and make it a dependency of both.
This is safe because it's only configuration for kubeadm, which only
takes effect when kubeadm is run.
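In Ansible terms the extraction could look like this (the shared
role's name is hypothetical):

```yaml
# roles/kubernetes/control-plane/meta/main.yml
# (same dependency added to kubernetes/kubeadm)
dependencies:
  - role: kubeadm/patches
```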
* Update multus to v4.1.0 and clarify cilium compatibility
* Fix: bug introduced by #10934 where the template would break if multus was defined
* Set priorityClassName to system-node-critical for multus pods
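The priority class goes into the pod template of the multus daemonset,
along these lines (fragment only):

```yaml
spec:
  template:
    spec:
      # Shield multus pods from eviction under node pressure:
      priorityClassName: system-node-critical
```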
Allow the script to be called with a list of components, to download
new version checksums only for those.
By default, we fetch new version checksums for all components
supported by the script.
runc upstream does not provide one hash file per asset in its
releases, but a single file with all the hashes.
To handle this (and/or any arbitrary format from upstreams), add a
dictionary mapping the name of the download to a lambda function which
transforms the file provided by upstream into a dictionary of hashes,
keyed by architecture.
The script is currently limited to one hardcoded URL for
kubernetes-related binaries, and a fixed set of architectures.
The solution is three-fold:
1. Use a URL template dictionary for each download -> this makes it
easy to add support for new downloads (see the sketch after this
list).
2. Source the architectures to search for from the existing data.
3. Enumerate the existing versions in the data and start searching
from the latest one until no newer version is found (newer in the
version-order sense, irrespective of actual age).
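The URL templates could be declared as data along these lines (sketch;
entries and layout are illustrative):

```yaml
# One checksum-URL template per download, expanded with {version} and
# {arch} while probing for new releases. runc has a single hash file
# for all architectures, hence no {arch} in its template:
checksum_urls:
  kubectl: "https://dl.k8s.io/release/v{version}/bin/linux/{arch}/kubectl.sha256"
  runc: "https://github.com/opencontainers/runc/releases/download/v{version}/runc.sha256sum"
```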