* Support kubeadm patches in v1beta3
* Update kubeadm patches sample files in inventory
* Fix pre-commit syntax
* Set kubeadm_patches enabled to false in sample inventory
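A minimal inventory sketch of what the kubeadm patches configuration can look like (layout assumed from the sample inventory; paths are illustrative):

```yaml
# Hypothetical inventory snippet: enable kubeadm patches and point at a
# directory holding kubeadm patch files (e.g. kube-apiserver+merge.yaml).
kubeadm_patches:
  enabled: false                               # set to true to apply patches on init/join
  source_dir: "{{ inventory_dir }}/patches"    # assumed location of your patch files
  dest_dir: "/etc/kubernetes/patches"          # directory passed to kubeadm --patches
```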
* Drop support for Cilium < 1.10
Signed-off-by: necatican <necaticanyildirim@gmail.com>
* Synchronize Cilium templates for 1.11.7
Signed-off-by: necatican <contact@necatican.com>
* Set Cilium v1.12.1 as the default version
Signed-off-by: necatican <contact@necatican.com>
Signed-off-by: necatican <necaticanyildirim@gmail.com>
Signed-off-by: necatican <contact@necatican.com>
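If you need to stay on an older (but still >= 1.10) release, the version can be pinned in the inventory; a minimal sketch:

```yaml
# Pin Cilium instead of using the new default (v1.12.1).
cilium_version: "v1.11.7"
```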
* feat: make kubernetes owner parametrized
* docs: update hardening guide with configuration for CIS 1.1.19
* fix: set etcd data directory permissions to be compliant with CIS 1.1.12
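A hedged sketch of the related hardening settings, assuming the `kube_owner` variable behind the parametrized Kubernetes owner and the stricter etcd data directory mode:

```yaml
# Owner of the Kubernetes config and certificate files (assumed variable name).
kube_owner: kube
# The etcd data directory is created with restrictive permissions (0700) for CIS 1.1.12.
etcd_data_dir: /var/lib/etcd
```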
* [etcd] Add extra documentation for `etcd_memory_limit` and `etcd_quota_backend_bytes`
Signed-off-by: necatican <necaticanyildirim@gmail.com>
* [etcd] Add support for setting ETCD_MAX_REQUEST_BYTES
Signed-off-by: necatican <necaticanyildirim@gmail.com>
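A minimal, hedged example of the documented etcd tuning knobs (values are illustrative only; the ETCD_MAX_REQUEST_BYTES variable name is assumed from the role defaults):

```yaml
# Limit etcd memory usage; empty/0 means unlimited.
etcd_memory_limit: "2G"
# Raise the backend database size quota in bytes (here 8 GiB).
etcd_quota_backend_bytes: "8589934592"
# Maximum client request size in bytes (rendered as ETCD_MAX_REQUEST_BYTES).
etcd_max_request_bytes: "1572864"
```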
* [cilium] Separate templates for cilium, cilium-operator, and hubble installations
Signed-off-by: necatican <necaticanyildirim@gmail.com>
* [cilium] Update cilium-operator templates
Signed-off-by: necatican <necaticanyildirim@gmail.com>
* [cilium] Allow using custom args and mounting extra volumes for the Cilium Operator
Signed-off-by: necatican <necaticanyildirim@gmail.com>
* [cilium] Update the cilium configmap to filter out the deprecated variables, and add the new variables
Signed-off-by: necatican <necaticanyildirim@gmail.com>
* [cilium] Add an option to use Wireguard encryption on Cilium 1.10 and up
Signed-off-by: necatican <necaticanyildirim@gmail.com>
* [cilium] Update cilium-agent templates
Signed-off-by: necatican <necaticanyildirim@gmail.com>
* [cilium] Bump Cilium version to 1.11.3
Signed-off-by: necatican <necaticanyildirim@gmail.com>
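A hedged inventory sketch for the new WireGuard option (requires Cilium >= 1.10; variable names assumed from the Cilium role defaults):

```yaml
# Enable transparent encryption and use WireGuard instead of IPsec.
cilium_encryption_enabled: true
cilium_encryption_type: "wireguard"
```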
* - add ability to specify the network_zone in hetzner terraform
- Export the network id from hetzner terraform to the generated inventory.ini
* - Add with_networks variable to allow different deployments of hcloud controller manager
- Add network id to hcloud controller secret (added via the inventory)
- Don't include extra_args if it's not set
* [calico] make vxlan encapsulation the default
* don't enable ipip encapsulation by default
* set calico_network_backend by default to vxlan
* update sample inventory and documentation
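In inventory terms the new Calico defaults amount to roughly the following (a sketch; see the sample inventory for the authoritative values):

```yaml
# VXLAN is now the default encapsulation; IP-in-IP is disabled.
calico_network_backend: vxlan
calico_ipip_mode: "Never"
calico_vxlan_mode: "Always"
```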
* [CI] pin default calico parameters for upgrade tests to ensure proper upgrade
* [CI] improve netchecker connectivity testing
* [CI] show logs for tests
* [calico] tweak task name
* [CI] Don't run the provisioner from vagrant since we run it in testcases_run.sh
* [CI] move kube-router tests to vagrant to avoid network connectivity issues during netchecker check
* service proxy mode still fails connectivity tests, so keep it in manual mode
* [kube-router] account for containerd use-case
* Use sysctl_file_path variable for all sysctl_file locations
* Add sysctl_file_path variable to kubespray-defaults
* Remove previously used sysctl file locations if present
* Use explicit filename in roles/kubernetes/node/defaults/main.yml
* Defaults: use explicit value
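A sketch of the new variable, assuming the default used in kubespray-defaults:

```yaml
# Single location for all sysctl settings written by Kubespray.
sysctl_file_path: "/etc/sysctl.d/99-sysctl.conf"
```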
* Update config.toml.j2
I think the code from this commit does not work completely.
Example registry address: a.com:5000
The insecure registry entry must be http://a.com:5000,
but this code adds the insecure registry as a.com:5000 (without http://).
If there is no http:// scheme, containerd connects over https even if insecure_skip_verify = true.
The solution is to edit the code.
* Update config.toml.j2
* Update containerd.yml
* Update containerd.yml
* Update containerd.yml
* Update config.toml.j2
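A hedged example of the intended behaviour, assuming the `containerd_insecure_registries` mapping exposed by the containerd role: the mirror endpoint must carry the http:// scheme, otherwise containerd falls back to https.

```yaml
# Hypothetical inventory snippet: map the registry name to an endpoint
# that explicitly uses http:// so containerd does not attempt https.
containerd_insecure_registries:
  "a.com:5000": "http://a.com:5000"
```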
* containerd: change default resolvconf_mode to host_resolvconf
* Wait for kube-apiserver to come back after pod refresh
* Handle resolv.conf gracefully
* Retain currently configured DNS entries to ensure we don't break the resolvers
* Suse uses wickedd for network management, so no dhcp hooks are needed
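The changed default can be overridden in the inventory if needed; a minimal sketch:

```yaml
# containerd deployments now manage /etc/resolv.conf on the host by default.
resolvconf_mode: host_resolvconf
```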
* Molecule: increase ansible timeout
* CI: Increase ansible timeout to 120s for Packet jobs
* set ingress-nginx default terminationGracePeriodSeconds to 5 min to allow connections to drain
* Add ingress_nginx_termination_grace_period_seconds at sample inventory
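A minimal sketch of the new sample-inventory variable (5 minutes = 300 seconds):

```yaml
# Give in-flight connections time to drain before the controller pod exits.
ingress_nginx_termination_grace_period_seconds: 300
```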
Installing containerd on CentOS requires downloading its binaries,
but offline.yml has no entries for those URLs.
The default download URLs are in:
roles/download/defaults/main.yml:runc_download_url: "https://github.com/opencontainers/runc/releases/download/{{ runc_version }}/runc.{{ image_arch }}"
roles/download/defaults/main.yml:containerd_download_url: "https://github.com/containerd/containerd/releases/download/v{{ containerd_version }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"
If I use the default offline.yml, the "download files" task fails
because the runc and containerd download URLs are not offline ones.
I want to fix this;
just adding 2 new lines is enough.
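The two added lines would look roughly like this in offline.yml, assuming the `files_repo` mirror variable used by the other entries there:

```yaml
# Hypothetical offline.yml additions: point runc/containerd at the local file mirror.
runc_download_url: "{{ files_repo }}/github.com/opencontainers/runc/releases/download/{{ runc_version }}/runc.{{ image_arch }}"
containerd_download_url: "{{ files_repo }}/github.com/containerd/containerd/releases/download/v{{ containerd_version }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"
```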
* Defaults: replace docker with containerd as our default container_manager
* CI: Use docker for download_localhost test
* Defaults: with container_manager=containerd we need etcd_deployment_type=host
* CI: Run weave jobs with docker
* CI: Vagrant don't download_force_cache
* CI: Fix upgrade tests
* should run compatibly with old settings, which means docker
* we need to run with a distro that has at least a modern containerd;
this means moving from debian9 to debian10 to allow `containerd_version`
to match between 2.17 and master
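Expressed as inventory variables, the new defaults amount to the following (a sketch; docker can still be selected explicitly):

```yaml
# containerd is now the default engine; etcd must then run on the host.
container_manager: containerd
etcd_deployment_type: host
```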
* add metallb auto-assign property for main IP range & update addons.yml for sample inventory
* add a newline at the end of roles/kubernetes-apps/metallb/defaults/main.yml
* set the default value of metallb_auto_assign to true
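A hedged sketch of the MetalLB addon settings touched here (the address pool is illustrative; variable layout assumed from the addon defaults):

```yaml
metallb_enabled: true
metallb_ip_range:
  - "192.168.1.240-192.168.1.250"   # example pool for the main IP range
metallb_auto_assign: true           # new default: auto-assign from the main range
```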
The addon-resizer container can reduce the CPU and memory resource
limits of the metrics-server container in the pod, and that caused
OOMKilled errors.
In addition, the original metrics-server manifest doesn't contain
the addon-resizer container, as shown in [1].
So this adds a metrics_server_resizer option to control the addon-resizer
container deployment; the default value is false to keep it stable
for most environments.
[1]: 527679e5e8/manifests/base/deployment.yaml
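The resulting toggle, as it would appear in the addons inventory (sketch):

```yaml
metrics_server_enabled: true
# Deploy the addon-resizer sidecar only when explicitly requested.
metrics_server_resizer: false
```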
* CSI: update CSI snapshot CRDs
* CSI: update snapshot controller tag version with kubernetes specific versions
* CSI: allow enabling csi_snapshot_controller independent of Cinder CSI
* CSI: Align csi-snapshot-controller with upstream and use a Deployment instead of a StatefulSet
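A minimal sketch of enabling the snapshot controller on its own, assuming the `csi_snapshot_controller_enabled` toggle:

```yaml
# Deploy the csi-snapshot-controller even without Cinder CSI.
csi_snapshot_controller_enabled: true
```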
Changes:
* ClusterRole updated according to the latest manifests from
https://github.com/kubernetes/cloud-provider-vsphere
* vSphere CPI/CSI default versions bumped and
tested successfully on K8S 1.21.1
* vSphere documentation updated
Signed-off-by: Vitaliy D <vi7alya@gmail.com>
* Override the default value of containerd's root, state, and oom_score configurations
* Add tests data for containerd_storage_dir, containerd_state_dir and containerd_oom_score variables
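The overridable settings named above, shown with their usual containerd defaults (a sketch):

```yaml
containerd_storage_dir: "/var/lib/containerd"   # containerd root
containerd_state_dir: "/run/containerd"         # containerd state
containerd_oom_score: 0                         # oom_score applied to the daemon
```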
* add initial MetalLB docs
* metallb allow disabling the deployment of the metallb speaker
* calico>=3.18 allow using calico to advertise service loadbalancer IPs
* Document the use of MetalLB and Calico
* clean MetalLB docs
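A hedged sketch combining the two features: disable the MetalLB speaker and let Calico (>= 3.18) advertise the LoadBalancer IPs instead (variable names assumed from the respective role defaults):

```yaml
metallb_enabled: true
metallb_speaker_enabled: false                # don't deploy the speaker DaemonSet
calico_advertise_service_loadbalancer_ips:
  - "192.168.1.240/29"                        # example LB IP pool advertised via BGP
```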
* rename ansible groups to use _ instead of -
k8s-cluster -> k8s_cluster
k8s-node -> k8s_node
calico-rr -> calico_rr
no-floating -> no_floating
Note: kube-node,k8s-cluster groups in upgrade CI
need clean-up after v2.16 is tagged
* ensure old groups are mapped to the new ones
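An inventory group layout with the new names would look roughly like this (hosts.yaml sketch; old names keep working for now via the mapping):

```yaml
all:
  children:
    k8s_cluster:
      children:
        kube_control_plane:
          hosts:
            node1:
        kube_node:
          hosts:
            node1:
            node2:
        calico_rr:
          hosts: {}
```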
* AlmaLinux: ansible>2.9.19 is needed to know about AlmaLinux
* AlmaLinux: identify as a centos derivative
* AlmaLinux: add AlmaLinux to checks for CentOS
* Use ansible_os_family to compare family and not distribution
* Remove contrib/vault
This has been marked as broken since 2018 (3dcb914607).
It still references apiserver.pem, which has not been used since ddffdb63bf.
Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
* Finish nuking vault from the codebase
Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
This replaces kube-master with kube_control_plane because of [1]:
The Kubernetes project is moving away from wording that is
considered offensive. A new working group WG Naming was created
to track this work, and the word "master" was declared as offensive.
A proposal was formalized for replacing the word "master" with
"control plane". This means it should be removed from source code,
documentation, and user-facing configuration from Kubernetes and
its sub-projects.
NOTE: The rename is to kube_control_plane rather than kube-control-plane
because hyphens are not valid in Ansible group names.
[1]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint/README.md#motivation
While at it, remove force_certificate_regeneration.
This boolean only forced the renewal of the apiserver certs.
Either run k8s-certs-renew.sh manually or set auto_renew_certificates.
Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
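Instead of the removed boolean, certificate renewal is handled like this (a sketch; the script can still be run by hand):

```yaml
# Renew control-plane certificates automatically via a scheduled job.
auto_renew_certificates: true
```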
* Download Calico KDD CRDs
* Replace kustomize with lineinfile and use ansible assemble module
* Replace find+lineinfile by sed in shell module to avoid nested loop
* add condition on sed
* use a block for the kdd tasks + remove the redundant kdd manifest apply in the "Start Calico resources" task
* add nodeselector and tolerations for metallb
* remove unnecessary commented lines in metallb template
* set default speaker toleration to match original manifest
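For reference, the default speaker toleration being matched is the one from the upstream MetalLB manifest; the overridable knobs would look roughly like this (variable names are assumptions, not confirmed by this log):

```yaml
# Hypothetical inventory override for scheduling the MetalLB components.
metallb_speaker_nodeselector:
  kubernetes.io/os: "linux"
metallb_speaker_tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
```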