# Kubernetes on UpCloud with Terraform

Provision a Kubernetes cluster on UpCloud using Terraform and Kubespray.

## Overview

The setup looks like the following:

```text
   Kubernetes cluster
+--------------------------+
|      +--------------+    |
|      | +--------------+  |
| -->  | |              |  |
|      | | Master/etcd  |  |
|      | | node(s)      |  |
|      +-+              |  |
|        +--------------+  |
|              ^           |
|              |           |
|              v           |
|      +--------------+    |
|      | +--------------+  |
| -->  | |              |  |
|      | |    Worker    |  |
|      | |    node(s)   |  |
|      +-+              |  |
|        +--------------+  |
+--------------------------+
```

The nodes use a private network for node-to-node communication and a public interface for all external communication.

## Requirements

- Terraform 0.13.0 or newer

## Quickstart

NOTE: Assumes you are at the root of the kubespray repo.

To authenticate against UpCloud, you can use environment variables:

```bash
export TF_VAR_UPCLOUD_USERNAME=username
export TF_VAR_UPCLOUD_PASSWORD=password
```

To allow API access to your UpCloud account, you need to enable API connections on the Account page in your UpCloud Hub.

Copy the cluster configuration file:

```bash
CLUSTER=my-upcloud-cluster
cp -r inventory/sample inventory/$CLUSTER
cp contrib/terraform/upcloud/cluster-settings.tfvars inventory/$CLUSTER/
export ANSIBLE_CONFIG=ansible.cfg
cd inventory/$CLUSTER
```

Edit `cluster-settings.tfvars` to match your requirements.

Run Terraform to create the infrastructure:

```bash
terraform init ../../contrib/terraform/upcloud
terraform apply --var-file cluster-settings.tfvars \
    -state=tfstate-$CLUSTER.tfstate \
    ../../contrib/terraform/upcloud/
```

You should now have an inventory file named `inventory.ini` that you can use with Kubespray to set up a cluster.

It is a good idea to check that you have basic SSH connectivity to the nodes:

```bash
ansible -i inventory.ini -m ping all
```

You can then set up Kubernetes with Kubespray using the generated inventory:

```bash
ansible-playbook -i inventory.ini ../../cluster.yml -b -v
```

## Teardown

You can tear down your infrastructure using the following Terraform command:

```bash
terraform destroy --var-file cluster-settings.tfvars \
    -state=tfstate-$CLUSTER.tfstate \
    ../../contrib/terraform/upcloud/
```

## Variables

- `prefix`: Prefix to add to all resources; if set to `""`, no prefix is added
- `template_name`: The name or UUID of a base image
- `username`: A user to access the nodes; defaults to `"ubuntu"`
- `private_network_cidr`: CIDR to use for the private network; defaults to `"172.16.0.0/24"`
- `ssh_public_keys`: List of public SSH keys to install on all machines
- `zone`: The zone in which to run the cluster
- `machines`: Machines to provision. The key of each entry is used as the name of the machine
  - `node_type`: The role of this node (`master`|`worker`)
  - `plan`: Preconfigured CPU/memory plan to use (disables the `cpu` and `mem` attributes below)
  - `cpu`: Number of CPU cores
  - `mem`: Memory size in MB
  - `disk_size`: The size of the storage in GB
  - `additional_disks`: Additional disks to attach to the node
    - `size`: The size of the additional disk in GB
    - `tier`: The tier of disk to use (`maxiops` is the only option at the moment)
- `firewall_enabled`: Enable firewall rules
- `firewall_default_deny_in`: Set the firewall to deny inbound traffic by default. Automatically adds allowlist rules for the UpCloud DNS servers and the NTP port
- `firewall_default_deny_out`: Set the firewall to deny outbound traffic by default
- `master_allowed_remote_ips`: List of IP ranges that should be allowed to access the API of masters
  - `start_address`: Start of address range to allow
  - `end_address`: End of address range to allow
- `k8s_allowed_remote_ips`: List of IP ranges that should be allowed SSH access to all nodes
  - `start_address`: Start of address range to allow
  - `end_address`: End of address range to allow
- `master_allowed_ports`: List of port ranges that should be allowed to access the masters
  - `protocol`: Protocol (`tcp`|`udp`|`icmp`)
  - `port_range_min`: Start of port range to allow
  - `port_range_max`: End of port range to allow
  - `start_address`: Start of address range to allow
  - `end_address`: End of address range to allow
- `worker_allowed_ports`: List of port ranges that should be allowed to access the workers
  - `protocol`: Protocol (`tcp`|`udp`|`icmp`)
  - `port_range_min`: Start of port range to allow
  - `port_range_max`: End of port range to allow
  - `start_address`: Start of address range to allow
  - `end_address`: End of address range to allow
- `loadbalancer_enabled`: Enable the managed load balancer
- `loadbalancer_plan`: Plan to use for the load balancer (`development`|`production-small`)
- `loadbalancers`: Ports to load balance and which machines to forward to. The key of each entry is used as the name of the load balancer frontends/backends
  - `port`: Port to load balance
  - `target_port`: Port on the backend servers
  - `backend_servers`: List of servers that traffic to the port should be forwarded to
- `server_groups`: Group servers together
  - `servers`: The servers that should be included in the group
  - `anti_affinity_policy`: Defines whether the server group is an anti-affinity group. The value can be `"strict"`, `"yes"` or `"no"`. With `"strict"`, servers in the same group are never placed on the same compute host (a strict policy does not allow it); `"yes"` is a best-effort policy that tries to place servers on different hosts, but this is not guaranteed
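
The variables above come together in `cluster-settings.tfvars`. A minimal sketch of what such a file could look like is shown below; the zone, image name, SSH key and machine names are placeholder values, and the exact attribute shapes should be checked against the `cluster-settings.tfvars` shipped in this directory:

```hcl
# Illustrative cluster-settings.tfvars sketch -- adapt all values to your account.
prefix = "my-upcloud-cluster-"
zone   = "fi-hel1"

template_name = "Ubuntu Server 22.04 LTS (Jammy Jellyfish)"
username      = "ubuntu"

ssh_public_keys = [
  "ssh-rsa AAAA... user@example",
]

machines = {
  # One control-plane node and one worker; keys become machine names.
  "master-0" = {
    node_type        = "master"
    plan             = null   # set a plan name instead of cpu/mem if preferred
    cpu              = "2"
    mem              = "4096"
    disk_size        = 250
    additional_disks = {}
  }
  "worker-0" = {
    node_type        = "worker"
    plan             = null
    cpu              = "2"
    mem              = "4096"
    disk_size        = 250
    additional_disks = {}
  }
}

firewall_enabled     = false
loadbalancer_enabled = false
```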