# Kubernetes on UpCloud with Terraform

Provision a Kubernetes cluster on UpCloud using Terraform and Kubespray.
## Overview

The setup looks like the following:
```
 Kubernetes cluster
+-----------------------+
|   +--------------+    |
|   | +--------------+  |
| --> |              |  |
|   | | Master/etcd  |  |
|   | | node(s)      |  |
|   +-+              |  |
|     +--------------+  |
|           ^           |
|           |           |
|           v           |
|   +--------------+    |
|   | +--------------+  |
| --> |              |  |
|   | | Worker       |  |
|   | | node(s)      |  |
|   +-+              |  |
|     +--------------+  |
+-----------------------+
```
The nodes use a private network for node-to-node communication and a public interface for all external communication.
## Requirements
- Terraform 0.13.0 or newer
## Quickstart
NOTE: Assumes you are at the root of the kubespray repo.
For authentication to your cluster, you can use the following environment variables:
```bash
export TF_VAR_UPCLOUD_USERNAME=username
export TF_VAR_UPCLOUD_PASSWORD=password
```
To allow API access to your UpCloud account, you need to enable API connections on the Account page in your UpCloud Hub.
Copy the cluster configuration file.
```bash
CLUSTER=my-upcloud-cluster
cp -r inventory/sample inventory/$CLUSTER
cp contrib/terraform/upcloud/cluster-settings.tfvars inventory/$CLUSTER/
export ANSIBLE_CONFIG=ansible.cfg
cd inventory/$CLUSTER
```
Edit `cluster-settings.tfvars` to match your requirements.
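As a rough illustration, a minimal `cluster-settings.tfvars` might look like the sketch below. The variable names follow the Variables section at the end of this document; the zone, image name, sizes, and key values are placeholders that you must replace with values valid for your UpCloud account. The shipped `cluster-settings.tfvars` remains the authoritative reference for the full structure and defaults.

```hcl
# Rough sketch only -- all values are placeholders; start from the shipped
# cluster-settings.tfvars, which has the authoritative structure and defaults.
prefix = "my-upcloud-cluster"
zone   = "de-fra1"                    # assumed zone id; pick one available to your account

template_name = "Ubuntu Server 22.04 LTS (Jammy Jellyfish)"  # assumed base image name
username      = "ubuntu"

ssh_public_keys = [
  "ssh-ed25519 AAAA... user@example", # replace with your own public key(s)
]

machines = {
  "master-0" = {
    node_type = "master"
    cpu       = "2"                   # or use a preconfigured `plan` instead of cpu/mem
    mem       = "4096"
    disk_size = 250
    additional_disks = {}
  }
  "worker-0" = {
    node_type = "worker"
    cpu       = "2"
    mem       = "4096"
    disk_size = 250
    additional_disks = {}
  }
}
```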
Run Terraform to create the infrastructure.
```bash
terraform init ../../contrib/terraform/upcloud
terraform apply --var-file cluster-settings.tfvars \
  -state=tfstate-$CLUSTER.tfstate \
  ../../contrib/terraform/upcloud/
```
You should now have an inventory file named `inventory.ini` that you can use with Kubespray to set up a cluster.
It is a good idea to check that you have basic SSH connectivity to the nodes. You can do that by:
```bash
ansible -i inventory.ini -m ping all
```
You can set up Kubernetes with Kubespray using the generated inventory:
```bash
ansible-playbook -i inventory.ini ../../cluster.yml -b -v
```
## Teardown

You can tear down your infrastructure using the following Terraform command:
```bash
terraform destroy --var-file cluster-settings.tfvars \
  -state=tfstate-$CLUSTER.tfstate \
  ../../contrib/terraform/upcloud/
```
## Variables

An example combining several of these variables is sketched after the list.

* `prefix`: Prefix to add to all resources; if set to `""`, no prefix is added
* `template_name`: The name or UUID of a base image
* `username`: A user to access the nodes, defaults to `"ubuntu"`
* `private_network_cidr`: CIDR to use for the private network, defaults to `"172.16.0.0/24"`
* `ssh_public_keys`: List of public SSH keys to install on all machines
* `zone`: The zone where to run the cluster
* `machines`: Machines to provision. The key of this object will be used as the name of the machine
  * `node_type`: The role of this node (master|worker)
  * `plan`: Preconfigured cpu/mem plan to use (disables the `cpu` and `mem` attributes below)
  * `cpu`: Number of CPU cores
  * `mem`: Memory size in MB
  * `disk_size`: The size of the storage in GB
  * `additional_disks`: Additional disks to attach to the node
    * `size`: The size of the additional disk in GB
    * `tier`: The tier of disk to use (`maxiops` is currently the only option)
* `firewall_enabled`: Enable firewall rules
* `firewall_default_deny_in`: Set the firewall to deny inbound traffic by default. Automatically adds UpCloud DNS server and NTP port allowlisting
* `firewall_default_deny_out`: Set the firewall to deny outbound traffic by default
* `master_allowed_remote_ips`: List of IP ranges that should be allowed to access the API of the masters
  * `start_address`: Start of address range to allow
  * `end_address`: End of address range to allow
* `k8s_allowed_remote_ips`: List of IP ranges that should be allowed SSH access to all nodes
  * `start_address`: Start of address range to allow
  * `end_address`: End of address range to allow
* `master_allowed_ports`: List of port ranges that should be allowed to access the masters
  * `protocol`: Protocol (tcp|udp|icmp)
  * `port_range_min`: Start of port range to allow
  * `port_range_max`: End of port range to allow
  * `start_address`: Start of address range to allow
  * `end_address`: End of address range to allow
* `worker_allowed_ports`: List of port ranges that should be allowed to access the workers
  * `protocol`: Protocol (tcp|udp|icmp)
  * `port_range_min`: Start of port range to allow
  * `port_range_max`: End of port range to allow
  * `start_address`: Start of address range to allow
  * `end_address`: End of address range to allow
* `loadbalancer_enabled`: Enable managed load balancer
* `loadbalancer_plan`: Plan to use for the load balancer (development|production-small)
* `loadbalancers`: Ports to load balance and which machines to forward to. The key of this object will be used as the name of the load balancer frontends/backends
  * `port`: Port to load balance
  * `target_port`: Port on the backend servers
  * `backend_servers`: List of servers that traffic to the port should be forwarded to
* `server_groups`: Group servers together
  * `servers`: The servers that should be included in the group
  * `anti_affinity_policy`: Defines if a server group is an anti-affinity group. Setting this to `"strict"` or `"yes"` results in the servers in the group being placed on separate compute hosts. The value can be `"strict"`, `"yes"`, or `"no"`: `"strict"` does not allow servers in the same server group to be on the same host, while `"yes"` is a best-effort policy that tries to place servers on different hosts, but this is not guaranteed
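As a hedged illustration of how the firewall, load balancer, and server group variables fit together, here is a sketch with hypothetical addresses, ports, and names. The field names follow the list above; the values (and the exact list/object shapes) should be checked against the shipped `cluster-settings.tfvars`, and the server names must match keys from `machines`.

```hcl
# Illustration only -- addresses, ports, and names are placeholders.
firewall_enabled          = true
firewall_default_deny_in  = false
firewall_default_deny_out = false

master_allowed_remote_ips = [
  {
    start_address = "192.0.2.0"      # example range; restrict to your own networks
    end_address   = "192.0.2.255"
  }
]

k8s_allowed_remote_ips = [
  {
    start_address = "192.0.2.0"
    end_address   = "192.0.2.255"
  }
]

loadbalancer_enabled = true
loadbalancer_plan    = "development"
loadbalancers = {
  "http" = {
    port            = 80
    target_port     = 30080          # assumed NodePort exposed by the workers
    backend_servers = ["worker-0"]   # must match a machine name from `machines`
  }
}

server_groups = {
  "control-plane" = {
    servers              = ["master-0"]
    anti_affinity_policy = "strict"
  }
}
```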