Kubernetes on NIFCLOUD with Terraform

Provision a Kubernetes cluster on NIFCLOUD using Terraform and Kubespray

Overview

The setup looks like the following:

                              Kubernetes cluster
                        +----------------------------+
+---------------+       |   +--------------------+   |
|               |       |   | +--------------------+ |
| API server LB +---------> | |                    | |
|               |       |   | | Control Plane/etcd | |
+---------------+       |   | | node(s)            | |
                        |   +-+                    | |
                        |     +--------------------+ |
                        |           ^                |
                        |           |                |
                        |           v                |
                        |   +--------------------+   |
                        |   | +--------------------+ |
                        |   | |                    | |
                        |   | |        Worker      | |
                        |   | |        node(s)     | |
                        |   +-+                    | |
                        |     +--------------------+ |
                        +----------------------------+

Requirements

  • Terraform 1.3.7

Quickstart

Export Variables

  • Your NIFCLOUD credentials:

    export NIFCLOUD_ACCESS_KEY_ID=<YOUR ACCESS KEY>
    export NIFCLOUD_SECRET_ACCESS_KEY=<YOUR SECRET ACCESS KEY>
    
  • The name of the SSH key used to connect to the instances:

    export TF_VAR_SSHKEY_NAME=<YOUR SSHKEY NAME>
    
  • The IP address allowed to connect to the bastion server:

    export TF_VAR_working_instance_ip=$(curl ifconfig.me)
    

Create The Infrastructure

  • Run Terraform:

    terraform init
    terraform apply -var-file ./sample-inventory/cluster.tfvars
    

Set Up Kubernetes

  • Generate the cluster inventory file:

    ./generate-inventory.sh > sample-inventory/inventory.ini
    
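    The generated file groups the provisioned machines into the standard Kubespray inventory groups. A sketch of the shape it produces (the host names and IP addresses here are illustrative, not output of a real run):

```ini
[kube_control_plane]
cp01 ansible_host=192.168.10.11

[etcd]
cp01 ansible_host=192.168.10.11

[kube_node]
wk01 ansible_host=192.168.10.21

[k8s_cluster:children]
kube_control_plane
kube_node
```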
  • Export Variables:

    BASTION_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.bastion_info | to_entries[].value.public_ip')
    API_LB_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_lb')
    CP01_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_info | to_entries[0].value.private_ip')
    export ANSIBLE_SSH_ARGS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand=\"ssh root@${BASTION_IP} -W %h:%p\""
    
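    The jq filters above pull individual fields out of the JSON document that terraform output -json prints. A self-contained sketch of how they work, run against a hypothetical sample of that structure (the host keys and IP addresses are made up for illustration, not taken from a real deployment):

```shell
# Hypothetical sample of the `terraform output -json` structure; the
# bastion/control-plane entries and addresses are illustrative only.
sample='{"kubernetes_cluster":{"value":{
  "bastion_info":{"bn01":{"public_ip":"198.51.100.10"}},
  "control_plane_lb":"203.0.113.5",
  "control_plane_info":{"cp01":{"private_ip":"192.168.10.11"}}}}}'

# Same filters as in the step above, applied to the sample JSON
BASTION_IP=$(echo "$sample" | jq -r '.kubernetes_cluster.value.bastion_info | to_entries[].value.public_ip')
API_LB_IP=$(echo "$sample" | jq -r '.kubernetes_cluster.value.control_plane_lb')
CP01_IP=$(echo "$sample" | jq -r '.kubernetes_cluster.value.control_plane_info | to_entries[0].value.private_ip')
echo "${BASTION_IP} ${API_LB_IP} ${CP01_IP}"
```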
  • Set up ssh-agent:

    eval `ssh-agent`
    ssh-add <THE PATH TO YOUR SSH KEY>
    
  • Run cluster.yml playbook:

    cd ./../../../
    ansible-playbook -i contrib/terraform/nifcloud/sample-inventory/inventory.ini cluster.yml
    

Connecting to Kubernetes

  • Install kubectl on your local machine

  • Fetch the kubeconfig file:

    mkdir -p ~/.kube
    scp -o ProxyCommand="ssh root@${BASTION_IP} -W %h:%p" root@${CP01_IP}:/etc/kubernetes/admin.conf ~/.kube/config
    
  • Add the API server LB to /etc/hosts (note that sudo echo "..." >> /etc/hosts would fail, because the redirection runs in the unprivileged shell; use tee -a instead):

    echo "${API_LB_IP} lb-apiserver.kubernetes.local" | sudo tee -a /etc/hosts
    
  • Run kubectl:

    kubectl get node
    

Variables

  • region: The region in which to create the cluster
  • az: The availability zone in which to create the cluster
  • private_ip_bn: Private IP address of the bastion server
  • private_network_cidr: CIDR of the private network
  • instances_cp: Machines to provision as control plane nodes. The key of each entry is used as part of the machine's name
    • private_ip: Private IP address of the machine
  • instances_wk: Machines to provision as worker nodes. The key of each entry is used as part of the machine's name
    • private_ip: Private IP address of the machine
  • instance_key_name: The key name of the key pair to use for the instances
  • instance_type_bn: The instance type of the bastion server
  • instance_type_wk: The instance type of the worker nodes
  • instance_type_cp: The instance type of the control plane nodes
  • image_name: OS image used for the instances
  • working_instance_ip: The IP address allowed to connect to the bastion server
  • accounting_type: Accounting type (1: monthly, 2: pay per use)
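
A minimal cluster.tfvars sketch wiring these variables together. All values below are placeholders that have not been validated against NIFCLOUD (region, zone, instance type, and image names are assumptions); start from the provided sample-inventory/cluster.tfvars for working defaults:

```hcl
# Placeholder values for illustration only -- region/az/instance-type/image
# names are assumptions, not validated against NIFCLOUD.
region               = "jp-east-1"
az                   = "east-11"
private_ip_bn        = "192.168.10.10"
private_network_cidr = "192.168.10.0/24"

# Each key becomes part of the machine's name
instances_cp = {
  "cp01" = { private_ip = "192.168.10.11" }
}
instances_wk = {
  "wk01" = { private_ip = "192.168.10.21" }
}

instance_type_bn = "e-small"
instance_type_cp = "e-medium"
instance_type_wk = "e-medium"
image_name       = "Ubuntu Server 22.04 LTS"
accounting_type  = "2"
```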