kubespray/contrib/terraform/nifcloud
Yoshitaka Fujii 89a0f515c7
Added terraform support for NIFCLOUD (#10227)
* Add NIFCLOUD

* Add tf-validate-nifcloud in gitlab-ci
2023-06-19 02:02:22 -07:00
modules/kubernetes-cluster
sample-inventory
.gitignore
README.md
generate-inventory.sh
main.tf
output.tf
terraform.tf
variables.tf

README.md

Kubernetes on NIFCLOUD with Terraform

Provision a Kubernetes cluster on NIFCLOUD using Terraform and Kubespray

Overview

The setup looks like the following:

                              Kubernetes cluster
                        +----------------------------+
+---------------+       |   +--------------------+   |
|               |       |   | +--------------------+ |
| API server LB +---------> | |                    | |
|               |       |   | | Control Plane/etcd | |
+---------------+       |   | | node(s)            | |
                        |   +-+                    | |
                        |     +--------------------+ |
                        |           ^                |
                        |           |                |
                        |           v                |
                        |   +--------------------+   |
                        |   | +--------------------+ |
                        |   | |                    | |
                        |   | |        Worker      | |
                        |   | |        node(s)     | |
                        |   +-+                    | |
                        |     +--------------------+ |
                        +----------------------------+

Requirements

  • Terraform 1.3.7

Quickstart

Export Variables

  • Your NIFCLOUD credentials:

    export NIFCLOUD_ACCESS_KEY_ID=<YOUR ACCESS KEY>
    export NIFCLOUD_SECRET_ACCESS_KEY=<YOUR SECRET ACCESS KEY>
    
  • The name of the SSH key used to connect to the instances:

    export TF_VAR_SSHKEY_NAME=<YOUR SSHKEY NAME>
    
  • Your public IP address, used to allow connections to the bastion server:

    export TF_VAR_working_instance_ip=$(curl ifconfig.me)
    

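Before running terraform, you can sanity-check that all four variables are set. This is an optional helper, not part of the repository; it relies on bash's indirect expansion and the variable names from the exports above:

```shell
# check_env: print the name of each required variable that is empty or unset.
check_env() {
  local v
  for v in NIFCLOUD_ACCESS_KEY_ID NIFCLOUD_SECRET_ACCESS_KEY \
           TF_VAR_SSHKEY_NAME TF_VAR_working_instance_ip; do
    # ${!v} is bash indirect expansion: the value of the variable named by $v
    [ -n "${!v:-}" ] || echo "missing: $v"
  done
}
check_env
```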
Create The Infrastructure

  • Run terraform:

    terraform init
    terraform apply -var-file ./sample-inventory/cluster.tfvars
    

Set Up Kubernetes

  • Generate cluster configuration file:

    ./generate-inventory.sh > sample-inventory/inventory.ini
    
    
  • Export Variables:

    BASTION_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.bastion_info | to_entries[].value.public_ip')
    API_LB_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_lb')
    CP01_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_info | to_entries[0].value.private_ip')
    export ANSIBLE_SSH_ARGS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand=\"ssh root@${BASTION_IP} -W %h:%p\""
    
  • Set up ssh-agent:

    eval $(ssh-agent)
    ssh-add <THE PATH TO YOUR SSH KEY>
    
  • Run cluster.yml playbook:

    cd ./../../../
    ansible-playbook -i contrib/terraform/nifcloud/sample-inventory/inventory.ini cluster.yml
    

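The jq queries in the Export Variables step assume the `kubernetes_cluster` terraform output is shaped roughly as below (the map keys and IP addresses here are made-up placeholders, not real defaults). You can experiment with the queries against a mock file:

```shell
# Hypothetical terraform output shape; keys and addresses are illustrative only.
cat > /tmp/tf-output.json <<'EOF'
{
  "kubernetes_cluster": {
    "value": {
      "bastion_info": { "bn01": { "public_ip": "203.0.113.10" } },
      "control_plane_lb": "198.51.100.1",
      "control_plane_info": { "cp01": { "private_ip": "192.168.0.11" } }
    }
  }
}
EOF
# Same query as the BASTION_IP line above, run against the mock file
jq -r '.kubernetes_cluster.value.bastion_info | to_entries[].value.public_ip' /tmp/tf-output.json
# -> 203.0.113.10
```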
Connecting to Kubernetes

  • Install kubectl on your local machine

  • Fetch the kubeconfig file:

    mkdir -p ~/.kube
    scp -o ProxyCommand="ssh root@${BASTION_IP} -W %h:%p" root@${CP01_IP}:/etc/kubernetes/admin.conf ~/.kube/config
    
  • Add the API server load balancer to /etc/hosts:

    echo "${API_LB_IP} lb-apiserver.kubernetes.local" | sudo tee -a /etc/hosts
    
  • Run kubectl:

    kubectl get node
    

Variables

  • region: The region in which to run the cluster
  • az: The availability zone in which to run the cluster
  • private_ip_bn: Private IP address of the bastion server
  • private_network_cidr: CIDR of the private network
  • instances_cp: Map of machines to provision as control plane nodes. The key of each entry is used as part of the machine's name
    • private_ip: Private IP address of the machine
  • instances_wk: Map of machines to provision as worker nodes. The key of each entry is used as part of the machine's name
    • private_ip: Private IP address of the machine
  • instance_key_name: The name of the key pair to use for the instances
  • instance_type_bn: The instance type of the bastion server
  • instance_type_wk: The instance type of the worker nodes
  • instance_type_cp: The instance type of the control plane nodes
  • image_name: OS image used for the instances
  • working_instance_ip: The IP address allowed to connect to the bastion server
  • accounting_type: Accounting type. (1: monthly, 2: pay per use)
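These variables are typically supplied via cluster.tfvars. A hypothetical example follows; every value here is a placeholder, so consult sample-inventory/cluster.tfvars for the real defaults and valid NIFCLOUD instance types and image names:

```hcl
region = "jp-east-1"
az     = "east-11"

private_ip_bn        = "192.168.0.10"
private_network_cidr = "192.168.0.0/24"

# Keys ("cp01", "wk01") become part of the machine names
instances_cp = {
  cp01 = { private_ip = "192.168.0.11" }
}
instances_wk = {
  wk01 = { private_ip = "192.168.0.21" }
}

instance_key_name = "deployerkey" # must already exist in NIFCLOUD
instance_type_bn  = "e-medium"
instance_type_cp  = "e-medium"
instance_type_wk  = "e-medium"

image_name      = "Ubuntu Server 22.04 LTS"
accounting_type = "2" # 2: pay per use
```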