# Kubernetes on NIFCLOUD with Terraform

Provision a Kubernetes cluster on NIFCLOUD using Terraform and Kubespray.

## Overview
The setup looks like the following:

```
                         Kubernetes cluster
                    +-------------------------+
+---------------+   |  +-------------------+  |
|               |   |  |                   |  |
| API server LB +------>  Control Plane /  |  |
|               |   |  |  etcd node(s)     |  |
+---------------+   |  |                   |  |
                    |  +---------+---------+  |
                    |            ^            |
                    |            |            |
                    |            v            |
                    |  +---------+---------+  |
                    |  |                   |  |
                    |  |   Worker node(s)  |  |
                    |  |                   |  |
                    |  +-------------------+  |
                    +-------------------------+
```
## Requirements

- Terraform 1.3.7
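
If you want Terraform itself to enforce this version, a `required_version` constraint can express it. The snippet below is only a minimal sketch: the module's own `terraform.tf` is authoritative, and the `nifcloud/nifcloud` provider source shown here is an assumption for illustration.

```hcl
# Minimal sketch, not the module's actual terraform.tf.
terraform {
  # Pin to the Terraform release listed under Requirements.
  required_version = "1.3.7"

  required_providers {
    # Assumed provider source for NIFCLOUD; verify against terraform.tf.
    nifcloud = {
      source = "nifcloud/nifcloud"
    }
  }
}
```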
## Quickstart

### Export Variables

- Your NIFCLOUD credentials:

  ```bash
  export NIFCLOUD_ACCESS_KEY_ID=<YOUR ACCESS KEY>
  export NIFCLOUD_SECRET_ACCESS_KEY=<YOUR SECRET ACCESS KEY>
  ```

- The SSH key used to connect to the instances:
  - FYI: Cloud Help (SSH Key)

  ```bash
  export TF_VAR_SSHKEY_NAME=<YOUR SSHKEY NAME>
  ```

- The IP address used to connect to the bastion server:

  ```bash
  export TF_VAR_working_instance_ip=$(curl ifconfig.me)
  ```
### Create The Infrastructure

- Run Terraform:

  ```bash
  terraform init
  terraform apply -var-file ./sample-inventory/cluster.tfvars
  ```
### Set Up Kubernetes

- Generate the cluster configuration file:

  ```bash
  ./generate-inventory.sh > sample-inventory/inventory.ini
  ```

- Export variables:

  ```bash
  BASTION_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.bastion_info | to_entries[].value.public_ip')
  API_LB_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_lb')
  CP01_IP=$(terraform output -json | jq -r '.kubernetes_cluster.value.control_plane_info | to_entries[0].value.private_ip')
  export ANSIBLE_SSH_ARGS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand=\"ssh root@${BASTION_IP} -W %h:%p\""
  ```

- Set up ssh-agent and add your SSH key:

  ```bash
  eval `ssh-agent`
  ssh-add <THE PATH TO YOUR SSH KEY>
  ```

- Run the cluster.yml playbook:

  ```bash
  cd ./../../../
  ansible-playbook -i contrib/terraform/nifcloud/sample-inventory/inventory.ini cluster.yml
  ```
### Connecting to Kubernetes

- Install kubectl on the localhost.

- Fetch the kubeconfig file:

  ```bash
  mkdir -p ~/.kube
  scp -o ProxyCommand="ssh root@${BASTION_IP} -W %h:%p" root@${CP01_IP}:/etc/kubernetes/admin.conf ~/.kube/config
  ```

- Add the API server LB entry to /etc/hosts:

  ```bash
  echo "${API_LB_IP} lb-apiserver.kubernetes.local" | sudo tee -a /etc/hosts
  ```

- Run kubectl:

  ```bash
  kubectl get node
  ```
## Variables

- `region`: Region where to run the cluster
- `az`: Availability zone where to run the cluster
- `private_ip_bn`: Private IP address of the bastion server
- `private_network_cidr`: Subnet of the private network
- `instances_cp`: Machines to provision as control plane nodes. The key of each entry is used as part of the machine's name
  - `private_ip`: Private IP address of the machine
- `instances_wk`: Machines to provision as worker nodes. The key of each entry is used as part of the machine's name
  - `private_ip`: Private IP address of the machine
- `instance_key_name`: The key name of the key pair to use for the instances
- `instance_type_bn`: The instance type of the bastion server
- `instance_type_wk`: The instance type of the worker nodes
- `instance_type_cp`: The instance type of the control plane nodes
- `image_name`: OS image used for the instances
- `working_instance_ip`: The IP address used to connect to the bastion server
- `accounting_type`: Accounting type (1: monthly, 2: pay per use)
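
To illustrate how these variables fit together, here is a hypothetical `cluster.tfvars` sketch. Every value below is a placeholder assumption (region, zone, CIDRs, instance types, and image name included); the real template is `sample-inventory/cluster.tfvars` in this directory.

```hcl
# Hypothetical example only -- every value is a placeholder, not copied from
# the real sample-inventory/cluster.tfvars.
region               = "jp-east-1"
az                   = "east-11"
private_network_cidr = "192.168.30.0/24"
private_ip_bn        = "192.168.30.10"
instance_key_name    = "deployerkey"
instance_type_bn     = "e-medium"
instance_type_cp     = "e-medium"
instance_type_wk     = "e-medium"
image_name           = "Ubuntu Server 22.04 LTS"
accounting_type      = "2"   # 1: monthly, 2: pay per use

# working_instance_ip is normally supplied through the TF_VAR_working_instance_ip
# export from the Quickstart instead of being hard-coded here.

# The map keys ("cp01", "wk01") become part of the machine names.
instances_cp = {
  "cp01" = { private_ip = "192.168.30.11" }
}

instances_wk = {
  "wk01" = { private_ip = "192.168.30.21" }
}
```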