Dan Bode cb84b93930 Decouple etcd/k8s-cluster roles in ec2 terraform
Currently, the terraform script in contrib
adds the etcd role as a child of k8s-cluster in
its generated inventory file.

This is problematic when the etcd role is
deployed on nodes separate from the k8s masters
and nodes. In that case, deployment of the k8s
nodes fails because the PKI certs required for
that role have not been propagated.
2016-11-21 10:44:13 -08:00

README.md

Kubernetes on AWS with Terraform

Overview:

  • This will create nodes in a VPC inside AWS

  • A configurable number of master, etcd, and worker nodes can be created

  • These scripts currently expect private IP connectivity to the nodes that are created. This means you may need a tunnel into your VPC, or to run these scripts from a VM inside the VPC. Workarounds for this will be investigated later.

How to Use:

  • Export the variables for your Amazon credentials:

        export AWS_ACCESS_KEY_ID="xxx"
        export AWS_SECRET_ACCESS_KEY="yyy"
  • Update contrib/terraform/aws/terraform.tfvars with your data

  • Run terraform apply

  • Once the infrastructure is created, you can run the kubespray playbooks and supply contrib/terraform/aws/inventory with the -i flag.

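The steps above can be sketched end to end as follows. This is a minimal walkthrough, not a verbatim transcript: the playbook name cluster.yml and the relative paths are assumptions, so adjust them to match your kubespray checkout.

```shell
# Provide AWS credentials to Terraform via the environment
export AWS_ACCESS_KEY_ID="xxx"
export AWS_SECRET_ACCESS_KEY="yyy"

# Create the infrastructure from the Terraform configuration in this directory
# (after editing terraform.tfvars with your data)
cd contrib/terraform/aws
terraform apply

# Run the kubespray playbooks against the generated inventory
# (cluster.yml is an assumed playbook name; use whichever playbook you normally run)
cd ../../..
ansible-playbook -i contrib/terraform/aws/inventory cluster.yml
```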
Future Work:

  • Update the inventory creation file to be something a little more reasonable. It's just a local-exec from Terraform right now; using terraform.py or something similar may make sense in the future.
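For reference, the local-exec approach mentioned above amounts to something along these lines. This is an illustrative sketch only, not the actual contents of 01-create-inventory.tf; the resource name aws_instance.master and the inventory layout are assumptions.

```hcl
resource "null_resource" "inventory" {
  # Illustrative only: append a created instance's private IP to an
  # Ansible inventory file on the machine running terraform.
  provisioner "local-exec" {
    command = "echo '${aws_instance.master.private_ip}' >> inventory"
  }
}
```

A dynamic inventory script such as terraform.py would instead read the Terraform state at playbook run time, avoiding the need to write an inventory file during apply.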