Merge pull request #434 from kubespray/issue-426

Check only for AWS, wrote some docs on actually using AWS
Smaine Kahlouch 2016-08-24 21:55:57 +02:00 committed by GitHub
commit bcec5553c5
5 changed files with 18 additions and 2 deletions


@@ -25,6 +25,7 @@ To deploy the cluster you can use :
 * [Ansible variables](docs/ansible.md)
 * [Cloud providers](docs/cloud.md)
 * [OpenStack](docs/openstack.md)
+* [AWS](docs/aws.md)
 * [Network plugins](#network-plugins)
 * [Roadmap](docs/roadmap.md)

docs/aws.md (new file, mode 100644, +10 lines)

@@ -0,0 +1,10 @@
+AWS
+===============
+
+To deploy kubespray on [AWS](https://aws.amazon.com/), uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`.
+
+Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes/kubernetes/tree/master/cluster/aws/templates/iam). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate each one with the IAM role that matches its function; nodes used only for etcd do not need a role.
+
+The next step is to make sure the hostnames in your `inventory` file are identical to the internal hostnames of your instances in AWS, e.g. `ip-111-222-333-444.us-west-2.compute.internal`. You can then specify how Ansible connects to these instances with `ansible_ssh_host` and `ansible_ssh_user`.
+
+You can now create your cluster!
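
For illustration, the configuration the new doc describes might look like the sketch below. The group names, hostnames, IPs, and SSH user are placeholders for this example, not values taken from the PR; in `group_vars/all.yml` you would additionally set `cloud_provider: 'aws'`.

```ini
; Hypothetical inventory sketch: each inventory hostname must match the
; instance's AWS-internal hostname, while ansible_ssh_host/ansible_ssh_user
; tell Ansible how to actually reach it.
[kube-master]
ip-10-0-1-10.us-west-2.compute.internal ansible_ssh_host=10.0.1.10 ansible_ssh_user=admin

[kube-node]
ip-10-0-1-11.us-west-2.compute.internal ansible_ssh_host=10.0.1.11 ansible_ssh_user=admin
ip-10-0-1-12.us-west-2.compute.internal ansible_ssh_host=10.0.1.12 ansible_ssh_user=admin
```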


@@ -35,6 +35,8 @@ spec:
 {% if cloud_provider is defined and cloud_provider == "openstack" %}
 - --cloud-provider={{ cloud_provider }}
 - --cloud-config={{ kube_config_dir }}/cloud_config
+{% elif cloud_provider is defined and cloud_provider == "aws" %}
+- --cloud-provider={{ cloud_provider }}
 {% endif %}
 - 2>&1 >> {{ kube_log_dir }}/kube-apiserver.log
 volumeMounts:


@@ -18,8 +18,10 @@ spec:
 - --enable-hostpath-provisioner={{ kube_hostpath_dynamic_provisioner }}
 - --v={{ kube_log_level | default('2') }}
 {% if cloud_provider is defined and cloud_provider == "openstack" %}
-- --cloud-provider=openstack
+- --cloud-provider={{cloud_provider}}
 - --cloud-config={{ kube_config_dir }}/cloud_config
+{% elif cloud_provider is defined and cloud_provider == "aws" %}
+- --cloud-provider={{cloud_provider}}
 {% endif %}
 livenessProbe:
 httpGet:


@@ -33,8 +33,9 @@ DOCKER_SOCKET="--docker-endpoint=unix:/var/run/weave/weave.sock"
 KUBE_ALLOW_PRIV="--allow-privileged=true"
 {% if cloud_provider is defined and cloud_provider == "openstack" %}
 KUBELET_CLOUDPROVIDER="--cloud-provider={{ cloud_provider }} --cloud-config={{ kube_config_dir }}/cloud_config"
+{% elif cloud_provider is defined and cloud_provider == "aws" %}
+KUBELET_CLOUDPROVIDER="--cloud-provider={{ cloud_provider }}"
 {% else %}
-{# TODO: gce and aws don't need the cloud provider to be set? #}
 KUBELET_CLOUDPROVIDER=""
 {% endif %}
 {% if ansible_service_mgr in ["sysvinit","upstart"] %}
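
To see what the new branch does, here is a sketch of how the kubelet environment file above would render for the three cases of `cloud_provider` (hand-rendered for illustration, not output captured from this PR). Unlike OpenStack, the AWS branch sets only the provider flag and no `--cloud-config`, since the AWS provider reads instance metadata instead of a config file.

```ini
; cloud_provider: 'openstack'  -> provider flag plus cloud config
KUBELET_CLOUDPROVIDER="--cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud_config"

; cloud_provider: 'aws'        -> provider flag only (new in this PR)
KUBELET_CLOUDPROVIDER="--cloud-provider=aws"

; cloud_provider undefined     -> empty, kubelet runs without a cloud provider
KUBELET_CLOUDPROVIDER=""
```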