Helm v3 only (#6846)

* Fix etcd download dest

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>

* Only support Helm v3, cleanup install

Signed-off-by: Etienne Champetier <champetier.etienne@gmail.com>
Etienne Champetier 2020-12-02 03:20:50 -05:00 committed by GitHub
parent 4f7a760a94
commit 68b96bdf1a
18 changed files with 53 additions and 513 deletions


@@ -62,9 +62,6 @@ docker_ubuntu_repo_gpgkey: "{{ ubuntu_repo }}/docker-ce/gpg"
 containerd_ubuntu_repo_base_url: "{{ ubuntu_repo }}/containerd"
 containerd_ubuntu_repo_gpgkey: "{{ ubuntu_repo }}/containerd/gpg"
 containerd_ubuntu_repo_repokey: 'YOURREPOKEY'
-# If using helm
-helm_stable_repo_url: "{{ helm_registry }}"
 ```

 For the OS specific settings, just define the one matching your OS.
@@ -73,7 +70,6 @@ If you use the settings like the one above, you'll need to define in your invent
 * `registry_host`: Container image registry. If you _don't_ use the same repository path for the container images that the ones defined in [Download's role defaults](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/download/defaults/main.yml), you need to override the `*_image_repo` for these container images. If you want to make your life easier, use the same repository path, you won't have to override anything else.
 * `files_repo`: HTTP webserver or reverse proxy that is able to serve the files listed above. Path is not important, you can store them anywhere as long as it's accessible by kubespray. It's recommended to use `*_version` in the path so that you don't need to modify this setting everytime kubespray upgrades one of these components.
 * `yum_repo`/`debian_repo`/`ubuntu_repo`: OS package repository depending of your OS, should point to your internal repository. Adjust the path accordingly.
-* `helm_registry`: Helm Registry to use for `stable` Helm Charts if `helm_enabled: true`

 ## Install Kubespray Python Packages

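Since the Helm client is now a plain file download rather than a container image, an offline mirror only needs to host the upstream archive and point `helm_download_url` at it; a sketch with hypothetical paths:

```console
# Mirror the upstream archive onto the webserver behind files_repo (paths are illustrative)
curl -LO https://get.helm.sh/helm-v3.3.4-linux-amd64.tar.gz
cp helm-v3.3.4-linux-amd64.tar.gz /srv/www/files/helm-v3.3.4/

# Then override the download URL in your inventory, keeping the version in the path:
#   helm_download_url: "{{ files_repo }}/helm-{{ helm_version }}/helm-{{ helm_version }}-linux-{{ image_arch }}.tar.gz"
```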

@@ -202,5 +202,4 @@ in the form of dicts of key-value pairs of configuration parameters that will be

 ## App variables

-* *helm_version* - Defaults to v3.x, set to a v2 version (e.g. `v2.16.1` ) to install Helm 2.x (will install Tiller!).
-  Picking v3 for an existing cluster running Tiller will leave it alone. In that case you will have to remove Tiller manually afterwards.
+* *helm_version* - Only supports v3.x. Existing v2 installs (with Tiller) will not be modified and need to be removed manually.

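For clusters that previously installed Helm 2 through this role, a minimal cleanup sketch, assuming the defaults removed in this commit (Tiller in `kube-system` with service account `tiller`) and the standard `tiller-deploy` object names created by `helm init`:

```console
# Remove the Tiller deployment and service left behind by Helm 2
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete service tiller-deploy
# Remove the RBAC objects laid down by the removed tiller manifests
kubectl delete clusterrolebinding tiller
kubectl -n kube-system delete serviceaccount tiller
```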

@@ -204,8 +204,6 @@ kata_containers_enabled: false
 # containerd_untrusted_runtime_engine: ''
 # containerd_untrusted_runtime_root: ''
-helm_deployment_type: host
 kubeadm_certificate_key: "{{ lookup('password', credentials_dir + '/kubeadm_certificate_key.creds length=64 chars=hexdigits') | lower }}"
 # K8s image pull policy (imagePullPolicy)


@@ -66,6 +66,3 @@
 # containerd_ubuntu_repo_base_url: "{{ ubuntu_repo }}/containerd"
 # containerd_ubuntu_repo_gpgkey: "{{ ubuntu_repo }}/containerd/gpg"
 # containerd_ubuntu_repo_repokey: 'YOURREPOKEY'
-# [Optiona] Helm: if helm_enabled: true in addons.yml
-# helm_stable_repo_url: "{{ helm_registry }}"


@@ -84,6 +84,7 @@ kube_router_version: "v1.1.0"
 multus_version: "v3.6"
 ovn4nfv_ovn_image_version: "v1.0.0"
 ovn4nfv_k8s_plugin_image_version: "v1.1.0"
+helm_version: "v3.3.4"

 # Get kubernetes major version (i.e. 1.17.4 => 1.17)
 kube_major_version: "{{ kube_version | regex_replace('^v([0-9])+\\.([0-9]+)\\.[0-9]+', 'v\\1.\\2') }}"
@@ -101,6 +102,7 @@ etcd_download_url: "https://github.com/coreos/etcd/releases/download/{{ etcd_ver
 cni_download_url: "https://github.com/containernetworking/plugins/releases/download/{{ cni_version }}/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
 calicoctl_download_url: "https://github.com/projectcalico/calicoctl/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
 crictl_download_url: "https://github.com/kubernetes-sigs/cri-tools/releases/download/{{ crictl_version }}/crictl-{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
+helm_download_url: "https://get.helm.sh/helm-{{ helm_version }}-linux-{{ image_arch }}.tar.gz"

 crictl_checksums:
   arm:
@@ -401,6 +403,11 @@ calicoctl_binary_checksums:
     v3.16.2: aa5695940ec8a36393725a5ce7b156f776fed8da38b994c0828d7f3a60e59bc6
     v3.15.2: 49165f9e4ad55402248b578310fcf68a57363f54e66be04ac24be9714899b4d5
+helm_archive_checksums:
+  arm: 9da6cc39a796f85b6c4e6d48fd8e4888f1003bfb7a193bb6c427cdd752ad40bb
+  amd64: b664632683c36446deeb85c406871590d879491e3de18978b426769e43a1e82c
+  arm64: bdd00b8ff422171b4be5b649a42e5261394a89d7ea57944005fc34d34d1f8160
+
 etcd_binary_checksum: "{{ etcd_binary_checksums[image_arch] }}"
 cni_binary_checksum: "{{ cni_binary_checksums[image_arch] }}"
 kubelet_binary_checksum: "{{ kubelet_checksums[image_arch][kube_version] }}"
@@ -408,6 +415,7 @@ kubectl_binary_checksum: "{{ kubectl_checksums[image_arch][kube_version] }}"
 kubeadm_binary_checksum: "{{ kubeadm_checksums[image_arch][kubeadm_version] }}"
 calicoctl_binary_checksum: "{{ calicoctl_binary_checksums[image_arch][calico_ctl_version] }}"
 crictl_binary_checksum: "{{ crictl_checksums[image_arch][crictl_version] }}"
+helm_archive_checksum: "{{ helm_archive_checksums[image_arch] }}"

 # Containers
 # In some cases, we need a way to set --registry-mirror or --insecure-registry for docker,
@@ -480,11 +488,6 @@ dnsautoscaler_image_repo: "{{ kube_image_repo }}/cpa/cluster-proportional-autosc
 dnsautoscaler_image_tag: "{{ dnsautoscaler_version }}"
 test_image_repo: "{{ kube_image_repo }}/busybox"
 test_image_tag: latest
-helm_version: "v3.2.4"
-helm_image_repo: "{{ docker_image_repo }}/lachlanevenson/k8s-helm"
-helm_image_tag: "{{ helm_version }}"
-tiller_image_repo: "{{ gcr_image_repo }}/kubernetes-helm/tiller"
-tiller_image_tag: "{{ helm_version }}"

 registry_image_repo: "{{ docker_image_repo }}/library/registry"
 registry_image_tag: "2.7.1"
@@ -598,7 +601,7 @@ downloads:
     file: "{{ etcd_deployment_type == 'host' }}"
     enabled: true
     version: "{{ etcd_version }}"
-    dest: "{{ local_release_dir }}/etcd-{{ etcd_version }}-linux-amd64.tar.gz"
+    dest: "{{ local_release_dir }}/etcd-{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
     repo: "{{ etcd_image_repo }}"
     tag: "{{ etcd_image_tag }}"
     sha256: >-
@@ -887,21 +890,16 @@ downloads:
   helm:
     enabled: "{{ helm_enabled }}"
-    container: true
-    repo: "{{ helm_image_repo }}"
-    tag: "{{ helm_image_tag }}"
-    sha256: "{{ helm_digest_checksum|default(None) }}"
+    file: true
+    version: "{{ helm_version }}"
+    dest: "{{ local_release_dir }}/helm-{{ helm_version }}/helm-{{ helm_version }}-linux-{{ image_arch }}.tar.gz"
+    sha256: "{{ helm_archive_checksum }}"
+    url: "{{ helm_download_url }}"
+    unarchive: true
+    owner: "root"
+    mode: "0755"
     groups:
-      - kube-node
-
-  tiller:
-    enabled: "{{ helm_enabled and helm_version is version('v3.0.0', '<') }}"
-    container: true
-    repo: "{{ tiller_image_repo }}"
-    tag: "{{ tiller_image_tag }}"
-    sha256: "{{ tiller_digest_checksum|default(None) }}"
-    groups:
-      - kube-node
+      - kube-master

   registry:
     enabled: "{{ registry_enabled }}"

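As a quick sanity check, the templated URL and checksum above resolve for `amd64` and the pinned `helm_version` to values you can verify by hand (a sketch, not part of the role):

```console
# Resolved from helm_download_url and helm_archive_checksums for v3.3.4 / amd64
curl -LO https://get.helm.sh/helm-v3.3.4-linux-amd64.tar.gz
echo "b664632683c36446deeb85c406871590d879491e3de18978b426769e43a1e82c  helm-v3.3.4-linux-amd64.tar.gz" | sha256sum -c
```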

@@ -1,53 +1,2 @@
 ---
 helm_enabled: false
-
-# specify a dir and attach it to helm for HELM_HOME.
-helm_home_dir: "/root/.helm"
-
-# Deployment mode: host or docker
-helm_deployment_type: host
-
-# Wait until Tiller is running and ready to receive requests
-tiller_wait: false
-
-# Do not download the local repository cache on helm init
-helm_skip_refresh: false
-
-# Secure Tiller installation with TLS
-tiller_enable_tls: false
-
-helm_config_dir: "{{ kube_config_dir }}/helm"
-helm_script_dir: "{{ bin_dir }}/helm-scripts"
-
-# Store tiller release information as Secret instead of a ConfigMap
-tiller_secure_release_info: false
-
-# Where private root key will be secured for TLS
-helm_tiller_cert_dir: "{{ helm_config_dir }}/ssl"
-tiller_tls_cert: "{{ helm_tiller_cert_dir }}/tiller.pem"
-tiller_tls_key: "{{ helm_tiller_cert_dir }}/tiller-key.pem"
-tiller_tls_ca_cert: "{{ helm_tiller_cert_dir }}/ca.pem"
-
-# Permission owner and group for helm client cert. Will be dependent on the helm_home_dir
-helm_cert_group: root
-helm_cert_owner: root
-
-# Set URL for stable repository
-# helm_stable_repo_url: "https://charts.helm.sh/stable"
-
-# Namespace for the Tiller Deployment.
-tiller_namespace: kube-system
-
-# Set node selector options for Tiller Deployment manifest.
-# tiller_node_selectors: "key1=val1,key2=val2"
-
-# Override values for the Tiller Deployment manifest.
-# tiller_override: "key1=val1,key2=val2"
-
-# Limit the maximum number of revisions saved per release. Use 0 for no limit.
-# tiller_max_history: 0
-
-# The name of the tiller service account
-tiller_service_account: tiller
-
-# The number of tiller pod replicas. If not defined, tiller defaults to a single replica
-# tiller_replicas: 1

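With the role reduced to this single switch, enabling the Helm v3 client is just a matter of flipping `helm_enabled` and re-running the cluster playbook; a sketch assuming a copy of the sample inventory (paths are illustrative):

```console
echo 'helm_enabled: true' >> inventory/mycluster/group_vars/k8s-cluster/addons.yml
ansible-playbook -i inventory/mycluster/hosts.yaml -b cluster.yml
```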

@@ -1,110 +0,0 @@
----
-- name: "Gen_helm_tiller_certs | Create helm config directory (on {{ groups['kube-master'][0] }})"
-  run_once: yes
-  delegate_to: "{{ groups['kube-master'][0] }}"
-  file:
-    path: "{{ helm_config_dir }}"
-    state: directory
-    owner: kube
-
-- name: "Gen_helm_tiller_certs | Create helm script directory (on {{ groups['kube-master'][0] }})"
-  run_once: yes
-  delegate_to: "{{ groups['kube-master'][0] }}"
-  file:
-    path: "{{ helm_script_dir }}"
-    state: directory
-    owner: kube
-
-- name: Gen_helm_tiller_certs | Copy certs generation script
-  run_once: yes
-  delegate_to: "{{ groups['kube-master'][0] }}"
-  template:
-    src: "helm-make-ssl.sh.j2"
-    dest: "{{ helm_script_dir }}/helm-make-ssl.sh"
-    mode: 0700
-
-- name: "Check_helm_certs | check if helm client certs have already been generated on first master (on {{ groups['kube-master'][0] }})"
-  find:
-    paths: "{{ helm_home_dir }}"
-    patterns: "*.pem"
-    get_checksum: true
-  delegate_to: "{{ groups['kube-master'][0] }}"
-  register: helmcert_master
-  run_once: true
-
-- name: Gen_helm_tiller_certs | run cert generation script  # noqa 301
-  run_once: yes
-  delegate_to: "{{ groups['kube-master'][0] }}"
-  command: "{{ helm_script_dir }}/helm-make-ssl.sh -e {{ helm_home_dir }} -d {{ helm_tiller_cert_dir }}"
-
-- name: Check_helm_client_certs | Set helm_client_certs
-  set_fact:
-    helm_client_certs: ['ca.pem', 'cert.pem', 'key.pem']
-
-- name: "Check_helm_client_certs | check if a cert already exists on master node"
-  find:
-    paths: "{{ helm_home_dir }}"
-    patterns: "*.pem"
-    get_checksum: true
-  register: helmcert_node
-  when: inventory_hostname != groups['kube-master'][0]
-
-- name: "Check_helm_client_certs | Set 'sync_helm_certs' to true on masters"
-  set_fact:
-    sync_helm_certs: (not item in helmcert_node.files | map(attribute='path') | map("basename") | list or helmcert_node.files | selectattr("path", "equalto", "{{ helm_home_dir }}/{{ item }}") | map(attribute="checksum")|first|default('') != helmcert_master.files | selectattr("path", "equalto", "{{ helm_home_dir }}/{{ item }}") | map(attribute="checksum")|first|default(''))
-  when:
-    - inventory_hostname != groups['kube-master'][0]
-  with_items:
-    - "{{ helm_client_certs }}"
-
-- name: Gen_helm_tiller_certs | Gather helm client certs
-  # noqa 303 - tar is called intentionally here, but maybe this should be done with the slurp module
-  shell: "set -o pipefail && tar cfz - -C {{ helm_home_dir }} {{ helm_client_certs|join(' ') }} | base64 --wrap=0"
-  args:
-    executable: /bin/bash
-  no_log: true
-  register: helm_client_cert_data
-  check_mode: no
-  delegate_to: "{{ groups['kube-master'][0] }}"
-  when: sync_helm_certs|default(false) and inventory_hostname != groups['kube-master'][0]
-
-- name: Gen_helm_tiller_certs | Use tempfile for unpacking certs on masters
-  tempfile:
-    state: file
-    path: /tmp
-    prefix: helmcertsXXXXX
-    suffix: tar.gz
-  register: helm_cert_tempfile
-  when: sync_helm_certs|default(false) and inventory_hostname != groups['kube-master'][0]
-
-- name: Gen_helm_tiller_certs | Write helm client certs to tempfile
-  copy:
-    content: "{{ helm_client_cert_data.stdout }}"
-    dest: "{{ helm_cert_tempfile.path }}"
-    owner: root
-    mode: "0600"
-  when: sync_helm_certs|default(false) and inventory_hostname != groups['kube-master'][0]
-
-- name: Gen_helm_tiller_certs | Unpack helm certs on masters
-  shell: "set -o pipefail && base64 -d < {{ helm_cert_tempfile.path }} | tar xz -C {{ helm_home_dir }}"
-  args:
-    executable: /bin/bash
-  no_log: true
-  changed_when: false
-  check_mode: no
-  when: sync_helm_certs|default(false) and inventory_hostname != groups['kube-master'][0]
-
-- name: Gen_helm_tiller_certs | Cleanup tempfile on masters
-  file:
-    path: "{{ helm_cert_tempfile.path }}"
-    state: absent
-  when: sync_helm_certs|default(false) and inventory_hostname != groups['kube-master'][0]
-
-- name: Gen_certs | check certificate permissions
-  file:
-    path: "{{ helm_home_dir }}"
-    group: "{{ helm_cert_group }}"
-    state: directory
-    owner: "{{ helm_cert_owner }}"
-    mode: "u=rwX,g-rwx,o-rwx"
-    recurse: yes


@@ -1,8 +0,0 @@
----
-- name: Helm | Set up helm docker launcher
-  template:
-    src: helm-container.j2
-    dest: "{{ bin_dir }}/helm"
-    owner: root
-    mode: 0755
-  register: helm_container


@@ -1,42 +0,0 @@
----
-- name: Helm | Set commands for helm host tasks
-  set_fact:
-    helm_compare_command: >-
-      {%- if container_manager in ['docker', 'crio'] %}
-      {{ docker_bin_dir }}/docker run --rm -v {{ bin_dir }}:/systembindir --entrypoint /usr/bin/cmp {{ helm_image_repo }}:{{ helm_image_tag }} /usr/local/bin/helm /systembindir/helm
-      {%- elif container_manager == "containerd" %}
-      ctr run --rm --mount type=bind,src={{ bin_dir }},dst=/systembindir,options=rbind:rw {{ helm_image_repo }}:{{ helm_image_tag }} helm-compare sh -c 'cmp /usr/local/bin/helm /systembindir/helm'
-      {%- endif %}
-    helm_copy_command: >-
-      {%- if container_manager in ['docker', 'crio'] %}
-      {{ docker_bin_dir }}/docker run --rm -v {{ bin_dir }}:/systembindir --entrypoint /bin/cp {{ helm_image_repo }}:{{ helm_image_tag }} -f /usr/local/bin/helm /systembindir/helm
-      {%- elif container_manager == "containerd" %}
-      ctr run --rm --mount type=bind,src={{ bin_dir }},dst=/systembindir,options=rbind:rw {{ helm_image_repo }}:{{ helm_image_tag }} helm-copy sh -c '/bin/cp -f /usr/local/bin/helm /systembindir/helm'
-      {%- endif %}
-
-- name: Helm | ensure helm container is pulled for containerd
-  command: "ctr i pull {{ helm_image_repo }}:{{ helm_image_tag }}"
-  when: container_manager == "containerd"
-
-- name: Helm | Compare host helm with helm container
-  command: "{{ helm_compare_command }}"
-  register: helm_task_compare_result
-  until: helm_task_compare_result.rc in [0,1,2]
-  retries: 4
-  delay: "{{ retry_stagger | random + 3 }}"
-  changed_when: false
-  failed_when: "helm_task_compare_result.rc not in [0,1,2]"
-
-- name: Helm | Copy helm from helm container
-  command: "{{ helm_copy_command }}"
-  when: helm_task_compare_result.rc != 0
-  register: helm_task_result
-  until: helm_task_result.rc == 0
-  retries: 4
-  delay: "{{ retry_stagger | random + 3 }}"
-
-- name: Helm | Copy socat wrapper for Flatcar Container Linux by Kinvolk
-  command: "{{ docker_bin_dir }}/docker run --rm -v {{ bin_dir }}:/opt/bin {{ install_socat_image_repo }}:{{ install_socat_image_tag }}"
-  args:
-    creates: "{{ bin_dir }}/socat"
-  when: ansible_os_family in ['Flatcar Container Linux by Kinvolk']


@@ -1,131 +1,34 @@
 ---
-- name: Helm | Make sure HELM_HOME directory exists
-  file: path={{ helm_home_dir }} state=directory
-
-- name: Helm | Set up helm launcher
-  include_tasks: "install_{{ helm_deployment_type }}.yml"
-
-- name: Helm | Lay Down Helm Manifests (RBAC)
-  template:
-    src: "{{ item.file }}.j2"
-    dest: "{{ kube_config_dir }}/{{ item.file }}"
-  with_items:
-    - {name: tiller, file: tiller-namespace.yml, type: namespace}
-    - {name: tiller, file: tiller-sa.yml, type: sa}
-    - {name: tiller, file: tiller-clusterrolebinding.yml, type: clusterrolebinding}
-  register: manifests
-  when:
-    - dns_mode != 'none'
-    - inventory_hostname == groups['kube-master'][0]
-    - helm_version is version('v3.0.0', '<')
-
-- name: Helm | Apply Helm Manifests (RBAC)
-  kube:
-    name: "{{ item.item.name }}"
-    namespace: "{{ tiller_namespace }}"
-    kubectl: "{{ bin_dir }}/kubectl"
-    resource: "{{ item.item.type }}"
-    filename: "{{ kube_config_dir }}/{{ item.item.file }}"
-    state: "latest"
-  with_items: "{{ manifests.results }}"
-  when:
-    - dns_mode != 'none'
-    - inventory_hostname == groups['kube-master'][0]
-    - helm_version is version('v3.0.0', '<')
-
-# Generate necessary certs for securing Helm and Tiller connection with TLS
-- name: Helm | Set up TLS
-  include_tasks: "gen_helm_tiller_certs.yml"
-  when:
-    - tiller_enable_tls
-    - helm_version is version('v3.0.0', '<')
-
-- name: Helm | Install client on all masters
-  command: >
-    {{ bin_dir }}/helm init --tiller-namespace={{ tiller_namespace }}
-    {% if helm_skip_refresh %} --skip-refresh{% endif %}
-    {% if helm_stable_repo_url is defined %} --stable-repo-url {{ helm_stable_repo_url }}{% endif %}
-    --client-only
-  environment: "{{ proxy_env }}"
-  changed_when: false
-  when:
-    - helm_version is version('v3.0.0', '<')
-
-# FIXME: https://github.com/helm/helm/issues/6374
-- name: Helm | Install/upgrade helm
-  shell: >
-    set -o pipefail &&
-    {{ bin_dir }}/helm init --tiller-namespace={{ tiller_namespace }}
-    {% if helm_skip_refresh %} --skip-refresh{% endif %}
-    {% if helm_stable_repo_url is defined %} --stable-repo-url {{ helm_stable_repo_url }}{% endif %}
-    --upgrade --tiller-image={{ tiller_image_repo }}:{{ tiller_image_tag }}
-    {% if rbac_enabled %} --service-account={{ tiller_service_account }}{% endif %}
-    {% if tiller_node_selectors is defined %} --node-selectors {{ tiller_node_selectors }}{% endif %}
-    --override spec.template.spec.priorityClassName={% if tiller_namespace == 'kube-system' %}system-cluster-critical{% else %}k8s-cluster-critical{% endif %}
-    {% if tiller_override is defined and tiller_override %} --override {{ tiller_override }}{% endif %}
-    {% if tiller_max_history is defined %} --history-max={{ tiller_max_history }}{% endif %}
-    {% if tiller_enable_tls %} --tiller-tls --tiller-tls-verify --tiller-tls-cert={{ tiller_tls_cert }} --tiller-tls-key={{ tiller_tls_key }} --tls-ca-cert={{ tiller_tls_ca_cert }} {% endif %}
-    {% if tiller_secure_release_info %} --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}' {% endif %}
-    --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm'
-    {% if tiller_wait %} --wait{% endif %}
-    {% if tiller_replicas is defined %} --replicas {{ tiller_replicas | int }}{% endif %}
-    --output yaml
-    | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@'
-    | {{ bin_dir }}/kubectl apply -f -
-  args:
-    executable: /bin/bash
-  register: install_helm
-  when:
-    - inventory_hostname == groups['kube-master'][0]
-    - helm_version is version('v3.0.0', '<')
-  changed_when: false
-  environment: "{{ proxy_env }}"
-
-# FIXME: https://github.com/helm/helm/issues/4063
-- name: Helm | Force apply tiller overrides if necessary
-  shell: >
-    set -o pipefail &&
-    {{ bin_dir }}/helm init --upgrade --tiller-image={{ tiller_image_repo }}:{{ tiller_image_tag }} --tiller-namespace={{ tiller_namespace }}
-    {% if helm_skip_refresh %} --skip-refresh{% endif %}
-    {% if helm_stable_repo_url is defined %} --stable-repo-url {{ helm_stable_repo_url }}{% endif %}
-    {% if rbac_enabled %} --service-account={{ tiller_service_account }}{% endif %}
-    {% if tiller_node_selectors is defined %} --node-selectors {{ tiller_node_selectors }}{% endif %}
-    --override spec.template.spec.priorityClassName={% if tiller_namespace == 'kube-system' %}system-cluster-critical{% else %}k8s-cluster-critical{% endif %}
-    {% if tiller_override is defined and tiller_override %} --override {{ tiller_override }}{% endif %}
-    {% if tiller_max_history is defined %} --history-max={{ tiller_max_history }}{% endif %}
-    {% if tiller_enable_tls %} --tiller-tls --tiller-tls-verify --tiller-tls-cert={{ tiller_tls_cert }} --tiller-tls-key={{ tiller_tls_key }} --tls-ca-cert={{ tiller_tls_ca_cert }} {% endif %}
-    {% if tiller_secure_release_info %} --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}' {% endif %}
-    --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm'
-    {% if tiller_wait %} --wait{% endif %}
-    {% if tiller_replicas is defined %} --replicas {{ tiller_replicas | int }}{% endif %}
-    --output yaml
-    | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@'
-    | {{ bin_dir }}/kubectl apply -f -
-  args:
-    executable: /bin/bash
-  changed_when: false
-  when:
-    - inventory_hostname == groups['kube-master'][0]
-    - helm_version is version('v3.0.0', '<')
-  environment: "{{ proxy_env }}"
-
-- name: Helm | Add/update stable repo on all masters
-  command: "{{ bin_dir }}/helm repo add stable {{ helm_stable_repo_url }}"
-  environment: "{{ proxy_env }}"
-  when:
-    - helm_version is version('v3.0.0', '>=')
-    - helm_stable_repo_url is defined
-
-- name: Make sure bash_completion.d folder exists  # noqa 503
-  file:
-    name: "/etc/bash_completion.d/"
-    state: directory
-  when:
-    - ((helm_container is defined and helm_container.changed) or (helm_task_result is defined and helm_task_result.changed))
-    - ansible_os_family in ["ClearLinux"]
-
-- name: Helm | Set up bash completion  # noqa 503
-  shell: "umask 022 && {{ bin_dir }}/helm completion bash >/etc/bash_completion.d/helm.sh"
-  when:
-    - ((helm_container is defined and helm_container.changed) or (helm_task_result is defined and helm_task_result.changed))
-    - not ansible_os_family in ["Flatcar Container Linux by Kinvolk"]
+- name: Helm | Download helm
+  include_tasks: "../../../download/tasks/download_file.yml"
+  vars:
+    download: "{{ download_defaults | combine(downloads.helm) }}"
+
+- name: Copy helm binary from download dir
+  synchronize:
+    src: "{{ local_release_dir }}/helm-{{ helm_version }}/linux-{{ image_arch }}/helm"
+    dest: "{{ bin_dir }}/helm"
+    compress: no
+    perms: yes
+    owner: no
+    group: no
+  delegate_to: "{{ inventory_hostname }}"
+
+- name: Check if bash_completion.d folder exists  # noqa 503
+  stat:
+    path: "/etc/bash_completion.d/"
+  register: stat_result
+
+- name: Get helm completion
+  command: "{{ bin_dir }}/helm completion bash"
+  changed_when: False
+  register: helm_completion
+  check_mode: False
+  when: stat_result.stat.exists
+
+- name: Install helm completion
+  copy:
+    dest: /etc/bash_completion.d/helm.sh
+    content: "{{ helm_completion.stdout }}"
+  become: True
+  when: stat_result.stat.exists

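For reference, a rough manual equivalent of the new flow above, assuming the usual defaults `local_release_dir=/tmp/releases`, `bin_dir=/usr/local/bin` and an amd64 host:

```console
# The download role fetches and unpacks the archive under local_release_dir
tar xzf /tmp/releases/helm-v3.3.4/helm-v3.3.4-linux-amd64.tar.gz -C /tmp/releases/helm-v3.3.4
# The role then syncs the binary into bin_dir and installs bash completion if the directory exists
install -m 0755 /tmp/releases/helm-v3.3.4/linux-amd64/helm /usr/local/bin/helm
[ -d /etc/bash_completion.d ] && /usr/local/bin/helm completion bash > /etc/bash_completion.d/helm.sh
```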

@@ -1,17 +0,0 @@
-#!/bin/bash
-{{ docker_bin_dir }}/docker run --rm \
-    --net=host \
-    --name=helm \
-    -v {{ ansible_env.HOME | default('/root') }}/.kube:/root/.kube:ro \
-    -v /etc/ssl:/etc/ssl:ro \
-    -v {{ helm_home_dir }}:{{ helm_home_dir }}:rw \
-    {% for dir in ssl_ca_dirs -%}
-    -v {{ dir }}:{{ dir }}:ro \
-    {% endfor -%}
-    {% if http_proxy is defined or https_proxy is defined -%}
-    -e http_proxy="{{proxy_env.http_proxy}}" \
-    -e https_proxy="{{proxy_env.https_proxy}}" \
-    -e no_proxy="{{proxy_env.no_proxy}}" \
-    {% endif -%}
-    {{ helm_image_repo }}:{{ helm_image_tag}} \
-    "$@"


@@ -1,76 +0,0 @@
-#!/bin/bash
-
-set -o errexit
-set -o pipefail
-
-usage()
-{
-    cat << EOF
-Create self signed certificates
-
-Usage : $(basename $0) -f <config> [-d <ssldir>]
-      -h | --help       : Show this message
-      -e | --helm-home  : Helm home directory
-      -d | --ssldir     : Directory where the certificates will be installed
-EOF
-}
-
-# Options parsing
-while (($#)); do
-    case "$1" in
-        -h | --help)      usage; exit 0;;
-        -e | --helm-home) HELM_HOME="${2}"; shift 2;;
-        -d | --ssldir)    SSLDIR="${2}"; shift 2;;
-        *)
-            usage
-            echo "ERROR : Unknown option"
-            exit 3
-        ;;
-    esac
-done
-
-if [ -z ${SSLDIR} ]; then
-    SSLDIR="/etc/kubernetes/helm/ssl"
-fi
-
-tmpdir=$(mktemp -d /tmp/helm_cacert.XXXXXX)
-trap 'rm -rf "${tmpdir}"' EXIT
-cd "${tmpdir}"
-
-mkdir -p "${SSLDIR}"
-
-# Root CA
-if [ -e "$SSLDIR/ca-key.pem" ]; then
-    # Reuse existing CA
-    cp $SSLDIR/{ca.pem,ca-key.pem} .
-else
-    openssl genrsa -out ca-key.pem 4096 > /dev/null 2>&1
-    openssl req -x509 -new -nodes -key ca-key.pem -days {{certificates_duration}} -out ca.pem -subj "/CN=tiller-ca" > /dev/null 2>&1
-fi
-
-gen_key_and_cert() {
-    local name=$1
-    local subject=$2
-    openssl genrsa -out ${name}-key.pem 4096 > /dev/null 2>&1
-    openssl req -new -key ${name}-key.pem -sha256 -out ${name}.csr -subj "${subject}" > /dev/null 2>&1
-    openssl x509 -req -in ${name}.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out ${name}.pem -days {{certificates_duration}} > /dev/null 2>&1
-}
-
-#Generate cert and key for Tiller if they don't exist
-if ! [ -e "$SSLDIR/tiller.pem" ]; then
-    gen_key_and_cert "tiller" "/CN=tiller-server"
-fi
-
-#Generate cert and key for Helm client if they don't exist
-if ! [ -e "$SSLDIR/helm.pem" ]; then
-    gen_key_and_cert "helm" "/CN=helm-client"
-fi
-
-# Secure certs to first master
-mv *.pem ${SSLDIR}/
-
-# Install Helm client certs to first master
-# Copy using Helm default names for convenience
-cp ${SSLDIR}/ca.pem ${HELM_HOME}/ca.pem
-cp ${SSLDIR}/helm.pem ${HELM_HOME}/cert.pem
-cp ${SSLDIR}/helm-key.pem ${HELM_HOME}/key.pem


@@ -1,29 +0,0 @@
----
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
-  name: tiller
-  namespace: {{ tiller_namespace }}
-subjects:
-  - kind: ServiceAccount
-    name: {{ tiller_service_account }}
-    namespace: {{ tiller_namespace }}
-roleRef:
-  kind: ClusterRole
-  name: cluster-admin
-  apiGroup: rbac.authorization.k8s.io
-{% if podsecuritypolicy_enabled %}
----
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
-  name: psp:tiller
-subjects:
-  - kind: ServiceAccount
-    name: {{ tiller_service_account }}
-    namespace: {{ tiller_namespace }}
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: psp:privileged
-{% endif %}


@@ -1,4 +0,0 @@
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: "{{ tiller_namespace}}"


@@ -1,6 +0,0 @@
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: {{ tiller_service_account }}
-  namespace: {{ tiller_namespace }}


@@ -180,17 +180,14 @@ kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --vers
 ## Using Helm

-NGINX Ingress controller can be installed via [Helm](https://helm.sh/) using the chart [stable/nginx-ingress](https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress) from the official charts repository.
+NGINX Ingress controller can be installed via [Helm](https://helm.sh/) using the chart [ingress-nginx/ingress-nginx](https://kubernetes.github.io/ingress-nginx).
+Official documentation is [here](https://kubernetes.github.io/ingress-nginx/deploy/#using-helm)

 To install the chart with the release name `my-nginx`:

 ```console
-helm install stable/nginx-ingress --name my-nginx
-```
-
-If the kubernetes cluster has RBAC enabled, then run:
-
-```console
-helm install stable/nginx-ingress --name my-nginx --set rbac.create=true
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+helm install my-nginx ingress-nginx/ingress-nginx
 ```

 Detect installed version:

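To fill in the variables used by the version check referenced above after a Helm install, one option (label selectors assumed from the ingress-nginx chart defaults):

```console
POD_NAMESPACE=default   # namespace the my-nginx release was installed into
POD_NAME=$(kubectl -n "$POD_NAMESPACE" get pods -l app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/component=controller -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD_NAME" -n "$POD_NAMESPACE" -- /nginx-ingress-controller --version
```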

@@ -303,8 +303,6 @@ containerd_use_systemd_cgroup: false
 etcd_deployment_type: docker
 cert_management: script

-helm_deployment_type: host
-
 # Make a copy of kubeconfig on the host that runs Ansible in {{ inventory_dir }}/artifacts
 kubeconfig_localhost: false
 # Download kubectl onto the host that runs Ansible in {{ bin_dir }}


@@ -12,6 +12,3 @@ dns_min_replicas: 1
 typha_enabled: true
 calico_backend: kdd
 typha_secure: true
-
-# Test helm 2 install
-helm_version: v2.16.7