mirror of https://github.com/easzlab/kubeasz.git

commit 681cf495ef (parent e6edece5dd): add docs: set up es-cluster on k8s
# kubeasz

`kubeasz` strives to provide tooling for quickly deploying a highly available `k8s` cluster, and also aims to serve as a reference for `k8s` practice and usage. It is based on binary deployment automated with `ansible-playbook`: it offers both a one-click install script and step-by-step installation of each component, explaining the key configuration parameters and caveats at every step.

**Cluster features: mutual `TLS` authentication, `RBAC` authorization, multi-`Master` high availability, `Network Policy` support, backup and restore**

The project targets `Ubuntu 16.04/CentOS 7` and assumes basic knowledge of `kubernetes`, `docker`, `linux` and `ansible`.

|Component|Supported|
|:-|:-|
|OS|Ubuntu 16.04+, CentOS 7|
|k8s|v1.8, v1.9, v1.10, v1.11, v1.12|
|etcd|v3.1, v3.2, v3.3|
|docker|17.03.2-ce, 18.06.1-ce|
|network|calico, cilium, flannel, kube-router|

Please read the [project TodoList](docs/mixes/TodoList.md) and the [branch notes](docs/mixes/branch.md); [Issues](https://github.com/gjmzj/kubeasz/issues) and [PRs](docs/mixes/HowToContribute.md) are welcome.

## Quick Start

Try out a single-node k8s cluster for testing and development: [AllinOne deployment](docs/setup/quickStart.md)

## Installation Guide

<table border="0">
<tr>
<td><strong>Common Add-ons</strong></td>
<td><a href="docs/guide/kubedns.md">DNS</a></td>
<td><a href="docs/guide/dashboard.md">dashboard</a></td>
<td><a href="docs/guide/metrics-server.md">metrics-server</a></td>
<td><a href="docs/guide/prometheus.md">prometheus</a></td>
<td><a href="docs/guide/index.md">More...</a></td>
</tr>
<tr>
<td><strong>Cluster Management</strong></td>
<td><a href="docs/op/AddNode.md">Add node</a></td>
<td><a href="docs/op/AddMaster.md">Add master</a></td>
<td><a href="docs/op/upgrade.md">Upgrade cluster</a></td>
<td><a href=""></a></td>
</tr>
<tr>
<td><strong>Ecosystem</strong></td>
<td><a href="docs/guide/harbor.md">harbor</a></td>
<td><a href="docs/guide/helm.md">helm</a></td>
<td><a href="docs/guide/jenkins.md">jenkins</a></td>
<td><a href=""></a></td>
<td><a href=""></a></td>
</tr>
<tr>
<td><strong>Application Practice</strong></td>
<td><a href="docs/practice/java_war_app.md">Java app deployment</a></td>
<td><a href="docs/practice/es_cluster.md">Elasticsearch deployment</a></td>
<td><a href=""></a></td>
<td><a href=""></a></td>
<td><a href=""></a></td>
</tr>
</table>

## Communication

- WeChat group: k8s & kubeasz practice. Search WeChat ID `badtobone` and include a note in the form (city-github username); you will be added to the group after verification.
- Recommended reading: [rootsongjc-Kubernetes guide](https://github.com/rootsongjc/kubernetes-handbook) [feisky-Kubernetes guide](https://github.com/feiskyer/kubernetes-handbook/blob/master/zh/SUMMARY.md) [opsnull-installation tutorial](https://github.com/opsnull/follow-me-install-kubernetes-cluster)

## Contribution & Acknowledgements

Thanks to all contributors who have submitted `Issues` and `PRs`!

- [How to PR](docs/mixes/HowToContribute.md)

Copyright 2017 gjmzj (jmgaozz@163.com) Apache License 2.0; see [LICENSE](docs/mixes/LICENSE) for details.

---
# Elasticsearch in Practice

`Elasticsearch` is currently the first choice among full-text search engines: it can store, search and analyze huge volumes of data quickly, and can also be viewed as a truly distributed, efficient database cluster. `Elastic` is built on the open-source library `Lucene`, which it wraps and exposes through a `REST API`.

## Single-node docker test install

``` bash
cat > es-start.sh << EOF
#!/bin/bash

sysctl -w vm.max_map_count=262144

docker run --detach \
    --name es01 \
    -p 9200:9200 -p 9300:9300 \
    -e "discovery.type=single-node" \
    -e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1 \
    --ulimit nofile=65536:65536 \
    --volume /srv/elasticsearch/data:/usr/share/elasticsearch/data \
    --volume /srv/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    jmgao1983/elasticsearch:6.4.0
EOF
```

After running `sh es-start.sh`, Elasticsearch is up locally.
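The script above bind-mounts an `elasticsearch.yml` from `/srv/elasticsearch/` that is not shown here; a minimal single-node config along these lines would work (the exact settings are an assumption, chosen to match the `docker-es` cluster name seen in the health check below):

``` yaml
# /srv/elasticsearch/elasticsearch.yml -- minimal sketch (assumed, not from the original)
cluster.name: docker-es
network.host: 0.0.0.0
```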
- Verify that the docker container is running
``` bash
root@docker-ts:~# docker ps -a
CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS                                            NAMES
171f3fecb596        jmgao1983/elasticsearch:6.4.0   "/usr/local/bin/do..."   2 hours ago         Up 2 hours          0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   es01
```
- Verify the es health check
``` bash
root@docker-ts:~# curl http://127.0.0.1:9200/_cat/health
epoch      timestamp cluster   status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1535523956 06:25:56  docker-es green           1         1      0   0    0    0        0             0                  -                100.0%
```
## Deploying an Elasticsearch cluster on k8s

In production, an Elasticsearch cluster is made up of nodes with different roles:

- master nodes: take part in master election and store no data; three or more are recommended, as they maintain the stability and reliability of the whole cluster
- data nodes: do not take part in master election; they store the data, mainly consuming disk and memory
- client nodes: neither master-eligible nor data-bearing; they handle user requests, providing request forwarding and load balancing

Here we deploy with the `helm chart` (https://github.com/helm/charts/tree/master/incubator/elasticsearch)
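The role layout above is driven by the chart's values. The `es-values.yaml` passed to the install command below is not shown here, but a minimal sketch matching the 3-master/2-data/2-client cluster verified later might look like this (hypothetical; the repo's actual file may differ):

``` yaml
# es-values.yaml -- hypothetical sketch, not the repo's actual file
master:
  replicas: 3
  persistence:
    storageClass: "nfs-es"
data:
  replicas: 2
  persistence:
    storageClass: "nfs-es"
client:
  replicas: 2
  serviceType: NodePort
```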
|
||||
- 1.安装 helm: 以本项目[安全安装helm](../guide/helm.md)为例
|
||||
- 2.准备 PV: 以本项目[K8S 集群存储](../setup/08-cluster-storage.md)创建`nfs`动态 PV 为例
|
||||
- 编辑配置文件:roles/cluster-storage/defaults/main.yml
|
||||
``` bash
|
||||
storage:
|
||||
nfs:
|
||||
enabled: "yes"
|
||||
server: "192.168.1.8"
|
||||
server_path: "/share"
|
||||
storage_class: "nfs-es"
|
||||
provisioner_name: "nfs-provisioner-01"
|
||||
```
|
||||
- 创建 nfs provisioner
|
||||
``` bash
|
||||
$ ansible-playbook /etc/ansible/roles/cluster-storage/cluster-storage.yml
|
||||
# 执行成功后验证
|
||||
$ kubectl get pod --all-namespaces |grep nfs-prov
|
||||
kube-system nfs-provisioner-01-6b7fbbf9d4-bh8lh 1/1 Running 0 1d
|
||||
```
|
||||
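Before installing the chart, you can optionally smoke-test dynamic provisioning with a throwaway claim against the `nfs-es` class created above (this claim is illustrative only, not part of the original setup):

``` yaml
# test-pvc.yaml -- illustrative only; apply with kubectl and check it becomes Bound
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-es
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: "nfs-es"
  resources:
    requests:
      storage: 1Gi
```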
- 3. Install the elasticsearch chart
``` bash
$ cd /etc/ansible/manifests/es-cluster
# if your helm install does not have tls certificates enabled, replace the helms command below with helm
$ helms install --name es-cluster --namespace elastic -f es-values.yaml elasticsearch
```
- 4. Verify the es cluster
``` bash
# check the es cluster state on k8s
$ kubectl get pod,svc -n elastic
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/es-cluster-elasticsearch-client-778df74c8f-7fj4k   1/1     Running   0          2m17s
pod/es-cluster-elasticsearch-client-778df74c8f-skh8l   1/1     Running   0          2m3s
pod/es-cluster-elasticsearch-data-0                    1/1     Running   0          25m
pod/es-cluster-elasticsearch-data-1                    1/1     Running   0          11m
pod/es-cluster-elasticsearch-master-0                  1/1     Running   0          25m
pod/es-cluster-elasticsearch-master-1                  1/1     Running   0          12m
pod/es-cluster-elasticsearch-master-2                  1/1     Running   0          10m

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
service/es-cluster-elasticsearch-client      NodePort    10.68.157.105   <none>        9200:29200/TCP,9300:29300/TCP   25m
service/es-cluster-elasticsearch-discovery   ClusterIP   None            <none>        9300/TCP                        25m

# check the es cluster's own health
$ curl $NODE_IP:29200/_cat/health
1539335131 09:05:31 es-on-k8s green 7 2 0 0 0 0 0 0 - 100.0%

$ curl $NODE_IP:29200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size

$ curl $NODE_IP:29200/_cat/nodes?
172.31.2.4 27 80 5 0.09 0.11 0.21 mi - es-cluster-elasticsearch-master-0
172.31.1.7 30 97 3 0.39 0.29 0.27 i  - es-cluster-elasticsearch-client-778df74c8f-skh8l
172.31.3.7 20 97 3 0.11 0.17 0.18 i  - es-cluster-elasticsearch-client-778df74c8f-7fj4k
172.31.1.5  8 97 5 0.39 0.29 0.27 di - es-cluster-elasticsearch-data-0
172.31.2.5  8 80 3 0.09 0.11 0.21 di - es-cluster-elasticsearch-data-1
172.31.1.6 18 97 4 0.39 0.29 0.27 mi - es-cluster-elasticsearch-master-2
172.31.3.6 20 97 4 0.11 0.17 0.18 mi * es-cluster-elasticsearch-master-1
```
### es benchmark

As shown above, a **7**-node elasticsearch cluster has been deployed on k8s with the chart; naturally you will be curious about its performance. The official load-testing tool [esrally](https://github.com/elastic/rally) makes benchmarking straightforward; installation and test details are omitted here. On the load-generator machine run:

`esrally --track=http_logs --target-hosts="$NODE_IP:29200" --pipeline=benchmark-only --report-file=report.md`

The benchmark takes 1-2 hours; partial results are shown below:

``` bash
------------------------------------------------------
                   Final Score
------------------------------------------------------

|   Lap |                          Metric |         Task |   Value |   Unit |
|------:|--------------------------------:|-------------:|--------:|-------:|
...
|   All |                  Min Throughput | index-append | 16903.2 | docs/s |
|   All |               Median Throughput | index-append | 17624.4 | docs/s |
|   All |                  Max Throughput | index-append | 19382.8 | docs/s |
|   All |         50th percentile latency | index-append | 1865.74 |     ms |
|   All |         90th percentile latency | index-append | 3708.04 |     ms |
|   All |         99th percentile latency | index-append | 6379.49 |     ms |
|   All |       99.9th percentile latency | index-append | 8389.74 |     ms |
|   All |      99.99th percentile latency | index-append | 9612.84 |     ms |
|   All |        100th percentile latency | index-append | 9861.02 |     ms |
|   All |    50th percentile service time | index-append | 1865.74 |     ms |
|   All |    90th percentile service time | index-append | 3708.04 |     ms |
|   All |    99th percentile service time | index-append | 6379.49 |     ms |
|   All |  99.9th percentile service time | index-append | 8389.74 |     ms |
|   All | 99.99th percentile service time | index-append | 9612.84 |     ms |
|   All |   100th percentile service time | index-append | 9861.02 |     ms |
|   All |                      error rate | index-append |       0 |      % |
|   All |                  Min Throughput |      default |    0.66 |  ops/s |
|   All |               Median Throughput |      default |    0.66 |  ops/s |
|   All |                  Max Throughput |      default |    0.66 |  ops/s |
|   All |         50th percentile latency |      default |  770131 |     ms |
|   All |         90th percentile latency |      default |  825511 |     ms |
|   All |         99th percentile latency |      default |  838030 |     ms |
|   All |        100th percentile latency |      default |  839382 |     ms |
|   All |    50th percentile service time |      default |  1539.4 |     ms |
|   All |    90th percentile service time |      default | 1635.39 |     ms |
|   All |    99th percentile service time |      default | 1728.02 |     ms |
|   All |   100th percentile service time |      default |  1736.2 |     ms |
|   All |                      error rate |      default |       0 |      % |
...
```
From the results: the cluster's throughput is good (and the k8s es-client pods can be scaled out further); latency is a bit high (because nfs shared storage is used); overall it performs well.
### Installing a Chinese analyzer

Install the ik plugin. You can build a custom es docker image with the ik plugin preinstalled: create the following Dockerfile

``` dockerfile
FROM jmgao1983/elasticsearch:6.4.0

RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install \
    --batch https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.4.0/elasticsearch-analysis-ik-6.4.0.zip \
    && cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
```

### Further reading

1. [Elasticsearch getting-started tutorial](http://www.ruanyifeng.com/blog/2017/08/elasticsearch.html)
2. [Benchmarking Elasticsearch with esrally](https://segmentfault.com/a/1190000011174694)

---
.git
# OWNERS file for Kubernetes
OWNERS

---
name: elasticsearch
home: https://www.elastic.co/products/elasticsearch
version: 1.7.2
appVersion: 6.4.0
description: Flexible and powerful open source, distributed real-time search and analytics engine.
icon: https://static-www.elastic.co/assets/blteb1c97719574938d/logo-elastic-elasticsearch-lt.svg
sources:
- https://www.elastic.co/products/elasticsearch
- https://github.com/jetstack/elasticsearch-pet
- https://github.com/giantswarm/kubernetes-elastic-stack
- https://github.com/GoogleCloudPlatform/elasticsearch-docker
- https://github.com/clockworksoul/helm-elasticsearch
- https://github.com/pires/kubernetes-elasticsearch-cluster
maintainers:
- name: simonswine
  email: christian@jetstack.io
- name: icereval
  email: michael.haselton@gmail.com
- name: rendhalver
  email: pete.brown@powerhrg.com

---
approvers:
- simonswine
- icereval
- rendhalver
reviewers:
- simonswine
- icereval
- rendhalver

---
# Elasticsearch Helm Chart

This chart uses a standard Docker image of Elasticsearch (docker.elastic.co/elasticsearch/elasticsearch-oss) and uses a service pointing to the master's transport port for service discovery.
Elasticsearch does not communicate with the Kubernetes API, hence no need for RBAC permissions.

## Warning for previous users
If you are currently using an earlier version of this chart, you will need to redeploy your Elasticsearch clusters. The discovery method used here is incompatible with using RBAC.
If you are upgrading to Elasticsearch 6 from the 5.5 version used in this chart before, please note that your cluster needs to do a full cluster restart.
The simplest way to do that is to delete the installation (keep the PVs) and install this chart again with the new version.
If you want to avoid doing that, upgrade to Elasticsearch 5.6 first before moving on to Elasticsearch 6.0.

## Prerequisites Details

* Kubernetes 1.6+
* PV dynamic provisioning support on the underlying infrastructure

## StatefulSets Details
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

## StatefulSets Caveats
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations

## Todo

* Implement TLS/Auth/Security
* Smarter upscaling/downscaling
* Solution for memory locking

## Chart Details
This chart will do the following:

* Implement a dynamically scalable elasticsearch cluster using Kubernetes StatefulSets/Deployments
* Multi-role deployment: master, client (coordinating) and data nodes
* StatefulSet supports scaling down without degrading the cluster

## Installing the Chart

To install the chart with the release name `my-release`:

```bash
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm install --name my-release incubator/elasticsearch
```

## Deleting the Charts

Delete the Helm deployment as normal:

```
$ helm delete my-release
```

Deletion of the StatefulSet doesn't cascade to deleting associated PVCs. To delete them:

```
$ kubectl delete pvc -l release=my-release,component=data
```

## Configuration

The following table lists the configurable parameters of the elasticsearch chart and their default values.

| Parameter | Description | Default |
| --- | --- | --- |
| `appVersion` | Application Version (Elasticsearch) | `6.4.0` |
| `image.repository` | Container image name | `docker.elastic.co/elasticsearch/elasticsearch-oss` |
| `image.tag` | Container image tag | `6.4.0` |
| `image.pullPolicy` | Container pull policy | `Always` |
| `cluster.name` | Cluster name | `elasticsearch` |
| `cluster.xpackEnable` | Writes the X-Pack configuration options to the configuration file | `false` |
| `cluster.config` | Additional cluster config appended | `{}` |
| `cluster.keystoreSecret` | Name of secret holding secure config options in an es keystore | `nil` |
| `cluster.env` | Cluster environment variables | `{MINIMUM_MASTER_NODES: "2"}` |
| `client.name` | Client component name | `client` |
| `client.replicas` | Client node replicas (deployment) | `2` |
| `client.resources` | Client node resources requests & limits | `{} - cpu limit must be an integer` |
| `client.priorityClassName` | Client priorityClass | `nil` |
| `client.heapSize` | Client node heap size | `512m` |
| `client.podAnnotations` | Client Deployment annotations | `{}` |
| `client.nodeSelector` | Node labels for client pod assignment | `{}` |
| `client.tolerations` | Client tolerations | `[]` |
| `client.serviceAnnotations` | Client Service annotations | `{}` |
| `client.serviceType` | Client service type | `ClusterIP` |
| `client.loadBalancerIP` | Client loadBalancerIP | `{}` |
| `client.loadBalancerSourceRanges` | Client loadBalancerSourceRanges | `{}` |
| `master.exposeHttp` | Expose http port 9200 on master Pods for monitoring, etc | `false` |
| `master.name` | Master component name | `master` |
| `master.replicas` | Master node replicas (deployment) | `2` |
| `master.resources` | Master node resources requests & limits | `{} - cpu limit must be an integer` |
| `master.priorityClassName` | Master priorityClass | `nil` |
| `master.podAnnotations` | Master Deployment annotations | `{}` |
| `master.nodeSelector` | Node labels for master pod assignment | `{}` |
| `master.tolerations` | Master tolerations | `[]` |
| `master.heapSize` | Master node heap size | `512m` |
| `master.persistence.enabled` | Master persistence enabled/disabled | `true` |
| `master.persistence.name` | Master statefulset PVC template name | `data` |
| `master.persistence.size` | Master persistent volume size | `4Gi` |
| `master.persistence.storageClass` | Master persistent volume Class | `nil` |
| `master.persistence.accessMode` | Master persistent Access Mode | `ReadWriteOnce` |
| `data.exposeHttp` | Expose http port 9200 on data Pods for monitoring, etc | `false` |
| `data.replicas` | Data node replicas (statefulset) | `2` |
| `data.resources` | Data node resources requests & limits | `{} - cpu limit must be an integer` |
| `data.priorityClassName` | Data priorityClass | `nil` |
| `data.heapSize` | Data node heap size | `1536m` |
| `data.persistence.enabled` | Data persistence enabled/disabled | `true` |
| `data.persistence.name` | Data statefulset PVC template name | `data` |
| `data.persistence.size` | Data persistent volume size | `30Gi` |
| `data.persistence.storageClass` | Data persistent volume Class | `nil` |
| `data.persistence.accessMode` | Data persistent Access Mode | `ReadWriteOnce` |
| `data.podAnnotations` | Data StatefulSet annotations | `{}` |
| `data.nodeSelector` | Node labels for data pod assignment | `{}` |
| `data.tolerations` | Data tolerations | `[]` |
| `data.terminationGracePeriodSeconds` | Data termination grace period (seconds) | `3600` |
| `data.antiAffinity` | Data anti-affinity policy | `soft` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.

In terms of memory resources, you should make sure that you follow this equation:

- `${role}HeapSize < ${role}MemoryRequests < ${role}MemoryLimits`
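For example, with the default client heap of `512m`, resources that satisfy the inequality could look like this (the request/limit numbers are illustrative, not chart defaults):

```yaml
client:
  heapSize: "512m"       # heap < memory request
  resources:
    requests:
      memory: "1024Mi"   # request < limit
    limits:
      memory: "1536Mi"
      cpu: 1             # cpu limit must be an integer
```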
The YAML value of cluster.config is appended to the elasticsearch.yml file for additional customization (for example, "script.inline: on" to allow inline scripting).

# Deep dive

## Application Version

This chart aims to support Elasticsearch v2 and v5 deployments by specifying the `values.yaml` parameter `appVersion`.

### Version Specific Features

* Memory Locking *(variable renamed)*
* Ingest Node *(v5)*
* X-Pack Plugin *(v5)*

Upgrade paths & more info: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html

## Mlocking

This is a limitation in kubernetes right now. There is no way to raise the limits of lockable memory, so that these memory areas won't be swapped. This would degrade performance heavily. The issue is tracked in [kubernetes/#3595](https://github.com/kubernetes/kubernetes/issues/3595).

```
[WARN ][bootstrap] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[WARN ][bootstrap] This can result in part of the JVM being swapped out.
[WARN ][bootstrap] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
```

## Minimum Master Nodes
> The minimum_master_nodes setting is extremely important to the stability of your cluster. This setting helps prevent split brains, the existence of two masters in a single cluster.

> When you have a split brain, your cluster is at danger of losing data. Because the master is considered the supreme ruler of the cluster, it decides when new indices can be created, how shards are moved, and so forth. If you have two masters, data integrity becomes perilous, since you have two nodes that think they are in charge.

> This setting tells Elasticsearch to not elect a master unless there are enough master-eligible nodes available. Only then will an election take place.

> This setting should always be configured to a quorum (majority) of your master-eligible nodes. A quorum is (number of master-eligible nodes / 2) + 1.

More info: https://www.elastic.co/guide/en/elasticsearch/guide/1.x/_important_configuration_changes.html#_minimum_master_nodes
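The quorum rule above is plain integer arithmetic; for a cluster with 3 master-eligible nodes it works out to 2, which matches the chart's default `MINIMUM_MASTER_NODES: "2"`:

```shell
# quorum = (number of master-eligible nodes / 2) + 1, using integer division
masters=3
quorum=$(( masters / 2 + 1 ))
echo "$quorum"   # 2
```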
# Client and Coordinating Nodes

Elasticsearch v5 terminology has been updated, and now refers to a `Client Node` as a `Coordinating Node`.

More info: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/modules-node.html#coordinating-node

## Select the right storage class for SSD volumes

### GCE + Kubernetes 1.5

Create a StorageClass for SSD-PD:

```
$ kubectl create -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
EOF
```
Create a cluster with storage class `ssd` on Kubernetes 1.5+:

```
$ helm install incubator/elasticsearch --name my-release --set data.storageClass=ssd,data.storage=100Gi
```

---
The elasticsearch cluster has been installed.

Elasticsearch can be accessed:

* Within your cluster, at the following DNS name at port 9200:

  {{ template "elasticsearch.client.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local

* From outside the cluster, run these commands in the same shell:
{{- if contains "NodePort" .Values.client.serviceType }}

  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "elasticsearch.client.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.client.serviceType }}

WARNING: You have likely exposed your Elasticsearch cluster directly to the internet.
Elasticsearch does not implement any security for public-facing clusters by default.
As a minimum level of security, switch to ClusterIP/NodePort and place an Nginx gateway in front of the cluster in order to lock down access to dangerous HTTP endpoints and verbs.

NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get svc -w {{ template "elasticsearch.client.fullname" . }}'

  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "elasticsearch.client.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:9200
{{- else if contains "ClusterIP" .Values.client.serviceType }}

  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "elasticsearch.name" . }},component={{ .Values.client.name }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
  kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME 9200:9200
{{- end }}

---
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "elasticsearch.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
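The truncate-and-trim step in `elasticsearch.fullname` can be sketched outside Helm as well; this is a shell approximation of `printf "%s-%s" | trunc 63 | trimSuffix "-"` (the release and chart names are example inputs):

```shell
# approximate: printf "%s-%s" release chart | trunc 63 | trimSuffix "-"
release="my-release"
name="elasticsearch"
fullname=$(printf '%s-%s' "$release" "$name" | cut -c1-63 | sed 's/-$//')
echo "$fullname"   # my-release-elasticsearch
```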
{{/*
Create a default fully qualified client name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.client.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.client.name }}
{{- end -}}

{{/*
Create a default fully qualified data name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.data.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.data.name }}
{{- end -}}

{{/*
Create a default fully qualified master name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.master.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.master.name }}
{{- end -}}

---
apiVersion: apps/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
|
||||
component: "{{ .Values.client.name }}"
|
||||
heritage: {{ .Release.Service }}
|
||||
release: {{ .Release.Name }}
|
||||
name: {{ template "elasticsearch.client.fullname" . }}
|
||||
spec:
|
||||
replicas: {{ .Values.client.replicas }}
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
component: "{{ .Values.client.name }}"
|
||||
release: {{ .Release.Name }}
|
||||
{{- if .Values.client.podAnnotations }}
|
||||
annotations:
|
||||
{{ toYaml .Values.client.podAnnotations | indent 8 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- if .Values.client.priorityClassName }}
|
||||
priorityClassName: "{{ .Values.client.priorityClassName }}"
|
||||
{{- end }}
|
||||
securityContext:
|
||||
fsGroup: 1000
|
||||
{{- if eq .Values.client.antiAffinity "hard" }}
|
||||
affinity:
|
||||
podAntiAffinity:
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
- topologyKey: "kubernetes.io/hostname"
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: "{{ template "elasticsearch.name" . }}"
|
||||
release: "{{ .Release.Name }}"
|
||||
component: "{{ .Values.client.name }}"
|
||||
{{- else if eq .Values.client.antiAffinity "soft" }}
|
||||
affinity:
|
||||
podAntiAffinity:
|
||||
preferredDuringSchedulingIgnoredDuringExecution:
|
||||
- weight: 1
|
||||
podAffinityTerm:
|
||||
topologyKey: kubernetes.io/hostname
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: "{{ template "elasticsearch.name" . }}"
|
||||
release: "{{ .Release.Name }}"
|
||||
component: "{{ .Values.client.name }}"
|
||||
      {{- end }}
      {{- if .Values.client.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.client.nodeSelector | indent 8 }}
      {{- end }}
      {{- if .Values.client.tolerations }}
      tolerations:
{{ toYaml .Values.client.tolerations | indent 8 }}
      {{- end }}
      initContainers:
      # see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
      # and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall
      - name: "sysctl"
        image: "busybox"
        imagePullPolicy: "Always"
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        env:
        - name: NODE_DATA
          value: "false"
        {{- if hasPrefix "5." .Values.appVersion }}
        - name: NODE_INGEST
          value: "false"
        {{- end }}
        - name: NODE_MASTER
          value: "false"
        - name: DISCOVERY_SERVICE
          value: {{ template "elasticsearch.fullname" . }}-discovery
        - name: PROCESSORS
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: ES_JAVA_OPTS
          value: "-Djava.net.preferIPv4Stack=true -Xms{{ .Values.client.heapSize }} -Xmx{{ .Values.client.heapSize }}"
        {{- range $key, $value := .Values.cluster.env }}
        - name: {{ $key }}
          value: {{ $value | quote }}
        {{- end }}
        resources:
{{ toYaml .Values.client.resources | indent 12 }}
        readinessProbe:
          httpGet:
            path: /_cluster/health
            port: 9200
          initialDelaySeconds: 5
        livenessProbe:
          httpGet:
            path: /_cluster/health
            port: 9200
          initialDelaySeconds: 90
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          name: config
          subPath: elasticsearch.yml
        {{- if hasPrefix "2." .Values.image.tag }}
        - mountPath: /usr/share/elasticsearch/config/logging.yml
          name: config
          subPath: logging.yml
        {{- end }}
        {{- if hasPrefix "5." .Values.image.tag }}
        - mountPath: /usr/share/elasticsearch/config/log4j2.properties
          name: config
          subPath: log4j2.properties
        {{- end }}
        {{- if .Values.cluster.keystoreSecret }}
        - name: keystore
          mountPath: "/usr/share/elasticsearch/config/elasticsearch.keystore"
          subPath: elasticsearch.keystore
          readOnly: true
        {{- end }}
      {{- if .Values.image.pullSecrets }}
      imagePullSecrets:
      {{- range $pullSecret := .Values.image.pullSecrets }}
        - name: {{ $pullSecret }}
      {{- end }}
      {{- end }}
      volumes:
      - name: config
        configMap:
          name: {{ template "elasticsearch.fullname" . }}
      {{- if .Values.cluster.keystoreSecret }}
      - name: keystore
        secret:
          secretName: {{ .Values.cluster.keystoreSecret }}
      {{- end }}
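The `ES_JAVA_OPTS` line in the client template pins `-Xms`/`-Xmx` to `client.heapSize`. A common rule of thumb (general Elasticsearch guidance, not something this chart enforces) is to keep the JVM heap at no more than half of the container memory limit. A small illustrative checker (all names and the unit handling are assumptions for the sketch):

```python
def heap_fits(heap_size: str, memory_limit: str) -> bool:
    """Return True if the JVM heap is at most half the container memory limit.

    Only the suffixes used in this chart's values (m/Mi and g/Gi) are handled.
    """
    units = {"m": 1, "mi": 1, "g": 1024, "gi": 1024}  # megabytes per unit

    def to_mb(s: str) -> int:
        s = s.strip().lower()
        # check two-letter suffixes first so "512mi" is not read as "512m" + "i"
        for suffix in ("mi", "gi", "m", "g"):
            if s.endswith(suffix):
                return int(s[: -len(suffix)]) * units[suffix]
        raise ValueError(f"unsupported size: {s}")

    return to_mb(heap_size) * 2 <= to_mb(memory_limit)

print(heap_fits("512m", "1024Mi"))  # prints True
```

With the chart's client defaults (`heapSize: "512m"` against a 1 Gi memory limit) the rule holds.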
@@ -0,0 +1,24 @@
{{- if .Values.client.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Values.client.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "elasticsearch.client.fullname" . }}
spec:
  {{- if .Values.client.podDisruptionBudget.minAvailable }}
  minAvailable: {{ .Values.client.podDisruptionBudget.minAvailable }}
  {{- end }}
  {{- if .Values.client.podDisruptionBudget.maxUnavailable }}
  maxUnavailable: {{ .Values.client.podDisruptionBudget.maxUnavailable }}
  {{- end }}
  selector:
    matchLabels:
      app: {{ template "elasticsearch.name" . }}
      component: "{{ .Values.client.name }}"
      release: {{ .Release.Name }}
{{- end }}
@@ -0,0 +1,39 @@
apiVersion: v1
kind: Service
metadata:
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Values.client.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "elasticsearch.client.fullname" . }}
{{- if .Values.client.serviceAnnotations }}
  annotations:
{{ toYaml .Values.client.serviceAnnotations | indent 4 }}
{{- end }}

spec:
  ports:
  - name: http
    port: 9200
    targetPort: 9200
    nodePort: 29200
  - name: tcp
    port: 9300
    targetPort: 9300
    nodePort: 29300
  selector:
    app: {{ template "elasticsearch.name" . }}
    component: "{{ .Values.client.name }}"
    release: {{ .Release.Name }}
  type: {{ .Values.client.serviceType }}
{{- if .Values.client.loadBalancerIP }}
  loadBalancerIP: "{{ .Values.client.loadBalancerIP }}"
{{- end }}
{{if .Values.client.loadBalancerSourceRanges}}
  loadBalancerSourceRanges:
{{range $rangeList := .Values.client.loadBalancerSourceRanges}}
  - {{ $rangeList }}
{{end}}
{{end}}
@@ -0,0 +1,153 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "elasticsearch.fullname" . }}
  labels:
    app: {{ template "elasticsearch.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
data:
  elasticsearch.yml: |-
    cluster.name: {{ .Values.cluster.name }}

    node.data: ${NODE_DATA:true}
    node.master: ${NODE_MASTER:true}
    {{- if hasPrefix "5." .Values.appVersion }}
    node.ingest: ${NODE_INGEST:true}
    {{- else if hasPrefix "6." .Values.appVersion }}
    node.ingest: ${NODE_INGEST:true}
    {{- end }}
    node.name: ${HOSTNAME}

    network.host: 0.0.0.0

    {{- if hasPrefix "2." .Values.appVersion }}
    # see https://github.com/kubernetes/kubernetes/issues/3595
    bootstrap.mlockall: ${BOOTSTRAP_MLOCKALL:false}

    discovery:
      zen:
        ping.unicast.hosts: ${DISCOVERY_SERVICE:}
        minimum_master_nodes: ${MINIMUM_MASTER_NODES:2}
    {{- else if hasPrefix "5." .Values.appVersion }}
    # see https://github.com/kubernetes/kubernetes/issues/3595
    bootstrap.memory_lock: ${BOOTSTRAP_MEMORY_LOCK:false}

    discovery:
      zen:
        ping.unicast.hosts: ${DISCOVERY_SERVICE:}
        minimum_master_nodes: ${MINIMUM_MASTER_NODES:2}

    {{- if .Values.cluster.xpackEnable }}
    # see https://www.elastic.co/guide/en/x-pack/current/xpack-settings.html
    xpack.ml.enabled: ${XPACK_ML_ENABLED:false}
    xpack.monitoring.enabled: ${XPACK_MONITORING_ENABLED:false}
    xpack.security.enabled: ${XPACK_SECURITY_ENABLED:false}
    xpack.watcher.enabled: ${XPACK_WATCHER_ENABLED:false}
    {{- end }}
    {{- else if hasPrefix "6." .Values.appVersion }}
    # see https://github.com/kubernetes/kubernetes/issues/3595
    bootstrap.memory_lock: ${BOOTSTRAP_MEMORY_LOCK:false}

    discovery:
      zen:
        ping.unicast.hosts: ${DISCOVERY_SERVICE:}
        minimum_master_nodes: ${MINIMUM_MASTER_NODES:2}

    {{- if .Values.cluster.xpackEnable }}
    # see https://www.elastic.co/guide/en/x-pack/current/xpack-settings.html
    xpack.ml.enabled: ${XPACK_ML_ENABLED:false}
    xpack.monitoring.enabled: ${XPACK_MONITORING_ENABLED:false}
    xpack.security.enabled: ${XPACK_SECURITY_ENABLED:false}
    xpack.watcher.enabled: ${XPACK_WATCHER_ENABLED:false}
    {{- end }}
    {{- end }}

    # see https://github.com/elastic/elasticsearch-definitive-guide/pull/679
    processors: ${PROCESSORS:}

    # avoid split-brain w/ a minimum consensus of two masters plus a data node
    gateway.expected_master_nodes: ${EXPECTED_MASTER_NODES:2}
    gateway.expected_data_nodes: ${EXPECTED_DATA_NODES:1}
    gateway.recover_after_time: ${RECOVER_AFTER_TIME:5m}
    gateway.recover_after_master_nodes: ${RECOVER_AFTER_MASTER_NODES:2}
    gateway.recover_after_data_nodes: ${RECOVER_AFTER_DATA_NODES:1}
    {{- with .Values.cluster.config }}
{{ toYaml . | indent 4 }}
    {{- end }}
  {{- if hasPrefix "2." .Values.image.tag }}
  logging.yml: |-
    # you can override this by setting a system property, for example -Des.logger.level=DEBUG
    es.logger.level: INFO
    rootLogger: ${es.logger.level}, console
    logger:
      # log action execution errors for easier debugging
      action: DEBUG
      # reduce the logging for aws, too much is logged under the default INFO
      com.amazonaws: WARN
    appender:
      console:
        type: console
        layout:
          type: consolePattern
          conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
  {{- else if hasPrefix "5." .Values.image.tag }}
  log4j2.properties: |-
    status = error
    appender.console.type = Console
    appender.console.name = console
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
    rootLogger.level = info
    rootLogger.appenderRef.console.ref = console
    logger.searchguard.name = com.floragunn
    logger.searchguard.level = info
  {{- else if hasPrefix "6." .Values.image.tag }}
  log4j2.properties: |-
    status = error
    appender.console.type = Console
    appender.console.name = console
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
    rootLogger.level = info
    rootLogger.appenderRef.console.ref = console
    logger.searchguard.name = com.floragunn
    logger.searchguard.level = info
  {{- end }}
  pre-stop-hook.sh: |-
    #!/bin/bash
    exec &> >(tee -a "/var/log/elasticsearch-hooks.log")
    NODE_NAME=${HOSTNAME}
    echo "Prepare to migrate data of the node ${NODE_NAME}"
    echo "Move all data from node ${NODE_NAME}"
    curl -s -XPUT -H 'Content-Type: application/json' '{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings' -d "{
      \"transient\" :{
          \"cluster.routing.allocation.exclude._name\" : \"${NODE_NAME}\"
      }
    }"
    echo ""

    while true ; do
      echo -e "Wait for node ${NODE_NAME} to become empty"
      SHARDS_ALLOCATION=$(curl -s -XGET 'http://{{ template "elasticsearch.client.fullname" . }}:9200/_cat/shards')
      if ! echo "${SHARDS_ALLOCATION}" | grep -E "${NODE_NAME}"; then
        break
      fi
      sleep 1
    done
    echo "Node ${NODE_NAME} is ready to shutdown"
  post-start-hook.sh: |-
    #!/bin/bash
    exec &> >(tee -a "/var/log/elasticsearch-hooks.log")
    NODE_NAME=${HOSTNAME}
    CLUSTER_SETTINGS=$(curl -s -XGET "http://{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings")
    if echo "${CLUSTER_SETTINGS}" | grep -E "${NODE_NAME}"; then
      echo "Activate node ${NODE_NAME}"
      curl -s -XPUT -H 'Content-Type: application/json' "http://{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings" -d "{
        \"transient\" :{
          \"cluster.routing.allocation.exclude._name\" : null
        }
      }"
    fi
    echo "Node ${NODE_NAME} is ready to be used"
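The `pre-stop-hook.sh` above excludes the stopping node from shard allocation and then polls `_cat/shards` until no shard mentions the node. The same wait-until-empty loop can be sketched in Python; here `fetch_shards` is a stand-in for the `_cat/shards` call and the node names are illustrative:

```python
import time

def drain_wait(node_name, fetch_shards, poll_interval=1, max_polls=60):
    """Poll the shard listing until no shard line mentions node_name."""
    for _ in range(max_polls):
        if node_name not in fetch_shards():
            return True  # node holds no shards, safe to shut down
        time.sleep(poll_interval)
    return False  # gave up; in the hook this would block until grace period expiry

# Simulated relocation: the node disappears from the listing on the third poll.
responses = iter([
    "idx 0 p STARTED    10 1kb 10.0.0.1 es-data-0",
    "idx 0 p RELOCATING 10 1kb 10.0.0.1 es-data-0",
    "idx 0 p STARTED    10 1kb 10.0.0.2 es-data-1",
])
print(drain_wait("es-data-0", lambda: next(responses), poll_interval=0))  # prints True
```

Note the shell hook loops forever (`while true`); `terminationGracePeriodSeconds: 3600` in the data StatefulSet is what bounds the wait in practice.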
@@ -0,0 +1,24 @@
{{- if .Values.data.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Values.data.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "elasticsearch.data.fullname" . }}
spec:
  {{- if .Values.data.podDisruptionBudget.minAvailable }}
  minAvailable: {{ .Values.data.podDisruptionBudget.minAvailable }}
  {{- end }}
  {{- if .Values.data.podDisruptionBudget.maxUnavailable }}
  maxUnavailable: {{ .Values.data.podDisruptionBudget.maxUnavailable }}
  {{- end }}
  selector:
    matchLabels:
      app: {{ template "elasticsearch.name" . }}
      component: "{{ .Values.data.name }}"
      release: {{ .Release.Name }}
{{- end }}
@@ -0,0 +1,190 @@
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Values.data.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "elasticsearch.data.fullname" . }}
spec:
  serviceName: {{ template "elasticsearch.data.fullname" . }}
  replicas: {{ .Values.data.replicas }}
  template:
    metadata:
      labels:
        app: {{ template "elasticsearch.name" . }}
        component: "{{ .Values.data.name }}"
        release: {{ .Release.Name }}
      {{- if .Values.data.podAnnotations }}
      annotations:
{{ toYaml .Values.data.podAnnotations | indent 8 }}
      {{- end }}
    spec:
      {{- if .Values.data.priorityClassName }}
      priorityClassName: "{{ .Values.data.priorityClassName }}"
      {{- end }}
      securityContext:
        fsGroup: 1000
      {{- if eq .Values.data.antiAffinity "hard" }}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchLabels:
                app: "{{ template "elasticsearch.name" . }}"
                release: "{{ .Release.Name }}"
                component: "{{ .Values.data.name }}"
      {{- else if eq .Values.data.antiAffinity "soft" }}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: "{{ template "elasticsearch.name" . }}"
                  release: "{{ .Release.Name }}"
                  component: "{{ .Values.data.name }}"
      {{- end }}
      {{- if .Values.data.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.data.nodeSelector | indent 8 }}
      {{- end }}
      {{- if .Values.data.tolerations }}
      tolerations:
{{ toYaml .Values.data.tolerations | indent 8 }}
      {{- end }}
      initContainers:
      # see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
      # and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall
      - name: "sysctl"
        image: "busybox"
        imagePullPolicy: "Always"
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: "chown"
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
        command:
        - /bin/bash
        - -c
        - chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data &&
          chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/logs
        securityContext:
          runAsUser: 0
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: data
      containers:
      - name: elasticsearch
        env:
        - name: DISCOVERY_SERVICE
          value: {{ template "elasticsearch.fullname" . }}-discovery
        - name: NODE_MASTER
          value: "false"
        - name: PROCESSORS
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: ES_JAVA_OPTS
          value: "-Djava.net.preferIPv4Stack=true -Xms{{ .Values.data.heapSize }} -Xmx{{ .Values.data.heapSize }}"
        {{- range $key, $value := .Values.cluster.env }}
        - name: {{ $key }}
          value: {{ $value | quote }}
        {{- end }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
        ports:
        - containerPort: 9300
          name: transport
        {{ if .Values.data.exposeHttp }}
        - containerPort: 9200
          name: http
        {{ end }}
        resources:
{{ toYaml .Values.data.resources | indent 12 }}
        readinessProbe:
          httpGet:
            path: /_cluster/health?local=true
            port: 9200
          initialDelaySeconds: 5
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: data
        - mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          name: config
          subPath: elasticsearch.yml
        {{- if hasPrefix "2." .Values.image.tag }}
        - mountPath: /usr/share/elasticsearch/config/logging.yml
          name: config
          subPath: logging.yml
        {{- end }}
        {{- if hasPrefix "5." .Values.image.tag }}
        - mountPath: /usr/share/elasticsearch/config/log4j2.properties
          name: config
          subPath: log4j2.properties
        {{- end }}
        - name: config
          mountPath: /pre-stop-hook.sh
          subPath: pre-stop-hook.sh
        - name: config
          mountPath: /post-start-hook.sh
          subPath: post-start-hook.sh
        {{- if .Values.cluster.keystoreSecret }}
        - name: keystore
          mountPath: "/usr/share/elasticsearch/config/elasticsearch.keystore"
          subPath: elasticsearch.keystore
          readOnly: true
        {{- end }}
        lifecycle:
          preStop:
            exec:
              command: ["/bin/bash","/pre-stop-hook.sh"]
          postStart:
            exec:
              command: ["/bin/bash","/post-start-hook.sh"]
      terminationGracePeriodSeconds: {{ .Values.data.terminationGracePeriodSeconds }}
      {{- if .Values.image.pullSecrets }}
      imagePullSecrets:
      {{- range $pullSecret := .Values.image.pullSecrets }}
        - name: {{ $pullSecret }}
      {{- end }}
      {{- end }}
      volumes:
      - name: config
        configMap:
          name: {{ template "elasticsearch.fullname" . }}
      {{- if .Values.cluster.keystoreSecret }}
      - name: keystore
        secret:
          secretName: {{ .Values.cluster.keystoreSecret }}
      {{- end }}
      {{- if not .Values.data.persistence.enabled }}
      - name: data
        emptyDir: {}
      {{- end }}
  updateStrategy:
    type: {{ .Values.data.updateStrategy.type }}
  {{- if .Values.data.persistence.enabled }}
  volumeClaimTemplates:
  - metadata:
      name: {{ .Values.data.persistence.name }}
    spec:
      accessModes:
      - {{ .Values.data.persistence.accessMode | quote }}
    {{- if .Values.data.persistence.storageClass }}
    {{- if (eq "-" .Values.data.persistence.storageClass) }}
      storageClassName: ""
    {{- else }}
      storageClassName: "{{ .Values.data.persistence.storageClass }}"
    {{- end }}
    {{- end }}
      resources:
        requests:
          storage: "{{ .Values.data.persistence.size }}"
  {{- end }}
@@ -0,0 +1,24 @@
{{- if .Values.master.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Values.master.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "elasticsearch.master.fullname" . }}
spec:
  {{- if .Values.master.podDisruptionBudget.minAvailable }}
  minAvailable: {{ .Values.master.podDisruptionBudget.minAvailable }}
  {{- end }}
  {{- if .Values.master.podDisruptionBudget.maxUnavailable }}
  maxUnavailable: {{ .Values.master.podDisruptionBudget.maxUnavailable }}
  {{- end }}
  selector:
    matchLabels:
      app: {{ template "elasticsearch.name" . }}
      component: "{{ .Values.master.name }}"
      release: {{ .Release.Name }}
{{- end }}
@@ -0,0 +1,180 @@
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Values.master.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "elasticsearch.master.fullname" . }}
spec:
  serviceName: {{ template "elasticsearch.master.fullname" . }}
  replicas: {{ .Values.master.replicas }}
  template:
    metadata:
      labels:
        app: {{ template "elasticsearch.name" . }}
        component: "{{ .Values.master.name }}"
        release: {{ .Release.Name }}
      {{- if .Values.master.podAnnotations }}
      annotations:
{{ toYaml .Values.master.podAnnotations | indent 8 }}
      {{- end }}
    spec:
      {{- if .Values.master.priorityClassName }}
      priorityClassName: "{{ .Values.master.priorityClassName }}"
      {{- end }}
      securityContext:
        fsGroup: 1000
      {{- if eq .Values.master.antiAffinity "hard" }}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchLabels:
                app: "{{ template "elasticsearch.name" . }}"
                release: "{{ .Release.Name }}"
                component: "{{ .Values.master.name }}"
      {{- else if eq .Values.master.antiAffinity "soft" }}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: "{{ template "elasticsearch.name" . }}"
                  release: "{{ .Release.Name }}"
                  component: "{{ .Values.master.name }}"
      {{- end }}
      {{- if .Values.master.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.master.nodeSelector | indent 8 }}
      {{- end }}
      {{- if .Values.master.tolerations }}
      tolerations:
{{ toYaml .Values.master.tolerations | indent 8 }}
      {{- end }}
      initContainers:
      # see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
      # and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall
      - name: "sysctl"
        image: "busybox"
        imagePullPolicy: "Always"
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: "chown"
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
        command:
        - /bin/bash
        - -c
        - chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data &&
          chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/logs
        securityContext:
          runAsUser: 0
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: data
      containers:
      - name: elasticsearch
        env:
        - name: NODE_DATA
          value: "false"
        {{- if hasPrefix "5." .Values.appVersion }}
        - name: NODE_INGEST
          value: "false"
        {{- end }}
        - name: DISCOVERY_SERVICE
          value: {{ template "elasticsearch.fullname" . }}-discovery
        - name: PROCESSORS
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: ES_JAVA_OPTS
          value: "-Djava.net.preferIPv4Stack=true -Xms{{ .Values.master.heapSize }} -Xmx{{ .Values.master.heapSize }}"
        {{- range $key, $value := .Values.cluster.env }}
        - name: {{ $key }}
          value: {{ $value | quote }}
        {{- end }}
        resources:
{{ toYaml .Values.master.resources | indent 12 }}
        readinessProbe:
          httpGet:
            path: /_cluster/health?local=true
            port: 9200
          initialDelaySeconds: 5
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
        ports:
        - containerPort: 9300
          name: transport
        {{ if .Values.master.exposeHttp }}
        - containerPort: 9200
          name: http
        {{ end }}
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: data
        - mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          name: config
          subPath: elasticsearch.yml
        {{- if hasPrefix "2." .Values.image.tag }}
        - mountPath: /usr/share/elasticsearch/config/logging.yml
          name: config
          subPath: logging.yml
        {{- end }}
        {{- if hasPrefix "5." .Values.image.tag }}
        - mountPath: /usr/share/elasticsearch/config/log4j2.properties
          name: config
          subPath: log4j2.properties
        {{- end }}
        {{- if .Values.cluster.keystoreSecret }}
        - name: keystore
          mountPath: "/usr/share/elasticsearch/config/elasticsearch.keystore"
          subPath: elasticsearch.keystore
          readOnly: true
        {{- end }}
      {{- if .Values.image.pullSecrets }}
      imagePullSecrets:
      {{- range $pullSecret := .Values.image.pullSecrets }}
        - name: {{ $pullSecret }}
      {{- end }}
      {{- end }}
      volumes:
      - name: config
        configMap:
          name: {{ template "elasticsearch.fullname" . }}
      {{- if .Values.cluster.keystoreSecret }}
      - name: keystore
        secret:
          secretName: {{ .Values.cluster.keystoreSecret }}
      {{- end }}
      {{- if not .Values.master.persistence.enabled }}
      - name: data
        emptyDir: {}
      {{- end }}
  updateStrategy:
    type: {{ .Values.master.updateStrategy.type }}
  {{- if .Values.master.persistence.enabled }}
  volumeClaimTemplates:
  - metadata:
      name: {{ .Values.master.persistence.name }}
    spec:
      accessModes:
      - {{ .Values.master.persistence.accessMode | quote }}
    {{- if .Values.master.persistence.storageClass }}
    {{- if (eq "-" .Values.master.persistence.storageClass) }}
      storageClassName: ""
    {{- else }}
      storageClassName: "{{ .Values.master.persistence.storageClass }}"
    {{- end }}
    {{- end }}
      resources:
        requests:
          storage: "{{ .Values.master.persistence.size }}"
  {{ end }}
@@ -0,0 +1,19 @@
apiVersion: v1
kind: Service
metadata:
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Values.master.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "elasticsearch.fullname" . }}-discovery
spec:
  clusterIP: None
  ports:
  - port: 9300
    targetPort: transport
  selector:
    app: {{ template "elasticsearch.name" . }}
    component: "{{ .Values.master.name }}"
    release: {{ .Release.Name }}
@@ -0,0 +1,121 @@
# Default values for elasticsearch.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
appVersion: "6.4.0"

image:
  repository: "docker.elastic.co/elasticsearch/elasticsearch-oss"
  tag: "6.4.0"
  pullPolicy: "IfNotPresent"
  # If specified, use these secrets to access the image
  # pullSecrets:
  #   - registry-secret

cluster:
  name: "elasticsearch"
  # If you want X-Pack installed, switch to an image that includes it, enable this option and toggle the features you want
  # enabled in the environment variables outlined in the README
  xpackEnable: false
  # Some settings must be placed in a keystore, so they need to be mounted in from a secret.
  # Use this setting to specify the name of the secret
  # keystoreSecret: eskeystore
  config: {}
  env:
    # IMPORTANT: https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#minimum_master_nodes
    # To prevent data loss, it is vital to configure the discovery.zen.minimum_master_nodes setting so that each master-eligible
    # node knows the minimum number of master-eligible nodes that must be visible in order to form a cluster.
    MINIMUM_MASTER_NODES: "2"

client:
  name: client
  replicas: 2
  serviceType: ClusterIP
  loadBalancerIP: {}
  loadBalancerSourceRanges: {}
  ## (dict) If specified, apply these annotations to the client service
  # serviceAnnotations:
  #   example: client-svc-foo
  heapSize: "512m"
  antiAffinity: "soft"
  nodeSelector: {}
  tolerations: []
  resources:
    limits:
      cpu: "1"
      # memory: "1024Mi"
    requests:
      cpu: "25m"
      memory: "512Mi"
  priorityClassName: ""
  ## (dict) If specified, apply these annotations to each client Pod
  # podAnnotations:
  #   example: client-foo
  podDisruptionBudget:
    enabled: false
    minAvailable: 1
    # maxUnavailable: 1

master:
  name: master
  exposeHttp: false
  replicas: 3
  heapSize: "512m"
  persistence:
    enabled: true
    accessMode: ReadWriteOnce
    name: data
    size: "4Gi"
    # storageClass: "ssd"
  antiAffinity: "soft"
  nodeSelector: {}
  tolerations: []
  resources:
    limits:
      cpu: "1"
      # memory: "1024Mi"
    requests:
      cpu: "25m"
      memory: "512Mi"
  priorityClassName: ""
  ## (dict) If specified, apply these annotations to each master Pod
  # podAnnotations:
  #   example: master-foo
  podDisruptionBudget:
    enabled: false
    minAvailable: 2  # Same as `cluster.env.MINIMUM_MASTER_NODES`
    # maxUnavailable: 1
  updateStrategy:
    type: OnDelete

data:
  name: data
  exposeHttp: false
  replicas: 2
  heapSize: "1536m"
  persistence:
    enabled: true
    accessMode: ReadWriteOnce
    name: data
    size: "30Gi"
    # storageClass: "ssd"
  terminationGracePeriodSeconds: 3600
  antiAffinity: "soft"
  nodeSelector: {}
  tolerations: []
  resources:
    limits:
      cpu: "1"
      # memory: "2048Mi"
    requests:
      cpu: "25m"
      memory: "1536Mi"
  priorityClassName: ""
  ## (dict) If specified, apply these annotations to each data Pod
  # podAnnotations:
  #   example: data-foo
  podDisruptionBudget:
    enabled: false
    # minAvailable: 1
    maxUnavailable: 1
  updateStrategy:
    type: OnDelete
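The `MINIMUM_MASTER_NODES: "2"` default above is the usual Zen-discovery quorum for the chart's 3 master replicas: a strict majority of master-eligible nodes must be visible before a cluster forms. The rule can be stated in one line (this helper is illustrative, not part of the chart):

```python
def minimum_master_nodes(master_eligible: int) -> int:
    """Quorum rule for discovery.zen.minimum_master_nodes: a strict majority."""
    return master_eligible // 2 + 1

print(minimum_master_nodes(3))  # prints 2 -- matches the chart default above
```

If you change `master.replicas`, `MINIMUM_MASTER_NODES` (and `master.podDisruptionBudget.minAvailable`, which mirrors it) should be recomputed the same way.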
@@ -0,0 +1,44 @@
image:
  repository: "jmgao1983/elasticsearch"

cluster:
  name: "es-on-k8s"
  env:
    MINIMUM_MASTER_NODES: "2"

client:
  serviceType: NodePort

master:
  name: master
  replicas: 3
  heapSize: "512m"
  persistence:
    enabled: true
    accessMode: ReadWriteOnce
    name: data
    size: "4Gi"
    storageClass: "nfs-es"

data:
  name: data
  replicas: 2
  heapSize: "1536m"
  persistence:
    enabled: true
    accessMode: ReadWriteOnce
    name: data
    size: "40Gi"
    storageClass: "nfs-es"
  terminationGracePeriodSeconds: 3600
  resources:
    limits:
      cpu: "1"
      # memory: "2048Mi"
    requests:
      cpu: "25m"
      memory: "1536Mi"
  podDisruptionBudget:
    enabled: false
    # minAvailable: 1
    maxUnavailable: 1