Update the helm3 docs: add a redis cluster installation example

pull/860/head
gjmzj 2020-05-28 21:06:43 +08:00
parent 9117d07e82
commit 89ac57a9fc
35 changed files with 1536 additions and 435 deletions

View File

@ -3,7 +3,7 @@
This project is dedicated to providing tooling for quickly deploying highly available `k8s` clusters, and also strives to serve as a reference for practicing and using `k8s`. It deploys from binaries and automates the process with `ansible-playbook`, offering both a one-click install script and step-by-step installation of each component following the `installation guide`.
- **Cluster features** mutual `TLS` authentication, `RBAC` authorization, [multi-master HA](docs/setup/00-planning_and_overall_intro.md#ha-architecture), `Network Policy` support, backup/restore, [offline install](docs/setup/offline_install.md)
- **Cluster versions** kubernetes v1.15, v1.16, v1.17, v1.18
- **Operating systems** CentOS/RedHat 7, Debian 9/10, Ubuntu 1604/1804
- **Runtimes** docker 18.06.x-ce, 18.09.x, 19.03.x, [containerd](docs/guide/containerd.md) 1.2.6
- **Network plugins** [calico](docs/setup/network-plugin/calico.md), [cilium](docs/setup/network-plugin/cilium.md), [flannel](docs/setup/network-plugin/flannel.md), [kube-ovn](docs/setup/network-plugin/kube-ovn.md), [kube-router](docs/setup/network-plugin/kube-router.md)

View File

@ -2,53 +2,146 @@
`Helm` aims to be the application package manager for k8s clusters, hoping to be as successful as `RPM` and `DPKG` are on linux. Deploying anything non-trivial on k8s really is tedious: you have to manage many yaml files (configmap, controller, service, rbac, pv, pvc, and so on), and helm manages these documents in an orderly way, with version control, parameterized installs, and convenient packaging and sharing.
- It is recommended to gain some k8s experience before using helm. For beginners, configuring those yaml files by hand is very helpful for quickly learning k8s design concepts and how things work, rather than jumping straight to helm and facing yet another layer of abstraction and complexity.
- This document is based on helm 3 (the recommended version); for the helm 2 document [see here](helm2.md)
## Install helm
Just download the binary shipped in the official [releases](https://github.com/helm/helm/releases); taking Linux amd64 as an example:
``` bash
wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
tar zxvf helm-v3.2.1-linux-amd64.tar.gz
mv ./linux-amd64/helm /usr/bin
```
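- (Optional) verify the client; a quick check, where the version string will match the tarball you downloaded:
``` bash
helm version --short
# e.g. v3.2.1+gfe51cd1
```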
- Enable the official charts repository
```
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
```
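- (Optional) confirm the repository works, for example by searching it for the chart used below:
``` bash
helm search repo redis-ha
# should list stable/redis-ha among the results
```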
## Install applications with helm
The helm3 install commands differ slightly from helm2. I prefer to download the chart locally first and then install it from a fixed directory layout; creating a redis cluster is used as the example:
- Create the redis-cluster directory
``` bash
mkdir -p /opt/charts/redis-cluster
cd /opt/charts/redis-cluster
```
- Download the latest stable/redis-ha chart
```
helm repo update
helm pull stable/redis-ha
```
- Extract the chart and copy its values.yaml
```
tar zxvf redis-ha-*.tgz
cp redis-ha/values.yaml .
```
- Create a start.sh script to record the install command
```
# quote 'EOF' so that $(...) and $ROOT are written literally into start.sh
cat > start.sh << 'EOF'
#!/bin/sh
set -x
ROOT=$(cd `dirname $0`; pwd)
cd $ROOT
helm install redis \
--create-namespace \
--namespace dependency \
-f ./values.yaml \
./redis-ha
EOF
```
- The directory now looks like this
```
tree .
.
├── redis-ha              # the original redis-ha chart directory
├── start.sh              # startup command script
└── values.yaml           # customized configuration values
```
- Edit values.yaml in the current directory with your own configuration
``` bash
# example values.yaml as follows (PV not enabled)
#cat values.yaml
image:
  repository: redis
  tag: 5.0.6-alpine
replicas: 2
## Redis specific configuration options
redis:
  port: 6379
  masterGroupName: "mymaster"        # must match ^[\\w-\\.]+$ and can be templated
  config:
    ## For all available options see http://download.redis.io/redis-stable/redis.conf
    min-replicas-to-write: 1
    min-replicas-max-lag: 5          # Value in seconds
    maxmemory: "4g"                  # Max memory to use for each redis instance. Default is unlimited.
    maxmemory-policy: "allkeys-lru"  # Max memory policy to use for each redis instance. Default is volatile-lru.
    repl-diskless-sync: "yes"
    rdbcompression: "yes"
    rdbchecksum: "yes"
  resources:
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 4000Mi
## Sentinel specific configuration options
sentinel:
  port: 26379
  quorum: 1
  resources:
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 200Mi
hardAntiAffinity: true
## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
persistentVolume:
  enabled: false
hostPath:
  path: "/data/mcs-redis/{{ .Release.Name }}"
```
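- (Optional) render the chart locally first to validate the overrides; a minimal check using the same paths as above:
``` bash
helm template redis --namespace dependency -f ./values.yaml ./redis-ha | head -n 40
# or ask the server to dry-run the full install
helm install redis --dry-run --create-namespace --namespace dependency -f ./values.yaml ./redis-ha
```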
- Run the install
```
bash ./start.sh
```
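- (Optional) later changes to values.yaml can be rolled out with `helm upgrade` using the same directory layout:
``` bash
helm upgrade redis --namespace dependency -f ./values.yaml ./redis-ha
```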
- Check the installation
```
helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
redis dependency 1 2020-05-28 20:57:31.166002853 +0800 CST deployed redis-ha-4.4.4 5.0.6
# check the resources in k8s
kubectl get pod,svc -n dependency
NAME READY STATUS RESTARTS AGE
pod/redis-redis-ha-server-0 2/2 Running 0 119s
pod/redis-redis-ha-server-1 2/2 Running 0 104s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/redis-redis-ha ClusterIP None <none> 6379/TCP,26379/TCP 119s
service/redis-redis-ha-announce-0 ClusterIP 10.68.41.65 <none> 6379/TCP,26379/TCP 119s
service/redis-redis-ha-announce-1 ClusterIP 10.68.64.49 <none> 6379/TCP,26379/TCP 119s
```
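- (Optional) confirm that Sentinel has elected a master; a minimal check run inside one of the pods created above (the container name `sentinel` comes from this chart's statefulset):
``` bash
kubectl -n dependency exec -it redis-redis-ha-server-0 -c sentinel -- \
  redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
# expected: the announce IP of the current master and port 6379, e.g.
# 1) "10.68.41.65"
# 2) "6379"
```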

View File

@ -0,0 +1,54 @@
# Helm
`Helm` aims to be the application package manager for k8s clusters, hoping to be as successful as `RPM` and `DPKG` are on linux. Deploying anything non-trivial on k8s really is tedious: you have to manage many yaml files (configmap, controller, service, rbac, pv, pvc, and so on), and helm manages these documents in an orderly way, with version control, parameterized installs, and convenient packaging and sharing.
- It is recommended to gain some k8s experience before using helm. For beginners, configuring those yaml files by hand is very helpful for quickly learning k8s design concepts and how things work, rather than jumping straight to helm and facing yet another layer of abstraction and complexity.
- This document follows the helm security best practices and enables TLS authentication; see https://docs.helm.sh/using_helm/#securing-your-helm-installation
## Secure helm install (online)
The following steps use helm/tiller v2.14.1 as an example and set up secure SSL/TLS authentication between the helm client and the tiller server: both sides use a `client cert` issued by the same cluster CA and thereby verify each other's identity. It is recommended to install via the `ansible role` provided by this project, which follows the hardening measures described on the official site; run on the ansible control node:
``` bash
# 1. configure the default helm parameters: vi /etc/ansible/roles/helm/defaults/main.yml
# 2. run the install
$ ansible-playbook /etc/ansible/roles/helm/helm.yml
```
- Note: by default the helm client is only initialized on the first master node. To initialize it on other nodes as well, modify the hosts definition in roles/helm/helm.yml and run `ansible-playbook /etc/ansible/roles/helm/helm.yml` again.
A brief walkthrough of the steps in `/roles/helm/tasks/main.yml`:
- 1. download the latest helm client release to the /etc/ansible/bin directory
- 2. have the cluster CA issue the helm client certificate and private key
- 3. have the cluster CA issue the tiller server certificate and private key
- 4. create a tiller-specific RBAC configuration that only allows helm to view and install applications in the specified namespace
- 5. securely install tiller into the cluster with TLS verification enabled on the tiller service
- 6. configure the helm client to communicate with the tiller server over TLS (see the example below)
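A quick way to check the TLS setup afterwards (a sketch, assuming the role placed the client CA/cert/key at helm's default locations under `~/.helm`; otherwise pass the paths explicitly):
```bash
helm ls --tls
# equivalent to specifying the certificate files explicitly:
helm ls --tls --tls-ca-cert ~/.helm/ca.pem --tls-cert ~/.helm/cert.pem --tls-key ~/.helm/key.pem
```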
## Secure helm install (offline)
In an intranet environment with no internet access, the repo address cannot be reached and the online helm install described above will fail, so an offline install is needed instead. Note that the tiller image version must be v2.14.1, otherwise it will not match the client.
Offline install steps:
```bash
# 1. create a local repo
mkdir -p /opt/helm-repo
# 2. start the helm repo server; change 127.0.0.1 to this host's IP if other servers need access
nohup helm serve --address 127.0.0.1:8879 --repo-path /opt/helm-repo &
# 3. update the helm config file:
#    change the repo address in /etc/ansible/roles/helm/defaults/main.yml to http://127.0.0.1:8879
cat <<EOF >/etc/ansible/roles/helm/defaults/main.yml
helm_namespace: kube-system
helm_cert_cn: helm001
tiller_sa: tiller
tiller_cert_cn: tiller001
tiller_image: easzlab/tiller:v2.14.1
#repo_url: https://kubernetes-charts.storage.googleapis.com
repo_url: http://127.0.0.1:8879
history_max: 5
# if access to the default official repo is unstable, the following Aliyun mirror repo can be used instead
#repo_url: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
EOF
# 4. run the secure helm install playbook
ansible-playbook /etc/ansible/roles/helm/helm.yml
```
## Install applications on k8s with helm
Please read this project's document [installing prometheus monitoring with helm](prometheus.md).

View File

@ -12,7 +12,7 @@
|Role|Count|Description|
|:-|:-|:-|
|admin node|1|runs the ansible/easzctl scripts; usually shares a master node|
|etcd nodes|3|note: the etcd cluster needs an odd number of nodes (1,3,5,7...); usually shares the master nodes|
|master nodes|2|an HA cluster needs at least 2 master nodes|
|worker nodes|3|nodes that run application workloads; raise machine specs / add nodes as needed|

View File

@ -1,16 +1,16 @@
# 01 - Create certificates and prepare the environment
This step, [01.prepare.yml](../../01.prepare.yml), mainly completes:
- [chrony role](../guide/chrony.md): cluster node time synchronization [optional]
- deploy role: create the CA certificate and the various kubeconfig files cluster components use to access the apiserver
- prepare role: basic OS environment configuration, distribution of the CA certificate, kubectl client installation
## deploy role
Please open [roles/deploy/tasks/main.yml](../../roles/deploy/tasks/main.yml) in another window and follow along with the walkthrough below.
### Create the CA certificate
``` bash
roles/deploy/

View File

@ -1,120 +0,0 @@
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
repository: redis
tag: 5.0.3-alpine
pullPolicy: IfNotPresent
## replicas number for each component
replicas: 2
## Redis specific configuration options
redis:
port: 6379
masterGroupName: mymaster
config:
## Additional redis conf options can be added below
## For all available options see http://download.redis.io/redis-stable/redis.conf
min-slaves-to-write: 1
min-slaves-max-lag: 5 # Value in seconds
maxmemory: "1g" # Max memory to use for each redis instance. Default is unlimited.
maxmemory-policy: "allkeys-lru" # Max memory policy to use for each redis instance. Default is volatile-lru.
# Determines if scheduled RDB backups are created. Default is false.
# Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave. The only way to prevent this is to enable diskless replication.
# save: "900 1"
# When enabled, directly sends the RDB over the wire to slaves, without using the disk as intermediate storage. Default is false.
repl-diskless-sync: "yes"
rdbcompression: "yes"
rdbchecksum: "yes"
## Custom redis.conf files used to override default settings. If this file is
## specified then the redis.config above will be ignored.
# customConfig: |-
# Define configuration here
resources:
requests:
memory: 500Mi
cpu: 100m
limits:
memory: 1100Mi
## Sentinel specific configuration options
sentinel:
port: 26379
quorum: 1
config:
## Additional sentinel conf options can be added below. Only options that
## are expressed in the format simialar to 'sentinel xxx mymaster xxx' will
## be properly templated.
## For available options see http://download.redis.io/redis-stable/sentinel.conf
down-after-milliseconds: 10000
## Failover timeout value in milliseconds
failover-timeout: 180000
parallel-syncs: 5
## Custom sentinel.conf files used to override default settings. If this file is
## specified then the sentinel.config above will be ignored.
# customConfig: |-
# Define configuration here
resources:
requests:
memory: 200Mi
cpu: 100m
limits:
memory: 200Mi
securityContext:
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: |
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
topologyKey: kubernetes.io/hostname
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
topologyKey: failure-domain.beta.kubernetes.io/zone
podDisruptionBudget: {}
# maxUnavailable: 1
# minAvailable: 1
## Configures redis with AUTH (requirepass & masterauth conf params)
auth: true
redisPassword: redis1234
## Use existing secret containing "auth" key (ignores redisPassword)
# existingSecret:
persistentVolume:
enabled: false
## redis-ha data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: "nfs-db"
accessModes:
- ReadWriteOnce
size: 3Gi
annotations: {}
init:
resources: {}

View File

@ -1,20 +1,21 @@
apiVersion: v1
appVersion: 5.0.6
description: Highly available Kubernetes implementation of Redis
engine: gotpl
home: http://redis.io/
icon: https://upload.wikimedia.org/wikipedia/en/thumb/6/6b/Redis_Logo.svg/1200px-Redis_Logo.svg.png
keywords:
- redis
- keyvalue
- database
maintainers:
- email: salimsalaues@gmail.com
name: ssalaues
- email: aaron.layfield@gmail.com
name: dandydeveloper
name: redis-ha
sources:
- https://redis.io/download
- https://github.com/scality/Zenko/tree/development/1.0/kubernetes/zenko/charts/redis-ha
- https://github.com/oliver006/redis_exporter
version: 4.4.4

View File

@ -1,4 +1,6 @@
approvers:
- ssalaues
- dandydeveloper
reviewers:
- ssalaues
- dandydeveloper

View File

@ -1,5 +1,14 @@
# Redis
----------------------------------------
# Deprecation Warning
*As part of the [deprecation timeline](https://github.com/helm/charts/#deprecation-timeline), this chart is moving to an official repository [here](https://github.com/DandyDeveloper/charts).*
Please make PRs / Issues here from now on
We will keep the changes in sync as best we can, but we will be notifying people to submit PRs here from now on instead. If you have any questions, feel free to get in touch with either of the maintainers.
----------------------------------------
[Redis](http://redis.io/) is an advanced key-value cache and store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets, sorted sets, bitmaps and hyperloglogs.
## TL;DR;
@ -9,8 +18,8 @@ $ helm install stable/redis-ha
```
By default this chart installs 3 pods total:
* one pod containing a redis master and sentinel container (optional prometheus metrics exporter sidecar available)
* two pods each containing a redis slave and sentinel containers (optional prometheus metrics exporter sidecars available)
## Introduction
@ -25,6 +34,10 @@ This chart bootstraps a [Redis](https://redis.io) highly available master/slave
Please note that there have been a number of changes simplifying the redis management strategy (for better failover and elections) in the 3.x version of this chart. These changes allow the use of official [redis](https://hub.docker.com/_/redis/) images that do not require special RBAC or ServiceAccount roles. As a result when upgrading from version >=2.0.1 to >=3.0.0 of this chart, `Role`, `RoleBinding`, and `ServiceAccount` resources should be deleted manually.
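For example, the leftover resources can be located and removed by their release label (a sketch; verify the resource names in your cluster before deleting):
```bash
kubectl -n <namespace> get role,rolebinding,serviceaccount -l release=<release-name>
kubectl -n <namespace> delete role,rolebinding,serviceaccount -l release=<release-name>
```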
### Upgrading the chart from 3.x to 4.x
Starting from version `4.x`, the HAProxy sidecar prometheus-exporter has been removed and replaced by the embedded [HAProxy metrics endpoint](https://github.com/haproxy/haproxy/tree/master/contrib/prometheus-exporter). As a result, when upgrading from version 3.x to 4.x the `haproxy.exporter` section should be removed and `haproxy.metrics` needs to be configured to fit your needs.
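For instance, the new metrics endpoint can be enabled at upgrade time with value overrides such as the following (values shown are illustrative; see the parameter table below):
```bash
helm upgrade <release-name> stable/redis-ha \
  --set haproxy.enabled=true \
  --set haproxy.metrics.enabled=true \
  --set haproxy.metrics.port=9101 \
  --set haproxy.metrics.scrapePath=/metrics
```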
## Installing the Chart
To install the chart
@ -52,12 +65,16 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the Redis chart and their default values.
| Parameter | Description | Default |
|:--------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------|
| `image` | Redis image | `redis` |
| `imagePullSecrets` | Reference to one or more secrets to be used when pulling redis images | [] |
| `tag` | Redis tag | `5.0.6-alpine` |
| `replicas` | Number of redis master/slave pods | `3` |
| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `serviceAccount.name` | The name of the ServiceAccount to create | Generated using the redis-ha.fullname template |
| `rbac.create` | Create and use RBAC resources | `true` |
| `redis.port` | Port to access the redis service | `6379` |
| `redis.masterGroupName` | Redis convention for naming the cluster group: must match `^[\\w-\\.]+$` and can be templated | `mymaster` |
| `redis.config` | Any valid redis config options in this section will be applied to each server (see below) | see values.yaml |
| `redis.customConfig` | Allows for custom redis.conf files to be applied. If this is used then `redis.config` is ignored | `` |
| `redis.resources` | CPU/Memory for master/slave nodes resource requests/limits | `{}` |
@ -66,21 +83,91 @@ The following table lists the configurable parameters of the Redis chart and the
| `sentinel.config` | Valid sentinel config options in this section will be applied as config options to each sentinel (see below) | see values.yaml |
| `sentinel.customConfig` | Allows for custom sentinel.conf files to be applied. If this is used then `sentinel.config` is ignored | `` |
| `sentinel.resources` | CPU/Memory for sentinel node resource requests/limits | `{}` |
| `init.resources` | CPU/Memory for init Container node resource requests/limits | `{}` |
| `auth` | Enables or disables redis AUTH (Requires `redisPassword` to be set) | `false` |
| `redisPassword` | A password that configures a `requirepass` and `masterauth` in the conf parameters (Requires `auth: enabled`) | `` |
| `authKey` | The key holding the redis password in an existing secret. | `auth` |
| `existingSecret` | An existing secret containing a key defined by `authKey` that configures `requirepass` and `masterauth` in the conf parameters (Requires `auth: enabled`, cannot be used in conjunction with `.Values.redisPassword`) | `` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `podAntiAffinity.server` | Antiaffinity for pod assignment of servers, `hard` or `soft` | `Hard node and soft zone anti-affinity` |
| `hardAntiAffinity` | Whether the Redis server pods should be forced to run on separate nodes. | `true` |
| `additionalAffinities` | Additional affinities to add to the Redis server pods. | `{}` |
| `securityContext` | Security context to be added to the Redis server pods. | `{runAsUser: 1000, fsGroup: 1000, runAsNonRoot: true}` |
| `affinity` | Override all other affinity settings with a string. | `""` |
| `persistentVolume.size` | Size for the volume | 10Gi |
| `persistentVolume.annotations` | Annotations for the volume | `{}` |
| `persistentVolume.reclaimPolicy` | Method used to reclaim an obsoleted volume. `Delete` or `Retain` | `""` |
| `emptyDir` | Configuration of `emptyDir`, used only if persistentVolume is disabled and no hostPath specified | `{}` |
| `exporter.enabled` | If `true`, the prometheus exporter sidecar is enabled | `false` |
| `exporter.image` | Exporter image | `oliver006/redis_exporter` |
| `exporter.tag` | Exporter tag | `v0.31.0` |
| `exporter.port` | Exporter port | `9121` |
| `exporter.annotations` | Prometheus scrape annotations | `{prometheus.io/path: /metrics, prometheus.io/port: "9121", prometheus.io/scrape: "true"}` |
| `exporter.extraArgs` | Additional args for the exporter | `{}` |
| `exporter.script` | A custom Lua script that will be mounted to exporter for collection of custom metrics. Creates a ConfigMap and sets env var `REDIS_EXPORTER_SCRIPT`. | |
| `exporter.serviceMonitor.enabled` | Use servicemonitor from prometheus operator | `false` |
| `exporter.serviceMonitor.namespace` | Namespace the service monitor is created in | `default` |
| `exporter.serviceMonitor.interval` | Scrape interval, If not set, the Prometheus default scrape interval is used | `nil` |
| `exporter.serviceMonitor.telemetryPath` | Path to redis-exporter telemetry-path | `/metrics` |
| `exporter.serviceMonitor.labels` | Labels for the servicemonitor passed to Prometheus Operator | `{}` |
| `exporter.serviceMonitor.timeout` | How long until a scrape request times out. If not set, the Prometheus default scrape timeout is used | `nil` |
| `haproxy.enabled` | Enabled HAProxy LoadBalancing/Proxy | `false` |
| `haproxy.replicas` | Number of HAProxy instances | `3` |
| `haproxy.image.repository`| HAProxy Image Repository | `haproxy` |
| `haproxy.image.tag` | HAProxy Image Tag | `2.0.1` |
| `haproxy.image.pullPolicy`| HAProxy Image PullPolicy | `IfNotPresent` |
| `haproxy.imagePullSecrets`| Reference to one or more secrets to be used when pulling haproxy images | [] |
| `haproxy.annotations` | HAProxy template annotations | `{}` |
| `haproxy.customConfig` | Allows for custom config-haproxy.cfg file to be applied. If this is used then default config will be overwritten | `` |
| `haproxy.extraConfig` | Allows to place any additional configuration section to add to the default config-haproxy.cfg | `` |
| `haproxy.resources` | HAProxy resources | `{}` |
| `haproxy.emptyDir` | Configuration of `emptyDir` | `{}` |
| `haproxy.service.type` | HAProxy service type "ClusterIP", "LoadBalancer" or "NodePort" | `ClusterIP` |
| `haproxy.service.nodePort` | HAProxy service nodePort value (haproxy.service.type must be NodePort) | not set |
| `haproxy.service.annotations` | HAProxy service annotations | `{}` |
| `haproxy.stickyBalancing` | HAProxy sticky load balancing to Redis nodes. Helps with connections shutdown. | `false` |
| `haproxy.hapreadport.enable` | Enable a read only port for redis slaves | `false` |
| `haproxy.hapreadport.port` | Haproxy port for read only redis slaves | `6380` |
| `haproxy.metrics.enabled` | HAProxy enable prometheus metric scraping | `false` |
| `haproxy.metrics.port` | HAProxy prometheus metrics scraping port | `9101` |
| `haproxy.metrics.portName` | HAProxy metrics scraping port name | `exporter-port` |
| `haproxy.metrics.scrapePath` | HAProxy prometheus metrics scraping path | `/metrics` |
| `haproxy.metrics.serviceMonitor.enabled` | Use servicemonitor from prometheus operator for HAProxy metrics | `false` |
| `haproxy.metrics.serviceMonitor.namespace` | Namespace the service monitor for HAProxy metrics is created in | `default` |
| `haproxy.metrics.serviceMonitor.interval` | Scrape interval, If not set, the Prometheus default scrape interval is used | `nil` |
| `haproxy.metrics.serviceMonitor.telemetryPath` | Path to HAProxy metrics telemetry-path | `/metrics` |
| `haproxy.metrics.serviceMonitor.labels` | Labels for the HAProxy metrics servicemonitor passed to Prometheus Operator | `{}` |
| `haproxy.metrics.serviceMonitor.timeout` | How long until a scrape request times out. If not set, the Prometheus default scrape timeout is used | `nil` |
| `haproxy.init.resources` | Extra init resources | `{}` |
| `haproxy.timeout.connect` | haproxy.cfg `timeout connect` setting | `4s` |
| `haproxy.timeout.server` | haproxy.cfg `timeout server` setting | `30s` |
| `haproxy.timeout.client` | haproxy.cfg `timeout client` setting | `30s` |
| `haproxy.timeout.check` | haproxy.cfg `timeout check` setting | `2s` |
| `haproxy.priorityClassName` | priorityClassName for `haproxy` deployment | not set |
| `haproxy.securityContext` | Security context to be added to the HAProxy deployment. | `{runAsUser: 1000, fsGroup: 1000, runAsNonRoot: true}` |
| `haproxy.hardAntiAffinity` | Whether the haproxy pods should be forced to run on separate nodes. | `true` |
| `haproxy.affinity` | Override all other haproxy affinity settings with a string. | `""` |
| `haproxy.additionalAffinities` | Additional affinities to add to the haproxy server pods. | `{}` |
| `podDisruptionBudget` | Pod Disruption Budget rules | `{}` |
| `priorityClassName` | priorityClassName for `redis-ha-statefulset` | not set |
| `hostPath.path` | Use this path on the host for data storage | not set |
| `hostPath.chown` | Run an init-container as root to set ownership on the hostPath | `true` |
| `sysctlImage.enabled` | Enable an init container to modify Kernel settings | `false` |
| `sysctlImage.command` | sysctlImage command to execute | [] |
| `sysctlImage.registry` | sysctlImage Init container registry | `docker.io` |
| `sysctlImage.repository` | sysctlImage Init container name | `busybox` |
| `sysctlImage.tag` | sysctlImage Init container tag | `1.31.1` |
| `sysctlImage.pullPolicy` | sysctlImage Init container pull policy | `Always` |
| `sysctlImage.mountHostSys`| Mount the host `/sys` folder to `/host-sys` | `false` |
| `sysctlImage.resources` | sysctlImage resources | `{}` |
| `schedulerName` | Alternate scheduler name | `nil` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```bash
$ helm install \
--set image=redis \
--set tag=5.0.5-alpine \
stable/redis-ha
```
@ -107,6 +194,20 @@ For example `repl-timeout 60` would be added to the `redis.config` section of th
repl-timeout: "60"
```
Note:
1. Some config options were renamed between redis versions, e.g.:
```
# In redis 5.x, see https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf
min-replicas-to-write: 1
min-replicas-max-lag: 5
# In redis 4.x and redis 3.x, see https://raw.githubusercontent.com/antirez/redis/4.0/redis.conf and https://raw.githubusercontent.com/antirez/redis/3.0/redis.conf
min-slaves-to-write 1
min-slaves-max-lag 5
```
Sentinel options supported must be in the `sentinel <option> <master-group-name> <value>` format. For example, `sentinel down-after-milliseconds 30000` would be added to the `sentinel.config` section of the `values.yaml` as:
```yml
@ -115,3 +216,24 @@ Sentinel options supported must be in the the `sentinel <option> <master-group-n
If more control is needed from either the redis or sentinel config then an entire config can be defined under `redis.customConfig` or `sentinel.customConfig`. Please note that these values will override any configuration options under their respective section. For example, if you define `sentinel.customConfig` then the `sentinel.config` is ignored.
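For example, a full redis.conf can be supplied through `redis.customConfig` from a values file (a sketch; the file name and settings are illustrative):
```bash
cat > custom-redis-values.yaml << 'EOF'
redis:
  customConfig: |-
    dir "/data"
    port 6379
    maxmemory 4g
    maxmemory-policy allkeys-lru
EOF
helm upgrade <release-name> stable/redis-ha -f custom-redis-values.yaml
```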
## Host Kernel Settings
Redis may require some changes in the kernel of the host machine to work as expected, in particular increasing the `somaxconn` value and disabling transparent huge pages.
To do so, you can set up a privileged initContainer with the `sysctlImage` config values, for example:
```
sysctlImage:
enabled: true
mountHostSys: true
command:
- /bin/sh
- -xc
- |-
sysctl -w net.core.somaxconn=10000
echo never > /host-sys/kernel/mm/transparent_hugepage/enabled
```
## HAProxy startup
When HAProxy is enabled, it will attempt to connect to each announce-service of each redis replica instance in its init container before starting.
It will fail if an announce-service IP is not available quickly enough (at most 10 seconds per announce-service).
Such a case can happen if the orchestrator is still pending the scheduling of the redis pods.
The risk is limited because the announce-services use `publishNotReadyAddresses: true`; in such a case the HAProxy pod will simply be rescheduled afterwards by the orchestrator.

View File

@ -0,0 +1,10 @@
---
## Enable HAProxy to manage Load Balancing
haproxy:
enabled: true
annotations:
any.domain/key: "value"
serviceAccount:
create: true
metrics:
enabled: true

View File

@ -0,0 +1,275 @@
{{/* vim: set filetype=mustache: */}}
{{- define "config-redis.conf" }}
{{- if .Values.redis.customConfig }}
{{ tpl .Values.redis.customConfig . | indent 4 }}
{{- else }}
dir "/data"
port {{ .Values.redis.port }}
{{- range $key, $value := .Values.redis.config }}
{{ $key }} {{ $value }}
{{- end }}
{{- if .Values.auth }}
requirepass replace-default-auth
masterauth replace-default-auth
{{- end }}
{{- end }}
{{- end }}
{{- define "config-sentinel.conf" }}
{{- if .Values.sentinel.customConfig }}
{{ tpl .Values.sentinel.customConfig . | indent 4 }}
{{- else }}
dir "/data"
{{- range $key, $value := .Values.sentinel.config }}
{{- if eq "maxclients" $key }}
{{ $key }} {{ $value }}
{{- else }}
sentinel {{ $key }} {{ template "redis-ha.masterGroupName" $ }} {{ $value }}
{{- end }}
{{- end }}
{{- if .Values.auth }}
sentinel auth-pass {{ template "redis-ha.masterGroupName" . }} replace-default-auth
{{- end }}
{{- end }}
{{- end }}
{{- define "config-init.sh" }}
HOSTNAME="$(hostname)"
INDEX="${HOSTNAME##*-}"
MASTER="$(redis-cli -h {{ template "redis-ha.fullname" . }} -p {{ .Values.sentinel.port }} sentinel get-master-addr-by-name {{ template "redis-ha.masterGroupName" . }} | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
MASTER_GROUP="{{ template "redis-ha.masterGroupName" . }}"
QUORUM="{{ .Values.sentinel.quorum }}"
REDIS_CONF=/data/conf/redis.conf
REDIS_PORT={{ .Values.redis.port }}
SENTINEL_CONF=/data/conf/sentinel.conf
SENTINEL_PORT={{ .Values.sentinel.port }}
SERVICE={{ template "redis-ha.fullname" . }}
set -eu
sentinel_update() {
echo "Updating sentinel config with master $MASTER"
eval MY_SENTINEL_ID="\${SENTINEL_ID_$INDEX}"
sed -i "1s/^/sentinel myid $MY_SENTINEL_ID\\n/" "$SENTINEL_CONF"
sed -i "2s/^/sentinel monitor $MASTER_GROUP $1 $REDIS_PORT $QUORUM \\n/" "$SENTINEL_CONF"
echo "sentinel announce-ip $ANNOUNCE_IP" >> $SENTINEL_CONF
echo "sentinel announce-port $SENTINEL_PORT" >> $SENTINEL_CONF
}
redis_update() {
echo "Updating redis config"
echo "slaveof $1 $REDIS_PORT" >> "$REDIS_CONF"
echo "slave-announce-ip $ANNOUNCE_IP" >> $REDIS_CONF
echo "slave-announce-port $REDIS_PORT" >> $REDIS_CONF
}
copy_config() {
cp /readonly-config/redis.conf "$REDIS_CONF"
cp /readonly-config/sentinel.conf "$SENTINEL_CONF"
}
setup_defaults() {
echo "Setting up defaults"
if [ "$INDEX" = "0" ]; then
echo "Setting this pod as the default master"
redis_update "$ANNOUNCE_IP"
sentinel_update "$ANNOUNCE_IP"
sed -i "s/^.*slaveof.*//" "$REDIS_CONF"
else
DEFAULT_MASTER="$(getent hosts "$SERVICE-announce-0" | awk '{ print $1 }')"
if [ -z "$DEFAULT_MASTER" ]; then
echo "Unable to resolve host"
exit 1
fi
echo "Setting default slave config.."
redis_update "$DEFAULT_MASTER"
sentinel_update "$DEFAULT_MASTER"
fi
}
find_master() {
echo "Attempting to find master"
if [ "$(redis-cli -h "$MASTER"{{ if .Values.auth }} -a "$AUTH"{{ end }} ping)" != "PONG" ]; then
echo "Can't ping master, attempting to force failover"
if redis-cli -h "$SERVICE" -p "$SENTINEL_PORT" sentinel failover "$MASTER_GROUP" | grep -q 'NOGOODSLAVE' ; then
setup_defaults
return 0
fi
sleep 10
MASTER="$(redis-cli -h $SERVICE -p $SENTINEL_PORT sentinel get-master-addr-by-name $MASTER_GROUP | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
if [ "$MASTER" ]; then
sentinel_update "$MASTER"
redis_update "$MASTER"
else
echo "Could not failover, exiting..."
exit 1
fi
else
echo "Found reachable master, updating config"
sentinel_update "$MASTER"
redis_update "$MASTER"
fi
}
mkdir -p /data/conf/
echo "Initializing config.."
copy_config
ANNOUNCE_IP=$(getent hosts "$SERVICE-announce-$INDEX" | awk '{ print $1 }')
if [ -z "$ANNOUNCE_IP" ]; then
"Could not resolve the announce ip for this pod"
exit 1
elif [ "$MASTER" ]; then
find_master
else
setup_defaults
fi
if [ "${AUTH:-}" ]; then
echo "Setting auth values"
ESCAPED_AUTH=$(echo "$AUTH" | sed -e 's/[\/&]/\\&/g');
sed -i "s/replace-default-auth/${ESCAPED_AUTH}/" "$REDIS_CONF" "$SENTINEL_CONF"
fi
echo "Ready..."
{{- end }}
{{- define "config-haproxy.cfg" }}
{{- if .Values.haproxy.customConfig }}
{{ .Values.haproxy.customConfig | indent 4}}
{{- else }}
defaults REDIS
mode tcp
timeout connect {{ .Values.haproxy.timeout.connect }}
timeout server {{ .Values.haproxy.timeout.server }}
timeout client {{ .Values.haproxy.timeout.client }}
timeout check {{ .Values.haproxy.timeout.check }}
listen health_check_http_url
bind :8888
mode http
monitor-uri /healthz
option dontlognull
{{- $root := . }}
{{- $fullName := include "redis-ha.fullname" . }}
{{- $replicas := int (toString .Values.replicas) }}
{{- $masterGroupName := include "redis-ha.masterGroupName" . }}
{{- range $i := until $replicas }}
# Check Sentinel and whether they are nominated master
backend check_if_redis_is_master_{{ $i }}
mode tcp
option tcp-check
tcp-check connect
{{- if $root.auth }}
tcp-check send AUTH\ {{ $root.redisPassword }}\r\n
tcp-check expect string +OK
{{- end }}
tcp-check send PING\r\n
tcp-check expect string +PONG
tcp-check send SENTINEL\ get-master-addr-by-name\ {{ $masterGroupName }}\r\n
tcp-check expect string REPLACE_ANNOUNCE{{ $i }}
tcp-check send QUIT\r\n
tcp-check expect string +OK
{{- range $i := until $replicas }}
server R{{ $i }} {{ $fullName }}-announce-{{ $i }}:26379 check inter 1s
{{- end }}
{{- end }}
# decide redis backend to use
#master
frontend ft_redis_master
bind *:{{ $root.Values.redis.port }}
use_backend bk_redis_master
{{- if .Values.haproxy.readOnly.enabled }}
#slave
frontend ft_redis_slave
bind *:{{ .Values.haproxy.readOnly.port }}
use_backend bk_redis_slave
{{- end }}
# Check all redis servers to see if they think they are master
backend bk_redis_master
{{- if .Values.haproxy.stickyBalancing }}
balance source
hash-type consistent
{{- end }}
mode tcp
option tcp-check
tcp-check connect
{{- if .Values.auth }}
tcp-check send AUTH\ REPLACE_AUTH_SECRET\r\n
tcp-check expect string +OK
{{- end }}
tcp-check send PING\r\n
tcp-check expect string +PONG
tcp-check send info\ replication\r\n
tcp-check expect string role:master
tcp-check send QUIT\r\n
tcp-check expect string +OK
{{- range $i := until $replicas }}
use-server R{{ $i }} if { srv_is_up(R{{ $i }}) } { nbsrv(check_if_redis_is_master_{{ $i }}) ge 2 }
server R{{ $i }} {{ $fullName }}-announce-{{ $i }}:{{ $root.Values.redis.port }} check inter 1s fall 1 rise 1
{{- end }}
{{- if .Values.haproxy.readOnly.enabled }}
backend bk_redis_slave
{{- if .Values.haproxy.stickyBalancing }}
balance source
hash-type consistent
{{- end }}
mode tcp
option tcp-check
tcp-check connect
{{- if .Values.auth }}
tcp-check send AUTH\ REPLACE_AUTH_SECRET\r\n
tcp-check expect string +OK
{{- end }}
tcp-check send PING\r\n
tcp-check expect string +PONG
tcp-check send info\ replication\r\n
tcp-check expect string role:slave
tcp-check send QUIT\r\n
tcp-check expect string +OK
{{- range $i := until $replicas }}
server R{{ $i }} {{ $fullName }}-announce-{{ $i }}:{{ $root.Values.redis.port }} check inter 1s fall 1 rise 1
{{- end }}
{{- end }}
{{- if .Values.haproxy.metrics.enabled }}
frontend metrics
mode http
bind *:{{ .Values.haproxy.metrics.port }}
option http-use-htx
http-request use-service prometheus-exporter if { path {{ .Values.haproxy.metrics.scrapePath }} }
{{- end }}
{{- if .Values.haproxy.extraConfig }}
# Additional configuration
{{ .Values.haproxy.extraConfig | indent 4 }}
{{- end }}
{{- end }}
{{- end }}
{{- define "config-haproxy_init.sh" }}
HAPROXY_CONF=/data/haproxy.cfg
cp /readonly/haproxy.cfg "$HAPROXY_CONF"
{{- $fullName := include "redis-ha.fullname" . }}
{{- $replicas := int (toString .Values.replicas) }}
{{- range $i := until $replicas }}
for loop in $(seq 1 10); do
getent hosts {{ $fullName }}-announce-{{ $i }} && break
echo "Waiting for service {{ $fullName }}-announce-{{ $i }} to be ready ($loop) ..." && sleep 1
done
ANNOUNCE_IP{{ $i }}=$(getent hosts "{{ $fullName }}-announce-{{ $i }}" | awk '{ print $1 }')
if [ -z "$ANNOUNCE_IP{{ $i }}" ]; then
echo "Could not resolve the announce ip for {{ $fullName }}-announce-{{ $i }}"
exit 1
fi
sed -i "s/REPLACE_ANNOUNCE{{ $i }}/$ANNOUNCE_IP{{ $i }}/" "$HAPROXY_CONF"
if [ "${AUTH:-}" ]; then
echo "Setting auth values"
ESCAPED_AUTH=$(echo "$AUTH" | sed -e 's/[\/&]/\\&/g');
sed -i "s/REPLACE_AUTH_SECRET/${ESCAPED_AUTH}/" "$HAPROXY_CONF"
fi
{{- end }}
{{- end }}

View File

@ -25,6 +25,16 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
{{- end -}}
{{- end -}}
{{/*
Return sysctl image
*/}}
{{- define "redis.sysctl.image" -}}
{{- $registryName := default "docker.io" .Values.sysctlImage.registry -}}
{{- $tag := default "latest" .Values.sysctlImage.tag | toString -}}
{{- printf "%s/%s:%s" $registryName .Values.sysctlImage.repository $tag -}}
{{- end -}}
{{- /*
Credit: @technosophos
https://github.com/technosophos/common-chart/
@ -51,3 +61,23 @@ Example output:
{{- replace "+" "_" .Chart.Version | printf "%s-%s" .Chart.Name -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "redis-ha.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "redis-ha.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{- define "redis-ha.masterGroupName" -}}
{{- $masterGroupName := tpl ( .Values.redis.masterGroupName | default "") . -}}
{{- $validMasterGroupName := regexMatch "^[\\w-\\.]+$" $masterGroupName -}}
{{- if $validMasterGroupName -}}
{{ $masterGroupName }}
{{- else -}}
{{ required "A valid .Values.redis.masterGroupName entry is required (matching ^[\\w-\\.]+$)" ""}}
{{- end -}}
{{- end -}}

View File

@ -3,9 +3,10 @@ apiVersion: v1
kind: Secret
metadata:
name: {{ template "redis-ha.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{ include "labels.standard" . | indent 4 }}
type: Opaque
data:
auth: {{ .Values.redisPassword | b64enc | quote }}
{{ .Values.authKey }}: {{ .Values.redisPassword | b64enc | quote }}
{{- end -}}

View File

@ -1,5 +1,6 @@
{{- $fullName := include "redis-ha.fullname" . }}
{{- $replicas := int .Values.replicas }}
{{- $namespace := .Release.Namespace -}}
{{- $replicas := int (toString .Values.replicas) }}
{{- $root := . }}
{{- range $i := until $replicas }}
---
@ -7,6 +8,7 @@ apiVersion: v1
kind: Service
metadata:
name: {{ $fullName }}-announce-{{ $i }}
namespace: {{ $namespace }}
labels:
{{ include "labels.standard" $root | indent 4 }}
annotations:
@ -26,6 +28,12 @@ spec:
port: {{ $root.Values.sentinel.port }}
protocol: TCP
targetPort: sentinel
{{- if $root.Values.exporter.enabled }}
- name: exporter
port: {{ $root.Values.exporter.port }}
protocol: TCP
targetPort: exporter-port
{{- end }}
selector:
release: {{ $root.Release.Name }}
app: {{ include "redis-ha.name" $root }}

View File

@ -2,6 +2,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "redis-ha.fullname" . }}-configmap
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
@ -9,129 +10,16 @@ metadata:
app: {{ template "redis-ha.fullname" . }}
data:
redis.conf: |
{{- if .Values.redis.customConfig }}
{{ .Values.redis.customConfig | indent 4 }}
{{- else }}
dir "/data"
{{- range $key, $value := .Values.redis.config }}
{{ $key }} {{ $value }}
{{- end }}
{{- if .Values.auth }}
requirepass replace-default-auth
masterauth replace-default-auth
{{- end }}
{{- end }}
{{- include "config-redis.conf" . }}
sentinel.conf: |
{{- if .Values.sentinel.customConfig }}
{{ .Values.sentinel.customConfig | indent 4 }}
{{- else }}
dir "/data"
{{- $root := . -}}
{{- range $key, $value := .Values.sentinel.config }}
sentinel {{ $key }} {{ $root.Values.redis.masterGroupName }} {{ $value }}
{{- end }}
{{- if .Values.auth }}
sentinel auth-pass {{ .Values.redis.masterGroupName }} replace-default-auth
{{- end }}
{{- end }}
{{- include "config-sentinel.conf" . }}
init.sh: |
HOSTNAME="$(hostname)"
INDEX="${HOSTNAME##*-}"
MASTER="$(redis-cli -h {{ template "redis-ha.fullname" . }} -p {{ .Values.sentinel.port }} sentinel get-master-addr-by-name {{ .Values.redis.masterGroupName }} | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
MASTER_GROUP="{{ .Values.redis.masterGroupName }}"
QUORUM="{{ .Values.sentinel.quorum }}"
REDIS_CONF=/data/conf/redis.conf
REDIS_PORT={{ .Values.redis.port }}
SENTINEL_CONF=/data/conf/sentinel.conf
SENTINEL_PORT={{ .Values.sentinel.port }}
SERVICE={{ template "redis-ha.fullname" . }}
set -eu
sentinel_update() {
echo "Updating sentinel config"
sed -i "1s/^/$(cat sentinel-id)\\n/" "$SENTINEL_CONF"
sed -i "2s/^/sentinel monitor $MASTER_GROUP $1 $REDIS_PORT $QUORUM \\n/" "$SENTINEL_CONF"
echo "sentinel announce-ip $ANNOUNCE_IP" >> $SENTINEL_CONF
echo "sentinel announce-port $SENTINEL_PORT" >> $SENTINEL_CONF
}
redis_update() {
echo "Updating redis config"
echo "slaveof $1 $REDIS_PORT" >> "$REDIS_CONF"
echo "slave-announce-ip $ANNOUNCE_IP" >> $REDIS_CONF
echo "slave-announce-port $REDIS_PORT" >> $REDIS_CONF
}
copy_config() {
if [ -f "$SENTINEL_CONF" ]; then
grep "sentinel myid" "$SENTINEL_CONF" > sentinel-id || true
fi
cp /readonly-config/redis.conf "$REDIS_CONF"
cp /readonly-config/sentinel.conf "$SENTINEL_CONF"
}
setup_defaults() {
echo "Setting up defaults"
if [ "$INDEX" = "0" ]; then
echo "Setting this pod as the default master"
sed -i "s/^.*slaveof.*//" "$REDIS_CONF"
sentinel_update "$ANNOUNCE_IP"
else
DEFAULT_MASTER="$(getent hosts "$SERVICE-announce-0" | awk '{ print $1 }')"
if [ -z "$DEFAULT_MASTER" ]; then
echo "Unable to resolve host"
exit 1
fi
echo "Setting default slave config.."
redis_update "$DEFAULT_MASTER"
sentinel_update "$DEFAULT_MASTER"
fi
}
find_master() {
echo "Attempting to find master"
if [ "$(redis-cli -h "$MASTER"{{ if .Values.auth }} -a "$AUTH"{{ end }} ping)" != "PONG" ]; then
echo "Can't ping master, attempting to force failover"
if redis-cli -h "$SERVICE" -p "$SENTINEL_PORT" sentinel failover "$MASTER_GROUP" | grep -q 'NOGOODSLAVE' ; then
setup_defaults
return 0
fi
sleep 10
MASTER="$(redis-cli -h $SERVICE -p $SENTINEL_PORT sentinel get-master-addr-by-name $MASTER_GROUP | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
if [ "$MASTER" ]; then
sentinel_update "$MASTER"
redis_update "$MASTER"
else
echo "Could not failover, exiting..."
exit 1
fi
else
echo "Found reachable master, updating config"
sentinel_update "$MASTER"
redis_update "$MASTER"
fi
}
mkdir -p /data/conf/
echo "Initializing config.."
copy_config
ANNOUNCE_IP=$(getent hosts "$SERVICE-announce-$INDEX" | awk '{ print $1 }')
if [ -z "$ANNOUNCE_IP" ]; then
"Could not resolve the announce ip for this pod"
exit 1
elif [ "$MASTER" ]; then
find_master
else
setup_defaults
fi
if [ "${AUTH:-}" ]; then
echo "Setting auth values"
sed -i "s/replace-default-auth/$AUTH/" "$REDIS_CONF" "$SENTINEL_CONF"
fi
echo "Ready..."
{{- include "config-init.sh" . }}
{{ if .Values.haproxy.enabled }}
haproxy.cfg: |-
{{- include "config-haproxy.cfg" . }}
{{- end }}
haproxy_init.sh: |
{{- include "config-haproxy_init.sh" . }}

View File

@ -0,0 +1,11 @@
{{- if .Values.exporter.script }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "redis-ha.fullname" . }}-exporter-script-configmap
namespace: {{ .Release.Namespace }}
labels:
{{ include "labels.standard" . | indent 4 }}
data:
script: {{ toYaml .Values.exporter.script | indent 2 }}
{{- end }}

View File

@ -1,41 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "redis-ha.fullname" . }}-probes
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app: {{ template "redis-ha.fullname" . }}
data:
check-quorum.sh: |
#!/bin/sh
set -eu
MASTER_GROUP="{{ .Values.redis.masterGroupName }}"
SENTINEL_PORT={{ .Values.sentinel.port }}
REDIS_PORT={{ .Values.redis.port }}
NUM_SLAVES=$(redis-cli -p "$SENTINEL_PORT" sentinel master {{ .Values.redis.masterGroupName }} | awk '/num-slaves/{getline; print}')
MIN_SLAVES={{ index .Values.redis.config "min-slaves-to-write" }}
if [ "$1" = "$SENTINEL_PORT" ]; then
if redis-cli -p "$SENTINEL_PORT" sentinel ckquorum "$MASTER_GROUP" | grep -q NOQUORUM ; then
echo "ERROR: NOQUORUM. Sentinel quorum check failed, not enough sentinels found"
exit 1
fi
elif [ "$1" = "$REDIS_PORT" ]; then
if [ "$MIN_SLAVES" -gt "$NUM_SLAVES" ]; then
echo "Could not find enough replicating slaves. Needed $MIN_SLAVES but found $NUM_SLAVES"
exit 1
fi
fi
sh /probes/readiness.sh "$1"
readiness.sh: |
#!/bin/sh
set -eu
CHECK_SERVER="$(redis-cli -p "$1"{{ if .Values.auth }} -a "$AUTH"{{ end }} ping)"
if [ "$CHECK_SERVER" != "PONG" ]; then
echo "Server check failed with: $CHECK_SERVER"
exit 1
fi

View File

@ -3,6 +3,7 @@ apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "redis-ha.fullname" . }}-pdb
namespace: {{ .Release.Namespace }}
labels:
{{ include "labels.standard" . | indent 4 }}
spec:

View File

@ -0,0 +1,19 @@
{{- if and .Values.serviceAccount.create .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "redis-ha.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app: {{ template "redis-ha.fullname" . }}
rules:
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
{{- end }}

View File

@ -0,0 +1,19 @@
{{- if and .Values.serviceAccount.create .Values.rbac.create }}
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "redis-ha.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app: {{ template "redis-ha.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "redis-ha.serviceAccountName" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "redis-ha.fullname" . }}
{{- end }}

View File

@ -2,8 +2,12 @@ apiVersion: v1
kind: Service
metadata:
name: {{ template "redis-ha.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{ include "labels.standard" . | indent 4 }}
{{- if and ( .Values.exporter.enabled ) ( .Values.exporter.serviceMonitor.enabled ) }}
servicemonitor: enabled
{{- end }}
annotations:
{{- if .Values.serviceAnnotations }}
{{ toYaml .Values.serviceAnnotations | indent 4 }}
@ -20,6 +24,12 @@ spec:
port: {{ .Values.sentinel.port }}
protocol: TCP
targetPort: sentinel
{{- if .Values.exporter.enabled }}
- name: exporter-port
port: {{ .Values.exporter.port }}
protocol: TCP
targetPort: exporter-port
{{- end }}
selector:
release: {{ .Release.Name }}
app: {{ template "redis-ha.name" . }}

View File

@ -0,0 +1,12 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "redis-ha.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app: {{ template "redis-ha.fullname" . }}
{{- end }}

View File

@ -0,0 +1,35 @@
{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) ( .Values.exporter.serviceMonitor.enabled ) ( .Values.exporter.enabled ) }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
{{- if .Values.exporter.serviceMonitor.labels }}
labels:
{{ toYaml .Values.exporter.serviceMonitor.labels | indent 4}}
{{- end }}
name: {{ template "redis-ha.fullname" . }}
namespace: {{ .Release.Namespace }}
{{- if .Values.exporter.serviceMonitor.namespace }}
namespace: {{ .Values.exporter.serviceMonitor.namespace }}
{{- end }}
spec:
endpoints:
- targetPort: {{ .Values.exporter.port }}
{{- if .Values.exporter.serviceMonitor.interval }}
interval: {{ .Values.exporter.serviceMonitor.interval }}
{{- end }}
{{- if .Values.exporter.serviceMonitor.telemetryPath }}
path: {{ .Values.exporter.serviceMonitor.telemetryPath }}
{{- end }}
{{- if .Values.exporter.serviceMonitor.timeout }}
scrapeTimeout: {{ .Values.exporter.serviceMonitor.timeout }}
{{- end }}
jobLabel: {{ template "redis-ha.fullname" . }}
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
servicemonitor: enabled
{{- end }}

View File

@ -2,7 +2,9 @@ apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "redis-ha.fullname" . }}-server
namespace: {{ .Release.Namespace }}
labels:
{{ template "redis-ha.fullname" . }}: replica
{{ include "labels.standard" . | indent 4 }}
spec:
selector:
@ -17,15 +19,26 @@ spec:
template:
metadata:
annotations:
checksum/init-config: {{ include (print $.Template.BasePath "/redis-ha-configmap.yaml") . | sha256sum }}
checksum/probe-config: {{ include (print $.Template.BasePath "/redis-ha-healthchecks.yaml") . | sha256sum }}
checksum/init-config: {{ print (include "config-redis.conf" .) (include "config-init.sh" .) | sha256sum }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
{{- if .Values.exporter.enabled }}
prometheus.io/port: "{{ .Values.exporter.port }}"
prometheus.io/scrape: "true"
prometheus.io/path: {{ .Values.exporter.scrapePath }}
{{- end }}
labels:
release: {{ .Release.Name }}
app: {{ template "redis-ha.name" . }}
{{ template "redis-ha.fullname" . }}: replica
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
@ -34,13 +47,82 @@ spec:
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- if .Values.affinity }}
{{- with .Values.affinity }}
{{ tpl . $ | indent 8 }}
{{- end }}
{{- else }}
{{- if .Values.additionalAffinities }}
{{ toYaml .Values.additionalAffinities | indent 8 }}
{{- end }}
podAntiAffinity:
{{- if .Values.hardAntiAffinity }}
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
{{ template "redis-ha.fullname" . }}: replica
topologyKey: kubernetes.io/hostname
{{- else }}
preferredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
{{ template "redis-ha.fullname" . }}: replica
topologyKey: kubernetes.io/hostname
{{- end }}
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
{{ template "redis-ha.fullname" . }}: replica
topologyKey: failure-domain.beta.kubernetes.io/zone
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets: {{ toYaml .Values.imagePullSecrets | nindent 8 }}
{{- end }}
securityContext:
{{ toYaml .Values.securityContext | indent 8 }}
serviceAccountName: {{ template "redis-ha.serviceAccountName" . }}
initContainers:
{{- if .Values.sysctlImage.enabled }}
- name: init-sysctl
image: {{ template "redis.sysctl.image" . }}
imagePullPolicy: {{ .Values.sysctlImage.pullPolicy }}
resources:
{{ toYaml .Values.sysctlImage.resources | indent 10 }}
{{- if .Values.sysctlImage.mountHostSys }}
volumeMounts:
- name: host-sys
mountPath: /host-sys
{{- end }}
command:
{{ toYaml .Values.sysctlImage.command | indent 10 }}
securityContext:
runAsNonRoot: false
privileged: true
runAsUser: 0
{{- end }}
{{- if and .Values.hostPath.path .Values.hostPath.chown }}
- name: hostpath-chown
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
securityContext:
runAsNonRoot: false
runAsUser: 0
command:
- chown
- "{{ .Values.securityContext.runAsUser }}"
- /data
volumeMounts:
- name: data
mountPath: /data
{{- end }}
- name: config-init
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
@ -50,8 +132,13 @@ spec:
- sh
args:
- /readonly-config/init.sh
{{- if .Values.auth }}
env:
{{- $replicas := int (toString .Values.replicas) -}}
{{- range $i := until $replicas }}
- name: SENTINEL_ID_{{ $i }}
value: {{ printf "%s\n%s\nindex: %d" (include "redis-ha.name" $) ($.Release.Name) $i | sha1sum }}
{{ end -}}
{{- if .Values.auth }}
- name: AUTH
valueFrom:
secretKeyRef:
@ -60,7 +147,7 @@ spec:
{{- else }}
name: {{ template "redis-ha.fullname" . }}
{{- end }}
key: auth
key: {{ .Values.authKey }}
{{- end }}
volumeMounts:
- name: config
@ -86,18 +173,12 @@ spec:
{{- else }}
name: {{ template "redis-ha.fullname" . }}
{{- end }}
key: auth
key: {{ .Values.authKey }}
{{- end }}
livenessProbe:
exec:
command: [ "sh", "/probes/readiness.sh", "{{ .Values.redis.port }}"]
tcpSocket:
port: {{ .Values.redis.port }}
initialDelaySeconds: 15
periodSeconds: 5
readinessProbe:
exec:
command: ["sh", "/probes/readiness.sh", "{{ .Values.redis.port }}"]
initialDelaySeconds: 15
periodSeconds: 5
resources:
{{ toYaml .Values.redis.resources | indent 10 }}
ports:
@ -106,8 +187,6 @@ spec:
volumeMounts:
- mountPath: /data
name: data
- mountPath: /probes
name: probes
- name: sentinel
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
@ -125,18 +204,12 @@ spec:
{{- else }}
name: {{ template "redis-ha.fullname" . }}
{{- end }}
key: auth
key: {{ .Values.authKey }}
{{- end }}
livenessProbe:
exec:
command: [ "sh", "/probes/readiness.sh", "{{ .Values.sentinel.port }}"]
tcpSocket:
port: {{ .Values.sentinel.port }}
initialDelaySeconds: 15
periodSeconds: 5
readinessProbe:
exec:
command: ["sh", "/probes/readiness.sh", "{{ .Values.sentinel.port }}"]
initialDelaySeconds: 15
periodSeconds: 5
resources:
{{ toYaml .Values.sentinel.resources | indent 10 }}
ports:
@ -145,15 +218,70 @@ spec:
volumeMounts:
- mountPath: /data
name: data
- mountPath: /probes
name: probes
{{- if .Values.exporter.enabled }}
- name: redis-exporter
image: "{{ .Values.exporter.image }}:{{ .Values.exporter.tag }}"
imagePullPolicy: {{ .Values.exporter.pullPolicy }}
args:
{{- range $key, $value := .Values.exporter.extraArgs }}
- --{{ $key }}={{ $value }}
{{- end }}
env:
- name: REDIS_ADDR
value: redis://localhost:{{ .Values.redis.port }}
{{- if .Values.auth }}
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "redis-ha.fullname" . }}
{{- end }}
key: {{ .Values.authKey }}
{{- end }}
{{- if .Values.exporter.script }}
- name: REDIS_EXPORTER_SCRIPT
value: /script/script.lua
{{- end }}
livenessProbe:
httpGet:
path: {{ .Values.exporter.scrapePath }}
port: {{ .Values.exporter.port }}
initialDelaySeconds: 15
timeoutSeconds: 1
periodSeconds: 15
resources:
{{ toYaml .Values.exporter.resources | indent 10 }}
ports:
- name: exporter-port
containerPort: {{ .Values.exporter.port }}
{{- if .Values.exporter.script }}
volumeMounts:
- mountPath: /script
name: script-mount
{{- end }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "redis-ha.fullname" . }}-configmap
- name: probes
{{- if .Values.sysctlImage.mountHostSys }}
- name: host-sys
hostPath:
path: /sys
{{- end }}
{{- if .Values.exporter.script }}
- name: script-mount
configMap:
name: {{ template "redis-ha.fullname" . }}-probes
name: {{ template "redis-ha.fullname" . }}-exporter-script-configmap
items:
- key: script
path: script.lua
{{- end }}
{{- if .Values.persistentVolume.enabled }}
volumeClaimTemplates:
- metadata:
@ -177,7 +305,15 @@ spec:
storageClassName: "{{ .Values.persistentVolume.storageClass }}"
{{- end }}
{{- end }}
{{- if .Values.persistentVolume.reclaimPolicy }}
persistentVolumeReclaimPolicy: "{{ .Values.persistentVolume.reclaimPolicy }}"
{{- end }}
{{- else if .Values.hostPath.path }}
- name: data
hostPath:
path: {{ tpl .Values.hostPath.path .}}
{{- else }}
- name: data
emptyDir: {}
emptyDir:
{{ toYaml .Values.emptyDir | indent 10 }}
{{- end }}

View File

@ -0,0 +1,151 @@
{{- if .Values.haproxy.enabled }}
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ template "redis-ha.fullname" . }}-haproxy
namespace: {{ .Release.Namespace }}
labels:
{{ include "labels.standard" . | indent 4 }}
spec:
strategy:
type: RollingUpdate
revisionHistoryLimit: 1
replicas: {{ .Values.haproxy.replicas }}
selector:
matchLabels:
app: {{ template "redis-ha.name" . }}-haproxy
release: {{ .Release.Name }}
template:
metadata:
name: {{ template "redis-ha.fullname" . }}-haproxy
labels:
app: {{ template "redis-ha.name" . }}-haproxy
release: {{ .Release.Name }}
annotations:
{{- if .Values.haproxy.metrics.enabled }}
prometheus.io/port: "{{ .Values.haproxy.metrics.port }}"
prometheus.io/scrape: "true"
prometheus.io/path: "{{ .Values.haproxy.metrics.scrapePath }}"
{{- end }}
checksum/config: {{ print (include "config-haproxy.cfg" .) (include "config-haproxy_init.sh" .) | sha256sum }}
{{- if .Values.haproxy.annotations }}
{{ toYaml .Values.haproxy.annotations | indent 8 }}
{{- end }}
spec:
# Needed when using unmodified rbac-setup.yml
{{ if .Values.haproxy.serviceAccount.create }}
serviceAccountName: {{ template "redis-ha.serviceAccountName" . }}-haproxy
{{ end }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
affinity:
{{- if .Values.haproxy.affinity }}
{{- with .Values.haproxy.affinity }}
{{ tpl . $ | indent 8 }}
{{- end }}
{{- else }}
{{- if .Values.haproxy.additionalAffinities }}
{{ toYaml .Values.haproxy.additionalAffinities | indent 8 }}
{{- end }}
podAntiAffinity:
{{- if .Values.haproxy.hardAntiAffinity }}
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}-haproxy
release: {{ .Release.Name }}
topologyKey: kubernetes.io/hostname
{{- else }}
preferredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}-haproxy
release: {{ .Release.Name }}
topologyKey: kubernetes.io/hostname
{{- end }}
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}-haproxy
release: {{ .Release.Name }}
topologyKey: failure-domain.beta.kubernetes.io/zone
{{- end }}
initContainers:
- name: config-init
image: {{ .Values.haproxy.image.repository }}:{{ .Values.haproxy.image.tag }}
imagePullPolicy: {{ .Values.haproxy.image.pullPolicy }}
resources:
{{ toYaml .Values.haproxy.init.resources | indent 10 }}
command:
- sh
args:
- /readonly/haproxy_init.sh
{{- if .Values.auth }}
env:
- name: AUTH
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "redis-ha.fullname" . }}
{{- end }}
key: {{ .Values.authKey }}
{{- end }}
volumeMounts:
- name: config-volume
mountPath: /readonly
readOnly: true
- name: data
mountPath: /data
{{- if .Values.haproxy.imagePullSecrets }}
imagePullSecrets: {{ toYaml .Values.haproxy.imagePullSecrets | nindent 8 }}
{{- end }}
securityContext:
{{ toYaml .Values.haproxy.securityContext | indent 8 }}
containers:
- name: haproxy
image: {{ .Values.haproxy.image.repository }}:{{ .Values.haproxy.image.tag }}
imagePullPolicy: {{ .Values.haproxy.image.pullPolicy }}
livenessProbe:
httpGet:
path: /healthz
port: 8888
initialDelaySeconds: 5
periodSeconds: 3
ports:
- name: redis
containerPort: {{ default "6379" .Values.redis.port }}
{{- if .Values.haproxy.readOnly.enabled }}
- name: readonlyport
containerPort: {{ default "6380" .Values.haproxy.readOnly.port }}
{{- end }}
{{- if .Values.haproxy.metrics.enabled }}
- name: metrics-port
containerPort: {{ default "9101" .Values.haproxy.metrics.port }}
{{- end }}
resources:
{{ toYaml .Values.haproxy.resources | indent 10 }}
volumeMounts:
- name: data
mountPath: /usr/local/etc/haproxy
- name: shared-socket
mountPath: /run/haproxy
{{- if .Values.haproxy.priorityClassName }}
priorityClassName: {{ .Values.haproxy.priorityClassName }}
{{- end }}
volumes:
- name: config-volume
configMap:
name: {{ template "redis-ha.fullname" . }}-configmap
- name: shared-socket
emptyDir:
{{ toYaml .Values.haproxy.emptyDir | indent 10 }}
- name: data
emptyDir:
{{ toYaml .Values.haproxy.emptyDir | indent 10 }}
{{- end }}


@ -0,0 +1,42 @@
{{- if .Values.haproxy.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "redis-ha.fullname" . }}-haproxy
namespace: {{ .Release.Namespace }}
labels:
{{ include "labels.standard" . | indent 4 }}
component: {{ template "redis-ha.fullname" . }}-haproxy
annotations:
{{- if .Values.haproxy.service.annotations }}
{{ toYaml .Values.haproxy.service.annotations | indent 4 }}
{{- end }}
spec:
type: {{ default "ClusterIP" .Values.haproxy.service.type }}
{{- if and (eq .Values.haproxy.service.type "LoadBalancer") .Values.haproxy.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.haproxy.service.loadBalancerIP }}
{{- end }}
ports:
- name: haproxy
port: {{ .Values.redis.port }}
protocol: TCP
targetPort: redis
{{- if and (eq .Values.haproxy.service.type "NodePort") .Values.haproxy.service.nodePort }}
nodePort: {{ .Values.haproxy.service.nodePort }}
{{- end }}
{{- if .Values.haproxy.readOnly.enabled }}
- name: haproxyreadonly
port: {{ .Values.haproxy.readOnly.port }}
protocol: TCP
targetPort: {{ .Values.haproxy.readOnly.port }}
{{- end }}
{{- if .Values.haproxy.metrics.enabled }}
- name: {{ .Values.haproxy.metrics.portName }}
port: {{ .Values.haproxy.metrics.port }}
protocol: TCP
targetPort: metrics-port
{{- end }}
selector:
release: {{ .Release.Name }}
app: {{ template "redis-ha.name" . }}-haproxy
{{- end }}
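
With `haproxy.enabled: true` this Service always fronts the current Redis master on the plain Redis port, so clients do not need Sentinel-aware drivers. A quick connectivity check, as a sketch only: the namespace `dependency` and the service name `redis-redis-ha-haproxy` are assumptions based on the release name `redis` used by the start.sh example later in this commit.

``` bash
# run a throwaway redis-cli pod and ping the master through the HAProxy service
kubectl run redis-ping -n dependency --rm -it --restart=Never --image=redis:5.0.6-alpine -- \
  redis-cli -h redis-redis-ha-haproxy -p 6379 ping
```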


@ -0,0 +1,12 @@
{{- if and .Values.haproxy.serviceAccount.create .Values.haproxy.enabled }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "redis-ha.serviceAccountName" . }}-haproxy
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app: {{ template "redis-ha.fullname" . }}
{{- end }}


@ -0,0 +1,34 @@
{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) ( .Values.haproxy.metrics.serviceMonitor.enabled ) ( .Values.haproxy.metrics.enabled ) }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
{{- with .Values.haproxy.metrics.serviceMonitor.labels }}
labels: {{ toYaml . | nindent 4}}
{{- end }}
name: {{ template "redis-ha.fullname" . }}-haproxy
namespace: {{ .Release.Namespace }}
{{- if .Values.haproxy.metrics.serviceMonitor.namespace }}
namespace: {{ .Values.haproxy.metrics.serviceMonitor.namespace }}
{{- end }}
spec:
endpoints:
- targetPort: {{ .Values.haproxy.metrics.port }}
{{- if .Values.haproxy.metrics.serviceMonitor.interval }}
interval: {{ .Values.haproxy.metrics.serviceMonitor.interval }}
{{- end }}
{{- if .Values.haproxy.metrics.serviceMonitor.telemetryPath }}
path: {{ .Values.haproxy.metrics.serviceMonitor.telemetryPath }}
{{- end }}
{{- if .Values.haproxy.metrics.serviceMonitor.timeout }}
scrapeTimeout: {{ .Values.haproxy.metrics.serviceMonitor.timeout }}
{{- end }}
jobLabel: {{ template "redis-ha.fullname" . }}-haproxy
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
component: {{ template "redis-ha.fullname" . }}-haproxy
{{- end }}
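
The ServiceMonitor above is rendered only when `haproxy.enabled`, `haproxy.metrics.enabled` and `haproxy.metrics.serviceMonitor.enabled` are all true and the cluster exposes the `monitoring.coreos.com/v1` API (Prometheus Operator installed). A sketch of checking for that API and flipping the switches on an existing release; the release name, namespace and chart path follow the start.sh example later in this commit.

``` bash
# the ServiceMonitor is only templated if this API group is present
kubectl api-versions | grep monitoring.coreos.com/v1

# enable HAProxy metrics plus the ServiceMonitor on an existing release
helm upgrade redis ./redis-ha -n dependency -f ./values.yaml \
  --set haproxy.enabled=true,haproxy.metrics.enabled=true,haproxy.metrics.serviceMonitor.enabled=true
```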


@ -17,20 +17,11 @@ spec:
- name: config
mountPath: /readonly-config
readOnly: true
- name: check-probes
image: koalaman/shellcheck:v0.5.0
args:
- --shell=sh
- /probes/check-quorum.sh
volumeMounts:
- name: probes
mountPath: /probes
readOnly: true
{{- if .Values.imagePullSecrets }}
imagePullSecrets: {{ toYaml .Values.imagePullSecrets | nindent 4 }}
{{- end }}
restartPolicy: Never
volumes:
- name: config
configMap:
name: {{ template "redis-ha.fullname" . }}-configmap
- name: probes
configMap:
name: {{ template "redis-ha.fullname" . }}-probes
restartPolicy: Never


@ -14,4 +14,7 @@ spec:
- sh
- -c
- redis-cli -h {{ template "redis-ha.fullname" . }} -p {{ .Values.redis.port }} info server
{{- if .Values.imagePullSecrets }}
imagePullSecrets: {{ toYaml .Values.imagePullSecrets | nindent 4 }}
{{- end }}
restartPolicy: Never


@ -3,20 +3,155 @@
##
image:
repository: redis
tag: 5.0.3-alpine
tag: 5.0.6-alpine
pullPolicy: IfNotPresent
## Reference to one or more secrets to be used when pulling images
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## This imagePullSecrets is only for redis images
##
imagePullSecrets: []
# - name: "image-pull-secret"
## replicas number for each component
replicas: 3
## Kubernetes priorityClass name for the redis-ha-server pod
# priorityClassName: ""
## Custom labels for the redis pod
labels: {}
## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
## Specifies whether a ServiceAccount should be created
##
create: true
## The name of the ServiceAccount to use.
## If not set and create is true, a name is generated using the redis-ha.fullname template
# name:
## Enables an HAProxy deployment for better LoadBalancing / Sentinel Master support. Automatically proxies to the Redis master.
## Recommended for externally exposed Redis clusters.
## ref: https://cbonte.github.io/haproxy-dconv/1.9/intro.html
haproxy:
enabled: false
# Enable if you want a dedicated port in haproxy for redis-slaves
readOnly:
enabled: false
port: 6380
replicas: 3
image:
repository: haproxy
tag: 2.0.4
pullPolicy: IfNotPresent
## Reference to one or more secrets to be used when pulling images
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
imagePullSecrets: []
# - name: "image-pull-secret"
annotations: {}
resources: {}
emptyDir: {}
## Enable sticky sessions to Redis nodes via HAProxy
## Very useful for long-lived connections, as in the case of Sentry for example
stickyBalancing: false
## Kubernetes priorityClass name for the haproxy pod
# priorityClassName: ""
## Service type for HAProxy
##
service:
type: ClusterIP
loadBalancerIP:
annotations: {}
serviceAccount:
create: true
## Official HAProxy embedded prometheus metrics settings.
## Ref: https://github.com/haproxy/haproxy/tree/master/contrib/prometheus-exporter
##
metrics:
enabled: false
# prometheus port & scrape path
port: 9101
portName: exporter-port
scrapePath: /metrics
serviceMonitor:
# When set true then use a ServiceMonitor to configure scraping
enabled: false
# Set the namespace the ServiceMonitor should be deployed in
# namespace: monitoring
# Set how frequently Prometheus should scrape
# interval: 30s
# Set path to redis-exporter telemetry-path
# telemetryPath: /metrics
# Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator
# labels: {}
# Set timeout for scrape
# timeout: 10s
init:
resources: {}
timeout:
connect: 4s
server: 30s
client: 30s
check: 2s
securityContext:
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
## Whether the haproxy pods should be forced to run on separate nodes.
hardAntiAffinity: true
## Additional affinities to add to the haproxy pods.
additionalAffinities: {}
## Override all other affinity settings for the haproxy pods with a string.
affinity: |
## Custom config-haproxy.cfg files used to override default settings. If this file is
## specified then the config-haproxy.cfg above will be ignored.
# customConfig: |-
# Define configuration here
## Place any additional configuration section to add to the default config-haproxy.cfg
# extraConfig: |-
# Define configuration here
## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
create: true
sysctlImage:
enabled: false
command: []
registry: docker.io
repository: busybox
tag: 1.31.1
pullPolicy: Always
mountHostSys: false
resources: {}
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Redis specific configuration options
redis:
port: 6379
masterGroupName: mymaster
masterGroupName: "mymaster" # must match ^[\\w-\\.]+$) and can be templated
config:
## Additional redis conf options can be added below
## For all available options see http://download.redis.io/redis-stable/redis.conf
min-slaves-to-write: 1
min-slaves-max-lag: 5 # Value in seconds
min-replicas-to-write: 1
min-replicas-max-lag: 5 # Value in seconds
maxmemory: "0" # Max memory to use for each redis instance. Default is unlimited.
maxmemory-policy: "volatile-lru" # Max memory policy to use for each redis instance. Default is volatile-lru.
# Determines if scheduled RDB backups are created. Default is false.
@ -27,6 +162,7 @@ redis:
rdbcompression: "yes"
rdbchecksum: "yes"
## Custom redis.conf files used to override default settings. If this file is
## specified then the redis.config above will be ignored.
# customConfig: |-
@ -46,12 +182,13 @@ sentinel:
config:
## Additional sentinel conf options can be added below. Only options that
## are expressed in the format similar to 'sentinel xxx mymaster xxx' will
## be properly templated.
## be properly templated except the maxclients option.
## For available options see http://download.redis.io/redis-stable/sentinel.conf
down-after-milliseconds: 10000
## Failover timeout value in milliseconds
failover-timeout: 180000
parallel-syncs: 5
maxclients: 10000
## Custom sentinel.conf files used to override default settings. If this file is
## specified then the sentinel.config above will be ignored.
@ -74,22 +211,108 @@ securityContext:
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
nodeSelector: {}
## Whether the Redis server pods should be forced to run on separate nodes.
## This is accomplished by setting their AntiAffinity with requiredDuringSchedulingIgnoredDuringExecution as opposed to preferred.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature
##
hardAntiAffinity: true
## Additional affinities to add to the Redis server pods.
##
## Example:
## nodeAffinity:
## preferredDuringSchedulingIgnoredDuringExecution:
## - weight: 50
## preference:
## matchExpressions:
## - key: spot
## operator: NotIn
## values:
## - "true"
##
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
additionalAffinities: {}
## Override all other affinity settings for the Redis server pods with a string.
##
## Example:
## affinity: |
## podAntiAffinity:
## requiredDuringSchedulingIgnoredDuringExecution:
## - labelSelector:
## matchLabels:
## app: {{ template "redis-ha.name" . }}
## release: {{ .Release.Name }}
## topologyKey: kubernetes.io/hostname
## preferredDuringSchedulingIgnoredDuringExecution:
## - weight: 100
## podAffinityTerm:
## labelSelector:
## matchLabels:
## app: {{ template "redis-ha.name" . }}
## release: {{ .Release.Name }}
## topologyKey: failure-domain.beta.kubernetes.io/zone
##
affinity: |
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
topologyKey: kubernetes.io/hostname
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: {{ template "redis-ha.name" . }}
release: {{ .Release.Name }}
topologyKey: failure-domain.beta.kubernetes.io/zone
# Prometheus exporter specific configuration options
exporter:
enabled: false
image: oliver006/redis_exporter
tag: v1.3.2
pullPolicy: IfNotPresent
# prometheus port & scrape path
port: 9121
scrapePath: /metrics
# cpu/memory resource limits/requests
resources: {}
# Additional args for redis exporter
extraArgs: {}
# Used to mount a LUA-Script via config map and use it for metrics-collection
# script: |
# -- Example script copied from: https://github.com/oliver006/redis_exporter/blob/master/contrib/sample_collect_script.lua
# -- Example collect script for -script option
# -- This returns a Lua table with alternating keys and values.
# -- Both keys and values must be strings, similar to a HGETALL result.
# -- More info about Redis Lua scripting: https://redis.io/commands/eval
#
# local result = {}
#
# -- Add all keys and values from some hash in db 5
# redis.call("SELECT", 5)
# local r = redis.call("HGETALL", "some-hash-with-stats")
# if r ~= nil then
# for _,v in ipairs(r) do
# table.insert(result, v) -- alternating keys and values
# end
# end
#
# -- Set foo to 42
# table.insert(result, "foo")
# table.insert(result, "42") -- note the string, use tostring() if needed
#
# return result
serviceMonitor:
# When set true then use a ServiceMonitor to configure scraping
enabled: false
# Set the namespace the ServiceMonitor should be deployed in
# namespace: monitoring
# Set how frequently Prometheus should scrape
# interval: 30s
# Set path to redis-exporter telemetry-path
# telemetryPath: /metrics
# Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator
# labels: {}
# Set timeout for scrape
# timeout: 10s
podDisruptionBudget: {}
# maxUnavailable: 1
@ -99,9 +322,12 @@ podDisruptionBudget: {}
auth: false
# redisPassword:
## Use existing secret containing "auth" key (ignores redisPassword)
## Use existing secret containing key `authKey` (ignores redisPassword)
# existingSecret:
## Defines the key holding the redis password in existing secret.
authKey: auth
persistentVolume:
enabled: true
## redis-ha data Persistent Volume Storage Class
@ -116,5 +342,21 @@ persistentVolume:
- ReadWriteOnce
size: 10Gi
annotations: {}
# reclaimPolicy per https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
reclaimPolicy: ""
init:
resources: {}
# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
## path is evaluated as template so placeholders are replaced
# path: "/data/{{ .Release.Name }}"
# if chown is true, an init-container with root permissions is launched to
# change the owner of the hostPath folder to the user defined in the
# security context
chown: true
emptyDir: {}


@ -0,0 +1,11 @@
#!/bin/sh
set -x
ROOT=$(cd `dirname $0`; pwd)
cd $ROOT
helm install redis \
--create-namespace \
--namespace dependency \
-f ./values.yaml \
./redis-ha
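
After start.sh completes, the release can be verified before putting it to use. A minimal sketch; the label selector and the service name `redis-redis-ha` are assumptions based on the chart's default naming for a release called `redis`.

``` bash
# check the release and wait for the redis-ha pods to become Ready
helm status redis -n dependency
kubectl get pods -n dependency -l app=redis-ha,release=redis

# ask Sentinel which pod currently holds the master role
kubectl run redis-client -n dependency --rm -it --restart=Never --image=redis:5.0.6-alpine -- \
  redis-cli -h redis-redis-ha -p 26379 sentinel get-master-addr-by-name mymaster
```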


@ -0,0 +1,49 @@
image:
repository: redis
tag: 5.0.6-alpine
replicas: 2
## Redis specific configuration options
redis:
port: 6379
masterGroupName: "mymaster" # must match ^[\\w-\\.]+$) and can be templated
config:
## For all available options see http://download.redis.io/redis-stable/redis.conf
min-replicas-to-write: 1
min-replicas-max-lag: 5 # Value in seconds
maxmemory: "4g" # Max memory to use for each redis instance. Default is unlimited.
maxmemory-policy: "allkeys-lru" # Max memory policy to use for each redis instance. Default is volatile-lru.
repl-diskless-sync: "yes"
rdbcompression: "yes"
rdbchecksum: "yes"
resources:
requests:
memory: 200Mi
cpu: 100m
limits:
memory: 4000Mi
## Sentinel specific configuration options
sentinel:
port: 26379
quorum: 1
resources:
requests:
memory: 200Mi
cpu: 100m
limits:
memory: 200Mi
hardAntiAffinity: true
## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
persistentVolume:
enabled: false
hostPath:
path: "/data/mcs-redis/{{ .Release.Name }}"


@ -15,7 +15,7 @@ set -o errexit
# default version, can be overridden by cmd line options
export DOCKER_VER=19.03.8
export KUBEASZ_VER=2.2.0
export KUBEASZ_VER=2.2.1
export K8S_BIN_VER=v1.18.3
export EXT_BIN_VER=0.5.2
export SYS_PKG_VER=0.3.3