Deploying Stateful Applications with StatefulSet

Jimmy Song, 2017-10-23 13:39:57 +08:00

The commit registers the new chapter in the book's table of contents:

- [3.5 Developing and deploying applications on Kubernetes](guide/application-development-deployment-flow.md)
- [3.5.1 An application development and deployment workflow for Kubernetes](guide/deploy-applications-in-kubernetes.md)
- [3.5.2 Migrating traditional applications to Kubernetes: Hadoop YARN as an example](guide/migrating-hadoop-yarn-to-kubernetes.md)
- [3.5.3 Deploying stateful applications with StatefulSet](guide/using-statefulset.md)
- [4. Best practices](practice/index.md)
- [4.1 Deploying a Kubernetes 1.6 cluster on CentOS](practice/install-kbernetes1.6-on-centos.md)
- [4.1.1 Creating TLS certificates and keys](practice/create-tls-and-secret-key.md)

# Deploying Stateful Applications with StatefulSet
[StatefulSet](../concepts/statefulset.md) is the workload object designed for deploying stateful applications: it gives each Pod a stable identity, including its hostname, startup order, and DNS name.

The following walks through deploying ZooKeeper and Kafka on Kubernetes 1.6 with StatefulSets, where Kafka depends on ZooKeeper.

The Dockerfiles and configuration files can be found under [zookeeper](https://github.com/rootsongjc/kubernetes-handbook/blob/master/manifests/zookeeper) and [kafka](https://github.com/rootsongjc/kubernetes-handbook/blob/master/manifests/kafaka).

**Note:** all images are built on a CentOS-based JDK image and live in my private registry, so they are not accessible from outside.
## Deploying ZooKeeper
The Dockerfile fetches the ZooKeeper distribution from a remote server and then defines three scripts:

- zkGenConfig.sh: generates the ZooKeeper configuration files (zoo.cfg, log4j.properties, and java.env)
- zkMetrics.sh: collects ZooKeeper metrics
- zkOk.sh: used as the readinessProbe (and livenessProbe)
Let's look at what each of these scripts produces.

zkGenConfig.sh derives the instance's ID from the Pod hostname's ordinal and writes the configuration files listed above; the full script appears later in this commit, and a sketch of the zoo.cfg it generates follows.
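Based on the `create_config` function in zkGenConfig.sh, with the ConfigMap values from zookeeper.yaml below and `ZK_REPLICAS=3`, the generated zoo.cfg would look roughly like this (the `default` namespace in the server addresses is an assumption, and the session timeouts come from the script's defaults of 2x and 20x the tick time):

```bash
$ kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
#This file was autogenerated by k8szk DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
server.1=zk-0.zk-svc.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-svc.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-svc.default.svc.cluster.local:2888:3888
```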
What zkMetrics.sh actually runs is the following command (shown with sample output):
```bash
$ echo mntr | nc localhost $ZK_CLIENT_PORT >& 1
zk_version 3.4.6-1569965, built on 02/20/2014 09:09 GMT
zk_avg_latency 0
zk_max_latency 5
zk_min_latency 0
zk_packets_received 427879
zk_packets_sent 427890
zk_num_alive_connections 3
zk_outstanding_requests 0
zk_server_state leader
zk_znode_count 18
zk_watch_count 3
zk_ephemerals_count 4
zk_approximate_data_size 613
zk_open_file_descriptor_count 29
zk_max_file_descriptor_count 1048576
zk_followers 1
zk_synced_followers 1
zk_pending_syncs 0
```
What zkOk.sh actually runs is the following command:
```bash
$ echo ruok | nc 127.0.0.1 $ZK_CLIENT_PORT
imok
```
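Once the ensemble is up, zkMetrics.sh also gives a quick way to check each member's role. This is a hedged sketch that assumes the Pod names zk-0 through zk-2 created by the StatefulSet below; which member reports `leader` varies with the election:

```bash
# One member reports leader, the other two follower.
$ for i in 0 1 2; do kubectl exec zk-$i -- zkMetrics.sh | grep zk_server_state; done
zk_server_state leader
zk_server_state follower
zk_server_state follower
```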
**zookeeper.yaml**

The following manifest starts three ZooKeeper instances:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: zk-svc
  labels:
    app: zk-svc
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-cm
data:
  jvm.heap: "1G"
  tick: "2000"
  init: "10"
  sync: "5"
  client.cnxns: "60"
  snap.retain: "3"
  purge.interval: "0"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-svc
  replicas: 3
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: k8szk
        imagePullPolicy: Always
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/zookeeper:3.4.6
        resources:
          requests:
            memory: "2Gi"
            cpu: "500m"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        env:
        - name: ZK_REPLICAS
          value: "3"
        - name: ZK_HEAP_SIZE
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: jvm.heap
        - name: ZK_TICK_TIME
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: tick
        - name: ZK_INIT_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: init
        - name: ZK_SYNC_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: sync
        - name: ZK_MAX_CLIENT_CNXNS
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: client.cnxns
        - name: ZK_SNAP_RETAIN_COUNT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: snap.retain
        - name: ZK_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: purge.interval
        - name: ZK_CLIENT_PORT
          value: "2181"
        - name: ZK_SERVER_PORT
          value: "2888"
        - name: ZK_ELECTION_PORT
          value: "3888"
        command:
        - sh
        - -c
        - zkGenConfig.sh && zkServer.sh start-foreground
        readinessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 10
          timeoutSeconds: 5
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
```
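As a hedged walkthrough of what StatefulSet ordering and stable identity look like in practice (assuming the manifest above is saved as zookeeper.yaml and kubectl targets the right cluster and namespace):

```bash
$ kubectl apply -f zookeeper.yaml

# Pods are created in order with stable names zk-0, zk-1, zk-2; through the
# headless Service each is resolvable as zk-<ordinal>.zk-svc.<namespace>.svc.
$ kubectl get pods -w -l app=zk

# Each replica's myid is its ordinal + 1, written by zkGenConfig.sh
# to $ZK_DATA_DIR/myid.
$ for i in 0 1 2; do kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
1
2
3
```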
Notice how the three scripts above are wired into this manifest: zkGenConfig.sh runs as part of the container's startup command, zkOk.sh backs both the readiness and the liveness probe, and zkMetrics.sh is left for manual inspection.
## Deploying Kafka

The Kafka image is built much like the ZooKeeper one: the release tarball is downloaded from a remote server and unpacked.

Unlike ZooKeeper, only one script is needed, kafkaGenConfig.sh, which generates Kafka's configuration file; it does, however, depend on the ZooKeeper ensemble we deployed in the previous step.

Let's look at the script:
```bash
#!/bin/bash
HOST=`hostname -s`
if [[ $HOST =~ (.*)-([0-9]+)$ ]]; then
  NAME=${BASH_REMATCH[1]}
  ORD=${BASH_REMATCH[2]}
else
  echo "Failed to extract ordinal from hostname $HOST"
  exit 1
fi
MY_ID=$((ORD+1))
sed -i s"/broker.id=0/broker.id=$MY_ID/g" /opt/kafka/config/server.properties
sed -i s'/zookeeper.connect=localhost:2181/zookeeper.connect=zk-0.zk-svc.brand.svc:2181,zk-1.zk-svc.brand.svc:2181,zk-2.zk-svc.brand.svc:2181/g' /opt/kafka/config/server.properties
```
The script extracts the trailing number (the StatefulSet ordinal) from the Pod's hostname, sets broker.id to ordinal + 1, and rewrites the ZooKeeper connection string to point at the three zk Pods (note that the addresses hardcode the brand namespace), as illustrated below.
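A hedged illustration of the script's effect on the Pod kafka-1 (ordinal 1, so broker.id becomes 2); the grep targets follow the two sed commands above:

```bash
$ kubectl exec kafka-1 -- grep -E "^(broker.id|zookeeper.connect)=" /opt/kafka/config/server.properties
broker.id=2
zookeeper.connect=zk-0.zk-svc.brand.svc:2181,zk-1.zk-svc.brand.svc:2181,zk-2.zk-svc.brand.svc:2181
```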
**kafka.yaml**

The following manifest creates three Kafka instances:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  labels:
    app: kafka
spec:
  ports:
  - port: 9093
    name: server
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-svc
  replicas: 3
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - kafka
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - zk
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
      - name: k8skafka
        imagePullPolicy: Always
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/kafka:2.10-0.8.2.1
        resources:
          requests:
            memory: "1Gi"
            cpu: 500m
        env:
        - name: KF_REPLICAS
          value: "3"
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx512M -Xms512M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=DEBUG"
        ports:
        - containerPort: 9093
          name: server
        command:
        - /bin/bash
        - -c
        - "/opt/kafka/bin/kafkaGenConfig.sh && /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties"
        readinessProbe:
          tcpSocket:
            port: 9092
          initialDelaySeconds: 15
          timeoutSeconds: 1
```
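After the three brokers are Running, a hedged smoke test might create and describe a topic with the CLI tools shipped in the image (the paths and the brand namespace follow the manifests above; the topic name is an assumption):

```bash
# Create a replicated topic, then confirm each broker holds a replica.
$ kubectl exec kafka-0 -- /opt/kafka/bin/kafka-topics.sh --create \
    --topic test --partitions 1 --replication-factor 3 \
    --zookeeper zk-0.zk-svc.brand.svc:2181
$ kubectl exec kafka-0 -- /opt/kafka/bin/kafka-topics.sh --describe \
    --topic test --zookeeper zk-0.zk-svc.brand.svc:2181
```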
## References

- [StatefulSets - Kubernetes documentation](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
- [kubernetes contrib - statefulsets](https://github.com/kubernetes/contrib/tree/master/statefulsets)

The remaining files added in this commit are shown below.

**Kafka Dockerfile**
FROM sz-pg-oam-docker-hub-001.tendcloud.com/library/jdk:8u45
ENV KAFKA_USER=kafka \
    KAFKA_DATA_DIR=/var/lib/kafka/data \
    JAVA_HOME=/usr/local/java \
    KAFKA_HOME=/opt/kafka \
    PATH=$PATH:/opt/kafka/bin
ARG KAFKA_DIST=kafka_2.10-0.8.2.1
RUN set -x \
    && yum install -y wget tar \
    && wget -q "http://repo.tendcloud.com/td-configuration/deploy/kafka/$KAFKA_DIST.tgz" \
    && export GNUPGHOME="$(mktemp -d)" \
    && tar -xzf "$KAFKA_DIST.tgz" -C /opt \
    && rm -r "$GNUPGHOME" "$KAFKA_DIST.tgz"
COPY log4j.properties /opt/$KAFKA_DIST/config/
RUN set -x \
    && ln -s /opt/$KAFKA_DIST $KAFKA_HOME \
    && useradd $KAFKA_USER \
    && [ `id -u $KAFKA_USER` -eq 1000 ] \
    && [ `id -g $KAFKA_USER` -eq 1000 ] \
    && mkdir -p $KAFKA_DATA_DIR \
    && chown -R "$KAFKA_USER:$KAFKA_USER" /opt/$KAFKA_DIST \
    && chown -R "$KAFKA_USER:$KAFKA_USER" $KAFKA_DATA_DIR
COPY kafkaGenConfig.sh /opt/$KAFKA_DIST/bin
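A hedged build-and-push sketch for this image; the tag matches the one referenced in kafka.yaml, and the ZooKeeper image later in the commit is built the same way:

```bash
# Run from the directory containing the Dockerfile, kafkaGenConfig.sh,
# and log4j.properties (the COPY sources).
$ docker build -t sz-pg-oam-docker-hub-001.tendcloud.com/library/kafka:2.10-0.8.2.1 .
$ docker push sz-pg-oam-docker-hub-001.tendcloud.com/library/kafka:2.10-0.8.2.1
```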



**log4j.properties**

Kafka's logging configuration; the `${logging.level}` placeholder on the rootLogger is supplied at runtime via the `KAFKA_OPTS="-Dlogging.level=DEBUG"` environment variable set in kafka.yaml.
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
log4j.rootLogger=${logging.level}, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
# Turn on all our debugging info
#log4j.logger.kafka.producer.async.DefaultEventHandler=DEBUG, kafkaAppender
#log4j.logger.kafka.client.ClientUtils=DEBUG, kafkaAppender
#log4j.logger.kafka.perf=DEBUG, kafkaAppender
#log4j.logger.kafka.perf.ProducerPerformance$ProducerThread=DEBUG, kafkaAppender
#log4j.logger.org.I0Itec.zkclient.ZkClient=DEBUG
#log4j.logger.kafka=INFO, stdout
log4j.logger.kafka.network.RequestChannel$=WARN, stdout
log4j.additivity.kafka.network.RequestChannel$=false
#log4j.logger.kafka.network.Processor=TRACE, requestAppender
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
#log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.request.logger=WARN, stdout
log4j.additivity.kafka.request.logger=false
log4j.logger.kafka.controller=TRACE, stdout
log4j.additivity.kafka.controller=false
log4j.logger.kafka.log.LogCleaner=INFO, stdout
log4j.additivity.kafka.log.LogCleaner=false
log4j.logger.state.change.logger=TRACE, stdout
log4j.additivity.state.change.logger=false
#Change this to debug to get the actual audit log for authorizer.
log4j.logger.kafka.authorizer.logger=WARN, stdout
log4j.additivity.kafka.authorizer.logger=false

**server.properties**

The stock broker configuration baked into the image; at startup, kafkaGenConfig.sh rewrites broker.id and zookeeper.connect in this file.
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
############################# Socket Server Settings #############################
# The port the socket server listens on
port=9092
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
#host.name=localhost
# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>
# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>
# The number of threads handling network requests
num.network.threads=3
# The number of threads doing disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

**ZooKeeper Dockerfile**
FROM sz-pg-oam-docker-hub-001.tendcloud.com/library/jdk:7u80
ENV ZK_USER=zookeeper \
    ZK_DATA_DIR=/var/lib/zookeeper/data \
    ZK_DATA_LOG_DIR=/var/lib/zookeeper/log \
    ZK_LOG_DIR=/var/log/zookeeper \
    JAVA_HOME=/usr/local/java
ARG ZK_DIST=zookeeper-3.4.6
RUN set -x \
    && yum install -y wget nc \
    && wget -q "http://repo.tendcloud.com/td-configuration/deploy/zookeeper/$ZK_DIST.tar.gz" \
    && export GNUPGHOME="$(mktemp -d)" \
    && tar -xzf "$ZK_DIST.tar.gz" -C /opt \
    && rm -r "$GNUPGHOME" "$ZK_DIST.tar.gz" \
    && ln -s /opt/$ZK_DIST /opt/zookeeper \
    && rm -rf /opt/zookeeper/CHANGES.txt \
        /opt/zookeeper/README.txt \
        /opt/zookeeper/NOTICE.txt \
        /opt/zookeeper/README_packaging.txt \
        /opt/zookeeper/build.xml \
        /opt/zookeeper/config \
        /opt/zookeeper/contrib \
        /opt/zookeeper/dist-maven \
        /opt/zookeeper/docs \
        /opt/zookeeper/ivy.xml \
        /opt/zookeeper/ivysettings.xml \
        /opt/zookeeper/recipes \
        /opt/zookeeper/src \
        /opt/zookeeper/$ZK_DIST.jar.asc \
        /opt/zookeeper/$ZK_DIST.jar.md5 \
        /opt/zookeeper/$ZK_DIST.jar.sha1 \
    && rm -rf /var/lib/apt/lists/*
# Copy configuration generator script to bin
COPY zkGenConfig.sh zkOk.sh zkMetrics.sh /opt/zookeeper/bin/
# Create a user for the zookeeper process and configure file system ownership
# for necessary directories and symlink the distribution as a user executable
RUN set -x \
    && useradd $ZK_USER \
    && [ `id -u $ZK_USER` -eq 1000 ] \
    && [ `id -g $ZK_USER` -eq 1000 ] \
    && mkdir -p $ZK_DATA_DIR $ZK_DATA_LOG_DIR $ZK_LOG_DIR /usr/share/zookeeper /tmp/zookeeper /usr/etc/ \
    && chown -R "$ZK_USER:$ZK_USER" /opt/$ZK_DIST $ZK_DATA_DIR $ZK_LOG_DIR $ZK_DATA_LOG_DIR /tmp/zookeeper \
    && ln -s /opt/zookeeper/conf/ /usr/etc/zookeeper \
    && ln -s /opt/zookeeper/bin/* /usr/bin \
    && ln -s /opt/zookeeper/$ZK_DIST.jar /usr/share/zookeeper/ \
    && ln -s /opt/zookeeper/lib/* /usr/share/zookeeper

**zkGenConfig.sh**
#!/usr/bin/env bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ZK_USER=${ZK_USER:-"zookeeper"}
ZK_LOG_LEVEL=${ZK_LOG_LEVEL:-"INFO"}
ZK_DATA_DIR=${ZK_DATA_DIR:-"/var/lib/zookeeper/data"}
ZK_DATA_LOG_DIR=${ZK_DATA_LOG_DIR:-"/var/lib/zookeeper/log"}
ZK_LOG_DIR=${ZK_LOG_DIR:-"/var/log/zookeeper"}
ZK_CONF_DIR=${ZK_CONF_DIR:-"/opt/zookeeper/conf"}
ZK_CLIENT_PORT=${ZK_CLIENT_PORT:-2181}
ZK_SERVER_PORT=${ZK_SERVER_PORT:-2888}
ZK_ELECTION_PORT=${ZK_ELECTION_PORT:-3888}
ZK_TICK_TIME=${ZK_TICK_TIME:-2000}
ZK_INIT_LIMIT=${ZK_INIT_LIMIT:-10}
ZK_SYNC_LIMIT=${ZK_SYNC_LIMIT:-5}
ZK_HEAP_SIZE=${ZK_HEAP_SIZE:-2G}
ZK_MAX_CLIENT_CNXNS=${ZK_MAX_CLIENT_CNXNS:-60}
ZK_MIN_SESSION_TIMEOUT=${ZK_MIN_SESSION_TIMEOUT:- $((ZK_TICK_TIME*2))}
ZK_MAX_SESSION_TIMEOUT=${ZK_MAX_SESSION_TIMEOUT:- $((ZK_TICK_TIME*20))}
ZK_SNAP_RETAIN_COUNT=${ZK_SNAP_RETAIN_COUNT:-3}
ZK_PURGE_INTERVAL=${ZK_PURGE_INTERVAL:-0}
ID_FILE="$ZK_DATA_DIR/myid"
ZK_CONFIG_FILE="$ZK_CONF_DIR/zoo.cfg"
LOGGER_PROPS_FILE="$ZK_CONF_DIR/log4j.properties"
JAVA_ENV_FILE="$ZK_CONF_DIR/java.env"
HOST=`hostname -s`
DOMAIN=`hostname -d`
ZK_REPLICAS=3
function print_servers() {
    for (( i=1; i<=$ZK_REPLICAS; i++ ))
    do
        echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT"
    done
}

function validate_env() {
    echo "Validating environment"
    if [ -z $ZK_REPLICAS ]; then
        echo "ZK_REPLICAS is a mandatory environment variable"
        exit 1
    fi
    if [[ $HOST =~ (.*)-([0-9]+)$ ]]; then
        NAME=${BASH_REMATCH[1]}
        ORD=${BASH_REMATCH[2]}
    else
        echo "Failed to extract ordinal from hostname $HOST"
        exit 1
    fi
    MY_ID=$((ORD+1))
    echo "ZK_REPLICAS=$ZK_REPLICAS"
    echo "MY_ID=$MY_ID"
    echo "ZK_LOG_LEVEL=$ZK_LOG_LEVEL"
    echo "ZK_DATA_DIR=$ZK_DATA_DIR"
    echo "ZK_DATA_LOG_DIR=$ZK_DATA_LOG_DIR"
    echo "ZK_LOG_DIR=$ZK_LOG_DIR"
    echo "ZK_CLIENT_PORT=$ZK_CLIENT_PORT"
    echo "ZK_SERVER_PORT=$ZK_SERVER_PORT"
    echo "ZK_ELECTION_PORT=$ZK_ELECTION_PORT"
    echo "ZK_TICK_TIME=$ZK_TICK_TIME"
    echo "ZK_INIT_LIMIT=$ZK_INIT_LIMIT"
    echo "ZK_SYNC_LIMIT=$ZK_SYNC_LIMIT"
    echo "ZK_MAX_CLIENT_CNXNS=$ZK_MAX_CLIENT_CNXNS"
    echo "ZK_MIN_SESSION_TIMEOUT=$ZK_MIN_SESSION_TIMEOUT"
    echo "ZK_MAX_SESSION_TIMEOUT=$ZK_MAX_SESSION_TIMEOUT"
    echo "ZK_HEAP_SIZE=$ZK_HEAP_SIZE"
    echo "ZK_SNAP_RETAIN_COUNT=$ZK_SNAP_RETAIN_COUNT"
    echo "ZK_PURGE_INTERVAL=$ZK_PURGE_INTERVAL"
    echo "ENSEMBLE"
    print_servers
    echo "Environment validation successful"
}

function create_config() {
    rm -f $ZK_CONFIG_FILE
    echo "Creating ZooKeeper configuration"
    echo "#This file was autogenerated by k8szk DO NOT EDIT" >> $ZK_CONFIG_FILE
    echo "clientPort=$ZK_CLIENT_PORT" >> $ZK_CONFIG_FILE
    echo "dataDir=$ZK_DATA_DIR" >> $ZK_CONFIG_FILE
    echo "dataLogDir=$ZK_DATA_LOG_DIR" >> $ZK_CONFIG_FILE
    echo "tickTime=$ZK_TICK_TIME" >> $ZK_CONFIG_FILE
    echo "initLimit=$ZK_INIT_LIMIT" >> $ZK_CONFIG_FILE
    echo "syncLimit=$ZK_SYNC_LIMIT" >> $ZK_CONFIG_FILE
    echo "maxClientCnxns=$ZK_MAX_CLIENT_CNXNS" >> $ZK_CONFIG_FILE
    echo "minSessionTimeout=$ZK_MIN_SESSION_TIMEOUT" >> $ZK_CONFIG_FILE
    echo "maxSessionTimeout=$ZK_MAX_SESSION_TIMEOUT" >> $ZK_CONFIG_FILE
    echo "autopurge.snapRetainCount=$ZK_SNAP_RETAIN_COUNT" >> $ZK_CONFIG_FILE
    echo "autopurge.purgeInterval=$ZK_PURGE_INTERVAL" >> $ZK_CONFIG_FILE
    if [ $ZK_REPLICAS -gt 1 ]; then
        print_servers >> $ZK_CONFIG_FILE
    fi
    echo "Wrote ZooKeeper configuration file to $ZK_CONFIG_FILE"
}

function create_data_dirs() {
    echo "Creating ZooKeeper data directories and setting permissions"
    if [ ! -d $ZK_DATA_DIR ]; then
        mkdir -p $ZK_DATA_DIR
        chown -R $ZK_USER:$ZK_USER $ZK_DATA_DIR
    fi
    if [ ! -d $ZK_DATA_LOG_DIR ]; then
        mkdir -p $ZK_DATA_LOG_DIR
        chown -R $ZK_USER:$ZK_USER $ZK_DATA_LOG_DIR
    fi
    if [ ! -d $ZK_LOG_DIR ]; then
        mkdir -p $ZK_LOG_DIR
        chown -R $ZK_USER:$ZK_USER $ZK_LOG_DIR
    fi
    if [ ! -f $ID_FILE ]; then
        echo $MY_ID >> $ID_FILE
    fi
    echo "Created ZooKeeper data directories and set permissions in $ZK_DATA_DIR"
}

function create_log_props() {
    rm -f $LOGGER_PROPS_FILE
    echo "Creating ZooKeeper log4j configuration"
    echo "zookeeper.root.logger=CONSOLE" >> $LOGGER_PROPS_FILE
    echo "zookeeper.console.threshold="$ZK_LOG_LEVEL >> $LOGGER_PROPS_FILE
    echo "log4j.rootLogger=\${zookeeper.root.logger}" >> $LOGGER_PROPS_FILE
    echo "log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender" >> $LOGGER_PROPS_FILE
    echo "log4j.appender.CONSOLE.Threshold=\${zookeeper.console.threshold}" >> $LOGGER_PROPS_FILE
    echo "log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout" >> $LOGGER_PROPS_FILE
    echo "log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n" >> $LOGGER_PROPS_FILE
    echo "Wrote log4j configuration to $LOGGER_PROPS_FILE"
}

function create_java_env() {
    rm -f $JAVA_ENV_FILE
    echo "Creating JVM configuration file"
    echo "ZOO_LOG_DIR=$ZK_LOG_DIR" >> $JAVA_ENV_FILE
    echo "JVMFLAGS=\"-Xmx$ZK_HEAP_SIZE -Xms$ZK_HEAP_SIZE\"" >> $JAVA_ENV_FILE
    echo "Wrote JVM configuration to $JAVA_ENV_FILE"
}

validate_env && create_config && create_log_props && create_data_dirs && create_java_env

**zkMetrics.sh**
#!/usr/bin/env bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ZK_CLIENT_PORT=${ZK_CLIENT_PORT:-2181}
echo mntr | nc localhost $ZK_CLIENT_PORT >& 1

**zkOk.sh**
#!/usr/bin/env bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# zkOk.sh uses the ruok ZooKeeper four letter word to determine if the instance
# is healthy. The $? variable will be set to 0 if the server responds that it is
# healthy, or 1 if the server fails to respond.
ZK_CLIENT_PORT=${ZK_CLIENT_PORT:-2181}
OK=$(echo ruok | nc 127.0.0.1 $ZK_CLIENT_PORT)
if [ "$OK" == "imok" ]; then
exit 0
else
exit 1
fi
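The probes rely solely on this exit code; a hedged manual check from outside the container could look like this:

```bash
# Exit code 0 means the instance answered "imok".
$ kubectl exec zk-0 -- zkOk.sh && echo healthy
healthy
```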

Finally, the commit updates the pre-existing zookeeper.yaml manifest to match the chapter above: the image is switched from gcr.io/google_samples/k8szk:v2 to sz-pg-oam-docker-hub-001.tendcloud.com/library/zookeeper:3.4.6, and the commented-out volumeMounts, securityContext, and volumeClaimTemplates stanzas are replaced with an active pod-level securityContext (runAsUser: 1000, fsGroup: 1000).