linkerd usage guide

pull/41/head
Jimmy Song 2017-08-01 21:20:16 +08:00
parent 5ea5d89a6c
commit d05a27a124
29 changed files with 2245 additions and 0 deletions

@@ -62,6 +62,7 @@
 - [5.1.1.1 Installing Istio](usecases/istio-installation.md)
 - [5.1.1.2 Configuring Request Routing Rules](usecases/configuring-request-routing.md)
 - [5.1.2 Linkerd](usecases/linkerd.md)
+- [5.1.2.1 Linkerd User Guide](usecases/linkerd-user-guide.md)
 - [5.1.3 Service Discovery in Microservices](usecases/service-discovery-in-microservices.md)
 - [5.2 Big Data](usecases/big-data.md)
 - [5.2.1 Spark on Kubernetes](usecases/spark-on-kubernetes.md)

7 binary image files added (the screenshots referenced in linkerd-user-guide.md); not shown.

@@ -0,0 +1,54 @@
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: api
spec:
  replicas: 3
  selector:
    app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: service
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-helloworld:0.1.4
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        args:
        - "-addr=:7779"
        - "-text=api"
        - "-target=hello"
        - "-json"
        ports:
        - name: service
          containerPort: 7779
      - name: kubectl
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-kubectl:v1.4.0
        args:
        - proxy
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
  - name: http
    port: 7779

@@ -0,0 +1,89 @@
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: service
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-helloworld:0.1.4
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        command:
        - "/bin/sh"
        - "-c"
        - "helloworld -addr=:7777 -text=Hello -target=$NODE_NAME:4140 -protocol=grpc"
        ports:
        - name: service
          containerPort: 7777
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  clusterIP: None
  ports:
  - name: grpc
    port: 7777
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: world-v1
spec:
  replicas: 3
  selector:
    app: world-v1
  template:
    metadata:
      labels:
        app: world-v1
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: service
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-helloworld:0.1.4
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: TARGET_WORLD
          value: world
        args:
        - "-addr=:7778"
        - "-protocol=grpc"
        ports:
        - name: service
          containerPort: 7778
---
apiVersion: v1
kind: Service
metadata:
  name: world-v1
spec:
  selector:
    app: world-v1
  clusterIP: None
  ports:
  - name: grpc
    port: 7778

@@ -0,0 +1,17 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: "linkerd"
spec:
  backend:
    serviceName: world-v1
    servicePort: http
  rules:
  - host: world.v2
    http:
      paths:
      - backend:
          serviceName: world-v2
          servicePort: http

@@ -0,0 +1,91 @@
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: service
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-helloworld:0.1.4
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        args:
        - "-addr=:7777"
        - "-text=Hello"
        - "-target=world"
        - "-latency=500ms"
        ports:
        - name: service
          containerPort: 7777
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  clusterIP: None
  ports:
  - name: http
    port: 7777
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: world-v1
spec:
  replicas: 1
  selector:
    app: world-v1
  template:
    metadata:
      labels:
        app: world-v1
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: service
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-helloworld:0.1.4
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: TARGET_WORLD
          value: world
        args:
        - "-addr=:7778"
        ports:
        - name: service
          containerPort: 7778
---
apiVersion: v1
kind: Service
metadata:
  name: world-v1
spec:
  selector:
    app: world-v1
  clusterIP: None
  ports:
  - name: http
    port: 7778

@@ -0,0 +1,88 @@
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: service
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-helloworld:0.1.4
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        args:
        - "-addr=:7777"
        - "-text=Hello"
        - "-target=world"
        ports:
        - name: service
          containerPort: 7777
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
  - name: http
    port: 7777
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: world-v1
spec:
  replicas: 3
  selector:
    app: world-v1
  template:
    metadata:
      labels:
        app: world-v1
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: service
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-helloworld:0.1.4
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: TARGET_WORLD
          value: world
        args:
        - "-addr=:7778"
        ports:
        - name: service
          containerPort: 7778
---
apiVersion: v1
kind: Service
metadata:
  name: world-v1
spec:
  selector:
    app: world-v1
  ports:
  - name: http
    port: 7778

@@ -0,0 +1,24 @@
# RBAC configs for jenkins
---
# allows the jenkins process to run the continuous deploy demo
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins-rc
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["services", "replicationcontrollers"]
  verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins-role-binding
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: jenkins-rc
  apiGroup: rbac.authorization.k8s.io

@@ -0,0 +1,33 @@
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: jenkins
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-jenkins-plus:2.60.1
        ports:
        - name: http
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  selector:
    app: jenkins
  ports:
  - name: http
    port: 80
    targetPort: 8080

@@ -0,0 +1,128 @@
# runs linkerd in a daemonset, in linker-to-linker mode, routing gRPC requests
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
    usage:
      orgId: linkerd-examples-daemonset-grpc
    routers:
    - protocol: h2
      label: outgoing
      experimental: true
      dtab: |
        /srv => /#/io.l5d.k8s/default/grpc;
        /grpc => /srv;
        /grpc/World => /srv/world-v1;
        /svc => /$/io.buoyant.http.domainToPathPfx/grpc;
      identifier:
        kind: io.l5d.header.path
        segments: 1
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
    - protocol: h2
      label: incoming
      experimental: true
      dtab: |
        /srv => /#/io.l5d.k8s/default/grpc;
        /grpc => /srv;
        /grpc/World => /srv/world-v1;
        /svc => /$/io.buoyant.http.domainToPathPfx/grpc;
      identifier:
        kind: io.l5d.header.path
        segments: 1
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-linkerd:1.1.2
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990

@@ -0,0 +1,77 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    namers:
    - kind: io.l5d.k8s
    routers:
    - protocol: http
      identifier:
        kind: io.l5d.ingress
      servers:
      - port: 80
        ip: 0.0.0.0
        clearContext: true
      dtab: /svc => /#/io.l5d.k8s
    usage:
      orgId: linkerd-examples-ingress
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-linkerd:1.1.2
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-kubectl:v1.4.0
        args: ["proxy", "-p", "8001"]
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: http
    port: 80
  - name: admin
    port: 9990

@@ -0,0 +1,148 @@
# runs linkerd in a daemonset, in linker-to-linker mode
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
    usage:
      orgId: linkerd-examples-daemonset-ingress
    routers:
    - protocol: http
      label: ingress
      dtab: |
        /srv => /#/io.l5d.k8s/default/http ;
        /domain/world/hello/www => /srv/hello ;
        /domain/world/hello/api => /srv/api ;
        /host => /$/io.buoyant.http.domainToPathPfx/domain ;
        /svc => /host ;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4142
        ip: 0.0.0.0
    - protocol: http
      label: outgoing
      dtab: |
        /srv => /#/io.l5d.k8s/default/http ;
        /host => /srv ;
        /host/world => /srv/world-v1 ;
        /svc => /host ;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX
    - protocol: http
      label: incoming
      dtab: |
        /srv => /#/io.l5d.k8s/default/http ;
        /domain/world/hello/www => /srv/hello ;
        /domain/world/hello/api => /srv/api ;
        /host => /$/io.buoyant.http.domainToPathPfx/domain ;
        /host => /srv ;
        /host/world => /srv/world-v1 ;
        /svc => /host ;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-linkerd:1.1.2
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
          hostPort: 4141
        - name: ingress
          containerPort: 4142
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  ports:
  - name: ingress
    port: 80
    targetPort: 4142
  - name: incoming
    port: 4141
  - name: admin
    port: 9990

@@ -0,0 +1,127 @@
# runs linkerd in a daemonset, in linker-to-linker mode, using namerd to route
# requests, and handling edge traffic on a separate linkerd router
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990
    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
    usage:
      orgId: linkerd-examples-daemonset-namerd
    routers:
    - protocol: http
      label: outgoing
      interpreter:
        kind: io.l5d.namerd
        dst: /$/inet/namerd.default.svc.cluster.local/4100
        namespace: internal
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX
    - protocol: http
      label: incoming
      interpreter:
        kind: io.l5d.namerd
        dst: /$/inet/namerd.default.svc.cluster.local/4100
        namespace: internal
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
    - protocol: http
      label: external
      interpreter:
        kind: io.l5d.namerd
        dst: /$/inet/namerd.default.svc.cluster.local/4100
        namespace: external
      servers:
      - port: 4142
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-linkerd:1.1.2
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: external
          containerPort: 4142
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: external
    port: 80
    targetPort: 4142
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990

@@ -0,0 +1,47 @@
# RBAC configs for linkerd
---
# grant linkerd/namerd permissions to enable service discovery
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: linkerd-endpoints-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["endpoints", "services", "pods"] # pod access is required for the *-legacy.yml examples in this folder
  verbs: ["get", "watch", "list"]
---
# grant namerd permissions to third party resources for dtab storage
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: namerd-dtab-storage
rules:
- apiGroups: ["l5d.io"]
  resources: ["dtabs"]
  verbs: ["get", "watch", "list", "update", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: linkerd-role-binding
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: linkerd-endpoints-reader
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: namerd-role-binding
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: namerd-dtab-storage
  apiGroup: rbac.authorization.k8s.io

@@ -0,0 +1,57 @@
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: linkerd-viz
  labels:
    name: linkerd-viz
spec:
  replicas: 1
  selector:
    name: linkerd-viz
  template:
    metadata:
      labels:
        name: linkerd-viz
    spec:
      containers:
      - name: linkerd-viz
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-linkerd-viz:latest
        args: ["k8s"]
        imagePullPolicy: Always
        env:
        - name: PUBLIC_PORT
          value: "3000"
        - name: STATS_PORT
          value: "9191"
        - name: SCRAPE_INTERVAL
          value: "30s"
        ports:
        - name: grafana
          containerPort: 3000
        - name: prometheus
          containerPort: 9191
      - name: kubectl
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: linkerd-viz
  labels:
    name: linkerd-viz
spec:
  ports:
  - name: grafana
    port: 80
    targetPort: 3000
  - name: prometheus
    port: 9191
    targetPort: 9191
  selector:
    name: linkerd-viz

@@ -0,0 +1,127 @@
# runs linkerd in a daemonset, in linker-to-linker mode, with zipkin tracing
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.zipkin
      host: zipkin-collector.default.svc.cluster.local
      port: 9410
      sampleRate: 1.0
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
    usage:
      orgId: linkerd-examples-daemonset-zipkin
    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv => /#/io.l5d.k8s/default/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX
    - protocol: http
      label: incoming
      dtab: |
        /srv => /#/io.l5d.k8s/default/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-linkerd:1.1.2
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
          hostPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990

@@ -0,0 +1,122 @@
# runs linkerd in a daemonset, in linker-to-linker mode
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
    usage:
      orgId: linkerd-examples-daemonset
    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv => /#/io.l5d.k8s/default/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX
    - protocol: http
      label: incoming
      dtab: |
        /srv => /#/io.l5d.k8s/default/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-linkerd:1.1.2
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990

@@ -0,0 +1,152 @@
---
kind: ThirdPartyResource
apiVersion: extensions/v1beta1
metadata:
  name: d-tab.l5d.io
description: stores dtabs used by namerd
versions:
- name: v1alpha1
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: namerd-config
data:
  config.yml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    storage:
      kind: io.l5d.k8s
      host: localhost
      port: 8001
      namespace: default
    interfaces:
    - kind: io.l5d.thriftNameInterpreter
      ip: 0.0.0.0
      port: 4100
    - kind: io.l5d.httpController
      ip: 0.0.0.0
      port: 4180
---
kind: ReplicationController
apiVersion: v1
metadata:
  name: namerd
spec:
  replicas: 1
  selector:
    app: namerd
  template:
    metadata:
      labels:
        app: namerd
    spec:
      dnsPolicy: ClusterFirst
      volumes:
      - name: namerd-config
        configMap:
          name: namerd-config
      containers:
      - name: namerd
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-namerd:1.1.2
        args:
        - /io.buoyant/namerd/config/config.yml
        ports:
        - name: thrift
          containerPort: 4100
        - name: http
          containerPort: 4180
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "namerd-config"
          mountPath: "/io.buoyant/namerd/config"
          readOnly: true
      - name: kubectl
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: namerd
spec:
  selector:
    app: namerd
  ports:
  - name: thrift
    port: 4100
  - name: http
    port: 4180
  - name: admin
    port: 9990
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: namerctl-script
data:
  createNs.sh: |-
    #!/bin/sh
    set -e
    if namerctl dtab get external > /dev/null 2>&1; then
      echo "external namespace already exists"
    else
      echo "
      /host => /#/io.l5d.k8s/default/http/hello;
      /svc/* => /host;
      " | namerctl dtab create external -
    fi
    if namerctl dtab get internal > /dev/null 2>&1; then
      echo "internal namespace already exists"
    else
      echo "
      /srv => /#/io.l5d.k8s/default/http;
      /host => /srv;
      /tmp => /srv;
      /svc => /host;
      /host/world => /srv/world-v1;
      " | namerctl dtab create internal -
    fi
---
kind: Job
apiVersion: batch/v1
metadata:
  name: namerctl
spec:
  template:
    metadata:
      name: namerctl
    spec:
      volumes:
      - name: namerctl-script
        configMap:
          name: namerctl-script
          defaultMode: 0755
      containers:
      - name: namerctl
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/linkerd-namerctl:0.8.6
        env:
        - name: NAMERCTL_BASE_URL
          value: http://namerd.default.svc.cluster.local:4180
        command:
        - "/namerctl/createNs.sh"
        volumeMounts:
        - name: "namerctl-script"
          mountPath: "/namerctl"
          readOnly: true
      restartPolicy: OnFailure

@@ -0,0 +1,110 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  nginx.conf: |-
    worker_processes 1;
    events { worker_connections 1024; }
    http {
      sendfile on;
      server {
        listen 80;
        # a test endpoint that returns http 200s
        location / {
          proxy_pass http://httpstat.us/200;
          proxy_set_header X-Real-IP $remote_addr;
        }
      }
      server {
        listen 80;
        server_name api.hello.world;
        location / {
          proxy_pass http://l5d.default.svc.cluster.local;
          proxy_set_header Host $host;
          proxy_set_header Connection "";
          proxy_http_version 1.1;
          more_clear_input_headers 'l5d-ctx-*' 'l5d-dtab' 'l5d-sample';
        }
      }
      server {
        listen 80;
        server_name www.hello.world;
        location / {
          # allow 'employees' to perform dtab overrides
          if ($cookie_special_employee_cookie != "letmein") {
            more_clear_input_headers 'l5d-ctx-*' 'l5d-dtab' 'l5d-sample';
          }
          # add a dtab override to get people to our beta, world-v2
          set $xheader "";
          if ($cookie_special_employee_cookie ~* "dogfood") {
            set $xheader "/host/world => /srv/world-v2;";
          }
          proxy_set_header 'l5d-dtab' $xheader;
          proxy_pass http://l5d.default.svc.cluster.local;
          proxy_set_header Host $host;
          proxy_set_header Connection "";
          proxy_http_version 1.1;
        }
      }
    }
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
      containers:
      - name: nginx
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-nginx:1.10.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: "nginx-config"
          mountPath: "/etc/nginx"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: nginx

@@ -0,0 +1,369 @@
################################################################################
# Linkerd Service Mesh
#
# This is a basic Kubernetes config file to deploy a service mesh of Linkerd
# instances onto your Kubernetes cluster that is capable of handling HTTP,
# HTTP/2 and gRPC calls with some reasonable defaults.
#
# To configure your applications to use Linkerd for HTTP traffic you can set
# the `http_proxy` environment variable to `$(NODE_NAME):4140` where
# `NODE_NAME` is the name of the node on which the application instance is
# running. The `NODE_NAME` environment variable can be set with the downward
# API.
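#
# For example, a container can wire this up with the downward API like so
# (an illustrative sketch; this mirrors the api.yml and hello-world.yml
# manifests in this commit, not a fragment of this file):
#
#   env:
#   - name: NODE_NAME
#     valueFrom:
#       fieldRef:
#         fieldPath: spec.nodeName
#   - name: http_proxy
#     value: $(NODE_NAME):4140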
#
# If your application does not support the `http_proxy` environment variable
# or if you want to configure your application to use Linkerd for HTTP/2 or
# gRPC traffic, you must configure your application to send traffic directly
# to Linkerd:
#
# * $(NODE_NAME):4140 for HTTP
# * $(NODE_NAME):4240 for HTTP/2
# * $(NODE_NAME):4340 for gRPC
#
# If you are sending HTTP or HTTP/2 traffic directly to Linkerd, you must set
# the Host/Authority header to `<service>` or `<service>.<namespace>` where
# `<service>` and `<namespace>` are the names of the service and namespace
# that you want to proxy to. If unspecified, `<namespace>` defaults to
# `default`.
#
# If your application receives HTTP, HTTP/2, and/or gRPC traffic it must have
# a Kubernetes Service object with ports named `http`, `h2`, and/or `grpc`
# respectively.
#
# You can deploy this to your Kubernetes cluster by running:
#   kubectl create ns linkerd
#   kubectl apply -n linkerd -f servicemesh.yml
#
# There are sections of this config that can be uncommented to enable:
# * CNI compatibility
# * Automatic retries
# * Zipkin tracing
################################################################################
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
  namespace: linkerd
data:
  config.yaml: |-
    admin:
      port: 9990
    # Namers provide Linkerd with service discovery information. To use a
    # namer, you reference it in the dtab by its prefix. We define 4 namers:
    # * /io.l5d.k8s gets the address of the target app
    # * /io.l5d.k8s.http gets the address of the http-incoming Linkerd router on the target app's node
    # * /io.l5d.k8s.h2 gets the address of the h2-incoming Linkerd router on the target app's node
    # * /io.l5d.k8s.grpc gets the address of the grpc-incoming Linkerd router on the target app's node
    namers:
    - kind: io.l5d.k8s
    - kind: io.l5d.k8s
      prefix: /io.l5d.k8s.http
      transformers:
      # The daemonset transformer replaces the address of the target app with
      # the address of the http-incoming router of the Linkerd daemonset pod
      # on the target app's node.
      - kind: io.l5d.k8s.daemonset
        namespace: linkerd
        port: http-incoming
        service: l5d
        # hostNetwork: true # Uncomment if using host networking (eg for CNI)
    - kind: io.l5d.k8s
      prefix: /io.l5d.k8s.h2
      transformers:
      # The daemonset transformer replaces the address of the target app with
      # the address of the h2-incoming router of the Linkerd daemonset pod
      # on the target app's node.
      - kind: io.l5d.k8s.daemonset
        namespace: linkerd
        port: h2-incoming
        service: l5d
        # hostNetwork: true # Uncomment if using host networking (eg for CNI)
    - kind: io.l5d.k8s
      prefix: /io.l5d.k8s.grpc
      transformers:
      # The daemonset transformer replaces the address of the target app with
      # the address of the grpc-incoming router of the Linkerd daemonset pod
      # on the target app's node.
      - kind: io.l5d.k8s.daemonset
        namespace: linkerd
        port: grpc-incoming
        service: l5d
        # hostNetwork: true # Uncomment if using host networking (eg for CNI)
    - kind: io.l5d.rewrite
      prefix: /portNsSvcToK8s
      pattern: "/{port}/{ns}/{svc}"
      name: "/k8s/{ns}/{port}/{svc}"
    # Telemeters export metrics and tracing data about Linkerd, the services it
    # connects to, and the requests it processes.
    telemetry:
    - kind: io.l5d.prometheus # Expose Prometheus style metrics on :9990/admin/metrics/prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25 # Tune this sample rate before going to production
    # - kind: io.l5d.zipkin # Uncomment to enable exporting of zipkin traces
    #   host: zipkin-collector.default.svc.cluster.local # Zipkin collector address
    #   port: 9410
    #   sampleRate: 1.0 # Set to a lower sample rate depending on your traffic volume
    # Usage is used for anonymized usage reporting. You can set the orgId to
    # identify your organization or set `enabled: false` to disable entirely.
    usage:
      orgId: linkerd-examples-servicemesh
    # Routers define how Linkerd actually handles traffic. Each router listens
    # for requests, applies routing rules to those requests, and proxies them
    # to the appropriate destinations. Each router is protocol specific.
    # For each protocol (HTTP, HTTP/2, gRPC) we define an outgoing router and
    # an incoming router. The application is expected to send traffic to the
    # outgoing router which proxies it to the incoming router of the Linkerd
    # running on the target service's node. The incoming router then proxies
    # the request to the target application itself. We also define HTTP and
    # HTTP/2 ingress routers which act as Ingress Controllers and route based
    # on the Ingress resource.
    routers:
    - label: http-outgoing
      protocol: http
      servers:
      - port: 4140
        ip: 0.0.0.0
      # This dtab looks up service names in k8s and falls back to DNS if they're
      # not found (e.g. for external services). It accepts names of the form
      # "service" and "service.namespace", defaulting the namespace to
      # "default". For DNS lookups, it uses port 80 if unspecified. Note that
      # dtab rules are read bottom to top. To see this in action, on the Linkerd
      # administrative dashboard, click on the "dtab" tab, select "http-outgoing"
      # from the dropdown, and enter a service name like "a.b". (Or click on the
      # "requests" tab to see recent traffic through the system and how it was
      # resolved.)
      dtab: |
        /ph        => /$/io.buoyant.rinet ;                     # /ph/80/google.com -> /$/io.buoyant.rinet/80/google.com
        /svc       => /ph/80 ;                                  # /svc/google.com -> /ph/80/google.com
        /svc       => /$/io.buoyant.porthostPfx/ph ;            # /svc/google.com:80 -> /ph/80/google.com
        /k8s       => /#/io.l5d.k8s.http ;                      # /k8s/default/http/foo -> /#/io.l5d.k8s.http/default/http/foo
        /portNsSvc => /#/portNsSvcToK8s ;                       # /portNsSvc/http/default/foo -> /k8s/default/http/foo
        /host      => /portNsSvc/http/default ;                 # /host/foo -> /portNsSvc/http/default/foo
        /host      => /portNsSvc/http ;                         # /host/default/foo -> /portNsSvc/http/default/foo
        /svc       => /$/io.buoyant.http.domainToPathPfx/host ; # /svc/foo.default -> /host/default/foo
      client:
        kind: io.l5d.static
        configs:
        # Use HTTPS if sending to port 443
        - prefix: "/$/io.buoyant.rinet/443/{service}"
          tls:
            commonName: "{service}"
    - label: http-incoming
      protocol: http
      servers:
      - port: 4141
        ip: 0.0.0.0
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
          # hostNetwork: true # Uncomment if using host networking (eg for CNI)
      dtab: |
        /k8s       => /#/io.l5d.k8s ;                           # /k8s/default/http/foo -> /#/io.l5d.k8s/default/http/foo
        /portNsSvc => /#/portNsSvcToK8s ;                       # /portNsSvc/http/default/foo -> /k8s/default/http/foo
        /host      => /portNsSvc/http/default ;                 # /host/foo -> /portNsSvc/http/default/foo
        /host      => /portNsSvc/http ;                         # /host/default/foo -> /portNsSvc/http/default/foo
        /svc       => /$/io.buoyant.http.domainToPathPfx/host ; # /svc/foo.default -> /host/default/foo
    - label: h2-outgoing
      protocol: h2
      experimental: true
      servers:
      - port: 4240
        ip: 0.0.0.0
      dtab: |
        /ph        => /$/io.buoyant.rinet ;                     # /ph/80/google.com -> /$/io.buoyant.rinet/80/google.com
        /svc       => /ph/80 ;                                  # /svc/google.com -> /ph/80/google.com
        /svc       => /$/io.buoyant.porthostPfx/ph ;            # /svc/google.com:80 -> /ph/80/google.com
        /k8s       => /#/io.l5d.k8s.h2 ;                        # /k8s/default/h2/foo -> /#/io.l5d.k8s.h2/default/h2/foo
        /portNsSvc => /#/portNsSvcToK8s ;                       # /portNsSvc/h2/default/foo -> /k8s/default/h2/foo
        /host      => /portNsSvc/h2/default ;                   # /host/foo -> /portNsSvc/h2/default/foo
        /host      => /portNsSvc/h2 ;                           # /host/default/foo -> /portNsSvc/h2/default/foo
        /svc       => /$/io.buoyant.http.domainToPathPfx/host ; # /svc/foo.default -> /host/default/foo
      client:
        kind: io.l5d.static
        configs:
        # Use HTTPS if sending to port 443
        - prefix: "/$/io.buoyant.rinet/443/{service}"
          tls:
            commonName: "{service}"
    - label: h2-incoming
      protocol: h2
      experimental: true
      servers:
      - port: 4241
        ip: 0.0.0.0
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
          # hostNetwork: true # Uncomment if using host networking (eg for CNI)
      dtab: |
        /k8s       => /#/io.l5d.k8s ;                           # /k8s/default/h2/foo -> /#/io.l5d.k8s/default/h2/foo
        /portNsSvc => /#/portNsSvcToK8s ;                       # /portNsSvc/h2/default/foo -> /k8s/default/h2/foo
        /host      => /portNsSvc/h2/default ;                   # /host/foo -> /portNsSvc/h2/default/foo
        /host      => /portNsSvc/h2 ;                           # /host/default/foo -> /portNsSvc/h2/default/foo
        /svc       => /$/io.buoyant.http.domainToPathPfx/host ; # /svc/foo.default -> /host/default/foo
    - label: grpc-outgoing
      protocol: h2
      experimental: true
      servers:
      - port: 4340
        ip: 0.0.0.0
      identifier:
        kind: io.l5d.header.path
        segments: 1
      dtab: |
        /hp  => /$/inet ;                                # /hp/linkerd.io/8888 -> /$/inet/linkerd.io/8888
        /svc => /$/io.buoyant.hostportPfx/hp ;           # /svc/linkerd.io:8888 -> /hp/linkerd.io/8888
        /srv => /#/io.l5d.k8s.grpc/default/grpc ;        # /srv/service/package -> /#/io.l5d.k8s.grpc/default/grpc/service/package
        /svc => /$/io.buoyant.http.domainToPathPfx/srv ; # /svc/package.service -> /srv/service/package
      client:
        kind: io.l5d.static
        configs:
        # Always use TLS when sending to external grpc servers
        - prefix: "/$/inet/{service}"
          tls:
            commonName: "{service}"
    - label: grpc-incoming
      protocol: h2
      experimental: true
      servers:
      - port: 4341
        ip: 0.0.0.0
      identifier:
        kind: io.l5d.header.path
        segments: 1
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
          # hostNetwork: true # Uncomment if using host networking (eg for CNI)
      dtab: |
        /srv => /#/io.l5d.k8s/default/grpc ;             # /srv/service/package -> /#/io.l5d.k8s/default/grpc/service/package
        /svc => /$/io.buoyant.http.domainToPathPfx/srv ; # /svc/package.service -> /srv/service/package
    # HTTP Ingress Controller listening on port 80
    - protocol: http
      label: http-ingress
      servers:
      - port: 80
        ip: 0.0.0.0
        clearContext: true
      identifier:
        kind: io.l5d.ingress
      dtab: /svc => /#/io.l5d.k8s
    # HTTP/2 Ingress Controller listening on port 8080
    - protocol: h2
      experimental: true
      label: h2-ingress
      servers:
      - port: 8080
        ip: 0.0.0.0
        clearContext: true
      identifier:
        kind: io.l5d.ingress
      dtab: /svc => /#/io.l5d.k8s
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
  namespace: linkerd
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      # hostNetwork: true # Uncomment to use host networking (eg for CNI)
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-linkerd:1.1.2
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: http-outgoing
          containerPort: 4140
          hostPort: 4140
        - name: http-incoming
          containerPort: 4141
        - name: h2-outgoing
          containerPort: 4240
          hostPort: 4240
        - name: h2-incoming
          containerPort: 4241
        - name: grpc-outgoing
          containerPort: 4340
          hostPort: 4340
        - name: grpc-incoming
          containerPort: 4341
        - name: http-ingress
          containerPort: 80
        - name: h2-ingress
          containerPort: 8080
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      # Run `kubectl proxy` as a sidecar to give us authenticated access to the
      # Kubernetes API.
      - name: kubectl
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
  namespace: linkerd
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: http-outgoing
    port: 4140
  - name: http-incoming
    port: 4141
  - name: h2-outgoing
    port: 4240
  - name: h2-incoming
    port: 4241
  - name: grpc-outgoing
    port: 4340
  - name: grpc-incoming
    port: 4341
  - name: http-ingress
    port: 80
  - name: h2-ingress
    port: 8080

@@ -0,0 +1,42 @@
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: world-v2
spec:
  replicas: 3
  selector:
    app: world-v2
  template:
    metadata:
      labels:
        app: world-v2
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: service
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/buoyantio-helloworld:0.1.4
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: TARGET_WORLD
          value: earth
        args:
        - "-addr=:7778"
        ports:
        - name: service
          containerPort: 7778
---
apiVersion: v1
kind: Service
metadata:
  name: world-v2
spec:
  selector:
    app: world-v2
  clusterIP: None
  ports:
  - name: http
    port: 7778

@@ -0,0 +1,55 @@
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: zipkin
spec:
  replicas: 1
  selector:
    app: zipkin
  template:
    metadata:
      name: zipkin
      labels:
        app: zipkin
    spec:
      containers:
      - name: zipkin
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/openzipkin-zipkin:1.20
        env:
        - name: SCRIBE_ENABLED
          value: "true"
        ports:
        - name: scribe
          containerPort: 9410
        - name: http
          containerPort: 9411
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: zipkin-collector
  name: zipkin-collector
spec:
  type: ClusterIP
  selector:
    app: zipkin
  ports:
  - name: scribe
    port: 9410
    targetPort: 9410
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: zipkin
  name: zipkin
spec:
  selector:
    app: zipkin
  ports:
  - name: http
    port: 80
    targetPort: 9411

@@ -0,0 +1,287 @@
# Linkerd User Guide

## Introduction

As a service mesh, Linkerd combined with Kubernetes is mainly used in the following ways:

1. As a service gateway that monitors the services and instances in Kubernetes
2. Encrypting traffic between services with TLS
3. Continuous delivery through traffic shifting
4. Dev/test environments ("eat your own dog food"), Ingress, and edge routing
5. Staging microservices
6. Distributed tracing
7. As an Ingress controller
8. Making gRPC easier to use

Below we focus on using Linkerd as the Kubernetes Ingress controller, acting as the edge node in place of [Traefik](https://traefik.io); see [Edge Node Configuration](../practice/edge-node-configuration.md) for details.

## Preparation

The images needed for installation and testing are:

```
buoyantio/helloworld:0.1.4
buoyantio/jenkins-plus:2.60.1
buoyantio/kubectl:v1.4.0
buoyantio/linkerd:1.1.2
buoyantio/namerd:1.1.2
buoyantio/nginx:1.10.2
linkerd/namerctl:0.8.6
openzipkin/zipkin:1.20
tutum/dnsutils:latest
```

These images can be pulled directly from Docker Hub. I downloaded them and pushed them to my private registry `sz-pg-oam-docker-hub-001.tendcloud.com`; all images referenced below come from this private registry. The YAML configs are in the [linkerd](../manifests/linkerd) directory — change the image addresses in them to your own before use.
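A minimal sketch of that mirroring step (the registry name is mine; note that `org/name` images are renamed to `library/org-name` form, which is what the manifests expect):

```bash
# Pull each public image, retag it for the private registry, and push it.
REGISTRY=sz-pg-oam-docker-hub-001.tendcloud.com/library
for image in buoyantio/helloworld:0.1.4 buoyantio/jenkins-plus:2.60.1 \
             buoyantio/kubectl:v1.4.0 buoyantio/linkerd:1.1.2 \
             buoyantio/namerd:1.1.2 buoyantio/nginx:1.10.2 \
             linkerd/namerctl:0.8.6 openzipkin/zipkin:1.20; do
  docker pull "$image"
  private="$REGISTRY/$(echo "$image" | tr '/' '-')"  # buoyantio/helloworld -> buoyantio-helloworld
  docker tag "$image" "$private"
  docker push "$private"
done
```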
## Deployment

First create the RBAC configs, which are needed when using namerd and the ingress:

```bash
$ kubectl create -f linkerd-rbac-beta.yml
```

Linkerd provides a Jenkins example; deploy it with:

```bash
$ kubectl create -f jenkins-rbac-beta.yml
$ kubectl create -f jenkins.yml
```

Visit http://jenkins.jimmysong.io

![Jenkins pipeline](../images/linkerd-jenkins-pipeline.jpg)

![Jenkins config](../images/linkerd-jenkins.jpg)

**Note**: to reach Jenkins you need to add an entry to the Ingress; see below.

When running Jenkins on Kubernetes, pay attention to this part of the Pipeline configuration:

```
def currentVersion = getCurrentVersion()
def newVersion = getNextVersion(currentVersion)
def frontendIp = kubectl("get svc l5d -o jsonpath=\"{.status.loadBalancer.ingress[0].*}\"").trim()
def originalDst = getDst(getDtab())
```

`frontendIp` must be set to the service's cluster IP, because we are not using a LoadBalancer.
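With that constraint, the lookup can read `.spec.clusterIP` instead of the load-balancer ingress field; a one-line sketch:

```bash
# Read the service's cluster IP instead of the (absent) load-balancer ingress IP
$ kubectl get svc l5d -o jsonpath='{.spec.clusterIP}'
```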
namerd also needs to be installed. namerd stores the dtab information (it could also be stored in etcd or Consul). A dtab holds the routing rules and supports recursive resolution; see [dtab](https://linkerd.io/in-depth/dtabs/).

Traffic shifting is implemented mainly through [dtabs](https://linkerd.io/in-depth/dtabs/): by adding `l5d-dtab` and `Host` headers to an HTTP request, traffic can be steered to different services in Kubernetes.
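For instance, under the `internal` dtab created by namerd.yml above, the name `/svc/world` resolves step by step (later dtab entries take precedence):

```
/svc/world
  => /host/world                            # via /svc => /host
  => /srv/world-v1                          # via /host/world => /srv/world-v1
  => /#/io.l5d.k8s/default/http/world-v1    # via /srv => /#/io.l5d.k8s/default/http
```

The final name is handed to the `io.l5d.k8s` namer, which returns the endpoints behind port `http` of the `world-v1` service in the `default` namespace.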
**A problem encountered**

```
Failed with the following error(s)
Error signal dtab is already marked as being deployed!
```

This happens because the dtab entry already exists; delete it and run again.

Visit http://namerd.jimmysong.io

![namerd](../images/namerd-internal.jpg)

The dtabs are stored in namerd. Changes made on this page do not take effect; you have to use the command line instead.

Use [namerctl](https://github.com/linkerd/namerctl) to operate on them:

```bash
$ namerctl --base-url http://namerd-backend.jimmysong.io dtab update internal file
```

**Note**: for `update`, write the new dtab text to a file first.
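A sketch of a complete update, using the `internal` dtab created by namerd.yml and shifting `world` to `world-v2` (here `namerd-backend.jimmysong.io` is this setup's ingress entry for namerd's HTTP API):

```bash
# Write the new dtab to a file, then push it to namerd's "internal" namespace
$ cat > internal.dtab <<'EOF'
/srv => /#/io.l5d.k8s/default/http;
/host => /srv;
/svc => /host;
/host/world => /srv/world-v2;
EOF
$ namerctl --base-url http://namerd-backend.jimmysong.io dtab update internal internal.dtab
```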
## Deploying Linkerd

Deploy directly from the YAML files, remembering to change the image registry addresses first.

```bash
# create namerd
$ kubectl create -f namerd.yaml

# create the ingress
$ kubectl create -f linkerd-ingress.yml

# create the hello-world test services
$ kubectl create -f hello-world.yml

# create the api service
$ kubectl create -f api.yml

# create the world-v2 test service
$ kubectl create -f world-v2.yml
```
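A quick sanity check is to list what was just created:

```bash
# The l5d daemonset, namerd, and the demo apps should all be running
$ kubectl get ds,rc,svc,pods -o wide
```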
To debug Linkerd locally, we add Linkerd's services to the ingress as well; see [Edge Node Configuration](../practice/edge-node-configuration.md).

Add the following to the Ingress:

```yaml
  - host: linkerd.jimmysong.io
    http:
      paths:
      - path: /
        backend:
          serviceName: l5d
          servicePort: 9990
  - host: linkerd-viz.jimmysong.io
    http:
      paths:
      - path: /
        backend:
          serviceName: linkerd-viz
          servicePort: 80
  - host: l5d.jimmysong.io
    http:
      paths:
      - path: /
        backend:
          serviceName: l5d
          servicePort: 4141
  - host: jenkins.jimmysong.io
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins
          servicePort: 80
```

Then add the following to your local `/etc/hosts`:

```
172.20.0.119 linkerd.jimmysong.io
172.20.0.119 linkerd-viz.jimmysong.io
172.20.0.119 l5d.jimmysong.io
```

**Testing the routing**

A simple test with curl.

A single request:

```bash
$ curl -s -H "Host: www.hello.world" 172.20.0.120:4141
Hello (172.30.60.14) world (172.30.71.19)!!
```

Note the response: it shows the request was served by the `world-v1` service.

```bash
$ for i in $(seq 0 10000);do echo $i;curl -s -H "Host: www.hello.world" 172.20.0.120:4141;done
```

A load test with ab:

```bash
$ ab -c 4 -n 10000 -H "Host: www.hello.world" http://172.20.0.120:4141/
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 172.20.0.120 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests

Server Software:
Server Hostname:        172.20.0.120
Server Port:            4141

Document Path:          /
Document Length:        43 bytes

Concurrency Level:      4
Time taken for tests:   262.505 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      2210000 bytes
HTML transferred:       430000 bytes
Requests per second:    38.09 [#/sec] (mean)
Time per request:       105.002 [ms] (mean)
Time per request:       26.250 [ms] (mean, across all concurrent requests)
Transfer rate:          8.22 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       36   51  91.1     39    2122
Processing:    39   54  29.3     46     585
Waiting:       39   52  20.3     46     362
Total:         76  105  96.3     88    2216

Percentage of the requests served within a certain time (ms)
  50%     88
  66%     93
  75%     99
  80%    103
  90%    119
  95%    146
  98%    253
  99%    397
 100%   2216 (longest request)
```

## Monitoring services and instances in Kubernetes

Visit http://linkerd.jimmysong.io to watch the traffic.

Outgoing:

![Linkerd monitoring](../images/linkerd-helloworld-outgoing.jpg)

Incoming:

![Linkerd monitoring](../images/linkerd-helloworld-incoming.jpg)

Visit http://linkerd-viz.jimmysong.io for the application metrics:

![Linkerd metrics](../images/linkerd-grafana.png)

## Testing routing

Test adding a dtab rule in the HTTP header:

```bash
$ curl -H "Host: www.hello.world" -H "l5d-dtab:/host/world => /srv/world-v2;" 172.20.0.120:4141
Hello (172.30.60.14) earth (172.30.94.40)!!
```

Note the response: this time the request was served by the `world-v2` service.

Comparing the ab test results with the `linkerd-viz` page also shows the numbers agree.

However, we may not want to expose this capability to everyone, so we can put an nginx in front to strip headers starting with `l5d-dtab`, and use a cookie instead of the header to trigger the override.
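That is exactly what the `www.hello.world` server block in nginx.yml above does — it strips the Linkerd context headers unless a special cookie is present, and turns a "dogfood" cookie into a dtab override:

```nginx
# excerpt from the nginx-config ConfigMap above
if ($cookie_special_employee_cookie != "letmein") {
  more_clear_input_headers 'l5d-ctx-*' 'l5d-dtab' 'l5d-sample';
}
set $xheader "";
if ($cookie_special_employee_cookie ~* "dogfood") {
  set $xheader "/host/world => /srv/world-v2;";
}
proxy_set_header 'l5d-dtab' $xheader;
```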
Applications can also reach services through Linkerd by setting `http_proxy`:

```bash
$ http_proxy=http://172.20.0.120:4141 curl -s http://hello
Hello (172.30.60.14) world (172.30.71.19)!!
```

## Using Linkerd as an Ingress controller

Using Linkerd as the Kubernetes Ingress controller works exactly the same way as using Traefik; see [Edge Node Configuration](../practice/edge-node-configuration.md).

The architecture is shown in the figure below.

![Linkerd ingress controller](../images/linkerd-ingress-controller.jpg)

*(Image from "A Service Mesh for Kubernetes", Buoyant.io)*

You can of course bypass the Kubernetes ingress controller entirely and use Linkerd directly as the edge router, steering traffic with dtabs and the nginx in front of Linkerd.

## References

- [linkerd-examples](https://github.com/linkerd/linkerd-examples/)
- [A Service Mesh for Kubernetes](https://cdn2.hubspot.net/hubfs/2818724/A%20Service%20Mesh%20for%20Kubernetes_Final.pdf)
- [dtab](https://linkerd.io/in-depth/dtabs/)