add kubeapps

pull/1282/head
gjmzj 2023-05-28 12:06:42 +08:00
parent 14cfebff68
commit c3c43f227c
14 changed files with 343 additions and 9 deletions


@@ -6,7 +6,7 @@
- Install [dashboard](dashboard.md)
- Install [metrics-server](metrics-server.md)
- Install [prometheus](prometheus.md)
- Install [heapster](heapster.md) DEPRECATED WARNING
- Install [kubeapps](kubeapps.md)
- Install [ingress](ingress.md)
- Install [helm](helm.md)
- Install [efk](efk.md)


@@ -0,0 +1,67 @@
# Managing Cluster Applications with Kubeapps
Kubeapps is a web-based application that can be installed on a Kubernetes cluster in one step, enabling users to deploy, manage, and upgrade applications.
<img alt="kubeapps_dashboard" width="400" height="300" src="https://github.com/vmware-tanzu/kubeapps/raw/main/site/content/docs/latest/img/dashboard-login.png">
Project repository: https://github.com/vmware-tanzu/kubeapps
Deployment chart: https://github.com/bitnami/charts/tree/main/bitnami/kubeapps
## Deploying with kubeasz
- 1. Edit the cluster configuration file clusters/${cluster_name}/config.yml
``` bash
kubeapps_install: "yes"                  # enable installation
kubeapps_install_namespace: "kubeapps"   # namespace to install kubeapps into
kubeapps_working_namespace: "default"    # default namespace for managed applications
kubeapps_storage_class: "local-path"     # storage class to use; defaults to local-path-provisioner
kubeapps_chart_ver: "12.4.3"
```
- 2. Download the required container images
``` bash
# download the kubeapps images
/etc/kubeasz/ezdown -X kubeapps
# download the local-path-provisioner images
/etc/kubeasz/ezdown -X local-path-provisioner
```
- 3. Install the cluster-addon step
``` bash
$ dk ezctl setup ${cluster_name} 07
# verify after successful execution
$ kubectl get pod --all-namespaces |grep kubeapps
```
## Verifying Kubeapps
Read the documentation: https://github.com/vmware-tanzu/kubeapps/blob/main/site/content/docs/latest/tutorials/getting-started.md
For production use it is recommended to configure OAuth2/OIDC user authentication; here we only verify login with the k8s ServiceAccount method. The project pre-installs three user permissions:
- 1. kubeapps-admin-token: cluster-wide cluster-admin permissions (not recommended)
- 2. kubeapps-edit-token: write access to applications in a given namespace
- 3. kubeapps-view-token: read-only access to applications in a given namespace
``` bash
# get the UI access address (NodePort by default)
kubectl get svc -n kubeapps kubeapps
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubeapps NodePort 10.68.92.88 <none> 80:32490/TCP 117m
# get the admin token
kubectl get secrets -n kube-system kubeapps-admin-token -o go-template='{{.data.token | base64decode}}'
# get the edit (deploy) token for a given namespace
kubectl get secrets -n default kubeapps-edit-token -o go-template='{{.data.token | base64decode}}'
# get the view (read-only) token for a given namespace
kubectl get secrets -n default kubeapps-view-token -o go-template='{{.data.token | base64decode}}'
```
Open a browser at http://${Node_IP}:32490 and log in with a token of the appropriate permission level from above.
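For production, the ServiceAccount tokens above should give way to OIDC. Below is a hedged sketch of the relevant chart value overrides; the `authProxy` keys come from the Bitnami kubeapps chart, while the issuer URL, client ID/secret, and cookie secret are placeholders to replace with your IdP's values:
``` yaml
# Sketch only: enable the bundled oauth2-proxy for OIDC login.
authProxy:
  enabled: true
  provider: oidc
  clientID: kubeapps                          # client registered in your IdP (placeholder)
  clientSecret: "REPLACE_WITH_CLIENT_SECRET"  # placeholder
  cookieSecret: "REPLACE_WITH_BASE64_SECRET"  # random base64 string for oauth2-proxy cookies
  extraFlags:
    - --oidc-issuer-url=https://idp.example.com/realms/demo  # placeholder issuer
```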


@@ -7,12 +7,16 @@
As mentioned above, PV and PVC are only abstract concepts; in k8s the concrete storage implementations are provided as plugins, currently including NFS, iSCSI, and cloud-provider-specific storage systems. For more storage implementations, [see the official documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes).
PVs can be provisioned in two ways: statically or dynamically.
This article introduces **NFS storage** as an example of one implementation among the many k8s storage options.
Two kinds of `provisioner` are introduced below; they can provide static or dynamic PVs:
- [nfs-provisioner](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner): NFS storage directory provisioner
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner): local storage directory provisioner
## NFS Storage Directory Provisioner
## Static PV
First we need an NFS server to provide the underlying storage; following the document [nfs-server](../guide/nfs-server.md) we can create one.
### Static PV
- Create a static PV, specifying capacity, access mode, reclaim policy, storage class, etc.
``` bash
@@ -36,7 +40,7 @@ spec:
```
- Create a PVC and it will bind to the above PV (see the test pod example later in this article; a minimal PVC sketch follows)
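For illustration, a minimal PVC sketch that could bind such a static PV, assuming the PV was created with access mode ReadWriteMany and storage class `nfs` (all names and sizes here are illustrative and must match your PV):
``` yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-static-claim   # illustrative name
spec:
  accessModes:
    - ReadWriteMany         # must match the PV's access mode
  storageClassName: nfs     # must match the PV's storageClassName
  resources:
    requests:
      storage: 1Mi          # must not exceed the PV's capacity
```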
## Creating a Dynamic PV
### Creating a Dynamic PV
In a working k8s cluster there are many `PVC` requests, and it would be very inconvenient if an administrator had to create the matching `PV` resource by hand every time; therefore K8s provides several kinds of `provisioner` to create `PV`s dynamically, which not only saves administrator time but also lets `StorageClasses` wrap different storage types for PVCs to choose from.
@@ -59,14 +63,14 @@ nfs_path: "/data/nfs" # change to the actual nfs shared directory
- 2. Create the nfs provisioner
``` bash
$ ezctl setup ${cluster_name} 07
$ dk ezctl setup ${cluster_name} 07
# verify after successful execution
$ kubectl get pod --all-namespaces |grep nfs-client
kube-system nfs-client-provisioner-84ff87c669-ksw95 1/1 Running 0 21m
```
## Verifying Dynamic PV Usage
- 3. Verify dynamic PV usage
There is a test example in the directory clusters/${cluster_name}/yml/nfs-provisioner/
@@ -98,5 +102,30 @@ test-claim Bound pvc-44d34a50-e00b-4f6c-8005-40f5cc54af18 2Mi RWX
```
As shown above, nfs-client automatically created a directory for the PVC at mount time; the `/mnt` mounted in our Pod actually refers to that directory, so the `SUCCESS` file we created under `/mnt` was written into it as well.
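For reference, here is a sketch of what such a test example looks like, reconstructed from the output above (a 2Mi `test-claim` PVC plus a pod that mounts it at `/mnt` and writes a `SUCCESS` file); the storage class name `nfs-client` is an assumption and must match the class created by your provisioner:
``` yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client   # assumption: the class created by the nfs provisioner
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  restartPolicy: "Never"
  containers:
  - name: test-pod
    image: busybox
    # write a marker file into the NFS-backed volume, then exit
    command: ["/bin/sh", "-c", "touch /mnt/SUCCESS && exit 0 || exit 1"]
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/mnt"
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim
```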
# Next Steps
Later, when we need to provide persistent storage for upper-layer applications, we only have to provide a `StorageClass`. Many applications (e.g. efk, jenkins) create the PVCs they need from a `StorageClass` and then mount those PVCs into their Deployment or StatefulSet, as sketched below.
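A hedged sketch of that pattern (all names, images, and sizes are illustrative): a StatefulSet requests storage per replica via `volumeClaimTemplates`, and the provisioner behind the named `StorageClass` creates the PVs automatically:
``` yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo
spec:
  serviceName: demo
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:            # one PVC per replica, provisioned dynamically
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path # or your nfs class, depending on the setup
      resources:
        requests:
          storage: 1Gi
```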
## Local Storage Directory Provisioner
When an application demands high disk I/O performance, local directory storage is a good fit; in particular, a locally attached SSD can be used (note that local disks should be configured with a RAID redundancy policy). Local Path Provisioner makes it easy to use local directory storage in a k8s cluster.
Integrated installation in the kubeasz project:
- 1. Edit the cluster configuration file clusters/${cluster_name}/config.yml
``` bash
... (omitted)
local_path_provisioner_install: "yes"    # change to yes
# default local storage path
local_path_provisioner_dir: "/opt/local-path-provisioner"
```
- 2. Create the local path provisioner
``` bash
$ dk ezctl setup ${cluster_name} 07
# verify after successful execution
$ kubectl get pod --all-namespaces |grep local-path-provisioner
```
- 3. Verify usage (see the sketch below)
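A minimal verification sketch (assumed, mirroring the upstream local-path-provisioner examples): create a PVC against the `local-path` class plus a pod that uses it, then check that the PVC becomes Bound and a volume directory appears under local_path_provisioner_dir on the node:
``` yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
spec:
  accessModes:
    - ReadWriteOnce              # local-path volumes are node-local, RWO only
  storageClassName: local-path
  resources:
    requests:
      storage: 128Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: volume-test
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: volv
      mountPath: /data
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: local-path-pvc
```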


@@ -187,6 +187,13 @@ prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "__prom_chart__"
# kubeapps auto-install; if enabled, local-storage is also installed by default to provide storageClass: "local-path"
kubeapps_install: "no"
kubeapps_install_namespace: "kubeapps"
kubeapps_working_namespace: "default"
kubeapps_storage_class: "local-path"
kubeapps_chart_ver: "__kubeapps_chart__"
# local-storage (local-path-provisioner) auto-install
local_path_provisioner_install: "no"
local_path_provisioner_ver: "__local_path_provisioner__"

ezctl

@@ -162,6 +162,7 @@ function new() {
nfsProvisionerVer=$(grep 'nfsProvisionerVer=' ezdown|cut -d'=' -f2)
pauseVer=$(grep 'pauseVer=' ezdown|cut -d'=' -f2)
promChartVer=$(grep 'promChartVer=' ezdown|cut -d'=' -f2)
kubeappsVer=$(grep 'kubeappsVer=' ezdown|cut -d'=' -f2)
harborVer=$(grep 'HARBOR_VER=' ezdown|cut -d'=' -f2)
registryMirror=true
@@ -181,6 +182,7 @@ function new() {
-e "s/__local_path_provisioner__/$localpathProvisionerVer/g" \
-e "s/__nfs_provisioner__/$nfsProvisionerVer/g" \
-e "s/__prom_chart__/$promChartVer/g" \
-e "s/__kubeapps_chart__/$kubeappsVer/g" \
-e "s/__harbor__/$harborVer/g" \
-e "s/^ENABLE_MIRROR_REGISTRY.*$/ENABLE_MIRROR_REGISTRY: $registryMirror/g" \
-e "s/__metrics__/$metricsVer/g" "clusters/$1/config.yml"

ezdown

@@ -38,6 +38,7 @@ kubeOvnVer=v1.11.5
localpathProvisionerVer=v0.0.24
nfsProvisionerVer=v4.0.2
promChartVer=45.23.0
kubeappsVer=12.4.3
function usage() {
echo -e "\033[33mUsage:\033[0m ezdown [options] [args]"
@@ -90,6 +91,7 @@ available options:
flannel to download images of flannel
kube-ovn to download images of kube-ovn
kube-router to download images of kube-router
kubeapps to download images of kubeapps
local-path-provisioner to download images of local-path-provisioner
network-check to download images of network-check
nfs-provisioner to download images of nfs-provisioner
@@ -469,6 +471,39 @@ function get_extra_images() {
docker push "easzlab.io.local:5000/flannel/flannel-cni-plugin:v1.1.2"
;;
# kubeapps images
kubeapps)
if [[ ! -f "$imageDir/kubeapps_$kubeappsVer.tar" ]];then
docker pull "bitnami/kubeapps-apis:2.7.0-debian-11-r10" && \
docker pull "bitnami/kubeapps-apprepository-controller:2.7.0-scratch-r0" && \
docker pull "bitnami/kubeapps-asset-syncer:2.7.0-scratch-r0" && \
docker pull "bitnami/kubeapps-dashboard:2.7.0-debian-11-r12" && \
docker pull "bitnami/nginx:1.23.4-debian-11-r18" && \
docker pull "bitnami/postgresql:15.3.0-debian-11-r0" && \
docker save -o "$imageDir/kubeapps_$kubeappsVer.tar" \
"bitnami/kubeapps-apis:2.7.0-debian-11-r10" \
"bitnami/kubeapps-apprepository-controller:2.7.0-scratch-r0" \
"bitnami/kubeapps-asset-syncer:2.7.0-scratch-r0" \
"bitnami/kubeapps-dashboard:2.7.0-debian-11-r12" \
"bitnami/nginx:1.23.4-debian-11-r18" \
"bitnami/postgresql:15.3.0-debian-11-r0"
else
docker load -i "$imageDir/kubeapps_$kubeappsVer.tar"
fi
docker tag "bitnami/kubeapps-apis:2.7.0-debian-11-r10" "easzlab.io.local:5000/bitnami/kubeapps-apis:2.7.0-debian-11-r10"
docker tag "bitnami/kubeapps-apprepository-controller:2.7.0-scratch-r0" "easzlab.io.local:5000/bitnami/kubeapps-apprepository-controller:2.7.0-scratch-r0"
docker tag "bitnami/kubeapps-asset-syncer:2.7.0-scratch-r0" "easzlab.io.local:5000/bitnami/kubeapps-asset-syncer:2.7.0-scratch-r0"
docker tag "bitnami/kubeapps-dashboard:2.7.0-debian-11-r12" "easzlab.io.local:5000/bitnami/kubeapps-dashboard:2.7.0-debian-11-r12"
docker tag "bitnami/nginx:1.23.4-debian-11-r18" "easzlab.io.local:5000/bitnami/nginx:1.23.4-debian-11-r18"
docker tag "bitnami/postgresql:15.3.0-debian-11-r0" "easzlab.io.local:5000/bitnami/postgresql:15.3.0-debian-11-r0"
docker push "easzlab.io.local:5000/bitnami/kubeapps-apis:2.7.0-debian-11-r10"
docker push "easzlab.io.local:5000/bitnami/kubeapps-apprepository-controller:2.7.0-scratch-r0"
docker push "easzlab.io.local:5000/bitnami/kubeapps-asset-syncer:2.7.0-scratch-r0"
docker push "easzlab.io.local:5000/bitnami/kubeapps-dashboard:2.7.0-debian-11-r12"
docker push "easzlab.io.local:5000/bitnami/nginx:1.23.4-debian-11-r18"
docker push "easzlab.io.local:5000/bitnami/postgresql:15.3.0-debian-11-r0"
;;
# kube-ovn images
kube-ovn)
if [[ ! -f "$imageDir/kube-ovn_$kubeOvnVer.tar" ]];then

Binary file not shown.


@@ -0,0 +1,24 @@
# https://github.com/bitnami/charts/tree/main/bitnami/kubeapps
- block:
    - name: prepare some dirs
      file: name={{ cluster_dir }}/yml/kubeapps/token state=directory

    - name: create customized kubeapps chart values
      template: src=kubeapps/values.yaml.j2 dest={{ cluster_dir }}/yml/kubeapps/values.yaml

    - name: prepare temporary user tokens
      template: src=kubeapps/{{ item }}.j2 dest={{ cluster_dir }}/yml/kubeapps/token/{{ item }}
      with_items:
      - "kubeapps-admin-token.yaml"
      - "single-namespace-edit-token.yaml"
      - "single-namespace-view-token.yaml"

    - name: install kubeapps with helm
      shell: "{{ base_dir }}/bin/helm upgrade kubeapps --install --create-namespace \
             -n {{ kubeapps_install_namespace }} -f {{ cluster_dir }}/yml/kubeapps/values.yaml \
             {{ base_dir }}/roles/cluster-addon/files/kubeapps-{{ kubeapps_chart_ver }}.tgz"

    - name: create temporary user tokens
      shell: "{{ base_dir }}/bin/kubectl apply -f {{ cluster_dir }}/yml/kubeapps/token/"
  when: 'kubeapps_install == "yes"'


@@ -10,4 +10,4 @@
    - name: create local-storage deployment
      shell: "{{ base_dir }}/bin/kubectl apply -f {{ cluster_dir }}/yml/local-storage/local-path-storage.yaml"
  when: 'local_path_provisioner_install == "yes"'
  when: 'local_path_provisioner_install == "yes" or (kubeapps_install == "yes" and kubeapps_storage_class == "local-path")'


@@ -29,3 +29,6 @@
- import_tasks: network_check.yml
  when: 'network_check_enabled|bool and CLUSTER_NETWORK != "cilium"'

- import_tasks: kubeapps.yml
  when: 'kubeapps_install == "yes"'


@@ -0,0 +1,30 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubeapps-operator
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeapps-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubeapps-operator
  namespace: kube-system
---
apiVersion: v1
kind: Secret
metadata:
  name: kubeapps-admin-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "kubeapps-operator"
type: kubernetes.io/service-account-token


@@ -0,0 +1,31 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubeapps-editor
  namespace: {{ kubeapps_working_namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubeapps-editor
  namespace: {{ kubeapps_working_namespace }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: kubeapps-editor
  namespace: {{ kubeapps_working_namespace }}
---
apiVersion: v1
kind: Secret
metadata:
  name: kubeapps-edit-token
  namespace: {{ kubeapps_working_namespace }}
  annotations:
    kubernetes.io/service-account.name: "kubeapps-editor"
type: kubernetes.io/service-account-token


@@ -0,0 +1,31 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubeapps-viewer
  namespace: {{ kubeapps_working_namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubeapps-viewer
  namespace: {{ kubeapps_working_namespace }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: kubeapps-viewer
  namespace: {{ kubeapps_working_namespace }}
---
apiVersion: v1
kind: Secret
metadata:
  name: kubeapps-view-token
  namespace: {{ kubeapps_working_namespace }}
  annotations:
    kubernetes.io/service-account.name: "kubeapps-viewer"
type: kubernetes.io/service-account-token


@@ -0,0 +1,75 @@
global:
  imageRegistry: "easzlab.io.local:5000"
  # default to use "local-path-provisioner"
  storageClass: "{{ kubeapps_storage_class }}"

## @section Kubeapps packaging options
packaging:
  helm:
    enabled: true
  carvel:
    enabled: false
  flux:
    enabled: false

## @section Frontend parameters
frontend:
  image:
    repository: bitnami/nginx
    tag: 1.23.4-debian-11-r18
  replicaCount: 1
  service:
    type: NodePort

## @section Dashboard parameters
dashboard:
  enabled: true
  image:
    repository: bitnami/kubeapps-dashboard
    tag: 2.7.0-debian-11-r12
  replicaCount: 1

## @section AppRepository Controller parameters
apprepository:
  image:
    repository: bitnami/kubeapps-apprepository-controller
    tag: 2.7.0-scratch-r0
  syncImage:
    repository: bitnami/kubeapps-asset-syncer
    tag: 2.7.0-scratch-r0
  initialRepos:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami
  ## @param apprepository.crontab Default schedule for syncing App repositories
  crontab: "*/10 * * * *"
  watchAllNamespaces: true
  replicaCount: 1

## Auth Proxy configuration for OIDC support
authProxy:
  enabled: false

## @section Other Parameters
clusters:
- name: default
  domain: cluster.local

## @section Database Parameters
postgresql:
  enabled: true
  auth:
    username: "postgres"
    postgresPassword: "Postgres1234!"
    database: assets
    existingSecret: ""
  primary:
    persistence:
      enabled: true
  architecture: standalone

## @section kubeappsapis parameters
kubeappsapis:
  image:
    repository: bitnami/kubeapps-apis
    tag: 2.7.0-debian-11-r10
  replicaCount: 1