Improve display formatting

parent 2248d0efa9
commit 51eab0b1ed
@ -1,6 +1,5 @@

# Awesome Docker

# [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome) [![Join the chat at https://gitter.im/veggiemonk/awesome-docker](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/veggiemonk/awesome-docker?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![Build Status](https://travis-ci.org/veggiemonk/awesome-docker.svg?branch=master)](https://travis-ci.org/veggiemonk/awesome-docker)

https://github.com/veggiemonk/awesome-docker

@ -1,20 +1,10 @@

Awesome-Kubernetes
=======================================================================

[![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)
[![Build Status](https://travis-ci.org/ramitsurana/awesome-kubernetes.svg?branch=master)](https://travis-ci.org/ramitsurana/awesome-kubernetes)
[![License](https://img.shields.io/badge/License-CC%204.0-brightgreen.svg)](http://creativecommons.org/licenses/by-nc/4.0/)

A curated list for awesome kubernetes sources
Inspired by [@sindresorhus' awesome](https://github.com/sindresorhus/awesome)

![k8](https://cloud.githubusercontent.com/assets/8342133/13547481/fcb5ffb0-e2fa-11e5-8f75-555cea5eb7b2.png)

> "Talent wins games, but teamwork and intelligence wins championships."
>
> -- Michael Jordan

Without the help from these [amazing contributors](https://github.com/ramitsurana/awesome-kubernetes/graphs/contributors),
building this awesome-repo would never have been possible. Thank you very much, guys!

@ -37,9 +27,9 @@ _Source:_ [What is Kubernetes](http://kubernetes.io/)

**Kubernetes is known to be a descendant of Google's system BORG**

> The first unified container-management system developed at Google was the system we internally call Borg.
-It was built to manage both long-running services and batch jobs, which had previously been handled by two separate
-systems: Babysitter and the Global Work Queue. The latter’s architecture strongly influenced Borg, but was focused on
-batch jobs; both predated Linux control groups.
+> It was built to manage both long-running services and batch jobs, which had previously been handled by two separate
+> systems: Babysitter and the Global Work Queue. The latter’s architecture strongly influenced Borg, but was focused on
+> batch jobs; both predated Linux control groups.

_Source:_ [Kubernetes Past](http://research.google.com/pubs/archive/44843.pdf)

@ -163,9 +153,9 @@ Useful Articles

* [Kubernetes: Getting Started With a Local Deployment](http://www.jetstack.io/new-blog/2015/7/6/getting-started-with-a-local-deployment)
* [Installation on Centos 7](http://severalnines.com/blog/installing-kubernetes-cluster-minions-centos7-manage-pods-services)
* [Packaging Multiple Resources together](http://blog.arungupta.me/kubernetes-application-package-multiple-resources-together/)
* [An Introduction to Kubernetes](https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes) by [Justin Ellingwood](https://twitter.com/jmellingwood)
* [Scaling Docker with Kubernetes](http://www.infoq.com/articles/scaling-docker-with-kubernetes) by [Carlos Sanchez](https://twitter.com/csanchez)
* [Packaging Multiple Resources together](http://blog.arungupta.me/kubernetes-application-package-multiple-resources-together/)
* [An Introduction to Kubernetes](https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes) by [Justin Ellingwood](https://twitter.com/jmellingwood)
* [Scaling Docker with Kubernetes](http://www.infoq.com/articles/scaling-docker-with-kubernetes) by [Carlos Sanchez](https://twitter.com/csanchez)
* [Creating a Kubernetes Cluster to Run Docker Formatted Container Images](https://access.redhat.com/articles/1353773) by [Chris Negus](https://twitter.com/linuxcricket)
* [Containerizing Docker on Kubernetes !!](https://www.linkedin.com/pulse/containerizing-docker-kubernetes-ramit-surana) by [Ramit Surana](https://twitter.com/ramitsurana)
* [Running Kubernetes Example on CoreOS, Part 1](https://coreos.com/blog/running-kubernetes-example-on-CoreOS-part-1/) by [Kelsey Hightower](https://twitter.com/kelseyhightower)

@ -225,7 +215,7 @@ Cloud Providers

* [Rackspace](https://www.rackspace.com/en-in) - Rackspace
* [Eldarion Cloud](http://eldarion.cloud/)
* [StackPoint Cloud](https://stackpointcloud.com/)

Case Studies
=======================================================================

@ -398,7 +388,7 @@ Related Projects

* [Vault controller](https://github.com/kelseyhightower/vault-controller)
* [kube-lego](https://github.com/jetstack/kube-lego)
* [k8sec](https://github.com/dtan4/k8sec)

## Desktop applications

* [Kubernetic](https://kubernetic.com/)

@ -479,7 +469,7 @@ Related Projects

* [Consul](http://consul.io)
* [Kelsey Hightower Consul](https://github.com/kelseyhightower/consul-on-kubernetes)
* [Bridge between Kubernetes and Consul](https://github.com/Beldur/kube2consul)

## Operating System

* [CoreOS](http://coreos.com)

@ -2,11 +2,11 @@

## Deis Architecture

-![](https://deis.com/docs/workflow/diagrams/Workflow_Overview.png)
+![Workflow overview](../images/workflow-overview.png)

-![](https://deis.com/docs/workflow/diagrams/Workflow_Detail.png)
+![Workflow in detail](../images/workflow-detail.png)

-![](https://deis.com/docs/workflow/diagrams/Application_Layout.png)
+![Application layout](../images/application-layout.png)

## Installing and Deploying Deis

@ -12,7 +12,7 @@ See [here](helm-app.html) for how to use Helm.

Deis Workflow is a Kubernetes-based PaaS management platform that further simplifies application packaging, deployment, and service discovery.

-![](https://deis.com/docs/workflow/diagrams/Git_Push_Flow.png)
+![Deis workflow](../images/git-push-flow.png)

## Operator

@ -4,9 +4,9 @@ Secrets solve the configuration problem for sensitive data such as passwords, tokens, and keys, without the need to

There are three types of Secret:

-* Service Account: used to access the Kubernetes API; created automatically by Kubernetes and mounted automatically into Pods at `/run/secrets/kubernetes.io/serviceaccount`;
-* Opaque: a base64-encoded Secret, used to store passwords, keys, and similar data;
-* `kubernetes.io/dockerconfigjson`: used to store credentials for a private docker registry.
+* **Service Account**: used to access the Kubernetes API; created automatically by Kubernetes and mounted automatically into Pods at `/run/secrets/kubernetes.io/serviceaccount`;
+* **Opaque**: a base64-encoded Secret, used to store passwords, keys, and similar data;
+* **kubernetes.io/dockerconfigjson**: used to store credentials for a private docker registry.

## Opaque Secret
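The data fields of an Opaque Secret must be base64-encoded. As a hedged illustration of how such values are produced: `MWYyZDFlMmU2N2Rm` (visible in the excerpt below) is the base64 encoding of `1f2d1e2e67df`, and `YWRtaW4=` would be the encoding of an assumed username `admin`:

```bash
# Base64-encode the values before placing them in the Secret manifest
echo -n "admin" | base64          # => YWRtaW4=
echo -n "1f2d1e2e67df" | base64   # => MWYyZDFlMmU2N2Rm
```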
@ -21,7 +21,7 @@ MWYyZDFlMmU2N2Rm

secrets.yml

-```yml
+```Yaml
apiVersion: v1
kind: Secret
metadata:

@ -41,7 +41,7 @@ data:

### Mounting a Secret into a Volume

-```yml
+```Yaml
apiVersion: v1
kind: Pod
metadata:

@ -68,7 +68,7 @@ spec:

### Exposing a Secret as Environment Variables

-```yml
+```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:

@ -103,7 +103,7 @@ spec:

## kubernetes.io/dockerconfigjson

-A secret for docker registry authentication can be created directly with the kubectl command:
+A secret for docker registry authentication can be created directly with the `kubectl` command:

```sh
$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
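# Hedged illustration, not part of the original excerpt: the secret created above
# is typically consumed through imagePullSecrets, for example by attaching it to
# the default service account so Pods in the namespace can pull from the registry:
$ kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'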
@ -9,7 +9,7 @@ From the very beginning of its design, Kubernetes gave full consideration to service discovery and load balancing for containers

## Service

-![](media/14735737093456.jpg)
+![](../images/service.jpg)

A Service is an abstraction over a group of Pods that provide the same functionality, and it gives them a single entry point. With Services, applications can easily achieve service discovery and load balancing, as well as zero-downtime upgrades. A Service selects its backends by label, and is usually paired with a Replication Controller or a Deployment to keep the backend containers running properly.
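As a hedged sketch of what such a Service looks like (the name and the run=nginx label are illustrative, not taken from the original text):

```bash
# Create a Service that load-balances port 80 across Pods labeled run=nginx
kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
```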
@ -59,7 +59,7 @@ spec:

servicePort: 80
```

-Note that an Ingress does not create a load balancer by itself; an ingress controller must be running in the cluster to manage load balancers according to the Ingress definitions. The community currently provides reference implementations for nginx and GCE.
+**Note:** An Ingress does not create a load balancer by itself; an ingress controller must be running in the cluster to manage load balancers according to the Ingress definitions. The community currently provides reference implementations for nginx and GCE.

Traefik provides an easy-to-use Ingress Controller; see <https://docs.traefik.io/user-guide/kubernetes/> for how to use it.

@ -8,7 +8,7 @@

The design philosophy and functionality of Kubernetes in fact form a layered architecture similar to Linux, as shown in the figure below

-![](../images/14937095836427.jpg)
+![Layered architecture diagram](../images/kubernetes-layers-arch.jpg)

* Core layer: the core functionality of Kubernetes, exposing APIs outward for building higher-level applications and providing a plugin-based application execution environment inward
* Application layer: deployment (stateless applications, stateful applications, batch jobs, cluster applications, etc.) and routing (service discovery, DNS resolution, etc.)

@ -20,7 +20,7 @@ The design philosophy and functionality of Kubernetes in fact form a layered architecture similar to Linux,

### API Design Principles

-For a cloud computing system, the system API actually holds the commanding position in the system design. As noted earlier in this article, every time the K8s cluster system supports a new feature or introduces a new technology, a corresponding API object is introduced to support management operations for that feature; understanding and mastering the API is like grabbing the K8s system by the nose. The K8s system API is designed according to the following principles:
+For a cloud computing system, the system API actually holds the commanding position in the system design. As noted earlier in this article, every time the kubernetes cluster system supports a new feature or introduces a new technology, a corresponding API object is introduced to support management operations for that feature; understanding and mastering the API is like grabbing the kubernetes system by the nose. The Kubernetes system API is designed according to the following principles:

1. **All APIs should be declarative.** As noted above, declarative operations, unlike imperative ones, are stable under repetition (see the sketch after this excerpt), which matters in a distributed environment where data is easily lost or duplicated. Declarative operations are also easier for users; they let the system hide implementation details from the user, and hiding those details preserves the possibility of continuing to optimize them later. In addition, a declarative API implies that all API objects are nouns, e.g. Service and Volume; these nouns describe the target distributed objects the user expects to obtain.
2. **API objects should complement each other and be composable.** This effectively encourages API objects to meet the object-oriented design goal of "high cohesion, loose coupling", decomposing business concepts sensibly so that the resulting objects are reusable. In fact, a distributed system management platform such as K8s is itself a kind of business system, except that its business is scheduling and managing container services.
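A minimal illustration of why declarative operations are stable under repetition (the manifest file name is hypothetical, not from the original text):

```bash
# Imperative: fails if the Deployment already exists
kubectl create -f nginx-deployment.yaml
# Declarative: describes the desired state and is safe to repeat; the API server reconciles
kubectl apply -f nginx-deployment.yaml
```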
@ -30,18 +30,18 @@ spec:

restartPolicy: OnFailure
```

-```
+```Bash
$ kubectl create -f cronjob.yaml
cronjob "hello" created
```

Of course, a CronJob can also be created with `kubectl run`:

-```
+```bash
kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"
```

-```
+```bash
$ kubectl get cronjob
NAME      SCHEDULE      SUSPEND   ACTIVE    LAST-SCHEDULE
hello     */1 * * * *   False     0         <none>

@ -133,7 +133,7 @@ spec:

Besides DaemonSet, static Pods can also be used to run a specified Pod on every machine; this requires kubelet to be started with a manifest directory:

-```
+```bash
kubelet --pod-manifest-path=<the directory>
```
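A hedged sketch of placing a static Pod into that directory (the path and the Pod definition are illustrative; kubelet picks up manifests in the directory automatically):

```bash
# Assuming kubelet was started with --pod-manifest-path=/etc/kubernetes/manifests
cat >/etc/kubernetes/manifests/static-web.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
EOF
```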
@ -30,7 +30,7 @@ spec:

restartPolicy: Never
```

-```
+```bash
$ kubectl create -f ./job.yaml
job "pi" created
$ pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath={.items..metadata.name})

@ -1,6 +1,6 @@

# StatefulSet

-StatefulSet is designed to solve the problem of stateful services (whereas Deployments and ReplicaSets are designed for stateless services). Its use cases include
+StatefulSet is designed to solve the problem of stateful services (whereas Deployments and ReplicaSets are designed for stateless services). Its use cases include:

- Stable persistent storage, i.e. a Pod can still reach the same persisted data after being rescheduled, implemented with PVCs
- Stable network identity, i.e. a Pod keeps its PodName and HostName after being rescheduled, implemented with a Headless Service (a Service without a Cluster IP); a sketch follows this excerpt
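A hedged sketch of such a Headless Service (names are illustrative; `clusterIP: None` is what makes the Service headless):

```bash
kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None      # headless: no Cluster IP; DNS resolves directly to the Pod IPs
  selector:
    app: nginx
  ports:
  - port: 80
EOF
```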
@ -2,7 +2,7 @@

## Kubernetes Cluster Architecture

-![](../ha/ha.png)
+![High-availability diagram](../images/ha.png)

### etcd cluster

@ -8,7 +8,7 @@ Starting with Kubernetes 1.5, clusters deployed with `kops` or `kube-up.sh` will automatically

as shown in the figure below

-![](ha.png)
+![High-availability diagram](../images/ha.png)

## etcd cluster

@ -4,7 +4,7 @@

By changing the number of replicas in a Deployment, an application can be scaled out or scaled in dynamically:

-![scale](media/scale.png)
+![Scaling](../images/scale.png)

The containers added by scaling out automatically join the service, and containers reclaimed by scaling in are automatically removed from the service.
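A hedged example of doing the same from the command line (the nginx-app Deployment name comes from the surrounding context; the replica count is illustrative):

```bash
kubectl scale deployment nginx-app --replicas=5
```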
@ -22,13 +22,13 @@ nginx-app 3 3 3 3 10m

```
kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2
```

-![update1](media/update1.png)
+![update1](../images/update1.png)

-![update2](media/update2.png)
+![update2](../images/update2.png)

-![update3](media/update3.png)
+![update3](../images/update3.png)

-![update4](media/update4.png)
+![update4](../images/update4.png)

If a failure or a configuration error is discovered during the rolling update, the update can be rolled back at any time:
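The excerpt is cut off here; as a hedged sketch (not shown in the original excerpt), a rolling update started this way is typically reverted with the `--rollback` flag:

```bash
kubectl rolling-update frontend-v1 frontend-v2 --rollback
```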
@ -1,6 +1,6 @@

# Kubernetes cluster

-![](architecture.png)
+![Cluster architecture diagram](../images/architecture.png)

A Kubernetes cluster is composed of distributed storage (etcd), control nodes (controllers), and worker nodes (Nodes).

@ -10,7 +10,7 @@

## Cluster Federation

-![](federation.png)
+![Cluster federation architecture](../images/federation.png)

## Single-node Kubernetes

@ -4,23 +4,23 @@

A Pod is a group of closely related containers that share Volumes and a network namespace; it is the basic unit of scheduling in Kubernetes. The design idea behind Pods is to let multiple containers in a single Pod share the network and filesystem, so that a service can be composed through simple and efficient means such as inter-process communication and file sharing.

-![pod](media/pod.png)
+![Pod structure diagram](../images/pod.png)

## Node

-A Node is the host where Pods actually run; it can be physical machine or a virtual machine. To manage Pods, every Node must run at least a container runtime (such as docker or rkt), `kubelet`, and `kube-proxy`.
+A Node is the host where Pods actually run; it can be a physical machine or a virtual machine. To manage Pods, every Node must run at least a container runtime (such as docker or rkt), `kubelet`, and `kube-proxy`.

-![node](media/node.png)
+![Node structure diagram](../images/node.png)

## Service

-A Service is an abstraction of an application service; it provides load balancing and service discovery for the application through labels. A Service exposes a single, unified access interface, so external consumers do not need to know how the backend containers are run.
+A Service is an abstraction of an application service; it provides load balancing and service discovery for the application through `labels`. A Service exposes a single, unified access interface, so external consumers do not need to know about the containers running in the backend.

-![](media/14731220608865.png)
+![Service structure diagram](../images/service-arch.png)

## Label

-A Label is a tag used to identify Kubernetes objects, attached to objects as key/value pairs. Labels do not provide uniqueness; in practice many objects (such as Pods) often carry the same label to mark a particular application.
+A Label is a tag used to identify Kubernetes objects, attached to objects as key/value pairs. Labels do not provide uniqueness; in practice many objects (such as Pods) often carry the same label to identify a particular application.

Once Labels are defined, other objects can use a Label Selector to select a group of objects carrying the same labels (for example, ReplicaSets and Services use labels to select a group of Pods). Label Selectors support the following forms:
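The enumeration of selector forms continues beyond this excerpt; as a hedged illustration of how selectors are used in practice (label names and values are examples, not taken from the original):

```bash
# Equality-based selector
kubectl get pods -l app=nginx
# Set-based selector
kubectl get pods -l 'environment in (production, qa)'
```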
@ -30,7 +30,7 @@ Label定义好后其他对象可以使用Label Selector来选择一组相同labe
|
|||
|
||||
## Annotations
|
||||
|
||||
Annotations是key/value形式附加于对象的注解。不同于Labels用于标志和选择对象,Annotations则是用来记录一些附加信息,以便于外部工具进行查找。
|
||||
Annotations是key/value形式附加于对象的注解。不同于Labels用于标识和选择对象,Annotations则是用来记录一些附加信息,以便于外部工具进行查找。
|
||||
|
||||
## Namespace
|
||||
|
||||
|
|
|
@ -12,8 +12,8 @@ Kubernetes is Google's open-source container cluster management system, built on Google's years of large-scale

Kubernetes is evolving very rapidly and has become the leader in the container orchestration space.

-![](media/14731186543149.jpg)
+![Kubernetes development velocity](../images/kubernetes-velocity.jpg)

## Kubernetes Architecture

-![](architecture.png)
+![Kubernetes architecture](../images/architecture.png)

@ -4,13 +4,13 @@

[cAdvisor](https://github.com/google/cadvisor) is a container monitoring tool from Google and is also the container resource collector built into kubelet. It automatically collects the CPU, memory, network, and filesystem usage of the containers on the local machine and exposes cAdvisor's native API (the default port is `--cadvisor-port=4194`).

-![](images/14842107270881.png)
+![cAdvisor monitoring diagram](../images/cadvisor.png)
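As a hedged illustration (the node address is a placeholder), cAdvisor's data can also be queried directly on a node, for example its Prometheus-format metrics endpoint:

```bash
curl http://<node-ip>:4194/metrics
```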
## InfluxDB and Grafana

[InfluxDB](https://www.influxdata.com/time-series-platform/influxdb/) is an open-source distributed time-series, event, and metrics database, while [Grafana](http://grafana.org/) is a dashboard for InfluxDB that provides powerful charting and visualization.

-![](images/14842114123604.jpg)
+![Grafana UI](../images/grafana-ui.jpg)

## Heapster

@ -18,7 +18,7 @@

Heapster collects node and container resource usage from the API exposed by kubelet:

-![](images/14842118198998.png)
+![Heapster architecture](../images/heapster-arch.png)

In addition, Heapster's `/metrics` API provides data in Prometheus format.

@ -42,13 +42,13 @@ InfluxDB is running at https://kubernetes-master/api/v1/proxy/namespaces/kube-sy

[Prometheus](https://prometheus.io) is another monitoring and time-series database, and it also provides alerting. It offers a powerful query language and an HTTP API, and supports exporting data to Grafana for display.

-Monitoring Kubernetes with Prometheus requires configuring the data source; a simple example is [prometheus.yml](prometheus.txt):
+Monitoring Kubernetes with Prometheus requires configuring the data source; a simple example is [prometheus.yml](../manifests/prometheus/prometheus.yml):

-```
-kubectl create -f http://feisky.xyz/kubernetes/monitor/prometheus.txt
+```bash
+kubectl create -f http://feisky.xyz/kubernetes/monitor/prometheus.yml
```

-![](images/14842125295113.jpg)
+![Prometheus UI](../images/prometheus-ui.jpg)

## Other Container Monitoring Systems

@ -2,7 +2,7 @@

Container Runtime Interface (CRI) is one of the main projects of Kubelet 1.5/1.6. It redefines the Kubelet Container Runtime API, splitting what used to be a purely Pod-level API into APIs oriented around Sandboxes and Containers, and separating image management and the container engine into different services.

-![](cri.png)
+![Container Runtime Interface](../images/cri.png)

Design discussion and development of CRI started as early as v1.4, and the first test release shipped in v1.5.
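A hedged sketch of how kubelet is pointed at a CRI implementation (the socket path is illustrative and differs per runtime):

```bash
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///var/run/cri.sock
```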
@ -16,5 +16,5 @@ Design discussion and development of CRI started as early as v1.4, with the first
- 4) Runc: https://github.com/kubernetes-incubator/cri-o
- 5) Mirantis: https://github.com/Mirantis/virtlet
- 6) Cloud Foundry: https://github.com/cloudfoundry/garden
-- 7) Infranetes: not opensourced yet.
+- 7) Infranetes: not open sourced yet.

@ -12,7 +12,7 @@ Kubernetes has a rich set of network plugins, making it easy for users to customize the network they need.

Install CNI:

-```
+```Bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes

|

Configure the CNI bridge plugin:

-```
-mkdir -p /etc/cni/net.d
+```bash
+mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-mynet.conf <<-EOF
{
    "cniVersion": "0.3.0",