# Node rules:

## Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files)
.grunt

## Dependency directory
## Commenting this out is preferred by some people, see
## https://docs.npmjs.com/misc/faq#should-i-check-my-node_modules-folder-into-git
node_modules

# Book build output
_book

# eBook build output
*.epub
*.mobi
*.pdf
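These rules can be sanity-checked with `git check-ignore` in a throwaway repository; a minimal sketch (assumes `git` is available; the probed paths are just examples, not files from this repo):

```shell
# Create a scratch repo with the ignore rules above and probe a few paths.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf '.grunt\nnode_modules\n_book\n*.epub\n*.mobi\n*.pdf\n' > .gitignore
git check-ignore -q book.pdf && echo "book.pdf ignored"
git check-ignore -q node_modules/lodash/index.js && echo "node_modules ignored"
git check-ignore -q SUMMARY.md || echo "SUMMARY.md not ignored"
```

`git check-ignore` matches pathnames against the rules even when the files do not exist, so this works in an empty repository.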
LICENSE

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.
BOOK_NAME := kubernetes-handbook
BOOK_OUTPUT := _book

.PHONY: build
build:
	gitbook build . $(BOOK_OUTPUT)

.PHONY: serve
serve:
	gitbook serve . $(BOOK_OUTPUT)

.PHONY: epub
epub:
	gitbook epub . $(BOOK_NAME).epub

.PHONY: pdf
pdf:
	gitbook pdf . $(BOOK_NAME).pdf

.PHONY: mobi
mobi:
	gitbook mobi . $(BOOK_NAME).mobi

.PHONY: install
install:
	npm install gitbook-cli -g
	gitbook install

.PHONY: clean
clean:
	rm -rf $(BOOK_OUTPUT)

.PHONY: help
help:
	@echo "Help for make"
	@echo "make         - Build the book"
	@echo "make build   - Build the book"
	@echo "make serve   - Serve the book on localhost:4000"
	@echo "make install - Install gitbook and plugins"
	@echo "make epub    - Build epub book"
	@echo "make pdf     - Build pdf book"
	@echo "make clean   - Remove generated files"
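Targets like these can be inspected before `gitbook-cli` is installed by doing a dry run: `make -n` prints the commands a target would execute without running them. A minimal sketch using an illustrative copy (`Makefile.example`) of the `build` target above:

```shell
# Write a one-target copy of the Makefile (recipe line needs a real tab, hence printf).
printf 'BOOK_NAME := kubernetes-handbook\nBOOK_OUTPUT := _book\n\n.PHONY: build\nbuild:\n\tgitbook build . $(BOOK_OUTPUT)\n' > Makefile.example
# Dry run: prints "gitbook build . _book" without invoking gitbook.
make -n -f Makefile.example build
```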
[opsnull](http://github.com/opsnull)

[godliness](https://github.com/godliness/)
- [9.0 Kubernetes Domain Applications]()
- [10.0 Issue Log](issues.md)
# Kubernetes Dashboard

Deploying the Kubernetes Dashboard is straightforward; just run

```
kubectl create -f https://git.io/kube-dashboard
```

After a short wait, the dashboard is created:

```
$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   10.101.211.212   <nodes>       80:32729/TCP   1m
$ kubectl -n kube-system describe service kubernetes-dashboard
Name:              kubernetes-dashboard
Namespace:         kube-system
Labels:            app=kubernetes-dashboard
Annotations:       <none>
Selector:          app=kubernetes-dashboard
Type:              NodePort
IP:                10.101.211.212
Port:              <unset>  80/TCP
NodePort:          <unset>  32729/TCP
Endpoints:         10.244.1.3:9090
Session Affinity:  None
Events:            <none>
```

It can then be accessed at `http://nodeIP:32729`.
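Since the NodePort is assigned dynamically, it is handy to extract it rather than read it by eye. A minimal sketch that parses a saved copy of the `describe` output shown above (against a live cluster you would pipe the `kubectl describe` command directly; `nodeIP` stays a placeholder):

```shell
# Saved copy of the service description from above.
describe_output='Type:              NodePort
IP:                10.101.211.212
Port:              <unset>  80/TCP
NodePort:          <unset>  32729/TCP'

# Pull the port number off the NodePort: line, dropping the /TCP suffix.
node_port=$(printf '%s\n' "$describe_output" | awk '$1 == "NodePort:" { sub(/\/TCP/, "", $3); print $3 }')
echo "http://nodeIP:${node_port}"
```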

## https

Normally it is recommended to run the Dashboard service over HTTPS. Before accessing it, the client certificate needs to be imported into the system:

```
openssl pkcs12 -export -in apiserver-kubelet-client.crt -inkey apiserver-kubelet-client.key -out kube.p12
curl -sSL -E ./kube.p12:password -k https://nodeIP:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
```

Once kube.p12 is imported into the system, the Dashboard can be accessed from a browser. Note that if nodeIP is not in the certificate's CN, a hosts mapping is needed.
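The PKCS#12 flow above can be tried end-to-end without a cluster. A minimal sketch with a throwaway self-signed certificate (`client.key`, `client.crt`, and `demo.p12` are illustrative names, not files from a real cluster):

```shell
# Generate a throwaway client key/cert pair (illustrative names).
openssl req -x509 -newkey rsa:2048 -nodes -keyout client.key -out client.crt \
  -subj "/CN=demo-client" -days 1
# Bundle them as PKCS#12, as done for kube.p12 above.
openssl pkcs12 -export -in client.crt -inkey client.key \
  -out demo.p12 -passout pass:password
# Verify the bundle can be read back with the export password.
openssl pkcs12 -in demo.p12 -info -noout -passin pass:password && echo "p12 OK"
```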
# Elasticsearch Fluentd Kibana (EFK)

For the configuration files, see <https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch>.
# Heapster

```
git clone https://github.com/kubernetes/heapster
cd heapster
kubectl create -f deploy/kube-config/influxdb/
```
# Kubernetes Addons

- [Dashboard](dashboard.html)
- [Heapster](heapster.html)
- [EFK](efk.html)
# Awesome Docker [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome) [![Join the chat at https://gitter.im/veggiemonk/awesome-docker](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/veggiemonk/awesome-docker?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![Build Status](https://travis-ci.org/veggiemonk/awesome-docker.svg?branch=master)](https://travis-ci.org/veggiemonk/awesome-docker)

https://github.com/veggiemonk/awesome-docker

> A curated list of Docker resources and projects

Inspired by [@sindresorhus](https://github.com/sindresorhus)' [awesome][sindresorhus] and improved by these **[amazing contributors](https://github.com/veggiemonk/awesome-docker/graphs/contributors)**.

It's now a GitHub project because it's considerably easier for other people to edit, fix and expand on Docker using GitHub. Just click [README.md][editREADME] to submit a [pull request][editREADME].
If this list is not complete, you can [contribute][editREADME] to make it so.

> **Please** help organize these resources so that they are _easy to find_ and _understand_ for newcomers. See how to **[Contribute](https://github.com/veggiemonk/awesome-docker/blob/master/CONTRIBUTING.md)** for tips!

#### *If you see a link here that is not (or no longer) a good fit, you can fix it by submitting a [pull request][editREADME] to improve this file. Thank you!*

The creators and maintainers of this list do not receive and should not receive any form of payment to accept a change made by any contributor. The goal of this repo is to index articles, learning materials and projects, not to advertise for profit. **All pull requests are merged by default** and removed if inappropriate or unavailable, or fixed when necessary.

All the links are monitored and tested with [awesome_bot](https://github.com/dkhamsing/awesome_bot) made by [@dkhamsing](https://github.com/dkhamsing).
# What is Docker ?

> Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.

_Source:_ [What is Docker](https://www.docker.com/what-docker)

# Where to start ?

* [10-minute Interactive Tutorial](https://docs.docker.com/docker-for-mac/)
* [Docker Training](http://training.docker.com/)
* Read this complete article: [Basics – Docker, Containers, Hypervisors, CoreOS](http://etherealmind.com/basics-docker-containers-hypervisors-coreos/)
* Watch the video: [Docker for Developers][docker4dev] (54:26) by [@jpetazzo][jpetazzo]
* [Docker Jumpstart](https://github.com/odewahn/docker-jumpstart/): a quick introduction
* [Docker Curriculum](http://prakhar.me/docker-curriculum/): A comprehensive tutorial for getting started with Docker. Teaches how to use Docker and deploy dockerized apps on AWS with Elastic Beanstalk and Elastic Container Service.
* [Install Docker on your machine](docker-cheat-sheet#installation) and play with a few [Useful Images](#useful-images)
* Try [Panamax: Docker Management for Humans][panamax.io]. It installs a CoreOS VM with VirtualBox and has a nice front end.
* [Install Docker Toolbox](https://www.docker.com/products/docker-toolbox) Docker Toolbox is an installer to quickly and easily install and set up a Docker environment on your computer. Available for both Windows and Mac, the Toolbox installs Docker Client, Machine, Compose (Mac only), Kitematic and VirtualBox.
* Check out: [Docker Cheat Sheet][docker-cheat-sheet] by [@wsargent][wsargent] __MUST SEE__
* [Project Web Dev][projwebdev]: (Article series) How to create your own website based on Docker
* [Docker Containers on the desktop][jessblog] by [@jfrazelle][jfrazelle] The **funniest way** to learn about docker! (Tip: check out her [dotfiles][jfrazelledotfiles] and her [dockerfiles][jfrazelledockerfiles])
* [Container Hacks and Fun Images][jessvid] by [@jfrazelle][jfrazelle] @ DockerCon 2015 **MUST WATCH VIDEO** (38:50)
* [Learn Docker](https://github.com/dwyl/learn-docker) Full environment set up, screenshots, step-by-step tutorial and more resources (video, articles, cheat sheets) by [@dwyl](https://github.com/dwyl)
* [Docker Caveats](http://docker-saigon.github.io/post/Docker-Caveats/) What You Should Know About Running Docker In Production (written 11 APRIL 2016) __MUST SEE__
* [How to Whale](https://howtowhale.com/) Learn Docker in your web browser, no setup or installation required.
# MENU

- [What is Docker ?](#what-is-docker-)
- [Where to start ?](#where-to-start-)
- [MENU](#menu)
- [Useful Articles](#useful-articles)
  - [Main Resources](#main-resources)
  - [General Articles](#general-articles)
  - [Deep Dive](#deep-dive)
  - [Networking](#networking)
  - [Metal](#metal)
  - [Multi-Server](#multi-server)
  - [Cloud Infrastructure](#cloud-infrastructure)
  - [Good Tips](#good-tips)
  - [Newsletter](#newsletter)
  - [Continuous Integration](#continuous-integration)
  - [Optimizing Images](#optimizing-images)
  - [Service Discovery](#service-discovery)
  - [Security](#security)
  - [Performances](#performances)
  - [Raspberry Pi & ARM](#raspberry-pi--arm)
  - [Other](#other)
- [Books](#books)
- [Tools](#tools)
  - [Terminal User Interface](#terminal-user-interface)
  - [Dev Tools](#dev-tools)
  - [Continuous Integration / Continuous Delivery](#continuous-integration--continuous-delivery)
  - [Deployment](#deployment)
    - [Hosting for repositories (registries)](#hosting-for-repositories-registries)
    - [Hosting for containers](#hosting-for-containers)
  - [Reverse Proxy](#reverse-proxy)
  - [Web Interface](#web-interface)
  - [Local Container Manager](#local-container-manager)
  - [Volume management and plugins](#volume-management-and-plugins)
  - [Useful Images](#useful-images)
  - [Dockerfile](#dockerfile)
  - [Storing Images and Registries](#storing-images-and-registries)
  - [Monitoring](#monitoring)
  - [Networking](#networking)
  - [Logging](#logging)
  - [Deployment and Infrastructure](#deployment-and-infrastructure)
  - [PaaS](#paas)
  - [Remote Container Manager / Orchestration](#remote-container-manager--orchestration)
  - [Security](#security)
  - [Service Discovery](#service-discovery)
  - [Metadata](#metadata)
- [Slides](#slides)
- [Videos](#videos)
  - [Main Account](#main-account)
  - [Useful videos](#useful-videos)
- [Interactive Learning Environments](#interactive-learning-environments)
- [Interesting Twitter Accounts](#interesting-twitter-accounts)
  - [People](#people)
# Useful Articles

## Main Resources

* [Docker Weekly](https://blog.docker.com/docker-weekly-archives/) Huge resource
* [Docker Cheat Sheet][docker-cheat-sheet] by [@wsargent][wsargent] __MUST SEE__
* [Docker Printable Refcard][docker-quick-ref] by [@dimonomid][dimonomid]
* [CenturyLink Labs](https://labs.ctl.io/category/docker/)
* [Valuable Docker Links](http://www.nkode.io/2014/08/24/valuable-docker-links.html) Very complete
* [Docker Ecosystem](https://www.mindmeister.com/389671722/docker-ecosystem) (Mind Map) __MUST SEE__
* [Docker Ecosystem](http://comp.photo777.org/wp-content/uploads/2015/09/Docker-ecosystem-8.5.1.pdf) (PDF) __MUST SEE__ find it on [blog](http://comp.photo777.org/docker-ecosystem/) by Bryzgalov Peter.
* [Blog](https://blog.jessfraz.com/) of [@frazelledazzell][jfrazelle]
* [Blog](http://jpetazzo.github.io/) of [@jpetazzo][jpetazzo]
* [Blog](http://progrium.com/blog/) of [@progrium][progrium]
* [Blog](http://jasonwilder.com/) of [@jwilder][jwilder]
* [Blog](http://crosbymichael.com/) of [@crosbymichael][crosbymichael]
* [Blog](http://gliderlabs.com/blog/) of [@gliderlabs][gliderlabs]
* [Blog](http://sebgoa.blogspot.be/) of [@sebgoa][sebgoa]
* [Blog](https://blog.codeship.com/) of [@codeship](https://github.com/codeship)
* [Digital Ocean Community](https://www.digitalocean.com/community/search?q=docker&type=tutorials)
* [Container42](http://container42.com/)
* [Container solutions](http://container-solutions.com/blog/)
* [DockerOne](http://dockone.io/) Docker Community (in Chinese) by [@LiYingJie](http://dockone.io/people/%E6%9D%8E%E9%A2%96%E6%9D%B0)
* [Project Web Dev][projwebdev]: (Article series) How to create your own website based on Docker
* [Docker vs. VMs? Combining Both for Cloud Portability Nirvana](http://www.rightscale.com/blog/cloud-management-best-practices/docker-vs-vms-combining-both-cloud-portability-nirvana)
* [Docker Containers on the desktop][jessblog] by [@jfrazelle][jfrazelle] The **funniest way** to learn about docker! (Tip: check out her [dotfiles][jfrazelledotfiles] and her [dockerfiles][jfrazelledockerfiles])
* [Awesome Linux Container](https://github.com/Friz-zy/awesome-linux-containers) more general about containers than this repo, by [@Friz-zy](https://github.com/Friz-zy).
## General Articles

* [Getting Started with Docker](https://serversforhackers.com/getting-started-with-docker) by [@fideloper](https://github.com/fideloper) -- [Servers For Hackers](https://serversforhackers.com/editions) is a valuable resource. At some point, every programmer finds themselves needing to know their way around a server.
* [What is Docker and how do you monitor it?](http://axibase.com/docker-monitoring/)
* [How to Use Docker on OS X: The Missing Guide](https://www.viget.com/articles/how-to-use-docker-on-os-x-the-missing-guide)
* [Docker for (Java) Developers](https://ro14nd.de/Docker-for-Developers)
* [Deploying NGINX with Docker](https://www.nginx.com/blog/deploying-nginx-nginx-plus-docker/)
* [Eight Docker Development Patterns](http://hokstad.com/docker/patterns)
* [Rails Development Environment for OS X using Docker](https://allenan.com/docker-rails-dev-environment-for-osx/)
* [Logging on Docker: What You Need to Know](https://dzone.com/articles/logging-docker-what-you-need) + see the [video][loggingDocker] (~50min)
* [Comparing Five Monitoring Options for Docker](http://rancher.com/comparing-monitoring-options-for-docker-deployments/)
* [Minimalistic data-only container for Docker Compose](http://dockermeetupsinbordeaux.github.io/docker-compose/data-container/2015/03/01/minimalistic-docker-data-container.html) (written Mar 1, 2015)
* [Running Docker Containers with Systemd](http://container-solutions.com/running-docker-containers-with-systemd/)
* [Dockerizing Flask With Compose and Machine - From Localhost to the Cloud](https://realpython.com/blog/python/dockerizing-flask-with-compose-and-machine-from-localhost-to-the-cloud/) -- [GitHub](https://github.com/realpython/orchestrating-docker) Learn how to deploy an application using Docker Compose and Docker Machine (written 17 April 2015)
* [Why and How to use Docker for Development](https://medium.com/iron-io-blog/why-and-how-to-use-docker-for-development-a156c1de3b24) (written 28 APR 2015)
* [Automating Docker Logging: ElasticSearch, Logstash, Kibana, and Logspout](https://nathanleclaire.com/blog/2015/04/27/automating-docker-logging-elasticsearch-logstash-kibana-and-logspout/) (written 27 APR 2015)
* [Docker Host Volume Synchronization](http://oliverguenther.de/2015/05/docker-host-volume-synchronization/) (written 1 JUN 2015)
* [From Local Development to Remote Deployment with Docker Machine and Compose](https://developer.rackspace.com/blog/dev-to-deploy-with-docker-machine-and-compose/) (written 2 JUL 2015)
* [Docker: Build, Ship and Run Any App, Anywhere](http://delftswa.github.io/chapters/docker/index.html) by [Martijn Dwars](https://github.com/MartijnDwars), [Wiebe van Geest](https://github.com/wrvangeest), [Rik Nijessen](https://github.com/gewoonrik), and [Rick Wieman](https://github.com/RickWieman) from [Delft University of Technology](http://www.tudelft.nl/) (written 2 JUL 2015)
* [Joining the Docker Ship](http://thenewstack.io/joining-the-docker-ship-and-go/) Learn how to contribute to docker (written 9 JUL 2015)
* [Continuous Deployment with Gradle and Docker](https://github.com/gesellix/pipeline-with-gradle-and-docker/blob/master/README.md) Describes a complete pipeline from source to production deploy (includes a complete Spring Boot example project) by [@gesellix][gesellix]
* [Containerization and the PaaS Cloud](https://www.computer.org/cms/Computer.org/ComputingNow/issues/2015/09/mcd2015030024.pdf) -- This article discusses the requirements that arise from having to facilitate applications through distributed multicloud platforms.
* [Docker for Development: Common Problems and Solutions](https://medium.com/@rdsubhas/docker-for-development-common-problems-and-solutions-95b25cae41eb) by [@rdsubhas](https://github.com/rdsubhas)
* [Docker Adoption Data](https://www.datadoghq.com/docker-adoption/) A study by Datadog on real-world Docker usage statistics and deployment patterns.
* [How to monitor Docker](https://www.datadoghq.com/blog/the-docker-monitoring-problem/) (4-part series)
* [Using Ansible with Docker Machine to Bootstrap Host Nodes](https://nathanleclaire.com/blog/2015/11/10/using-ansible-with-docker-machine-to-bootstrap-host-nodes/) by [@nathanleclaire](https://github.com/nathanleclaire)
* [Swarm v. Fleet v. Kubernetes v. Mesos](https://www.oreilly.com/ideas/swarm-v-fleet-v-kubernetes-v-mesos) Comparing different orchestration tools. (written OCT 2015)
* [The Shortlist of Docker Hosting](https://blog.codeship.com/the-shortlist-of-docker-hosting) There are so many specialized and optimized Docker hosting services available, it’s high time for a review to see what’s on offer (by Chris Ward).
## Portuguese Articles

* [Uma rápida introdução ao Docker e instalação no Ubuntu](https://woliveiras.com.br/posts/uma-rapida-introducao-ao-docker-e-instalacao-no-ubuntu/) (A quick introduction to Docker and installing it on Ubuntu)
* [O que é uma imagem e o que é um container Docker?](https://woliveiras.com.br/posts/imagem-docker-ou-um-container-docker/) (What is a Docker image and what is a Docker container?)
* [Criando uma imagem Docker personalizada](https://woliveiras.com.br/posts/Criando-uma-imagem-Docker-personalizada/) (Creating a custom Docker image)
* [Comandos mais utilizados no Docker](https://woliveiras.com.br/posts/comandos-mais-utilizados-no-docker/) (The most commonly used Docker commands)
## Deep Dive

* [Creating containers - Part 1](http://crosbymichael.com/creating-containers-part-1.html) This is part one of a series of blog posts detailing how docker creates containers. By [@crosbymichael][crosbymichael]
* [Data-only container madness](http://container42.com/2014/11/18/data-only-container-madness/)
## Networking

* [Using Docker Machine with Weave 0.10](https://www.weave.works/using-docker-machine-with-weave-0-10/) (written 22 APR 2015)
* [How to Route Traffic through a Tor Docker container](https://blog.jessfraz.com/post/routing-traffic-through-tor-docker-container/) by [@jfrazelle][jfrazelle] (written 20 JUN 2015)
## Metal

* [How to use Docker on Full Metal](http://blog.bigstep.com/use-docker-full-metal-cloud/)
## Multi-Server

* [A Docker based mini-PaaS](http://shortcircuit.net.au/~prologic/blog/article/2015/03/24/a-docker-based-mini-paas/) by [@prologic][prologic]
* [A multi-host scalable web services demo using Docker swarm, Docker compose, NGINX, and Blockbridge](https://www.blockbridge.com/a-scalable-web-services-demo-using-docker-swarm-compose-and-blockbridge/)
## Cloud Infrastructure

* [Cloud Infrastructure Automation for Docker Nodes](https://blog.tutum.co/2015/04/29/cloud-infrastructure-automation-for-docker-nodes/)
## Good Tips

* [24 random docker tips](https://csabapalfi.github.io/random-docker-tips/) by [@csabapalfi](https://github.com/csabapalfi)
* [GUI Apps with Docker](http://fabiorehm.com/blog/2014/09/11/running-gui-apps-with-docker/) by [@fgrehm][fgrehm]
* [Automated Nginx Reverse Proxy for Docker](http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/) by [@jwilder][jwilder]
* [Using NSEnter with Boot2Docker](https://ro14nd.de/NSEnter-with-Boot2Docker)
* [A Simple Way to Dockerize Applications](http://jasonwilder.com/blog/2014/10/13/a-simple-way-to-dockerize-applications/) by [@jwilder][jwilder]
* [Building good docker images](http://jonathan.bergknoff.com/journal/building-good-docker-images) by [@jbergknoff](https://github.com/jbergknoff)
* [10 Things Not To Forget Before Deploying Docker In Production](http://www.slideshare.net/rightscale/docker-meetup-40826948)
* [Docker CIFS – How to Mount CIFS as a Docker Volume](http://backdrift.org/docker-cifs-howto-mount-cifs-volume-docker-container)
* [Nginx Proxy for Docker](https://blog.danivovich.com/2015/07/09/nginx-proxy-for-docker-containers/) (written 9 JUL 2015)
* [Dealing with linked containers dependency in docker-compose](http://brunorocha.org/python/dealing-with-linked-containers-dependency-in-docker-compose.html) by [@rochacbruno](https://github.com/rochacbruno)
* [Docker Tips](http://www.mervine.net/notes/docker-tips) by [@jmervine](https://github.com/jmervine)
* [Docker on Windows behind a firewall](http://toedter.com/2015/05/11/docker-on-windows-behind-a-firewall/) by [@kaitoedter](https://twitter.com/kaitoedter)
* [Pulling Git into a Docker image without leaving SSH keys behind](http://blog.cloud66.com/pulling-git-into-a-docker-image-without-leaving-ssh-keys-behind/) by [@khash](https://github.com/khash)
* [6 Million Ways To Log In Docker](http://www.slideshare.net/raychaser/6-million-ways-to-log-in-docker-nyc-docker-meetup-12172014) by [@raychaser](https://twitter.com/raychaser)
* [Dockerfile Generator](http://jrruethe.github.io/blog/2015/09/20/dockerfile-generator/) (ruby script)
* [Running Production Hadoop Clusters in Docker Containers](http://conferences.oreilly.com/strata/big-data-conference-ca-2015/public/schedule/detail/38521)
* [10 practical docker tips](http://www.smartjava.org/content/10-practical-docker-tips-day-day-docker-usage) (Dec 2015) by [@josdirksen](https://github.com/josdirksen)
* [Kubernetes Cheatsheet](http://k8s.info/cs.html) - A great resource for managing your Kubernetes installation
* [Container Best Practices](http://docs.projectatomic.io/container-best-practices/) - Red Hat's Project Atomic created a Container Best Practices guide which applies to everything and is updated regularly.
* [Production Meteor and Node Using Docker, Part I](https://projectricochet.com/blog/production-meteor-and-node-using-docker-part-i) by [@projectricochet](https://github.com/projectricochet)
* [Resource Management in Docker](https://goldmann.pl/blog/2014/09/11/resource-management-in-docker/) by [@marekgoldmann](https://twitter.com/marekgoldmann)
## Newsletter

* [Docker Team](https://www.docker.com/)
* [CenturyLink Labs](https://labs.ctl.io/)
* [Tutum](https://dashboard.tutum.co/)
* [DevOps Weekly](http://www.devopsweekly.com)
* [Shippable](http://blog.shippable.com/)
* [WebOps weekly](http://webopsweekly.com/)
## Continuous Integration

* [Docker and Phoenix: How to Make Your Continuous Integration More Awesome](https://ariya.io/2014/12/docker-and-phoenix-how-to-make-your-continuous-integration-more-awesome)
* [Jenkins 2.0 - Screencast Series](http://theremotelab.com/blog/jenkins2.0-screencast-series/) by [Virendra Bhalothia](https://twitter.com/bhalothiaa)
* [Pushing to ECR Using Jenkins Pipeline Plugin](https://blog.mikesir87.io/2016/04/pushing-to-ecr-using-jenkins-pipeline-plugin/) by [@mikesir87](https://github.com/mikesir87)
## Optimizing Images

* [Create the smallest possible Docker container](http://blog.xebia.com/create-the-smallest-possible-docker-container/)
* [Creating a Docker image from your code](https://blog.tutum.co/2014/04/10/creating-a-docker-image-from-your-code/)
* [Optimizing Docker Images](https://www.ctl.io/developers/blog/post/optimizing-docker-images/)
* [How to Optimize Your Dockerfile](https://blog.tutum.co/2014/10/22/how-to-optimize-your-dockerfile/) by [@tutumcloud](https://github.com/tutumcloud)
* [Building Docker Images for Static Go Binaries](https://medium.com/@kelseyhightower/optimizing-docker-images-for-static-binaries-b5696e26eb07) by [@kelseyhightower](https://github.com/kelseyhightower)
* [Squashing Docker Images](http://jasonwilder.com/blog/2014/08/19/squashing-docker-images/) by [@jwilder][jwilder]
* [Dockerfile Golf (or optimizing the Docker build process)](http://www.davidmkerr.com/2014/08/dockerfile-golf-or-optimizing-docker.html)
* [ImageLayers](https://imagelayers.iron.io/) Visualize Docker images and the layers that compose them.
* [DockerSlim](https://github.com/docker-slim/docker-slim) shrinks fat Docker images, creating the smallest possible images.
* [SkinnyWhale](https://github.com/djosephsen/skinnywhale) Skinnywhale helps you make smaller (as in megabytes) Docker containers.
## Service Discovery

* [@progrium][progrium] Service Discovery articles series:
  * [Consul Service Discovery with Docker](http://progrium.com/blog/2014/08/20/consul-service-discovery-with-docker/)
  * [Understanding Modern Service Discovery with Docker](http://progrium.com/blog/2014/07/29/understanding-modern-service-discovery-with-docker/)
  * [Automatic Docker Service Announcement with Registrator](http://progrium.com/blog/2014/09/10/automatic-docker-service-announcement-with-registrator/)
## Security

* [Docker and SELinux](http://www.projectatomic.io/docs/docker-and-selinux/)
* [Bringing new security features to Docker](https://opensource.com/business/14/9/security-for-docker)
* [Docker Secure Deployment Guidelines](https://github.com/GDSSecurity/Docker-Secure-Deployment-Guidelines)
* [Security Best Practices for Building Docker Images](https://linux-audit.com/tag/docker/)
* [Docker Security: Are Your Containers Tightly Secured to the Ship? SlideShare](http://fr.slideshare.net/MichaelBoelen/docker-security-are-your-containers-tightly-secured-to-the-ship)
* [Tuning Docker with the newest security enhancements](https://opensource.com/business/15/3/docker-security-tuning)
* [Lynis is an open source security auditing tool including Docker auditing](https://cisofy.com/lynis/)
* [Understanding Docker security and best practices](https://blog.docker.com/2015/05/understanding-docker-security-and-best-practices/) (written 5 MAY 2015)
* [Docker Security Cheat Sheet](https://github.com/konstruktoid/Docker/blob/master/Security/CheatSheet.adoc)
* [How CVEs are handled on Official Docker Images](https://github.com/docker-library/official-images/issues/1448)
* [Improving Docker Security with Authenticated Volumes](https://www.blockbridge.com/improving-docker-security-with-authenticated-volumes/)
## Performances

* [Performance Analysis of Docker on Red Hat Enterprise Linux 7](http://developerblog.redhat.com/2014/08/19/performance-analysis-docker-red-hat-enterprise-linux-7/)
* [Distributed JMeter testing using Docker](http://srivaths.blogspot.fr/2014/08/distrubuted-jmeter-testing-using-docker.html?m=1)
* [nsinit: per-container resource monitoring of Docker containers on RHEL/Fedora](http://www.breakage.org/2014/09/03/nsinit-per-container-resource-monitoring-of-docker-containers-on-rhelfedora/)
## Raspberry Pi & ARM

* [git push docker containers to linux devices](https://resin.io/) Modern DevOps for IoT, leveraging git and Docker.

* [Docker Pirates ARMed with explosive stuff](http://blog.hypriot.com/) A huge resource on clustering, Swarm, and Docker, with a pre-installed SD-card image for the Raspberry Pi.

* [Docker on Raspberry Pi](http://blog.xebia.com/docker-on-a-raspberry-pi/)

* [Fool-Proof Recipe: Docker on the Raspberry Pi](https://www.voxxed.com/blog/2015/04/fool-proof-recipe-docker-on-the-raspberry-pi/) Same article as above but more opinionated.

* [Raspberry Pi with Docker 1.5.0](http://blog.hypriot.com/post/heavily-armed-after-major-upgrade-raspberry-pi-with-docker-1-dot-5-0/)

* [Swarming Raspberry Pi – Part 1](http://matthewkwilliams.com/index.php/2015/03/21/swarming-raspberry-pi-part-1/)

* [Swarming Raspberry Pi, Part 2: Registry & Mirror](http://matthewkwilliams.com/index.php/2015/03/29/swarming-raspberry-pi-part-2-registry-mirror/)

* [Swarming Raspberry Pi: Docker Swarm Discovery Options](http://matthewkwilliams.com/index.php/2015/04/03/swarming-raspberry-pi-docker-swarm-discovery-options/)

* [Uniform Development by Docker & QEMU](http://www.instructables.com/id/Uniform-Development-by-Docker-QEMU/)

* [Get Docker up and running on the RaspberryPi in three steps](https://github.com/umiddelb/armhf/wiki/Get-Docker-up-and-running-on-the-RaspberryPi-%28ARMv6%29-in-three-steps)

* [Installing, running, using Docker on armhf (ARMv7) devices](https://github.com/umiddelb/armhf/wiki/Installing,-running,-using-docker-on-armhf-(ARMv7)-devices)

* [How to run 2500 webservers on a Raspberry Pi](http://blog.loof.fr/2015/10/how-to-run-2500-webservers-on-raspberry.html)

## Other

* Presentation: Docker and JBoss - the perfect combination
  * [Video](https://www.youtube.com/watch?v=4uQ6gR_xZhE)
  * [Source code](https://github.com/goldmann/goldmann.pl/tree/master/.presentations/2014-vjbug-docker/demos)
  * [JBoss and Docker Presentation](https://goldmann.pl/presentations/2014-vjbug-docker/)

# Books

## In English

* [Docker Book](https://dockerbook.com/) by James Turnbull ([@kartar][kartar])

* [Docker Cookbook](http://shop.oreilly.com/product/0636920036791.do) by Sébastien Goasguen ([@sebgoa][sebgoa]) (Publisher: O'Reilly)

* [Docker Cookbook](http://dockercookbook.github.io/) by Neependra Khare ([@neependra](https://twitter.com/neependra)) (Publisher: Packt)

* [Docker in Action](https://www.manning.com/books/docker-in-action) by Jeff Nickoloff ([@allingeek](https://twitter.com/allingeek))

* [Docker in Practice](https://www.manning.com/books/docker-in-practice) by Ian Miell ([@ianmiell][ianmiell]) and Aidan Hobson Sayers ([@aidanhs](https://github.com/aidanhs)) - [Website](http://docker-in-practice.github.io/)

* [Docker Up & Running](https://newrelic.com/docker-book) by [Karl Matthias](https://twitter.com/relistan) and [Sean P. Kane](https://twitter.com/spkane)

* [Using Docker](http://shop.oreilly.com/product/0636920035671.do) by Adrian Mouat ([@adrianmouat](https://twitter.com/adrianmouat)) (Publisher: O'Reilly)

* [Docker Security](https://www.openshift.com/promotions/docker-security.html) by Adrian Mouat ([@adrianmouat](https://twitter.com/adrianmouat)) (Publisher: O'Reilly)

* [Kubernetes](https://www.openshift.com/promotions/kubernetes.html) by [David Rensin](http://research.google.com/pubs/DavidRensin.html) (Publisher: O'Reilly)

* [Docker in Production: Lessons from the Trenches](http://www.amazon.com/Docker-Production-Trenches-Joe-Johnston-ebook/dp/B0141W6KYC) by Joe Johnston, John Fiedler, Milos Gajdos, Antoni Batchelli, and Justin Cormack

* [Mastering Docker](https://www.packtpub.com/virtualization-and-cloud/mastering-docker) by Scott Gallagher (Publisher: Packt)

* [Learning Docker](https://www.packtpub.com/virtualization-and-cloud/learning-docker) by Pethuru Raj, Jeeva S. Chelladhurai and Vinod Singh (Publisher: Packt)

* [Troubleshooting Docker](https://www.packtpub.com/virtualization-and-cloud/troubleshooting-docker) by John Wooten, Navid Shaikh (Publisher: Packt)

* [Orchestrating Docker](https://www.packtpub.com/virtualization-and-cloud/orchestrating-docker) by Shrikrishna Holla (Publisher: Packt)

* [Extending Docker](https://www.packtpub.com/networking-and-servers/extending-docker) by Russ McKendrick (Publisher: Packt)

* [Securing Docker](https://www.packtpub.com/virtualization-and-cloud/securing-docker) by Scott Gallagher (Publisher: Packt)

* [Learning Docker Networking](https://www.packtpub.com/networking-and-servers/learning-docker-networking) by Rajdeep Dua, Vaibhav Kohli and Santosh Kumar Konduri (Publisher: Packt)

* [Docker High Performance](https://www.packtpub.com/networking-and-servers/docker-high-performance) by Allan Espinosa (Publisher: Packt)

* [Kubernetes Up and Running: Dive into the Future of Infrastructure](http://shop.oreilly.com/product/0636920043874.do) by Kelsey Hightower ([@kelseyhightower](https://twitter.com/kelseyhightower)) (Publisher: O'Reilly)

## Chinese

* [The Source Code Analysis of Docker](https://www.amazon.cn/图书/dp/B012ROMRUM) (Chinese) by [Allen Sun](https://github.com/allencloud)

* [Docker Container and Container Cloud](https://www.amazon.cn/图书/dp/B014ETH1IG) (Chinese) by [Harry Zhang](https://twitter.com/resouer) & Jianbo Sun & Zhejiang University SEL Laboratory

## German

* [Docker: Container-Infrastruktur für Microservices](http://www.bee42.com/dockerbook/) by Peter Roßbach ([@PRossbach](https://twitter.com/PRossbach))

## Portuguese

* [Containers com Docker do desenvolvimento à produção](https://www.casadocodigo.com.br/products/livro-docker) by Daniel Romero ([@infoslack](https://twitter.com/infoslack))

* [Aprendendo Docker: Do básico à orquestração de contêineres](http://aprendendodocker.com.br/) by Wellington F. Silva ([@_wsilva](https://twitter.com/_wsilva)) (Publisher: Editora Novatec)

* [Docker para Desenvolvedores](https://leanpub.com/dockerparadesenvolvedores) by Rafael Gomes ([@gomex](https://twitter.com/gomex)) (Publisher: Leanpub) - 55% finished

# Tools

* [Docker](https://github.com/docker/docker)

* [Docker Images](https://hub.docker.com)

* [Docker Compose](https://github.com/docker/compose/) (Define and run multi-container applications with Docker)

* [Docker Machine](https://github.com/docker/machine) (Machine management for a container-centric world)

* [Docker Registry][distribution] (The Docker toolset to pack, ship, store, and deliver content)

* [Docker Swarm](https://github.com/docker/swarm) (Swarm: a Docker-native clustering system)
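
As a quick illustration of how Compose defines a multi-container application, here is a minimal sketch of a `docker-compose.yml`; the service names (`web`, `redis`) and the port mapping are illustrative, not taken from any project listed above:

```yaml
# docker-compose.yml — minimal two-service sketch
version: "2"
services:
  web:
    build: .          # build the web image from the local Dockerfile
    ports:
      - "8080:80"     # publish container port 80 on host port 8080
    depends_on:
      - redis         # start redis before web
  redis:
    image: redis:alpine
```

Running `docker-compose up -d` creates a network for the application, so `web` can reach `redis` by its service name.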
## Terminal User Interface

* [sen](https://github.com/TomasTomecek/sen) - Terminal user interface for docker engine, by [@TomasTomecek](https://github.com/TomasTomecek)

* [wharfee](https://github.com/j-bennet/wharfee) - Autocompletion and syntax highlighting for Docker commands by [@j-bennet](https://github.com/j-bennet)

* [ctop](https://github.com/yadutaf/ctop) - A command line / text based Linux Containers monitoring tool that works just like you expect by [@yadutaf](https://github.com/yadutaf)

* [dry](https://github.com/moncho/dry) - An interactive CLI for Docker containers by [@moncho](https://github.com/moncho)

* [dockercraft](https://github.com/docker/dockercraft) - Docker + Minecraft = Dockercraft by [@docker][docker]

* [dockersql](https://github.com/crosbymichael/dockersql) - A command line interface to query Docker using SQL by [@crosbymichael][crosbymichael]

## Dev Tools

* [draw-compose](https://github.com/Alexis-benoist/draw-compose) - Utility to draw a schema of a Docker Compose setup by [@Alexis-benoist](https://github.com/Alexis-benoist)

* [GoSu](https://github.com/tianon/gosu) - Run this specific application as this specific user and get out of the pipeline (entrypoint script tool) by [@tianon](https://github.com/tianon)

* [Chaperone](https://github.com/garywiz/chaperone) - A single PID1 process designed for docker containers. Does user management, log management, startup, zombie reaping, all in one small package. By [@garywiz](https://github.com/garywiz)

* [ns-enter](https://github.com/jpetazzo/nsenter) (no more SSH; enter the namespaces of a container) by [@jpetazzo][jpetazzo]

* [Squid-in-a-can](https://github.com/jpetazzo/squid-in-a-can) (in case of proxy problems) by [@jpetazzo][jpetazzo]

* [Composerize](https://github.com/magicmark/composerize) - Convert `docker run` commands into docker-compose files

* [docker-gen](https://github.com/jwilder/docker-gen) (Generate files from docker container meta-data) by [@jwilder][jwilder]

* [dockerize](https://github.com/jwilder/dockerize) (Utility to simplify running applications in docker containers) by [@jwilder][jwilder]

* [registrator](https://github.com/progrium/registrator) (Service registry bridge for Docker) by [@progrium][progrium]

* [Dockly](https://github.com/swipely/dockly) (Dockly is a gem made to ease the pain of packaging an application in Docker.) by [@swipely](https://github.com/swipely/)

* [docker-volumes](https://github.com/cpuguy83/docker-volumes) (Docker Volume Manager) by [@cpuguy83][cpuguy83]

* [dockerfile_lint](https://github.com/projectatomic/dockerfile_lint) (A rule-based 'linter' for Dockerfiles) by [@redhataccess](https://github.com/redhataccess)

* [powerstrip](https://github.com/clusterhq/powerstrip) (A tool for prototyping Docker extensions) by [@clusterhq](https://github.com/clusterhq)

* [Vagga](https://github.com/tailhook/vagga) (Vagga is a containerisation tool without daemons. It is a fully-userspace container engine inspired by Vagrant and Docker, specialized for development environments.) by [@tailhook](https://github.com/tailhook/)

* [dockerode](https://github.com/apocas/dockerode) (Not just another Docker Remote API node.js module) by [@apocas](https://github.com/apocas)

* [go-dockerclient](https://github.com/fsouza/go-dockerclient/) (Go HTTP client for the Docker remote API.) by [@fsouza](https://github.com/fsouza/)

* [Docker.DotNet](https://github.com/Microsoft/Docker.DotNet) (C#/.NET HTTP client for the Docker remote API) by [@ahmetalpbalkan](https://github.com/ahmetalpbalkan/)

* [container-factory](https://github.com/lsqio/container-factory) - Produces Docker images from tarballs of application source code by [@lsqio](https://github.com/lsqio)

* [codelift](https://codelift.io/) - CodeLift is an automated Docker image build utility for 'dockerizing' services by [@BoozAllen](https://twitter.com/BoozAllen)

* [percheron][percheron] - Organise your Docker containers with muscle and intelligence by [@ashmckenzie](https://github.com/ashmckenzie)

* [crane](https://github.com/michaelsauter/crane) - Lift containers with ease. Easy orchestration for images and containers by [@michaelsauter](https://github.com/michaelsauter)

* [sherdock](https://github.com/rancher/sherdock) - Automatic GC of images based on regexp by [@rancher][rancher]

* [bocker](https://github.com/p8952/bocker) (1) - Docker implemented in 100 lines of bash by [p8952](https://github.com/p8952)

* [bocker](https://github.com/icy/bocker) (2) - Write Dockerfiles completely in Bash. Extensible, simple, and reusable. By [@icy](https://github.com/icy)

* [docker-gc](https://github.com/spotify/docker-gc) - A cron job that will delete old stopped containers and unused images by [@spotify](https://github.com/spotify)

* [dlayer](https://github.com/wercker/dlayer) - Stats collector for Docker layers by [@wercker](https://github.com/wercker)

* [forward2docker](https://github.com/bsideup/forward2docker) - Utility to auto forward a port from localhost into ports on Docker containers running in a boot2docker VM by [@bsideup](https://github.com/bsideup)

* [dockramp](https://github.com/jlhawn/dockramp) - Proof of Concept: A Client Driven Docker Image Builder by [@jlhawn](https://github.com/jlhawn)

* [portainer](https://github.com/duedil-ltd/portainer) - Apache Mesos framework for building Docker images by [@tarnfeld](https://github.com/tarnfeld)

* [Gradle Docker plugin](https://github.com/gesellix/gradle-docker-plugin) - A Docker remote API plugin for Gradle by [@gesellix][gesellix]

* [Docker client](https://github.com/gesellix/docker-client) - A Docker remote API client library for the JVM, written in Groovy by [@gesellix][gesellix]

* [Dropdock](http://dropdock.io/) - A framework designed for Drupal to build fast, isolated development environments using Docker.

* [Devstep](https://github.com/fgrehm/devstep) - Development environments powered by Docker and buildpacks by [@fgrehm][fgrehm]

* [Lorry](https://lorry.io/) - Lorry is a docker-compose.yml validator and composer by [@CenturyLinkLabs][CenturyLinkLabs]

* [Dray](http://dray.it/) - Dray is an engine for managing the execution of container-based workflows. Docker Workflow Engine - UNIX pipes for Docker by [@CenturyLinkLabs][CenturyLinkLabs]

* [docker-do](https://github.com/benzaita/docker-do) - hassle-free docker run, like `env` but for docker by [@benzaita](https://github.com/benzaita)

* [Docker osx dev](https://github.com/brikis98/docker-osx-dev) - A productive development environment with Docker on OS X by [@brikis98](https://github.com/brikis98)

* [rocker](https://github.com/grammarly/rocker) - Extended Dockerfile builder. Supports multiple FROMs, MOUNTS, templates, etc. by [grammarly](https://github.com/grammarly).

* [dexec](https://github.com/docker-exec/dexec) - Command line interface for running code with Docker Exec images, written in Go. https://docker-exec.github.io/

* [crowdr](https://github.com/polonskiy/crowdr) - Tool for managing multiple Docker containers (docker-compose alternative) by [@polonskiy](https://github.com/polonskiy/)

* [ahab](https://github.com/instacart/ahab) - Docker event handling with Python by [@instacart](https://github.com/instacart)

* [docker-garby](https://github.com/konstruktoid/docker-garby) - Docker garbage collection script by [@konstruktoid](https://github.com/konstruktoid).

* [DevLab](https://github.com/TechnologyAdvice/DevLab) - Utility for running containerized development environments

* [is-docker](https://github.com/sindresorhus/is-docker) - Check if the process is running inside a Docker container by [@sindresorhus][sindresorhus]

* [Docker meets the IDE](http://domeide.github.io/) - Integrating your favorite containers in the editor of your choice by [domeide](https://github.com/domeide)

* [DVM](https://github.com/getcarina/dvm) - Docker version manager by [@getcarina](https://github.com/getcarina)

* [docker-ls](https://github.com/mayflower/docker-ls) - CLI tools for browsing and manipulating docker registries by [@mayflower](https://github.com/mayflower)

* [habitus](https://github.com/cloud66/habitus) - A Build Flow Tool for Docker http://www.habitus.io by [@cloud66](https://github.com/cloud66)

* [Compose Registry](https://www.composeregistry.com) - A search engine for Docker Compose files

* [Docker Clean](https://github.com/zzrotdesign/docker-clean) - A script that cleans Docker containers, images and volumes by [@zzrotdesign](https://github.com/zzrotdesign)

* [Powerline-Docker](https://github.com/adrianmo/powerline-docker) - A Powerline segment for showing the status of Docker containers by [@adrianmo](https://github.com/adrianmo)

* [Docker-PowerShell](https://github.com/Microsoft/Docker-PowerShell) - PowerShell Module for Docker

* [docker-compose-search](https://github.com/francescou/docker-compose-search) - A search engine for Docker Compose application stacks by [@francescou](https://github.com/francescou/)

* [Docker Volume Clone Utility](https://github.com/gdiepen/docker-convenience-scripts) - A Docker utility to clone volumes by [@gdiepen](https://twitter.com/gdiepen)

* [docker-companion](https://github.com/mudler/docker-companion) - A command line tool written in Golang to squash and unpack docker images by [@mudler](https://github.com/mudler/)

* [sbt-docker-compose](https://github.com/Tapad/sbt-docker-compose) - Integrates Docker Compose functionality into sbt by [@kurtkopchik](https://github.com/kurtkopchik/)

* [Whale-linter](https://github.com/jeromepin/whale-linter) - A simple and small Dockerfile linter written in Python3+ without dependencies.

* [docker-make](https://github.com/CtripCloud/docker-make) - Build, tag, and push a bunch of related docker images via a single command.

* [caduc](https://github.com/tjamet/caduc) - A docker garbage collector cleaning stuff you did not use recently

* [OctoLinker](https://github.com/OctoLinker/browser-extension) - A browser extension for GitHub that makes the image name in a `Dockerfile` clickable and redirects you to the related Docker Hub page.

* [docker-replay](https://github.com/bcicen/docker-replay) - Generate `docker run` commands and options from running containers

* [dext-docker-registry-plugin](https://github.com/vutran/dext-docker-registry-plugin) - Search the Docker Registry with the Dext smart launcher.

## Continuous Integration / Continuous Delivery

* [Awesome-ciandcd](https://github.com/ciandcd/awesome-ciandcd) - Not specific to docker but relevant.

* [Buddy](https://buddy.works) - The best of Git, build & deployment tools combined into one powerful tool

* [Captain](https://github.com/harbur/captain) - Convert your Git workflow to Docker containers ready for Continuous Delivery by [@harbur](https://github.com/harbur)

* [CircleCI](https://circleci.com/) - Push or pull Docker images from your build environment, or build and run containers right on CircleCI.

* [CodeFresh](https://codefresh.io) - Accelerate your transition to Docker containers

* [CodeShip](https://pages.codeship.com/docker) - Work with your established Docker workflows while automating your testing and deployment tasks with our hosted platform dedicated to speed and security.

* [Docker plugin for Jenkins](https://github.com/jenkinsci/docker-plugin/) - The aim of the docker plugin is to be able to use a docker host to dynamically provision a slave, run a single build, then tear-down that slave.

* [Dockunit](https://github.com/dockunit/platform) - Docker based integration tests. A simple Node based utility for running Docker based unit tests. By [@dockunit](https://github.com/dockunit)

* [Drone](https://github.com/drone/drone) - Continuous integration server built on Docker and configured using YAML files.

* [GitLab CI](https://about.gitlab.com/gitlab-ci/) - GitLab has integrated CI to test, build and deploy your code with the use of GitLab runners.

* [GOCD-Docker](https://github.com/gocd/gocd-docker) - Go Server and Agent in docker containers to provision.

* [IBM DevOps Services](https://hub.jazz.net) - Continuous delivery using a pipeline deployment onto IBM Containers on Bluemix.

* [InSpec](https://github.com/chef/inspec) - InSpec is an open-source testing framework for infrastructure with a human- and machine-readable language for specifying compliance, security and policy requirements.

* [Shippable](https://app.shippable.com/) - A SaaS platform for developers and DevOps teams that significantly reduces the time taken for code to be built, tested and deployed to production.

* [Watchtower](https://github.com/CenturyLinkLabs/watchtower) - Automatically update running Docker containers by [@CenturyLinkLabs][CenturyLinkLabs]

* [Microservices Continuous Deployment](https://github.com/francescou/docker-continuous-deployment) - Continuous deployment of a microservices application

* [Pumba](https://github.com/gaia-adm/pumba) - Chaos testing tool for Docker. Can be deployed on Kubernetes and CoreOS clusters.

## Deployment

* [Conduit](https://github.com/ehazlett/conduit) - Experimental deployment system for Docker by [@ehazlett](https://github.com/ehazlett)

* [depcon](https://github.com/gondor/depcon) - Depcon is written in Go and allows you to easily deploy Docker containers to Apache Mesos/Marathon, Amazon ECS and Kubernetes. By [@gondor][gondor]

* [dockit](https://github.com/humblec/dockit) - Perform Docker actions and deploy Gluster containers.

* [rocker-compose](https://github.com/grammarly/rocker-compose) - Docker composition tool with idempotency features for deploying apps composed of multiple containers.

* [Zodiac](https://github.com/CenturyLinkLabs/zodiac) - A lightweight tool for easy deployment and rollback of dockerized applications. By [@CenturyLinkLabs][CenturyLinkLabs]

## Hosting for repositories (registries)

Securely store your Docker images.

* [Docker Hub](https://hub.docker.com/) (provided by Docker Inc.)

* [Quay.io](https://quay.io/) (part of CoreOS) - Secure hosting for private Docker repositories

* [GitLab Container Registry](http://docs.gitlab.com/ce/container_registry/README.html) - Container registry integrated with GitLab, focused on using its images in GitLab CI

* [TreeScale](https://treescale.com/) - Build and distribute container-based applications.

## Hosting for containers

* [Amazon ECS](http://aws.amazon.com/ecs/) - A management service on EC2 that supports Docker containers.

* [ContainerShip Cloud][containership] - Multi-Cloud Container Hosting Automation Platform.

* [Docker Cloud](https://cloud.docker.com/) - Formerly Tutum

* [Google Container Engine](https://cloud.google.com/container-engine/docs/) - Docker containers on Google Cloud Computing powered by [Kubernetes][kubernetes].

* [Giant Swarm](https://giantswarm.io/) - Simple microservice infrastructure. Deploy your containers in seconds.

* [IBM Bluemix](https://console.ng.bluemix.net/) - Run Docker containers in a hosted cloud environment on IBM Bluemix.

* [OpenShift Dedicated](https://www.openshift.com/dedicated/index.html) - A hosted [OpenShift][openshift] cluster for running your Docker containers managed by Red Hat.

* [Orchard](https://www.orchardup.com/) (part of Docker Inc) - Get a Docker host in the cloud, instantly.

* [Triton](https://www.joyent.com/) - Elastic container-native infrastructure by Joyent.

## Reverse Proxy

* [nginx-proxy][nginxproxy] - Automated nginx proxy for Docker containers using docker-gen by [@jwilder][jwilder]

* [Let's Encrypt Nginx-proxy Companion](https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion) - A lightweight companion container for the nginx-proxy. It allows the automatic creation and renewal of Let's Encrypt certificates. By [@JrCs](https://github.com/JrCs)

* [h2o-proxy](https://github.com/zchee/h2o-proxy) - Automated H2O reverse proxy for Docker containers. An alternative to [jwilder/nginx-proxy][nginxproxy] by [@zchee](https://github.com/zchee)

* [docker-proxy](https://github.com/silarsis/docker-proxy) - Transparent proxy for docker containers, run in a docker container. By [@silarsis](https://github.com/silarsis)

* [muguet](https://github.com/mattallty/muguet) - DNS Server & Reverse proxy for Docker environments. By [@mattallty](https://github.com/mattallty)

* [Træfɪk](https://traefik.io/) - Automated reverse proxy and load-balancer for Docker, Mesos, Consul, Etcd... By [@EmileVauge](https://github.com/emilevauge)

* [fabio](https://github.com/eBay/fabio) - A fast, modern, zero-conf load balancing HTTP(S) router for deploying microservices managed by consul. By [@eBay](https://github.com/eBay)

* [Swarm Ingress Router](https://github.com/tpbowden/swarm-ingress-router) - Route DNS names to Swarm services based on labels.
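
To illustrate the pattern most of these proxies share, here is a minimal Compose sketch of nginx-proxy: it watches the Docker socket and routes requests by each container's `VIRTUAL_HOST` environment variable. The hostname `whoami.local` is illustrative:

```yaml
# Sketch: automated reverse proxying with jwilder/nginx-proxy
version: "2"
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # read-only access to the Docker socket lets docker-gen
      # regenerate the nginx config as containers come and go
      - /var/run/docker.sock:/tmp/docker.sock:ro
  whoami:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=whoami.local   # the proxy routes this hostname here
```

Any container started with a `VIRTUAL_HOST` is picked up automatically; no proxy restart is needed.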
## Web Interface

* [Docker Registry Browser](https://github.com/klausmeyer/docker-registry-browser) - Web Interface for the Docker Registry HTTP API v2 by [@klausmeyer](https://github.com/klausmeyer)

* [Docker Registry UI](https://github.com/atc-/docker-registry-ui) - A web UI for easy private/local Docker Registry integration by [@atc-](https://github.com/atc-)

* [DockerUI](https://github.com/kevana/ui-for-docker) - DockerUI is a web interface to interact with the Remote API by [@crosbymichael][crosbymichael]

* [Portus](https://github.com/SUSE/Portus) - Authorization service and frontend for Docker registry (v2) by [@SUSE](https://github.com/SUSE)

* [docker-registry-web](https://github.com/mkuchin/docker-registry-web) - Web UI, authentication service and event recorder for private docker registry v2 by [@mkuchin](https://github.com/mkuchin)

* [dockerding-on-rails](https://github.com/Electrofenster/dockerding-on-rails) - Simple Web-Interface for Docker with a lot of features by [@Electrofenster](https://github.com/Electrofenster/)

* [Rapid Dashboard](https://github.com/ozlerhakan/rapid) - A simple query dashboard to use Docker Remote API by [@ozlerhakan](https://github.com/ozlerhakan/)

* [docker-swarm-visualizer](https://github.com/manomarks/docker-swarm-visualizer) - Visualizes Docker services on a Docker Swarm (for running demos).

## Local Container Manager

* [Shutit](http://ianmiell.github.io/shutit/) - Tool for building and maintaining complex Docker deployments by [@ianmiell][ianmiell]

* [FuGu](https://github.com/mattes/fugu) - Docker run wrapper without orchestration by [@mattes](https://github.com/mattes)

* [Boot2Docker](https://github.com/boot2docker/boot2docker) - Docker for OSX and Windows -- http://boot2docker.io/

* [docker-vm](https://github.com/shyiko/docker-vm) - Simple and transparent alternative to boot2docker (backed by Vagrant) by [@shyiko](https://github.com/shyiko)

* [Vessel](https://github.com/awvessel/vessel) - Automates the setup & use of dockerized development environments by [@awvessel](https://github.com/awvessel)

* [subuser](http://subuser.org) - Makes it easy to securely and portably run graphical desktop applications in Docker

* [OctoHost](http://www.octohost.io/) - Simple web-focused Docker-based mini-PaaS server. `git push` to deploy your websites as needed, by [@octohost](https://github.com/octohost)

* [Dokku][dokku] - Docker powered mini-Heroku in around 100 lines of Bash by [@progrium][progrium]

* [Ansible - manage docker containers](http://docs.ansible.com/ansible/docker_module.html)

* [Vagrant - Docker provider](https://www.vagrantup.com/docs/docker/basics.html) - Good starting point is [vagrant-docker-example](https://github.com/bubenkoff/vagrant-docker-example) by [@bubenkoff](https://github.com/bubenkoff)

* [Dray](https://github.com/CenturyLinkLabs/dray) - An engine for managing the execution of container-based workflows. http://Dray.it by [@CenturyLinkLabs][CenturyLinkLabs]

* [percheron][percheron] - Organise your Docker containers with muscle and intelligence by [@ashmckenzie](https://github.com/ashmckenzie)

* [Dusty](http://dusty.gc.com/) - Managed Docker development environments on OS X

* [Beluga](https://github.com/cortexmedia/Beluga) - CLI to deploy docker containers on a single server or a small number of servers. By [@cortexmedia](https://github.com/cortexmedia)

* [libcompose](https://github.com/docker/libcompose) - Go library for Docker Compose.

* [DLite](https://github.com/nlf/dlite) - Simplest way to use Docker on OSX, no VM needed. By [@nlf](https://github.com/nlf)

* [Azk](http://www.azk.io/) - Orchestrate development environments on your local machine by [@azukiapp](https://github.com/azukiapp)

* [Turbo](https://ramitsurana.github.io/turbo/) - Simple and powerful utility for Docker. By [@ramitsurana][ramitsurana]

## Volume management and plugins

* [Blockbridge](https://github.com/blockbridge/blockbridge-docker-volume) - The Blockbridge plugin is a volume plugin that provides access to an extensible set of container-based persistent storage options. It supports single and multi-host Docker environments with features that include tenant isolation, automated provisioning, encryption, secure deletion, snapshots and QoS. By [@blockbridge][blockbridge]

* [Convoy](https://github.com/rancher/convoy) - An open-source Docker volume driver that can snapshot, backup and restore Docker volumes anywhere. By [@rancher][rancher]

* [Azure Files Volume Driver](https://github.com/ahmetalpbalkan/azurefile-dockervolumedriver) - A Docker volume driver that allows you to mount persistent volumes backed by Microsoft Azure File Service. By [@ahmetalpbalkan][ahmetalpbalkan]

* [Docker Unison](https://github.com/leighmcculloch/docker-unison) A docker volume container using Unison for fast two-way folder sync. Created as an alternative to slow boot2docker volumes on OS X. By [@leighmcculloch](https://github.com/leighmcculloch)

* [Netshare](https://github.com/gondor/docker-volume-netshare) A Docker volume plugin written in Go that supports mounting NFS, AWS EFS & CIFS volumes within a container. By [@gondor][gondor]

* [Docker Machine NFS](https://github.com/adlogix/docker-machine-nfs) Activates NFS for an existing boot2docker box created through Docker Machine on OS X.

* [REX-Ray](https://github.com/emccode/rexray) Vendor agnostic storage orchestration engine to provide persistent storage for Docker containers as well as Mesos frameworks and tasks.

* [Local Persist](https://github.com/CWSpear/local-persist) Specify a mountpoint for your local volumes (created via `docker volume create`) so that files will always persist and so you can mount to different directories in different containers.
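
As a sketch of how a third-party volume driver like Local Persist is wired into a Compose file (assuming the plugin is already installed; the volume name and mountpoint below are illustrative):

```yaml
# Sketch: a named volume backed by the local-persist driver
version: "2"
services:
  db:
    image: postgres:9.5
    volumes:
      - pgdata:/var/lib/postgresql/data   # mount the named volume
volumes:
  pgdata:
    driver: local-persist        # driver name as registered by the plugin
    driver_opts:
      mountpoint: /data/pgdata   # host directory where data persists
```

The same volume can also be created up front with `docker volume create` and the matching `-d`/`-o` flags, then mounted by name from several containers.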
## Useful Images

* [Official Images from Docker Hub](https://github.com/docker-library/official-images)

* [Base Image](https://github.com/phusion/baseimage-docker) by [@phusion](https://github.com/phusion/)

* [Busybox](https://github.com/jpetazzo/docker-busybox) (with either `buildroot` or Ubuntu's `busybox-static`) by [@jpetazzo][jpetazzo]

* [OpenWRT](http://www.zoobab.com/docker-openwrt-image) by [@zoobab](https://github.com/zoobab)

* [Phusion Docker Hub Account](https://hub.docker.com/u/phusion/)

* [passenger-docker](https://github.com/phusion/passenger-docker) (Docker base images for Ruby, Python, Node.js and Meteor web apps) by [@phusion](https://github.com/phusion)

* [docker-alpine][alpine] (A super small Docker base image *(5MB)* using Alpine Linux) by [@gliderlabs][gliderlabs]

* [docker-fluentd][fluentd] (the Container to Log Other Containers' Logs) by [@kiyoto][kiyoto]

* [chaperone-docker](https://github.com/garywiz/chaperone-docker) (A set of images using the Chaperone process manager, including a lean Alpine image, LAMP, LEMP, and bare-bones base kits.)

* [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) (Build and run Docker containers leveraging NVIDIA GPUs.)

## Dockerfile

* [Collection of Dockerfiles](https://github.com/crosbymichael/Dockerfiles) by [@crosbymichael][crosbymichael]
* [Dockerfile Project](http://dockerfile.github.io/): Trusted Automated Docker Builds. The Dockerfile Project maintains a central repository of Dockerfiles for various popular open source software services runnable in a Docker container.
* [Dockerfile Example](https://github.com/komljen/dockerfile-examples) by [@komljen](https://github.com/komljen)
* [Dockerfile Example 2](https://github.com/kstaken/dockerfile-examples) by [@kstaken](https://github.com/kstaken)
* [Dockerfile @jfrazelle][jfrazelledockerfiles] by [@jfrazelle][jfrazelle] **MUST SEE** for a fully containerized desktop!

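For readers new to the format, a Dockerfile is a small build script for an image. Here is a minimal, illustrative sketch; the base image tag and the script it creates are assumptions for the example, not taken from any listed project:

```dockerfile
# Start from a small base image
FROM alpine:3.4

# Create a tiny script at build time
RUN echo 'echo "Hello from a container"' > /greet.sh && chmod +x /greet.sh

# Default command when the container starts
CMD ["/bin/sh", "/greet.sh"]
```

Such a file is typically built with `docker build -t greet .` and run with `docker run --rm greet`.
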
## Storing Images and Registries

* [Docker Registry v2][distribution] (The Docker toolset to pack, ship, store, and deliver content)
* [Rescoyl](https://github.com/noteed/rescoyl) (Private Docker registry) by [@noteed][noteed]
* [Atomic Registry](http://www.projectatomic.io/registry/) Red Hat Atomic Registry is an open source enterprise registry based on the Origin and Cockpit projects, enhancing the Docker registry library.
* [VMware Harbor](http://vmware.github.io/harbor/) Project Harbor by VMware is an enterprise-class registry server that stores and distributes Docker images. Harbor extends the open source Docker Distribution by adding the functionality usually required by an enterprise, such as security, identity and management.

## Monitoring

* [Axibase Time-Series Database](http://axibase.com/products/axibase-time-series-database/writing-data/docker-cadvisor/) (Long-term retention of container statistics and built-in dashboards for Docker, collected with the native Google cAdvisor storage driver.)
* [cAdvisor](https://github.com/google/cadvisor) (Analyzes resource usage and performance characteristics of running containers.) Created by [@Google](https://github.com/google)
* [Datadog](https://www.datadoghq.com/) Datadog is a full-stack monitoring service for large-scale cloud environments that aggregates metrics/events from servers, databases, and applications. It includes support for Docker, Kubernetes, and Mesos.
* [Dockerana](https://github.com/dockerana/dockerana) (Packaged version of Graphite and Grafana, specifically targeted at metrics from Docker.)
* [Docker-mon](https://github.com/icecrime/docker-mon) (Console-based Docker monitoring) by [@icecrime](https://github.com/icecrime)
* [Glances](http://nicolargo.github.io/glances/) (A cross-platform curses-based system monitoring tool written in Python) by [@nicolargo](https://github.com/nicolargo)
* [InfluxDB, cAdvisor, Grafana](https://github.com/vegasbrianc/docker-monitoring) (InfluxDB time-series DB in combination with Grafana and cAdvisor) by [@vegasbrianc][vegasbrianc]
* [Meros](https://meros.io) Analyzes container resources, captures logs, provides a remote web SSH terminal and powerful DevOps alerts.
* [New Relic](https://newrelic.com/docker) New Relic's Docker monitoring tool
* [Prometheus](https://prometheus.io/) (Open-source service monitoring system and time series database)
* [Ruxit](https://www.dynatrace.com/technologies/cloud-and-microservices/docker-monitoring/) Monitor containerized applications without installing agents or modifying your run commands
* [Seagull](https://github.com/tobegit3hub/seagull) (Friendly web UI to monitor the Docker daemon) by [@tobegit3hub](https://github.com/tobegit3hub)
* [Site24x7](https://www.site24x7.com/docker-monitoring.html) Docker monitoring for DevOps and IT, offered as SaaS with a pay-per-host model
* [Sysdig](http://www.sysdig.org/): An open source troubleshooting tool that provides a rich set of real-time, system-level information. It has container-specific features and is very useful in Docker environments.
* [Zabbix Docker module](https://github.com/monitoringartist/Zabbix-Docker-Monitoring): Zabbix module that provides discovery of running containers and CPU/memory/blk IO/net container metrics. The systemd Docker and LXC execution drivers are also supported. It's a dynamically linked shared object library, so its performance is ~10x better than any script solution.
* [SPM for Docker][spm] Monitoring of host and container metrics, Docker events and logs. Automatic log parser. Anomaly detection and alerting for metrics and logs. [@sematext][sematext]
* [Zabbix Docker](https://github.com/gomex/docker-zabbix) - Monitor containers automatically using the Zabbix LLD feature.
* [Collecting docker logs and stats with Splunk](http://blogs.splunk.com/2015/08/24/collecting-docker-logs-and-stats-with-splunk/)
* [Grafana Docker Dashboard Template](https://grafana.net/dashboards/179) - A template for your Docker, Grafana and Prometheus stack [@vegasbrianc][vegasbrianc]
* [DoMonit](https://github.com/eon01/DoMonit) - A simple Docker monitoring wrapper for the Docker API

## Networking

* [Calico-Docker](https://www.projectcalico.org/getting-started/docker/) - Calico is a pure layer 3 virtual network that allows containers over multiple docker-hosts to talk to each other.
* [Wagl](https://github.com/ahmetalpbalkan/wagl) - DNS service discovery for Docker Swarm (by [@ahmetalpbalkan][ahmetalpbalkan]) http://ahmetalpbalkan.github.io/wagl/
* [Weave][weave] (The Docker network) - Weave creates a virtual network that connects Docker containers deployed across multiple hosts.
* [Flannel](https://github.com/coreos/flannel/) - Flannel is a virtual network that gives a subnet to each host for use with container runtimes.

## Logging

* [Docker-Fluentd][fluentd]: (Docker container to log other containers' logs. One can aggregate the logs of Docker containers running on the same host using Fluentd.) by [@kiyoto][kiyoto]
* [LogJam](https://github.com/gocardless/logjam) (Logjam is a log forwarder designed to listen on a local port, receive log entries over UDP, and forward these messages on to a log collection server such as logstash.) by [@gocardless](https://github.com/gocardless)
* [Logspout](https://github.com/gliderlabs/logspout) (Log routing for Docker container logs) by [@gliderlabs][gliderlabs]
* [Logsene for Docker][spm] Monitoring of metrics, events and logs, implemented in Node.js. Integrates [logagent-js](https://github.com/sematext/logagent-js) to detect and parse various log formats. [@sematext][sematext]

## Deployment and Infrastructure

* [Centurion](https://github.com/newrelic/centurion): Centurion is a mass deployment tool for Docker fleets. It takes containers from a Docker registry and runs them on a fleet of hosts with the correct environment variables, host volume mappings, and port mappings. By [@newrelic](https://github.com/newrelic)
* [Clocker](https://github.com/brooklyncentral/clocker): Clocker creates and manages a Docker cloud infrastructure. Clocker supports single-click deployments and runtime management of multi-node applications that run as containers distributed across multiple hosts, on both Docker and Marathon. It leverages [Calico][calico] and [Weave][weave] for networking and [Brooklyn][brooklyn] for application blueprints. By [@brooklyncentral](https://github.com/brooklyncentral)
* [Cloud 66](http://www.cloud66.com) - Full-stack hosted container management as a service
* [deploy](https://github.com/Perennials/deploy) - Git and Docker deployment tool. A middle ground between simple Docker composition tools and full-blown cluster orchestration. Declarative configuration and short commands for managing (syncing, building, running) infrastructures of more than a few services. Able to deploy a whole preconfigured server or system of services with a single short command.
* [Docket](https://github.com/netvarun/docket): Custom Docker registry that allows for lightning-fast deploys through BitTorrent by [@netvarun](https://github.com/netvarun/)
* [Longshoreman](https://github.com/longshoreman/longshoreman): Longshoreman automates application deployment using Docker. Just create a Docker repository (or use a service), configure the cluster using AWS or Digital Ocean (or whatever you like) and deploy applications using a Heroku-like CLI tool. By [longshoreman](https://github.com/longshoreman)

## PaaS

* [Atlantis](https://github.com/ooyala/atlantis) - Atlantis is an open source PaaS for HTTP applications built on Docker and written in Go
* [Deis](https://github.com/deis/deis) (Your PaaS, your rules) -- http://deis.io/
* [Dokku][dokku] (Docker-powered mini-Heroku in around 100 lines of Bash) by [@progrium][progrium]
* [Empire](https://github.com/remind101/empire): A PaaS built on top of Amazon EC2 Container Service (ECS)
* [Flynn](https://github.com/flynn/flynn) (A next generation open source platform as a service) -- https://flynn.io/
* [OpenShift][openshift] (An open source PaaS built on [Kubernetes][kubernetes] and optimized for Dockerized app development and deployment) by [Red Hat](https://www.redhat.com/)
* [Tsuru](https://github.com/tsuru/tsuru) (Tsuru is an extensible and open source Platform as a Service software) -- https://tsuru.io/
* [Convox Rack](https://github.com/convox/rack): Convox Rack is an open source PaaS built on top of expert infrastructure automation and DevOps best practices.
* [Rancher][rancher]: Rancher is an open source project that provides a complete platform for operating Docker in production
* [Dcw](https://github.com/pbertera/dcw): Docker-compose SSH wrapper: a very poor man's PaaS, exposing the docker-compose and custom-container commands defined in container labels.

## Remote Container Manager / Orchestration

* [autodock](https://github.com/prologic/autodock) (Daemon for Docker automation) by [@prologic][prologic]
* [blimp](https://github.com/tubesandlube/blimp) Uses Docker Machine to easily move a container from one Docker host to another, show containers running against all of your hosts, replicate a container across multiple hosts and more. By [@defermat](https://github.com/defermat) and [@schvin](https://github.com/schvin)
* [Capitan](https://github.com/byrnedo/capitan) Composable Docker orchestration with added scripting support by [@byrnedo](https://github.com/byrnedo).
* [Citadel](https://github.com/citadel/citadel) (Citadel is a toolkit for scheduling containers on a Docker cluster) (unmaintained)
* [CloudSlang](http://www.cloudslang.io/) (CloudSlang is a workflow engine to create Docker process automation)
* [ContainerShip](https://github.com/containership/containership) (A simple container management platform) -- [containership]
* [CoreOS][coreos] (Linux for Massive Server Deployments) -- https://coreos.com/
* [Decking](http://decking.io/): (Decking aims to simplify the creation, organisation and running of clusters of Docker containers in a way which is familiar to developers)
* [Deploying a Containerized App on a Public Node with Mesos](https://docs.mesosphere.com/usage/tutorials/containerized-app/) (Docker plus Mesosphere provides an easy way to automate and scale deployment of containers in a production environment)
* [Flocker](https://github.com/ClusterHQ/flocker) (Flocker is a data volume manager and multi-host Docker cluster management tool) by [@ClusterHQ](https://github.com/ClusterHQ)
* [Gaudi](https://github.com/marmelab/gaudi) (Gaudi allows sharing multi-component applications, based on Docker, Go, and YAML) -- project discontinued.
* [Kontena](https://github.com/kontena/kontena) (Application Containers for Masses) -- https://www.kontena.io/
* [Kubernetes][kubernetes] (Open source orchestration system for Docker containers by Google) -- [kubernetes] See also [awesome-kubernetes](https://github.com/ramitsurana/awesome-kubernetes) by [@ramitsurana][ramitsurana]
* [Maestro](https://github.com/toscanini/maestro) (Maestro provides the ability to easily launch, orchestrate and manage multiple Docker containers as a single unit) by [@toscanini](https://github.com/toscanini)
* [Marathon](https://mesosphere.github.io/marathon/docs/) (Marathon is a private PaaS built on Mesos. It automatically handles hardware or software failures and ensures that an app is "always on")
* [Nomad Project](https://www.nomadproject.io/) Easily deploy applications at any scale. A distributed, highly available, datacenter-aware scheduler.
* [Panamax](https://github.com/CenturyLinkLabs/panamax-ui/wiki) (Docker Management for Humans) -- [panamax.io]
* [Rancher](https://github.com/rancher/rancher) (Portable AWS-style infrastructure service for Docker) -- http://rancher.com/
* [Fleet](https://github.com/coreos/fleet) (A distributed init system providing low-level orchestration) -- [coreos.com]
* [Serf](https://github.com/hashicorp/serf) (Service orchestration and management tool) by [@hashicorp](https://github.com/hashicorp)
* [Shipyard](https://github.com/shipyard/shipyard) (Composable Docker Management) -- http://shipyard-project.com/
* [MCollective Docker Agent](https://github.com/m4ce/mcollective-docker-agent) Uses MCollective to orchestrate your Docker containers and images -- [@m4ce](https://github.com/m4ce)
* [ElasticKube](https://github.com/ElasticBox/elastickube) Open source management platform for Kubernetes.
* [Mantl](https://github.com/ciscocloud/mantl) Mantl is a modern platform for rapidly deploying globally distributed services [@ciscocloud](http://mantl.io)

## Security

* [docker-bench-security](https://github.com/docker/docker-bench-security) Script that checks for dozens of common best practices around deploying Docker containers in production. By [@docker][docker]
* [notary](https://github.com/docker/notary) A server and a client for running and interacting with trusted collections. By [@docker][docker]
* [Twistlock](https://twistlock.com/) Twistlock Security Suite detects vulnerabilities, hardens container images, and enforces security policies across the lifecycle of applications.
* [Clair](https://github.com/coreos/clair) Clair is an open source project for the static analysis of vulnerabilities in appc and Docker containers. By [@coreos][coreos]

## Service Discovery

* [docker-consul](https://github.com/gliderlabs/docker-consul) by [@progrium][progrium]
* [etcd](https://github.com/coreos/etcd): A highly-available key-value store for shared configuration and service discovery by [@coreos][coreos]
* [Docker Grand Ambassador](https://github.com/cpuguy83/docker-grand-ambassador) A fully dynamic Docker link ambassador. + [Article](https://docs.docker.com/engine/articles/ambassador_pattern_linking/) by [@cpuguy83][cpuguy83]
* [proxy](https://github.com/factorish/proxy): Lightweight nginx-based load balancer using the service discovery provided by registrator. By [@factorish](https://github.com/factorish)
* [wagl](https://github.com/ahmetalpbalkan/wagl/): Service discovery for Docker Swarm using DNS

## Metadata

* [MicroBadger](https://microbadger.com) - Add metadata to Docker images using labels.

# Slides

* [Docker Slideshare Account](http://www.slideshare.net/Docker)
* [Docker Security](http://www.slideshare.net/jpetazzo) with [@jpetazzo][jpetazzo]
* [Hide your DEV ENV in a container](http://www.slideshare.net/JohanJanssen4/hide-your-development-environment-and-application-in-a-container) by [@johanjanssen42](https://twitter.com/johanjanssen42)
* [Docker for the new era](https://www.slideshare.net/ramitsurana/docker-for-the-new-era) by [@ramitsurana][ramitsurana]

# Videos

## Main Account

* [Docker Youtube Account](https://www.youtube.com/user/dockerrun)
* [CenturyLink Labs Docker Interviews](https://www.youtube.com/playlist?list=PL_q4Fk7SVBCIjyuCBFBItXnzGI3qBa2L1)
* [Container Camp](https://www.youtube.com/channel/UCvksXSnLqIVM_uFB7xyrsSg/videos) Conference about *containers*!!! [@containercamp](https://twitter.com/containercamp)
* [Quoi d'neuf Docker](https://www.youtube.com/channel/UCOAhkxpryr_BKybt9wIw-NQ/videos) **FRENCH** A YouTube series of short videos (15 minutes maximum) on the theme "Docker and its ecosystem" [Site Web](http://www.quoidneufdocker.xyz/)

## Useful videos

* [Ansible and Docker HP](https://www.youtube.com/watch?v=oZ45v8AeE7k) (32:38)
* [Container Hacks and Fun Images][jessvid] by [@jfrazelle][jfrazelle] @ DockerCon 2015 (**MUST WATCH VIDEO**: 38:50)
* [Contributing to Docker by Andrew "Tianon" Page (InfoSiftr)](https://www.youtube.com/watch?v=1jwo8-1HYYg) (34:31)
* [Docker for Developers][docker4dev] (54:26) by [@jpetazzo][jpetazzo] <== Good introduction, context, demo
* [Docker in Production](https://www.youtube.com/watch?v=Glk5d5WP6MI) by [@jpetazzo][jpetazzo] (36:05)
* [Introduction to Docker and containers](https://www.youtube.com/watch?v=ZVaRK10HBjo) (3:09:00) by [@jpetazzo][jpetazzo]
* [Deploying and scaling applications with Docker, Swarm, and a tiny bit of Python magic](https://www.youtube.com/watch?v=GpHMTR7P2Ms) (3:11:06) by [@jpetazzo][jpetazzo]
* [Docker: How to Use Your Own Private Registry](https://www.youtube.com/watch?v=CAewZCBT4PI) (15:01)
* [Docker and SELinux by Daniel Walsh from Red Hat](https://www.youtube.com/watch?v=zWGFqMuEHdw) (40:23)
* [Extending Docker with Plugins](https://vimeo.com/110835013) (15:21)
* [From Local Docker Development to Production Deployments](https://www.youtube.com/watch?v=7CZFpHUPqXw) by [@jpetazzo][jpetazzo] @ AWS re:Invent 2015
* [Immutable Infrastructure with Docker and EC2 by Michael Bryzek (Gilt)](https://www.youtube.com/watch?v=GaHzdqFithc) (42:04)
* [Logging on Docker: What You Need to Know][loggingDocker] (51:27)
* [Performance Analysis of Docker - Jeremy Eder](https://www.youtube.com/watch?v=6f2E6PKYb0w) (1:36:58)
* [Run Any App on Mesos on Any Infrastructure Using Docker](https://www.youtube.com/watch?v=u5jd9YT9EsY) (17:44)
* [State of containers: a debate with CoreOS, VMware and Google](https://www.youtube.com/watch?v=IiITP3yIRd8) (27:38)
* [SysAdminCasts: Introduction to Docker](https://sysadmincasts.com/episodes/31-introduction-to-docker) (15:49)
* [Scalable Microservices with Kubernetes](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) Free Udacity course

# Interactive Learning Environments

* [Katacoda](https://www.katacoda.com/): Learn Docker using interactive browser-based labs

# Interesting Twitter Accounts

* [Docker](https://twitter.com/docker)
* [CenturyLink Labs](https://twitter.com/CenturyLinkLabs)
* [Flux7Labs](https://twitter.com/Flux7Labs)
* [TutumCloud](https://twitter.com/tutumcloud)
* [Project Atomic](https://twitter.com/ProjectAtomic)
* [OpenShift by Red Hat](https://twitter.com/openshift)
* [YLD](https://twitter.com/YLDio)
* [The New Stack](https://twitter.com/thenewstack)
* [Docker News](https://twitter.com/dockernews)
* [Docker Captains Twitter List](https://twitter.com/EltonStoneman/lists/docker-captains)

## People

* [Solomon Hykes](https://twitter.com/solomonstre) Founder of Docker
* [Gabriel Monroy](https://twitter.com/gabrtv) Creator of Deis
* [Jérôme Petazzoni](https://twitter.com/jpetazzo) Docker developer
* [Michael Crosby](https://twitter.com/crosbymichael) Docker developer
* [James Turnbull][kartar] Author of The Docker Book
* [Jeff Lindsay](https://twitter.com/progrium) Design-minded software architect
* [Jessie Frazelle](https://twitter.com/jessfraz) Former @docker maintainer who runs a fully containerized desktop; lots of fun.
* [Docker Captains](https://www.docker.com/community/docker-captains) - Docker experts and community leaders

[blockbridge]: https://github.com/blockbridge
[weave]: https://github.com/weaveworks/weave
[calico]: https://github.com/projectcalico/calico-containers
[brooklyn]: http://brooklyn.apache.org/
[kubernetes]: http://kubernetes.io
[openshift]: https://www.openshift.org/
[sindresorhus]: https://github.com/sindresorhus/awesome
[editREADME]: https://github.com/veggiemonk/awesome-docker/edit/master/README.md
[jpetazzo]: https://github.com/jpetazzo
[panamax.io]: http://panamax.io/
[docker4dev]: https://www.youtube.com/watch?v=FdkNAjjO5yQ
[loggingDocker]: https://vimeo.com/123341629
[docker-cheat-sheet]: https://github.com/wsargent/docker-cheat-sheet
[wsargent]: https://github.com/wsargent
[docker-quick-ref]: https://github.com/dimonomid/docker-quick-ref
[dimonomid]: https://github.com/dimonomid
[projwebdev]: http://project-webdev.blogspot.de
[jessblog]: https://blog.jessfraz.com/post/docker-containers-on-the-desktop/
[jfrazelle]: https://github.com/jfrazelle
[jfrazelledotfiles]: https://github.com/jfrazelle/dotfiles
[jfrazelledockerfiles]: https://github.com/jfrazelle/dockerfiles
[jessvid]: https://www.youtube.com/watch?v=1qlLUf7KtAw
[progrium]: https://github.com/progrium
[jwilder]: https://github.com/jwilder
[crosbymichael]: https://github.com/crosbymichael
[gliderlabs]: https://github.com/gliderlabs
[gesellix]: https://github.com/gesellix
[prologic]: https://github.com/prologic
[fgrehm]: https://github.com/fgrehm
[ianmiell]: https://github.com/ianmiell
[distribution]: https://github.com/docker/distribution
[cpuguy83]: https://github.com/cpuguy83
[percheron]: https://github.com/ashmckenzie/percheron
[CenturyLinkLabs]: https://github.com/CenturyLinkLabs
[gondor]: https://github.com/gondor
[noteed]: https://github.com/noteed
[nginxproxy]: https://github.com/jwilder/nginx-proxy
[dokku]: https://github.com/dokku/dokku
[ahmetalpbalkan]: https://github.com/ahmetalpbalkan
[alpine]: https://github.com/gliderlabs/docker-alpine
[fluentd]: https://github.com/kiyoto/docker-fluentd
[kiyoto]: https://github.com/kiyoto
[spm]: https://github.com/sematext/sematext-agent-docker
[coreos]: https://github.com/coreos
[docker]: https://github.com/docker
[sematext]: https://twitter.com/sematext
[sebgoa]: https://twitter.com/sebgoa
[kartar]: https://twitter.com/kartar
[docker-compose]: https://docs.docker.com/compose/
[containership]: https://containership.io
[rancher]: https://github.com/rancher
[ramitsurana]: https://github.com/ramitsurana
[vegasbrianc]: https://github.com/vegasbrianc

Awesome-Kubernetes
=======================================================================

[![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)
[![Build Status](https://travis-ci.org/ramitsurana/awesome-kubernetes.svg?branch=master)](https://travis-ci.org/ramitsurana/awesome-kubernetes)
[![License](https://img.shields.io/badge/License-CC%204.0-brightgreen.svg)](http://creativecommons.org/licenses/by-nc/4.0/)

A curated list of awesome Kubernetes resources,
inspired by [@sindresorhus' awesome](https://github.com/sindresorhus/awesome).

![k8](https://cloud.githubusercontent.com/assets/8342133/13547481/fcb5ffb0-e2fa-11e5-8f75-555cea5eb7b2.png)

> "Talent wins games, but teamwork and intelligence wins championships."
>
> -- Michael Jordan

Without the help from these [amazing contributors](https://github.com/ramitsurana/awesome-kubernetes/graphs/contributors),
building this awesome repo would never have been possible. Thank you very much, everyone!

**Thanks to GitBook, this awesome list can also be downloaded and read as a book. Check it out: https://www.gitbook.com/book/ramitsurana/awesome-kubernetes/ . Keep learning, keep sharing!**

**If you see a package or project here that is no longer maintained or is not a good fit, please submit a pull request to improve this file. Thank you!**

**If you are interested in becoming a maintainer of the awesome-kubernetes list, please drop me a mail at ramitsurana@gmail.com with your name and GitHub id. Thanks!**

## What is Kubernetes? :ship:

> Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions. Using the concepts of "labels" and "pods", it groups the containers which make up an application into logical units for easy management and discovery.

_Source:_ [What is Kubernetes](http://kubernetes.io/)

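The "labels" and "pods" concepts above can be illustrated with a minimal pod manifest. The pod name, label values, and container image here are illustrative assumptions, not taken from any specific project:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-web            # illustrative pod name
  labels:
    app: hello               # label used to group and select this pod
spec:
  containers:
    - name: web
      image: nginx:1.9       # illustrative container image
      ports:
        - containerPort: 80
```

Saved as `pod.yaml`, such a manifest can be created with `kubectl create -f pod.yaml`, and the pod can then be found by its label with `kubectl get pods -l app=hello`.
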
## History:

**Kubernetes is a descendant of Google's internal cluster-management system, Borg.**

> The first unified container-management system developed at Google was the system we internally call Borg. It was built to manage both long-running services and batch jobs, which had previously been handled by two separate systems: Babysitter and the Global Work Queue. The latter's architecture strongly influenced Borg, but was focused on batch jobs; both predated Linux control groups.

_Source:_ [Kubernetes Past](http://research.google.com/pubs/archive/44843.pdf)

## Date of Birth:

Kubernetes celebrates its birthday every year on 21st July. The project was born in 2015.

## Roadmap

The awesome-kubernetes list will soon be available as different releases and package bundles, meaning you can download the awesome-kubernetes list as it stood at a certain point in time. The awesome-kubernetes 2015 bundle has been released; check out the releases section for more info. Stay tuned for more updates.

-----------------------------------------------------------------------

Menu
=======================================================================

* [Starting Point](#starting-point)
* [Installers](#installers)
* [Main Resources](#main-resources)
* [Useful Articles](#useful-articles)
* [Cloud Providers](#cloud-providers)
* [Case Studies](#case-studies)
* [Persistent Volume Providers](#persistent-volume-providers)
* [Related Projects](#related-projects)
* [Related Software](#related-software)
* [Enterprise Kubernetes Products](#enterprise-kubernetes-products)
* [Monitoring Services](#monitoring-services)
* [PaaS Providers](#paasproviders)
* [Continuous Delivery](#continuous-delivery)
* [Serverless Implementation](#serverless-implementation)
* [Operators](#operators)
* [Custom Schedulers](#custom-schedulers)
* [Container Support](#container-support)
* [Database/NoSQL](#database)
* [Networking](#networking)
* [Service mesh](#service-mesh)
* [RPC](#rpc)
* [Secret generation and management](#secret-generation-and-management)
* [Desktop applications](#desktop-applications)
* [Mobile applications](#mobile-applications)
* [API/CLI adaptors](#apicli-adaptors)
* [Application deployment orchestration](#application-deployment-orchestration)
* [Configuration](#configuration)
* [Security](#security)
* [Managed Kubernetes](#managed-kubernetes)
* [Load balancing](#load-balancing)
* [Developer platform](#developer-platform)
* [Big Data](#big-data)
* [Service Discovery](#service-discovery)
* [Operating System](#operating-system)
* [Raspberry Pi](#raspberry-pi)
* [Books](#books) :books:
* [Slide Presentations](#slide-presentations)
* [Videos](#videos) :tv:
* [Main Account](#main-account)
* [Other Useful videos](#other-useful-videos)
* [Interactive Learning Environments](#interactive-learning-environments)
* [Interesting Twitter Accounts](#interesting-twitter-accounts)
* [Amazing People](#amazing-people)
* [Meetup Groups](#meetup-groups)
* [Connecting with Kubernetes](#connecting-with-kubernetes)
* [Conferences](#conferences)
* [Contributing](#contributing)
* [License](#license)

-----------------------------------------------------------------------

Starting Point
=======================================================================

*A place that marks the beginning of a journey*

* [Are you Ready to Manage your Infrastructure like Google?](https://blog.jetstack.io/blog/k8s-getting-started-part1/)
* [Google is years ahead when it comes to the cloud, but it's happy the world is catching up](http://www.businessinsider.in/Google-is-years-ahead-when-it-comes-to-the-cloud-but-its-happy-the-world-is-catching-up/articleshow/47793327.cms)
* [An Intro to Google’s Kubernetes and How to Use It](https://www.ctl.io/developers/blog/post/what-is-kubernetes-and-how-to-use-it/) by [Laura Frank](https://twitter.com/rhein_wein)
* [Getting Started on Kubernetes](http://containertutorials.com/get_started_kubernetes/index.html) by [Rajdeep Dua](https://twitter.com/rajdeepdua)
* [Kubernetes: The Future of Cloud Hosting](http://meteorhacks.com/learn-kubernetes-the-future-of-the-cloud/) by [Meteorhacks](https://twitter.com/meteorhacks)
* [Kubernetes by Google](https://thevirtualizationguy.wordpress.com/tag/kubernetes/) by [Gaston Pantana](https://twitter.com/GastonPantana)
* [Key Concepts](http://blog.arungupta.me/key-concepts-kubernetes/) by [Arun Gupta](https://twitter.com/arungupta)
* [Application Containers: Kubernetes and Docker from Scratch](http://keithtenzer.com/2015/06/01/application-containers-kubernetes-and-docker-from-scratch/) by [Keith Tenzer](https://twitter.com/keithtenzer)
* [Learn the Kubernetes Key Concepts in 10 Minutes](http://omerio.com/2015/12/18/learn-the-kubernetes-key-concepts-in-10-minutes/) by [Omer Dawelbeit](https://twitter.com/omerio)
* [Top Reasons Businesses Should Move to Kubernetes Now](https://supergiant.io/blog/top-reasons-businesses-should-move-to-kubernetes-now) by [Mike Johnston](https://github.com/gopherstein)
* [The Children's Illustrated Guide to Kubernetes](https://deis.com/blog/2016/kubernetes-illustrated-guide/) by [Deis](https://github.com/deis)
* [The ‘kubectl run’ command](https://medium.com/@mhausenblas/the-kubectl-run-command-27c68de5cb76#.mlwi5an7o) by [Michael Hausenblas](https://twitter.com/mhausenblas)
* [Docker Kubernetes Lab Handbook](https://github.com/xiaopeng163/docker-k8s-lab) by [Peng Xiao](https://twitter.com/xiaopeng163)

Installers
=======================================================================

* [Minikube](https://github.com/kubernetes/minikube) - Run Kubernetes locally
* [Kops](https://github.com/kubernetes/kops) - OS Agnostic - AWS - [Apache-2.0](https://github.com/kubernetes/kops/blob/master/LICENSE)
* [Kube-deploy](https://github.com/kubernetes/kube-deploy)
* [Kubeadm](http://kubernetes.io/docs/admin/kubeadm/) - OS Agnostic - Cloud Agnostic - [Apache-2.0](https://github.com/kubernetes/kubeadm/blob/master/LICENSE)
* [Kargo](https://github.com/kubernetes-incubator/kargo) - OS Agnostic - Cloud Agnostic - [Apache-2.0](https://github.com/kubernetes-incubator/kargo/blob/master/LICENSE)
* [Bootkube](https://github.com/kubernetes-incubator/bootkube) - CoreOS - Cloud Agnostic - [Apache-2.0](https://github.com/kubernetes-incubator/bootkube/blob/master/LICENSE)
* [Kube-aws](https://github.com/coreos/kube-aws) - CoreOS - AWS - [Apache-2.0](https://github.com/coreos/kube-aws/blob/master/CONTRIBUTING.md)
* [Kismatic](https://github.com/apprenda/kismatic) - CentOS - Cloud Agnostic - [Apache-2.0](https://github.com/apprenda/kismatic/blob/master/LICENSE)
* [Juju](https://jujucharms.com/canonical-kubernetes) - Ubuntu - Cloud Agnostic - [Proprietary](https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/235)
* [Terraform](https://github.com/kz8s/tack) - CoreOS - AWS - [MIT](https://github.com/kz8s/tack/blob/master/LICENSE)
* [Supergiant](http://supergiant.io/) - CoreOS - Cloud Agnostic - [Apache-2.0](https://github.com/supergiant/supergiant/blob/master/LICENSE)

Main Resources
=======================================================================

*Official resources from the Kubernetes team*

* [Kubernetes Documentation](http://kubernetes.io/docs/)
* [Kubernetes Source](https://github.com/kubernetes/kubernetes/)
* [Kubernetes Troubleshooting](http://kubernetes.io/docs/troubleshooting/)

Useful Articles
=======================================================================

*A piece of writing included with others in a newspaper, magazine, or other publication*

* [Kubernetes: Getting Started With a Local Deployment](http://www.jetstack.io/new-blog/2015/7/6/getting-started-with-a-local-deployment)
* [Installation on CentOS 7](http://severalnines.com/blog/installing-kubernetes-cluster-minions-centos7-manage-pods-services)
* [Packaging Multiple Resources together](http://blog.arungupta.me/kubernetes-application-package-multiple-resources-together/)
* [An Introduction to Kubernetes](https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes) by [Justin Ellingwood](https://twitter.com/jmellingwood)
* [Scaling Docker with Kubernetes](http://www.infoq.com/articles/scaling-docker-with-kubernetes) by [Carlos Sanchez](https://twitter.com/csanchez)
* [Creating a Kubernetes Cluster to Run Docker Formatted Container Images](https://access.redhat.com/articles/1353773) by [Chris Negus](https://twitter.com/linuxcricket)
* [Containerizing Docker on Kubernetes!](https://www.linkedin.com/pulse/containerizing-docker-kubernetes-ramit-surana) by [Ramit Surana](https://twitter.com/ramitsurana)
* [Running Kubernetes Example on CoreOS, Part 1](https://coreos.com/blog/running-kubernetes-example-on-CoreOS-part-1/) by [Kelsey Hightower](https://twitter.com/kelseyhightower)
* [Play With Kubernetes Quickly Using Docker](https://zwischenzugs.wordpress.com/2015/04/06/play-with-kubernetes-quickly-using-docker/)
* [1 command to Kubernetes with Docker compose](http://sebgoa.blogspot.in/2015/04/1-command-to-kubernetes-with-docker.html) by [Sebastien Goasguen](https://twitter.com/sebgoa)
* [Nginx Server Deployment using Kubernetes](http://containertutorials.com/get_started_kubernetes/k8s_example.html) by [Rajdeep Dua](https://www.twitter.com/rajdeepdua)
* [What even is a kubelet?](http://kamalmarhubi.com/blog/2015/08/27/what-even-is-a-kubelet/) by [Kamal Marhubi](https://twitter.com/kamalmarhubi)
* [Kubernetes from the ground up: the API server](http://kamalmarhubi.com/blog/2015/09/06/kubernetes-from-the-ground-up-the-api-server/) by [Kamal Marhubi](https://twitter.com/kamalmarhubi)
* [Kubernetes 101 – Networking](http://www.dasblinkenlichten.com/kubernetes-101-networking/) by [Jon Langemak](https://twitter.com/blinken_lichten)
* [Dynamic Kubernetes installation/configuration with SaltStack](http://www.dasblinkenlichten.com/dynamic-kubernetes-installationconfiguration-with-saltstack/) by [Jon Langemak](https://twitter.com/blinken_lichten)
* [Deploying Kubernetes with SaltStack](http://www.dasblinkenlichten.com/deploying-kubernetes-with-saltstack/) by [Jon Langemak](https://twitter.com/blinken_lichten)
* [Logging in Kubernetes with Fluentd and Elasticsearch](http://www.dasblinkenlichten.com/logging-in-kubernetes-with-fluentd-and-elasticsearch/) by [Jon Langemak](https://twitter.com/blinken_lichten)
* [Corekube: Running Kubernetes on CoreOS via OpenStack](https://developer.rackspace.com/blog/running-coreos-and-kubernetes/) by [Mike Metral](https://twitter.com/mikemetral)
* [Networking Kubernetes Clusters on CoreOS with Weave](http://www.weave.works/guides/networking-kubernetes-clusters-on-coreos-with-weave/) by [Weaveworks](https://twitter.com/weaveworks)
* [CoreOS + Kubernetes Step By Step](https://coreos.com/kubernetes/docs/latest/getting-started.html) by [CoreOS](https://twitter.com/coreoslinux)
* [Deploying to Kubernetes with Panamax](https://www.ctl.io/developers/blog/post/deploying-to-kubernetes-with-panamax/) by [Brian DeHamer](https://twitter.com/bdehamer)
* [Deploy Kubernetes with a Single Command Using Atomicapp](http://www.projectatomic.io/blog/2015/08/fun-with-kubenetes-and-atomicapp/) by [Jason Brooks](https://twitter.com/jasonbrooks)
* [Deploying a Bare Metal Kubernetes Cluster](http://blog.jameskyle.org/2014/08/deploying-baremetal-kubernetes-cluster/) by [James Kyle](https://twitter.com/jameskyle75)
* [AWS Advent 2014 - CoreOS and Kubernetes on AWS](http://awsadvent.tumblr.com/post/104260597799/aws-advent-2014-coreos-and-kubernetes-on-aws) by [Tim Dysinger](https://twitter.com/dysinger)
* [Kubernetes and AWS VPC Peering](http://ben.straub.cc/2015/08/19/kubernetes-aws-vpc-peering/) by [Ben Straub](https://twitter.com/benstraub)
* [Deploy a Kubernetes development cluster with Juju!](https://insights.ubuntu.com/2015/07/23/deploy-a-kubernetes-development-cluster-with-juju-2/) by [Matt Bruzek](https://twitter.com/mattatcanonical)
* [Try Kubernetes with Vagrant](http://lollyrock.com/articles/kubernetes-vagrant/) by [Christoph Hartmann](https://twitter.com/chri_hartmann)
* [Keycloak on Kubernetes with OpenShift 3](http://blog.keycloak.org/2015/04/keycloak-on-kubernetes-with-openshift-3.html) by [Marko Strukelj](https://twitter.com/mstruk2000)
* [Kubernetes clusters with Oh-My-Vagrant](https://ttboj.wordpress.com/2015/05/02/kubernetes-clusters-with-oh-my-vagrant/) by [James](https://twitter.com/#!/purpleidea)
* [Fleet Unit Files for Kubernetes on CoreOS](http://blog.michaelhamrah.com/2015/06/fleet-unit-files-for-kubernetes-on-coreos/) by [Michael Hamrah](https://twitter.com/mhamrah)
* [Kubernetes on AWS](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html) by [CoreOS](https://twitter.com/coreoslinux)
* [Testing Kubernetes on AWS](http://alanwill.me/Testing-Kubernetes-on-AWS/) by [Alan Will](https://twitter.com/alanwill)
* [Kubernetes: First steps on Amazon AWS](http://blog.dutchcoders.io/kubernetes-first-steps-on-amazon-aws/) by [Remco](http://blog.dutchcoders.io/author/remco/)
* [Kubernetes Container Orchestration through Java APIs](http://keithtenzer.com/2015/05/04/kubernetes-container-orchestration-through-java-apis/) by [Keith Tenzer](https://twitter.com/keithtenzer)
* [Containers at Scale with Kubernetes on OpenStack](http://keithtenzer.com/2015/04/15/containers-at-scale-with-kubernetes-on-openstack/) by [Keith Tenzer](https://twitter.com/keithtenzer)
* [Installing cAdvisor and Heapster on bare metal Kubernetes](http://www.dasblinkenlichten.com/installing-cadvisor-and-heapster-on-bare-metal-kubernetes/) by [Jon Langemak](https://twitter.com/blinken_lichten)
* [Docker Clustering Tools Compared: Kubernetes vs Docker Swarm](http://technologyconversations.com/2015/11/04/docker-clustering-tools-compared-kubernetes-vs-docker-swarm/)
* [Comparison of Networking Solutions for Kubernetes](http://machinezone.github.io/research/networking-solutions-for-kubernetes/)
* [Why Docker and Google Kubernetes Are Like PaaS Done Right](https://www.sdxcentral.com/articles/news/why-docker-and-google-kubernetes-are-like-paas-done-right/2015/08/)
* [Kubernetes Authentication plugins and kubeconfig](http://www.dasblinkenlichten.com/kubernetes-authentication-plugins-and-kubeconfig/) by [Jon Langemak](https://twitter.com/blinken_lichten)
* [Kubernetes with SaltStack revisited](http://www.dasblinkenlichten.com/kubernetes-with-saltstack-revisited/) by [Jon Langemak](https://twitter.com/blinken_lichten)
* [Kubernetes Authentication - OpenID Connect](http://www.devoperandi.com/kubernetes-authentication-openid-connect/) by [Michael Ward](https://twitter.com/DevoperandI)
* [Logging - Kafka topic by namespace](http://www.devoperandi.com/logging-kafka-topic-by-kubernetes-namespace/) by [Michael Ward](https://twitter.com/DevoperandI)
* [Achieving CI/CD with Kubernetes](http://theremotelab.com/blog/achieving-ci-cd-with-k8s/) by [Ramit Surana](https://twitter.com/ramitsurana)
* [Kubernetes Monitoring Guide](https://www.datadoghq.com/blog/monitoring-kubernetes-era/) by [JM Saponaro](http://github.com/JayJayM)
* [Deploying Kubernetes with Ansible and Terraform](http://solinea.com/blog/deploying-kubernetes-ansible-terraform)
* [Cluster Consul using Kubernetes API](http://www.devoperandi.com/cluster-consul-using-kubernetes-api/)
* [Continuous Deployment with Google Container Engine and Kubernetes](https://semaphoreci.com/community/tutorials/continuous-deployment-with-google-container-engine-and-kubernetes)
* [Handling Sensitive Data In A Docker Application with Kubernetes Secrets](https://scotch.io/tutorials/google-cloud-platform-iii-handling-sensitive-data-in-a-docker-application-with-kubernetes-secrets) by [John Kariuki](https://twitter.com/_kar_is)
* [How to Create and Use Kubernetes Secrets](http://linoxide.com/containers/create-use-kubernetes-secrets/) by [Mohamed Ez Ez](http://linoxide.com/author/mohamedez/)
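
The Secrets articles above all hinge on the same detail: a Secret manifest stores its values base64-encoded under the `data` field. A minimal sketch of building such a manifest in Python (the manifest shape follows the v1 Secret API; the secret name and values here are hypothetical examples):

```python
import base64
import json

def make_secret(name, string_data):
    """Build a v1 Secret manifest; values under `data` must be base64-encoded."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "Opaque",
        "data": {
            key: base64.b64encode(value.encode()).decode()
            for key, value in string_data.items()
        },
    }

# Hypothetical values; the resulting JSON could be applied with `kubectl create -f`.
manifest = make_secret("db-credentials", {"username": "admin", "password": "s3cr3t"})
print(json.dumps(manifest, indent=2))
```

Note that base64 is an encoding, not encryption, which is why the articles above pair Secrets with RBAC and encrypted storage.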

Cloud Providers
=======================================================================

* [GCE](https://cloud.google.com/compute/) - Google Compute Engine [default]
* [GKE](https://cloud.google.com/container-engine/) - Google Container Engine
* [AWS](http://aws.amazon.com/ec2) - Amazon EC2
* [Azure](https://azure.microsoft.com/en-in/) - Microsoft Azure
* [vSphere](http://www.vmware.com/products/vsphere.html) - VMware vSphere
* [Rackspace](https://www.rackspace.com/en-in) - Rackspace
* [Eldarion Cloud](http://eldarion.cloud/)
* [StackPoint Cloud](https://stackpointcloud.com/)

Case Studies
=======================================================================

*Case studies of Kubernetes in production*

* [Building a Bank with Kubernetes](https://monzo.com/blog/2016/09/19/building-a-modern-bank-backend/)
* [Bringing Pokémon GO to Google Cloud](https://cloudplatform.googleblog.com/2016/09/bringing-Pokemon-GO-to-life-on-Google-Cloud.html)
* [Monitoring Kubernetes at WayBlazer](https://sysdig.com/blog/monitoring-docker-kubernetes-wayblazer/)

Persistent Volume Providers
=======================================================================

*A list of some Persistent Volume providers for Kubernetes. Check out [Kubernetes Experimental](https://github.com/kubernetes/kubernetes/tree/master/examples/persistent-volume-provisioning) for more info*

* [GCE](https://cloud.google.com/compute/)
* [AWS](http://aws.amazon.com)
* [GlusterFS](https://www.gluster.org/)
* [OpenStack Cinder](https://wiki.openstack.org/cinder)
* [Ceph RBD](http://ceph.com/ceph-storage/block-storage/)
* [Quobyte](https://www.quobyte.com/)

Related Projects
=======================================================================

*Kubernetes-related projects that you might find helpful*

## Related Software

*Projects built to make life with Kubernetes even better, more powerful, more scalable*

* [Hypernetes](https://github.com/hyperhq/hypernetes)
* [Ubernetes](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federation.md)
* [kmachine](https://github.com/skippbox/kmachine)
* [Supergiant](https://supergiant.io)
* [Kubefuse](https://opencredo.com/introducing-kubefuse-file-system-kubernetes/)
* [KubeSpray](https://github.com/kubespray)
* [Bootkube](https://github.com/coreos/bootkube)
* [Localkube](https://github.com/redspread/localkube)
* [Kubernetes EC2 Autoscaler](https://github.com/openai/kubernetes-ec2-autoscaler)
* [Kubeform](https://capgemini.github.io/kubeform/)
* [k8comp](https://github.com/cststack/k8comp)
* [kube-openvpn](https://github.com/pieterlange/kube-openvpn)
* [Client Libraries](https://github.com/kubernetes/community/blob/master/contributors/devel/client-libraries.md)

## Package Managers

* [Helm](http://helm.sh)
* [KPM](https://github.com/coreos/kpm)

## Monitoring Services

*To maintain regular surveillance over your Kubernetes clusters*

* [Console](https://github.com/kubernetes/dashboard)
* [Datadog](https://www.datadoghq.com/)
* [Heapster](https://github.com/kubernetes/heapster)
* [Kubedash](https://github.com/kubernetes/kubedash)
* [Kube-ui](https://github.com/kubernetes/kube-ui)
* [Prometheus](http://prometheus.io)
* [Sysdig Open Source](http://www.sysdig.org/)
* [Sysdig Monitoring](https://www.sysdig.com/)
* [Weave Scope](https://www.weave.works/products/weave-scope/)

## Enterprise Kubernetes Products

* [CoreOS Tectonic](https://tectonic.com)

## PaaS Providers

*Kubernetes Platform as a Service providers*

* [OpenShift](https://www.openshift.com/)
* [Deis Workflow](https://deis.com/)
* Gondor/Kel
* [WSO2](http://wso2.com)
* [Rancher](http://rancher.com/running-kubernetes-aws-rancher/)
* [Kumoru](http://kumoru.io/)

## Continuous Delivery

*Build-test-deploy automated workflow software designed to make production environments more stable and life better for engineers*

* [Jenkins](https://jenkins.io)
* [Jenkins-Kubernetes Plugin](https://github.com/jenkinsci/kubernetes-plugin) by [Carlos Sanchez](https://www.twitter.com/csanchez)
* [Automated Image Builds with Jenkins, Packer, and Kubernetes](https://cloud.google.com/solutions/automated-build-images-with-jenkins-kubernetes#kubernetes_architecture)
* [On-demand Jenkins slaves with Kubernetes and the Google Container Engine](https://www.cloudbees.com/blog/demand-jenkins-slaves-kubernetes-and-google-container-engine)
* [Jenkins setups for Kubernetes and Docker Workflow](http://iocanel.blogspot.in/2015/09/jenkins-setups-for-kubernetes-and.html)
* [kb8or](https://github.com/UKHomeOffice/kb8or)
* [Wercker](http://blog.wercker.com/topic/kubernetes)
* [Shippable](http://blog.shippable.com/topic/kubernetes)
* [GitLab](http://blog.lwolf.org/post/how-to-easily-deploy-gitlab-on-kubernetes/)
* [Cloudmunch](http://www.cloudmunch.com/continuous-delivery-for-kubernetes/)
* [Kontinuous](https://github.com/AcalephStorage/kontinuous)
* [Kit](https://invisionapp.github.io/kit/)
* [Spinnaker](http://www.spinnaker.io/blog/deploy-to-kubernetes-using-spinnaker)

## Serverless Implementations

* [Funktion](https://github.com/fabric8io/funktion)
* [Fission](https://github.com/platform9/fission)
* [Kubeless](https://github.com/skippbox/kubeless)
* OpenWhisk
* [Iron.io](http://iron.io)

## Operators

* [etcd](https://github.com/coreos/etcd-operator)
* [Prometheus](https://github.com/coreos/prometheus-operator)
* [Elasticsearch](https://github.com/upmc-enterprises/elasticsearch-operator)

## Custom Schedulers

* [Scheduler](https://github.com/kelseyhightower/scheduler) - Example cost-based scheduler by [Kelsey Hightower](https://twitter.com/kelseyhightower)
* [Sticky Node Scheduler](https://github.com/philipn/kubernetes-sticky-node-scheduler)
* [ksched](https://github.com/coreos/ksched) - Experimental flow-based scheduler
* [kronjob](https://github.com/Eneco/kronjob) - Scheduler for recurring jobs
* [escheduler](https://github.com/agonzalezro/escheduler) - Example scheduler written in Elixir
* [bashScheduler](https://github.com/rothgar/bashScheduler) - Example scheduler written in Bash
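
The example schedulers above all follow the same loop: list unscheduled pods, score the candidate nodes, and bind each pod to the winner. A toy sketch of the scoring step, in the spirit of the cost-based example (the node names, prices, and CPU figures are hypothetical; a real scheduler would read nodes and their capacity from the API server):

```python
def cheapest_fitting_node(pod_cpu_request, nodes):
    """Pick the lowest-cost node with enough free CPU; None if nothing fits."""
    candidates = [n for n in nodes if n["free_cpu"] >= pod_cpu_request]
    if not candidates:
        return None
    return min(candidates, key=lambda n: n["hourly_cost"])["name"]

# Hypothetical inventory; a real scheduler would query the API server instead.
nodes = [
    {"name": "node-a", "free_cpu": 2.0, "hourly_cost": 0.10},
    {"name": "node-b", "free_cpu": 4.0, "hourly_cost": 0.05},
    {"name": "node-c", "free_cpu": 0.5, "hourly_cost": 0.01},
]
print(cheapest_fitting_node(1.0, nodes))
```

After picking a node, a custom scheduler completes the loop by POSTing a Binding object for the pod; the projects above show how.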

## Container Support

*A list of container runtimes supported by Kubernetes*

* [Docker](http://docker.com)
* [rkt](http://coreos.com/rkt)
* [Rktnetes](http://kubernetes.io/docs/getting-started-guides/rkt/)
* containerd
* CRI-O (OCI)
* Hyper.sh/frakti

## Database

* [CockroachDB](https://www.cockroachlabs.com/blog/running-cockroachdb-on-kubernetes/)
* [Cassandra / DataStax](http://blog.kubernetes.io/2016/07/thousand-instances-of-cassandra-using-kubernetes-pet-set.html)
* [MongoDB](https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes)
* [Hazelcast](https://ppires.wordpress.com/2014/12/24/clustering-hazelcast-on-kubernetes/)
* [Crate](https://crate.io/docs/scale/kubernetes/)
* [Minio](http://minio.io)
* [Vitess](http://vitess.io/) - Horizontal scaling of MySQL by YouTube

## Networking

* [WeaveWorks](https://www.weave.works/)
* [Canal](https://github.com/tigera/canal) by [Tigera](https://github.com/tigera)
* [OpenContrail](https://github.com/Juniper/contrail-kubernetes)
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes)
* [Kuryr](https://github.com/openstack/kuryr-kubernetes)
* [Contiv](http://contiv.github.io/)

## Service mesh

* [Envoy](http://lyft.github.io/envoy/)
* [Linkerd](https://linkerd.io/getting-started/k8s/)
* [Amalgam8](http://www.amalgam8.io/)
* [WeaveWorks](http://www.weave.works/weave-for-kubernetes/)

## RPC

* [gRPC](http://grpc.io)
* [Micro](https://github.com/micro/kubernetes)

## Secret generation and management

* [Vault controller](https://github.com/kelseyhightower/vault-controller)
* [kube-lego](https://github.com/jetstack/kube-lego)
* [k8sec](https://github.com/dtan4/k8sec)

## Desktop applications

* [Kubernetic](https://kubernetic.com/)

## Mobile applications

* [Cabin](http://www.skippbox.com/cabin/)
* [Cockpit](http://cockpit-project.org/guide/latest/feature-kubernetes.html)

## API/CLI adaptors

* [Kubebot](https://github.com/harbur/kubebot)
* [StackStorm](https://github.com/StackStorm/st2)
* [Kubefuse](https://opencredo.com/introducing-kubefuse-file-system-kubernetes/)
* [Ksql](https://github.com/brendandburns/ksql)
* [kubectld](https://github.com/rancher/kubectld)
* [Kubesh](https://github.com/projectodd/kubernetes/blob/kubesh/cmd/kubesh/README.md) - An interactive shell for kubectl

## Application deployment orchestration

* [ElasticKube](https://elasticbox.com/kubernetes)
* [AppController](https://github.com/Mirantis/k8s-AppController)
* [Broadway](https://github.com/namely/broadway)
* [Kb8or](https://github.com/UKHomeOffice/kb8or)
* [IBM UrbanCode](https://developer.ibm.com/urbancode/plugin/kubernetes/)
* [Nulecule](https://github.com/projectatomic/nulecule)
* [Deployment manager](https://cloud.google.com/deployment-manager/)

## Configuration

* Kompose
* [Jsonnet](https://github.com/google/jsonnet/tree/master/case_studies/kubernetes)
* [Spread](http://redspread.com)
* [K8comp](https://github.com/cststack/k8comp)
* [Ktmpl](https://github.com/InQuicker/ktmpl)
* [Konfd](https://github.com/kelseyhightower/konfd)
* [kenv](https://github.com/thisendout/kenv)
* [kubediff](https://github.com/weaveworks/kubediff)
* [Habitat](https://www.habitat.sh/docs/container-orchestration/)
* [Puppet](https://forge.puppet.com/garethr/kubernetes/readme)
* [Ansible](https://docs.ansible.com/ansible/kubernetes_module.html)

## Security

* [Trireme](http://github.com/aporeto-inc/trireme-kubernetes)
* [Aquasec](http://blog.aquasec.com/topic/kubernetes)
* [Twistlock](https://www.twistlock.com/)
* [Sysdig Falco](http://www.sysdig.org/falco/)

## Managed Kubernetes

* [Platform9](https://platform9.com/)
* [Gravitational](https://github.com/gravitational)
* [KCluster](https://kcluster.io/)

## Load balancing

* [Nginx Plus](https://www.nginx.com/blog/load-balancing-kubernetes-services-nginx-plus/)
* [Traefik](https://traefik.io/)
* [AppsCode Voyager](https://github.com/appscode/voyager) - Secure HAProxy-based Ingress Controller
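
Ingress controllers like the ones above ultimately reduce to routing an incoming host and path to a backend Service. A toy sketch of that matching logic (the rules and service names are hypothetical; real controllers also handle TLS, wildcards, and precise rule precedence):

```python
def route(host, path, rules):
    """Return the backend service for the longest matching path prefix on `host`."""
    matches = [
        r for r in rules
        if r["host"] == host and path.startswith(r["path"])
    ]
    if not matches:
        return None
    return max(matches, key=lambda r: len(r["path"]))["service"]

# Hypothetical Ingress-style rules.
rules = [
    {"host": "shop.example.com", "path": "/", "service": "frontend"},
    {"host": "shop.example.com", "path": "/api", "service": "api-server"},
]
print(route("shop.example.com", "/api/v1/items", rules))
```

Longest-prefix wins here so that `/api` traffic is not swallowed by the catch-all `/` rule, which mirrors how most controllers order their rules.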

## Developer platform

* [Fabric8](http://fabric8.io)
* [Spring Cloud integration](https://github.com/fabric8io/spring-cloud-kubernetes)
* [Mantl](http://mantl.io/)
* [goPaddle](http://www.gopaddle.io)
* [VAMP](http://vamp.io)

## Big Data

* [Kube-Yarn](https://github.com/Comcast/kube-yarn)
* [Spark](https://github.com/kubernetes/kubernetes/tree/master/examples/spark)

## Service Discovery

* [Consul](http://consul.io)
* [Kelsey Hightower Consul](https://github.com/kelseyhightower/consul-on-kubernetes)
* [Bridge between Kubernetes and Consul](https://github.com/Beldur/kube2consul)
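
Bridges like kube2consul above watch Kubernetes endpoints and mirror them into Consul's service catalog. A rough sketch of that translation for a single endpoint (the field mapping is an assumption based on Consul's agent registration payload; the service name, IP, and port are hypothetical):

```python
def to_consul_registration(service_name, endpoint_ip, endpoint_port):
    """Map one Kubernetes endpoint to a Consul agent service registration."""
    return {
        "Name": service_name,
        # A unique ID lets multiple endpoints of the same Service coexist.
        "ID": "%s-%s-%d" % (service_name, endpoint_ip, endpoint_port),
        "Address": endpoint_ip,
        "Port": endpoint_port,
    }

# Hypothetical endpoint; a real bridge would watch the Endpoints API.
reg = to_consul_registration("redis", "10.2.3.4", 6379)
print(reg["ID"])
```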

## Operating System

* [CoreOS](http://coreos.com)
* [Kurma](http://kurma.io)
* [GCI](https://cloud.google.com/container-optimized-os/docs/)

Raspberry Pi
=======================================================================

*Some of the awesome findings and experiments on using Kubernetes with Raspberry Pi. Check out [Kubecloud](http://kubecloud.io)*

* [Setting up a Kubernetes on ARM cluster](http://kubecloud.io/kubernetes-on-arm-cluster/)
* [Local registry in Kubernetes on ARM](http://kubecloud.io/kubernetes-on-arm-registry/)

Books
=======================================================================

*A written or printed work consisting of pages glued or sewn together along one side and bound in covers that provides us with information*

* [Kubernetes: Up and Running](http://shop.oreilly.com/product/0636920043874.do) by [Kelsey Hightower](https://twitter.com/kelseyhightower)
* [Docker and Kubernetes Under the Hood](http://item.jd.com/11757034.html) (Chinese) by [Harry Zhang](https://twitter.com/resouer), Jianbo Sun and ZJU SEL lab
* [Kubernetes: Scheduling the Future at Cloud Scale](http://www.oreilly.com/webops-perf/free/kubernetes.csp) by [Dave K. Rensin](http://www.linkedin.com/in/drensin)
* [Kubernetes in Action](https://www.manning.com/books/kubernetes-in-action) by [Marko Lukša](https://twitter.com/markoluksa)
* [Kubernetes Cookbook](https://www.packtpub.com/virtualization-and-cloud/kubernetes-cookbook) by Hideto Saito, Hui-Chuan Chloe Lee, Ke-Jou Carol Hsu
* [Getting Started with Kubernetes](http://shop.oreilly.com/product/9781784394035.do) by Jonathan Baier

Slide Presentations
=======================================================================

*A slide is a single page of a presentation created with software such as PowerPoint or OpenOffice Impress.*

* [Architecture Overview](http://www.slideshare.net/enakai/architecture-overview-kubernetes-with-red-hat-enterprise-linux-71) by [enakai00](https://twitter.com/enakai00/)
* [Package your Java EE Application using Docker and Kubernetes](http://www.slideshare.net/arungupta1/package-your-java-ee-application-using-docker-and-kubernetes) by [Arun Gupta](https://twitter.com/arungupta)
* [Scaling Jenkins with Docker and Kubernetes](http://www.slideshare.net/carlossg/scaling-jenkins-with-docker-and-kubernetes) by [Carlos Sanchez](https://www.twitter.com/csanchez)
* [An Introduction to Kubernetes](http://www.slideshare.net/imesh/an-introduction-to-kubernetes) by [Imesh Gunaratne](https://www.linkedin.com/in/imesh)
* [Musings on Mesos: Docker, Kubernetes, and Beyond.](http://www.slideshare.net/timothysc/apache-coneu) by [Timothy St. Clair](http://timothysc.github.io/)
* [Cluster management with Kubernetes](http://www.slideshare.net/SatnamSingh67/2015-0605-cluster-management-with-kubernetes) by Satnam Singh
* [A brief study on Kubernetes and its components](http://www.slideshare.net/ramitsurana/a-brief-study-on-kubernetes-and-its-components) by [Ramit Surana](https://www.twitter.com/ramitsurana)
* [Moving to Kubernetes - Tales from SoundCloud](http://www.slideshare.net/dagrobie/moving-to-kubernetes-tales-from-soundcloud) by [Tobias Schmidt](https://twitter.com/dagrobie)
* [Kubernetes Scaling SIG (K8Scale)](http://www.slideshare.net/kubecon/kubernetes-scaling-sig-k8scale) by [Bob Wise](https://twitter.com/countspongebob)
* [Network oriented Kubernetes intro](http://www.slideshare.net/salv_orlando/network-oriented-kubernetes-intro) by [Salv Orlando](https://twitter.com/taturiello)
* [Zero downtime Java deployments with Docker and Kubernetes](http://www.slideshare.net/ArjanSchaaf/zero-downtimejavadeploymentswithdockerandkubernetes) by [Arjan Schaaf](https://www.linkedin.com/in/arjanschaaf)
* [Kubernetes and CoreOS @ Athens Docker meetup](http://www.slideshare.net/mistio/kubernetes-and-coreos-athens-docker-meetup) by [Mist.io](https://twitter.com/mist_io)
* [Achieving CI/CD with Kubernetes](http://www.slideshare.net/ramitsurana/achieving-cicd-with-kubernetes) by [Ramit Surana](https://twitter.com/ramitsurana)
|
||||||
|
|
||||||
|
|
||||||
|
Videos
=======================================================================

*A recording of moving visual images made digitally or on videotape.*

### Main Account

* [Google Developers](https://www.youtube.com/channel/UC_x5XG1OV2P6uZZ5FSM9Ttw)
* [Kubernetes](https://www.youtube.com/channel/UCZ2bu0qutTOM0tHYa_jkIwg)

### Other Useful Videos

* [Google I/O 2014 - Containerizing the Cloud with Docker on Google Cloud Platform](https://www.youtube.com/watch?v=tsk0pWf4ipw) by [Google Developers](https://www.youtube.com/channel/UC_x5XG1OV2P6uZZ5FSM9Ttw)
* [Container Orchestration using CoreOS and Kubernetes](https://www.youtube.com/watch?v=tA8XNVPZM2w) by [Kelsey Hightower](https://twitter.com/kelseyhightower)
* [A Technical Overview of Kubernetes](https://www.youtube.com/watch?v=WwBdNXt6wO4) by [Brendan Burns](https://twitter.com/brendandburns)
* [Docker Containers and Kubernetes with Brian Dorsey](https://www.youtube.com/watch?v=Fcb4aoSAZ98) by [Brian Dorsey](https://twitter.com/briandorsey)
* [Alpaca Kubernetes on AWS](https://www.youtube.com/watch?v=jLk1pkc7kv4) by [Adrien Lemaire](https://twitter.com/fandekasp)
* [Arun Gupta: Package your Java applications using Docker and Kubernetes](https://www.youtube.com/watch?v=R2nj1vRjLwE) by [Arun Gupta](https://twitter.com/arungupta)
* ["Managing Containers at Scale with CoreOS and Kubernetes" by Kelsey Hightower](https://www.youtube.com/watch?v=pozC9rBvAIs) by [Kelsey Hightower](https://twitter.com/kelseyhightower)
* [Kubernetes: The Journey So Far - Greg DeMichillie](https://youtu.be/6a2Nirr8cI0) by [Greg DeMichillie](https://twitter.com/gregde)
* [DevNation 2015 - Paul Bakker - Kubernetes: Beyond the basics](https://youtu.be/MuazGHiiGmA) by [Paul Bakker](https://twitter.com/pbakker)

Interactive Learning Environments
=======================================================================

*Learn Kubernetes using an interactive environment without requiring downloads or configuration*

* [Katacoda](https://www.katacoda.com/courses/kubernetes)
* [Kubernetes Bootcamp](https://kubernetesbootcamp.github.io/kubernetes-bootcamp/)

Interesting Twitter Accounts
=======================================================================

*Twitter is quick, it’s easy to communicate on, and is a very valuable social channel for a brand or business if you use it to its full potential. By following these news aggregators, rolling news channels, and companies, you can get the inside scoop on a story long before it hits the mainstream news outlets.*

* [Kubernetes](https://twitter.com/kubernetesio)
* [Google Cloud Platform](https://twitter.com/googlecloud)
* [Kube Con](https://twitter.com/kubeconio)
* [Kismatic](https://twitter.com/kismatic)
* [Engine Yard](https://twitter.com/engineyard)
* [Apcera](https://twitter.com/Apcera)
* [CoreOS](https://twitter.com/coreoslinux)
* [DevOps Summit](https://twitter.com/DevOpsSummit)
* [KubeWeekly](https://twitter.com/kubeweekly)
* [KubeFacts](https://twitter.com/kubefacts)
* [Skippbox](https://twitter.com/skippbox)

Amazing People
=======================================================================

* [Brendan Burns](https://twitter.com/brendandburns), Partner Architect at Microsoft
* [Kelsey Hightower](https://twitter.com/kelseyhightower), Staff Developer Advocate at Google
* [Arun Gupta](https://twitter.com/arungupta), Vice President, Developer Relations at Couchbase
* [Carlos Sanchez](https://www.twitter.com/csanchez), Senior Software Engineer, CloudBees
* [Joseph Jacks](https://twitter.com/asynchio), Senior Director, Product Management: Kubernetes Platform Engineering at Apprenda
* [Joe Beda](https://twitter.com/jbeda), Co-founder and Technical Lead for Kubernetes
* [Patrick Reilly](https://twitter.com/preillyme), CEO at Kismatic, Inc. / Advisor at Mesosphere, Inc
* [Brandon Philips](https://twitter.com/BrandonPhilips), CTO at CoreOS
* [Eric Tune](https://twitter.com/eric_tune), Senior Staff Engineer at Google
* [Tim Hockin](https://twitter.com/thockin), Senior Staff SW Engineer / Engineering Manager at Google
* [Brian Grant](https://github.com/bgrant0607), System Software Architect at Google

Meetup Groups
=======================================================================

*An awesome way to connect with kubernauts around the globe*

* [Berlin](https://twitter.com/kubernetesber)
* [New York](https://twitter.com/kubernetesnyc)
* [Paris](https://twitter.com/kubernetesparis)
* [San Francisco](https://twitter.com/kubernetesSF)
* [Bangalore](https://www.meetup.com/Bangalore-Kubernetes-Meetup)
* [Pune](https://www.meetup.com/Kubernetes-Pune/)
* [London](https://www.meetup.com/Kubernetes-London/)

Connecting with Kubernetes
=======================================================================

* [Freenode](http://webchat.freenode.net/?channels=google-containers)
* [Twitter](https://twitter.com/kubernetesio)
* [Google +](https://plus.google.com/u/0/b/116512812300813784482/116512812300813784482)
* [Stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes)
* [Slack](http://slack.k8s.io/)
* [Mailing List](https://groups.google.com/forum/#!forum/google-containers)
* [Newsletter](https://kismatic.com/community/introducing-kubernetes-weekly-news/) by [Kismatic](https://kismatic.com/)
* [Reddit](https://www.reddit.com/r/kubernetes/)
* [Community](https://github.com/kubernetes/community)

Conferences
=======================================================================

*Some must-attend conferences on Kubernetes*

* [Kubecon](http://events.linuxfoundation.org/events/kubecon)
* [Container Camp](https://container.camp/)
* [GCP Next](https://cloudplatformonline.com/Next2016.html)
* [Docker Con](http://dockercon.com)
* [Devoxx](http://devoxx.com)

Contributing
=======================================================================

Contributions are most welcome!

This list is just getting started, please contribute to make it super awesome.

Check out the [Contributing Guidelines](https://github.com/ramitsurana/awesome-kubernetes/blob/master/CONTRIBUTING.md).

License
=======================================================================

<a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/InteractiveResource" property="dct:title" rel="dct:type">awesome-kubernetes</span> by <a xmlns:cc="http://creativecommons.org" href="http://www.linkedin.com/in/ramitsurana" property="cc:attributionName" rel="cc:attributionURL">Ramit Surana</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/">Creative Commons Attribution-NonCommercial 4.0 International License</a>.

# Appendix

Reference documents and some useful resource links.

- [Kubernetes documentation](http://kubernetes.io/docs/)
- [Awesome Kubernetes](awesome-kubernetes.html)
- [Kubernetes the hard way](https://github.com/kelseyhightower/kubernetes-the-hard-way)
- [Awesome Docker](awesome-docker.html)
- [Kubernetes Bootcamp](https://kubernetesbootcamp.github.io/kubernetes-bootcamp/index.html)
- [Design patterns for container-based distributed systems](https://www.usenix.org/system/files/conference/hotcloud16/hotcloud16_burns.pdf)

# How Helm Works

## Basic Concepts

Helm has three basic concepts:

- Chart: a Helm application (package), containing all of the application's Kubernetes manifest templates; analogous to a YUM RPM or an Apt dpkg package
- Repository: a storage repository for Helm packages
- Release: a deployed instance of a chart; each chart can be deployed as one or more releases

## Helm工作原理
|
||||||
|
|
||||||
|
Helm包括两个部分,`helm`客户端和`tiller`服务端。
|
||||||
|
|
||||||
|
> the client is responsible for managing charts, and the server is responsible for managing releases.
|
||||||
|
|
||||||
|
### helm客户端
|
||||||
|
|
||||||
|
helm客户端是一个命令行工具,负责管理charts、reprepository和release。它通过gPRC API(使用`kubectl port-forward`将tiller的端口映射到本地,然后再通过映射后的端口跟tiller通信)向tiller发送请求,并由tiller来管理对应的Kubernetes资源。
|
||||||
|
|
||||||
|
Helm客户端的使用方法参见[Helm命令](helm.html)。
|
||||||
|
|
||||||
|
### tiller服务端
|
||||||
|
|
||||||
|
tiller接收来自helm客户端的请求,并把相关资源的操作发送到Kubernetes,负责管理(安装、查询、升级或删除等)和跟踪Kubernetes资源。为了方便管理,tiller把release的相关信息保存在kubernetes的ConfigMap中。
|
||||||
|
|
||||||
|
tiller对外暴露gRPC API,供helm客户端调用。
|
||||||
|
|
||||||
|
## Helm Charts
|
||||||
|
|
||||||
|
Helm uses [Charts](https://github.com/kubernetes/charts) to manage Kubernetes manifest files. Every chart includes at least

- the application's basic information in `Chart.yaml`
- one or more Kubernetes manifest templates (placed in the `templates/` directory), which may include Pods, Deployments, Services, and any other Kubernetes resources
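
Put together, a chart directory typically looks like this (a sketch; `mychart` is a placeholder name):

```
mychart/
  Chart.yaml     # basic information about the application
  values.yaml    # default values for the template parameters
  charts/        # dependent charts (optional)
  templates/     # Kubernetes manifest templates
```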

### Chart.yaml example

```yaml
name: The name of the chart (required)
version: A SemVer 2 version (required)
description: A single-sentence description of this project (optional)
keywords:
  - A list of keywords about this project (optional)
home: The URL of this project's home page (optional)
sources:
  - A list of URLs to source code for this project (optional)
maintainers: # (optional)
  - name: The maintainer's name (required for each maintainer)
    email: The maintainer's email (optional for each maintainer)
engine: gotpl # The name of the template engine (optional, defaults to gotpl)
icon: A URL to an SVG or PNG image to be used as an icon (optional).
```

### Dependency management

Helm supports two ways of managing dependencies:

- place the dependent packages directly in the `charts/` directory
- list them in `requirements.yaml` and run `helm dep up foochart` to download the dependent packages automatically

```yaml
dependencies:
  - name: apache
    version: 1.2.3
    repository: http://example.com/charts
  - name: mysql
    version: 3.2.1
    repository: http://another.example.com/charts
```
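
With the `requirements.yaml` above, `helm dep up foochart` places the downloaded packages into the chart's `charts/` directory (a layout sketch, assuming the chart is named `foochart`):

```
foochart/
  requirements.yaml
  charts/
    apache-1.2.3.tgz
    mysql-3.2.1.tgz
```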

### Chart templates

Chart templates are based on Go templates and [Sprig](https://github.com/Masterminds/sprig), for example

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: deis-database
  namespace: deis
  labels:
    heritage: deis
spec:
  replicas: 1
  selector:
    app: deis-database
  template:
    metadata:
      labels:
        app: deis-database
    spec:
      serviceAccount: deis-database
      containers:
        - name: deis-database
          image: {{.Values.imageRegistry}}/postgres:{{.Values.dockerTag}}
          imagePullPolicy: {{.Values.pullPolicy}}
          ports:
            - containerPort: 5432
          env:
            - name: DATABASE_STORAGE
              value: {{default "minio" .Values.storage}}
```

Default values for the template parameters must be placed in a `values.yaml` file, formatted as

```yaml
imageRegistry: "quay.io/deis"
dockerTag: "latest"
pullPolicy: "alwaysPull"
storage: "s3"

# default values for the dependent mysql chart
mysql:
  max_connections: 100
  password: "secret"
```
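
With these defaults, the container section of the earlier template renders to roughly the following (a sketch; note that `{{default "minio" .Values.storage}}` uses the `storage: "s3"` value, so the `minio` fallback is not taken):

```yaml
containers:
  - name: deis-database
    image: quay.io/deis/postgres:latest
    imagePullPolicy: alwaysPull
    ports:
      - containerPort: 5432
    env:
      - name: DATABASE_STORAGE
        value: s3
```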

## Helm plugins

Plugins provide a way to extend Helm's core functionality. They run on the client side and live in the `$(helm home)/plugins` directory.

A typical helm plugin is laid out as

```sh
$(helm home)/plugins/
  |- keybase/
     |
     |- plugin.yaml
     |- keybase.sh
```

and the plugin.yaml format is

```yaml
name: "keybase"
version: "0.1.0"
usage: "Integrate Keybase.io tools with Helm"
description: |-
  This plugin provides Keybase services to Helm.
ignoreFlags: false
useTunnel: false
command: "$HELM_PLUGIN_DIR/keybase.sh"
```

With this in place, the plugin can be invoked with the `helm keybase` command.
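
The layout above can be scaffolded locally to try it out (a minimal sketch using a throwaway directory in place of `$(helm home)`; the plugin script just echoes a message):

```shell
# use a temporary directory as a stand-in for $(helm home)
HELM_HOME=$(mktemp -d)
PLUGIN_DIR="$HELM_HOME/plugins/keybase"
mkdir -p "$PLUGIN_DIR"

# minimal plugin.yaml, as described above
cat > "$PLUGIN_DIR/plugin.yaml" <<'EOF'
name: "keybase"
version: "0.1.0"
command: "$HELM_PLUGIN_DIR/keybase.sh"
EOF

# the command the plugin dispatches to
printf '#!/bin/sh\necho "keybase plugin called"\n' > "$PLUGIN_DIR/keybase.sh"
chmod +x "$PLUGIN_DIR/keybase.sh"

"$PLUGIN_DIR/keybase.sh"
```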
# Deis workflow

## Deis architecture

![](https://deis.com/docs/workflow/diagrams/Workflow_Overview.png)

![](https://deis.com/docs/workflow/diagrams/Workflow_Detail.png)

![](https://deis.com/docs/workflow/diagrams/Application_Layout.png)

## Installing and deploying Deis

First deploy a Kubernetes cluster (for example with minikube or GKE; remember to enable `KUBE_ENABLE_CLUSTER_DNS=true`) and configure the local kubectl client, then run the following script to install deis:

```sh
# install deis v2 (workflow)
curl -sSL http://deis.io/deis-cli/install-v2.sh | bash
mv deis /usr/local/bin/

# install helm
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.2.1-linux-amd64.tar.gz
tar zxvf helm-v2.2.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
rm -rf linux-amd64 helm-v2.2.1-linux-amd64.tar.gz
helm init

# deploy helm components
helm repo add deis https://charts.deis.com/workflow
helm install deis/workflow --namespace deis
kubectl --namespace=deis get pods
```

## Basic Deis usage

### Register a user and log in

```sh
deis register deis-controller.deis.svc.cluster.local
deis login deis-controller.deis.svc.cluster.local
deis perms:create newuser --admin
```

### Deploy an application

**Note: most deis commands must be run from inside the application's directory (i.e. `example-dockerfile-http` below).**

```sh
git clone https://github.com/deis/example-dockerfile-http.git
cd example-dockerfile-http
docker build -t deis/example-dockerfile-http .
docker push deis/example-dockerfile-http

# create app
deis create example-dockerfile-http --no-remote
# deploy app
deis pull deis/example-dockerfile-http:latest

# query application status
deis info
```

Scale the application:

```sh
$ deis scale cmd=3
$ deis ps
=== example-dockerfile-http Processes
--- cmd:
example-dockerfile-http-cmd-4246296512-08124 up (v2)
example-dockerfile-http-cmd-4246296512-40lfv up (v2)
example-dockerfile-http-cmd-4246296512-fx3w3 up (v2)
```

Autoscaling can also be configured:

```sh
deis autoscale:set example-dockerfile-http --min=3 --max=8 --cpu-percent=75
```

The application can then be accessed through Kubernetes DNS (and, once an external load balancer is configured, through the load balancer as well):

```sh
$ curl example-dockerfile-http.example-dockerfile-http.svc.cluster.local
Powered by Deis
```

### Domains and routing

```sh
# note: set a CNAME record pointing to the original address
deis domains:add hello.bacongobbler.com

dig hello.deisapp.com
deis routing:enable
```

This effectively adds virtual hosts to deis-router's nginx configuration:

```
server {
    listen 8080;
    server_name ~^example-dockerfile-http\.(?<domain>.+)$;
    server_name_in_redirect off;
    port_in_redirect off;
    set $app_name "example-dockerfile-http";
    vhost_traffic_status_filter_by_set_key example-dockerfile-http application::*;

    location / {
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $access_scheme;
        proxy_set_header X-Forwarded-Port $forwarded_port;
        proxy_redirect off;
        proxy_connect_timeout 30s;
        proxy_send_timeout 1300s;
        proxy_read_timeout 1300s;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        proxy_pass http://10.0.0.224:80;
    }
}

server {
    listen 8080;
    server_name hello.bacongobbler.com;
    server_name_in_redirect off;
    port_in_redirect off;
    set $app_name "example-dockerfile-http";
    vhost_traffic_status_filter_by_set_key example-dockerfile-http application::*;

    location / {
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $access_scheme;
        proxy_set_header X-Forwarded-Port $forwarded_port;
        proxy_redirect off;
        proxy_connect_timeout 30s;
        proxy_send_timeout 1300s;
        proxy_read_timeout 1300s;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://10.0.0.224:80;
    }
}
```

### References

- https://github.com/deis/workflow
- https://deis.com/workflow/

# Kubernetes Application Management with Helm

[Helm](https://github.com/kubernetes/helm) is a Kubernetes application manager similar to yum/apt/[homebrew](https://brew.sh/). Helm uses [Charts](https://github.com/kubernetes/charts) to manage Kubernetes manifest files.

## Basic Helm usage

Install the `helm` client:

```sh
brew install kubernetes-helm
```

Initialize Helm and install the `Tiller` server (kubectl must be configured beforehand):

```sh
helm init
```

Update the charts list:

```sh
helm repo update
```

Deploy a service, e.g. mysql:

```sh
➜  ~ helm install stable/mysql
NAME: quieting-warthog
LAST DEPLOYED: Tue Feb 21 16:13:02 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                    TYPE    DATA  AGE
quieting-warthog-mysql  Opaque  2     1s

==> v1/PersistentVolumeClaim
NAME                    STATUS   VOLUME  CAPACITY  ACCESSMODES  AGE
quieting-warthog-mysql  Pending  1s

==> v1/Service
NAME                    CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
quieting-warthog-mysql  10.3.253.105  <none>       3306/TCP  1s

==> extensions/v1beta1/Deployment
NAME                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
quieting-warthog-mysql  1        1        1           0          1s


NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
quieting-warthog-mysql.default.svc.cluster.local

To get your root password run:

    kubectl get secret --namespace default quieting-warthog-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h quieting-warthog-mysql -p
```

See the [Helm command reference](helm.html) for more commands.

## How Helm works

See [How Helm works](basic.html).

## Links

### Helm documentation

* https://github.com/kubernetes/helm
* https://github.com/kubernetes/charts

### Third-party Helm repositories

* https://github.com/deis/charts
* https://github.com/bitnami/charts
* https://github.com/att-comdev/openstack-helm
* https://github.com/sapcc/openstack-helm
* https://github.com/mgoodness/kube-prometheus-charts
* https://github.com/helm/charts
* https://github.com/jackzampolin/tick-charts

### Useful Helm plugins

1. [helm-tiller](https://github.com/adamreese/helm-tiller) - Additional commands to work with Tiller
2. [Technosophos's Helm Plugins](https://github.com/technosophos/helm-plugins) - Plugins for GitHub, Keybase, and GPG
3. [helm-template](https://github.com/technosophos/helm-template) - Debug/render templates client-side
4. [Helm Value Store](https://github.com/skuid/helm-value-store) - Plugin for working with Helm deployment values
5. [Drone.io Helm Plugin](http://plugins.drone.io/ipedrazas/drone-helm/) - Run Helm inside of the Drone CI/CD system

# Helm Command Reference

## Search for charts

```sh
helm search
helm search mysql
```

## Inspect package details

```sh
helm inspect stable/mariadb
```

## Install a package

```sh
helm install stable/mysql
```

Package options can be customized before installation:

```sh
# list the supported options
helm inspect values stable/mysql

# customize the password
echo "mysqlRootPassword: passwd" > config.yaml
helm install -f config.yaml stable/mysql
```

A packaged archive (.tgz) or a local package path (such as path/foo) can also be used to install an application.

## List releases

```sh
➜  ~ helm ls
NAME              REVISION  UPDATED                   STATUS    CHART        NAMESPACE
quieting-warthog  1         Tue Feb 21 20:13:02 2017  DEPLOYED  mysql-0.2.5  default
```

## Query release status

```sh
➜  ~ helm status quieting-warthog
LAST DEPLOYED: Tue Feb 21 16:13:02 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                    TYPE    DATA  AGE
quieting-warthog-mysql  Opaque  2     9m

==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
quieting-warthog-mysql  Bound   pvc-90af9bf9-f80d-11e6-930a-42010af00102  8Gi       RWO          9m

==> v1/Service
NAME                    CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
quieting-warthog-mysql  10.3.253.105  <none>       3306/TCP  9m

==> extensions/v1beta1/Deployment
NAME                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
quieting-warthog-mysql  1        1        1           1          9m


NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
quieting-warthog-mysql.default.svc.cluster.local

To get your root password run:

    kubectl get secret --namespace default quieting-warthog-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h quieting-warthog-mysql -p
```

## Upgrade and roll back a release

```sh
# upgrade
echo "mariadbUser: user1" > panda.yaml
helm upgrade -f panda.yaml happy-panda stable/mariadb

# roll back
helm rollback happy-panda 1
```

## Delete a release

```sh
helm delete quieting-warthog
```

## Repository management

```sh
# add the incubator repo
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/

# list repositories
helm repo list

# generate a repo index (for hosting a helm repository)
helm repo index
```

## Chart management

```sh
# create a new chart
helm create deis-workflow

# validate the chart
helm lint

# package the chart into a tgz
helm package deis-workflow
```

## Helm command reference

```
completion  Generate bash autocompletions script
create      create a new chart with the given name
delete      given a release name, delete the release from Kubernetes
dependency  manage a chart's dependencies
fetch       download a chart from a repository and (optionally) unpack it in local directory
get         download a named release
history     fetch release history
home        displays the location of HELM_HOME
init        initialize Helm on both client and server
inspect     inspect a chart
install     install a chart archive
lint        examines a chart for possible issues
list        list releases
package     package a chart directory into a chart archive
repo        add, list, remove, update, and index chart repositories
reset       uninstalls Tiller from a cluster
rollback    roll back a release to a previous revision
search      search for a keyword in charts
serve       start a local http web server
status      displays the status of the named release
test        test a release
upgrade     upgrade a release
verify      verify that a chart at the given path has been signed and is valid
version     print the client/server version information

Flags:
      --debug                     enable verbose output
      --home string               location of your Helm config. Overrides $HELM_HOME (default "~/.helm")
      --host string               address of tiller. Overrides $HELM_HOST
      --kube-context string       name of the kubeconfig context to use
      --tiller-namespace string   namespace of tiller (default "kube-system")
```

# Kubernetes Application Management

Methods for managing Kubernetes applications and manifests.

## Helm

[Helm](helm-app.html) is a Kubernetes application manager similar to yum/apt/[homebrew](https://brew.sh/). Helm uses [Charts](https://github.com/kubernetes/charts) to manage Kubernetes manifest files.

See [here](helm-app.html) for how to use Helm.

## Deis workflow

Deis workflow is a Kubernetes-based PaaS platform that further simplifies application packaging, deployment, and service discovery.

![](https://deis.com/docs/workflow/diagrams/Git_Push_Flow.png)

## Operator

- https://github.com/coreos/etcd-operator
- https://github.com/coreos/prometheus-operator
- https://github.com/sapcc/kubernetes-operators
- https://github.com/kbst/memcached
- https://github.com/krallistic/kafka-operator
- https://github.com/huawei-cloudfederation/redis-operator
- https://github.com/upmc-enterprises/elasticsearch-operator
- https://github.com/pires/nats-operator
- https://github.com/rosskukulinski/rethinkdb-operator

## Others

Of course, the most common approach is still to manage manifests by hand. For example, the kubernetes project itself provides many application examples:

- https://github.com/kubernetes/kubernetes/tree/master/examples
- https://github.com/kubernetes/contrib
- https://github.com/kubernetes/ingress

# Secret

Secrets solve the problem of configuring sensitive data such as passwords, tokens, and keys, without exposing that data in images or Pod specs. A Secret can be consumed as a Volume or as environment variables.

There are three types of Secret:

* Service Account: used to access the Kubernetes API; created automatically by Kubernetes and automatically mounted into Pods at `/run/secrets/kubernetes.io/serviceaccount`;
* Opaque: base64-encoded Secrets, used to store passwords, keys, and the like;
* `kubernetes.io/dockerconfigjson`: used to store credentials for a private docker registry.
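
The auto-mounted service-account secret appears inside a container as three files:

```
/run/secrets/kubernetes.io/serviceaccount/
  ca.crt       # the cluster CA certificate
  namespace    # the Pod's namespace
  token        # a JWT for authenticating to the API server
```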

## Opaque Secret

Opaque Secret data is a map whose values must be base64-encoded:

```sh
$ echo -n "admin" | base64
YWRtaW4=
$ echo -n "1f2d1e2e67df" | base64
MWYyZDFlMmU2N2Rm
```
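
The encoding can be checked locally by decoding the values back (no cluster needed; `base64 -d` is the GNU coreutils flag):

```shell
# decode the base64 strings produced above
u=$(printf 'YWRtaW4=' | base64 -d)
p=$(printf 'MWYyZDFlMmU2N2Rm' | base64 -d)
echo "$u $p"   # → admin 1f2d1e2e67df
```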

secrets.yml

```yml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: MWYyZDFlMmU2N2Rm
  username: YWRtaW4=
```

The secret can then be created with `kubectl create -f secrets.yml`.

Once the secret is created, it can be used in two ways:

* as a Volume
* as environment variables

### Mounting a Secret as a Volume

```yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: db
  name: db
spec:
  volumes:
  - name: secrets
    secret:
      secretName: mysecret
  containers:
  - image: gcr.io/my_project_id/pg:v1
    name: db
    volumeMounts:
    - name: secrets
      mountPath: "/etc/secrets"
      readOnly: true
    ports:
    - name: cp
      containerPort: 5432
      hostPort: 5432
```

### Exporting a Secret as environment variables

```yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: wordpress
        visualize: "true"
    spec:
      containers:
      - name: "wordpress"
        image: "wordpress"
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_USER
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
```
|
||||||
|
|
||||||
|
## kubernetes.io/dockerconfigjson
|
||||||
|
|
||||||
|
可以直接用kubectl命令来创建用于docker registry认证的secret:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
|
||||||
|
secret "myregistrykey" created.
|
||||||
|
```

It can also be created from the contents of `~/.docker/config.json`:

```sh
$ cat ~/.docker/config.json | base64
$ cat > myregistrykey.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
EOF
$ kubectl create -f myregistrykey.yaml
```

When creating a Pod, reference the newly created `myregistrykey` via `imagePullSecrets`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  imagePullSecrets:
  - name: myregistrykey
```

### Service Account

Service accounts are used to access the Kubernetes API. They are created automatically by Kubernetes and mounted into Pods at `/run/secrets/kubernetes.io/serviceaccount`.

```sh
$ kubectl run nginx --image nginx
deployment "nginx" created
$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-3137573019-md1u2   1/1       Running   0          13s
$ kubectl exec nginx-3137573019-md1u2 ls /run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token
```

# Service Discovery and Load Balancing

Kubernetes was designed from the start with service discovery and load balancing for containers in mind: it provides the Service resource, and kube-proxy works together with cloud providers to suit different scenarios. As Kubernetes adoption grew and user scenarios multiplied, new load-balancing mechanisms appeared. Load balancing in Kubernetes currently falls roughly into the following mechanisms, each with its own use cases:

- Service: provides load balancing inside the cluster directly, and relies on the cloud provider's LB for external access
- Ingress Controller: still uses a Service for in-cluster load balancing, but exposes external access through a custom LB
- Service Load Balancer: runs the load balancer itself inside containers, providing a Service load balancer for bare-metal clusters
- Custom Load Balancer: a custom balancer that replaces kube-proxy, typically used in on-premise deployments to integrate with a company's existing external services

## Service

![](media/14735737093456.jpg)

A Service is an abstraction over a group of Pods that provide the same function, giving them a single entry point. With a Service, applications gain service discovery and load balancing easily, along with zero-downtime upgrades. A Service selects its backends by labels, and is usually combined with a Replication Controller or a Deployment to keep the backend containers running.

A Service has three types:

- ClusterIP: the default type; allocates a virtual IP reachable only from inside the cluster
- NodePort: on top of ClusterIP, binds a port for the Service on every node, so the service can be reached at `<NodeIP>:NodePort`
- LoadBalancer: on top of NodePort, has the cloud provider create an external load balancer that forwards requests to `<NodeIP>:NodePort`

An existing external service can also be joined into the Kubernetes cluster as a Service: create the Service without a label selector, then add endpoints to it manually after creation.
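
The pattern just described amounts to a Service with no selector plus a manually managed Endpoints object. The following is only an illustrative sketch; the name `external-db`, the port and the IP are assumptions, not values from this book:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: external-db
spec:
  ports:
  - port: 3306
---
kind: Endpoints
apiVersion: v1
metadata:
  name: external-db    # must match the Service name
subsets:
- addresses:
  - ip: 10.1.2.3       # the pre-existing backend outside the cluster
  ports:
  - port: 3306
```

Because the Service has no selector, Kubernetes does not manage Endpoints for it; traffic to `external-db` goes to whatever addresses the manual Endpoints object lists.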

## Ingress Controller

Although a Service solves service discovery and load balancing, it still comes with some limitations, e.g.

- only layer-4 load balancing, no layer-7 features
- for external access, the NodePort type needs an extra external load balancer, while LoadBalancer requires Kubernetes to run on a supported cloud provider

Ingress is a new resource introduced to lift these limitations. It is mainly used to expose services outside the cluster, with customizable access policies. For example, to route different subdomains to different services through one load balancer:

```
foo.bar.com --|                 |-> foo.bar.com s1:80
              | 178.91.123.132  |
bar.foo.com --|                 |-> bar.foo.com s2:80
```

The Ingress can be defined as:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
```

Note that an Ingress does not create a load balancer by itself; the cluster must run an ingress controller that manages load balancers according to the Ingress definitions. The community currently provides reference implementations for nginx and GCE.

Traefik offers an easy-to-use Ingress Controller; see <https://docs.traefik.io/user-guide/kubernetes/> for usage.

## Service Load Balancer

Before Ingress appeared, Service Load Balancer was the recommended way around Service's limitations. It runs haproxy inside a container, watches services and endpoints for changes, and provides layer-4 and layer-7 load balancing through the container's IP.

The community's Service Load Balancer supports four protocols, TCP, HTTP, HTTPS and SSL TERMINATION, together with ACL access control.

## Custom Load Balancer

Rich as Kubernetes's load-balancing mechanisms are, real deployments still hit complex scenarios they cannot cover, e.g.

- integrating existing load-balancing appliances
- multi-tenant networks where the container network is isolated from the host network, so `kube-proxy` cannot work properly

In such cases a custom component can replace kube-proxy to do the load balancing. The basic idea is to watch Kubernetes for changes to services and endpoints, and configure the load balancer according to those changes; examples include weave flux, nginx plus and kube2haproxy.
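
The watch-and-configure loop just described can be sketched in a few lines of Python; the function and field names below are hypothetical, not part of any real balancer:

```python
def reconcile(lb_config, endpoints):
    """Rebuild the balancer's backend list from the currently observed endpoints."""
    desired = sorted(endpoints)
    if lb_config.get("backends") != desired:
        lb_config["backends"] = desired  # push new config only when something changed
    return lb_config

# Each watch event delivers the current endpoint set; reconciling is idempotent.
cfg = {}
reconcile(cfg, ["10.0.0.2:80", "10.0.0.1:80"])
```

A real component would feed this from a watch on the API server and write out an haproxy or nginx configuration instead of a dict.
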

## References

- http://kubernetes.io/docs/user-guide/services/
- http://kubernetes.io/docs/user-guide/ingress/
- https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
- https://www.nginx.com/blog/load-balancing-kubernetes-services-nginx-plus/
- https://github.com/weaveworks/flux
- https://github.com/AdoHe/kube2haproxy

# Kubernetes Volumes

By default, container data is ephemeral: when a container dies, its data is lost with it. Docker therefore provides a Volume mechanism to persist data, and Kubernetes offers a more powerful Volume mechanism with a rich set of plugins, solving both container data persistence and data sharing between containers.

## Volume

Kubernetes currently supports the following Volume types:

- emptyDir
- hostPath
- gcePersistentDisk
- awsElasticBlockStore
- nfs
- iscsi
- flocker
- glusterfs
- rbd
- cephfs
- gitRepo
- secret
- persistentVolumeClaim
- downwardAPI
- azureFileVolume
- vsphereVolume
- flexvolume

Note that not all of these volumes are persistent; emptyDir, secret, gitRepo and the like disappear together with their Pod.
## PersistentVolume

For persistent volumes, PersistentVolume (PV) and PersistentVolumeClaim (PVC) offer a more convenient way to manage storage: a PV provides networked storage resources, while a PVC requests them. The workflow for setting up persistent storage is then: configure the underlying file system or cloud volume, create the PV, and finally create a claim that associates the Pod with the volume. PV and PVC decouple Pods from volumes: a Pod does not need to know the exact file system or the persistence engine behind it.

### PV

A PersistentVolume (PV) is a piece of networked storage in the cluster. Like a Node, it is a cluster resource. A PV is similar to a Volume, but has a lifecycle independent of any Pod. For example, an NFS PV can be defined as:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /tmp
    server: 172.17.0.2
```

A PV has three access modes:

* ReadWriteOnce: the most basic mode; read-write, but mountable by a single Pod only.
* ReadOnlyMany: mountable read-only by multiple Pods.
* ReadWriteMany: sharable read-write by multiple Pods. Not every storage backend supports all three modes; shared read-write support in particular is still rare, NFS being the most common choice. When a PVC binds to a PV, the match is usually made on two conditions: storage size and access mode.
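
The two binding conditions can be illustrated with a small Python sketch; the field names are hypothetical simplifications, not the actual PV/PVC schema:

```python
def pv_matches(pv, claim):
    """A PVC can bind to a PV only if capacity and access mode both satisfy the request."""
    return (pv["capacity_gi"] >= claim["request_gi"]
            and claim["access_mode"] in pv["access_modes"])

# A 5Gi ReadWriteOnce PV, like pv0003 above.
pv = {"capacity_gi": 5, "access_modes": ["ReadWriteOnce"]}
ok = pv_matches(pv, {"request_gi": 3, "access_mode": "ReadWriteOnce"})
too_big = pv_matches(pv, {"request_gi": 8, "access_mode": "ReadWriteOnce"})
```
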

### StorageClass

The NFS volume above was created by hand, which is inconvenient when managing many volumes. Kubernetes also provides [StorageClass](https://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses) to create PVs dynamically; this saves administrators' time and also lets different kinds of storage be packaged up for PVCs to choose from.

A GCE example:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a
```

A Ceph RBD example:

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
```
### PVC

A PV is a storage resource, while a PersistentVolumeClaim (PVC) is a request for one. A PVC is to a PV what a Pod is to a Node: a Pod consumes Node resources, a PVC consumes PV resources; a Pod requests CPU and memory, a PVC requests a volume of a specific size and access mode.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
    - {key: environment, operator: In, values: [dev]}
```

A PVC can be mounted into a Pod directly:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: dockerfile/nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
```

## emptyDir

If a Pod specifies an emptyDir volume, the emptyDir is created when the Pod is assigned to a Node and exists for as long as the Pod runs on that Node; a container crash does not lose the emptyDir's data. But if the Pod is removed from the Node (deleted, or migrated elsewhere), the emptyDir is deleted as well and its data is permanently lost.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # scratch space on the node for the Pod's lifetime
    emptyDir: {}
```

## Notes on Other Volumes

### hostPath

hostPath mounts part of the Node's file system into a Pod. Use hostPath when a Pod needs files that live on its Node.

```yaml
- hostPath:
    path: /tmp/data
  name: data
```

### NFS

NFS stands for Network File System. With minimal configuration, Kubernetes can mount NFS into Pods; data on NFS persists, and NFS supports concurrent writes.

```yaml
volumes:
- name: nfs
  nfs:
    # FIXME: use the right hostname
    server: 10.254.234.223
    path: "/"
```

### FlexVolume

Note that the volume plugin must be placed in `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/<driver>`, and the plugin has to implement the `init/attach/detach/mount/umount` commands (see the lvm [example](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/flexvolume)).

```yaml
- name: test
  flexVolume:
    driver: "kubernetes.io/lvm"
    fsType: "ext4"
    options:
      volumeID: "vol1"
      size: "1000m"
      volumegroup: "kube_vg"
```

# Cloud Native Applications

## Basic Concepts

A cloud native application is one designed and developed from the start to be deployed and run on a cloud platform. To be fair, most traditional applications can run on a cloud platform without any changes, as long as the platform supports their computer architecture and operating system. But that mode of operation merely treats virtual machines like physical machines and cannot truly exploit the cloud platform's capabilities.

## How Cloud Native Applications Relate to Other Ideas

### Relationship to cloud platforms

Cloud platforms exist to deploy, manage and run SaaS applications. SaaS is one of cloud computing's three service models: business applications delivered as a service. The most fundamental property of cloud computing is on-demand resource allocation and elastic computing, and the design philosophy of cloud native applications is to let applications deployed on a cloud platform actually use those capabilities, consuming resources on demand and scaling elastically, thereby becoming proper SaaS applications.

### Relationship to the twelve factors

The twelve factors are a set of application design principles proposed by the team behind the PaaS platform Heroku, the definitive handbook of SaaS application design; a twelve-factor application can be considered a synonym for a cloud native application.

### Relationship to stateless and share-nothing architectures

To achieve horizontal scalability, a cloud native application should be stateless and share nothing.

### Relationship to microservice architecture

Microservice architecture is an architectural pattern for building enterprise distributed systems: a complex monolith is decomposed, along the bounded contexts of the business, into multiple independently deployable components. These independently deployable components are called microservices. When discussing the relationship between cloud native applications and microservices, "application" can mean two different things depending on context. In the macro sense, the whole distributed system is one application; in that reading, microservice architecture is one architectural pattern for building a cloud native application. In the micro sense, each microservice is an application; in that reading, each microservice must itself follow the cloud native design philosophy for the microservice architecture to achieve its goal, namely a distributed system that consumes resources on demand and scales elastically. Here "application" and "service" become synonyms.

### Pets versus cattle

The cloud native philosophy is to treat application instances as cattle, not pets. Running a cluster of a cloud native application is like keeping a herd of dairy cows for their milk: each cow is treated like a machine, without sentiment, and when one dies another replaces it, unlike a pet that is carefully nursed. Traditional applications, by contrast, often depend heavily on their runtime environment; operators must look after and maintain them, and when one crashes it generally has to be repaired and restored on the original server. If it cannot be restored, the whole application system is down, which leaves operators as heartbroken as if a pet had died.
## The Design Philosophy of Cloud Native Applications: the Twelve Factors

### One codebase, many deploys

This principle stresses a clear distinction between an application and a deploy. An application corresponds to one code repository, one software product; a deploy is one running instance of the application; so the relationship between application and deploy is one-to-many. This one-to-many relationship also reflects code reuse: one codebase is reused across many deploys; what distinguishes deploys is configuration, while the code is shared. Architecturally, the most basic requirement is to distinguish runtime from non-runtime behaviour: the non-runtime representation of an application is its code repository, which may have many runtime instances, each instance being one deploy.

### Explicitly declare and isolate dependencies

Whatever language an application is written in, the language will have a mechanism for managing libraries. This principle insists that all dependent libraries be declared explicitly, because only then can the runtime guarantee that every library the application needs is correctly deployed into the cloud environment.

### Store configuration in the deployment environment

As noted above, different deploys of one application share the code and differ in configuration. Code is stored in the code repository, so configuration naturally should not be. Each deploy has its own independent deployment environment, and the configuration for a deploy should be stored in that deploy's environment; "configuration" is therefore practically a synonym for environment variables. This excludes an application's internal configuration, such as Java Properties files or a Servlet mapping file like web.xml; those count as code, not configuration. This is a common point of confusion: what exactly is code, and what is configuration? The criterion is simple: the frequency of change. Changes that produce a new product version are code; things that may change with every deploy without producing a new product version are configuration, i.e. environment variables.

### Treat backing services as attached resources

This principle governs how an application uses backing services. Different services differ only in their resource URL, that is, in the environment variables that point at the resource. Whether a resource is local or remote, the application can use it the same way; only the environment variable values differ, and the application itself does not change because of them. The most common backing services are databases, caches and message queues. This principle guarantees the application can run normally in any environment, unaffected by changes in its backing services.

### Strictly separate the build and run stages

Like the application/deploy distinction, this is fundamentally about strictly separating an application's non-runtime and runtime behaviour. Building, i.e. compiling and packaging the code repository into runnable software, is non-runtime behaviour. Conversely, this principle also forbids changing code at run time, which is what keeps a running application stable.

### Run the application as one or more stateless processes

This principle requires all user data to be stored via backing services, keeping the application itself stateless, because only then can the application scale horizontally and so exploit the cloud platform's elastic scaling.

### Export services via port binding

This principle stresses that the application should place few demands on the environment it publishes services from; it should be fully self-contained, needing no application container provided by the cloud platform, only a port allocated by the platform through which to expose its service. This guarantees the application can serve on any port the platform assigns.

### Scale out like UNIX processes

On a UNIX operating system, different processes run independently of one another while sharing the computing resources the operating system manages. Cloud native applications run on a cloud platform in a similar way: the cloud platform is the distributed operating system, and independent, mutually non-interfering cloud native applications run on one platform and can fully use its aggregate computing power.

### Fast startup and graceful shutdown

Fast startup lets the application fully use the platform's on-demand resource scheduling, extending computing capacity to provide service with minimal latency when needed. Graceful shutdown serves two purposes: it releases resources, returning unused capacity to the platform, and it preserves the integrity of application logic, correctly finishing the tasks that can be finished and handing unfinished tasks back to the system for other running instances to complete. Assume that in a cloud native application's target environment large numbers of identical instances are constantly running, starting and shutting down; fast startup and graceful shutdown are therefore essential to a high-performance, stable system.

### Keep development, staging and production as similar as possible

Keeping environments consistent raises the effectiveness of unit, functional and integration testing, avoiding situations where everything works in development and test but breaks in production.

### Treat logs as event streams

Cloud native applications run on complex distributed infrastructure; if logs are not managed through a simple, uniform model, troubleshooting the system or mining information from logs becomes very hard. Moreover, writing logs to files on the system pressures its storage and adds operational complexity. This principle therefore recommends that applications write logs to standard output, to be collected and processed uniformly by the cloud platform.

### Run admin tasks as one-off processes

Running an application's administrative tasks in a manner similar to its business requests, scheduled, logged and monitored the same way, benefits system stability and the analysis of the system's overall behaviour.

## Challenges for Cloud Native Applications

### Handling network communication in distributed systems

Cloud native applications must be designed for the complexity of network communication in distributed systems. Approaching a distributed system with single-process thinking leads straight into the classic "fallacies of distributed computing", which include blithely assuming that: the network is reliable; latency is zero; bandwidth is infinite; the network is secure; topology doesn't change; there is one administrator; and the environment is homogeneous. Probably few people today are naive enough to genuinely believe that interactions in a distributed environment are the same as function calls within a single process; but development complexity and the pressure to ship features often push developers to set these hard problems aside for the moment, steadily accumulating "technical debt".

### Handling state consistency in distributed systems

The CAP theorem for distributed systems holds that consistency, availability and partition tolerance cannot all be guaranteed at once. In practice, because network communication is inherently unstable, partition tolerance is mandatory, so application design must trade off between consistency and availability.

### Eventual consistency

In that trade-off, cloud native applications lean toward availability more often than traditional applications do, adopting eventual consistency in place of the ACID consistency guaranteed by transactions. The traditional ACID programming model is business-agnostic and developers have deep experience with it, whereas eventually consistent interaction patterns are business-specific: transiently inconsistent states must be validated against business plausibility, which makes eventual consistency considerably more complex than ACID consistency.

### Service discovery and load balancing

A cloud native application's running instances may stop and start at any time, so a mechanism is needed for clients to always find healthy running instances and give up on dead ones; that is the problem of service discovery. Alongside service discovery sits the process of choosing, among multiple healthy instances, the one that actually serves a given client request; that is load balancing.

### Task decomposition and data sharding

Large tasks must be decomposed into many small tasks, distributed to running instances for execution, and their results aggregated; that is task decomposition. Data is distributed across instances for processing and storage; that is data sharding. Both require mechanisms suited to the cloud computing environment.

### Leader election

Whether for task decomposition or data sharding, the subtasks and shards each instance is responsible for differ, but the mapping table that records how things are split and who owns what must be identical everywhere; this calls for a leader role responsible for computing that mapping. And because in a cloud environment no instance is guaranteed to stay healthy forever, the leader cannot be permanently fixed; a leader-election mechanism is needed that can elect a new leader whenever the role is vacant or the current leader fails.

Just as design patterns solve complex problems in object-oriented design, the complex scenarios of cloud native applications call for typical, reusable design patterns that solve specific recurring problems. We will introduce these later in this series, together with application case studies.

[1] http://www.infoq.com/cn/articles/kubernetes-and-cloud-native-applications-part02

# The Design Philosophy of Kubernetes

### Kubernetes design philosophy and distributed systems

Analyzing and understanding Kubernetes's design philosophy gives us deeper insight into the Kubernetes system and helps us use it better to manage distributed deployments of cloud native applications; it also lets us borrow from its experience in distributed system design.

### Layered architecture

Kubernetes's design and functionality follow a layered architecture much like Linux's, as shown below.

![](/images/14937095836427.jpg)

* Core layer: Kubernetes's most central functionality; it exposes APIs outward, on which higher-level applications are built, and provides a pluggable application execution environment inward
* Application layer: deployment (stateless applications, stateful applications, batch jobs, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.)
* Management layer: system metrics (infrastructure, container and network metrics), automation (auto-scaling, dynamic provisioning, etc.) and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
* Interface layer: the kubectl command-line tool, client SDKs and cluster federation
* Ecosystem: the vast container cluster management and scheduling ecosystem above the interface layer, in two realms
  * Outside Kubernetes: logging, monitoring, configuration management, CI, CD, workflow, FaaS, OTS applications, ChatOps, etc.
  * Inside Kubernetes: CRI, CNI, CVI, image registries, cloud providers, and the cluster's own configuration and management

### API design principles

For a cloud computing system, the system API effectively commands the whole design. As noted earlier, every time the K8s cluster system supports a new feature or introduces a new technology, a corresponding API object is introduced to manage it; understanding and mastering these APIs is like taking the K8s system by the reins. K8s API design follows these principles:

1. **All APIs should be declarative.** As discussed earlier, declarative operations, unlike imperative ones, have stable effects under repetition, which matters greatly in distributed environments prone to data loss or duplication. Declarative operations are also easier for users: they hide implementation details from users, and hiding those details also preserves the system's freedom to keep optimizing. Furthermore, a declarative API implies that all API objects are nouns, e.g. Service and Volume; each noun describes a target distributed object the user wants.

2. **API objects should be complementary and composable.** This effectively encourages API objects to meet the object-oriented design ideal of "high cohesion, loose coupling", decomposing business concepts sensibly to make the resulting objects more reusable. After all, a distributed management platform like K8s is itself a kind of business system; its business just happens to be scheduling and managing container services.

3. **High-level APIs should be designed around operational intent.** Designing APIs well has much in common with designing application systems well with object-oriented methods: high-level design must start from the business, not prematurely from the technical implementation. For K8s's high-level APIs, the design must start from K8s's business, namely the intent behind scheduling and managing containers.

4. **Low-level APIs should be designed to serve the control needs of high-level APIs.** The purpose of low-level APIs is to be used by high-level ones; to reduce redundancy and increase reuse, their design too must start from requirements and resist, as far as possible, the temptation to mirror technical implementation details.

5. **Avoid simple encapsulation, and avoid internal hidden mechanisms that the external API cannot explicitly reveal.** A simple wrapper provides no new functionality and merely adds a dependency on the wrapped API. Hidden internal mechanisms are also a very maintenance-unfriendly design; for example, PetSet and ReplicaSet are fundamentally two different kinds of Pod collections, so K8s defines them as distinct API objects rather than using a single ReplicaSet and internally distinguishing stateful from stateless with some special algorithm.

6. **API operation complexity should be proportional to object count.** This is a performance consideration: for the whole system not to slow to uselessness as it scales, the floor requirement is that API operation complexity must not exceed O(N), where N is the number of objects; otherwise the system is not horizontally scalable.

7. **API object state must not depend on network connectivity.** As everyone knows, network connections drop all the time in distributed environments, so for API object state to withstand network instability, it must not depend on connection state.

8. **Avoid letting control mechanisms depend on global state, because keeping global state synchronized in a distributed system is extremely difficult.**

### Control mechanism design principles

* **Control logic should depend only on current state.** This keeps a distributed system stable and reliable. For a distributed system in which local errors appear frequently, control logic that depends only on current state makes it very easy to restore a temporarily faulty system to normal: reset the system to some stable state, and you can be confident that all its control logic will resume running normally.

* **Assume any error is possible, and handle it.** Local and transient errors are high-probability events in a distributed system. Errors may come from physical faults or external system failures, or from the system's own code bugs; relying on one's own code being bug-free is no realistic way to guarantee stability, so design fault handling for every possible error.

* **Avoid complex state machines; control logic must not depend on internal state that cannot be observed.** The subsystems of a distributed system cannot be kept strictly synchronized through program internals, so if the control logic of two subsystems influences each other, each subsystem must be able to observe the state that affects the other's control logic; otherwise the system effectively contains nondeterministic control logic.

* **Assume any operation may be rejected by its target object, or even misinterpreted.** Given the complexity of distributed systems and the relative independence of their subsystems, which often come from different development teams, one cannot expect every operation to be handled correctly by another subsystem; ensure that when errors occur, operation-level errors do not compromise system stability.

* **Every module should recover automatically after errors.** Since a distributed system cannot guarantee its modules stay connected, each module needs self-healing capability, ensuring it does not crash merely because it cannot reach other modules.

* **Every module should degrade gracefully when necessary.** Graceful degradation is a robustness requirement: when designing and implementing a module, cleanly separate basic functionality from advanced functionality and ensure the basics never depend on the advanced features, which in turn guarantees that an advanced-feature failure cannot bring the whole module down. A system built on this philosophy also gains new advanced features faster, because there is no fear that introducing them will affect the existing basics.

## Kubernetes's Core Concepts and API Objects

API objects are the units of management operation in a K8s cluster. Every time the K8s cluster system supports a new feature or introduces a new technology, a corresponding API object is introduced to manage it; for example, the API object for replica sets is the RS.

Every API object has three major categories of properties: metadata, spec and status. Metadata identifies the API object; every object has at least three metadata fields, namespace, name and uid, plus assorted labels used to tag and match objects. For example, users can use a label `env` to distinguish deployment environments, with env=dev, env=testing and env=production marking development, testing and production services. The spec describes the desired state the user expects the distributed system in the K8s cluster to reach; for instance, a user can set a Replication Controller's desired Pod replica count to 3. The status describes the state the system has actually reached; for instance, if the actual replica count is currently 2, the replication controller's current logic is to automatically start a new Pod and drive the count toward 3.

All configuration in K8s is set through API object specs: users change the system by declaring its desired state. This is one of K8s's key design ideas: all operations are declarative rather than imperative. The benefit of declarative operations in a distributed system is stability under lost or repeated delivery; for example, setting the replica count to 3 yields the same result no matter how many times it runs, whereas "add one replica" is not declarative, and running it repeatedly gives the wrong result.
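
The difference can be shown with a minimal Python sketch, with names invented for illustration: delivering a declarative update several times is harmless, while repeating an imperative one drifts:

```python
def set_replicas(state, n):
    """Declarative: states the desired end result; idempotent."""
    state["replicas"] = n
    return state

def add_replica(state):
    """Imperative: states a step to take; NOT idempotent."""
    state["replicas"] += 1
    return state

s = {"replicas": 1}
for _ in range(3):
    set_replicas(s, 3)   # message delivered three times; result is still 3
declarative_result = s["replicas"]

for _ in range(3):
    add_replica(s)       # the same triple delivery now corrupts the count
imperative_result = s["replicas"]
```
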

### Pod

K8s has many technical concepts, each with a corresponding API object; the most important and most fundamental is the Pod. A Pod is the smallest unit for running and deploying an application or service in a K8s cluster, and it can hold multiple containers. The Pod's design lets multiple containers in one Pod share a network address and file system, so they can be composed into a service through the simple and efficient means of inter-process communication and file sharing. Multi-container support in Pods is one of K8s's most basic design decisions. For example, suppose you run a software repository for an operating system distribution, with one Nginx container publishing packages and another container dedicated to syncing from the upstream repository. The two container images are unlikely to be developed by a single team, yet they must work together to provide one microservice; in this situation, different teams build their own container images, which are composed at deployment time into one microservice serving the outside world.

Pods are the foundation of every workload type in a K8s cluster; think of them as small robots running in the cluster, with different robot types for different kinds of work. K8s workloads currently divide mainly into long-running, batch, node-daemon and stateful application types, handled respectively by the controllers Deployment, Job, DaemonSet and PetSet, each introduced below.

### Replication Controller (RC)

The RC is the earliest API object in K8s for keeping Pods highly available. By monitoring running Pods, it ensures the cluster runs the specified number of Pod replicas, which may be many or just one: with fewer than specified, the RC starts new replicas; with more, it kills the surplus. Even with a count of one, running a Pod through an RC is wiser than running the Pod directly, because the RC can still exercise its high-availability role and guarantee one Pod is always running. The RC is an early K8s concept, applicable only to long-running workloads, such as keeping a highly available web service.

### Replica Set (RS)

The RS is the next-generation RC, providing the same high-availability capability; the main difference is that the RS, arriving later, supports more kinds of matching patterns. Replica set objects are generally not used on their own, but as the desired-state parameter of a Deployment.

### Deployment

A Deployment represents one update operation a user performs on the K8s cluster. It is an API object with a broader application pattern than the RS: it can create a new service, update a service, or roll a service over. Rolling over a service is in fact a composite operation: create a new RS, then gradually raise the new RS's replica count to the desired state while lowering the old RS's replica count to 0. Such a composite operation is awkward to describe with a single RS, hence the more general Deployment. Given K8s's direction, all long-running workloads are expected to be managed through Deployments in the future.
### Service

RC, RS and Deployment only guarantee the number of Pods backing a service; they do not solve the problem of how to access those services. A Pod is just one running instance of a service: it may stop on one node at any moment and be replaced by a new Pod with a new IP on another node, so it cannot provide service at a fixed IP and port. Stable service provision requires service discovery and load balancing. Service discovery finds the corresponding backend instances for the service a client wants to access; in a K8s cluster, the service the client accesses is the Service object. Each Service corresponds to a virtual IP valid inside the cluster, through which the service is reached cluster-internally. Load balancing for microservices in a K8s cluster is implemented by kube-proxy, K8s's in-cluster load balancer. It is a distributed proxy server with one instance on every K8s node, a design whose scalability advantage is clear: the more nodes need access to services, the more kube-proxies there are providing load balancing, and the more highly available nodes there are as well. Contrast this with the usual server-side reverse proxy used for load balancing, which leaves you to further solve the reverse proxy's own load balancing and high availability.

### Job

A Job is the K8s API object that controls batch workloads. The main difference between batch and long-running workloads is that batch workloads run to completion, while long-running services run forever unless the user stops them. Pods managed by a Job automatically exit once they complete their task successfully according to the user's settings. What counts as successful completion depends on the `spec.completions` policy: a single-Pod job is complete when its one Pod succeeds; a fixed-completion-count job guarantees that N tasks all succeed; a work-queue job is marked successful upon global success as determined by the application.

### DaemonSet

The core of long-running and batch workloads is the business application: some nodes may run several Pods of the same workload while other nodes run none. Node-daemon workloads instead focus on the nodes of the K8s cluster (physical or virtual machines), ensuring every node runs exactly one Pod of the kind. The nodes may be all cluster nodes, or specific nodes selected via nodeSelector. Typical node-daemon services include storage, logging and monitoring: services that support the K8s cluster's operation on every node.

### PetSet

K8s 1.3 released an alpha version of PetSet. In the cloud native world there are two groups of near-synonyms: the first is stateless, cattle, nameless, disposable; the second is stateful, pet, having a name, non-disposable. RC and RS mainly control stateless services: the names of the Pods they control are set randomly, a failed Pod is simply discarded and a new one started elsewhere, and neither the name nor the placement matters, only the total Pod count. PetSet, by contrast, controls stateful services: the name of every Pod in a PetSet is determined in advance and cannot change. The point of a PetSet Pod's name is not the humane significance it has in Spirited Away, but to associate the Pod with its corresponding state.

Pods under RC and RS generally mount no storage or only shared storage, holding state common to all Pods; such Pods are as interchangeable as cattle. Each Pod in a PetSet mounts its own independent storage; if a Pod fails, a Pod with the same name is started on another node and attached to the original Pod's storage, continuing to serve in its state.

Workloads suited to PetSet include database services like MySQL and PostgreSQL, and clustered management services like ZooKeeper and etcd: stateful services. Another typical PetSet use case is as a mechanism for simulating virtual machines that is more stable and reliable than ordinary containers. Traditional virtual machines are precisely stateful pets that operators must continually maintain; when containers first became popular, we used containers to simulate virtual machines, keeping all state inside the container, and that has been proven very unsafe and unreliable. With PetSet, a Pod can still drift between nodes for high availability while external storage provides high reliability; all PetSet does is associate a definite Pod with definite storage, preserving continuity of state. PetSet is still in alpha, and we will keep watching how its design evolves.
### Federation

K8s 1.3 released a beta version of the Federation feature. In cloud computing, a service's reach ranges from near to far: same host (Node), cross-host within an availability zone, cross-zone within a region, cross-region within a cloud service provider, and cross-cloud. K8s positions a single cluster within one region, because only intra-region network performance can satisfy K8s's requirements for scheduling and storage connectivity. Federated cluster service exists precisely to provide K8s cluster service across regions and across providers.

Each K8s Federation has its own distributed storage, API Server and Controller Manager. Users register member K8s clusters through the Federation's API Server. When a user creates or modifies an API object through the Federation API Server, the Federation API Server creates a corresponding API object in every registered member cluster. When serving business requests, the K8s Federation first load-balances across its member clusters, and a request sent to a specific member cluster is then load-balanced within that cluster under exactly the same scheduling model as if the cluster were serving independently. Load balancing between clusters is implemented through DNS-based load balancing.

The whole design strives not to affect the existing working mechanisms of K8s clusters, so that from an individual member cluster's perspective there need not be a K8s Federation above it; in other words, no existing K8s code or mechanism has to change because of the Federation feature.

### Volume

Volumes in a K8s cluster are somewhat like Docker's, except that a Docker volume's scope is one container, whereas a K8s volume's lifecycle and scope is one Pod. Volumes declared in a Pod are shared by all containers in that Pod. K8s supports a great many volume types: notably, storage on multiple public clouds including AWS, Google and Azure; multiple distributed storage systems including GlusterFS and Ceph; and the easier-to-use host-local hostPath and NFS. K8s also supports the Persistent Volume Claim, or PVC, a logical storage abstraction that lets storage consumers ignore the actual backing storage technology (e.g. AWS, Google, GlusterFS or Ceph) and leave the configuration of the actual storage to the storage administrator through Persistent Volumes.

### Persistent Volume (PV) and Persistent Volume Claim (PVC)

PV and PVC give the K8s cluster a logical storage abstraction: Pod configuration can ignore the configuration of the actual backing storage, handing that work to the PV's configurator, i.e. the cluster administrator. The PV/PVC relationship in storage closely mirrors the Node/Pod relationship in compute: PVs and Nodes are the resource providers, changing with the cluster's infrastructure and configured by the K8s cluster administrator; PVCs and Pods are the resource consumers, changing with the needs of the business services and configured by the cluster's users, i.e. the service administrators.

### Node

A K8s cluster's computing capacity is provided by Nodes. A Node was originally called a Minion and was later renamed Node. A Node in a K8s cluster is the equivalent of a Slave node in a Mesos cluster: the working host on which all Pods run, physical machine or virtual machine. Either way, the unifying feature of a working host is that it runs kubelet to manage the containers running on the node.
### Secret

A Secret is an object for storing and transferring sensitive information such as passwords, keys and authentication credentials. The benefit of using Secrets is keeping sensitive information out of plain-text configuration files. Configuring and using services in a K8s cluster inevitably involves sensitive information for login, authentication and the like, for example the username and password for accessing AWS storage. Rather than writing such information in clear text in every configuration file that needs it, it can be stored in a Secret object and referenced from configuration files through the Secret object. The benefits of this approach include clear intent, no duplication, and fewer chances of exposure.

### User Account and Service Account

As the names suggest, a user account provides an identity for a person, while a service account provides an identity for computer processes and for Pods running in the K8s cluster. One difference between them is scope: a user account corresponds to a human identity, which is unrelated to any service's namespace, so user accounts span namespaces; a service account corresponds to the identity of a running program and is tied to a specific namespace.

### Namespace

Namespaces provide virtual isolation for a K8s cluster. A cluster initially has two namespaces, the default namespace `default` and the system namespace `kube-system`; beyond these, administrators can create new namespaces as needed.

### RBAC authorization

K8s 1.3 released an alpha version of the Role-Based Access Control (RBAC) authorization mode. Relative to Attribute-Based Access Control (ABAC), RBAC mainly introduces the abstractions of Role and RoleBinding. Under ABAC, access policies in the K8s cluster could only be associated directly with users; under RBAC, access policies are associated with roles, and concrete users are then associated with one or more roles. Evidently RBAC, like other new features, introduces new API objects and hence new conceptual abstractions, and these new abstractions make cluster service management and use easier to extend and reuse.

## Summary

From K8s's system architecture, technical concepts and design philosophy, we can see the system's two most central design values: **fault tolerance** and **extensibility**. Fault tolerance is the foundation of K8s's stability and safety; extensibility is the foundation of its friendliness to change and its ability to iterate new features quickly.

In the terms of computer scientist [Leslie Lamport](http://research.microsoft.com/users/lamport/pubs/pubs.html), inventor of the distributed consensus algorithm Paxos, a distributed system has two classes of properties: safety and liveness. Safety keeps the system stable, guaranteeing it does not crash, does not make business errors, does nothing bad; it is strictly enforced. Liveness lets the system provide functionality, improve performance and increase usability, doing good things "within the time users can see"; it is best-effort. K8s's design philosophy happens to coincide with Lamport's safety and liveness, and it is precisely because K8s divides safety from liveness so well when introducing features and technologies that it can iterate versions so quickly and rapidly bring in new features like RBAC, Federation and PetSet.

[1] [http://www.infoq.com/cn/articles/kubernetes-and-cloud-native-applications-part01](http://www.infoq.com/cn/articles/kubernetes-and-cloud-native-applications-part01)

# CronJob

A CronJob is a scheduled task, much like a Linux crontab entry: it runs a specified job on a specified schedule. In Kubernetes 1.5, using CronJob requires enabling the `batch/v2alpha1` API, i.e. `--runtime-config=batch/v2alpha1`.

## CronJob Spec

- `.spec.schedule` specifies the run schedule, in [Cron](https://en.wikipedia.org/wiki/Cron) format
- `.spec.jobTemplate` specifies the job to run, in [Job](job.md) format
- `.spec.startingDeadlineSeconds` specifies the deadline for starting the job
- `.spec.concurrencyPolicy` specifies the concurrency policy, with three options: Allow, Forbid and Replace
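
As a small illustration of the five-field cron schedule format (a sketch, not how Kubernetes itself parses schedules):

```python
def parse_schedule(expr):
    """Split a cron expression into its five positional fields."""
    minute, hour, dom, month, dow = expr.split()
    return {"minute": minute, "hour": hour, "day-of-month": dom,
            "month": month, "day-of-week": dow}

# "*/1 * * * *" means: every minute, of every hour, day, month and weekday.
fields = parse_schedule("*/1 * * * *")
```
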

```yaml
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```

```
$ kubectl create -f cronjob.yaml
cronjob "hello" created
```

Alternatively, `kubectl run` can create a CronJob:

```
kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"
```

```
$ kubectl get cronjob
NAME      SCHEDULE      SUSPEND   ACTIVE    LAST-SCHEDULE
hello     */1 * * * *   False     0         <none>
$ kubectl get jobs
NAME               DESIRED   SUCCESSFUL   AGE
hello-1202039034   1         1            49s
$ pods=$(kubectl get pods --selector=job-name=hello-1202039034 --output=jsonpath={.items..metadata.name} -a)
$ kubectl logs $pods
Mon Aug 29 21:34:09 UTC 2016
Hello from the Kubernetes cluster

# Note: deleting a cronjob does not automatically delete its jobs; remove them with kubectl delete job
$ kubectl delete cronjob hello
cronjob "hello" deleted
```

# DaemonSet

A DaemonSet ensures that one replica of a container runs on every Node, and is commonly used to deploy cluster-wide logging, monitoring, or other system management programs. Typical use cases include:

* log collection, e.g. fluentd and logstash
* system monitoring, e.g. Prometheus Node Exporter, collectd, New Relic agent, Ganglia gmond
* system programs, e.g. kube-proxy, kube-dns, glusterd, ceph

An example that collects logs with Fluentd:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  template:
    metadata:
      labels:
        app: logging
        id: fluentd
      name: fluentd
    spec:
      containers:
      - name: fluentd-es
        image: gcr.io/google_containers/fluentd-elasticsearch:1.3
        env:
        - name: FLUENTD_ARGS
          value: -qq
        volumeMounts:
        - name: containers
          mountPath: /var/lib/docker/containers
        - name: varlog
          mountPath: /varlog
      volumes:
      - hostPath:
          path: /var/lib/docker/containers
        name: containers
      - hostPath:
          path: /var/log
        name: varlog
```

## 指定Node节点
|
||||||
|
|
||||||
|
DaemonSet会忽略Node的unschedulable状态,有两种方式来指定Pod只运行在指定的Node节点上:
|
||||||
|
|
||||||
|
- nodeSelector:只调度到匹配指定label的Node上
|
||||||
|
- nodeAffinity:功能更丰富的Node选择器,比如支持集合操作
|
||||||
|
- podAffinity:调度到满足条件的Pod所在的Node上
|
||||||
|
|
||||||
|
nodeSelector示例:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
spec:
|
||||||
|
nodeSelector:
|
||||||
|
disktype: ssd
|
||||||
|
```

nodeAffinity example:

```yaml
metadata:
  name: with-node-affinity
  annotations:
    scheduler.alpha.kubernetes.io/affinity: >
      {
        "nodeAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": {
            "nodeSelectorTerms": [
              {
                "matchExpressions": [
                  {
                    "key": "kubernetes.io/e2e-az-name",
                    "operator": "In",
                    "values": ["e2e-az1", "e2e-az2"]
                  }
                ]
              }
            ]
          }
        }
      }
    another-annotation-key: another-annotation-value
```

podAffinity example:

```yaml
metadata:
  name: with-pod-affinity
  annotations:
    scheduler.alpha.kubernetes.io/affinity: >
      {
        "podAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": [
            {
              "labelSelector": {
                "matchExpressions": [
                  {
                    "key": "security",
                    "operator": "In",
                    "values": ["S1"]
                  }
                ]
              },
              "topologyKey": "failure-domain.beta.kubernetes.io/zone"
            }
          ]
        },
        "podAntiAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": [
            {
              "labelSelector": {
                "matchExpressions": [
                  {
                    "key": "security",
                    "operator": "In",
                    "values": ["S2"]
                  }
                ]
              },
              "topologyKey": "kubernetes.io/hostname"
            }
          ]
        }
      }
spec:
  ...
```

## Static Pods

Besides DaemonSets, static Pods can be used to run a given Pod on every machine. This requires starting kubelet with a manifest directory:

```
kubelet --pod-manifest-path=<the directory>
```

Then simply place the required Pod definition files into that manifest directory.

Note: static Pods cannot be deleted through the API Server, but deleting a manifest file automatically removes the corresponding Pod.
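
A minimal sketch of such a manifest file — the file name, Pod name, and nginx image here are placeholders, not part of the original text:

```yaml
# e.g. saved as <the directory>/static-web.yaml on the node
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```

kubelet then starts this Pod directly, without going through the scheduler.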
@ -0,0 +1,53 @@
# Deployment

A Deployment provides a declarative way to define Pods and ReplicaSets, replacing the older ReplicationController for managing applications conveniently. Typical use cases include:

- Creating Pods and a ReplicaSet by defining a Deployment
- Rolling updates and rollbacks
- Scaling up and down
- Pausing and resuming a Deployment

For example, a simple nginx application can be defined as:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

Scaling up:

```
kubectl scale deployment nginx-deployment --replicas 10
```

If the cluster supports horizontal pod autoscaling, the Deployment can also be autoscaled:

```
kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
```
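
The `kubectl autoscale` command above creates a HorizontalPodAutoscaler object behind the scenes. A roughly equivalent manifest might look like the sketch below (the `autoscaling/v1` API version is an assumption that depends on the cluster version):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:            # the Deployment to scale
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 10
  maxReplicas: 15
  targetCPUUtilizationPercentage: 80
```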

Updating the image is also straightforward:

```
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
```

Rolling back:

```
kubectl rollout undo deployment/nginx-deployment
```
@ -0,0 +1,70 @@
# Kubernetes Architecture

Kubernetes originated from Google's internal Borg system and provides an application-oriented system for deploying and managing container clusters.

## Borg in brief

Borg is Google's internal large-scale cluster management system, responsible for scheduling and managing many of Google's core services. Its goal is to free users from worrying about resource management so they can focus on their core business, while maximizing resource utilization across multiple data centers.

Borg consists mainly of BorgMaster, Borglet, borgcfg, and the Scheduler, as shown below:

![borg](media/borg.png)

* BorgMaster is the brain of the whole cluster; it maintains the cluster state and persists the data to a Paxos-based store;
* The Scheduler schedules tasks onto specific machines according to each application's characteristics;
* Borglet actually runs the tasks (in containers);
* borgcfg is Borg's command-line tool for interacting with the system, usually by submitting tasks through a configuration file.

## Kubernetes Architecture

Kubernetes borrows Borg's design ideas, such as Pods, Services, Labels, and a single IP per Pod. Its overall architecture closely resembles Borg's, as shown below:

![architecture](media/architecture.png)

Kubernetes is composed of the following core components:

- etcd stores the state of the entire cluster;
- the apiserver is the single entry point for resource operations and provides authentication, authorization, access control, API registration, and discovery;
- the controller manager maintains the cluster state, handling failure detection, automatic scaling, rolling updates, and so on;
- the scheduler handles resource scheduling, placing Pods onto machines according to the configured scheduling policies;
- the kubelet maintains the container lifecycle and also manages volumes (CVI) and networking (CNI);
- the container runtime manages images and actually runs Pods and containers (CRI);
- kube-proxy provides in-cluster service discovery and load balancing for Services.

Besides the core components, several add-ons are recommended:

- kube-dns provides DNS service for the whole cluster
- Ingress Controller provides external access to services
- Heapster provides resource monitoring
- Dashboard provides a GUI
- Federation provides clusters spanning availability zones

![](/images/14791969222306.png)

![](/images/14791969311297.png)

### Layered architecture

Kubernetes' design and functionality form a layered architecture similar to Linux, as shown below:

![](/images/14937095836427.jpg)

* Core layer: Kubernetes' most essential functionality, exposing APIs for building higher-level applications and providing a pluggable application execution environment
* Application layer: deployment (stateless and stateful applications, batch jobs, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.)
* Management layer: system metrics (infrastructure, container, and network metrics), automation (autoscaling, dynamic provisioning, etc.), and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
* Interface layer: the kubectl command-line tool, client SDKs, and cluster federation
* Ecosystem: the large ecosystem of container cluster management and scheduling above the interface layer, in two categories:
  * Outside Kubernetes: logging, monitoring, configuration management, CI, CD, workflow, FaaS, OTS applications, ChatOps, etc.
  * Inside Kubernetes: CRI, CNI, CVI, image registries, Cloud Provider, cluster configuration and management, etc.

> For more on the layered architecture, follow the [Kubernetes architectural roadmap](https://docs.google.com/document/d/1XkjVm4bOeiVkj-Xt1LgoGiqWsBfNozJ51dyI-ljzt1o) and [slides](https://docs.google.com/presentation/d/1GpELyzXOGEPY0Y1ft26yMNV19ROKt8eMN67vDSSHglk/edit) being driven by the Kubernetes community.

## References

- [Kubernetes design and architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture.md)
- <http://queue.acm.org/detail.cfm?id=2898444>
- <http://static.googleusercontent.com/media/research.google.com/zh-CN//pubs/archive/43438.pdf>
- <http://thenewstack.io/kubernetes-an-overview>
- [Kubernetes architectural roadmap](https://docs.google.com/document/d/1XkjVm4bOeiVkj-Xt1LgoGiqWsBfNozJ51dyI-ljzt1o) and [slides](https://docs.google.com/presentation/d/1GpELyzXOGEPY0Y1ft26yMNV19ROKt8eMN67vDSSHglk/edit)
@ -0,0 +1,46 @@
# Job

A Job manages batch tasks, i.e. tasks that execute only once; it guarantees that one or more Pods of the batch task terminate successfully.

## Job Spec format

- spec.template has the same format as a Pod
- RestartPolicy supports only Never or OnFailure
- With a single Pod, the Job completes as soon as that Pod finishes successfully (the default)
- `.spec.completions` sets how many Pods must finish successfully for the Job to complete; defaults to 1
- `.spec.parallelism` sets how many Pods run in parallel; defaults to 1
- `.spec.activeDeadlineSeconds` sets the maximum time for retrying failed Pods; beyond this deadline no further retries are made

A simple example:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```

```
$ kubectl create -f ./job.yaml
job "pi" created
$ pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath={.items..metadata.name})
$ kubectl logs $pods
3.141592653589793238462643383279502...
```
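
To illustrate `.spec.completions` and `.spec.parallelism`, here is a sketch of a Job that must collect five successful Pods while running at most two at a time (the name and values are chosen arbitrarily for illustration):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-parallel
spec:
  completions: 5     # the Job finishes after five Pods succeed
  parallelism: 2     # at most two Pods run concurrently
  template:
    metadata:
      name: pi-parallel
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never
```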

## Bare Pods

Bare Pods are Pods created directly from a PodSpec, i.e. not managed by a ReplicaSet or ReplicationController. Such Pods are not restarted automatically when their Node reboots, whereas a Job creates new Pods to carry the task forward. Jobs are therefore recommended over bare Pods, even when an application needs only a single Pod.
@ -0,0 +1,14 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
@ -0,0 +1,100 @@
# How kubeadm Works

## System initialization

docker and kubelet must be set up on every machine, because kubeadm relies on kubelet to start the master components such as kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy.

## Installing the master

Initializing the master only takes a single kubeadm init command, for example:

```sh
kubeadm init --pod-network-cidr 10.244.0.0/16 --kubernetes-version latest
```

This command automatically:

- runs system state (preflight) checks
- generates a token
- generates a self-signed CA and client certificates
- generates a kubeconfig for kubelet to connect to the API server
- generates static Pod manifests for the master components and places them in `/etc/kubernetes/manifests`
- configures RBAC and restricts the master node to running only control-plane components
- creates add-on services such as kube-proxy and kube-dns

## Configuring a network plugin

kubeadm does not set up networking during initialization. By default kubelet is configured to use CNI plugins, so the user has to initialize a network plugin separately.

### CNI bridge

```sh
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-mynet.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF
cat >/etc/cni/net.d/99-loopback.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "type": "loopback"
}
EOF
```

### flannel

```sh
kubectl create -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel-rbac.yml
kubectl create -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
```

### weave

```sh
kubectl apply -f https://git.io/weave-kube-1.6
```

### calico

```sh
kubectl apply -f http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
```

## Adding a Node

```sh
token=$(kubeadm token list | grep authentication,signing | awk '{print $1}')
kubeadm join --token $token ${master_ip}
```

This involves the following steps:

- download the CA from the API server
- create a local certificate and have the API server sign it
- finally, configure kubelet to connect to the API server

## Tearing down

```
kubeadm reset
```

## References

- [kubeadm Setup Tool](https://kubernetes.io/docs/admin/kubeadm/)
@ -0,0 +1,16 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
@ -0,0 +1,58 @@
# ReplicationController and ReplicaSet

A ReplicationController ensures that the number of running replicas of a containerized application always matches the user-defined count: if a container exits abnormally, a new Pod is created automatically to replace it, and excess containers are automatically reclaimed.

In newer versions of Kubernetes, ReplicaSet is recommended in place of ReplicationController. A ReplicaSet is essentially the same thing under a different name, except that it additionally supports set-based selectors.

Although a ReplicaSet can be used on its own, it is generally recommended to let a Deployment manage ReplicaSets automatically; this avoids incompatibilities with other mechanisms (for example, ReplicaSet does not support rolling-update while Deployment does).

ReplicaSet example:
```yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: frontend
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  # labels:
  #   app: guestbook
  #   tier: frontend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 3
  # selector can be applied automatically
  # from the labels in the pod template if not set,
  # but we are specifying the selector here to
  # demonstrate its usage.
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80
```
@ -0,0 +1,63 @@
# Service Account

A service account makes it convenient for processes inside a Pod to call the Kubernetes API and other external services. It differs from a user account:

- A user account is for humans; a service account is for processes running in Pods;
- User accounts span namespaces; a service account is confined to its own namespace;
- With ServiceAccount admission enabled (the default), every namespace automatically gets a service account, and the corresponding secret is mounted into every Pod
- The default ServiceAccount is named default and is automatically associated with a [Secret](Secret.md) for accessing the Kubernetes API
- Every Pod has `spec.serviceAccount` set to default after creation (unless another ServiceAccount is specified)
- Every container gets the corresponding token and `ca.crt` mounted at `/var/run/secrets/kubernetes.io/serviceaccount/`

Of course, more service accounts can be created:

```
$ cat > /tmp/serviceaccount.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
  namespace: default
EOF
$ kubectl create -f /tmp/serviceaccount.yaml
serviceaccounts/build-robot
```
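
A Pod can then opt into the new account through `spec.serviceAccountName`; a minimal sketch (the Pod name and busybox image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: build-bot
spec:
  serviceAccountName: build-robot   # use build-robot instead of default
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```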

Service accounts provide a convenient authentication mechanism for services, but they do not deal with authorization. [RBAC](https://kubernetes.io/docs/admin/authorization/#a-quick-note-on-service-accounts) can be used to authorize service accounts:

- configure `--authorization-mode=RBAC` and `--runtime-config=rbac.authorization.k8s.io/v1alpha1`
- configure `--authorization-rbac-super-user=admin`
- define Role, ClusterRole, RoleBinding, or ClusterRoleBinding resources

For example:

```yaml
# This role allows to read pods in the namespace "default"
kind: Role
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""] # The API group "" indicates the core API Group.
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
    nonResourceURLs: []
---
# This role binding allows "default" to read pods in the namespace "default"
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount # May be "User", "Group" or "ServiceAccount"
    name: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
@ -0,0 +1,330 @@
# StatefulSet

StatefulSets address the needs of stateful services (whereas Deployments and ReplicaSets are designed for stateless ones). Their use cases include:

- Stable persistent storage: a Pod can reach the same persistent data after being rescheduled, implemented with PVCs
- Stable network identity: a Pod keeps its PodName and HostName after rescheduling, implemented with a Headless Service (a Service without a Cluster IP)
- Ordered deployment and scaling: Pods are ordered and created strictly in the defined order (from 0 to N-1; all earlier Pods must be Running and Ready before the next Pod starts), implemented with init containers
- Ordered scale-down and deletion (from N-1 down to 0)

These use cases imply that a StatefulSet consists of several parts:

- a Headless Service that defines the network identity (DNS domain)
- volumeClaimTemplates for creating PersistentVolumes
- the StatefulSet definition of the application itself

Each Pod in a StatefulSet gets a DNS name of the form `statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local`, where:

- `serviceName` is the name of the Headless Service
- `0..N-1` is the Pod's ordinal, running from 0 to N-1
- `statefulSetName` is the name of the StatefulSet
- `namespace` is the namespace of the service; the Headless Service and the StatefulSet must be in the same namespace
- `.cluster.local` is the cluster domain
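
For instance, a StatefulSet named `web` with two replicas behind a Headless Service named `nginx` in the `default` namespace yields the Pod DNS names:

```
web-0.nginx.default.svc.cluster.local
web-1.nginx.default.svc.cluster.local
```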

## A simple example

Take a simple nginx service, [web.yaml](web.txt), as an example:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```

```sh
$ kubectl create -f web.yaml
service "nginx" created
statefulset "web" created

# Inspect the headless service and statefulset just created
$ kubectl get service nginx
NAME      CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
nginx     None         <none>        80/TCP    1m
$ kubectl get statefulset web
NAME      DESIRED   CURRENT   AGE
web       2         2         2m

# PVCs are created automatically from volumeClaimTemplates (on GCE, kubernetes.io/gce-pd volumes are created automatically)
$ kubectl get pvc
NAME        STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
www-web-0   Bound     pvc-d064a004-d8d4-11e6-b521-42010a800002   1Gi        RWO           16s
www-web-1   Bound     pvc-d06a3946-d8d4-11e6-b521-42010a800002   1Gi        RWO           16s

# The Pods are created in order
$ kubectl get pods -l app=nginx
NAME      READY     STATUS    RESTARTS   AGE
web-0     1/1       Running   0          5m
web-1     1/1       Running   0          4m

# Use nslookup to check the Pods' DNS records
$ kubectl run -i --tty --image busybox dns-test --restart=Never --rm /bin/sh
/ # nslookup web-0.nginx
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      web-0.nginx
Address 1: 10.244.2.10
/ # nslookup web-1.nginx
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      web-1.nginx
Address 1: 10.244.3.12
/ # nslookup web-0.nginx.default.svc.cluster.local
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      web-0.nginx.default.svc.cluster.local
Address 1: 10.244.2.10
```

Other operations are available too:

```sh
# Scale up
$ kubectl scale statefulset web --replicas=5

# Scale down
$ kubectl patch statefulset web -p '{"spec":{"replicas":3}}'

# Update the image (direct image updates are not supported yet; a patch is used as a workaround)
$ kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.7"}]'

# Delete the StatefulSet and the Headless Service
$ kubectl delete statefulset web
$ kubectl delete service nginx

# PVCs are retained after the StatefulSet is deleted; delete them as well once the data is no longer needed
$ kubectl delete pvc www-web-0 www-web-1
```

## zookeeper

Another example that better demonstrates the power of StatefulSets is [zookeeper.yaml](zookeeper.txt).

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: zk-headless
  labels:
    app: zk-headless
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-config
data:
  ensemble: "zk-0;zk-1;zk-2"
  jvm.heap: "2G"
  tick: "2000"
  init: "10"
  sync: "5"
  client.cnxns: "60"
  snap.retain: "3"
  purge.interval: "1"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-budget
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-headless
  replicas: 3
  template:
    metadata:
      labels:
        app: zk
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
        scheduler.alpha.kubernetes.io/affinity: >
            {
              "podAntiAffinity": {
                "requiredDuringSchedulingRequiredDuringExecution": [{
                  "labelSelector": {
                    "matchExpressions": [{
                      "key": "app",
                      "operator": "In",
                      "values": ["zk-headless"]
                    }]
                  },
                  "topologyKey": "kubernetes.io/hostname"
                }]
              }
            }
    spec:
      containers:
      - name: k8szk
        imagePullPolicy: Always
        image: gcr.io/google_samples/k8szk:v1
        resources:
          requests:
            memory: "4Gi"
            cpu: "1"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        env:
        - name: ZK_ENSEMBLE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: ensemble
        - name: ZK_HEAP_SIZE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: jvm.heap
        - name: ZK_TICK_TIME
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: tick
        - name: ZK_INIT_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: init
        - name: ZK_SYNC_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: tick
        - name: ZK_MAX_CLIENT_CNXNS
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: client.cnxns
        - name: ZK_SNAP_RETAIN_COUNT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: snap.retain
        - name: ZK_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: purge.interval
        - name: ZK_CLIENT_PORT
          value: "2181"
        - name: ZK_SERVER_PORT
          value: "2888"
        - name: ZK_ELECTION_PORT
          value: "3888"
        command:
        - sh
        - -c
        - zkGenConfig.sh && zkServer.sh start-foreground
        readinessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
```

```sh
kubectl create -f zookeeper.yaml
```

See [zookeeper stateful application](https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/) for a detailed walkthrough.

## StatefulSet caveats

1. Still in beta; requires Kubernetes v1.5 or later
2. All Pod volumes must use PersistentVolumes or be created by the administrator in advance
3. To keep data safe, deleting a StatefulSet does not delete its volumes
4. A StatefulSet needs a Headless Service to define its DNS domain, and it must be created before the StatefulSet
5. StatefulSet is not yet feature complete; for example, updates still have to be done via a manual patch.

See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) for more.
@ -0,0 +1,47 @@
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
@ -0,0 +1,164 @@
---
apiVersion: v1
kind: Service
metadata:
  name: zk-headless
  labels:
    app: zk-headless
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-config
data:
  ensemble: "zk-0;zk-1;zk-2"
  jvm.heap: "2G"
  tick: "2000"
  init: "10"
  sync: "5"
  client.cnxns: "60"
  snap.retain: "3"
  purge.interval: "1"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-budget
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-headless
  replicas: 3
  template:
    metadata:
      labels:
        app: zk
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
        scheduler.alpha.kubernetes.io/affinity: >
            {
              "podAntiAffinity": {
                "requiredDuringSchedulingRequiredDuringExecution": [{
                  "labelSelector": {
                    "matchExpressions": [{
                      "key": "app",
                      "operator": "In",
                      "values": ["zk-headless"]
                    }]
                  },
                  "topologyKey": "kubernetes.io/hostname"
                }]
              }
            }
    spec:
      containers:
      - name: k8szk
        imagePullPolicy: Always
        image: gcr.io/google_samples/k8szk:v1
        resources:
          requests:
            memory: "4Gi"
            cpu: "1"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        env:
        - name: ZK_ENSEMBLE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: ensemble
        - name: ZK_HEAP_SIZE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: jvm.heap
        - name: ZK_TICK_TIME
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: tick
        - name: ZK_INIT_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: init
        - name: ZK_SYNC_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: tick
        - name: ZK_MAX_CLIENT_CNXNS
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: client.cnxns
        - name: ZK_SNAP_RETAIN_COUNT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: snap.retain
        - name: ZK_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: purge.interval
        - name: ZK_CLIENT_PORT
          value: "2181"
        - name: ZK_SERVER_PORT
          value: "2888"
        - name: ZK_ELECTION_PORT
          value: "3888"
        command:
        - sh
        - -c
        - zkGenConfig.sh && zkServer.sh start-foreground
        readinessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
|
|
@ -30,4 +30,4 @@
    "image-captions": {
        "caption": "图片 - _CAPTION_"
    }
}
@ -0,0 +1,10 @@
# Cluster Federation

![federation](media/federation.png)

![federation-service](media/federation-service.png)

https://tectonic.com/blog/kubernetes-cluster-federation.html
@ -0,0 +1,105 @@
# Minikube

Unlike Docker, where a single binary solves everything, Kubernetes ships separate binaries for its different services and moves some of them into addons, so deploying Kubernetes is considerably more involved. With the [minikube](https://github.com/kubernetes/minikube) project, it is now easy to start a single-node Kubernetes cluster on your local machine.

## Installing minikube

The latest minikube release is v0.15.0. It supports Kubernetes v1.3.0 through v1.5.1, starting Kubernetes v1.5.1 by default.

OSX

```
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.15.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
```

Linux

```
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.15.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
```

Windows

```
Download https://storage.googleapis.com/minikube/releases/v0.15.0/minikube-windows-amd64.exe and rename it to minikube.exe
```

minikube supports several drivers, such as xhyve (on OSX), VirtualBox and VMware Fusion, which have to be installed separately. For example, to install the xhyve driver on OSX:

```sh
brew install docker-machine-driver-xhyve
# docker-machine-driver-xhyve needs root owner and setuid permissions
sudo chown root:wheel $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
sudo chmod u+s $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
```

You also need to install the `kubectl` client to interact with Kubernetes:

```
gcloud components install kubectl
```
## Starting a Kubernetes cluster

Starting a Kubernetes cluster is then a single command:

```
$ minikube start
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
```

When running behind China's network restrictions, it is best to add a proxy:

```
minikube start --docker-env HTTP_PROXY=http://proxy-ip:port --docker-env HTTPS_PROXY=http://proxy-ip:port
```

You can then use kubectl to play with Kubernetes, for example starting a simple nginx service:

```
$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created
$ kubectl expose deployment nginx --port=80 --type=NodePort --name=nginx-http
service "nginx-http" exposed
$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-2032906785-81t56   1/1       Running   0          2m
$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.0.0.1     <none>        443/TCP   20m
nginx-http   10.0.0.146   <none>        80/TCP    2m
$ minikube service nginx-http --url
http://192.168.64.10:30569
```

The nginx service is now reachable directly at `http://192.168.64.10:30569`.

minikube also deploys the latest dashboard by default, which `minikube dashboard` opens in your default browser:

![](media/14735740742630.jpg)
For more commands, see minikube's help:

```
Usage:
  minikube [command]

Available Commands:
  dashboard        Opens/displays the kubernetes dashboard URL for your local cluster
  delete           Deletes a local kubernetes cluster.
  docker-env       sets up docker env variables; similar to '$(docker-machine env)'
  get-k8s-versions Gets the list of available kubernetes versions available for minikube.
  ip               Retrieve the IP address of the running cluster.
  logs             Gets the logs of the running localkube instance, used for debugging minikube, not user code.
  service          Gets the kubernetes URL for the specified service in your local cluster
  ssh              Log into or run a command on a machine with SSH; similar to 'docker-machine ssh'
  start            Starts a local kubernetes cluster.
  status           Gets the status of a local kubernetes cluster.
  stop             Stops a running local kubernetes cluster.
  version          Print the version of minikube.
```

See https://github.com/kubernetes/minikube for more.
@ -0,0 +1,48 @@
# Node

## Node maintenance mode

```
kubectl drain NODE [Options]
```

- It deletes the pods on NODE that were created by a ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job.
- It does not delete mirror pods (mirror pods cannot be deleted through the API).
- If there are pods of any other kind (for example, pods created directly with kubectl create rather than by an RC) and --force is not given, the command fails outright.
- With --force, those pods not created by a ReplicationController, Job or DaemonSet are forcibly deleted as well.

Sometimes you do not need to evict pods but only want to mark the node unschedulable; use `kubectl cordon` for that.

To recover, run `kubectl uncordon NODE` to make NODE schedulable again.
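The eviction rules above can be sketched as a simple filter. This is an illustration only, not kubectl's actual implementation; the pod records and controller kinds are simplified stand-ins:

```python
# Toy model of which pods `kubectl drain` evicts (illustration only).
# A pod is a dict with its managing controller kind (None for a bare pod)
# and an optional mirror-pod flag.
CONTROLLER_KINDS = {"ReplicationController", "ReplicaSet",
                    "DaemonSet", "StatefulSet", "Job"}

def pods_to_evict(pods, force=False):
    evict = []
    for pod in pods:
        if pod.get("mirror"):
            continue  # mirror pods cannot be deleted through the API
        if pod.get("controller") in CONTROLLER_KINDS:
            evict.append(pod["name"])  # managed pods are always evicted
        elif force:
            evict.append(pod["name"])  # bare pods go only with --force
        else:
            raise RuntimeError(
                "bare pod %s blocks drain without --force" % pod["name"])
    return evict

pods = [
    {"name": "web-1", "controller": "ReplicaSet"},
    {"name": "kube-apiserver-node08", "mirror": True},
    {"name": "debug-pod", "controller": None},
]
# Without force the bare pod makes the drain fail; with it, both the
# managed pod and the bare pod are evicted while the mirror pod is skipped.
print(pods_to_evict(pods, force=True))
```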
## Taints and tolerations

```
# taint the node to keep new pods off it
kubectl taint nodes node08 dedicated=maintaining:NoSchedule
# label the node so that only designated pods land on it
kubectl label nodes node08 hyper/nodetype=maintaining
```

Then add the following annotations to the Pod definition:

```
annotations:
  scheduler.alpha.kubernetes.io/tolerations: '[{"key":"dedicated", "value":"maintaining"}]'
  scheduler.alpha.kubernetes.io/affinity: >
    {
      "nodeAffinity": {
        "requiredDuringSchedulingIgnoredDuringExecution": {
          "nodeSelectorTerms": [
            {
              "matchExpressions": [
                {
                  "key": "hyper/nodetype",
                  "operator": "In",
                  "values": ["maintaining"]
                }
              ]
            }
          ]
        }
      }
    }
```
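Since both annotation values are JSON embedded inside YAML strings, a quick parse before applying the manifest catches quoting mistakes early. A small sanity-check sketch (the values mirror the alpha annotations shown above):

```python
import json

# The two annotation values, exactly as they appear in the Pod manifest.
tolerations = '[{"key":"dedicated", "value":"maintaining"}]'
affinity = '''
{
  "nodeAffinity": {
    "requiredDuringSchedulingIgnoredDuringExecution": {
      "nodeSelectorTerms": [
        {"matchExpressions": [
          {"key": "hyper/nodetype", "operator": "In",
           "values": ["maintaining"]}
        ]}
      ]
    }
  }
}
'''

# Both values must parse as JSON, or the scheduler silently ignores them.
parsed = json.loads(affinity)
expr = (parsed["nodeAffinity"]
        ["requiredDuringSchedulingIgnoredDuringExecution"]
        ["nodeSelectorTerms"][0]["matchExpressions"][0])
assert expr["key"] == "hyper/nodetype"
assert json.loads(tolerations)[0]["key"] == "dedicated"
```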
@ -0,0 +1,2 @@
# Core components
@ -0,0 +1,42 @@
# kube-proxy

## Iptables example

```
# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere         anywhere             /* kubernetes service portals */ ← 1
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain KUBE-SEP-G3MLSGWVLUPEIMXS (1 references) ← 4
target     prot opt source               destination
MARK       all  --  172.16.16.2          anywhere             /* default/webpod-service: */ MARK set 0x4d415351
DNAT       tcp  --  anywhere             anywhere             /* default/webpod-service: */ tcp to:172.16.16.2:80

Chain KUBE-SEP-OUBP2X5UG3G4CYYB (1 references)
target     prot opt source               destination
MARK       all  --  192.168.190.128      anywhere             /* default/kubernetes: */ MARK set 0x4d415351
DNAT       tcp  --  anywhere             anywhere             /* default/kubernetes: */ tcp to:192.168.190.128:6443

Chain KUBE-SEP-PXEMGP3B44XONJEO (1 references) ← 4
target     prot opt source               destination
MARK       all  --  172.16.91.2          anywhere             /* default/webpod-service: */ MARK set 0x4d415351
DNAT       tcp  --  anywhere             anywhere             /* default/webpod-service: */ tcp to:172.16.91.2:80

Chain KUBE-SERVICES (2 references) ← 2
target     prot opt source               destination
KUBE-SVC-N4RX4VPNP4ATLCGG  tcp  --  anywhere  192.168.3.237   /* default/webpod-service: cluster IP */ tcp dpt:http
KUBE-SVC-6N4SJQIF3IX3FORG  tcp  --  anywhere  192.168.3.1     /* default/kubernetes: cluster IP */ tcp dpt:https
KUBE-NODEPORTS  all  --  anywhere        anywhere             /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-6N4SJQIF3IX3FORG (1 references)
target     prot opt source               destination
KUBE-SEP-OUBP2X5UG3G4CYYB  all  --  anywhere  anywhere        /* default/kubernetes: */

Chain KUBE-SVC-N4RX4VPNP4ATLCGG (1 references) ← 3
target     prot opt source               destination
KUBE-SEP-G3MLSGWVLUPEIMXS  all  --  anywhere  anywhere        /* default/webpod-service: */ statistic mode random probability 0.50000000000
KUBE-SEP-PXEMGP3B44XONJEO  all  --  anywhere  anywhere        /* default/webpod-service: */
```
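The `statistic mode random probability 0.5` rule in the KUBE-SVC-N4RX4VPNP4ATLCGG chain is how kube-proxy spreads traffic across endpoints: with n endpoints, rule i (0-based, tried in order) matches with probability 1/(n-i), which works out to a uniform 1/n per endpoint. A quick check of that arithmetic:

```python
from fractions import Fraction

def endpoint_probabilities(n):
    """Overall selection probability per endpoint when rule i matches
    with probability 1/(n-i) and rules are evaluated in order."""
    remaining = Fraction(1)  # probability that no earlier rule matched
    probs = []
    for i in range(n):
        p_rule = Fraction(1, n - i)
        probs.append(remaining * p_rule)
        remaining *= 1 - p_rule
    return probs

# Two endpoints, as in the webpod-service chain above:
# rule 0 matches with p=0.5, rule 1 catches everything else.
assert endpoint_probabilities(2) == [Fraction(1, 2), Fraction(1, 2)]
assert endpoint_probabilities(3) == [Fraction(1, 3)] * 3
```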
@ -0,0 +1,7 @@
# Kubernetes debugging

## Debugging Kubernetes applications

* [Debugging applications running in containers](http://feisky.xyz/2017/02/14/Debugging-application-in-containers/)
@ -0,0 +1,152 @@
# Certificate generation

kubeadm automatically generates the certificates Kubernetes needs when deploying a cluster; this page shows how to generate them by hand.

Install cfssl:

```sh
go get -u github.com/cloudflare/cfssl/cmd/...
```

Create the CA configuration files:

```sh
mkdir -p /etc/ssl/certs
cd /etc/ssl/certs

cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
cat >ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

cat >ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```

Create the CA certificate and private key:

```sh
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```

Create the Kubernetes certificate:

```sh
cat >kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.20.0.112",
    "172.20.0.113",
    "172.20.0.114",
    "172.20.0.115",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
```

Create the admin and kube-proxy client certificates:

```sh
cat >admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin


cat >kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
```
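The `87600h` expiry used in ca-config.json is 10 years, and all four CSR documents share the same shape. Generating them from a small script avoids copy-paste quoting errors; here is a sketch (the field values mirror the files above, the `csr` helper name is our own):

```python
import json

EXPIRY_HOURS = 87600
assert EXPIRY_HOURS // (24 * 365) == 10  # 87600h is 10 years

def csr(common_name, hosts=(), org="k8s"):
    """Build a cfssl CSR document like the ones written above."""
    return {
        "CN": common_name,
        "hosts": list(hosts),
        "key": {"algo": "rsa", "size": 2048},
        "names": [{"C": "CN", "ST": "BeiJing", "L": "BeiJing",
                   "O": org, "OU": "System"}],
    }

k8s_csr = csr("kubernetes", hosts=[
    "127.0.0.1", "10.254.0.1", "kubernetes", "kubernetes.default",
    "kubernetes.default.svc", "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
])
# Serialize exactly as cfssl expects; the admin cert differs only in
# CN and in using O=system:masters.
doc = json.dumps(k8s_csr, indent=2)
assert json.loads(doc)["CN"] == "kubernetes"
```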
@ -0,0 +1,85 @@
# Cluster deployment

## Kubernetes cluster architecture

![](../ha/ha.png)

### etcd cluster

After obtaining a token from `https://discovery.etcd.io/new?size=3`, put <https://kubernetes.io/docs/admin/high-availability/etcd.yaml> at `/etc/kubernetes/manifests/etcd.yaml` on every machine and substitute `${DISCOVERY_TOKEN}`, `${NODE_NAME}` and `${NODE_IP}`; the kubelet will then start an etcd cluster.

For etcd running outside the kubelet, see the [etcd clustering guide](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/clustering.md) to configure cluster mode by hand.

### kube-apiserver

Put <https://kubernetes.io/docs/admin/high-availability/kube-apiserver.yaml> in `/etc/kubernetes/manifests/` on every master node and place the related configuration under `/srv/kubernetes/`; the kubelet then creates and starts the apiserver automatically:

- basic_auth.csv - basic auth user and password
- ca.crt - Certificate Authority cert
- known_tokens.csv - tokens that entities (e.g. the kubelet) can use to talk to the apiserver
- kubecfg.crt - Client certificate, public key
- kubecfg.key - Client certificate, private key
- server.cert - Server certificate, public key
- server.key - Server certificate, private key

Once the apiservers are up, they still need load balancing: use your cloud platform's elastic load balancer, or put haproxy/lvs/nginx in front of the master nodes.

Keepalived, OSPF, Pacemaker and similar tools can in turn keep the load-balancer nodes themselves highly available.

Notes:

- On large clusters, raise `--max-requests-inflight` (default 400).
- When using nginx, raise `proxy_timeout: 10m`.

### controller manager and scheduler

The controller manager and scheduler must have only one active instance at any moment, which requires leader election, so start them with `--leader-elect=true`, e.g.

```
kube-scheduler --master=127.0.0.1:8080 --v=2 --leader-elect=true
kube-controller-manager --master=127.0.0.1:8080 --cluster-cidr=10.245.0.0/16 --allocate-node-cidrs=true --service-account-private-key-file=/srv/kubernetes/server.key --v=2 --leader-elect=true
```

Put [kube-scheduler.yaml](https://kubernetes.io/docs/admin/high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](https://kubernetes.io/docs/admin/high-availability/kube-controller-manager.yaml) (adjust as needed on non-GCE platforms) in `/etc/kubernetes/manifests/` on every master node.
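`--leader-elect` works by having every instance race to acquire and periodically renew a lease; only the current holder runs its control loops, and a standby takes over once the lease expires. A toy model of the acquire/renew/expire cycle (purely illustrative; the real implementation records holder and timestamps in an API object annotation):

```python
class Lease:
    """Toy lease: one holder at a time, expires after `ttl` ticks."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.holder = None
        self.expires = 0

    def try_acquire(self, who, now):
        if self.holder is None or now >= self.expires:
            self.holder = who              # lease free or expired: take it
        if self.holder == who:
            self.expires = now + self.ttl  # holder renews on every attempt
        return self.holder == who

lease = Lease(ttl=3)
assert lease.try_acquire("scheduler-a", now=0)      # a becomes leader
assert not lease.try_acquire("scheduler-b", now=1)  # b stays standby
assert lease.try_acquire("scheduler-a", now=2)      # a renews (expires at 5)
# a crashes and stops renewing; once the lease expires, b takes over.
assert lease.try_acquire("scheduler-b", now=6)
```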
### kube-dns

kube-dns can be deployed as a Deployment, and kubeadm creates it automatically by default. On large clusters, however, its resource limits need to be relaxed, e.g.

```
dns_replicas: 6
dns_cpu_limit: 100m
dns_memory_limit: 512Mi
dns_cpu_requests: 70m
dns_memory_requests: 70Mi
```

dnsmasq also needs more resources, for example raising its cache size to 10000 and its concurrency with `--dns-forward-max=1000`.
### Data persistence

Besides the configuration above, persistent storage is also required for a highly available Kubernetes cluster.

- For clusters on public clouds, consider the platform's persistent storage, such as AWS EBS or GCE persistent disk.
- For clusters on physical machines, consider network storage such as iSCSI, NFS, Gluster or Ceph, or use RAID.

## GCE/Azure

On GCE or Azure, the cluster scripts make deployment straightforward:

```
# gce,aws,gke,azure-legacy,vsphere,openstack-heat,rackspace,libvirt-coreos
export KUBERNETES_PROVIDER=gce
curl -sS https://get.k8s.io | bash
cd kubernetes
cluster/kube-up.sh
```

## AWS

On AWS, [kops](https://kubernetes.io/docs/getting-started-guides/kops/) is the recommended way to deploy.

## Physical or virtual machines

On Linux physical or virtual machines, [kubeadm](https://kubernetes.io/docs/getting-started-guides/kubeadm/) is recommended for deploying a Kubernetes cluster.
@ -0,0 +1,181 @@
# Cluster deployment of frakti on CentOS

This document shows how to easily install a kubernetes cluster with the frakti runtime.

Frakti is a hypervisor-based container runtime; it depends on a few packages besides kubernetes:

- hyperd: the hyper container engine (main container runtime)
- docker: the docker container engine (auxiliary container runtime)
- cni: the network plugin

## Optional: create instances on GCE

It is recommended to run frakti-enabled kubernetes on bare metal, but you can still try frakti out on public clouds.

**Do not forget to enable ip_forward on GCE.**

## Initialize all nodes

### Install hyperd

```sh
# install from https://docs.hypercontainer.io/get_started/install/linux.html
curl -sSL https://hypercontainer.io/install | bash

echo -e "Hypervisor=libvirt\n\
Kernel=/var/lib/hyper/kernel\n\
Initrd=/var/lib/hyper/hyper-initrd.img\n\
Hypervisor=qemu\n\
StorageDriver=overlay\n\
gRPCHost=127.0.0.1:22318" > /etc/hyper/config
systemctl enable hyperd
systemctl restart hyperd
```

### Install docker

```sh
yum install -y docker
sed -i 's/native.cgroupdriver=systemd/native.cgroupdriver=cgroupfs/g' /usr/lib/systemd/system/docker.service
systemctl daemon-reload

systemctl enable docker
systemctl start docker
```

### Install frakti

```sh
curl -sSL https://github.com/kubernetes/frakti/releases/download/v0.1/frakti -o /usr/bin/frakti
chmod +x /usr/bin/frakti
cat <<EOF > /lib/systemd/system/frakti.service
[Unit]
Description=Hypervisor-based container runtime for Kubernetes
Documentation=https://github.com/kubernetes/frakti
After=network.target

[Service]
ExecStart=/usr/bin/frakti --v=3 \
  --log-dir=/var/log/frakti \
  --logtostderr=false \
  --listen=/var/run/frakti.sock \
  --streaming-server-addr=%H \
  --hyper-endpoint=127.0.0.1:22318
MountFlags=shared
TasksMax=8192
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
EOF
```

### Install CNI

Frakti requires a cni network to start.

Note:

- Configure a different subnet on each host, e.g.
  - 10.244.1.0/24
  - 10.244.2.0/24
  - 10.244.3.0/24
- Configure host routes on GCE
  - gcloud compute routes create "instance-1" --description "instance-1" --destination-range "10.244.1.0/24" --network "default" --next-hop-instance "instance-1" --next-hop-instance-zone "asia-east1-a" --priority "100"
  - gcloud compute routes create "instance-2" --description "instance-2" --destination-range "10.244.2.0/24" --network "default" --next-hop-instance "instance-2" --next-hop-instance-zone "asia-east1-a" --priority "100"
  - gcloud compute routes create "instance-3" --description "instance-3" --destination-range "10.244.3.0/24" --network "default" --next-hop-instance "instance-3" --next-hop-instance-zone "asia-east1-a" --priority "100"
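The per-host subnet plan above carves consecutive /24s out of the cluster's 10.244.0.0/16 pod CIDR. The stdlib `ipaddress` module can enumerate them, which is handy when scripting the per-node CNI configs and GCE routes (a helper sketch; `host_subnets` is our own name, and we skip 10.244.0.0/24 to match the numbering above):

```python
import ipaddress

def host_subnets(pod_cidr, count):
    """First `count` /24 subnets of the pod CIDR (skipping .0), one per node."""
    net = ipaddress.ip_network(pod_cidr)
    return [str(s) for s in list(net.subnets(new_prefix=24))[1:count + 1]]

# instance-1..instance-3 get 10.244.1.0/24 .. 10.244.3.0/24
assert host_subnets("10.244.0.0/16", 3) == [
    "10.244.1.0/24", "10.244.2.0/24", "10.244.3.0/24"]
```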
```sh
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64-unstable
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubernetes-cni bridge-utils
```

Configure the cni network:

```sh
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-mynet.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.1.0/24",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF
cat >/etc/cni/net.d/99-loopback.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "type": "loopback"
}
EOF
```

### Install kubelet

```sh
yum install -y kubelet kubeadm kubectl
# Note that there are no kubernetes v1.6 rpms on `yum.kubernetes.io`, so it needs to be fetched from `dl.k8s.io`:
# Download latest release of kubelet and kubectl
# TODO: remove this after the stable v1.6 release
cd /tmp/
curl -SL https://dl.k8s.io/v1.6.0-beta.4/kubernetes-server-linux-amd64.tar.gz -o kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
/bin/cp -f kubernetes/server/bin/{kubelet,kubeadm,kubectl} /usr/bin/
rm -rf kubernetes-server-linux-amd64.tar.gz kubernetes
```

Configure kubelet with the frakti runtime:

```sh
sed -i '2 i\Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=/var/run/frakti.sock --feature-gates=AllAlpha=true"' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```

## Setting up the master node

The hyperkube image can be customized via `KUBE_HYPERKUBE_IMAGE`:

- `VERSION=v1.6.0 make -C cluster/images/hyperkube build`
- `export KUBE_HYPERKUBE_IMAGE=xxxx`

```sh
kubeadm init --pod-network-cidr 10.244.0.0/16 --kubernetes-version latest
```

Optional: enable scheduling pods on the master

```sh
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-
```

## Setting up the worker nodes

```sh
# get token on master node
token=$(kubeadm token list | grep authentication,signing | awk '{print $1}')
# join master on worker nodes
kubeadm join --token $token ${master_ip}:6443
```
@ -0,0 +1,231 @@
# Cluster deployment of frakti

- [Cluster deployment of frakti](#cluster-deployment-of-frakti)
  - [Overview](#overview)
  - [Install packages](#install-packages)
    - [Install hyperd](#install-hyperd)
    - [Install docker](#install-docker)
    - [Install frakti](#install-frakti)
    - [Install CNI](#install-cni)
    - [Install kubelet](#install-kubelet)
  - [Setting up the master node](#setting-up-the-master-node)
  - [Setting up the worker nodes](#setting-up-the-worker-nodes)

## Overview

This document shows how to easily install a kubernetes cluster with the frakti runtime.

Frakti is a hypervisor-based container runtime; it depends on a few packages besides kubernetes:

- hyperd: the hyper container engine (main container runtime)
- docker: the docker container engine (auxiliary container runtime)
- cni: the network plugin

## Install packages

### Install hyperd

On Ubuntu 16.04+:

```sh
apt-get update && apt-get install -y qemu libvirt-bin
curl -sSL https://hypercontainer.io/install | bash
```

On CentOS 7:

```sh
curl -sSL https://hypercontainer.io/install | bash
```

Configure hyperd:

```sh
echo -e "Hypervisor=libvirt\n\
Kernel=/var/lib/hyper/kernel\n\
Initrd=/var/lib/hyper/hyper-initrd.img\n\
Hypervisor=qemu\n\
StorageDriver=overlay\n\
gRPCHost=127.0.0.1:22318" > /etc/hyper/config
systemctl enable hyperd
systemctl restart hyperd
```

### Install docker

On Ubuntu 16.04+:

```sh
apt-get update
apt-get install -y docker.io
```

On CentOS 7:

```sh
yum install -y docker
sed -i 's/native.cgroupdriver=systemd/native.cgroupdriver=cgroupfs/g' /usr/lib/systemd/system/docker.service
systemctl daemon-reload
```

Configure and start docker:

```sh
systemctl enable docker
systemctl start docker
```

### Install frakti

```sh
curl -sSL https://github.com/kubernetes/frakti/releases/download/v0.1/frakti -o /usr/bin/frakti
chmod +x /usr/bin/frakti
cat <<EOF > /lib/systemd/system/frakti.service
[Unit]
Description=Hypervisor-based container runtime for Kubernetes
Documentation=https://github.com/kubernetes/frakti
After=network.target

[Service]
ExecStart=/usr/bin/frakti --v=3 \
  --log-dir=/var/log/frakti \
  --logtostderr=false \
  --listen=/var/run/frakti.sock \
  --streaming-server-addr=%H \
  --hyper-endpoint=127.0.0.1:22318
MountFlags=shared
TasksMax=8192
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
EOF
```

### Install CNI

On Ubuntu 16.04+:

```sh
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial-unstable main
EOF
apt-get update
apt-get install -y kubernetes-cni
```

On CentOS 7:

```sh
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64-unstable
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubernetes-cni
```

Configure CNI networks:

```sh
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-mynet.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF
cat >/etc/cni/net.d/99-loopback.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "type": "loopback"
}
EOF
```

### Start frakti

```sh
systemctl enable frakti
systemctl start frakti
```

### Install kubelet

On Ubuntu 16.04+:

```sh
apt-get install -y kubelet kubeadm kubectl
```

On CentOS 7:

```sh
yum install -y kubelet kubeadm kubectl
```

> Note that there are no kubernetes v1.6 rpms on `yum.kubernetes.io`, so it needs to be fetched from `dl.k8s.io`:

```sh
# Download latest release of kubelet and kubectl
# TODO: remove this after the stable v1.6 release
curl -SL https://dl.k8s.io/v1.6.0-beta.4/kubernetes-server-linux-amd64.tar.gz -o kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
/bin/cp -f kubernetes/server/bin/{kubelet,kubeadm,kubectl} /usr/bin/
rm -rf kubernetes-server-linux-amd64.tar.gz
```

Configure kubelet with the frakti runtime:

```sh
sed -i '2 i\Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=/var/run/frakti.sock --feature-gates=AllAlpha=true"' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```

## Setting up the master node

```sh
# export KUBE_HYPERKUBE_IMAGE=
kubeadm init --pod-network-cidr 10.244.0.0/16 --kubernetes-version latest
```

Optional: enable scheduling pods on the master

```sh
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-
```

## Setting up the worker nodes

```sh
# get token on master node
token=$(kubeadm token list | grep authentication,signing | awk '{print $1}')

# join master on worker nodes
kubeadm join --token $token ${master_ip}
```
@ -0,0 +1,177 @@
|
||||||
|
# Deploying a frakti cluster on Ubuntu

This document shows how to install a Kubernetes cluster with the frakti runtime.

Frakti is a hypervisor-based container runtime. Besides Kubernetes itself, it depends on a few packages:

- hyperd: the hyper container engine (main container runtime)
- docker: the docker container engine (auxiliary container runtime)
- cni: the network plugin

## Optional: create instances on GCE

It is recommended to run frakti-enabled Kubernetes on bare metal, but you can still try frakti out on public clouds.

**Do not forget to enable ip_forward on GCE.**

## Initialize all nodes

### Install hyperd

```sh
# install from https://docs.hypercontainer.io/get_started/install/linux.html
apt-get update && apt-get install -y qemu libvirt-bin
curl -sSL https://hypercontainer.io/install | bash

echo -e "Hypervisor=libvirt\n\
Kernel=/var/lib/hyper/kernel\n\
Initrd=/var/lib/hyper/hyper-initrd.img\n\
Hypervisor=qemu\n\
StorageDriver=overlay\n\
gRPCHost=127.0.0.1:22318" > /etc/hyper/config
systemctl enable hyperd
systemctl restart hyperd
```

### Install docker

```sh
apt-get update
apt-get install -y docker.io

systemctl enable docker
systemctl start docker
```

### Install frakti

```sh
curl -sSL https://github.com/kubernetes/frakti/releases/download/v0.1/frakti -o /usr/bin/frakti
chmod +x /usr/bin/frakti
cat <<EOF > /lib/systemd/system/frakti.service
[Unit]
Description=Hypervisor-based container runtime for Kubernetes
Documentation=https://github.com/kubernetes/frakti
After=network.target

[Service]
ExecStart=/usr/bin/frakti --v=3 \
  --log-dir=/var/log/frakti \
  --logtostderr=false \
  --listen=/var/run/frakti.sock \
  --streaming-server-addr=%H \
  --hyper-endpoint=127.0.0.1:22318
MountFlags=shared
TasksMax=8192
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
EOF
```

### Install CNI

Frakti requires a CNI network to start.

Note:

- Configure a different subnet for each host, e.g.
  - 10.244.1.0/24
  - 10.244.2.0/24
  - 10.244.3.0/24
- Configure host routes on GCE:

```sh
gcloud compute routes create "instance-1" --description "instance-1" --destination-range "10.244.1.0/24" --network "default" --next-hop-instance "instance-1" --next-hop-instance-zone "asia-east1-a" --priority "100"
gcloud compute routes create "instance-2" --description "instance-2" --destination-range "10.244.2.0/24" --network "default" --next-hop-instance "instance-2" --next-hop-instance-zone "asia-east1-a" --priority "100"
gcloud compute routes create "instance-3" --description "instance-3" --destination-range "10.244.3.0/24" --network "default" --next-hop-instance "instance-3" --next-hop-instance-zone "asia-east1-a" --priority "100"
```

Install the CNI packages:

```sh
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial-unstable main
EOF
apt-get update
apt-get install -y kubernetes-cni
```

Configure the cni network:

```sh
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-mynet.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.1.0/24",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF
cat >/etc/cni/net.d/99-loopback.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "type": "loopback"
}
EOF
```
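
Since each host needs its own subnet in `10-mynet.conf`, the file can be templated per host. A minimal sketch (the `/tmp` path and the `SUBNET` variable are illustrative, not part of the official setup):

```sh
# Render a per-host 10-mynet.conf into a scratch directory
SUBNET="10.244.2.0/24"   # set per host: 10.244.1.0/24, 10.244.2.0/24, ...
mkdir -p /tmp/cni-demo
cat > /tmp/cni-demo/10-mynet.conf <<EOF
{
    "cniVersion": "0.3.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "${SUBNET}",
        "routes": [ { "dst": "0.0.0.0/0" } ]
    }
}
EOF
grep '"subnet"' /tmp/cni-demo/10-mynet.conf
```

On a real node, write the file to `/etc/cni/net.d/` instead of `/tmp`.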

### Start frakti

```sh
systemctl enable frakti
systemctl start frakti
```

### Install kubelet

```sh
apt-get install -y kubelet kubeadm kubectl
```

Configure kubelet with the frakti runtime:

```sh
sed -i '2 i\Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=/var/run/frakti.sock --feature-gates=AllAlpha=true"' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```
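
The `sed` one-liner inserts the `Environment=` setting as line 2 of kubeadm's kubelet drop-in, i.e. right after the `[Service]` header. A sketch of the effect on a stand-in file (the `/tmp` path and the file contents are illustrative):

```sh
# Minimal stand-in for the kubeadm drop-in file
cat > /tmp/10-kubeadm.conf <<'EOF'
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf"
EOF

# Same insertion as above: the new setting becomes line 2, inside [Service]
sed -i '2 i\Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=/var/run/frakti.sock"' /tmp/10-kubeadm.conf

sed -n '2p' /tmp/10-kubeadm.conf
```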

## Setting up the master node

The hyperkube image can be customized via `KUBE_HYPERKUBE_IMAGE`:

- `VERSION=v1.6.0 make -C cluster/images/hyperkube build`
- `export KUBE_HYPERKUBE_IMAGE=xxxx`

```sh
kubeadm init --pod-network-cidr 10.244.0.0/16 --kubernetes-version latest
```

Optional: allow scheduling pods on the master:

```sh
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-
```

## Setting up the worker nodes

```sh
# get the token on the master node
token=$(kubeadm token list | grep authentication,signing | awk '{print $1}')

# join the master on worker nodes
kubeadm join --token $token ${master_ip}
```

# Kubernetes Deployment

- [Single-node deployment](single.md)
- [Cluster deployment](cluster.md)
  - [kubeadm](kubeadm.md)
  - [frakti](frakti/index.md)
- [Certificate generation example](certificate.md)

# kubeadm

## Initialize the system

Docker and kubelet need to be set up on all machines.

### Ubuntu

```sh
# for ubuntu 16.04+
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
# Install docker if you don't have it already.
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker

systemctl enable kubelet && systemctl start kubelet
```

### CentOS

```sh
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker

systemctl enable kubelet && systemctl start kubelet
```

## Install the master

```sh
# --api-advertise-addresses <ip-address>
# for flannel, set --pod-network-cidr 10.244.0.0/16
kubeadm init --pod-network-cidr 10.244.0.0/16 --kubernetes-version latest

# enable scheduling pods on the master
export KUBECONFIG=/etc/kubernetes/admin.conf
# for v1.5-, use kubectl taint nodes --all dedicated-
kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-
```

## Configure the network plugin

### CNI bridge

```sh
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-mynet.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF
cat >/etc/cni/net.d/99-loopback.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "type": "loopback"
}
EOF
```

### flannel

```sh
# kubectl apply -f https://gist.githubusercontent.com/feiskyer/1e7a95f27c391a35af47881eb20131d7/raw/4266f05355590fa185bc8e50c0f50d2841993d20/flannel.yaml
kubectl create -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel-rbac.yml
kubectl create -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
```

### weave

```sh
# kubectl apply -f https://gist.githubusercontent.com/feiskyer/0b00688584cc7ed9bd9a993adddae5e3/raw/67f3558e32d5c76be38e36ef713cc46deb2a74ca/weave.yaml
kubectl apply -f https://git.io/weave-kube-1.6
```

### calico

```sh
# kubectl apply -f https://gist.githubusercontent.com/feiskyer/0f952c7dadbfcefd2ce81ba7ea24a8ca/raw/92addea398bbc4d4a1dcff8a98c1ac334c8acb26/calico.yaml
kubectl apply -f http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
```

## Add nodes

```sh
token=$(kubeadm token list | grep authentication,signing | awk '{print $1}')
kubeadm join --token $token ${master_ip}
```
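
The `grep`/`awk` pipeline above simply pulls the first column of the row whose usages include `authentication,signing`. A sketch against canned `kubeadm token list` output (the sample token and layout are illustrative):

```sh
# Canned output resembling `kubeadm token list`
cat > /tmp/token-list.txt <<'EOF'
TOKEN                     TTL       EXPIRES   USAGES                   DESCRIPTION
abcdef.0123456789abcdef   <forever> <never>   authentication,signing   <none>
EOF

# Extract the first field of the matching row
token=$(grep authentication,signing /tmp/token-list.txt | awk '{print $1}')
echo $token
```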

## Tear down

```sh
kubeadm reset
```

# Kubernetes Logging

The ELK stack is the classic combination for collecting, processing, and searching container logs:

* Logstash (or Fluentd) collects the logs
* Elasticsearch stores the logs and provides search
* Kibana handles log querying and visualization

Note: by default Kubernetes uses fluentd (started as a DaemonSet) to collect logs and send them to elasticsearch.

**Tip**

When deploying a cluster with `cluster/kube-up.sh`, you can set the `KUBE_LOGGING_DESTINATION` environment variable to automatically deploy Elasticsearch and Kibana and collect logs with fluentd (see [addons/fluentd-elasticsearch](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch) for the configuration):

```sh
KUBE_LOGGING_DESTINATION=elasticsearch
KUBE_ENABLE_NODE_LOGGING=true
cluster/kube-up.sh
```

On GCE or GKE, you can also [send logs to Google Cloud Logging](https://kubernetes.io/docs/user-guide/logging/stackdriver/) and integrate with Google Cloud Storage and BigQuery.

To integrate other logging solutions, you can also customize docker's log driver and send logs to splunk, awslogs, and so on.
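
For example, the docker daemon can be switched to the `awslogs` log driver in `/etc/docker/daemon.json`; a sketch, where the region and log group values are placeholders:

```json
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "us-east-1",
    "awslogs-group": "my-container-logs"
  }
}
```

Restart dockerd after changing the file; individual containers can still override the driver with `docker run --log-driver`.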

# Single-node Deployment

The easiest way to create a (single-node) Kubernetes cluster is [minikube](https://github.com/kubernetes/minikube).

First download kubectl:

```sh
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/linux/amd64/kubectl
chmod +x kubectl
```

Then start minikube:

```sh
$ minikube start
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
$ kubectl cluster-info
Kubernetes master is running at https://192.168.64.12:8443
kubernetes-dashboard is running at https://192.168.64.12:8443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

## Development versions

minikube/localkube only ships official releases. To deploy master or a development version, use `hack/local-up-cluster.sh` to start a local cluster instead:

```sh
cd $GOPATH/src/k8s.io/kubernetes

export KUBERNETES_PROVIDER=local
hack/install-etcd.sh
export PATH=$GOPATH/src/k8s.io/kubernetes/third_party/etcd:$PATH
hack/local-up-cluster.sh
```

Open another terminal and configure kubectl:

```sh
cd $GOPATH/src/k8s.io/kubernetes
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
cluster/kubectl.sh
```

# Contributing to the Kubernetes Community

- [Contributing guidelines](https://github.com/kubernetes/kubernetes/blob/master/CONTRIBUTING.md)
- [Kubernetes Developer Guide](https://github.com/kubernetes/community/tree/master/contributors/devel)
- [Special Interest Groups](https://github.com/kubernetes/community)
- [Feature Tracking and Backlog](https://github.com/kubernetes/features)
- [Community Expectations](https://github.com/kubernetes/community/blob/master/contributors/devel/community-expectations.md)

# Kubernetes Development Environment

## Set up the development environment

```sh
apt-get install -y gcc make socat git

# install docker
curl -fsSL https://get.docker.com/ | sh

# install etcd
curl -L https://github.com/coreos/etcd/releases/download/v3.0.10/etcd-v3.0.10-linux-amd64.tar.gz -o etcd-v3.0.10-linux-amd64.tar.gz && tar xzvf etcd-v3.0.10-linux-amd64.tar.gz && /bin/cp -f etcd-v3.0.10-linux-amd64/{etcd,etcdctl} /usr/bin && rm -rf etcd-v3.0.10-linux-amd64*

# install golang
curl -sL https://storage.googleapis.com/golang/go1.8.1.linux-amd64.tar.gz | tar -C /usr/local -zxf -
export GOPATH=/gopath
export PATH=$PATH:$GOPATH/bin:/usr/local/bin:/usr/local/go/bin/

# Get essential tools for building kubernetes
go get -u github.com/jteeuwen/go-bindata/go-bindata

# Get kubernetes code
mkdir -p $GOPATH/src/k8s.io
git clone https://github.com/kubernetes/kubernetes $GOPATH/src/k8s.io/kubernetes
cd $GOPATH/src/k8s.io/kubernetes

# Start a local cluster
export KUBERNETES_PROVIDER=local
# export EXPERIMENTAL_CRI=true
# export ALLOW_SECURITY_CONTEXT=yes
# set dockerd --selinux-enabled
# export NET_PLUGIN=kubenet
hack/local-up-cluster.sh
```

Open another terminal and configure kubectl:

```sh
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
cluster/kubectl.sh
```

## Build a release

```sh
make quick-release
```

## Containerized development environment

```sh
hyper run -it feisky/kubernetes-dev bash
# /hack/start-hyperd.sh
# /hack/start-docker.sh
# /hack/start-frakti.sh
# /hack/start-kubernetes-frakti.sh
# /hack/setup-kubectl.sh
# cluster/kubectl.sh
```

## Unit tests

```sh
# unit test a specific package
go test -v k8s.io/kubernetes/pkg/kubelet/kuberuntime
```

## e2e tests

```sh
make WHAT='test/e2e/e2e.test'
make ginkgo

export KUBERNETES_PROVIDER=local
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Port\sforwarding'
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Feature:SecurityContext'
```

## Node e2e tests

```sh
export KUBERNETES_PROVIDER=local
make test-e2e-node FOCUS="InitContainer" TEST_ARGS="--runtime-integration-type=cri"
```

## Bot commands

- Jenkins verification: `@k8s-bot verify test this`
- GCE E2E: `@k8s-bot cvm gce e2e test this`
- Test all: `@k8s-bot test this please, issue #IGNORE`
- CRI test: `@k8s-bot cri test this`
- LGTM (only applies if you are one of the assignees): `/lgtm`
- LGTM cancel: `/lgtm cancel`

More commands are listed in [kubernetes test-infra](https://github.com/kubernetes/test-infra/blob/master/prow/commands.md).

## Useful git commands

Fetch a pull request into a local branch:

```sh
git fetch upstream pull/324/head:branch
git fetch upstream pull/365/merge:branch
```

Or configure `.git/config` and run `git fetch` to fetch all pull requests:

```
fetch = +refs/pull/*:refs/remotes/origin/pull/*
```
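
The same refspec can also be added with `git config`; a minimal sketch in a throwaway repository (the `/tmp` path is illustrative):

```sh
# Create a scratch repo and point it at the upstream remote
git init -q /tmp/refspec-demo && cd /tmp/refspec-demo
git remote add origin https://github.com/kubernetes/kubernetes

# Append the pull-request refspec alongside the default one
git config --add remote.origin.fetch '+refs/pull/*:refs/remotes/origin/pull/*'
git config --get-all remote.origin.fetch
```

A subsequent `git fetch origin` then mirrors every pull request under `refs/remotes/origin/pull/`.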

## Creating a VM with docker-machine

```sh
docker-machine create --driver google --google-project xxxx --google-machine-type n1-standard-2 --google-disk-size 30 kubernetes
```

## Starting a local cluster with Minikube

```sh
$ minikube get-k8s-versions
The following Kubernetes versions are available:
	- v1.5.1
	- v1.4.3
...

# an http proxy is required in China
$ minikube start --docker-env HTTP_PROXY=http://proxy-ip:port --docker-env HTTPS_PROXY=http://proxy-ip:port --vm-driver=xhyve --kubernetes-version="v1.6.2"
```

# Kubernetes Testing

## Unit tests

Unit tests depend only on the source code and are the simplest way to check that the code behaves as expected.

**Run all unit tests**

```sh
make test
```

**Test only the specified packages**

```sh
# a single package
make test WHAT=./pkg/api
# multiple packages
make test WHAT=./pkg/{api,kubelet}
```

Alternatively, use `go test` directly:

```sh
go test -v k8s.io/kubernetes/pkg/kubelet
```

**Run only a specific test case in a package**

```sh
# Runs TestValidatePod in pkg/api/validation with the verbose flag set
make test WHAT=./pkg/api/validation KUBE_GOFLAGS="-v" KUBE_TEST_ARGS='-run ^TestValidatePod$'

# Runs tests that match the regex ValidatePod|ValidateConfigMap in pkg/api/validation
make test WHAT=./pkg/api/validation KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ValidatePod\|ValidateConfigMap$"
```

Or with `go test` directly:

```sh
go test -v k8s.io/kubernetes/pkg/api/validation -run ^TestValidatePod$
```

**Parallel tests**

Running tests in parallel is an effective way to root out flakes:

```sh
# Have 2 workers run all tests 5 times each (10 total iterations).
make test PARALLEL=2 ITERATION=5
```

**Generate a coverage report**

```sh
make test KUBE_COVER=y
```

## Benchmarks

```sh
go test ./pkg/apiserver -benchmem -run=XXX -bench=BenchmarkWatch
```

## Integration tests

Kubernetes integration tests require etcd to be installed (it only needs to be installed, not started), for example:

```sh
hack/install-etcd.sh  # Installs in ./third_party/etcd
echo export PATH="\$PATH:$(pwd)/third_party/etcd" >> ~/.profile  # Add to PATH
```

Integration tests start etcd and the Kubernetes services on demand and run the tests under [test/integration](https://github.com/kubernetes/kubernetes/tree/master/test/integration).

**Run all integration tests**

```sh
make test-integration  # Run all integration tests.
```

**Run specified integration test cases**

```sh
# Run integration test TestPodUpdateActiveDeadlineSeconds with the verbose flag set.
make test-integration KUBE_GOFLAGS="-v" KUBE_TEST_ARGS="-run ^TestPodUpdateActiveDeadlineSeconds$"
```

## End-to-end (e2e) tests

End-to-end (e2e) tests simulate user behavior against Kubernetes and verify that a Kubernetes service or cluster behaves exactly as designed.

Before running e2e tests, build the test binaries and set KUBERNETES_PROVIDER (gce by default):

```sh
make WHAT='test/e2e/e2e.test'
make ginkgo
export KUBERNETES_PROVIDER=local
```

**Bring up a cluster, run the tests, then tear the cluster down**

```sh
# build Kubernetes, up a cluster, run tests, and tear everything down
go run hack/e2e.go -- -v --build --up --test --down
```

**Run only the specified test cases**

```sh
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Kubectl\sclient\s\[k8s\.io\]\sKubectl\srolling\-update\sshould\ssupport\srolling\-update\sto\ssame\simage\s\[Conformance\]$'
```

**Skip test cases**

```sh
go run hack/e2e.go -- -v --test --test_args="--ginkgo.skip=Pods.*env"
```

**Parallel tests**

```sh
# Run tests in parallel, skip any that must be run serially
GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\]"

# Run tests in parallel, skip any that must be run serially, and keep the test namespace if a test fails
GINKGO_PARALLEL=y go run hack/e2e.go --v --test --test_args="--ginkgo.skip=\[Serial\] --delete-namespace-on-failure=false"
```

**Clean up**

```sh
go run hack/e2e.go -- -v --down
```

**The useful `-ctl` flag**

```sh
# -ctl can be used to quickly call kubectl against your e2e cluster. Useful for
# cleaning up after a failed test or viewing logs. Use -v to avoid suppressing
# kubectl output.
go run hack/e2e.go -- -v -ctl='get events'
go run hack/e2e.go -- -v -ctl='delete pod foobar'
```

## Federation e2e tests

```sh
export FEDERATION=true
export E2E_ZONES="us-central1-a us-central1-b us-central1-f"
# or export FEDERATION_PUSH_REPO_BASE="quay.io/colin_hom"
export FEDERATION_PUSH_REPO_BASE="gcr.io/${GCE_PROJECT_NAME}"

# build container images
KUBE_RELEASE_RUN_TESTS=n KUBE_FASTBUILD=true go run hack/e2e.go -- -v -build

# push the federation container images
build/push-federation-images.sh

# Deploy federation control plane
go run hack/e2e.go -- -v --up

# Finally, run the tests
go run hack/e2e.go -- -v --test --test_args="--ginkgo.focus=\[Feature:Federation\]"

# Don't forget to tear everything down
go run hack/e2e.go -- -v --down
```

Relevant logs can be downloaded conveniently with `cluster/log-dump.sh <directory>` to help troubleshoot test failures.

## Node e2e tests

Node e2e tests exercise only the Kubelet-related features and can be run locally or in a cluster:

```sh
export KUBERNETES_PROVIDER=local
make test-e2e-node FOCUS="InitContainer"
make test_e2e_node TEST_ARGS="--experimental-cgroups-per-qos=true"
```

## Additional notes

kubectl templates are handy for extracting specific data; for example, to query the image of a container:

```sh
kubectl get pods nginx-4263166205-ggst4 -o template '--template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "nginx"}}{{.image}}{{end}}{{end}}{{end}}'
```

## References

* [Kubernetes testing](https://github.com/kubernetes/community/blob/master/contributors/devel/testing.md)
* [End-to-End Testing](https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-tests.md)
* [Node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-node-tests.md)
* [How to write e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/writing-good-e2e-tests.md)
* [Coding Conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/coding-conventions.md#testing-conventions)

# Kubernetes ecosystem

![](CloudNativeLandscape.jpg)

* http://kubernetes.io/partners/
* K8s distributions and SaaS offerings
  - http://openshift.com/
  - https://tectonic.com/
  - http://rancher.com/kubernetes/
  - https://www.infoq.com/news/2016/11/apprenda-kubernetes-ket
  - https://github.com/samsung-cnct/kraken
  - https://www.mirantis.com/solutions/container-technologies/
  - https://www.ubuntu.com/cloud/kubernetes
  - https://platform9.com/products-2/kubernetes/
  - https://kubermatic.io/en/
  - https://stackpoint.io/#/
  - http://gravitational.com/telekube/
  - https://kcluster.io/
  - http://www.stratoscale.com/products/kubernetes-as-a-service/
  - https://giantswarm.io/product/
  - https://cloud.google.com/container-engine/
  - https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes
  - http://blog.kubernetes.io/2016/11/bringing-kubernetes-support-to-azure.html
  - http://thenewstack.io/huawei-launches-kubernetes-based-container-engine/
  - http://blogs.univa.com/2016/05/univa-announces-navops-command-for-managing-enterprise-container-workload-on-kubernetes-distributions/
  - https://supergiant.io/
  - https://diamanti.com/products/
  - http://www.vmware.com/company/news/releases/vmw-newsfeed.VMware-Introduces-Kubernetes-as-a-Service-on-Photon-Platform.2104598.html
  - http://mantl.io/
  - https://github.com/hyperhq/hypernetes
  - https://github.com/vmturbo/kubernetes
  - https://www.joyent.com/containerpilot
* PaaS on Kubernetes
  - Openshift
  - Deis Workflow
  - Gondor/Kel
  - WSO2
  - Rancher
  - Kumoru
* Serverless implementations
  - Funktion
  - [Fission](https://github.com/platform9/fission)
  - Kubeless
  - OpenWhisk
  - Iron.io
* Application frameworks
  - [Spring Cloud](http://www.nicolaferraro.stfi.re/2016/10/23/hot-reconfiguration-of-microservices-on-kubernetes/)
* API Management
  - Apigee
  - [Kong](https://github.com/Mashape/kong-dist-kubernetes)
  - Apiman
* Data processing
  - Pachyderm
  - Heron
* Package managers
  - Helm
  - [KPM](https://github.com/coreos/kpm)
* Configuration
  - Kompose
  - Jsonnet
  - [Spread](https://redspread.com/)
  - [K8comp](https://github.com/cststack/k8comp)
  - [Ktmpl](https://github.com/InQuicker/ktmpl)
  - [Konfd](https://github.com/kelseyhightower/konfd)
  - [kenv](https://github.com/thisendout/kenv)
  - [kubediff](https://github.com/weaveworks/kubediff)
  - [Habitat](https://www.habitat.sh/docs/container-orchestration/)
  - [Puppet](https://forge.puppet.com/garethr/kubernetes/readme)
  - [Ansible](https://docs.ansible.com/ansible/kubernetes_module.html)
* Application deployment orchestration
  - [ElasticKube](https://elasticbox.com/kubernetes)
  - [AppController](https://github.com/Mirantis/k8s-AppController)
  - [Broadway](https://github.com/namely/broadway)
  - [Kb8or](https://github.com/UKHomeOffice/kb8or)
  - [IBM UrbanCode](https://developer.ibm.com/urbancode/plugin/kubernetes/)
  - [nulecule](https://github.com/projectatomic/nulecule)
  - [Deployment manager](https://cloud.google.com/deployment-manager/)
* API/CLI adaptors
  - [Kubebot](https://blog.harbur.io/introducing-kubebot-a-kubernetes-bot-for-slack/)
  - [StackStorm](https://github.com/StackStorm/st2)
  - [Kubefuse](https://opencredo.com/introducing-kubefuse-file-system-kubernetes/)
  - [Ksql](https://github.com/brendandburns/ksql)
  - [kubectld](https://github.com/rancher/kubectld)
* UIs / mobile apps
  - [Cabin](http://www.skippbox.com/announcing-cabin-the-first-mobile-app-for-kubernetes/)
  - [Cockpit](http://cockpit-project.org/guide/latest/feature-kubernetes.html)
* CI/CD
  - [Jenkins plugin](https://github.com/jenkinsci/kubernetes-pipeline-plugin)
  - Wercker
  - Shippable
  - GitLab
  - [cloudmunch](http://www.cloudmunch.com/continuous-delivery-for-kubernetes/)
  - [Kontinuous](https://github.com/AcalephStorage/kontinuous)
  - [Kit](https://invisionapp.github.io/kit/)
  - [Spinnaker](http://www.spinnaker.io/docs/kubernetes-source-to-prod)
* Developer platform
  - [Fabric8](https://fabric8.io/)
    - [Spring Cloud integration](https://github.com/fabric8io/spring-cloud-kubernetes)
  - [goPaddle](https://www.gopaddle.io/#/)
  - [VAMP](http://vamp.io/)
* Secret generation and management
  - [Vault controller](https://github.com/kelseyhightower/vault-controller)
  - [kube-lego](https://github.com/jetstack/kube-lego)
  - [k8sec](https://github.com/dtan4/k8sec)
* [Client libraries](https://github.com/kubernetes/community/blob/master/contributors/devel/client-libraries.md)
* Autoscaling
  - [Kapacitor](https://www.influxdata.com/kubernetes-monitoring-and-autoscaling-with-telegraf-and-kapacitor/)
* Monitoring
  - Sysdig
  - Datadog
  - Sematext
  - Prometheus
  - Snap
  - [Satellite](https://github.com/gravitational/satellite)
  - [Netsil](http://netsil.com/product/)
  - [Weave Scope](https://github.com/weaveworks/scope)
  - [AppFormix](http://www.appformix.com/solutions/appformix-for-kubernetes/)
* Logging
  - Sematext
  - [Sumo Logic](https://github.com/jdumars/sumokube)
* RPC
  - Grpc
  - [Micro](https://github.com/micro/kubernetes)
* Load balancing
  - Nginx Plus
  - Traefik
* Service mesh
  - Envoy
  - Linkerd
  - Amalgam8
  - WeaveWorks
* Networking
  - WeaveWorks
  - Tigera
  - [OpenContrail](http://www.opencontrail.org/kubernetes-networking-with-opencontrail/)
  - Nuage
  - [Kuryr](https://github.com/openstack/kuryr-kubernetes)
  - [Contiv](http://contiv.github.io/)
* Storage
  - Flocker
  - [Portworx](https://portworx.com/products/)
  - REX-Ray
  - [Torus](https://coreos.com/blog/torus-distributed-storage-by-coreos.html)
  - Hedvig
  - [Quobyte](https://www.quobyte.com/containers)
  - [NetApp](https://netapp.github.io/blog/2016/05/11/netapp-persistent-storage-in-kubernetes-using-ontap-and-nfs/)
|
||||||
|
- [Datera](http://www.storagereview.com/datera_s_elastic_data_fabric_integrates_with_kubernetes)
|
||||||
|
- [Ceph](http://ceph.com/planet/bring-persistent-storage-for-your-containers-with-krbd-on-kubernetes/)
|
||||||
|
- [Gluster](http://blog.gluster.org/2016/08/coming-soon-dynamic-provisioning-of-glusterfs-volumes-in-kubernetesopenshift/)
|
||||||
|
* Database/noSQL
|
||||||
|
* [CockroachDB](https://www.cockroachlabs.com/docs/orchestrate-cockroachdb-with-kubernetes.html)
|
||||||
|
- [Cassandra](http://blog.kubernetes.io/2016/07/thousand-instances-of-cassandra-using-kubernetes-pet-set.html) / [DataStax](http://www.datastax.com/dev/blog/scale-quickly-with-datastax-enterprise-on-google-container-engine)
|
||||||
|
- [MongoDB](https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes)
|
||||||
|
- [Hazelcast](https://blog.hazelcast.com/openshift/)
|
||||||
|
- [Crate](https://crate.io/a/kubernetes-and-crate/)
|
||||||
|
- [Vitess](http://vitess.io/getting-started/)
|
||||||
|
- [Minio](https://blog.minio.io/storage-in-paas-minio-and-deis-7f9f604dedf2#.7rr6awv0j)
|
||||||
|
* Container runtimes
|
||||||
|
* containerd
|
||||||
|
* Rkt
|
||||||
|
* CRI-O (OCI)
|
||||||
|
* Hyper.sh/frakti
|
||||||
|
* Security
|
||||||
|
* [Trireme](http://opensourceforu.com/2016/11/trireme-adds-production-scale-security-kubernetes)
|
||||||
|
* [Aquasec](http://blog.aquasec.com/security-best-practices-for-kubernetes-deployment)
|
||||||
|
* [Twistlock](https://www.twistlock.com/2015/11/10/twistlock-is-now-available-on-google-cloud-platform/)
|
||||||
|
* Appliances
|
||||||
|
* Diamanti
|
||||||
|
* Redapt
|
||||||
|
* Cloud providers
|
||||||
|
* GKE/GCE
|
||||||
|
* AWS
|
||||||
|
* Azure
|
||||||
|
* Digital Ocean
|
||||||
|
* CenturyLink
|
||||||
|
* Rackspace
|
||||||
|
* VMWare
|
||||||
|
* Openstack
|
||||||
|
* Cloudstack
|
||||||
|
* Managed Kubernetes
|
||||||
|
* Platform9
|
||||||
|
* Gravitational
|
||||||
|
* [KCluster](https://kcluster.io/)
|
||||||
|
* VMs on Kubernetes
|
||||||
|
* Openstack
|
||||||
|
* Redhat
|
||||||
|
* Other
|
||||||
|
* [Netflix OSS](http://blog.christianposta.com/microservices/netflix-oss-or-kubernetes-how-about-both/)
|
||||||
|
* [Kube-monkey](https://github.com/asobti/kube-monkey)
|
||||||
|
* [Kubecraft](https://github.com/stevesloka/kubecraft)
|
||||||
|
|
||||||
|
|
|
# Kubernetes HA

Starting with Kubernetes 1.5, clusters deployed via `kops` or `kube-up.sh` automatically come up as a highly available system, including:

- etcd running in cluster mode
- load balancing in front of the apiservers
- leader election for the controller manager, scheduler and cluster autoscaler (so that exactly one instance of each is running at any time)

As shown below:

![](ha.png)

## etcd cluster

After obtaining a token from `https://discovery.etcd.io/new?size=3`, place <https://kubernetes.io/docs/admin/high-availability/etcd.yaml> at `/etc/kubernetes/manifests/etcd.yaml` on every machine and substitute `${DISCOVERY_TOKEN}`, `${NODE_NAME}` and `${NODE_IP}`; the kubelet will then bring up an etcd cluster.

For etcd running outside the kubelet, follow the [etcd clustering guide](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/clustering.md) to configure cluster mode manually.

## apiserver

Place <https://kubernetes.io/docs/admin/high-availability/kube-apiserver.yaml> into `/etc/kubernetes/manifests/` on every master node and put the related credentials under `/srv/kubernetes/`; the kubelet will then create and start the apiserver automatically:

- basic_auth.csv - basic auth user and password
- ca.crt - Certificate Authority cert
- known_tokens.csv - tokens that entities (e.g. the kubelet) can use to talk to the apiserver
- kubecfg.crt - Client certificate, public key
- kubecfg.key - Client certificate, private key
- server.cert - Server certificate, public key
- server.key - Server certificate, private key

Once the apiservers are up, they still need a load balancer in front of them: either your cloud platform's elastic load balancing service, or haproxy/lvs configured for the master nodes.
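As an illustration of the haproxy option, a minimal TCP-mode frontend for three apiservers might look like the sketch below; the master addresses `10.0.0.11`–`10.0.0.13` and the ports are hypothetical placeholders, not values from this document:

```
frontend kube-apiserver
    bind *:8443
    mode tcp
    default_backend apiservers

backend apiservers
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
    server master3 10.0.0.13:6443 check
```

TCP mode is used here because the apiserver terminates its own TLS; the load balancer only forwards connections.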
## controller manager and scheduler

The controller manager and scheduler must have exactly one active instance at any time, which requires leader election, so start them with `--leader-elect=true`, e.g.:

```
kube-scheduler --master=127.0.0.1:8080 --v=2 --leader-elect=true
kube-controller-manager --master=127.0.0.1:8080 --cluster-cidr=10.245.0.0/16 --allocate-node-cidrs=true --service-account-private-key-file=/srv/kubernetes/server.key --v=2 --leader-elect=true
```

Then place [kube-scheduler.yaml](https://kubernetes.io/docs/admin/high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](https://kubernetes.io/docs/admin/high-availability/kube-controller-manager.yaml) (adjust them as needed on non-GCE platforms) into `/etc/kubernetes/manifests/` on every master node.

## kube-dns

kube-dns can be deployed as a Deployment, which kubeadm creates automatically by default. For large clusters, relax its resource limits, e.g.:

```
dns_replicas: 6
dns_cpu_limit: 100m
dns_memory_limit: 512Mi
dns_cpu_requests: 70m
dns_memory_requests: 70Mi
```

Also give dnsmasq more resources, e.g. raise its cache size to 10000 and its concurrency limit with `--dns-forward-max=1000`.

## kube-proxy

By default kube-proxy uses iptables to load-balance Services, which introduces significant latency at large scale; consider [IPVS](https://docs.google.com/presentation/d/1BaIAywY2qqeHtyGZtlyAp89JIZs59MZLKcFLxKE6LyM/edit#slide=id.p3) as an alternative (note that Kubernetes v1.6 does not support IPVS mode yet).

## Data persistence

Besides the configuration above, persistent storage is also a must for a highly available Kubernetes cluster.

- For clusters deployed on public clouds, consider the platform's persistent storage, such as AWS EBS or GCE persistent disk
- For clusters on physical machines, consider network storage such as iSCSI, NFS, Gluster or Ceph, or use RAID

## References

- https://kubernetes.io/docs/admin/high-availability/
- http://kubecloud.io/setup-ha-k8s-kops/
- https://github.com/coreos/etcd/blob/master/Documentation/op-guide/clustering.md
- [Kubernetes Master Tier For 1000 Nodes Scale](http://fuel-ccp.readthedocs.io/en/latest/design/k8s_1000_nodes_architecture.html)
- [Scaling Kubernetes to Support 50000 Services](https://docs.google.com/presentation/d/1BaIAywY2qqeHtyGZtlyAp89JIZs59MZLKcFLxKE6LyM/edit#slide=id.p3)
# Kubernetes 101

The easiest way to get a feel for Kubernetes is to run an nginx container and then operate on it with kubectl. Kubernetes provides `kubectl run`, a command similar to `docker run`, which makes it easy to create a container (what it actually creates is a Pod managed by a Deployment):

```sh
$ kubectl run --image=nginx nginx-app --port=80
deployment "nginx-app" created
$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
nginx-app-4028413181-cnt1i   1/1       Running   0          52s
```

Once the container is Running, it can be operated on with the various `kubectl` commands, e.g.:

- `kubectl get` - list resources, similar to `docker ps`
- `kubectl describe` - show detailed resource information, similar to `docker inspect`
- `kubectl logs` - fetch container logs, similar to `docker logs`
- `kubectl exec` - execute a command in a container, similar to `docker exec`

```sh
$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
nginx-app-4028413181-cnt1i   1/1       Running   0          6m
$ kubectl exec nginx-app-4028413181-cnt1i ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.5  31736  5108 ?        Ss   00:19   0:00 nginx: master process nginx -g daemon off;
nginx        5  0.0  0.2  32124  2844 ?        S    00:19   0:00 nginx: worker process
root        18  0.0  0.2  17500  2112 ?        Rs   00:25   0:00 ps aux
$ kubectl describe pod nginx-app-4028413181-cnt1i
Name:           nginx-app-4028413181-cnt1i
Namespace:      default
Node:           boot2docker/192.168.64.12
Start Time:     Tue, 06 Sep 2016 08:18:41 +0800
Labels:         pod-template-hash=4028413181
                run=nginx-app
Status:         Running
IP:             172.17.0.3
Controllers:    ReplicaSet/nginx-app-4028413181
Containers:
  nginx-app:
    Container ID:       docker://4ef989b57d0a7638ad9c5bbc22e16d5ea5b459281c77074fc982eba50973107f
    Image:              nginx
    Image ID:           docker://sha256:4efb2fcdb1ab05fb03c9435234343c1cc65289eeb016be86193e88d3a5d84f6b
    Port:               80/TCP
    State:              Running
      Started:          Tue, 06 Sep 2016 08:19:30 +0800
    Ready:              True
    Restart Count:      0
    Environment Variables:      <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  default-token-9o8ks:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-9o8ks
QoS Tier:       BestEffort
Events:
  FirstSeen  LastSeen  Count  From                   SubobjectPath               Type    Reason     Message
  ---------  --------  -----  ----                   -------------               ------  ------     -------
  8m         8m        1      {default-scheduler }                               Normal  Scheduled  Successfully assigned nginx-app-4028413181-cnt1i to boot2docker
  8m         8m        1      {kubelet boot2docker}  spec.containers{nginx-app}  Normal  Pulling    pulling image "nginx"
  7m         7m        1      {kubelet boot2docker}  spec.containers{nginx-app}  Normal  Pulled     Successfully pulled image "nginx"
  7m         7m        1      {kubelet boot2docker}  spec.containers{nginx-app}  Normal  Created    Created container with docker id 4ef989b57d0a
  7m         7m        1      {kubelet boot2docker}  spec.containers{nginx-app}  Normal  Started    Started container with docker id 4ef989b57d0a

$ kubectl logs nginx-app-4028413181-cnt1i
127.0.0.1 - - [06/Sep/2016:00:27:13 +0000] "GET / HTTP/1.0 " 200 612 "-" "-" "-"
127.0.0.1 - - [06/Sep/2016:00:27:15 +0000] "GET / HTTP/1.0 " 200 612 "-" "-" "-"
```

## Defining a Pod with yaml

The first Pod above was started with `kubectl run`, but `kubectl run` does not support every feature. In Kubernetes, resources are more commonly defined in yaml files and created with `kubectl create -f file.yaml`. For example, a simple nginx Pod can be defined as:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```
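Saved to a file, a definition like this is created, inspected, and removed with standard kubectl subcommands; the filename `nginx-pod.yaml` below is just an example name:

```sh
kubectl create -f nginx-pod.yaml    # create the Pod from the yaml definition
kubectl get pod nginx               # check its status
kubectl delete -f nginx-pod.yaml    # delete everything defined in the file
```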
As mentioned above, `kubectl run` does not create a Pod directly; it first creates a Deployment resource (replicas=1), and the Deployment then creates the Pod automatically. This is equivalent to the following configuration:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx-app
  name: nginx-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-app
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: nginx-app
    spec:
      containers:
      - image: nginx
        name: nginx-app
        ports:
        - containerPort: 80
          protocol: TCP
      dnsPolicy: ClusterFirst
      restartPolicy: Always
```

## Using Volumes

A Pod's lifetime is usually short: whenever something goes wrong, a new Pod is created to replace it. What about the data the containers produce? Data inside a container vanishes when its Pod goes away. Volumes exist to persist container data; for example, a redis container can be given a hostPath volume to store its data:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: redis-persistent-storage
      mountPath: /data/redis
  volumes:
  - name: redis-persistent-storage
    hostPath:
      path: /data/
```

Kubernetes volumes support a large number of plugins; choose according to your needs:

* emptyDir
* hostPath
* gcePersistentDisk
* awsElasticBlockStore
* nfs
* iscsi
* flocker
* glusterfs
* rbd
* cephfs
* gitRepo
* secret
* persistentVolumeClaim
* downwardAPI
* azureFileVolume
* vsphereVolume

## Using Services

The Pods above are running, but in Kubernetes a Pod's IP address changes whenever the Pod is restarted, so interacting with Pod IPs directly is discouraged. How, then, are the services these Pods provide accessed? Through Services. A Service provides a single entry point for a set of Pods (selected via labels), together with load balancing and automatic service discovery. For example, a service can be created for the `nginx-app` above:

```sh
$ kubectl expose deployment nginx-app --type=NodePort --port=80 --target-port=80
service "nginx-app" exposed
$ kubectl describe service nginx-app
Name:                   nginx-app
Namespace:              default
Labels:                 run=nginx-app
Selector:               run=nginx-app
Type:                   NodePort
IP:                     10.0.0.66
Port:                   <unset> 80/TCP
NodePort:               <unset> 30772/TCP
Endpoints:              172.17.0.3:80
Session Affinity:       None
No events.
```

This way, inside the cluster nginx-app is reachable both at `http://10.0.0.66` and at `http://node-ip:30772`, while from outside the cluster it is only reachable at `http://node-ip:30772`.
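Using the addresses from the `kubectl describe service` output above, a quick check might look like the following; the cluster IP `10.0.0.66`, the node IP `192.168.64.12` (the minikube node in this example) and the NodePort `30772` will all differ in your cluster:

```sh
# from inside the cluster: the service's cluster IP on the service port
curl http://10.0.0.66
# from inside or outside the cluster: any node's IP on the NodePort
curl http://192.168.64.12:30772
```

Both requests should return the default nginx welcome page served by the Pod behind the service.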
# Kubernetes 201

## Scaling an application

By changing the number of replicas in a Deployment, an application can be scaled up or down dynamically:

![scale](media/scale.png)

Containers added by scaling up automatically join the service, and containers removed by scaling down are automatically removed from it.

```sh
$ kubectl scale --replicas=3 deployment/nginx-app
$ kubectl get deploy
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-app   3         3         3            3           10m
```

## Rolling updates

A rolling update upgrades a service without interruption by replacing its containers one at a time:

```
kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2
```

![update1](media/update1.png)

![update2](media/update2.png)

![update3](media/update3.png)

![update4](media/update4.png)

If a failure or misconfiguration is discovered during the rolling update, it can be rolled back at any time:

```
kubectl rolling-update frontend-v1 frontend-v2 --rollback
```

Note that `rolling-update` only works on ReplicationControllers and cannot be used on Deployments directly. A Deployment instead sets its update strategy to RollingUpdate in its spec (which is the default):

```yaml
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx-app
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
```

To update the application, simply use `kubectl set`:

```sh
kubectl set image deployment/nginx-app nginx-app=nginx:1.9.1
```

The progress of the rollout can be watched with the `rollout` command:

```sh
$ kubectl rollout status deployment/nginx-app
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for rollout to finish: 2 of 3 updated replicas are available...
Waiting for rollout to finish: 2 of 3 updated replicas are available...
Waiting for rollout to finish: 2 of 3 updated replicas are available...
Waiting for rollout to finish: 2 of 3 updated replicas are available...
Waiting for rollout to finish: 2 of 3 updated replicas are available...
deployment "nginx-app" successfully rolled out
```

Deployments also support rollback:

```sh
$ kubectl rollout history deployment/nginx-app
deployments "nginx-app"
REVISION        CHANGE-CAUSE
1               <none>
2               <none>

$ kubectl rollout undo deployment/nginx-app
deployment "nginx-app" rolled back
```
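Two related kubectl flags are worth knowing here; a brief sketch (these flags exist in kubectl of this era, but the image tag is only an example):

```sh
# record the command in the revision history, so `rollout history`
# shows it under CHANGE-CAUSE instead of <none>
kubectl set image deployment/nginx-app nginx-app=nginx:1.9.1 --record

# roll back to a specific revision rather than just the previous one
kubectl rollout undo deployment/nginx-app --to-revision=1
```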
# Kubernetes cluster

![](architecture.png)

A Kubernetes cluster consists of the distributed storage etcd, controller (master) nodes, and worker nodes.

- The controller nodes manage the cluster as a whole: scheduling containers, maintaining resource state, autoscaling, rolling updates, and so on
- The worker nodes are the hosts that actually run containers; they manage images and containers as well as in-cluster service discovery and load balancing
- The etcd cluster stores the state of the entire cluster

## Cluster federation

![](federation.png)

## Single-node Kubernetes

The easiest way to create a (single-node) Kubernetes cluster is [minikube](https://github.com/kubernetes/minikube):

```sh
$ minikube start
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
$ kubectl cluster-info
Kubernetes master is running at https://192.168.64.12:8443
kubernetes-dashboard is running at https://192.168.64.12:8443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
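Beyond `minikube start`, a few other everyday subcommands of the standard minikube CLI cover the rest of the lifecycle:

```sh
minikube dashboard   # open the Kubernetes dashboard in a browser
minikube ssh         # log in to the minikube VM
minikube stop        # stop the cluster, keeping its state
minikube delete      # delete the cluster entirely
```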
# Kubernetes core concepts

## Pod

A Pod is a set of closely related containers that share Volumes and a network namespace; it is Kubernetes' basic unit of scheduling. The design idea behind Pods is to let multiple containers in one Pod share the network and filesystem, so that they can be composed into a service through the simple, efficient mechanisms of inter-process communication and file sharing.

![pod](media/pod.png)

## Node

A Node is the host a Pod actually runs on; it can be a physical machine or a virtual machine. To manage Pods, every Node must run at least a container runtime (such as docker or rkt), `kubelet` and the `kube-proxy` service.

![node](media/node.png)

## Service

A Service is an abstraction of an application service, providing load balancing and service discovery for applications via labels. A Service exposes a single, stable access interface, so external consumers do not need to know anything about the backend containers.

![](media/14731220608865.png)

## Label

Labels are tags for identifying Kubernetes objects, attached to objects as key/value pairs. Labels are not unique; in practice many objects (such as Pods) often carry the same label to mark a particular application.

Once labels are defined, other objects can use a Label Selector to pick out the set of objects carrying them (for example, ReplicaSets and Services select a set of Pods by label). Label Selectors support:

- equality, e.g. `app=nginx` and `env!=production`
- set membership, e.g. `env in (production, qa)`
- multiple labels (combined with AND), e.g. `app=nginx,env=test`
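The same selector syntax is available on the kubectl command line through the `-l` flag:

```sh
kubectl get pods -l app=nginx                 # equality selector
kubectl get pods -l 'env in (production, qa)' # set selector (quoted for the shell)
kubectl get pods -l app=nginx,env=test        # multiple labels, ANDed together
```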
## Annotations

Annotations are key/value notes attached to objects. Unlike Labels, which are used to identify and select objects, Annotations record additional metadata so that external tools can look it up.

## Namespace

A Namespace is an abstract collection of a group of resources and objects; it can be used, for example, to partition the objects inside a system into different project groups or user groups. Common resources such as pods, services, replication controllers and deployments belong to a namespace (`default` by default), while nodes, persistentVolumes and the like belong to none.
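Namespaces themselves are managed with kubectl as well; the namespace name `test` here is only an example:

```sh
kubectl create namespace test        # create a new namespace
kubectl get pods --namespace=test    # list pods in that namespace
kubectl get namespaces               # list all namespaces
```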
# Introduction to Kubernetes

Kubernetes is Google's open-source container cluster management system, the open-source version of Borg, Google's large-scale container management technology of many years. Its main features include:

- container-based application deployment, maintenance and rolling upgrades
- load balancing and service discovery
- cluster scheduling across machines and across regions
- autoscaling
- stateless and stateful services
- broad Volume support
- a plugin mechanism that guarantees extensibility

Kubernetes is developing very rapidly and has become the leader in the container orchestration space.

![](media/14731186543149.jpg)

## Kubernetes architecture

![](architecture.png)
# Kubernetes monitoring

## cAdvisor

[cAdvisor](https://github.com/google/cadvisor) is a container monitoring tool from Google and the container resource collector built into the kubelet. It automatically collects CPU, memory, network and filesystem usage for the containers on the local machine, and exposes cAdvisor's native API (on port `--cadvisor-port=4194` by default).
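That native API can be queried directly on a node; a sketch, assuming the `v1.3` API version that was current at the time (replace `<node-ip>` with a real node address):

```sh
# machine-level information for the node
curl http://<node-ip>:4194/api/v1.3/machine
# resource usage stats for all containers on the node
curl http://<node-ip>:4194/api/v1.3/containers
```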
![](images/14842107270881.png)

## InfluxDB and Grafana

[InfluxDB](https://www.influxdata.com/time-series-platform/influxdb/) is an open-source distributed database for time series, events and metrics, and [Grafana](http://grafana.org/) is a dashboard for InfluxDB with powerful charting features.

![](images/14842114123604.jpg)

## Heapster

cAdvisor, described above, only reports container resource usage for a single machine, while [Heapster](https://github.com/kubernetes/heapster) provides resource monitoring for the whole cluster and supports persisting the data to InfluxDB, Google Cloud Monitoring or [other storage backends](https://github.com/kubernetes/heapster).

Heapster collects node and container resource usage from the API exposed by the kubelet:

![](images/14842118198998.png)

In addition, Heapster's `/metrics` API serves data in Prometheus format.

### Deploying Heapster, InfluxDB and Grafana

After Kubernetes is deployed, the dashboard, DNS and monitoring services are deployed by default as well; for example, a cluster deployed via `cluster/kube-up.sh` enables the following services by default:

```sh
$ kubectl cluster-info
Kubernetes master is running at https://kubernetes-master
Heapster is running at https://kubernetes-master/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://kubernetes-master/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://kubernetes-master/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Grafana is running at https://kubernetes-master/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://kubernetes-master/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
```

If these services were not deployed automatically, the ones you need can be added following [cluster/addons](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons).

## Prometheus

[Prometheus](https://prometheus.io) is another monitoring and time series database, and it provides alerting as well. It offers a powerful query language and an HTTP interface, and also supports exporting data to Grafana for display.

Monitoring Kubernetes with Prometheus requires configuring its data sources; a simple example is [prometheus.yml](prometheus.txt):

```
kubectl create -f http://feisky.xyz/kubernetes/monitor/prometheus.txt
```

![](images/14842125295113.jpg)

## Other container monitoring systems

- [Sysdig](http://blog.kubernetes.io/2015/11/monitoring-Kubernetes-with-Sysdig.html)
- CoScale
- Datadog
- Sematext
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: quay.io/prometheus/prometheus:v1.0.1
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "-config.file=/etc/prometheus/prometheus.yml"
        - "-storage.local.path=/prometheus"
        - "-storage.local.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      volumes:
      - emptyDir: {}
        name: data
      - configMap:
          name: prometheus-config
        name: config-volume
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 30s
      scrape_timeout: 30s
    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
      - targets: ['localhost:9090']
    - job_name: 'kubernetes-cluster'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - api_servers:
        - 'https://kubernetes.default.svc'
        in_cluster: true
        role: apiserver
    - job_name: 'kubernetes-nodes'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - api_servers:
        - 'https://kubernetes.default.svc'
        in_cluster: true
        role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    - job_name: 'kubernetes-service-endpoints'
      scheme: https
      kubernetes_sd_configs:
      - api_servers:
        - 'https://kubernetes.default.svc'
        in_cluster: true
        role: endpoint
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: (.+)(?::\d+);(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_service_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    - job_name: 'kubernetes-services'
      scheme: https
      metrics_path: /probe
      params:
        module: [http_2xx]
      kubernetes_sd_configs:
      - api_servers:
        - 'https://kubernetes.default.svc'
        in_cluster: true
        role: service
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_service_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-pods'
      scheme: https
      kubernetes_sd_configs:
      - api_servers:
        - 'https://kubernetes.default.svc'
        in_cluster: true
        role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: (.+):(?:\d+);(\d+)
        replacement: ${1}:${2}
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_pod_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
spec:
  type: NodePort
  ports:
  - port: 9090
  selector:
    app: prometheus
# Container Runtime Interface

The Container Runtime Interface (CRI) is the main work item in Kubelet 1.5/1.6. It redefines the Kubelet Container Runtime API, splitting what used to be an entirely Pod-level API into Sandbox-oriented and Container-oriented APIs, and separating image management and the container engine into distinct services.

![](cri.png)

Design discussion and development of CRI began as early as v1.4, and the first test version was released in v1.5.

## Current CRI implementations

Several vendors are currently integrating their container engines with CRI, including:

- Docker: the core code still lives inside the kubelet
- HyperContainer: https://github.com/kubernetes/frakti
- Rkt: https://github.com/kubernetes-incubator/rktlet
- Runc: https://github.com/kubernetes-incubator/cri-o
- Mirantis: https://github.com/Mirantis/virtlet
- Cloud Foundry: https://github.com/cloudfoundry/garden
- Infranetes: not open-sourced yet
# Kubernetes authentication and authorization plugins

## Authentication

- X509 Client Certs
- Static Token File
- Putting a Bearer Token in a Request
- Static Password File
- Service Account Tokens
- OpenID Connect Tokens
- Webhook Token Authentication
- Authenticating Proxy
- Keystone Password

## Authorization

- AlwaysDeny
- AlwaysAllow
- ABAC (Attribute-Based Access Control)
- RBAC (Role-Based Access Control)
- Webhook