# Docker Compose Files

Some typical docker compose examples.

If you're not familiar with Docker, you can have a look at these books (in CN):

## Install Docker & Docker Compose

```shell
$ curl -sSL https://get.docker.com/ | sh
$ sudo pip install docker-compose
```

## Docker Compose Usage

See Docker Compose Documentation.

## Example files

### consul-discovery

Using Consul to build a service-discoverable architecture.

### elk_netflow

ELK cluster, with netflow support.

```shell
docker-compose scale es=3
```

### haproxy_web

A simple HAProxy and web application cluster.
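A setup like this can be sketched with a minimal compose file. Note this is an illustrative sketch, not the repo's actual configuration; the image names, the mounted config path, and the service names are all assumptions:

```yaml
# docker-compose.yml — hypothetical sketch of an HAProxy + web cluster
haproxy:
  image: haproxy:1.6
  volumes:
    - ./haproxy:/usr/local/etc/haproxy   # assumes haproxy.cfg lives in ./haproxy
  ports:
    - "80:80"                            # expose the load balancer on the host
  links:
    - weba
    - webb
weba:
  image: nginx
webb:
  image: nginx
```

The haproxy.cfg backend section would then point at the linked service names (weba, webb), which compose makes resolvable from the haproxy container.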

### hyperledger_fabric

Quickly boot up a Hyperledger Fabric cluster with several validator nodes, without Vagrant or any manual configuration.

Versions from v0.6 to v1.0.x are currently supported.

See hyperledger_fabric for more details.

### kafka

Start a simple Kafka service for testing.

### mongo_cluster

Start 3 mongo instances to form a replica set.
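Three such instances might be declared as in the sketch below, relying on compose's internal DNS so the members can reach each other by service name. The image tag and replica-set name here are illustrative assumptions, not taken from the repo:

```yaml
# Hypothetical sketch — three mongod members of one replica set
mongo1:
  image: mongo:3.2
  command: mongod --replSet rs0
mongo2:
  image: mongo:3.2
  command: mongod --replSet rs0
mongo3:
  image: mongo:3.2
  command: mongod --replSet rs0
```

After the containers start, the replica set still has to be initiated once from any member, e.g. by running `rs.initiate(...)` in a mongo shell with the three hostnames as members.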

### mongo-elasticsearch

Start mongo (as a cluster) and elasticsearch, using mongo-connector to sync the data from mongo to elasticsearch.

### mongo_webui

Start 1 mongo instance and a mongo-express web tool to watch it.

The mongo instance will store its data in the local /opt/data/mongo_home directory.

The web UI will listen on local port 8081.
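Put together, such a pairing might look like the following minimal sketch. Only the host data path and the port come from the description above; the image names and link wiring are assumptions:

```yaml
# Hypothetical sketch — mongo plus a mongo-express web UI
mongo:
  image: mongo
  volumes:
    - /opt/data/mongo_home:/data/db   # persist data on the host
webui:
  image: mongo-express
  links:
    - mongo                           # the UI connects to the linked mongo host
  ports:
    - "8081:8081"                     # web UI on local port 8081
```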

### nginx_auth

Use nginx as a proxy with authentication for a backend application.

### packetbeat_ek

Demo of Packetbeat, Elasticsearch, and Kibana.

Some kibana dashboard config files are included.

To import them, after all containers start up, go inside the kibana container and run:

```shell
$ cd /kibana/beats-dashboards-1.0.1 && ./load.sh http://elasticsearch:9200
```

### registry_mirror

A Docker registry mirror, with Redis as the backend cache.

### spark_cluster

Spark cluster with master and worker nodes.

```shell
docker-compose scale worker=2
```

Try submitting a test Pi application using the spark-submit command:

```shell
/usr/local/spark/bin/spark-submit --master spark://master:7077 --class org.apache.spark.examples.SparkPi /usr/local/spark/lib/spark-examples-1.4.0-hadoop2.6.0.jar 1000
```