Docker Compose Files
===
Some typical docker compose examples.
# Install Docker and Docker Compose
Take Ubuntu as an example.
```sh
$ curl -sSL https://get.docker.com/ | sh
$ sudo pip install docker-compose
```
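If the installation succeeded, both tools should be on your `PATH` and report their versions (the exact version strings will differ from machine to machine):

```sh
$ docker --version
$ docker-compose --version
```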
# Docker-compose Usage
See the [Docker Compose Documentation](https://docs.docker.com/compose/).
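As a minimal cheat sheet, the examples below are all driven by the same handful of commands (service names such as `web` depend on each example's `docker-compose.yml`):

```sh
# Run inside a directory containing a docker-compose.yml
docker-compose up -d      # create and start all services in the background
docker-compose ps         # list the service containers and their state
docker-compose logs web   # show logs of the `web` service (name varies per example)
docker-compose down       # stop and remove the containers and networks
```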
# Example files
## [consul-discovery](consul-discovery)
Use Consul to build a service-discovery architecture.
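Once the stack is up, Consul's HTTP API can be queried to see what has been registered (the port 8500 default and the `web` service name are assumptions; check the compose file for the actual values):

```sh
# List all services registered in the Consul catalog
curl http://localhost:8500/v1/catalog/services
# Show the nodes providing one service (replace `web` with a real service name)
curl http://localhost:8500/v1/catalog/service/web
```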
## [elk_netflow](elk_netflow)
ELK cluster, with NetFlow support.
```sh
docker-compose scale es=3
```
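After scaling, you can check that all the Elasticsearch containers have joined one cluster (assuming Elasticsearch is published on the default port 9200):

```sh
# "number_of_nodes" should reach 3 once all scaled es containers have joined
curl 'http://localhost:9200/_cluster/health?pretty'
```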
## [haproxy_web](haproxy_web)
A simple HAProxy and web application cluster.
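A quick way to see the load balancing in action is to hit the proxy repeatedly (assuming HAProxy is published on the host's port 80; check the compose file):

```sh
# Repeated requests should be answered by different backend web containers
for i in 1 2 3 4; do curl -s http://localhost/; done
```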
## [hyperledger](hyperledger)
Quickly boot up a Hyperledger cluster with several validator nodes, without Vagrant or any manual configuration. By default, the cluster uses PBFT as the consensus protocol.
Note: currently you need to manually create an `openblockchain/baseimage:latest` image first. The
easiest way to do so is:
```sh
$ docker pull yeasy/hyperledger:latest
$ docker tag yeasy/hyperledger:latest openblockchain/baseimage:latest
$ docker pull yeasy/hyperledger-peer:pbft
$ docker pull yeasy/hyperledger-membersrvc:latest
```
Then you can start a 4-node Hyperledger cluster with
```sh
$ docker-compose up
```
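Once compose reports the containers as up, a quick sanity check looks like this (the `vp0` service name is an assumption; use the names from the compose file):

```sh
# All validating peers should show State "Up"
docker-compose ps
# Follow one peer's logs to watch the consensus messages
docker-compose logs vp0
```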
After the cluster is synced, you can validate it by deploying, invoking, or querying chaincode from a container or from the
host. See [hyperledger-peer](https://github.com/yeasy/docker-hyperledger-peer) if you're not familiar with that.
This follows the example from the [hyperledger](https://github.com/hyperledger/fabric/tree/master/consensus/docker-compose-files) project.
## [mongo_cluster](mongo_cluster)
Start 3 MongoDB instances to form a replica set.
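To verify that the replica set actually formed, you can query one member from the host. The container name below is an assumption based on compose's default `<project>_<service>_<index>` naming; adjust it to what `docker ps` shows:

```sh
# rs.status() should list 3 members, one of them PRIMARY
# (container name is an assumption; check `docker ps` for the real one)
docker exec -it mongocluster_mongo1_1 mongo --eval 'rs.status()'
```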
## [mongo-elasticsearch](mongo-elasticsearch)
Start MongoDB (as a cluster) and Elasticsearch, and use mongo-connector to sync data from MongoDB to Elasticsearch.
## [mongo_webui](mongo_webui)
Start 1 MongoDB instance and a mongo-express web tool to watch it.
The MongoDB instance stores its data in the local /opt/data/mongo_home directory.
The web UI listens on local port 8081.
## [nginx_auth](nginx_auth)
Use Nginx as a proxy with authentication for the backend application.
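You can test the authentication from the host; the port and the `user:pass` credentials below are placeholders, see the example's Nginx config for the real values:

```sh
# Without credentials nginx should reply 401 Unauthorized
curl -I http://localhost/
# With valid basic-auth credentials the request is proxied to the backend
curl -I -u user:pass http://localhost/
```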
## [packetbeat_ek](packetbeat_ek)
Demo of Packetbeat, Elasticsearch, and Kibana.
Some Kibana [dashboard config](https://github.com/elastic/beats-dashboards) files are included.
To import them, after all containers start up, go inside the Kibana container and run
```sh
$ cd /kibana/beats-dashboards-1.0.1 && ./load.sh http://elasticsearch:9200
```
## [registry_mirror](registry_mirror)
Docker Registry mirror, with Redis as the backend cache.
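To make a Docker daemon actually pull through the mirror, point it at the mirror's address; the port 5000 below is an assumption, check the compose file for the published port:

```sh
# Start the docker daemon with the local mirror configured
# (on the docker 1.x engines of this era the daemon is launched as `docker daemon`)
docker daemon --registry-mirror=http://localhost:5000
# Subsequent pulls will then be cached by the redis-backed mirror
docker pull ubuntu
```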
## [spark_cluster](spark_cluster)
Spark cluster with master and worker nodes.
```sh
docker-compose scale worker=2
```
Try submitting a test Pi application using the spark-submit command.
```sh
/usr/local/spark/bin/spark-submit --master spark://master:7077 --class org.apache.spark.examples.SparkPi /usr/local/spark/lib/spark-examples-1.4.0-hadoop2.6.0.jar 1000
```