Docker Compose Files
===
Some typical Docker Compose examples.
# Install Docker and Docker Compose
Take Ubuntu as an example:
```sh
$ curl -sSL https://get.docker.com/ | sh
$ sudo pip install docker-compose
```
# Docker-compose Usage
See [https://docs.docker.com/compose/](https://docs.docker.com/compose/).
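As a minimal illustration of what a compose file looks like (the service names and images below are hypothetical, not one of the examples in this repo), each container is declared as a service, and links wire them together:

```yaml
# docker-compose.yml — a hypothetical two-service sketch
web:
  image: nginx
  ports:
    - "80:80"
  links:
    - db
db:
  image: redis
```

Running `docker-compose up -d` in the same directory starts both containers.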
# Examples
## consul-discovery
Use Consul to build a service-discovery-enabled architecture.
## elk_netflow
ELK (Elasticsearch, Logstash, Kibana) cluster with NetFlow support. Scale the Elasticsearch nodes with:
```sh
docker-compose scale es=3
```
## mongo_cluster
Start 3 mongo instances to form a replica set.
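Once the three instances are up, the replica set is typically initiated from the mongo shell of one member. A sketch, assuming the containers can reach each other under the hostnames `mongo1`..`mongo3` (names not taken from this repo's compose file):

```sh
# Hypothetical hostnames — adjust to the service names in the compose file
mongo --eval 'rs.initiate({
  _id: "rs0",
  members: [
    {_id: 0, host: "mongo1:27017"},
    {_id: 1, host: "mongo2:27017"},
    {_id: 2, host: "mongo3:27017"}
  ]})'
```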
## mongo-elasticsearch
Start mongo (as a cluster) and Elasticsearch, and use mongo-connector to sync data from mongo to Elasticsearch.
## mongo_webui
Start 1 mongo instance and a mongo-express web tool to inspect it.
The mongo instance stores its data in the local directory /opt/data/mongo_home.
The web UI listens on local port 8081.
## nginx_auth
Use nginx as a proxy with authentication in front of a backend application.
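The nginx side of such a setup usually boils down to `auth_basic` plus a `proxy_pass`. A minimal sketch (the htpasswd path and the upstream address are assumptions, not taken from this repo's config):

```nginx
server {
    listen 80;
    location / {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;  # created beforehand with htpasswd
        proxy_pass           http://app:5000;       # hypothetical backend service
    }
}
```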
## registry_mirror
Docker registry mirror, with Redis as the backend cache.
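To make a local Docker daemon pull through such a mirror, point it at the mirror's published address. A sketch for Ubuntu of that era (the mirror address is an assumption; use whatever host/port the compose file exposes):

```sh
# /etc/default/docker — hypothetical mirror address
DOCKER_OPTS="--registry-mirror=http://localhost:5000"
```

Restart the Docker service afterwards so the daemon picks up the option.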
## spark_cluster
Spark cluster with master and worker nodes. Scale the number of workers with:
```sh
docker-compose scale worker=2
```
Try submitting a test Pi application with the spark-submit command:
```sh
/usr/local/spark/bin/spark-submit \
  --master spark://master:7077 \
  --class org.apache.spark.examples.SparkPi \
  /usr/local/spark/lib/spark-examples-1.4.0-hadoop2.6.0.jar 1000
```