Starting containers with docker-compose
What is docker-compose?
docker-compose is a tool for defining and running multi-container Docker applications from a single configuration file.
With docker-compose, a whole string of otherwise tedious docker commands collapses into a single command that brings everything up automatically. Quick and simple.
1. Prepare three servers
192.168.0.94
192.168.0.92
192.168.0.91
Note: the steps below must be carried out on all three servers.
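Because the ZooKeeper and Kafka nodes have to reach each other over the network, the relevant ports must be open between the three servers. A minimal sketch, assuming firewalld is in use (the default on CentOS, which matches the yum command below); adapt it to whatever firewall you actually run:
firewall-cmd --permanent --add-port=2181/tcp   # ZooKeeper client port
firewall-cmd --permanent --add-port=2888/tcp   # ZooKeeper quorum traffic
firewall-cmd --permanent --add-port=3888/tcp   # ZooKeeper leader election
firewall-cmd --permanent --add-port=9092/tcp   # Kafka broker
firewall-cmd --reload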
2. Install docker
yum -y install docker
3. Start docker
systemctl start docker
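Optionally confirm the daemon is healthy and enable it on boot. Note that the docker yum package does not include docker-compose itself; one common way to install it is to download the binary from the GitHub releases page (shown here with an example release version, assuming internet access):
systemctl enable docker        # start docker automatically after a reboot
docker info                    # confirms the client can reach the daemon
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose --version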
4. Create a working directory
mkdir /home/docker
5. Pull the official zookeeper image and the wurstmeister/kafka image
docker pull zookeeper
docker pull wurstmeister/kafka
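A quick check that both images landed locally before writing the compose files:
docker images | grep -E 'zookeeper|kafka'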
6. Create docker-compose.yml and zoo.cfg in the /home/docker directory
The files are almost identical on all three servers; only the node IDs and IP addresses differ.
On 192.168.0.94:
vim docker-compose.yml
version: '2'
services:
  zookeeper:
    image: 'zookeeper:latest'
    restart: always
    environment:
      ZOO_MY_ID: 1
    volumes:
      - /home/docker/zoo.cfg:/conf/zoo.cfg
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    container_name: 'zookeeper01'
    #network_mode: "host"
  kafka1:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.0.94
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.0.94:9092
      KAFKA_ZOOKEEPER_CONNECT: "192.168.0.94:2181,192.168.0.92:2181,192.168.0.91:2181"
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zookeeper
    container_name: kafka1
Create the mounted zoo.cfg file
vi /home/docker/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
dataLogDir=/data
clientPort=2181
server.1=0.0.0.0:2888:3888
server.2=192.168.0.92:2888:3888
server.3=192.168.0.91:2888:3888
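In each node's zoo.cfg, the server.N entry for the node itself is written as 0.0.0.0 (so it binds the quorum ports locally), and that N must match the ZOO_MY_ID in the node's docker-compose.yml; the other two entries point at the peers' real IPs. If you want to avoid hand-editing three nearly identical files, a small hypothetical helper script along these lines can generate the file on each node (node 1 = 192.168.0.94, node 2 = 192.168.0.92, node 3 = 192.168.0.91; set MY_ID accordingly):
MY_ID=1                                       # 1, 2 or 3 depending on the server
IPS=(192.168.0.94 192.168.0.92 192.168.0.91)
{
  cat <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
dataLogDir=/data
clientPort=2181
EOF
  for i in 1 2 3; do
    if [ "$i" -eq "$MY_ID" ]; then
      echo "server.$i=0.0.0.0:2888:3888"           # this node binds locally
    else
      echo "server.$i=${IPS[$((i-1))]}:2888:3888"  # peer address
    fi
  done
} > /home/docker/zoo.cfg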
On 192.168.0.92:
vim docker-compose.yml
version: '2'
services:
  zookeeper:
    image: 'zookeeper:latest'
    restart: always
    environment:
      ZOO_MY_ID: 2
    volumes:
      - /home/docker/zoo.cfg:/conf/zoo.cfg
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    container_name: 'zookeeper02'
    # network_mode: "host"
  kafka2:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.0.92
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.0.92:9092
      KAFKA_ZOOKEEPER_CONNECT: "192.168.0.94:2181,192.168.0.92:2181,192.168.0.91:2181"
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zookeeper
    container_name: kafka2
Create the mounted zoo.cfg file
vi /home/docker/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
dataLogDir=/data
clientPort=2181
server.1=192.168.0.94:2888:3888
server.2=0.0.0.0:2888:3888
server.3=192.168.0.91:2888:3888
On 192.168.0.91:
vim docker-compose.yml
version: '2'
services:
  zookeeper:
    image: 'zookeeper:latest'
    restart: always
    environment:
      ZOO_MY_ID: 3
    volumes:
      - /home/docker/zoo.cfg:/conf/zoo.cfg
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    container_name: 'zookeeper03'
    #network_mode: "host"
  kafka3:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.0.91
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.0.91:9092
      KAFKA_ZOOKEEPER_CONNECT: "192.168.0.94:2181,192.168.0.92:2181,192.168.0.91:2181"
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 3
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zookeeper
    container_name: kafka3
Create the mounted zoo.cfg file
vi /home/docker/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
dataLogDir=/data
clientPort=2181
server.1=192.168.0.94:2888:3888
server.2=192.168.0.92:2888:3888
server.3=0.0.0.0:2888:3888
That completes the configuration on all three servers; pretty painless.
7. Start the zookeeper and kafka containers with docker-compose. In the directory containing docker-compose.yml, run the command below.
Run the same command on all three servers:
docker-compose up
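If you prefer not to keep the containers attached to your terminal, run compose in detached mode and follow the logs separately (service names as defined in the compose files above; use kafka2/kafka3 on the other two hosts):
docker-compose up -d
docker-compose logs -f zookeeper
docker-compose logs -f kafka1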
8. Check the status of the running containers
docker-compose ps
If both containers on each server show as Up, the deployment succeeded.
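Beyond the containers merely being up, you can check that the ZooKeeper ensemble actually formed a quorum: exactly one node should report itself as leader and the other two as follower. A minimal check using the container names from the compose files above (zookeeper02/zookeeper03 on the other hosts):
docker exec zookeeper01 zkServer.sh status
# expected to end with "Mode: leader" on one server and "Mode: follower" on the other two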
9. Create a Kafka topic
Run docker exec -it kafka1 bash to open a shell inside the kafka container.
Create the topic:
kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 3 --topic test001
List the topics; the newly created topic is visible from every node:
kafka-topics.sh --list --zookeeper zookeeper:2181
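To see how the three partitions were distributed, describe the topic from inside the container; the Leader column should show broker IDs 1, 2 and 3, matching the KAFKA_BROKER_ID values configured above:
kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic test001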
Produce messages (use the broker IPs here, since the kafka1/kafka2/kafka3 hostnames only resolve inside each host's own compose network):
kafka-console-producer.sh --broker-list 192.168.0.94:9092,192.168.0.92:9092,192.168.0.91:9092 --topic test001
Consume messages:
kafka-console-consumer.sh --bootstrap-server 192.168.0.94:9092,192.168.0.92:9092,192.168.0.91:9092 --topic test001 --from-beginning
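As a quick end-to-end smoke test, you can pipe a single message into the producer on one server and read it back with a consumer on another; the message text below is just an example:
# on 192.168.0.94, inside the kafka1 container
echo "hello from broker 1" | kafka-console-producer.sh --broker-list 192.168.0.94:9092 --topic test001
# on 192.168.0.92, inside the kafka2 container
kafka-console-consumer.sh --bootstrap-server 192.168.0.92:9092 --topic test001 --from-beginning
# the message appears even though it was produced on a different host, confirming the advertised listeners work across the cluster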