Deploying a ZooKeeper + Kafka cluster with docker-compose (a real cluster, not a pseudo-cluster)

This document walks through deploying a ZooKeeper and Kafka cluster across three servers with docker-compose. On each server you install Docker, create a working directory, pull the images, and write a docker-compose.yml and a zoo.cfg that assign the node ID and cluster member addresses. You then start the services with docker-compose and check the container status. Finally, it shows how to create a Kafka topic, produce messages, and consume them.


Starting containers with docker-compose

What is docker-compose?

docker-compose is a tool for defining and running multi-container Docker applications. You describe the containers in a single YAML file, and docker-compose brings the whole stack up or down with one command instead of a string of manual docker commands.
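As a minimal sketch (nginx and port 8080 here are placeholders, not part of this deployment), one short docker-compose.yml plus a single command replaces the equivalent docker run invocation:

version: '2'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"

docker-compose up -d    # starts everything defined in the file in one step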

1. Prepare three servers

192.168.0.94

192.168.0.92

192.168.0.91

Note: the steps below must be performed on all three servers.
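If firewalld is enabled on these hosts (typical on the CentOS/RHEL systems where yum is used), the ZooKeeper and Kafka ports must be reachable between the three machines. A hedged example, assuming firewalld is the active firewall:

firewall-cmd --permanent --add-port=2181/tcp   # ZooKeeper client port
firewall-cmd --permanent --add-port=2888/tcp   # ZooKeeper follower-to-leader port
firewall-cmd --permanent --add-port=3888/tcp   # ZooKeeper leader election port
firewall-cmd --permanent --add-port=9092/tcp   # Kafka broker port
firewall-cmd --reload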

2. Install Docker

yum -y install docker

3. Start Docker

systemctl start docker
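Optionally, enable Docker at boot and confirm the daemon is running (standard systemd and Docker commands):

systemctl enable docker    # start Docker automatically on boot
docker info                # verify the daemon is up and reachable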

4. Create the working directory

mkdir /home/docker

5. Pull the official ZooKeeper image and the wurstmeister/kafka image (the Kafka image used in the compose files below)

docker pull zookeeper

docker pull wurstmeister/kafka
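You can confirm both images are present before writing the compose files:

docker images | grep -E 'zookeeper|kafka'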

6. Create docker-compose.yml and zoo.cfg in /home/docker

The files on the three servers are almost identical; only the node ID and addresses change to distinguish the nodes.

192.168.0.94

vim docker-compose.yml

version: '2'
services:
  zookeeper:
    image: 'zookeeper:latest'
    restart: always
    environment:
      ZOO_MY_ID: 1                     # must match this host's server.N entry in zoo.cfg
    volumes:
      - /home/docker/zoo.cfg:/conf/zoo.cfg
    ports:
      - 2181:2181                      # client port
      - 2888:2888                      # follower-to-leader port
      - 3888:3888                      # leader election port
    container_name: 'zookeeper01'
    #network_mode: "host"
  kafka1:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.0.94
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.0.94:9092   # address other brokers and clients use to reach this broker
      KAFKA_ZOOKEEPER_CONNECT: "192.168.0.94:2181,192.168.0.92:2181,192.168.0.91:2181"
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 1               # must be unique per broker
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zookeeper
    container_name: kafka1
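Before starting anything, you can have docker-compose parse and validate the file (a standard docker-compose subcommand):

cd /home/docker
docker-compose config    # prints the resolved configuration or reports YAML errors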

Create the mounted zoo.cfg file

vi /home/docker/zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
dataLogDir=/data
clientPort=2181
# this host (server.1) binds on all interfaces; the other members are addressed by their LAN IPs
server.1=0.0.0.0:2888:3888
server.2=192.168.0.92:2888:3888
server.3=192.168.0.91:2888:3888

192.168.0.92

vim docker-compose.yml

version: '2'
services:
  zookeeper:
    image: 'zookeeper:latest'
    restart: always
    environment:
      ZOO_MY_ID: 2
    volumes:
      - /home/docker/zoo.cfg:/conf/zoo.cfg
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    container_name: 'zookeeper02'
    #network_mode: "host"
  kafka2:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.0.92
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.0.92:9092
      KAFKA_ZOOKEEPER_CONNECT: "192.168.0.94:2181,192.168.0.92:2181,192.168.0.91:2181"
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zookeeper
    container_name: kafka2

Create the mounted zoo.cfg file

vi /home/docker/zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
dataLogDir=/data
clientPort=2181
server.1=192.168.0.94:2888:3888
server.2=0.0.0.0:2888:3888
server.3=192.168.0.91:2888:3888

192.168.0.91

vim docker-compose.yml

version: '2'
services:
  zookeeper:
    image: 'zookeeper:latest'
    restart: always
    environment:
      ZOO_MY_ID: 3
    volumes:
      - /home/docker/zoo.cfg:/conf/zoo.cfg
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    container_name: 'zookeeper03'
    #network_mode: "host"
  kafka3:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.0.91
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.0.91:9092
      KAFKA_ZOOKEEPER_CONNECT: "192.168.0.94:2181,192.168.0.92:2181,192.168.0.91:2181"
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 3
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zookeeper
    container_name: kafka3

Create the mounted zoo.cfg file

vi /home/docker/zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
dataLogDir=/data
clientPort=2181
server.1=192.168.0.94:2888:3888
server.2=192.168.0.92:2888:3888
server.3=0.0.0.0:2888:3888

The services are now fully configured. Pretty convenient, isn't it?

7. Start ZooKeeper and Kafka with docker-compose by running the following command in the directory containing docker-compose.yml

Run the same command on all three machines.

docker-compose up
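Running docker-compose up in the foreground ties up the terminal and stops the containers on Ctrl-C. In practice you will usually want detached mode and then follow the logs (standard docker-compose options):

docker-compose up -d                 # start in the background
docker-compose logs -f zookeeper     # follow the ZooKeeper logs; use kafka1/kafka2/kafka3 on the respective host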

8. Check the container status

docker-compose ps

If both containers on each host show an Up state, the deployment succeeded.
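It is also worth confirming that the three ZooKeeper nodes actually formed a quorum. Assuming the official zookeeper image (which puts zkServer.sh on the PATH), run this on each host; one node should report leader and the other two follower:

docker exec -it zookeeper01 zkServer.sh status   # use zookeeper02 / zookeeper03 on the other hosts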

9. Create a Kafka topic and test messaging

Run docker exec -it kafka1 bash to get a shell inside the Kafka container.

1. Create a topic:

kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 3 --topic test001

List the topics; the newly created topic should be visible from every node:

kafka-topics.sh --list --zookeeper zookeeper:2181
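To check how the partitions and their leaders are distributed across the three brokers, the same tool also has a --describe mode:

kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic test001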

2. Produce messages (use the advertised broker addresses; the kafka1/kafka2/kafka3 container names only resolve on their own hosts):

kafka-console-producer.sh --broker-list 192.168.0.94:9092,192.168.0.92:9092,192.168.0.91:9092 --topic test001

3. Consume messages:

kafka-console-consumer.sh --bootstrap-server 192.168.0.94:9092,192.168.0.92:9092,192.168.0.91:9092 --topic test001 --from-beginning
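As a quick end-to-end check, run the consumer on one host and the producer on another; anything typed into the producer should appear in the consumer. You can also consume as part of a named consumer group (test-group below is just an example name):

kafka-console-consumer.sh --bootstrap-server 192.168.0.94:9092 --topic test001 --group test-group --from-beginning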
