Deploying a Kafka Cluster

The docker-compose.yaml file

[root@kafka kafka]# vim docker-compose.yaml 
version: '3.3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - 2181:2181
    volumes:
      - /home/kafka/zookeeper/data:/data
      - /home/kafka/zookeeper/datalog:/datalog
      - /home/kafka/zookeeper/logs:/logs
    restart: always
  kafka1:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    container_name: kafka1
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 172.23.1.55:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.23.1.55:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_LOG_DIRS: /data/kafka-data
      KAFKA_LOG_RETENTION_HOURS: 24
    volumes:
      - /home/kafka/kafka1/data:/data/kafka-data
    restart: unless-stopped  
  kafka2:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    container_name: kafka2
    ports:
      - 9093:9093
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: 172.23.1.55:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.23.1.55:9093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9093
      KAFKA_LOG_DIRS: /data/kafka-data
      KAFKA_LOG_RETENTION_HOURS: 24
    volumes:
      - /home/kafka/kafka2/data:/data/kafka-data
    restart: unless-stopped
  kafka3:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    container_name: kafka3
    ports:
      - 9094:9094
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: 172.23.1.55:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.23.1.55:9094
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9094
      KAFKA_LOG_DIRS: /data/kafka-data
      KAFKA_LOG_RETENTION_HOURS: 24
    volumes:
      - /home/kafka/kafka3/data:/data/kafka-data
    restart: unless-stopped
  kafka-manager:
    image: sheepkiller/kafka-manager
    environment:
      ZK_HOSTS: 172.23.1.55
    ports:
      - "9000:9000"
                
[root@kafka kafka]# docker-compose up -d
Creating network "kafka_default" with the default driver
Creating kafka_kafka-manager_1 ... done
Creating zookeeper             ... done
Creating kafka1                ... done
Creating kafka2                ... done
Creating kafka3                ... done    
   
[root@kafka kafka]# docker-compose ps
        Name                       Command               State                         Ports                       
-------------------------------------------------------------------------------------------------------------------
kafka1                  start-kafka.sh                   Up      0.0.0.0:9092->9092/tcp                            
kafka2                  start-kafka.sh                   Up      0.0.0.0:9093->9093/tcp                            
kafka3                  start-kafka.sh                   Up      0.0.0.0:9094->9094/tcp                            
kafka_kafka-manager_1   ./start-kafka-manager.sh         Up      0.0.0.0:9000->9000/tcp                            
zookeeper               /bin/sh -c /usr/sbin/sshd  ...   Up      0.0.0.0:2181->2181/tcp, 22/tcp, 2888/tcp, 3888/tcp                                                                                         

Verifying the containers

Create a topic

Create a topic named test with 3 partitions and 2 replicas

[root@kafka kafka]# docker exec -it kafka1 bash
bash-5.1# cd /opt/kafka/config/
bash-5.1# ls
connect-console-sink.properties    connect-file-source.properties     consumer.properties                server.properties
connect-console-source.properties  connect-log4j.properties           kraft                              tools-log4j.properties
connect-distributed.properties     connect-mirror-maker.properties    log4j.properties                   trogdor.conf
connect-file-sink.properties       connect-standalone.properties      producer.properties                zookeeper.properties
bash-5.1# cd /opt/kafka/bin/
bash-5.1# ./kafka-topics.sh --create --topic test --zookeeper 172.23.1.55:2181 --partitions 3 --replication-factor 2                   
Created topic test.

Note: the replication factor cannot exceed the number of brokers (the partition count can exceed it); otherwise topic creation fails.
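The constraint above can be sketched as a simple pre-flight check (illustrative Python only, not Kafka's actual validation code; the error text mimics the broker's InvalidReplicationFactorException message):

```python
def validate_topic(partitions: int, replication_factor: int, num_brokers: int) -> None:
    """Each replica of a partition must live on a distinct broker, so the
    replication factor is capped by the broker count; the partition count
    is not, since multiple partitions may share a broker."""
    if replication_factor > num_brokers:
        raise ValueError(
            f"Replication factor: {replication_factor} larger than "
            f"available brokers: {num_brokers}"
        )

# 3 partitions / 2 replicas on 3 brokers is fine ...
validate_topic(partitions=3, replication_factor=2, num_brokers=3)
# ... and partitions may freely exceed the broker count
validate_topic(partitions=10, replication_factor=3, num_brokers=3)
```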

List the topics

bash-5.1# ./kafka-topics.sh --list --zookeeper 172.23.1.55:2181 
__consumer_offsets
log-topic
test
utm-topic

Describe the topic

bash-5.1# ./kafka-topics.sh --describe --topic test --zookeeper 172.23.1.55:2181     
Topic: test     TopicId: YULcFXg-TQWBx7fb6liiGg PartitionCount: 3       ReplicationFactor: 2    Configs: 
        Topic: test     Partition: 0    Leader: 2       Replicas: 2,3   Isr: 2,3
        Topic: test     Partition: 1    Leader: 3       Replicas: 3,1   Isr: 3,1
        Topic: test     Partition: 2    Leader: 1       Replicas: 1,2   Isr: 1,2
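The `--describe` rows above can also be read programmatically; a small parser (illustrative, tied to this CLI output layout) extracts each partition's leader, replica set, and in-sync replicas (ISR):

```python
import re

# One row per partition, e.g.:
# "Topic: test  Partition: 0  Leader: 2  Replicas: 2,3  Isr: 2,3"
ROW = re.compile(
    r"Topic: (?P<topic>\S+)\s+Partition: (?P<partition>\d+)\s+"
    r"Leader: (?P<leader>\d+)\s+Replicas: (?P<replicas>[\d,]+)\s+Isr: (?P<isr>[\d,]+)"
)

def parse_describe(lines):
    """Parse the per-partition rows of kafka-topics.sh --describe output."""
    rows = []
    for line in lines:
        m = ROW.search(line)
        if m:
            rows.append({
                "partition": int(m["partition"]),
                "leader": int(m["leader"]),
                "replicas": [int(b) for b in m["replicas"].split(",")],
                "isr": [int(b) for b in m["isr"].split(",")],
            })
    return rows

output = [
    "Topic: test     Partition: 0    Leader: 2       Replicas: 2,3   Isr: 2,3",
    "Topic: test     Partition: 1    Leader: 3       Replicas: 3,1   Isr: 3,1",
    "Topic: test     Partition: 2    Leader: 1       Replicas: 1,2   Isr: 1,2",
]
for row in parse_describe(output):
    # A healthy partition has all replicas in sync: Isr == Replicas.
    assert row["isr"] == row["replicas"]
```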

On the host, change into /home/kafka/kafka1/data and you can see the topic's data

[root@kafka kafka]# ls /home/kafka/kafka1/data/|grep test
test-1
test-2

Notes:

  • Each partition's data directory is named <topic name>-<partition number> (e.g. test-1)
  • With 3 partitions and 2 replicas, 6 partition directories are created in total, spread without overlap across the 3 brokers (check the kafka2 and kafka3 directories to verify)
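The spread described above can be sketched with a simplified round-robin placement (real Kafka randomizes the starting broker and shifts follower placement; the start value here is chosen only so the result reproduces this cluster's --describe layout):

```python
def assign_replicas(partitions, replication_factor, brokers, start=0):
    """Simplified round-robin replica placement: partition p's replicas go to
    consecutive brokers beginning at offset start + p."""
    n = len(brokers)
    return {
        p: [brokers[(start + p + r) % n] for r in range(replication_factor)]
        for p in range(partitions)
    }

# 3 partitions x 2 replicas = 6 partition directories over brokers 1-3
assignment = assign_replicas(partitions=3, replication_factor=2,
                             brokers=[1, 2, 3], start=1)
print(assignment)   # {0: [2, 3], 1: [3, 1], 2: [1, 2]} -- same as --describe

dirs_per_broker = {b: 0 for b in (1, 2, 3)}
for p, replicas in assignment.items():
    for b in replicas:
        dirs_per_broker[b] += 1   # broker b gets a test-<p> directory
print(dirs_per_broker)            # every broker holds exactly 2 directories
```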

Simulating message production and consumption

Create a producer that sends messages to the topic

[root@kafka kafka]# docker exec -it kafka1 bash
bash-5.1# cd /opt/kafka/bin/
bash-5.1# ./kafka-console-producer.sh --topic test --broker-list 172.23.1.55:9092                     
>hello   
>abcdef
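The console producer above sends keyless messages. A toy sketch of how a producer picks a partition (illustrative only: real Kafka hashes keys with murmur2, and newer clients batch keyless records with a sticky partitioner rather than strict round-robin):

```python
def pick_partition(key, num_partitions, counter=0):
    """Toy default partitioner: keyless records rotate over partitions;
    keyed records hash, so the same key always lands on the same partition
    (sum(key) stands in for Kafka's murmur2 hash)."""
    if key is None:
        return counter % num_partitions
    return sum(key) % num_partitions

# 'hello' and 'abcdef' were sent without keys -> successive partitions
print(pick_partition(None, 3, counter=0))
print(pick_partition(None, 3, counter=1))
# a keyed record sticks to one partition regardless of send order
assert pick_partition(b"user-1", 3) == pick_partition(b"user-1", 3, counter=99)
```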

Log in to the kafka2 or kafka3 container and create a consumer to receive messages from the topic

[root@kafka ~]# docker exec -it kafka2 bash
bash-5.1# cd /opt/kafka/bin/
bash-5.1# ./kafka-console-consumer.sh --topic test --bootstrap-server 172.23.1.55:9092 --from-beginning                  
hello
abcdef

Note: --from-beginning reads from the earliest message; without this flag, the consumer starts from the latest offset and only receives messages produced afterwards.
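The offset semantics in the note above can be modeled with a toy partition log (an illustrative model, not the consumer API):

```python
class Log:
    """Toy partition log illustrating --from-beginning semantics."""

    def __init__(self, messages):
        self.messages = list(messages)

    def read(self, from_beginning):
        # from_beginning -> start at offset 0; otherwise start at the
        # log-end offset, so only messages appended later would be seen.
        start = 0 if from_beginning else len(self.messages)
        return self.messages[start:]

log = Log(["hello", "abcdef"])
print(log.read(from_beginning=True))    # ['hello', 'abcdef']
print(log.read(from_beginning=False))   # [] -- waits for new messages
```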

Delete a topic

bash-5.1# ./kafka-topics.sh --delete --topic test --zookeeper 172.23.1.55:2181
Topic test is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.  

Deleting a topic does not remove it immediately; the topic is first only marked for deletion.
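That two-phase behavior can be sketched as follows (an illustrative model, assuming the controller later acts on the mark and honors delete.topic.enable):

```python
def mark_for_deletion(topic, state):
    """kafka-topics.sh --delete only records the intent to delete."""
    state["pending_delete"].add(topic)

def controller_pass(state):
    """Asynchronous controller step that performs the actual removal,
    but only when delete.topic.enable is true."""
    if not state["delete_topic_enable"]:
        return  # marks linger; topics are never actually removed
    for topic in list(state["pending_delete"]):
        state["topics"].discard(topic)
        state["pending_delete"].discard(topic)

state = {"topics": {"test", "log-topic"}, "pending_delete": set(),
         "delete_topic_enable": True}
mark_for_deletion("test", state)
assert "test" in state["topics"]      # still listed right after --delete
controller_pass(state)
assert "test" not in state["topics"]  # gone once the controller runs
```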

Troubleshooting

After modifying docker-compose.yaml, up failed to start the brokers; checking the logs with docker-compose logs showed the error

The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.

Fix: delete the Kafka data files mounted on the host (under each data directory), then run docker-compose down, and finally up again.
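The reason wiping the data directories helps: each broker records its cluster id in meta.properties under its log directory, and on startup compares it with the id ZooKeeper reports; a mismatch (e.g. after the ZooKeeper data was recreated) fails startup with exactly the error above. A sketch of that check (illustrative, not Kafka's actual code):

```python
def check_cluster_id(meta_properties_cluster_id, zookeeper_cluster_id):
    """Model of the broker's startup check against <log.dirs>/meta.properties.
    Deleting the mounted data dir removes meta.properties, so the broker
    adopts the current ZooKeeper cluster id on the next start."""
    if meta_properties_cluster_id is None:
        return zookeeper_cluster_id  # fresh data dir: adopt ZK's id
    if meta_properties_cluster_id != zookeeper_cluster_id:
        raise RuntimeError(
            "The broker is trying to join the wrong cluster. "
            "Configured zookeeper.connect may be wrong."
        )
    return meta_properties_cluster_id

# fresh data dir: broker adopts ZooKeeper's id and starts cleanly
print(check_cluster_id(None, "zk-cluster-A"))       # zk-cluster-A
# stale meta.properties vs. a recreated ZK tree: startup fails
try:
    check_cluster_id("zk-cluster-A", "zk-cluster-B")
except RuntimeError as e:
    print(e)
```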

Author: 鲜花的主人
License: Unless otherwise stated, all articles on this site are licensed under CC BY-NC-SA 4.0. Please credit 爱吃可爱多 when reposting.