Deploying and Starting a Zookeeper + Kafka Cluster
The internal IPs of my virtual machines are 192.168.136.129, 192.168.136.132, and 192.168.136.133.
1. Complete steps
See: Zookeeper 和 Kafka 工作原理及如何搭建 Zookeeper集群 + Kafka集群 (CSDN blog)
2. Deploying and starting Zookeeper
2.1 Add the cluster configuration
cd /usr/local/zookeeper-3.5.7/conf/
(Running ./bin/zkCli.sh from the /usr/local/zookeeper-3.5.7/ directory opens the client shell.)
vim zoo.cfg
# Add the cluster configuration
server.1=192.168.136.129:3188:3288
server.2=192.168.136.132:3188:3288
server.3=192.168.136.133:3188:3288
(Format: server.<myid>=<IP>:<follower-to-leader port>:<leader-election port>; these lines must use the cluster's actual IPs, i.e. 192.168.136.x here.)
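For reference, the surrounding zoo.cfg might look like this minimal sketch; the tickTime/initLimit/syncLimit values are Zookeeper's common defaults, and dataDir is assumed to match the myid path used in the next step:

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.5.7/data
clientPort=2181
server.1=192.168.136.129:3188:3288
server.2=192.168.136.132:3188:3288
server.3=192.168.136.133:3188:3288
```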
2.2 Set the cluster node IDs
Create a myid file in the directory that dataDir points to on each node (run only the line that matches the node):
echo 1 > /usr/local/zookeeper-3.5.7/data/myid    # on 192.168.136.129
echo 2 > /usr/local/zookeeper-3.5.7/data/myid    # on 192.168.136.132
echo 3 > /usr/local/zookeeper-3.5.7/data/myid    # on 192.168.136.133
2.3 Start the nodes and check their status
# Start the node
service zookeeper start
# Check the current status
service zookeeper status
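The service zookeeper commands assume an init script was installed beforehand; if your install lacks one, the scripts bundled with the Zookeeper distribution do the same job (a sketch, assuming the install path above):

```shell
cd /usr/local/zookeeper-3.5.7
./bin/zkServer.sh start    # start this node
./bin/zkServer.sh status   # prints "Mode: leader" or "Mode: follower" once a quorum forms
./bin/zkServer.sh stop     # stop this node
```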
3. Deploying and starting the Kafka cluster
Kafka is installed at /usr/local/kafka/
3.1 Set the broker IDs and the Zookeeper connection address
cd /usr/local/kafka/config/
# Edit the configuration file
vim server.properties
broker.id=0    # line 21: globally unique broker number; it must differ per broker, so set broker.id=1 and broker.id=2 on the other machines
zookeeper.connect=192.168.136.129:2181,192.168.136.132:2181,192.168.136.133:2181    # line 123: Zookeeper cluster connection address
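Since broker.id must differ per machine, a small helper can map each host's IP to its id so the same edit script works on all three nodes; broker_id_for_ip is a hypothetical helper for illustration, not part of Kafka:

```shell
# Map a node's IP to its broker.id (hypothetical helper; IDs follow the text above)
broker_id_for_ip() {
  case "$1" in
    192.168.136.129) echo 0 ;;
    192.168.136.132) echo 1 ;;
    192.168.136.133) echo 2 ;;
    *) echo "unknown host: $1" >&2; return 1 ;;
  esac
}

# Example use on a node (commented out; would edit server.properties in place):
# id=$(broker_id_for_ip "$(hostname -I | awk '{print $1}')")
# sed -i "s/^broker.id=.*/broker.id=$id/" /usr/local/kafka/config/server.properties
```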
3.2 Set environment variables
vim /etc/profile
export KAFKA_HOME=/usr/local/kafka
export PATH=$PATH:$KAFKA_HOME/bin
source /etc/profile
3.3 Start Kafka and check its status
# Start kafka on each node
service kafka start
# Check the status
service kafka status
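As with Zookeeper, service kafka presumes an init script was installed; the startup scripts shipped with Kafka are an alternative (a sketch, assuming the install path above):

```shell
cd /usr/local/kafka
./bin/kafka-server-start.sh -daemon config/server.properties   # start the broker in the background
./bin/kafka-server-stop.sh                                     # stop the broker
```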
4. Common ports
- Port 9092: Kafka's default port, used to accept and serve requests from producers and consumers over the PLAINTEXT protocol. If you change the listeners setting in Kafka's configuration, this port may differ.
- Port 2181: when Zookeeper acts as Kafka's cluster coordination service, it listens on port 2181 by default. Every Kafka broker communicates with Zookeeper to synchronize cluster state, elect leaders, and so on.
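A quick way to confirm both ports are reachable from any node is a connectivity probe; this sketch assumes nc (netcat) is installed:

```shell
# Probe the Zookeeper (2181) and Kafka (9092) ports on every node
for host in 192.168.136.129 192.168.136.132 192.168.136.133; do
  nc -z -w 2 "$host" 2181 && echo "$host: zookeeper port open"
  nc -z -w 2 "$host" 9092 && echo "$host: kafka port open"
done
```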
5. Producer/consumer test
5.1 Create a topic
kafka-topics.sh --create --zookeeper 192.168.136.129:2181,192.168.136.132:2181,192.168.136.133:2181 --replication-factor 2 --partitions 3 --topic test
--zookeeper: the Zookeeper cluster address; separate multiple IPs with commas (a single IP is usually enough)
--replication-factor: the number of replicas per partition; 1 means a single replica, 2 is recommended
--partitions: the number of partitions
--topic: the topic name
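Note that on newer Kafka releases (2.2 and later) topic commands can also talk to the brokers directly, and the --zookeeper flag was eventually removed; an equivalent form, assuming such a version:

```shell
kafka-topics.sh --create --bootstrap-server 192.168.136.129:9092 \
  --replication-factor 2 --partitions 3 --topic test
```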
5.2 List all topics on the servers
kafka-topics.sh --list --zookeeper 192.168.136.129:2181,192.168.136.132:2181,192.168.136.133:2181
Output:
test
5.3 View a topic's details
kafka-topics.sh --describe --zookeeper 192.168.136.129:2181,192.168.136.132:2181,192.168.136.133:2181
(Without a --topic argument, this describes every topic on the cluster; add --topic test to show only that one.)
Output:
Topic: __consumer_offsets PartitionCount: 50 ReplicationFactor: 1 Configs: compression.type=producer,cleanup.policy=compact,segment.bytes=104857600
Topic: __consumer_offsets Partition: 0 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 1 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 2 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 3 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 4 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 5 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 6 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 7 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 8 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 9 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 10 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 11 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 12 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 13 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 14 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 15 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 16 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 17 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 18 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 19 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 20 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 21 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 22 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 23 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 24 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 25 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 26 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 27 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 28 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 29 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 30 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 31 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 32 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 33 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 34 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 35 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 36 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 37 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 38 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 39 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 40 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 41 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 42 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 43 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 44 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 45 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 46 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 47 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 48 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 49 Leader: 0 Replicas: 0 Isr: 0
5.4 Publish messages (on the producer)
kafka-console-producer.sh --broker-list 192.168.136.129:9092,192.168.136.132:9092,192.168.136.133:9092 --topic test
kafka-console-producer.sh is the console producer client that ships with Kafka
Result: you can now type messages at the keyboard; each line is sent to the topic
5.5 Consume messages (on the consumer)
The consumer can be any node in the kafka cluster (including the same node/host as the producer)
kafka-console-consumer.sh --bootstrap-server 192.168.136.129:9092,192.168.136.132:9092,192.168.136.133:9092 --topic test --from-beginning
--from-beginning: reads out all of the data already in the topic
Result: all messages previously produced to the test topic are printed
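The two console clients above can be combined into a non-interactive smoke test; --max-messages and --timeout-ms make the consumer exit on its own (a sketch, assuming the cluster is up):

```shell
# Produce one message, then read one message back and exit
echo "hello kafka" | kafka-console-producer.sh \
  --broker-list 192.168.136.129:9092 --topic test
kafka-console-consumer.sh \
  --bootstrap-server 192.168.136.129:9092 --topic test \
  --from-beginning --max-messages 1 --timeout-ms 10000
```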
