Kafka Production and Consumption

Command-Line Testing

Start command

./bin/kafka-server-start.sh ./config/server.properties &

Creating and deleting a topic

  • --zookeeper  ZooKeeper server address
  • --partitions  number of partitions
  • --topic topic-demo  sets the topic name to topic-demo
  • --replication-factor  number of replicas
  • --delete  delete command
  • --create  create command

[root@slave1 kafka_2.11-2.2.1]# ./bin/kafka-topics.sh --zookeeper master:2181 --create --topic topic-demo --replication-factor 3 --partitions 4
Created topic topic-demo.

[root@slave1 kafka_2.11-2.2.1]# ./bin/kafka-topics.sh --zookeeper master:2181 --topic topic-demo --delete
Topic topic-demo is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
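As that note says, the delete command only takes effect when the broker allows topic deletion. If deletion appears to do nothing, check the broker configuration before starting it (in recent Kafka versions, including 2.x, this setting already defaults to true):

```properties
# config/server.properties -- allow topics to actually be deleted
# (defaults to true in Kafka 1.0+; older brokers default to false)
delete.topic.enable=true
```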

Viewing topic details

[root@slave1 kafka_2.11-2.2.1]# ./bin/kafka-topics.sh --zookeeper master:2181 --topic topic-demo --describe
Topic:topic-demo PartitionCount:4 ReplicationFactor:3 Configs:
Topic: topic-demo Partition: 0 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: topic-demo Partition: 1 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: topic-demo Partition: 2 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: topic-demo Partition: 3 Leader: 2 Replicas: 2,1,3 Isr: 2,1,3

Console producer and consumer

[root@slave1 kafka_2.11-2.2.1]# ./bin/kafka-console-producer.sh --topic topic-demo --broker-list master:9092
>Hello, I am lyhcc.
>

[root@slave1 kafka_2.11-2.2.1]# ./bin/kafka-console-consumer.sh --bootstrap-server master:9092 --topic topic-demo

Hello, I am lyhcc.

Note: the command-line tools are only meant for testing.


Implementing Producer and Consumer Clients in Java

To write messages to Kafka, first create a producer client and set its configuration parameters. Then build a ProducerRecord for the message, which must contain at least the target topic and the message body. The producer client sends the record, and finally the close() method shuts the producer down and releases its resources.

To consume messages, first create a consumer client instance and configure its parameters, then subscribe to the topic and poll for records.

Producer

public class KafkaProducerDemo {
    public static final String brokerList = "master:9092";
    public static final String topic = "topic-demo";

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("bootstrap.servers", brokerList);

        // 1. Configure the producer client parameters and create a KafkaProducer instance
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(properties);
        // 2. Build the message to send
        ProducerRecord<String, String> record = new ProducerRecord<String, String>(topic, "hello world");
        // 3. Send the message
        try {
            producer.send(record);
        } catch (Exception e) {
            e.printStackTrace();
        }
        // 4. Close the producer client
        producer.close();
    }
}
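The send(record) call above is fire-and-forget: it returns a Future whose result is ignored, so delivery failures never surface in the calling code. A ProducerRecord can also carry an optional key, which Kafka hashes to choose the target partition, so records with the same key keep their relative order. A minimal sketch, where the class name KeyedRecordDemo and the key value are illustrative, not part of the Kafka API:

```java
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedRecordDemo {
    // Build a keyed record: the producer hashes the key to pick the
    // partition, so all records sharing a key land in the same partition.
    public static ProducerRecord<String, String> build(String topic, String key, String value) {
        return new ProducerRecord<String, String>(topic, key, value);
    }

    public static void main(String[] args) {
        ProducerRecord<String, String> record = build("topic-demo", "user-1", "hello world");
        // Inspect the record locally; no broker is contacted until send()
        System.out.println(record.topic() + " <- key=" + record.key());
    }
}
```

Such a record is passed to producer.send() exactly like the unkeyed one above.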

Consumer

public class KafkaConsumerDemo {
    public static final String brokerList = "master:9092";
    public static final String topic = "topic-demo";
    public static final String groupId = "group.demo";

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("group.id", groupId);
        properties.put("bootstrap.servers", brokerList);

        // 1. Create a consumer client instance
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(properties);
        // 2. Subscribe to the topic
        consumer.subscribe(Collections.singletonList(topic));
        // 3. Poll for records and iterate over them
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}
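As a side note, the raw string keys used above ("group.id", "bootstrap.servers", ...) are easy to mistype, and a typo is silently ignored rather than rejected. kafka-clients ships constants for them in ConsumerConfig (and ProducerConfig for the producer side). A sketch of the same configuration using those constants; the class name ConsumerConfigDemo is illustrative:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerConfigDemo {
    // Build the same consumer configuration as above, but with the
    // ConsumerConfig constants so key typos become compile errors.
    public static Properties config(String brokerList, String groupId) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        return props;
    }

    public static void main(String[] args) {
        System.out.println(config("master:9092", "group.demo"));
    }
}
```

The resulting Properties object is passed to the KafkaConsumer constructor exactly as in the example above.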

Before writing the code, add the Kafka client dependency:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.2.1</version>
</dependency>
