This post walks through Kafka's Producer API operations.
1. Message Sending Flow
Kafka's Producer sends messages asynchronously. Two threads are involved in the sending process, the main thread and the Sender thread, plus one buffer shared between them, the RecordAccumulator. The main thread writes messages into the RecordAccumulator, and the Sender thread continuously pulls messages from the RecordAccumulator and sends them to the Kafka broker.
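To make this two-thread model concrete, below is a simplified, purely illustrative sketch: a shared queue stands in for the RecordAccumulator, the main thread appends records to it, and a background "Sender" thread drains and "sends" them in batches. This is not Kafka's actual implementation; the class and variable names here are invented only for illustration.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Conceptual sketch of the main-thread / Sender-thread / accumulator model (not real Kafka code)
public class TwoThreadModelSketch {
    public static void main(String[] args) throws InterruptedException {
        // Shared "RecordAccumulator": the main thread appends, the sender thread drains
        BlockingQueue<String> accumulator = new LinkedBlockingQueue<>();

        // "Sender" thread: pulls records out of the accumulator in batches and "sends" them
        Thread sender = new Thread(() -> {
            List<String> batch = new ArrayList<>();
            while (!Thread.currentThread().isInterrupted()) {
                accumulator.drainTo(batch, 16);               // rough analogue of batch.size
                if (!batch.isEmpty()) {
                    System.out.println("send batch: " + batch);
                    batch.clear();
                } else {
                    try {
                        TimeUnit.MILLISECONDS.sleep(1);       // rough analogue of linger.ms
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        });
        sender.start();

        // "main" thread: hands records to the accumulator without waiting for any network I/O
        for (int i = 0; i < 100; i++) {
            accumulator.put("record-" + i);
        }
        TimeUnit.MILLISECONDS.sleep(100);                     // give the sender time to drain
        sender.interrupt();
    }
}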
2. API Without a Callback
The figure below shows the KafkaProducer message-sending flow.
Relevant parameters:
batch.size: the Sender does not send data until batch.size bytes have accumulated.
linger.ms: if the accumulated data never reaches batch.size, the Sender sends it anyway after waiting linger.ms; whichever condition is met first triggers the send.
- 1. Import dependencies
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <source>8</source>
                <target>8</target>
            </configuration>
        </plugin>
    </plugins>
</build>
<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>0.11.0.2</version>
    </dependency>
</dependencies>
- 2. Write the code
Classes needed:
KafkaProducer: create a producer object, used to send data
ProducerConfig: supplies the set of configuration parameters that are needed
ProducerRecord: every piece of data must be wrapped in a ProducerRecord object
- 3. Complete code
package com.buwenbuhuo.kafka.producer;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;
import java.util.concurrent.ExecutionException;

/**
 * @author 卜溫不火
 * @create 2020-05-06 20:21
 * com.buwenbuhuo.kafka.producer - the name of the target package where the new class or interface will be created.
 * kafka0506 - the name of the current project.
 */
public class CustomProducer {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop002:9092"); // Kafka cluster broker list
        props.put("acks", "all");                         // wait for all in-sync replicas to acknowledge
        props.put("retries", 1);                          // number of retries
        props.put("batch.size", 16384);                   // batch size in bytes
        props.put("linger.ms", 1);                        // max wait time before sending a batch
        props.put("buffer.memory", 33554432);             // RecordAccumulator buffer size
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            producer.send(new ProducerRecord<String, String>("second", i + "", "my name is bu wen bu huo -" + i));
        }
        producer.close();
    }
}
- 4. Start ZooKeeper and Kafka across the cluster
[bigdata@hadoop002 zookeeper-3.4.10]$ bin/start-allzk.sh
[bigdata@hadoop002 kafka]$ bin/start-kafkaall.sh
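If the topic used by the producer (second) does not exist yet and automatic topic creation is disabled, it can be created first. The command below is only a suggestion: the ZooKeeper address, partition count, and replication factor are assumptions and should be adapted to your own cluster.
[bigdata@hadoop002 kafka]$ bin/kafka-topics.sh --zookeeper hadoop002:2181 --create --topic second --partitions 3 --replication-factor 2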
- 5. Start a console consumer
[bigdata@hadoop002 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server hadoop002:9092 --topic second
- 6. Run the code and test it
Why is the consumed output out of order? Because ordering is only guaranteed within a single partition; across partitions there is no ordering guarantee.
3. API With a Callback Function
The callback is invoked when the producer receives the ack, and the invocation is asynchronous. The method takes two parameters, RecordMetadata and Exception: if Exception is null, the message was sent successfully; if Exception is not null, the send failed.
Note: failed sends are retried automatically, so there is no need to retry manually inside the callback.
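For reference, the callback can also be written as an anonymous implementation of the Callback interface rather than a lambda; the method invoked is onCompletion(RecordMetadata, Exception). This fragment is only a sketch and assumes the producer configured as in the full example below:
producer.send(new ProducerRecord<String, String>("second", "key", "value"), new Callback() {
    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception == null) {
            // Success: metadata carries the topic, partition and offset of the acknowledged record
            System.out.println(metadata.topic() + "-" + metadata.partition() + "-" + metadata.offset());
        } else {
            // Failure after the configured retries were exhausted
            exception.printStackTrace();
        }
    }
});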
- 1. Code
package com.buwenbuhuo.kafka.producer;

import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

/**
 * @author 卜溫不火
 * @create 2020-05-06 20:21
 * com.buwenbuhuo.kafka.producer - the name of the target package where the new class or interface will be created.
 * kafka0506 - the name of the current project.
 */
public class CustomProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop002:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 1);

        // 1. Create a producer object
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // 2. Call the send method with a callback; it runs asynchronously once the ack arrives
        for (int i = 0; i < 1000; i++) {
            producer.send(new ProducerRecord<String, String>("second", i + "", "message-" + i), (metadata, exception) -> {
                if (exception == null) {
                    System.out.println("success " + metadata.topic() + "-" + metadata.partition() + "-" + metadata.offset());
                } else {
                    exception.printStackTrace();
                }
            });
        }

        // 3. Close the producer (flushes any records still in the accumulator)
        producer.close();
    }
}
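As noted earlier, ordering is only guaranteed within a partition. If a group of related messages must stay together, one option (a sketch, not part of the original post) is to route them to the same partition, either by giving them the same key or by specifying the partition explicitly via the ProducerRecord(topic, partition, key, value) constructor; the key "order-1001" and partition 0 below are made-up examples:
// Assuming the same producer as above: both records land in partition 0 of topic second
producer.send(new ProducerRecord<String, String>("second", 0, "order-1001", "step-1"));
producer.send(new ProducerRecord<String, String>("second", 0, "order-1001", "step-2"));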
- 2. Results
- 3. Rough flow diagram
4. Synchronous Send API
Synchronous sending means that after a message is sent, the current thread blocks until the ack is returned.
Since the send method returns a Future object, we can achieve the effect of synchronous sending simply by calling the Future object's get method.
- 1. Code:
package com.buwenbuhuo.kafka.producer;

import org.apache.kafka.clients.producer.*;

import java.util.Properties;
import java.util.concurrent.ExecutionException;

/**
 * @author 卜溫不火
 * @create 2020-05-06 20:21
 * com.buwenbuhuo.kafka.producer - the name of the target package where the new class or interface will be created.
 * kafka0506 - the name of the current project.
 */
public class CustomProducer {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop002:9092"); // Kafka cluster broker list
        props.put("acks", "all");
        props.put("retries", 1);                          // number of retries
        props.put("batch.size", 16384);                   // batch size in bytes
        props.put("linger.ms", 1);                        // max wait time before sending a batch
        props.put("buffer.memory", 33554432);             // RecordAccumulator buffer size
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 100; i++) {
            // get() blocks the main thread until the ack for this record returns,
            // turning the asynchronous send into a synchronous one
            producer.send(new ProducerRecord<String, String>("second", i + "", "my name is bu wen bu huo -" + i)).get();
        }
        producer.close();
    }
}
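A common refinement of this synchronous pattern (an addition here, not from the original post) is to bound how long each send may block by using the timed overload of Future.get, so a broker problem cannot hang the loop forever. The fragment below assumes the same loop and producer as above, plus imports of java.util.concurrent.TimeUnit and java.util.concurrent.TimeoutException; the 10-second limit is an arbitrary example:
try {
    // Block at most 10 seconds waiting for the ack of this record
    RecordMetadata metadata = producer
            .send(new ProducerRecord<String, String>("second", i + "", "my name is bu wen bu huo -" + i))
            .get(10, TimeUnit.SECONDS);
    System.out.println("acked at offset " + metadata.offset());
} catch (TimeoutException e) {
    // The ack did not arrive in time; decide whether to retry, log, or abort
    e.printStackTrace();
}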
- 2. Results
That's all for this share.
If you found it helpful, give it a like after reading and make it a habit! ^ _ ^
Writing this up is not easy, and your support is what keeps me going. After liking, don't forget to follow me!
This article is excerpted from: https://blog.51cto.com/u