Spring For Apache Kafka
Authors
Gary Russell, Artem Bilan, Biju Kunjummen
2.0.0.RELEASE
Table of Contents
1. Preface
2. What's New?
2.1. What's New in 2.0 Since 1.3
2.1.1. Spring Framework and Java Versions
2.1.2. @KafkaListener Changes
2.1.3. Message Listeners
2.1.4. ConsumerAwareRebalanceListener
2.1.5. @EmbeddedKafka Annotation
3. Introduction
3.1. Quick Tour for the Impatient
3.1.1. Introduction
Compatibility
4. Reference
4.1. Using Spring for Apache Kafka
4.1.1. Configuring Topics
4.1.2. Sending Messages
KafkaTemplate
Transactions
5. Spring Integration
5.1. Spring Integration for Apache Kafka
5.1.1. Introduction
5.1.2. Outbound Channel Adapter
5.1.3. Message Driven Channel Adapter
5.1.4. Message Conversion
5.1.5. What's New in Spring Integration for Apache Kafka
2.1.x
2.2.x
2.3.x
3.0.x
6. Other Resources
A. Change History
1. Preface
The Spring for Apache Kafka project applies core Spring concepts to the development of Kafka-
based messaging solutions. We provide a "template" as a high-level abstraction for sending
messages. We also provide support for Message-driven POJOs.
2. What's New?
2.1.4 ConsumerAwareRebalanceListener
Rebalance listeners can now access the Consumer object during rebalance notifications. See
the section called Rebalance Listeners for more information.
3. Introduction
This first part of the reference documentation is a high-level overview of Spring for Apache
Kafka, the underlying concepts, and some code snippets that will get you up and running as
quickly as possible.
3.1.1 Introduction
This is the five-minute tour to get started with Spring Kafka.

Prerequisites: install and run Apache Kafka. Then grab the spring-kafka JAR and all of its
dependencies; the easiest way to do that is to declare a dependency in your build tool, e.g. for
Maven:
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>2.0.0.RELEASE</version>
</dependency>
or Gradle:

compile 'org.springframework.kafka:spring-kafka:2.0.0.RELEASE'
Compatibility
@Test
public void testAutoCommit() throws Exception {
    logger.info("Start auto");
    ContainerProperties containerProps = new ContainerProperties("topic1", "topic2");
    final CountDownLatch latch = new CountDownLatch(4);
    containerProps.setMessageListener(new MessageListener<Integer, String>() {

        @Override
        public void onMessage(ConsumerRecord<Integer, String> message) {
            logger.info("received: " + message);
            latch.countDown();
        }

    });
    KafkaMessageListenerContainer<Integer, String> container = createContainer(containerProps);
    container.setBeanName("testAuto");
    container.start();
    Thread.sleep(1000); // wait a bit for the container to start
    KafkaTemplate<Integer, String> template = createTemplate();
    template.setDefaultTopic(topic1);
    template.sendDefault(0, "foo");
    template.sendDefault(2, "bar");
    template.sendDefault(0, "baz");
    template.sendDefault(2, "qux");
    template.flush();
    assertTrue(latch.await(60, TimeUnit.SECONDS));
    container.stop();
    logger.info("Stop auto");
}
@Autowired
private Listener listener;
@Autowired
private KafkaTemplate<Integer, String> template;
@Test
public void testSimple() throws Exception {
template.send("annotated1", 0, "foo");
template.flush();
assertTrue(this.listener.latch1.await(10, TimeUnit.SECONDS));
}
@Configuration
@EnableKafka
public class Config {

    @Bean
    ConcurrentKafkaListenerContainerFactory<Integer, String>
                        kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }

    @Bean
    public ConsumerFactory<Integer, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, embeddedKafka.getBrokersAsString());
        ...
        return props;
    }

    @Bean
    public Listener listener() {
        return new Listener();
    }

    @Bean
    public ProducerFactory<Integer, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, embeddedKafka.getBrokersAsString());
        ...
        return props;
    }

    @Bean
    public KafkaTemplate<Integer, String> kafkaTemplate() {
        return new KafkaTemplate<Integer, String>(producerFactory());
    }

}
Application.
@SpringBootApplication
public class Application implements CommandLineRunner {

    @Autowired
    private KafkaTemplate<String, String> template;

    private final CountDownLatch latch = new CountDownLatch(3);

    @Override
    public void run(String... args) throws Exception {
        this.template.send("myTopic", "foo1");
        this.template.send("myTopic", "foo2");
        this.template.send("myTopic", "foo3");
        latch.await(60, TimeUnit.SECONDS);
        logger.info("All received");
    }

    @KafkaListener(topics = "myTopic")
    public void listen(ConsumerRecord<?, ?> cr) throws Exception {
        logger.info(cr.toString());
        latch.countDown();
    }

}
Boot takes care of most of the configuration; when using a local broker, the only properties we
need are:
application.properties.
spring.kafka.consumer.group-id=foo
spring.kafka.consumer.auto-offset-reset=earliest
We need the first property because we are using group management to assign topic partitions to
consumers, so we need a group; we need the second to ensure the new consumer group gets the
messages we just sent, because the container might start after the sends have completed.
4. Reference
This part of the reference documentation details the various components that comprise Spring
for Apache Kafka. The main chapter covers the core classes to develop a Kafka application with
Spring.
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
            StringUtils.arrayToCommaDelimitedString(kafkaEmbedded().getBrokerAddresses()));
    return new KafkaAdmin(configs);
}
@Bean
public NewTopic topic1() {
    return new NewTopic("foo", 10, (short) 2);
}

@Bean
public NewTopic topic2() {
    return new NewTopic("bar", 10, (short) 2);
}
By default, if the broker is not available, a message will be logged, but the context will continue
to load. You can programmatically invoke the admin's initialize() method to try again later.
If you wish this condition to be considered fatal, set the admin's fatalIfBrokerNotAvailable
property to true and the context will fail to initialize.

The admin does not alter existing topics; it will log (INFO) if the number of
partitions doesn't match.
KafkaTemplate
Overview
The KafkaTemplate wraps a producer and provides convenience methods to send data to
Kafka topics:

ListenableFuture<SendResult<K, V>> sendDefault(V data);

ListenableFuture<SendResult<K, V>> sendDefault(K key, V data);

ListenableFuture<SendResult<K, V>> sendDefault(Integer partition, K key, V data);

ListenableFuture<SendResult<K, V>> sendDefault(Integer partition, Long timestamp, K key, V data);

ListenableFuture<SendResult<K, V>> send(String topic, V data);

ListenableFuture<SendResult<K, V>> send(String topic, K key, V data);

ListenableFuture<SendResult<K, V>> send(String topic, Integer partition, K key, V data);

ListenableFuture<SendResult<K, V>> send(String topic, Integer partition, Long timestamp, K key, V data);

ListenableFuture<SendResult<K, V>> send(ProducerRecord<K, V> record);

ListenableFuture<SendResult<K, V>> send(Message<?> message);

Map<MetricName, ? extends Metric> metrics();

List<PartitionInfo> partitionsFor(String topic);

<T> T execute(ProducerCallback<K, V, T> callback);

// Flush the producer.

void flush();
The sendDefault API requires that a default topic has been provided to the template.
The APIs that take a timestamp parameter will store that timestamp in the record. How the
user-provided timestamp is stored depends on the timestamp type configured on the Kafka topic.
If the topic is configured to use CREATE_TIME then the user-specified timestamp will be
recorded (or generated if not specified). If the topic is configured to use LOG_APPEND_TIME
then the user-specified timestamp will be ignored and the broker will add in the local broker time.
The metrics and partitionsFor methods simply delegate to the same methods on the
underlying Producer . The execute method provides direct access to the underlying
Producer .
To use the template, configure a producer factory and provide it in the template's constructor:
@Bean
public ProducerFactory<Integer, String> producerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    // See https://kafka.apache.org/documentation/#producerconfigs for more properties
    return props;
}

@Bean
public KafkaTemplate<Integer, String> kafkaTemplate() {
    return new KafkaTemplate<Integer, String>(producerFactory());
}
When using the methods with a Message<?> parameter, topic, partition and key information is
provided in a message header:
KafkaHeaders.TOPIC
KafkaHeaders.PARTITION_ID
KafkaHeaders.MESSAGE_KEY
KafkaHeaders.TIMESTAMP
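For example, a minimal sketch of sending a Message<?> with these headers (the template
variable is an assumed, already-configured KafkaTemplate<String, String>):

Message<String> message = MessageBuilder.withPayload("someData")
        .setHeader(KafkaHeaders.TOPIC, "someTopic")
        .setHeader(KafkaHeaders.MESSAGE_KEY, "someKey")
        .build();
template.send(message);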
Optionally, you can configure the KafkaTemplate with a ProducerListener to get an async
callback with the results of the send (success or failure) instead of waiting for the Future to
complete.
public interface ProducerListener<K, V> {

    void onSuccess(String topic, Integer partition, K key, V value, RecordMetadata recordMetadata);

    void onError(String topic, Integer partition, K key, V value, Exception exception);

    boolean isInterestedInSuccess();

}
For convenience, the abstract ProducerListenerAdapter is provided in case you only want
to implement one of the methods. It returns false for isInterestedInSuccess .
Notice that the send methods return a ListenableFuture<SendResult> . You can register a
callback with the future to receive the result of the send asynchronously:
ListenableFuture<SendResult<Integer, String>> future = template.send("someTopic", "someData");
future.addCallback(new ListenableFutureCallback<SendResult<Integer, String>>() {

    @Override
    public void onSuccess(SendResult<Integer, String> result) {
        ...
    }

    @Override
    public void onFailure(Throwable ex) {
        ...
    }

});
The SendResult has two properties, a ProducerRecord and RecordMetadata ; refer to the
Kafka API documentation for information about those objects.
If you wish to block the sending thread, to await the result, you can invoke the future's get()
method. You may wish to invoke flush() before waiting or, for convenience, the template has
a constructor with an autoFlush parameter which will cause the template to flush() on
each send. Note, however, that flushing will likely significantly reduce performance.
Examples
Non Blocking (Async).
public void sendToKafka(final MyOutputData data) {
    final ProducerRecord<String, String> record = createRecord(data);

    ListenableFuture<SendResult<Integer, String>> future = template.send(record);
    future.addCallback(new ListenableFutureCallback<SendResult<Integer, String>>() {

        @Override
        public void onSuccess(SendResult<Integer, String> result) {
            handleSuccess(data);
        }

        @Override
        public void onFailure(Throwable ex) {
            handleFailure(data, record, ex);
        }

    });
}
Blocking (Sync).
public void sendToKafka(final MyOutputData data) {
    final ProducerRecord<String, String> record = createRecord(data);

    try {
        template.send(record).get(10, TimeUnit.SECONDS);
        handleSuccess(data);
    }
    catch (ExecutionException e) {
        handleFailure(data, record, e.getCause());
    }
    catch (TimeoutException | InterruptedException e) {
        handleFailure(data, record, e);
    }
}
Transactions
The 0.11.0.0 client library added support for transactions. Spring for Apache Kafka adds support
in several ways.
KafkaTransactionManager
The KafkaTransactionManager is an implementation of Spring Framework's
PlatformTransactionManager ; it is provided with a reference to the producer factory in its
constructor. If you provide a custom producer factory, it must support transactions - see
ProducerFactory.transactionCapable() .

You can use the KafkaTransactionManager with normal Spring transaction support
( @Transactional , TransactionTemplate etc). If a transaction is active, any
KafkaTemplate operations performed within the scope of the transaction will use the
transaction's Producer . The manager will commit or rollback the transaction depending on
success or failure. The KafkaTemplate must be configured to use the same
ProducerFactory as the transaction manager.
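For example, a minimal sketch of wiring the transaction manager (this assumes a
producerFactory() bean, as shown earlier, configured with a transactionIdPrefix so that
it is transaction-capable):

@Bean
public KafkaTransactionManager<Integer, String> transactionManager() {
    // the producer factory must be transaction-capable
    return new KafkaTransactionManager<>(producerFactory());
}

// any sends performed here use the transaction's producer and are
// committed or rolled back with the surrounding transaction
@Transactional
public void sendBoth(KafkaTemplate<Integer, String> template) {
    template.send("topicA", "first");
    template.send("topicB", "second");
}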
Transaction Synchronization
If you need to synchronize a Kafka transaction with some other transaction, simply configure the
listener container with the appropriate transaction manager (one that supports synchronization,
such as the DataSourceTransactionManager ). Any operations performed on a transactional
KafkaTemplate from the listener will participate in a single transaction. The Kafka transaction
will be committed (or rolled back) immediately after the controlling transaction. Before exiting
the listener, you should invoke one of the template's sendOffsetsToTransaction methods.
For convenience, the listener container binds its consumer group id to the thread so, generally,
you can use the first method:
For example:
@Bean
KafkaMessageListenerContainer container(ConsumerFactory<String, String> cf,
        final KafkaTemplate template) {
    ContainerProperties props = new ContainerProperties("foo");
    props.setGroupId("group");
    props.setTransactionManager(new SomeOtherTransactionManager());
    ...
    props.setMessageListener((MessageListener<String, String>) m -> {
        template.send("foo", "bar");
        template.send("baz", "qux");
        template.sendOffsetsToTransaction(
            Collections.singletonMap(new TopicPartition(m.topic(), m.partition()),
                new OffsetAndMetadata(m.offset() + 1)));
    });
    return new KafkaMessageListenerContainer<>(cf, props);
}
The offset to be committed is one greater than the offset of the record(s)
processed by the listener.
KafkaTemplate Local Transactions

You can use the KafkaTemplate to execute a series of operations within a local transaction,
using its executeInTransaction method. The argument in the callback is the template itself
( this ). If the callback exits normally, the transaction is committed; if an exception is thrown,
the transaction is rolled back.
Message Listeners
When using a Message Listener Container you must provide a listener to receive data. There
are currently eight supported interfaces for message listeners:
https://docs.spring.io/spring-kafka/reference/htmlsingle/ 18/63
11/8/2017 Spring for Apache Kafka
MessageListener - use this for processing individual ConsumerRecord s received from the
Kafka consumer poll() operation when using auto-commit, or one of the container-managed
commit methods.

AcknowledgingMessageListener - use this for processing individual ConsumerRecord s
received from the Kafka consumer poll() operation when using one of the manual commit
methods.

ConsumerAwareMessageListener - use this for processing individual ConsumerRecord s
received from the Kafka consumer poll() operation when using auto-commit, or one of the
container-managed commit methods. Access to the Consumer object is provided.

AcknowledgingConsumerAwareMessageListener - use this for processing individual
ConsumerRecord s received from the Kafka consumer poll() operation when using one of
the manual commit methods. Access to the Consumer object is provided.

BatchMessageListener - use this for processing all ConsumerRecord s received from the
Kafka consumer poll() operation when using auto-commit, or one of the container-managed
commit methods. AckMode.RECORD is not supported when using this interface since the
listener is given the complete batch.

BatchAcknowledgingMessageListener - use this for processing all ConsumerRecord s
received from the Kafka consumer poll() operation when using one of the manual commit
methods.

BatchConsumerAwareMessageListener - use this for processing all ConsumerRecord s
received from the Kafka consumer poll() operation when using auto-commit, or one of the
container-managed commit methods. AckMode.RECORD is not supported when using this
interface since the listener is given the complete batch. Access to the Consumer object is
provided.

BatchAcknowledgingConsumerAwareMessageListener - use this for processing all
ConsumerRecord s received from the Kafka consumer poll() operation when using one of
the manual commit methods. Access to the Consumer object is provided.
Important
The Consumer object is not thread-safe; you must only invoke its methods on the
thread that calls the listener.
Two MessageListenerContainer implementations are provided:

KafkaMessageListenerContainer
ConcurrentMessageListenerContainer
KafkaMessageListenerContainer
The following constructors are available:

public KafkaMessageListenerContainer(ConsumerFactory<K, V> consumerFactory,
                    ContainerProperties containerProperties)

public KafkaMessageListenerContainer(ConsumerFactory<K, V> consumerFactory,
                    ContainerProperties containerProperties,
                    TopicPartitionInitialOffset... topicPartitions)

Each takes a ConsumerFactory and information about topics and partitions, as well as other
configuration in a ContainerProperties object. The second constructor is used by the
ConcurrentMessageListenerContainer (see below) to distribute TopicPartition s across
the consumer instances.
Refer to the JavaDocs for ContainerProperties for more information about the various
properties that can be set.
ConcurrentMessageListenerContainer
The single constructor is similar to the first KafkaListenerContainer constructor:

public ConcurrentMessageListenerContainer(ConsumerFactory<K, V> consumerFactory,
                            ContainerProperties containerProperties)

It also has a concurrency property; e.g. container.setConcurrency(3) will create 3
delegate KafkaMessageListenerContainer s.
For the first constructor, Kafka will distribute the partitions across the consumers. For the second
constructor, the ConcurrentMessageListenerContainer distributes the TopicPartition s
across the delegate KafkaMessageListenerContainer s.
If, say, 6 TopicPartition s are provided and the concurrency is 3, each container will get 2
partitions. For 5 TopicPartition s, 2 containers will get 2 partitions and the third will get 1. If
the concurrency is greater than the number of TopicPartitions , the concurrency will be
adjusted down such that each container will get one partition.
The client.id property (if set) will be appended with -n where n is the
consumer instance according to the concurrency. This is required to provide
unique names for MBeans when JMX is enabled.
Starting with version 1.3, the MessageListenerContainer provides access to the metrics
of the underlying KafkaConsumer . In the case of the ConcurrentMessageListenerContainer ,
the metrics() method returns the metrics for all the target KafkaMessageListenerContainer
instances. The metrics are grouped into the Map<MetricName, ? extends Metric> by the
client-id provided for the underlying KafkaConsumer .
Committing Offsets
Several options are provided for committing offsets. If the enable.auto.commit consumer
property is true, Kafka will auto-commit the offsets according to its configuration. If it is false, the
containers support the following AckMode s.
The consumer poll() method will return one or more ConsumerRecords ; the
MessageListener is called for each record; the following describes the action taken by the
container for each AckMode :
RECORD - commit the offset when the listener returns after processing the record.
BATCH - commit the offset when all the records returned by the poll() have been
processed.
TIME - commit the offset when all the records returned by the poll() have been
processed as long as the ackTime since the last commit has been exceeded.
https://docs.spring.io/spring-kafka/reference/htmlsingle/ 22/63
11/8/2017 Spring for Apache Kafka
COUNT - commit the offset when all the records returned by the poll() have been
processed as long as ackCount records have been received since the last commit.
COUNT_TIME - similar to TIME and COUNT but the commit is performed if either condition
is true.
MANUAL - the message listener is responsible to acknowledge() the Acknowledgment ;
after which, the same semantics as BATCH are applied.
MANUAL_IMMEDIATE - commit the offset immediately when the
Acknowledgment.acknowledge() method is called by the listener.
public interface Acknowledgment {

    void acknowledge();

}
This gives the listener control over when offsets are committed.
@KafkaListener Annotation
The @KafkaListener annotation provides a mechanism for simple POJO listeners:
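For example, a minimal sketch of a POJO listener (the id and topic names are arbitrary):

public class Listener {

    @KafkaListener(id = "foo", topics = "myTopic")
    public void listen(String data) {
        // handle the converted payload
    }

}

This mechanism requires a listener container factory, configured via @EnableKafka , as shown
in the following configuration.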
@Configuration
@EnableKafka
public class KafkaConfig {

    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>>
                        kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(3);
        factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }

    @Bean
    public ConsumerFactory<Integer, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, embeddedKafka.getBrokersAsString());
        ...
        return props;
    }

}
Notice that to set container properties, you must use the getContainerProperties() method
on the factory. It is used as a template for the actual properties injected into the container.
You can also configure POJO listeners with explicit topics and partitions (and, optionally, their
initial offsets):
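For example (a sketch using the @TopicPartition and @PartitionOffset annotations;
topic names are arbitrary):

@KafkaListener(id = "bar", topicPartitions =
        { @TopicPartition(topic = "topic1", partitions = { "0", "1" }),
          @TopicPartition(topic = "topic2", partitions = "0",
             partitionOffsets = @PartitionOffset(partition = "1", initialOffset = "100"))
        })
public void listen(ConsumerRecord<?, ?> record) {
    // handle the record
}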
Each partition can be specified in the partitions or partitionOffsets attribute, but not
both.
When using manual AckMode , the listener can also be provided with the Acknowledgment ;
this example also shows how to use a different container factory.
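For example (a sketch; kafkaManualAckListenerContainerFactory is an assumed factory
configured with a manual AckMode ):

@KafkaListener(id = "baz", topics = "myTopic",
        containerFactory = "kafkaManualAckListenerContainerFactory")
public void listen(String data, Acknowledgment ack) {
    // process the data, then acknowledge
    ack.acknowledge();
}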
Finally, metadata about the message is available from message headers; the following header
names can be used for retrieving the headers of the message:
KafkaHeaders.RECEIVED_MESSAGE_KEY
KafkaHeaders.RECEIVED_TOPIC
KafkaHeaders.RECEIVED_PARTITION_ID
KafkaHeaders.RECEIVED_TIMESTAMP
KafkaHeaders.TIMESTAMP_TYPE
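For example (a sketch showing the header parameters injected alongside the payload):

@KafkaListener(id = "qux", topics = "myTopic")
public void listen(@Payload String foo,
        @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) Integer key,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
        @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
        @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long ts) {
    // the metadata headers describe where the record came from
}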
Starting with version 1.1, @KafkaListener methods can be configured to receive the entire
batch of consumer records received from the consumer poll. To configure the listener container
factory to create batch listeners, set the batchListener property:
@Bean
public KafkaListenerContainerFactory<?> batchFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setBatchListener(true);  // <<<<<<<<<<<<<<<<<<<<<<<<<
    return factory;
}
The topic, partition, offset etc are available in headers which parallel the payloads:
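For example (a sketch; each header is a List whose positions correspond to the payload
positions):

@KafkaListener(id = "list", topics = "myTopic", containerFactory = "batchFactory")
public void listen(List<String> list,
        @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) List<Integer> keys,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
        @Header(KafkaHeaders.RECEIVED_TOPIC) List<String> topics,
        @Header(KafkaHeaders.OFFSET) List<Long> offsets) {
    // list.get(i) was read from topics.get(i) / partitions.get(i) at offsets.get(i)
}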
Alternatively, you can receive a List of Message<?> objects, with each offset and other
metadata in each message, but it must be the only parameter (aside from an optional
Acknowledgment when using manual commits) defined on the method:
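For example (a sketch):

@KafkaListener(id = "listMsg", topics = "myTopic", containerFactory = "batchFactory")
public void listen(List<Message<?>> list) {
    // each Message carries its own metadata headers
}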
You can also receive a list of ConsumerRecord<?, ?> objects, but it must be the only
parameter (aside from an optional Acknowledgment when using manual commits) defined on
the method:
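For example (a sketch):

@KafkaListener(id = "listCRs", topics = "myTopic", containerFactory = "batchFactory")
public void listen(List<ConsumerRecord<Integer, String>> list) {
    // raw consumer records; no payload conversion is performed
}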
Starting with version 2.0, the id attribute (if present) is used as the Kafka group.id property,
overriding the configured property in the consumer factory, if present. You can also set
groupId explicitly, or set idIsGroup to false, to restore the previous behavior of using the
consumer factory group.id .
@KafkaListener on a class
When using @KafkaListener at the class-level, you specify @KafkaHandler at the method
level. When messages are delivered, the converted message payload type is used to determine
which method to call.
@KafkaListener(id = "multi", topics = "myTopic")
static class MultiListenerBean {

    @KafkaHandler
    public void listen(String foo) {
        ...
    }

    @KafkaHandler
    public void listen(Integer bar) {
        ...
    }

}
Rebalance Listeners
ContainerProperties has a property consumerRebalanceListener which takes an
implementation of the Kafka client's ConsumerRebalanceListener interface. If this property is
not provided, the container will configure a simple logging listener that logs rebalance events
at INFO level. The framework also adds a sub-interface
ConsumerAwareRebalanceListener :
Notice that there are two callbacks when partitions are revoked: the first is called immediately;
the second is called after any pending offsets are committed. This is useful if you wish to
maintain offsets in some external repository; for example:
containerProperties.setConsumerRebalanceListener(new ConsumerAwareRebalanceListener() {

    @Override
    public void onPartitionsRevokedBeforeCommit(Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {
        // acknowledge any pending Acknowledgments (if using manual acks)
    }

    @Override
    public void onPartitionsRevokedAfterCommit(Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {
        // ...
            store(consumer.position(partition));
        // ...
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // ...
            consumer.seek(partition, offsetTracker.getOffset() + 1);
        // ...
    }

});
Forwarding Listener Results using @SendTo

Starting with version 2.0, if you also annotate a @KafkaListener with a @SendTo annotation
and the method invocation returns a result, the result is forwarded to the topic specified by the
@SendTo value, which can be a SpEL expression. The result of the expression evaluation must
be a String representing the topic name.
@KafkaListener(topics = "annotated21")
@SendTo("!{request.value()}") // runtime SpEL
public String replyingListener(String in) {
...
}
@KafkaListener(topics = "annotated22")
@SendTo("#{myBean.replyTopic}") // config time SpEL
public Collection<String> replyingBatchListener(List<String> in) {
...
}
@KafkaListener(id = "annotated25", topics = "annotated25")
@SendTo("annotated25reply1")
static class MultiListenerSendTo {

    @KafkaHandler
    public String foo(String in) {
        ...
    }

    @KafkaHandler
    @SendTo("!{'annotated25reply2'}")
    public String bar(@Payload(required = false) KafkaNull nul,
            @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) int key) {
        ...
    }

}
@Bean
public KafkaTemplate<String, String> myReplyingTemplate() {
    return new KafkaTemplate<String, String>(producerFactory()) {

        @Override
        public ListenableFuture<SendResult<String, String>> send(String topic, String data) {
            return super.send(topic, partitionForData(data), keyForData(data), data);
        }

        ...

    };
}
@Bean
public KafkaListenerErrorHandler voidSendToErrorHandler() {
    return (m, e) -> {
        return ... // some information about the failure and input data
    };
}
Filtering Messages
In certain scenarios, such as rebalancing, a message may be redelivered that has already been
processed. The framework cannot know whether such a message has been processed or not,
that is an application-level function. This is known as the Idempotent Receiver pattern and
Spring Integration provides an implementation thereof.
The Spring for Apache Kafka project also provides some assistance by means of the
FilteringMessageListenerAdapter class, which can wrap your MessageListener . This
class takes an implementation of RecordFilterStrategy where you implement the filter
method to signal that a message is a duplicate and should be discarded.
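For example, a minimal sketch that discards records with null values ( myListener is an
assumed MessageListener bean; the filter returns true to discard):

FilteringMessageListenerAdapter<Integer, String> adapter =
        new FilteringMessageListenerAdapter<>(myListener,
                consumerRecord -> consumerRecord.value() == null); // true means discard
containerProperties.setMessageListener(adapter);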
Retrying Deliveries
If your listener throws an exception, the default behavior is to invoke the ErrorHandler , if
configured, or to log the exception otherwise.
The contents of the RetryContext passed into the RecoveryCallback will depend on the
type of listener. The context will always have an attribute record which is the record for which
the failure occurred. If your listener is acknowledging and/or consumer aware, additional
attributes acknowledgment and/or consumer will be available. For convenience, the
RetryingAcknowledgingMessageListenerAdapter provides static constants for these keys.
See its javadocs for more information.
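For example, a sketch of wrapping a record listener with retry ( delegateListener and
retryTemplate are assumed beans; the "record" attribute key is described above):

RetryingMessageListenerAdapter<Integer, String> adapter =
        new RetryingMessageListenerAdapter<>(delegateListener, retryTemplate,
                context -> {
                    // invoked when retries are exhausted
                    ConsumerRecord<?, ?> record =
                            (ConsumerRecord<?, ?>) context.getAttribute("record");
                    logger.error("Delivery failed for " + record);
                    return null;
                });
containerProps.setMessageListener(adapter);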
A retry adapter is not provided for any of the batch message listeners because the framework
has no knowledge of where, in a batch, the failure occurred. Users wishing retry capabilities,
when using a batch listener, are advised to use a RetryTemplate within the listener itself.
@Bean
public KafkaMessageListenerContainer<String, String> kafkaMessageListenerContainer(
        ConsumerFactory<String, String> consumerFactory) {
    ContainerProperties containerProps = new ContainerProperties("topic1", "topic2");
    ...
    containerProps.setIdleEventInterval(60000L);
    ...
    KafkaMessageListenerContainer<String, String> container =
            new KafkaMessageListenerContainer<>(consumerFactory, containerProps);
    return container;
}
@Bean
public ConcurrentKafkaListenerContainerFactory kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    ...
    factory.getContainerProperties().setIdleEventInterval(60000L);
    ...
    return factory;
}
In each of these cases, an event will be published once per minute while the container is idle.
Event Consumption
You can capture these events by implementing ApplicationListener - either a general
listener, or one narrowed to only receive this specific event. You can also use
@EventListener , introduced in Spring Framework 4.2.
The following example combines the @KafkaListener and @EventListener into a single
class. It's important to understand that the application listener will get events for all containers,
so you may need to check the listener id if you want to take specific action based on which
container is idle. You can also use the @EventListener condition for this purpose.
The event is published on the consumer thread, so it is safe to interact with the Consumer
object.
@EventListener(condition = "event.listenerId.startsWith('qux-')")
public void eventHandler(ListenerContainerIdleEvent event) {
...
}
Important

Event listeners will see events for all containers; so, in the example above, we
narrow the events received based on the listener ID. Since containers created for
the @KafkaListener support concurrency, the actual containers are named
id-n where n is a unique value for each instance to support the
concurrency. Hence we use startsWith in the condition.
Caution

If you wish to use the idle event to stop the listener container, you should
not call container.stop() on the thread that calls the listener - it will
cause delays and unnecessary log messages. Instead, you should
hand off the event to a different thread that can then stop the container.
Also, you should not stop() the container instance in the event if it is
a child container; you should stop the concurrent container instead.
Seeking to a Specific Offset

When manually assigning partitions, simply set the initial offset (if desired) in the configured
TopicPartitionInitialOffset arguments (see the section called Message Listener
Containers). You can also seek to a specific offset at any time. In order to seek, your listener
must implement ConsumerSeekAware , which has three methods: registerSeekCallback ,
onPartitionsAssigned and onIdleContainer .

The first is called when the container is started; this callback should be used when seeking at
some arbitrary time after initialization. You should save a reference to the callback; if you are
using the same listener in multiple containers (or in a
ConcurrentMessageListenerContainer ) you should store the callback in a ThreadLocal
or some other structure keyed by the listener Thread .
When using group management, the second method is called when assignments change. You
can use this method, for example, for setting initial offsets for the partitions, by calling the
callback; you must use the callback argument, not the one passed into
registerSeekCallback . This method will never be called if you explicitly assign partitions
yourself; use the TopicPartitionInitialOffset in that case.
You can also perform seek operations from onIdleContainer() when an idle container is
detected; see the section called Detecting Idle Asynchronous Consumers for how to enable
idle container detection.
To arbitrarily seek at runtime, use the callback reference from the registerSeekCallback for
the appropriate thread.
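For example, a sketch of a listener implementing ConsumerSeekAware (the offset logic here is
illustrative only):

public class MyListener implements ConsumerSeekAware, MessageListener<String, String> {

    private final ThreadLocal<ConsumerSeekCallback> seekCallBack = new ThreadLocal<>();

    @Override
    public void registerSeekCallback(ConsumerSeekCallback callback) {
        // save the callback for arbitrary seeks from this listener thread
        this.seekCallBack.set(callback);
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
            ConsumerSeekCallback callback) {
        // e.g. rewind each newly assigned partition to the beginning
        assignments.forEach((tp, offset) -> callback.seek(tp.topic(), tp.partition(), 0));
    }

    @Override
    public void onIdleContainer(Map<TopicPartition, Long> assignments,
            ConsumerSeekCallback callback) {
        // no-op; seeks could also be performed here when the container is idle
    }

    @Override
    public void onMessage(ConsumerRecord<String, String> record) {
        // process the record; this.seekCallBack.get() can be used to seek at runtime
    }

}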
Serialization/Deserialization and Message Conversion

Apache Kafka provides a high-level API for serializing/deserializing record values as well as
their keys, via the org.apache.kafka.common.serialization.Serializer<T> and
Deserializer<T> abstractions, with some built-in implementations. You can specify simple
(de)serializer classes using producer and/or consumer configuration properties, e.g.:

props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
...
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

For more complex or particular cases, the KafkaConsumer , and therefore KafkaProducer ,
provides overloaded constructors to accept (De)Serializer instances for keys and/or
values , respectively.
Although the Serializer / Deserializer API is quite simple and flexible from the low-level
Kafka Consumer and Producer perspective, you might need more flexibility at the Spring
Messaging level, either when using @KafkaListener or Spring Integration. To easily convert
to/from org.springframework.messaging.Message , Spring for Apache Kafka provides a
MessageConverter abstraction with the MessagingMessageConverter implementation and
its StringJsonMessageConverter customization. The MessageConverter can be injected
into a KafkaTemplate instance directly, or via the AbstractKafkaListenerContainerFactory
bean definition for the @KafkaListener.containerFactory() property:
@Bean
public KafkaListenerContainerFactory<?> kafkaJsonListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setMessageConverter(new StringJsonMessageConverter());
    return factory;
}
...
@KafkaListener(topics = "jsonData",
containerFactory = "kafkaJsonListenerContainerFactory")
public void jsonListener(Foo foo) {
...
}
When using a @KafkaListener , the parameter type is provided to the message converter to
assist with the conversion.
https://docs.spring.io/spring-kafka/reference/htmlsingle/ 37/63
11/8/2017 Spring for Apache Kafka
This type inference can only be achieved when the @KafkaListener annotation
is declared at the method level. With a class-level @KafkaListener , the payload
type is used to select which @KafkaHandler method to invoke so it must already
have been converted before the method can be chosen.
Message Headers

Apache Kafka 0.11 introduced record headers; each Header has a key and a value:

public interface Header {

    String key();

    byte[] value();

}
The KafkaHeaderMapper strategy is provided to map header entries between Kafka Headers
and MessageHeaders :
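A sketch of the strategy's shape (method names from the version 2.0 API):

public interface KafkaHeaderMapper {

    void fromHeaders(MessageHeaders headers, Headers target);

    void toHeaders(Headers source, Map<String, Object> target);

}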
The DefaultKafkaHeaderMapper maps the key to the MessageHeaders header name and,
in order to support rich header types, for outbound messages, JSON conversion is performed. A
"special" header, with key spring_json_header_types , contains a JSON map of
<key>:<type> . This header is used on the inbound side to provide appropriate conversion of
each header value to the original type.
On the inbound side, all Kafka Header s are mapped to MessageHeaders . On the outbound
side, by default, all MessageHeaders are mapped except id , timestamp , and the headers
that map to ConsumerRecord properties.
You can specify which headers are to be mapped for outbound messages, by providing patterns
to the mapper.
public DefaultKafkaHeaderMapper() {
    ...
}

public DefaultKafkaHeaderMapper(ObjectMapper objectMapper) {
    ...
}

public DefaultKafkaHeaderMapper(String... patterns) {
    ...
}

public DefaultKafkaHeaderMapper(ObjectMapper objectMapper, String... patterns) {
    ...
}
The first constructor will use a default Jackson ObjectMapper and map most headers, as
discussed above. The second constructor will use the provided Jackson ObjectMapper and
map most headers, as discussed above. The third constructor will use a default Jackson
ObjectMapper and map headers according to the provided patterns. The fourth constructor will
use the provided Jackson ObjectMapper and map headers according to the provided patterns.
Patterns are rather simple and can contain either a leading or trailing wildcard * , or both, e.g.
*.foo.* . Patterns can be negated with a leading ! . The first pattern that matches a header
name (whether positive or negative) wins.
When providing your own patterns, it is recommended to include !id and !timestamp since
these headers are read-only on the inbound side.
Important
With the batch converter, the converted headers are available in the
KafkaHeaders.BATCH_CONVERTED_HEADERS as a List<Map<String, Object>> where the
map in a position of the list corresponds to the data position in the payload.
If the converter has no header mapper (either because Jackson is not present, or it is explicitly
set to null ), the headers from the consumer record are provided unconverted in the
KafkaHeaders.NATIVE_HEADERS header (a Headers object, or a List<Headers> in the
case of the batch converter, where the position in the list corresponds to the data position in the
payload).
Null Payloads and Log Compaction "Tombstone" Records
To send a null payload using the KafkaTemplate , simply pass null into the value argument
of the send() methods. One exception to this is the send(Message<?> message) variant.
Since a spring-messaging Message<?> cannot have a null payload, a special payload type
KafkaNull is used and the framework will send null . For convenience, the static
KafkaNull.INSTANCE is provided.
When using a message listener container, the received ConsumerRecord will have a null
value() .
To configure the @KafkaListener to handle null payloads, you must use the @Payload
annotation with required = false ; you will usually also need the key so your application
knows which key was "deleted":
@KafkaListener(id = "multi", topics = "myTopic")
static class MultiListenerBean {

    @KafkaHandler
    public void listen(String foo) {
        ...
    }

    @KafkaHandler
    public void listen(Integer bar) {
        ...
    }

    @KafkaHandler
    public void delete(@Payload(required = false) KafkaNull nul,
            @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) int key) {
        ...
    }

}
Handling Exceptions

An ErrorHandler can be set on the container properties; it is used by all containers the
factory creates:

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>>
            kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    ...
    factory.getContainerProperties().setErrorHandler(myErrorHandler);
    ...
    return factory;
}
Starting with version 2.0, the @KafkaListener annotation has a new attribute:
errorHandler .
@FunctionalInterface
public interface KafkaListenerErrorHandler {

    Object handleError(Message<?> message, ListenerExecutionFailedException exception)
            throws Exception;

}
As you can see, you have access to the spring-messaging Message<?> object produced by the
message converter and the exception that was thrown by the listener, wrapped in a
ListenerExecutionFailedException . The error handler can throw the original or a new
exception which will be thrown to the container. Anything returned by the error handler is
ignored.
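The consumer-aware variant is a sub-interface that also receives the consumer, along the
lines of:

@FunctionalInterface
public interface ConsumerAwareListenerErrorHandler extends KafkaListenerErrorHandler {

    Object handleError(Message<?> message, ListenerExecutionFailedException exception,
            Consumer<?, ?> consumer);

}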
If your error handler implements this interface you can, for example, adjust the offsets
accordingly. For example, to reset the offset to replay the failed message, you could do
something like the following; note, however, that these are simplistic implementations and you
would probably want more checking in the error handler.
@Bean
public ConsumerAwareListenerErrorHandler listen3ErrorHandler() {
    return (m, e, c) -> {
        this.listen3Exception = e;
        MessageHeaders headers = m.getHeaders();
        c.seek(new org.apache.kafka.common.TopicPartition(
                headers.get(KafkaHeaders.RECEIVED_TOPIC, String.class),
                headers.get(KafkaHeaders.RECEIVED_PARTITION_ID, Integer.class)),
                headers.get(KafkaHeaders.OFFSET, Long.class));
        return null;
    };
}
@Bean
public ConsumerAwareListenerErrorHandler listen10ErrorHandler() {
    return (m, e, c) -> {
        this.listen10Exception = e;
        MessageHeaders headers = m.getHeaders();
        List<String> topics = headers.get(KafkaHeaders.RECEIVED_TOPIC, List.class);
        List<Integer> partitions = headers.get(KafkaHeaders.RECEIVED_PARTITION_ID, List.class);
        List<Long> offsets = headers.get(KafkaHeaders.OFFSET, List.class);
        Map<TopicPartition, Long> offsetsToReset = new HashMap<>();
        for (int i = 0; i < topics.size(); i++) {
            int index = i;
            offsetsToReset.compute(new TopicPartition(topics.get(i), partitions.get(i)),
                    (k, v) -> v == null ? offsets.get(index) : Math.min(v, offsets.get(index)));
        }
        offsetsToReset.forEach((k, v) -> c.seek(k, v));
        return null;
    };
}
This resets each topic/partition in the batch to the lowest offset in the batch.
Container-level error handlers ( ConsumerAwareErrorHandler and
ConsumerAwareBatchErrorHandler ) are also provided, for record and batch listeners
respectively.
Similar to the @KafkaListener error handlers, you can reset the offsets as needed based on
the data that failed.
Unlike the listener-level error handlers, however, you should set the container
property ackOnError to false when making adjustments; otherwise any pending
acks will be applied after your repositioning.
4.1.8 Kerberos
Starting with version 2.0, a KafkaJaasLoginModuleInitializer class has been added to
assist with Kerberos configuration. Simply add this bean, with the desired configuration, to your
application context.
@Bean
public KafkaJaasLoginModuleInitializer jaasConfig() throws IOException {
    KafkaJaasLoginModuleInitializer jaasConfig = new KafkaJaasLoginModuleInitializer();
    jaasConfig.setControlFlag("REQUIRED");
    Map<String, String> options = new HashMap<>();
    options.put("useKeyTab", "true");
    options.put("storeKey", "true");
    options.put("keyTab", "/etc/security/keytabs/kafka_client.keytab");
    options.put("principal", "kafka-client-1@EXAMPLE.COM");
    jaasConfig.setOptions(options);
    return jaasConfig;
}
4.2 Kafka Streams Support

4.2.1 Introduction

Starting with version 1.1.4, Spring for Apache Kafka provides first class support for Kafka
Streams. For using it from a Spring application, the kafka-streams jar must be present on the
classpath. It is an optional dependency of the spring-kafka project and isn't downloaded
transitively.
4.2.2 Basics
The reference Apache Kafka Streams documentation suggests this way of using the API:
// Use the builders to define the actual processing topology, e.g. to specify
// from which input topics to read, which stream operations (filter, map, etc.)
// should be called, and so on.
KStreamBuilder builder = ...;

// Use the configuration to tell your application where the Kafka cluster is,
// which serializers/deserializers to use by default, to specify security settings,
// and so on.
StreamsConfig config = ...;

KafkaStreams streams = new KafkaStreams(builder, config);

To simplify using Kafka Streams from the Spring application context perspective, Spring for
Apache Kafka provides the KStreamBuilderFactoryBean , which also manages the lifecycle
of its internal KafkaStreams instance:
@Bean
public FactoryBean<KStreamBuilder> myKStreamBuilder(StreamsConfig streamsConfig) {
    return new KStreamBuilderFactoryBean(streamsConfig);
}
@Bean
public KStream<?, ?> kStream(KStreamBuilder kStreamBuilder) {
KStream<Integer, String> stream = kStreamBuilder.stream(STREAMING_TOPIC1);
// Fluent KStream API
return stream;
}
If you would like to control the lifecycle manually (e.g. stop and start by some condition), you can
reference the KStreamBuilderFactoryBean bean directly using the factory bean ( & ) prefix.
Since KStreamBuilderFactoryBean utilizes its internal KafkaStreams instance, it is safe to
stop and restart it again - a new KafkaStreams is created on each start() . Also consider
using different KStreamBuilderFactoryBean s if you would like to control the lifecycles of
KStream instances separately.
@Bean
public KStreamBuilderFactoryBean myKStreamBuilder(StreamsConfig streamsConfig) {
return new KStreamBuilderFactoryBean(streamsConfig);
}
...
@Autowired
private KStreamBuilderFactoryBean myKStreamBuilderFactoryBean;
Or add @Qualifier for injection by name if you use an interface bean definition:
@Bean
public FactoryBean<KStreamBuilder> myKStreamBuilder(StreamsConfig streamsConfig) {
    return new KStreamBuilderFactoryBean(streamsConfig);
}
...
@Autowired
@Qualifier("&myKStreamBuilder")
private KStreamBuilderFactoryBean myKStreamBuilderFactoryBean;
4.2.5 Configuration
To configure the Kafka Streams environment, the KStreamBuilderFactoryBean requires a
Map of particular properties or a StreamsConfig instance. See Apache Kafka documentation
for all possible options.
To avoid boilerplate code for most cases, especially when you develop micro services, Spring
for Apache Kafka provides the @EnableKafkaStreams annotation, which should be placed
alongside @Configuration . All you need is to declare a StreamsConfig bean with the
defaultKafkaStreamsConfig name. A KStreamBuilder bean with the
defaultKStreamBuilder name will be declared in the application context automatically. Any
additional KStreamBuilderFactoryBean beans can be declared and used as well.
@Configuration
@EnableKafka
@EnableKafkaStreams
public static class KafkaStreamsConfiguration {

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    public StreamsConfig kStreamsConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "testStreams");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.Integer().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG, WallclockTimestampExtractor.class.getName());
        return new StreamsConfig(props);
    }

    @Bean
    public KStream<Integer, String> kStream(KStreamBuilder kStreamBuilder) {
        KStream<Integer, String> stream = kStreamBuilder.stream("streamingTopic1");
        stream
                .mapValues(String::toUpperCase)
                .groupByKey()
                .reduce((String value1, String value2) -> value1 + value2,
                        TimeWindows.of(1000),
                        "windowStore")
                .toStream()
                .map((windowedId, value) -> new KeyValue<>(windowedId.key(), value))
                .filter((i, s) -> s.length() > 40)
                .to("streamingTopic2");

        stream.print();

        return stream;
    }

}
4.3 Testing Applications

4.3.1 Introduction
The spring-kafka-test jar contains some useful utilities to assist with testing your
applications.
4.3.2 JUnit
o.s.kafka.test.utils.KafkaTestUtils provides some static methods to set up producer
and consumer properties:
/**
* Set up test properties for an {@code <Integer, String>} consumer.
* @param group the group id.
* @param autoCommit the auto commit.
* @param embeddedKafka a {@link KafkaEmbedded} instance.
* @return the properties.
*/
public static Map<String, Object> consumerProps(String group, String autoCommit,
KafkaEmbedded embeddedKafka) { ... }
/**
* Set up test properties for an {@code <Integer, String>} producer.
* @param embeddedKafka a {@link KafkaEmbedded} instance.
* @return the properties.
*/
public static Map<String, Object> senderProps(KafkaEmbedded embeddedKafka) { ... }
/**
* Create embedded Kafka brokers.
* @param count the number of brokers.
* @param controlledShutdown passed into TestUtils.createBrokerConfig.
* @param topics the topics to create (2 partitions per).
*/
public KafkaEmbedded(int count, boolean controlledShutdown, String... topics) { ... }
/**
*
* Create embedded Kafka brokers.
* @param count the number of brokers.
* @param controlledShutdown passed into TestUtils.createBrokerConfig.
* @param partitions partitions per topic.
* @param topics the topics to create.
*/
public KafkaEmbedded(int count, boolean controlledShutdown, int partitions, String... topics) { ... }
The embedded Kafka class has a utility method allowing you to consume from all the topics it
created:
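A sketch of its signature (variants also exist for consuming from specific embedded topics):

public void consumeFromAllEmbeddedTopics(Consumer<?, ?> consumer) throws Exception { ... }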
The KafkaTestUtils has some utility methods to fetch results from the consumer:
/**
* Poll the consumer, expecting a single record for the specified topic.
* @param consumer the consumer.
* @param topic the topic.
* @return the record.
* @throws org.junit.ComparisonFailure if exactly one record is not received.
*/
public static <K, V> ConsumerRecord<K, V> getSingleRecord(Consumer<K, V> consumer, String topic) { ... }
/**
* Poll the consumer for records.
* @param consumer the consumer.
* @return the records.
*/
public static <K, V> ConsumerRecords<K, V> getRecords(Consumer<K, V> consumer) { ... }
Usage:
...
template.sendDefault(0, 2, "bar");
ConsumerRecord<Integer, String> received = KafkaTestUtils.getSingleRecord(consumer, "topic");
...
4.3.3 @EmbeddedKafka Annotation

@RunWith(SpringRunner.class)
@DirtiesContext
@EmbeddedKafka(partitions = 1,
        topics = {
                KafkaStreamsTests.STREAMING_TOPIC1,
                KafkaStreamsTests.STREAMING_TOPIC2 })
public class KafkaStreamsTests {

    @Autowired
    private KafkaEmbedded embeddedKafka;

    @Test
    public void someTest() throws Exception {
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("testGroup", "true",
                this.embeddedKafka);
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        ConsumerFactory<Integer, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
        Consumer<Integer, String> consumer = cf.createConsumer();
        this.embeddedKafka.consumeFromAnEmbeddedTopic(consumer, KafkaStreamsTests.STREAMING_TOPIC2);
        ConsumerRecords<Integer, String> replies = KafkaTestUtils.getRecords(consumer);
        assertThat(replies.count()).isGreaterThanOrEqualTo(1);
    }
    @Configuration
    @EnableKafkaStreams
    public static class KafkaStreamsConfiguration {

        @Value("${" + KafkaEmbedded.SPRING_EMBEDDED_KAFKA_BROKERS + "}")
        private String brokerAddresses;

        @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
        public StreamsConfig kStreamsConfigs() {
            Map<String, Object> props = new HashMap<>();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "testStreams");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, this.brokerAddresses);
            return new StreamsConfig(props);
        }

    }

}
4.3.4 Hamcrest Matchers

The o.s.kafka.test.hamcrest.KafkaMatchers provides the following matchers:

/**
 * @param key the key
 * @param <K> the type.
 * @return a Matcher that matches the key in a consumer record.
 */
public static <K> Matcher<ConsumerRecord<K, ?>> hasKey(K key) { ... }
/**
* @param value the value.
* @param <V> the type.
* @return a Matcher that matches the value in a consumer record.
*/
public static <V> Matcher<ConsumerRecord<?, V>> hasValue(V value) { ... }
/**
 * @param partition the partition.
 * @return a Matcher that matches the partition in a consumer record.
 */
public static Matcher<ConsumerRecord<?, ?>> hasPartition(int partition) { ... }
/**
 * Matcher testing the timestamp of a {@link ConsumerRecord} assuming the topic has been set
 * with {@link org.apache.kafka.common.record.TimestampType#CREATE_TIME CreateTime}.
 * @param ts timestamp of the consumer record.
 * @return a Matcher that matches the timestamp in a consumer record.
 */
public static Matcher<ConsumerRecord<?, ?>> hasTimestamp(long ts) {
    return hasTimestamp(TimestampType.CREATE_TIME, ts);
}

/**
 * Matcher testing the timestamp of a {@link ConsumerRecord}
 * @param type timestamp type of the record
 * @param ts timestamp of the consumer record.
 * @return a Matcher that matches the timestamp in a consumer record.
 */
public static Matcher<ConsumerRecord<?, ?>> hasTimestamp(TimestampType type, long ts) {
    return new ConsumerRecordTimestampMatcher(type, ts);
}
4.3.5 AssertJ Conditions

The o.s.kafka.test.assertj.KafkaConditions provides the following conditions:

/**
 * @param key the key
 * @param <K> the type.
 * @return a Condition that matches the key in a consumer record.
 */
public static <K> Condition<ConsumerRecord<K, ?>> key(K key) { ... }
/**
* @param value the value.
* @param <V> the type.
* @return a Condition that matches the value in a consumer record.
*/
public static <V> Condition<ConsumerRecord<?, V>> value(V value) { ... }
/**
* @param partition the partition.
* @return a Condition that matches the partition in a consumer record.
*/
public static Condition<ConsumerRecord<?, ?>> partition(int partition) { ... }
/**
 * @param value the timestamp.
 * @return a Condition that matches the timestamp value in a consumer record.
 */
public static Condition<ConsumerRecord<?, ?>> timestamp(long value) {
    return new ConsumerRecordTimestampCondition(TimestampType.CREATE_TIME, value);
}
/**
* @param type the type of timestamp
* @param value the timestamp.
* @return a Condition that matches the timestamp value in a consumer record.
*/
public static Condition<ConsumerRecord<?, ?>> timestamp(TimestampType type, long
return new ConsumerRecordTimestampCondition(type, value);
}
4.3.6 Example
Putting it all together:
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, TEMPLATE_TOPIC);
@Test
public void testTemplate() throws Exception {
    Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("testT", "false",
            embeddedKafka);
    DefaultKafkaConsumerFactory<Integer, String> cf =
            new DefaultKafkaConsumerFactory<Integer, String>(consumerProps);
    ContainerProperties containerProperties = new ContainerProperties(TEMPLATE_TOPIC);
    KafkaMessageListenerContainer<Integer, String> container =
            new KafkaMessageListenerContainer<>(cf, containerProperties);
    final BlockingQueue<ConsumerRecord<Integer, String>> records = new LinkedBlockingQueue<>();
    container.setupMessageListener(new MessageListener<Integer, String>() {

        @Override
        public void onMessage(ConsumerRecord<Integer, String> record) {
            System.out.println(record);
            records.add(record);
        }

    });
    container.setBeanName("templateTests");
    container.start();
    ContainerTestUtils.waitForAssignment(container, embeddedKafka.getPartitionsPerTopic());
    Map<String, Object> senderProps =
            KafkaTestUtils.senderProps(embeddedKafka.getBrokersAsString());
    ProducerFactory<Integer, String> pf =
            new DefaultKafkaProducerFactory<Integer, String>(senderProps);
    KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf);
    template.setDefaultTopic(TEMPLATE_TOPIC);
    template.sendDefault("foo");
    assertThat(records.poll(10, TimeUnit.SECONDS), hasValue("foo"));
    template.sendDefault(0, 2, "bar");
    ConsumerRecord<Integer, String> received = records.poll(10, TimeUnit.SECONDS);
    assertThat(received, hasKey(2));
    assertThat(received, hasPartition(0));
    assertThat(received, hasValue("bar"));
    template.send(TEMPLATE_TOPIC, 0, 2, "baz");
    received = records.poll(10, TimeUnit.SECONDS);
    assertThat(received, hasKey(2));
    assertThat(received, hasPartition(0));
    assertThat(received, hasValue("baz"));
}
The above uses the Hamcrest matchers; with AssertJ, the final part looks like this:
    ...
    assertThat(records.poll(10, TimeUnit.SECONDS)).has(value("foo"));
    template.sendDefault(0, 2, "bar");
    ConsumerRecord<Integer, String> received = records.poll(10, TimeUnit.SECONDS);
    assertThat(received).has(key(2));
    assertThat(received).has(partition(0));
    assertThat(received).has(value("bar"));
    template.send(TEMPLATE_TOPIC, 0, 2, "baz");
    received = records.poll(10, TimeUnit.SECONDS);
    assertThat(received).has(key(2));
    assertThat(received).has(partition(0));
    assertThat(received).has(value("baz"));
}
}
5. Spring Integration
This part of the reference shows how to use the spring-integration-kafka module of
Spring Integration.
5.1.1 Introduction
This documentation pertains to versions 2.0.0 and above; for documentation for earlier
releases, see the 1.3.x README.
Spring Integration Kafka is now based on the Spring for Apache Kafka project. It provides the
following components: the Outbound Channel Adapter and the Message-Driven Channel
Adapter, described in the following sections.
5.1.2 Outbound Channel Adapter

The Outbound channel adapter is used to publish messages from a Spring Integration channel
to Kafka topics. The target topic and partition for publishing the message can be customized
through the kafka_topic and kafka_partitionId headers, respectively.

Important

If the adapter is configured with a topic or message key (either with a constant or
expression), those are used and the corresponding header is ignored. If you wish the header to
override the configuration, you need to configure it in an expression, such as:

topic-expression="headers['topic'] != null ? headers['topic'] : 'myTopic'"
Here is an example of how the Kafka outbound channel adapter is configured with XML:
<int-kafka:outbound-channel-adapter id="kafkaOutboundChannelAdapter"
kafka-template="template"
auto-startup="false"
channel="inputToKafka"
topic="foo"
sync="false"
message-key-expression="'bar'"
send-failure-channel="failures"
send-success-channel="successes"
error-message-strategy="ems"
partition-id-expression="2">
</int-kafka:outbound-channel-adapter>
As you can see, the adapter requires a KafkaTemplate which, in turn, requires a suitably
configured ProducerFactory . Here is the equivalent configured with Java:
@Bean
@ServiceActivator(inputChannel = "toKafka")
public MessageHandler handler() throws Exception {
    KafkaProducerMessageHandler<String, String> handler =
            new KafkaProducerMessageHandler<>(kafkaTemplate());
    handler.setTopicExpression(new LiteralExpression("someTopic"));
    handler.setMessageKeyExpression(new LiteralExpression("someKey"));
    handler.setFailureChannel(failures());
    return handler;
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.brokerAddress);
    // set more properties
    return new DefaultKafkaProducerFactory<>(props);
}
5.1.3 Message Driven Channel Adapter

Starting with spring-integration-kafka version 2.1, the mode attribute is available ( record or
batch , default record ). For record mode, each message payload is converted from a
single ConsumerRecord ; for mode batch the payload is a list of objects which are converted
from all the ConsumerRecord s returned by the consumer poll. As with the batched
@KafkaListener , the KafkaHeaders.RECEIVED_MESSAGE_KEY ,
KafkaHeaders.RECEIVED_PARTITION_ID , KafkaHeaders.RECEIVED_TOPIC and
KafkaHeaders.OFFSET headers are also lists, with positions corresponding to the position in
the payload.
<int-kafka:message-driven-channel-adapter
id="kafkaListener"
listener-container="container1"
auto-startup="false"
phase="100"
send-timeout="5000"
mode="record"
retry-template="template"
recovery-callback="callback"
error-message-strategy="ems"
channel="someChannel"
error-channel="errorChannel" />
@Bean
public KafkaMessageDrivenChannelAdapter<String, String>
            adapter(KafkaMessageListenerContainer<String, String> container) {
    KafkaMessageDrivenChannelAdapter<String, String> kafkaMessageDrivenChannelAdapter =
            new KafkaMessageDrivenChannelAdapter<>(container, ListenerMode.record);
    kafkaMessageDrivenChannelAdapter.setOutputChannel(received());
    return kafkaMessageDrivenChannelAdapter;
}

@Bean
public KafkaMessageListenerContainer<String, String> container() throws Exception {
    ContainerProperties properties = new ContainerProperties(this.topic);
    // set more properties
    return new KafkaMessageListenerContainer<>(consumerFactory(), properties);
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, this.brokerAddress);
    // set more properties
    return new DefaultKafkaConsumerFactory<>(props);
}
Received messages will have certain headers populated. Refer to the KafkaHeaders class for
more information.
Important
The Consumer object (in the kafka_consumer header) is not thread-safe; you
must only invoke its methods on the thread that calls the listener within the
adapter; if you hand off the message to another thread, you must not call its
methods.
When a retry-template is provided, delivery failures will be retried according to its retry
policy. An error-channel is not allowed in this case. The recovery-callback can be used
to handle the error when retries are exhausted. In most cases, this will be an
ErrorMessageSendingRecoverer which will send the ErrorMessage to a channel.
5.1.4 Message Conversion

A StringJsonMessageConverter is provided; see the earlier section on
serialization/deserialization and message conversion for more information.

When using this converter with a message-driven channel adapter, you can specify the type to
which you want the incoming payload to be converted. This is achieved by setting the
payload-type attribute ( payloadType property) on the adapter.
<int-kafka:message-driven-channel-adapter
id="kafkaListener"
listener-container="container1"
auto-startup="false"
phase="100"
send-timeout="5000"
channel="nullChannel"
message-converter="messageConverter"
payload-type="com.example.Foo"
error-channel="errorChannel" />
<bean id="messageConverter"
class="org.springframework.kafka.support.converter.MessagingMessageConverter"
@Bean
public KafkaMessageDrivenChannelAdapter<String, String>
            adapter(KafkaMessageListenerContainer<String, String> container) {
    KafkaMessageDrivenChannelAdapter<String, String> kafkaMessageDrivenChannelAdapter =
            new KafkaMessageDrivenChannelAdapter<>(container, ListenerMode.record);
    kafkaMessageDrivenChannelAdapter.setOutputChannel(received());
    kafkaMessageDrivenChannelAdapter.setMessageConverter(converter());
    kafkaMessageDrivenChannelAdapter.setPayloadType(Foo.class);
    return kafkaMessageDrivenChannelAdapter;
}
5.1.5 What's New in Spring Integration for Apache Kafka

2.1.x
The 2.1.x branch introduced the following changes:
2.2.x
The 2.2.x branch introduced the following changes:
2.3.x
The 2.3.x branch introduced the following changes:
Update to spring-kafka 1.3.x, including support for transactions and header mapping
provided by kafka-clients 0.11.0.0

Support for record timestamps
3.0.x
6. Other Resources
In addition to this reference documentation, there exist a number of other resources that may
help you learn about Spring and Apache Kafka.