GH-3554: Doc for batch listener error handling with DLT

Fixes #3554

Adds documentation for how to use DLTs with batch mode listeners.
Clarifies exception classification, offset commit behavior, and deserialization
error handling patterns.

Signed-off-by: Soby Chacko <[email protected]>
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate,
            (record, ex) -> new TopicPartition(record.topic() + "-dlt", record.partition()));

    // Configure retries: 3 attempts with 1 second between each
    DefaultErrorHandler errorHandler = new DefaultErrorHandler(recoverer,
            new FixedBackOff(1000L, 2L)); // 2 retries = 3 total attempts

    factory.setCommonErrorHandler(errorHandler);
    return factory;
}
----
[[batch-listener-error-flow]]
=== How Batch Error Handling Works

When a `BatchListenerFailedException` is thrown, the `DefaultErrorHandler`:

1. **Commits offsets** for all records before the failed record.
2. **Retries** the failed record (and all subsequent records) according to the `BackOff` configuration.
3. **Publishes to the DLT** when retries are exhausted; only the failed record is sent to the DLT.
4. **Commits the failed record's offset** and redelivers the remaining records for processing.

Example flow with a batch of six records where the record at index 2 fails:

* First attempt: records 0 and 1 are processed successfully; record 2 fails.
* The container commits the offsets for records 0 and 1.
* Retry attempt 1: records 2, 3, 4, and 5 are retried.
* Retry attempt 2: records 2, 3, 4, and 5 are retried again.
* After retries are exhausted: record 2 is published to the DLT and its offset is committed.
* The container continues with records 3, 4, and 5.
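The flow above can be sketched in plain Java. This is a hypothetical simulation of the sequence (not the Spring Kafka API); `simulate`, `Outcome`, and all names in it are illustrative only, with the retry count mirroring `FixedBackOff(1000L, 2L)`:

[source, java]
----
import java.util.ArrayList;
import java.util.List;

public class BatchErrorFlowSketch {

    record Outcome(List<Integer> committed, List<Integer> deadLettered, int deliveries) {}

    // Simulates a batch where the record at failingIndex always fails.
    static Outcome simulate(int batchSize, int failingIndex, long maxRetries) {
        List<Integer> committed = new ArrayList<>();
        List<Integer> deadLettered = new ArrayList<>();

        // Step 1: records before the failure succeed; their offsets are committed.
        for (int i = 0; i < failingIndex; i++) {
            committed.add(i);
        }

        // Step 2: the failed record and everything after it are redelivered
        // maxRetries times; the failing record blocks the rest each time.
        int deliveries = 1 + (int) maxRetries;

        // Step 3: retries exhausted - only the failed record is dead-lettered,
        // and its offset is committed.
        deadLettered.add(failingIndex);
        committed.add(failingIndex);

        // Step 4: the remaining records are redelivered and processed normally.
        for (int i = failingIndex + 1; i < batchSize; i++) {
            committed.add(i);
        }
        return new Outcome(committed, deadLettered, deliveries);
    }

    public static void main(String[] args) {
        Outcome out = simulate(6, 2, 2L); // mirrors new FixedBackOff(1000L, 2L)
        System.out.println("committed=" + out.committed());     // [0, 1, 2, 3, 4, 5]
        System.out.println("deadLettered=" + out.deadLettered()); // [2]
        System.out.println("deliveries=" + out.deliveries());   // 3 total attempts
    }
}
----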
[[batch-listener-skip-retries]]
=== Skipping Retries for Specific Exceptions

By default, the `DefaultErrorHandler` retries all exceptions except fatal ones (such as `DeserializationException`, `MessageConversionException`, and so on).
To skip retries for your own exception types, configure the error handler with exception classifications.

The error handler examines the **cause** of the `BatchListenerFailedException` to determine whether it should skip retries:
[source, java]
----
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Order> batchFactory(
        // ...

public void listen(List<ConsumerRecord<String, Order>> records, Acknowledgment ack) {
    for (ConsumerRecord<String, Order> record : records) {
        try {
            process(record.value());
        }
        catch (Exception e) {
            throw new BatchListenerFailedException("Processing failed", e, record);
        }
    }
    ack.acknowledge();
}
----
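One way to add your own classifications is `addNotRetryableExceptions`, which marks exception types as fatal so matching failures skip the `BackOff` and go straight to the recoverer (the DLT). A minimal sketch, assuming a hypothetical application exception type `ValidationException`:

[source, java]
----
DefaultErrorHandler errorHandler = new DefaultErrorHandler(recoverer,
        new FixedBackOff(1000L, 2L));
// ValidationException is a hypothetical application exception; records whose
// BatchListenerFailedException cause is of this type are not retried.
errorHandler.addNotRetryableExceptions(ValidationException.class);
----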
[[batch-listener-conv-errors]]
=== Conversion Errors with Batch Error Handlers

Starting with version 2.8, batch listeners can properly handle conversion errors when using a `MessageConverter` with a `ByteArrayDeserializer`, a `BytesDeserializer`, or a `StringDeserializer`, as well as a `DefaultErrorHandler`.
When a conversion error occurs, the payload is set to `null` and a deserialization exception is added to the record headers, similar to the `ErrorHandlingDeserializer`.
The listener can access the conversion failures via the `KafkaHeaders.CONVERSION_FAILURES` header:

[source, java]
----
void listen(List<Thing> in, @Header(KafkaHeaders.CONVERSION_FAILURES) List<ConversionException> conversionFailures) {
    // ...
}
----
[[batch-listener-deser-errors]]
=== Deserialization Errors with Batch Listeners

Use the `ErrorHandlingDeserializer` to handle deserialization failures gracefully:

[source, java]
----
@Bean
public ConsumerFactory<String, Order> consumerFactory() {
    // ...

public void listen(List<ConsumerRecord<String, Order>> records) {
    for (ConsumerRecord<String, Order> record : records) {
        if (record.value() == null) {
            // Deserialization failed - throw exception to send to DLT
            // The DeadLetterPublishingRecoverer will restore the original byte[] value
            throw new BatchListenerFailedException("Deserialization failed", record);
        }
        process(record.value());
    }
}
----

When the `DeadLetterPublishingRecoverer` publishes deserialization failures to the DLT, it automatically restores the original `byte[]` value that failed to deserialize.
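The `consumerFactory()` body shown only partially above typically wraps the real deserializers with the `ErrorHandlingDeserializer` via consumer properties. A minimal sketch, assuming String keys and JSON values; the `Order` type and bootstrap address are placeholders:

[source, java]
----
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// Wrap the real deserializers so a failure yields a null value plus a
// header carrying the DeserializationException, instead of a poll error
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
// The delegates that do the actual deserialization
props.put(ErrorHandlingDeserializer.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
props.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class);
props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, Order.class);
return new DefaultKafkaConsumerFactory<>(props);
----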