Connector Error Handling Properties
Overview
Use the following configuration settings to specify how the MongoDB Kafka sink connector handles errors and to configure the dead letter queue.
For a list of sink connector configuration settings organized by category, see the guide on Sink Connector Configuration Properties.
Settings
| Name | Description |
| --- | --- |
| mongo.errors.tolerance | Type: string<br>Description: Whether to continue processing messages if the connector encounters an error. Allows the connector to override the errors.tolerance Kafka cluster setting. When set to "none", the connector reports any error and blocks further processing of the rest of the messages. When set to "all", the connector ignores any problematic messages. When set to "data", the connector tolerates only data errors and fails on all other errors. To learn more about error handling strategies, see the Handle Errors page. This property overrides the errors.tolerance property of the Connect Framework.<br>Default: Inherits the value from the errors.tolerance setting.<br>Accepted Values: "none", "all", or "data" |
| mongo.errors.log.enable | Type: boolean<br>Description: Whether the connector should write details of errors, including failed operations, to the log file. The connector classifies errors as "tolerated" or "not tolerated" using the errors.tolerance or mongo.errors.tolerance settings. When set to true, the connector logs both "tolerated" and "not tolerated" errors. When set to false, the connector logs only "not tolerated" errors. This property overrides the errors.log.enable property of the Connect Framework.<br>Default: false<br>Accepted Values: true or false |
| errors.log.include.messages | Type: boolean<br>Description: Whether the connector should include the invalid message when logging an error. An invalid message includes data such as record keys, values, and headers.<br>Default: false<br>Accepted Values: true or false |
| errors.deadletterqueue.topic.name | Type: string<br>Description: Name of the topic to use as the dead letter queue. If blank, the connector does not send any invalid messages to the dead letter queue. To learn more about the dead letter queue, see the Dead Letter Queue Configuration Example.<br>Default: ""<br>Accepted Values: A valid Kafka topic name |
| errors.deadletterqueue.context.headers.enable | Type: boolean<br>Description: Whether the connector should include context headers when it writes messages to the dead letter queue. To learn more about the dead letter queue, see the Dead Letter Queue Configuration Example. To learn about the exceptions the connector defines and reports through context headers, see Bulk Write Exceptions.<br>Default: false<br>Accepted Values: true or false |
| errors.deadletterqueue.topic.replication.factor | Type: integer<br>Description: The number of nodes on which to replicate the dead letter queue topic. If you are running a single-node Kafka cluster, you must set this to 1. To learn more about the dead letter queue, see the Dead Letter Queue Configuration Example.<br>Default: 3<br>Accepted Values: A valid number of nodes |
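As an example, the following sketch combines these settings for a single-node development cluster. The dead letter queue topic name example.deadletterqueue is a placeholder, and the replication factor of 1 is needed only because the sketch assumes a single Kafka broker.

```properties
# Tolerate problematic messages rather than stopping on the first error
mongo.errors.tolerance=all

# Log both "tolerated" and "not tolerated" errors, including the invalid messages
mongo.errors.log.enable=true
errors.log.include.messages=true

# Route invalid messages to a dead letter queue topic and attach context headers
errors.deadletterqueue.topic.name=example.deadletterqueue
errors.deadletterqueue.context.headers.enable=true

# Required on a single-node Kafka cluster (the default replication factor is 3)
errors.deadletterqueue.topic.replication.factor=1
```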
Bulk Write Exceptions
The connector can report the following exceptions to your dead letter queue as context headers when performing bulk writes:
| Name | Description |
| --- | --- |
| WriteException | Description: Indicates that MongoDB returned a write error for the write of a SinkRecord. Message Format: The exception message reports the write error returned by MongoDB, including its error code, error message, and any additional error details. |
| WriteConcernException | Description: Indicates that MongoDB returned a write concern error for the write of a SinkRecord. Message Format: The exception message reports the write concern error returned by MongoDB, including its error code, error message, and any additional error details. |
| WriteSkippedException | Description: Informs you that MongoDB did not attempt the write of a SinkRecord because an earlier write in the same ordered bulk write operation failed. To learn how to set the connector to perform unordered bulk write operations, see the Connector Message Processing Properties page. Message Format: This exception produces no message. |
To enable bulk write exception reporting to the dead letter queue, use the following connector configuration:
errors.tolerance=all
errors.deadletterqueue.topic.name=<name of topic to use as dead letter queue>
errors.deadletterqueue.context.headers.enable=true
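To see which exceptions the connector reported for a given record, you can read the dead letter queue with a standard Kafka consumer and print each record's headers. The following Java sketch is illustrative: it assumes a broker at localhost:9092, a hypothetical consumer group named dlq-inspector, and the placeholder topic name example.deadletterqueue; it prints whatever context headers are present rather than assuming specific header keys.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.header.Header;

public class DeadLetterQueueInspector {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed local broker address; replace with your bootstrap servers.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "dlq-inspector");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // Placeholder topic name; use the value you set in
            // errors.deadletterqueue.topic.name.
            consumer.subscribe(Collections.singletonList("example.deadletterqueue"));

            // Poll once and print each errant record with its headers.
            // Context headers are present only when
            // errors.deadletterqueue.context.headers.enable=true.
            ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(10));
            for (ConsumerRecord<byte[], byte[]> record : records) {
                System.out.printf("offset=%d, value=%s%n", record.offset(),
                        record.value() == null ? null : new String(record.value()));
                for (Header header : record.headers()) {
                    System.out.printf("  header %s = %s%n", header.key(),
                            header.value() == null ? null : new String(header.value()));
                }
            }
        }
    }
}
```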
Dead Letter Queue Configuration Example
Apache Kafka version 2.6 added support for handling errant records. The Kafka connector automatically sends messages that it cannot process to the dead letter queue. Once messages are on the dead letter queue, you can inspect the errant records, update them, and resubmit them for processing.
The following is an example configuration that enables the dead letter queue topic example.deadletterqueue. This configuration specifies that the dead letter queue and log file should record invalid messages, and that the dead letter queue messages should include context headers.
mongo.errors.tolerance=all
mongo.errors.log.enable=true
errors.log.include.messages=true
errors.deadletterqueue.topic.name=example.deadletterqueue
errors.deadletterqueue.context.headers.enable=true
To learn more about dead letter queues, see Write Errors and Errant Messages to a Topic.