
Error Handling

On this page

  • Overview
  • Handle Errors
  • Stop For All Errors
  • Tolerate All Errors
  • Tolerate Data Errors
  • Write Errors and Errant Messages to a Topic
  • Log Errors
  • Handle Errors at the Connector Level

Overview

In this guide, you can learn how to handle errors in your MongoDB Kafka sink connector. The following list shows some common scenarios that cause your sink connector to experience an error:

  • You write to a topic using Avro serialization and try to decode your messages from that topic using Protobuf deserialization

  • You use a change data capture handler on a message that does not contain change event documents

  • You apply an invalid single message transform to incoming documents

When your sink connector encounters an error, it takes two actions:

  • Handles the Error

  • Logs the Error

Handle Errors

When your connector encounters an error, it must handle it in some way. Your sink connector can respond to an error in one of the following ways:

Stop For All Errors

By default, your sink connector terminates and stops processing messages when it encounters an error. This is a good option if any error in your sink connector indicates a serious problem.

When your sink connector crashes, you must do one of the following actions and then restart your connector to resume processing messages:

  • Allow your sink connector to temporarily tolerate errors

  • Update your sink connector's configuration to allow it to process the message

  • Remove the errant message from your topic

You can have your sink connector stop when it encounters an error by either not specifying any value for the errors.tolerance option, or by adding the following to your connector configuration:

errors.tolerance=none
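For context, the option above might appear in a minimal sink connector configuration such as the following sketch. The name, topic, connection, and namespace values are placeholders for your own deployment:

```properties
# Hypothetical minimal sink connector configuration; connection
# details are placeholders for your own deployment.
name=mongo-sink
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
topics=orders
connection.uri=mongodb://localhost:27017
database=sales
collection=orders

# Fail fast: stop the connector on any error (the default behavior)
errors.tolerance=none
```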

Tolerate All Errors

You can configure your sink connector to tolerate all errors and never stop processing messages. This is a good option for getting your sink connector up and running quickly, but you run the risk of missing problems in your connector, as you do not receive any feedback if something goes wrong.

You can have your sink connector tolerate all errors by specifying the following option:

errors.tolerance=all

Warning

Ordered Bulk Writes Can Result in Skipped Messages

If you configure your connector to tolerate errors and use ordered bulk writes, you may lose data: an ordered bulk write stops at the first error, so the connector skips every operation that follows the failed one. Unordered bulk writes attempt every operation, so fewer messages are lost. To learn more about bulk write operations, see the Write Model Strategies page.
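The difference can be illustrated with a short Python sketch. This is a simplification for illustration only, not the connector's actual implementation:

```python
# Illustrative sketch: how ordered vs. unordered bulk writes differ
# when one operation in the batch fails while errors are tolerated.
def apply_bulk(ops, ordered=True):
    """Apply a list of (doc_id, should_fail) ops; return the ids written."""
    written = []
    for doc_id, should_fail in ops:
        if should_fail:
            if ordered:
                # Ordered bulk writes stop at the first error,
                # skipping every operation that follows.
                break
            # Unordered bulk writes keep going past the failure.
            continue
        written.append(doc_id)
    return written

ops = [(1, False), (2, True), (3, False), (4, False)]
print(apply_bulk(ops, ordered=True))   # [1] -- docs 3 and 4 are skipped
print(apply_bulk(ops, ordered=False))  # [1, 3, 4] -- only doc 2 is lost
```

With tolerated errors, the ordered variant silently drops the trailing messages, while the unordered variant loses only the message that actually failed.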

Tolerate Data Errors

You can configure your sink connector to tolerate only data errors and stop processing for all others. With this setting, the connector sends data errors to the dead letter queue, if one is configured.

Configure your sink connector to tolerate only data errors by specifying the following option:

errors.tolerance=data

Write Errors and Errant Messages to a Topic

You can configure your sink connector to write errors and errant messages to a topic, called a dead letter queue, for you to inspect or process further. A dead letter queue is a location in message queueing systems such as Apache Kafka where the system routes errant messages instead of crashing or ignoring the error. Dead letter queues combine the feedback of stopping the program with the durability of tolerating all errors, and are a good error handling starting point for most deployments.

You can have your sink connector route all errant messages to a dead letter queue by specifying the following options:

errors.tolerance=all
errors.deadletterqueue.topic.name=<name of topic to use as dead letter queue>

If you want to include the specific reason for the error as well as the errant message, use the following option:

errors.deadletterqueue.context.headers.enable=true
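Putting these options together, a dead letter queue configuration might look like the following sketch. The topic name orders.dlq is a placeholder, and the replication factor line assumes a single-broker development cluster:

```properties
# Route errant messages to a dead letter queue topic instead of stopping.
errors.tolerance=all
errors.deadletterqueue.topic.name=orders.dlq
# Include the failure reason as headers on each DLQ record
errors.deadletterqueue.context.headers.enable=true
# For a single-broker development cluster; the default replication factor is 3
errors.deadletterqueue.topic.replication.factor=1
```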

To learn more about dead letter queues, see Confluent's guide on Dead Letter Queues.

To view another dead letter queue configuration example, see Dead Letter Queue Configuration Example.

To learn about the exceptions your connector defines and writes as context headers to the dead letter queue, see Bulk Write Exceptions.

Log Errors

You can record tolerated and untolerated errors to a log file. The following options control how Kafka Connect logs errors:

The following default option makes Kafka Connect write only untolerated errors to its application log:

errors.log.enable=false

The following option makes Kafka Connect write both tolerated and untolerated errors to its application log:

errors.log.enable=true

If you would like to log metadata about your message, such as your message's topic and offset, use the following option:

errors.log.include.messages=true
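Combined, a configuration sketch that logs every error along with message metadata looks like the following:

```properties
# Log both tolerated and untolerated errors to the application log,
# including each errant message's topic, partition, and offset
errors.log.enable=true
errors.log.include.messages=true
```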

For more information, see Confluent's guide on logging with Kafka Connect.

Handle Errors at the Connector Level

The sink connector provides options that let you configure error handling at the connector level. The options are as follows:

Kafka Connect Option  | MongoDB Kafka Connector Option
--------------------- | ------------------------------
errors.tolerance      | mongo.errors.tolerance
errors.log.enable     | mongo.errors.log.enable

Use these options if you want your connector to respond differently to errors related to MongoDB than to errors related to the Kafka Connect framework.
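For example, a configuration sketch that stops the connector on framework errors while tolerating and logging MongoDB-related errors could look like this:

```properties
# Stop the connector on Kafka Connect framework errors...
errors.tolerance=none
# ...but tolerate errors that originate from MongoDB writes,
# and record them in the application log
mongo.errors.tolerance=all
mongo.errors.log.enable=true
```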

