
What's New

Learn what's new by version:

  • Version 1.13

  • Version 1.12

  • Version 1.11.2

  • Version 1.11.1

  • Version 1.11

  • Version 1.10.1

  • Version 1.10

  • Version 1.9.1

  • Version 1.9

  • Version 1.8.1

  • Version 1.8

  • Version 1.7

  • Version 1.6.1

  • Version 1.6

  • Version 1.5

  • Version 1.4

  • Version 1.3

  • Version 1.2

  • Version 1.1

  • Version 1.0

  • Added a custom authentication provider interface for Source and Sink Connectors. This feature enables you to write and use a custom implementation class in your connector. To learn more, see the Custom Authentication Provider guide.

  • Fixed an issue that occurred when validating configuration for Source and Sink Connectors if the configuration contained secrets and used the ConfigProvider framework. To learn more about this fix, see the KAFKA-414 JIRA issue.

  • Added support for a data configuration value in the mongo.errors.tolerance configuration setting. With mongo.errors.tolerance=data, the sink connector tolerates only data errors, and fails for any others.
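
    For example, a sink connector properties file might enable this behavior as follows. This is a minimal sketch; the connection, database, and collection values are placeholders:

    connection.uri=mongodb://mongodb0.example.com:27017
    database=exampleDb
    collection=exampleColl
    # Tolerate data errors only; any other error still fails the task.
    mongo.errors.tolerance=data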

  • Fixed a bug in which unsuccessful attempts to retrieve items from a change stream were logged at the INFO level instead of at ERROR level. To learn more about this fix, see the KAFKA-396 JIRA issue.

  • Fixed a bug in which requirements for the DELETE_WRITEMODEL_STRATEGY_CONFIG String value prevented the creation of a DeleteOneDefaultStrategy object. To learn more about this fix, see the KAFKA-395 JIRA issue.

  • Fixed wildcard matching on partial field names in documents. To learn more about this fix, see the KAFKA-391 JIRA issue.

  • Fixed an issue in which a null pointer exception is thrown when the connector attempts to log null values on configuration settings. To learn more about this fix, see the KAFKA-390 JIRA issue.

  • Added support for regular expressions in the topic.namespace.map property. To learn more about this feature and see an example of its use, see the Regular Expressions usage example in the Topic Naming page.
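
    A sketch of what a regular expression mapping might look like in a source connector, routing every matching namespace to one topic. The pattern, namespace, and topic names are placeholders, and the exact regex key syntax is described in the Regular Expressions usage example:

    topic.namespace.map={"/exampleDb.*/": "exampleTopic"}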

  • Added support for setting a custom delete write model strategy by using the delete.writemodel.strategy configuration property. To learn more, see Sink Connector Write Model Strategies.
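
    For instance, a sink connector might select one of the built-in strategies like this. The fully qualified class path is written from memory, so confirm it against the Sink Connector Write Model Strategies page:

    delete.writemodel.strategy=com.mongodb.kafka.connect.sink.writemodel.strategy.DeleteOneBusinessKeyStrategy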

  • Added the UpdateOneDefaultStrategy write model strategy. To learn more, see the list of Write Model Strategies.

  • Added the change.stream.document.key.as.key source connector configuration property. When set to true, the connector uses the deleted document's key as the key of the tombstone event. When set to false, the connector uses the resume token as the key of the tombstone event.

    Because this property is set to true by default, this might be a breaking change for some users. To learn more, see the list of Change Stream Properties.
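
    Users who relied on the previous behavior can restore it explicitly; a minimal sketch:

    # Revert to using the resume token as the key for tombstone events.
    change.stream.document.key.as.key=false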

  • DDL events from Debezium are recorded as no-ops and no longer cause an error.

Important

Upgrade to Version 1.10.1

Version 1.9 introduced a bug related to MongoSourceTask.start that can cause a resource leak on both the connector side and the server side.

Upgrade to version 1.10.1 if you are using version 1.9 or 1.10 of the connector.

  • Fixed a resource leak related to MongoSourceTask.start that was introduced in version 1.9.

  • Added the connector name to JMX monitoring metrics.

  • Added support for SSL by creating the following configuration options:

    • connection.ssl.truststore

    • connection.ssl.truststorePassword

    • connection.ssl.keystore

    • connection.ssl.keystorePassword
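
    These options might appear together in a connector configuration as follows; the file paths and passwords are placeholders:

    connection.ssl.truststore=/path/to/truststore.jks
    connection.ssl.truststorePassword=truststorePassword
    connection.ssl.keystore=/path/to/keystore.jks
    connection.ssl.keystorePassword=keystorePassword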

  • Ensured the driver parses config values from config providers before validating them.

  • Corrected the behavior of schema inference for documents in nested arrays.

  • Introduced the startup.mode=timestamp setting that allows you to start a Change Stream at a specific timestamp by setting the new startup.mode.timestamp.start.at.operation.time property.
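
    A sketch of a source connector that starts its change stream at a fixed point in time; the timestamp value is a placeholder, so check the property reference for the accepted formats:

    startup.mode=timestamp
    startup.mode.timestamp.start.at.operation.time=2023-01-01T00:00:00Z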

  • Deprecated the copy.existing property and all copy.existing.* properties. Use the startup.mode=copy_existing setting and the startup.mode.copy.existing.* properties instead to configure the copy existing feature.
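
    Migrating a configuration might look like the following sketch; the namespace regex is a placeholder, and the exact replacement property names should be confirmed against the Source Connector property reference:

    # Before (deprecated):
    # copy.existing=true
    # copy.existing.namespace.regex=exampleDb.*

    # After:
    startup.mode=copy_existing
    startup.mode.copy.existing.namespace.regex=exampleDb.*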

  • Introduced the change.stream.full.document.before.change setting that allows you to access and configure the pre-image of an update operation in the change stream event document.

  • Improved schema inference for nested documents contained in arrays.

  • Introduced the publish.full.document.only.tombstones.on.delete setting that configures the connector to send tombstone events when documents are deleted. This setting only applies when publish.full.document.only is true.
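
    A sketch of a source connector that publishes only the full document but still emits tombstone events for deletes:

    publish.full.document.only=true
    publish.full.document.only.tombstones.on.delete=true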

  • Added MongoDB server exception information to dead letter queue messages.

  • Corrected the type returned by getAttribute() and getAttributes() method calls in JMX MBeans to Attribute.

  • Updated the MongoDB Java driver dependency to version 4.7.

  • Added several logger events and details in source and sink connectors to help with debugging. For a complete list of updates, see the KAFKA-302 issue in JIRA.

  • Added JMX monitoring support for the source and sink connectors. To learn more about monitoring connectors, see the Monitoring page.

  • Added support for the Debezium MongoDB change stream CDC handler. You can now configure the connector to listen for events produced by this handler.

  • Updated the MongoDB Java driver dependency to version 4.5

  • Added dead letter queue error reports if the connector experiences bulk write errors

  • Added support for unordered bulk writes with the bulk.write.ordered configuration property
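
    For example, a sink connector can opt into unordered bulk writes like this:

    # Continue executing the remaining operations in a bulk write even if one of them fails.
    bulk.write.ordered=false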

  • Added warning when attempting to use a Change Data Capture (CDC) handler with a post processor

  • Removed support for the max.num.retries configuration property

  • Removed support for the retries.defer.timeout configuration property

Important

Disable Retries Through Connection URI

To disable retries, specify the retryWrites=false option in your MongoDB connection URI.

The following configuration, which contains a placeholder MongoDB connection URI, disables retries:

connection.uri=mongodb://mongodb0.example.com:27017,mongodb1.example.com:27017,mongodb2.example.com:27017/?replicaSet=myRepl&retryWrites=false

To learn more about connecting the MongoDB Kafka Connector to MongoDB, see the Connect to MongoDB guide.

To learn more about connection URI options, see the Connection Options guide in the MongoDB Java driver documentation.

  • Added support for user-defined topic separators with the topic.separator configuration property

  • Added support for the allow disk use field of the MongoDB Query API in the copy existing aggregation with the copy.existing.allow.disk.use configuration property
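
    The two properties above might be combined in a source connector configuration as follows; the separator character is an arbitrary example:

    # Build topic names as <prefix>_<database>_<collection> instead of the default dot-separated form.
    topic.separator=_
    # Let the copy existing aggregation use temporary disk space if it exceeds the memory limit.
    copy.existing.allow.disk.use=true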

  • Added support for Avro schema namespaces in the output.schema.value and output.schema.key configuration properties

  • Fixed Avro schema union validation

  • Updated MongoDB Java driver dependency to 4.3.1 in the combined JARs

  • Fixed connection validator user privilege check

  • Fixed a bug in UuidProvidedIn[Key|Value]Strategy classes that prevented them from loading

  • Added support for Stable API to force the server to run operations with behavior compatible with the specified API version

    Note

    Starting in February 2022, the Versioned API is known as the Stable API. All concepts and features remain the same with this naming change.

  • Added error handling properties for the sink connector and source connector that can override the Kafka Connect framework's error handling behavior
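
    As a sketch, a sink connector might combine the framework-level settings with the connector-specific overrides; the dead letter queue topic name is a placeholder:

    errors.tolerance=all
    errors.deadletterqueue.topic.name=example.deadletterqueue
    # Connector-level properties that override the framework behavior above.
    mongo.errors.tolerance=all
    mongo.errors.log.enable=true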

  • Added mongo-kafka-connect-<version>-confluent.jar, which contains the connector and all dependencies required to run it on the Confluent Platform

  • No new changes, additions or improvements

  • Corrected the behavior of LazyBsonDocument#clone to respond to any changes made once unwrapped

  • Fixed the timestamp integer overflow in the Source Connector

  • Updated to enable recovery when calling the getMore() method in the Source Connector

  • Updated to enable recovery from a broken change stream caused by event sizes greater than 16 MB in the Source Connector

  • Updated the MongoDB Java driver dependency to version 4.2

  • Added the DeleteOneBusinessKeyStrategy write strategy to remove records from a topic

  • Added support for handling errant records that cause problems when processing them

  • Added support for Qlik Replicate Change Data Capture (CDC) to process event streams

  • Replaced BsonDocument with RawBsonDocument

  • Improved the copy.existing namespace handling

  • Improved the error messages for invalid pipeline operators

  • Improved the efficiency of heartbeats by making them tombstone messages

  • Corrected the inferred schema naming conventions

  • Updated to ensure that schemas can be backwards compatible

  • Fixed the Sink validation issue with topics.regex

  • Fixed the Sink NPE issue when using with Confluent Connect 6.1.0

  • Updated to ensure that the change stream cursor closes so it only reports errors that exist

  • Changed to include or exclude the _id field for a projection only if it's explicitly added

  • Updated the MongoDB Java Driver to version 4.1

  • Added support for Change Data Capture (CDC) based on MongoDB change stream events

  • Added the NamespaceMapper interface to allow for dynamic namespace mapping

  • Added the TopicMapper interface to allow topic mapping

  • Changed the top-level inferred schema to be mandatory

  • Fixed a validation issue and synthetic configuration property in the Sink Connector

  • Corrected general exception logging

  • Updated to clone the LazyBsonDocument instead of the unwrapped BsonDocument

  • Added automated integration testing for the latest Kafka Connector and Confluent Platform versions to ensure compatibility

  • Added support for records that contain Bson byte types

  • Added support for the errors.tolerance property

  • Changed max.num.retries default to 1

  • Improved the error messages for business key errors

  • Improved the error handling for List and JSON array configuration options

  • Updated to use the dot notation for filters in key update strategies

  • Added support to output a key or value as a Bson byte type

  • Added support for schema and custom Avro schema definitions

  • Added support for dead letter queue and the errors.tolerance property

  • Added configurations for the following formatters:

    • DefaultJson

    • ExtendedJson

    • SimplifiedJson
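
    For example, a source connector might select one of these formatters for its JSON output. The fully qualified class path is written from memory, so verify it against the Source Connector documentation:

    output.format.value=json
    output.json.formatter=com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson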

  • Added configuration for copy.existing.pipeline to allow you to use indexes during the copying process

  • Added configuration for copy.existing.namespace.regex to allow you to filter which namespaces to copy

  • Added configuration for offset.partition.name to allow for custom partitioning naming strategies
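
    The three properties above might appear together in a source connector that copies existing data; the pipeline, regex, and partition name values are illustrative placeholders:

    copy.existing=true
    # Copy only matching namespaces, and only documents that pass the aggregation pipeline.
    copy.existing.namespace.regex=exampleDb.*
    copy.existing.pipeline=[{"$match": {"closed": false}}]
    # Store the source offsets under a custom partition name.
    offset.partition.name=example-partition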

  • Updated to validate that the fullDocument field is a document

  • Updated to sanitize the connection string in the offset partition map to improve maintenance of the connection.uri, database, and collection parameters

  • Updated to disable publishing a source record without a topic name

  • Fixed an issue in the Source Connector where copying existing data failed on MongoDB 3.6 when the collection didn't exist

Important

We deprecated the following post processors:

  • BlacklistKeyProjector

  • BlacklistValueProjector

  • WhitelistKeyProjector

  • WhitelistValueProjector

If you are using one of these post processors, use the corresponding replacement instead for future compatibility:

  • BlockListKeyProjector

  • BlockListValueProjector

  • AllowListKeyProjector

  • AllowListValueProjector

  • Added configurations for the following properties:

    • document.id.strategy.overwrite.existing

    • UuidStrategy output types

    • document.id.strategy.partial.value.projection.type

    • document.id.strategy.partial.value.projection.list

    • document.id.strategy.partial.key.projection.type

    • document.id.strategy.partial.key.projection.list

    • UuidProvidedInKeyStrategy

    • UuidProvidedInValueStrategy
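
    A sketch of a sink connector that builds the document _id from a partial projection of the record value. The strategy class path, projection type value, and field names are assumptions written for illustration, so confirm them against the Sink Connector configuration reference:

    document.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.PartialValueStrategy
    document.id.strategy.partial.value.projection.type=AllowList
    document.id.strategy.partial.value.projection.list=name,address
    document.id.strategy.overwrite.existing=true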

  • Added the UpdateOneBusinessKeyTimestampStrategy post processor

  • Added built-in support for parallelism and scalable data copying by assigning topic partitions to tasks

  • Improved the error messaging for missing resume tokens

  • Removed exceptions reported by the MongoCopyDataManager when the source database does not exist

  • Fixed the copy existing resumability error in the Source Connector

  • Added support for the topics.regex property
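
    For example, a single sink connector can subscribe to every topic that matches a pattern instead of listing topics individually; the pattern is a placeholder:

    # Use either topics or topics.regex, not both.
    topics.regex=activity-.*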

  • Updated to ignore unused source record key or value fields

  • Added validation for the connection using MongoSinkConnector.validate

  • Added validation for the connection using MongoSourceConnector.validate

  • Removed the "Unrecognized field: startAfter" error for resuming a change stream in the Source Connector

The initial GA release.
