Release Notes
MongoDB Connector for Spark 10.2
The 10.2 connector release includes the following new features:
Added the ignoreNullValues write-configuration property, which enables you to control whether the connector ignores null values. In previous versions, the connector always wrote null values to MongoDB.
Added options for the convertJson write-configuration property.
Added the change.stream.micro.batch.max.partition.count read-configuration property, which allows you to divide micro-batches into multiple partitions for parallel processing.
Improved change stream schema inference when using the change.stream.publish.full.document.only read-configuration property.
Added the change.stream.startup.mode read-configuration property, which specifies how the connector processes change events when no offset is available.
Support for adding a comment to operations.
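As a rough illustration of how these 10.2 properties might be set, the write and read options can be collected into plain dictionaries and later passed to the connector (for example via `.options(**...)` on a writer or stream reader). This is a sketch: the connection details and the "latest" startup-mode value are placeholder assumptions, and the SparkSession setup is omitted.

```python
# Hypothetical option maps for the 10.2 features above. Connection
# details are placeholders; only the option names come from the notes.

write_options = {
    "connection.uri": "mongodb://localhost:27017",
    "database": "test",
    "collection": "events",
    # New in 10.2: skip null fields on write. Before 10.2 the
    # connector always wrote null values to MongoDB.
    "ignoreNullValues": "true",
}

read_stream_options = {
    "connection.uri": "mongodb://localhost:27017",
    "database": "test",
    "collection": "events",
    # New in 10.2: split each micro-batch into up to 4 partitions
    # for parallel processing.
    "change.stream.micro.batch.max.partition.count": "4",
    # New in 10.2: behavior when no offset is available
    # ("latest" here is an assumed value).
    "change.stream.startup.mode": "latest",
}
```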
MongoDB Connector for Spark 10.1.1
Corrected a bug in which aggregations including the $collStats pipeline stage did not return a count field for Time Series collections.
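A pipeline exercising the fixed behavior might look like the following sketch. The aggregation.pipeline option name and the JSON encoding are assumptions about how a custom pipeline is supplied to the connector, and the database and collection names are placeholders; with 10.1.1, the count field is populated for Time Series collections.

```python
import json

# A $collStats stage requesting a document count. With the 10.1.1 fix,
# the count field is returned for Time Series collections as well.
pipeline = [{"$collStats": {"count": {}}}]

read_options = {
    "database": "test",
    "collection": "ts_data",  # placeholder Time Series collection
    # Option name assumed; pipelines are typically passed as JSON.
    "aggregation.pipeline": json.dumps(pipeline),
}
```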
MongoDB Connector for Spark 10.1.0
Support for Scala 2.13.
Support for micro-batch mode with Spark Structured Streaming.
Support for BSON data types.
Improved partitioner support for empty collections.
Option to disable automatic upsert on write operations.
Improved schema inference for empty arrays.
Support for null values in arrays and lists. The connector now writes these values to MongoDB instead of throwing an exception.
See this post on the MongoDB blog for more information.
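One of the 10.1.0 additions, the opt-out of automatic upserts, might be configured as in this sketch (upsertDocument is the assumed option name, and the connection details are placeholders):

```python
# Write options illustrating the 10.1.0 upsert opt-out. Only the
# feature itself comes from the notes; names and values below are
# illustrative.
write_options = {
    "connection.uri": "mongodb://localhost:27017",
    "database": "test",
    "collection": "people",
    # 10.1.0 added the ability to disable the automatic upsert on
    # write operations (option name assumed).
    "upsertDocument": "false",
}
```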
MongoDB Connector for Spark 10.0.0
Support for Spark Structured Streaming.
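A minimal sketch of reading a MongoDB change stream through Spark Structured Streaming, which 10.0.0 introduced. The helper builds a streaming DataFrame but does not start it; it assumes an active SparkSession with the connector on the classpath, and the helper name and connection arguments are placeholders, not connector API.

```python
def build_mongo_stream(spark, uri, database, collection):
    """Build (but do not start) a streaming DataFrame over a MongoDB
    collection using the connector's "mongodb" source (10.0.0+).

    spark must be an active SparkSession with the Spark Connector
    on its classpath; uri, database, and collection are placeholders.
    """
    return (
        spark.readStream.format("mongodb")
        .option("connection.uri", uri)
        .option("database", database)
        .option("collection", collection)
        .load()
    )
```

A caller would then attach a sink and start the query, for example with `writeStream.format("console").start()`.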