Release Notes for MongoDB 4.2

On this page

  • Patch Releases
  • Distributed Transactions
  • Removed MMAPv1 Storage Engine
  • Removed Commands and Methods
  • MongoDB Drivers
  • Sharded Clusters
  • Security Improvements
  • Aggregation Improvements
  • Change Stream
  • Update Enhancements
  • Wildcard Indexes
  • Platform Support
  • MongoDB Tools
  • Monitoring
  • Flow Control
  • Logging and Diagnostics
  • General Improvements
  • Query Plan Improvements
  • Optimized Index Builds
  • Changes Affecting Compatibility
  • Upgrade Procedures
  • Download
  • Known Issues
  • Report an Issue

Warning

Past Release Limitations

Some past releases have critical issues. These releases are not recommended for production use. Use the latest available patch release version instead.

  • WT-10461: affected versions 4.2.0 - 4.2.23 (ARM64 or POWER system architectures)

Issues fixed in 4.2.24:

  • SERVER-71759 dataSize command doesn't yield

  • SERVER-68115 Bug fix for "elemMatchRootLength > 0" invariant trigger

  • SERVER-72535 Sharded clusters allow creating the 'admin', 'local', and 'config' databases with alternative casings

  • SERVER-62738 Give mongos the ability to passthrough to a specific shard

  • SERVER-68361 LogTransactionOperationsForShardingHandler::commit misses transferring documents from prepared and non-prepared transactions changing a document's shard key value

  • All JIRA issues closed in 4.2.24

  • 4.2.24 Changelog

Issues fixed in 4.2.23:

Issues fixed in 4.2.22:

Issues fixed in 4.2.21:

Issues fixed in 4.2.20:

Issues fixed in 4.2.19:

  • SERVER-62065 Upgrade path from 3.6 to 4.0 can leave chunk entries without history on the shards

  • SERVER-60685 TransactionCoordinator may interrupt locally executing update with non-Interruption error category, leading to server crash

  • SERVER-60682 TransactionCoordinator may block acquiring WiredTiger write ticket to persist its decision, prolonging transactions being in the prepared state

  • SERVER-53335 Queries, updates, and deletes with non-"simple" collations may miss documents when using hashed sharding

  • SERVER-40691 $nin:[...] queries are not indexed

  • All JIRA issues closed in 4.2.19

  • 4.2.19 Changelog

Issues fixed in 4.2.18:

  • SERVER-61427 Unique index builds can cause a loss of availability during commit due to checking many false duplicates

  • SERVER-56227 Add user-facing command to set allowMigrations to false for a sharded collection

  • SERVER-59226 Deadlock when stepping down with a profile session marked as uninterruptible

  • SERVER-56226 [v4.4] Introduce 'permitMigrations' field on config.collections entry to prevent chunk migrations from committing

  • SERVER-54064 Sessions on arbiters accumulate and cannot be cleared out

  • All JIRA issues closed in 4.2.18

  • 4.2.18 Changelog

Issues fixed in 4.2.17:

Issues fixed in 4.2.16:

  • SERVER-59074: Do not acquire storage tickets just to set/wait on oplog visibility

  • SERVER-54729: MongoDB Enterprise Debian/Ubuntu packages should depend on libsasl2-modules and libsasl2-modules-gssapi-mit

  • SERVER-39621: Disabled chaining should enforce sync source change when the primary steps down even if the oplog fetcher isn't killed on sync source

  • SERVER-34938: Secondary slowdown or hang due to content pinned in cache by single oplog batch

  • WT-7776: Add a hard limit on the number of modify updates before we instantiate a complete update

  • All JIRA issues closed in 4.2.16

  • 4.2.16 Changelog

Issues fixed in 4.2.15:

Issues fixed in 4.2.14:

Issues fixed in 4.2.13:

Issues fixed in 4.2.12:

  • SERVER-40361: Reduce memory footprint of plan cache entries

  • SERVER-47863: Initial Sync Progress Metrics

  • SERVER-48471: Hashed indexes may be incorrectly marked multikey and be ineligible as a shard key

  • SERVER-50769: server restarted after expr{"expr":"_currentApplyOps.getArrayLength() > 0","file":"src/mongo/db/pipeline/document_source_change_stream_transform.cpp","line":535}}

  • SERVER-52654: new signing keys not generated by the monitoring-keys-for-HMAC thread

  • SERVER-52879: Periodic operation latency spikes every 5 minutes due to closing idle cached WT sessions

  • All JIRA issues closed in 4.2.12

  • 4.2.12 Changelog

Issues fixed in 4.2.11:

  • SERVER-43664: Speedup WiredTiger storage engine startup for many tables by optimizing WiredTigerUtil::setTableLogging()

  • SERVER-45938: Allow matching O/OU/DC in client x509 cert if clusterMode:keyFile

  • SERVER-48523: Unconditionally check the first entry in the oplog when attempting to resume a change stream

  • SERVER-51120: Find queries with SORT_MERGE incorrectly sort the results when the collation is specified

  • WT-6507: Exit cache eviction worker after our operation has timed out

  • All JIRA issues closed in 4.2.11

  • 4.2.11 Changelog

Issues fixed in 4.2.10:

Issues fixed in 4.2.9:

  • SERVER-44051: getShardDistribution() does not report "Collection XYZ is not sharded" on dropped but previously sharded collections

  • SERVER-45610: Some reads work while system is RECOVERING

  • SERVER-47714: Secondary asserts on system.profile collection with WiredTigerRecordStore::insertRecord 95: Operation not supported

  • SERVER-48067: Reduce memory consumption for unique index builds with large numbers of non-unique keys

  • SERVER-49233: Introduce a flag to toggle the logic for bumping collection's major version during split

  • WT-6480: Fix a bug where files without block modification information were repeatedly copied at each incremental backup

  • All JIRA issues closed in 4.2.9

  • 4.2.9 Changelog

Issues fixed in 4.2.8:

  • SERVER-46897: REMOVED node may never send heartbeat to fetch newest config

  • SERVER-47799: AsyncRequestsSender should update replica set monitor in between retries for InterruptedAtShutdown

  • SERVER-47994: Fix for numerical overflow in GeoHash

  • SERVER-48307: Transactions that write to exactly one shard and read from one or more other shards may incorrectly indicate failure on retry after successful commit

  • WT-6366: Off-by-one overflow in block-modification bitmaps for incremental backup

  • All JIRA issues closed in 4.2.8

  • 4.2.8 Changelog

Issues fixed in 4.2.7:

Issues fixed in 4.2.6:

  • SERVER-45119: CollectionShardingState::getCurrentShardVersionIfKnown returns collection version instead of shard version

  • SERVER-44892: getShardDistribution should use $collStats agg stage instead of collStats command

  • SERVER-43848: find/update/delete w/o shard key predicate under txn with snapshot read can miss documents

  • SERVER-42827: Allow sessions collection to return OK for creating indexes if at least one shard returns OK and others return CannotImplicitlyCreateCollection

  • SERVER-40805: Indicate the reason for replanning in the log file

  • SERVER-45389: Add metrics tracking how often shards have inconsistent indexes

  • SERVER-44689: Add serverStatus counter for each use of an aggregation stage in a user's request

  • All JIRA issues closed in 4.2.6

  • 4.2.6 Changelog

Note

The release of version 4.2.4 was skipped due to an issue encountered during the release. However, the 4.2.5 release includes the fixes made in 4.2.4.

Issues fixed in 4.2.5:

Issues fixed in 4.2.4:

  • SERVER-44915: Extend $indexStats output to include full index options and shard name

  • SERVER-46121: mongos crashes with invariant error after changing taskExecutorPoolSize

  • SERVER-45137: Increasing memory allocation in Top::record with high rate of collection creates and drops

  • SERVER-44904: Startup recovery should not delete corrupt documents while rebuilding unfinished indexes

  • SERVER-44260: Transaction can conflict with previous transaction on the session if the all committed point is held back

  • SERVER-35050: Don't abort collection clone due to negative document count

  • SERVER-39112: Primary drain mode can be unnecessarily slow

  • All JIRA issues closed in 4.2.4

  • 4.2.4 Changelog

Issues fixed:

Note

Fixed issues include those that resolve the following Common Vulnerabilities and Exposures (CVEs):

Issues fixed:

  • SERVER-31083: Allow passing primary shard to "enableSharding" command for a new database

  • SERVER-33272: The DatabaseHolder::close() function no longer requires a global write lock and neither does the dropDatabase command

  • SERVER-44050: Arrays along 'hashed' index key path are not correctly rejected

  • SERVER-43882: Building indexes for startup recovery uses unowned RecordData after yielding its cursor

  • SERVER-44617: $regexFind crash when one of the capture group doesn't match the input but pattern matches

  • SERVER-44721: Shell KMS AWS support cannot decrypt responses

  • SERVER-43860: Pipeline style update in $merge can produce unexpected result

  • WT-4961: Checkpoints with cache overflow must keep history for reads

  • All JIRA issues closed in 4.2.2

  • 4.2.2 Changelog

Issues fixed:

Note

Distributed Transactions and Multi-Document Transactions

Starting in MongoDB 4.2, the two terms are synonymous: distributed transactions refer to multi-document transactions on sharded clusters and replica sets, and multi-document transactions (whether on sharded clusters or replica sets) are also known as distributed transactions.

In version 4.2, MongoDB introduces distributed transactions. Distributed transactions:

  • Add support for multi-document transactions on sharded clusters.

    • All members of the 4.2 sharded cluster must have featureCompatibilityVersion of 4.2.

    • Clients must use MongoDB drivers updated for MongoDB 4.2.

  • Incorporate the existing support for transactions on replica sets.

    • All members of the 4.2 replica set must have featureCompatibilityVersion of 4.2.

    • Clients must use MongoDB drivers updated for MongoDB 4.2.

  • Remove the 16MB total size limit for a transaction. In version 4.2, MongoDB creates as many oplog entries (maximum size 16MB each) as necessary to encapsulate all write operations in a transaction. In MongoDB 4.0, MongoDB creates a single entry for all write operations in a transaction, thereby imposing a 16MB total size limit for a transaction.

  • Extend transaction support to deployments whose secondary members use the in-memory storage engine. That is, transactions are available for deployments that use the WiredTiger storage engine for the primary and either the WiredTiger or the in-memory storage engine for the secondary members. In MongoDB 4.0, transactions are available only for deployments that use the WiredTiger storage engine.

For more information, see Transactions.

MongoDB 4.2 removes the deprecated MMAPv1 storage engine.

If your 4.0 deployment uses MMAPv1, you must change the deployment to WiredTiger Storage Engine before upgrading to MongoDB 4.2. For details, see:

MongoDB removes the following MMAPv1 specific configuration options:

  • storage.mmapv1.journal.commitIntervalMs

  • storage.mmapv1.journal.debugFlags (command-line: mongod --journalOptions)

  • storage.mmapv1.nsSize (command-line: mongod --nssize)

  • storage.mmapv1.preallocDataFiles (command-line: mongod --noprealloc)

  • storage.mmapv1.quota.enforced (command-line: mongod --quota)

  • storage.mmapv1.quota.maxFilesPerDB (command-line: mongod --quotaFiles)

  • storage.mmapv1.smallFiles (command-line: mongod --smallfiles)

  • storage.repairPath (command-line: mongod --repairpath)

  • replication.secondaryIndexPrefetch (command-line: mongod --replIndexPrefetch)

Note

Starting in version 4.2, MongoDB processes will not start with these options. Remove any MMAPv1 specific configuration options if using a WiredTiger deployment.

MongoDB removes the following MMAPv1 parameters:

  • newCollectionsUsePowerOf2Sizes

  • replIndexPrefetch

MongoDB removes the MMAPv1 specific touch command.

MongoDB removes the following commands and mongo shell methods:

  • group (db.collection.group()): Use db.collection.aggregate() with the $group stage instead.

  • eval: The MongoDB 4.2 mongo shell methods db.eval() and db.collection.copyTo() can only be run when connected to MongoDB 4.0 or earlier.

  • copydb: The corresponding mongo shell helper db.copyDatabase() can only be run when connected to MongoDB 4.0 or earlier. As an alternative, users can use mongodump and mongorestore (see Copy and Clone Databases) or write a script using the drivers.

  • clone: The corresponding mongo shell helper db.cloneDatabase() can only be run when connected to MongoDB 4.0 or earlier. As an alternative, users can use mongodump and mongorestore (see Copy and Clone Databases) or write a script using the drivers.

  • geoNear: Use db.collection.aggregate() with the $geoNear stage instead. For more information, see Remove Support for the geoNear Command.

  • parallelCollectionScan

  • repairDatabase (db.repairDatabase())

  • getPrevError (db.getPrevError())

MongoDB removes the deprecated option maxScan for the find command and the mongo shell helper cursor.maxScan(). Use either the maxTimeMS option for the find command or the helper cursor.maxTimeMS() instead.
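
For example, a query that previously capped the number of scanned documents with maxScan can instead bound its execution time; the orders collection and filter below are illustrative:

// Limit query execution to 5000 milliseconds using the cursor helper
db.orders.find( { status: "A" } ).maxTimeMS( 5000 )

// Equivalent form using the maxTimeMS option of the find command
db.runCommand( { find: "orders", filter: { status: "A" }, maxTimeMS: 5000 } )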

The following drivers are feature compatible [1] with MongoDB 4.2:

[1] For a complete list of official 4.2+ compatible drivers with support for Client-Side Field Level Encryption, see Compatibility.

Retryable reads allow MongoDB 4.2+ compatible drivers to automatically retry certain read operations a single time if they encounter certain network or server errors. See Retryable Reads for more information.

Starting in MongoDB 4.2, you can update a document's shard key value unless the shard key field is the immutable _id field. In MongoDB 4.0 and earlier, a document's shard key field value is immutable.

For details on updating the shard key, see Change a Document's Shard Key Value.

mongodump and mongorestore cannot be part of a backup strategy for 4.2+ sharded clusters that have sharded transactions in progress, as backups created with mongodump do not maintain the atomicity guarantees of transactions across shards.

For 4.2+ sharded clusters with in-progress sharded transactions, use one of the following coordinated backup and restore processes which do maintain the atomicity guarantees of transactions across shards:

Starting in MongoDB 6.1, automatic chunk splitting is not performed. This is because of balancing policy improvements. Auto-splitting commands still exist, but do not perform an operation. For details, see Balancing Policy Changes.

In MongoDB versions earlier than 6.1:

The mongo shell methods sh.enableBalancing(namespace) and sh.disableBalancing(namespace) have no effect on auto-splitting.

Starting in MongoDB 4.2, MongoDB adds the parameter ShardingTaskExecutorPoolReplicaSetMatching. This parameter determines the minimum size of the mongod / mongos instance's connection pool to each member of the sharded cluster. This value can vary during runtime.

mongod and mongos maintain connection pools to each replica set secondary for every replica set in the sharded cluster. By default, these pools have a number of connections that is at least the number of connections to the primary.

To modify, see ShardingTaskExecutorPoolReplicaSetMatching.
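
For example, a minimal sketch of adjusting the parameter at runtime with setParameter; "matchPrimaryNode" is the default matching behavior, and the appropriate value depends on your workload:

db.adminCommand( {
   setParameter: 1,
   ShardingTaskExecutorPoolReplicaSetMatching: "matchPrimaryNode"
} )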

Starting in MongoDB 4.2,

  • Operations which replace documents, such as replaceOne() or update() (when used with a replacement document), will first attempt to target a single shard by using the query filter. If the operation cannot target a single shard by the query filter, it then attempts to target by the replacement document. In earlier versions, these operations only attempt to target using the replacement document.

  • The save() method is deprecated: use the insertOne() or replaceOne() method instead. The save() method cannot be used with sharded collections that are not sharded by _id, and attempting to do so will result in an error.

  • For a replace document operation that includes upsert: true and is on a sharded collection, the filter must include an equality match on the full shard key.

MongoDB 4.2 includes fixes that resolve the following Common Vulnerabilities and Exposures (CVEs):

MongoDB 4.2 adds TLS options for the mongod, the mongos, and the mongo shell to replace the corresponding SSL options (deprecated in 4.2). The new TLS options provide functionality identical to the deprecated SSL options, since MongoDB has always supported TLS 1.0 and later.

  • For the command-line TLS options, refer to the mongod, mongos, and mongo shell pages.

  • For the corresponding mongod and mongos configuration file options, refer to the configuration file page.

  • For the connection string tls options, refer to the connection string page.

MongoDB 4.2 deprecates the SSL options for the mongod, the mongos, and the mongo shell as well as the corresponding net.ssl Options configuration file options.

Use the new TLS options instead.

MongoDB 4.2 adds the following new parameters:

  • tlsWithholdClientCertificate: Available for mongod and mongos. The parameter can be set to true to stop the instance from sending its TLS certificate when initiating intra-cluster communications with other mongod or mongos instances. For details, see tlsWithholdClientCertificate.

  • tlsX509ClusterAuthDNOverride: Available for mongod and mongos. The parameter can be set to an alternative certificate DN to use for x.509 membership authentication. For details, see tlsX509ClusterAuthDNOverride. You can use this parameter for a rolling update of certificates to new certificates that contain a new DN value. See Rolling Update of x.509 Cluster Certificates that Contain New DN.

MongoDB 4.2 adds the --tlsClusterCAFile option/net.tls.clusterCAFile for mongod and mongos, which specifies a .pem file for validating the TLS certificate from a client establishing a connection. This lets you use separate Certificate Authorities to verify the client to server and server to client portions of the TLS handshake.

Starting in version 4.2 on Linux:

In earlier versions of MongoDB (3.6.14+ and 4.0.3+), MongoDB enables support for Ephemeral Elliptic Curve Diffie-Hellman (ECDHE) if, during compile time, the Linux platform's OpenSSL supports automatic curve selection of ECDH parameters.

On Windows and macOS, MongoDB's support for ECDHE and DH remain unchanged from earlier versions; that is, support is implicit through the use of the platform's respective native TLS/SSL OS libraries.

For more information, see Forward Secrecy.

Starting in version 4.2 of the mongo shell, you can use the passwordPrompt() method in conjunction with various user authentication/management methods/commands to prompt for the password instead of specifying the password directly in the method/command call. However, you can still specify the password directly as you would with earlier versions of the mongo shell.

For example:

db.createUser( {
   user: "user123",
   pwd: passwordPrompt(),   // Instead of specifying the password in cleartext
   roles: [ "readWrite" ]
} )

Starting in MongoDB 4.2, keyfiles for internal membership authentication use YAML format to allow for multiple keys in a keyfile. The YAML format accepts content of:

  • a single key string (same as in earlier versions),

  • multiple key strings (each string must be enclosed in quotes), or

  • sequence of key strings.

The YAML format is compatible with the existing single-key keyfiles that use the text file format.

The new format allows for rolling upgrade of the keys without downtime. See Rotate Keys for Replica Sets and Rotate Keys for Sharded Clusters.

For MongoDB 4.2 Enterprise binaries linked against libldap (such as when running on RHEL), access to libldap is synchronized, incurring some performance/latency costs.

For MongoDB 4.2 Enterprise binaries linked against libldap_r, there is no change in behavior from earlier MongoDB versions.

For encrypted storage engine configured with AES256-GCM cipher:

  • Restoring from Hot Backup
    Starting in 4.2, if you restore from files taken via "hot" backup (i.e. the mongod is running), MongoDB can detect "dirty" keys on startup and automatically roll over the database key to avoid IV (Initialization Vector) reuse.
  • Restoring from Cold Backup

    However, if you restore from files taken via "cold" backup (i.e. the mongod is not running), MongoDB cannot detect "dirty" keys on startup, and reuse of IV voids confidentiality and integrity guarantees.

    Starting in 4.2, to avoid the reuse of the keys after restoring from a cold filesystem snapshot, MongoDB adds a new command-line option --eseDatabaseKeyRollover. When started with the --eseDatabaseKeyRollover option, the mongod instance rolls over the database keys configured with AES256-GCM cipher and exits.

For more information, see encrypted storage engine and --eseDatabaseKeyRollover.

The official MongoDB 4.2+ compatible drivers provide a client-side field level encryption framework. Applications can encrypt fields in documents prior to transmitting data over the wire to the server. Only applications with access to the correct encryption keys can decrypt and read the protected data. Deleting an encryption key renders all data encrypted with that key permanently unreadable.

For a complete list of official 4.2+ compatible drivers with support for client-side field level encryption, see Compatibility.

For an end-to-end procedure for configuring field level encryption using select MongoDB 4.2+ compatible drivers, see the Client Side Field Level Encryption Guide.

Explicit (manual) encryption of fields

Official MongoDB 4.2+ compatible drivers and the MongoDB 4.2 or later mongo shell support explicitly encrypting or decrypting fields with a specific data encryption key and encryption algorithm.

Applications must modify any code associated with constructing read and write operations to include encryption/decryption logic via the driver encryption library. Applications are responsible for selecting the appropriate data encryption key for encryption/decryption on a per-operation basis.

For more information, see Explicit Encryption.

Automatic encryption of fields

Note

Enterprise Feature

Automatic field level encryption is only available in MongoDB Enterprise 4.2 or later and MongoDB Atlas 4.2 or later clusters.

Official MongoDB 4.2+ compatible drivers and the MongoDB 4.2 or later mongo shell support automatically encrypting fields in read and write operations.

Applications must create a database connection object (e.g. MongoClient) with the automatic encryption configuration settings. The configuration settings must include automatic encryption rules using a strict subset of the JSON Schema Draft 4 standard syntax and encryption-specific schema keywords. Applications do not have to modify code associated with constructing the read/write operation. See Encryption Schemas for complete documentation on automatic encryption rules.

For more information, see Automatic Encryption.

For complete documentation on client-side field level encryption, see Client-Side Field Level Encryption.

  • Add serverStatus to the backup built-in role.

  • To connect a client over a TLS/SSL connection, MongoDB 4.2 supports matching by IP address as well as by DNS name for Subject Alternative Name (SAN) matching.

    For example, a mongod instance's x.509 certificate has the following SAN:

    X509v3 Subject Alternative Name:
    DNS:hostname.example.com, DNS:localhost, IP Address:127.0.0.1

    Then, to connect a mongo shell to the instance, you can specify the IP address 127.0.0.1 or the DNS names hostname.example.com and localhost:

    • mongo "mongodb://127.0.0.1:27017/test" --tls --tlsCAFile /etc/ssl/ca.pem ...
    • mongo "mongodb://hostname.example.com:27017/test" --tls --tlsCAFile /etc/ssl/ca.pem ...
    • mongo "mongodb://localhost:27017/test" --tls --tlsCAFile /etc/ssl/ca.pem ...

    In previous versions, MongoDB only supported DNS entries for SAN matching; that is, you could only specify the DNS names:

    • mongo "mongodb://hostname.example.com:27017/test" --tls --tlsCAFile /etc/ssl/ca.pem ...
    • mongo "mongodb://localhost:27017/test" --tls --tlsCAFile /etc/ssl/ca.pem ...

Starting in version 4.2, MongoDB Enterprise adds a new token {PROVIDED_USER} that can be used in security.ldap.authz.queryTemplate. When used in the template, MongoDB substitutes the token with the supplied username, i.e. the username as provided before either authentication or LDAP transformation.

MongoDB 4.2 adds the $merge aggregation stage.

With the new stage, you can:

  • Output to a collection in the same or a different database.

  • Incorporate results (merge documents, replace documents, keep existing documents, fail the operation, process documents with a custom update pipeline) into an existing collection.

  • Output to an existing sharded collection.

The new stage allows users to create on-demand materialized views, where the content of the output collection can be incrementally updated each time the pipeline is run.
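
For example, the following sketch maintains an on-demand materialized view; the salaries source collection, the budgets output collection, and the grouping fields are illustrative:

db.salaries.aggregate( [
   { $group: { _id: "$fiscal_year", salaries: { $sum: "$salary" } } },
   { $merge: { into: "budgets", whenMatched: "replace", whenNotMatched: "insert" } }
] )

Re-running the pipeline refreshes the budgets collection with updated totals rather than rebuilding it from scratch.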

MongoDB 4.2 adds new trigonometry expressions for use in aggregation pipelines.

Trigonometry expressions perform trigonometric operations on numbers. Values that represent angles are always input or output in radians. Use $degreesToRadians and $radiansToDegrees to convert between degree and radian measurements.

  • $sin: Returns the sine of a value that is measured in radians.
  • $cos: Returns the cosine of a value that is measured in radians.
  • $tan: Returns the tangent of a value that is measured in radians.
  • $asin: Returns the inverse sine (arc sine) of a value in radians.
  • $acos: Returns the inverse cosine (arc cosine) of a value in radians.
  • $atan: Returns the inverse tangent (arc tangent) of a value in radians.
  • $atan2: Returns the inverse tangent (arc tangent) of y / x in radians, where y and x are the first and second values passed to the expression respectively.
  • $asinh: Returns the inverse hyperbolic sine (hyperbolic arc sine) of a value in radians.
  • $acosh: Returns the inverse hyperbolic cosine (hyperbolic arc cosine) of a value in radians.
  • $atanh: Returns the inverse hyperbolic tangent (hyperbolic arc tangent) of a value in radians.
  • $sinh: Returns the hyperbolic sine of a value that is measured in radians.
  • $cosh: Returns the hyperbolic cosine of a value that is measured in radians.
  • $tanh: Returns the hyperbolic tangent of a value that is measured in radians.
  • $degreesToRadians: Converts a value from degrees to radians.
  • $radiansToDegrees: Converts a value from radians to degrees.

MongoDB 4.2 adds the $round aggregation expression. Use $round to round numerical values to a specific digit or decimal place.

MongoDB 4.2 adds expanded functionality and new syntax to $trunc. Use $trunc with the new syntax to truncate numerical values to a specific digit or decimal place.
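
For example, a brief sketch applying both expressions to a hypothetical samples collection:

db.samples.aggregate( [
   { $project: {
      roundedValue: { $round: [ "$value", 2 ] },     // round to 2 decimal places
      truncatedValue: { $trunc: [ "$value", 1 ] }    // truncate to 1 decimal place
   } }
] )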

MongoDB 4.2 adds the following regular expression (regex) pattern matching operators for use in the aggregation pipeline:

  • $regexFind: Applies a regular expression (regex) to a string and returns information on the first matched substring.
  • $regexFindAll: Applies a regular expression (regex) to a string and returns information on all matched substrings.
  • $regexMatch: Applies a regular expression (regex) to a string and returns true if a match is found and false if a match is not found.

Prior to MongoDB 4.2, the aggregation pipeline could only use the query operator $regex in the $match stage.
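
For example, a sketch that evaluates the new operators in a $addFields stage; the products collection and sku field are illustrative:

db.products.aggregate( [
   { $addFields: {
      hasPrefix: { $regexMatch: { input: "$sku", regex: /^ABC-/ } },   // true or false
      firstDigits: { $regexFind: { input: "$sku", regex: /[0-9]+/ } }  // { match, idx, captures } or null
   } }
] )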

MongoDB 4.2 adds the following new aggregation pipeline stages:

  • $merge: Writes the aggregation results to a collection. The $merge stage can incorporate results (merge documents, replace documents, keep existing documents, fail the operation, process documents with a custom update pipeline) into an existing collection.

  • $planCacheStats: Provides plan cache information for a collection. The $planCacheStats aggregation stage is preferred over the following methods and commands, which have been deprecated in 4.2:

    • PlanCache.getPlansByQuery() method/planCacheListPlans command, and

    • PlanCache.listQueryShapes() method/planCacheListQueryShapes command.

  • $replaceWith: Replaces the input document with the specified document. The operation replaces all existing fields in the input document, including the _id field. The new $replaceWith stage is an alias for the $replaceRoot stage.

  • $set: Adds new fields to documents. The stage outputs documents that contain all existing fields from the input documents as well as the newly added fields. The new $set stage is an alias for the $addFields stage.

  • $unset: Excludes fields from documents. The new $unset stage is an alias for the $project stage that excludes fields.

MongoDB 4.2 adds the following new aggregation pipeline variables:

  • NOW: Returns the current datetime value.
  • CLUSTER_TIME: Returns the current timestamp value. Only available on replica sets and sharded clusters.
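
For example, a sketch that stamps each output document with the current datetime via the NOW variable; the events collection and field name are illustrative:

db.events.aggregate( [
   { $addFields: { fetchedAt: "$$NOW" } }
] )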

Starting in MongoDB 4.2, you can use the aggregation pipeline for updates in:

For the updates, the pipeline can consist of the following stages:

Using the aggregation pipeline allows for a more expressive update statement, such as expressing conditional updates based on current field values or updating one field using the value of another field or fields.

See the individual reference pages for details and examples.
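
For example, a sketch of a pipeline-style update that sets one field from the values of two others and then removes the source fields; the members collection and field names are illustrative:

db.members.updateMany(
   { },
   [
      { $set: { status: "Modified", comments: [ "$misc1", "$misc2" ] } },
      { $unset: [ "misc1", "misc2" ] }
   ]
)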

MongoDB 4.2 adds startAfter as an option for Change Streams, which starts a new change stream after the event indicated by a resume token. With this option, you can start a change stream from an invalidate event, thereby guaranteeing no missed notifications after the previous stream was invalidated.
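
For example, a sketch that opens a new change stream after a previous stream was invalidated; the inventory collection is illustrative, and resumeToken is assumed to hold the _id of the invalidate event from the prior stream:

// resumeToken was saved earlier from the invalidate event of the previous stream
let newStream = db.inventory.watch( [], { startAfter: resumeToken } );
while ( newStream.hasNext() ) {
   printjson( newStream.next() );
}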

MongoDB 4.2 uses the version 1 (i.e. v1) change streams resume tokens, introduced in version 4.0.7.


Starting in MongoDB 4.2, change streams are available regardless of the "majority" read concern support; that is, read concern majority support can be either enabled (default) or disabled to use change streams.

In MongoDB 4.0 and earlier, change streams are available only if "majority" read concern support is enabled (default).

Starting in MongoDB 4.2, you can use additional stages in the change stream aggregation pipeline to modify the change stream output (i.e. the event documents):

Starting in MongoDB 4.2, change streams will throw an exception if the change stream aggregation pipeline modifies an event's _id field.

Starting in MongoDB 4.2, you can use the aggregation pipeline for updates in:

For the updates, the pipeline can consist of the following stages:

Using the aggregation pipeline allows for a more expressive update statement, such as expressing conditional updates based on current field values or updating one field using the value of another field or fields.

See the individual reference pages for details and examples.

Starting in MongoDB 4.2, the update command and the associated mongo shell method db.collection.update() can accept a hint argument to specify the index to use. See:
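
For example, a sketch that forces an update to use an index on the status field; the members collection, filter, and index are illustrative:

db.members.update(
   { points: { $lte: 20 }, status: "P" },
   { $set: { misc1: "Need to activate" } },
   { multi: true, hint: { status: 1 } }
)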

Starting in MongoDB 4.2,

  • Operations which replace documents, such as replaceOne() or update() (when used with a replacement document), will first attempt to target a single shard by using the query filter. If the operation cannot target a single shard by the query filter, it then attempts to target by the replacement document. In earlier versions, these operations only attempt to target using the replacement document.

  • The save() method is deprecated: use the insertOne() or replaceOne() method instead. The save() method cannot be used with sharded collections that are not sharded by _id, and attempting to do so will result in an error.

  • For a replace document operation that includes upsert: true and is on a sharded collection, the filter must include an equality match on the full shard key.

MongoDB 4.2 introduces wildcard indexes for supporting queries against fields whose names are unknown or arbitrary.

Consider an application that captures user-defined data under the userMetadata field and supports querying against that data:

{ "userMetadata" : { "likes" : [ "dogs", "cats" ] } }
{ "userMetadata" : { "dislikes" : "pickles" } }
{ "userMetadata" : { "age" : 45 } }
{ "userMetadata" : "inactive" }

Administrators want to create indexes to support queries on any subfield of userMetadata.

A wildcard index on userMetadata can support single-field queries on userMetadata, userMetadata.likes, userMetadata.dislikes, and userMetadata.age:

db.userData.createIndex( { "userMetadata.$**" : 1 } )

The index can support the following queries:

db.userData.find({ "userMetadata.likes" : "dogs" })
db.userData.find({ "userMetadata.dislikes" : "pickles" })
db.userData.find({ "userMetadata.age" : { $gt : 30 } })
db.userData.find({ "userMetadata" : "inactive" })

A non-wildcard index on userMetadata can only support queries on values of userMetadata.

Important

Wildcard indexes are not designed to replace workload-based index planning. For more information on creating indexes to support queries, see Create Indexes to Support Your Queries. For complete documentation on wildcard index limitations, see Wildcard Index Restrictions.

The mongod featureCompatibilityVersion must be 4.2 to create wildcard indexes. For instructions on setting the fCV, see Set Feature Compatibility Version on MongoDB 6.0 Deployments.

You can create a wildcard index using the createIndexes database command or its shell helpers db.collection.createIndex() and db.collection.createIndexes(). For examples of creating a wildcard index, see Create a Wildcard Index.

See Wildcard Indexes for complete documentation.

  • MongoDB 4.2 adds support for:

    • Ubuntu 18.04 on ARM64

  • MongoDB 4.2 removes support for:

    • Debian 8

    • Ubuntu 14.04

    • Ubuntu 16.04 ARM64 for MongoDB Community Edition

    • Ubuntu 16.04 POWER/PPC64LE (Also removed in version 3.6.13 and 3.4.21)

    • macOS 10.11

Starting in version 4.2, MongoDB removes the --sslFIPSMode option for the following programs:

The programs will use FIPS compliant connections to mongod / mongos if the mongod / mongos instances are configured to use FIPS mode.

Starting in version 4.2,

  • For the following Database Tools, if the write concern is specified in both the --uri connection string and the --writeConcern option, the --writeConcern option overrides the one in the connection string:

  • For the following Database Tools, if the read preference is specified in both the --uri connection string and the --readPreference option, the --readPreference option overrides the one in the connection string:

Starting in version 4.2:

  • bsondump: Uses Extended JSON v2.0 (Canonical mode) format.

  • mongodump: Uses Extended JSON v2.0 (Canonical mode) format for the metadata. Requires mongorestore version 4.2 or later that supports Extended JSON v2.0 (Canonical or Relaxed mode) format.

    Tip

    In general, use corresponding versions of mongodump and mongorestore. That is, to restore data files created with a specific version of mongodump, use the corresponding version of mongorestore.

  • mongoexport: Creates output data in Extended JSON v2.0 (Relaxed mode) by default. Creates output data in Extended JSON v2.0 (Canonical mode) if used with --jsonFormat.

  • mongoimport: Expects import data to be in Extended JSON v2.0 (either Relaxed or Canonical mode) by default. Can recognize data that is in Extended JSON v1.0 format if the option --legacy is specified.

Tip

In general, the versions of mongoexport and mongoimport should match. That is, to import data created from mongoexport, you should use the corresponding version of mongoimport.

For details on MongoDB extended JSON v2, see MongoDB Extended JSON (v2).

Tip

See also:

The mongofiles commands get_id and delete_id can accept both ObjectId and non-ObjectId values for the _id. In earlier versions, get_id and delete_id could only accept ObjectId values for the _id.

Starting in version 4.2:

  • mongoimport uses a maximum batch size of 100,000 to perform bulk insert/upsert operations.

  • mongoimport, by default, continues when it encounters duplicate key and document validation errors. To ensure that the program stops on these errors, specify --stopOnError.

  • Specifying --maintainInsertionOrder for mongoimport:

    • Maintains document insertion order using ordered bulk write operations; i.e. both the batch order and document order within the batches are maintained. In earlier versions, only the batch order is maintained; document order within batches is not maintained.

    • Enables --stopOnError and sets numInsertionWorkers to 1.

Starting in version 4.2:

  • mongorestore, by default, continues when it encounters duplicate key and document validation errors. To ensure that the program stops on these errors, specify --stopOnError.

  • Specifying --maintainInsertionOrder for mongorestore:

    • Maintains document insertion order using ordered bulk write operations; i.e. both the batch order and document order within the batches are maintained. In earlier versions, only the batch order is maintained; document order within batches is not maintained.

    • Enables --stopOnError and sets --numInsertionWorkersPerCollection to 1.

Starting with MongoDB 4.2, the following operations take an exclusive collection lock instead of an exclusive database lock:

Prior to MongoDB 4.2, these operations took an exclusive lock on the database, blocking all operations on the database and its collections until the operation completed.

Starting in version 4.2, the Storage Node Watchdog is available in both the MongoDB Community edition and the MongoDB Enterprise edition.

In earlier versions, the feature is available in the MongoDB Enterprise edition only.

MongoDB 4.2 introduces a flow control mechanism to control the rate at which the primary applies its writes in order to keep the majority committed lag under a specified maximum value.

Flow control is enabled by default.

Note

For flow control to engage, the replica set/sharded cluster must have a featureCompatibilityVersion (fCV) of 4.2 and read concern majority enabled. That is, enabled flow control has no effect if the fCV is not 4.2 or if read concern majority is disabled.

For more information, see Replication Lag and Flow Control.

  • Added INITSYNC component to log messages.

  • Added ELECTION component to log messages.

  • For debug messages, include the verbosity level (i.e. D [1-5]). For example, if verbosity level is 2, MongoDB logs D2. In previous versions, MongoDB log messages only specified D for Debug level.

  • When logging to syslog, the format of the message text includes the component. For example:

    ... ACCESS [repl writer worker 5] Unsupported modification to roles collection ...

    Previously, the syslog message text did not include the component. For example:

    ... [repl writer worker 1] Unsupported modification to roles collection ...
  • MongoDB 4.2 adds a usedDisk indicator to the profiler log messages and diagnostic log messages for the aggregate operation. The usedDisk indicates whether any stages of an aggregate operation wrote data to temporary files due to memory restrictions. For more information on aggregation memory restrictions, see Memory Restrictions.

MongoDB 4.2 adds a new option idleCursors to the $currentOp aggregation stage in order to return information on idle cursors.
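
For example, a sketch that lists idle cursors along with other operations; $currentOp must be run through the admin database's aggregate helper:

db.getSiblingDB("admin").aggregate( [
   { $currentOp: { allUsers: true, idleCursors: true } },
   { $match: { type: "idleCursor" } }
] )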

In addition, MongoDB 4.2 adds new fields to the documents returned from the $currentOp aggregation stage, currentOp command, and db.currentOp() helper. The new fields report:

  • Whether the reported operation is an op, idleSession, or idleCursor.
  • Cursor details, available when returning getmore operations or idleCursor information.
  • The users associated with the operation.
  • The number of times the current operation had to wait for a prepared transaction with a write to commit or abort.
  • The users that are impersonating the effective users for the operation.
  • The number of times the current operation conflicted with another write operation.

See also 4.2 current op compatibility changes

Starting in MongoDB 4.2, the serverStatus command and the mongo shell method db.serverStatus() include the following output changes:

Starting in MongoDB 4.2, replSetGetStatus and its mongo shell helper rs.status() return:

MongoDB 4.2 deprecates the field lastStableCheckpointTimestamp.

Starting in version 4.2, MongoDB reports on ReplicationStateTransition lock information.

In addition, MongoDB 4.2 separates ParallelBatchWriterMode lock information from Global lock information. Earlier MongoDB versions report ParallelBatchWriterMode lock information as part of Global locks.

For operations that report on lock information, see:

Starting in MongoDB 4.2, the $collStats aggregation, the collStats command, and the mongo shell helper db.collection.stats() return information on indexes that are currently being built.

For details, see:

Starting in MongoDB 4.2, the $collStats aggregation, the collStats command, and the mongo shell helper db.collection.stats() return the scaleFactor used to scale the various size data.

Starting in MongoDB 4.2, the dbStats command, and the mongo shell helper db.stats() return the scaleFactor used to scale the various size data.
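
For example, sketches that request statistics scaled to kilobytes; the orders collection is illustrative, and the returned scaleFactor reflects the scale applied:

// Collection statistics scaled to KB via the $collStats storageStats.scale option
db.orders.aggregate( [ { $collStats: { storageStats: { scale: 1024 } } } ] )

// Database statistics scaled to KB
db.runCommand( { dbStats: 1, scale: 1024 } )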

Starting in MongoDB 4.2, MongoDB supports using expansion directives in configuration files to load externally sourced values. Expansion directives can load values for specific configuration file options or load the entire configuration file.

The following expansion directives are available:

  • __rest: Allows users to specify a REST endpoint as the external source for configuration file options or the full configuration file.
  • __exec: Allows users to specify a shell or terminal command as the external source for configuration file options or the full configuration file.

For complete documentation, see Externally Sourced Configuration File Values.

MongoDB 4.2 adds the --outputConfig option for mongod and mongos. The option outputs to stdout the mongod / mongos instance's configuration, in YAML format.

If the configuration uses any Externally Sourced Configuration File Values, the option returns the resolved value for those options.

Warning

This may include any configured passwords or secrets previously obfuscated through the external source.

For usage examples, see:

Starting in MongoDB 4.2, for featureCompatibilityVersion set to "4.2" or greater, MongoDB removes the Index Key Limit. For fCV set to "4.0", the limit still applies.

Starting in version 4.2, for featureCompatibilityVersion set to "4.2" or greater, MongoDB removes the Index Name Length limit of 127 byte maximum. In previous versions or MongoDB versions with featureCompatibilityVersion (fCV) set to "4.0", index names must fall within the limit.

Starting in MongoDB 4.2, you can specify multiple indexes to the dropIndexes command and its mongo shell helper db.collection.dropIndexes(). To specify multiple indexes to drop, pass an array of index names to dropIndexes/db.collection.dropIndexes().
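
For example, a sketch that drops two illustrative indexes by name in a single operation:

db.orders.dropIndexes( [ "category_1", "quantity_-1" ] )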

Starting in MongoDB 4.2, the dropIndexes command and its shell helpers dropIndex() and dropIndexes() only kill queries that are using the index being dropped. This may include queries that are considering the index as part of query planning.

Prior to MongoDB 4.2, dropping an index on a collection would kill all open queries on the collection.

Starting in version 4.2, MongoDB supports zstd for:

Starting in MongoDB 4.2, if a db.collection.bulkWrite() operation encounters an error inside a transaction, the method throws a BulkWriteException (same as outside a transaction).

In 4.0, if the bulkWrite operation encounters an error inside a transaction, the error thrown is not wrapped as a BulkWriteException.

Inside a transaction, the first error in a bulk write causes the entire bulk write to fail and aborts the transaction, even if the bulk write is unordered.

Starting in MongoDB 4.2, the cache entry is associated with a state:

Associating states with entries helps reduce the likelihood that sub-optimal cache entries remain in the cache. For more information, see Query Plans.

  • queryHash
    To help identify slow queries with the same query shape, starting in MongoDB 4.2, each query shape is associated with a queryHash. The queryHash is a hexadecimal string that represents a hash of the query shape and is dependent only on the query shape.

    Note

    As with any hash function, two different query shapes may result in the same hash value. However, the occurrence of hash collisions between different query shapes is unlikely.
  • planCacheKey

    To provide more insight into the query plan cache, MongoDB 4.2 introduces the planCacheKey.

    planCacheKey is a hash of the key for the plan cache entry associated with the query.

    Note

    Unlike the queryHash, the planCacheKey is a function of both the query shape and the currently available indexes for the shape. That is, if indexes that can support the query shape are added/dropped, the planCacheKey value may change whereas the queryHash value would not change.

  • The queryHash and planCacheKey are available in:

The fields are also available in operations that return information about the query plan cache:

  • $planCacheStats aggregation stage (New in MongoDB 4.2)

  • PlanCache.listQueryShapes() method/planCacheListQueryShapes command (Deprecated in MongoDB 4.2)

  • PlanCache.getPlansByQuery() method/planCacheListPlans command (Deprecated in MongoDB 4.2)

Starting in MongoDB 4.2 (and 4.0.7), the $not operator can perform a logical NOT operation on $regex operator expressions as well as on regular expression objects (i.e. /pattern/).

In 4.0 and earlier versions, you could use the $not operator with regular expression objects (i.e. /pattern/) but not with $regex operator expressions.
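
For example, both of the following forms are accepted starting in 4.2 (and 4.0.7); the inventory collection and pattern are illustrative:

db.inventory.find( { item: { $not: /^p.*/ } } )                  // regular expression object
db.inventory.find( { item: { $not: { $regex: "^p.*" } } } )      // $regex operator expression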

Starting in MongoDB 4.2, users can always kill their own cursors, regardless of whether the users have the privilege to killCursors. As such, the killCursors privilege has no effect starting in MongoDB 4.2.

In MongoDB 4.0, users required the killCursors privilege in order to kill their own cursors.

MongoDB 4.2 adds the parameter replBatchLimitBytes to configure the maximum oplog application batch size. The parameter is also available starting in MongoDB 4.0.10.

MongoDB 4.2 will retry certain single-document upserts (update with upsert: true and multi: false) that encounter a duplicate key exception. See Duplicate Key Errors on Upsert for conditions.

Prior to MongoDB 4.2, MongoDB would not retry upsert operations that encountered a duplicate key error.

Starting in MongoDB 4.2, the mongo shell method db.dropDatabase() can take an optional write concern document.
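
For example, a minimal sketch that drops the current database with a majority write concern; the wtimeout value is illustrative:

db.dropDatabase( { w: "majority", wtimeout: 5000 } )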

The dropConnections command drops the mongod / mongos instance's outgoing connections to the specified hosts. The dropConnections command must be run against the admin database.
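
For example, a sketch that drops outgoing connections to a single illustrative host:

db.adminCommand( { dropConnections: 1, hostAndPort: [ "shard01.example.net:27017" ] } )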

For the following operations, if the issuing client disconnects before the operation completes, MongoDB marks the operation for termination (i.e. runs killOp on the operation):

Starting in version 4.2 (and 4.0.13 and 3.6.14), if a replica set member uses the in-memory storage engine (voting or non-voting) but the replica set has writeConcernMajorityJournalDefault set to true, the replica set member logs a startup warning.

Starting in MongoDB 4.2 (and 4.0.13), the mongo shell displays a warning message when connected to non-genuine MongoDB instances as these instances may behave differently from the official MongoDB instances; e.g. missing or incomplete features, different feature behaviors, etc.

Starting in version 4.2, MongoDB deprecates:

  • The map-reduce option to create a new sharded collection as well as the use of the sharded option for map-reduce. To output to a sharded collection, create the sharded collection first. MongoDB 4.2 also deprecates the replacement of an existing sharded collection.

  • The explicit specification of the nonAtomic: false option.

Starting in MongoDB 4.2, the rollback time limit is calculated between the first operation after the common point and the last point in the oplog for the member to roll back.

In MongoDB 4.0, the rollback time limit is calculated between the common point and the last point in the oplog for the member to roll back.

For more information, see Rollback Elapsed Time Limitations.

MongoDB 4.2 adds a new mongo shell method isInteractive() that returns a boolean indicating whether the mongo shell is running in interactive or script mode.

Starting in MongoDB 4.2, the explain output can include a new optimizedPipeline field. For details, refer to optimizedPipeline.

Starting in MongoDB 4.2, the output for isMaster, and the db.isMaster() helper method, returns the isMaster.connectionId for the mongod / mongos instance's connection to the client.

MongoDB index builds against a populated collection require an exclusive read-write lock against the collection. Operations that require a read or write lock on the collection must wait until the mongod releases the lock. MongoDB uses an optimized build process that only holds the exclusive lock at the beginning and end of the index build. The rest of the build process yields to interleaving read and write operations.

For feature compatibility version (fcv) 4.2, MongoDB 4.2 index builds fully replace the index build processes supported in previous MongoDB versions. MongoDB ignores the background index build option if specified to createIndexes or its shell helpers createIndex() and createIndexes().

Note

Requires featureCompatibilityVersion 4.2

For MongoDB clusters upgraded from 4.0 to 4.2, you must set the feature compatibility version (fcv) to 4.2 to enable the optimized build process. For more information on setting the fCV, see setFeatureCompatibilityVersion.
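
For example, run the following against the admin database on the primary (replica set) or a mongos (sharded cluster); note that raising the fCV also enables other 4.2 features:

db.adminCommand( { setFeatureCompatibilityVersion: "4.2" } )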

MongoDB 4.2 clusters running with fCV 4.0 only support 4.0 index builds.

For complete documentation on the index build process, see Index Builds on Populated Collections.

Some changes can affect compatibility and may require user actions. For a detailed list of compatibility changes, see Compatibility Changes in MongoDB 4.2.

Important

Feature Compatibility Version

To upgrade, the 4.0 instances must have featureCompatibilityVersion set to 4.0. To check the version:

db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )

For specific details on verifying and setting the featureCompatibilityVersion as well as information on other prerequisites/considerations for upgrades, refer to the individual upgrade instructions:

If you need guidance on upgrading to 4.2, MongoDB professional services offer major version upgrade support to help ensure a smooth transition without interruption to your MongoDB application.

To download MongoDB 4.2, go to the MongoDB Download Center.

  • In version 4.2.0: Fixed in 4.2.1

To report an issue, see https://github.com/mongodb/mongo/wiki/Submit-Bug-Reports for instructions on how to file a JIRA ticket for the MongoDB server or one of the related projects.
