
MongoDB Limits and Thresholds

On this page

  • MongoDB Atlas Limitations
  • BSON Documents
  • Naming Restrictions
  • Namespaces
  • Indexes
  • Sorts
  • Data
  • Replica Sets
  • Sharded Clusters
  • Operations
  • Sessions
  • Shell

This document provides a collection of hard and soft limitations of the MongoDB system. The limitations on this page apply to deployments hosted in all of the following environments unless specified otherwise:

  • MongoDB Atlas: The fully managed service for MongoDB deployments in the cloud

  • MongoDB Enterprise: The subscription-based, self-managed version of MongoDB

  • MongoDB Community: The source-available, free-to-use, and self-managed version of MongoDB

The following limitations apply only to deployments hosted in MongoDB Atlas. If any of these limits present a problem for your organization, contact Atlas support.

Component | Limit
Shards in multi-region clusters | 12
Shards in single-region clusters | 50
Cross-region network permissions for a multi-region cluster | 40. Additionally, if a cluster in any project spans more than 40 regions, you can't create a multi-region cluster in that project.
Electable nodes per replica set or shard | 7
Cluster tier for the Config server (minimum and maximum) | M30

MongoDB Atlas limits concurrent incoming connections based on the cluster tier and class. MongoDB Atlas connection limits apply per node. For sharded clusters, MongoDB Atlas connection limits apply per mongos router. The number of mongos routers is equal to the number of replica set nodes across all shards.

Your read preference also contributes to the total number of connections that MongoDB Atlas can allocate for a given query.

MongoDB Atlas has the following connection limits for the specified cluster tiers. Because the limit also depends on the cluster class, each of the following tables applies to a different cluster class:

MongoDB Atlas Cluster Tier | Maximum Connections Per Node
M0 | 500
M2 | 500
M5 | 500
M10 | 1500
M20 | 3000
M30 | 3000
M40 | 6000
M50 | 16000
M60 | 32000
M80 | 96000
M140 | 96000
M200 | 128000
M300 | 128000

MongoDB Atlas Cluster Tier | Maximum Connections Per Node
M40 | 4000
M50 | 16000
M60 | 32000
M80 | 64000
M140 | 96000
M200 | 128000
M300 | 128000
M400 | 128000
M700 | 128000

MongoDB Atlas Cluster Tier | Maximum Connections Per Node
M0 | 500
M2 | 500
M5 | 500
M10 | 1500
M20 | 3000
M30 | 3000
M40 | 6000
M50 | 16000
M60 | 32000
M80 | 64000
M140 | 96000
M200 | 128000
M300 | 128000

Note

MongoDB Atlas reserves a small number of connections to each cluster for supporting MongoDB Atlas services.

If you're connecting to a multi-cloud MongoDB Atlas deployment through a private connection, you can access only the nodes in the same cloud provider that you're connecting from. This cloud provider might not have the primary node in its region. When this happens, you must specify the secondary read preference mode in the connection string to access the deployment.

If you need access to all nodes for your multi-cloud MongoDB Atlas deployment from your current provider through a private connection, you must perform one of the following actions:

  • Configure a VPN in the current provider to each of the remaining providers.

  • Configure a private endpoint to MongoDB Atlas for each of the remaining providers.

While there is no hard limit on the number of collections in a single MongoDB Atlas cluster, the performance of a cluster might degrade if it serves a large number of collections and indexes. Larger collections have a greater impact on performance.

The recommended maximum combined number of collections and indexes by MongoDB Atlas cluster tier is as follows:

MongoDB Atlas Cluster Tier | Recommended Maximum
M10 | 5,000 collections and indexes
M20 / M30 | 10,000 collections and indexes
M40 and larger | 100,000 collections and indexes

MongoDB Atlas deployments have the following organization and project limits:

Component | Limit
Database users per MongoDB Atlas project | 100
Atlas users per MongoDB Atlas project | 500
Atlas users per MongoDB Atlas organization | 500
API Keys per MongoDB Atlas organization | 500
Access list entries per MongoDB Atlas project | 200
Users per MongoDB Atlas team | 250
Teams per MongoDB Atlas project | 100
Teams per MongoDB Atlas organization | 250
Teams per MongoDB Atlas user | 100
Organizations per MongoDB Atlas user | 250
Linked organizations per MongoDB Atlas user | 50
Clusters per MongoDB Atlas project | 25
Projects per MongoDB Atlas organization | 250
Custom MongoDB roles per MongoDB Atlas project | 100
Assigned roles per database user | 100
Hourly billing per MongoDB Atlas organization | $50
Federated database instances per MongoDB Atlas project | 25
Total network peering connections per MongoDB Atlas project | 50. Additionally, MongoDB Atlas limits the number of nodes per network peering connection based on the CIDR block and the region selected for the project.
Pending network peering connections per MongoDB Atlas project | 25
AWS PrivateLink addressable targets per region | 50
Azure Private Link addressable targets per region | 150
Unique shard keys per MongoDB Atlas project | 40
Atlas Data Lake pipelines per MongoDB Atlas project | 25
M0 clusters per MongoDB Atlas project | 1

MongoDB Atlas limits the length of, and enforces RegEx requirements on, the following component labels:

Component | Character Limit | RegEx Pattern
Cluster Name | 64 [1] | ^([a-zA-Z0-9]([a-zA-Z0-9-]){0,21}(?<!-)([\w]{0,42}))$ [2]
Project Name | 64 | ^[\p{L}\p{N}\-_.(),:&@+']{1,64}$ [3]
Organization Name | 64 | ^[\p{L}\p{N}\-_.(),:&@+']{1,64}$ [3]
API Key Description | 250 |
[1] If you have peering-only mode enabled, the cluster name character limit is 23.
[2] MongoDB Atlas uses the first 23 characters of a cluster's name. These characters must be unique within the cluster's project. Cluster names with fewer than 23 characters can't end with a hyphen (-). Cluster names with more than 23 characters can't have a hyphen as the 23rd character.
[3](1, 2) Organization and project names can include any Unicode letter or number plus the following punctuation: -_.(),:&@+'.

Additional limitations apply to MongoDB Atlas serverless instances, free clusters, and shared clusters. Some MongoDB commands are unsupported in MongoDB Atlas, and some commands are supported only in MongoDB Atlas free clusters. To learn more, see the MongoDB Atlas documentation.

BSON Document Size

The maximum BSON document size is 16 megabytes.

The maximum document size helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. See mongofiles and the documentation for your driver for more information about GridFS.
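As a quick check, the mongo shell can report a document's BSON size; a minimal sketch (the collection name is illustrative):

var doc = db.largeDocs.findOne()
Object.bsonsize(doc) // size in bytes; must stay below 16777216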

Nested Depth for BSON Documents

MongoDB supports no more than 100 levels of nesting for BSON documents. Each object or array adds a level.

Use of Case in Database Names

Do not rely on case to distinguish between databases. For example, you cannot use two databases whose names differ only in case, such as salesData and SalesData.

After you create a database in MongoDB, you must use consistent capitalization when you refer to it. For example, if you create the salesData database, do not refer to it using alternate capitalization such as salesdata or SalesData.

Restrictions on Database Names for Windows

For MongoDB deployments running on Windows, database names cannot contain any of the following characters:

/\. "$*<>:|?

Database names also cannot contain the null character.

Restrictions on Database Names for Unix and Linux Systems

For MongoDB deployments running on Unix and Linux systems, database names cannot contain any of the following characters:

/\. "$

Database names also cannot contain the null character.

Length of Database Names

Database names cannot be empty and must have fewer than 64 characters.

Restriction on Collection Names

Collection names should begin with an underscore or a letter character, and cannot:

  • contain the $ character.

  • be an empty string (e.g. "").

  • contain the null character.

  • begin with the system. prefix. (Reserved for internal use.)

If your collection name includes special characters, such as the underscore character, or begins with numbers, then to access the collection use the db.getCollection() method in the mongo shell or a similar method for your driver.
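For example, a short sketch using db.getCollection() (collection names are illustrative):

db.getCollection("2021-sales").find() // name contains a hyphen
db.getCollection("1st_quarter").findOne() // name begins with a number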


Restrictions on Field Names
  • Field names cannot contain the null character.

  • Top-level field names cannot start with the dollar sign ($) character.

    Otherwise, starting in MongoDB 3.6, the server permits storage of field names that contain dots (i.e. .) and dollar signs (i.e. $).

    Important

    The MongoDB Query Language cannot always meaningfully express queries over documents whose field names contain these characters (see SERVER-30575).

    Until support is added in the query language, the use of $ and . in field names is not recommended and is not supported by the official MongoDB drivers.

Warning

MongoDB does not support duplicate field names

The MongoDB Query Language is undefined over documents with duplicate field names. BSON builders may support creating a BSON document with duplicate field names. While the BSON builder may not throw an error, inserting these documents into MongoDB is not supported even if the insert succeeds. For example, inserting a BSON document with duplicate field names through a MongoDB driver may result in the driver silently dropping the duplicate values prior to insertion.

Namespace Length
  • For featureCompatibilityVersion set to "4.4" or greater, MongoDB raises the limit on the collection/view namespace to 255 bytes. For a collection or a view, the namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>).

  • For featureCompatibilityVersion set to "4.2" or earlier, the maximum length of the collection/view namespace remains 120 bytes.


Index Key Limit

Note

Changed in version 4.2

Starting in version 4.2, MongoDB removes the Index Key Limit for featureCompatibilityVersion (fCV) set to "4.2" or greater.

For MongoDB 2.6 through MongoDB versions with fCV set to "4.0" or earlier, the total size of an index entry, which can include structural overhead depending on the BSON type, must be less than 1024 bytes.

When the Index Key Limit applies:

  • MongoDB will not create an index on a collection if the index entry for an existing document exceeds the index key limit.

  • Reindexing operations will error if the index entry for an indexed field exceeds the index key limit. Reindexing operations occur as part of the compact command as well as the db.collection.reIndex() method.

    Because these operations drop all the indexes from a collection and then recreate them sequentially, the error from the index key limit prevents these operations from rebuilding any remaining indexes for the collection.

  • MongoDB will not insert into an indexed collection any document with an indexed field whose corresponding index entry would exceed the index key limit, and instead, will return an error. Previous versions of MongoDB would insert but not index such documents.

  • Updates to the indexed field will error if the updated value causes the index entry to exceed the index key limit.

    If an existing document contains an indexed field whose index entry exceeds the limit, any update that results in the relocation of that document on disk will error.

  • mongorestore and mongoimport will not insert documents that contain an indexed field whose corresponding index entry would exceed the index key limit.

  • In MongoDB 2.6, secondary members of replica sets will continue to replicate documents with an indexed field whose corresponding index entry exceeds the index key limit on initial sync but will print warnings in the logs.

    Secondary members also allow index build and rebuild operations on a collection that contains an indexed field whose corresponding index entry exceeds the index key limit but with warnings in the logs.

    With mixed version replica sets where the secondaries are version 2.6 and the primary is version 2.4, secondaries will replicate documents inserted or updated on the 2.4 primary, but will print error messages in the log if the documents contain an indexed field whose corresponding index entry exceeds the index key limit.

  • For existing sharded collections, chunk migration will fail if the chunk has a document that contains an indexed field whose index entry exceeds the index key limit.

Number of Indexes per Collection

A single collection can have no more than 64 indexes.

Index Name Length

Note

Changed in version 4.2

Starting in version 4.2, MongoDB removes the Index Name Length limit for MongoDB versions with featureCompatibilityVersion (fCV) set to "4.2" or greater.

In previous versions of MongoDB or MongoDB versions with fCV set to "4.0" or earlier, fully qualified index names, which include the namespace and the dot separators (i.e. <database name>.<collection name>.$<index name>), cannot be longer than 127 bytes.

By default, <index name> is the concatenation of the field names and index type. You can explicitly specify the <index name> to the createIndex() method to ensure that the fully qualified index name does not exceed the limit.
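For example, a minimal sketch that names an index explicitly to keep the fully qualified name short (field and index names are illustrative):

db.products.createIndex(
   { manufacturer: 1, "specs.dimensions.height": 1, "specs.dimensions.width": 1 },
   { name: "mfg_dims" } // short explicit name instead of the long default concatenation
)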

Number of Indexed Fields in a Compound Index

There can be no more than 32 fields in a compound index.

Queries cannot use both text and Geospatial Indexes

You cannot combine the $text query, which requires a special text index, with a query operator that requires a different type of special index. For example, you cannot combine the $text query with the $near operator.
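Following the document's "// Invalid" convention, a sketch of such a rejected query (collection and field names are illustrative):

db.places.find( {
   $text: { $search: "coffee" },
   location: { $near: [ -73.98, 40.75 ] }
} ) // Invalid: $text and $near require different special index types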

Fields with 2dsphere Indexes can only hold Geometries

Fields with 2dsphere indexes must hold geometry data in the form of coordinate pairs or GeoJSON data. If you attempt to insert a document with non-geometry data in a 2dsphere indexed field, or build a 2dsphere index on a collection where the indexed field has non-geometry data, the operation will fail.

Tip

See also:

The unique indexes limit in Sharding Operational Restrictions.

NaN values returned from Covered Queries by the WiredTiger Storage Engine are always of type double

If the value of a field returned from a query that is covered by an index is NaN, the type of that NaN value is always double.

Multikey Index

Multikey indexes cannot cover queries over array field(s).

Geospatial Index

Geospatial indexes cannot cover a query.

Memory Usage in Index Builds

createIndexes supports building one or more indexes on a collection. createIndexes uses a combination of memory and temporary files on disk to complete index builds. The default limit on memory usage for createIndexes is 200 megabytes (for versions 4.2.3 and later) and 500 megabytes (for versions 4.2.2 and earlier), shared between all indexes built using a single createIndexes command. Once the memory limit is reached, createIndexes uses temporary disk files in a subdirectory named _tmp within the --dbpath directory to complete the build.

You can override the memory limit by setting the maxIndexBuildMemoryUsageMegabytes server parameter. Setting a higher memory limit may result in faster completion of index builds. However, setting this limit too high relative to the unused RAM on your system can result in memory exhaustion and server shutdown.
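A minimal sketch of raising the limit at runtime (the value is illustrative):

db.adminCommand( { setParameter: 1, maxIndexBuildMemoryUsageMegabytes: 500 } )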

Index builds may be initiated either by a user command such as Create Index or by an administrative process such as an initial sync. Both are subject to the limit set by maxIndexBuildMemoryUsageMegabytes.

An initial sync operation populates only one collection at a time and has no risk of exceeding the memory limit. However, it is possible for a user to start index builds on multiple collections in multiple databases simultaneously and potentially consume an amount of memory greater than the limit set in maxIndexBuildMemoryUsageMegabytes.

Tip

To minimize the impact of building an index on replica sets and sharded clusters with replica set shards, use a rolling index build procedure as described on Rolling Index Builds on Replica Sets.

Collation and Index Types

The following index types only support simple binary comparison and do not support collation:

  • text indexes

  • 2d indexes

  • geoHaystack indexes

Tip

To create a text, a 2d, or a geoHaystack index on a collection that has a non-simple collation, you must explicitly specify {collation: {locale: "simple"} } when creating the index.
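A minimal sketch of that tip, assuming a collection created with a non-simple default collation (collection and field names are illustrative):

db.createCollection( "articles", { collation: { locale: "fr" } } ) // non-simple default collation
db.articles.createIndex(
   { content: "text" },
   { collation: { locale: "simple" } } // required for text, 2d, and geoHaystack indexes
)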

Hidden Indexes

You cannot hide the _id index, and you cannot use hint() on a hidden index.

Maximum Number of Sort Keys

You can sort on a maximum of 32 keys.

Maximum Number of Documents in a Capped Collection

If you specify a maximum number of documents for a capped collection using the max parameter to create, the limit must be less than 2^32 documents. If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.
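A sketch of creating a capped collection with a document limit (the name and sizes are illustrative):

db.createCollection( "eventlog", {
   capped: true,
   size: 16777216, // maximum size in bytes (required for capped collections)
   max: 1000000 // maximum number of documents; must be less than 2^32
} )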

Number of Members of a Replica Set

Replica sets can have up to 50 members.

Number of Voting Members of a Replica Set

Replica sets can have up to 7 voting members. For replica sets with more than 7 total members, see Non-Voting Members.

Maximum Size of Auto-Created Oplog

If you do not explicitly specify an oplog size (i.e. with oplogSizeMB or --oplogSize) MongoDB will create an oplog that is no larger than 50 gigabytes. [4]

[4] The oplog can grow past its configured size limit to avoid deleting the majority commit point.
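If the auto-created size is not appropriate, the oplog can be resized at runtime with the replSetResizeOplog command; a minimal sketch (the size, in megabytes, is illustrative):

db.adminCommand( { replSetResizeOplog: 1, size: 16000 } ) // resize the oplog to 16000 MB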

Sharded clusters have the restrictions and thresholds described here.

Operations Unavailable in Sharded Environments

$where does not permit references to the db object from the $where function. This is uncommon in un-sharded collections.

The geoSearch command is not supported in sharded environments.

In MongoDB 4.4 and earlier, you cannot specify sharded collections in the from parameter of $lookup stages.

Covered Queries in Sharded Clusters

When run on mongos, indexes can only cover queries on sharded collections if the index contains the shard key.

Sharding Existing Collection Data Size

An existing collection can only be sharded if its size does not exceed specific limits. These limits can be estimated based on the average size of all shard key values, and the configured chunk size.

Important

These limits only apply for the initial sharding operation. Sharded collections can grow to any size after successfully enabling sharding.

MongoDB distributes documents in the collection so that each chunk is half full at creation. Use the following formulas to calculate the theoretical maximum collection size.

maxSplits = 16777216 (bytes) / <average size of shard key values in bytes>
maxCollectionSize (MB) = maxSplits * (chunkSize / 2)

Note

The maximum BSON document size is 16MB or 16777216 bytes.

All conversions should use base-2 scale, e.g. 1024 kilobytes = 1 megabyte.
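For example, with an average shard key value size of 64 bytes and a 64 MB chunk size:

maxSplits = 16777216 / 64 = 262144
maxCollectionSize = 262144 * (64 / 2) MB = 8388608 MB = 8 TB

This matches the 64-byte column of the table below.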

If maxCollectionSize is less than or nearly equal to the target collection size, increase the chunk size to ensure successful initial sharding. If there is doubt as to whether the result of the calculation is too 'close' to the target collection size, it is likely better to increase the chunk size.

After successful initial sharding, you can reduce the chunk size as needed. If you later reduce the chunk size, it may take time for all chunks to split to the new size. See Modify Chunk Size in a Sharded Cluster for instructions on modifying chunk size.

This table illustrates the approximate maximum collection sizes using the formulas described above:

Average Size of Shard Key Values | 512 bytes | 256 bytes | 128 bytes | 64 bytes
Maximum Number of Splits | 32,768 | 65,536 | 131,072 | 262,144
Max Collection Size (64 MB Chunk Size) | 1 TB | 2 TB | 4 TB | 8 TB
Max Collection Size (128 MB Chunk Size) | 2 TB | 4 TB | 8 TB | 16 TB
Max Collection Size (256 MB Chunk Size) | 4 TB | 8 TB | 16 TB | 32 TB
Single Document Modification Operations in Sharded Collections

All update() and remove() operations for a sharded collection that specify the justOne or multi: false option must include the shard key or the _id field in the query specification. update() and remove() operations specifying justOne or multi: false in a sharded collection that do not contain either the shard key or the _id field return an error.
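For example, a sketch of single-document modifications that satisfy this requirement, assuming a collection sharded on { zipcode: 1 } (names and values are illustrative):

db.customers.updateOne( { zipcode: "10001", status: "A" }, { $set: { status: "B" } } ) // includes the shard key
db.customers.deleteOne( { _id: ObjectId("507f191e810c19729de860ea") } ) // targets by _id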

Unique Indexes in Sharded Collections

MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.

Tip

See:

Unique Constraints on Arbitrary Fields for an alternate approach.
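A minimal sketch, again assuming a collection sharded on { zipcode: 1 } (names are illustrative):

db.customers.createIndex( { zipcode: 1, email: 1 }, { unique: true } ) // allowed: shard key is a prefix
db.customers.createIndex( { email: 1 }, { unique: true } ) // fails: does not contain the shard key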

Maximum Number of Documents Per Chunk to Migrate

By default, MongoDB cannot move a chunk if the number of documents in the chunk is greater than 1.3 times the result of dividing the configured chunk size by the average document size. db.collection.stats() includes the avgObjSize field, which represents the average document size in the collection.

For chunks that are too large to migrate, starting in MongoDB 4.4:

  • A new balancer setting attemptToBalanceJumboChunks allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo. See Balance Chunks that Exceed Size Limit for details.

  • The moveChunk command can specify a new option forceJumbo to allow for the migration of chunks that are too large to move. The chunks may or may not be labeled jumbo.
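For the second option, a sketch of moveChunk with forceJumbo (the namespace, find document, and destination shard are illustrative):

db.adminCommand( {
   moveChunk: "test.orders",
   find: { customerId: 100 },
   to: "shard0001",
   forceJumbo: true // allow migration of a chunk that exceeds the size limit
} )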

Shard Key Size

Starting in version 4.4, MongoDB removes the limit on the shard key size.

For MongoDB 4.2 and earlier, a shard key cannot exceed 512 bytes.

Shard Key Index Type

A shard key index can be an ascending index on the shard key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a hashed index.

A shard key index cannot be an index that specifies a multikey index, a text index, or a geospatial index on the shard key fields.

Shard Key Selection is Immutable in MongoDB 4.2 and Earlier

Note

Changed in version 4.4

Starting in MongoDB 4.4, you can refine a collection's shard key by adding a suffix field or fields to the existing key. See refineCollectionShardKey.
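A minimal sketch of such a refinement, assuming an existing shard key of { customerId: 1 } (namespace and field names are illustrative); the refined key requires a supporting index:

db.getSiblingDB("test").orders.createIndex( { customerId: 1, orderId: 1 } ) // supporting index
db.adminCommand( {
   refineCollectionShardKey: "test.orders",
   key: { customerId: 1, orderId: 1 } // existing key plus a suffix field
} )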

In MongoDB 4.2 and earlier, once you shard a collection, the selection of the shard key is immutable; i.e. you cannot select a different shard key for that collection.

If you must change a shard key:

  • Dump all data from MongoDB into an external format.

  • Drop the original sharded collection.

  • Configure sharding using the new shard key.

  • Pre-split the shard key range to ensure initial even distribution.

  • Restore the dumped data into MongoDB.

Monotonically Increasing Shard Keys Can Limit Insert Throughput

For clusters with high insert volumes, a shard key with monotonically increasing or decreasing values can affect insert throughput. If your shard key is the _id field, be aware that the default values of the _id field are ObjectIds, which have generally increasing values.

When inserting documents with monotonically increasing shard keys, all inserts belong to the same chunk on a single shard. The system eventually divides the chunk range that receives all write operations and migrates its contents to distribute data more evenly. However, at any moment the cluster directs insert operations only to a single shard, which creates an insert throughput bottleneck.

If the operations on the cluster are predominately read operations and updates, this limitation may not affect the cluster.

To avoid this constraint, use a hashed shard key or select a field that does not increase or decrease monotonically.

Hashed shard keys and hashed indexes store hashes of keys with ascending values.
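For example, a sketch of sharding on a hashed _id so inserts spread across shards (the namespace is illustrative):

sh.shardCollection( "test.events", { _id: "hashed" } )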

Sort Operations

If MongoDB cannot use an index or indexes to obtain the sort order, MongoDB must perform a blocking sort operation on the data. The name refers to the requirement that the SORT stage reads all input documents before returning any output documents, blocking the flow of data for that specific query.

If MongoDB requires using more than 100 megabytes of system memory for the blocking sort operation, MongoDB returns an error unless the query specifies cursor.allowDiskUse() (New in MongoDB 4.4). allowDiskUse() allows MongoDB to use temporary files on disk to store data exceeding the 100 megabyte system memory limit while processing a blocking sort operation.

Changed in version 4.4: For MongoDB 4.2 and prior, blocking sort operations could not exceed 32 megabytes of system memory.

For more information on sorts and index use, see Sort and Index Use.
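A minimal sketch of the cursor.allowDiskUse() option described above (MongoDB 4.4 or later; collection and field names are illustrative):

db.sensorReadings.find().sort( { timestamp: 1 } ).allowDiskUse()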

Aggregation Pipeline Operation

Each individual pipeline stage has a limit of 100 megabytes of RAM. By default, if a stage exceeds this limit, MongoDB produces an error. For some pipeline stages you can allow pipeline processing to take up more space by using the allowDiskUse option to enable aggregation pipeline stages to write data to temporary files.

The $search aggregation stage is not restricted to 100 megabytes of RAM because it runs in a separate process.
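A minimal sketch of enabling disk use for a pipeline (collection and field names are illustrative):

db.orders.aggregate(
   [
      { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
      { $sort: { total: -1 } }
   ],
   { allowDiskUse: true } // lets eligible stages write temporary files
)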

Examples of stages that can spill to disk when allowDiskUse is true are:

  • $bucket

  • $bucketAuto

  • $group

  • $sort, when the sort is not supported by an index

  • $sortByCount

Note

Pipeline stages operate on streams of documents, with each pipeline stage taking in documents, processing them, and then outputting the resulting documents.

Some stages can't output any documents until they have processed all incoming documents. These pipeline stages must keep their stage output in RAM until all incoming documents are processed. As a result, these pipeline stages may require more space than the 100 MB limit.

If the results of one of your $sort pipeline stages exceed the limit, consider adding a $limit stage.

Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.

Aggregation and Read Concern

Starting in MongoDB 4.2, the $out stage cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $out stage in the pipeline.

2d Geospatial queries cannot use the $or operator
Geospatial Queries

For spherical queries, use the 2dsphere index. Using a 2d index for spherical queries may lead to incorrect results, such as for spherical queries that wrap around the poles.

Geospatial Coordinates
  • Valid longitude values are between -180 and 180, both inclusive.

  • Valid latitude values are between -90 and 90, both inclusive.

Area of GeoJSON Polygons

For $geoIntersects or $geoWithin, if you specify a single-ringed polygon that has an area greater than a single hemisphere, include the custom MongoDB coordinate reference system in the $geometry expression; otherwise, $geoIntersects or $geoWithin queries for the complementary geometry. For all other GeoJSON polygons with areas greater than a hemisphere, $geoIntersects or $geoWithin queries for the complementary geometry.

Multi-document Transactions

For multi-document transactions:

  • You can create collections and indexes in transactions. For details, see Create Collections and Indexes in a Transaction

  • The collections used in a transaction can be in different databases.

    Note

    You cannot create new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.

  • You cannot write to capped collections.

  • You cannot read/write to collections in the config, admin, or local databases.

  • You cannot write to system.* collections.

  • You cannot return the supported operation's query plan using explain or similar commands.

  • For cursors created outside of a transaction, you cannot call getMore inside the transaction.

  • For cursors created in a transaction, you cannot call getMore outside the transaction.

Changed in version 4.4.

The following operations are not allowed in transactions:

Transactions have a lifetime limit as specified by transactionLifetimeLimitSeconds. The default is 60 seconds.
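A sketch of adjusting that lifetime at runtime (the value is illustrative):

db.adminCommand( { setParameter: 1, transactionLifetimeLimitSeconds: 120 } )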

Write Command Batch Limit Size

100,000 writes are allowed in a single batch operation, defined by a single request to the server.

Changed in version 3.6: The limit is raised from 1,000 to 100,000 writes. This limit also applies to legacy OP_INSERT messages.

The Bulk() operations in the mongo shell and comparable methods in the drivers do not have this limit.

Views

The view definition pipeline cannot include the $out or the $merge stage. If the view definition includes a nested pipeline (e.g. the view definition includes a $lookup or $facet stage), this restriction applies to the nested pipelines as well.

Views have the following operation restrictions:

Projection Restrictions

New in version 4.4:

$-Prefixed Field Path Restriction
Starting in MongoDB 4.4, the find() and findAndModify() projection cannot project a field that starts with $, with the exception of the DBRef fields. For example, starting in MongoDB 4.4, the following operation is invalid:
db.inventory.find( {}, { "$instock.warehouse": 0, "$item": 0, "detail.$price": 1 } ) // Invalid starting in 4.4
MongoDB already has a restriction where top-level field names cannot start with the dollar sign ($). In earlier versions, MongoDB ignores the $-prefixed field projections.
$ Positional Operator Placement Restriction
Starting in MongoDB 4.4, the $ projection operator can only appear at the end of the field path, for example "field.$" or "fieldA.fieldB.$". For example, starting in MongoDB 4.4, the following operation is invalid:
db.inventory.find( { }, { "instock.$.qty": 1 } ) // Invalid starting in 4.4
To resolve, remove the component of the field path that follows the $ projection operator. In previous versions, MongoDB ignores the part of the path that follows the $; i.e. the projection is treated as "instock.$".
Empty Field Name Projection Restriction
Starting in MongoDB 4.4, find() and findAndModify() projection cannot include a projection of an empty field name. For example, starting in MongoDB 4.4, the following operation is invalid:
db.inventory.find( { }, { "": 0 } ) // Invalid starting in 4.4
In previous versions, MongoDB treats the inclusion/exclusion of the empty field as it would the projection of non-existing fields.
Path Collision: Embedded Documents and Its Fields
Starting in MongoDB 4.4, it is illegal to project an embedded document together with any of that document's fields. For example, consider a collection inventory with documents that contain a size field:
{ ..., size: { h: 10, w: 15.25, uom: "cm" }, ... }
Starting in MongoDB 4.4, the following operation fails with a Path collision error because it attempts to project both the size document and the size.uom field:
db.inventory.find( {}, { size: 1, "size.uom": 1 } ) // Invalid starting in 4.4
In previous versions, the lattermost projection between the embedded document and its fields determines the projection:
  • If the projection of the embedded document comes after any and all projections of its fields, MongoDB projects the embedded document. For example, the projection document { "size.uom": 1, size: 1 } produces the same result as the projection document { size: 1 }.

  • If the projection of the embedded document comes before the projection of any of its fields, MongoDB projects the specified field or fields. For example, the projection document { "size.uom": 1, size: 1, "size.h": 1 } produces the same result as the projection document { "size.uom": 1, "size.h": 1 }.

Path Collision: $slice of an Array and Embedded Fields
Starting in MongoDB 4.4, find() and findAndModify() projection cannot contain both a $slice of an array and a field embedded in the array. For example, consider a collection inventory that contains an array field instock:
{ ..., instock: [ { warehouse: "A", qty: 35 }, { warehouse: "B", qty: 15 }, { warehouse: "C", qty: 35 } ], ... }
Starting in MongoDB 4.4, the following operation fails with a Path collision error:
db.inventory.find( {}, { "instock": { $slice: 1 }, "instock.warehouse": 0 } ) // Invalid starting in 4.4
In previous versions, the projection applies both: it returns the first element ($slice: 1) in the instock array and suppresses the warehouse field in the projected element. Starting in MongoDB 4.4, to achieve the same result, use the db.collection.aggregate() method with two separate $project stages.
$ Positional Operator and $slice Restriction
Starting in MongoDB 4.4, find() and findAndModify() projection cannot include a $slice projection expression as part of a $ projection expression. For example, starting in MongoDB 4.4, the following operation is invalid:
db.inventory.find( { "instock.qty": { $gt: 25 } }, { "instock.$": { $slice: 1 } } ) // Invalid starting in 4.4
In previous versions, MongoDB returns the first element (instock.$) in the instock array that matches the query condition; i.e. the positional projection "instock.$" takes precedence and the $slice: 1 is a no-op. The "instock.$": { $slice: 1 } does not exclude any other document field.

Sessions and $external Username Limit

Changed in version 3.6.3: To use sessions with $external authentication users (i.e. Kerberos, LDAP, x.509 users), the usernames cannot be greater than 10k bytes.

Session Idle Timeout

Sessions that receive no read or write operations for 30 minutes or that are not refreshed using refreshSessions within this threshold are marked as expired and can be closed by the MongoDB server at any time. Closing a session kills any in-progress operations and open cursors associated with the session. This includes cursors configured with noCursorTimeout() or a maxTimeMS() greater than 30 minutes.

Consider an application that issues a db.collection.find(). The server returns a cursor along with a batch of documents defined by the cursor.batchSize() of the find(). The session refreshes each time the application requests a new batch of documents from the server. However, if the application takes longer than 30 minutes to process the current batch of documents, the session is marked as expired and closed. When the application requests the next batch of documents, the server returns an error as the cursor was killed when the session was closed.

For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation within an explicit session using Mongo.startSession() and periodically refresh the session using the refreshSessions command. For example:

var session = db.getMongo().startSession()
var sessionId = session.getSessionId().id
sessionId // show the sessionId
var cursor = session.getDatabase("examples").getCollection("data").find().noCursorTimeout()
var refreshTimestamp = new Date() // take note of time at operation start

while (cursor.hasNext()) {
   // Check if more than 5 minutes have passed since the last refresh
   if ( (new Date() - refreshTimestamp) / 1000 > 300 ) {
      print("refreshing session")
      db.adminCommand( { "refreshSessions" : [ sessionId ] } )
      refreshTimestamp = new Date()
   }
   // process cursor normally
}

In the example operation, the db.collection.find() method is associated with an explicit session. The cursor is configured with noCursorTimeout() to prevent the server from closing the cursor if idle. The while loop includes a block that uses refreshSessions to refresh the session every 5 minutes. Since the session will never exceed the 30 minute idle timeout, the cursor can remain open indefinitely.

For MongoDB drivers, defer to the driver documentation for instructions and syntax for creating sessions.

The mongo shell prompt has a limit of 4095 codepoints for each line. If you enter a line with more than 4095 codepoints, the shell will truncate it.
