
Aggregation Pipeline Limits

On this page

  • Result Size Restrictions
  • Memory Restrictions

Aggregation operations with the aggregate command have the following limitations.

Result Size Restrictions

The aggregate command can either return a cursor or store its results in a collection. Each document in the result set is subject to the 16 megabyte BSON Document Size limit; if any single document exceeds that limit, the command produces an error. The limit applies only to the returned documents: during pipeline processing, intermediate documents may exceed this size. The db.collection.aggregate() method returns a cursor by default.
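For example, a minimal mongosh sketch, assuming a hypothetical orders collection with status, customerId, and amount fields:

   // Returns a cursor by default; iterate it to retrieve the result documents.
   const cursor = db.orders.aggregate([
     { $match: { status: "shipped" } },
     { $group: { _id: "$customerId", total: { $sum: "$amount" } } }
   ]);
   cursor.forEach(doc => printjson(doc));

   // Alternatively, store the results in a collection with $out,
   // so no result documents are returned to the client.
   db.orders.aggregate([
     { $match: { status: "shipped" } },
     { $out: "shippedOrders" }
   ]);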

Memory Restrictions

Each individual pipeline stage has a limit of 100 megabytes of RAM. By default, if a stage exceeds this limit, MongoDB produces an error. For some pipeline stages, you can set the allowDiskUse option to true to let those stages write data to temporary files when they need more space.
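A sketch of passing the allowDiskUse option to db.collection.aggregate() in mongosh; the orders collection and its fields are hypothetical:

   // Let eligible stages (such as a large $group) write temporary files
   // to disk instead of failing when they exceed the 100 MB per-stage limit.
   db.orders.aggregate(
     [
       { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
       { $sort: { total: -1 } }
     ],
     { allowDiskUse: true }
   );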

The $search aggregation stage is not restricted to 100 megabytes of RAM because it runs in a separate process.

Examples of stages that can spill to disk when allowDiskUse is true are:

  • $bucket
  • $bucketAuto
  • $group
  • $sort, when the sort operation is not supported by an index
  • $sortByCount

Note

Pipeline stages operate on streams of documents: each stage takes in documents, processes them, and then outputs the resulting documents.

Some stages can't output any documents until they have processed all incoming documents. These pipeline stages must keep their stage output in RAM until all incoming documents are processed. As a result, these pipeline stages may require more space than the 100 MB limit.

If the results of one of your $sort pipeline stages exceed the limit, consider adding a $limit stage.
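For instance, a sketch (again with a hypothetical orders collection) that places $limit immediately after $sort, so the sort only needs to track the top documents in memory rather than the full result set:

   db.orders.aggregate([
     { $sort: { amount: -1 } },  // sort on a hypothetical amount field
     { $limit: 100 }             // only the 100 largest values are kept in memory
   ]);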

Starting in MongoDB 4.2, profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.
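Assuming the database profiler is enabled, one way to find operations that spilled to disk is to query the system.profile collection for that indicator; this is a sketch, not the only way to inspect it:

   // Enable profiling for all operations (level 2), then look for
   // operations that wrote temporary files to disk.
   db.setProfilingLevel(2);
   db.system.profile.find({ usedDisk: true }).sort({ ts: -1 }).limit(5);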
