Aggregation Pipeline Stages
In the db.collection.aggregate() method and the db.aggregate() method, pipeline stages appear in an array. In the Atlas UI, you can arrange pipeline stages using the aggregation pipeline builder. Documents pass through the stages in sequence.
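For example, a two-stage pipeline might filter documents and then group the survivors; the stage documents appear in order in the array. (The `orders` collection and its fields here are hypothetical, for illustration only.)

```javascript
// Hypothetical "orders" collection: documents flow through $match first,
// and only the matching documents reach $group.
db.orders.aggregate( [
   // Stage 1: keep only completed orders
   { $match: { status: "complete" } },
   // Stage 2: total the order amounts per customer
   { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }
] )
```

Reordering the array reorders the stages, so placing `$match` early typically reduces the number of documents later stages must process.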
Compatibility
You can use pipeline stages for deployments hosted in the following environments:
MongoDB Atlas: The fully managed service for MongoDB deployments in the cloud
MongoDB Enterprise: The subscription-based, self-managed version of MongoDB
MongoDB Community: The source-available, free-to-use, and self-managed version of MongoDB
Stages
db.collection.aggregate() Stages
All except the $out, $merge, $geoNear, and $changeStream stages can appear multiple times in a pipeline.
Note
For details on a specific operator, including syntax and examples, click on the link to the operator's reference page.
db.collection.aggregate( [ { <stage> }, ... ] )
Stage | Description |
---|---|
$addFields | Adds new fields to documents. Similar to $project, $addFields outputs documents that contain all existing fields from the input documents and newly added fields. |
$bucket | Categorizes incoming documents into groups, called buckets, based on a specified expression and bucket boundaries. |
$bucketAuto | Categorizes incoming documents into a specific number of groups, called buckets, based on a specified expression. Bucket boundaries are automatically determined in an attempt to evenly distribute the documents into the specified number of buckets. |
$changeStream | Returns a Change Stream cursor for the collection. This stage can only occur once in an aggregation pipeline and it must occur as the first stage. |
$collStats | Returns statistics regarding a collection or view. |
$count | Returns a count of the number of documents at this stage of the aggregation pipeline. Distinct from the $count aggregation accumulator. |
$facet | Processes multiple aggregation pipelines within a single stage on the same set of input documents. Enables the creation of multi-faceted aggregations capable of characterizing data across multiple dimensions, or facets, in a single stage. |
$graphLookup | Performs a recursive search on a collection. To each output document, adds a new array field that contains the traversal results of the recursive search for that document. |
$group | Groups input documents by a specified identifier expression and applies the accumulator expression(s), if specified, to each group. Consumes all input documents and outputs one document per each distinct group. The output documents only contain the identifier field and, if specified, accumulated fields. |
$indexStats | Returns statistics regarding the use of each index for the collection. |
$limit | Passes the first n documents unmodified to the pipeline where n is the specified limit. For each input document, outputs either one document (for the first n documents) or zero documents (after the first n documents). |
$listSessions | Lists all sessions that have been active long enough to propagate to the system.sessions collection. |
$lookup | Performs a left outer join to another collection in the same database to filter in documents from the "joined" collection for processing. |
$match | Filters the document stream to allow only matching documents to pass unmodified into the next pipeline stage. |
$merge | Writes the resulting documents of the aggregation pipeline to a collection. The stage can incorporate (insert new documents, merge documents, replace documents, keep existing documents, fail the operation, process documents with a custom update pipeline) the results into an output collection. To use the $merge stage, it must be the last stage in the pipeline. |
$out | Writes the resulting documents of the aggregation pipeline to a collection. To use the $out stage, it must be the last stage in the pipeline. |
$planCacheStats | Returns plan cache information for a collection. |
$project | Reshapes each document in the stream, such as by adding new fields or removing existing fields. For each input document, outputs one document. See also $unset. |
$redact | Reshapes each document in the stream by restricting the content for each document based on information stored in the documents themselves. Incorporates the functionality of $project and $match. |
$replaceRoot | Replaces a document with the specified embedded document. The operation replaces all existing fields in the input document, including the _id field. |
$replaceWith | Replaces a document with the specified embedded document. The operation replaces all existing fields in the input document, including the _id field. Alias for $replaceRoot. |
$sample | Randomly selects the specified number of documents from its input. |
$search | Performs a full-text search of the field or fields in an Atlas collection. $search is only available for MongoDB Atlas deployments. |
$set | Adds new fields to documents. Similar to $project, $set outputs documents that contain all existing fields from the input documents and newly added fields. Alias for $addFields. |
$setWindowFields | Groups documents into windows and applies one or more operators to the documents in each window. New in version 5.0. |
$skip | Skips the first n documents where n is the specified skip number and passes the remaining documents unmodified to the pipeline. For each input document, outputs either zero documents (for the first n documents) or one document (if after the first n documents). |
$sort | Reorders the document stream by a specified sort key. Only the order changes; the documents remain unmodified. For each input document, outputs one document. |
$sortByCount | Groups incoming documents based on the value of a specified expression, then computes the count of documents in each distinct group. |
$unionWith | Performs a union of two collections; i.e. combines pipeline results from two collections into a single result set. |
$unwind | Deconstructs an array field from the input documents to output a document for each element. Each output document replaces the array with an element value. For each input document, outputs n documents where n is the number of array elements and can be zero for an empty array. |
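As an illustration of how these stages compose, the following sketch (collection and field names are hypothetical) filters documents, counts them per group, and then writes the results out, with $merge in its required position as the final stage:

```javascript
// Hypothetical "sales" collection.
db.sales.aggregate( [
   { $match: { year: 2023 } },            // keep 2023 sales only
   { $sortByCount: "$region" },           // one output doc per region, with a count
   { $merge: { into: "salesByRegion" } }  // $merge must be the last stage
] )
```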
For aggregation expression operators to use in the pipeline stages, see Aggregation Pipeline Operators.
db.aggregate() Stages
Starting in version 3.6, MongoDB also provides the db.aggregate() method:
db.aggregate( [ { <stage> }, ... ] )
The following stages use the db.aggregate() method and not the db.collection.aggregate() method.
Stage | Description |
---|---|
$changeStream | Returns a Change Stream cursor for the collection. This stage can only occur once in an aggregation pipeline and it must occur as the first stage. |
$currentOp | Returns information on active and/or dormant operations for the MongoDB deployment. |
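Because $currentOp reports on the deployment rather than on a collection, it runs through the database-level db.aggregate() helper against the admin database, for example:

```javascript
// idleConnections: true also reports dormant operations,
// not just currently active ones.
db.getSiblingDB("admin").aggregate( [
   { $currentOp: { allUsers: true, idleConnections: true } }
] )
```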
Stages Available for Updates
You can use the aggregation pipeline for updates in:
Command | mongosh Methods |
---|---|
findAndModify | db.collection.findOneAndUpdate(), db.collection.findAndModify() |
update | db.collection.updateOne(), db.collection.updateMany() |
For the updates, the pipeline can consist of the following stages:
$addFields and its alias $set
$replaceRoot and its alias $replaceWith
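An update becomes a pipeline update when the update argument is an array of stages rather than an update document; for example, with hypothetical collection and field names:

```javascript
// The array form of the second argument makes this an aggregation
// pipeline update; $set here is the pipeline stage (alias of $addFields).
db.students.updateMany(
   { grade: { $gte: 90 } },
   [ { $set: { honors: true, modified: "$$NOW" } } ]
)
```

Pipeline updates can reference existing field values (such as "$$NOW" above), which plain update documents cannot.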
Alphabetical Listing of Stages
Name | Description |
---|---|
$addFields | Adds new fields to documents. Outputs documents that contain all existing fields from the input documents and newly added fields. |
$bucket | Categorizes incoming documents into groups, called buckets, based on a specified expression and bucket boundaries. |
$bucketAuto | Categorizes incoming documents into a specific number of groups, called buckets, based on a specified expression. Bucket boundaries are automatically determined in an attempt to evenly distribute the documents into the specified number of buckets. |
$changeStream | Returns a Change Stream cursor for the collection or database. This stage can only occur once in an aggregation pipeline and it must occur as the first stage. |
$collStats | Returns statistics regarding a collection or view. |
$count | Returns a count of the number of documents at this stage of the aggregation pipeline. Distinct from the $count aggregation accumulator. |
$currentOp | Returns information on active and/or dormant operations for the MongoDB deployment. To run, use the db.aggregate() method. |
$facet | Processes multiple aggregation pipelines within a single stage on the same set of input documents. Enables the creation of multi-faceted aggregations capable of characterizing data across multiple dimensions, or facets, in a single stage. |
$graphLookup | Performs a recursive search on a collection. To each output document, adds a new array field that contains the traversal results of the recursive search for that document. |
$group | Groups input documents by a specified identifier expression and applies the accumulator expression(s), if specified, to each group. Consumes all input documents and outputs one document per each distinct group. The output documents only contain the identifier field and, if specified, accumulated fields. |
$indexStats | Returns statistics regarding the use of each index for the collection. |
$limit | Passes the first n documents unmodified to the pipeline where n is the specified limit. For each input document, outputs either one document (for the first n documents) or zero documents (after the first n documents). |
$listSessions | Lists all sessions that have been active long enough to propagate to the system.sessions collection. |
$lookup | Performs a left outer join to another collection in the same database to filter in documents from the "joined" collection for processing. |
$match | Filters the document stream to allow only matching documents to pass unmodified into the next pipeline stage. |
$merge | Writes the resulting documents of the aggregation pipeline to a collection. The stage can incorporate (insert new documents, merge documents, replace documents, keep existing documents, fail the operation, process documents with a custom update pipeline) the results into an output collection. To use the $merge stage, it must be the last stage in the pipeline. New in version 4.2. |
$out | Writes the resulting documents of the aggregation pipeline to a collection. To use the $out stage, it must be the last stage in the pipeline. |
$planCacheStats | Returns plan cache information for a collection. |
$project | Reshapes each document in the stream, such as by adding new fields or removing existing fields. For each input document, outputs one document. |
$redact | Reshapes each document in the stream by restricting the content for each document based on information stored in the documents themselves. Incorporates the functionality of $project and $match. |
$replaceRoot | Replaces a document with the specified embedded document. The operation replaces all existing fields in the input document, including the _id field. |
$replaceWith | Replaces a document with the specified embedded document. The operation replaces all existing fields in the input document, including the _id field. Alias for $replaceRoot. |
$sample | Randomly selects the specified number of documents from its input. |
$search | Performs a full-text search of the field or fields in an Atlas collection. $search is only available for MongoDB Atlas deployments. |
$set | Adds new fields to documents. Outputs documents that contain all existing fields from the input documents and newly added fields. Alias for $addFields. |
$setWindowFields | Groups documents into windows and applies one or more operators to the documents in each window. New in version 5.0. |
$skip | Skips the first n documents where n is the specified skip number and passes the remaining documents unmodified to the pipeline. For each input document, outputs either zero documents (for the first n documents) or one document (if after the first n documents). |
$sort | Reorders the document stream by a specified sort key. Only the order changes; the documents remain unmodified. For each input document, outputs one document. |
$sortByCount | Groups incoming documents based on the value of a specified expression, then computes the count of documents in each distinct group. |
$unionWith | Performs a union of two collections; i.e. combines pipeline results from two collections into a single result set. New in version 4.4. |
$unset | Removes/excludes fields from documents. Alias for $project. |
$unwind | Deconstructs an array field from the input documents to output a document for each element. Each output document replaces the array with an element value. For each input document, outputs n documents where n is the number of array elements and can be zero for an empty array. |