db.collection.bulkWrite()
Definition
db.collection.bulkWrite()
Important
mongosh Method
This is a mongosh method. This is not the documentation for Node.js or other programming language specific driver methods.
In most cases, mongosh methods work the same way as the legacy mongo shell methods. However, some legacy methods are unavailable in mongosh.
For the legacy mongo shell documentation, refer to the documentation for the corresponding MongoDB Server release.
For MongoDB API drivers, refer to the language specific MongoDB driver documentation.
New in version 3.2.
Performs multiple write operations with controls for order of execution.
db.collection.bulkWrite() has the following syntax:

db.collection.bulkWrite(
   [ <operation 1>, <operation 2>, ... ],
   {
      writeConcern : <document>,
      ordered : <boolean>
   }
)

Parameter | Type | Description |
---|---|---|
operations | array | An array of bulkWrite() write operations. Valid operations are: insertOne, updateOne, updateMany, replaceOne, deleteOne, and deleteMany. See Write Operations for usage of each operation. |
writeConcern | document | Optional. A document expressing the write concern. Omit to use the default write concern. Do not explicitly set the write concern for the operation if run in a transaction. To use write concern with transactions, see Transactions and Write Concern. |
ordered | boolean | Optional. A boolean specifying whether the mongod instance should perform an ordered or unordered operation execution. Defaults to true. |

Returns:
- A boolean acknowledged as true if the operation ran with write concern or false if write concern was disabled.
- A count for each write operation.
- An array containing an _id for each successfully inserted or upserted document.
Behavior
bulkWrite()
takes an array of write operations and executes each of them. By default, operations are executed in order.
See Execution of Operations for controlling
the order of write operation execution.
Write Operations
insertOne
Inserts a single document into the collection.
db.collection.bulkWrite( [ { insertOne : { "document" : <document> } } ] )
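For example, a minimal sketch of a single-document insert, using a hypothetical pizzas collection:

try {
   // Insert one document through the bulk write API.
   db.pizzas.bulkWrite( [
      { insertOne : { "document" : { "_id" : 10, "type" : "pepperoni", "size" : "small" } } }
   ] )
} catch ( e ) {
   print( e )
}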
updateOne and updateMany
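updateOne and updateMany accept the same fields. A sketch of the general form, using the field names from the table below (the exact shape of the official syntax block may differ):

db.collection.bulkWrite( [
   { updateOne : {
      "filter" : <document>,
      "update" : <document or pipeline>,
      "upsert" : <boolean>,
      "collation" : <document>,
      "arrayFilters" : [ <filterdocument1>, ... ],
      "hint" : <document|string>       // Available starting in 4.2.1
   } }
] )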
Field | Notes |
---|---|
filter | The selection criteria for the update. The same query
selectors as in the
db.collection.find() method are available. |
update | The update operation to perform. Can specify either an update document that contains update operator expressions, or an aggregation pipeline (starting in MongoDB 4.2). |
upsert | Optional. A boolean to indicate whether to perform an upsert. By default, upsert is false. |
arrayFilters | Optional. An array of filter documents that determine which
array elements to modify for an update operation on an array
field. |
collation | Optional. Specifies the collation to use for
the operation. |
hint | Optional. The index to use to support the update. New in version 4.2.1. |
For details, see db.collection.updateOne()
and
db.collection.updateMany()
.
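As an illustration of arrayFilters in a bulk write, the following sketch, using a hypothetical students collection with a numeric grades array, raises every grade of 85 or higher to 100:

try {
   db.students.bulkWrite( [
      { updateOne : {
         "filter" : { "_id" : 1 },
         // The filtered positional operator $[elem] targets array elements
         // that match the corresponding arrayFilters entry.
         "update" : { $set : { "grades.$[elem]" : 100 } },
         "arrayFilters" : [ { "elem" : { $gte : 85 } } ]
      } }
   ] )
} catch ( e ) {
   print( e )
}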
replaceOne
replaceOne
replaces a single document in the collection that matches the
filter. If multiple documents match, replaceOne
will replace the first
matching document only.
db.collection.bulkWrite( [
   { replaceOne : {
      "filter" : <document>,
      "replacement" : <document>,
      "upsert" : <boolean>,
      "collation" : <document>,         // Available starting in 3.4
      "hint" : <document|string>        // Available starting in 4.2.1
   } }
] )
Field | Notes |
---|---|
filter | The selection criteria for the replacement operation. The same
query selectors as in the
db.collection.find() method are available. |
replacement | The replacement document. The document cannot contain
update operators. |
upsert | Optional. A boolean to indicate whether to perform an upsert. By
default, upsert is false . |
collation | Optional. Specifies the collation to use for
the operation. |
hint | Optional. The index to use to support the update. New in version 4.2.1. |
For details, see db.collection.replaceOne().
deleteOne and deleteMany
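deleteOne and deleteMany accept the same fields. A sketch of the general form, using the field names from the table below (the exact shape of the official syntax block may differ):

db.collection.bulkWrite( [
   { deleteOne : {
      "filter" : <document>,
      "collation" : <document>
   } }
] )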
Field | Notes |
---|---|
filter | The selection criteria for the delete operation. The same
query selectors as in the
db.collection.find() method are available. |
collation | Optional. Specifies the collation to use for
the operation. |
For details, see db.collection.deleteOne()
and
db.collection.deleteMany()
.
_id Field
If the document does not specify an _id field, then mongod
adds the _id
field and assigns a unique
ObjectId()
for the document before inserting or upserting it.
Most drivers create an ObjectId and insert the _id
field, but the
mongod
will create and populate the _id
if the driver or
application does not.
If the document contains an _id
field, the _id
value must be
unique within the collection to avoid a duplicate key error.
Update or replace operations cannot specify an _id
value that differs
from the original document.
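For instance, inserting a document without an _id returns a server-assigned ObjectId in insertedIds; the following sketch reuses the characters collection from the examples below:

db.characters.bulkWrite( [
   { insertOne : { "document" : { "char" : "Nim", "class" : "bard", "lvl" : 2 } } }
] )
// The result's insertedIds field contains the generated value, for example:
// { "0" : ObjectId("...") }   // actual ObjectId value is assigned by the server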
Execution of Operations
The ordered
parameter specifies whether
bulkWrite()
will execute operations in order or not.
By default, operations are executed in order.
The following code represents a bulkWrite() with six operations.
db.collection.bulkWrite( [
   { insertOne : <document> },
   { updateOne : <document> },
   { updateMany : <document> },
   { replaceOne : <document> },
   { deleteOne : <document> },
   { deleteMany : <document> }
] )
In the default ordered : true
state, each operation will
be executed in order, from the first operation insertOne
to the last operation deleteMany
.
If ordered
is set to false, operations may be reordered by
mongod
to increase performance.
Applications should not depend on order of operation execution.
The following code represents an unordered
bulkWrite()
with six operations:
db.collection.bulkWrite( [
   { insertOne : <document> },
   { updateOne : <document> },
   { updateMany : <document> },
   { replaceOne : <document> },
   { deleteOne : <document> },
   { deleteMany : <document> }
], { ordered : false } )
With ordered : false
, the results of the operation may vary. For example,
the deleteOne
or deleteMany
may remove more or fewer documents
depending on whether they run before or after the insertOne
, updateOne
,
updateMany
, or replaceOne
operations.
The number of operations in each group cannot exceed the value of
the maxWriteBatchSize of
the database. As of MongoDB 3.6, this value is 100,000
.
This value is shown in the hello.maxWriteBatchSize
field.
This limit prevents issues with oversized error messages. If a group
exceeds this limit,
the client driver divides the group into smaller groups with counts
less than or equal to the value of the limit. For example, with the
maxWriteBatchSize
value of 100,000
, if the queue consists of
200,000
operations, the driver creates 2 groups, each with
100,000
operations.
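The following is an illustrative sketch only, not how any particular driver is implemented: it splits an assumed, pre-built operations array into groups no larger than the reported maxWriteBatchSize before submitting them.

// Illustrative sketch; `operations` is an assumed array of bulkWrite operations.
const maxWriteBatchSize = db.hello().maxWriteBatchSize   // 100,000 as of MongoDB 3.6
for ( let i = 0; i < operations.length; i += maxWriteBatchSize ) {
   // Submit each group of at most maxWriteBatchSize operations separately.
   db.collection.bulkWrite( operations.slice( i, i + maxWriteBatchSize ) )
}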
Note
The driver only divides the group into smaller groups when using the high-level API. If using db.runCommand() directly (for example, when writing a driver), MongoDB throws an error when attempting to execute a write batch which exceeds the limit.
Starting in MongoDB 3.6, once the error report for a single batch grows too large, MongoDB truncates all remaining error messages to the empty string. Currently, truncation begins once there are at least 2 error messages with a total size greater than 1MB.
The sizes and grouping mechanics are internal performance details and are subject to change in future versions.
Executing an ordered
list of operations on a
sharded collection will generally be slower than executing an
unordered
list
since with an ordered list, each operation must wait for the previous
operation to finish.
Capped Collections
bulkWrite()
write operations have restrictions when
used on a capped collection.
updateOne
and updateMany
throw a WriteError
if the
update
criteria increases the size of the document being modified.
replaceOne
throws a WriteError
if the
replacement
document has a larger size than the original
document.
deleteOne
and deleteMany
throw a WriteError
if used on a
capped collection.
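As a sketch of the first restriction, the following update on a hypothetical capped collection named log would throw a WriteError because it grows the matched document:

try {
   db.log.bulkWrite( [
      { updateOne : {
         "filter" : { "_id" : 1 },
         // Setting a longer value increases the document size, which is
         // not allowed for documents in a capped collection.
         "update" : { $set : { "details" : "a value longer than the original field" } }
      } }
   ] )
} catch ( e ) {
   print( e )
}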
Time Series Collections
bulkWrite()
write operations have restrictions
when used on a time series collection. Only insertOne
can be
used on time series collections. All other operations will return a
WriteError
.
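For example, a sketch using a hypothetical time series collection named weather; the insertOne succeeds, whereas adding an updateOne, replaceOne, or delete operation to the same call would return a WriteError:

// Hypothetical time series collection (time series collections are available starting in MongoDB 5.0).
db.createCollection( "weather", { timeseries : { timeField : "timestamp", metaField : "sensorId" } } )

db.weather.bulkWrite( [
   { insertOne : { "document" : { "timestamp" : ISODate("2021-05-18T00:00:00Z"), "sensorId" : 5578, "temp" : 12 } } }
] )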
Error Handling
db.collection.bulkWrite()
throws a BulkWriteError
exception on errors (unless the operation is part of a transaction on
MongoDB 4.0). See Error Handling inside Transactions.
Excluding Write Concern errors, ordered operations stop after an error, while unordered operations continue to process any remaining write operations in the queue, unless run inside a transaction. See Error Handling inside Transactions.
Write concern errors are displayed in the writeConcernErrors
field, while
all other errors are displayed in the writeErrors
field. If an error is
encountered, the number of successful write operations is displayed instead
of the inserted _id
values. Ordered operations display the single error
encountered while unordered operations display each error in an array.
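A minimal sketch of inspecting these fields on a caught exception, assuming the exception object exposes the writeErrors and writeConcernErrors fields shown in the example outputs below:

try {
   db.characters.bulkWrite( operations )    // `operations` is an assumed array of write operations
} catch ( e ) {
   printjson( e.writeErrors )               // per-operation errors
   printjson( e.writeConcernErrors )        // write concern errors, if any
}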
Transactions
db.collection.bulkWrite()
can be used inside multi-document transactions.
Important
In most cases, a multi-document transaction incurs a greater performance cost over single document writes, and the availability of multi-document transactions should not be a replacement for effective schema design. For many scenarios, the denormalized data model (embedded documents and arrays) will continue to be optimal for your data and use cases. That is, for many scenarios, modeling your data appropriately will minimize the need for multi-document transactions.
For additional transactions usage considerations (such as runtime limit and oplog size limit), see also Production Considerations.
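A minimal sketch of calling bulkWrite() inside a transaction from mongosh; the database and collection names are illustrative, the collection is assumed to already exist, and no write concern is set on the operation itself:

const session = db.getMongo().startSession()
const characters = session.getDatabase( "guidebook" ).characters

session.startTransaction()
try {
   characters.bulkWrite( [
      { insertOne : { "document" : { "_id" : 6, "char" : "Rilla", "class" : "cleric", "lvl" : 2 } } },
      { updateOne : { "filter" : { "char" : "Eldon" }, "update" : { $set : { "lvl" : 4 } } } }
   ] )
   session.commitTransaction()
} catch ( e ) {
   session.abortTransaction()
   print( e )
}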
Inserts and Upserts within Transactions
For feature compatibility version (fcv) "4.4"
and greater, if an insert operation or update operation with
upsert: true
is run in a transaction against a non-existing
collection, the collection is implicitly created.
Note
You cannot create new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.
For fcv "4.2"
or less, the collection must already exist for
insert and upsert: true
operations.
Write Concerns and Transactions
Do not explicitly set the write concern for the operation if run in a transaction. To use write concern with transactions, see Transactions and Write Concern.
Error Handling inside Transactions
Starting in MongoDB 4.2, if a db.collection.bulkWrite()
operation encounters an error inside a transaction, the method throws a BulkWriteException (same as outside a transaction).
In 4.0, if the bulkWrite
operation encounters an error inside a
transaction, the error thrown is not wrapped as a
BulkWriteException
.
Inside a transaction, the first error in a bulk write causes the entire bulk write to fail and aborts the transaction, even if the bulk write is unordered.
Examples
Bulk Write Operations
The characters
collection in the guidebook
database contains the following documents:
{ "_id" : 1, "char" : "Brisbane", "class" : "monk", "lvl" : 4 }, { "_id" : 2, "char" : "Eldon", "class" : "alchemist", "lvl" : 3 }, { "_id" : 3, "char" : "Meldane", "class" : "ranger", "lvl" : 3 }
The following bulkWrite()
performs multiple
operations on the collection:
try {
   db.characters.bulkWrite( [
      { insertOne : { "document" : { "_id" : 4, "char" : "Dithras", "class" : "barbarian", "lvl" : 4 } } },
      { insertOne : { "document" : { "_id" : 5, "char" : "Taeln", "class" : "fighter", "lvl" : 3 } } },
      { updateOne : { "filter" : { "char" : "Eldon" }, "update" : { $set : { "status" : "Critical Injury" } } } },
      { deleteOne : { "filter" : { "char" : "Brisbane" } } },
      { replaceOne : { "filter" : { "char" : "Meldane" }, "replacement" : { "char" : "Tanys", "class" : "oracle", "lvl" : 4 } } }
   ] );
} catch ( e ) {
   print( e );
}
The operation returns the following:
{ "acknowledged" : true, "deletedCount" : 1, "insertedCount" : 2, "matchedCount" : 2, "upsertedCount" : 0, "insertedIds" : { "0" : 4, "1" : 5 }, "upsertedIds" : { } }
If the collection had contained a document with "_id" : 5
before executing the bulk write, then when the bulk write is executed,
the following duplicate key exception would be thrown for the second insertOne:
BulkWriteError({
   "writeErrors" : [
      {
         "index" : 1,
         "code" : 11000,
         "errmsg" : "E11000 duplicate key error collection: guidebook.characters index: _id_ dup key: { _id: 5.0 }",
         "op" : { "_id" : 5, "char" : "Taeln", "class" : "fighter", "lvl" : 3 }
      }
   ],
   "writeConcernErrors" : [ ],
   "nInserted" : 1,
   "nUpserted" : 0,
   "nMatched" : 0,
   "nModified" : 0,
   "nRemoved" : 0,
   "upserted" : [ ]
})
Since ordered
is true by default, only the first operation completes
successfully. The rest are not executed. Running the
bulkWrite()
with ordered : false
would allow the
remaining operations to complete despite the error.
Unordered Bulk Write
The characters
collection in the guidebook
database contains the following documents:
{ "_id" : 1, "char" : "Brisbane", "class" : "monk", "lvl" : 4 }, { "_id" : 2, "char" : "Eldon", "class" : "alchemist", "lvl" : 3 }, { "_id" : 3, "char" : "Meldane", "class" : "ranger", "lvl" : 3 }
The following bulkWrite()
performs multiple
unordered
operations on the characters
collection. Note that one of the insertOne operations has a duplicate _id value:
try {
   db.characters.bulkWrite( [
      { insertOne : { "document" : { "_id" : 4, "char" : "Dithras", "class" : "barbarian", "lvl" : 4 } } },
      { insertOne : { "document" : { "_id" : 4, "char" : "Taeln", "class" : "fighter", "lvl" : 3 } } },
      { updateOne : { "filter" : { "char" : "Eldon" }, "update" : { $set : { "status" : "Critical Injury" } } } },
      { deleteOne : { "filter" : { "char" : "Brisbane" } } },
      { replaceOne : { "filter" : { "char" : "Meldane" }, "replacement" : { "char" : "Tanys", "class" : "oracle", "lvl" : 4 } } }
   ], { ordered : false } );
} catch ( e ) {
   print( e );
}
The operation returns the following:
BulkWriteError({
   "writeErrors" : [
      {
         "index" : 1,
         "code" : 11000,
         "errmsg" : "E11000 duplicate key error collection: guidebook.characters index: _id_ dup key: { _id: 4.0 }",
         "op" : { "_id" : 4, "char" : "Taeln", "class" : "fighter", "lvl" : 3 }
      }
   ],
   "writeConcernErrors" : [ ],
   "nInserted" : 1,
   "nUpserted" : 0,
   "nMatched" : 2,
   "nModified" : 2,
   "nRemoved" : 1,
   "upserted" : [ ]
})
Since this was an unordered
operation, the writes remaining in the queue
were processed despite the exception.
Bulk Write with Write Concern
The enemies
collection contains the following documents:
{ "_id" : 1, "char" : "goblin", "rating" : 1, "encounter" : 0.24 }, { "_id" : 2, "char" : "hobgoblin", "rating" : 1.5, "encounter" : 0.30 }, { "_id" : 3, "char" : "ogre", "rating" : 3, "encounter" : 0.2 }, { "_id" : 4, "char" : "ogre berserker" , "rating" : 3.5, "encounter" : 0.12}
The following bulkWrite()
performs multiple
operations on the collection using a write concern value of
"majority"
and timeout value of 100 milliseconds:
try {
   db.enemies.bulkWrite( [
      { updateMany : { "filter" : { "rating" : { $gte : 3 } }, "update" : { $inc : { "encounter" : 0.1 } } } },
      { updateMany : { "filter" : { "rating" : { $lt : 2 } }, "update" : { $inc : { "encounter" : -0.25 } } } },
      { deleteMany : { "filter" : { "encounter" : { $lt : 0 } } } },
      { insertOne : { "document" : { "_id" : 5, "char" : "ogrekin", "rating" : 2, "encounter" : 0.31 } } }
   ], { writeConcern : { w : "majority", wtimeout : 100 } } );
} catch ( e ) {
   print( e );
}
If the total time required for all required nodes in the replica set to
acknowledge the write operation is greater than wtimeout
,
the following writeConcernError
is displayed when the wtimeout
period
has passed.
BulkWriteError({
   "writeErrors" : [ ],
   "writeConcernErrors" : [
      {
         "code" : 64,
         "codeName" : "WriteConcernFailed",
         "errmsg" : "waiting for replication timed out",
         "errInfo" : { "wtimeout" : true }
      },
      {
         "code" : 64,
         "codeName" : "WriteConcernFailed",
         "errmsg" : "waiting for replication timed out",
         "errInfo" : { "wtimeout" : true }
      },
      {
         "code" : 64,
         "codeName" : "WriteConcernFailed",
         "errmsg" : "waiting for replication timed out",
         "errInfo" : { "wtimeout" : true }
      }
   ],
   "nInserted" : 1,
   "nUpserted" : 0,
   "nMatched" : 4,
   "nModified" : 4,
   "nRemoved" : 1,
   "upserted" : [ ]
})
The result set still shows the operations that executed, since writeConcernErrors are not an indicator that any write operations failed.