Reshard a Collection
On this page
- Requirements
- Limitations
- Resharding Process
- Start the resharding operation.
- Monitor the resharding operation.
- Finish the resharding operation.
- Block writes early to force resharding to complete
- Abort resharding operation
- Behavior
- Minimum Duration of a Resharding Operation
- Retryable Writes
- Error Case
- Duplicate _id Values
New in version 5.0.
The ideal shard key allows MongoDB to distribute documents evenly throughout the cluster while facilitating common query patterns. A suboptimal shard key can lead to performance or scaling issues due to uneven data distribution. Starting in MongoDB 5.0, you can change the shard key for a collection to change the distribution of your data across a cluster.
Note
Before resharding your collection, read Troubleshoot Shard Keys for information on common performance and scaling issues and advice on how to fix them.
Requirements
Before you reshard your collection, ensure that you meet the following requirements:
Your application can tolerate a period of two seconds where the collection that is being resharded blocks writes. During the time period when writes are blocked, your application experiences increased latency. If your workload cannot tolerate this requirement, consider refining your shard key instead.
Your database meets these resource requirements:
- Available storage space: Ensure that your available storage space is at least 1.2x the size of the collection that you want to reshard. For example, if the size of the collection you want to reshard is 1 TB, you should have at least 1.2 TB of free storage when starting the resharding operation.
- I/O: Ensure that your I/O capacity is below 50%.
- CPU load: Ensure your CPU load is below 80%.
Important
These requirements are not enforced by the database. A failure to allocate enough resources can result in:
- the database running out of space and shutting down
- decreased performance
- the resharding operation taking longer than expected
If your application has time periods with less traffic, reshard your collection during that time if possible.
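As a rough pre-flight check of the 1.2x free-storage requirement, you can read the collection's data size from mongosh. This is a sketch; the database and collection names ("mydb", "mycoll") are placeholders for your own.

```javascript
// Hypothetical pre-flight check for the 1.2x free-storage requirement.
// "mydb" and "mycoll" are placeholder names for the collection to reshard.
const stats = db.getSiblingDB("mydb").mycoll.stats();
const requiredBytes = stats.size * 1.2; // 1.2x the collection's data size
print(`Collection data size: ${stats.size} bytes`);
print(`Minimum free storage before resharding: ${requiredBytes} bytes`);
```

Compare the printed minimum against the free disk space available to each shard before you start.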
You must perform one of these tasks:
- Rewrite your application's queries to use both the current shard key and the new shard key.
- Stop your application, and then:
  - rewrite your application's queries to use the new shard key,
  - wait until the resharding of the collection completes (to monitor the resharding process, use a $currentOp pipeline stage), and
  - deploy your rewritten application.

Certain queries return an error if the query filter does not include either the current shard key or a unique field (like _id). For optimal performance, we recommend that you also rewrite other queries to include the new shard key.

Once the resharding operation completes, you can remove the old shard key from the queries.
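As an illustrative sketch, suppose the current shard key is { storeId: 1 } and the new shard key is { region: 1, storeId: 1 } (hypothetical field names on a hypothetical orders collection). A query rewritten to work during resharding includes both keys in its filter:

```javascript
// Hypothetical: old shard key field "storeId", new shard key fields
// "region" and "storeId". Including both keys in the filter lets the
// query target documents correctly while the collection is resharded.
db.orders.updateOne(
  { storeId: 42, region: "emea", orderId: 1001 },
  { $set: { status: "shipped" } }
)
```

After the resharding operation completes, the filter could be reduced to the new shard key fields only.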
No index builds are in progress. Use db.currentOp() to check for any running index builds:

```javascript
db.adminCommand(
  {
    currentOp: true,
    $or: [
      { op: "command", "command.createIndexes": { $exists: true } },
      { op: "none", msg: /^Index Build/ }
    ]
  }
)
```

In the result document, if the inprog field value is an empty array, there are no index builds in progress:

```javascript
{
  inprog: [],
  ok: 1,
  '$clusterTime': { ... },
  operationTime: <timestamp>
}
```
Warning
We strongly recommend that you check the Limitations and read the resharding process section in full before resharding your collection.
Limitations
- Only one collection can be resharded at a time.
- writeConcernMajorityJournalDefault must be true.
- Resharding a collection that has a uniqueness constraint is not supported.
- The new shard key cannot have a uniqueness constraint.
- Certain commands and their corresponding shell methods are not supported on the collection that is being resharded while the resharding operation is in progress.
- Certain commands and methods are not supported on the cluster while the resharding operation is in progress.

Warning

Using any of these unsupported commands during a resharding operation causes the resharding operation to fail.

- If the collection to be resharded uses Atlas Search, the search index becomes unavailable when the resharding operation completes. You need to manually rebuild the search index once the resharding operation completes.
- You can't reshard a sharded time series collection.
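Before starting, you can confirm the writeConcernMajorityJournalDefault requirement by inspecting the replica set configuration of each shard. This is a sketch run from mongosh while connected to a member of a shard's replica set; if the field is unset, it defaults to true.

```javascript
// Check the replica set configuration for the journaling requirement.
// writeConcernMajorityJournalDefault must be true for resharding;
// an unset field defaults to true.
rs.conf().writeConcernMajorityJournalDefault
```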
Resharding Process
In a collection resharding operation, a shard can be a:
- donor, which currently stores chunks for the sharded collection.
- recipient, which stores new chunks for the sharded collection based on the new shard key and zones.

A shard can be a donor and a recipient at the same time. The set of donor shards is identical to the set of recipient shards, unless you use zones.
The config server primary is always the resharding coordinator and starts each phase of the resharding operation.
Start the resharding operation.
While connected to the mongos, issue a reshardCollection command that specifies the collection to be resharded and the new shard key:

```javascript
db.adminCommand({
  reshardCollection: "<database>.<collection>",
  key: <shardkey>
})
```

MongoDB sets the maximum number of seconds to block writes to two seconds and begins the resharding operation.
Monitor the resharding operation.
To monitor the resharding operation, you can use the $currentOp pipeline stage:

```javascript
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, localOps: false } },
  { $match: {
      type: "op",
      "originatingCommand.reshardCollection": "<database>.<collection>"
  } }
])
```
Note
To see updated values, you need to continuously run the preceding pipeline.
The $currentOp pipeline outputs:
- totalOperationTimeElapsedSecs: elapsed operation time in seconds.
- remainingOperationTimeEstimatedSecs: estimated time remaining in seconds for the current resharding operation. It is returned as -1 when a new resharding operation starts.

Starting in:
- MongoDB 5.0, but before MongoDB 6.1, remainingOperationTimeEstimatedSecs is only available on a recipient shard during a resharding operation.
- MongoDB 6.1, remainingOperationTimeEstimatedSecs is also available on the coordinator during a resharding operation.
The resharding operation performs these phases in order:
- The clone phase duplicates the current collection data.
- The catch-up phase applies any pending write operations to the resharded collection. During this phase, remainingOperationTimeEstimatedSecs is set to a pessimistic time estimate: the catch-up phase time estimate is set to the clone phase time, which is a relatively long time. In practice, if there are only a few pending write operations, the actual catch-up phase time is relatively short.
```javascript
[
  {
    shard: '<shard>',
    type: 'op',
    desc: 'ReshardingRecipientService | ReshardingDonorService | ReshardingCoordinatorService <reshardingUUID>',
    op: 'command',
    ns: '<database>.<collection>',
    originatingCommand: {
      reshardCollection: '<database>.<collection>',
      key: <shardkey>,
      unique: <boolean>,
      collation: { locale: 'simple' }
    },
    totalOperationTimeElapsedSecs: <number>,
    remainingOperationTimeEstimatedSecs: <number>,
    ...
  },
  ...
]
```
Finish the resharding operation.
Throughout the resharding process, the estimated time to complete the resharding operation (remainingOperationTimeEstimatedSecs) decreases. When the estimated time is below two seconds, MongoDB blocks writes and completes the resharding operation. Until the estimated time to complete the resharding operation is below two seconds, the resharding operation does not block writes by default. During the time period when writes are blocked, your application experiences increased latency.

Once the resharding process has completed, the resharding command returns ok: 1:

```javascript
{
  ok: 1,
  '$clusterTime': {
    clusterTime: <timestamp>,
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: <number>
    }
  },
  operationTime: <timestamp>
}
```
To see whether the resharding operation completed successfully, check the output of the sh.status() method:

```javascript
sh.status()
```

The sh.status() method output contains a subsection for the databases. If resharding has completed successfully, the output lists the new shard key for the collection:

```javascript
databases
[
  {
    database: {
      _id: '<database>',
      primary: '<shard>',
      partitioned: false,
      version: { uuid: <uuid>, timestamp: <timestamp>, lastMod: <number> }
    },
    collections: {
      '<database>.<collection>': {
        shardKey: <shardkey>,
        unique: <boolean>,
        balancing: <boolean>,
        chunks: [],
        tags: []
      }
    }
  }
  ...
]
```
Note
If the resharded collection uses Atlas Search, the search index will become unavailable when the resharding operation completes. You need to manually rebuild the search index once the resharding operation completes.
Block writes early to force resharding to complete
You can manually force the resharding operation to complete by issuing the commitReshardCollection command. This is useful if the current time estimate to complete the resharding operation is an acceptable duration for your collection to block writes. The commitReshardCollection command blocks writes early and forces the resharding operation to complete. The command has the following syntax:

```javascript
db.adminCommand({
  commitReshardCollection: "<database>.<collection>"
})
```
Abort resharding operation
You can abort the resharding operation during any stage of the resharding operation, even after running the commitReshardCollection command, until shards have fully caught up.

For example, if the write unavailability duration estimate does not decrease, you can abort the resharding operation with the abortReshardCollection command:

```javascript
db.adminCommand({
  abortReshardCollection: "<database>.<collection>"
})
```
After canceling the operation, you can retry the resharding operation during a time window with lower write volume. If this is not possible, add more shards before retrying.
Behavior
Minimum Duration of a Resharding Operation
The minimum duration of a resharding operation is always 5 minutes.
Retryable Writes
Retryable writes initiated before or during resharding can be retried during and after the collection has been resharded for up to 5 minutes. After 5 minutes, you may be unable to find the definitive result of the write, and subsequent attempts to retry the write fail with an IncompleteTransactionHistory error.
Error Case
Duplicate _id Values
The resharding operation fails if _id values are not globally unique, to avoid corrupting collection data. Duplicate _id values can also prevent successful chunk migration. If you have documents with duplicate _id values, copy the data from each into a new document, and then delete the duplicate documents.
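As an illustrative way to look for duplicate _id values before resharding, you can group documents by _id and keep any group with more than one member. This is a sketch run against the mongos; the database and collection names are placeholders, and the scan reads the whole collection.

```javascript
// Group documents by _id and report any value that appears more than
// once across the sharded collection. "mydb" and "mycoll" are
// placeholder names; an empty result means no duplicates were found.
db.getSiblingDB("mydb").mycoll.aggregate([
  { $group: { _id: "$_id", count: { $sum: 1 } } },
  { $match: { count: { $gt: 1 } } }
])
```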