removeShard
Definition
removeShard
Removes a shard from a sharded cluster. When you run removeShard, MongoDB drains the shard by using the balancer to move the shard's chunks to other shards in the cluster. Once the shard is drained, MongoDB removes the shard from the cluster.
Compatibility
This command is available in deployments hosted in the following environments:
MongoDB Atlas: The fully managed service for MongoDB deployments in the cloud
Important
This command is not supported in M10+ clusters or serverless instances. For more information, see Unsupported Commands.
MongoDB Enterprise: The subscription-based, self-managed version of MongoDB
MongoDB Community: The source-available, free-to-use, and self-managed version of MongoDB
Syntax
The command has the following syntax:
db.adminCommand( { removeShard : <shardToRemove> } )
Behavior
No Cluster Backups During Shard Removal
You cannot back up the cluster data during shard removal.
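Before starting a backup, you can confirm that no shard is mid-removal by reading the cluster metadata directly; this is a minimal sketch, assuming read access to the config database:
// shards that are being removed carry draining: true in config.shards
db.getSiblingDB("config").shards.find( { draining : true } )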
Concurrent removeShard Operations
You can have more than one removeShard operation in progress.
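For example, you can begin draining two shards back to back, and both drains proceed at the same time; a minimal sketch with placeholder shard names:
db.adminCommand( { removeShard : "shard0002" } )  // starts draining shard0002
db.adminCommand( { removeShard : "shard0003" } )  // shard0003 drains concurrently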
Access Requirements
If you have authorization enabled, you must have the clusterManager role or any role that includes the removeShard action.
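For example, a user administrator could grant the built-in clusterManager role as follows; the user name is a placeholder:
// grant the clusterManager role on the admin database
db.getSiblingDB("admin").grantRolesToUser(
  "clusterAdminUser",
  [ { role : "clusterManager", db : "admin" } ]
)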
Database Migration Requirements
Each database in a sharded cluster has a primary shard. If the shard you want to remove is also the primary of one of the cluster's databases, then you must manually move the databases to a new shard after migrating all data from the shard. See the movePrimary command and Remove Shards from a Sharded Cluster for more information.
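For example, assigning a new primary shard with movePrimary resembles the following; the database and shard names are placeholders:
// make shard0001 the primary shard for the accounts database
db.adminCommand( { movePrimary : "accounts", to : "shard0001" } )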
Chunk Balancing
When you remove a shard in a cluster with an uneven chunk distribution, the balancer first removes the chunks from the draining shard and then balances the remaining uneven chunk distribution.
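To watch the distribution while a drain is under way, you can count the chunks still owned by the draining shard; a minimal sketch, assuming read access to the config database and a placeholder shard name:
// number of chunks that have not yet been moved off the draining shard
db.getSiblingDB("config").chunks.countDocuments( { shard : "shard0003" } )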
Write Concern
mongos converts the write concern of the removeShard command to "majority".
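For instance, supplying a weaker write concern has no effect; a sketch with a placeholder shard name:
// mongos upgrades this w: 1 write concern to "majority"
db.adminCommand( { removeShard : "shard0003", writeConcern : { w : 1 } } )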
Change Streams
Removing a shard may cause an open change stream cursor to close, and the closed change stream cursor may not be fully resumable.
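One defensive pattern is to record the most recent resume token and attempt to resume if the cursor closes; this is a minimal sketch, and the collection name, error handling, and the assumption that the resume succeeds are illustrative only:
// watch a collection and remember the latest resume token
const coll = db.getSiblingDB("test").orders;
let resumeToken = null;
let cursor = coll.watch();
try {
  while (cursor.hasNext()) {
    resumeToken = cursor.next()._id;  // save the resume token from each event
  }
} catch (e) {
  // the cursor may close during shard removal; resuming can still fail
  // if the change stream is not fully resumable
  cursor = coll.watch( [], { resumeAfter : resumeToken } );
}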
DDL Operations
If you run removeShard while your cluster executes a DDL operation (an operation that modifies a collection, such as reshardCollection), removeShard only executes after the concurrent DDL operation finishes.
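Before starting a drain, you can look for in-progress DDL operations with the $currentOp aggregation stage; a minimal sketch, and the $match filter on the command shape is an assumption:
// list operations whose command document includes reshardCollection
db.getSiblingDB("admin").aggregate( [
  { $currentOp : { allUsers : true, localOps : false } },
  { $match : { "command.reshardCollection" : { $exists : true } } }
] )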
Example
From mongosh, the removeShard operation resembles the following:
db.adminCommand( { removeShard : "bristol01" } )
Replace bristol01 with the name of the shard to remove. When you run removeShard, the command returns with a message that resembles the following:
{ "msg" : "draining started successfully", "state" : "started", "shard" : "bristol01", "note" : "you need to drop or movePrimary these databases", "dbsToMove" : [ "fizz", "buzz" ], "ok" : 1, "operationTime" : Timestamp(1575398919, 2), "$clusterTime" : { "clusterTime" : Timestamp(1575398919, 2), "signature" : { "hash" : BinData(0,"Oi68poWCFCA7b9kyhIcg+TzaGiA="), "keyId" : NumberLong("6766255701040824328") } } }
The balancer begins migrating ("draining") chunks from the shard named bristol01 to other shards in the cluster. These migrations happen slowly in order to avoid placing undue load on the cluster.
The output includes the field dbsToMove, indicating the databases for which bristol01 is the primary shard.
After the balancer moves all chunks and after all collections are moved by moveCollection, you must run movePrimary for the database(s).
If you run the command again, removeShard returns the current status of the process. For example, if the operation is in an ongoing state, the command returns an output that resembles the following:
{ "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(2), "dbs" : NumberLong(2), "jumboChunks" : NumberLong(0) }, "note" : "you need to drop or movePrimary these databases", "dbsToMove" : [ "fizz", "buzz" ], "ok" : 1, "operationTime" : Timestamp(1575399086, 1655), "$clusterTime" : { "clusterTime" : Timestamp(1575399086, 1655), "signature" : { "hash" : BinData(0,"XBrTmjMMe82fUtVLRm13GBVtRE8="), "keyId" : NumberLong("6766255701040824328") } } }
In the output, the remaining field includes the following fields:
Field | Description
---|---
chunks | Total number of chunks currently remaining on the shard.
dbs | Total number of databases whose primary shard is the shard. These databases are specified in the dbsToMove output field.
jumboChunks | Of the total number of chunks, the number of chunks that are jumbo. If the shard has remaining jumbo chunks, the balancer cannot migrate them until the jumbo flag is cleared. After the jumbo flag is cleared, the balancer can migrate these chunks off the draining shard.
Continue checking the status of the removeShard command (i.e. rerun the command) until the number of chunks remaining is 0:
{ "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(0), // All chunks have moved "dbs" : NumberLong(2), "jumboChunks" : NumberLong(0) }, "note" : "you need to drop or movePrimary these databases", "dbsToMove" : [ "fizz", "buzz" ], "ok" : 1, "operationTime" : Timestamp(1575400343, 1), "$clusterTime" : { "clusterTime" : Timestamp(1575400343, 1), "signature" : { "hash" : BinData(0,"9plu5B/hw4uWAgEmjjBP3syw1Zk="), "keyId" : NumberLong("6766255701040824328") } } }
After all chunks have been drained from the shard, if you have dbsToMove, you can either movePrimary for those databases or drop them (which deletes the associated data files).
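For example, with the fizz and buzz databases reported above, you could keep one and drop the other; the target shard name is a placeholder:
// keep fizz by assigning it a new primary shard
db.adminCommand( { movePrimary : "fizz", to : "shard0001" } )
// discard buzz entirely, deleting its data files
db.getSiblingDB("buzz").dropDatabase()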
After the balancer completes moving all chunks off the shard and you have handled the dbsToMove, removeShard can finish. Running removeShard again returns output that resembles the following:
{ "msg" : "removeshard completed successfully", "state" : "completed", "shard" : "bristol01", "ok" : 1, "operationTime" : Timestamp(1575400370, 2), "$clusterTime" : { "clusterTime" : Timestamp(1575400370, 2), "signature" : { "hash" : BinData(0,"JjSRciHECXDBXo0e5nJv9mdRG8M="), "keyId" : NumberLong("6766255701040824328") } } }