removeShard
Definition
removeShard
Removes a shard from a sharded cluster. When you run removeShard, MongoDB
drains the shard by using the balancer to move the shard's chunks to other
shards in the cluster. Once the shard is drained, MongoDB removes the shard
from the cluster.
Note
If you want to re-add a removed shard to your sharded cluster, you must
clear the storage.dbPath of all the shard's nodes to remove the shard's
files before you can re-add it.
Compatibility
This command is available in deployments hosted in the following environments:
MongoDB Enterprise: The subscription-based, self-managed version of MongoDB
MongoDB Community: The source-available, free-to-use, and self-managed version of MongoDB
Note
This command is not supported in MongoDB Atlas. See Modify your Atlas Sharded Cluster to add or remove shards from your Atlas cluster.
Syntax
The command has the following syntax:
db.adminCommand( { removeShard : <shardToRemove> } )
Behavior
No Cluster Backups During Shard Removal
You cannot back up the cluster data during shard removal.
Concurrent removeShard Operations
You can have more than one removeShard operation in progress.
Access Requirements
If you have authorization enabled, you must have the clusterManager role
or any role that includes the removeShard action.
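For example, with access control enabled, an administrator could grant the built-in clusterManager role to an existing user. A minimal sketch from mongosh, assuming a hypothetical administrative user named opsAdmin:
// Grant the built-in clusterManager role, which includes the removeShard action.
// "opsAdmin" is an illustrative user name; substitute your own administrative user.
db.getSiblingDB("admin").grantRolesToUser(
  "opsAdmin",
  [ { role: "clusterManager", db: "admin" } ]
)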
Database Migration Requirements
Each database in a sharded cluster has a primary shard. If the shard you
want to remove is also the primary of one of the cluster's databases, then
you must manually move the databases to a new shard after migrating all
data from the shard. See the movePrimary command and Remove Shards from a
Sharded Cluster for more information.
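As a minimal sketch, the movePrimary command takes the database to move and the destination shard, where <database> and <destinationShard> are placeholders:
// Make <destinationShard> the new primary shard for <database>.
// Run only after the draining shard's data has been migrated.
db.adminCommand( { movePrimary: <database>, to: <destinationShard> } )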
Chunk Balancing
If you remove a shard in a cluster with an uneven chunk distribution, the
balancer first removes the chunks from the draining shard and then balances
the remaining uneven chunk distribution.
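Because draining depends on the balancer, it can be useful to confirm that the balancer is enabled before starting a removal. A minimal check from mongosh:
// Returns true if the balancer is enabled; draining cannot progress while it is disabled.
sh.getBalancerState()
// Reports whether a balancing round is currently running.
sh.isBalancerRunning()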
Write Concern
mongos converts the write concern of the removeShard command to "majority".
Change Streams
Removing a shard may close open change stream cursors, and the closed
change stream cursors may not be fully resumable.
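An application that holds a change stream cursor during a shard removal may therefore need to handle a failed resume. A rough sketch in mongosh, where the orders collection and the saved resume token lastToken are hypothetical:
// Try to resume from the saved token; if the stream cannot be resumed,
// fall back to opening a fresh change stream from the current time.
let cursor;
try {
  cursor = db.orders.watch([], { resumeAfter: lastToken });
  cursor.tryNext();  // force the initial server round-trip so resume errors surface here
} catch (e) {
  print("Resume failed, opening a new change stream: " + e.message);
  cursor = db.orders.watch();
}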
DDL Operations
If you run removeShard while the cluster is performing a DDL operation (an
operation that modifies a collection, such as reshardCollection),
removeShard only runs after the concurrent DDL operation completes.
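Before running removeShard, you can check whether a DDL operation such as a resharding is still in progress. A sketch using the $currentOp aggregation stage against the admin database:
// List in-progress operations that originated from a reshardCollection command.
// An empty result suggests no resharding is currently blocking removeShard.
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, localOps: false } },
  { $match: { type: "op", "originatingCommand.reshardCollection": { $exists: true } } }
])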
Example
From mongosh, the removeShard operation resembles the following:
db.adminCommand( { removeShard : "bristol01" } )
Replace bristol01 with the name of the shard to remove. When you run
removeShard, the command returns with a message that resembles the
following:
{ "msg" : "draining started successfully", "state" : "started", "shard" : "bristol01", "note" : "you need to drop or movePrimary these databases", "dbsToMove" : [ "fizz", "buzz" ], "ok" : 1, "operationTime" : Timestamp(1575398919, 2), "$clusterTime" : { "clusterTime" : Timestamp(1575398919, 2), "signature" : { "hash" : BinData(0,"Oi68poWCFCA7b9kyhIcg+TzaGiA="), "keyId" : NumberLong("6766255701040824328") } } }
The balancer begins migrating ("draining") chunks from the shard named
bristol01 to other shards in the cluster. These migrations happen slowly
in order to avoid placing undue load on the cluster.
The output includes the field dbsToMove indicating the databases for which
bristol01 is the primary shard.
After the balancer moves all chunks and after all collections are moved by
moveCollection, you must movePrimary for the database(s).
If you run the command again, removeShard returns the current status of
the process. For example, if the operation is in an ongoing state, the
command returns an output that resembles the following:
{ "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(2), "dbs" : NumberLong(2), "jumboChunks" : NumberLong(0) }, "note" : "you need to drop or movePrimary these databases", "dbsToMove" : [ "fizz", "buzz" ], "ok" : 1, "operationTime" : Timestamp(1575399086, 1655), "$clusterTime" : { "clusterTime" : Timestamp(1575399086, 1655), "signature" : { "hash" : BinData(0,"XBrTmjMMe82fUtVLRm13GBVtRE8="), "keyId" : NumberLong("6766255701040824328") } } }
In the output, the remaining field includes the following fields:
Field | Description
---|---
chunks | Total number of chunks currently remaining on the shard.
dbs | Total number of databases whose primary shard is the shard. These databases are specified in the dbsToMove field in the output.
jumboChunks | Of the remaining chunks, the number of chunks that are jumbo.
Continue checking the status of the removeShard command (i.e. rerun the
command) until the number of chunks remaining is 0.
{ "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(0), // All chunks have moved "dbs" : NumberLong(2), "jumboChunks" : NumberLong(0) }, "note" : "you need to drop or movePrimary these databases", "dbsToMove" : [ "fizz", "buzz" ], "ok" : 1, "operationTime" : Timestamp(1575400343, 1), "$clusterTime" : { "clusterTime" : Timestamp(1575400343, 1), "signature" : { "hash" : BinData(0,"9plu5B/hw4uWAgEmjjBP3syw1Zk="), "keyId" : NumberLong("6766255701040824328") } } }
After all chunks have been drained from the shard, if you have dbsToMove,
you can either movePrimary for those databases or drop the databases
(which deletes the associated data files).
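For example, using the databases from the dbsToMove output above, you might keep one database by moving its primary shard and discard the other. The destination shard shard0001 is a placeholder for a shard in your cluster:
// Keep "fizz" by assigning it a new primary shard.
db.adminCommand( { movePrimary: "fizz", to: "shard0001" } )
// Drop "buzz", which deletes its data files.
db.getSiblingDB("buzz").dropDatabase()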
After the balancer completes moving all chunks off the shard and you have
handled the dbsToMove, removeShard can finish.
Running removeShard
again returns output that resembles
the following:
{ "msg" : "removeshard completed successfully", "state" : "completed", "shard" : "bristol01", "ok" : 1, "operationTime" : Timestamp(1575400370, 2), "$clusterTime" : { "clusterTime" : Timestamp(1575400370, 2), "signature" : { "hash" : BinData(0,"JjSRciHECXDBXo0e5nJv9mdRG8M="), "keyId" : NumberLong("6766255701040824328") } } }