
Rolling Index Builds on Sharded Clusters

On this page

  • Considerations
  • Prerequisites
  • Procedure
  • Additional Information

Index builds can impact sharded cluster performance. By default, MongoDB 4.4 and later build indexes simultaneously on all data-bearing replica set members. Index builds on sharded clusters occur only on those shards that contain data for the collection being indexed. For workloads that cannot tolerate the performance decrease caused by index builds, consider using the following procedure to build indexes in a rolling fashion.

Rolling index builds take at most one shard replica set member out at a time, starting with the secondary members, and build the index on that member as a standalone. Rolling index builds require at least one replica set election per shard.

To create unique indexes using the following procedure, you must stop all writes to the collection during this procedure.

If you cannot stop all writes to the collection during this procedure, do not use the procedure on this page. Instead, build your unique index on the collection by issuing db.collection.createIndex() on the mongos for a sharded cluster.
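For example, assuming test.records is sharded on the username field used in the examples later on this page (a unique index on a sharded collection must include the shard key as a prefix), you might build the unique index from mongosh connected to the mongos:

db.records.createIndex(
   { username: 1 },
   { unique: true }   // enforce uniqueness across all shards
)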

Ensure that your oplog is large enough to permit the indexing or re-indexing operation to complete without falling too far behind to catch up. See the oplog sizing documentation for additional information.
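For example, you can review the configured oplog size and the time span it currently covers from mongosh connected to a member of the shard replica set:

rs.printReplicationInfo()   // reports the oplog size and the time range it covers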

For building unique indexes
  1. To create unique indexes using the following procedure, you must stop all writes to the collection during the index build. Otherwise, you may end up with inconsistent data across the replica set members. If you cannot stop all writes to the collection, do not use the following procedure to create unique indexes.

    Warning

    If you cannot stop all writes to the collection, do not use the following procedure to create unique indexes.

  2. Before creating the index, validate that no documents in the collection violate the index constraints. If a collection is distributed across shards and a shard contains a chunk with duplicate documents, the create index operation may succeed on the shards without duplicates but not on the shard with duplicates. To avoid leaving inconsistent indexes across shards, you can issue the db.collection.dropIndex() from a mongos to drop the index from the collection.
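One way to check for documents that would violate a unique key constraint is an aggregation that groups on the candidate key and returns any values that appear more than once. The sketch below assumes the username field used in the examples later on this page; run it from mongosh connected to the mongos:

db.records.aggregate( [
   // group by the candidate unique key and count occurrences
   { $group: { _id: "$username", count: { $sum: 1 } } },
   // keep only values that appear more than once
   { $match: { count: { $gt: 1 } } }
] )

If the aggregation returns any documents, resolve the duplicates before building the unique index.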

Important

The following procedure to build indexes in a rolling fashion applies to sharded cluster deployments, not replica set deployments. For the procedure for replica sets, see Rolling Index Builds on Replica Sets instead.

Connect mongosh to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer: [1]

sh.stopBalancer()

Note

If a migration is in progress, the system will complete the in-progress migration before stopping the balancer.

To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:

sh.getBalancerState()
[1] Starting in MongoDB 6.1, automatic chunk splitting is not performed. This is because of balancing policy improvements. Auto-splitting commands still exist, but do not perform an operation. For details, see Balancing Policy Changes. In MongoDB versions earlier than 6.1, sh.stopBalancer() also disables auto-splitting for the sharded cluster.

From mongosh connected to the mongos, refresh the cached routing table for that mongos to avoid returning stale distribution information for the collection. Once refreshed, run db.collection.getShardDistribution() for the collection for which you wish to build the index.

For example, if you want to create an ascending index on the records collection in the test database:

db.adminCommand( { flushRouterConfig: "test.records" } );
db.records.getShardDistribution();

The method outputs the shard distribution. For example, consider a sharded cluster with 3 shards shardA, shardB, and shardC, where db.collection.getShardDistribution() returns the following:

Shard shardA at shardA/s1-mongo1.example.net:27018,s1-mongo2.example.net:27018,s1-mongo3.example.net:27018
  data : 1KiB docs : 50 chunks : 1
  estimated data per chunk : 1KiB
  estimated docs per chunk : 50

Shard shardC at shardC/s3-mongo1.example.net:27018,s3-mongo2.example.net:27018,s3-mongo3.example.net:27018
  data : 1KiB docs : 50 chunks : 1
  estimated data per chunk : 1KiB
  estimated docs per chunk : 50

Totals
  data : 3KiB docs : 100 chunks : 2
  Shard shardA contains 50% data, 50% docs in cluster, avg obj size on shard : 40B
  Shard shardC contains 50% data, 50% docs in cluster, avg obj size on shard : 40B

The output shows that only shardA and shardC contain chunks for test.records, so you only need to build the indexes for test.records on shardA and shardC.

For each shard that contains chunks for the collection, follow the procedure to build the index on the shard.

For an affected shard, stop the mongod process associated with one of its secondary members, then restart it after making the configuration updates described below.
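For example, one way to stop the secondary cleanly is to connect mongosh to that member and shut it down from the admin database:

// run against the secondary you are taking out of the replica set
db.getSiblingDB("admin").shutdownServer()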

If you are using a configuration file, make the following configuration updates:

  • Change net.port to a different port than the one used for the replica set, and keep the original port as a comment so you can restore it later. [2]

  • Comment out the replication.replSetName option.

  • Comment out the sharding.clusterRole option.

  • Set the skipShardingConfigurationChecks and disableLogicalSessionCacheRefresh parameters to true under setParameter.

For example, for a shard replica set member, the updated configuration file will include content like the following example:

net:
   bindIp: localhost,<hostname(s)|ip address(es)>
   port: 27218
#   port: 27018
#replication:
#   replSetName: shardA
#sharding:
#   clusterRole: shardsvr
setParameter:
   skipShardingConfigurationChecks: true
   disableLogicalSessionCacheRefresh: true

And restart:

mongod --config <path/To/ConfigFile>

Other settings (e.g. storage.dbPath, etc.) remain the same.

If using command-line options, make the following configuration updates:

For example, restart your shard replica set member without the --replSet and --shardsvr options. Specify a new port number [2] and set both the skipShardingConfigurationChecks and disableLogicalSessionCacheRefresh parameters to true:

mongod --port 27218 --setParameter skipShardingConfigurationChecks=true --setParameter disableLogicalSessionCacheRefresh=true

Other settings (e.g. --dbpath, etc.) remain the same.

[2] By running the mongod on a different port, you ensure that the other members of the replica set and all clients do not contact the member while you are building the index.

Connect directly to the mongod instance running as a standalone on the new port and create the new index for this instance.
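For example, if the member was restarted on port 27218 as shown above, you might connect with:

mongosh --port 27218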

Then use the db.collection.createIndex() method to create an ascending index on the username field of the records collection:

db.records.createIndex( { username: 1 } )
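Before shutting the standalone down, you can confirm that the index was created by listing the collection's indexes:

db.records.getIndexes()   // the new { username: 1 } index should appear in the output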

When the index build completes, shut down the mongod instance. Undo the configuration changes made when starting it as a standalone so that it returns to its original configuration, then restart it.

Important

Be sure to remove the skipShardingConfigurationChecks and disableLogicalSessionCacheRefresh parameters.

For example, to restart your replica set shard member:

If you are using a configuration file:

net:
   bindIp: localhost,<hostname(s)|ip address(es)>
   port: 27018
replication:
   replSetName: shardA
sharding:
   clusterRole: shardsvr

Other settings (e.g. storage.dbPath, etc.) remain the same.

And restart:

mongod --config <path/To/ConfigFile>

If you are using command-line options:

For example:

mongod --port 27018 --replSet shardA --shardsvr

Other settings (e.g. --dbpath, etc.) remain the same.

Allow replication to catch up on this member.
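To check whether the member has caught up, you can connect mongosh to the shard's primary and print each secondary's replication status; the member has caught up when its reported lag returns to a level that is typical for your deployment:

rs.printSecondaryReplicationInfo()   // shows the replication lag for each secondary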

Once the member catches up with the other members of the set, repeat the procedure one member at a time for the remaining secondary members for the shard:

  1. C1. Stop One Secondary and Restart as a Standalone

  2. C2. Build the Index

  3. C3. Restart the Program mongod as a Replica Set Member

When all the secondaries for the shard have the new index, step down the primary for the shard, restart it as a standalone using the procedure described above, and build the index on the former primary:

  1. Use the rs.stepDown() method in mongosh to step down the primary (see the example after this list). Upon successful stepdown, the current primary becomes a secondary and the replica set members elect a new primary.

  2. C1. Stop One Secondary and Restart as a Standalone

  3. C2. Build the Index

  4. C3. Restart the Program mongod as a Replica Set Member
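A minimal sketch of the stepdown in step 1, assuming you want the former primary to remain ineligible for re-election for 120 seconds while a new primary is elected:

rs.stepDown(120)   // step down and do not seek re-election for 120 seconds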

Once you finish building the index for a shard, repeat C. Build Indexes on the Shards That Contain Collection Chunks for the other affected shards.

Once you finish the rolling index build for the affected shards, restart the balancer.

Connect mongosh to a mongos instance in the sharded cluster, and run sh.startBalancer(): [3]

sh.startBalancer()
[3] Starting in MongoDB 6.1, automatic chunk splitting is not performed. This is because of balancing policy improvements. Auto-splitting commands still exist, but do not perform an operation. For details, see Balancing Policy Changes. In MongoDB versions earlier than 6.1, sh.startBalancer() also enables auto-splitting for the sharded cluster.

A sharded collection has an inconsistent index if the collection does not have the exact same indexes (including the index options) on each shard that contains chunks for the collection. Although inconsistent indexes should not occur during normal operations, they can occur in cases such as the following:

  • When a user is creating an index with a unique key constraint and one shard contains a chunk with duplicate documents. In such cases, the create index operation may succeed on the shards without duplicates but not on the shard with duplicates.

  • When a user is creating an index across the shards in a rolling manner but either fails to build the index for an associated shard or incorrectly builds an index with a different specification.

Starting in MongoDB 4.4 (and 4.2.6), the config server primary periodically checks for index inconsistencies across the shards for sharded collections. To configure these periodic checks, see enableShardedIndexConsistencyCheck and shardedIndexConsistencyCheckIntervalMS.

The command serverStatus returns the field shardedIndexConsistency to report on index inconsistencies when run on the config server primary.
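For example, from mongosh connected to the config server primary, you can inspect this field directly; a value of 0 for numShardedCollectionsWithInconsistentIndexes indicates that no inconsistencies were detected:

db.adminCommand( { serverStatus: 1 } ).shardedIndexConsistency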

To check if a sharded collection has inconsistent indexes, see Find Inconsistent Indexes Across Shards.
