
Upgrade a Sharded Cluster to 6.0

On this page

  • Upgrade Recommendations and Checklists
  • Prerequisites
  • Download 6.0 Binaries
  • Upgrade Procedure
  • Additional Upgrade Procedures

Warning

Due to balancing policy updates in MongoDB 6.0.3, the balancer may start immediately after upgrade, even if the number of chunks stays the same.

Familiarize yourself with the content of this document, including thoroughly reviewing the prerequisites, prior to upgrading to MongoDB 6.0.

The following steps outline the procedure to upgrade a sharded cluster from version 5.0 to 6.0.

If you need guidance on upgrading to 6.0, MongoDB professional services offer major version upgrade support to help ensure a smooth transition without interruption to your MongoDB application.

When upgrading, consider the following:

To upgrade an existing MongoDB deployment to 6.0, you must be running a 5.0-series release.

To upgrade from a version earlier than the 5.0-series, you must successively upgrade major releases until you have upgraded to the 5.0-series. For example, if you are running a 4.4-series release, you must first upgrade to 5.0 before you can upgrade to 6.0.

Before you upgrade MongoDB, check that you're using a MongoDB 6.0-compatible driver. Consult the driver documentation for your specific driver to verify compatibility with MongoDB 6.0.

Upgraded deployments that run on incompatible drivers might encounter unexpected or undefined behavior.

Warning

If your drivers use legacy opcodes that were deprecated in v3.6, update your drivers to a version that uses supported opcodes. Drivers that use legacy opcodes are no longer supported.

Before beginning your upgrade, see the Compatibility Changes in MongoDB 6.0 document to ensure that your applications and deployments are compatible with MongoDB 6.0. Resolve the incompatibilities in your deployment before starting the upgrade.

Always test your application in a staging environment before deploying the upgrade to your production environment.

After upgrading to 6.0, if you need to downgrade, we recommend downgrading to the latest patch release of 5.0.

To upgrade a sharded cluster to 6.0, all members of the cluster must be at least version 5.0. The upgrade process checks all components of the cluster and produces warnings if any component is running a version earlier than 5.0.

The 5.0 sharded cluster must have featureCompatibilityVersion set to "5.0".

To ensure that all members of the sharded cluster have featureCompatibilityVersion set to "5.0", connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:

Tip

For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.

db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )

All members should return a result that includes "featureCompatibilityVersion" : { "version" : "5.0" }.

To set or update featureCompatibilityVersion, run the following command on the mongos:

db.adminCommand( { setFeatureCompatibilityVersion: "5.0" } )

For more information, see setFeatureCompatibilityVersion.

For shards and config servers, ensure that no replica set member is in the ROLLBACK or RECOVERING state.
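As a quick way to confirm member states, you can list each member's name and replication state from the rs.status() output in mongosh. This is a minimal sketch, not part of the official procedure; members, name, and stateStr are standard fields of the rs.status() result:

// Run against a shard or config server replica set member.
// Prints each member with its current replication state.
rs.status().members.forEach(function (m) {
  print(m.name + " : " + m.stateStr)
})

No member should report ROLLBACK or RECOVERING before you proceed.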

Optional but Recommended. As a precaution, take a backup of the config database before upgrading the sharded cluster.

If you installed MongoDB from the MongoDB apt, yum, dnf, or zypper repositories, you should upgrade to 6.0 using your package manager.

Follow the appropriate 6.0 installation instructions for your Linux system. This will involve adding a repository for the new release, then performing the actual upgrade process.

If you have not installed MongoDB using a package manager, you can manually download the MongoDB binaries from the MongoDB Download Center.

See 6.0 installation instructions for more information.

Warning

If you upgrade an existing instance of MongoDB to MongoDB 6.0.5, that instance may fail to start if fork: true is set in the mongod.conf file.

The upgrade issue affects all MongoDB instances that use .deb or .rpm installation packages. Installations that use the tarball (.tgz) release or other package types are not affected. For more information, see SERVER-74345.

To remove the fork: true setting, run these commands from a system terminal:

systemctl stop mongod.service
sed -i.bak '/fork: true/d' /etc/mongod.conf
systemctl start mongod.service

The second systemctl command starts the upgraded instance after the setting is removed.

1

Connect mongosh to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer:

sh.stopBalancer()

Note

If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning() to check the balancer's current state.
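For example, to check the balancer state, including whether a balancing round is currently in progress:

sh.isBalancerRunning()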

To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:

sh.getBalancerState()

For more information on disabling the balancer, see Disable the Balancer.

2

For each mongos in the cluster, connect mongosh to the mongos instance and run flushRouterConfig to refresh the cached routing table:

db.adminCommand("flushRouterConfig")
db.adminCommand(
{
flushRouterConfig: 1
}
)
3

Upgrade the config servers.

  1. Upgrade the secondary members of the replica set, one at a time.

    1. Shut down the secondary instance.

      To shut down the mongod process, use mongosh to connect to the cluster member and run the following command:

      db.adminCommand( { shutdown: 1 } )
    2. Replace the 5.0 binary with the 6.0 binary.

    3. Start the 6.0 binary.

      Start the 6.0 binary with the --configsvr, --replSet, --port, and --bind_ip options. Include any other options as used by the deployment.

      mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>

      If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 6.0 binary:

      sharding:
        clusterRole: configsvr
      replication:
        replSetName: <string>
      net:
        port: <port>
        bindIp: localhost,<ip address>
      storage:
        dbPath: <path>

      Include any other settings as appropriate for your deployment.

    4. Wait for the member to recover to the SECONDARY state before upgrading the next secondary member.

      To check the member's state, issue rs.status() in mongosh.

    5. Repeat for each secondary member.

  2. Step down the replica set primary.

    1. Step down the primary.

      Connect mongosh to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

      rs.stepDown()
    2. Shut down the stepped-down primary.

      When rs.status() shows that the primary has stepped down and another member has assumed the PRIMARY state, shut down the stepped-down primary.

      To shut down the stepped-down primary, use mongosh to connect to the primary and run the following command:

      db.adminCommand( { shutdown: 1 } )
    3. Replace the mongod binary with the 6.0 binary.

    4. Start the 6.0 binary.

      Start the 6.0 binary with the --configsvr, --replSet, --port, and --bind_ip options. Include any optional command line options used by the previous deployment:

      mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>

      If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 6.0 binary:

      sharding:
        clusterRole: configsvr
      replication:
        replSetName: <string>
      net:
        port: <port>
        bindIp: localhost,<ip address>
      storage:
        dbPath: <path>

      Include any other configuration as appropriate for your deployment.

4

Upgrade the shards one at a time.

  1. For each shard replica set, upgrade the secondary members of the replica set, one at a time:

    1. Shut down the secondary instance.

      To shut down the mongod process, use mongosh to connect to the cluster member and run the following command:

      db.adminCommand( { shutdown: 1 } )
    2. Replace the 5.0 binary with the 6.0 binary.

      Start the 6.0 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any additional command line options as appropriate for your deployment:

      mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>

      If using a configuration file, update the file to include sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 6.0 binary:

      sharding:
        clusterRole: shardsvr
      replication:
        replSetName: <string>
      net:
        port: <port>
        bindIp: localhost,<ip address>
      storage:
        dbPath: <path>

      Include any other configuration as appropriate for your deployment.

    3. Wait for the member to recover to the SECONDARY state before upgrading the next secondary member.

      To check the member's state, you can issue rs.status() in mongosh.

    4. Repeat for each secondary member.

  2. Step down the replica set primary.

    Connect mongosh to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

    rs.stepDown()
  3. Upgrade the stepped-down primary.

    When rs.status() shows that the primary has stepped down and another member has assumed the PRIMARY state, upgrade the stepped-down primary:

    1. Shut down the stepped-down primary.

      To shut down the stepped-down primary, use mongosh to connect to the replica set member and run the following command:

      db.adminCommand( { shutdown: 1 } )
    2. Replace the mongod binary with the 6.0 binary.

    3. Start the 6.0 binary.

      Start the 6.0 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any additional command line options as appropriate for your deployment:

      mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>

      If using a configuration file, update the file to specify sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 6.0 binary:

      sharding:
        clusterRole: shardsvr
      replication:
        replSetName: <string>
      net:
        port: <port>
        bindIp: localhost,<ip address>
      storage:
        dbPath: <path>

      Include any other configuration as appropriate for your deployment.

5

Replace each mongos instance with the 6.0 binary and restart. Include any other configuration as appropriate for your deployment.

Note

The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster.

mongos --configdb csReplSet/<rsconfigsver1:port1>,<rsconfigsver2:port2>,<rsconfigsver3:port3> --bind_ip localhost,<ip address>
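After restarting a mongos on the 6.0 binary, you can optionally confirm the version it reports. This is a quick sanity check, not part of the official procedure; connect mongosh to the restarted mongos and run:

// Returns the server version string, which should begin with 6.0.
db.version()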
6

Using mongosh, connect to a mongos in the cluster and run sh.startBalancer() to re-enable the balancer:

sh.startBalancer()

Starting in MongoDB 6.0.3, automatic chunk splitting is not performed because of balancing policy improvements. Auto-splitting commands still exist, but perform no operation. For details, see Balancing Policy Changes.

In MongoDB versions earlier than 6.0.3, sh.startBalancer() also enables auto-splitting for the sharded cluster.

If you do not wish to enable auto-splitting while the balancer is enabled, you must also run sh.disableAutoSplit().
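For example, on a deployment running a version earlier than 6.0.3, you can disable auto-splitting from mongosh after re-enabling the balancer:

sh.disableAutoSplit()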

For more information about re-enabling the balancer, see Enable the Balancer.

7

At this point, you can run the 6.0 binaries without the 6.0 features that are incompatible with 5.0.

To enable these 6.0 features, set the feature compatibility version (fCV) to 6.0.

Tip

Enabling these backwards-incompatible features can complicate the downgrade process since you must remove any persisted backwards-incompatible features before you downgrade.

It is recommended that after upgrading, you allow your deployment to run without enabling these features for a burn-in period to ensure the likelihood of downgrade is minimal. When you are confident that the likelihood of downgrade is minimal, enable these features.

On a mongos instance, run the setFeatureCompatibilityVersion command in the admin database:

db.adminCommand( { setFeatureCompatibilityVersion: "6.0" } )

Setting featureCompatibilityVersion (fCV) to "6.0" implicitly performs a replSetReconfig on each shard to add the term field to the shard replica set configuration document.

The command doesn't complete until the new configuration propagates to a majority of replica set members.

This command must perform writes to an internal system collection. If for any reason the command does not complete successfully, you can safely retry the command on the mongos as the operation is idempotent.
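After the command returns, you can optionally confirm the new feature compatibility version by running the same getParameter check used in the prerequisites against the mongos:

db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )

The result should include "featureCompatibilityVersion" : { "version" : "6.0" }.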

Note

While setFeatureCompatibilityVersion is running on the sharded cluster, chunk migrations, splits, and merges can fail with ConflictingOperationInProgress.

Any orphaned documents that exist on your shards are cleaned up when you set the featureCompatibilityVersion to 6.0. The cleanup process:

  • Does not block the upgrade from completing, and

  • Is rate limited. To mitigate the potential effect on performance during orphaned document cleanup, see Range Deletion Performance Tuning.
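If cleanup noticeably affects performance, the Range Deletion Performance Tuning page referenced above describes server parameters that throttle range deletion. As an illustration only, assuming the rangeDeleterBatchDelayMS parameter applies to your version, you can inspect its current value from mongosh on a shard member:

db.adminCommand( { getParameter: 1, rangeDeleterBatchDelayMS: 1 } )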

Note

Additional Consideration

The mongos binary will crash when attempting to connect to mongod instances whose feature compatibility version (fCV) is greater than that of the mongos. For example, you cannot connect a MongoDB 5.0 version mongos to a 6.0 sharded cluster with fCV set to 6.0. You can, however, connect a MongoDB 5.0 version mongos to a 6.0 sharded cluster with fCV set to 5.0.
