Upgrade a Sharded Cluster to 6.0
Warning
Due to balancing policy updates in MongoDB 6.0.3, the balancer may start immediately after upgrade, even if the number of chunks stays the same.
Familiarize yourself with the content of this document, including thoroughly reviewing the prerequisites, prior to upgrading to MongoDB 6.0.
The following steps outline the procedure to upgrade a mongod that is a shard member from version 5.0 to 6.0.
If you need guidance on upgrading to 6.0, MongoDB professional services offer major version upgrade support to help ensure a smooth transition without interruption to your MongoDB application.
Upgrade Recommendations and Checklists
When upgrading, consider the following:
Upgrade Version Path
To upgrade an existing MongoDB deployment to 6.0, you must be running a 5.0-series release.
To upgrade from a version earlier than the 5.0-series, you must successively upgrade major releases until you have upgraded to a 5.0-series release. For example, if you are running a 4.4-series release, you must first upgrade to 5.0 before you can upgrade to 6.0.
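To illustrate the one-major-release-at-a-time rule, here is a small sketch (not part of any MongoDB tooling) that computes the sequence of upgrades needed to reach 6.0 from a given release series:

```javascript
// Hypothetical helper -- not a MongoDB utility. Illustrates the
// successive major-release upgrade path described above.
const RELEASE_SERIES = ["4.0", "4.2", "4.4", "5.0", "6.0"];

function upgradePathTo60(currentSeries) {
  const start = RELEASE_SERIES.indexOf(currentSeries);
  if (start === -1) {
    throw new Error(`Unknown release series: ${currentSeries}`);
  }
  // Each hop in the returned list is one major-release upgrade.
  return RELEASE_SERIES.slice(start + 1);
}

console.log(upgradePathTo60("4.4")); // [ '5.0', '6.0' ]
```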
Check Driver Compatibility
Before you upgrade MongoDB, check that you're using a MongoDB 6.0-compatible driver. Consult the driver documentation for your specific driver to verify compatibility with MongoDB 6.0.
Upgraded deployments that run on incompatible drivers might encounter unexpected or undefined behavior.
Warning
If your drivers use legacy opcodes that were deprecated in v3.6, update your drivers to a version that uses supported opcodes. Drivers that use legacy opcodes are no longer supported.
Preparedness
Before beginning your upgrade, see the Compatibility Changes in MongoDB 6.0 document to ensure that your applications and deployments are compatible with MongoDB 6.0. Resolve the incompatibilities in your deployment before starting the upgrade.
Before upgrading MongoDB, always test your application in a staging environment before deploying the upgrade to your production environment.
Downgrade Consideration
After upgrading to 6.0, if you need to downgrade, we recommend downgrading to the latest patch release of 5.0.
Prerequisites
All Members Version
To upgrade a sharded cluster to 6.0, all members of the cluster must be at least version 5.0. The upgrade process checks all components of the cluster and will produce warnings if any component is running a version earlier than 5.0.
Feature Compatibility Version
The 5.0 sharded cluster must have featureCompatibilityVersion set to "5.0".
To ensure that all members of the sharded cluster have featureCompatibilityVersion set to "5.0", connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:
Tip
For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
All members should return a result that includes "featureCompatibilityVersion" : { "version" : "5.0" }.
To set or update featureCompatibilityVersion, run the following command on the mongos:
db.adminCommand( { setFeatureCompatibilityVersion: "5.0" } )
For more information, see setFeatureCompatibilityVersion.
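When checking many members, it can help to collect each member's getParameter result and verify them in one pass. The helper below is a hypothetical convenience sketch, not part of mongosh; it only assumes the documented result shape shown above:

```javascript
// Hypothetical helper -- not part of mongosh. Given the documents
// returned by the getParameter command on each member, confirm that
// every one reports the wanted featureCompatibilityVersion.
function allMembersAtFcv(results, wanted) {
  return results.every(
    (r) =>
      r.featureCompatibilityVersion &&
      r.featureCompatibilityVersion.version === wanted
  );
}

// Shape matches the documented command output:
// "featureCompatibilityVersion" : { "version" : "5.0" }
const sampleResults = [
  { featureCompatibilityVersion: { version: "5.0" }, ok: 1 },
  { featureCompatibilityVersion: { version: "5.0" }, ok: 1 },
];
console.log(allMembersAtFcv(sampleResults, "5.0")); // true
```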
Replica Set Member State
For shards and config servers, ensure that no replica set member is in the ROLLBACK or RECOVERING state.
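One way to check this is to inspect the members array returned by rs.status() on each replica set. The function below is a hypothetical sketch (not part of mongosh) operating on that members array; the host names are invented for the example:

```javascript
// Hypothetical helper -- not part of mongosh. Given the members array
// from rs.status(), return the names of any members still in ROLLBACK
// or RECOVERING. This list must be empty before you upgrade.
function membersBlockingUpgrade(members) {
  const blocking = ["ROLLBACK", "RECOVERING"];
  return members
    .filter((m) => blocking.includes(m.stateStr))
    .map((m) => m.name);
}

// Example input shaped like rs.status().members:
const status = {
  members: [
    { name: "shard0-a:27018", stateStr: "PRIMARY" },
    { name: "shard0-b:27018", stateStr: "SECONDARY" },
    { name: "shard0-c:27018", stateStr: "RECOVERING" },
  ],
};
console.log(membersBlockingUpgrade(status.members)); // [ 'shard0-c:27018' ]
```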
Back up the config Database
Optional but Recommended. As a precaution, take a backup of the config database before upgrading the sharded cluster.
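One way to take such a backup is with mongodump, pointed at the config server replica set. The replica set name, host names, and output path below are placeholders; adjust them for your deployment:

```shell
# Placeholder hosts and paths -- adjust for your deployment.
# Dump only the config database from the config server replica set.
mongodump --host "csReplSet/cfg1.example.net:27019,cfg2.example.net:27019" \
  --db config --out /backups/config-pre-6.0-upgrade
```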
Download 6.0 Binaries
Use Package Manager
If you installed MongoDB from the MongoDB apt, yum, dnf, or zypper repositories, you should upgrade to 6.0 using your package manager.
Follow the appropriate 6.0 installation instructions for your Linux system. This will involve adding a repository for the new release, then performing the actual upgrade process.
Download 6.0 Binaries Manually
If you have not installed MongoDB using a package manager, you can manually download the MongoDB binaries from the MongoDB Download Center.
See 6.0 installation instructions for more information.
Upgrade Procedure
Warning
If you upgrade an existing instance of MongoDB to MongoDB 6.0.5, that instance may fail to start if fork: true is set in the mongod.conf file.
The upgrade issue affects all MongoDB instances that use .deb or .rpm installation packages. Installations that use the tarball (.tgz) release or other package types are not affected. For more information, see SERVER-74345.
To remove the fork: true setting, run these commands from a system terminal:
systemctl stop mongod.service
sed -i.bak '/fork: true/d' /etc/mongod.conf
systemctl start mongod.service
The second systemctl command starts the upgraded instance after the setting is removed.
Disable the Balancer.
Connect mongosh to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer:
sh.stopBalancer()
Note
If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning() to check the balancer's current state.
To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:
sh.getBalancerState()
For more information on disabling the balancer, see Disable the Balancer.
Refresh the cached routing table for each mongos.
For each mongos in the cluster, connect mongosh to the mongos instance and run flushRouterConfig to refresh the cached routing table:
db.adminCommand("flushRouterConfig")
db.adminCommand( { flushRouterConfig: 1 } )
Upgrade the config servers.
Upgrade the secondary members of the replica set, one at a time.
Shut down the secondary instance.
To shut down the mongod process, use mongosh to connect to the cluster member and run the following command:
db.adminCommand( { shutdown: 1 } )
Replace the 5.0 binary with the 6.0 binary.
Start the 6.0 binary.
Start the 6.0 binary with the --configsvr, --replSet, --port, and --bind_ip options. Include any other options as used by the deployment.
mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>
If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 6.0 binary:
sharding:
  clusterRole: configsvr
replication:
  replSetName: <string>
net:
  port: <port>
  bindIp: localhost,<ip address>
storage:
  dbPath: <path>
Include any other settings as appropriate for your deployment.
Wait for the member to recover to the SECONDARY state before upgrading the next secondary member.
To check the member's state, issue rs.status() in mongosh.
Repeat for each secondary member.
Step down the replica set primary.
Step down the primary.
Connect mongosh to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:
rs.stepDown()
Shut down the stepped-down primary.
When rs.status() shows that the primary has stepped down and another member has assumed the PRIMARY state, shut down the stepped-down primary.
To shut down the stepped-down primary, use mongosh to connect to the primary and run the following command:
db.adminCommand( { shutdown: 1 } )
Replace the mongod binary with the 6.0 binary.
Start the 6.0 binary.
Start the 6.0 binary with the --configsvr, --replSet, --port, and --bind_ip options. Include any optional command line options used by the previous deployment:
mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>
If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 6.0 binary:
sharding:
  clusterRole: configsvr
replication:
  replSetName: <string>
net:
  port: <port>
  bindIp: localhost,<ip address>
storage:
  dbPath: <path>
Include any other configuration as appropriate for your deployment.
Upgrade the shards.
Upgrade the shards one at a time.
For each shard replica set, upgrade the secondary members of the replica set, one at a time:
Shut down the secondary instance.
To shut down the mongod process, use mongosh to connect to the cluster member and run the following command:
db.adminCommand( { shutdown: 1 } )
Replace the 5.0 binary with the 6.0 binary.
Start the 6.0 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any additional command line options as appropriate for your deployment:
mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>
If using a configuration file, update the file to include sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 6.0 binary:
sharding:
  clusterRole: shardsvr
replication:
  replSetName: <string>
net:
  port: <port>
  bindIp: localhost,<ip address>
storage:
  dbPath: <path>
Include any other configuration as appropriate for your deployment.
Wait for the member to recover to the SECONDARY state before upgrading the next secondary member.
To check the member's state, you can issue rs.status() in mongosh.
Repeat for each secondary member.
Step down the replica set primary.
Connect mongosh to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:
rs.stepDown()
Upgrade the stepped-down primary.
When rs.status() shows that the primary has stepped down and another member has assumed the PRIMARY state, upgrade the stepped-down primary:
Shut down the stepped-down primary.
To shut down the stepped-down primary, use mongosh to connect to the replica set member and run the following command:
db.adminCommand( { shutdown: 1 } )
Replace the mongod binary with the 6.0 binary.
Start the 6.0 binary.
Start the 6.0 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any additional command line options as appropriate for your deployment:
mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>
If using a configuration file, update the file to specify sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 6.0 binary:
sharding:
  clusterRole: shardsvr
replication:
  replSetName: <string>
net:
  port: <port>
  bindIp: localhost,<ip address>
storage:
  dbPath: <path>
Include any other configuration as appropriate for your deployment.
Upgrade the mongos instances.
Replace each mongos instance with the 6.0 binary and restart. Include any other configuration as appropriate for your deployment.
Note
The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster.
mongos --configdb csReplSet/<rsconfigsver1:port1>,<rsconfigsver2:port2>,<rsconfigsver3:port3> --bind_ip localhost,<ip address>
Re-enable the balancer.
Using mongosh, connect to a mongos in the cluster and run sh.startBalancer() to re-enable the balancer:
sh.startBalancer()
Starting in MongoDB 6.0.3, automatic chunk splitting is not performed. This is because of balancing policy improvements. Auto-splitting commands still exist, but do not perform an operation. For details, see Balancing Policy Changes.
In MongoDB versions earlier than 6.0.3, sh.startBalancer() also enables auto-splitting for the sharded cluster. If you do not wish to enable auto-splitting while the balancer is enabled, you must also run sh.disableAutoSplit().
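The 6.0.3 cutoff described above can be captured in a small version check. This is an illustrative sketch, not MongoDB code; it only encodes the rule that auto-splitting occurs on versions earlier than 6.0.3:

```javascript
// Hypothetical helper -- illustrates the 6.0.3 cutoff described above.
// Returns true if the given server version still performs auto-splitting.
function autoSplitStillActive(version) {
  const [maj, min, patch] = version.split(".").map(Number);
  if (maj !== 6) return maj < 6;   // pre-6.0 versions auto-split; 7.0+ do not
  if (min !== 0) return false;     // 6.1+ has no auto-splitting
  return patch < 3;                // 6.0.0 - 6.0.2 still auto-split
}

console.log(autoSplitStillActive("6.0.2")); // true
console.log(autoSplitStillActive("6.0.3")); // false
```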
For more information about re-enabling the balancer, see Enable the Balancer.
Enable backwards-incompatible 6.0 features.
At this point, you can run the 6.0 binaries without the 6.0 features that are incompatible with 5.0.
To enable these 6.0 features, set the feature compatibility version (fCV) to 6.0.
Tip
Enabling these backwards-incompatible features can complicate the downgrade process since you must remove any persisted backwards-incompatible features before you downgrade.
It is recommended that after upgrading, you allow your deployment to run without enabling these features for a burn-in period. When you are confident that the likelihood of downgrade is minimal, enable these features.
On a mongos instance, run the setFeatureCompatibilityVersion command in the admin database:
db.adminCommand( { setFeatureCompatibilityVersion: "6.0" } )
Setting featureCompatibilityVersion (fCV) to "6.0" implicitly performs a replSetReconfig on each shard to add the term field to the shard replica configuration document. The command doesn't complete until the new configuration propagates to a majority of replica set members.
This command must perform writes to an internal system collection. If for any reason the command does not complete successfully, you can safely retry the command on the mongos as the operation is idempotent.
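Because the operation is idempotent, a simple retry loop is safe. The sketch below is illustrative only; adminCommand is a stand-in for db.adminCommand() in mongosh, simulated here with a fake that fails once:

```javascript
// Sketch only -- adminCommand stands in for db.adminCommand() in
// mongosh. setFeatureCompatibilityVersion is idempotent, so retrying
// a failed attempt is safe.
function setFcvWithRetry(adminCommand, attempts) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return adminCommand({ setFeatureCompatibilityVersion: "6.0" });
    } catch (err) {
      lastError = err; // e.g. a transient network error; safe to retry
    }
  }
  throw lastError;
}

// Simulated command that fails on the first call, then succeeds:
let calls = 0;
const fakeAdminCommand = () => {
  calls += 1;
  if (calls === 1) throw new Error("transient network error");
  return { ok: 1 };
};
console.log(setFcvWithRetry(fakeAdminCommand, 3)); // { ok: 1 }
```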
Note
While setFeatureCompatibilityVersion is running on the sharded cluster, chunk migrations, splits, and merges can fail with ConflictingOperationInProgress.
Any orphaned documents that exist on your shards will be cleaned up when you set featureCompatibilityVersion to 6.0. The cleanup process:
Does not block the upgrade from completing, and
Is rate limited. To mitigate the potential effect on performance during orphaned document cleanup, see Range Deletion Performance Tuning.
Note
Additional Consideration
The mongos binary will crash when attempting to connect to mongod instances whose feature compatibility version (fCV) is greater than that of the mongos. For example, you cannot connect a MongoDB 5.0 version mongos to a 6.0 sharded cluster with fCV set to 6.0. You can, however, connect a MongoDB 5.0 version mongos to a 6.0 sharded cluster with fCV set to 5.0.
Additional Upgrade Procedures
To upgrade a standalone, see Upgrade a Standalone to 6.0.
To upgrade a replica set, see Upgrade a Replica Set to 6.0.