
Downgrade 5.0 Sharded Cluster to 4.4

On this page

  • Downgrade Path
  • Create Backup
  • Prerequisites
  • Procedure

Before you attempt any downgrade, familiarize yourself with the content of this document.

Important

Before you upgrade or downgrade a replica set, ensure all replica set members are running. If you do not, the upgrade or downgrade will not complete until all members are started.

If you need to downgrade from 5.0, downgrade to the latest patch release of 4.4.

Downgrade Path

MongoDB only supports single-version downgrades. You cannot downgrade to a release that is multiple versions behind your current release.

For example, you may downgrade a 5.0-series to a 4.4-series deployment. However, further downgrading that 4.4-series deployment to a 4.2-series deployment is not supported.

Create Backup

Optional but Recommended. Create a backup of your database.

Prerequisites

To downgrade from 5.0 to 4.4, you must remove incompatible features that are persisted and update any incompatible configuration settings. These include:

MongoDB 5.0 changed the default value for cluster-wide read and write concerns, and downgrading to MongoDB 4.4 might change those defaults back. Consider manually configuring your cluster's default read and write concern before downgrading:
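
For example, a minimal sketch that sets explicit cluster-wide defaults with the setDefaultRWConcern command, run from mongosh against a mongos. The concern values shown are placeholders; choose values appropriate for your deployment:

db.adminCommand( {
   setDefaultRWConcern: 1,
   defaultReadConcern: { level: "local" },
   defaultWriteConcern: { w: "majority" }
} )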

MongoDB 5.0 adds support for including the . or $ characters in document field names. You must delete any documents containing field names that include the . or $ characters before downgrading to MongoDB 4.4.
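
One way to look for offending field names is an aggregation that converts each document's top-level fields to key-value pairs and matches keys containing . or $. This is a minimal sketch: db.collection is a placeholder for each collection you want to check, and it does not inspect nested documents or array elements:

db.collection.aggregate( [
   // Expose each document's top-level field names as an array of { k, v } pairs
   { $addFields: { _kv: { $objectToArray: "$$ROOT" } } },
   // Keep documents whose field names contain a dot or dollar sign
   { $match: { "_kv.k": /[.$]/ } },
   // Hide the helper field in the output
   { $project: { _kv: 0 } }
] )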

MongoDB 5.0 enables support for slim-format timezone data files. If your deployment uses slim-format timezone data files, which you provide to MongoDB with the --timeZoneInfo command-line option or the processManagement.timeZoneInfo configuration file setting, you must either downgrade to MongoDB 4.4.7 or a later 4.4 patch release, or revert your timezone data files to the previous non-slim-format data files.

Ensure that any resharding operations have successfully completed before attempting any downgrade procedure. If a recent resharding operation has failed due to a primary failover, you must first run the cleanupReshardCollection command before downgrading the featureCompatibilityVersion of your sharded cluster.

If a resharding operation is still running while you downgrade the featureCompatibilityVersion of your sharded cluster, the resharding operation will abort.
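
If you need to run cleanupReshardCollection as described above, a minimal sketch, run from mongosh against a mongos; the namespace shown is a placeholder for the collection that was being resharded:

db.adminCommand( { cleanupReshardCollection: "records.sales" } )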

First, verify the following:

  • Ensure that no initial sync is in progress. Running the setFeatureCompatibilityVersion command while an initial sync is in progress causes the initial sync to restart.

  • Ensure that no nodes have a newlyAdded field in their replica set configuration. Run the following command on each node in your sharded cluster to verify this:

    use local
    db.system.replset.find( { "members.newlyAdded" : { $exists : true } } );

    The newlyAdded field only appears in a node's replica set configuration document during and shortly after an initial sync. If this query returns no documents, no member has a newlyAdded field.

  • Ensure that no replica set member is in ROLLBACK or RECOVERING state.

Next, to downgrade the featureCompatibilityVersion of your sharded cluster:

  1. Connect mongosh to the mongos instance.

  2. Downgrade the featureCompatibilityVersion to "4.4".

    db.adminCommand({setFeatureCompatibilityVersion: "4.4"})

    The setFeatureCompatibilityVersion command performs writes to an internal system collection and is idempotent. If for any reason the command does not complete successfully, retry the command on the mongos instance.

  3. To ensure that all members of the sharded cluster reflect the updated featureCompatibilityVersion, connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:

    Tip

    For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.

    db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )

    All members should return a result that includes:

    "featureCompatibilityVersion" : { "version" : "4.4" }

    If any member returns a featureCompatibilityVersion of "5.0", wait for the member to reflect version "4.4" before proceeding.

Note

Arbiters do not replicate the admin.system.version collection. Because of this, arbiters always have a feature compatibility version equal to the downgrade version of the binary, regardless of the fCV value of the replica set.

For example, an arbiter in a MongoDB 5.0 cluster has an fCV value of 4.4.

For more information on the returned featureCompatibilityVersion value, see Get FeatureCompatibilityVersion.

The following steps are necessary only if fCV has ever been set to "5.0".

Remove all persisted 5.0 features that are incompatible with 4.4. These include:

Time Series Collections
Remove all time series collections (see the example after this list for one way to find them).
Runtime Audit Filter Management
  • Disable Runtime Audit Filter Management by setting auditLog.runtimeConfiguration to false in the node's configuration file.

  • Update the audit filters for this mongod or mongos instance in the local configuration file.
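
As referenced above, one way to find the time series collections in a database before removing them is to filter the collection metadata by type. This is a minimal mongosh sketch; mydatabase is a placeholder, and you would repeat it for each database in the deployment:

use mydatabase
db.getCollectionInfos( { type: "timeseries" } )

Each returned collection can then be removed with its drop() method before the binary downgrade.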

Remove any other persisted data that relies on 5.0 features; the items listed above are not exhaustive.

Warning

Before proceeding with the downgrade procedure, ensure that all members, including delayed replica set members in the sharded cluster, reflect the prerequisite changes. That is, check the featureCompatibilityVersion and the removal of incompatible features for each node before downgrading.

Procedure

1. Download the latest 4.4 binaries.

Using either a package manager or a manual download, get the latest release in the 4.4 series. If using a package manager, add a new repository for the 4.4 binaries, then perform the actual downgrade process.

Important

Before you upgrade or downgrade a replica set, ensure all replica set members are running. If you do not, the upgrade or downgrade will not complete until all members are started.

If you need to downgrade from 5.0, downgrade to the latest patch release of 4.4.

2. Disable the balancer.

Connect mongosh to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer:

sh.stopBalancer()

Note

If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning() to check the balancer's current state.
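
For example, to check whether a balancing round is currently in progress:

sh.isBalancerRunning()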

To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:

sh.getBalancerState()

For more information on disabling the balancer, see Disable the Balancer.

3. Downgrade the mongos instances.

For each mongos instance, replace the 5.0 binary with the 4.4 binary and restart.

4. Downgrade the shards one at a time.

  1. Downgrade the shard's secondary members one at a time:

    1. Run the following command in mongosh to perform a clean shutdown, or refer to Stop mongod Processes for additional ways to safely terminate the mongod process:

      db.adminCommand( { shutdown: 1 } )
    2. Replace the 5.0 binary with the 4.4 binary and restart.

    3. Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, connect mongosh to the shard and run the rs.status() method (a sketch at the end of this step shows one way to summarize member states).

      Repeat for each secondary member.

  2. Downgrade the shard arbiter, if any.

    Skip this step if the replica set does not include an arbiter.

    1. Run the following command in mongosh to perform a clean shutdown, or refer to Stop mongod Processes for additional ways to safely terminate the mongod process:

      db.adminCommand( { shutdown: 1 } )
    2. Delete the contents of the arbiter data directory. The storage.dbPath configuration setting or the --dbpath command-line option specifies the data directory of the arbiter mongod.

      rm -rf /path/to/mongodb/datafiles/*
    3. Replace the 5.0 binary with the 4.4 binary and restart.

    4. Wait for the member to recover to ARBITER state. To check the member's state, connect mongosh to the member and run the rs.status() method.

  3. Downgrade the shard's primary.

    1. Step down the replica set primary. Connect mongosh to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

      rs.stepDown()
    2. Run rs.status().

      rs.status()

      When the status shows that the primary has stepped down and another member has assumed PRIMARY state, proceed.

    3. Run the following command from mongosh to perform a clean shutdown of the stepped-down primary, or refer to Stop mongod Processes for additional ways to safely terminate the mongod process:

      db.adminCommand( { shutdown: 1 } )
    4. Replace the 5.0 binary with the 4.4 binary and restart.

Repeat for the remaining shards.
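
The sub-steps above use rs.status() to confirm each member's state after a restart. As a quick check, here is a small mongosh sketch that summarizes every member's state, assuming the standard name and stateStr fields in the rs.status() output:

rs.status().members.forEach( function ( member ) {
   // Print each member's host and current replica set state
   print( member.name + " : " + member.stateStr );
} )

Each member should report PRIMARY, SECONDARY, or ARBITER before you proceed to the next node.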

5. Downgrade the config servers.

  1. Downgrade the secondary members of the config servers replica set (CSRS) one at a time:

    1. Run the following command in mongosh to perform a clean shutdown, or refer to Stop mongod Processes for additional ways to safely terminate the mongod process:

      db.adminCommand( { shutdown: 1 } )
    2. Replace the 5.0 binary with the 4.4 binary and restart.

    3. Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, connect mongosh to the member and run the rs.status() method.

      Repeat for each secondary member.

  2. Step down the config server primary.

    1. Connect mongosh to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

      rs.stepDown()
    2. Run rs.status().

      rs.status()

      When the status shows that the primary has stepped down and another member has assumed PRIMARY state, proceed.

    3. Run the following command from mongosh to perform a clean shutdown of the stepped-down primary, or refer to Stop mongod Processes for additional ways to safely terminate the mongod process:

      db.adminCommand( { shutdown: 1 } )
    4. Replace the 5.0 binary with the 4.4 binary and restart.

6. Re-enable the balancer.

Once the downgrade of sharded cluster components is complete, connect mongosh to a mongos and re-enable the balancer:

sh.startBalancer()

The mongosh method sh.startBalancer() also enables auto-splitting for the sharded cluster.
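
To confirm that the balancer is running again, you can re-run sh.getBalancerState(), which should now return true:

sh.getBalancerState()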
