
Downgrade 4.0 Sharded Cluster to 3.6

On this page

  • Downgrade Path
  • Create Backup
  • Prerequisites
  • Procedure

Before you attempt any downgrade, familiarize yourself with the content of this document.

Downgrade Path

Once upgraded to 4.0, if you need to downgrade, we recommend downgrading to the latest patch release of 3.6.

MongoDB 4.0 introduces new hex-encoded string change stream resume tokens:

The resume token _data type depends on the MongoDB version and, in some cases, the feature compatibility version (fcv) at the time the change stream was opened or resumed (a change in fcv value does not affect the resume tokens of already-opened change streams):

MongoDB Version            | Feature Compatibility Version | Resume Token _data Type
---------------------------|-------------------------------|-------------------------
MongoDB 4.0.7 and later    | "4.0" or "3.6"                | Hex-encoded string (v1)
MongoDB 4.0.6 and earlier  | "4.0"                         | Hex-encoded string (v0)
MongoDB 4.0.6 and earlier  | "3.6"                         | BinData
MongoDB 3.6                | "3.6"                         | BinData
  • When downgrading from MongoDB 4.0.7 or later, clients cannot use the resume tokens returned from the 4.0.7+ deployment. To resume a change stream, clients need a resume token saved from before the upgrade to 4.0.7 (if available); otherwise, they must start a new change stream (see the sketch after this list).

  • When downgrading from MongoDB 4.0.6 or earlier, clients can use BinData resume tokens returned from the 4.0 deployment, but not the v0 tokens.
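As an illustrative mongo shell sketch (the orders collection and the variable names are hypothetical), a client saves the _id of a change event as its resume token while the stream is open, and later passes it to the resumeAfter option:

// Open a change stream and save the resume token of the first event
var cursor = db.orders.watch();
var firstChange = cursor.next();     // blocks until a change arrives
var savedToken = firstChange._id;    // the resume token for this event

// Later, e.g. after reconnecting, resume from that point
var resumed = db.orders.watch( [], { resumeAfter: savedToken } );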

Create Backup

Optional but Recommended. Create a backup of your database.
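For example, you can run mongodump against a mongos (the host and output directory below are placeholders). Note that mongodump against a live sharded cluster does not by itself guarantee a cluster-wide consistent snapshot; consult the backup documentation appropriate to your deployment:

mongodump --host mongos.example.net:27017 --out /backups/pre-downgrade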

Prerequisites

Before downgrading the binaries, you must downgrade the feature compatibility version and remove any 4.0 features incompatible with 3.6 or earlier versions, as outlined below. These steps are necessary only if featureCompatibilityVersion has ever been set to "4.0".

  1. Connect a mongo shell to the mongos instance.

  2. Downgrade the featureCompatibilityVersion to "3.6".

    db.adminCommand({setFeatureCompatibilityVersion: "3.6"})

    The setFeatureCompatibilityVersion command performs writes to an internal system collection and is idempotent. If for any reason the command does not complete successfully, retry the command on the mongos instance.

To ensure that all members of the sharded cluster reflect the updated featureCompatibilityVersion, connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:

Tip

For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.

db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )

All members should return a result that includes:

"featureCompatibilityVersion" : { "version" : "3.6" }

If any member returns a featureCompatibilityVersion that includes either a version value of "4.0" or a targetVersion field, wait for the member to reflect version "3.6" before proceeding.
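While the featureCompatibilityVersion downgrade is still in flight, a member may return an intermediate state that includes a targetVersion field, for example:

"featureCompatibilityVersion" : { "version" : "3.6", "targetVersion" : "3.6" }

Once the downgrade completes on that member, the targetVersion field disappears.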

For more information on the returned featureCompatibilityVersion value, see Get FeatureCompatibilityVersion.

Remove all persisted features that are incompatible with 3.6. For example, if you have defined any view definitions, document validators, or partial index filters that use 4.0 query features such as the aggregation $convert operator, you must remove them.
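As a sketch of how you might locate such features, the following mongo shell snippet lists the view definitions in the current database so you can inspect their pipelines for 4.0-only operators; reportView is an illustrative name for a view you have decided to remove:

// List view definitions and print each pipeline for inspection
db.getCollectionInfos( { type: "view" } ).forEach( function ( info ) {
   print( info.name );
   printjson( info.options.pipeline );
} );

// Drop a view whose pipeline uses a 4.0-only operator such as $convert
db.reportView.drop();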

If you have users with only SCRAM-SHA-256 credentials, you should create SCRAM-SHA-1 credentials for these users before downgrading. To update a user who only has SCRAM-SHA-256 credentials, run db.updateUser() with mechanisms set to SCRAM-SHA-1 only and the pwd set to the password:

db.updateUser(
   "reportUser256",
   {
      mechanisms: [ "SCRAM-SHA-1" ],
      pwd: <newpwd>
   }
)
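On a 4.0 deployment, you can confirm the update: the user document returned by db.getUser() includes a mechanisms array, which should now list SCRAM-SHA-1:

db.getUser( "reportUser256" ).mechanisms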

Warning

Before proceeding with the downgrade procedure, ensure that all members, including delayed replica set members in the sharded cluster, reflect the prerequisite changes. That is, check the featureCompatibilityVersion and the removal of incompatible features for each node before downgrading.

Note

If you ran MongoDB 4.0 with authenticationMechanisms that included SCRAM-SHA-256, omit SCRAM-SHA-256 when restarting with the 3.6 binary.
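For example, if you previously started mongod with authenticationMechanisms set via --setParameter, restart the 3.6 binary with SCRAM-SHA-256 removed from the list (other options elided):

mongod --setParameter authenticationMechanisms=SCRAM-SHA-1 --dbpath <path> --port <port>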

Procedure

1

Download the latest 3.6 binaries.

Using either a package manager or a manual download, get the latest release in the 3.6 series. If using a package manager, add a new repository for the 3.6 binaries, then perform the actual downgrade process.

Once upgraded to 4.0, if you need to downgrade, we recommend downgrading to the latest patch release of 3.6.
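As an illustrative sketch for Ubuntu 16.04 (xenial) with apt, assuming the official MongoDB 3.6 repository; substitute your platform's repository, and pin the latest 3.6 patch release (3.6.23 is shown only as an example):

echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.6 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list
sudo apt-get update
sudo apt-get install -y mongodb-org=3.6.23 mongodb-org-server=3.6.23 mongodb-org-shell=3.6.23 mongodb-org-mongos=3.6.23 mongodb-org-tools=3.6.23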

2

Disable the balancer.

Connect a mongo shell to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer:

sh.stopBalancer()

Note

If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning() to check the balancer's current state.

To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:

sh.getBalancerState()

For more information on disabling the balancer, see Disable the Balancer.

3

Downgrade the mongos instances.

Downgrade the binaries and restart.
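For each mongos, shut it down, replace the 4.0 binary with the 3.6 binary, and restart it with its existing options, for example (placeholders follow the same conventions as elsewhere on this page):

mongos --configdb <configReplSetName>/<cfgHost1:port>,<cfgHost2:port>,<cfgHost3:port> --bind_ip localhost,<hostname(s)|ip address(es)>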

4

Downgrade the shards.

Downgrade the shards one at a time. If the shards are replica sets, for each shard:

  1. Downgrade the secondary members of the replica set one at a time:

    1. Shut down the mongod instance and replace the 4.0 binary with the 3.6 binary.

    2. Start the 3.6 binary with the --shardsvr and --port command line options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

      mongod --shardsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

      Or if using a configuration file, update the file to include sharding.clusterRole: shardsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start:

sharding:
   clusterRole: shardsvr
net:
   port: <port>
   bindIp: localhost,<hostname(s)|ip address(es)>
storage:
   dbPath: <path>
    3. Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, you can issue rs.status() in the mongo shell.

      Repeat for each secondary member.

  2. Step down the replica set primary.

    Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

    rs.stepDown()
  3. When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, downgrade the stepped-down primary:

    1. Shut down the stepped-down primary and replace the mongod binary with the 3.6 binary.

    2. Start the 3.6 binary with the --shardsvr and --port command line options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

      mongod --shardsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

      Or if using a configuration file, update the file to include sharding.clusterRole: shardsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.6 binary:

sharding:
   clusterRole: shardsvr
net:
   port: <port>
   bindIp: localhost,<hostname(s)|ip address(es)>
storage:
   dbPath: <path>
5

Downgrade the config servers.

If the config servers are replica sets:

  1. Downgrade the secondary members of the replica set one at a time:

    1. Shut down the secondary mongod instance and replace the 4.0 binary with the 3.6 binary.

    2. Start the 3.6 binary with both the --configsvr and --port options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

      mongod --configsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

      If using a configuration file, update the file to specify sharding.clusterRole: configsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.6 binary:

sharding:
   clusterRole: configsvr
net:
   port: <port>
   bindIp: localhost,<hostname(s)|ip address(es)>
storage:
   dbPath: <path>

    3. Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, issue rs.status() in the mongo shell.

      Repeat for each secondary member.

  2. Step down the replica set primary.

    1. Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

      rs.stepDown()
    2. When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the stepped-down primary and replace the mongod binary with the 3.6 binary.

    3. Start the 3.6 binary with both the --configsvr and --port options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

      mongod --configsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

      If using a configuration file, update the file to specify sharding.clusterRole: configsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.6 binary:

sharding:
   clusterRole: configsvr
net:
   port: <port>
   bindIp: localhost,<hostname(s)|ip address(es)>
storage:
   dbPath: <path>
6

Re-enable the balancer.

Once the downgrade of sharded cluster components is complete, re-enable the balancer.
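Connect a mongo shell to a mongos instance and run sh.startBalancer(); sh.getBalancerState() should then return true:

sh.startBalancer()
sh.getBalancerState()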

Note

The MongoDB 3.6 deployment can use the BinData resume tokens returned from a change stream opened against the 4.0 deployment, but not the v0 or the v1 hex-encoded string resume tokens.
