
Downgrade 4.2 Sharded Cluster to 4.0

On this page

  • Downgrade Path
  • Considerations
  • Create Backup
  • Access Control
  • Prerequisites
  • Procedure

Before you attempt any downgrade, familiarize yourself with the content of this document.

Important

Before you upgrade or downgrade a replica set, ensure all replica set members are running. If you do not, the upgrade or downgrade will not complete until all members are started.

If you need to downgrade from 4.2, downgrade to the latest patch release of 4.0.

Tip

If you downgrade,

  • On Windows, downgrade to version 4.0.12 or a later version. You cannot downgrade to version 4.0.11 or earlier.

  • On Linux/macOS, if you use change streams and want to resume them seamlessly after the downgrade, downgrade to version 4.0.7 or a later version.

Starting in MongoDB 4.2, change streams are available regardless of "majority" read concern support; that is, you can use change streams whether read concern "majority" support is enabled (the default) or disabled.

In MongoDB 4.0 and earlier, change streams are available only if "majority" read concern support is enabled (default).

Once you downgrade to the 4.0 series, change streams are disabled if you have disabled read concern "majority" support.
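
For reference, read concern "majority" support is controlled by the replication.enableMajorityReadConcern setting (or the --enableMajorityReadConcern option). A minimal configuration sketch of a member with the support disabled (the replica set name is illustrative):

# Illustrative fragment: "majority" read concern support disabled.
# On a 4.0-series binary, change streams are unavailable with this setting.
replication:
   replSetName: shardA
   enableMajorityReadConcern: false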

Optional but Recommended. Create a backup of your database.

If your sharded cluster has access control enabled, your downgrade user privileges must include additional privileges to manage indexes on the config database. For example, create a role that grants the required privileges on the config database:

db.getSiblingDB("admin").createRole({
role: "configIndexRole",
privileges: [
{
resource: { db: "config", collection: "" },
actions: [ "find", "dropIndex", "createIndex", "listIndexes" ]
}
],
roles: [ ]
});

Add the newly created role to your downgrade user. For example, if you have a user myDowngradeUser in the admin database that already has the root role, use db.grantRolesToUser() to grant the additional role:

db.getSiblingDB("admin").grantRolesToUser( "myDowngradeUser",
[ { role: "configIndexRole", db: "admin" } ],
{ w: "majority", wtimeout: 4000 }
);

To downgrade from 4.2 to 4.0, you must remove persisted features that are incompatible with 4.0 and update incompatible configuration settings, as described in the following sections.

To downgrade the featureCompatibilityVersion of your sharded cluster:

  1. Connect a mongo shell to the mongos instance.

  2. Downgrade the featureCompatibilityVersion to "4.0".

    db.adminCommand({setFeatureCompatibilityVersion: "4.0"})

    The setFeatureCompatibilityVersion command performs writes to an internal system collection and is idempotent. If for any reason the command does not complete successfully, retry the command on the mongos instance.

  3. To ensure that all members of the sharded cluster reflect the updated featureCompatibilityVersion, connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:

    Tip

    For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.

    db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )

    All members should return a result that includes:

    "featureCompatibilityVersion" : { "version" : "4.0" }

    If any member returns a featureCompatibilityVersion of "4.2", wait for the member to reflect version "4.0" before proceeding.

Note

Arbiters do not replicate the admin.system.version collection. Because of this, arbiters always have a feature compatibility version equal to the downgrade version of the binary, regardless of the fCV value of the replica set.

For example, an arbiter in a MongoDB 4.2 cluster has an fCV value of 4.0.

For more information on the returned featureCompatibilityVersion value, see View FeatureCompatibilityVersion.

The following steps are necessary only if fCV has ever been set to "4.2".

Remove all persisted 4.2 features that are incompatible with 4.0, as described below.

Starting in MongoDB 4.2, for featureCompatibilityVersion (fCV) set to "4.2" or greater, MongoDB removes the Index Key Limit. For fCV set to "4.0", the limit still applies.

If you have an index with keys that exceed the Index Key Limit once fCV is set to "4.0", consider changing the index to a hashed index or to indexing a computed value. You can also temporarily set the failIndexKeyTooLong parameter to false while you resolve the problem. However, with failIndexKeyTooLong set to false, queries that use these indexes can return incomplete results.
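
For reference, a minimal sketch of temporarily relaxing the check with the failIndexKeyTooLong parameter (run against the affected 4.0 member, and revert the setting once the oversized keys are resolved):

// Temporarily allow writes with index keys over the 4.0 limit.
db.adminCommand( { setParameter: 1, failIndexKeyTooLong: false } )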

Starting in MongoDB 4.2, for featureCompatibilityVersion (fCV) set to "4.2" or greater, MongoDB removes the Index Name Length. For fCV set to "4.0", the limit still applies.

If you have an index with a name that exceeds the Index Name Length once fCV is set to "4.0", drop and recreate the index with a shorter name.

db.collection.dropIndex( <name | index specification> )
db.collection.createIndex(
   { <index specification> },
   { name: <shorter name> }
)


With featureCompatibilityVersion (fCV) "4.2", MongoDB uses a new internal format for unique indexes that is incompatible with MongoDB 4.0. The new internal format applies to both existing unique indexes as well as newly created/rebuilt unique indexes.

If fCV has ever been set to "4.2", use the following script to drop and recreate all unique indexes.

Script to run on mongos
// A script to rebuild unique indexes after downgrading fcv 4.2 to 4.0.
// Run this script to drop and recreate unique indexes
// for backwards compatibility with 4.0.
db.adminCommand("listDatabases").databases.forEach(function(d){
   let mdb = db.getSiblingDB(d.name);
   mdb.getCollectionInfos( { type: "collection" } ).forEach(function(c){
      let currentCollection = mdb.getCollection(c.name);
      currentCollection.getIndexes().forEach(function(idx){
         if (idx.unique){
            print("Dropping and recreating the following index: " + tojson(idx));
            assert.commandWorked(mdb.runCommand({dropIndexes: c.name, index: idx.name}));
            let res = mdb.runCommand({ createIndexes: c.name, indexes: [idx] });
            if (res.ok !== 1)
               assert.commandWorked(res);
         }
      });
   });
});
Script to run on shards
After you have run the script on the mongos, check the individual shards if you have created shard-local users; that is, if you created maintenance users directly on the shards instead of through the mongos, run the following script on the primary member of each such shard.
// A script to rebuild unique indexes after downgrading fcv 4.2 to 4.0.
// Run this script on shards to drop and recreate unique indexes
// for backwards compatibility with 4.0.
let mdb = db.getSiblingDB('admin');
mdb.getCollectionInfos( { type: "collection" } ).forEach(function(c){
   let currentCollection = mdb.getCollection(c.name);
   currentCollection.getIndexes().forEach(function(idx){
      if (idx.unique){
         print("Dropping and recreating the following index: " + tojson(idx));
         assert.commandWorked(mdb.runCommand({dropIndexes: c.name, index: idx.name}));
         let res = mdb.runCommand({ createIndexes: c.name, indexes: [idx] });
         if (res.ok !== 1)
            assert.commandWorked(res);
      }
   });
});

In addition, if you have enabled access control, you must also remove the system unique index user_1_db_1 on the admin.system.users collection.

If fCV has ever been set to "4.2", use the following command to drop the user_1_db_1 system unique index:

db.getSiblingDB("admin").getCollection("system.users").dropIndex("user_1_db_1")

The user_1_db_1 index will automatically be rebuilt when starting the server with the 4.0 binary in the procedure below.

For featureCompatibilityVersion (fCV) set to "4.2", MongoDB supports creating Wildcard Indexes. You must drop all wildcard indexes before downgrading to fCV "4.0".

Use the following script to drop all wildcard indexes:

// A script to drop wildcard indexes before downgrading fcv 4.2 to 4.0.
// Run this script to drop wildcard indexes
// for backwards compatibility with 4.0.
db.adminCommand("listDatabases").databases.forEach(function(d){
   let mdb = db.getSiblingDB(d.name);
   mdb.getCollectionInfos({ type: "collection" }).forEach(function(c){
      let currentCollection = mdb.getCollection(c.name);
      currentCollection.getIndexes().forEach(function(idx){
         let key = Object.keys(idx.key);
         if (key[0].includes("$**")) {
            print("Dropping index: " + idx.name + " from " + idx.ns);
            let res = mdb.runCommand({dropIndexes: c.name, index: idx.name});
            assert.commandWorked(res);
         }
      });
   });
});

Important

Downgrading to fCV "4.0" during an in-progress wildcard index build does not automatically drop or kill the index build. The index build can complete after downgrading to fcv "4.0", resulting in a valid wildcard index on the collection. Starting the 4.0 binary against against that data directory will result in startup failures.

Use db.currentOp() to check for any in-progress wildcard index builds. Once any in-progress wildcard index builds complete, run the script to drop them before downgrading to fCV "4.0".
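
A minimal sketch of checking for in-progress index builds with db.currentOp(); the filter on command.createIndexes reflects how index builds commonly appear in the 4.2 currentOp output, so inspect the printed key patterns for $** entries:

// Print the namespace and index specifications of in-progress index builds.
db.currentOp(true).inprog.forEach(function(op){
   if (op.command && op.command.createIndexes) {
      print(tojson({ ns: op.ns, indexes: op.command.indexes }));
   }
});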

Before downgrading the binaries, modify read-only view definitions and collection validation definitions that include the 4.2 operators, such as $set, $unset, $replaceWith.

You can modify a view either by redefining it with the collMod command or by dropping and recreating it with a 4.0-compatible pipeline, as shown in the sketch below.

You can modify the collection validation expressions by using the collMod command to update the validator.
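
For example, a minimal sketch of redefining a view with collMod (the view name, source collection, and pipeline contents are placeholders; supply a pipeline that uses only 4.0-compatible stages):

db.runCommand({
   collMod: "<viewName>",
   viewOn: "<sourceCollection>",
   pipeline: [ /* 4.0-compatible stages only, e.g. $addFields instead of $set */ ]
})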

Starting in MongoDB 4.2, MongoDB adds "tls"-prefixed options as aliases for the "ssl"-prefixed options.

If your deployments or clients use the "tls"-prefixed options, replace with the corresponding "ssl"-prefixed options for the mongod, the mongos, and the mongo shell and drivers.

The zstd compression library is available for journal data compression starting in version 4.2.

For any shard or config server member that uses the zstd library for its journal compressor:

  • If the member uses zstd for both journal compression and zstd Data Compression, perform the following journal compressor steps first, and then the zstd data compression steps described below.

  • If the member uses zstd for journal compression only, perform the following steps.

Note

The following procedure involves restarting the replica set member as a standalone without the journal.

  1. Perform a clean shutdown of the mongod instance:

    db.getSiblingDB('admin').shutdownServer()
  2. Update the configuration file to prepare to restart as a standalone:

    For example:

    storage:
       journal:
          enabled: false
    setParameter:
       skipShardingConfigurationChecks: true
       disableLogicalSessionCacheRefresh: true
    #replication:
    #   replSetName: shardA
    #sharding:
    #   clusterRole: shardsvr
    net:
       port: 27218

    If you use command-line options instead of a configuration file, you will have to update the command-line options during the restart.

  3. Restart the mongod instance:

    • If you are using a configuration file:

      mongod -f <path/to/myconfig.conf>
    • If you are using command-line options instead of a configuration file:

      mongod --nojournal --setParameter skipShardingConfigurationChecks=true --setParameter disableLogicalSessionCacheRefresh=true --port <samePort> ...
  4. Perform a clean shutdown of the mongod instance:

    db.getSiblingDB('admin').shutdownServer()

    Confirm that the process is no longer running.

  5. Update the configuration file to prepare to restart with the new journal compressor:

    For example:

    storage:
       wiredTiger:
          engineConfig:
             journalCompressor: <newValue>
    replication:
       replSetName: shardA
    sharding:
       clusterRole: shardsvr
    net:
       port: 27218

    If you use command-line options instead of a configuration file, you will have to update the command-line options during the restart below.

  6. Restart the mongod instance as a replica set member:

    • If you are using a configuration file:

      mongod -f <path/to/myconfig.conf>
    • If you are using command-line options instead of a configuration file:

      mongod --shardsvr --wiredTigerJournalCompressor <differentCompressor|none> --replSet ...

Note

If you encounter an unclean shutdown for a mongod during the downgrade procedure such that you need to use the journal files to recover, recover the instance using the 4.2 mongod and then retry the downgrade of the instance.

Important

If you also use zstd Journal Compression, perform these steps after you perform the prerequisite steps for the journal compressor.

The zstd compression library is available for collection data (block) compression starting in version 4.2. For any config server member or shard member that has data stored using zstd compression, the downgrade procedure requires an initial sync for that member. To prepare:

  1. Create a new empty data directory for the mongod instance. This directory will be used in the downgrade procedure below.

    Important

    Ensure that the user account running mongod has read and write permissions for the new directory.

  2. If you use a configuration file, update the file to prepare for the downgrade procedure:

    1. Delete storage.wiredTiger.collectionConfig.blockCompressor to use the default compressor (snappy), or set it to another compressor supported in 4.0. A configuration sketch follows these steps.

    2. Update storage.dbPath to the new data directory.

    If you use command-line options instead, you will have to update the options in the procedure below.
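
    For reference, a minimal sketch of the prepared configuration file under these assumptions (the data directory path is a placeholder; snappy and zlib are the block compressors supported in 4.0):

    storage:
       dbPath: /path/to/new/datafiles   # new, empty data directory for the initial sync
       # wiredTiger:
       #    collectionConfig:
       #       blockCompressor: zstd    # removed; snappy (default) or zlib are supported in 4.0
    replication:
       replSetName: shardA
    sharding:
       clusterRole: shardsvr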

Repeat for any other members that used zstd compression.

The zstd compression library is available for network message compression starting in version 4.2.

To prepare for the downgrade:

  1. For any mongod / mongos instance that uses zstd for network message compression and uses a configuration file, update the net.compression.compressors setting to prepare for the restart during the downgrade procedure (a sketch follows this list).

    If you use command-line options instead, you will have to update the options in the procedure below.
  2. For any client that specifies zstd in its URI connection string, update to remove zstd from the list.

  3. For any mongo shell that specifies zstd in its --networkMessageCompressors, update to remove zstd from the list.
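
A minimal sketch of the updated setting under these assumptions, keeping the 4.0-compatible snappy and zlib compressors:

net:
   compression:
      compressors: snappy,zlib   # zstd removed; MongoDB 4.0 does not support zstd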

Important

Messages are compressed when both parties enable network compression. Otherwise, messages between the parties are uncompressed.

Important

Remove client-side field level encryption code in applications prior to downgrading the server.

MongoDB 4.2 adds support for enforcing client-side field level encryption as part of a collection's JSON Schema document validation. Specifically, the $jsonSchema object supports the encrypt and encryptMetadata keywords. MongoDB 4.0 does not support these keywords and fails to start if any collection specifies those keywords as part of its validation $jsonSchema.

Use db.getCollectionInfos() on each database to identify collections specifying automatic field level encryption rules as part of the $jsonSchema validator. To prepare for downgrade, connect to a cluster mongos and perform either of the following operations for each collection using the 4.0-incompatible keywords:

  • Use collMod to modify the collection's validator and replace the $jsonSchema with a schema that contains only 4.0-compatible document validation syntax:

    db.runCommand({
       "collMod" : "<collection>",
       "validator" : {
          "$jsonSchema" : { <4.0-compatible schema object> }
       }
    })

    -or-

  • Use collMod to remove the validator object entirely:

    db.runCommand({ "collMod" : "<collection>", "validator" : {} })
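
To help identify the collections that need one of these changes, a minimal sketch that scans each database for validators mentioning the encrypt or encryptMetadata keywords (the string match against the serialized validator is a heuristic):

// Print namespaces whose validator appears to use encrypt or encryptMetadata.
db.adminCommand("listDatabases").databases.forEach(function(d){
   let mdb = db.getSiblingDB(d.name);
   mdb.getCollectionInfos({ type: "collection" }).forEach(function(c){
      let validator = c.options && c.options.validator;
      if (validator && /encrypt/.test(tojson(validator))) {
         print(d.name + "." + c.name);
      }
   });
});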

Warning

Before proceeding with the downgrade procedure, ensure that all members, including delayed replica set members in the sharded cluster, reflect the prerequisite changes. That is, check the featureCompatibilityVersion and the removal of incompatible features for each node before downgrading.

1

Using either a package manager or a manual download, get the latest release in the 4.0 series. If you use a package manager, add a new repository for the 4.0 binaries before performing the downgrade.

Important

Before you upgrade or downgrade a replica set, ensure all replica set members are running. If you do not, the upgrade or downgrade will not complete until all members are started.

If you need to downgrade from 4.2, downgrade to the latest patch release of 4.0.

2

Connect a mongo shell to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer:

sh.stopBalancer()

Note

If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning() to check the balancer's current state.

To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:

sh.getBalancerState()

Starting in MongoDB 4.2, sh.stopBalancer() also disables auto-splitting for the sharded cluster.

For more information on disabling the balancer, see Disable the Balancer.

3

Downgrade the mongos instances: replace each 4.2 mongos binary with the 4.0 binary and restart.

Note

If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.

4

Downgrade the shards one at a time.

  1. Downgrade the shard's secondary members one at a time:

    1. Shut down the mongod instance.

      db.adminCommand( { shutdown: 1 } )
    2. Replace the 4.2 binary with the 4.0 binary and restart.

      Note

      If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.

      • If your command-line options include "tls"-prefixed options, update to "ssl"-prefixed options.

      • If the mongod instance used zstd data compression, use the new, empty data directory and the updated compression settings that you prepared in the prerequisites; the member performs an initial sync.

      • If the mongod instance used zstd journal compression, ensure that the journal compressor change from the prerequisites is in place before restarting with the 4.0 binary.

      • If the mongod instance included zstd network message compression,

        • Remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).

    3. Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, connect a mongo shell to the member and run the rs.status() method (a polling sketch follows this step).

      Repeat to downgrade for each secondary member.

  2. Downgrade the shard arbiter, if any.

    Skip this step if the replica set does not include an arbiter.

    1. Shut down the mongod. See Stop mongod Processes for additional ways to safely terminate mongod processes.

      db.adminCommand( { shutdown: 1 } )
    2. Delete the arbiter data directory contents. The storage.dbPath configuration setting or --dbpath command line option specify the data directory of the arbiter mongod.

      rm -rf /path/to/mongodb/datafiles/*
    3. Replace the 4.2 binary with the 4.0 binary and restart.

      Note

      If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.

      • If your command-line options include "tls"-prefixed options, update to "ssl"-prefixed options.

      • If the mongod instance used zstd data compression, use the updated compression settings that you prepared in the prerequisites.

      • If the mongod instance used zstd journal compression, ensure that the journal compressor change from the prerequisites is in place before restarting with the 4.0 binary.

      • If the mongod instance included zstd network message compression,

        • Remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).

    4. Wait for the member to recover to ARBITER state. To check the member's state, connect a mongo shell to the member and run the rs.status() method.

  3. Downgrade the shard's primary.

    1. Step down the shard's primary. Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

      rs.stepDown()
    2. When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, downgrade the stepped-down primary:

    3. Shut down the stepped-down primary.

      db.adminCommand( { shutdown: 1 } )
    4. Replace the 4.2 binary with the 4.0 binary and restart.

      Note

      If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.

      • If your command-line options include "tls"-prefixed options, update to "ssl"-prefixed options.

      • If the mongod instance used zstd data compression, use the new, empty data directory and the updated compression settings that you prepared in the prerequisites; the member performs an initial sync.

      • If the mongod instance used zstd journal compression, ensure that the journal compressor change from the prerequisites is in place before restarting with the 4.0 binary.

      • If the mongod instance included zstd network message compression,

        • Remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).

Repeat for the remaining shards.
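
The steps above wait for each downgraded member to return to SECONDARY (or ARBITER) state. A minimal sketch of polling for the SECONDARY state from a mongo shell connected to the member (myState 2 corresponds to SECONDARY in the rs.status() output):

// Poll the member's replica set state until it reports SECONDARY.
while (rs.status().myState !== 2) {
   print("waiting for SECONDARY state...");
   sleep(1000);   // check once per second
}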

5
  1. Downgrade the secondary members of the config server replica set (CSRS) one at a time:

    1. Shut down the mongod instance.

      db.adminCommand( { shutdown: 1 } )
    2. Replace the 4.2 binary with the 4.0 binary and restart.

      Note

      If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.

      • If your command-line options include "tls"-prefixed options, update to "ssl"-prefixed options.

      • If the mongod instance used zstd data compression, use the new, empty data directory and the updated compression settings that you prepared in the prerequisites; the member performs an initial sync.

      • If the mongod instance used zstd journal compression, ensure that the journal compressor change from the prerequisites is in place before restarting with the 4.0 binary.

      • If the mongod instance included zstd network message compression,

        • Remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).

    3. Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, connect a mongo shell to the member and run the rs.status() method.

      Repeat to downgrade for each secondary member.

  2. Step down the config server primary.

    1. Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

      rs.stepDown()
    2. When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, downgrade the stepped-down primary:

    3. Shut down the stepped-down primary.

      db.adminCommand( { shutdown: 1 } )
    4. Replace the 4.2 binary with the 4.0 binary and restart.

      Note

      If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.

      • If your command-line options include "tls"-prefixed options, update to "ssl"-prefixed options.

      • If the mongod instance used zstd data compression, use the new, empty data directory and the updated compression settings that you prepared in the prerequisites; the member performs an initial sync.

      • If the mongod instance used zstd journal compression, ensure that the journal compressor change from the prerequisites is in place before restarting with the 4.0 binary.

      • If the mongod instance included zstd network message compression,

        • Remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).

6

Once the downgrade of sharded cluster components is complete, connect to the mongos and restart the balancer.

sh.startBalancer();

To verify that the balancer is enabled, run sh.getBalancerState():

sh.getBalancerState()

If the balancer is enabled, the method returns true.

7

When you stopped the balancer as part of the downgrade procedure, the sh.stopBalancer() method also disabled auto-splitting.

Once downgraded to MongoDB 4.0, sh.startBalancer() does not re-enable auto-splitting. If you wish to re-enable auto-splitting, run sh.enableAutoSplit():

sh.enableAutoSplit()
