Downgrade 4.2 Sharded Cluster to 4.0
Before you attempt any downgrade, familiarize yourself with the content of this document.
Downgrade Path
Important
Before you upgrade or downgrade a replica set, ensure all replica set members are running. If you do not, the upgrade or downgrade will not complete until all members are started.
If you need to downgrade from 4.2, downgrade to the latest patch release of 4.0.
Tip
If you downgrade:
- On Windows, downgrade to version 4.0.12 or a later 4.0 version. You cannot downgrade to version 4.0.11 or earlier.
- On Linux/macOS, if you are running change streams and want to resume them seamlessly, downgrade to version 4.0.7 or a later 4.0 version.
Considerations
Starting in MongoDB 4.2, change streams are available regardless of "majority" read concern support; that is, read concern majority support can be either enabled (default) or disabled to use change streams.
In MongoDB 4.0 and earlier, change streams are available only if "majority" read concern support is enabled (default).
Once you downgrade to the 4.0 series, change streams will be disabled if you have disabled read concern "majority".
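For reference, read concern "majority" support is controlled by the replication.enableMajorityReadConcern setting (or the --enableMajorityReadConcern command-line option). A minimal configuration-file sketch of a member with it disabled; the replica set name shardA is only an illustration:

replication:
   replSetName: shardA
   enableMajorityReadConcern: false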
Create Backup
Optional but Recommended. Create a backup of your database.
Access Control
If your sharded cluster has access control enabled, your downgrade user privileges must include additional privileges to manage indexes on the config database. For example, create a role in the admin database with the required privileges:
db.getSiblingDB("admin").createRole(
   {
      role: "configIndexRole",
      privileges: [
         { resource: { db: "config", collection: "" },
           actions: [ "find", "dropIndex", "createIndex", "listIndexes" ] }
      ],
      roles: []
   }
);
Add the newly created role to your downgrade user. For example, if you have a user myDowngradeUser in the admin database that already has the root role, use db.grantRolesToUser() to grant the additional role:
db.getSiblingDB("admin").grantRolesToUser(
   "myDowngradeUser",
   [ { role: "configIndexRole", db: "admin" } ],
   { w: "majority", wtimeout: 4000 }
);
Prerequisites
To downgrade from 4.2 to 4.0, you must remove incompatible features that are persisted and/or update incompatible configuration settings. These include:
1. Downgrade Feature Compatibility Version (fCV)
To downgrade the featureCompatibilityVersion of your sharded cluster:
Downgrade the featureCompatibilityVersion to "4.0":

db.adminCommand( { setFeatureCompatibilityVersion: "4.0" } )

The setFeatureCompatibilityVersion command performs writes to an internal system collection and is idempotent. If for any reason the command does not complete successfully, retry the command on the mongos instance.

To ensure that all members of the sharded cluster reflect the updated featureCompatibilityVersion, connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:

Tip
For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.

db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )

All members should return a result that includes:

"featureCompatibilityVersion" : { "version" : "4.0" }

If any member returns a featureCompatibilityVersion of "4.2", wait for the member to reflect version "4.0" before proceeding.
Note
Arbiters do not replicate the admin.system.version collection. Because of this, arbiters always have a feature compatibility version equal to the downgrade version of the binary, regardless of the fCV value of the replica set. For example, an arbiter in a MongoDB 4.2 cluster has an fCV value of 4.0.
For more information on the returned featureCompatibilityVersion value, see View FeatureCompatibilityVersion.
2. Remove fCV 4.2 Persisted Features
The following steps are necessary only if fCV has ever been set to "4.2".
Remove all persisted 4.2 features that are incompatible with 4.0. These include:
2a. Index Key Size
Starting in MongoDB 4.2, for featureCompatibilityVersion (fCV) set to "4.2" or greater, MongoDB removes the Index Key Limit. For fCV set to "4.0", the limit still applies.
If you have an index with keys that exceed the Index Key Limit once fCV is set to "4.0", consider changing the index to a hashed index or to indexing a computed value. You can also temporarily use failIndexKeyTooLong set to false before resolving the problem. However, with failIndexKeyTooLong set to false, queries that use these indexes can return incomplete results.
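For example, a minimal sketch of temporarily setting the parameter at runtime; resolve the oversized keys before relying on these indexes:

db.adminCommand( { setParameter: 1, failIndexKeyTooLong: false } )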
2b. Index Name Length
Starting in MongoDB 4.2, for featureCompatibilityVersion (fCV) set to "4.2" or greater, MongoDB removes the Index Name Length limit. For fCV set to "4.0", the limit still applies.
If you have an index with a name that exceeds the Index Name Length limit once fCV is set to "4.0", drop and recreate the index with a shorter name.
db.collection.dropIndex( <name | index specification> )
db.collection.createIndex( { <index specification> }, { name: <shorter name> } )
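For example, a minimal sketch assuming a hypothetical reports collection with an over-long index name:

db.reports.dropIndex( "reports_compound_index_on_reportDate_and_region_with_a_name_long_enough_to_exceed_the_4.0_limit" )
db.reports.createIndex( { reportDate: 1, region: 1 }, { name: "reportDate_1_region_1" } )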
2c. Unique Index Version
With featureCompatibilityVersion (fCV) "4.2", MongoDB uses a new internal format for unique indexes that is incompatible with MongoDB 4.0. The new internal format applies to both existing unique indexes as well as newly created/rebuilt unique indexes.
If fCV has ever been set to "4.2", use the following scripts to drop and recreate all unique indexes.
- Script to run on mongos:

// A script to rebuild unique indexes after downgrading fcv 4.2 to 4.0.
// Run this script to drop and recreate unique indexes
// for backwards compatibility with 4.0.
db.adminCommand("listDatabases").databases.forEach(function(d){
   let mdb = db.getSiblingDB(d.name);
   mdb.getCollectionInfos( { type: "collection" } ).forEach(function(c){
      let currentCollection = mdb.getCollection(c.name);
      currentCollection.getIndexes().forEach(function(idx){
         if (idx.unique){
            print("Dropping and recreating the following index: " + tojson(idx));
            assert.commandWorked(mdb.runCommand({ dropIndexes: c.name, index: idx.name }));
            let res = mdb.runCommand({ createIndexes: c.name, indexes: [idx] });
            if (res.ok !== 1)
               assert.commandWorked(res);
         }
      });
   });
});

- Script to run on shards:
After you have run the script on mongos, you need to check individual shards if you have created shard local users. That is, if you created maintenance users directly on the shards instead of through mongos, run the script on the primary member of the shard.

// A script to rebuild unique indexes after downgrading fcv 4.2 to 4.0.
// Run this script on shards to drop and recreate unique indexes
// for backwards compatibility with 4.0.
let mdb = db.getSiblingDB('admin');
mdb.getCollectionInfos( { type: "collection" } ).forEach(function(c){
   let currentCollection = mdb.getCollection(c.name);
   currentCollection.getIndexes().forEach(function(idx){
      if (idx.unique){
         print("Dropping and recreating the following index: " + tojson(idx));
         assert.commandWorked(mdb.runCommand({ dropIndexes: c.name, index: idx.name }));
         let res = mdb.runCommand({ createIndexes: c.name, indexes: [idx] });
         if (res.ok !== 1)
            assert.commandWorked(res);
      }
   });
});
2d. Remove user_1_db_1 System Unique Index
In addition, if you have enabled access control, you must also remove the system unique index user_1_db_1 on the admin.system.users collection.
If fCV has ever been set to "4.2", use the following command to drop the user_1_db_1 system unique index:
db.getSiblingDB("admin").getCollection("system.users").dropIndex("user_1_db_1")
The user_1_db_1 index will automatically be rebuilt when starting the server with the 4.0 binary in the procedure below.
2e. Remove Wildcard Indexes
For featureCompatibilityVersion (fCV) set to "4.2", MongoDB supports creating Wildcard Indexes. You must drop all wildcard indexes before downgrading to fCV "4.0".
Use the following script to drop all wildcard indexes:
// A script to drop wildcard indexes before downgrading fcv 4.2 to 4.0.
// Run this script to drop wildcard indexes
// for backwards compatibility with 4.0.
db.adminCommand("listDatabases").databases.forEach(function(d){
   let mdb = db.getSiblingDB(d.name);
   mdb.getCollectionInfos({ type: "collection" }).forEach(function(c){
      let currentCollection = mdb.getCollection(c.name);
      currentCollection.getIndexes().forEach(function(idx){
         var key = Object.keys(idx.key);
         if (key[0].includes("$**")) {
            print("Dropping index: " + idx.name + " from " + idx.ns);
            // dropIndexes takes the collection name, not the collection object
            let res = mdb.runCommand({ dropIndexes: c.name, index: idx.name });
            assert.commandWorked(res);
         }
      });
   });
});
Important
Downgrading to fCV "4.0" during an in-progress wildcard index build does not automatically drop or kill the index build. The index build can complete after downgrading to fCV "4.0", resulting in a valid wildcard index on the collection. Starting the 4.0 binary against that data directory will result in startup failures.
Use db.currentOp() to check for any in-progress wildcard index builds. Once any in-progress wildcard index builds complete, run the script to drop them before downgrading to fCV "4.0".
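For example, one way to surface in-progress index builds with db.currentOp(); this filter follows the general pattern for finding active createIndexes operations and index-build messages, so adjust it to your deployment:

db.currentOp(
   {
      $or: [
         { op: "command", "command.createIndexes": { $exists: true } },
         { op: "none", msg: /^Index Build/ }
      ]
   }
)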
2f. View Definitions/Collection Validation Definitions that Include 4.2 Operators
Before downgrading the binaries, modify read-only view definitions and collection validation definitions that include the 4.2 operators, such as $set, $unset, and $replaceWith.
- For the $set stage, use the $addFields stage instead.
- For the $replaceWith stage, use the $replaceRoot stage instead.
You can modify a view either by:
- dropping the view (db.myview.drop() method) and recreating the view (db.createView() method), or
- using the collMod command.
You can modify the collection validation expressions by using the collMod command. For an example of updating a view with collMod, see the sketch below.
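For example, a minimal collMod sketch for updating a view definition, assuming a hypothetical view dailyOrders over an orders collection whose pipeline used the 4.2 $set stage; the replacement pipeline uses $addFields instead:

db.runCommand( {
   collMod: "dailyOrders",
   viewOn: "orders",
   pipeline: [
      { $match: { status: "complete" } },
      { $addFields: { shipped: true } }   // previously a 4.2 $set stage
   ]
} )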
3. Update tls-Prefixed Configuration
Starting in MongoDB 4.2, MongoDB adds "tls"-prefixed options as aliases for the "ssl"-prefixed options.
If your deployments or clients use the "tls"-prefixed options, replace them with the corresponding "ssl"-prefixed options for the mongod, the mongos, and the mongo shell and drivers.
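For example, a minimal configuration-file sketch of the change; the mode and certificate file shown are illustrative, so map each "tls"-prefixed option you actually use to its "ssl"-prefixed equivalent:

# 4.2 "tls"-prefixed settings
#net:
#   tls:
#      mode: requireTLS
#      certificateKeyFile: /etc/ssl/mongodb.pem

# equivalent 4.0 "ssl"-prefixed settings
net:
   ssl:
      mode: requireSSL
      PEMKeyFile: /etc/ssl/mongodb.pem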
4. Prepare Downgrade from zstd Compression

zstd Journal Compression
The zstd compression library is available for journal data compression starting in version 4.2.
For any shard or config server member that uses the zstd library for its journal compressor:
- If the member uses zstd for journal compression and zstd data compression:
  - If using a configuration file, delete storage.wiredTiger.engineConfig.journalCompressor to use the default compressor (snappy) or set it to another 4.0-supported compressor.
  - If using command-line options instead, you will have to update the options in the procedure below.
- If the member uses zstd for journal compression only, use the following procedure.
Note
The following procedure involves restarting the replica set member as a standalone without the journal.
Perform a clean shutdown of the mongod instance:

db.getSiblingDB('admin').shutdownServer()

Update the configuration file to prepare to restart as a standalone:
- Set storage.journal.enabled to false.
- Set the parameter skipShardingConfigurationChecks to true in the setParameter section.
- Set the parameter disableLogicalSessionCacheRefresh to true in the setParameter section.
- Comment out the replication settings for your deployment.
- Comment out the sharding.clusterRole setting.
- Set the net.port to the member's current port, if it is not explicitly set.

For example:

storage:
   journal:
      enabled: false
setParameter:
   skipShardingConfigurationChecks: true
   disableLogicalSessionCacheRefresh: true
#replication:
#   replSetName: shardA
#sharding:
#   clusterRole: shardsvr
net:
   port: 27218

If you use command-line options instead of a configuration file, you will have to update the command-line options during the restart.
Restart the mongod instance:
If you are using a configuration file:

mongod -f <path/to/myconfig.conf>

If you are using command-line options instead of a configuration file:
- Include the --nojournal option.
- Set the parameter skipShardingConfigurationChecks to true in the --setParameter option.
- Set the parameter disableLogicalSessionCacheRefresh to true in the --setParameter option.
- Remove any replication command-line options (such as --replSet).
- Remove the --shardsvr/--configsvr option.
- Explicitly include --port set to the instance's current port.
mongod --nojournal --setParameter skipShardingConfigurationChecks=true --setParameter disableLogicalSessionCacheRefresh=true --port <samePort> ...
Perform a clean shutdown of the mongod instance:

db.getSiblingDB('admin').shutdownServer()

Confirm that the process is no longer running.
Update the configuration file to prepare to restart with the new journal compressor:
- Remove the storage.journal.enabled setting.
- Remove the skipShardingConfigurationChecks parameter setting.
- Remove the disableLogicalSessionCacheRefresh parameter setting.
- Uncomment the replication settings for your deployment.
- Uncomment the sharding.clusterRole setting.
- Remove the storage.wiredTiger.engineConfig.journalCompressor setting to use the default journal compressor, or specify a new 4.0-supported value.

For example:

storage:
   wiredTiger:
      engineConfig:
         journalCompressor: <newValue>
replication:
   replSetName: shardA
sharding:
   clusterRole: shardsvr
net:
   port: 27218

If you use command-line options instead of a configuration file, you will have to update the command-line options during the restart below.
Restart the mongod instance as a replica set member:
If you are using a configuration file:

mongod -f <path/to/myconfig.conf>

If you are using command-line options instead of a configuration file:
- Remove the --nojournal option.
- Remove the skipShardingConfigurationChecks parameter setting.
- Remove the disableLogicalSessionCacheRefresh parameter setting.
- Remove the --wiredTigerJournalCompressor command-line option to use the default journal compressor, or update it to a new value.
- Include the --shardsvr/--configsvr option.
- Include your replication command-line options as well as any additional options for your replica set member.
mongod --shardsvr --wiredTigerJournalCompressor <differentCompressor|none> --replSet ...
zstd Data Compression
Important
If you also use zstd journal compression, perform these steps after you perform the prerequisite steps for the journal compressor.
The zstd compression library is available starting in version 4.2. For any config server member or shard member that has data stored using zstd compression, the downgrade procedure will require an initial sync for that member. To prepare:
- Create a new empty data directory for the mongod instance. This directory will be used in the downgrade procedure below.
Important
Ensure that the user account running mongod has read and write permissions for the new directory.
- If you use a configuration file, update the file to prepare for the downgrade procedure (see the sketch after this list):
  - Delete storage.wiredTiger.collectionConfig.blockCompressor to use the default compressor (snappy) or set it to another 4.0-supported compressor.
  - Update storage.dbPath to the new data directory.
- If you use command-line options instead, you will have to update the options in the procedure below.
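For example, a minimal sketch of the updated settings; the path /data/db-downgrade is only an illustration, and omitting blockCompressor falls back to the default snappy compressor:

storage:
   dbPath: /data/db-downgrade
   # storage.wiredTiger.collectionConfig.blockCompressor removed to use the default (snappy)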
Repeat for any other members that used zstd compression.
zstd Network Compression
The zstd compression library is available for network message compression starting in version 4.2.
To prepare for the downgrade:
- For any mongod/mongos instance that uses zstd for network message compression and uses a configuration file, update the net.compression.compressors setting to prepare for the restart during the downgrade procedure (see the sketch after this list). If you use command-line options instead, you will have to update the options in the procedure below.
- For any client that specifies zstd in its URI connection string, update to remove zstd from the list.
- For any mongo shell that specifies zstd in its --networkMessageCompressors option, update to remove zstd from the list.
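For example, a minimal sketch of a net.compression.compressors setting with zstd removed; the remaining compressor list is illustrative:

net:
   compression:
      compressors: snappy,zlib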
Important
Messages are compressed when both parties enable network compression. Otherwise, messages between the parties are uncompressed.
5. Remove Client-Side Field Level Encryption Document Validation Keywords
Important
Remove client-side field level encryption code in applications prior to downgrading the server.
MongoDB 4.2 adds support for enforcing client-side field level encryption as part of a collection's JSON Schema document validation. Specifically, the $jsonSchema object supports the encrypt and encryptMetadata keywords. MongoDB 4.0 does not support these keywords and fails to start if any collection specifies those keywords as part of its validation $jsonSchema.
Use db.getCollectionInfos() on each database to identify collections specifying automatic field level encryption rules as part of the $jsonSchema validator. To prepare for downgrade, connect to a cluster mongos and perform either of the following operations for each collection using the 4.0-incompatible keywords (a sketch for identifying such collections follows the two options below):
- Use collMod to modify the collection's validator and replace the $jsonSchema with a schema that contains only 4.0-compatible document validation syntax:

db.runCommand({
   "collMod" : "<collection>",
   "validator" : {
      "$jsonSchema" : { <4.0-compatible schema object> }
   }
})

-or-
- Use collMod to remove the validator object entirely:

db.runCommand({
   "collMod" : "<collection>",
   "validator" : { }
})
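To help identify collections whose validator uses the 4.0-incompatible keywords, the following is a minimal sketch to run against each database; it only checks the top level of the $jsonSchema and its immediate properties, so extend it if your schemas nest encrypt or encryptMetadata more deeply:

db.getCollectionInfos( { type: "collection" } ).forEach(function(c) {
   var validator = (c.options && c.options.validator) || {};
   var schema = validator.$jsonSchema;
   if (!schema) return;
   var props = schema.properties || {};
   var usesEncrypt = ("encryptMetadata" in schema) ||
      Object.keys(props).some(function(k) {
         return props[k] && typeof props[k] === "object" && ("encrypt" in props[k]);
      });
   if (usesEncrypt) {
      print("4.0-incompatible validator found on collection: " + c.name);
   }
});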
Procedure
Downgrade a Sharded Cluster
Warning
Before proceeding with the downgrade procedure, ensure that all members, including delayed replica set members in the sharded cluster, reflect the prerequisite changes. That is, check the featureCompatibilityVersion and the removal of incompatible features for each node before downgrading.
Download the latest 4.0 binaries.
Using either a package manager or a manual download, get the latest release in the 4.0 series. If using a package manager, add a new repository for the 4.0 binaries, then perform the actual downgrade process.
Important
Before you upgrade or downgrade a replica set, ensure all replica set members are running. If you do not, the upgrade or downgrade will not complete until all members are started.
If you need to downgrade from 4.2, downgrade to the latest patch release of 4.0.
Disable the Balancer.
Connect a mongo shell to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer:
sh.stopBalancer()
Note
If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning() to check the balancer's current state.
To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:
sh.getBalancerState()
Starting in MongoDB 4.2, sh.stopBalancer() also disables auto-splitting for the sharded cluster.
For more information on disabling the balancer, see Disable the Balancer.
Downgrade the mongos instances.
Downgrade the binaries and restart.
Note
If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
- If your mongos command-line options include "tls"-prefixed options, update to "ssl"-prefixed options.
- If the mongos instance included zstd network message compression, remove --networkMessageCompressors to use the default snappy,zlib compressors. Alternatively, specify the list of compressors to use.
Downgrade the shards, one at a time.
For each shard, downgrade the shard's secondary members one at a time:
Shut down the mongod instance:

db.adminCommand( { shutdown: 1 } )

Replace the 4.2 binary with the 4.0 binary and restart.
Note
If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
- If your command-line options include "tls"-prefixed options, update to "ssl"-prefixed options.
- If the mongod instance used zstd data compression:
  - Update --dbpath to the new directory (created during the prerequisites).
  - Remove --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0-supported compressor).
- If the mongod instance used zstd journal compression, remove --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0-supported compressor).
- If the mongod instance included zstd network message compression, remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).
Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, connect a mongo shell to the shard and run the rs.status() method.
Repeat to downgrade each secondary member.
Downgrade the shard arbiter, if any.
Skip this step if the replica set does not include an arbiter.
Shut down the mongod instance. See Stop mongod Processes for additional ways to safely terminate mongod processes.

db.adminCommand( { shutdown: 1 } )

Delete the arbiter data directory contents. The storage.dbPath configuration setting or --dbpath command-line option specifies the data directory of the arbiter mongod.

rm -rf /path/to/mongodb/datafiles/*

Replace the 4.2 binary with the 4.0 binary and restart.
Note
If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
- If your command-line options include "tls"-prefixed options, update to "ssl"-prefixed options.
- If the mongod instance used zstd data compression:
  - Update --dbpath to the new directory (created during the prerequisites).
  - Remove --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0-supported compressor).
- If the mongod instance used zstd journal compression, remove --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0-supported compressor).
- If the mongod instance included zstd network message compression, remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).
Wait for the member to recover to ARBITER state. To check the member's state, connect a mongo shell to the member and run the rs.status() method.
Downgrade the shard's primary.
Step down the shard's primary. Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

rs.stepDown()

When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, downgrade the stepped-down primary:
Shut down the stepped-down primary:

db.adminCommand( { shutdown: 1 } )

Replace the 4.2 binary with the 4.0 binary and restart.
Note
If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
- If your command-line options include "tls"-prefixed options, update to "ssl"-prefixed options.
- If the mongod instance used zstd data compression:
  - Update --dbpath to the new directory (created during the prerequisites).
  - Remove --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0-supported compressor).
- If the mongod instance used zstd journal compression, remove --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0-supported compressor).
- If the mongod instance included zstd network message compression, remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).
Repeat for the remaining shards.
Downgrade the config servers.
Downgrade the secondary members of the config server replica set (CSRS) one at a time:
Shut down the mongod instance:

db.adminCommand( { shutdown: 1 } )

Replace the 4.2 binary with the 4.0 binary and restart.
Note
If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
- If your command-line options include "tls"-prefixed options, update to "ssl"-prefixed options.
- If the mongod instance used zstd data compression:
  - Update --dbpath to the new directory (created during the prerequisites).
  - Remove --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0-supported compressor).
- If the mongod instance used zstd journal compression, remove --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0-supported compressor).
- If the mongod instance included zstd network message compression, remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).
Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, connect a mongo shell to the member and run the rs.status() method.
Repeat to downgrade each secondary member.
Step down the config server primary.
Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

rs.stepDown()

When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, downgrade the stepped-down primary:
Shut down the stepped-down primary:

db.adminCommand( { shutdown: 1 } )

Replace the 4.2 binary with the 4.0 binary and restart.
Note
If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
- If your command-line options include "tls"-prefixed options, update to "ssl"-prefixed options.
- If the mongod instance used zstd data compression:
  - Update --dbpath to the new directory (created during the prerequisites).
  - Remove --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0-supported compressor).
- If the mongod instance used zstd journal compression, remove --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0-supported compressor).
- If the mongod instance included zstd network message compression, remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).
Re-enable the balancer.
Once the downgrade of sharded cluster components is complete, connect to the mongos and restart the balancer:
sh.startBalancer();
To verify that the balancer is enabled, run sh.getBalancerState():
sh.getBalancerState()
If the balancer is enabled, the method returns true.
Re-enable autosplit.
When stopping the balancer as part of the downgrade process, the sh.stopBalancer() method also disabled auto-splitting. Once downgraded to MongoDB 4.0, sh.startBalancer() does not re-enable auto-splitting. If you wish to re-enable auto-splitting, run sh.enableAutoSplit():
sh.enableAutoSplit()