Upgrade a Sharded Cluster to 3.6
Note
MongoDB 3.6 is not tested on APFS, the new filesystem in macOS 10.13, and may encounter errors.
Starting in MongoDB 3.6.13, the MongoDB 3.6-series removes support for Ubuntu 16.04 PPC64LE.
For earlier MongoDB Enterprise versions that support Ubuntu 16.04 POWER/PPC64LE:
Due to a lock elision bug present in older versions of the glibc package on Ubuntu 16.04 for POWER, you must upgrade the glibc package to at least glibc 2.23-0ubuntu5 before running MongoDB. Systems with older versions of the glibc package will experience database server crashes and misbehavior due to random memory corruption, and are unsuitable for production deployments of MongoDB.
Important
Before you attempt any upgrade, please familiarize yourself with the content of this document.
If you need guidance on upgrading to 3.6, MongoDB professional services offer major version upgrade support to help ensure a smooth transition without interruption to your MongoDB application.
Upgrade Recommendations and Checklists
When upgrading, consider the following:
Upgrade Version Path
To upgrade an existing MongoDB deployment to 3.6, you must be running a 3.4-series release.
To upgrade from a version earlier than the 3.4-series, you must successively upgrade major releases until you have upgraded to the 3.4-series. For example, if you are running a 3.2-series release, you must first upgrade to 3.4 before you can upgrade to 3.6.
Check Driver Compatibility
Before you upgrade MongoDB, check that you're using a MongoDB 3.6-compatible driver. Consult the driver documentation for your specific driver to verify compatibility with MongoDB 3.6.
Upgraded deployments that run on incompatible drivers might encounter unexpected or undefined behavior.
Preparedness
Before beginning your upgrade, see the Compatibility Changes in MongoDB 3.6 document to ensure that your applications and deployments are compatible with MongoDB 3.6. Resolve the incompatibilities in your deployment before starting the upgrade.
Always test your application in a staging environment before deploying the upgrade to your production environment.
Downgrade Consideration
Once upgraded to 3.6, if you need to downgrade, we recommend downgrading to the latest patch release of 3.4.
Default Bind to Localhost
Starting in MongoDB 3.6, mongod and mongos instances bind to localhost by default. Remote clients, including other members of the replica set, cannot connect to an instance bound only to localhost. To override this behavior and bind to other IP addresses, use the net.bindIp configuration file setting or the --bind_ip command-line option to specify a list of IP addresses.
The upgrade process will require that you specify the net.bindIp setting (or --bind_ip) if your sharded cluster members are run on different hosts or if you wish remote clients to connect to your sharded cluster.
Warning
Before binding to a non-localhost (e.g. publicly accessible) IP address, ensure you have secured your cluster from unauthorized access. For a complete list of security recommendations, see Security Checklist. At minimum, consider enabling authentication and hardening network infrastructure.
For more information, see Localhost Binding Compatibility Changes.
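For example, a minimal configuration-file sketch that binds an instance to localhost and one additional interface (198.51.100.10 here is only a placeholder for your host's address):
net:
  bindIp: localhost,198.51.100.10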
Shard Replica Sets
Starting in MongoDB 3.6, mongod instances with the shard server role must be replica set members.
To upgrade your sharded cluster to version 3.6, the shard servers must be running as a replica set. To convert an existing shard standalone instance to a shard replica set, see Convert a Shard Standalone to a Shard Replica Set.
Drivers
For MongoDB 3.6.0 - 3.6.3 binaries, you should upgrade your drivers to 3.6 feature compatible drivers only after you have upgraded the MongoDB binaries and updated the feature compatibility version of the sharded cluster to 3.6.
For more information, see SERVER-33763.
Read Concern Majority
Starting in MongoDB 3.6, MongoDB enables support for "majority" read concern by default.
For MongoDB 3.6.1 - 3.6.x, you can disable read concern "majority" to prevent storage cache pressure from immobilizing a deployment with a primary-secondary-arbiter (PSA) architecture. Disabling "majority" read concern also disables support for change streams.
For more information, see Disable Read Concern Majority.
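For reference, a sketch of the configuration-file setting used to disable "majority" read concern (see Disable Read Concern Majority for the full procedure and its implications):
replication:
  enableMajorityReadConcern: false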
Prerequisites
- Version 3.4 or Greater
- To upgrade a sharded cluster to 3.6, all members of the cluster must be at least version 3.4. The upgrade process checks all components of the cluster and will produce warnings if any component is running a version earlier than 3.4.
- Feature Compatibility Version
The 3.4 sharded cluster must have
featureCompatibilityVersion
set to3.4
.To ensure that all members of the sharded cluster have
featureCompatibilityVersion
set to3.4
, connect to each shard replica set member and each config server replica set member and check thefeatureCompatibilityVersion
:Tip
For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } ) All members should return a result that includes
"featureCompatibilityVersion": "3.4"
.To set or update
featureCompatibilityVersion
, run the following command on themongos
:db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } ) For more information, see
setFeatureCompatibilityVersion
.
- Shard Aware
The shards in the 3.4 sharded cluster must be shard aware (i.e. the shards must have received their shardIdentity document, located in the admin.system.version collection):
For sharded clusters that started as 3.4, the shards are shard aware.
For 3.4 sharded clusters that were upgraded from the 3.2-series, when you update featureCompatibilityVersion from 3.2 to 3.4, the config server attempts to send the shards their respective shardIdentity document every 30 seconds until success. You must wait until all shards receive the documents.
To check whether a shard replica set member has received its shardIdentity document, issue the find command against the system.version collection in the admin database and check for a document where "_id" : "shardIdentity" (see the example query after this list).
An example shardIdentity document:
{
   "_id" : "shardIdentity",
   "clusterId" : ObjectId("2bba123c6eeedcd192b19024"),
   "shardName" : "shard2",
   "configsvrConnectionString" : "configDbRepl/alpha.example.net:28100,beta.example.net:28100,charlie.example.net:28100"
}
- Back up the config Database
Optional but Recommended. As a precaution, take a backup of the config database before upgrading the sharded cluster.
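For example, the following query, run from the mongo shell against a shard replica set member, returns the shardIdentity document described above if the shard is shard aware:
use admin
db.system.version.find( { "_id" : "shardIdentity" } )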
Download 3.6 Binaries
Use Package Manager
If you installed MongoDB from the MongoDB apt, yum, dnf, or zypper repositories, you should upgrade to 3.6 using your package manager.
Follow the appropriate 3.6 installation instructions for your Linux system. This will involve adding a repository for the new release, then performing the actual upgrade process.
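For example, on an Ubuntu system where the MongoDB 3.6 apt repository has already been added (a sketch only; the exact repository configuration is covered in the 3.6 installation instructions for your distribution):
sudo apt-get update
sudo apt-get install -y mongodb-org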
Download 3.6 Binaries Manually
If you have not installed MongoDB using a package manager, you can manually download the MongoDB binaries from the MongoDB Download Center.
See 3.6 installation instructions for more information.
Upgrade Process
Disable the Balancer.
Connect a mongo shell to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer:
sh.stopBalancer()
Note
If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning() to check the balancer's current state.
To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:
sh.getBalancerState()
For more information on disabling the balancer, see Disable the Balancer.
Upgrade the config servers.
Upgrade the secondary members of the replica set one at a time:
Shut down the secondary mongod instance and replace the 3.4 binary with the 3.6 binary.
Start the 3.6 binary with the --configsvr, --replSet, and --port options. Include any other options as used by the deployment.
Note
The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster. For more information, see Localhost Binding Compatibility Changes.
mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>
If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 3.6 binary:
sharding:
  clusterRole: configsvr
replication:
  replSetName: <string>
net:
  port: <port>
  bindIp: localhost,<hostname(s)|ip address(es)>
storage:
  dbPath: <path>
Include any other settings as appropriate for your deployment.
Wait for the member to recover to SECONDARY state before upgrading the next secondary member. To check the member's state, issue rs.status() in the mongo shell.
Repeat for each secondary member.
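For example, one quick way to review each member's state from the mongo shell (a sketch using fields from the rs.status() output):
rs.status().members.forEach( function (member) { print( member.name + " : " + member.stateStr ) } )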
Step down the replica set primary.
Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:
rs.stepDown()
When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the stepped-down primary and replace the mongod binary with the 3.6 binary.
Start the 3.6 binary with the --configsvr, --replSet, --port, and --bind_ip options. Include any optional command line options used by the previous deployment:
mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>
Note
The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster. For more information, see Localhost Binding Compatibility Changes.
If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 3.6 binary:
sharding:
  clusterRole: configsvr
replication:
  replSetName: <string>
net:
  port: <port>
  bindIp: localhost,<hostname(s)|ip address(es)>
storage:
  dbPath: <path>
Include any other configuration as appropriate for your deployment.
Upgrade the shards.
Upgrade the shards one at a time.
For each shard replica set:
Upgrade the secondary members of the replica set one at a time:
Shut down the mongod instance and replace the 3.4 binary with the 3.6 binary.
Start the 3.6 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any optional command line options used by the previous deployment:
mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>
Note
The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster. For more information, see Localhost Binding Compatibility Changes.
If using a configuration file, update the file to include sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 3.6 binary:
sharding:
  clusterRole: shardsvr
replication:
  replSetName: <string>
net:
  port: <port>
  bindIp: localhost,<hostname(s)|ip address(es)>
storage:
  dbPath: <path>
Include any other configuration as appropriate for your deployment.
Wait for the member to recover to SECONDARY state before upgrading the next secondary member. To check the member's state, you can issue rs.status() in the mongo shell.
Repeat for each secondary member.
Step down the replica set primary.
Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:
rs.stepDown()
When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, upgrade the stepped-down primary:
Shut down the stepped-down primary and replace the mongod binary with the 3.6 binary.
Start the 3.6 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any optional command line options used by the previous deployment:
mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>
Note
The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster. For more information, see Localhost Binding Compatibility Changes.
If using a configuration file, update the file to specify sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 3.6 binary:
sharding:
  clusterRole: shardsvr
replication:
  replSetName: <string>
net:
  port: <port>
  bindIp: localhost,<hostname(s)|ip address(es)>
storage:
  dbPath: <path>
Include any other configuration as appropriate for your deployment.
Upgrade the mongos instances.
Replace each mongos instance with the 3.6 binary and restart. Include any other configuration as appropriate for your deployment.
Note
The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster. For more information, see Localhost Binding Compatibility Changes.
mongos --configdb csReplSet/<rsconfigsver1:port1>,<rsconfigsver2:port2>,<rsconfigsver3:port3> --bind_ip localhost,<hostname(s)|ip address(es)>
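If you start mongos from a configuration file instead, the equivalent settings are sharding.configDB and net.bindIp; a minimal sketch:
sharding:
  configDB: csReplSet/<rsconfigsver1:port1>,<rsconfigsver2:port2>,<rsconfigsver3:port3>
net:
  bindIp: localhost,<hostname(s)|ip address(es)>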
Re-enable the balancer.
Using a 3.6 mongo shell, connect to a mongos in the cluster and run sh.setBalancerState() to re-enable the balancer:
sh.setBalancerState(true)
The 3.4 and earlier mongo shell is not compatible with 3.6 clusters.
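To confirm that the balancer is running again, you can repeat the earlier check, which should now return true:
sh.getBalancerState()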
For more information about re-enabling the balancer, see Enable the Balancer.
Enable backwards-incompatible 3.6 features.
At this point, you can run the 3.6 binaries without the 3.6 features that are incompatible with 3.4. That is, you can run the 3.6 sharded cluster with feature compatibility version set to 3.4.
Important
For MongoDB 3.6.0-3.6.3, you should upgrade your drivers to 3.6 feature compatible drivers only after you have updated the feature compatibility version of the sharded cluster to 3.6. For more information, see SERVER-33763.
To enable these 3.6 features, set the feature compatibility version (FCV) to 3.6.
Note
Enabling these backwards-incompatible features can complicate the downgrade process since you must remove any persisted backwards-incompatible features before you downgrade.
It is recommended that after upgrading, you allow your deployment to run without enabling these features for a burn-in period to ensure the likelihood of downgrade is minimal. When you are confident that the likelihood of downgrade is minimal, enable these features.
On a mongos instance, run the setFeatureCompatibilityVersion command in the admin database:
db.adminCommand( { setFeatureCompatibilityVersion: "3.6" } )
This command must perform writes to an internal system collection. If for any reason the command does not complete successfully, you can safely retry the command on the mongos as the operation is idempotent.
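To confirm the new setting, you can re-run the check used in the prerequisites; connected to the mongos, the following returns the current feature compatibility version:
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )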
Restart mongos instances.
After changing the featureCompatibilityVersion, all mongos instances need to be restarted to pick up the changes in the causal consistency behavior.
Additional Upgrade Procedures
To upgrade a standalone, see Upgrade a Standalone to 3.6.
To upgrade a replica set, see Upgrade a Replica Set to 3.6.