Change Sharded Cluster to WiredTiger
Note
Starting in version 4.2, MongoDB removes the deprecated MMAPv1 storage engine. Before upgrading to MongoDB 4.2 from a MongoDB 4.0 deployment that uses MMAPv1, you must first upgrade to WiredTiger.
Use this tutorial to update MongoDB 4.0 sharded clusters to use WiredTiger.
For earlier versions of MongoDB:
To convert a 3.6 sharded cluster that uses MMAPv1, see the MongoDB 3.6 manual.
To convert a 3.4 sharded cluster that uses MMAPv1, see the MongoDB 3.4 manual.
To convert a 3.2 sharded cluster that uses MMAPv1, see the MongoDB 3.2 manual.
Considerations
Downtime
If you change the host or port of any shard, you must update the shard configuration as well.
PSA 3-member Architecture
Starting in MongoDB 3.6, "majority" read concern, available for WiredTiger, is enabled by default. However, for MongoDB 4.0.3+, if you have a three-member shard replica set with a primary-secondary-arbiter (PSA) architecture, you can disable "majority" read concern for that shard replica set. Disabling "majority" read concern for a three-member PSA architecture avoids possible cache-pressure buildup.
The procedure below disables "majority" read concern for a MongoDB 4.0.3 PSA architecture by including --enableMajorityReadConcern false. If you are running a MongoDB 4.0.1 or 4.0.2 PSA architecture, first upgrade to the latest 4.0 version in order to disable this read concern.
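For example, a shard member in a PSA replica set might be started with "majority" read concern disabled as follows; the replica set name, data path, and bind address here are placeholders for illustration only:
mongod --shardsvr --replSet shardA --dbpath /var/lib/mongodb --bind_ip localhost,<hostname> --enableMajorityReadConcern false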
Note
Disabling "majority" read concern disables support for Change Streams for MongoDB 4.0 and earlier. For MongoDB 4.2+, disabling read concern "majority" has no effect on change streams availability.
Disabling "majority" read concern prevents collMod commands which modify an index from rolling back. If such an operation needs to be rolled back, you must resync the affected nodes with the primary node.
Disabling "majority" read concern affects support for transactions on sharded clusters. Specifically:
A transaction cannot use read concern "snapshot" if the transaction involves a shard that has disabled read concern "majority".
A transaction that writes to multiple shards errors if any of the transaction's read or write operations involves a shard that has disabled read concern "majority".
However, disabling "majority" read concern does not affect transactions on replica sets. For transactions on replica sets, you can specify read concern "majority" (or "snapshot" or "local") for multi-document transactions even if read concern "majority" is disabled.
For more information on PSA architecture and read concern "majority", see Disable Read Concern Majority.
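As an illustration of the replica set behavior (not the sharded cluster behavior), a multi-document transaction in mongosh can still request read concern "majority" even if the member was started with --enableMajorityReadConcern false; the database and collection names below are hypothetical:
session = db.getMongo().startSession()
session.startTransaction({ readConcern: { level: "majority" }, writeConcern: { w: "majority" } })
session.getDatabase("test").orders.insertOne({ status: "new" })
session.commitTransaction()
session.endSession()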
MongoDB 3.0 or Greater
You must be running MongoDB version 3.0 or greater to use the WiredTiger storage engine. If you are running an earlier version, upgrade MongoDB before changing your storage engine. To upgrade, refer to the appropriate version of the manual.
Default Bind to Localhost
Starting with MongoDB 3.6, MongoDB binaries, mongod and mongos, bind to localhost by default.
From MongoDB versions 2.6 to 3.4, only the binaries from the official MongoDB RPM (Red Hat, CentOS, Fedora Linux, and derivatives) and DEB (Debian, Ubuntu, and derivatives) packages would bind to localhost by default. To learn more about this change, see Localhost Binding Compatibility Changes.
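For example, to make a member reachable by other hosts in the cluster in addition to localhost, you might supply an explicit --bind_ip list when starting it; the hostname below is a placeholder:
mongod --bind_ip localhost,mongodb0.example.net --dbpath /var/lib/mongodb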
Config Servers
Starting in version 3.4, config servers must be deployed as replica sets (CSRS). As such, version 3.4+ config servers already use the WiredTiger storage engine.
XFS and WiredTiger
With the WiredTiger storage engine, using XFS for data bearing nodes is recommended on Linux. For more information, see Kernel and File Systems.
MMAPv1 Only Restrictions
Once upgraded to WiredTiger, your deployment is not subject to the following MMAPv1-only restrictions:
| MMAPv1 Restrictions | Short Description |
|---|---|
| Number of Namespaces | For MMAPv1, the number of namespaces is limited to the size of the namespace file divided by 628. |
| Size of Namespace File | For MMAPv1, namespace files can be no larger than 2047 megabytes. |
| Database Size | The MMAPv1 storage engine limits each database to no more than 16000 data files. |
| Data Size | For MMAPv1, a single mongod instance cannot manage a data set that exceeds the maximum virtual memory address space provided by the underlying operating system. |
| Number of Collections in a Database | For the MMAPv1 storage engine, the maximum number of collections in a database is a function of the size of the namespace file and the number of indexes of collections in the database. |
Procedure
To change the storage engine to WiredTiger, perform the following steps for each replica set shard:
A. Update the secondary members to WiredTiger.
Update the secondary members one at a time:
Shut down the secondary member.
In mongosh, shut down the secondary.
use admin
db.shutdownServer()
Prepare a data directory for the new mongod running with WiredTiger.
Prepare a data directory for the new mongod instance that will run with the WiredTiger storage engine. mongod must have read and write permissions for this directory. You can either delete the contents of the stopped secondary member's current data directory or create a new directory entirely.
mongod with WiredTiger will not start with data files created with a different storage engine.
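A minimal sketch of preparing such a directory on Linux, assuming mongod runs as the mongodb user and /var/lib/mongodb-wt is a path you choose:
sudo mkdir -p /var/lib/mongodb-wt
sudo chown -R mongodb:mongodb /var/lib/mongodb-wt
sudo chmod 700 /var/lib/mongodb-wt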
Update configuration for WiredTiger.
Remove any MMAPv1 Specific Configuration Options from the mongod instance configuration.
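For example, if the member's configuration file contains MMAPv1-only settings such as storage.mmapv1.smallFiles or storage.mmapv1.nsSize, remove them. The sketch below shows a trimmed storage section with a placeholder path; the commented lines are the kind of options to delete:
storage:
  dbPath: /var/lib/mongodb-wt
  journal:
    enabled: true
#  mmapv1:             # remove any storage.mmapv1.* options
#    smallFiles: true
#    nsSize: 16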
Start mongod with WiredTiger.
Start mongod, specifying wiredTiger as the --storageEngine and the prepared data directory for WiredTiger as the --dbpath. Specify additional options as appropriate, such as --bind_ip.
Warning
Before binding to a non-localhost (e.g. publicly accessible) IP address, ensure you have secured your cluster from unauthorized access. For a complete list of security recommendations, see Security Checklist. At minimum, consider enabling authentication and hardening network infrastructure.
Since no data exists in the --dbpath, the mongod will perform an initial sync. The length of the initial sync process depends on the size of the database and network connection between members of the replica set.
You can also specify the options in a configuration file. To specify the storage engine, use the storage.engine setting.
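A sketch of both forms, with placeholder replica set name, paths, and bind address; adjust these to match the member's existing configuration:
mongod --shardsvr --replSet shardA --storageEngine wiredTiger --dbpath /var/lib/mongodb-wt --bind_ip localhost,<hostname>
The equivalent configuration file settings:
storage:
  engine: wiredTiger
  dbPath: /var/lib/mongodb-wt
replication:
  replSetName: shardA
sharding:
  clusterRole: shardsvr
net:
  bindIp: localhost,<hostname>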
Repeat the steps for the remaining secondary members, updating them one at a time.
B. Step down the primary.
Once all the secondary members have been upgraded to WiredTiger, connect mongosh to the primary and use rs.stepDown() to step down the primary and force an election of a new primary.
rs.stepDown()
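Before updating the old primary, you can confirm from mongosh that a new primary has been elected, for example:
rs.status().members.forEach(m => print(m.name, m.stateStr))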
C. Update the old primary.
When the primary has stepped down and become a secondary, update it to use WiredTiger by following the same steps as before:
Shut down the secondary member.
In mongosh, shut down the secondary.
use admin
db.shutdownServer()
Prepare a data directory for the new mongod running with WiredTiger.
Prepare a data directory for the new mongod instance that will run with the WiredTiger storage engine. mongod must have read and write permissions for this directory. You can either delete the contents of the stopped secondary member's current data directory or create a new directory entirely.
mongod with WiredTiger will not start with data files created with a different storage engine.
Update configuration for WiredTiger.
Remove any MMAPv1 Specific Configuration Options from the mongod instance configuration.
Start mongod with WiredTiger.
Start mongod, specifying wiredTiger as the --storageEngine and the prepared data directory for WiredTiger as the --dbpath. Specify additional options as appropriate, such as --bind_ip.
Warning
Before binding to a non-localhost (e.g. publicly accessible) IP address, ensure you have secured your cluster from unauthorized access. For a complete list of security recommendations, see Security Checklist. At minimum, consider enabling authentication and hardening network infrastructure.
Since no data exists in the --dbpath, the mongod will perform an initial sync. The length of the initial sync process depends on the size of the database and network connection between members of the replica set.
You can also specify the options in a configuration file. To specify the storage engine, use the storage.engine setting.
Repeat the procedure for the other shards.
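After converting all shards, you can verify that each member now reports WiredTiger by connecting mongosh to it and checking the storage engine name:
db.serverStatus().storageEngine.name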