
Back Up a Sharded Cluster with File System Snapshots


This document describes a procedure for taking a backup of all components of a sharded cluster. This procedure uses file system snapshots to capture a copy of the data files for each mongod instance in the cluster.

Important

To capture a point-in-time backup from a sharded cluster you must stop all writes to the cluster. On a running production system, you can only capture an approximation of a point-in-time snapshot.

For more information on backups in MongoDB and backups of sharded clusters in particular, see MongoDB Backup Methods and Backup and Restore Sharded Clusters.

In MongoDB 4.2+, you cannot use file system snapshots for backups that involve transactions across shards because those backups do not maintain atomicity. Instead, use one of the following to perform the backups:

  • MongoDB Atlas,

  • MongoDB Cloud Manager, or

  • MongoDB Ops Manager.

For encrypted storage engines that use AES256-GCM encryption mode, AES256-GCM requires that every process use a unique counter block value with the key.

For an encrypted storage engine configured with the AES256-GCM cipher:

  • Restoring from Hot Backup
    Starting in 4.2, if you restore from files taken via "hot" backup (i.e. the mongod is running), MongoDB can detect "dirty" keys on startup and automatically roll over the database key to avoid IV (Initialization Vector) reuse.
  • Restoring from Cold Backup

    However, if you restore from files taken via "cold" backup (i.e. the mongod is not running), MongoDB cannot detect "dirty" keys on startup, and reuse of IV voids confidentiality and integrity guarantees.

    Starting in 4.2, to avoid the reuse of the keys after restoring from a cold filesystem snapshot, MongoDB adds a new command-line option --eseDatabaseKeyRollover. When started with the --eseDatabaseKeyRollover option, the mongod instance rolls over the database keys configured with AES256-GCM cipher and exits.
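
As an illustration only, a cold-restored mongod might be rolled over before being returned to service with a command along these lines; the encryption options shown (a local key file and data path) are placeholder assumptions and must match your deployment's existing encryption configuration:

mongod --enableEncryption --encryptionCipherMode AES256-GCM \
       --encryptionKeyFile /data/key/mongodb-keyfile \
       --dbpath /data/db --eseDatabaseKeyRollover

The instance rolls over the database keys and exits; start it normally afterward.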

Tip

  • In general, if using filesystem based backups for MongoDB Enterprise 4.2+, use the "hot" backup feature, if possible.

  • For MongoDB Enterprise versions 4.0 and earlier, if you use AES256-GCM encryption mode, do not make copies of your data files or restore from filesystem snapshots ("hot" or "cold").

It is essential that you stop the balancer before capturing a backup.

If the balancer is active while you capture backups, the backup artifacts may be incomplete and/or have duplicate data, as chunks may migrate while recording backups.

In this procedure, you will stop the cluster balancer and take a backup of the config database, and then take backups of each shard in the cluster using a file system snapshot tool. If you need an exact moment-in-time snapshot of the system, you will need to stop all application writes before taking the file system snapshots; otherwise the snapshot will only approximate a moment in time.

For approximate point-in-time snapshots, you can minimize the impact on the cluster by taking the backup from a secondary member of each replica set shard.
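
To choose a secondary to back up from, you can inspect the replica set from mongosh; for example, the following lists each member and its replication state:

rs.status().members.forEach( function(member) {
   // Back up from a member that reports "SECONDARY"
   print( member.name + " : " + member.stateStr )
} )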

If the journal and data files are on the same logical volume, you can use a single point-in-time snapshot to capture a consistent copy of the data files.

If the journal and data files are on different file systems, you must use db.fsyncLock() and db.fsyncUnlock() to ensure that the data files do not change, providing consistency for the purposes of creating backups.

If your deployment depends on Amazon's Elastic Block Storage (EBS) with RAID configured within your instance, it is impossible to get a consistent state across all disks using the platform's snapshot tool. As an alternative, you can do one of the following:

  • Flush all writes to disk and create a write lock to ensure a consistent state during the backup process.

  • Configure LVM to run and hold your MongoDB data files on top of the RAID within your system, and create the snapshot through LVM instead.

1

Connect mongosh to a cluster mongos instance. Use the sh.stopBalancer() method to stop the balancer. If a balancing round is in progress, the operation waits for balancing to complete before stopping the balancer.

use config
sh.stopBalancer()
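
You can optionally confirm that the balancer is stopped before continuing:

// Returns false when the balancer is disabled
sh.getBalancerState()

// Reports whether a balancing round is in progress
// (a boolean in older releases, a document in newer ones)
sh.isBalancerRunning()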

Starting in MongoDB 6.1, automatic chunk splitting is not performed. This is because of balancing policy improvements. Auto-splitting commands still exist, but do not perform an operation. For details, see Balancing Policy Changes.

In MongoDB versions earlier than 6.1, sh.stopBalancer() also disables auto-splitting for the sharded cluster.

For more information, see the Disable the Balancer procedure.

2

If your secondary does not have journaling enabled or its journal and data files are on different volumes, you must lock the secondary's mongod instance before capturing a backup.

If your secondary has journaling enabled and its journal and data files are on the same volume, you may skip this step.

Important

If your deployment requires this step, you must perform it on one secondary of each shard and one secondary of the config server replica set (CSRS).

Ensure that the oplog has sufficient capacity to allow these secondaries to catch up to the state of the primaries after finishing the backup procedure. See Oplog Size for more information.
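
To gauge the available oplog window, you can print the replication info from mongosh, for example:

// On the primary: prints the configured oplog size and the time range it covers
rs.printReplicationInfo()

// Prints how far each secondary's oplog is behind the primary
// (rs.printSlaveReplicationInfo() in older releases)
rs.printSecondaryReplicationInfo()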

For each shard replica set in the sharded cluster, confirm that the member has replicated data up to some control point. To verify, first connect mongosh to the shard primary and perform a write operation with "majority" write concern on a control collection:

use config
db.BackupControl.findAndModify(
   {
      query: { _id: 'BackupControlDocument' },
      update: { $inc: { counter : 1 } },
      new: true,
      upsert: true,
      writeConcern: { w: 'majority', wtimeout: 15000 }
   }
);

The operation should return the modified (or inserted) control document:

{ "_id" : "BackupControlDocument", "counter" : 1 }

Query the shard secondary member for the returned control document. Connect mongosh to the shard secondary that you plan to lock and use db.collection.find() to query for the control document:

rs.secondaryOk();
use config;
db.BackupControl.find(
   { "_id" : "BackupControlDocument", "counter" : 1 }
).readConcern('majority');

If the secondary member contains the latest control document, it is safe to lock the member. Otherwise, wait until the member contains the document or select a different secondary member that contains the latest control document.
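
If the document is not yet visible, one way to wait for it from mongosh is a simple polling loop; a minimal sketch:

// Poll until the control document with the expected counter value is
// visible on this secondary with "majority" read concern
while ( db.getSiblingDB("config").BackupControl.find(
   { "_id" : "BackupControlDocument", "counter" : 1 }
).readConcern("majority").itcount() === 0 ) {
   sleep(1000)   // check once per second
}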

To lock the secondary member, run db.fsyncLock() on the member:

db.fsyncLock()
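
To confirm that the member is locked, you can check the fsyncLock field reported by db.currentOp():

// Returns true while the member is locked for backup
db.currentOp().fsyncLock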

If locking a secondary of the CSRS, confirm that the member has replicated data up to some control point. To verify, first connect mongosh to the CSRS primary and perform a write operation with "majority" write concern on a control collection:

use config
db.BackupControl.findAndModify(
   {
      query: { _id: 'BackupControlDocument' },
      update: { $inc: { counter : 1 } },
      new: true,
      upsert: true,
      writeConcern: { w: 'majority', wtimeout: 15000 }
   }
);

The operation should return the modified (or inserted) control document:

{ "_id" : "BackupControlDocument", "counter" : 1 }

Query the CSRS secondary member for the returned control document. Connect mongosh to the CSRS secondary that you plan to lock and use db.collection.find() to query for the control document:

rs.secondaryOk();
use config;
db.BackupControl.find(
   { "_id" : "BackupControlDocument", "counter" : 1 }
).readConcern('majority');

If the secondary member contains the latest control document, it is safe to lock the member. Otherwise, wait until the member contains the document or select a different secondary member that contains the latest control document.

To lock the secondary member, run db.fsyncLock() on the member:

db.fsyncLock()

3

Note

Backing up a config server backs up the sharded cluster's metadata. You only need to back up one config server, as they all hold the same data. Perform this step against the locked CSRS secondary member.

To create a file-system snapshot of the config server, follow the procedure in Create a Snapshot.

4

If you locked a member of the replica set shards, perform this step against the locked secondary.

You may back up the shards in parallel. For each shard, create a snapshot, using the procedure in Back Up and Restore with Filesystem Snapshots.
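
As a sketch of what the snapshot itself can look like when the data files live on an LVM logical volume (the volume group, volume name, and snapshot size below are placeholders; see Create a Snapshot for the full procedure):

# Create a snapshot named mdb-snap01 of the mongodb logical volume in the vg0 volume group
lvcreate --size 100M --snapshot --name mdb-snap01 /dev/vg0/mongodb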

5

If you locked any mongod instances to capture the backup, unlock them.

To unlock the replica set members, use the db.fsyncUnlock() method in mongosh.

db.fsyncUnlock()
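
The method reports the remaining lock count; if db.fsyncLock() ran more than once on a member, repeat db.fsyncUnlock() until lockCount reaches 0. The output looks similar to the following:

{ "info" : "fsyncUnlock completed", "lockCount" : NumberLong(0), "ok" : 1 }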
6

To re-enable the balancer, connect mongosh to a mongos instance and run sh.startBalancer().

sh.startBalancer()

Starting in MongoDB 6.1, automatic chunk splitting is not performed. This is because of balancing policy improvements. Auto-splitting commands still exist, but do not perform an operation. For details, see Balancing Policy Changes.

In MongoDB versions earlier than 6.1, sh.startBalancer() also enables auto-splitting for the sharded cluster.
