mongodump is very slow

Hello, I’m currently using mongodump for backing up my production MongoDB instance. However, the process is taking an unusually long time—about 34 hours to complete a backup of 750 GB of uncompressed data.

Here are the details of my environment:

MongoDB Version: 4.0.27
OS Version: Ubuntu 18.04
Server Type: Standalone server
RAM: 128 GB (with around 35 GB available)
Backup Method: mongodump with gzip compression

Could you please suggest ways to speed up the backup process?

Also, I would like to know the best backup strategy for a production environment of this size, considering factors like efficiency, reliability, and restore times.

If you use the Community Edition, the way to speed up the backup is to increase the number of collections dumped in parallel with the --numParallelCollections (-j) parameter.
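A minimal sketch of such an invocation (the host, output path, and parallelism value of 8 are assumptions; tune -j to your CPU cores and disk throughput):

```shell
# Sketch only: host and output path are placeholders for your environment.
# --numParallelCollections (-j) dumps N collections concurrently; it helps
# most when data is spread across many collections, since a single huge
# collection is still dumped by one worker.
mongodump \
  --host=localhost --port=27017 \
  --gzip \
  --numParallelCollections=8 \
  --out=/backups/mongodump-$(date +%F)
```

Note that --gzip compresses inline and is CPU-bound, so on a busy server it can itself become the bottleneck; dumping uncompressed and compressing afterwards is sometimes faster overall.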

If spending some money is an option, then Cloud Manager is a very good choice that also provides point-in-time recovery over a configurable window. Incremental backups reduce backup time between full backups (depending on database churn).

Using a file system or volume snapshot is a very fast way to take a backup. Snapshots are also fast to restore because the indexes remain intact. This is my first choice after Ops Manager or Cloud Manager.
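As a rough sketch of the snapshot approach using LVM (the volume group and logical volume names vg0/mongo-data, the snapshot size, and the paths are all assumptions; it requires that the entire dbPath, journal included, lives on that one logical volume):

```shell
# Assumed layout: /dev/vg0/mongo-data holds the entire dbPath (journal included).
# With WiredTiger, a snapshot of a single volume containing data + journal is
# crash-consistent; fsyncLock is an extra safety step on a standalone server.
mongo --eval 'db.fsyncLock()'

# Create a copy-on-write snapshot; --size reserves space for changed blocks
# while the snapshot exists, not a full copy of the data.
lvcreate --snapshot --size 50G --name mongo-snap /dev/vg0/mongo-data

mongo --eval 'db.fsyncUnlock()'

# Mount the snapshot read-only and archive it off-host, then clean up.
mkdir -p /mnt/mongo-snap
mount -o ro /dev/vg0/mongo-snap /mnt/mongo-snap
tar czf /backups/mongo-$(date +%F).tar.gz -C /mnt/mongo-snap .
umount /mnt/mongo-snap
lvremove -f /dev/vg0/mongo-snap
```

The window where writes are flushed/locked is only as long as the lvcreate call, so the impact on the running server is small compared with a 34-hour mongodump.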

Other backup methods are described at https://www.mongodb.com/docs/manual/core/backups, but they are of decreasing value at this database size.