Consider Your Cloud Backup Strategy on World Backup Day, March 31
World Backup Day, marked every March 31, reminds all of us of the risk and vulnerability of the data stored on our devices and systems.
Data loss happens every day for a variety of reasons. While human error is the most common cause, ransomware is fast becoming the most expensive one. A big reason is that ransomware criminals have moved on from targeting individuals to attacking businesses, where data is far more sensitive and valuable. Businesses have more to lose if their data is compromised or exposed, and they have deeper pockets to pay larger ransoms. Having a backup copy of data can help a business recover clean data after it has been corrupted by malware, accidentally deleted, or destroyed by a fire or flood.
Backup could also save the day during cloud outages. For businesses, any type of data loss is extremely costly. So having a backup plan as part of an overall disaster recovery strategy is critical for the survival of every business.
Backup benefits
A backup and disaster recovery strategy is necessary to protect your mission-critical data against these types of risks. With such a strategy in place, you'll gain peace of mind knowing that if your data is ever accidentally deleted or corrupted by malware, you'll be able to recover it and avoid the cost and consequences of data loss. You'll also satisfy important regulatory and compliance requirements by demonstrating that you've taken a proactive approach toward data safety and business continuity.
Taking regular backups offers other advantages as well. The backups can be used to create new environments for development, staging, or QA without impacting production. This practice enables development teams to quickly and easily test new features, accelerating application development and ensuring smooth product launches.
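As a quick illustration of that practice, here is a minimal sketch that seeds a hypothetical staging cluster from a previously downloaded backup archive using mongorestore. The connection string and archive path are placeholders, and the sketch assumes the MongoDB Database Tools are installed on the machine running it.

```python
# Minimal sketch: seed a staging cluster from a downloaded backup archive.
# Assumes the MongoDB Database Tools (mongorestore) are installed;
# STAGING_URI and ARCHIVE_PATH are placeholders to replace with your own values.
import subprocess

STAGING_URI = "mongodb+srv://user:password@staging-cluster.example.mongodb.net"  # hypothetical
ARCHIVE_PATH = "backups/production-snapshot.archive.gz"                          # hypothetical

subprocess.run(
    [
        "mongorestore",
        f"--uri={STAGING_URI}",
        f"--archive={ARCHIVE_PATH}",
        "--gzip",   # the archive was created with mongodump --gzip
        "--drop",   # replace any stale collections already in staging
    ],
    check=True,
)
```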
Backup and the cloud factor
Back when business systems were mostly on-premises, there was a simple framework for disaster recovery planning: the 3-2-1 backup rule. It recommends keeping three copies of your data, on at least two different types of storage media, with one copy kept at an offsite or remote location. In practice, this might mean creating a bare metal image of an entire server and storing it on a secondary server in the same server room. You would also copy all of your server data to magnetic tape and transport it to a secure offsite location. That's three copies of data (the original, the bare metal image, and the tape backup) on two different media (disk and tape), with one stored offsite.
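For a concrete feel for the pattern, here is a minimal sketch of a 3-2-1 nightly routine, with cloud object storage standing in for the offsite copy. The file paths and bucket name are placeholders, and the upload step assumes boto3 with valid AWS credentials.

```python
# Minimal sketch of the 3-2-1 pattern for a nightly dump:
# copy 1 = the live data, copy 2 = a dump on separate local storage,
# copy 3 = an offsite copy in cloud object storage.
# Paths and the bucket name are placeholders; requires boto3 and AWS credentials.
import shutil
import boto3

DUMP_FILE = "/var/backups/app-2024-03-31.archive.gz"           # produced by mongodump, for example
SECOND_MEDIA = "/mnt/backup-volume/app-2024-03-31.archive.gz"  # separate disk or NAS
OFFSITE_BUCKET = "example-offsite-backups"                     # hypothetical bucket

# Copy 2: a second local medium
shutil.copy2(DUMP_FILE, SECOND_MEDIA)

# Copy 3: offsite, in a different failure domain
boto3.client("s3").upload_file(DUMP_FILE, OFFSITE_BUCKET, "app/app-2024-03-31.archive.gz")
```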
The 3-2-1 backup rule has worked well for a lot of businesses for decades. But systems have changed. Barely anyone uses tape anymore: the time it takes to transport a tape backup to where it can be used for disaster recovery is more than most businesses can tolerate, and the criticality of modern systems has shrunk recovery time objectives (RTOs) to hours or even minutes. Tape backups are simply not practical for most modern IT environments. But the cloud is.
Hyperscale cloud providers can deliver service levels that are just as reliable as, if not more reliable than, traditional on-premises environments. But customers shouldn't make the mistake of thinking that by deploying in the cloud, they're off the hook when it comes to planning and implementing a disaster recovery strategy. Businesses can and do lose data in the cloud, and hyperscale cloud providers do experience outages. Businesses still need a backup strategy, but it has to be updated to account for the cloud workloads that have become ubiquitous.
Cross-cloud data protection
An outage at any of the major cloud providers could take your databases offline. Keeping additional backup copies with different cloud providers is a good way to ensure you can still access your data during an outage at any single provider.
Recent outages at hyperscale cloud providers have underscored the need for cross-cloud backup. “Most cloud outages are related to software bugs rather than physical catastrophes,” says Chris Shum, Atlas product lead at MongoDB. “We've always protected ourselves against physical catastrophes by distributing across data centers or regions, but no one really protects themselves against the software bug.” Shum says that by backing up workloads to different clouds, you can tolerate one cloud provider going down while your database and application stay up.
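A simple way to put that into practice is to mirror backup archives across object storage in two different clouds. The sketch below is one hedged example of that idea, assuming boto3 and the google-cloud-storage client with credentials configured for both providers; the bucket names and object key are placeholders.

```python
# Minimal sketch: mirror a backup archive from one provider's object storage
# to another's, so a copy survives an outage at either provider.
# Bucket names are placeholders; requires boto3 and google-cloud-storage,
# plus credentials for both clouds.
import boto3
from google.cloud import storage

ARCHIVE_KEY = "app/app-2024-03-31.archive.gz"
LOCAL_TMP = "/tmp/app-2024-03-31.archive.gz"

# Download the nightly archive from the primary cloud (AWS S3 here)
boto3.client("s3").download_file("example-offsite-backups", ARCHIVE_KEY, LOCAL_TMP)

# Upload the same archive to a second cloud (Google Cloud Storage here)
gcs_bucket = storage.Client().bucket("example-crosscloud-backups")
gcs_bucket.blob(ARCHIVE_KEY).upload_from_filename(LOCAL_TMP)
```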
Getting around the cloud backup skills gap
Data gravity keeps many businesses locked into a single cloud provider. Becoming fluent with a particular cloud provider is a skill to acquire like any other, and once you're comfortable with one provider's hardware configurations, operational tools, and offerings, it can be hard to venture out to a different provider with different hardware and pricing. But developing flexibility in the cloud is critical if you hope to leverage the best features, functionality, and cost efficiencies from each provider. MongoDB has made it possible to do just that.
“With Atlas, we’ve made it our mission to abstract as much of the management away as possible,” Shum says. “It’s all available as a fully managed service. So things like hardware asymmetry between different cloud providers, offerings being different, prices being different, how you’d set up networking infrastructure, all of those things that to you as a consumer might be different, we’ve abstracted it away for you. And because of the abstraction, you are then free to move nodes to whichever cloud provider or region you want.”
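As a rough sketch of what that looks like in practice, the example below asks the Atlas Administration API for a single replica set whose electable nodes span AWS, Google Cloud, and Azure. Treat the endpoint path, API version, and payload fields as assumptions to verify against the current API reference; the project ID, API keys, instance sizes, and region choices are placeholders.

```python
# Rough sketch (not a drop-in script): request an Atlas replica set whose
# electable nodes are spread across three cloud providers.
# Verify the endpoint, API version, and payload fields against the current
# Atlas Administration API reference; IDs and keys below are placeholders.
import requests
from requests.auth import HTTPDigestAuth

PROJECT_ID = "<your-project-id>"
AUTH = HTTPDigestAuth("<public-key>", "<private-key>")

cluster = {
    "name": "multicloud-demo",
    "clusterType": "REPLICASET",
    "replicationSpecs": [
        {
            "numShards": 1,
            "regionConfigs": [
                {"providerName": "AWS", "regionName": "US_EAST_1", "priority": 7,
                 "electableSpecs": {"instanceSize": "M10", "nodeCount": 1}},
                {"providerName": "GCP", "regionName": "CENTRAL_US", "priority": 6,
                 "electableSpecs": {"instanceSize": "M10", "nodeCount": 1}},
                {"providerName": "AZURE", "regionName": "US_EAST_2", "priority": 5,
                 "electableSpecs": {"instanceSize": "M10", "nodeCount": 1}},
            ],
        }
    ],
}

resp = requests.post(
    f"https://cloud.mongodb.com/api/atlas/v1.5/groups/{PROJECT_ID}/clusters",
    json=cluster,
    auth=AUTH,
)
resp.raise_for_status()
```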
Data protection with Atlas
MongoDB Atlas provides point-in-time recovery of replica sets and cluster-wide snapshots of sharded clusters, making it simple to restore to precisely the moment you need, quickly and safely.
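To give a sense of how a restore might be triggered programmatically, here is a hedged sketch that starts an automated point-in-time restore through the Atlas Administration API's restore jobs endpoint. The endpoint path and field names should be checked against the current API reference; the project ID, cluster names, keys, and timestamp are placeholders.

```python
# Minimal sketch: trigger an automated point-in-time restore of one Atlas
# cluster into another. Confirm the restore-jobs endpoint and field names
# against the current Atlas Administration API reference; all IDs, names,
# keys, and the timestamp below are placeholders.
import requests
from requests.auth import HTTPDigestAuth

PROJECT_ID = "<your-project-id>"
AUTH = HTTPDigestAuth("<public-key>", "<private-key>")

restore_job = {
    "deliveryType": "pointInTime",
    "pointInTimeUTCSeconds": 1711843200,      # the moment to restore to (UTC epoch seconds)
    "targetClusterName": "staging-cluster",   # where the restored data should land
    "targetGroupId": PROJECT_ID,
}

resp = requests.post(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{PROJECT_ID}"
    "/clusters/production-cluster/backup/restoreJobs",
    json=restore_job,
    auth=AUTH,
)
resp.raise_for_status()
print(resp.json().get("id"))  # restore job ID to poll for completion
```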
Backups can be restored automatically to the existing MongoDB Atlas cluster or downloaded to be manually archived or restored on a different infrastructure. MongoDB Atlas provides:
- Security features to protect access to your data
- Built-in replication for always-on availability, tolerating complete data center failure
- Backups and point-in-time recovery to protect against data corruption
- Fine-grained monitoring to let you know when to scale; additional instances can be provisioned with the push of a button
- Automated patching and one-click upgrades for new major versions of the database, enabling you to take advantage of the latest and greatest MongoDB features
- A choice of cloud providers, regions, and billing options
Backup wrap-up
If you’re still depending on a single cloud provider to keep your critical workloads online and available, consider World Backup Day your reminder to identify and close the gaps in your cloud disaster recovery strategy. For information on how to back up and restore cluster data in MongoDB, read this article in our documentation.