
Guidance for Atlas High Availability

Consult this page to plan the appropriate cluster configuration that optimizes your availability and performance while aligning with your enterprise's cost controls and access needs.

When you launch a new cluster in Atlas, Atlas automatically configures a minimum three-node replica set and distributes it across the region you deploy to. If the primary member experiences an outage, MongoDB automatically detects the failure, elects a secondary member as the replacement, and promotes that member to become the new primary. Atlas then restores or replaces the failed member to ensure that the cluster returns to its target configuration as soon as possible. The MongoDB client drivers also automatically switch all client connections to the new primary. The entire election and failover process happens within seconds, without manual intervention. MongoDB optimizes the algorithms used to detect failure and elect a new primary, reducing the failover interval.

Clusters that you deploy within a single region are spread across availability zones within that region, so that they can withstand partial region outages without an interruption of read or write availability.

If maintaining write operations in your preferred region at all times is a high priority, MongoDB recommends deploying the cluster so that at least two electable members are in at least two availability zones within your preferred region.

For the best database performance in a worldwide deployment, you can configure a global cluster, which uses location-aware sharding to minimize read and write latency. If you have geographical storage requirements, you can also ensure that Atlas stores data in a particular geographical area.

A single-region topology with three nodes: one primary and two secondaries.

This topology is appropriate if low latency is required but high availability requirements are limited to a single region. This topology can tolerate any single-node failure and can satisfy majority write concern using the in-region secondaries. It maintains a primary in the preferred region after any single-node failure, limits cost, and is the least complicated option from an application architecture perspective. Reads from secondaries and writes to the primary both encounter low latency within your preferred region.

This topology, however, can't tolerate a regional outage.

Use the following recommendations to configure your Atlas deployments and backups for high availability and to expedite recovery from disasters.

To learn more about how to plan for and respond to disasters, see Guidance for Atlas Disaster Recovery.

When creating a new cluster, you can choose from a range of cluster tiers available under the Dedicated, Flex, or Free deployment types.

To determine the recommended cluster tier for your application size, see the Atlas Cluster Size Guide.

To elect a primary, you need a majority of voting replica set members available. We recommend that you create replica sets with an odd number of voting members; there is no benefit to having an even number. Atlas satisfies this requirement by default because it requires clusters to have 3, 5, or 7 electable nodes.

Fault tolerance is the number of replica set members that can become unavailable with enough members still available for a primary election. Fault tolerance of a four-member replica set is the same as for a three-member replica set because both can withstand a single-node outage.
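For example, a three-member replica set needs two voting members available to elect a primary and can therefore lose one member, while a four-member replica set needs three available and can still lose only one. Expanding to five members raises the required majority to three and the fault tolerance to two.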

To test if your application can handle a failure of the replica set primary, submit a request to test primary failover. When Atlas simulates a failover event, Atlas shuts down the current primary and holds an election to elect a new primary.
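For example, if you manage your clusters with the Atlas CLI, you can trigger this test with the atlas clusters failover command; the cluster name and project ID below are placeholders:

# Simulate a primary failure for the specified cluster
atlas clusters failover CustomerPortalDev \
  --projectId 56fd11f25f23b33ef4c2a331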

To learn more about replica set members, see Replica Set Members. To learn more about replica set elections and voting, see Replica Set Elections.

MongoDB provides high availability by storing multiple copies of data across replica sets. To ensure that members of the same replica set don't share the same resources, and that a replica set can elect a primary if a data center becomes unavailable, you must distribute nodes across at least three data centers. We recommend that you use at least five data centers.

You can distribute data across multiple data centers by deploying your cluster to a region that supports multiple availability zones. Each availability zone typically contains one or more discrete data centers, each with redundant power, networking and connectivity, often housed in separate facilities. When you deploy a dedicated cluster to a region that supports availability zones, Atlas automatically splits your cluster's nodes across the availability zones. For example, for a three-node replica set cluster deployed to a region with three availability zones, Atlas deploys one node in each zone. This ensures that members of the same replica set are not sharing resources, so that your database remains available even if one zone experiences an outage.

To understand the risks associated with using fewer than three data centers, consider the following diagram, which shows data distributed across two data centers:

An image showing two data centers: Data Center 1, with a primary and a secondary node, and Data Center 2, with only a secondary node

In the previous diagram, if Data Center 2 becomes unavailable, a majority of replica set members remain available and Atlas can elect a primary. However, if you lose Data Center 1, you have only one out of three replica set members available, no majority, and the system degrades into read-only mode.

Consider the following diagram, which shows a five-node replica set cluster deployed to a region with three availability zones:

An image showing three data centers: Data Center 1, with a primary and a secondary node, Data Center 2, with two secondary nodes, and Data Center 3, with one secondary node

The 2-2-1 topology in the previous diagram is our recommended topology to balance high availability, performance, and cost across multiple regions. In this topology, if any one data center fails, a majority of replica set members remain available and Atlas can perform automatic failover. While a three-node, three-region topology can also tolerate any one-region failure, any failover event in a 1-1-1 topology forces a new primary to be elected in a different region. The 2-2-1 topology can tolerate a primary node failure while keeping the new primary in the preferred region.
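As a sketch of what a 2-2-1 deployment can look like in automation, the following example reuses the cluster.json format shown later on this page, spreading five electable nodes across three regions; the provider, regions, instance size, and project ID are illustrative values only:

cat > multi-region-cluster.json <<'EOF'
{
  "clusterType": "REPLICASET",
  "name": "CustomerPortalProd",
  "mongoDBMajorVersion": "8.0",
  "replicationSpecs": [
    {
      "numShards": 1,
      "zoneName": "Zone 1",
      "regionConfigs": [
        {
          "providerName": "AWS",
          "regionName": "US_EAST_1",
          "priority": 7,
          "electableSpecs": { "instanceSize": "M30", "nodeCount": 2 }
        },
        {
          "providerName": "AWS",
          "regionName": "US_EAST_2",
          "priority": 6,
          "electableSpecs": { "instanceSize": "M30", "nodeCount": 2 }
        },
        {
          "providerName": "AWS",
          "regionName": "US_WEST_2",
          "priority": 5,
          "electableSpecs": { "instanceSize": "M30", "nodeCount": 1 }
        }
      ]
    }
  ]
}
EOF
atlas clusters create --projectId <projectId> --file multi-region-cluster.json

The two highest-priority regions each hold two electable nodes, so a failed primary can be replaced within the preferred region, while the fifth node in a third region preserves majority elections during a regional outage.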

We recommend that you deploy replica sets to the following regions because they support at least three availability zones:

AWS Region | Location | Atlas Region
us-east-1 | Northern Virginia, USA | US_EAST_1
us-west-2 | Oregon, USA | US_WEST_2
ca-central-1 | Montreal, QC, Canada | CA_CENTRAL_1
ca-west-1 | Calgary, Canada | CA_WEST_1
us-east-2 | Ohio, USA | US_EAST_2
sa-east-1 | Sao Paulo, Brazil | SA_EAST_1
ap-southeast-1 | Singapore | AP_SOUTHEAST_1
ap-southeast-2 | Sydney, NSW, Australia | AP_SOUTHEAST_2
ap-southeast-3 | Jakarta, Indonesia | AP_SOUTHEAST_3
ap-south-1 | Mumbai, India | AP_SOUTH_1
ap-east-1 | Hong Kong, China | AP_EAST_1
ap-northeast-1 | Tokyo, Japan | AP_NORTHEAST_1
ap-northeast-2 | Seoul, South Korea | AP_NORTHEAST_2
ap-northeast-3 | Osaka, Japan | AP_NORTHEAST_3
ap-south-2 | Hyderabad, India | AP_SOUTH_2
ap-southeast-4 | Melbourne, Victoria, Australia | AP_SOUTHEAST_4
eu-west-1 | Ireland | EU_WEST_1
eu-central-1 | Frankfurt, Germany | EU_CENTRAL_1
eu-north-1 | Stockholm, Sweden | EU_NORTH_1
eu-west-2 | London, England, UK | EU_WEST_2
eu-west-3 | Paris, France | EU_WEST_3
eu-south-1 | Milan, Italy | EU_SOUTH_1
eu-central-2 | Zurich, Switzerland | EU_CENTRAL_2
eu-south-2 | Spain | EU_SOUTH_2
me-south-1 | Bahrain | ME_SOUTH_1
af-south-1 | Cape Town, South Africa | AF_SOUTH_1
il-central-1 | Tel Aviv, Israel | IL_CENTRAL_1
me-central-1 | UAE | ME_CENTRAL_1

Azure Region | Location | Atlas Region
centralus | Iowa, USA | US_CENTRAL
eastus | Virginia (East US) | US_EAST
eastus2 | Virginia, USA | US_EAST_2
northcentralus | Illinois, USA | US_NORTH_CENTRAL
westus | California, USA | US_WEST
westus2 | Washington, USA | US_WEST_2
westus3 | Arizona, USA | US_WEST_3
southcentralus | Texas, USA | US_SOUTH_CENTRAL
brazilsouth | Sao Paulo, Brazil | BRAZIL_SOUTH
brazilsoutheast | Rio de Janeiro, Brazil | BRAZIL_SOUTHEAST
canadacentral | Toronto, ON, Canada | CANADA_CENTRAL
northeurope | Ireland | EUROPE_NORTH
westeurope | Netherlands | EUROPE_WEST
uksouth | London, England, UK | UK_SOUTH
francecentral | Paris, France | FRANCE_CENTRAL
italynorth | Milan, Italy | ITALY_NORTH
germanywestcentral | Frankfurt, Germany | GERMANY_WEST_CENTRAL
polandcentral | Warsaw, Poland | POLAND_CENTRAL
switzerlandnorth | Zurich, Switzerland | SWITZERLAND_NORTH
norwayeast | Oslo, Norway | NORWAY_EAST
swedencentral | Gävle, Sweden | SWEDEN_CENTRAL
eastasia | Hong Kong, China | ASIA_EAST
southeastasia | Singapore | ASIA_SOUTH_EAST
australiaeast | New South Wales, Australia | AUSTRALIA_EAST
centralindia | Pune (Central India) | INDIA_CENTRAL
japaneast | Tokyo, Japan | JAPAN_EAST
koreacentral | Seoul, South Korea | KOREA_CENTRAL
southafricanorth | Johannesburg, South Africa | SOUTH_AFRICA_NORTH
uaenorth | Dubai, UAE | UAE_NORTH
qatarcentral | Qatar | QATAR_CENTRAL
israelcentral | Israel | ISRAEL_CENTRAL

GCP Region | Location | Atlas Region
us-central1 | Iowa, USA | CENTRAL_US
us-east4 | North Virginia, USA | US_EAST_4
us-east5 | Columbus, OH, USA | US_EAST_5
northamerica-northeast1 | Montreal, Canada | NORTH_AMERICA_NORTHEAST_1
northamerica-northeast2 | Toronto, Canada | NORTH_AMERICA_NORTHEAST_2
southamerica-east1 | Sao Paulo, Brazil | SOUTH_AMERICA_EAST_1
southamerica-west1 | Santiago, Chile | SOUTH_AMERICA_WEST_1
us-west1 | Oregon, USA | WESTERN_US
us-west2 | Los Angeles, CA, USA | US_WEST_2
us-west3 | Salt Lake City, UT, USA | US_WEST_3
us-west4 | Las Vegas, NV, USA | US_WEST_4
us-south1 | Dallas, TX, USA | US_SOUTH_1
asia-east1 | Taiwan | EASTERN_ASIA_PACIFIC
asia-east2 | Hong Kong, China | ASIA_EAST_2
asia-northeast1 | Tokyo, Japan | NORTHEASTERN_ASIA_PACIFIC
asia-northeast2 | Osaka, Japan | ASIA_NORTHEAST_2
asia-northeast3 | Seoul, Korea | ASIA_NORTHEAST_3
asia-southeast1 | Singapore | SOUTHEASTERN_ASIA_PACIFIC
asia-south1 | Mumbai, India | ASIA_SOUTH_1
asia-south2 | Delhi, India | ASIA_SOUTH_2
australia-southeast1 | Sydney, Australia | AUSTRALIA_SOUTHEAST_1
australia-southeast2 | Melbourne, Australia | AUSTRALIA_SOUTHEAST_2
asia-southeast2 | Jakarta, Indonesia | ASIA_SOUTHEAST_2
europe-west1 | Belgium | WESTERN_EUROPE
europe-north1 | Finland | EUROPE_NORTH_1
europe-west2 | London, UK | EUROPE_WEST_2
europe-west3 | Frankfurt, Germany | EUROPE_WEST_3
europe-west4 | Netherlands | EUROPE_WEST_4
europe-west6 | Zurich, Switzerland | EUROPE_WEST_6
europe-west10 | Berlin, Germany | EUROPE_WEST_10
europe-central2 | Warsaw, Poland | EUROPE_CENTRAL_2
europe-west8 | Milan, Italy | EUROPE_WEST_8
europe-west9 | Paris, France | EUROPE_WEST_9
europe-west12 | Turin, Italy | EUROPE_WEST_12
europe-southwest1 | Madrid, Spain | EUROPE_SOUTHWEST_1
me-west1 | Tel Aviv, Israel | MIDDLE_EAST_WEST_1
me-central1 | Doha, Qatar | MIDDLE_EAST_CENTRAL_1
me-central2 | Dammam, Saudi Arabia | MIDDLE_EAST_CENTRAL_2

When a client connects to a sharded cluster, we recommend that you include the hostnames of multiple mongos instances, separated by commas, in the connection URI. To learn more, see MongoDB Connection String Examples. This setup allows operations to be routed to different mongos instances for load balancing, and it is also important for disaster recovery.

Consider the following diagram, which shows a sharded cluster spread across three data centers. The application connects to the cluster from a remote location. If Data Center 3 becomes unavailable, the application can still connect to the mongos processes in the other data centers.

An image showing three data centers: Data Center 1, with primary shard nodes and two mongos, Data Center 2, with secondary shard nodes and two mongos, and Data Center 3, with secondary shard nodes and two mongos. The application connects to all six mongos instances.

You can use retryable reads and retryable writes to simplify the required error handling for the mongos configuration.
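For example, a non-SRV connection string that lists several mongos hosts and enables retryable reads and writes might look like the following; the hostnames are placeholders:

export MONGODB_URI="mongodb://mongos1.example.net:27017,mongos2.example.net:27017,mongos3.example.net:27017/?retryWrites=true&retryReads=true"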

MongoDB allows you to specify the level of acknowledgment requested for write operations by using write concern. For example, if you have a three-node replica set with a write concern of majority, every write operation needs to be persisted on two nodes before an acknowledgment of completion is sent to the driver that issued said write operation. For the best protection from a regional node outage, we recommend that you set the write concern to majority.

Even though majority write concern increases write latency compared with a write concern of 1, we recommend that you use majority write concern because it allows write operations to continue even when some replica set members are unavailable, such as after a replica set loses its primary and elects a new one.

To understand the advantage of majority write concern over a numeric write concern, consider a three-region, five-node replica set with a 2-2-1 topology and a numeric write concern of 4. If one of the two-node regions becomes unavailable, write operations cannot complete because they are unable to persist data on four nodes. Using majority instead allows write operations to continue as soon as they persist data on at least three nodes (a majority of five), maintaining data redundancy as well as write continuity.
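One common way to apply this recommendation is in the connection string, so that writes default to majority acknowledgment; the SRV hostname below is a placeholder:

export MONGODB_URI="mongodb+srv://cluster0.example.mongodb.net/?retryWrites=true&w=majority"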

Frequent data backups are critical for business continuity and disaster recovery. Frequent backups ensure that data loss and downtime are minimal if a disaster or cyberattack disrupts normal operations.

We recommend that you:

  • Set your backup frequency to meet your desired business continuity objectives. Continuous backups may be needed for some systems, while less frequent snapshots may be desirable for others.

  • Store backups in a different physical location than the source data.

  • Test your backup recovery process to ensure that you can restore backups in a repeatable and timely manner (see the restore example after this list).

  • Confirm that your clusters run the same MongoDB versions for compatibility during restore.

  • Configure a backup compliance policy to prevent deleting backup snapshots, prevent decreasing the snapshot retention time, and more.

For more backup recommendations, see Guidance for Atlas Backups.
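To support the recommendation to test your backup recovery process, you can, for example, restore a recent snapshot into a separate target cluster with the Atlas CLI. The IDs and cluster names below are placeholders, and you should confirm the exact flags with atlas backups restores start --help:

# List recent snapshots for the source cluster
atlas backups snapshots list CustomerPortalProd --projectId <projectId>

# Restore one snapshot into a separate test cluster
atlas backups restores start automated \
  --clusterName CustomerPortalProd \
  --snapshotId <snapshotId> \
  --targetClusterName RestoreTest \
  --targetProjectId <targetProjectId> \
  --projectId <projectId>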

To avoid resource capacity issues, we recommend that you monitor resource utilization and hold regular capacity planning sessions. MongoDB Professional Services offers these sessions.

Over-utilized clusters could fail, causing a disaster. Scale clusters up to higher tiers if utilization regularly triggers alerts at a steady state, such as sustained system CPU or system memory utilization above 60%.

To view your resource utilization, see Monitor Real-Time Performance. To view metrics with the Atlas Administration API, see Monitoring and Logs.
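As one hedged example of pulling these measurements programmatically, the Atlas CLI can return process-level metrics; the hostname, port, and project ID are placeholders, and the available measurement types are listed in the Atlas Administration API documentation:

# List the processes (hosts) in the project
atlas processes list --projectId <projectId>

# Fetch normalized CPU measurements for one process over the last day
atlas metrics processes <hostname>:27017 \
  --projectId <projectId> \
  --granularity PT5M \
  --period P1D \
  --type SYSTEM_NORMALIZED_CPU_USER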

To learn best practices for alerts and monitoring for resource utilization, see Guidance for Atlas Monitoring and Alerts.

If you encounter resource capacity issues, see Resource Capacity Issues.

We recommend that you run the latest MongoDB version as it allows you to take advantage of new features and provides improved security guarantees compared with previous versions.

Ensure that you perform MongoDB major version upgrades well before your current version reaches end of life.

You can't downgrade your MongoDB version using the Atlas UI. Because of this, when planning and executing a major version upgrade, we recommend that you work directly with MongoDB Professional or Technical Services to help you avoid any issues that might occur during the upgrade process.
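If you manage the cluster through the Atlas CLI, the version change itself can be expressed as a cluster update once the upgrade has been planned and validated. This is a sketch only; confirm the flag with atlas clusters update --help:

# Move the cluster to MongoDB 8.0 during a planned upgrade window
atlas clusters update CustomerPortalProd \
  --projectId <projectId> \
  --mdbVersion 8.0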

MongoDB allows you to configure a maintenance window for your cluster, which sets the hour of the day when Atlas starts weekly maintenance on your cluster. Setting a custom maintenance window allows you to schedule maintenance, which requires at least one replica set election per replica set, outside of business-critical hours.

You can also set protected hours for your project, which defines a daily window of time in which standard updates cannot begin. Standard updates do not involve cluster restarts or resyncs.
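For example, with the Atlas CLI you can set the weekly maintenance window for a project; this is a sketch that assumes the atlas maintenanceWindows update command and its --dayOfWeek and --hourOfDay flags, with placeholder values:

# Schedule weekly maintenance to start at 02:00 UTC on the configured day of the week
atlas maintenanceWindows update \
  --projectId <projectId> \
  --dayOfWeek 7 \
  --hourOfDay 2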

The following examples configure the Single Region, 3 Node Replica Set / Shard deployment topology using Atlas tools for automation.

These examples also apply other recommended configurations, including:

  • Cluster tier set to M10 for a dev/test environment, or M30 for a medium-sized application in staging and production. Use the cluster size guide to learn the recommended cluster tier for your application size.

  • Single Region, 3-Node Replica Set / Shard deployment topology.

Our examples use AWS, Azure, and Google Cloud interchangeably. You can use any of these three cloud providers, but you must change the region name to match the cloud provider. To learn about the cloud providers and their regions, see Cloud Providers.

Note

Before you can create resources with the Atlas CLI, you must:

  • Create your paying organization and create an API key for the paying organization.

  • Install the Atlas CLI and connect to it from your terminal, for example by running atlas auth login.

For your development and testing environments, run the following command for each project. Change the IDs and names to use your values:

atlas clusters create CustomerPortalDev \
--projectId 56fd11f25f23b33ef4c2a331 \
--region EASTERN_US \
--members 3 \
--tier M10 \
--provider GCP \
--mdbVersion 8.0 \
--diskSizeGB 30 \
--tag bu=ConsumerProducts \
--tag teamName=TeamA \
--tag appName=ProductManagementApp \
--tag env=Production \
--tag version=8.0 \
--tag email=marissa@acme.com \
--watch

For your staging and production environments, create the following cluster.json file for each project. Change the IDs and names to use your values:

{
  "clusterType": "REPLICASET",
  "links": [],
  "name": "CustomerPortalProd",
  "mongoDBMajorVersion": "8.0",
  "replicationSpecs": [
    {
      "numShards": 1,
      "regionConfigs": [
        {
          "electableSpecs": {
            "instanceSize": "M30",
            "nodeCount": 3
          },
          "priority": 7,
          "providerName": "GCP",
          "regionName": "EASTERN_US",
          "analyticsSpecs": {
            "nodeCount": 0,
            "instanceSize": "M30"
          },
          "autoScaling": {
            "compute": {
              "enabled": false,
              "scaleDownEnabled": false
            },
            "diskGB": {
              "enabled": false
            }
          },
          "readOnlySpecs": {
            "nodeCount": 0,
            "instanceSize": "M30"
          }
        }
      ],
      "zoneName": "Zone 1"
    }
  ],
  "tags": [
    { "key": "bu", "value": "ConsumerProducts" },
    { "key": "teamName", "value": "TeamA" },
    { "key": "appName", "value": "ProductManagementApp" },
    { "key": "env", "value": "Production" },
    { "key": "version", "value": "8.0" },
    { "key": "email", "value": "marissa@acme.com" }
  ]
}

After you create the cluster.json file, run the following command for each project. The command uses the cluster.json file to create a cluster.

atlas clusters create --projectId 5e2211c17a3e5a48f5497de3 --file cluster.json

For more configuration options and info about this example, see atlas clusters create.

Note

Before you can create resources with Terraform, you must:

  • Create your paying organization and create an API key for the paying organization. Store your API keys as environment variables by running the following commands in the terminal:

    export MONGODB_ATLAS_PUBLIC_KEY="<insert your public key here>"
    export MONGODB_ATLAS_PRIVATE_KEY="<insert your private key here>"
  • Install Terraform

For your development and testing environments, create the following files for each application and environment pair. Place the files for each application and environment pair in their own directory. Change the IDs and names to use your values:

# Create a Project
resource "mongodbatlas_project" "atlas-project" {
  org_id = var.atlas_org_id
  name   = var.atlas_project_name
}

# Create an Atlas Advanced Cluster
resource "mongodbatlas_advanced_cluster" "atlas-cluster" {
  project_id             = mongodbatlas_project.atlas-project.id
  name                   = "ClusterPortalDev"
  cluster_type           = "REPLICASET"
  mongo_db_major_version = var.mongodb_version

  replication_specs {
    region_configs {
      electable_specs {
        instance_size = var.cluster_instance_size_name
        node_count    = 3
      }
      priority      = 7
      provider_name = var.cloud_provider
      region_name   = var.atlas_region
    }
  }

  tags {
    key   = "BU"
    value = "ConsumerProducts"
  }
  tags {
    key   = "TeamName"
    value = "TeamA"
  }
  tags {
    key   = "AppName"
    value = "ProductManagementApp"
  }
  tags {
    key   = "Env"
    value = "Test"
  }
  tags {
    key   = "Version"
    value = "8.0"
  }
  tags {
    key   = "Email"
    value = "marissa@acme.com"
  }
}

# Outputs to Display
output "atlas_cluster_connection_string" {
  value = mongodbatlas_advanced_cluster.atlas-cluster.connection_strings.0.standard_srv
}
output "project_name" {
  value = mongodbatlas_project.atlas-project.name
}

# Atlas Organization ID
variable "atlas_org_id" {
  type        = string
  description = "Atlas Organization ID"
}
# Atlas Project Name
variable "atlas_project_name" {
  type        = string
  description = "Atlas Project Name"
}
# Atlas Project Environment
variable "environment" {
  type        = string
  description = "The environment to be built"
}
# Cluster Instance Size Name
variable "cluster_instance_size_name" {
  type        = string
  description = "Cluster instance size name"
}
# Cloud Provider to Host Atlas Cluster
variable "cloud_provider" {
  type        = string
  description = "AWS or GCP or Azure"
}
# Atlas Region
variable "atlas_region" {
  type        = string
  description = "Atlas region where resources will be created"
}
# MongoDB Version
variable "mongodb_version" {
  type        = string
  description = "MongoDB Version"
}

# Variable values (for example, in a terraform.tfvars file)
atlas_org_id               = "32b6e34b3d91647abb20e7b8"
atlas_project_name         = "Customer Portal - Dev"
environment                = "dev"
cluster_instance_size_name = "M10"
cloud_provider             = "AWS"
atlas_region               = "US_WEST_2"
mongodb_version            = "8.0"

# Define the MongoDB Atlas Provider
terraform {
  required_providers {
    mongodbatlas = {
      source = "mongodb/mongodbatlas"
    }
  }
  required_version = ">= 0.13"
}

After you create the files, navigate to each application and environment pair's directory and run the following command to initialize Terraform:

terraform init

Run the following command to view the Terraform plan:

terraform plan

Run the following command to create one project and one deployment for the application and environment pair. The command uses your configuration files and the MongoDB & HashiCorp Terraform provider to create the projects and clusters:

terraform apply

When prompted, type yes and press Enter to apply the configuration.

For your staging and production environments, create the following files for each application and environment pair. Place the files for each application and environment pair in their own directory. Change the IDs and names to use your values:

# Create a Group (Team) to Assign to the Project
resource "mongodbatlas_team" "project_group" {
  org_id = var.atlas_org_id
  name   = var.atlas_group_name
  usernames = [
    "user1@example.com",
    "user2@example.com"
  ]
}

# Create a Project
resource "mongodbatlas_project" "atlas-project" {
  org_id = var.atlas_org_id
  name   = var.atlas_project_name

  # Assign the Group (Team) to the Project with Specific Roles
  teams {
    team_id    = mongodbatlas_team.project_group.team_id
    role_names = ["GROUP_READ_ONLY", "GROUP_CLUSTER_MANAGER"]
  }
}

# Create an Atlas Advanced Cluster
resource "mongodbatlas_advanced_cluster" "atlas-cluster" {
  project_id             = mongodbatlas_project.atlas-project.id
  name                   = "ClusterPortalProd"
  cluster_type           = "REPLICASET"
  mongo_db_major_version = var.mongodb_version

  replication_specs {
    region_configs {
      electable_specs {
        instance_size = var.cluster_instance_size_name
        node_count    = 3
      }
      priority      = 7
      provider_name = var.cloud_provider
      region_name   = var.atlas_region
    }
  }

  tags {
    key   = "BU"
    value = "ConsumerProducts"
  }
  tags {
    key   = "TeamName"
    value = "TeamA"
  }
  tags {
    key   = "AppName"
    value = "ProductManagementApp"
  }
  tags {
    key   = "Env"
    value = "Production"
  }
  tags {
    key   = "Version"
    value = "8.0"
  }
  tags {
    key   = "Email"
    value = "marissa@acme.com"
  }
}

# Outputs to Display
output "atlas_cluster_connection_string" {
  value = mongodbatlas_advanced_cluster.atlas-cluster.connection_strings.0.standard_srv
}
output "project_name" {
  value = mongodbatlas_project.atlas-project.name
}

# Atlas Organization ID
variable "atlas_org_id" {
  type        = string
  description = "Atlas Organization ID"
}
# Atlas Project Name
variable "atlas_project_name" {
  type        = string
  description = "Atlas Project Name"
}
# Atlas Project Environment
variable "environment" {
  type        = string
  description = "The environment to be built"
}
# Cluster Instance Size Name
variable "cluster_instance_size_name" {
  type        = string
  description = "Cluster instance size name"
}
# Cloud Provider to Host Atlas Cluster
variable "cloud_provider" {
  type        = string
  description = "AWS or GCP or Azure"
}
# Atlas Region
variable "atlas_region" {
  type        = string
  description = "Atlas region where resources will be created"
}
# MongoDB Version
variable "mongodb_version" {
  type        = string
  description = "MongoDB Version"
}
# Atlas Group Name
variable "atlas_group_name" {
  type        = string
  description = "Atlas Group Name"
}

# Variable values (for example, in a terraform.tfvars file)
atlas_org_id               = "32b6e34b3d91647abb20e7b8"
atlas_project_name         = "Customer Portal - Prod"
environment                = "prod"
cluster_instance_size_name = "M30"
cloud_provider             = "AWS"
atlas_region               = "US_WEST_2"
mongodb_version            = "8.0"
atlas_group_name           = "Atlas Group"

# Define the MongoDB Atlas Provider
terraform {
  required_providers {
    mongodbatlas = {
      source = "mongodb/mongodbatlas"
    }
  }
  required_version = ">= 0.13"
}

After you create the files, navigate to each application and environment pair's directory and run the following command to initialize Terraform:

terraform init

Run the following command to view the Terraform plan:

terraform plan

Run the following command to create one project and one deployment for the application and environment pair. The command uses your configuration files and the MongoDB & HashiCorp Terraform provider to create the projects and clusters:

terraform apply

When prompted, type yes and press Enter to apply the configuration.

For more configuration options and info about this example, see MongoDB & HashiCorp Terraform and the MongoDB Terraform Blog Post.