How to Deploy MongoDB Atlas with Terraform on AWS
MongoDB Atlas is the multi-cloud developer data platform that provides an integrated suite of cloud database and data services. We help to accelerate and simplify how you build resilient and performant global applications on the cloud provider of your choice.
HashiCorp Terraform is an Infrastructure-as-Code (IaC) tool that lets you define cloud resources in human-readable configuration files that you can version, reuse, and share. Hence, we built the Terraform MongoDB Atlas Provider that automates infrastructure deployments by making it easy to provision, manage, and control Atlas configurations as code on any of the three major cloud providers.
In addition, teams can also choose to deploy MongoDB Atlas through the MongoDB Atlas CLI (Command-Line Interface), Atlas Administration API, AWS CloudFormation, and as always, with the Atlas UI (User Interface).
In this blog post, we will learn how to deploy MongoDB Atlas hosted on AWS using Terraform. In addition, we will explore how to use private endpoints with AWS PrivateLink to provide increased security through private connectivity for your MongoDB Atlas cluster, without exposing traffic to the public internet.
We designed this Quickstart for beginners who have no prior experience with MongoDB Atlas, HashiCorp Terraform, or AWS and are looking to set up their first environment. Feel free to access all source code described below from this GitHub repo.
Let’s get started:
Sign up for a free MongoDB Atlas account, verify your email address, and log into your new account.
Once you have an account created and are logged into MongoDB Atlas, you will need to generate an API key to authenticate the Terraform MongoDB Atlas Provider.
Go to the top of the Atlas UI, click the Gear Icon to the right of the organization name you created, click Access Manager in the left-hand menu bar, click the API Keys tab, and then click the green Create API Key box.
Enter a description for the API key that will help you remember what it’s being used for — for example “Terraform API Key.” Next, you’ll select the appropriate permission for what you want to accomplish with Terraform. Both the Organization Owner and Organization Project Creator roles (see role descriptions below) will provide access to complete this task, but by using the principle of least privilege, let’s select the Organization Project Creator role in the dropdown menu and click Next.
Make sure to copy your private key and store it in a secure location. After you leave this page, your full private key will not be accessible.
MongoDB Atlas API keys have specific endpoints that require an API Key Access List. Creating an API Key Access List ensures that API calls must originate from IPs or CIDR ranges given access. As a good refresher, learn more about cloud networking.
On the same page, scroll down and click Add Access List Entry. If you are unsure of the IP address that you are running Terraform on (and you are performing this step from that machine), simply click Use Current IP Address and Save. Another option is to open up your IP Access List to all, but this comes with significant potential risk. To do this, you can add the following two CIDRs: 0.0.0.0/1 and 128.0.0.0/1. Together, these entries open your IP Access List to all 4,294,967,296 (2^32) IPv4 addresses and should be used with caution.
Go to the left-hand menu bar and click Billing and then Add Payment Method. Follow the steps to ensure there is a valid payment method for your organization. Note that if you are creating a free (forever) M0 cluster tier, you can skip this step.
Go to the official HashiCorp Terraform downloads page and follow the instructions to set up Terraform on the platform of your choice. For the purposes of this demo, we will be using an Ubuntu/Debian environment.
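If you happen to be on Ubuntu/Debian as well, the commands below are one way to install Terraform from HashiCorp's apt repository at the time of writing (always double-check the official downloads page for the current instructions):

# Add HashiCorp's GPG key and apt repository, then install Terraform
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform

# Verify the installation
terraform -version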
We will need to configure the MongoDB Atlas Provider using the MongoDB Atlas API Key you generated earlier (Step 2). We will be securely storing these secrets as Environment Variables.
First, go to the terminal window and create Environment Variables with the below commands. This prevents you from having to hard-code secrets directly into Terraform config files (which is not recommended):
export MONGODB_ATLAS_PUBLIC_KEY="<insert your public key here>"
export MONGODB_ATLAS_PRIVATE_KEY="<insert your private key here>"
Next, in an empty directory, create an empty file called provider.tf. Here we will input the following code to define the MongoDB Atlas Provider. This will automatically grab the most current version of the Terraform MongoDB Atlas Provider.
# Define the MongoDB Atlas Provider
terraform {
  required_providers {
    mongodbatlas = {
      source = "mongodb/mongodbatlas"
    }
  }
  required_version = ">= 0.13"
}
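If you prefer reproducible runs over always pulling the latest provider, you can optionally pin a version. A minimal sketch (the version constraint shown is illustrative; pick the release you have tested against):

# Optional: pin the MongoDB Atlas Provider to a version range (illustrative value)
terraform {
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "~> 1.10"
    }
  }
  required_version = ">= 0.13"
}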
We will now create a variables.tf file to declare all the Terraform variables used as part of this exercise and all of which are of type string. Next, we’ll define values (i.e. our secrets) for each of these variables in the terraform.tfvars file. Note as with most secrets, best practice is not to upload them (or the terraform.tfvars file itself) to public repos.
# Atlas Organization ID
variable "atlas_org_id" {
  type        = string
  description = "Atlas Organization ID"
}

# Atlas Project Name
variable "atlas_project_name" {
  type        = string
  description = "Atlas Project Name"
}

# Atlas Project Environment
variable "environment" {
  type        = string
  description = "The environment to be built"
}

# Cluster Instance Size Name
variable "cluster_instance_size_name" {
  type        = string
  description = "Cluster instance size name"
}

# Cloud Provider to Host Atlas Cluster
variable "cloud_provider" {
  type        = string
  description = "AWS or GCP or Azure"
}

# Atlas Region
variable "atlas_region" {
  type        = string
  description = "Atlas region where resources will be created"
}

# MongoDB Version
variable "mongodb_version" {
  type        = string
  description = "MongoDB Version"
}

# IP Address Access
variable "ip_address" {
  type        = string
  description = "IP address used to access Atlas cluster"
}
The example below uses the most current MongoDB version as of this writing (6.0) and an M10 cluster tier, which is great for a robust development environment, deployed on AWS in the US_WEST_2 Atlas region. For specific details about all the available options besides M10 and US_WEST_2, please see the documentation.
atlas_org_id               = "<UPDATE WITH YOUR ORG ID>"
atlas_project_name         = "myFirstProject"
environment                = "dev"
cluster_instance_size_name = "M10"
cloud_provider             = "AWS"
atlas_region               = "US_WEST_2"
mongodb_version            = "6.0"
ip_address                 = "<UPDATE WITH YOUR IP>"
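Since terraform.tfvars (and later the Terraform state files) contain values you won't want in a public repo, a common convention, assuming you are tracking this directory with Git, is to ignore them up front:

# Keep secrets and local state out of version control (a common convention, not required by Terraform)
cat >> .gitignore <<'EOF'
terraform.tfvars
*.tfstate
*.tfstate.backup
.terraform/
EOF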
Next, create a main.tf file, which we will populate together with the minimum resources required to create and access your cluster: a MongoDB Atlas Project (Step 8), Database User/Password (Step 9), IP Access List (Step 10), and of course, the MongoDB Atlas Cluster itself (Step 11). We will then walk through how to create Terraform Outputs (Step 12) so you can access your Atlas cluster and then create a Private Endpoint with AWS PrivateLink (Step 13). If you are already familiar with any of these steps, feel free to skip ahead.
Note: As infrastructure resources get created, modified, or destroyed, several more files will be generated in your directory by Terraform (for example the terraform.tfstate file). It is best practice not to modify these additional files directly unless you know what you are doing.
MongoDB Atlas Projects help organize resources and provide granular access controls inside our MongoDB Atlas Organization. Note that MongoDB Atlas “Groups” and “Projects” are synonymous terms.
To create a Project using Terraform, we will need the MongoDB Atlas Organization ID and an API key with at least the Organization Project Creator role (which we selected when creating the MongoDB Atlas Provider API key in Step 2).
To get this information, simply click on Settings on the left-hand menu bar in the Atlas UI and click the copy icon next to Organization ID. You can now paste this information as the atlas_org_id in your terraform.tfvars file.
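Alternatively, if you already have the MongoDB Atlas CLI installed and authenticated (mentioned in the introduction), you should be able to list your organization IDs from the terminal:

# List the organizations (including their IDs) visible to your Atlas CLI profile
atlas organizations list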
Next in the main.tf file, we will use the resource mongodbatlas_project from the Terraform MongoDB Atlas Provider to create our Project. To do this, simply input:
# Create a Project
resource "mongodbatlas_project" "atlas-project" {
  org_id = var.atlas_org_id
  name   = var.atlas_project_name
}
To authenticate a client to MongoDB, like the MongoDB Shell or your application code using a MongoDB Driver (officially supported in Python, Node.js, Go, Java, C#, C++, Rust, and several others), you must add a corresponding Database User to your MongoDB Atlas Project. See the documentation for more information on available user roles so you can customize the user’s RBAC (Role Based Access Control) as per your team’s needs.
For now, simply input the following code as part of the next few lines in the main.tf file to create a Database User with a random 16-character password. This will use the resource mongodbatlas_database_user from the Terraform MongoDB Atlas Provider. The database user_password is a sensitive secret, so to access it, you will need to input the “terraform output -json user_password” command in your terminal window after our deployment is complete to reveal it.
# Create a Database User
resource "mongodbatlas_database_user" "db-user" {
  username           = "user-1"
  password           = random_password.db-user-password.result
  project_id         = mongodbatlas_project.atlas-project.id
  auth_database_name = "admin"
  roles {
    role_name     = "readWrite"
    database_name = "${var.atlas_project_name}-db"
  }
}

# Create a Database Password
resource "random_password" "db-user-password" {
  length           = 16
  special          = true
  override_special = "_%@"
}
Next, we will create the IP Access List by inputting the following into your main.tf file. Be sure to look up the IP address (or CIDR range) of the machine you’ll be connecting to your MongoDB Atlas cluster from and paste it into the terraform.tfvars file (as shown in the code block in Step 7). This will use the resource mongodbatlas_project_ip_access_list from the Terraform MongoDB Atlas Provider.
# Create Database IP Access List
resource "mongodbatlas_project_ip_access_list" "ip" {
  project_id = mongodbatlas_project.atlas-project.id
  ip_address = var.ip_address
}
We will now use the mongodbatlas_advanced_cluster resource to create a MongoDB Atlas Cluster. With this resource, you can not only create a deployment but also manage it over its lifecycle: scaling compute and storage independently, enabling cloud backups, and creating analytics nodes.
In this example, we group three database servers together to create a replica set with a primary server and two secondary replicas duplicating the primary's data. This architecture is primarily designed with high availability in mind and can automatically handle failover if one of the servers goes down — and recover automatically when it comes back online. We call all these nodes electable because an election is held between them to work out which one is primary.
We will also set the optional backup_enabled flag to true. This provides increased data resiliency by enabling localized backup storage using the native snapshot functionality of the cluster's cloud service provider (see documentation).
Lastly, we create one analytics node. Analytics nodes are read-only nodes used exclusively to execute database queries, which means the analytics workload is isolated to this node and operational performance isn't impacted. This makes analytics nodes ideal for running longer, more computationally intensive analytics queries without affecting your replica set's performance (see documentation).
# Create an Atlas Advanced Cluster
resource "mongodbatlas_advanced_cluster" "atlas-cluster" {
  project_id             = mongodbatlas_project.atlas-project.id
  name                   = "${var.atlas_project_name}-${var.environment}-cluster"
  cluster_type           = "REPLICASET"
  backup_enabled         = true
  mongo_db_major_version = var.mongodb_version
  replication_specs {
    region_configs {
      electable_specs {
        instance_size = var.cluster_instance_size_name
        node_count    = 3
      }
      analytics_specs {
        instance_size = var.cluster_instance_size_name
        node_count    = 1
      }
      priority      = 7
      provider_name = var.cloud_provider
      region_name   = var.atlas_region
    }
  }
}
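As a side note on the analytics node defined above: Atlas tags analytics nodes so that clients can target them with read preference tags in the connection string. A sketch with mongosh, assuming the standard nodeType:ANALYTICS tag and substituting your own cluster host once the deployment exists:

# Route reads to the analytics node only (placeholder host from your SRV connection string)
mongosh "mongodb+srv://<your-cluster-host>/?readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS" --username user-1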
You can output information from your Terraform configuration to the terminal window of the machine executing Terraform commands. This can be especially useful for values you won’t know until the resources are created, like the random password for the database user or the connection string to your Atlas cluster deployment. The code below in the main.tf file will output these values to the terminal display for you to use after Terraform completes.
# Outputs to Display
output "atlas_cluster_connection_string" { value = mongodbatlas_advanced_cluster.atlas-cluster.connection_strings.0.standard_srv }
output "ip_access_list" { value = mongodbatlas_project_ip_access_list.ip.ip_address }
output "project_name" { value = mongodbatlas_project.atlas-project.name }
output "username" { value = mongodbatlas_database_user.db-user.username }
output "user_password" {
  sensitive = true
  value     = mongodbatlas_database_user.db-user.password
}
Increasingly, we see our customers want their data to traverse only private networks. One of the best ways to connect to Atlas over a private network from AWS is to use AWS PrivateLink, which establishes a one-way connection that preserves your perceived network trust boundary while eliminating the additional security controls associated with other options like VPC peering (Azure Private Link and GCP Private Service Connect are supported as well). Learn more about AWS PrivateLink with MongoDB Atlas.
To get started, we will need to first Install the AWS CLI. If you have not already done so, also see AWS Account Creation and AWS Access Key Creation for more details.
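For reference, this is one common way to install AWS CLI v2 on a Linux x86_64 machine at the time of writing (see the AWS documentation linked above for your platform):

# Download, unzip, and install AWS CLI v2, then verify
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version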
Next, let’s go to the terminal and create AWS Environment Variables with the below commands (similar to what we did in Step 6 with our MongoDB Atlas credentials). Use the same region as above, except with the AWS naming convention instead, i.e., “us-west-2”.
export AWS_ACCESS_KEY_ID="<INSERT YOUR KEY HERE>"
export AWS_SECRET_ACCESS_KEY="<INSERT YOUR KEY HERE>"
export AWS_DEFAULT_REGION="<INSERT AWS REGION HERE>"
Then, we add the AWS provider to the provider.tf file. This will enable us to now deploy AWS resources from the Terraform AWS Provider in addition to MongoDB Atlas resources from the Terraform MongoDB Atlas Provider directly from the same Terraform config files.
# Define the MongoDB Atlas and AWS Providers
terraform {
  required_providers {
    mongodbatlas = {
      source = "mongodb/mongodbatlas"
    }
    aws = {
      source = "hashicorp/aws"
    }
  }
  required_version = ">= 0.13"
}
We now add a new entry in our variables.tf and terraform.tfvars files for the desired AWS region. As mentioned earlier, we will be using “us-west-2” which is the AWS region in Oregon, USA.
variables.tf
# AWS Region
variable "aws_region" {
  type        = string
  description = "AWS Region"
}
terraform.tfvars
aws_region = "us-west-2"
Next, we create two more files, one for each of the new types of resources to be deployed: aws-vpc.tf to create a full network configuration on the AWS side and atlas-pl.tf to create both the Amazon VPC Endpoint and the MongoDB Atlas Endpoint of the PrivateLink. In your environment, you may already have an AWS network created. If so, you’ll want to include the correct values in the atlas-pl.tf file and won’t need the aws-vpc.tf file. To get started quickly, we will simply git clone them from our repo (see the sketch below).
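A quick sketch of pulling just those two files into your working directory; the repository URL below is a placeholder for the GitHub repo linked at the top of this post, and you may need to adjust the paths to wherever aws-vpc.tf and atlas-pl.tf live in that repo:

# Clone the repo (placeholder URL), copy the two files, and clean up
git clone <REPO_URL> tmp-atlas-terraform
cp tmp-atlas-terraform/aws-vpc.tf tmp-atlas-terraform/atlas-pl.tf .
rm -rf tmp-atlas-terraform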
After that, we will use a Terraform Data Source to wait until the PrivateLink creation is completed so we can get the new connection string for the PrivateLink connection. In main.tf, simply add the below:
1 data "mongodbatlas_advanced_cluster" "atlas-cluser" { 2 project_id = mongodbatlas_project.atlas-project.id 3 name = mongodbatlas_advanced_cluster.atlas-cluster.name 4 depends_on = [mongodbatlas_privatelink_endpoint_service.atlaseplink] 5 }
Lastly, staying in the main.tf file, we add the below additional output code snippet in order to display the Private Endpoint-Aware Connection String to the terminal:
1 output "privatelink_connection_string" { 2 value = lookup(mongodbatlas_advanced_cluster.atlas-cluster.connection_strings[0].aws_private_link_srv, aws_vpc_endpoint.ptfe_service.id) 3 }
We are now all set to start creating our first MongoDB Atlas deployment!
Open the terminal console and type terraform init to initialize Terraform. This will download and install both the Terraform AWS and MongoDB Atlas Providers (if you have not done so already).
Next, we will run the terraform plan command. This will output what Terraform plans to do, such as creation, changes, or destruction of resources. If the output is not what you expect, then it’s likely an issue in your Terraform configuration files.
Next, use the terraform apply command to deploy the infrastructure. If all looks good, input yes to approve the build.
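Putting it together, the full flow from the terminal looks like this:

# Initialize the working directory and download providers
terraform init

# Preview the resources Terraform will create
terraform plan

# Create the infrastructure (type "yes" when prompted)
terraform apply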
Success!
Note that new AWS and MongoDB Atlas resources can take ~15 minutes to provision, and the provider will continue to give you status updates until the deployment is complete. You can also check on progress through the Atlas UI and AWS Management Console.
The connection string shown in the output can be used to access your MongoDB database (including performing CRUD operations) via the MongoDB Shell, the MongoDB Compass GUI, and the Data Explorer in the Atlas UI (as shown below). Learn more about how to interact with data in MongoDB Atlas with the MongoDB Query Language (MQL). As a pro tip, I regularly leverage the MongoDB Cheat Sheet to quickly reference key commands.
Lastly, as a reminder, the database user_password is a sensitive secret, so to access it, you will need to input the “terraform output -json user_password” command in your terminal window to reveal it.
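For example, here is a minimal sketch, assuming mongosh is installed locally; substitute the values from your own Terraform outputs:

# Reveal the sensitive database user password
terraform output -json user_password

# Re-display the non-sensitive outputs at any time
terraform output

# Connect with mongosh using the connection string and username from the outputs
mongosh "<atlas_cluster_connection_string>" --username user-1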
Feel free to explore more complex environments (including code examples for deploying MongoDB Atlas Clusters from other cloud vendors) which you can find in our public repo examples. When ready to delete all infrastructure created, you can leverage the terraform destroy command.
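A sketch of the teardown:

# Tear down every resource defined in this configuration (type "yes" when prompted)
terraform destroy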
The resources we created earlier will all now be terminated. If all looks good, input yes:
After a few minutes, we are back to an empty slate on both our MongoDB Atlas and AWS environments. It goes without saying, but please be mindful when using the terraform destroy command in any kind of production environment.
The HashiCorp Terraform MongoDB Atlas Provider is an open source project licensed under the Mozilla Public License 2.0 and we welcome community contributions. To learn more, see our contributing guidelines. As always, feel free to contact us with any issues.
Happy Terraforming with MongoDB Atlas on AWS!