MongoDB Atlas With Terraform: Database Users and Vault
Samuel Molling • 8 min read • Published Apr 15, 2024 • Updated Apr 15, 2024
In this tutorial, I will show how to create a user for the MongoDB database in Atlas using Terraform and how to store this credential securely in HashiCorp Vault. In the previous article, MongoDB Atlas With Terraform - Cluster and Backup Policies, we saw how to create a cluster with configured backup policies. Now, we will go ahead and create our first user. If you haven't seen the previous articles, I suggest you read them first to understand how to get started.
This article is for anyone who intends to use or already uses infrastructure as code (IaC) on the MongoDB Atlas platform or wants to learn more about it.
Everything we do here is based on the provider/resource documentation.
At this point, we will create our first user using Terraform in MongoDB Atlas and store the URI to connect to the cluster in HashiCorp Vault. For those unfamiliar, HashiCorp Vault is a secrets management tool that allows you to securely store, access, and manage sensitive credentials such as passwords, API keys, and certificates. It is designed to help organizations protect their data and infrastructure in complex, distributed IT environments. In it, we will store the connection URI of the user that will be created for the cluster from the last article.
Before we begin, make sure that all the prerequisites mentioned in the previous article are properly configured: Install Terraform, create an API key in MongoDB Atlas, and set up a project and a cluster in Atlas. These steps are essential to ensure the success of creating your database user.
The first step is to run HashiCorp Vault so that we can test our module. You can run Vault locally with Docker. If you don't have Docker installed, you can download it. After that, pull the image we want to run — in this case, Vault — by executing a command in the terminal:

```shell
docker pull vault:1.13.3
```
Alternatively, download it using Docker Desktop. Now, we will create a container from this image. Click on the image and then click Run. A box will open where we only need to map a port from our computer to the container. In this case, I will use port 8200, which is Vault's default port. Click Run.
The container will start running. If we open localhost:8200 in the browser, the Vault login screen will appear. To access Vault, we will use the root token that is generated when the container is created.
Now, we will log in. Once inside, we will create a new KV-type engine, just to illustrate things a little better. Click Secrets Engines -> Enable new Engine -> Generic KV and click Next.
In Path, put kv/my_app and click Enable Engine. Now, we have Vault configured and working. The next step is to configure the Terraform providers. This will allow Terraform to communicate with the MongoDB Atlas and Vault APIs to manage resources. Add the following block of code to your providers.tf file:
```terraform
provider "mongodbatlas" {}

provider "vault" {
  address          = "http://localhost:8200"
  token            = "hvs.brmNeZd31NwEmyky1uYI2wvY"
  skip_child_token = true
}
```
In the previous article, we configured the Terraform provider by placing our public and private keys in environment variables. We will continue in this way. We will add a new provider, the Vault. In it, we will configure the Vault address, the authentication token, and the skip_child_token parameter so that we can authenticate to the Vault.
Note: It is not advisable to specify the authentication token in a production environment. Use one of the authentication methods recommended by HashiCorp, such as AppRole. You can evaluate the options in Terraform's docs.
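As a rough sketch of what that could look like, the Vault provider supports a generic auth_login block; the variable names below (vault_role_id, vault_secret_id) are assumptions you would supply from a secure source rather than hardcoding them:

```terraform
# Sketch only: AppRole-based login instead of a hardcoded root token.
# vault_role_id and vault_secret_id are hypothetical variables; populate
# them from environment variables or another secure mechanism.
provider "vault" {
  address = "http://localhost:8200"

  auth_login {
    path = "auth/approle/login"

    parameters = {
      role_id   = var.vault_role_id
      secret_id = var.vault_secret_id
    }
  }
}
```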
The version file continues to have the same purpose, as mentioned in other articles, but we will add the version of the Vault provider as something new.
```terraform
terraform {
  required_version = ">= 0.12"
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "1.14.0"
    }
    vault = {
      source  = "hashicorp/vault"
      version = "4.0.0"
    }
  }
}
```
After configuring the version file and establishing the Terraform and provider versions, the next step is to define the user resource in MongoDB Atlas. This is done by creating a .tf file — for example, main.tf — where we will create our module. As we are going to make a module that will be reusable, we will use variables and default values so that other calls can create users with different permissions, without having to write a new module.
```terraform
# ------------------------------------------------------------------------------
# RANDOM PASSWORD
# ------------------------------------------------------------------------------
resource "random_password" "default" {
  length  = var.password_length
  special = false
}

# ------------------------------------------------------------------------------
# DATABASE USER
# ------------------------------------------------------------------------------
resource "mongodbatlas_database_user" "default" {
  project_id         = data.mongodbatlas_project.default.id
  username           = var.username
  password           = random_password.default.result
  auth_database_name = var.auth_database_name

  dynamic "roles" {
    for_each = var.roles
    content {
      role_name       = try(roles.value["role_name"], null)
      database_name   = try(roles.value["database_name"], null)
      collection_name = try(roles.value["collection_name"], null)
    }
  }

  dynamic "scopes" {
    for_each = var.scope
    content {
      name = scopes.value["name"]
      type = scopes.value["type"]
    }
  }

  dynamic "labels" {
    for_each = local.tags
    content {
      key   = labels.key
      value = labels.value
    }
  }
}

resource "vault_kv_secret_v2" "default" {
  mount     = var.vault_mount
  name      = var.secret_name
  data_json = jsonencode(local.secret)
}
```
At the beginning of the file, we have the random_password resource, which generates a random password for our user. In the mongodbatlas_database_user resource, we specify the user's details. As in the other articles, some values are variables, such as username, and auth_database_name has a default value of admin. Below that, we create three dynamic blocks: roles, scopes, and labels. The roles block takes a list of maps that can contain the role name (read, readWrite, or another role), the database_name, and the collection_name. The database and collection values are optional: a user with atlasAdmin permission, for example, needs neither, and you can also specify only a database without a particular collection, as we will do in the example. In the scopes block, the type is either DATA_LAKE or CLUSTER. In our case, we will use CLUSTER with the name of the cluster we created earlier, the demo cluster. Finally, the labels serve as tags for our user.
Finally, we define the vault_kv_secret_v2 resource that creates a secret in our Vault. It receives the mount where the secret will be created and the secret's name. The data_json argument holds the secret's value, which we build in the locals.tf file discussed below; since it must be a JSON value, we encode it with jsonencode.
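As a hedged illustration of how a consumer could later read this secret back, a separate configuration might use the Vault provider's vault_kv_secret_v2 data source; the mount and secret name below match the values we will set in terraform.tfvars, but this block is not part of the module itself:

```terraform
# Sketch: read the stored URI back from Vault in another configuration.
data "vault_kv_secret_v2" "app" {
  mount = "kv/my_app"
  name  = "MY_MONGODB_SECRET"
}

output "mongodb_uri" {
  value     = data.vault_kv_secret_v2.app.data["URI"]
  sensitive = true
}
```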
In the variable.tf file, we create variables with default values:
```terraform
variable "project_name" {
  description = "The name of the Atlas project"
  type        = string
}

variable "cluster_name" {
  description = "The name of the Atlas cluster"
  type        = string
}

variable "password_length" {
  description = "The length of the password"
  type        = number
  default     = 20
}

variable "username" {
  description = "The username of the database user"
  type        = string
}

variable "auth_database_name" {
  description = "The name of the database in which the user is created"
  type        = string
  default     = "admin"
}

variable "roles" {
  description = <<HEREDOC
Required - One or more user roles blocks.
HEREDOC
  type        = list(map(string))
}

variable "scope" {
  description = "The scopes to assign to the user"
  type = list(object({
    name = string
    type = string
  }))
  default = []
}

variable "labels" {
  type        = map(any)
  default     = null
  description = "A JSON containing additional labels"
}

variable "uri_options" {
  type        = string
  default     = "retryWrites=true&w=majority&readPreference=secondaryPreferred"
  description = "A string containing additional URI configs"
}

variable "vault_mount" {
  description = "The mount point for the Vault secret"
  type        = string
}

variable "secret_name" {
  description = "The name of the Vault secret"
  type        = string
}

variable "application" {
  description = <<HEREDOC
Optional - Key-value pairs that tag and categorize the cluster for billing and organizational purposes.
HEREDOC
  type        = string
}

variable "environment" {
  description = <<HEREDOC
Optional - Key-value pairs that tag and categorize the cluster for billing and organizational purposes.
HEREDOC
  type        = string
}
```
We configure a file called locals.tf with the values for our Vault secret and the tags that were created, as in the last article. The interesting thing here is that we define how our user's connection string will be assembled and saved in Vault. We could save only the username and password, but I personally prefer to save the full URI. This way, I can bake in some good practices, such as setting connection options like readPreference, without depending on the developer to add them in the application. In the code below, there is some text handling so that the URI comes out correct. At the end, I create a local called secret that has a URI key holding the assembled URI.
```terraform
locals {
  private_connection_srv    = data.mongodbatlas_advanced_cluster.default.connection_strings.0.standard_srv
  cluster_uri               = trimprefix(local.private_connection_srv, "mongodb+srv://")
  private_connection_string = "mongodb+srv://${mongodbatlas_database_user.default.username}:${random_password.default.result}@${local.cluster_uri}/${var.auth_database_name}?${var.uri_options}"

  secret = { "URI" = local.private_connection_string }

  tags = {
    name        = var.application
    environment = var.environment
  }
}
```
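To make the string handling concrete, here is a small Python sketch of the same assembly logic as the locals above; the sample SRV address and credentials are made up for illustration:

```python
def build_connection_uri(
    standard_srv: str,
    username: str,
    password: str,
    auth_database_name: str = "admin",
    uri_options: str = "retryWrites=true&w=majority&readPreference=secondaryPreferred",
) -> str:
    """Mirror of locals.tf: strip the scheme (trimprefix), then interpolate."""
    cluster_uri = standard_srv.removeprefix("mongodb+srv://")
    return f"mongodb+srv://{username}:{password}@{cluster_uri}/{auth_database_name}?{uri_options}"


# Hypothetical SRV address, as the cluster data source would return it.
print(build_connection_uri("mongodb+srv://cluster-demo.ab1cd.mongodb.net", "usr_myapp", "s3cret"))
# → mongodb+srv://usr_myapp:s3cret@cluster-demo.ab1cd.mongodb.net/admin?retryWrites=true&w=majority&readPreference=secondaryPreferred
```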
In this article, we adopt the use of data sources in Terraform to establish a dynamic connection with existing resources, such as our MongoDB Atlas project and cluster. Specifically, in the data.tf file, we define two data sources, mongodbatlas_project and mongodbatlas_advanced_cluster, to access information about the existing project and cluster based on their names:
```terraform
data "mongodbatlas_project" "default" {
  name = var.project_name
}

data "mongodbatlas_advanced_cluster" "default" {
  project_id = data.mongodbatlas_project.default.id
  name       = var.cluster_name
}
```
Finally, we define our variables file, terraform.tfvars:
```terraform
project_name = "project-test"
username     = "usr_myapp"
application  = "teste-cluster"
environment  = "dev"
cluster_name = "cluster-demo"

roles = [{
  role_name       = "readWrite"
  database_name   = "db1"
  collection_name = "collection1"
  }, {
  role_name     = "read"
  database_name = "db2"
}]

scope = [{
  name = "cluster-demo"
  type = "CLUSTER"
}]

secret_name = "MY_MONGODB_SECRET"
vault_mount = "kv/my_app"
```
These values defined in terraform.tfvars are used by Terraform to populate corresponding variables in your configuration. In it, we are specifying the user's scope, values for the Vault, and our user's roles. The user will have readWrite permission on db1 in collection1 and read permission on db2 in all collections for the demo cluster.
The file structure is as follows:
- main.tf: In this file, we define the main resources, mongodbatlas_database_user and vault_kv_secret_v2, along with the random password generation.
- provider.tf: This file is where we define the provider we are using, in our case, mongodbatlas and Vault.
- terraform.tfvars: This file contains the variables that will be used in our module — for example, the user name and Vault information, among others.
- variable.tf: Here, we define the variables mentioned in the terraform.tfvars file, specifying the type and, optionally, a default value.
- version.tf: This file is used to specify the version of Terraform and the providers we are using.
- data.tf: Here, we specify the data sources that bring us information about our project and the created cluster. We look them up by name, and they give our module the project ID and cluster information, such as its connection string.
- locals.tf: We specify example tags to use in our user and treatments to create the URI in the Vault.
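Since these files form a reusable module, a root configuration could call it along these lines; the source path is a hypothetical local directory, and the values mirror our terraform.tfvars:

```terraform
# Sketch: calling the database-user module from a root configuration.
module "database_user" {
  source = "./modules/database_user" # hypothetical local path

  project_name = "project-test"
  cluster_name = "cluster-demo"
  username     = "usr_myapp"
  application  = "teste-cluster"
  environment  = "dev"

  roles = [
    { role_name = "readWrite", database_name = "db1", collection_name = "collection1" },
  ]

  scope = [{ name = "cluster-demo", type = "CLUSTER" }]

  vault_mount = "kv/my_app"
  secret_name = "MY_MONGODB_SECRET"
}
```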
Now is the time to apply. =D
We run terraform init in the terminal, in the folder where the files are located, so that it downloads the providers, modules, etc.
Note: Remember to export the environment variables with the public and private key.
```shell
export MONGODB_ATLAS_PUBLIC_KEY="your_public_key"
export MONGODB_ATLAS_PRIVATE_KEY="your_private_key"
```
Now, we run init and then plan, as in previous articles.
We assess that our plan is exactly what we expect and run the apply to create it.
When running the terraform apply command, you will be prompted for approval with yes or no. Type yes. Now, let's look in Atlas to see if the user was created successfully...
Let's also look in the Vault to see if our secret was created.
It was created successfully! Now, let's test if the URI is working perfectly.
This is the format of the URI that is generated:
```shell
mongosh "mongodb+srv://usr_myapp:<password>@<clusterEndpoint>/admin?retryWrites=true&w=majority&readPreference=secondaryPreferred"
```
We connect and will make an insertion to evaluate whether the permissions are adequate — initially, in db1 in collection1.
Success! Now, let's try db3 to make sure the user does not have permission in another database.
Excellent — permission denied, as expected.
We have reached the end of this series of articles about MongoDB. I hope they were enlightening and useful for you!
To learn more about MongoDB and various tools, I invite you to visit the Developer Center to read the other articles.