Ensuring High Availability for MongoDB on Kubernetes

Mercy Bassey • 11 min read • Published Jul 12, 2024 • Updated Jul 12, 2024
A database is a structured collection of data that allows efficient storage, retrieval, and manipulation of information. Databases are fundamental to modern applications and support critical functions like storing user accounts, transactions, and much more. Ensuring the high availability of a database is vital, as downtime can lead to significant disruptions, data loss, and financial impacts. High availability ensures that a database remains accessible and functional, even in the face of hardware failures, network issues, or other disruptions. In this tutorial, we will focus on achieving high availability with MongoDB.
MongoDB, when deployed on Kubernetes, can leverage the orchestration and automation capabilities of the platform to enhance its availability and resilience. With its robust features for container management, scaling, and recovery, Kubernetes provides an ideal environment for deploying highly available MongoDB instances. This tutorial will guide you through the steps necessary to deploy MongoDB on Kubernetes, configure it for high availability, set up backup mechanisms using mongodump, and implement automatic scaling to handle varying workloads.

Prerequisites

To follow along in this tutorial, make sure you have a running Kubernetes cluster, kubectl configured to communicate with it, and a terminal with openssl available for generating the replica set keyfile.

Deploying MongoDB

To begin, you’ll need to set up the necessary resources and configurations to deploy MongoDB in your Kubernetes cluster. You will create a persistent volume to ensure your MongoDB data is retained even if pods are rescheduled. You will also set up a headless service to enable stable network communication between MongoDB pods. Finally, you will deploy MongoDB using a StatefulSet, which will manage the deployment and scaling of MongoDB pods while maintaining their unique identities.
Note: If you wish to read more about the concepts mentioned, you can go through the documentation on Kubernetes.
The PersistentVolume (PV) provides the actual storage, while the PersistentVolumeClaim (PVC) requests that storage from the available PV. Add the following configuration to a file called pv-pvc.yaml; this will create a persistent volume called mongodb-pv and a persistent volume claim called mongodb-pvc:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/mongodb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Run the following command to create the PersistentVolume and PersistentVolumeClaim:
kubectl apply -f pv-pvc.yaml
Confirm that they are created with the following commands:
kubectl get pv
kubectl get pvc
You should see output similar to the following, confirming that the PersistentVolume and PersistentVolumeClaim have been created successfully:
Screenshot showing that the PersistentVolume and PersistentVolumeClaim have been created
Next, create a headless service to manage stable network identities for the MongoDB pods. In a file called headless-service.yaml, add the following configuration; this will create a headless service for your MongoDB database called mongodb-service:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
    - port: 27017
Apply the headless service to your Kubernetes cluster using the command:
kubectl apply -f headless-service.yaml
Confirm that the headless service has been created using the following command:
kubectl get svc
The following output is expected:
Creating and viewing the headless service
Next, set up authentication for your MongoDB replica set. Using a keyfile (mongodb-keyfile) secures communication between replica set members, prevents unauthorized access, and ensures that only trusted nodes can join the replica set, thereby maintaining data integrity and security. Generate the keyfile and restrict its permissions with the following commands:
sudo bash -c "openssl rand -base64 756 > mongodb-keyfile"
sudo chmod 400 mongodb-keyfile
Create a Kubernetes secret to store the keyfile:
kubectl create secret generic mongodb-keyfile --from-file=mongodb-keyfile
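If you want to double-check that the keyfile made it into the secret before moving on (an optional step, not part of the original walkthrough), you can inspect it:
# List the secret and confirm it contains a mongodb-keyfile entry of roughly 1 KB
kubectl get secret mongodb-keyfile
kubectl describe secret mongodb-keyfile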
Finally, create a StatefulSet to deploy MongoDB with persistent storage and stable network identities. In a file called statefulset.yaml, add the following configuration:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb # Specifies the name of the StatefulSet
spec:
  serviceName: "mongodb-service" # Specifies the service to use
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:latest
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--bind_ip_all"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongodb-storage
              mountPath: /data/db
            - name: keyfile
              mountPath: /etc/mongodb-keyfile
              readOnly: true
          resources:
            requests:
              cpu: "100m"
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
      volumes:
        - name: keyfile
          secret:
            secretName: mongodb-keyfile
            defaultMode: 0400
  volumeClaimTemplates:
    - metadata:
        name: mongodb-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
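One thing to note: this manifest mounts the keyfile secret at /etc/mongodb-keyfile, but the mongod command above never references it, so keyfile-based internal authentication is not actually enforced as written. If you want mongod to use the keyfile, one possible extension (an assumption on my part, not shown in the original configuration) is to add the --keyFile flag to the container command; keep in mind that this also enables access control, so you would then need to create an administrative user through the localhost exception:
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--bind_ip_all"
            - "--keyFile"
            - /etc/mongodb-keyfile/mongodb-keyfile # hypothetical: the secret's key name inside the mounted directory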
Apply the StatefulSet to your Kubernetes cluster:
kubectl apply -f statefulset.yaml
Confirm that the StatefulSet was created successfully with all pods healthy and running:
kubectl get statefulsets
kubectl get pods
You should have the following output with the StatefulSet creating three MongoDB pods named mongodb-0, mongodb-1, and mongodb-2:
Creating and viewing the StatefulSet and pods
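With the pods running, each one is reachable at a stable DNS name of the form <pod-name>.mongodb-service, thanks to the headless service created earlier. If you would like to verify this yourself (an optional check, assuming everything lives in the default namespace), you can resolve one of the names from a throwaway pod:
# Resolve the stable DNS name of the first MongoDB pod; the busybox pod is removed when the command exits
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup mongodb-0.mongodb-service.default.svc.cluster.local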

Configuring high availability

To ensure high availability for your MongoDB database, you will configure a replica set. A replica set in MongoDB is a group of mongod instances that maintain the same data set, providing redundancy and high availability.
Before configuring the replica set, it is helpful to have some data present. This step is optional, but having data in your database will help you understand the backup process more clearly in the following subsections.
First, exec into one of the MongoDB pods and initialize the replica set. Typically, you would use the first pod (e.g., mongodb-0).
kubectl exec -it mongodb-0 -- mongosh

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb-0.mongodb-service:27017", priority: 2 },
    { _id: 1, host: "mongodb-1.mongodb-service:27017", priority: 1 },
    { _id: 2, host: "mongodb-2.mongodb-service:27017", priority: 1 }
  ]
})
You should see the following output:
test> rs.initiate({
...   _id: "rs0",
...   members: [
...     { _id: 0, host: "mongodb-0.mongodb-service:27017", priority: 2 },
...     { _id: 1, host: "mongodb-1.mongodb-service:27017", priority: 1 },
...     { _id: 2, host: "mongodb-2.mongodb-service:27017", priority: 1 }
...   ]
... })
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1718718451, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1718718451, i: 1 })
}
rs0 [direct: secondary] test>
In a replica set, writes only go to the primary. Because mongodb-0 was given the highest priority, it is elected as the primary, so you can run your inserts there. If you are not sure which pod is currently the primary, you can check with:
rs.status()
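If you only need the primary's address rather than the full status document, a shorter check (one possible shortcut, not used in the original steps) is:
# Prints just the hostname of the current primary
kubectl exec -it mongodb-0 -- mongosh --quiet --eval "db.hello().primary"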
Switch to a database called “users” and then insert some data using the following commands:
use users

db.femaleusers.insertOne({ name: "Jane Doe", age: 30, email: "jane.doe@example.com" })
db.femaleusers.insertOne({ name: "Mary Smith", age: 25, email: "mary.smith@example.com" })
db.femaleusers.insertOne({ name: "Alice Johnson", age: 28, email: "alice.johnson@example.com" })
db.femaleusers.insertOne({ name: "Emily Davis", age: 32, email: "emily.davis@example.com" })
db.femaleusers.insertOne({ name: "Sarah Brown", age: 27, email: "sarah.brown@example.com" })

db.femaleusers.find()
exit
Note: You can use the find() command to list the users you just inserted. You can also run it from one of the secondary members, provided you first allow secondary reads on that connection (for example, with db.getMongo().setReadPref("secondaryPreferred")); otherwise, the query is rejected with a "not primary" error.
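For example, a sketch of reading the collection from mongodb-1 (assuming it is currently a secondary) looks like this:
# Allow secondary reads on this connection, then count the documents in users.femaleusers
kubectl exec -it mongodb-1 -- mongosh --quiet --eval '
  db.getMongo().setReadPref("secondaryPreferred");
  db.getSiblingDB("users").femaleusers.countDocuments();
'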
Once you have run the insert commands, you should see output like the following, indicating that the documents were successfully inserted into the femaleusers collection in the users database:
Viewing successful insertion of user documents into the femaleusers collection

Performing a backup with mongodump

With some data already inserted into your MongoDB database, you are now ready to set up backups for it using mongodump.
First, create a separate PV and PVC for storing backups by adding the following configuration settings in a backup-pv-pvc.yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: backup-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/backup
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Apply this with the following command:
kubectl apply -f backup-pv-pvc.yaml
Confirm that they have been created with the following commands:
kubectl get pv
kubectl get pvc
Confirming that backup-pv and backup-pvc have been created
To back up your MongoDB data, you will create a Kubernetes CronJob that runs the mongodump utility. Using a CronJob means backups happen automatically on whatever schedule you define.
Create a file called mongodb-backup-cronjob.yaml with the following content:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mongodb-backup
spec:
  schedule: "*/5 * * * *" # Runs a backup every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: mongodump
              image: mongo:latest
              command:
                - sh
                - -c
                - |
                  # Perform backup
                  mongodump --host=mongodb-service --port=27017 --out=/backup/$(date +\%Y-\%m-\%dT\%H-\%M-\%S)
                  # Remove backups older than 7 days
                  find /backup -type d -mtime +7 -exec rm -rf {} +
              volumeMounts:
                - name: backup-storage
                  mountPath: /backup
          restartPolicy: Never
          volumes:
            - name: backup-storage
              persistentVolumeClaim:
                claimName: backup-pvc
Apply the configuration to create the CronJob and then verify that the CronJob has been created:
kubectl apply -f mongodb-backup-cronjob.yaml
kubectl get cronjob
You should see output similar to the following, confirming that the CronJob has been created successfully:
Confirming that the CronJob has been created
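If you would rather not wait for the schedule to fire, you can optionally trigger a one-off run from the CronJob (the job name here is arbitrary):
# Create a Job immediately from the CronJob's template
kubectl create job --from=cronjob/mongodb-backup mongodb-backup-manual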
After five minutes, you can confirm the status of the pods using the following command:
kubectl get pods
You should see a pod created by the CronJob, with a name similar to this:
Viewing the pod created by the CronJob
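The exact pod name will differ in your cluster; one way to find it (not part of the original steps) is to list the Jobs the CronJob has created and the pods they own:
# List the Jobs created by the CronJob, then the pod belonging to one of them
kubectl get jobs
kubectl get pods --selector=job-name=<job-name>   # <job-name> is a placeholder for one of the Jobs listed above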
To verify that the backup was created successfully, check the logs of the backup pod:
kubectl logs mongodb-backup-28635290-jfsjn
The log output should be similar to this:
Viewing the logs of the pod created by the CronJob
To access the backup files, you need to view the contents of the persistent volume where the backups are stored. You can do this with a temporary busybox pod that mounts the backup-pvc persistent volume claim (PVC) at the /backup directory inside the container, allowing you to explore the backup files stored in the PV. Create a file named backup-access.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: backup-access
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: backup-storage
          mountPath: /backup
  volumes:
    - name: backup-storage
      persistentVolumeClaim:
        claimName: backup-pvc
Apply the configuration and access the temporary pod:
kubectl apply -f backup-access.yaml
kubectl exec -it backup-access -- sh
Once inside the pod, navigate to the /backup directory, then into one of the timestamped backup directories (represented as <file> below), to view the backup files:
cd /backup
ls
cd <file>
ls
cd users
You should see the following output:
Accessing users from the backup
Now, exit the busybox container by typing exit.
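Restoring from one of these dumps is outside the scope of the original walkthrough, but for completeness, here is a minimal sketch of how it could be done: a one-off pod that mounts the same backup PVC and runs mongorestore against mongodb-0 (assumed to be the primary, given its higher priority). The pod name and the <backup-directory> placeholder are hypothetical; substitute one of the timestamped directories you saw above:
apiVersion: v1
kind: Pod
metadata:
  name: mongodb-restore
spec:
  restartPolicy: Never
  containers:
    - name: mongorestore
      image: mongo:latest
      command:
        - sh
        - -c
        # Restore the dump into the replica set; replace <backup-directory> with a real timestamped folder
        - mongorestore --host=mongodb-0.mongodb-service --port=27017 /backup/<backup-directory>
      volumeMounts:
        - name: backup-storage
          mountPath: /backup
  volumes:
    - name: backup-storage
      persistentVolumeClaim:
        claimName: backup-pvc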

Performing failover and recovery testing

To ensure the high availability of your MongoDB replica set, you need to test failover and recovery processes. This section will guide you through simulating a failure and verifying that your MongoDB setup can handle it gracefully.
In a MongoDB replica set, one member is the primary node responsible for handling write operations, while the other members are secondary nodes that replicate data from the primary. If the primary node fails, the replica set will automatically elect a new primary from the remaining secondary nodes.
Note: To know more about replica sets in MongoDB, you can visit the documentation on replica sets in MongoDB.
Begin by identifying the current primary node:
kubectl exec -it mongodb-0 -- mongosh --eval "rs.status()"
Look for the member with the "stateStr" : "PRIMARY" attribute:
{
  "_id" : 0,
  "name" : "mongodb-0.mongodb-service:27017",
  "stateStr" : "PRIMARY",
  ...
}
Simulate a failure by deleting the current primary node pod:
kubectl delete pod mongodb-0
Monitor the status of the replica set and observe the election of a new primary:
kubectl exec -it mongodb-1 -- mongosh --eval "rs.status()"
You should see that one of the secondary nodes has been promoted to primary:
{
  "_id" : 1,
  "name" : "mongodb-1.mongodb-service:27017",
  "stateStr" : "PRIMARY",
  ...
}
Verify that the deleted pod has been recreated and rejoined the replica set as a secondary node:
kubectl get pods
Confirm that the pod is back and running:
NAME        READY   STATUS    RESTARTS   AGE
mongodb-0   1/1     Running   0          70s
mongodb-1   1/1     Running   0          77m
mongodb-2   1/1     Running   0          73m
Check the replica set status again; because mongodb-0 has the highest priority, it should be re-elected as the primary once it rejoins the replica set:
kubectl exec -it mongodb-0 -- mongosh --eval "rs.status()"
You should see:
{
  "_id" : 0,
  "name" : "mongodb-0.mongodb-service:27017",
  "stateStr" : "PRIMARY",
  ...
}
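As an extra sanity check (not part of the original steps), you can confirm that the sample data survived the failover by counting the documents on the re-elected primary:
# Should print 5, the number of documents inserted earlier
kubectl exec -it mongodb-0 -- mongosh --quiet --eval 'db.getSiblingDB("users").femaleusers.countDocuments()'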

Configuring automatic scaling

In a dynamic environment, workload demands fluctuate, so your MongoDB deployment needs to scale automatically to maintain consistent performance. Kubernetes provides the Horizontal Pod Autoscaler (HPA) to manage this scaling.
The Kubernetes Horizontal Pod Autoscaler automatically scales the number of pods in a workload such as a Deployment, StatefulSet, or ReplicaSet based on observed CPU utilization (or other selected metrics). It helps ensure that your application has the right amount of resources to handle varying levels of traffic.
To configure automatic scaling for your MongoDB deployment, you first need to install Metrics Server; the HPA relies on it to collect resource metrics from the Kubernetes cluster. Install it with the following command:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
You should see the following output if applied successfully:
Installing Metrics Server
Execute the command kubectl get pods -n kube-system to view the status of the metrics server pods. You should see the following:
~$ kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
...
metrics-server-6d94bc8694-mkdrb   0/1     Running   0          60s
At this point, the Metrics Server pod is running but not ready because of TLS certificate issues. To resolve this, execute the following command to edit the Metrics Server deployment:
kubectl edit deployment metrics-server -n kube-system
The kubectl edit deployment metrics-server -n kube-system command opens the deployment configuration in a vim editor by default. To edit the file, move your cursor to the appropriate section using the arrow keys. Type i to enter insert mode and make your changes. Once you have finished editing, press the Esc key to exit insert mode, then type :wq to save your changes and close the editor.
Add the following commands to the container spec to bypass TLS verification:
spec:
  containers:
  - args:
    ...
    command:
    - /metrics-server
    - --kubelet-insecure-tls
    - --kubelet-preferred-address-types=InternalIP
Adding flags to bypass TLS verification for Metrics Server
After making these changes, confirm that the containers are running by executing:
kubectl get pods -n kube-system
You should see the following output:
~$ kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-67dfb8c8c9-spmcs   1/1     Running   0          16m
...
metrics-server-777dff589b-b56lg            1/1     Running   0          117s
Check the resource usage of the pods in your cluster:
kubectl top pods
You should see output similar to the following:
Checking the resource usage of the pods in your cluster
Define an HPA resource that specifies how and when to scale the MongoDB StatefulSet using the following command:
kubectl autoscale statefulset mongodb --min=3 --max=10 --cpu-percent=50
This configuration will set the minimum number of replicas to 3 and allow scaling up to a maximum of 10 replicas based on CPU utilization.
You should see the following output:
horizontalpodautoscaler.autoscaling/mongodb autoscaled
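If you prefer to keep this configuration declarative and under version control, an equivalent HorizontalPodAutoscaler manifest (a sketch assuming the autoscaling/v2 API, available on any recent Kubernetes release) would look roughly like this:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mongodb
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: mongodb
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50 # Scale out when average CPU utilization exceeds 50%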
Monitor the status of the HPA using the following command:
kubectl get hpa
This will show you the current status of the HPA, including the current number of replicas and the metrics used for scaling.
Viewing the HPA for the MongoDB StatefulSet

Conclusion

High availability is crucial for the smooth operation of any database system. This tutorial has demonstrated how to deploy MongoDB on Kubernetes, configure it for high availability with a replica set, set up backup mechanisms using mongodump, and configure automatic scaling with the Kubernetes Horizontal Pod Autoscaler. By following the steps highlighted in this tutorial, you can maintain continuous access to your data and avoid significant downtime, data loss, and financial impact. As a result, you'll have data durability and high availability at your fingertips.
If you have any questions or comments, feel free to join us in the MongoDB Developer Community.