Mastering MongoDB Ops Manager on Kubernetes

Arek Borucki • 7 min read • Published Jan 13, 2023 • Updated Jan 13, 2023
This article is part of a three-part series on deploying MongoDB across multiple Kubernetes clusters using the operators.
Managing MongoDB deployments can be a rigorous task, particularly when working with large numbers of databases and servers. Without the right tools and processes in place, it can be time-consuming to ensure that these deployments are running smoothly and efficiently. One significant issue in managing MongoDB clusters at scale is the lack of automation, which can lead to time-consuming and error-prone tasks such as backups, recovery, and upgrades. These tasks are crucial for maintaining the availability and performance of your clusters.
Additionally, monitoring and alerting can be a challenge, as it may be difficult to identify and resolve issues with your deployments. To address these problems, it's essential to use software that offers monitoring and alerting capabilities. Optimizing the performance of your deployments also requires guidance and support from the right sources.
Finally, it's critical for your deployments to be secure and compliant with industry standards. To achieve this, you need features that can help you determine if your deployments meet these standards.
MongoDB Ops Manager is a web-based application designed to assist with the management and monitoring of MongoDB deployments. It offers a range of features that make it easier to deploy, manage, and monitor MongoDB databases, such as:
  • Automated backups and recovery: Ops Manager can take automated backups of your MongoDB deployments and provide options for recovery in case of failure.
  • Monitoring and alerting: Ops Manager provides monitoring and alerting capabilities to help identify and resolve issues with your MongoDB deployments.
  • Performance optimization: Ops Manager offers tools and recommendations to optimize the performance of your MongoDB deployments.
  • Upgrade management: Ops Manager can help you manage and plan upgrades to your MongoDB deployments, including rolling upgrades and backups to ensure data availability during the upgrade process.
  • Security and compliance: Ops Manager provides features to help you secure your MongoDB deployments and meet compliance requirements.
However, managing Ops Manager can be a challenging task that requires a thorough understanding of its inner workings and how it interacts with the internal MongoDB databases. It is necessary to have the knowledge and expertise to perform upgrades, monitor it, audit it, and ensure its security. As Ops Manager is a crucial part of managing the operation of your MongoDB databases, its proper management is essential.
Fortunately, the MongoDB Enterprise Kubernetes Operator enables us to run Ops Manager on Kubernetes clusters, using native Kubernetes capabilities to manage Ops Manager for us, which makes it more convenient and efficient.

Kubernetes: MongoDBOpsManager custom resource

The MongoDB Enterprise Kubernetes Operator is software that can be used to deploy Ops Manager and MongoDB resources to a Kubernetes cluster, and it's responsible for managing the lifecycle of each of these deployments. It has been developed based on years of experience and expertise, and it's equipped with the necessary knowledge to properly install, upgrade, monitor, manage, and secure MongoDB objects on Kubernetes.
The Kubernetes Operator uses the MongoDBOpsManager custom resource to manage Ops Manager objects. It constantly monitors the specification of the custom resource for any changes and, when changes are detected, the operator validates them and makes the necessary updates to the resources in the Kubernetes cluster.
The MongoDBOpsManager custom resource specification defines the following Ops Manager components:
  • the Application Database,
  • the Ops Manager application, and
  • the Backup Daemon.
When you use the Kubernetes Operator to create an instance of Ops Manager, the Ops Manager MongoDB Application Database will be deployed as a replica set. It's not possible to configure the Application Database as a standalone database or a sharded cluster.
The Kubernetes Operator automatically sets up Ops Manager to monitor the Application Database that powers the Ops Manager Application. It creates a project named  <ops-manager-deployment-name>-db to allow you to monitor the Application Database deployment. While Ops Manager monitors the Application Database deployment, it does not manage it.
When you deploy Ops Manager, you need to configure it. This typically involves using the configuration wizard. However, you can bypass the configuration wizard if you set certain essential settings in your object specification before deployment. I will demonstrate that in this post.
The Operator automatically enables backup. It deploys a StatefulSet, which consists of a single pod, to host the Backup Daemon Service and creates a Persistent Volume Claim and Persistent Volume for the Backup Daemon's head database. The operator uses the Ops Manager API to enable the Backup Daemon and configure the head database.
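Once an Ops Manager instance is running, the Backup Daemon resources created this way can be inspected directly. A minimal sketch, assuming the default naming convention for an Ops Manager object called ops-manager (the exact resource names may differ on your cluster):

```shell
# List the StatefulSets the operator created; the Backup Daemon
# runs in its own single-pod StatefulSet alongside ops-manager.
kubectl -n "${NAMESPACE}" get sts

# The Backup Daemon's head database is backed by its own PVC,
# created automatically by the operator.
kubectl -n "${NAMESPACE}" get pvc | grep backup
```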

Getting started

Alright, let's get started using the operator and build something! For this tutorial, we will need the following tools: 
To get started, we should first create a Kubernetes cluster and then install the MongoDB Kubernetes Operator on the cluster. Part 1 of this series provides instructions on how to do so.
Note For the sake of simplicity, we are deploying Ops Manager in the same namespace as our MongoDB Operator. In a production environment, you should deploy Ops Manager in its own namespace.
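In a production setup, that separation might look like the following sketch. The ops-manager namespace name is an example, and the operator must additionally be configured to watch that namespace (for instance, through its Helm chart values), which is not shown here:

```shell
# Create a dedicated namespace for Ops Manager (example name).
kubectl create namespace ops-manager

# Point the rest of the commands in this tutorial at it.
NAMESPACE=ops-manager
```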

Environment pre-checks 

Upon successful creation of a cluster and installation of the operator (described in Part 1), it's essential to validate their readiness for use.
gcloud container clusters list

NAME                LOCATION       MASTER_VERSION        NUM_NODES  STATUS
master-operator     us-south1-a    1.23.14-gke.1800      4          RUNNING
Display the full name of our new Kubernetes cluster using [kubectx](https://github.com/ahmetb/kubectx).
kubectx
You should see your cluster listed here. Make sure your context is set to the master cluster.
kubectx $(kubectx | grep "master-operator" | awk '{print $1}')
To continue with this tutorial, make sure that the operator is in the running state.
kubectl get po -n "${NAMESPACE}"

NAME                                      READY   STATUS    RESTARTS   AGE
mongodb-enterprise-operator-649bbdddf5    1/1     Running   0          7m9s

Using the MongoDBOpsManager CRD

Create a secret containing the username and password on the master Kubernetes cluster for accessing the Ops Manager user interface after installation.
kubectl -n "${NAMESPACE}" create secret generic om-admin-secret \
  --from-literal=Username="opsmanager@example.com" \
  --from-literal=Password="p@ssword123" \
  --from-literal=FirstName="Ops" \
  --from-literal=LastName="Manager"
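Before moving on, you can verify that the secret landed correctly by decoding one of its keys. A quick sketch (note that base64 -d is the GNU coreutils spelling; some macOS versions use -D):

```shell
# Decode the stored admin username from the om-admin-secret.
kubectl -n "${NAMESPACE}" get secret om-admin-secret \
  -o jsonpath='{.data.Username}' | base64 -d
```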

Deploying Ops Manager 

Then, we can deploy Ops Manager on the master Kubernetes cluster by creating a MongoDBOpsManager object with the opsmanagers custom resource, using the following manifest:
OM_VERSION=6.0.5
APPDB_VERSION=5.0.5-ent
kubectl apply -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
  namespace: "${NAMESPACE}"
spec:
  version: "${OM_VERSION}"
  # the name of the secret containing admin user credentials.
  adminCredentials: om-admin-secret
  externalConnectivity:
    type: LoadBalancer
  configuration:
    mms.ignoreInitialUiSetup: "true"
    automation.versions.source: mongodb
    mms.adminEmailAddr: support@example.com
    mms.fromEmailAddr: support@example.com
    mms.replyToEmailAddr: support@example.com
    mms.mail.hostname: example.com
    mms.mail.port: "465"
    mms.mail.ssl: "false"
    mms.mail.transport: smtp
  # the Replica Set backing Ops Manager.
  applicationDatabase:
    members: 3
    version: "${APPDB_VERSION}"
EOF
After a few minutes, we should see our Ops Manager and Ops Manager MongoDB application database pods running.
kubectl -n "${NAMESPACE}" get pods

NAME                READY   STATUS    RESTARTS
ops-manager-0       1/1     Running   0
ops-manager-db-0    3/3     Running   0
ops-manager-db-1    3/3     Running   0
ops-manager-db-2    3/3     Running   0
The operator has also orchestrated the creation of storage. Persistent Volume Claims were created and can be displayed via:
kubectl -n "${NAMESPACE}" get pvc

NAME                    STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGE CLASS   AGE
data-ops-manager-db-0   Bound    pvc-d5a1b385-6d1b   15Gi       RWO            standard        59m
data-ops-manager-db-1   Bound    pvc-db0e89dc-d73a   15Gi       RWO            standard        58m
data-ops-manager-db-2   Bound    pvc-c7e124a2-917    15Gi       RWO            standard        57m
Kubernetes StatefulSets have also been created by the operator.
kubectl -n "${NAMESPACE}" get sts

NAME             READY   AGE
ops-manager      1/1     29m31s
ops-manager-db   3/3     30m29s
The operator has created all required network services, including a LoadBalancer-type service for Ops Manager access, so we now have an external IP address and can log in to Ops Manager from outside the Kubernetes cluster. You can verify the IPs by viewing the services.
kubectl -n "${NAMESPACE}" get svc

NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)
ops-manager-db-svc    ClusterIP      None           <none>          27017/TCP
ops-manager-svc       ClusterIP      None           <none>          8080/TCP,25999/TCP
ops-manager-svc-ext   LoadBalancer   10.76.10.231   34.174.54.103   8080:32078/TCP,25999:31961/TCP
The following diagram describes how the Kubernetes Operator reconciles changes to the MongoDBOpsManager CustomResourceDefinition.
To generate the Ops Manager URL, execute:
URL=http://$(kubectl -n "${NAMESPACE}" get svc ops-manager-svc-ext -o jsonpath='{.status.loadBalancer.ingress[0].ip}:{.spec.ports[0].port}')
echo $URL
The URL will look similar to the example below, but the IP address may vary depending on your Kubernetes cluster.
http://34.174.54.105:8080
The final step is to update the Ops Manager Kubernetes manifest to include the external IP address created by the Load Balancer in spec.configuration.mms.centralUrl, via kubectl patch.
kubectl -n "${NAMESPACE}" patch om ops-manager --type=merge -p "{\"spec\":{\"configuration\":{\"mms.centralUrl\":\"${URL}\"}}}"
We should wait a few minutes. The Ops Manager pod must be restarted, so wait until the ops-manager-0 pod is in the running state again.
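Rather than polling the pod list manually, kubectl wait can block until the pod reports ready again. A sketch, with a generous timeout since the Ops Manager application takes a while to start:

```shell
# Block until ops-manager-0 is Ready again after the restart.
kubectl -n "${NAMESPACE}" wait --for=condition=Ready \
  pod/ops-manager-0 --timeout=900s
```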
Using the username and password stored in the om-admin-secret (opsmanager@example.com : p@ssword123), we can log in to the Ops Manager User Interface using the address in the $URL variable.
Enter your username and password and click Login
The Kubernetes Operator has created the ops-manager-db organization and the ops-manager-db project in Ops Manager.
Select Ops Manager organization
If we click on the project ops-manager-db, we will be redirected to the panel where we can see the database pods of the Ops Manager application. Ops Manager monitors this database.
Manage your MongoDB cluster from Ops Manager

Basic troubleshooting

If you run into issues during the installation, here are some clues to help you investigate what went wrong.
If you want to display the Ops Manager cluster state, use this command. It shows a general overview of the om object, including the internal database and Backup Daemon state.
kubectl -n "${NAMESPACE}" get om

NAME          REPLICAS   VERSION   STATE (OPSMANAGER)   STATE (APPDB)   STATE (BACKUP)   AGE
ops-manager              6.0.5     Running              Running         Pending          18m
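For scripting, the same component states can be extracted with jsonpath. The status field names below are assumptions inferred from the table columns above; confirm them against kubectl get om ops-manager -o yaml on your own cluster:

```shell
# Print the phase of each Ops Manager component on its own line.
kubectl -n "${NAMESPACE}" get om ops-manager -o jsonpath=\
'{.status.opsManager.phase}{"\n"}{.status.applicationDatabase.phase}{"\n"}{.status.backup.phase}{"\n"}'
```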
You can use describe to get a detailed overview.
kubectl -n "${NAMESPACE}" describe om ops-manager
It's always a good idea to check the Ops Manager logs:
kubectl -n "${NAMESPACE}" logs -f po/ops-manager-0
or current events from the namespace:
kubectl -n "${NAMESPACE}" get events

Summary

We have just installed Ops Manager on our Kubernetes cluster. This gives us many benefits. The operator has properly installed and configured the Ops Manager instance with the internal database, taken care of the storage, and created network services, including a Load Balancer. We can now use Ops Manager to easily create any kind of MongoDB database on the Kubernetes cluster following best practices, monitor our database instances, introduce query optimizations with the help of the Ops Manager performance advisor, and provide backup, restore, or rolling upgrades through automation.
In the next part, I will demonstrate how to run the latest type of MongoDB Kubernetes Custom Resource, a Multi Cluster Replica Set. This replica set will be deployed across three separate Kubernetes clusters located in different regions, providing the ideal solution for critical applications that require continuous availability, even in the event of a Kubernetes cluster failure. Want to get started with MongoDB Ops Manager on your own Kubernetes cluster? Install it now and see for yourself how it can simplify your operations. Make sure to visit the MongoDB community forum for the latest discussions and resources.
