
Deploy an Ops Manager Resource


You can deploy Ops Manager as a resource in a Kubernetes cluster using the Kubernetes Operator.

The following considerations apply:

When you configure your Ops Manager deployment, you must choose whether to run connections over HTTPS or HTTP.

The following HTTPS procedure:

  • Establishes TLS-encrypted connections to/from the Ops Manager application.

  • Establishes TLS-encrypted connections between the application database's replica set members.

  • Requires valid certificates for TLS encryption.

The following HTTP procedure:

  • Doesn't encrypt connections to or from the Ops Manager application.

  • Doesn't encrypt connections between the application database's replica set members.

  • Has fewer setup requirements.

When running over HTTPS, Ops Manager runs on port 8443 by default.

Select the appropriate tab based on whether you want to encrypt your Ops Manager and application database connections with TLS.

  • Complete the Prerequisites.

  • Read the Considerations.

  • Create one TLS certificate for the Application Database's replica set.

    This TLS certificate requires the following attributes:

    DNS Names

    Ensure that you add SANs or Subject Names for each Pod that hosts a member of the Application Database replica set. The SAN for each pod must use the following format:

    <opsmgr-metadata.name>-db-<index>.<opsmgr-metadata.name>-db-svc.<namespace>.svc.cluster.local
    Key Usages

    Ensure that the TLS certificates include the following key usages (RFC 5280):

    • "server auth"

    • "client auth"
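As an illustration only (not part of the official procedure), the following commands generate a self-signed certificate that satisfies the SAN and key-usage requirements above. The resource name (om), namespace (mongodb), and member count (3) are assumptions; in production you would submit a CSR to your CA instead of self-signing. Requires OpenSSL 1.1.1 or later for -addext.

```shell
# Hypothetical values: Ops Manager resource named "om", namespace "mongodb",
# three application database members (indexes 0-2).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout appdb-tls.key -out appdb-tls.crt \
  -subj "/CN=om-db" \
  -addext "subjectAltName=DNS:om-db-0.om-db-svc.mongodb.svc.cluster.local,DNS:om-db-1.om-db-svc.mongodb.svc.cluster.local,DNS:om-db-2.om-db-svc.mongodb.svc.cluster.local" \
  -addext "keyUsage=digitalSignature,keyEncipherment" \
  -addext "extendedKeyUsage=serverAuth,clientAuth"

# Inspect the result: the SAN and extended key usage sections should list
# every Pod's DNS name plus server auth and client auth.
openssl x509 -in appdb-tls.crt -noout -text | grep -A1 "Subject Alternative Name"
```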

Important

The Kubernetes Operator uses kubernetes.io/tls secrets to store TLS certificates and private keys for Ops Manager and MongoDB resources. Starting in Kubernetes Operator version 1.17.0, the Kubernetes Operator doesn't support concatenated PEM files stored as Opaque secrets.

Before you deploy an Ops Manager resource, make sure you plan for your Ops Manager resource:

This procedure applies to deploying an Ops Manager instance in a single Kubernetes cluster, and to deploying Ops Manager on an operator cluster in a multi-cluster deployment. If you want to deploy multiple instances of Ops Manager on multiple Kubernetes clusters, see Deploy Ops Manager Resources on Multiple Kubernetes Clusters.

Follow these steps to deploy the Ops Manager resource to run over HTTPS and secure the application database using TLS.

1

If you have not already, run the following command to execute all kubectl commands in the namespace you created.

Note

If you are deploying an Ops Manager resource in a multi-Kubernetes cluster MongoDB deployment:

  • Set the context to the name of the central cluster, such as: kubectl config set-context "$MDB_CENTRAL_CLUSTER_FULL_NAME".

  • Set the --namespace to the same scope that you used for your multi-Kubernetes cluster MongoDB deployment, such as: kubectl config set-context --current --namespace "mongodb".

kubectl config set-context $(kubectl config current-context) --namespace=<metadata.namespace>
2

If you're using HashiCorp Vault as your secret storage tool, you can Create a Vault Secret instead.

To learn about your options for secret storage, see Configure Secret Storage.

  1. Once you have your TLS certificates and private keys, run the following command to create a secret that stores Ops Manager's TLS certificate:

    kubectl create secret tls <prefix>-<metadata.name>-cert \
    --cert=<om-tls-cert> \
    --key=<om-tls-key>
  2. Run the following command to create a new secret that stores the application database's TLS certificate:

    kubectl create secret tls <prefix>-<metadata.name>-db-cert \
    --cert=<appdb-tls-cert> \
    --key=<appdb-tls-key>
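For reference, each kubectl create secret tls command above produces a kubernetes.io/tls secret shaped like the following sketch (the name shown assumes a hypothetical prefix om-prod and resource name om):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: om-prod-om-cert        # <prefix>-<metadata.name>-cert
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```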
3

If your Ops Manager TLS certificate is signed by a custom CA, the CA certificate file must also include the additional certificates that allow the Ops Manager Backup Daemon to download MongoDB binaries from the internet. Create a ConfigMap to hold the CA certificate:

Important

The Kubernetes Operator requires that your Ops Manager certificate is named mms-ca.crt in the ConfigMap.

  1. Obtain the entire TLS certificate chain for downloads.mongodb.com. The following openssl command writes each certificate in the chain to your current working directory, in .crt format:

    openssl s_client -showcerts -verify 2 \
    -connect downloads.mongodb.com:443 -servername downloads.mongodb.com < /dev/null \
    | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="cert"a".crt"; print >out}'
  2. Concatenate your CA's certificate file for Ops Manager with the entire TLS certificate chain from downloads.mongodb.com that you obtained in the previous step:

    cat cert2.crt cert3.crt cert4.crt >> mms-ca.crt
  3. Create the ConfigMap for Ops Manager:

    kubectl create configmap om-http-cert-ca --from-file="mms-ca.crt"
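The awk splitter from step 1 can be sanity-checked offline (illustrative only): feed it fake PEM blocks and confirm that it writes one cert<N>.crt file per certificate.

```shell
# Two fake PEM blocks stand in for the real downloads.mongodb.com chain.
printf '%s\n' \
  '-----BEGIN CERTIFICATE-----' 'AAA' '-----END CERTIFICATE-----' \
  '-----BEGIN CERTIFICATE-----' 'BBB' '-----END CERTIFICATE-----' |
awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="cert"a".crt"; print >out}'

# One file per certificate block: cert1.crt holds the first, cert2.crt the second.
ls cert1.crt cert2.crt
```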
4

Change the settings to match your Ops Manager and application database configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the Kubernetes secret
                                           # for the admin user

  externalConnectivity:
    type: LoadBalancer
  security:
    certsSecretPrefix: <prefix> # Required. Text to prefix
                                # the name of the secret that contains
                                # Ops Manager's TLS certificate.
    tls:
      ca: "om-http-cert-ca" # Optional. Name of the ConfigMap file
                            # containing the certificate authority that
                            # signs the certificates used by the Ops
                            # Manager custom resource.

  applicationDatabase:
    topology: SingleCluster
    members: 3
    version: "6.0.0-ubi8"
    security:
      certsSecretPrefix: <prefix> # Required. Text to prefix to the
                                  # name of the secret that contains the
                                  # Application Database's TLS certificate.
                                  # Name the secret
                                  # <prefix>-<metadata.name>-db-cert.
      tls:
        ca: "appdb-ca" # Optional. Name of the ConfigMap file
                       # containing the certificate authority that
                       # signs the certificates used by the
                       # application database.
...
---
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the Kubernetes secret
                                           # for the admin user

  externalConnectivity:
    type: LoadBalancer
  security:
    certsSecretPrefix: <prefix> # Required. Text to prefix
                                # the name of the secret that contains
                                # Ops Manager's TLS certificate.
    tls:
      ca: "om-http-cert-ca" # Optional. Name of the ConfigMap file
                            # containing the certificate authority that
                            # signs the certificates used by the Ops
                            # Manager custom resource.

  applicationDatabase:
    topology: MultiCluster
    clusterSpecList:
      - clusterName: cluster1.example.com
        members: 4
      - clusterName: cluster2.example.com
        members: 3
      - clusterName: cluster3.example.com
        members: 2
    version: "6.0.0-ubi8"
    security:
      certsSecretPrefix: <prefix> # Required. Text to prefix to the
                                  # name of the secret that contains the
                                  # Application Database's TLS certificate.
                                  # Name the secret
                                  # <prefix>-<metadata.name>-db-cert.
      tls:
        ca: "appdb-ca" # Optional. Name of the ConfigMap file
                       # containing the certificate authority that
                       # signs the certificates used by the
                       # application database.
...
5
6
Key: metadata.name
Type: string
Description: Name for this Kubernetes Ops Manager object. Resource names must be 44 characters or less. See also metadata.name and the Kubernetes documentation on names.
Example: om

Key: spec.replicas
Type: number
Description: Number of Ops Manager instances to run in parallel. The minimum valid value is 1. This field is ignored if you specify MultiCluster in the spec.topology setting.
Example: 1

Key: spec.version
Type: string
Description: Version of Ops Manager to be installed. The format should be X.Y.Z. To view available Ops Manager versions, view the container registry.
Example: 6.0.0

Key: spec.adminCredentials
Type: string
Description: Name of the secret you created for the Ops Manager admin user. Configure the secret to use the same namespace as the Ops Manager resource.
Example: om-admin-secret

Key: spec.security.certsSecretPrefix
Type: string
Description: Required. Text to prefix to the name of the secret that contains Ops Manager's TLS certificates.
Example: om-prod

Key: spec.security.tls.ca
Type: string
Description: Name of the ConfigMap you created to verify your Ops Manager TLS certificates signed using a custom CA. This field is required if you signed your Ops Manager TLS certificates using a custom CA.
Example: om-http-cert-ca

Key: spec.externalConnectivity.type
Type: string
Description: The Kubernetes ServiceType that exposes Ops Manager outside of Kubernetes. Exclude the spec.externalConnectivity setting and its children if you don't want the Kubernetes Operator to create a Kubernetes service to route external traffic to the Ops Manager application.
Example: LoadBalancer

Key: spec.applicationDatabase.members
Type: integer
Description: Number of members of the Ops Manager Application Database replica set.
Example: 3

Key: spec.applicationDatabase.version
Type: string
Description: Required. Version of MongoDB that the Ops Manager Application Database should run. The format should be X.Y.Z-ubi8 for the Enterprise edition and X.Y.Z for the Community edition. Do not add the -ubi8 tag suffix to the Community edition image because the Kubernetes Operator adds the tag suffix automatically.

Important: Ensure that you choose a compatible MongoDB Server version. Compatible versions differ depending on the base image that the MongoDB database resource uses. To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual. For best results, use the latest available enterprise MongoDB version that is compatible with your Ops Manager version.

Key: spec.applicationDatabase.topology
Type: string
Description: Optional. The type of Kubernetes deployment for the Application Database. If omitted, the default is SingleCluster. If you specify MultiCluster, the Kubernetes Operator ignores any value that you set for the spec.applicationDatabase.members field. Instead, you must specify the clusterSpecList and include in it the clusterName of each selected Kubernetes member cluster on which you want to deploy the Application Database, and the number of members (MongoDB nodes) in each Kubernetes cluster. You can't convert a single-cluster Ops Manager instance to a multi-Kubernetes-cluster MongoDB deployment instance by modifying the topology and the clusterSpecList settings in the CRD. See also the example of the resource specification.
Example: MultiCluster

Key: spec.applicationDatabase.security.certsSecretPrefix
Type: string
Description: Required. Text to prefix to the name of the secret that contains the application database's TLS certificates.
Example: appdb-prod

Key: spec.applicationDatabase.security.tls.ca
Type: string
Description: Name of the ConfigMap you created to verify your application database TLS certificates signed using a custom CA. This field is required if you signed your application database TLS certificates using a custom CA. The Kubernetes Operator mounts the CA you add using the spec.applicationDatabase.security.tls.ca setting to both the Ops Manager and the Application Database Pods.
Example: appdb-ca
7

To configure backup, you must enable it, and then:

  • Choose to configure an S3 snapshot store or a blockstore. If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for backup.

  • Choose to configure an oplog store or an S3 oplog store. If you deploy both an oplog store and an S3 oplog store, Ops Manager randomly chooses one of them to use for the oplog backup.

Key: spec.backup.enabled
Type: boolean
Description: Flag that indicates that Backup is enabled. You must specify spec.backup.enabled: true to configure settings for the head database, oplog store, and snapshot store.
Example: true

Key: spec.backup.headDB
Type: collection
Description: A collection of configuration settings for the head database. For descriptions of the individual settings in the collection, see spec.backup.headDB.

Key: spec.backup.opLogStores.name
Type: string
Description: Name of the oplog store.
Example: oplog1

Key: spec.backup.s3OpLogStores.name
Type: string
Description: Name of the S3 oplog store.
Example: my-s3-oplog-store

Key: spec.backup.opLogStores.mongodbResourceRef.name
Type: string
Description: Name of the MongoDB resource or MongoDBMultiCluster resource for the oplog store. The resource's metadata.name must match this name.
Example: my-oplog-db

Key: spec.backup.s3OpLogStores.mongodbResourceRef.name
Type: string
Description: Name of the MongoDB resource or MongoDBMultiCluster resource for the S3 oplog store. The resource's metadata.name must match this name.
Example: my-s3-oplog-db

You must also configure an S3 snapshot store or a blockstore.

If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for Backup.

To configure a snapshot store, configure the following settings:

Key: spec.backup.s3Stores.name
Type: string
Description: Name of the S3 snapshot store.
Example: s3store1

Key: spec.backup.s3Stores.s3SecretRef.name
Type: string
Description: Name of the secret that contains the accessKey and secretKey fields. The Backup Daemon Service uses the values of these fields as credentials to access the S3 or S3-compatible bucket.
Example: my-s3-credentials

Key: spec.backup.s3Stores.s3BucketEndpoint
Type: string
Description: URL of the S3 or S3-compatible bucket that stores the database Backup snapshots.
Example: s3.us-east-1.amazonaws.com

Key: spec.backup.s3Stores.s3BucketName
Type: string
Description: Name of the S3 or S3-compatible bucket that stores the database Backup snapshots.
Example: my-bucket

To configure a blockstore, configure the following settings:

Key: spec.backup.blockStores.name
Type: string
Description: Name of the blockstore.
Example: blockStore1

Key: spec.backup.blockStores.mongodbResourceRef.name
Type: string
Description: Name of the MongoDB resource that you create for the blockstore. You must deploy this database resource in the same namespace as the Ops Manager resource.
Example: my-mongodb-blockstore
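Putting the settings above together, a spec.backup section that combines an oplog store and an S3 snapshot store might look like the following sketch. All names are hypothetical and only the fields described in the tables are shown:

```yaml
spec:
  backup:
    enabled: true
    opLogStores:
      - name: oplog1
        mongodbResourceRef:
          name: my-oplog-db          # metadata.name of the oplog MongoDB resource
    s3Stores:
      - name: s3store1
        s3SecretRef:
          name: my-s3-credentials    # secret holding accessKey and secretKey
        s3BucketEndpoint: s3.us-east-1.amazonaws.com
        s3BucketName: my-bucket
```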
8

Add any optional settings for backups that you want to apply to your deployment to the object specification file. For example, for each type of backup store, and for Ops Manager backup daemon processes, you can assign labels to associate particular backup stores or backup daemon processes with specific projects. Use the spec.backup.[*].assignmentLabels elements of the OpsManager resource.

9

Add any optional settings that you want to apply to your deployment to the object specification file.

10
11

Run the following kubectl command on the filename of the Ops Manager resource definition:

kubectl apply -f <opsmgr-resource>.yaml

Note

If you are deploying an Ops Manager resource on a multi-Kubernetes cluster MongoDB deployment, run:

kubectl apply \
  --context "$MDB_CENTRAL_CLUSTER_FULL_NAME" \
  --namespace "mongodb" \
  -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/samples/ops-manager/ops-manager-external.yaml
12

To check the status of your Ops Manager resource, invoke the following command:

kubectl get om -o yaml -w

The command returns output similar to the following under the status field while the resource deploys:

status:
  applicationDatabase:
    lastTransition: "2022-04-01T09:49:22Z"
    message: AppDB Statefulset is not ready yet
    phase: Pending
    type: ""
    version: ""
  backup:
    phase: ""
  opsManager:
    phase: ""

The Kubernetes Operator reconciles the resources in the following order:

  1. Application Database.

  2. Ops Manager.

  3. Backup.

The Kubernetes Operator doesn't reconcile a resource until the preceding one enters the Running phase.

After the Ops Manager resource completes the Pending phase, the command returns output similar to the following under the status field if you enabled Backup:

status:
  applicationDatabase:
    lastTransition: "2022-04-01T09:50:20Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "6.0.5-ubi8"
  backup:
    lastTransition: "2022-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename> doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2023-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: https://om-svc.cloudqa.svc.cluster.local:8443
    version: "6.0.17"

Backup remains in a Pending state until you configure the Backup databases.

Tip

The status.opsManager.url field states the resource's connection URL. Using this URL, you can reach Ops Manager from inside the Kubernetes cluster or create a project using a ConfigMap.

After the resource completes the Pending phase, the command returns output similar to the following under the status field:

status:
  applicationDatabase:
    lastTransition: "2022-12-06T18:23:22Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "6.0.5-ubi8"
  opsManager:
    lastTransition: "2022-12-06T18:23:26Z"
    message: The MongoDB object namespace/oplogdbname doesn't exist
    phase: Pending
    url: https://om-svc.dev.svc.cluster.local:8443
    version: ""

Backup remains in a Pending state until you configure the Backup databases.

Tip

The status.opsManager.url field states the resource's connection URL. Using this URL, you can reach Ops Manager from inside the Kubernetes cluster or create a project using a ConfigMap.

13

The steps you take differ based on how you are routing traffic to the Ops Manager application in Kubernetes. If you configured the Kubernetes Operator to create a Kubernetes service for you, or you created a Kubernetes service manually, use one of the following methods to access the Ops Manager application:

  1. Query your cloud provider to get the FQDN of the load balancer service. See your cloud provider's documentation for details.

  2. Open a browser window and navigate to the Ops Manager application using the FQDN and port number of your load balancer service.

    https://ops.example.com:8443
  3. Log in to Ops Manager using the admin user credentials.

  1. Set your firewall rules to allow access from the Internet to the spec.externalConnectivity.port on the host on which your Kubernetes cluster is running.

  2. Open a browser window and navigate to the Ops Manager application using the FQDN and the spec.externalConnectivity.port.

    https://ops.example.com:30036
  3. Log in to Ops Manager using the admin user credentials.

To learn how to access the Ops Manager application using a third-party service, refer to the documentation for your solution.

14

To configure credentials, you must create an Ops Manager organization, generate programmatic API keys, and create a secret. These activities follow the prerequisites and procedure on the Create Credentials for the Kubernetes Operator page.

15

To create a project, follow the prerequisites and procedure on the Create One Project using a ConfigMap page.

Set the following fields in your project ConfigMap:

  • Set data.baseUrl in the ConfigMap to the Ops Manager Application's URL. To find this URL, invoke the following command:

    kubectl get om -o yaml -w

    The command returns the URL of the Ops Manager Application in the status.opsManager.url field, similar to the following example:

    status:
      applicationDatabase:
        lastTransition: "2022-12-06T18:23:22Z"
        members: 3
        phase: Running
        type: ReplicaSet
        version: "6.0.5-ubi8"
      opsManager:
        lastTransition: "2022-12-06T18:23:26Z"
        message: The MongoDB object namespace/oplogdbname doesn't exist
        phase: Pending
        url: https://om-svc.dev.svc.cluster.local:8443
        version: ""

    Important

    If you deploy Ops Manager with the Kubernetes Operator and Ops Manager will manage MongoDB database resources deployed outside of the Kubernetes cluster it's deployed to, you must set data.baseUrl to the same value as the spec.configuration.mms.centralUrl setting in the Ops Manager resource specification.

    To learn more, see Managing External MongoDB Deployments.

  • Set data.sslMMSCAConfigMap to the name of your ConfigMap containing the root CA certificate used to sign the Ops Manager host's certificate. The Kubernetes Operator requires that you name this Ops Manager resource's certificate mms-ca.crt in the ConfigMap.
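A project ConfigMap with both fields set might look like the following sketch. The project name, namespace, and organization ID are placeholders; see the Create One Project using a ConfigMap page for the full set of fields:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-project
  namespace: mongodb
data:
  baseUrl: https://om-svc.dev.svc.cluster.local:8443   # from status.opsManager.url
  projectName: my-project                              # optional; defaults to the ConfigMap name
  orgId: <orgid>                                       # optional
  sslMMSCAConfigMap: om-http-cert-ca                   # ConfigMap holding mms-ca.crt
```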

16

By default, Ops Manager enables Backup. Create a MongoDB database resource for the oplog and snapshot stores to complete the configuration.

  1. Deploy a MongoDB database resource for the oplog store in the same namespace as the Ops Manager resource.

    Note

    Create this database as a three-member replica set.

    Match the metadata.name of the resource with the spec.backup.opLogStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.

  2. Deploy a MongoDB database resource for the S3 snapshot store in the same namespace as the Ops Manager resource.

    Note

    Create the S3 snapshot store as a replica set.

    Match the metadata.name of the resource to the spec.backup.s3Stores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.
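As an illustration, a minimal oplog-store database resource matching the names used earlier could be sketched as follows. It is a three-member replica set, as the note requires; the project ConfigMap and credentials secret names are placeholders from the preceding steps:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-oplog-db        # must match spec.backup.opLogStores.mongodbResourceRef.name
  namespace: mongodb       # same namespace as the Ops Manager resource
spec:
  type: ReplicaSet
  members: 3
  version: "6.0.5-ubi8"
  opsManager:
    configMapRef:
      name: my-project     # project ConfigMap created in the previous step
  credentials: organization-secret   # secret holding the programmatic API keys
```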

17

To check the status of your Ops Manager resource, invoke the following command:

kubectl get om -o yaml -w

When Ops Manager is running, the command returns the output similar to the following, under the status field:

status:
  applicationDatabase:
    lastTransition: "2022-12-06T17:46:15Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "6.0.5-ubi8"
  opsManager:
    lastTransition: "2022-12-06T17:46:32Z"
    phase: Running
    replicas: 1
    url: https://om-backup-svc.dev.svc.cluster.local:8443
    version: "6.0.17"

See Troubleshoot the Kubernetes Operator for information about the resource deployment statuses.

Follow these steps to deploy the Ops Manager resource to run over HTTP:

1

If you have not already, run the following command to execute all kubectl commands in the namespace you created.

Note

If you are deploying an Ops Manager resource in a multi-Kubernetes cluster MongoDB deployment:

  • Set the context to the name of the central cluster, such as: kubectl config set-context "$MDB_CENTRAL_CLUSTER_FULL_NAME".

  • Set the --namespace to the same scope that you used for your multi-Kubernetes cluster MongoDB deployment, such as: kubectl config set-context --current --namespace "mongodb".

kubectl config set-context $(kubectl config current-context) --namespace=<metadata.namespace>
2

Change the settings to match your Ops Manager configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the secret
                                           # for the admin user
  externalConnectivity:
    type: LoadBalancer

  applicationDatabase:
    topology: SingleCluster
    members: 3
    version: <mongodbversion>
...
---
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the Kubernetes secret
                                           # for the admin user

  externalConnectivity:
    type: LoadBalancer

  applicationDatabase:
    topology: MultiCluster
    clusterSpecList:
      - clusterName: cluster1.example.com
        members: 4
      - clusterName: cluster2.example.com
        members: 3
      - clusterName: cluster3.example.com
        members: 2
    version: "6.0.5-ubi8"
...
3
4
Key: metadata.name
Type: string
Description: Name for this Kubernetes Ops Manager object. Resource names must be 44 characters or less. To learn more, see metadata.name and the Kubernetes documentation on names.
Example: om

Key: spec.replicas
Type: number
Description: Number of Ops Manager instances to run in parallel. The minimum valid value is 1. This field is ignored if you specify MultiCluster in the spec.topology setting.
Example: 1

Key: spec.version
Type: string
Description: Version of Ops Manager to be installed. The format should be X.Y.Z. For the list of available Ops Manager versions, view the container registry.
Example: 6.0.0

Key: spec.adminCredentials
Type: string
Description: Name of the secret you created for the Ops Manager admin user. Configure the secret to use the same namespace as the Ops Manager resource.
Example: om-admin-secret

Key: spec.externalConnectivity.type
Type: string
Description: Optional. The Kubernetes ServiceType that exposes Ops Manager outside of Kubernetes. Exclude the spec.externalConnectivity setting and its children if you don't want the Kubernetes Operator to create a Kubernetes service to route external traffic to the Ops Manager application.
Example: LoadBalancer

Key: spec.applicationDatabase.members
Type: integer
Description: Number of members of the Ops Manager Application Database replica set.
Example: 3

Key: spec.applicationDatabase.version
Type: string
Description: Required. Version of MongoDB that the Ops Manager Application Database should run. The format should be X.Y.Z for the Community edition and X.Y.Z-ubi8 for the Enterprise edition.

Important: Ensure that you choose a compatible MongoDB Server version. Compatible versions differ depending on the base image that the MongoDB database resource uses. To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual. For best results, use the latest available enterprise MongoDB version that is compatible with your Ops Manager version.

Key: spec.applicationDatabase.topology
Type: string
Description: Optional. The type of Kubernetes deployment for the Application Database. If omitted, the default is SingleCluster. If you specify MultiCluster, the Kubernetes Operator ignores any value that you set for the spec.applicationDatabase.members field. Instead, you must specify the clusterSpecList and include in it the clusterName of each selected Kubernetes member cluster on which you want to deploy the Application Database, and the number of members (MongoDB nodes) in each Kubernetes cluster. You can't convert a single-cluster Ops Manager instance to a multi-Kubernetes-cluster MongoDB deployment instance by modifying the topology and the clusterSpecList settings in the CRD. See also the example of the resource specification.
Example: MultiCluster
5

To configure backup, you must enable it, and then:

  • Choose to configure an S3 snapshot store or a blockstore. If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for backup.

  • Choose to configure an oplog store or an S3 oplog store. If you deploy both an oplog store and an S3 oplog store, Ops Manager randomly chooses one of them to use for the oplog backup.

Key: spec.backup.enabled
Type: boolean
Description: Flag that indicates that backup is enabled. You must specify spec.backup.enabled: true to configure settings for the head database, oplog store, S3 oplog store, and snapshot store.
Example: true

Key: spec.backup.headDB
Type: collection
Description: A collection of configuration settings for the head database. For descriptions of the individual settings in the collection, see spec.backup.headDB.

Key: spec.backup.opLogStores.name
Type: string
Description: Name of the oplog store.
Example: oplog1

Key: spec.backup.s3OpLogStores.name
Type: string
Description: Name of the S3 oplog store.
Example: my-s3-oplog-store

Key: spec.backup.opLogStores.mongodbResourceRef.name
Type: string
Description: Name of the MongoDB resource or MongoDBMultiCluster resource for the oplog store. The resource's metadata.name must match this name.
Example: my-oplog-db

Key: spec.backup.s3OpLogStores.mongodbResourceRef.name
Type: string
Description: Name of the MongoDB database resource for the S3 oplog store.
Example: my-s3-oplog-db

You must also configure an S3 snapshot store or a blockstore. If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for backup.

To configure an S3 snapshot store, configure the following settings:

Key: spec.backup.s3Stores.name
Type: string
Description: Name of the S3 snapshot store.
Example: s3store1

Key: spec.backup.s3Stores.s3SecretRef.name
Type: string
Description: Name of the secret that contains the accessKey and secretKey fields. The Backup Daemon Service uses the values of these fields as credentials to access the S3 or S3-compatible bucket.
Example: my-s3-credentials

Key: spec.backup.s3Stores.s3BucketEndpoint
Type: string
Description: URL of the S3 or S3-compatible bucket that stores the database backup snapshots.
Example: s3.us-east-1.amazonaws.com

Key: spec.backup.s3Stores.s3BucketName
Type: string
Description: Name of the S3 or S3-compatible bucket that stores the database backup snapshots.
Example: my-bucket

Key: spec.backup.s3Stores.s3RegionOverride
Type: string
Description: Region where your S3-compatible bucket resides. Use this field only if your S3 store's s3BucketEndpoint doesn't include a region in its URL. Don't use this field with AWS S3 buckets.
Example: us-east-1

To configure a blockstore, configure the following settings:

Key: spec.backup.blockStores.name
Type: string
Description: Name of the blockstore.
Example: blockStore1

Key: spec.backup.blockStores.mongodbResourceRef.name
Type: string
Description: Name of the MongoDB resource that you create for the blockstore. You must deploy this database resource in the same namespace as the Ops Manager resource. The resource's metadata.name must match this name.
Example: my-mongodb-blockstore
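For example, a blockstore database resource whose metadata.name matches the mongodbResourceRef above might be sketched as follows. All names are placeholders; the project ConfigMap and credentials secret are created in later steps:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-mongodb-blockstore   # must match spec.backup.blockStores.mongodbResourceRef.name
  namespace: mongodb            # same namespace as the Ops Manager resource
spec:
  type: ReplicaSet
  members: 3
  version: "6.0.5-ubi8"
  opsManager:
    configMapRef:
      name: my-project          # project ConfigMap
  credentials: organization-secret   # secret holding the programmatic API keys
```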
6

Add any optional settings for backups that you want to apply to your deployment to the object specification file. For example, for each type of backup store, and for Ops Manager backup daemon processes, you can assign labels to associate particular backup stores or backup daemon processes with specific projects. Use the spec.backup.[*].assignmentLabels elements of the OpsManager resource.

7

Add any optional settings that you want to apply to your deployment to the object specification file.

8
9

Run the following kubectl command on the filename of the Ops Manager resource definition:

kubectl apply -f <opsmgr-resource>.yaml

Note

If you are deploying an Ops Manager resource on a multi-Kubernetes cluster MongoDB deployment, run:

kubectl apply \
  --context "$MDB_CENTRAL_CLUSTER_FULL_NAME" \
  --namespace "mongodb" \
  -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/samples/ops-manager/ops-manager-external.yaml
10

To check the status of your Ops Manager resource, invoke the following command:

kubectl get om -o yaml -w

The command returns output similar to the following, under the status field while the resource deploys:

status:
  applicationDatabase:
    lastTransition: "2023-04-01T09:49:22Z"
    message: AppDB Statefulset is not ready yet
    phase: Pending
    type: ""
    version: ""
  backup:
    phase: ""
  opsManager:
    phase: ""

The Kubernetes Operator reconciles the resources in the following order:

  1. Application Database.

  2. Ops Manager.

  3. Backup.

The Kubernetes Operator doesn't reconcile a resource until the preceding one enters the Running phase.

After the Ops Manager resource completes the Pending phase, the command returns output similar to the following under the status field if you enabled backup:

status:
  applicationDatabase:
    lastTransition: "2023-04-01T09:50:20Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "6.0.5-ubi8"
  backup:
    lastTransition: "2022-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename> doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2022-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: "6.0.17"

Backup remains in a Pending state until you configure the backup databases.

Tip

The status.opsManager.url field states the resource's connection URL. Using this URL, you can reach Ops Manager from inside the Kubernetes cluster or create a project using a ConfigMap.

11

The steps you take differ based on how you are routing traffic to the Ops Manager application in Kubernetes. If you configured the Kubernetes Operator to create a Kubernetes service for you, or you created a Kubernetes service manually, use one of the following methods to access the Ops Manager application:

  1. Query your cloud provider to get the FQDN of the load balancer service. See your cloud provider's documentation for details.

  2. Open a browser window and navigate to the Ops Manager application using the FQDN and port number of your load balancer service.

    http://ops.example.com:8080
  3. Log in to Ops Manager using the admin user credentials.

  1. Set your firewall rules to allow access from the Internet to the spec.externalConnectivity.port on the host on which your Kubernetes cluster is running.

  2. Open a browser window and navigate to the Ops Manager application using the FQDN and the spec.externalConnectivity.port.

    http://ops.example.com:30036
  3. Log in to Ops Manager using the admin user credentials.

To learn how to access the Ops Manager application using a third-party service, refer to the documentation for your solution.

12

If you enabled backup, you must create an Ops Manager organization, generate programmatic API keys, and create a secret in your secret storage tool. These activities follow the prerequisites and procedure on the Create Credentials for the Kubernetes Operator page.

13

If you enabled backup, create a project by following the prerequisites and procedure on the Create One Project using a ConfigMap page.

You must set data.baseUrl in the ConfigMap to the Ops Manager Application's URL. To find this URL, invoke the following command:

kubectl get om -o yaml -w

The command returns the URL of the Ops Manager Application in the status.opsManager.url field, similar to the following example:

status:
  applicationDatabase:
    lastTransition: "2022-04-01T10:00:32Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "6.0.5-ubi8"
  backup:
    lastTransition: "2022-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename> doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2022-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: "6.0.17"

Important

If you deploy Ops Manager with the Kubernetes Operator and Ops Manager will manage MongoDB database resources deployed outside of the Kubernetes cluster it's deployed to, you must set data.baseUrl to the same value as the spec.configuration.mms.centralUrl setting in the Ops Manager resource specification.

To learn more, see Managing External MongoDB Deployments.

14

If you enabled Backup, create a MongoDB database resource for the oplog and snapshot stores to complete the configuration.

  1. Deploy a MongoDB database resource for the oplog store in the same namespace as the Ops Manager resource.

    Note

    Create this database as a replica set.

    Match the metadata.name of the resource with the spec.backup.opLogStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.

  2. Choose one of the following:

    1. Deploy a MongoDB database resource for the blockstore in the same namespace as the Ops Manager resource.

      Match the metadata.name of the resource to the spec.backup.blockStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.

    2. Configure an S3 bucket to use as the S3 snapshot store.

      Ensure that you can access the S3 bucket using the details that you specified in your Ops Manager resource definition.

15

If you enabled backup, check the status of your Ops Manager resource by invoking the following command:

kubectl get om -o yaml -w

When Ops Manager is running, the command returns the following output under the status field:

status:
  applicationDatabase:
    lastTransition: "2022-04-01T10:00:32Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "6.0.5-ubi8"
  backup:
    lastTransition: "2022-04-01T10:00:53Z"
    phase: Running
    version: "6.0.5-ubi8"
  opsManager:
    lastTransition: "2022-04-01T10:00:34Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: "6.0.17"

See Troubleshoot the Kubernetes Operator for information about the resource deployment statuses.