Deploy an Ops Manager Resource
You can deploy Ops Manager as a resource in a Kubernetes cluster using the Kubernetes Operator.
Considerations
The following considerations apply:
Encrypting Connections
When you configure your Ops Manager deployment, you must choose whether to run connections over HTTPS or HTTP.
The following HTTPS procedure:
Establishes TLS-encrypted connections to/from the Ops Manager application.
Establishes TLS-encrypted connections between the application database's replica set members.
Requires valid certificates for TLS encryption.
The following HTTP procedure:
Doesn't encrypt connections to or from the Ops Manager application.
Doesn't encrypt connections between the application database's replica set members.
Has fewer setup requirements.
When running over HTTPS, Ops Manager runs on port 8443 by default.
Select the appropriate tab based on whether you want to encrypt your Ops Manager and application database connections with TLS.
Prerequisites
Complete the Prerequisites.
Read the Considerations.
Create one TLS certificate for the Application Database's replica set.
This TLS certificate requires the following attributes:
DNS Names
Ensure that you add SANs or Subject Names for each Pod that hosts a member of the Application Database replica set. The SAN for each Pod must use the following format:
<opsmgr-metadata.name>-db-<index>.<opsmgr-metadata.name>-db-svc.<namespace>.svc.cluster.local
For example, for a hypothetical Ops Manager resource named ops-manager in the mongodb namespace, the SAN for the first Application Database Pod would be ops-manager-db-0.ops-manager-db-svc.mongodb.svc.cluster.local.
Key Usages
Ensure that the TLS certificates include the following key usages (as defined in RFC 5280):
"server auth"
"client auth"
Important
The Kubernetes Operator uses kubernetes.io/tls secrets to store TLS certificates and private keys for Ops Manager and MongoDB resources. Starting in Kubernetes Operator version 1.17.0, the Kubernetes Operator doesn't support concatenated PEM files stored as Opaque secrets.
Before you deploy an Ops Manager resource, make sure you plan for it:
Complete the Prerequisites.
Read the Considerations.
Procedure
This procedure applies to deploying an Ops Manager instance in a single Kubernetes cluster, and to deploying Ops Manager on an operator cluster in a multi-cluster deployment. If you want to deploy multiple instances of Ops Manager on multiple Kubernetes clusters, see Deploy Ops Manager Resources on Multiple Kubernetes Clusters.
Follow these steps to deploy the Ops Manager resource to run over HTTPS and secure the application database using TLS.
Configure kubectl to default to your namespace.
If you have not already done so, run the following command to execute all kubectl commands in the namespace you created.
Note
If you are deploying an Ops Manager resource in a multi-Kubernetes cluster MongoDB deployment:
Set the context to the name of the central cluster, such as: kubectl config set-context "$MDB_CENTRAL_CLUSTER_FULL_NAME".
Set the --namespace to the same scope that you used for your multi-Kubernetes cluster MongoDB deployment, such as: kubectl config set-context "$MDB_CENTRAL_CLUSTER_FULL_NAME" --namespace "mongodb".
kubectl config set-context $(kubectl config current-context) --namespace=<metadata.namespace>
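To confirm that your kubectl context now defaults to the intended namespace, you can run a quick check such as the following; this verification step is a suggestion, not part of the official procedure:
kubectl config view --minify --output 'jsonpath={..namespace}'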
Create secrets for your certificates.
If you're using HashiCorp Vault as your secret storage tool, you can Create a Vault Secret instead.
To learn about your options for secret storage, see Configure Secret Storage.
Once you have your TLS certificates and private keys, run the following command to create a secret that stores Ops Manager's TLS certificate:
kubectl create secret tls <prefix>-<metadata.name>-cert \
  --cert=<om-tls-cert> \
  --key=<om-tls-key>
Run the following command to create a new secret that stores the application database's TLS certificate:
kubectl create secret tls <prefix>-<metadata.name>-db-cert \
  --cert=<appdb-tls-cert> \
  --key=<appdb-tls-key>
Add additional certificates to your custom CA certificate.
If your Ops Manager TLS certificate is signed by a custom CA, the CA certificate must also contain additional certificates that allow the Ops Manager Backup Daemon to download MongoDB binaries from the internet. Create a ConfigMap to hold the CA certificate:
Important
The Kubernetes Operator requires that your Ops Manager certificate is named mms-ca.crt in the ConfigMap.
Obtain the entire TLS certificate chain for Ops Manager from downloads.mongodb.com. The following openssl command outputs each certificate in the chain to your current working directory, in .crt format:
openssl s_client -showcerts -verify 2 \
  -connect downloads.mongodb.com:443 -servername downloads.mongodb.com < /dev/null \
  | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="cert"a".crt"; print >out}'
Concatenate your CA's certificate file for Ops Manager with the entire TLS certificate chain from downloads.mongodb.com that you obtained in the previous step:
cat cert2.crt cert3.crt cert4.crt >> mms-ca.crt
Create the ConfigMap for Ops Manager:
kubectl create configmap om-http-cert-ca --from-file="mms-ca.crt"
Copy one of the following Ops Manager Kubernetes object examples.
Change the settings to match your Ops Manager and application database configuration.
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the Kubernetes secret
                                           # for the admin user

  externalConnectivity:
    type: LoadBalancer
  security:
    certsSecretPrefix: <prefix> # Required. Text to prefix
                                # the name of the secret that contains
                                # Ops Manager's TLS certificate.
    tls:
      ca: "om-http-cert-ca" # Optional. Name of the ConfigMap
                            # containing the certificate authority that
                            # signs the certificates used by the Ops
                            # Manager custom resource.

  applicationDatabase:
    topology: SingleCluster
    members: 3
    version: "6.0.0-ubi8"
    security:
      certsSecretPrefix: <prefix> # Required. Text to prefix to the
                                  # name of the secret that contains the Application
                                  # Database's TLS certificate. Name the secret
                                  # <prefix>-<metadata.name>-db-cert.
      tls:
        ca: "appdb-ca" # Optional, unless enabling TLS for Ops Manager.
                       # Name of the ConfigMap
                       # containing the certificate authority that
                       # signs the certificates used by the
                       # application database.

...
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the Kubernetes secret
                                           # for the admin user

  externalConnectivity:
    type: LoadBalancer
  security:
    certsSecretPrefix: <prefix> # Required. Text to prefix
                                # the name of the secret that contains
                                # Ops Manager's TLS certificate.
    tls:
      ca: "om-http-cert-ca" # Optional. Name of the ConfigMap
                            # containing the certificate authority that
                            # signs the certificates used by the Ops
                            # Manager custom resource.

  applicationDatabase:
    topology: MultiCluster
    clusterSpecList:
      - clusterName: cluster1.example.com
        members: 4
      - clusterName: cluster2.example.com
        members: 3
      - clusterName: cluster3.example.com
        members: 2
    version: "6.0.0-ubi8"
    security:
      certsSecretPrefix: <prefix> # Required. Text to prefix to the
                                  # name of the secret that contains the Application
                                  # Database's TLS certificate. Name the secret
                                  # <prefix>-<metadata.name>-db-cert.
      tls:
        ca: "appdb-ca" # Optional, unless enabling TLS for Ops Manager.
                       # Name of the ConfigMap
                       # containing the certificate authority that
                       # signs the certificates used by the
                       # application database.

...
Open your preferred text editor and paste the object specification into a new text file.
Configure the settings specific to your deployment.
Key | Type | Description | Example |
---|---|---|---|
metadata.name | string | Name for this Kubernetes Ops Manager object. Resource names must be 44 characters or less. | |
spec.replicas | number | Number of Ops Manager instances to run in parallel. The minimum valid value is 1. | |
spec.version | string | Version of Ops Manager to be installed. The format should be X.Y.Z. To view available Ops Manager versions, view the container registry. | |
spec.adminCredentials | string | Name of the Kubernetes secret you created for the Ops Manager admin user. This value should match that secret's metadata.name. | |
spec.security.certsSecretPrefix | string | Required. Text to prefix to the name of the secret that contains Ops Manager's TLS certificates. | |
spec.security.tls.ca | string | Name of the ConfigMap you created to verify your Ops Manager TLS certificates signed using a custom CA. This field is required if you signed your Ops Manager TLS certificates using a custom CA. | |
spec.externalConnectivity.type | string | The Kubernetes service ServiceType that exposes Ops Manager outside of Kubernetes. Exclude the spec.externalConnectivity setting if you don't want the Kubernetes Operator to create a Kubernetes service to route external traffic to the Ops Manager application. | |
spec.applicationDatabase.members | integer | Number of members of the Ops Manager Application Database replica set. | |
spec.applicationDatabase.version | string | Required. Version of MongoDB that the Ops Manager Application Database should run. The format should be X.Y.Z-ubi8. IMPORTANT: Ensure that you choose a compatible MongoDB Server version. Compatible versions differ depending on the base image that the MongoDB database resource uses. To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual. | For best results, use the latest available enterprise MongoDB version that is compatible with your Ops Manager version. |
spec.applicationDatabase.topology | string | Optional. The type of the Kubernetes deployment for the Application Database. If omitted, the default is SingleCluster. If you specify MultiCluster, the Kubernetes Operator ignores the spec.applicationDatabase.members value; instead, you must specify the spec.applicationDatabase.clusterSpecList. You can't convert a single cluster Ops Manager instance to a multi-Kubernetes cluster MongoDB deployment instance by modifying the topology setting. See also the example of the resource specification. | |
spec.applicationDatabase.security.certsSecretPrefix | string | Required. Text to prefix to the name of the secret that contains the application database's TLS certificates. | |
spec.applicationDatabase.security.tls.ca | string | Name of the ConfigMap you created to verify your application database TLS certificates signed using a custom CA. This field is required if you signed your application database TLS certificates using a custom CA. The Kubernetes Operator mounts this CA to both the Ops Manager and the Application Database Pods. | |
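To make the secret naming convention concrete, here is an example with hypothetical values: if certsSecretPrefix is om-prefix and metadata.name is ops-manager, the TLS secrets you created earlier must be named as follows:
kubectl create secret tls om-prefix-ops-manager-cert \
  --cert=<om-tls-cert> \
  --key=<om-tls-key>

kubectl create secret tls om-prefix-ops-manager-db-cert \
  --cert=<appdb-tls-cert> \
  --key=<appdb-tls-key>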
Optional: Configure Backup settings.
To configure backup, you must enable it, and then:
Choose to configure an S3 snapshot store or a blockstore. If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for backup.
Choose to configure an oplog store or an S3 oplog store. If you deploy both an oplog store and an S3 oplog store, Ops Manager randomly chooses one of them to use for the oplog backup.
Key | Type | Description | Example |
---|---|---|---|
spec.backup.enabled | boolean | Flag that indicates that Backup is enabled. You must specify true to enable and configure backup. | |
spec.backup.headDB | collection | A collection of configuration settings for the head database. For descriptions of the individual settings in the collection, see spec.backup.headDB in the MongoDBOpsManager resource specification. | |
spec.backup.opLogStores.name | string | Name of the oplog store. | |
spec.backup.s3OpLogStores.name | string | Name of the S3 oplog store. | |
spec.backup.opLogStores.mongodbResourceRef.name | string | Name of the MongoDB database resource for the oplog store. | |
spec.backup.s3OpLogStores.mongodbResourceRef.name | string | Name of the MongoDB database resource for the S3 oplog store. | |
You must also configure an S3 snapshot store or a blockstore.
If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for Backup.
To configure a snapshot store, configure the following settings:
Key | Type | Description | Example |
---|---|---|---|
spec.backup.s3Stores.name | string | Name of the S3 snapshot store. | |
spec.backup.s3Stores.s3SecretRef.name | string | Name of the secret that contains the accessKey and secretKey for the S3 or S3-compatible bucket. | |
spec.backup.s3Stores.s3BucketEndpoint | string | URL of the S3 or S3-compatible bucket that stores the database Backup snapshots. | |
spec.backup.s3Stores.s3BucketName | string | Name of the S3 or S3-compatible bucket that stores the database Backup snapshots. | |
To configure a blockstore, configure the following settings:
Key | Type | Description | Example |
---|---|---|---|
spec.backup.blockStores.name | string | Name of the blockstore. | |
spec.backup.blockStores.mongodbResourceRef.name | string | Name of the MongoDB database resource for the blockstore. | |
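The following sketch shows how these backup settings fit together in the MongoDBOpsManager resource. The store names, MongoDB resource names, secret name, bucket endpoint, and bucket name are hypothetical placeholders; substitute values that match your environment:
spec:
  backup:
    enabled: true
    opLogStores:
      - name: oplog1                  # oplog store name
        mongodbResourceRef:
          name: om-oplog-db           # MongoDB database resource for the oplog store
    s3Stores:
      - name: s3store1                # S3 snapshot store name
        mongodbResourceRef:
          name: om-s3-metadata-db     # MongoDB database resource that stores the S3 store's metadata
        s3SecretRef:
          name: my-s3-credentials     # secret that contains the S3 access key and secret key
        s3BucketEndpoint: s3.us-east-1.amazonaws.com
        s3BucketName: my-backup-bucket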
Optional: Configure any additional settings for an Ops Manager backup.
Add any optional settings for backups that you want to apply to your deployment to the object specification file. For example, for each type of backup store, and for Ops Manager backup daemon processes, you can assign labels to associate particular backup stores or backup daemon processes with specific projects. Use the spec.backup.[*].assignmentLabels elements of the OpsManager resources.
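As an illustration of assignment labels (the label value my-project and the resource names are placeholders, not values from this procedure):
spec:
  backup:
    enabled: true
    assignmentLabels: ["my-project"]      # associates the backup daemon with a project
    opLogStores:
      - name: oplog1
        assignmentLabels: ["my-project"]  # associates this oplog store with a project
        mongodbResourceRef:
          name: om-oplog-db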
Optional: Configure any additional settings for an Ops Manager deployment.
Add any optional settings that you want to apply to your deployment to the object specification file.
Create your Ops Manager instance.
Run the following kubectl command on the filename of the Ops Manager resource definition:
kubectl apply -f <opsmgr-resource>.yaml
Note
If you are deploying an Ops Manager resource on a multi-Kubernetes cluster MongoDB deployment, run:
kubectl apply \
  --context "$MDB_CENTRAL_CLUSTER_FULL_NAME" \
  --namespace "mongodb" \
  -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/samples/ops-manager/ops-manager-external.yaml
Track the status of your Ops Manager instance.
To check the status of your Ops Manager resource, invoke the following command:
kubectl get om -o yaml -w
The command returns output similar to the following under the status field while the resource deploys:
status:
  applicationDatabase:
    lastTransition: "2022-04-01T09:49:22Z"
    message: AppDB Statefulset is not ready yet
    phase: Pending
    type: ""
    version: ""
  backup:
    phase: ""
  opsManager:
    phase: ""
The Kubernetes Operator reconciles the resources in the following order:
Application Database.
Ops Manager.
Backup.
The Kubernetes Operator doesn't reconcile a resource until the preceding one enters the Running phase.
After the Ops Manager resource completes the Pending phase, the command returns output similar to the following under the status field if you enabled Backup:
status:
  applicationDatabase:
    lastTransition: "2022-04-01T09:50:20Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "6.0.5-ubi8"
  backup:
    lastTransition: "2022-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename> doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2023-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: https://om-svc.cloudqa.svc.cluster.local:8443
    version: "6.0.17"
Backup remains in a Pending state until you configure the Backup databases.
Tip
The status.opsManager.url field states the resource's connection URL. Using this URL, you can reach Ops Manager from inside the Kubernetes cluster or create a project using a ConfigMap.
After the resource completes the Pending phase, the command returns output similar to the following under the status field:
status:
  applicationDatabase:
    lastTransition: "2022-12-06T18:23:22Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "6.0.5-ubi8"
  opsManager:
    lastTransition: "2022-12-06T18:23:26Z"
    message: The MongoDB object namespace/oplogdbname doesn't exist
    phase: Pending
    url: https://om-svc.dev.svc.cluster.local:8443
    version: ""
Backup remains in a Pending state until you configure the Backup databases.
Tip
The status.opsManager.url field states the resource's connection URL. Using this URL, you can reach Ops Manager from inside the Kubernetes cluster or create a project using a ConfigMap.
Access the Ops Manager application.
The steps you take differ based on how you are routing traffic to the Ops Manager application in Kubernetes. If you configured the Kubernetes Operator to create a Kubernetes service for you, or you created a Kubernetes service manually, use one of the following methods to access the Ops Manager application:
Query your cloud provider to get the FQDN of the load balancer service. See your cloud provider's documentation for details.
Open a browser window and navigate to the Ops Manager application using the FQDN and port number of your load balancer service.
https://ops.example.com:8443
Log in to Ops Manager using the admin user credentials.
Set your firewall rules to allow access from the Internet to the spec.externalConnectivity.port on the host on which your Kubernetes cluster is running.
Open a browser window and navigate to the Ops Manager application using the FQDN and the spec.externalConnectivity.port.
https://ops.example.com:30036
Log in to Ops Manager using the admin user credentials.
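If you're not sure of the external address or port, one way to find them (assuming the Kubernetes Operator created the service for you) is to list the services in the Ops Manager namespace and read the EXTERNAL-IP and PORT(S) columns:
kubectl get services -n <metadata.namespace>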
To learn how to access the Ops Manager application using a third-party service, refer to the documentation for your solution.
Create credentials for the Kubernetes Operator.
To configure credentials, you must create an Ops Manager organization, generate programmatic API keys, and create a secret. These activities follow the prerequisites and procedure on the Create Credentials for the Kubernetes Operator page.
Create a project using a ConfigMap.
To create a project, follow the prerequisites and procedure on the Create One Project using a ConfigMap page.
Set the following fields in your project ConfigMap:
Set data.baseUrl in the ConfigMap to the Ops Manager Application's URL. To find this URL, invoke the following command:
kubectl get om -o yaml -w
The command returns the URL of the Ops Manager Application in the status.opsManager.url field, similar to the following example:
status:
  applicationDatabase:
    lastTransition: "2022-12-06T18:23:22Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "6.0.5-ubi8"
  opsManager:
    lastTransition: "2022-12-06T18:23:26Z"
    message: The MongoDB object namespace/oplogdbname doesn't exist
    phase: Pending
    url: https://om-svc.dev.svc.cluster.local:8443
    version: ""
Important
If you deploy Ops Manager with the Kubernetes Operator and Ops Manager will manage MongoDB database resources deployed outside of the Kubernetes cluster it's deployed to, you must set data.baseUrl to the same value as the spec.configuration.mms.centralUrl setting in the Ops Manager resource specification. To learn more, see Managing External MongoDB Deployments.
Set data.sslMMSCAConfigMap to the name of your ConfigMap containing the root CA certificate used to sign the Ops Manager host's certificate. The Kubernetes Operator requires that you name this Ops Manager resource's certificate mms-ca.crt in the ConfigMap.
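For reference, a minimal project ConfigMap that sets these fields might look like the following sketch. The ConfigMap name, namespace, project name, and organization ID are hypothetical placeholders; see the Create One Project using a ConfigMap page for the full list of supported fields:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-project
  namespace: mongodb
data:
  baseUrl: https://om-svc.dev.svc.cluster.local:8443  # from status.opsManager.url
  projectName: myProjectName                          # optional; defaults to the ConfigMap name
  orgId: <orgid>                                      # ID of the Ops Manager organization
  sslMMSCAConfigMap: om-http-cert-ca                  # ConfigMap that holds mms-ca.crt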
Deploy MongoDB database resources to complete the backup configuration.
By default, Ops Manager enables Backup. Create a MongoDB database resource for the oplog and snapshot stores to complete the configuration.
Deploy a MongoDB database resource for the oplog store in the same namespace as the Ops Manager resource.
Note
Create this database as a three-member replica set.
Match the metadata.name of the resource with the spec.backup.opLogStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.
Deploy a MongoDB database resource for the S3 snapshot store in the same namespace as the Ops Manager resource.
Note
Create the S3 snapshot store as a replica set.
Match the metadata.name of the resource to the spec.backup.s3Stores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.
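The following is a minimal sketch of a MongoDB database resource for the oplog store. The resource name, namespace, project ConfigMap name, and credentials secret name are hypothetical; adjust the version and member count to match your deployment:
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <oplogresourcename>        # must match spec.backup.opLogStores.mongodbResourceRef.name
  namespace: mongodb
spec:
  type: ReplicaSet
  members: 3
  version: "6.0.5-ubi8"
  opsManager:
    configMapRef:
      name: my-project             # project ConfigMap you created earlier
  credentials: organization-secret # secret that contains the programmatic API key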
Confirm that the Ops Manager resource is running.
To check the status of your Ops Manager resource, invoke the following command:
kubectl get om -o yaml -w
When Ops Manager is running, the command returns output similar to the following under the status field:
status:
  applicationDatabase:
    lastTransition: "2022-12-06T17:46:15Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "6.0.5-ubi8"
  opsManager:
    lastTransition: "2022-12-06T17:46:32Z"
    phase: Running
    replicas: 1
    url: https://om-backup-svc.dev.svc.cluster.local:8443
    version: "6.0.17"
See Troubleshoot the Kubernetes Operator for information about the resource deployment statuses.
Follow these steps to deploy the Ops Manager resource to run over HTTP:
Configure kubectl to default to your namespace.
If you have not already done so, run the following command to execute all kubectl commands in the namespace you created.
Note
If you are deploying an Ops Manager resource in a multi-Kubernetes cluster MongoDB deployment:
Set the context to the name of the central cluster, such as: kubectl config set-context "$MDB_CENTRAL_CLUSTER_FULL_NAME".
Set the --namespace to the same scope that you used for your multi-Kubernetes cluster MongoDB deployment, such as: kubectl config set-context "$MDB_CENTRAL_CLUSTER_FULL_NAME" --namespace "mongodb".
kubectl config set-context $(kubectl config current-context) --namespace=<metadata.namespace>
Copy one of the following Ops Manager Kubernetes object examples.
Change the settings to match your Ops Manager configuration.
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the secret
                                           # for the admin user
  externalConnectivity:
    type: LoadBalancer

  applicationDatabase:
    topology: SingleCluster
    members: 3
    version: <mongodbversion>
...
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: <myopsmanager>
spec:
  replicas: 1
  version: <opsmanagerversion>
  adminCredentials: <adminusercredentials> # Should match metadata.name
                                           # in the Kubernetes secret
                                           # for the admin user

  externalConnectivity:
    type: LoadBalancer

  applicationDatabase:
    topology: MultiCluster
    clusterSpecList:
      - clusterName: cluster1.example.com
        members: 4
      - clusterName: cluster2.example.com
        members: 3
      - clusterName: cluster3.example.com
        members: 2
    version: "6.0.5-ubi8"

...
Open your preferred text editor and paste the object specification into a new text file.
Configure the settings included in the previous example.
Key | Type | Description | Example |
---|---|---|---|
metadata.name | string | Name for this Kubernetes Ops Manager object. Resource names must be 44 characters or less. | |
spec.replicas | number | Number of Ops Manager instances to run in parallel. The minimum valid value is 1. | |
spec.version | string | Version of Ops Manager to be installed. The format should be X.Y.Z. For the list of available Ops Manager versions, view the container registry. | |
spec.adminCredentials | string | Name of the Kubernetes secret you created for the Ops Manager admin user. This value should match that secret's metadata.name. | |
spec.externalConnectivity.type | string | Optional. The Kubernetes service ServiceType that exposes Ops Manager outside of Kubernetes. Exclude the spec.externalConnectivity setting if you don't want the Kubernetes Operator to create a Kubernetes service to route external traffic to the Ops Manager application. | |
spec.applicationDatabase.members | integer | Number of members of the Ops Manager Application Database replica set. | |
spec.applicationDatabase.version | string | Required. Version of MongoDB that the Ops Manager Application Database should run. The format should be X.Y.Z-ubi8. IMPORTANT: Ensure that you choose a compatible MongoDB Server version. Compatible versions differ depending on the base image that the MongoDB database resource uses. To learn more about MongoDB versioning, see MongoDB Versioning in the MongoDB Manual. | For best results, use the latest available enterprise MongoDB version that is compatible with your Ops Manager version. |
spec.applicationDatabase.topology | string | Optional. The type of the Kubernetes deployment for the Application Database. If omitted, the default is SingleCluster. If you specify MultiCluster, the Kubernetes Operator ignores the spec.applicationDatabase.members value. Instead, you must specify the spec.applicationDatabase.clusterSpecList. You can't convert a single cluster Ops Manager instance to a multi-Kubernetes cluster MongoDB deployment instance by modifying the topology setting. See also the example of the resource specification. | |
Optional: Configure backup settings.
To configure backup, you must enable it, and then:
Choose to configure an S3 snapshot store or a blockstore. If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for backup.
Choose to configure an oplog store or an S3 oplog store. If you deploy both an oplog store and an S3 oplog store, Ops Manager randomly chooses one of them to use for the oplog backup.
Key | Type | Description | Example |
---|---|---|---|
spec.backup.enabled | boolean | Flag that indicates that backup is enabled. You must specify true to enable and configure backup. | |
spec.backup.headDB | collection | A collection of configuration settings for the head database. For descriptions of the individual settings in the collection, see spec.backup.headDB in the MongoDBOpsManager resource specification. | |
spec.backup.opLogStores.name | string | Name of the oplog store. | |
spec.backup.s3OpLogStores.name | string | Name of the S3 oplog store. | |
spec.backup.opLogStores.mongodbResourceRef.name | string | Name of the MongoDB database resource for the oplog store. | |
spec.backup.s3OpLogStores.mongodbResourceRef.name | string | Name of the MongoDB database resource for the S3 oplog store. | |
You must also configure an S3 snapshot store or a blockstore. If you deploy both an S3 snapshot store and a blockstore, Ops Manager randomly chooses one to use for backup.
To configure an S3 snapshot store, configure the following settings:
Key | Type | Description | Example |
---|---|---|---|
spec.backup.s3Stores.name | string | Name of the S3 snapshot store. | |
spec.backup.s3Stores.s3SecretRef.name | string | Name of the secret that contains the accessKey and secretKey for the S3 or S3-compatible bucket. | |
spec.backup.s3Stores.s3BucketEndpoint | string | URL of the S3 or S3-compatible bucket that stores the database backup snapshots. | |
spec.backup.s3Stores.s3BucketName | string | Name of the S3 or S3-compatible bucket that stores the database backup snapshots. | |
spec.backup.s3Stores.s3RegionOverride | string | Region where your S3-compatible bucket resides. Use this field only if your S3 store's s3BucketEndpoint doesn't include a region. | |
To configure a blockstore, configure the following settings:
Key | Type | Description | Example |
---|---|---|---|
spec.backup.blockStores.name | string | Name of the blockstore. | |
spec.backup.blockStores.mongodbResourceRef.name | string | Name of the MongoDB database resource for the blockstore. | |
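As a sketch of how a blockstore entry fits into the resource (the store name and MongoDB resource name are placeholders):
spec:
  backup:
    enabled: true
    blockStores:
      - name: blockstore1           # blockstore name
        mongodbResourceRef:
          name: om-blockstore-db    # MongoDB database resource for the blockstore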
Optional: Configure any additional settings for an Ops Manager backup.
Add any optional settings for backups that you want to apply to your deployment to the object specification file. For example, for each type of backup store, and for Ops Manager backup daemon processes, you can assign labels to associate particular backup stores or backup daemon processes with specific projects. Use the spec.backup.[*].assignmentLabels elements of the OpsManager resources.
Optional: Configure any additional settings for an Ops Manager deployment.
Add any optional settings that you want to apply to your deployment to the object specification file.
Create your Ops Manager instance.
Run the following kubectl command on the filename of the Ops Manager resource definition:
kubectl apply -f <opsmgr-resource>.yaml
Note
If you are deploying an Ops Manager resource on a multi-Kubernetes cluster MongoDB deployment, run:
kubectl apply \
  --context "$MDB_CENTRAL_CLUSTER_FULL_NAME" \
  --namespace "mongodb" \
  -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/samples/ops-manager/ops-manager-external.yaml
Track the status of your Ops Manager instance.
To check the status of your Ops Manager resource, invoke the following command:
kubectl get om -o yaml -w
The command returns output similar to the following under the status field while the resource deploys:
status:
  applicationDatabase:
    lastTransition: "2023-04-01T09:49:22Z"
    message: AppDB Statefulset is not ready yet
    phase: Pending
    type: ""
    version: ""
  backup:
    phase: ""
  opsManager:
    phase: ""
The Kubernetes Operator reconciles the resources in the following order:
Application Database.
Ops Manager.
Backup.
The Kubernetes Operator doesn't reconcile a resource until the preceding one enters the Running phase.
After the Ops Manager resource completes the Pending phase, the command returns output similar to the following under the status field if you enabled backup:
status:
  applicationDatabase:
    lastTransition: "2023-04-01T09:50:20Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "6.0.5-ubi8"
  backup:
    lastTransition: "2022-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename> doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2022-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: "6.0.17"
Backup remains in a Pending state until you configure the backup databases.
Tip
The status.opsManager.url field states the resource's connection URL. Using this URL, you can reach Ops Manager from inside the Kubernetes cluster or create a project using a ConfigMap.
Access the Ops Manager application.
The steps you take differ based on how you are routing traffic to the Ops Manager application in Kubernetes. If you configured the Kubernetes Operator to create a Kubernetes service for you, or you created a Kubernetes service manually, use one of the following methods to access the Ops Manager application:
Query your cloud provider to get the FQDN of the load balancer service. See your cloud provider's documentation for details.
Open a browser window and navigate to the Ops Manager application using the FQDN and port number of your load balancer service.
http://ops.example.com:8080
Log in to Ops Manager using the admin user credentials.
Set your firewall rules to allow access from the Internet to the spec.externalConnectivity.port on the host on which your Kubernetes cluster is running.
Open a browser window and navigate to the Ops Manager application using the FQDN and the spec.externalConnectivity.port.
http://ops.example.com:30036
Log in to Ops Manager using the admin user credentials.
To learn how to access the Ops Manager application using a third-party service, refer to the documentation for your solution.
Optional: Create credentials for the Kubernetes Operator.
If you enabled backup, you must create an Ops Manager organization, generate programmatic API keys, and create a secret in your secret storage tool. These activities follow the prerequisites and procedure on the Create Credentials for the Kubernetes Operator page.
Optional: Create a project using a ConfigMap.
If you enabled backup, create a project by following the prerequisites and procedure on the Create One Project using a ConfigMap page.
You must set data.baseUrl in the ConfigMap to the Ops Manager Application's URL. To find this URL, invoke the following command:
kubectl get om -o yaml -w
The command returns the URL of the Ops Manager Application in the status.opsManager.url field, similar to the following example:
status:
  applicationDatabase:
    lastTransition: "2022-04-01T10:00:32Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "6.0.5-ubi8"
  backup:
    lastTransition: "2022-04-01T09:57:42Z"
    message: The MongoDB object <namespace>/<oplogresourcename> doesn't exist
    phase: Pending
  opsManager:
    lastTransition: "2022-04-01T09:57:40Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: "6.0.17"
Important
If you deploy Ops Manager with the Kubernetes Operator and Ops Manager will manage MongoDB database resources deployed outside of the Kubernetes cluster it's deployed to, you must set data.baseUrl to the same value as the spec.configuration.mms.centralUrl setting in the Ops Manager resource specification.
To learn more, see Managing External MongoDB Deployments.
Optional: Deploy MongoDB database resources to complete the backup configuration.
If you enabled Backup, create a MongoDB database resource for the oplog and snapshot stores to complete the configuration.
Deploy a MongoDB database resource for the oplog store in the same namespace as the Ops Manager resource.
Note
Create this database as a replica set.
Match the metadata.name of the resource with the spec.backup.opLogStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.
Choose one of the following:
Deploy a MongoDB database resource for the blockstore in the same namespace as the Ops Manager resource.
Match the metadata.name of the resource to the spec.backup.blockStores.mongodbResourceRef.name that you specified in your Ops Manager resource definition.
Configure an S3 bucket to use as the S3 snapshot store.
Ensure that you can access the S3 bucket using the details that you specified in your Ops Manager resource definition.
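If you opt for the S3 snapshot store, the secret referenced by spec.backup.s3Stores.s3SecretRef.name holds the bucket credentials. A hedged example, assuming the key names accessKey and secretKey and a hypothetical secret name:
kubectl create secret generic my-s3-credentials \
  --from-literal=accessKey="<AWS access key ID>" \
  --from-literal=secretKey="<AWS secret access key>"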
Optional: Confirm that the Ops Manager resource is running.
If you enabled backup, check the status of your Ops Manager resource by invoking the following command:
kubectl get om -o yaml -w
When Ops Manager is running, the command returns the following output under the status field:
status:
  applicationDatabase:
    lastTransition: "2022-04-01T10:00:32Z"
    members: 3
    phase: Running
    type: ReplicaSet
    version: "6.0.5-ubi8"
  backup:
    lastTransition: "2022-04-01T10:00:53Z"
    phase: Running
    version: "6.0.5-ubi8"
  opsManager:
    lastTransition: "2022-04-01T10:00:34Z"
    phase: Running
    replicas: 1
    url: http://om-svc.cloudqa.svc.cluster.local:8080
    version: "6.0.17"
See Troubleshoot the Kubernetes Operator for information about the resource deployment statuses.