Deploy Ops Manager Resources on Multiple Kubernetes Clusters

To make your Ops Manager and Application Database deployment resilient to entire data center or zone failures, deploy the Ops Manager Application and the Application Database on multiple Kubernetes clusters.

To learn more about the architecture, networking, limitations, and performance of multi-Kubernetes cluster deployments for Ops Manager resources, see:

  • Multi-Cluster Ops Manager Architecture

  • Limitations

  • Differences between Single and Multi-Cluster Ops Manager Deployments

  • Architecture Diagram

  • Networking, Load Balancer, Service Mesh

  • Performance

When you deploy the Ops Manager Application and the Application Database using the procedure in this section, you:

  1. Use GKE (Google Kubernetes Engine) and Istio service mesh as tools that help demonstrate the multi-Kubernetes cluster deployment.

  2. Install the Kubernetes Operator on one of the member Kubernetes clusters known as the operator cluster. The operator cluster acts as a Hub in the "Hub and Spoke" pattern used by the Kubernetes Operator to manage deployments on multiple Kubernetes clusters.

  3. Deploy the operator cluster in the $OPERATOR_NAMESPACE and configure this cluster to watch $NAMESPACE and manage all member Kubernetes clusters.

  4. Deploy the Application Database and the Ops Manager Application on a single member Kubernetes cluster to demonstrate the similarity of a multi-cluster deployment to a single-cluster deployment. A single-cluster deployment with spec.topology and spec.applicationDatabase.topology set to MultiCluster prepares the deployment for adding more Kubernetes clusters to it.

  5. Deploy additional Application Database replica set members on the second member Kubernetes cluster to improve the Application Database's resiliency. You also deploy an additional Ops Manager Application instance on the second member Kubernetes cluster.

  6. Create valid certificates for TLS encryption, and establish TLS-encrypted connections to and from the Ops Manager Application and between the Application Database's replica set members. When running over HTTPS, Ops Manager runs on port 8443 by default.

  7. Enable backup using S3-compatible storage and deploy the Backup Daemon on the third member Kubernetes cluster. To simplify setting up S3-compatible storage buckets, you deploy the MinIO Operator. You enable the Backup Daemon only on one member cluster in your deployment. However, you can configure other member clusters to host the Backup Daemon resources as well. Only S3 backups are supported in multi-cluster Ops Manager deployments.

Before you can begin the deployment, install the following required tools:

Install the gcloud CLI and authorize it:

gcloud auth login

The kubectl mongodb plugin automates the configuration of the Kubernetes clusters. This allows the Kubernetes Operator to deploy resources, necessary roles, and service accounts for the Ops Manager Application, Application Database, and MongoDB resources on these clusters.

To install the kubectl mongodb plugin:

1

Download your desired Kubernetes Operator package version from the Release Page of the MongoDB Enterprise Kubernetes Operator Repository.

The package's name uses this pattern: kubectl-mongodb_{{ .Version }}_{{ .Os }}_{{ .Arch }}.tar.gz.

Use one of the following packages:

  • kubectl-mongodb_{{ .Version }}_darwin_amd64.tar.gz

  • kubectl-mongodb_{{ .Version }}_darwin_arm64.tar.gz

  • kubectl-mongodb_{{ .Version }}_linux_amd64.tar.gz

  • kubectl-mongodb_{{ .Version }}_linux_arm64.tar.gz

2

Unpack the package, as in the following example:

tar -zxvf kubectl-mongodb_<version>_darwin_amd64.tar.gz
3

Find the kubectl-mongodb binary in the unpacked directory and move it to a directory in the PATH of the Kubernetes Operator user, as shown in the following example:

mv kubectl-mongodb /usr/local/bin/kubectl-mongodb

Now you can run the kubectl mongodb plugin using the following commands:

kubectl mongodb multicluster setup
kubectl mongodb multicluster recover

To learn more about the supported flags, see the MongoDB kubectl plugin Reference.
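As a quick, optional check that the plugin is installed and discoverable by kubectl (not part of the original steps; the exact flag set varies by plugin version), you can list kubectl plugins and print the help for the setup subcommand:

kubectl plugin list | grep kubectl-mongodb
kubectl mongodb multicluster setup --help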

Clone the MongoDB Enterprise Kubernetes Operator repository, change into the mongodb-enterprise-kubernetes directory, and check out the current version.

git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git
cd mongodb-enterprise-kubernetes
git checkout 1.26
cd public/samples/ops-manager-multi-cluster

Important

Some steps in this guide work only if you run them from the public/samples/ops-manager-multi-cluster directory.

All steps in this guide reference the environment variables defined in env_variables.sh.

export MDB_GKE_PROJECT="### Set your GKE project name here ###"

export NAMESPACE="mongodb"
export OPERATOR_NAMESPACE="mongodb-operator"

# comma-separated key=value pairs
# export OPERATOR_ADDITIONAL_HELM_VALUES=""

# Adjust the values for each Kubernetes cluster in your deployment.
# The deployment script references the following variables to get values for each cluster.
export K8S_CLUSTER_0="k8s-mdb-0"
export K8S_CLUSTER_0_ZONE="europe-central2-a"
export K8S_CLUSTER_0_NUMBER_OF_NODES=3
export K8S_CLUSTER_0_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_0_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_0_ZONE}_${K8S_CLUSTER_0}"

export K8S_CLUSTER_1="k8s-mdb-1"
export K8S_CLUSTER_1_ZONE="europe-central2-b"
export K8S_CLUSTER_1_NUMBER_OF_NODES=3
export K8S_CLUSTER_1_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_1_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_1_ZONE}_${K8S_CLUSTER_1}"

export K8S_CLUSTER_2="k8s-mdb-2"
export K8S_CLUSTER_2_ZONE="europe-central2-c"
export K8S_CLUSTER_2_NUMBER_OF_NODES=1
export K8S_CLUSTER_2_MACHINE_TYPE="e2-standard-4"
export K8S_CLUSTER_2_CONTEXT_NAME="gke_${MDB_GKE_PROJECT}_${K8S_CLUSTER_2_ZONE}_${K8S_CLUSTER_2}"

# Comment out the following line so that the script does not create preemptible nodes.
# DO NOT USE preemptible nodes in production.
export GKE_SPOT_INSTANCES_SWITCH="--preemptible"

export S3_OPLOG_BUCKET_NAME=s3-oplog-store
export S3_SNAPSHOT_BUCKET_NAME=s3-snapshot-store

# minio defaults
export S3_ENDPOINT="minio.tenant-tiny.svc.cluster.local"
export S3_ACCESS_KEY="console"
export S3_SECRET_KEY="console123"

export OPERATOR_HELM_CHART="mongodb/enterprise-operator"

# (Optional) Change the following setting when using the external URL.
# This env variable is used in OpenSSL configuration to generate
# server certificates for Ops Manager Application.
export OPS_MANAGER_EXTERNAL_DOMAIN="om-svc.${NAMESPACE}.svc.cluster.local"

export OPS_MANAGER_VERSION="7.0.4"
export APPDB_VERSION="7.0.9-ubi8"

Adjust the settings in the previous example as instructed in the comments, and source them into your shell as follows:

source env_variables.sh

Important

Each time you update env_variables.sh, run source env_variables.sh to ensure that the scripts in this section use the updated variables.
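Optionally, as a quick sanity check that the variables resolved as expected (not part of the original steps), print the derived context names after sourcing the file:

echo "${K8S_CLUSTER_0_CONTEXT_NAME}"
echo "${K8S_CLUSTER_1_CONTEXT_NAME}"
echo "${K8S_CLUSTER_2_CONTEXT_NAME}"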

This procedure applies to deploying an Ops Manager instance on multiple Kubernetes clusters.

1

You can skip this step if you have already installed and configured your own Kubernetes clusters with a service mesh.

  1. Create three GKE (Google Kubernetes Engine) clusters:

    gcloud container clusters create "${K8S_CLUSTER_0}" \
      --zone="${K8S_CLUSTER_0_ZONE}" \
      --num-nodes="${K8S_CLUSTER_0_NUMBER_OF_NODES}" \
      --machine-type "${K8S_CLUSTER_0_MACHINE_TYPE}" \
      ${GKE_SPOT_INSTANCES_SWITCH:-""}
    gcloud container clusters create "${K8S_CLUSTER_1}" \
      --zone="${K8S_CLUSTER_1_ZONE}" \
      --num-nodes="${K8S_CLUSTER_1_NUMBER_OF_NODES}" \
      --machine-type "${K8S_CLUSTER_1_MACHINE_TYPE}" \
      ${GKE_SPOT_INSTANCES_SWITCH:-""}
    gcloud container clusters create "${K8S_CLUSTER_2}" \
      --zone="${K8S_CLUSTER_2_ZONE}" \
      --num-nodes="${K8S_CLUSTER_2_NUMBER_OF_NODES}" \
      --machine-type "${K8S_CLUSTER_2_MACHINE_TYPE}" \
      ${GKE_SPOT_INSTANCES_SWITCH:-""}
  2. Obtain credentials and save contexts to the current kubeconfig file. By default, this file is located at ~/.kube/config and is referenced by the $KUBECONFIG environment variable.

    gcloud container clusters get-credentials "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}"
    gcloud container clusters get-credentials "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}"
    gcloud container clusters get-credentials "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}"

    All kubectl commands reference these contexts using the following variables:

    • $K8S_CLUSTER_0_CONTEXT_NAME

    • $K8S_CLUSTER_1_CONTEXT_NAME

    • $K8S_CLUSTER_2_CONTEXT_NAME

  3. Verify that kubectl has access to the Kubernetes clusters:

    1echo "Nodes in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    2kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" get nodes
    3echo; echo "Nodes in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    4kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" get nodes
    5echo; echo "Nodes in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
    6kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" get nodes
    1Nodes in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    2NAME STATUS ROLES AGE VERSION
    3gke-k8s-mdb-0-default-pool-d0f98a43-dtlc Ready <none> 65s v1.28.7-gke.1026000
    4gke-k8s-mdb-0-default-pool-d0f98a43-q9sf Ready <none> 65s v1.28.7-gke.1026000
    5gke-k8s-mdb-0-default-pool-d0f98a43-zn8x Ready <none> 64s v1.28.7-gke.1026000
    6
    7Nodes in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
    8NAME STATUS ROLES AGE VERSION
    9gke-k8s-mdb-1-default-pool-37ea602a-0qgw Ready <none> 111s v1.28.7-gke.1026000
    10gke-k8s-mdb-1-default-pool-37ea602a-k4qk Ready <none> 114s v1.28.7-gke.1026000
    11gke-k8s-mdb-1-default-pool-37ea602a-p2g7 Ready <none> 113s v1.28.7-gke.1026000
    12
    13Nodes in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
    14NAME STATUS ROLES AGE VERSION
    15gke-k8s-mdb-2-default-pool-4b459a09-t1v9 Ready <none> 29s v1.28.7-gke.1026000
  4. Install the Istio service mesh to allow cross-cluster DNS resolution and network connectivity between the Kubernetes clusters:

    CTX_CLUSTER1=${K8S_CLUSTER_0_CONTEXT_NAME} \
    CTX_CLUSTER2=${K8S_CLUSTER_1_CONTEXT_NAME} \
    CTX_CLUSTER3=${K8S_CLUSTER_2_CONTEXT_NAME} \
    ISTIO_VERSION="1.20.2" \
    ../multi-cluster/install_istio_separate_network.sh
2

Note

To enable sidecar injection in Istio, the following commands add the istio-injection=enabled labels to the $OPERATOR_NAMESPACE and the mongodb namespaces on each member cluster. If you use another service mesh, configure it to handle network traffic in the created namespaces.

  • Create a separate namespace, mongodb-operator, referenced by the $OPERATOR_NAMESPACE environment variable for the Kubernetes Operator deployment.

  • Create the same $OPERATOR_NAMESPACE on each member Kubernetes cluster. This is needed so that the kubectl mongodb plugin can create a service account for the Kubernetes Operator on each member cluster. The Kubernetes Operator uses these service accounts on the operator cluster to perform operations on each member cluster.

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite

    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${OPERATOR_NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${OPERATOR_NAMESPACE}" istio-injection=enabled --overwrite
  • On each member cluster, including the member cluster that serves as the operator cluster, create another, separate namespace, mongodb. The Kubernetes Operator uses this namespace for Ops Manager resources and components.

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" create namespace "${NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" create namespace "${NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite

    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" create namespace "${NAMESPACE}"
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" label namespace "${NAMESPACE}" istio-injection=enabled --overwrite
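Optionally, you can confirm that both namespaces exist and carry the istio-injection=enabled label, for example on the first member cluster:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" get namespace "${OPERATOR_NAMESPACE}" "${NAMESPACE}" --show-labels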
3

This step is optional if you use official Helm charts and images from the Quay registry.

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic "image-registries-secret" \
  --from-file=.dockerconfigjson="${HOME}/.docker/config.json" --type=kubernetes.io/dockerconfigjson
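If you created these secrets, you can optionally confirm that they exist in each namespace before installing the Kubernetes Operator:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get secret image-registries-secret
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get secret image-registries-secret
kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get secret image-registries-secret
kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" get secret image-registries-secret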
4

The following optional scripts verify whether the service mesh is configured correctly for cross-cluster DNS resolution and connectivity.

  1. Run this script on cluster 0:

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: echoserver0
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: echoserver0
      template:
        metadata:
          labels:
            app: echoserver0
        spec:
          containers:
          - image: k8s.gcr.io/echoserver:1.10
            imagePullPolicy: Always
            name: echoserver0
            ports:
            - containerPort: 8080
    EOF
  2. Run this script on cluster 1:

    kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: echoserver1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: echoserver1
      template:
        metadata:
          labels:
            app: echoserver1
        spec:
          containers:
          - image: k8s.gcr.io/echoserver:1.10
            imagePullPolicy: Always
            name: echoserver1
            ports:
            - containerPort: 8080
    EOF
  3. Run this script on cluster 2:

    kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: echoserver2
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: echoserver2
      template:
        metadata:
          labels:
            app: echoserver2
        spec:
          containers:
          - image: k8s.gcr.io/echoserver:1.10
            imagePullPolicy: Always
            name: echoserver2
            ports:
            - containerPort: 8080
    EOF
  4. Run this script to wait for the creation of StatefulSets:

    kubectl wait --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver0-0 --timeout=60s
    kubectl wait --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver1-0 --timeout=60s
    kubectl wait --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" --for=condition=ready pod -l statefulset.kubernetes.io/pod-name=echoserver2-0 --timeout=60s
  5. Create Pod service on cluster 0:

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver0-0
    spec:
      ports:
      - port: 8080
        targetPort: 8080
        protocol: TCP
      selector:
        statefulset.kubernetes.io/pod-name: "echoserver0-0"
    EOF
  6. Create Pod service on cluster 1:

    kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver1-0
    spec:
      ports:
      - port: 8080
        targetPort: 8080
        protocol: TCP
      selector:
        statefulset.kubernetes.io/pod-name: "echoserver1-0"
    EOF
  7. Create Pod service on cluster 2:

    kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver2-0
    spec:
      ports:
      - port: 8080
        targetPort: 8080
        protocol: TCP
      selector:
        statefulset.kubernetes.io/pod-name: "echoserver2-0"
    EOF
  8. Create round robin service on cluster 0:

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver
    spec:
      ports:
      - port: 8080
        targetPort: 8080
        protocol: TCP
      selector:
        app: echoserver0
    EOF
  9. Create round robin service on cluster 1:

    kubectl apply --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver
    spec:
      ports:
      - port: 8080
        targetPort: 8080
        protocol: TCP
      selector:
        app: echoserver1
    EOF
  10. Create round robin service on cluster 2:

    kubectl apply --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver
    spec:
      ports:
      - port: 8080
        targetPort: 8080
        protocol: TCP
      selector:
        app: echoserver2
    EOF
  11. Verify Pod 0 from cluster 1:

    source_cluster=${K8S_CLUSTER_1_CONTEXT_NAME}
    target_pod="echoserver0-0"
    source_pod="echoserver1-0"
    target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
    echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
    out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
      /bin/bash -c "curl -v ${target_url}" 2>&1);
    grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

    Checking cross-cluster DNS resolution and connectivity from echoserver1-0 in gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1 to echoserver0-0
    SUCCESS
  12. Verify Pod 1 from cluster 0:

    source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
    target_pod="echoserver1-0"
    source_pod="echoserver0-0"
    target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
    echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
    out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
      /bin/bash -c "curl -v ${target_url}" 2>&1);
    grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

    Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver1-0
    SUCCESS
  13. Verify Pod 1 from cluster 2:

    source_cluster=${K8S_CLUSTER_2_CONTEXT_NAME}
    target_pod="echoserver1-0"
    source_pod="echoserver2-0"
    target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
    echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
    out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
      /bin/bash -c "curl -v ${target_url}" 2>&1);
    grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

    Checking cross-cluster DNS resolution and connectivity from echoserver2-0 in gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2 to echoserver1-0
    SUCCESS
  14. Verify Pod 2 from cluster 0:

    source_cluster=${K8S_CLUSTER_0_CONTEXT_NAME}
    target_pod="echoserver2-0"
    source_pod="echoserver0-0"
    target_url="http://${target_pod}.${NAMESPACE}.svc.cluster.local:8080"
    echo "Checking cross-cluster DNS resolution and connectivity from ${source_pod} in ${source_cluster} to ${target_pod}"
    out=$(kubectl exec --context "${source_cluster}" -n "${NAMESPACE}" "${source_pod}" -- \
      /bin/bash -c "curl -v ${target_url}" 2>&1);
    grep "Hostname: ${target_pod}" &>/dev/null <<< "${out}" && echo "SUCCESS" || (echo "ERROR: ${out}" && return 1)

    Checking cross-cluster DNS resolution and connectivity from echoserver0-0 in gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0 to echoserver2-0
    SUCCESS
  15. Run the cleanup script:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver0
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver1
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete statefulset echoserver2
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver0-0
    kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver1-0
    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" delete service echoserver2-0
5

In this step, you use the kubectl mongodb plugin to automate the Kubernetes cluster configuration that is necessary for the Kubernetes Operator to manage workloads on multiple Kubernetes clusters.

Because you configure the Kubernetes clusters before you install the Kubernetes Operator, when you deploy the Kubernetes Operator for the multi-Kubernetes cluster operation, all the necessary multi-cluster configuration is already in place.

As stated in the Overview, the Kubernetes Operator has the configuration for three member clusters that you can use to deploy Ops Manager and MongoDB databases. The first cluster is also used as the operator cluster, where you install the Kubernetes Operator and deploy the custom resources.

kubectl mongodb multicluster setup \
  --central-cluster="${K8S_CLUSTER_0_CONTEXT_NAME}" \
  --member-clusters="${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}" \
  --member-cluster-namespace="${NAMESPACE}" \
  --central-cluster-namespace="${OPERATOR_NAMESPACE}" \
  --create-service-account-secrets \
  --install-database-roles=true \
  --image-pull-secrets=image-registries-secret

Build: 1f23ae48c41d208f14c860356e483ba386a3aab8, 2024-04-26T12:19:36Z
Ensured namespaces exist in all clusters.
creating central cluster roles in cluster: gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
creating member roles in cluster: gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
Ensured ServiceAccounts and Roles.
Creating KubeConfig secret mongodb-operator/mongodb-enterprise-operator-multi-cluster-kubeconfig in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
Ensured database Roles in member clusters.
Creating Member list Configmap mongodb-operator/mongodb-enterprise-operator-member-list in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
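Optionally, inspect the objects that the plugin reports creating in the operator cluster, such as the kubeconfig Secret and the member list ConfigMap named in the output above:

kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get secret mongodb-enterprise-operator-multi-cluster-kubeconfig
kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get configmap mongodb-enterprise-operator-member-list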
6
  1. Install the Kubernetes Operator into the $OPERATOR_NAMESPACE, configured to watch $NAMESPACE and to manage three member Kubernetes clusters. At this point in the procedure, ServiceAccounts and roles are already deployed by the kubectl mongodb plugin. Therefore, the following scripts skip configuring them and set operator.createOperatorServiceAccount=false and operator.createResourcesServiceAccountsAndRoles=false. The scripts specify the multiCluster.clusters setting to instruct the Helm chart to deploy the Kubernetes Operator in multi-cluster mode.

    helm upgrade --install \
      --debug \
      --kube-context "${K8S_CLUSTER_0_CONTEXT_NAME}" \
      mongodb-enterprise-operator-multi-cluster \
      "${OPERATOR_HELM_CHART}" \
      --namespace="${OPERATOR_NAMESPACE}" \
      --set namespace="${OPERATOR_NAMESPACE}" \
      --set operator.namespace="${OPERATOR_NAMESPACE}" \
      --set operator.watchNamespace="${NAMESPACE}" \
      --set operator.name=mongodb-enterprise-operator-multi-cluster \
      --set operator.createOperatorServiceAccount=false \
      --set operator.createResourcesServiceAccountsAndRoles=false \
      --set "multiCluster.clusters={${K8S_CLUSTER_0_CONTEXT_NAME},${K8S_CLUSTER_1_CONTEXT_NAME},${K8S_CLUSTER_2_CONTEXT_NAME}}" \
      --set "${OPERATOR_ADDITIONAL_HELM_VALUES:-"dummy=value"}"
    1Release "mongodb-enterprise-operator-multi-cluster" does not exist. Installing it now.
    2NAME: mongodb-enterprise-operator-multi-cluster
    3LAST DEPLOYED: Tue Apr 30 19:40:26 2024
    4NAMESPACE: mongodb-operator
    5STATUS: deployed
    6REVISION: 1
    7TEST SUITE: None
    8USER-SUPPLIED VALUES:
    9dummy: value
    10multiCluster:
    11 clusters:
    12 - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    13 - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
    14 - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
    15namespace: mongodb-operator
    16operator:
    17 createOperatorServiceAccount: false
    18 createResourcesServiceAccountsAndRoles: false
    19 name: mongodb-enterprise-operator-multi-cluster
    20 namespace: mongodb-operator
    21 watchNamespace: mongodb
    22
    23COMPUTED VALUES:
    24agent:
    25 name: mongodb-agent-ubi
    26 version: 107.0.0.8502-1
    27database:
    28 name: mongodb-enterprise-database-ubi
    29 version: 1.25.0
    30dummy: value
    31initAppDb:
    32 name: mongodb-enterprise-init-appdb-ubi
    33 version: 1.25.0
    34initDatabase:
    35 name: mongodb-enterprise-init-database-ubi
    36 version: 1.25.0
    37initOpsManager:
    38 name: mongodb-enterprise-init-ops-manager-ubi
    39 version: 1.25.0
    40managedSecurityContext: false
    41mongodb:
    42 appdbAssumeOldFormat: false
    43 imageType: ubi8
    44 name: mongodb-enterprise-server
    45 repo: quay.io/mongodb
    46mongodbLegacyAppDb:
    47 name: mongodb-enterprise-appdb-database-ubi
    48 repo: quay.io/mongodb
    49multiCluster:
    50 clusterClientTimeout: 10
    51 clusters:
    52 - gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    53 - gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
    54 - gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
    55 kubeConfigSecretName: mongodb-enterprise-operator-multi-cluster-kubeconfig
    56 performFailOver: true
    57namespace: mongodb-operator
    58operator:
    59 additionalArguments: []
    60 affinity: {}
    61 createOperatorServiceAccount: false
    62 createResourcesServiceAccountsAndRoles: false
    63 deployment_name: mongodb-enterprise-operator
    64 env: prod
    65 mdbDefaultArchitecture: non-static
    66 name: mongodb-enterprise-operator-multi-cluster
    67 namespace: mongodb-operator
    68 nodeSelector: {}
    69 operator_image_name: mongodb-enterprise-operator-ubi
    70 replicas: 1
    71 resources:
    72 limits:
    73 cpu: 1100m
    74 memory: 1Gi
    75 requests:
    76 cpu: 500m
    77 memory: 200Mi
    78 tolerations: []
    79 vaultSecretBackend:
    80 enabled: false
    81 tlsSecretRef: ""
    82 version: 1.25.0
    83 watchNamespace: mongodb
    84 watchedResources:
    85 - mongodb
    86 - opsmanagers
    87 - mongodbusers
    88 webhook:
    89 registerConfiguration: true
    90opsManager:
    91 name: mongodb-enterprise-ops-manager-ubi
    92registry:
    93 agent: quay.io/mongodb
    94 appDb: quay.io/mongodb
    95 database: quay.io/mongodb
    96 imagePullSecrets: null
    97 initAppDb: quay.io/mongodb
    98 initDatabase: quay.io/mongodb
    99 initOpsManager: quay.io/mongodb
    100 operator: quay.io/mongodb
    101 opsManager: quay.io/mongodb
    102 pullPolicy: Always
    103subresourceEnabled: true
    104
    105HOOKS:
    106MANIFEST:
    107---
    108# Source: enterprise-operator/templates/operator-roles.yaml
    109kind: ClusterRole
    110apiVersion: rbac.authorization.k8s.io/v1
    111metadata:
    112 name: mongodb-enterprise-operator-mongodb-webhook
    113rules:
    114 - apiGroups:
    115 - "admissionregistration.k8s.io"
    116 resources:
    117 - validatingwebhookconfigurations
    118 verbs:
    119 - get
    120 - create
    121 - update
    122 - delete
    123 - apiGroups:
    124 - ""
    125 resources:
    126 - services
    127 verbs:
    128 - get
    129 - list
    130 - watch
    131 - create
    132 - update
    133 - delete
    134---
    135# Source: enterprise-operator/templates/operator-roles.yaml
    136kind: ClusterRoleBinding
    137apiVersion: rbac.authorization.k8s.io/v1
    138metadata:
    139 name: mongodb-enterprise-operator-multi-cluster-mongodb-operator-webhook-binding
    140roleRef:
    141 apiGroup: rbac.authorization.k8s.io
    142 kind: ClusterRole
    143 name: mongodb-enterprise-operator-mongodb-webhook
    144subjects:
    145 - kind: ServiceAccount
    146 name: mongodb-enterprise-operator-multi-cluster
    147 namespace: mongodb-operator
    148---
    149# Source: enterprise-operator/templates/operator.yaml
    150apiVersion: apps/v1
    151kind: Deployment
    152metadata:
    153 name: mongodb-enterprise-operator-multi-cluster
    154 namespace: mongodb-operator
    155spec:
    156 replicas: 1
    157 selector:
    158 matchLabels:
    159 app.kubernetes.io/component: controller
    160 app.kubernetes.io/name: mongodb-enterprise-operator-multi-cluster
    161 app.kubernetes.io/instance: mongodb-enterprise-operator-multi-cluster
    162 template:
    163 metadata:
    164 labels:
    165 app.kubernetes.io/component: controller
    166 app.kubernetes.io/name: mongodb-enterprise-operator-multi-cluster
    167 app.kubernetes.io/instance: mongodb-enterprise-operator-multi-cluster
    168 spec:
    169 serviceAccountName: mongodb-enterprise-operator-multi-cluster
    170 securityContext:
    171 runAsNonRoot: true
    172 runAsUser: 2000
    173 containers:
    174 - name: mongodb-enterprise-operator-multi-cluster
    175 image: "quay.io/mongodb/mongodb-enterprise-operator-ubi:1.25.0"
    176 imagePullPolicy: Always
    177 args:
    178 - -watch-resource=mongodb
    179 - -watch-resource=opsmanagers
    180 - -watch-resource=mongodbusers
    181 - -watch-resource=mongodbmulticluster
    182 command:
    183 - /usr/local/bin/mongodb-enterprise-operator
    184 volumeMounts:
    185 - mountPath: /etc/config/kubeconfig
    186 name: kube-config-volume
    187 resources:
    188 limits:
    189 cpu: 1100m
    190 memory: 1Gi
    191 requests:
    192 cpu: 500m
    193 memory: 200Mi
    194 env:
    195 - name: OPERATOR_ENV
    196 value: prod
    197 - name: MDB_DEFAULT_ARCHITECTURE
    198 value: non-static
    199 - name: WATCH_NAMESPACE
    200 value: "mongodb"
    201 - name: NAMESPACE
    202 valueFrom:
    203 fieldRef:
    204 fieldPath: metadata.namespace
    205 - name: CLUSTER_CLIENT_TIMEOUT
    206 value: "10"
    207 - name: IMAGE_PULL_POLICY
    208 value: Always
    209 # Database
    210 - name: MONGODB_ENTERPRISE_DATABASE_IMAGE
    211 value: quay.io/mongodb/mongodb-enterprise-database-ubi
    212 - name: INIT_DATABASE_IMAGE_REPOSITORY
    213 value: quay.io/mongodb/mongodb-enterprise-init-database-ubi
    214 - name: INIT_DATABASE_VERSION
    215 value: 1.25.0
    216 - name: DATABASE_VERSION
    217 value: 1.25.0
    218 # Ops Manager
    219 - name: OPS_MANAGER_IMAGE_REPOSITORY
    220 value: quay.io/mongodb/mongodb-enterprise-ops-manager-ubi
    221 - name: INIT_OPS_MANAGER_IMAGE_REPOSITORY
    222 value: quay.io/mongodb/mongodb-enterprise-init-ops-manager-ubi
    223 - name: INIT_OPS_MANAGER_VERSION
    224 value: 1.25.0
    225 # AppDB
    226 - name: INIT_APPDB_IMAGE_REPOSITORY
    227 value: quay.io/mongodb/mongodb-enterprise-init-appdb-ubi
    228 - name: INIT_APPDB_VERSION
    229 value: 1.25.0
    230 - name: OPS_MANAGER_IMAGE_PULL_POLICY
    231 value: Always
    232 - name: AGENT_IMAGE
    233 value: "quay.io/mongodb/mongodb-agent-ubi:107.0.0.8502-1"
    234 - name: MDB_AGENT_IMAGE_REPOSITORY
    235 value: "quay.io/mongodb/mongodb-agent-ubi"
    236 - name: MONGODB_IMAGE
    237 value: mongodb-enterprise-server
    238 - name: MONGODB_REPO_URL
    239 value: quay.io/mongodb
    240 - name: MDB_IMAGE_TYPE
    241 value: ubi8
    242 - name: PERFORM_FAILOVER
    243 value: 'true'
    244 volumes:
    245 - name: kube-config-volume
    246 secret:
    247 defaultMode: 420
    248 secretName: mongodb-enterprise-operator-multi-cluster-kubeconfig
  2. Check the Kubernetes Operator deployment:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" rollout status deployment/mongodb-enterprise-operator-multi-cluster
    echo "Operator deployment in ${OPERATOR_NAMESPACE} namespace"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get deployments
    echo; echo "Operator pod in ${OPERATOR_NAMESPACE} namespace"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" get pods

    Waiting for deployment "mongodb-enterprise-operator-multi-cluster" rollout to finish: 0 of 1 updated replicas are available...
    deployment "mongodb-enterprise-operator-multi-cluster" successfully rolled out
    Operator deployment in mongodb-operator namespace
    NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
    mongodb-enterprise-operator-multi-cluster   1/1     1            1           12s

    Operator pod in mongodb-operator namespace
    NAME                                                         READY   STATUS    RESTARTS     AGE
    mongodb-enterprise-operator-multi-cluster-78cc97547d-nlgds   2/2     Running   1 (3s ago)   12s
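    If the rollout does not complete, the Kubernetes Operator logs are the first place to look. A minimal sketch, assuming the operator container keeps the name shown in the Helm manifest above:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${OPERATOR_NAMESPACE}" logs deployment/mongodb-enterprise-operator-multi-cluster \
      -c mongodb-enterprise-operator-multi-cluster --tail=50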
7

In this step, you enable TLS for the Application Database and the Ops Manager Application. If you don't want to use TLS, remove the TLS-related fields from the MongoDBOpsManager resources used later in this procedure: spec.security.certsSecretPrefix, spec.security.tls.ca, spec.applicationDatabase.security.certsSecretPrefix, and spec.applicationDatabase.security.tls.ca.

  1. Optional. Generate keys and certificates:

    Use the openssl command line tool to generate self-signed CAs and certificates for testing purposes.

    mkdir certs || true

    cat <<EOF >certs/ca.cnf
    [ req ]
    default_bits = 2048
    prompt = no
    default_md = sha256
    distinguished_name = dn

    [ dn ]
    C=US
    ST=New York
    L=New York
    O=Example Company
    OU=IT Department
    CN=exampleCA
    EOF

    cat <<EOF >certs/om.cnf
    [ req ]
    default_bits = 2048
    prompt = no
    default_md = sha256
    distinguished_name = dn
    req_extensions = req_ext

    [ dn ]
    C=US
    ST=New York
    L=New York
    O=Example Company
    OU=IT Department
    CN=${OPS_MANAGER_EXTERNAL_DOMAIN}

    [ req_ext ]
    subjectAltName = @alt_names
    keyUsage = critical, digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth, clientAuth

    [ alt_names ]
    DNS.1 = ${OPS_MANAGER_EXTERNAL_DOMAIN}
    DNS.2 = om-svc.${NAMESPACE}.svc.cluster.local
    EOF

    cat <<EOF >certs/appdb.cnf
    [ req ]
    default_bits = 2048
    prompt = no
    default_md = sha256
    distinguished_name = dn
    req_extensions = req_ext

    [ dn ]
    C=US
    ST=New York
    L=New York
    O=Example Company
    OU=IT Department
    CN=AppDB

    [ req_ext ]
    subjectAltName = @alt_names
    keyUsage = critical, digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth, clientAuth

    [ alt_names ]
    # multi-cluster mongod hostnames from service for each pod
    DNS.1 = *.${NAMESPACE}.svc.cluster.local
    # single-cluster mongod hostnames from headless service
    DNS.2 = *.om-db-svc.${NAMESPACE}.svc.cluster.local
    EOF

    # generate CA keypair and certificate
    openssl genrsa -out certs/ca.key 2048
    openssl req -x509 -new -nodes -key certs/ca.key -days 1024 -out certs/ca.crt -config certs/ca.cnf

    # generate OpsManager's keypair and certificate
    openssl genrsa -out certs/om.key 2048
    openssl req -new -key certs/om.key -out certs/om.csr -config certs/om.cnf

    # generate AppDB's keypair and certificate
    openssl genrsa -out certs/appdb.key 2048
    openssl req -new -key certs/appdb.key -out certs/appdb.csr -config certs/appdb.cnf

    # generate certificates signed by CA for OpsManager and AppDB
    openssl x509 -req -in certs/om.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/om.crt -days 365 -sha256 -extfile certs/om.cnf -extensions req_ext
    openssl x509 -req -in certs/appdb.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/appdb.crt -days 365 -sha256 -extfile certs/appdb.cnf -extensions req_ext
  2. Create secrets with TLS keys:

    If you prefer to use your own keys and certificates, skip the previous generation step and put the keys and certificates into the following files:

    • certs/ca.crt - CA certificates. These are not necessary when using trusted certificates.

    • certs/appdb.key - private key for the Application Database.

    • certs/appdb.crt - certificate for the Application Database.

    • certs/om.key - private key for Ops Manager.

    • certs/om.crt - certificate for Ops Manager.

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-cert \
      --cert=certs/om.crt \
      --key=certs/om.key

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret tls cert-prefix-om-db-cert \
      --cert=certs/appdb.crt \
      --key=certs/appdb.key

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap om-cert-ca --from-file="mms-ca.crt=certs/ca.crt"
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create configmap appdb-cert-ca --from-file="ca-pem=certs/ca.crt"
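    Optionally, confirm that the TLS secrets and CA ConfigMaps are in place before you deploy the MongoDBOpsManager resource:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get secret cert-prefix-om-cert cert-prefix-om-db-cert
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get configmap om-cert-ca appdb-cert-ca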
8

At this point, you have prepared the environment and the Kubernetes Operator to deploy the Ops Manager resource.

  1. Create the necessary credentials for the Ops Manager admin user that the Kubernetes Operator will create after deploying the Ops Manager Application instance:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" --namespace "${NAMESPACE}" create secret generic om-admin-user-credentials \
      --from-literal=Username="admin" \
      --from-literal=Password="Passw0rd@" \
      --from-literal=FirstName="Jane" \
      --from-literal=LastName="Doe"
  2. Deploy the simplest MongoDBOpsManager custom resource possible (with TLS enabled) on a single member cluster, which is also known as the operator cluster.

    This deployment is almost the same as for the single-cluster mode, but with spec.topology and spec.applicationDatabase.topology set to MultiCluster.

    Deploying this way shows that a single Kubernetes cluster deployment is a special case of a multi-Kubernetes cluster deployment on a single Kubernetes member cluster. You can deploy the Ops Manager Application and the Application Database on as many Kubernetes clusters as necessary from the beginning, and don't have to start with a deployment that uses only a single member Kubernetes cluster.

    At this point, you have prepared the Ops Manager deployment to span more than one Kubernetes cluster, which you will do later in this procedure.

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: om
    spec:
      topology: MultiCluster
      version: "${OPS_MANAGER_VERSION}"
      adminCredentials: om-admin-user-credentials
      security:
        certsSecretPrefix: cert-prefix
        tls:
          ca: om-cert-ca
      clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 1
      applicationDatabase:
        version: "${APPDB_VERSION}"
        topology: MultiCluster
        security:
          certsSecretPrefix: cert-prefix
          tls:
            ca: appdb-cert-ca
        clusterSpecList:
        - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
          members: 3
      backup:
        enabled: false
    EOF
  3. Wait for the Kubernetes Operator to pick up the work and reach the status.applicationDatabase.phase=Pending state. Wait for both the Application Database and Ops Manager deployments to complete.

    1echo "Waiting for Application Database to reach Pending phase..."
    2kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s
    1Waiting for Application Database to reach Pending phase...
    2mongodbopsmanager.mongodb.com/om condition met
  4. Deploy Ops Manager. The Kubernetes Operator deploys Ops Manager by performing the following steps. It:

    • Deploys the Application Database's replica set nodes and waits for the MongoDB processes in the replica set to start running.

    • Deploys the Ops Manager Application instance with the Application Database's connection string and waits for it to become ready.

    • Adds the Monitoring MongoDB Agent containers to each Application Database's Pod.

    • Waits for both the Ops Manager Application and the Application Database Pods to start running.

    1echo "Waiting for Application Database to reach Running phase..."
    2kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
    3echo; echo "Waiting for Ops Manager to reach Running phase..."
    4kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
    5echo; echo "Waiting for Application Database to reach Pending phase (enabling monitoring)..."
    6kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
    7echo "Waiting for Application Database to reach Running phase..."
    8kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=900s
    9echo; echo "Waiting for Ops Manager to reach Running phase..."
    10kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=900s
    11echo; echo "MongoDBOpsManager resource"
    12kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
    13echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    14kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    15echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    16kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    1Waiting for Application Database to reach Running phase...
    2mongodbopsmanager.mongodb.com/om condition met
    3
    4Waiting for Ops Manager to reach Running phase...
    5mongodbopsmanager.mongodb.com/om condition met
    6
    7Waiting for Application Database to reach Pending phase (enabling monitoring)...
    8mongodbopsmanager.mongodb.com/om condition met
    9Waiting for Application Database to reach Running phase...
    10mongodbopsmanager.mongodb.com/om condition met
    11
    12Waiting for Ops Manager to reach Running phase...
    13mongodbopsmanager.mongodb.com/om condition met
    14
    15MongoDBOpsManager resource
    16NAME REPLICAS VERSION STATE (OPSMANAGER) STATE (APPDB) STATE (BACKUP) AGE WARNINGS
    17om 7.0.4 Running Running Disabled 11m
    18
    19Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    20NAME READY STATUS RESTARTS AGE
    21om-0-0 2/2 Running 0 8m39s
    22om-db-0-0 4/4 Running 0 44s
    23om-db-0-1 4/4 Running 0 2m6s
    24om-db-0-2 4/4 Running 0 3m19s
    25
    26Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1

    Now that you have deployed a single-member cluster in a multi-cluster mode, you can reconfigure this deployment to span more than one Kubernetes cluster.

  5. On the second member cluster, deploy two additional Application Database replica set members and one additional instance of the Ops Manager Application:

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: om
    spec:
      topology: MultiCluster
      version: "${OPS_MANAGER_VERSION}"
      adminCredentials: om-admin-user-credentials
      security:
        certsSecretPrefix: cert-prefix
        tls:
          ca: om-cert-ca
      clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 1
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 1
      applicationDatabase:
        version: "${APPDB_VERSION}"
        topology: MultiCluster
        security:
          certsSecretPrefix: cert-prefix
          tls:
            ca: appdb-cert-ca
        clusterSpecList:
        - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
          members: 3
        - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
          members: 2
      backup:
        enabled: false
    EOF
  6. Wait for the Kubernetes Operator to pick up the work (pending phase):

    1echo "Waiting for Application Database to reach Pending phase..."
    2kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Pending opsmanager/om --timeout=30s
    1Waiting for Application Database to reach Pending phase...
    2mongodbopsmanager.mongodb.com/om condition met
  7. Wait for the Kubernetes Operator to finish deploying all components:

    1echo "Waiting for Application Database to reach Running phase..."
    2kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
    3echo; echo "Waiting for Ops Manager to reach Running phase..."
    4kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
    5echo; echo "MongoDBOpsManager resource"
    6kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
    7echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    8kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    9echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    10kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    1Waiting for Application Database to reach Running phase...
    2mongodbopsmanager.mongodb.com/om condition met
    3
    4Waiting for Ops Manager to reach Running phase...
    5mongodbopsmanager.mongodb.com/om condition met
    6
    7MongoDBOpsManager resource
    8NAME REPLICAS VERSION STATE (OPSMANAGER) STATE (APPDB) STATE (BACKUP) AGE WARNINGS
    9om 7.0.4 Pending Running Disabled 14m
    10
    11Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    12NAME READY STATUS RESTARTS AGE
    13om-0-0 2/2 Terminating 0 12m
    14om-db-0-0 4/4 Running 0 4m12s
    15om-db-0-1 4/4 Running 0 5m34s
    16om-db-0-2 4/4 Running 0 6m47s
    17
    18Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
    19NAME READY STATUS RESTARTS AGE
    20om-1-0 0/2 Init:0/2 0 0s
    21om-db-1-0 4/4 Running 0 3m24s
    22om-db-1-1 4/4 Running 0 104s
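    At this point, you can also reach the Ops Manager UI to inspect the deployment manually. A minimal sketch, assuming the Kubernetes Operator exposes the Ops Manager Application through the om-svc service referenced by OPS_MANAGER_EXTERNAL_DOMAIN and that HTTPS is served on the default port 8443:

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" port-forward svc/om-svc 8443:8443
    # Then browse to https://localhost:8443 and log in with the credentials
    # stored in the om-admin-user-credentials secret.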
9

In a multi-Kubernetes cluster deployment of the Ops Manager Application, you can configure only S3-based backup storage. This procedure refers to the S3_* variables defined in env_variables.sh.

  1. Optional. Install the MinIO Operator.

    This procedure deploys S3-compatible storage for your backups using the MinIO Operator. You can skip this step if you have AWS S3 or other S3-compatible buckets available. Adjust the S3_* variables accordingly in env_variables.sh in this case.

    kustomize build "github.com/minio/operator/resources/?timeout=120&ref=v5.0.12" | \
      kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

    kustomize build "github.com/minio/operator/examples/kustomization/tenant-tiny?timeout=120&ref=v5.0.12" | \
      kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" apply -f -

    # add two buckets to the tenant config
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "tenant-tiny" patch tenant/myminio \
      --type='json' \
      -p="[{\"op\": \"add\", \"path\": \"/spec/buckets\", \"value\": [{\"name\": \"${S3_OPLOG_BUCKET_NAME}\"}, {\"name\": \"${S3_SNAPSHOT_BUCKET_NAME}\"}]}]"
  2. Before you configure and enable backup, create secrets:

    • s3-access-secret - contains S3 credentials.

    • s3-ca-cert - contains a CA certificate that issued the bucket's server certificate. In the case of the sample MinIO deployment used in this procedure, the default Kubernetes Root CA certificate is used to sign the certificate. Because it's not a publicly trusted CA certificate, you must provide it so that Ops Manager can trust the connection.

    If you use publicly trusted certificates, you may skip this step and remove the values from the spec.backup.s3Stores.customCertificateSecretRefs and spec.backup.s3OpLogStores.customCertificateSecretRefs settings.

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-access-secret \
      --from-literal=accessKey="${S3_ACCESS_KEY}" \
      --from-literal=secretKey="${S3_SECRET_KEY}"

    # minio TLS secrets are signed with the default k8s root CA
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" create secret generic s3-ca-cert \
      --from-literal=ca.crt="$(kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n kube-system get configmap kube-root-ca.crt -o jsonpath="{.data.ca\.crt}")"
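    Optionally, verify that the MinIO tenant and the backup-related secrets are in place (the tenant check applies only if you deployed MinIO in the previous step):

    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n tenant-tiny get tenant/myminio
    kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get secret s3-access-secret s3-ca-cert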
10
  1. The Kubernetes Operator can configure and deploy all components, the Ops Manager Application, the Backup Daemon instances, and the Application Database's replica set nodes in any combination on any member clusters for which you configure the Kubernetes Operator.

    To illustrate the flexibility of the multi-Kubernetes cluster deployment configuration, deploy only one Backup Daemon instance on the third member cluster and specify zero Backup Daemon members for the first and second clusters.

    kubectl apply --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" -f - <<EOF
    apiVersion: mongodb.com/v1
    kind: MongoDBOpsManager
    metadata:
      name: om
    spec:
      topology: MultiCluster
      version: "${OPS_MANAGER_VERSION}"
      adminCredentials: om-admin-user-credentials
      security:
        certsSecretPrefix: cert-prefix
        tls:
          ca: om-cert-ca
      clusterSpecList:
      - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
        members: 1
        backup:
          members: 0
      - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
        members: 1
        backup:
          members: 0
      - clusterName: "${K8S_CLUSTER_2_CONTEXT_NAME}"
        members: 0
        backup:
          members: 1
      configuration: # to avoid configuration wizard on first login
        mms.adminEmailAddr: email@example.com
        mms.fromEmailAddr: email@example.com
        mms.ignoreInitialUiSetup: "true"
        mms.mail.hostname: smtp@example.com
        mms.mail.port: "465"
        mms.mail.ssl: "true"
        mms.mail.transport: smtp
        mms.minimumTLSVersion: TLSv1.2
        mms.replyToEmailAddr: email@example.com
      applicationDatabase:
        version: "${APPDB_VERSION}"
        topology: MultiCluster
        security:
          certsSecretPrefix: cert-prefix
          tls:
            ca: appdb-cert-ca
        clusterSpecList:
        - clusterName: "${K8S_CLUSTER_0_CONTEXT_NAME}"
          members: 3
        - clusterName: "${K8S_CLUSTER_1_CONTEXT_NAME}"
          members: 2
      backup:
        enabled: true
        s3Stores:
        - name: my-s3-block-store
          s3SecretRef:
            name: "s3-access-secret"
          pathStyleAccessEnabled: true
          s3BucketEndpoint: "${S3_ENDPOINT}"
          s3BucketName: "${S3_SNAPSHOT_BUCKET_NAME}"
          customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
        s3OpLogStores:
        - name: my-s3-oplog-store
          s3SecretRef:
            name: "s3-access-secret"
          s3BucketEndpoint: "${S3_ENDPOINT}"
          s3BucketName: "${S3_OPLOG_BUCKET_NAME}"
          pathStyleAccessEnabled: true
          customCertificateSecretRefs:
          - name: s3-ca-cert
            key: ca.crt
    EOF
  2. Wait until the Kubernetes Operator finishes its configuration:

    1echo; echo "Waiting for Backup to reach Running phase..."
    2kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.backup.phase}'=Running opsmanager/om --timeout=1200s
    3echo "Waiting for Application Database to reach Running phase..."
    4kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.applicationDatabase.phase}'=Running opsmanager/om --timeout=600s
    5echo; echo "Waiting for Ops Manager to reach Running phase..."
    6kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" wait --for=jsonpath='{.status.opsManager.phase}'=Running opsmanager/om --timeout=600s
    7echo; echo "MongoDBOpsManager resource"
    8kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get opsmanager/om
    9echo; echo "Pods running in cluster ${K8S_CLUSTER_0_CONTEXT_NAME}"
    10kubectl --context "${K8S_CLUSTER_0_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    11echo; echo "Pods running in cluster ${K8S_CLUSTER_1_CONTEXT_NAME}"
    12kubectl --context "${K8S_CLUSTER_1_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    13echo; echo "Pods running in cluster ${K8S_CLUSTER_2_CONTEXT_NAME}"
    14kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" get pods
    1Waiting for Backup to reach Running phase...
    2mongodbopsmanager.mongodb.com/om condition met
    3Waiting for Application Database to reach Running phase...
    4mongodbopsmanager.mongodb.com/om condition met
    5
    6Waiting for Ops Manager to reach Running phase...
    7mongodbopsmanager.mongodb.com/om condition met
    8
    9MongoDBOpsManager resource
    10NAME REPLICAS VERSION STATE (OPSMANAGER) STATE (APPDB) STATE (BACKUP) AGE WARNINGS
    11om 7.0.4 Running Running Running 22m
    12
    13Pods running in cluster gke_scratch-kubernetes-team_europe-central2-a_k8s-mdb-0
    14NAME READY STATUS RESTARTS AGE
    15om-0-0 2/2 Running 0 7m10s
    16om-db-0-0 4/4 Running 0 11m
    17om-db-0-1 4/4 Running 0 13m
    18om-db-0-2 4/4 Running 0 14m
    19
    20Pods running in cluster gke_scratch-kubernetes-team_europe-central2-b_k8s-mdb-1
    21NAME READY STATUS RESTARTS AGE
    22om-1-0 2/2 Running 0 4m8s
    23om-db-1-0 4/4 Running 0 11m
    24om-db-1-1 4/4 Running 0 9m25s
    25
    26Pods running in cluster gke_scratch-kubernetes-team_europe-central2-c_k8s-mdb-2
    27NAME READY STATUS RESTARTS AGE
    28om-2-backup-daemon-0 2/2 Running 0 2m5s
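    As a quick, optional check of where the Backup Daemon landed, list the StatefulSets that the Kubernetes Operator created on the third member cluster:

    kubectl --context "${K8S_CLUSTER_2_CONTEXT_NAME}" -n "${NAMESPACE}" get statefulsets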
11

Run the following script to delete the GKE clusters and clean up your environment.

Important

The following commands are not reversible. They delete all clusters referenced in env_variables.sh. Don't run these commands if you wish to retain the GKE clusters, for example, if you didn't create the GKE clusters as part of this procedure.

yes | gcloud container clusters delete "${K8S_CLUSTER_0}" --zone="${K8S_CLUSTER_0_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_1}" --zone="${K8S_CLUSTER_1_ZONE}" &
yes | gcloud container clusters delete "${K8S_CLUSTER_2}" --zone="${K8S_CLUSTER_2_ZONE}" &
wait