MongoDB Enterprise Kubernetes Operator

Multi-Kubernetes-Cluster Quick Start

On this page

  • Prefer to Learn by Watching?
  • Prerequisites
  • Review the General Prerequisites
  • Set Environment Variables and GKE Zones
  • Set up GKE Clusters
  • Obtain User Authentication Credentials for Central and Member Clusters
  • Deploy a MongoDBMultiCluster Resource
  • Next Steps

Use the quick start to deploy a MongoDB replica set across three Kubernetes member clusters, using GKE (Google Kubernetes Engine) and the Istio service mesh.

Before you begin:

  • Learn about multi-Kubernetes-cluster deployments

  • Review the list of multi-Kubernetes-cluster services and tools

  • Complete the prerequisites

Note

The following procedures scope your multi-Kubernetes cluster MongoDB deployment to a single namespace named mongodb. You can configure your multi-Kubernetes cluster MongoDB deployment to watch resources in multiple namespaces or all namespaces.

Follow along with this video tutorial walk-through that demonstrates how to create a multi-Kubernetes cluster MongoDB deployment.

Duration: 12 Minutes

Deploying a MongoDB Replica Set across Multiple Kubernetes Clusters

Before you create a multi-Kubernetes cluster MongoDB deployment using the quick start, complete the following tasks:

Ensure that you meet the general prerequisites before you proceed. To learn more, see General Prerequisites.

Set the environment variables with cluster names and the available GKE zones where you deploy the clusters, as in this example:

export MDB_GKE_PROJECT={GKE project name}
export MDB_CENTRAL_CLUSTER="mdb-central"
export MDB_CLUSTER_1="mdb-1"
export MDB_CLUSTER_2="mdb-2"
export MDB_CLUSTER_3="mdb-3"
export MDB_CENTRAL_CLUSTER_ZONE="us-west1-a"
export MDB_CLUSTER_1_ZONE="us-west1-b"
export MDB_CLUSTER_2_ZONE="us-east1-b"
export MDB_CLUSTER_3_ZONE="us-central1-a"
export MDB_CENTRAL_CLUSTER_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CENTRAL_CLUSTER_ZONE}_${MDB_CENTRAL_CLUSTER}"
export MDB_CLUSTER_1_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_1_ZONE}_${MDB_CLUSTER_1}"
export MDB_CLUSTER_2_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_2_ZONE}_${MDB_CLUSTER_2}"
export MDB_CLUSTER_3_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_3_ZONE}_${MDB_CLUSTER_3}"
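The *_FULL_NAME values are the kubectl context names that GKE generates in the form gke_<project>_<zone>_<cluster>. Before you proceed, you can optionally sanity-check that every variable the later steps rely on is set; this sketch uses illustrative example values in place of your own project and cluster names:

```shell
# Optional sanity check: confirm the variables used by later steps are all set.
# "my-project" and "mdb-central" are illustrative; substitute your own values.
export MDB_GKE_PROJECT="my-project"
export MDB_CENTRAL_CLUSTER="mdb-central"
export MDB_CENTRAL_CLUSTER_ZONE="us-west1-a"
export MDB_CENTRAL_CLUSTER_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CENTRAL_CLUSTER_ZONE}_${MDB_CENTRAL_CLUSTER}"

for value in "$MDB_GKE_PROJECT" "$MDB_CENTRAL_CLUSTER" \
             "$MDB_CENTRAL_CLUSTER_ZONE" "$MDB_CENTRAL_CLUSTER_FULL_NAME"; do
  if [ -z "$value" ]; then
    echo "unset variable detected; re-run the exports above" >&2
    exit 1
  fi
done

# The composed context name should match the gke_<project>_<zone>_<cluster> form.
echo "$MDB_CENTRAL_CLUSTER_FULL_NAME"
```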

Set up GKE (Google Kubernetes Engine) clusters:

1

If you have not done so already, create a Google Cloud project, enable billing on the project, enable the Artifact Registry and GKE APIs, and launch Cloud Shell by following the relevant procedures in the Google Kubernetes Engine Quickstart in the Google Cloud documentation.

2

Create one central cluster and one or more member clusters, specifying the GKE zones, the number of nodes, and the instance types, as in these examples:

gcloud container clusters create $MDB_CENTRAL_CLUSTER \
--zone=$MDB_CENTRAL_CLUSTER_ZONE \
--num-nodes=5 \
--machine-type "e2-standard-2"
gcloud container clusters create $MDB_CLUSTER_1 \
--zone=$MDB_CLUSTER_1_ZONE \
--num-nodes=5 \
--machine-type "e2-standard-2"
gcloud container clusters create $MDB_CLUSTER_2 \
--zone=$MDB_CLUSTER_2_ZONE \
--num-nodes=5 \
--machine-type "e2-standard-2"
gcloud container clusters create $MDB_CLUSTER_3 \
--zone=$MDB_CLUSTER_3_ZONE \
--num-nodes=5 \
--machine-type "e2-standard-2"

Obtain user authentication credentials for the central and member Kubernetes clusters and save the credentials. You will later use these credentials for running kubectl commands on these clusters.

Run the following commands:

gcloud container clusters get-credentials $MDB_CENTRAL_CLUSTER \
--zone=$MDB_CENTRAL_CLUSTER_ZONE
gcloud container clusters get-credentials $MDB_CLUSTER_1 \
--zone=$MDB_CLUSTER_1_ZONE
gcloud container clusters get-credentials $MDB_CLUSTER_2 \
--zone=$MDB_CLUSTER_2_ZONE
gcloud container clusters get-credentials $MDB_CLUSTER_3 \
--zone=$MDB_CLUSTER_3_ZONE
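The four get-credentials calls all follow one pattern, so you can also drive them from a loop. This is a hedged sketch that only prints each command rather than running it (drop the echo to execute gcloud for real); the cluster names and zones are the example values from the earlier exports:

```shell
# Illustrative loop form of the four get-credentials calls above.
# Each entry pairs a cluster name with its zone; the echo prints the
# gcloud command that would run instead of executing it.
MDB_CENTRAL_CLUSTER="mdb-central"; MDB_CENTRAL_CLUSTER_ZONE="us-west1-a"
MDB_CLUSTER_1="mdb-1"; MDB_CLUSTER_1_ZONE="us-west1-b"
MDB_CLUSTER_2="mdb-2"; MDB_CLUSTER_2_ZONE="us-east1-b"
MDB_CLUSTER_3="mdb-3"; MDB_CLUSTER_3_ZONE="us-central1-a"

for pair in \
    "$MDB_CENTRAL_CLUSTER:$MDB_CENTRAL_CLUSTER_ZONE" \
    "$MDB_CLUSTER_1:$MDB_CLUSTER_1_ZONE" \
    "$MDB_CLUSTER_2:$MDB_CLUSTER_2_ZONE" \
    "$MDB_CLUSTER_3:$MDB_CLUSTER_3_ZONE"; do
  cluster="${pair%%:*}"   # text before the colon
  zone="${pair##*:}"      # text after the colon
  echo gcloud container clusters get-credentials "$cluster" --zone="$zone"
done
```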

Select the appropriate tab based on whether you want to encrypt replica set connections in your multi-Kubernetes cluster MongoDB deployments using TLS certificates.

You can use the following procedures in this TLS-Encrypted Connections tab:

  • Deploy a MongoDBMultiCluster resource

  • Renew TLS Certificates for a MongoDBMultiCluster resource

These procedures establish TLS-encrypted connections between MongoDB hosts in a replica set, and between client applications and MongoDB deployments.

Before you begin, you must have valid certificates for TLS encryption.

1

Run the kubectl command to create a new secret that stores the MongoDBMultiCluster resource certificate:

kubectl --context $MDB_CENTRAL_CLUSTER_FULL_NAME \
--namespace=<metadata.namespace> \
create secret tls <prefix>-<metadata.name>-cert \
--cert=<resource-tls-cert> \
--key=<resource-tls-key>

Note

You must prefix your secrets with <prefix>-<metadata.name>.

For example, if you call your deployment my-deployment and you set the prefix to mdb, you must name the TLS secret for the client TLS communications mdb-my-deployment-cert. Also, you must name the TLS secret for internal cluster authentication (if enabled) mdb-my-deployment-clusterfile.
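The naming rule above is mechanical, so a tiny sketch can derive both secret names; mdb and my-deployment are the example values from the note:

```shell
# Derive the required TLS secret names from the prefix and the resource name.
# "mdb" and "my-deployment" are the example values from the note above.
prefix="mdb"
resource_name="my-deployment"

tls_cert_secret="${prefix}-${resource_name}-cert"            # client TLS certificate secret
clusterfile_secret="${prefix}-${resource_name}-clusterfile"  # internal cluster auth (if enabled)

echo "$tls_cert_secret"
echo "$clusterfile_secret"
```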

2

Run the kubectl command to link your CA to your MongoDBMultiCluster resource. Specify the CA certificate file, which you must always name ca-pem, for the MongoDBMultiCluster resource:

kubectl --context $MDB_CENTRAL_CLUSTER_FULL_NAME \
--namespace=<metadata.namespace> \
create configmap custom-ca --from-file=ca-pem=<your-custom-ca-file>
3

By default, the Kubernetes Operator is scoped to the mongodb namespace. When you run the following command, the kubectl mongodb plugin:

  • Configures one central cluster and three member clusters, and creates a namespace named mongodb in each of the clusters.

  • Creates a default ConfigMap with the hard-coded name mongodb-enterprise-operator-member-list that contains all the member clusters. You can't change the ConfigMap's name.

  • Creates the service accounts and Roles required for running database workloads in the member clusters.

Run the kubectl mongodb plugin:

kubectl mongodb multicluster setup \
--central-cluster="${MDB_CENTRAL_CLUSTER_FULL_NAME}" \
--member-clusters="${MDB_CLUSTER_1_FULL_NAME},${MDB_CLUSTER_2_FULL_NAME},${MDB_CLUSTER_3_FULL_NAME}" \
--member-cluster-namespace="mongodb" \
--central-cluster-namespace="mongodb" \
--create-service-account-secrets \
--install-database-roles=true
4

If you're using Istio, run the following commands on the central cluster, specifying the context for each of the member clusters in the deployment. To enable sidecar injection in Istio, these commands add the istio-injection=enabled label to the mongodb namespace on each member cluster. If you use another service mesh, configure it to handle network traffic in the created namespaces.

kubectl label \
--context=$MDB_CLUSTER_1_FULL_NAME \
namespace mongodb \
istio-injection=enabled
kubectl label \
--context=$MDB_CLUSTER_2_FULL_NAME \
namespace mongodb \
istio-injection=enabled
kubectl label \
--context=$MDB_CLUSTER_3_FULL_NAME \
namespace mongodb \
istio-injection=enabled
5

If you have not done so already, run the following commands so that all subsequent kubectl commands run against the central cluster in the mongodb namespace.

kubectl config use-context $MDB_CENTRAL_CLUSTER_FULL_NAME
kubectl config set-context $(kubectl config current-context) \
--namespace=mongodb
6

Deploy the MongoDB Enterprise Kubernetes Operator in the central cluster in the mongodb namespace with Helm or kubectl.

  1. Add the MongoDB Helm Charts for Kubernetes repository to Helm.

    helm repo add mongodb https://mongodb.github.io/helm-charts
  2. Use the MongoDB Helm Charts for Kubernetes to deploy the Kubernetes Operator.

    helm upgrade \
    --install \
    mongodb-enterprise-operator-multi-cluster \
    mongodb/enterprise-operator \
    --namespace mongodb \
    --set namespace=mongodb \
    --version <mongodb-kubernetes-operator-version> \
    --set operator.name=mongodb-enterprise-operator-multi-cluster \
    --set operator.createOperatorServiceAccount=false \
    --set operator.createResourcesServiceAccountsAndRoles=false \
    --set "multiCluster.clusters={$MDB_CLUSTER_1_FULL_NAME,$MDB_CLUSTER_2_FULL_NAME,$MDB_CLUSTER_3_FULL_NAME}" \
    --set multiCluster.performFailover=false
  1. Apply the Kubernetes Operator custom resources.

    kubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/crds.yaml
  2. Download the Kubernetes Operator YAML template.

    curl https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/mongodb-enterprise-multi-cluster.yaml -o operator.yaml
  3. Optional: Customize the Kubernetes Operator YAML template.

    To learn about optional Kubernetes Operator installation settings, see MongoDB Enterprise Kubernetes Operator kubectl and oc Installation Settings.

  4. Apply the Kubernetes Operator YAML file.

    kubectl apply -f operator.yaml
  5. Verify that the Kubernetes Operator is deployed.

    To verify that the Kubernetes Operator installed correctly, run the following command and verify the output:

    kubectl describe deployments mongodb-enterprise-operator -n <metadata.namespace>

    On OpenShift, use oc instead:

    oc describe deployments mongodb-enterprise-operator -n <metadata.namespace>

    By default, deployments exist in the mongodb namespace. If the following error message appears, ensure you use the correct namespace:

    Error from server (NotFound): deployments.apps "mongodb-enterprise-operator" not found

    To troubleshoot your Kubernetes Operator, see Review Logs from the Kubernetes Operator and other troubleshooting topics.

    Important

    If you need to remove the Kubernetes Operator or the namespace, you first must remove MongoDB resources.

7
  1. Create a secret so that the Kubernetes Operator can create and update objects in your Ops Manager project. To learn more, see Create Credentials for the Kubernetes Operator.

  2. Create a ConfigMap to link the Kubernetes Operator to your Ops Manager project. To learn more, see Create One Project using a ConfigMap.

8
9

Set spec.credentials, spec.opsManager.configMapRef.name, and security settings and deploy the MongoDBMultiCluster resource. In the following code sample, duplicateServiceObjects is set to true to enable DNS proxying in Istio.

Note

To enable cross-cluster DNS resolution by the Istio service mesh, this tutorial creates service objects with a single ClusterIP address for each Kubernetes Pod.

kubectl apply -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBMultiCluster
metadata:
  name: multi-replica-set
spec:
  version: 6.0.0-ent
  type: ReplicaSet
  persistent: false
  duplicateServiceObjects: true
  credentials: my-credentials
  opsManager:
    configMapRef:
      name: my-project
  security:
    certsSecretPrefix: <prefix>
    tls:
      ca: custom-ca
  clusterSpecList:
    - clusterName: ${MDB_CLUSTER_1_FULL_NAME}
      members: 3
    - clusterName: ${MDB_CLUSTER_2_FULL_NAME}
      members: 2
    - clusterName: ${MDB_CLUSTER_3_FULL_NAME}
      members: 3
EOF
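The clusterSpecList above yields 3 + 2 + 3 = 8 data-bearing members in total, so losing any single member cluster still leaves at least five members running. A trivial arithmetic check of that spread:

```shell
# Member counts from the clusterSpecList above: 3 + 2 + 3.
members_cluster_1=3
members_cluster_2=2
members_cluster_3=3

total=$((members_cluster_1 + members_cluster_2 + members_cluster_3))
largest=3   # the biggest single-cluster share in this layout

echo "total members: $total"
echo "members left if the largest cluster is lost: $((total - largest))"
```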

The Kubernetes Operator copies the ConfigMap with the CA that you created in previous steps to each member cluster, generates a concatenated PEM secret, and distributes it to the member clusters.

10
  1. On each member cluster, run the following commands to verify that the MongoDB Pods are in the running state:

    kubectl get pods \
    --context=$MDB_CLUSTER_1_FULL_NAME \
    --namespace mongodb
    kubectl get pods \
    --context=$MDB_CLUSTER_2_FULL_NAME \
    --namespace mongodb
    kubectl get pods \
    --context=$MDB_CLUSTER_3_FULL_NAME \
    --namespace mongodb
  2. In the central cluster, run the following command to verify that the MongoDBMultiCluster resource is in the running state:

    kubectl --context=$MDB_CENTRAL_CLUSTER_FULL_NAME \
    --namespace mongodb \
    get mdbmc multi-replica-set -o yaml -w

Renew your TLS certificates periodically using the following procedure.

1

Run this kubectl command to renew an existing secret that stores the certificates for the MongoDBMultiCluster resource:

kubectl --context $MDB_CENTRAL_CLUSTER_FULL_NAME \
--namespace=<metadata.namespace> \
create secret tls <prefix>-<metadata.name>-cert \
--cert=<resource-tls-cert> \
--key=<resource-tls-key> \
--dry-run=client \
-o yaml |
kubectl apply -f -

This procedure doesn't encrypt connections between MongoDB hosts in a replica set, and between client applications and MongoDB deployments.

1

By default, the Kubernetes Operator is scoped to the mongodb namespace. When you run the following command, the kubectl mongodb plugin:

  • Configures one central cluster and three member clusters, and creates a namespace named mongodb in each of the clusters.

  • Creates a default ConfigMap with the hard-coded name mongodb-enterprise-operator-member-list that contains all the member clusters. You can't change the ConfigMap's name.

  • Creates the service accounts and Roles required for running database workloads in the member clusters.

Run the kubectl mongodb plugin:

kubectl mongodb multicluster setup \
--central-cluster="${MDB_CENTRAL_CLUSTER_FULL_NAME}" \
--member-clusters="${MDB_CLUSTER_1_FULL_NAME},${MDB_CLUSTER_2_FULL_NAME},${MDB_CLUSTER_3_FULL_NAME}" \
--member-cluster-namespace="mongodb" \
--central-cluster-namespace="mongodb" \
--create-service-account-secrets \
--install-database-roles=true
2

If you're using Istio, run the following commands on the central cluster, specifying the context for each of the member clusters in the deployment. To enable sidecar injection in Istio, these commands add the istio-injection=enabled label to the mongodb namespace on each member cluster. If you use another service mesh, configure it to handle network traffic in the created namespaces.

kubectl label \
--context=$MDB_CLUSTER_1_FULL_NAME \
namespace mongodb \
istio-injection=enabled
kubectl label \
--context=$MDB_CLUSTER_2_FULL_NAME \
namespace mongodb \
istio-injection=enabled
kubectl label \
--context=$MDB_CLUSTER_3_FULL_NAME \
namespace mongodb \
istio-injection=enabled
3

If you have not done so already, run the following commands so that all subsequent kubectl commands run against the central cluster in the mongodb namespace.

kubectl config use-context $MDB_CENTRAL_CLUSTER_FULL_NAME
kubectl config set-context $(kubectl config current-context) \
--namespace=mongodb
4

Deploy the MongoDB Enterprise Kubernetes Operator in the central cluster in the mongodb namespace with Helm or kubectl.

  1. Add the MongoDB Helm Charts for Kubernetes repository to Helm.

    helm repo add mongodb https://mongodb.github.io/helm-charts
  2. Use the MongoDB Helm Charts for Kubernetes to deploy the Kubernetes Operator.

    helm upgrade \
    --install \
    mongodb-enterprise-operator-multi-cluster \
    mongodb/enterprise-operator \
    --namespace mongodb \
    --set namespace=mongodb \
    --version <mongodb-kubernetes-operator-version> \
    --set operator.name=mongodb-enterprise-operator-multi-cluster \
    --set operator.createOperatorServiceAccount=false \
    --set operator.createResourcesServiceAccountsAndRoles=false \
    --set "multiCluster.clusters={$MDB_CLUSTER_1_FULL_NAME,$MDB_CLUSTER_2_FULL_NAME,$MDB_CLUSTER_3_FULL_NAME}" \
    --set multiCluster.performFailover=false
  1. Apply the Kubernetes Operator custom resources.

    kubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/crds.yaml
  2. Download the Kubernetes Operator YAML template.

    curl https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/mongodb-enterprise-multi-cluster.yaml -o operator.yaml
  3. Optional: Customize the Kubernetes Operator YAML template.

    To learn about optional Kubernetes Operator installation settings, see MongoDB Enterprise Kubernetes Operator kubectl and oc Installation Settings.

  4. Apply the Kubernetes Operator YAML file.

    kubectl apply -f operator.yaml
  5. Verify that the Kubernetes Operator is deployed.

    To verify that the Kubernetes Operator installed correctly, run the following command and verify the output:

    kubectl describe deployments mongodb-enterprise-operator -n <metadata.namespace>

    On OpenShift, use oc instead:

    oc describe deployments mongodb-enterprise-operator -n <metadata.namespace>

    By default, deployments exist in the mongodb namespace. If the following error message appears, ensure you use the correct namespace:

    Error from server (NotFound): deployments.apps "mongodb-enterprise-operator" not found

    To troubleshoot your Kubernetes Operator, see Review Logs from the Kubernetes Operator and other troubleshooting topics.

    Important

    If you need to remove the Kubernetes Operator or the namespace, you first must remove MongoDB resources.

5
  1. Create a secret so that the Kubernetes Operator can create and update objects in your Ops Manager project. To learn more, see Create Credentials for the Kubernetes Operator.

  2. Create a ConfigMap to link the Kubernetes Operator to your Ops Manager project. To learn more, see Create One Project using a ConfigMap.

6
7

Set spec.credentials and spec.opsManager.configMapRef.name, and deploy the MongoDBMultiCluster resource. In the following code sample, duplicateServiceObjects is set to true to enable DNS proxying in Istio.

Note

To enable cross-cluster DNS resolution by the Istio service mesh, this tutorial creates service objects with a single ClusterIP address for each Kubernetes Pod.

kubectl apply -f - <<EOF
apiVersion: mongodb.com/v1
kind: MongoDBMultiCluster
metadata:
  name: multi-replica-set
spec:
  version: 6.0.0-ent
  type: ReplicaSet
  persistent: false
  duplicateServiceObjects: true
  credentials: my-credentials
  opsManager:
    configMapRef:
      name: my-project
  clusterSpecList:
    - clusterName: ${MDB_CLUSTER_1_FULL_NAME}
      members: 3
    - clusterName: ${MDB_CLUSTER_2_FULL_NAME}
      members: 2
    - clusterName: ${MDB_CLUSTER_3_FULL_NAME}
      members: 3
EOF


8
  1. On each member cluster, run the following commands to verify that the MongoDB Pods are in the running state:

    kubectl get pods \
    --context=$MDB_CLUSTER_1_FULL_NAME \
    --namespace mongodb
    kubectl get pods \
    --context=$MDB_CLUSTER_2_FULL_NAME \
    --namespace mongodb
    kubectl get pods \
    --context=$MDB_CLUSTER_3_FULL_NAME \
    --namespace mongodb
  2. In the central cluster, run the following command to verify that the MongoDBMultiCluster resource is in the running state:

    kubectl --context=$MDB_CENTRAL_CLUSTER_FULL_NAME \
    --namespace mongodb \
    get mdbmc multi-replica-set -o yaml -w

After deploying your MongoDB replica set across three Kubernetes member clusters, you can add a database user so you can connect to your MongoDB database. See Manage Database Users.
