MongoDB Enterprise Kubernetes Operator

Prerequisites

On this page

  • Review Supported Hardware Architectures
  • Clone the MongoDB Enterprise Kubernetes Operator Repository
  • Set Environment Variables
  • Install Go and Helm
  • Understand Kubernetes Roles and Role Bindings
  • Set the Deployment's Scope
  • Plan for External Connectivity: Should You Use a Service Mesh?
  • Check Connectivity Across Clusters
  • Review the Requirements for Deploying Ops Manager
  • Prepare for TLS-Encrypted Connections
  • Choose GitOps or the kubectl MongoDB Plugin
  • Install the kubectl MongoDB Plugin
  • Configure Resources for GitOps

Before you create a multi-Kubernetes cluster MongoDB deployment using either the quick start or a deployment procedure, complete the following tasks.

To learn more about prerequisites specific to the quick start, see Quick Start Prerequisites.

Review the supported hardware architectures.

Clone the MongoDB Enterprise Kubernetes Operator repository:

git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git

Set the environment variables with cluster names where you deploy the clusters, as in this example:

export MDB_CENTRAL_CLUSTER_FULL_NAME="mdb-central"
export MDB_CLUSTER_1_FULL_NAME="mdb-1"
export MDB_CLUSTER_2_FULL_NAME="mdb-2"
export MDB_CLUSTER_3_FULL_NAME="mdb-3"

Install the following tools:

  1. Install Go v1.17 or later.

  2. Install Helm.

To use a multi-Kubernetes cluster MongoDB deployment, you must have a specific set of Kubernetes Roles, ClusterRoles, RoleBindings, ClusterRoleBindings, and ServiceAccounts, which you can configure in any of the following ways:

  • Follow the Multi-Kubernetes-Cluster Quick Start, which tells you how to use the MongoDB Plugin to automatically create the required objects and apply them to the appropriate clusters within your multi-Kubernetes cluster MongoDB deployment.

  • Use Helm to configure the required Kubernetes Roles and service accounts for each member cluster:

    helm template --show-only \
    templates/database-roles.yaml \
    mongodb/enterprise-operator \
    --set namespace=mongodb | \
    kubectl apply -f - \
    --context=$MDB_CLUSTER_1_FULL_NAME \
    --namespace mongodb
    helm template --show-only \
    templates/database-roles.yaml \
    mongodb/enterprise-operator \
    --set namespace=mongodb | \
    kubectl apply -f - \
    --context=$MDB_CLUSTER_2_FULL_NAME \
    --namespace mongodb
    helm template --show-only \
    templates/database-roles.yaml \
    mongodb/enterprise-operator \
    --set namespace=mongodb | \
    kubectl apply -f - \
    --context=$MDB_CLUSTER_3_FULL_NAME \
    --namespace mongodb
  • Manually create Kubernetes object .yaml files and add the required Kubernetes Roles and service accounts to your multi-Kubernetes cluster MongoDB deployment with the kubectl apply command. This may be necessary for certain highly automated workflows. MongoDB provides sample configuration files.

    For custom resources scoped to a subset of namespaces:

    For custom resources scoped to a cluster-wide namespace:

    Each file defines multiple resources. To support your deployment, you must replace the placeholder values in the following fields:

    • subjects.namespace in each RoleBinding or ClusterRoleBinding resource

    • metadata.namespace in each ServiceAccount resource

    After modifying the definitions, apply them by running the following command for each file:

    kubectl apply -f <fileName>
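As a minimal sketch of what such a file might contain (the names and the mongodb namespace here are illustrative placeholders; the sample configuration files in the Operator repository are authoritative for the exact objects and rules required):

```yaml
# Hypothetical minimal example; adapt names, namespaces, and rules
# to match MongoDB's sample configuration files.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongodb-enterprise-database-pods
  namespace: mongodb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mongodb-enterprise-operator
  namespace: mongodb
rules:
- apiGroups: [""]
  resources: ["secrets", "configmaps", "services"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mongodb-enterprise-operator
  namespace: mongodb
subjects:
- kind: ServiceAccount
  name: mongodb-enterprise-database-pods
  namespace: mongodb
roleRef:
  kind: Role
  name: mongodb-enterprise-operator
  apiGroup: rbac.authorization.k8s.io
```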

By default, the multi-cluster Kubernetes Operator is scoped to the namespace in which you install it. The Kubernetes Operator reconciles the MongoDBMultiCluster resource deployed in the same namespace as the Kubernetes Operator.

When you run the kubectl mongodb plugin as part of the multi-cluster quick start and don't modify the plugin's settings, the plugin:

  • Creates a default ConfigMap named mongodb-enterprise-operator-member-list that contains all the member clusters of the multi-Kubernetes cluster MongoDB deployment. This name is hard-coded and you can't change it. See Known Issues.

  • Creates ServiceAccounts, Roles, ClusterRoles, RoleBindings and ClusterRoleBindings in the central cluster and each member cluster.

  • Applies the correct permissions for service accounts.

  • Uses the preceding settings to create your multi-Kubernetes cluster MongoDB deployment.

Once the Kubernetes Operator creates the multi-Kubernetes cluster MongoDB deployment, the Kubernetes Operator starts watching MongoDB resources in the mongodb namespace.

To configure the Kubernetes Operator with the correct permissions to deploy in a subset or all namespaces, run the following command and specify the namespaces that you would like the Kubernetes Operator to watch.

kubectl mongodb multicluster setup \
--central-cluster="${MDB_CENTRAL_CLUSTER_FULL_NAME}" \
--member-clusters="${MDB_CLUSTER_1_FULL_NAME},${MDB_CLUSTER_2_FULL_NAME},${MDB_CLUSTER_3_FULL_NAME}" \
--member-cluster-namespace="mongodb2" \
--central-cluster-namespace="mongodb2" \
--create-service-account-secrets \
--cluster-scoped="true"

When you install the multi-Kubernetes cluster MongoDB deployment to multiple or all namespaces, you can configure the Kubernetes Operator to watch custom resources in those namespaces.

Note

Install and set up a single Kubernetes Operator instance and configure it to watch one, many, or all custom resources in different, non-overlapping subsets of namespaces. See also Does MongoDB support running more than one Kubernetes Operator instance?

If you set the scope for the multi-Kubernetes cluster MongoDB deployment to many namespaces, you can configure the Kubernetes Operator to watch MongoDB resources in these namespaces in the multi-Kubernetes cluster MongoDB deployment.

Set the spec.template.spec.containers.name.env.name:WATCH_NAMESPACE in the mongodb-enterprise.yaml file from the MongoDB Enterprise Kubernetes Operator GitHub Repository to the comma-separated list of namespaces that you would like the Kubernetes Operator to watch:

WATCH_NAMESPACE: "$namespace1,$namespace2,$namespace3"
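In the operator's Deployment manifest, this environment variable sits under the container spec. The following is a sketch of how that entry might look (the namespace values are placeholders; the exact manifest structure is defined by mongodb-enterprise.yaml in the repository):

```yaml
# Excerpt sketch of the operator container's env in mongodb-enterprise.yaml
spec:
  template:
    spec:
      containers:
      - name: mongodb-enterprise-operator
        env:
        - name: WATCH_NAMESPACE
          value: "$namespace1,$namespace2,$namespace3"
```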

Run the following command and replace the values in the last line with the namespaces that you would like the Kubernetes Operator to watch.

helm upgrade \
--install \
mongodb-enterprise-operator-multi-cluster \
mongodb/enterprise-operator \
--namespace mongodb \
--set namespace=mongodb \
--version <mongodb-kubernetes-operator-version> \
--set operator.name=mongodb-enterprise-operator-multi-cluster \
--set operator.createOperatorServiceAccount=false \
--set operator.createResourcesServiceAccountsAndRoles=false \
--set "multiCluster.clusters={$MDB_CLUSTER_1_FULL_NAME,$MDB_CLUSTER_2_FULL_NAME,$MDB_CLUSTER_3_FULL_NAME}" \
--set operator.watchNamespace="$namespace1,$namespace2,$namespace3"

If you set the scope for the multi-Kubernetes cluster MongoDB deployment to all namespaces instead of the default mongodb namespace, you can configure the Kubernetes Operator to watch MongoDB resources in all namespaces in the multi-Kubernetes cluster MongoDB deployment.

Set the spec.template.spec.containers.name.env.name:WATCH_NAMESPACE in mongodb-enterprise.yaml to "*". You must include the double quotation marks (") around the asterisk (*) in the YAML file.

WATCH_NAMESPACE: "*"

Run the following command:

helm upgrade \
--install \
mongodb-enterprise-operator-multi-cluster \
mongodb/enterprise-operator \
--namespace mongodb \
--set namespace=mongodb \
--version <mongodb-kubernetes-operator-version> \
--set operator.name=mongodb-enterprise-operator-multi-cluster \
--set operator.createOperatorServiceAccount=false \
--set operator.createResourcesServiceAccountsAndRoles=false \
--set "multiCluster.clusters={$MDB_CLUSTER_1_FULL_NAME,$MDB_CLUSTER_2_FULL_NAME,$MDB_CLUSTER_3_FULL_NAME}" \
--set operator.watchNamespace="*"

A service mesh enables inter-cluster communication between the replica set members deployed in different Kubernetes clusters. Using a service mesh greatly simplifies creating multi-Kubernetes cluster MongoDB deployments and is the recommended way of deploying MongoDB across multiple Kubernetes clusters. However, if your IT organization doesn't use a service mesh, you can deploy a replica set in a multi-Kubernetes cluster MongoDB deployment without it.

Regardless of the deployment type, a MongoDB deployment in Kubernetes must establish the following connections:

  • From the Ops Manager MongoDB Agent in the Pod to its mongod process, to enable MongoDB deployment's lifecycle management and monitoring.

  • From the Ops Manager MongoDB Agent in the Pod to the Ops Manager instance, to enable automation.

  • Between all mongod processes, to allow replication.

When the Kubernetes Operator deploys the MongoDB resources, it treats these connectivity requirements in the following ways, depending on the type of deployment:

  • In a single Kubernetes cluster deployment, the Kubernetes Operator configures hostnames in the replica set as FQDNs of a Headless Service. This single service resolves each Pod's FQDN directly to the IP address of the Pod hosting a MongoDB instance, as follows: <pod-name>.<replica-set-name>-svc.<namespace>.svc.cluster.local.

  • In a multi-Kubernetes cluster MongoDB deployment that uses a service mesh, the Kubernetes Operator creates a separate StatefulSet for each MongoDB replica set member in the Kubernetes cluster. A service mesh allows communication between mongod processes across distinct Kubernetes clusters.

    Using a service mesh allows the multi-Kubernetes cluster MongoDB deployment to:

    • Achieve global DNS hostname resolution across Kubernetes clusters and establish connectivity between them. For each MongoDB deployment Pod in each Kubernetes cluster, the Kubernetes Operator creates a ClusterIP service through the spec.duplicateServiceObjects: true configuration in the MongoDBMultiCluster resource. Each process has a hostname defined to this service's FQDN: <pod-name>-svc.<namespace>.svc.cluster.local. These hostnames resolve from DNS to a service's ClusterIP in each member cluster.

    • Establish communication between Pods in different Kubernetes clusters. As a result, replica set members hosted on different clusters form a single replica set across these clusters.

  • In a multi-Kubernetes cluster MongoDB deployment without a service mesh, the Kubernetes Operator uses the following MongoDBMultiCluster resource settings to expose all its mongod processes externally. This enables DNS resolution of hostnames between distinct Kubernetes clusters, and establishes connectivity between Pods routed through the networks that connect these clusters.
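The hostname patterns above can be illustrated with a short shell sketch (the Pod, replica set, and namespace names are hypothetical):

```shell
# Hypothetical names for illustration only.
pod="my-replica-set-0"
replica_set="my-replica-set"
namespace="mongodb"

# Single-cluster pattern: resolved through the headless service.
single_cluster_fqdn="${pod}.${replica_set}-svc.${namespace}.svc.cluster.local"

# Multi-cluster (service mesh) pattern: one ClusterIP service per Pod.
multi_cluster_fqdn="${pod}-svc.${namespace}.svc.cluster.local"

echo "$single_cluster_fqdn"
echo "$multi_cluster_fqdn"
```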

Install Istio in multi-primary mode on different networks by following the Istio documentation. Istio is a service mesh that simplifies DNS resolution and helps establish inter-cluster communication between the member Kubernetes clusters in a multi-Kubernetes cluster MongoDB deployment. If you can't use a service mesh, skip this section; instead, use external domains and configure DNS to enable external connectivity.

In addition, we offer the install_istio_separate_network example script. This script is based on Istio documentation and provides an example installation that uses the multi-primary mode on different networks. We don't guarantee the script's maintenance with future Istio releases. If you choose to use the script, review the latest Istio documentation for installing a multicluster, and, if necessary, adjust the script to match the documentation and your deployment. If you use another service mesh solution, create your own script for configuring separate networks to facilitate DNS resolution.

If you don't use a service mesh, do the following to enable external connectivity to and between mongod processes and the Ops Manager MongoDB Agent:

  • When you create a multi-Kubernetes cluster MongoDB deployment, use the spec.clusterSpecList.externalAccess.externalDomain setting to specify an external domain and instruct the Kubernetes Operator to configure hostnames for mongod processes in the following pattern:

    <pod-name>.<externalDomain>

    Note

    You can specify external domains only for new deployments. You can't change external domains after you configure a multi-Kubernetes cluster MongoDB deployment.

    After you configure an external domain in this way, the Ops Manager MongoDB Agents and mongod processes use this domain to connect to each other.

  • Customize external services that the Kubernetes Operator creates for each Pod in the Kubernetes cluster. Use the global configuration in the spec.externalAccess settings and Kubernetes cluster-specific overrides in the spec.clusterSpecList.externalAccess.externalService settings.

  • Configure Pod hostnames in a DNS zone to ensure that each Kubernetes Pod hosting a mongod process allows establishing an external connection to the other mongod processes in a multi-Kubernetes cluster MongoDB deployment. A Pod is considered "exposed externally" when you can connect to a mongod process by using the <pod-name>.<externalDomain> hostname on ports 27017 (this is the default database port) and 27018 (this is the database port + 1). You may also need to configure firewall rules to allow TCP traffic on ports 27017 and 27018.
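For example, a MongoDBMultiCluster resource that uses external domains might include settings like the following sketch (the resource name, cluster names, member counts, and domains are all placeholders):

```yaml
# Sketch only; names, domains, and member counts are placeholders.
apiVersion: mongodb.com/v1
kind: MongoDBMultiCluster
metadata:
  name: my-replica-set
spec:
  clusterSpecList:
  - clusterName: mdb-1
    members: 2
    externalAccess:
      externalDomain: cluster-0.example.com
  - clusterName: mdb-2
    members: 2
    externalAccess:
      externalDomain: cluster-1.example.com
```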

After you complete these prerequisites, you can deploy a multi-Kubernetes cluster without a service mesh.

Follow the steps in this procedure to verify that service FQDNs are reachable across Kubernetes clusters.

In this example, you deploy a sample application defined in sample-service.yaml across two Kubernetes clusters.

1

Create a namespace in each of the Kubernetes clusters to deploy the sample-service.yaml.

kubectl create --context="${CTX_CLUSTER_1}" namespace sample
kubectl create --context="${CTX_CLUSTER_2}" namespace sample

Note

In certain service mesh solutions, you might need to annotate or label the namespace.

2

Deploy the sample service in both Kubernetes clusters.
kubectl apply --context="${CTX_CLUSTER_1}" \
-f sample-service.yaml \
-l service=helloworld1 \
-n sample
kubectl apply --context="${CTX_CLUSTER_2}" \
-f sample-service.yaml \
-l service=helloworld2 \
-n sample
3

Deploy the v1 sample application in CLUSTER_1.
kubectl apply --context="${CTX_CLUSTER_1}" \
-f sample-service.yaml \
-l version=v1 \
-n sample
4

Check that the CLUSTER_1 hosting Pod is in the Running state.

kubectl get pod --context="${CTX_CLUSTER_1}" \
-n sample \
-l app=helloworld
5

Deploy the v2 sample application in CLUSTER_2.
kubectl apply --context="${CTX_CLUSTER_2}" \
-f sample-service.yaml \
-l version=v2 \
-n sample
6

Check that the CLUSTER_2 hosting Pod is in the Running state.

kubectl get pod --context="${CTX_CLUSTER_2}" \
-n sample \
-l app=helloworld
7

Deploy the Pod in CLUSTER_1 and check that you can reach the sample application in CLUSTER_2.

kubectl run --context="${CTX_CLUSTER_1}" \
-n sample \
curl --image=radial/busyboxplus:curl \
-i --tty \
-- curl -sS helloworld2.sample:5000/hello

You should see output similar to this example:

Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
8

Deploy the Pod in CLUSTER_2 and check that you can reach the sample application in CLUSTER_1.

kubectl run --context="${CTX_CLUSTER_2}" \
-n sample \
curl --image=radial/busyboxplus:curl \
-i --tty \
-- curl -sS helloworld1.sample:5000/hello

You should see output similar to this example:

Hello version: v1, instance: helloworld-v1-758dd55874-6x4t8

As part of the Quick Start, you deploy an Ops Manager resource on the central cluster.

If you plan to secure your multi-Kubernetes cluster MongoDB deployment using TLS encryption, complete the following tasks to enable internal cluster authentication and generate TLS certificates for member clusters and the MongoDB Agent:

Note

You must possess the CA certificate and the key that you used to sign your TLS certificates.

1

Use one of the following options:

  • Generate a wildcard TLS certificate that covers hostnames of the services that the Kubernetes Operator creates for each Pod in the deployment.

    If you generate wildcard certificates, you can continue using the same certificates when you scale up or rebalance nodes in the Kubernetes member clusters, for example for disaster recovery.

    For example, add the hostname similar to the following format to the SAN:

    *.<namespace>.svc.cluster.local
  • For each Kubernetes service that the Kubernetes Operator generates corresponding to each Pod in each member cluster, add SANs to the certificate. In your TLS certificate, the SAN for each Kubernetes service must use the following format:

    <metadata.name>-<member_cluster_index>-<n>-svc.<namespace>.svc.cluster.local

    where n ranges from 0 to clusterSpecList[member_cluster_index].members - 1.
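As a sketch, the per-service SANs could be generated with a loop like the following (the resource name, namespace, and member counts are hypothetical; substitute your own values from each clusterSpecList entry):

```shell
# Hypothetical values; substitute your resource name, namespace,
# and the members count of each member cluster.
resource="my-replica-set"
namespace="mongodb"
members=(3 2)   # members per member cluster, indexed by cluster number

sans=()
for c in "${!members[@]}"; do
  for ((n = 0; n < members[c]; n++)); do
    sans+=("${resource}-${c}-${n}-svc.${namespace}.svc.cluster.local")
  done
done
printf '%s\n' "${sans[@]}"
```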

2

For the MongoDB Agent TLS certificate:

  • The Common Name in the TLS certificate must not be empty.

  • The combined Organization and Organizational Unit in each TLS certificate must differ from the Organization and Organizational Unit in the TLS certificate for your replica set members.

To speed up creating TLS certificates for member Kubernetes clusters, we offer the setup_tls script. We don't guarantee the script's maintenance. If you choose to use the script, test it and adjust it to your needs. The script does the following:

  • Creates the cert-manager namespace in the connected cluster and installs cert-manager using Helm in the cert-manager namespace.

  • Installs a local CA using mkcert.

  • Downloads TLS certificates from downloads.mongodb.com and concatenates them with the CA file into a ca-chain file.

  • Creates a ConfigMap that includes the ca-chain files.

  • Creates an Issuer resource, which cert-manager uses to generate certificates.

  • Creates a Certificate resource, which cert-manager uses to create a key object for the certificates.

To use the script:

1

Install mkcert on the machine you plan to run this script.

2
kubectl --context $MDB_CENTRAL_CLUSTER_FULL_NAME \
--namespace=<metadata.namespace> \
3
curl https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/tools/multicluster/setup_tls.sh -o setup_tls.sh

The output includes:

  • A secret containing the CA named ca-key-pair.

  • A secret containing the server certificates on the central cluster, named clustercert-${resource}-cert.

  • A ConfigMap containing the CA certificates named issuer-ca.

4

For the MongoDB Agent TLS certificate:

  • The Common Name in the TLS certificate must not be empty.

  • The combined Organization and Organizational Unit in each TLS certificate must differ from the Organization and Organizational Unit in the TLS certificate for your replica set members.

1

Use one of the following options:

  • Generate a wildcard TLS certificate that contains all externalDomains that you created in the SAN. For example, add the hostnames similar to the following format to the SAN:

    *.cluster-0.example.com, *.cluster-1.example.com

    If you generate wildcard certificates, you can continue using them when you scale up or rebalance nodes in the Kubernetes member clusters, for example for disaster recovery.

  • Generate a TLS certificate for each MongoDB replica set member hostname in the SAN. For example, add the hostnames similar to the following to the SAN:

    my-replica-set-0-0.cluster-0.example.com,
    my-replica-set-0-1.cluster-0.example.com,
    my-replica-set-1-0.cluster-1.example.com,
    my-replica-set-1-1.cluster-1.example.com

    If you generate an individual TLS certificate that contains all the specific hostnames, you must create a new certificate each time you scale up or rebalance nodes in the Kubernetes member clusters, for example for disaster recovery.
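The hostnames in the second option follow a regular pattern, sketched here with hypothetical domains and member counts:

```shell
# Hypothetical values; substitute your resource name, external domains,
# and the members count of each member cluster.
resource="my-replica-set"
domains=("cluster-0.example.com" "cluster-1.example.com")
members=(2 2)

hostnames=()
for c in "${!domains[@]}"; do
  for ((n = 0; n < members[c]; n++)); do
    hostnames+=("${resource}-${c}-${n}.${domains[c]}")
  done
done
printf '%s\n' "${hostnames[@]}"
```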

2

For the MongoDB Agent TLS certificate:

  • The Common Name in the TLS certificate must not be empty.

  • The combined Organization and Organizational Unit in each TLS certificate must differ from the Organization and Organizational Unit in the TLS certificate for your replica set members.

Important

The Kubernetes Operator uses kubernetes.io/tls secrets to store TLS certificates and private keys for Ops Manager and MongoDB resources. Starting in Kubernetes Operator version 1.17.0, the Kubernetes Operator doesn't support concatenated PEM files stored as Opaque secrets.

You can choose to create and maintain the resource files needed for the MongoDBMultiCluster resources deployment in a GitOps environment.

If you use a GitOps workflow, you can't use the kubectl mongodb plugin, which automatically configures role-based access control (RBAC) and creates the kubeconfig file that allows the central cluster to communicate with its member clusters. Instead, you must manually configure or build your own automation for configuring the RBAC and kubeconfig files based on the procedure and examples in Configure Resources for GitOps.

The following prerequisite sections describe how to install the kubectl MongoDB plugin if you don't use GitOps or configure resources for GitOps if you do.

Use the kubectl mongodb plugin to automatically create the required Kubernetes objects, such as RBAC resources and the kubeconfig file, and apply them to the appropriate clusters in your multi-Kubernetes cluster MongoDB deployment.

Note

If you use GitOps, you can't use the kubectl mongodb plugin. Instead, follow the procedure in Configure Resources for GitOps.

To install the kubectl mongodb plugin:

1

Download your desired Kubernetes Operator package version from the Release Page of the MongoDB Enterprise Kubernetes Operator Repository.

The package's name uses this pattern: kubectl-mongodb_{{ .Version }}_{{ .Os }}_{{ .Arch }}.tar.gz.

Use one of the following packages:

  • kubectl-mongodb_{{ .Version }}_darwin_amd64.tar.gz

  • kubectl-mongodb_{{ .Version }}_darwin_arm64.tar.gz

  • kubectl-mongodb_{{ .Version }}_linux_amd64.tar.gz

  • kubectl-mongodb_{{ .Version }}_linux_arm64.tar.gz

2

Unpack the package, as in the following example:

tar -zxvf kubectl-mongodb_<version>_darwin_amd64.tar.gz
3

Find the kubectl-mongodb binary in the unpacked directory and move it to its desired destination, inside the PATH for the Kubernetes Operator user, as shown in the following example:

mv kubectl-mongodb /usr/local/bin/kubectl-mongodb

Now you can run the kubectl mongodb plugin using the following commands:

kubectl mongodb multicluster setup
kubectl mongodb multicluster recover

To learn more about the supported flags, see the MongoDB kubectl plugin Reference.

If you use a GitOps workflow, you won't be able to use the kubectl mongodb plugin to automatically configure role-based access control (RBAC) or the kubeconfig file that allows the central cluster to communicate with its member clusters. Instead, you must manually configure and apply the following resource files or build your own automation based on the information below.

Note

To learn how the kubectl mongodb plugin automates the following steps, view the code in GitHub.

To configure RBAC and the kubeconfig for GitOps:

1

Use these RBAC resource examples to create your own. To learn more about these RBAC resources, see Understand Kubernetes Roles and Role Bindings.

To apply them to your central and member clusters with GitOps, you can use a tool like Argo CD.

2

The Kubernetes Operator keeps track of its member clusters using a ConfigMap. Copy, modify, and apply the following example ConfigMap:

apiVersion: v1
kind: ConfigMap
data:
  cluster1: ""
  cluster2: ""
metadata:
  namespace: <namespace>
  name: mongodb-enterprise-operator-member-list
  labels:
    multi-cluster: "true"
3

The Kubernetes Operator, which runs in the central cluster, communicates with the Pods in the member clusters through the Kubernetes API. For this to work, the Kubernetes Operator needs a kubeconfig file that contains the service account tokens of the member clusters. Create this kubeconfig file by following these steps:

  1. Obtain a list of service accounts configured in the Kubernetes Operator's namespace. For example, if you chose to use the default mongodb namespace, then you can obtain the service accounts using the following command:

    kubectl get serviceaccounts -n mongodb
  2. Get the secret for each service account that belongs to a member cluster.

    kubectl get secret <service-account-name> -n mongodb -o yaml
  3. In each service account secret, copy the CA certificate and token. For example, copy <ca_certificate> and <token> from the secret, as shown in the following example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-service-account
      namespace: mongodb
    data:
      ca.crt: <ca_certificate>
      token: <token>
  4. Copy the following kubeconfig example for the central cluster and replace the placeholders with the <ca_certificate> and <token> you copied from the service account secrets by running the commands listed below.

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority:
        server: https://
      name: kind-e2e-cluster-1
    - cluster:
        certificate-authority:
        server: https://
      name: kind-e2e-cluster-2
    contexts:
    - context:
        cluster: kind-e2e-cluster-1
        namespace: mongodb
        user: kind-e2e-cluster-1
      name: kind-e2e-cluster-1
    - context:
        cluster: kind-e2e-cluster-2
        namespace: mongodb
        user: kind-e2e-cluster-2
      name: kind-e2e-cluster-2
    kind: Config
    users:
    - name: kind-e2e-cluster-1
      user:
        token:
    - name: kind-e2e-cluster-2
      user:
        token:

    Populate the following kubectl commands with the correct values and run them to update your example kubeconfig file.

    kubectl config --kubeconfig=kubeconfig set-cluster kind-e2e-cluster-1 --certificate-authority=<cluster-1-ca.crt>
    kubectl config --kubeconfig=kubeconfig set-cluster kind-e2e-cluster-2 --certificate-authority=<cluster-2-ca.crt>
    kubectl config --kubeconfig=kubeconfig set-credentials kind-e2e-cluster-1 --token=<cluster-1-token>
    kubectl config --kubeconfig=kubeconfig set-credentials kind-e2e-cluster-2 --token=<cluster-2-token>
  5. Create a secret in the central cluster that you mount in the Kubernetes Operator as illustrated in the reference Helm chart. For example:

    kubectl --context="${CTX_CENTRAL_CLUSTER}" -n <operator-namespace> create secret --from-file=kubeconfig=<path-to-kubeconfig-file> <kubeconfig-secret-name>
