
Considerations

On this page

  • Deploy Multiple MongoDB Replica Sets
  • Specify CPU and Memory Resource Requirements
  • Consider Recommended Practices for Storage
  • Use Static Containers (Public Preview)
  • Co-locate mongos Pods with Your Applications
  • Name Your MongoDB Service with its Purpose
  • Use Labels to Differentiate Between Deployments
  • Customize the CustomResourceDefinitions that the Kubernetes Operator Watches
  • Ensure Proper Persistence Configuration
  • Set CPU and Memory Utilization Bounds for the Kubernetes Operator Pod
  • Set CPU and Memory Utilization Bounds for MongoDB Pods
  • Use Multiple Availability Zones
  • Increase Thread Count to Run Multiple Reconciliation Processes in Parallel

This page details best practices and system configuration recommendations for the MongoDB Enterprise Kubernetes Operator when running in production.

Deploy Multiple MongoDB Replica Sets

We recommend that you use a single instance of the Kubernetes Operator to deploy and manage your MongoDB replica sets.

To deploy more than 10 MongoDB replica sets in parallel, you can increase the thread count of your Kubernetes Operator instance.

Note

The following considerations apply:

  • All sizing and performance recommendations for common MongoDB deployments through the Kubernetes Operator in this section are subject to change. Do not treat these recommendations as guarantees or limitations of any kind.

  • These recommendations reflect performance testing findings and represent our suggestions for production deployments. We ran the tests on a cluster comprising seven AWS EC2 instances of type t2.2xlarge and a master node of type t2.medium.

  • The recommendations in this section don't discuss characteristics of any specific deployment. Your deployment's characteristics may differ from the assumptions made to create these recommendations. Contact MongoDB Support for further help with sizing.

Specify CPU and Memory Resource Requirements

In Kubernetes, each Pod includes parameters that allow you to specify CPU resources and memory resources for each container in the Pod.

To indicate resource bounds, Kubernetes uses the requests and limits parameters, where:

  • requests indicates a lower bound of a resource.

  • limits indicates an upper bound of a resource.
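For illustration, a generic container resources block that sets both bounds might look like the following. This is a minimal sketch of the standard Kubernetes fields, not a recommended sizing; the values are placeholders.

resources:
  requests:
    cpu: 500m      # lower bound: the scheduler reserves this much CPU for the container
    memory: 256Mi  # lower bound: reserved memory
  limits:
    cpu: "1"       # upper bound: CPU usage is throttled above this value
    memory: 512Mi  # upper bound: the container is terminated if it exceeds this memory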

The following sections illustrate how to set CPU and memory utilization bounds for the Kubernetes Operator Pod and for MongoDB Pods.

For the Pods hosting Ops Manager, use the default resource limits configurations.

Consider Recommended Practices for Storage

The following recommendations are applicable to MongoDB deployments and Ops Manager Application database deployments managed by the Kubernetes Operator.

Note

  • Redundancy in the storage class is not required because MongoDB automatically replicates data between the members of a replica set or shard.

  • You can't share storage across members of a replica set or shard.

  • Use the storage class that provides the best performance given your constraints. See scale a deployment to learn more.

  • Provision Persistent Volumes that support ReadWriteOnce for your storage class.

  • At the worker node level, apply the best practices from MongoDB on Linux.

  • Ensure that you use a storage class that supports volume expansion to enable database volume resizing.
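For example, a storage class that supports later database volume resizing could be defined as follows. This is a hedged sketch: the class name is a placeholder, and the provisioner shown (the AWS EBS CSI driver) is only one possibility; substitute whatever provisioner your cluster offers.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongodb-storage        # placeholder name
provisioner: ebs.csi.aws.com   # example CSI provisioner; yours may differ
allowVolumeExpansion: true     # required so database volumes can be resized later
parameters:
  type: gp3                    # example parameter specific to this provisioner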

Use Static Containers (Public Preview)

Static containers are simpler and more secure than non-static containers. Static containers are immutable at runtime, which means that they don't change from the image used to create the container. In addition:

  • While running, static containers don't download binaries or run scripts or other utilities over network connections. Static containers only download runtime configuration files.

  • While running, static containers don't modify any file except storage volume mounts.

  • You can run security scans on the container images to determine exactly what runs in the live container, and the running container doesn't execute binaries other than those defined in the image.

  • Static containers don't require that you host the MongoDB binary on either Ops Manager or another HTTPS server, which is especially useful if you have an air-gapped environment.

  • You can't run extensive CMD scripts for the static container.

  • You can't copy files between static containers using initContainer.

Note

When deployed as static containers, a Kubernetes Operator deployment consists of two containers - a mongodb-agent container and a mongodb-enterprise-server container. The MongoDB database custom resource inherits resource limit definitions from the mongodb-agent container, which runs the mongod process in a static container deployment. In order to modify the resource limits for the MongoDB database resource, you must specify your desired resource limits on the mongodb-agent container.

To learn more, see Static Containers (Public Preview).
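For example, to change the database resource's limits in a static container deployment, you would place them on the mongodb-agent container. The following abbreviated sketch uses illustrative values; adjust them to your workload.

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  ...
  podSpec:
    podTemplate:
      spec:
        containers:
        - name: mongodb-agent      # in a static deployment, this container runs the mongod process
          resources:
            limits:
              cpu: "1"             # illustrative value
              memory: 1Gi          # illustrative value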

Co-locate mongos Pods with Your Applications

You can run the lightweight mongos instance on the same node as your applications that use MongoDB. The Kubernetes Operator supports standard Kubernetes node affinity and anti-affinity features. Using these features, you can require that the mongos is installed on the same node as your application.

The following abbreviated example shows affinity and multiple availability zones configuration.

The podAffinity key determines whether to install an application on the same Pod, node, or data center as another application.

To specify Pod affinity:

  1. Add a label and value in the spec.podSpec.podTemplate.metadata.labels YAML collection to tag the deployment. See spec.podSpec.podTemplate.metadata, and the Kubernetes PodSpec v1 core API.

  2. Specify which label the mongos uses in the spec.mongosPodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector YAML collection. The matchExpressions collection defines the label that the Kubernetes Operator uses to identify the Pod for hosting the mongos.

Example

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  members: 3
  version: 4.2.1-ent
  service: my-service

  ...
  podTemplate:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: security
                operator: In
                values:
                - S1
            topologyKey: failure-domain.beta.kubernetes.io/zone
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/e2e-az-name
                operator: In
                values:
                - e2e-az1
                - e2e-az2
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              topologyKey: nodeId

See the full example of multiple availability zones and node affinity configuration in replica-set-affinity.yaml in the Affinity Samples directory.

This directory also contains sample affinity and multiple zones configurations for sharded clusters and standalone MongoDB deployments.


Name Your MongoDB Service with its Purpose

Set the spec.service parameter to a value that identifies this deployment's purpose, as illustrated in the following example.

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  members: 3
  version: "6.0.0-ent"
  service: drilling-pumps-geosensors
  featureCompatibilityVersion: "4.0"


Use Labels to Differentiate Between Deployments

Use the Pod affinity Kubernetes feature to:

  • Separate different MongoDB resources, such as test, staging, and production environments.

  • Place Pods on specific nodes to take advantage of node features such as SSD support.

mongosPodSpec:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: security
          operator: In
          values:
          - S1
      topologyKey: failure-domain.beta.kubernetes.io/zone


Customize the CustomResourceDefinitions that the Kubernetes Operator Watches

You can specify which custom resources you want the Kubernetes Operator to watch. This allows you to install the CustomResourceDefinition for only the resources that you want the Kubernetes Operator to manage.

You must use Helm to configure the Kubernetes Operator to watch only the custom resources you specify. Follow the relevant Helm installation instructions, but make the following adjustments:

  1. Decide which CustomResourceDefinitions you want to install. You can install any number of the following:

     Value          Description

     mongodb        Install the CustomResourceDefinitions for database resources and watch those resources.

     mongodbusers   Install the CustomResourceDefinitions for MongoDB user resources and watch those resources.

     opsmanagers    Install the CustomResourceDefinitions for Ops Manager resources and watch those resources.

  2. Install the Helm Chart and specify which CustomResourceDefinitions you want the Kubernetes Operator to watch.

    Separate each custom resource with a comma:

    helm install <deployment-name> mongodb/enterprise-operator \
    --set operator.watchedResources="{mongodb,mongodbusers}" \
    --skip-crds
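    Equivalently, if you maintain a values.yaml file for the chart rather than passing --set flags, you can list the watched resources there. This is a sketch that assumes the same operator.watchedResources setting used in the command above:

    # values.yaml: only these CustomResourceDefinitions are installed and watched
    operator:
      watchedResources:
        - mongodb
        - mongodbusers

    Then run helm install <deployment-name> mongodb/enterprise-operator -f values.yaml --skip-crds.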

Ensure Proper Persistence Configuration

The Kubernetes deployments orchestrated by the Kubernetes Operator are stateful. The Kubernetes container uses Persistent Volumes to maintain the cluster state between restarts.

To satisfy the statefulness requirement, the Kubernetes Operator performs the following actions:

  • Creates Persistent Volumes for your MongoDB deployment.

  • Mounts storage devices to one or more directories called mount points.

  • Creates one persistent volume for each MongoDB mount point.

  • Sets the default path in each Kubernetes container to /data.

To meet your MongoDB cluster's storage needs, make the following changes in your configuration for each replica set deployed with the Kubernetes Operator:

  • Verify that persistent volumes are enabled in spec.persistent. This setting defaults to true.

  • Specify a sufficient amount of storage for the Kubernetes Operator to allocate for each of the volumes. The volumes store the data and the logs.

The following abbreviated example shows recommended persistent storage sizes.

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-cluster
spec:

  ...
  persistent: true

  shardPodSpec:
    ...
    persistence:
      multiple:
        data:
          storage: "20Gi"
        logs:
          storage: "4Gi"
          storageClass: standard

For a full example of persistent volumes configuration, see replica-set-persistent-volumes.yaml in the Persistent Volumes Samples directory. This directory also contains sample persistent volumes configurations for sharded clusters and standalone deployments.

Set CPU and Memory Utilization Bounds for the Kubernetes Operator Pod

When you deploy MongoDB replica sets with the Kubernetes Operator, the initial reconciliation process increases CPU usage for the Pod running the Kubernetes Operator. However, when the replica set deployment process completes, the CPU usage by the Kubernetes Operator drops considerably.

Note

The severity of CPU usage spikes in the Kubernetes Operator is directly affected by the Kubernetes Operator's thread count, because the thread count (defined by the MDB_MAX_CONCURRENT_RECONCILES value) equals the number of reconciliation processes that can run in parallel at any given time.

For production deployments, to satisfy deploying up to 50 MongoDB replica sets or sharded clusters in parallel with the Kubernetes Operator, set the CPU and memory resources and limits for the Kubernetes Operator Pod as follows:

  • spec.template.spec.containers.resources.requests.cpu to 500m

  • spec.template.spec.containers.resources.limits.cpu to 1100m

  • spec.template.spec.containers.resources.requests.memory to 200Mi

  • spec.template.spec.containers.resources.limits.memory to 1Gi

If you use Helm to deploy resources, define these values in the values.yaml file.

The following abbreviated example shows the configuration with recommended CPU and memory bounds for the Kubernetes Operator Pod in your deployment of 50 replica sets or sharded clusters. If you are deploying fewer than 50 MongoDB clusters, you may use lower numbers in the configuration file for the Kubernetes Operator Pod.

Example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-enterprise-operator
  namespace: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/name: mongodb-enterprise-operator
      app.kubernetes.io/instance: mongodb-enterprise-operator
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/name: mongodb-enterprise-operator
        app.kubernetes.io/instance: mongodb-enterprise-operator
    spec:
      serviceAccountName: mongodb-enterprise-operator
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
      containers:
      - name: mongodb-enterprise-operator
        image: quay.io/mongodb/mongodb-enterprise-operator:1.9.2
        imagePullPolicy: Always
        args:
        - "-watch-resource=mongodb"
        - "-watch-resource=opsmanagers"
        - "-watch-resource=mongodbusers"
        command:
        - "/usr/local/bin/mongodb-enterprise-operator"
        resources:
          limits:
            cpu: 1100m
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 200Mi

For a full example of CPU and memory utilization resources and limits for the Kubernetes Operator Pod that satisfy parallel deployment of up to 50 MongoDB replica sets, see the mongodb-enterprise.yaml file.

Set CPU and Memory Utilization Bounds for MongoDB Pods

The values for Pods hosting replica sets or sharded clusters map to the requests field for CPU and memory for the created Pod. These values are consistent with considerations stated for MongoDB hosts.

The Kubernetes Operator uses its allocated memory for processing, for the WiredTiger cache, and for storing packages during the deployments.

For production deployments, set the CPU and memory resources and limits for the MongoDB Pod as follows:

  • spec.podSpec.podTemplate.spec.containers.resources.requests.cpu to 0.25

  • spec.podSpec.podTemplate.spec.containers.resources.limits.cpu to 0.25

  • spec.podSpec.podTemplate.spec.containers.resources.requests.memory to 512M

  • spec.podSpec.podTemplate.spec.containers.resources.limits.memory to 512M

If you use Helm to deploy resources, define these values in the values.yaml file.

The following abbreviated example shows the configuration with recommended CPU and memory bounds for each Pod hosting a MongoDB replica set member in your deployment.

Example

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  members: 3
  version: 4.0.0-ent
  service: my-service
  ...

  persistent: true
  podSpec:
    podTemplate:
      spec:
        containers:
        - name: mongodb-enterprise-database
          resources:
            limits:
              cpu: "0.25"
              memory: 512M

For a full example of CPU and memory utilization resources and limits for Pods hosting MongoDB replica set members, see the replica-set-podspec.yaml file in the MongoDB Podspec Samples directory.

This directory also contains sample CPU and memory limits configurations for Pods used in sharded cluster and standalone MongoDB deployments.

Use Multiple Availability Zones

Set the Kubernetes Operator and StatefulSets to distribute all members of one replica set to different nodes to ensure high availability.

The following abbreviated example shows affinity and multiple availability zones configuration.

Example

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  members: 3
  version: 4.2.1-ent
  service: my-service
  ...
  podAntiAffinityTopologyKey: nodeId
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: security
          operator: In
          values:
          - S1
      topologyKey: failure-domain.beta.kubernetes.io/zone

  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/e2e-az-name
          operator: In
          values:
          - e2e-az1
          - e2e-az2

In this example, the Kubernetes Operator schedules the Pods onto nodes that carry the label kubernetes.io/e2e-az-name with the value e2e-az1 or e2e-az2, placing them in those availability zones. Change nodeAffinity to schedule the deployment's Pods to your desired availability zones.

See the full example of multiple availability zones configuration in replica-set-affinity.yaml in the Affinity Samples directory.

This directory also contains sample affinity and multiple zones configurations for sharded clusters and standalone MongoDB deployments.

Increase Thread Count to Run Multiple Reconciliation Processes in Parallel

If you plan to deploy more than 10 MongoDB replica sets in parallel, you can configure the Kubernetes Operator to run multiple reconciliation processes in parallel. To configure a higher thread count, set the MDB_MAX_CONCURRENT_RECONCILES environment variable in your Kubernetes Operator deployment, or set the operator.maxConcurrentReconciles field in your Helm values.yaml file.
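For example, the environment variable can be added to the Kubernetes Operator Deployment roughly as follows. This abbreviated sketch reuses the container name from the Deployment shown earlier; the value 20 is illustrative, not a recommendation.

spec:
  template:
    spec:
      containers:
      - name: mongodb-enterprise-operator
        env:
        - name: MDB_MAX_CONCURRENT_RECONCILES  # number of reconciliation processes run in parallel
          value: "20"                          # illustrative value; increase cautiously and monitor load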

Increasing the thread count of the Kubernetes Operator allows you to vertically scale your Kubernetes Operator deployment to hundreds of MongoDB resources running within your Kubernetes cluster and optimize CPU utilization.

Monitor Kubernetes API server and Kubernetes Operator resource usage, and adjust their respective resource allocations if necessary.

Note

  • Proceed with caution when increasing MDB_MAX_CONCURRENT_RECONCILES beyond 10. In particular, you must monitor the Kubernetes Operator and the Kubernetes API server closely to avoid downtime resulting from increased load on those components.

    To determine the thread count that suits your deployment's needs, consider:

    • Your requirements for how responsive the Kubernetes Operator must be when reconciling many resources

    • The compute resources available within your Kubernetes environment and the total processing load your Kubernetes compute resources are under, including resources that may be unrelated to MongoDB

  • An alternative to increasing the thread count of a single Kubernetes Operator instance, while still increasing the number of MongoDB resources you can support in your Kubernetes cluster, is to deploy multiple Kubernetes Operator instances within your Kubernetes cluster. However, deploying multiple Kubernetes Operator instances requires that you ensure that no two Kubernetes Operator instances are monitoring the same MongoDB resources.

    Running more than one instance of the Kubernetes Operator should be done with care, as more Kubernetes Operator instances (especially with parallel reconciliation enabled) put the API server at greater risk of being overwhelmed.

  • Scaling of the Kubernetes API server is not a valid reason to run more than one instance of the Kubernetes Operator. If you observe that performance of the API server is affected, adding more instances of the Kubernetes Operator is likely to compound the problem.
