
Release Notes for MongoDB Enterprise Kubernetes Operator

On this page

  • MongoDB Enterprise Kubernetes Operator 1.28 Series
  • MongoDB Enterprise Kubernetes Operator 1.27 Series
  • MongoDB Enterprise Kubernetes Operator 1.26 Series
  • MongoDB Enterprise Kubernetes Operator 1.25 Series
  • MongoDB Enterprise Kubernetes Operator 1.24 Series
  • MongoDB Enterprise Kubernetes Operator 1.23 Series
  • MongoDB Enterprise Kubernetes Operator 1.22 Series
  • MongoDB Enterprise Kubernetes Operator 1.21 Series
  • Older Release Notes

MongoDB Enterprise Kubernetes Operator 1.28 Series

Released 2024-10-02

  • MongoDB, AppDB, and MongoDBMultiCluster resources: Fixes a bug where the init container wasn't getting the default security context, which was then flagged by security policies.

  • MongoDBMultiCluster resource: Fixes a bug where resource validations weren't performed as part of the reconcile loop.

MongoDB Enterprise Kubernetes Operator 1.27 Series

Released 2024-08-27

  • MongoDB resource: Adds support for enabling log rotation for MongoDB processes, monitoring agent, and backup agent. To learn more, see MongoDB CRD Log Rotation Settings.

    Use the settings described in MongoDB CRD Log Rotation Settings to configure log rotation per component.

    For sharded clusters, the Kubernetes Operator supports configuring log rotation only under the spec.agent settings, and not per process type, such as mongos or configsrv.

  • OpsManager resource: Adds support for replacing the logback.xml configuration file, which configures general logging settings, such as log rotation for Ops Manager and Ops Manager backups.

    Use the following settings (an example sketch appears at the end of this list):

    • spec.logging.logBackAccessRef: references the ConfigMap and key with the logback access configuration file to mount on the Ops Manager Pod. Name the ConfigMap's key logback-access.xml. This file configures access logging for Ops Manager.

    • spec.logging.logBackRef: references the ConfigMap and key with the logback configuration file to mount on the Ops Manager Pod. Name the ConfigMap's key logback.xml. This file configures the general logging behavior for Ops Manager, including log rotation policies, log levels, and other logging parameters.

    • spec.backup.logging.logBackAccessRef: references the ConfigMap and key with the logback access configuration file to mount on the Ops Manager Pod. Name the ConfigMap's key logback-access.xml. This file configures access logging for Ops Manager backups.

    • spec.backup.logging.logBackRef: references the ConfigMap and key with the logback configuration file to mount on the Ops Manager Pod. Name the ConfigMap's key logback.xml. This file configures the general logging behavior for Ops Manager backups, including log rotation policies, log levels, and other logging parameters.

  • Agent launcher: Fixes an issue where, under some resync scenarios, the journal data in /journal may have been corrupted. The Agent now ensures that no conflicting journal data exist and prioritizes the data from /data/journal. To deactivate this behavior, set the environment variable MDB_CLEAN_JOURNAL in the Kubernetes Operator to any value other than 1.

  • MongoDB, AppDB, and MongoDBMultiCluster resources: Fixes an issue to ensure that external domains, if you configure them, are used in the connectionString.

  • MongoDB resource: Removes the panic response when you provide a Horizon configuration with fewer entries than the number of members. The Kubernetes Operator now issues a descriptive error in the status of the MongoDB resource in such cases.

  • MongoDB resource: Fixes an issue where creating a resource in a new project whose name was a prefix of another project's name would fail, preventing the MongoDB resource from being created.
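
The following fragment sketches how the Ops Manager logging references described in this release might appear on a MongoDBOpsManager resource. It is a hedged illustration only: the ConfigMap names are placeholders, and the assumption is that each setting identifies its ConfigMap through a name field. Each ConfigMap must store its file under the key noted above (logback.xml or logback-access.xml).

    # Illustrative sketch: ConfigMap names are placeholders, and the "name" field is
    # an assumption about how each reference identifies its ConfigMap.
    spec:
      logging:
        logBackRef:
          name: om-logback                  # ConfigMap key: logback.xml
        logBackAccessRef:
          name: om-logback-access           # ConfigMap key: logback-access.xml
      backup:
        enabled: true
        logging:
          logBackRef:
            name: om-backup-logback         # ConfigMap key: logback.xml
          logBackAccessRef:
            name: om-backup-logback-access  # ConfigMap key: logback-access.xml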

MongoDB Enterprise Kubernetes Operator 1.26 Series

Released 2024-06-21

  • Improves CPU utilization and vertical scaling of the Kubernetes Operator and achieves faster reconciliation of all managed resources by allowing you to control the number of reconciliations the Kubernetes Operator can perform in parallel.

    You can set MDB_MAX_CONCURRENT_RECONCILES for the Kubernetes Operator deployment or operator.maxConcurrentReconciles in the Kubernetes Operator installation Helm chart. If not provided, the default value is 1. Increasing the number of parallel reconciliations might increase the load on Ops Manager and the Kubernetes API server in the same time window. Observe the Kubernetes Operator resource usage and adjust operator.resources.requests and operator.resources.limits if needed (see the values sketch at the end of this list). To learn more, see Resource Management for Pods and Containers in the Kubernetes documentation.

  • Adds support for OpenShift 4.15. To learn more, see MongoDB Enterprise Kubernetes Operator Compatibility.

  • Adds an operator.maxConcurrentReconciles parameter that allows you to control the number of reconciliations the Kubernetes Operator can perform in parallel. The default value is 1.

  • Adds the operator.webhook.installClusterRole parameter that controls whether to install the cluster role allowing the Kubernetes Operator to configure admission webhooks. Set this parameter to false when the cluster roles aren't allowed. The default value is true.

  • MongoDB resource: Fixes a bug where configuring a MongoDB resource with multiple entries in spec.agent.startupOptions would cause additional unnecessary reconciliation of the underlying StatefulSet.

  • MongoDB, MongoDBMultiCluster resources: Fixes a bug where the Kubernetes Operator wouldn't watch for changes in the X-509 certificates configured for MongoDB Agent authentication.

  • MongoDB resource: Fixes a bug where boolean flags passed to the MongoDB Agent couldn't be set to false if their default value was true.
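
A minimal Helm values sketch that combines the settings from this release. The structure under operator.resources assumes the standard Kubernetes requests and limits fields, and the values shown are placeholders to tune for your own workload.

    # values.yaml fragment for the Kubernetes Operator Helm chart (illustrative values)
    operator:
      maxConcurrentReconciles: 5         # default is 1
      webhook:
        installClusterRole: false        # default is true; set to false when cluster roles aren't allowed
      resources:
        requests:
          cpu: 500m                      # placeholder; observe actual usage and adjust
          memory: 512Mi
        limits:
          cpu: "1"
          memory: 1Gi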

MongoDB Enterprise Kubernetes Operator 1.25 Series

Released 2024-04-30

  • MongoDBOpsManager resource: The Kubernetes Operator no longer supports Ops Manager 5.0. Upgrade to a later version of Ops Manager. While Ops Manager 5.0 may continue to work with the Kubernetes Operator, MongoDB won't test the Kubernetes Operator against Ops Manager 5.0.

  • MongoDBOpsManager resource: Adds support for deploying the Ops Manager Application on multiple Kubernetes clusters. To learn more, see Deploy Ops Manager Resources on Multiple Kubernetes Clusters.

  • MongoDB, OpsManager resources: Introduces opt-in Static Containers (Public Preview) for all types of deployments.

    • In this release, use static containers only for testing purposes. Static containers might become the default in a later release.

    • To activate static containers mode, set the MDB_DEFAULT_ARCHITECTURE environment variable at the Kubernetes Operator level to static. Alternatively, annotate a specific MongoDB or OpsManager custom resource with mongodb.com/v1.architecture: "static". A brief example appears at the end of this list.

    • The Kubernetes Operator supports seamless migration between the static and non-static architectures. To learn more, see:

  • OpsManager resource: Adds the spec.internalConnectivity field to allow overriding the service that the Kubernetes Operator uses to ensure internal connectivity to the Pods that host the OpsManager resource.

  • MongoDB resource: You can now recover a sharded cluster resource that has a broken Automation configuration. In previous releases, you could recover other types of resources, but not sharded clusters. To learn more, see Recover Resource Due to Broken Automation Configuration.

  • MongoDB, MongoDBMultiCluster resources: These resources now allow you to add placeholders in external services.

    • You can define annotations for external services managed by the Kubernetes Operator that contain placeholders, which the Kubernetes Operator automatically replaces with the proper values. Previously, the Kubernetes Operator configured the same annotations for all external services created for each Pod. Starting with this release, you can add placeholders so that the Kubernetes Operator can customize annotations in each service with values that are relevant and unique for each particular Pod. To learn more, see:

  • The kubectl mongodb plugin: Allows you to print build information when using the plugin.

  • The setup command of the kubectl mongodb plugin: Adds the registry.imagePullSecrets setting. If specified, the service accounts that the plugin creates reference the specified secret in their imagePullSecrets field.

  • Improves handling of configurations when the Kubernetes Operator watches more than one namespace, and when you install the Kubernetes Operator in a namespace that differs from the namespace in which the Kubernetes Operator watches resources.

  • Optimizes setting up roles and permissions in member Kubernetes clusters using a single service account per Kubernetes cluster with correctly configured roles and role bindings (no cluster roles are necessary) for each watched namespace.

  • Extends the existing event-based reconciliation process with a time-based reconciliation that is triggered every 24 hours. This ensures that all Monitoring Agents are upgraded in a timely manner.

  • OpenShift and OLM Operator: Removes the requirement for cluster-wide permissions. Previously, the Kubernetes Operator needed these permissions to configure admission webhooks. Starting with this release, webhooks are automatically configured by OLM.

  • Adds an optional MDB_WEBHOOK_REGISTER_CONFIGURATION environment variable for the Kubernetes Operator. The variable controls whether the Kubernetes Operator should perform automatic admission webhook configuration. The default is true. The variable is set to false for OLM and OpenShift deployments.

  • Changes the default agent.version to 107.0.0.8502-1. This changes the default Agent used in Kubernetes Operator deployments that you install using a Helm chart.

  • Adds the operator.additionalArguments variable with the default of [] to allow you to pass additional arguments for the Kubernetes Operator binary.

  • Adds the operator.createResourcesServiceAccountsAndRoles variable with the default of true to control whether to install roles and service accounts for MongoDB and OpsManager resources. When you use the kubectl mongodb plugin to configure the Kubernetes Operator for a multi-Kubernetes cluster deployment, the plugin installs all necessary roles and service accounts. In that case, to avoid clashes, don't install those roles using the Kubernetes Operator Helm chart (see the values fragment at the end of this list).

  • MongoDBMultiCluster resource: Fixes an issue where the Kubernetes Operator reported that spec.externalAccess.externalDomain and spec.clusterSpecList[*].externalAccess.externalDomains fields were required even though they weren't used. The Kubernetes Operator prematurely triggered a validation for these fields in cases where the custom resources contained a defined spec.externalAccess structure. Starting with this release, the Kubernetes Operator checks for uniqueness of external domains only when you define the external domains in spec.externalAccess.externalDomain or spec.clusterSpecList[*].externalAccess.externalDomains settings.

  • MongoDB resource: Fixes a bug where, upon deleting a MongoDB resource, the controlledFeature policies remained set on the related Ops Manager or Cloud Manager instance, which made cleanup in the UI impossible if the Kubernetes Operator was lost.

  • OpsManager resource: Fixes an issue where the admin-key secret was deleted when you removed the OpsManager custom resource. Fixing the admin-key secret deletion enables easier re-installation of Ops Manager.

  • MongoDB Readiness Probe: Fixes a misleading error message for the readiness probe: "... kubelet Readiness probe failed:...". This affects all MongoDB deployments.

  • Operator: Fixes cases where the Kubernetes Operator skipped TLS verification when communicating with the OpsManager custom resource, even if you enabled TLS.

  • Kubectl plugin: The released kubectl mongodb plugin binaries are now signed, and the signatures are published with the release assets. The public key is available at this address. The released kubectl mongodb plugin binaries are also notarized for macOS.

  • Released Images signed: All container images published for the Kubernetes Operator are cryptographically signed. This is visible in the MongoDB Quay registry. You can verify the signatures using the MongoDB public key. Released images are available at this address.
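
As a brief illustration of the static containers opt-in described in this release, the following fragments show the two mechanisms: annotating an individual custom resource, and setting the environment variable on the Kubernetes Operator container. Both are fragments only, not complete manifests.

    # Per-resource opt-in: annotate the MongoDB or OpsManager custom resource
    metadata:
      annotations:
        mongodb.com/v1.architecture: "static"

    # Operator-wide opt-in: set the environment variable on the Kubernetes Operator container
    env:
      - name: MDB_DEFAULT_ARCHITECTURE
        value: static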
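
For reference, a values fragment that shows the two new Helm variables from this release. additionalArguments is left at its default empty list, and createResourcesServiceAccountsAndRoles is set to false for the case where the kubectl mongodb plugin has already installed the roles and service accounts.

    # values.yaml fragment (illustrative)
    operator:
      additionalArguments: []                        # extra arguments for the Kubernetes Operator binary; default is []
      createResourcesServiceAccountsAndRoles: false  # default is true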

MongoDB Enterprise Kubernetes Operator 1.24 Series

Released 2023-12-21

  • Adds support for the upcoming Ops Manager 7.0.x series.

  • Fixes an issue that prevented terminating a backup correctly.

MongoDB Enterprise Kubernetes Operator 1.23 Series

Released 2023-11-13

  • Fixes an issue with scaling a replica set in a multi-Kubernetes cluster MongoDB deployment when a member cluster has lost connectivity. The fix addresses both the manual and automated recovery procedures.

  • Fixes an issue where changing the names of the Automation Agent and MongoDB audit logs prevented them from being sent to the Kubernetes Pod logs. There are no restrictions on the file names of MongoDB audit logs as of Kubernetes Operator 1.22.

  • Allows the following new log types from the mongodb-enterprise-database container to stream directly to Kubernetes logs:

    • agent-launcher-script

    • monitoring-agent

    • backup-agent

  • Fixes an issue that prevented storing the MongoDBUser resource in the namespace set in spec.mongodbResourceRef.namespace.

MongoDB Enterprise Kubernetes Operator 1.22 Series

Released 2023-09-21

The Kubernetes Operator no longer uses the Reconciling state for all custom resources. In most cases this state has been replaced with Pending and a corresponding message. If you use monitoring tools with the custom MongoDB resources deployed with the Kubernetes Operator, you might need to adjust your dashboards and alerting rules to use the Pending state name.

  • Adds support for configuring logRotate on the MongoDB Agent for the Application Database by adding the following new fields to the MongoDBOpsManager resource (a configuration sketch appears at the end of this list):

    • spec.applicationDatabase.agent.logRotate

    • spec.applicationDatabase.agent.logRotate.numTotal

    • spec.applicationDatabase.agent.logRotate.numUncompressed

    • spec.applicationDatabase.agent.logRotate.percentOfDiskspace

    • spec.applicationDatabase.agent.logRotate.sizeThresholdMB

    • spec.applicationDatabase.agent.logRotate.timeThresholdHrs

  • You can now configure the systemLog to send logs to a custom location other than the default /var/log/mongodb-mms-automation directory using the following new fields in the MongoDBOpsManager resource:

  • Improves handling of Application Database clusters in multi-Kubernetes cluster MongoDB deployments.

    In the previous release, the Kubernetes Operator required a connection to the Kubernetes cluster to scale down processes, which could block the reconciliation process during a full-cluster outage.

    In this release, the Kubernetes Operator successfully manages the remaining healthy clusters as long as they have a majority of votes to elect a primary. The Kubernetes Operator doesn't remove associated processes from the automation configuration and replica set configuration. The Kubernetes Operator deletes these processes only if you delete the corresponding cluster from spec.applicationDatabase.clusterSpecList or change the number of the cluster members to zero. When the Kubernetes Operator deletes these processes, it scales down the replica set by removing processes tied to that cluster one at a time.

  • Adds an automatic recovery mechanism for MongoDB resources when a custom resource remains in a Pending or Failed state for an extended period of time. In addition, introduces the following environment variables:

    To learn more, see Recover Resource Due to Broken Automation Configuration.

  • Allows you to route the audit logs for the MongoDB resource to the Kubernetes Pod logs. Ensure that you write the MongoDB resource's audit logs to the /var/log/mongodb-mms-automation/mongodb-audit.log file. The Pod hosting the resource monitors this file and appends its content to its Kubernetes logs.

    To send audit logs to the Kubernetes Pod logs, use the following example configuration in the MongoDB resource:

    spec:
      additionalMongodConfig:
        auditLog:
          destination: file
          format: JSON
          path: /var/log/mongodb-mms-automation/mongodb-audit.log

    The Kubernetes Operator tags audit log entries with the mongodb-audit key in the Pod logs.

    To extract audit log entries, use a command similar to the following example:

    kubectl logs -c mongodb-enterprise-database replica-set-0 | \
    jq -r 'select(.logType == "mongodb-audit") | .contents'

  • Fixes an issue where you couldn't set the spec.backup.autoTerminateOnDeletion setting to true for sharded clusters. This setting controls whether the Kubernetes Operator stops and terminates the backup when you delete a MongoDB resource. If omitted, the default value is false.
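
The following MongoDBOpsManager fragment illustrates the Application Database logRotate fields listed above. The values are placeholders only; check the CRD for the exact field types and allowed ranges.

    spec:
      applicationDatabase:
        agent:
          logRotate:
            sizeThresholdMB: 100       # illustrative values only
            timeThresholdHrs: 24
            numTotal: 10
            numUncompressed: 2
            percentOfDiskspace: 10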

MongoDB Enterprise Kubernetes Operator 1.21 Series

Released 2023-08-25

  • Renames the environment variable CURRENT_NAMESPACE to NAMESPACE. This variable tracks the namespace of the Kubernetes Operator. If you've set this variable by editing the MongoDB resources, update CURRENT_NAMESPACE to NAMESPACE while upgrading the Kubernetes Operator.

  • Fixes an issue where StatefulSet override labels failed to override the StatefulSet.

  • Supports configuring backups of the Application Database and MongoDB for the MongoDBMultiCluster resource.

  • Adds documentation for configuring a MongoDBMultiCluster resource deployment in a GitOps environment. To learn more, see Configure Resources for GitOps.

  • Adds MetadataWrapper, a labels and annotations wrapper, to the MongoDB, MongoDBMultiCluster, and MongoDBOpsManager resources. The wrapper supports overriding metadata.labels and metadata.annotations.

  • Fixes an issue that prevented setting an arbitrary port number for spec.externalConnectivity.port when using the LoadBalancer service type to expose Ops Manager externally.

  • Fixes an issue that caused Ops Manager to reject certificates by enabling the Kubernetes Operator to import the appdb-ca, which is a bundle of CAs, into the Ops Manager JVM trust store.

To see the release notes for older versions of the Kubernetes Operator, see Older Release Notes.