Release Notes for MongoDB Enterprise Kubernetes Operator
On this page
- MongoDB Enterprise Kubernetes Operator 1.30 Series
- MongoDB Enterprise Kubernetes Operator 1.29 Series
- MongoDB Enterprise Kubernetes Operator 1.28 Series
- MongoDB Enterprise Kubernetes Operator 1.27 Series
- MongoDB Enterprise Kubernetes Operator 1.26 Series
- MongoDB Enterprise Kubernetes Operator 1.25 Series
- MongoDB Enterprise Kubernetes Operator 1.24 Series
- MongoDB Enterprise Kubernetes Operator 1.23 Series
- MongoDB Enterprise Kubernetes Operator 1.22 Series
- MongoDB Enterprise Kubernetes Operator 1.21 Series
- Older Release Notes
MongoDB Enterprise Kubernetes Operator 1.30 Series
MongoDB Enterprise Kubernetes Operator 1.30.0
Released 2024-12-20
New Features
MongoDB: Fixes and improves Multi-Cluster Sharded Cluster deployments (Public Preview).
MongoDB: Supports the `spec.shardOverrides` field for single-cluster topologies. We recommend using this field to customize settings for specific shards.
MongoDB: Deprecates `spec.shardSpecificPodSpec`. Use `spec.shardOverrides` to customize specific shards in both single- and multi-cluster topologies. For an example, see the mongodb-enterprise-kubernetes GitHub repository.
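As an illustration, a single-cluster sharded resource might override the member count of one shard with `spec.shardOverrides`. This is a minimal sketch: the resource name is made up, and the sub-field names under `shardOverrides` (`shardNames`, `members`) are assumptions to verify against the MongoDB resource specification.

```yaml
# Hypothetical sketch of spec.shardOverrides on a single-cluster
# sharded deployment; verify sub-field names against the CRD reference.
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: sharded-cluster
spec:
  type: ShardedCluster
  version: "8.0.0"
  shardCount: 3
  mongodsPerShardCount: 3
  shardOverrides:
    - shardNames: ["sharded-cluster-2"]  # override applies only to this shard
      members: 5                         # give this shard extra members
```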
Bug Fixes
MongoDB: Fixes the placeholder names for `mongos` in single-cluster sharded deployments with an external domain set. This fix renames `mongodProcessDomain` and `mongodProcessFQDN` to `mongosProcessDomain` and `mongosProcessFQDN`, respectively.
MongoDB, AppDB, MongoDBMultiCluster: If you lose a member cluster, you no longer receive validation errors when the failed cluster still exists in the `clusterSpecList`. This makes it easier to reconfigure your deployments during disaster recovery.
MongoDB Enterprise Kubernetes Operator 1.29 Series
MongoDB Enterprise Kubernetes Operator 1.29.0
Released 2024-11-11
New Features
AppDB: Adds support for automated expansion of storage by increasing the capacity of Kubernetes `persistentVolumeClaims`. To learn more, see Increase Storage for Persistent Volumes.
Automated expansion of `persistentVolumeClaims` is supported only if the underlying Kubernetes `storageClass` supports Kubernetes `persistentVolume` expansion. Ensure your `storageClass` supports in-place expansion without data loss.
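To check whether expansion is possible, confirm that the `StorageClass` backing the PVCs allows volume expansion. This is standard Kubernetes configuration; the class name and provisioner below are examples only.

```yaml
# A StorageClass must set allowVolumeExpansion: true for in-place
# PVC expansion to work; the provisioner shown is an example only.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-storage
provisioner: ebs.csi.aws.com   # substitute your cluster's CSI driver
allowVolumeExpansion: true     # required for PVC capacity increases
```

You can check an existing class with `kubectl get storageclass <name> -o jsonpath='{.allowVolumeExpansion}'`.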
Bug Fixes
MongoDB, AppDB, MongoDBMultiCluster: Fixes a bug where specifying a fractional number for a storage volume's size, such as `1.7Gi`, broke the reconciliation loop for that resource with the error `Can't execute update on forbidden fields`, even if the underlying Persistent Volume Claim was deployed successfully.
MongoDB, MongoDBMultiCluster, OpsManager, AppDB: Improves stability of deployments during TLS rotations. Previously, if a TLS rotation happened while the deployment's StatefulSet was reconciling, the deployment could reach a broken state. Deployments now store the previous TLS certificate alongside the new one.
MongoDB Enterprise Kubernetes Operator 1.28 Series
MongoDB Enterprise Kubernetes Operator 1.28.0
Released 2024-10-02
New Features
(Public Preview) MongoDB resource: Introduces support for sharded clusters across multi-Kubernetes cluster MongoDB deployments. Set `spec.topology=MultiCluster` when creating a MongoDB resource with `spec.type=ShardedCluster`. To learn more, see Multi-Cluster Sharded Cluster.
MongoDB and MongoDBMultiCluster resources: Adds support for automated expansion of storage by increasing the capacity of Kubernetes `persistentVolumeClaims`. To learn more, see Increase Storage for Persistent Volumes.
Automated expansion of `persistentVolumeClaims` is supported only if the underlying Kubernetes `storageClass` supports Kubernetes `persistentVolume` expansion. Ensure your `storageClass` supports in-place expansion without data loss.
OpsManager resource: Introduces support for Ops Manager 8.0.0.
MongoDB and MongoDBMultiCluster resources: Introduces support for MongoDB 8.0.0.
MongoDB and MongoDBMultiCluster application database resources: Changes the default behavior of `spec.featureCompatibilityVersion` for the database:
- When you upgrade your MongoDB version, the Kubernetes Operator sets `spec.featureCompatibilityVersion` to the version you are upgrading from, which gives you the option to downgrade if necessary.
- If you want the feature compatibility version to match the new MongoDB version, you must manually set `featureCompatibilityVersion` to either the new MongoDB version or `AlwaysMatchVersion`.
Updates container images to use Red Hat UBI 9 as the base image, except `mongodb-enterprise-database-ubi`, which remains on UBI 8 to support workloads running on MongoDB 6.0.4 and earlier.
The UBI 8 image is used only for the default non-static architecture. For a full UBI 9 setup, use static containers. To learn more, see Static Containers (Public Preview).
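The multi-cluster sharded topology introduced above is enabled by two fields on the MongoDB resource. A minimal sketch, with an illustrative resource name and without the additional `clusterSpecList` configuration a real deployment needs:

```yaml
# Sketch of a sharded cluster spread across member Kubernetes clusters
# (Public Preview). The name is illustrative; see Multi-Cluster Sharded
# Cluster for the full per-cluster configuration this topology requires.
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: multi-sharded
spec:
  type: ShardedCluster
  topology: MultiCluster   # opts in to the multi-cluster topology
  version: "8.0.0"
  shardCount: 2
```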
Bug Fixes
MongoDB, AppDB, and MongoDBMultiCluster resources: Fixes a bug where the `init` container wasn't getting the default security context, which was then flagged by security policies.
MongoDBMultiCluster resource: Fixes a bug where resource validations weren't performed as part of the reconcile loop.
MongoDB Enterprise Kubernetes Operator 1.27 Series
MongoDB Enterprise Kubernetes Operator 1.27.0
Released 2024-08-27
New Features
MongoDB resource: Adds support for enabling log rotation for MongoDB processes, monitoring agent, and backup agent. To learn more, see MongoDB CRD Log Rotation Settings.
Use the following settings to configure log rotation per component:
- `spec.agent.mongod.logRotate` for the MongoDB processes.
- `spec.agent.mongod.auditlogRotate` for the MongoDB processes' audit logs.
- `spec.agent.backupAgent.logRotate` for the backup agent.
- `spec.agent.monitoringAgent.logRotate` for the monitoring agent.
- `spec.agent.readinessProbe.environmentVariables` for the environment variables the readiness probe runs with. This setting also applies to settings related to log rotation. To learn more about the supported environment settings, see Readiness Probe.
- `spec.applicationDatabase.agent.<component>.logRotate` for the Application Database.
For sharded clusters, the Kubernetes Operator supports configuring log rotation only under the `spec.agent` settings, not per process type such as `mongos` or `configsrv`.
OpsManager resource: Adds support for replacing the `logback.xml` configuration file, which configures general logging settings, such as log rotation, for Ops Manager and Ops Manager backups.
Use the following settings:
- `spec.logging.logBackAccessRef` for the ConfigMap and access key with the `logback` access configuration file to mount on the Ops Manager Pod. Name the ConfigMap's access key `logback-access.xml`. This file configures access logging for Ops Manager.
- `spec.logging.logBackRef` for the ConfigMap and access key with the `logback` configuration file to mount on the Ops Manager Pod. This file configures the general logging behavior for Ops Manager, including log rotation policies, log levels, and other logging parameters. Name the ConfigMap's access key `logback.xml`.
- `spec.backup.logging.logBackAccessRef` for the ConfigMap and access key with the `logback` access configuration file to mount on the Ops Manager Pod. Name the ConfigMap's access key `logback-access.xml`. This file configures access logging for Ops Manager backups.
- `spec.backup.logging.logBackRef` for the ConfigMap and access key with the `logback` configuration file to mount on the Ops Manager Pod. This file configures the general logging behavior for Ops Manager backups, including log rotation policies, log levels, and other logging parameters. Name the ConfigMap's access key `logback.xml`.
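For example, a custom `logback.xml` can be supplied through a ConfigMap and referenced from the resource. This is a sketch: the ConfigMap name is made up, and the exact shape of the reference under `spec.logging` is an assumption to verify against the Ops Manager resource specification.

```yaml
# Sketch: supplying a custom logback.xml to Ops Manager. The ConfigMap
# key must be named logback.xml; the reference shape under spec.logging
# is an assumption to check against the MongoDBOpsManager CRD.
apiVersion: v1
kind: ConfigMap
metadata:
  name: om-logback
data:
  logback.xml: |
    <configuration>
      <!-- custom appenders, log levels, and rotation policy -->
    </configuration>
---
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
spec:
  logging:
    logBackRef:
      name: om-logback   # ConfigMap containing the logback.xml key
```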
Deprecations
The `spec.applicationDatabase.agent.logRotate` setting for the Application Database is deprecated. Use `spec.applicationDatabase.agent.mongod.logRotate` instead.
Bug Fixes
Agent launcher: Fixes an issue where, under some resync scenarios, the journal data in `/journal` may have been corrupted. The Agent now ensures that no conflicting journal data exists and prioritizes the data from `/data/journal`. To deactivate this behavior, set the `MDB_CLEAN_JOURNAL` environment variable in the Kubernetes Operator to any value other than 1.
MongoDB, AppDB, MongoDBMulti resources: Fixes an issue to ensure that external domains are used in the `connectionString`, if you configure them.
MongoDB resource: Removes the panic response when you provide a Horizon configuration that is shorter than the number of members. The Kubernetes Operator now issues a descriptive error in the status of the MongoDB resource in such cases.
MongoDB resource: Fixes an issue where creating a resource in a new project whose name was a prefix of another project's name would fail, preventing the MongoDB resource from being created.
MongoDB Enterprise Kubernetes Operator 1.26 Series
MongoDB Enterprise Kubernetes Operator 1.26.0
Released 2024-06-21
New Features
Improves CPU utilization and vertical scaling of the Kubernetes Operator and achieves faster reconciliation of all managed resources by allowing you to control the number of reconciliations the Kubernetes Operator can perform in parallel.
You can set `MDB_MAX_CONCURRENT_RECONCILES` for the Kubernetes Operator deployment or `operator.maxConcurrentReconciles` in the Kubernetes Operator installation Helm chart. If not provided, the default value is 1. Raising the number of parallel reconciliations might increase the load on Ops Manager and the Kubernetes API server in the same time window. Observe the Kubernetes Operator's resource usage and adjust `operator.resources.requests` and `operator.resources.limits` if needed. To learn more, see Resource Management for Pods and Containers in the Kubernetes documentation.
Adds support for OpenShift 4.15. To learn more, see MongoDB Enterprise Kubernetes Operator Compatibility.
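As an illustration, these knobs could be set together in a Helm values file. The value names come from this release's notes; the resource figures are placeholders to tune for your environment, not recommendations.

```yaml
# values.yaml sketch: raise parallel reconciliations and give the
# Operator more headroom. Resource figures are illustrative only.
operator:
  maxConcurrentReconciles: 4   # default is 1
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 1Gi
```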
Helm Chart Installation Changes
Adds the `operator.maxConcurrentReconciles` parameter, which allows you to control the number of reconciliations the Kubernetes Operator can perform in parallel. The default value is 1.
Adds the `operator.webhook.installClusterRole` parameter, which controls whether to install the cluster role that allows the Kubernetes Operator to configure admission webhooks. Set this parameter to `false` when cluster roles aren't allowed. The default value is `true`.
Bug Fixes
MongoDB resource: Fixes a bug where configuring a MongoDB resource with multiple entries in `spec.agent.startupOptions` would cause additional unnecessary reconciliation of the underlying `StatefulSet`.
MongoDB, MongoDBMultiCluster resources: Fixes a bug where the Kubernetes Operator wouldn't watch for changes in the X.509 certificates configured for MongoDB Agent authentication.
MongoDB resource: Fixes a bug where boolean flags passed to the MongoDB Agent couldn't be set to `false` if their default value was `true`.
MongoDB Enterprise Kubernetes Operator 1.25 Series
MongoDB Enterprise Kubernetes Operator 1.25.0
Released 2024-04-30
Breaking Change
MongoDBOpsManager resource: The Kubernetes Operator no longer supports Ops Manager 5.0. Upgrade to a later version of Ops Manager. While Ops Manager 5.0 might continue to work with the Kubernetes Operator, MongoDB won't test the Kubernetes Operator against Ops Manager 5.0.
New Features
MongoDBOpsManager resource: Adds support for deploying the Ops Manager Application on multiple Kubernetes clusters. To learn more, see Deploy Ops Manager Resources on Multiple Kubernetes Clusters.
(Public Preview) MongoDB, OpsManager resources: Introduces opt-in Static Containers (Public Preview) for all types of deployments.
In this release, use static containers only for testing purposes. Static containers might become the default in a later release.
To activate static containers mode, set the `MDB_DEFAULT_ARCHITECTURE` environment variable at the Kubernetes Operator level to `static`. Alternatively, annotate a specific `MongoDB` or `OpsManager` custom resource with `mongodb.com/v1.architecture: "static"`.
The Kubernetes Operator supports seamless migration between the static and non-static architectures. To learn more, see Static Containers (Public Preview).
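For the per-resource opt-in, the annotation sits in the resource's metadata. A minimal sketch with an illustrative resource name and version:

```yaml
# Sketch: opting a single MongoDB resource into the static
# architecture via its annotation (name and version are illustrative).
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: replica-set
  annotations:
    mongodb.com/v1.architecture: "static"
spec:
  type: ReplicaSet
  members: 3
  version: "7.0.2"
```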
OpsManager resource: Adds the `spec.internalConnectivity` field to allow overrides for the service that the Kubernetes Operator uses to ensure internal connectivity to the Pods hosting the `OpsManager` resource.
MongoDB resource: You can now recover a resource with a broken automation configuration in sharded clusters. In previous releases, you could recover other types of resources, but not sharded clusters. To learn more, see Recover Resource Due to Broken Automation Configuration.
MongoDB, MongoDBMultiCluster resources: These resources now allow you to add placeholders in external services.
You can define annotations for external services managed by the Kubernetes Operator that contain placeholders which will be automatically replaced by the proper values. Previously, the Kubernetes Operator configured the same annotations for all external services created for each Pod. Starting with this release, you can add placeholders so that the Kubernetes Operator can customize annotations in each service with values that are relevant and unique for each particular Pod. To learn more, see:
`MongoDB` resource: `spec.externalAccess.externalService.annotations`
`MongoDBMultiCluster` resource: `spec.externalAccess.externalService.annotations`
The `kubectl mongodb` plugin: Allows you to print build information when using the plugin.
The `setup` command of the `kubectl mongodb` plugin: Adds the `registry.imagePullSecrets` setting. If specified, created service accounts reference the specified secret in the `imagePullSecrets` field.
Improves handling of configurations when the Kubernetes Operator watches more than one namespace and when you install the Kubernetes Operator in a namespace that differs from the namespace in which it watches resources.
Optimizes setting up roles and permissions in member Kubernetes clusters using a single service account per Kubernetes cluster with correctly configured roles and role bindings (no cluster roles are necessary) for each watched namespace.
Extends the existing event-based reconciliation process with a time-based reconciliation that triggers every 24 hours. This ensures that all Monitoring Agents are always upgraded in a timely manner.
OpenShift and OLM Operator: Removes the requirement for cluster-wide permissions. Previously, the Kubernetes Operator needed these permissions to configure admission webhooks. Starting with this release, webhooks are automatically configured by OLM.
Adds an optional `MDB_WEBHOOK_REGISTER_CONFIGURATION` environment variable for the Kubernetes Operator. The variable controls whether the Kubernetes Operator should perform automatic admission webhook configuration. The default is `true`. The variable is set to `false` for OLM and OpenShift deployments.
Helm Chart Installation Changes
Changes the default `agent.version` to `107.0.0.8502-1`. This changes the default Agent used in Kubernetes Operator deployments that you install using a Helm chart.
Adds the `operator.additionalArguments` variable, with a default of `[]`, to allow you to pass additional arguments to the Kubernetes Operator binary.
Adds the `operator.createResourcesServiceAccountsAndRoles` variable, with a default of `true`, to control whether to install roles and service accounts for `MongoDB` and `OpsManager` resources. When you use the `kubectl mongodb` plugin to configure the Kubernetes Operator for a multi-Kubernetes cluster deployment, the plugin installs all necessary roles and service accounts. Therefore, to avoid clashes, in some cases you shouldn't install those roles using the Kubernetes Operator Helm chart.
Bug Fixes
MongoDBMultiCluster resource: Fixes an issue where the Kubernetes Operator reported that the `spec.externalAccess.externalDomain` and `spec.clusterSpecList[*].externalAccess.externalDomains` fields were required even though they weren't used. The Kubernetes Operator prematurely triggered a validation for these fields when the custom resources contained a defined `spec.externalAccess` structure. Starting with this release, the Kubernetes Operator checks for uniqueness of external domains only when you define them in the `spec.externalAccess.externalDomain` or `spec.clusterSpecList[*].externalAccess.externalDomains` settings.
MongoDB resource: Fixes a bug where, upon deleting a `MongoDB` resource, the `controlledFeature` policies remained set on the related Ops Manager or Cloud Manager instance, making cleanup in the UI impossible if the Kubernetes Operator was lost.
OpsManager resource: Fixes an issue where the `admin-key` secret was deleted when you removed the `OpsManager` custom resource. Fixing the `admin-key` secret deletion enables easier re-installation of Ops Manager.
MongoDB Readiness Probe: Fixes a misleading error message for the readiness probe: `"... kubelet Readiness probe failed:..."`. This affects all MongoDB deployments.
Operator: Fixes cases where, in some instances, the Kubernetes Operator skipped TLS verification while communicating with the `OpsManager` custom resource, even if you enabled TLS.
Improvements
Kubectl plugin: The released `kubectl mongodb` plugin binaries are now signed, and the signatures are published with the release assets. The public key is available at this address. The released `kubectl mongodb` plugin binaries are also notarized for macOS.
Released images signed: All container images published for the Kubernetes Operator are cryptographically signed. This is visible in the MongoDB Quay registry. You can verify the signatures using the MongoDB public key. Released images are available at this address.
MongoDB Enterprise Kubernetes Operator 1.24 Series
MongoDB Enterprise Kubernetes Operator 1.24.0
Released 2023-12-21
MongoDBOpsManager Resource
New Features
Adds support for the upcoming Ops Manager 7.0.x series.
Bug Fixes
Fixes an issue that prevented terminating a backup correctly.
MongoDB Enterprise Kubernetes Operator 1.23 Series
MongoDB Enterprise Kubernetes Operator 1.23.0
Released 2023-11-13
Warnings and Breaking Changes
Aligns the component image version numbers with the Kubernetes Operator release tag so it's clear which images go with which version of the Kubernetes Operator. This affects the following images:
quay.io/mongodb/mongodb-enterprise-database-ubi
quay.io/mongodb/mongodb-enterprise-init-database-ubi
quay.io/mongodb/mongodb-enterprise-init-appdb-ubi
quay.io/mongodb/mongodb-enterprise-init-ops-manager-ubi
To learn more, see MongoDB Enterprise Kubernetes Operator kubectl and oc Installation Settings and MongoDB Enterprise Kubernetes Operator Helm Installation Settings.
Replaces `spec.exposedExternally` (deprecated in Kubernetes Operator 1.19) with `spec.externalAccess`.
Bug Fixes
Fixes an issue with scaling a replica set in a multi-Kubernetes cluster MongoDB deployment when a member cluster has lost connectivity. The fix addresses both the manual and automated recovery procedures.
Fixes an issue where changing the names of the Automation Agent and MongoDB audit logs prevented them from being sent to the Kubernetes Pod logs. There are no restrictions on the file names of MongoDB audit logs as of Kubernetes Operator 1.22.
Allows the following new log types from the `mongodb-enterprise-database` container to stream directly to Kubernetes logs:
`agent-launcher-script`
`monitoring-agent`
`backup-agent`
Fixes an issue that prevented storing the `MongoDBUser` resource in the namespace set in `spec.mongodbResourceRef.namespace`.
MongoDB Enterprise Kubernetes Operator 1.22 Series
MongoDB Enterprise Kubernetes Operator 1.22.0
Released 2023-09-21
Breaking Changes
The Kubernetes Operator no longer uses the `Reconciling` state for all custom resources. In most cases this state has been replaced with `Pending` and a corresponding message. If you use monitoring tools with the custom MongoDB resources deployed with the Kubernetes Operator, you might need to adjust your dashboards and alerting rules to use the `Pending` state name.
MongoDBOpsManager Resource
Improvements
Adds support for configuring `logRotate` on the MongoDB Agent for the Application Database by adding the following new fields to the `MongoDBOpsManager` resource:
`spec.applicationDatabase.agent.logRotate`
`spec.applicationDatabase.agent.logRotate.numTotal`
`spec.applicationDatabase.agent.logRotate.numUncompressed`
`spec.applicationDatabase.agent.logRotate.percentOfDiskspace`
`spec.applicationDatabase.agent.logRotate.sizeThresholdMB`
`spec.applicationDatabase.agent.logRotate.timeThresholdHrs`
You can now configure the systemLog to send logs to a custom location other than the default `/var/log/mongodb-mms-automation` directory using new fields in the `MongoDBOpsManager` resource.
Improves handling of Application Database clusters in multi-Kubernetes cluster MongoDB deployments.
In the previous release, to scale down processes, the Kubernetes Operator required a connection to the Kubernetes cluster. This could block the reconciliation process during a full-cluster outage.
In this release, the Kubernetes Operator successfully manages the remaining healthy clusters as long as they have a majority of votes to elect a primary. The Kubernetes Operator doesn't remove associated processes from the automation configuration and replica set configuration. The Kubernetes Operator deletes these processes only if you delete the corresponding cluster from `spec.applicationDatabase.clusterSpecList` or change the number of the cluster's members to zero. When the Kubernetes Operator deletes these processes, it scales down the replica set by removing processes tied to that cluster one at a time.
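The `logRotate` fields listed above could be combined as in the following sketch; the field names come from this release's notes, while the thresholds are illustrative values, not recommendations.

```yaml
# Sketch: MongoDB Agent log rotation for the Application Database.
# Field names are from this release's notes; values are illustrative.
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
spec:
  applicationDatabase:
    members: 3
    agent:
      logRotate:
        sizeThresholdMB: 100      # rotate when a log file reaches 100 MB
        timeThresholdHrs: 24      # or when it is 24 hours old
        numTotal: 10              # keep at most 10 rotated files
        numUncompressed: 2        # keep the newest 2 files uncompressed
        percentOfDiskspace: 10    # cap total log usage at 10% of disk
```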
MongoDB Resource
Improvements
Adds an automatic recovery mechanism for `MongoDB` resources when a custom resource remains in a `Pending` or `Failed` state for a longer period of time. In addition, introduces new environment variables to control this mechanism. To learn more, see Recover Resource Due to Broken Automation Configuration.
Allows you to route the audit logs for the `MongoDB` resource to the Kubernetes Pod logs. Ensure that you write the `MongoDB` resource's audit logs to the `/var/log/mongodb-mms-automation/mongodb-audit.log` file. The Pod hosting the resource monitors this file and appends its content to its Kubernetes logs.
To send audit logs to the Kubernetes Pod logs, use the following example configuration in the `MongoDB` resource:

```yaml
spec:
  additionalMongodConfig:
    auditLog:
      destination: file
      format: JSON
      path: /var/log/mongodb-mms-automation/mongodb-audit.log
```

The Kubernetes Operator tags audit log entries with the `mongodb-audit` key in the Pod logs. To extract audit log entries, use a command similar to the following example:

```sh
kubectl logs -c mongodb-enterprise-database replica-set-0 | \
  jq -r 'select(.logType == "mongodb-audit") | .contents'
```
Bug Fixes
Fixes an issue where you couldn't set the `spec.backup.autoTerminateOnDeletion` setting to `true` for sharded clusters. This setting controls whether the Kubernetes Operator stops and terminates the backup when you delete a MongoDB resource. If omitted, the default value is `false`.
MongoDB Enterprise Kubernetes Operator 1.21 Series
MongoDB Enterprise Kubernetes Operator 1.21.0
Released 2023-08-25
Breaking Changes
Renames the environment variable `CURRENT_NAMESPACE` to `NAMESPACE`. This variable tracks the namespace of the Kubernetes Operator. If you've set this variable by editing the `MongoDB` resources, update `CURRENT_NAMESPACE` to `NAMESPACE` while upgrading the Kubernetes Operator.
Bug Fixes
Fixes an issue where `StatefulSet` override labels failed to override the `StatefulSet`.
Improvements
Supports configuring backups of the Application Database and MongoDB for the `MongoDBMultiCluster` resource.
Adds documentation for configuring a `MongoDBMultiCluster` resource deployment in a GitOps environment. To learn more, see Configure Resources for GitOps.
Adds `MetadataWrapper`, a labels and annotations wrapper, to the `MongoDB`, `MongoDBMultiCluster`, and `MongoDBOpsManager` resources. The wrapper supports overriding `metadata.Labels` and `metadata.Annotations`.
MongoDBOpsManager Resource
Breaking Changes and Deprecations
The `appdb-ca` is no longer automatically added to the JVM trust store in Ops Manager. The `appdb-ca` is the CA saved in the ConfigMap specified in `spec.applicationDatabase.security.tls.ca`. This impacts you if:
- You use the same custom certificate for the `appdb-ca` and your S3 snapshot store.
- You use a version of the Kubernetes Operator earlier than 1.17.0, or you've mounted your own trust store to Ops Manager.
If you need to use the same custom certificate for the `appdb-ca` and the S3 snapshot store, specify the CA with `spec.backup.s3Stores.customCertificateSecretRefs`.
Deprecates the `spec.backup.s3Stores.customCertificate` and `spec.backup.s3OpLogStores.customCertificate` settings. Use `spec.backup.s3OpLogStores.customCertificateSecretRefs` and `spec.backup.s3Stores.customCertificateSecretRefs` instead.
Bug Fixes
Fixes an issue that prevented setting an arbitrary port number for `spec.externalConnectivity.port` when using the `LoadBalancer` service type to expose Ops Manager externally.
Fixes an issue that caused Ops Manager to reject certificates by enabling the Kubernetes Operator to import the `appdb-ca`, which is a bundle of CAs, into the Ops Manager JVM trust store.
Improvements
Supports configuring the `MongoDBOpsManager` resource with a highly available Application Database across multiple Kubernetes clusters by adding new fields to the `MongoDBOpsManager` resource. The default value for the new optional `spec.applicationDatabase.topology` field is `singleCluster`, and it is used if you omit the value. To upgrade to Kubernetes Operator 1.21, you don't need to update your `MongoDBOpsManager` resources. This makes the addition of the `spec.applicationDatabase.topology` setting backward-compatible with single Kubernetes cluster deployments of the Application Database. To learn more, see Deploy an Ops Manager Resource and the Ops Manager Resource Specification.
Allows you to add a list of custom certificates for backups in the S3 snapshot store using the `spec.backup.s3Stores.customCertificateSecretRefs` and `spec.backup.s3OpLogStores.customCertificateSecretRefs` fields in the `MongoDBOpsManager` resource.
Older Release Notes
To see the release notes for older versions of the operator, click here.