Darshana Paithankar


3 Ways MongoDB EA Azure Arc Certification Serves Customers

One reason more than 50,000 customers across industries choose MongoDB is the freedom to run anywhere: across major cloud providers, on-premises in data centers, and in hybrid deployments. This is why MongoDB is always working to meet customers where they are. For example, many customers choose MongoDB Atlas (available in more than 115 cloud regions across major cloud providers) for a fully managed experience. Others choose MongoDB Enterprise Advanced (EA) to self-manage their database deployments to meet specific on-premises or hybrid requirements.

To that end, we're pleased to announce that MongoDB EA is one of the first certified Microsoft Azure Arc-enabled Kubernetes applications, giving customers even more choice of where and how they run MongoDB. Customer adoption of Azure Arc has grown by leaps and bounds. This new certification, and the launch of MongoDB EA as an Arc-enabled Kubernetes application on Azure Marketplace, means that more customers will be able to leverage the unparalleled security, availability, durability, and performance of MongoDB across environments, with centralized management of their Kubernetes deployments.

"We are very excited to have MongoDB available for our customers on the Azure Marketplace. By extending Azure Arc's management capabilities to your MongoDB deployments, customers gain the benefit of centralized governance, enhanced security, and deeper insights into database performance. Azure Arc makes hybrid database management with MongoDB efficient and consistent. Collaboration between MongoDB and Microsoft represents an opportunity for many of our customers to further accelerate their digital transformation when building enterprise-class solutions with Azure Arc."
Christa St Pierre, Partner Group Manager, Azure Edge Devices, Microsoft

Here are three ways the launch of MongoDB EA on Azure Marketplace for Arc-enabled Kubernetes applications gives customers greater flexibility.
1. MongoDB EA supports multi-Kubernetes cluster deployments and simplifies management

MongoDB Enterprise Advanced seamlessly integrates market-leading MongoDB capabilities with robust enterprise support and tools for self-managed deployments at any scale. This powerful solution includes advanced automation, comprehensive auditing, strong authentication, reliable backup, and insightful monitoring capabilities, all of which work together to ensure security compliance and operational efficiency for organizations of any size.

The relationship between MongoDB and Kubernetes is one of strong synergy. With Kubernetes, MongoDB EA really can run anywhere; a single deployment can even span on-premises and multiple public cloud Kubernetes clusters. Customers can use the MongoDB Enterprise Kubernetes Operator, a key component of MongoDB Enterprise Advanced, to simplify the management and automation of self-managed MongoDB deployments in Kubernetes. This includes tasks like creating and updating deployments, managing backups, and integrating with various Kubernetes services. The Operator's ability to deploy and manage MongoDB deployments that span multiple Kubernetes clusters significantly enhances resilience, improves disaster recovery, and minimizes latency by allowing data to be co-located closer to where it is needed, ensuring optimal performance and reliability.

2. Azure Arc complements MongoDB EA, providing centralized management

While MongoDB Enterprise Advanced is already among a select group of databases capable of operating across multiple Kubernetes clusters, it is now also supported in Azure Arc-enabled Kubernetes environments. Azure Arc enables the standardized management of Kubernetes clusters across various environments (including Azure, on-premises, and even other clouds) while harnessing the power of Azure services.
Azure Arc accomplishes this by extending the Azure control plane to standardize security and governance across a wide range of resources and locations. For instance, organizations can centrally monitor all of their Azure Arc-enabled Kubernetes clusters using Azure Monitor for containers, or enforce threat protection at scale using Microsoft Defender for Kubernetes. This centralized control significantly reduces the complexity of managing Kubernetes clusters running anywhere, as customers can oversee all resources and apply consistent security and compliance policies across their hybrid environment.

3. Customers can leverage the resilience of MongoDB EA and the centralized governance of Azure Arc

Together, these solutions empower organizations to build robust applications across a wide array of environments, whether on-premises or in multi-cloud settings. The combination of MongoDB Enterprise Advanced and the MongoDB Enterprise Kubernetes Operator simplifies the deployment of MongoDB across Kubernetes clusters, allowing organizations to fully leverage enhanced resilience and geographic distribution that surpasses the capabilities of a single Kubernetes cluster. Azure Arc further enhances this synergy by providing centralized management for all of these Kubernetes clusters, regardless of where they are running. (For customers running entirely in the public cloud, we recommend MongoDB's fully managed developer data platform, MongoDB Atlas.)

If you're interested in learning more, we invite you to explore the Azure Marketplace listing for MongoDB Enterprise Advanced for Arc-enabled Kubernetes applications. Please note that aside from use for evaluation and development purposes, this offering requires the purchase of a MongoDB Enterprise Advanced subscription. For licensing inquiries, reach out to MongoDB at https://www.mongodb.com/contact to secure your license and begin harnessing the full potential of these powerful solutions.
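To make the Operator-managed deployments described in section 1 concrete, here is a minimal sketch of a replica set declared as a Kubernetes custom resource for the MongoDB Enterprise Kubernetes Operator. This is illustrative only: the resource names, MongoDB version, and the Ops Manager project and credentials references are placeholders, and the authoritative field list is in the Operator documentation.

```yaml
# Illustrative sketch only -- names, version, and the Ops Manager references
# are placeholders; consult the Enterprise Kubernetes Operator docs.
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  type: ReplicaSet
  members: 3
  version: "6.0.5"
  opsManager:
    configMapRef:
      name: my-project          # ConfigMap pointing at your Ops Manager project
  credentials: my-credentials   # Secret holding Ops Manager API keys
```

Applying a manifest like this lets the Operator create and continuously reconcile the deployment; multi-cluster topologies use a similar custom resource that lists the member Kubernetes clusters.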

November 19, 2024

MongoDB Introduces Workload Identity Federation for Database Access

Update June 5, 2024: Workload Identity Federation is now generally available. Head over to our docs page to learn more.

MongoDB Atlas customers run workloads (applications) inside AWS, Azure, and Google Cloud. Today, to enable these workloads to authenticate with MongoDB Atlas clusters, customers create and manage MongoDB Atlas database users using the natively supported SCRAM (password) and X.509 authentication mechanisms and configure them in their workloads. Customers have to manage the full identity lifecycle of these users in their applications, including frequently rotating secrets. To meet their evolving security and compliance requirements, our enterprise customers require database users to be managed within their existing identity providers or cloud providers of choice.

Workload Identity Federation will be generally available later this month and allows management of MongoDB Atlas database users with Azure Managed Identities, Azure Service Principals, Google Service Accounts, or an OAuth 2.0-compliant authorization service. This approach makes it easier for customers to manage, secure, and audit their MongoDB Atlas database users in their existing identity provider or a cloud provider of their choice, and enables "passwordless" access to their MongoDB Atlas databases.

Along with Workload Identity Federation, Workforce Identity Federation, which was launched in public preview last year, will be generally available later this month. Workforce Identity Federation allows organizations to configure access to MongoDB clusters for their employees with single sign-on (SSO) using OpenID Connect. Both features complement each other and enable organizations to have complete control of database access for both application users and employees.

Workload Identity Federation support will be available on Atlas Dedicated Clusters running MongoDB 7.0 and above, and is supported by the Java, C#, Node, and Python drivers. Go driver support will be added soon.
Quick steps to get started with Workload Identity Federation:

1. Configure Atlas with your OAuth 2.0-compatible workload identity provider, such as Azure or Google Cloud.
2. Configure an Azure Service Principal or Google Cloud Service Account for the Azure or Google Cloud resource where your application runs.
3. Add the configured Azure Service Principal or Google Cloud Service Account as an Atlas database user with federated authentication.
4. Using Python or any supported driver inside your application, authenticate and authorize with your workload identity provider and Atlas clusters.

To learn more about Workload Identity Federation, please refer to the documentation. And to learn more about how MongoDB's robust operational and security controls protect your data, read more about our security features.
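From an application's point of view, the last step mostly amounts to choosing the MONGODB-OIDC authentication mechanism in the driver. The sketch below only builds the connection string; the cluster hostname and token audience are placeholders, and the exact authMechanismProperties values for your identity provider are in the documentation linked above.

```python
# Sketch: "passwordless" database access from an Azure-hosted workload.
# The cluster hostname and token audience below are placeholders; check the
# Atlas Workload Identity Federation docs for the values your setup needs.

def oidc_connection_uri(host: str, audience: str) -> str:
    """Build a MongoDB URI using the MONGODB-OIDC authentication mechanism,
    so the driver obtains tokens from the cloud provider's identity endpoint
    instead of using a stored password."""
    props = f"ENVIRONMENT:azure,TOKEN_RESOURCE:{audience}"
    return (
        f"mongodb+srv://{host}/?authMechanism=MONGODB-OIDC"
        f"&authMechanismProperties={props}"
    )

uri = oidc_connection_uri("cluster0.example.mongodb.net", "api://my-audience")
print(uri)
# A driver would then consume it, e.g. with PyMongo:
#   from pymongo import MongoClient
#   client = MongoClient(uri)
```

Note that no secret appears anywhere in the URI: the token is fetched and rotated by the platform, which is the point of workload identity federation.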

May 2, 2024

Ways to Integrate MongoDB Atlas in Your DevOps Processes

MongoDB Atlas, the industry-leading developer data platform, integrates all of the data services you need to build modern applications in a unified developer experience. We want to meet you where you are and offer various ways to begin with Atlas and make the most of its features. Starting with the Atlas user interface is a good initial step. But what if your requirement is to automate the deployment of Atlas clusters at scale, leveraging tools that are already integral to your application ecosystem? In this blog, we will address that question by discussing the programmatic methods for starting with Atlas and deploying Atlas resources and infrastructure according to your specific needs.

The tools we provide cover most control-plane management tasks (easily deploying, managing, and scaling Atlas clusters), enabling you to create the building blocks of Atlas such as clusters, database users, projects, Atlas Search indexes, backups, alerts, and more. You can interact directly with the data plane through the command line or programmatically using tools like MongoDB Shell, Compass, and language drivers to work with data and perform CRUD operations.

1. Atlas Administration API: Leveraging your choice of client

Whatever your choice of client, be it cURL, Postman, Insomnia, or anything else, you can use it to interact directly with Atlas through the Atlas Administration API. The Atlas Administration API gives you a RESTful interface to interact with Atlas resources and perform various actions within MongoDB Atlas. Each endpoint represents a specific resource in Atlas (e.g., a cluster). You can programmatically deploy and manage all of the Atlas resources from an administrative standpoint, such as creating clusters, database users, projects, advanced clusters, backups, monitoring, and more.
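As a minimal sketch of what "any HTTP client" means in practice, the following Python standard-library snippet builds (but does not send) the request that lists a project's clusters. The project ID and API keys are placeholders; Atlas authenticates Administration API calls with HTTP digest authentication and a programmatic API key pair.

```python
# Sketch: calling the Atlas Administration API from Python instead of cURL
# or Postman. "<project-id>", "<public-key>", and "<private-key>" are
# placeholders for your own values.
import urllib.request

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"

def build_opener(public_key: str, private_key: str) -> urllib.request.OpenerDirector:
    """Opener that answers Atlas's digest-auth challenge with the key pair."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, BASE, public_key, private_key)
    return urllib.request.build_opener(urllib.request.HTTPDigestAuthHandler(mgr))

def clusters_request(project_id: str) -> urllib.request.Request:
    """The GET request that lists all clusters in a project (built, not sent)."""
    return urllib.request.Request(
        f"{BASE}/groups/{project_id}/clusters",
        headers={"Accept": "application/json"},
    )

req = clusters_request("<project-id>")
opener = build_opener("<public-key>", "<private-key>")
print(req.get_method(), req.full_url)
# Actually sending it is one line: body = opener.open(req).read()
```

The same endpoint shape (groups/{projectId}/clusters) is what cURL or Postman would hit; only the client differs.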
All of the tools that we will cover below are built on top of the Atlas Administration API and abstract away the complexities of using the Administration API directly.

2. Go SDK client: A simplified way to get started with the Atlas Admin API

One of the mechanisms that simplifies interaction with an API is the availability of an SDK (software development kit). If you are a Go developer, the Go SDK client gives you a much simpler experience of getting started with the Atlas Administration API. It offers full endpoint coverage of the Administration API and improves the speed of getting started. Getting started with the Admin API through the Go SDK client only takes a few lines of code, since the SDK includes pre-built functions, structs, and methods that encapsulate the complexity of HTTP requests, authentication, error handling, versioning, and other low-level details. We will be adding SDK support for more languages in the future; feel free to share feedback on what other languages you would like to see supported.

3. MongoDB Atlas CLI: A simple command-line tool to easily deploy Atlas resources

If you prefer to manage your Atlas resources using simple commands in the terminal, the MongoDB Atlas Command-Line Interface (CLI) is your answer. With the Atlas CLI, you can seamlessly manage your clusters, automate database user creation, control network access, and perform various other administrative tasks, all from the command line. You can also script these actions with the CLI for even easier repeatability. It is available for multiple operating systems, including Windows, Linux, and macOS, and is often used in conjunction with other command-line tools to automate workflows and integrate with CI/CD pipelines. An easy way to get started with the Atlas CLI is through the quickstart.
4. Infrastructure as Code (IaC) integrations: Automating deployment of Atlas using IaC tools

There are several advantages to using Infrastructure as Code (IaC) tools to provision application infrastructure. By treating your infrastructure code like your application code, IaC tools offer benefits such as version control, scalability, security, and repeatability. What if you could easily deploy your Atlas resources using your preferred IaC tools? MongoDB Atlas gives you that flexibility. Whether you're an AWS CloudFormation or a HashiCorp Terraform enthusiast, you can provision and manage Atlas resources with ease through the integrations we offer.

AWS CloudFormation integration: MongoDB Atlas supports three ways to provision resources using AWS CloudFormation.

- The first is by leveraging Atlas resources directly from the CloudFormation Public Registry. We have 33+ resources available today. The configurations are defined in JSON/YAML and can be executed using the AWS CLI or the AWS Management Console.
- If you want a faster way to get started, you can explore our AWS Partner Solutions (formerly known as Quick Starts), which have pre-built CloudFormation templates to help you provision a group of Atlas resources for specific use cases instead of deploying them one by one.
- And if you would rather use languages you are comfortable with, such as JavaScript/TypeScript, Python, Java, C#, and Go, instead of learning YAML/JSON, you can leverage the AWS Cloud Development Kit (CDK) to deploy Atlas resources. Under the hood, when your AWS CDK applications run, they translate your code into CloudFormation templates, which use the CloudFormation service for provisioning.

HashiCorp Terraform integration: If you are already using Terraform as your IaC tool of choice, we have integrations with HashiCorp Terraform as well.
There are two easy ways to get started with HashiCorp Terraform:

- By directly provisioning Atlas resources on AWS, Azure, and Google Cloud using the Terraform MongoDB Atlas provider.
- If you prefer to use your favorite language (e.g., TypeScript, Python, C#, Java, or Go), you can use CDKTF (Cloud Development Kit for Terraform), which allows you to deploy Atlas using Terraform under the hood without knowing the specifics of Terraform's configuration language.

5. Atlas Kubernetes Operator: Use your existing Kubernetes tooling to manage Atlas resources

For organizations leveraging Kubernetes for container orchestration, the Atlas Kubernetes Operator provides seamless integration between Kubernetes and MongoDB Atlas. This operator allows you to deploy and manage Atlas resources using your existing Kubernetes tooling, streamlining the process of spinning up and scaling databases alongside your applications. You can manage Atlas in exactly the same way you manage your applications running in Kubernetes. This is done by managing Atlas directly via custom resources in Kubernetes. These custom resources can be created, managed, and stored as the source of truth in your repository. You can then leverage a continuous deployment tool of your choice, such as Argo CD, to apply them to Kubernetes.

Whether you prefer working with the command line, a RESTful API, Kubernetes, or IaC tools, MongoDB Atlas provides a diverse set of tools to help you achieve your automation goals. By embracing these methods, you can streamline your operations, improve efficiency, and pave the way for a more agile and responsive development process. Learn more from our Atlas Programmatic Access documentation page.
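As an illustrative sketch of the Atlas Kubernetes Operator approach (not a complete manifest), an Atlas cluster can be declared as a custom resource like the following. The project reference, names, and region are placeholders, and field names can vary between operator versions, so treat the operator's CRD reference as authoritative.

```yaml
# Illustrative sketch only -- names, region, and the project reference are
# placeholders; see the Atlas Kubernetes Operator docs for current fields.
apiVersion: atlas.mongodb.com/v1
kind: AtlasDeployment
metadata:
  name: my-atlas-deployment
spec:
  projectRef:
    name: my-atlas-project      # an AtlasProject resource in the same namespace
  deploymentSpec:
    name: cluster0
    providerSettings:
      providerName: AWS
      instanceSizeName: M10
      regionName: US_EAST_1
```

Stored in a Git repository and applied by a CD tool such as Argo CD, resources like this become the declarative source of truth for your Atlas infrastructure.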

October 11, 2023

Improved Developer Experience with the Atlas Admin API

With MongoDB Atlas, we meet our developers where they are and offer multiple ways to get started and work with Atlas. One of the ways to get started programmatically with Atlas is through the Atlas Administration API. It provides programmatic access to Atlas resources such as clusters, database users, and backups, to name a few, enabling developers to perform operational tasks like creating, modifying, and deleting resources. We are excited to announce two key capabilities that will improve the developer experience when working with the Atlas Administration API.

Versioned Atlas Administration API

If you use the Atlas Administration API today, you are working with the unversioned Administration API (/v1). We have heard your feedback on the challenges around API changes not having a clear policy, as well as communication gaps about new features and deprecations. To address this, we are excited to introduce resource-level versioning with our new versioned Atlas Administration API (/v2). Here is what you can expect:

- More predictability and consistency in handling API changes: With this new versioning, any breaking changes that can impact your code will only be introduced in a new resource version. You can rest assured that no breaking changes will affect your production code running the current, stable version. Also, deprecation will occur with the introduction of a new stable API resource version, giving you at least one year to upgrade before the removal of the deprecated resource version. This adds more predictability to what's coming to the API.
- Minimum impact with resource-based versioning: With resource-level versioning, whenever we talk about API versions we're referring to the actual API resource versions, represented by date. So once you migrate from the current unversioned Administration API (/v1) to the new versioned Administration API (/v2), this will point to version 2023-02-01.
To make the initial migration process smooth and easy, this first resource version applies to all API resources (e.g., /serverless, /backup, /clusters). Moving forward, however, each resource can introduce a new version independently at various points in time (e.g., /serverless can move to 2023-06-01 while /backup stays on 2023-02-01). The advantage is that if you have not implemented a given resource, say /serverless, and a new version of it is introduced, you will not need to take any action. You will only need to take action if and when the resources you are utilizing are deprecated.

- More time to plan your migration: Once a particular resource version is deprecated, there will be enough time (12 months) before it is removed, giving you ample time to plan and transition to the new version.
- Improved context and visibility: Our updated documentation has all the details to guide you through the versioning process. Whether it's the release of a new endpoint, deprecation of an existing resource version, or a non-breaking change to a stable resource, all of them are now tracked on a dedicated and automatically updated Changelog. We also provide more visibility and context on any API changes through the API specification, which presents information for all stable and deprecated resource versions, enabling you to access the documentation that's relevant for your particular case.

We have made it very simple for you to get started with the new versioned Administration API (/v2). To start using it, you need to migrate from the current unversioned Administration API (/v1) to the new versioned Administration API (/v2). This migration can be done smoothly by making two changes in your code:

1. Update the path of each endpoint from /v1 to /v2.
2. Add a version header to each endpoint: application/vnd.atlas.2023-02-01+json.

The 2023-02-01 version will have a long support timeframe of two years from its deprecation, giving you ample time to transition.
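The two migration changes above are small enough to show in a few lines. This Python standard-library sketch (with a placeholder project ID) builds, but does not send, a request pinned to the 2023-02-01 resource version:

```python
# Sketch of the two migration changes: the /v2 path and the versioned
# Accept header. "<project-id>" is a placeholder.
import urllib.request

RESOURCE_VERSION = "2023-02-01"

def versioned_request(path: str) -> urllib.request.Request:
    """Build a request against the versioned Admin API (/v2), pinned to one
    resource version via the Accept header."""
    return urllib.request.Request(
        f"https://cloud.mongodb.com/api/atlas/v2{path}",  # path changed from /v1 to /v2
        headers={"Accept": f"application/vnd.atlas.{RESOURCE_VERSION}+json"},
    )

req = versioned_request("/groups/<project-id>/clusters")
print(req.full_url)
print(req.get_header("Accept"))
```

Pinning the date in one constant makes a later move to a newer resource version a one-line change.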
Read more about the versioned Admin API. To learn more about the transition process, read our migration guide.

Go SDK for the Atlas Administration API

One of the mechanisms that simplifies interaction with APIs is the availability of SDKs. To make it easy to get started and work with the versioned Administration API, we are excited to introduce the new Go SDK for the Administration API. If you are a Go developer and have interacted with the Atlas Administration API, you are likely familiar with our current Go client. The new Go SDK for the Atlas Admin API provides a significantly improved developer experience: it supports full endpoint coverage, improves the speed of getting started with the versioned Admin API, provides a consistent experience when working with the Admin API, and gives you the choice of version for better control over changes and their impact on your scripts. Let's look at some of the benefits you can expect:

- Full Atlas Administration API endpoint coverage: The new Go SDK allows you to access all the features and capabilities that the Atlas Administration API offers today, with full endpoint coverage. This ensures you can programmatically leverage the full breadth of the developer data platform.
- Flexibility in choosing the API resource version: When interacting with the new versioned Atlas Administration API through the Go SDK client, you can pin a particular version of the Admin API, giving you control over when you are affected by breaking changes, or always work with the latest version.
- Ease of use: Getting started with the Admin API through the Go SDK client is now much simpler, with fewer lines of code, since it includes pre-built functions, structs, and methods that encapsulate the complexity of HTTP requests, authentication, error handling, versioning, and other low-level details.
- Immediate access to updates: When using the new Go SDK, you can immediately access any newly released Atlas Admin API capabilities. Every time a new version of Atlas is released, the SDK is quickly updated and continuously maintained, ensuring compatibility with any changes in the API.

Get started today with the Go SDK client. Also, refer to our migration guide to learn more.

June 15, 2023

MongoDB Atlas Integrations for CDKTF are now Generally Available

Infrastructure as Code (IaC) tools allow developers to manage and provision infrastructure resources through code, rather than through manual configuration. IaC has empowered developers to apply best practices from software development to application infrastructure deployments. This includes:

- Automation: helping to ensure repeatable, consistent, and reliable infrastructure deployments
- Version control: check IaC code into GitHub, Bitbucket, or GitLab for improved team collaboration and higher code quality
- Security: create clear audit trails of each infrastructure modification
- Disaster recovery: IaC scripts can be used to quickly recreate infrastructure in the event of availability zone or region outages
- Cost savings: prevent overprovisioning and waste of cloud resources
- Improved compliance: easier to enforce organizational policies and standards

Today we are doubling down on this commitment and announcing MongoDB Atlas integrations with CDKTF (Cloud Development Kit for Terraform). These new integrations are built on top of the Atlas Admin API and allow users to automate infrastructure deployments by making it easy to provision, manage, and control Atlas infrastructure as code in the cloud without first having to write HCL or YAML configuration scripts. CDKTF abstracts away the low-level details of cloud infrastructure, making it easier for developers to define and manage their infrastructure natively in their programming language of choice. Under the hood, CDKTF code is converted into Terraform configuration files on your behalf. This helps to simplify the deployment process and eliminates context switching.

MongoDB Atlas & HashiCorp Terraform: MongoDB began this journey with our partners at HashiCorp when we launched the HashiCorp Terraform MongoDB Atlas Provider in 2019. We have since grown to 10M+ downloads all time, and our provider is the number one provider in the database category.
Today we are delighted to support all CDKTF-supported languages, including JavaScript, TypeScript, Python, Java, Go, and .NET. In addition, with CDKTF users are free to deploy their MongoDB Atlas resources to AWS, Azure, and Google Cloud, enabling true multi-cloud deployments. Learn how to get started via this quick demo.

Start building today! MongoDB Atlas CDKTF integrations are free and open source, licensed under the Mozilla Public License 2.0. Users only pay for the underlying Atlas resources created and can get started with the Atlas always-free tier (M0 clusters). Getting started today is faster than ever with MongoDB Atlas and CDK for HashiCorp Terraform. We can't wait to see what you will build next with this powerful combination!

Learn more about MongoDB Atlas and CDK for HashiCorp Terraform.

February 28, 2023

MongoDB Atlas Integrations for AWS CloudFormation and CDK are now Generally Available

Infrastructure as Code (IaC) tools allow developers to manage and provision infrastructure resources through code, rather than through manual configuration. IaC has empowered developers to apply best practices from software development to application infrastructure deployments. This includes:

- Automation: helping to ensure repeatable, consistent, and reliable infrastructure deployments
- Version control: check IaC code into GitHub, Bitbucket, AWS CodeCommit, or GitLab for improved team collaboration and higher code quality
- Security: create clear audit trails of each infrastructure modification
- Disaster recovery: IaC scripts can be used to quickly recreate infrastructure in the event of availability zone or region outages
- Cost savings: prevent overprovisioning and waste of cloud resources
- Improved compliance: easier to enforce organizational policies and standards

Today we are doubling down on this commitment and announcing MongoDB Atlas integrations with AWS CloudFormation and the Cloud Development Kit (CDK). AWS CloudFormation allows customers to define and provision infrastructure resources using JSON or YAML templates, providing a simple way to manage infrastructure as code and automate the deployment of resources. The AWS Cloud Development Kit (CDK) is an open-source software development framework that allows customers to define cloud infrastructure in code and provision it through AWS CloudFormation. It supports multiple programming languages and allows customers to use high-level abstractions to define infrastructure resources. These new integrations are built on top of the Atlas Admin API and allow users to automate infrastructure deployments by making it easy to provision, manage, and control Atlas infrastructure as code in the cloud.

MongoDB Atlas & AWS CloudFormation: To meet developers where they are, we now have multiple ways to get started with MongoDB Atlas using AWS Infrastructure as Code.
Each of these allows users to provision, manage, and control Atlas infrastructure as code on AWS.

Option 1: AWS CloudFormation

Customers can begin their journey using Atlas resources directly from the AWS CloudFormation Public Registry. We currently have 33 Atlas resources and will continue adding more. Examples of available Atlas resources today include Dedicated Clusters, Serverless Instances, AWS PrivateLink, Cloud Backups, and Encryption at Rest using Customer Key Management. In addition, we have published these resources to 22 (and counting) AWS Regions where MongoDB Atlas is supported today. Learn how to get started via this quick demo.

Option 2: AWS CDK

After its launch in 2019 as an open source project, AWS CDK has gained immense popularity among the developer community, with over a thousand external contributors and more than 1.3 million weekly downloads. AWS CDK abstracts away the low-level details of cloud infrastructure, making it easier for developers to define and manage their infrastructure natively in their programming language of choice. This helps to simplify the deployment process and eliminates context switching. Under the hood, AWS CDK synthesizes CloudFormation templates on your behalf, which are then deployed to AWS accounts.

In AWS CDK, L1 (Level 1) and L2 (Level 2) constructs refer to two different levels of abstraction for defining infrastructure resources:

- L1 constructs are lower-level abstractions that provide a one-to-one mapping to AWS CloudFormation resources. They are essentially AWS CloudFormation resources wrapped in code, making them easier to use in a programming context.
- L2 constructs are higher-level abstractions that provide a more user-friendly and intuitive way to define AWS infrastructure. They are built on top of L1 constructs and provide a simpler and more declarative API for defining resources.
Today we announce MongoDB Atlas availability for AWS CDK in JavaScript and TypeScript, with Python, Java, Go, and .NET support coming later in 2023. Customers can now easily deploy and manage all available Atlas resources by vending AWS CDK applications with prebuilt L1 constructs. We also have a growing number of L2 and L3 CDK constructs available. These include constructs that help users quickly deploy the core resources they need to get started with MongoDB Atlas on AWS in just a few lines of JavaScript or TypeScript (see awscdk-resources-mongodbatlas to learn more). Users can also optionally add more advanced networking configurations such as VPC peering and AWS PrivateLink.

Option 3: AWS Partner Solutions (previously AWS Quick Starts)

Instead of manually pulling together multiple Atlas CloudFormation resources, AWS Partner Solutions gives customers access to pre-built CloudFormation templates for both general and specific use cases with MongoDB Atlas. By using AWS Partner Solutions templates, customers can save time and effort compared to architecting their deployments from scratch. These were jointly created by MongoDB Atlas and AWS and incorporate best practices from both. Go to the AWS Partner Solutions Portal to get started.

Start building today! These MongoDB Atlas integrations with AWS CloudFormation are free and open source, licensed under the Apache License 2.0. Users only pay for the underlying Atlas resources created and can get started with the Atlas always-free tier (M0 clusters). Getting started today is faster than ever with MongoDB Atlas and AWS CloudFormation. We can't wait to see what you will build next with this powerful combination!

Learn more about MongoDB Atlas integrations with AWS CloudFormation.
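As a hedged illustration of Option 1, a registry-based template fragment might look like the following. The resource type names follow the CloudFormation Public Registry listing for MongoDB Atlas, but the properties shown are simplified placeholders; consult each resource's published schema for the full, authoritative set.

```yaml
# Illustrative fragment only -- property names are simplified, and the org
# ID and profile are placeholders; see the registry schemas for exact fields.
Resources:
  AtlasProject:
    Type: MongoDB::Atlas::Project
    Properties:
      Name: my-project
      OrgId: "<atlas-org-id>"
      Profile: default              # Atlas API key profile stored in AWS Secrets Manager
  AtlasCluster:
    Type: MongoDB::Atlas::Cluster
    Properties:
      ProjectId: !GetAtt AtlasProject.Id
      Name: cluster0
      ClusterType: REPLICASET
```

Deploying a template like this with the AWS CLI or the AWS Management Console provisions the Atlas resources alongside the rest of your stack.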

February 28, 2023

Optimizing Your MongoDB Deployment with Performance Advisor

We are happy to announce additional enhancements to MongoDB's Performance Advisor, now available in MongoDB Atlas, MongoDB Cloud Manager, and MongoDB Ops Manager. MongoDB's Performance Advisor automatically analyzes logs for slow-running queries and provides index suggestions to improve query performance. In this latest update, we've made some key changes, including:

- A new ranking algorithm and additional performance statistics (e.g., average documents scanned, average documents returned, and average object size) that make it easier to understand the relative importance of each index recommendation.
- Support for additional query types, including regexes, negation operators (e.g., $ne, $nin, $not), $count, $distinct, and $match, to broaden the coverage of our optimized index suggestions.
- Index recommendations that are now more deterministic, so they are less affected by time and provide more consistent query performance benefits.

Before diving further into MongoDB's Performance Advisor, let's look at the tools MongoDB provides out of the box to simplify database monitoring.

Background

Deploying your MongoDB cluster and getting your database running is a critical first step, but another important aspect of managing your database is ensuring that it is performant and running efficiently. To make this easier for you, MongoDB offers several out-of-the-box monitoring tools, such as the Query Profiler, Performance Advisor, Real-Time Performance Panel, and Metrics Charts, to name a few.

Suppose you notice that your database queries are running slower. The first place you might go is the metrics charts, to look at the "Opcounters" metric and see whether you have more operations running. You might also look at "Operation Execution Time" to see if your queries are taking longer to run. The "Query Targeting" metric shows the ratio of the number of documents scanned to the number of documents returned.
This datapoint is a great measure of a query's overall efficiency: the higher the ratio, the less efficient the query. These and other metrics can help you identify performance issues with your overall cluster, which you can then use as context to dive a level deeper and perform more targeted diagnostics of individual slow-running queries . MongoDB’s Performance Advisor takes this functionality a step further by automatically scanning your slowest queries and recommending indexes where appropriate to improve query performance.

Getting started with Performance Advisor

The Performance Advisor is a unique tool that automatically monitors MongoDB logs for slow-running queries and suggests indexes to improve query performance. It helps improve both your read and write performance by intelligently recommending indexes to create and/or drop (Figure 1). These suggestions are ranked by their determined impact on your cluster. Performance Advisor is available on M10 and above clusters in MongoDB Atlas as well as in Cloud Manager and Ops Manager.

Figure 1: Performance Advisor can recommend indexes to create or drop.

Performance Advisor will suggest which indexes to create, which queries will be affected by each index, and the expected improvement to query performance. All of this is available directly in the Performance Advisor interface, and indexes can be created with just a few clicks. Figure 2 shows additional Performance Advisor statistics about the performance improvements an index would provide.
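To make the query targeting metric concrete, here is a minimal sketch of the arithmetic behind it (plain Python, not MongoDB code; the function name is ours):

```python
def query_targeting_ratio(docs_scanned: int, docs_returned: int) -> float:
    """Ratio of documents examined to documents returned by a query.

    A ratio near 1.0 means the query is well indexed; a high ratio means
    the server scanned many documents it ultimately discarded.
    """
    if docs_returned == 0:
        # Every scanned document was wasted work: maximally inefficient.
        return float("inf") if docs_scanned > 0 else 0.0
    return docs_scanned / docs_returned

# A collection scan that examines 50,000 documents to return 25 of them:
print(query_targeting_ratio(50_000, 25))  # 2000.0
```

A ratio of 2,000 like this one is exactly the kind of inefficiency that a well-chosen index eliminates, which is where the Performance Advisor comes in.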
The performance statistics highlighted for each index recommendation include:

- Execution Count: The number of queries per hour that would be covered by the recommended index
- Avg Execution Time: The average execution time of queries that would be covered by the recommended index
- Avg Query Targeting: The inefficiency of queries that would be covered by the recommended index, measured by the number of documents or index keys scanned in order to return one document
- In Memory Sort: The number of in-memory sorts performed per hour for queries that would be covered by the recommended index
- Avg Docs Scanned: The average number of documents scanned by slow queries with this query shape
- Avg Docs Returned: The average number of documents returned by slow queries with this query shape
- Avg Object Size: The average object size of all objects in the impacted collection

If you have multiple index recommendations, they are ranked by their relative impact on query performance, so the most beneficial suggestion is displayed at the top.

Figure 2: Detailed performance statistics.

Creating optimal indexes ensures that queries are not scanning more documents than they return. However, creating too many indexes can slow down write performance, as each write operation must also update every index. Performance Advisor provides suggestions on which indexes to drop based on whether they are unused or redundant (Figure 3). You can also “hide” an index to evaluate the impact of dropping it without actually dropping it.

Figure 3: Performance Advisor shows which indexes are unused or redundant.

The Performance Advisor provides a simple and cost-efficient way to ensure you’re getting the best performance out of your MongoDB database.
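Performance Advisor's actual ranking algorithm isn't published, but the idea of ranking recommendations by impact can be sketched with the statistics above. Everything here, the weighting, the class, and the sample numbers, is a hypothetical illustration:

```python
from dataclasses import dataclass

@dataclass
class IndexRecommendation:
    index_keys: dict            # e.g. {"status": 1, "created_at": -1}
    execution_count: int        # slow queries per hour the index would cover
    avg_execution_ms: float     # average execution time of those queries
    avg_query_targeting: float  # docs/keys scanned per document returned

def impact_score(rec: IndexRecommendation) -> float:
    # Hypothetical weighting: queries that are frequent, slow, and
    # inefficient benefit the most from an index.
    return rec.execution_count * rec.avg_execution_ms * rec.avg_query_targeting

recs = [
    IndexRecommendation({"status": 1}, 120, 85.0, 400.0),
    IndexRecommendation({"email": 1}, 900, 40.0, 1500.0),
]
recs.sort(key=impact_score, reverse=True)  # most impactful first, as in the UI
```

After sorting, the {"email": 1} recommendation lands on top: it covers far more slow queries per hour with much worse query targeting, so creating it first yields the larger win.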
If you’d like to see the Performance Advisor in action, the easiest way to get started is to sign up for MongoDB Atlas , our cloud database service. Performance Advisor is available on MongoDB Atlas on M10 cluster tiers and higher. Learn more from the following resources:

- Monitor and Improve Slow Queries
- Monitor Your Database Deployments

November 22, 2022

Enhancing Atlas Online Archive With Data Expiration and Scheduled Archiving

Atlas's Online Archive feature allows you to set archiving rules to move data that is not frequently accessed from your Atlas cluster to a MongoDB-managed object store. It also allows you to query both your Atlas and Online Archive data in a unified manner, without having to worry about the tier in which the data resides. We are enhancing this feature with two new capabilities: data expiration and scheduled archiving (both in preview).

Data expiration: Online Archive makes it easy to tier data out of a live database into an object store, but what if you want to set a second threshold to delete data from the archive entirely? Perhaps you don’t want to store data indefinitely due to cost, or maybe a compliance requirement means you need to ensure deletion on a schedule. Previously, the only way to remove data from the archive was to delete the archive completely, which is not a workable option for most use cases. With the new data expiration feature, you can specify how many days data should be stored in the online archive before being deleted. The expiration threshold can be set as low as seven days and as high as 9,125 days, through either the Atlas UI or the Admin API, and expiration rules can be edited after creation if needed. Note that once data is deleted from the archive there is no way to recover it, so define your rules carefully.

Archiving with the data expiration feature

Scheduled archiving: Previously, the archiving process ran every five minutes. In most cases this is acceptable, but some customers were concerned that the process could affect cluster performance when, for example, a cluster is running close to capacity during a specific time period. If an archiving window overlaps with that period, it may overload the cluster and lead to stability issues.
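The exact Admin API payload for an expiration rule is in the Atlas documentation; as a small illustrative sketch (the helper name is ours, the 7-to-9,125-day bounds come from the limits above), a client might validate the threshold before submitting it:

```python
def validate_expire_after_days(days: int) -> int:
    """Validate an Online Archive expiration threshold, in days.

    Atlas accepts values from 7 to 9,125 days, per the limits above.
    Deletion is irreversible, so it pays to check the rule up front.
    """
    if not 7 <= days <= 9125:
        raise ValueError("expiration must be between 7 and 9,125 days")
    return days

# Keep archived data for roughly five years before deletion:
validate_expire_after_days(5 * 365)
```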
These customers requested the ability to schedule archiving during an off-peak time, when the clusters have spare capacity. With this requirement in mind, we are thrilled to introduce the scheduled archiving feature. You configure the scheduled window by setting rules, and the window can repeat every day, every week, or every month, depending on your preference. To ensure that the archive process is able to work through any backlog that accrues, there is a minimum window requirement of two hours.

A scheduled archiving window set to repeat every week

You can also edit the archive rule to define when your data is archived and when it is deleted. Data expiration and scheduled archiving bring additional operational efficiency to Atlas customers. Both are in preview and will be generally available soon. See the documentation for additional information about data expiration and scheduled archiving .
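The two-hour minimum above can be sketched as a simple client-side check. This is an illustration only, assuming an hour-granularity window on a 24-hour clock (the function name and granularity are ours, not the Atlas API):

```python
def archive_window_hours(start_hour: int, end_hour: int) -> int:
    """Length of a recurring archiving window, enforcing the two-hour minimum.

    Hours are on a 24-hour clock; a window may wrap past midnight
    (e.g., 23:00 to 01:00 is a two-hour window).
    """
    length = (end_hour - start_hour) % 24
    if length < 2:
        raise ValueError("the archiving window must be at least two hours long")
    return length

# An off-peak window from 01:00 to 04:00, repeated daily:
archive_window_hours(1, 4)  # 3
```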

June 7, 2022