Product Updates

The most recent MongoDB product releases and updates

MongoDB, Microsoft Team Up to Enhance Copilot in VS Code

As modern applications grow increasingly complex, developers face the challenge of meeting market demands for faster, smarter solutions. To stay ahead, they need tools that streamline their workflows, available directly in the environments where they build. According to the 2024 Stack Overflow Developer Survey, Microsoft's Visual Studio Code (VS Code) is the integrated development environment (IDE) of choice for 74% of professional developers, serving as a central hub for building, testing, and deploying applications. With the rise of AI-powered tools like GitHub Copilot—which is used by 44% of professional developers—there's a growing demand for intelligent assistance in the development process without disrupting flow.

At MongoDB, we believe that the future of development lies in democratizing the value of these experiences by incorporating domain-specific knowledge and capabilities directly into developer flows. That's why we're thrilled to announce the public preview of MongoDB's extension to GitHub Copilot in VS Code. With this integration, developers can effortlessly generate MongoDB queries, inspect collection schemas, and get answers from the latest MongoDB docs—all without leaving their IDE.

"Our collaboration with MongoDB continues to bring powerful, integrated solutions to developers building the modern applications of the future. The new MongoDB extension for GitHub Copilot exemplifies a shared commitment to the developer experience, leveraging AI to ensure that workflows are optimized for developer productivity by keeping everything developers need within reach, without breaking their flow."
Isidor Nikolic, Senior Product Manager for VS Code, Microsoft

But we're not stopping there. As AI continues to evolve, so will the ways developers interact with their tools. Stay tuned for more exciting developments next week at Microsoft Ignite, where we'll unveil more ways we're pushing the boundaries of what's possible with AI through MongoDB and Microsoft's partnership!

What is MongoDB's Copilot extension?

MongoDB's Copilot extension supercharges your GitHub Copilot in VS Code with MongoDB domain knowledge. The Copilot integration is built into the MongoDB for VS Code extension, which has more than 1.8M downloads in the VS Code marketplace today. Type '@MongoDB' in Copilot chat and take advantage of three transformative commands:

- Generate queries from natural language (/query)—generates accurate MongoDB queries by passing collection schema as context to GitHub Copilot
- Query MongoDB documentation (/docs)—answers documentation questions using the latest MongoDB documentation through retrieval-augmented generation (RAG)
- Browse collection schema (/schema)—provides schema information for any collection, which is useful for data modeling with the Copilot extension

Generate queries from natural language

The /query command transforms natural language prompts into MongoDB queries, leveraging your collection schema to produce precise, valid queries. It eliminates the need to manually write complex query syntax and allows developers to quickly extract data without taking their focus away from building applications. Whether you run the query directly from the Copilot chat or refine it in a MongoDB playground file, we've sped up the query-building process by deeply integrating these capabilities into the existing flow of the MongoDB VS Code extension.
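As a purely illustrative example (not output captured from the extension), a prompt like "Which movies released after 2015 have an IMDb rating above 8?" against the Atlas sample_mflix movies collection might resolve to a filter-and-sort query along these lines once the collection schema is supplied as context. The extension itself produces mongosh playground syntax; the equivalent is sketched here with PyMongo, and the connection string is a placeholder:

    from pymongo import MongoClient, DESCENDING

    # Hypothetical connection string; sample_mflix.movies is the Atlas sample dataset.
    client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
    movies = client["sample_mflix"]["movies"]

    # Roughly what the natural-language prompt above could translate into.
    cursor = movies.find(
        {"year": {"$gt": 2015}, "imdb.rating": {"$gt": 8}},
        {"title": 1, "year": 1, "imdb.rating": 1, "_id": 0},
    ).sort("imdb.rating", DESCENDING)

    for doc in cursor:
        print(doc)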
Query MongoDB documentation

The /docs command answers MongoDB documentation-specific questions, complemented by direct links to the official documentation site. There's no need to switch back and forth between your browser and your IDE: the Copilot extension calls out to the MongoDB Documentation Chatbot API, which uses retrieval-augmented generation to produce responses informed by the most recent version of the MongoDB documentation. In the near future, these questions will be smartly routed to the documentation for the specific server version of the cluster you are connected to in the MongoDB VS Code extension.

Browse collection schema

The /schema command offers quick access to collection schemas, making it easier for developers to access and interact with their data model in real time. This is helpful when developers are debugging with Copilot or simply want to know valid field names while developing their applications. Developers can also export collection schemas into JSON files or ask follow-up questions directly to brainstorm data modeling techniques with the MongoDB Copilot extension.

On the Horizon

This is just the start of our work on MongoDB's Copilot extension. As we continue to improve the experience with new features—like translating and testing queries to and from popular programming languages, and in-line query generation in Playgrounds—we remain focused on democratizing AI-driven workflows, empowering developers to access the tools and knowledge they need to build smarter, faster, and more efficiently, right within their existing environments. Download MongoDB's VS Code extension and enable the MongoDB chat experience to get started today.

November 13, 2024
Updates

MongoDB Atlas Introduces Enhanced Cost Optimization Tools

MongoDB Atlas was designed with elasticity at its core and has always allowed customers to scale capacity vertically and horizontally, as required and automatically. Today, these inherent capabilities are even better and more cost-effective. At the recent MongoDB.local London, MongoDB announced several new MongoDB Atlas features that improve elasticity and help optimize costs while maintaining the performance and availability that business-critical applications demand. These include scaling each shard independently, extended storage of 4 TB or more, and 5X more responsive auto-scaling.

Organizations and their customers are inherently dynamic, with operations, web traffic, and application usage growing unpredictably and non-linearly. For example, website traffic can spike due to a single video going viral on social media, and holidays are a frequent cause of application usage slowdowns. Traditionally, organizations have tackled this volatility by over-provisioning infrastructure, often at significant cost. Cloud adoption has improved the speed at which infrastructure can be provisioned in response to growing and volatile demand. Simultaneously, companies are focused on striking the perfect balance between performance and cost efficiency. This balance is acute in the current economic climate, where cost optimization is a top priority for Infrastructure & IT Operations (I&O) leaders.

"The goal is not balance between supply and demand. The goal is to meet the most profitable and mission-critical demand with the resources available."
Nathan Hill, Distinguished VP Analyst, Gartner, December 2023

However, scaling infrastructure to meet demand without overprovisioning can be complex and costly. Organizations have often relied on manual processes (like scheduled scripts) or dedicated teams (like IT ops) to manage this challenge. MongoDB Atlas enables a more effective approach. With MongoDB Atlas, customers can manage flexible provisioning, zero-downtime scaling, and easy auto-scaling of their clusters. From October 2024, all Atlas customers with dedicated tier clusters can employ these recently announced enhancements for improved cost optimization.

Granular resource provisioning

MongoDB's tens of thousands of customers have complex and diverse workloads with constantly changing requirements. Over time, workloads can grow unpredictably, requiring storage, compute, and IOPS to be scaled independently and at differing granularities. Imagine a global retailer preparing for Cyber Monday, when traffic could be 512% higher than average—additional resources to serve customers are vital. Independent shard scaling enables customers running MongoDB Atlas to do this in a cost-optimal manner. Customers can independently scale the tier of individual shards in a cluster when one or more shards experience disproportionately higher traffic. For customers running workloads on sharded clusters, scaling each shard independently of all other shards is now an option (for example, only the shards serving US traffic during Thanksgiving). Customers can also scale operational and analytical nodes independently within a single shard. This improves scalability and cost optimization by providing fine-grained control to add resources to hot shards while maintaining the resources provisioned to other shards. All Atlas customers running dedicated clusters can use this feature through Terraform and the Admin API; an illustrative sketch of such an API call appears below.
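To make the workflow concrete, here is a minimal, hedged sketch of scaling a single shard's tier via the Atlas Administration API with Python's requests library. The project ID, cluster name, credentials, API version header, and payload fields (replicationSpecs, regionConfigs, electableSpecs) are assumptions for illustration only; verify the exact request shape against the current Atlas Admin API reference before using anything like this.

    import requests
    from requests.auth import HTTPDigestAuth

    # All identifiers below are hypothetical placeholders.
    BASE = "https://cloud.mongodb.com/api/atlas/v2"
    GROUP_ID = "<project-id>"
    CLUSTER = "<cluster-name>"
    auth = HTTPDigestAuth("<public-key>", "<private-key>")
    headers = {
        "Accept": "application/vnd.atlas.2024-08-05+json",  # assumed API version
        "Content-Type": "application/json",
    }

    # Fetch the current cluster description; with independent shard scaling,
    # it contains one replicationSpecs entry per shard.
    cluster = requests.get(
        f"{BASE}/groups/{GROUP_ID}/clusters/{CLUSTER}", auth=auth, headers=headers
    ).json()

    # Bump only the first (hot) shard to a larger tier, leaving the rest alone.
    hot_shard = cluster["replicationSpecs"][0]
    for region in hot_shard["regionConfigs"]:
        region["electableSpecs"]["instanceSize"] = "M50"

    resp = requests.patch(
        f"{BASE}/groups/{GROUP_ID}/clusters/{CLUSTER}",
        json={"replicationSpecs": cluster["replicationSpecs"]},
        auth=auth,
        headers=headers,
    )
    print(resp.status_code)

The same cluster payload is where compute auto-scaling bounds (minimum and maximum instance sizes) are configured, so the two capabilities can be combined in one update.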
Support for independent shard auto-scaling and configuration management via the Admin API and Terraform will be available in late 2024.

Extended Storage and IOPS in Azure

MongoDB is introducing the ability to provision additional storage and IOPS on Atlas clusters running on Azure, enabling optimal performance without over-provisioning. Customers can create new clusters on Azure to provision additional IOPS and extended storage of 4 TB or more on larger clusters (M40+). This feature is being rolled out and will be available to all Atlas clusters by late 2024. Head over to our docs page to learn more.

With these updates, customers have greater flexibility and granularity in provisioning and scaling resources across their Atlas clusters on all three major cloud providers, and can therefore optimize for performance and costs more effectively.

More responsive auto-scaling

Granular provisioning is excellent for optimizing costs while ensuring availability for an expected increase in traffic. But what happens if a website gets 13X higher traffic, or an app sees a surge in interactions due to an unexpected social media post? Several enhancements to the algorithms and infrastructure powering MongoDB's auto-scaling capabilities were announced in October 2024 at .local London. Cumulatively, these improve the time taken to scale and the responsiveness of MongoDB's auto-scaling engine. Customers running dynamic workloads, particularly those with sharper peaks, will see up to a 5X improvement in responsiveness. Smarter scaling decisions by Atlas ensure that resource provisioning is optimized while maintaining high performance. This capability is available on all Atlas clusters with auto-scaling turned on, and customers should experience the benefits immediately.

Industry-leading MongoDB Atlas customers like Conrad and Current use auto-scaling to automatically scale their compute capacity, storage capacity, or both without needing custom scripts, manual intervention, or third-party consulting services. Customers can set upper and lower tier limits, and Atlas will automatically scale their storage and tiers depending on workload demands. This ensures clusters always have the optimal resources to maintain performance while optimizing costs. Take a look at how Coinbase is optimizing for both availability and cost in the volatile world of cryptocurrency with MongoDB Atlas' help, or read our auto-scaling docs page to learn more.

Optimize price and performance with MongoDB Atlas

As businesses focus more on optimizing cloud infrastructure costs, the latest MongoDB Atlas enhancements—independent shard scaling, more responsive auto-scaling, and extended storage and IOPS—empower organizations to manage resources efficiently while maintaining top performance. These tools provide the flexibility and control needed to achieve cost-effective scalability. Ready to take control of your cloud costs? Sign up for a free trial today or spin up a cluster to get the performance, availability, and cost efficiency you need.

October 31, 2024
Updates

Announcing Hybrid Search Support for LlamaIndex

MongoDB is excited to announce enhancements to our LlamaIndex integration. By combining MongoDB's robust database capabilities with LlamaIndex's innovative framework for context-augmented large language models (LLMs), the enhanced MongoDB-LlamaIndex integration unlocks new possibilities for generative AI development. Specifically, it supports vector search (powered by Atlas Vector Search), full-text search (powered by Atlas Search), and hybrid search, enabling developers to blend precise keyword matching with semantic search for more context-aware applications, depending on their use case.

Building AI applications with LlamaIndex

LlamaIndex is one of the world's leading AI frameworks for building with LLMs. It streamlines the integration of external data sources, allowing developers to combine LLMs with relevant context from various data formats. This makes it ideal for building application features like retrieval-augmented generation (RAG), where accurate, contextual information is critical. LlamaIndex empowers developers to build smarter, more responsive AI systems while reducing the complexities involved in data handling and query management. Advantages of building with LlamaIndex include:

- Simplified data ingestion with connectors that integrate structured databases, unstructured files, and external APIs, removing the need for manual processing or format conversion.
- Organizing data into structured indexes or graphs, significantly enhancing query efficiency and accuracy, especially when working with large or complex datasets.
- An advanced retrieval interface that responds to natural language prompts with contextually enhanced data, improving accuracy in tasks like question-answering, summarization, or data retrieval.
- Customizable APIs that cater to all skill levels—high-level APIs enable quick data ingestion and querying for beginners, while lower-level APIs offer advanced users full control over connectors and query engines for more complex needs.

MongoDB's LlamaIndex integration

Developers can build powerful AI applications using LlamaIndex as a foundational AI framework alongside MongoDB Atlas as the long-term memory database. With MongoDB's developer-friendly document model and powerful vector search capabilities within MongoDB Atlas, developers can easily store and search vector embeddings for building RAG applications. And because of MongoDB's low-latency transactional persistence capabilities, developers can do a lot more with the MongoDB integration in LlamaIndex to build AI applications in an enterprise-grade manner. LlamaIndex's flexible architecture supports customizable storage components, allowing developers to leverage MongoDB Atlas as a powerful vector store and a key-value store. By using Atlas Vector Search capabilities, developers can:

- Store and retrieve vector embeddings efficiently (llama-index-vector-stores-mongodb)
- Persist ingested documents (llama-index-storage-docstore-mongodb)
- Maintain index metadata (llama-index-storage-index-store-mongodb)
- Store key-value pairs (llama-index-storage-kvstore-mongodb)

Figure adapted from Liu, Jerry and Agarwal, Prakul (May 2023). "Build a ChatGPT with your Private Data using LlamaIndex and MongoDB". Medium. https://medium.com/llamaindex-blog/build-a-chatgpt-with-your-private-data-using-llamaindex-and-mongodb-b09850eb154c

Adding hybrid and full-text search support

Developers may use different approaches to search for different use cases.
Full-text search retrieves documents by matching exact keywords or linguistic variations, making it efficient for quickly locating specific terms within large datasets, such as in legal document review where exact wording is critical. Vector search, on the other hand, finds content that is semantically similar, even if it does not contain the same keywords. Hybrid search combines full-text search with vector search to identify both exact matches and semantically similar content. This approach is particularly valuable in advanced retrieval systems or AI-powered search engines, enabling results that are both precise and aligned with the needs of the end user.

With this integration, it is simple for developers to try out powerful retrieval capabilities on their data and improve the accuracy of their AI applications. In the LlamaIndex integration, the MongoDBAtlasVectorSearch class is used for vector search. To enable full-text search, use VectorStoreQueryMode.TEXT_SEARCH in the same class; similarly, to use hybrid search, enable VectorStoreQueryMode.HYBRID (a brief sketch follows at the end of this post). To learn more, check out the GitHub repository.

With the MongoDB-LlamaIndex integration's support, developers no longer need to navigate the intricacies of Reciprocal Rank Fusion implementation or determine the optimal way to combine vector and text searches—we've taken care of the complexities for you. The integration also includes sensible defaults and robust support, ensuring that building advanced search capabilities into AI applications is easier than ever. This means that MongoDB handles the intricacies of storing and querying your vectorized data, so you can focus on building!

We're excited for you to work with our LlamaIndex integration. Here are some resources to expand your knowledge on this topic:

- Check out how to get started with our LlamaIndex integration
- Build a content recommendation system using MongoDB and LlamaIndex with our helpful tutorial
- Experiment with building a RAG application with LlamaIndex, OpenAI, and our vector database
- Learn how to build with private data using LlamaIndex, guided by one of its co-founders
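As a quick illustration of the query modes described above, here is a minimal sketch. It assumes the llama-index-vector-stores-mongodb package, an Atlas collection that already has both a vector search index and an Atlas Search (full-text) index, and an embedding/LLM provider configured in LlamaIndex; the connection string, namespace, and index names are placeholders, and constructor parameter names such as fulltext_index_name reflect the integration at the time of writing and should be confirmed against the GitHub repository.

    import pymongo
    from llama_index.core import VectorStoreIndex
    from llama_index.core.vector_stores.types import VectorStoreQueryMode
    from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch

    # Placeholder connection string, namespace, and index names.
    mongo_client = pymongo.MongoClient("<ATLAS_CONNECTION_STRING>")
    vector_store = MongoDBAtlasVectorSearch(
        mongo_client,
        db_name="rag_db",
        collection_name="documents",
        vector_index_name="vector_index",    # Atlas Vector Search index
        fulltext_index_name="search_index",  # Atlas Search (full-text) index
    )

    # Build an index handle over documents already ingested into the store.
    index = VectorStoreIndex.from_vector_store(vector_store)

    # The default mode is pure vector search; switch modes for full-text or hybrid.
    hybrid_engine = index.as_query_engine(
        vector_store_query_mode=VectorStoreQueryMode.HYBRID,
        similarity_top_k=5,
    )
    text_engine = index.as_query_engine(
        vector_store_query_mode=VectorStoreQueryMode.TEXT_SEARCH,
    )

    print(hybrid_engine.query("How did the company describe Q3 revenue growth?"))

Switching between modes is a one-argument change, which makes it easy to compare keyword, semantic, and hybrid retrieval on the same data.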

October 17, 2024
Updates

Introducing: Multi-Kubernetes Cluster Deployment Support

Resilience and scalability are critical for today's production applications. MongoDB and Kubernetes are both well known for their ability to support those needs to the highest level. To better enable developers using MongoDB and Kubernetes, we've introduced a series of updates and capabilities that make it easier to manage MongoDB across multiple Kubernetes clusters. In addition to the previously released support for running MongoDB replica sets and Ops Manager across multiple Kubernetes clusters, we're excited to announce the public preview release of support for sharded clusters spanning multiple Kubernetes clusters (GA to follow in November 2024).

Support for deployment across multiple Kubernetes clusters is facilitated through the Enterprise Kubernetes Operator. As a recap, the Enterprise Operator automates the deployment, scaling, and management of MongoDB clusters in Kubernetes. It simplifies database operations by handling tasks such as backups, upgrades, and failover, ensuring consistent performance and reliability in the Kubernetes environment.

Multi-Kubernetes cluster deployment support enhances availability, resilience, and scalability for critical MongoDB workloads, empowering developers to efficiently manage these workloads within Kubernetes. This approach unlocks the highest level of availability and resilience by allowing shards to be located closer to users and applications, increasing geographical flexibility and reducing latency for globally distributed applications.

Deploying replica sets across multiple Kubernetes clusters

MongoDB replica sets are engineered to ensure high availability, data redundancy, and automated failover in database deployments. A replica set consists of multiple MongoDB instances—one primary and several secondary nodes—all maintaining the same dataset. The primary node handles all write operations, while the secondary nodes replicate the data and are available to take over as primary if the original primary node fails. This architecture is critical for maintaining continuous data availability, especially in production environments where downtime can be costly.

Support for deploying MongoDB replica sets across multiple Kubernetes clusters helps ensure this level of availability for MongoDB-based applications running in Kubernetes. Deploying replica sets across multiple Kubernetes clusters enables you to distribute your data not only across nodes within one Kubernetes cluster but across different clusters and geographic locations, ensuring that the rest of your deployment remains operational even if one or more Kubernetes clusters or locations fail, and facilitating faster disaster recovery. To learn more about how to deploy replica sets across multiple Kubernetes clusters using the Enterprise Kubernetes Operator, visit our documentation.

Sharding MongoDB across multiple Kubernetes clusters

While replica sets duplicate data for resilience (and higher read rates), MongoDB sharded clusters divide the data between shards, each of which is effectively a replica set, providing resilience for each portion of the data. This helps your database handle large datasets and high-throughput operations, since each shard has a primary member handling write operations to its portion of the data; this allows MongoDB to scale write throughput horizontally, rather than requiring vertical scaling of every member of a replica set.
In a Kubernetes environment, these shards can now be deployed across multiple Kubernetes clusters, giving higher resilience in the event of the loss of a Kubernetes cluster or an entire geographic location. This also offers the ability to locate shards in the same region as the applications or users accessing that portion of the data, reducing latency and improving user experience. Sharding is particularly useful for applications with large datasets and those requiring high availability and resilience as they grow. Support for sharding MongoDB across multiple Kubernetes clusters is currently in public preview and will be generally available in November.

Deploying Ops Manager across multiple Kubernetes clusters

Ops Manager is the self-hosted management platform that supports automation, monitoring, and backup of MongoDB on your own infrastructure. Ops Manager's most critical function is backup, and deploying it across multiple Kubernetes clusters greatly improves resilience and disaster recovery for your MongoDB deployments in Kubernetes. With Ops Manager distributed across several Kubernetes clusters, you can ensure that backups of deployments remain robust and available, even if one Kubernetes cluster or site fails. Furthermore, it allows Ops Manager to efficiently manage and monitor MongoDB deployments that are themselves distributed across multiple clusters, improving resilience and simplifying scaling and disaster recovery. To learn more about how to deploy Ops Manager across multiple Kubernetes clusters using the Enterprise Kubernetes Operator, visit our documentation.

To leverage multi-Kubernetes-cluster support, you can get started with the Enterprise Kubernetes Operator.

October 10, 2024
Updates

Introducing Dark Mode for MongoDB Documentation

We're excited to announce a highly requested feature: dark mode is now available for MongoDB Documentation!

Every day, developers from all backgrounds—beginners to experts—turn to the MongoDB Documentation. It's packed with comprehensive resources that help you build modern applications using MongoDB and the Atlas developer data platform. With detailed information and step-by-step guides, it's an invaluable tool for improving your skills and making your development work smoother. From troubleshooting tricky queries to exploring new features, MongoDB Documentation is there to support your projects and help you succeed. With dark mode, you can now switch to a darker interface that's easier on the eyes. Whether you're working late or prefer a subdued color palette, dark mode enhances your MongoDB Documentation experience.

How to enable dark mode

Enabling dark mode is simple. Just click on the sun-and-moon icon at the top right of the page to switch between dark mode, light mode, and system settings. It will initially default to your system settings. This is a personal setting and won't affect other users within the project or organization. We've designed dark mode to provide the same user-friendly experience you're used to and to stay consistent across different tools in the developer workflow, including MongoDB Atlas, which is also available in dark mode.

We're all about making your reading experience top-notch! Dark mode is here because you asked for it through our feedback widget on the Docs page. Whether you're an early adopter of dark mode or just trying it out, we'd love your opinion. Just drop your feedback in the widget next to the color theme selector on the MongoDB Documentation page.

Less strain, more gain

Dark mode offers a sleek, modern look that brings a refreshing change from the traditional light mode. Beyond its stylish appearance, dark mode also provides significant practical benefits. Reducing the amount of bright light emitted from your screen helps minimize eye strain and fatigue, making extended periods of device use more comfortable. For those using OLED screens, dark mode can help conserve battery life, as these screens consume less power when displaying darker pixels. Whether you're coding into the late hours or just looking for a more comfortable viewing experience, dark mode is a simple yet powerful tool to enhance your development experience. Try out dark mode on MongoDB Documentation today and enjoy a more comfortable, stylish, and efficient reading experience!

October 9, 2024
Updates

Vector Quantization: Scale Search & Generative AI Applications

We are excited to announce a robust set of vector quantization capabilities in MongoDB Atlas Vector Search. These capabilities will reduce vector sizes while preserving performance, enabling developers to build powerful semantic search and generative AI applications with more scale—and at a lower cost. In addition, unlike relational or niche vector databases, MongoDB's flexible document model—coupled with quantized vectors—allows for greater agility in testing and deploying different embedding models quickly and easily. Support for scalar quantized vector ingestion is now generally available and will be followed by several new releases in the coming weeks. Read on to learn how vector quantization works, and visit our documentation to get started!

The challenges of large-scale vector applications

While the use of vectors has opened up a range of new possibilities, such as content summarization and sentiment analysis, natural language chatbots, and image generation, unlocking insights within unstructured data can require storing and searching through billions of vectors—which can quickly become infeasible. Vectors are effectively arrays of floating-point numbers representing unstructured information in a way that computers can understand (a workload may involve anywhere from a few hundred to billions of such arrays), and as the number of vectors increases, so does the index size required to search over them. As a result, large-scale vector-based applications using full-fidelity vectors often have high processing costs and slow query times, hindering their scalability and performance.

Vector quantization for cost-effectiveness, scalability, and performance

Vector quantization, a technique that compresses vectors while preserving their semantic similarity, offers a solution to this challenge. Imagine converting a full-color image into grayscale to reduce storage space on a computer. This involves simplifying each pixel's color information by grouping similar colors into primary color channels or "quantization bins," and then representing each pixel with a single value from its bin. The binned values are then used to create a new grayscale image that is smaller in size but retains most of the original detail, as shown in Figure 1.

Figure 1: Illustration of quantizing an RGB image into grayscale

Vector quantization works similarly: it shrinks full-fidelity vectors into fewer bits to significantly reduce memory and storage costs without compromising the important details. Maintaining this balance is critical, as search and AI applications need to deliver relevant insights to be useful. Two effective quantization methods are scalar quantization (converting each floating-point value into an integer) and binary quantization (converting each floating-point value into a single bit, 0 or 1). Current and upcoming quantization capabilities will empower developers to maximize the potential of Atlas Vector Search.

The most impactful benefit of vector quantization is increased scalability and cost savings through reduced computing resources and efficient processing of vectors. And when combined with Search Nodes—MongoDB's dedicated infrastructure for independent scalability through workload isolation and memory-optimized infrastructure for semantic search and generative AI workloads—vector quantization can further reduce costs and improve performance, even at the highest volume and scale, unlocking more use cases.

"Cohere is excited to be one of the first partners to support quantized vector ingestion in MongoDB Atlas," said Nils Reimers, VP of AI Search at Cohere.
"Embedding models, such as Cohere Embed v3, help enterprises see more accurate search results based on their own data sources. We're looking forward to providing our joint customers with accurate, cost-effective applications for their needs."

In our tests, compared to full-fidelity vectors, BSON-type vectors—MongoDB's JSON-like binary serialization format for efficient document storage—reduced storage size by 66% (from 41 GB to 14 GB). And as shown in Figures 2 and 3, the tests illustrate significant memory reduction (73% to 96% less) and latency improvements using quantized vectors, where scalar quantization preserves recall performance and binary quantization's recall performance is maintained with rescoring (a process of evaluating a small subset of the quantized outputs against full-fidelity vectors to improve the accuracy of the search results).

Figure 2: Significant storage reduction with good recall and latency performance with quantization on different embedding models

Figure 3: Remarkable improvement in recall performance for binary quantization when combined with rescoring

In addition, thanks to the reduced cost advantage, vector quantization facilitates more advanced, multiple-vector use cases that would have been too computationally taxing or cost-prohibitive to implement. For example, vector quantization can help users:

- Easily A/B test different embedding models using multiple vectors produced from the same source field during prototyping. MongoDB's document model—coupled with quantized vectors—allows for greater agility at lower costs. The flexible document schema lets developers quickly deploy and compare embedding models' results without the need to rebuild the index or provision an entirely new data model or set of infrastructure.
- Further improve the relevance of search results or context for large language models (LLMs) by incorporating vectors from multiple sources of relevance, such as different source fields (product descriptions, product images, etc.) embedded with the same or different models.

How to get started, and what's next

Now, with support for the ingestion of scalar quantized vectors, developers can import and work with quantized vectors from their embedding model providers of choice (such as Cohere, Nomic, Jina, Mixedbread, and others)—directly in Atlas Vector Search; a short ingestion sketch follows at the end of this post. Read the documentation and tutorial to get started. And in the coming weeks, additional vector quantization features will equip developers with a comprehensive toolset for building and optimizing applications with quantized vectors:

- Support for ingestion of binary quantized vectors will enable further reduction of storage space, allowing for greater cost savings and giving developers the flexibility to choose the type of quantized vectors that best fits their requirements.
- Automatic quantization and rescoring will provide native capabilities for scalar quantization, as well as binary quantization with rescoring, in Atlas Vector Search, making it easier for developers to take full advantage of vector quantization within the platform.

With support for quantized vectors in MongoDB Atlas Vector Search, you can build scalable and high-performing semantic search and generative AI applications with flexibility and cost-effectiveness. Check out the documentation and tutorial to get started, or head over to our quick-start guide to get started with Atlas Vector Search today.
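As a minimal sketch of what ingesting pre-quantized vectors can look like, the example below converts a float embedding to int8 with a naive min-max scheme and stores it as a BSON vector using PyMongo. It assumes PyMongo 4.10+ (which provides Binary.from_vector and BinaryVectorDtype), NumPy, and placeholder names for the cluster, database, and collection; in practice you would usually request int8 embeddings directly from your embedding provider rather than quantizing them yourself, and you would also define an Atlas Vector Search index on the embedding field before querying.

    import numpy as np
    from pymongo import MongoClient
    from bson.binary import Binary, BinaryVectorDtype

    # Placeholder connection string and namespace.
    client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
    collection = client["search_db"]["quantized_docs"]

    def to_int8(embedding: list[float]) -> np.ndarray:
        """Naive min-max scalar quantization of a float vector to int8."""
        v = np.asarray(embedding, dtype=np.float32)
        scaled = (v - v.min()) / (v.max() - v.min() + 1e-9)   # map to [0, 1]
        return np.round(scaled * 255 - 128).astype(np.int8)   # map to [-128, 127]

    # A toy embedding; real embeddings come from a model such as Cohere Embed v3.
    float_embedding = [0.12, -0.83, 0.45, 0.07]
    int8_vector = to_int8(float_embedding)

    collection.insert_one({
        "text": "example document",
        # Store the quantized vector as BSON binData (vector subtype).
        "embedding": Binary.from_vector(int8_vector.tolist(), BinaryVectorDtype.INT8),
    })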

October 7, 2024
Updates

MongoDB.local London 2024: Better Applications, Faster

Since we kicked off MongoDB's series of 2024 events in April, we've connected with thousands of customers, partners, and community members in cities around the world—from Mexico City to Mumbai. Yesterday marked the nineteenth stop of the 2024 MongoDB.local tour, and we had a blast welcoming folks across industries to MongoDB.local London, where we discussed the latest technology trends, celebrated customer innovations, and unveiled product updates that make it easier than ever for developers to build next-gen applications.

Over the past year, MongoDB's more than 50,000 customers have been telling us that their needs are changing. They're increasingly focused on three areas:

- Helping developers build faster and more efficiently
- Empowering teams to create AI-powered applications
- Moving from legacy systems to modern platforms

Across these areas, there's a common need for a solid foundation: each requires a resilient, scalable, secure, and highly performant database. The updates we shared at MongoDB.local London reflect these priorities. MongoDB is committed to ensuring that our products are built to exceed our customers' most stringent requirements, and that they provide the strongest possible foundation for building a wide range of applications, now and in the future. Indeed, during yesterday's event, Sahir Azam, MongoDB's Chief Product Officer, discussed the foundational role data plays in his keynote address. He also shared the latest advancement from our partner ecosystem: an AI solution powered by MongoDB, Amazon Web Services, and Anthropic that makes it easier for customers to deploy gen AI customer care applications.

MongoDB 8.0: The best version of MongoDB ever

The biggest news at .local London was the general availability of MongoDB 8.0, which provides significant performance improvements, reduces scaling costs, and adds scalability, resilience, and data security capabilities to the world's most popular document database. Architectural optimizations in MongoDB 8.0 have significantly reduced memory usage and query times, and MongoDB 8.0 has more efficient batch processing capabilities than previous versions. Specifically, MongoDB 8.0 features 36% better read throughput, 56% faster bulk writes, and 20% faster concurrent writes during data replication. In addition, MongoDB 8.0 can handle higher volumes of time series data and can perform complex aggregations more than 200% faster—with lower resource usage and costs. Last (but hardly least!), Queryable Encryption now supports range queries, ensuring data security while enabling powerful analytics.

For more on MongoDB.local London's product announcements—which are designed to accelerate application development, simplify AI innovation, and speed developer upskilling—please read on!

Accelerating application development

Improved scaling and elasticity on MongoDB Atlas

New enhancements to MongoDB Atlas's control plane allow customers to scale clusters faster, respond to resource demands in real time, and optimize performance—all while reducing operational costs. First, our new granular resource provisioning and scaling features—including independent shard scaling and extended storage and IOPS on Azure—allow customers to optimize resources precisely where needed. Second, Atlas customers will experience faster cluster scaling, with up to 50% quicker scaling times, by scaling clusters in parallel by node type.
Finally, MongoDB Atlas users will enjoy more responsive auto-scaling, with a 5X improvement in responsiveness thanks to enhancements in our scaling algorithms and infrastructure. These enhancements are being rolled out to all Atlas customers, who should start seeing benefits immediately.

IntelliJ plugin for MongoDB

Announced in private preview, the MongoDB for IntelliJ Plugin is designed to enhance the way developers work with MongoDB in IntelliJ IDEA, one of the most popular IDEs among Java developers. The plugin allows enterprise Java developers to write and test Java queries faster, receive proactive performance insights, and reduce runtime errors, right in their IDE. By enhancing the database-to-IDE integration, JetBrains and MongoDB have partnered to deliver a seamless experience for their shared user base and unlock their potential to build modern applications faster. Sign up for the private preview here.

MongoDB Copilot Participant for VS Code (Public Preview)

Now in public preview, the new MongoDB Participant for GitHub Copilot integrates domain-specific AI capabilities directly with a chat-like experience in the MongoDB Extension for VS Code.

October 3, 2024
Updates

Top 4 Reasons to Use MongoDB 8.0

This post is also available in: Deutsch, Français, Español, Português, Italiano, 한국어, 简体中文.

We're excited to announce that MongoDB 8.0—the newest version of the world's most popular document database, used by millions of developers and more than 50,000 customers around the world—is now generally available. MongoDB 8.0 builds upon MongoDB's industry-leading capabilities to provide significant performance improvements, reduced costs, and greater ease of use, from local deployments to globally distributed applications at enterprise scale.

Developers have long loved building with MongoDB, so we've ensured that 8.0 kept the bar extremely high for developer usability. MongoDB 8.0 was also built to exceed our customers' most stringent security, resiliency, availability, and performance requirements, and is the most impressive version of MongoDB yet. MongoDB 8.0 gives customers the strongest possible foundation for building a wide range of applications, now and in the future.
Jim Scharf, Chief Technology Officer, MongoDB

For MongoDB 8.0, we focused our engineering efforts around four core goals:

- Optimize performance for the widest variety of applications
- Deliver innovative encryption to unlock new use cases
- Reduce costs and increase scale with rapid and intuitive horizontal scaling for high availability
- Ensure resilience for unexpected application demand

So how do these goals actually benefit teams as they build and manage applications? We'll start by looking at why you should use MongoDB 8.0. Whether you're a seasoned MongoDB veteran or new to the database, MongoDB 8.0 is a great foundation for new applications and for supercharging existing ones alike. Version 8.0 combines the things developers love most about MongoDB—like an intuitive and cohesive developer experience, support for a broad set of use cases, and operational ease of use—with unparalleled performance improvements.

Top reasons to switch to MongoDB 8.0

1. MongoDB 8.0 is over 30% faster than before

As the data applications generate and use grows, minor inefficiencies can lead to disproportionate increases in infrastructure costs. Because many customers primarily interact with businesses through their applications, poor or inconsistent application performance can lead to customer unhappiness, lost opportunities, and declines in revenue. So it's imperative for organizations to ensure that their applications perform consistently well. MongoDB 8.0 significantly improves performance by allowing applications to rapidly and efficiently query and transform data, with up to 36% better throughput. Architectural optimizations in MongoDB 8.0 have reduced memory usage and query times, and a combination of more efficient batch processing and other optimizations has enabled 59% higher throughput for updates and 20% faster concurrent writes during data replication. Additionally, optimizations in MongoDB 8.0 mean the database can handle higher volumes of time series data and perform operations over 200% faster—with lower resource usage and costs.

2. MongoDB 8.0 is more secure than ever

Data protection and security are essential. With the increasing complexity and volume of data being transmitted, stored, and processed across environments, safeguarding sensitive information with robust encryption is more critical than ever. Organizations must protect their data throughout its lifecycle—in transit over networks, at rest where it is stored, and while it's in use for querying and processing.
However, it can be challenging to encrypt data while it is queried and processed, leaving data vulnerable to exposure or exfiltration by malicious actors. MongoDB Queryable Encryption is an industry-first innovation developed by the MongoDB Cryptography Research Group. It allows customers to encrypt sensitive data on the client side, store it securely as fully randomized encrypted data in the MongoDB database, and run expressive queries on the encrypted data for processing. MongoDB 8.0 now includes support for range queries—in addition to equality queries—to expand secure data retrieval with greater flexibility for common searches. With Queryable Encryption, the required data remains encrypted until it reaches an authorized end user using a customer-controlled decryption key—with no cryptography expertise required.

3. MongoDB 8.0 makes it cheaper and easier to scale

As organizations grow, their applications' requirements tend to evolve. For example, scaling to support millions of users can be challenging for organizations that originally designed their applications for thousands of users, because implementing architectural changes in production applications can involve significant effort that is costly and time-consuming. With MongoDB 8.0, horizontal scaling is now faster and easier, and comes at a lower cost. With horizontal scaling, applications can scale beyond the limits of traditional database resources by splitting data across multiple servers, known as shards, without having to pre-provision increasing amounts of compute resources for a single server. New sharding capabilities in MongoDB 8.0 distribute data across shards up to 50 times faster and at up to 50% lower cost to get started.

4. MongoDB 8.0 gives you more control to help your applications run smoothly

End users expect consistent application experiences, even during periods of high demand and usage spikes. Organizations without a highly durable operational database risk poor customer experiences, with lagging application behavior (or even downtime) during times of high demand. MongoDB 8.0 provides greater control for teams optimizing database performance for unpredictable spikes in usage and sustained periods of high demand. MongoDB 8.0 includes new capabilities to set a default maximum time limit for running queries, to reject recurring types of problematic queries, and to set query settings that persist through events like database restarts (a brief sketch appears at the end of this post). These capabilities help deliver consistent application behavior and high performance, irrespective of demand spikes or unexpected events.

Ready to try MongoDB 8.0?

If you are building a new application, the easiest way to get started with MongoDB 8.0 is by going to mongodb.com/try, where you can sign up for a free Atlas account, download the Community edition, and learn more about self-managing MongoDB with an Enterprise Advanced subscription. If you are running a previous version of MongoDB, there are helpful upgrade tutorials for MongoDB Atlas and self-managed deployments, and documentation and expert help from the MongoDB professional services team are on hand. If you have an existing application that is not currently using MongoDB as the database, check out the MongoDB Relational Migrator tool. Relational Migrator can help you map existing relational schemas to a MongoDB schema, perform data migrations, and convert existing relational queries, triggers, and stored procedures to work with MongoDB.
The MongoDB engineering and product teams listened attentively to developer feedback, and MongoDB 8.0 was built with developer usability—as well as security, durability, availability, and performance—top of mind. We're excited for you to give it a try, and are sure you'll enjoy the performance gains and other benefits of MongoDB 8.0!
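As a hedged illustration of the control knobs mentioned in reason 4, the sketch below uses PyMongo to set a cluster-wide default time limit for read operations and to reject a recurring problematic query shape via persistent query settings. It assumes a MongoDB 8.0 deployment, sufficient privileges to run these admin commands, and placeholder connection, namespace, and filter values; the exact command fields should be verified against the MongoDB 8.0 documentation for defaultMaxTimeMS and query settings.

    from pymongo import MongoClient

    # Placeholder connection string; requires privileges to run admin commands.
    client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
    admin = client.admin

    # Default maximum execution time (ms) for read operations, cluster-wide.
    admin.command({
        "setClusterParameter": {"defaultMaxTimeMS": {"readOperations": 5000}}
    })

    # Persistently reject a problematic query shape (survives restarts).
    # The namespace and filter below are hypothetical examples.
    admin.command({
        "setQuerySettings": {
            "find": "orders",
            "filter": {"status": {"$ne": "shipped"}},
            "$db": "sales",
        },
        "settings": {"reject": True},
    })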

October 2, 2024
Updates

Atlas Stream Processing: A Cost-Effective Way to Integrate Kafka and MongoDB

Developers around the world use Apache Kafka and MongoDB together to build responsive, modern applications. There are two primary interfaces for integrating Kafka and MongoDB. In this post, we'll introduce these interfaces and highlight how Atlas Stream Processing offers an easy developer experience, cost savings, and performance advantages when using Apache Kafka in your applications. First, some background.

The Kafka Connector

For many years, MongoDB has offered the MongoDB Connector for Kafka (Kafka Connector). The Kafka Connector enables the movement of data between Apache Kafka and MongoDB, and thousands of development teams use it. While it supports simple message transformation, developers largely handle data processing with separate downstream tools.

Atlas Stream Processing

More recently, we announced Atlas Stream Processing—a native stream processing solution in MongoDB Atlas. Atlas Stream Processing is built on the document model and extends the MongoDB Query API to give developers a powerful, familiar way to connect to streams of data and perform continuous processing. The simplest stream processors act similarly to the primary Kafka Connector use case, helping developers move data from one place to another, whether from Kafka to MongoDB or vice versa. Check out an example:

    // Connect to a MongoDB Atlas database using $source.
    s = {
      $source: {
        connectionName: 'myAtlasCluster',
        db: 'myDB',
        coll: 'myCollection'
      }
    }

    // Write your data to a Kafka topic using $emit.
    e = {
      $emit: {
        connectionName: 'myKafkaConnection',
        topic: 'myTopic'
      }
    }

    // Create your processor and start it!
    sp.createStreamProcessor("mongoDBToKafka", [s, e])
    sp.mongoDBToKafka.start()

Beyond making data movement easy, Atlas Stream Processing enables advanced stream processing use cases that are not possible with the Kafka Connector. One common use case is enriching your event data by using $lookup as a stage in your stream processor. In the example above, a developer can perform this enrichment by simply adding a lookup stage to the pipeline between source and sink. While the Kafka Connector can perform some single-message transformations, Atlas Stream Processing makes for an easier overall experience and gives teams the ability to perform much more complex processing.

Choosing the right solution for your needs

It's important to note that Atlas Stream Processing was built to simplify complex, continuous processing and streaming analytics rather than as a replacement for the Kafka Connector. However, even for the more basic data movement use cases referenced above, it provides a new alternative to the Kafka Connector. The decision will depend on your data movement and processing needs. Three common considerations we see teams weighing to help with this choice are ease of use, performance, and cost.

Ease of use

The Kafka Connector runs on Kafka Connect. If your team already heavily uses Kafka Connect across many systems beyond MongoDB, this may be a good reason to keep it in place. However, many teams find configuring, monitoring, and maintaining connectors costly and cumbersome. In contrast, Atlas Stream Processing is a fully managed service integrated into MongoDB Atlas. It prioritizes ease of use by leveraging the MongoDB Query API to process your event data continuously. Atlas Stream Processing balances simplicity (no managing servers, utilizing other cloud platforms, or learning new tools) and processing power to reduce development time, decrease infrastructure and maintenance costs, and build applications quicker.
Performance

High performance is increasingly a priority with all data infrastructure, but it's often a must-have for use cases that rely on streams of event data (commonly from Apache Kafka) to deliver an application feature. Many of our early customers have found Atlas Stream Processing more performant than similar data movement in their Kafka Connector configurations. By connecting directly to your data in Kafka and MongoDB and acting on it as needed, Atlas Stream Processing eliminates the need for a tool in between.

Cost

Finally, managing costs is a critical consideration for all development teams. We've priced Atlas Stream Processing competitively compared to typical Kafka Connector configurations. Most hosted Kafka providers charge per task, meaning each additional source and sink generates a separate data transfer and storage cost that scales linearly as you expand. Atlas Stream Processing charges per Stream Processing Instance (SPI) worker, and each worker supports up to four stream processors. This means potential cost savings when running configurations similar to the Kafka Connector. See more details in the documentation.

Atlas Stream Processing launched just a few months ago. Developers are already using it for a wide range of use cases, like managing real-time inventories, serving contextually relevant recommendations, and optimizing yields in industrial manufacturing facilities. We can't wait to see what you build and hear about your experience! Ready to get started? Log in to Atlas today. Already a Kafka Connector user? Dig into even more details and get started using our tutorial.

September 9, 2024
Updates

Exploring New Security, Billing, and Customization Features in Atlas Charts

MongoDB is excited to announce a few new updates to Atlas Charts that enable you to securely share insights, gain deeper visibility into expenses, and customize your most frequently used data visualizations. Based on specific feedback received from users of our native visualization tool, these significant improvements will make data analysis even more productive. We have:

- Improved security in Atlas Charts with passcode-protected public dashboards
- Increased visibility into Atlas spending through an updated billing dashboard
- Introduced new customization for table charts through hyperlinks and hidden columns

Secure insights with passcode-protected public dashboards

First, the new passcode-protected public dashboards feature brings an extra layer of security to publicly shared dashboards—we understand that not everyone who benefits from Atlas Charts operates within MongoDB Atlas. Alongside the ability to schedule email reports and support for publicly shared dashboards, this latest feature offers a new and secure way to spread insights: add an extra layer of security to publicly shared dashboards, ensuring that only authorized users with the passcode can access your data.

Enabling passcode protection on a dashboard is simple. As a dashboard owner, a new option is available to protect dashboard links with a passcode when sharing them publicly.

Check the box to protect your public link with a passcode

Once enabled, a passcode is automatically generated and can be copied to the clipboard (and regenerated on demand as needed). Viewers navigating to dashboards via the public link will see a new screen prompting them to enter a passcode. Once authenticated successfully, they can view the dashboard just as before.

Easily access your dashboards by inputting your passcode when prompted

Whether you're sharing insights with clients, stakeholders, or team members, rest assured that your data remains easily accessible yet secure. To learn more about the different ways we support dashboard sharing, check out our documentation.

What's new in the Atlas Charts billing dashboard

Next, we continue to make enhancements to the MongoDB Atlas Charts billing dashboard, all of which provide insights into Atlas expenses. We are delighted to share that it's now possible to see resource tags data, as well as billing data from all linked organizations, inside the Atlas Charts billing dashboard. Additionally, users can now ingest billing data from another organization, provided they possess the organization's API keys. These newly introduced features rely on the availability of billing data within the organization. And for those leveraging resource tags, the billing data will seamlessly integrate, empowering users to generate personalized charts or to incorporate tailored dashboard filters within the Atlas Charts billing dashboard. If cross-organization billing is enabled, editing the configuration will ingest the linked organization's billing data for the last three months, with the option to extend this period to up to a year by creating a new ingestion.

Project tags data in the Atlas Charts billing dashboard

Resource tags are now seamlessly integrated into billing data and can be included in any of the charts or the dashboard filters inside the Atlas billing dashboard. For example, our MongoDB organization uses the Atlas auto-suggested tags "application" and "environment," alongside a custom resource tag labeled "team."
The following chart uses the tags data and shows the billing cost per team and per environment.

A chart which depicts cost per team and environment using tags

The subsequent chart presents the billing cost allocated per project and team, providing valuable insights into the primary cost drivers for each team's projects.

A chart depicting cost allocated per project and team

Users can also add a dashboard filter on the "tags" field, which lets them view the whole dashboard based on the selected tag values. In the next example, we have selected a specific "team": "Charts" from the tags dashboard filter, so we can see all of the billing insights for that team thanks to our custom tag.

Billing insights filtered by the specific "Charts" team in an intuitive dashboard

Linked organization's data in the Atlas Charts billing dashboard

For complex Atlas projects spanning multiple organizations, the Atlas Charts billing dashboard now seamlessly integrates billing data from all linked organizations. The most productive use case is to add a dashboard filter based on the "organizationId" field to enable filtering data according to specific organizations for a more granular analysis of spending.

Dashboard filtered by the organizationId field to show insights for one organization

Billing data from another organization

Users can now ingest billing data from other organizations that are not directly linked, provided they possess authorization API keys, bringing the data you need to where you are.

Provide the API key to ingest billing data from other organizations

These new features in the Atlas Charts billing dashboard are designed to provide richer, more detailed insights into organization spend. Check out our documentation and our previous blog post to learn more.

Hyperlinks and hidden columns for tables in Atlas Charts

Of all the data visualization methods available in Atlas Charts, table charts rank as one of the most popular among our users. So it should come as no surprise that one of the most highly requested features from our customers is the ability to format columnar data as hyperlinks. We're excited to announce that this is now possible in Atlas Charts through the new hyperlink customization options available for table charts.

With hyperlink customization, you can format columnar data as hyperlinks using any of the following URI protocols: http, https, mailto, or tel. URIs can be constructed statically or dynamically using encoded fields. Let's assume we've created a table using the sample movies dataset in Atlas, with encodings like title, imdb.id, runtime, genre, poster_display (which is a calculated field), and more.

Customization panel in Atlas Charts

To turn movie titles into clickable links that direct users to their respective IMDB pages, navigate to the customization panel and click into the hyperlinking feature in the fields tab. We will format the title field as a hyperlink that links to the Internet Movie Database (IMDB) entry for that movie. IMDB URLs are formatted as follows, where <id> needs to be substituted with the value of the imdb.id field for each document:

    https://www.imdb.com/title/tt<id>/

Customize the title field in the table chart to link to IMDB using the imdb.id field in the URI input. Below, a preview displays the fully formatted URI with fields substituted for their values, helping to ensure it's correct before we save it to be applied to the chart.
Preview of URI in the hyperlinking panel

Since we only need the imdb.id field encoded for the purpose of constructing the URI applied to the title field, we can hide that column from rendering using another new customization option. Select the imdb.id field in the customization panel and toggle on the "Hide Column" option.

Toggle "Hide Column"

We also support using URI values directly from fields (provided they use one of the supported protocols). Let's see this in action by creating a hyperlink to the movie poster. In the URI input, trigger the encoded field menu with the @ keyboard shortcut and select the poster field. As in the previous example, a preview is displayed. After saving and applying the hyperlink formatting, we can hide the rendering of the poster field as needed to keep the chart clean.

Use the @ keyboard shortcut to trigger the encoded field menu

All of these options are accessible in the customization panel, making it straightforward to enhance table charts with interactive hyperlinks. For more detailed instructions, visit our documentation.

As we conclude this roundup, we hope you're as excited about these updates as we are. The Atlas Charts team is dedicated to continuously improving Atlas Charts to meet your needs and enhance your data visualization experience. Stay tuned for more updates, and happy charting!

New to Atlas Charts? Get started today by logging into or signing up for MongoDB Atlas, deploying or selecting a cluster, and activating Charts for free.

September 5, 2024
Updates

MongoDB Atlas for Government Supports GCP Assured Workloads

We're excited to announce that MongoDB Atlas for Government now supports the US regions of Google Cloud Assured Workloads, alongside existing support for AWS GovCloud and AWS US regions. This expansion offers greater flexibility for public sector organizations, and for the independent software vendors (ISVs) that serve them, as they modernize applications and migrate workloads to the cloud. MongoDB Atlas for Government is also now available for purchase through the Google Cloud Marketplace.

MongoDB Atlas for Government: Driving digital transformation in the public sector

MongoDB Atlas for Government is an independent, dedicated version of MongoDB Atlas, designed specifically to meet the unique needs of the U.S. public sector and of ISVs developing public sector solutions. This developer data platform provides the versatility and scalability required to modernize legacy applications and migrate workloads to the cloud, all within a secure, fully managed, FedRAMP-authorized environment. Refer to the FedRAMP Marketplace listing for additional information about Atlas for Government.

By leveraging the full functionality of MongoDB's document database and application services, Atlas for Government supports a wide range of use cases within a unified developer data platform, including Internet of Things, AI/ML, analytics, mobile development, single view, transactional workloads, and more. Atlas for Government also ensures robust resilience and comprehensive disaster recovery, maintaining business continuity and minimizing downtime. With a ~99.995% uptime SLA, auto-scaling to handle fluctuations in data consumption, and automated backup and recovery, organizations can have peace of mind that their data is always protected.

Getting started with MongoDB Atlas for Government

MongoDB Atlas for Government can be used to create database clusters deployed to a single region or spanning multiple US regions. Google Cloud Assured Workloads US regions are now supported in Atlas for Government projects tagged as "Gov regions only," allowing the use of both traditional Google Cloud regions and Assured Workloads US regions.

To get started, create a project in Atlas for Government and make sure to select 'Designate as a Gov Cloud regions-only project' during project creation. After creating the project, you can set up a MongoDB cluster in GCP regions: start the cluster creation process and select GCP as the cloud provider, as shown in the figure below. You'll then be prompted to choose one or more GCP regions for your cluster. You can find more details on supported cloud providers and regions in the Atlas for Government documentation.

Creating multi-cloud clusters

The introduction of support for Google Cloud Assured Workloads (US regions) makes MongoDB Atlas for Government the first fully managed multi-cloud data platform authorized at FedRAMP Moderate. Public sector organizations and ISVs can now deploy clusters across Google Cloud Assured Workloads US regions and AWS GovCloud regions, in addition to deploying database clusters across multiple US regions. Whether prioritizing performance, cost, or specific feature sets, Atlas for Government empowers teams to deploy application architectures that take advantage of best-in-class services from multiple cloud providers simultaneously while meeting FedRAMP requirements.
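For illustration, here is a minimal sketch of what a multi-cloud cluster specification could look like when created programmatically. The API host, version header, region names, and instance sizes below are assumptions made for the sake of the example rather than verified values; consult the Atlas for Government documentation for the endpoints and regions supported in your environment.

```python
# Sketch only: endpoint path, version header, region names, and instance sizes
# are assumptions to verify against the Atlas for Government documentation.
import requests
from requests.auth import HTTPDigestAuth

BASE_URL = "https://cloud.mongodbgov.com"  # assumed Atlas for Government API host
PROJECT_ID = "<project-id>"

cluster_spec = {
    "name": "gov-multicloud-cluster",
    "clusterType": "REPLICASET",
    "replicationSpecs": [{
        "regionConfigs": [
            {   # Electable nodes in a Google Cloud Assured Workloads US region
                "providerName": "GCP",
                "regionName": "US_EAST_4",      # placeholder region
                "priority": 7,
                "electableSpecs": {"instanceSize": "M10", "nodeCount": 2},
            },
            {   # Electable node in an AWS GovCloud region
                "providerName": "AWS",
                "regionName": "US_GOV_WEST_1",  # placeholder region
                "priority": 6,
                "electableSpecs": {"instanceSize": "M10", "nodeCount": 1},
            },
        ],
    }],
}

resp = requests.post(
    f"{BASE_URL}/api/atlas/v2/groups/{PROJECT_ID}/clusters",
    json=cluster_spec,
    auth=HTTPDigestAuth("<public-api-key>", "<private-api-key>"),
    headers={"Accept": "application/vnd.atlas.2023-02-01+json"},
)
resp.raise_for_status()
print(resp.json().get("stateName"))
```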
Multi-cloud support also provides additional resiliency and enhanced disaster recovery, safeguarding data and applications against potential service outages and failures with automatic failover.

Ensuring robust data protection and seamless continuity

MongoDB Atlas for Government now supports Google Cloud Assured Workloads US regions, expanding its multi-cloud capabilities alongside existing support for AWS GovCloud and AWS US regions. This enhancement gives public sector organizations and ISVs the flexibility to modernize applications and migrate workloads in a secure, FedRAMP-authorized environment. With robust resilience, comprehensive disaster recovery, and a ~99.995% uptime SLA, Atlas for Government ensures data protection and business continuity. By offering a unified developer data platform for a wide range of use cases, Atlas for Government empowers teams to leverage best-in-class cloud services while meeting stringent compliance requirements.

How do I get started?

Visit our product page to learn more about MongoDB Atlas for Government, or read the Atlas for Government documentation to learn how to get started today.

August 20, 2024
Updates

Atlas Search Nodes: Now with Multi-Region Availability

At MongoDB, we are continually refining our products to create the simplest and most seamless developer experience possible. That mantra applies to how we think about search as well, from the beginning with Atlas Text Search to the announcement of the next paradigm, Atlas Vector Search. We expanded on this vision with the introduction of Search Nodes, initially launching on AWS and then expanding to both Google Cloud and Microsoft Azure. Today we're excited to take the next step in that journey by announcing multi-region availability on all three major cloud providers.

Search Nodes: Isolation and scale

As a quick refresher, Search Nodes provide dedicated infrastructure for Atlas Search and Vector Search workloads, enabling even greater control over search workloads. They allow you to isolate and optimize compute resources so that search and database needs scale independently, delivering better performance at scale and higher availability. Since our initial announcements, we've been thrilled by the excitement around Search Nodes and the demand for better control, flexibility, and availability when scaling both Atlas Search and Vector Search workloads. Incorporating Search Nodes into your deployment delivers workload isolation and the ability to optimize resource usage. A visual of the evolution from the previous coupled architecture to dedicated nodes is shown below.

Figure 1: Improved workload sizing alignment and enhanced scalability with Search Nodes

Introducing Global Availability

Another tenet of our builder's journey is making sure the flexibility, scalability, and performance of Search Nodes are available to everyone, regardless of which cloud provider or region you use. Today, we're excited to officially announce multi-region availability for Search Nodes, allowing anyone to better optimize resource usage regardless of location.

With multi-region availability, you can take full advantage of global scalability and are no longer limited to a single geographic area. You also gain peace of mind from the redundancy needed to protect against unforeseen outages, whether caused by technical issues or by natural disasters that lead to data center downtime.

Figure 2: Multi-region availability on all three major cloud providers

Here is a quick video tutorial on how to enable Search Nodes and take advantage of multi-region availability (a brief API sketch also follows at the end of this post):

Brief tutorial on how to enable multi-region Search Nodes

With today's announcements, we're excited to bring the power and control of dedicated Search Nodes to every cloud and region across the globe. We look forward to seeing continued adoption and improved results as Search Nodes reach more of your search implementations. As always, reach out to us with any feedback; we'd love to hear what you think!
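For readers who prefer a programmatic starting point over the video, here is a minimal sketch of adding dedicated Search Nodes to an existing cluster through the Atlas Administration API's search deployment endpoint. The endpoint path, version header, and instance size shown are assumptions to verify against the Search Nodes documentation; the UI flow in the video accomplishes the same thing.

```python
# Sketch only: the endpoint path, Accept header version, and instance size value
# are assumptions to double-check against the Atlas Search Nodes documentation.
import requests
from requests.auth import HTTPDigestAuth

BASE_URL = "https://cloud.mongodb.com"
PROJECT_ID = "<project-id>"
CLUSTER_NAME = "<cluster-name>"

payload = {
    # Dedicated Search Nodes attached to the cluster, sized independently of the
    # database nodes; see the docs for how node counts map to multi-region clusters.
    "specs": [{"instanceSize": "S20_HIGHCPU_NVME", "nodeCount": 2}]
}

resp = requests.post(
    f"{BASE_URL}/api/atlas/v2/groups/{PROJECT_ID}/clusters/{CLUSTER_NAME}"
    "/search/deployment",
    json=payload,
    auth=HTTPDigestAuth("<public-api-key>", "<private-api-key>"),
    headers={"Accept": "application/vnd.atlas.2023-02-01+json"},
)
resp.raise_for_status()
print(resp.json())
```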

August 14, 2024
Updates
