MongoDB Blog

Announcements, updates, news, and more

Advancing Encryption in MongoDB Atlas

Maintaining a strong security posture and ensuring compliance with regulations and industry standards are core responsibilities of enterprise security teams. However, satisfying these responsibilities is becoming increasingly complex, time-consuming, and high-stakes. The rapid evolution of the threat landscape is a key driver of this challenge. In 2024, the percentage of organizations that experienced a data breach costing $1 million or more jumped from 27% to 36%.[1] This was partly fueled by a 180% surge from 2023 to 2024 in vulnerability exploitation by attackers.[2] Concurrently, regulations are tightening: laws like the Health Insurance Portability and Accountability Act (HIPAA)[3] and the U.S. Securities and Exchange Commission’s cybersecurity regulations[4] have introduced stricter security requirements, raising the bar for compliance.

Thousands of enterprises rely on MongoDB Atlas to protect their sensitive data and support compliance efforts. Encryption plays a crucial role on three levels: securing data at rest, in transit, and in use. However, security teams need more than strong encryption alone; flexibility and control are essential to align with an organization’s specific requirements. To meet these needs, MongoDB is introducing significant upgrades to MongoDB Atlas encryption, including enhanced customer-managed key (CMK) functionality and support for TLS 1.3. This post explores these improvements, along with the planned deprecation of outdated TLS versions, to strengthen organizations’ security postures.

Why customer-managed keys (CMKs) matter

Customer-managed keys (CMKs) are a security and data governance feature that gives enterprises full control over the encryption keys protecting their data. With CMKs, customers can define and manage their encryption strategy, ensuring they have ultimate authority over access to their sensitive information.
MongoDB Atlas customer key management provides file-level encryption, similar to transparent data encryption (TDE) in other databases. This customer-managed encryption-at-rest feature works alongside always-on volume-level encryption[5] in MongoDB Atlas, and CMKs ensure all database files and backups are encrypted. MongoDB Atlas also integrates with AWS Key Management Service (AWS KMS), Azure Key Vault, and Google Cloud KMS, giving customers the flexibility to manage keys as part of their broader enterprise security strategy.

Customers using CMKs retain complete control of their encryption keys. If an organization needs to revoke access to data due to a security concern or any other reason, it can do so immediately by freezing the encryption keys. This capability acts as a “kill switch,” ensuring sensitive information becomes inaccessible when protection is critical. Similarly, an organization can destroy the keys to render the data and backups permanently unreadable and irretrievable, for example when retiring a cluster permanently.

Announcing CMK over private networking

As part of its commitment to deliver secure and flexible solutions for enterprise customers, MongoDB is introducing CMKs over private networking. This enhancement enables organizations to manage their encryption keys without exposing their key management service (KMS) to the public internet. Previously, using CMKs in MongoDB Atlas required Azure Key Vault and AWS KMS to be accessible via public IP addresses. While functional, this posed challenges for customers who need to keep KMS traffic private, forcing them to either expose their KMS endpoints or manage IP allow lists. By using private networking, customers can now:

- Eliminate the need for public IP exposure.
- Simplify network management by removing the need to manage allowed IP addresses, reducing administrative effort and misconfiguration risk.
- Align with organizational requirements that mandate the use of private networking.

Customer key management over private networking is now available for Azure Key Vault and AWS KMS. Customers can enable and manage this feature for all their MongoDB Atlas projects through the MongoDB Atlas UI or the MongoDB Atlas Administration API. More enhancements are coming for MongoDB customer key management in 2025, including secretless authentication mechanisms and CMKs for search nodes.

MongoDB Atlas TLS enhancements advance encryption in transit

Securing data in transit is just as vital as encryption at rest with CMKs. To address this, MongoDB Atlas enforces TLS by default, ensuring encrypted communication across all aspects of the platform, including client connections. Now MongoDB is reinforcing its TLS implementation with key enhancements for enterprise-grade security.

MongoDB is rolling out fleetwide support for TLS 1.3 in MongoDB Atlas. The latest version of the cryptographic protocol offers several advantages over its predecessors, including stronger security defaults, faster handshakes, and reduced latency. Concurrently, TLS versions 1.0 and 1.1 are being deprecated because of known weaknesses and their inability to meet modern security standards. By standardizing on TLS 1.2 and 1.3, MongoDB is aligning with industry best practices and ensuring a secure communication environment for all MongoDB Atlas users. Additionally, MongoDB now offers custom cipher suite selection, giving enterprises more control over their cryptographic configurations. This feature lets organizations choose the cipher suites for their TLS connections, ensuring compliance with their security requirements.

Achieving encryption everywhere

This post has covered how MongoDB secures data at rest with CMKs and in transit with TLS. But what about data in use, while it’s being processed in a MongoDB Atlas instance?
That’s where Queryable Encryption comes in. This groundbreaking feature enables customers to run expressive queries on encrypted data without ever exposing the plaintext or keys outside the client application. Sensitive data and queries never leave the client unencrypted, ensuring sensitive information is protected and inaccessible to anyone without the keys, including database administrators and MongoDB itself.

MongoDB is committed to providing enterprise-grade security that evolves with the changing threat and regulatory landscapes. With enhanced CMK functionality, TLS 1.3 adoption, and custom cipher suite selection, organizations now have greater control, flexibility, and protection across every stage of the data lifecycle. As security challenges grow more complex, MongoDB continues to innovate to enable enterprises to safeguard their most sensitive data. To learn more about these encryption enhancements and how they can strengthen your security posture, visit MongoDB Data Encryption.

[1] PwC, October 2024
[2] Verizon Data Breach Investigations Report, 2024
[3] U.S. Department of Health and Human Services, December 2024
[4] U.S. Securities and Exchange Commission, 2023
[5] MongoDB Atlas Security White Paper, Encryption at Rest section, page 12

March 5, 2025
Applied

AI-Powered Java Applications With MongoDB and LangChain4j

MongoDB is pleased to introduce its integration with LangChain4j, a popular framework for integrating large language models (LLMs) into Java applications. This collaboration simplifies the integration of MongoDB Atlas Vector Search into Java applications for building AI applications.

The advent of generative AI has opened up many new possibilities for developing novel applications. These advancements have led to AI frameworks that simplify the complexities of orchestrating and integrating LLMs and the various components of the AI stack, where MongoDB plays a key role as an operational and vector database.

Simplifying AI development for Java

The first AI frameworks to emerge were developed for Python and JavaScript, which were favored by early AI developers. However, Java remains widespread in enterprise software, which led to the development of LangChain4j to address the needs of the Java ecosystem. While largely inspired by LangChain and other popular AI frameworks, LangChain4j is independently developed. As with other LLM frameworks, LangChain4j offers several advantages for developing AI systems and applications by providing:

- A unified API for integrating LLM providers and vector stores. This enables developers to adopt a modular approach with an interchangeable stack while ensuring a consistent developer experience.
- Common abstractions for LLM-powered applications, such as prompt templating, chat memory management, and function calling, offering ready-to-use building blocks for common AI applications like retrieval-augmented generation (RAG) and agents.

Powering RAG and agentic systems with MongoDB and LangChain4j

MongoDB worked with the LangChain4j open-source community to integrate MongoDB Atlas Vector Search into the framework, enabling Java developers to build AI-powered applications ranging from simple RAG to agentic systems.
In practice, this means developers can now use the unified LangChain4j API to store vector embeddings in MongoDB Atlas and use Atlas Vector Search capabilities to retrieve relevant context data. These capabilities are essential for RAG pipelines, where private, often enterprise, data is retrieved based on relevance and combined with the original prompt to produce more accurate results in LLM-based applications.

LangChain4j supports various levels of RAG, from basic to advanced implementations, making it easy to prototype and experiment before customizing and scaling your solution to your needs. A basic RAG setup with LangChain4j typically involves loading and parsing unstructured data from documents stored locally or on remote services like Amazon S3 or Azure Storage using the Document API. The data is then transformed, split, and embedded to capture the semantic meaning of the content. For more details, check out the documentation on core RAG APIs.

However, real-world use cases often demand advanced RAG and agentic systems. LangChain4j optimizes RAG pipelines with predefined components designed to enhance accuracy, latency, and overall efficiency through techniques like query transformation, routing, content aggregation, and reranking. It also supports AI agent implementation through dedicated APIs, such as AI Services and Tools, with function calling and RAG integration, among others. Learn more about the MongoDB Atlas Vector Search integration in LangChain4j’s documentation.

MongoDB’s dedication to providing the best developer experience for building AI applications across different ecosystems remains strong, and this integration reinforces that commitment. We will continue strengthening our integrations with LLM frameworks, enabling developers to build more innovative AI applications, agentic systems, and AI agents. Ready to start building AI applications with Java?
Learn how to create your first RAG system by visiting our tutorial: How to Make a RAG Application With LangChain4j.
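Under the hood, the retrieval step of such a RAG system maps to an Atlas $vectorSearch aggregation stage, regardless of which framework issues it. Below is a minimal, framework-agnostic sketch of that stage (shown in Python rather than Java for brevity; the index name, field path, and default parameters are illustrative assumptions, not LangChain4j's defaults):

```python
def build_vector_search_stage(query_vector, index="vector_index",
                              path="embedding", num_candidates=100, limit=5):
    """Build the $vectorSearch stage used for approximate nearest-neighbor
    retrieval of context documents in a RAG pipeline."""
    return {
        "$vectorSearch": {
            "index": index,               # name of the Atlas Vector Search index
            "path": path,                 # document field holding the embedding
            "queryVector": query_vector,  # embedding of the user's question
            "numCandidates": num_candidates,  # ANN candidates to consider
            "limit": limit,               # top results to return
        }
    }

# With a live cluster you would run something like:
# collection.aggregate([build_vector_search_stage(embed(question))])
stage = build_vector_search_stage([0.1, 0.2, 0.3])
```

A framework such as LangChain4j assembles this stage for you and feeds the returned documents into the prompt, but seeing the raw stage makes clear what the "retrieval" in retrieval-augmented generation actually costs.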

March 4, 2025
Artificial Intelligence

MongoDB 8.0: Eating Our Own Dog Food

Key Takeaways

- We achieve real-world testing by adopting release candidates (RCs) on our internal production systems before finalizing a release.
- Our diverse internal workloads delivered unique insights. For instance, an internal cluster’s upgrade identified a rare MongoDB server crash and an inefficiency for a specific query shape introduced by a new MongoDB 8.0 feature.
- Issues encountered while testing MongoDB 8.0 internally were fixed proactively before they went out to customers. For example, during an upgrade to an 8.0 RC, one of our internal databases crashed, and the issue was fixed in the next RC.
- Prerelease testing uncovered gaps in our automated testing, leading to improved coverage with additional tests.
- Using MongoDB 8.0 on mission-critical internal systems demonstrated its reliability. This gave customers confidence that the release could handle their demanding workloads, just as it did for our own engineering teams.

Release jitters

Every software release, whether it’s a new product or an update of an existing one, comes with an inherent risk: what if users encounter a bug that the development team didn’t anticipate? With a mission-critical product like MongoDB 8.0, even minor issues can have a significant impact on customer operations, uptime, and business continuity. Unfortunately, no amount of automated testing can guarantee how MongoDB will perform when it lands with customers. So how does MongoDB proactively identify and resolve issues in our software before customers encounter them, thereby ensuring a seamless upgrade experience and maintaining customer trust?

Catching issues before you do

To address these challenges, we employ a combination of methods to ensure reliability. One approach is to formally model our system to prove the design is correct, such as the effort we undertook to mathematically model our protocols with lightweight formal methods like TLA+. Another is to prove reliability empirically by dogfooding.
Dogfooding (🤨)?

Eating your own dog food—aka eating your own pizza, aka “dogfooding”—refers to a development process where you put yourself in customers’ shoes by using your own product in your own production systems. In short: you’re your own customer.

Why dogfood?

- Enhanced product quality: Testing in a controlled environment can’t replicate the edge cases of true-to-life workloads, so real-world scenarios are needed to ensure robustness, reliability, and performance under diverse conditions.
- Early identification of issues: Testing internally surfaces issues earlier in the release process, enabling fixes to be deployed proactively before customers encounter them.
- Build customer empathy: Acting as users provides direct insight into customer pain points and needs. Engineers gain firsthand understanding of the challenges of using their product, informing more customer-centric solutions. Without dogfooding, things like upgrades are taken for granted and customer pain points can be overlooked.
- Boost credibility and trust: Relying on our own software to power critical internal systems reassures customers of its dependability.

Dogfooding at MongoDB

MongoDB has a strong dogfooding culture. Many internal services are built with MongoDB and hosted on MongoDB Atlas, the very same setup we provide our customers. Eating our own dog food is essential to our customer mindset. Because internal teams work alongside MongoDB engineers, acting as users bridges the gap between MongoDB engineers and their customers. Additionally, real-life workloads vet our software and processes in a way automated testing cannot.

Release dogfooding

With the release of MongoDB 8.0, the company decided to take dogfooding one step further. Driven by a company-wide focus on making 8.0 the most performant version of MongoDB yet, we embarked on an ambitious plan to dogfood the release candidates within our own infrastructure. Before, our release process looked like this:

Figure 1.
Releases without real-world testing.

We wanted it to look more like this:

Figure 2. Releases pregamed on internal clusters.

Adding internal testing to the release process allows us to iterate long before we make the product available to customers. Whereas in the past we’d release and fix issues reactively as customers encountered them, using the release internally, before it got into customers’ hands, would uncover edge cases so we could fix them proactively. By acting as our own customers, we remove our real customers from the development cycle and build confidence in the release.

The confidence team

To tackle upgrades effectively, we assembled a cross-functional team of MongoDB engineers, Atlas SREs, and internal service developers. A technical program manager (TPM) was assigned to track progress and coordinate efforts across the team. Together, we enumerated the databases, scheduled upgrade dates, and assigned directly responsible individuals (DRIs) to each upgrade. To streamline communication, we created an internal Slack channel and invited everyone on the team. We agreed on a playbook: with the support of the team, the assigned DRI would upgrade their cluster and monitor for any issues. If something came up, we would create a ticket in an internal Jira project and mention it in Slack for visibility. I took on the role of DRI for the Evergreen database upgrades.

Evergreen

My team maintains the database clusters for Evergreen, MongoDB’s bespoke continuous integration (CI) system. Evergreen is responsible for running automated tests at scale against MongoDB, Atlas, the drivers, Evergreen itself, and many other products. At last count, Evergreen executes, in parallel, roughly ten years’ worth of tests per day, and it is on the critical path for many teams at the company. Evergreen runs on two separate clusters in Atlas: the application’s main replica set and a smaller one for our background job coordinator, Amboy.
In terms of scale, the main replica set contains around 9.5TB of data and handles 1 billion CRUD operations per day, while the Amboy cluster contains about 1TB of data and handles 100 million CRUD operations per day. Because of Evergreen’s criticality to the development cycle, we’ve historically taken a cautious approach to operational changes, and database upgrades were not a priority. The initiative to dogfood our internal clusters changed our approach—we were going to use 8.0 before it went out to customers. Enabling a feature flag in Atlas made the RC build available in our Atlas project before it was available to customers.

A showstopper

Our first target was the Amboy cluster, which handles background jobs for Evergreen. I clicked the button to upgrade the cluster and we held our collective breath. Atlas upgrades are rolling: the upgrade is applied iteratively to each secondary in the cluster until finally the primary is stepped down and upgraded. Usually this works well, since any issues will at most affect a secondary, but in our case it didn’t work out. The secondaries’ upgrades succeeded, but when the primary was stepped down, each node that won the election to be the next primary crashed. The result was that our cluster had no primary and the Amboy database was unavailable, which threw a monkey wrench into our application. We sounded the alarm and an investigation commenced ASAP. Stack traces, logs, and diagnostics were captured, and the cluster was downgraded to 7.0. As it turned out, we’d hit an edge case triggered by a malformed TTL index specification with a combination of two irregularities:

- Its expireAfterSeconds was not an integer.
- It contained a weights field, which is valid only in a text index.

Both irregularities were previously allowed but became invalid due to strengthened validation checks.
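To make the two irregularities concrete, here is a simplified sketch of the checks involved, applied to an index specification document (this is illustrative Python, not the server's actual validation code, and the example spec is hypothetical, not Evergreen's real index):

```python
def ttl_index_irregularities(spec):
    """Return the problems in an index spec that MongoDB 8.0's strengthened
    validation would reject (simplified sketch of the two checks above)."""
    problems = []
    # 1) expireAfterSeconds on a TTL index must be an integer.
    ttl = spec.get("expireAfterSeconds")
    if ttl is not None and not isinstance(ttl, int):
        problems.append("expireAfterSeconds is not an integer")
    # 2) A weights field is only valid on a text index.
    is_text_index = "text" in spec.get("key", {}).values()
    if "weights" in spec and not is_text_index:
        problems.append("weights field on a non-text index")
    return problems

# A hypothetical malformed spec exhibiting both irregularities at once:
bad = {"key": {"createdAt": 1},
       "expireAfterSeconds": 3600.0,          # a double, not an integer
       "weights": {"createdAt": 1}}           # weights on a non-text index
print(ttl_index_irregularities(bad))
```

An index with either irregularity alone was correctable; it was the combination of both on one index that steered the RC into the crashing code path described next.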
When a node steps up to primary, it corrects these malformed index specifications, but in that 8.0 RC, if there were two things wrong with an index, the node would go down an execution path that ended in a segfault. This bug occurs only when a node steps up to primary, which is why it brought down our cluster despite the rolling upgrade. SERVER-94487 was opened to fix the bug, and the fix was rolled into the next RC. When that RC was ready, we upgraded the Amboy database again, and the upgrade succeeded.

Not a showstopper

Next up was the main database cluster for the Evergreen application. We performed the upgrade, and at first all indications were that it was a success. However, on further inspection, a discontinuous jump had appeared in two of the Atlas monitoring graphs. Before the upgrade, our Query Executor graph usually looked like this:

Figure 3. Query Executor graph before the upgrade.

Whereas after the upgrade it looked like this:

Figure 4. Query Executor graph after the upgrade.

This represented roughly a 5x increase in the rate per second of index keys and documents scanned by queries and query plans. Similarly, the Query Targeting graph looked like this before the upgrade:

Figure 5. Query Targeting graph before the upgrade.

And like this afterward:

Figure 6. Query Targeting graph after the upgrade.

This also represented roughly a 5x increase in the ratio of scanned index keys and documents to the number of documents returned. Both graphs indicated that at least one query wasn’t using indexes as well as it had been before the upgrade. We got eyes on the cluster and determined that a bug in index pruning (a new feature introduced in 8.0) was causing the query planner to remove the most efficient index for a contained $or query shape—that is, a query containing an $or branch that isn’t the root of the query predicate, such as A and (C or B).
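Concretely, a contained $or filter has the shape below (the field names here are hypothetical, not the actual Evergreen query; shown as Python dicts, which mirror the driver-level query documents):

```python
# A contained $or: the $or sits under an implicit top-level AND rather
# than being the root of the predicate -- i.e., A and (C or B).
contained_or_filter = {
    "project": "mongodb",            # A
    "$or": [
        {"status": "failed"},        # C
        {"priority": {"$gte": 50}},  # B
    ],
}

# By contrast, a root-level $or is NOT a contained $or:
root_or_filter = {"$or": [{"status": "failed"}, {"priority": {"$gte": 50}}]}

# With a live collection you would run, e.g.:
# collection.find(contained_or_filter)
```

It was specifically the first shape, where the planner must combine the outer predicate with each $or branch, that tripped the index pruning bug.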
For the 8.0 release, the index pruning issue was listed as a known issue and disabled in Atlas, and index pruning was disabled entirely in the 8.0.1 release until we can fix the underlying issue in SERVER-94741.

Other clusters

Other teams’ clusters followed suit, but their upgrades went off without a hitch. It’s to be expected that the particulars of each dataset and workload would trigger various edge cases; Evergreen’s clusters hit some while the rest did not. This brings out an important lesson: testing against a varied set of live workloads raises the likelihood that we’ll encounter and address the issues our customers would otherwise have hit.

Continuous improvement

Although we caught these issues before they reached customers, our shift-left mindset motivates us to catch them even earlier in the process through automated testing. As part of this effort, we plan to add tests focused on upgrades from older versions of the database. The index pruning issue, in particular, was part of the inspiration for us to investigate property-based testing—an approach that has already uncovered several new bugs (SERVER-89308). SERVER-92232 will introduce a property-based test specifically for index pruning.

What’s next?

All told, the exercise was a success. The 8.0 upgrade reduced Evergreen’s operation execution times by an order of magnitude:

Figure 7. Drastically faster database operations after the upgrade.

Dogfooding uncovered novel issues and gave us the chance to fix them before they could disrupt customer workloads. By the time we cut the release, we were confident we were providing our customers a seamless upgrade. Through the dogfooding process, we discovered additional internal teams with services built on MongoDB, and we’re now leaning further into dogfooding by building out a formal framework that will include those teams and their clusters. For the next release, this will uncover even more insights and provide greater confidence.
Looking ahead, as our CTO aptly put it, “all customers demand security, durability, availability, and performance” from their technology. Our commitment to eating our own dog food directly strengthens these very pillars. It’s a commitment to our customers, a commitment to innovation, and a commitment to making MongoDB the best database in the world.

Join our MongoDB Community to learn about upcoming events, hear stories from MongoDB users, and connect with community members from around the world.

March 3, 2025
Engineering Blog

Secure by Default: Mandatory MFA in MongoDB Atlas

On March 26, 2025, MongoDB will start rolling out mandatory multi-factor authentication (MFA) for MongoDB Atlas users. While MFA has long been supported in Atlas, it was previously optional. MongoDB is committed to delivering customers the highest level of security, and the introduction of mandatory MFA adds an extra layer of protection against unauthorized access to MongoDB Atlas.

Note: MFA will require users to provide a second form of authentication, such as a one-time passcode or biometrics. To ensure a smooth transition, users are encouraged to set up their preferred MFA method in advance; this should take around three minutes. If MFA is not configured by March 26, 2025, users will need to enter a one-time password (OTP) sent to their registered email each time they log in.

Why are we making MFA mandatory?

Stealing users’ credentials is a key tactic in the modern cyberattack playbook. According to a Verizon report, stolen credentials have been involved in 31% of data breaches in the past decade, and credential stuffing is the most common attack type for web applications.[1] Credential stuffing is when attackers use stolen credentials obtained from a data breach on one service to attempt to log in to another service. These breaches are particularly harmful, taking an average of 292 days to detect and contain.[2] This rise in cyber threats has rendered password-only security inadequate.

Organizations of all sizes—from global enterprises to individual developers—trust MongoDB Atlas to safeguard their mission-critical applications and sensitive data. Therefore, to strengthen account security and reduce the risk of unauthorized access, MongoDB is introducing mandatory MFA.

The impact of MFA

A large-scale study by Microsoft measured the effectiveness of MFA at preventing cyberattacks on enterprise accounts. The findings indicated that enabling MFA reduces the risk of account compromise by 99.22%.
For accounts with previously leaked credentials, MFA still lowered the risk by 98.56%. This makes MFA one of the most effective defenses against unauthorized access. Requiring MFA by default strengthens the security of all MongoDB Atlas accounts. By reducing the risk of compromised accounts being used in broader attacks, this proactive step protects individual users and enhances MongoDB Atlas’s overall security. Ensuring strong authentication practices across the Atlas ecosystem maintains the integrity of mission-critical applications and sensitive data—and results in a safer experience for everyone.

Preparing for mandatory MFA

MFA will be a prerequisite for all users when logging into MongoDB services using Atlas credentials. These services include:

- MongoDB Atlas user interface
- MongoDB Support portal
- MongoDB University
- MongoDB Forums

Atlas supports the following MFA methods:

- Security key or biometrics: FIDO2 (WebAuthn) compliant security keys (e.g., YubiKey) or biometric authentication (e.g., Apple Touch ID or Windows Hello)
- One-time password (OTP) and push notifications: Provided through the Okta Verify app
- Authenticator apps: Such as Twilio Authy, Google Authenticator, or Microsoft Authenticator, for generating time-based OTPs
- Email: For receiving OTPs

MongoDB encourages users to choose phishing-resistant MFA methods, such as security keys or biometrics.

Strengthening security with mandatory MFA

Requiring MFA is a significant step that enhances MongoDB Atlas’s default security. Multi-factor authentication protects users from credential-based attacks and unauthorized access. Making MFA’s additional layer of authentication mandatory ensures greater account security, safeguarding mission-critical applications and data. To ensure a smooth transition, users are encouraged to set up their preferred MFA method before March 26, 2025. For detailed setup instructions, refer to the MongoDB documentation.
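As background on the time-based OTPs that authenticator apps generate, the underlying algorithm is TOTP (RFC 6238): an HMAC over a 30-second time counter, truncated to a short decimal code. A minimal illustrative sketch (standard-library Python; this is the public algorithm, not MongoDB's implementation):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = unix_time // step                       # 30-second time window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59s.
print(totp(b"12345678901234567890", 59, digits=8))  # -> 94287082
```

Because the code is derived from the current time window and a shared secret, a stolen password alone is useless to an attacker without the second factor, which is the property mandatory MFA relies on.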
And, please visit the MongoDB security webpage and Trust Center to learn more about MongoDB’s commitment to security.

February 28, 2025
Updates

Why Vector Quantization Matters for AI Workloads

Key takeaways

- As vector embeddings scale into the millions, memory usage and query latency surge, leading to inflated costs and poor user experience.
- By storing embeddings in reduced-precision formats (int8 or binary), you can dramatically cut memory requirements and speed up retrieval.
- Voyage AI’s quantization-aware embedding models are specifically tuned to handle compressed vectors without significant loss of accuracy.
- MongoDB Atlas streamlines the workflow by handling the creation, storage, and indexing of compressed vectors, enabling easier scaling and management.
- MongoDB is built for change, allowing users to effortlessly scale AI workloads as resource demands evolve.

Organizations are now scaling AI applications from proofs of concept to production systems serving millions of users. This shift creates scalability, latency, and resource challenges for mission-critical applications leveraging recommendation engines, semantic search, and retrieval-augmented generation (RAG) systems. At scale, minor inefficiencies compound and become major bottlenecks, increasing latency, memory usage, and infrastructure costs. This guide explains how vector quantization enables high-performance, cost-effective AI applications at scale.

The challenge: Scaling vector search in production

Consider a modern voice assistant platform that combines semantic search with natural language understanding. During development, the system only needs to process a few hundred queries per day, converting speech to text and matching the resulting embeddings against a modest database of responses. The initial implementation is straightforward: each query generates a 32-bit floating-point embedding vector that’s matched against a database of similar vectors using cosine similarity. This approach works smoothly in the prototype phase—response times are quick, memory usage is manageable, and the development team can focus on improving accuracy and adding features.
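The prototype approach described above—full-precision float32 embeddings scored with cosine similarity—fits in a few lines. A minimal sketch with numpy (random vectors stand in for real embeddings):

```python
import numpy as np

def cosine_top_k(query: np.ndarray, db: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k most similar embeddings by cosine similarity."""
    # After normalizing, a dot product is exactly cosine similarity.
    q = query / np.linalg.norm(query)
    d = db / np.linalg.norm(db, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(42)
db = rng.standard_normal((1000, 64)).astype(np.float32)  # 1,000 float32 embeddings
noisy_query = db[7] + 0.01 * rng.standard_normal(64).astype(np.float32)
print(cosine_top_k(noisy_query, db))  # a lightly perturbed copy of vector 7
```

At this scale the brute-force scan is instant; the trouble starts when `db` has tens of millions of rows and must stay resident in RAM, which is exactly the scenario the next section describes.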
However, as the platform gains traction and scales to processing thousands of queries per second against millions of document embeddings, this simple approach begins to break down. Each incoming query now requires loading massive amounts of high-precision floating-point vectors into memory, computing similarity scores across an exponentially larger dataset, and maintaining increasingly complex vector indexes for efficient retrieval. Without proper optimization, memory usage balloons, query latency increases, and infrastructure costs spiral upward. What started as a responsive, efficient prototype has become a bottlenecked production system that struggles to maintain its performance requirements while serving a growing user base.

The key challenges are:

- Loading high-precision 32-bit floating-point vectors into memory
- Computing similarity scores across massive embedding collections
- Maintaining large vector indexes for efficient retrieval

These can lead to critical issues like:

- High memory usage as vector databases struggle to keep float32 embeddings in RAM
- Increased latency as systems process large volumes of high-precision data
- Growing infrastructure costs as organizations scale their vector operations
- Reduced query throughput due to computational overhead

AI workloads with tens or hundreds of millions of high-dimensional vectors (e.g., 80M+ documents at 1536 dimensions) face soaring RAM and CPU requirements. Storing float32 embeddings for these workloads can become prohibitively expensive.

Vector quantization: A path to efficient scaling

The obvious question is: how can you maintain the accuracy of your recommendations, semantic matches, and search queries while drastically cutting compute and memory usage and reducing retrieval latency? Vector quantization is how. It helps you store embeddings more compactly, reduce retrieval times, and keep costs under control.
Vector quantization offers a powerful solution to these scalability, latency, and resource-utilization challenges by compressing high-dimensional embeddings into compact representations while preserving their essential characteristics. This technique can dramatically reduce memory requirements and accelerate similarity computations without compromising retrieval accuracy.

What is vector quantization?

Vector quantization is a compression technique widely applied in digital signal processing and machine learning. Its core idea is to represent numerical data using fewer bits, reducing storage requirements without entirely sacrificing the data’s informative value. In the context of AI workloads, quantization commonly involves converting embeddings—originally stored as 32-bit floating-point values—into formats like 8-bit integers. By doing so, you can substantially decrease memory and storage consumption while maintaining a level of precision suitable for similarity search tasks. An important point: quantization is especially suitable for use cases involving over 1 million vector embeddings, such as RAG applications, semantic search, or recommendation systems that require tight control of operational costs without compromising retrieval accuracy. Smaller datasets with fewer than 1 million embeddings might not see significant gains, and for them the overhead of implementing quantization might outweigh its benefits.

Understanding vector quantization

Vector quantization operates by mapping high-dimensional vectors to a discrete set of prototype vectors or converting them to lower-precision formats. There are three main approaches:

- Scalar quantization: Converts individual 32-bit floating-point values to 8-bit integers, reducing memory usage of vector values by 75% while maintaining reasonable precision.
- Product quantization: Compresses entire vectors at once by mapping them to a codebook of representative vectors, offering better compression than scalar quantization at the cost of more complex encoding/decoding.
- Binary quantization: Transforms vectors into binary (0/1) representations, achieving maximum compression but with more significant information loss.

A vector database that applies these compression techniques must effectively manage multiple data structures:

- Hierarchical navigable small world (HNSW) graph for navigable search
- Full-fidelity vectors (32-bit float embeddings)
- Quantized vectors (int8 or binary)

When quantization is defined in the vector index, the system builds quantized vectors and constructs the HNSW graph from these compressed vectors. Both structures are placed in memory for efficient search operations, significantly reducing the RAM footprint compared to storing full-fidelity vectors alone.

The table below illustrates how different quantization mechanisms impact memory usage and disk consumption. This example focuses on HNSW indexes storing 30 GB of original float32 embeddings alongside a 0.1 GB HNSW graph structure. Our RAM usage estimates include a 10% overhead factor (a 1.1 multiplier) to account for JVM memory requirements with indexes loaded into the page cache, reflecting typical production deployment conditions. Actual overhead may vary based on specific configurations. Key attributes to consider in the table:

- Estimated RAM usage: Combines HNSW graph size with either full or quantized vectors, plus a small overhead factor (1.1 for index overhead).
- Disk usage: Includes storage for full-fidelity vectors, HNSW graph, and quantized vectors when applicable.
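To illustrate scalar quantization concretely, here is a minimal, self-contained sketch in pure Python (an illustrative toy, not MongoDB's internal implementation) that maps floats in a known range to signed 8-bit integers and back:

```python
def scalar_quantize(vector, lo, hi):
    """Map floats in [lo, hi] to signed 8-bit ints (-128..127).

    A simplified sketch of scalar quantization; real systems derive
    lo/hi per dimension from the data distribution.
    """
    scale = (hi - lo) / 255.0
    return [round((v - lo) / scale) - 128 for v in vector]

def scalar_dequantize(codes, lo, hi):
    """Approximate reconstruction of the original floats."""
    scale = (hi - lo) / 255.0
    return [(c + 128) * scale + lo for c in codes]

vec = [0.12, -0.87, 0.45, 0.99]
codes = scalar_quantize(vec, lo=-1.0, hi=1.0)
approx = scalar_dequantize(codes, lo=-1.0, hi=1.0)
# Each value now fits in 1 byte instead of 4 (75% smaller),
# at the cost of a small reconstruction error.
```

The reconstruction error is bounded by half the quantization step, which is why similarity rankings usually survive the compression.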
Notice that while enabling quantization increases total disk usage—because you still store full-fidelity vectors for exact nearest neighbor queries in both cases, and for rescoring in the case of binary quantization—it dramatically decreases RAM requirements and speeds up initial retrieval.

MongoDB Atlas Vector Search offers powerful scaling capabilities through its automatic quantization system. As illustrated in Figure 1 below, MongoDB Atlas supports multiple vector search indexes with varying precision levels: float32 for maximum accuracy, scalar quantized (int8) for balanced performance with a 3.75× RAM reduction, and binary quantized (1-bit) for maximum speed with a 24× RAM reduction. This variety allows users to optimize their vector search workloads based on specific requirements. For collections exceeding 1M vectors, Atlas automatically applies the appropriate quantization mechanism, with binary quantization particularly effective when combined with float32 rescoring for final refinement.

Figure 1: MongoDB Atlas Vector Search architecture with automatic quantization. Data flows through embedding generation, storage, and tiered vector indexing with binary rescoring.

Binary quantization with rescoring

A particularly effective strategy is to combine binary quantization with a rescoring step using full-fidelity vectors. This approach offers the best of both worlds: extremely fast lookups thanks to binary data formats, plus more precise final rankings from higher-fidelity embeddings.

- Initial retrieval (binary): Embeddings are stored as binary to minimize memory usage and accelerate the approximate nearest neighbor (ANN) search. Hamming distance (via XOR plus population count) is used, which is computationally faster than Euclidean or cosine similarity on floats.
- Rescoring: The top candidate results from the binary pass are re-evaluated using their float or int8 vectors to refine the ranking.
This step mitigates the loss of detail in binary vectors, balancing result accuracy with the speed of the initial retrieval. By pairing binary vectors for rapid recall with full-fidelity embeddings for final refinement, you can keep your system highly performant while maintaining strong relevance.

The need for quantization-aware models

Not all embedding models perform equally well under quantization. Models need to be trained with quantization in mind to maintain their effectiveness when compressed. Some models—especially those trained purely for high-precision scenarios—suffer significant accuracy drops when their embeddings are represented with fewer bits. Quantization-aware training (QAT) involves:

- Simulating quantization effects during the training process
- Adjusting model weights to minimize information loss
- Ensuring robust performance across different precision levels

This is particularly important for production applications where maintaining high accuracy is crucial. Embedding models like those from Voyage AI—which recently joined MongoDB—are specifically designed with quantization awareness, making them more suitable for scaled deployments. These models preserve more of their essential feature information even under aggressive compression. Voyage AI provides a suite of embedding models designed with QAT in mind, ensuring minimal loss in semantic quality when shifting to 8-bit integer or even binary representations.

Figure 2: Embedding model performance comparing retrieval quality (NDCG@10) versus storage costs. Voyage AI models (green) maintain superior retrieval quality even with binary quantization (triangles) and int8 compression (squares), achieving up to 100x storage efficiency compared to standard float embeddings (circles).

The graph above shows several important patterns that demonstrate why quantization-aware training is crucial for maintaining performance under aggressive compression.
The Voyage AI family of models (shown in green) delivers strong retrieval quality even under extreme compression. The voyage-3-large model demonstrates this dramatically: when using int8 precision at 1024 dimensions, it performs nearly identically to its float-precision, 2048-dimensional counterpart, showing only a minimal 0.31% quality reduction despite using 8 times less storage. This showcases how models specifically designed with quantization in mind can preserve their semantic understanding even under substantial compression.

Even more impressive is how QAT models maintain their edge over larger, uncompressed models. The voyage-3-large model with int8 precision and 1024 dimensions outperforms OpenAI-v3-large (using float precision and 3072 dimensions) by 9.44% while requiring 12 times less storage. This performance gap highlights that raw model size and dimension count aren't the decisive factors; it's the intelligent design for quantization that matters.

The cost implications become truly striking when we examine binary quantization. Using voyage-3-large with 512-dimensional binary embeddings, we still achieve better retrieval quality than OpenAI-v3-large with its full 3072-dimensional float embeddings while using 200 times less storage. To put this in practical terms: what would have cost $20,000 in monthly storage can be reduced to just $100 while actually improving performance.

In contrast, models not specifically trained for quantization, such as OpenAI's v3-small (shown in gray), show a more dramatic drop in retrieval quality as compression increases. While these models perform well in their full floating-point representation (at 1x storage cost), their effectiveness deteriorates more sharply when quantized, especially with binary quantization.
For production applications where both accuracy and efficiency are crucial, choosing a model that has undergone quantization-aware training can make the difference between a system that degrades under compression and one that maintains its effectiveness while dramatically reducing resource requirements. Read more on the Voyage AI blog.

Impact: Memory, retrieval latency, and cost

Vector quantization addresses the three core challenges of large-scale AI workloads—memory, retrieval latency, and cost—by compressing full-precision embeddings into more compact representations. Below is a breakdown of how quantization drives efficiency in each area.

Figure 3: Quantization performance metrics: memory savings with minimal accuracy trade-offs. Comparison of scalar vs. binary quantization showing RAM reduction (75%/96%), query accuracy retention (99%/95%), and performance gains (>100%) for vector search operations.

Memory and storage optimization

Quantization techniques dramatically reduce compute resource requirements while maintaining search accuracy for vector embeddings at scale.

- Lower RAM footprint: Storage in RAM is often the primary bottleneck for vector search systems. Embeddings stored as 8-bit integers or binary reduce overall memory usage, allowing significantly more vectors to remain in memory. This compression directly shrinks vector indexes (e.g., HNSW), leading to faster lookups and fewer disk I/O operations.
- Reduced disk usage with binData: binData (binary) formats can cut raw storage needs by up to 66%. Some disk overhead may remain when storing both quantized and original vectors, but the performance benefits justify this tradeoff.
- Practical gains: A 3.75× reduction in RAM usage with scalar (int8) quantization, and up to a 24× reduction with binary quantization, especially when combined with rescoring to preserve accuracy. The result is significantly more efficient vector indexes, enabling large-scale deployments without prohibitive hardware upgrades.
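The binary retrieve-then-rescore flow described earlier can be sketched in a few lines of pure Python. This is an illustrative toy, not Atlas's implementation: vectors are binarized by sign, candidates are ranked by Hamming distance (XOR plus popcount), and the top candidates are rescored with full-precision cosine similarity:

```python
import math

def binarize(vec):
    """Pack a float vector into an int bitmask (1 where value > 0)."""
    bits = 0
    for v in vec:
        bits = (bits << 1) | (1 if v > 0 else 0)
    return bits

def hamming(a, b):
    """XOR + population count: very cheap on binary codes."""
    return bin(a ^ b).count("1")

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def search(query, corpus, k=10, rescore_top=3):
    """Stage 1: rank all vectors by Hamming distance on binary codes.
    Stage 2: rescore the best candidates with full-precision cosine."""
    q_bits = binarize(query)
    coarse = sorted(range(len(corpus)), key=lambda i: hamming(q_bits, binarize(corpus[i])))
    shortlist = coarse[:rescore_top]
    return sorted(shortlist, key=lambda i: cosine(query, corpus[i]), reverse=True)[:k]

corpus = [[0.9, 0.1, -0.3], [-0.8, 0.5, 0.2], [0.7, 0.2, -0.1]]
print(search([1.0, 0.0, -0.2], corpus, k=2))  # → [0, 2]
```

The expensive full-precision comparison runs only on the shortlist, which is why the two-stage design keeps latency low without giving up ranking quality.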
Retrieval latency

Quantization methods leverage CPU cache optimizations and efficient distance calculations to accelerate vector search operations beyond what's possible with standard float32 embeddings.

- Faster similarity computations: Smaller data types are more CPU-cache-friendly, which speeds up distance calculations. Binary quantization uses Hamming distance (XOR plus popcount), yielding dramatically faster top-k candidate retrieval.
- Improved throughput: With reduced memory overhead, the system can handle more concurrent queries at lower latencies. In internal benchmarks, query performance for large-scale retrievals improved by up to 80% when adopting quantized vectors.

Cost efficiency

Vector quantization provides substantial infrastructure savings by reducing memory and computation requirements while maintaining retrieval quality through compression and rescoring techniques.

- Lower infrastructure costs: Smaller vectors consume fewer hardware resources, enabling deployments on less expensive instances or tiers. Reduced CPU/GPU time per query allows resource reallocation to other critical parts of the application.
- Better scalability: As data volumes grow, memory and compute requirements don’t escalate as sharply. Quantization-aware training (QAT) models, such as those from Voyage AI, help maintain accuracy while reaping cost savings at scale.

By compressing vectors into int8 or binary formats, you tackle memory constraints, accelerate lookups, and curb infrastructure expenses—making vector quantization an indispensable strategy for high-volume AI applications.

MongoDB Atlas: Built for changing workloads with automatic vector quantization

The good news for developers is that MongoDB Atlas supports automatic scalar and automatic binary quantization in index definitions, reducing the need for external scripts or manual data preprocessing.
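As a sketch of what such an index definition looks like, the quantization setting is a single field in the vector index document. The field path "embedding", the dimension count, and the index name are placeholders for your own schema; consult the MongoDB documentation for the authoritative syntax:

```python
# A vector search index definition enabling automatic scalar quantization.
# "embedding" (the field path) and numDimensions are placeholders.
vector_index_definition = {
    "fields": [
        {
            "type": "vector",
            "path": "embedding",        # field holding the float32 embedding array
            "numDimensions": 1536,
            "similarity": "cosine",
            "quantization": "scalar",   # or "binary" for maximum compression
        }
    ]
}

# With pymongo, a definition like this would typically be passed to
# collection.create_search_index(...) as a "vectorSearch" index
# (that call requires a running Atlas cluster, so it is not executed here).
```

Switching between scalar and binary quantization is then a one-line change to the index definition rather than a data migration.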
By quantizing at index build time and query time, organizations can run large-scale vector workloads on smaller, more cost-effective clusters.

A common question developers ask is when to use quantization. Quantization becomes most valuable once you reach substantial data volumes—on the order of a million or more embeddings. At this scale, memory and compute demands can skyrocket, making reduced memory footprints and faster retrieval speeds essential. Examples of cases that call for quantization include:

- High-volume scenarios: Datasets with millions of vector embeddings where you must tightly control memory and disk usage.
- Real-time responses: Systems needing low-latency queries under high user concurrency.
- High query throughput: Environments with numerous concurrent requests demanding both speed and cost-efficiency.

For smaller datasets (under 1 million vectors), the added complexity of quantization may not justify the benefits. However, for large-scale deployments, it becomes a critical optimization that can dramatically improve both performance and cost-effectiveness.

Now that we have established a strong foundation on the advantages of quantization—specifically the benefits of binary quantization with rescoring—refer to the MongoDB documentation to learn more about implementing vector quantization. You can also learn more about Voyage AI’s state-of-the-art embedding models on our product page.

February 27, 2025
Artificial Intelligence

Hasura: Powerful Access Control on MongoDB Data

Across industries—and especially in highly regulated sectors like healthcare, financial services, and government—MongoDB has been a preferred modern database solution for organizations handling large volumes of sensitive data that require strict compliance adherence. In such enterprises, secure access to data via APIs is critical, particularly when information is distributed across multiple MongoDB databases and external data stores. Hasura extends and enhances MongoDB's access control capabilities by providing granular permissions at the column and field level across multiple databases through its unified interface.

At the same time, designing a secure API system from scratch to meet this need takes significant development resources and becomes a burden to maintain and update. Hasura solves this problem for enterprises by serving as a federated data layer with robust access control policies built in. Hasura enforces powerful access control rules across data domains, joins data from multiple sources, and exposes it to the user via a single API. In this blog, we'll explore how Hasura and MongoDB work together to empower teams with granular data access control while simplifying data retrieval across collections.

Team-specific data domains

First, Hasura makes it possible for a business unit or team to own a set of databases and collections, also known as a data domain. Within each domain, a team can connect any number of MongoDB databases and other data sources, allowing the domain to have fine-grained role-based access control (RBAC) and attribute-based access control (ABAC) across all sources. More important, though, is the ability to enable relationships that span domains, effectively connecting data from various teams or business units and exposing it to a verified user as necessary. This granular permissioning system means that the right users can access the right data at the right time, without compromising security.
Field-level access control

Hasura’s MongoDB connector also provides a powerful, declarative way to define access control rules at the collection and field level. For each MongoDB collection, roles may be specified for read, create, update, and delete (CRUD) permissions. Within those permissions, access may be further restricted based on the values of specific attributes. By defining these rules declaratively, Hasura makes it easy to implement and reason about complex access control policies.

Joining across collections

In addition to enabling granular access control, Hasura simplifies the retrieval of related data across multiple databases. By inspecting your MongoDB collections, Hasura can automatically create schemas and API endpoints (in GraphQL, REST, etc.) that let you query data along with its relationships. This eliminates the need to manually stitch together data from different collections in your application code. Instead, a graph of related data can be retrieved in a single API call, while still being filtered through your access control rules.

As companies wrestle with the challenges of secure data access across sprawling database environments, Hasura provides a compelling solution. By serving as a federated data layer on MongoDB and external data, Hasura enables granular access control through a combination of role-based permissions, attribute-based restrictions, and the ability to join data and apply access rules across sources.

Figure 1. Hasura & MongoDB demo environment

With Hasura’s MongoDB connector, teams can easily implement sophisticated data access policies in a declarative way and provide their applications with secure access to the data they need. This combination of security and simplicity makes Hasura and MongoDB a powerful solution for organizations that strive to modernize, especially those in industries with strict compliance requirements. Visit the MongoDB Resources Hub to learn more about MongoDB Atlas.

February 26, 2025
Applied

Debunking MongoDB Myths: Enterprise Use Cases

MongoDB is frequently viewed as a go-to database for proof-of-concept (POC) applications. The flexibility of MongoDB’s document model enables teams to rapidly prototype and iterate, adapting the data model as requirements evolve during the early stages of application development. It is common for applications to continuously evolve during initial development. However, moving an application to production requires developers to add validation logic and fully define the data structures.

A frequent assumption is that because MongoDB data models can be flexible, they cannot be structured. However, while MongoDB does not require a defined schema, it does support schemas. MongoDB allows users to precisely calibrate rules and enforcement levels for every component of data. This enables a level of granular control that traditional databases, with their all-or-nothing approach to schema enforcement, struggle to match. Data model flexibility is not a binary choice between "schemaless" and "strictly enforced." More accurately, in MongoDB it exists on a spectrum. Users can incrementally define schemas in parallel with the overall “hardening” of the application.

MongoDB's approach to data modeling makes it an ideal platform for business-critical applications. It is designed to support the entire application lifecycle, from nascent concepts and initial prototypes to global rollouts of production environments. Enterprise-grade features like ACID transactions and industry-leading scalability ensure MongoDB can meet the demands of any modern application.

Learning from the past

So why do misconceptions about MongoDB persist? These perceptions originated over a decade ago. Teams working with MongoDB back in 2014 or earlier faced challenges when deploying it in production. Applications could slow down under heavy loads, data consistency was not guaranteed when writing to multiple documents, and teams lacked tools to monitor and manage deployments effectively.
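As an illustration of that spectrum, MongoDB's schema validation attaches a $jsonSchema validator to a collection. The collection and field names below are hypothetical; the point is that rules can be added field by field as the application hardens:

```python
# A schema validator for a hypothetical "users" collection. With pymongo it
# would be applied via db.create_collection("users", validator=user_validator)
# or a collMod command (both require a running server, so not executed here).
user_validator = {
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["email", "created_at"],
        "properties": {
            "email": {
                "bsonType": "string",
                "description": "must be a string and is required",
            },
            "age": {
                "bsonType": "int",
                "minimum": 0,
                "description": "optional, but must be a non-negative integer",
            },
        },
    }
}
# validationLevel ("strict" vs. "moderate") and validationAction
# ("error" vs. "warn") let you dial enforcement up or down incrementally.
```

Unvalidated fields remain free-form, so prototyping stays fast while the validated core of the document is enforced.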
As a result, MongoDB gained a perception of being unsuitable for specific use cases or critical workloads. This perception has persisted despite a decade of subsequent development and innovation, making it an inaccurate assessment of today’s preeminent document database. MongoDB has evolved into a mature platform that directly addresses these historical pain points. Today’s MongoDB delivers robust tooling, guaranteed consistency, and comprehensive data validation capabilities.

Myth: MongoDB is a niche database

What are the top use cases for MongoDB? This question is difficult to answer because MongoDB is a general-purpose database that can support almost any use case. The document model is the primary driver of MongoDB’s versatility. Documents are similar to JSON objects, with data represented as key-value pairs. Values can be simple types like strings or numbers, but they can also be arrays or nested objects, which allows documents to easily represent complex hierarchical structures. The document model's flexibility allows data to be stored exactly as the application consumes it. This enables highly efficient writes and optimizes data for retrieval without needing to set up standard or materialized views, although both are supported.

While MongoDB is no longer a niche database, it does have advanced capabilities to support niche requirements. The aggregation pipeline provides a powerful framework for data analytics and transformation. Time-series collections store and query temporal data efficiently to support IoT and financial applications. Geospatial indexes and queries enable location-based applications to perform complex proximity calculations. MongoDB Atlas includes native support for vector search, which enabled Cisco to experiment with generative AI use cases and streamline their applications to production. MongoDB handles the diverse data requirements that power modern applications.
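To give a flavor of the aggregation pipeline mentioned above, here is a hedged sketch. The collection and field names ("orders", "status", "amount", "customer_id") are invented for illustration; the pipeline is simply an ordered list of stage documents:

```python
# Hypothetical pipeline: total revenue per customer for completed orders,
# highest spenders first. With pymongo it would run as
# db.orders.aggregate(revenue_pipeline) against a live database.
revenue_pipeline = [
    {"$match": {"status": "complete"}},      # filter first, so indexes can help
    {"$group": {
        "_id": "$customer_id",                # group key
        "total": {"$sum": "$amount"},         # accumulate revenue
        "orders": {"$sum": 1},                # count orders per customer
    }},
    {"$sort": {"total": -1}},                 # biggest spenders first
    {"$limit": 10},
]
```

Each stage transforms the stream of documents produced by the previous one, which is what makes pipelines composable for analytics and transformation work.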
The document model provides the foundation for general use, while advanced features ensure teams do not need to integrate additional tools as application requirements evolve. The result is a single platform that can grow from prototype to production, handling general requirements and specialized workloads with equal proficiency.

Myth: MongoDB is not suitable for enterprise-grade workloads

A common perception is that MongoDB works well for small applications but falls short at enterprise scale. Ironically, many organizations first consider MongoDB while struggling to scale their relational databases. These organizations have discovered that MongoDB’s architecture is specifically designed to support scale-out distributed deployments. While MongoDB matches relational databases in vertical scaling capabilities, the document model enables a more natural and intuitive approach to horizontal scaling. Because related data is stored together in a single document, MongoDB can easily distribute complete data units across shards. This contrasts with relational databases, where data is split across multiple tables, making it difficult to place all related data on the same shard.

Horizontal scaling with MongoDB sets an organization up for better performance. Most MongoDB queries need to access only a single shard, whereas equivalent queries in a relational database often require costly cross-server communication. Telefonica Tech has leveraged horizontal scaling to nearly double their capacity with a 40% hardware reduction. MongoDB Atlas further automates and simplifies these scaling capabilities through a fully managed service built to meet demanding enterprise requirements. Atlas provides a 99.995% uptime guarantee and availability across AWS, Google Cloud, and Azure in over 100 regions worldwide.
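As a sketch of how a collection is distributed across shards, sharding is enabled with a shardCollection command keyed on a field of the document. The namespace "mydb.orders" and the key field "customer_id" are hypothetical:

```python
# Hypothetical shardCollection command. With pymongo it would run as
# client.admin.command(shard_command) against a sharded cluster
# (not executed here).
shard_command = {
    "shardCollection": "mydb.orders",
    "key": {"customer_id": "hashed"},   # hashed key spreads writes evenly
}
# Because an order document embeds its own line items, the complete data
# unit lands on one shard, and queries filtered by customer_id can be
# routed to a single shard instead of fanning out.
```

This is the mechanism behind the single-shard query advantage described above: the shard key, not a cross-table join, determines data placement.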
This frees teams to focus on rapid development and innovation rather than infrastructure maintenance by offloading the operational complexity of deploying and running databases at scale.

Powering the enterprise applications of today and tomorrow

Over 50,000 customers and 70% of the Fortune 100 rely on MongoDB to power their enterprise applications. Independent industry reports from Gartner and Forrester continue to recognize MongoDB as a leader in the database space. Do not let outdated myths keep your organization from the competitive advantages of MongoDB's enterprise capabilities.

To learn more about MongoDB, head over to MongoDB University and take our free Intro to MongoDB course. Read more about customers building on MongoDB. Read our first blog in this series about myths around MongoDB vs. relational databases. Check out the full video to learn about the other six myths that we're debunking in this series.

February 25, 2025
Applied

MongoDB & DKatalis’s Bank Jago, Empowering Over 500 Engineers

DKatalis, a technology company specializing in scalable digital solutions, is the engineering arm behind Bank Jago, Indonesia’s first digital bank. An app-only institution, Bank Jago enables end-to-end banking with features such as auto budgeting, which allows customers to easily and effectively organize their finances by creating “Pockets” for expenses like food, savings, or entertainment. Launched in 2019, Bank Jago has seen tremendous growth in only a few years, with its customer base reaching 14.1 million as of October 2024.

While speaking at MongoDB.local Jakarta, Chris Samuel, Staff Engineer at DKatalis, shared how MongoDB became the data backbone of Bank Jago, and how MongoDB Atlas supported Bank Jago’s growth. Bank Jago’s journey with MongoDB started in 2019, when DKatalis built the first version of Bank Jago using the on-premises version of MongoDB: MongoDB Community Edition. “We did everything ourselves, up to the point when we realized that the bigger our user [base] grew, the more painful it was for us to monitor everything,” said Samuel.

In 2021, DKatalis decided to migrate Bank Jago from MongoDB Community Edition to MongoDB Atlas. This first involved migrating all data to Atlas. Then the database platform had to be set up to facilitate scalability and enable improved maintenance operations in the long term. “In terms of process, it is actually seamless,” said Samuel during his MongoDB.local talk.

Specifically, MongoDB Atlas offers six key capabilities that have facilitated the bank’s daily operations, supported its fast growth, and improved efficiencies:

- Flexibility: MongoDB's document model supports diverse data types and adapts to Jago's dynamic requirements.
- Scalability: MongoDB Atlas effortlessly supports the rapid growth in user base and data volume.
- High performance: The platform enables fast query execution and efficient data retrieval for a seamless customer experience.
- Real-time capabilities: MongoDB Atlas prevents delays during transactions, account creation, and balance checking.
- Regulatory compliance: With MongoDB Atlas, local hosting is possible, enabling DKatalis to meet Indonesian financial regulatory standards.
- Community support: MongoDB’s strong developer community and rich ecosystem in Jakarta foster collaboration and learning.

All of these have also helped improve efficiencies for DKatalis’s team of over 500 engineers, who are now able to reduce data architecture complexity and focus on innovation.

Fostering a great engineering culture and community with MongoDB

In another talk, at MongoDB.local Singapore, DKatalis’s Chief Engineering Officer, Alex Titlyanov, explained that using MongoDB has been and continues to be a great learning, upskilling, and operational experience for his team. “DKatalis has a pretty unique organizational culture when it comes to its engineering teams: there are no designated engineering managers or project managers; instead, teams are self-managed,” said Titlyanov. “This encourages a community-driven environment, where engineers are continuously upgrading their skills, particularly with tools like MongoDB.”

The company has established internal communities, such as the MongoDB community led by Principal Software Engineer Boon Hian Tek. These communities focus on knowledge sharing, skill-building, and ensuring that the company’s 500 engineers are proficient in using MongoDB. This deep knowledge of MongoDB—and the ease of use offered by the Atlas platform—means that DKatalis’s engineers are also able to build their own bespoke tools to improve daily operations and meet specific needs. For example, the team has built a range of tools aimed at helping deal with the complexity and scale of Bank Jago’s data architecture. “Most traditional banks offer their customers access to six months, sometimes a year’s worth of transaction history.
But Bank Jago gives access to the entire transaction history,” said Boon. The engineering team ended up having to deal with 56 different databases and 485 data collections. Some reach 1.13 billion documents, while others receive up to 42.5 million new documents every day.

Some of the bespoke tools built on MongoDB Atlas include:

- Index sync report: DKatalis implemented a custom-built tool using MongoDB’s Atlas API to manage database indexing automatically. This was essential given the bank’s real-time requirements; adding indexes manually during peak hours would have disrupted performance.
- Daily reporting: The team built a tool to monitor for slow queries. This provides daily reports on query performance so issues can be identified and resolved quickly.
- Add index: The Rolling Index feature from Atlas was initially used. However, the team required greater context for each index, so they built a tool that automatically checks at 3:00 am whether there are any indexes to create, then calls the Atlas API to create them and publish the results.
- Exporting metrics: The Atlas console provided helpful diagrams, but the team needed each metric to be available per database and per collection rather than per cluster. They built a thin layer on top of the Atlas console that slices up the required metrics using the Atlas API.

“The scalability and flexibility of MongoDB have been essential in helping the team handle the bank’s fast growth and complex feature set. MongoDB’s document-oriented structure enables us to develop innovative features like ‘Pockets’, and we continue to see MongoDB as an integral part of our technology stack in the future,” said Titlyanov.

Visit our product page to learn more about MongoDB Atlas. To learn how MongoDB powers solutions in the financial services industry, visit our solutions page.

February 24, 2025
Applied

Multi-Agent Collaboration for Manufacturing Operations Optimization

While there are some naysayers across the media landscape who doubt the potential impact of AI innovations, for those of us immersed in implementing AI on a daily basis, there’s wide agreement that its potential is huge and world-altering. It’s now generally accepted that large language models (LLMs) will eventually be able to perform tasks as well as—if not better than—a human. And the size of the potential AI market is truly staggering. Bain’s AI analysis estimates that the total addressable market (TAM) for AI and gen AI-related hardware and software will grow between 40% and 55% annually, reaching between $780 billion and $990 billion by 2027.

This growth is especially relevant to industries like manufacturing, where generative AI can be applied across the value chain. From inventory categorization to product risk assessments, knowledge management, and predictive maintenance strategy generation, AI's potential to optimize manufacturing operations cannot be overstated.

But in order to realize the transformative economic potential of AI, applications powered by LLMs need to evolve beyond chatbots that leverage retrieval-augmented generation (RAG). Truly transformative AI-powered applications need to be objective-driven, not just responding to user queries but also taking action on behalf of the user. This is crucial in complex manufacturing processes. In other words, they need to act like agents.

Agentic systems, or compound AI systems, are currently emerging as the next frontier of generative AI applications. These systems consist of one or more AI agents that collaborate with each other and use tools to provide value. An AI agent is a computational entity containing short- and long-term memory, which enables it to provide context to an LLM. It also has access to tools, such as web search and function calling, that enable it to act upon the response from an LLM or provide additional information to the LLM.

Figure 1. Basic components of an agentic system.
An agentic system can have more than one AI agent. In most cases, AI agents may be required to interact with other agents within the same system or with external systems. They’re expected to engage with humans for feedback or review of outputs from execution steps. AI agents can also comprehend the context of outputs from other agents and humans, and change their course of action and next steps accordingly. For example, agents can monitor and optimize various facets of manufacturing operations simultaneously, such as supply chain logistics and production line efficiency.

There are certain benefits to having a multi-agent collaboration system instead of a single agent:

- Each agent can be customized to do one thing and do it well. For example, one agent can create meeting minutes while another agent writes follow-up emails. This also applies to predictive maintenance, with one agent analyzing machine data to find mechanical issues before they occur while another optimizes resource allocation, ensuring materials and labor are utilized efficiently.
- You can provision dedicated resources and tools for different agents. For example, one agent uses a model to analyze and transcribe videos while another uses models for natural language processing (NLP) and answering questions about the video.

Figure 2. Multi-agent collaboration system.

MongoDB can act as the memory provider for an agentic system. Conversation history alongside vector embeddings can be stored in MongoDB, leveraging the flexible document model. Atlas Vector Search can be used to run semantic search on stored vector embeddings, and our sharding capabilities allow for horizontal scaling without compromising on performance. Our clients across industries have been leveraging MongoDB Atlas for their generative AI use cases, including agentic AI use cases such as Questflow, which is transforming work by using multi-agent AI to handle repetitive tasks in strategic roles.
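As a toy illustration of agent memory, here is a pure-Python sketch under stated assumptions: the conversation text, stored snippets, and two-dimensional "embeddings" are all invented for the example, and in production the documents would live in MongoDB with the similarity search performed by Atlas Vector Search rather than in application code:

```python
import math

class AgentMemory:
    """Minimal in-memory stand-in for an agent's memory store.

    Short-term memory: ordered conversation history.
    Long-term memory: embedded snippets retrieved by cosine similarity.
    """

    def __init__(self):
        self.history = []      # short-term: list of (role, text)
        self.long_term = []    # long-term: list of (text, embedding)

    def remember(self, role, text):
        self.history.append((role, text))

    def store(self, text, embedding):
        self.long_term.append((text, embedding))

    def recall(self, query_embedding, k=2):
        """Return the k most similar stored snippets (semantic search)."""
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb)
        ranked = sorted(self.long_term, key=lambda item: cos(query_embedding, item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

memory = AgentMemory()
memory.remember("user", "The milling machine temperature is rising.")
memory.store("Overheating often precedes spindle bearing failure.", [0.9, 0.1])
memory.store("Follow-up emails should be sent within 24 hours.", [0.0, 1.0])
print(memory.recall([0.8, 0.2], k=1))
```

The recalled snippets plus the recent history are what get injected into the LLM prompt as context on each agent step.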
Supported by MiraclePlus and MongoDB Atlas, it enables startups to automate workflows efficiently. As it expands to larger enterprises, it aims to boost AI collaboration and streamline task automation, paving the way for seamless human-AI integration. The concept of a multi-agent collaboration system is new, and it can be challenging for manufacturing organizations to identify the right use case to apply this cutting-edge technology. Below, we propose a use case where three agents collaborate with each other to optimize the performance of a machine. Multi-agent collaboration use case in manufacturing In manufacturing operations, leveraging multi-agent collaboration for predictive maintenance can significantly boost operational efficiency. For instance, consider a production environment where three distinct agents—predictive maintenance, process optimization, and quality assurance—collaborate in real-time to refine machine operations and maintain the factory at peak performance. In Figure 3, the predictive maintenance agent is focused on machinery maintenance. Its main tasks are to monitor equipment health by analyzing sensor data generated from the machines. It predicts machine failures and recommends maintenance actions to extend machinery lifespan and prevent downtime as much as possible. Figure 3. A multi-agent system for production optimization. The process optimization agent is designed to enhance production efficiency. It analyzes production parameters to identify inefficiencies and bottlenecks, and it optimizes said parameters by adjusting them (speed, vibration, etc.) to maintain product quality and production efficiency. This agent also incorporates feedback from the other two agents while making decisions on what production parameter to tune. 
For instance, if the predictive maintenance agent flags an anomaly in a milling machine’s temperature sensor reading, such as rising temperature values, the process optimization agent can review the cutting speed parameter for adjustment. The quality assurance agent is responsible for evaluating product quality. It analyzes optimized production parameters and checks how those parameters can affect the quality of the product being fabricated. It also provides feedback for the other two agents. The three agents constantly exchange feedback with each other, and this feedback is stored in the MongoDB Atlas database as agent short-term memory, while vector embeddings and sensor data are persisted as long-term memory. MongoDB is an ideal memory provider for agentic AI use case development thanks to its flexible document model, extensive security and data governance features, and horizontal scalability. All three agents have access to a "search_documents" tool, which leverages Atlas Vector Search to query vector embeddings of machine repair manuals and old maintenance work orders. The predictive maintenance agent leverages this tool to surface additional insights while performing machine root cause diagnostics. Set up the use case shown in this article using our repo. To learn more about MongoDB’s role in the manufacturing industry, please visit our manufacturing and automotive webpage. To learn more about AI agents, visit our Demystifying AI Agents guide.
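As a closing illustration of the "search_documents" tool described above, the sketch below builds the kind of Atlas Vector Search aggregation pipeline such a tool might run. The `$vectorSearch` stage is Atlas's vector search syntax, but the index name, field paths, and candidate counts are assumptions for illustration, not the repo's actual code.

```python
def build_search_documents_pipeline(query_vector, limit=5):
    """Aggregation pipeline a "search_documents"-style tool might run
    against a collection of repair-manual and work-order embeddings.

    The index name and field paths below are assumed for illustration.
    """
    return [
        {
            "$vectorSearch": {
                "index": "manuals_vector_index",  # Atlas Search index (assumed name)
                "path": "embedding",              # field holding the stored vector
                "queryVector": query_vector,
                "numCandidates": 100,             # ANN candidates to consider
                "limit": limit,
            }
        },
        # Keep only the fields the agent needs, plus the similarity score.
        {"$project": {"text": 1, "source": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_search_documents_pipeline([0.1] * 1536, limit=3)
# With pymongo: results = db.manual_chunks.aggregate(pipeline)
```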

February 19, 2025
Artificial Intelligence

BAIC Group Powers the Internet of Vehicles With MongoDB

The Internet of Vehicles (IoV) is revolutionizing the automotive industry by connecting vehicles to the Internet. Vehicle sensors generate a wealth of data, affording manufacturers, vehicle owners, and traffic departments deep insights. This unlocks new business opportunities and enhances service experiences for both enterprises and consumers. BAIC Research Institute , a subsidiary of Beijing Automotive Group Co. (BAIC Group), is a backbone enterprise of the Chinese auto industry. Headquartered in Beijing, BAIC Group is involved in everything from R&D and manufacturing of vehicles and parts to the automobile service trade, comprehensive traveling services, financing, and investments. BAIC Group is a Fortune Global 500 company with more than 67 billion USD of annual revenue. The Institute is also heavily invested in the IoV industry. It plays a pivotal role in the research and development of the two major independent passenger vehicle products in China: Arcfox and Beijing Automotive . It is also actively involved in building vehicle electronic architecture, intelligent vehicle controls, smart cockpit systems, and smart driving technologies. To harness cutting-edge, data-driven technologies such as cloud computing, the Internet of Things, and big data, the Institute has built a comprehensive IoV cloud platform based on ApsaraDB for MongoDB . The platform collects, processes, and analyzes data generated by over a million vehicles, providing intelligent and personalized services to vehicle owners, automotive companies, and traffic management departments. At MongoDB.local Beijing in September 2024, BAIC Group’s Deputy Chief Engineer Chungang Zuo said that the BAIC IoV cloud platform facilitates data access for over a million vehicles. It also supports online services for hundreds of thousands of vehicles. 
Data technology acts as a key factor for IoV development With a rapid increase of vehicle ownership in recent years, the volume of data on BAIC Group’s IoV cloud platform quickly surged. This led to several data management challenges, namely the need to handle large data volumes, high update frequencies, complex data formats, high data concurrency, low query efficiency, and data security issues. The IoV platform also needed to support automotive manufacturers, who must centrally store and manage large amounts of diverse transactional data. Finally, the platform needed to enable manufacturers to leverage AI and analytical capabilities to interpret and create value from this data. BAIC Group’s IoV cloud platform reached a breaking point because its legacy databases could neither handle the deluge of exponentially growing vehicle data nor support the planned AI-driven capabilities. The Institute identified MongoDB as the solution to support its underlying data infrastructure. By using MongoDB, BAIC would gain a robust core to enhance data management efficiency from the business layer to the application layer. The power of MongoDB as a developer data platform offered a wide range of capabilities. This was a game-changer for the Institute. MongoDB’s document model makes managing complex data simple Unlike traditional relational database models, MongoDB’s JSON data structure and flexible schema model are well suited for the variety and scale of the ever-changing data produced by connected vehicles. In traditional databases, vehicle information is spread across multiple tables, each with nearly a hundred fields, leading to redundancy, inflexibility, and complexity. With MongoDB, all vehicle information can be stored in a single collection, simplifying data management. Migrating vehicle information to MongoDB has significantly improved the Institute’s data application efficiency. 
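To make the single-collection point concrete, here is a sketch of how one vehicle's information, which a relational design would spread across several tables, can live in a single document. Every field name and value is invented for illustration and is not BAIC's actual schema.

```python
# A single vehicle document replacing rows spread across several
# relational tables (identity, battery, telemetry, alerts). All field
# names and values are hypothetical, not BAIC's production schema.
vehicle = {
    "vin": "LNBSCU3F0XW000001",
    "model": "Arcfox alpha-S",
    "battery": {"soc_pct": 76, "health_pct": 93},
    "telemetry": {
        "odometer_km": 18432,
        # GeoJSON point: [longitude, latitude]
        "location": {"type": "Point", "coordinates": [116.397, 39.909]},
    },
    "alerts": [
        {"code": "TPMS_LOW", "raised_at": "2024-09-01T08:15:00Z"},
    ],
}

# With pymongo, the whole nested document goes into one collection:
#   db.vehicles.insert_one(vehicle)
```

Because related data travels together in one document, a single read retrieves what would otherwise require multi-table joins.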
MongoDB’s GeoJSON supports location data management The ability to accurately calculate vehicle location within the IoV cloud platform is a key benefit offered by MongoDB. In particular, MongoDB’s GeoJSON support (geospatial indexing) enables important features, such as the ability to screen vehicle parking situations. Zuo explained that during the data cleaning phase, the Institute formats raw vehicle data for MongoDB storage and outputs it as standardized cleaned data. In the data calculation phase, GeoJSON filters vehicles within a specific range. This is followed by algorithmic clustering analysis of locations to derive vehicle parking information. Finally, the Institute retrieves real-time data from the MongoDB platform to classify and display vehicle parking situations on a map for easy viewing. MongoDB provides scalability and high performance MongoDB’s sharded cluster enhances data capacity and processing performance, enabling the Institute to effectively manage exponential IoV data growth. The querying and result-returning processes are executed concurrently in a multi-threaded manner. This facilitates continuous horizontal expansion without any downtime as data needs grow. Zuo said that a significant advantage for developers is the high self-healing capability of the sharded cluster; if a primary node fails, MongoDB automatically fails over to a secondary. This ensures seamless service and process integrity. Security features meet data regulatory requirements MongoDB’s built-in security features enable the IoV platform to meet rigorous data protection standards, helping the Institute stay compliant with regulatory requirements and industry standards. With MongoDB, the Institute can ensure end-to-end data encryption throughout the entire data lifecycle, including during transmission, storage, and processing, with support for executing queries directly on encrypted data. 
For example, during storage, MongoDB encrypts sensitive data, such as vehicle identification numbers and phone numbers. Sharding and replication mechanisms establish a robust data security firewall. Furthermore, MongoDB’s permission control mechanism enables secure database management with decentralized authority. Zuo said that MongoDB’s sharded storage and clustered deployment features ensure the platform’s reliability exceeds the 99.99% service-level agreement. MongoDB’s high concurrency capabilities enable the Institute to share real-time vehicle status updates with vehicle owners’ apps, enhancing user experience and satisfaction. In addition, MongoDB’s unique compression technology and flexible cloud server configurations reduce data storage space and resource waste. This significantly lowers data storage and application costs. BAIC uses MongoDB to prepare for future opportunities Looking ahead, Zuo stated that the BAIC IoV cloud platform has expanding demands for data development and application in three areas: vehicle data centers, application scenario implementation, and AI applications. MongoDB’s capabilities will remain core to helping address the Institute’s upcoming needs and challenges.

February 19, 2025
Applied

Smarter Care: MongoDB & Microsoft

Healthcare is on the cusp of a revolution powered by data and AI. Microsoft, with innovations like Azure OpenAI, Microsoft Fabric, and Power BI, has become a leading force in this transformation. MongoDB Atlas complements these advancements with a flexible and scalable platform for unifying operational, metadata, and AI data, enabling seamless integration into healthcare workflows. By combining these technologies, healthcare providers can enhance diagnostics, streamline operations, and deliver exceptional patient care. In this blog post, we explore how MongoDB and Microsoft AI technologies converge to create cutting-edge healthcare solutions through our “Leafy Hospital” demo—a showcase of possibilities in breast cancer diagnosis. The healthcare data challenge The healthcare industry faces unique challenges in managing and utilizing massive datasets. From mammograms and biopsy images to patient histories and medical literature, making sense of this data is often time-intensive and error-prone. Radiologists, for instance, must analyze vast amounts of information to deliver accurate diagnoses, while ensuring sensitive patient data is handled securely. MongoDB Atlas addresses these challenges by providing a unified view of disparate data sources, offering scalability, flexibility, and advanced features like Search and Vector search. When paired with Microsoft AI technologies, the potential to revolutionize healthcare workflows becomes limitless. The leafy hospital solution: A unified ecosystem Our example integrated solution, Leafy Hospital, showcases the transformative potential of MongoDB Atlas and Microsoft AI capabilities in healthcare. Focused on breast cancer diagnostics, this demo explores how the integration of MongoDB’s flexible data platform with Microsoft’s cutting-edge features—such as Azure OpenAI, Microsoft Fabric, and Power BI—can revolutionize patient care and streamline healthcare workflows. 
The solution takes a three-pronged approach to improve breast cancer diagnosis and patient care: Predictive AI for early detection Generative AI for workflow automation Advanced BI and analytics for actionable insights Figure 1. Leafy hospital solution architecture If you’re interested in discovering how this solution could be applied to your organization’s unique needs, we invite you to connect with your MongoDB account representative. We’d be delighted to provide a personalized demonstration of the Leafy Hospital solution and collaborate on tailoring it for your specific use case. Key capabilities Predictive AI for early detection Accurate diagnosis is critical in breast cancer care. Traditional methods rely heavily on radiologists manually analyzing mammograms and biopsies, increasing the risk of errors. Predictive AI transforms this process by automating data analysis and improving accuracy. BI-RADS prediction BI-RADS (Breast Imaging-Reporting and Data System) is a standardized classification for mammogram findings, ranging from 0 (incomplete) to 6 (malignant). To predict BI-RADS scores, deep learning models like VGG16 and EfficientNetV2L are trained on a mammogram image dataset. Fabric Data Science simplifies the training and experimentation process by enabling: Direct data uploads to OneLake for model training Easy comparison of multiple ML experiments and metrics Auto-logging of parameters with MLflow for lifecycle management These models are trained over a significant number of epochs until sufficient accuracy is achieved, offering reliable predictions for radiologists. Biopsy classification In the case of biopsy analysis, classification models such as the random forest classifier are trained on biopsy features like cell size, shape uniformity, and mitoses counts. Classification models attain high accuracy when trained on scalar data, making them highly effective for classifying cancers as malignant or benign. 
Data ingestion, training, and prediction cycles are well managed using Fabric Data Science and the MongoDB Spark Connector , ensuring a seamless flow of metadata and results between Azure and MongoDB Atlas. Generative AI for workflow automation Radiologists often spend hours documenting findings, which could be better spent analyzing cases. Generative AI streamlines this process by automating report generation and enabling intelligent chatbot interactions. Vector search: The foundation of semantic understanding At the heart of these innovations lies MongoDB Atlas Vector Search , which revolutionizes how medical data is stored, accessed, and analyzed. By leveraging Azure OpenAI’s embedding models, clinical notes and other unstructured data are transformed into vector embeddings—mathematical representations that capture the meaning of the text in a high-dimensional space. Similarity search is a key use case, enabling radiologists to query the system with natural language prompts like “Show me cases where additional tests were recommended.” The system interprets the intent behind the question, retrieves relevant documents, and delivers precise, context-aware results. This ensures that radiologists can quickly access information without sifting through irrelevant data. Beyond similarity search, vector search facilitates the development of RAG architectures , which combine semantic understanding with external contextual data. This architecture allows for the creation of advanced features like automated report generation and intelligent chatbots, which further streamline decision-making and enhance productivity. 
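Under the hood, "closeness" between embeddings is typically measured with cosine similarity. The toy three-dimensional vectors below stand in for real embeddings (embedding models such as Azure OpenAI's commonly produce vectors with around 1,536 dimensions); the note names and values are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" standing in for real clinical-note vectors.
query = [0.9, 0.1, 0.0]
notes = {
    "note-a": [0.8, 0.2, 0.1],   # semantically close to the query
    "note-b": [0.0, 0.1, 0.9],   # unrelated
}

# Rank stored notes by similarity to the query, as a vector index would.
best = max(notes, key=lambda k: cosine_similarity(query, notes[k]))
```

A vector index such as Atlas Vector Search performs this kind of ranking approximately and at scale, rather than scanning every stored vector.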
Automated report generation Once a mammogram or biopsy is analyzed, Azure OpenAI’s large language models can be used to generate detailed clinical notes, including: Findings: Key observations from the analysis Conclusions: Diagnoses and suggested next steps Standardized codes: Using SNOMED terms for consistency This automation enhances productivity by allowing radiologists to focus on verification rather than manual documentation. Chatbots with retrieval-augmented generation Chatbots are another way to support radiologists when they need quick access to historical patient data or medical research. Traditional methods can be inefficient, particularly when dealing with older records or specialized cases. Our retrieval-augmented generation-based chatbot, powered by Azure OpenAI, Semantic Kernel, and MongoDB Atlas, provides: Patient-specific insights: Querying MongoDB for 10 years of patient history, summarized and provided as context to the chatbot Medical literature searches: Using vector search to retrieve relevant documents from indexed journals and studies Secure responses: Ensuring all answers are grounded in validated patient data and research The chatbot improves decision-making and enhances the user experience by delivering accurate, context-aware responses in real time. Advanced BI and analytics for actionable insights In healthcare, data is only as valuable as the insights it provides. MongoDB Atlas bridges real-time transactional analytics and long-term data analysis, empowering healthcare providers with tools for informed decision-making at every stage. Transactional analytics Transactional, or in-app, analytics deliver insights directly within applications. For example, MongoDB Atlas enables radiologists to instantly access historical BI-RADS scores and correlate them with new findings, streamlining the diagnostic process. This ensures decisions are based on accurate, real-time data. 
Advanced clinical decision support (CDS) systems benefit from integrating predictive analytics into workflows. For instance, biopsy results stored in MongoDB are enriched with machine learning predictions generated in Microsoft Fabric , helping radiologists make faster, more precise decisions. Long-term analytics While transactional analytics focus on operational efficiency, long-term analytics enable healthcare providers to step back and evaluate broader trends. MongoDB Atlas, integrated with Microsoft Power BI and Fabric, facilitates this critical analysis of historical data. For instance, patient cohort studies become more insightful when powered by a unified dataset that combines MongoDB Atlas’ operational data with historical trends stored in Microsoft OneLake. Long-term analytics also shine in operational efficiency assessments. By integrating MongoDB Atlas data with Power BI, hospitals can create dashboards that track key performance indicators such as average time to diagnosis, wait times for imaging, and treatment start times. These insights help identify bottlenecks, streamline processes, and ultimately improve the patient experience. Furthermore, historical data stored in OneLake can be combined with MongoDB’s real-time data to train machine learning models, enhancing future predictive analytics. OLTP vs OLAP This unified approach is exemplified by the distinction between OLTP and OLAP workloads. On the OLTP side, MongoDB Atlas handles real-time data processing, supporting immediate tasks like alerting radiologists to anomalies. On the OLAP side, data stored in Microsoft OneLake supports long-term analysis, enabling hospitals to identify trends, evaluate efficiency, and train advanced AI models. This dual capability allows healthcare providers to “run the business” through operational insights and “analyze the business” by uncovering long-term patterns. Figure 2. 
Real-time analytics data pipeline MongoDB’s Atlas SQL Connector plays a crucial role in bridging these two worlds. By converting MongoDB’s flexible document model into a relational format, it allows tools like Power BI to work seamlessly with MongoDB data. Next steps For a detailed, technical exploration of the architecture, including ML notebooks, chatbot implementation code, and dataset resources, visit our Solution Library Building Advanced Healthcare Solutions with MongoDB and Microsoft . Whether you’re a developer, data scientist, or healthcare professional, you’ll find valuable insights to replicate and expand upon this solution! To learn more about how MongoDB can power healthcare solutions, visit our solutions page . Check out our Atlas Vector Search Quick Start guide to get started with MongoDB Atlas Vector Search today.

February 18, 2025
Artificial Intelligence

Supercharge AI Data Management With Knowledge Graphs

WhyHow.AI has built and open-sourced a platform using MongoDB, enhancing how organizations leverage knowledge graphs for data management and insights. Integrated with MongoDB, this solution offers a scalable foundation with features like vector search and aggregation to support organizations in their AI journey. Knowledge graphs address the limitations of traditional retrieval-augmented generation (RAG) systems, which can struggle to capture intricate relationships and contextual nuances in enterprise data. By embedding rules and relationships into a graph structure, knowledge graphs enable accurate and deterministic retrieval processes. This functionality extends beyond information retrieval: knowledge graphs also serve as foundational elements for enterprise memory, helping organizations maintain structured datasets that support future model training and insights. WhyHow.AI enhances this process by offering tools designed to combine large language model (LLM) workflows with Python- and JSON-native graph management. Using MongoDB’s robust capabilities, these tools help combine structured and unstructured data and search capabilities, enabling efficient querying and insights across diverse datasets. MongoDB’s modular architecture seamlessly integrates vector retrieval, full-text search, and graph structures, making it an ideal platform for RAG and unlocking the full potential of contextual data. Check out our AI Learning Hub to learn more about building AI-powered apps with MongoDB. Creating and storing knowledge graphs with WhyHow.AI and MongoDB Creating effective knowledge graphs for RAG requires a structured approach that combines workflows from LLMs, developers, and nontechnical domain experts. Simply capturing all entities and relationships from text and relying on an LLM to organize the data can lead to a messy retrieval process that lacks utility. 
Instead, WhyHow.AI advocates for a schema-constrained graph creation method, emphasizing the importance of developing a context-specific schema tailored to the user’s use case. This approach ensures that the knowledge graphs focus on the specific relationships that matter most to the user’s workflow. Once the knowledge graphs are created, the flexibility of MongoDB’s schema design ensures that users are not confined to rigid structures. This adaptability enables seamless expansion and evolution of knowledge graphs as data and use cases develop. Organizations can rapidly iterate during early application development without being restricted by predefined schemas. In instances where additional structure is required, MongoDB supports schema enforcement, offering a balance between flexibility and data integrity. For instance, aligning external research with patient records is crucial to delivering personalized healthcare. Knowledge graphs bridge the gap between clinical trials, best practices, and individual patient histories. New clinical guidelines can be integrated with patient records to identify which patients would benefit most from updated treatments, ensuring that the latest practices are applied to individual care plans. Optimizing knowledge graph storage and retrieval with MongoDB Harnessing the full potential of knowledge graphs requires both effective creation tools and robust systems for storage and retrieval. Here’s how WhyHow.AI and MongoDB work together to optimize the management of knowledge graphs. Storing data in MongoDB WhyHow.AI relies on MongoDB’s document-oriented structure to organize knowledge graph data into modular, purpose-specific collections, enabling efficient and flexible queries. This approach is crucial for managing complex entity relationships and ensuring accurate provenance tracking. 
To support this functionality, the WhyHow.AI Knowledge Graph Studio comprises several key components: Workspaces separate documents, schemas, graphs, and associated data by project or domain, maintaining clarity and focus. Chunks are raw text segments with embeddings for similarity searches, linked to triples and documents to provide evidence and provenance. Graph collection stores the knowledge graph along with metadata and schema associations, all organized by workspace for centralized data management. Schemas define the entities, relationships, and patterns within graphs, adapting dynamically to reflect new data and keep the graph relevant. Nodes represent entities like people, locations, or concepts, each with unique identifiers and properties, forming the graph’s foundation. Triples define subject-predicate-object relationships and store embedded vectors for similarity searches, enabling reliable retrieval of relevant facts. Queries log user queries, including triple results and metadata, providing an immutable history for analysis and optimization. Figure 1. WhyHow.AI platform and knowledge graph illustration. To enhance data interoperability, MongoDB’s aggregation framework enables efficient linking across collections. For instance, retrieving chunks associated with a specific triple can be seamlessly achieved through an aggregation pipeline, connecting workspaces, graphs, chunks, and document collections into a cohesive data flow. Querying knowledge graphs With the representation established, users can perform both structured and unstructured queries with the WhyHow.AI querying system. Structured queries enable the selection of specific entity types and relationships, while unstructured queries enable natural language questions to return related nodes, triples, and linked vector chunks. WhyHow.AI’s query engine embeds triples to enhance retrieval accuracy, bypassing traditional Text2Cypher methods. 
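The cross-collection linking just described, retrieving the chunks that serve as evidence for a specific triple, can be sketched as an aggregation pipeline with a `$lookup` stage. The collection and field names below (`chunks`, `chunk_ids`, `evidence_chunks`) are assumptions for illustration and may differ from the actual WhyHow.AI collections.

```python
def build_triple_evidence_pipeline(triple_id):
    """Fetch one triple and join the chunks that provide its evidence.

    Collection and field names (chunks, chunk_ids, evidence_chunks) are
    assumed for illustration, not the platform's actual schema.
    """
    return [
        {"$match": {"_id": triple_id}},
        {
            "$lookup": {
                "from": "chunks",           # chunk collection to join against
                "localField": "chunk_ids",  # references stored on the triple
                "foreignField": "_id",
                "as": "evidence_chunks",
            }
        },
        # Return the relationship plus only the chunk text needed as evidence.
        {"$project": {"subject": 1, "predicate": 1, "object": 1,
                      "evidence_chunks.text": 1}},
    ]

pipeline = build_triple_evidence_pipeline("triple-42")
# With pymongo: db.triples.aggregate(pipeline)
```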
Through a retrieval engine that embeds triples and lets users retrieve them along with the chunks tied to them, WhyHow.AI combines the best of structured and unstructured data representations and retrieval patterns. And, with MongoDB’s built-in vector search, users can store and query vectorized text chunks alongside their graph and application data in a single, unified location. Enabling scalability, portability, and aggregations MongoDB’s horizontal scalability ensures that knowledge graphs can grow effortlessly alongside expanding datasets. Users can also easily utilize WhyHow.AI's platform to create modular multiagent and multigraph workflows. They can deploy MongoDB Atlas on their preferred cloud provider or maintain control by running it in their own environments, gaining flexibility and reliability. As graph complexity increases, MongoDB’s aggregation framework facilitates diverse queries, extracting meaningful insights from multiple datasets with ease. Providing familiarity and ease of use MongoDB’s familiarity enables developers to apply their existing expertise without the need to learn new technologies or workflows. With WhyHow.AI and MongoDB, developers can build graphs with JSON data and Python-native APIs, which are perfect for LLM-driven workflows. The same database trusted for years in application development can now manage knowledge graphs, streamlining onboarding and accelerating development timelines. Taking the next steps WhyHow.AI’s knowledge graphs overcome the limitations of traditional RAG systems by structuring data into meaningful entities, relationships, and contexts. This enhances retrieval accuracy and decision-making in complex fields. Integrated with MongoDB, these capabilities are amplified through a flexible, scalable foundation featuring modular architecture, vector search, and powerful aggregation. 
Together, WhyHow.AI and MongoDB help organizations unlock their data’s potential, driving insights and enabling innovative knowledge management solutions. No matter where you are in your AI journey, MongoDB can help! You can get started with your AI-powered apps by registering for MongoDB Atlas and exploring the tutorials available in our AI Learning Hub . Otherwise, head over to our quick-start guide to get started with MongoDB Atlas Vector Search today. Want to learn more about why MongoDB is the best choice for supporting modern AI applications? Check out our on-demand webinar, “ Comparing PostgreSQL vs. MongoDB: Which is Better for AI Workloads? ” presented by MongoDB Field CTO, Rick Houlihan. If your company is interested in being featured in a story like this, we’d love to hear from you. Reach out to us at ai_adopters@mongodb.com .

February 13, 2025
Artificial Intelligence
