MongoDB Blog
Announcements, updates, news, and more
Redefining the Database for AI: Why MongoDB Acquired Voyage AI
February 24, 2025
News
Why Vector Quantization Matters for AI Workloads
Key takeaways:
- As vector embeddings scale into the millions, memory usage and query latency surge, leading to inflated costs and a poor user experience.
- Storing embeddings in reduced-precision formats (int8 or binary) dramatically cuts memory requirements and speeds up retrieval.
- Voyage AI's quantization-aware embedding models are specifically tuned to handle compressed vectors without significant loss of accuracy.
- MongoDB Atlas streamlines the workflow by handling the creation, storage, and indexing of compressed vectors, enabling easier scaling and management.
- MongoDB is built for change, allowing users to effortlessly scale AI workloads as resource demands evolve.

Organizations are now scaling AI applications from proofs of concept to production systems serving millions of users. This shift creates scalability, latency, and resource challenges for mission-critical applications leveraging recommendation engines, semantic search, and retrieval-augmented generation (RAG) systems. At scale, minor inefficiencies compound into major bottlenecks, increasing latency, memory usage, and infrastructure costs. This guide explains how vector quantization enables high-performance, cost-effective AI applications at scale.

The challenge: Scaling vector search in production

Consider a modern voice assistant platform that combines semantic search with natural language understanding. During development, the system only needs to process a few hundred queries per day, converting speech to text and matching the resulting embeddings against a modest database of responses. The initial implementation is straightforward: each query generates a 32-bit floating-point embedding vector that is matched against a database of similar vectors using cosine similarity. This approach works smoothly in the prototype phase: response times are quick, memory usage is manageable, and the development team can focus on improving accuracy and adding features.
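The prototype flow described above amounts to a brute-force cosine-similarity scan over float vectors. A minimal sketch, with toy 4-dimensional vectors standing in for real embeddings:

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query, database, top_k=3):
    # Brute-force scan: score every stored embedding against the query.
    scored = [(cosine_similarity(query, vec), doc_id)
              for doc_id, vec in database.items()]
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:top_k]]

# Toy embeddings standing in for real model output.
db = {
    "doc_a": [0.9, 0.1, 0.0, 0.1],
    "doc_b": [0.0, 1.0, 0.1, 0.0],
    "doc_c": [0.8, 0.2, 0.1, 0.1],
}
print(search([1.0, 0.0, 0.0, 0.0], db, top_k=2))  # → ['doc_a', 'doc_c']
```

Every query touches every float32 vector, which is exactly why this approach stops scaling once the database grows.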
However, as the platform gains traction and scales to processing thousands of queries per second against millions of document embeddings, this simple approach begins to break down. Each incoming query now requires loading massive numbers of high-precision floating-point vectors into memory, computing similarity scores across a far larger dataset, and maintaining increasingly complex vector indexes for efficient retrieval. Without proper optimization, memory usage balloons, query latency increases, and infrastructure costs spiral upward. What started as a responsive, efficient prototype becomes a bottlenecked production system struggling to meet its performance requirements while serving a growing user base.

The key challenges are:
- Loading high-precision 32-bit floating-point vectors into memory
- Computing similarity scores across massive embedding collections
- Maintaining large vector indexes for efficient retrieval

These can lead to critical issues such as:
- High memory usage as vector databases struggle to keep float32 embeddings in RAM
- Increased latency as systems process large volumes of high-precision data
- Growing infrastructure costs as organizations scale their vector operations
- Reduced query throughput due to computational overhead

AI workloads with tens or hundreds of millions of high-dimensional vectors (e.g., 80M+ documents at 1536 dimensions) face soaring RAM and CPU requirements. Storing float32 embeddings for these workloads can become prohibitively expensive.

Vector quantization: A path to efficient scaling

The obvious question is: how can you maintain the accuracy of your recommendations, semantic matches, and search queries while drastically cutting compute and memory usage and reducing retrieval latency? Vector quantization is the answer. It lets you store embeddings more compactly, reduce retrieval times, and keep costs under control.
Vector quantization offers a powerful solution to these scalability, latency, and resource-utilization challenges by compressing high-dimensional embeddings into compact representations while preserving their essential characteristics. This technique can dramatically reduce memory requirements and accelerate similarity computations without compromising retrieval accuracy.

What is vector quantization?

Vector quantization is a compression technique widely applied in digital signal processing and machine learning. Its core idea is to represent numerical data using fewer bits, reducing storage requirements without entirely sacrificing the data's informative value. For AI workloads, quantization commonly means converting embeddings, originally stored as 32-bit floating-point values, into formats like 8-bit integers. Doing so substantially decreases memory and storage consumption while maintaining a level of precision suitable for similarity search tasks.

An important point to note is that quantization is especially suitable for use cases involving more than 1 million vector embeddings, such as RAG applications, semantic search, or recommendation systems that require tight control of operational costs without compromising retrieval accuracy. Smaller datasets with fewer than 1 million embeddings might not see significant gains: for them, the overhead of implementing quantization can outweigh its benefits.

Understanding vector quantization

Vector quantization operates by mapping high-dimensional vectors to a discrete set of prototype vectors or converting them to lower-precision formats. There are three main approaches:

Scalar quantization: Converts individual 32-bit floating-point values to 8-bit integers, reducing memory usage of vector values by 75% while maintaining reasonable precision.
Product quantization: Compresses entire vectors at once by mapping them to a codebook of representative vectors, offering better compression than scalar quantization at the cost of more complex encoding/decoding.

Binary quantization: Transforms vectors into binary (0/1) representations, achieving maximum compression but with more significant information loss.

A vector database that applies these compression techniques must effectively manage multiple data structures:
- A hierarchical navigable small world (HNSW) graph for navigable search
- Full-fidelity vectors (32-bit float embeddings)
- Quantized vectors (int8 or binary)

When quantization is defined in the vector index, the system builds quantized vectors and constructs the HNSW graph from these compressed vectors. Both structures are placed in memory for efficient search operations, significantly reducing the RAM footprint compared to storing full-fidelity vectors alone.

The table below illustrates how different quantization mechanisms affect memory and disk consumption. This example assumes HNSW indexes storing 30 GB of original float32 embeddings alongside a 0.1 GB HNSW graph structure. Our RAM estimates include a 10% overhead factor (a 1.1 multiplier) to account for JVM memory requirements with indexes loaded into the page cache, reflecting typical production deployment conditions; actual overhead may vary based on specific configurations. Key attributes to consider in the table:

Estimated RAM usage: Combines the HNSW graph size with either full or quantized vectors, plus the 1.1 index-overhead factor.

Disk usage: Includes storage for full-fidelity vectors, the HNSW graph, and quantized vectors when applicable.
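As a rough illustration of the scalar and binary approaches, here is a toy sketch; production systems calibrate the quantization range per dataset and pack sign bits into bytes, but the essence is the same:

```python
def scalar_quantize(vec, lo=-1.0, hi=1.0):
    # Map each float32 value onto the int8 range [-128, 127]: 4 bytes -> 1 byte.
    scale = 255 / (hi - lo)
    return [max(-128, min(127, round((x - lo) * scale) - 128)) for x in vec]

def binary_quantize(vec):
    # Keep only the sign of each component: 32 bits -> 1 bit per dimension.
    bits = 0
    for x in vec:
        bits = (bits << 1) | (1 if x > 0 else 0)
    return bits

v = [0.12, -0.40, 0.88, -0.05]
print(scalar_quantize(v))           # four int8 values instead of four float32s
print(bin(binary_quantize(v)))      # packed sign bits: 0b1010
```

Scalar quantization keeps a graded notion of magnitude at a quarter of the size; binary quantization keeps only direction, which is why it compresses the most and loses the most.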
Notice that while enabling quantization increases total disk usage (because full-fidelity vectors are still stored for exact nearest neighbor queries and, in the binary case, for rescoring), it dramatically decreases RAM requirements and speeds up initial retrieval.

MongoDB Atlas Vector Search offers powerful scaling capabilities through its automatic quantization system. As illustrated in Figure 1 below, MongoDB Atlas supports multiple vector search indexes with varying precision levels: float32 for maximum accuracy, scalar quantized (int8) for balanced performance with a 3.75× RAM reduction, and binary quantized (1-bit) for maximum speed with a 24× RAM reduction. This variety allows users to optimize their vector search workloads for specific requirements. For collections exceeding 1 million vectors, Atlas automatically applies the appropriate quantization mechanism, with binary quantization particularly effective when combined with float32 rescoring for final refinement.

Figure 1: MongoDB Atlas Vector Search architecture with automatic quantization. Data flows through embedding generation, storage, and tiered vector indexing with binary rescoring.

Binary quantization with rescoring

A particularly effective strategy combines binary quantization with a rescoring step using full-fidelity vectors. This approach offers the best of both worlds: extremely fast lookups thanks to binary data formats, plus more precise final rankings from higher-fidelity embeddings.

Initial retrieval (binary): Embeddings are stored as binary to minimize memory usage and accelerate the approximate nearest neighbor (ANN) search. Hamming distance (XOR plus population count) is used, which is computationally faster than Euclidean or cosine similarity on floats.

Rescoring: The top candidate results from the binary pass are re-evaluated using their float or int8 vectors to refine the ranking.
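The two-stage approach can be sketched as follows; this is a toy example with tiny unpacked vectors, whereas real systems pack the sign bits and use hardware popcount:

```python
import math

def sign_bits(vec):
    # Binary quantization: keep one sign bit per dimension.
    bits = 0
    for x in vec:
        bits = (bits << 1) | (1 if x > 0 else 0)
    return bits

def hamming(a, b):
    # XOR then population count: far cheaper than float distance math.
    return bin(a ^ b).count("1")

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query, docs, candidates=3, top_k=1):
    q_bits = sign_bits(query)
    # Stage 1: fast approximate pass over the binary codes.
    coarse = sorted(docs, key=lambda d: hamming(q_bits, sign_bits(docs[d])))[:candidates]
    # Stage 2: rescore the shortlist against the full-fidelity vectors.
    return sorted(coarse, key=lambda d: cosine(query, docs[d]), reverse=True)[:top_k]

docs = {
    "a": [0.9, 0.1, -0.2, 0.1],
    "b": [-0.5, 0.8, 0.1, -0.3],
    "c": [0.7, 0.2, -0.1, 0.2],
}
print(search([1.0, 0.0, -0.1, 0.0], docs, candidates=2, top_k=1))  # → ['a']
```

The cheap Hamming pass prunes the search space; only the small shortlist ever touches the expensive float vectors.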
This step mitigates the loss of detail in binary vectors, balancing result accuracy with the speed of the initial retrieval. By pairing binary vectors for rapid recall with full-fidelity embeddings for final refinement, you keep your system highly performant while maintaining strong relevance.

The need for quantization-aware models

Not all embedding models perform equally well under quantization. Models need to be trained with quantization in mind to remain effective when compressed; some models, especially those trained purely for high-precision scenarios, suffer significant accuracy drops when their embeddings are represented with fewer bits.

Quantization-aware training (QAT) involves:
- Simulating quantization effects during the training process
- Adjusting model weights to minimize information loss
- Ensuring robust performance across different precision levels

This is particularly important for production applications where maintaining high accuracy is crucial. Embedding models like those from Voyage AI, which recently joined MongoDB, are specifically designed with quantization awareness, making them more suitable for scaled deployments. These models preserve more of their essential feature information even under aggressive compression. Voyage AI provides a suite of embedding models designed with QAT in mind, ensuring minimal loss in semantic quality when shifting to 8-bit integer or even binary representations.

Figure 2: Embedding model performance comparing retrieval quality (NDCG@10) versus storage costs. Voyage AI models (green) maintain superior retrieval quality even with binary quantization (triangles) and int8 compression (squares), achieving up to 100x storage efficiency compared to standard float embeddings (circles).

The graph above shows several important patterns that demonstrate why quantization-aware training is crucial for maintaining performance under aggressive compression.
The Voyage AI family of models (shown in green) maintains strong retrieval quality even under extreme compression. The voyage-3-large model illustrates this dramatically: at int8 precision and 1024 dimensions, it performs nearly identically to its float-precision, 2048-dimensional counterpart, showing only a 0.31% quality reduction despite using 8 times less storage. Models designed with quantization in mind preserve their semantic understanding even under substantial compression. Even more impressive is how QAT models maintain their edge over larger, uncompressed models. The voyage-3-large model with int8 precision and 1024 dimensions outperforms OpenAI-v3-large (float precision, 3072 dimensions) by 9.44% while requiring 12 times less storage. This performance gap highlights that raw model size and dimension count are not the decisive factors; it is the intelligent design for quantization that matters. The cost implications become truly striking with binary quantization. Using voyage-3-large with 512-dimensional binary embeddings still achieves better retrieval quality than OpenAI-v3-large with its full 3072-dimensional float embeddings while using 200 times less storage. In practical terms: what would have cost $20,000 in monthly storage can be reduced to just $100 while actually improving performance. In contrast, models not specifically trained for quantization, such as OpenAI's v3-small (shown in gray), show a more dramatic drop in retrieval quality as compression increases. While these models perform well in their full floating-point representation (at 1x storage cost), their effectiveness deteriorates sharply when quantized, especially with binary quantization.
For production applications where both accuracy and efficiency are crucial, choosing a model that has undergone quantization-aware training can make the difference between a system that degrades under compression and one that remains effective while dramatically reducing resource requirements. Read more on the Voyage AI blog.

Impact: Memory, retrieval latency, and cost

Vector quantization addresses the three core challenges of large-scale AI workloads (memory, retrieval latency, and cost) by compressing full-precision embeddings into more compact representations. Below is a breakdown of how quantization drives efficiency in each area.

Figure 3: Quantization performance metrics: memory savings with minimal accuracy trade-offs. Comparison of scalar vs. binary quantization showing RAM reduction (75%/96%), query accuracy retention (99%/95%), and performance gains (>100%) for vector search operations.

Memory and storage optimization

Quantization techniques dramatically reduce compute resource requirements while maintaining search accuracy for vector embeddings at scale.

Lower RAM footprint:
- Storage in RAM is often the primary bottleneck for vector search systems.
- Embeddings stored as 8-bit integers or binary reduce overall memory usage, allowing significantly more vectors to remain in memory.
- This compression directly shrinks vector indexes (e.g., HNSW), leading to faster lookups and fewer disk I/O operations.

Reduced disk usage with binData:
- binData (binary) formats can cut raw storage needs by up to 66%.
- Some disk overhead may remain when storing both quantized and original vectors, but the performance benefits justify this tradeoff.

Practical gains:
- 3.75× reduction in RAM usage with scalar (int8) quantization
- Up to 24× reduction with binary quantization, especially when combined with rescoring to preserve accuracy
- Significantly more efficient vector indexes, enabling large-scale deployments without prohibitive hardware upgrades
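A back-of-the-envelope calculation shows where these ratios come from, using illustrative numbers (1M vectors at 1,024 dimensions). The headline 3.75× and 24× figures are lower than the raw 4× and 32× because the HNSW graph and index overhead are not compressed:

```python
NUM_VECTORS = 1_000_000
DIMS = 1024

def gib(nbytes):
    return nbytes / 2**30

float32_bytes = NUM_VECTORS * DIMS * 4      # 4 bytes per 32-bit float
int8_bytes    = NUM_VECTORS * DIMS * 1      # scalar quantization: 1 byte per dim
binary_bytes  = NUM_VECTORS * DIMS // 8     # binary quantization: 1 bit per dim

print(f"float32: {gib(float32_bytes):.2f} GiB")   # 3.81 GiB
print(f"int8:    {gib(int8_bytes):.2f} GiB ({float32_bytes // int8_bytes}x smaller)")
print(f"binary:  {gib(binary_bytes):.3f} GiB ({float32_bytes // binary_bytes}x smaller)")
```

Adding back the uncompressed graph and the 1.1 overhead multiplier described earlier brings the effective ratios down to roughly the 3.75× and 24× reported above.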
Retrieval latency

Quantization methods leverage CPU cache optimizations and efficient distance calculations to accelerate vector search operations beyond what is possible with standard float32 embeddings.

Faster similarity computations:
- Smaller data types are more CPU-cache-friendly, which speeds up distance calculations.
- Binary quantization uses Hamming distance (XOR plus popcount), yielding dramatically faster top-k candidate retrieval.

Improved throughput:
- With reduced memory overhead, the system can handle more concurrent queries at lower latencies.
- In internal benchmarks, query performance for large-scale retrievals improved by up to 80% when adopting quantized vectors.

Cost efficiency

Vector quantization provides substantial infrastructure savings by reducing memory and computation requirements while maintaining retrieval quality through compression and rescoring techniques.

Lower infrastructure costs:
- Smaller vectors consume fewer hardware resources, enabling deployments on less expensive instances or tiers.
- Reduced CPU/GPU time per query allows resources to be reallocated to other critical parts of the application.

Better scalability:
- As data volumes grow, memory and compute requirements do not escalate as sharply.
- Quantization-aware training (QAT) models, such as those from Voyage AI, help maintain accuracy while reaping cost savings at scale.

By compressing vectors into int8 or binary formats, you tackle memory constraints, accelerate lookups, and curb infrastructure expenses, making vector quantization an indispensable strategy for high-volume AI applications.

MongoDB Atlas: Built for changing workloads with automatic vector quantization

The good news for developers is that MongoDB Atlas supports automatic scalar and automatic binary quantization in index definitions, reducing the need for external scripts or manual data preprocessing.
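As a sketch of what this looks like in practice, the following vector search index definition enables automatic binary quantization; the field name `embedding`, the dimension count, and the index name are illustrative, so check the MongoDB documentation for the authoritative syntax:

```python
# Illustrative Atlas Vector Search index definition; "quantization"
# accepts "scalar" or "binary" to enable automatic quantization.
vector_index_definition = {
    "fields": [
        {
            "type": "vector",
            "path": "embedding",        # field holding the embedding array
            "numDimensions": 1024,
            "similarity": "cosine",
            "quantization": "binary",   # Atlas quantizes at build and query time
        }
    ]
}

# With pymongo this would be registered roughly as follows (not run here):
# from pymongo.operations import SearchIndexModel
# collection.create_search_index(
#     SearchIndexModel(definition=vector_index_definition,
#                      name="vector_index", type="vectorSearch"))
```

No application-side preprocessing of the embeddings is needed; the full-fidelity vectors are written as-is and Atlas maintains the quantized representation.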
By quantizing at index build time and query time, organizations can run large-scale vector workloads on smaller, more cost-effective clusters.

A common question developers ask is when to use quantization. Quantization becomes most valuable once you reach substantial data volumes, on the order of a million or more embeddings. At this scale, memory and compute demands can skyrocket, making reduced memory footprints and faster retrieval speeds essential. Cases that call for quantization include:

High-volume scenarios: Datasets with millions of vector embeddings where you must tightly control memory and disk usage.

Real-time responses: Systems needing low-latency queries under high user concurrency.

High query throughput: Environments with numerous concurrent requests demanding both speed and cost-efficiency.

For smaller datasets (under 1 million vectors), the added complexity of quantization may not justify the benefits. For large-scale deployments, however, it becomes a critical optimization that can dramatically improve both performance and cost-effectiveness. Now that we have established a strong foundation on the advantages of quantization, and of binary quantization with rescoring in particular, refer to the MongoDB documentation to learn more about implementing vector quantization. You can also learn more about Voyage AI's state-of-the-art embedding models on our product page.
Hasura: Powerful Access Control on MongoDB Data
Across industries, and especially in highly regulated sectors like healthcare, financial services, and government, MongoDB has been a preferred modern database for organizations handling large volumes of sensitive data that require strict compliance. In such enterprises, secure access to data via APIs is critical, particularly when information is distributed across multiple MongoDB databases and external data stores. Yet designing a secure API system from scratch to meet this need takes significant development resources and becomes a burden to maintain and update.

Hasura solves this problem for enterprises by serving as a federated data layer with robust access control policies built in. It extends MongoDB's access control capabilities with granular permissions at the column and field level across multiple databases through a unified interface, enforces access control rules across data domains, joins data from multiple sources, and exposes it all through a single API. In this blog, we'll explore how Hasura and MongoDB work together to give teams granular data access control while simplifying data retrieval across collections.

Team-specific data domains

First, Hasura makes it possible for a business unit or team to own a set of databases and collections, known as a data domain. Within each domain, a team can connect any number of MongoDB databases and other data sources, giving the domain fine-grained role-based access control (RBAC) and attribute-based access control (ABAC) across all sources. More important is the ability to define relationships that span domains, connecting data from various teams or business units and exposing it to a verified user as necessary. This granular permissioning system means the right users can access the right data at the right time, without compromising security.
Field-level access control

Hasura's MongoDB connector also provides a powerful, declarative way to define access control rules at the collection and field level. For each MongoDB collection, roles may be specified for read, create, update, and delete (CRUD) permissions. Within those permissions, access may be further restricted based on the values of specific attributes. By defining these rules declaratively, Hasura makes it easy to implement and reason about complex access control policies.

Joining across collections

In addition to enabling granular access control, Hasura simplifies the retrieval of related data across multiple databases. By inspecting your MongoDB collections, Hasura can automatically create schemas and API endpoints (in GraphQL, REST, etc.) that let you query data along with its relationships. This eliminates the need to manually stitch together data from different collections in your application code. Instead, a graph of related data can be retrieved in a single API call, with that data still filtered through your access control rules.

As companies wrestle with the challenges of secure data access across sprawling database environments, Hasura provides a compelling solution. By serving as a federated data layer over MongoDB and external data, Hasura enables granular access control through a combination of role-based permissions, attribute-based restrictions, and the ability to join data and apply access rules across sources.

Figure 1. Hasura & MongoDB demo environment

With Hasura's MongoDB connector, teams can implement sophisticated data access policies declaratively and provide their applications with secure access to the data they need. This combination of security and simplicity makes Hasura and MongoDB a powerful solution for organizations striving to modernize, especially those in industries with strict compliance requirements. Visit the MongoDB Resources Hub to learn more about MongoDB Atlas.
Want to learn more or see Hasura and MongoDB in action? Join Sig Narváez, Executive Solutions Architect at MongoDB, and Adam Malone, Director of Solutions Engineering at Hasura, on February 27, 2025 for a webinar on how MongoDB's cutting-edge architecture, combined with Hasura's powerful data access engine, provides a robust solution for enterprises dealing with data sprawl and security risks. Sign up here!
Debunking MongoDB Myths: Enterprise Use Cases
MongoDB is frequently viewed as a go-to database for proof-of-concept (POC) applications. The flexibility of MongoDB's document model lets teams rapidly prototype and iterate, adapting the data model as requirements evolve during the early stages of application development. It is common for applications to evolve continuously during initial development; moving an application to production, however, requires developers to add validation logic and fully define the data structures. A frequent assumption is that because MongoDB data models can be flexible, they cannot be structured. In fact, while MongoDB does not require a defined schema, it fully supports them. MongoDB allows users to precisely calibrate rules and enforcement levels for every component of data, enabling a level of granular control that traditional databases, with their all-or-nothing approach to schema enforcement, struggle to match. Data model flexibility is not a binary choice between "schemaless" and "strictly enforced"; in MongoDB it exists on a spectrum, and users can incrementally define schemas in parallel with the overall "hardening" of the application.

MongoDB's approach to data modeling makes it an ideal platform for business-critical applications. It is designed to support the entire application lifecycle, from nascent concepts and initial prototypes to global rollouts of production environments. Enterprise-grade features like ACID transactions and industry-leading scalability ensure MongoDB can meet the demands of any modern application.

Learning from the past

So why do misconceptions about MongoDB persist? These perceptions originated over a decade ago. Teams working with MongoDB back in 2014 or earlier faced real challenges when deploying it in production: applications could slow down under heavy loads, data consistency was not guaranteed when writing to multiple documents, and teams lacked tools to monitor and manage deployments effectively.
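To make the earlier schema-on-a-spectrum point concrete, here is a minimal, hypothetical `$jsonSchema` validator; the collection name, fields, and enforcement settings are invented for illustration:

```python
# Illustrative validator: 'email' must exist and be a string,
# while any other fields remain free-form.
user_validator = {
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["email"],
        "properties": {
            "email": {"bsonType": "string", "description": "must be a string"},
            "age": {"bsonType": "int", "minimum": 0},
        },
    }
}

# With pymongo, enforcement is tunable per collection (not run here):
# db.create_collection("users", validator=user_validator,
#                      validationLevel="moderate",  # or "strict" / "off"
#                      validationAction="warn")     # or "error"
```

The `validationLevel` and `validationAction` knobs are what put schema enforcement on a spectrum: a team can start with warnings on new writes and tighten to strict errors as the application hardens.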
As a result, MongoDB gained a reputation for being unsuitable for certain use cases or critical workloads. This perception has persisted despite a decade of subsequent development and innovation, and it is now an inaccurate assessment of today's preeminent document database. MongoDB has evolved into a mature platform that directly addresses these historical pain points, delivering robust tooling, guaranteed consistency, and comprehensive data validation capabilities.

Myth: MongoDB is a niche database

What are the top use cases for MongoDB? The question is difficult to answer because MongoDB is a general-purpose database that can support almost any use case. The document model is the primary driver of MongoDB's versatility. Documents are similar to JSON objects, with data represented as key-value pairs. Values can be simple types like strings or numbers, but they can also be arrays or nested objects, which allows documents to easily represent complex hierarchical structures. The document model's flexibility allows data to be stored exactly as the application consumes it. This enables highly efficient writes and optimizes data for retrieval without needing to set up standard or materialized views, although both are supported.

While MongoDB is not a niche database, it does have advanced capabilities to support niche requirements. The aggregation pipeline provides a powerful framework for data analytics and transformation. Time-series collections store and query temporal data efficiently to support IoT and financial applications. Geospatial indexes and queries enable location-based applications to perform complex proximity calculations. MongoDB Atlas includes native support for vector search, which enabled Cisco to experiment with generative AI use cases and streamline their applications to production. MongoDB handles the diverse data requirements that power modern applications.
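As a small illustration of the aggregation pipeline mentioned above, here is a hypothetical pipeline; the collection fields (`status`, `product_id`, `amount`) are invented for the example:

```python
# Illustrative aggregation pipeline: total revenue per product
# for completed orders, top five products first.
pipeline = [
    {"$match": {"status": "complete"}},          # filter early so indexes can help
    {"$group": {"_id": "$product_id",
                "revenue": {"$sum": "$amount"},
                "orders": {"$sum": 1}}},
    {"$sort": {"revenue": -1}},
    {"$limit": 5},
]
# Executed with e.g. db.orders.aggregate(pipeline) (not run here).
```

Each stage transforms the stream of documents produced by the previous one, which is what makes the pipeline a general analytics and transformation framework rather than a fixed query language.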
The document model provides the foundation for general use, while advanced features ensure teams do not need to integrate additional tools as application requirements evolve. The result is a single platform that can grow from prototype to production, handling general requirements and specialized workloads with equal proficiency.

Myth: MongoDB is not suitable for enterprise-grade workloads

A common perception is that MongoDB works well for small applications but falls short at enterprise scale. Ironically, many organizations first consider MongoDB while struggling to scale their relational databases, and discover that MongoDB's architecture is specifically designed for scale-out distributed deployments. While MongoDB matches relational databases in vertical scaling capabilities, the document model enables a more natural and intuitive approach to horizontal scaling. Related data is stored together in a single document, so MongoDB can easily distribute complete data units across shards. This contrasts with relational databases, where data is split across multiple tables, making it difficult to place all related data on the same shard.

Horizontal scaling with MongoDB also sets an organization up for better performance. Most MongoDB queries need to access only a single shard, whereas equivalent queries in a relational database often require costly cross-server communication. Telefonica Tech has leveraged horizontal scaling to nearly double their capacity with a 40% hardware reduction.

MongoDB Atlas further automates and simplifies these scaling capabilities through a fully managed service built to meet demanding enterprise requirements. Atlas provides a 99.995% uptime guarantee and availability across AWS, Google Cloud, and Azure in over 100 regions worldwide.
By offloading the operational complexity of deploying and running databases at scale, this frees teams to focus on rapid development and innovation rather than infrastructure maintenance.

Powering the enterprise applications of today and tomorrow

Over 50,000 customers and 70% of the Fortune 100 rely on MongoDB to power their enterprise applications, and independent industry reports from Gartner and Forrester continue to recognize MongoDB as a leader in the database space. Do not let outdated myths keep your organization from the competitive advantages of MongoDB's enterprise capabilities.

To learn more about MongoDB, head over to MongoDB University and take our free Intro to MongoDB course. Read more about customers building on MongoDB. Read our first blog in this series about myths around MongoDB vs. relational databases. Check out the full video to learn about the other six myths we're debunking in this series.
MongoDB & DKatalis’s Bank Jago, Empowering Over 500 Engineers
DKatalis, a technology company specializing in scalable digital solutions, is the engineering arm behind Bank Jago, Indonesia's first digital bank. An app-only institution, Bank Jago enables end-to-end banking with features such as auto budgeting, which lets customers easily and effectively organize their finances by creating "Pockets" for expenses like food, savings, or entertainment. Launched in 2019, Bank Jago has seen tremendous growth in only a few years, with its customer base reaching 14.1 million as of October 2024.

While speaking at MongoDB.local Jakarta, Chris Samuel, Staff Engineer at DKatalis, shared how MongoDB became the data backbone of Bank Jago, and how MongoDB Atlas supported Bank Jago's growth. Bank Jago's journey with MongoDB started in 2019, when DKatalis built the first version of Bank Jago on the on-premises MongoDB Community Edition. "We did everything ourselves, up to the point when we realized that the bigger our user [base] grew, the more painful it was for us to monitor everything," said Samuel. In 2021, DKatalis decided to migrate Bank Jago from MongoDB Community Edition to MongoDB Atlas. This first involved migrating all data to Atlas; the database platform then had to be set up to facilitate scalability and enable improved maintenance operations over the long term. "In terms of process, it is actually seamless," said Samuel during his MongoDB.local talk.

Specifically, MongoDB Atlas offers six key capabilities that have facilitated the bank's daily operations, supported its fast growth, and improved efficiencies:

Flexibility: MongoDB's document model supports diverse data types and adapts to Jago's dynamic requirements.

Scalability: MongoDB Atlas effortlessly supports the rapid growth in user base and data volume.

High performance: The platform enables fast query execution and efficient data retrieval for a seamless customer experience.
Real-time capabilities: MongoDB Atlas prevents delays during transactions, account creation, and balance checking.

Regulatory compliance: MongoDB Atlas makes local hosting possible, enabling DKatalis to meet Indonesian financial regulatory standards.

Community support: MongoDB's strong developer community and rich ecosystem in Jakarta foster collaboration and learning.

All of these have also helped improve efficiencies for DKatalis's team of over 500 engineers, who are now able to reduce data architecture complexity and focus on innovation.

Fostering a great engineering culture and community with MongoDB

In another talk at MongoDB.local Singapore, DKatalis's Chief Engineering Officer, Alex Titlyanov, explained that using MongoDB has been, and continues to be, a great learning, upskilling, and operational experience for his team. "DKatalis has a pretty unique organizational culture when it comes to its engineering teams: there are no designated engineering managers or project managers; instead, teams are self-managed," said Titlyanov. "This encourages a community-driven environment, where engineers are continuously upgrading their skills, particularly with tools like MongoDB." The company has established internal communities, such as the MongoDB community led by Principal Software Engineer Boon Hian Tek. These communities focus on knowledge sharing, skill-building, and ensuring that the company's 500 engineers are proficient in using MongoDB.

This deep knowledge of MongoDB, and the ease of use offered by the Atlas platform, means that DKatalis's engineers are also able to build their own bespoke tools to improve daily operations and meet specific needs. For example, the team has built a range of tools to help manage the complexity and scale of Bank Jago's data architecture. "Most traditional banks offer their customers access to six months, sometimes a year's worth of transaction history.
But Bank Jago gives access to the entire transaction history,” said Boon. The engineering team ended up having to deal with 56 different databases and 485 data collections. Some collections reach 1.13 billion documents, while others receive up to 42.5 million new documents every day. Some of the bespoke tools built on MongoDB Atlas include:
Index sync report: DKatalis implemented a custom-built tool using MongoDB’s Atlas API to manage database indexing automatically. This was essential given the bank’s real-time requirements: adding indexes manually during peak hours would have disrupted performance.
Daily reporting: The team built a tool to monitor for slow queries. This provides daily reports on query performance so issues can be identified and resolved quickly.
Add index: The team initially used the Rolling Index feature from Atlas. However, they required greater context for each index, so they built a tool that automatically checks at 3:00 am whether there are any indexes to create, calls the Atlas API to create them, and publishes the results.
Exporting metrics: The Atlas console provided useful diagrams, but the team needed each metric to be available per database and per collection rather than per cluster. The team built a thin layer on top of the Atlas console to slice up the required metrics using the Atlas API.
“The scalability and flexibility of MongoDB have been essential in helping the team handle the bank’s fast growth and complex feature set. MongoDB’s document-oriented structure enables us to develop innovative features like ‘Pockets’, and we continue to see MongoDB as an integral part of our technology stack in the future,” said Titlyanov. Visit our product page to learn more about MongoDB Atlas. To learn how MongoDB powers solutions in the financial services industry, visit our solutions page.
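To make the "daily reporting" tool concrete, here is a minimal, hypothetical sketch of the grouping logic such a slow-query report might use. In production, the entries would come from MongoDB's profiler output or Atlas logs; the sample documents, field names, and the 100 ms threshold below are all illustrative assumptions, not DKatalis's actual implementation.

```python
# Hypothetical slow-query report: group profiler-style entries by namespace.
# In a real deployment these entries would come from system.profile or Atlas
# logs; here we use hand-written samples so the sketch is self-contained.
from collections import defaultdict

def slow_query_report(profile_entries, threshold_ms=100):
    """Group operations slower than threshold_ms by namespace, slowest first."""
    report = defaultdict(list)
    for entry in profile_entries:
        if entry.get("millis", 0) >= threshold_ms:
            report[entry["ns"]].append(
                {"millis": entry["millis"], "op": entry.get("op", "query")}
            )
    return {ns: sorted(ops, key=lambda o: -o["millis"]) for ns, ops in report.items()}

sample = [
    {"ns": "bank.transactions", "op": "query", "millis": 540},
    {"ns": "bank.transactions", "op": "update", "millis": 90},
    {"ns": "bank.pockets", "op": "query", "millis": 150},
]
print(slow_query_report(sample, threshold_ms=100))
```

A scheduler (cron, or the 3:00 am check described above) could run this over the previous day's entries and publish the result to a dashboard or chat channel.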
Multi-Agent Collaboration for Manufacturing Operations Optimization
While there are some naysayers across the media landscape who doubt the potential impact of AI innovations, for those of us immersed in implementing AI on a daily basis, there’s wide agreement that its potential is huge and world-altering. It’s now generally accepted that large language models (LLMs) will eventually be able to perform tasks as well as—if not better than—a human. And the size of the potential AI market is truly staggering. Bain’s AI analysis estimates that the total addressable market (TAM) for AI and gen AI-related hardware and software will grow between 40% and 55% annually, reaching between $780 billion and $990 billion by 2027. This growth is especially relevant to industries like manufacturing, where generative AI can be applied across the value chain. From inventory categorization to product risk assessments, knowledge management, and predictive maintenance strategy generation, AI's potential to optimize manufacturing operations cannot be overstated. But in order to realize the transformative economic potential of AI, applications powered by LLMs need to evolve beyond chatbots that leverage retrieval-augmented generation (RAG). Truly transformative AI-powered applications need to be objective-driven, not just responding to user queries but also taking action on behalf of the user. This is crucial in complex manufacturing processes. In other words, they need to act like agents. Agentic systems, or compound AI systems, are currently emerging as the next frontier of generative AI applications. These systems consist of one or more AI agents that collaborate with each other and use tools to provide value. An AI agent is a computational entity containing short- and long-term memory, which enables it to provide context to an LLM. It also has access to tools, such as web search and function calling, that enable it to act upon the response from an LLM or provide additional information to the LLM.
Figure 1. Basic components of an agentic system.
An agentic system can have more than one AI agent. In most cases, AI agents may be required to interact with other agents within the same system or with external systems, and they’re expected to engage with humans for feedback or review of outputs from execution steps. AI agents can also comprehend the context of outputs from other agents and humans, and change their course of action and next steps accordingly. For example, agents can monitor and optimize various facets of manufacturing operations simultaneously, such as supply chain logistics and production line efficiency. There are certain benefits to having a multi-agent collaboration system instead of a single agent. You can have each agent customized to do one thing and do it well. For example, one agent can create meeting minutes while another agent writes follow-up emails. Multi-agent collaboration can also be applied to predictive maintenance, with one agent analyzing machine data to find mechanical issues before they occur while another optimizes resource allocation, ensuring materials and labor are utilized efficiently. You can also provision dedicated resources and tools for different agents. For example, one agent uses a model to analyze and transcribe videos while another uses models for natural language processing (NLP) and answering questions about the video.
Figure 2. Multi-agent collaboration system.
MongoDB can act as the memory provider for an agentic system. Conversation history alongside vector embeddings can be stored in MongoDB leveraging the flexible document model. Atlas Vector Search can be used to run semantic search on stored vector embeddings, and our sharding capabilities allow for horizontal scaling without compromising on performance. Our clients across industries have been leveraging MongoDB Atlas for their generative AI use cases, including agentic AI use cases such as Questflow, which is transforming work by using multi-agent AI to handle repetitive tasks in strategic roles.
Supported by MiraclePlus and MongoDB Atlas, it enables startups to automate workflows efficiently. As it expands to larger enterprises, it aims to boost AI collaboration and streamline task automation, paving the way for seamless human-AI integration. The concept of a multi-agent collaboration system is new, and it can be challenging for manufacturing organizations to identify the right use case to apply this cutting-edge technology. Below, we propose a use case in which three agents collaborate to optimize the performance of a machine.
Multi-agent collaboration use case in manufacturing
In manufacturing operations, leveraging multi-agent collaboration for predictive maintenance can significantly boost operational efficiency. For instance, consider a production environment where three distinct agents—predictive maintenance, process optimization, and quality assurance—collaborate in real time to refine machine operations and maintain the factory at peak performance. In Figure 3, the predictive maintenance agent is focused on machinery maintenance. Its main tasks are to monitor equipment health by analyzing sensor data generated from the machines. It predicts machine failures and recommends maintenance actions to extend machinery lifespan and prevent downtime as much as possible.
Figure 3. A multi-agent system for production optimization.
The process optimization agent is designed to enhance production efficiency. It analyzes production parameters to identify inefficiencies and bottlenecks, and it optimizes those parameters by adjusting them (speed, vibration, etc.) to maintain product quality and production efficiency. This agent also incorporates feedback from the other two agents while deciding which production parameter to tune.
For instance, if the predictive maintenance agent flags an anomaly in a milling machine's temperature sensor reading, such as rising temperature values, the process optimization agent can review the cutting speed parameter for adjustment. The quality assurance agent is responsible for evaluating product quality. It analyzes optimized production parameters and checks how those parameters can affect the quality of the product being fabricated. It also provides feedback to the other two agents. The three agents constantly exchange feedback with each other, and this feedback is stored in the MongoDB Atlas database as the agents' short-term memory, while vector embeddings and sensor data are persisted as long-term memory. MongoDB is an ideal memory provider for agentic AI use case development thanks to its flexible document model, extensive security and data governance features, and horizontal scalability. All three agents have access to a "search_documents" tool, which leverages Atlas Vector Search to query vector embeddings of machine repair manuals and old maintenance work orders. The predictive maintenance agent leverages this tool to surface additional insights while performing machine root cause diagnostics. Set up the use case shown in this article using our repo. To learn more about MongoDB’s role in the manufacturing industry, please visit our manufacturing and automotive webpage. To learn more about AI agents, visit our Demystifying AI Agents guide.
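To illustrate the "search_documents" tool described above, here is a sketch of the Atlas Vector Search aggregation pipeline such a tool might issue. The `$vectorSearch` stage and its fields are real Atlas syntax, but the index name (`manuals_index`), field names, and candidate multiplier are illustrative assumptions rather than the repo's actual configuration.

```python
# Sketch of the aggregation pipeline behind a "search_documents" tool.
# The query vector would come from an embedding model; index and field
# names here ("manuals_index", "embedding", "text", "source") are assumed.
def build_search_documents_pipeline(query_vector, limit=5):
    """Return an Atlas $vectorSearch pipeline over repair-manual embeddings."""
    return [
        {
            "$vectorSearch": {
                "index": "manuals_index",
                "path": "embedding",
                "queryVector": query_vector,
                "numCandidates": limit * 20,  # oversample candidates for recall
                "limit": limit,
            }
        },
        {
            "$project": {
                "text": 1,
                "source": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

pipeline = build_search_documents_pipeline([0.12, -0.03, 0.88], limit=5)
# In a live system: results = db.manuals.aggregate(pipeline)
```

The agent would embed its diagnostic question, run this pipeline against the manuals collection, and feed the returned chunks back to the LLM as context.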
BAIC Group Powers the Internet of Vehicles With MongoDB
The Internet of Vehicles (IoV) is revolutionizing the automotive industry by connecting vehicles to the Internet. Vehicle sensors generate a wealth of data, affording manufacturers, vehicle owners, and traffic departments deep insights. This unlocks new business opportunities and enhances service experiences for both enterprises and consumers. BAIC Research Institute, a subsidiary of Beijing Automotive Group Co. (BAIC Group), is a backbone enterprise of the Chinese auto industry. Headquartered in Beijing, BAIC Group is involved in everything from R&D and manufacturing of vehicles and parts to the automobile service trade, comprehensive traveling services, financing, and investments. BAIC Group is a Fortune Global 500 company with more than US$67 billion in annual revenue. The Institute is also heavily invested in the IoV industry. It plays a pivotal role in the research and development of the two major independent passenger vehicle products in China: Arcfox and Beijing Automotive. It is also actively involved in building vehicle electronic architecture, intelligent vehicle controls, smart cockpit systems, and smart driving technologies. To harness cutting-edge, data-driven technologies such as cloud computing, the Internet of Things, and big data, the Institute has built a comprehensive IoV cloud platform based on ApsaraDB for MongoDB. The platform collects, processes, and analyzes data generated by over a million vehicles, providing intelligent and personalized services to vehicle owners, automotive companies, and traffic management departments. At MongoDB.local Beijing in September 2024, BAIC Group’s Deputy Chief Engineer Chungang Zuo said that the BAIC IoV cloud platform facilitates data access for over a million vehicles. It also supports online services for hundreds of thousands of vehicles.
Data technology acts as a key factor for IoV development
With a rapid increase in vehicle ownership in recent years, the volume of data on BAIC Group’s IoV cloud platform quickly surged. This led to several data management challenges, namely the need to handle the following:
Large data volumes
High update frequencies
Complex data formats
High data concurrency
Low query efficiency
Data security issues
The IoV platform also needed to support automotive manufacturers, who must centrally store and manage a large amount of diverse transactional data. Finally, the platform needed to enable manufacturers to leverage AI and analytical capabilities to interpret and create value from this data. BAIC Group’s IoV cloud platform reached a breaking point because the legacy databases it employed could neither handle the deluge of exponentially growing vehicle data nor support planned AI-driven capabilities. The Institute identified MongoDB as the solution to support its underlying data infrastructure. By using MongoDB, BAIC would gain a robust core to enhance data management efficiency from the business layer to the application layer. The power of MongoDB as a developer data platform offered a wide range of capabilities. This was a game-changer for the Institute.
MongoDB’s document model makes managing complex data simple
Unlike traditional relational database models, MongoDB’s JSON data structure and flexible schema model are well suited for the variety and scale of the ever-changing data produced by connected vehicles. In traditional databases, vehicle information is spread across multiple tables, each with nearly a hundred fields, leading to redundancy, inflexibility, and complexity. With MongoDB, all vehicle information can be stored in a single collection, simplifying data management. Migrating vehicle information to MongoDB has significantly improved the Institute’s data application efficiency.
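A single-collection vehicle record of the kind described above might look like the following sketch. Every field name and value here is illustrative, not BAIC's actual schema; the point is that data that would span several relational tables nests naturally in one document.

```python
# Illustrative single-document vehicle record (all fields and values are
# hypothetical). Nested subdocuments replace what would be separate
# relational tables for battery, telemetry, and owner data.
vehicle_doc = {
    "vin": "EXAMPLEVIN0000001",           # placeholder identifier
    "model": "Arcfox alpha-S",
    "battery": {"capacity_kwh": 93.6, "health_pct": 97},
    "telemetry": {
        "odometer_km": 18250,
        "last_seen": "2024-09-01T08:30:00Z",
    },
    "owner": {"app_id": "user-88213", "region": "Beijing"},
}

# With a live cluster this would simply be:
#   db.vehicles.insert_one(vehicle_doc)
print(vehicle_doc["telemetry"]["odometer_km"])
```

Because the schema is flexible, a new sensor type becomes just another nested field on future documents, with no table migration required.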
MongoDB’s GeoJSON support enables location data management
The ability to accurately calculate vehicle location within the IoV cloud platform is a key benefit offered by MongoDB. In particular, MongoDB’s GeoJSON support and geospatial indexing enable important features, such as the ability to screen vehicle parking situations. Zuo explained that during the data cleaning phase, the Institute formats raw vehicle data for MongoDB storage and outputs it as standardized cleaned data. In the data calculation phase, geospatial queries filter vehicles in a specific range. This is followed by algorithmic clustering analysis of locations to gain vehicle parking information. Finally, the Institute retrieves real-time data from the MongoDB platform to classify and display vehicle parking situations on a map for easy viewing.
MongoDB provides scalability and high performance
MongoDB’s sharded cluster enhances data capacity and processing performance, enabling the Institute to effectively manage exponential IoV data growth. The querying and result-returning processes are executed concurrently in a multi-threaded manner. This facilitates continuous horizontal expansion without any downtime as data needs grow. Zuo said that a significant advantage for developers is the high self-healing capability of the sharded cluster; if a primary node fails, MongoDB automatically fails over to a secondary. This ensures seamless service and process integrity.
Security features meet data regulatory requirements
MongoDB’s built-in security features enable the IoV platform to meet rigorous data protection standards, helping the Institute stay compliant with regulatory requirements and industry standards. With MongoDB, the Institute can ensure end-to-end data encryption throughout the entire data lifecycle, including during transmission, storage, and processing, with support for executing queries directly on encrypted data.
For example, during storage, MongoDB encrypts sensitive data, such as vehicle identification numbers and phone numbers. Sharding and replication mechanisms establish a robust data security firewall. Furthermore, MongoDB’s permission control mechanism enables secure database management with decentralized authority. Zuo said that MongoDB’s sharded storage and clustered deployment features ensure the platform’s reliability exceeds its 99.99% service-level agreement. MongoDB’s high concurrency capabilities enable the Institute to share real-time vehicle status updates with vehicle owners’ apps, enhancing user experience and satisfaction. In addition, MongoDB’s storage compression and flexible cloud server configurations reduce data storage space and resource waste. This significantly lowers data storage and application costs.
BAIC uses MongoDB to prepare for future opportunities
Looking ahead, Zuo stated that the BAIC IoV cloud platform has expanding demands for data development and application in three areas: vehicle data centers, application scenario implementation, and AI applications. MongoDB’s capabilities will remain core to helping address the Institute’s upcoming needs and challenges.
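The geospatial screening described earlier can be sketched as a standard MongoDB `$geoWithin` filter. The `$centerSphere` operator and the radians conversion are real MongoDB geospatial syntax; the `location` field name and the Beijing coordinates below are illustrative assumptions.

```python
# Sketch of a geospatial filter for "vehicles parked within N km of a point".
# Assumes a 2dsphere index on a GeoJSON 'location' field (field name assumed).
EARTH_RADIUS_KM = 6378.1  # $centerSphere expects the radius in radians

def vehicles_within(center_lng, center_lat, radius_km):
    """Build a find() filter matching vehicles inside the given circle."""
    return {
        "location": {
            "$geoWithin": {
                "$centerSphere": [
                    [center_lng, center_lat],          # GeoJSON order: lng, lat
                    radius_km / EARTH_RADIUS_KM,       # km converted to radians
                ]
            }
        }
    }

query = vehicles_within(116.40, 39.90, 5)  # 5 km around central Beijing
# In a live system: parked = db.vehicles.find(query)
```

The matching documents could then be fed to the clustering step the article describes to summarize parking hotspots on a map.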
Smarter Care: MongoDB & Microsoft
Healthcare is on the cusp of a revolution powered by data and AI. Microsoft, with innovations like Azure OpenAI, Microsoft Fabric, and Power BI, has become a leading force in this transformation. MongoDB Atlas complements these advancements with a flexible and scalable platform for unifying operational, metadata, and AI data, enabling seamless integration into healthcare workflows. By combining these technologies, healthcare providers can enhance diagnostics, streamline operations, and deliver exceptional patient care. In this blog post, we explore how MongoDB and Microsoft AI technologies converge to create cutting-edge healthcare solutions through our “Leafy Hospital” demo—a showcase of possibilities in breast cancer diagnosis.
The healthcare data challenge
The healthcare industry faces unique challenges in managing and utilizing massive datasets. From mammograms and biopsy images to patient histories and medical literature, making sense of this data is often time-intensive and error-prone. Radiologists, for instance, must analyze vast amounts of information to deliver accurate diagnoses while ensuring sensitive patient data is handled securely. MongoDB Atlas addresses these challenges by providing a unified view of disparate data sources, offering scalability, flexibility, and advanced features like Atlas Search and Atlas Vector Search. When paired with Microsoft AI technologies, the potential to revolutionize healthcare workflows becomes limitless.
The Leafy Hospital solution: A unified ecosystem
Our example integrated solution, Leafy Hospital, showcases the transformative potential of MongoDB Atlas and Microsoft AI capabilities in healthcare. Focused on breast cancer diagnostics, this demo explores how the integration of MongoDB’s flexible data platform with Microsoft’s cutting-edge features—such as Azure OpenAI, Microsoft Fabric, and Power BI—can revolutionize patient care and streamline healthcare workflows.
The solution takes a three-pronged approach to improving breast cancer diagnosis and patient care:
Predictive AI for early detection
Generative AI for workflow automation
Advanced BI and analytics for actionable insights
Figure 1. Leafy Hospital solution architecture
If you’re interested in discovering how this solution could be applied to your organization’s unique needs, we invite you to connect with your MongoDB account representative. We’d be delighted to provide a personalized demonstration of the Leafy Hospital solution and collaborate on tailoring it for your specific use case.
Key capabilities
Predictive AI for early detection
Accurate diagnosis is critical in breast cancer care. Traditional methods rely heavily on radiologists manually analyzing mammograms and biopsies, increasing the risk of errors. Predictive AI transforms this process by automating data analysis and improving accuracy.
BI-RADS prediction
BI-RADS (Breast Imaging-Reporting and Data System) is a standardized classification for mammogram findings, ranging from 0 (incomplete) to 6 (malignant). To predict BI-RADS scores, deep learning models like VGG16 and EfficientNetV2L are trained on mammogram image datasets. Fabric Data Science simplifies the training and experimentation process by enabling:
Direct data uploads to OneLake for model training
Easy comparison of multiple ML experiments and metrics
Auto-logging of parameters with MLflow for lifecycle management
These models are trained over many epochs until acceptable accuracy is achieved, offering reliable predictions for radiologists.
Biopsy classification
For biopsy analysis, classification models such as the random forest classifier are trained on biopsy features like cell size, shape uniformity, and mitosis counts. Classification models attain high accuracy when trained on scalar data, making them highly effective for classifying cancers as malignant or benign.
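To make the scalar-feature classification idea concrete without reproducing the Fabric pipeline, here is a stdlib-only stand-in: a nearest-centroid rule over the kinds of features named above (cell size, shape uniformity, mitosis counts). The real solution trains a random forest in Fabric Data Science; the feature values below are synthetic and purely illustrative.

```python
# Toy nearest-centroid classifier over scalar biopsy-style features.
# Stands in for the random forest described in the article; all data is
# synthetic. Feature order: [cell_size, shape_uniformity, mitosis_count].
import math

def centroid(rows):
    """Mean feature vector of a list of samples."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(sample, benign_rows, malignant_rows):
    """Label a sample by whichever class centroid it lies closer to."""
    d_benign = math.dist(sample, centroid(benign_rows))
    d_malignant = math.dist(sample, centroid(malignant_rows))
    return "malignant" if d_malignant < d_benign else "benign"

benign = [[2, 1, 1], [3, 2, 1], [1, 1, 1]]
malignant = [[8, 7, 5], [9, 8, 6], [7, 9, 4]]
print(classify([8, 8, 5], benign, malignant))  # → malignant
```

A real deployment would swap this rule for a trained random forest and store predictions alongside the biopsy document in MongoDB, as the next paragraph describes.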
Data ingestion, training, and prediction cycles are well managed using Fabric Data Science and the MongoDB Spark Connector, ensuring a seamless flow of metadata and results between Azure and MongoDB Atlas.
Generative AI for workflow automation
Radiologists often spend hours documenting findings, time that could be better spent analyzing cases. Generative AI streamlines this process by automating report generation and enabling intelligent chatbot interactions.
Vector search: The foundation of semantic understanding
At the heart of these innovations lies MongoDB Atlas Vector Search, which revolutionizes how medical data is stored, accessed, and analyzed. By leveraging Azure OpenAI’s embedding models, clinical notes and other unstructured data are transformed into vector embeddings—mathematical representations that capture the meaning of the text in a high-dimensional space. Similarity search is a key use case, enabling radiologists to query the system with natural language prompts like “Show me cases where additional tests were recommended.” The system interprets the intent behind the question, retrieves relevant documents, and delivers precise, context-aware results. This ensures that radiologists can quickly access information without sifting through irrelevant data. Beyond similarity search, vector search facilitates the development of RAG architectures, which combine semantic understanding with external contextual data. This architecture allows for the creation of advanced features like automated report generation and intelligent chatbots, which further streamline decision-making and enhance productivity.
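The RAG flow just described can be sketched as follows. The retrieval step (Azure OpenAI embeddings plus an Atlas Vector Search query) is stubbed out with hand-written chunks, and the prompt template is a hypothetical illustration of grounding the model in retrieved patient context, not the demo's actual prompt.

```python
# Hedged sketch of RAG prompt assembly: retrieved clinical-note chunks are
# stitched into a grounded prompt for the LLM. Retrieval is stubbed here;
# in the demo it would be Atlas Vector Search over embedded notes.
def build_rag_prompt(question, retrieved_chunks):
    """Compose a prompt that restricts the model to the retrieved context."""
    context = "\n".join(f"- {c['text']}" for c in retrieved_chunks)
    return (
        "Answer using ONLY the patient context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

chunks = [
    {"text": "2019 mammogram: BI-RADS 2, benign findings."},
    {"text": "2023 biopsy: additional imaging recommended."},
]
prompt = build_rag_prompt("Were additional tests recommended?", chunks)
print(prompt)
```

The assembled prompt would then be sent to the chat model, so every answer is traceable back to specific retrieved documents.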
Automated report generation
Once a mammogram or biopsy is analyzed, Azure OpenAI’s large language models can be used to generate detailed clinical notes, including:
Findings: Key observations from the analysis
Conclusions: Diagnoses and suggested next steps
Standardized codes: Using SNOMED terms for consistency
This automation enhances productivity by allowing radiologists to focus on verification rather than manual documentation.
Chatbots with retrieval-augmented generation
Chatbots are another way to support radiologists when they need quick access to historical patient data or medical research. Traditional methods can be inefficient, particularly when dealing with older records or specialized cases. Our retrieval-augmented generation-based chatbot, powered by Azure OpenAI, Semantic Kernel, and MongoDB Atlas, provides:
Patient-specific insights: Querying MongoDB for 10 years of patient history, summarized and provided as context to the chatbot
Medical literature searches: Using vector search to retrieve relevant documents from indexed journals and studies
Secure responses: Ensuring all answers are grounded in validated patient data and research
The chatbot improves decision-making and enhances the user experience by delivering accurate, context-aware responses in real time.
Advanced BI and analytics for actionable insights
In healthcare, data is only as valuable as the insights it provides. MongoDB Atlas bridges real-time transactional analytics and long-term data analysis, empowering healthcare providers with tools for informed decision-making at every stage.
Transactional analytics
Transactional, or in-app, analytics deliver insights directly within applications. For example, MongoDB Atlas enables radiologists to instantly access historical BI-RADS scores and correlate them with new findings, streamlining the diagnostic process. This ensures decisions are based on accurate, real-time data.
Advanced clinical decision support (CDS) systems benefit from integrating predictive analytics into workflows. For instance, biopsy results stored in MongoDB are enriched with machine learning predictions generated in Microsoft Fabric, helping radiologists make faster, more precise decisions.
Long-term analytics
While transactional analytics focus on operational efficiency, long-term analytics enable healthcare providers to step back and evaluate broader trends. MongoDB Atlas, integrated with Microsoft Power BI and Fabric, facilitates this critical analysis of historical data. For instance, patient cohort studies become more insightful when powered by a unified dataset that combines MongoDB Atlas’ operational data with historical trends stored in Microsoft OneLake. Long-term analytics also shine in operational efficiency assessments. By integrating MongoDB Atlas data with Power BI, hospitals can create dashboards that track key performance indicators such as average time to diagnosis, wait times for imaging, and treatment start times. These insights help identify bottlenecks, streamline processes, and ultimately improve the patient experience. Furthermore, historical data stored in OneLake can be combined with MongoDB’s real-time data to train machine learning models, enhancing future predictive analytics.
OLTP vs. OLAP
This unified approach is exemplified by the distinction between OLTP and OLAP workloads. On the OLTP side, MongoDB Atlas handles real-time data processing, supporting immediate tasks like alerting radiologists to anomalies. On the OLAP side, data stored in Microsoft OneLake supports long-term analysis, enabling hospitals to identify trends, evaluate efficiency, and train advanced AI models. This dual capability allows healthcare providers to “run the business” through operational insights and “analyze the business” by uncovering long-term patterns.
Figure 2.
Real-time analytics data pipeline
MongoDB’s Atlas SQL Connector plays a crucial role in bridging these two worlds. By converting MongoDB’s flexible document model into a relational format, it allows tools like Power BI to work seamlessly with MongoDB data.
Next steps
For a detailed, technical exploration of the architecture, including ML notebooks, chatbot implementation code, and dataset resources, visit Building Advanced Healthcare Solutions with MongoDB and Microsoft in our Solution Library. Whether you’re a developer, data scientist, or healthcare professional, you’ll find valuable insights to replicate and expand upon this solution! To learn more about how MongoDB can power healthcare solutions, visit our solutions page. Check out our Atlas Vector Search Quick Start guide to get started with MongoDB Atlas Vector Search today.
Supercharge AI Data Management With Knowledge Graphs
WhyHow.AI has built and open-sourced a platform using MongoDB, enhancing how organizations leverage knowledge graphs for data management and insights. Integrated with MongoDB, this solution offers a scalable foundation with features like vector search and aggregation to support organizations in their AI journey. Knowledge graphs address the limitations of traditional retrieval-augmented generation (RAG) systems, which can struggle to capture intricate relationships and contextual nuances in enterprise data. By embedding rules and relationships into a graph structure, knowledge graphs enable accurate and deterministic retrieval processes. This functionality extends beyond information retrieval: knowledge graphs also serve as foundational elements for enterprise memory, helping organizations maintain structured datasets that support future model training and insights. WhyHow.AI enhances this process by offering tools designed to combine large language model (LLM) workflows with Python- and JSON-native graph management. Using MongoDB’s robust capabilities, these tools help combine structured and unstructured data and search capabilities, enabling efficient querying and insights across diverse datasets. MongoDB’s modular architecture seamlessly integrates vector retrieval, full-text search, and graph structures, making it an ideal platform for RAG and unlocking the full potential of contextual data. Check out our AI Learning Hub to learn more about building AI-powered apps with MongoDB.
Creating and storing knowledge graphs with WhyHow.AI and MongoDB
Creating effective knowledge graphs for RAG requires a structured approach that combines workflows from LLMs, developers, and nontechnical domain experts. Simply capturing all entities and relationships from text and relying on an LLM to organize the data can lead to a messy retrieval process that lacks utility.
Instead, WhyHow.AI advocates for a schema-constrained graph creation method, emphasizing the importance of developing a context-specific schema tailored to the user’s use case. This approach ensures that the knowledge graphs focus on the specific relationships that matter most to the user’s workflow. Once the knowledge graphs are created, the flexibility of MongoDB’s schema design ensures that users are not confined to rigid structures. This adaptability enables seamless expansion and evolution of knowledge graphs as data and use cases develop. Organizations can rapidly iterate during early application development without being restricted by predefined schemas. In instances where additional structure is required, MongoDB supports schema enforcement, offering a balance between flexibility and data integrity. For instance, aligning external research with patient records is crucial to delivering personalized healthcare. Knowledge graphs bridge the gap between clinical trials, best practices, and individual patient histories. New clinical guidelines can be integrated with patient records to identify which patients would benefit most from updated treatments, ensuring that the latest practices are applied to individual care plans.
Optimizing knowledge graph storage and retrieval with MongoDB
Harnessing the full potential of knowledge graphs requires both effective creation tools and robust systems for storage and retrieval. Here’s how WhyHow.AI and MongoDB work together to optimize the management of knowledge graphs.
Storing data in MongoDB
WhyHow.AI relies on MongoDB’s document-oriented structure to organize knowledge graph data into modular, purpose-specific collections, enabling efficient and flexible queries. This approach is crucial for managing complex entity relationships and ensuring accurate provenance tracking.
To support this functionality, the WhyHow.AI Knowledge Graph Studio comprises several key components:
Workspaces separate documents, schemas, graphs, and associated data by project or domain, maintaining clarity and focus.
Chunks are raw text segments with embeddings for similarity searches, linked to triples and documents to provide evidence and provenance.
The graph collection stores the knowledge graph along with metadata and schema associations, all organized by workspace for centralized data management.
Schemas define the entities, relationships, and patterns within graphs, adapting dynamically to reflect new data and keep the graph relevant.
Nodes represent entities like people, locations, or concepts, each with unique identifiers and properties, forming the graph’s foundation.
Triples define subject-predicate-object relationships and store embedded vectors for similarity searches, enabling reliable retrieval of relevant facts.
Queries log user queries, including triple results and metadata, providing an immutable history for analysis and optimization.
Figure 1. WhyHow.AI platform and knowledge graph illustration.
To enhance data interoperability, MongoDB’s aggregation framework enables efficient linking across collections. For instance, retrieving chunks associated with a specific triple can be seamlessly achieved through an aggregation pipeline, connecting workspaces, graphs, chunks, and document collections into a cohesive data flow.
Querying knowledge graphs
With the representation established, users can perform both structured and unstructured queries with the WhyHow.AI querying system. Structured queries enable the selection of specific entity types and relationships, while unstructured queries enable natural language questions to return related nodes, triples, and linked vector chunks. WhyHow.AI’s query engine embeds triples to enhance retrieval accuracy, bypassing traditional Text2Cypher methods.
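The triple-to-chunk linking described above can be sketched as a MongoDB aggregation pipeline. The `$match`, `$lookup`, and `$project` stages are real aggregation operators; the collection names (`triples`, `chunks`) track the components listed above, but the field names (`chunk_ids`, `evidence`) are illustrative assumptions about the schema rather than WhyHow.AI's exact layout.

```python
# Sketch of a provenance lookup: starting from one triple, pull the text
# chunks that serve as its evidence. Field names ("chunk_ids", "evidence")
# are assumed for illustration.
def triple_provenance_pipeline(triple_id):
    """Aggregation pipeline joining a triple to its supporting chunks."""
    return [
        {"$match": {"_id": triple_id}},
        {
            "$lookup": {
                "from": "chunks",         # chunk collection described above
                "localField": "chunk_ids",
                "foreignField": "_id",
                "as": "evidence",
            }
        },
        {
            "$project": {
                "subject": 1,
                "predicate": 1,
                "object": 1,
                "evidence.text": 1,
            }
        },
    ]

pipeline = triple_provenance_pipeline("t-001")
# In a live system: result = db.triples.aggregate(pipeline)
```

Because the join happens inside the database, a single round trip returns both the fact and the raw text that supports it, which is exactly the provenance tracking the studio's design calls for.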
WhyHow.AI’s retrieval engine embeds triples and returns them along with the chunks tied to them, combining the strengths of structured and unstructured data representations and retrieval patterns. And, with MongoDB’s built-in vector search, users can store and query vectorized text chunks alongside their graph and application data in a single, unified location.

Enabling scalability, portability, and aggregations

MongoDB’s horizontal scalability ensures that knowledge graphs can grow effortlessly alongside expanding datasets. Users can also easily utilize WhyHow.AI's platform to create modular multiagent and multigraph workflows. They can deploy MongoDB Atlas on their preferred cloud provider or maintain control by running it in their own environments, gaining flexibility and reliability. As graph complexity increases, MongoDB’s aggregation framework facilitates diverse queries, extracting meaningful insights from multiple datasets with ease.

Providing familiarity and ease of use

MongoDB’s familiarity enables developers to apply their existing expertise without the need to learn new technologies or workflows. With WhyHow.AI and MongoDB, developers can build graphs with JSON data and Python-native APIs, which are perfect for LLM-driven workflows. The same database trusted for years in application development can now manage knowledge graphs, streamlining onboarding and accelerating development timelines.

Taking the next steps

WhyHow.AI’s knowledge graphs overcome the limitations of traditional RAG systems by structuring data into meaningful entities, relationships, and contexts. This enhances retrieval accuracy and decision-making in complex fields. Integrated with MongoDB, these capabilities are amplified through a flexible, scalable foundation featuring modular architecture, vector search, and powerful aggregation.
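Retrieving embedded triples together with their chunks can be sketched with an Atlas `$vectorSearch` stage followed by a `$lookup`. The index, collection, and field names here (`triple_vector_index`, `embedding`, `chunk_ids`) are assumptions for illustration, not WhyHow.AI’s actual configuration:

```python
# Sketch: Atlas Vector Search over embedded triples, then a $lookup that pulls
# in the chunks supporting each hit. Index/collection/field names are assumed.

def build_triple_search_pipeline(query_vector, limit=5):
    return [
        {"$vectorSearch": {
            "index": "triple_vector_index",  # assumed vector index name
            "path": "embedding",             # assumed embedding field on triples
            "queryVector": query_vector,
            "numCandidates": limit * 20,     # oversample for better recall
            "limit": limit,
        }},
        {"$lookup": {                         # attach the supporting chunks
            "from": "chunks",
            "localField": "chunk_ids",
            "foreignField": "_id",
            "as": "evidence",
        }},
        {"$project": {
            "head": 1, "relation": 1, "tail": 1, "evidence.text": 1,
            "score": {"$meta": "vectorSearchScore"},
        }},
    ]

pipeline = build_triple_search_pipeline([0.1] * 1024)
print(pipeline[0]["$vectorSearch"]["limit"])  # 5
```

Run against a triples collection (`db.triples.aggregate(pipeline)`), this returns the top-scoring triples with their linked chunks in one round trip, which is the unified graph-plus-vector retrieval pattern described above.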
Together, WhyHow.AI and MongoDB help organizations unlock their data’s potential, driving insights and enabling innovative knowledge management solutions. No matter where you are in your AI journey, MongoDB can help! You can get started with your AI-powered apps by registering for MongoDB Atlas and exploring the tutorials available in our AI Learning Hub. Otherwise, head over to our quick-start guide to get started with MongoDB Atlas Vector Search today. Want to learn more about why MongoDB is the best choice for supporting modern AI applications? Check out our on-demand webinar, “Comparing PostgreSQL vs. MongoDB: Which is Better for AI Workloads?” presented by MongoDB Field CTO Rick Houlihan. If your company is interested in being featured in a story like this, we’d love to hear from you. Reach out to us at ai_adopters@mongodb.com.
Reintroducing the Versioned MongoDB Atlas Administration API
Our MongoDB Atlas Administration API has gotten some work done in the last couple of years to become the best “Versioned” of itself. In this blog post, we’ll go over what’s changed and why migrating to the newest version can help you have a seamless experience managing MongoDB Atlas.

What does the MongoDB Atlas Administration API do?

MongoDB Atlas, MongoDB’s managed developer data platform, contains a range of tools and capabilities that enable developers to build their applications’ data infrastructure with confidence. As application requirements and developer teams grow, MongoDB Atlas users might want to further automate database operation management to scale their application development cycles and enhance the developer experience. The entry point to managing MongoDB Atlas in a more programmatic fashion is the legacy MongoDB Atlas Administration API. This API enables developers to manage their use of MongoDB Atlas at a control plane level. The API and its various endpoints enable developers to interact with different MongoDB Atlas resources—such as clusters, database users, or backups—and let them perform operational tasks like creating, modifying, and deleting those resources. Additionally, the Atlas Administration API supports the MongoDB Atlas Go SDK, which empowers developers to seamlessly interact with the full range of MongoDB Atlas features and capabilities using the Go programming language.

Why should I migrate to the Versioned Atlas Administration API?

While it serves the same purpose as the legacy version, the new Versioned Atlas Administration API provides a significantly better overall experience in accessing MongoDB Atlas programmatically. Here’s what you can expect when you move over to the versioned API.

A better developer experience

The Versioned Atlas Administration API provides a predictable and consistent experience with API changes and gives better visibility into new features and changes via the Atlas Administration API changelog.
This means that breaking changes that can impact your code will only be introduced in a new resource version and will not affect the production code running the current, stable version. Also, every time a new version of a resource is added, you will be notified that the older version is deprecated, giving you at least one year to upgrade before the previous resource version is removed. As an added benefit, the Versioned Atlas Administration API supports Service Accounts as a new way to authenticate to MongoDB Atlas using the industry-standard OAuth 2.0 protocol with the Client Credentials flow.

Minimal workflow disruptions

With resource-level versioning, the Versioned Atlas Administration API provides specific resource versions, which are represented by dates. When migrating from the legacy, unversioned MongoDB Atlas Administration API (/v1) to the new Versioned Atlas Administration API (/v2), the API will default to resource version 2023-02-01. To simplify the initial migration, this resource version applies uniformly to all API resources (e.g., /backup or /clusters). This helps ensure that migrations do not adversely affect current MongoDB Atlas Administration API–based workloads. In the future, each resource can adopt a new version independently (e.g., /cluster might update to 2026-01-01 while /backup remains on 2023-02-01). This flexibility ensures you only need to act when a resource you use is deprecated.

Improved context and visibility

Our updated documentation provides detailed guidance on the versioning process. All changes—including the release of new endpoints, the deprecation of resource versions, or nonbreaking updates to stable resources—are now tracked in a dedicated, automatically updated changelog. Additionally, the API specification offers enhanced visibility and context for all stable and deprecated resource versions, ensuring you can easily access documentation relevant to your specific use case.

Why should I migrate to the new Go SDK?
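In practice, the resource version is pinned per request through a date-based media type in the Accept header. Here is a minimal sketch using only Python’s standard library; the group ID and token are placeholders, and the exact endpoint path is an assumption for illustration, so check the API reference for your resource:

```python
# Sketch: pinning a resource version when calling the Versioned Admin API (/v2).
# The Accept header carries the date-based resource version. The group ID and
# bearer token below are placeholders, not real values.
import urllib.request

RESOURCE_VERSION = "2023-02-01"  # the default version after migrating from /v1

def build_request(group_id, token):
    url = f"https://cloud.mongodb.com/api/atlas/v2/groups/{group_id}/clusters"
    return urllib.request.Request(url, headers={
        # Version-pinned media type: breaking changes only arrive when you
        # deliberately bump this date.
        "Accept": f"application/vnd.atlas.{RESOURCE_VERSION}+json",
        "Authorization": f"Bearer {token}",  # e.g., a Service Account token
    })

req = build_request("GROUP_ID_PLACEHOLDER", "TOKEN_PLACEHOLDER")
print(req.get_header("Accept"))  # application/vnd.atlas.2023-02-01+json
```

Because the version lives in the header rather than the URL, each resource you call can be upgraded to a newer date independently, which mirrors the per-resource versioning described above.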
In addition to an updated API experience, we’ve introduced version 2 of the MongoDB Atlas Go SDK for the MongoDB Atlas Administration API. This version supports a range of capabilities that streamline your experience when using the Versioned Atlas Administration API:

- Full endpoint coverage: MongoDB Atlas Go SDK version 2 enables you to access all the features and capabilities that the versioned API offers today with full endpoint coverage, so that you can programmatically use MongoDB Atlas in full.
- Flexibility: When interacting with the new versioned API through the new Go SDK, you can choose which version of the MongoDB Administration API you want to work with, giving you control over when breaking changes impact you.
- Ease of use: The new Go SDK simplifies getting started with the MongoDB Atlas Administration API. You’ll be able to work with fewer lines of code and prebuilt functions, structs, and methods that encapsulate the complexity of HTTP requests, authentication, error handling, versioning, and other low-level details.
- Immediate access to updates: When using the new Go SDK, you can immediately access any newly released API capabilities. Every time a new version of MongoDB Atlas is released, the SDK will be quickly updated and continuously maintained, ensuring compatibility with any changes in the API and speeding up your development process.

How can I experience the enhanced version?

To get started using the Versioned Atlas Administration API, you can visit the migration guide, which outlines how you can transition over from the legacy version. To learn more about the MongoDB Atlas Administration API, you can visit our documentation page.
Building Gen AI with MongoDB & AI Partners | January 2025
Even for those of us who work in technology, it can be hard to keep track of the awards companies give and receive throughout the year. For example, in the past few months MongoDB has announced both our own awards (such as the William Zola Award for Community Excellence) and awards the company has received—like the AWS Technology Partner of the Year NAMER and two awards from RepVue. And that’s just us! It can be a lot! But as hard as they can be to follow, industry awards—and the recognition, thanks, and collaboration they represent—are important. They highlight the power and importance of working together and show how companies like MongoDB and partners are committed to building best-in-class solutions for customers. So without further ado, I’m pleased to announce that MongoDB has been named Technology Partner of the Year in Confluent’s 2025 Global Partner Awards! As a member of the MongoDB AI Applications Program (MAAP) ecosystem, Confluent enables businesses to build a trusted, real-time data foundation for generative AI applications through seamless integration with MongoDB and Atlas Vector Search. Above all, this award is a testament to MongoDB and Confluent’s shared vision: to help enterprises unlock the full potential of real-time data and AI. Here’s to what’s next!

Welcoming new AI and tech partners

It's been an action-packed start to the year: in January 2025, we welcomed six new AI and tech partners that offer product integrations with MongoDB. Read on to learn more about each great new partner!

Base64

Base64 is an all-in-one solution to bring AI into document-based workflows, enabling complex document processing, workflow automation, AI agents, and data intelligence. “MongoDB provides a fantastic platform for storing and querying all kinds of data, but getting unstructured information like documents into a structured format can be a real challenge. That's where Base64 comes in.
We're the perfect onramp, using AI to quickly and accurately extract the key data from documents and feed it right into MongoDB,” said Chris Huff, CEO of Base64. “This partnership makes it easier than ever for businesses to unlock the value hidden in their documents and leverage the full power of MongoDB.”

Dataloop

Dataloop is a platform that allows developers to build and orchestrate unstructured data pipelines and develop AI solutions faster. “We’re thrilled to join forces with MongoDB to empower companies in building multimodal AI agents,” said Nir Buschi, CBO and co-founder of Dataloop. “Our collaboration enables AI developers to combine Dataloop’s data-centric AI orchestration with MongoDB’s scalable database. Enterprises can seamlessly manage and process unstructured data, enabling smarter and faster deployment of AI agents. This partnership accelerates time to market and helps companies get real value to customers faster.”

Maxim AI

Maxim AI is an end-to-end AI simulation and evaluation platform, helping teams ship their AI agents reliably and more than 5x faster. “We're excited to collaborate with MongoDB to empower developers in building reliable, scalable AI agents faster than ever,” said Vaibhavi Gangwar, CEO of Maxim AI. “By combining MongoDB’s robust vector database capabilities with Maxim’s comprehensive GenAI simulation, evaluation, and observability suite, this partnership enables teams to create high-performing retrieval-augmented generation (RAG) applications and deliver outstanding value to their customers.”

Mirror Security

Mirror Security offers a comprehensive AI security platform that provides advanced threat detection, security policy management, and continuous monitoring, ensuring compliance and protection for enterprises. “We're excited to partner with MongoDB to redefine security standards for enterprise AI deployment,” said Dr. Aditya Narayana, Chief Research Officer at Mirror Security.
“By combining MongoDB's scalable infrastructure with Mirror Security's end-to-end vector encryption, we're making it simple for organizations to launch secure RAG pipelines and trusted AI agents. Our collaboration eliminates security-performance trade-offs, empowering enterprises in regulated industries to confidently accelerate their AI initiatives while maintaining the highest security standards.”

Squid AI

Squid AI is a full-featured platform for creating private AI agents in a faster, secure, and automated way. “As an AI agent platform that securely connects to MongoDB in minutes, we're looking forward to helping MongoDB customers reveal insights, take action on their data, and build enterprise AI agents,” said Leslie Lee, Head of Product at Squid AI. “By pairing Squid's semantic RAG and AI functions with MongoDB's exceptional performance, developers can build powerful AI agents that respond to new inputs in real time.”

TrojAI

TrojAI is an AI security platform that protects AI models and applications from new and evolving threats before they impact businesses. “TrojAI is thrilled to join forces with MongoDB to help companies secure their RAG-based AI apps built on MongoDB,” said Lee Weiner, CEO of TrojAI. “We know how important MongoDB is to helping enterprises adopt and harness AI. Our collaboration enables enterprises to add a layer of security to their database initialization and RAG workflows to help protect against the evolving GenAI threat landscape.”

But wait, there’s more! In February, we’ve got two webinars coming up with MAAP partners that you don’t want to miss:

Build a JavaScript AI Agent With MongoDB and LangGraph.js: Join MongoDB Staff Developer Advocate Jesse Hall and LangChain Founding Software Engineer Jacob Lee for an exclusive webinar that highlights the integration of LangGraph.js, LangChain’s cutting-edge JavaScript library, and MongoDB (live on Feb 25).
Architecting the Future: RAG and AI Agents for Enterprise Transformation: Join MongoDB, LlamaIndex, and Together AI to explore how to strategically build a tech stack that supports the development of enterprise-grade RAG and AI agentic systems, explore technical foundations and practical applications, and learn how the MongoDB AI Applications Program (MAAP) will enable you to rapidly innovate with AI (content on demand).

To learn more about building AI-powered apps with MongoDB, check out our AI Learning Hub and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.
MongoDB Empowers ISVs to Drive SaaS Innovation in India
Independent Software Vendors (ISVs) play a pivotal role in the Indian economy. Indeed, the Indian software market is expected to experience an annual growth rate of 10.40%, resulting in a market volume of $15.89bn by 2029. 1 By developing specialized software solutions and digital products that can be bought 'off the shelf', ISVs empower Indian organizations to innovate, improve efficiency, and remain competitive. Many established enterprises in India choose a 'buy' rather than 'build' strategy when it comes to creating modern software applications. This is particularly true when it comes to cutting-edge AI use cases. MongoDB works closely with Indian ISVs across industries, providing them with a multi-cloud data platform and highly flexible, scalable technologies to build operational and efficient software solutions. For example, Intellect AI, a business unit of Intellect Design Arena, has used MongoDB Atlas to drive a number of innovative use cases in the banking, financial services, and insurance industries. Intellect AI chose MongoDB for its flexibility coupled with its ability to meet complex enterprise requirements such as scale, resilience, and security compliance. And Ambee, a climate tech startup, is using MongoDB Atlas’ flexible document model to support its AI and ML models. Here are three more examples of ISV customers who are enabling, powering, and growing their SaaS solutions with MongoDB Atlas.

MongoDB enhancing Contentstack's content delivery capabilities

Contentstack is a leading provider of composable digital experience solutions, and specializes in headless content management systems (CMS). Headless CMS is a backend-only web content management system that acts primarily as a content repository. “Our headless CMS allows our customers to bring all forms of content to the table, and we host the content for them,” said Suryanarayanan Ramamurthy, Head of Data Science at Contentstack, while speaking at MongoDB.local 2024.
A great challenge in the CMS industry is the ability to provide customers with content that remains factually correct, brand-aligned, and tailored to the customer’s identity. Contentstack created an innovative, AI-based product—Brand Kit—that does exactly that, built on MongoDB Atlas. “Our product Brand Kit, which launched in June 2024, overcomes factual incorrectness. The AI capabilities the platform offers help our customers create customized and context-specific content that meets their brand guidelines and needs,” said Ramamurthy. MongoDB Atlas Vector Search enables Contentstack to transform content and bring contextual relevance to retrievals. This helps reduce errors caused by large language model hallucinations, allowing the retrieval-augmented generation (RAG) application to deliver better results to users.

AppViewX: unlocking scale for a growing cybersecurity SaaS pioneer

AppViewX delivers a platform for organizations to manage a range of cybersecurity capabilities, such as certificate lifecycle management and public key infrastructure. The company ensures end-to-end security compliance and data integrity for large enterprises across industries like banking, healthcare, and automotive. Speaking at MongoDB.local Bengaluru in 2024, Karthik Kannan, Vice President of Product Management at AppViewX, explained how AppViewX transitioned from an on-premise product to a SaaS platform in 2021. MongoDB Atlas powered this transition. MongoDB Atlas's unique flexibility, scalability, and multi-cloud capabilities enabled AppViewX to easily manage fast-growing data sets, authentication, and encryption from its customers’ endpoints, device identities, workload identities, user identities, and more. Furthermore, MongoDB provides AppViewX with robust security, guaranteeing critical data protection and compliance. “We've been really able to grow fast and at scale across different regions, gaining market share,” said Kannan.
“Our engineering team loves MongoDB,” added Kannan. “The support that we get from MongoDB allowed us to get into different regions, penetrate new markets to grow at scale, so this is a really important partnership that helped us get to where we are.”

Zluri Streamlines SaaS Management with MongoDB

Zluri provides a unified SaaS management platform that helps IT and security teams manage applications across the organization. The platform provides detailed insights into application usage, license optimization, security risks, and cost savings opportunities. Zluri processes massive volumes of unstructured data—around 9 petabytes per month—from over 800 native integrations with platforms like single sign-on, human resources management systems, and Google Workspace. One of its challenges was to automate discovery and data analysis across those platforms, as opposed to employing an exhaustive, time- and labour-intensive manual approach. MongoDB Atlas has allowed Zluri to ingest, normalize, process, and manage the high volume and complexity of data seamlessly across diverse sources. “We wanted to connect with every single system that's currently available, get all that data, process all that data so that the system works on autopilot mode, so that you're not manually adding all that information,” said Chaithaniya Yambari, Zluri’s Co-Founder and Chief Technology Officer, when speaking at MongoDB.local Bengaluru in 2024. As a fully managed database, the MongoDB Atlas platform allows Zluri to eliminate maintenance overhead, so its team of engineers and developers can focus on innovation. Zluri also utilizes MongoDB Atlas Search to perform real-time queries, filtering, and ranking of metadata. This eliminates the challenges of synchronizing separate search solutions with the database, ensuring IT managers get fast, accurate, and up-to-date results. These are just a few examples of how MongoDB is working with ISVs to shape the future of India’s digital economy.
As technology continues to evolve, the role of ISVs in fostering innovation and economic growth will become ever more integral. MongoDB is committed to providing ISVs with a robust, flexible, and scalable database that removes barriers to growth and innovation. Visit our product page to learn more about MongoDB Atlas. Learn more about MongoDB Atlas Search on our product details page. Check out our Quick Start Guide to get started with MongoDB Atlas Vector Search today.
Simplify Security At Scale with Resource Policies in MongoDB Atlas
Innovation is the gift that keeps on giving: industries that are more innovative have higher returns, and more innovative industries see higher rates of long-term growth. 1 No wonder organizations everywhere strive to innovate. But in the pursuit of innovation, organizations can struggle to balance the need for speed and agility with critical security and compliance requirements. Specifically, software developers need the freedom to rapidly provision resources and build applications. But manual approval processes, inconsistent configurations, and security errors can slow progress and create unnecessary risks.

“Friction that slows down employees and leads to insecure behavior is a significant driver of insider risk.”
Paul Furtado, Vice President, Analyst, Gartner

Enter resource policies, which are now available in public preview in MongoDB Atlas. This new feature balances rapid innovation with robust security and compliance. Resource policies allow organizations to enable developers with self-service access to Atlas resources while maintaining security through automated, organization-wide ‘guardrails’.

What are resource policies?

Resource policies help organizations enforce security and compliance standards across their entire Atlas environment. These policies act as guardrails by creating organization-wide rules that control how Atlas can be configured. Instead of targeting specific user groups, resource policies apply to all users in an organization, and focus on governing a particular resource. Consider this example: an organization subject to General Data Protection Regulation (GDPR) 2 requirements needs to ensure that all of their Atlas clusters run only on approved cloud providers in regions that comply with data residency and privacy regulations. Without resource policies, developers may inadvertently deploy clusters on any cloud provider.
This risks non-compliance and potential fines of up to 20 million euros or 4% of global annual turnover, according to Article 83 of the GDPR. But by using resource policies, the organization can mandate which cloud providers are permitted, ensuring that data resides only in approved environments. The policy is automatically applied to every project in the organization, preventing the creation of clusters on unauthorized cloud platforms. Thus, compliance with GDPR is maintained.

The following resource policies are now in public preview:

- Restrict cloud provider: Limit Atlas clusters to approved cloud providers (AWS, Azure, Google Cloud).
- Restrict cloud region: Restrict cluster deployments in approved cloud providers to specific regions.
- Block wildcard IP: Reduce security risk by disabling the use of the 0.0.0.0/0 (or “wildcard”) IP address for cluster access.

How resource policies enable secure self-service Atlas access

Resource policies address the challenges organizations face when trying to balance developer agility with robust security and compliance. Without standardized controls, there is a risk that developers will configure Atlas clusters to deviate from corporate or external requirements. This invites security vulnerabilities and compliance gaps. Manual approval and provisioning processes for every new project create delays. Concurrently, platform teams struggle to enforce consistent standards across an organization, increasing operational complexity and costs. With resource policies, security and compliance standards are automatically enforced across all Atlas projects. This eliminates manual approvals and reduces the risk of misconfigurations. Organizations can deliver self-service access to Atlas resources for their developers. This allows them to focus on building applications instead of navigating complex internal review and compliance processes. Meanwhile, platform teams can manage policies centrally.
This ensures consistent configurations across the organization and frees time for strategic initiatives. The result is a robust security posture, accelerated innovation, and greater efficiency. Automated guardrails prevent unauthorized configurations. Concurrently, centralized policy management streamlines operations and ensures alignment with corporate and external standards. Resource policies enable organizations to scale securely and innovate without compromise. This empowers developers to move quickly while simplifying governance. iA Financial Group, one of Canada’s largest insurance and wealth management firms, uses resource policies to ensure consistency and compliance in MongoDB Atlas. “Resource Policies have allowed us to proactively supervise Atlas’s usage by our IT delivery teams,” said Geoffrey Céré, Solution Architecture Advisor at iA Financial Group. “This has been helpful in preventing non-compliant configurations with the company’s regulatory framework. Additionally, it saves our IT delivery teams time by avoiding unauthorized deployments and helps us demonstrate to internal audits that our configurations on the MongoDB Atlas platform adhere to the regulatory framework.”

Creating resource policies

Atlas resource policies are defined using the open-source Cedar policy language, which combines expressiveness with simplicity. Cedar’s concise syntax makes writing and understanding policies easy, streamlining policy creation and management. Resource policies can be created and managed programmatically through infrastructure-as-code tools like Terraform or CloudFormation, or by integrating directly using the Atlas Admin API. To explore what constructing a resource policy looks like in practice, let’s return to our earlier example. This is an organization subject to GDPR requirements that wants to ensure all of their Atlas clusters run on approved cloud providers only.
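As a rough sketch of what such a Cedar policy can look like, the following forbids creating or editing clusters on Google Cloud. The entity, action, and attribute names used here (`cloud::Action::"cluster.createEdit"`, `context.cluster.cloudProviders`) are illustrative assumptions based on Cedar's general syntax, not necessarily the exact Atlas resource policy schema; consult the resource policy documentation for the authoritative form.

```cedar
// Sketch only: forbid creating or editing Atlas clusters on Google Cloud.
// Entity, action, and attribute names are illustrative assumptions.
forbid (
  principal,
  action == cloud::Action::"cluster.createEdit",
  resource
) when {
  context.cluster.cloudProviders.contains(cloud::cloudProvider::"gcp")
};
```

Cedar also supports an `unless` clause, which lets a policy invert this logic to restrict clusters to a single approved provider instead of blocking one.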
To prevent users from creating clusters on Google Cloud (GCP), the organization could write the following policy, named “Policy Preventing GCP Clusters.” This policy forbids creating or editing a cluster when the cloud provider is Google Cloud. The body defines the behavior of the policy in the human- and machine-readable Cedar language. If required, ‘gcp’ could be replaced with ‘aws’.

Figure 1. Example resource policy preventing the creation of Atlas clusters on GCP.

Alternatively, the policy could allow users to create clusters only on Google Cloud with the following policy, named “Policy Allowing Only GCP Clusters”. This policy uses the Cedar clause “unless” to restrict creating or editing a cluster unless it is on GCP.

Figure 2. Example resource policy that restricts cluster creation to GCP only.

Policies can also have compound elements. For example, an organization can create a project-specific policy that only enforces the creation of clusters in GCP for the project with ID 6217f7fff7957854e2d09179.

Figure 3. Example resource policy that restricts cluster creation to GCP only for a specific project.

And, as shown in Figure 4, another policy might restrict cluster deployments on GCP as well as on two unapproved AWS regions: US-EAST-1 and US-WEST-1.

Figure 4. Example resource policy restricting cluster deployments on GCP as well as AWS regions US-EAST-1 and US-WEST-1.

Getting started with resource policies

Resource policies are available now in MongoDB Atlas in public preview. Get started creating and managing resource policies programmatically using infrastructure-as-code tools like Terraform or CloudFormation. Alternatively, integrate directly with the Atlas Admin API. Support for managing resource policies in the Atlas user interface is expected by mid-2025. Use the resources below to learn more about resource policies.
- Feature documentation
- Postman Collection
- Atlas Administration API documentation
- Terraform Provider documentation
- AWS CDK
- AWS CloudFormation documentation

1 McKinsey & Company, August 2024
2 gdpr.eu