MongoDB Blog

Announcements, updates, news, and more

BAIC Group Powers the Internet of Vehicles With MongoDB

The Internet of Vehicles (IoV) is revolutionizing the automotive industry by connecting vehicles to the Internet. Vehicle sensors generate a wealth of data, affording manufacturers, vehicle owners, and traffic departments deep insights. This unlocks new business opportunities and enhances service experiences for both enterprises and consumers.

BAIC Research Institute, a subsidiary of Beijing Automotive Group Co. (BAIC Group), is a backbone enterprise of the Chinese auto industry. Headquartered in Beijing, BAIC Group is involved in everything from R&D and manufacturing of vehicles and parts to the automobile service trade, comprehensive traveling services, financing, and investments. BAIC Group is a Fortune Global 500 company with more than 67 billion USD of annual revenue.

The Institute is also heavily invested in the IoV industry. It plays a pivotal role in the research and development of the two major independent passenger vehicle products in China: Arcfox and Beijing Automotive. It is also actively involved in building vehicle electronic architecture, intelligent vehicle controls, smart cockpit systems, and smart driving technologies.

To harness cutting-edge, data-driven technologies such as cloud computing, the Internet of Things, and big data, the Institute has built a comprehensive IoV cloud platform based on ApsaraDB for MongoDB. The platform collects, processes, and analyzes data generated by over a million vehicles, providing intelligent and personalized services to vehicle owners, automotive companies, and traffic management departments. At MongoDB.local Beijing in September 2024, BAIC Group’s Deputy Chief Engineer Chungang Zuo said that the BAIC IoV cloud platform facilitates data access for over a million vehicles and supports online services for hundreds of thousands of vehicles.
Data technology acts as a key factor for IoV development

With the rapid increase in vehicle ownership in recent years, the volume of data on BAIC Group’s IoV cloud platform quickly surged. This led to several data management challenges, namely the need to handle:

- Large data volumes
- High update frequencies
- Complex data formats
- High data concurrency
- Low query efficiency
- Data security issues

The IoV platform also needed to support automotive manufacturers, who must centrally store and manage a large amount of diverse transactional data. Finally, the platform needed to enable manufacturers to leverage AI and analytical capabilities to interpret and create value from this data.

BAIC Group’s IoV cloud platform reached a breaking point because the legacy databases it employed could neither handle the deluge of exponentially growing vehicle data nor support planned AI-driven capabilities. The Institute identified MongoDB as the solution to support its underlying data infrastructure. By using MongoDB, BAIC would gain a robust core to enhance data management efficiency from the business layer to the application layer. The breadth of capabilities MongoDB offers as a developer data platform was a game-changer for the Institute.

MongoDB’s document model makes managing complex data simple

Unlike traditional relational database models, MongoDB’s JSON data structure and flexible schema model are well suited to the variety and scale of the ever-changing data produced by connected vehicles. In traditional databases, vehicle information is spread across multiple tables, each with nearly a hundred fields, leading to redundancy, inflexibility, and complexity. With MongoDB, all vehicle information can be stored in a single collection, simplifying data management. Migrating vehicle information to MongoDB has significantly improved the Institute’s data application efficiency.
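To make the single-collection idea concrete, here is a purely illustrative sketch (not BAIC's actual schema): one hypothetical vehicle document that nests telemetry, location, and alert data that a relational design would split across several tables. The helper shows the dotted key paths MongoDB queries can address directly.

```python
# Illustrative only: a hypothetical connected-vehicle document. Field names
# and values are invented for this sketch, not taken from BAIC's platform.
vehicle = {
    "vin": "LNBSCB3K8XW123456",          # hypothetical VIN
    "model": "Arcfox alpha-S",
    "telemetry": {
        "speed_kmh": 62.5,
        "battery_pct": 81,
        "odometer_km": 15234,
    },
    # GeoJSON point: [longitude, latitude]
    "location": {"type": "Point", "coordinates": [116.397, 39.916]},
    "alerts": [{"code": "TPMS_LOW", "ts": "2024-09-01T08:30:00Z"}],
}

def flatten_keys(doc, prefix=""):
    """List dotted key paths, the form MongoDB queries use to address
    nested fields (e.g. {"telemetry.speed_kmh": {"$gt": 100}})."""
    keys = []
    for k, v in doc.items():
        path = f"{prefix}{k}"
        if isinstance(v, dict):
            keys.extend(flatten_keys(v, path + "."))
        else:
            keys.append(path)
    return keys
```

Because every attribute lives in one document, a query like `{"telemetry.battery_pct": {"$lt": 20}}` needs no joins across vehicle, telemetry, and alert tables.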
MongoDB’s GeoJSON supports location data management

The ability to accurately calculate vehicle location within the IoV cloud platform is a key benefit offered by MongoDB. In particular, MongoDB’s GeoJSON support and geospatial indexing enable important features, such as the ability to screen vehicle parking situations. Zuo explained that during the data cleaning phase, the Institute formats raw vehicle data for MongoDB storage and outputs it as standardized cleaned data. In the data calculation phase, GeoJSON queries filter vehicles in a specific range, followed by algorithmic clustering analysis of locations to derive vehicle parking information. Finally, the Institute retrieves real-time data from the MongoDB platform to classify and display vehicle parking situations on a map for easy viewing.

MongoDB provides scalability and high performance

MongoDB’s sharded cluster architecture enhances data capacity and processing performance, enabling the Institute to effectively manage exponential IoV data growth. Querying and result-returning are executed concurrently in a multi-threaded manner, facilitating continuous horizontal expansion without any downtime as data needs grow. Zuo said that a significant advantage for developers is the self-healing capability of the sharded cluster: if a primary node fails, MongoDB automatically fails over to a secondary. This ensures seamless service and process integrity.

Security features meet data regulatory requirements

MongoDB’s built-in security features enable the IoV platform to meet rigorous data protection standards, helping the Institute stay compliant with regulatory requirements and industry standards. With MongoDB, the Institute can ensure end-to-end data encryption throughout the entire data lifecycle, including during transmission, storage, and processing, with support for executing queries directly on encrypted data.
For example, during storage, MongoDB encrypts sensitive data such as vehicle identification numbers and phone numbers, while sharding and replication mechanisms establish a robust data security firewall. Furthermore, MongoDB’s permission control mechanism enables secure database management with decentralized authority. Zuo said that MongoDB’s sharded storage and clustered deployment features ensure the platform’s reliability exceeds its 99.99% service-level agreement.

MongoDB’s high-concurrency capabilities enable the Institute to share real-time vehicle status updates with vehicle owners’ apps, enhancing user experience and satisfaction. In addition, MongoDB’s compression technology and flexible cloud server configurations reduce data storage space and resource waste, significantly lowering data storage and application costs.

BAIC uses MongoDB to prepare for future opportunities

Looking ahead, Zuo stated that the BAIC IoV cloud platform has expanding demands for data development and application in three areas: vehicle data centers, application scenario implementation, and AI applications. MongoDB’s capabilities will remain core to addressing the Institute’s upcoming needs and challenges.
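The geospatial filtering described earlier (finding vehicles within a specific range before clustering their parking locations) can be sketched as a standard MongoDB `$geoWithin`/`$centerSphere` filter. The field name `location` is hypothetical; the operators and the radians conversion are standard MongoDB geospatial syntax.

```python
# Sketch of a "vehicles within N km of a point" filter, assuming documents
# store a GeoJSON point in a hypothetical "location" field with a 2dsphere
# index. $centerSphere takes its radius in radians: distance / Earth radius.
EARTH_RADIUS_KM = 6378.1

def vehicles_near(lon, lat, radius_km):
    """Build a MongoDB filter document for points within radius_km of
    (lon, lat). GeoJSON coordinates are [longitude, latitude]."""
    return {
        "location": {
            "$geoWithin": {
                "$centerSphere": [[lon, lat], radius_km / EARTH_RADIUS_KM]
            }
        }
    }

# 2 km around a point in central Beijing (illustrative coordinates).
query = vehicles_near(116.397, 39.916, 2.0)
# With pymongo this would run as: db.vehicles.find(query)
```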

February 19, 2025
Applied

Smarter Care: MongoDB & Microsoft

Healthcare is on the cusp of a revolution powered by data and AI. Microsoft, with innovations like Azure OpenAI, Microsoft Fabric, and Power BI, has become a leading force in this transformation. MongoDB Atlas complements these advancements with a flexible and scalable platform for unifying operational, metadata, and AI data, enabling seamless integration into healthcare workflows. By combining these technologies, healthcare providers can enhance diagnostics, streamline operations, and deliver exceptional patient care. In this blog post, we explore how MongoDB and Microsoft AI technologies converge to create cutting-edge healthcare solutions through our “Leafy Hospital” demo—a showcase of possibilities in breast cancer diagnosis.

The healthcare data challenge

The healthcare industry faces unique challenges in managing and utilizing massive datasets. From mammograms and biopsy images to patient histories and medical literature, making sense of this data is often time-intensive and error-prone. Radiologists, for instance, must analyze vast amounts of information to deliver accurate diagnoses while ensuring sensitive patient data is handled securely. MongoDB Atlas addresses these challenges by providing a unified view of disparate data sources, offering scalability, flexibility, and advanced features like Atlas Search and Vector Search. When paired with Microsoft AI technologies, the potential to revolutionize healthcare workflows becomes limitless.

The Leafy Hospital solution: A unified ecosystem

Our example integrated solution, Leafy Hospital, showcases the transformative potential of MongoDB Atlas and Microsoft AI capabilities in healthcare. Focused on breast cancer diagnostics, this demo explores how the integration of MongoDB’s flexible data platform with Microsoft’s cutting-edge features—such as Azure OpenAI, Microsoft Fabric, and Power BI—can revolutionize patient care and streamline healthcare workflows.
The solution takes a three-pronged approach to improving breast cancer diagnosis and patient care:

- Predictive AI for early detection
- Generative AI for workflow automation
- Advanced BI and analytics for actionable insights

Figure 1. Leafy Hospital solution architecture

If you’re interested in discovering how this solution could be applied to your organization’s unique needs, we invite you to connect with your MongoDB account representative. We’d be delighted to provide a personalized demonstration of the Leafy Hospital solution and collaborate on tailoring it for your specific use case.

Key capabilities

Predictive AI for early detection

Accurate diagnosis is critical in breast cancer care. Traditional methods rely heavily on radiologists manually analyzing mammograms and biopsies, increasing the risk of errors. Predictive AI transforms this process by automating data analysis and improving accuracy.

BI-RADS prediction

BI-RADS (Breast Imaging-Reporting and Data System) is a standardized classification for mammogram findings, ranging from 0 (incomplete) to 6 (malignant). To predict BI-RADS scores, deep learning models like VGG16 and EfficientNetV2L are trained on mammogram image datasets. Fabric Data Science simplifies the training and experimentation process by enabling:

- Direct data uploads to OneLake for model training
- Easy comparison of multiple ML experiments and metrics
- Auto-logging of parameters with MLflow for lifecycle management

These models are trained over many epochs until reliable accuracy is achieved, giving radiologists dependable predictions.

Biopsy classification

For biopsy analysis, classification models such as the random forest classifier are trained on biopsy features like cell size, shape uniformity, and mitoses counts. Such models attain high accuracy when trained on scalar data, making them highly effective for classifying cancers as malignant or benign.
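As a purely illustrative stand-in for the biopsy classification step described above, the sketch below uses a nearest-centroid rule over the same kind of scalar features (cell size, shape uniformity, mitoses count). The centroid values are invented; a real pipeline would train a random forest on labeled data in Fabric Data Science rather than hard-code anything.

```python
# Toy classifier over scalar biopsy-style features. All numbers are made up
# for illustration; this is NOT the Leafy Hospital model.
import math

CENTROIDS = {
    "benign":    (2.0, 1.5, 1.0),   # hypothetical feature means
    "malignant": (7.5, 6.8, 4.2),
}

def classify(features):
    """Return the label whose centroid is closest in Euclidean distance."""
    def dist(center):
        return math.sqrt(sum((f - c) ** 2 for f, c in zip(features, center)))
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))
```

The same interface (feature vector in, label out) is what a trained random forest would expose; only the decision function differs.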
Data ingestion, training, and prediction cycles are managed using Fabric Data Science and the MongoDB Spark Connector, ensuring a seamless flow of metadata and results between Azure and MongoDB Atlas.

Generative AI for workflow automation

Radiologists often spend hours documenting findings, time that could be better spent analyzing cases. Generative AI streamlines this process by automating report generation and enabling intelligent chatbot interactions.

Vector search: The foundation of semantic understanding

At the heart of these innovations lies MongoDB Atlas Vector Search, which revolutionizes how medical data is stored, accessed, and analyzed. By leveraging Azure OpenAI’s embedding models, clinical notes and other unstructured data are transformed into vector embeddings—mathematical representations that capture the meaning of the text in a high-dimensional space.

Similarity search is a key use case, enabling radiologists to query the system with natural language prompts like “Show me cases where additional tests were recommended.” The system interprets the intent behind the question, retrieves relevant documents, and delivers precise, context-aware results. This ensures that radiologists can quickly access information without sifting through irrelevant data.

Beyond similarity search, vector search facilitates the development of retrieval-augmented generation (RAG) architectures, which combine semantic understanding with external contextual data. This architecture allows for the creation of advanced features like automated report generation and intelligent chatbots, which further streamline decision-making and enhance productivity.
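The similarity search described above maps to an Atlas Vector Search aggregation stage. The sketch below builds such a stage as a plain document; the index name (`notes_index`), field path (`embedding`), and the toy query vector are assumptions for illustration, and a real query vector would come from an Azure OpenAI embedding model.

```python
# Sketch of an Atlas Vector Search pipeline. "notes_index" and "embedding"
# are hypothetical names; $vectorSearch is the Atlas aggregation stage.
def semantic_search_stage(query_vector, k=5):
    return {
        "$vectorSearch": {
            "index": "notes_index",
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": k * 20,   # oversample candidates before ranking
            "limit": k,
        }
    }

pipeline = [
    semantic_search_stage([0.12, -0.08, 0.33]),   # toy 3-dim vector
    {"$project": {"clinical_note": 1,
                  "score": {"$meta": "vectorSearchScore"}}},
]
# With pymongo: db.clinical_notes.aggregate(pipeline)
```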
Automated report generation

Once a mammogram or biopsy is analyzed, Azure OpenAI’s large language models can be used to generate detailed clinical notes, including:

- Findings: Key observations from the analysis
- Conclusions: Diagnoses and suggested next steps
- Standardized codes: Using SNOMED terms for consistency

This automation enhances productivity by allowing radiologists to focus on verification rather than manual documentation.

Chatbots with retrieval-augmented generation

Chatbots are another way to support radiologists when they need quick access to historical patient data or medical research. Traditional methods can be inefficient, particularly when dealing with older records or specialized cases. Our retrieval-augmented generation-based chatbot, powered by Azure OpenAI, Semantic Kernel, and MongoDB Atlas, provides:

- Patient-specific insights: Querying MongoDB for 10 years of patient history, summarized and provided as context to the chatbot
- Medical literature searches: Using vector search to retrieve relevant documents from indexed journals and studies
- Secure responses: Ensuring all answers are grounded in validated patient data and research

The chatbot improves decision-making and enhances the user experience by delivering accurate, context-aware responses in real time.

Advanced BI and analytics for actionable insights

In healthcare, data is only as valuable as the insights it provides. MongoDB Atlas bridges real-time transactional analytics and long-term data analysis, empowering healthcare providers with tools for informed decision-making at every stage.

Transactional analytics

Transactional, or in-app, analytics deliver insights directly within applications. For example, MongoDB Atlas enables radiologists to instantly access historical BI-RADS scores and correlate them with new findings, streamlining the diagnostic process. This ensures decisions are based on accurate, real-time data.
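Returning to the retrieval-augmented chatbot described above, the final step is stitching the retrieved patient history and literature chunks into the prompt sent to the LLM. This is a minimal sketch with the retrieval and generation calls left out; in the demo, retrieval would hit MongoDB Atlas (standard queries plus vector search) and generation would go to Azure OpenAI. All strings are invented examples.

```python
# Minimal RAG prompt assembly. The grounding instruction keeps answers tied
# to retrieved context, matching the "secure responses" goal above.
def build_rag_prompt(question, history_summary, retrieved_chunks):
    context = "\n".join(f"- {c}" for c in retrieved_chunks)
    return (
        "You are a radiology assistant. Answer only from the context.\n"
        f"Patient history: {history_summary}\n"
        f"Relevant literature:\n{context}\n"
        f"Question: {question}"
    )

prompt = build_rag_prompt(
    "Were additional tests recommended?",
    "10-year summary: two prior screenings, BI-RADS 2 in 2021.",
    ["Guideline X recommends follow-up imaging for BI-RADS 3."],
)
```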
Advanced clinical decision support (CDS) systems benefit from integrating predictive analytics into workflows. For instance, biopsy results stored in MongoDB are enriched with machine learning predictions generated in Microsoft Fabric, helping radiologists make faster, more precise decisions.

Long-term analytics

While transactional analytics focus on operational efficiency, long-term analytics enable healthcare providers to step back and evaluate broader trends. MongoDB Atlas, integrated with Microsoft Power BI and Fabric, facilitates this critical analysis of historical data. For instance, patient cohort studies become more insightful when powered by a unified dataset that combines MongoDB Atlas’ operational data with historical trends stored in Microsoft OneLake.

Long-term analytics also shine in operational efficiency assessments. By integrating MongoDB Atlas data with Power BI, hospitals can create dashboards that track key performance indicators such as average time to diagnosis, wait times for imaging, and treatment start times. These insights help identify bottlenecks, streamline processes, and ultimately improve the patient experience. Furthermore, historical data stored in OneLake can be combined with MongoDB’s real-time data to train machine learning models, enhancing future predictive analytics.

OLTP vs. OLAP

This unified approach is exemplified by the distinction between OLTP and OLAP workloads. On the OLTP side, MongoDB Atlas handles real-time data processing, supporting immediate tasks like alerting radiologists to anomalies. On the OLAP side, data stored in Microsoft OneLake supports long-term analysis, enabling hospitals to identify trends, evaluate efficiency, and train advanced AI models. This dual capability allows healthcare providers to “run the business” through operational insights and “analyze the business” by uncovering long-term patterns.

Figure 2. Real-time analytics data pipeline

MongoDB’s Atlas SQL Connector plays a crucial role in bridging these two worlds. By converting MongoDB’s flexible document model into a relational format, it allows tools like Power BI to work seamlessly with MongoDB data.

Next steps

To learn more about how MongoDB can power healthcare solutions, visit our solutions page. Check out our Atlas Vector Search Quick Start guide to get started with MongoDB Atlas Vector Search today.

February 18, 2025
Artificial Intelligence

Supercharge AI Data Management With Knowledge Graphs

WhyHow.AI has built and open-sourced a platform using MongoDB, enhancing how organizations leverage knowledge graphs for data management and insights. Integrated with MongoDB, this solution offers a scalable foundation with features like vector search and aggregation to support organizations in their AI journey.

Knowledge graphs address the limitations of traditional retrieval-augmented generation (RAG) systems, which can struggle to capture intricate relationships and contextual nuances in enterprise data. By embedding rules and relationships into a graph structure, knowledge graphs enable accurate and deterministic retrieval processes. This functionality extends beyond information retrieval: knowledge graphs also serve as foundational elements for enterprise memory, helping organizations maintain structured datasets that support future model training and insights.

WhyHow.AI enhances this process by offering tools designed to combine large language model (LLM) workflows with Python- and JSON-native graph management. Using MongoDB’s robust capabilities, these tools help combine structured and unstructured data and search capabilities, enabling efficient querying and insights across diverse datasets. MongoDB’s modular architecture seamlessly integrates vector retrieval, full-text search, and graph structures, making it an ideal platform for RAG and unlocking the full potential of contextual data.

Check out our AI Learning Hub to learn more about building AI-powered apps with MongoDB.

Creating and storing knowledge graphs with WhyHow.AI and MongoDB

Creating effective knowledge graphs for RAG requires a structured approach that combines workflows from LLMs, developers, and nontechnical domain experts. Simply capturing all entities and relationships from text and relying on an LLM to organize the data can lead to a messy retrieval process that lacks utility.
Instead, WhyHow.AI advocates for a schema-constrained graph creation method, emphasizing the importance of developing a context-specific schema tailored to the user’s use case. This approach ensures that knowledge graphs focus on the specific relationships that matter most to the user’s workflow.

Once the knowledge graphs are created, the flexibility of MongoDB’s schema design ensures that users are not confined to rigid structures. This adaptability enables seamless expansion and evolution of knowledge graphs as data and use cases develop. Organizations can rapidly iterate during early application development without being restricted by predefined schemas. Where additional structure is required, MongoDB supports schema enforcement, offering a balance between flexibility and data integrity.

For instance, aligning external research with patient records is crucial to delivering personalized healthcare. Knowledge graphs bridge the gap between clinical trials, best practices, and individual patient histories. New clinical guidelines can be integrated with patient records to identify which patients would benefit most from updated treatments, ensuring that the latest practices are applied to individual care plans.

Optimizing knowledge graph storage and retrieval with MongoDB

Harnessing the full potential of knowledge graphs requires both effective creation tools and robust systems for storage and retrieval. Here’s how WhyHow.AI and MongoDB work together to optimize the management of knowledge graphs.

Storing data in MongoDB

WhyHow.AI relies on MongoDB’s document-oriented structure to organize knowledge graph data into modular, purpose-specific collections, enabling efficient and flexible queries. This approach is crucial for managing complex entity relationships and ensuring accurate provenance tracking.
To support this functionality, the WhyHow.AI Knowledge Graph Studio comprises several key components:

- Workspaces separate documents, schemas, graphs, and associated data by project or domain, maintaining clarity and focus.
- Chunks are raw text segments with embeddings for similarity searches, linked to triples and documents to provide evidence and provenance.
- The graph collection stores the knowledge graph along with metadata and schema associations, all organized by workspace for centralized data management.
- Schemas define the entities, relationships, and patterns within graphs, adapting dynamically to reflect new data and keep the graph relevant.
- Nodes represent entities like people, locations, or concepts, each with unique identifiers and properties, forming the graph’s foundation.
- Triples define subject-predicate-object relationships and store embedded vectors for similarity searches, enabling reliable retrieval of relevant facts.
- Queries log user queries, including triple results and metadata, providing an immutable history for analysis and optimization.

Figure 1. WhyHow.AI platform and knowledge graph illustration.

To enhance data interoperability, MongoDB’s aggregation framework enables efficient linking across collections. For instance, retrieving the chunks associated with a specific triple can be achieved through an aggregation pipeline, connecting workspaces, graphs, chunks, and document collections into a cohesive data flow.

Querying knowledge graphs

With this representation established, users can perform both structured and unstructured queries with the WhyHow.AI querying system. Structured queries enable the selection of specific entity types and relationships, while unstructured queries enable natural language questions to return related nodes, triples, and linked vector chunks. WhyHow.AI’s query engine embeds triples to enhance retrieval accuracy, bypassing traditional Text2Cypher methods.
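The cross-collection linking described above (pulling the evidence chunks for one triple) might look like the following aggregation pipeline sketch. The collection and field names (`chunks`, `chunk_ids`, `evidence`) are hypothetical, not WhyHow.AI's actual schema; `$match`, `$lookup`, and `$project` are standard MongoDB aggregation stages.

```python
# Sketch: for a given triple, join in its evidence chunks via $lookup.
def chunks_for_triple(triple_id):
    return [
        {"$match": {"_id": triple_id}},
        {"$lookup": {
            "from": "chunks",          # hypothetical chunk collection
            "localField": "chunk_ids", # triple -> chunk references
            "foreignField": "_id",
            "as": "evidence",
        }},
        {"$project": {"subject": 1, "predicate": 1, "object": 1,
                      "evidence.text": 1}},
    ]

pipeline = chunks_for_triple("triple-42")
# With pymongo: db.triples.aggregate(pipeline)
```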
By embedding triples and enabling users to retrieve them together with the chunks tied to them, WhyHow.AI combines the best of structured and unstructured data models and retrieval patterns. And with MongoDB’s built-in vector search, users can store and query vectorized text chunks alongside their graph and application data in a single, unified location.

Enabling scalability, portability, and aggregations

MongoDB’s horizontal scalability ensures that knowledge graphs can grow effortlessly alongside expanding datasets. Users can also easily use WhyHow.AI’s platform to create modular multi-agent and multi-graph workflows. They can deploy MongoDB Atlas on their preferred cloud provider or maintain control by running it in their own environments, gaining flexibility and reliability. As graph complexity increases, MongoDB’s aggregation framework facilitates diverse queries, extracting meaningful insights from multiple datasets with ease.

Providing familiarity and ease of use

MongoDB’s familiarity enables developers to apply their existing expertise without needing to learn new technologies or workflows. With WhyHow.AI and MongoDB, developers can build graphs with JSON data and Python-native APIs, which are well suited to LLM-driven workflows. The same database trusted for years in application development can now manage knowledge graphs, streamlining onboarding and accelerating development timelines.

Taking the next steps

WhyHow.AI’s knowledge graphs overcome the limitations of traditional RAG systems by structuring data into meaningful entities, relationships, and contexts. This enhances retrieval accuracy and decision-making in complex fields. Integrated with MongoDB, these capabilities are amplified through a flexible, scalable foundation featuring modular architecture, vector search, and powerful aggregation.
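The embedded-triple retrieval described above boils down to ranking triples by vector similarity to a query embedding. The toy sketch below uses hand-made three-dimensional vectors and cosine similarity; real embeddings would come from an embedding model and be searched with MongoDB's vector search rather than in Python.

```python
# Toy illustration of triple retrieval by embedding similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented triples with invented embeddings.
triples = {
    "(aspirin, treats, headache)": [0.9, 0.1, 0.0],
    "(paris, capital_of, france)": [0.0, 0.2, 0.95],
}

def top_triple(query_vec):
    """Return the triple whose embedding is most similar to the query."""
    return max(triples, key=lambda t: cosine(query_vec, triples[t]))
```

Because each retrieved triple carries references to its chunks, the matching evidence text can be returned alongside the fact itself.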
Together, WhyHow.AI and MongoDB help organizations unlock their data’s potential, driving insights and enabling innovative knowledge management solutions. No matter where you are in your AI journey, MongoDB can help! You can get started with your AI-powered apps by registering for MongoDB Atlas and exploring the tutorials available in our AI Learning Hub. Otherwise, head over to our quick-start guide to get started with MongoDB Atlas Vector Search today.

Want to learn more about why MongoDB is the best choice for supporting modern AI applications? Check out our on-demand webinar, “Comparing PostgreSQL vs. MongoDB: Which is Better for AI Workloads?” presented by MongoDB Field CTO Rick Houlihan. If your company is interested in being featured in a story like this, we’d love to hear from you. Reach out to us at ai_adopters@mongodb.com.

February 13, 2025
Artificial Intelligence

Reintroducing the Versioned MongoDB Atlas Administration API

Our MongoDB Atlas Administration API has gotten some work done in the last couple of years to become the best “Versioned” of itself. In this blog post, we’ll go over what’s changed and why migrating to the newest version can give you a seamless experience managing MongoDB Atlas.

What does the MongoDB Atlas Administration API do?

MongoDB Atlas, MongoDB’s managed developer data platform, contains a range of tools and capabilities that enable developers to build their applications’ data infrastructure with confidence. As application requirements and developer teams grow, MongoDB Atlas users might want to further automate database operation management to scale their application development cycles and enhance the developer experience. The entry point to managing MongoDB Atlas in a more programmatic fashion is the legacy MongoDB Atlas Administration API. This API enables developers to manage their use of MongoDB Atlas at a control plane level. The API and its various endpoints enable developers to interact with different MongoDB Atlas resources—such as clusters, database users, or backups—and let them perform operational tasks like creating, modifying, and deleting those resources. Additionally, the Atlas Administration API supports the MongoDB Atlas Go SDK, which empowers developers to seamlessly interact with the full range of MongoDB Atlas features and capabilities using the Go programming language.

Why should I migrate to the Versioned Atlas Administration API?

While it serves the same purpose as the legacy version, the new Versioned Atlas Administration API provides a significantly better overall experience in accessing MongoDB Atlas programmatically. Here’s what you can expect when you move over to the versioned API.

A better developer experience

The Versioned Atlas Administration API provides a predictable and consistent experience with API changes and gives better visibility into new features and changes via the Atlas Administration API changelog.
This means that breaking changes that can impact your code will only be introduced in a new resource version and will not affect the production code running the current, stable version. Also, every time a new resource version is added, you will be notified of the older version’s deprecation, giving you at least one year to upgrade before the previous resource version is removed. As an added benefit, the Versioned Atlas Administration API supports Service Accounts as a new way to authenticate to MongoDB Atlas using the industry-standard OAuth 2.0 protocol with the client credentials flow.

Minimal workflow disruptions

With resource-level versioning, the Versioned Atlas Administration API provides specific resource versions, which are represented by dates. When migrating from the legacy, unversioned MongoDB Atlas Administration API (/v1) to the new Versioned Atlas Administration API (/v2), the API will default to resource version 2023-02-01. To simplify the initial migration, this resource version applies uniformly to all API resources (e.g., /backup or /clusters). This helps ensure that migrations do not adversely affect current MongoDB Atlas Administration API–based workloads. In the future, each resource can adopt a new version independently (e.g., /cluster might update to 2026-01-01 while /backup remains on 2023-02-01). This flexibility ensures you only need to act when a resource you use is deprecated.

Improved context and visibility

Our updated documentation provides detailed guidance on the versioning process. All changes—including the release of new endpoints, the deprecation of resource versions, and nonbreaking updates to stable resources—are now tracked in a dedicated, automatically updated changelog. Additionally, the API specification offers enhanced visibility and context for all stable and deprecated resource versions, ensuring you can easily access documentation relevant to your specific use case.

Why should I migrate to the new Go SDK?
In addition to an updated API experience, we’ve introduced version 2 of the MongoDB Atlas Go SDK for the MongoDB Atlas Administration API. This version supports a range of capabilities that streamline your experience when using the Versioned Atlas Administration API:

- Full endpoint coverage: MongoDB Atlas Go SDK version 2 gives you access to all the features and capabilities that the versioned API offers today, with full endpoint coverage, so that you can use MongoDB Atlas programmatically in full.
- Flexibility: When interacting with the new versioned API through the new Go SDK, you can choose which version of the MongoDB Atlas Administration API you want to work with, giving you control over when breaking changes impact you.
- Ease of use: The new Go SDK simplifies getting started with the MongoDB Atlas Administration API. You’ll work with fewer lines of code and prebuilt functions, structs, and methods that encapsulate the complexity of HTTP requests, authentication, error handling, versioning, and other low-level details.
- Immediate access to updates: When using the new Go SDK, you can immediately access any newly released API capabilities. Every time a new version of MongoDB Atlas is released, the SDK will be quickly updated and continuously maintained, ensuring compatibility with any changes in the API and speeding up your development process.

How can I experience the enhanced version?

To get started with the Versioned Atlas Administration API, visit the migration guide, which outlines how to transition from the legacy version. To learn more about the MongoDB Atlas Administration API, visit our documentation page.
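The date-based resource versioning described above travels in the request's Accept header as a versioned media type. A minimal sketch, assuming the `application/vnd.atlas.<version>+json` media-type convention and using a placeholder path (authentication is omitted):

```python
# Sketch of pinning a resource version for a Versioned Atlas Administration
# API call. The path is a placeholder; swap in a real group ID and add
# authentication before use.
def versioned_request(path, resource_version="2023-02-01"):
    """Build the URL and headers for a /v2 call pinned to one
    resource version via the Accept header."""
    return {
        "url": f"https://cloud.mongodb.com/api/atlas/v2{path}",
        "headers": {
            "Accept": f"application/vnd.atlas.{resource_version}+json"
        },
    }

req = versioned_request("/groups/{groupId}/clusters")
```

Changing `resource_version` per call is what lets one resource move to a newer version date while others stay pinned, as described in the migration section above.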

February 12, 2025
Updates

Building Gen AI with MongoDB & AI Partners | January 2025

Even for those of us who work in technology, it can be hard to keep track of the awards companies give and receive throughout the year. For example, in the past few months MongoDB has announced both our own awards (such as the William Zola Award for Community Excellence) and awards the company has received—like the AWS Technology Partner of the Year NAMER and two awards from RepVue. And that’s just us! It can be a lot!

But as hard as they can be to follow, industry awards—and the recognition, thanks, and collaboration they represent—are important. They highlight the power and importance of working together and show how companies like MongoDB and partners are committed to building best-in-class solutions for customers.

So without further ado, I’m pleased to announce that MongoDB has been named Technology Partner of the Year in Confluent’s 2025 Global Partner Awards! As a member of the MongoDB AI Applications Program (MAAP) ecosystem, Confluent enables businesses to build a trusted, real-time data foundation for generative AI applications through seamless integration with MongoDB and Atlas Vector Search. Above all, this award is a testament to MongoDB and Confluent’s shared vision: to help enterprises unlock the full potential of real-time data and AI. Here’s to what’s next!

Welcoming new AI and tech partners

It’s been an action-packed start to the year: in January 2025, we welcomed six new AI and tech partners that offer product integrations with MongoDB. Read on to learn more about each great new partner!

Base64

Base64 is an all-in-one solution to bring AI into document-based workflows, enabling complex document processing, workflow automation, AI agents, and data intelligence. “MongoDB provides a fantastic platform for storing and querying all kinds of data, but getting unstructured information like documents into a structured format can be a real challenge. That’s where Base64 comes in.
We're the perfect onramp, using AI to quickly and accurately extract the key data from documents and feed it right into MongoDB,” said Chris Huff, CEO of Base64. “This partnership makes it easier than ever for businesses to unlock the value hidden in their documents and leverage the full power of MongoDB." Dataloop Dataloop is a platform that allows developers to build and orchestrate unstructured data pipelines and develop AI solutions faster. "We’re thrilled to join forces with MongoDB to empower companies in building multimodal AI agents,” said Nir Buschi, CBO and co-founder of Dataloop. “Our collaboration enables AI developers to combine Dataloop’s data-centric AI orchestration with MongoDB’s scalable database. Enterprises can seamlessly manage and process unstructured data, enabling smarter and faster deployment of AI agents. This partnership accelerates time to market and helps companies get real value to customers faster." Maxim AI Maxim AI is an end-to-end AI simulation and evaluation platform, helping teams ship their AI agents reliably and more than 5x faster. “We're excited to collaborate with MongoDB to empower developers in building reliable, scalable AI agents faster than ever,” said Vaibhavi Gangwar, CEO of Maxim AI. “By combining MongoDB’s robust vector database capabilities with Maxim’s comprehensive GenAI simulation, evaluation, and observability suite, this partnership enables teams to create high-performing retrieval-augmented generation (RAG) applications and deliver outstanding value to their customers.” Mirror Security Mirror Security offers a comprehensive AI security platform that provides advanced threat detection, security policy management, and continuous monitoring, ensuring compliance and protection for enterprises. “We're excited to partner with MongoDB to redefine security standards for enterprise AI deployment,” said Dr. Aditya Narayana, Chief Research Officer at Mirror Security. 
“By combining MongoDB's scalable infrastructure with Mirror Security's end-to-end vector encryption, we're making it simple for organizations to launch secure RAG pipelines and trusted AI agents. Our collaboration eliminates security-performance trade-offs, empowering enterprises in regulated industries to confidently accelerate their AI initiatives while maintaining the highest security standards.” Squid AI Squid AI is a full-featured platform for creating private AI agents in a faster, secure, and automated way. “As an AI agent platform that securely connects to MongoDB in minutes, we're looking forward to helping MongoDB customers reveal insights, take action on their data, and build enterprise AI agents,” said Leslie Lee, Head of Product at Squid AI. “By pairing Squid's semantic RAG and AI functions with MongoDB's exceptional performance, developers can build powerful AI agents that respond to new inputs in real time.” TrojAI TrojAI is an AI security platform that protects AI models and applications from new and evolving threats before they impact businesses. “TrojAI is thrilled to join forces with MongoDB to help companies secure their RAG-based AI apps built on MongoDB,” said Lee Weiner, CEO of TrojAI. “We know how important MongoDB is to helping enterprises adopt and harness AI. Our collaboration enables enterprises to add a layer of security to their database initialization and RAG workflows to help protect against the evolving GenAI threat landscape.” But wait, there’s more! In February, we’ve got two webinars coming up with MAAP partners that you don’t want to miss: Build a JavaScript AI Agent With MongoDB and LangGraph.js : Join MongoDB Staff Developer Advocate Jesse Hall and LangChain Founding Software Engineer Jacob Lee for an exclusive webinar that highlights the integration of LangGraph.js, LangChain’s cutting-edge JavaScript library, and MongoDB, live on Feb 25. 
Architecting the Future: RAG and AI Agents for Enterprise Transformation : Join MongoDB, LlamaIndex, and Together AI to explore how to strategically build a tech stack that supports the development of enterprise-grade RAG and AI agentic systems, explore technical foundations and practical applications, and learn how the MongoDB AI Applications Program (MAAP) will enable you to rapidly innovate with AI, available on demand. To learn more about building AI-powered apps with MongoDB, check out our AI Learning Hub and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.

February 11, 2025
Artificial Intelligence

MongoDB Empowers ISVs to Drive SaaS Innovation in India

Independent Software Vendors (ISVs) play a pivotal role in the Indian economy. Indeed, the Indian software market is expected to experience an annual growth rate of 10.40%, resulting in a market volume of $15.89bn by 2029. 1 By developing specialized software solutions and digital products that can be bought 'off the shelf', ISVs empower Indian organizations to innovate, improve efficiency, and remain competitive. Many established enterprises in India choose a 'buy' rather than 'build' strategy when it comes to creating modern software applications. This is particularly true when it comes to cutting-edge AI use cases. MongoDB works closely with Indian ISVs across industries, providing them with a multi-cloud data platform and highly flexible, scalable technologies to build operational and efficient software solutions. For example, Intellect AI , a business unit of Intellect Design Arena, has used MongoDB Atlas to drive a number of innovative use cases in the banking, financial services, and insurance industries. Intellect AI chose MongoDB for its flexibility coupled with its ability to meet complex enterprise requirements such as scale, resilience, and security compliance. And Ambee, a climate tech startup, is using MongoDB Atlas ’ flexible document model to support its AI and ML models. Here are three more examples of ISV customers who are enabling, powering, and growing their SaaS solutions with MongoDB Atlas. MongoDB enhancing Contentstack's content delivery capabilities Contentstack is a leading provider of composable digital experience solutions, and specializes in headless content management systems (CMS). Headless CMS is a backend-only web content management system that acts primarily as a content repository. “Our headless CMS allows our customers to bring all forms of content to the table, and we host the content for them,” said Suryanarayanan Ramamurthy, Head of Data Science at Contentstack, while speaking at MongoDB.local 2024 . 
A major challenge in the CMS industry is providing customers with content that remains factually correct, brand-aligned, and tailored to the customer’s identity. Contentstack created an innovative, AI-based product, Brand Kit, that does exactly that, built on MongoDB Atlas. “Our product Brand Kit, which launched in June 2024, overcomes factual incorrectness. The AI capabilities the platform offers help our customers create customized and context-specific content that meets their brand guidelines and needs,” said Ramamurthy. MongoDB Atlas Vector Search enables Contentstack to transform content and bring contextual relevance to retrievals. This helps reduce errors caused by large language model hallucinations, allowing the retrieval-augmented generation (RAG) application to deliver better results to users. AppViewX: unlocking scale for a growing cybersecurity SaaS pioneer AppViewX delivers a platform for organizations to manage a range of cybersecurity capabilities, such as certificate lifecycle management and public key infrastructure. The company ensures end-to-end security compliance and data integrity for large enterprises across industries like banking, healthcare, and automotive. Speaking at MongoDB.local Bengaluru in 2024, Karthik Kannan, Vice President of Product Management at AppViewX, explained how AppViewX transitioned from an on-premises product to a SaaS platform in 2021. MongoDB Atlas powered this transition. MongoDB Atlas's unique flexibility, scalability, and multi-cloud capabilities enabled AppViewX to easily manage fast-growing data sets, authentication, and encryption from its customers’ endpoints, device identities, workload identities, user identities, and more. Furthermore, MongoDB provides AppViewX with robust security, guaranteeing critical data protection and compliance. “We've been really able to grow fast and at scale across different regions, gaining market share,” said Kannan. 
“Our engineering team loves MongoDB,” added Kannan. “The support that we get from MongoDB allowed us to get into different regions, penetrate new markets to grow at scale, so this is a really important partnership that helped us get to where we are.” Zluri Streamlines SaaS Management with MongoDB Zluri provides a unified SaaS management platform that helps IT and security teams manage applications across the organization. The platform provides detailed insights into application usage, license optimization, security risks, and cost savings opportunities. Zluri processes massive volumes of unstructured data—around 9 petabytes per month—from over 800 native integrations with platforms like single sign-on, human resources management systems, and Google Workspace. One of its challenges was to automate discovery and data analysis across those platforms, as opposed to employing an exhaustive, time- and labour-intensive manual approach. MongoDB Atlas has allowed Zluri to ingest, normalize, process, and manage the high volume and complexity of data seamlessly across diverse sources. “We wanted to connect with every single system that's currently available, get all that data, process all that data so that the system works on autopilot mode, so that you're not manually adding all that information,” said Chaithaniya Yambari, Zluri’s Co-Founder and Chief Technology Officer, speaking at MongoDB.local Bengaluru in 2024 . As a fully managed database, MongoDB Atlas allows Zluri to eliminate maintenance overhead, so its team of engineers and developers can focus on innovation. Zluri also utilizes MongoDB Atlas Search to perform real-time queries, filtering, and ranking of metadata. This eliminates the challenges of synchronizing separate search solutions with the database, ensuring IT managers get fast, accurate, and up-to-date results. These are just a few examples of how MongoDB is working with ISVs to shape the future of India’s digital economy. 
As technology continues to evolve, the role of ISVs in fostering innovation and economic growth will become ever more integral. MongoDB is committed to providing ISVs with a robust, flexible, and scalable database that removes barriers to growth and the ability to innovate. Visit our product page to learn more about MongoDB Atlas. Learn more about MongoDB Atlas Search on our product details page. Check out our Quick Start Guide to get started with MongoDB Atlas Vector Search today.

February 11, 2025
Applied

Simplify Security At Scale with Resource Policies in MongoDB Atlas

Innovation is the gift that keeps on giving: industries that are more innovative have higher returns, and more innovative industries see higher rates of long-term growth. 1 No wonder organizations everywhere strive to innovate. But in the pursuit of innovation, organizations can struggle to balance the need for speed and agility with critical security and compliance requirements. Specifically, software developers need the freedom to rapidly provision resources and build applications. But manual approval processes, inconsistent configurations, and security errors can slow progress and create unnecessary risks. “Friction that slows down employees and leads to insecure behavior is a significant driver of insider risk,” says Paul Furtado, Vice President, Analyst at Gartner. Enter resource policies , which are now available in public preview in MongoDB Atlas. This new feature balances rapid innovation with robust security and compliance. Resource policies allow organizations to enable developers with self-service access to Atlas resources while maintaining security through automated, organization-wide ‘guardrails’. What are resource policies? Resource policies help organizations enforce security and compliance standards across their entire Atlas environment. These policies act as guardrails by creating organization-wide rules that control how Atlas can be configured. Instead of targeting specific user groups, resource policies apply to all users in an organization, and focus on governing a particular resource. Consider this example: An organization subject to General Data Protection Regulation (GDPR) 2 requirements needs to ensure that all of their Atlas clusters run only on approved cloud providers in regions that comply with data residency and privacy regulations. Without resource policies, developers may inadvertently deploy clusters on any cloud provider. 
This risks non-compliance and potential fines of up to 20 million euros or 4% of global annual turnover under Article 83 of the GDPR. But, by using resource policies, the organization can mandate which cloud providers are permitted, ensuring that data resides only in approved environments. The policy is automatically applied to every project in the organization, preventing the creation of clusters on unauthorized cloud platforms. Thus compliance with GDPR is maintained. The following resource policies are now in public preview: Restrict cloud provider: Limit Atlas clusters to approved cloud providers (AWS, Azure, Google Cloud). Restrict cloud region: Restrict cluster deployments in approved cloud providers to specific regions. Block wildcard IP: Reduce security risk by disabling the use of the 0.0.0.0/0 (or “wildcard”) IP address for cluster access. How resource policies enable secure self-service Atlas access Resource policies address the challenges organizations face when trying to balance developer agility with robust security and compliance. Without standardized controls, there is a risk that developers will configure Atlas clusters to deviate from corporate or external requirements. This invites security vulnerabilities and compliance gaps. Manual approval and provisioning processes for every new project create delays. Concurrently, platform teams struggle to enforce consistent standards across an organization, increasing operational complexity and costs. With resource policies, security and compliance standards are automatically enforced across all Atlas projects. This eliminates manual approvals and reduces the risk of misconfigurations. Organizations can deliver self-service access to Atlas resources for their developers. This allows them to focus on building applications instead of navigating complex internal review and compliance processes. Meanwhile, platform teams can manage policies centrally. 
This ensures consistent configurations across the organization and frees time for strategic initiatives. The result is a robust security posture, accelerated innovation, and greater efficiency. Automated guardrails prevent unauthorized configurations. Concurrently, centralized policy management streamlines operations and ensures alignment with corporate and external standards. Resource policies enable organizations to scale securely and innovate without compromise. This empowers developers to move quickly while simplifying governance. iA Financial Group, one of Canada’s largest insurance and wealth management firms, uses resource policies to ensure consistency and compliance in MongoDB Atlas. “Resource Policies have allowed us to proactively supervise Atlas’s usage by our IT delivery teams,” said Geoffrey Céré, Solution Architecture Advisor at iA Financial Group. “This has been helpful in preventing non-compliant configurations with the company’s regulatory framework. Additionally, it saves our IT delivery teams time by avoiding unauthorized deployments and helps us demonstrate to internal audits that our configurations on the MongoDB Atlas platform adhere to the regulatory framework.” Creating resource policies Atlas resource policies are defined using the open-source Cedar policy language , which combines expressiveness with simplicity. Cedar’s concise syntax makes writing and understanding policies easy, streamlining policy creation and management. Resource policies can be created and managed programmatically through infrastructure-as-code tools like Terraform or CloudFormation, or by integrating directly using the Atlas Admin API. To explore what constructing a resource policy looks like in practice, let’s return to our earlier example. This is an organization subject to GDPR requirements that wants to ensure all of their Atlas clusters run on approved cloud providers only. 
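For teams integrating directly, a minimal Python sketch of what creating a policy through the Atlas Admin API could look like is shown below. The endpoint path, payload shape, and the Cedar action and entity names here are assumptions made for illustration, not the published schema; consult the Admin API reference before use.

```python
import json

ATLAS_API = "https://cloud.mongodb.com/api/atlas/v2"  # Admin API base URL

# Hypothetical Cedar body forbidding cluster creation or edits on GCP.
# The action and entity names are assumptions, not the published schema.
FORBID_GCP = """
forbid (
    principal,
    action == ResourcePolicy::Action::"cluster.modify",
    resource
) when {
    context.cluster.cloudProviders.containsAny([ResourcePolicy::CloudProvider::"gcp"])
};
"""

def build_policy_payload(name: str, cedar_body: str) -> dict:
    """Shape the request body for creating an organization resource policy."""
    return {"name": name, "policies": [{"body": cedar_body}]}

payload = build_policy_payload("Policy Preventing GCP Clusters", FORBID_GCP)
print(json.dumps(payload, indent=2))
# A real call would POST this payload to {ATLAS_API}/orgs/{orgId}/resourcePolicies
# using API keys that carry the Organization Owner role.
```

The same payload could equally be fed to a Terraform or CloudFormation resource rather than a raw API call; only the Cedar body itself would carry over unchanged.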
To prevent users from creating clusters on Google Cloud (GCP), the organization could write the following policy named “ Policy Preventing GCP Clusters .” This policy forbids creating or editing a cluster when the cloud provider is Google Cloud. The body defines the behavior of the policy in the human and machine-readable Cedar language. If required, ‘ gcp ’ could be replaced with ‘ aws ’. Figure 1. Example resource policy preventing the creation of Atlas clusters on GCP. Alternatively, the policy could allow users to create clusters only on Google Cloud with the following policy named “Policy Allowing Only GCP Clusters”. This policy uses the Cedar clause “unless” to restrict creating or editing a cluster unless it is on GCP. Figure 2. Example resource policy that restricts cluster creation to GCP only. Policies can also have compound elements. For example, an organization can create a project-specific policy that only enforces the creation of clusters in GCP for the Project with ID 6217f7fff7957854e2d09179 . Figure 3. Example resource policy that restricts cluster creation to GCP only for a specific project. And, as shown in Figure 4, another policy might restrict cluster deployments on GCP as well as on two unapproved AWS regions: US-EAST-1 and US-WEST-1. Figure 4. Example resource policy restricting cluster deployments on GCP as well as AWS regions US-EAST-1 and US-WEST-1. Getting started with resource policies Resource policies are available now in MongoDB Atlas in public preview. Get started creating and managing resource policies programmatically using infrastructure-as-code tools like Terraform or CloudFormation. Alternatively, integrate directly with the Atlas Admin API. Support for managing resource policies in the Atlas user interface is expected by mid-2025. Use the resources below to learn more about resource policies. 
Feature documentation Postman Collection Atlas Administration API documentation Terraform Provider documentation AWS CDK AWS CloudFormation documentation 1 McKinsey & Company, August 2024 2 gdpr.eu

February 10, 2025
Updates

Dynamic Workloads, Predictable Costs: The MongoDB Atlas Flex Tier

MongoDB is excited to announce the launch of the Atlas Flex tier . This new offering is designed to help developers and teams navigate the complexities of variable workloads while growing their apps. Modern development environments demand database solutions that can dynamically scale without surprise costs, and the Atlas Flex tier is an ideal option offering elasticity and predictable pricing. Previously, developers could either pick the predictable pricing of a shared tier cluster or the elasticity of a serverless instance. Atlas Flex tier combines the best features of the Shared and Serverless tiers and replaces them, providing an easier choice for developers. This enables teams to focus on innovation rather than database management. This new tier underscores MongoDB’s commitment to empowering developers through an intuitive and customer-friendly platform. It simplifies cluster provisioning on MongoDB Atlas , providing a unified, simple path from idea to production. With the ever-increasing complexity of application development, it’s imperative that a database evolve alongside the project it supports. Whether prototyping a new app or managing dynamic production environments, MongoDB Atlas provides comprehensive support. And, by seamlessly combining scalability and affordability, the Atlas Flex tier reduces friction as requirements expand. Bridging the gap between flexibility and predictability: What the Atlas Flex tier offers developers Database solutions that can adapt to fluctuating workloads without incurring unexpected costs are becoming a must-have for every organization. While traditional serverless models offer flexibility, they can result in unpredictable expenses due to unoptimized queries or unanticipated traffic surges . The Atlas Flex tier bridges this gap and empowers developers with: Flexibility: 100 ops/sec and 5 GB of storage are included by default, as is dynamic scaling of up to 500 ops/sec. 
Predictable pricing: Customers will be billed an $8 base fee and additional fees based on usage. And pricing is capped at $30 per month. This prevents runaway costs—a persistent challenge with serverless architectures. Data services: Customers can access various features such as MongoDB Atlas Search , MongoDB Atlas Vector Search , Change Streams , MongoDB Atlas Triggers , and more. This delivers a comprehensive solution for development and test environments. Seamless migration: Atlas Flex tier customers can transition to dedicated clusters when needed via the MongoDB Atlas UI or using the Admin API. The Atlas Flex tier marks a significant step forward in streamlining database management and enhancing its adaptability to the needs of modern software development. The Atlas Flex tier provides unmatched flexibility and reliability for managing high-variance traffic and testing new features. Building a unified on-ramp: From exploration to production MongoDB Atlas enables a seamless progression for developers at every stage of application development. With three distinct tiers—Free, Flex, and Dedicated—MongoDB Atlas encourages developers to explore, build, and scale their applications: Atlas Free tier: Perfect for experimenting with MongoDB and building small applications at no initial cost, this tier remains free forever. Atlas Flex tier: Bridging the gap between exploration and production, this tier offers scalable, cost-predictable solutions for growing workloads. Atlas Dedicated tier: Designed for high-performance, production-ready applications with built-in automated performance optimization, this tier lets you scale applications confidently with MongoDB Atlas’s robust observability, security, and management capabilities. Figure 1.   An overview of the Free, Flex, and Dedicated tiers This tiered approach gives developers a unified platform for their entire journey. It ensures smooth transitions as projects evolve from prototypes to enterprise-grade applications. 
At MongoDB, our focus has always been on removing obstacles for innovators, and this simple scaling path empowers developers to focus on innovation rather than navigating infrastructure challenges. Supporting startups with unpredictable traffic When startups launch applications with uncertain user adoption rates, they often face scalability and cost challenges. But the Atlas Flex tier addresses these issues! For example, startups can begin building apps with minimal upfront costs. The Atlas Flex tier enables them to scale effortlessly to accommodate traffic spikes, with support for up to 500 operations per second whenever required. And as user activity stabilizes and grows, migrating to dedicated clusters is a breeze. MongoDB Atlas removes the stress of managing infrastructure. It enables startups to focus on building exceptional user experiences and achieving product-market fit. Accelerating MVPs for gen AI applications The Atlas Flex tier is particularly suitable for minimum viable products in generative AI applications. Indeed, those incorporating vector search capabilities are perfect use cases. For example, imagine a small research team specializing in AI. It has developed a prototype that employs MongoDB Atlas Vector Search for the management of embeddings in the domain of natural language processing. The initial workloads remain under 100 ops/sec. As such, the overhead costs $8 per month. As the model is subjected to comprehensive testing and as demand for queries increases, the application can be seamlessly scaled while performance is uninterrupted. Given the top-end cap of $30 per month, developers can refine the application without concerns for infrastructure scalability or unforeseen expenses. The table below shows how monthly Atlas Flex tier pricing breaks down by capacity. Understanding the costs: The Atlas Flex tier’s pricing breakdown. The monthly fee for each level of usage is prorated and billed on an hourly basis. 
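To make the proration concrete, here is a small Python sketch of how prorated, capped billing could be computed. Only the $8 base fee and the $30 monthly cap are published figures; the mid-level rate and the simplified 30-day proration basis below are assumptions chosen for illustration, so real invoices will differ slightly.

```python
# Hypothetical monthly rates per usage level. Only the $8 base fee and the
# $30 cap are published; the 250 ops/sec rate is an assumed interpolation.
MONTHLY_RATE = {100: 8.00, 250: 20.00, 500: 30.00}
DAYS_PER_MONTH = 30  # simplified proration basis for this sketch

def prorated_cost(usage: list[tuple[int, int]]) -> float:
    """usage: list of (ops_per_sec_level, days_spent_at_that_level)."""
    total = sum(MONTHLY_RATE[level] * days / DAYS_PER_MONTH
                for level, days in usage)
    return round(total, 2)

# A month split across levels: 100 ops/sec for 20 days,
# then 250 ops/sec for 5 days, then 500 ops/sec for 5 days.
print(prorated_cost([(100, 20), (250, 5), (500, 5)]))  # ≈ 13.67
```

Because the top level is capped, the worst case for a cluster that runs a full month is simply the $30 ceiling, regardless of traffic.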
All clusters on MongoDB Atlas, including Atlas Flex tier clusters, are pay-as-you-go. Clusters are only charged for as long as they remain active. For example, a workload that requires 100 ops/sec for 20 days, 250 ops/sec for 5 days, and 500 ops/sec for 5 days would cost approximately $13.67. If the cluster was deleted after the first 20 days of usage, the cost would be approximately $5.28. This straightforward and transparent pricing model ensures developers can plan budgets with confidence while accessing world-class database capabilities. Get started today The Atlas Flex tier revolutionizes database management. It caters to projects at all stages—from prototypes to production. Additionally, it delivers cost stability, enhanced scalability, and access to MongoDB’s robust developer tools in a single seamless solution. With Atlas Flex tier, developers gain the freedom to innovate without constraints, confident that their database can handle any demand their applications generate. Whether testing groundbreaking ideas or scaling for a product launch, this tier provides comprehensive support. Learn more or get started with Atlas Flex tier today to elevate application development to the next level.

February 6, 2025
Updates

Automate Network Management Using Gen AI Ops with MongoDB

Imagine that it’s a typical Tuesday afternoon and that you’re the operations manager for a major North American telecommunications company. Suddenly, your Network Operations Center (NOC) receives an alert that web traffic in Toronto has surged by hundreds of percentage points over the last hour—far above its usual baseline. At nearly the same moment, a major Toronto-based client complains that their video streams have been buffering nonstop. Just a few years ago, a scenario like this would trigger a frantic scramble: teams digging into logs, manually writing queries, and attempting to correlate thousands of lines of data in different formats to find a single root cause. Today, there’s a more streamlined, AI-driven approach. By combining MongoDB’s developer data platform with large language models (LLMs) and a retrieval-augmented generation (RAG) architecture, you can move from reactive “firefighting” to proactive, data-informed diagnostics. Instead of juggling multiple monitoring dashboards or writing complicated queries by hand, you can simply ask for insights—and the system retrieves and analyzes the necessary data automatically. Facing the unexpected traffic spike Now let’s imagine the same situation, but this time with AI-assisted network management. Shortly after you spot a traffic surge in Toronto, your NOC chatbot pings you with a situation report: requests from one neighborhood are skyrocketing, and an unusually high percentage involve video streaming paths or caching servers. Under the hood, MongoDB automatically ingests every log entry and telemetry event in real time—capturing IP addresses, geographic data, request paths, timestamps, router logs, and sensor data. Meanwhile, textual content (such as error messages, user complaints, and chat transcripts) is vectorized and stored in MongoDB for semantic search. 
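A hedged sketch of that ingestion step is below; the collection and field names and the stand-in embedding function are invented for illustration (a real pipeline would call an embedding model and insert the document with a driver such as PyMongo).

```python
from datetime import datetime, timezone

def to_log_document(raw: dict, embed) -> dict:
    """Shape a raw telemetry/log record into a document that keeps the
    structured fields and adds an embedding of the free-text message."""
    return {
        "ts": raw.get("ts", datetime.now(timezone.utc)),
        "city": raw["city"],
        "path": raw.get("path"),
        "message": raw["message"],
        # Vector used later by Atlas Vector Search for semantic retrieval.
        "message_embedding": embed(raw["message"]),
    }

# Toy embedding stand-in so the sketch runs without a model dependency.
fake_embed = lambda text: [float(len(text) % 7), 0.5, 0.25]

doc = to_log_document(
    {"city": "Toronto", "path": "/video/stream", "message": "buffering on playback"},
    fake_embed,
)
print(doc["city"], len(doc["message_embedding"]))
# With PyMongo: db.network_logs.insert_one(doc)
```

Keeping the structured fields and the embedding in one document is what lets a single collection serve both exact filters (city, timestamp) and semantic lookups later.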
This setup enables near-instant access to relevant information whenever a keyword like “buffering,” “video streams,” or “streaming lag” is mentioned, ensuring a fast, end-to-end diagnosis. Refer to this article to learn more about semantic search. Zeroing in on the root cause Instead of rummaging through separate logging tools, you pose a simple natural-language question to the system: “What might be causing the client’s video stream buffering problem in Toronto?” The LLM responds by generating a custom MongoDB Aggregation Pipeline —written in Python code—tailored to your query. It might look something like this: a $match stage to filter for the last twenty-four hours of data in Toronto, a $group stage to roll up metrics by streaming services, and a $sort stage to find the largest error counts. The code is automatically served back to you, and with a quick confirmation, you execute it on your MongoDB cluster. A moment later, the chatbot returns with a summarized explanation that points to an overloaded local CDN node, along with higher-than-expected requests from older routers known to misbehave under peak load. Next, you ask the system to explain the core issue in simpler terms so you can share it with a business stakeholder. The LLM takes the numeric results from the Aggregation Pipeline, merges them with textual logs that mention “firmware out-of-date,” and then outputs a cohesive explanation. It even suggests that many of these older routers are still running last year’s firmware release—a known contributor to buffering issues on video streams during traffic spikes. How retrieval-augmented generation (RAG) helps The power behind this effortless insight is a RAG architecture, which marries semantic search with generative text responses. First, the LLM uses vector search in MongoDB to retrieve only those log entries, complaint records, and knowledge base articles that directly relate to streaming. 
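That retrieval-then-roll-up flow can be sketched as a single aggregation pipeline. The index name, vector field, and service field below are assumptions for illustration, and the query vector would come from embedding the user's question.

```python
def build_retrieval_pipeline(query_vector, city="Toronto", limit=5):
    """Pipeline sketch: semantic retrieval of related log entries with
    $vectorSearch, then a roll-up by service to surface the noisiest offenders."""
    return [
        {
            "$vectorSearch": {
                "index": "logs_vector_index",   # assumed Atlas Vector Search index
                "path": "message_embedding",    # assumed embedding field
                "queryVector": query_vector,
                "numCandidates": 200,
                "limit": 100,
                "filter": {"city": city},
            }
        },
        {"$group": {"_id": "$service", "errorCount": {"$sum": 1}}},
        {"$sort": {"errorCount": -1}},
        {"$limit": limit},
    ]

pipeline = build_retrieval_pipeline([0.1] * 1536)
print([next(iter(stage)) for stage in pipeline])
# With PyMongo: results = db.network_logs.aggregate(pipeline)
```

The numeric roll-up goes back to the LLM alongside the retrieved text chunks, which is what lets the final answer merge counts with explanations.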
Once it has these key data chunks, the LLM can generate—and continually refine—its analysis. Figure 1. Network chatbot architecture with MongoDB. When the system references historical data to confirm that “similar spikes occurred during the playoffs last year” or that “users with older firmware frequently complain about buffering,” it’s not blindly guessing. Instead, it’s accessing domain-specific logs, user feedback, and diagnostic documents stored in MongoDB, and then weaving them together into a coherent explanation. This eliminates guesswork and slashes the time your team would otherwise spend on low-level data cleanup, correlation, and interpretation. Executing automated remediation Armed with these insights, your team can roll out a targeted fix, possibly involving an auto-update to the affected routers or load-balancing traffic to alternative CDN endpoints. MongoDB’s Change Streams can monitor for future anomalies. If a traffic spike starts to look suspiciously similar to the scenario you just solved, the system can raise a proactive alert or even initiate the fix automatically. Refer to the official documentation to learn more about change streams. Meanwhile, the cost savings add up. You no longer need engineers manually piecing data together, nor do you endure prolonged user dissatisfaction while you try to figure out what’s happening. Everything from anomaly detection to root-cause analysis and recommended mitigation steps is fed through a single pipeline—visible and explainable in plain language. A future of AI-driven operations This scenario highlights how (gen) AI Ops and MongoDB complement each other to transform network management: Schema flexibility: MongoDB’s document-based model effortlessly stores logs, performance metrics, and user feedback in a single, consistent environment. Real-time performance: With horizontal scaling, you can ingest the massive volumes of data generated by network logs and user requests at any hour of the day. 
Vector search integration: By embedding textual data (such as logs, user complaints, or FAQs) and storing those vectors in MongoDB, you enable instant retrieval of semantically relevant content—making it easy for an LLM to find exactly what it needs. Aggregation + LLM: An LLM can auto-generate MongoDB Aggregation Pipelines to sift through numeric data with ease, while a second pass to the LLM composes a final summary that merges both numeric and textual analysis. Once you see how much time and effort this end-to-end workflow saves, you can extend it across the entire organization. Whether it’s analyzing sudden traffic spikes in specific geographies, diagnosing a security event, or handling peak online shopping loads during a holiday sale, the concept remains the same: empower people to ask natural-language questions about complex data, rely on AI to craft the specialized queries behind the scenes, and store it all in a platform that can handle unbounded complexity. Ready to embrace gen AI ops with MongoDB? Network disruptions will never fully disappear, but how quickly and intelligently you respond can be a game-changer. By uniting MongoDB with LLM-based AI and a retrieval-augmented generation (RAG) strategy, you transform your network operations from a tangle of logs and dashboards into a conversational, automated, and deeply informed system. Sign up for MongoDB Atlas to start building your own RAG-based workflows. With intelligent vector search, automated pipeline generation, and natural-language insight, you’ll be ready to tackle everything from video-stream buffering complaints to the next unexpected traffic surge—before users realize there’s a problem. If you would like to learn more about how to build gen AI applications with MongoDB, visit the following resources: Learn more about MongoDB capabilities for artificial intelligence on our product page. Get started with MongoDB Vector Search by visiting our product page. 
Blog: Leveraging an Operational Data Layer for Telco Success

Want to learn more about why MongoDB is the best choice for supporting modern AI applications? Check out our on-demand webinar, “Comparing PostgreSQL vs. MongoDB: Which is Better for AI Workloads?” presented by MongoDB Field CTO Rick Houlihan.
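As a recap of the workflow described above, the change-stream monitoring and vector retrieval steps can be sketched in Python with PyMongo. This is an illustrative sketch, not a reference implementation: the database, collection, field, and index names (netops, network_metrics, error_rate, log_embeddings_idx) are invented for the example, and a connection is only opened inside watch_for_anomalies.

```python
def anomaly_pipeline(threshold=0.05):
    """Change-stream pipeline: surface only newly inserted metric
    documents whose error rate exceeds the threshold."""
    return [
        {"$match": {
            "operationType": "insert",
            "fullDocument.error_rate": {"$gt": threshold},
        }}
    ]

def vector_search_pipeline(query_vector, limit=5):
    """Aggregation pipeline using Atlas Vector Search to fetch the
    log/feedback documents semantically closest to a query embedding."""
    return [
        {"$vectorSearch": {
            "index": "log_embeddings_idx",   # hypothetical index name
            "path": "embedding",             # hypothetical vector field
            "queryVector": query_vector,
            "numCandidates": limit * 20,     # oversample for better recall
            "limit": limit,
        }},
        {"$project": {"text": 1, "source": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]

def watch_for_anomalies(uri="mongodb://localhost:27017"):
    """Block on a change stream and react to each matching event
    (placeholder remediation logic). Requires a running replica set."""
    from pymongo import MongoClient  # lazy import: needs pymongo installed
    coll = MongoClient(uri)["netops"]["network_metrics"]
    with coll.watch(anomaly_pipeline()) as stream:
        for event in stream:
            doc = event["fullDocument"]
            print(f"Anomaly on {doc.get('router_id')}: "
                  f"error rate {doc['error_rate']:.2%}")
            # e.g., trigger the auto-remediation runbook here
```

The pipeline builders are plain dictionaries, so an LLM-driven layer can generate or adjust them before they are handed to `aggregate()` or `watch()`.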

February 5, 2025
Artificial Intelligence

Official Django MongoDB Backend Now Available in Public Preview

We are pleased to announce that the Official Django MongoDB Backend is now available in public preview. This Python package makes it easier than ever to combine the sensible defaults and fast development speed Django provides with the convenience and ease of MongoDB.

Building for the Python community

For years, Django has been consistently rated one of the most popular web frameworks in the Python ecosystem. It’s a powerful tool for building web applications quickly and securely, and it implements best practices by default while abstracting away complexity. Over the last few years, Django developers have increasingly used MongoDB, presenting an opportunity for an official MongoDB-built Python package to make integrating both technologies as painless as possible. We recognize that success in this endeavor requires more than just technical expertise in database systems—it demands a deep understanding of Django's ecosystem, conventions, and the needs of its developer community. So we’re committed to ensuring that the Official Django MongoDB Backend not only meets the technical requirements of developers, but also feels painless and intuitive, and is a natural complement to the base Django framework.

What’s in the Official Django MongoDB Backend

In this public preview release, the Official Django MongoDB Backend offers developers the following capabilities:

The ability to use Django models with confidence. Developers can use Django models to represent MongoDB documents, with support for Django forms, validations, and authentication.

Django admin support. The package allows users to fire up the Django admin page as they normally would, with full support for migrations and database schema history.

Native connection configuration in settings.py. Just as with any other database provider, developers can customize the database engine in settings.py to get MongoDB up and running.

MongoDB-specific querying optimizations.
Field lookups have been replaced with aggregation calls (aggregation stages and aggregation operators), JOIN operations are represented through $lookup, and it’s possible to build indexes right from Python.

Limited advanced functionality. While still in development, the package already has support for time series, projections, and XOR operations.

Aggregation pipeline support. Raw querying allows aggregation pipeline operators. Since aggregation is a superset of what traditional MongoDB Query API methods provide, this gives developers more functionality.

And this is just the start—more functionality (including BSON data type support and embedded document support in arrays) is on its way. Stay tuned for the general availability release later in 2025!

Benefits of using the Official Django MongoDB Backend

While during the public preview MongoDB requires more setup work in the initial stages of development than Django’s defaults, the payoff that comes from the flexibility of the document model and the full feature set of Atlas makes that tradeoff worth it over the whole lifecycle of a project. With the Official Django MongoDB Backend, developers can architect applications in a distinct and novel way, denormalizing their data and creating Django models so that data that is accessed together is stored together. These models are easier to maintain, and their retrieval is more performant for a number of use cases. Paired with the robust, native Django experience MongoDB is creating, this is a compelling offering that improves the developer experience and accelerates software development.

At its core, the MongoDB document model aligns well with Django's mission to “encourage rapid development and clean, pragmatic design.” The document model naturally mirrors how developers think about and structure their data in code, allowing for a seamless context switch between a Django model and a MongoDB document.
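As a sketch of what this looks like in practice, a minimal settings.py entry and model might resemble the following. This is an illustrative configuration fragment, not a reference setup: the host, database name, and model fields are invented placeholders, the snippet assumes Django and the package are installed, and a real project should follow the quickstart guide.

```python
# settings.py - point Django's default database at MongoDB
# (host and database name are placeholders)
DATABASES = {
    "default": {
        "ENGINE": "django_mongodb_backend",
        "HOST": "mongodb+srv://cluster0.example.mongodb.net",
        "NAME": "example_db",
    },
}

# models.py - an illustrative model; each instance maps to one document
from django.db import models

class Review(models.Model):
    product = models.CharField(max_length=100)
    rating = models.IntegerField()
    created_at = models.DateTimeField(auto_now_add=True)
```

From here, ordinary ORM calls such as `Review.objects.filter(rating__gte=4)` are translated by the backend into MongoDB queries against the corresponding collection.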
For many modern applications—especially those dealing with hierarchical, semi-structured, or rapidly evolving data structures—the document model provides a more natural and flexible solution than traditional relational databases. Dovetailing with this advantage is the fact that it’s simpler than ever to develop locally with MongoDB, thanks to how painless it is to create a local Atlas deployment with Docker. With sensible preconfigured defaults, it’s possible to create a single-node replica set simply by pulling the Docker image and running it, using only an Atlas connection string, with no extra steps needed. The best part? It’s even possible to convert an existing Atlas implementation running in Docker Compose to a local image. Developing with Django and MongoDB just works with the Atlas CLI and Docker.

How to get started with the Official Django MongoDB Backend

To get started, it’s as easy as running pip install django-mongodb-backend . MongoDB has even created an easy-to-use starter template that works with the django-admin command startproject , making it a snap to see what typical MongoDB migrations look like in Django. For more information, check out our quickstart guide . Interested in giving the package a try for yourself? Please try our quickstart guide and consult our comprehensive documentation . To see the raw code behind the package and follow along with development, check out the repository . For an in-depth look into some of the thinking behind major package architecture decisions, please read this blog post by Jib Adegunloye. Questions? Feedback? Please post on our community forums or through UserVoice . We value your input as we continue to work to build a compelling offering for the Django community.
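For reference, the local-development flow mentioned above can be sketched with the Atlas CLI and Docker. These commands are an illustrative setup fragment (the deployment name and port are arbitrary); check the Atlas CLI documentation for the exact flags in your version.

```shell
# Create a local, single-node Atlas deployment with the Atlas CLI
# (deployment name and port are arbitrary)
atlas deployments setup local-django --type local --port 27017 --force

# Or run the local Atlas image directly with Docker
docker run -d -p 27017:27017 --name atlas-local mongodb/mongodb-atlas-local:latest

# Then point settings.py (or mongosh) at the deployment:
#   mongodb://localhost:27017/?directConnection=true
```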

February 3, 2025
Updates

Improving MongoDB Queries by Simplifying Boolean Expressions

Key takeaways

Document databases encourage storing data in fewer collections with many fields, in contrast to relational databases’ normalization principle. This approach improves efficiency but requires careful handling of complex filters.

Simplifying Boolean expressions can improve query performance by reducing computational overhead and enabling better plan generation.

The solution uses a modified Quine–McCluskey algorithm and Petrick’s method on an efficient bitset representation of Boolean expressions.

MongoDB's culture supports innovation by empowering engineers to tackle problems and solve them from beginning to end.

Introduction

Document databases like MongoDB encourage a strategy that involves storing data in fewer collections with a large number of fields. This is in contrast to the normalization principle of relational databases, which recommends spreading data across numerous tables with a smaller number of fields. Denormalization and avoiding complex joins is a source of efficiency for document databases. However, as filters become complex, you must take care to handle them properly. Poorly handled complex filters can negatively affect database performance, resulting in slower query responses and higher resource usage. Addressing complex filters therefore becomes a critical task. One optimization technique to mitigate the performance issues with complex filters is Boolean expression simplification. This involves reducing complex Boolean expressions into simpler forms, which the query engine can evaluate more efficiently. As a result, the database can execute queries faster and with less computational overhead. To demonstrate the importance of filter simplification, consider a real MongoDB customer case. The query in the case was enormous in size and clearly machine-generated, and the optimizer couldn't handle it efficiently. It was clear that simplifying the query would help the optimizer find a better plan.
Here's a smaller example inspired by that case, showing how simplifying filters can lead to more efficient query plans:

db.collection.createIndex({b: 1})
filter = {$or: [{$and: [{a: 1}, {a: {$ne: 1}}]}, {b: 2}]}
db.collection.find(filter)

Because one $or branch involves predicates on the unindexed field “a”, the optimizer cannot generate an index scan plan for the unoptimized version of the query and opts for a collection scan plan. However, the $and branch is always false, so the filter simplifies to {b: 2}, which makes an index scan plan possible. The difference between the plans can be drastic for large collections and selective indexes. In our benchmark, the simplifier showcased a remarkable 18,100% throughput improvement in this scenario. The benefits of filter simplification come in different flavors: in one of our benchmarks testing large queries, execution time was cut in half even with a collection scan plan, because the simplified queries are faster to execute.

Simplifying Boolean expressions: Modified Quine–McCluskey algorithm and Petrick’s method

The journey to find a good solution for Boolean simplification was surprisingly complex. The final solution can be expressed in just four steps:

1. Convert the expression being simplified into bitset DNF form.
2. Apply the QMC reduction operation to DNF terms: (x ∧ y) ∨ (x ∧ ¬y) = x.
3. Apply the absorption law: x ∨ (x ∧ y) = x.
4. Use Petrick’s method for further simplification.

Yet we faced a few challenges along the way. This section will explore the challenges and how we addressed them.

Simplifying AST

The initial solution, applying Boolean laws directly to filters in the Abstract Syntax Tree (AST) format, which MongoDB's query engine uses, proved to be resource-intensive, consuming significant memory and CPU time. This outlines the first challenge in the journey: finding an efficient method to simplify Boolean expressions. One issue with the initial solution was that transformations could be applied repeatedly, leading to the same expression appearing multiple times.
To address this, we needed to store previously observed expressions in memory and check each time whether an expression had already been seen.

Modified Quine–McCluskey

The Quine–McCluskey algorithm, commonly used in digital circuit design, offers a different approach with a finite number of steps. The explanation of the algorithm might seem daunting, but in essence we just apply the following reduction rule to a pair of expressions:

(x ∧ y) ∨ (x ∧ ¬y) = x

This allows the pair of expressions to be reduced into one. A challenge with the Quine–McCluskey algorithm is that it requires a list of prime implicants as input. An implicant of a Boolean expression is a combination of predicates that makes the expression true, essentially a row of the truth table where the Boolean expression is true. To obtain a list of prime implicants from a given Boolean expression, we need to calculate a truth table. This involves executing the expression 2^n times, producing up to 2^n implicants. For an expression with 10 predicates, we need to evaluate the expression 1,024 times. With 20 predicates, the number of evaluations is 1,048,576. This is impractical. Implicants are similar to Disjunctive Normal Form (DNF) minterms, as both evaluate the expression to true. We could use DNF minterms instead of implicants, but here is another challenge. Even though the expressions below are equivalent:

(a ∧ b) ∨ (a ∧ ¬b)
a ∨ (a ∧ ¬b)

only the first one is represented via implicants: (a ∧ b) is an implicant of the expression and can be simplified by QMC, while (a) is not, as it says nothing about the value of predicate (b).

Absorption law

Tackling the previous challenge, we face another one: we need a way to compensate for the reduced power of QMC. Fortunately, this can be resolved using the absorption law:

x ∨ (x ∧ y) = x

With its help, we can simplify the expression from the previous example, a ∨ (a ∧ ¬b), to just a.
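To make the reduction and absorption rules concrete, here is a small, self-contained Python sketch (illustrative only, not MongoDB's C++ implementation). Each term is a pair of bit masks: mask marks which predicates appear in the conjunction, bits their polarity. A brute-force covering step in the spirit of Petrick's method, which the article describes next, is included as well.

```python
from itertools import combinations

# A term is (mask, bits): `mask` marks which predicates appear in the
# conjunction, `bits` their polarity (1 = positive). E.g., with a = bit 0
# and b = bit 1, the term (a AND NOT b) is (0b11, 0b01).

def qmc_reduce(t1, t2):
    """(x AND y) OR (x AND NOT y) = x: merge two terms over the same
    predicates that differ in exactly one polarity; else return None."""
    (m1, b1), (m2, b2) = t1, t2
    diff = b1 ^ b2
    if m1 == m2 and diff and (diff & (diff - 1)) == 0:
        return (m1 & ~diff, b1 & ~diff)
    return None

def absorbs(t1, t2):
    """x OR (x AND y) = x: t1 absorbs t2 when t1's predicates are a
    subset of t2's with matching polarity."""
    (m1, b1), (m2, b2) = t1, t2
    return (m1 & m2) == m1 and (b2 & m1) == b1

def simplify(terms):
    """Greedy sketch: apply QMC reduction until a fixpoint, then drop
    absorbed terms. (The real algorithm tracks all prime implicants.)"""
    terms = set(terms)
    merged_any = True
    while merged_any:
        merged_any = False
        for t1, t2 in combinations(terms, 2):
            merged = qmc_reduce(t1, t2)
            if merged is not None:
                terms = (terms - {t1, t2}) | {merged}
                merged_any = True
                break
    return {t for t in terms
            if not any(u != t and absorbs(u, t) for u in terms)}

def minimal_cover(primes, minterms):
    """Petrick-style step: smallest subset of prime implicants covering
    every minterm (brute force; the real method minimizes a
    product-of-sums expression instead)."""
    primes = list(primes)
    for k in range(1, len(primes) + 1):
        for subset in combinations(primes, k):
            if all(any(absorbs(p, m) for p in subset) for m in minterms):
                return set(subset)
    return set(primes)
```

For instance, with a = bit 0 and b = bit 1, simplify({(0b11, 0b11), (0b11, 0b01)}) reduces (a ∧ b) ∨ (a ∧ ¬b) to {(0b01, 0b01)}, i.e. a, and the absorption step alone collapses a ∨ (a ∧ ¬b) the same way.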
Effective bitset representation

One optimization that proved valuable is to use bitwise instructions on a bitset representation of Boolean expressions instead of working with the AST representation. This approach boosts performance by taking advantage of the speed and simplicity of bitwise operations, which are generally quicker and more straightforward than working with the more complex AST structure.

Petrick’s method

Can we do better and ensure the expression stays accurate while further reducing redundancy? For example, take the expression:

(¬a ∧ ¬b ∧ ¬c) ∨ (¬a ∧ b ∧ ¬c) ∨ (a ∧ ¬b ∧ ¬c) ∨ (¬a ∧ b ∧ c)

After some QMC reduction steps are applied, we end up with the following expression:

(¬a ∧ ¬c) ∨ (¬b ∧ ¬c) ∨ (¬a ∧ b)

However, this can be reduced further, down to:

(¬b ∧ ¬c) ∨ (¬a ∧ b)

This redundancy comes from the fact that when we reduce two terms into one, the new term covers (or represents) the two original ones, and some of the derived terms can be removed without breaking the “coverage” of the original expression. To find the minimal “coverage” we use Petrick’s method. The idea of this method is to express the coverage as a Boolean expression and then find the minimal combination of its predicates that evaluates the expression to true. This can be done by transforming the expression into DNF and picking the smallest minterm.

Boolean expressions in MongoDB

In the previous section, we developed the algorithm for simple Boolean expressions. However, we can't use it to simplify filters in MongoDB just yet. The MongoDB Query Language (MQL) has unique logical semantics, particularly regarding negation, handling of missing values, and array values.

Logical operators

MQL supports the following logical operators:

$and - regular conjunction operator.
$or - regular disjunction operator.
$not - negation operator with special semantics; see below for details.
$nor - negation of disjunction, which is equivalent to a conjunction of negations (De Morgan laws).

The only MQL-specific operator is $nor, which can be easily converted to a conjunction of negations.

MQL negation semantics

MongoDB's flexible schema can lead to some fields being missing in documents. For instance, in a collection with documents {a: 1, b: 1} and {a: 2}, the field b is missing in the second document. When dealing with negations in MQL, missing values play an important role: negations will match documents where the specified field is missing. Let's look at a collection of four documents and see how they respond to different queries (see Table 1).

Table 1: Negating comparison expressions in MQL.

As shown, the $not operator matches documents where the field "a" is missing. The $ne operator in MQL is only syntactic sugar for {$not: {$eq: ...}}, which means it matches documents where the specified field is missing (see Table 2).

Table 2: Negating equality expressions in MQL.

Let’s prove that Boolean algebra laws still hold for MQL negations. To do that we need to introduce new notation:

not* - MQL negation.
exists - exists A means that the field of predicate A exists.
not-exists - short for not (exists A); means that the field of predicate A does not exist.

Being short for not (exists), not-exists behaves in the same way as a normal not:

not-exists(A and B) = not-exists(A) or not-exists(B)
not-exists(A or B) = not-exists(A) and not-exists(B)

The following equation is always true by the definition of MQL negation:

not* A = not A or not (exists A)

We also state that the predicate A cannot be true when its field is missing:

A and not-exists A = false

This is true even for the operator {$exists: 0} in MQL: e.g., {$and: [{a: {$not: {$exists: 0}}}, {a: {$exists: 0}}]} always returns an empty set of documents.
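These definitions can be sanity-checked with a tiny executable model (a Python stand-in, not MQL itself, with invented field names): a plain predicate matches only when its field exists and its test passes, while MQL-style negation (not*) also matches when the field is missing.

```python
MISSING = object()   # sentinel for "field absent from the document"

def matches(doc, field, test):
    """Plain predicate A: the field must exist AND satisfy the test."""
    value = doc.get(field, MISSING)
    return value is not MISSING and test(value)

def mql_not(doc, field, test):
    """MQL negation: not* A = (not A) or (not exists A)."""
    value = doc.get(field, MISSING)
    return value is MISSING or not test(value)

# Sample collection: field "a" is missing from the last document
sample_docs = [{"a": 1}, {"a": 2}, {"b": 3}]
eq_1 = lambda v: v == 1   # stand-in for {$eq: 1}
```

Running the complement laws over every sample document (A and not* A is always false, A or not* A is always true) confirms the algebra that the proofs below establish formally.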
Internally, {$exists: false} is implemented through negation of {$exists: 1}, hence the following statements are also true in MQL:

not* (not-exists A) = exists A
not* (exists A) = not-exists A

Complement laws

We need to prove that:

A and not* A = false
A or not* A = true

The proof of the first complement law:

A and not* A = A and (not A or not-exists A) = (A and not A) or (A and not-exists A) = false or false = false

The proof of the second complement law:

A or not* A = A or (not A or not-exists A) = A or not A or not-exists A = true

De Morgan laws

Proof of the first De Morgan law, not* (A and B) = (not* A) or (not* B):

not* (A and B) = not (A and B) or not-exists (A and B) = (not A or not B) or (not-exists A or not-exists B) = (not A or not-exists A) or (not B or not-exists B) = not* A or not* B

The proof of the second De Morgan law, not* (A or B) = (not* A) and (not* B), is a bit more complicated:

not* (A or B) = not (A or B) or not-exists (A or B) = (not A and not B) or (not-exists A and not-exists B) = (not A or not-exists A) and (not B or not-exists A) and (not A or not-exists B) and (not B or not-exists B) = ((not* A) and (not* B)) and ((not A and not B) or (not A and not-exists B) or (not B and not-exists A) or (not-exists A and not-exists B)) = ((not* A) and (not* B)) and ((not* A) and (not* B)) = (not* A) and (not* B) // because (not* A) and (not* B) = (not A and not B) or (not A and not-exists B) or (not B and not-exists A) or (not-exists A and not-exists B)

Involution law

We need to prove that not* (not* A) = A:

not* (not* A) = not* (not A or not-exists A) = not* (not A) and not* (not-exists A) = (A or not-exists A) and (exists A) = (A and exists A) or (not-exists A and exists A) = A or false = A

Here we used the De Morgan law for not* proved in the previous section.

MQL array values semantics

In MQL, array values behave as expected and don’t change the semantics of Boolean operators. However, they alter the semantics of operands within predicates.
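This altered operand semantics can be illustrated with another small sketch (again a Python stand-in for MQL, with invented predicate names): a comparison on an array field means "the array contains a matching element", which is why two contradictory scalar intervals can still both match the same array.

```python
def mql_matches(value, test):
    """MQL comparison semantics: on an array, the predicate holds if
    ANY element satisfies the test ("contains"); on a scalar, the
    test applies directly."""
    if isinstance(value, list):
        return any(test(v) for v in value)
    return test(value)

gt_100 = lambda v: v > 100   # stand-in for {a: {$gt: 100}}
lte_1 = lambda v: v <= 1     # stand-in for {a: {$lte: 1}}
```

A scalar can never satisfy both predicates at once, but an array such as [0, 200] satisfies both, so the conjunction cannot be collapsed to an empty interval.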
It’s helpful to think of it as implicitly adding the word “contains” to an operator's interpretation:

{a: {$eq: 7}} means “array a contains a value equal to 7”.
{a: {$ne: 7}}, which is always equivalent to {a: {$not: {$eq: 7}}}, means “array a does not contain a value equal to 7”.
{a: {$gt: 7}} means “array a contains a value greater than 7”.

Since the Boolean simplifier focuses on the logical structure of expressions, rather than the internal meaning of predicates, it's irrelevant whether the value is an array or a scalar. The one important difference with array value semantics is interval simplification. The following interval transformations are always valid for scalar values, but invalid for arrays:

scalar > 100 && scalar <= 1 => scalar: () // empty interval
scalar < 100 && scalar >= 1 => scalar: [1, 100)

These differences are important for building index scan bounds. However, as previously mentioned, they are not significant for the Boolean simplifier.

Implementation considerations

We faced several challenges during implementation. To achieve maximum performance, we created our own version of bitsets in C++. The existing options, such as boost::dynamic_bitset, were too slow, and std::bitset lacked flexibility. Memory management was crucial, so we worked hard to reduce memory allocations. This included using optimized storage for our bitset and a C++ polymorphic allocator for some algorithms. We released the simplifier cautiously, enabling it only for specific types of expressions. We plan to expand the range of expressions that can be simplified in the future. We also set a limit on the maximum number of terms in DNF. Before each transformation, we estimate the number of resulting terms. If the number is too high, we cancel the simplification.

Conclusion

In this post, we described an effective algorithm to simplify Boolean expressions.
We used bitsets to represent query filters, modified the Quine–McCluskey algorithm, and applied Petrick's method. Finally, we proved that the algorithm works with MQL's Boolean algebra. This journey began, as does so much at MongoDB, with a real-world customer example. The customer was struggling with a complex query, and we developed the Boolean Expression Simplifier to help them overcome their specific challenge—and ended up enhancing MongoDB's performance for any user dealing with intricate filters. As noted before, in one particularly demanding case involving large collections and selective indexes, the Simplifier helped achieve an 18,100% throughput improvement. This project reflects MongoDB’s longstanding commitment to turning customer challenges into successes, and our practice of taking what we learn from individual projects and making broad improvements. We’re always working to improve MongoDB’s performance, and the simplifier does this—it enables faster and more efficient query execution. Finally, this project is a good example of how MongoDB's culture of trust and ownership can lead to impact. Our engineers are empowered to innovate from concept to completion, transforming theoretical ideas like Boolean expression minimization into tangible gains that directly improve the customer experience.

Join our MongoDB Community to learn about upcoming events, hear stories from MongoDB users, and connect with community members from around the world.

January 30, 2025
Engineering Blog

2024 William Zola Award for Community Excellence Recipient

We are thrilled to announce the 2024 recipient of MongoDB’s prestigious William Zola Award for Community Excellence: Mateus Leonardi!

Each year, the William Zola Award for Community Excellence recognizes an outstanding community contributor who embodies the legacy of William Zola, a lead technical services engineer at MongoDB who passed away in 2014. Zola had an unwavering commitment to user success, and believed deeply in understanding and catering to users' emotions while resolving their technical problems. His philosophy is a guiding force for MongoDB’s community to this day. This award comes with a cash prize and travel sponsorship to attend a regional MongoDB.local event.

Mateus Leonardi embodies these leadership values within the MongoDB community. In 2023, he launched a new MongoDB User Group (MUG) in Florianópolis, Santa Catarina, Brazil, and has led it since. With his co-leaders, he has organized six successful MUG events, and has formed partnerships with local organizations and universities. He has also expanded his impact by becoming a MongoDB Community Creator, and has actively shared his MongoDB expertise outside of the user group. For example, in 2024 Mateus volunteered as a subject matter expert for MongoDB certifications. This involved being a panelist on several critical exam development milestones—from defining the knowledge and skills needed to earn credentials, to developing exam blueprints, to writing exam questions and verifying the technical accuracy of MongoDB certifications. Based on this record of support for our community, Mateus was also honored as a MongoDB Community Champion in 2024. Mateus excels as a dynamic leader and community advocate, extending invitations to fellow Brazilian MongoDB community leaders to speak at MUGs and local events. He also organizes and delivers talks at numerous events, including those for local university students.
Ultimately, Mateus goes above and beyond in everything he commits to, and has demonstrated a genuine commitment to enhancing both the local MongoDB community and the broader developer community in his area. Here’s what Mateus Leonardi had to say about what the MongoDB community means to him:

Q: Could you tell our readers a little about your day-to-day work?

Mateus: As Head of Engineering at HeroSpark, my mission is to empower our team to innovate with quality and consistency. I work to create an environment where efficiency and constant evolution are natural in our day-to-day, always focusing on solutions that benefit both our team and our customers.

Q: How does MongoDB support you as a developer?

Mateus: MongoDB has been instrumental in our journey to enable adaptability without compromising quality. At work, MongoDB gives us that same flexibility in development, allowing us to quickly adapt to market changes while maintaining high performance and controlled costs. This combination gives us the confidence to innovate sustainably.

Q: Why is being a leader in the MongoDB community important to you?

Mateus: My sixteen-plus-year career has been marked by people who have helped me grow, and now it's my turn to give back. I view my role in the community as a way to multiply knowledge and create a positive impact. Technology has transformed my life, and through the MongoDB community, I can help others transform their realities too.

Q: What has the community taught you?

Mateus: The community has taught me that true learning happens when we share experiences. The act of teaching makes us better learners. Each interaction in the community allows us to reflect on our limitations and grow collectively. I've learned that the most rewarding thing is not the destination, but a journey shared with other developers.

Congratulations to Mateus Leonardi, recipient of the 2024 William Zola Award for Community Excellence!
To learn more about the MongoDB Community, please visit the MongoDB Community homepage .

January 29, 2025
News
