Product Updates

The most recent MongoDB product releases and updates

Welcome to MongoDB.local NYC 2024!

AI promises to upend how enterprises operate and reach customers… if only they could first find the "On" button. Despite the tremendous promise of AI, most companies still find themselves in the experimentation phase, working through proofs of concept, hampered by unfamiliar technologies that don't work well together. But MongoDB is uniquely positioned to help developers turn all this AI noise into "signal" that benefits customers. This week at MongoDB.local NYC, thousands of developers and executives—representing Fortune 500 companies and cutting-edge startups—have gathered to discuss and demonstrate the real-world successes they've had building on MongoDB's developer data platform.

MongoDB is fast becoming the industry's go-to memory database for retrieval-augmented generation (RAG) and agentic systems, offering a unified data model across the entire AI stack. But this isn't just a technology story, as important as that is. MongoDB also now offers essential programs and services to make AI much more accessible. In short, MongoDB is taking developers from experimentation to impact, and advancing our long-standing mission of making it easy to work with data.

Demystifying AI

Businesses are eager to adopt generative AI, but they don't know where to start. The AI landscape is incredibly complex—and seems to get more so by the minute. This complexity, coupled with limited in-house AI expertise and concerns about the performance and security risks of integrating disparate technologies, is keeping too many organizations on the sidelines. MongoDB can help.

To get organizations started, we're announcing the MongoDB AI Applications Program (MAAP). With MAAP, we give customers the blueprints and reference architectures they need to easily understand how to build AI applications. We also take on the heavy lifting of integrating MongoDB's developer data platform with leading AI partners like Anthropic, Cohere, Fireworks AI, LangChain, LlamaIndex, Nomic, Anyscale, Credal.ai, and Together AI, all running on the cloud provider of your choice. MAAP will be available to customers in early access starting in July.

In addition to MAAP, we're also introducing two new professional services engagements to help you build AI-powered apps quickly, safely, and cost-effectively:

- An AI Strategy service that leverages experts to help customers identify the highest-impact AI opportunities and create specific plans on how to pursue them.
- For customers who have already identified use cases to pursue, an AI Accelerator service that brings expert consulting—from solution design through prototyping—to enable customers to execute their AI application roadmap from idea to production.

Once developers get to building AI apps, they'll find that MongoDB allows them to speak the data "language" of AI. Our developer data platform unifies all of your different data types alongside your real-time operational data—including source data, vector embeddings, metadata, and generated data—and supports a broad range of use cases. Not only do we give developers the most intuitive way to work with their data, we also keep improving where they can do so. Many developers first experience MongoDB in a local environment before moving to a fully managed cloud service like MongoDB Atlas.
So, I'm excited to share that we will be introducing full-text search and vector search in MongoDB Community Edition later this year, making it even easier for developers to quickly experiment with new features and streamlining end-to-end software development workflows when building AI applications. These new capabilities also enable support for customers who want to run AI-powered apps on devices or on-premises.

As customers begin to mature these applications, cost becomes an important consideration. Last year, we introduced dedicated nodes for Atlas Search on AWS. Using dedicated nodes, customers can isolate their vector search workloads and scale them up or down independently from operational workloads, improving performance and ensuring high availability. By giving customers workload isolation without data isolation, they can manage resources efficiently without additional complexity. Today, we're announcing Atlas Search Nodes on all three cloud providers, which customers can configure programmatically using the Atlas CLI or our Infrastructure-as-Code integrations. Learn more about how MongoDB is the best solution to the challenges posed by the fast-moving generative AI landscape.

Real-time and highly performant

Though AI rightly claims center stage at MongoDB.local NYC this week, it's not the only way we're helping developers. From real-time fraud detection to predictive maintenance to content summarization, customers need to efficiently process large volumes of high-velocity data from multiple sources. Today, we're also announcing the general availability of Atlas Stream Processing, the public preview of Atlas Edge Server, and improved performance of time series workloads with MongoDB 8.0. Together, these capabilities enable customers to design applications that solve virtually any business challenge. Learn more about how MongoDB powers modern application requirements.

These are just a few of the things we're announcing this week. Whether you're just dipping your toes into the world of generative AI or are well on your way, MongoDB's developer data platform, strong and diverse network of partners, and proven industry solutions will give you a competitive edge in a fast-moving market. Please take a minute to see what we've built for you, so that you can more easily build for your customers. Enjoy the conference, and we hope to see you soon!

To see more announcements and get the latest product updates, visit our What's New page. And head to the MongoDB.local hub to see where we're stopping along our 2024 world tour.
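To make the unified data model concrete, here is a minimal sketch using the Node.js driver of how source data, a vector embedding, metadata, and generated data can live together in a single document. The connection string, database, collection, and field values are hypothetical placeholders, not part of any announcement above.

```javascript
const { MongoClient } = require("mongodb");

async function run() {
  // Connection string, database, and collection names are placeholders.
  const client = new MongoClient(process.env.MONGODB_URI);
  try {
    const movies = client.db("sample_app").collection("movies");

    // Source data, its vector embedding, metadata, and generated data
    // are stored together in one document.
    await movies.insertOne({
      title: "The Matrix",
      plot: "A hacker discovers reality is a simulation.",          // source data
      plot_embedding: [0.0123, -0.0456, 0.0789 /* ...more dims */],  // vector embedding
      genres: ["sci-fi", "action"],                                  // metadata
      summary_generated: "A classic sci-fi action film about...",    // generated data
    });
  } finally {
    await client.close();
  }
}

run().catch(console.error);
```

Because everything lives in one document, the same record can serve operational queries and vector retrieval without a separate pipeline to keep two stores in sync.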

May 2, 2024
Updates

Top AI Announcements at MongoDB.local NYC

The AI landscape is evolving so quickly that it's no surprise customers are overwhelmed by their choices. Between foundation models for everything from text to code, AI frameworks, and the steady stream of AI-related companies being founded daily, developers and organizations face a dizzying array of AI choices. MongoDB empowers customers through a developer data platform that helps them avoid vendor lock-in from cloud providers or AI vendors in this fast-moving space. This freedom allows customers to choose the large language model (LLM) that best suits their needs, now or in the future, whether it's open source or proprietary. Today at MongoDB.local NYC, we announced many new product capabilities, partner integrations, services, and solution offerings that enable development teams to get started and build customer-facing solutions with AI.

Run everywhere, with whatever technology you are using in your AI stack

MongoDB's flexible document model is built on the ethos of "data that is accessed and used together is stored together." Vectors are a natural extension of this capability, meaning customers can store their source data, metadata, and related vector embeddings in the same document. All of this is accessed and queried with a common Query API, making vector data easy to combine and work with other types of data stored within MongoDB.

MongoDB Atlas—our fully managed, multi-cloud developer data platform—makes it easy to build AI-powered applications and experiences, with the breadth and depth of MongoDB's AI partnerships and integrations—no matter which language, application framework, foundation model, or technology partner is used or preferred by developers. This year, we're continuing to focus on our AI partnerships and integrations to make it easier for developers to build innovative applications with generative AI, including:

- Python and JavaScript support via the dedicated LangChain-MongoDB package
- Python and C# Microsoft Semantic Kernel integration for Atlas Vector Search
- AI models from Mistral and Cohere
- AI models on the Fireworks AI platform
- Addition of Atlas Vector Search as a knowledge base in Amazon Bedrock
- Atlas as a datastore enabling storage, query, and retrieval using natural language in ChatGPT
- Atlas Vector Search as a datastore on Haystack
- Atlas Vector Search as a datastore on DocArray
- Collaboration with Google Gemini Code Assist and Amazon Q to quickly prototype new features and accelerate application development
- Google Vertex AI Extension to harness natural language with MongoDB queries

MongoDB integrates well with a rich ecosystem of AI developer frameworks, LLMs, and embedding providers. We continue investing in making the entire AI stack work seamlessly, enabling developers to easily take advantage of generative AI capabilities in their applications. MongoDB's integrations and our industry-leading multi-cloud capabilities allow organizations to move quickly and avoid lock-in to any particular cloud provider or AI technology in a rapidly evolving space.

Build high-performance AI applications securely and at scale

Workload isolation, without data isolation, is critical for building performant, scalable AI applications. Search Nodes in MongoDB Atlas provide dedicated computing and enable users to isolate memory-intensive AI workloads for superior performance and higher availability.
Users can optimize resource consumption for their use case, upsizing or downsizing the hardware for that specific node irrespective of the rest of the database cluster. Search Nodes make it easy to optimize performance for vector search queries without over- or under-provisioning an entire cluster. The IaC integrations with the HashiCorp Terraform Atlas Provider and AWS CloudFormation enable developers to configure and programmatically deploy Search Nodes at scale.

Search Nodes are an integral part of Atlas, our fully managed, battle-tested, multi-cloud platform. Previously, we announced the availability of Search Nodes for our AWS and Google Cloud customers. We are excited to announce the preview of Search Nodes for our Azure customers at MongoDB.local NYC. Search Nodes on Atlas help developers move faster by removing the friction of integrating, securing, and maintaining the essential data components required to build and deploy modern AI applications.

Improve developer productivity with AI-powered experiences

Today, we also announced new and improved releases of our intelligent developer experiences in MongoDB Compass, MongoDB Relational Migrator, and MongoDB Atlas Charts, aiming to enhance developer productivity and velocity. With the updated releases, developers can use natural language to query their data using MongoDB Compass, troubleshoot common problems during development, perform SQL-to-Query API conversion right from within MongoDB Relational Migrator, and quickly build charts and dashboards using natural language prompts in MongoDB Atlas Charts. Collectively, these intelligent experiences will help developers build differentiated features with greater control and flexibility, making it easier than ever to build applications with MongoDB.

Enable development teams to get started and build customer-facing solutions faster and easier with AI

MongoDB makes it easy for companies of all sizes to build AI-powered applications. To provide customers with a straightforward way to get started with generative AI, MongoDB is announcing the MongoDB AI Applications Program (MAAP). Based on usage patterns for common AI use cases, customers receive a functioning application built on a reference architecture backed by MongoDB Atlas, vetted AI models and hosting solutions, technical support, and a full-service engagement led by our Professional Services team. We're launching with an incredible group of industry-leading partners, including Anthropic, Anyscale, AWS, Cohere, Credal.ai, Fireworks.ai, Google Cloud, gravity9, LangChain, LlamaIndex, Microsoft Azure, Nomic, PeerIslands, Pureinsights, and Together AI. MongoDB is in a unique position in the market to be able to pull together such an impressive AI partner ecosystem in a single customer-focused program, and we're excited to see how MAAP will help customers more easily go from ideation to fully functioning generative AI applications.

Last year, to further enable startups to build AI solutions with MongoDB Atlas, we launched the AI Innovators Program, an extension of MongoDB for Startups, which offers an additional $5,000 in Atlas credits to our AI startups. This year, we are expanding the program by introducing an AI Startup Hub, which features a curated guide for getting started with MongoDB and AI, quickstarts for MongoDB and select AI partners, and startup credit offerings from our AI partners.

We provide two new AI Accelerator consulting packages for larger enterprise companies: AI Essentials and AI Implementation.
While MAAP is aimed exclusively at building highly vetted reference architectures, these consulting packages allow customers to design, build, and deploy open-ended AI prototypes and solutions into their applications.

Data has always been a competitive advantage for organizations, and MongoDB makes it easy, fast, and flexible to innovate with data. We continue to invest in making all the other parts of the AI stack easy for organizations: vetting top partners to ensure compatibility with different parts of the application stack, operating a managed service that spans multiple clouds, and ensuring the openness that has always been part of MongoDB, which avoids vendor lock-in.

How does MongoDB Atlas unify operational, analytical, and generative AI data services to streamline building AI-enriched applications? Check out our MongoDB for AI page to learn more.
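Because vectors sit next to the rest of the document, a single Query API call can mix semantic retrieval with ordinary filters. Below is a minimal sketch with the Node.js driver; the index name, collection, field names, and query vector are all hypothetical, and any field used in the filter is assumed to be indexed as a filter field in the same vector index.

```javascript
// A minimal sketch of querying vectors through the aggregation pipeline.
// Assumes an Atlas Vector Search index named "plot_vector_index" exists on
// the "plot_embedding" field; names and values here are placeholders.
const results = await db.collection("movies").aggregate([
  {
    $vectorSearch: {
      index: "plot_vector_index",
      path: "plot_embedding",
      queryVector: [0.0123, -0.0456, 0.0789 /* ...same dimensions as the index */],
      numCandidates: 100,
      limit: 5,
      filter: { genres: "sci-fi" }, // pre-filter on a metadata field indexed for filtering
    },
  },
  {
    $project: {
      title: 1,
      genres: 1,
      score: { $meta: "vectorSearchScore" },
    },
  },
]).toArray();
```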

May 2, 2024
Updates

MongoDB Introduces Workload Identity Federation for Database Access

MongoDB Atlas customers run workloads (applications) inside AWS, Azure, and Google Cloud. Today, to enable these workloads to authenticate with MongoDB Atlas clusters, customers create and manage MongoDB Atlas database users using the natively supported SCRAM (password) and X.509 authentication mechanisms and configure them in their workloads. Customers have to manage the full identity lifecycle of these users in their applications, including frequently rotating secrets. To meet their evolving security and compliance requirements, our enterprise customers require database users to be managed within their existing identity providers or the cloud providers of their choice.

Workload Identity Federation will be generally available later this month and allows MongoDB Atlas database users to be managed with Azure Managed Identities, Azure Service Principals, Google Service Accounts, or any OAuth 2.0-compliant authorization service. This approach makes it easier for customers to manage, secure, and audit their MongoDB Atlas database users in their existing identity provider or a cloud provider of their choice, and enables "passwordless" access to their MongoDB Atlas databases.

Along with Workload Identity Federation, Workforce Identity Federation, which was launched in public preview last year, will be generally available later this month. Workforce Identity Federation allows organizations to configure access to MongoDB clusters for their employees with single sign-on (SSO) using OpenID Connect. Both features complement each other and enable organizations to have complete control of database access for both application users and employees.

Workload Identity Federation support will be available in Atlas Dedicated Clusters on MongoDB 7.0 and above, and is supported by the Java, C#, Node, and Python drivers. Go driver support will be added soon.

Quick steps to get started with Workload Identity Federation:

1. Configure Atlas with your OAuth 2.0-compatible workload identity provider, such as Azure or Google Cloud.
2. Configure an Azure Service Principal or Google Cloud Service Account for the Azure or Google Cloud resource where your application runs.
3. Add the configured Azure Service Principal or Google Cloud Service Account as an Atlas database user with federated authentication.
4. Using Python or any supported driver inside your application, authenticate and authorize with your workload identity provider and Atlas clusters.

To learn more about Workload Identity Federation, please refer to the documentation. And to learn more about how MongoDB's robust operational and security controls protect your data, read more about our security features.
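As a rough sketch of step 4, here is what "passwordless" access can look like from an Azure-hosted workload using the Node.js driver's MONGODB-OIDC mechanism. The cluster host and TOKEN_RESOURCE audience are placeholders, and exact option names can vary by driver version, so treat this as an assumption to verify against the driver documentation.

```javascript
const { MongoClient } = require("mongodb");

// Sketch of OIDC-based authentication with the driver's built-in Azure support.
// Host and TOKEN_RESOURCE are placeholders; check your driver's docs for the
// exact authMechanismProperties it supports.
const uri =
  "mongodb+srv://cluster0.example.mongodb.net/?" +
  "authMechanism=MONGODB-OIDC&" +
  "authMechanismProperties=ENVIRONMENT:azure,TOKEN_RESOURCE:api://my-atlas-audience";

const client = new MongoClient(uri);

async function run() {
  // No password or certificate is stored in the application; the driver
  // obtains a token for the workload's managed identity at connect time.
  await client.db("admin").command({ ping: 1 });
  console.log("Authenticated via Workload Identity Federation");
}

run().catch(console.error).finally(() => client.close());
```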

May 2, 2024
Updates

Elevating Database Performance: Introducing Query Insights in MongoDB Atlas

Today, at .local NYC, MongoDB Atlas introduced the new Query Insights tab, enhancing how users monitor, manage, and optimize their database performance directly within the Atlas UI. This new feature offers developers deeper insights into their database's performance, with a more powerful query analysis tool and detailed namespace-level metrics for faster issue resolution and enhanced performance.

Applications and workloads change over time, making it increasingly difficult to track inefficient queries that strain a database's resources. Metrics can spike for various reasons, and developers need the right tooling to determine the source of the problem so they can quickly identify and resolve the issue. MongoDB Atlas's Query Insights directly tackles these challenges by enhancing MongoDB's observability capabilities with two crucial features: Namespace Insights and an upgraded Query Profiler.

Query Insights delivers performance optimization through actionable intelligence

The introduction of MongoDB Atlas Query Insights demonstrates MongoDB's commitment to advanced database management, enhancing our platform's observability capabilities with detailed and actionable insights. Query Insights integrates Namespace Insights and an upgraded Query Profiler within a new dynamic interface, helping boost database performance by streamlining diagnostics and reducing troubleshooting times.

The newly added Namespace Insights provides users with collection-level latency statistics and a comprehensive view of how the hottest collections on a cluster perform over time. This enables developers to answer "Who or what is causing the problem?", which is instrumental in identifying performance trends and prioritizing query optimizations. The enhanced cluster-centric Query Profiler introduces a more comprehensive view of slow and inefficient queries over a broader period. Having an overall view of data across the entire cluster facilitates more straightforward navigation between nodes and a longer lookback period to identify trends. This ultimately reduces troubleshooting time, thereby enhancing developer productivity and improving overall database performance.

Key benefits of Query Insights

Query Insights brings MongoDB Atlas users several new benefits, including:

- Granular telemetry: faster identification and resolution of database issues with namespace-level latency statistics
- Improved observability: it is easier to spot performance trends, identify root causes, and debug applications
- Enhanced productivity: reduced troubleshooting time thanks to a more comprehensive view of slow operations

Try it out!

The Query Insights page provides more granular insights into database performance by surfacing collection- and operation-level details. The Namespace Insights page provides metrics for the top 20 collections by total latency. Hover over the charts to see how collections perform relative to each other over time. This information makes it easier to answer the question: "Who or what is causing the problem?"

Use the Query Profiler to view specific slow operations. Click on a point in the scatter plot to bring up additional metadata about each slow operation. Click on View More Details to see more metrics and metadata about each slow operation, including the app name, the operation, the plan summary, execution stats, and more.

Empowering users for peak performance

The launch of Query Insights in MongoDB Atlas underscores MongoDB's commitment to enhancing our platform's observability capabilities. By providing users with the necessary tools and insights for optimal database performance, MongoDB enables developers to spend less time debugging and more time creating—lowering the total cost of ownership, maximizing efficiency, and adding significant value to our users' operations. Sign up for MongoDB Atlas, our cloud database service, to see Query Insights in action, and for more information, see Monitor Query Performance.
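Once the Query Profiler has surfaced a suspect operation, a common complementary step outside the Atlas UI is to reproduce it in mongosh and inspect its plan with explain. A minimal sketch, where the collection and filter are hypothetical examples rather than anything specific to Query Insights:

```javascript
// Re-run a slow query surfaced by the Query Profiler and inspect its plan.
// Collection name and filter are hypothetical.
const stats = db.orders
  .find({ status: "pending", customerId: 12345 })
  .explain("executionStats");

// A COLLSCAN stage (instead of an index scan) combined with a high
// totalDocsExamined count usually points to a missing index for this query shape.
printjson({
  winningStage: stats.queryPlanner.winningPlan.stage,
  docsExamined: stats.executionStats.totalDocsExamined,
  executionTimeMillis: stats.executionStats.executionTimeMillis,
});
```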

May 2, 2024
Updates

Atlas Stream Processing is Now Generally Available!

We're thrilled to announce that Atlas Stream Processing—the MongoDB-native way to process streaming data—is now generally available, empowering developers to quickly build responsive, event-driven applications!

Our team spent the last two years defining a vision and building a product that leans into MongoDB's strengths to overcome the hard challenges in stream processing. After a decade of building stream processing products outside of MongoDB, we are using everything that makes MongoDB unique and differentiated—the Query API and powerful aggregation framework, as well as the document model and its schema flexibility—to create an awesome developer experience. It's a new approach to stream processing, and based on the feedback of so many of you in our community, it's the best way for most developers using MongoDB to do it. Let's get into what's new.

What's new in general availability?

- Production readiness: Atlas Stream Processing is ready to support your production workloads, ensuring reliable and scalable stream processing for your mission-critical applications.
- Time series collection support: Emit processor results into Time Series Collections. Pre-process data continuously while saving it for historical access later in a collection type available in MongoDB Atlas built to efficiently store and query time series data.
- Development and production tiers: Besides the SP30 cluster tier available during the public preview, we're introducing an SP10 tier to provide flexibility and a cost-effective option for exploratory use cases and low-traffic stream processing workloads.
- Improved Kafka support: Added support for Kafka headers allows applications to provide additional metadata alongside event data. Headers are helpful for various stream processing use cases (e.g., routing messages, conditional processing, and more).
- Least privilege access: Atlas database users can be granted access to Stream Processing Instances, enabling access only for those who need it. Read our tutorial for more information.
- Stream processor alerting: Gain insight and visibility into the health of your stream processors by creating alerts for when a failure occurs. Supported methods for alerting include email, SMS, monitoring platforms like Datadog, and more.

Why Atlas Stream Processing?

Atlas Stream Processing brings the power and flexibility of MongoDB's document model and Query API to the challenging stream processing space. With Atlas Stream Processing, developers can:

- Effortlessly handle complex and rapidly changing data structures
- Use the familiar MongoDB Query API for processing streaming data
- Seamlessly integrate with MongoDB Atlas
- Benefit from a fully managed service that eliminates operational overhead

Customer highlights

Read what developers are saying about Atlas Stream Processing:

"At Acoustic, our key focus is to empower brands with behavioral insights that enable them to create engaging, personalized customer experiences. To do so, our Acoustic Connect platform must be able to efficiently process and manage millions of marketing, behavioral, and customer signals as they occur. With Atlas Stream Processing, our engineers can leverage the skills they already have from working with data in Atlas to process new data continuously, ensuring our customers have access to real-time customer insights."
John Riewerts, EVP, Engineering at Acoustic

"Atlas Stream Processing enables us to process, validate, and transform data before sending it to our messaging architecture in AWS, powering event-driven updates throughout our platform. The reliability and performance of Atlas Stream Processing has increased our productivity, improved developer experience, and reduced infrastructure cost."
Cody Perry, Software Engineer, Meltwater

What's ahead for Atlas Stream Processing?

We're rapidly introducing new features and functionality to ensure MongoDB delivers a world-class stream processing experience for all development teams. Over the next few months, you can expect to see:

- Advanced networking support: Support for VPC peering to Kafka clusters for teams requiring additional networking capabilities
- Expanded cloud region support: Support for all cloud regions available in Atlas Data Federation
- Expanded cloud provider support: Support for Microsoft Azure
- Expanded data source and sink support: We have plans to expand beyond Kafka and Atlas databases in the coming months. Let us know which sources and sinks you need, and we will factor that into our planning.
- Richer metrics and observability: Support for expanded visibility into your stream processors to help simplify monitoring and troubleshooting
- Expanded deployment flexibility: Support for deploying stream processors with Terraform. This integration will help enable a seamless CI/CD pipeline, enhancing operational efficiency with infrastructure as code. Look out for a dedicated blog in the near future on how to get started with Atlas Stream Processing and Terraform.

So whether you're looking to process high-velocity sensor data, continuously analyze customer data to deliver personalized experiences, or perform predictive maintenance to increase yields and reduce costs, Atlas Stream Processing has you covered. Join the hundreds of development teams already building with Atlas Stream Processing. Stay tuned to hear more from us soon, and good luck building!

Log in today or check out our introductory tutorial to get started.
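For orientation, here is a rough sketch of what defining a stream processor looks like from mongosh connected to a Stream Processing Instance. The connection registry entries, Kafka topic, and target database and collection names are hypothetical, and the exact interface should be checked against the Atlas Stream Processing documentation.

```javascript
// Connect mongosh to your Stream Processing Instance, then define a processor.
// "kafkaProd", "atlasCluster", the topic, and the namespace are placeholders.
sp.createStreamProcessor("sensorReadings", [
  // Continuously read events from a Kafka topic registered in the connection registry.
  { $source: { connectionName: "kafkaProd", topic: "factory.sensors" } },

  // Lightweight continuous validation/transformation before persisting.
  { $match: { "payload.temperature": { $exists: true } } },

  // Emit results into an Atlas collection, for example a time series
  // collection created ahead of time for efficient storage of readings.
  {
    $merge: {
      into: { connectionName: "atlasCluster", db: "iot", coll: "readings" },
    },
  },
]);

// Start the processor; it runs continuously until stopped.
sp.sensorReadings.start();
```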

May 2, 2024
Updates

Atlas Edge Server is Now in Public Preview

We're excited to announce that Atlas Edge Server is now in public preview! Any developer on Atlas can now deploy Edge Server for their connected infrastructure. Learn more in our docs or get started today.

Developers value MongoDB's developer data platform for the flexibility and ease of use of the document model, as well as for helpful tools like search and charts that simplify data management. As a crucial component of our Atlas for the Edge solution, Atlas Edge Server extends these capabilities to remote and network-constrained environments. First announced at MongoDB.local London 2023, Atlas for the Edge enables local data processing and management within edge environments and across edge devices, reducing latency, enhancing performance, and allowing for disconnection resilience.

What's new in public preview?

One of our top priorities is providing developers with a seamless experience when managing their data and applications. We continuously seek to enhance this experience, which is why, starting today, Atlas Edge Server can be directly downloaded, configured, and managed through the Atlas UI. Developers who deploy from the Atlas UI will be able to choose between two onboarding flows to ensure that their configuration is tailored to their needs. This includes both developers who want to connect to their edge server with a MongoDB driver or client, and those who want to connect to the Edge Server via Device Sync.

Why Atlas Edge Server?

While edge computing brings data processing closer to end users and offers substantial benefits, such as network resilience and increased security, a number of inherent challenges can make it difficult to leverage fully. Edge computing challenges include managing complex networks, handling large volumes of data, and addressing security concerns, any of which can deter organizations from adopting edge computing. Additionally, the costs associated with building, maintaining, and scaling edge computing systems can be significant. Atlas for the Edge and Atlas Edge Server alleviate these challenges.

Atlas Edge Server provides a MongoDB instance equipped with a synchronization server that can be deployed on local or remote infrastructure. It enables real-time synchronization, conflict resolution, and disconnection tolerance. This ensures that mission-critical applications and devices operate seamlessly, even with intermittent connectivity. Edge Server allows for selective synchronization of only modified fields, conserving bandwidth and prioritizing crucial data transfers to Atlas. It also maintains edge client functionality even with intermittent cloud connectivity, preventing disruptions to essential operations like inventory management and point-of-sale systems. Processing data locally reduces latency and enables rapid data insights, reducing dependency on central databases.

We'll meet you at the edge

The public preview of Atlas Edge Server underscores MongoDB's ongoing commitment to enhancing our developer data platform for distributed infrastructures. As we continue to invest in Atlas for the Edge, MongoDB's goal is to equip teams with a robust data solution that not only offers an exceptional developer experience but also empowers them to drive innovative solutions for their businesses and customers.

Get started today, or visit the Atlas for the Edge web page to learn more about how companies are benefiting from our edge solution.
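For the driver-based onboarding flow, an application on the local network can talk to Edge Server much as it would to any MongoDB deployment. A minimal, hedged sketch with the Node.js driver follows; the host, port, and namespace are hypothetical, so use the endpoint your own Edge Server deployment exposes.

```javascript
const { MongoClient } = require("mongodb");

// Point the driver at the locally deployed Edge Server instead of Atlas.
// Host and port are hypothetical placeholders.
const client = new MongoClient("mongodb://edge-server.local:27021");

async function run() {
  // Reads and writes are served locally, even while the site is offline;
  // Edge Server synchronizes changes with Atlas when connectivity returns.
  const inventory = client.db("store").collection("inventory");
  await inventory.updateOne(
    { sku: "A-1001" },
    { $inc: { quantity: -1 } },
    { upsert: true }
  );
  console.log(await inventory.findOne({ sku: "A-1001" }));
}

run().catch(console.error).finally(() => client.close());
```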

May 2, 2024
Updates

Workload Isolation for More Scalability and Availability: Search Nodes Now on Google Cloud

May 2, 2024 update: Announcing Search Nodes in preview on Microsoft Azure.

Today we're excited to take the next step in bringing scalable, dedicated architecture to your search experiences with the introduction of Atlas Search Nodes, now in general availability for Google Cloud.

Since our initial announcement of Search Nodes in June of 2023, we've been rapidly expanding access to the most scalable dedicated architecture, starting with general availability on AWS and now expanding to general availability on Google Cloud. We'd like to give you a bit more context on what Search Nodes are and why they're important to any search experience running at scale.

Search Nodes provide dedicated infrastructure for Atlas Search and Vector Search workloads, enabling even greater control over search workloads. They also allow you to isolate and optimize compute resources to scale search and database needs independently, delivering better performance at scale and higher availability.

One of the last things developers want to deal with when building and scaling apps is infrastructure problems. Any downtime or poor user experience can result in lost users or revenue, especially when it comes to your database and search experience. This is one of the reasons developers turn to MongoDB, given the ease of having one unified system for your database and search solution. With the introduction of Atlas Search Nodes, we've taken the next step in providing our builders with ultimate control, giving them the ability to remain flexible by scaling search workloads without the need to over-provision the database.

By isolating your search and database workloads while automatically keeping your search cluster data synchronized with operational data, Atlas Search and Atlas Vector Search eliminate the need to run a separate ETL tool, which takes time and effort to set up and is yet another failure point for your scaling app. This provides superior performance and higher availability while reducing architectural complexity and wasted engineering time recovering from sync failures. In fact, we've seen a 40% to 60% decrease in query time for many complex queries, while eliminating the chances of any resource contention or downtime.

With just a quick button click, Search Nodes on Google Cloud offer our existing Atlas Search and Vector Search users the following benefits:

- Higher availability
- Increased scalability
- Workload isolation
- Better performance at scale
- Improved query performance

We offer both compute-heavy search-specific nodes for relevance-based text search, as well as a memory-optimized option that is optimal for semantic and retrieval-augmented generation (RAG) production use cases with Atlas Vector Search. This makes resource contention or availability issues a thing of the past.

Search Nodes are easy to opt into and set up. To start, jump into the MongoDB Atlas UI and follow these steps:

1. Navigate to the "Database Deployments" section in the MongoDB Atlas UI.
2. Click the green "+Create" button.
3. On the "Create New Cluster" page, select Google Cloud and enable the "Multi-cloud, multi-region & workload isolation" option.
4. Toggle "Search Nodes for workload isolation" to enable it.
5. Select the number of nodes in the text box.
6. Check the agreement box.
7. Click "Create cluster."

For existing Atlas Search users, click "Edit Configuration" in the MongoDB Atlas Search UI and enable the toggle for workload isolation. The remaining steps are the same as noted above.

Jump straight into our docs to learn more!
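Once dedicated Search Nodes are serving a cluster, existing search queries run unchanged; the work simply lands on the isolated nodes. For reference, a relevance-based text search through the aggregation pipeline looks something like the sketch below, where the index name, collection, and field names are hypothetical.

```javascript
// A typical Atlas Search query; with Search Nodes enabled it executes on the
// dedicated search infrastructure rather than on the database nodes.
// Index name, collection, and field names are placeholders.
const results = await db.collection("products").aggregate([
  {
    $search: {
      index: "default",
      text: {
        query: "wireless noise cancelling headphones",
        path: ["name", "description"],
        fuzzy: { maxEdits: 1 },
      },
    },
  },
  { $limit: 10 },
  { $project: { name: 1, score: { $meta: "searchScore" } } },
]).toArray();
```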

March 28, 2024
Updates

AI-powered SQL Query Converter Tool is Now Available in Relational Migrator

When I traveled to Japan for the first time, it was shortly after translation apps on smartphones had really taken off. Even though I knew enough phrases to get by as a tourist, I was amazed at how empowered I was by being able to have smoother conversations and read signs more easily. The power of AI helped me understand a language I had only a passing familiarity with and drastically improved my experience in another country. I was able to spend more time enjoying myself and less time looking up common words and sentences in a phrase book. So what does this have to do with application modernization?

Transitioning from relational databases as part of a modernization effort is more than migrating data from a legacy database to a modern one. There is all the planning, designing, testing, refactoring, validating, and ongoing operation that makes modernization efforts a complex project to navigate successfully. MongoDB's free Relational Migrator tool has helped with many of these tasks, including schema design, data migration, and code generation, but we know this is just the beginning.

One of the most common challenges of migrating legacy applications to MongoDB is working with SQL queries, triggers, and stored procedures that are often undocumented and must be manually converted to MongoDB Query API syntax. This requires deep knowledge of both SQL and the MongoDB Query API, which is rare if teams are used to using only one system or the other. In addition, teams often have hundreds, if not thousands, of queries, triggers, and stored procedures that must be converted, which is extremely time-consuming and tedious. Doing these conversions manually would be like traveling abroad and looking up each object one by one in a phrase book instead of using a translation app.

Thankfully, with generative AI, we are finally able to get the modern version of the translation app on your phone. The latest release of Relational Migrator is able to use generative AI to help your developers quickly convert existing SQL queries, triggers, and stored procedures to work with MongoDB using your choice of programming language (JavaScript, C#, or Java). By automating the generation of development-ready MongoDB queries, your team can be more efficient by redirecting their time to more important testing and optimization efforts, accelerating your migration project. Teams that are familiar with SQL can also use the Query Converter to help close their MongoDB knowledge gap. The SQL objects they're familiar with are translated, making it easier to learn the new syntax by seeing the two side by side.

Let's take a closer look at how Query Converter can convert a SQL Server stored procedure to work with MongoDB.

Figure 1: The MongoDB Query Converter Dashboard

We'll start by importing the stored procedure from the relational database into our Relational Migrator project. This particular stored procedure joins the results from two tables, performs some arithmetic on some of the columns, and filters the results based on an input parameter.
```sql
CREATE PROCEDURE CustOrdersDetail @OrderID int
AS
SELECT ProductName,
    UnitPrice = ROUND(Od.UnitPrice, 2),
    Quantity,
    Discount = CONVERT(int, Discount * 100),
    ExtendedPrice = ROUND(CONVERT(money, Quantity * (1 - Discount) * Od.UnitPrice), 2)
FROM Products P, [Order Details] Od
WHERE Od.ProductID = P.ProductID and Od.OrderID = @OrderID
```

Developers who are experienced with the MongoDB aggregation framework would know that the equivalent method to join data from two collections is to use the $lookup stage. However, when migrating a relational database to MongoDB, it often makes sense to consolidate data from multiple tables into a single collection. In this example, we are doing exactly that by combining data from the Orders, Order Details, and Products tables into a single orders collection. This means that, when considering the changes to the schema, we do not actually need a $lookup stage at all, as the data from each of the required tables has already been merged into a single collection. Relational Migrator's Query Converter works alongside the schema mapping functionality and automatically adjusts the generated query to work against your chosen schema.

With JavaScript chosen as our target language, the converted query avoids the need for a costly join and includes MongoDB equivalents of our original SQL arithmetic functions. The query is now ready to test and include in our modernized app.

```javascript
const CustOrdersDetail = async (db, OrderID) => {
  return await db.collection('orders').aggregate([
    { $match: { orderId: OrderID } },
    { $unwind: '$lineItems' },
    {
      $project: {
        ProductName: '$product.productName',
        UnitPrice: { $round: ['$lineItems.unitPrice', 2] },
        Quantity: '$lineItems.quantity',
        Discount: { $multiply: ['$lineItems.discount', 100] },
        ExtendedPrice: {
          $round: [
            {
              $multiply: [
                '$lineItems.quantity',
                { $subtract: [1, '$lineItems.discount'] },
                '$lineItems.unitPrice'
              ]
            },
            2
          ]
        }
      }
    }
  ]).toArray();
};
```

Relational Migrator does more than just query conversion; it also assists with app code generation, data modeling, and data migration, which drastically cuts down on the time and effort required to modernize your team's applications. Just like a language translation app while traveling abroad, it can drastically improve your experience converting and understanding a new language or technology.

The new Query Converter tool is now available for free for anyone to try as part of a public preview in the Relational Migrator tool. Download Relational Migrator and try converting your SQL queries and stored procedures today.
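To sanity-check a converted query before wiring it into the application, you can call it directly from a small Node.js script. A sketch under stated assumptions: the connection string, database name, and sample order ID are hypothetical, and the orders collection is assumed to follow the consolidated schema described above.

```javascript
const { MongoClient } = require("mongodb");

async function main() {
  // Connection string and database name are placeholders.
  const client = new MongoClient(process.env.MONGODB_URI);
  try {
    const db = client.db("northwind");
    // Call the converted stored-procedure equivalent with a sample order ID.
    const lines = await CustOrdersDetail(db, 10248);
    console.table(lines);
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```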

March 25, 2024
Updates

Introducing Semantic Caching and a Dedicated MongoDB LangChain Package for Gen AI Apps

We are in an unprecedented time in history where developers can build transformative AI applications quickly, without being AI experts themselves. This ability is enabling new classes of applications that can better serve customers with conversational AI for assistance and automation, advanced reasoning and analysis using AI-powered retrieval, and recommendation systems.

Behind this revolution are large language models (LLMs) that can be prompted to solve a wide range of use cases. However, LLMs have various limitations, like knowledge cutoffs and a tendency to hallucinate. To overcome these limitations, they must be integrated with proprietary enterprise data sources to build reliable, relevant, and high-quality generative AI applications. That's where MongoDB plays a critical role in the modern generative AI stack.

Developers use MongoDB Atlas Vector Search as a vital part of the generative AI technique known as retrieval-augmented generation (RAG). RAG is the process of feeding LLMs the supplementary data necessary to ground their responses, ensuring they're dependable and precise. LangChain has been a critical part of this journey since the public launch of Atlas Vector Search, enabling developers to build better retriever systems powered by vector search and to store conversation history in the operational database.

Today, we are excited to announce support for two enhancements:

- Semantic caching powered by Atlas Vector Search, which improves the performance of your apps
- A dedicated LangChain-MongoDB package for Python and JS/TS developers, enabling them to build advanced applications even more efficiently

The MongoDB Atlas integration with LangChain can now power all the database requirements for building modern generative AI applications: vector search, semantic caching (currently only available in Python), and conversation history. Earlier, we announced the launch of MongoDB LangChain Templates, which enable developers to quickly deploy RAG applications, and provided a reference implementation of a basic RAG template using MongoDB Atlas Vector Search and OpenAI, as well as a more advanced parent-document retrieval RAG template using MongoDB Atlas Vector Search. We are excited about our partnership with LangChain and will continue innovating.

Improve LLM application performance with semantic cache

A semantic cache improves the performance of LLM applications by caching responses based on the semantic meaning or context within the queries themselves. This is different from a traditional cache, which works on exact keyword matching. In the era of LLMs, the value of semantic caching is increasing tremendously, enabling sophisticated user experiences that closely mimic human interactions. For example, if two different users enter the two different prompts "give me suggestions for a comedy movie" and "recommend a comedy movie," the semantic cache can understand that the intent behind the queries is the same and return a similar response, even though different keywords are used, whereas a traditional cache would fail.

Figure 1: Semantic cache using MongoDB Atlas Vector Search

Check out the video walkthrough for the semantic cache.

Accelerate development with a dedicated package

With a dedicated LangChain-MongoDB package, MongoDB is even more deeply integrated with LangChain. The Python and JavaScript packages contain the following LangChain integrations: MongoDBAtlasVectorSearch (Vector stores) and MongoDBChatMessageHistory (Chat Messages Memory).
In addition, the Python package includes MongoDBAtlasSemanticCache (LLM caching). The new package langchain-mongodb contains all the MongoDB-specific implementations and needs to be installed separately from langchain, which includes all the core abstractions. Earlier, everything was in the same package, making it challenging to correctly version and communicate which version should be used and whether any breaking changes were made.

Find out more about the langchain-mongodb package:

- Python: Source code, LangChain docs, MongoDB docs
- JavaScript: Source code, LangChain.js docs, MongoDB docs

Get started today

Check out the accompanying tutorial and notebook on building advanced RAG with MongoDB and LangChain, which contains a walkthrough and use cases for using semantic cache, vector search, and chat message history. Check out the "PDFtoChat" app to see langchain-mongodb JS in action. It allows you to have a conversation with your proprietary PDFs using AI and is built with MongoDB Atlas, LangChain.js, and TogetherAI. It's an end-to-end SaaS-in-a-box app and includes user authentication, saving PDFs, and saving chats per PDF.

Read the excellent overview of semantic caching using LangChain and MongoDB.
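As a rough sketch of the JavaScript side, here is how the vector store and chat message history classes named above might be wired together. The package is assumed to be published as @langchain/mongodb, and the connection string, namespace, index name, and session ID are placeholders; constructor options may differ between releases, so verify against the LangChain.js docs.

```javascript
import { MongoClient } from "mongodb";
import { OpenAIEmbeddings } from "@langchain/openai";
import { MongoDBAtlasVectorSearch, MongoDBChatMessageHistory } from "@langchain/mongodb";

// Placeholders: connection string, namespace, index name, and session ID.
const client = new MongoClient(process.env.MONGODB_ATLAS_URI);
const collection = client.db("rag_demo").collection("chunks");

// Vector store backed by Atlas Vector Search (an index is assumed to exist
// on the "embedding" field of this collection).
const vectorStore = new MongoDBAtlasVectorSearch(new OpenAIEmbeddings(), {
  collection,
  indexName: "vector_index",
  textKey: "text",
  embeddingKey: "embedding",
});

// Retrieve the chunks most relevant to a user question for RAG.
const docs = await vectorStore.similaritySearch("How do I rotate credentials?", 4);

// Conversation history stored in the same operational database.
const history = new MongoDBChatMessageHistory({
  collection: client.db("rag_demo").collection("chat_history"),
  sessionId: "user-123",
});
```

The Python package mirrors these classes and adds MongoDBAtlasSemanticCache for LLM response caching.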

March 20, 2024
Updates

Announcing Search Index Management in MongoDB Compass

You can now create and manage Atlas Search and Atlas Vector Search indexes on the interface many of you know and love: MongoDB Compass. Seamlessly build full-text and semantic search applications on top of your Atlas database, delivering swift and relevant results for a range of use cases including e-commerce sites, customer support chatbots, recommendation systems, and more. Gone are the days of juggling multiple tools to bring your search queries to fruition. And, with a variety of templates to choose from, Compass simplifies learning search index syntax so you can focus on what's most important to you: building exceptional end-user experiences on top of your search queries.

Try it out

To get started, connect to an Atlas cluster from Compass. If you don't have one, sign up. From there, simply navigate to Compass' Indexes tab and select Create Search Index. It's easy to build your first search index using one of our templates. Select either Search or Vector Search, and use the appropriate template. In this example, we're going to create a Vector Search index. Once you're satisfied with your index definition, click Aggregate to start testing out your pipeline in Compass. Compass' new search index experience leads you to results in just three guided steps, all without leaving the comfort of Compass.

To learn more about search indexing in Compass, visit our documentation. If you have feedback about Compass' search index experience, let us know on our feedback forum. Happy indexing!
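For reference, a Vector Search index definition like the ones the Compass templates start you from typically looks something like the sketch below. The field path, dimension count, and similarity function are placeholders to adjust for your own embedding model.

```javascript
// A typical Atlas Vector Search index definition (placeholder values).
// The "filter" entry lets $vectorSearch queries pre-filter on metadata fields.
const vectorIndexDefinition = {
  fields: [
    {
      type: "vector",
      path: "plot_embedding",
      numDimensions: 1536,   // must match your embedding model's output size
      similarity: "cosine",  // or "euclidean" / "dotProduct"
    },
    { type: "filter", path: "genres" },
  ],
};
```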

March 18, 2024
Updates

Atlas Data Federation and Online Archive Can Now Be Deployed in Azure

Exciting developments are on the horizon for users of Microsoft Azure, marking a significant leap in data management capabilities. First off, Atlas Data Federation is now generally available on Azure. This means you can now deploy it directly within Azure and even query data from Microsoft Azure Blob Storage. And that's not all. We've also launched the general availability of Atlas Online Archive on Azure. These advancements usher in a new era of efficient archiving solutions for Azure-based data solutions. Both updates are big steps forward in making data management on Azure more powerful and flexible. Let's dive into what this means for you!

Azure support in Atlas Data Federation (General Availability)

With Atlas Data Federation, users can seamlessly query, transform, and create views across multiple Atlas databases and cloud object storage solutions, such as Amazon S3 and now Microsoft Azure Blob Storage. This feature, previously exclusive to AWS, is a game-changer, allowing direct deployment within Azure and the ability to tap into Microsoft Azure Blob Storage for data insights.

Figure 1: Tap into Azure Blob Storage easily from the Atlas UI

Key features of Atlas Data Federation:

- Cloud flexibility: Choose between AWS and Azure for hosting federated database instances.
- Diverse data sources: Incorporate MongoDB Atlas clusters or Azure storage solutions (Azure Blob Storage and Azure Data Lake Storage Gen2) as data sources for comprehensive queries, including cross-region.
- Advanced aggregation: Comprehensive aggregation capabilities with operators including $match, $lookup, $queryHistory, $merge, and $out, with direct $out support for Azure Blob Storage and Azure Data Lake Storage Gen2.
- Atlas SQL queries on Azure: Execute SQL queries on Azure, integrating MongoDB data for a unified analysis experience.

Atlas Data Federation simplifies accessing and analyzing complex data sets by combining data across multiple sources into a single, federated view, providing valuable insights for more informed business decisions. Explore Atlas Data Federation on Azure today.

Azure support in Atlas Online Archive (General Availability)

Atlas Online Archive's expansion to Azure ensures that data tiering is not only efficient but also integrated, keeping archival data within the Azure ecosystem. This integration addresses the previous limitation of defaulting to AWS for storage, even for Azure-hosted clusters.

Figure 2: Seamlessly select Azure when choosing a region

Key features of Atlas Online Archive:

- Provider choice: Opt for AWS or Azure to align with your cloud strategy.
- Automatic archiving: Set rules to move older data to cost-effective cloud storage automatically, eliminating manual offloading.
- Unified querying endpoint: Access all data through a single endpoint, ensuring quick insights without compromising data availability.
- Integrated MongoDB Atlas UI management: Manage your data tiering and archiving within the familiar Atlas interface, streamlining operations and maintenance.

Seamlessly manage your MongoDB Atlas data tiering at scale with Atlas Online Archive. Atlas Online Archive empowers you to manage your data lifecycle efficiently, balancing cost and accessibility with ease.

Finally, here are a few points to consider:

- Any newly created archive on an Azure cluster on or after 02/28 will default to Azure regions. Note that storage regions for Online Archive will default to Azure only if there are no pre-existing AWS archives on that specific Azure cluster.
- If there are any pre-existing AWS Online Archives on Azure clusters, then all newly created archives on that specific cluster will remain on AWS.
- Cloud providers and storage regions cannot be edited or modified once configured.

Embrace the full potential of Atlas Online Archive on Azure today. We're thrilled to support your data management journey, offering enhanced control and flexibility over your data through these new Azure capabilities. As MongoDB Atlas continues to expand as a multi-cloud solution, we're here to ensure your data strategy is as dynamic and versatile as your business needs.

For guidance on getting started, check out our documentation on Atlas Data Federation or Atlas Online Archive. Thank you for trusting MongoDB Atlas as your developer data platform. Welcome to the future of multi-cloud data management!

February 29, 2024
Updates

Announcing the GA of the Atlas Device SDK for C++

MongoDB's developer data platform was designed to offer unparalleled flexibility and scalability for developers. By streamlining the integration of complex data structures and real-time analytics, and accelerating the development and deployment of mission-critical applications, its adoption has added significant value to businesses across industries. Today we continue our mission to provide the best experience for developers and are excited to announce the general availability (GA) of the Atlas Device SDK for C++. The updates in this release come after numerous iterations guided by feedback from our preview users, and they target performance and portability.

The Atlas Device SDK for C++ enables developers to effortlessly store data on devices for offline access while seamlessly synchronizing data to and from the MongoDB Atlas cloud within their C++ applications. It serves as a user-friendly alternative to SQLite, offering simplicity due to its object-oriented database nature and removing the necessity for a separate mapping layer or ORM. Aligned with MongoDB's developer data platform mission of streamlining the development process, the C++ SDK incorporates networking retry logic and advanced conflict-merging functionality, eliminating the traditional need for writing and maintaining extensive and complex synchronization code.

Why choose the Atlas Device SDK for C++?

The Atlas Device SDK for C++ is particularly well suited for applications in embedded devices, IoT, and cross-platform scenarios. It serves as a comprehensive object-oriented persistence layer for edge, mobile, and embedded devices, offering built-in support for synchronization with MongoDB Atlas as a cloud backend. In the evolving landscape of connected and smart devices, the demand for more data, including historical data for automated decision-making, highlights the importance of efficient persistence layers and real-time cloud-syncing technologies that are robust to changing network connections and outages.

The database included in the Atlas Device SDK for C++ comes with over a decade of history and is a mature, feature-rich, and enterprise-ready technology, integrated into tens of thousands of applications on Google Play and the Apple App Store with billions of downloads. Its lightweight design is optimized for resource-constrained environments, taking factors like compute, memory, bandwidth, and battery usage into account. Embedding the SDK directly into application code eliminates the need for additional deployment tasks and simplifies the development process. The fully object-oriented nature of the SDK guides the data modeling, providing a straightforward and idiomatic approach. This stands in contrast to alternative technologies like the SQLite database, which require an object-relational mapping library, adding complexity and making future development, maintenance, and debugging more challenging. Furthermore, the SDK's underlying data store enables seamless integration with reactive UI layers across various environments. In the Atlas Device SDK for C++ we give examples of how to integrate with the Qt framework, but other UI layers can also be added.

Improvements in the GA release

The new API was developed based on performance measurements, with a coordinated focus and effort to improve the read/write operations of the data layer. There has been great interest from major automotive and manufacturing OEMs, and this feedback has been invaluable in guiding our final API. Some of the changes added to the Atlas Device SDK for C++ include:

- Aligning our APIs with other Atlas Device SDKs, e.g., improved control of the database state with monitoring and manual compaction
- HTTP tunneling
- Better control over Atlas Device Sync sessions
- Windows support
- Compatibility with OpenWRT, among other Linux distributions, by supporting musl
- Android Automotive support with Blueprint/Soong build files

What's next

Looking ahead, we are working toward geospatial support as well as the ability to build with a variety of package managers such as vcpkg and Conan. We welcome and value all feedback; if you have any comments or suggestions, please share them via our GitHub project.

Ready to get started? Install the Atlas Device SDK for C++ and start your journey with our docs, or jump right into example projects with source code. Then, register for Atlas to connect to Atlas Device Sync, a fully managed mobile backend as a service. Leverage out-of-the-box infrastructure, data synchronization capabilities, network handling, and much more to quickly launch enterprise-grade mobile apps. Finally, let us know what you think, and get involved in our forums. See you there!

February 22, 2024
Updates
