MongoDB Blog

Announcements, updates, news, and more

Gamuda Puts AI in Construction with MongoDB Atlas

Gamuda Berhad is a leading Malaysian engineering and construction company with operations across the world, including Australia, Taiwan, Singapore, Vietnam, and the United Kingdom. The company is known for its innovative approach to construction through the use of cutting-edge technology.

Speaking at MongoDB.local Kuala Lumpur in August 2024, John Lim, Chief Digital Officer at Gamuda, said: “In the construction industry, AI is increasingly being used to analyze vast amounts of data, from sensor readings on construction equipment to environmental data that impacts project timelines.”

One of Gamuda’s priorities is determining how AI and other tools can impact the company’s methods for building large projects across the world. For that, the Gamuda team needed the right infrastructure, with a database equipped to handle the demands of modern AI-driven applications. MongoDB Atlas fulfilled all the requirements and enabled Gamuda to deliver on its AI-driven goals.

Why Gamuda chose MongoDB Atlas

“Before MongoDB, we were dealing with a lot of different databases and we were struggling to do even simple things such as full-text search,” said Lim. “How can we have a tool that's developer-friendly, helps us scale across the world, and at the same time helps us to build really cool AI use cases, where we're not thinking about the infrastructure or worrying too much about how things work but are able to just focus on the use case?”

After some initial conversations with MongoDB, Lim’s team saw that MongoDB Atlas could help it streamline its technology stack, which was becoming complex and time-consuming to manage. MongoDB Atlas provided the optimal balance between ease of use and powerful functionality, enabling the company to focus on innovation rather than database administration.

“I think the advantage that we see is really the speed to market. We are able to build something quickly. We are fast to meet the requirements to push something out,” said Lim.

Chi Keen Tan, Senior Software Engineer at Gamuda, added: “The team was able to use a lot of developer tools like MongoDB Compass, and we were quite amazed by what we can do. This [ability to search the items within the database easily] is just something that’s missing from other technologies.”

Being able to operate MongoDB on Google Cloud was also a key selling point for Gamuda: “We were able to start on MongoDB without any friction of having to deal with a lot of contractual problems and billing and setting all of that up,” said Lim.

How MongoDB is powering more AI use cases

Gamuda uses MongoDB Atlas and features such as Atlas Search and Vector Search to bring a number of AI use cases to life. This includes work implemented on Gamuda’s Bot Unify platform, which Gamuda built in-house using MongoDB Atlas as the database. By using documents stored in SharePoint and other systems, this platform helps users write tenders more quickly, find out about employee benefits more easily, or discover ways to improve design briefs. “It’s quite incredible. We have about 87 different bots now that people across the company have developed,” Lim said.

Additionally, the team has developed the Gamuda Digital Operating System (GDOS), which can optimize various aspects of construction, such as predictive maintenance, resource allocation, and quality control. MongoDB’s ability to handle large volumes of data in real time is crucial for these applications, enabling Gamuda to make data-driven decisions that improve efficiency and reduce costs.
Specifically, MongoDB Atlas Vector Search enables Gamuda’s AI models to quickly and accurately retrieve relevant data, improving the speed and accuracy of decision-making. It also helps the Gamuda team find patterns and correlations in the data that might otherwise go unnoticed.

Gamuda’s journey with MongoDB Atlas is just beginning as the company continues to explore new ways to integrate technology into its operations and expand to other markets. To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start page.
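To make this concrete, here is a minimal sketch in Python of the kind of Atlas Vector Search aggregation that powers retrieval like this. The connection string, database, collection, field, and index names are hypothetical, and the query embedding is assumed to come from whichever embedding model you use.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
collection = client["docs_db"]["documents"]  # hypothetical database/collection

# Assumed to come from your embedding model; truncated for illustration.
query_embedding = [0.02, -0.17, 0.31]

pipeline = [
    {
        # $vectorSearch requires an Atlas Vector Search index on the field
        "$vectorSearch": {
            "index": "vector_index",       # hypothetical index name
            "path": "embedding",           # field holding the stored vectors
            "queryVector": query_embedding,
            "numCandidates": 100,          # candidates considered before ranking
            "limit": 5,                    # top results returned
        }
    },
    # Surface the similarity score alongside each document.
    {"$project": {"title": 1, "text": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in collection.aggregate(pipeline):
    print(doc["title"], doc["score"])
```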

October 22, 2024
Applied

Empower Innovation in Insurance with MongoDB and Informatica

For insurance companies, determining the right technology investments can be difficult, especially in today's climate where technology options are abundant but their future is uncertain. As is the case with many large insurers, there is a need to consolidate complex and overlapping technology portfolios. At the same time, insurers want to make strategic, future-proof investments to maximize the return on their IT expenditures. What does the future hold, however? Enter scenario planning.

Using the art of scenario planning, we can find some constants in a sea of uncertain variables, and we can more wisely steer the organization when it comes to technology choices. Consider the following scenarios:

- Regulatory disruption: A sudden regulatory change forces re-evaluation of an entire market or offering.
- Market disruption: Vendor and industry alliances and partnerships create disruption and opportunity.
- Tech disruption: A new CTO directs a shift in the organization's cloud and AI investments, aligning with a revised business strategy.

What if you knew that one of these three scenarios was going to play out in your company but weren’t sure which one? How would you invest now to prepare? At the same time that insurers are grappling with technology choices, they’re also facing clashing priorities:

- Running the enterprise: supporting business imperatives and maintaining the health and security of systems.
- Innovating with AI: maintaining a competitive position by investing in AI technologies.
- Optimizing spend: minimizing technology sprawl and technical debt while maximizing business outcomes.

Data modernization

What is the common thread among all these plausible future scenarios? How can insurers apply scenario-planning principles while bringing diverging forces into alignment? There is one constant in each scenario, and that’s the organization’s data—if it’s hard to work with, any future scenario will be burdened by this fact. One of the most critical strategic investments an organization can make is to ensure data is easy to work with. Today, we refer to this as data modernization, which involves removing the friction that manifests itself in data processing and ensuring data is current, secure, and adaptable. For developers, who are closest to the data, this means enabling them with a seamless and fully integrated developer data platform along with a flexible data model.

In the past, data models and databases would remain unchanged for long periods. Today, this approach is outdated. Consolidation creates a data model problem, resulting in a portfolio with relational, hierarchical, and file-based data models—or, worst of all, a combination of all three. Add to this the increased complexity that comes with relational models, including supertype-subtype conditional joins and numerous data objects, and you can see how organizations wind up with a patchwork of data models and an overly complicated data architecture.

A document database, like MongoDB Atlas, stores data in documents and is often referred to as a non-relational (or NoSQL) database. The document model offers a variety of advantages and specifically excels in data consolidation and agility:

- Serves as the superset of all other data model types (relational, hierarchical, file-based, etc.)
- Consolidates data assets into elegant single views, capable of accommodating any data structure, format, or source (a brief sketch of such a single-view document appears at the end of this post)
- Supports agile development, allowing for quick incorporation of new and existing data
- Eliminates the lengthy change cycles associated with rigid, single-schema relational approaches
- Makes data easier to work with, promoting faster application development

By adopting the document model, insurers can streamline their data operations, making their technology investments more efficient and future-proof.

The challenges of making data easier to work with include data quality. One significant hurdle insurers continue to face is the lack of a unified view of customers, products, and suppliers across various applications and regions. Data is often scattered across multiple systems and sources, leading to discrepancies and fragmented information. Even with centralized data, inconsistencies may persist, hindering the creation of a single, reliable record. For insurers to drive better reporting, analytics, and AI, there's a need for a shared data source that is accurate, complete, and up to date. Centralized data is not enough; it must be managed, reconciled, standardized, cleansed, and enriched to maintain its integrity for decision-making.

Mastering data management across countless applications and sources is complex and time-consuming. Success in master data management (MDM) requires business commitment and a suite of tools for data profiling, quality, and integration. Aligning these tools with business use cases is essential to extract the full value from MDM solutions, although the process can be lengthy.

Informatica’s MDM solution and MongoDB

Informatica’s MDM solution has been developed to answer the key questions organizations face when working with their customer data:

- “How do I get a 360-degree view of my customer, partner, and supplier data?”
- “How do I make sure that my data is of the highest quality?”

The Informatica MDM platform helps ensure that organizations around the world can confidently use their data and make business decisions based on it. Informatica’s entire MDM solution is built on MongoDB Atlas, including its AI engine, CLAIRE.

Figure 1: Everything you need to modernize the practice of master data management.

Informatica MDM solves the following challenges:

- Consolidates data from overlapping and conflicting data sources.
- Identifies data quality issues and cleanses data.
- Provides governance and traceability of data to ensure transparency and trust.

Insurance companies typically have several claims systems that they’ve amassed over the years through acquisitions, with each one containing customer data. The ability to relate that data together and ensure it’s of the highest quality enables insurers to overcome data challenges. MDM capabilities are essential for insurers who want to make informed decisions based on accurate and complete data. Below are some of the different use cases for MDM:
- Modernize legacy systems and processes (e.g., claims or underwriting) by effectively collecting, storing, organizing, and maintaining critical data
- Improve data security and strengthen fraud detection and prevention
- Manage customer data effectively for omni-channel engagement and cross-sell or up-sell
- Manage data for compliance, avoiding or predicting possible regulatory issues in advance

“Given we already leverage the performance and scale of MongoDB Atlas within our cloud-native MDM SaaS solution and share a common focus on high-value industry solutions, this partnership was a natural next step. Now, as a strategic MDM partner of MongoDB, we can help customers rapidly consolidate and sunset multiple legacy applications for cloud-native ones built on a trusted data foundation that fuels their mission-critical use cases.”
Rik Tamm-Daniels, VP of Strategic Ecosystems and Technology at Informatica

Taking the next step

For insurance companies navigating the complexities of modern technology and data management, MDM combined with powerful tools like MongoDB and Informatica provides a strategic advantage. As insurers face an uncertain future with potential regulatory, market, and technological disruptions, investing in a robust data infrastructure becomes essential. MDM ensures that insurers can consolidate and cleanse their data, enabling accurate, trustworthy insights for decision-making. By embracing data modernization and the flexibility of document databases like MongoDB, insurers can future-proof their operations, streamline their technology portfolios, and remain agile in an ever-changing landscape. Informatica’s MDM solution, underpinned by MongoDB Atlas, offers the tools needed to master data across disparate systems, ensuring high-quality, integrated data that drives better reporting, analytics, and AI capabilities.

If you would like to discover more about how MongoDB and Informatica can help you on your modernization journey, take a look at the following resources:

- Unify data across the enterprise for a contextual 360-degree view and AI-powered insights with Informatica’s MDM solution
- Automating digital underwriting with machine learning
- Claim management using LLMs and vector search for RAG
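As referenced in the list of document model advantages above, here is a minimal sketch, in Python, of the kind of consolidated single-view record the document model makes natural. The collection and every field name are hypothetical, invented for illustration; a real MDM-mastered record would also carry lineage and survivorship metadata.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
customers = client["insurance"]["customers"]  # hypothetical collection

# One document consolidates what might span a dozen relational tables:
# policies, claims, and contact points embedded alongside the core identity.
customer_360 = {
    "customerId": "C-104233",
    "name": {"first": "Jane", "last": "Doe"},
    "contacts": [
        {"type": "email", "value": "jane.doe@example.com"},
        {"type": "phone", "value": "+1-555-0100"},
    ],
    "policies": [
        {"policyId": "P-88201", "line": "auto", "premium": 1260.00},
        {"policyId": "P-91544", "line": "home", "premium": 980.00},
    ],
    "claims": [
        {"claimId": "CL-3321", "policyId": "P-88201", "status": "settled", "amount": 4200.00},
    ],
    "sourceSystems": ["legacy_claims_a", "crm_emea"],  # trace back to consolidated sources
}

customers.insert_one(customer_360)

# The embedded structure means a full customer view is a single lookup,
# not a multi-table join.
print(customers.find_one({"customerId": "C-104233"}))
```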

October 22, 2024
Applied

Built With MongoDB: Buzzy Makes AI Application Development More Accessible

AI adoption rates are sky-high and showing no signs of slowing down. One of the driving forces behind this explosive growth is the increasing popularity of low- and no-code development tools that make this transformative technology more accessible to tech novices. Buzzy, an AI-powered no-code platform that aims to revolutionize how applications are created, is one such company. Buzzy enables anyone to transform an idea into a fully functional, scalable web or mobile application in minutes. Buzzy developers use the platform for a wide range of use cases, from a stock portfolio tracker to an AI t-shirt store. The only way the platform could support such diverse applications is by being built upon a uniquely versatile data architecture. So it’s no surprise that the company chose MongoDB Atlas as its underlying database.

Creating the buzz

Buzzy’s mission is simple but powerful: to democratize the creation of applications by making the process accessible to everyone, regardless of technical expertise. Founder Adam Ginsburg—a self-described husband, father, surfer, geek, and serial entrepreneur—spent years building solutions for other businesses. After building and selling an application that eventually became the IBM Web Content Manager, he created a platform allowing anyone to build custom applications quickly and easily. Buzzy initially focused on white-label technology for B2B applications, which global vendors brought to market. Over time, the platform evolved into something much bigger.

The traditional method of developing software, as Ginsburg puts it, is dead. Ginsburg observed two major trends that contributed to this shift: the rise of artificial intelligence (AI) and the design-centric approach to product development exemplified by tools like Figma.

Buzzy set out to address two major problems. First, traditional software development is often slow and costly. Small-to-medium-sized business (SMB) projects can cost anywhere from $50,000 to $250,000 and take nine months to complete. Due to these high costs and lengthy timelines, many projects either fail to start or run out of resources before they’re finished. The second issue is that while AI has revolutionized many aspects of development, it isn’t a cure-all for generating vast amounts of code. Generating tens of thousands of lines of code using AI is not only unreliable but also lacks the security and robustness that enterprise applications demand. Additionally, the code generated by AI often can’t be maintained or supported effectively by IT teams. This is where Buzzy found a way to harness AI effectively, using it in a co-pilot mode to create maintainable, scalable applications.

Buzzy’s original vision was focused on improving communication and collaboration through custom applications. Over time, the platform’s mission shifted toward no-code development, recognizing that these custom apps were key drivers of collaboration and business effectiveness. The Buzzy UX is highly streamlined so even non-technical users can leverage the power of AI in their apps.

Initially, Buzzy's offerings were somewhat rudimentary, producing functional but unpolished B2B apps. However, the platform soon evolved. Instead of building its own user experience (UX) and user interface (UI) capabilities, Buzzy integrated with Figma, giving users access to the design-centric workflow they were already familiar with. The advent of large language models (LLMs) provided another boost to the platform, enabling Buzzy to accelerate AI-powered development.
What sets Buzzy apart is its unique approach to building applications. Unlike traditional development, where code and application logic are often intertwined, Buzzy separates the "app definition" from the "core code." This distinction allows for significant benefits, including scalability, maintainability, and better integration with AI. Instead of handing massive chunks of code to an AI system—which can result in errors and inefficiencies—Buzzy gives the AI a concise, consumable description of the application, making it easier to work with. Meanwhile, the core code, written and maintained by humans, remains robust, secure, and high-performing. This approach not only simplifies AI integration but also ensures that updates made to Buzzy’s core code benefit all customers simultaneously, an efficiency that few traditional development teams can achieve.

Flexible platform, fruitful partnership

The partnership between Buzzy and MongoDB has been crucial to Buzzy’s success. MongoDB’s Atlas developer data platform provides a scalable, cost-effective solution that supports Buzzy’s technical needs across various applications. One of the standout features of MongoDB Atlas is its flexibility and scalability, which allows Buzzy to customize schemas to suit the diverse range of applications the platform supports. Additionally, MongoDB’s support—particularly with new features like Atlas Vector Search—has allowed Buzzy to grow and adapt without complicating its architecture.

In terms of technology, Buzzy’s stack is built for flexibility and performance. The platform uses Kubernetes and Docker running on Node.js with MongoDB as the database. Native clients are powered by React Native, using SQLite and WebSockets for communication with the server. On the AI side, Buzzy leverages several models, with OpenAI as the primary engine for fine-tuning its AI capabilities.

Thanks to the MongoDB for Startups program, Buzzy has received critical support, including Atlas credits, consulting, and technical guidance, helping the startup continue to grow and scale. With the continued support of MongoDB and an innovative approach to no-code development, Buzzy is well positioned to remain at the forefront of the AI-driven application development revolution.

A Buzzy future

Buzzy embodies the spirit of innovation in its own software development lifecycle (SDLC). The company is about to release two game-changing features that will take AI-driven app development to the next level: Buzzy FlexiBuild, which will allow users to build more complex applications using just AI prompts, and Buzzy Automarkup, which will allow Figma users to easily mark up screens, views, lists, forms, and actions with AI in minutes.

Ready to start bringing your own app visions to life? Try Buzzy and start building your application in minutes for free. To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start guide.

October 18, 2024
Applied

Announcing Hybrid Search Support for LlamaIndex

MongoDB is excited to announce enhancements to our LlamaIndex integration. By combining MongoDB’s robust database capabilities with LlamaIndex’s innovative framework for context-augmented large language models (LLMs), the enhanced MongoDB-LlamaIndex integration unlocks new possibilities for generative AI development. Specifically, it supports vector (powered by Atlas Vector Search), full-text (powered by Atlas Search), and hybrid search, enabling developers to blend precise keyword matching with semantic search for more context-aware applications, depending on their use case.

Building AI applications with LlamaIndex

LlamaIndex is one of the world’s leading AI frameworks for building with LLMs. It streamlines the integration of external data sources, allowing developers to combine LLMs with relevant context from various data formats. This makes it ideal for building application features like retrieval-augmented generation (RAG), where accurate, contextual information is critical. LlamaIndex empowers developers to build smarter, more responsive AI systems while reducing the complexities involved in data handling and query management. Advantages of building with LlamaIndex include:

- Simplified data ingestion with connectors that integrate structured databases, unstructured files, and external APIs, removing the need for manual processing or format conversion.
- Organizing data into structured indexes or graphs, significantly enhancing query efficiency and accuracy, especially when working with large or complex datasets.
- An advanced retrieval interface that responds to natural language prompts with contextually enhanced data, improving accuracy in tasks like question-answering, summarization, or data retrieval.
- Customizable APIs that cater to all skill levels—high-level APIs enable quick data ingestion and querying for beginners, while lower-level APIs offer advanced users full control over connectors and query engines for more complex needs.

MongoDB's LlamaIndex integration

Developers are able to build powerful AI applications using LlamaIndex as a foundational AI framework alongside MongoDB Atlas as the long-term memory database. With MongoDB’s developer-friendly document model and powerful vector search capabilities within MongoDB Atlas, developers can easily store and search vector embeddings for building RAG applications. And because of MongoDB’s low-latency transactional persistence capabilities, developers can do a lot more with the MongoDB integration in LlamaIndex to build AI applications in an enterprise-grade manner.

LlamaIndex's flexible architecture supports customizable storage components, allowing developers to leverage MongoDB Atlas as a powerful vector store and a key-value store. By using Atlas Vector Search capabilities, developers can:

- Store and retrieve vector embeddings efficiently (llama-index-vector-stores-mongodb)
- Persist ingested documents (llama-index-storage-docstore-mongodb)
- Maintain index metadata (llama-index-storage-index-store-mongodb)
- Store key-value pairs (llama-index-storage-kvstore-mongodb)

Figure adapted from Liu, Jerry and Agarwal, Prakul (May 2023). “Build a ChatGPT with your Private Data using LlamaIndex and MongoDB”. Medium. https://medium.com/llamaindex-blog/build-a-chatgpt-with-your-private-data-using-llamaindex-and-mongodb-b09850eb154c

Adding hybrid and full-text search support

Developers may use different approaches to search for different use cases.
Full-text search retrieves documents by matching exact keywords or linguistic variations, making it efficient for quickly locating specific terms within large datasets, such as in legal document review where exact wording is critical. Vector search, on the other hand, finds content that is ‘semantically’ similar, even if it does not contain the same keywords. Hybrid search combines full-text search with vector search to identify both exact matches and semantically similar content. This approach is particularly valuable in advanced retrieval systems or AI-powered search engines, enabling results that are both precise and aligned with the needs of the end user.

This integration makes it simple for developers to try out powerful retrieval capabilities on their data and improve the accuracy of their AI applications. In the LlamaIndex integration, the MongoDBAtlasVectorSearch class is used for vector search. To enable full-text search, use VectorStoreQueryMode.TEXT_SEARCH in the same class; similarly, to use hybrid search, enable VectorStoreQueryMode.HYBRID (a short sketch appears at the end of this post). To learn more, check out the GitHub repository.

With the MongoDB-LlamaIndex integration’s support, developers no longer need to navigate the intricacies of Reciprocal Rank Fusion implementation or determine the optimal way to combine vector and text searches—we’ve taken care of the complexities for you. The integration also includes sensible defaults and robust support, ensuring that building advanced search capabilities into AI applications is easier than ever. This means that MongoDB handles the intricacies of storing and querying your vectorized data, so you can focus on building!

We’re excited for you to work with our LlamaIndex integration. Here are some resources to expand your knowledge on this topic:

- Check out how to get started with our LlamaIndex integration
- Build a content recommendation system using MongoDB and LlamaIndex with our helpful tutorial
- Experiment with building a RAG application with LlamaIndex, OpenAI, and our vector database
- Learn how to build with private data using LlamaIndex, guided by one of its co-founders
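As referenced above, switching between query modes is a one-argument change. The sketch below assumes the llama-index-vector-stores-mongodb package is installed, that Atlas Vector Search and Atlas Search indexes already exist on the collection, and that an embedding model is configured for LlamaIndex (e.g., via OPENAI_API_KEY); the connection string and all names are placeholders, and constructor arguments may vary slightly between package versions.

```python
import pymongo
from llama_index.core import VectorStoreIndex
from llama_index.core.vector_stores.types import VectorStoreQueryMode
from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch

client = pymongo.MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")

# Point the store at existing Atlas Vector Search and Atlas Search indexes.
vector_store = MongoDBAtlasVectorSearch(
    client,
    db_name="rag_db",                  # placeholder names throughout
    collection_name="documents",
    vector_index_name="vector_index",
    fulltext_index_name="search_index",
)

index = VectorStoreIndex.from_vector_store(vector_store)

# Default: pure vector (semantic) search.
vector_engine = index.as_query_engine()

# Full-text search: exact keyword matching via Atlas Search.
text_engine = index.as_query_engine(
    vector_store_query_mode=VectorStoreQueryMode.TEXT_SEARCH
)

# Hybrid search: blend both result sets (alpha weights vector vs. text).
hybrid_engine = index.as_query_engine(
    vector_store_query_mode=VectorStoreQueryMode.HYBRID,
    alpha=0.5,
)

print(hybrid_engine.query("What are our parental leave benefits?"))
```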

October 17, 2024
Updates

Strengthen Data Security with MongoDB Queryable Encryption

MongoDB Queryable Encryption is a groundbreaking, industry-first innovation developed by the MongoDB Cryptography Research Group that allows customers to encrypt sensitive application data, store it securely in an encrypted state in the MongoDB database, and perform equality and range queries directly on the encrypted data—with no cryptography expertise required. Adding range query support to Queryable Encryption significantly enhances data retrieval capabilities by enabling more flexible and powerful searches. Queryable Encryption is available in MongoDB Atlas, Enterprise Advanced, and Community Edition.

Encryption: Protecting data through every stage of its lifecycle

Encryption is a critical security method for ensuring protection of sensitive data and compliance with regulations like GDPR, CCPA, and HIPAA. It involves rendering data unreadable to anyone without the decryption key. It can protect data in three ways: in transit (over networks), at rest (when stored), and in use (during processing). While encryption in transit and at rest is standard for all databases and well supported by MongoDB, encryption in use presents a unique challenge.

Encryption in use is difficult because encrypted data is unreadable—it looks like random characters and symbols. Traditionally, the database can’t run queries on encrypted data without decrypting it first to make it readable. However, if the database doesn’t have a decryption key, it has to send encrypted data back to the application or system (i.e., the client) that has the key so it can be decrypted before querying. This is a pattern that doesn’t scale well for real-world applications. It puts organizations in a difficult spot: in-use encryption is important for data privacy and regulatory compliance, but it's hard to implement. In the past, companies have either chosen not to encrypt sensitive data in use or have employed less secure workarounds that complicate their operations.

MongoDB Queryable Encryption: Safeguarding data in use without sacrificing efficiency

MongoDB Queryable Encryption solves this problem. It allows organizations to encrypt their sensitive data, like personally identifiable information (PII) or protected health information (PHI), and to run equality and range queries directly on that data without having to decrypt it. Queryable Encryption was developed by the MongoDB Cryptography Research Group, drawing on their pioneering expertise in cryptography and encrypted search, and it has been peer-reviewed by leading cryptography experts worldwide.

Unmatched in the industry, MongoDB is the only data platform that allows customers to run expressive queries directly on non-deterministically encrypted data. This is a groundbreaking advantage for customers: they can maintain robust protection for their sensitive data without sacrificing operational efficiency or developer productivity, because expressive queries can still be performed on it.

Organizations of all sizes, across all industries, can benefit from the impactful outcomes enabled by Queryable Encryption, such as:

- Stronger data protection: Data stays encrypted at every stage—whether in transit, at rest, or in use—reducing the risk of sensitive data exposure or breaches.
- Enhanced regulatory compliance: Provides customers with the necessary tools to comply with data protection regulations like GDPR, CCPA, and HIPAA by ensuring robust encryption at every stage.
- Streamlined operations: Simplifies the encryption process without needing costly custom solutions, specialized cryptography teams, or complex third-party tools.
- Solidified separation of duties: Supports stricter access controls, where MongoDB and even a customer's database administrators (DBAs) don’t have access to sensitive data.

Use cases for Queryable Encryption

MongoDB Queryable Encryption has many use cases for organizations that host sensitive data, regardless of their size or industry. The recent addition of range query support broadens those use cases even further. Here are some examples to help illustrate how Queryable Encryption could be used to protect and query sensitive data:

Financial services

- Credit scoring: Assess creditworthiness by querying encrypted data such as credit scores and income levels. For example, segment your customers based on credit scores between 600 and 750 (sketched in code at the end of this post).
- Fraud detection: Detect anomalies by querying encrypted transaction amounts for values that exceed typical spending patterns, such as transactions above $10,000.

Insurance

- Risk assessment: Personalize policy offerings by querying encrypted client data for risk levels within specified ranges, enhancing customer service without exposing sensitive information.
- Claims processing: Automate claims processing by querying encrypted claims data for amounts within specific ranges or for claims within certain time periods, streamlining operations while safeguarding information.

Healthcare

- Medical research: Execute range-based searches on encrypted medical records, such as querying encrypted datasets for patients within specific age ranges or for abnormal lab results.
- Billing and insurance processing: Perform secure range queries on encrypted billing data to process insurance claims and payments while protecting patient financial details.

Education

- Grading systems: Process encrypted student scores to award grades within specific ranges, ensuring compliance with FERPA while protecting student privacy and maintaining data security.
- Financial aid distribution: Analyze encrypted income data within certain ranges to determine eligibility for scholarships and financial aid.

Comprehensive data protection at every stage

With Queryable Encryption, MongoDB offers unmatched protection for sensitive data throughout its entire lifecycle—whether in transit, at rest, or in use. Now, with the addition of range query support, Queryable Encryption meets even more of the demands of modern applications, unlocking new use cases. To get started, explore the Queryable Encryption documentation.
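To ground the credit-scoring example above, here is a minimal sketch in Python of creating a collection with an encrypted, range-queryable field and querying it transparently. This is a sketch under stated assumptions, not a production setup: it assumes PyMongo installed with its encryption extras, the Automatic Encryption Shared Library available to the driver, a MongoDB 8.0+ deployment, and a throwaway local KMS key purely for demonstration; the namespace, field names, and bounds are hypothetical.

```python
import os
from bson.codec_options import CodecOptions
from pymongo import MongoClient
from pymongo.encryption import AutoEncryptionOpts, ClientEncryption

# Local KMS key for demonstration only; production deployments would use a
# cloud KMS and persist the key material rather than generating it per run.
kms_providers = {"local": {"key": os.urandom(96)}}
key_vault_namespace = "encryption.__keyVault"

client = MongoClient(
    "mongodb+srv://<user>:<password>@cluster0.example.mongodb.net",
    auto_encryption_opts=AutoEncryptionOpts(kms_providers, key_vault_namespace),
)

client_encryption = ClientEncryption(
    kms_providers, key_vault_namespace, client, CodecOptions()
)

# Declare creditScore as encrypted and range-queryable; keyId of None asks
# the helper to create a data key for the field. Bounds are hypothetical.
encrypted_fields = {
    "fields": [
        {
            "path": "creditScore",
            "bsonType": "int",
            "keyId": None,
            "queries": [{"queryType": "range", "min": 300, "max": 850}],
        }
    ]
}

client_encryption.create_encrypted_collection(
    client["bank"], "customers", encrypted_fields, kms_provider="local"
)

customers = client["bank"]["customers"]
customers.insert_one({"name": "Jane Doe", "creditScore": 684})

# The driver encrypts the bounds client-side; the server evaluates the range
# against encrypted data without ever seeing plaintext scores.
for doc in customers.find({"creditScore": {"$gte": 600, "$lte": 750}}):
    print(doc["name"])
```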

October 16, 2024
Updates

Unlocking Seamless Data Migrations from Cosmos DB to MongoDB Atlas with Adiom

As enterprises continue to scale, the need for powerful, seamless data migration tools becomes increasingly important. Adiom, founded by industry veterans with deep expertise in data mobility and distributed systems, is addressing this challenge head-on with its open-source tool, dsync. By focusing on high-stakes, production-level migrations, Adiom has developed a solution that works effortlessly with MongoDB Atlas and makes large-scale migrations to it from Cosmos DB and other NoSQL databases faster, safer, and more predictable.

The real migration struggles

Enterprises often approach migrations with apprehension, and for good reason. When handling massive datasets powering mission-critical services or user-facing applications, even small mistakes can have significant consequences. Adiom understands these challenges deeply, particularly when migrating to MongoDB Atlas. Here are a few of the common pain points that enterprises face:

- Time-consuming processes: Moving large datasets involves extensive planning, testing, and iteration. What’s more, enterprises need migrations that are repeatable and can handle the same dataset efficiently multiple times—something traditional tools often struggle to provide.
- Risk management: From data integrity issues to downtime during the migration window, the stakes are high. Tools that worked for smaller datasets and in lower-tier environments no longer meet the requirements. Custom migration scripts often introduce unforeseen risks, while databases like Cosmos DB come with their own unique limitations.
- Cost overruns: Enterprises frequently encounter hidden migration costs—whether it's the need to provision special infrastructure, rework application code for compatibility with migration plans, or pay SaaS vendors by the row. These complications can balloon the overall migration budget or send the project into an approval death spiral.

To make things even more complicated, these pains feed into each other: the longer the project takes, the more risks need to be accounted for, the longer the planning and testing, and the bigger the cost.

Adiom’s dsync: Power and simplicity in one tool

Dsync was built with these challenges in mind. Designed specifically for large production workloads, dsync enables enterprises to handle complex migrations more easily, lowering the hurdles that typically slow down the process and reducing risks and uncertainty. Here’s why dsync stands out:

- Ease of deployment: Getting started with dsync is incredibly simple. All it takes is downloading a single binary—there’s no need for specialized infrastructure, and it runs seamlessly on VMs or in Docker. Users can monitor migrations through the command line or a web interface, giving flexibility depending on the team’s preferences.
- Resilience and safety: Dsync is not only efficient but also resumable. Should a migration be interrupted, there’s no need to start over. Migrations can continue smoothly from where they left off, reducing the risk of downtime and minimizing the complexity of the process.
- Verification: Dsync is designed to protect the integrity of migrated data. It features embedded data verification mechanisms that automatically check for consistency between the source and destination databases after migration.
- Security: Dsync doesn't store data, doesn't send it outside the organization other than to the designated destination, and supports network encryption.
- No hidden costs: As an open-source tool, dsync eliminates the need to onboard expensive SaaS solutions or purchase licenses in the early stages of the process. It operates independently of third-party vendors, giving enterprises flexibility and control over their migrations without the additional financial burden.

Enhancing MongoDB customers' experiences

For MongoDB customers, the ability to migrate data quickly and efficiently can be the key to unlocking new products, features, and cost savings. With dsync, Adiom provides a solution that can accelerate migrations, reduce risks, and enable enterprises to leverage MongoDB Atlas without the usual headaches.

- Faster time-to-market: By significantly accelerating migrations, dsync allows companies to take advantage of MongoDB Atlas offerings and integrations sooner, offering a direct path to quicker returns on investment.
- Self-service and support: Many migrations can be handled entirely in-house, thanks to dsync’s intuitive design. For organizations that need additional guidance, Adiom offers support and has partnered with MongoDB Professional Services and PeerIslands to provide comprehensive coverage during the migration process.

Five compelling advantages of migrating to MongoDB

- Flexible schema: MongoDB’s schema-less design reduces development time by up to 30% by allowing you to change data structures easily.
- Scalability: You can scale MongoDB to multiple petabytes of data seamlessly using sharding.
- High performance: MongoDB helps to improve read and write speeds by up to 50% compared to traditional databases.
- Expressive Query API: Its advanced querying capabilities reduce query-writing time and increase execution efficiency by 70%.
- Partner ecosystem: MongoDB’s strong partner ecosystem helps with service integrations, AI capabilities, purpose-built solutions, and other significant competitive differentiators.

Conclusion

Dsync is more than just a migration tool—it’s a powerful engine that abstracts away the complexity of managing large datasets across different systems. By seamlessly tying together initial data copying, change-data-capture, and all the nuances of large-scale migrations, dsync lets enterprises focus on building their future, not on the logistics of data transfer. For those interested in technical details, some of those logistics and nuances can be found in our CEO’s blog.

With Adiom and dsync, enterprises no longer have to choose between performance, correctness, and ease of use when planning a migration. Whether you’re moving from Cosmos DB or another NoSQL database, dsync provides an enterprise-grade solution that enables faster, more secure, and more reliable migrations. By partnering with MongoDB, Adiom supports you in continuing to innovate without being held back by the limitations of legacy databases.

Try dsync yourself or contact Adiom for a demo. Head over to our product page to learn more about MongoDB Atlas.

October 15, 2024
Applied

From Chaos to Control: Real-Time Data Analytics for Airlines

Delays are a significant challenge for the airline industry. They disrupt travel plans, erode customer loyalty, and inflict significant financial losses. In an industry built on precision and punctuality, even minor setbacks can have cascading effects. Whether due to adverse weather conditions or unforeseen technical issues, these delays ripple through flight schedules, affecting both passengers and operations managers. While neither group is typically at fault, the ability to quickly reallocate resources and return to normal operations is crucial. To mitigate these disruptions and restore passenger trust, airlines must have the tools and strategies to quickly identify delays and efficiently reallocate resources. This blog explores how a unified platform with real-time data analysis can be a game-changer in this regard, especially in saving costs.

The high cost of delays

Delays from disruptions, like weather events or crew unavailability, pose major challenges for the airline industry. These delays have a significant financial impact: according to some studies, they cost European airlines on average €4,320 per hour per flight. They also create operational challenges like crew disruptions and reduced airplane availability, leading to further delays—a phenomenon known in the industry as delay propagation.

To address these challenges, airlines have traditionally focused on optimizing their pre-flight planning processes. However, while planning is crucial, effective recovery strategies are equally essential for minimizing the impact of disruptions. Unfortunately, many airlines have underinvested in recovery systems, leaving them ill-prepared to respond to unexpected events. The consequences of this imbalance include:

- Delay propagation: Initial delays can cascade, causing widespread schedule disruptions.
- Financial and operational damage: Increased costs and inefficiencies strain airline resources.
- Customer dissatisfaction: Poor disruption management leads to negative passenger experiences.

The power of real-time data analysis

In response to the significant challenges posed by flight delays, a real-time unified platform offers a powerful solution designed to enhance how airlines manage disruptions.

Event-driven architectural approach

The diagram below showcases an event-driven architecture that can be used to build a robust, decoupled platform supporting real-time data flow between microservices. In an event-driven architecture, services or components communicate by producing and consuming events, which is why this architecture relies on Pub/Sub (messaging middleware) to manage data flows. Moreover, MongoDB’s flexible document model and ability to handle high volumes of data make it ideal for event-driven systems. Combined with Pub/Sub, this approach offers a powerful solution for modern applications that require scalability, flexibility, and real-time processing.

Figure 1: Application architecture

In this architecture, the blue line in the diagram shows the operational data flow. The data simulation is triggered by the application’s front end and is initialized in the FastAPI microservice. The microservice, in turn, starts publishing airplane sensor data to the custom Pub/Sub topics, which forward these data to the rest of the architecture's components, such as cloud functions, for data transformation and processing. The data is processed in each microservice, including the creation of analytical data, as shown by the green lines in the diagram.
Afterward, data is introduced into MongoDB and fetched by the application to provide the user with organized, up-to-date information regarding each flight. This leads to more precise and detailed analysis of real-time data for flight operations managers. As a result, new and improved opportunities for resource reallocation can be explored, helping to minimize delays and reduce associated costs for the company.

Microservice overview

As mentioned earlier, the primary goal is to create an event-driven, decoupled architecture founded on integrations between MongoDB and Google Cloud services. The following components contribute to this:

- FastAPI: Serves as the main data source, generating data for analytical insights, predictions, and simulation.
- Telemetry data: Pulls and transforms operational data published to the Pub/Sub topic in real time, storing it in a MongoDB time series collection for aggregation and optimization (a short sketch of this telemetry flow appears at the end of this post).
- Application data: Subscribed to a different Pub/Sub topic, this service acknowledges static operational data, including the initial route, recalculated route, and disruption status. The service is only triggered when any of these fields change. Finally, this data is updated in its MongoDB collection accordingly.
- Vertex AI integration—analytical data flow: A cloud function triggered by Pub/Sub messages that executes data transformations and forwards data to the machine learning (ML) model deployed on Vertex AI. Predictions are then stored in MongoDB.

MongoDB: A flexible, scalable, and real-time data solution

Building a unified real-time platform for the airline industry requires efficient management of massive, diverse datasets. From aircraft sensor data to flight cost calculations, data processing and management are central to operations. To meet these demands, the platform needs a flexible data platform capable of handling multiple data types and integrating with various systems. This enables airlines to extract valuable insights from their data and develop features that improve operations and the passenger experience.

Real-time data processing is a must-have feature. It allows airlines to receive immediate alerts about delays, minimizing disruptions and ensuring smooth operations. In fast-paced airport environments, where every minute counts, real-time data processing is indispensable. For example, integrating MongoDB with Google Cloud's Vertex AI allows for the real-time processing and storage of airplane sensor data, transforming it into actionable insights.

Business benefits

This solution provides real-time access to critical flight data, enabling efficient cost management and operational planning. Immediate access to this information allows flight operations managers to plan ahead, reallocate existing resources, or even initiate recovery procedures in order to diminish the consequences of an identified delay. Moreover, its ML model customization ensures adaptability to various use cases. Regarding the platform’s long-term sustainability, it has been purposely designed to integrate highly reliable and scalable products in order to excel in three key areas:

Scalability

The platform’s compatibility with both horizontal and vertical scaling is clearly demonstrated by its integral design. The decoupled architecture illustrates how this solution is divided into different components—and therefore instances—that work together as a cohesive whole.
Vertical scalability, meanwhile, can be achieved by simply increasing the computing power allocated to the deployed Vertex AI model, if needed.

Availability

The decoupled architecture exemplifies the central importance of availability in any project’s design. Using different tracks to introduce operational and analytical data into the database allows us to handle any issues in a way that remains unnoticeable to end users.

Latency

Optimizing the connections between components and integrations within the product is key to achieving the desired results. Using Pub/Sub as our asynchronous messaging service helps minimize unnecessary delays and avoid holding resources needlessly.

Get started!

To sum up, this blog has explored how MongoDB can be integrated into an airline flight management system, offering significant benefits in terms of cost savings and enhanced customer experience. Check out our AI resource page to learn more about building AI-powered apps with MongoDB, and try out the demo yourself via this repo. To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start page.
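As referenced in the microservice overview above, here is a minimal sketch of the telemetry flow: publishing a sensor reading to Pub/Sub and landing it in a MongoDB time series collection. It assumes the google-cloud-pubsub and pymongo packages are installed with credentials configured; the project, topic, database, and field names are hypothetical.

```python
import json
from datetime import datetime, timezone

from google.cloud import pubsub_v1
from pymongo import MongoClient

# --- Producer side (e.g., the FastAPI microservice) ---
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "airplane-telemetry")  # hypothetical

reading = {
    "flightId": "FL123",
    "ts": datetime.now(timezone.utc).isoformat(),
    "engineTempC": 412.7,
    "altitudeFt": 33000,
}
future = publisher.publish(topic_path, json.dumps(reading).encode("utf-8"))
future.result()  # block until Pub/Sub accepts the message

# --- Consumer side (e.g., the telemetry microservice) ---
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
db = client["flight_ops"]

# Create the time series collection once; MongoDB optimizes storage and
# queries around the time field and per-flight metadata.
if "telemetry" not in db.list_collection_names():
    db.create_collection(
        "telemetry",
        timeseries={"timeField": "ts", "metaField": "flightId", "granularity": "seconds"},
    )

doc = dict(reading, ts=datetime.fromisoformat(reading["ts"]))  # timeField must be a date
db["telemetry"].insert_one(doc)
```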

October 15, 2024
Applied

Grab Drives 50% Efficiencies with MongoDB Atlas

Grab is Southeast Asia’s leading ‘super application,’ offering a wide range of services targeting both consumers and businesses, including deliveries, mobility, financial services, enterprise offerings, and more. Its applications, such as the popular Grab Taxi, Grab Pay, Grab Mart, and Grab Ads, count approximately 38 million monthly active users across 500 cities and eight countries. Managing a large and constantly growing user base and handling regular spikes in demand and activity means that Grab needs to maintain a robust, scalable, and flexible digital infrastructure.

Presenting at MongoDB.local Singapore in 2024, Grab shared its journey of migrating one of its key service apps—GrabKios—from the Community Edition of MongoDB to MongoDB Atlas. Grab also described how it is expanding its use of MongoDB to support semantic search. “Transitioning to MongoDB Atlas was not just a migration—it was a strategic move aimed at enhancing our database infrastructure,” said Jude Dulaj Lakshan De Croos, Database Engineering Manager at Grab.

A smooth transition to MongoDB Atlas

Grab’s journey with MongoDB Atlas began with the realization that its existing database infrastructure, while functional, was not equipped to handle the scale and complexity of its operations. Grab’s eventual migration to MongoDB Atlas was meticulously planned and executed, including extensive testing to ensure a smooth transition. During the critical testing phase, the creation of a replica “prod clone” environment allowed Grab to test and refine its migration strategy, minimizing the possibility of unforeseen issues. The migration also involved the use of mongomirror, which facilitated the seamless transfer of data from Grab’s self-hosted clusters to MongoDB Atlas. “We were able to ensure that migration was actually smooth and ran without any issues,” said Swarit Arora, Senior Database Engineer at Grab.

MongoDB Atlas’s developer data platform offers Grab high levels of flexibility and scalability, accommodating Grab’s fast growth (the company recorded 23% year-over-year revenue growth in 2024) in an ever-changing digital landscape. MongoDB Atlas also delivers unique automation and streamlining capabilities, as well as enterprise-grade support, which has led to improved process and database management efficiency.

Efficiency gains with greater scalability, flexibility, and performance

MongoDB Atlas provided Grab with an automated, scalable, and secure platform, which empowered its engineering teams to focus on product development rather than database maintenance. “With MongoDB Atlas, we don’t have to worry about the scaling changes. And with hands-on security we can deliver secure and fast applications,” said Arora. “Being able to configure the exact resources required and then scale up and down based on our requirements is a plus. Considering we don't have to manage the scalability part, this is, I think, saving us around 50% of the time.”

Furthermore, MongoDB Atlas delivers proactive recommendations to Grab’s team. For example, MongoDB Atlas’s Performance Advisor saves the team time by delivering real-time insights and recommendations to optimize query performance, ultimately reducing manual management tasks and increasing database efficiency. “It is now easy to set up our MongoDB clusters compared to what we were doing when we self-hosted, which was more time-consuming,” added Arora.
“Secondly, if we are required to upgrade the cluster version, it is as easy as the click of a button.”

Dedicated analytics nodes mean that Grab’s team is able to enhance the analytical capabilities of any application running on MongoDB. The successful migration to MongoDB Atlas has positioned Grab to explore new possibilities, including leveraging MongoDB’s advanced features for use cases such as semantic search and AI applications.

Learn more about MongoDB Atlas.

October 14, 2024
Applied

MongoDB Atlas + PowerSync: The Future of Offline-First Enterprise Apps

Picture this: your field team is miles from the nearest cell tower, but their apps still work flawlessly—tracking assets, updating data, and syncing the moment they're back online. Since even a second of downtime can cost millions, the future of enterprise apps is one where "offline" doesn't mean "out of commission." Enter MongoDB Atlas and PowerSync, two organizations behind cutting-edge, always-on experiences. The recently announced MongoDB Atlas-PowerSync integration delivers seamless performance and real-time data syncing that keeps businesses running—even when internet connectivity is spotty.

By pairing MongoDB Atlas (the gold standard for cloud databases) with PowerSync (a game-changing sync engine), enterprises can create fully offline-first applications that keep teams productive, no matter where they are. Across a variety of industries—from energy, to manufacturing, to field services, to retail—the new integration delivers smooth performance and reliable bi-directional data sync between backend and frontend systems.

“We are excited to bring support for MongoDB into PowerSync, and to offer MongoDB customers a proven enterprise-grade sync engine to drive their offline-first applications.”
Conrad Hofmeyr, CEO of JourneyApps/PowerSync

MongoDB Atlas + PowerSync: A power move

The partnership between MongoDB Atlas and PowerSync offers a new level of app performance, especially for apps that need to operate seamlessly in both online and offline environments. PowerSync transforms your applications by enabling data synchronization between MongoDB on the backend and SQLite on the front end. This means your apps don’t just cache data while offline—they remain fully operational. Users can perform queries, update records, and sync millions of objects, all while disconnected. As soon as they reconnect, the data syncs effortlessly with MongoDB Atlas, ensuring that no data is lost in the process.

This solution has been refined over a decade, delivering enterprise-grade reliability and scalability. With MongoDB Atlas as the backend, it brings the stability and compliance that enterprises demand. Both PowerSync and MongoDB Atlas are SOC 2 Type 2 audited, ensuring that your data remains secure and compliant with even the strictest regulatory standards. Whether you’re handling a small team or synchronizing millions of users, this combination scales effortlessly, making it a perfect fit for teams of any size.

What’s more, integrating PowerSync into your system doesn’t require a major overhaul. Using MongoDB’s change streams, PowerSync delivers high-performance syncing without the need to configure complex database triggers (a brief sketch of the change stream mechanism appears at the end of this post). The integration is seamless and designed to minimize disruption, making it a low-stress solution for developers looking for a high-efficiency sync tool.

Finally, PowerSync gives developers complete control over how data is handled. Whether you want to sync immediately or start with a local-first architecture, PowerSync allows for full customization, letting you determine what data is synced and when business logic is applied. No matter your stack or environment, PowerSync is designed to adapt to your specific needs, providing the flexibility and control to create robust, scalable applications.

Where this combo shines: Real-world use cases
- Energy & field services: Technicians work deep in the field, in low-connectivity conditions, but equipped with PowerSync apps they continue collecting and syncing data, from equipment diagnostics to work orders. Once they reconnect, MongoDB Atlas captures everything in real time, ensuring no critical data gets lost in the shuffle.
- Manufacturing: Whether it’s tracking production metrics or running diagnostics on machinery, PowerSync’s offline-first architecture keeps operations running. Workers can continue making updates and querying data locally, with MongoDB Atlas acting as the central hub for oversight and reporting when the network is back up.
- Utilities & mining: From power grids to remote mining operations where outages are common, PowerSync delivers secure, bi-directional syncing that ensures your teams have the most up-to-date information, no matter the conditions.
- Retail: Network connectivity issues should never prevent app users from completing core retail activities like accepting orders, tracking inventory, or making deliveries. Point of sale (POS) apps are typically expected to keep working even when network connectivity is not available.

Unlock your enterprise's potential today

The ability to maintain operations without interruption is increasingly a necessity. By leveraging the powerful integration of MongoDB Atlas and PowerSync, your enterprise can ensure its applications are always ready to perform, regardless of connectivity challenges. Are you ready to transform your offline operations and unlock the full potential of your data? Visit PowerSync’s migration guide today to explore how to seamlessly transition to this innovative solution and empower your teams with the tools they need to thrive in any environment. Don’t let downtime hold you back—take the first step toward a more resilient, agile future!
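As referenced above, the sync engine rides on MongoDB change streams rather than database triggers. PowerSync consumes these internally, but the underlying mechanism is easy to see in a minimal PyMongo sketch; the collection name is hypothetical, and change streams require a replica set or an Atlas cluster.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
work_orders = client["field_ops"]["work_orders"]  # hypothetical collection

# Open a change stream: the server pushes every insert/update/delete to the
# listener, with no polling and no triggers to configure. full_document asks
# the server to include the post-update state of each changed document.
with work_orders.watch(full_document="updateLookup") as stream:
    for change in stream:
        op = change["operationType"]      # e.g., "insert", "update", "delete"
        doc = change.get("fullDocument")  # the changed document, when available
        print(f"{op}: {doc}")
        # A sync engine would translate each event into updates for the
        # SQLite replicas on connected clients.
```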

October 14, 2024
Applied

Introducing: Multi-Kubernetes Cluster Deployment Support

Resilience and scalability are critical for today's production applications. MongoDB and Kubernetes are both well known for their ability to support those needs at the highest level. To better enable developers using MongoDB and Kubernetes, we’ve introduced a series of updates and capabilities that make it easier to manage MongoDB across multiple Kubernetes clusters. In addition to the previously released support for running MongoDB replica sets and Ops Manager across multiple Kubernetes clusters, we're excited to announce the public preview release of support for sharded clusters spanning multiple Kubernetes clusters (GA to follow in November 2024).

Support for deployment across multiple Kubernetes clusters is facilitated through the Enterprise Kubernetes Operator. As a recap for anyone unaware, the Enterprise Operator automates the deployment, scaling, and management of MongoDB clusters in Kubernetes. It simplifies database operations by handling tasks such as backups, upgrades, and failover, ensuring consistent performance and reliability in the Kubernetes environment.

Multi-Kubernetes cluster deployment support enhances availability, resilience, and scalability for critical MongoDB workloads, empowering developers to efficiently manage these workloads within Kubernetes. This approach unlocks the highest level of availability and resilience by allowing shards to be located closer to users and applications, increasing geographical flexibility and reducing latency for globally distributed applications.

Deploying replica sets across multiple Kubernetes clusters

MongoDB replica sets are engineered to ensure high availability, data redundancy, and automated failover in database deployments. A replica set consists of multiple MongoDB instances—one primary and several secondary nodes—all maintaining the same dataset. The primary node handles all write operations, while the secondary nodes replicate the data and are available to take over as primary if the original primary node fails. This architecture is critical for maintaining continuous data availability, especially in production environments where downtime can be costly.

Support for deploying MongoDB replica sets across multiple Kubernetes clusters helps ensure this level of availability for MongoDB-based applications running in Kubernetes. It enables you to distribute your data not only across nodes in a single Kubernetes cluster but across different clusters and geographic locations, ensuring that the rest of your deployment remains operational even if one or more Kubernetes clusters or locations fail, and facilitating faster disaster recovery. To learn more about how to deploy replica sets across multiple Kubernetes clusters using the Enterprise Kubernetes Operator, visit our documentation.

Sharding MongoDB across multiple Kubernetes clusters

While replica sets duplicate data for resilience (and higher read rates), MongoDB sharded clusters divide the data up between shards, each of which is effectively a replica set, providing resilience for each portion of the data. This helps your database handle large datasets and high-throughput operations, since each shard has a primary member handling write operations to that portion of the data; this allows MongoDB to scale write throughput horizontally, rather than requiring vertical scaling of every member of a replica set.
Sharding MongoDB across multiple Kubernetes clusters

While replica sets duplicate data for resilience (and higher read rates), MongoDB sharded clusters divide the data between shards, each of which is effectively a replica set, providing resilience for each portion of the data. This helps your database handle large datasets and high-throughput operations, since each shard has a primary member handling write operations for its portion of the data. This allows MongoDB to scale write throughput horizontally, rather than requiring vertical scaling of every member of a single replica set.

In a Kubernetes environment, these shards can now be deployed across multiple Kubernetes clusters, giving higher resilience in the event of the loss of a Kubernetes cluster or an entire geographic location. It also makes it possible to locate shards in the same region as the applications or users accessing that portion of the data, reducing latency and improving user experience. Sharding is particularly useful for applications with large datasets and for those requiring high availability and resilience as they grow. Support for sharding MongoDB across multiple Kubernetes clusters is currently in public preview and will be generally available in November.

Deploying Ops Manager across multiple Kubernetes clusters

Ops Manager is the self-hosted management platform that supports automation, monitoring, and backup of MongoDB on your own infrastructure. Ops Manager's most critical function is backup, and deploying it across multiple Kubernetes clusters greatly improves resilience and disaster recovery for your MongoDB deployments in Kubernetes. With Ops Manager distributed across several Kubernetes clusters, you can ensure that backups of deployments remain robust and available even if one Kubernetes cluster or site fails. It also allows Ops Manager to efficiently manage and monitor MongoDB deployments that are themselves distributed across multiple clusters, improving resilience and simplifying scaling and disaster recovery. To learn more about how to deploy Ops Manager across multiple Kubernetes clusters using the Enterprise Kubernetes Operator, visit our documentation.

To leverage multi-Kubernetes-cluster support, you can get started with the Enterprise Kubernetes Operator.
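As a companion to the replica set example above, the sketch below shows what a multi-cluster Ops Manager resource might look like, with the application database that backs Ops Manager also spread across both clusters. All names, versions, and member counts are hypothetical assumptions for illustration; the documentation remains the authoritative reference for the resource schema.

```yaml
# Hedged sketch only; all names and values are illustrative placeholders.
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
  namespace: mongodb
spec:
  version: "8.0.0"
  adminCredentials: ops-manager-admin-secret  # secret for the first admin user (hypothetical name)
  topology: MultiCluster
  # Run an Ops Manager instance in each member cluster.
  clusterSpecList:
    - clusterName: cluster-1.example.com
      members: 1
    - clusterName: cluster-2.example.com
      members: 1
  # The application database backing Ops Manager is itself a replica set
  # distributed across both clusters (three members in total).
  applicationDatabase:
    version: "7.0.5-ent"
    topology: MultiCluster
    clusterSpecList:
      - clusterName: cluster-1.example.com
        members: 2
      - clusterName: cluster-2.example.com
        members: 1
```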

October 10, 2024
Updates

Building Gen AI with MongoDB & AI Partners | September 2024

Last week I was in London for MongoDB.local London, the 19th stop of the 2024 MongoDB.local tour, where MongoDB, our customers, and our AI partners came together to share the solutions we’ve been building to help companies accelerate their AI journeys. I love attending these events because they offer an opportunity to celebrate our collective achievements, and because it’s great to meet so many (mainly Zoom) friends in person!

One of the highlights of MongoDB.local London 2024 was the release of our reference architecture with our MAAP partners AWS and Anthropic, which supports memory-enhanced AI agents. This architecture is already helping businesses streamline complex processes and develop smarter, more responsive applications.

We also announced a robust set of vector quantization capabilities in MongoDB Atlas Vector Search that will help developers build powerful semantic search and generative AI applications at greater scale, and at a lower cost. Now, with support for the ingestion of scalar quantized vectors, you can import and work with quantized vectors from your embedding model providers of choice, including MAAP partners Cohere, Nomic, and others.
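As a minimal sketch of what ingesting a pre-quantized vector can look like, assuming PyMongo 4.10+ (which added the Binary.from_vector helper for BSON vector data); the connection string, namespace, and embedding values below are placeholders:

```python
from pymongo import MongoClient
from bson.binary import Binary, BinaryVectorDtype

client = MongoClient("mongodb+srv://<your-cluster-uri>")  # placeholder URI
collection = client["demo"]["documents"]               # hypothetical namespace

# int8 values as returned by an embedding provider's scalar quantization;
# real embeddings would have hundreds or thousands of dimensions.
quantized_embedding = [12, -87, 45, 3, -22]

collection.insert_one({
    "text": "example document",
    # Binary.from_vector packs the int8 list into the compact BSON vector
    # subtype that Atlas Vector Search can index directly.
    "embedding": Binary.from_vector(quantized_embedding, BinaryVectorDtype.INT8),
})
```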
A big thank you to all of MongoDB’s AI partners, who continually amaze me with their innovation. MongoDB.local London was another great reminder of the power of collaboration, and I’m excited for what lies ahead as we continue to shape the future of AI together. As the Brits say: Cheers!

Welcoming new AI and tech partners

In September we also welcomed seven new AI and tech partners that offer product integrations with MongoDB. Read on to learn more about each great new partner!

Arize

Arize AI is a platform that helps organizations visualize and debug the flow of data through AI applications by quickly identifying bottlenecks in LLM calls and understanding agentic paths. "At Arize AI, we are committed to helping AI teams build, evaluate, and troubleshoot cutting-edge agentic systems. Partnering with MongoDB allows us to provide a comprehensive solution for managing the memory and retrieval that these systems rely on,” said Jason Lopatecki, co-founder and CEO of Arize AI. “With MongoDB’s robust vector search and flexible document storage, combined with Arize’s advanced observability and evaluation tools, we’re empowering developers to confidently build and deploy AI applications."

Baseten

Baseten provides the applied AI research and infrastructure needed to serve custom and open-source machine learning models performantly, scalably, and cost-efficiently. "We're excited to partner with MongoDB to combine their scalable vector database with Baseten's high-performance inference infrastructure and models. Together, we're enabling companies to build and deploy generative AI applications, such as RAG apps, that not only scale infinitely but also deliver optimal performance per dollar,” said Tuhin Srivastava, CEO of Baseten. “This partnership empowers developers to bring mission-critical AI solutions to market faster, while maintaining cost-effectiveness at every stage of growth."

Doppler

Doppler is a cloud-based platform that helps teams manage, organize, and secure secrets across environments and applications throughout the entire development lifecycle. “Doppler rigorously focuses on making the easy path the most secure path for developers. This is only possible with deep product partnerships with all the tooling developers have come to love. We are excited to join forces with MongoDB to make zero-downtime secrets rotation for non-relational databases effortlessly simple to set up and maintenance-free,” said Brian Vallelunga, founder and CEO of Doppler. “This will immediately bolster the security posture of a company’s most sensitive data without any additional overhead or distractions."

Haize Labs

Haize Labs automates language model stress testing at massive scale to discover and eliminate failure modes. This, alongside its inference-time mitigations and observability tools, enables the risk-free adoption of AI. "We're thrilled to partner with MongoDB in empowering companies to build RAG applications that are powerful yet secure, safe, and reliable,” said Leonard Tang, co-founder and CEO of Haize Labs. “MongoDB Atlas has streamlined the process of developing production-ready GenAI systems, and we're excited to work together to accelerate customers' journey to trust and confidence in their GenAI initiatives."

Modal

Modal is a serverless platform for data and AI/ML engineers to run and deploy code in the cloud without having to think about infrastructure, powering generative AI models, large-scale batch jobs, job queues, and more, all faster than ever before. “The coming wave of intelligent applications will be built on the potent combination of foundation models, large-scale data, and fast search,” explained Charles Frye, AI Engineer at Modal. “MongoDB Atlas provides an excellent platform for storing, querying, and searching data, from hot new techniques like vector indices to old standbys like lexical search. It's the perfect counterpart to Modal's flexible compute, like serverless GPUs. Together, MongoDB and Modal make it easy to get started with this new paradigm, and then they make it easy to scale it out to millions of users querying billions of records and maxing out thousands of GPUs.”

Portkey AI

Portkey AI is an AI gateway and observability suite that helps companies develop, deploy, and manage LLM-based applications. "Our partnership with MongoDB is a game-changer for organizations looking to operationalize AI at scale. By combining Portkey's LLMOps expertise with MongoDB's comprehensive data solution, we're enabling businesses to deploy, manage, and scale AI applications with unprecedented efficiency and control,” said Ayush Garg, Chief Technology Officer of Portkey AI. “Together, we're not just streamlining the path from POC to production; we're setting a new standard for how businesses can leverage AI to drive innovation and deliver tangible value."

Reka

Reka offers fully multimodal models, spanning images, videos with audio, text, and documents, to power AI agents that can see, hear, and speak. "At Reka, we know how challenging it can be to retrieve information buried in unstructured multimodal data. We are excited to join forces with MongoDB to help companies test and optimize multimodal RAG features for faster production deployment,” said Dani Yogatama, CEO of Reka. “Our models understand and reason over multimodal data, including text, tables, and images in PDF documents or conversations in videos. Our joint solution streamlines the whole RAG development lifecycle, speeding up time to market and helping companies deliver real value to their customers faster."

But wait, there's more! To learn more about building AI-powered apps with MongoDB, check out our AI Resources Hub, and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.

October 9, 2024
Artificial Intelligence

Introducing Dark Mode for MongoDB Documentation

We’re excited to announce a highly requested feature: Dark mode is now available for MongoDB Documentation!

Every day, developers of all backgrounds, from beginners to experts, turn to the MongoDB Documentation. It’s packed with comprehensive resources that help you build modern applications using MongoDB and the Atlas developer data platform. With detailed information and step-by-step guides, it’s an invaluable tool for improving your skills and making your development work smoother. From troubleshooting tricky queries to exploring new features, MongoDB Documentation is there to support your projects and help you succeed.

With dark mode, you can now switch to a darker interface that’s easier on the eyes. Whether you’re working late or simply prefer a subdued color palette, dark mode enhances your MongoDB Documentation experience.

How to enable dark mode

Enabling dark mode is simple. Just click the sun-and-moon icon at the top right of the page to switch between dark mode, light mode, and system settings. The theme initially defaults to your system settings, and it’s a personal preference that won’t affect other users within your project or organization. We’ve designed dark mode to provide the same user-friendly experience you’re used to and to stay consistent across tools in the developer workflow, including MongoDB Atlas, which is also available in dark mode.

We’re all about making your reading experience top-notch! Dark mode is here because you asked for it through the feedback widget on the Docs page. Whether you’re an early adopter of dark mode or just trying it out, we’d love your opinion. Just drop your feedback in the widget next to the color theme selector on the MongoDB Documentation page.

Less strain, more gain

Dark mode offers a sleek, modern look that brings a refreshing change from the traditional light mode. Beyond its stylish appearance, it also provides significant practical benefits: reducing the amount of bright light emitted from your screen helps minimize eye strain and fatigue, making extended periods of device use more comfortable. For those using OLED screens, dark mode can also help conserve battery life, as these screens consume less power when displaying darker pixels.

Whether you’re coding into the late hours or just looking for a more comfortable viewing experience, dark mode is a simple yet powerful way to enhance your development experience. Try out dark mode on MongoDB Documentation today and enjoy a more comfortable, stylish, and efficient reading experience!

October 9, 2024
Updates
