Artificial Intelligence

Building AI-powered Apps with MongoDB

MongoDB, Microsoft Team Up to Enhance Copilot in VS Code

As modern applications grow increasingly complex, developers face the challenge of meeting market demands for faster, smarter solutions. To stay ahead, they need tools that streamline their workflows, available directly in the environments where they build. According to the 2024 Stack Overflow Developer Survey, Microsoft’s Visual Studio Code (VS Code) is the integrated development environment (IDE) of choice for 74% of professional developers, serving as a central hub for building, testing, and deploying applications. With the rise of AI-powered tools like GitHub Copilot—which is used by 44% of professional developers—there’s a growing demand for intelligent assistance in the development process without disrupting flow.

At MongoDB, we believe that the future of development lies in democratizing the value of these experiences by incorporating domain-specific knowledge and capabilities directly into developer flows. That’s why we’re thrilled to announce the public preview of MongoDB’s extension to GitHub Copilot in VS Code. With this integration, developers can effortlessly generate MongoDB queries, inspect collection schemas, and get answers from the latest MongoDB docs—all without leaving their IDE.

“Our collaboration with MongoDB continues to bring powerful, integrated solutions to developers building the modern applications of the future. The new MongoDB extension for GitHub Copilot exemplifies a shared commitment to the developer experience, leveraging AI to ensure that workflows are optimized for developer productivity by keeping everything developers need within reach, without breaking their flow.”

Isidor Nikolic, Senior Product Manager for VS Code, Microsoft

But we’re not stopping there. As AI continues to evolve, so will the ways developers interact with their tools. Stay tuned for more exciting developments next week at Microsoft Ignite, where we’ll unveil more ways we’re pushing the boundaries of what’s possible with AI through MongoDB and Microsoft’s partnership!

What is MongoDB's Copilot extension?

MongoDB’s Copilot extension supercharges GitHub Copilot in VS Code with MongoDB domain knowledge. The Copilot integration is built into the MongoDB for VS Code extension, which has more than 1.8M downloads in the VS Code marketplace today. Type ‘@MongoDB’ in Copilot chat and take advantage of three transformative commands:

- Generate queries from natural language (/query): generates accurate MongoDB queries by passing collection schema as context to GitHub Copilot.
- Query MongoDB documentation (/docs): answers documentation questions using the latest MongoDB documentation through Retrieval-Augmented Generation (RAG).
- Browse collection schema (/schema): provides schema information for any collection, which is useful for data modeling with the Copilot extension.

Generate queries from natural language

This command transforms natural language prompts into MongoDB queries, leveraging your collection schema to produce precise, valid queries. It eliminates the need to manually write complex query syntax and allows developers to quickly extract data without taking their focus away from building applications. Whether you run the query directly from the Copilot chat or refine it in a MongoDB playground file, we’ve sped up the query-building process by deeply integrating these capabilities into the existing flow of the MongoDB VS Code extension.
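To make this concrete, here is a hypothetical illustration of the kind of query the /query command can produce. The extension itself emits queries into Copilot chat or a MongoDB playground file; the equivalent is sketched here with PyMongo against the familiar sample_mflix sample dataset, with a placeholder connection string.

```python
# Hypothetical illustration of a /query result, expressed with PyMongo.
# Prompt: "@MongoDB /query movies released in 2015 with an IMDB rating
# above 8, sorted by rating"
from pymongo import MongoClient

client = MongoClient("<your-atlas-connection-string>")  # placeholder URI
movies = client["sample_mflix"]["movies"]

# Copilot uses the collection's schema as context, so it can pick real
# field names like "year" and "imdb.rating" instead of guessing.
results = movies.find(
    {"year": 2015, "imdb.rating": {"$gt": 8}},
    {"_id": 0, "title": 1, "imdb.rating": 1},
).sort("imdb.rating", -1)

for doc in results:
    print(doc)
```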
Query MongoDB documentation

The /docs command answers MongoDB documentation-specific questions, complemented by direct links to the official documentation site. There’s no need to switch back and forth between your browser and your IDE; the Copilot extension calls out to the MongoDB Documentation Chatbot API that leverages retrieval-augmented generation technology to generate responses that are informed by the most recent version of the MongoDB documentation. In the near future, these questions will be smartly routed to documentation for the specific server version of the cluster you are connected to in the MongoDB VS Code extension.

Browse collection schema

The /schema command offers quick access to collection schemas, making it easier for developers to access and interact with their data model in real time. This can be helpful in situations where developers are debugging with Copilot or just want to know valid field names while developing their applications. Developers can additionally export collection schemas into JSON files or ask follow-up questions directly to brainstorm data modeling techniques with the MongoDB Copilot extension.

On the Horizon

This is just the start of our work on MongoDB’s Copilot extension. As we continue to improve the experience with new features—like translating and testing queries to and from popular programming languages, and in-line query generation in Playgrounds—we remain focused on democratizing AI-driven workflows, empowering developers to access the tools and knowledge they need to build smarter, faster, and more efficiently, right within their existing environments.

Download MongoDB’s VS Code extension and enable the MongoDB chat experience to get started today.

November 13, 2024
Artificial Intelligence

Building Gen AI with MongoDB & AI Partners | October 2024

It’s no surprise that AI is a topic of seemingly every professional conversation and meeting nowadays—my friends joke that 11 out of 10 words that come out of my mouth are “gen AI.” But an important question remains: do organizations truly know how to harness AI, or do they simply feel pressured to join the crowd? Are they driven by FOMO more than anything else?

One thing is for sure: adopting generative AI still presents a huge learning curve. That’s why we’ve been working to provide the right tools for companies to build innovative gen AI apps, and why we offer organizations a variety of AI knowledge and guidance, regardless of where they are with gen AI. We’re fortunate to work with our industry-leading partners to help educate and shape this nascent market. Working so closely with them on product launches, integrations, and solving real-world challenges allows us to bring diverse perspectives and a better understanding of AI to our customers, giving them the technology and confidence to move forward even before engaging with tough use cases and specific technical problems (something that the MongoDB AI Applications Program can definitely help with).

One of our main educational initiatives has been our webinar series with our top-tier MAAP partners. We’ve steadily launched video content to deepen understanding of topics essential to gen AI for enterprises, ranging from broader questions such as “how can my company generate AI-driven outcomes” and “how can I modernize my workload” to specific, tangible topics such as “how to build a chatbot that knows my business.” Each session is designed to move beyond the basics, sharing insights from experts in AI and addressing the burning questions and challenges that matter most to our customers.

Welcoming new AI and tech partners

In October, we also welcomed four new AI and tech partners that offer product integrations with MongoDB. Read on to learn more about each great new partner!

Astronomer

Astronomer empowers data teams to bring mission-critical software, analytics, and AI to life and is the company behind Astro, the industry-leading data orchestration and observability platform powered by Apache Airflow.

“Astronomer's partnership with MongoDB is redefining RAG workflows for GenAI workloads. By integrating Astronomer's managed Apache Airflow platform with MongoDB Atlas' powerful vector database capabilities, we enable organizations to orchestrate complex data pipelines that fuel advanced AI and machine learning applications,” said Julian LaNeve, CTO at Astronomer. “This collaboration empowers data teams to manage real-time, high-dimensional data with ease, accelerating the journey from raw data to actionable insights and transforming how businesses harness the power of generative AI.”

CloudZero

CloudZero is a cloud cost optimization platform that automates the collection, allocation, and analysis of cloud costs to identify savings opportunities and improve cloud efficiency rates.

“Database spending is one of the shared costs that can make it tricky for organizations to reach 100% cost allocation. CloudZero eliminates that problem,” said Anand Sundaram, Senior Vice President of Product at CloudZero. “Our industry-leading allocation engine can organize MongoDB spend in a matter of hours, tracing it precisely to the products, features, customers, and/or teams responsible for it.
This way, companies get a clear view of what’s driving their costs, who’s accountable, and how to optimize to maximize their cloud efficiency.”

ObjectBox

ObjectBox is an on-device vector database for mobile, IoT, and embedded devices that enables storing, syncing, and querying data locally, online and offline.

“We’re thrilled to partner with MongoDB to give developers an edge,” said Vivien Dollinger, CEO and co-founder of ObjectBox. “By combining MongoDB’s cloud and scalability with ObjectBox’s high-performance on-device database and data sync, we empower developers to build fast, data-rich applications that feel right at home across devices and environments. Offline, online, edge, cloud, whenever, wherever... We’re here to enable your data with speed and reliability.”

Rasa

Rasa is a flexible framework for building conversational AI platforms that lets companies develop scalable generative AI assistants that hit the market faster.

“Rasa is excited to partner with MongoDB to empower companies in building conversational AI experiences. Together, we’re helping create generative AI assistants that save costs, speed up development, and maintain full brand control and security,” said Melissa Gordon, CEO of Rasa. “With MongoDB, deploying production-ready generative AI assistants is seamless, and we’re eager to continue accelerating our customers’ journey toward trusted conversational AI solutions.”

But wait, there's more!

Whether you’re starting out or scaling up, MongoDB and our partners are here with the resources, expertise, and trusted guidance to help you succeed in your gen AI strategy! And if you have any suggestions for a good webinar topic, don’t hesitate to reach out.

To learn more about building AI-powered apps with MongoDB, check out our AI Resources Hub and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.

November 11, 2024
Artificial Intelligence

MongoDB and Partners: Building the AI Future, Together

If you’re like me, over the past year you’ve closely watched AI’s developments—and the world’s reactions to them. From infectious excitement about AI’s capabilities to impatience with its cost and return on investment, every day has been filled with AI twists and turns. It’s been quite the roller coaster.

During the ride, from time to time I’ve wondered where AI falls on the Gartner hype cycle, which gives "a view of how a technology or application will evolve over time." Have we hit the "peak of inflated expectations" only to fall into the "trough of disillusionment"? Or is the hype cycle an imperfect guide, as The Economist argues?

The reality is that it takes time for any new technology—even transformative ones like AI—to take hold. And every advance, no matter how big, has had its detractors. A famous example is that of Picasso (!), who in 1968 said, “Computers are useless. They can only give you answers.” (!!) For our part, MongoDB is convinced that AI is a once-in-a-generation technology that will enhance every future application—a belief that has been reinforced by the incredible work our partners have shared at MongoDB’s 2024 events.

Speeding AI development

MongoDB is committed to helping organizations of all sizes succeed with AI, and one way we’re doing that is by collaborating with the MongoDB partner ecosystem to create powerful, user-friendly AI development tools and solutions.

For example, Fireworks.ai, a member of the MongoDB AI Applications Program ecosystem, created an inference solution that hosts gen AI models and supports containerized deployments. It makes it easier for developers to build and deploy powerful applications, with a range of easy-to-use tools and customization options: they can choose to use state-of-the-art, open-source language, image, and multimodal foundation models off the shelf, or they can customize and fine-tune models to their needs. Jointly, Fireworks.ai and MongoDB provide a solution for developers who want to leverage highly curated and optimized open-source models and combine these with their organization’s own proprietary data—and to do so with unparalleled speed and security.

“MongoDB is one of the most sophisticated database providers, and it’s very easy to use,” said Benny Chen, cofounder of Fireworks.ai. “We want developers to be able to use these tools, and we want to work with providers who enable and empower developers.”

Nomic, another MAAP ecosystem member, also enables developers with best-in-class solutions across the entire unstructured data workflow. Their Embed offering, available through the Nomic API, allows users to vectorize large-scale datasets for use in text, image, and multimodal retrieval applications, including retrieval-augmented generation (RAG), using only their web browser. Built on Nomic’s highly efficient, open-weight models, the joint Nomic-MongoDB solution lets developers visualize the unstructured datasets they store in MongoDB Atlas. These insights help users quickly discover trends and articulate data-driven value propositions. Nomic also supported the recently announced vector quantization in MongoDB Atlas Vector Search, which reduces vector sizes while preserving performance.

Last—but hardly least!—there’s our new reference architecture with MAAP partners AWS and Anthropic. Announced at MongoDB.local London, the reference architecture supports building memory-enhanced AI agents and is designed to streamline complex processes and develop smarter, more responsive applications.
For more—including a link to the code on GitHub—check out the MongoDB Developer Center.

Making AI work for anyone and everyone

The companies MongoDB partners with aren’t just making gen AI easier for developers—they’re building tools for everyone. For example, Capgemini has invested $2 billion in gen AI and is training 100,000 of its employees in the technology. GenYoda, a solution that helps insurance professionals with their daily work, is a product of this investment.

GenYoda leverages MongoDB Atlas Vector Search to analyze large amounts of customer data, like policy statements, premiums, claims history, and health information. Using GenYoda, insurance professionals can quickly analyze underwriters’ reports to make informed decisions, create longitudinal health summaries, and streamline customer interactions to improve contact center efficiency. GenYoda can ingest 100,000 documents in just a few hours and respond to users’ queries in two to three seconds—on par with the most widely used gen AI models. And it produces results: in one example, by using Capgemini’s solution an insurer was able to increase productivity by 15%, add new reports 25% faster (thus speeding decision-making), and reduce the manual effort of searching PDFs, increasing efficiency by 10%.

Building the future of AI together

So, what’s next? Honestly, I’m as curious as you are. But I’m also incredibly excited. At MongoDB, we’re active participants in the AI revolution, working to embrace the possibilities that lie ahead. The future of gen AI is bright, and I can’t wait to see what we’ll build together.

To learn more about how MongoDB can accelerate your AI journey, explore the MongoDB AI Applications Program.

November 4, 2024
Artificial Intelligence

Reflections On Our Recent AI "Think-A-Thon"

Interesting ideas are bound to emerge when great minds come together, so there was no shortage of them on October 2, when MongoDB’s Developer Relations team hosted our second-ever AI Build Together event at MongoDB.local London.

In some ways, the event is similar to a hackathon: developers come together to solve a problem. But in other ways, it is quite different. While hackathons normally take an entire day and involve intensive coding, the AI Build Together events take place over just a few hours and don't involve any coding at all. Instead, everything is based on discussion and ideation. For these reasons, MongoDB’s Developer Relations team likes to dub them “think-a-thons.”

Our first AI Build Together event was held earlier this year at .local NYC. After seeing the energy in the room and the excitement from attendees, our Developer Relations team knew it wanted to host another one. The .local London event’s fifty attendees—developers from numerous industries, plus leading AI innovators who served as mentors—came together to brainstorm and discuss AI-based solutions to common industry problems.

.local London AI Build Together attendees brainstorming AI solutions for the healthcare industry

The AI mentors included: Loghman Zadeh (gravity9), Ben Gutkovich (Superlinked), Jesse Martin (Hasura), Marlene Mhangami (Microsoft), Igor Alekseev (AWS), and John Willis and Patrick Debois (co-founders of DevOps). Upon arrival, participants joined a workflow group best aligned with their industry and/or area of interest: AI for Education, AI for DevOps, AI for Healthcare, AI for Optimizing Travel, AI for Supply Chain, and AI for Productivity.

The AI for Productivity group collaborating on their workflow

The discussions were lively, and it was amazing to see how much energy these attendees brought to them. For example, the AI for Education workflow group vigorously discussed developing a personalized AI education coach to help students develop their educational plans and support them with career advice. Meanwhile, the AI for Healthcare workflow group focused on the idea of creating an AI-driven tool to provide personalized healthcare to patients and real-time insights to their providers. The AI for Productivity team came up with a clever product that helps you read, digest, and identify the key aspects of long legal documents.

The AI for Optimizing Travel group seeking advice from AI mentor Marlene

A talented artist was also brought in to visualize each workflow group’s problem statements and potential solutions—literally and figuratively illustrating their innovative ideas.

Graphic recorder Maria Foulquié putting the final touches on the illustration

Final illustration documenting the 2024 MongoDB.local London AI Build Together event

All in all, our second time hosting this event was deemed a success by everyone involved. “It was impressive to see how attendees, regardless of their technical background, found ways to contribute to complex AI solutions,” says Loghman Zadeh, AI Director at gravity9, who served as one of the event’s advisors. “Engaging with so many creative and forward-thinking individuals, all eager to push the boundaries of AI innovation, was refreshing.
The collaborative atmosphere fostered dynamic discussions and allowed participants to explore new ideas in a supportive environment.”

If you’re interested in taking part in events like these, which offer a range of networking opportunities, three more MongoDB.local events are slated for 2024: São Paulo, Paris, and Stockholm. Additionally, you can join your local MongoDB user group to learn from and connect with other MongoDB developers in your area.

October 23, 2024
Artificial Intelligence

Built With MongoDB: Buzzy Makes AI Application Development More Accessible

AI adoption rates are sky-high and showing no signs of slowing down. One of the driving forces behind this explosive growth is the increasing popularity of low- and no-code development tools that make this transformative technology more accessible to tech novices. Buzzy, an AI-powered no-code platform that aims to revolutionize how applications are created, is one such tool. Buzzy enables anyone to transform an idea into a fully functional, scalable web or mobile application in minutes.

Buzzy developers use the platform for a wide range of use cases, from a stock portfolio tracker to an AI t-shirt store. The only way the platform could support such diverse applications is by being built upon a uniquely versatile data architecture. So it’s no surprise that the company chose MongoDB Atlas as its underlying database.

Creating the buzz

Buzzy’s mission is simple but powerful: to democratize the creation of applications by making the process accessible to everyone, regardless of technical expertise. Founder Adam Ginsburg—a self-described husband, father, surfer, geek, and serial entrepreneur—spent years building solutions for other businesses. After building and selling an application that eventually became the IBM Web Content Manager, he created a platform allowing anyone to build custom applications quickly and easily. Buzzy initially focused on white-label technology for B2B applications, which global vendors brought to market. Over time, the platform evolved into something much bigger.

The traditional method of developing software, as Ginsburg puts it, is dead. Ginsburg observed two major trends that contributed to this shift: the rise of artificial intelligence (AI) and the design-centric approach to product development exemplified by tools like Figma.

Buzzy set out to address two major problems. First, traditional software development is often slow and costly. Small-to-medium-sized business (SMB) projects can cost anywhere from $50,000 to $250,000 and take nine months to complete. Due to these high costs and lengthy timelines, many projects either fail to start or run out of resources before they’re finished. The second issue is that while AI has revolutionized many aspects of development, it isn’t a cure-all for generating vast amounts of code. Generating tens of thousands of lines of code using AI is not only unreliable but also lacks the security and robustness that enterprise applications demand. Additionally, the code generated by AI often can’t be maintained or supported effectively by IT teams. This is where Buzzy found a way to harness AI effectively, using it in a co-pilot mode to create maintainable, scalable applications.

Buzzy’s original vision was focused on improving communication and collaboration through custom applications. Over time, the platform’s mission shifted toward no-code development, recognizing that these custom apps were key drivers of collaboration and business effectiveness.

The Buzzy UX is highly streamlined so even non-technical users can leverage the power of AI in their apps.

Initially, Buzzy's offerings were somewhat rudimentary, producing functional but unpolished B2B apps. However, the platform soon evolved. Instead of building its own user experience (UX) and user interface (UI) capabilities, Buzzy integrated with Figma, giving users access to the design-centric workflow they were already familiar with. The advent of large language models (LLMs) provided another boost to the platform, enabling Buzzy to accelerate AI-powered development.
What sets Buzzy apart is its unique approach to building applications. Unlike traditional development, where code and application logic are often intertwined, Buzzy separates the "app definition" from the "core code." This distinction allows for significant benefits, including scalability, maintainability, and better integration with AI. Instead of handing massive chunks of code to an AI system—which can result in errors and inefficiencies—Buzzy gives the AI a concise, consumable description of the application, making it easier to work with. Meanwhile, the core code, written and maintained by humans, remains robust, secure, and high-performing. This approach not only simplifies AI integration but also ensures that updates made to Buzzy’s core code benefit all customers simultaneously, an efficiency that few traditional development teams can achieve.

Flexible platform, fruitful partnership

The partnership between Buzzy and MongoDB has been crucial to Buzzy’s success. MongoDB’s Atlas developer data platform provides a scalable, cost-effective solution that supports Buzzy’s technical needs across various applications. One of the standout features of MongoDB Atlas is its flexibility and scalability, which allows Buzzy to customize schemas to suit the diverse range of applications the platform supports. Additionally, MongoDB’s support—particularly with new features like Atlas Vector Search—has allowed Buzzy to grow and adapt without complicating its architecture.

In terms of technology, Buzzy’s stack is built for flexibility and performance. The platform uses Kubernetes and Docker running Node.js, with MongoDB as the database. Native clients are powered by React Native, using SQLite and WebSockets for communication with the server. On the AI side, Buzzy leverages several models, with OpenAI as the primary engine for fine-tuning its AI capabilities.

Thanks to the MongoDB for Startups program, Buzzy has received critical support, including Atlas credits, consulting, and technical guidance, helping the startup continue to grow and scale. With the continued support of MongoDB and an innovative approach to no-code development, Buzzy is well-positioned to remain at the forefront of the AI-driven application development revolution.

A Buzzy future

Buzzy embodies the spirit of innovation in its own software development lifecycle (SDLC). The company is about to release two game-changing features that will take AI-driven app development to the next level: Buzzy FlexiBuild, which will allow users to build more complex applications using just AI prompts, and Buzzy Automarkup, which will allow Figma users to easily mark up screens, views, lists, forms, and actions with AI in minutes.

Ready to start bringing your own app visions to life? Try Buzzy and start building your application in minutes for free. To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start guide.

October 18, 2024
Artificial Intelligence

Announcing Hybrid Search Support for LlamaIndex

MongoDB is excited to announce enhancements to our LlamaIndex integration. By combining MongoDB’s robust database capabilities with LlamaIndex’s innovative framework for context-augmented large language models (LLMs), the enhanced MongoDB-LlamaIndex integration unlocks new possibilities for generative AI development. Specifically, it supports vector (powered by Atlas Vector Search), full-text (powered by Atlas Search), and hybrid search, enabling developers to blend precise keyword matching with semantic search for more context-aware applications, depending on their use case.

Building AI applications with LlamaIndex

LlamaIndex is one of the world’s leading AI frameworks for building with LLMs. It streamlines the integration of external data sources, allowing developers to combine LLMs with relevant context from various data formats. This makes it ideal for building application features like retrieval-augmented generation (RAG), where accurate, contextual information is critical. LlamaIndex empowers developers to build smarter, more responsive AI systems while reducing the complexities involved in data handling and query management.

Advantages of building with LlamaIndex include:

- Simplified data ingestion, with connectors that integrate structured databases, unstructured files, and external APIs, removing the need for manual processing or format conversion.
- Organizing data into structured indexes or graphs, significantly enhancing query efficiency and accuracy, especially when working with large or complex datasets.
- An advanced retrieval interface that responds to natural language prompts with contextually enhanced data, improving accuracy in tasks like question-answering, summarization, or data retrieval.
- Customizable APIs that cater to all skill levels: high-level APIs enable quick data ingestion and querying for beginners, while lower-level APIs offer advanced users full control over connectors and query engines for more complex needs.

MongoDB's LlamaIndex integration

Developers can build powerful AI applications using LlamaIndex as a foundational AI framework alongside MongoDB Atlas as the long-term memory database. With MongoDB’s developer-friendly document model and powerful vector search capabilities within MongoDB Atlas, developers can easily store and search vector embeddings for building RAG applications. And because of MongoDB’s low-latency transactional persistence capabilities, developers can do a lot more with the MongoDB integration in LlamaIndex to build AI applications in an enterprise-grade manner.

LlamaIndex's flexible architecture supports customizable storage components, allowing developers to leverage MongoDB Atlas as a powerful vector store and a key-value store. Using Atlas Vector Search capabilities, developers can:

- Store and retrieve vector embeddings efficiently (llama-index-vector-stores-mongodb)
- Persist ingested documents (llama-index-storage-docstore-mongodb)
- Maintain index metadata (llama-index-storage-index-store-mongodb)
- Store key-value pairs (llama-index-storage-kvstore-mongodb)

Figure adapted from Liu, Jerry and Agarwal, Prakul (May 2023). “Build a ChatGPT with your Private Data using LlamaIndex and MongoDB”. Medium. https://medium.com/llamaindex-blog/build-a-chatgpt-with-your-private-data-using-llamaindex-and-mongodb-b09850eb154c

Adding hybrid and full-text search support

Developers may use different approaches to search for different use cases.
Full-text search retrieves documents by matching exact keywords or linguistic variations, making it efficient for quickly locating specific terms within large datasets, such as in legal document review where exact wording is critical. Vector search, on the other hand, finds content that is ‘semantically’ similar, even if it does not contain the same keywords. Hybrid search combines full-text search with vector search to identify both exact matches and semantically similar content. This approach is particularly valuable in advanced retrieval systems or AI-powered search engines, enabling results that are both precise and aligned with the needs of the end user.

This integration makes it simple for developers to try out powerful retrieval capabilities on their data and improve the accuracy of their AI applications. In the LlamaIndex integration, the MongoDBAtlasVectorSearch class is used for vector search. To enable full-text search, use VectorStoreQueryMode.TEXT_SEARCH in the same class; similarly, to use hybrid search, enable VectorStoreQueryMode.HYBRID (a short sketch appears at the end of this post). To learn more, check out the GitHub repository.

With the MongoDB-LlamaIndex integration’s support, developers no longer need to navigate the intricacies of Reciprocal Rank Fusion implementation or determine the optimal way to combine vector and text searches—we’ve taken care of the complexities for you. The integration also includes sensible defaults and robust support, ensuring that building advanced search capabilities into AI applications is easier than ever. This means that MongoDB handles the intricacies of storing and querying your vectorized data, so you can focus on building!

We’re excited for you to work with our LlamaIndex integration. Here are some resources to expand your knowledge on this topic:

- Check out how to get started with our LlamaIndex integration
- Build a content recommendation system using MongoDB and LlamaIndex with our helpful tutorial
- Experiment with building a RAG application with LlamaIndex, OpenAI, and our vector database
- Learn how to build with private data using LlamaIndex, guided by one of its co-founders
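As promised above, here is a minimal sketch of switching between vector, full-text, and hybrid search with the integration. It assumes the llama-index-vector-stores-mongodb package and a configured Atlas cluster; the connection string, database, collection, and index names are placeholders, and constructor parameter names may vary slightly across integration versions.

```python
# A minimal sketch of hybrid search via the MongoDB-LlamaIndex integration.
import pymongo
from llama_index.core import VectorStoreIndex
from llama_index.core.vector_stores.types import VectorStoreQueryMode
from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch

client = pymongo.MongoClient("<your-atlas-connection-string>")  # placeholder

store = MongoDBAtlasVectorSearch(
    client,
    db_name="rag_db",                    # hypothetical database name
    collection_name="documents",         # hypothetical collection name
    vector_index_name="vector_index",    # Atlas Vector Search index name
    fulltext_index_name="search_index",  # Atlas Search (full-text) index name
)

# Assumes an embedding model is configured (OpenAI by default via
# OPENAI_API_KEY) so queries can be vectorized.
index = VectorStoreIndex.from_vector_store(store)

# Vector search is the default; pass a query mode for full-text or hybrid.
hybrid_engine = index.as_query_engine(
    vector_store_query_mode=VectorStoreQueryMode.HYBRID,
    similarity_top_k=5,
)
print(hybrid_engine.query("What did the contract say about termination?"))
```

Under the hood, the hybrid mode fuses the two result lists for you, which is why no manual Reciprocal Rank Fusion or score combination is needed.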

October 17, 2024
Artificial Intelligence

From Chaos to Control: Real-Time Data Analytics for Airlines

Delays are a significant challenge for the airline industry. They disrupt travel plans, erode customer loyalty, and inflict significant financial losses. In an industry built on precision and punctuality, even minor setbacks can have cascading effects. Whether due to adverse weather conditions or unforeseen technical issues, these delays ripple through flight schedules, affecting both passengers and operations managers. While neither group is typically at fault, the ability to quickly reallocate resources and return to normal operations is crucial. To mitigate these disruptions and restore passenger trust, airlines must have the tools and strategies to quickly identify delays and efficiently reallocate resources. This blog explores how a unified platform with real-time data analysis can be a game-changer in this regard, especially when it comes to saving costs.

The high cost of delays

Delays from disruptions, like weather events or crew unavailability, pose major challenges for the airline industry. According to some studies, these delays have a significant financial impact, costing European airlines an average of €4,320 per hour per flight. They also create operational challenges like crew disruptions and reduced airplane availability, leading to further delays, which is known in the industry as delay propagation.

To address these challenges, airlines have traditionally focused on optimizing their pre-flight planning processes. However, while planning is crucial, effective recovery strategies are equally essential for minimizing the impact of disruptions. Unfortunately, many airlines have underinvested in recovery systems, leaving them ill-prepared to respond to unexpected events. The consequences of this imbalance include:

- Delay propagation: Initial delays can cascade, causing widespread schedule disruptions.
- Financial and operational damage: Increased costs and inefficiencies strain airline resources.
- Customer dissatisfaction: Poor disruption management leads to negative passenger experiences.

The power of real-time data analysis

In response to the significant challenges posed by flight delays, a real-time unified platform offers a powerful solution designed to enhance how airlines manage disruptions.

Event-driven architectural approach

The diagram below showcases an event-driven architecture that can be used to build a robust, decoupled platform that supports real-time data flow between microservices. In an event-driven architecture, services or components communicate by producing and consuming events, which is why this architecture relies on Pub/Sub (messaging middleware) to manage data flows. Moreover, MongoDB’s flexible document model and ability to handle high volumes of data make it ideal for event-driven systems. Combined with Pub/Sub, these features offer a powerful solution for modern applications that require scalability, flexibility, and real-time processing.

Figure 1: Application architecture

In this architecture, the blue line in the diagram shows the operational data flow. The data simulation is triggered by the application’s front end and is initialized in the FastAPI microservice. The microservice, in turn, starts publishing airplane sensor data to the custom Pub/Sub topics, which forward these data to the rest of the architecture’s components, such as cloud functions, for data transformation and processing. The data is processed in each microservice, including the creation of analytical data, as shown by the green lines in the diagram.
Afterward, data is introduced into MongoDB and fetched by the application to provide the user with organized, up-to-date information about each flight. This leads to more precise and detailed analysis of real-time data for flight operations managers. As a result, new and improved opportunities for resource reallocation can be explored, helping to minimize delays and reduce associated costs for the company.

Microservice overview

As mentioned earlier, the primary goal is to create an event-driven, decoupled architecture founded on MongoDB and Google Cloud services integrations. The following components contribute to this:

- FastAPI: Serves as the main data source, generating data for analytical insights, predictions, and simulation.
- Telemetry data: Pulls and transforms operational data published in the Pub/Sub topic in real time, storing it in a MongoDB time series collection for aggregation and optimization (see the sketch after this list).
- Application data: Subscribed to a different Pub/Sub topic, this service acknowledges static operational data, including the initial route, recalculated route, and disruption status, so it is only triggered when any of those fields change. Finally, this data is updated in its MongoDB collection accordingly.
- Vertex AI integration (analytical data flow): A cloud function triggered by Pub/Sub messages that executes data transformations and forwards data to the machine learning (ML) model deployed on Vertex AI. Predictions are then stored in MongoDB.
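As a rough illustration of the telemetry path described above, the sketch below consumes sensor events from a Pub/Sub subscription and writes them to a MongoDB time series collection. The project, subscription, collection, and event field names are all hypothetical, as is the shape of the sensor payload.

```python
# A minimal sketch of the telemetry microservice: Pub/Sub in, time series out.
import json
from datetime import datetime, timezone

from google.cloud import pubsub_v1
from pymongo import MongoClient

client = MongoClient("<your-atlas-connection-string>")  # placeholder URI
db = client["flight_ops"]

# Create the time series collection on first run.
if "telemetry" not in db.list_collection_names():
    db.create_collection(
        "telemetry",
        timeseries={"timeField": "ts", "metaField": "flight", "granularity": "seconds"},
    )

def handle_message(message: pubsub_v1.subscriber.message.Message) -> None:
    event = json.loads(message.data)  # hypothetical payload shape below
    db["telemetry"].insert_one(
        {
            "ts": datetime.fromtimestamp(event["timestamp"], tz=timezone.utc),
            "flight": {"flight_id": event["flight_id"]},
            "sensors": event["sensors"],  # e.g., fuel flow, engine temperature
        }
    )
    message.ack()  # acknowledge only after a successful insert

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("<gcp-project>", "telemetry-sub")
future = subscriber.subscribe(subscription, callback=handle_message)
future.result()  # block and process messages until interrupted
```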
MongoDB: A flexible, scalable, and real-time data solution

Building a unified real-time platform for the airline industry requires efficient management of massive, diverse datasets. From aircraft sensor data to flight cost calculations, data processing and management are central to operations. To meet these demands, the platform needs a flexible data platform capable of handling multiple data types and integrating with various systems. This enables airlines to extract valuable insights from their data and develop features that improve operations and the passenger experience.

Real-time data processing is a must-have feature. It allows airlines to receive immediate alerts about delays, minimizing disruptions and ensuring smooth operations. In fast-paced airport environments, where every minute counts, real-time data processing is indispensable. For example, integrating MongoDB with Google Cloud's Vertex AI allows for the real-time processing and storage of airplane sensor data, transforming it into actionable insights.

Business benefits

This solution provides real-time access to critical flight data, enabling efficient cost management and operational planning. Immediate access to this information allows flight operations managers to plan ahead, reallocate existing resources, or even initiate recovery procedures to diminish the consequences of an identified delay. Moreover, its ML model customization ensures adaptability to various use cases. Regarding the platform’s long-term sustainability, it has been purposely designed to integrate highly reliable and scalable products so as to excel against three key standards:

Scalability

The platform’s compatibility with both horizontal and vertical scaling is clearly demonstrated by its overall design. The decoupled architecture divides the solution into different components—and therefore instances—that work together as a cohesive whole. Vertical scalability can be achieved by simply increasing the computing power allocated to the deployed Vertex AI model, if needed.

Availability

The decoupled architecture also exemplifies the central importance of availability in any project’s design. Using different tracks to introduce operational and analytical data into the database allows us to handle any issues in a way that remains unnoticeable to end users.

Latency

Optimizing the connections between components and integrations within the product is key to achieving the desired results. Using Pub/Sub as our asynchronous messaging service helps minimize unnecessary delays and avoid holding resources needlessly.

Get started!

To sum up, this blog has explored how MongoDB can be integrated into an airline flight management system, offering significant benefits in terms of cost savings and enhanced customer experience. Check out our AI resource page to learn more about building AI-powered apps with MongoDB, and try out the demo yourself via this repo. To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start page.

October 15, 2024
Artificial Intelligence

Building Gen AI with MongoDB & AI Partners | September 2024

Last week I was in London for MongoDB.local London—the 19th stop of the 2024 MongoDB.local tour—where MongoDB, our customers, and our AI partners came together to share solutions we’ve been building that enable companies to accelerate their AI journey. I love attending these events because they offer an opportunity to celebrate our collective achievements, and because it’s great to meet so many (mainly Zoom) friends in person!

One of the highlights of MongoDB.local London 2024 was the release of our reference architecture with our MAAP partners AWS and Anthropic, which supports memory-enhanced AI agents. This architecture is already helping businesses streamline complex processes and develop smarter, more responsive applications. We also announced a robust set of vector quantization capabilities in MongoDB Atlas Vector Search that will help developers build powerful semantic search and generative AI applications with more scale—and at a lower cost. Now, with support for the ingestion of scalar quantized vectors, you can import and work with quantized vectors from your embedding model providers of choice, including MAAP partners Cohere, Nomic, and others.

A big thank you to all of MongoDB’s AI partners, who continually amaze me with their innovation. MongoDB.local London was another great reminder of the power of collaboration, and I’m excited for what lies ahead as we continue to shape the future of AI together. As the Brits say: Cheers!

Welcoming new AI and tech partners

In September we also welcomed seven new AI and tech partners that offer product integrations with MongoDB. Read on to learn more about each great new partner!

Arize

Arize AI is a platform that helps organizations visualize and debug the flow of data through AI applications by quickly identifying bottlenecks in LLM calls and understanding agentic paths.

"At Arize AI, we are committed to helping AI teams build, evaluate, and troubleshoot cutting-edge agentic systems. Partnering with MongoDB allows us to provide a comprehensive solution for managing the memory and retrieval that these systems rely on,” said Jason Lopatecki, co-founder and CEO of Arize AI. “With MongoDB’s robust vector search and flexible document storage, combined with Arize’s advanced observability and evaluation tools, we’re empowering developers to confidently build and deploy AI applications."

Baseten

Baseten provides the applied AI research and infrastructure needed to serve custom and open-source machine learning models performantly, scalably, and cost-efficiently.

"We're excited to partner with MongoDB to combine their scalable vector database with Baseten's high-performance inference infrastructure and high-performance models. Together, we're enabling companies to build and deploy generative AI applications, such as RAG apps, that not only scale infinitely but also deliver optimal performance per dollar,” said Tuhin Srivastava, CEO of Baseten. “This partnership empowers developers to bring mission-critical AI solutions to market faster, while maintaining cost-effectiveness at every stage of growth."

Doppler

Doppler is a cloud-based platform that helps teams manage, organize, and secure secrets across environments and applications, and it can be used throughout the entire development lifecycle.

“Doppler rigorously focuses on making the easy path the most secure path for developers. This is only possible with deep product partnerships with all the tooling developers have come to love.
We are excited to join forces with MongoDB to make zero-downtime secrets rotation for non-relational databases effortlessly simple to set up and maintenance-free,” said Brian Vallelunga, founder and CEO of Doppler. “This will immediately bolster the security posture of a company’s most sensitive data without any additional overhead or distractions."

Haize Labs

Haize Labs automates language model stress testing at massive scales to discover and eliminate failure modes. This, alongside their inference-time mitigations and observability tools, enables the risk-free adoption of AI.

"We're thrilled to partner with MongoDB in empowering companies to build RAG applications that are powerful yet secure, safe, and reliable,” said Leonard Tang, co-founder and CEO of Haize Labs. “MongoDB Atlas has streamlined the process of developing production-ready GenAI systems, and we're excited to work together to accelerate customers' journey to trust and confidence in their GenAI initiatives."

Modal

Modal is a serverless platform for data and AI/ML engineers to run and deploy code in the cloud without having to think about infrastructure. Run generative AI models, large-scale batch jobs, job queues, and more, all faster than ever before.

“The coming wave of intelligent applications will be built on the potent combination of foundation models, large-scale data, and fast search,” explained Charles Frye, AI Engineer at Modal. “MongoDB Atlas provides an excellent platform for storing, querying, and searching data, from hot new techniques like vector indices to old standbys like lexical search. It's the perfect counterpart to Modal's flexible compute, like serverless GPUs. Together, MongoDB and Modal make it easy to get started with this new paradigm, and then they make it easy to scale it out to millions of users querying billions of records & maxing out thousands of GPUs.”

Portkey AI

Portkey AI is an AI gateway and observability suite that helps companies develop, deploy, and manage LLM-based applications.

"Our partnership with MongoDB is a game-changer for organizations looking to operationalize AI at scale. By combining Portkey's LLMOps expertise with MongoDB's comprehensive data solution, we're enabling businesses to deploy, manage, and scale AI applications with unprecedented efficiency and control,” said Ayush Garg, Chief Technology Officer of Portkey AI. “Together, we're not just streamlining the path from POC to production; we're setting a new standard for how businesses can leverage AI to drive innovation and deliver tangible value."

Reka

Reka offers fully multimodal models spanning images, videos with audio, text, and documents to empower AI agents that can see, hear, and speak.

"At Reka, we know how challenging it can be to retrieve information buried in unstructured multimodal data. We are excited to join forces with MongoDB to help companies test and optimize multimodal RAG features for faster production deployment,” said Dani Yogatama, CEO of Reka. “Our models understand and reason over multimodal data, including text, tables, and images in PDF documents or conversations in videos. Our joint solution streamlines the whole RAG development lifecycle, speeding up time to market and helping companies deliver real value to their customers faster."

But wait, there's more!

To learn more about building AI-powered apps with MongoDB, check out our AI Resources Hub, and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.

October 9, 2024
Artificial Intelligence

Introducing Two MongoDB Generative AI Learning Badges

Want to boost your resume quickly? MongoDB is introducing two new Learning Badges: Building Gen AI Apps and Deploying and Evaluating Gen AI Apps. Unlike high-stakes certifications, which cover a large breadth and depth of subjects, these digital credentials are focused on specific topics, making them easier and quicker to earn. Best of all, they’re free!

The Building Gen AI Applications with MongoDB Learning Badge validates users’ knowledge of developing gen AI applications using MongoDB Atlas Vector Search. It recognizes your understanding of semantic search and how to build chatbots with retrieval-augmented generation (RAG), MongoDB, and LangChain.

The Deploying and Evaluating Gen AI Applications with MongoDB Learning Badge validates users’ knowledge of optimizing the performance and evaluating the results of gen AI applications. It recognizes your understanding of chunking strategies, performance evaluation techniques, and deployment options within MongoDB for both prototyping and production stages.

Learn, prepare, and earn

To earn your badge, simply complete the Learning Badge Path and take a short assessment at the end. Once you pass, you'll receive an email with your official Credly badge and digital certificate. You can share it on social media, in email signatures, or on digital resumes. Additionally, you'll gain inclusion in the Credly Talent Directory, where you will be visible to recruiters from top employers, which can open up new career opportunities.

Learning paths are curated roadmaps that guide you through the essential concepts and skills needed for the assessment. Each badge has its own learning path:

- Building Gen AI Apps Learning Badge Path: This learning path guides you through the foundations of building a gen AI application with MongoDB Atlas Vector Search. You'll learn what semantic search is and how you can leverage it across a variety of use cases. Then you'll learn how to build your own chatbot by creating a RAG application with MongoDB and LangChain.
- Deploying and Evaluating Gen AI Apps Learning Badge Path: This learning path will help you take a gen AI application from creation to full deployment, with a focus on optimizing performance and evaluating results. You'll explore chunking strategies, performance evaluation techniques, and deployment options in MongoDB for both prototyping and production stages. We recommend completing the Building Gen AI Apps Learning Badge Path before beginning this path.

Badge up with MongoDB

MongoDB Learning Badges offer a valuable opportunity to showcase your commitment to continuous learning and expertise in specific topics. These digital credentials not only recognize your educational achievements but also serve as a testament to your knowledge and skills. Whether you're a seasoned developer, an aspiring data scientist, or an enthusiastic student, earning a MongoDB badge can significantly enhance your profile and open up new opportunities in your field.

Start earning your badges today—it’s quick, effective, and free! Visit MongoDB Learning Badges to begin your journey toward becoming a gen AI application expert and boosting your career prospects.

October 8, 2024
Artificial Intelligence

Vector Quantization: Scale Search & Generative AI Applications

We are excited to announce a robust set of vector quantization capabilities in MongoDB Atlas Vector Search. These capabilities will reduce vector sizes while preserving performance, enabling developers to build powerful semantic search and generative AI applications with more scale—and at a lower cost. In addition, unlike relational or niche vector databases, MongoDB’s flexible document model—coupled with quantized vectors—allows for greater agility in testing and deploying different embedding models quickly and easily. Support for scalar quantized vector ingestion is now generally available and will be followed by several new releases in the coming weeks. Read on to learn how vector quantization works, and visit our documentation to get started!

The challenges of large-scale vector applications

While the use of vectors has opened up a range of new possibilities, such as content summarization and sentiment analysis, natural language chatbots, and image generation, unlocking insights within unstructured data can require storing and searching through billions of vectors—which can quickly become infeasible. Vectors are effectively arrays of floating-point numbers representing unstructured information in a way that computers can understand; an application may store anywhere from a few hundred to billions of these arrays, and as the number of vectors increases, so does the index size required to search over them. As a result, large-scale vector-based applications using full-fidelity vectors often have high processing costs and slow query times, hindering their scalability and performance.

Vector quantization for cost-effectiveness, scalability, and performance

Vector quantization, a technique that compresses vectors while preserving their semantic similarity, offers a solution to this challenge. Imagine converting a full-color image into grayscale to reduce storage space on a computer. This involves simplifying each pixel's color information by grouping similar colors into primary color channels or "quantization bins," and then representing each pixel with a single value from its bin. The binned values are then used to create a new grayscale image that is smaller in size but retains most of the original detail, as shown in Figure 1.

Figure 1: Illustration of quantizing an RGB image into grayscale

Vector quantization works similarly, shrinking full-fidelity vectors into fewer bits to significantly reduce memory and storage costs without compromising the important details. Maintaining this balance is critical, as search and AI applications need to deliver relevant insights to be useful. Two effective quantization methods are scalar (converting each floating-point value into an integer) and binary (converting each floating-point value into a single bit, 0 or 1).

Current and upcoming quantization capabilities will empower developers to maximize the potential of Atlas Vector Search. The most impactful benefit of vector quantization is increased scalability and cost savings through reduced computing resources and efficient processing of vectors. And when combined with Search Nodes—MongoDB’s dedicated infrastructure for independent scalability through workload isolation and memory-optimized infrastructure for semantic search and generative AI workloads—vector quantization can further reduce costs and improve performance, even at the highest volume and scale, unlocking more use cases.

"Cohere is excited to be one of the first partners to support quantized vector ingestion in MongoDB Atlas,” said Nils Reimers, VP of AI Search at Cohere.
“Embedding models, such as Cohere Embed v3, help enterprises see more accurate search results based on their own data sources. We’re looking forward to providing our joint customers with accurate, cost-effective applications for their needs.”

In our tests, compared to full-fidelity vectors, BSON-type vectors—MongoDB’s JSON-like binary serialization format for efficient document storage—reduced storage size by 66% (from 41 GB to 14 GB). And as shown in Figures 2 and 3, the tests illustrate significant memory reduction (73% to 96% less) and latency improvements using quantized vectors, where scalar quantization preserves recall performance and binary quantization’s recall performance is maintained with rescoring, a process of evaluating a small subset of the quantized outputs against full-fidelity vectors to improve the accuracy of the search results.

Figure 2: Significant storage reduction + good recall and latency performance with quantization on different embedding models

Figure 3: Remarkable improvement in recall performance for binary quantization when combined with rescoring

In addition, thanks to the reduced cost advantage, vector quantization facilitates more advanced, multi-vector use cases that would have been too computationally taxing or cost-prohibitive to implement. For example, vector quantization can help users:

- Easily A/B test different embedding models using multiple vectors produced from the same source field during prototyping. MongoDB’s document model—coupled with quantized vectors—allows for greater agility at lower costs. The flexible document schema lets developers quickly deploy and compare embedding models’ results without the need to rebuild the index or provision an entirely new data model or set of infrastructure.
- Further improve the relevance of search results or context for large language models (LLMs) by incorporating vectors from multiple sources of relevance, such as different source fields (product descriptions, product images, etc.) embedded within the same or different models.

How to get started, and what’s next

Now, with support for the ingestion of scalar quantized vectors, developers can import and work with quantized vectors from their embedding model providers of choice (such as Cohere, Nomic, Jina, Mixedbread, and others)—directly in Atlas Vector Search (a short ingestion sketch appears at the end of this post). Read the documentation and tutorial to get started.

And in the coming weeks, additional vector quantization features will equip developers with a comprehensive toolset for building and optimizing applications with quantized vectors:

- Support for ingestion of binary quantized vectors will enable further reduction of storage space, allowing for greater cost savings and giving developers the flexibility to choose the type of quantized vectors that best fits their requirements.
- Automatic quantization and rescoring will provide native capabilities for scalar quantization as well as binary quantization with rescoring in Atlas Vector Search, making it easier for developers to take full advantage of vector quantization within the platform.

With support for quantized vectors in MongoDB Atlas Vector Search, you can build scalable and high-performing semantic search and generative AI applications with flexibility and cost-effectiveness. Check out the documentation and tutorial to get started, and head over to our quick-start guide to begin with Atlas Vector Search today.
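As referenced above, here is a minimal sketch of what scalar quantized (int8) vector ingestion can look like with PyMongo’s BSON vector support. It assumes a recent PyMongo release (4.10 or later); the collection, field names, and toy vector values are hypothetical stand-ins for output from an embedding provider that supports scalar quantization.

```python
# A minimal sketch of ingesting a scalar quantized vector as a BSON vector.
from bson.binary import Binary, BinaryVectorDtype
from pymongo import MongoClient

client = MongoClient("<your-atlas-connection-string>")  # placeholder URI
collection = client["search_db"]["articles"]            # hypothetical names

# Toy int8 embedding; real quantized vectors have hundreds of dimensions.
int8_embedding = [12, -87, 45, 3, -128, 127]

collection.insert_one(
    {
        "title": "Vector quantization overview",
        # Stored in MongoDB's compact BSON vector format for efficient indexing.
        "embedding": Binary.from_vector(int8_embedding, BinaryVectorDtype.INT8),
    }
)
```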
In addition, thanks to its cost advantages, vector quantization enables more advanced, multi-vector use cases that would previously have been too computationally taxing or cost-prohibitive to implement. For example, vector quantization can help users:

Easily A/B test different embedding models using multiple vectors produced from the same source field during prototyping. MongoDB’s document model , coupled with quantized vectors, allows for greater agility at lower cost: the flexible document schema lets developers quickly deploy and compare embedding models’ results without rebuilding the index or provisioning an entirely new data model or set of infrastructure.

Further improve the relevance of search results or context for large language models (LLMs) by incorporating vectors from multiple sources of relevance, such as different source fields (product descriptions, product images, etc.) embedded with the same or different models.

How to get started, and what’s next

Now, with support for the ingestion of scalar quantized vectors, developers can import and work with quantized vectors from their embedding model providers of choice (such as Cohere, Nomic, Jina, Mixedbread, and others) directly in Atlas Vector Search. Read the documentation and tutorial to get started. And in the coming weeks, additional vector quantization features will equip developers with a comprehensive toolset for building and optimizing applications with quantized vectors:

Support for ingestion of binary quantized vectors will enable further reductions in storage space, allowing for greater cost savings and giving developers the flexibility to choose the type of quantized vector that best fits their requirements.

Automatic quantization and rescoring will provide native capabilities for scalar quantization, as well as binary quantization with rescoring, in Atlas Vector Search, making it easier for developers to take full advantage of vector quantization within the platform.

With support for quantized vectors in MongoDB Atlas Vector Search, you can build scalable, high-performing semantic search and generative AI applications with flexibility and cost-effectiveness. Check out our documentation and tutorial to get started, then head over to our quick-start guide to start using Atlas Vector Search today.
October 7, 2024
Artificial Intelligence

Bringing Gen AI Into The Real World with Ramblr and MongoDB

How do you bring the benefits of gen AI, a technology typically experienced through a keyboard and screen, into the physical world? That's the problem the team at Ramblr.ai , a San Francisco-based startup, is solving with its powerful and versatile 3D annotation and recognition capabilities. “With Ramblr you can record continuously what you are doing, and then ask the computer, in natural language, ‘Where did I go wrong?’ or ‘What should I do next?’” said Frank Angermann, Lead Pipeline & Infrastructure Engineer at Ramblr.ai.

Gen AI for the real world

One of the best examples of Ramblr’s technology, and its potential, is its work with the international chemical giant BASF. In a video demonstration on Ramblr’s website, a BASF engineer can be seen tightening bolts on a connector (or ‘flange’) joining two parts of a pipeline. Every move the engineer makes is recorded via a helmet-mounted camera. Once the worker is finished for the day, this footage, along with the footage from every other person working on the pipeline, is uploaded to a database. Using Ramblr’s technology, quality assurance engineers from BASF can then query the collected footage, asking the software to ‘assess footage from today’s pipeline connection work and see if any of the bolts were not tightened enough.’ Having processed the footage, Ramblr assesses whether the flanges were assembled correctly and identifies any that require further inspection or correction.

The method behind the magic

“We started Ramblr.ai as an annotation platform, a place where customers could easily label images from a video and have machine learning models then identify that annotation throughout the video automatically,” said Frank. “In the past this work would be carried out manually by thousands of low-paid workers tagging videos by hand. We thought we could do better by automating that process,” he added. The software allows customers to easily customize and add annotations to footage for their particular use case; with its gen AI-powered active learning approach, Ramblr then ‘fills in’ the rest of the video based on those annotations.

Why MongoDB?

MongoDB has been part of the Ramblr technology stack since the beginning. “We use MongoDB Atlas for half of our storage processes. Metadata, annotation data, etc., can all be stored in the same database. This means we don’t have to rely on separate databases to store different types of data,” said Frank. Flexibility of data storage was also a key consideration when choosing a database. “With MongoDB Atlas, we could store information the way we wanted to,” he added. The built-in vector database capabilities of Atlas were also appealing to the Ramblr team: “The ability to store vector embeddings without having to do any more work, for instance not having to move a 3 MB array of data somewhere else to process it, was a big bonus for us.”
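As a rough illustration of the pattern Frank describes, where metadata, annotations, and embeddings live together in one document, here is a minimal PyMongo sketch. The database, collection, and field names are hypothetical, not Ramblr's actual schema, and the embedding is truncated for readability.

```python
from pymongo import MongoClient

client = MongoClient("<your-atlas-connection-string>")  # placeholder URI
frames = client["video_platform"]["frames"]             # hypothetical names

# Metadata, human annotations, and the frame's embedding in a single document,
# so no second database is needed for the vector data
frames.insert_one({
    "video_id": "site-42/2024-09-12/helmet-cam-3",
    "timestamp_ms": 184250,
    "annotations": [
        {"label": "flange_bolt", "bbox": [412, 198, 96, 96], "source": "human"},
    ],
    "embedding": [0.021, -0.345, 0.118],                 # truncated for readability
})
```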
The future

Aside from infrastructure and construction Q&A, robotics is another area in which the Ramblr team is eager to deploy its technology. “Smaller robotics companies don’t typically have the data to train the models that inform their products. There are quite a few use cases where we could support these companies and provide a more efficient and cost-effective way to teach their robots. We are extremely efficient in providing information for object detectors,” said Frank.

But while there are plenty of commercial uses for Ramblr’s technology, the growth of spatial computing in the consumer sector, especially following the release of Apple’s Vision Pro and Meta Quest headsets, opens up a whole new category of use cases. “Spatial computing will be a big part of the world. Being able to understand the particular processes, taxonomy, and what the person is actually seeing in front of them will be a vital part of the next wave of innovation in user interfaces and the evolution of gen AI,” Frank added.

Are you building AI apps? Join the MongoDB AI Innovators Program today! Successful participants gain access to free Atlas credits, technical enablement, and invaluable connections within the broader AI ecosystem. If your company is interested in being featured, we’d love to hear from you. Connect with us at ai_adopters@mongodb.com.

September 30, 2024
Artificial Intelligence

AI-Driven Noise Analysis for Automotive Diagnostics

Aftersales service is a crucial revenue stream for the automotive industry, with leading manufacturers executing repairs through their dealer networks. Traditional diagnostic methods, however, can be time-consuming, expensive, and imprecise, especially for complex engine issues. One global automotive giant therefore embarked on an ambitious project to revolutionize its diagnostic process: to increase efficiency, customer satisfaction, and revenue throughput, MongoDB’s client envisioned an AI-powered solution that could quickly analyze engine sounds and compare them to a database of known problems, significantly reducing diagnostic times.

Initial setbacks, then a fresh perspective

Despite the client team's best efforts, the project faced significant challenges and setbacks during the nine-month prototype phase. Though the team struggled to produce reliable results, they were determined to make the project a success. At this point, MongoDB introduced its client to Pureinsights , a specialized gen AI implementation and MongoDB AI Application Program partner , to rethink the solution and salvage the project. As new members of the project team, and as Pureinsights’ CTO and Lead Architect, respectively, we brought a fresh perspective to the challenge.

Figure 1: Before and after the AI-powered noise diagnostic solution

A pragmatic approach: Text before sound

Upon review, we discovered that the project had initially started with a text-based approach before being persuaded to switch to sound analysis. The Pureinsights team recommended reverting to text analysis as a foundational step before tackling the more complex audio problem. This strategy involved:

Collecting text descriptions of car problems from technicians and customers.

Comparing these descriptions against a vast database of known issues already stored in MongoDB.

Utilizing advanced natural language processing, semantic/vector search, and Retrieval-Augmented Generation (RAG) techniques to identify similar cases and potential solutions.

Our team tested six different models for cross-lingual semantic similarity, ultimately settling on Google's Gecko model for its superior performance across 11 languages. A sketch of the comparison step follows below.
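As a rough sketch of that comparison step, the query below uses Atlas Vector Search's $vectorSearch aggregation stage to find known issues whose descriptions are semantically close to a new complaint. The index, collection, and field names are hypothetical; the query vector would come from whatever embedding model is in use (Gecko, in this project's case) and is truncated here to placeholder values.

```python
from pymongo import MongoClient

client = MongoClient("<your-atlas-connection-string>")  # placeholder URI
issues = client["diagnostics"]["known_issues"]          # hypothetical names

# Embedding of the new complaint text, e.g. "knocking noise at cold start",
# produced by the embedding model in use; truncated for readability
query_vector = [0.04, -0.11, 0.27]

results = issues.aggregate([
    {
        "$vectorSearch": {
            "index": "issue_text_index",        # Atlas Vector Search index name
            "path": "description_embedding",    # field holding stored embeddings
            "queryVector": query_vector,
            "numCandidates": 200,               # oversample, then keep the best 5
            "limit": 5,
        }
    },
    {"$project": {"summary": 1, "known_fix": 1,
                  "score": {"$meta": "vectorSearchScore"}}},
])
for doc in results:
    print(doc)
```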
Pushing boundaries: Integrating audio analysis

With the text-based foundation in place, we turned to audio analysis. Pureinsights developed an innovative approach by combining our AI expertise with insights from advanced sound analysis research. We drew inspiration from groundbreaking models renowned for their ability to identify cities solely from background noise in audio files, adapting these techniques, originally designed for urban sound identification, to the specific challenges of automotive diagnostics. The result is a robust, scalable system capable of isolating and analyzing engine sounds from various recordings, and the same learnings and adaptations are applicable to future use cases for AI-driven audio analysis across other industries. This expertise was crucial in developing a sophisticated audio analysis model capable of:

Isolating engine and car noises from customer or technician recordings.

Converting these isolated sounds into vectors.

Using these vectors to search the manufacturer's existing database of known car-problem sounds.

At the heart of this solution is MongoDB’s powerful database technology. The system leverages MongoDB’s vector and document stores to manage over 200,000 case files. Each "document" is more akin to a folder or case file containing:

Structured data about the vehicle and reported issue

Sound samples of the problem

Unstructured text describing the symptoms and context

This unified approach allows for seamless comparison of text and audio descriptions of customer engine problems using MongoDB's native vector search technology. A sketch of such a case file appears below.
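As an illustrative sketch only, with hypothetical field names and truncated values rather than the manufacturer's actual schema, a single case-file document combining all three kinds of data might look like this:

```python
# One case file holding structured data, unstructured text, and audio references,
# each with its own embedding; all names and values are hypothetical
case_file = {
    "vehicle": {"make": "ExampleMotors", "model": "GT", "year": 2021},
    "reported_issue": "Rattling noise under hard acceleration",
    "symptom_embedding": [0.12, -0.57, 0.33],        # vector of the text description
    "audio_samples": [
        {
            "recording_uri": "s3://case-files/812/sample-1.wav",
            "sound_embedding": [0.88, -0.10, 0.45],  # vector of the isolated engine noise
        }
    ],
}
```

Because the text vector and the sound vectors live in the same document, a single collection can serve both the text-based and the audio-based similarity searches.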
Encouraging progress and phased implementation

The solution's text component has already been rolled out to several dealers, and the audio similarity feature will be integrated in late 2024. This phased approach allows for real-world testing and refinement before a full-scale deployment across the entire repair network. The client is taking a pragmatic, step-by-step approach to implementation: if the initial partial rollout with audio diagnostics proves successful, the plan is to expand the solution more broadly across the dealer network. This cautious yet forward-thinking strategy aligns with the automotive industry's move towards more data-driven maintenance practices.

As the solution continues to evolve, the team remains focused on enhancing its core capabilities in text and audio analysis for current diagnostic needs. The manufacturer is committed to evaluating the real-world impact of these innovations before considering potential future enhancements. This measured approach ensures that each phase of the rollout delivers tangible benefits in efficiency, accuracy, and customer satisfaction, and the success of the initial rollout will inform future directions and potential expansions of the AI-powered diagnostic system.

A new era in automotive diagnostics

The automotive giant brought industry expertise and a clear vision for improving its aftersales service. MongoDB provided the robust, flexible data platform essential for managing and analyzing diverse, multi-modal data types at scale. We at Pureinsights served as the AI application specialist partner, contributing critical AI and machine learning expertise and bringing fresh perspectives and innovative approaches; we believe our role was pivotal in rethinking the solution and salvaging the project at a crucial juncture. This synergy of strengths allowed the project team to overcome initial setbacks and build a diagnostic tool that leverages text and audio analysis to significantly reduce diagnostic times, increase customer satisfaction, and boost revenue through the dealer network.

The project's success underscores several key lessons:

The value of persistence and flexibility in tackling complex challenges

The importance of choosing the right technology partners

The power of combining domain expertise with technological innovation

The benefits of a phased, iterative approach to implementation

As industries continue to evolve in the age of AI and big data, this collaborative model, bringing together industry leaders, technology providers, and specialized AI partners, sets a new standard for innovation. It demonstrates how companies can leverage partnerships to turn ambitious visions into reality, creating solutions that drive business value while enhancing customer experiences. As this solution evolves and deploys across the global dealer network, it paves the way for a new era of efficiency, accuracy, and customer satisfaction in the automotive industry, and it has the potential to set a similar standard for AI-driven solutions in other industries.

To deliver more solutions like this, and to accelerate gen AI application development for organizations at every stage of their AI journey, Pureinsights has joined the MongoDB AI Application Program (MAAP). Check out the MAAP page to learn more about the program and how MAAP ecosystem members like Pureinsights can help your organization accelerate time-to-market, minimize risks, and maximize the value of your AI investments.

September 27, 2024
Artificial Intelligence
