
AI Agents, Hybrid Search, and Indexing with LangChain and MongoDB

Since we announced our integration with LangChain last year, MongoDB has been building out tooling to help developers create advanced AI applications with LangChain. With recent releases, MongoDB has made it easier to develop agentic AI applications (with a LangGraph integration), perform hybrid search by combining Atlas Search and Atlas Vector Search, and ingest large-scale documents more effectively. For more on each development—plus new support for the LangChain Indexing API—please read on!

The rise of AI agents

Agentic applications have emerged as a compelling next step in the development of AI. Imagine an application able to act on its own, working toward complicated goals and drawing on context to create a strategy. These applications leverage large language models (LLMs) to dynamically determine their execution path, breaking free from the constraints of traditional, deterministic logic. Consider an application tasked with answering a question like "In our most profitable market, what is the current weather?" While a traditional retrieval-augmented generation (RAG) app may falter, unable to obtain information about "current weather," an agentic application shines. It can intelligently deduce the need for an external API call to obtain current weather information, seamlessly integrating this with data retrieved from a vector search to identify the most profitable market. These systems take action and gather additional information with limited human intervention, supplementing what they already know. Building such a system is easier than ever thanks to MongoDB's continued work with LangGraph.

Unleashing the power of AI agents with LangGraph and MongoDB

With LangGraph—a framework for multi-agent orchestration—LangChain is more effective than ever at simplifying the creation of applications using LLMs, including AI agents. These agents require memory to maintain context across multiple interactions, allowing users to engage with them repeatedly while the agent retains information from previous exchanges. While basic agentic applications can use in-memory structures, these structures are not sufficient for more complicated use cases. MongoDB allows developers to build stateful, multi-actor applications with LLMs, storing and retrieving the "checkpoints" needed by LangGraph.js. The new MongoDBSaver class makes integration simpler than ever, as LangGraph.js is able to utilize historical user interactions to enhance agentic AI. By segmenting this history into checkpoints, the library allows for persistent session memory, easier error recovery, and even the ability to "time travel"—letting users jump back in the graph to a previous state to explore alternative execution paths. The MongoDBSaver class builds all of this functionality right into LangGraph.js, with sensible defaults and MongoDB-specific optimizations. To learn more, please visit the source code, the documentation, and our new tutorial (which includes both written and video versions).
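To give a feel for the integration, here is a minimal sketch of wiring checkpointing into a LangGraph.js agent. It assumes the MongoDBSaver export from the @langchain/langgraph-checkpoint-mongodb package and a StateGraph named workflow defined elsewhere; see the tutorial linked above for a complete, working version.

// Minimal sketch: persist LangGraph.js agent state in MongoDB.
// `workflow` (a StateGraph) is assumed to be defined elsewhere.
import { MongoClient } from "mongodb";
import { MongoDBSaver } from "@langchain/langgraph-checkpoint-mongodb";

const client = new MongoClient(process.env.MONGODB_URI); // assumed env var
const checkpointer = new MongoDBSaver({ client, dbName: "langgraph" });

// Compile the graph with the MongoDB-backed checkpointer.
const app = workflow.compile({ checkpointer });

// Each thread_id gets its own persistent history, so a later
// invocation can resume from the stored checkpoints.
const result = await app.invoke(
  { messages: [{ role: "user", content: "What's the weather in our top market?" }] },
  { configurable: { thread_id: "session-42" } }
);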
Improve retrieval accuracy with Hybrid Search Retriever

Hybrid search is particularly well suited for queries that have both semantic and keyword-based components. Let's look at an example: a query such as "find recent scientific papers about climate change impacts on coral reefs that specifically mention ocean acidification". This query would use a hybrid search approach, combining semantic search to identify papers discussing climate change effects on coral ecosystems, keyword matching to ensure "ocean acidification" is mentioned, and potential date-based filtering or boosting to prioritize recent publications. This combination allows for more comprehensive and relevant results than either semantic or keyword search alone could provide. With our recent release of Retrievers in LangChain-MongoDB, building such advanced retrieval patterns is more accessible than ever. Retrievers are how LangChain integrates external data sources into LLM applications. MongoDB has added two new custom, purpose-built Retrievers to the langchain-mongodb Python package, giving developers a unified way to perform hybrid search and full-text search with sensible defaults and extensive code annotation. These new classes make it easier than ever to use the full capabilities of MongoDB Vector Search with LangChain. The new MongoDBAtlasFullTextSearchRetriever class performs full-text searches using the Best Match 25 (BM25) algorithm. The MongoDBAtlasHybridSearchRetriever class builds on this work, combining the above implementation with vector search and fusing the results with the Reciprocal Rank Fusion (RRF) algorithm (a generic sketch of the RRF formula appears at the end of this article). The combination of these two techniques is a potent tool for improving the retrieval step of a RAG application, enhancing the quality of the results. To find out more, please dive into the MongoDBAtlasHybridSearchRetriever and MongoDBAtlasFullTextSearchRetriever classes.

Seamless synchronization using the LangChain Indexing API

In addition to these releases, we're also excited to announce that MongoDB now supports the LangChain Indexing API, allowing for seamless loading and synchronization of documents from any source into MongoDB, leveraging LangChain's intelligent indexing features. This new support will help users avoid duplicate content, minimize unnecessary rewrites, and optimize embedding computations. The LangChain Indexing API's record management system ensures efficient tracking of document writes, computing hashes for each document and storing essential information like write time and source ID. This feature is particularly valuable for large-scale document processing and retrieval applications, offering flexible cleanup modes to manage documents effectively in MongoDB vector search. To read more about how to use the Indexing API, please visit the LangChain Indexing API documentation.

We're excited about these LangChain integrations, and we hope you are too. Here are some resources to further your learning:

Check out our written and video tutorial to walk you through building your own JavaScript AI agent with LangGraph.js and MongoDB.
Experiment with hybrid search retrievers to see the power of hybrid search for yourself.
Read the previous announcement with LangChain about Semantic Caching.
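As promised above, here is a generic sketch of Reciprocal Rank Fusion, the technique the hybrid retriever uses to combine ranked result lists: each document's fused score is the sum, across lists, of 1/(k + rank). This is the textbook formula (k = 60 is the conventional constant), not the langchain-mongodb implementation.

// Generic Reciprocal Rank Fusion: combine ranked result lists into one.
// `lists` is an array of arrays of document IDs, best match first.
function reciprocalRankFusion(lists, k = 60) {
  const scores = new Map();
  for (const list of lists) {
    list.forEach((id, rank) => {
      // rank is 0-based here, so 1/(k + rank + 1) is the classic formula.
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  // Highest fused score first.
  return [...scores.entries()].sort((a, b) => b[1] - a[1]);
}

// Example: fuse a vector search ranking with a BM25 full-text ranking.
const fused = reciprocalRankFusion([
  ["docA", "docB", "docC"], // vector search results
  ["docB", "docD", "docA"], // full-text (BM25) results
]);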

September 12, 2024

Building Gen AI with MongoDB & AI Partners | August 2024

As the AI landscape continues to evolve, companies, industries, and developers seek tailored solutions to their unique challenges. Gone are the days when general-purpose AI models could be applied universally. Now, organizations are looking for industry-specific applications, verticalized AI solutions, and specialized tools to gain a competitive edge and best serve their customers. And as gen AI use cases have diversified—from healthcare diagnostics and autonomous driving to personalized recommendations and creative content generation—so has the technology stack supporting them. The complexity of building and deploying AI models has led to the rise of specialized AI frameworks and platforms that streamline workflows and optimize performance for specific use cases. In this context, having the right AI stack is essential for driving innovation. AI development is no longer just about choosing the best model; it's also about selecting the right tools, libraries, and infrastructure to support that model across the board. All of which makes partnerships (and combining technical strengths) increasingly important to innovating with AI. Take, for example, our most recent integration with LangChain: the MongoDB-LangChain partnership exemplifies how having the right components in an AI stack allows teams to focus on innovating instead of managing infrastructure bottlenecks. By combining LangGraph with MongoDB's vector search capabilities, developers can create more sophisticated, high-performing AI applications. This integration allows for the seamless development of agentic AI systems capable of generating actionable insights and completing complex tasks. To learn more about building powerful AI agents with LangGraph.js and MongoDB, plus our recent work making vector search even more versatile with custom LangChain Retrievers, check out our tutorial.

Welcoming new AI partners

MongoDB's partnership with LangChain highlights the importance of building adaptable solutions that can grow and change as the needs of developers and customers grow and change. That's why MongoDB is always on the lookout for innovative partners and solutions—in August we welcomed five new AI partners that offer product integrations with MongoDB. Read on to learn more about each great new partner!

BuildShip

BuildShip is a low-code visual backend and workflow builder to instantly create APIs, scheduled tasks, backend cloud jobs, and automation, powered by AI. "We at BuildShip are thrilled to partner with MongoDB to introduce an innovative low-code approach for rapidly building AI workflows and backend tasks in a visual and scalable manner," said Harini Janakiraman, CEO of BuildShip.com. "MongoDB offers a comprehensive data stack for AI developers and organizations, enabling them to efficiently build scalable databases and access vector or hybrid search options for their products. Our collaboration provides customizable low-code templates that allow for easy integration of MongoDB databases with a variety of AI models and tools. This enables teams and companies to quickly build powerful APIs, automations, vector search, and scheduled tasks, unlocking organizational efficiency and driving product innovation."

Inductor

Inductor is a platform to prototype, evaluate, improve, and observe LLM apps and features, helping developers ship high-quality LLM-powered functionality rapidly and systematically.
"We're excited to partner with MongoDB to enable companies to rapidly create production-grade LLM applications, by combining MongoDB's powerful vector search with Inductor's developer platform enabling streamlined, systematic workflows for developing RAG-based applications," said Ariel Kleiner, CEO of Inductor. "While many LLM-powered demos have been created, few have successfully evolved into production-grade applications that deliver business wins. Together, Inductor and MongoDB enable enterprises to build impactful, needle-moving LLM applications, accelerating time to market and delivering real value to customers."

Metabase

Metabase is the easy-to-use, open-source business intelligence tool that lets everyone work with data, with or without SQL, for internal and customer-facing embedded analytics. "This partnership is an important step forward for NoSQL database analytics. By integrating Metabase with MongoDB, two popular open-source tools, we are making it easier for users to quickly get valuable insights from their MongoDB data," explained Luiz Arakaki, Product Manager at Metabase. "Our goal is to create a better integration between the tools to offer more advanced features and stability, simplifying the use of NoSQL databases for advanced analytics."

Shakudo

Shakudo is a comprehensive development platform that lets data professionals develop, run, and deploy data pipelines and applications in an all-in-one integrated environment. "Shakudo is thrilled to be partnering with MongoDB to streamline the entire retrieval-augmented generation (RAG) development lifecycle. Together we help companies test and optimize their RAG features for faster PoC and production deployment," noted Yevgeniy Vahlis, CEO of Shakudo. "MongoDB has made it dead simple to launch a scalable vector database with operational data, and Shakudo brings industry-leading AI tooling to that data. Our collaboration speeds up time to market and helps companies get real value to customers faster."

VLM Run

VLM Run is a versatile API that enables accurate JSON extraction from any visual content such as images, videos, and documents, helping users integrate visual AI into applications. "VLM Run is excited to partner with MongoDB to help enterprises accurately extract structured insights from visual content such as images, videos, and visual documents," said Sudeep Pillai, Co-Founder and CEO of VLM Run. "Our combined solution will enable enterprises to turn their often-untapped unstructured visual content into actionable, queryable business intelligence."

But wait, there's more!

To learn more about building AI-powered apps with MongoDB, check out our AI Resources Hub, and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB's ever-evolving AI partner ecosystem.

September 11, 2024

Boosting Customer Lifetime Value with Agmeta and MongoDB

Nobody likes calling customer service. The phone trees, the wait times, the janky music, and how often your issue just isn't resolved can make the whole process one most people would rather avoid. For business owners, the customer contact center can also be a source of frustration, simultaneously creating customer churn and unhappiness while acting as a black hole of information as to why that churn occurred. It doesn't have to be this way. What if, instead, customer service centers offered valuable ways to increase the Customer Lifetime Value (CLTV) of customers, pipelines of upsell opportunities, and valuable sources of information? That's the goal of Agmeta.AI, a startup dedicated to giving businesses actionable insights to fight churn, identify key customers primed for upsell, and improve customer service overall.

Lost in translation

"We started with a very simple thesis: people call into contact centers because they have a problem. That is a real make-or-break moment. The opportunity for churn is very high… or that customer can be a great target for upselling," said Samir Agarwal, CEO and co-founder of Agmeta. "All of this data sits in a contact center, and businesses don't ever get to see it," he added. According to Samir, even the businesses that think they are collecting useful information on customer service interactions are instead collecting incorrect or incomplete information. Or worse, they're analyzing the information they do record incorrectly. Every business today talks about the importance of customer experience (CX), but the challenge businesses face is how to quantify that CX. Many contact centers substitute call sentiment for CX, or use keywords to determine canned responses. For example, imagine a customer calls into a service center and has what appears to be a positive conversation with an agent. They use words and phrases like "thank you" and "yes, I understand," and reply "no, I do not have anything else to ask" at the end of a call in which their complaint is not resolved. After putting the phone down, the customer goes on to cancel the service, or worse, initiate a chargeback request with their credit card provider. In some businesses the customer service agent may manually mark such a call as 'positive.' The agent, after all, 'answered all the customers' concerns.' As this example illustrates, the sentiment of a call should not be confused with the measure of customer experience. Another common way businesses try to gather feedback is by sending a post-call survey. However, a problem with this approach is that industry response rates for surveys are close to 3%. This means decisions get made based on that small sample, without taking into account the other 97% of customers who didn't respond to the survey. Survey results are also frequently skewed, as those most likely to respond are also the ones who were most unhappy with the contact center interaction and want their voices heard.

The MongoDB advantage

Using machine learning and generative AI, backed by MongoDB Atlas, Agmeta's software understands not only the content of the call, but the context too. Taking our example above, Agmeta's software would detect that the customer is unhappy, despite their polite and 'positive'-sounding conversation with the agent, and flag the customer as a potential churn or chargeback candidate in need of immediate attention.
"We will give you a CSAT (customer satisfaction) score and a reason for that CSAT score within seconds of the call ending, for 100% of the interactions," said Samir. For Agmeta to work, Samir and his team had to have a database ready to accept all kinds of data, including voice recordings, unstructured text, and a constantly evolving schema. "We didn't have a fixed schema; we needed a database that was as flexible as Agmeta needed to be. I've known of MongoDB forever, so when I started to look at databases it seemed an obvious choice to me," he said. The ability to quickly and easily work with vectorized data for gen AI was also crucial. "MongoDB provides vector search capabilities in an operational database. Rather than having to bolt on a vector database and figure out the ETL, MongoDB solved this issue for me in a single product. The way I look at it, if you do a good job on vector search, then my life as an entrepreneur and software builder becomes much easier," Samir said. After assessing database options and multiple LLMs, Samir and his team chose to pair MongoDB Atlas with Google Cloud, taking advantage of Gemini on Google's generative AI platform. "With Atlas on Google Cloud, there are zero worries about database administration, maintenance, and availability. This frees us up to focus on creating business value," Samir said. "Another benefit of using MongoDB is the flexibility to use the customer's MongoDB setup, which gives the customer peace of mind from the perspective of security and privacy of their data."

Customer service first

With the power of generative AI and MongoDB, Agmeta can deliver a CSAT score that measures the customer's true takeaway from the call. The CSAT score is a multi-dimensional score that takes into account areas including resolution (as the customer sees it), politeness, the onus on the customer, and many other attributes. In the short term, the primary use for this technology is to detect and flag those customers at risk of churning or filing a charge dispute with their card provider, as well as those ripe for upselling, giving businesses an opportunity to "see" what they could never find out before. "When we talk to customers, the number one thing they are concerned about is customer churn. Right now they operate completely blind, with no idea why people are leaving them," said Samir. "One large telecoms customer Agmeta is in talks with had no idea where their churn was happening. But when we described being able to assign every customer a CSAT score, they were very excited," he added. And it's not just about preventing churn. Businesses can identify happy customers too, targeting them for upsell opportunities. "One of the things we do is spot patterns of unanswered questions from product support interactions," Samir added. "When we see 'Oh look, suddenly there are a lot more calls because of a release,' then we can flag this to product teams as a must-fix issue."

The future of customer service

Agmeta aims to amalgamate customer information with current and past experiences to provide businesses a more holistic and nuanced picture of their customers, and more precise next steps they can take. "What we want to do is look back in time and see what else happened with this customer," Samir said. "The goal is to provide businesses with targeted directives to minimize churn and grow customer lifetime value." Retrieval-augmented generation plays a key role in Agmeta's vision.
This also means an expanded role both for MongoDB's vector database, as the source of information against which semantic searches can be run, and for Gemini, for both analysis and presentation of the directives for the business. You can learn more about how innovators across the world are using MongoDB by reviewing our Building AI case studies. If your team is building AI apps, sign up for the AI Innovators Program. Successful companies get access to free Atlas credits and technical enablement, as well as connections into the broader AI ecosystem. Additionally, if your company is interested in being featured in a story like this, we'd love to hear from you! Reach out to us at ai_adopters@mongodb.com.

September 10, 2024

Atlas Stream Processing: A Cost-Effective Way to Integrate Kafka and MongoDB

Developers around the world use Apache Kafka and MongoDB together to build responsive, modern applications. There are two primary interfaces for integrating Kafka and MongoDB. In this post, we'll introduce these interfaces and highlight how Atlas Stream Processing offers an easy developer experience, cost savings, and performance advantages when using Apache Kafka in your applications. First, some background.

The Kafka Connector

For many years, MongoDB has offered the MongoDB Connector for Kafka (Kafka Connector). The Kafka Connector enables the movement of data between Apache Kafka and MongoDB, and thousands of development teams use it. While it supports simple message transformation, developers largely handle data processing with separate downstream tools.

Atlas Stream Processing

More recently, we announced Atlas Stream Processing—a native stream processing solution in MongoDB Atlas. Atlas Stream Processing is built on the document model and extends the MongoDB Query API to give developers a powerful, familiar way to connect to streams of data and perform continuous processing. The simplest stream processors act similarly to the primary Kafka Connector use case, helping developers move data from one place to another, whether from Kafka to MongoDB or vice versa. Check out an example:

// Connect to a MongoDB Atlas database using $source.
s = {
  $source: {
    connectionName: 'myAtlasCluster',
    db: 'myDB',
    coll: 'myCollection'
  }
}

// Write your data to a Kafka topic using $emit.
e = {
  $emit: {
    connectionName: 'myKafkaConnection',
    topic: 'myTopic'
  }
}

// Create your processor and start it!
sp.createStreamProcessor("mongoDBToKafka", [s, e])
sp.mongoDBToKafka.start()

Beyond making data movement easy, Atlas Stream Processing enables advanced stream processing use cases not possible in the Kafka Connector. One common use case is enriching your event data by using $lookup as a stage in your stream processor. In the example above, a developer can perform this enrichment by simply adding a lookup stage in the pipeline between source and sink. While the Kafka Connector can perform some single-message transformations, Atlas Stream Processing makes for an easier overall experience and gives teams the ability to perform much more complex processing.
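To sketch what that enrichment might look like, here is a hypothetical processor that joins incoming Kafka events against a customers collection before writing them back to Atlas. The connection, database, collection, and field names are illustrative; check the Atlas Stream Processing documentation for the exact $lookup and $merge options.

// Enrich Kafka events with customer data before writing to MongoDB.
// All names below (connections, collections, fields) are hypothetical.
src = {
  $source: { connectionName: 'myKafkaConnection', topic: 'orders' }
}

// Join each incoming event to the customers collection on customerId.
enrich = {
  $lookup: {
    from: { connectionName: 'myAtlasCluster', db: 'myDB', coll: 'customers' },
    localField: 'customerId',
    foreignField: '_id',
    as: 'customer'
  }
}

// Write the enriched events to a MongoDB collection.
sink = {
  $merge: { into: { connectionName: 'myAtlasCluster', db: 'myDB', coll: 'enrichedOrders' } }
}

sp.createStreamProcessor("kafkaToMongoDBEnriched", [src, enrich, sink])
sp.kafkaToMongoDBEnriched.start()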
Choosing the right solution for your needs

It's important to note that Atlas Stream Processing was built to simplify complex, continuous processing and streaming analytics rather than as a replacement for the Kafka Connector. However, even for the more basic data movement use cases referenced above, it provides a new alternative to the Kafka Connector. The decision will depend on data movement and processing needs. Three common considerations we see teams weighing to help with this choice are ease of use, performance, and cost.

Ease of use

The Kafka Connector runs on Kafka Connect. If your team already heavily uses Kafka Connect across many systems beyond MongoDB, this may be a good reason to keep it in place. However, many teams find configuring, monitoring, and maintaining connectors costly and cumbersome. In contrast, Atlas Stream Processing is a fully managed service integrated into MongoDB Atlas. It prioritizes ease of use by leveraging the MongoDB Query API to process your event data continuously. Atlas Stream Processing balances simplicity (no managing servers, utilizing other cloud platforms, or learning new tools) and processing power to reduce development time, decrease infrastructure and maintenance costs, and build applications quicker.

Performance

High performance is increasingly a priority with all data infrastructure, but it's often a must-have for use cases that rely on streams of event data (commonly from Apache Kafka) to deliver an application feature. Many of our early customers have found Atlas Stream Processing more performant than similar data movement in their Kafka Connector configurations. By connecting directly to your data in Kafka and MongoDB and acting on it as needed, Atlas Stream Processing eliminates the need for a tool in between.

Cost

Finally, managing costs is a critical consideration for all development teams. We've priced Atlas Stream Processing competitively when compared to typical Kafka Connector configurations. Most hosted Kafka providers charge per task, meaning each additional source and sink generates a separate data transfer and storage cost that scales linearly as you expand. Atlas Stream Processing charges per Stream Processing Instance (SPI) worker, and each worker supports up to four stream processors. This means potential cost savings when running similar configurations to the Kafka Connector. See more details in the documentation.

Atlas Stream Processing launched just a few months ago. Developers are already using it for a wide range of use cases, like managing real-time inventories, serving contextually relevant recommendations, and optimizing yields in industrial manufacturing facilities. We can't wait to see what you build and hear about your experience! Ready to get started? Log in to Atlas today. Already a Kafka Connector user? Dig into even more details and get started using our tutorial.

September 9, 2024

Exploring New Security, Billing, and Customization Features in Atlas Charts

MongoDB is excited to announce a few new updates to Atlas Charts that enable you to securely share insights, gain deeper visibility into expenses, and customize your most frequently used data visualizations. Based on feedback from users of our native visualization tool, these improvements will make data analysis even more productive. We:

Improved security in Atlas Charts for passcode-protected public dashboards
Increased visibility into Atlas spending through an updated billing dashboard
Introduced new customization for table charts through hyperlinks and hidden columns

Secure insights with passcode-protected public dashboards

First, there's the new passcode-protected public dashboards feature, which brings an extra layer of security to publicly shared dashboards—we understand that not everyone who benefits from Atlas Charts operates within MongoDB Atlas. Alongside the ability to schedule email reports and support for publicly shared dashboards, we're offering a new and secure way to spread insights: add an extra layer of security to previously publicly shared dashboards, ensuring that only authorized users with the passcode can access your data. Enabling passcode protection on a dashboard is simple. As a dashboard owner, a new option is available to protect dashboard links with a passcode when sharing publicly.

Check the box to protect your public link with a passcode

Once enabled, a passcode is automatically generated and can be copied to the clipboard (and regenerated on demand as needed). Viewers navigating to dashboards via the public link will see a new screen prompting them to enter a passcode. Once authenticated successfully, they can view the dashboard just as before.

Easily access your dashboards by inputting your passcode when prompted

Whether you're sharing insights with clients, stakeholders, or team members, rest assured that your data remains easily accessible yet secure. To learn more about the different ways we support dashboard sharing, check out our documentation.

What's new in the Atlas Charts billing dashboard

Next, we continue to make enhancements to the MongoDB Atlas Charts billing dashboard, all of which provide insights into Atlas expenses. It's now possible to see resource tags data, as well as billing data from all linked organizations, inside the Atlas Charts billing dashboard. Additionally, users can now ingest billing data from another organization, provided they possess that organization's API keys. These newly introduced features rely on the availability of billing data within the organization. For those leveraging resource tags, the billing data will seamlessly integrate, empowering users to generate personalized charts or to incorporate tailored dashboard filters within the Atlas Charts billing dashboard. If cross-organization billing is enabled, editing the configuration will ingest the linked organization's billing data for the last three months, with the option to extend this period to up to a year by creating a new ingestion.

Project tags data in the Atlas Charts billing dashboard

Resource tags are now seamlessly integrated into billing data and can be included in any of the charts or the dashboard filters inside the Atlas billing dashboard. For example, our MongoDB organization uses the Atlas auto-suggested tags "application" and "environment," alongside a custom resource tag labeled "team."
The following chart uses the tags data and shows the billing cost per team and per environment.

A chart depicting cost per team and environment using tags

The subsequent chart presents the billing cost allocated per project and team, providing valuable insights into the primary cost drivers for each team's projects.

A chart depicting cost allocated per project and team

Users can also add a dashboard filter on the "tags" field, which allows them to see the whole dashboard based on the selected tag values. In the next example, we have selected a specific "team": "Charts" from the tags dashboard filter, so we can see all of the billing insights for that team thanks to our custom tag.

Billing insights filtered by the specific "Charts" team in an intuitive dashboard

Linked organization's data in the Atlas Charts billing dashboard

For complex Atlas projects spanning multiple organizations, the Atlas Charts billing dashboard now seamlessly integrates billing data from all linked organizations. The most productive use case is to add a dashboard filter based on the "organizationId" field to enable filtering data according to specific organizations for a more granular analysis of spending.

Dashboard filtered by the organizationId field to show insights for one organization

Billing data from another organization

Users can now ingest billing data from other organizations that are not directly linked, provided they possess authorization API keys, bringing the data you need to where you are.

Provide the API key to ingest billing data from other organizations

These new features in the Atlas Charts billing dashboard are designed to provide richer, more detailed insights into organization spend. Check out our documentation and our previous blog post to learn more.

Hyperlinks and hidden columns for tables in Atlas Charts

Of all the data visualization methods available in Atlas Charts, table charts rank as one of the most popular among our users. So it should come as no surprise that one of the most highly requested features from our customers is the ability to format columnar data as hyperlinks. We're excited to announce that this is now possible in Atlas Charts through the new hyperlink customization options available for table charts. With hyperlink customization, you can format columnar data as hyperlinks using any of the following URI protocols: http, https, mailto, or tel; links can be constructed statically or dynamically using encoded fields. Let's assume we've created a table using the sample movies dataset in Atlas, with encodings like title, imdb.id, runtime, genre, and poster_display (a calculated field).

Customization panel in Atlas Charts

To turn movie titles into clickable links that direct users to their respective IMDB pages, navigate to the customization panel and click into the hyperlinking feature in the fields tab. We will format the title field as a hyperlink that links to the Internet Movie Database (IMDB) entry for that movie. IMDB URLs are formatted as follows, where <id> is substituted with the value of the imdb.id field for each document:

https://www.imdb.com/title/tt<id>/

Customize the "title" field in the table chart to link to IMDB using the "imdb.id" field in the URI input. Below, a preview displays the fully formatted URI with fields substituted for their values, helping to ensure it's correct before we save it and apply it to the chart.
Preview of the URI in the hyperlinking panel

Since we only need the imdb.id field to be encoded for the purpose of constructing the URI applied to the title field, we can hide the column from rendering using another new customization option. Select the imdb.id field in the customization panel and toggle on the "Hide Column" option.

Toggle "Hide Column"

We also support using URI values directly from fields (provided they use one of the supported protocols). Let's see this in action by creating a hyperlink to the movie poster. In the URI input, trigger the encoded field menu using the @ keyboard shortcut, and select the poster field. Similar to the previous example, a preview will be displayed. After saving and applying the hyperlink formatting, we can hide the rendering of the poster field as needed to keep the chart clean.

Use the @ keyboard shortcut to trigger the encoded field menu

All these options are accessible in the customization panel, making it straightforward to enhance table charts with interactive hyperlinks. For more detailed instructions, visit our documentation. As we conclude this roundup, we hope you're as excited about these updates as we are. The Atlas Charts team is dedicated to continuously improving Atlas Charts to meet your needs and enhance your data visualization experience. Stay tuned for more updates, and happy charting! New to Atlas Charts? Get started today by logging into or signing up for MongoDB Atlas, deploying or selecting a cluster, and activating Charts for free.

September 5, 2024

Saving Energy, Smarter: MongoDB and Cedalo for Smart Meter Systems

The global energy landscape is undergoing a significant transformation, with energy consumption rising 2.2% in 2023, surpassing the 2010-2019 average of 1.5% per year. This increase is largely driven by developments in the BRICS member countries—Brazil, Russia, India, China, and South Africa. As renewable sources like solar power and wind energy become more prevalent (in the EU, renewables accounted for over 50% of the power mix in the first quarter of 2024), ensuring a reliable and efficient energy infrastructure is crucial. Smart meters, the cornerstone of intelligent energy networks, play a vital role in this evolution. According to IoT analyst firm Berg Insight, the penetration of smart meters is skyrocketing: the US and Canada are expected to reach nearly 90% adoption by 2027, while China is expected to account for as much as 70-80% of smart electricity meter demand across Asia in the next few years. This surge is indicative of a growing trend towards smarter, more sustainable energy solutions. In Central Asian countries, the Asian Development Bank is supporting the rapid deployment of smart meters to save energy and improve the financial position of countries' power utilities. This article will delve into the benefits of smart meters, the challenges associated with managing their data, and the innovative solutions offered by MongoDB and Cedalo.

The rise of smart meters

Smart meters, unlike traditional meters that require manual readings, collect and transmit real-time energy consumption data directly to energy providers. This digital transformation offers numerous benefits, including:

Accurate billing: Smart meters eliminate the need for estimations, ensuring that consumers are billed precisely for the energy they use.
Personalized tariffs: Energy providers can offer tailored tariffs based on individual consumption patterns, allowing consumers to take advantage of off-peak rates, special discounts, and other cost-saving opportunities.
Enhanced grid management: Smart meter data enables utilities to optimize grid operations, reduce peak demand, and improve overall system efficiency.
Energy efficiency insights: Consumers can gain valuable insights into their energy usage patterns, identifying areas for improvement and reducing their overall consumption.

With the increasing adoption of smart meters worldwide, there is a growing need for effective data management solutions to harness the full potential of this technology.

Data challenges in smart meter adoption

Despite the numerous benefits, the widespread adoption of smart meters also presents significant data management challenges. To use smart metering, power utility companies need to deploy a core smart metering ecosystem that includes the smart meters themselves, the meter data collection network, the head-end system (HES), and the meter data management system (MDMS). Smart meters collect data from end consumers and transmit it to the data aggregator via the local area network (LAN). The transmission frequency can be set to 15 minutes, 30 minutes, or hourly, depending on data demand requirements. The aggregator retrieves the data and transmits it to the head-end system, which analyzes the data and sends it to the MDMS. The communications path is two-way: signals or commands can be sent directly to the meters, customer premises, or distribution devices.
Figure 1: End-to-end data flow for a smart meter management system / advanced metering infrastructure (AMI 2.0)

When setting up smart meter infrastructure, power and utility companies face several significant data-related challenges:

Data interoperability: The integration and interoperability of diverse data systems pose a substantial challenge. Smart meters must be seamlessly integrated with existing utility systems and other smart grid components, often requiring extensive upgrades and standardization efforts.
Data management: The large volume of data generated by smart meters requires advanced data management and analytics capabilities. Utilities must implement robust solutions to store and process real-time time series data streams, analyze them for anomalies, and trigger decision-making processes.
Data privacy: Smart meters collect vast amounts of sensitive information about consumer energy usage patterns, which must be protected against breaches and unauthorized access.

Addressing these challenges is crucial for the successful deployment and operation of smart meter infrastructure.

MQTT: A cornerstone of smart meter communication

MQTT, a lightweight publish-subscribe protocol, shines in smart meter communication beyond the initial connection. It's ideal for resource-constrained devices on low-bandwidth networks, making it perfect for smart meters. While LoRaWAN or PLC handle meter-to-collector links, MQTT bridges head-end systems (HES) and meter data management systems (MDMS). Its efficiency, reliable delivery, and security make it well suited for large-scale smart meter deployments.

Cedalo MQTT Platform and MongoDB: A powerful combination

Cedalo, established in 2017, is a leading German software provider specializing in MQTT solutions. Its flagship product, the Cedalo MQTT Platform, offers a comprehensive suite of features, including the Pro Mosquitto MQTT broker and Management Center. Designed to meet the demands of large enterprises, the platform delivers high availability, audit trail logging, persistent queueing, role-based access control, SSO integration, advanced security, and enhanced monitoring. To complement the platform's capabilities, MongoDB's Time Series collections provide a robust and optimized solution for storing and analyzing smart meter data. These collections leverage a columnar storage format and compound secondary indexes to ensure efficient data ingestion, reduced disk usage, and rapid query processing. Additionally, window functions enable flexible time-based analysis, making them ideal for IoT and analytical applications.
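As a minimal sketch of what such a collection could look like, here is how a time series collection for meter readings might be created and populated in mongosh; the collection and field names are hypothetical.

// Create a time series collection for raw smart meter readings.
// Collection and field names are illustrative.
db.createCollection("meterReadings", {
  timeseries: {
    timeField: "ts",        // timestamp of each reading
    metaField: "meter",     // per-meter metadata (id, region, ...)
    granularity: "minutes"  // readings arrive every 15-60 minutes
  }
})

// A reading as it might arrive from the MQTT bridge (JSON payload).
db.meterReadings.insertOne({
  ts: new Date("2024-09-04T10:15:00Z"),
  meter: { meterId: "SM-1042", region: "north" },
  voltage: 231.4,
  current: 5.2,
  powerKw: 1.2,
  frequencyHz: 50.01
})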
Figure 2: MongoDB as the main database for the meter data management system, receiving meter data via the Pro Mosquitto MQTT broker

Let us revisit Figure 1 and leverage both the Cedalo MQTT Platform and MongoDB in our design. In Figure 2, the head-end system (HES) can use MQTT to filter, aggregate, and convert data before storing it in MongoDB. This data flow can be established using the MongoDB Bridge plugin provided by Cedalo. Since the MQTT payload is JSON, it is a natural fit for MongoDB, which stores data in BSON (Binary JSON). The MongoDB Bridge plugin offers advanced features such as flexible data import settings (specifying target databases and collections, choosing authentication methods, and selecting specific topics and message fields to import) and advanced collection mapping (mapping multiple MQTT topics to one or more collections, with the ability to choose specific fields for insertion). MongoDB's schema flexibility is crucial for adapting to the ever-changing structures of MQTT payloads. Unlike traditional databases, MongoDB accommodates shifts in data format seamlessly, eliminating the constraints of rigid schema requirements. This helps with the interoperability challenges faced by utility companies. Once the data is stored in MongoDB, it can be analyzed for anomalies. Anomalies in smart meter data can be identified based on various criteria, including sudden spikes or drops in voltage, current, power, or other metrics that deviate significantly from normal patterns. Here are some common types of anomalies to look for in smart meter data:

Sudden spikes or drops: A sudden increase or decrease in voltage, current, or power beyond expected limits.
Outliers: Data points that are significantly different from the majority of the data.
Unusual patterns: Unusually high or low energy consumption compared to historical data, or inconsistent power factor readings.
Frequency anomalies: Frequency readings that deviate from the normal range.

MongoDB's robust aggregation framework can aid in anomaly detection. Both anomaly data and raw data can be stored in time series collections, which offer a reduced storage footprint and improved query performance thanks to an automatically created clustered index on timestamp and _id. The high compression offered addresses the challenge of data management at scale. Additionally, data tiering capabilities like Atlas Online Archive can be leveraged to push cold data into cost-effective storage. MongoDB also provides built-in security controls for all your data, whether managed in a customer environment or in MongoDB Atlas, a fully managed cloud service. These security features include authentication, authorization, auditing, data encryption (including Queryable Encryption), and the ability to secure access to your data with dedicated clusters deployed in a unique Virtual Private Cloud (VPC).

End-to-end solution

Figure 3: End-to-end data flow

Interested readers can clone this repository and set up their own MongoDB-based smart meter data collection and anomaly detection solution. The solution follows the pattern illustrated in Figure 3, where a smart meter simulator generates raw data and transmits it via an MQTT topic. A Mosquitto broker receives these messages and then stores them in a MongoDB collection using the MongoDB Bridge. By leveraging MongoDB change streams, an algorithm can retrieve these messages, transform them according to MDMS requirements, and perform anomaly detection (a simplified sketch of this step follows below). The results are stored in a time series collection using a highly compressed format. The Cedalo MQTT Platform with MongoDB offers all the essential components for a flexible and scalable smart meter data management system, enabling a wide range of applications such as anomaly detection, outage management, and billing services. This solution empowers power distribution companies to analyze trends, implement real-time monitoring, and make informed decisions regarding their smart meter infrastructure.
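To make the change-stream step concrete, here is a simplified Node.js sketch that watches the raw readings collection and records out-of-range voltage readings as anomalies. The thresholds, names, and anomaly rule are illustrative stand-ins for the fuller logic in the linked repository.

// Minimal sketch: watch raw readings and flag simple voltage anomalies.
// Names and thresholds are illustrative, not from the linked repository.
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI);
await client.connect();
const db = client.db("smartMeters");

// React only to newly inserted readings.
const stream = db.collection("meterReadings").watch(
  [{ $match: { operationType: "insert" } }]
);

for await (const change of stream) {
  const doc = change.fullDocument;
  // Flag sudden voltage spikes or drops outside a nominal 230V +/- 10% band.
  if (doc.voltage < 207 || doc.voltage > 253) {
    await db.collection("meterAnomalies").insertOne({
      ts: doc.ts,
      meter: doc.meter,
      type: "voltage_out_of_range",
      value: doc.voltage
    });
  }
}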
We are actively working with our clients to solve IoT challenges. Take a look at our Manufacturing and Industrial IoT page for more stories.

September 4, 2024

Mobile and Edge Solutions with MongoDB and Ditto

Mobile and edge solutions offer impressive opportunities for profit and growth for a variety of businesses around the world. Companies have consistently found ways to use mobile applications to grow revenue, cut costs, and stay ahead of the competition. In the power and utilities sector, for example, field workers can get enabled quickly by accessing their daily tasks on mobile devices, and in retail, consumers can use mobile apps to skip lines, providing businesses with upselling opportunities that can result in larger transactions. Indeed, mobile commerce is estimated to make up 44.6% of total US retail ecommerce sales in 2024. For banks, increased use of mobile applications can reduce operating costs by decreasing the demand for in-person and phone-based customer service. At the same time, having a mobile app allows financial institutions to reach additional customers, as many internet users around the world (particularly in developing countries) rely on mobile access. Time and time again, we've seen that the most successful apps are those that meet modern user expectations. Specifically, apps need to be fast and reactive, without lags or crashes. And if internet connectivity drops, the app should continue functioning normally until connectivity is restored. In cases where the workforce is located in low-connectivity areas—e.g., warehouses, factories, and rural areas—peer-to-peer sync is a requirement for apps to communicate with each other and sync data. In such an ever-important space, partnerships are critical to combining the strengths of organizations to create solutions that would be challenging to develop independently. At MongoDB, we're laser-focused on bringing the best solutions to customers. So we're thrilled to announce MongoDB's partnership with Ditto, a company that enables consistently fast data synchronization between devices like mobile phones and point-of-sale systems for mission-critical enterprise apps, regardless of environment connectivity and existing infrastructure. With MongoDB and Ditto, businesses can drive consistent revenue at the edge without Wi-Fi, servers, or a cloud connection. Retailers can sell products, banks can deliver services, and energy companies can conduct operations anywhere without worrying about connectivity.

Welcome to our mobile partner, Ditto

Based in San Francisco, Ditto is revolutionizing the mobile app development space. Ditto technology uses existing devices like phones and tablets to create a distributed wireless network that can sync data anytime, even without the internet, Wi-Fi, or servers. With Ditto's SDK, devices automatically discover, connect, and sync with each other in peer-to-peer (P2P) mesh networks. This means that when the internet goes down or Wi-Fi is spotty, deskless workers can continue to serve customers or complete business-critical workflows. Ditto manages a mesh network of devices and automatically syncs data changes locally in the mesh, and opportunistically with the cloud when available. Depending on the environment and device positioning, Ditto intelligently switches between LAN, BLE, P2P Wi-Fi, IP-based transports, and cellular to ensure that apps get the fastest sync. Ditto's platform has two major components:

Small Peer/Ditto SDK: This is the Ditto SDK embedded into an application that lives on a mobile device, point-of-sale system, IoT device, and more. There can be many Small Peers in a solution.
Small Peers self-organize and sync with each other regardless of internet connectivity, and with the cloud when connectivity is available.
Big Peer: Ditto's middleware platform that receives data from Small Peers and forwards it to MongoDB.

Some of the unique value propositions that Ditto offers include:

Self-organizing mesh networking: Devices running Ditto-powered apps automatically and securely discover nearby peers and form wireless, distributed networks.
Intelligent peer-to-peer data sync: Devices in the mesh exchange data in real time via Bluetooth Low Energy, peer-to-peer Wi-Fi, local area network, and more.
Conflict-free replicated data types (CRDTs): Ditto peers each have a local database. To keep bandwidth usage low and support concurrent edits, only the deltas, or changes, are synced (a generic CRDT illustration appears at the end of this article).
Distributed architecture: As the image below shows, Ditto isn't reliant on a centralized system to synchronize data. Each device has an embedded database capable of reading, writing, and syncing deltas within the mesh. This means there is no single point of failure, such as a cloud or server.

With MongoDB and Ditto working together, developers can create robust data pipelines from mobile to cloud. MongoDB Atlas is a multi-cloud developer data platform that gives users the versatility they need to build a wide variety of applications—including mobile applications. With MongoDB Atlas, users can scale their mobile applications' backend confidently with a foundation built for resilience, performance, and security. Additionally, MongoDB Atlas enables delivering fast and consistent mobile user experiences in any region on AWS, Azure, and Google Cloud—or replicating data across multiple regions and clouds to reach wider audiences and protect against broader outages. Read more about Ditto at our partner catalog page.
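For readers new to CRDTs, here is a generic grow-only counter (G-Counter), one of the simplest CRDT designs. This is a textbook illustration, not Ditto's implementation: each peer increments only its own entry, and merging takes the per-peer maximum, so replicas converge regardless of the order in which deltas arrive.

// A grow-only counter CRDT: concurrent increments merge without conflicts.
class GCounter {
  constructor(peerId) {
    this.peerId = peerId;
    this.counts = {}; // peerId -> count
  }
  increment(n = 1) {
    this.counts[this.peerId] = (this.counts[this.peerId] ?? 0) + n;
  }
  value() {
    return Object.values(this.counts).reduce((a, b) => a + b, 0);
  }
  // Merging takes the per-peer maximum, so it is commutative,
  // associative, and idempotent: replicas converge in any sync order.
  merge(other) {
    for (const [peer, count] of Object.entries(other.counts)) {
      this.counts[peer] = Math.max(this.counts[peer] ?? 0, count);
    }
  }
}

// Two offline peers increment independently, then sync both ways.
const a = new GCounter("phone");  a.increment(2);
const b = new GCounter("tablet"); b.increment(3);
a.merge(b); b.merge(a);
console.log(a.value(), b.value()); // 5 5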

September 3, 2024

Away From the Keyboard: Anaiya Raisinghani, MongoDB Developer Advocate

Welcome to our new article series focused on developers and what they do when they’re not building incredible things with code and data. “Away From the Keyboard” features interviews with developers at MongoDB, discussing what they do, how they establish a healthy work-life balance, and their advice for others looking to create a more holistic approach to coding. In our first article, Anaiya Raisinghani shares her day-to-day responsibilities as a Developer Advocate at MongoDB; how she uses nonrefundable workout classes and dinner reservations to help her step away from work; and her hack for making sure that when she logs off for the day, she stays logged off. Q: What do you do at MongoDB? Anaiya: I’m a developer advocate here at MongoDB on the Technical Content team! This means I get to build super fun MongoDB tutorials for the entire developer community. I’m lucky where each day is different. If I’m researching a platform to build a tutorial, it can mean hours of research and reading up on documentation, whereas if I’m filming a YouTube video it means lots of time recording and editing. Q: What does work-life balance look like for you? Anaiya: A bad habit of mine is to get really caught up in a piece of content I’m creating and refuse to leave a certain spot until I’ve accomplished what I’ve set out to do that day. Because of this—and because I work mainly from home—if I can anticipate that I’m going to get caught up in a project, I create plans that force me to leave my desk. Some examples of these are non-refundable workout classes, drinks with friends after work (I hate being a flake), or even dinner reservations that charge you if you cancel less than 24 hours in advance. My biggest gripe is paying for something that I didn’t get anything out of. If I’m paying for a single pilates class, I will make sure I’m there trying my best on the reformer. So this has been a fantastic motivator. Being 25 and living in NYC means that my weekends are always booked, so I’m always out and about, and this allows me to not think about work on my time off. I’m also lucky enough to have a great manager and team that keep very great work-life boundaries, so I never feel guilty practicing those boundaries myself. Q: Was that balance always a priority for you or did you develop it later in your career? Anaiya: This balance was definitely something I had to develop and actively work on. I’ve always been an anxious over-achiever, and when coming into my first corporate job I thought staying overtime would be expected. We’ve all heard the phrase: “Be the first one in and the last to leave.” My manager actually used to actively tell me to log off when I first started because he would notice that my Slack was active past work hours (shoutout to Nic!). Having him and my team as a great example helped me understand that there will always be more work and to enjoy the time that you spend away from your laptop. It was also the realization that working shouldn’t be your entire life. You need to develop hobbies and build relationships within your community in order to be a happier human being. Q: What benefits has this balance given you? Anaiya: The biggest benefit this balance has given me both at work and in my life is that I’m incredibly present when I’m doing one or the other. When I’m working during the day, I’m entirely locked in and take advantage of each hour. And when I’m done with the workday, I’m actually done and can focus on my hobbies or my friends. 
It’s also taught me to plan in advance and it gives me a better understanding of how much work on average is expected for each project. Q: What advice would you give to a developer seeking to find a better balance? Anaiya: If you’re seeking a better balance, I recommend removing Slack from your personal phone and laptop. This way when you’re disconnected, you’re truly disconnected. Of course, there are some teams and companies that require you to be on call or working around the clock, but even then having a specific laptop or device with everything you need that is separate from your personal devices can help bridge this gap. Thank you to Anaiya Raisinghani for sharing her insights! And thanks to all of you for reading. Look for more in our new series. Interested in learning more about or connecting more with MongoDB? Join our MongoDB Community to meet other community members, hear about inspiring topics, and receive the latest MongoDB news and events. And let us know if you have any questions for our future guests when it comes to building a better work-life balance as developers. Tag us on social media: @mongodb #AwayFromTheKeyboard

September 3, 2024

The Learning Experience: Celebrating a Year of MongoDB Developer Days

One year ago today, MongoDB held our first regional Developer Day event, a full-day experience designed to teach the fundamentals and advanced capabilities of MongoDB. Developer Day events have been held in person in over 35 cities, across 16 countries, and in seven languages. Initially created by a core team with a passion for enabling developers, the program has since been scaled by a group of talented MongoDB Developer Advocates and Solution Architects to directly engage and teach thousands of developers worldwide. As developer relations programs go, engaging builders through hands-on workshops is hardly a new approach. But the difference at MongoDB is how closely we collaborate cross-functionally with other teams, and how focused we are on providing authentically hands-on, engaging experiences for developers. From the start, the MongoDB Developer Day program was a collaborative effort to take a platform that is easy to get started with and to introduce more advanced capabilities in a compelling way. Based on the helpful and positive feedback we continue to get from participants, we know Developer Days help developers gain skills and a sense of accomplishment as they alternate between instruction and applying that learning through hands-on exercises. To share more about what has made Developer Days so successful, I asked members of that core team—Lead Developer Advocate Joel Lord and Sr. Developer Advocates Mira Vlaeva and Diego Freniche Brito—to reflect on key areas that went into creating this enduring experience.

Our Developer Day class at MongoDB.local NYC in May 2024

A sense of accomplishment

"We all agreed that a Developer Day should focus on hands-on learning, where developers can experiment with MongoDB, potentially make mistakes, and take pride in building something on their own," Mira said. "The goal was to build a fun learning experience rather than just sitting and listening to lectures all day." The curriculum was designed to encourage developers to work together at certain points as they advance from data modeling and schema design concepts to implementing powerful Atlas Search and Vector Search capabilities. "One of my favorite moments of the day is when people start working together," Joel said. "At the beginning of the day, our attendees can be a little hesitant, but they quickly begin to collaborate with each other, and it's wonderful to witness that happen."

Building great Developer Days together

While our MongoDB Developer Relations team designed the agenda and course material, it was our partnerships with other teams and stakeholders that helped Developer Days take flight. Many key stakeholders shared a vision for enabling developers to realize more value with our platform. As Joel remembers, "We had to work and collaborate with a number of other teams, which was, at the time, new to us." Among the key teams involved, Diego added, "Working with Field and Strategic Marketing teams has been a great experience. They help us so much with all the really important tasks… there's so much they've done that Developer Days wouldn't be a reality without them." The program has expanded our collaboration with several other teams, including marketing, product, and sales, to ensure our courses remain up-to-date and we make the most of our time in each city by welcoming developers from key accounts.

Continued success and improvements

To ensure Developer Days were as impactful as possible, we initially ran the program as a pilot in seven cities.
In addition to noting live observations and interactions, we used surveys to collect feedback and report on a net promoter score (NPS) to assess whether the event exceeded participant expectations. These initial events were spaced out enough that the team could implement improvements and try new approaches at subsequent Developer Days. “We had the opportunity to run the same labs multiple times, make small changes each time, and observe how people react to the different configurations,” said Mira, who continues to contribute improvements such as new interactive elements.

As we continue to bring the Developer Day experience to new cities, we’re also taking Developer Days online. “There are so many reasons why people may not be able to attend our in-person events,” said Lauren Schaefer, who recently rejoined MongoDB Developer Relations to lead the program forward. “I look forward to working with my team to tackle the challenges of bringing our curriculum successfully online.”

So a year later, I want to say thank you to everyone who has made Developer Days a success—from the seven staff members who supported our first event in Chicago, to the roughly 100 talented people across MongoDB who are now part of the program. Even more importantly, I’m thankful for all of the participants (across 35 great cities and 16 countries!) who joined us for this full-day experience. As I love to say at the beginning of every Developer Day, we’re here to learn from each other. I hope you’ve learned as much from us as we have from you!

To learn more about MongoDB’s global community of millions of developers—and to check out upcoming events like Developer Days—please visit our events page.

August 29, 2024

The Dual Journey: Healthcare Interoperability and Modernization

Interoperability in healthcare isn’t just a buzzword; it’s a fundamental necessity. It refers to the ability of IT systems to enable the timely and secure access, integration, and use of electronic health data. However, integrating data across different applications is a pressing challenge: 48% of US hospitals report a one-sided sharing relationship, in which they share patient data with other providers who do not, in turn, share patient data with the hospital. The ability to share electronic health data seamlessly across various healthcare systems can revolutionize patient care, enhance operational efficiency, and drive innovation. In this post, we’ll explore the challenges of healthcare data sharing, the role of interoperability, and how MongoDB can be a game-changer in this landscape.

The challenge of data sharing in healthcare

Today's consumers have high expectations for accessing information, and many now anticipate quick and continuous access to their health and care records. One of the biggest IT challenges faced by healthcare organizations is sharing data effectively and creating seamless data integrations to build patient-centric healthcare solutions. Healthcare data has to be shared in multiple ways:

Between internal applications, to ensure seamless data flow across various internal systems.
Between primary and secondary care, to coordinate care across healthcare providers.
To patient portals and telemedicine, to enhance patient engagement and remote care.
To payers, institutions, and patients themselves, to streamline interactions with insurance companies and regulatory bodies.
To R&D units, to accelerate medical research and pharmaceutical developments.

The complexity of healthcare data is staggering, and hospitals regularly need to integrate dozens of different applications—all of which means that there are significant barriers to healthcare data sharing and integration.

A vision for patient-centric healthcare

Imagine a world where patient data is shared in real time with all relevant parties—doctors, hospitals, labs, pharmacies, and insurance companies. This level of interoperability would streamline the flow of information, reduce errors, and improve patient outcomes. Achieving it, however, is no easy feat: healthcare data is immensely complex, spanning unstructured clinical notes, lab tests, medical images, medical devices, and even genomic data. Furthermore, the same type of data can mean different things depending on where and when it was collected. Achieving seamless data sharing also involves overcoming barriers between different healthcare providers and systems, all while adapting to evolving regulations and standards.

Watch the “MongoDB and FHIR: Navigating Healthcare Data” session from MongoDB.local NYC on YouTube.

The intersection of modernization and interoperability

Modernizing healthcare IT systems and achieving interoperability are two sides of the same coin. Both require significant investments and a focus on transitioning from application-driven to data-driven architecture. By focusing first on data and then connecting applications with a developer data platform like MongoDB Atlas, healthcare organizations can avoid data silos and achieve vendor-neutral data ownership.
Because healthcare interoperability standards define a common language, organizations might ask whether, instead of reinventing the wheel with their own data domains, they can use the interoperability journey (and its substantial investment) to modernize their applications. MongoDB’s document data model supports the JSON format, just like FHIR (Fast Healthcare Interoperability Resources) and other interoperability standards, making it an efficient and flexible data platform for developing healthcare applications beyond the limitations of external APIs.

FHIR for storing healthcare data?

The most widely implemented standard worldwide, HL7 FHIR, treats each piece of data as a self-contained resource with external links, similar to web pages. HL7 adopted a pragmatic approach: rather than defining a complete set of resources for all clinical data, it aimed to cover the 80% that most electronic health records (EHRs) share. For the 20% of non-standardized data, it created FHIR Extensions, which allow every resource to be extended for specific needs. However, FHIR is not yet fully mature: only 15 of the 158 resources it defines have reached the highest maturity level. Ongoing changes can be as simple as a renamed field, or complex enough that data has to be rearranged. FHIR is designed for the exchange of data but can also be used for persistence.

Figure 1: Using FHIR for persistence depending on the complexity of the use case

For specific applications with no complex data, such as integrating data from wearables, you can leverage FHIR for persistence. However, building a primary operational repository for broader applications, like a patient summary or even a healthcare information system, presents a significant challenge. This is because the required data model goes beyond the capabilities of FHIR, and solving that through FHIR Extensions is an inefficient approach.

OpenEHR as an alternative approach

In Catalonia, Spain—home to about 8 million people and roughly 60 public hospitals—there are 29 different hospital EHR systems. Each hospital maintains a team of developers exclusively focused on building interoperability interfaces, and as demand for data sharing increases, the cost of maintaining those interfaces will only grow. Rather than implementing interoperability interfaces, why not create new applications that are implicitly interoperable? This is what openEHR proposes: defining the clinical information model from a maximal perspective and developing applications that consume a subset of the clinical information system using an open architecture. However, while FHIR is very versatile—offering data models for administrative and operational data efficiently—openEHR focuses exclusively on clinical data. So while a combination of FHIR and openEHR can solve part of the problem, future healthcare applications need to integrate a wide variety of data, including medical images, genomics, proteomics, and data from complex medical devices—which is complicated by the lack of a single standard.

Overcoming this challenge with the document model and MongoDB

Now, let’s look at how the document model can advance interoperability while modernizing systems. MongoDB Atlas features a flexible document data model, which provides a natural way of storing and organizing healthcare data using JSON-like documents.
With a flexible data schema, healthcare organizations can accommodate any data structure, format, or source in one platform, providing the seamless third-party integration capabilities necessary for interoperability. While different use cases call for different solutions, the flexibility of the document model means MongoDB can adapt to changes. Figure 2 below shows a database modeled in MongoDB, where each collection stores a FHIR resource type (e.g., patients, encounters, conditions). These documents mirror the FHIR objects, preserving their complex hierarchy.

Let's imagine our application requires specific fields not supported by FHIR, and no FHIR Extension is needed because the data won't be shared externally. We can add a metadata field containing all this information, as flexible as needed. It can be used to track the evolution of the resource against the standard, the version of the document itself, a tenant ID for multi-tenant applications, and more.

Figure 2: Data modeled in MongoDB

Another possibility is to add the searchable fields of the resource as key-value pairs, so that data can be retrieved with a single index; we can keep these indexes up to date by automating them from the FHIR search parameters. In a single repository, we can combine FHIR data with custom application data. Additionally, data in other protocols can be integrated, providing unparalleled flexibility in accessing your data. This setup permits access through different endpoints within one repository: using MQL (the MongoDB Query Language), you can build FHIR APIs, or you can use MongoDB's SQL interface to connect your preferred business intelligence tools.

Figure 3: Unified and flexible data store and flexible data retrieval

MongoDB’s developer data platform

At the center of MongoDB’s developer data platform is MongoDB Atlas, the most advanced cloud database service on the market. It provides integrated full-text search capabilities, allowing applications to run complex queries without the need to maintain a separate search system. With generative AI, the multidimensional vectors that represent data are becoming a necessity. MongoDB Atlas stores vectors alongside the operational data and provides vector search, which enables fast data retrieval. You can also store metadata with your vector embeddings, as shown in Figure 4.

Figure 4: Vector embeddings

These capabilities transform a single database into a unique, powerful, and easy-to-use interface capable of handling diverse use cases without the need for single-purpose databases.

Solving the dual challenge with MongoDB

Achieving interoperability and modernization in healthcare IT is challenging but essential. MongoDB provides a powerful platform that meets organizations’ modern data management needs. By embracing MongoDB, healthcare organizations can unlock the full potential of their data, leading to improved patient outcomes and operational efficiency.

Figure 5: Closing the gap between the interoperability and modernization journeys with MongoDB

Refactoring applications to incorporate interoperability resources as part of documents—and extending them with all the requirements of your modern needs—will ensure organizations’ data layers remain robust and adaptable. By doing so, organizations can create a flexible architecture that can seamlessly integrate diverse data types and accommodate future advancements. A minimal sketch of this pattern follows.
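To make the pattern concrete, here is a minimal sketch using the MongoDB Java driver. The database, collection, and field names—and the shape of the metadata subdocument—are illustrative assumptions, not a prescribed FHIR persistence schema. It stores a FHIR Patient resource as-is, extends it with an application-only metadata field, indexes the fields the application searches on, and runs the kind of MQL query that could back a FHIR-style search endpoint.

```java
import java.util.List;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class FhirPatientStore {
    public static void main(String[] args) {
        // Connection string is a placeholder.
        try (MongoClient client = MongoClients.create("mongodb+srv://<cluster-uri>")) {
            MongoCollection<Document> patients =
                    client.getDatabase("fhir").getCollection("patients");

            // The FHIR Patient resource is stored verbatim; "metadata" is a
            // custom, application-only subdocument (no FHIR Extension needed,
            // because it is never shared externally).
            Document patient = new Document("resourceType", "Patient")
                    .append("id", "patient-001")
                    .append("name", List.of(new Document("family", "Serra")
                            .append("given", List.of("Nuria"))))
                    .append("birthDate", "1987-02-20")
                    .append("metadata", new Document("fhirVersion", "R4")
                            .append("docVersion", 3)
                            .append("tenantId", "hospital-29"));
            patients.insertOne(patient);

            // One compound index covering the fields this application searches on.
            patients.createIndex(Indexes.ascending("metadata.tenantId", "birthDate"));

            // An MQL query that could back a FHIR-style search such as
            // GET /Patient?birthdate=1987-02-20 for a given tenant.
            patients.find(Filters.and(
                            Filters.eq("metadata.tenantId", "hospital-29"),
                            Filters.eq("birthDate", "1987-02-20")))
                    .forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```

Because the standard resource and the custom fields live in the same document, one collection can serve both FHIR endpoints and internal application queries.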
This approach not only enhances data accessibility and simplifies data management but also supports compliance with evolving standards and regulations. Furthermore, it enables real-time data analytics and insights, fostering innovation and driving better decision-making. Ultimately, this strategy positions healthcare organizations to effectively manage and leverage their data, leading to improved patient outcomes and operational efficiencies.

For more detailed information and resources on how MongoDB can transform organizations’ healthcare IT systems, we encourage you to apply for an exclusive innovation workshop with MongoDB's industry experts to explore bespoke modern app development and tailored solutions for your organization. Additionally, check out these resources:

MongoDB and FHIR: Navigating Healthcare Data with MongoDB
How Leading Industries are Transforming with AI and MongoDB Atlas
The MongoDB Solutions Library, curated with tailored solutions to help developers kick-start their projects
For developers: From FHIR Synthea data to MongoDB

August 28, 2024

CTF Life Leverages MongoDB Atlas to Deliver Customer-Centric Service

Hong Kong-based Chow Tai Fook Life Insurance Company Limited (CTF Life) is proud of its rich, nearly 40-year history of providing a wide range of insurance and financial planning services. The company provides life, health, accident, savings, and investment insurance to its customers, helping them and their loved ones navigate life’s journey with personalized planning solutions, lifelong protection, and diverse lifestyle experiences. A wholly owned subsidiary of NWS Holdings Limited and a member of Chow Tai Fook Group, CTF Life continues to strengthen its collaboration with the Cheng family's diverse conglomerate (Chow Tai Fook Group), drawing on the Group’s robust financial strength, strategic investments across the globe, and advanced customer-focused digital technology, with the aspiration of becoming a leading insurance company in the Greater Bay Area.

To achieve this goal, CTF Life modernized its on-premises infrastructure to provide the speed and flexibility required to offer customers personalized experiences. To turn this vision into reality, CTF Life decided to adopt MongoDB Atlas. By modernizing its systems and processes with the world’s most versatile developer data platform, CTF Life knew it would be able to meet customer expectations, offering improved customer service, faster response times, and more convenient access to its products and services.

Data-driven customer service

The insurance industry is undergoing a significant shift from traditional data management to near-real-time, data-driven insights, propelled by strong consumer demand and the urgent need for companies to process large amounts of data efficiently. As insurance companies strive to provide personalized, real-time products, the move toward sophisticated, real-time data-driven customer service is inevitable.

CTF Life is on a digital transformation journey to modernize its relational database management system (RDBMS) infrastructure and empower its agents, known as Life Planners, to provide enhanced customer experiences. The company faced obstacles from legacy systems and siloed data: Life Planners were spending a lot of time looking up customer information across various systems and organizing it into useful customer insights, and the lack of a holistic view of customer data made it challenging to recommend personalized products and services within CTF Life, the Group, and beyond. This reliance on legacy RDBMS systems presented a major challenge in CTF Life’s pursuit of leveraging real-time customer information to enhance customer experiences and operational efficiency. For its modernization efforts, CTF Life required the following capabilities:

A modernized application with agile development
No downtime for schema changes, new modules, or feature updates
A single way of centralizing and organizing data from a number of sources (backend, CRM, etc.) into a standardized format ready for a front-end mobile application
A future-proof data platform with extensible analytics capabilities across CTF Life, its conglomerate collaborations, and its strategic partners to support the company’s digital solutions

Embracing the operational data layer for enhanced experiences

CTF Life knew it had to build a solution for the Life Planners to harness the wealth of useful information available to them, making it easier to engage and connect with customers.
The first project identified was the clienteling system, which is designed to establish long-term relationships with customers based on data about their preferences, behaviors, and needs. To overcome its legacy systems and siloed data, CTF Life built the clienteling system on MongoDB Atlas. Atlas serves as the digital data store for Life Planners, creating a single view of the customer (SVOC) with a flexible document model that enables CTF Life to handle large volumes of customer data efficiently and in real time. By integrating its operational data into one platform with MongoDB Atlas on Microsoft Azure, CTF Life’s revamped clienteling system provides Life Planners with a comprehensive view of customer profiles, allowing them to share targeted content with customers.

Additionally, CTF Life is using Atlas Search to build relevance-based search capabilities directly into the application, making it faster and easier to search for customer data across the company’s system landscape. Faster access to data through the SVOC improved customer service, as Life Planners can now provide more accurate and timely information to their customers. Atlas Search is now the foundation of the clienteling system, powering the data analytics and machine learning capabilities that support various use cases. For example, the clienteling app's smart-reminder feature recognizes key moments in a customer's life, like the impending arrival of a newborn child. Based on these types of insights, the app can help Life Planners make personalized recommendations to the customer about relevant services and products that may be of interest to them as new parents.

Because of its work with MongoDB, CTF Life can now analyze customer profiles and use smart reminders to engage customers at the right time, in the right context. This has made following up with customers and leads faster and easier. Contacting prospects, scheduling appointments, setting reminders, sharing relevant content, running campaigns and promotions, recommending products and services, and tracking lead progress can all be performed in one system. Moreover, access to real-time data enables Life Planners to streamline their work and reduce manual processes, and data-driven insights empower them to make informed decisions quickly: they can analyze customer information, identify trends, and tailor their recommendations to meet individual needs more effectively. With MongoDB Atlas Search, Life Planners can use advanced search capabilities to identify opportunities to serve customers better, as sketched below.
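To make "relevance-based search" concrete, here is a hypothetical sketch of an Atlas Search query over customer profiles using the MongoDB Java driver. This is illustrative only, not CTF Life's actual schema or implementation; the index name ("default"), collection, and field names are assumptions.

```java
import java.util.List;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class CustomerProfileSearch {
    public static void main(String[] args) {
        // Connection string is a placeholder.
        try (MongoClient client = MongoClients.create("mongodb+srv://<cluster-uri>")) {
            MongoCollection<Document> customers =
                    client.getDatabase("clienteling").getCollection("customers");

            // $search runs against an Atlas Search index (here, "default") and
            // ranks customer profiles by relevance to the query text.
            List<Document> pipeline = List.of(
                    new Document("$search", new Document("index", "default")
                            .append("text", new Document("query", "expecting a newborn")
                                    .append("path", List.of("notes", "lifeEvents", "preferences")))),
                    new Document("$limit", 10),
                    // Surface the relevance score alongside the profile fields.
                    new Document("$project", new Document("name", 1)
                            .append("lifeEvents", 1)
                            .append("score", new Document("$meta", "searchScore"))));

            customers.aggregate(pipeline).forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```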
Continuing to create value beyond insurance

CTF Life strives to provide its customers with value beyond insurance. Through a range of collaborations with Chow Tai Fook Group and strategic partnerships with technology partners like MongoDB, CTF Life has built a customer-centric approach and continues to advance its digital transformation strategy, enhancing a well-rounded experience that goes beyond insurance, grounded in a sincere and deep understanding of customers' diverse needs in every chapter of their life journey. In the future, CTF Life will continue to build on its strategic partnership with MongoDB and expand the use of its digital data store on MongoDB Atlas by creating new client-servicing modules in the mobile app its Life Planners use. CTF Life will also expand its search capabilities with Atlas Vector Search to accelerate its journey toward advanced search and generative AI applications for more automated servicing.

“Partnering with MongoDB helped us prioritize technology that accelerates our digital transformation. The integration between generative AI and MongoDB as a medium for information search can be leveraged to further support front-line Life Planners as well as mid- and back-office operations,” said Derek Ip, Chief Digital and Technology Officer of CTF Life.

Learn how to tap into real-time data with MongoDB Atlas.

August 28, 2024

Elevate Your Java Applications with MongoDB and Spring AI

MongoDB is excited to announce an integration with Spring AI, bringing MongoDB Atlas Vector Search to Java developers. This collaboration makes it easier to build intelligent, high-performance AI applications in Java.

Why Spring AI?

Spring AI is an AI library designed specifically for Java, applying the familiar principles of the Spring ecosystem to AI development. It enables developers to build, train, and deploy AI models efficiently within their Java applications. Spring AI addresses the gap left by AI frameworks and integrations that focus on other programming languages, such as Python, providing a streamlined solution for Java developers. Spring has been a cornerstone for Java developers for decades, offering a consistent and reliable framework for building robust applications. The introduction of Spring AI continues this legacy, providing a straightforward path for Java developers to incorporate AI into their projects. With the MongoDB-Spring AI integration, developers can leverage their existing Spring knowledge to build next-generation AI applications without the friction of learning a new framework.

Key features of Spring AI include:

Familiarity: Leverage the design principles of the Spring ecosystem. Spring AI allows Java developers to use the same familiar tools and patterns they already know from other Spring projects, reducing the learning curve and letting them focus on building innovative AI applications. This means you can integrate AI capabilities—including Atlas Vector Search—without having to learn a new language or framework, making the transition smoother and more intuitive.

Portability: Applications built with Spring AI can run anywhere the Spring framework runs. This ensures that AI applications are highly portable and can be deployed across various environments without modification, guaranteeing flexibility and consistency in deployment strategies.

Modular design: Use Plain Old Java Objects (POJOs) as building blocks. Spring AI’s modular design promotes clean code architecture and maintainability. By using POJOs, developers can create modular, reusable components that simplify the development and maintenance of AI applications. This modularity also facilitates easier testing and debugging, leading to more robust applications that integrate efficiently with Atlas Vector Search.

Efficiency: Streamline development with tools and features designed for AI applications in Java. Spring AI provides a range of tools that enhance development efficiency, including pre-built templates, configuration management, and integrated testing tools. These features reduce the time and effort required to develop AI applications, allowing developers to bring their ideas to market faster.

These features streamline AI development by enhancing the integration and performance of Atlas Vector Search within Java applications, making it easier to build and scale AI-driven features.

Enhancing AI development with Spring AI and Atlas Vector Search

MongoDB Atlas Vector Search enhances AI application development by providing advanced search capabilities. The new Spring AI integration lets developers manage and search vector data from within their applications, powering features like recommendation systems, natural language processing, and predictive analytics. Atlas Vector Search allows you to store, index, and search high-dimensional vectors, which are crucial for AI and machine learning models.
This capability supports a range of AI features:

Recommendation systems: Provide personalized recommendations based on user behavior and preferences.
Natural language processing: Enhance text analysis and understanding for chatbots, sentiment analysis, and more.
Predictive analytics: Improve forecasting and decision-making with advanced data models.

What the integration means for Java developers

Prior to the MongoDB-Spring AI integration, Java developers had no easy way to use MongoDB Atlas Vector Search from Spring in their AI applications, which led to longer development times and suboptimal application performance. With this integration, the Java development landscape is transformed, allowing developers to build and deploy AI applications with greater efficiency. The integration simplifies the entire process, enabling developers to concentrate on creating innovative solutions rather than dealing with integration hurdles. This approach not only reduces development time but also accelerates time-to-market. Additionally, MongoDB offers robust support through comprehensive tutorials and a wealth of community-driven content. Whether you’re just beginning or looking to optimize existing applications, you’ll find the resources and guidance you need at every stage of your development journey.

Get started!

The MongoDB and Spring AI integration is designed to simplify the development of intelligent Java applications. By combining MongoDB's robust data platform with Spring AI's capabilities, you can create high-performance applications more efficiently. To start using MongoDB with Spring AI, explore our documentation and tutorial, and check out our GitHub repository to build the next generation of AI-driven applications today.
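To give a feel for the developer experience, here is a minimal sketch of the integration. It assumes Spring Boot has auto-configured a VectorStore backed by MongoDB Atlas Vector Search (via the Spring AI MongoDB Atlas store dependency and the usual spring.ai.* configuration properties); the service class and sample data are hypothetical.

```java
import java.util.List;
import java.util.Map;

import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.stereotype.Service;

@Service
public class ProductRecommendationService {

    // Expected to be backed by MongoDB Atlas Vector Search when the
    // Spring AI MongoDB Atlas store module is on the classpath.
    private final VectorStore vectorStore;

    public ProductRecommendationService(VectorStore vectorStore) {
        this.vectorStore = vectorStore;
    }

    public void indexCatalog() {
        // Each Document is embedded via the configured embedding model and
        // stored in Atlas alongside its metadata.
        vectorStore.add(List.of(
                new Document("Waterproof hiking boots with ankle support",
                        Map.of("category", "footwear")),
                new Document("Insulated down jacket for alpine conditions",
                        Map.of("category", "outerwear"))));
    }

    public List<Document> recommend(String userQuery) {
        // Runs a semantic similarity search against the stored vectors.
        return vectorStore.similaritySearch(userQuery);
    }
}
```

Because VectorStore is a standard Spring abstraction, the same POJO-based service would work against any other supported vector store, illustrating the portability and modular design described above.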

August 26, 2024