MongoDB Blog

Announcements, updates, news, and more

Exploring New Security, Billing, and Customization Features in Atlas Charts

MongoDB is excited to announce a few new updates to Atlas Charts that enable you to securely share insights, gain deeper visibility into expenses, and customize your most frequently used data visualizations. Based on feedback from users of our native visualization tool, these improvements will make data analysis even more productive. We:

- Improved security in Atlas Charts with passcode-protected public dashboards
- Increased visibility into Atlas spending through an updated billing dashboard
- Introduced new customization for table charts through hyperlinks and hidden columns

Secure insights with passcode-protected public dashboards

First, there's the new passcode-protected public dashboards feature, which brings an extra layer of security to publicly shared dashboards—we understand that not everyone who benefits from Atlas Charts operates within MongoDB Atlas. Alongside the ability to schedule email reports and support for publicly shared dashboards, we're offering a new and secure way to spread insights. Add an extra layer of security to dashboards you have already shared publicly, ensuring that only authorized users with the passcode can access your data.

Enabling passcode protection on a dashboard is simple. As a dashboard owner, a new option is available to protect dashboard links with a passcode when sharing publicly.

[Image: Check the box to protect your public link with a passcode]

Once enabled, a passcode is automatically generated and can be copied to the clipboard (and regenerated on demand as needed). Viewers navigating to dashboards via the public link will see a new screen prompting them to enter a passcode. Once authenticated successfully, they can view the dashboard just as before.

[Image: Easily access your dashboards by inputting your password when prompted]

Whether you're sharing insights with clients, stakeholders, or team members, rest assured that your data remains easily accessible yet secure. To learn more about the different ways we support dashboard sharing, check out our documentation.

What's new in the Atlas Charts billing dashboard

Next, we continue to make enhancements to the MongoDB Atlas Charts billing dashboard, all of which provide insights into Atlas expenses. We are delighted to share that it's now possible to see resource tags data, as well as billing data from all linked organizations, inside the Atlas Charts billing dashboard. Additionally, users can now ingest billing data from another organization, provided they possess the organization's API keys.

These newly introduced features rely on the availability of billing data within the organization. And for those leveraging resource tags, the billing data will seamlessly integrate, empowering users to generate personalized charts or to incorporate tailored dashboard filters within the Atlas Charts billing dashboard. If cross-organization billing is enabled, editing the configuration will ingest the linked organization's billing data for the last three months, with the option to extend this period to up to a year by creating a new ingestion.

Project tags data in the Atlas Charts billing dashboard

Resource tags are now seamlessly integrated into billing data and can be included in any of the charts or dashboard filters inside the Atlas billing dashboard. For example, our MongoDB organization uses the Atlas auto-suggested tags "application" and "environment," alongside a custom resource tag labeled "team."
The following chart uses the tags data and shows the billing cost per team and per environment.

[Image: A chart depicting cost per team and environment using tags]

The subsequent chart presents the billing cost allocated per project and team, providing valuable insights into the primary cost drivers for each team's projects.

[Image: A chart depicting cost allocated per project and team]

Users can also add a dashboard filter on the "tags" field, which allows them to filter the whole dashboard based on the selected tag values. In the next example, we have selected the specific "team": "Charts" value from the tags dashboard filter, so we can see all of the billing insights for that team thanks to our custom tag.

[Image: Billing insights filtered by the "Charts" team in an intuitive dashboard]

Linked organization's data in the Atlas Charts billing dashboard

For complex Atlas projects spanning multiple organizations, the Atlas Charts billing dashboard now seamlessly integrates billing data from all linked organizations. The most productive use case is to add a dashboard filter based on the "organizationId" field to enable filtering data by specific organizations for a more granular analysis of spending.

[Image: Dashboard filtered by the organizationId field to show insights for one organization]

Billing data from another organization

Users can now ingest billing data from other organizations that are not directly linked, provided they possess authorization API keys, bringing the data you need to where you are.

[Image: Provide the API key to ingest billing data from other organizations]

These new features in the Atlas Charts billing dashboard are designed to provide richer, more detailed insights into organization spend. Check out our documentation and our previous blog post to learn more.

Hyperlinks and hidden columns for tables in Atlas Charts

Of all the data visualization methods available in Atlas Charts, table charts rank as one of the most popular among our users. So it should come as no surprise that one of the most highly requested features from our customers is the ability to format columnar data as hyperlinks. We're excited to announce that this is now possible in Atlas Charts through the new hyperlink customization options available for table charts. With hyperlink customization, you can format columnar data as hyperlinks using any of the following URI protocols: http, https, mailto, or tel. URIs can be constructed statically or dynamically using encoded fields.

Let's assume we've created a table using the sample movies dataset in Atlas, with encodings like title, imdb.id, runtime, genre, poster_display—which is a calculated field—and more.

[Image: Customization panel in Atlas Charts]

To turn movie titles into clickable links that direct users to their respective IMDB pages, navigate to the customization panel and click into the hyperlinking feature in the fields tab. We will format the title field as a hyperlink that links to the Internet Movie Database (IMDB) entry for that movie. IMDB URLs are formatted as follows, where <id> needs to be substituted with the value of the imdb.id field for each document:

https://www.imdb.com/title/tt<id>/

Customize the "title" field in the table chart to link to IMDB using the "imdb.id" field in the URI input. Below, a preview displays the fully formatted URI with fields substituted for their values, helping to ensure it's correct before we save it and apply it to the chart.
[Image: Preview of the URI in the hyperlinking panel]

Since we only need the imdb.id field to be encoded for the purpose of constructing the URI applied to the title field, we can hide the column from rendering using another new customization option. Select the imdb.id field in the customization panel, and toggle on the "Hide Column" option.

[Image: Toggle "Hide Column"]

We also support using URI values directly from fields (provided they use one of the supported protocols). Let's see this in action by creating a hyperlink to the movie poster. In the URI input, trigger the encoded field menu using the @ keyboard shortcut, and select the poster field. Similar to the previous example, a preview will be displayed. After saving and applying the hyperlink formatting, we can hide the rendering of the poster field as needed to keep the chart clean.

[Image: Use the @ keyboard shortcut to trigger the encoded field menu]

All these options are accessible in the customization panel, making it straightforward to enhance table charts with interactive hyperlinks. For more detailed instructions, visit our documentation.

As we conclude this roundup, we hope you're as excited about these updates as we are. The Atlas Charts team is dedicated to continuously improving Atlas Charts to meet your needs and enhance your data visualization experience. Stay tuned for more updates, and happy charting!

New to Atlas Charts? Get started today by logging into or signing up for MongoDB Atlas, deploying or selecting a cluster, and activating Charts for free.

September 5, 2024
Updates

Saving Energy, Smarter: MongoDB and Cedalo for Smart Meter Systems

The global energy landscape is undergoing a significant transformation, with energy consumption rising 2.2% in 2023, surpassing the 2010-2019 average of 1.5% per year. This increase is largely due to global developments in BRICS member countries—Brazil, Russia, India, China, and South Africa. As renewable sources like solar power and wind energy become more prevalent (in the EU, renewables accounted for over 50% of the power mix in the first quarter of 2024), ensuring a reliable and efficient energy infrastructure is crucial.

Smart meters, the cornerstone of intelligent energy networks, play a vital role in this evolution. According to IoT analyst firm Berg Insight, the penetration of smart meters is skyrocketing: the US and Canada are expected to reach nearly 90% adoption by 2027, while China is expected to account for as much as 70–80% of smart electricity meter demand across Asia in the next few years. This surge is indicative of a growing trend towards smarter, more sustainable energy solutions. In Central Asian countries, the Asian Development Bank is supporting the fast deployment of smart meters to save energy and improve the financial position of countries' power utilities.

This article will delve into the benefits of smart meters, the challenges associated with managing their data, and the innovative solutions offered by MongoDB and Cedalo.

The rise of smart meters

Smart meters, unlike traditional meters that require manual readings, collect and transmit real-time energy consumption data directly to energy providers. This digital transformation offers numerous benefits, including:

- Accurate billing: Smart meters eliminate the need for estimations, ensuring that consumers are billed precisely for the energy they use.
- Personalized tariffs: Energy providers can offer tailored tariffs based on individual consumption patterns, allowing consumers to take advantage of off-peak rates, special discounts, and other cost-saving opportunities.
- Enhanced grid management: Smart meter data enables utilities to optimize grid operations, reduce peak demand, and improve overall system efficiency.
- Energy efficiency insights: Consumers can gain valuable insights into their energy usage patterns, identifying areas for improvement and reducing their overall consumption.

With the increasing adoption of smart meters worldwide, there is a growing need for effective data management solutions to harness the full potential of this technology.

Data challenges in smart meter adoption

Despite the numerous benefits, the widespread adoption of smart meters also presents significant data management challenges. To use smart metering, power utility companies need to deploy a core smart metering ecosystem that includes the smart meters themselves, the meter data collection network, the head-end system (HES), and the meter data management system (MDMS).

Smart meters collect data from end consumers and transmit it to the data aggregator via the Local Area Network (LAN). The transmission frequency can be adjusted to 15 minutes, 30 minutes, or hourly, depending on data demand requirements. The aggregator retrieves the data and then transmits it to the head-end system. The head-end system analyzes the data and sends it to the MDMS. The initial communications path is two-way: signals or commands can be sent directly to the meters, customer premises, or distribution devices.
Figure 1: End-to-end data flow for a smart meter management system / advanced metering infrastructure (AMI 2.0)

When setting up smart meter infrastructure, power and utility companies face several significant data-related challenges:

- Data interoperability: The integration and interoperability of diverse data systems pose a substantial challenge. Smart meters must be seamlessly integrated with existing utility systems and other smart grid components, often requiring extensive upgrades and standardization efforts.
- Data management: The large volume of data generated by smart meters requires advanced data management and analytics capabilities. Utilities must implement robust solutions to store and process real-time time series data streams, analyze them for anomaly detection, and trigger decision-making processes.
- Data privacy: Smart meters collect vast amounts of sensitive information about consumer energy usage patterns, which must be protected against breaches and unauthorized access.

Addressing these challenges is crucial for the successful deployment and operation of smart meter infrastructure.

MQTT: A cornerstone of smart meter communication

MQTT, a lightweight publish-subscribe protocol, shines in smart meter communication beyond the initial connection. It's ideal for resource-constrained devices on low-bandwidth networks, making it perfect for smart meters. While LoRaWAN or PLC handle meter-to-collector links, MQTT bridges head-end systems (HES) and meter data management systems (MDMS). Its efficiency, reliable delivery, and security make it well-suited for large-scale smart meter deployments.

Cedalo MQTT Platform and MongoDB: A powerful combination

Cedalo, established in 2017, is a leading German software provider specializing in MQTT solutions. Its flagship product, the Cedalo MQTT Platform, offers a comprehensive suite of features, including the Pro Mosquitto MQTT broker and Management Center. Designed to meet the demands of large enterprises, the platform delivers high availability, audit trail logging, persistent queueing, role-based access control, SSO integration, advanced security, and enhanced monitoring.

To complement the platform's capabilities, MongoDB's time series collections provide a robust and optimized solution for storing and analyzing smart meter data. These collections leverage a columnar storage format and compound secondary indexes to ensure efficient data ingestion, reduced disk usage, and rapid query processing. Additionally, window functions enable flexible time-based analysis, making them ideal for IoT and analytical applications.

Figure 2: MongoDB as the main database for the meter data management system, where it receives meter data via the Pro Mosquitto MQTT broker

Let us revisit Figure 1 and leverage both the Cedalo MQTT Platform and MongoDB in our design. In Figure 2, the head-end system (HES) can use MQTT to filter, aggregate, and convert data before storing it in MongoDB. This data flow can be established using the MongoDB Bridge plugin provided by Cedalo. Since the MQTT payload is JSON, it is a natural fit for MongoDB, which stores data in BSON (Binary JSON).
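To make this concrete, here is a minimal sketch of how a meter-readings time series collection could be created and what a stored reading might look like, using the MongoDB Java driver. The database, collection, and field names are hypothetical and chosen purely for illustration; they are not part of the Cedalo or MongoDB products described above.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.CreateCollectionOptions;
import com.mongodb.client.model.TimeSeriesGranularity;
import com.mongodb.client.model.TimeSeriesOptions;
import org.bson.Document;

import java.util.Date;
import java.util.concurrent.TimeUnit;

public class MeterReadingStore {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb+srv://<connection-string>")) {
            MongoDatabase db = client.getDatabase("smart_metering"); // hypothetical database name

            // Time series collection: "ts" is the time field, "meter" holds per-meter metadata.
            TimeSeriesOptions tsOptions = new TimeSeriesOptions("ts")
                    .metaField("meter")
                    .granularity(TimeSeriesGranularity.MINUTES);

            db.createCollection("meter_readings",
                    new CreateCollectionOptions()
                            .timeSeriesOptions(tsOptions)
                            .expireAfter(365L, TimeUnit.DAYS)); // optional retention window

            // A reading document, whether written by the MQTT bridge or by a downstream transform
            // (the JSON payload maps directly to BSON fields).
            Document reading = new Document("ts", new Date())
                    .append("meter", new Document("meterId", "SM-000123").append("feeder", "F-17"))
                    .append("voltage", 229.8)
                    .append("current", 5.4)
                    .append("activePowerKw", 1.24);

            db.getCollection("meter_readings").insertOne(reading);
        }
    }
}
```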
The MongoDB Bridge plugin offers advanced features such as flexible data import settings (specifying target databases and collections, choosing authentication methods, and selecting specific topics and message fields to import) and advanced collection mapping (mapping multiple MQTT topics to one or more collections, with the ability to choose specific fields for insertion).

MongoDB's schema flexibility is crucial for adapting to the ever-changing structures of MQTT payloads. Unlike traditional databases, MongoDB accommodates shifts in data format seamlessly, eliminating the constraints of rigid schema requirements. This helps with the interoperability challenges faced by utility companies.

Once the data is stored in MongoDB, it can be analyzed for anomalies. Anomalies in smart meter data can be identified based on various criteria, including sudden spikes or drops in voltage, current, power, or other metrics that deviate significantly from normal patterns. Here are some common types of anomalies that we might look for in smart meter data:

- Sudden spikes or drops: A sudden increase or decrease in voltage, current, or power beyond expected limits.
- Outliers: Data points that are significantly different from the majority of the data.
- Unusual patterns: Unusually high or low energy consumption compared to historical data, or inconsistent power factor readings.
- Frequency anomalies: Frequency readings that deviate from the normal range.

MongoDB's robust aggregation framework can aid in anomaly detection. Both anomaly data and raw data can be stored in time series collections, which offer a reduced storage footprint and improved query performance due to an automatically created clustered index on timestamp and _id. The high compression offered addresses the challenge of data management at scale. Additionally, data tiering capabilities like Atlas Online Archive can be leveraged to push cold data into cost-effective storage.

MongoDB also provides built-in security controls for all your data, whether managed in a customer environment or in MongoDB Atlas, a fully managed cloud service. These security features include authentication, authorization, auditing, data encryption (including Queryable Encryption), and the ability to access your data securely with dedicated clusters deployed in a unique Virtual Private Cloud (VPC).

End-to-end solution

Figure 3: End-to-end data flow

Interested readers can clone this repository and set up their own MongoDB-based smart meter data collection and anomaly detection solution. The solution follows the pattern illustrated in Figure 3, where a smart meter simulator generates raw data and transmits it via an MQTT topic. A Mosquitto broker receives these messages, which are then stored in a MongoDB collection using the MongoDB Bridge. By leveraging MongoDB change streams, an algorithm can retrieve these messages, transform them according to MDMS requirements, and perform anomaly detection. The results are stored in a time series collection using a highly compressed format.

The Cedalo MQTT Platform with MongoDB offers all the essential components for a flexible and scalable smart meter data management system, enabling a wide range of applications such as anomaly detection, outage management, and billing services. This solution empowers power distribution companies to analyze trends, implement real-time monitoring, and make informed decisions regarding their smart meter infrastructure.
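A rough sketch of that change-stream step is shown below, again with the MongoDB Java driver and the hypothetical collection names used earlier. It is not the repository's actual implementation: the change stream is opened on the regular collection where the bridge lands raw messages (change streams require a replica set or Atlas cluster and are not opened on the time series collection itself), and the voltage thresholds are purely illustrative.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class MeterAnomalyWatcher {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb+srv://<connection-string>")) {
            MongoDatabase db = client.getDatabase("smart_metering");

            // Regular collection where the MQTT bridge lands raw payloads (hypothetical name).
            MongoCollection<Document> raw = db.getCollection("raw_mqtt_messages");
            MongoCollection<Document> readings = db.getCollection("meter_readings");   // time series
            MongoCollection<Document> anomalies = db.getCollection("meter_anomalies"); // time series

            // React to each newly ingested message.
            raw.watch().forEach(change -> {
                Document msg = change.getFullDocument();
                if (msg == null) {
                    return; // e.g., delete events carry no full document
                }

                // Transform the raw payload into the MDMS reading shape and persist it.
                readings.insertOne(new Document("ts", msg.getDate("ts"))
                        .append("meter", new Document("meterId", msg.getString("meterId")))
                        .append("voltage", msg.getDouble("voltage"))
                        .append("activePowerKw", msg.getDouble("activePowerKw")));

                // Naive threshold check, purely illustrative (roughly ±10% around 230 V).
                Double voltage = msg.getDouble("voltage");
                if (voltage != null && (voltage < 207.0 || voltage > 253.0)) {
                    anomalies.insertOne(new Document("ts", msg.getDate("ts"))
                            .append("meter", new Document("meterId", msg.getString("meterId")))
                            .append("type", "voltage_out_of_range")
                            .append("value", voltage));
                }
            });
        }
    }
}
```

In practice, the aggregation framework or windowed queries could replace the simple threshold above, but the shape of the flow (watch, transform, store, flag) stays the same.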
We are actively working with our clients to solve IoT challenges. Take a look at our Manufacturing and Industrial IoT page for more stories.

September 4, 2024
Applied

Mobile and Edge Solutions with MongoDB and Ditto

Mobile and edge solutions offer impressive opportunities for profit and growth for a variety of businesses around the world. Companies have consistently found ways to use mobile applications to grow revenue, cut costs, and stay ahead of the competition. In the power and utilities sector, for example, field workers can get enabled quickly by accessing their daily tasks on mobile devices, and in retail, consumers can use mobile apps to skip lines, providing businesses with upselling opportunities that can result in larger transactions. Indeed, mobile commerce is estimated to make up 44.6% of total US retail ecommerce sales in 2024. For banks, increased use of mobile applications can reduce operating costs by decreasing the demand for in-person and phone-based customer service. At the same time, having a mobile app allows financial institutions to reach additional customers, as many internet users around the world (particularly in developing countries) rely on mobile access.

Time and time again, we've seen that the most successful apps are those that meet modern user expectations. Specifically, apps need to be fast and reactive, without lags or crashes. And if internet connectivity drops, the app should continue functioning normally until connectivity is restored. In cases where the workforce is located in low-connectivity areas—e.g., warehouses, factories, and rural areas—peer-to-peer sync is a requirement for apps to communicate with each other and sync data.

In such an ever-important space, partnerships are critical to combining the strengths of organizations to create solutions that would be challenging to develop independently. At MongoDB, we're laser-focused on bringing the best solutions to customers. So we're thrilled to announce MongoDB's partnership with Ditto, a company that enables consistently fast data synchronization between devices like mobile phones and point-of-sale systems for mission-critical enterprise apps, regardless of environment connectivity and existing infrastructure.

With MongoDB and Ditto, businesses can drive consistent revenue at the edge without Wi-Fi, servers, or a cloud connection. Retailers can sell products, banks can deliver services, and energy companies can conduct operations anywhere without worrying about connectivity.

Welcome to our mobile partner, Ditto

Based in San Francisco, Ditto is revolutionizing the mobile app development space. Ditto technology uses existing devices like phones and tablets to create a distributed wireless network that can sync data anytime, even without the internet, Wi-Fi, or servers. With Ditto's SDK, devices automatically discover, connect, and sync with each other in peer-to-peer (P2P) mesh networks. This means that when the internet goes down or Wi-Fi is spotty, deskless workers can continue to serve customers or complete business-critical workflows.

Ditto manages a mesh network of devices and automatically syncs data changes locally in the mesh and opportunistically with the cloud when available. Depending on the environment and device positioning, Ditto intelligently switches between LAN, BLE, P2P Wi-Fi, IP-based transports, and cellular to ensure that apps get the fastest sync.

Ditto's platform has two major components:

- Small Peer/Ditto SDK: This is the Ditto SDK embedded into an application that lives on a mobile device, point-of-sale system, IoT device, and more. There can be many Small Peers in a solution. Small Peers self-organize and sync with each other regardless of internet connectivity, and with the cloud when connectivity is available.
- Big Peer: Ditto's middleware platform that receives the data from Small Peers and forwards it to MongoDB.

And some of the unique value propositions that Ditto offers include:

- Self-organizing mesh networking: Devices running Ditto-powered apps automatically and securely discover nearby peers and form wireless, distributed networks.
- Intelligent peer-to-peer data sync: Devices in the mesh exchange data in real-time via Bluetooth Low Energy, peer-to-peer Wi-Fi, local area network, and more.
- Conflict-free replicated data types (CRDTs): Ditto peers each have a local database. To ensure low-bandwidth usage and support concurrent edits, only the deltas, or changes, are synced.
- Distributed architecture: As the image below shows, Ditto isn't reliant on a centralized system to synchronize data. Each device has an embedded database capable of reading, writing, and syncing deltas within the mesh. This means there is no single point of failure, such as a cloud or server.

With MongoDB and Ditto working together, developers can create robust data pipelines from mobile to cloud. MongoDB Atlas is a multi-cloud developer data platform that gives users the versatility they need to build a wide variety of applications—including mobile applications. With MongoDB Atlas, users can scale their mobile applications' backend confidently with a foundation built for resilience, performance, and security. Additionally, MongoDB Atlas enables delivering fast and consistent mobile user experiences in any region on AWS, Azure, and Google Cloud—or replicating data across multiple regions and clouds to reach wider audiences and protect against broader outages.

Read more about Ditto at our partner catalog page.

September 3, 2024
Applied

Away From the Keyboard: Anaiya Raisinghani, MongoDB Developer Advocate

Welcome to our new article series focused on developers and what they do when they're not building incredible things with code and data. "Away From the Keyboard" features interviews with developers at MongoDB, discussing what they do, how they establish a healthy work-life balance, and their advice for others looking to create a more holistic approach to coding.

In our first article, Anaiya Raisinghani shares her day-to-day responsibilities as a Developer Advocate at MongoDB; how she uses nonrefundable workout classes and dinner reservations to help her step away from work; and her hack for making sure that when she logs off for the day, she stays logged off.

Q: What do you do at MongoDB?

Anaiya: I'm a developer advocate here at MongoDB on the Technical Content team! This means I get to build super fun MongoDB tutorials for the entire developer community. I'm lucky in that each day is different. If I'm researching a platform to build a tutorial, it can mean hours of research and reading up on documentation, whereas if I'm filming a YouTube video it means lots of time recording and editing.

Q: What does work-life balance look like for you?

Anaiya: A bad habit of mine is to get really caught up in a piece of content I'm creating and refuse to leave a certain spot until I've accomplished what I've set out to do that day. Because of this—and because I work mainly from home—if I can anticipate that I'm going to get caught up in a project, I create plans that force me to leave my desk. Some examples of these are non-refundable workout classes, drinks with friends after work (I hate being a flake), or even dinner reservations that charge you if you cancel less than 24 hours in advance. My biggest gripe is paying for something that I didn't get anything out of. If I'm paying for a single pilates class, I will make sure I'm there trying my best on the reformer. So this has been a fantastic motivator. Being 25 and living in NYC means that my weekends are always booked, so I'm always out and about, and this allows me to not think about work on my time off. I'm also lucky enough to have a great manager and team that keep very healthy work-life boundaries, so I never feel guilty practicing those boundaries myself.

Q: Was that balance always a priority for you or did you develop it later in your career?

Anaiya: This balance was definitely something I had to develop and actively work on. I've always been an anxious over-achiever, and when coming into my first corporate job I thought staying overtime would be expected. We've all heard the phrase: "Be the first one in and the last to leave." My manager actually used to actively tell me to log off when I first started because he would notice that my Slack was active past work hours (shoutout to Nic!). Having him and my team as a great example helped me understand that there will always be more work and to enjoy the time that you spend away from your laptop. It was also the realization that working shouldn't be your entire life. You need to develop hobbies and build relationships within your community in order to be a happier human being.

Q: What benefits has this balance given you?

Anaiya: The biggest benefit this balance has given me both at work and in my life is that I'm incredibly present when I'm doing one or the other. When I'm working during the day, I'm entirely locked in and take advantage of each hour. And when I'm done with the workday, I'm actually done and can focus on my hobbies or my friends.
It's also taught me to plan in advance and it gives me a better understanding of how much work on average is expected for each project.

Q: What advice would you give to a developer seeking to find a better balance?

Anaiya: If you're seeking a better balance, I recommend removing Slack from your personal phone and laptop. This way when you're disconnected, you're truly disconnected. Of course, there are some teams and companies that require you to be on call or working around the clock, but even then having a specific laptop or device with everything you need that is separate from your personal devices can help bridge this gap.

Thank you to Anaiya Raisinghani for sharing her insights! And thanks to all of you for reading. Look for more in our new series.

Interested in learning more about or connecting more with MongoDB? Join our MongoDB Community to meet other community members, hear about inspiring topics, and receive the latest MongoDB news and events. And let us know if you have any questions for our future guests when it comes to building a better work-life balance as developers. Tag us on social media: @mongodb #AwayFromTheKeyboard

September 3, 2024
Culture

The Learning Experience: Celebrating a Year of MongoDB Developer Days

One year ago today, MongoDB held our first regional Developer Day event, a full-day experience designed to teach the fundamentals and advanced capabilities of MongoDB. Developer Day events have since been held in person in over 35 cities, across 16 countries, and in seven languages. Initially created by a core team with a passion for enabling developers, the program has since been scaled by a group of talented MongoDB Developer Advocates and Solution Architects to directly engage and teach thousands of developers worldwide.

As developer relations programs go, engaging builders through hands-on workshops is hardly a new approach. But the difference at MongoDB is how closely we collaborate cross-functionally with other teams, and how focused we are on providing authentically hands-on, engaging experiences for developers. From the start, the MongoDB Developer Day program was a collaborative effort to take a platform that is easy to get started with and to introduce more advanced capabilities in a compelling way. Based on the helpful and positive feedback we continue to get from participants, we know Developer Days help developers gain skills and a sense of accomplishment as they alternate between instruction and applying that learning through hands-on exercises.

To share more about what has made Developer Days so successful, I asked members of that core team—Lead Developer Advocate Joel Lord, and Sr. Developer Advocates Mira Vlaeva and Diego Freniche Brito—to reflect on key areas that went into creating this enduring experience.

[Image: Our Developer Day class at MongoDB.local NYC in May 2024]

A sense of accomplishment

"We all agreed that a Developer Day should focus on hands-on learning, where developers can experiment with MongoDB, potentially make mistakes, and take pride in building something on their own," Mira said. "The goal was to build a fun learning experience rather than just sitting and listening to lectures all day."

The curriculum was designed to encourage developers to work together at certain points as they advance from data modeling and schema design concepts to implementing powerful Atlas Search and Vector Search capabilities. "One of my favorite moments of the day is when people start working together," Joel said. "At the beginning of the day, our attendees can be a little hesitant, but they quickly begin to collaborate with each other, and it's wonderful to witness that happen."

Building great Developer Days together

While our MongoDB Developer Relations team designed the agenda and course material, it was our partnerships with other teams and stakeholders that helped Developer Days take flight. There were many key stakeholders who shared a vision for enabling developers to realize more value with our platform. As Joel remembers, "We had to work and collaborate with a number of other teams, which was, at the time, new to us." Among the key teams involved, Diego added, "Working with Field and Strategic Marketing teams has been a great experience. They help us so much with all the really important tasks… there's so much they've done that Developer Days wouldn't be a reality without them."

The program has expanded our collaboration with several other teams, including marketing, product, and sales, to ensure our courses remain up-to-date and we make the most of our time in each city by welcoming developers from key accounts.

Continued success and improvements

To ensure Developer Days were as impactful as possible, we initially ran the program as a pilot in seven cities.
In addition to noting live observations and interactions, we used surveys to collect feedback and report on an NPS (net promoter score) to assess whether the event exceeded participant expectations. These initial events were spaced out enough that the team could implement improvements and try new approaches at subsequent Developer Days. "We had the opportunity to run the same labs multiple times, make small changes each time, and observe how people react to the different configurations," said Mira, who continues to contribute improvements such as new interactive elements.

As we continue to bring the Developer Day experience to new cities, we're also taking Developer Days online. "There are so many reasons why people may not be able to attend our in-person events," said Lauren Schaefer, who recently rejoined MongoDB Developer Relations to lead the program forward. "I look forward to working with my team to tackle the challenges of bringing our curriculum successfully online."

So a year later, I want to say thank you to everyone who has made Developer Days a success—from the seven staff members who supported our first event in Chicago, to the roughly 100 talented people across MongoDB who are now part of the program. Even more importantly, I'm thankful for all of the participants (across 35 great cities and 16 countries!) who joined us for this full-day experience. As I love to say at the beginning of every Developer Day, we're here to learn from each other. I hope you've learned as much from us as we have from you!

To learn more about MongoDB's global community of millions of developers—and to check out upcoming events like Developer Days—please visit our events page.

August 29, 2024
Events

The Dual Journey: Healthcare Interoperability and Modernization

Interoperability in healthcare isn't just a buzzword; it's a fundamental necessity. It refers to the ability of IT systems to enable the timely and secure access, integration, and use of electronic health data. However, integrating data across different applications is a pressing challenge, with 48% of US hospitals reporting a one-sided sharing relationship in which they share patient data with other providers who do not, in turn, share patient data with the hospital. The ability to share electronic health data seamlessly across various healthcare systems can revolutionize patient care, enhance operational efficiency, and drive innovation. In this post, we'll explore the challenges of healthcare data sharing, the role of interoperability, and how MongoDB can be a game-changer in this landscape.

The challenge of data sharing in healthcare

Today's consumers have high expectations for accessing information, and many now anticipate quick and continuous access to their health and care records. One of the biggest IT challenges faced by healthcare organizations is sharing data effectively and creating seamless data integrations to build patient-centric healthcare solutions. Healthcare data has to be shared in multiple ways:

- Between internal applications, to ensure seamless data flow across various internal systems.
- Between primary and secondary care, to coordinate care across healthcare providers.
- To patient portals and telemedicine, to enhance patient engagement and remote care.
- To payers, institutions, and patients themselves, to streamline interactions with insurance companies and regulatory bodies.
- To R&D units, to accelerate medical research and pharmaceutical developments.

The complexity of healthcare data is staggering, and hospitals regularly need to integrate dozens of different applications—all of which means that there are significant barriers to healthcare data sharing and integration.

A vision for patient-centric healthcare

Imagine a world where patient data is shared in real-time with all relevant parties—doctors, hospitals, labs, pharmacies, and insurance companies. This level of interoperability would streamline the flow of information, reduce errors, and improve patient outcomes. Achieving this, however, is no easy feat, as healthcare data is immensely complex, involving various types of data such as unstructured clinical notes, lab tests, medical images, medical devices, and even genomic data. Furthermore, the same types of data can mean different things depending on where and when they were collected. Achieving seamless data sharing also involves overcoming barriers to data sharing between different healthcare providers and systems, all while adapting to evolving regulations and standards.

Watch the "MongoDB and FHIR: Navigating Healthcare Data" session from MongoDB.local NYC on YouTube.

The intersection of modernization and interoperability

Modernization of healthcare IT systems and achieving interoperability are two sides of the same coin. Both require significant investments and a focus on transitioning from application-driven to data-driven architecture. By focusing first on data and then connecting applications with a developer data platform like MongoDB Atlas, healthcare organizations can avoid data silos and achieve vendor-neutral data ownership.
As healthcare interoperability standards define a common language, organizations might question whether, instead of reinventing the wheel with their own data domains, they can use the interoperability journey (and its high investments) to modernize their applications. MongoDB's document data model supports the JSON format, just like FHIR (Fast Healthcare Interoperability Resources) and other interoperability standards, making it an efficient and flexible data platform for developing healthcare applications beyond the limitations of external APIs.

FHIR for storing healthcare data?

The most implemented standard worldwide, HL7 FHIR, treats each piece of data as a self-contained resource with external links, similar to web pages. HL7 adopted a pragmatic approach: there was no need to define a complete set of resources for all clinical data; instead, they wanted to cover the 80% that most electronic health records (EHR) share. For the 20% of non-standardized data, they created FHIR Extensions to extend every resource to specific needs. However, FHIR is not yet fully developed, with only 15 of the 158 resources it defines having reached the highest level of maturity. The constant changes can be as simple as a name change or so complex that data has to be rearranged. FHIR is designed for the exchange of data but can also be used for persistence.

Figure 1: Using FHIR for persistence depending on the complexity of the use case

For specific applications with no complex data, such as integrating data from wearables, you can leverage FHIR. However, building a primary operational repository for broader applications like a patient summary or even a healthcare information system presents a significant challenge. This is because the data model required goes beyond the capabilities of FHIR, and solving that through FHIR Extensions is an inefficient approach.

OpenEHR as an alternative approach

In Catalonia, Spain—home to about 8 million people and roughly 60 public hospitals—there are 29 different hospital EHR systems. Each hospital maintains a team of developers exclusively focused on building interoperability interfaces. Due to an increased demand for data sharing, the cost of maintaining these interfaces will only grow. Rather than implementing interoperability interfaces, why not create new applications that are implicitly interoperable? This is what openEHR proposes: defining the clinical information model from the maximal perspective and developing applications that consume a subset of the clinical information system using an open architecture. However, while FHIR is very versatile—offering data models for administrative and operational data efficiently—openEHR focuses exclusively on clinical data. So while a combination of FHIR and openEHR can solve part of the problem, future healthcare applications need to integrate a wide variety of data, including medical images, genomics, proteomics, and data from complex medical devices—which could be complicated by the lack of a single standard.

Overcoming this challenge with the document model and MongoDB

Now, let's discover the power of the document model to advance interoperability while modernizing systems. MongoDB Atlas features a flexible document data model, which provides a flexible way of storing and organizing healthcare data using JSON-like documents.
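Before looking at the modeling details below, here is a minimal sketch of what storing a FHIR resource as a document can look like with the MongoDB Java driver. The database name, collection name, fields, and metadata block are hypothetical, chosen only to illustrate the approach discussed in this section; they are not a prescribed FHIR schema.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

import java.util.List;

public class FhirPatientExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb+srv://<connection-string>")) {
            // One collection per FHIR resource type (hypothetical names).
            MongoCollection<Document> patients =
                    client.getDatabase("healthcare").getCollection("patients");

            // A FHIR Patient resource stored as-is, plus an application-specific metadata
            // block (FHIR version, document version, tenant) that never leaves the repository.
            Document patient = new Document("resourceType", "Patient")
                    .append("id", "example-001")
                    .append("name", List.of(new Document("family", "García").append("given", List.of("Ana"))))
                    .append("birthDate", "1984-07-12")
                    .append("metadata", new Document("fhirVersion", "R4")
                            .append("docVersion", 3)
                            .append("tenantId", "hospital-29"));

            patients.insertOne(patient);

            // Index the fields the application actually searches on.
            patients.createIndex(Indexes.compoundIndex(
                    Indexes.ascending("name.family"), Indexes.ascending("birthDate")));

            // An MQL query that could back a FHIR-style search such as GET /Patient?family=García
            patients.find(Filters.eq("name.family", "García"))
                    .forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```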
With a flexible data schema, healthcare organizations can accommodate any data structure, format, or source on one platform, providing the seamless third-party integration capabilities necessary for interoperability. While different use cases will have different solutions, the flexibility of the document model means MongoDB is able to adapt to changes.

Figure 2 below shows a database modeled in MongoDB, where each collection stores a FHIR resource type (e.g., patients, encounters, conditions). These documents mirror the FHIR objects, preserving their complex hierarchy. Let's imagine our application requires specific fields not supported by FHIR, and there is no need for a FHIR extension because the data won't be shared externally. We can add a metadata field containing all this information that is as flexible as needed. It can be used to track the standard evolution of the resource, the version of the document itself, a tenant ID for multi-tenant applications, and more.

Figure 2: Data modeled in MongoDB

Another possibility is to add the searchable fields of the resource as key-value pairs so that you can retrieve data with a single index. We can maintain the indexes by automating this with the FHIR search parameters. In a single repository, we can combine FHIR data with custom application data. Additionally, data in other protocols can be integrated, providing unparalleled flexibility in accessing your data. This setup permits access through different endpoints within one repository. Using MQL (the MongoDB Query Language), you can build FHIR APIs, or use the MongoDB SQL interface to provide SQL querying capabilities that connect your preferred business intelligence tools.

Figure 3: Unified and flexible data store and flexible data retrieval

MongoDB's developer data platform

At the center of MongoDB's developer data platform is MongoDB Atlas, the most advanced cloud database service on the market. It provides integrated full-text search capabilities, allowing applications to perform complex queries without the need to maintain a separate search system. With generative AI, multidimensional vectors that represent data are becoming a necessity. MongoDB Atlas stores vectors alongside the operational data and provides vector search, which enables fast data retrieval. You can also store metadata with your vector embeddings, as shown in Figure 4.

Figure 4: Vector embeddings

These capabilities transform a single database into a unique, powerful, and easy-to-use interface capable of handling diverse use cases without the need for any single-purpose databases.

Solving the dual challenge with MongoDB

Achieving interoperability and modernization in healthcare IT is challenging but essential. MongoDB provides a powerful platform that meets organizations' modern data management needs. By embracing MongoDB, healthcare organizations can unlock the full potential of their data, leading to improved patient outcomes and operational efficiency.

Figure 5: Closing the gap between the interoperability and modernization journeys with MongoDB

Refactoring applications to incorporate interoperability resources as part of documents—and extending them with all the requirements for your modern needs—will ensure organizations' data layers remain robust and adaptable. By doing so, organizations can create a flexible architecture that can seamlessly integrate diverse data types and accommodate future advancements.
This approach not only enhances data accessibility and simplifies data management but also supports compliance with evolving standards and regulations. Furthermore, it enables real-time data analytics and insights, fostering innovation and driving better decision-making. Ultimately, this strategy positions healthcare organizations to effectively manage and leverage their data, leading to improved patient outcomes and operational efficiencies.

For more detailed information and resources on how MongoDB can transform organizations' healthcare IT systems, we encourage you to apply for an exclusive innovation workshop with MongoDB's industry experts to explore bespoke modern app development and tailored solutions for your organization. Additionally, check out these resources:

- MongoDB and FHIR: Navigating Healthcare Data with MongoDB
- How Leading Industries are Transforming with AI and MongoDB Atlas
- The MongoDB Solutions Library, curated with tailored solutions to help developers kick-start their projects
- For developers: From FHIR Synthea data to MongoDB

August 28, 2024
Applied

CTF Life Leverages MongoDB Atlas to Deliver Customer-Centric Service

Hong Kong-based Chow Tai Fook Life Insurance Company Limited (CTF Life) is proud of its rich, nearly 40-year history of providing a wide range of insurance and financial planning services. The company provides life, health, accident, savings, and investment insurance to its customers, helping them and their loved ones navigate life's journey with personalized planning solutions, lifelong protection, and diverse lifestyle experiences.

A wholly owned subsidiary of NWS Holdings Limited and a member of Chow Tai Fook Group, CTF Life consistently strengthens its collaboration with the diverse conglomerate of the Cheng family (Chow Tai Fook Group) and draws on the Group's robust financial strength, strategic investments across the globe, and advanced customer-focused digital technology, with the aspiration of becoming a leading insurance company in the Greater Bay Area.

To achieve this goal, CTF Life modernized its on-premises infrastructure to provide the speed and flexibility required to offer customers personalized experiences. To turn its vision into reality, CTF Life decided to adopt MongoDB Atlas. By modernizing its systems and processes with the world's most versatile developer data platform, CTF Life knew it would be able to meet customer expectations, offering improved customer service, faster response times, and more convenient access to its products and services.

Data-driven customer service

The insurance industry is undergoing a significant shift from traditional data management to near-real-time, data-driven insights, driven by strong consumer demand and the urgent need for companies to process large amounts of data efficiently. As insurance companies strive to provide personalized and real-time products, the move towards sophisticated, real-time, data-driven customer service is inevitable.

CTF Life is on a digital transformation journey to modernize its relational database management system (RDBMS) infrastructure and empower its agents, known as Life Planners, to provide enhanced customer experiences. The company faced obstacles from legacy systems and siloed data: Life Planners were spending a lot of time looking up customer information from various systems and organizing it into useful customer insights. Not having a holistic view of customer data also made it challenging to recommend personalized products and services within CTF Life, the Group, and beyond. Reliance on legacy RDBMS systems presented a major challenge in CTF Life's pursuit of leveraging real-time customer information to enhance customer experiences and operational efficiency.

For its modernization efforts, CTF Life was looking for the following capabilities:

- A modernized application with agile development
- No downtime for changing schemas, new modules, or feature updates
- A single way of centralizing and organizing data from a number of sources (backend, CRM, etc.) into a standardized format ready for a front-end mobile application
- A future-proof data platform with extensible analytics capabilities across CTF Life, its diverse conglomerate collaboration, and its strategic partners to support the company's digital solutions

Embracing the operational data layer for enhanced experiences

CTF Life knew it had to build a solution for the Life Planners to harness the wealth of useful information available to them, making it easier to engage and connect with customers.
The first project identified was its clienteling system, which is designed to establish long-term relationships with customers based on data about their preferences, behaviors, and needs. To overcome its legacy systems and siloed data, CTF Life built the clienteling system on MongoDB Atlas. Atlas serves as the digital data store for Life Planners, creating a single view of the customer (SVOC) with a flexible document model that enables CTF Life to handle large volumes of customer data efficiently and in real-time.

By integrating its operational data into one platform with MongoDB Atlas on Microsoft Azure, CTF Life's revamped clienteling system provides its Life Planners with a comprehensive view of customer profiles, which allows them to share targeted content with customers. Additionally, CTF Life is using Atlas Search to build relevance-based search capabilities directly into the application, making it faster and easier to search for customer data across the company's system landscape. These capabilities helped improve customer service: with faster access to data and an SVOC, Life Planners can provide more accurate and timely information to their customers.

Atlas Search is now the foundation of the clienteling system, which powers data analytics and machine learning capabilities to support various use cases. For example, the clienteling app's smart reminder feature recognizes key moments in a customer's life, like the impending arrival of a newborn child. Based on these types of insights, the app can help Life Planners make personalized recommendations to the customer about relevant services and products that may be of interest to them as new parents.

Because of its work with MongoDB, CTF Life can now analyze customer profiles and use smart reminders to engage customers at the right time in the right context. This has made following up with customers and leads faster and easier. Contacting prospects, scheduling appointments, setting reminders, sharing relevant content, running campaigns and promotions, recommending products and services, and tracking lead progress can all be performed in one system.

Moreover, access to real-time data enables Life Planners to streamline their work and reduce manual processes, and data-driven insights empower them to make informed decisions quickly. They can analyze customer information, identify trends, and tailor their recommendations to meet individual needs more effectively. With MongoDB Atlas Search, Life Planners can use advanced search capabilities to identify opportunities to serve customers better.

Continuing to create value beyond insurance

CTF Life strives to provide its customers with value beyond insurance. Through a range of collaborations with Chow Tai Fook Group, and strategic partnerships with technology partners like MongoDB, CTF Life has created a customer-centric approach and continues to advance its digital transformation strategy to deliver a well-rounded experience for customers that goes beyond insurance, with a sincere and deep understanding of their diverse needs in every chapter of their life journey.

In the future, CTF Life will continue to build upon its strategic partnership with MongoDB and expand the use of its digital data store on MongoDB Atlas by creating new client servicing modules on the mobile app its Life Planners use.
CTF Life will also be expanding its search capabilities with Atlas Vector Search to accelerate its journey to building advanced search and generative AI applications for more automated servicing.

"Partnering with MongoDB helped us prioritize technology that accelerates our digital transformation. The integration between generative AI and MongoDB as a medium for information search can be leveraged to further support front-line Life Planners as well as mid/back-office operations."
Derek Ip, Chief Digital and Technology Officer of CTF Life

Learn how to tap into real-time data with MongoDB Atlas.

August 28, 2024
Applied

Elevate Your Java Applications with MongoDB and Spring AI

MongoDB is excited to announce an integration with Spring AI, enhancing MongoDB Atlas Vector Search for Java developers. This collaboration brings Vector Search to Java applications, making it easier to build intelligent, high-performance AI applications.

Why Spring AI?

Spring AI is an AI library designed specifically for Java, applying the familiar principles of the Spring ecosystem to AI development. It enables developers to build, train, and deploy AI models efficiently within their Java applications. Spring AI addresses the gap left by other AI frameworks and integrations that focus on other programming languages, such as Python, providing a streamlined solution for Java developers.

Spring has been a cornerstone for Java developers for decades, offering a consistent and reliable framework for building robust applications. The introduction of Spring AI continues this legacy, providing a straightforward path for Java developers to incorporate AI into their projects. With the MongoDB-Spring integration, developers can leverage their existing Spring knowledge to build next-generation AI applications without the friction associated with learning a new framework.

Key features of Spring AI include:

- Familiarity: Leverage the design principles of the Spring ecosystem. Spring AI allows Java developers to use the same familiar tools and patterns they already know from other Spring projects, reducing the learning curve and allowing them to focus on building innovative AI applications. This means you can integrate AI capabilities—including Atlas Vector Search—without having to learn a new language or framework, making the transition smoother and more intuitive.
- Portability: Applications built with Spring AI can run anywhere the Spring framework runs. This ensures that AI applications are highly portable and can be deployed across various environments without modification, guaranteeing flexibility and consistency in deployment strategies.
- Modular design: Use Plain Old Java Objects (POJOs) as building blocks. Spring AI's modular design promotes clean code architecture and maintainability. By using POJOs, developers can create modular, reusable components that simplify the development and maintenance of AI applications. This modularity also facilitates easier testing and debugging, leading to more robust applications that efficiently integrate with Atlas Vector Search.
- Efficiency: Streamline development with tools and features designed for AI applications in Java. Spring AI provides a range of tools that enhance development efficiency, including pre-built templates, configuration management, and integrated testing tools. These features reduce the time and effort required to develop AI applications, allowing developers to bring their ideas to market faster.

These features streamline AI development by enhancing the integration and performance of Atlas Vector Search within Java applications, making it easier to build and scale AI-driven features.

Enhancing AI development with Spring AI and Atlas Vector Search

MongoDB Atlas Vector Search enhances AI application development by providing advanced search capabilities. The new Spring AI integration enables developers to manage and search vector data within AI models, powering features like recommendation systems, natural language processing, and predictive analytics. Atlas Vector Search allows you to store, index, and search high-dimensional vectors, which are crucial for AI and machine learning models.
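As a rough taste of the developer experience, the sketch below assumes a Spring Boot application with the Spring AI MongoDB Atlas vector store dependency on the classpath, an embedding model configured, and an auto-configured VectorStore bean backed by Atlas Vector Search. Exact artifact names, configuration properties, and method signatures vary between Spring AI releases, so treat this as illustrative rather than canonical and check the documentation linked below for the current API.

```java
import java.util.List;
import java.util.Map;

import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MovieSearchConfig {

    // VectorStore is assumed to be auto-configured against MongoDB Atlas Vector Search;
    // text is embedded by whichever embedding model the application has configured.
    @Bean
    CommandLineRunner vectorSearchDemo(VectorStore vectorStore) {
        return args -> {
            // Store a few documents; their embeddings are written to an Atlas collection.
            vectorStore.add(List.of(
                    new Document("A space crew fights to survive after a failed moon landing.",
                            Map.of("genre", "drama")),
                    new Document("Two rival magicians push their tricks to dangerous extremes.",
                            Map.of("genre", "thriller"))));

            // Semantic query answered by Atlas Vector Search under the hood.
            List<Document> results = vectorStore.similaritySearch("movies about astronauts");
            results.forEach(System.out::println);
        };
    }
}
```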
This capability supports a range of AI features:

- Recommendation systems: Provide personalized recommendations based on user behavior and preferences.
- Natural language processing: Enhance text analysis and understanding for chatbots, sentiment analysis, and more.
- Predictive analytics: Improve forecasting and decision-making with advanced data models.

What the integration means for Java developers

Prior to the MongoDB-Spring integration, Java developers did not have an easy way to integrate Spring into their AI applications using MongoDB Atlas Vector Search, which led to longer development times and suboptimal application performance. With this integration, the Java development landscape is transformed, allowing developers to build and deploy AI applications with greater efficiency. The integration simplifies the entire process, enabling developers to concentrate on creating innovative solutions rather than dealing with integration hurdles. This approach not only reduces development time but also accelerates time-to-market.

Additionally, MongoDB offers robust support through comprehensive tutorials and a wealth of community-driven content. Whether you're just beginning or looking to optimize existing applications, you'll find the resources and guidance you need at every stage of your development journey.

Get started!

The MongoDB and Spring AI integration is designed to simplify the development of intelligent Java applications. By combining MongoDB's robust data platform with Spring AI's capabilities, you can create high-performance applications more efficiently. To start using MongoDB with Spring AI, explore our documentation and tutorial, and check out our GitHub repository to build the next generation of AI-driven applications today.

August 26, 2024
Artificial Intelligence

Better Business Loans with MongoDB and Generative AI

Business loans are a cornerstone of banking operations, providing significant benefits to both financial institutions and broader economies. In 2023, for example, the value of commercial and industrial loans in the United States reached nearly $2.8 trillion. However, these loans can present unique challenges and risks that banks must navigate. Besides credit risk, where the borrower may default, banks also face business risk, in which economic downturns or sector-specific declines can impact borrowers' ability to repay loans. In this post, we dive into the potential of generative AI to generate detailed risk assessments for business loans, and how MongoDB's multimodal features can be leveraged for comprehensive, multidimensional risk analyses.

The critical business plan

A business plan is essential for a business loan: it serves as a comprehensive roadmap detailing the borrower's plans, strategies, and financial projections. It helps lenders understand the business's goals, viability, and profitability, demonstrating how the loan will be used for growth and repayment. A detailed business plan includes market analysis, competitive positioning, operational plans, and financial forecasts, which together build a compelling case for the lender's investment and the business's ability to manage risks effectively, increasing the likelihood of securing the loan.

Reading through borrower credit information and detailed business plans (roughly 15-20 pages long) poses significant challenges for loan officers due to time constraints, the material's complexity, and the difficulty of extracting key metrics from detailed financial projections, market analyses, and risk factors. Navigating technical details and industry-specific jargon can also be challenging and require specialized knowledge. Identifying critical risk factors and mitigation strategies adds further complexity, as does ensuring accuracy and consistency among loan officers and approval committees. To overcome these challenges, gen AI can assist loan officers by efficiently analyzing business plans, extracting essential information, identifying key risks, and providing consistent interpretations, thereby facilitating informed decision-making.

Assessing loans with gen AI

Interactive risk analysis with gen AI-powered chatbots

Gen AI can help analyze business plans when built on a flexible developer data platform like MongoDB Atlas. One approach is implementing a gen AI-powered chatbot that allows loan officers to "discuss" the business plan. The chatbot can analyze the input and provide insights on the various risks associated with lending to the borrower for the proposed business. MongoDB sits at the heart of many customer support applications thanks to a flexible data model that makes it easy to build a single, 360-degree view of data from myriad siloed backend source systems.

Figure 1 below shows an example of how ChatGPT-4o responds when asked to assess the risk of a business loan. Although the input of the loan purpose and business description is simplistic, gen AI can offer a detailed analysis.

Figure 1: Example of how ChatGPT-4o could respond when asked to assess the risk of a business loan

Hallucinations or ignorance?

By applying gen AI to risk assessments, lenders can explore additional risk factors that gen AI can evaluate. One factor could be the risk of natural disasters or broader climate risks.
In Figure 2 below, we added flood risk as a specific factor to the previous question to see what ChatGPT-4o comes back with.

Figure 2: Example of how ChatGPT-4o responded to flood risk as a factor

Based on the above, there is a low risk of flooding. To validate this, we asked ChatGPT-4o the question differently, focusing on its knowledge of flood data. It suggested reviewing FEMA flood maps and local flood history, indicating it might not have the latest information.

Figure 3: Asking location-specific flood questions

In the query shown in Figure 3 above, ChatGPT gave the opposite answer, indicating there is "significant flooding" and providing references to flood evidence after performing an internet search across four sites, a search it had not performed previously. From this example, we can see that when ChatGPT does not have the relevant data, it starts to make false claims, which can be considered hallucinations. Initially, it indicated a low flood risk due to a lack of information. However, when specifically asked about flood risk in the second query, it suggested reviewing external sources like FEMA flood maps, recognizing its limitations and the need for external validation. Gen AI-powered chatbots can recognize and intelligently seek additional data sources to fill their knowledge gaps. However, a casual web search won't provide the level of detail required.

Retrieval-augmented generation-assisted risk analysis

The promising example above demonstrates how gen AI can augment loan officers' analysis of business loans. However, interacting with a gen AI chatbot relies on loan officers repeatedly prompting and augmenting the context with relevant information. This can be time-consuming and impractical, whether because of a lack of prompt engineering skills or a lack of the data needed. Below is a simplified solution showing how gen AI can be used to augment the risk analysis process and fill the knowledge gap of the LLM. The demo uses MongoDB as an operational data store, leveraging geospatial queries to find recorded floods within 5 km of the proposed business location (a sketch of such a query is shown below). The prompting for this risk analysis emphasizes the flood risk assessment rather than the financial projections.

A similar test was performed on Llama 3, hosted by our MAAP partner Fireworks.AI. It tested the model's knowledge of flood data, revealing a similar knowledge gap to ChatGPT-4o. Interestingly, rather than providing misleading answers, Llama 3 produced a "hallucinated list of flood data" but highlighted that "this data is fictional and for demonstration purposes only. In reality, you would need to access reliable sources such as FEMA's flood data or other government agencies' reports to obtain accurate information."

Figure 4: LLM's response with fictional flood locations

This consistent demonstration of the LLMs' knowledge gap in specialized areas reinforces the need to explore how RAG (retrieval-augmented generation) with a multimodal data platform can help. In this simplified demo, you select a business location, a business purpose, and a description of a business plan. To make inputs easier, an "Example" button leverages gen AI to generate a sample brief business description, avoiding the need to key in the description template from scratch.
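The geospatial lookup referenced above can be expressed as a simple aggregation with the MongoDB Java driver. The sketch below is illustrative only: the database, collection, and field names (flood_db, flood_events, location) and the sample coordinates are assumptions from this demo scenario, and the location field is assumed to carry a 2dsphere index.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.Arrays;
import java.util.List;

public class FloodLookup {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create(System.getenv("MONGODB_URI"))) {
            MongoCollection<Document> floods =
                client.getDatabase("flood_db").getCollection("flood_events");

            double lng = -122.4194, lat = 37.7749; // proposed business location

            // $geoNear must be the first pipeline stage and requires a
            // 2dsphere index on the "location" field; distances are in meters.
            List<Document> pipeline = Arrays.asList(
                new Document("$geoNear", new Document()
                    .append("near", new Document("type", "Point")
                        .append("coordinates", Arrays.asList(lng, lat)))
                    .append("distanceField", "distanceMeters")
                    .append("maxDistance", 5000)   // 5 km radius
                    .append("spherical", true)),
                new Document("$limit", 20)
            );

            // Each result is a historical flood record plus its distance,
            // ready to be folded into the RAG prompt as retrieved context.
            floods.aggregate(pipeline).forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```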
Figure 5: Choosing a location on the map and writing a brief plan description

Upon submission, the demo provides an analysis using RAG, with appropriate prompt engineering, that assesses the business in light of its location and the flood history previously downloaded from external flood data sources.

Figure 6: Loan risk response using RAG

In the Flood Risk Assessment section, gen AI-powered geospatial analytics enable loan officers to quickly understand historical flood occurrences and identify the data sources. You can also reveal all the sample flood locations within the vicinity of the selected business location by clicking on the "Pin" icon. The geolocation pins mark the flood locations, and the blue circle indicates the 5 km radius within which flood data is queried using the simple geospatial command $geoNear.

Figure 7: Flood locations displayed with pins

The following diagram provides a logical architecture overview of the RAG data process implemented in this solution, highlighting the technologies used, including MongoDB, Meta Llama 3, and Fireworks.AI.

Figure 8: RAG data flow architecture diagram

With MongoDB's multimodal capabilities, developers can enhance the RAG process with features such as network graphs, time series, and vector search. This enriches the context for the gen AI agent, enabling more comprehensive and multidimensional risk analysis through multimodal analytics.

Building risk assessments with MongoDB

When combined with RAG and a multimodal developer data platform like MongoDB Atlas, gen AI applications can provide more accurate and context-aware insights, reducing hallucination and offering profound insights that augment a complex business loan risk assessment process. Because the RAG process is iterative, the gen AI model continually learns and improves from new data and feedback, leading to increasingly accurate risk assessments and minimizing hallucinations. A multimodal data platform lets you fully exploit the capabilities of multimodal AI models.

If you would like to discover how MongoDB can help you on this multimodal gen AI application journey, we encourage you to apply for an exclusive innovation workshop with MongoDB's industry experts to explore bespoke modern app development and tailored solutions for your organization. Additionally, you can enjoy these resources:

Solution GitHub: Loan Risk Assessor

How Leading Industries are Transforming with AI and MongoDB Atlas

Accelerate Your AI Journey with MongoDB's AI Applications Program

The MongoDB Solutions Library is curated with tailored solutions to help developers kick-start their projects
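To illustrate the augmentation step in the RAG flow sketched in Figure 8, the snippet below folds the flood records retrieved by the geospatial query into the prompt sent to the LLM. The prompt wording, the record field names, and the callLlm helper are hypothetical placeholders for this walkthrough, not the demo's actual implementation.

```java
import org.bson.Document;
import java.util.List;
import java.util.stream.Collectors;

public class RiskPromptBuilder {

    // Builds the augmented prompt: the business plan supplied by the loan
    // officer plus the flood records retrieved by the $geoNear query above.
    public static String buildPrompt(String businessPlan, List<Document> floodRecords) {
        String floodContext = floodRecords.stream()
            .map(d -> String.format("- %s (%.0f m away, %s)",
                    d.getString("description"),      // hypothetical field
                    d.getDouble("distanceMeters"),   // added by $geoNear
                    d.getString("date")))            // hypothetical field
            .collect(Collectors.joining("\n"));

        return """
            You are a loan risk analyst. Assess the credit, business, and flood risk
            of the proposed loan. Base the flood risk section only on the historical
            flood records provided below.

            Business plan:
            %s

            Historical floods within 5 km:
            %s
            """.formatted(businessPlan, floodContext);
    }

    // The completed prompt would then be sent to Llama 3 via Fireworks.AI
    // (or another model endpoint); callLlm is a placeholder for that call:
    // String analysis = callLlm(buildPrompt(plan, floods));
}
```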

August 22, 2024
Artificial Intelligence

MongoDB Atlas for Government Supports GCP Assured Workloads

We’re excited to announce that MongoDB Atlas for Government now supports the US regions of Google Cloud Assured Workloads, alongside existing support for AWS GovCloud and AWS US regions. This expansion offers greater flexibility and broader support for public sector organizations, and for the independent software vendors (ISVs) that serve them, as they modernize applications and migrate workloads to the cloud. Furthermore, MongoDB Atlas for Government is now available for purchase through the Google Cloud Marketplace.

MongoDB Atlas for Government: Driving digital transformation in the public sector

MongoDB Atlas for Government is an independent, dedicated version of MongoDB Atlas, designed specifically to meet the unique needs of the U.S. public sector and of ISVs developing public sector solutions. This developer data platform provides the versatility and scalability required to modernize legacy applications and migrate workloads to the cloud, all within a secure, fully managed, FedRAMP-authorized environment. Refer to the FedRAMP Marketplace listing for additional information about Atlas for Government.

By leveraging the full functionality of MongoDB's document database and application services, Atlas for Government supports a wide range of use cases within a unified developer data platform, including Internet of Things, AI/ML, analytics, mobile development, single view, transactional workloads, and more. Ensuring robust resilience and comprehensive disaster recovery, Atlas for Government maintains business continuity and minimizes downtime. With a ~99.995% uptime SLA, auto-scaling to handle fluctuations in data consumption, and automated backup and recovery, organizations can have peace of mind that their data is always protected.

Getting started with MongoDB Atlas for Government

MongoDB Atlas for Government can be used to create database clusters deployed to a single region or spanning multiple US regions. Google Cloud Assured Workloads US regions are now supported in Atlas for Government projects tagged as "Gov regions only," allowing for the use of both traditional Google Cloud regions and Assured Workloads US regions.

To get started, create a project in Atlas for Government and make sure to select 'Designate as a Gov Cloud regions-only project' during the project creation process. After creating the project, you can set up a MongoDB cluster in the GCP regions. To do this, start the cluster creation process and select GCP as the cloud provider, as shown in the figure below. You'll then be prompted to choose one or more GCP regions for your cluster. You can find more details on supported cloud providers and regions in the Atlas for Government documentation.

Creating multi-cloud clusters

The introduction of support for Google Cloud Assured Workloads (US regions) makes MongoDB Atlas for Government the first fully managed multi-cloud data platform authorized at FedRAMP Moderate. Public sector organizations and ISVs can now deploy clusters across Google Cloud Assured Workloads US regions and AWS GovCloud regions, in addition to deploying database clusters across multiple US regions. Whether prioritizing performance, cost, or specific feature sets, Atlas for Government empowers teams to deploy application architectures that take advantage of best-in-class services from multiple cloud providers while meeting FedRAMP requirements.
Multi-cloud support also provides additional resiliency and enhanced disaster recovery, safeguarding data and applications against potential service outages and failures with automatic failover.

Ensuring robust data protection and seamless continuity

MongoDB Atlas for Government now supports Google Cloud Assured Workloads US regions, expanding its multi-cloud capabilities alongside existing support for AWS GovCloud and AWS US regions. This enhancement provides public sector organizations and ISVs with the flexibility to modernize applications and migrate workloads in a secure, FedRAMP-authorized environment. With robust resilience, comprehensive disaster recovery, and a ~99.995% uptime SLA, Atlas for Government ensures data protection and business continuity. By offering a unified developer data platform for a wide range of use cases, Atlas for Government empowers teams to leverage best-in-class cloud services while meeting stringent compliance requirements.

How do I get started?

Visit our product page to learn more about MongoDB Atlas for Government. Or, read the Atlas for Government documentation to learn how to get started today.

August 20, 2024
Updates

Find Hidden Insights in Vector Databases: Semantic Clustering

Vector databases, a powerful class of databases designed to optimize the storage, processing, and retrieval of large volumes of multi-dimensional data, have increasingly been instrumental to generative AI (gen AI) applications, with Forrester predicting a 200% increase in the adoption of vector databases in 2024. But their power extends far beyond these applications. Semantic vector clustering, a technique applied within vector databases, can unlock hidden knowledge within your organization's data, democratizing insights across teams.

Mining diverse data for hidden knowledge

Imagine your organization's data as a library of diverse knowledge—a treasure trove of information waiting to be unearthed. Traditionally, uncovering valuable insights from data often relied on asking the right questions, which can be a challenge for developers, data scientists, and business leaders alike. They might spend vast amounts of time sifting through limited, siloed datasets, potentially missing hidden gems buried within the organization's vast data troves. Simply put, without knowing the right questions to ask, these valuable insights often remain undiscovered, leading to missed opportunities or losses.

Enter vector databases and semantic vector clustering. A vector database is designed to store and manage unstructured data efficiently. Within a vector database, semantic vector clustering is a technique for organizing information by grouping vectors with similar meaning together. Text analysis, sentiment analysis, knowledge classification, and uncovering semantic connections between data sets are just a few examples of how semantic vector clustering empowers organizations to vastly improve data mining.

Semantic vector clustering offers a multifaceted approach to organizational improvement. By analyzing text data, it can illuminate customer and employee sentiments, behaviors, and preferences, informing strategic decisions, enhancing customer service, and optimizing employee satisfaction. Furthermore, it revolutionizes knowledge management by categorizing information into easily accessible clusters, boosting collaboration and efficiency. Finally, by bridging data silos and uncovering hidden relationships, semantic vector clustering facilitates informed decision-making and breaks down organizational barriers.

For example, a business can gain significant insights from the customer interaction data it routinely keeps, classifies, or summarizes. Those data points (texts, numbers, images, videos, etc.) can be vectorized and semantic vector clustering applied to identify the most prominent customer patterns (the densest vector clusters) from those interactions, classifications, or summaries. From the identified patterns, the business can take further actions or make more informed decisions that it could not have made otherwise.

The power of semantic vector clustering

So, how does semantic vector clustering achieve all this?

Discover semantic structures: Clustering groups similar LLM-embedded vector sets together, allowing fast retrieval of themes. Beyond clustering regular vectors (individual data points or concepts), clustering RAG vectors (summarizations of themes and concepts) can provide superior LLM contexts compared to basic semantic search.

Reduce data complexity via clustering: Data points are grouped based on overall similarity, effectively reducing the complexity of the data. This reveals patterns and summarizes key features, making it easier to grasp the bigger picture.
Imagine organizing the library by theme or genre, making it easier to navigate vast amounts of information.

Semantic auto-aggregation: Here is the coolest part. Groups of vectors can be classified into hierarchies by semantically "auto-aggregating" them; the data itself "figures out" these groups and "self-organizes." Imagine a library with an efficient automated catalog system that lets researchers find what they need quickly, with sections organized automatically by thematic connections rather than a set of pre-built questions. This allows you to identify patterns within the vast, semantically diverse data inside your organization.

Unlock hidden insights in your vector database

The semantic clustering of vector embeddings is a powerful tool for going beyond the surface of data and identifying meanings that would otherwise remain undiscovered. By unlocking hidden relationships and patterns, you can extract valuable insights that drive better decision-making, enhance customer experiences, and improve overall business efficiency—all enabled through MongoDB's secure, unified, and fully managed vector database capabilities. Head over to our quick-start guide to get started with Atlas Vector Search today. To add vector search to your arsenal for more accurate and cost-efficient RAG applications, enroll in the MongoDB and DeepLearning.AI course "Prompt Compression and Query Optimization" for free today.
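To make the clustering step described above concrete, here is a minimal sketch that assumes embeddings have already been generated (for example, by an LLM embedding model) and stored alongside documents in an Atlas collection. It pulls the vectors into memory and groups them with k-means from Apache Commons Math; the database, collection, and field names (insights_db, feedback, embedding) and the choice of k are illustrative assumptions, not a prescribed pipeline.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.apache.commons.math3.ml.clustering.CentroidCluster;
import org.apache.commons.math3.ml.clustering.DoublePoint;
import org.apache.commons.math3.ml.clustering.KMeansPlusPlusClusterer;
import org.bson.Document;

import java.util.ArrayList;
import java.util.List;

public class SemanticClustering {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create(System.getenv("MONGODB_URI"))) {
            MongoCollection<Document> feedback =
                client.getDatabase("insights_db").getCollection("feedback");

            // Load the stored embedding vectors (one per document).
            List<DoublePoint> points = new ArrayList<>();
            for (Document doc : feedback.find()) {
                List<Double> embedding = doc.getList("embedding", Double.class);
                points.add(new DoublePoint(
                    embedding.stream().mapToDouble(Double::doubleValue).toArray()));
            }

            // Group semantically similar vectors; the densest clusters surface
            // the most prominent themes in the data.
            KMeansPlusPlusClusterer<DoublePoint> clusterer =
                new KMeansPlusPlusClusterer<>(8, 100); // k = 8 clusters, 100 iterations
            List<CentroidCluster<DoublePoint>> clusters = clusterer.cluster(points);

            clusters.forEach(c ->
                System.out.printf("cluster of %d documents%n", c.getPoints().size()));
        }
    }
}
```

From here, representative documents in each cluster could be summarized by an LLM to label the themes, which is the "auto-aggregation" idea described above.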

August 19, 2024
Artificial Intelligence

Built With MongoDB: Atlas Helps Team-GPT Launch in Two Weeks

Team-GPT enables teams large and small to collaborate on AI projects. When OpenAI released GPT-4, it turned out to be a game-changer for the startup. Founded in 2023, the company has been helping people train machine learning (ML) models, in particular natural language processing (NLP) models. But when OpenAI launched GPT-4 in March 2023, the team was blown away by how much progress had been made on large language models (LLMs). So Team-GPT dropped everything they were doing and started experimenting with it. Many of those early ideas are still memorialized on a whiteboard in one of the office's meeting rooms:

The birth of an idea. Like many startups, Team-GPT began with a brainstorm on a whiteboard.

Evolving the application

Of all the ideas they batted around, there was one issue in particular the team wanted to solve—the need for a shared workspace where they could experiment with LLMs together. What they found was that having to work with LLMs in the terminal was a major point of friction. Plus, there weren't any sharing abilities. So they set out to create a UI with chat sharing, in-chat team collaboration, folders and subfolders, and a prompt library.

The whole thing came together in an incredibly short period of time, due in large part to their initial choice of MongoDB Atlas, which allowed them to build with speed and scalability. "MongoDB made it possible for us to launch in just two weeks," said Team-GPT Founder and CTO, Ilko Kacharov. "With the MongoDB Atlas cloud platform, we were able to move rapidly, focusing our efforts on developing innovative product features rather than dealing with the complexities of infrastructure management."

Before long, the team realized there was a lot more that could be built around LLMs than simply chat, and set out to add more advanced capabilities. Today, users can integrate any LLM of their choice and add custom instructions. The platform also supports multimodality like ChatGPT Vision and DALL-E. Users can use any GPT model to turn chat responses into standalone documents that can then be edited. All these improvements are meant to unify teams' AI workflows in a single, AI-powered tool.

A platform built for developers

Diving deeper into the more technical aspects of the solution, Team-GPT CEO Iliya Valchanov points to the virtues of the document data model, which underpins the Atlas developer data platform. "We wanted the ability to quickly update and create new collections, add more data, and expand the existing database setup without major hurdles or time consumption," he said. "That's something that relational databases often struggle with."

A developer data platform consists of integrated data infrastructure components and services for quick deployment. With transactional, analytical, search, and stream processing capabilities, it supports various use cases, reduces complexity, and accelerates development. Valchanov's team leverages a few key elements of the platform to address a range of application needs. "We benefited from Atlas Triggers, which allow automatic execution of specified database operations," he said. "This greatly simplified many of our routine tasks."

It's not easy to build truly differentiated applications without a friction-free developer experience. Valchanov cites Atlas' user-friendly UI as a key advantage for a startup where time is of the essence. And he said that Atlas Charts has been instrumental for the team, who use it every day, even their less technical people.
Of course, one of the biggest reasons why developers and tech leaders choose MongoDB, and why so many are moving away from relational databases, is its ability to scale—which Valchanov said is one of the most critical requirements for supporting the company's growth. "With MongoDB handling the scaling aspect, we were able to focus our attention entirely on building the best possible features for our customers."

Team-GPT deployment options

Accelerating AI transformation

Team-GPT is a collaborative platform that allows teams of up to 20,000 people to use AI in their work. It's designed to help teams learn, collaborate, and master AI in a shared workspace. The platform is used by over 2,000 high-performing businesses worldwide, including EY, Charles Schwab, Johns Hopkins University, Yale University, and Columbia University, all of which are also MongoDB customers. The company's goal is to empower every person who works on a computer to use AI in a productive and safe manner.

Valchanov fully appreciates the rapid change that accompanies a product's explosive growth. "We never imagined that we would eventually grow to provide our service to over 40,000 users," he said. "As a startup, our primary focus when selecting a data platform was flexibility and the speed of iteration. As we transitioned from a small-scale tool to a product used by tens of thousands, MongoDB's attributes like flexibility, agility, and scalability became necessary for us."

Another key enabler of Team-GPT's explosive growth has been the MongoDB for Startups program, which offers valuable resources such as free Atlas credits, technical guidance, co-marketing opportunities, and access to a network of partners. Valchanov makes no secret of how instrumental the program has been for his company's success. "The startup program made it free! It offered us enough credits to build out the MVP and cater to all our needs," he said. "Beyond financial aid, the program opened doors for us to learn and network. For instance, my co-founder, Yavor Belakov, and I participated in a MongoDB hackathon in MongoDB's office in San Francisco."

Team-GPT co-founders Yavor Belakov (left) and Iliya Valchanov (right) participated in a MongoDB hackathon at the San Francisco office

Professional services engagements are an essential part of the program, especially for early-stage startups. "The program offered technical sessions and consultations with MongoDB staff, which enriched our knowledge and understanding, especially for Atlas Vector Search, aiding our growth as a startup," said Valchanov.

The roadmap ahead for the company includes the release of Team-GPT 2.0, which will introduce a brand-new user interface and new, robust functionality. The company encourages anyone looking to learn more or join their efforts to ease the adoption of AI innovations to reach out on LinkedIn.

Are you part of a startup and interested in joining the MongoDB for Startups program? Apply to the program now. For more startup content, check out our Built With MongoDB blog collection.

August 15, 2024
Applied
