Veronica Cooley-Perry


MongoDB Atlas Introduces Enhanced Cost Optimization Tools

MongoDB Atlas was designed with elasticity at its core and has always allowed customers to scale capacity vertically and horizontally, automatically and as required. Today, these inherent capabilities are even better and more cost-effective. At the recent MongoDB.local London, MongoDB announced several new MongoDB Atlas features that improve elasticity and help optimize costs while maintaining the performance and availability that business-critical applications demand. These include scaling each shard independently, extending storage to 4 TB and beyond, and 5X more responsive auto-scaling.

Organizations and their customers are inherently dynamic, with operations, web traffic, and application usage growing unpredictably and non-linearly. For example, website traffic can spike when a single video goes viral on social media, and holiday surges in application usage frequently slow applications down. Traditionally, organizations have tackled this volatility by over-provisioning infrastructure, often at significant cost. Cloud adoption has improved the speed at which infrastructure can be provisioned in response to growing and volatile demand. At the same time, companies are focused on striking the right balance between performance and cost efficiency. This balance is especially acute in the current economic climate, where cost optimization is a top priority for Infrastructure & IT Operations (I&O) leaders.

"The goal is not balance between supply and demand. The goal is to meet the most profitable and mission-critical demand with the resources available." (Nathan Hill, Distinguished VP Analyst, Gartner, December 2023)

However, scaling infrastructure to meet demand without overprovisioning can be complex and costly. Organizations have often relied on manual processes (like scheduled scripts) or dedicated teams (like IT ops) to manage this challenge. MongoDB Atlas enables a more effective approach.
With MongoDB Atlas, customers get flexible provisioning, zero-downtime scaling, and easy auto-scaling of their clusters. As of October 2024, all Atlas customers with dedicated tier clusters can use the recently announced enhancements below for improved cost optimization.

Granular resource provisioning

MongoDB's tens of thousands of customers have complex and diverse workloads with constantly changing requirements. Over time, workloads can grow unpredictably, requiring storage, compute, and IOPS to be scaled independently and at differing granularities. Imagine a global retailer preparing for Cyber Monday, when traffic could be 512% higher than average: additional resources to serve customers are vital. Independent shard scaling enables customers running MongoDB Atlas to do this in a cost-optimal manner. For customers running workloads on sharded clusters, each shard can now be scaled independently of all others (for example, only the shards serving US traffic during Thanksgiving) when one or more shards experience disproportionately higher traffic. Customers can also scale operational and analytical nodes independently within a single shard. This improves scalability and cost optimization by providing fine-grained control to add resources to hot shards while leaving the resources provisioned to other shards unchanged. All Atlas customers running dedicated clusters can use this feature through Terraform and the Admin API. Support for independent shard auto-scaling and configuration management via the Admin API and Terraform will be available in late 2024.

Extended storage and IOPS on Azure

MongoDB is introducing the ability to provision additional storage and IOPS on Atlas clusters running on Azure. This supports optimal performance without over-provisioning.
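As a rough sketch of what independent shard scaling looks like in practice, sizing each shard on its own tier comes down to giving each shard its own replication spec in the cluster definition sent to the Atlas Admin API. The helper below builds such a payload; the field names follow the Admin API's advanced-cluster schema, but the project, region, and instance sizes are illustrative, not a definitive implementation.

```python
import json

# Illustrative Atlas Admin API cluster payload in which each shard
# (replication spec) gets its own instance size: a "hot" shard on M50
# while the others stay on M30. Treat field names and values as a
# sketch to check against the current API reference, not as gospel.
def per_shard_payload(hot_shard_index, shard_count, hot_size="M50", base_size="M30"):
    specs = []
    for i in range(shard_count):
        size = hot_size if i == hot_shard_index else base_size
        specs.append({
            "regionConfigs": [{
                "providerName": "AWS",
                "regionName": "US_EAST_1",
                "priority": 7,
                "electableSpecs": {"instanceSize": size, "nodeCount": 3},
            }]
        })
    return {"replicationSpecs": specs}

payload = per_shard_payload(hot_shard_index=0, shard_count=3)
print(json.dumps(payload, indent=2))
# A body like this would be sent as a PATCH to the Admin API's
# cluster endpoint for the given project and cluster name.
```

In practice you would send this with an authenticated HTTP client, or express the same shape declaratively in the Terraform provider's advanced-cluster resource.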
Customers can create new clusters on Azure to provision extended storage of 4 TB or more and additional IOPS on larger clusters (M40+). This feature is being rolled out and will be available to all Atlas clusters by late 2024. Head over to our docs page to learn more. With these updates, customers have greater flexibility and granularity in provisioning and scaling resources across their Atlas clusters on all three major cloud providers, and can therefore optimize for performance and costs more effectively.

More responsive auto-scaling

Granular provisioning is excellent for optimizing costs while ensuring availability for an expected increase in traffic. But what happens if a website gets 13X higher traffic, or an app sees a surge in interactions due to an unexpected social media post? Several enhancements to the algorithms and infrastructure powering MongoDB's auto-scaling capabilities were announced in October 2024 at .local London. Cumulatively, these reduce the time taken to scale and improve the responsiveness of MongoDB's auto-scaling engine. Customers running dynamic workloads, particularly those with sharper peaks, will see up to a 5X improvement in responsiveness. Smarter scaling decisions by Atlas ensure that resource provisioning is optimized while maintaining high performance. This capability is available on all Atlas clusters with auto-scaling turned on, and customers should experience the benefits immediately.

Industry-leading MongoDB Atlas customers like Conrad and Current use auto-scaling to automatically scale their compute capacity, storage capacity, or both, without needing custom scripts, manual intervention, or third-party consulting services. Customers can set upper and lower tier limits, and Atlas will automatically scale their storage and tiers depending on workload demands. This ensures clusters always have the optimal resources to maintain performance while optimizing costs.
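The upper and lower tier limits described above are expressed as an auto-scaling section in a cluster's region configuration. The fragment below is a minimal sketch following the Admin API's advanced-cluster schema; the specific tiers are placeholders, and the exact field names should be verified against the current API reference.

```python
import json

# Illustrative sketch of the auto-scaling section of an Atlas region
# config: compute scaling bounded between a floor and a ceiling tier,
# plus automatic disk growth. Tiers here are placeholder choices.
auto_scaling = {
    "compute": {
        "enabled": True,
        "scaleDownEnabled": True,   # allow scaling back down after a spike
        "minInstanceSize": "M30",   # lower bound: never scale below this tier
        "maxInstanceSize": "M60",   # upper bound: caps cost during surges
    },
    "diskGB": {"enabled": True},    # grow storage automatically as data grows
}
print(json.dumps(auto_scaling, indent=2))
```

Atlas then moves the cluster between the two bounds based on observed utilization, which is what makes the responsiveness improvements described above matter.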
Take a look at how Coinbase is optimizing for both availability and cost in the volatile world of cryptocurrency with MongoDB Atlas, or read our auto-scaling docs page to learn more.

Optimize price and performance with MongoDB Atlas

As businesses focus more on optimizing cloud infrastructure costs, the latest MongoDB Atlas enhancements (independent shard scaling, more responsive auto-scaling, and extended storage and IOPS) empower organizations to manage resources efficiently while maintaining top performance. These tools provide the flexibility and control needed to achieve cost-effective scalability. Ready to take control of your cloud costs? Sign up for a free trial today or spin up a cluster to get the performance, availability, and cost efficiency you need.

October 31, 2024

Health-Tech Startup Aktivo Labs Scales Up With MongoDB Atlas

Aktivo Labs, a pioneering health-tech startup based in Singapore, has made significant strides in the fight against chronic diseases. Aktivo Labs develops innovative preventative healthcare technology solutions that encourage healthier lifestyles. The Aktivo Score®, the flagship product of Aktivo Labs built on MongoDB Atlas, is a simple yet powerful tool designed to guide users toward healthier living. “By collecting and analyzing data from smartphones and wearables—including physical activity, sleep patterns, and sedentary behavior—the Aktivo Score provides personalized recommendations to help users improve their health,” said Aktivo Labs CTO Jonnie Avinash at MongoDB.local Singapore in August 2024.

Aktivo Labs also works closely with insurance companies. Acting as a data processor, it helps insurers integrate some of the Aktivo Score features into their own apps to improve customer engagement.

Empowering insurers with out-of-the-box apps and user journeys

From the start, the Aktivo Labs engineering team chose to work on MongoDB Atlas because the platform's document model and cloud nature provided the flexibility and scalability required to support the company's business model. The engineering team's first goal was to enable insurance providers to integrate the Aktivo Score smoothly within their own infrastructures. The team built software development kits (SDKs) that insurers can embed in various iOS and Android apps. The SDKs enable progressive web app journeys for the user experience, which insurers can then rebrand and customize as their own. Next, the Aktivo Labs team created a web portal to help companies manage their apps and monitor their performance. This required discrete direct integrations with a myriad of wearables. “When we started to deploy things with companies, we were able to replicate this architecture so we could support all kinds of configurations,” Avinash said.
“We could give you dedicated clusters if the number of users that you’re expecting is big enough. If you’re not expecting too many customers, we could give you colocated or shared environments.”

Finding more efficiencies, flexibility, and scalability with MongoDB Atlas

“When we started off, one of our challenges was that we had a very small engineering team. A lot of the focus had to be on functionality, and the cost of tech had to be kept low,” said Avinash. Working on MongoDB Atlas allowed the Aktivo Labs team to focus on product development rather than on database management and overhead costs.

As the company grew and expanded to markets across Asia, Africa, and the Middle East, another challenge arose: Aktivo Labs needed to ensure its platform could scale and handle large volumes of disparate data efficiently. MongoDB Atlas was the optimal solution because its fully managed multi-cloud platform could easily scale as the company grew. MongoDB Atlas also provided Aktivo Labs the flexibility it needed to handle the wide variety, volume, and complexity of data generated by users’ health metrics. Based on insights from the MongoDB Atlas oplog, the engineering team made proactive updates to the database in real time in anticipation of dynamic changes to leaderboards and challenges in the app. This approach enables Aktivo Labs to manage complex data flows efficiently, ensuring that users always have access to the latest metrics about their health.

MongoDB Atlas’s secondary nodes and analytics nodes provide isolated environments for intensive data processing tasks, such as calculating risk scores for diabetes and hypertension. This separation ensures that the primary user-facing applications remain responsive, even during periods of heavy data processing. These isolated environments have also been an important factor in achieving compliance with the data-anonymization requirements of health insurers.
“The moment you start showing that it’s a managed service and you’re able to show a lot of these things, the amount of faith that both auditors and clients have in us is a lot more,” said Avinash.

Powered by MongoDB Atlas, Aktivo Labs is now looking to expand into the U.S. and European markets, pursuing its mission of preventing chronic diseases on a global scale. Visit our product page to learn more about MongoDB Atlas.

October 29, 2024

Away From the Keyboard: Rafa Liou, Senior Partner Marketing Manager

Welcome to the latest article in our “Away From the Keyboard” series, which features interviews with people at MongoDB, discussing what they do, how they prioritize time away from their work, and their advice for others looking to create a more holistic approach to coding. Rafa Liou, Senior Partner Marketing Manager at MongoDB, was gracious enough to tell us why he's not ashamed to advocate strongly for a healthy work-life balance and how his past career in the wild world of advertising helped him first recognize the need to do so.

Q: What do you do at MongoDB?

RAFA: I’m a Marketing Manager focused on MongoDB’s AI partner ecosystem. I help promote our partnerships with companies such as Anthropic, Cohere, LangChain, Together AI, and many others. I work to drive mutual awareness, credibility, and product adoption in the gen AI space via marketing programs. Basically, telling the world why we’re better together. It’s a cool job where I’m able to wear many hats and interact with lots of different teams internally and externally.

Q: What does work-life balance look like for you?

RAFA: Work-life balance is really important to me. It’s actually one of the things I value the most in a job. I know some people advise against this, but anytime I’m interviewing with a company I ask about it, because it definitely impacts my mental health, how I spend my time outside of work, and my ability to do the things I love. I’m very fortunate to work for a company that understands that and trusts me to do my job and, at the same time, be able to step out for a walk or a workout, not miss a dinner reservation with my husband, or whatever it is. It makes a lot of difference in both my productivity and happiness. After I log off, you can find me taking a HIIT class, exploring the restaurant scene in LA, or biking at the beach. It’s so good to be able to do all of that stress-free!

Q: How do you ensure you set boundaries between work and personal life?
RAFA: I usually joke that if you do everything you’re tasked with at the pace you’d like things to get done, you will never stop working. It is really important to prioritize tasks based on value, urgency, and feasibility. By assessing your pipeline more critically, you will be able to distill what needs to be done right now and also be at peace with the things that will be handled down the road, making it easier to disconnect when you’re done for the day. It’s also important to set expectations and boundaries with your manager and teams so you can fully enjoy life after work without worrying about that Slack message when you’re at the movies.

Q: Has work-life balance always been a priority for you, or did you develop it later in your career?

RAFA: Before tech, I worked in advertising, which is a very fast-paced industry with the craziest deadlines. For some time in my career, working relentlessly was not only required, but also rewarded by agency culture. When you’re young, nights in the office brainstorming over pizza with friends may sound fun. But it starts to wear you out pretty quickly, especially when you don’t have the time, energy, or even the mental state to enjoy your personal life after long hours. As I matured and climbed a few steps in my career, I felt the urge and the empowerment to set some boundaries to protect myself. Now, it’s a non-negotiable factor for me.

Q: What benefits has this balance given you in your career?

RAFA: By constantly exercising prioritization, I’ve become a more efficient professional. When you focus on what really matters, you are also able to execute at higher quality, without distractions or the feeling of getting overwhelmed. Of course, with prioritization comes a lot of trade-offs and discussions with stakeholders on what should be prioritized today versus tomorrow. So, I think I’ve also gotten better at negotiation and conflict resolution (things I’ve always struggled with).
Last but not least: having consistent downtime to unwind makes me more creative and energized to come up with new ideas and take on new projects.

Q: What advice would you give to someone seeking to find a better balance?

RAFA: First and foremost: don’t be ashamed of wanting a better work-life balance. I often find people living and breathing work just because they don’t want to be seen as lazy or uncommitted. Once you understand that a better work-life balance will actually make you a better professional (more intentional, efficient, and even strategic, as you will spend energy to solve what creates more value in a timely manner), it will be easier to have this mindset, communicate it to others, and live by it. Something more practical would be to start a list of all the things you have to do, acknowledge you can’t finish them all by the end of the day (or week, or month), and ask yourself: Do they all carry the same importance? How can I prioritize them? What would happen if I work on X now instead of Y? I would experiment with this approach and check how you feel and how it impacts your day-to-day life. You might be surprised by the result. Making time for personal life events, hobbies, and meet-ups with family and friends will also help you have something to look forward to after closing your laptop. This is all easier said than done, but I guarantee that once this becomes part of your core values and you find the balance that works for you, it is totally worth it!

Thank you to Rafa Liou for sharing his insights! And thanks to all of you for reading. For past articles in this series, check out our interviews with Senior AI Developer Advocate Apoorva Joshi and Developer Advocate Anaiya Raisinghani.

Interested in learning more about or connecting more with MongoDB? Join our MongoDB Community to meet other community members, hear about inspiring topics, and receive the latest MongoDB news and events.
And let us know if you have any questions for our future guests when it comes to building a better work-life balance as developers. Tag us on social media: @/mongodb

October 29, 2024

MongoDB Atlas Is a Near-Perfect Fit for YoMio.AI: Faster Inference, More Flexible Queries, Richer Use Cases

The world of artificial intelligence (AI) is evolving at lightning speed, with new applications emerging constantly, including one of the coolest new categories of AI chatbot: character AI. Character AI can hold engaging conversations, help you learn a new language, or let users create chatbots of their own.

YoMio.AI is an angel-round startup focused on character AI and AI entertainment, dedicated to making AI a companion for people in every aspect of life. YoMio.AI's main product is the AI-native entertainment app Rubii, around which it has built a full product matrix by decomposing Rubii's features into a set of independent services. These include one of the world's fastest voice-generation inference engines; one-click export of characters from Rubii to other social platforms, such as QQ; a public Roleplay LLM Arena for benchmarking the role-playing abilities of large language models; and rapid creation of knowledge-rich custom chatbots.

Startups, and AI startups in particular, are reshaping our daily lives with boundless imagination. As they build tools for the rest of us, they urgently need good tools themselves. According to YoMio.AI founder Junity, from a development perspective a startup first needs a unified, effective cloud architecture, with all applications migrated to a single cloud. Second, startup requirements change quickly, so the ability to modify schemas at any time makes a non-relational database a better fit. Finally, multilingual full-text search is also a must-have.

To address these challenges and needs, MongoDB Atlas became a near-perfect fit for YoMio.AI.

Caching tensors in binary storage to build a MongoDB-based prompt cache and one of the world's fastest TTS inference engines. Using MongoDB's ability to store binary files, YoMio.AI implemented the industry's first ultra-fast GPT-SoVITS inference, going from roughly one audio clip every 3 seconds to inferring 160 clips in 15 seconds. (Note: GPT-SoVITS is an advanced TTS framework with more than 30,000 stars on GitHub, known for cross-lingual voice cloning from just 3 seconds of audio, without training.) According to Junity, with MongoDB Atlas, YoMio.AI did not need to install plugins (as with PostgreSQL) to support Chinese full-text search, nor configure dedicated search nodes (as with Elasticsearch); after setting up an Atlas Index, searching takes only a few lines of code.

Multilingual full-text search with a Search Index. MongoDB's full-text indexes help users quickly find data containing specific keywords or phrases, which is critical for many applications. With MongoDB's support, YoMio.AI implemented search across Chinese, Japanese, English, Korean, and Cantonese, as well as cross-language search, even for mixed-language queries within a single sentence.

Atlas Vector Search paired with the Infinity inference engine for extremely low-latency, high-performance retrieval and reranking. MongoDB Atlas offers rich out-of-the-box functionality. On top of vector search, YoMio.AI built a minimal-latency system that handles both retrieval and reranking, and deployed a local Infinity image for plug-and-play embedding and reranking, with end-to-end latency under 50 ms per retrieval.

In addition, with Atlas Global Clusters, the systems above are low-latency and highly available anywhere in the world, and all of this took only two months to build.

Junity explained that YoMio.AI's business splits into consumer (ToC) and business (ToB) offerings. The ToC flagship is the AI character Rubii, which is becoming ever more immersive and engaging thanks to rich data and refined algorithms. The ToB offering is custom knowledge-rich chatbots: YoMio.AI's internal retrieval engine chunks customer documents, converts them into vectors, and parses them with a knowledge graph, so that in every conversation the bot receives the document chunks best suited to that context.

On both the ToC and ToB sides, YoMio.AI is racing against time and always aims to ship the fastest, highest-quality products. As YoMio.AI's database technology partner, MongoDB is also pushing ahead at the AI frontier, actively exploring how AI can modernize applications, particularly in code analysis, intelligent schema mapping, and code conversion. By introducing AI, MongoDB will further simplify application modernization and shorten migration time, enabling enterprises to adapt to market demands faster.

With MongoDB's new releases and innovations, YoMio.AI's "blitzscaling" is worth watching. Register now to start using MongoDB Atlas for free.
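The multilingual search setup described above can be sketched as an Atlas Search index definition that maps one field per language to a language-aware Lucene analyzer. The analyzer names below exist in Atlas Search, but the field names and overall mapping are illustrative assumptions, not YoMio.AI's actual schema.

```python
import json

# Hypothetical Atlas Search index definition for multilingual full-text
# search: each language-specific field gets a matching Lucene analyzer.
# Field names (text_en, etc.) are made up for illustration.
index_definition = {
    "mappings": {
        "dynamic": False,
        "fields": {
            "text_en": {"type": "string", "analyzer": "lucene.english"},
            "text_zh": {"type": "string", "analyzer": "lucene.smartcn"},   # Chinese
            "text_ja": {"type": "string", "analyzer": "lucene.kuromoji"},  # Japanese
            "text_ko": {"type": "string", "analyzer": "lucene.nori"},      # Korean
        },
    }
}
print(json.dumps(index_definition, indent=2))
```

Once an index like this is created on a collection, a `$search` aggregation stage can query any of the mapped fields, which is what makes the "a few lines of code" claim above plausible.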

October 29, 2024

Driving Neurodiversity Awareness and Education at MongoDB

Roughly 20% of the US population is neurodiverse, which means that you likely work with a colleague who learns and navigates the workplace (and the world) differently than you do. Which is a good thing! Studies have shown that hiring neurodiverse individuals benefits workplaces, with Deloitte noting that organizations “can gain a competitive edge from increased diversity in skills, ways of thinking, and approaches to problem-solving.”

Config at MongoDB, which Cian and I are the global leaders of, recognizes the prevalence, importance, and power of neurodiversity in the workplace. Config’s mission is to educate both our members and the wider employee population at MongoDB about neurodiversity in the workplace, and through education to empower them to embrace and champion neurodiversity. Since it was founded in April 2023, Config’s membership has grown by over 150%, and it now has members in New York, Dublin, Paris, Gurugram, and Sydney. In fact, more than 200 people who span a range of MongoDB teams, from Engineering and Product to the People team to Marketing, take part in Config.

We like to say that no one succeeds until all of us succeed, and that no one belongs until all of us belong. As managers, culture leaders, and people, it's our responsibility to do whatever we can to make that true. Invisible differences like neurodiversity are hard to spot, but they enrich our work and our lives. Config.MDB plays an important role in helping us achieve this ambition.

Making an impact on the MongoDB community

Over the last year and a half, Config has held over fifteen events globally, with almost 1,000 employees in attendance. Config has held educational events for both the group’s members and the wider MongoDB audience on neurodiversity-related topics like autism awareness and ADHD awareness, along with events tailored to allies and to members who identify as neurodivergent or who are part of a neurodivergent family.
Config has also held training sessions for MongoDB people managers that give them the knowledge and tools to better manage neurodiverse team members. Ger Hartnett, an Engineering Lead at MongoDB, said the training “gave me a much better understanding and appreciation for neurodiversity. This course was truly eye-opening for me. I learned practical ways to be more inclusive and supportive, both at work and in everyday life.” The group also holds quarterly virtual meetings to share the latest updates, personal experiences, and practical tips for members, focusing on career development, benefit entitlements, and events happening within MongoDB.

Outside of events and training sessions, Config has had a broader business impact on the company, with some Config leads partnering with the employee inclusion and recruiting teams to put together an interview accommodation program. This program supports candidates who are neurodiverse or have a disability by allowing them to apply for special requests to make their interview experience more inclusive and enjoyable.

Making a difference for individual members

Config’s focus on educational and training events has had a dramatic and direct impact on members. The group is a safe space for neurodiverse or disabled people to share their experiences and seek advice on various issues. Cian is one of Config’s founding members, and had this to say about his personal experience:

I was diagnosed with dyslexia in college and wanted to start a group like Config after speaking with other employees who were neurodiverse. We agreed that there was a need for a group like this at MongoDB. After the group was formed, I attended several events that focused on ADHD and saw a lot of similarities between the traits and experiences of those with ADHD and my own. After attending these events, I realized that struggles I had, which I’d thought were personality traits, could be signs of ADHD, so I turned to some of our members for guidance on how to seek a diagnosis.
Earlier this year, I was diagnosed with ADHD by a medical professional. I have noticed an improvement in my quality of life, and thanks to Config, I have a lot of valuable tips and resources to help me in my day-to-day. Had it not been for Config and these events, I would still be none the wiser.

Config has also made an impact on employees who are parents of neurodivergent children, like Sarah Lin, a senior information/content architect and Config member:

I joined Config to be part of the change I want to see in the world: to help make the inclusive and supportive workplace I'd want my autistic daughter to experience. I certainly hope I'm contributing, because membership has benefited me personally. I've learned more about different types of neurodivergence and ways to support my colleagues. From our employee resource group events, I've learned more about autism and the lives of autistic adults so that I can be a better support for my daughter as we look toward her adulthood. The best part has been conversations with other parents and seeing myself reflected in their struggles, persistence, and achievements.

Looking ahead

As Config continues to expand its footprint within MongoDB, the group plans to introduce advanced educational programming to raise awareness of neurodiversity in the workplace. It also plans to hold workshops to foster professional development and executive functioning. Config also hopes to grow its global membership to provide community outreach at scale for nonprofit organizations that specifically serve neurodiverse individuals.

Ultimately, Config’s aim is to create the best environment for teams at MongoDB. Our view of success is not only the “what” but also the “how.” Being sustainable, encouraging growth through learning, and accomplishing goals as a team are all meaningful to us.
And we believe strongly in the power of allyship; we want MongoDB to be a place where amazing people feel supported and are given the opportunity to do their best. After all, many of us are already close to neurodivergent individuals. One of Config’s executive sponsors, Mick Graham, has a daughter who is neurodivergent, which he says gives him extra inspiration to support Config now and in the future.

Overall, being part of Config has deepened our understanding of how neurodivergent people navigate the world. And the group, along with the inspirations and experiences members have shared, contributes to making MongoDB a place that great people want to be.

Interested in learning more about employee resource groups at MongoDB? Join our talent community to receive the latest MongoDB culture highlights.

October 24, 2024

Reflections On Our Recent AI "Think-A-Thon"

Interesting ideas are bound to emerge when great minds come together, so there was no shortage of them on October 2nd, when MongoDB’s Developer Relations team hosted our second-ever AI Build Together event at MongoDB.local London. In some ways, the event is similar to a hackathon: a group of developers come together to solve a problem. But in other ways, the event is quite different. While hackathons normally take an entire day and involve intensive coding, the AI Build Together events are organized to take place over just a few hours and don't involve any coding at all. Instead, everything is based around discussion and ideation. For these reasons, MongoDB’s Developer Relations team likes to dub them “think-a-thons.”

Our first AI Build Together event was held earlier this year at .local NYC. After seeing the energy in the room and the excitement from attendees, our Developer Relations team knew it wanted to host another one. The .local London event’s fifty attendees, who included developers from numerous industries as well as leading AI innovators who served as mentors, came together to brainstorm and discuss AI-based solutions to common industry problems.

.local London AI Build Together attendees brainstorming AI solutions for the healthcare industry

The AI mentors included Loghman Zadeh (gravity9), Ben Gutkovich (Superlinked), Jesse Martin (Hasura), Marlene Mhangami (Microsoft), Igor Alekseev (AWS), and John Willis and Patrick Debois (co-founders of DevOps). Upon arrival, participants joined the workflow group best aligned with their industry and/or area of interest: AI for Education, AI for DevOps, AI for Healthcare, AI for Optimizing Travel, AI for Supply Chain, and AI for Productivity.

The AI for Productivity group collaborating on their workflow

The discussions were lively, and it was amazing to see how much energy attendees brought to them.
For example, the AI for Education workflow group vigorously discussed developing a personalized AI education coach to help students develop their educational plans and support them with career advice. Meanwhile, the AI for Healthcare workflow group focused on the idea of creating an AI-driven tool to provide personalized healthcare to patients and real-time insights to their providers. The AI for Productivity team came up with a clever product that helps you read, digest, and identify the key aspects of long legal documents.

The AI for Optimizing Travel group seeking advice from AI mentor Marlene

A talented artist was also brought in to visualize each workflow group’s problem statements and potential solutions, literally and figuratively illustrating their innovative ideas.

Graphic recorder Maria Foulquié putting the final touches on the illustration

Final illustration documenting the 2024 MongoDB.local London AI Build Together event

All in all, our second time hosting this event was deemed a success by everyone involved. “It was impressive to see how attendees, regardless of their technical background, found ways to contribute to complex AI solutions,” says Loghman Zadeh, AI Director at gravity9, who served as one of the event’s advisors. “Engaging with so many creative and forward-thinking individuals, all eager to push the boundaries of AI innovation, was refreshing. The collaborative atmosphere fostered dynamic discussions and allowed participants to explore new ideas in a supportive environment.”

If you’re interested in taking part in events like these, which offer a range of networking opportunities, there are three more MongoDB.local events slated for 2024: São Paulo, Paris, and Stockholm. Additionally, you can join your local MongoDB user group to learn from and connect with other MongoDB developers in your area.

October 23, 2024

Gamuda Puts AI in Construction with MongoDB Atlas

Gamuda Berhad is a leading Malaysian engineering and construction company with operations across the world, including in Australia, Taiwan, Singapore, Vietnam, and the United Kingdom. The company is known for its innovative approach to construction through the use of cutting-edge technology. Speaking at MongoDB.local Kuala Lumpur in August 2024, John Lim, Chief Digital Officer at Gamuda, said: “In the construction industry, AI is increasingly being used to analyze vast amounts of data, from sensor readings on construction equipment to environmental data that impacts project timelines.”

One of Gamuda’s priorities is determining how AI and other tools can impact the company’s methods for building large projects across the world. For that, the Gamuda team needed the right infrastructure, with a database equipped to handle the demands of modern AI-driven applications. MongoDB Atlas fulfilled all the requirements and enabled Gamuda to deliver on its AI-driven goals.

Why Gamuda chose MongoDB Atlas

“Before MongoDB, we were dealing with a lot of different databases and we were struggling to do even simple things such as full-text search,” said Lim. “How can we have a tool that's developer-friendly, helps us scale across the world, and at the same time helps us to build really cool AI use cases, where we're not thinking about the infrastructure or worrying too much about how things work but are able to just focus on the use case?”

After some initial conversations with MongoDB, Lim’s team saw that MongoDB Atlas could help it streamline its technology stack, which was becoming very complex and time-consuming to manage. MongoDB Atlas provided the optimal balance between ease of use and powerful functionality, enabling the company to focus on innovation rather than database administration. “I think the advantage that we see is really the speed to market. We are able to build something quickly. We are fast to meet the requirements to push something out,” said Lim.
Chi Keen Tan, Senior Software Engineer at Gamuda, added, “The team was able to use a lot of developer tools like MongoDB Compass, and we were quite amazed by what we can do. This [ability to search the items within the database easily] is just something that’s missing from other technologies.” Being able to operate MongoDB on Google Cloud was also a key selling point for Gamuda: “We were able to start on MongoDB without any friction of having to deal with a lot of contractual problems and billing and setting all of that up,” said Lim.

How MongoDB is powering more AI use cases

Gamuda uses MongoDB Atlas and functionality such as Atlas Search and Vector Search to bring a number of AI use cases to life. This includes work implemented on Gamuda’s Bot Unify platform, which Gamuda built in-house using MongoDB Atlas as the database. By using documents stored in SharePoint and other systems, this platform helps users write tenders more quickly, find out about employee benefits more easily, or discover ways to improve design briefs. “It’s quite incredible. We have about 87 different bots now that people across the company have developed,” Lim said.

Additionally, the team has developed the Gamuda Digital Operating System (GDOS), which can optimize various aspects of construction, such as predictive maintenance, resource allocation, and quality control. MongoDB’s ability to handle large volumes of data in real time is crucial for these applications, enabling Gamuda to make data-driven decisions that improve efficiency and reduce costs. Specifically, MongoDB Atlas Vector Search enables Gamuda’s AI models to quickly and accurately retrieve relevant data, improving the speed and accuracy of decision-making. It also helps the Gamuda team find patterns and correlations in the data that might otherwise go unnoticed.

Gamuda’s journey with MongoDB Atlas is just beginning as the company continues to explore new ways to integrate technology into its operations and expand to other markets.
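As context for how a Vector Search retrieval like the ones described above is expressed, here is a minimal sketch of an aggregation pipeline using the `$vectorSearch` stage. The index name, field path, and query vector are placeholders (a real query vector would come from an embedding model), so treat this as a generic illustration rather than Gamuda's actual query.

```python
import json

# Minimal sketch of a MongoDB aggregation pipeline using the $vectorSearch
# stage (available on Atlas). Index name, field path, and the query vector
# below are placeholder values for illustration.
query_vector = [0.12, -0.07, 0.33]  # stand-in for a real embedding

pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",   # name of the Atlas Vector Search index
            "path": "embedding",       # field holding the stored vectors
            "queryVector": query_vector,
            "numCandidates": 100,      # candidates considered before ranking
            "limit": 5,                # top-k results returned
        }
    },
    {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
]
print(json.dumps(pipeline, indent=2))
```

With a driver such as PyMongo, a pipeline like this would be run with `collection.aggregate(pipeline)` against a collection that has a vector index on the embedding field.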
To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start page.

October 22, 2024

Empower Innovation in Insurance with MongoDB and Informatica

For insurance companies, determining the right technology investments can be difficult, especially in today's climate where technology options are abundant but their future is uncertain. As is the case with many large insurers, there is a need to consolidate complex and overlapping technology portfolios. At the same time, insurers want to make strategic, future-proof investments to maximize their IT expenditures. What does the future hold, however? Enter scenario planning. Using the art of scenario planning, we can find some constants in a sea of uncertain variables, and we can more wisely steer the organization when it comes to technology choices. Consider the following scenarios: Regulatory disruption: A sudden regulatory change forces re-evaluation of an entire market or offering. Market disruption: Vendor and industry alliances and partnerships create disruption and opportunity. Tech disruption: A new CTO directs a shift in the organization's cloud and AI investments, aligning with a revised business strategy. What if you knew that one of these three scenarios was going to play itself out in your company but weren’t sure which one? How would you invest now to prepare for one of the three? At the same time that insurers are grappling with technology choices, they’re also facing clashing priorities: Running the enterprise: supporting business imperatives and maintaining health and security of systems. Innovating with AI: maintaining a competitive position by investing in AI technologies. Optimizing spend: minimizing technology sprawl, technical debt, and maximizing business outcomes. Data modernization What is the common thread among all these plausible future scenarios? How can insurers apply scenario planning principles while bringing diverging forces into alignment? There is one constant in each scenario, and that’s the organization’s data—if it’s hard to work with, any future scenario will be burdened by this fact. 
One of the most critical strategic investments an organization can make is to ensure data is easy to work with. Today, we refer to this as data modernization, which involves removing the friction that manifests itself in data processing, ensuring data is current, secure, and adaptable. For developers, who are closest to the data, this means enabling them with a seamless and fully integrated developer data platform along with a flexible data model. In the past, data models and databases would remain unchanged for long periods. Today, this approach is outdated. Consolidation creates a data model problem, resulting in a portfolio with relational, hierarchical, and file-based data models—or, worst of all, a combination of all three. Add to this the increased complexity that comes with relational models, including supertype-subtype conditional joins and numerous data objects, and you can see how organizations wind up with a patchwork of data models and overly complicated data architecture. A document database, like MongoDB Atlas, stores data in documents and is often referred to as a non-relational (or NoSQL) database. The document model offers a variety of advantages and specifically excels in data consolidation and agility: Serves as the superset of all other data model types (relational, hierarchical, file-based, etc.) Consolidates data assets into elegant single-views, capable of accommodating any data structure, format, or source Supports agile development, allowing for quick incorporation of new and existing data Eliminates the lengthy change cycles associated with rigid, single-schema relational approaches Makes data easier to work with, promoting faster application development By adopting the document model, insurers can streamline their data operations, making their technology investments more efficient and future-proof. The challenges of making data easier to work with include data quality.
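To make the single-view point concrete, here is a hypothetical policyholder document (every field name is invented for illustration) that gathers into one record what a normalized relational design would spread across customer, policy, and claim tables:

```python
# A hypothetical single-view policyholder document. In a relational schema
# this would typically require joining customer, policy, and claim tables;
# the document model embeds the related data in one record.
policyholder = {
    "_id": "cust-1001",
    "name": {"first": "Dana", "last": "Reyes"},
    "contact": {"email": "dana.reyes@example.com", "phone": "+1-555-0100"},
    "policies": [
        {
            "policy_id": "auto-778",
            "type": "auto",
            "premium": 1200,
            "claims": [
                {"claim_id": "clm-1", "amount": 3400, "status": "settled"},
            ],
        },
        {"policy_id": "home-112", "type": "home", "premium": 900, "claims": []},
    ],
}

# "Single view" questions become simple traversals rather than joins:
total_premium = sum(p["premium"] for p in policyholder["policies"])
open_claims = [
    c
    for p in policyholder["policies"]
    for c in p["claims"]
    if c["status"] != "settled"
]
```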
One significant hurdle insurers continue to face is the lack of a unified view of customers, products, and suppliers across various applications and regions. Data is often scattered across multiple systems and sources, leading to discrepancies and fragmented information. Even with centralized data, inconsistencies may persist, hindering the creation of a single, reliable record. For insurers to drive better reporting, analytics, and AI, there's a need for a shared data source that is accurate, complete, and up-to-date. Centralized data is not enough; it must be managed, reconciled, standardized, cleansed, and enriched to maintain its integrity for decision-making. Mastering data management across countless applications and sources is complex and time-consuming. Success in master data management (MDM) requires business commitment and a suite of tools for data profiling, quality, and integration. Aligning these tools with business use cases is essential to extract the full value from MDM solutions, although the process can be lengthy. Informatica’s MDM solution and MongoDB Informatica’s MDM solution has been developed to answer the key questions organizations face when working with their customer data: “How do I get a 360-degree view of my customer, partner, and supplier data?” “How do I make sure that my data is of the highest quality?” The Informatica MDM platform helps ensure that organizations around the world can confidently use their data and make business decisions based on it. Informatica’s entire MDM solution is built on MongoDB Atlas, including its AI engine, Claire. Figure 1: Everything you need to modernize the practice of master data management. Informatica MDM solves the following challenges: Consolidates data from overlapping and conflicting data sources. Identifies data quality issues and cleanses data. Provides governance and traceability of data to ensure transparency and trust.
Insurance companies typically have several claim systems that they’ve amassed over the years through acquisitions, with each one containing customer data. The ability to relate that data together and ensure it’s of the highest quality enables insurers to overcome data challenges. MDM capabilities are essential for insurers who want to make informed decisions based on accurate and complete data. Below are some of the different use cases for MDM: Modernize legacy systems and processes (e.g., claims or underwriting) by effectively collecting, storing, organizing, and maintaining critical data Improve data security and improve fraud detection and prevention Effective customer data management for omni-channel engagement and cross- or up-sell Data management for compliance, avoiding or predicting in advance any possible regulatory issues Given we already leverage the performance and scale of MongoDB Atlas within our cloud-native MDM SaaS solution and share a common focus on high-value, industry solutions, this partnership was a natural next step. Now, as a strategic MDM partner of MongoDB, we can help customers rapidly consolidate and sunset multiple legacy applications for cloud-native ones built on a trusted data foundation that fuels their mission-critical use cases. Rik Tamm-Daniels, VP of Strategic Ecosystems and Technology at Informatica Taking the next step For insurance companies navigating the complexities of modern technology and data management, MDM combined with powerful tools like MongoDB and Informatica provides a strategic advantage. As insurers face an uncertain future with potential regulatory, market, and technological disruptions, investing in a robust data infrastructure becomes essential. MDM ensures that insurers can consolidate and cleanse their data, enabling accurate, trustworthy insights for decision-making.
By embracing data modernization and the flexibility of document databases like MongoDB, insurers can future-proof their operations, streamline their technology portfolios, and remain agile in an ever-changing landscape. Informatica’s MDM solution, underpinned by MongoDB Atlas, offers the tools needed to master data across disparate systems, ensuring high-quality, integrated data that drives better reporting, analytics, and AI capabilities. If you would like to discover more about how MongoDB and Informatica can help you on your modernization journey, take a look at the following resources: Unify data across the enterprise for a contextual 360-degree view and AI-powered insights with Informatica’s MDM solution Automating digital underwriting with machine learning Claim management using LLMs and vector search for RAG

October 22, 2024

Built With MongoDB: Buzzy Makes AI Application Development More Accessible

AI adoption rates are sky-high and showing no signs of slowing down. One of the driving forces behind this explosive growth is the increasing popularity of low- and no-code development tools that make this transformative technology more accessible to tech novices. Buzzy, an AI-powered no-code platform that aims to revolutionize how applications are created, is one such company. Buzzy enables anyone to transform an idea into a fully functional, scalable web or mobile application in minutes. Buzzy developers use the platform for a wide range of use cases, from a stock portfolio tracker to an AI t-shirt store. The only way the platform could support such diverse applications is by being built upon a uniquely versatile data architecture. So it’s no surprise that the company chose MongoDB Atlas as its underlying database. Creating the buzz Buzzy’s mission is simple but powerful: to democratize the creation of applications by making the process accessible to everyone, regardless of technical expertise. Founder Adam Ginsburg—a self-described husband, father, surfer, geek, and serial entrepreneur—spent years building solutions for other businesses. After building and selling an application that eventually became the IBM Web Content Manager, he created a platform allowing anyone to build custom applications quickly and easily. Buzzy initially focused on white-label technology for B2B applications, which global vendors brought to market. Over time, the platform evolved into something much bigger. The traditional method of developing software, as Ginsburg puts it, is dead. Ginsburg observed two major trends that contributed to this shift: the rise of artificial intelligence (AI) and the design-centric approach to product development exemplified by tools like Figma. Buzzy set out to address two major problems. First, traditional software development is often slow and costly.
Small-to-medium-sized business (SMB) projects can cost anywhere from $50,000 to $250,000 and take nine months to complete. Due to these high costs and lengthy timelines, many projects either fail to start or run out of resources before they’re finished. The second issue is that while AI has revolutionized many aspects of development, it isn’t a cure-all for generating vast amounts of code. Generating tens of thousands of lines of code using AI is not only unreliable but also lacks the security and robustness that enterprise applications demand. Additionally, the code generated by AI often can’t be maintained or supported effectively by IT teams. This is where Buzzy found a way to harness AI effectively, using it in a co-pilot mode to create maintainable, scalable applications. Buzzy’s original vision was focused on improving communication and collaboration through custom applications. Over time, the platform’s mission shifted toward no-code development, recognizing that these custom apps were key drivers of collaboration and business effectiveness. The Buzzy UX is highly streamlined so even non-technical users can leverage the power of AI in their apps. Initially, Buzzy's offerings were somewhat rudimentary, producing functional but unpolished B2B apps. However, the platform soon evolved. Instead of building its own user experience (UX) and user interface (UI) capabilities, Buzzy integrated with Figma, giving users access to the design-centric workflow they were already familiar with. The advent of large language models (LLMs) provided another boost to the platform, enabling Buzzy to accelerate AI-powered development. What sets Buzzy apart is its unique approach to building applications. Unlike traditional development, where code and application logic are often intertwined, Buzzy separates the "app definition" from the "core code." This distinction allows for significant benefits, including scalability, maintainability, and better integration with AI.
Instead of handing massive chunks of code to an AI system—which can result in errors and inefficiencies—Buzzy gives the AI a concise, consumable description of the application, making it easier to work with. Meanwhile, the core code, written and maintained by humans, remains robust, secure, and high-performing. This approach not only simplifies AI integration but also ensures that updates made to Buzzy’s core code benefit all customers simultaneously, an efficiency that few traditional development teams can achieve. Flexible platform, fruitful partnership The partnership between Buzzy and MongoDB has been crucial to Buzzy’s success. MongoDB’s Atlas developer data platform provides a scalable, cost-effective solution that supports Buzzy’s technical needs across various applications. One of the standout features of MongoDB Atlas is its flexibility and scalability, which allows Buzzy to customize schemas to suit the diverse range of applications the platform supports. Additionally, MongoDB’s support—particularly with new features like Atlas Vector Search—has allowed Buzzy to grow and adapt without complicating its architecture. In terms of technology, Buzzy’s stack is built for flexibility and performance. The platform uses Kubernetes and Docker running on Node.js with MongoDB as the database. Native clients are powered by React Native, using SQLite and WebSockets for communication with the server. On the AI side, Buzzy leverages several models, with OpenAI as the primary engine for fine-tuning its AI capabilities. Thanks to the MongoDB for Startups program, Buzzy has received critical support, including Atlas credits, consulting, and technical guidance, helping the startup continue to grow and scale. With the continued support of MongoDB and an innovative approach to no-code development, Buzzy is well-positioned to remain at the forefront of the AI-driven application development revolution.
A Buzzy future Buzzy embodies the spirit of innovation in its own software development lifecycle (SDLC). The company is about to release two game-changing features that are going to take AI-driven app development to the next level: Buzzy FlexiBuild, which will allow users to build more complex applications using just AI prompts, and Buzzy Automarkup, which will allow Figma users to easily mark up screens, views, lists, forms, and actions with AI in minutes. Ready to start bringing your own app visions to life? Try Buzzy and start building your application in minutes for free. To learn more and get started with MongoDB Vector Search, visit our Vector Search Quick Start guide.

October 18, 2024

Announcing Hybrid Search Support for LlamaIndex

MongoDB is excited to announce enhancements to our LlamaIndex integration. By combining MongoDB’s robust database capabilities with LlamaIndex’s innovative framework for context-augmented large language models (LLMs), the enhanced MongoDB-LlamaIndex integration unlocks new possibilities for generative AI development. Specifically, it supports vector (powered by Atlas Vector Search), full-text (powered by Atlas Search), and hybrid search, enabling developers to blend precise keyword matching with semantic search for more context-aware applications, depending on their use case. Building AI applications with LlamaIndex LlamaIndex is one of the world’s leading AI frameworks for building with LLMs. It streamlines the integration of external data sources, allowing developers to combine LLMs with relevant context from various data formats. This makes it ideal for building application features like retrieval-augmented generation (RAG), where accurate, contextual information is critical. LlamaIndex empowers developers to build smarter, more responsive AI systems while reducing the complexities involved in data handling and query management. Advantages of building with LlamaIndex include: Simplified data ingestion with connectors that integrate structured databases, unstructured files, and external APIs, removing the need for manual processing or format conversion. Organizing data into structured indexes or graphs, significantly enhancing query efficiency and accuracy, especially when working with large or complex datasets. An advanced retrieval interface that responds to natural language prompts with contextually enhanced data, improving accuracy in tasks like question-answering, summarization, or data retrieval. Customizable APIs that cater to all skill levels—high-level APIs enable quick data ingestion and querying for beginners, while lower-level APIs offer advanced users full control over connectors and query engines for more complex needs.
MongoDB's LlamaIndex integration Developers are able to build powerful AI applications using LlamaIndex as a foundational AI framework alongside MongoDB Atlas as the long-term memory database. With MongoDB’s developer-friendly document model and powerful vector search capabilities within MongoDB Atlas, developers can easily store and search vector embeddings for building RAG applications. And because of MongoDB’s low-latency transactional persistence capabilities, developers can do a lot more with the MongoDB integration in LlamaIndex to build AI applications in an enterprise-grade manner. LlamaIndex's flexible architecture supports customizable storage components, allowing developers to leverage MongoDB Atlas as a powerful vector store and a key-value store. By using Atlas Vector Search capabilities, developers can: Store and retrieve vector embeddings efficiently (llama-index-vector-stores-mongodb) Persist ingested documents (llama-index-storage-docstore-mongodb) Maintain index metadata (llama-index-storage-index-store-mongodb) Store key-value pairs (llama-index-storage-kvstore-mongodb) Figure adapted from Liu, Jerry and Agarwal, Prakul (May 2023). “Build a ChatGPT with your Private Data using LlamaIndex and MongoDB”. Medium. https://medium.com/llamaindex-blog/build-a-chatgpt-with-your-private-data-using-llamaindex-and-mongodb-b09850eb154c Adding hybrid and full-text search support Developers may use different approaches to search for different use cases. Full-text search retrieves documents by matching exact keywords or linguistic variations, making it efficient for quickly locating specific terms within large datasets, such as in legal document review where exact wording is critical. Vector search, on the other hand, finds content that is ‘semantically’ similar, even if it does not contain the same keywords. Hybrid search combines full-text search with vector search to identify both exact matches and semantically similar content.
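Hybrid search needs a way to merge the two ranked result lists, and reciprocal rank fusion (RRF) is one common approach; the integration handles this for you, but a minimal, framework-free sketch shows the idea (this is a simplified illustration, not the integration's actual implementation):

```python
# Minimal illustration of reciprocal rank fusion (RRF), one common way to
# merge vector and full-text rankings. Simplified sketch for intuition only.
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked lists of document IDs into a single ranking.

    Each document scores sum(1 / (k + rank)) over the lists it appears in;
    k dampens the influence of any single list's top results.
    """
    scores = {}
    for ranked_ids in rankings:
        for rank, doc_id in enumerate(ranked_ids, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]  # hypothetical semantic ranking
text_hits = ["doc_c", "doc_a", "doc_d"]    # hypothetical keyword ranking
fused = reciprocal_rank_fusion([vector_hits, text_hits])
# doc_a and doc_c appear in both lists, so they rise to the top.
```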
This approach is particularly valuable in advanced retrieval systems or AI-powered search engines, enabling results that are both precise and aligned with the needs of the end-user. It is super simple for developers to try out powerful retrieval capabilities on their data and improve the accuracy of their AI applications with this integration. In the LlamaIndex integration, the MongoDBAtlasVectorSearch class is used for vector search. All you have to do is enable full-text search by using VectorStoreQueryMode.TEXT_SEARCH in the same class. Similarly, to use hybrid search, enable VectorStoreQueryMode.HYBRID. To learn more, check out the GitHub repository. With the MongoDB-LlamaIndex integration’s support, developers no longer need to navigate the intricacies of Reciprocal Rank Fusion implementation or to determine the optimal way to combine vector and text searches—we’ve taken care of the complexities for you. The integration also includes sensible defaults and robust support, ensuring that building advanced search capabilities into AI applications is easier than ever. This means that MongoDB handles the intricacies of storing and querying your vectorized data, so you can focus on building! We’re excited for you to work with our LlamaIndex integration. Here are some resources to expand your knowledge on this topic: Check out how to get started with our LlamaIndex integration Build a content recommendation system using MongoDB and LlamaIndex with our helpful tutorial Experiment with building a RAG application with LlamaIndex, OpenAI, and our vector database Learn how to build with private data using LlamaIndex, guided by one of its co-founders

October 17, 2024

Strengthen Data Security with MongoDB Queryable Encryption

MongoDB Queryable Encryption is a groundbreaking, industry-first innovation developed by the MongoDB Cryptography Research Group that allows customers to encrypt sensitive application data, store it securely in an encrypted state in the MongoDB database, and perform equality and range queries directly on the encrypted data—with no cryptography expertise required. Adding range query support to Queryable Encryption significantly enhances data retrieval capabilities by enabling more flexible and powerful searches. Queryable Encryption is available in MongoDB Atlas, Enterprise Advanced, and Community Edition. Encryption: Protecting data through every stage of its lifecycle Encryption is a critical security method for ensuring protection of sensitive data and compliance with regulations like GDPR, CCPA, and HIPAA. It involves rendering data unreadable to anyone without the decryption key. It can protect data in three ways: in-transit (over networks), at-rest (when stored), and in-use (during processing). While encryption in-transit and at-rest are standard for all databases and are well-supported by MongoDB, encryption in-use presents a unique challenge. Encryption in-use is difficult because encrypted data is unreadable—it looks like random characters and symbols. Traditionally, the database can’t run queries on encrypted data without decrypting it first to make it readable. However, if the database doesn’t have a decryption key, it has to send encrypted data back to the application or system (i.e., the client) that has the key so it can be decrypted before querying. This is a pattern that doesn’t scale well for real-world applications. This puts organizations in a difficult spot: in-use encryption is important for data privacy and regulatory compliance, but it's hard to implement. In the past, companies have either chosen not to encrypt sensitive data in-use or have employed less secure workarounds that complicate their operations.
MongoDB Queryable Encryption: Safeguarding data in use without sacrificing efficiency MongoDB Queryable Encryption solves this problem. It allows organizations to encrypt their sensitive data, like personally identifiable information (PII) or protected health information (PHI), and to run equality and range queries directly on that data without having to decrypt it. Queryable Encryption was developed by the MongoDB Cryptography Research Group, drawing on its pioneering expertise in cryptography and encrypted search, and has been peer-reviewed by leading cryptography experts worldwide. Unmatched in the industry, MongoDB is the only data platform that allows customers to run expressive queries directly on non-deterministically encrypted data. This is a groundbreaking advantage for customers: they can maintain robust protection for their sensitive data while still running expressive queries on it, without sacrificing operational efficiency or developer productivity. Organizations of all sizes, across all industries, can benefit from the impactful outcomes enabled by Queryable Encryption, such as: Stronger data protection: Data stays encrypted at every stage—whether in-transit, at-rest, or in-use—reducing the risk of sensitive data exposure or breaches. Enhanced regulatory compliance: Provides customers with the necessary tools to comply with data protection regulations like GDPR, CCPA, and HIPAA by ensuring robust encryption at every stage. Streamlined operations: Simplifies the encryption process without needing costly custom solutions, specialized cryptography teams, or complex third-party tools. Solidified separation of duties: Supports stricter access controls, where MongoDB and even a customer's database administrators (DBAs) don’t have access to sensitive data.
Use cases for Queryable Encryption MongoDB Queryable Encryption has many use cases for organizations that host sensitive data, regardless of their size or industry. The recent addition of range query support to Queryable Encryption broadens those use cases even further. Here are some examples to help illustrate how Queryable Encryption could be used to protect and query sensitive data: Financial Services Credit Scoring: Assess creditworthiness by querying encrypted data such as credit scores and income levels. For example, segment your customers based on credit scores between 600 and 750. Fraud Detection: Detect anomalies by querying encrypted transaction amounts for values that exceed typical spending patterns, such as transactions above $10,000. Insurance Risk Assessment: Personalize policy offerings by querying encrypted client data for risk levels within specified ranges, enhancing customer service without exposing sensitive information. Claims Processing: Automate claims processing by querying encrypted claims data for amounts within specific ranges or for claims within time periods, streamlining operations while safeguarding information. Healthcare Medical Research: Execute range-based searches on encrypted medical records, such as querying encrypted datasets for patients within specific age ranges or for abnormal lab results for medical research. Billing and Insurance Processing: Perform secure range queries on encrypted billing data to process insurance claims and payments while protecting patient financial details. Education Grading Systems: Process encrypted student scores to award grades within specific ranges, ensuring compliance with FERPA while protecting student privacy and maintaining data security. Financial Aid Distribution: Analyze encrypted income data within certain ranges to determine eligibility for scholarships and financial aid.
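As an illustration of how such use cases are declared, an encrypted fields map specifies which fields are encrypted and which query types they support. The field names and bounds below are hypothetical (echoing the credit-scoring example above); in a real deployment, the driver's client-side encryption helpers create the data keys.

```python
# Hypothetical encryptedFields configuration for Queryable Encryption,
# declaring one equality-queryable and one range-queryable field.
# keyId is left as None so a driver helper (e.g. pymongo's
# create_encrypted_collection) can generate the data keys; the field
# names and min/max bounds here are illustrative assumptions.
encrypted_fields = {
    "fields": [
        {
            "path": "ssn",
            "bsonType": "string",
            "keyId": None,
            "queries": [{"queryType": "equality"}],
        },
        {
            "path": "creditScore",
            "bsonType": "int",
            "keyId": None,
            "queries": [{"queryType": "range", "min": 300, "max": 850}],
        },
    ]
}

# With this configuration in place, a client set up for automatic encryption
# can run a query like {"creditScore": {"$gte": 600, "$lte": 750}} directly
# against the encrypted data.
```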
Comprehensive data protection at every stage With Queryable Encryption, MongoDB offers unmatched protection for sensitive data throughout its entire lifecycle—whether in-transit, at-rest, or in-use. Now, with the addition of range query support, Queryable Encryption meets even more of the demands of modern applications, unlocking new use cases. To get started, explore the Queryable Encryption documentation.

October 16, 2024

Unlocking Seamless Data Migrations to MongoDB Atlas with Adiom

As enterprises continue to scale, the need for powerful, seamless data migration tools becomes increasingly important. Adiom, founded by industry veterans with deep expertise in data mobility and distributed systems, is addressing this challenge head-on with its open-source tool, dsync. By focusing on high-stakes, production-level migrations, Adiom has developed a solution that works effortlessly with MongoDB Atlas and makes large-scale migrations to it from NoSQL databases faster, safer, and more predictable. The real migration struggles Enterprises often approach migrations with apprehension, and for good reason. When handling massive datasets powering mission-critical services or user-facing applications, even small mistakes can have significant consequences. Adiom understands these challenges deeply, particularly when migrating to MongoDB Atlas. Here are a few of the common pain points that enterprises face: Time-consuming processes: Moving large datasets involves extensive planning, testing, and iteration. What’s more, enterprises need migrations that are repeatable and can handle the same dataset efficiently multiple times—something traditional tools often struggle to provide. Risk management: From data integrity issues to downtime during the migration window, the stakes are high. Tools that worked for smaller datasets and in lower-tier environments no longer meet the requirements. Custom migration scripts often introduce unforeseen risks, while other databases come with their own unique limitations. Cost overruns: Enterprises frequently encounter hidden migration costs—whether it's the need to provision special infrastructure, reworking application code for compatibility with migration plans, or paying SaaS vendors by the row. These complications can balloon the overall migration budget or send the project into the approval death spiral. To make things even more complicated, the pains feed into each other.
The longer the project takes, the more risks need to be accounted for, the longer the planning and testing, and the bigger the cost. Adiom’s dsync: Power and simplicity in one tool Dsync was built with these challenges in mind. Designed specifically for large production workloads, dsync enables enterprises to handle complex migrations more easily, lowering the hurdles that typically slow down the process, reducing risks and uncertainty. Here’s why dsync stands out: Ease of deployment: Starting with dsync is incredibly simple. All it takes is downloading a single binary—there’s no need for specialized infrastructure, and it runs seamlessly on VMs or Docker. Users can monitor migrations through the command line or a web interface, giving flexibility depending on the team’s preferences. Resilience and safety: dsync is not only efficient, but it’s also resumable. Should a migration be interrupted, there’s no need to start over. This means that migrations can continue smoothly from where they left off, reducing the risk of downtime and minimizing the complexity of the process. Verification: dsync is designed to protect the integrity of migrated data. Dsync features embedded data verification mechanisms that automatically check for consistency between the source and destination databases after migration. Security: dsync doesn't store data, doesn't send it outside the organization other than to the designated destination, and supports network encryption. No hidden costs: As an open-source tool, dsync eliminates the need to onboard expensive SaaS solutions or purchase licenses in the early stages of the process. It operates independently of third-party vendors, giving enterprises flexibility and control over their migrations without the additional financial burden. Enhancing MongoDB customers' experiences For MongoDB customers, the ability to migrate data quickly and efficiently can be the key to unlocking new products, features, and cost savings.
With dsync, Adiom provides a solution that can accelerate migrations, reduce risks, and enable enterprises to leverage MongoDB Atlas without the usual headaches. Faster time-to-market: By significantly accelerating migrations, dsync allows companies to take advantage of MongoDB Atlas offerings and integrations sooner, offering a direct path to quicker returns on investment. Self-service and support: Many migrations can be handled entirely in-house, thanks to dsync’s intuitive design. However, for organizations that need additional guidance, Adiom offers support and has partnered with MongoDB Professional Services and PeerIslands to offer comprehensive coverage during the migration process. Five compelling advantages of migrating to MongoDB Flexible schema: MongoDB’s schema-less design reduces development time by up to 30% by allowing you to change data structures. Scalability: You can scale MongoDB to multiple petabytes of data seamlessly using sharding. High performance: MongoDB helps to improve read and write speeds by up to 50% compared to traditional databases. Expressive Query API: Its advanced querying capabilities reduce query writing time and increase execution efficiency by 70%. Partner Ecosystem: MongoDB’s strong partner ecosystem helps with service integrations, AI capabilities, purpose-built solutions, and other significant competitive differentiators. Conclusion Dsync is more than just a migration tool—it’s a powerful engine that abstracts away the complexity of managing large datasets across different systems. By seamlessly tying together initial data copying, change-data-capture, and all the nuances of large-scale migrations, dsync lets enterprises focus on building their future, not on the logistics of data transfer. For those interested in technical details, some of those logistics and nuances can be found in our CEO’s blog . 
With Adiom and dsync, enterprises no longer have to choose among performance, correctness, and ease of use when planning a migration from another NoSQL database. Dsync provides an enterprise-grade solution that helps to enable faster, more secure, and more reliable migrations. By partnering with MongoDB, Adiom supports you in continuing to innovate without being held back by the limitations of legacy databases. Try dsync yourself or contact Adiom for a demo. Head over to our product page to learn more about MongoDB Atlas.

October 15, 2024