Resources

Scaling Vector Database Operations with MongoDB and Voyage AI

The news is out! Voyage AI, a leader in embedding and reranking models that make AI-powered search and retrieval more accurate, has joined MongoDB. In our recent webinar, MongoDB’s Frank Liu, Staff Product Manager, and Richmond Alake, Staff Developer Advocate, break down the powerful new AI capabilities that are now easily accessible to developers everywhere. A few highlights from the discussion:

*Why these embedding models are so important is that if you have a really powerful one—one that’s able to understand semantic relationships between texts and pick apart a lot of the nuances in your inputs—then you're going to be able to build a really powerful search and retrieval system.* — Frank Liu, Staff Product Manager, MongoDB

*What does this mean for you as a developer? Well, I build these AI applications almost every day, so it really means less code. If you have a data layer that can handle the embedding generation and also the reranking, it's just less code—and that's good—but it also means you can have a reliable retrieval process. And Voyage AI embedding models are the state of the art, the best in class any way you want to cut them, including domain-specific models.* — Richmond Alake, Staff Developer Advocate, MongoDB

Watch it on demand now for a deep dive into reranking and vector embeddings for building AI-powered user experiences, followed by a technical question-and-answer period. We hope to see you at our next webinar!
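For a sense of what "less code" looks like in practice, here is a minimal sketch of the embed, store, search, and rerank flow using the Voyage AI Python client and PyMongo. The connection string, database and collection names, the "vector_index" Atlas Vector Search index, and the specific model names are assumptions for illustration, not details taken from the webinar.

```python
# A minimal sketch of the embed -> store -> search -> rerank flow discussed in the webinar.
# Assumes: a Voyage AI API key (VOYAGE_API_KEY), a MongoDB Atlas cluster with an
# Atlas Vector Search index named "vector_index" on the "embedding" field, and the
# `voyageai` and `pymongo` packages installed. Model and field names are illustrative.
import os

import voyageai
from pymongo import MongoClient

vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment
coll = MongoClient(os.environ["MONGODB_URI"])["demo"]["articles"]

docs = [
    "MongoDB Atlas Vector Search stores embeddings alongside operational data.",
    "Voyage AI provides embedding and reranking models for retrieval systems.",
    "Reranking reorders candidate documents by relevance to the query.",
]

# 1) Embed the documents and store text + vectors together in one collection.
doc_vectors = vo.embed(docs, model="voyage-3", input_type="document").embeddings
coll.insert_many([{"text": t, "embedding": v} for t, v in zip(docs, doc_vectors)])

# 2) Embed the query and run an approximate nearest-neighbour search.
query = "How do I make retrieval more accurate?"
query_vector = vo.embed([query], model="voyage-3", input_type="query").embeddings[0]
candidates = list(coll.aggregate([
    {"$vectorSearch": {
        "index": "vector_index",
        "path": "embedding",
        "queryVector": query_vector,
        "numCandidates": 50,
        "limit": 10,
    }},
    {"$project": {"_id": 0, "text": 1}},
]))

# 3) Rerank the candidates so the most relevant text ends up first.
reranked = vo.rerank(query, [c["text"] for c in candidates], model="rerank-2", top_k=3)
for r in reranked.results:
    print(round(r.relevance_score, 3), r.document)
```

The point of the sketch is the shape of the pipeline rather than the exact calls: embedding generation and reranking sit next to the data layer, so the application code stays small.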

Watch presentation →

AI Database Comparison: MongoDB vs. PostgreSQL and pgvector

Large language models (LLMs) are rapidly reshaping how we build AI solutions, from retrieval-augmented generation (RAG) to AI agents and agentic systems that continuously reason, reflect, and act on new data. The database technology you choose to power these applications can significantly impact the performance, scalability, and success of your AI application. In this webinar, Staff Developer Advocate Richmond Alake will compare two vector search solutions — PostgreSQL with pgvector and MongoDB Atlas Vector Search — and guide you through selecting the right option for your AI workloads. Whether you’re a data engineer, AI architect, or developer, you’ll walk away with actionable insights on how to think about and optimize critical metrics like latency and throughput to meet the demands of modern AI applications.

What you’ll learn:

- How RAG boosts LLM-based applications by integrating external data in real time, and how semantic search and RAG can be implemented using both databases.
- How PostgreSQL with pgvector and MongoDB Atlas handle the high-performance vector operations crucial for tasks like semantic search and recommendation.
- How robust vector databases enable AI agents to reason, plan, and act autonomously, creating truly dynamic and interactive AI experiences.
- How a real-world application using a financial Q&A dataset illustrates practical deployment and optimization strategies.
- How key metrics like latency and throughput directly affect the success of LLM applications, and how to apply proven tuning techniques.
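As a companion to the comparison, here is a hedged sketch of the same top-5 semantic search expressed against both engines: a pgvector distance query in SQL and a `$vectorSearch` aggregation stage in MongoDB Atlas. The table and collection names, the index name, the connection-string environment variables, and the 1024-dimension vectors are assumptions for illustration; they are not taken from the webinar's demo.

```python
# A side-by-side sketch of the same top-5 semantic search against both engines.
# Assumes: pgvector installed (CREATE EXTENSION vector) with a table
#   documents(id serial, text text, embedding vector(1024)),
# and a MongoDB collection "documents" with an Atlas Vector Search index named
# "vector_index" on the "embedding" field. All names and dimensions are illustrative.
import os

import psycopg2
from pymongo import MongoClient

query_vector = [0.01] * 1024  # in practice, produced by your embedding model


def search_pgvector(vec):
    """Top-5 cosine-distance search using pgvector's <=> operator."""
    conn = psycopg2.connect(os.environ["POSTGRES_DSN"])
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT text FROM documents ORDER BY embedding <=> %s::vector LIMIT 5",
            ("[" + ",".join(map(str, vec)) + "]",),  # pgvector's text vector format
        )
        return [row[0] for row in cur.fetchall()]


def search_atlas(vec):
    """Top-5 approximate nearest-neighbour search using the $vectorSearch stage."""
    coll = MongoClient(os.environ["MONGODB_URI"])["demo"]["documents"]
    pipeline = [
        {"$vectorSearch": {
            "index": "vector_index",
            "path": "embedding",
            "queryVector": vec,
            "numCandidates": 100,   # candidates considered before the final limit
            "limit": 5,
        }},
        {"$project": {"_id": 0, "text": 1}},
    ]
    return [doc["text"] for doc in coll.aggregate(pipeline)]


print(search_pgvector(query_vector))
print(search_atlas(query_vector))
```

The pgvector query ranks rows with the `<=>` cosine-distance operator, while the Atlas pipeline trades recall against latency through `numCandidates`; knobs like these are exactly where latency and throughput tuning, as covered in the webinar, comes into play.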

Watch presentation →

Beyond Open Banking – Exploring the Move to Open Finance

The financial industry has made significant progress on financial data access through open banking, and that progress will expand beyond payments as regulation moves from the Payment Services Directive 2 (PSD2) to what is explored in the PSR and PSD3. Open banking marked the beginning of this new era, and the shift toward open finance is the essential next phase, with incoming regulations like Financial Data Access (FiDA) covering new areas including mortgages, pensions, investments, and savings.

While these initiatives hold promise, a successful open finance framework will require many financial institutions – which still rely on outdated systems and processes that may not natively support the flexible data access requirements of FiDA – to review their data architectures. If institutions can’t adhere to strict geographic data regulations (with certain jurisdictions requiring that data remain within specific regions), guarantee real-time data availability, or scale in real time to handle increased API traffic, further data modernisation efforts will be required.

How can financial institutions navigate this complex web of open architecture, flexible infrastructure, data requirements, and regulation? Modern ecosystems require modern solutions, so organisations need to rethink their approach to data, not just to facilitate open finance, but also to stay relevant in an increasingly competitive landscape. Sign up for this Finextra webinar, hosted in association with MongoDB and Temenos, to join our panel of industry experts, who will discuss how the move to open finance can be managed effectively with the right data architecture in place.

Watch presentation →