Resources

Scaling Vector Database Operations with MongoDB and Voyage AI

The news is out! Voyage AI, a leader in embedding and reranking models that make AI-powered search and retrieval more accurate, has joined MongoDB. In our recent webinar, MongoDB's Frank Liu, Staff Product Manager, and Richmond Alake, Staff Developer Advocate, break down the powerful new AI capabilities that are now easily accessible to developers everywhere. A few highlights from the discussion:

*Why these embedding models are so important: if you have a really powerful one—one that's able to understand semantic relationships between texts and pick apart a lot of the nuances in your inputs—then you're going to be able to have a really powerful search and retrieval system.* — Frank Liu, Staff Product Manager, MongoDB

*What does this mean for you as a developer? Well, I build these AI applications almost every day, so it really means less code if you have a data layer that can handle the embedding generation and also the reranking. It's just less code—and that's good—but it also means you can have a reliable retrieval process. And Voyage AI embedding models are the state of the art, the best in class any way you want to cut them, including domain-specific models.* — Richmond Alake, Staff Developer Advocate, MongoDB

Watch it on demand now for a deep dive into reranking and vector embeddings for building AI-powered user experiences, followed by a technical question-and-answer period. We hope to see you at our next webinar!
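The embed-then-rerank flow the speakers describe can be sketched in a few lines. This is a toy illustration only: `embed` and `rerank_score` below are hypothetical stand-ins (a bag-of-words vector and a term-overlap score) for the real Voyage AI embedding and reranking models a production system would call.

```python
# Sketch of a two-stage retrieve-then-rerank pipeline.
# embed() and rerank_score() are toy stand-ins for real embedding and
# reranking models; they only illustrate the shape of the flow.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_then_rerank(query: str, docs: list[str], k: int = 3) -> list[str]:
    q = embed(query)
    # Stage 1: coarse vector similarity selects candidate documents.
    candidates = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
    # Stage 2: a finer-grained score reorders the candidates; here exact
    # query-term overlap stands in for a cross-encoder reranker.
    def rerank_score(d: str) -> int:
        return len(set(query.lower().split()) & set(d.lower().split()))
    return sorted(candidates, key=rerank_score, reverse=True)

docs = [
    "vector search for semantic retrieval",
    "cooking pasta at home",
    "semantic search with embedding models",
]
print(retrieve_then_rerank("semantic embedding search", docs, k=2))
```

The point of the two stages is the one Richmond makes above: when the data layer handles embedding generation and reranking for you, the only application code left is the query itself.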

Watch presentation →

AI Database Comparison: MongoDB vs. PostgreSQL and Pgvector

Large language models (LLMs) are rapidly reshaping how we build AI solutions, from retrieval-augmented generation (RAG) to AI agents and agentic systems that continuously reason, reflect, and act on new data. The database technology you choose to power these applications can significantly impact the performance, scalability, and success of your AI application.

In this webinar, Staff Developer Advocate Richmond Alake will compare two vector search solutions — PostgreSQL with pgvector and MongoDB Atlas Vector Search — and guide you through selecting the right option for your AI workloads. Whether you're a data engineer, AI architect, or developer, you'll walk away with actionable insights on how to think about and optimize critical metrics like latency and throughput to meet the demands of modern AI applications.

What you'll learn:

- How RAG boosts LLM-based applications by integrating external data in real time, and how semantic search and RAG can be implemented using both databases.
- How PostgreSQL/pgvector and MongoDB Atlas handle the high-performance vector operations crucial for tasks like semantic search and recommendation.
- How robust vector databases enable AI agents to reason, plan, and act autonomously, creating truly dynamic and interactive AI experiences.
- How a real-world application using a financial Q&A dataset illustrates practical deployment and optimization strategies.
- How key metrics like latency and throughput directly affect the success of LLM applications, and how to apply proven tuning techniques.
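The latency and throughput metrics highlighted above are straightforward to measure yourself. A minimal sketch, assuming `run_query` is a hypothetical stand-in for a real vector search round trip against either database:

```python
# Minimal benchmark sketch for the latency/throughput metrics discussed
# above. run_query() is a placeholder; in practice it would issue a real
# vector search query against PostgreSQL/pgvector or MongoDB Atlas.
import time
import statistics

def run_query() -> None:
    # Stand-in for one search round trip (~1 ms here).
    time.sleep(0.001)

def benchmark(n: int = 100) -> dict[str, float]:
    latencies = []
    start = time.perf_counter()
    for _ in range(n):
        t0 = time.perf_counter()
        run_query()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    latencies.sort()
    return {
        "p50_ms": statistics.median(latencies) * 1000,
        "p95_ms": latencies[int(0.95 * n) - 1] * 1000,  # tail latency
        "throughput_qps": n / elapsed,
    }

print(benchmark())
```

Comparing p50 against p95 is the useful part: two databases can look identical at the median while one has a much heavier tail, which is what end users actually feel.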

Watch presentation →