Resources

May 13, 2025

How Iron Mountain Redefined Data Strategy for Modern Business Needs

In the rapidly changing landscape of modern industries, businesses need agile and robust data solutions to stay ahead. Join us on Tuesday, May 13, 2025, from 2-3 p.m. for a foundational webinar on why MongoDB's flexible and powerful architecture, data model, and out-of-the-box capabilities are essential for driving innovation at speed and scale. Presenters from MongoDB and our trusted partner, Iron Mountain, will show you how MongoDB empowers businesses to adapt and thrive in dynamic environments.

Key takeaways:

- Understand the architectural differences between MongoDB and traditional relational databases.
- Discover how MongoDB's document data model enhances development speed and flexibility.
- Explore real-world use cases showcasing MongoDB's effectiveness in supporting diverse applications, from transactions to AI.

Don't miss this opportunity to gain insights that will elevate your ability to deliver transformative user experiences using data. Whether you're a developer, architect, or business leader, this engaging and educational session will show you how to leverage MongoDB for your organization. At the end of the session, you can interact with our experts and have your questions answered.

Can't attend live? Register anyway, and you'll receive a link to watch the recorded session on demand after the webinar concludes. Sign up today to ensure you don't miss this chance to learn from our industry leaders.

Save your seat →

May 22, 2025

Scaling Vector Database Operations with MongoDB and Voyage AI

The performance and scalability of your AI application depend on efficient vector storage and retrieval. In this webinar, we explore how MongoDB Atlas Vector Search and Voyage AI embeddings optimize both through quantization, a technique that reduces the precision of vector embeddings (e.g., float32 to int8) to cut storage costs and improve query performance while managing accuracy trade-offs.

Vector embeddings are the foundation of AI-driven applications, powering capabilities such as retrieval-augmented generation (RAG), semantic search, and agent-based workflows. However, as data volumes grow, so do the cost and complexity of storing and querying high-dimensional vectors.

Join Staff Developer Advocate Richmond Alake to learn how quantization improves vector search efficiency. We'll cover practical strategies for converting embeddings to lower-bit representations, balancing performance with accuracy. In a step-by-step tutorial, you'll see how to apply these optimizations using Voyage AI embeddings to reduce both query latency and infrastructure costs.

Key takeaways:

- How quantization dramatically reduces the memory footprint of embeddings
- How MongoDB Atlas Vector Search integrates automatic quantization to efficiently manage millions of vector embeddings
- Real-world metrics for retrieval latency, resource utilization, and accuracy across float32, int8, and binary embeddings
- How combining binary quantization with a rescoring step yields near float32-level accuracy at a fraction of the computational overhead
- Best practices for balancing speed, cost, and precision, especially at the 1M+ embedding scale essential for RAG, semantic search, and recommendation systems
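To make the float32-to-int8 idea concrete, here is a minimal sketch of scalar quantization in plain Python. This is an illustrative toy, not MongoDB's or Voyage AI's implementation; the function names and example values are invented for the demo.

```python
# Toy scalar quantization sketch (float32 -> int8). Illustrative only;
# not the actual Atlas Vector Search quantization code.

def quantize_int8(vector):
    """Map each float component into the int8 range [-127, 127]."""
    max_abs = max(abs(x) for x in vector) or 1.0
    scale = 127.0 / max_abs
    return [round(x * scale) for x in vector], scale

def dequantize(quantized, scale):
    """Approximately recover the original floats from int8 codes."""
    return [q / scale for q in quantized]

embedding = [0.12, -0.87, 0.45, 0.03]
q, scale = quantize_int8(embedding)
approx = dequantize(q, scale)
# Each component now fits in 1 byte instead of 4 (a 4x memory reduction),
# at the cost of a small rounding error per component.
```

The accuracy trade-off mentioned above is visible here: dequantized values are close to, but not exactly, the originals, which is why rescoring and accuracy metrics matter at scale.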

Save your seat →

May 8, 2025

AI Database Comparison: MongoDB vs. PostgreSQL and Pgvector

Large language models (LLMs) are rapidly reshaping how we build AI solutions. The database technology you choose to power these applications can significantly impact their performance, scalability, and success. In this webinar, Senior Staff Developer Advocate Anant Srivastava will compare two vector search solutions, PostgreSQL with pgvector and MongoDB Atlas Vector Search, and guide you through selecting the right option for your AI workloads. Whether you're a data engineer, AI architect, or developer, you'll learn how to think about and optimize critical metrics like latency and throughput to meet the demands of modern AI applications.

What you'll learn:

- How retrieval-augmented generation (RAG) boosts LLM-based applications by integrating external data in real time.
- How PostgreSQL with pgvector and MongoDB Atlas handle high-performance vector operations for tasks like semantic search and recommendation engines.
- How robust vector databases enable AI agents to reason, plan, and act autonomously, creating truly dynamic and interactive AI experiences.
- How a real-world application built on a financial Q&A dataset illustrates practical deployment and optimization strategies.
- How key metrics like latency and throughput directly affect the success of LLM applications.

Reserve your seat now to be part of this engaging and thought-provoking discussion. Even if you can't attend, register anyway and we'll send you a link to watch on demand at your convenience. Hope to see you there!
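The core vector operation both pgvector and Atlas Vector Search accelerate is nearest-neighbor search over embeddings. As a hedged sketch of what that operation computes, here is a brute-force semantic search using cosine similarity; the document names and vectors are toy values, not real embeddings, and production systems replace this linear scan with an index.

```python
# Toy brute-force semantic search via cosine similarity. Illustrative only;
# real vector databases use approximate-nearest-neighbor indexes instead.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical 3-dimensional "embeddings" for three documents.
documents = {
    "doc_refund": [0.9, 0.1, 0.0],
    "doc_rates":  [0.1, 0.8, 0.2],
    "doc_fees":   [0.2, 0.7, 0.3],
}

def search(query_vec, k=2):
    """Return the k document ids most similar to the query vector."""
    ranked = sorted(documents,
                    key=lambda d: cosine(query_vec, documents[d]),
                    reverse=True)
    return ranked[:k]
```

In a RAG pipeline, the top-k documents returned by this kind of search are fed into the LLM prompt as real-time external context.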

Save your seat →

April 24, 2025

Scaling Vector Database Operations with MongoDB and Voyage AI

11:30 a.m. SGT

The performance and scalability of your AI application depend on efficient vector storage and retrieval. In this webinar, we explore how MongoDB Atlas Vector Search and Voyage AI embeddings optimize both through quantization, a technique that reduces the precision of vector embeddings (e.g., float32 to int8) to cut storage costs and improve query performance while managing accuracy trade-offs.

Vector embeddings are the foundation of AI-driven applications, powering capabilities such as retrieval-augmented generation (RAG), semantic search, and agent-based workflows. However, as data volumes grow, so do the cost and complexity of storing and querying high-dimensional vectors.

Join Senior Staff Developer Advocate Anant Srivastava to learn how quantization improves vector search efficiency. We'll cover practical strategies for converting embeddings to lower-bit representations, balancing performance with accuracy. In a step-by-step tutorial, you'll see how to apply these optimizations using Voyage AI embeddings to reduce both query latency and infrastructure costs.

Key takeaways:

- How quantization dramatically reduces the memory footprint of embeddings
- How MongoDB Atlas Vector Search integrates automatic quantization to efficiently manage millions of vector embeddings
- Real-world metrics for retrieval latency, resource utilization, and accuracy across float32, int8, and binary embeddings
- How combining binary quantization with a rescoring step yields near float32-level accuracy at a fraction of the computational overhead
- Best practices for balancing speed, cost, and precision, especially at the 1M+ embedding scale essential for RAG, semantic search, and recommendation systems
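The "binary quantization with a rescoring step" pattern from the takeaways can be sketched as a two-stage search: a cheap 1-bit Hamming-distance pass picks a shortlist, and the full-precision vectors then rescore it. This is an illustrative toy under invented data, not the actual Atlas Vector Search implementation.

```python
# Toy two-stage search: binary quantization for a coarse shortlist,
# full-precision cosine rescoring for final ranking. Illustrative only.
import math

def binarize(vec):
    """1 bit per dimension: the sign of each component."""
    return tuple(1 if x >= 0 else 0 for x in vec)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# Hypothetical full-precision vectors; indexes stand in for document ids.
vectors = [
    [1.0, 0.9, -0.8],
    [0.2, -0.5, 0.9],
    [0.95, 0.85, -0.75],
    [-1.0, 0.1, 0.3],
    [0.3, -0.4, 0.8],
]

def search(query, vectors, candidates=4, k=2):
    qbits = binarize(query)
    # Stage 1: coarse shortlist by Hamming distance on 1-bit codes.
    shortlist = sorted(range(len(vectors)),
                       key=lambda i: hamming(qbits, binarize(vectors[i])))[:candidates]
    # Stage 2: rescore the shortlist with full-precision cosine similarity.
    return sorted(shortlist, key=lambda i: cosine(query, vectors[i]),
                  reverse=True)[:k]
```

Because the binary codes are 32x smaller than float32 vectors, stage 1 can scan millions of embeddings cheaply, while the small rescoring pass recovers most of the full-precision accuracy.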

Save your seat →

April 29, 2025

AI Database Comparison: MongoDB vs. PostgreSQL and Pgvector

10 a.m. BST

Large language models (LLMs) are rapidly reshaping how we build AI solutions. The database technology you choose to power these applications can significantly impact their performance, scalability, and success. In this webinar, Staff Developer Advocate Richmond Alake will compare two vector search solutions, PostgreSQL with pgvector and MongoDB Atlas Vector Search, and guide you through selecting the right option for your AI workloads. Whether you're a data engineer, AI architect, or developer, you'll learn how to think about and optimize critical metrics like latency and throughput to meet the demands of modern AI applications.

What you'll learn:

- How retrieval-augmented generation (RAG) boosts LLM-based applications by integrating external data in real time.
- How PostgreSQL with pgvector and MongoDB Atlas handle high-performance vector operations for tasks like semantic search and recommendation engines.
- How robust vector databases enable AI agents to reason, plan, and act autonomously, creating truly dynamic and interactive AI experiences.
- How a real-world application built on a financial Q&A dataset illustrates practical deployment and optimization strategies.
- How key metrics like latency and throughput directly affect the success of LLM applications.

Reserve your seat now to be part of this engaging and thought-provoking discussion. Even if you can't attend, register anyway and we'll send you a link to watch on demand at your convenience. Hope to see you there!

Save your seat →