LangChain-MongoDB is a dedicated package that provides “long-term memory” capabilities for LLMs — vector store, conversation history, and semantic caching.
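As a sketch of how the package is typically wired up (assuming the langchain-mongodb and langchain-openai packages, placeholder connection details, and a pre-created Atlas Vector Search index named "vector_index"):

```python
# Minimal sketch: LangChain vector store backed by MongoDB Atlas Vector Search.
# The connection string, namespace, and index name below are placeholders.
from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings

vector_store = MongoDBAtlasVectorSearch.from_connection_string(
    "mongodb+srv://<user>:<password>@<cluster>.mongodb.net",  # placeholder URI
    "my_db.my_collection",                                    # namespace: "database.collection"
    OpenAIEmbeddings(model="text-embedding-3-small"),
    index_name="vector_index",                                # hypothetical index name
)

vector_store.add_texts(["MongoDB Atlas Vector Search stores embeddings."])
results = vector_store.similarity_search("What stores embeddings?", k=3)
```

The same package also ships classes for the other two memory capabilities mentioned above, such as MongoDBChatMessageHistory for conversation history and MongoDBAtlasSemanticCache for semantic caching.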
MongoDB Atlas Vector Search integrates with LlamaIndex to provide “long-term memory” to LLMs as well as a store for document chunks.
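A comparable sketch with LlamaIndex, assuming the llama-index-vector-stores-mongodb package, an OPENAI_API_KEY for the default embedding model, and a pre-created index; note that the index-name keyword has varied across package versions:

```python
# Minimal sketch: LlamaIndex storing document chunks and embeddings in Atlas.
# Database, collection, and index names are placeholders.
import pymongo
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch

client = pymongo.MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
store = MongoDBAtlasVectorSearch(
    mongodb_client=client,
    db_name="my_db",
    collection_name="my_collection",
    vector_index_name="vector_index",  # keyword has differed by version (e.g. index_name)
)

index = VectorStoreIndex.from_documents(
    [Document(text="Atlas Vector Search stores document chunks and embeddings.")],
    storage_context=StorageContext.from_defaults(vector_store=store),
)
response = index.as_query_engine().query("Where are the chunks stored?")
print(response)
```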
Vector embeddings generated by OpenAI can be stored in MongoDB Atlas Vector Search to build high-performance generative AI applications.
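For illustration, a minimal end-to-end sketch with the OpenAI and PyMongo clients; the database, collection, field, and index names are placeholders:

```python
# Minimal sketch: embed text with OpenAI, store it in Atlas, and query with $vectorSearch.
from openai import OpenAI
from pymongo import MongoClient

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
collection = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")["my_db"]["articles"]

def embed(text: str) -> list[float]:
    # Generate a vector embedding for a piece of text.
    return openai_client.embeddings.create(
        model="text-embedding-3-small", input=text
    ).data[0].embedding

# Store a document alongside its embedding.
text = "Atlas Vector Search stores OpenAI embeddings."
collection.insert_one({"text": text, "embedding": embed(text)})

# Query the collection by vector similarity (assumes an index named "vector_index"
# on the "embedding" field).
pipeline = [{
    "$vectorSearch": {
        "index": "vector_index",
        "path": "embedding",
        "queryVector": embed("Where are OpenAI embeddings stored?"),
        "numCandidates": 100,
        "limit": 5,
    }
}]
for doc in collection.aggregate(pipeline):
    print(doc["text"])
```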
Hugging Face provides access to many open source models that can easily be used to generate vector embeddings for storage in Atlas Vector Search.
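A short sketch using an open source sentence-transformers model from Hugging Face to generate embeddings locally and store them; the model, collection, and field names are illustrative:

```python
# Minimal sketch: generate embeddings with a Hugging Face model and store them in Atlas.
from pymongo import MongoClient
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # example open source model
collection = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")["my_db"]["articles"]

texts = [
    "Open source models can generate embeddings locally.",
    "Atlas Vector Search stores and indexes those embeddings.",
]
collection.insert_many(
    [{"text": t, "embedding": model.encode(t).tolist()} for t in texts]
)
```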
Vector embeddings generated by Cohere can be stored in MongoDB Atlas Vector Search to build high-performance generative AI applications.
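A hedged sketch with the Cohere Python SDK; Cohere's v3 embedding models distinguish between document and query embeddings via the input_type parameter, and the collection names here are placeholders:

```python
# Minimal sketch: embed documents with Cohere and store them in Atlas.
import cohere
from pymongo import MongoClient

co = cohere.Client()  # reads the CO_API_KEY environment variable
collection = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")["my_db"]["articles"]

docs = ["Cohere embeddings can be stored in Atlas Vector Search."]
doc_embeddings = co.embed(
    texts=docs,
    model="embed-english-v3.0",
    input_type="search_document",  # use "search_query" when embedding search queries
).embeddings

collection.insert_many(
    [{"text": t, "embedding": e} for t, e in zip(docs, doc_embeddings)]
)
```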
Semantic Kernel is an SDK that simplifies building LLM applications with programming languages like C# and Python. Atlas Vector Search integrates to provide “memory” for LLM applications.
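A rough sketch of the memory integration, assuming the MongoDB Atlas memory connector that has shipped with the semantic-kernel Python package; import paths and constructor arguments have changed between releases, so treat the exact names here as assumptions:

```python
# Rough sketch: Semantic Kernel "memory" persisted in MongoDB Atlas Vector Search.
# Connection string, database, and index names are placeholders.
import asyncio
from semantic_kernel.connectors.ai.open_ai import OpenAITextEmbedding
from semantic_kernel.connectors.memory.mongodb_atlas import MongoDBAtlasMemoryStore
from semantic_kernel.memory.semantic_text_memory import SemanticTextMemory

async def main():
    store = MongoDBAtlasMemoryStore(
        connection_string="mongodb+srv://<user>:<password>@<cluster>.mongodb.net",
        database_name="sk_memory",   # hypothetical database name
        index_name="vector_index",   # hypothetical Atlas Vector Search index
    )
    memory = SemanticTextMemory(
        storage=store,
        embeddings_generator=OpenAITextEmbedding(ai_model_id="text-embedding-3-small"),
    )
    await memory.save_information("facts", id="1", text="Atlas stores Semantic Kernel memories.")
    results = await memory.search("facts", "Where are memories stored?", limit=1)
    print(results[0].text)

asyncio.run(main())
```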
Knowledge Bases for Amazon Bedrock is a fully managed capability that allows for implementing the entire RAG workflow, from ingestion to retrieval. Atlas Vector Search integrates natively and securely.
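For example, once a knowledge base has been configured with MongoDB Atlas as its vector store, retrieval can be driven through boto3's bedrock-agent-runtime client; the knowledge base ID and region below are placeholders:

```python
# Sketch: retrieve chunks from a Bedrock knowledge base backed by Atlas Vector Search.
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve(
    knowledgeBaseId="KB_ID_PLACEHOLDER",  # placeholder knowledge base ID
    retrievalQuery={"text": "How does Atlas Vector Search integrate with Bedrock?"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 3}},
)
for result in response["retrievalResults"]:
    print(result["content"]["text"], result.get("score"))
```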
KNN stands for "K Nearest Neighbors," which is the algorithm frequently used to find vectors near one another.
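For intuition, here is a tiny brute-force KNN over a handful of vectors using cosine similarity; this is purely illustrative and not an Atlas API:

```python
# Illustrative brute-force k-nearest-neighbors search using cosine similarity.
import numpy as np

def knn(query: np.ndarray, vectors: np.ndarray, k: int) -> np.ndarray:
    # Cosine similarity between the query and every stored vector.
    sims = vectors @ query / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(query))
    return np.argsort(-sims)[:k]  # indices of the k most similar vectors

vectors = np.array([[0.9, 0.1], [0.1, 0.9], [0.8, 0.3]])
print(knn(np.array([1.0, 0.0]), vectors, k=2))  # -> [0 2]
```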
ANN stands for "Approximate Nearest Neighbors" and it is an approach to finding similar vectors that trades accuracy in favor of performance. This is one of the core algorithms used to power Atlas Vector Search. Our algorithm for Approximate Nearest Neighbor search uses the Hierarchical Navigable Small World (HNSW) graph for efficient indexing and querying of millions of vectors.
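In an Atlas aggregation pipeline, this accuracy-versus-performance trade-off is exposed through the numCandidates field of the $vectorSearch stage; the sketch below assumes a hypothetical collection with an "embedding" field indexed under the name "vector_index", with a placeholder query vector:

```python
# Sketch: approximate nearest neighbor (ANN) query via $vectorSearch.
from pymongo import MongoClient

collection = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")["my_db"]["articles"]
query_vector = [0.12, -0.04, 0.33]  # placeholder; use a real embedding of the indexed dimensionality

pipeline = [{
    "$vectorSearch": {
        "index": "vector_index",
        "path": "embedding",
        "queryVector": query_vector,
        "numCandidates": 200,  # candidates explored in the HNSW graph; raise for accuracy, lower for speed
        "limit": 10,           # number of results returned
    }
}]
for doc in collection.aggregate(pipeline):
    print(doc.get("text"))
```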
In MongoDB Atlas, you can implement exact K nearest neighbor search via the $vectorSearch stage. This method is guaranteed to return the exact closest vectors to a query vector, with the number of results specified by the limit field. Exact vector search query execution can maintain sub-second latency for unfiltered queries over up to 10,000 documents. It can also provide low-latency responses for highly selective filters that restrict a broad set of documents to 10,000 documents or fewer, ordered by vector relevance.
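A sketch of such an exact query, again with placeholder names; setting exact to true and omitting numCandidates switches $vectorSearch from approximate to exhaustive search:

```python
# Sketch: exact nearest neighbor (ENN) query via $vectorSearch.
from pymongo import MongoClient

collection = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")["my_db"]["articles"]
query_vector = [0.12, -0.04, 0.33]  # placeholder; use a real embedding of the indexed dimensionality

exact_pipeline = [{
    "$vectorSearch": {
        "index": "vector_index",
        "path": "embedding",
        "queryVector": query_vector,
        "exact": True,  # exact nearest neighbors; numCandidates is omitted
        "limit": 10,    # number of closest vectors to return
        # "filter": {"category": "docs"},  # optional pre-filter on indexed filter fields
    }
}]
results = list(collection.aggregate(exact_pipeline))
```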
Start with the multi-cloud database service built for resilience, scale, and the highest levels of data privacy and security.
Bring your data to life instantly. Create, share, and embed visualizations for real-time insights and business intelligence.
Analyze rich data easily across Atlas and AWS S3. Combine, transform, and enrich data from multiple sources without complex integrations.