
Integrate Vector Search with AI Technologies

You can use Atlas Vector Search with popular AI providers and LLMs through their standard APIs. MongoDB and partners also provide specific product integrations to help you leverage Atlas Vector Search in your RAG and AI-powered applications.

This page highlights notable AI integrations that MongoDB and partners have developed. For a complete list of integrations and partner services, see Explore MongoDB Partner Ecosystem.

You can integrate Atlas Vector Search with the following open-source frameworks to store custom data in Atlas and implement RAG.

LangChain is a framework that simplifies the creation of LLM applications through the use of "chains": LangChain-specific components that you can combine for a variety of use cases, including RAG.

To get started, see Get Started with the LangChain Integration and the related tutorials.
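
As an illustration, the following minimal sketch uses the MongoDBAtlasVectorSearch vector store from the langchain-mongodb package. The connection string, namespace, and index name are placeholders, and it assumes an Atlas Vector Search index already exists on the collection and that an OpenAI API key is configured for the embedding model:

```python
from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings

# Placeholders: use your own connection string, namespace, and index name.
vector_store = MongoDBAtlasVectorSearch.from_connection_string(
    "<connection-string>",
    "<database>.<collection>",                    # Atlas namespace
    OpenAIEmbeddings(model="text-embedding-3-small"),
    index_name="vector_index",                    # Atlas Vector Search index name
)

# Store documents: the integration embeds each text and writes it to Atlas.
vector_store.add_texts(["Atlas Vector Search runs approximate nearest neighbor queries."])

# Retrieve: the query is embedded and matched against stored vectors in Atlas.
results = vector_store.similarity_search("How does Atlas run vector queries?", k=1)
print(results[0].page_content)
```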

LangGraph is a specialized framework within the LangChain ecosystem for building AI agents and complex multi-agent workflows. Its graph-based approach lets you dynamically determine your application's execution path, enabling advanced agentic use cases. It also supports features such as persistence, streaming, and memory.

To get started, see Integrate MongoDB with LangGraph.
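
For example, the following sketch persists a trivial graph's state to MongoDB with the MongoDBSaver checkpointer from the langgraph-checkpoint-mongodb package, which backs LangGraph's persistence and memory features. The connection string and thread ID are placeholders, and the single node stands in for real agent logic:

```python
from typing import TypedDict

from langgraph.checkpoint.mongodb import MongoDBSaver
from langgraph.graph import END, START, StateGraph


class State(TypedDict):
    count: int


def increment(state: State) -> State:
    # A trivial node; a real agent would call tools or an LLM here.
    return {"count": state["count"] + 1}


builder = StateGraph(State)
builder.add_node("increment", increment)
builder.add_edge(START, "increment")
builder.add_edge("increment", END)

# Checkpoints are written to MongoDB, so a thread's state survives restarts.
with MongoDBSaver.from_conn_string("<connection-string>") as checkpointer:
    graph = builder.compile(checkpointer=checkpointer)
    config = {"configurable": {"thread_id": "thread-1"}}
    print(graph.invoke({"count": 0}, config))   # {'count': 1}
    print(graph.get_state(config).values)       # state restored from MongoDB
```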

LangChainGo is a framework that simplifies the creation of LLM applications in Go. LangChainGo incorporates the capabilities of LangChain into the Go ecosystem. You can use LangChainGo for a variety of use cases, including semantic search and RAG.

To get started, see Get Started with the LangChainGo Integration.

LangChain4j is a framework that simplifies the creation of LLM applications in Java. LangChain4j combines concepts and functionality from LangChain, Haystack, LlamaIndex, and other sources. You can use LangChain4j for a variety of use cases, including semantic search and RAG.

To get started, see Get Started with the LangChain4j Integration.

LlamaIndex is a framework that simplifies how you connect custom data sources to LLMs. It provides several tools to help you load and prepare vector embeddings for RAG applications.

To get started, see Get Started with the LlamaIndex Integration.
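
The following minimal sketch stores and queries embeddings through the llama-index-vector-stores-mongodb package. The connection string, database, collection, and index names are placeholders; parameter names such as vector_index_name can vary between package versions, and the default LLM and embedding model assume an OpenAI API key is configured:

```python
import pymongo
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch

# Placeholders: point these at your own cluster, database, and index.
mongo_client = pymongo.MongoClient("<connection-string>")
vector_store = MongoDBAtlasVectorSearch(
    mongo_client,
    db_name="<database>",
    collection_name="<collection>",
    vector_index_name="vector_index",
)

# Embed the documents and store them in Atlas through the vector store.
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    [Document(text="Atlas Vector Search indexes embeddings for semantic retrieval.")],
    storage_context=storage_context,
)

# Query: retrieves relevant chunks from Atlas and synthesizes an answer with the LLM.
response = index.as_query_engine().query("What does Atlas Vector Search index?")
print(response)
```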

Microsoft Semantic Kernel is an SDK that allows you to combine various AI services with your applications. You can use Semantic Kernel for a variety of use cases, including RAG.

To get started, see the Semantic Kernel integration tutorials for C# and Python.
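
As a rough illustration, the following sketch uses the Semantic Kernel Python SDK's MongoDB Atlas memory connector to save and search text. Treat it as an approximation: the memory API has changed across SDK versions, so class and parameter names may differ in your version, and the connection string, database, collection, and index names are placeholders:

```python
import asyncio

from semantic_kernel.connectors.ai.open_ai import OpenAITextEmbedding
from semantic_kernel.connectors.memory.mongodb_atlas import MongoDBAtlasMemoryStore
from semantic_kernel.memory import SemanticTextMemory


async def main() -> None:
    # Placeholders: use your own connection string, database, and index names.
    store = MongoDBAtlasMemoryStore(
        connection_string="<connection-string>",
        database_name="<database>",
        index_name="vector_index",
    )
    memory = SemanticTextMemory(
        storage=store,
        embeddings_generator=OpenAITextEmbedding(ai_model_id="text-embedding-ada-002"),
    )

    # Embeds the text and stores it in the named Atlas collection.
    await memory.save_information(
        collection="<collection>", id="1",
        text="Atlas Vector Search supports semantic retrieval.",
    )

    # Embeds the query and runs a vector search against the collection.
    results = await memory.search("<collection>", "What does Atlas support?")
    print(results[0].text if results else "no results")


asyncio.run(main())
```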

Haystack is a framework for building custom applications with LLMs, embedding models, vector search, and more. It enables use cases such as question-answering and RAG.

To get started, see Get Started with the Haystack Integration.
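
The following sketch uses the mongodb-atlas-haystack integration to write an embedded document to Atlas and retrieve it through a query pipeline. The connection string, database, collection, and index names are placeholders, and parameter names may vary slightly between integration versions:

```python
import os

from haystack import Document, Pipeline
from haystack.components.embedders import (
    SentenceTransformersDocumentEmbedder,
    SentenceTransformersTextEmbedder,
)
from haystack_integrations.components.retrievers.mongodb_atlas import (
    MongoDBAtlasEmbeddingRetriever,
)
from haystack_integrations.document_stores.mongodb_atlas import MongoDBAtlasDocumentStore

# The document store reads the connection string from this environment variable.
os.environ["MONGO_CONNECTION_STRING"] = "<connection-string>"

document_store = MongoDBAtlasDocumentStore(
    database_name="<database>",
    collection_name="<collection>",
    vector_search_index="vector_index",   # name of your Atlas Vector Search index
)

# Embed and write a document to Atlas.
doc = Document(content="Atlas Vector Search supports semantic retrieval.")
doc_embedder = SentenceTransformersDocumentEmbedder()
doc_embedder.warm_up()
document_store.write_documents(doc_embedder.run([doc])["documents"])

# Query pipeline: embed the question, then retrieve similar documents from Atlas.
pipeline = Pipeline()
pipeline.add_component("embedder", SentenceTransformersTextEmbedder())
pipeline.add_component("retriever", MongoDBAtlasEmbeddingRetriever(document_store=document_store))
pipeline.connect("embedder.embedding", "retriever.query_embedding")

print(pipeline.run({"embedder": {"text": "What does Atlas support?"}}))
```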

Spring AI is an application framework that allows you to apply Spring design principles to your AI application. You can use Spring AI for a variety of use cases, including semantic search and RAG.

To get started, see Get Started with the Spring AI Integration.

You can also integrate Atlas Vector Search with the following AI services.

Amazon Bedrock is a fully managed service for building generative AI applications. You can integrate Atlas Vector Search as a knowledge base for Amazon Bedrock to store custom data in Atlas and implement RAG.

To get started, see Get Started with the Amazon Bedrock Knowledge Base Integration.
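
Once a knowledge base is connected to Atlas, you query it through the standard Bedrock APIs. The following sketch uses boto3's bedrock-agent-runtime client; the region and knowledge base ID are hypothetical placeholders, and it assumes the knowledge base has already been created with Atlas as its vector store:

```python
import boto3

# Assumes a knowledge base already configured with Atlas as its vector store.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve(
    knowledgeBaseId="<knowledge-base-id>",
    retrievalQuery={"text": "What is Atlas Vector Search?"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 3}},
)

# Each result is a document chunk that Bedrock retrieved from Atlas.
for result in response["retrievalResults"]:
    print(result["content"]["text"])
```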