
Get Started with the LlamaIndex Integration

On this page

  • Background
  • Prerequisites
  • Set Up the Environment
  • Use Atlas as a Vector Store
  • Create the Atlas Vector Search Index
  • Run Vector Search Queries
  • Answer Questions on Your Data
  • Next Steps

You can integrate Atlas Vector Search with LlamaIndex to implement retrieval-augmented generation (RAG) in your LLM application. This tutorial demonstrates how to start using Atlas Vector Search with LlamaIndex to perform semantic search on your data and build a RAG implementation. Specifically, you perform the following actions:

  1. Set up the environment.

  2. Store custom data on Atlas.

  3. Create an Atlas Vector Search index on your data.

  4. Run the following vector search queries:

    • Semantic search.

    • Semantic search with metadata pre-filtering.

  5. Implement RAG by using Atlas Vector Search to answer questions on your data.

Background

LlamaIndex is an open-source framework designed to simplify how you connect custom data sources to LLMs. It provides several tools such as data connectors, indexes, and query engines to help you load and prepare vector embeddings for RAG applications.

By integrating Atlas Vector Search with LlamaIndex, you can use Atlas as a vector database and use Atlas Vector Search to implement RAG by retrieving semantically similar documents from your data. To learn more about RAG, see Key Concepts.

Prerequisites

To complete this tutorial, you must have the following:

  • An Atlas cluster running MongoDB version 6.0.11, 7.0.2, or later (including RCs).

  • An OpenAI API key. You must have a paid OpenAI account with credits available for API requests.

  • An environment to run interactive Python notebooks, such as Colab.

Set Up the Environment

First, set up the environment for this tutorial by copying and pasting the following code snippets into your notebook.

1. Install and import dependencies.

Run the following command to install the required packages:

!pip install llama-index llama-index-vector-stores-mongodb llama-index-embeddings-openai pymongo

Then, run the following code to import the required packages:

import getpass, os, pymongo, pprint
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, StorageContext
from llama_index.core.settings import Settings
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.vector_stores import MetadataFilter, MetadataFilters, ExactMatchFilter, FilterOperator
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch
2. Define environment variables.

Run the following code. When prompted, provide your OpenAI API key and your Atlas cluster's SRV connection string:

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
ATLAS_CONNECTION_STRING = getpass.getpass("MongoDB Atlas SRV Connection String:")

Note

Your connection string should use the following format:

mongodb+srv://<username>:<password>@<clusterName>.<hostname>.mongodb.net
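If you want to confirm the connection string before continuing, one option is to ping the cluster. This check is an optional addition to the tutorial steps, and test_client is an illustrative name:

# Optional: verify the connection string by pinging the cluster
test_client = pymongo.MongoClient(ATLAS_CONNECTION_STRING)
test_client.admin.command("ping")
print("Successfully connected to Atlas")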
3. Configure LlamaIndex settings.

Run the following code to configure settings that are specific to LlamaIndex. These settings specify the following:

  • OpenAI as the LLM used by your application to answer questions on your data.

  • text-embedding-ada-002 as the embedding model used by your application to generate vector embeddings from your data.

  • Chunk size and overlap to customize how LlamaIndex partitions your data for storage.

Settings.llm = OpenAI()
Settings.embed_model = OpenAIEmbedding(model="text-embedding-ada-002")
Settings.chunk_size = 100
Settings.chunk_overlap = 10

Use Atlas as a Vector Store

Then, load custom data into Atlas and instantiate Atlas as a vector database, also called a vector store. Copy and paste the following code snippets into your notebook.

1. Load the sample data.

For this tutorial, you use a publicly accessible PDF document titled MongoDB Atlas Best Practices as the data source for your vector store. This document describes various recommendations and core concepts for managing your Atlas deployments.

To load the sample data, run the following code snippet. It does the following:

  • Creates a new directory called data.

  • Retrieves the PDF from the specified URL and saves it as a file in the directory.

  • Uses the SimpleDirectoryReader data connector to extract raw text and metadata from the file. It also formats the data into documents.

# Load the sample data
!mkdir -p 'data/'
!wget 'https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4HkJP' -O 'data/atlas_best_practices.pdf'
sample_data = SimpleDirectoryReader(input_files=["./data/atlas_best_practices.pdf"]).load_data()
# Print the first document
sample_data[0]

This code returns output similar to the following:

Document(id_='e9893be3-e1a3-4249-9355-e4f42539f508', embedding=None, metadata={'page_label': '1', 'file_name': 'atlas_best_practices.pdf',
'file_path': 'data/atlas_best_practices.pdf', 'file_type': 'application/pdf', 'file_size': 512653, 'creation_date': '2024-02-20',
'last_modified_date': '2020-10-27', 'last_accessed_date': '2024-02-20'}, excluded_embed_metadata_keys=['file_name', 'file_type', 'file_size',
'creation_date', 'last_modified_date', 'last_accessed_date'], excluded_llm_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date',
'last_modified_date', 'last_accessed_date'], relationships={}, text='Mong oDB Atlas Best P racticesJanuary 20 19A MongoD B White P aper\n',
start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n')
2. Instantiate Atlas as a vector store.

Run the following code to create a vector store named atlas_vector_search by using the MongoDBAtlasVectorSearch method, which specifies the following:

  • A connection to your Atlas cluster.

  • llamaindex_db.test as the Atlas database and collection used to store the documents.

  • vector_index as the index to use for querying the vector store.

Then, you save the vector store to a storage context, which is a LlamaIndex container object used to prepare your data for storage.

# Connect to your Atlas cluster
mongodb_client = pymongo.MongoClient(ATLAS_CONNECTION_STRING)
# Instantiate the vector store
atlas_vector_search = MongoDBAtlasVectorSearch(
    mongodb_client,
    db_name = "llamaindex_db",
    collection_name = "test",
    index_name = "vector_index"
)
vector_store_context = StorageContext.from_defaults(vector_store=atlas_vector_search)
3. Store your data as vector embeddings.

Once you've loaded your data and instantiated Atlas as a vector store, generate vector embeddings from your data and store them in Atlas. To do this, you must build a vector store index. This type of index is a LlamaIndex data structure that splits, embeds, and then stores your data in the vector store.

The following code uses the VectorStoreIndex.from_documents method to build the vector store index on your sample data. It turns your sample data into vector embeddings and stores these embeddings as documents in the llamaindex_db.test collection in your Atlas cluster, as specified by the vector store's storage context.

Note

This method uses the embedding model and chunk settings that you configured when you set up your environment.

vector_store_index = VectorStoreIndex.from_documents(
    sample_data, storage_context=vector_store_context, show_progress=True
)

Tip

After running the sample code, you can view your vector embeddings in the Atlas UI by navigating to the llamaindex_db.test collection in your cluster.

Create the Atlas Vector Search Index

To enable vector search queries on your vector store, create an Atlas Vector Search index on the llamaindex_db.test collection.

To create an Atlas Vector Search index, you must have Project Data Access Admin or higher access to the Atlas project.

1. Go to the Clusters page for your project.
  1. If it is not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.

  2. If it is not already displayed, select your desired project from the Projects menu in the navigation bar.

  3. If the Clusters page is not already displayed, click Database in the sidebar.

2. Go to the Atlas Search page for your cluster.
  1. Click your cluster's name.

  2. Click the Atlas Search tab.

3. Define the Atlas Vector Search index.
  1. Click Create Search Index.

  2. Under Atlas Vector Search, select JSON Editor and then click Next.

  3. In the Database and Collection section, find the llamaindex_db database, and select the test collection.

  4. In the Index Name field, enter vector_index.

  5. Replace the default definition with the following index definition and then click Next.

    This index definition specifies indexing the following fields in an index of the vectorSearch type:

    • embedding field as the vector type. The embedding field contains the embeddings created using OpenAI's text-embedding-ada-002 embedding model. The index definition specifies 1536 vector dimensions and measures similarity using cosine.

    • metadata.page_label field as the filter type for pre-filtering data by the page number in the PDF.

    {
      "fields": [
        {
          "type": "vector",
          "path": "embedding",
          "numDimensions": 1536,
          "similarity": "cosine"
        },
        {
          "type": "filter",
          "path": "metadata.page_label"
        }
      ]
    }
4. Click Create Search Index.

A modal window displays to let you know that your index is building.

5. Wait for the index to finish building.

The index should take about one minute to build. While it builds, the Status column reads Initial Sync. When it finishes building, the Status column reads Active.
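If you prefer to create the index from your notebook instead of the Atlas UI, recent PyMongo versions (4.7 or later) can define the same index programmatically. The following is a sketch of that alternative, not part of the original UI procedure:

from pymongo.operations import SearchIndexModel

# Define the same vectorSearch index shown in the JSON Editor step
search_index_model = SearchIndexModel(
    definition={
        "fields": [
            {"type": "vector", "path": "embedding", "numDimensions": 1536, "similarity": "cosine"},
            {"type": "filter", "path": "metadata.page_label"}
        ]
    },
    name="vector_index",
    type="vectorSearch"
)
collection = mongodb_client["llamaindex_db"]["test"]
collection.create_search_index(model=search_index_model)

You can then poll the collection until Atlas reports the index as queryable:

import time

# Wait until the index is ready to use
while True:
    indexes = list(collection.list_search_indexes("vector_index"))
    if indexes and indexes[0].get("queryable"):
        break
    time.sleep(5)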

Run Vector Search Queries

Once Atlas builds your index, return to your notebook and run vector search queries on your data. The following examples demonstrate different queries that you can run on your vectorized data.
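For example, the following sketch runs a basic semantic search by using the vector store index as a retriever. The query string and the similarity_top_k value are illustrative placeholders:

# Instantiate the index as a retriever that returns the three most similar documents
retriever = vector_store_index.as_retriever(similarity_top_k=3)
# Run a semantic search query
nodes = retriever.retrieve("MongoDB Atlas security")
for node in nodes:
    print(node)

To run a semantic search with metadata pre-filtering, pass a MetadataFilters object to the retriever. This sketch assumes you want to restrict results to a specific page of the PDF by using the metadata.page_label field that your index defines as the filter type:

# Restrict the search to documents from page 17 of the PDF
metadata_filters = MetadataFilters(
    filters=[ExactMatchFilter(key="metadata.page_label", value="17")]
)
retriever = vector_store_index.as_retriever(similarity_top_k=3, filters=metadata_filters)
nodes = retriever.retrieve("MongoDB Atlas security")
for node in nodes:
    print(node)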

Answer Questions on Your Data

This section demonstrates how to implement RAG in your application with Atlas Vector Search and LlamaIndex. Now that you've learned how to run vector search queries to retrieve semantically similar documents, run the following code to use Atlas Vector Search to retrieve documents and a LlamaIndex query engine to answer questions based on those documents.
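The following is a minimal sketch of that code, using the VectorIndexRetriever and RetrieverQueryEngine classes that you imported during setup. The question text is an illustrative placeholder:

# Instantiate Atlas Vector Search as a retriever
vector_store_retriever = VectorIndexRetriever(index=vector_store_index, similarity_top_k=5)
# Pass the retriever into a query engine
query_engine = RetrieverQueryEngine(retriever=vector_store_retriever)
# Prompt the LLM to answer a question grounded in the retrieved documents
response = query_engine.query("How can I secure my MongoDB Atlas cluster?")
print(response)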

Next Steps

To explore LlamaIndex's full library of tools for RAG applications, which includes data connectors, indexes, and query engines, see LlamaHub.

To extend the application in this tutorial to have back-and-forth conversations, see Chat Engine.
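For instance, a minimal sketch of a chat engine built from the index in this tutorial might look like the following; the chat_mode value and prompt are illustrative choices:

# Turn the vector store index into a chat engine for multi-turn conversations
chat_engine = vector_store_index.as_chat_engine(chat_mode="condense_question")
response = chat_engine.chat("What security features does Atlas provide?")
print(response)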
