Docs Menu

Vector Quantization

Note

Atlas Vector Search support for the following is available as a Preview feature:

  • Ingestion of BSON BinData vectors with the int1 subtype.

  • Automatic scalar quantization.

  • Automatic binary quantization.

Atlas Vector Search supports automatic quantization of your float vector embeddings (both 32-bit and 64-bit). It also supports ingesting and indexing your pre-quantized scalar and binary vectors from certain embedding models.

Quantization is the process of shrinking full-fidelity vectors into fewer bits. It reduces the amount of main memory required to store each vector in an Atlas Vector Search index by indexing the reduced-representation vectors instead, which lets you store more vectors or vectors with higher dimensions. Quantization therefore reduces resource consumption and improves speed. We recommend quantization for applications with a large number of vectors, typically over 10 million.

Scalar quantization involves first identifying the minimum and maximum values for each dimension of the indexed vectors to establish a range of values for a dimension. Then, the range is divided into equally sized intervals or bins. Finally, each float value is mapped to a bin to convert the continuous float values into discrete integers. In Atlas Vector Search, this quantization reduces the vector embedding's RAM cost to one fourth (1/4) of the pre-quantization cost.
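The following sketch illustrates the idea. It is a conceptual example only, not Atlas Vector Search's internal implementation: it finds the range of one dimension, divides it into 256 equally sized bins, and maps each float to a signed 8-bit integer.

```java
// Conceptual sketch of scalar quantization, not Atlas internals:
// map each float in a dimension's [min, max] range to one of 256 bins.
public class ScalarQuantizationSketch {

    // Quantizes one dimension's values across the indexed vectors to signed 8-bit integers.
    static byte[] quantizeDimension(float[] values) {
        float min = Float.MAX_VALUE, max = -Float.MAX_VALUE;
        for (float v : values) {            // establish the dimension's range
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        float binWidth = (max - min) / 255f; // 256 equally sized bins
        byte[] out = new byte[values.length];
        for (int i = 0; i < values.length; i++) {
            int bin = binWidth == 0 ? 0 : Math.round((values[i] - min) / binWidth);
            out[i] = (byte) (bin - 128);     // shift into the int8 range [-128, 127]
        }
        return out;
    }

    public static void main(String[] args) {
        // Quantize a toy dimension with range [-1, 1]
        byte[] q = quantizeDimension(new float[] {-1.0f, 0.2f, 1.0f});
        System.out.println(java.util.Arrays.toString(q)); // prints [-128, 25, 127]
    }
}
```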

Binary quantization involves assuming a midpoint of 0 for each dimension, which is typically appropriate for embeddings normalized to length 1 such as OpenAI's text-embedding-3-large. Then, each value in the vector is compared to the midpoint and assigned a binary value of 1 if it's greater than the midpoint and a binary value of 0 if it's less than or equal to the midpoint. In Atlas Vector Search, this quantization reduces the vector embedding's RAM cost to one twenty-fourth (1/24) of the pre-quantization cost. The reason it's not 1/32 is because the data structure containing the Hierarchical Navigable Small Worlds graph itself, separate from the vector values, isn't compressed.
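As a conceptual illustration (again, not the actual Atlas Vector Search implementation), binarizing each value against a midpoint of 0 and packing the resulting bits might look like this:

```java
// Conceptual sketch of binary quantization, not Atlas internals:
// compare each value to a midpoint of 0 and pack the resulting bits.
public class BinaryQuantizationSketch {

    // Returns one bit per dimension, packed 8 per byte, most significant bit first.
    static byte[] quantizeToBits(float[] vector) {
        byte[] packed = new byte[(vector.length + 7) / 8];
        for (int i = 0; i < vector.length; i++) {
            if (vector[i] > 0f) {                        // 1 if greater than the midpoint
                packed[i / 8] |= (byte) (1 << (7 - i % 8));
            }                                            // 0 if less than or equal to it
        }
        return packed;
    }

    public static void main(String[] args) {
        // Eight dimensions pack into one byte; signs + - + + - - - + give bits 10110001
        byte[] bits = quantizeToBits(new float[] {0.9f, -0.2f, 0.1f, 0.5f, -0.7f, 0.0f, -0.1f, 0.3f});
        System.out.println(String.format("%8s", Integer.toBinaryString(bits[0] & 0xFF)).replace(' ', '0'));
        // prints 10110001
    }
}
```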

When you run a query, Atlas Vector Search converts the float values in the query vector into a binary vector using the same midpoint, which allows for an efficient comparison between the query vector and the indexed binary vectors. It then rescores, reevaluating the candidates identified by the binary comparison using the original float values associated with those results to further refine them. The full-fidelity vectors are stored in their own data structure on disk, and are referenced only during rescoring when you configure binary quantization, or when you perform exact search against either binary or scalar quantized vectors.


The following table shows the requirements for automatically quantizing and ingesting quantized vectors.

Note

Atlas stores all floating-point values as the double data type internally; therefore, both 32-bit and 64-bit embeddings are compatible with automatic quantization without conversion.

| Requirement | For int1 Ingestion | For int8 Ingestion | For Automatic Scalar Quantization | For Automatic Binary Quantization |
|---|---|---|---|---|
| Requires index definition settings | No | No | Yes | Yes |
| Requires BSON binData format | Yes | Yes | No | No |
| Storage on mongod | binData(int1) | binData(int8) | binData(float32) or array(double) | binData(float32) or array(double) |
| Supported similarity methods | euclidean | cosine, euclidean, dotProduct | cosine, euclidean, dotProduct | cosine, euclidean, dotProduct |
| Supported number of dimensions | Multiple of 8 | 1 to 8192 | 1 to 8192 | Multiple of 8 |
| Supports ENN search | ENN on int1 | ENN on int8 | ENN on float32 | ENN on float32 |

You can configure Atlas Vector Search to automatically quantize float vector embeddings in your collection to reduced representation types, such as int8 (scalar) and binary in your vector indexes.

To set or change the quantization type, specify a quantization field value of either scalar or binary in your index definition. This triggers an index rebuild similar to any other index definition change. The specified quantization type applies to all indexed vectors and query vectors at query-time.
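For example, the following index definition enables scalar quantization on a hypothetical 1024-dimensional embedding field (the field path and number of dimensions are illustrative):

```json
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1024,
      "similarity": "dotProduct",
      "quantization": "scalar"
    }
  ]
}
```

To use binary quantization instead, set the quantization value to binary.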

For most embedding models, we recommend binary quantization with rescoring. If you want to use lower-dimension models that are not quantization-aware, use scalar quantization instead, because it incurs less representational loss.

Atlas Vector Search provides native capabilities for scalar quantization as well as binary quantization with rescoring. Automatic quantization increases scalability and cost savings for your applications by reducing the computational resources required to process your vectors efficiently. It reduces the RAM for mongot by 3.75x for scalar and by 24x for binary; the vector values shrink by 4x and 32x respectively, but the Hierarchical Navigable Small Worlds graph itself does not shrink. This improves performance, even at the highest volume and scale.

We recommend automatic quantization if you have a large number of full-fidelity vectors, typically over 10 million. After quantization, you index reduced-representation vectors without compromising accuracy when retrieving vectors.

To enable automatic quantization:

1

In a new or existing Atlas Vector Search index, specify one of the following quantization types in the fields.quantization field for your index definition:

  • scalar: to produce byte vectors from float input vectors.

  • binary: to produce bit vectors from float input vectors.

If you specify automatic quantization on data that is not an array of float values, Atlas Vector Search silently skips that vector instead of indexing it. Because Atlas stores float values (both 32-bit and 64-bit) as the double type internally, embeddings from models that output either precision work with automatic quantization.

2

The index should take about one minute to build. While it builds, the index is in an initial sync state. When it finishes building, you can start querying the data in your collection.

The specified quantization type applies to all indexed vectors and query vectors at query-time.

Atlas Vector Search also supports ingestion and indexing of scalar and binary quantized vectors from certain embedding models. If you don't already have quantized vectors, you can convert your embeddings to BSON BinData vectors with float32, int1, or int8 subtype.

Note

Atlas Vector Search support for the following is available as a Preview feature:

  • Ingestion of BSON BinData vectors with the int1 subtype.

  • Automatic scalar quantization.

  • Automatic binary quantization.

We recommend ingesting quantized BSON binData vectors for the following use cases:

  • You need to index quantized vector output from embedding models.

  • You have a large number of float vectors and want to reduce the storage and WiredTiger footprint (such as disk and memory usage) in mongod.

BinData is a BSON data type that stores binary data. It compresses your vector embeddings and requires approximately one third of the disk space in your cluster compared to embeddings stored as a standard float32 array. To learn more, see Vector Compression.

This subtype also allows you to index your vectors with alternate types such as int1 or int8 vectors, reducing the memory needed to build the Atlas Vector Search index for your collection. It reduces the RAM for mongot by 3.75x for scalar and by 24x for binary; the vector values shrink by 4x and 32x respectively, but the Hierarchical Navigable Small Worlds graph itself doesn't shrink.

If you don't already have binData vectors, you can convert your embeddings to this format by using any supported driver before writing your data to a collection. The following procedure walks you through the steps for converting your embeddings to the BinData vectors with float32, int8, and int1 subtypes.

BSON BinData vectors with float32, int1, and int8 subtypes are supported by the following drivers:




To quantize your BSON binData vectors, you must have the following:

  • An Atlas cluster running MongoDB version 6.0.11, 7.0.2, or later.

    Ensure that your IP address is included in your Atlas project's access list.

  • Access to an embedding model that supports byte vector output.

    The outputs from the following embedding models can be used to generate BSON binData vectors with a supported MongoDB driver:

    | Embedding Model Provider | Embedding Model |
    |---|---|
    | Cohere | embed-english-v3.0 |
    | Nomic | nomic-embed-text-v1.5 |
    | Jina AI | jina-embeddings-v2-base-en |
    | Mixedbread AI | mxbai-embed-large-v1 |

    Scalar quantization preserves recall for these models because they are trained to be quantization-aware. Therefore, recall degradation for scalar quantized embeddings produced by these models is minimal, even at lower dimensions such as 384.

  • Java Development Kit (JDK) version 8 or later.

  • An environment to set up and run a Java application. We recommend that you use an integrated development environment (IDE) such as IntelliJ IDEA or Eclipse IDE to configure Maven or Gradle to build and run your project.

  • A terminal and code editor to run your Node.js project.

  • npm and Node.js installed.

  • An environment to run interactive Python notebooks such as VS Code or Colab.

The examples in this procedure use either new data or existing data and embeddings generated by using Cohere's embed-english-v3.0 model. The example for new data uses sample text strings, which you can replace with your own data. The example for existing data uses a subset of documents without any embeddings from the listingsAndReviews collection in the sample_airbnb database, which you can replace with your own database and collection (with or without any embeddings).

Select the tab based on whether you want to quantize binData vectors for new data or for data you already have in your Atlas cluster.

Create a Java project in your IDE with the dependencies configured for the MongoDB Java Driver, and then perform the following steps in the project. To try the example, replace the placeholders with valid values.

1
  1. From your IDE, create a Java project using Maven or Gradle.

  2. Add the following dependencies, depending on your package manager:

    If you are using Maven, add the following dependencies to the dependencies section of your project's pom.xml file:

    pom.xml
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.13.2</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>mongodb-driver-sync</artifactId>
            <version>5.3.1</version>
        </dependency>
        <dependency>
            <groupId>com.cohere</groupId>
            <artifactId>cohere-java</artifactId>
            <version>1.6.0</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>2.0.16</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>2.0.16</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    If you are using Gradle, add the following to the dependencies block in your project's build.gradle file:

    build.gradle
    dependencies {
        // MongoDB Java Sync Driver v5.3.1 or later
        implementation 'org.mongodb:mongodb-driver-sync:[5.3.1,)'
        // Java library for working with Cohere models
        implementation 'com.cohere:cohere-java:1.6.0'
        // SLF4J (The Simple Logging Facade for Java)
        testImplementation("org.slf4j:slf4j-simple:2.0.16")
        implementation("org.slf4j:slf4j-api:2.0.16")
    }
  3. Run your package manager to install the dependencies to your project.

2

Note

This example sets the variables for the project in the IDE. Production applications might manage environment variables through a deployment configuration, CI/CD pipeline, or secrets manager, but you can adapt the provided code to fit your use case.

In your IDE, create a new configuration template and add the following variables to your project:

  • If you are using IntelliJ IDEA, create a new Application run configuration template, then add your variables as semicolon-separated values in the Environment variables field (for example, FOO=123;BAR=456). Apply the changes and click OK.

    To learn more, see the Create a run/debug configuration from a template section of the IntelliJ IDEA documentation.

  • If you are using Eclipse, create a new Java Application launch configuration, then add each variable as a new key-value pair in the Environment tab. Apply the changes and click OK.

    To learn more, see the Creating a Java application launch configuration section of the Eclipse IDE documentation.

Environment variables
COHERE_API_KEY=<api-key>
MONGODB_URI=<connection-string>

Update the placeholders with the following values:

  • Replace the <api-key> placeholder value with your Cohere API key.

  • Replace the <connection-string> placeholder value with the SRV connection string for your Atlas cluster.

    Your connection string should use the following format:

    mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net
3

You can use an embedding model provider to generate float, int8, and int1 embeddings for your data and then use the MongoDB Java driver to convert your native vector embedding to BSON vectors. The following sample code uses Cohere's embed API to generate full-precision vectors.

  1. Create a new file named GenerateAndConvertEmbeddings.java in your Java project.

    touch GenerateAndConvertEmbeddings.java
  2. Copy and paste the following code in the GenerateAndConvertEmbeddings.java file.

    This code does the following:

    • Generates the float32, int8, and ubinary vector embeddings by using Cohere's embed API.

    • Converts the embeddings to BSON binData vectors by using MongoDB Java driver.

    • Creates a file named embeddings.json and saves the data with embeddings in the file to upload to Atlas.

    GenerateAndConvertEmbeddings.java
    import com.cohere.api.Cohere;
    import com.cohere.api.requests.EmbedRequest;
    import com.cohere.api.types.EmbedByTypeResponse;
    import com.cohere.api.types.EmbedByTypeResponseEmbeddings;
    import com.cohere.api.types.EmbedInputType;
    import com.cohere.api.types.EmbedResponse;
    import com.cohere.api.types.EmbeddingType;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Objects;
    import java.util.Optional;
    import org.bson.BinaryVector;
    import org.bson.Document;

    public class GenerateAndConvertEmbeddings {

        // List of text data to embed
        private static final List<String> DATA = List.of(
                "The Great Wall of China is visible from space.",
                "The Eiffel Tower was completed in Paris in 1889.",
                "Mount Everest is the highest peak on Earth at 8,848m.",
                "Shakespeare wrote 37 plays and 154 sonnets during his lifetime.",
                "The Mona Lisa was painted by Leonardo da Vinci."
        );

        public static void main(String[] args) {
            // Cohere API key for authentication
            String apiKey = System.getenv("COHERE_API_KEY");

            // Fetch embeddings from the Cohere API
            EmbedByTypeResponseEmbeddings embeddings = fetchEmbeddingsFromCohere(apiKey);
            Document bsonEmbeddings = convertEmbeddingsToBson(embeddings);

            writeEmbeddingsToFile(bsonEmbeddings, "embeddings.json");
        }

        // Fetches embeddings based on input data from the Cohere API
        private static EmbedByTypeResponseEmbeddings fetchEmbeddingsFromCohere(String apiKey) {
            if (Objects.isNull(apiKey) || apiKey.isEmpty()) {
                throw new RuntimeException("API key not found. Please set COHERE_API_KEY in your environment.");
            }

            Cohere cohere = Cohere.builder().token(apiKey).clientName("embed-example").build();

            try {
                EmbedRequest request = EmbedRequest.builder()
                        .model("embed-english-v3.0")
                        .inputType(EmbedInputType.SEARCH_DOCUMENT)
                        .texts(DATA)
                        .embeddingTypes(List.of(EmbeddingType.FLOAT, EmbeddingType.INT_8, EmbeddingType.UBINARY))
                        .build();

                EmbedResponse response = cohere.embed(request);
                Optional<EmbedByTypeResponse> optionalEmbeddingsWrapper = response.getEmbeddingsByType();

                return optionalEmbeddingsWrapper.orElseThrow().getEmbeddings();
            } catch (Exception e) {
                System.err.println("Error fetching embeddings: " + e.getMessage());
                throw e;
            }
        }

        // Converts embeddings to BSON binary vectors using the MongoDB Java Driver
        private static Document convertEmbeddingsToBson(EmbedByTypeResponseEmbeddings embeddings) {
            List<List<Double>> floatEmbeddings = embeddings.getFloat().orElseThrow();
            List<List<Integer>> int8Embeddings = embeddings.getInt8().orElseThrow();
            List<List<Integer>> ubinaryEmbeddings = embeddings.getUbinary().orElseThrow();

            List<Document> bsonEmbeddings = new ArrayList<>();
            for (int i = 0; i < floatEmbeddings.size(); i++) {
                Document bsonEmbedding = new Document()
                        .append("text", DATA.get(i))
                        .append("embeddings_float32", BinaryVector.floatVector(listToFloatArray(floatEmbeddings.get(i))))
                        .append("embeddings_int8", BinaryVector.int8Vector(listToByteArray(int8Embeddings.get(i))))
                        .append("embeddings_int1", BinaryVector.packedBitVector(listToByteArray(ubinaryEmbeddings.get(i)), (byte) 0));

                bsonEmbeddings.add(bsonEmbedding);
            }

            return new Document("data", bsonEmbeddings);
        }

        // Writes embeddings to a JSON file
        private static void writeEmbeddingsToFile(Document bsonEmbeddings, String fileName) {
            try (FileOutputStream fos = new FileOutputStream(fileName)) {
                fos.write(bsonEmbeddings.toJson().getBytes());
                System.out.println("Embeddings saved to " + fileName);
            } catch (IOException e) {
                System.out.println("Error writing embeddings to file: " + e.getMessage());
            }
        }

        // Converts a List of Doubles to an array of floats
        private static float[] listToFloatArray(List<Double> list) {
            float[] array = new float[list.size()];
            for (int i = 0; i < list.size(); i++) {
                array[i] = list.get(i).floatValue();
            }
            return array;
        }

        // Converts a List of Integers to an array of bytes
        private static byte[] listToByteArray(List<Integer> list) {
            byte[] array = new byte[list.size()];
            for (int i = 0; i < list.size(); i++) {
                array[i] = list.get(i).byteValue();
            }
            return array;
        }
    }
  3. If you didn't set the COHERE_API_KEY environment variable, update the code to provide your Cohere API key directly, and save the file.

  4. Compile and run the file using your application run configuration.

    If you are using a terminal, run the following commands to compile and execute your program.

    javac GenerateAndConvertEmbeddings.java
    java GenerateAndConvertEmbeddings
    Embeddings saved to embeddings.json
  5. Verify the embeddings in the embeddings.json file.

To learn more about generating embeddings and converting the embeddings to binData vectors, see How to Create Vector Embeddings.

4

You must upload your data and embeddings to a collection in your Atlas cluster and create an Atlas Vector Search index on the data to run $vectorSearch queries against the data.

  1. Create a new file named UploadDataAndCreateIndex.java in your Java project.

    touch UploadDataAndCreateIndex.java
  2. Copy and paste the following code in the UploadDataAndCreateIndex.java file.

    This code does the following:

    • Uploads the data in the embeddings.json file to your Atlas cluster.

    • Creates an Atlas Vector Search index on the embeddings_float32, embeddings_int8, and embeddings_int1 fields.

    UploadDataAndCreateIndex.java
    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.MongoDatabase;
    import com.mongodb.client.model.SearchIndexModel;
    import com.mongodb.client.model.SearchIndexType;
    import org.bson.Document;
    import org.bson.conversions.Bson;

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Collections;
    import java.util.List;
    import java.util.concurrent.TimeUnit;
    import java.util.stream.StreamSupport;

    public class UploadDataAndCreateIndex {

        private static final String MONGODB_URI = System.getenv("MONGODB_URI");
        private static final String DB_NAME = "<DATABASE-NAME>";
        private static final String COLLECTION_NAME = "<COLLECTION-NAME>";
        private static final String INDEX_NAME = "<INDEX-NAME>";

        public static void main(String[] args) {
            try (MongoClient mongoClient = MongoClients.create(MONGODB_URI)) {
                storeEmbeddings(mongoClient);
                setupVectorSearchIndex(mongoClient);
            } catch (IOException | InterruptedException e) {
                e.printStackTrace();
            }
        }

        public static void storeEmbeddings(MongoClient client) throws IOException {
            MongoDatabase database = client.getDatabase(DB_NAME);
            MongoCollection<Document> collection = database.getCollection(COLLECTION_NAME);

            String fileContent = Files.readString(Path.of("embeddings.json"));
            List<Document> documents = parseDocuments(fileContent);

            collection.insertMany(documents);
            System.out.println("Inserted documents into MongoDB");
        }

        private static List<Document> parseDocuments(String jsonContent) throws IOException {
            Document rootDoc = Document.parse(jsonContent);
            return rootDoc.getList("data", Document.class);
        }

        public static void setupVectorSearchIndex(MongoClient client) throws InterruptedException {
            MongoDatabase database = client.getDatabase(DB_NAME);
            MongoCollection<Document> collection = database.getCollection(COLLECTION_NAME);

            Bson definition = new Document(
                    "fields",
                    List.of(
                            new Document("type", "vector")
                                    .append("path", "embeddings_float32")
                                    .append("numDimensions", 1024)
                                    .append("similarity", "dotProduct"),
                            new Document("type", "vector")
                                    .append("path", "embeddings_int8")
                                    .append("numDimensions", 1024)
                                    .append("similarity", "dotProduct"),
                            new Document("type", "vector")
                                    .append("path", "embeddings_int1")
                                    .append("numDimensions", 1024)
                                    .append("similarity", "euclidean")
                    )
            );

            SearchIndexModel indexModel = new SearchIndexModel(
                    INDEX_NAME,
                    definition,
                    SearchIndexType.vectorSearch()
            );

            List<String> result = collection.createSearchIndexes(Collections.singletonList(indexModel));
            System.out.println("Successfully created vector index named: " + result.get(0));
            System.out.println("It may take up to a minute for the index to leave the BUILDING status and become queryable.");

            System.out.println("Polling to confirm the index has changed from the BUILDING status.");
            waitForIndex(collection, INDEX_NAME);
        }

        public static <T> boolean waitForIndex(final MongoCollection<T> collection, final String indexName) {
            long startTime = System.nanoTime();
            long timeoutNanos = TimeUnit.SECONDS.toNanos(60);
            while (System.nanoTime() - startTime < timeoutNanos) {
                Document indexRecord = StreamSupport.stream(collection.listSearchIndexes().spliterator(), false)
                        .filter(index -> indexName.equals(index.getString("name")))
                        .findAny().orElse(null);
                if (indexRecord != null) {
                    if ("FAILED".equals(indexRecord.getString("status"))) {
                        throw new RuntimeException("Search index has FAILED status.");
                    }
                    if (indexRecord.getBoolean("queryable")) {
                        System.out.println(indexName + " index is ready to query");
                        return true;
                    }
                }
                try {
                    Thread.sleep(100); // busy-wait, avoid in production
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(e);
                }
            }
            return false;
        }
    }
  3. Replace the following placeholder values in the code and save the file.

    MONGODB_URI

    Your Atlas cluster connection string if you didn't set the environment variable.

    <DATABASE-NAME>

    Name of the database in your Atlas cluster.

    <COLLECTION-NAME>

    Name of the collection where you want to upload the data.

    <INDEX-NAME>

    Name of the Atlas Vector Search index for the collection.

  4. Compile and run the file using your application run configuration.

    If you are using a terminal, run the following commands to compile and execute your program.

    javac UploadDataAndCreateIndex.java
    java UploadDataAndCreateIndex
    Inserted documents into MongoDB
    Successfully created vector index named: <INDEX_NAME>
    It may take up to a minute for the index to leave the BUILDING status and become queryable.
    Polling to confirm the index has changed from the BUILDING status.
    <INDEX_NAME> index is ready to query
  5. Log in to your Atlas cluster and verify the following:

    • Data in the namespace.

    • Atlas Vector Search index for the collection.

5

To test your embeddings, you can run a query against your collection. Use an embedding model provider to generate float, int8, and int1 embeddings for your query text. The following sample code uses Cohere's embed API to generate full-precision vectors. After generating the embeddings, use the MongoDB Java driver to convert your native vector embeddings to BSON vectors and run a $vectorSearch query against the collection.

  1. Create a new file named CreateEmbeddingsAndRunQuery.java in your Java project.

    touch CreateEmbeddingsAndRunQuery.java
  2. Copy and paste the following code in the CreateEmbeddingsAndRunQuery.java file.

    This code does the following:

    • Generates the float32, int8, and ubinary vector embeddings by using Cohere's embed API.

    • Converts the embeddings to BSON binData vectors by using MongoDB Java driver.

    • Runs the query against your collection.

    CreateEmbeddingsAndRunQuery.java
    import com.cohere.api.Cohere;
    import com.cohere.api.requests.EmbedRequest;
    import com.cohere.api.types.EmbedResponse;
    import com.cohere.api.types.EmbedByTypeResponse;
    import com.cohere.api.types.EmbedByTypeResponseEmbeddings;
    import com.cohere.api.types.EmbeddingType;
    import com.cohere.api.types.EmbedInputType;
    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.MongoDatabase;
    import org.bson.Document;
    import org.bson.conversions.Bson;
    import org.bson.BinaryVector;

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Optional;

    import static com.mongodb.client.model.Aggregates.project;
    import static com.mongodb.client.model.Aggregates.vectorSearch;
    import static com.mongodb.client.model.Projections.fields;
    import static com.mongodb.client.model.Projections.include;
    import static com.mongodb.client.model.Projections.exclude;
    import static com.mongodb.client.model.Projections.metaVectorSearchScore;
    import static com.mongodb.client.model.search.SearchPath.fieldPath;
    import static com.mongodb.client.model.search.VectorSearchOptions.approximateVectorSearchOptions;
    import static java.util.Arrays.asList;

    public class CreateEmbeddingsAndRunQuery {
        private static final String COHERE_API_KEY = System.getenv("COHERE_API_KEY");
        private static final String MONGODB_URI = System.getenv("MONGODB_URI");
        private static final String DB_NAME = "<DATABASE-NAME>";
        private static final String COLLECTION_NAME = "<COLLECTION-NAME>";
        private static final String VECTOR_INDEX_NAME = "<INDEX-NAME>";
        private static final String DATA_FIELD_NAME = "<DATA-FIELD>";

        public static void main(String[] args) {
            String queryText = "<QUERY-TEXT>";

            try {
                CreateEmbeddingsAndRunQuery processor = new CreateEmbeddingsAndRunQuery();
                Map<String, BinaryVector> embeddingsData = processor.generateAndConvertEmbeddings(queryText);
                processor.runVectorSearchQuery(embeddingsData);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        // Generate embeddings using Cohere's embed API from the query text
        public Map<String, BinaryVector> generateAndConvertEmbeddings(String text) throws Exception {
            if (COHERE_API_KEY == null || COHERE_API_KEY.isEmpty()) {
                throw new RuntimeException("API key not found. Set COHERE_API_KEY in your environment.");
            }

            Cohere cohere = Cohere.builder().token(COHERE_API_KEY).build();

            EmbedRequest request = EmbedRequest.builder()
                    .model("embed-english-v3.0")
                    .inputType(EmbedInputType.SEARCH_QUERY)
                    .texts(List.of(text))
                    .embeddingTypes(List.of(EmbeddingType.FLOAT, EmbeddingType.INT_8, EmbeddingType.UBINARY))
                    .build();

            EmbedResponse response = cohere.embed(request);
            Optional<EmbedByTypeResponse> optionalEmbeddingsWrapper = response.getEmbeddingsByType();
            if (optionalEmbeddingsWrapper.isEmpty()) {
                throw new RuntimeException("No embeddings found in the API response.");
            }

            EmbedByTypeResponseEmbeddings embeddings = optionalEmbeddingsWrapper.get().getEmbeddings();
            return createBinaryVectorEmbeddings(embeddings);
        }

        // Convert embeddings to BSON binary vectors using the MongoDB Java Driver
        private static Map<String, BinaryVector> createBinaryVectorEmbeddings(EmbedByTypeResponseEmbeddings embeddings) {
            Map<String, BinaryVector> binaryVectorEmbeddings = new HashMap<>();

            // Convert float embeddings
            List<Double> floatList = embeddings.getFloat().orElseThrow().get(0);
            if (floatList != null) {
                float[] floatData = listToFloatArray(floatList);
                BinaryVector floatVector = BinaryVector.floatVector(floatData);
                binaryVectorEmbeddings.put("float32", floatVector);
            }

            // Convert int8 embeddings
            List<Integer> int8List = embeddings.getInt8().orElseThrow().get(0);
            if (int8List != null) {
                byte[] int8Data = listToByteArray(int8List);
                BinaryVector int8Vector = BinaryVector.int8Vector(int8Data);
                binaryVectorEmbeddings.put("int8", int8Vector);
            }

            // Convert ubinary embeddings
            List<Integer> ubinaryList = embeddings.getUbinary().orElseThrow().get(0);
            if (ubinaryList != null) {
                byte[] int1Data = listToByteArray(ubinaryList);
                BinaryVector packedBitsVector = BinaryVector.packedBitVector(int1Data, (byte) 0);
                binaryVectorEmbeddings.put("int1", packedBitsVector);
            }

            return binaryVectorEmbeddings;
        }

        // Define and run the $vectorSearch query using the embeddings
        public void runVectorSearchQuery(Map<String, BinaryVector> embeddingsData) {
            if (MONGODB_URI == null || MONGODB_URI.isEmpty()) {
                throw new RuntimeException("MongoDB URI not found. Set MONGODB_URI in your environment.");
            }

            try (MongoClient mongoClient = MongoClients.create(MONGODB_URI)) {
                MongoDatabase database = mongoClient.getDatabase(DB_NAME);
                MongoCollection<Document> collection = database.getCollection(COLLECTION_NAME);

                for (String path : embeddingsData.keySet()) {
                    BinaryVector queryVector = embeddingsData.get(path);

                    List<Bson> pipeline = asList(
                            vectorSearch(
                                    fieldPath("embeddings_" + path),
                                    queryVector,
                                    VECTOR_INDEX_NAME,
                                    2,
                                    approximateVectorSearchOptions(5)
                            ),
                            project(
                                    fields(
                                            exclude("_id"),
                                            include(DATA_FIELD_NAME),
                                            metaVectorSearchScore("vectorSearchScore")
                                    )
                            )
                    );

                    List<Document> results = collection.aggregate(pipeline).into(new ArrayList<>());

                    System.out.println("Results from " + path + " embeddings:");
                    for (Document result : results) {
                        System.out.println(result.toJson());
                    }
                }
            }
        }

        private static float[] listToFloatArray(List<Double> list) {
            float[] array = new float[list.size()];
            for (int i = 0; i < list.size(); i++) {
                array[i] = list.get(i).floatValue();
            }
            return array;
        }

        private static byte[] listToByteArray(List<Integer> list) {
            byte[] array = new byte[list.size()];
            for (int i = 0; i < list.size(); i++) {
                array[i] = list.get(i).byteValue();
            }
            return array;
        }
    }
  3. Replace the following placeholder values in the code and save the file.

    MONGODB_URI

    Your Atlas cluster connection string if you didn't set the environment variable.

    COHERE_API_KEY

    Your Cohere API key if you didn't set the environment variable.

    <DATABASE-NAME>

    Name of the database in your Atlas cluster.

    <COLLECTION-NAME>

    Name of the collection where you ingested the data.

    <INDEX-NAME>

    Name of the Atlas Vector Search index for the collection.

    <DATA-FIELD-NAME>

    Name of the field that contains the text from which you generated embeddings. For this example, use text.

    <QUERY-TEXT>

    Text for the query. For this example, use science fact.

  4. Compile and run the file using your application run configuration.

    If you are using a terminal, run the following commands to compile and execute your program.

    javac CreateEmbeddingsAndRunQuery.java
    java CreateEmbeddingsAndRunQuery
    Results from int1 embeddings:
    {"text": "Mount Everest is the highest peak on Earth at 8,848m.", "score": 0.642578125}
    {"text": "The Great Wall of China is visible from space.", "score": 0.61328125}
    Results from int8 embeddings:
    {"text": "Mount Everest is the highest peak on Earth at 8,848m.", "score": 0.5149773359298706}
    {"text": "The Great Wall of China is visible from space.", "score": 0.5146723985671997}
    Results from float32 embeddings:
    {"text": "Mount Everest is the highest peak on Earth at 8,848m.", "score": 0.6583383083343506}
    {"text": "The Great Wall of China is visible from space.", "score": 0.6536108255386353}

To learn more about generating embeddings and converting the embeddings to binData vectors, see How to Create Vector Embeddings.

1
  1. From your IDE, create a Java project using Maven or Gradle.

  2. Add the following dependencies, depending on your package manager:

    If you are using Maven, add the following dependencies to the dependencies section of your project's pom.xml file:

    pom.xml
    <dependencies>
    <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13.2</version>
    <scope>test</scope>
    </dependency>
    <dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongodb-driver-sync</artifactId>
    <version>5.3.1</version>
    </dependency>
    <dependency>
    <groupId>com.cohere</groupId>
    <artifactId>cohere-java</artifactId>
    <version>1.6.0</version>
    </dependency>
    <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>2.0.16</version>
    </dependency>
    <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>2.0.16</version>
    <scope>test</scope>
    </dependency>
    </dependencies>

    If you are using Gradle, add the following to the dependencies block in your project's build.gradle file:

    build.gradle
    dependencies {
    // MongoDB Java Sync Driver v5.3.1 or later
    implementation 'org.mongodb:mongodb-driver-sync:[5.3.1,)'
    // Java library for working with Cohere models
    implementation 'com.cohere:cohere-java:1.6.0'
    // SLF4J (The Simple Logging Facade for Java)
    testImplementation("org.slf4j:slf4j-simple:2.0.16")
    implementation("org.slf4j:slf4j-api:2.0.16")
    }
  3. Run your package manager to install the dependencies to your project.

2

Note

This example sets the variables for the project in the IDE. Production applications might manage environment variables through a deployment configuration, CI/CD pipeline, or secrets manager, but you can adapt the provided code to fit your use case.

In your IDE, create a new configuration template and add the following variables to your project:

  • If you are using IntelliJ IDEA, create a new Application run configuration template, then add your variables as semicolon-separated values in the Environment variables field (for example, FOO=123;BAR=456). Apply the changes and click OK.

    To learn more, see the Create a run/debug configuration from a template section of the IntelliJ IDEA documentation.

  • If you are using Eclipse, create a new Java Application launch configuration, then add each variable as a new key-value pair in the Environment tab. Apply the changes and click OK.

    To learn more, see the Creating a Java application launch configuration section of the Eclipse IDE documentation.

Environment variables
COHERE_API_KEY=<api-key>
MONGODB_URI=<connection-string>

Update the placeholders with the following values:

  • Replace the <api-key> placeholder value with your Cohere API key.

  • Replace the <connection-string> placeholder value with the SRV connection string for your Atlas cluster.

    Your connection string should use the following format:

    mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net
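If you run the programs from a terminal instead of the IDE, you can set the same variables in your shell session. The values below are the same placeholders shown above, not working credentials:

```shell
# Placeholder values; substitute your own Cohere API key
# and the SRV connection string for your Atlas cluster.
export COHERE_API_KEY="<api-key>"
export MONGODB_URI="mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net"
```

Variables set with export apply only to the current shell session and the processes started from it.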
3

You can use an embedding model provider to generate float, int8, and int1 embeddings for your data and then use the MongoDB Java driver to convert the native embeddings to BSON vectors. The following sample code uses Cohere's embed API to generate float32, int8, and ubinary embeddings from the data in the sample_airbnb.listingsAndReviews namespace.

  1. Create a new file named GenerateAndConvertEmbeddings.java in your Java project.

    touch GenerateAndConvertEmbeddings.java
  2. Copy and paste the following code in the GenerateAndConvertEmbeddings.java file.

    This code does the following:

    • Gets the summary field from 50 documents in the sample_airbnb.listingsAndReviews namespace.

    • Generates the float32, int8, and ubinary vector embeddings by using Cohere's embed API.

    • Converts the embeddings to BSON binData vectors by using the MongoDB Java driver.

    • Creates a file named embeddings.json and saves the data with embeddings in the file.

    GenerateAndConvertEmbeddings.java
    1import com.cohere.api.Cohere;
    2import com.cohere.api.requests.EmbedRequest;
    3import com.cohere.api.types.EmbedByTypeResponse;
    4import com.cohere.api.types.EmbedResponse;
    5import com.cohere.api.types.EmbeddingType;
    6import com.cohere.api.types.EmbedInputType;
    7import com.cohere.api.types.EmbedByTypeResponseEmbeddings;
    8import com.mongodb.client.MongoClient;
    9import com.mongodb.client.MongoClients;
    10import com.mongodb.client.MongoDatabase;
    11import com.mongodb.client.MongoCollection;
    12import com.mongodb.client.FindIterable;
    13import org.bson.BsonArray;
    14import org.bson.Document;
    15import org.bson.BinaryVector;
    16import org.slf4j.Logger;
    17import org.slf4j.LoggerFactory;
    18import java.io.FileOutputStream;
    19import java.io.IOException;
    20import java.util.ArrayList;
    21import java.util.Arrays;
    22import java.util.List;
    23import java.util.Objects;
    24import java.util.Optional;
    25
    26public class GenerateAndConvertEmbeddings {
    27 private static final Logger logger = LoggerFactory.getLogger(GenerateAndConvertEmbeddings.class);
    28 private static final String COHERE_API_KEY = System.getenv("COHERE_API_KEY");
    29 private static final String MONGODB_URI = System.getenv("MONGODB_URI");
    30
    31 public static void main(String[] args) {
    32 try {
    33 List<String> summaries = fetchSummariesFromMongoDB();
    34 if (summaries.isEmpty()) {
    35 throw new RuntimeException("No summaries retrieved from MongoDB.");
    36 }
    37 EmbedByTypeResponseEmbeddings embeddingsData = fetchEmbeddingsFromCohere(COHERE_API_KEY, summaries);
    38 if (embeddingsData == null) {
    39 throw new RuntimeException("Failed to fetch embeddings.");
    40 }
    41 convertAndSaveEmbeddings(summaries, embeddingsData);
    42 } catch (Exception e) {
    43 logger.error("Unexpected error: {}", e.getMessage(), e);
    44 }
    45 }
    46
    47 private static List<String> fetchSummariesFromMongoDB() {
    48 List<String> summaries = new ArrayList<>();
    49 if (MONGODB_URI == null || MONGODB_URI.isEmpty()) {
    50 throw new RuntimeException("MongoDB URI is not set.");
    51 }
    52 logger.info("Connecting to MongoDB at URI: {}", MONGODB_URI);
    53 try (MongoClient mongoClient = MongoClients.create(MONGODB_URI)) {
    54 String dbName = "sample_airbnb";
    55 String collName = "listingsAndReviews";
    56 MongoDatabase database = mongoClient.getDatabase(dbName);
    57 MongoCollection<Document> collection = database.getCollection(collName);
    58 Document filter = new Document("summary", new Document("$nin", Arrays.asList(null, "")));
    59 FindIterable<Document> documentsCursor = collection.find(filter).limit(50);
    60 for (Document doc : documentsCursor) {
    61 String summary = doc.getString("summary");
    62 if (summary != null && !summary.isEmpty()) {
    63 summaries.add(summary);
    64 }
    65 }
    66 logger.info("Retrieved {} summaries from MongoDB.", summaries.size());
    67 } catch (Exception e) {
    68 logger.error("Error fetching from MongoDB: {}", e.getMessage(), e);
    69 throw new RuntimeException("Failed to fetch data from MongoDB", e);
    70 }
    71 return summaries;
    72 }
    73
    74 private static EmbedByTypeResponseEmbeddings fetchEmbeddingsFromCohere(String apiKey, List<String> data) {
    75 if (Objects.isNull(apiKey) || apiKey.isEmpty()) {
    76 throw new RuntimeException("API key is not set.");
    77 }
    78 Cohere cohere = Cohere.builder().token(apiKey).clientName("embed-example").build();
    79 try {
    80 EmbedRequest request = EmbedRequest.builder()
    81 .model("embed-english-v3.0")
    82 .inputType(EmbedInputType.SEARCH_DOCUMENT)
    83 .texts(data)
    84 .embeddingTypes(List.of(EmbeddingType.FLOAT, EmbeddingType.INT_8, EmbeddingType.UBINARY))
    85 .build();
    86 EmbedResponse response = cohere.embed(request);
    87 Optional<EmbedByTypeResponse> optionalEmbeddingsWrapper = response.getEmbeddingsByType();
    88 if (optionalEmbeddingsWrapper.isPresent()) {
    89 return optionalEmbeddingsWrapper.get().getEmbeddings();
    90 } else {
    91 logger.warn("No embeddings were returned.");
    92 }
    93 } catch (Exception e) {
    94 logger.error("Error fetching embeddings: {}", e.getMessage(), e);
    95 }
    96 return null;
    97 }
    98
    99 private static void convertAndSaveEmbeddings(List<String> summaries, EmbedByTypeResponseEmbeddings embeddings) {
    100 try {
    101 Document doc = new Document();
    102 BsonArray array = new BsonArray();
    103 for (int i = 0; i < summaries.size(); i++) {
    104 String summary = summaries.get(i);
    105
    106 // Retrieve the embeddings for the current index
    107 List<Double> floatList = embeddings.getFloat().orElseThrow().get(i);
    108 List<Integer> int8List = embeddings.getInt8().orElseThrow().get(i);
    109 List<Integer> ubinaryList = embeddings.getUbinary().orElseThrow().get(i);
    110
    111 // Convert lists to arrays
    112 float[] floatData = listToFloatArray(floatList);
    113 byte[] int8Data = listToByteArray(int8List);
    114 byte[] int1Data = listToByteArray(ubinaryList);
    115
    116 // Create BinaryVector objects
    117 BinaryVector floatVector = BinaryVector.floatVector(floatData);
    118 BinaryVector int8Vector = BinaryVector.int8Vector(int8Data);
    119 BinaryVector packedBitsVector = BinaryVector.packedBitVector(int1Data, (byte) 0);
    120
    121 Document document = new Document()
    122 .append("text", summary)
    123 .append("embeddings_float32", floatVector)
    124 .append("embeddings_int8", int8Vector)
    125 .append("embeddings_int1", packedBitsVector);
    126 array.add(document.toBsonDocument());
    127 }
    128 doc.append("data", array);
    129 try (FileOutputStream fos = new FileOutputStream("embeddings.json")) {
    130 fos.write(doc.toJson().getBytes());
    131 }
    132 logger.info("Embeddings with BSON vectors have been saved to embeddings.json");
    133 } catch (IOException e) {
    134 logger.error("Error writing embeddings to file: {}", e.getMessage(), e);
    135 }
    136 }
    137
    138 private static float[] listToFloatArray(List<Double> list) {
    139 float[] array = new float[list.size()];
    140 for (int i = 0; i < list.size(); i++) {
    141 array[i] = list.get(i).floatValue();
    142 }
    143 return array;
    144 }
    145
    146 private static byte[] listToByteArray(List<Integer> list) {
    147 byte[] array = new byte[list.size()];
    148 for (int i = 0; i < list.size(); i++) {
    149 array[i] = list.get(i).byteValue();
    150 }
    151 return array;
    152 }
    153}
  3. Replace the following placeholder values in the code and save the file.

    MONGODB_URI

    Your Atlas cluster connection string if you didn't set the environment variable.

    COHERE_API_KEY

    Your Cohere API key if you didn't set the environment variable.

  4. Compile and run the file using your application run configuration.

    If you are using a terminal, run the following commands to compile and execute your program.

    javac GenerateAndConvertEmbeddings.java
    java GenerateAndConvertEmbeddings
    [main] INFO GenerateAndConvertEmbeddings - Connecting to MongoDB at URI: <CONNECTION-STRING>
    ...
    [main] INFO GenerateAndConvertEmbeddings - Retrieved 50 summaries from MongoDB.
    [main] INFO GenerateAndConvertEmbeddings - Embeddings with BSON vectors have been saved to embeddings.json
  5. Verify the embeddings in the embeddings.json file.

To learn more about generating embeddings and converting the embeddings to binData vectors, see How to Create Vector Embeddings.
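A detail worth noting when you inspect the int1 data in embeddings.json: Cohere's ubinary lists contain one integer in the 0-255 range for every 8 dimensions, and the listToByteArray helper narrows each value to a signed Java byte, so values above 127 appear as negative numbers while keeping the same underlying bits. The packed bits are what Atlas Vector Search uses, not the signed interpretation. A minimal sketch of that narrowing (the class and method names here are illustrative, not part of the tutorial code):

```java
public class UbinaryNarrowingDemo {
    // Narrow an unsigned 0-255 value to a signed byte,
    // as listToByteArray does via Integer.byteValue(): keeps the low 8 bits.
    static byte narrow(int unsigned) {
        return (byte) unsigned;
    }

    // Recover the original unsigned value from the signed byte.
    static int widen(byte b) {
        return b & 0xFF;
    }

    public static void main(String[] args) {
        int[] samples = {0, 127, 128, 200, 255};
        for (int v : samples) {
            byte b = narrow(v);
            // e.g. 200 -> byte -56 -> unsigned 200
            System.out.println(v + " -> byte " + b + " -> unsigned " + widen(b));
        }
    }
}
```

Because the bit pattern is preserved, the round trip through a signed byte is lossless, which is why the negative values you may see in the file are harmless.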

4

You must upload your data and embeddings to a collection in your Atlas cluster and create an Atlas Vector Search index on the data to run $vectorSearch queries against the data.

  1. Create a new file named UploadDataAndCreateIndex.java in your Java project.

    touch UploadDataAndCreateIndex.java
  2. Copy and paste the following code in the UploadDataAndCreateIndex.java file.

    This code does the following:

    • Uploads the float32, int8, and int1 embeddings in the embeddings.json file to your Atlas cluster.

    • Creates an Atlas Vector Search index on the embeddings_float32, embeddings_int8, and embeddings_int1 fields.

    UploadDataAndCreateIndex.java
    1import com.mongodb.client.MongoClient;
    2import com.mongodb.client.MongoClients;
    3import com.mongodb.client.MongoCollection;
    4import com.mongodb.client.MongoDatabase;
    5import com.mongodb.client.model.SearchIndexModel;
    6import com.mongodb.client.model.SearchIndexType;
    7
    8import org.bson.Document;
    9import org.bson.conversions.Bson;
    10import org.bson.BinaryVector; // Import the BinaryVector
    11
    12import java.io.IOException;
    13import java.nio.file.Files;
    14import java.nio.file.Path;
    15import java.util.Collections;
    16import java.util.List;
    17import java.util.concurrent.TimeUnit;
    18import java.util.stream.StreamSupport;
    19
    20public class UploadDataAndCreateIndex {
    21
    22 private static final String MONGODB_URI = System.getenv("MONGODB_URI");
    23 private static final String DB_NAME = "sample_airbnb";
    24 private static final String COLLECTION_NAME = "listingsAndReviews";
    25 private static final String INDEX_NAME = "<INDEX-NAME>";
    26
    27 public static void main(String[] args) {
    28 try (MongoClient mongoClient = MongoClients.create(MONGODB_URI)) {
    29 uploadEmbeddingsData(mongoClient);
    30 setupVectorSearchIndex(mongoClient);
    31 } catch (Exception e) {
    32 e.printStackTrace();
    33 }
    34 }
    35
    36 public static void uploadEmbeddingsData(MongoClient mongoClient) throws IOException {
    37 MongoDatabase database = mongoClient.getDatabase(DB_NAME);
    38 MongoCollection<Document> collection = database.getCollection(COLLECTION_NAME);
    39 String filePath = "embeddings.json";
    40 String fileContent = Files.readString(Path.of(filePath));
    41
    42 Document rootDoc = Document.parse(fileContent);
    43 List<Document> embeddingsDocs = rootDoc.getList("data", Document.class);
    44
    45 for (Document doc : embeddingsDocs) {
    46 // Retrieve the string value from the document
    47 String summary = doc.getString("text");
    48
    49 // Get the BinaryVector objects from the document
    50 BinaryVector embeddingsFloat32 = doc.get("embeddings_float32", BinaryVector.class);
    51 BinaryVector embeddingsInt8 = doc.get("embeddings_int8", BinaryVector.class);
    52 BinaryVector embeddingsInt1 = doc.get("embeddings_int1", BinaryVector.class);
    53
    54 // Create filter and update documents
    55 Document filter = new Document("summary", summary);
    56 Document update = new Document("$set", new Document("summary", summary)
    57 .append("embeddings_float32", embeddingsFloat32)
    58 .append("embeddings_int8", embeddingsInt8)
    59 .append("embeddings_int1", embeddingsInt1));
    60
    61 // Perform update operation with upsert option
    62 collection.updateOne(filter, update, new com.mongodb.client.model.UpdateOptions().upsert(true));
    63 System.out.println("Processed document with summary: " + summary);
    64 }
    65 }
    66
    67 public static void setupVectorSearchIndex(MongoClient client) throws InterruptedException {
    68 MongoDatabase database = client.getDatabase(DB_NAME);
    69 MongoCollection<Document> collection = database.getCollection(COLLECTION_NAME);
    70 // Define the index details
    71 Bson definition = new Document(
    72 "fields",
    73 List.of(
    74 new Document("type", "vector")
    75 .append("path", "embeddings_float32")
    76 .append("numDimensions", 1024)
    77 .append("similarity", "dotProduct"),
    78 new Document("type", "vector")
    79 .append("path", "embeddings_int8")
    80 .append("numDimensions", 1024)
    81 .append("similarity", "dotProduct"),
    82 new Document("type", "vector")
    83 .append("path", "embeddings_int1")
    84 .append("numDimensions", 1024)
    85 .append("similarity", "euclidean")
    86 )
    87 );
    88 // Define the index model
    89 SearchIndexModel indexModel = new SearchIndexModel(
    90 INDEX_NAME,
    91 definition,
    92 SearchIndexType.vectorSearch()
    93 );
    94 // Create the index using the defined model
    95 List<String> result = collection.createSearchIndexes(Collections.singletonList(indexModel));
    96 System.out.println("Successfully created vector index named: " + result.get(0));
    97 System.out.println("It may take up to a minute for the index to leave the BUILDING status and become queryable.");
    98 // Wait for Atlas to build the index
    99 System.out.println("Polling to confirm the index has changed from the BUILDING status.");
    100 waitForIndex(collection, INDEX_NAME);
    101 }
    102
    103 public static <T> boolean waitForIndex(final MongoCollection<T> collection, final String indexName) {
    104 long startTime = System.nanoTime();
    105 long timeoutNanos = TimeUnit.SECONDS.toNanos(60);
    106 while (System.nanoTime() - startTime < timeoutNanos) {
    107 Document indexRecord = StreamSupport.stream(collection.listSearchIndexes().spliterator(), false)
    108 .filter(index -> indexName.equals(index.getString("name")))
    109 .findAny().orElse(null);
    110 if (indexRecord != null) {
    111 if ("FAILED".equals(indexRecord.getString("status"))) {
    112 throw new RuntimeException("Search index has FAILED status.");
    113 }
    114 if (indexRecord.getBoolean("queryable")) {
    115 System.out.println(indexName + " index is ready to query");
    116 return true;
    117 }
    118 }
    119 try {
    120 Thread.sleep(100); // busy-wait, avoid in production
    121 } catch (InterruptedException e) {
    122 Thread.currentThread().interrupt();
    123 throw new RuntimeException(e);
    124 }
    125 }
    126 return false;
    127 }
    128}
  3. Replace the following placeholder values in the code and save the file.

    MONGODB_URI

    Your Atlas cluster connection string if you didn't set the environment variable.

    <INDEX-NAME>

    Name of the Atlas Vector Search index for the collection.

  4. Compile and run the file using your application run configuration.

    If you are using a terminal, run the following commands to compile and execute your program.

    javac UploadDataAndCreateIndex.java
    java UploadDataAndCreateIndex
    Successfully created vector index named: <INDEX-NAME>
    It may take up to a minute for the index to leave the BUILDING status and become queryable.
    Polling to confirm the index has changed from the BUILDING status.
    <INDEX-NAME> index is ready to query
  5. Log in to your Atlas cluster and verify the following:

    • Data in the namespace.

    • Atlas Vector Search index for the collection.

5

To test your embeddings, you can run a query against your collection. Use an embedding model provider to generate float, int8, and int1 embeddings for your query text. The following sample code uses Cohere's embed API to generate float32, int8, and ubinary embeddings. After generating the embeddings, use the MongoDB Java driver to convert them to BSON vectors and run a $vectorSearch query against the collection.

  1. Create a new file named CreateEmbeddingsAndRunQuery.java in your Java project.

    touch CreateEmbeddingsAndRunQuery.java
  2. Copy and paste the following code in the CreateEmbeddingsAndRunQuery.java file.

    This code does the following:

    • Generates the float32, int8, and ubinary vector embeddings by using Cohere's embed API.

    • Converts the embeddings to BSON binData vectors by using the MongoDB Java driver.

    • Runs the query against your collection and returns the results.

    CreateEmbeddingsAndRunQuery.java
    1import com.cohere.api.Cohere;
    2import com.cohere.api.requests.EmbedRequest;
    3import com.cohere.api.types.EmbedResponse;
    4import com.cohere.api.types.EmbedByTypeResponse;
    5import com.cohere.api.types.EmbedByTypeResponseEmbeddings;
    6import com.cohere.api.types.EmbeddingType;
    7import com.cohere.api.types.EmbedInputType;
    8import com.mongodb.client.MongoClient;
    9import com.mongodb.client.MongoClients;
    10import com.mongodb.client.MongoCollection;
    11import com.mongodb.client.MongoDatabase;
    12import org.bson.Document;
    13import org.bson.conversions.Bson;
    14import org.bson.BinaryVector;
    15
    16import java.util.ArrayList;
    17import java.util.HashMap;
    18import java.util.List;
    19import java.util.Map;
    20import java.util.Optional;
    21
    22import static com.mongodb.client.model.Aggregates.project;
    23import static com.mongodb.client.model.Aggregates.vectorSearch;
    24import static com.mongodb.client.model.Projections.fields;
    25import static com.mongodb.client.model.Projections.include;
    26import static com.mongodb.client.model.Projections.exclude;
    27import static com.mongodb.client.model.Projections.metaVectorSearchScore;
    28import static com.mongodb.client.model.search.SearchPath.fieldPath;
    29import static com.mongodb.client.model.search.VectorSearchOptions.approximateVectorSearchOptions;
    30import static java.util.Arrays.asList;
    31
    32public class CreateEmbeddingsAndRunQuery {
    33 private static final String COHERE_API_KEY = System.getenv("COHERE_API_KEY");
    34 private static final String MONGODB_URI = System.getenv("MONGODB_URI");
    35 private static final String DB_NAME = "<DATABASE-NAME>";
    36 private static final String COLLECTION_NAME = "<COLLECTION-NAME>";
    37 private static final String VECTOR_INDEX_NAME = "<INDEX-NAME>";
    38 private static final String DATA_FIELD_NAME = "<DATA-FIELD>";
    39
    40 public static void main(String[] args) {
    41 String queryText = "<QUERY-TEXT>";
    42
    43 try {
     44 CreateEmbeddingsAndRunQuery processor = new CreateEmbeddingsAndRunQuery();
    45 Map<String, BinaryVector> embeddingsData = processor.generateAndConvertEmbeddings(queryText);
    46 processor.runVectorSearchQuery(embeddingsData);
    47 } catch (Exception e) {
    48 e.printStackTrace();
    49 }
    50 }
    51
    52 // Generate embeddings using Cohere's embed API from the query text
    53 public Map<String, BinaryVector> generateAndConvertEmbeddings(String text) throws Exception {
    54 if (COHERE_API_KEY == null || COHERE_API_KEY.isEmpty()) {
    55 throw new RuntimeException("API key not found. Set COHERE_API_KEY in your environment.");
    56 }
    57
    58 Cohere cohere = Cohere.builder().token(COHERE_API_KEY).build();
    59
    60 EmbedRequest request = EmbedRequest.builder()
    61 .model("embed-english-v3.0")
    62 .inputType(EmbedInputType.SEARCH_QUERY)
    63 .texts(List.of(text))
    64 .embeddingTypes(List.of(EmbeddingType.FLOAT, EmbeddingType.INT_8, EmbeddingType.UBINARY))
    65 .build();
    66
    67 EmbedResponse response = cohere.embed(request);
    68 Optional<EmbedByTypeResponse> optionalEmbeddingsWrapper = response.getEmbeddingsByType();
    69 if (optionalEmbeddingsWrapper.isEmpty()) {
    70 throw new RuntimeException("No embeddings found in the API response.");
    71 }
    72
    73 EmbedByTypeResponseEmbeddings embeddings = optionalEmbeddingsWrapper.get().getEmbeddings();
    74 return createBinaryVectorEmbeddings(embeddings);
    75 }
    76
    77 // Convert embeddings to BSON binary vectors using MongoDB Java Driver
    78 private static Map<String, BinaryVector> createBinaryVectorEmbeddings(EmbedByTypeResponseEmbeddings embeddings) {
    79 Map<String, BinaryVector> binaryVectorEmbeddings = new HashMap<>();
    80
    81 // Convert float embeddings
    82 List<Double> floatList = embeddings.getFloat().orElseThrow().get(0);
    83 if (floatList != null) {
    84 float[] floatData = listToFloatArray(floatList);
    85 BinaryVector floatVector = BinaryVector.floatVector(floatData);
    86 binaryVectorEmbeddings.put("float32", floatVector);
    87 }
    88
    89 // Convert int8 embeddings
    90 List<Integer> int8List = embeddings.getInt8().orElseThrow().get(0);
    91 if (int8List != null) {
    92 byte[] int8Data = listToByteArray(int8List);
    93 BinaryVector int8Vector = BinaryVector.int8Vector(int8Data);
    94 binaryVectorEmbeddings.put("int8", int8Vector);
    95 }
    96
    97 // Convert ubinary embeddings
    98 List<Integer> ubinaryList = embeddings.getUbinary().orElseThrow().get(0);
    99 if (ubinaryList != null) {
    100 byte[] int1Data = listToByteArray(ubinaryList);
    101 BinaryVector packedBitsVector = BinaryVector.packedBitVector(int1Data, (byte) 0);
    102 binaryVectorEmbeddings.put("int1", packedBitsVector);
    103 }
    104
    105 return binaryVectorEmbeddings;
    106 }
    107
    108 // Define and run $vectorSearch query using the embeddings
    109 public void runVectorSearchQuery(Map<String, BinaryVector> embeddingsData) {
    110 if (MONGODB_URI == null || MONGODB_URI.isEmpty()) {
    111 throw new RuntimeException("MongoDB URI not found. Set MONGODB_URI in your environment.");
    112 }
    113
    114 try (MongoClient mongoClient = MongoClients.create(MONGODB_URI)) {
    115 MongoDatabase database = mongoClient.getDatabase(DB_NAME);
    116 MongoCollection<Document> collection = database.getCollection(COLLECTION_NAME);
    117
    118 for (String path : embeddingsData.keySet()) {
    119 BinaryVector queryVector = embeddingsData.get(path);
    120
    121 List<Bson> pipeline = asList(
    122 vectorSearch(
    123 fieldPath("embeddings_" + path),
    124 queryVector,
    125 VECTOR_INDEX_NAME,
    126 2,
    127 approximateVectorSearchOptions(5)
    128 ),
    129 project(
    130 fields(
    131 exclude("_id"),
    132 include(DATA_FIELD_NAME),
    133 metaVectorSearchScore("vectorSearchScore")
    134 )
    135 )
    136 );
    137
    138 List<Document> results = collection.aggregate(pipeline).into(new ArrayList<>());
    139
    140 System.out.println("Results from " + path + " embeddings:");
    141 for (Document result : results) {
    142 System.out.println(result.toJson());
    143 }
    144 }
    145 }
    146 }
    147
    148 private static float[] listToFloatArray(List<Double> list) {
    149 float[] array = new float[list.size()];
    150 for (int i = 0; i < list.size(); i++) {
    151 array[i] = list.get(i).floatValue();
    152 }
    153 return array;
    154 }
    155
    156 private static byte[] listToByteArray(List<Integer> list) {
    157 byte[] array = new byte[list.size()];
    158 for (int i = 0; i < list.size(); i++) {
    159 array[i] = list.get(i).byteValue();
    160 }
    161 return array;
    162 }
    163}
  3. Replace the following placeholder values in the code and save the file.

    MONGODB_URI

    Your Atlas cluster connection string if you didn't set the environment variable.

    COHERE_API_KEY

    Your Cohere API key if you didn't set the environment variable.

    <DATABASE-NAME>

    Name of the database in your Atlas cluster. For this example, use sample_airbnb.

    <COLLECTION-NAME>

    Name of the collection where you ingested the data. For this example, use listingsAndReviews.

    <INDEX-NAME>

    Name of the Atlas Vector Search index for the collection.

    <DATA-FIELD-NAME>

    Name of the field that contains the text from which you generated embeddings. For this example, use summary.

    <QUERY-TEXT>

    Text for the query. For this example, use ocean view.

  4. Compile and run the file using your application run configuration.

    If you are using a terminal, run the following commands to compile and execute your program.

    javac CreateEmbeddingsAndRunQuery.java
    java CreateEmbeddingsAndRunQuery
    Results from int1 embeddings:
    {"summary": "A beautiful and comfortable 1 Bedroom Air Conditioned Condo in Makaha Valley - stunning Ocean & Mountain views All the amenities of home, suited for longer stays. Full kitchen & large bathroom. Several gas BBQ's for all guests to use & a large heated pool surrounded by reclining chairs to sunbathe. The Ocean you see in the pictures is not even a mile away, known as the famous Makaha Surfing Beach. Golfing, hiking,snorkeling paddle boarding, surfing are all just minutes from the front door.", "vectorSearchScore": 0.6591796875}
    {"summary": "A short distance from Honolulu's billion dollar mall, and the same distance to Waikiki. Parking included. A great location that work perfectly for business, education, or simple visit. Experience Yacht Harbor views and 5 Star Hilton Hawaiian Village.", "vectorSearchScore": 0.6337890625}
    Results from int8 embeddings:
    {"summary": "A beautiful and comfortable 1 Bedroom Air Conditioned Condo in Makaha Valley - stunning Ocean & Mountain views All the amenities of home, suited for longer stays. Full kitchen & large bathroom. Several gas BBQ's for all guests to use & a large heated pool surrounded by reclining chairs to sunbathe. The Ocean you see in the pictures is not even a mile away, known as the famous Makaha Surfing Beach. Golfing, hiking,snorkeling paddle boarding, surfing are all just minutes from the front door.", "vectorSearchScore": 0.5215557217597961}
    {"summary": "A short distance from Honolulu's billion dollar mall, and the same distance to Waikiki. Parking included. A great location that work perfectly for business, education, or simple visit. Experience Yacht Harbor views and 5 Star Hilton Hawaiian Village.", "vectorSearchScore": 0.5179016590118408}
    Results from float32 embeddings:
    {"summary": "A beautiful and comfortable 1 Bedroom Air Conditioned Condo in Makaha Valley - stunning Ocean & Mountain views All the amenities of home, suited for longer stays. Full kitchen & large bathroom. Several gas BBQ's for all guests to use & a large heated pool surrounded by reclining chairs to sunbathe. The Ocean you see in the pictures is not even a mile away, known as the famous Makaha Surfing Beach. Golfing, hiking,snorkeling paddle boarding, surfing are all just minutes from the front door.", "vectorSearchScore": 0.7278661131858826}
    {"summary": "A short distance from Honolulu's billion dollar mall, and the same distance to Waikiki. Parking included. A great location that work perfectly for business, education, or simple visit. Experience Yacht Harbor views and 5 Star Hilton Hawaiian Village.", "vectorSearchScore": 0.688639760017395}

To learn more about generating embeddings and converting the embeddings to binData vectors, see How to Create Vector Embeddings.
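The int1 scores above come from packed binary embeddings. Conceptually, binary quantization maps each dimension to a single bit relative to a 0 midpoint and packs 8 dimensions into each byte. The following sketch illustrates that idea; the `packBits` helper and its most-significant-bit-first packing order are assumptions for illustration, not part of the tutorial:

```javascript
// Illustrative binary quantization: each dimension becomes 1 if it exceeds the
// 0 midpoint, else 0, packed most-significant-bit-first, 8 dimensions per byte.
function packBits(floatVector) {
  const packed = new Uint8Array(Math.ceil(floatVector.length / 8));
  floatVector.forEach((value, i) => {
    if (value > 0) {
      packed[i >> 3] |= 0x80 >> (i & 7); // set the bit for dimension i
    }
  });
  return packed;
}

// An 8-dimensional vector packs into a single byte:
// packBits([0.9, -0.2, 0.4, 0.1, -0.7, -0.3, -0.5, 0.6]) -> Uint8Array [ 0b10110001 ]
```

This is why a 1024-dimensional embedding needs only 128 bytes of vector values in its int1 form.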

1

Run the following command to install the MongoDB Node.js Driver. This operation might take a few minutes to complete.

npm install mongodb

You must install version 6.11 or later of the MongoDB Node.js Driver. If necessary, you can also install libraries from your embedding model provider. For example, to generate float32, int8, and int1 embeddings by using Cohere as demonstrated on this page, install Cohere:

npm install cohere-ai dotenv
npm show cohere-ai version
2
  1. To access the embedding model provider for generating and converting embeddings, set the environment variable for the provider's API key, if necessary.

    To use embeddings from Cohere, set the COHERE_API_KEY environment variable.

    export COHERE_API_KEY="<COHERE-API-KEY>"

    If you don't set the environment variable, replace the <COHERE-API-KEY> in the sample code with the API key before running the code.

  2. To access your Atlas cluster, set the MONGODB_URI environment variable.

    export MONGODB_URI="<CONNECTION-STRING>"

    Your connection string should be in the following format:

    mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net

    If you don't set the environment variable, replace the <CONNECTION-STRING> in the sample code with your connection string before running the code.

3
  1. Create a file named get-embeddings.js to generate float32, int8, and int1 vector embeddings by using Cohere's embed API.

    touch get-embeddings.js
  2. Copy and paste the following code in the get-embeddings.js file.

    This code does the following:

    • Generates float32, int8, and int1 embeddings for the given data by using Cohere's embed-english-v3.0 embedding model.

    • Stores the float32, int8, and int1 embeddings in fields named float, int8, and ubinary, respectively.

    • Creates a file named embeddings.json and saves the embeddings in the file.

    get-embeddings.js
    1// Use 'require' for modules in a Node.js environment
    2const { CohereClient } = require('cohere-ai');
    3const { writeFile } = require('fs/promises');
    4
    5// Retrieve API key from environment variables or default placeholder
    6const apiKey = process.env.COHERE_API_KEY || '<COHERE-API-KEY>';
    7
    8if (!apiKey) {
    9 throw new Error('API key not found. Please set COHERE_API_KEY in your environment.');
    10}
    11
    12// Instantiate the CohereClient with the API key
    13const cohere = new CohereClient({ token: apiKey });
    14
    15async function main() {
    16 try {
    17 // Data to embed
    18 const data = [
    19 "The Great Wall of China is visible from space.",
    20 "The Eiffel Tower was completed in Paris in 1889.",
    21 "Mount Everest is the highest peak on Earth at 8,848m.",
    22 "Shakespeare wrote 37 plays and 154 sonnets during his lifetime.",
    23 "The Mona Lisa was painted by Leonardo da Vinci.",
    24 ];
    25
    26 // Fetch embeddings for the data using the cohere API
    27 const response = await cohere.v2.embed({
    28 model: 'embed-english-v3.0',
    29 inputType: 'search_document',
    30 texts: data,
    31 embeddingTypes: ['float', 'int8', 'ubinary'],
    32 });
    33
    34 // Extract embeddings from the API response
    35 const { float, int8, ubinary } = response.embeddings;
    36
    37 // Map the embeddings to the text data
    38 const embeddingsData = data.map((text, index) => ({
    39 text,
    40 embeddings: {
    41 float: float[index],
    42 int8: int8[index],
    43 ubinary: ubinary[index],
    44 },
    45 }));
    46
    47 // Write the embeddings data to a JSON file
    48 await writeFile('embeddings.json', JSON.stringify(embeddingsData, null, 2));
    49 console.log('Embeddings saved to embeddings.json');
    50 } catch (error) {
    51 console.error('Error fetching embeddings:', error);
    52 }
    53}
    54
    55// Execute the main function
    56main();
  3. Replace the <COHERE-API-KEY> placeholder if you didn't set your Cohere API key as an environment variable, and then save the file.

  4. Run the code to generate embeddings.

    node get-embeddings.js
    Embeddings saved to embeddings.json
  5. Verify the generated embeddings in the generated embeddings.json file.
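When verifying the file, each record should contain the text plus three dimensionally consistent embedding variants. A minimal sketch of a programmatic check; the `isValidRecord` helper is hypothetical, not part of the tutorial:

```javascript
// Hypothetical validator for one record in embeddings.json. The int8 array has
// one value per dimension, while ubinary packs 8 dimensions into each byte.
function isValidRecord(record) {
  const { text, embeddings } = record || {};
  const { float, int8, ubinary } = embeddings || {};
  return typeof text === 'string'
    && Array.isArray(float) && Array.isArray(int8) && Array.isArray(ubinary)
    && float.length === int8.length
    && ubinary.length === Math.ceil(float.length / 8);
}
```

For example, a valid 1024-dimensional record has 1024 float values, 1024 int8 values, and 128 ubinary bytes.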

4
  1. Create a file named convert-embeddings.js to convert the float32, int8, and int1 vector embeddings from Cohere to BSON binData vectors by using the MongoDB Node.js driver.

    touch convert-embeddings.js
  2. Copy and paste the following code in the convert-embeddings.js file.

    This code does the following:

    • Generates BSON binData vectors for the float32, int8, and int1 embeddings.

    • Appends the float32, int8, and packedBits BSON binData vectors to the embeddings.json file.

    convert-embeddings.js
    1const fs = require('fs/promises');
    2const { BSON } = require('mongodb');
    3const { Binary } = BSON;
    4
    5async function main() {
    6 try {
    7 // Read and parse the contents of 'embeddings.json' file
    8 const fileContent = await fs.readFile('embeddings.json', 'utf8');
    9 const embeddingsData = JSON.parse(fileContent);
    10
    11 // Map the embeddings data to add BSON binary representations with subtype 9
    12 const convertEmbeddingsData = embeddingsData.map(({ text, embeddings }) => {
    13 // Create Binary for Float32Array with manual subtype 9
    14 const bsonFloat32 = Binary.fromFloat32Array(new Float32Array(embeddings.float));
    15
    16 // Create Binary for Int8Array with subtype 9
    17 const bsonInt8 = Binary.fromInt8Array(new Int8Array(embeddings.int8));
    18
    19 // Create Binary for PackedBits (Uint8Array) with subtype 9
    20 const bsonPackedBits = Binary.fromPackedBits(new Uint8Array(embeddings.ubinary));
    21
    22 return {
    23 text,
    24 embeddings: {
    25 float: embeddings.float, // Original float data
    26 int8: embeddings.int8, // Original int8 data
    27 ubinary: embeddings.ubinary, // Original packed bits data
    28 },
    29 bsonEmbeddings: {
    30 float32: bsonFloat32,
    31 int8: bsonInt8,
    32 packedBits: bsonPackedBits,
    33 },
    34 };
    35 });
    36
    37 // Serialize the updated data to EJSON for BSON compatibility
    38 const ejsonSerializedData = BSON.EJSON.stringify(convertEmbeddingsData, null, null, { relaxed: false });
    39
    40 // Write the serialized data to 'embeddings.json'
    41 await fs.writeFile('embeddings.json', ejsonSerializedData);
    42 console.log('Embeddings with BSON vectors have been saved to embeddings.json');
    43 } catch (error) {
    44 console.error('Error processing embeddings:', error);
    45 }
    46}
    47
    48main();
  3. Run the program to generate the BSON binData vectors.

    node convert-embeddings.js
    Embeddings with BSON vectors have been saved to embeddings.json
  4. Verify the generated BSON embeddings in the embeddings.json file.

5
  1. Create a file named upload-data.js to connect to the Atlas cluster and create a collection in a database for the data in the embeddings.json file.

    touch upload-data.js
  2. Copy and paste the following code in the upload-data.js file.

    This code does the following:

    • Connects to your Atlas cluster and creates a namespace with the database and collection name that you specify.

    • Uploads the data including the embeddings in the embeddings.json file to the specified namespace.

    upload-data.js
    1const fs = require('fs/promises'); // Use fs/promises for asynchronous operations
    2const { MongoClient, BSON } = require('mongodb'); // Import from the 'mongodb' package
    3
    4const { Binary } = BSON; // Ensure the Binary class is imported correctly
    5
    6async function main() {
    7 const MONGODB_URI = process.env.MONGODB_URI || "<CONNECTION-STRING>";
    8 const DB_NAME = "<DB-NAME>";
    9 const COLLECTION_NAME = "<COLLECTION-NAME>";
    10
    11 let client;
    12 try {
    13 client = new MongoClient(MONGODB_URI);
    14 await client.connect();
    15 console.log("Connected to MongoDB");
    16
    17 const db = client.db(DB_NAME);
    18 const collection = db.collection(COLLECTION_NAME);
    19
    20 // Read and parse the contents of 'embeddings.json' file using EJSON
    21 const fileContent = await fs.readFile('embeddings.json', 'utf8');
    22 const embeddingsData = BSON.EJSON.parse(fileContent);
    23
    24 // Map embeddings data to recreate BSON binary representations with the correct subtype
    25 const documents = embeddingsData.map(({ text, bsonEmbeddings }) => {
    26 return {
    27 text,
    28 bsonEmbeddings: {
    29 float32: bsonEmbeddings.float32,
    30 int8: bsonEmbeddings.int8,
    31 int1: bsonEmbeddings.packedBits
    32 }
    33 };
    34 });
    35
    36 const result = await collection.insertMany(documents);
    37 console.log(`Inserted ${result.insertedCount} documents into MongoDB`);
    38
    39 } catch (error) {
    40 console.error('Error storing embeddings in MongoDB:', error);
    41 } finally {
    42 if (client) {
    43 await client.close();
    44 }
    45 }
    46}
    47
    48// Run the store function
    49main();
  3. Replace the following settings and save the file.

    <CONNECTION-STRING>

    Connection string to connect to the Atlas cluster where you want to create the database and collection.

    Replace this value only if you didn't set the MONGODB_URI environment variable.

    <DB-NAME>

    Name of the database where you want to create the collection.

    <COLLECTION-NAME>

    Name of the collection where you want to store the generated embeddings.

  4. Run the following command to upload the data.

    node upload-data.js
  5. Verify that the documents exist in the collection on your Atlas cluster.

6
  1. Create a file named create-index.js to define an Atlas Vector Search index on the collection.

    touch create-index.js
  2. Copy and paste the following code to create the index in the create-index.js file.

    The code does the following:

    • Connects to the Atlas cluster and creates an index with the specified name for the specified namespace.

    • Indexes the bsonEmbeddings.float32 and bsonEmbeddings.int8 fields as the vector type with the dotProduct similarity function, and the bsonEmbeddings.int1 field as the vector type with the euclidean similarity function.

    create-index.js
    1const { MongoClient } = require("mongodb");
    2const { setTimeout } = require("timers/promises"); // Import from timers/promises
    3
    4// Connect to your Atlas deployment
    5const uri = process.env.MONGODB_URI || "<CONNECTION-STRING>";
    6
    7const client = new MongoClient(uri);
    8
    9async function main() {
    10 try {
    11 const database = client.db("<DB-NAME>");
    12 const collection = database.collection("<COLLECTION-NAME>");
    13
    14 // Define your Atlas Vector Search index
    15 const index = {
    16 name: "<INDEX-NAME>",
    17 type: "vectorSearch",
    18 definition: {
    19 fields: [
    20 {
    21 type: "vector",
    22 numDimensions: 1024,
    23 path: "bsonEmbeddings.float32",
    24 similarity: "dotProduct",
    25 },
    26 {
    27 type: "vector",
    28 numDimensions: 1024,
    29 path: "bsonEmbeddings.int8",
    30 similarity: "dotProduct",
    31 },
    32 {
    33 type: "vector",
    34 numDimensions: 1024,
    35 path: "bsonEmbeddings.int1",
    36 similarity: "euclidean",
    37 },
    38 ],
    39 },
    40 };
    41
    42 // Run the helper method
    43 const result = await collection.createSearchIndex(index);
    44 console.log(`New search index named ${result} is building.`);
    45
    46 // Wait for the index to be ready to query
    47 console.log("Polling to check if the index is ready. This may take up to a minute.");
    48 let isQueryable = false;
    49
    50 // Use filtered search for index readiness
    51 while (!isQueryable) {
    52 const [indexData] = await collection.listSearchIndexes(index.name).toArray();
    53
    54 if (indexData) {
    55 isQueryable = indexData.queryable;
    56 if (!isQueryable) {
    57 await setTimeout(5000); // Wait for 5 seconds before checking again
    58 }
    59 } else {
    60 // Handle the case where the index might not be found
    61 console.log(`Index ${index.name} not found.`);
    62 await setTimeout(5000); // Wait for 5 seconds before checking again
    63 }
    64 }
    65
    66 console.log(`${result} is ready for querying.`);
    67 } catch (error) {
    68 console.error("Error:", error);
    69 } finally {
    70 await client.close();
    71 }
    72}
    73
    74main().catch((err) => {
    75 console.error("Unhandled error:", err);
    76});
  3. Replace the following settings and save the file.

    <CONNECTION-STRING>

    Connection string to connect to the Atlas cluster where you want to create the index.

    Replace this value only if you didn't set the MONGODB_URI environment variable.

    <DB-NAME>

    Name of the database where you want to create the collection.

    <COLLECTION-NAME>

    Name of the collection where you want to store the generated embeddings.

    <INDEX-NAME>

    Name of the index for the collection.

  4. Run the following command to create the index.

    node create-index.js
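Note that all three index fields use numDimensions: 1024 because numDimensions always counts vector dimensions, not stored bytes. Only the raw per-vector storage differs by representation; a quick sketch of that arithmetic:

```javascript
// Approximate raw vector-value sizes for a 1024-dimensional embedding
// (excludes BSON and index overhead such as the HNSW graph structure).
const dims = 1024;
const bytesPerVector = {
  float32: dims * 4, // 4 bytes per dimension -> 4096 bytes
  int8: dims * 1,    // 1 byte per dimension  -> 1024 bytes
  int1: dims / 8,    // 1 bit per dimension   ->  128 bytes
};
```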
7
  1. Create a file named get-query-embedding.js.

    touch get-query-embedding.js
  2. Copy and paste the code in the get-query-embedding.js file.

    The sample code does the following:

    • Generates float32, int8, and int1 embeddings for the query text by using Cohere.

    • Converts the generated embeddings to BSON binData vectors by using the MongoDB Node.js driver.

    • Saves the generated embeddings to a file named query-embeddings.json.

    get-query-embedding.js
    1const { CohereClient } = require('cohere-ai');
    2const { BSON } = require('mongodb');
    3const { writeFile } = require('fs/promises');
    4const dotenv = require('dotenv');
    5const process = require('process');
    6
    7// Load environment variables
    8dotenv.config();
    9
    10const { Binary } = BSON;
    11
    12// Get the API key from environment variables or set the key here
    13const apiKey = process.env.COHERE_API_KEY || '<COHERE-API-KEY>';
    14
    15if (!apiKey) {
    16 throw new Error('API key not found. Provide the COHERE_API_KEY.');
    17}
    18
    19// Initialize CohereClient
    20const cohere = new CohereClient({ token: apiKey });
    21
    22async function main(queryText) {
    23 try {
    24 if (typeof queryText !== 'string' || queryText.trim() === '') {
    25 throw new Error('Invalid query text. It must be a non-empty string.');
    26 }
    27
    28 const data = [queryText];
    29
    30 // Request embeddings from the Cohere API
    31 const response = await cohere.v2.embed({
    32 model: 'embed-english-v3.0',
    33 inputType: 'search_query',
    34 texts: data,
    35 embeddingTypes: ['float', 'int8', 'ubinary'], // Request all required embedding types
    36 });
    37
    38 if (!response.embeddings) {
    39 throw new Error('Embeddings not found in the API response.');
    40 }
    41
    42 const { float, int8, ubinary } = response.embeddings;
    43
    44 const updatedEmbeddingsData = data.map((text, index) => {
    45 // Create the BSON Binary objects using VECTOR_TYPE for all embedding types
    46 const float32Binary = Binary.fromFloat32Array(new Float32Array(float[index])); // VECTOR_TYPE.FLOAT32
    47 const int8Binary = Binary.fromInt8Array(new Int8Array(int8[index])); // VECTOR_TYPE.INT8
    48 const packedBitsBinary = Binary.fromPackedBits(new Uint8Array(ubinary[index])); // VECTOR_TYPE.PACKED_BIT
    49
    50 return {
    51 text,
    52 embeddings: {
    53 float: float[index],
    54 int8: int8[index],
    55 ubinary: ubinary[index],
    56 },
    57 bsonEmbeddings: {
    58 float32: float32Binary,
    59 int8: int8Binary,
    60 int1: packedBitsBinary,
    61 },
    62 };
    63 });
    64
    65 // Serialize the embeddings using BSON EJSON for BSON compatibility
    66 const outputFileName = 'query-embeddings.json';
    67 const ejsonSerializedData = BSON.EJSON.stringify(updatedEmbeddingsData, null, null, { relaxed: false });
    68 await writeFile(outputFileName, ejsonSerializedData);
    69 console.log(`Embeddings with BSON data have been saved to ${outputFileName}`);
    70 } catch (error) {
    71 console.error('Error processing query text:', error);
    72 }
    73}
    74
    75// Main function that takes a query string
    76(async () => {
    77 const queryText = "<QUERY-TEXT>"; // Replace with your actual query text
    78 await main(queryText);
    79})();
  3. Replace the following settings and save the file.

    <COHERE-API-KEY>

    Your API Key for Cohere. Only replace this value if you didn't set the environment variable.

    <QUERY-TEXT>

    Your query text. For this tutorial, use science fact.

  4. Run the code to generate the embeddings for the query text.

    node get-query-embedding.js
    Embeddings with BSON data have been saved to query-embeddings.json
8
  1. Create a file named run-query.js.

    touch run-query.js
  2. Copy and paste the following sample $vectorSearch query in the run-query.js file.

    The sample query does the following:

    • Connects to your Atlas cluster and runs the $vectorSearch query against the bsonEmbeddings.float32, bsonEmbeddings.int8, and bsonEmbeddings.int1 fields in the specified collection by using the embeddings in the query-embeddings.json file.

    • Prints the results from Float32, Int8, and Packed Binary (Int1) embeddings to the console.

    run-query.js
    1const { MongoClient } = require('mongodb');
    2const fs = require('fs/promises');
    3const { BSON } = require('bson'); // Use BSON's functionality for EJSON parsing
    4const dotenv = require('dotenv');
    5
    6dotenv.config();
    7
    8// MongoDB connection details
    9const mongoUri = process.env.MONGODB_URI || '<CONNECTION-STRING>';
    10const dbName = '<DB-NAME>'; // Update with your actual database name
    11const collectionName = '<COLLECTION-NAME>'; // Update with your actual collection name
    12
    13// Indices and paths should match your MongoDB vector search configuration
    14const VECTOR_INDEX_NAME = '<INDEX-NAME>'; // Replace with your actual index name
    15const NUM_CANDIDATES = 5; // Number of candidate documents for the search
    16const LIMIT = 2; // Limit for the number of documents to return
    17
    18// Fields in the collection that contain the BSON query vectors
    19const FIELDS = [
    20 { path: 'float32', subtype: 9 }, // Ensure that the path and custom subtype match
    21 { path: 'int8', subtype: 9 }, // Use the custom subtype if needed
    22 { path: 'int1', subtype: 9 } // Use the same custom subtype
    23];
    24
    25
    26// Function to read BSON vectors from JSON and run vector search
    27async function main() {
    28 // Initialize MongoDB client
    29 const client = new MongoClient(mongoUri);
    30
    31 try {
    32 await client.connect();
    33 console.log("Connected to MongoDB");
    34
    35 const db = client.db(dbName);
    36 const collection = db.collection(collectionName);
    37
    38 // Load query embeddings from JSON file using EJSON parsing
    39 const fileContent = await fs.readFile('query-embeddings.json', 'utf8');
    40 const embeddingsData = BSON.EJSON.parse(fileContent);
    41
    42 // Define and run the query for each embedding type
    43 const results = {};
    44
    45 for (const fieldInfo of FIELDS) {
    46 const { path, subtype } = fieldInfo;
    47 const bsonBinary = embeddingsData[0]?.bsonEmbeddings?.[path];
    48
    49 if (!bsonBinary) {
    50 console.warn(`BSON embedding for ${path} not found in the JSON.`);
    51 continue;
    52 }
    53
    54 const bsonQueryVector = bsonBinary; // Directly use BSON Binary object
    55
    56 const pipeline = [
    57 {
    58 $vectorSearch: {
    59 index: VECTOR_INDEX_NAME,
    60 path: `bsonEmbeddings.${path}`,
    61 queryVector: bsonQueryVector,
    62 numCandidates: NUM_CANDIDATES,
    63 limit: LIMIT,
    64 }
    65 },
    66 {
    67 $project: {
    68 _id: 0,
    69 text: 1, // Adjust projection fields as necessary to match your document structure
    70 score: { $meta: 'vectorSearchScore' }
    71 }
    72 }
    73 ];
    74
    75 results[path] = await collection.aggregate(pipeline).toArray();
    76 }
    77
    78 return results;
    79 } catch (error) {
    80 console.error('Error during vector search:', error);
    81 } finally {
    82 await client.close();
    83 }
    84}
    85
    86// Main execution block
    87(async () => {
    88 try {
    89 const results = await main();
    90
    91 if (results) {
    92 console.log("Results from Float32 embeddings:");
    93 console.table(results.float32 || []);
    94 console.log("--------------------------------------------------------------------------");
    95
    96 console.log("Results from Int8 embeddings:");
    97 console.table(results.int8 || []);
    98 console.log("--------------------------------------------------------------------------");
    99
    100 console.log("Results from Packed Binary (PackedBits) embeddings:");
    101 console.table(results.int1 || []);
    102 }
    103 } catch (error) {
    104 console.error('Error executing main function:', error);
    105 }
    106})();
  3. Replace the following settings and save the run-query.js file.

    <CONNECTION-STRING>

    Connection string to connect to the Atlas cluster where you want to run the query.

    Replace this value only if you didn't set the MONGODB_URI environment variable.

    <DB-NAME>

    Name of the database that contains the collection.

    <COLLECTION-NAME>

    Name of the collection that you want to query.

    <INDEX-NAME>

    Name of the index for the collection.

  4. Run the following command to execute the query.

    node run-query.js
    Connected to MongoDB
    Results from Float32 embeddings:
    ┌─────────┬─────────────────────────────────────────────────────────┬────────────────────┐
    │ (index) │ text │ score │
    ├─────────┼─────────────────────────────────────────────────────────┼────────────────────┤
    │ 0 │ 'Mount Everest is the highest peak on Earth at 8,848m.' │ 0.6583383083343506 │
    │ 1 │ 'The Great Wall of China is visible from space.' │ 0.6536108255386353 │
    └─────────┴─────────────────────────────────────────────────────────┴────────────────────┘
    --------------------------------------------------------------------------
    Results from Int8 embeddings:
    ┌─────────┬─────────────────────────────────────────────────────────┬────────────────────┐
    │ (index) │ text │ score │
    ├─────────┼─────────────────────────────────────────────────────────┼────────────────────┤
    │ 0 │ 'Mount Everest is the highest peak on Earth at 8,848m.' │ 0.5149773359298706 │
    │ 1 │ 'The Great Wall of China is visible from space.' │ 0.5146723985671997 │
    └─────────┴─────────────────────────────────────────────────────────┴────────────────────┘
    --------------------------------------------------------------------------
    Results from Packed Binary (PackedBits) embeddings:
    ┌─────────┬─────────────────────────────────────────────────────────┬─────────────┐
    │ (index) │ text │ score │
    ├─────────┼─────────────────────────────────────────────────────────┼─────────────┤
    │ 0 │ 'Mount Everest is the highest peak on Earth at 8,848m.' │ 0.642578125 │
    │ 1 │ 'The Great Wall of China is visible from space.' │ 0.61328125 │
    └─────────┴─────────────────────────────────────────────────────────┴─────────────┘
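The score columns for the float32 and int8 results come from the dotProduct similarity, which Atlas Vector Search normalizes into the 0 to 1 range as score = (1 + dotProduct(v1, v2)) / 2. A minimal sketch of that normalization, assuming unit-length vectors:

```javascript
// Normalize a raw dot product into an Atlas-style 0..1 relevance score.
function dotProductScore(a, b) {
  const dot = a.reduce((sum, value, i) => sum + value * b[i], 0);
  return (1 + dot) / 2;
}

// Identical unit vectors score 1; orthogonal unit vectors score 0.5.
```

The int1 results use the euclidean similarity instead, so their scores are not directly comparable to the dotProduct scores.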
1

Run the following command to install the MongoDB Node.js Driver. This operation might take a few minutes to complete.

npm install mongodb

You must install version 6.11 or later of the MongoDB Node.js Driver. If necessary, you can also install libraries from your embedding model provider. For example, to generate float32, int8, and int1 embeddings by using Cohere as demonstrated on this page, install Cohere:

npm install cohere-ai dotenv
npm show cohere-ai version
2
  1. To access the embedding model provider for generating and converting embeddings, set the environment variable for the provider's API key, if necessary.

    To use embeddings from Cohere, set the COHERE_API_KEY environment variable.

    export COHERE_API_KEY="<COHERE-API-KEY>"

    If you don't set the environment variable, replace the <COHERE-API-KEY> in the sample code with the API key before running the code.

  2. To access your Atlas cluster, set the MONGODB_URI environment variable.

    export MONGODB_URI="<CONNECTION-STRING>"

    Your connection string should be in the following format:

    mongodb+srv://<db_username>:<db_password>@<clusterName>.<hostname>.mongodb.net

    If you don't set the environment variable, replace the <CONNECTION-STRING> in the sample code with your connection string before running the code.

3
  1. Create a file named get-data.js.

    touch get-data.js
  2. Copy and paste the following sample code to fetch the data from the sample_airbnb.listingsAndReviews namespace in your Atlas cluster.

    The sample code does the following:

    • Connects to your Atlas cluster and finds documents with the summary field.

    • Creates a file named subset.json to which it writes the data from the collection.

    get-data.js
    1const { MongoClient } = require('mongodb');
    2const fs = require('fs'); // Import the fs module for file system operations
    3
    4async function main() {
    5 // Replace with your Atlas connection string
    6 const uri = process.env.MONGODB_URI || '<CONNECTION-STRING>';
    7
    8 // Create a new MongoClient instance
    9 const client = new MongoClient(uri);
    10
    11 try {
    12 // Connect to your Atlas cluster
    13 await client.connect();
    14
    15 // Specify the database and collection
    16 const db = client.db('sample_airbnb');
    17 const collection = db.collection('listingsAndReviews');
    18
    19 // Filter to exclude null or empty summary fields
    20 const filter = { summary: { $nin: [null, ''] } };
    21
    22 // Get a subset of documents in the collection
    23 const documentsCursor = collection.find(filter).limit(50);
    24
    25 // Convert the cursor to an array to get the documents
    26 const documents = await documentsCursor.toArray();
    27
    28 // Log the documents to verify their content
    29 console.log('Documents retrieved:', documents);
    30
    31 // Write the documents to a local file called "subset.json"
    32 const outputFilePath = './subset.json';
    33 fs.writeFileSync(outputFilePath, JSON.stringify(documents, null, 2), 'utf-8');
    34
    35 console.log(`Subset of documents written to: ${outputFilePath}`);
    36 } catch (error) {
    37 console.error('An error occurred:', error);
    38 } finally {
    39 // Ensure the client is closed when finished
    40 await client.close();
    41 }
    42}
    43
    44main().catch(console.error);
  3. Replace the <CONNECTION-STRING> placeholder if you didn't set the environment variable for your Atlas connection string and then save the file.

  4. Run the following command to fetch the data:

    node get-data.js
    Subset of documents written to: ./subset.json
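The { summary: { $nin: [null, ''] } } filter used above matches documents whose summary field exists and is neither null nor an empty string (in MongoDB, null in a $nin list also excludes documents that are missing the field). An equivalent in-memory predicate, for illustration only:

```javascript
// Sketch of what the { summary: { $nin: [null, ''] } } filter matches:
// summary must be present, non-null, and non-empty.
function matchesFilter(doc) {
  return doc.summary !== undefined && doc.summary !== null && doc.summary !== '';
}
```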
4

If you already have float32, int8, or int1 vector embeddings in your collection, skip this step.

  1. Create a file named get-embeddings.js to generate float32, int8, and int1 vector embeddings by using Cohere's embed API.

    touch get-embeddings.js
  2. Copy and paste the following code in the get-embeddings.js file.

    This code does the following:

    • Generates float32, int8, and int1 embeddings for the given data by using Cohere's embed-english-v3.0 embedding model.

    • Stores the float32, int8, and int1 embeddings in fields named float, int8, and ubinary respectively.

    • Creates a file named embeddings.json and saves the embeddings in the file.

    get-embeddings.js
    // Import necessary modules using the CommonJS syntax
    const { CohereClient } = require('cohere-ai');
    const { readFile, writeFile } = require('fs/promises');

    // Retrieve the API key from environment variables or provide a placeholder
    const apiKey = process.env.COHERE_API_KEY || '<COHERE-API-KEY>';

    if (!apiKey || apiKey === '<COHERE-API-KEY>') {
      throw new Error('API key not found. Please set COHERE_API_KEY in your environment.');
    }

    // Initialize the Cohere client with the API key
    const cohere = new CohereClient({ token: apiKey });

    async function main() {
      try {
        // Read and parse the contents of 'subset.json'
        const subsetData = await readFile('subset.json', 'utf-8');
        const documents = JSON.parse(subsetData);

        // Extract the 'summary' fields that are non-empty strings
        const data = documents
          .map(doc => doc.summary)
          .filter(summary => typeof summary === 'string' && summary.length > 0);

        if (data.length === 0) {
          throw new Error('No valid summary texts available in the data.');
        }

        // Request embeddings from the Cohere API
        const response = await cohere.v2.embed({
          model: 'embed-english-v3.0',
          inputType: 'search_document',
          texts: data,
          embeddingTypes: ['float', 'int8', 'ubinary'],
        });

        // Extract embeddings from the API response
        const { float, int8, ubinary } = response.embeddings;

        // Structure the embeddings data
        const embeddingsData = data.map((text, index) => ({
          text,
          embeddings: {
            float: float[index],
            int8: int8[index],
            ubinary: ubinary[index],
          },
        }));

        // Write the embeddings data to 'embeddings.json'
        await writeFile('embeddings.json', JSON.stringify(embeddingsData, null, 2));
        console.log('Embeddings saved to embeddings.json');
      } catch (error) {
        console.error('Error fetching embeddings:', error);
      }
    }

    // Execute the main function
    main();
  3. If you didn't set the environment variable for your Cohere API Key, replace the <COHERE-API-KEY> placeholder and save the file.

  4. Run the code to generate the embeddings.

    node get-embeddings.js
    Embeddings saved to embeddings.json
  5. Verify the generated embeddings by opening the generated embeddings.json file.
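In embeddings.json, each float and int8 embedding has one entry per dimension, while each ubinary embedding has one eighth as many entries: Cohere returns the 1-bit values packed eight per byte. The following standalone sketch (illustrative only, not Cohere's implementation) shows how 1-bit values pack into bytes:

```javascript
// Pack an array of 0/1 values into bytes, most significant bit first
function packBits(bits) {
  const bytes = new Uint8Array(Math.ceil(bits.length / 8));
  for (let i = 0; i < bits.length; i++) {
    if (bits[i]) {
      bytes[i >> 3] |= 0x80 >> (i % 8);
    }
  }
  return bytes;
}

// A 1024-dimension binary vector packs into 128 bytes
const bits = new Array(1024).fill(0).map((_, i) => i % 2);
const packed = packBits(bits);
console.log(packed.length); // 128
```

This is why the index definitions later in this tutorial specify numDimensions in bits (1024) even though the stored ubinary array holds 128 bytes.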

5
  1. Create a file named convert-embeddings.js to convert the float32, int8, and int1 vector embeddings from Cohere to BSON binData vectors.

    touch convert-embeddings.js
  2. Copy and paste the following code in the convert-embeddings.js file.

    This code does the following:

    • Generates BSON binData vectors for the float32, int8, and int1 embeddings.

    • Appends the float32, int8, and packedBits BSON binData vectors to the embeddings.json file.

    convert-embeddings.js
    const fs = require('fs/promises');
    const { BSON } = require('mongodb');
    const { Binary } = BSON;

    async function main() {
      try {
        // Read and parse the contents of 'embeddings.json' file
        const fileContent = await fs.readFile('embeddings.json', 'utf8');
        const embeddingsData = JSON.parse(fileContent);

        // Map the embeddings data to add BSON binary representations with subtype 9
        const convertEmbeddingsData = embeddingsData.map(({ text, embeddings }) => {
          // Create Binary for Float32Array with subtype 9
          const bsonFloat32 = Binary.fromFloat32Array(new Float32Array(embeddings.float));

          // Create Binary for Int8Array with subtype 9
          const bsonInt8 = Binary.fromInt8Array(new Int8Array(embeddings.int8));

          // Create Binary for PackedBits (Uint8Array) with subtype 9
          const bsonPackedBits = Binary.fromPackedBits(new Uint8Array(embeddings.ubinary));

          return {
            text,
            embeddings: {
              float: embeddings.float, // Original float data
              int8: embeddings.int8, // Original int8 data
              ubinary: embeddings.ubinary, // Original packed bits data
            },
            bsonEmbeddings: {
              float32: bsonFloat32,
              int8: bsonInt8,
              packedBits: bsonPackedBits,
            },
          };
        });

        // Serialize the updated data to EJSON for BSON compatibility
        const ejsonSerializedData = BSON.EJSON.stringify(convertEmbeddingsData, null, null, { relaxed: false });

        // Write the serialized data to 'embeddings.json'
        await fs.writeFile('embeddings.json', ejsonSerializedData);
        console.log('Embeddings with BSON vectors have been saved to embeddings.json');
      } catch (error) {
        console.error('Error processing embeddings:', error);
      }
    }

    main();
  3. Run the program to generate the BSON binData vectors.

    node convert-embeddings.js
    Embeddings with BSON vectors have been saved to embeddings.json
  4. Verify the generated BSON embeddings in the embeddings.json file.

6
  1. Create a file named upload-data.js to connect to the Atlas cluster and upload the data to the sample_airbnb.listingsAndReviews namespace.

    touch upload-data.js
  2. Copy and paste the following code in the upload-data.js file.

    This code does the following:

    • Connects to your Atlas cluster and creates a namespace with the database and collection name that you specify.

    • Uploads the data including the embeddings into the sample_airbnb.listingsAndReviews namespace.

    upload-data.js
    const fs = require('fs/promises'); // Use fs/promises for asynchronous operations
    const { MongoClient } = require('mongodb'); // Import from the 'mongodb' package
    const { EJSON } = require('bson'); // Import EJSON from bson

    async function main() {
      const MONGODB_URI = process.env.MONGODB_URI || "<CONNECTION-STRING>";
      const DB_NAME = "sample_airbnb";
      const COLLECTION_NAME = "listingsAndReviews";

      let client;
      try {
        // Connect to MongoDB
        client = new MongoClient(MONGODB_URI);
        await client.connect();
        console.log("Connected to MongoDB");

        // Access database and collection
        const db = client.db(DB_NAME);
        const collection = db.collection(COLLECTION_NAME);

        // Load embeddings from JSON using EJSON.parse
        const fileContent = await fs.readFile('embeddings.json', 'utf8');
        const embeddingsData = EJSON.parse(fileContent);

        // Map embeddings data to recreate BSON binary representations
        const documents = embeddingsData.map(({ text, bsonEmbeddings }) => {
          return {
            summary: text,
            bsonEmbeddings: {
              float32: bsonEmbeddings.float32,
              int8: bsonEmbeddings.int8,
              int1: bsonEmbeddings.packedBits
            }
          };
        });

        // Iterate over documents and upsert each into the MongoDB collection
        for (const doc of documents) {
          const filter = { summary: doc.summary };
          const update = { $set: doc };

          // Update the document with the BSON binary data
          const result = await collection.updateOne(filter, update, { upsert: true });
          if (result.matchedCount > 0) {
            console.log(`Updated document with summary: ${doc.summary}`);
          } else {
            console.log(`Inserted new document with summary: ${doc.summary}`);
          }
        }

        console.log("Embeddings stored in MongoDB successfully.");
      } catch (error) {
        console.error('Error storing embeddings in MongoDB:', error);
      } finally {
        if (client) {
          await client.close();
        }
      }
    }

    // Run the main function to load the data
    main();
  3. Replace the <CONNECTION-STRING> placeholder if you didn't set the environment variable for your Atlas connection string and then save the file.

  4. Run the following command to upload the data.

    node upload-data.js
    Connected to MongoDB
    Updated document with summary: ...
    ...
    Embeddings stored in MongoDB successfully.
  5. Verify by logging into your Atlas cluster and checking the namespace in the Data Explorer.

7
  1. Create a file named create-index.js.

    touch create-index.js
  2. Copy and paste the following code to create the index in the create-index.js file.

    The code does the following:

    • Connects to the Atlas cluster and creates an index with the specified name for the specified namespace.

    • Indexes the bsonEmbeddings.float32 and bsonEmbeddings.int8 fields as vector type with the dotProduct similarity function, and the bsonEmbeddings.int1 field as vector type with the euclidean similarity function.

    create-index.js
    const { MongoClient } = require("mongodb");
    const { setTimeout } = require("timers/promises"); // Import from timers/promises

    // Connect to your Atlas deployment
    const uri = process.env.MONGODB_URI || "<CONNECTION-STRING>";

    const client = new MongoClient(uri);

    async function main() {
      try {
        const database = client.db("<DB-NAME>");
        const collection = database.collection("<COLLECTION-NAME>");

        // Define your Atlas Vector Search index
        const index = {
          name: "<INDEX-NAME>",
          type: "vectorSearch",
          definition: {
            fields: [
              {
                type: "vector",
                numDimensions: 1024,
                path: "bsonEmbeddings.float32",
                similarity: "dotProduct",
              },
              {
                type: "vector",
                numDimensions: 1024,
                path: "bsonEmbeddings.int8",
                similarity: "dotProduct",
              },
              {
                type: "vector",
                numDimensions: 1024,
                path: "bsonEmbeddings.int1",
                similarity: "euclidean",
              },
            ],
          },
        };

        // Run the helper method
        const result = await collection.createSearchIndex(index);
        console.log(`New search index named ${result} is building.`);

        // Wait for the index to be ready to query
        console.log("Polling to check if the index is ready. This may take up to a minute.");
        let isQueryable = false;

        // Poll the index status until it is queryable
        while (!isQueryable) {
          const [indexData] = await collection.listSearchIndexes(index.name).toArray();

          if (indexData) {
            isQueryable = indexData.queryable;
            if (!isQueryable) {
              await setTimeout(5000); // Wait for 5 seconds before checking again
            }
          } else {
            // Handle the case where the index might not be found
            console.log(`Index ${index.name} not found.`);
            await setTimeout(5000); // Wait for 5 seconds before checking again
          }
        }

        console.log(`${result} is ready for querying.`);
      } catch (error) {
        console.error("Error:", error);
      } finally {
        await client.close();
      }
    }

    main().catch((err) => {
      console.error("Unhandled error:", err);
    });
  3. Replace the following settings and save the file.

    <CONNECTION-STRING>

    Connection string to connect to your Atlas cluster.

    Replace this value only if you didn't set the MONGODB_URI environment variable.

    <DB-NAME>

    Name of the database, which is sample_airbnb.

    <COLLECTION-NAME>

    Name of the collection, which is listingsAndReviews.

    <INDEX-NAME>

    Name of the index for the collection.

  4. Run the following command to create the index.

    node create-index.js
    New search index named vector_index is building.
    Polling to check if the index is ready. This may take up to a minute.
    vector_index is ready for querying.
8
  1. Create a file named get-query-embeddings.js.

    touch get-query-embeddings.js
  2. Copy and paste the following code in the get-query-embeddings.js file.

    The sample code does the following:

    • Generates float32, int8, and int1 embeddings for the query text by using Cohere.

    • Converts the generated embeddings to BSON binData vectors by using the MongoDB Node.js driver.

    • Saves the generated embeddings to a file named query-embeddings.json.

    get-query-embeddings.js
    const { CohereClient } = require('cohere-ai');
    const { BSON } = require('mongodb');
    const { writeFile } = require('fs/promises');
    const dotenv = require('dotenv');
    const process = require('process');

    // Load environment variables
    dotenv.config();

    const { Binary } = BSON;

    // Get the API key from environment variables or set the key here
    const apiKey = process.env.COHERE_API_KEY || '<COHERE-API-KEY>';

    if (!apiKey) {
      throw new Error('API key not found. Provide the COHERE_API_KEY.');
    }

    // Initialize CohereClient
    const cohere = new CohereClient({ token: apiKey });

    async function main(queryText) {
      try {
        if (typeof queryText !== 'string' || queryText.trim() === '') {
          throw new Error('Invalid query text. It must be a non-empty string.');
        }

        const data = [queryText];

        // Request embeddings from the Cohere API
        const response = await cohere.v2.embed({
          model: 'embed-english-v3.0',
          inputType: 'search_query',
          texts: data,
          embeddingTypes: ['float', 'int8', 'ubinary'], // Request all required embedding types
        });

        if (!response.embeddings) {
          throw new Error('Embeddings not found in the API response.');
        }

        const { float, int8, ubinary } = response.embeddings;

        const updatedEmbeddingsData = data.map((text, index) => {
          // Create the BSON Binary objects for all embedding types
          const float32Binary = Binary.fromFloat32Array(new Float32Array(float[index])); // VECTOR_TYPE.FLOAT32
          const int8Binary = Binary.fromInt8Array(new Int8Array(int8[index])); // VECTOR_TYPE.INT8
          const packedBitsBinary = Binary.fromPackedBits(new Uint8Array(ubinary[index])); // VECTOR_TYPE.PACKED_BIT

          return {
            text,
            embeddings: {
              float: float[index],
              int8: int8[index],
              ubinary: ubinary[index],
            },
            bsonEmbeddings: {
              float32: float32Binary,
              int8: int8Binary,
              int1: packedBitsBinary,
            },
          };
        });

        // Serialize the embeddings using BSON EJSON for BSON compatibility
        const outputFileName = 'query-embeddings.json';
        const ejsonSerializedData = BSON.EJSON.stringify(updatedEmbeddingsData, null, null, { relaxed: false });
        await writeFile(outputFileName, ejsonSerializedData);
        console.log(`Embeddings with BSON data have been saved to ${outputFileName}`);
      } catch (error) {
        console.error('Error processing query text:', error);
      }
    }

    // Main function that takes a query string
    (async () => {
      const queryText = "<QUERY-TEXT>"; // Replace with your actual query text
      await main(queryText);
    })();
  3. Replace the following settings and save the file.

    <COHERE-API-KEY>

    Your API Key for Cohere. Only replace this value if you didn't set the key as an environment variable.

    <QUERY-TEXT>

    Your query text. For this example, use ocean view.

  4. Run the code to generate the embeddings for the query text.

    node get-query-embeddings.js
    Embeddings with BSON data have been saved to query-embeddings.json
9
  1. Create a file named run-query.js.

    touch run-query.js
  2. Copy and paste the following sample $vectorSearch query in the run-query.js file.

    The sample query does the following:

    • Connects to your Atlas cluster and runs the $vectorSearch query against the bsonEmbeddings.float32, bsonEmbeddings.int8, and bsonEmbeddings.int1 fields in the sample_airbnb.listingsAndReviews namespace by using the embeddings in the query-embeddings.json file.

    • Prints the results from Float32, Int8, and Packed Binary (Int1) embeddings to the console.

    run-query.js
    const { MongoClient } = require('mongodb');
    const fs = require('fs/promises');
    const { BSON } = require('bson'); // Use BSON's functionality for EJSON parsing
    const dotenv = require('dotenv');

    dotenv.config();

    // MongoDB connection details
    const mongoUri = process.env.MONGODB_URI || '<CONNECTION-STRING>';
    const dbName = 'sample_airbnb'; // Update with your actual database name
    const collectionName = 'listingsAndReviews'; // Update with your actual collection name

    // Indices and paths should match your MongoDB vector search configuration
    const VECTOR_INDEX_NAME = '<INDEX-NAME>'; // Replace with your actual index name
    const NUM_CANDIDATES = 20; // Number of candidate documents for the search
    const LIMIT = 5; // Limit for the number of documents to return

    // Fields in the collection that contain the BSON query vectors
    const FIELDS = [
      { path: 'float32', subtype: 9 }, // Ensure that the path and subtype match
      { path: 'int8', subtype: 9 },
      { path: 'int1', subtype: 9 }
    ];

    // Function to read BSON vectors from JSON and run vector search
    async function main() {
      // Initialize MongoDB client
      const client = new MongoClient(mongoUri);

      try {
        await client.connect();
        console.log("Connected to MongoDB");

        const db = client.db(dbName);
        const collection = db.collection(collectionName);

        // Load query embeddings from JSON file using EJSON parsing
        const fileContent = await fs.readFile('query-embeddings.json', 'utf8');
        const embeddingsData = BSON.EJSON.parse(fileContent);

        // Define and run the query for each embedding type
        const results = {};

        for (const fieldInfo of FIELDS) {
          const { path } = fieldInfo;
          const bsonBinary = embeddingsData[0]?.bsonEmbeddings?.[path];

          if (!bsonBinary) {
            console.warn(`BSON embedding for ${path} not found in the JSON.`);
            continue;
          }

          const bsonQueryVector = bsonBinary; // Directly use BSON Binary object

          const pipeline = [
            {
              $vectorSearch: {
                index: VECTOR_INDEX_NAME,
                path: `bsonEmbeddings.${path}`,
                queryVector: bsonQueryVector,
                numCandidates: NUM_CANDIDATES,
                limit: LIMIT,
              }
            },
            {
              $project: {
                _id: 0,
                name: 1,
                summary: 1, // Adjust projection fields as necessary to match your document structure
                score: { $meta: 'vectorSearchScore' }
              }
            }
          ];

          results[path] = await collection.aggregate(pipeline).toArray();
        }

        return results;
      } catch (error) {
        console.error('Error during vector search:', error);
      } finally {
        await client.close();
      }
    }

    // Main execution block
    (async () => {
      try {
        const results = await main();

        if (results) {
          console.log("Results from Float32 embeddings:");
          (results.float32 || []).forEach((result, index) => {
            console.log(`Result ${index + 1}:`, result);
          });

          console.log("Results from Int8 embeddings:");
          (results.int8 || []).forEach((result, index) => {
            console.log(`Result ${index + 1}:`, result);
          });

          console.log("Results from Packed Binary (PackedBits) embeddings:");
          (results.int1 || []).forEach((result, index) => {
            console.log(`Result ${index + 1}:`, result);
          });
        }
      } catch (error) {
        console.error('Error executing main function:', error);
      }
    })();
  3. Replace the following settings and save the run-query.js file.

    <CONNECTION-STRING>

    Connection string to connect to your Atlas cluster.

    Replace this value if you didn't set the MONGODB_URI environment variable.

    <INDEX-NAME>

    Name of the index for the collection.

  4. Run the query.

    To execute the query, run the following command:

    node run-query.js
    Connected to MongoDB
    Results from Float32 embeddings:
    Result 1: {
    name: 'Makaha Valley Paradise with OceanView',
    summary: "A beautiful and comfortable 1 Bedroom Air Conditioned Condo in Makaha Valley - stunning Ocean & Mountain views All the amenities of home, suited for longer stays. Full kitchen & large bathroom. Several gas BBQ's for all guests to use & a large heated pool surrounded by reclining chairs to sunbathe. The Ocean you see in the pictures is not even a mile away, known as the famous Makaha Surfing Beach. Golfing, hiking,snorkeling paddle boarding, surfing are all just minutes from the front door.",
    score: 0.7278661131858826
    }
    Result 2: {
    name: 'Ocean View Waikiki Marina w/prkg',
    summary: "A short distance from Honolulu's billion dollar mall, and the same distance to Waikiki. Parking included. A great location that work perfectly for business, education, or simple visit. Experience Yacht Harbor views and 5 Star Hilton Hawaiian Village.",
    score: 0.688639760017395
    }
    Result 3: {
    name: 'A Casa Alegre é um apartamento T1.',
    summary: 'Para 2 pessoas. Vista de mar a 150 mts. Prédio com 2 elevadores. Tem: - quarto com roupeiro e cama de casal (colchão magnetizado); - cozinha: placa de discos, exaustor, frigorifico, micro-ondas e torradeira; casa de banho completa; - sala e varanda.',
    score: 0.6831139326095581
    }
    Result 4: {
    name: 'Your spot in Copacabana',
    summary: 'Having a large airy living room. The apartment is well divided. Fully furnished and cozy. The building has a 24h doorman and camera services in the corridors. It is very well located, close to the beach, restaurants, pubs and several shops and supermarkets. And it offers a good mobility being close to the subway.',
    score: 0.6802051663398743
    }
    Result 5: {
    name: 'LAHAINA, MAUI! RESORT/CONDO BEACHFRONT!! SLEEPS 4!',
    summary: 'THIS IS A VERY SPACIOUS 1 BEDROOM FULL CONDO (SLEEPS 4) AT THE BEAUTIFUL VALLEY ISLE RESORT ON THE BEACH IN LAHAINA, MAUI!! YOU WILL LOVE THE PERFECT LOCATION OF THIS VERY NICE HIGH RISE! ALSO THIS SPACIOUS FULL CONDO, FULL KITCHEN, BIG BALCONY!!',
    score: 0.6779564619064331
    }
    Results from Int8 embeddings:
    Result 1: {
    name: 'Makaha Valley Paradise with OceanView',
    summary: "A beautiful and comfortable 1 Bedroom Air Conditioned Condo in Makaha Valley - stunning Ocean & Mountain views All the amenities of home, suited for longer stays. Full kitchen & large bathroom. Several gas BBQ's for all guests to use & a large heated pool surrounded by reclining chairs to sunbathe. The Ocean you see in the pictures is not even a mile away, known as the famous Makaha Surfing Beach. Golfing, hiking,snorkeling paddle boarding, surfing are all just minutes from the front door.",
    score: 0.5215557217597961
    }
    Result 2: {
    name: 'Ocean View Waikiki Marina w/prkg',
    summary: "A short distance from Honolulu's billion dollar mall, and the same distance to Waikiki. Parking included. A great location that work perfectly for business, education, or simple visit. Experience Yacht Harbor views and 5 Star Hilton Hawaiian Village.",
    score: 0.5179016590118408
    }
    Result 3: {
    name: 'A Casa Alegre é um apartamento T1.',
    summary: 'Para 2 pessoas. Vista de mar a 150 mts. Prédio com 2 elevadores. Tem: - quarto com roupeiro e cama de casal (colchão magnetizado); - cozinha: placa de discos, exaustor, frigorifico, micro-ondas e torradeira; casa de banho completa; - sala e varanda.',
    score: 0.5173280239105225
    }
    Result 4: {
    name: 'Your spot in Copacabana',
    summary: 'Having a large airy living room. The apartment is well divided. Fully furnished and cozy. The building has a 24h doorman and camera services in the corridors. It is very well located, close to the beach, restaurants, pubs and several shops and supermarkets. And it offers a good mobility being close to the subway.',
    score: 0.5170232057571411
    }
    Result 5: {
    name: 'LAHAINA, MAUI! RESORT/CONDO BEACHFRONT!! SLEEPS 4!',
    summary: 'THIS IS A VERY SPACIOUS 1 BEDROOM FULL CONDO (SLEEPS 4) AT THE BEAUTIFUL VALLEY ISLE RESORT ON THE BEACH IN LAHAINA, MAUI!! YOU WILL LOVE THE PERFECT LOCATION OF THIS VERY NICE HIGH RISE! ALSO THIS SPACIOUS FULL CONDO, FULL KITCHEN, BIG BALCONY!!',
    score: 0.5168724060058594
    }
    Results from Packed Binary (PackedBits) embeddings:
    Result 1: {
    name: 'Makaha Valley Paradise with OceanView',
    summary: "A beautiful and comfortable 1 Bedroom Air Conditioned Condo in Makaha Valley - stunning Ocean & Mountain views All the amenities of home, suited for longer stays. Full kitchen & large bathroom. Several gas BBQ's for all guests to use & a large heated pool surrounded by reclining chairs to sunbathe. The Ocean you see in the pictures is not even a mile away, known as the famous Makaha Surfing Beach. Golfing, hiking,snorkeling paddle boarding, surfing are all just minutes from the front door.",
    score: 0.6591796875
    }
    Result 2: {
    name: 'Ocean View Waikiki Marina w/prkg',
    summary: "A short distance from Honolulu's billion dollar mall, and the same distance to Waikiki. Parking included. A great location that work perfectly for business, education, or simple visit. Experience Yacht Harbor views and 5 Star Hilton Hawaiian Village.",
    score: 0.6337890625
    }
    Result 3: {
    name: 'A Casa Alegre é um apartamento T1.',
    summary: 'Para 2 pessoas. Vista de mar a 150 mts. Prédio com 2 elevadores. Tem: - quarto com roupeiro e cama de casal (colchão magnetizado); - cozinha: placa de discos, exaustor, frigorifico, micro-ondas e torradeira; casa de banho completa; - sala e varanda.',
    score: 0.62890625
    }
    Result 4: {
    name: 'LAHAINA, MAUI! RESORT/CONDO BEACHFRONT!! SLEEPS 4!',
    summary: 'THIS IS A VERY SPACIOUS 1 BEDROOM FULL CONDO (SLEEPS 4) AT THE BEAUTIFUL VALLEY ISLE RESORT ON THE BEACH IN LAHAINA, MAUI!! YOU WILL LOVE THE PERFECT LOCATION OF THIS VERY NICE HIGH RISE! ALSO THIS SPACIOUS FULL CONDO, FULL KITCHEN, BIG BALCONY!!',
    score: 0.6279296875
    }
    Result 5: {
    name: 'Be Happy in Porto',
    summary: 'Be Happy Apartment is an amazing space. Renovated and comfortable apartment, located in a building dating from the nineteenth century in one of the most emblematic streets of the Porto city "Rua do Almada". Be Happy Apartment is located in the city center, able you to visit the historic center only by foot, being very close of majority points of interesting of the Porto City. Be Happy Apartment is located close of central Station MetroTrindade.',
    score: 0.619140625
    }

    Your results might be different because the generated embeddings can vary depending on your environment.
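Note that the score ranges differ across the three embedding types because Atlas Vector Search normalizes each similarity function into a score between 0 and 1. Assuming the documented normalization formulas (score = (1 + dotProduct) / 2 for dotProduct, and score = 1 / (1 + distance) for euclidean), a minimal sketch of the calculation:

```javascript
// Score for two unit-length vectors under the dotProduct similarity function
function dotProductScore(a, b) {
  let dot = 0;
  for (let i = 0; i < a.length; i++) dot += a[i] * b[i];
  return (1 + dot) / 2; // assumes vectors normalized to unit length
}

// Score for two vectors under the euclidean similarity function
function euclideanScore(a, b) {
  let sq = 0;
  for (let i = 0; i < a.length; i++) sq += (a[i] - b[i]) ** 2;
  return (1 / (1 + Math.sqrt(sq)));
}

console.log(dotProductScore([1, 0], [1, 0])); // identical unit vectors score 1
console.log(euclideanScore([0, 0], [3, 4])); // distance 5 gives score 1/6
```

This is why the int1 (euclidean) scores in the sample output cluster in a different range than the float32 and int8 (dotProduct) scores, even though the ranked results largely agree.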

Create an interactive Python notebook by saving a file with the .ipynb extension, and then perform the following steps in the notebook. To try the example, replace the placeholders with valid values.

Work with a runnable version of this tutorial as a Python notebook.

1

Run the following command to install the PyMongo Driver. If necessary, you can also install libraries from your embedding model provider. This operation might take a few minutes to complete.

pip install pymongo

You must install the PyMongo driver v4.10 or later.

Example

Install PyMongo and Cohere

pip install --quiet --upgrade pymongo cohere
2

Example

Sample Data to Import

data = [
    "The Great Wall of China is visible from space.",
    "The Eiffel Tower was completed in Paris in 1889.",
    "Mount Everest is the highest peak on Earth at 8,848m.",
    "Shakespeare wrote 37 plays and 154 sonnets during his lifetime.",
    "The Mona Lisa was painted by Leonardo da Vinci.",
]
3

This step is required if you haven't yet generated embeddings from your data. If you've already generated embeddings, skip this step. To learn more about generating embeddings from your data, see How to Create Vector Embeddings.

Example

Generate Embeddings from Sample Data Using Cohere

Placeholder
Valid Value

<COHERE-API-KEY>

API key for Cohere.

import os
import cohere

# Specify your Cohere API key
os.environ["COHERE_API_KEY"] = "<COHERE-API-KEY>"
cohere_client = cohere.Client(os.environ["COHERE_API_KEY"])

# Generate embeddings using the embed-english-v3.0 model
generated_embeddings = cohere_client.embed(
    texts=data,
    model="embed-english-v3.0",
    input_type="search_document",
    embedding_types=["float", "int8", "ubinary"]
).embeddings

float32_embeddings = generated_embeddings.float
int8_embeddings = generated_embeddings.int8
int1_embeddings = generated_embeddings.ubinary
4

You can use the PyMongo driver to convert your native vector embeddings to BSON vectors.

Example

Define and Run a Function to Generate BSON Vectors

from bson.binary import Binary, BinaryVectorDtype

def generate_bson_vector(vector, vector_dtype):
    return Binary.from_vector(vector, vector_dtype)

# For all vectors in your collection, generate BSON vectors of float32, int8, and int1 embeddings
bson_float32_embeddings = []
bson_int8_embeddings = []
bson_int1_embeddings = []
for (f32_emb, int8_emb, int1_emb) in zip(float32_embeddings, int8_embeddings, int1_embeddings):
    bson_float32_embeddings.append(generate_bson_vector(f32_emb, BinaryVectorDtype.FLOAT32))
    bson_int8_embeddings.append(generate_bson_vector(int8_emb, BinaryVectorDtype.INT8))
    bson_int1_embeddings.append(generate_bson_vector(int1_emb, BinaryVectorDtype.PACKED_BIT))
5

If you already have the BSON vector embeddings inside of documents in your collection, skip this step.

Example

Create Documents from the Sample Data

Placeholder
Valid Value

<FIELD-NAME-FOR-FLOAT32-TYPE>

Name of field with float32 values.

<FIELD-NAME-FOR-INT8-TYPE>

Name of field with int8 values.

<FIELD-NAME-FOR-INT1-TYPE>

Name of field with int1 values.

# Specify the field names for the float32, int8, and int1 embeddings
float32_field = "<FIELD-NAME-FOR-FLOAT32-TYPE>"
int8_field = "<FIELD-NAME-FOR-INT8-TYPE>"
int1_field = "<FIELD-NAME-FOR-INT1-TYPE>"

# Define function to create documents with BSON vector embeddings
def create_docs_with_bson_vector_embeddings(bson_float32_embeddings, bson_int8_embeddings, bson_int1_embeddings, data):
    docs = []
    for i, (bson_f32_emb, bson_int8_emb, bson_int1_emb, text) in enumerate(zip(bson_float32_embeddings, bson_int8_embeddings, bson_int1_embeddings, data)):
        doc = {
            "_id": i,
            "data": text,
            float32_field: bson_f32_emb,
            int8_field: bson_int8_emb,
            int1_field: bson_int1_emb
        }
        docs.append(doc)
    return docs

# Create the documents
documents = create_docs_with_bson_vector_embeddings(bson_float32_embeddings, bson_int8_embeddings, bson_int1_embeddings, data)
6

You can load your data from the Atlas UI or programmatically. To learn how to load your data from the Atlas UI, see Insert Your Data. The following steps and associated examples demonstrate how to load your data programmatically by using the PyMongo driver.

  1. Connect to your Atlas cluster.

    Placeholder
    Valid Value

    <ATLAS-CONNECTION-STRING>

    Atlas connection string. To learn more, see Connect via Drivers.

    Example

    import os
    import pymongo

    # Read the connection string from the environment, or use the placeholder
    MONGO_URI = os.environ.get("MONGO_URI", "<ATLAS-CONNECTION-STRING>")
    if not MONGO_URI:
        print("MONGO_URI not set in environment variables")
    mongo_client = pymongo.MongoClient(MONGO_URI)
  2. Load the data into your Atlas cluster.

    Placeholder
    Valid Value

    <DB-NAME>

    Name of the database.

    <COLLECTION-NAME>

    Name of the collection in the specified database.

    Example

    # Insert documents into a new database and collection
    db = mongo_client["<DB-NAME>"]
    collection_name = "<COLLECTION-NAME>"
    db.create_collection(collection_name)
    collection = db[collection_name]
    collection.insert_many(documents)
7

You can create Atlas Vector Search indexes by using the Atlas UI, Atlas CLI, Atlas Administration API, and MongoDB drivers. To learn more, see How to Index Fields for Vector Search.

Example

Create Index for the Sample Collection

Placeholder
Valid Value

<INDEX-NAME>

Name of vector type index.

from pymongo.operations import SearchIndexModel
import time

# Define and create the vector search index
index_name = "<INDEX-NAME>"
search_index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": float32_field,
                "similarity": "dotProduct",
                "numDimensions": 1024
            },
            {
                "type": "vector",
                "path": int8_field,
                "similarity": "dotProduct",
                "numDimensions": 1024
            },
            {
                "type": "vector",
                "path": int1_field,
                "similarity": "euclidean",
                "numDimensions": 1024
            }
        ]
    },
    name=index_name,
    type="vectorSearch"
)
result = collection.create_search_index(model=search_index_model)
print("New search index named " + result + " is building.")

# Wait for initial sync to complete
print("Polling to check if the index is ready. This may take up to a minute.")
predicate = lambda index: index.get("queryable") is True
while True:
    indices = list(collection.list_search_indexes(index_name))
    if len(indices) and predicate(indices[0]):
        break
    time.sleep(5)
print(result + " is ready for querying.")
8

The function to run Atlas Vector Search queries must perform the following actions:

  • Convert the query text to a BSON vector.

  • Define the pipeline for the Atlas Vector Search query.

Example

Placeholder                           Valid Value
<NUMBER-OF-CANDIDATES-TO-CONSIDER>    Number of nearest neighbors to use during the search.
<NUMBER-OF-DOCUMENTS-TO-RETURN>       Number of documents to return in the results.

# Define a function to run a vector search query
def run_vector_search(query_text, collection, path):
    query_text_embeddings = cohere_client.embed(
        texts=[query_text],
        model="embed-english-v3.0",
        input_type="search_query",
        embedding_types=["float", "int8", "ubinary"]
    ).embeddings
    if path == float32_field:
        query_vector = query_text_embeddings.float[0]
        vector_dtype = BinaryVectorDtype.FLOAT32
    elif path == int8_field:
        query_vector = query_text_embeddings.int8[0]
        vector_dtype = BinaryVectorDtype.INT8
    elif path == int1_field:
        query_vector = query_text_embeddings.ubinary[0]
        vector_dtype = BinaryVectorDtype.PACKED_BIT
    bson_query_vector = generate_bson_vector(query_vector, vector_dtype)
    pipeline = [
        {
            '$vectorSearch': {
                'index': index_name,
                'path': path,
                'queryVector': bson_query_vector,
                'numCandidates': <NUMBER-OF-CANDIDATES-TO-CONSIDER>, # for example, 5
                'limit': <NUMBER-OF-DOCUMENTS-TO-RETURN> # for example, 2
            }
        },
        {
            '$project': {
                '_id': 0,
                'data': 1,
                'score': { '$meta': 'vectorSearchScore' }
            }
        }
    ]
    return collection.aggregate(pipeline)
9

You can run Atlas Vector Search queries programmatically. To learn more, see Run Vector Search Queries.

Example

from pprint import pprint
# Run the vector search query on the float32, int8, and int1 embeddings
query_text = "tell me a science fact"
float32_results = run_vector_search(query_text, collection, float32_field)
int8_results = run_vector_search(query_text, collection, int8_field)
int1_results = run_vector_search(query_text, collection, int1_field)
print("results from float32 embeddings")
pprint(list(float32_results))
print("--------------------------------------------------------------------------")
print("results from int8 embeddings")
pprint(list(int8_results))
print("--------------------------------------------------------------------------")
print("results from int1 embeddings")
pprint(list(int1_results))
results from float32 embeddings
[{'data': 'Mount Everest is the highest peak on Earth at 8,848m.',
'score': 0.6578356027603149},
{'data': 'The Great Wall of China is visible from space.',
'score': 0.6420407891273499}]
--------------------------------------------------------------------------
results from int8 embeddings
[{'data': 'Mount Everest is the highest peak on Earth at 8,848m.',
'score': 0.5149182081222534},
{'data': 'The Great Wall of China is visible from space.',
'score': 0.5136760473251343}]
--------------------------------------------------------------------------
results from int1 embeddings
[{'data': 'Mount Everest is the highest peak on Earth at 8,848m.',
'score': 0.62109375},
{'data': 'The Great Wall of China is visible from space.',
'score': 0.61328125}]

Work with a runnable version of this tutorial as a Python notebook.

1

Run the following command to install the PyMongo Driver. If necessary, you can also install libraries from your embedding model provider. This operation might take a few minutes to complete.

pip install pymongo

You must install the PyMongo driver v4.10 or later.
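PyMongo 4.10 is the first release with the BSON vector helpers (such as Binary.from_vector) that this tutorial relies on, so it can be worth failing fast on older drivers. A minimal sketch of the version comparison; the helper name is our own, and in practice you would pass it the string in pymongo.version:

```python
def meets_min_version(version: str, minimum=(4, 10)) -> bool:
    """Return True if a dotted version string is at least the given (major, minor)."""
    parts = []
    for token in version.split("."):
        digits = "".join(ch for ch in token if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    # Compare numerically so that "4.9" < "4.10"
    return tuple(parts[:2]) >= minimum

print(meets_min_version("4.10.1"))  # True
print(meets_min_version("4.9.2"))   # False
```

Comparing tuples avoids the classic string-comparison trap where "4.9" sorts after "4.10".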

Example

Install PyMongo and Cohere

pip install --quiet --upgrade pymongo cohere
2

You must define functions that use an embedding model to do the following:

  • Generate embeddings from your existing data if it doesn't already have embeddings.

  • Convert the embeddings to BSON vectors.

Example

Function to Generate and Convert Embeddings

Placeholder         Valid Value
<COHERE-API-KEY>    API key for Cohere.

import os
import pymongo
import cohere
from bson.binary import Binary, BinaryVectorDtype

# Specify your Cohere API key
os.environ["COHERE_API_KEY"] = "<COHERE-API-KEY>"
cohere_client = cohere.Client(os.environ["COHERE_API_KEY"])

# Define function to generate embeddings using the embed-english-v3.0 model
def get_embedding(text):
    response = cohere_client.embed(
        texts=[text],
        model='embed-english-v3.0',
        input_type='search_document',
        embedding_types=["float"]
    )
    embedding = response.embeddings.float[0]
    return embedding

# Define function to convert embeddings to BSON-compatible format
def generate_bson_vector(vector, vector_dtype):
    return Binary.from_vector(vector, vector_dtype)
import os
import pymongo
import cohere
from bson.binary import Binary, BinaryVectorDtype

# Specify your Cohere API key
os.environ["COHERE_API_KEY"] = "<COHERE-API-KEY>"
cohere_client = cohere.Client(os.environ["COHERE_API_KEY"])

# Define function to generate embeddings using the embed-english-v3.0 model
def get_embedding(text):
    response = cohere_client.embed(
        texts=[text],
        model='embed-english-v3.0',
        input_type='search_document',
        embedding_types=["int8"]
    )
    embedding = response.embeddings.int8[0]
    return embedding

# Define function to convert embeddings to BSON-compatible format
def generate_bson_vector(vector, vector_dtype):
    return Binary.from_vector(vector, vector_dtype)
import os
import pymongo
import cohere
from bson.binary import Binary, BinaryVectorDtype

# Specify your Cohere API key
os.environ["COHERE_API_KEY"] = "<COHERE-API-KEY>"
cohere_client = cohere.Client(os.environ["COHERE_API_KEY"])

# Define function to generate embeddings using the embed-english-v3.0 model
def get_embedding(text):
    response = cohere_client.embed(
        texts=[text],
        model='embed-english-v3.0',
        input_type='search_document',
        embedding_types=["ubinary"]
    )
    embedding = response.embeddings.ubinary[0]
    return embedding

# Define function to convert embeddings to BSON-compatible format
def generate_bson_vector(vector, vector_dtype):
    return Binary.from_vector(vector, vector_dtype)
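Cohere's ubinary embeddings arrive pre-packed: each byte carries eight binary dimensions, which is why a 1024-dimension int1 vector needs only 128 bytes and why the corresponding BSON subtype is called PACKED_BIT. A minimal sketch of that packing, most significant bit first (illustrative only, not Cohere's implementation):

```python
def pack_bits(bits):
    """Pack a sequence of 0/1 values into bytes, most significant bit first."""
    assert len(bits) % 8 == 0, "dimension count must be a multiple of 8"
    packed = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit  # shift in one dimension at a time
        packed.append(byte)
    return bytes(packed)

# 16 binary dimensions pack into 2 bytes: 0b10101010 and 0b11110000
print(pack_bits([1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0]))  # b'\xaa\xf0'
```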
3

You must provide the following:

  • Connection string to connect to your Atlas cluster that contains the database and collection for which you want to generate embeddings.

  • Name of the database that contains the collection for which you want to generate embeddings.

  • Name of the collection for which you want to generate embeddings.

Example

Connect to Atlas Cluster for Accessing Data

Placeholder                  Valid Value
<ATLAS-CONNECTION-STRING>    Atlas connection string. To learn more, see Connect via Drivers.

# Connect to your Atlas cluster
mongo_client = pymongo.MongoClient("<ATLAS-CONNECTION-STRING>")
db = mongo_client["sample_airbnb"]
collection = db["listingsAndReviews"]

# Filter to exclude null or empty summary fields
filter = { "summary": {"$nin": [None, ""]} }

# Get a subset of documents in the collection
documents = collection.find(filter).limit(50)

# Initialize the count of updated documents
updated_doc_count = 0
4
  1. Generate embeddings from your data using any embedding model if your data doesn't already have embeddings. To learn more about generating embeddings from your data, see How to Create Vector Embeddings.

  2. Convert the embeddings to BSON vectors, as shown by the generate_bson_vector call in the following example.

  3. Upload the embeddings to your collection on the Atlas cluster.

These operations might take a few minutes to complete.

Example

Generate, Convert, and Load Embeddings to Collection

for doc in documents:
    # Generate embeddings based on the summary
    summary = doc["summary"]
    embedding = get_embedding(summary)  # Get float32 embedding

    # Convert the float32 embedding to BSON format
    bson_float32 = generate_bson_vector(embedding, BinaryVectorDtype.FLOAT32)

    # Update the document with the BSON embedding
    collection.update_one(
        {"_id": doc["_id"]},
        {"$set": {"embedding": bson_float32}}
    )
    updated_doc_count += 1

print(f"Updated {updated_doc_count} documents with BSON embeddings.")
for doc in documents:
    # Generate embeddings based on the summary
    summary = doc["summary"]
    embedding = get_embedding(summary)  # Get int8 embedding

    # Convert the int8 embedding to BSON format
    bson_int8 = generate_bson_vector(embedding, BinaryVectorDtype.INT8)

    # Update the document with the BSON embedding
    collection.update_one(
        {"_id": doc["_id"]},
        {"$set": {"embedding": bson_int8}}
    )
    updated_doc_count += 1

print(f"Updated {updated_doc_count} documents with BSON embeddings.")
for doc in documents:
    # Generate embeddings based on the summary
    summary = doc["summary"]
    embedding = get_embedding(summary)  # Get int1 embedding

    # Convert the int1 embedding to BSON format
    bson_int1 = generate_bson_vector(embedding, BinaryVectorDtype.PACKED_BIT)

    # Update the document with the BSON embedding
    collection.update_one(
        {"_id": doc["_id"]},
        {"$set": {"embedding": bson_int1}}
    )
    updated_doc_count += 1

print(f"Updated {updated_doc_count} documents with BSON embeddings.")
5

You can create Atlas Vector Search indexes by using the Atlas UI, Atlas CLI, Atlas Administration API, and MongoDB drivers in your preferred language. To learn more, see How to Index Fields for Vector Search.

Example

Create Index for the Collection

Placeholder     Valid Value
<INDEX-NAME>    Name of the vector search index.

from pymongo.operations import SearchIndexModel
import time

# Define and create the vector search index
index_name = "<INDEX-NAME>"
search_index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",
                "similarity": "euclidean",
                "numDimensions": 1024
            }
        ]
    },
    name=index_name,
    type="vectorSearch"
)
result = collection.create_search_index(model=search_index_model)
print("New search index named " + result + " is building.")

# Wait for initial sync to complete
print("Polling to check if the index is ready. This may take up to a minute.")
predicate = None
if predicate is None:
    predicate = lambda index: index.get("queryable") is True
while True:
    indices = list(collection.list_search_indexes(index_name))
    if len(indices) and predicate(indices[0]):
        break
    time.sleep(5)
print(result + " is ready for querying.")

The index should take about one minute to build. While it builds, the index is in an initial sync state. When it finishes building, you can start querying the data in your collection.
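The polling loop in this example runs until the index reports queryable, so a failed build would leave it spinning forever. A generic timeout wrapper you could adapt (this helper is our own, not part of PyMongo):

```python
import time

def wait_until(check, timeout_s=120, interval_s=5):
    """Poll check() until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False

# Example: a check that succeeds on the third poll
attempts = {"n": 0}
def fake_check():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_until(fake_check, timeout_s=5, interval_s=0.01))  # True
```

In this tutorial, the check would list the search index and test index.get("queryable"), and a False return would signal that the build did not finish in time.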

6

The function to run Atlas Vector Search queries must perform the following actions:

  • Generate embeddings for the query text.

  • Convert the query text to a BSON vector.

  • Define the pipeline for the Atlas Vector Search query.

Example

Function to Run Atlas Vector Search Query

Placeholder                           Valid Value
<NUMBER-OF-CANDIDATES-TO-CONSIDER>    Number of nearest neighbors to use during the search.
<NUMBER-OF-DOCUMENTS-TO-RETURN>       Number of documents to return in the results.

def run_vector_search(query_text, collection, path):
    query_embedding = get_embedding(query_text)
    bson_query_vector = generate_bson_vector(query_embedding, BinaryVectorDtype.FLOAT32)
    pipeline = [
        {
            '$vectorSearch': {
                'index': index_name,
                'path': path,
                'queryVector': bson_query_vector,
                'numCandidates': <NUMBER-OF-CANDIDATES-TO-CONSIDER>, # for example, 20
                'limit': <NUMBER-OF-DOCUMENTS-TO-RETURN> # for example, 5
            }
        },
        {
            '$project': {
                '_id': 0,
                'name': 1,
                'summary': 1,
                'score': { '$meta': 'vectorSearchScore' }
            }
        }
    ]
    return collection.aggregate(pipeline)
def run_vector_search(query_text, collection, path):
    query_embedding = get_embedding(query_text)
    bson_query_vector = generate_bson_vector(query_embedding, BinaryVectorDtype.INT8)
    pipeline = [
        {
            '$vectorSearch': {
                'index': index_name,
                'path': path,
                'queryVector': bson_query_vector,
                'numCandidates': <NUMBER-OF-CANDIDATES-TO-CONSIDER>, # for example, 20
                'limit': <NUMBER-OF-DOCUMENTS-TO-RETURN> # for example, 5
            }
        },
        {
            '$project': {
                '_id': 0,
                'name': 1,
                'summary': 1,
                'score': { '$meta': 'vectorSearchScore' }
            }
        }
    ]
    return collection.aggregate(pipeline)
def run_vector_search(query_text, collection, path):
    query_embedding = get_embedding(query_text)
    bson_query_vector = generate_bson_vector(query_embedding, BinaryVectorDtype.PACKED_BIT)
    pipeline = [
        {
            '$vectorSearch': {
                'index': index_name,
                'path': path,
                'queryVector': bson_query_vector,
                'numCandidates': <NUMBER-OF-CANDIDATES-TO-CONSIDER>, # for example, 20
                'limit': <NUMBER-OF-DOCUMENTS-TO-RETURN> # for example, 5
            }
        },
        {
            '$project': {
                '_id': 0,
                'name': 1,
                'summary': 1,
                'score': { '$meta': 'vectorSearchScore' }
            }
        }
    ]
    return collection.aggregate(pipeline)
7

You can run Atlas Vector Search queries programmatically. To learn more, see Run Vector Search Queries.

Example

Run a Sample Atlas Vector Search Query

from pprint import pprint
query_text = "ocean view"
query_results = run_vector_search(query_text, collection, "embedding")
print("query results:")
pprint(list(query_results))
query results:
[{'name': 'Your spot in Copacabana',
'score': 0.5468248128890991,
'summary': 'Having a large airy living room. The apartment is well divided. '
'Fully furnished and cozy. The building has a 24h doorman and '
'camera services in the corridors. It is very well located, close '
'to the beach, restaurants, pubs and several shops and '
'supermarkets. And it offers a good mobility being close to the '
'subway.'},
{'name': 'Twin Bed room+MTR Mongkok shopping&My',
'score': 0.527062714099884,
'summary': 'Dining shopping conveniently located Mongkok subway E1, airport '
'shuttle bus stops A21. Three live two beds, separate WC, 24-hour '
'hot water. Free WIFI.'},
{'name': 'Quarto inteiro na Tijuca',
'score': 0.5222363471984863,
'summary': 'O quarto disponível tem uma cama de solteiro, sofá e computador '
'tipo desktop para acomodação.'},
{'name': 'Makaha Valley Paradise with OceanView',
'score': 0.5175154805183411,
'summary': 'A beautiful and comfortable 1 Bedroom Air Conditioned Condo in '
'Makaha Valley - stunning Ocean & Mountain views All the '
'amenities of home, suited for longer stays. Full kitchen & large '
"bathroom. Several gas BBQ's for all guests to use & a large "
'heated pool surrounded by reclining chairs to sunbathe. The '
'Ocean you see in the pictures is not even a mile away, known as '
'the famous Makaha Surfing Beach. Golfing, hiking,snorkeling '
'paddle boarding, surfing are all just minutes from the front '
'door.'},
{'name': 'Cozy double bed room 東涌鄉村雅緻雙人房',
'score': 0.5149975419044495,
'summary': 'A comfortable double bed room at G/F. Independent entrance. High '
'privacy. The room size is around 100 sq.ft. with a 48"x72" '
'double bed. The village house is close to the Hong Kong Airport, '
'AsiaWorld-Expo, HongKong-Zhuhai-Macau Bridge, Disneyland, '
'Citygate outlets, 360 Cable car, shopping centre, main tourist '
'attractions......'}]

Your results might vary depending on the vector data type that you specified in the previous steps.
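A useful property of int1 vectors: on binary values, squared euclidean distance equals the Hamming distance (the number of differing bits), which can be computed directly on the packed bytes with XOR and a popcount. A small sketch, assuming eight dimensions per byte:

```python
def hamming_packed(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length packed bit vectors."""
    assert len(a) == len(b), "vectors must have the same dimension count"
    # XOR leaves a 1 wherever the bits differ; count those 1s per byte
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# 0b10101010 vs 0b10101011 differ in exactly one bit
print(hamming_packed(b"\xaa", b"\xab"))  # 1
```

Working byte-by-byte on the packed representation avoids ever unpacking the 1024 individual bits, which is part of why binary quantization is so cheap at query time.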

For an advanced demonstration of this procedure on sample data using Cohere's embed-english-v3.0 embedding model, see this notebook.

You can measure the accuracy of your Atlas Vector Search queries by evaluating how closely the results of an ANN search match the results of an ENN search against your quantized vectors. That is, for the same query criteria, you measure how frequently the ANN results include the nearest neighbors found by the ENN search.
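This comparison is commonly summarized as recall: the fraction of exact (ENN) neighbors that the ANN search also returned. A minimal sketch over result-set IDs (the ID values are hypothetical):

```python
def recall_at_k(ann_ids, enn_ids):
    """Fraction of exact (ENN) neighbors recovered by the ANN search."""
    exact = set(enn_ids)
    return len(exact.intersection(ann_ids)) / len(exact)

# ANN recovered 3 of the 4 exact nearest neighbors
print(recall_at_k(["a", "b", "c", "e"], ["a", "b", "c", "d"]))  # 0.75
```

In practice you would run the same query twice, once as ANN and once as ENN, project the document _id values from each, and average this score over a set of representative queries.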

For a demonstration of evaluating your query results, see How to Measure the Accuracy of Your Query Results.