
Learn to Build AI-Enhanced Retail Search Solutions with MongoDB and Databricks

Francesco Baldissera, Ashwin Gangadhar, Vittal Pai • 14 min read • Published Sep 25, 2023 • Updated Sep 25, 2023
Node.js • Kafka • Atlas • Search • Python
In the rapidly evolving retail landscape, businesses are constantly seeking ways to optimize operations, improve customer experience, and stay ahead of the competition. One of the key strategies for achieving this is leveraging the opportunities that search experiences provide.
Imagine this: You walk into a department store filled with products, and you have something specific in mind. You want a seamless and fast shopping experience — this is where product displays play a pivotal role. In the digital world of e-commerce, the search functionality of your site is meant to be a facilitating tool to efficiently display what users are looking for.
Shockingly, statistics reveal that only about 50% of searches on retail websites yield the results customers seek. Think about it — half the time, customers with a strong buying intent are left without an answer to their queries.
The search component of your e-commerce site is not merely a feature; it's the bridge between customers and the products they desire. Enhancing your search engine logic with artificial intelligence is the best way to ensure that the bridge is sturdy.
In this article, we'll explore how MongoDB and Databricks can be integrated to provide robust solutions for the retail industry. We'll focus in particular on the MongoDB Apache Spark Streaming processor; orchestration with Databricks Workflows; data transformation and featurization with MLFlow and Spark user-defined functions; and building a product catalog index with sorting, ranking, and autocomplete using Atlas Search.
Let’s get to it!

Solution overview

High-level overview of an event-driven architecture for an AI-enhanced search solution using MongoDB Atlas and Databricks
A modern e-commerce backend should be able to collate data from multiple sources in real time as well as via batch loads, and transform that data into a schema upon which a Lucene search index can be built. This enables discovery of the newly added inventory.
The solution should integrate website customer behavior events data in real-time to feed an “intelligence layer” that will create the criteria to display and order the most interesting products in terms of both relevance to the customer and relevance to the business.
These features are nicely captured in the above-referenced e-commerce architecture. We’ll divide it into four different stages or layers:
  1. Multi-tenant streaming ingestion: With the help of the MongoDB Kafka connector, we are able to sync real-time data from multiple sources to MongoDB. For the sake of simplicity, we will not focus on this stage in this tutorial.
  2. Stream processing: With the help of the MongoDB Spark connector and Databricks jobs and notebooks, we are able to ingest data and transform it to create machine learning model features.
  3. AI/ML modeling: All the generated streams of data are transformed and written into a unified view in a MongoDB collection called catalog, which is used to build search indexes and support querying and discovery of products.
  4. Building the search logic: With the help of Atlas Search capabilities and robust aggregation pipelines, we can power features such as search/discoverability, hyper-personalization, and featured sort on mobile/web applications.

Prerequisites

Before running the app, you'll need to have the following installed on your system:

Streaming data into Databricks

In this tutorial, we’ll focus on explaining how to orchestrate different ETL pipelines in real time using Databricks Jobs. A Databricks job represents a single, standalone execution of a Databricks notebook, script, or task. It is used to run specific code or analyses at a scheduled time or in response to an event.
Our search solution is meant to respond to real-time events happening in an e-commerce storefront, so the search experience for a customer can be personalized and provide search results that fit two criteria:
  1. Relevant for the customer: We will define a static score comprising behavioral data (click logs) and an Available to Promise status, so search results show products that we know are available and that are relevant based on previous demand.
  2. Relevant for the business: The results will be scored based on which products are more price sensitive, so higher price elasticity means they appear first on the product list page and as search results. We will also compute an optimal suggested price for the product.
So let’s check out how to configure these ETL processes over Databricks notebooks and orchestrate them using Databricks jobs to then fuel our MongoDB collections with the intelligence that we will use to build our search experience.

Databricks jobs for product stream processing, static score, and pricing

We’ll start by explaining how to configure notebooks in Databricks. Notebooks are a key tool for data science and machine learning, allowing collaboration, real-time coauthoring, versioning, and built-in data visualization. You can also make them part of automated tasks, called jobs in Databricks. A series of jobs are called workflows. Your notebooks and workflows can be attached to computing resources that you can set up at your convenience, or they can be run via autoscale.
Learn more about how to configure jobs in Databricks using JSON configuration files.
You can find our first job's JSON configuration files in our GitHub. These files specify how the various jobs run on our Databricks cluster, defining parameters such as the user, email notifications, task details, cluster information, and notification settings for each task within the job. This configuration is used to automate and manage data processing and analysis tasks within a specified environment.
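As a point of reference, a trimmed-down job definition can look like the sketch below. The job name, notebook path, email address, and cluster ID are placeholders rather than values from our repository; only the overall shape follows the Databricks Jobs JSON format.

```json
{
  "name": "catalog_collection_indexing_workflow",
  "email_notifications": {
    "on_failure": ["<your-email@example.com>"]
  },
  "tasks": [
    {
      "task_key": "index_catalog",
      "notebook_task": {
        "notebook_path": "/Repos/<user>/<repo>/catalog_indexing_notebook"
      },
      "existing_cluster_id": "<cluster-id>",
      "timeout_seconds": 0
    }
  ]
}
```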
Now, without further ado, let’s start with our first workflow, the “Catalog collection indexing workflow.”

Catalog collection indexing workflow

Overview of a Databricks job, including two pipelines to ingest data from MongoDB collections, transform that data, and vectorize it using a text transformer model
The above diagram shows how our solution will run two different jobs closely related to each other in two separate notebooks. Let’s unpack this job with the code and its explanation:
The first part of your notebook script is where you’ll define and install different packages. In the code below, we have all the necessary packages, but the main ones — pymongo and tqdm — are explained below:
  • PyMongo is commonly used in Python applications that need to store, retrieve, or analyze data stored in MongoDB, especially in web applications, data pipelines, and analytics projects.
  • tqdm is often used in Python scripts or applications where there's a need to provide visual feedback to users about the progress of a task.
The rest of the packages are pandas, JSON, and PySpark. In this part of the snippet, we also define a variable for the MongoDB connection string to our cluster.
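Here is a minimal sketch of that setup cell. The package versions and the connection string are placeholders — substitute your own Atlas connection string.

```python
# Install the packages the notebook depends on (Databricks notebook magic).
%pip install pymongo tqdm pandas

import json

import pandas as pd
from pymongo import MongoClient
from tqdm import tqdm
from pyspark.sql import functions as F

# Placeholder for the MongoDB Atlas connection string of our cluster.
MONGO_CONN = "mongodb+srv://<user>:<password>@<cluster>.mongodb.net"
```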

Data streaming from MongoDB

The script reads data streams from various MongoDB collections using the spark.readStream.format("mongodb") method.
For each collection, specific configurations are set, such as the MongoDB connection URI, database name, collection name, and other options related to change streams and aggregation pipelines.
The snippet below is the continuation of the code from above. It can be put in a different cell in the same notebook.
In this specific case, the code is reading from the atp_status collection. It specifies options for the MongoDB connection, including the URI, and enables the capture of the full document when changes occur in the MongoDB collection. The empty aggregation pipeline indicates that no specific transformations are applied at this stage.
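A sketch of that streaming read is shown below. The option keys follow the MongoDB Spark connector's structured streaming configuration (adjust them to your connector version), and the schema fields are assumptions for illustration.

```python
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

# Hypothetical schema for the available-to-promise documents.
atp_schema = StructType([
    StructField("product_id", StringType()),
    StructField("atp", IntegerType()),
])

atp_status_stream = (
    spark.readStream.format("mongodb")
    .option("spark.mongodb.connection.uri", MONGO_CONN)
    .option("spark.mongodb.database", "search")
    .option("spark.mongodb.collection", "atp_status")
    # Capture the full document whenever a change occurs in the collection.
    .option("spark.mongodb.change.stream.publish.full.document.only", "true")
    # Empty pipeline: no transformations are applied at this stage.
    .option("spark.mongodb.aggregation.pipeline", "[]")
    .schema(atp_schema)
    .load()
)
```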
Moving on to the next stage of the job for the atp_status collection, we can break the code snippet down into three different parts:

Data transformation and data writing to MongoDB

After reading the data streams, we drop the _id field. This is a special field that serves as the primary key for a document within a collection; every document in a MongoDB collection must have a unique _id that distinguishes it from all other documents in the same collection. Since we are writing into a new collection, we drop the _id of the original documents, and a new _id is assigned when each document is inserted into the new collection.

Data writing to MongoDB

The transformed data streams are written back to MongoDB using the writeStream.format("mongodb") method.
The data is written to the catalog_myn collection in the search database.
Specific configurations are set for each write operation, such as the MongoDB connection URI, database name, collection name, and other options related to upserts, checkpoints, and output modes.
The below code snippet is a continuation of the notebook from above.
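The sketch below continues from the read stream above: the _id field is dropped and the stream is written to the catalog_myn collection in the search database. The upsert-related options mentioned above are omitted here for brevity, and checkpointing is added in the next step.

```python
atp_writer = (
    atp_status_stream
    .drop("_id")  # let MongoDB assign a fresh _id in the target collection
    .writeStream.format("mongodb")
    .option("spark.mongodb.connection.uri", MONGO_CONN)
    .option("spark.mongodb.database", "search")
    .option("spark.mongodb.collection", "catalog_myn")
    .outputMode("append")
)
```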

Checkpointing

Checkpoint locations are specified for each write operation. Checkpoints are used to maintain the state of streaming operations, allowing for recovery in case of failures. The checkpoints are stored in the /tmp/ directory with specific subdirectories for each collection.
Here is an example of checkpointing. It’s included in the script right after the code from above.
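Continuing the sketch above, a per-collection checkpoint directory under /tmp/ is attached before the streaming query is started; the exact subdirectory name is a placeholder.

```python
atp_query = (
    atp_writer
    # State is kept here so the stream can recover after a failure.
    .option("checkpointLocation", "/tmp/atp_status/_checkpoint")
    .start()
)
```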
The full snippet of code performs different data transformations for the various collections we are ingesting into Databricks, but they all follow the same pattern of ingestion, transformation, and rewriting back to MongoDB. Make sure to check out the full first indexing job notebook.
For the second part of the indexing job, we will use a user-defined function (UDF) in our code to embed our product catalog data using a transformers model. This is useful to be able to build Vector Search features.
This is an example of how to define a user-defined function. You can define your functions early in your notebook so you can reuse them later for running your data transformations or analytics calculations. In this case, we are using it to embed text data from a document.
The ‘@F.udf()’ decorator is used to define a user-defined function in PySpark using the F object, which is an alias for the pyspark.sql.functions module. In this specific case, it is defining a UDF named ‘get_vec’ that takes a single argument text and returns the result of calling ‘model.encode(text)’.
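A sketch of that UDF follows. The article doesn't specify which transformer model is used, so the sentence-transformers model name here is an assumption; the embedding is returned as a plain list of floats so Spark can serialize it.

```python
from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, FloatType
from sentence_transformers import SentenceTransformer

# Assumed text-embedding model; swap in the transformer model of your choice.
model = SentenceTransformer("all-MiniLM-L6-v2")

@F.udf(ArrayType(FloatType()))
def get_vec(text):
    # Encode the input text into a dense vector and return it as a list of floats.
    return model.encode(text).tolist()
```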
The code from below is a continuation of the same notebook.
Our notebook code continues with similar snippets to previous examples. We'll use the MongoDB Connector for Spark to ingest data from the previously built catalog collection.
Then, it performs data transformations on the catalog_status DataFrame, including adding a new atp_status column that holds a boolean value: 1 for available and 0 for unavailable. This lets us define the business logic so that search results showcase only the products that are available.
We also calculate the discounted price based on data from another job we will explain further along.
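Here is a sketch of that ingestion and transformation step, with assumed field names (atp, price, and discount); the discount value is the one produced by the pricing job described later in this article.

```python
from pyspark.sql.types import StructType, StructField, StringType, IntegerType, DoubleType

# Hypothetical schema for the intermediate catalog documents.
catalog_schema = StructType([
    StructField("_id", StringType()),
    StructField("product_id", StringType()),
    StructField("title", StringType()),
    StructField("atp", IntegerType()),
    StructField("price", DoubleType()),
    StructField("discount", DoubleType()),
])

catalog_status = (
    spark.readStream.format("mongodb")
    .option("spark.mongodb.connection.uri", MONGO_CONN)
    .option("spark.mongodb.database", "search")
    .option("spark.mongodb.collection", "catalog_myn")
    .option("spark.mongodb.change.stream.publish.full.document.only", "true")
    .schema(catalog_schema)
    .load()
)

catalog_status = (
    catalog_status
    # 1 = available, 0 = unavailable.
    .withColumn("atp_status", (F.col("atp") == 1).cast("integer"))
    # Discounted price derived from the discount computed by the pricing job.
    .withColumn("discounted_price", F.col("price") * (1 - F.col("discount") / 100))
)
```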
The below snippet is a continuation of the notebook code from above:
We vectorize the title of the product and we create a new field called “vec”. We then drop the "_id" field, indicating that this field will not be updated in the target MongoDB collection.
Finally, it sets up a structured streaming write operation to write the transformed data to a MongoDB collection named "catalog_final_myn" in the "search" database while managing query state and checkpointing.
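Putting those two steps together, a sketch of the embedding and write-out looks like this; the field and checkpoint names follow the conventions used earlier.

```python
catalog_final = (
    catalog_status
    # Embed the product title into the new "vec" field using the get_vec UDF.
    .withColumn("vec", get_vec(F.col("title")))
    # Drop _id so the field is not carried over to the target collection.
    .drop("_id")
)

catalog_final_query = (
    catalog_final.writeStream.format("mongodb")
    .option("spark.mongodb.connection.uri", MONGO_CONN)
    .option("spark.mongodb.database", "search")
    .option("spark.mongodb.collection", "catalog_final_myn")
    .option("checkpointLocation", "/tmp/catalog_final_myn/_checkpoint")
    .outputMode("append")
    .start()
)
```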
Let’s see how to configure the second workflow to calculate a BI score for each product in the collection and introduce the result back into the same document so it’s reusable for search scoring.

BI score computing logic workflow

Diagram overview of the BI score computing job logic, using materialized views to ingest data from a MongoDB collection and process user click logs with an Empirical Bayes algorithm.
In this stage, we will explain the script to be run in our Databricks notebook as part of the BI score computing job. Please bear in mind that we will only explain what makes this code snippet different from the previous, so make sure to understand how the complete snippet works. Please feel free to clone our complete repository so you can get a full view on your local machine.
We start by setting up the configuration for Apache Spark using the SparkConf object and specify the necessary package dependency for our MongoDB Spark connector.
Then, we initialize a Spark session for our Spark application named "test1" running in local mode. It also configures Spark with the MongoDB Spark connector package dependency, which is set up in the conf object defined earlier. This Spark session can be used to perform various data processing and analytics tasks using Apache Spark.
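A sketch of that setup is shown below; the connector coordinates and version are an assumption, so pin whichever version matches your cluster.

```python
from pyspark import SparkConf
from pyspark.sql import SparkSession

# Declare the MongoDB Spark connector as a package dependency.
conf = SparkConf()
conf.set("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector_2.12:10.1.1")

# Spark session named "test1" running in local mode, as described above.
spark = (
    SparkSession.builder
    .appName("test1")
    .master("local")
    .config(conf=conf)
    .getOrCreate()
)
```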
The below code is a continuation of the notebook snippet explained above:
We’ll use MongoDB Aggregation Pipelines in our code snippet to get a set of documents, each representing a unique "product_id" along with the corresponding counts of total views, purchases, and cart events. We’ll use the transformed resulting data to feed an Empirical Bayes algorithm and calculate a value based on the cumulative distribution function (CDF) of a beta distribution.
Make sure to check out the entire .ipynb file in our repository.
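A simplified version of such a pipeline is sketched below using PyMongo. The click-log field names (action and product_id) are assumptions for illustration; the grouping logic mirrors the description above.

```python
import pandas as pd
from pymongo import MongoClient

# One output document per product_id, with counts of views, cart events, and purchases.
pipeline = [
    {"$group": {
        "_id": "$product_id",
        "total_views": {"$sum": {"$cond": [{"$eq": ["$action", "view"]}, 1, 0]}},
        "total_cart": {"$sum": {"$cond": [{"$eq": ["$action", "cart"]}, 1, 0]}},
        "total_purchases": {"$sum": {"$cond": [{"$eq": ["$action", "purchase"]}, 1, 0]}},
    }}
]

# MONGO_CONN: the Atlas connection string defined at the top of the notebook.
clog = MongoClient(MONGO_CONN)["search"]["clog"]
behaviour_df = pd.DataFrame(list(clog.aggregate(pipeline)))
```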
This way, we can calculate the relevance of a product based on the behavioral data described before. We’ll also use window functions to calculate different statistics on each one of the products — like the average of purchases and the purchase beta (the difference between the average total clicks and average total purchases) — to use as input to create a BI relevance score. This is what is shown in the below code:
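The sketch below illustrates the idea under simplifying assumptions: window aggregates supply the Empirical Bayes prior (average purchases as alpha, the purchase beta as beta), and the score is the beta distribution's CDF evaluated at each product's observed conversion rate. The column names and exact prior construction are assumptions, not the notebook's exact logic.

```python
from pyspark.sql import Window
from pyspark.sql import functions as F
from scipy.stats import beta as beta_dist

behaviour_sdf = spark.createDataFrame(behaviour_df)

# Global window: statistics computed across all products.
w = Window.partitionBy(F.lit(1))

stats = (
    behaviour_sdf
    .withColumn("avg_purchases", F.avg("total_purchases").over(w))
    .withColumn("avg_views", F.avg("total_views").over(w))
    # Purchase beta: difference between average clicks (views) and average purchases.
    .withColumn("purchase_beta", F.col("avg_views") - F.col("avg_purchases"))
)

@F.udf("double")
def bi_score(purchases, views, alpha, beta_prior):
    # CDF of a Beta(alpha, beta_prior) prior evaluated at the observed conversion rate.
    rate = purchases / views if views else 0.0
    return float(beta_dist.cdf(rate, alpha, beta_prior))

scored = stats.withColumn(
    "score",
    bi_score("total_purchases", "total_views", "avg_purchases", "purchase_beta"),
)
```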
After calculating the BI score for our product, we want to use a machine learning algorithm to calculate the price elasticity of demand for the product and the optimal price.

Calculating optimal price workflow

Diagram showcasing the pricing workflows to be run as Databricks notebooks
For calculating the optimal recommended price, first, we need to figure out a pipeline that will shape the data according to what we need. Get the pipeline definition in our repository.
We’ll first take in data from the MongoDB Atlas click logs (clog) collection that’s being ingested in the database in real-time, and create a DataFrame that will be used as input for a Random Forest regressor machine learning model. We’ll leverage the MLFlow library to be able to run MLOps stages, run tests, and register the best-performing model that will be used in the second job to calculate the price elasticity of demand, the suggested discount, and optimal price for each product. Let’s see what the code looks like!
After we’ve done the test and train split required for fitting the model, we leverage MLFlow's model wrapping to log model parameters, metrics, and dependencies.
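The following is a minimal sketch of that pattern. The feature and target columns (price, total_views, total_purchases, units_sold), the registered model name, and the hyperparameters are assumptions for illustration; only the train/test split, the Random Forest regressor, and the MLFlow logging and registration steps reflect the flow described above.

```python
import mlflow
import mlflow.sklearn
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# sales_pdf: the pandas DataFrame built earlier in the notebook from the click-log pipeline.
X = sales_pdf[["price", "total_views", "total_purchases"]]  # assumed feature columns
y = sales_pdf["units_sold"]                                 # assumed demand target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="price_elasticity_rf"):
    rf = RandomForestRegressor(n_estimators=100, random_state=42)
    rf.fit(X_train, y_train)

    rmse = mean_squared_error(y_test, rf.predict(X_test), squared=False)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("rmse", rmse)

    # Log the model and register it so the scoring job can load it by name.
    mlflow.sklearn.log_model(rf, "model", registered_model_name="price_rf_model")
```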
For the next stage, we apply the previously trained and registered model to the sales data:
Then, we create the sales DataFrame with the resulting data. But first, we use the .fillna function to replace all null values with the float 0.0. This step is necessary because most machine learning models will return an error if you pass them null values.
Now, we can calculate new columns to add to the sales DataFrame: the predicted optimal price, the price elasticity of demand per product, and a discount column rounded up to the nearest integer. The below code is a continuation of the code from above — they both reside in the same notebook:
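Here is a sketch of that scoring logic, with assumed column names and a deliberately simplified elasticity and optimal-price rule; the registered model name matches the training sketch above, and the real notebook's formulas may differ.

```python
import math

import mlflow.pyfunc

# Load version 1 of the registered model (adjust the version or stage as needed).
model = mlflow.pyfunc.load_model("models:/price_rf_model/1")

# sales: the sales DataFrame described above. Replace nulls with 0.0 first.
sales = sales.fillna(0.0)

features = sales[["price", "total_views", "total_purchases"]]
bumped = features.assign(price=features["price"] * 1.01)

pred_demand = model.predict(features)
pred_demand_bumped = model.predict(bumped)

# Price elasticity of demand: relative change in predicted demand for a 1% price change.
sales["price_elasticity"] = ((pred_demand_bumped - pred_demand) / pred_demand) / 0.01
# Placeholder rule for the suggested optimal price and the discount (rounded up).
sales["pred_price"] = sales["price"] * (1 + 1 / sales["price_elasticity"])
sales["discount"] = ((sales["price"] - sales["pred_price"]) / sales["price"] * 100).apply(math.ceil)
```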
Then, we push the data back into the proper MongoDB collection using the MongoDB Connector for Spark. Together with the rest of the collections, this data becomes the baseline on top of which we'll build our application's search business logic.
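A sketch of that write-back follows; the target collection name (price_myn) is a placeholder, and the option keys match the connector configuration used in the earlier snippets.

```python
sales_sdf = spark.createDataFrame(sales)

(
    sales_sdf.write.format("mongodb")
    .option("spark.mongodb.connection.uri", MONGO_CONN)
    .option("spark.mongodb.database", "search")
    .option("spark.mongodb.collection", "price_myn")
    .mode("append")
    .save()
)
```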
After these workflows are configured, you should be able to see the new collections and updated documents for your products.

Building the search logic

To build the search logic, you'll first need to create an index. This is how we make sure our application's search queries run smoothly: rather than scanning every document in the collection, the index limits the scan according to the criteria we define.
To understand more about indexing in MongoDB, you can check out the article from the documentation. But for the purposes of this tutorial, let’s dive into the two main parameters you’ll need to define for building our solution:
Mappings: This key dictates how fields in the index should be stored and how they should be treated when queries are made against them.
Fields: The fields describe the attributes or columns of the index. Each field can have specific data types and associated settings. We implement the sortable number functionality for the fields ‘pred_price’, ‘price_elasticity’, and ‘score’ so that our search results can be organized by relevance.
The latter steps of building the solution come to defining the index mapping for the application. You can find the full mappings snippet in our GitHub repository.
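As a reference, a trimmed-down version of such a mapping could look like the snippet below — it only shows the three number fields called out above, whereas the full definition in the repository covers the rest of the catalog fields.

```json
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "pred_price": { "type": "number" },
      "price_elasticity": { "type": "number" },
      "score": { "type": "number" }
    }
  }
}
```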
To configure the index, you can insert the snippet in MongoDB Atlas by browsing to your cluster splash page and clicking the “Search” tab:
Overview of the Search configuration panel for MongoDB Atlas collections
Next, click “Create Index.” Make sure you select the “JSON Editor”:
Overview of the JSON Editor functionality in the Search Configuration panel for MongoDB Atlas
Paste the JSON snippet from above — make sure you select the correct database and collection! In our case, the collection name is catalog_final_myn.

Autocomplete

To define autocomplete indexes, you can follow the same browsing instructions from the Building the search logic stage, but in the JSON editor, your code snippet may vary. Follow our tutorial to learn how to fully configure autocomplete in Atlas Search.
For our search solution, check out the code below. We define how the data should be treated and indexed for autocomplete features.
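Here is a sketch of such an autocomplete mapping, applied to a hypothetical title field; the parameters are the ones broken down next.

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "title": {
        "type": "autocomplete",
        "tokenization": "edgeGram",
        "minGrams": 3,
        "maxGrams": 7,
        "foldDiacritics": false
      }
    }
  }
}
```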
Let’s break down each of the parameters:
foldDiacritics: Setting this to false means diacritic marks on characters (like accents on letters) are treated distinctly. For instance, "résumé" and "resume" would be treated as different words.
minGrams and maxGrams: These specify the minimum and maximum lengths of the edge n-grams. In this case, it would index substrings (edgeGrams) with lengths ranging from 3 to 7.
Tokenization: The value edgeGram means the text is tokenized into substrings starting from the beginning of the string. For instance, for the word "example", with minGrams set to 3, the tokens would be "exa", "exam", "examp", etc. This is commonly used in autocomplete scenarios to match partial words.
After all of this, you should have an AI-enhanced search functionality for your e-commerce storefront!

Conclusion

In summary, we’ve covered how to integrate MongoDB Atlas and Databricks to build a performant and intelligent search feature for an e-commerce application.
By using the MongoDB Connector for Spark and Databricks, along with MLFlow for MLOps, we've created real-time pipelines for AI. Additionally, we've configured MongoDB Atlas Search indexes, utilizing features like Autocomplete, to build a cutting-edge search engine.
Grasping the complexities of e-commerce business models is complicated enough without also having to handle knotty integrations and operational overhead. Counting on the right tools for the job puts you several months ahead when it comes to out-innovating the competition.
Check out the GitHub repository or reach out over LinkedIn if you want to discuss search or any other retail functionality!
