Bringing the Power of Vector Search to Enterprise Knowledge Bases

INDUSTRY

SaaS

PRODUCTS

Atlas Database
Atlas Vector Search

USE CASE

Product search
In-app search

CUSTOMER SINCE

2022
INTRODUCTION

Filling A Market Need

Founded in 2011, with offices in both the United Kingdom and India, Kovai is a privately-owned, bootstrapped enterprise software company that offers multiple products in the enterprise and B2B SaaS arenas. Since its founding, the company has grown to nearly 300 employees serving over 2,500 customers. While attending the 2010 Microsoft Global MVP Summit, Kovai Founder and CEO Saravana Kumar identified a gap in the market around critical tools for managing and monitoring Microsoft BizTalk Server environments (a middleware system that automates business processes). After roughly a year of development, the company’s first product, BizTalk360, was launched.

BizTalk360 provides a single view for administration, monitoring, and accessing the analytics of a BizTalk Server environment. After the initial product introduction, the company scaled rapidly, eventually reaching 145 customers by 2013, and Saravana was named the “Integration MVP of the year” at the Microsoft Global MVP Summit.

In 2018 Kovai launched Document360, a knowledge base platform for SaaS companies looking for a self-service software documentation solution. Document360 offers tools to create content quickly and efficiently, and provides a self-hosted documentation site so that content can be consumed effectively.

THE CHALLENGE

A Shift in User Behavior

In early 2023, Saravana and team noticed two primary shifts in customer behavior. First, customers began asking full questions and expecting personalized, relevant answers rather than entering traditional search keywords; they wanted immediate, accurate answers instead of spending time reading through an entire knowledge base article. Second, Kovai observed the macro trend of growing appetite for tools like ChatGPT and Large Language Models (LLMs) in enterprise use cases, where users type questions rather than keywords in order to get answers quickly.

To capitalize on these trends, Kovai recently released its AI assistant, “Eddy,” which uses LLMs to answer customers’ questions based on the information in a given knowledge base. Search is a critical feature of any knowledge base platform, enabling customers to find the right information easily instead of navigating complex hierarchical structures. The goal of Eddy is to provide an engaging customer experience that:

  • Gives accurate answers to user questions
  • Provides a holistic answer with low latency
  • Understands the contextual information associated with the question
  • Offers the ability to fine-tune questions based on responses

During the product building phase, the Document360 engineering and data science teams researched solutions that would allow customers to ask questions and quickly receive accurate answers drawn from their knowledge base articles. After reading whitepapers on Retrieval Augmented Generation (RAG), the Kovai team was convinced a RAG framework would help solve their specific challenge, allowing Eddy to constrain responses to questions based on context within a knowledge base, and to construct a response based on that context. Additional capabilities the team required included:

  • Creating embeddings for the knowledge base articles
  • A vector database for storing and retrieving embeddings
  • Tools to cache responses for “exact questions” asked by different customers
  • Orchestration tools
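One of the capabilities listed above, caching responses for “exact questions,” can be sketched in a few lines: normalize the question text and use it as a cache key, so the same question asked by different customers is answered by the LLM only once. This is an illustrative sketch with hypothetical names, not Kovai’s implementation.

```python
import hashlib

class ExactQuestionCache:
    """Cache LLM responses keyed by normalized question text, so an
    identical question from a different customer reuses a stored answer."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def _key(question: str) -> str:
        # Normalize casing and whitespace so trivially different
        # phrasings of the same question map to the same cache entry.
        normalized = " ".join(question.lower().split())
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    def get(self, question: str):
        return self._store.get(self._key(question))

    def put(self, question: str, answer: str) -> None:
        self._store[self._key(question)] = answer

cache = ExactQuestionCache()
cache.put("How do I reset my password?", "Go to Settings > Security > Reset password.")
# The same question with different casing/whitespace hits the cache.
print(cache.get("  how do I RESET my password? "))
```

A production cache would also need expiry and invalidation when the underlying knowledge base article changes, but the keying idea is the same.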
THE SOLUTION

A Unified Approach to Vector Search

Kovai was already using MongoDB as their database system of record, but now also needed a vector search solution. The engineering team evaluated several vector databases on the market for storing and retrieving embeddings of knowledge base articles. However, they quickly found that standalone vector database solutions created accuracy issues because large volumes of data had to be moved and kept in sync between their existing MongoDB database and any additional vector database.

So in July 2023, the engineering team chose to increase their investment in MongoDB Atlas by adopting the recently released MongoDB Vector Search for storing and retrieving embeddings, ensuring that both the content and its respective embeddings are housed inside MongoDB. Atlas Vector Search offers the team powerful, low-latency retrieval of embeddings based on similarity metrics, and fits seamlessly into their existing Atlas implementation. Kovai’s Saravana also notes that Atlas Vector Search is “robust, cost-effective, and blazingly fast,” which is especially important to their growing team. MongoDB’s capabilities help the Document360 engineering and data science teams with its:

  • Architectural simplicity: MongoDB Vector Search's architectural simplicity helps Kovai optimize the technical architecture needed to implement Eddy
  • Operational efficiency: Atlas Vector Search allows Kovai to store both knowledge base articles and their embeddings together in MongoDB collections, eliminating “data syncing” issues that come with other vendors
  • Performance: Kovai gets faster query response from MongoDB Vector Search at scale to ensure a positive user experience
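The “store articles and embeddings together” pattern described above maps directly onto Atlas Vector Search’s `$vectorSearch` aggregation stage: each MongoDB document holds an article chunk alongside its embedding vector, and one aggregation query retrieves the most similar chunks. The sketch below builds such a pipeline; the index name, field names, and embedding dimension are assumptions for illustration, not Kovai’s actual schema.

```python
# Hypothetical layout: each document in an "articles" collection holds a
# chunk of knowledge base text plus its embedding vector in the same
# document, so there is no separate vector store to keep in sync.
# Assumes an Atlas Vector Search index named "embedding_index" on "embedding".

def build_vector_search_pipeline(query_embedding, num_chunks=5):
    """Build a $vectorSearch aggregation pipeline returning the
    num_chunks article chunks most similar to the query embedding."""
    return [
        {
            "$vectorSearch": {
                "index": "embedding_index",        # assumed index name
                "path": "embedding",               # field holding the vector
                "queryVector": query_embedding,
                "numCandidates": num_chunks * 20,  # wider candidate pool for recall
                "limit": num_chunks,
            }
        },
        {
            "$project": {
                "_id": 0,
                "text": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

pipeline = build_vector_search_pipeline([0.1] * 1536, num_chunks=3)
# With pymongo, this would run as: db.articles.aggregate(pipeline)
```

Because the chunk text travels in the same document as its embedding, the retrieved results can be passed straight to the LLM as RAG context without a second lookup.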
THE RESULTS

Blazingly Fast Vector Search

Some of the early benchmarks of MongoDB Vector Search exceeded the expectations of the engineering and data science team. Specifically, the team has seen average retrieval times of between 2 and 4 milliseconds when returning 3, 5, or 10 chunks, and for closed-loop questions the average drops below 2 milliseconds. Saravana goes on to note that “one of the hardest parts with RAG frameworks is the need to search for all chunks based on the text embeddings of the question based on ‘similarity metrics’ which can be computationally expensive, but Atlas Vector Search really reduces this complexity.”
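The “similarity metrics” Saravana refers to are typically cosine similarity between the question’s embedding and each chunk’s embedding. A plain-Python sketch of that computation (illustrative only, not Kovai’s code) shows why a brute-force scan is expensive and why a vector index helps:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two embedding vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k_chunks(question_vec, chunk_vecs, k=3):
    """Rank every chunk by similarity to the question and return the
    indices of the top k. This brute-force scan is O(n * d); a vector
    index (as in Atlas Vector Search) avoids comparing every chunk."""
    ranked = sorted(
        range(len(chunk_vecs)),
        key=lambda i: cosine_similarity(question_vec, chunk_vecs[i]),
        reverse=True,
    )
    return ranked[:k]

chunks = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(top_k_chunks([1.0, 0.0], chunks, k=2))  # → [0, 2]
```

With millions of high-dimensional chunk embeddings, this per-question scan dominates latency, which is the cost an approximate-nearest-neighbor index removes.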
“Atlas Vector Search is robust, cost-effective, and blazingly fast!”

Saravana Kumar, CEO, Kovai

The Kovai team has found Atlas Vector Search easy to implement, and their data science team can easily understand the API functionality through MongoDB documentation. The company’s data scientists collaborated extensively with MongoDB experts to find the right balance between performance, user experience, and accuracy of generated responses.
“MongoDB Vector Search is an efficient toolkit to implement a Retrieval Augmented Generation framework”

Dr. Selvaraaju Murugesan, Head of Data Science, Kovai

A diagram of how they architected their stack, utilizing Vector Search combined with OpenAI for vector embeddings, is shown below:
In keeping with Kovai’s human-centric design philosophy, the traditional search box and AI assistant search are combined into one, providing their customers with a single seamless experience. Customers can switch between keyword and semantic search depending on their needs, and have the ability to set any search experience as their default.
Image of the search bar
Screenshot of the Kovai Eddy product

The new product has already enhanced the user experience. Only answers from the knowledge base relevant to the context of the question are provided, and those answers are cross-referenced with relevant articles to ensure trust.

The product has been a labor of love for Saravana and team, something he notes was made possible through the partnership and technical assistance they have received from MongoDB and their investment in Atlas Vector Search.

“We want to make it possible for users of our customers’ knowledge base to receive instant, trustworthy, and accurate answers to their questions using conversational search powered by MongoDB Atlas Vector Search and Generative AI capabilities.”

Saravana Kumar, CEO, Kovai