Building an Autocomplete Form Element with Atlas Search and JavaScript
When you're developing a web application, the quality of the user experience can make or break it. A common application feature is to allow users to enter text into a search bar to find a specific piece of information. Rather than having the user enter information and hope it's valid, you can help your users find what they are looking for by offering autocomplete suggestions as they type.
So what could go wrong?
If your users are like me, they'll make multiple spelling mistakes for every one word of text. If you're creating an autocomplete field using regular expressions on your data, programming to account for misspellings and fat fingers is tough!
In this tutorial, we're going to see how to create a simple web application that surfaces autocomplete suggestions to the user. These suggestions can be easily created using the full-text search features available in Atlas Search.
To get a better idea of what we want to accomplish, have a look at the following animated image:
In the above image, you'll notice that I not only made spelling mistakes, but I also made use of a word that could appear anywhere within the field of any document in the collection.
We'll skip the basics of configuring Node.js or MongoDB and assume that
you already have a few things installed, configured, and ready to go:
- MongoDB Atlas with an M0 cluster or better, with user and network safe-list configurations established
- Node.js installed and configured
- A food database with a recipes collection established
We'll be using Atlas Search within MongoDB Atlas. To follow this
tutorial, the recipes collection (within the food database) will
expect documents that look like this:
```json
{
  "_id": "5e5421451c9d440000e7ca13",
  "name": "chocolate chip cookies",
  "ingredients": [
    "sugar",
    "flour",
    "chocolate"
  ]
}
```
Make sure to create many documents within your recipes collection, some of which have similar names. In my example, I used "grilled cheese", "five cheese lasagna", and "baked salmon".
Before we start creating a frontend or backend, we need to prepare our
collection for search by creating a special search index.
Within the Collections tab of your cluster, find the recipes
collection and then choose the Search Indexes tab.
You probably won't have an Atlas Search index created yet, so we'll need
to create one.
By default, Atlas Search dynamically maps every field in a collection.
That means every field in our document will be checked against our
search terms. This is great for growing collections where the schema may
evolve, and you want to search through many different fields. However, it can also be resource-intensive. For our app, we actually just want
to search by one particular field, the "name" field in our recipe
documents. To do that, choose "Create Search Index" and change the code
to the following:
1 { 2 "mappings": { 3 "dynamic": false, 4 "fields": { 5 "name": [ 6 { 7 "foldDiacritics": false, 8 "maxGrams": 7, 9 "minGrams": 3, 10 "tokenization": "edgeGram", 11 "type": "autocomplete" 12 } 13 ] 14 } 15 } 16 }
In the above example, we're creating an index on the `name` field within our documents using an autocomplete index. Any fields that aren't explicitly mapped, like the `ingredients` array, will not be searched.

For this example, we can paste the JSON using the JSON Editor.
Now, click "Create Index". That's it! Just give MongoDB Atlas a few
minutes to create your search index.
If you want to learn more about Atlas Search autocomplete indexes and
the various tokenization strategies that can be used, you can find
information in the official
documentation.
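If it helps to picture what the index stores for the `name` field, here is a rough mental model of edge grams in plain JavaScript. This is not how Atlas Search actually implements its analyzer, only a simplified illustration of tokens built from the left edge of a word, between `minGrams` and `maxGrams` characters long.

```javascript
// A simplified illustration of edge grams — not the actual Atlas Search analyzer.
function edgeGrams(word, minGrams, maxGrams) {
    const grams = [];
    for (let length = minGrams; length <= Math.min(maxGrams, word.length); length++) {
        grams.push(word.substring(0, length));
    }
    return grams;
}

console.log(edgeGrams("cheese", 3, 7));
// [ 'che', 'chee', 'chees', 'cheese' ]
```

Because short prefixes like `che` end up in the index, a query can start matching after only a few characters have been typed.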
At this point in time, we should have our data collection of recipes, as well as an Atlas Search index created on that data for the `name` field.
We're now ready to create a backend that will interact with our data using the MongoDB Node.js driver.

We're only going to brush over the getting started with MongoDB aspect of this backend application. If you want to read something more in-depth, check out Lauren Schaefer's tutorial series on the subject.
On your computer, create a new project directory with a main.js file
within that directory. Using the command line, execute the following
commands:
```bash
npm init -y
npm install mongodb express body-parser cors --save
```
The above commands will initialize the package.json file and install
each of our project dependencies for creating a RESTful API that
interacts with MongoDB.
Within the main.js file, add the following code:
```javascript
const { MongoClient, ObjectID } = require("mongodb");
const Express = require("express");
const Cors = require("cors");
const BodyParser = require("body-parser");

const client = new MongoClient(process.env["ATLAS_URI"]);
const server = Express();

server.use(BodyParser.json());
server.use(BodyParser.urlencoded({ extended: true }));
server.use(Cors());

var collection;

// Endpoints to be implemented in the next steps
server.get("/search", async (request, response) => {});
server.get("/get/:id", async (request, response) => {});

server.listen("3000", async () => {
    try {
        await client.connect();
        collection = client.db("food").collection("recipes");
    } catch (e) {
        console.error(e);
    }
});
```
Remember when I said I'd be brushing over the getting started with
MongoDB stuff? I meant it, but, if you're copying and pasting the above
code, make sure you replace the following line in your code:
```javascript
const client = new MongoClient(process.env["ATLAS_URI"]);
```
I store my MongoDB Atlas information in an environment variable rather than hard-coding it into the application. If you wish to do the same, create an environment variable on your computer called `ATLAS_URI` and set it to your MongoDB connection string. This connection string will look something like this:

```
mongodb+srv://<username>:<password>@cluster0-yyarb.mongodb.net/<dbname>?retryWrites=true&w=majority
```
If you need help obtaining it, circle back to that
tutorial
by Lauren Schaefer that I had suggested.
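One optional safeguard, not part of the original code: if the environment variable isn't set, the backend won't be able to connect, so you may want to check for it near the top of main.js.

```javascript
// Optional: fail fast with a readable error if the connection string is missing.
if (!process.env["ATLAS_URI"]) {
    console.error("The ATLAS_URI environment variable must be set to your MongoDB connection string.");
    process.exit(1);
}
```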
What we're interested in for this example are the `/search` and `/get/:id` endpoints for the RESTful API. The first endpoint will leverage Atlas Search, while the second endpoint will get the document based on its `_id` value. This is useful in case you want to search for documents and then get all the information about the selected document.

So let's expand upon the endpoint for searching:
1 server.get("/search", async (request, response) => { 2 try { 3 let result = await collection.aggregate([ 4 { 5 "$search": { 6 "autocomplete": { 7 "query": `${request.query.query}`, 8 "path": "name", 9 "fuzzy": { 10 "maxEdits": 2, 11 "prefixLength": 3 12 } 13 } 14 } 15 } 16 ]).toArray(); 17 response.send(result); 18 } catch (e) { 19 response.status(500).send({ message: e.message }); 20 } 21 });
In the above code, we are creating an aggregation pipeline with a single `$search` stage, which will be powered by our Atlas Search index. It will use the user-provided data as the query and the `autocomplete` operator to give them that type-ahead experience. In a production scenario, we might want to do further validation on the user-provided data, but it's fine for this example. Also note that when using Atlas Search, an aggregation pipeline is required, and the `$search` operator must be the first stage of that pipeline.

The `path` field of `name` represents the field within our documents that we want to search in. Remember, `name` is also the field we defined in our index.

This is where the fun stuff comes in!
We're doing a fuzzy search. This means we're finding strings which are similar, but not necessarily identical, to the search term. Remember when I misspelled `cheese` by entering `chease` instead? The `maxEdits` field represents the maximum number of single-character edits allowed between the query and a matching term. In my example there was only one incorrect character, but what if I misspelled it as `cheaze`, where the `az` characters are not correct?

The `prefixLength` field indicates the number of characters at the beginning of each term in the result that must match exactly. In our example, three characters at the beginning of each term must match.

This is all very powerful considering what kind of mess your code would look like by using regular expressions or `$text` instead.

You can find more information on what can be used with the `autocomplete` operator in the documentation.

So let's take care of our other endpoint:
1 server.get("/get/:id", async (request, response) => { 2 try { 3 let result = await collection.findOne({ "_id": ObjectID(request.params.id) }); 4 response.send(result); 5 } catch (e) { 6 response.status(500).send({ message: e.message }); 7 } 8 });
The above code is nothing fancy. We're taking the id that the user provides, converting it into a proper object id, and then finding a single document.
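If you'd rather not throw an error for malformed ids, the driver also exposes a validity check you could run before the conversion. Consider this an optional hardening step rather than part of the tutorial code:

```javascript
// Optional variation: reject obviously malformed ids before querying the database.
server.get("/get/:id", async (request, response) => {
    if (!ObjectID.isValid(request.params.id)) {
        return response.status(400).send({ message: "Invalid id format." });
    }
    try {
        let result = await collection.findOne({ "_id": ObjectID(request.params.id) });
        response.send(result);
    } catch (e) {
        response.status(500).send({ message: e.message });
    }
});
```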
You can test this application by first serving it with `node main.js` and then using a tool like Postman against the http://localhost:3000/search?query= or http://localhost:3000/get/ URLs.
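If you'd rather test from a script than from Postman, a few lines of JavaScript against the running backend will do the same job. This assumes Node 18 or newer so that `fetch` is available globally; on older versions, use any HTTP client you like.

```javascript
// Quick sanity check against the /search endpoint. Start the backend with `node main.js` first.
(async () => {
    const response = await fetch("http://localhost:3000/search?query=chease");
    const results = await response.json();
    console.log(results.map(recipe => recipe.name));
})();
```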
Now that we have a backend to work with, we can take care of the frontend that improves the overall user experience. There are plenty of ways to create an autocomplete form, but for this example, jQuery will be doing the heavy lifting.
Create a new project with an index.html file in it. Open that file
and include the following:
```html
<html>
    <head>
        <link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">
        <script src="//code.jquery.com/jquery-1.12.4.js"></script>
        <script src="//code.jquery.com/ui/1.12.1/jquery-ui.js"></script>
    </head>
    <body>
        <div class="ui-widget">
            <label for="recipe">Recipe:</label><br />
            <input id="recipe">
            <ul id="ingredients"></ul>
        </div>
        <script>
            $(document).ready(function () {});
        </script>
    </body>
</html>
```
The above markup doesn't do much of anything. We're just importing the jQuery dependencies and defining the form element that will show the autocomplete suggestions. When an item is selected from the autocomplete, its ingredient data will populate the list.
Within the `<script>` tag, we can add the following:

```html
<script>
    $(document).ready(function () {
        $("#recipe").autocomplete({
            source: async function(request, response) {
                let data = await fetch(`http://localhost:3000/search?query=${request.term}`)
                    .then(results => results.json())
                    .then(results => results.map(result => {
                        return { label: result.name, value: result.name, id: result._id };
                    }));
                response(data);
            },
            minLength: 2,
            select: function(event, ui) {
                fetch(`http://localhost:3000/get/${ui.item.id}`)
                    .then(result => result.json())
                    .then(result => {
                        $("#ingredients").empty();
                        result.ingredients.forEach(ingredient => {
                            $("#ingredients").append(`<li>${ingredient}</li>`);
                        });
                    });
            }
        });
    });
</script>
```
A few things are happening in the above code.

Within the `autocomplete` function, we define a `source` for where our data comes from and a `select` for what happens when we select something from our list. We also define a `minLength` so that we aren't hammering our backend and database with every keystroke.

If we take a closer look at the `source` function, we have the following:

```javascript
source: async function(request, response) {
    let data = await fetch(`http://localhost:3000/search?query=${request.term}`)
        .then(results => results.json())
        .then(results => results.map(result => {
            return { label: result.name, value: result.name, id: result._id };
        }));
    response(data);
},
```
We're making a `fetch` against our backend, and then formatting the results into something the jQuery plugin recognizes. If you want to learn more about making HTTP requests with JavaScript, you can check out a previous tutorial I wrote titled Execute HTTP Requests in JavaScript Applications.

In the `select` function, we can further analyze what's happening:

```javascript
select: function(event, ui) {
    fetch(`http://localhost:3000/get/${ui.item.id}`)
        .then(result => result.json())
        .then(result => {
            $("#ingredients").empty();
            result.ingredients.forEach(ingredient => {
                $("#ingredients").append(`<li>${ingredient}</li>`);
            });
        });
}
```
We are making a second request to our other API endpoint. We are then
flushing the list of ingredients on our page and repopulating them with
the new ingredients.
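One small defensive variation you might consider, though the tutorial doesn't require it: because the ingredient values come from the database, you could let jQuery treat them as plain text rather than interpolating them into an HTML string.

```javascript
// Defensive variation: build each list item with .text() so ingredient values
// are rendered as plain text rather than interpreted as HTML.
result.ingredients.forEach(ingredient => {
    $("#ingredients").append($("<li>").text(ingredient));
});
```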
When running this application, make sure the backend is running as well; otherwise, your frontend will have nothing to communicate with.
To serve the frontend application, there are a few options, not limited to the following:

- Use Python to create a `SimpleHTTPServer`.
- Use the `serve` package.

My personal favorite is to use the `serve` package. If you install it, you can execute `serve` from your command line within the working directory of your project.

With the project serving with `serve`, it should be accessible at http://localhost:5000 in your web browser.

You just saw how to leverage MongoDB Atlas Search to suggest data to
users in an autocomplete form. Atlas Search is great because it uses
natural language to search within document fields to spare you from
having to write long and complicated regular expressions or application
logic.
Don't forget that we did our search by using the `$search` operator within an aggregation pipeline. This means you could add other stages to your pipeline to do some really extravagant things. For example, after the `$search` pipeline stage, you could add a `$match` stage that uses `$in` on the `ingredients` array to limit the results to only chocolate recipes. Also, you can make use of other neat operators within the `$search` stage, beyond the `autocomplete` operator. For example, you could make use of the `near` operator for numerical and geospatial search, or operators such as `compound` and `wildcard` for other tasks. More information on these operators can be found in the documentation.