Build a RESTful API With Flask, MongoDB, and Python

Mark Smith • 10 min read • Published Jan 14, 2022 • Updated Sep 11, 2024
This is the first part of a short series of blog posts called "Rewrite it in Rust (RiiR)." It's a tongue-in-cheek title for some posts that will investigate the similarities and differences between the same service written in Python with Flask, and Rust with Actix-Web.
This post will show how I built a RESTful API for a collection of cocktail recipes I just happen to have lying around. The aim is to show an API server with some complexity, so although it's a small example, it will cover important factors such as:
  • Data transformation between the database and a JSON representation.
  • Data validation.
  • Pagination.
  • Error-handling.

Prerequisites

  • Python 3.8 or above
  • A MongoDB Atlas cluster. Follow the "Get Started with Atlas" guide to create your account and MongoDB cluster. Keep a note of your database username, password, and connection string as you will need those later.
This is an advanced guide, so it'll cover a whole bunch of different libraries that can be brought together to build a declarative RESTful API server on top of MongoDB. I won't cover repeating patterns in the codebase, so if you want to build the whole thing, I recommend checking out the source code, which is all on GitHub.
It won't cover the basics of Python, Flask, or MongoDB, so if that's what you're looking for, I recommend checking out the following resources before tackling this post:

Getting Started

Begin by cloning the sample code source from GitHub. There are four top-level directories:
  • actix-cocktail-api: You can ignore this for now.
  • data: This contains an export of my cocktail data. You'll import this into your cluster in a moment.
  • flask-cocktail-api: The code for this blog post.
  • test_scripts: A few shell scripts that use curl to test the HTTP interface of the API server.
There are more details in the GitHub repo, but the basics are as follows. With your virtualenv active, install the project:
pip install -e .
Next, you should import the data into your cluster. Set the environment variable $MONGO_URI to your cluster URI. This environment variable will be used in a moment to import your data, and also by the Flask app. I use direnv to configure this, and put the following line in my .envrc file in my project's directory:
export MONGO_URI="mongodb+srv://USERNAME:PASSW0RD@cluster0-abcde.azure.mongodb.net/cocktails?retryWrites=true&w=majority"
Note that your database must be called "cocktails," and the import will create a collection called "recipes." After checking that $MONGO_URI is set correctly, run the following command:
mongoimport --uri "$MONGO_URI" --file ./recipes.json
Now you should be able to run the Flask app from the flask-cocktail-api directory:
FLASK_DEBUG=true FLASK_APP=cocktailapi flask run
(You can run make run if you prefer.)
Check the output to ensure it is happy with the configuration, and then in a different terminal window, run the list_cocktails.sh script in the test_scripts directory. It should print something like this:
{
  "_links": {
    "last": {
      "href": "http://localhost:5000/cocktails/?page=5"
    },
    "next": {
      "href": "http://localhost:5000/cocktails/?page=5"
    },
    "prev": {
      "href": "http://localhost:5000/cocktails/?page=3"
    },
    "self": {
      "href": "http://localhost:5000/cocktails/?page=4"
    }
  },
  "recipes": [
    {
      "_id": "5f7daa198ec9dfb536781b0d",
      "date_added": null,
      "date_updated": null,
      "ingredients": [
        {
          "name": "Light rum",
          "quantity": {
            "unit": "oz"
          }
        },
        {
          "name": "Grapefruit juice",
          "quantity": {
            "unit": "oz"
          }
        },
        {
          "name": "Bitters",
          "quantity": {
            "unit": "dash"
          }
        }
      ],
      "instructions": [
        "Pour all of the ingredients into an old-fashioned glass almost filled with ice cubes",
        "Stir well."
      ],
      "name": "Monkey Wrench",
      "slug": "monkey-wrench"
    },
    ...
  ]
}

Breaking it All Down

The code is divided into three submodules.
  • __init__.py contains all the Flask setup code, and defines all the HTTP routes.
  • model.py contains all the Pydantic model definitions.
  • objectid.py contains a Pydantic field definition that I stole from the Beanie object-data mapper for MongoDB.
I mentioned earlier that this code makes use of several libraries:
  • PyMongo and Flask-PyMongo handle the connection to the database. Flask-PyMongo specifically wraps the database collection object to provide a convenient find_one_or_404 method.
  • Pydantic manages data validation, and some aspects of data transformation between the database and a JSON representation.
  • A single function from FastAPI, jsonable_encoder, handles encoding values into a JSON-compatible form.

Data Validation and Transformation

When building a robust API, it's important to validate all the data passing into the system. It would be possible to do this using a stack of if/else statements, but it's much more effective to define a schema declaratively, and to allow that to programmatically validate the data being input.
I used a technique that I learned from Beanie, a new and neat ODM that I unfortunately couldn't practically use on this project, because Beanie is async, and Flask is a blocking framework.
Beanie uses Pydantic to define a schema, and adds a custom Field type for ObjectId.
# model.py

class Cocktail(BaseModel):
    id: Optional[PydanticObjectId] = Field(None, alias="_id")
    slug: str
    name: str
    ingredients: List[Ingredient]
    instructions: List[str]
    date_added: Optional[datetime]
    date_updated: Optional[datetime]

    def to_json(self):
        return jsonable_encoder(self, exclude_none=True)

    def to_bson(self):
        # exclude_none=True drops "_id" entirely when it's None, so use .get()
        # rather than indexing to avoid a KeyError on brand-new cocktails.
        data = self.dict(by_alias=True, exclude_none=True)
        if data.get("_id") is None:
            data.pop("_id", None)
        return data
This Cocktail schema defines the structure of a Cocktail instance, which will be validated by Pydantic when instances are created. It includes another embedded schema for Ingredient, which is defined in a similar way.
I added convenience functions to export the data in the Cocktail instance to either a JSON-compatible dict or a BSON-compatible dict. The differences are subtle, but BSON supports native ObjectId and datetime types, for example, whereas when encoding as JSON, it's necessary to encode ObjectId instances in some other way (I prefer a string containing the hex value of the id), and datetime objects are encoded as ISO8601 strings.
The to_json method makes use of a function imported from FastAPI, which recurses through the instance data, encoding all values in a JSON-compatible form. It already handles datetime instances correctly, but to get it to handle ObjectId values, I extracted some custom field code from Beanie, which can be found in objectid.py.
The to_bson method doesn't need to pass the dict data through jsonable_encoder. All the types used in the schema can be directly saved with PyMongo. It's important to set by_alias to True, so that the key for _id is just that, _id, and not the schema's id without an underscore.
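To make that concrete, here's a minimal, stand-alone sketch (not taken from the project's code, using the same Pydantic v1-style API as this post) showing exactly what by_alias changes:

from typing import Optional
from pydantic import BaseModel, Field

class Doc(BaseModel):
    # "id" is the Python attribute name; "_id" is the alias MongoDB expects.
    id: Optional[str] = Field(None, alias="_id")
    name: str

d = Doc(_id="5f7daa198ec9dfb536781b0d", name="Monkey Wrench")
print(d.dict(exclude_none=True))
# {'id': '5f7daa198ec9dfb536781b0d', 'name': 'Monkey Wrench'}
print(d.dict(by_alias=True, exclude_none=True))
# {'_id': '5f7daa198ec9dfb536781b0d', 'name': 'Monkey Wrench'}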
# objectid.py

from bson import ObjectId
from pydantic.json import ENCODERS_BY_TYPE


class PydanticObjectId(ObjectId):
    """
    ObjectId field. Compatible with Pydantic.
    """

    @classmethod
    def __get_validators__(cls):
        yield cls.validate

    @classmethod
    def validate(cls, v):
        return PydanticObjectId(v)

    @classmethod
    def __modify_schema__(cls, field_schema: dict):
        field_schema.update(
            type="string",
            examples=["5eb7cf5a86d9755df3a6c593", "5eb7cfb05e32e07750a1756a"],
        )


ENCODERS_BY_TYPE[PydanticObjectId] = str
This approach is neat for this particular use-case, but I can't help feeling that it would be limiting in a more complex system. There are many patterns for storing data in MongoDB. These often result in storing data in a form that is optimal for writes or reads, but not necessarily the representation you would wish to export in an API.
What is a Slug?
Looking at the schema above, you may have wondered what a "slug" is ... well, apart from a slimy garden pest.
A slug is a unique, URL-safe, mnemonic used for identifying a document. I picked up the terminology as a Django developer, where this term is part of the framework. A slug is usually derived from another field. In this case, the slug is derived from the name of the cocktail, so if a cocktail was called "Rye Whiskey Old-Fashioned," the slug would be "rye-whiskey-old-fashioned."
In this API, that cocktail could be accessed by sending a GET request to the /cocktails/rye-whiskey-old-fashioned endpoint.
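If you're wondering how a name becomes a slug, here's a hypothetical helper (the project may generate its slugs differently) that uses only the standard library:

import re

def slugify(name: str) -> str:
    # Lowercase the name, replace runs of non-alphanumeric characters with
    # hyphens, and strip any leading or trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

print(slugify("Rye Whiskey Old-Fashioned"))  # rye-whiskey-old-fashioned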
I've kept the unique slug field separate from the auto-assigned _id field, but I've provided both because the slug could change if the name of the cocktail was tweaked, in which case the _id value would provide a constant identifier to look up an exact document.
In the Rust version of this code, I was nudged to use a different approach. It's a bit more verbose, but in the end I was convinced that it would be more powerful and flexible as the system grew.

Creating a New Document

Now I'll show you what a single endpoint looks like, starting with the "Create" endpoint, which handles a POST request to /cocktails and creates a new document in the "recipes" collection. It then returns the document that was stored, including the unique ID that MongoDB just assigned as _id, because this is a RESTful API, and that's what RESTful APIs do.
@app.route("/cocktails/", methods=["POST"])
def new_cocktail():
    raw_cocktail = request.get_json()
    raw_cocktail["date_added"] = datetime.utcnow()

    cocktail = Cocktail(**raw_cocktail)
    insert_result = recipes.insert_one(cocktail.to_bson())
    cocktail.id = PydanticObjectId(str(insert_result.inserted_id))
    print(cocktail)

    return cocktail.to_json()
This endpoint modifies the incoming JSON directly, to add a date_added item with the current time. It then passes it to the constructor for our Pydantic schema. At this point, if the schema failed to validate the data, an exception would be raised and displayed to the user.
After validating the data, to_bson() is called on the Cocktail to convert it to a BSON-compatible dict, and this is directly passed to PyMongo's insert_one method. There's no way to get PyMongo to return the document that was just inserted in a single operation (although an upsert using find_one_and_update comes close; there's a sketch of that approach below).
After inserting the data, the code then updates the local object with the newly-assigned id and returns it to the client.
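For comparison, here's a rough sketch of the find_one_and_update approach mentioned above, which writes the document and returns the stored version in a single round trip. It's not how this tutorial's endpoint works; it just illustrates the alternative:

from pymongo import ReturnDocument

def upsert_cocktail(cocktail):
    # Write the cocktail (inserting it if no document matches the slug) and get
    # the stored document back in one call. Assumes cocktail.id is not yet set,
    # so to_bson() contains no _id field.
    doc = recipes.find_one_and_update(
        {"slug": cocktail.slug},
        {"$set": cocktail.to_bson()},
        upsert=True,
        return_document=ReturnDocument.AFTER,
    )
    return Cocktail(**doc).to_json()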

Reading a Single Cocktail

Thanks to Flask-PyMongo, the endpoint for looking up a single cocktail is even more straightforward:
@app.route("/cocktails/<string:slug>", methods=["GET"])
def get_cocktail(slug):
    recipe = recipes.find_one_or_404({"slug": slug})
    return Cocktail(**recipe).to_json()
This endpoint will abort with a 404 if the slug can't be found in the collection. Otherwise, it simply instantiates a Cocktail with the document from the database, and calls to_json to convert it to a dict that Flask will automatically encode correctly as JSON.
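To see what Flask-PyMongo is saving you here, this is roughly the equivalent written with a plain find_one call and Flask's abort. It's a sketch for comparison only (the route path is hypothetical, to avoid clashing with the real endpoint):

from flask import abort

@app.route("/cocktails/plain/<string:slug>", methods=["GET"])
def get_cocktail_plain(slug):
    # The manual version of find_one_or_404: look the document up, and abort
    # with a 404 if nothing matches the slug.
    recipe = recipes.find_one({"slug": slug})
    if recipe is None:
        abort(404, f"Cocktail with slug {slug!r} not found")
    return Cocktail(**recipe).to_json()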

Listing All the Cocktails

This endpoint is a monster, mostly because of pagination and the links it generates for that pagination. In the sample data above, you probably noticed the _links section:
1"_links": {
2 "last": {
3 "href": "http://localhost:5000/cocktails/?page=5"
4 },
5 "next": {
6 "href": "http://localhost:5000/cocktails/?page=5"
7 },
8 "prev": {
9 "href": "http://localhost:5000/cocktails/?page=3"
10 },
11 "self": {
12 "href": "http://localhost:5000/cocktails/?page=4"
13 }
14},
This _links section is specified as part of the HAL (Hypertext Application Language) specification. It's a good idea to follow a standard for pagination data, and I didn't feel like inventing something myself!
And here's the code to generate all this. Don't freak out.
@app.route("/cocktails/")
def list_cocktails():
    """
    GET a list of cocktail recipes.

    The results are paginated using the `page` parameter.
    """

    page = int(request.args.get("page", 1))
    per_page = 10  # A const value.

    # For pagination, it's necessary to sort by name,
    # then skip the number of docs that earlier pages would have displayed,
    # and then to limit to the fixed page size, ``per_page``.
    cursor = recipes.find().sort("name").skip(per_page * (page - 1)).limit(per_page)

    cocktail_count = recipes.count_documents({})

    links = {
        "self": {"href": url_for(".list_cocktails", page=page, _external=True)},
        "last": {
            "href": url_for(
                ".list_cocktails", page=(cocktail_count // per_page) + 1, _external=True
            )
        },
    }
    # Add a 'prev' link if it's not on the first page:
    if page > 1:
        links["prev"] = {
            "href": url_for(".list_cocktails", page=page - 1, _external=True)
        }
    # Add a 'next' link if it's not on the last page:
    if page - 1 < cocktail_count // per_page:
        links["next"] = {
            "href": url_for(".list_cocktails", page=page + 1, _external=True)
        }

    return {
        "recipes": [Cocktail(**doc).to_json() for doc in cursor],
        "_links": links,
    }
Although there's a lot of code there, it's not as complex as it may first appear. Two requests are made to MongoDB: one for a page-worth of cocktail recipes, and the other for the total number of cocktails in the collection. Various calculations are done to work out how many documents to skip, and how many pages of cocktails there are. Finally, some links are added for "prev" and "next" pages, if appropriate (i.e.: the current page isn't the first or last.) Serialization of the cocktail documents is done in the same way as the previous endpoint, but in a loop this time.
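If you want to sanity-check the arithmetic, here's a tiny stand-alone sketch, assuming a hypothetical 48 cocktails in the collection and 10 per page (the real counts depend on the imported dataset):

cocktail_count, per_page = 48, 10  # hypothetical numbers, not from the dataset

for page in range(1, 6):
    skip = per_page * (page - 1)                      # docs skipped before this page
    last_page = (cocktail_count // per_page) + 1      # 48 // 10 + 1 == 5
    has_prev = page > 1                               # no 'prev' link on page 1
    has_next = page - 1 < cocktail_count // per_page  # no 'next' link on page 5
    print(page, skip, last_page, has_prev, has_next)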
The update and delete endpoints are mainly repetitions of the code I've already included, so I'm not going to include them here. Check them out in the GitHub repo if you want to see how they work.
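If you just want a feel for their shape without opening the repo, a delete endpoint might look something like this rough, hypothetical sketch (the repo's real version may differ):

from flask import abort

@app.route("/cocktails/<string:slug>", methods=["DELETE"])
def delete_cocktail(slug):
    # Remove the document matching the slug and return it, or 404 if there was
    # nothing to delete.
    deleted = recipes.find_one_and_delete({"slug": slug})
    if deleted is None:
        abort(404, "Cocktail not found")
    return Cocktail(**deleted).to_json()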

Error Handling

Nothing irritates me more than using a JSON API which returns HTML when an error occurs, so I was keen to put in some reasonable error handling to avoid this happening.
After the Flask set-up code, and before the endpoint definitions, the code registers two error-handlers:
@app.errorhandler(404)
def resource_not_found(e):
    """
    An error-handler to ensure that 404 errors are returned as JSON.
    """
    return jsonify(error=str(e)), 404


@app.errorhandler(DuplicateKeyError)
def duplicate_key_error(e):
    """
    An error-handler to ensure that MongoDB duplicate key errors are returned as JSON.
    """
    return jsonify(error="Duplicate key error."), 400
The first error-handler intercepts any endpoint that fails with a 404 status code and ensures that the error is returned as a JSON dict.
The second error-handler intercepts a DuplicateKeyError raised by any endpoint, and does the same thing as the first error-handler, but sets the HTTP status code to "400 Bad Request."
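One thing worth noting: MongoDB only raises DuplicateKeyError when there's a unique index to violate. If your collection doesn't already enforce uniqueness on the slug, you could add an index like this (an illustrative assumption about your setup, not something shown in this post):

# Create a unique index on "slug" so that inserting a second cocktail with the
# same slug raises DuplicateKeyError (handled by the error-handler above).
recipes.create_index("slug", unique=True)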
As I was writing this post, I realised that I've missed an error-handler to deal with invalid Cocktail data. I'll leave implementing that as an exercise for the reader! Indeed, this is one of the difficulties with writing robust Python applications: Because exceptions can be raised from deep in your stack of dependencies, it's very difficult to comprehensively predict what exceptions your application may raise in different circumstances.
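If you want a head start on that exercise, a handler might look something like this sketch (Pydantic raises ValidationError when the incoming data doesn't match the schema):

from pydantic import ValidationError

@app.errorhandler(ValidationError)
def validation_error(e):
    # A sketch of the missing handler: return Pydantic validation failures as
    # JSON with a 400 status instead of an HTML error page.
    return jsonify(error=e.errors()), 400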
This is something that's very different in Rust, and even though, as you'll see, error-handling in Rust can be verbose and tricky, I've started to love the language for its insistence on correctness.

Wrapping Up

When I started writing this post, I thought it would end up being relatively straightforward. As I added the requirement that the code should not just be a toy example, some of the inherent difficulties with building a robust API on top of any database became apparent.
In this case, Flask may not have been the right tool for the job. I recently wrote a blog post about building an API with Beanie. Beanie and FastAPI are a match made in heaven for this kind of application and will handle validation, transformation, and pagination with much less code. On top of that, they're self-documenting and can provide the data's schema in open formats, including OpenAPI Spec and JSON Schema!
If you're about to build an API from scratch, I strongly recommend you check them out, and you may enjoy reading Aaron Bassett's posts on the FARM (FastAPI, React, MongoDB) Stack.
I will shortly publish the second post in this series, Build a Cocktail API with Actix-Web, MongoDB, and Rust, and then I'll conclude with a third post, I Rewrote it in Rust—How Did it Go?, where I'll evaluate the strengths and weaknesses of the two experiments.
Thank you for reading. Keep a look out for the upcoming posts!
If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
