Schema Performance Evaluation in MongoDB Using PerformanceBench

Graeme Robinson • 20 min read • Published Jan 18, 2023 • Updated Apr 02, 2024
MongoDB is often incorrectly described as being schemaless. While it is true that MongoDB offers a level of flexibility when working with schema designs that traditional relational database systems cannot match, as with any database system, the choice of schema design employed by an application built on top of MongoDB will still ultimately determine whether the application is able to meet its performance objectives and SLAs.
Fortunately, a number of design patterns (and corresponding anti-patterns) exist to help guide application developers design appropriate schemas for their MongoDB applications. A significant part of our role as developer advocates within the global strategic account team at MongoDB involves educating developers new to MongoDB on the use of these design patterns and how they differ from those they may have previously used working with relational database systems. My colleague, Daniel Coupal, contributed to a fantastic set of blog posts on the most common patterns and anti-patterns we see working with MongoDB.
Whilst schema design patterns provide a great starting point for guiding our design process, for many applications, there may come a point where it becomes unclear which one of a set of alternative designs will best support the application’s anticipated workloads. In these situations, a quote by Rear Admiral Grace Hopper that my manager, Rick Houlihan, made me aware of rings true: “One accurate measurement is worth a thousand expert opinions.”
In this article, we will explore using PerformanceBench, a Java framework application used by my team when evaluating candidate data models for a customer workload.

PerformanceBench

PerformanceBench is a simple Java framework designed to allow developers to assess the relative performance of different database design patterns within MongoDB.
PerformanceBench defines its functionality in terms of models (the design patterns being assessed) and measures (the operations to be measured against each model). As an example, a developer may wish to assess the relative performance of a design based on having data spread across multiple collections and accessed using $lookup (join) aggregations, versus one based on a hierarchical model where related documents are embedded within each other. In this scenario, the models might be respectively referred to as multi-collection and hierarchical, with the "measures" for each being CRUD operations: Create, Read, Update, and Delete.
The framework allows Java classes to be developed that implement a defined interface known as “SchemaTest,” with one class for each model to be tested. Each SchemaTest class implements the functionality to execute the measures defined for that model and returns, as output, an array of documents with the results of the execution of each measure — typically timing data for the measure execution, plus any metadata needed to later identify the parameters used for the specific execution. PerformanceBench stores these returned documents in a MongoDB collection for later analysis and evaluation.
PerformanceBench is configured via a JSON format configuration file which contains an array of documents — one for each model being tested. Each model document in the configuration file contains a set of standard fields that are common across all models being tested, plus a set of custom fields specific to that model. Developers implementing SchemaTest model classes are free to include whatever custom parameters their testing of a specific model requires.
When executed, PerformanceBench uses the data in the configuration file to identify the implementing class for each model to be tested and its associated measures. It then instructs the implementing classes to execute a specified number of iterations of each measure, optionally using multiple threads to simulate multi-user/multi-client environments.
Full details of the SchemaTest interface and the format of the PerformanceBench JSON configuration file are provided in the GitHub readme file for the project.
The PerformanceBench source in GitHub was developed using IntelliJ IDEA 2022.2.3 with OpenJDK Runtime Environment Temurin-17.0.3+7 (build 17.0.3+7).
The compiled application has been run on Amazon Linux using OpenJDK 17.0.5 (2022-10-18 LTS - Corretto).

Designing SchemaTest model classes: factors to consider

Other than the requirement to implement the SchemaTest interface, PerformanceBench gives model class developers wide latitude in designing their classes in whatever way is needed to meet the requirements of their test cases. However, there are some common considerations to take into account.

Understand the intention of the SchemaTest interface methods

The SchemaTest interface defines the following five methods:

public void initialize(JSONObject args);
public String name();
public void warmup(JSONObject args);
public Document[] executeMeasure(int opsToTest, String measure, JSONObject args);
public void cleanup(JSONObject args);
The initialize method is intended to allow implementing classes to carry out any necessary steps prior to measures being executed. This could, for example, include establishing and verifying connection to the database, building or preparing a test data set, and/or removing the results of prior execution runs. PerformanceBench calls initialize immediately after instantiating an instance of the class, but before any measures are executed.
The name method should return a string name for the implementing class. Class implementers can set the returned value to anything that makes sense for their use case. Currently, PerformanceBench only uses this method to add context to logging messages.
The warmup method is called by PerformanceBench prior to any iterations of any measure being executed. It is designed to allow model class implementers to attempt to create an environment that accurately reflects the expected state of the database in real-life. This could, for example, include carrying out queries designed to seed the MongoDB cache with an appropriate working set of data.
The executeMeasure method allows PerformanceBench to instruct a model-implementing class to execute a defined number of iterations of a specified measure. Typically, the method implementation will contain a case statement redirecting execution to the code for each defined measure. However, there is no requirement to implement in that way. The return from this method should be an array of BSON Document objects containing the results of each test iteration. Implementers are free to include whatever fields are necessary in these documents to support the metrics their use case requires.
The cleanup method is called by PerformanceBench after all iterations of all measures have been executed by the implementing class and is designed primarily to allow test data to be deleted or reset ahead of future test executions. However, the method can also be used to execute any other post test-run functionality necessary for a given use case. This may, for example, include calculating average/mean/percentile execution times for a test run, or for cleanly disconnecting from a database.
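To make this concrete, here is a minimal sketch of what a model class implementing SchemaTest might look like. The class name, collection names, and result fields are purely illustrative, the org.json JSONObject import is an assumption, and the SchemaTest import itself depends on the package layout of the PerformanceBench project:

// A minimal sketch of a SchemaTest implementation. Class, collection, and
// field names are illustrative only; the JSON library (org.json assumed here)
// and the SchemaTest interface come from the PerformanceBench project setup.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import org.json.JSONObject;

public class ExampleModelTest implements SchemaTest {

    private MongoClient client;
    private MongoCollection<Document> collection;

    @Override
    public void initialize(JSONObject args) {
        // Connect and prepare test data / indexes before any measure runs.
        client = MongoClients.create(args.getJSONObject("custom").getString("uri"));
        collection = client.getDatabase("testDB").getCollection("testCollection");
    }

    @Override
    public String name() {
        return "ExampleModelTest"; // used by PerformanceBench to add context to log messages
    }

    @Override
    public void warmup(JSONObject args) {
        // e.g. run representative queries to seed the cache with a working set
        collection.find().first();
    }

    @Override
    public Document[] executeMeasure(int opsToTest, String measure, JSONObject args) {
        Document[] results = new Document[opsToTest];
        for (int i = 0; i < opsToTest; i++) {
            long start = System.currentTimeMillis();
            // ... execute one iteration of the named measure here ...
            long end = System.currentTimeMillis();
            results[i] = new Document("model", name())
                    .append("measure", measure)
                    .append("startTime", start)
                    .append("endTime", end)
                    .append("duration", end - start);
        }
        return results;
    }

    @Override
    public void cleanup(JSONObject args) {
        // drop or reset test data, restore indexes, then disconnect
        client.close();
    }
}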

Execute measures using varying test data sets

When assessing a given model, it is important to measure the model’s performance against varying data sets. For example, the following can all impact the performance of different search and data manipulation operations:
  • Overall database and collection sizes
  • Individual document sizes
  • Available CPU and memory on the MongoDB servers being used
  • Total number of documents within individual collections
Executing a sequence of measures using different test data sets can help to identify if there is a threshold beyond which one model may perform better than another. It may also help to identify the amount of memory needed to store the working set of data necessary for the workload being tested to avoid excessive paging. Model-implementing classes should ensure that they add sufficient metadata to the results documents they generate to allow the conditions of the test to be identified during later analysis.
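As a small sketch of this idea (the field names are illustrative assumptions rather than anything mandated by PerformanceBench), a helper can stamp each results document with the data-set conditions it was measured under:

// Illustrative only: tag each results document with enough metadata about the
// data set to identify the test conditions during later analysis.
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public final class ResultDocBuilder {
    public static Document build(String model, String measure, long durationMillis,
                                 MongoCollection<Document> testCollection, String clusterTier) {
        return new Document("model", model)
                .append("measure", measure)
                .append("duration", durationMillis)
                // record the size of the data set the measure ran against
                .append("collectionDocCount", testCollection.estimatedDocumentCount())
                .append("clusterTier", clusterTier);
    }
}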

Ensure queries are supported by appropriate indexes

As with most databases, query performance in MongoDB is dependent on appropriate indexes existing on collections being queried. Model class implementers should ensure any such indexes needed by their test cases either exist or are created during the call to their classes’ initialize method. Index size compared with available cache memory should be considered, and often, finding the point at which performance is negatively impacted by paging of indexes is a major objective of PerformanceBench testing.
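A sketch of what this might look like using the MongoDB Java sync driver is shown below. The connection string is a placeholder, and the collection and field names are taken from the APIMonitor example described later in this article:

// A sketch of creating supporting indexes during initialize(), assuming the
// MongoDB Java sync driver. The connection string is a placeholder.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Indexes;

public class IndexSetupExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb+srv://user:pass@cluster.example.net")) {
            MongoDatabase db = client.getDatabase("APIMonitor");
            // Indexes supporting the region filter and the metrics lookup
            db.getCollection("APIDetails")
              .createIndex(Indexes.ascending("deployments.region"));
            db.getCollection("APIMetrics")
              .createIndex(Indexes.ascending("appname", "creationDate"));
        }
    }
}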

Remove variables such as network latency

With any testing regime, one goal should be to limit the number of variables potentially impacting performance discrepancies between test runs so differences in measured performance can be attributed with confidence to the intentional differences in test conditions. Items that come under this heading include network latency between the server running PerformanceBench and the MongoDB cluster servers. When working with MongoDB Atlas in a cloud environment, for example, specifying dedicated rather than shared servers can help avoid background load on the servers impacting performance, whilst deploying all servers in the same availability zone/region can reduce potential impacts from varying network latency.

Model multi-user environments realistically

PerformanceBench allows measures to be executed concurrently in multiple threads to simulate a multi-user environment. However, if making use of this facility, put some thought into how to accurately model real user behavior. It is rare, for example, for users to execute a complex ad-hoc aggregation pipeline and immediately execute another on its completion. Your model class may therefore want to insert a delay between execution of measure iterations to attempt to model a realistic length of time you may expect between query requests from an individual user in a realistic production environment.
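For example, a model class might insert a randomized pause between iterations, along the lines of the following sketch (the bounds are arbitrary and purely illustrative):

// Illustrative only: a randomized "think time" between measure iterations to
// approximate the gap between requests from a real user.
import java.util.concurrent.ThreadLocalRandom;

public class ThinkTimeExample {
    static void pauseBetweenIterations(long minMillis, long maxMillis) {
        try {
            Thread.sleep(ThreadLocalRandom.current().nextLong(minMillis, maxMillis));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}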

APIMonitor: an example PerformanceBench model implementation

The PerformanceBench GitHub repository includes example model class implementations for a hypothetical application designed to report on success and failure rates of calls to a set of APIs monitored by observability software.
Data for the application is stored in two document types in two different collections.
The APIDetails collection contains one document for each monitored API with metadata about that API:
{
  "_id": "api#9",
  "apiDetails": {
    "appname": "api#9",
    "platform": "Linux",
    "language": {
      "name": "Java",
      "version": "11.8.202"
    },
    "techStack": {
      "name": "Springboot",
      "version": "UNCATEGORIZED"
    },
    "environment": "PROD"
  },
  "deployments": {
    "region": "UK",
    "createdAt": {
      "$date": {
        "$numberLong": "1669164599000"
      }
    }
  }
}
The second collection, APIMetrics, is designed to represent the output from monitoring software with one document generated for each API at 15-minute intervals, giving the total number of calls to the API, the number that were successful, and the number that failed:
{
  "_id": "api#1#S#2",
  "appname": "api#1",
  "creationDate": {
    "$date": {
      "$numberLong": "1666909520000"
    }
  },
  "transactionVolume": 54682,
  "errorCount": 33302,
  "successCount": 21380,
  "region": "TK",
  "year": 2022,
  "monthOfYear": 10,
  "dayOfMonth": 27,
  "dayOfYear": 300
}
The documents include a deployment region value for each API (one of “Tokyo,” “Hong Kong,” “India,” or “UK”). The sample model classes in the repository are designed to compare the performance of options for running aggregation pipelines that calculate the total number of calls, the overall success rate, and the corresponding failure rate for all the APIs in a given region, for a given time period.
Four approaches are evaluated:
  1. Carrying out an aggregation pipeline against the APIDetails collection that includes a $lookup stage to join with, and summarize, the relevant data in the APIMetrics collection.
  2. Carrying out an initial query against the APIDetails collection to produce a list of the API ids for a given region, and then using that list as input to an $in clause within the $match stage of a separate aggregation pipeline against the APIMetrics collection to summarize the relevant monitoring data.
  3. A third approach that uses an equality clause on the region information in each document as part of the initial $match stage of a pipeline against the APIMetrics collection to summarize the relevant monitoring data. This approach is designed to test whether an equality match against a single value performs better than one using an $in clause with a large number of possible values, as used in the second approach. Two measures are implemented in this model: one that queries the two collections sequentially using the standard MongoDB Java driver, and one that queries the two collections in parallel using the MongoDB Java Reactive Streams driver.
  4. A fourth approach that adds a third collection called APIPreCalc that stores documents with pre-calculated total calls, total failed calls, and total successful calls for each API for each complete day, month, and year in the data set, with the aim of reducing the number of documents and size of calculations the aggregation pipeline has to execute. This model is an example implementation of the Computed schema design pattern and also uses the MongoDB Java Reactive Streams driver to query the collections in parallel.
For the fourth approach, the pre-computed documents in the APIPreCalc collection look like the following:
{
  "_id": "api#379#Y#2022",
  "transactionVolume": 166912052,
  "errorCount": 84911780,
  "successCount": 82000272,
  "region": "UK",
  "appname": "api#379",
  "metricsCount": {
    "$numberLong": "3358"
  },
  "year": 2022,
  "type": "year_precalc",
  "dateTag": "2022"
},
{
  "_id": "api#379#Y#2022#M#11",
  "transactionVolume": 61494167,
  "errorCount": 31247475,
  "successCount": 30246692,
  "region": "UK",
  "appname": "api#379",
  "metricsCount": {
    "$numberLong": "1270"
  },
  "year": 2022,
  "monthOfYear": 11,
  "type": "month_precalc",
  "dateTag": "2022-11"
},
{
  "_id": "api#379#Y#2022#M#11#D#19",
  "transactionVolume": 4462897,
  "errorCount": 2286438,
  "successCount": 2176459,
  "region": "UK",
  "appname": "api#379",
  "metricsCount": {
    "$numberLong": "96"
  },
  "year": 2022,
  "monthOfYear": 11,
  "dayOfMonth": 19,
  "type": "dom_precalc",
  "dateTag": "2022-11-19"
}
Note the type field in the documents used to differentiate between totals for a year, month, or day of month.
For the purposes of showing how PerformanceBench organizes models and measures, in the PerformanceBench GitHub repository, the first and second approaches are implemented as two separate SchemaTest model classes, each with a single measure, while the third and fourth approaches are implemented in a third SchemaTest model class with two measures — one for each approach.

APIMonitorLookupTest class

The first model, implementing the $lookup approach, is implemented in package com.mongodb.devrel.pods.performancebench.models.apimonitor_lookup in a class named APIMonitorLookupTest.
The aggregation pipeline implemented by this approach is:
[
  {
    $match: {
      "deployments.region": "HK",
    },
  },
  {
    $lookup: {
      from: "APIMetrics",
      let: {
        apiName: "$apiDetails.appname",
      },
      pipeline: [
        {
          $match: {
            $expr: {
              $and: [
                {
                  $eq: ["$apiDetails.appname", "$$apiName"],
                },
                {
                  $gte: ["$creationDate", ISODate("2022-11-01")],
                },
              ],
            },
          },
        },
        {
          $group: {
            _id: "apiDetails.appName",
            totalVolume: {
              $sum: "$transactionVolume",
            },
            totalError: {
              $sum: "$errorCount",
            },
            totalSuccess: {
              $sum: "$successCount",
            },
          },
        },
        {
          $project: {
            aggregatedResponse: {
              totalTransactionVolume: "$totalVolume",
              errorRate: {
                $cond: [
                  {
                    $eq: ["$totalVolume", 0],
                  },
                  0,
                  {
                    $multiply: [
                      {
                        $divide: ["$totalError", "$totalVolume"],
                      },
                      100,
                    ],
                  },
                ],
              },
              successRate: {
                $cond: [
                  {
                    $eq: ["$totalVolume", 0],
                  },
                  0,
                  {
                    $multiply: [
                      {
                        $divide: ["$totalSuccess", "$totalVolume"],
                      },
                      100,
                    ],
                  },
                ],
              },
            },
            _id: 0,
          },
        },
      ],
      as: "results",
    },
  },
]
The pipeline is executed against the APIDetails collection and is run once for each of the four geographical regions. The $lookup stage of the pipeline contains its own sub-pipeline which is executed against the APIMetrics collection once for each API belonging to each region.
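A sketch of how a model class might drive this pipeline once per region and time each run is shown below. This is illustrative only, not the repository's exact code, and buildLookupPipeline() is a hypothetical helper standing in for code that assembles the stages shown above with the region substituted in:

// A sketch (not the repository's exact code) of executing the $lookup
// pipeline once per region and timing each run.
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import org.bson.conversions.Bson;
import java.util.ArrayList;
import java.util.List;

public class LookupMeasureSketch {
    static List<Document> runForRegions(MongoCollection<Document> apiDetails, List<String> regions) {
        List<Document> resultDocs = new ArrayList<>();
        for (String region : regions) {
            List<Bson> pipeline = buildLookupPipeline(region); // hypothetical helper
            long start = System.currentTimeMillis();
            // Draining the cursor forces the full pipeline (including the
            // per-API sub-pipeline inside $lookup) to execute.
            apiDetails.aggregate(pipeline).into(new ArrayList<>());
            long end = System.currentTimeMillis();
            resultDocs.add(new Document("region", region)
                    .append("startTime", start)
                    .append("endTime", end)
                    .append("duration", end - start));
        }
        return resultDocs;
    }

    private static List<Bson> buildLookupPipeline(String region) {
        // Assembles the $match + $lookup stages shown above; omitted here.
        return new ArrayList<>();
    }
}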
This results in documents looking like the following being produced:
{
  "_id": "api#100",
  "apiDetails": {
    "appname": "api#100",
    "platform": "Linux",
    "language": {
      "name": "Java",
      "version": "11.8.202"
    },
    "techStack": {
      "name": "Springboot",
      "version": "UNCATEGORIZED"
    },
    "environment": "PROD"
  },
  "deployments": [
    {
      "region": "HK",
      "createdAt": {
        "$date": {
          "$numberLong": "1649399685000"
        }
      }
    }
  ],
  "results": [
    {
      "aggregatedResponse": {
        "totalTransactionVolume": 43585837,
        "errorRate": 50.961542851637795,
        "successRate": 49.038457148362205
      }
    }
  ]
}
One document will be produced for each API in each region. The model implementation records the total time taken (in milliseconds) to generate all the documents for a given region and returns this in a results document to PerformanceBench. The results documents look like:
{
  "_id": {
    "$oid": "6389b6581a3cd92944057c6c"
  },
  "startTime": {
    "$numberLong": "1669962059685"
  },
  "duration": {
    "$numberLong": "1617"
  },
  "model": "APIMonitorLookupTest",
  "measure": "USEPIPELINE",
  "region": "HK",
  "baseDate": {
    "$date": {
      "$numberLong": "1667260800000"
    }
  },
  "apiCount": 189,
  "metricsCount": 189,
  "threads": 3,
  "iterations": 1000,
  "clusterTier": "M10",
  "endTime": {
    "$numberLong": "1669962061302"
  }
}
As can be seen, as well as the region, start time, end time, and duration of the execution run, the result documents also include:
  • The model name and measure executed (in this case, ‘USEPIPELINE’).
  • The number of APIs (apiCount) found for this region, and number of APIs for which metrics were able to be generated (metricsCount). These numbers should always match and are included as a sanity check that data was generated correctly by the measure.
  • The number of threads and iterations used for the execution of the measure. PerformanceBench allows measures to be executed a defined number of times (iterations) to allow a good average to be determined. Executions can also be run in one or more concurrent threads to simulate multi-user/multi-client environments. In the above example, three threads each concurrently executed 1,000 iterations of the measure (3,000 total iterations).
  • The MongoDB Atlas cluster tier on which the measures were executed. This is simply used for tracking purposes when analyzing the results and could be set to any value by the class developer. In the sample class implementations, the value is set to match a corresponding value in the PerformanceBench configuration file. Importantly, it remains the user’s responsibility to ensure the cluster tier being used matches what is written to the results documents.
  • baseDate indicates the date period for which monitoring data was summarized. For a given baseDate, the summarized period is always baseDate to the current date (inclusive). An earlier baseDate will therefore result in more data being summarized.
With a single measure defined for the model, and with three threads each carrying out 1,000 iterations of the measure, an array of 3,000 results documents will be returned by the model class to PerformanceBench. PerformanceBench then writes these documents to a collection for later analysis.
To support the aggregation pipeline, the model implementation creates the following indexes in its initialize method implementation:
APIDetails: {"deployments.region": 1}
APIMetrics: {"appname": 1, "creationDate": 1}
The model temporarily drops any existing indexes on the collections to avoid contention for memory cache space. The above indexes are subsequently dropped in the model’s cleanup method implementation, and all original indexes restored.
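One possible way of capturing the existing index definitions before dropping them, and recreating them later in cleanup, is sketched below. This is not necessarily the exact approach used in the repository:

// A sketch of one way to capture a collection's existing indexes before
// dropping them, and restore them afterwards. Not necessarily the exact
// approach used in the PerformanceBench repository.
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.IndexOptions;
import org.bson.Document;
import java.util.ArrayList;
import java.util.List;

public class IndexSwapSketch {
    static List<Document> saveAndDropIndexes(MongoCollection<Document> coll) {
        List<Document> existing = coll.listIndexes().into(new ArrayList<>());
        coll.dropIndexes(); // the default _id index is never dropped
        return existing;
    }

    static void restoreIndexes(MongoCollection<Document> coll, List<Document> saved) {
        for (Document idx : saved) {
            Document key = idx.get("key", Document.class);
            if (!key.containsKey("_id")) { // the _id index still exists
                coll.createIndex(key, new IndexOptions().name(idx.getString("name")));
            }
        }
    }
}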

APIMonitorMultiQueryTest class

The second model carries out an initial query against the APIDetails collection to produce a list of the API ids for a given region and then uses that list as input to an $in clause as part of a $match stage in an aggregation pipeline against the APIMetrics collection. It is implemented in package com.mongodb.devrel.pods.performancebench.models.apimonitor_multiquery in a class named APIMonitorMultiQueryTest.
The initial query, carried out against the APIDetails collection, looks like:
db.APIDetails.find({"deployments.region": "HK"})
This query is carried out for each of the four regions in turn and, from the returned documents, a list of the APIs belonging to each region is generated. The generated list is then used as the input to a $in clause in the $match stage of the following aggregation pipeline run against the APIMetrics collection:
[
  {
    $match: {
      "apiDetails.appname": {$in: ["api#1", "api#2", "api#3"]},
      creationDate: {
        $gte: ISODate("2022-11-01"),
      },
    },
  },
  {
    $group: {
      _id: "$apiDetails.appname",
      totalVolume: {
        $sum: "$transactionVolume",
      },
      totalError: {
        $sum: "$errorCount",
      },
      totalSuccess: {
        $sum: "$successCount",
      },
    },
  },
  {
    $project: {
      aggregatedResponse: {
        totalTransactionVolume: "$totalVolume",
        errorRate: {
          $cond: [
            {
              $eq: ["$totalVolume", 0],
            },
            0,
            {
              $multiply: [
                {
                  $divide: ["$totalError", "$totalVolume"],
                },
                100,
              ],
            },
          ],
        },
        successRate: {
          $cond: [
            {
              $eq: ["$totalVolume", 0],
            },
            0,
            {
              $multiply: [
                {
                  $divide: ["$totalSuccess", "$totalVolume"],
                },
                100,
              ],
            },
          ],
        },
      },
    },
  },
]
This pipeline is essentially the same as the sub-pipeline in the $lookup stage of the aggregation used by the APIMonitorLookupTest class, the main difference being that this pipeline returns the summary documents for all APIs in a region using a single execution, whereas the sub-pipeline is executed once per API as part of the $lookup stage in the APIMonitorLookupTest class. Note that the pipeline shown above has only three API values listed in its $in clause. In reality, the list generated during testing was between two and three hundred items long for each region.
When the documents are returned from the pipeline, they are merged with the corresponding API details documents retrieved from the initial query to create a set of documents equivalent to those created by the pipeline in the APIMonitorLookupTest class. From there, the model implementation creates the same summary documents to be returned to and saved by PerformanceBench.
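The following sketch illustrates the two-step shape of this approach: an initial find against APIDetails to collect the API names for a region, followed by an aggregation against APIMetrics using those names in an $in clause. It is illustrative only, and the $group and $project stages shown earlier are omitted:

// A sketch of the two-step approach: query APIDetails for a region, collect
// the API names, and feed them into the $match stage of the APIMetrics
// pipeline via $in. Stage builders other than $match are omitted.
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Filters;
import org.bson.Document;
import org.bson.conversions.Bson;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class MultiQuerySketch {
    static List<Document> summarizeRegion(MongoCollection<Document> apiDetails,
                                          MongoCollection<Document> apiMetrics,
                                          String region, Date baseDate) {
        // Step 1: find the APIs deployed in the given region
        List<String> apiNames = new ArrayList<>();
        for (Document d : apiDetails.find(Filters.eq("deployments.region", region))) {
            apiNames.add(d.get("apiDetails", Document.class).getString("appname"));
        }
        // Step 2: aggregate the metrics for just those APIs
        Bson match = Aggregates.match(Filters.and(
                Filters.in("apiDetails.appname", apiNames),
                Filters.gte("creationDate", baseDate)));
        List<Bson> pipeline = List.of(match /* , $group and $project stages as shown above */);
        return apiMetrics.aggregate(pipeline).into(new ArrayList<>());
    }
}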
To support the pipeline, the model implementation creates the following indexes in its initialize method implementation:
APIDetails: {"deployments.region": 1}
APIMetrics: {"appname": 1, "creationDate": 1}
As with the APIMonitorLookupTest class, this model temporarily drops any existing indexes on the collections to avoid contention for memory cache space. The above indexes are subsequently dropped in the model’s cleanup method implementation, and all original indexes restored.

APIMonitorRegionTest class

The third model class, com.mongodb.devrel.pods.performancebench.models.apimonitor_regionquery.APIMonitorRegionTest, implements two measures, both similar to the measure in APIMonitorMultiQueryTest, but with the $in clause in the $match stage replaced by an equivalency check on the "region" field. The purpose of these measures is to assess whether an equivalency check against the region field provides any performance benefit over an $in clause whose list of matching values could be several hundred items long. The difference between the two measures in this model, named "QUERYSYNC" and "QUERYASYNC" respectively, is that the first performs the initial find query against the APIDetails collection and then the aggregation pipeline against the APIMetrics collection in sequence, whilst the second uses the MongoDB Java Reactive Streams driver to carry out the two operations in parallel to assess whether that provides any performance benefit.
With these changes, the match stage of the aggregation pipeline for this model looks like:
{
  $match: {
    "deployments.region": "HK",
    creationDate: {
      $gte: ISODate("2022-11-01"),
    },
  },
}
In all other regards, the pipeline and the subsequent processes for creating summary documents to pass back to PerformanceBench are the same as those used in APIMonitorMultiQueryTest.
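The repository's QUERYASYNC measure uses the MongoDB Java Reactive Streams driver for the parallel execution. Purely as a simplified illustration of the same idea (overlapping the APIDetails find with the APIMetrics aggregation), the following sketch uses CompletableFuture with the synchronous driver instead:

// Illustrative only: overlapping the APIDetails find and the APIMetrics
// aggregation using CompletableFuture with the sync driver. The repository
// itself uses the MongoDB Java Reactive Streams driver for this.
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;
import org.bson.conversions.Bson;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ParallelQuerySketch {
    static void runInParallel(MongoCollection<Document> apiDetails,
                              MongoCollection<Document> apiMetrics,
                              String region, List<Bson> metricsPipeline) {
        CompletableFuture<List<Document>> detailsFuture = CompletableFuture.supplyAsync(
                () -> apiDetails.find(Filters.eq("deployments.region", region))
                        .into(new ArrayList<>()));
        CompletableFuture<List<Document>> metricsFuture = CompletableFuture.supplyAsync(
                () -> apiMetrics.aggregate(metricsPipeline).into(new ArrayList<>()));

        // Wait for both operations, then merge the two result sets into
        // summary documents as described above.
        List<Document> details = detailsFuture.join();
        List<Document> metrics = metricsFuture.join();
        // ... merge details and metrics ...
    }
}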

APIMonitorPrecomputeTest class

The fourth model class, com.mongodb.devrel.pods.performancebench.models.apimonitor_precompute.APIMonitorPrecomputeTest, implements a single measure named “PRECOMPUTE”. This measure makes use of a third collection named APIPreCalc that contains precalculated summary data for each API for each complete day, month, and year in the data set. The intention with this measure is to assess what, if any, performance gain can be obtained by reducing the number of documents and resulting calculations the aggregation pipeline is required to carry out.
The measure calculates complete days, months, and years between the baseDate specified in the configuration file and the current date. The total number of calls, failed calls, and successful calls for each API for each complete day, month, or year is then retrieved from APIPreCalc. A $unionWith stage in the pipeline is then used to combine these values with the metrics for the partial days at either end of the period (the baseDate and the current date) retrieved from APIMetrics.
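As an illustration of the date bucketing involved, the following sketch derives dateTag values for the complete days between a base date and the current date, matching the tag format seen in the APIPreCalc documents. It is an assumption about how such tags could be generated, not the repository's exact code; complete months and years can be derived similarly:

// A sketch of deriving dateTag values for the complete days between a base
// date and today, matching the unpadded "yyyy-M-d" style tags shown in the
// APIPreCalc documents. Illustrative only.
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

public class DateTagSketch {
    static List<String> completeDayTags(LocalDate baseDate, LocalDate today) {
        List<String> tags = new ArrayList<>();
        // complete days only: start the day after baseDate, stop before today
        for (LocalDate d = baseDate.plusDays(1); d.isBefore(today); d = d.plusDays(1)) {
            tags.add(d.getYear() + "-" + d.getMonthValue() + "-" + d.getDayOfMonth());
        }
        return tags;
    }
}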
The pipeline used for this measure looks like:
[
  {
    "$match": {
      "region": "UK",
      "dateTag": {
        "$in": [
          "2022-12",
          "2022-11-2",
          "2022-11-3",
          "2022-11-4",
          "2022-11-5",
          "2022-11-6",
          "2022-11-7",
          "2022-11-8",
          "2022-11-9",
          "2022-11-10"
        ]
      }
    }
  },
  {
    "$unionWith": {
      "coll": "APIMetrics",
      "pipeline": [
        {
          "$match": {
            "$expr": {
              "$or": [
                {
                  "$and": [
                    { "$eq": ["$region", "UK"] },
                    { "$eq": ["$year", 2022] },
                    { "$eq": ["$dayOfYear", 305] },
                    { "$gte": ["$creationDate", { "$date": "2022-11-01T00:00:00Z" }] }
                  ]
                },
                {
                  "$and": [
                    { "$eq": ["$region", "UK"] },
                    { "$eq": ["$year", 2022] },
                    { "$eq": ["$dayOfYear", 315] },
                    { "$lte": ["$creationDate", { "$date": "2022-11-11T01:00:44.774Z" }] }
                  ]
                }
              ]
            }
          }
        }
      ]
    }
  },
  {
    "$group": {
    }
  },
  {
    "$project": {
    }
  }
]
The $group and $project stages are identical to the prior models and are not shown above.
To support the queries carried out by the pipeline, the model creates the following indexes in its initialize method implementation:
APIDetails: {"deployments.region": 1}
APIMetrics: {"region": 1, "year": 1, "dayOfYear": 1, "creationDate": 1}
APIPreCalc: {"region": 1, "dateTag": 1}

Controlling PerformanceBench execution — config.json

The execution of PerformanceBench is controlled by a configuration file in JSON format. The name and path of this file are passed as a command line argument using the -c flag. In the PerformanceBench GitHub repository, the file is called config.json:
{
  "models": [
    {
      "namespace": "com.mongodb.devrel.pods.performancebench.models.apimonitor_lookup",
      "className": "APIMonitorLookupTest",
      "measures": ["USEPIPELINE"],
      "threads": 2,
      "iterations": 500,
      "resultsuri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
      "resultsCollectionName": "apimonitor_results",
      "resultsDBName": "performancebenchresults",
      "custom": {
        "uri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
        "apiCollectionName": "APIDetails",
        "metricsCollectionName": "APIMetrics",
        "precomputeCollectionName": "APIPreCalc",
        "dbname": "APIMonitor",
        "regions": ["UK", "TK", "HK", "IN"],
        "baseDate": "2022-11-01T00:00:00.000Z",
        "clusterTier": "M40",
        "rebuildData": false,
        "apiCount": 1000
      }
    },
    {
      "namespace": "com.mongodb.devrel.pods.performancebench.models.apimonitor_multiquery",
      "className": "APIMonitorMultiQueryTest",
      "measures": ["USEINQUERY"],
      "threads": 2,
      "iterations": 500,
      "resultsuri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
      "resultsCollectionName": "apimonitor_results",
      "resultsDBName": "performancebenchresults",
      "custom": {
        "uri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
        "apiCollectionName": "APIDetails",
        "metricsCollectionName": "APIMetrics",
        "precomputeCollectionName": "APIPreCalc",
        "dbname": "APIMonitor",
        "regions": ["UK", "TK", "HK", "IN"],
        "baseDate": "2022-11-01T00:00:00.000Z",
        "clusterTier": "M40",
        "rebuildData": false,
        "apiCount": 1000
      }
    },
    {
      "namespace": "com.mongodb.devrel.pods.performancebench.models.apimonitor_regionquery",
      "className": "APIMonitorRegionQueryTest",
      "measures": ["QUERYSYNC", "QUERYASYNC"],
      "threads": 2,
      "iterations": 500,
      "resultsuri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
      "resultsCollectionName": "apimonitor_results",
      "resultsDBName": "performancebenchresults",
      "custom": {
        "uri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
        "apiCollectionName": "APIDetails",
        "metricsCollectionName": "APIMetrics",
        "precomputeCollectionName": "APIPreCalc",
        "dbname": "APIMonitor",
        "regions": ["UK", "TK", "HK", "IN"],
        "baseDate": "2022-11-01T00:00:00.000Z",
        "clusterTier": "M40",
        "rebuildData": false,
        "apiCount": 1000
      }
    },
    {
      "namespace": "com.mongodb.devrel.pods.performancebench.models.apimonitor_precompute",
      "className": "APIMonitorPrecomputeTest",
      "measures": ["PRECOMPUTE"],
      "threads": 2,
      "iterations": 500,
      "resultsuri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
      "resultsCollectionName": "apimonitor_results",
      "resultsDBName": "performancebenchresults",
      "custom": {
        "uri": "mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority",
        "apiCollectionName": "APIDetails",
        "metricsCollectionName": "APIMetrics",
        "precomputeCollectionName": "APIPreCalc",
        "dbname": "APIMonitor",
        "regions": ["UK", "TK", "HK", "IN"],
        "baseDate": "2022-11-01T00:00:00.000Z",
        "clusterTier": "M40",
        "rebuildData": false,
        "apiCount": 1000
      }
    }
  ]
}
The document contains a single top-level field called “models,” the value of which is an array of sub-documents, each of which describes a model and its corresponding measures to be executed. PerformanceBench attempts to execute the models and measures in the order they appear in the file.
For each model, the configuration file defines the Java class implementing the model and its measures, the number of concurrent threads there should be executing each measure, the number of iterations of each measure each thread should execute, an array listing the names of the measures to be executed, and the connection URI, database name, and collection name where PerformanceBench should write results documents.
Additionally, there is a “custom” sub-document for each model where model class implementers can add any parameters specific to their model implementations. In the case of the APIMonitor class implementations, this includes the connection URI, database name and collection names where the test data resides, an array of acronyms for the geographic regions, the base date from which monitoring data should be summarized (summaries are based on values for baseDate to the current date, inclusive), and the Atlas cluster tier on which the tests were run (this is included in the results documents to allow comparison of performance of different tiers). The custom parameters also include a flag indicating if the test data set should be rebuilt before any of the measures for a model are executed and, if so, how many APIs data should be built for. The data rebuild code included in the sample model implementations builds data for the given number of APIs with the data for each API starting from a random date within the last 90 days.
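As an illustration, a model's initialize method might read its custom parameters along the following lines. This sketch assumes an org.json-style JSONObject API; the actual JSON library is whichever one the PerformanceBench project uses:

// A sketch of reading custom parameters from the JSONObject passed to
// initialize(). Assumes an org.json-style API; illustrative only.
import org.json.JSONObject;

public class CustomParamsSketch {
    private String uri;
    private String dbName;
    private String[] regions;
    private boolean rebuildData;

    void readCustomParams(JSONObject args) {
        JSONObject custom = args.getJSONObject("custom");
        uri = custom.getString("uri");
        dbName = custom.getString("dbname");
        rebuildData = custom.getBoolean("rebuildData");
        regions = custom.getJSONArray("regions").toList()
                .stream().map(Object::toString).toArray(String[]::new);
    }
}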

Summarizing results of the APIMonitor tests

By having PerformanceBench save the results of each test to a MongoDB collection, we are able to carry out analysis of the results in a variety of ways. The MongoDB aggregation framework includes over 20 different available stages and over 150 available expressions allowing enormous flexibility in performing analysis, and if you are using MongoDB Atlas, you have access to Atlas Charts, allowing you to quickly and easily visually display and analyze the data in a variety of chart formats.
For analyzing larger data sets, the MongoDB driver for Python or Connector for Apache Spark could be considered.
The output from one simulated test run generated the following results:

Test setup

[Image: test setup]
Note that the AWS EC2 server used to run PerformanceBench was located within the same AWS availability zone as the MongoDB Atlas cluster in order to minimize variations in measurements due to variable network latency.
The above conditions resulted in a total of 20,000 results documents being written by PerformanceBench to MongoDB (five measures, executed 500 times for each of four geographical regions, by two threads). Atlas Charts was used to display the results:
[Image: Atlas Charts representation of measure results]
A further aggregation pipeline was then run on the results to find, for each measure, run by each model:
  • The shortest iteration execution time
  • The longest iteration execution time
  • The mean iteration execution time
  • The 95th percentile execution time
  • The number of iterations completed per second.
The pipeline used was:
[
  {
    $group: {
      _id: {
        model: "$model",
        measure: "$measure",
        region: "$region",
        baseDate: "$baseDate",
        threads: "$threads",
        iterations: "$iterations",
        clusterTier: "$clusterTier",
      },
      max: {
        $max: "$duration",
      },
      min: {
        $min: "$duration",
      },
      mean: {
        $avg: "$duration",
      },
      stddev: {
        $stdDevPop: "$duration",
      },
      count: {
        $sum: 1,
      },
      start: {
        $min: "$startTime",
      },
      end: {
        $max: "$endTime",
      },
    },
  },
  {
    $project: {
      model: "$_id.model",
      measure: "$_id.measure",
      region: "$_id.region",
      baseDate: "$_id.baseDate",
      threads: "$_id.threads",
      iterations: "$_id.iterations",
      clusterTier: "$_id.clusterTier",
      max: 1,
      min: 1,
      mean: {
        $round: ["$mean"],
      },
      "95th_Centile": {
        $round: [
          {
            $sum: [
              "$mean",
              {
                $multiply: ["$stddev", 2],
              },
            ],
          },
        ],
      },
      throughput: {
        $round: [
          {
            $divide: [
              "$count",
              {
                $divide: [
                  {
                    $subtract: ["$end", "$start"],
                  },
                  1000,
                ],
              },
            ],
          },
          2,
        ],
      },
      _id: 0,
    },
  },
]
This produced the following results:
[Image: Table of summary results]
As can be seen, the pipelines using the $lookup stage and the equality searches on the region values in APIMetrics performed significantly slower than the other approaches. In the case of the $lookup based pipeline, this was most likely because of the overhead of marshaling one call to the sub-pipeline within the lookup for every API (1,000 total calls to the sub-pipeline for each iteration), rather than one call per geographic region (four calls total for each iteration) in the other approaches. With two threads each performing 500 iterations of each measure, this would mean marshaling 1,000,000 calls to the sub-pipeline with the $lookup approach as opposed to 4,000 calls for the other measures.
If verification of the results indicated they were accurate, this would be a good indicator that an approach that avoided using a $lookup aggregation stage would provide better query performance for this particular use case. In the case of the pipelines with the equality clause on the region field (QUERYSYNC and QUERYASYNC), their performance was likely impacted by having to sort a large number of documents by APIID in the $group stage of their pipeline. In contrast, the pipeline using the $in clause (USEINQUERY) utilized an index on the APPID field, meaning documents were returned to the pipeline already sorted by APPID — this likely gave it enough of an advantage during the $group stage of the pipeline for it to consistently complete the stage faster. Further investigation and refinement of the indexes used by the QUERYSYNC and QUERYASYNC measures could reduce their performance deficit.
It’s also noticeable that the precompute model was between 25 and 40 times faster than the other approaches. By using the precomputed values for each API, the number of documents the pipeline needed to aggregate was reduced from as much as 96,000 to, at most, 1,000 for each full day being measured, and from as much as 2,976,000 to, at most, 1,000 for each complete month being measured. This has a significant impact on throughput and underscores the value of the Computed schema design pattern.

Final thoughts

PerformanceBench provides a quick way to organize, create, execute, and record the results of tests to measure how different schema designs perform when executing different workloads. However, it is important to remember that the accuracy of the results will depend on how well the implemented model classes simulate the real life access patterns and workloads they are intended to model.
Ensuring the models accurately represent the workloads and schemas being measured is the job of the implementing developers, and PerformanceBench can only provide the framework for executing those models. It cannot improve or provide any guarantee that the results it records are an accurate prediction of an application’s real world performance.
Finally, it is important to understand that PerformanceBench, while free to download and use, is not in any way endorsed or supported by MongoDB.
The repository for PerformanceBench can be found on GitHub. The project was created in IntelliJ IDEA using Gradle.
