Supercharging Time Series Collections: Key Enhancements in MongoDB 8.0 with Block Processing
The landscape for time-series data has evolved significantly in recent years. Businesses are capturing more granular data, recognizing that the value of data rises in proportion to its precision. Storing time-stamped data and performing temporal analytics is becoming essential, and in some industries even mandated. As data volumes grow and precision-based analytics become increasingly crucial, tools that provide efficient ways to work with time-series data will only become more critical.
To address the rising demand for time-series analytics, we introduced Time Series Collections in MongoDB 5.0, designed to meet the expanding needs of time-stamped data. Unlike point solutions that require separate, complex setups, MongoDB's Time Series Collections let users simply stand up a collection and instantly leverage time-series capabilities. Over time, we've expanded those capabilities with features like columnar compression, enhanced temporal analytics, enriched indexing, geo-support, and seamless integration with the broader MongoDB portfolio, creating a streamlined, developer-friendly experience.
In our upcoming 8.0 release, we’re excited to introduce new features that significantly enhance scalability and query performance for managing large time-series workloads, delivering even better price-performance for our users.
As time-series data volumes grow, the challenge isn't just scaling; it's scaling efficiently, balancing resources, cost, and performance. With MongoDB 8.0, we're introducing key optimizations in Time Series Collections to help users maximize resource value while managing increasingly complex workloads.
In previous versions, Time Series Collections inserted data in an uncompressed format, which inflated the working set, increased cache use, and caused write amplification, leading to high I/O as uncompressed data was written to the WiredTiger storage engine. This was especially problematic for high-cardinality workloads with millions of devices or sensors. With MongoDB 8.0, Time Series Collections write directly into a column-compressed format, reducing cache usage, lowering write I/O, and improving both insert performance and storage efficiency.
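Nothing changes on the application side: inserts look exactly as they did before, and the server handles the compressed bucket format transparently. Here is a minimal sketch of a high-cardinality write pattern (the sensor_readings collection and its fields are hypothetical, for illustration only):

// Hypothetical collection: the metaField identifies each device, so
// millions of distinct devices make this a high-cardinality workload.
db.createCollection("sensor_readings", {
    timeseries: {
        timeField: "ts",
        metaField: "device",
        granularity: "seconds"
    }
});

// Ordinary inserts -- on 8.0 the server writes the underlying buckets
// in column-compressed form; no application changes are needed.
db.sensor_readings.insertMany([
    { ts: new Date(), device: { id: "sensor-000001" }, temp: 21.4 },
    { ts: new Date(), device: { id: "sensor-000002" }, temp: 19.8 }
]);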
For users, lower cache usage translates directly into cost savings. Users can extract significantly more value from their existing cluster resources, resulting in better price-performance. We've observed throughput improvements of 2-3x, with cache usage reduced by 10-20x compared to the same workloads on version 7.0.
For example, a workload on MongoDB 7.0 can experience performance fluctuations, leading to inconsistent write performance and a sawtooth pattern caused by I/O overload, as writing large amounts of uncompressed data to disk strained checkpoints. As shown below, a test on a 7.0 Atlas M50 cluster (32 GB RAM, 160 GB storage, 8 vCPUs) reached peaks of 500K inserts/s while displaying this pattern:
With 8.0, write performance is now steady, with no I/O strain. As shown below, the same workload on an Atlas M50 achieves a stable 600K inserts/s without the previous sawtooth pattern.
User engagement highlighted that query performance is crucial as time-series workloads scale. Traditionally, the MongoDB query engine processes data one document at a time, which can be inefficient for large-scale analytics. For Time Series Collections, this inefficiency stems from the need to unpack and reshape a large volume of compressed data. To address this, we introduced Block Processing for Time Series Collections, a new automatic query execution model that processes blocks of data at once, leveraging column-level summaries while avoiding the overhead of unpacking and reshaping documents. This approach significantly improves performance, particularly for aggregations that leverage stages such as $match, $sort, and $group, along with other analytical stages like $setWindowFields.
Because each stage hands larger chunks of data to the next, the gains compound across the pipeline, reducing per-document overhead and exploiting patterns in time-series data that the engine previously could not. Use cases like financial analysis and IoT, which involve intensive filtering, grouping, and sorting (i.e., $match, $group, $sort), will see major performance improvements. With Block Processing, we've observed improvements ranging from 10-40x, with some large-scale aggregations reaching up to 100x.
Let's explore how Block Processing can optimize common financial aggregations, such as generating OHLC (Open, High, Low, Close) values and calculating an exponential moving average over a specified time period. In this example, we analyze an aggregation that saw a 20x improvement on an Atlas M50 replica set using MongoDB's fork of TSBS (the Time Series Benchmark Suite). We use the TSBS finance use case, which generates a workload containing 10 stock symbols, each producing an event per second over 7 days, resulting in approximately 6 million events loaded into a Time Series Collection.
Here's a sample document so you can see what this looks like:
{
    "time" : ISODate("2022-01-01T00:00:00Z"),
    "tags" : {
        "symbol" : "MDB"
    },
    "_id" : ObjectId("64c4092a9451cd8064c69be1"),
    "measurement" : "price",
    "price" : 200.13171
}
We start by creating a time-series collection market_data in MongoDB, where the timeField is set to time and the metaField to tags. The granularity is set to "seconds" to match the workload's one-event-per-second frequency.

db.createCollection("market_data", {
    timeseries: {
        timeField: "time",
        metaField: "tags",
        granularity: "seconds"
    }
})
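As a quick sanity check (not part of the original walkthrough), you can confirm the collection was created with the intended time-series options:

// Returns the collection metadata, including the timeseries options.
db.getCollectionInfos({ name: "market_data" });

// Once data is loaded, collection stats include bucket-level details
// for the time-series collection.
db.market_data.stats().timeseries;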
We create two compound indexes to support queries within the workload:
db.market_data.createIndex({ "tags": 1, "time": 1 });
db.market_data.createIndex({ "tags.symbol": 1, "time": -1 });
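If you'd like to follow along without running TSBS, here is a rough sketch that loads a small synthetic slice of data with the same shape (the symbol list and random-walk prices are made up for illustration):

// Generate one price event per second for a few symbols over one hour,
// a tiny synthetic stand-in for the TSBS finance workload.
const symbols = ["MDB", "AAPL", "GOOG"];
const start = new Date("2022-01-01T00:00:00Z");
let docs = [];
for (const symbol of symbols) {
    let price = 200;
    for (let s = 0; s < 3600; s++) {
        price += Math.random() - 0.5;    // simple random walk
        docs.push({
            time: new Date(start.getTime() + s * 1000),
            tags: { symbol: symbol },
            measurement: "price",
            price: price
        });
        if (docs.length === 1000) {      // insert in batches of 1,000
            db.market_data.insertMany(docs);
            docs = [];
        }
    }
}
if (docs.length > 0) db.market_data.insertMany(docs);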
Next, we construct a query to generate the OHLC values and exponential moving average for a group of stock symbols, computed every 4 hours over a 1-day time window.
db.market_data.aggregate([
    { "$match": { "$expr": { "$gte": ["$time", { "$dateSubtract": { "startDate": new Date("2022-01-01T03:00:00Z"), "unit": "hour", "amount": 24 } }] } } },
    { "$sort": { "time": 1 } },
    { "$group": {
        "_id": { "symbol": "$tags.symbol", "time": { "$dateTrunc": { "date": "$time", "unit": "minute", "binSize": 240 } } },
        "high": { "$max": "$price" },
        "low": { "$min": "$price" },
        "open": { "$first": "$price" },
        "close": { "$last": "$price" }
    } },
    { "$setWindowFields": {
        "partitionBy": "$_id.symbol",
        "sortBy": { "_id.time": 1 },
        "output": { "expMovingAverage": { "$expMovingAvg": { "input": "$close", "N": 100 } } }
    } },
    { "$sort": { "_id.time": -1 } }
]);
The aggregation returns documents like this:

{
    "_id": { "symbol": "MDB", "time": ISODate("2022-01-01T02:00:00Z") },
    "high": 148.27729312597552,
    "low": 51.01901106672195,
    "open": 126.83590008130241,
    "close": 99.44471233463418,
    "expMovingAverage": 99.44471233463418
}
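Block Processing is selected automatically by the query planner; there is nothing to enable or configure. If you want to see whether a pipeline benefited from it, you can inspect the plan with explain(). The exact stage names in the output vary by version, but on 8.0 eligible time-series pipelines show block-based stages in the winning plan:

// Run an aggregation under explain to inspect the execution plan
// (shown here with a simplified pipeline for brevity).
db.market_data.explain("executionStats").aggregate([
    { "$match": { "tags.symbol": "MDB" } },
    { "$group": { "_id": null, "high": { "$max": "$price" }, "low": { "$min": "$price" } } }
]);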
Compared to 7.0, this query's operations per second improved by an incredible 20x. This dramatic boost reflects our ongoing commitment to helping MongoDB users handle complex time-series data with ease, and we're excited to see what can be achieved with these new enhancements.
Please try it out, and let us know your feedback!
contributed by Nishith Atreya and Michael Gargiulo