Hi,
I am trying to use $skip and $limit after $search in an aggregation pipeline. Every time I increase the skip size, the execution time gets longer.
Example:
- Skip 10, limit 10: execution time is 500 ms
- Skip 30, limit 10: execution time is 700 ms
- Skip 50, limit 10: execution time is 900 ms
- Skip 800, limit 10: execution time is 20 s
I need to know whether there is any other way to optimise the query to make it faster, and whether there is any way to specify ascending or descending order in the Atlas Search index.
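For reference, the pipeline is roughly the following shape (collection, index, and field names here are illustrative, not my actual query):

```js
db.products.aggregate([
  {
    $search: {
      index: "default",
      text: { query: "laptop", path: "title" }
    }
  },
  { $skip: 800 },  // grows with the page number, and so does the execution time
  { $limit: 10 }
])
```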
MongoDB still has to iterate over the documents in order to skip them, which explains the behaviour you're seeing in the quote above.
Just wanting to understand the use case in a bit more detail here - is the question about pagination of Atlas Search results? Could you provide more details on the intended use case?
Yep, it's about pagination. Every time I go to the next page in the UI, the response time increases, as I mentioned in the example above.
I found a solution for that: I made a change to the Atlas Search index, and it works well.
Ref: storing-source-fields
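In case it helps anyone else, the change was along these lines - a sketch only, with illustrative field names rather than my real schema:

```js
// Search index definition: store the fields needed by the result list in the index itself.
{
  "mappings": { "dynamic": true },
  "storedSource": {
    "include": ["title", "price"]
  }
}
```

```js
// Query with returnStoredSource so $search returns the stored fields directly,
// instead of fetching the full document for every result before $skip/$limit.
db.products.aggregate([
  {
    $search: {
      index: "default",
      text: { query: "laptop", path: "title" },
      returnStoredSource: true
    }
  },
  { $skip: 800 },
  { $limit: 10 }
])
```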
Thanks for the reply @Jason_Tran
Hey @Nanthakumar_DG, we are currently working on improving this for Atlas Search. What latency were you hoping to see for this query? Did stored source help?
@Elle_Shwer Yes, storedSource helps improve the query timing, but I still need help with this search index: the accuracy of the results is not as expected. I need full-text-search-style indexing for my case - can anyone help with that? The index also needs to support sorting in descending order, as well as skip and limit.
When you have a sequence with $skip followed by a $limit, the $limit moves before the $skip. With the reordering, the $limit value increases by the $skip amount.
For example, if the pipeline consists of the following stages:
{ $skip: 10 },
{ $limit: 5 }
During the optimization phase, the optimizer transforms the sequence to the following:
{ $limit: 15 },
{ $skip: 10 }
This optimization allows for more opportunities for $sort + $limit Coalescence, such as with $sort + $skip + $limit sequences. See $sort + $limit Coalescence for details on the coalescence and $sort + $skip + $limit Sequence for an example.
For aggregation operations on sharded collections, this optimization reduces the results returned from each shard.
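Putting that together with a $sort, this is roughly what happens (field names are illustrative):

```js
// Pipeline as written
[
  { $sort: { createdAt: -1 } },
  { $skip: 10 },
  { $limit: 5 }
]

// What the optimizer effectively runs: $limit moves before $skip and grows by the skip amount,
// which lets $sort keep only the top 15 documents in memory ($sort + $limit coalescence).
[
  { $sort: { createdAt: -1 } },
  { $limit: 15 },
  { $skip: 10 }
]
```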
With the new token-based pagination in Atlas Search, you can:
- generate Base64-encoded tokens for each document within your results using the new $meta keyword searchSequenceToken, and
- use that token as a point of reference with the new searchBefore and searchAfter options within the $search stage to specify what set of results you want to return.
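A sketch of what that looks like in practice (index and field names are illustrative):

```js
// First page: run the search, limit to the page size, and project a pagination token per result.
db.products.aggregate([
  {
    $search: {
      index: "default",
      text: { query: "laptop", path: "title" }
    }
  },
  { $limit: 10 },
  {
    $project: {
      title: 1,
      paginationToken: { $meta: "searchSequenceToken" }
    }
  }
])
```

```js
// Next page: pass the token of the last document from the previous page via searchAfter,
// so the search resumes from that point instead of skipping over all earlier results.
db.products.aggregate([
  {
    $search: {
      index: "default",
      text: { query: "laptop", path: "title" },
      searchAfter: "<paginationToken of the last result on the previous page>"
    }
  },
  { $limit: 10 },
  {
    $project: {
      title: 1,
      paginationToken: { $meta: "searchSequenceToken" }
    }
  }
])
```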
Would love for you to give it a try and get your feedback!