I am trying to improve the performance of our total count query. We have a collection called offers
with 600k documents (storage size: 2.09 GB, logical data size: 3.73 GB). Each offer has ~150 associated attributes, so we used your recommended attribute pattern.
Document structure:
{
  ...
  attributes: [
    { key: "color", value: "red" },
    { key: "price", value: 50 },
    ...100 attributes more...
  ]
}
Compound index:
{
'attributes.key': 1,
'attributes.value': 1,
}
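For reference, the index with the key pattern above would be created in mongosh like this (assuming default index options):

```
db.offers.createIndex({ "attributes.key": 1, "attributes.value": 1 });
```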
Query:
db.offers.count({"attributes":{"$all":[{"$elemMatch":{"key":"color","value":{"$in":["red"]}}}]}})
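In plain terms, the filter counts offers where at least one element of the attributes array has key "color" and a value contained in the $in list. A minimal Node.js sketch of that matching semantics, using hypothetical sample documents rather than our real data:

```javascript
// Mirrors $elemMatch semantics for { key: ..., value: { $in: [...] } }:
// a document matches when a single array element satisfies both conditions.
function elemMatch(doc, key, values) {
  return doc.attributes.some((a) => a.key === key && values.includes(a.value));
}

const offers = [
  { _id: 1, attributes: [{ key: "color", value: "red" }, { key: "price", value: 50 }] },
  { _id: 2, attributes: [{ key: "color", value: "blue" }, { key: "price", value: 50 }] },
];

const count = offers.filter((o) => elemMatch(o, "color", ["red"])).length;
console.log(count); // 1
```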
The explain() output:
{ explainVersion: '1',
queryPlanner:
{ namespace: 'offers.offers',
indexFilterSet: false,
parsedQuery:
{ attributes:
{ '$elemMatch':
{ '$and':
[ { key: { '$eq': 'color' } },
{ value: { '$eq': 'red' } } ] } } },
queryHash: '3EC4C516',
planCacheKey: '37ABC9CD',
maxIndexedOrSolutionsReached: false,
maxIndexedAndSolutionsReached: false,
maxScansToExplodeReached: false,
winningPlan:
{ stage: 'COUNT',
inputStage:
{ stage: 'FETCH',
filter:
{ attributes:
{ '$elemMatch':
{ '$and':
[ { key: { '$eq': 'color' } },
{ value: { '$eq': 'red' } } ] } } },
inputStage:
{ stage: 'IXSCAN',
keyPattern: { 'attributes.key': 1, 'attributes.value': 1 },
indexName: 'attributes.key_1_attributes.value_1',
isMultiKey: true,
multiKeyPaths:
{ 'attributes.key': [ 'attributes' ],
'attributes.value': [ 'attributes', 'attributes.value' ] },
isUnique: false,
isSparse: false,
isPartial: false,
indexVersion: 2,
direction: 'forward',
indexBounds:
{ 'attributes.key': [ '["color", "color"]' ],
'attributes.value': [ '["red", "red"]' ] } } } },
rejectedPlans: [] },
command:
{ count: 'offers',
query: { attributes: { '$all': [ { '$elemMatch': { key: 'color', value: { '$in': [ 'red' ] } } } ] } },
'$db': 'offers' },
serverInfo:
{ host: 'be1988957f70',
port: 27017,
version: '6.0.3',
gitVersion: 'f803681c3ae19817d31958965850193de067c516' },
serverParameters:
{ internalQueryFacetBufferSizeBytes: 104857600,
internalQueryFacetMaxOutputDocSizeBytes: 104857600,
internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,
internalDocumentSourceGroupMaxMemoryBytes: 104857600,
internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,
internalQueryProhibitBlockingMergeOnMongoS: 0,
internalQueryMaxAddToSetBytes: 104857600,
internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600 },
ok: 1 }
This query is really slow: it takes between 2 and 6 seconds. Is there any way for us to improve the performance of the count?