Hello all,
I’m using scheduled triggers and functions (Realm apps) on an Atlas Data Lake to write a daily report to an S3 bucket, matching all documents of a collection.
The problem is that for some collections with a large number of documents (2.2M), the function times out (15s).
The main part of the function is the following aggregation pipeline:
const pipeline = [
  {
    // Match all documents
    $match: {}
  },
  {
    // Convert Date fields to epoch milliseconds (long)
    $addFields: {
      created: { $convert: { input: "$created", to: "long", onNull: "" } },
      modified: { $convert: { input: "$modified", to: "long", onNull: "" } },
      activated: { $convert: { input: "$activated", to: "long", onNull: "" } }
    }
  },
  {
    // Write results out to the S3 bucket in the specified file format and size chunks
    "$out": {
      "s3": {
        "bucket": s3BucketName,
        "region": s3BucketRegion,
        "filename": s3pathAndFilename,
        "format": { "name": "json.gz", "maxFileSize": "1TiB" } // 1TiB effectively means a single file will be written
      }
    }
  }
];
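For context, here is a minimal, self-contained sketch of how a pipeline like this is typically run from inside the function. The service name ("datalake"), database, collection, and S3 values below are illustrative assumptions, not my real ones:

exports = async function () {
  // Assumed names: replace with your linked Data Lake service,
  // database, and collection.
  const collection = context.services
    .get("datalake")
    .db("reports")
    .collection("events");

  // Hypothetical bucket details for illustration.
  const s3BucketName = "my-report-bucket";
  const s3BucketRegion = "eu-west-1";
  const s3pathAndFilename = "daily/report";

  const pipeline = [
    { $match: {} },
    {
      $addFields: {
        created: { $convert: { input: "$created", to: "long", onNull: "" } },
        modified: { $convert: { input: "$modified", to: "long", onNull: "" } },
        activated: { $convert: { input: "$activated", to: "long", onNull: "" } }
      }
    },
    {
      $out: {
        s3: {
          bucket: s3BucketName,
          region: s3BucketRegion,
          filename: s3pathAndFilename,
          format: { name: "json.gz", maxFileSize: "1TiB" }
        }
      }
    }
  ];

  // toArray() forces the function to wait until $out has finished
  // writing to S3, so the whole export runs inside the function's
  // execution window.
  await collection.aggregate(pipeline).toArray();
};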
How can I improve this pipeline or increase the timeout limit?
Thanks in advance.