What I have done in some processes to overcome this problem is to use a "follow-up trigger" that continues the work in batches until completion, using a workflow table and listening to its documents.
The main idea is that I had:
Trigger A, fired based on the business requirement (database, schedule, auth), which processed whatever it could in 80 seconds and then wrote a document to the workflow collection.
Trigger B, which ran over and over until the task completed, based on updates to the workflow collection (passing information through the changeEvent documents).
This way I was able to perform a task of 5-10 minutes.
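To make the pattern concrete, here is a minimal sketch of the batching logic, with plain objects standing in for the real Atlas collections. Names like `runBatch`, `BATCH_SIZE`, and the `workflow` array are illustrative assumptions, not MongoDB APIs; in Atlas, the push to `workflow` would be an insert that fires the follow-up database trigger.

```javascript
// Minimal sketch of the follow-up trigger pattern. Plain objects stand in
// for the real Atlas collections; all names here are illustrative.
const BATCH_SIZE = 100;

// Simulated work queue plus a simulated workflow collection.
function makeState(totalItems) {
  return { remaining: totalItems, workflow: [] };
}

// Trigger A (and each re-invocation of Trigger B): process what fits in
// one invocation, then write a workflow document describing what is left.
function runBatch(state) {
  const processed = Math.min(BATCH_SIZE, state.remaining);
  state.remaining -= processed;
  if (state.remaining > 0) {
    // In Atlas, this insert is what fires Trigger B, which listens on
    // the workflow collection and picks up where this invocation stopped.
    state.workflow.push({ status: "in_progress", remaining: state.remaining });
  } else {
    state.workflow.push({ status: "done" });
  }
  return processed;
}

// Trigger B's effect overall: keep re-invoking until the task completes.
function runToCompletion(state) {
  let invocations = 0;
  do {
    runBatch(state);
    invocations++;
  } while (state.remaining > 0);
  return invocations;
}
```

For example, a 250-item job with an 80-second budget per invocation would complete in three invocations, each one resumed by the workflow document written at the end of the previous one.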
Thanks for sharing how to work around Realm trigger timeouts. I like the idea of a follow-up trigger but I'm struggling to wrap my head around implementing the solution. Would either of you be willing to elaborate further with some details?
For example, I was reading the article below on using preimage triggers to cascade delete throughout my database. In the article, all relevant quest documents are deleted via the map method.
Absolutely! Details on how to integrate are below…
Step 1) Follow the documentation for an EventBridge trigger. For the Realm trigger, I would make sure that the AWS Region is consistent between MongoDB and AWS. The documentation for setting up an Event Bus is pretty clear.
Define Pattern:
• Type: Event Pattern
• Event Matching Pattern: Pre-Defined Pattern by Service
• Service Provider: Service Partners → MongoDB (the event pattern on the right of your screen should display your account ID as follows: { "account": ["Account ID, same as the EventBridge trigger AWS ID"] })
Select Event Bus
• Custom or partner event bus: aws.partner/mongodb.com/stitch.trigger/[your_new_event_bus_id]
• Confirm that "Enable the rule on the selected event bus" is checked
Select Targets
• Lambda function
• yourLambdaFunction (setup described below); you don't need to configure anything else in this section
Lambda Function Setup
I found this example helpful to get my bearings (it should only take 5-10 minutes to learn the basics).
After following the tutorial above, make sure you upload a .zip of your files (node_modules, index.js, package-lock.json, and package.json).
Update index.js to work the way you need it to. If possible, I would build a trigger function in Realm, make sure it works, and then replace it with the EventBridge version; I found it easier to troubleshoot this way. As a baseline, you will need the following to interact with MongoDB:
const MongoClient = require("mongodb").MongoClient;

const MONGODB_URI =
  "URI IS FROM 'CONNECT YOUR APPLICATION' SECTION IN ATLAS, YOU NEED TO UPDATE PASSWORD AND YOUR DATABASE NAME";

// Cache the connection so warm Lambda invocations can reuse it
let cachedDb = null;

async function connectToDatabase() {
  if (cachedDb) {
    return cachedDb;
  }
  const client = await MongoClient.connect(MONGODB_URI);
  const db = client.db("databaseName"); // db() is synchronous, no await needed
  cachedDb = db;
  return db;
}

// This is important if you are working with ObjectIds (BSON.ObjectId() does not work here)
const ObjectId = require("mongodb").ObjectID;
// Note that ObjectIds do not appear to be passed through to Lambda, so convert them here

exports.handler = async (event, context) => {
  // By default, the callback waits until the runtime event loop is empty before
  // freezing the process and returning the results to the caller. Setting this
  // property to false asks Lambda to freeze the process soon after the callback
  // is invoked, even if there are events in the event loop; any remaining events
  // are processed on the next invocation if Lambda reuses the frozen process.
  context.callbackWaitsForEmptyEventLoop = false;

  // Get an instance of our database
  const db = await connectToDatabase();
  const fullDocument = event.detail.fullDocument;

  // Get access to collections the same way
  const myCollection = db.collection("myCollection");

  // Notice that you may need to convert an ID to ObjectId even if it is stored as one in Atlas
  const currentDocument = await myCollection.findOne({ "_id": ObjectId(fullDocument.userId) });
  ...
};
If you integrate the MongoDB driver correctly, you should be able to access all the typical functionality you would expect in a function, whether that is find, insert, update, delete, etc.
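One gotcha worth guarding against: EventBridge wraps the change event under `event.detail`, and delete operations carry no `fullDocument`, so reading it blindly can throw. A hypothetical helper sketch (the name `extractFullDocument` is my own, not part of any SDK):

```javascript
// Hypothetical guard around the EventBridge payload shape: the change
// event lives under event.detail, and delete events have no fullDocument.
function extractFullDocument(event) {
  if (!event || !event.detail || !event.detail.fullDocument) {
    return null;
  }
  return event.detail.fullDocument;
}
```

With this, the handler can bail out early (or branch into delete handling) instead of crashing on `event.detail.fullDocument.userId`.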
One last comment: AWS CloudWatch is your friend here. If you get stuck at any point, I would recommend just using console.log at various points in the Lambda function to see what is going wrong. In CloudWatch → Logs → Log Groups you can find the logs for your Lambda function. This helped me a lot.