Oplog cleared on M0 Plan

Hi,

I have been developing an application locally using a localhost (Docker) version of MongoDB with a replica set so I can use the change streams feature. My Python application (using pymongo) is able to happily watch the stream and store the resume_token. Stopping and starting my application works seamlessly; it resumes from the stored resume_token.
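For context, the watch/resume pattern looks roughly like this (a minimal sketch: the connection string, database/collection names, and the file-based token persistence are simplified placeholders, not my actual code):

```python
# Sketch of the watch/resume loop; names and persistence are placeholders.
import pickle
from pathlib import Path
from pymongo import MongoClient

TOKEN_FILE = Path("resume_token.pkl")

def load_token():
    # None on first run, so watch() starts a fresh stream
    return pickle.loads(TOKEN_FILE.read_bytes()) if TOKEN_FILE.exists() else None

def save_token(token):
    TOKEN_FILE.write_bytes(pickle.dumps(token))

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
collection = client["mydb"]["mycoll"]

with collection.watch(resume_after=load_token()) as stream:
    for change in stream:
        print(change)                    # application-specific processing
        save_token(stream.resume_token)  # persist so a restart can resume here
```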

My next step is to try a hosted version of MongoDB. I have opted for an M0 instance on Atlas while I am still testing. On Friday (17 Oct 24), I was able to connect my application to the Atlas database and it all worked well: I could create documents and receive change events without any issue, including across application restarts.

Today (Monday 21st Oct), when I tried to start my application, it got into a bad state because the resume_token could not be found. When I checked the database, the local.oplog.rs collection was completely empty, even though it had data days ago. My application was off all weekend, so it can't have caused this itself.
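For reference, this is roughly how I checked the oplog (the connection string is a placeholder):

```python
# Inspect local.oplog.rs directly via pymongo.
from pymongo import MongoClient

client = MongoClient("<atlas-connection-string>")
oplog = client["local"]["oplog.rs"]

print(oplog.estimated_document_count())        # was 0 at this point
print(oplog.find_one(sort=[("$natural", 1)]))  # oldest entry; None when empty
```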

So I created a few manual entries to test, and the oplog is once again filling with appropriate entries (only the new ones).

Can anyone explain why my oplog got emptied out? I have spent hours looking for documentation about the limitations of M0 accounts, and although there are differences, nothing says the oplog will be wiped clean. It's a real problem.

If there is a problem using the oplog with an M0 plan, what is the lowest plan that will give me a reliable oplog? (I understand that the size cap will cause it to drop old entries, but I didn't add anything to the database that would have filled it.)

I posted this on Stack Overflow for anyone who wants to follow along. Apologies for the redirect: MongoDB Altas oplogs in M0 plans - Stack Overflow

M0 is a shared-tier cluster, meaning the real MongoDB cluster hosts data for many virtual M0 clusters, and they all share a single size-capped oplog. So even though your application was off and not writing data, other tenants on that cluster could have written enough to push your application's entries out of the oplog.

To avoid this problem the app needs to read the change stream frequently enough that its resume token stays within the oplog window. Otherwise you'll need to upgrade to a dedicated cluster tier (M10 and above), where the oplog only contains your own cluster's writes.
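Either way, it's worth handling the failure defensively. As a sketch (assuming pymongo): when the history needed to resume has been dropped, the server raises an OperationFailure with code 286 (ChangeStreamHistoryLost), and the app can fall back to a fresh stream. The fallback policy here is illustrative; what's right depends on whether your application can tolerate missed events:

```python
# Sketch: resume if possible, otherwise fall back to a fresh stream.
from pymongo.errors import OperationFailure

CHANGE_STREAM_HISTORY_LOST = 286  # server error when the token has aged out

def open_stream(collection, resume_token=None):
    try:
        return collection.watch(resume_after=resume_token)
    except OperationFailure as exc:
        if exc.code == CHANGE_STREAM_HISTORY_LOST:
            # Token fell off the oplog: resume from "now" and accept the gap,
            # or trigger a full re-sync, depending on the app's requirements.
            return collection.watch()
        raise
```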