Hello,
I have scheduled incremental oplog backups, but they fail too often with the errors below. I have tried increasing the WiredTiger cache and the oplog size as well, but it didn't make any difference.
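For context, each incremental run dumps the `ts` range between the last backup point and now. A minimal sketch of how that window filter can be built (the function name and the exact script driving `mongodump` are assumptions, not my literal scheduler):

```python
# Sketch of building the incremental window filter passed to mongodump
# via --query (extended JSON). The helper name is hypothetical; the
# timestamp values are the ones from the failing run logged below.
import json

def build_oplog_query(last_ts, current_ts):
    """Build an extended-JSON ts-range filter for one incremental window.

    last_ts / current_ts are (seconds, increment) pairs, matching the
    BSON Timestamp values that appear in the log lines.
    """
    return json.dumps({
        "ts": {
            "$gt":  {"$timestamp": {"t": last_ts[0],    "i": last_ts[1]}},
            "$lte": {"$timestamp": {"t": current_ts[0], "i": current_ts[1]}},
        }
    })

# The resulting string would be passed roughly as:
#   mongodump -d local -c oplog.rs --query '<this JSON>' -o <backup dir>
print(build_oplog_query((1595148019, 493), (1595149556, 11466)))
```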
2020-07-19T09:06:08.711+0000 I STORAGE [WT RecordStoreThread: local.oplog.rs] WiredTiger record store oplog truncation finished in: 282ms
2020-07-19T09:06:08.726+0000 E QUERY [conn73957] Plan executor error during find command: DEAD, stats: { stage: "COLLSCAN", filter: { $and: [ { ts: { $lte: Timestamp(1595149556, 11466) } }, { ts: { $gt: Timestamp(1595148019, 493) } } ] }, nReturned: 0, executionTimeMillisEstimate: 9010, works: 1222090, advanced: 0, needTime: 1222089, needYield: 0, saveState: 9553, restoreState: 9553, isEOF: 0, invalidates: 0, direction: "forward", docsExamined: 1222088 }
2020-07-19T09:06:08.728+0000 I COMMAND [conn73957] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gt: Timestamp(1595148019, 493), $lte: Timestamp(1595149556, 11466) } }, skip: 0, $readPreference: { mode: "secondaryPreferred" }, $db: "local" } planSummary: COLLSCAN numYields:9553 reslen:522 locks:{ Global: { acquireCount: { r: 19108 }, acquireWaitCount: { r: 129 }, timeAcquiringMicros: { r: 2253882 } }, Database: { acquireCount: { r: 9554 } }, oplog: { acquireCount: { r: 9554 } } } protocol:op_query 11479ms
Also, I have noticed that whenever the oplog-truncation thread ("STORAGE [WT RecordStoreThread: local.oplog.rs] WiredTiger record store oplog truncation finished in") runs just before my scheduled oplog dump, the dump fails.
2020-08-01T04:39:09.679+0000 Failed: error writing data for collection local.oplog.rs to disk: error reading collection: Executor error during find command :: caused by :: errmsg: "CollectionScan died due to position in capped collection being deleted. Last seen record id: RecordId(6855387362038385666)"
Note: the timestamps differ because I picked these logs from different dates, but the error is the same every time.
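As I understand the error, the capped-collection scan's starting position was truncated out from under it: the oldest entry still in the oplog is newer than the start of the dump window. A pre-flight check for that condition might look like this (pure sketch; the helper name is hypothetical, and `oldest_oplog_ts` would come from something like `db.oplog.rs.find().sort({$natural: 1}).limit(1)`):

```python
# Pre-flight check: is the incremental window still fully covered by the
# oplog? BSON Timestamps order lexicographically as (seconds, increment)
# pairs, which Python tuple comparison models directly.

def window_still_in_oplog(oldest_oplog_ts, last_backup_ts):
    """Return True if the oplog still contains the backup start position.

    Both arguments are (seconds, increment) tuples. If the oldest entry
    remaining in the oplog is newer than the last backed-up timestamp,
    truncation has already discarded part of the window, and a scan
    positioned there will die with the error shown above.
    """
    return oldest_oplog_ts <= last_backup_ts

# Window intact: oldest oplog entry predates the last backup point.
print(window_still_in_oplog((1595148000, 1), (1595148019, 493)))   # True
# Window lost: truncation has already moved past the last backup point.
print(window_still_in_oplog((1595149000, 1), (1595148019, 493)))   # False
```

If the check returns False, the only safe options are a fresh full backup or dumping more frequently than the oplog rolls over.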
Could anyone please help?