Hi,
We recently upgraded to the latest MongoDB, and since the upgrade we have been facing an issue with file system utilization. The MongoDB file WiredTigerLAS.wt abruptly uses more than 70% of the disk space, so disk utilization sometimes reaches 100% and the database goes down automatically. Please help us understand this issue and provide a solution.
MongoDB Version : 4.2.2
No. of applications pointing to the MongoDB instance : 2
File Name: WiredTigerLAS.wt
Note for readers coming here from a web search etc.: The typical symptom is that you suddenly notice the WiredTigerLAS.wt file growing rapidly. As long as the WiredTiger cache is filled to its maximum, the file will grow at roughly the oplog GB/hr rate at the time. Disk utilization will sit at 100%, since WiredTigerLAS.wt reads and writes compete for the same disk I/O as the normal database files. WiredTigerLAS.wt never shrinks, not even after a restart. The only way to get rid of it is to delete all the files and restart the node for an initial sync (which you probably can’t do until the heavy application load stops).
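If you want to confirm you are in this situation, one way is to watch the file’s growth rate directly. Below is a minimal sketch, assuming a Linux host and a default-style dbPath (adjust the path for your deployment):

```python
# Sketch: poll the size of WiredTigerLAS.wt and report its growth rate,
# so you can compare it against your oplog GB/hr rate.
import os
import time

DBPATH = "/var/lib/mongodb"  # assumed dbPath -- adjust for your deployment
LAS_FILE = os.path.join(DBPATH, "WiredTigerLAS.wt")

def watch_las(interval_s=60):
    prev = os.stat(LAS_FILE).st_size
    while True:
        time.sleep(interval_s)
        cur = os.stat(LAS_FILE).st_size
        rate_gb_per_hr = (cur - prev) / interval_s * 3600 / 1e9
        print(f"WiredTigerLAS.wt: {cur / 1e9:.2f} GB "
              f"(growing at {rate_gb_per_hr:.2f} GB/hr)")
        prev = cur

watch_las()
```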
Don’t forget: the initial cause is not primarily a software issue - the initial cause is that the application load has overwhelmed the replica set node’s capacity to write all the document updates to disk. The symptom manifests on the primary, but it may be lag on a secondary that is driving the issue.
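Because secondary lag can be the hidden driver, it is worth checking explicitly. Here is a rough sketch using pymongo (the connection string is an assumption; run it against any member of the set):

```python
# Sketch: compute how far each secondary trails the primary,
# using optimeDate from replSetGetStatus.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
status = client.admin.command("replSetGetStatus")

# max() raises on an empty sequence, i.e. if there is currently no primary.
primary_optime = max(m["optimeDate"] for m in status["members"]
                     if m["stateStr"] == "PRIMARY")

for m in status["members"]:
    if m["stateStr"] == "SECONDARY":
        lag = (primary_optime - m["optimeDate"]).total_seconds()
        print(f"{m['name']}: {lag:.0f}s behind the primary")
```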
What sort of deployment do you have (standalone, replica set, or sharded cluster)? If you have a replica set or sharded cluster, can you describe the roles of your instances in terms of Primary, Secondary, and Arbiter, and also confirm whether you are seeing the LAS growth on the Primary, Secondaries, or both?
WiredTigerLAS.wt is an overflow buffer for data that does not fit in the WiredTiger cache but cannot be persisted to the data files yet (analogous to “swap” if you run out of system memory). This file should be removed by mongod on restart, as it is not useful without the context of the in-memory WiredTiger cache, which is freed when mongod is restarted.
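You can watch this pressure building via serverStatus. A small sketch, assuming a local mongod and pymongo (the WiredTiger stat names below are standard but may vary slightly between versions):

```python
# Sketch: inspect WiredTiger cache pressure. When the cache stays pinned
# near its maximum and dirty pages pile up, WiredTiger starts spilling
# to WiredTigerLAS.wt.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
cache = client.admin.command("serverStatus")["wiredTiger"]["cache"]

max_bytes   = cache["maximum bytes configured"]
in_cache    = cache["bytes currently in the cache"]
dirty_bytes = cache["tracked dirty bytes in the cache"]

print(f"cache used:  {in_cache / max_bytes:.0%} of {max_bytes / 1e9:.1f} GB")
print(f"cache dirty: {dirty_bytes / max_bytes:.0%}")
```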
If you are seeing unbounded growth of WiredTigerLAS.wt, likely causes are a deployment that is severely underprovisioned for the current workload, a replica set configuration with significant lag, or a replica set deployment including an arbiter with a secondary unavailable.
The last scenario is highlighted in the documentation (Read Concern “majority” and Three-Member PSA) and as a startup warning in recent versions of MongoDB (3.6.10+, 4.0.5+, 4.2.0+).
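If you suspect the PSA scenario, a quick health check along these lines can confirm it (again a sketch with an assumed connection string):

```python
# Sketch: detect a PSA topology with an unavailable data-bearing member,
# the scenario in which the majority commit point stalls and cache/LAS
# pressure builds.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
members = client.admin.command("replSetGetStatus")["members"]

has_arbiter = any(m["stateStr"] == "ARBITER" for m in members)
unhealthy = [m["name"] for m in members
             if m["stateStr"] not in ("PRIMARY", "SECONDARY", "ARBITER")]

if has_arbiter and unhealthy:
    print("PSA topology with unavailable member(s):", unhealthy)
    print("Majority commit point cannot advance; expect cache/LAS pressure.")
```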
The maxCacheOverflowFileSizeGB configuration option mentioned by @chris will prevent your cache overflow from growing unbounded, but is not a fix for the underlying problem.
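For reference, maxCacheOverflowFileSizeGB can be set at runtime on MongoDB 4.2.1+ (or 4.0.12+). A sketch, with the 100 GB cap as an arbitrary example value:

```python
# Sketch: cap the cache overflow file at runtime. A value of 0 means
# "no limit". If the cap is reached, operations that need the overflow
# file will fail rather than fill the disk -- a safety valve, not a fix.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
client.admin.command({"setParameter": 1, "maxCacheOverflowFileSizeGB": 100})
```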
Please provide additional details on your deployment so we can try to identify the issue.
I am using a standalone setup, but the single MongoDB instance is used by two homogeneous applications pointing to different databases.
We previously used MongoDB 3.4.17, and I never saw this kind of issue with it.
So are you saying that I am overloading MongoDB?
If so, performance could degrade, but why does the cache size keep increasing? That is what I don’t understand.
What is the purpose of WiredTigerLAS.wt in MongoDB?
I cannot accept that disk usage will keep increasing until the database goes down automatically.
If that is the case, how can I calculate the load that MongoDB can handle so that I avoid this issue? Please comment. Thanks.
Hey there,
I have the exact same problem with a single mongo instance.
I’m doing many bulkWrites, one after the other and in parallel, and in some cases the file size increases until the entire disk is full and Mongo crashes.
I think I, and many others, need a more definitive answer on how to tell how much write load will cause this and how to limit the load on Mongo; it’s really hard to just “guess” the numbers here. I also find it odd that Mongo can’t handle the load more gracefully or clear this file after the load has finished.
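In the meantime, this is roughly how I am throttling my writes: sequential batches with a pause whenever the dirty portion of the cache climbs too high. It’s only a sketch (the collection name and the 20% threshold are my own guesses; 20% dirty is roughly where WiredTiger’s aggressive eviction kicks in by default):

```python
# Sketch: apply backpressure by breaking the workload into modest
# bulk_write batches, running them sequentially instead of in parallel,
# and pausing while the dirty cache is high.
import time
from pymongo import MongoClient, InsertOne

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
coll = client.mydb.mycoll                          # assumed database/collection

def dirty_ratio():
    cache = client.admin.command("serverStatus")["wiredTiger"]["cache"]
    return (cache["tracked dirty bytes in the cache"]
            / cache["maximum bytes configured"])

def throttled_bulk_insert(docs, batch_size=1000):
    for i in range(0, len(docs), batch_size):
        # Back off while WiredTiger is struggling to evict dirty pages.
        while dirty_ratio() > 0.20:
            time.sleep(1)
        coll.bulk_write([InsertOne(d) for d in docs[i:i + batch_size]],
                        ordered=False)

throttled_bulk_insert([{"n": i} for i in range(100_000)])
```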