MongoDB 4.2.19: Extremely high File Descriptor counts (70k+)

Hello,

We're managing a small MongoDB 4.2 replica set (PSA) with about 54 GB of on-disk WiredTiger data.

Today I was upgrading the replica from <4.0 and MMAPv1 (meaning I had to delete its whole datadir and let it resync from the primary from scratch).

Everything worked fine after I re-added it to the replica set and it transitioned into the STARTUP2 state, meaning it was fetching data from the primary.

What I didn't expect was for the primary to crash after exceeding the default maximum number of open files (64k).

The reason it reached such a high number is probably that our client has a large number of small collections inside the database, resulting in about 74k inodes taken up by the datadir.
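In case it helps, here's roughly how the file count adds up (a minimal pymongo sketch; the connection string is a placeholder, and it assumes the default WiredTiger layout of one data file per collection plus one per index):

```python
# Rough sketch: estimate how many WiredTiger data files the deployment needs,
# assuming the default layout of one file per collection and one per index.
# The connection string is a placeholder.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

collections = 0
indexes = 0
for db_name in client.list_database_names():
    db = client[db_name]
    for coll_name in db.list_collection_names():
        collections += 1
        indexes += len(db[coll_name].index_information())

print(f"collections: {collections}, indexes: {indexes}")
print(f"approx. data files: {collections + indexes}")
```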

It seems the server loops over all the files, incrementally sending them over, without ever closing them again.

Even now, several hours after the incident, the process is holding ~73.5k file descriptors open (we had to increase the limit to allow the replica to start up).
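For reference, this is roughly how I'm checking the FD count (a Linux-only sketch; it assumes a single mongod process on the host and that you have permission to read its /proc entry):

```python
# Rough sketch: count the file descriptors currently held by mongod by
# listing /proc/<pid>/fd. Linux-only; the pgrep lookup assumes a single
# mongod process and usually requires running as root or the mongod user.
import os
import subprocess

pid = subprocess.check_output(["pgrep", "-x", "mongod"]).split()[0].decode()
fd_count = len(os.listdir(f"/proc/{pid}/fd"))
print(f"mongod (pid {pid}) holds {fd_count} open file descriptors")
```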

Is this the intended behavior? The only "solution" to this problem I was able to find online is "increase the max FDs limit", which isn't really a solution so much as a hot patch…
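For completeness, this is how I'm double-checking what limit the running mongod actually got after the change, since the value can differ from the shell's ulimit when the service is started by systemd (same kind of sketch and PID lookup as above):

```python
# Rough sketch: print the "Max open files" soft/hard limits of the running
# mongod from /proc/<pid>/limits. Linux-only; assumes a single mongod process.
import subprocess

pid = subprocess.check_output(["pgrep", "-x", "mongod"]).split()[0].decode()
with open(f"/proc/{pid}/limits") as f:
    for line in f:
        if line.startswith("Max open files"):
            print(line.rstrip())
```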

Hi @Lukas_Pavljuk, welcome to the community!

The reason it reached such a high number is probably that our client has a large number of small collections inside the database, resulting in about 74k inodes taken up by the datadir.

I think if the deployment requires that many files to be open, then it needs that many files to be open :slight_smile:

The other solution is to artificially limit the number of collections, which is probably not the best either.

Another possibility is to split the replica set into 2-3 smaller ones (via sharding or just separate deployments) if it runs into issues due to the number of open files.

Having said that, are you seeing any issues (performance or otherwise) due to the large number of open files?

Best regards
Kevin
