We are running MongoDB 7.0 in a sharded environment with a properly functioning shard key. When the balancer splits chunks for migration, does it consider the number of documents in a chunk rather than the chunk's data size? When the balancer requests a split of a chunk in a merged range, the resulting chunks are very large, and migrating them consumes most of the oplog on the shards involved, so the cluster never reaches a balanced state. The problem persists as chunks keep migrating, even though moving a single small chunk should be enough. This was never an issue when balancing was based on chunk count. How can we resolve this?
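For context, the only knobs we have found for limiting how large the split chunks get are the chunk-size settings, roughly as below (a sketch; the `mydb.mycoll` namespace and the 64 MB value are placeholders, not our real configuration):

```js
// Run in mongosh against a mongos.
// Check the cluster-wide default chunk size in MB (7.0 defaults to 128 MB):
db.getSiblingDB("config").settings.find({ _id: "chunksize" })

// Lower the global default, e.g. to 64 MB, so the split points produced
// for migrations stay smaller:
db.getSiblingDB("config").settings.updateOne(
  { _id: "chunksize" },
  { $set: { value: 64 } },
  { upsert: true }
)

// Per-collection override (available since 6.0):
db.adminCommand({
  configureCollectionBalancing: "mydb.mycoll",
  chunkSize: 64  // MB
})
```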
Additionally, does the balancer include the size of not-yet-deleted orphaned documents when it computes the data-size difference that triggers chunk migration? It looks as though the balancer migrates chunks based on sizes inflated by orphaned documents, and once those documents are eventually deleted the size difference shifts again and the balancer reactivates, producing an endless migration loop.
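For reference, the orphan-cleanup backlog can be inspected roughly like this (a sketch; `mydb.mycoll` is a placeholder namespace):

```js
// Via mongos: per-shard owned vs. orphaned document counts
// ($shardedDataDistribution requires 6.0.3+).
db.getSiblingDB("admin").aggregate([
  { $shardedDataDistribution: {} },
  { $match: { ns: "mydb.mycoll" } }
])

// On each shard's primary: pending range-deletion tasks for the namespace.
db.getSiblingDB("config").rangeDeletions.countDocuments({ nss: "mydb.mycoll" })

// Also on a shard primary: wait until pending deletions for the namespace
// have finished before comparing per-shard data sizes.
db.adminCommand({ cleanupOrphaned: "mydb.mycoll" })
```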
In previous versions, we could set maxSize to keep data distribution appropriate across shards with different hardware specifications. Since maxSize is no longer supported in MongoDB 7.0, how can we achieve a sensible data distribution across shards with differing hardware in the current version? The official documentation suggests WiredTiger compaction, but that appears to reclaim disk space within a single shard rather than affect how data is distributed across shards. What are the recommended practices for this scenario?
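The only built-in mechanism we can see for biasing placement toward the bigger shards is zone sharding, along the lines below (a sketch; the shard names, zone names, `mydb.mycoll`, and the `userId` ranges are made up for illustration). But zones partition the shard-key space rather than cap a shard's size, so it is unclear whether this is the intended replacement for maxSize:

```js
// Run in mongosh via mongos.
// Tag shards by hardware class:
sh.addShardToZone("shardLarge", "big")
sh.addShardToZone("shardSmall", "small")

// Pin a larger slice of the shard-key space to the bigger shard:
sh.updateZoneKeyRange("mydb.mycoll", { userId: MinKey }, { userId: 7000 }, "big")
sh.updateZoneKeyRange("mydb.mycoll", { userId: 7000 }, { userId: MaxKey }, "small")
```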