Why is disk usage too high when migrating from 4.2 to 4.4?

Hey,

So… I don’t know where to begin, because there is a lot to say here and this raises quite a few new questions.

To be honest, I don’t have a clear explanation for the size increase you noticed, but several things could each explain part of it:

  • Did you use the same MongoDB versions in the former RS and in the sharded cluster?
  • Did you use the same configuration? Not using a different WiredTiger compression algorithm by any chance?
  • Are these stats just about a single collection?
  • Does that collection contain exactly the same data?
  • How did you gather these stats? (One consistent way to compare them is sketched just below.)
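
For the last point, here’s a minimal mongosh sketch of how I would gather comparable numbers on the RS and on each shard (`mydb` / `mycoll` are placeholder names, adjust to your real namespace). `collStats` reports the uncompressed data size, the size on disk and the compressor the collection was created with, which is exactly the breakdown you need to see where the difference comes from:

```js
// Run this directly against the RS primary and against each shard's primary.
// "mydb" / "mycoll" are placeholders: use your real database and collection.
const stats = db.getSiblingDB("mydb").runCommand({ collStats: "mycoll" });

printjson({
  ns: stats.ns,
  count: stats.count,              // number of documents
  dataSize: stats.size,            // uncompressed data size, in bytes
  storageSize: stats.storageSize   // size on disk, after WiredTiger compression
});

// Which block compressor this collection was created with:
const m = stats.wiredTiger.creationString.match(/block_compressor=\w+/);
print(m ? m[0] : "block_compressor not found");
```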

Also, one thing I don’t understand in what you are saying:

One collection in a RS == one collection in a sharded cluster. If the collection isn’t sharded, it lives entirely in its associated primary shard. If it is sharded, it is distributed across the different shards according to the shard key.
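
As a quick sanity check, here’s a small mongosh sketch (same hypothetical `mydb.mycoll` namespace as above) to see where a collection’s data actually lives:

```js
// Run against a mongos.
// Prints the shard key and how many documents / how much data each shard
// holds (it errors out if the collection isn't sharded).
db.getSiblingDB("mydb").mycoll.getShardDistribution();

// The primary shard of the database, i.e. where its unsharded collections live:
printjson(db.getSiblingDB("config").databases.findOne({ _id: "mydb" }));
```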

Another thing that might not be obvious: sharded clusters come with an overhead. They require more computing power and more machines. If you only have 3 machines available, it’s more efficient to run a RS on them than to deploy a sharded cluster and squeeze multiple nodes onto the same machines. It’s also an anti-pattern, because sharded clusters are designed to avoid any Single Point of Failure, and here every single one of your 3 machines is a SPOF that will take the entire cluster down if it fails. This architecture is OK for a small dev environment, but definitely not ready for a production environment.

This doesn’t make sense to me. Why would an index give you more storage capacity? It’s the opposite. An index costs RAM & storage.
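
You can see that cost directly in the collection stats; each index has its own on-disk footprint on top of the collection data (still using the hypothetical `mydb.mycoll` namespace):

```js
// Each index takes disk space in addition to the collection data, and its
// hot portion also competes for the WiredTiger cache (RAM).
const coll = db.getSiblingDB("mydb").mycoll;
printjson(coll.stats().indexSizes);   // per-index size on disk, in bytes
print(coll.totalIndexSize());         // total bytes used on disk by all indexes
```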

Also choosing the right shard key is SUPER important.


A hashed index on the _id is the VERY LAST trick in the book you can use to create a shard key. 99% of the time there is a better shard key that will improve read performance dramatically while still distributing the write operations evenly across the different shards.
A good shard key should optimize your read operations as much as possible by limiting each read to a single shard, while spreading the writes evenly across all the shards. The shard key must not increase (or decrease) monotonically, and the rule “data that is accessed together should be stored together” applies more than ever: ideally, data that you query “together” should sit in the same chunk as much as possible. Using a hashed _id does exactly the opposite: it basically spreads the documents randomly across the cluster’s chunks, and consequently across the shards. Unless your use case falls in the 1% of special cases, most of your queries will be scatter-gather queries, and those aren’t efficient at all.
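
To make that concrete, here’s a rough sketch of the two approaches (the `orders` collection and the `customerId` / `orderDate` fields are made-up examples, not something taken from your deployment). Pick one, a collection can only be sharded once; they are shown side by side for comparison:

```js
// Assumes sh.enableSharding("mydb") has already been run.

// Last-resort option: hashed _id. Writes are spread evenly, but any query
// that doesn't filter on _id becomes a scatter-gather across all shards.
sh.shardCollection("mydb.orders", { _id: "hashed" });

// Usually better: a compound key that follows the access pattern. Queries like
// "all orders of customer X in a date range" can then be routed to a single
// shard, while different customers spread the writes across the cluster.
sh.shardCollection("mydb.orders", { customerId: 1, orderDate: 1 });
```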

Also, I see you are using Arbiter nodes and they are usually not recommended in production.
MongoDB Atlas doesn’t use them at all for instance… Anywhere.

There are already a few topics in the community forum covering why arbiters are generally a bad idea.

Regarding the initial question about the sizes… it doesn’t make sense to me, unless the questions I asked above help you find something suspicious.

Don’t forget that a RS needs to be on 3 different machines to ensure HA, and that sharding == scaling, but scaling shouldn’t cost you your HA. Each mongod should be on its own dedicated machine with the right amount of resources to work correctly.

If your 3-node RS is already at about 1 TB of data, you probably have about 150 GB of RAM on each node. Splitting your dataset in 2 => 2 × 500 GB and squeezing 2 mongod processes onto the same node (2 × 500 GB with still 150 GB of RAM) is exactly the same problem, but now with an extra layer of complexity and overhead. A sharded cluster needs a bunch of admin collections to work, but they shouldn’t be big enough to be noticeable, let alone create the big difference in sizes that you noticed. So I guess there is something else going on here.
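
If you want to rule out the sharding metadata entirely, you can check how much the config database actually weighs; it’s usually a few MB, nowhere near enough to explain the gap you’re seeing (mongosh, run against a mongos):

```js
// Size of the sharded cluster's internal metadata (config database), in bytes.
const cfg = db.getSiblingDB("config").stats();
printjson({
  dataSize: cfg.dataSize,       // uncompressed metadata size
  storageSize: cfg.storageSize, // metadata size on disk
  indexSize: cfg.indexSize      // metadata indexes on disk
});
```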

I hope this helps a bit :smile:.
Cheers,
Maxime.
