Pushing copy of App fails when including the sync directory: push failed: failed to deploy app: error initializing stores: failed to initialize stores: (AtlasError) cannot create a new collection -- already using 500 collections of 500

I am trying to automatically create a copy of an existing App in a CI/CD pipeline, following the instructions described here: https://www.mongodb.com/docs/atlas/app-services/apps/copy/

I get this error when trying to push the updated/copied configuration to the new app:

push failed: failed to deploy app: error initializing stores: failed to initialize stores: (AtlasError) cannot create a new collection -- already using 500 collections of 500

This seems incorrect, because I only have 13 collections in my database/cluster (it is a shared cluster, but I assume the 500-collection limit applies per user, right?).

Furthermore, if, in addition to the root_config.json as mentioned in the article, I also remove the sync directory from the app I want to copy before copying its contents, the push does succeed. Afterwards I can enable sync without problems from the UI, which results in the exact same configuration I was trying to push in the first place… but doing this through the UI does not meet my CI/CD needs.

Does anyone have any idea what could be going wrong here, or suggestions on how to fix this issue?

Note: Pushing a copy of the app as described in the article, including its sync directory, did succeed the first few times. It was only after a while that I started to get this error.

Any suggestions much appreciated!

Hi, if you provide your application id we can take a look behind the scenes, but Device Sync creates a lot of metadata collections (on the order of 10 + the number of collections you are syncing). These collections are hidden from the Data Explorer / Compass, but you can see them if you use the shell; they will be prefixed by __realm_sync. I tell you this just so you can confirm they are there; removing them will break sync for the application.
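For example, a quick way to see them from mongosh (a sketch using only standard shell helpers; __realm_sync_YOUR_APP_ID is a placeholder for one of your own app ids):

// list the hidden sync metadata databases
db.adminCommand({ listDatabases: 1 }).databases
  .map(d => d.name)
  .filter(n => n.startsWith('__realm_sync'))

// inspect the metadata collections of one of them
db.getSiblingDB('__realm_sync_YOUR_APP_ID').getCollectionNames()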

Therefore, I suspect that you have several dozen (or so) applications that have enabled sync on this shared cluster. The short answer is that you can/should move to a dedicated tier cluster, which has no limits on data stored, number of collections, or operations per second.

Best,
Tyler

Hi Tyler,

Thank you for your quick reply.
The id of the app I'm trying to copy is: 657cb67429738cbcd4dc60ee

I have 4 apps running on that cluster/project (all copies of the same app, successfully copied as described above). I used the mongosh listDatabases command and see 17 databases starting with '__realm_sync', which would lead me to believe there should be at most 17 * 13 = 221 collections there, if I understand you correctly.
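For what it's worth, the actual usage can be counted from mongosh with standard shell helpers, e.g. (a sketch):

// sum the collection counts over every __realm_sync_* database
let total = 0;
db.adminCommand({ listDatabases: 1 }).databases
  .map(d => d.name)
  .filter(n => n.startsWith('__realm_sync'))
  .forEach(n => {
    const c = db.getSiblingDB(n).getCollectionNames().length;
    print(`${n}: ${c} collections`);
    total += c;
  });
print(`total: ${total}`);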

I feel like something goes wrong specifically when I use the CLI to push a copy of an app. As mentioned, I can create a copy without sync enabled through the CLI, and then manually enable sync through the UI without getting the error, giving me another reason to believe the 500 limit has not been reached yet.

I believe I was able to create about 20 copies of the initial app through the CLI before the error first appeared. I have now deleted most of those copies and only have 4 still running, so I don't understand why I would not be able to create at least another 16. I used the appservices app delete command to delete the copies, and I can confirm from the UI that they are no longer running. But maybe I need to do something else to fully clean up the effect they have had on the cluster?

All the copies I made differ only in their root_config.json (they use(d) the same cluster/database/schemas), so I initially reasoned the copies would not increase the number of collections at all, because they share the same collections.

I'm planning to move to a dedicated cluster in a few weeks, but I feel the current size/stage of development of the app should not require that yet, and I would prefer to save on the associated costs for now.

Thanks in advance for your time / any other suggestions!

Best,
Julius

Hi, it does seem possible that some part of your workflow has led to metadata being left around (though we should be cleaning it up). Can you post the result of listing all the databases in your cluster? I can provide you with the ones you can likely drop to clear up some capacity for collections.

Hey Tyler, thanks again for your quick response.

Below is a list of all my databases in the cluster;
The "fiber-dev" database is the only one I knowingly use in my config files / the one with data in it that I'd like to keep for development purposes.

If you could provide the reasoning behind deciding which ones I can delete, that'd be great; then I'll know how to solve this problem in the future should it pop up again…

Thanks in advance!

db.adminCommand(
...    {
...      listDatabases: 1
...    }
... )
{
  databases: [
    {
      name: '__realm_sync_64f43c7dc45369bc2280cbaa',
      sizeOnDisk: Long('7938048'),
      empty: false
    },
    {
      name: '__realm_sync_657c9f1ecb5abd78027cad2f',
      sizeOnDisk: Long('6922240'),
      empty: false
    },
    {
      name: '__realm_sync_657cb27b29738cbcd4d573de',
      sizeOnDisk: Long('3948544'),
      empty: false
    },
    {
      name: '__realm_sync_657cb4236e644de4be53d077',
      sizeOnDisk: Long('3928064'),
      empty: false
    },
    {
      name: '__realm_sync_657cb67429738cbcd4dc60ee',
      sizeOnDisk: Long('3756032'),
      empty: false
    },
    {
      name: '__realm_sync_657cb6efcb5abd7802a8acce',
      sizeOnDisk: Long('4747264'),
      empty: false
    },
    {
      name: '__realm_sync_657cb72420f53848adc95f37',
      sizeOnDisk: Long('4567040'),
      empty: false
    },
    {
      name: '__realm_sync_657da0cb6e644de4becaa677',
      sizeOnDisk: Long('4554752'),
      empty: false
    },
    {
      name: '__realm_sync_657da119cb5abd78021befab',
      sizeOnDisk: Long('6115328'),
      empty: false
    },
    {
      name: '__realm_sync_657da1f6f09cceef34d5aa95',
      sizeOnDisk: Long('5070848'),
      empty: false
    },
    {
      name: '__realm_sync_657da35429738cbcd4532a80',
      sizeOnDisk: Long('5840896'),
      empty: false
    },
    {
      name: '__realm_sync_657da3accb5abd78022170ea',
      sizeOnDisk: Long('3985408'),
      empty: false
    },
    {
      name: '__realm_sync_657ec5ecf09cceef3460ec07',
      sizeOnDisk: Long('3788800'),
      empty: false
    },
    {
      name: '__realm_sync_657eca44ba243ca50ab83d12',
      sizeOnDisk: Long('5677056'),
      empty: false
    },
    {
      name: '__realm_sync_657ecd62afad6ecd1122012d',
      sizeOnDisk: Long('139264'),
      empty: false
    },
    {
      name: '__realm_sync_657ed52dab43cf9b04d30aad',
      sizeOnDisk: Long('40960'),
      empty: false
    },
    {
      name: '__realm_sync_657ed9e5cb5abd7802ee4885',
      sizeOnDisk: Long('40960'),
      empty: false
    },
    { name: 'fiber-dev', sizeOnDisk: Long('3170304'), empty: false },
    { name: 'admin', sizeOnDisk: Long('344064'), empty: false },
    { name: 'local', sizeOnDisk: Long('1132654592'), empty: false }
  ],
  totalSize: Long('1207230464'),
  totalSizeMb: Long('1151'),
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1702901336, i: 26 }),
    signature: {
      hash: Binary.createFromBase64('8MMel1IIbzlf6+5lcvA0Ss18rDY=', 0),
      keyId: Long('7263176850183028738')
    }
  },
  operationTime: Timestamp({ t: 1702901336, i: 26 })
}

So each sync database is suffixed with the application_id (the APP_ID in the URL https://realm.mongodb.com/groups/GROUP_ID/apps/APP_ID/dashboard). Looking at your group, the following applications exist:

658035b3846edb77f4d77e6b
657f5375cb5abd7802819b03
657cb67429738cbcd4dc60ee
657cb6efcb5abd7802a8acce
64f43c7dc45369bc2280cbaa

Therefore, you should be able to delete the __realm_sync_{app_id} databases for applications not in this list.
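In mongosh, that check could look like this (a sketch; the ids in liveApps are the live applications listed above):

// sync databases whose app id is not live are candidates for deletion
const liveApps = new Set([
  '658035b3846edb77f4d77e6b',
  '657f5375cb5abd7802819b03',
  '657cb67429738cbcd4dc60ee',
  '657cb6efcb5abd7802a8acce',
  '64f43c7dc45369bc2280cbaa',
]);
db.adminCommand({ listDatabases: 1 }).databases
  .map(d => d.name)
  .filter(n => n.startsWith('__realm_sync_'))
  .filter(n => !liveApps.has(n.slice('__realm_sync_'.length)));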

On my end, I will take a look at why these were not cleaned up for these applications (since we should be doing that on termination of sync and/or deletion of the application).

Best,
Tyler


Thank you Tyler, that solved it!

Quick overview for future reference:

Problem

Deleting Realm apps (through the CLI) does not always seem to remove the corresponding __realm_sync_* metadata databases when using Realm/Atlas Sync. This may unexpectedly lead you to reach the 500-collection limit on a shared cluster with only a few apps actually running.

Solution

  1. Identify the IDs of the applications that are currently running by inspecting the _id column of the output of:
$ appservices app list
  2. Connect to your cluster:
$ mongosh "mongodb+srv://YOUR_CLUSTER_NAME.YOUR_HASH.mongodb.net/" --apiVersion YOUR_API_VERSION --username YOUR_USERNAME
  3. Identify the database names that are suffixed with the IDs of non-existing apps:
db.adminCommand( { listDatabases: 1 } )
  4. Delete each database that has no associated running app (for each of those databases, do):
use DATABASE_NAME
db.dropDatabase()

This last step could probably be done in a much cleaner/safer way by iterating over the list of identified database names, as in the sketch below. But I'm new to the mongosh tool, so I did it step by step manually.
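For future reference, a scripted version of steps 3 and 4 might look like the following (a sketch, not battle-tested; fill in liveAppIds from the _id column of appservices app list first, since dropping the sync database of a running app will break its sync):

// ids of the apps that are still running (placeholders, fill in your own)
const liveAppIds = new Set([
  // '657cb67429738cbcd4dc60ee',
]);

// drop every __realm_sync_* database whose app id is no longer live
db.adminCommand({ listDatabases: 1 }).databases
  .map(d => d.name)
  .filter(n => n.startsWith('__realm_sync_'))
  .filter(n => !liveAppIds.has(n.slice('__realm_sync_'.length)))
  .forEach(n => {
    print(`dropping ${n}`);
    db.getSiblingDB(n).dropDatabase();
  });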
