Hi, I have created a Mongoose model as below and set the recordId field to "unique: true" to prevent duplicate entries. But when multiple API calls arrive within milliseconds of each other, the collection still allows duplicate entries. Can anyone please tell me how to handle this?
With the Mongoose ODM, when you create a field in a schema with the property unique: true, it means that a unique constraint is created on that field. In fact, Mongoose creates such an index in the database for that collection.
For example, the following code defines such a schema and inserts one document. I can verify the collection, the document, and the unique index from mongosh or Compass. In the shell, db.collection.getIndexes() prints the newly created index details.
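A minimal sketch of such a setup (the model name, field name, and connection string are my assumptions, not the original poster's code):

```javascript
// Sketch only: assumes a local MongoDB server and the `mongoose` package.
const mongoose = require('mongoose');

// `unique: true` tells Mongoose to build a unique index (name_1) on `name`.
const recordSchema = new mongoose.Schema({
  name: { type: String, unique: true },
});
const Record = mongoose.model('Record', recordSchema);

async function main() {
  await mongoose.connect('mongodb://localhost:27017/test');
  await Record.init();                   // waits for index builds to finish
  await Record.create({ name: 'john' }); // a second run throws E11000
  await mongoose.disconnect();
}

main().catch(console.error);
```

Note the Record.init() call: Mongoose builds indexes in the background after connecting, so awaiting init() before the first insert avoids a window where duplicates can slip in.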
When I run the same program again, or try to insert another document with the same name: 'john', there is an error: MongoError: E11000 duplicate key error collection: test.records index: name_1 dup key: { name: "john" }.
Please include the version of MongoDB and Mongoose you are working with.
This may be because the index is created through Mongoose with the background: true option. With this option, the index may not be built immediately, and in the meantime duplicate entries can land on the indexed field.
One option is to create the index from mongosh or Compass up front. You can still keep the Mongoose definition as it is. This will make the duplicate key error trigger immediately.
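Creating the index from mongosh could look like this (the collection and field names are assumed from the thread):

```javascript
// In mongosh, against the relevant database:
db.records.createIndex({ recordId: 1 }, { unique: true });

// Verify the index exists:
db.records.getIndexes();
```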
A quick query on my index data showed that the index was created with the background: true option [*]:
Hi, @Prasad_Saya Thank You for this solution.
Actually, one of the unique index fields was deleted from Compass by mistake; that is why the above error was occurring.
Hi @Prasad_Saya ,
I am having a similar kind of issue. My operation depends on the last value generated in MongoDB: I generate keys in incremental order, so I need to fetch the last generated key and then increment it. The issue is that when I hit the API with around 50 near-simultaneous requests, the keys are not generated uniquely, because the requests all read the same last value. Is there any lock mechanism in MongoDB to generate only one key at a time? Please suggest.
The issue is that 2 or more processes/tasks/threads read the same value, increment it to the same result, and store that same value back. This is a typical problem in a badly designed distributed system; it is not the way you do things in a distributed system. You need to read and increment atomically, in an ACID way.

It is not clear how you fetch the last generated key, but if you do it with a sort on your collection, then I do not know how you could make it safe. Maybe with a transaction, or maybe by repeating an upsert until you no longer get a duplicate key error.

One way to do it without a sort is to keep the value in its own collection and use findOneAndUpdate to increment it in an ACID way. The sort-then-update approach is a big NO NO, as it involves 2 or more server requests that can interleave.

Why don't you use an $oid, UUID, or GUID?
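A sketch of that counter-collection approach with the Node.js driver (the database, collection, and counter names are my assumptions; the return shape of findOneAndUpdate shown here is the driver v6 behavior, older drivers wrap it in { value }):

```javascript
// Sketch only: assumes the `mongodb` Node.js driver and a running server.
const { MongoClient } = require('mongodb');

async function nextSeq(client) {
  const counters = client.db('test').collection('counters');
  // $inc inside findOneAndUpdate runs atomically on the server, so
  // concurrent callers each receive a distinct value: there is no
  // read-then-write race as with a separate find + update.
  const doc = await counters.findOneAndUpdate(
    { _id: 'recordKey' },            // one well-known counter document
    { $inc: { seq: 1 } },
    { upsert: true, returnDocument: 'after' }
  );
  return doc.seq;
}
```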
Hi @steevej ,
Thanks for the answer.
Why don’t you use an $oid, UUID, GUID?
Actually, our requirement is that the alphanumeric key is generated once in a lifetime and never repeated, and its size is 6 characters (to be extended later, once the permutations are exhausted). I tried to lock the key using pessimistic locking, but I don't know if that is the correct way. According to the requirement, I generate the key in incremental order using modulus operations: I repeatedly divide an incrementing number by alphabet.size() to get a 6-character alphanumeric key that never repeats, which is why I need the last generated number. Can I use pessimistic locking?
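The "divide by alphabet.size()" scheme described above sounds like base-N encoding of an incrementing counter. A self-contained sketch (the alphabet and padding character are my assumptions); note this only solves the formatting half, since the counter itself still has to be incremented atomically:

```javascript
// Encode an incrementing integer as a fixed-width 6-character key.
// Keys never repeat as long as each call receives a distinct counter value.
const ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyz'; // assumed alphabet

function toKey(n, width = 6) {
  let key = '';
  do {
    key = ALPHABET[n % ALPHABET.length] + key; // remainder picks a character
    n = Math.floor(n / ALPHABET.length);       // integer-divide for the next digit
  } while (n > 0);
  return key.padStart(width, ALPHABET[0]);     // left-pad to the fixed width
}
```

For example, toKey(0) is '000000', toKey(35) is '00000z', and toKey(36) is '000010'.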
From what I understand, you are not using MongoDB to generate your unique key. It looks like you are developing your own function in some kind of library. If that is the case, then you have to make sure your function will never return 2 identical keys. This seems to be a JS question rather than a MongoDB question.
It would be best to share your code so that we really understand. But Stack Overflow might be a better venue, since it is a JS question.