I have a system in Go that performs inserts/updates at roughly 10-15K operations per second.
Can an M50 Atlas instance (on AWS) handle this?
I'm asking because I'm getting the errors below:
{"file":"/Users/updater_sec/db/mongo.go:82","func":"updater_sec/db.InsertVolMDB","level":"error","msg":"Error while performing upsert: timed out while checking out a connection from connection pool: context deadline exceeded; maxPoolSize: 100, connections in use by cursors: 0, connections in use by transactions: 0, connections in use by other operations: 5","time":"2024-05-06 09:15:24.845172"}
{"file":"/Users/updater_sec/db/mongo.go:82","func":"updater_sec/db.InsertMDB","level":"error","msg":"Error while performing upsert: connection(my mongo url) incomplete read of message header: context deadline exceeded","time":"2024-05-06 09:15:21.089048"}
I have 600 goroutines running per process, and 8 Go processes on a Linux server. What happened was that my server's RAM hit 100% and 2 of the processes got shut down, even though server CPU was only at 30%. The Atlas metrics looked normal too: 50% CPU, 50% RAM, 12K inserts/updates per second, the query executor scanning ~30K objects, and the document metric showing ~30K returned per second. My documents are also small: around 5-7 keys per object, where each value is a number.
What could be the bottleneck or issue here?