For several days I've had a problem that has prevented me from going to production: on 4 occasions my database has been deleted and I don't know why.
I save a normal record, and the next day when I query it, it no longer appears; when I check, the database no longer exists. I recreate it and the same thing happens again. I'm not sure whether it happens after some amount of time passes or whether something specific triggers it.
In mongod.conf the storage directory is set and everything looks normal. I use MongoDB Compass as a client — could that have something to do with connecting to the remote database? Permissions or something similar?
I am working with the MEAN stack and everything is hosted on DigitalOcean. According to their support, the server is fine — there are no reboots or anything abnormal.
I'd appreciate any help identifying what is happening, because I can't go to production like this.
If data is being removed unexpectedly, I would start by making sure your deployment is properly secured: access control enabled, appropriate firewall rules, and TLS/SSL network encryption.
To understand more about your scenario, can you please:

- Confirm the security measures you have implemented for your deployment.
- Confirm the exact MongoDB server version used (i.e. output of `db.version()` in the mongo shell) and the host O/S version.
- Describe more specifically what data is missing. Does "database" mean all of your databases & collections? Are there any databases or collections that are not affected?
There are other possibilities, but this is the most likely one to eliminate first.
Thank you for the extra details. One aspect you did not confirm was any security measures you have taken, but your screenshot confirms that someone was able to remotely access your deployment, drop the databases, and create a database called READ__ME_TO_RECOVER_YOUR_DATA which will likely have some instructions on paying a “ransom”. See: How to Avoid a Malicious Attack That Ransoms Your Data.
You can secure a deployment following the measures in the MongoDB Security Checklist. At a minimum a public deployment should have access control and authentication enabled, TLS/SSL network encryption configured, and appropriate firewall rules to limit network exposure. You should also set up Monitoring and Backup for a production environment and should review the Production Notes if you want to tune your deployment.
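As a rough sketch, the corresponding settings in `mongod.conf` (YAML format) might look like the following — the certificate path is a placeholder, and TLS setup also requires obtaining or generating a certificate first:

```yaml
# /etc/mongod.conf -- security-related settings (sketch; paths are examples)
net:
  port: 27017
  bindIp: 127.0.0.1          # listen only on localhost until auth and TLS are tested
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongodb.pem   # placeholder path to your cert + key

security:
  authorization: enabled     # require authentication for all client connections
```

After editing the config you would restart `mongod` for the changes to take effect.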
If that sounds like a daunting list of administrative tasks to take care of before your production launch, I would strongly recommend using MongoDB Atlas. Atlas deploys fully managed MongoDB clusters on AWS, GCP, and Azure with Enterprise-level security, backup, and monitoring features that can be configured via web UI and API. There’s a free tier with 512MB of data if you want to try out the platform or have a development sandbox, and resources can be scaled (or auto-scaled) depending on your cluster tier and configuration. MongoDB Atlas undergoes independent verification of platform security, privacy, and compliance controls (you can find more information in the Trust Centre).
If you prefer to manage deployments in your own VPS in the long term, you can always backup your data from Atlas and restore into a self-managed MongoDB deployment.
Effectively, yes — it contains some instructions on paying a “ransom”.
The site is a test site where I am reviewing all these aspects before going to production with the real one.
I have a question: how did they manage to break through the server's security and access the DB?
I am applying the security measures indicated. After that, can I erase the database and load the data again?
And how can I be sure that the security measures were applied correctly?
Typically this happens through a combination of disabling default security measures (such as the default of binding only to localhost) and not correctly configuring other security measures like access control and firewalls. If you do not have access control enabled on your deployment and anyone can connect remotely, those remote connections will have full administrative access to your deployment. If you do not enable network encryption and connect to your deployment remotely over the public internet (without an encrypted path like a VPN or SSH tunnel), all of your data is exchanged in plaintext and could be subject to eavesdropping.
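For example, before enabling `security.authorization` you would create an administrative user while connected locally. A sketch in mongosh (the username and role shown are examples — choose your own credentials):

```javascript
// Run in mongosh while connected locally, BEFORE enabling authorization.
// "siteAdmin" is a placeholder username.
use admin
db.createUser({
  user: "siteAdmin",
  pwd: passwordPrompt(),   // prompts for the password instead of embedding it
  roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
})
```

Once this user exists and authorization is enabled, all further connections must authenticate.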
As @chris noted, there is no hacking effort required if there are no effective security measures in place.
Direct access to your database server should ideally only be allowed from a limited set of origin application IPs that themselves would likely be inside the same VPN or firewall perimeter. User access should be authenticated using Role-Based Access Control following the principle of least privilege. The general security measures are similar for any infrastructure service you want to host and secure.
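Following least privilege, your application would connect with a user that only has the access it needs — not an admin account. A sketch in mongosh (the database and user names here are examples, not from your deployment):

```javascript
// mongosh -- create an application user scoped to a single database.
// "meanApp" and "appUser" are placeholder names.
use meanApp
db.createUser({
  user: "appUser",
  pwd: passwordPrompt(),
  roles: [ { role: "readWrite", db: "meanApp" } ]   // no admin privileges
})
```

If this account were ever compromised, the attacker could only touch that one database rather than drop everything.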
You can test to make sure the security measures are effective. For example, if access control is enabled you should not be able to run any commands to view or modify data as an unauthenticated user. If firewall configuration is correct, you should be able to connect from whitelisted IPs but not from any other IPs.
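As a quick check (collection name is a placeholder), connecting without credentials once authorization is enabled should fail with an authentication error rather than returning data:

```javascript
// mongosh "mongodb://your-server:27017/meanApp"  -- no credentials supplied
db.records.findOne()
// should be rejected with an error along the lines of:
//   MongoServerError: command find requires authentication
```

If that command returns a document instead of an error, access control is not actually in effect.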
Good security involves multiple layers of defence. Normally I would configure & test the inner layers of security (access control, enforcing authentication, TLS/SSL) before opening up to broader levels of exposure (binding to a non-local IP, firewall configuration). Since you have already had some unwelcome connections, I would start by limiting network exposure via your firewall so you can then configure and test the other security measures safely.
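As a sketch of that first step on an Ubuntu droplet using `ufw` (DigitalOcean also offers Cloud Firewalls that achieve the same thing at the network level) — `203.0.113.10` is a documentation placeholder, substitute your application server's IP:

```shell
# Deny MongoDB's port to the world, then allow only the app server's IP.
# 203.0.113.10 is a placeholder address -- use your own app server's IP.
sudo ufw deny 27017
sudo ufw allow from 203.0.113.10 to any port 27017 proto tcp
sudo ufw status numbered   # verify the rules are in the order you expect
```

With the port closed to the public internet, you can then enable and test authentication and TLS without further unwelcome visitors.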