Persisting data with mongodb/mongodb-atlas-local

Hi!

I tried using the mongodb/mongodb-atlas-local container with the latest version (I re-downloaded the images this morning), with the following docker-compose file:

name: database-stack
services:
  mongodb-atlas-local:
    container_name: mongodb
    ports:
      - "27017:27017"
    volumes:
      - D:\databases\mongodb:/data/db
    restart: unless-stopped
    image: mongodb/mongodb-atlas-local:latest

volumes:
  D:
    external: true
    name: D

When I don’t add the volume, it works flawlessly. However, after every restart of the server, all of the data is gone. When I add the volume, it only starts the first time, i.e. when the directory ‘D:\databases\mongodb’ is empty or doesn’t exist. Every time it starts up with a non-empty directory, I get a looping stack trace:

2024-07-01 09:27:23 
2024-07-01 09:27:23 goroutine 1 [running]:
2024-07-01 09:27:23 main.main()
2024-07-01 09:27:23     /app/cmd/runner/main.go:29 +0x11a
2024-07-01 09:27:56 Error: error checking mongod: error pinging: server selection error: server selection timeout, current topology: { Type: Single, Servers: [{ Addr: localhost:27017, Type: Unknown, Last error: dial tcp [::1]:27017: connect: connection refused }, ] }
2024-07-01 09:27:56 Usage:
2024-07-01 09:27:56   runner server [flags]
2024-07-01 09:27:56 
2024-07-01 09:27:56 Flags:
2024-07-01 09:27:56   -h, --help   help for server
2024-07-01 09:27:56 
2024-07-01 09:27:56 panic: error checking mongod: error pinging: server selection error: server selection timeout, current topology: { Type: Single, Servers: [{ Addr: localhost:27017, Type: Unknown, Last error: dial tcp [::1]:27017: connect: connection refused }, ] }

And every time I try to connect I get a different stack trace, ending in the following lines:

2024-07-01 09:36:10 {"t":{"$date":"2024-07-01T07:36:10.258+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"main","msg":"Killing all outstanding egress activity."}
2024-07-01 09:36:10 {"t":{"$date":"2024-07-01T07:36:10.258+00:00"},"s":"I",  "c":"SHARDING", "id":5847201, "ctx":"main","msg":"Balancer command scheduler stop requested"}
2024-07-01 09:36:10 {"t":{"$date":"2024-07-01T07:36:10.258+00:00"},"s":"I",  "c":"ASIO",     "id":6529201, "ctx":"main","msg":"Network interface redundant shutdown","attr":{"state":"Stopped"}}
2024-07-01 09:36:10 {"t":{"$date":"2024-07-01T07:36:10.258+00:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"main","msg":"Killing all outstanding egress activity."}
2024-07-01 09:36:10 {"t":{"$date":"2024-07-01T07:36:10.258+00:00"},"s":"F",  "c":"CONTROL",  "id":20575,   "ctx":"main","msg":"Error creating service context","attr":{"error":"Location5579201: Unable to acquire security key[s]"}}

Does anyone know how I can fix/workaround this issue to use this container for local development? Thanks in advance!

We are investigating the problem. We’ll post back here once we know what is going on.


@Anthony_Schuijlenburg can you provide the full log from docker compose? (you can copy-paste all of it from docker desktop).

Also, if you have the chance, can you try the following:

  1. Empty D:\databases\mongodb
  2. Try running with a different host port on every run, e.g. 27019:27017

I know that 2 is impractical, but I want to rule out an issue we’ve been seeing with docker not releasing the ports. We are thinking about what we can do here, but I would like to confirm if that could be the problem.
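For reference, step 2 only changes the host side of the port mapping; the compose file would look something like this (a sketch, keeping everything else from your file the same):

```yaml
services:
  mongodb-atlas-local:
    image: mongodb/mongodb-atlas-local:latest
    ports:
      - "27019:27017"   # host port varies per run; the container port stays 27017
```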

Hi @Massimiliano_Marcon!

First of all, thank you for looking into this! I tried attaching the full logs as .txt files, but that is not possible for users here, so I made a WeTransfer link for the .txt files instead.

It includes 2 log files, 1 successful and 1 unsuccessful; for each log file I will include my steps to reproduce.

First the successful one:

  1. I made sure the folder: D:\databases\mongodb did not exist (deleted)
  2. Changed the external port to 27019:27017 as per your request
  3. Ran docker-compose up -d
  4. Waited a few moments for the server to be done starting up
  5. Tried the built-in ‘Test connection’ function in NoSQL Booster, which was successful
  6. Connected to the server
  7. Added a database, collection and a sample record
  8. Copied the logs into successful-logs.txt
  9. Ran docker-compose down

The second, unsuccessful one:

  1. Ran docker-compose up -d
  2. Waited a few moments for the server to be done starting up
  3. Tried the built-in ‘Test connection’ function in NoSQL Booster, which was unsuccessful
  4. Tried to connect to the server, which also timed out
  5. Copied the logs into unsuccessful-logs.txt
  6. Ran docker-compose down

A side note I noticed while making these exports: the moment I ran docker-compose up the second time, the ‘last started’ attribute of my Redis container was about what I would expect, but the same attribute on the Mongo container never got past a minute. This makes me suspect it is stuck in some sort of boot loop.

Thanks in advance!

Thank you @Anthony_Schuijlenburg!

I think we’ve narrowed it down to a race condition in how we manage the processes. We are working on it. I’ll follow up when we have the fix out.


@Anthony_Schuijlenburg can you try pulling mongodb/mongodb-atlas-local:latest and running docker compose up again? We released a new image last night with a fix for the race condition.

Hi @Massimiliano_Marcon,

Thank you (and your team) for your efforts in fixing this problem. I removed all of my containers, images, and volumes to get a ‘fresh’ environment in which to test your new version. Unfortunately, the outcome is the same as before…

I did, however, poke around a bit more and tried this exact docker-compose file:

services:
  mongodb:
    image: mongodb/mongodb-atlas-local
    environment:
      - MONGODB_INITDB_ROOT_USERNAME=user
      - MONGODB_INITDB_ROOT_PASSWORD=pass
    ports:
      - 27019:27017
    volumes:
      - data:/data/db

volumes:
  data:

Which, again, works the first time it starts up. What I did find out is that restarting (with restart, or stop followed by start) through Docker rather than docker-compose did work. This also included restarting the entire computer. My cautious assumption is that docker-compose down does not gracefully or completely stop the container, which makes it unable to boot again.

Hope this information helps! Have a nice weekend!

Hello guys, hope you’re having a good day. I also stumbled upon this issue yesterday and found a solution for it. Basically, if you check the volumes in Docker, you can see the “data” volume you defined for the mongodb service, and its status is “used”.

You can also see that another volume with a random hashed name, also with status “used”, is linked to the mongodb service. If you click on it to check the contents, you can find the keyfile inside.

It turns out that every time you do docker-compose down and then re-run Docker, it doesn’t find the keyfile again, because it’s not in the volume you created but in a random one.
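You can see this for yourself from the command line (a rough sketch; the hashed name will differ on your machine):

```shell
# list volumes: the named "data" volume plus an anonymous one with a hashed name
docker volume ls

# inspect the anonymous volume; its mountpoint is where the keyfile lives
docker volume inspect <hashed-volume-name>
```

After docker-compose down removes the container, the next up creates a fresh, empty anonymous volume, so mongod can no longer find the keyfile it generated on the first run.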

So this is what I did:

  mongodb:
    image: mongodb/mongodb-atlas-local
    container_name: 'mongo'
    restart: unless-stopped
    ports:
      - 27018:27017
    volumes:
      - mongodb_config:/data/configdb
      - mongodb_data:/data/db

volumes:
  mongodb_data:
  mongodb_config:

I added a new volume to make sure it saves the config files and the keyfile, which are generated inside /data/configdb on the first run.
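If you want to verify the fix, a quick sanity check (sketch, assuming the compose file above and a running Docker daemon):

```shell
docker compose up -d    # first start: keyfile is generated in /data/configdb
# ... connect, create a test database and record ...
docker compose down     # previously this is what broke the next startup
docker compose up -d    # both named volumes are reattached, so mongod starts
                        # and the test record is still there
```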

Hope this helps!


Hi @Abdellatif_Edlby!

This works like a charm! Thanks for your response!

@Massimiliano_Marcon Could you try to get this information into the guide at https://www.mongodb.com/docs/atlas/cli/current/atlas-cli-deploy-docker/ to make it easily available for people encountering this problem in the future?

Thanks in advance!

This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.