How to fix "replica set is already initialized"

Hello,
I have 3 VMs in VirtualBox that can ping each other.
They all run AlmaLinux 9.5 with MongoDB 6.0 installed.
I am attempting to create a replica set.
The three IPs are 10.17.60.102, 10.17.60.103, and 10.17.60.104, and the corresponding hostnames are dw4-mongo-test-01, dw4-mongo-test-02, and dw4-mongo-test-03.
When I open mongosh with the command:
mongosh --port 27017 --authenticationDatabase "admin" -u "admin" -p
and try to run

rs.initiate(
  {
    _id : "rs0",
    members: [
      { _id : 0, host : "dw4-mongo-test-01:27017" },
      { _id : 1, host : "dw4-mongo-test-02:27017" },
      { _id : 2, host : "dw4-mongo-test-03:27017" }
    ]
  }
)

on the three machines, I get:
MongoServerError[AlreadyInitialized]: already initialized
and if I run rs.conf() on each node, this is the output:
node 1:

rs0 [direct: primary] test> rs.conf()
{
  _id: 'rs0',
  version: 3,
  term: 2,
  members: [
    {
      _id: 0,
      host: 'dw4-mongo-test-01:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    },
    {
      _id: 1,
      host: '10.17.60.103:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    },
    {
      _id: 2,
      host: '10.17.60.104:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    }
  ],
  protocolVersion: Long('1'),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId('678925cb7823cfd20d73907b')
  }
}

node 2:

rs0 [direct: primary] test> rs.conf();
{
  _id: 'rs0',
  version: 2,
  term: 2,
  members: [
    {
      _id: 0,
      host: 'dw4-mongo-test-02:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    },
    {
      _id: 1,
      host: '10.17.60.102:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    }
  ],
  protocolVersion: Long('1'),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId('678925cb1a6df32544b87ae4')
  }
}

node 3:

rs0 [direct: primary] test> rs.conf();
{
  _id: 'rs0',
  version: 2,
  term: 2,
  members: [
    {
      _id: 0,
      host: 'dw4-mongo-test-03:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    },
    {
      _id: 1,
      host: '10.17.60.102:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    }
  ],
  protocolVersion: Long('1'),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId('678925cbf60d4cde61d263e0')
  }
}

How can I fix that and re-initialize the cluster with my previous command? The three hosts can ping each other and the ports are open.
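
For reference, each node reports itself as primary and the three rs.conf outputs carry different settings.replicaSetId values, so rs.initiate appears to have run on each node independently, producing three separate replica sets instead of one. A quick read-only check on each node (a sketch in mongosh):

rs.status().members.map(m => ({ name: m.name, state: m.stateStr }))
db.getSiblingDB("local").system.replset.findOne().settings.replicaSetId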

If I do:

cfg = rs.conf()

printjson(cfg)

cfg.members = [cfg.members[0].host="dw4-mongo-test-01:27017" , cfg.members[1].host="dw4-mongo-test-02:27017" , cfg.members[2].host="dw4-mongo-test-03:27017"]

rs.reconfig(cfg, {force : true})

I get:
MongoServerError[InvalidReplicaSetConfig]: BSON field 'ReplSetConfig.members.0' is the wrong type 'string', expected type 'object'
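
The error appears because that assignment replaces the members array with plain strings, while each entry has to remain an object. A version that only edits the host field of each existing member keeps the objects intact (a sketch for the node whose config has three members; it fixes the syntax, but it will not merge three independently initialized sets):

cfg = rs.conf()
cfg.members[0].host = "dw4-mongo-test-01:27017"
cfg.members[1].host = "dw4-mongo-test-02:27017"
cfg.members[2].host = "dw4-mongo-test-03:27017"
rs.reconfig(cfg, { force: true })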

I tried deleting
/var/lib/mongo
and restarting the service with
systemctl start mongod
but it exits with code 100 and writes the following log:

{"t":{"$date":"2025-01-22T10:49:55.673+01:00"},"s":"I",  "c":"NETWORK",  "id":4915701, "ctx":"-","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":17},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":17},"outgoing":{"minWireVersion":6,"maxWireVersion":17},"isInternalClient":true}}}
{"t":{"$date":"2025-01-22T10:49:55.677+01:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2025-01-22T10:49:55.680+01:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2025-01-22T10:49:55.777+01:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","namespace":"config.tenantMigrationDonors"}}
{"t":{"$date":"2025-01-22T10:49:55.777+01:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","namespace":"config.tenantMigrationRecipients"}}
{"t":{"$date":"2025-01-22T10:49:55.777+01:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"ShardSplitDonorService","namespace":"config.tenantSplitDonors"}}
{"t":{"$date":"2025-01-22T10:49:55.777+01:00"},"s":"I",  "c":"CONTROL",  "id":5945603, "ctx":"main","msg":"Multi threading initialized"}
{"t":{"$date":"2025-01-22T10:49:55.778+01:00"},"s":"I",  "c":"CONTROL",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1739,"port":27017,"dbPath":"/var/lib/mongo","architecture":"64-bit","host":"dw4-mongo-test-01"}}
{"t":{"$date":"2025-01-22T10:49:55.778+01:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"6.0.19-16","gitVersion":"8a76498d71756221e50602ad8d520c0ed6f29ff5","openSSLVersion":"OpenSSL 3.2.2 4 Jun 2024","modules":[],"proFeatures":[],"allocator":"tcmalloc","environment":{"distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2025-01-22T10:49:55.778+01:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"AlmaLinux release 9.5 (Teal Serval)","version":"Kernel 5.14.0-503.11.1.el9_5.x86_64"}}}
{"t":{"$date":"2025-01-22T10:49:55.778+01:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"/etc/mongod.conf","net":{"bindIp":"0.0.0.0","port":27017},"processManagement":{"fork":true,"pidFilePath":"/var/run/mongod.pid"},"replication":{"replSetName":"rs0"},"security":{"authorization":"enabled","keyFile":"/etc/mongodb/keyfile"},"storage":{"dbPath":"/var/lib/mongo","journal":{"enabled":true}},"systemLog":{"destination":"file","logAppend":true,"path":"/var/log/mongo/mongod.log"}}}}
{"t":{"$date":"2025-01-22T10:49:55.779+01:00"},"s":"E",  "c":"CONTROL",  "id":20557,   "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"NonExistentPath: Data directory /var/lib/mongo not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file."}}
{"t":{"$date":"2025-01-22T10:49:55.779+01:00"},"s":"I",  "c":"REPL",     "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":15000}}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"REPL",     "id":4794602, "ctx":"initandlisten","msg":"Attempting to enter quiesce mode"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"-",        "id":6371601, "ctx":"initandlisten","msg":"Shutting down the FLE Crud thread pool"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"COMMAND",  "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"NETWORK",  "id":20562,   "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"NETWORK",  "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"CONTROL",  "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"-",        "id":20520,   "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"REPL",     "id":4784907, "ctx":"initandlisten","msg":"Shutting down the replica set node executor"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"NETWORK",  "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"MigrationUtil-TaskExecutor","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"COMMAND",  "id":4784923, "ctx":"initandlisten","msg":"Shutting down the ServiceEntryPoint"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"CONTROL",  "id":4784928, "ctx":"initandlisten","msg":"Shutting down the TTL monitor"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"CONTROL",  "id":6278511, "ctx":"initandlisten","msg":"Shutting down the Change Stream Expired Pre-images Remover"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"CONTROL",  "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"-",        "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"CONTROL",  "id":20565,   "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"CONTROL",  "id":8423404, "ctx":"initandlisten","msg":"mongod shutdown complete","attr":{"Summary of time elapsed":{"Statistics":{"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"0 ms","Time spent in quiesce mode":"1 ms","Shut down FLE Crud subsystem":"0 ms","Shut down MirrorMaestro":"0 ms","Shut down WaitForMajorityService":"0 ms","Shut down the transport layer":"0 ms","Shut down the global connection pool":"0 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"0 ms","Shut down the replica set monitor":"0 ms","Shut down the migration util executor":"0 ms","Shut down the TTL monitor":"0 ms","Shut down expired pre-images remover":"0 ms","Shut down full-time data capture":"0 ms","shutdownTask total elapsed time":"1 ms"}}}}
{"t":{"$date":"2025-01-22T10:49:55.780+01:00"},"s":"I",  "c":"CONTROL",  "id":23138,   "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}

I also don't understand the log.
It says the data directory is missing, but that's not true; the directory is there:

ll /var/lib/
total 0
drwx------. 2 root       root       105 Jan 22 10:41 NetworkManager
drwxr-xr-x. 2 root       root       143 Jan 15 15:40 alternatives
drwxr-xr-x. 2 root       root        35 Jan 15 15:41 authselect
drwxr-x---. 2 chrony     chrony      19 Jan 22 10:40 chrony
drwxr-xr-x. 3 root       root        93 Jan 16 09:31 dnf
drwxr-xr-x. 2 root       root         6 Oct  2 23:00 games
drwxr-xr-x. 2 root       root         6 Oct  1 14:37 initramfs
drwxr-xr-x. 2 root       root         6 Nov 12 15:39 kdump
drwxr-xr-x. 2 root       root        30 Jan 22 00:00 logrotate
drwxr-xr-x. 2 root       root         6 Oct  2 23:00 misc
drwxr-xr-x. 2 mongodb    mongodb      6 Jan 22 10:54 mongo
drwxr-xr-x. 2 mongodb    mongodb      6 Jan 22 10:48 mongodb
drwxr-sr-x. 3 opensearch opensearch 146 Jan 16 09:19 opensearch
drwxr-xr-x. 2 root       root         6 May  9  2023 os-prober
drwx------. 2 root       root         6 Jan 15 15:40 private
drwxr-xr-x. 2 root       root        91 Oct  7 09:47 rpm
drwxr-xr-x. 3 root       root        20 Jan 15 15:40 rpm-state
drwx------. 2 root       root        29 Jan 22 11:23 rsyslog
drwxr-xr-x. 5 root       root        46 Jan 15 15:40 selinux
drwxr-xr-x. 9 root       root       105 Jan 15 15:40 sss
drwxr-xr-x. 7 root       root        98 Jan 15 16:21 systemd
drwxr-xr-x. 3 root       root        20 Jan 15 15:40 tpm2-tss
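
A quick way to see which user the mongod unit actually runs as (a sketch; the unit file path can vary by package):

systemctl cat mongod | grep -E '^(User|Group)='
ps -o user= -C mongod    # only if the service is currently running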

Solved: the mongod service runs as the user mongod, not mongodb, so the data directory had the wrong owner. Now I still have the replica set to fix.
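
For anyone hitting the same exit code 100, a sketch of the fix on each node, assuming dbPath stays /var/lib/mongo as in the config above:

sudo mkdir -p /var/lib/mongo
sudo chown -R mongod:mongod /var/lib/mongo
sudo restorecon -Rv /var/lib/mongo   # reset the SELinux context on AlmaLinux
sudo systemctl restart mongod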

Strangely enough, if I now run:

rs.initiate(
  {
    _id : "rs0",
    members: [
      { _id : 0, host : "10.17.60.102:27017" },
      { _id : 1, host : "10.17.60.103:27017" },
      { _id : 2, host : "10.17.60.104:27017" }
    ]
  }
)

on the primary node, everything seems to come up fine.
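
To double-check that all three members really joined this time, something like this on any member should list one PRIMARY and two SECONDARY nodes (sketch):

rs.status().members.forEach(m => print(m.name, m.stateStr))

Switching the host entries from IPs back to hostnames later should be possible with rs.reconfig() on the primary, as sketched earlier.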
