Replica Set Members
A replica set in MongoDB is a group of mongod processes that provide redundancy and high availability. The members of a replica set are:
- Primary: The primary receives all write operations.
- Secondaries: Secondaries replicate operations from the primary to maintain an identical data set. Secondaries may have additional configurations for special usage profiles. For example, secondaries may be non-voting or priority 0.
The minimum recommended configuration for a replica set is a three member replica set with three data-bearing members: one primary and two secondary members. In some circumstances (such as when you have a primary and a secondary but cost constraints prohibit adding another secondary), you may choose to include an arbiter. An arbiter participates in elections but does not hold data (i.e. does not provide data redundancy).
A replica set can have up to 50 members but only 7 voting members.
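As a rough illustration of such a three-member deployment, a replica set can be initiated from a driver. The PyMongo sketch below uses illustrative hostnames and the replica set name rs0 (which must match the replSetName each mongod was started with); the supported procedure is described in the deployment tutorials.

```python
# A minimal sketch of initiating a three-member replica set with PyMongo.
# Hostnames and the replica set name "rs0" are illustrative assumptions.
from pymongo import MongoClient

# Connect directly to the member that will run the initiation command.
client = MongoClient("mongodb://mongodb0.example.net:27017/?directConnection=true")

config = {
    "_id": "rs0",
    "members": [
        {"_id": 0, "host": "mongodb0.example.net:27017"},
        {"_id": 1, "host": "mongodb1.example.net:27017"},
        {"_id": 2, "host": "mongodb2.example.net:27017"},
    ],
}

# replSetInitiate is an admin command; an election then chooses one
# data-bearing member as primary.
client.admin.command("replSetInitiate", config)
```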
Primary
The primary is the only member in the replica set that receives write operations. MongoDB applies write operations on the primary and then records the operations on the primary's oplog. Secondary members replicate this log and apply the operations to their data sets.
In a three-member replica set, the primary accepts all write operations, and the secondaries then replicate the primary's oplog and apply the operations to their data sets.
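The oplog that the secondaries replicate can also be inspected directly. A minimal PyMongo sketch, assuming a replica set named rs0 reachable at an illustrative hostname:

```python
# A small sketch of reading the newest entry in the primary's oplog.
# The connection string below is an assumption for illustration.
from pymongo import MongoClient

client = MongoClient("mongodb://mongodb0.example.net:27017/?replicaSet=rs0")

# The oplog is a capped collection in the local database on each member.
oplog = client.local["oplog.rs"]

# "ts" orders entries, "op" is the operation type (i/u/d/n), "ns" is the namespace.
latest = oplog.find_one(sort=[("$natural", -1)])
print(latest["ts"], latest["op"], latest.get("ns"))
```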
All members of the replica set can accept read operations. However, by default, an application directs its read operations to the primary member. See Read Preference for details on changing the default read behavior.
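For example, a driver can override the default read preference for a single collection handle. A minimal PyMongo sketch, with an assumed connection string and an illustrative shop.orders namespace:

```python
# A sketch of changing the default read preference (primary) for one collection.
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://mongodb0.example.net:27017/?replicaSet=rs0")

# By default, reads on this handle go to the primary.
orders = client.shop.orders

# This handle reads from a secondary when one is available,
# falling back to the primary otherwise.
orders_secondary = orders.with_options(
    read_preference=ReadPreference.SECONDARY_PREFERRED
)
print(orders_secondary.count_documents({}))
```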
The replica set can have at most one primary. [2] If the current primary becomes unavailable, an election determines the new primary. See Replica Set Elections for more details.
Secondaries
A secondary maintains a copy of the primary's data set. To replicate data, a secondary applies operations from the primary's oplog to its own data set in an asynchronous process. [1] A replica set can have one or more secondaries.
In a three-member replica set with two secondary members, the secondaries replicate the primary's oplog and apply the operations to their data sets.
Although clients cannot write data to secondaries, clients can read data from secondary members. See Read Preference for more information on how clients direct read operations to replica sets.
A secondary can become a primary. If the current primary becomes unavailable, the replica set holds an election to choose which of the secondaries becomes the new primary.
See Replica Set Elections for more details.
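One way to observe an election's outcome is to ask any reachable member which node is currently primary. A small PyMongo sketch, assuming illustrative hostnames and the replica set name rs0:

```python
# A sketch of checking member states and the current primary.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://mongodb0.example.net:27017,mongodb1.example.net:27017/?replicaSet=rs0"
)

# replSetGetStatus reports each member's state (PRIMARY, SECONDARY, ARBITER, ...).
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    print(member["name"], member["stateStr"])

# The driver also tracks the primary it discovered for this replica set.
print("current primary:", client.primary)
```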
You can configure a secondary member for a specific purpose (see the configuration sketch after this list). You can configure a secondary to:
- Prevent it from becoming a primary in an election, which allows it to reside in a secondary data center or to serve as a cold standby. See Priority 0 Replica Set Members.
- Prevent applications from reading from it, which allows it to run applications that require separation from normal traffic. See Hidden Replica Set Members.
- Keep a running "historical" snapshot for use in recovery from certain errors, such as unintentionally deleted databases. See Delayed Replica Set Members.
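A hedged sketch of the member settings behind these three configurations, expressed as a replica set reconfiguration with PyMongo. The member indexes, hostnames, and delay value are assumptions for illustration; the linked pages describe the supported procedures and constraints.

```python
# A hedged sketch of the member settings behind the three configurations above,
# expressed as a replica set reconfiguration with PyMongo. The member indexes,
# hostnames, and delay value are assumptions for illustration only.
from pymongo import MongoClient

client = MongoClient("mongodb://mongodb0.example.net:27017/?replicaSet=rs0")

config = client.admin.command("replSetGetConfig")["config"]

# Priority 0: this member can never be elected primary.
config["members"][1]["priority"] = 0

# Hidden: invisible to application reads; hidden members must also be priority 0.
config["members"][2]["priority"] = 0
config["members"][2]["hidden"] = True

# Delayed: applies oplog entries after a fixed lag
# (secondaryDelaySecs in MongoDB 5.0+, slaveDelay in earlier versions).
config["members"][2]["secondaryDelaySecs"] = 3600

# Reconfiguration requires bumping the config version.
config["version"] += 1
client.admin.command("replSetReconfig", config)
```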
[1] Starting in version 4.2, secondary members of a replica set log oplog entries that take longer than the slow operation threshold to apply. These slow oplog messages are logged for the secondaries in the diagnostic log under the REPL component.
Arbiter
In some circumstances (such as when you have a primary and a secondary, but cost constraints prohibit adding another secondary), you may choose to add an arbiter to your replica set. An arbiter participates in elections for primary but does not have a copy of the data set and cannot become a primary.
An arbiter has exactly 1 election vote. By default, an arbiter has priority 0.
Important
Do not run an arbiter on systems that also host the primary or the secondary members of the replica set.
To add an arbiter, see Add an Arbiter to Replica Set.
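The linked tutorial uses the mongosh helper rs.addArb(). A rough driver-side equivalent, shown only as a sketch with an assumed arbiter hostname, reconfigures the set with an arbiterOnly member:

```python
# A rough sketch of adding an arbiter by reconfiguration with PyMongo; the
# linked tutorial uses the mongosh helper rs.addArb(). The arbiter hostname
# and member _id below are assumptions for illustration.
from pymongo import MongoClient

client = MongoClient("mongodb://mongodb0.example.net:27017/?replicaSet=rs0")

config = client.admin.command("replSetGetConfig")["config"]

# An arbiter votes in elections but holds no data.
config["members"].append(
    {"_id": 3, "host": "arbiter.example.net:27017", "arbiterOnly": True}
)

config["version"] += 1
client.admin.command("replSetReconfig", config)
```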
For considerations when using an arbiter, see Replica Set Arbiter.
[2] In some circumstances, two nodes in a replica set may transiently believe that they are the primary, but at most one of them will be able to complete writes with { w: "majority" } write concern. The node that can complete { w: "majority" } writes is the current primary, and the other node is a former primary that has not yet recognized its demotion, typically due to a network partition. When this occurs, clients that connect to the former primary may observe stale data despite having requested read preference primary, and new writes to the former primary will eventually roll back.
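The { w: "majority" } write concern described in this note can be requested from a driver. A minimal PyMongo sketch, assuming a replica set named rs0 and an illustrative shop.orders namespace:

```python
# A sketch of issuing writes with { w: "majority" } write concern, which only
# the current primary can complete. Namespace and hosts are assumptions.
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://mongodb0.example.net:27017/?replicaSet=rs0")

orders = client.shop.orders.with_options(write_concern=WriteConcern(w="majority"))

# The insert is acknowledged only after a majority of the data-bearing voting
# members have applied it, so it will not be rolled back by a later election.
orders.insert_one({"status": "pending"})
```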