Read Preference Use Cases
The following document explains common use cases for the various read preference modes, as well as counter-indications outlining when you should not change the read preference from the default, primary.
Read Preference Modes
Read Preference Mode | Description |
---|---|
primary | Default mode. All operations read from the current replica set primary. Multi-document transactions that contain read operations must use read preference primary; all operations in a given transaction must route to the same member. |
primaryPreferred | In most situations, operations read from the primary but if it is unavailable, operations read from secondary members. Starting in version 4.4, primaryPreferred supports hedged reads on sharded clusters. |
secondary | All operations read from the secondary members of the replica set. Starting in version 4.4, secondary supports hedged reads on sharded clusters. |
secondaryPreferred | Operations typically read data from secondary members of the replica set. If the replica set has only one single primary member and no other members, operations read data from the primary member. Starting in version 4.4, secondaryPreferred supports hedged reads on sharded clusters. |
nearest | Operations read from a random eligible replica set member, irrespective of whether that member is a primary or secondary, based on a specified latency threshold. The operation considers the latency threshold and any tag sets when selecting a member. Starting in version 4.4, nearest supports hedged reads on sharded clusters and specifies hedged reads by default. |
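A read preference mode can be set for a whole connection or per operation. The following mongosh sketch is illustrative only; the hostnames, replica set name, collection, and filter are hypothetical:

```javascript
// Per-connection: route reads on this connection to a secondary
// when one is available (hostnames and rs0 are hypothetical).
const conn = new Mongo(
  'mongodb://host1:27017,host2:27017,host3:27017/' +
  '?replicaSet=rs0&readPreference=secondaryPreferred'
)

// Per-operation: override the mode for a single query.
db.users.find({ status: 'active' }).readPref('secondaryPreferred')
```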
Indications to Use Non-Primary Read Preference
The following are common use cases for using non-primary read preference modes:

- Running systems operations that do not affect the front-end application.

  Note: Read preferences aren't relevant to direct connections to a single mongod instance. However, in order to perform read operations on a direct connection to a secondary member of a replica set, you must set a read preference, such as secondary.

- Providing local reads for geographically distributed applications.

  If you have application servers in multiple data centers, you may consider having a geographically distributed replica set and using a non-primary or nearest read preference. This allows the client to read from the lowest-latency members, rather than always reading from the primary.

- Maintaining availability during a failover.

  Use primaryPreferred if you want an application to read from the primary under normal circumstances, but to allow stale reads from secondaries when the primary is unavailable. This provides a "read-only mode" for your application during a failover, as sketched below.
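As a minimal mongosh sketch of this "read-only mode" (the collection and filter are hypothetical):

```javascript
// During normal operation, reads go to the primary; during a
// failover they fall back to secondaries and may be stale.
db.getMongo().setReadPref('primaryPreferred')
db.orders.find({ status: 'open' })  // 'orders' is a hypothetical collection
```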
Counter-Indications for Non-Primary Read Preference
In general, do not use secondary and secondaryPreferred to provide extra capacity for reads, because:

- All members of a replica set have roughly equivalent write traffic; as a result, secondaries will service reads at roughly the same rate as the primary.

- Replication is asynchronous and there is some amount of delay between a successful write operation and its replication to secondaries. Reading from a secondary can return stale data; reading from different secondaries may result in non-monotonic reads.

  Changed in version 3.6: Starting in MongoDB 3.6, clients can use Client Sessions and Causal Consistency Guarantees to ensure monotonic reads (see the session sketch after this list).

- Distributing read operations to secondaries can compromise availability if any members of the set become unavailable, because the remaining members of the set will need to be able to handle all application requests.

- Sharding increases read and write capacity by distributing read and write operations across a group of machines, and is often a better strategy for adding capacity.
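A minimal mongosh sketch of the sessions approach mentioned above; the database, collection, and documents are hypothetical, and note that full causal-consistency guarantees also rely on majority read and write concerns:

```javascript
// Start a causally consistent session so that reads observe the
// session's own earlier writes, even when routed to a lagging secondary.
const session = db.getMongo().startSession({ causalConsistency: true })
const orders = session.getDatabase('app').getCollection('orders')

orders.insertOne({ _id: 1, status: 'open' })
// This read is causally ordered after the insert above:
orders.find({ _id: 1 }).readPref('secondaryPreferred')
```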
See Server Selection Algorithm for more information about the internal application of read preferences.
Maximize Consistency
To avoid stale reads, use primary read preference and "majority" readConcern. If the primary is unavailable, e.g. during elections or when a majority of the replica set is not accessible, read operations using primary read preference produce an error or throw an exception.
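For example (a minimal sketch; the collection and filter are hypothetical):

```javascript
// Read majority-acknowledged data from the primary; this errors
// rather than returning stale data if no primary is available.
db.orders.find({ status: 'open' })
  .readPref('primary')
  .readConcern('majority')
```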
In some circumstances, it may be possible for a replica set to temporarily have two primaries; however, only one primary will be capable of confirming writes with the "majority" write concern.

- A partial network partition may segregate a primary (P_old) into a partition with a minority of the nodes, while the other side of the partition contains a majority of nodes. The partition with the majority will elect a new primary (P_new), but for a brief period, the old primary (P_old) may still continue to serve reads and writes, as it has not yet detected that it can only see a minority of nodes in the replica set. During this period, if the old primary (P_old) is still visible to clients as a primary, reads from this primary may reflect stale data.

- A primary (P_old) may become unresponsive, which will trigger an election and a new primary (P_new) can be elected, serving reads and writes. If the unresponsive primary (P_old) starts responding again, two primaries will be visible for a brief period. The brief period will end when P_old steps down. However, during the brief period, clients might read from the old primary P_old, which can provide stale data.
To increase consistency, you can disable automatic failover; however, disabling automatic failover sacrifices availability.
Maximize Availability
To permit read operations when possible, use primaryPreferred. When there's a primary you will get consistent reads [1], but if there is no primary you can still query secondaries. However, when using this read mode, consider the situation described in secondary vs secondaryPreferred.
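The difference is observable in mongosh (the collection name is hypothetical):

```javascript
// With 'primary', the query fails when no primary is reachable:
try {
  db.orders.find().readPref('primary').toArray()
} catch (e) {
  print('no primary available: ' + e.message)
}

// With 'primaryPreferred', the same query falls back to a secondary
// and succeeds, though results may be stale:
db.orders.find().readPref('primaryPreferred').toArray()
```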
[1] In some circumstances, two nodes in a replica set may transiently believe that they are the primary, but at most, one of them will be able to complete writes with { w: "majority" } write concern. The node that can complete { w: "majority" } writes is the current primary, and the other node is a former primary that has not yet recognized its demotion, typically due to a network partition. When this occurs, clients that connect to the former primary may observe stale data despite having requested read preference primary, and new writes to the former primary will eventually roll back.
Minimize Latency
To always read from a low-latency node, use nearest. The driver or mongos will read from the nearest member or from members no more than 15 milliseconds [2] further away than the nearest member.

nearest does not guarantee consistency. If the nearest member to your application server is a secondary with some replication lag, queries could return stale data. nearest only reflects network distance and does not reflect I/O or CPU load.
[2] This threshold is configurable. See localPingThresholdMs for mongos or your driver documentation for the appropriate setting.
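For example, a driver-side connection string can narrow the acceptance window from the 15 millisecond default (hostnames and replica set name are hypothetical):

```javascript
// Only members within 5 ms of the nearest member are eligible.
const conn = new Mongo(
  'mongodb://host1,host2,host3/?replicaSet=rs0' +
  '&readPreference=nearest&localThresholdMS=5'
)
```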
Query From Geographically Distributed Members
If the members of a replica set are geographically distributed, you can create replica set tags that reflect the location of each instance and then configure your application to query nearby members.

For example, if members in "east" and "west" data centers are tagged {'dc': 'east'} and {'dc': 'west'}, your application servers in the east data center can read from nearby members with the following read preference:
db.collection.find().readPref('nearest', [ { 'dc': 'east' } ])
Although nearest already favors members with low network latency, including the tag makes the choice more predictable.
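The same tag-aware preference can also be expressed in a connection string. In the sketch below (hostnames are hypothetical), the trailing empty readPreferenceTags entry is intended to allow falling back to any member when no 'dc: east' member is eligible:

```javascript
const conn = new Mongo(
  'mongodb://host1,host2,host3/?replicaSet=rs0' +
  '&readPreference=nearest' +
  '&readPreferenceTags=dc:east' +
  '&readPreferenceTags='   // empty tag set: match any member as a fallback
)
```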
secondary vs secondaryPreferred
For specific dedicated queries (e.g. ETL, reporting), you may shift the read load from the primary by using the secondary read preference mode. For this use case, the secondary mode is preferable to the secondaryPreferred mode because secondaryPreferred risks the following situation: if all secondaries are unavailable and your replica set has enough arbiters [3] to prevent the primary from stepping down, then the primary will receive all traffic from the clients. If the primary is unable to handle this load, the queries will compete with the writes. For this reason, use read preference secondary to distribute these specific dedicated queries instead of secondaryPreferred.
[3] In general, avoid deploying arbiters in replica sets and use an odd number of data-bearing nodes instead. If you must deploy arbiters, avoid deploying more than one arbiter per replica set.
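As a sketch, a dedicated reporting connection can be pinned to secondary so it fails fast instead of landing on the primary (the hostnames, database, and pipeline are hypothetical):

```javascript
// Dedicated analytics connection: reads only ever hit secondaries.
const reporting = new Mongo(
  'mongodb://host1,host2,host3/?replicaSet=rs0&readPreference=secondary'
)
const orders = reporting.getDB('app').getCollection('orders')

// Reporting aggregation; errors if no secondary is available, which
// is preferable here to overloading the primary.
orders.aggregate([{ $group: { _id: '$status', total: { $sum: 1 } } }])
```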