Log Messages
Overview
As part of normal operation, MongoDB maintains a running log of events, including entries such as incoming connections, commands run, and issues encountered. Generally, log messages are useful for diagnosing issues, monitoring your deployment, and tuning performance.
To view your log messages, you can use any of the following methods:

- View logs in your configured log destination.
- Run the getLog command.
- Download logs through MongoDB Atlas. To learn more, see Download Your Logs.
Structured Logging
mongod / mongos instances output all log messages in structured JSON format. Log entries are written as a series of key-value pairs, where each key indicates a log message field type, such as "severity", and each corresponding value records the associated logging information for that field type, such as "informational". Previously, log entries were output as plaintext.
Example
The following is an example log message in JSON format as it would appear in the MongoDB log file:
{"t":{"$date":"2020-05-01T15:16:17.180+00:00"},"s":"I", "c":"NETWORK", "id":12345, "ctx":"listener", "svc": "R", "msg":"Listening on", "attr":{"address":"127.0.0.1"}}
JSON log entries can be pretty-printed for readability. Here is the same log entry pretty-printed:
{
  "t": { "$date": "2020-05-01T15:16:17.180+00:00" },
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "svc": "R",
  "msg": "Listening on",
  "attr": { "address": "127.0.0.1" }
}
In this log entry, for example, the key s, representing severity, has a corresponding value of I, representing "Informational", and the key c, representing component, has a corresponding value of NETWORK, indicating that the "network" component was responsible for this particular message. The various field types are presented in detail in the Log Message Field Types section.
Structured logging with key-value pairs allows for efficient parsing by automated tools or log ingestion services, and makes programmatic search and analysis of log messages easier to perform. Examples of analyzing structured log messages can be found in the Parsing Structured Log Messages section.
Note
The mongod quits if it's unable to write to the log file. To ensure that mongod can write to the log file, verify that the log volume has space on the disk and that the logs are rotated.
JSON Log Output Format
All log output is in JSON format, including output sent to the following log destinations:

- Log file
- Syslog
- Stdout (standard output)

Output from the getLog command is also in JSON format.
Each log entry is output as a self-contained JSON object which follows the Relaxed Extended JSON v2.0 specification, and has the following layout and field order:
{
  "t": <Datetime>,              // timestamp
  "s": <String>,                // severity
  "c": <String>,                // component
  "id": <Integer>,              // unique identifier
  "ctx": <String>,              // context
  "svc": <String>,              // service
  "msg": <String>,              // message body
  "attr": <Object>,             // additional attributes (optional)
  "tags": <Array of strings>,   // tags (optional)
  "truncated": <Object>,        // truncation info (if truncated)
  "size": <Object>              // original size of entry (if truncated)
}
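Because every entry follows this fixed layout, field access is uniform from any JSON-aware tool. As a minimal sketch, the following Python snippet parses one log line (the example entry from earlier in this page) and reads its fields:

```python
import json

# A sample log line in the structured JSON format described above.
line = ('{"t":{"$date":"2020-05-01T15:16:17.180+00:00"},"s":"I",'
        '"c":"NETWORK","id":12345,"ctx":"listener","svc":"R",'
        '"msg":"Listening on","attr":{"address":"127.0.0.1"}}')

entry = json.loads(line)

# Every entry carries the same top-level keys, so field access is uniform.
print(entry["s"])                 # severity code: I
print(entry["c"])                 # component: NETWORK
print(entry["attr"]["address"])   # attribute value: 127.0.0.1
```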
Field descriptions:
Field Name | Type | Description
---|---|---
t | Datetime | Timestamp of the log message in ISO-8601 format. For an example, see Timestamp.
s | String | Short severity code of the log message. For an example, see Severity.
c | String | Full component string for the log message. For an example, see Components.
id | Integer | Unique identifier for the log statement. For an example, see Filtering by Known Log ID.
ctx | String | Name of the thread that caused the log statement.
svc | String | Name of the service in whose context the log statement was made. Will be S for "shard", R for "router", or - for "unknown" or "none".
msg | String | Log output message passed from the server or driver. If necessary, the message is escaped according to the JSON specification.
attr | Object | One or more key-value pairs for additional log attributes. If a log message does not include any additional attributes, the attr object is omitted. Attribute values may be referenced by their key name in the msg message body, depending on the message. If necessary, the attributes are escaped according to the JSON specification.
tags | Array of strings | Strings representing any tags applicable to the log statement. For example, ["startupWarnings"].
truncated | Object | Information about the log message truncation, if applicable. Only included if the log entry contains at least one truncated attr attribute.
size | Object | Original size of a log entry if it has been truncated. Only included if the log entry contains at least one truncated attr attribute.
Escaping
The message and attributes fields will escape control characters as necessary according to the Relaxed Extended JSON v2.0 specification:
Character Represented | Escape Sequence |
---|---|
Quotation Mark ( " ) | \" |
Backslash ( \ ) | \\ |
Backspace ( 0x08 ) | \b |
Formfeed ( 0x0C ) | \f |
Newline ( 0x0A ) | \n |
Carriage return ( 0x0D ) | \r |
Horizontal tab ( 0x09 ) | \t |
Control characters not listed above are escaped with \uXXXX, where "XXXX" is the Unicode code point in hexadecimal. Bytes with invalid UTF-8 encoding are replaced with the Unicode replacement character, represented by \ufffd.

An example of message escaping is provided in the examples section.
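These escape sequences are standard JSON escaping, so ordinary JSON libraries produce and reverse them. As an illustration (not MongoDB-specific code), Python's json module applies the same sequences listed in the table above:

```python
import json

# A raw string containing several characters from the escape table:
# a quotation mark, a tab, a newline, and a backslash.
raw = 'say "hi"\tthen\na backslash: \\'

# json.dumps escapes them as \" \t \n and \\ respectively.
escaped = json.dumps(raw)
print(escaped)

# json.loads reverses the escaping, recovering the original string.
assert json.loads(escaped) == raw
```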
Truncation
Changed in version 7.3.
Any attributes that exceed the maximum size defined with maxLogSizeKB (default: 10 KB) are truncated. Truncated attributes omit log data beyond the configured limit, but retain the JSON formatting of the entry to ensure that the entry remains parsable.

For example, the following JSON object represents a command attribute that contains 5000 elements in the $in field without truncation.
Note
The example log entries are reformatted for readability.
{
  "command": {
    "find": "mycoll",
    "filter": {
      "value1": { "$in": [0, 1, 2, 3, ... 4999] },
      "value2": "foo"
    },
    "sort": { "value1": 1 },
    "lsid": { "id": { "$uuid": "80a99e49-a850-467b-a26d-aeb2d8b9f42b" } },
    "$db": "testdb"
  }
}
In this example, the $in array is truncated at the 376th element because the size of the command attribute would exceed maxLogSizeKB if it included the subsequent elements. The remainder of the command attribute is omitted. The truncated log entry resembles the following output:
{
  "t": { "$date": "2021-03-17T20:30:07.212+01:00" },
  "s": "I",
  "c": "COMMAND",
  "id": 51803,
  "ctx": "conn9",
  "msg": "Slow query",
  "attr": {
    "command": {
      "find": "mycoll",
      "filter": {
        "value1": {
          "$in": [ 0, 1, ..., 376 ]  // Values in array omitted for brevity
        }
      }
    },
    ...                              // Other attr fields omitted for brevity
  },
  "truncated": {
    "command": {
      "truncated": {
        "filter": {
          "truncated": {
            "value1": {
              "truncated": {
                "$in": {
                  "truncated": {
                    "377": { "type": "double", "size": 8 }
                  },
                  "omitted": 4623
                }
              }
            }
          },
          "omitted": 1
        }
      },
      "omitted": 3
    }
  },
  "size": { "command": 21692 }
}
Log entries containing one or more truncated attributes include nested truncated objects, which provide the following information for each truncated attribute in the log entry:

- The attribute that was truncated
- The specific sub-object of that attribute that triggered truncation, if applicable
- The data type of the truncated field
- The size, in bytes, of the element that triggers truncation
- The number of elements that were omitted under each sub-object due to truncation
Log entries with truncated attributes may also include an additional size field at the end of the entry, which indicates the original size of the attribute before truncation, in this case 21692, or about 22 KB. This final size field is only shown if it is different from the size field in the truncated object, i.e. if the total object size of the attribute is different from the size of the truncated sub-object, as is the case in the example above.
Padding
When output to the file or the syslog log destinations, padding is added after the severity, context, and id fields to increase readability when viewed with a fixed-width font.
The following MongoDB log file excerpt demonstrates this padding:
{"t":{"$date":"2020-05-18T20:18:12.724+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main", "svc": "R", "msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2020-05-18T20:18:12.734+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main", "svc": "R", "msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2020-05-18T20:18:12.734+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main", "svc": "R", "msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2020-05-18T20:18:12.814+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten", "svc": "R", "msg":"MongoDB starting", "attr":{"pid":10111,"port":27001,"dbPath":"/var/lib/mongo","architecture":"64-bit","host":"centos8"}}
{"t":{"$date":"2020-05-18T20:18:12.814+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten", "svc": "R", "msg":"Build Info", "attr":{"buildInfo":{"version":"4.4.0","gitVersion":"328c35e4b883540675fb4b626c53a08f74e43cf0","openSSLVersion":"OpenSSL 1.1.1c FIPS 28 May 2019","modules":[],"allocator":"tcmalloc","environment":{"distmod":"rhel80","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2020-05-18T20:18:12.814+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten", "svc": "R", "msg":"Operating System", "attr":{"os":{"name":"CentOS Linux release 8.0.1905 (Core) ","version":"Kernel 4.18.0-80.11.2.el8_0.x86_64"}}}
Pretty Printing
When working with MongoDB structured logging, the third-party jq command-line utility is a useful tool that allows for easy pretty-printing of log entries, and powerful key-based matching and filtering.
jq is an open-source JSON parser, and is available for Linux, Windows, and macOS.

You can use jq to pretty-print log entries as follows:

Pretty-print the entire log file:

cat mongod.log | jq

Pretty-print the most recent log entry:

cat mongod.log | tail -1 | jq
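If jq is not available, any JSON-aware tool can perform the same pretty-printing. The following is a minimal Python sketch of the "most recent entry" step; the file name here is a stand-in for your actual log path:

```python
import json
import os
import tempfile

def pretty_last(path):
    """Return the last log entry of the file at `path`, pretty-printed."""
    with open(path) as f:
        lines = [line for line in f if line.strip()]
    return json.dumps(json.loads(lines[-1]), indent=2)

# Demonstrate on a small sample file standing in for mongod.log.
sample = ('{"t":{"$date":"2020-05-01T15:16:17.180+00:00"},"s":"I",'
          '"c":"NETWORK","id":12345,"msg":"Listening on"}\n')
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    f.write(sample)
    path = f.name

print(pretty_last(path))
os.remove(path)
```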
More examples of working with MongoDB structured logs are available in the Parsing Structured Log Messages section.
Configuring Log Message Destinations
MongoDB log messages can be output to file, syslog, or stdout (standard output).
To configure the log output destination, use one of the following settings, either in the configuration file or on the command line:

- Configuration file: the systemLog.destination option for file or syslog
- Command-line: the --logpath option for file, or --syslog for syslog

Not specifying either file or syslog sends all logging output to stdout.

For the full list of logging settings and options, see the systemLog settings in the configuration file reference and the logging options in the mongod and mongos references.
Note
Error messages sent to stderr (standard error), such as fatal errors during startup when not using the file or syslog log destinations, or messages having to do with misconfigured logging settings, are not affected by the log output destination setting, and are printed to stderr in plaintext format.
Log Message Field Types
Timestamp
The timestamp field type indicates the precise date and time at which the logged event occurred.
{
  "t": { "$date": "2020-05-01T15:16:17.180+00:00" },
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "svc": "R",
  "msg": "Listening on",
  "attr": { "address": "127.0.0.1" }
}
When logging to file or to syslog [1], the default format for the timestamp is iso8601-local. To modify the timestamp format, use the --timeStampFormat runtime option or the systemLog.timeStampFormat setting.
See Filtering by Date Range for log parsing examples that filter on the timestamp field.
Note
The ctime timestamp format is no longer supported.
[1] If logging to syslog, the syslog daemon generates timestamps when it logs a message, not when MongoDB issues the message. This can lead to misleading timestamps for log entries, especially when the system is under heavy load.
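Because the timestamp is ISO-8601, it parses directly with standard date libraries, which makes date-range filtering straightforward. A minimal Python sketch (assumes Python 3.7+, whose datetime.fromisoformat accepts this exact format):

```python
import json
from datetime import datetime, timezone

entry = json.loads('{"t":{"$date":"2020-05-01T15:16:17.180+00:00"},'
                   '"s":"I","c":"NETWORK","id":12345,"msg":"Listening on"}')

# The $date value is ISO-8601 with an explicit UTC offset.
ts = datetime.fromisoformat(entry["t"]["$date"])

# Simple date-range check, e.g. "entries from May 2020":
start = datetime(2020, 5, 1, tzinfo=timezone.utc)
end = datetime(2020, 6, 1, tzinfo=timezone.utc)
print(start <= ts < end)  # → True
```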
Severity
The severity field type indicates the severity level associated with the logged event.
{
  "t": { "$date": "2020-05-01T15:16:17.180+00:00" },
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "svc": "R",
  "msg": "Listening on",
  "attr": { "address": "127.0.0.1" }
}
Severity levels range from "Fatal" (most severe) to "Debug" (least severe):
Level | Description |
---|---|
F | Fatal |
E | Error |
W | Warning |
I | Informational, for verbosity level 0 |
D1 - D5 | Debug, for verbosity levels > 0. MongoDB indicates the specific debug verbosity level. For example, if the verbosity level is 2, MongoDB indicates D2. In previous versions, MongoDB log messages specified only D for all debug levels. |
You can specify the verbosity level of various components to determine the amount of Informational and Debug messages MongoDB outputs. Severity categories above these levels are always shown. [2] To set verbosity levels, see Configure Log Verbosity Levels.
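A log parser can use the severity code to surface only warning-level entries and above. A minimal Python sketch, using short sample entries modeled on the examples in this page:

```python
import json

log_lines = [
    '{"s":"I","c":"CONTROL","id":23285,"msg":"informational entry"}',
    '{"s":"W","c":"ASIO","id":22601,"msg":"warning entry"}',
    '{"s":"I","c":"NETWORK","id":4648601,"msg":"another informational entry"}',
]

# Keep only entries at Warning severity or above (W, E, F).
serious = [json.loads(line) for line in log_lines]
serious = [e for e in serious if e["s"] in ("W", "E", "F")]

print([e["id"] for e in serious])  # → [22601]
```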
Components
The component field type indicates the category a logged event is a member of, such as NETWORK or COMMAND.
{
  "t": { "$date": "2020-05-01T15:16:17.180+00:00" },
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "svc": "R",
  "msg": "Listening on",
  "attr": { "address": "127.0.0.1" }
}
Each component is individually configurable via its own verbosity filter. The available components are as follows:
ACCESS
Messages related to access control, such as authentication. To specify the log level for ACCESS components, use the systemLog.component.accessControl.verbosity setting.

COMMAND
Messages related to database commands, such as count. To specify the log level for COMMAND components, use the systemLog.component.command.verbosity setting.

CONTROL
Messages related to control activities, such as initialization. To specify the log level for CONTROL components, use the systemLog.component.control.verbosity setting.

ELECTION
Messages related specifically to replica set elections. To specify the log level for ELECTION components, set the systemLog.component.replication.election.verbosity parameter. REPL is the parent component of ELECTION. If systemLog.component.replication.election.verbosity is unset, MongoDB uses the REPL verbosity level for ELECTION components.
FTDC
Messages related to the diagnostic data collection mechanism, such as server statistics and status messages. To specify the log level for FTDC components, use the systemLog.component.ftdc.verbosity setting.

GEO
Messages related to the parsing of geospatial shapes, such as verifying the GeoJSON shapes. To specify the log level for GEO components, set the systemLog.component.geo.verbosity parameter.

INDEX
Messages related to indexing operations, such as creating indexes. To specify the log level for INDEX components, set the systemLog.component.index.verbosity parameter.

INITSYNC
Messages related to initial sync operations. To specify the log level for INITSYNC components, set the systemLog.component.replication.initialSync.verbosity parameter. REPL is the parent component of INITSYNC. If systemLog.component.replication.initialSync.verbosity is unset, MongoDB uses the REPL verbosity level for INITSYNC components.
JOURNAL
Messages related specifically to storage journaling activities. To specify the log level for JOURNAL components, use the systemLog.component.storage.journal.verbosity setting. STORAGE is the parent component of JOURNAL. If systemLog.component.storage.journal.verbosity is unset, MongoDB uses the STORAGE verbosity level for JOURNAL components.

NETWORK
Messages related to network activities, such as accepting connections. To specify the log level for NETWORK components, set the systemLog.component.network.verbosity parameter.

QUERY
Messages related to queries, including query planner activities. To specify the log level for QUERY components, set the systemLog.component.query.verbosity parameter.

QUERYSTATS
Messages related to $queryStats operations. To specify the log level for QUERYSTATS components, set the systemLog.component.queryStats.verbosity parameter.

RECOVERY
Messages related to storage recovery activities. To specify the log level for RECOVERY components, use the systemLog.component.storage.recovery.verbosity setting. STORAGE is the parent component of RECOVERY. If systemLog.component.storage.recovery.verbosity is unset, MongoDB uses the STORAGE verbosity level for RECOVERY components.
REPL
Messages related to replica sets, such as initial sync, heartbeats, steady state replication, and rollback. [2] To specify the log level for REPL components, set the systemLog.component.replication.verbosity parameter. REPL is the parent component of the ELECTION, INITSYNC, REPL_HB, and ROLLBACK components.

REPL_HB
Messages related specifically to replica set heartbeats. To specify the log level for REPL_HB components, set the systemLog.component.replication.heartbeats.verbosity parameter. REPL is the parent component of REPL_HB. If systemLog.component.replication.heartbeats.verbosity is unset, MongoDB uses the REPL verbosity level for REPL_HB components.

ROLLBACK
Messages related to rollback operations. To specify the log level for ROLLBACK components, set the systemLog.component.replication.rollback.verbosity parameter. REPL is the parent component of ROLLBACK. If systemLog.component.replication.rollback.verbosity is unset, MongoDB uses the REPL verbosity level for ROLLBACK components.

SHARDING
Messages related to sharding activities, such as the startup of the mongos. To specify the log level for SHARDING components, use the systemLog.component.sharding.verbosity setting.
STORAGE
Messages related to storage activities, such as processes involved in the fsync command. To specify the log level for STORAGE components, use the systemLog.component.storage.verbosity setting.

TXN
Messages related to multi-document transactions. To specify the log level for TXN components, use the systemLog.component.transaction.verbosity setting.

WRITE
Messages related to write operations, such as update commands. To specify the log level for WRITE components, use the systemLog.component.write.verbosity setting.

WT
New in version 5.3.
Messages related to the WiredTiger storage engine. To specify the log level for WT components, use the systemLog.component.storage.wt.verbosity setting.

WTBACKUP
New in version 5.3.
Messages related to backup operations performed by the WiredTiger storage engine. To specify the log level for WTBACKUP components, use the systemLog.component.storage.wt.wtBackup.verbosity setting.
WTCHKPT
New in version 5.3.
Messages related to checkpoint operations performed by the WiredTiger storage engine. To specify the log level for WTCHKPT components, use the systemLog.component.storage.wt.wtCheckpoint.verbosity setting.

WTCMPCT
New in version 5.3.
Messages related to compaction operations performed by the WiredTiger storage engine. To specify the log level for WTCMPCT components, use the systemLog.component.storage.wt.wtCompact.verbosity setting.

WTEVICT
New in version 5.3.
Messages related to eviction operations performed by the WiredTiger storage engine. To specify the log level for WTEVICT components, use the systemLog.component.storage.wt.wtEviction.verbosity setting.

WTHS
New in version 5.3.
Messages related to the history store of the WiredTiger storage engine. To specify the log level for WTHS components, use the systemLog.component.storage.wt.wtHS.verbosity setting.

WTRECOV
New in version 5.3.
Messages related to recovery operations performed by the WiredTiger storage engine. To specify the log level for WTRECOV components, use the systemLog.component.storage.wt.wtRecovery.verbosity setting.

WTRTS
New in version 5.3.
Messages related to rollback to stable (RTS) operations performed by the WiredTiger storage engine. To specify the log level for WTRTS components, use the systemLog.component.storage.wt.wtRTS.verbosity setting.

WTSLVG
New in version 5.3.
Messages related to salvage operations performed by the WiredTiger storage engine. To specify the log level for WTSLVG components, use the systemLog.component.storage.wt.wtSalvage.verbosity setting.
WTTS
New in version 5.3.
Messages related to timestamps used by the WiredTiger storage engine. To specify the log level for WTTS components, use the systemLog.component.storage.wt.wtTimestamp.verbosity setting.

WTTXN
New in version 5.3.
Messages related to transactions performed by the WiredTiger storage engine. To specify the log level for WTTXN components, use the systemLog.component.storage.wt.wtTransaction.verbosity setting.

WTVRFY
New in version 5.3.
Messages related to verification operations performed by the WiredTiger storage engine. To specify the log level for WTVRFY components, use the systemLog.component.storage.wt.wtVerify.verbosity setting.

WTWRTLOG
New in version 5.3.
Messages related to log write operations performed by the WiredTiger storage engine. To specify the log level for WTWRTLOG components, use the systemLog.component.storage.wt.wtWriteLog.verbosity setting.

-
Messages not associated with a named component. Unnamed components have the default log level specified in the systemLog.verbosity setting. The systemLog.verbosity setting is the default setting for both named and unnamed components.
See Filtering by Component for log parsing examples that filter on the component field.
Client Data
MongoDB drivers and client applications (including mongosh) have the ability to send identifying information at the time of connection to the server. After the connection is established, the client does not send the identifying information again unless the connection is dropped and reestablished.
This identifying information is contained in the attributes field of the log entry. The exact information included varies by client.
Below is a sample log message containing the client data document as transmitted from a mongosh connection. The client data is contained in the doc object in the attributes field:
{"t":{"$date":"2020-05-20T16:21:31.561+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn202", "svc": "R", "msg":"client metadata", "attr":{"remote":"127.0.0.1:37106","client":"conn202","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.0"},"os":{"type":"Linux","name":"CentOS Linux release 8.0.1905 (Core) ","architecture":"x86_64","version":"Kernel 4.18.0-80.11.2.el8_0.x86_64"}}}}
When secondary members of a replica set initiate a connection to a primary, they send similar data. A sample log message containing this initiation connection might appear as follows. The client data is contained in the doc object in the attributes field:
{"t":{"$date":"2020-05-20T16:33:40.595+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn214", "svc": "R", "msg":"client metadata", "attr":{"remote":"127.0.0.1:37176","client":"conn214","doc":{"driver":{"name":"NetworkInterfaceTL","version":"4.4.0"},"os":{"type":"Linux","name":"CentOS Linux release 8.0.1905 (Core) ","architecture":"x86_64","version":"Kernel 4.18.0-80.11.2.el8_0.x86_64"}}}}
See the examples section for a pretty-printed example showing client data.
For a complete description of client information and required fields, see the MongoDB Handshake specification.
Verbosity Levels
You can specify the logging verbosity level to increase or decrease the amount of log messages MongoDB outputs. Verbosity levels can be adjusted for all components together, or for specific named components individually.
Verbosity affects log entries in the severity categories Informational and Debug only. Severity categories above these levels are always shown.
You might set verbosity levels to a high value to show detailed logging for debugging or development, or to a low value to minimize writes to the log on a vetted production deployment. [2]
View Current Log Verbosity Level
To view the current verbosity levels, use the db.getLogComponents() method:
db.getLogComponents()
Your output might resemble the following:
{
  "verbosity" : 0,
  "accessControl" : { "verbosity" : -1 },
  "command" : { "verbosity" : -1 },
  ...
  "storage" : {
    "verbosity" : -1,
    "recovery" : { "verbosity" : -1 },
    "journal" : { "verbosity" : -1 }
  },
  ...
The initial verbosity entry is the parent verbosity level for all components, while the individual named components that follow, such as accessControl, indicate the specific verbosity level for that component, overriding the global verbosity level for that particular component if set.

A value of -1 indicates that the component inherits the verbosity level of its parent, if it has one (as with recovery above, inheriting from storage), or the global verbosity level if it does not (as with command). Inheritance relationships for verbosity levels are indicated in the components section.
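The inheritance rule above can be sketched as follows: walk from the global level down to the component, and let any explicit (non -1) setting along the path override the level resolved so far. This is an illustrative model only, not MongoDB code; the dictionary shape mirrors the db.getLogComponents() output shown above:

```python
def effective_verbosity(levels, path):
    """Resolve a component's verbosity, applying -1 inheritance.

    `levels` mirrors db.getLogComponents() output: nested dicts with a
    "verbosity" key at each level; `path` is e.g. ["storage", "recovery"].
    """
    resolved = levels["verbosity"]  # start from the global default
    node = levels
    for name in path:
        node = node[name]
        if node["verbosity"] != -1:  # explicit setting overrides the parent
            resolved = node["verbosity"]
    return resolved

levels = {
    "verbosity": 0,
    "command": {"verbosity": -1},
    "storage": {
        "verbosity": 2,
        "recovery": {"verbosity": -1},
        "journal": {"verbosity": 1},
    },
}

print(effective_verbosity(levels, ["command"]))              # → 0 (global)
print(effective_verbosity(levels, ["storage", "recovery"]))  # → 2 (from storage)
print(effective_verbosity(levels, ["storage", "journal"]))   # → 1 (explicit)
```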
Configure Log Verbosity Levels
You can configure the verbosity level using the systemLog.verbosity and systemLog.component.<name>.verbosity settings, the logComponentVerbosity parameter, or the db.setLogLevel() method. [2]
systemLog Verbosity Settings
To configure the default log level for all components, use the systemLog.verbosity setting. To configure the level of specific components, use the systemLog.component.<name>.verbosity settings.

For example, the following configuration sets the systemLog.verbosity to 1, the systemLog.component.query.verbosity to 2, the systemLog.component.storage.verbosity to 2, and the systemLog.component.storage.journal.verbosity to 1:
systemLog:
  verbosity: 1
  component:
    query:
      verbosity: 2
    storage:
      verbosity: 2
      journal:
        verbosity: 1
You would set these values in the configuration file or on the command line for your mongod or mongos instance.
All components not specified explicitly in the configuration have a verbosity level of -1, indicating that they inherit the verbosity level of their parent, if they have one, or the global verbosity level (systemLog.verbosity) if they do not.
logComponentVerbosity Parameter
To set the logComponentVerbosity parameter, pass a document with the verbosity settings to change.

For example, the following sets the default verbosity level to 1, the query verbosity to 2, the storage verbosity to 2, and the storage.journal verbosity to 1:
db.adminCommand( {
   setParameter: 1,
   logComponentVerbosity: {
      verbosity: 1,
      query: { verbosity: 2 },
      storage: {
         verbosity: 2,
         journal: { verbosity: 1 }
      }
   }
} )
You would set these values from mongosh.
db.setLogLevel()
Use the db.setLogLevel() method to update the log level of a single component. For a component, you can specify a verbosity level of 0 to 5, or you can specify -1 to inherit the verbosity of the parent. For example, the following sets the systemLog.component.query.verbosity to its parent verbosity (i.e. the default verbosity):
db.setLogLevel(-1, "query")
You would set this value from mongosh.
[2] (1, 2, 3, 4, 5) Secondary members of a replica set log oplog entries that take longer than the slow operation threshold to apply. These slow oplog messages are logged for the secondaries in the diagnostic log, under the REPL component.
Logging Slow Operations
Client operations (such as queries) appear in the log if their duration exceeds the slow operation threshold or when the log verbosity level is 1 or higher. [2] These log entries include the full command object associated with the operation.
The profiler entries and the diagnostic log messages (i.e. mongod/mongos log messages) for read/write operations include:

- planCacheShapeHash to help identify slow queries with the same plan cache query shape. Starting in MongoDB 8.0, the pre-existing queryHash field is renamed to planCacheShapeHash. If you're using an earlier MongoDB version, you'll see queryHash instead of planCacheShapeHash.
- planCacheKey to provide more insight into the query plan cache for slow queries.
Starting in MongoDB 5.0, slow operation log messages include a remote field specifying the client IP address.

Starting in MongoDB 6.2, slow operation log messages include a queryFramework field that indicates which query engine executed the query:

- queryFramework: "classic" indicates that the classic engine executed the query.
- queryFramework: "sbe" indicates that the slot-based query execution engine executed the query.
Starting in MongoDB 6.1, slow operation log messages include cache refresh time fields.
Starting in MongoDB 6.3, slow operation log messages and database profiler entries include a cpuNanos field that specifies the total CPU time spent by a query operation, in nanoseconds. The cpuNanos field is only available on Linux systems.
Starting in MongoDB 7.0 (and 6.0.13, 5.0.24), the totalOplogSlotDurationMicros field in the slow query log message shows the time between a write operation getting a commit timestamp to commit the storage engine writes and actually committing. mongod supports parallel writes. However, it commits write operations with commit timestamps in any order.
Example
Consider the following writes with commit timestamps:

- writeA with Timestamp1
- writeB with Timestamp2
- writeC with Timestamp3
Suppose writeB commits first at Timestamp2. Replication is paused until writeA commits because writeA's oplog entry with Timestamp1 is required for replication to copy the oplog to secondary replica set members.
For a pretty-printed example of a slow operation log entry, see Log Message Examples.
Starting in MongoDB 8.0, the queryShapeHash field for a query shape is also included in the slow query log when available.

Also starting in MongoDB 8.0, the slow query output includes a queues document that contains information about the operation's queues. Each queue in the queues field contains a totalTimeQueuedMicros field that contains the total cumulative time, in microseconds, that the operation spent in the corresponding queue.
Time Waiting for Shards Logged in remoteOpWaitMillis Field
New in version 5.0.
Starting in MongoDB 5.0, you can use the remoteOpWaitMillis log field to obtain the wait time (in milliseconds) for results from shards.

remoteOpWaitMillis is only logged if you configure slow operation logging.
To determine if a merge operation or a shard issue is causing a slow query, compare the workingMillis and remoteOpWaitMillis time fields in the log. workingMillis is the total time the query took to complete. Specifically:

- If workingMillis is slightly longer than remoteOpWaitMillis, then most of the time was spent waiting for a shard response. For example, a workingMillis of 17 and a remoteOpWaitMillis of 15.
- If workingMillis is significantly longer than remoteOpWaitMillis, then most of the time was spent performing the merge. For example, a workingMillis of 100 and a remoteOpWaitMillis of 15.
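The comparison above can be turned into a rough triage helper. This is an illustrative heuristic only, not a documented rule; in particular, the "merge time exceeds shard wait time" threshold is an assumption chosen for the sketch:

```python
def slow_query_bottleneck(working_millis, remote_op_wait_millis):
    """Rough heuristic for where a slow sharded query spent its time.

    Merge time is approximated as the part of workingMillis not spent
    waiting on shards. The comparison threshold is illustrative, not a
    documented MongoDB rule.
    """
    merge_time = working_millis - remote_op_wait_millis
    if merge_time > remote_op_wait_millis:
        return "merge"       # most time spent merging results
    return "shard wait"      # most time spent waiting for shard responses

print(slow_query_bottleneck(17, 15))    # → shard wait
print(slow_query_bottleneck(100, 15))   # → merge
```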
Log Redaction
Queryable Encryption Log Redaction
When using Queryable Encryption, CRUD operations against encrypted collections are omitted from the slow query log. For details, see Queryable Encryption redaction.
Enterprise Log Redaction
Available in MongoDB Enterprise only
A mongod or mongos running with redactClientLogData redacts any message accompanying a given log event before logging, leaving only metadata, source files, or line numbers related to the event. redactClientLogData prevents potentially sensitive information from entering the system log at the cost of diagnostic detail.
For example, the following operation inserts a document into a mongod running without log redaction. The mongod has the log verbosity level set to 1:
db.clients.insertOne( { "name" : "Joe", "PII" : "Sensitive Information" } )
This operation produces the following log event:
{
  "t": { "$date": "2024-07-19T15:36:55.024-07:00" },
  "s": "I",
  "c": "COMMAND",
  ...
  "attr": {
    "type": "command",
    ...
    "appName": "mongosh 2.2.10",
    "command": {
      "insert": "clients",
      "documents": [
        {
          "name": "Joe",
          "PII": "Sensitive Information",
          "_id": { "$oid": "669aea8792c7fd822d3e1d8c" }
        }
      ],
      "ordered": true,
      ...
    }
    ...
  }
}
When mongod runs with redactClientLogData and performs the same insert operation, it produces the following log event:
{
  "t": { "$date": "2024-07-19T15:36:55.024-07:00" },
  "s": "I",
  "c": "COMMAND",
  ...
  "attr": {
    "type": "command",
    ...
    "appName": "mongosh 2.2.10",
    "command": {
      "insert": "###",
      "documents": [ { "name": "###", "PII": "###", "_id": "###" } ],
      "ordered": "###",
      ...
    }
    ...
  }
}
Use redactClientLogData in conjunction with Encryption at Rest and TLS/SSL (Transport Encryption) to assist compliance with regulatory requirements.
Parsing Structured Log Messages
Log parsing is the act of programmatically searching through and analyzing log files, often in an automated manner. With the introduction of structured logging, log parsing is made simpler and more powerful. For example:
Log message fields are presented as key-value pairs. Log parsers can query by specific keys of interest to efficiently filter results.
Log messages always contain the same message structure. Log parsers can reliably extract information from any log message, without needing to code for cases where information is missing or formatted differently.
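Because every entry is valid JSON with a fixed structure, any JSON parser can read the fields directly by key. The following is a minimal Python sketch (the log line shown is the example entry from earlier on this page):

```python
import json

# A sample structured log line, as it would appear in the MongoDB log file.
line = ('{"t":{"$date":"2020-05-01T15:16:17.180+00:00"},"s":"I",'
        '"c":"NETWORK","id":12345,"ctx":"listener","svc":"R",'
        '"msg":"Listening on","attr":{"address":"127.0.0.1"}}')

entry = json.loads(line)

# Fields are read by key, with no ad hoc text matching required.
print(entry["s"], entry["c"], entry["msg"], entry["attr"]["address"])
```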
The following examples demonstrate common log parsing workflows when working with MongoDB JSON log output.
Log Parsing Examples
When working with MongoDB structured logging, the third-party jq command-line utility is a useful tool that allows for easy pretty-printing of log entries, and powerful key-based matching and filtering.
jq is an open-source JSON parser, and is available for Linux, Windows, and macOS. These examples use jq to simplify log parsing.
Counting Unique Messages
The following example shows the top 10 unique message values in a given log file, sorted by frequency:
jq -r ".msg" /var/log/mongodb/mongod.log | sort | uniq -c | sort -rn | head -10
Monitoring Connections
Remote client connections are shown in the log under the "remote" key in the attribute object. The following counts all unique connections over the course of the log file and presents them in descending order by number of occurrences:
jq -r '.attr.remote' /var/log/mongodb/mongod.log | grep -v 'null' | sort | uniq -c | sort -r
Note that connections from the same IP address, but connecting over different ports, are treated as different connections by this command. You could limit output to consider IP addresses only, with the following change:
jq -r '.attr.remote' /var/log/mongodb/mongod.log | grep -v 'null' | awk -F':' '{print $1}' | sort | uniq -c | sort -r
Analyzing Driver Connections
The following example counts all remote MongoDB driver connections, and presents each driver type and version in descending order by number of occurrences:
jq -cr '.attr.doc.driver' /var/log/mongodb/mongod.log | grep -v null | sort | uniq -c | sort -rn
Analyzing Client Types
The following example analyzes the reported client data of remote MongoDB driver connections and client applications, including mongosh, and prints a total for each unique operating system type that connected, sorted by frequency:
jq -r '.attr.doc.os.type' /var/log/mongodb/mongod.log | grep -v null | sort | uniq -c | sort -rn
The string "Darwin", as reported in this log field, represents a macOS client.
Analyzing Slow Queries
With slow operation logging enabled, the following returns only the slow operations that took 2000 milliseconds or longer, for further analysis:
jq 'select(.attr.workingMillis>=2000)' /var/log/mongodb/mongod.log
Consult the jq documentation for more information on the jq filters shown in this example.
Filtering by Component
Log components (the third field in the JSON log output format) indicate the general category a given log message falls under. Filtering by component is often a great starting place when parsing log messages for relevant events.
The following example prints only the log messages of component type REPL:
jq 'select(.c=="REPL")' /var/log/mongodb/mongod.log
The following example prints all log messages except those of component type REPL:
jq 'select(.c!="REPL")' /var/log/mongodb/mongod.log
The following example prints log messages of component type REPL or STORAGE:
jq 'select( .c as $c | ["REPL", "STORAGE"] | index($c) )' /var/log/mongodb/mongod.log
Consult the jq documentation for more information on the jq filters shown in this example.
Filtering by Known Log ID
Log IDs (the fifth field in the JSON log output format) map to specific log events, and can be relied upon to remain stable over successive MongoDB releases.
As an example, you might be interested in the following two log events, showing a client connection followed by a disconnection:
{"t":{"$date":"2020-06-01T13:06:59.027-0500"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener", "svc": "R", "msg":"connection accepted from {session_remote} #{session_id} ({connectionCount}{word} now open)", "attr":{"session_remote":"127.0.0.1:61298", "session_id":164,"connectionCount":11,"word":" connections"}}
{"t":{"$date":"2020-06-01T13:07:03.490-0500"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn157", "svc": "R", "msg":"end connection {remote} ({connectionCount}{word} now open)", "attr":{"remote":"127.0.0.1:61298","connectionCount":10,"word":" connections"}}
The log IDs for these two entries are 22943 and 22944 respectively. You could then filter your log output to show only these log IDs, effectively showing only client connection activity, using the following jq syntax:
jq 'select( .id as $id | [22943, 22944] | index($id) )' /var/log/mongodb/mongod.log
Consult the jq documentation for more information on the jq filters shown in this example.
Filtering by Date Range
Log output can be further refined by filtering on the timestamp field, limiting log entries returned to a specific date range. For example, the following returns all log entries that occurred on April 15th, 2020:
jq 'select(.t["$date"] >= "2020-04-15T00:00:00.000" and .t["$date"] <= "2020-04-15T23:59:59.999")' /var/log/mongodb/mongod.log
Note that this syntax includes the full timestamp, including milliseconds but excluding the timezone offset.
Filtering by date range can be combined with any of the examples above, creating weekly reports or yearly summaries for example. The following syntax expands the "Monitoring Connections" example from earlier to limit results to the month of May, 2020:
jq 'select(.t["$date"] >= "2020-05-01T00:00:00.000" and .t["$date"] <= "2020-05-31T23:59:59.999" and .attr.remote)' /var/log/mongodb/mongod.log
Consult the jq documentation for more information on the jq filters shown in this example.
Log Ingestion Services
Log ingestion services are third-party products that intake and aggregate log files, usually from a distributed cluster of systems, and provide ongoing analysis of that data in a central location.
The JSON log format allows for more flexibility when working with log ingestion and analysis services. Whereas plaintext logs generally require some manner of transformation before being eligible for use with these products, JSON files can often be consumed out of the box, depending on the service. Further, JSON-formatted logs offer more control when performing filtering for these services, as the key-value structure offers the ability to specifically import only the fields of interest, while omitting the rest.
Consult the documentation for your chosen third-party log ingestion service for more information.
Log Message Examples
The following examples show log messages in JSON output format.
These log messages are presented in pretty-printed format for convenience.
Startup Warning
This example shows a startup warning:
{ "t": { "$date": "2020-05-20T19:17:06.188+00:00" }, "s": "W", "c": "CONTROL", "id": 22120, "ctx": "initandlisten", "svc": "R", "msg": "Access control is not enabled for the database. Read and write access to data and configuration is unrestricted", "tags": [ "startupWarnings" ] }
Client Connection
This example shows a client connection that includes client data:
{ "t": { "$date": "2020-05-20T19:18:40.604+00:00" }, "s": "I", "c": "NETWORK", "id": 51800, "ctx": "conn281", "svc": "R", "msg": "client metadata", "attr": { "remote": "192.168.14.15:37666", "client": "conn281", "doc": { "application": { "name": "MongoDB Shell" }, "driver": { "name": "MongoDB Internal Client", "version": "4.4.0" }, "os": { "type": "Linux", "name": "CentOS Linux release 8.0.1905 (Core) ", "architecture": "x86_64", "version": "Kernel 4.18.0-80.11.2.el8_0.x86_64" } } } }
Slow Operation
Starting in MongoDB 8.0, slow operations are logged based on the time that MongoDB spends working on that operation, rather than the total latency for the operation.
You can use the metrics in the slow operation log to identify where an operation spends time in its lifecycle, which helps identify possible performance improvements.
In the following example log message:
The amount of time spent waiting for resources while executing the query is shown in these metrics:
queues.execution.totalTimeQueuedMicros
timeAcquiringMicros
workingMillis is the amount of time that MongoDB spends working on the operation.
durationMillis is the operation's total latency.
{ "t":{ "$date":"2024-06-01T13:24:10.034+00:00" }, "s":"I", "c":"COMMAND", "id":51803, "ctx":"conn3", "msg":"Slow query", "attr":{ "type":"command", "isFromUserConnection":true, "ns":"db.coll", "collectionType":"normal", "appName":"MongoDB Shell", "command":{ "find":"coll", "filter":{ "b":-1 }, "sort":{ "splitPoint":1 }, "readConcern":{ }, "$db":"db" }, "planSummary":"COLLSCAN", "planningTimeMicros":87, "keysExamined":0, "docsExamined":20889, "hasSortStage":true, "nBatches":1, "cursorExhausted":true, "numYields":164, "nreturned":99, "planCacheShapeHash":"9C05019A", "planCacheKey":"C41063D6", "queryFramework":"classic", "reslen":96, "locks":{ "ReplicationStateTransition":{ "acquireCount":{ "w":3 } }, "Global":{ "acquireCount":{ "r":327, "w":1 } }, "Database":{ "acquireCount":{ "r":1 }, "acquireWaitCount":{ "r":1 }, "timeAcquiringMicros":{ "r":2814 } }, "Collection":{ "acquireCount":{ "w":1 } } }, "flowControl":{ "acquireCount":1, "acquireWaitCount":1, "timeAcquiringMicros":8387 }, "readConcern":{ "level":"local", "provenance":"implicitDefault" }, "storage":{ }, "cpuNanos":20987385, "remote":"127.0.0.1:47150", "protocol":"op_msg", "queues":{ "ingress":{ "admissions":7, "totalTimeQueuedMicros":0 }, "execution":{ "admissions":328, "totalTimeQueuedMicros":2109 } }, "workingMillis":89, "durationMillis":101 } }
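These timing fields can also be read programmatically. The following is a minimal Python sketch using an abbreviated version of the entry above; the gap between durationMillis and workingMillis approximates the time the operation spent not actively working (for example, queued or waiting on locks):

```python
import json

# Abbreviated slow-query entry, keeping only the timing fields of interest.
entry = json.loads('''{"s":"I","c":"COMMAND","id":51803,"msg":"Slow query",
  "attr":{"workingMillis":89,"durationMillis":101,
          "queues":{"execution":{"totalTimeQueuedMicros":2109}}}}''')

attr = entry["attr"]
waiting_millis = attr["durationMillis"] - attr["workingMillis"]
print("worked for", attr["workingMillis"], "ms;",
      "roughly", waiting_millis, "ms of the total latency was spent waiting")
```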
Starting in MongoDB 8.0, the pre-existing queryHash field is renamed to planCacheShapeHash. If you're using an earlier MongoDB version, you'll see queryHash instead of planCacheShapeHash.
Escaping
This example demonstrates character escaping, as shown in the setName field of the attribute object:
{ "t": { "$date": "2020-05-20T19:11:09.268+00:00" }, "s": "I", "c": "REPL", "id": 21752, "ctx": "ReplCoord-0", "svc": "R", "msg": "Scheduling remote command request", "attr": { "context": "vote request", "request": "RemoteCommand 229 -- target:localhost:27003 db:admin cmd:{ replSetRequestVotes: 1, setName: \"my-replica-name\", dryRun: true, term: 3, candidateIndex: 0, configVersion: 2, configTerm: 3, lastAppliedOpTime: { ts: Timestamp(1589915409, 1), t: 3 } }" } }
View
Starting in MongoDB 5.0, log messages for slow queries on views include a resolvedViews field that contains the view details:
"resolvedViews": [ { "viewNamespace": <String>, // namespace and view name "dependencyChain": <Array of strings>, // view name and collection "resolvedPipeline": <Array of documents> // aggregation pipeline for view } ]
The following example uses the test database and creates a view named myView that sorts the documents in myCollection by the firstName field:
use test
db.createView( "myView", "myCollection", [ { $sort: { "firstName" : 1 } } ] )
Assume a slow query is run on myView. The following example log message contains a resolvedViews field for myView:
{ "t": { "$date": "2021-09-30T17:53:54.646+00:00" }, "s": "I", "c": "COMMAND", "id": 51803, "ctx": "conn249", "svc": "R", "msg": "Slow query", "attr": { "type": "command", "ns": "test.myView", "appName": "MongoDB Shell", "command": { "find": "myView", "filter": {}, "lsid": { "id": { "$uuid": "ad176471-60e5-4e82-b977-156a9970d30f" } }, "$db": "test" }, "planSummary":"COLLSCAN", "resolvedViews": [ { "viewNamespace": "test.myView", "dependencyChain": [ "myView", "myCollection" ], "resolvedPipeline": [ { "$sort": { "firstName": 1 } } ] } ], "keysExamined": 0, "docsExamined": 1, "hasSortStage": true, "cursorExhausted": true, "numYields": 0, "nreturned": 1, "planCacheShapeHash": "3344645B", "planCacheKey": "1D3DE690", "queryFramework": "classic", "reslen": 134, "locks": { "ParallelBatchWriterMode": { "acquireCount": { "r": 1 } }, "ReplicationStateTransition": { "acquireCount": { "w": 1 } }, "Global": { "acquireCount": { "r": 4 } }, "Database": { "acquireCount": {"r": 1 } }, "Collection": { "acquireCount": { "r": 1 } }, "Mutex": { "acquireCount": { "r": 4 } } }, "storage": {}, "remote": "127.0.0.1:34868", "protocol": "op_msg", "workingMillis": 0, "durationMillis": 0 } }
Starting in MongoDB 8.0, the pre-existing queryHash field is renamed to planCacheShapeHash. If you're using an earlier MongoDB version, you'll see queryHash instead of planCacheShapeHash.
Authorization
Starting in MongoDB 5.0, log messages for slow queries include a system.profile.authorization section. These metrics help determine if a request is delayed because of contention for the user authorization cache.
"authorization": { "startedUserCacheAcquisitionAttempts": 1, "completedUserCacheAcquisitionAttempts": 1, "userCacheWaitTimeMicros": 508 },
Session Workflow Log Message
Starting in MongoDB 6.3, a message is added to the log if the time to send an operation response exceeds the slowms threshold option.
The message is known as a session workflow log message and contains the times for the various phases of an operation in a database session.
Example session workflow log message:
{ "t": { "$date": "2022-12-14T17:22:44.233+00:00" }, "s": "I", "c": "EXECUTOR", "id": 6983000, "ctx": "conn1", "svc": "R", "msg": "Slow network response send time", "attr": { "elapsed": { "totalMillis": 109, "activeMillis": 30, "receiveWorkMillis": 2, "processWorkMillis": 10, "sendResponseMillis": 22, "yieldMillis": 15, "finalizeMillis": 30 } } }
The times are in milliseconds.
A session workflow message is added to the log if sendResponseMillis exceeds the slowms threshold option.
Field | Description
---|---
totalMillis | Total time to perform the operation in the session, which includes the time spent waiting for a message to be received.
activeMillis | Time between receiving a message and completing the operation associated with that message. Time includes sending a response and performing any clean up.
receiveWorkMillis | Time to receive the operation information over the network.
processWorkMillis | Time to process the operation and create the response.
sendResponseMillis | Time to send the response.
yieldMillis | Time between releasing the worker thread and the thread being used again.
finalizeMillis | Time to end and close the session workflow.
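The trigger condition described above can be checked against a logged elapsed object. The following is an illustrative Python sketch, assuming the default slowms value of 100 milliseconds (slowms is configurable per deployment):

```python
# Sketch of the condition described above: a session workflow message
# appears when sendResponseMillis exceeds the slowms threshold.

def triggers_session_workflow_message(elapsed, slowms=100):
    """slowms defaults to 100 ms in MongoDB, but is configurable."""
    return elapsed.get("sendResponseMillis", 0) > slowms

# The elapsed object from the example log message above.
elapsed = {"totalMillis": 109, "activeMillis": 30, "receiveWorkMillis": 2,
           "processWorkMillis": 10, "sendResponseMillis": 22,
           "yieldMillis": 15, "finalizeMillis": 30}

print(triggers_session_workflow_message(elapsed, slowms=20))  # True
print(triggers_session_workflow_message(elapsed))             # False at the default slowms
```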
Connection Acquisition To Wire Log Message
Starting in MongoDB 6.3, a message is added to the log if the time that an operation waited between acquisition of a server connection and writing the bytes to send to the server over the network exceeds 1 millisecond.
By default, the message is logged at the "I" information level, and at most once every second to avoid too many log messages. If you must obtain every log message, change your log level to debug.
If the operation wait time exceeds 1 millisecond and the message is logged at the information level within the last second, then the next message is logged at the debug level. Otherwise, the next message is logged at the information level.
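The alternation described above amounts to a simple once-per-second rate limiter. The following is an illustrative Python sketch of that logic (a simplification for explanation, not MongoDB source code):

```python
# Illustrative once-per-second rate limiter: the first over-threshold
# message in any one-second window is logged at "I" (information);
# subsequent ones within the same window drop to "D" (debug).

def log_level_for(wait_micros, now, state, threshold_micros=1000):
    """Return the level the message would be logged at, or None."""
    if wait_micros <= threshold_micros:
        return None  # below the 1 ms threshold: not logged at all
    if now - state.get("last_info", float("-inf")) >= 1.0:
        state["last_info"] = now
        return "I"
    return "D"

state = {}
print(log_level_for(1683, 0.0, state))  # "I": first message this second
print(log_level_for(1683, 0.5, state))  # "D": within the same second
print(log_level_for(1683, 1.5, state))  # "I": a new second has started
```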
Example log message:
{ "t": { "$date":"2023-01-31T15:22:29.473+00:00" }, "s": "I", "c": "NETWORK", "id": 6496702, "ctx": "ReplicaSetMonitor-TaskExecutor", "svc": "R", "msg": "Acquired connection for remote operation and completed writing to wire", "attr": { "durationMicros": 1683 } }
The following table describes the durationMicros field in attr.
Field | Description
---|---
durationMicros | Time in microseconds that the operation waited between acquisition of a server connection and writing the bytes to send to the server over the network.
Cache Refresh Times
Starting in MongoDB 6.1, log messages for slow queries include the following cache refresh time fields:
catalogCacheDatabaseLookupDurationMillis
catalogCacheCollectionLookupDurationMillis
databaseVersionRefreshDurationMillis
shardVersionRefreshMillis
Starting in MongoDB 7.0, log messages for slow queries also include the catalogCacheIndexLookupDurationMillis field that indicates the time that the operation spent fetching information from the index cache. This release also renames the shardVersionRefreshMillis field to placementVersionRefreshMillis.
The following example includes:
catalogCacheDatabaseLookupDurationMillis
catalogCacheCollectionLookupDurationMillis
catalogCacheIndexLookupDurationMillis
{ "t": { "$date": "2023-03-17T09:47:55.929+00:00" }, "s": "I", "c": "COMMAND", "id": 51803, "ctx": "conn14", "svc": "R", "msg": "Slow query", "attr": { "type": "command", "ns": "db.coll", "appName": "MongoDB Shell", "command": { "insert": "coll", "ordered": true, "lsid": { "id": { "$uuid": "5d50b19c-8559-420a-a122-8834e012274a" } }, "$clusterTime": { "clusterTime": { "$timestamp": { "t": 1679046398, "i": 8 } }, "signature": { "hash": { "$binary": { "base64": "AAAAAAAAAAAAAAAAAAAAAAAAAAA=", "subType": "0" } }, "keyId": 0 } }, "$db": "db" }, "catalogCacheDatabaseLookupDurationMillis": 19, "catalogCacheCollectionLookupDurationMillis": 68, "catalogCacheIndexLookupDurationMillis": 16026, "nShards": 1, "ninserted": 1, "numYields": 232, "reslen": 96, "readConcern": { "level": "local", "provenance": "implicitDefault" }, "cpuNanos": 29640339, "remote": "127.0.0.1:48510", "protocol": "op_msg", "remoteOpWaitMillis": 4078, "workingMillis": 20334, "durationMillis": 20334 } }
Linux Syslog Limitations
In a Linux system, messages are subject to the rules defined in the Linux configuration file /etc/systemd/journald.conf. By default, log message bursts are limited to 1000 messages within a 30 second period. To see more messages, increase the RateLimitBurst parameter in /etc/systemd/journald.conf.
Download Your Logs
You can use MongoDB Atlas to download a zipped file containing the logs for a selected hostname or process in your database deployment. To learn more, see View and Download MongoDB Logs.