
In-Memory Storage Engine

On this page

  • Specify In-Memory Storage Engine
  • Concurrency
  • Memory Use
  • Durability
  • Transactions
  • Deployment Architectures

Changed in version 3.2.6.

Starting in MongoDB Enterprise version 3.2.6, the in-memory storage engine is generally available (GA) in the 64-bit builds. Other than some metadata and diagnostic data, the in-memory storage engine does not maintain any on-disk data, including configuration data, indexes, user credentials, and so on.

By avoiding disk I/O, the in-memory storage engine allows for more predictable latency of database operations.

Specify In-Memory Storage Engine

To select the in-memory storage engine, specify:

  • inMemory for the --storageEngine option, or the storage.engine setting if using a configuration file.

  • --dbpath, or storage.dbPath if using a configuration file. Although the in-memory storage engine does not write data to the filesystem, it maintains small metadata files and diagnostic data in the --dbpath, as well as temporary files for building large indexes.

For example, from the command line:

mongod --storageEngine inMemory --dbpath <path>

Or, if using the YAML configuration file format:

storage:
   engine: inMemory
   dbPath: <path>

See inMemory Options for configuration options specific to this storage engine. Most mongod configuration options are available for use with the in-memory storage engine, except for options related to data persistence, such as journaling or encryption at rest configuration.

Warning

The in-memory storage engine does not persist data after process shutdown.

Concurrency

The in-memory storage engine uses document-level concurrency control for write operations. As a result, multiple clients can modify different documents of a collection at the same time.
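
For example, under document-level concurrency, two updates that target different documents in the same collection can be applied by separate clients at the same time without blocking each other. A minimal mongosh sketch; the accounts collection and its documents are assumptions:

db.accounts.updateOne({ _id: 1 }, { $inc: { balance: -100 } })  // issued by client A
db.accounts.updateOne({ _id: 2 }, { $inc: { balance: 100 } })   // issued by client B at the same time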

Memory Use

The in-memory storage engine requires that all its data (including indexes, the oplog if the mongod instance is part of a replica set, and so on) fit into the size specified by the --inMemorySizeGB command-line option or the storage.inMemory.engineConfig.inMemorySizeGB setting in the YAML configuration file.

By default, the in-memory storage engine uses 50% of physical RAM minus 1 GB.
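
For example, on a machine with 16 GB of physical RAM, this default works out to (0.5 × 16 GB) - 1 GB = 7 GB for the in-memory store.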

If a write operation would cause the data to exceed the specified memory size, MongoDB returns the error:

"WT_CACHE_FULL: operation would overflow cache"

To specify a new size, use the storage.inMemory.engineConfig.inMemorySizeGB setting in the YAML configuration file format:

storage:
   engine: inMemory
   dbPath: <path>
   inMemory:
      engineConfig:
         inMemorySizeGB: <newSize>

Or use the command-line option --inMemorySizeGB:

mongod --storageEngine inMemory --dbpath <path> --inMemorySizeGB <newSize>

Durability

The in-memory storage engine is non-persistent and does not write data to persistent storage. Non-persisted data includes application data and system data, such as users, permissions, indexes, replica set configuration, sharded cluster configuration, and so on.

As such, the concept of a journal, or of waiting for data to become durable, does not apply to the in-memory storage engine.

If any voting member of a replica set uses the in-memory storage engine, you must set writeConcernMajorityJournalDefault to false.
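
For example, you can change this setting on an existing replica set through a reconfiguration. A minimal mongosh sketch, run against the primary:

cfg = rs.conf()
cfg.writeConcernMajorityJournalDefault = false
rs.reconfig(cfg)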

Note

Starting in version 4.2 (and 4.0.13 and 3.6.14), if a replica set member uses the in-memory storage engine (voting or non-voting) but the replica set has writeConcernMajorityJournalDefault set to true, the replica set member logs a startup warning.

With writeConcernMajorityJournalDefault set to false, MongoDB does not wait for w: "majority" writes to be written to the on-disk journal before acknowledging the writes. As such, "majority" write operations could possibly roll back in the event of a transient loss (e.g. crash and restart) of a majority of nodes in a given replica set.

Write operations that specify a journaled write concern (j: true) are acknowledged immediately. When a mongod instance shuts down, either as a result of the shutdown command or due to a system error, recovery of in-memory data is impossible.
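
For example, the following write requests journal acknowledgment and is acknowledged immediately on an in-memory member. A minimal mongosh sketch; the events collection is an assumption:

db.events.insertOne(
   { ts: new Date(), type: "login" },
   { writeConcern: { w: 1, j: true } }
)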

Transactions

Transactions are supported on replica sets and sharded clusters where:

  • the primary uses the WiredTiger storage engine, and

  • the secondary members use either the WiredTiger storage engine or the in-memory storage engine.
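
On a deployment that meets these conditions, multi-document transactions work the same way as on any other deployment. A minimal mongosh sketch; the database and collection names (test.accounts) and the documents are assumptions:

const session = db.getMongo().startSession()
const accounts = session.getDatabase("test").accounts
session.startTransaction()
try {
   // Both updates commit or abort together.
   accounts.updateOne({ _id: 1 }, { $inc: { balance: -100 } })
   accounts.updateOne({ _id: 2 }, { $inc: { balance: 100 } })
   session.commitTransaction()
} catch (e) {
   session.abortTransaction()
   throw e
} finally {
   session.endSession()
}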

Note

You cannot run transactions on a sharded cluster that has a shard with writeConcernMajorityJournalDefault set to false, such as a shard with a voting member that uses the in-memory storage engine.

Deployment Architectures

In addition to running as standalones, mongod instances that use the in-memory storage engine can run as part of a replica set or part of a sharded cluster.

You can deploy mongod instances that use the in-memory storage engine as part of a replica set. For example, as part of a three-member replica set, you could have:

  • two mongod instances that run with the in-memory storage engine, and

  • one mongod instance that runs with WiredTiger, configured as a hidden member (that is, with hidden: true and priority 0).

With this deployment model, only the mongod instances running the in-memory storage engine can become the primary. Clients connect only to the in-memory storage engine instances. Even if both mongod instances running the in-memory storage engine crash and restart, they can sync from the member running WiredTiger. The hidden mongod instance running WiredTiger persists the data to disk, including the user data, indexes, and replication configuration information.
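
A minimal mongosh sketch of initiating such a replica set; the replica set name and hostnames are assumptions, and writeConcernMajorityJournalDefault is set to false because the voting in-memory members cannot journal:

rs.initiate({
   _id: "inMemRS",
   writeConcernMajorityJournalDefault: false,
   members: [
      { _id: 0, host: "mongodb0.example.net:27017" },  // in-memory storage engine
      { _id: 1, host: "mongodb1.example.net:27017" },  // in-memory storage engine
      { _id: 2, host: "mongodb2.example.net:27017",
        hidden: true, priority: 0 }                    // WiredTiger, hidden
   ]
})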

Note

The in-memory storage engine requires that all its data (including the oplog if the mongod is part of a replica set, and so on) fit into the specified --inMemorySizeGB command-line option or storage.inMemory.engineConfig.inMemorySizeGB setting. See Memory Use.

You can deploy mongod instances that use the in-memory storage engine as part of a sharded cluster. The in-memory storage engine avoids disk I/O to allow for more predictable database operation latency. In a sharded cluster, a shard can consist of a single mongod instance or a replica set. For example, you could have one shard that consists of a replica set with the same mix of members as above: two members that run with the in-memory storage engine and one hidden member that runs with WiredTiger.

To this shard, add the tag inmem. For example, if this shard has the name shardC, connect to the mongos and run sh.addShardTag():

sh.addShardTag("shardC", "inmem")

To the other shards, add a separate tag persisted.

sh.addShardTag("shardA", "persisted")
sh.addShardTag("shardB", "persisted")

For each sharded collection that should reside on the inmem shard, assign the tag inmem to the entire chunk range:

sh.addTagRange("test.analytics", { shardKey: MinKey }, { shardKey: MaxKey }, "inmem")

For each sharded collection that should reside across the persisted shards, assign the tag persisted to the entire chunk range:

sh.addTagRange("salesdb.orders", { shardKey: MinKey }, { shardKey: MaxKey }, "persisted")

For the inmem shard, create a database on it or move an existing database to it.
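
For example, to move an existing database so that shardC becomes its primary shard, run movePrimary against the mongos. A minimal mongosh sketch; the database name test is assumed from the earlier tag-range example:

db.adminCommand({ movePrimary: "test", to: "shardC" })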
