Release Notes for MongoDB 2.0
Upgrading
Although the major version number has changed, MongoDB 2.0 is a standard, incremental production release and works as a drop-in replacement for MongoDB 1.8.
Preparation
Read through all release notes before upgrading, and ensure that no changes will affect your deployment.
If you create new indexes in 2.0, then downgrading to 1.8 is possible but you must reindex the new collections.
mongoimport and mongoexport now correctly adhere to the CSV spec for handling CSV input/output. This may break existing import/export workflows that relied on the previous behavior. For more information see SERVER-1097.
Journaling is enabled by default in 2.0 for 64-bit builds. If you still prefer to run without journaling, start mongod with the --nojournal run-time option. Otherwise, MongoDB creates journal files during startup. The first time you start mongod with journaling, you will see a delay as mongod creates new files. In addition, you may see reduced write throughput.
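For example, a standalone instance might be started without journaling like this (the data directory path here is only a placeholder):
mongod --dbpath /data/db --nojournal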
2.0 mongod instances are interoperable with 1.8 mongod instances; however, for best results, upgrade your deployments using the following procedures:
Upgrading a Standalone mongod
Download the v2.0.x binaries from the MongoDB Download Page.
Shut down your mongod instance. Replace the existing binary with the 2.0.x mongod binary and restart MongoDB.
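As a sketch, the shutdown step might be performed from a mongo shell connected to the instance (db.shutdownServer() must run against the admin database):
// Cleanly shut down the mongod before replacing its binary.
db.getSiblingDB("admin").shutdownServer()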
Upgrading a Replica Set
Upgrade the secondary members of the set one at a time by shutting down the mongod and replacing the 1.8 binary with the 2.0.x binary from the MongoDB Download Page. To avoid losing the last few updates on failover you can temporarily halt your application (failover should take less than 10 seconds), or you can set write concern in your application code to confirm that each update reaches multiple servers.
Use the rs.stepDown() method to step down the primary to allow the normal failover procedure. rs.stepDown() and replSetStepDown provide for shorter and more consistent failover procedures than simply shutting down the primary directly. When the primary has stepped down, shut down its instance and upgrade by replacing the mongod binary with the 2.0.x binary.
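For instance, the step-down can be issued from a mongo shell connected to the primary; the second call is just a sketch for confirming the member is no longer primary before shutting it down:
// Ask the current primary to step down and trigger a normal failover.
rs.stepDown()
// Check that this member now reports itself as a secondary.
rs.status()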
Upgrading a Sharded Cluster
Upgrade all config server instances first, in any order. Since config servers use two-phase commit, shard configuration metadata updates will halt until all are up and running.
Upgrade mongos routers in any order.
Changes
Compact Command
A compact command is now available for compacting a single collection and its indexes. Previously, the only way to compact was to repair the entire database.
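A minimal invocation from the mongo shell, assuming a collection named logs, looks like this:
// Compact a single collection and rebuild its indexes in place.
db.runCommand({ compact: "logs" })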
Concurrency Improvements
When going to disk, the server will yield the write lock when writing data that is not likely to be in memory. The initial implementation of this feature now exists; see SERVER-2563 for more information.
The specific operations that yield in 2.0 are:
Updates by _id
Removes
Long cursor iterations
Default Stack Size
MongoDB 2.0 reduces the default stack size. This change can reduce total memory usage when there are many (e.g., 1000+) client connections, as there is a thread per connection. While portions of a thread's stack can be swapped out if unused, some operating systems do this slowly enough that it might be an issue. The default stack size is the lesser of the system setting or 1 MB.
Index Performance Enhancements
v2.0 includes significant improvements to the index structure. Indexes are often 25% smaller and 25% faster, depending on the use case. When upgrading from previous versions, the benefits of the new index type are realized only if you create a new index or re-index an old one.
Dates are now signed, and the max index key size has increased slightly from 819 to 1024 bytes.
All operations that create a new index will result in a 2.0 index by default. For example:
Reindexing an older-version index results in a 2.0 index. However, reindexing on a secondary does not work in versions prior to 2.0. Do not reindex on a secondary. For a workaround, see SERVER-3866.
The repairDatabase command converts indexes to 2.0 indexes.
To convert all indexes for a given collection to the 2.0 type, invoke the compact command.
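As a sketch, either of the following shell operations rebuilds the indexes of a hypothetical collection named logs as 2.0 indexes:
// Rebuild all indexes on the collection.
db.logs.reIndex()
// Or compact the collection, which also rebuilds its indexes.
db.runCommand({ compact: "logs" })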
Once you create new indexes, downgrading to 1.8.x will require a re-index of any indexes created using 2.0. See /tutorial/roll-back-to-v1.8-index.
Sharding Authentication
Applications can now use authentication with sharded clusters.
Replica Sets
Hidden Nodes in Sharded Clusters
In 2.0, mongos instances can now determine when a member of a replica set becomes "hidden" without requiring a restart. In 1.8, if you reconfigured a member as hidden, you had to restart mongos to prevent queries from reaching the hidden member.
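A reconfiguration that hides a member might look like the following sketch (the member index 2 is an assumption about your configuration):
cfg = rs.conf()
// Hidden members must also have priority 0 so they cannot become primary.
cfg.members[2].priority = 0
cfg.members[2].hidden = true
rs.reconfig(cfg)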
Priorities
Each replica set member can now have a priority value consisting of a floating-point number from 0 to 1000, inclusive. Priorities let you control which member of the set you prefer to have as primary: the member with the highest priority that can see a majority of the set will be elected primary.
For example, suppose you have a replica set with three members, A, B, and C, and suppose that their priorities are set as follows:
A's priority is 2.
B's priority is 3.
C's priority is 1.
During normal operation, the set will always choose B as primary. If B becomes unavailable, the set will elect A as primary.
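A sketch of setting these priorities from the mongo shell, assuming A, B, and C are members 0, 1, and 2 of the configuration:
cfg = rs.conf()
cfg.members[0].priority = 2   // A
cfg.members[1].priority = 3   // B
cfg.members[2].priority = 1   // C
rs.reconfig(cfg)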
For more information, see the priority documentation.
Data-Center Awareness
You can now "tag" replica set members to indicate their location. You can use these tags to design custom write rules across data centers, racks, specific servers, or any other architecture choice.
For example, an administrator can define rules such as "very important write" or customerData or "audit-trail" to replicate to certain servers, racks, data centers, etc. Then in the application code, the developer would say:
db.foo.insert(doc, {w : "very important write"})
which would succeed if it fulfilled the conditions the DBA defined for "very important write".
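Behind the scenes this relies on member tags and a getLastErrorModes entry in the replica set configuration; the tag names and the requirement of two distinct data centers below are illustrative assumptions, not a prescribed setup:
cfg = rs.conf()
// Tag each member with its data center.
cfg.members[0].tags = { dc: "east" }
cfg.members[1].tags = { dc: "west" }
// "very important write" requires acknowledgment from two distinct dc tag values.
cfg.settings = { getLastErrorModes: { "very important write": { dc: 2 } } }
rs.reconfig(cfg)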
For more information, see Data Center Awareness.
Drivers may also support tag-aware reads. Instead of specifying slaveOk, you specify slaveOk with tags indicating which data centers to read from. For details, see the Drivers documentation.
w: majority
You can also set w to majority to ensure that the write propagates to a majority of nodes, effectively committing it. The value for "majority" will automatically adjust as you add or remove nodes from the set.
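For example, from the mongo shell you could follow a write with getLastError and a majority write concern (a minimal sketch):
db.foo.insert({ x: 1 })
// Wait until the write has replicated to a majority of the set.
db.runCommand({ getLastError: 1, w: "majority" })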
For more information, see Write Concern.
Reconfiguration with a Minority Up
If the majority of servers in a set has been permanently lost, you can now force a reconfiguration of the set to bring it back online.
For more information see Reconfigure a Replica Set with Unavailable Members.
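The shell exposes this through the force option of rs.reconfig(); a sketch, run against a surviving member:
cfg = rs.conf()
// Remove or replace the permanently lost members in cfg.members as needed, then:
rs.reconfig(cfg, { force: true })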
Primary Checks for a Caught up Secondary before Stepping Down
To minimize time without a primary, the rs.stepDown() method will now fail if the primary does not see a secondary within 10 seconds of its latest optime. You can force the primary to step down anyway, but by default it will return an error message.
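If you do need to override the check, one possible invocation, assuming the force option of the replSetStepDown command, is:
// Force the primary to step down for 60 seconds even without a caught-up secondary.
db.adminCommand({ replSetStepDown: 60, force: true })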
See also Force a Member to Become Primary.
Extended Shutdown on the Primary to Minimize Interruption
When you call the shutdown command, the primary will refuse to shut down unless there is a secondary whose optime is within 10 seconds of the primary. If such a secondary isn't available, the primary will step down and wait up to a minute for the secondary to be fully caught up before shutting down.
Note that to get this behavior, you must issue the shutdown command explicitly; sending a signal to the process will not trigger this behavior.
You can also force the primary to shut down, even without an up-to-date secondary available.
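For example, an explicit shutdown, and a forced one, might be issued as in this sketch using the shutdown command's force option:
// Ordinary shutdown: on a primary, waits for a reasonably caught-up secondary.
db.adminCommand({ shutdown: 1 })
// Forced shutdown: do not wait for an up-to-date secondary.
db.adminCommand({ shutdown: 1, force: true })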
Maintenance Mode
When repairDatabase or compact runs on a secondary, the secondary will automatically drop into "recovering" mode until the operation finishes. This prevents clients from trying to read from it while it's busy.
Geospatial Features
Multi-Location Documents
Indexing is now supported on documents which have multiple location objects, embedded either inline or in embedded documents. Additional command options are also supported, allowing results to return with not only distance but the location used to generate the distance.
For more information, see Multi-location Documents for 2d Indexes.
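A sketch of a document with multiple locations and a single 2d index over them (the collection and field names are assumptions):
db.places.insert({ name: "warehouses", locs: [ [ -73.99, 40.73 ], [ -122.41, 37.77 ] ] })
// One 2d index covers every location in the locs array.
db.places.ensureIndex({ locs: "2d" })
db.places.find({ locs: { $near: [ -73.98, 40.75 ] } })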
Polygon searches
Polygonal $within queries are also now supported for simple polygon shapes. For details, see the $within operator documentation.
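A minimal sketch of a polygon query against a 2d-indexed field (the field name and triangle coordinates are arbitrary):
db.places.find({ loc: { $within: { $polygon: [ [ 0, 0 ], [ 3, 6 ], [ 6, 0 ] ] } } })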
Journaling Enhancements
Journaling is now enabled by default for 64-bit platforms. Use the --nojournal command line option to disable it.
The journal is now compressed for faster commits to disk.
A new --journalCommitInterval run-time option exists for specifying your own group commit interval. The default settings do not change.
A new { getLastError: { j: true } } option is available to wait for the group commit. The group commit will happen sooner when a client is waiting on { j: true }. If journaling is disabled, { j: true } is a no-op.
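From the mongo shell, waiting on the group commit after a write might look like this sketch:
db.foo.insert({ x: 1 })
// Block until the preceding write has been committed to the journal.
db.runCommand({ getLastError: 1, j: true })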
New ContinueOnError Option for Bulk Insert
Set the continueOnError option for bulk inserts, in the driver, so that bulk insert will continue to insert any remaining documents even if an insert fails, as is the case with duplicate key exceptions or network interruptions. The getLastError command will report whether any inserts have failed, not just the last one. If multiple errors occur, the client will only receive the most recent getLastError results.
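The continueOnError flag itself is set through your driver's bulk insert API, which varies by driver; afterwards, a sketch of checking the outcome from the shell is simply:
// Reports the most recent error, if any, from the preceding inserts.
db.runCommand({ getLastError: 1 })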
Note
For bulk inserts on sharded clusters, the getLastError command alone is insufficient to verify success. Applications must verify the success of bulk inserts in application logic.
Map Reduce
Output to a Sharded Collection
Using the new sharded flag, it is possible to send the result of a map/reduce to a sharded collection. Combined with the reduce or merge flags, it is possible to keep adding data to very large collections from map/reduce jobs.
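A sketch of a map/reduce job that merges its output into a sharded collection; the collection name and the trivial map and reduce functions are placeholders:
var mapFn = function() { emit(this.key, 1); };
var reduceFn = function(key, values) { return Array.sum(values); };
db.events.mapReduce(mapFn, reduceFn, { out: { merge: "event_counts", sharded: true } })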
For more information, see Map-Reduce and the mapReduce reference.
Performance Improvements
Map/reduce performance will benefit from the following:
Larger in-memory buffer sizes, reducing the amount of disk I/O needed during a job
Larger JavaScript heap size, allowing for larger objects and less GC
Support for pure JavaScript execution with the jsMode flag. See the mapReduce reference.
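The flag is passed as a mapReduce option, as in this sketch (mapFn and reduceFn as in the placeholder functions above):
db.events.mapReduce(mapFn, reduceFn, { out: { inline: 1 }, jsMode: true })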
New Querying Features
Additional regex options: s
Allows the dot (.) to match all characters, including new lines. This is in addition to the currently supported i, m, and x. See $regex.
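For example, with an assumed field name and pattern:
// With the s option, the dot also matches the newline between the two words.
db.articles.find({ body: { $regex: "first.second", $options: "si" } })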
$and
A special boolean $and query operator is now available.
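For example, to match documents that satisfy two conditions on the same field (a minimal sketch):
db.inventory.find({ $and: [ { price: { $gt: 1 } }, { price: { $lt: 10 } } ] })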
Command Output Changes
The output of the validate command and the documents in the system.profile collection have both been enhanced to return information as BSON objects with keys for each value rather than as free-form strings.
Shell Features
Custom Prompt
You can define a custom prompt for the mongo shell. You can change the prompt at any time by setting the prompt variable to a string or a custom JavaScript function returning a string. For examples, see Customize the Prompt.
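One possible prompt function, shown as a sketch, displays the current database name:
// Set the shell prompt to "<dbname>> ".
prompt = function() { return db.getName() + "> "; };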
Default Shell Init Script
On startup, the shell will check for a .mongorc.js file in the user's home directory. The shell will execute this file after connecting to the database and before displaying the prompt.
If you would like the shell not to run the .mongorc.js file automatically, start the shell with --norc.
For more information, see the mongo reference.
Most Commands Require Authentication
In 2.0, when running with authentication (e.g. authorization), all database commands require authentication, except the following commands.