Troubleshooting
On this page, you can find solutions to common issues encountered while using PyMongo with MongoDB.
Connection
Server Reports Wire Version X, PyMongo Requires Y
If you try to connect to MongoDB Server v3.6 or earlier, PyMongo might raise the following error:
pymongo.errors.ConfigurationError: Server at localhost:27017 reports wire version 6, but this version of PyMongo requires at least 7 (MongoDB 4.0).
This occurs when the driver version is too new for the server it's connecting to. To resolve this issue, you can do one of the following:
Upgrade your MongoDB deployment to v4.0 or later.
Downgrade to PyMongo 4.10 or earlier, which supports MongoDB Server v3.6 and later (see the pip example after this list).
Downgrade to PyMongo v3.x, which supports MongoDB Server v2.6 and later.
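For example, if you manage the driver with pip, you can pin PyMongo to a compatible release series. The following commands are a sketch; the version specifiers correspond to the options in the preceding list:
# Pin PyMongo to the 4.10 series (supports MongoDB Server v3.6 and later):
python -m pip install "pymongo>=4.10,<4.11"

# Or pin PyMongo to the 3.x series (supports MongoDB Server v2.6 and later):
python -m pip install "pymongo<4"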
AutoReconnect
An AutoReconnect exception indicates that a failover has occurred. This means that PyMongo has lost its connection to the original primary member of the replica set, and its last operation might have failed.
When this error occurs, PyMongo automatically tries to find the new primary member for subsequent operations. To handle the error, your application must take one of the following actions:
Retry the operation that might have failed (see the example below)
Continue running, with the understanding that the operation might have failed
Important
PyMongo raises an AutoReconnect error on all operations until the replica set elects a new primary member.
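If you choose to retry, the following sketch shows one way to do so. It assumes an idempotent read; the database, collection, and filter names are placeholders:
import time

from pymongo import MongoClient
from pymongo.errors import AutoReconnect

client = MongoClient()
collection = client.db.collection

for attempt in range(5):
    try:
        # Idempotent read, so retrying it is safe.
        result = collection.find_one({"status": "active"})
        break
    except AutoReconnect:
        if attempt == 4:
            raise
        # Back off briefly to give the replica set time to elect a new primary.
        time.sleep(0.5 * 2**attempt)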
Timeout When Accessing MongoDB from PyMongo with Tunneling
If you try to connect to a MongoDB replica set over an SSH tunnel, you receive the following error:
File "/Library/Python/2.7/site-packages/pymongo/collection.py", line 1560, in count return self._count(cmd, collation, session) File "/Library/Python/2.7/site-packages/pymongo/collection.py", line 1504, in _count with self._socket_for_reads() as (connection, slave_ok): File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__ return self.gen.next() File "/Library/Python/2.7/site-packages/pymongo/mongo_client.py", line 982, in _socket_for_reads server = topology.select_server(read_preference) File "/Library/Python/2.7/site-packages/pymongo/topology.py", line 224, in select_server address)) File "/Library/Python/2.7/site-packages/pymongo/topology.py", line 183, in select_servers selector, server_timeout, address) File "/Library/Python/2.7/site-packages/pymongo/topology.py", line 199, in _select_servers_loop self._error_message(selector)) pymongo.errors.ServerSelectionTimeoutError: localhost:27017: timed out
This occurs because PyMongo discovers replica set members by using the response from the isMaster command, which contains the addresses and ports of the other replica set members. However, you can't access these addresses and ports through the SSH tunnel.
Instead, you can connect directly to a single MongoDB node by using the directConnection=True option with SSH tunneling.
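For example, assuming your SSH tunnel forwards local port 27017 to a single replica set member, a connection sketch looks like the following:
from pymongo import MongoClient

# The SSH tunnel forwards localhost:27017 to one member of the replica set.
client = MongoClient("mongodb://localhost:27017/?directConnection=true")

# Equivalent keyword-argument form:
# client = MongoClient("localhost", 27017, directConnection=True)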
Read and Write Operations
AutoReconnect Error
You receive this error if you specify tag sets in your read preference and MongoDB is unable to find replica set members with the specified tags. To avoid this error, include an empty dictionary ({}) at the end of the tag-set list. This instructs PyMongo to read from any member that matches the read preference mode when it can't find matching tags.
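The following sketch shows a tag-set list that ends with an empty dictionary. The "dc" tag and the "mydb" database name are placeholders:
from pymongo import MongoClient
from pymongo.read_preferences import Secondary

client = MongoClient()

# The trailing empty dictionary lets PyMongo fall back to any member that
# matches the read preference mode when no member matches the tags.
read_pref = Secondary(tag_sets=[{"dc": "ny"}, {}])
db = client.get_database("mydb", read_preference=read_pref)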
DeprecationWarning: Count Is Deprecated
PyMongo no longer supports the count() method. Instead, use the count_documents() method from the Collection class.
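For example, assuming collection refers to an existing Collection object, you can count documents as follows. The status filter is a placeholder:
# Count every document in the collection.
total = collection.count_documents({})

# Count only the documents that match a filter.
active = collection.count_documents({"status": "active"})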
Important
The count_documents() method belongs to the Collection class. If you try to call Cursor.count_documents(), PyMongo raises the following error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Cursor' object has no attribute 'count_documents'
MongoClient Fails with ConfigurationError
Providing invalid keyword argument names causes the driver to raise this error.
Ensure that the keyword arguments you specify exist and are spelled correctly.
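The following sketch shows how this error can surface and how to catch it. The deliberately misspelled maxPoolSizee option is an assumption for illustration; the exact error message depends on your driver version:
from pymongo import MongoClient
from pymongo.errors import ConfigurationError

try:
    # "maxPoolSizee" is a deliberately misspelled option name.
    client = MongoClient(maxPoolSizee=100)
except ConfigurationError as exc:
    print(exc)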
No Results When Querying for a Document by ObjectId in Web Applications
It's common in web applications to encode documents' ObjectIds in URLs, as shown in the following code example:
"/posts/50b3bda58a02fb9a84d8991e"
Your web framework passes the ObjectId portion of the URL to your request handler as a string. You must convert the string to an ObjectId instance before passing it to the find_one() method.
The following code example shows how to perform this conversion in a Flask application. The process is similar for other web frameworks.
from pymongo import MongoClient
from bson.objectid import ObjectId
from flask import Flask, render_template

client = MongoClient()
app = Flask(__name__)

# The route passes the ObjectId portion of the URL to show_post as a string.
@app.route("/posts/<_id>")
def show_post(_id):
    # NOTE: convert _id from a string to an ObjectId before passing it to find_one
    post = client.db.posts.find_one({'_id': ObjectId(_id)})
    return render_template('post.html', post=post)

if __name__ == "__main__":
    app.run()
Query Works in the Shell But Not in PyMongo
After the _id field, which is always first, the key-value pairs in a BSON document can be in any order. The mongo shell preserves key order when reading and writing data, as shown by the fields "b" and "a" in the following code example:
// mongo shell
db.collection.insertOne( { "_id" : 1, "subdocument" : { "b" : 1, "a" : 1 } } )
// Returns: WriteResult({ "nInserted" : 1 })

db.collection.findOne()
// Returns: { "_id" : 1, "subdocument" : { "b" : 1, "a" : 1 } }
PyMongo represents BSON documents as Python dictionaries by default. Python dictionaries don't take key order into account when comparing for equality: a dictionary declared with the "a" key first is equal to one declared with the "b" key first, as the following example shows:
print({'a': 1.0, 'b': 1.0} == {'b': 1.0, 'a': 1.0})
# Returns: True
In addition, depending on your Python and PyMongo versions, a decoded document might not show its keys in the order they are stored in BSON. The following example shows one possible result of printing the document inserted in the preceding example:
print(collection.find_one())
# Returns: {'_id': 1.0, 'subdocument': {'a': 1.0, 'b': 1.0}}
To preserve the order of keys when reading BSON, use the SON class, which is a dictionary that remembers its key order. The following code example shows how to create a collection configured to use the SON class:
from bson import CodecOptions, SON

opts = CodecOptions(document_class=SON)
# Returns: CodecOptions(document_class=...SON..., tz_aware=False,
#     uuid_representation=UuidRepresentation.UNSPECIFIED,
#     unicode_decode_error_handler='strict', tzinfo=None,
#     type_registry=TypeRegistry(type_codecs=[], fallback_encoder=None),
#     datetime_conversion=DatetimeConversion.DATETIME)

collection_son = collection.with_options(codec_options=opts)
When you find the preceding subdocument, the driver represents query results with SON objects and preserves key order:
print(collection_son.find_one())
SON([('_id', 1.0), ('subdocument', SON([('b', 1.0), ('a', 1.0)]))])
The subdocument's actual storage layout is now visible: "b" is before "a".
MongoDB considers subdocuments equal only if their keys have the same values in the same order. You typically can't rely on a Python dictionary's key order matching the key order stored in BSON, so a query that uses a plain dictionary for the subdocument might not match:
collection.find_one({'subdocument': {'a': 1.0, 'b': 1.0}}) is None
True
Because Python considers dictionaries with the same keys and values equal regardless of key order, it's easy to overlook the mismatch between your query and the stored subdocument.
You can solve this in two ways. First, you can match the subdocument field-by-field:
collection.find_one({'subdocument.a': 1.0, 'subdocument.b': 1.0})
{'_id': 1.0, 'subdocument': {'a': 1.0, 'b': 1.0}}
The query matches any subdocument with an "a" of 1.0 and a "b" of 1.0, regardless of the order in which you specify them in Python, or the order in which they're stored in BSON. This query also now matches subdocuments with additional keys besides "a" and "b", whereas the previous query required an exact match.
The second solution is to use a SON object to specify the key order:
query = {'subdocument': SON([('b', 1.0), ('a', 1.0)])} collection.find_one(query)
{'_id': 1.0, 'subdocument': {'a': 1.0, 'b': 1.0}}
The driver preserves the key order you use when you create a SON object, both when serializing it to BSON and when using it as a query. Thus, you can create a subdocument that exactly matches the subdocument in the collection.
Note
For more information about subdocument matching, see the Query on Embedded/Nested Documents guide in the MongoDB Server documentation.
Cursors
'Cursor' Object Has No Attribute '_Cursor__killed'
PyMongo v3.8 or earlier raises a TypeError and an AttributeError if you supply invalid arguments to the Cursor constructor. The AttributeError is irrelevant, but the TypeError contains debugging information, as shown in the following example:
Exception ignored in: <function Cursor.__del__ at 0x1048129d8>
...
AttributeError: 'Cursor' object has no attribute '_Cursor__killed'
...
TypeError: __init__() got an unexpected keyword argument '<argument>'
To fix this, ensure that you supply the correct keyword arguments. You can also upgrade to PyMongo v3.9 or later, which removes the irrelevant error.
"CursorNotFound cursor id not valid at server"
Cursors in MongoDB can time out on the server if they've been open for a long time without any operations being performed on them. This can lead to a CursorNotFound exception when you try to iterate through the cursor.
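If your application intentionally keeps a cursor open for a long time, one possible workaround (a sketch, not the only option) is to disable the server-side idle timeout with the no_cursor_timeout option and close the cursor explicitly when you're done. The process function below is a placeholder for your own logic:
cursor = collection.find({}, no_cursor_timeout=True)
try:
    for document in cursor:
        process(document)  # placeholder for your own processing logic
finally:
    # Close the cursor explicitly so it doesn't stay open on the server.
    cursor.close()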
Projections
'Cannot Do Exclusion on Field <field> in Inclusion Projection'
The driver returns an OperationFailure with this message if you attempt to include and exclude fields in a single projection. Ensure that your projection specifies only fields to include or fields to exclude.
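The following sketch, which uses hypothetical field names, shows valid inclusion-only and exclusion-only projections. The _id field is the one exception: you can exclude it inside an inclusion projection:
# Inclusion projection: list only the fields to include.
collection.find_one({}, {"name": 1, "email": 1})

# Exclusion projection: list only the fields to exclude.
collection.find_one({}, {"metadata": 0})

# The _id field is the one exception: you can exclude it in an
# inclusion projection.
collection.find_one({}, {"name": 1, "email": 1, "_id": 0})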
Indexes
DuplicateKeyError
If you perform a write operation that stores a duplicate value that violates a unique index, the driver raises a DuplicateKeyError, and MongoDB returns an error resembling the following:
E11000 duplicate key error index
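The following sketch, which uses a hypothetical users collection and email field, shows one way to create a unique index and handle the resulting DuplicateKeyError:
from pymongo import MongoClient
from pymongo.errors import DuplicateKeyError

client = MongoClient()
collection = client.db.users
collection.create_index("email", unique=True)

collection.insert_one({"email": "alice@example.com"})
try:
    # Inserting the same email again violates the unique index.
    collection.insert_one({"email": "alice@example.com"})
except DuplicateKeyError as exc:
    print("Duplicate value rejected:", exc.details)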
Data Formats
ValueError: cannot encode native uuid.UUID with UuidRepresentation.UNSPECIFIED
This error results from trying to encode a native uuid.UUID object to a Binary object when the UUID representation is UNSPECIFIED, as shown in the following code example:
unspecified_collection.insert_one({'_id': 'bar', 'uuid': uuid4()})
Traceback (most recent call last):
...
ValueError: cannot encode native uuid.UUID with UuidRepresentation.UNSPECIFIED. UUIDs can be manually converted to bson.Binary instances using bson.Binary.from_uuid() or a different UuidRepresentation can be configured. See the documentation for UuidRepresentation for more information.
Instead, you must explicitly convert a native UUID to a Binary object by using the Binary.from_uuid() method, as shown in the following example:
from uuid import uuid4
from bson.binary import Binary, UuidRepresentation

explicit_binary = Binary.from_uuid(uuid4(), UuidRepresentation.STANDARD)
unspecified_collection.insert_one({'_id': 'bar', 'uuid': explicit_binary})
OverflowError When Decoding Dates Stored by Another Language's Driver
PyMongo decodes BSON datetime values to instances of Python's datetime.datetime class. Instances of datetime.datetime are limited to years between datetime.MINYEAR (1) and datetime.MAXYEAR (9999). Some MongoDB drivers can store BSON datetimes with year values far outside those supported by datetime.datetime.
There are a few ways to work around this issue. Starting with PyMongo 4.3, bson.decode can decode BSON datetime values in one of four ways. You can specify the conversion method by using the datetime_conversion parameter of CodecOptions.
The default conversion option is DatetimeConversion.DATETIME, which attempts to decode the value as a datetime.datetime and allows an OverflowError to occur for out-of-range dates. DatetimeConversion.DATETIME_AUTO alters this behavior to instead return DatetimeMS objects when values are out of range, while returning datetime.datetime objects as before:
from datetime import datetime
from bson.datetime_ms import DatetimeMS
from bson.codec_options import DatetimeConversion
from pymongo import MongoClient

client = MongoClient(datetime_conversion=DatetimeConversion.DATETIME_AUTO)
client.db.collection.insert_one({"x": datetime(1970, 1, 1)})
client.db.collection.insert_one({"x": DatetimeMS(2**62)})

for x in client.db.collection.find():
    print(x)
{'_id': ObjectId('...'), 'x': datetime.datetime(1970, 1, 1, 0, 0)}
{'_id': ObjectId('...'), 'x': DatetimeMS(4611686018427387904)}
For other options, see the API documentation for the DatetimeConversion class.
Another option that doesn't involve setting datetime_conversion is to filter out document values outside the range supported by datetime.datetime:
from datetime import datetime

coll = client.test.dates
cur = coll.find({'dt': {'$gte': datetime.min, '$lte': datetime.max}})
If you don't need the datetime values, you can filter out just that field:
cur = coll.find({}, projection={'dt': False})
TLS
CERTIFICATE_VERIFY_FAILED
An error message similar to the following means that OpenSSL couldn't verify the server's certificate:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
This often happens because OpenSSL can't access the system's root certificates, or because the certificates are out of date.
If you use Linux, ensure that you have the latest root certificate updates installed from your Linux vendor.
If you use macOS and you're running Python v3.7 or later downloaded from python.org, run the following command to install root certificates:
open "/Applications/Python <YOUR PYTHON VERSION>/Install Certificates.command"
Tip
For more information on this issue, see Python issue 29065.
If you use portable-pypy, you might need to set an environment variable to tell OpenSSL where to find root certificates. The following example shows how to install the certifi module from PyPI and export the SSL_CERT_FILE environment variable:
$ pypy -m pip install certifi
$ export SSL_CERT_FILE=$(pypy -c "import certifi; print(certifi.where())")
Tip
For more information on this issue, see portable-pypy issue 15.
TLSV1_ALERT_PROTOCOL_VERSION
An error message similar to the following means that the OpenSSL version used by Python doesn't support a new enough TLS protocol to connect to the server:
[SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version
Industry best practices recommend, and some regulations require, that older TLS protocols be disabled in some MongoDB deployments. Some deployments might disable TLS 1.0, while others might disable TLS 1.0 and TLS 1.1.
No application changes are required for PyMongo to use the newest TLS versions, but some operating system versions might not provide an OpenSSL version new enough to support them.
If you use macOS v10.12 (High Sierra) or earlier, install Python from python.org, homebrew, macports, or a similar source.
If you use Linux or another non-macOS Unix, use the following command to check your OpenSSL version:
openssl version
If the preceding command shows a version number less than 1.0.1, support for TLS 1.1 or newer isn't available. Upgrade to a newer version or contact your OS vendor for a solution.
To check the TLS version of your Python interpreter, install the requests module and run the following command:
python -c "import requests; print(requests.get('https://www.howsmyssl.com/a/check', verify=False).json()['tls_version'])"
You should see TLS 1.1 or later.
Invalid Status Response
An error message similar to the following means that certificate revocation checking failed:
[('SSL routines', 'tls_process_initial_server_flight', 'invalid status response')]
For more details, see the OCSP section of this guide.
SSLV3_ALERT_HANDSHAKE_FAILURE
When using Python v3.10 or later with MongoDB versions earlier than v4.0, you might see errors similar to the following messages:
SSL handshake failed: localhost:27017: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:997)
SSL handshake failed: localhost:27017: EOF occurred in violation of protocol (_ssl.c:997)
The MongoDB Server logs might also show the following error:
2021-06-30T21:22:44.917+0100 E NETWORK [conn16] SSL: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher
Changes made to the ssl module in Python v3.10 might cause incompatibilities with MongoDB versions earlier than v4.0. To resolve this issue, try one or more of the following steps:
Downgrade Python to v3.9 or earlier
Upgrade MongoDB Server to v4.2 or later
Install PyMongo with the OCSP option, which relies on PyOpenSSL (see the pip command after this list)
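The OCSP option is a pip extra, so an installation sketch looks like the following:
python -m pip install "pymongo[ocsp]"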
Unsafe Legacy Renegotiation Disabled
When using OpenSSL v3 or later, you might see an error similar to the following message:
[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled
These types of errors occur because of outdated or buggy SSL proxies that mistakenly enforce legacy TLS renegotiation.
To resolve this issue, perform the following steps:
Use the UnsafeLegacyServerConnect Option
Create a configuration file that includes the UnsafeLegacyServerConnect option. The following example shows how to set the UnsafeLegacyServerConnect option:
openssl_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Options = UnsafeLegacyServerConnect
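OpenSSL reads this file only if it's the active configuration. One way to activate it, assuming a hypothetical file path and application name, is to set the OPENSSL_CONF environment variable before starting your application:
export OPENSSL_CONF=/path/to/openssl_unsafe.cnf
python your_app.py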
Important
Because setting the UnsafeLegacyServerConnect option has security implications, use this workaround as a last resort to address unsafe legacy renegotiation disabled errors.
Client-Side Operation Timeouts
ServerSelectionTimeoutError
This error indicates that the client couldn't find an available server to run the operation within the given timeout:
pymongo.errors.ServerSelectionTimeoutError: No servers found yet, Timeout: -0.00202266700216569s, Topology Description: <TopologyDescription id: 63698e87cebfd22ab1bd2ae0, topology_type: Unknown, servers: [<ServerDescription ('localhost', 27017) server_type: Unknown, rtt: None>]>
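If you want server selection to fail faster or slower, one relevant setting (a sketch; tune the value to your deployment) is serverSelectionTimeoutMS:
from pymongo import MongoClient

# Wait at most 5 seconds for a suitable server before raising
# ServerSelectionTimeoutError (the default is 30 seconds).
client = MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=5000)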
NetworkTimeout
This error indicates either that the client couldn't establish a connection within the given timeout or that the operation was sent but the server didn't respond in time:
pymongo.errors.NetworkTimeout: localhost:27017: timed out
ExecutionTimeout
This error might indicate that the server cancelled the operation because it exceeded the given timeout. Even if PyMongo raises this exception, the operation might have partially completed on the server.
pymongo.errors.ExecutionTimeout: operation exceeded time limit, full error: {'ok': 0.0, 'errmsg': 'operation exceeded time limit', 'code': 50, 'codeName': 'MaxTimeMSExpired'}
It also might indicate that the client cancelled the operation because it wasn't possible to complete it within the given timeout:
pymongo.errors.ExecutionTimeout: operation would exceed time limit, remaining timeout:0.00196 <= network round trip time:0.00427
WTimeoutError
This error indicates that the server couldn't complete the requested write operation within the given timeout and following the specified write concern:
pymongo.errors.WTimeoutError: operation exceeded time limit, full error: {'code': 50, 'codeName': 'MaxTimeMSExpired', 'errmsg': 'operation exceeded time limit', 'errInfo': {'writeConcern': {'w': 1, 'wtimeout': 0}}}
BulkWriteError
This error indicates that the server couldn't complete an insert_many() or bulk_write() method within the given timeout and following the specified write concern:
pymongo.errors.BulkWriteError: batch op errors occurred, full error: {'writeErrors': [], 'writeConcernErrors': [{'code': 50, 'codeName': 'MaxTimeMSExpired', 'errmsg': 'operation exceeded time limit', 'errInfo': {'writeConcern': {'w': 1, 'wtimeout': 0}}}], 'nInserted': 2, 'nUpserted': 0, 'nMatched': 0, 'nModified': 0, 'nRemoved': 0, 'upserted': []}
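When you handle this error, the exception's details attribute contains the server's full response. The following sketch, which uses placeholder documents, shows one way to inspect it:
from pymongo.errors import BulkWriteError

try:
    collection.insert_many([{"x": 1}, {"x": 2}])
except BulkWriteError as exc:
    # exc.details contains the server's full response, including any
    # writeErrors and writeConcernErrors.
    print(exc.details)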
Forking Processes
Forking a Process Causes a Deadlock
A MongoClient instance spawns multiple threads to run background tasks, such as monitoring connected servers. These threads share state that is protected by instances of the threading.Lock class, which are themselves not fork-safe. PyMongo is subject to the same limitations as any other multithreaded code that uses the threading.Lock class, or any mutexes.
One of these limitations is that the locks become useless after a call to fork(). When fork() executes, the operating system copies all of the parent process's locks into the child process in the same state they were in at the time of the fork: if they were locked in the parent process, they are also locked in the child process. Because the child process created by fork() has only one thread, any locks acquired by other threads in the parent process are never released in the child process. The next time the child process attempts to acquire one of these locks, a deadlock occurs.
Starting in PyMongo version 4.3, the driver uses the os.register_at_fork() method to reset its locks and other shared state in the child process after a call to os.fork(). Although this reduces the likelihood of a deadlock, PyMongo depends on libraries that aren't fork-safe in multithreaded applications, including OpenSSL and getaddrinfo(3). Therefore, a deadlock can still occur.
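One common way to reduce the risk (a sketch, not the only approach) is to create MongoClient instances after forking, for example inside each worker process, instead of sharing a client created in the parent process. The database and collection names below are placeholders, and the example assumes a locally running deployment:
from multiprocessing import Process

from pymongo import MongoClient

def worker():
    # Create the client in the child process, after the fork, so the child
    # doesn't inherit locks or sockets from a client in the parent process.
    client = MongoClient()
    print(client.db.collection.count_documents({}))
    client.close()

if __name__ == "__main__":
    processes = [Process(target=worker) for _ in range(2)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()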
The Linux manual page for fork(2) also imposes the following restriction:
After a fork() in a multithreaded program, the child can safely call only async-signal-safe functions (see signal-safety(7)) until such time as it calls execve(2).
Because PyMongo relies on functions that are not async-signal-safe, it can cause deadlocks or crashes when running in a child process.
Tip
For an example of a deadlock in a child process, see PYTHON-3406 in Jira.
For more information about the problems caused by Python locks in multithreaded contexts with fork(), see Issue 6721 in the Python Issue Tracker.