Performance Drop After Upgrade 6.0.10 → 7.0.1

We noticed a significant performance drop after upgrading our servers from version 6 to 7.

A query that selects 2 index entries and took 2 ms on version 6 suddenly takes 50 ms on version 7.
The number of scanned/returned documents did not change.
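
For context, the comparison was done with explain("executionStats") on each server along these lines (collection and field names here are placeholders; the full outputs are attached below):

// Placeholder names -- the real query and indexes are in the attached explain plans.
// Run on both the 6.0 and 7.0 servers and compare the executionStats numbers.
const res = db.myCollection
  .find({ fieldA: "value1", fieldB: "value2" })
  .explain("executionStats");

printjson({
  executionTimeMillis: res.executionStats.executionTimeMillis,
  totalKeysExamined:   res.executionStats.totalKeysExamined,
  totalDocsExamined:   res.executionStats.totalDocsExamined,
  nReturned:           res.executionStats.nReturned
});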

The size of the dataset is as follows:

Documents: 410.1k (total size 306.4 MB, avg. size 784 B)
Indexes: 9 (total size 183.0 MB, avg. size 20.3 MB)

Example document: documentStructure.json (1.4 KB)
Execution plan, version 6: mongoExecutionPlan_6_0_10.json (77.7 KB)
Execution plan, version 7: mongoExecutionPlan_7_0_1.json (114.4 KB)

Indexes

Hi @Sijing_You,

Thanks for providing those details. I assume these tests / explain outputs were run on the same server that was upgraded but please correct me if I’m wrong here.

I’m going to do some tests on my own 6.0 and 7.0 environments to see if there’s similar behaviour.

It’s possible it has something to do with the slot-based query engine (SBE), but that’s hard to confirm at this stage.

I did notice a larger number of document scans within the allPlansExecution section of the version 7 explain output, which seems to account for most of the difference in execution times you are seeing, but the cause of that is still unknown.
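
If you have a test node available, one way to help narrow this down (purely as a diagnostic, with a placeholder collection name, and not something to leave set in production) would be to force the classic engine and re-run the explain:

// Check the current value first so it can be restored afterwards.
db.adminCommand({ getParameter: 1, internalQueryFrameworkControl: 1 })

// Force the classic query engine instead of SBE for this test.
db.adminCommand({ setParameter: 1, internalQueryFrameworkControl: "forceClassicEngine" })

// Re-run the explain so the allPlansExecution sections can be compared against the 6.0 output.
db.myCollection.explain("allPlansExecution").find({ /* same filter as before */ })

If the timings come back in line with 6.0 once the classic engine is forced, that would point fairly strongly at SBE.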

I will see if I can spot anything.

Regards,
Jason


I assume these tests / explain outputs were run on the same server that was upgraded but please correct me if I’m wrong here.

Yes, this is correct. These outputs were run on a smaller test instance, but we were seeing the same behavior on a bigger cluster.
Downgrading to 6.0 also restores the query runtime. (We kept setFeatureCompatibilityVersion at 6.0.)
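
For reference, the pinned FCV can be double-checked with:

// Run against the admin database (via mongos on a sharded cluster).
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// Expected while pinned: { featureCompatibilityVersion: { version: "6.0" }, ok: 1 }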

Hi guys, I had the same problem. What stood out most in my metrics were spikes in scanned documents, which directly impacted the application: some data simply did not load, and operations were interrupted by the client due to timeouts.

I just downgraded to 6.0.11 and the problem was completely resolved!

Posting information in this thread so you can follow up and find out if anyone else has had the same type of problem.
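
For anyone who wants to check their own workload for the same symptom, a rough profiler-based check along these lines should surface the offending queries (the 100 ms / 10,000 document thresholds are just examples):

// Log operations slower than 100 ms to <db>.system.profile.
db.setProfilingLevel(1, { slowms: 100 })

// Later: look for queries that examined far more documents than they returned.
db.system.profile
  .find({ op: "query", docsExamined: { $gt: 10000 } })
  .sort({ ts: -1 })
  .limit(10)
  .forEach(p => printjson({ ns: p.ns, millis: p.millis,
                            docsExamined: p.docsExamined, nreturned: p.nreturned }))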

Best!


Hello,

Any idea if this issue was resolved or not yet?

Thanks

Hello,

Following an upgrade from 6.0 to 7.0 (on the same hardware resources), we also noticed a significant performance drop for some query patterns that seem related to this discussion.

For instance, the find() query below, using multiple $or conditions, performs very poorly: ~10 seconds on MongoDB 7.0 versus a few tens of milliseconds on MongoDB <= 6.x. Note that the query targets a sharded collection containing 15K documents, collection size ~150 MB:

We were not able to find a 7.0 ticket referring to this kind of issue… Has anyone identified a 7.0 ticket related to this, or can anyone share new information on a potential SBE performance issue?

Thanks


{
  "t": {
    "$date": "2023-12-29T03:15:23.046+01:00"
  },
  "s": "I",
  "c": "COMMAND",
  "id": 51803,
  "ctx": "conn101308",
  "msg": "Slow query",
  "attr": {
    "type": "command",
    "ns": "wireless.BaseStationReport",
    "command": {
      "find": "BaseStationReport",
      "filter": {
        "$or": [
          {
            "owID": "actility-ope-np",
            "lrID": "200017C9"
          },
          {
            "owID": "actility-ope-np",
            "lrID": "200008CA"
          },
          {
            "owID": "actility-ope-np",
            "lrID": "200008D6"
          },
          {
            "owID": "actility-ope-np",
            "lrID": "200008DD"
          },
          {
            "owID": "actility-ope-np",
            "lrID": "20001800"
          },
(... x58)
      {
            "owID": "actility-ope-np",
            "lrID": "2000110D"
          },
          {
            "owID": "actility-ope-np",
            "lrID": "20001A25"
          },
          {
            "owID": "actility-ope-np",
            "lrID": "20001A25"
          },
          {
            "owID": "actility-ope-np",
            "lrID": "20001D00"
          },
          {
            "owID": "actility-ope-np",
            "lrID": "20001D0B"
          },
          {
            "owID": "actility-ope-np",
            "lrID": "20001D0B"
          }
        ]
      },
      "batchSize": 200,
      "singleBatch": false,
      "readConcern": {
        "level": "local",
        "provenance": "implicitDefault"
      },
      "shardVersion": {
        "e": {
          "$oid": "6092e7608603992f012bee99"
        },
        "t": {
          "$timestamp": {
            "t": 1703235663,
            "i": 1580
          }
        },
        "v": {
          "$timestamp": {
            "t": 9,
            "i": 3
          }
        }
      },
      "clientOperationKey": {
        "$uuid": "f610edf7-5ea7-4b30-b21a-25be6ce7d1cd"
      },
      "lsid": {
        "id": {
          "$uuid": "374e7e8f-609f-4b4f-9838-59bb9c0db653"
        },
        "uid": {
          "$binary": {
            "base64": "7vqRQ3ETE3NcDKMLlrngHlRZddMqElJsfVilVRNDZSs=",
            "subType": "0"
          }
        }
      },
      "$clusterTime": {
        "clusterTime": {
          "$timestamp": {
            "t": 1703816112,
            "i": 844
          }
        },
        "signature": {
          "hash": {
            "$binary": {
              "base64": "rLKEqaMyMayh58C265HCaP24uQ8=",
              "subType": "0"
            }
          },
          "keyId": 7280573868618549000
        }
      },
      "$configTime": {
        "$timestamp": {
          "t": 1703816110,
          "i": 295
        }
      },
      "$topologyTime": {
        "$timestamp": {
          "t": 0,
          "i": 1
        }
      },
      "$audit": {
        "$impersonatedUser": {
          "user": "twa",
          "db": "admin"
        },
        "$impersonatedRoles": [
          {
            "role": "readWrite",
            "db": "wireless"
          }
        ]
      },
      "$client": {
        "driver": {
          "name": "mongo-java-driver|sync|spring-boot",
          "version": "4.6.1"
        },
        "os": {
          "type": "Linux",
          "name": "Linux",
          "architecture": "amd64",
          "version": "4.18.0-477.27.2.el8_8.x86_64"
        },
        "platform": "Java/Eclipse Adoptium/11.0.18+10",
        "mongos": {
          "host": "rd-tb-twa2:27017",
          "client": "127.0.0.1:35122",
          "version": "7.0.4"
        }
      },
      "mayBypassWriteBlocking": false,
      "$db": "wireless"
    },
    "planSummary": "IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }, IXSCAN { owID: 1, lrID: 1 }",
    "planningTimeMicros": 1023,
    "keysExamined": 58,
    "docsExamined": 50,
    "nBatches": 1,
    "cursorExhausted": true,
    "numYields": 891,
    "nreturned": 50,
    "queryHash": "64E4A6F9",
    "planCacheKey": "E679585C",
    "queryFramework": "sbe",
    "reslen": 668936,
    "locks": {
      "FeatureCompatibilityVersion": {
        "acquireCount": {
          "r": 892
        }
      },
      "Global": {
        "acquireCount": {
          "r": 892
        }
      },
      "Mutex": {
        "acquireCount": {
          "r": 1203
        }
      }
    },
    "readConcern": {
      "level": "local",
      "provenance": "implicitDefault"
    },
    "storage": {
      "data": {
        "bytesRead": 58221,
        "bytesWritten": 1338531,
        "timeReadingMicros": 15,
        "timeWritingMicros": 1821284
      },
      "timeWaitingMicros": {
        "cache": 4692915
      }
    },
    "cpuNanos": 2107668910,
    "remote": "192.168.151.19:59538",
    "protocol": "op_msg",
    "durationMillis": 10387
  }
}
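
For what it’s worth, since every $or branch above uses the same owID, the same result set can also be requested with a single $in on lrID, which should let the planner use one IXSCAN over { owID: 1, lrID: 1 } instead of one scan per branch. A sketch (ID list shortened):

db.BaseStationReport.find({
  owID: "actility-ope-np",
  lrID: { $in: [ "200017C9", "200008CA", "200008D6", /* ...remaining lrIDs... */ "20001D0B" ] }
}).batchSize(200)

That doesn’t explain the 7.0 regression, of course; it is only a possible workaround while the SBE question is open.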


We see the same picture. An ordinary indexed query on a collection of roughly 500K documents is processed in a fraction of a second on 6.0 and in 3 minutes on 7.0.

6.0 was launched from the Docker Hub image mongo:6.0.4.

7.0.5 was installed on Ubuntu 22.04 LTS with the most basic settings in the config.

After exporting the data from 6.0.4 to 7.0.5:

12:42:47 Fetching for 6.0.4
12:15:22 Complete [00:00:04.3660599] [136154]
12:15:22 Fetching for 7.0.5
12:18:17 Complete [00:02:54.7382902] [136154]

CPU & Memory utilisation is very small, so guessing maybe 7.0 need to be tuned and need to be configured manualy ?

We have experienced the same issue with an upgrade to version 7.0.1: inexplicably slow queries that often result in timeouts. Here is an example showing the P99 performance of one of our APIs; you can easily guess when we upgraded to 7.0.1:


After downgrading to version 6.0.13, everything is back to normal.

We didn’t upgrade the FCV to 7.0 though, as we can’t go back from there. I wonder if the issue persists with FCV 7.0?

When I stop all write operations to the database, the performance seems to be OK with 7.0.1. (The performance issues are observable even when we don’t hit the IOPS limits of the SSDs.)

I have also noticed a similar slowdown.

I recently upgraded to Mongo 7 from Mongo 4.2 (both wiredTiger) and noticed a large performance drop for queries in general. After changing to Mongo 6, it seems to be performing fine!

I can try and provide more information about the specs when I have time (my apologies), but for now I will just say I have experienced something similar as well.

Performance degraded after upgrading from Mongo 4.0.6 to 7.0.5 (using the Java sync driver).

  • After the 7.0.5 upgrade, updateOne and insertOne performance degraded 7x; below is the matrix we analyzed before and after.
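
A rough way to reproduce such a comparison from mongosh (hypothetical scratch collection and document shape; the original measurements were taken through the Java sync driver) could look like:

// Time n sequential insertOne/updateOne calls against a scratch collection.
const coll = db.perf_test;   // hypothetical scratch collection
const n = 10000;

let start = Date.now();
for (let i = 0; i < n; i++) {
  coll.insertOne({ _id: i, payload: "x".repeat(200) });
}
print(`insertOne: ${(Date.now() - start) / n} ms/op`);

start = Date.now();
for (let i = 0; i < n; i++) {
  coll.updateOne({ _id: i }, { $set: { updatedAt: new Date() } });
}
print(`updateOne: ${(Date.now() - start) / n} ms/op`);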





Upgraded 6.0.10 to 7.0.4.
Data size ~5 TB, compressed 2.2 TB.
Yesterday we upgraded to 7.0.11 and that was awful.

We rolled back to 6.0.15.
The 7.x branch is useless for us.

Also, 7.0.11 produced a 100 GB WiredTigerHS file.

Any news on this? Did any of you get a reply from support that addresses the issue? We had similar problems with our main database holding a large dataset and downgraded to 6.x again.

No news.
I do not plan to upgrade to 7.x at all.

About to upgrade from 6.x, and this is concerning. Have any performance regressions been identified or fixed recently? (I didn’t see much about it in the release notes.)

They are talking about a 30% performance increase in 8.0 (compared to 7?), but compared to 6 I don’t think so…

@Jason_Tran, are you still following this? Any known performance drop between 6.0.x and 7.0.x?

Any updates? This is keeping us from upgrading to 7.0.

Any updates on this? It also prevents us from upgrading to 7.0. @Tema_Gordiyenko @Sijing_You

No, we are still on the 6.x branch
and the Linux 6.10 kernel.