refactor: Upgrade mongodb from 4.10.0 to 6.10.0 #9438
Snyk has created this PR to upgrade mongodb from 4.10.0 to 6.10.0.
ℹ️ Keep your dependencies up-to-date. This makes it easier to fix existing vulnerabilities and to more quickly identify and fix newly disclosed vulnerabilities when they affect your project.
The recommended version is 178 versions ahead of your current version.
The recommended version was released 25 days ago.
Issues fixed by the recommended upgrade:
SNYK-JS-IP-6240864
SNYK-JS-IP-7148531
SNYK-JS-MONGODB-5871303
Release notes
Package name: mongodb
6.10.0 (2024-10-21)
The MongoDB Node.js team is pleased to announce version 6.10.0 of the mongodb package!
Release Notes
Warning
Server versions 3.6 and lower will now fail with a compatibility error on connection, and support for MONGODB-CR authentication has been removed.
Support for new client bulkWrite API (8.0+)
A new bulk write API on the MongoClient is now supported for users on server versions 8.0 and higher. This API is meant to replace the existing bulk write API on the Collection, as it supports bulk writes across multiple databases and collections in a single call.
Usage
Users of this API call MongoClient#bulkWrite and provide a list of bulk write models and options. The models have a structure as follows:
Insert One
Note that when no _id field is provided in the document, the driver will generate a BSON ObjectId automatically.
Update One
Update Many
Note that write errors occurring when an update many model is present are not retryable.
Replace One
Delete One
Delete Many
Note that write errors occurring when a delete many model is present are not retryable.
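As a sketch, the model shapes described above can be written as plain objects; the field names follow the list above, while the namespaces and values are purely illustrative:

```javascript
// Each model names its target namespace ('db.collection'), the operation,
// and the operation-specific fields. Values below are illustrative only.
const insertOneModel = {
  namespace: 'db.books',
  name: 'insertOne',
  document: { title: 'A Title' } // no _id: the driver generates a BSON ObjectId
};

const updateOneModel = {
  namespace: 'db.books',
  name: 'updateOne',
  filter: { title: 'A Title' },
  update: { $set: { title: 'A New Title' } }
};

const deleteManyModel = {
  namespace: 'db.books',
  name: 'deleteMany',
  filter: { title: 'A New Title' }
};
```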
Example
Below is a mixed model example of using the new API:
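As a sketch of mixed-model usage (the namespaces and documents are hypothetical, and the commented call assumes a connected MongoClient):

```javascript
// Three models targeting different namespaces, executed in a single call.
const models = [
  { namespace: 'db.books', name: 'insertOne', document: { title: 'A Title' } },
  {
    namespace: 'db.authors',
    name: 'updateOne',
    filter: { name: 'Old Name' },
    update: { $set: { name: 'New Name' } }
  },
  { namespace: 'db.books', name: 'deleteMany', filter: { title: 'A Title' } }
];
const options = { ordered: true, verboseResults: false };

// With a connected client this would be executed as:
//   const result = await client.bulkWrite(models, options);
```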
The bulk write specific options that can be provided to the API are as follows:
ordered: Optional boolean that indicates whether the bulk write is ordered. Defaults to true.
verboseResults: Optional boolean indicating whether to provide verbose results. Defaults to false.
bypassDocumentValidation: Optional boolean to bypass document validation rules. Defaults to false.
let: Optional document of parameter names and values that can be accessed using $$var. No default.
The object returned by the bulk write API is:
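As a sketch, and stated as an assumption about the exact field names, the returned object carries per-operation counts, with verbose per-model results only populated when verboseResults is true:

```javascript
// Illustrative shape of a client bulkWrite result (values are examples only).
const result = {
  insertedCount: 1,
  upsertedCount: 0,
  matchedCount: 1,
  modifiedCount: 1,
  deletedCount: 2,
  // Only populated when verboseResults: true was passed:
  insertResults: new Map(), // model index -> individual insert result
  updateResults: new Map(),
  deleteResults: new Map()
};
```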
Error Handling
Server side errors encountered during a bulk write will throw a MongoClientBulkWriteError. This error has the following properties:
writeConcernErrors: An array of documents for each write concern error that occurred.
writeErrors: A map of index pointing at the models provided and the individual write error.
partialResult: The client bulk write result at the point where the error was thrown.
Schema assertion support
interface Book {
  name: string;
  authorName: string;
}
interface Author {
name: string;
}
type MongoDBSchemas = {
'db.books': Book;
'db.authors': Author;
}
const model: ClientBulkWriteModel<MongoDBSchemas> = {
  namespace: 'db.books',
  name: 'insertOne',
  document: { name: 'Practical MongoDB Aggregations', authorName: 3 }
  // error: authorName cannot be a number
};
Notice how authorName is type checked against the Book type because namespace is set to "db.books".
Allow SRV hostnames with fewer than three '.' separated parts
In an effort to make internal networking solutions, such as deployments using Kubernetes, easier to use, the client now accepts SRV hostname strings with one or two '.' separated parts.
For security reasons, the returned addresses of SRV strings with fewer than three parts must end with the entire SRV hostname and contain at least one additional domain level, because this added validation ensures that the returned address(es) are from a known host. In future releases, we plan on extending this validation to SRV strings with three or more parts as well.
// Example 1: Validation fails since the returned address does not end with the entire SRV hostname
'mongodb+srv://mySite.com' => 'myEvilSite.com'
// Example 2: Validation fails since the returned address is identical to the SRV hostname
'mongodb+srv://mySite.com' => 'mySite.com'
// Example 3: Validation passes since the returned address ends with the entire SRV hostname and contains an additional domain level
'mongodb+srv://mySite.com' => 'cluster_1.mySite.com'
Explain now supports maxTimeMS
Driver CRUD commands can be explained by providing the explain option. However, if maxTimeMS was provided, the maxTimeMS value was applied to the command being explained, and consequently the server could take more than maxTimeMS to respond.
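As a sketch, the two shapes of the explain option look like this; the option names follow the surrounding text, while the collection call itself is hypothetical:

```javascript
// Pre-existing behavior: explain with just a verbosity.
const simpleExplain = { explain: 'queryPlanner' };

// New in 6.10.0: an explain-specific maxTimeMS alongside the verbosity.
const explainWithTimeout = {
  explain: { verbosity: 'queryPlanner', maxTimeMS: 2000 }
};

// With a connected client these would be passed as operation options, e.g.:
//   await collection.find({}, explainWithTimeout).toArray();
```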
Now, maxTimeMS can be specified as a new option for explain commands:
If a top-level maxTimeMS option is provided in addition to the explain maxTimeMS, the explain-specific maxTimeMS is applied to the explain command, and the top-level maxTimeMS is applied to the explained command:
await collection.deleteMany({}, { // any explainable CRUD command works here
  maxTimeMS: 1000,
  explain: {
    verbosity: 'queryPlanner',
    maxTimeMS: 2000
  }
});
// the actual command that gets sent to the server looks like:
{
explain: { delete: <collection name>, ..., maxTimeMS: 1000 },
verbosity: 'queryPlanner',
maxTimeMS: 2000
}
Find and Aggregate Explain in Options is Deprecated
Note
Specifying explain for cursors in the operation's options is deprecated in favor of the .explain() methods on cursors:
collection.find({}, { explain: true })
// -> collection.find({}).explain()
collection.aggregate([], { explain: true })
// -> collection.aggregate([]).explain()
db.aggregate([], { explain: true })
// -> db.aggregate([]).explain()
Fixed writeConcern.w set to 0 unacknowledged write protocol trigger
The driver now correctly handles w=0 writes as 'fire-and-forget' messages, where the server does not reply and the driver does not wait for a response. This change eliminates the possibility of encountering certain rare protocol format, BSON type, or network errors that could previously arise during server replies. As a result, w=0 operations now involve less I/O, specifically no socket read.
In addition, when command monitoring is enabled, the reply field of a CommandSucceededEvent of an unacknowledged write will always be { ok: 1 }.
Fixed indefinite hang bug for high write load scenarios
When performing large and many write operations, the driver will likely encounter buffering at the socket layer. The logic that waited until buffered writes were complete would mistakenly drop 'data' events (reading from the socket), causing the driver to hang indefinitely or until a socket timeout. Using the pausing and resuming mechanisms exposed by Node streams, we have eliminated the possibility for data events to go unhandled.
Shout out to @hunkydoryrepair for debugging and finding this issue!
Fixed change stream infinite resume
Before this fix, when change streams would fail to establish a cursor on the server, the driver would attempt to resume the change stream indefinitely. Now, when the aggregate to establish the change stream fails, the driver will throw an error and close the change stream.
ClientSession.commitTransaction() no longer unconditionally overrides write concern
Prior to this change, ClientSession.commitTransaction() would always override any previously configured writeConcern on the initial attempt. This overriding behaviour now only applies to internal and user-initiated retries of ClientSession.commitTransaction() for a given transaction.
Features
Bug Fixes
Documentation
We invite you to try the mongodb library immediately, and report any issues to the NODE project.
6.9.0 (2024-09-06)
The MongoDB Node.js team is pleased to announce version 6.9.0 of the mongodb package!
Release Notes
Driver support of upcoming MongoDB server release
Increased the driver's max supported Wire Protocol version and server version in preparation for the upcoming release of MongoDB 8.0.
MongoDB 3.6 server support deprecated
Warning
Support for 3.6 servers is deprecated and will be removed in a future version.
Support for explicit resource management
The driver now natively supports explicit resource management for MongoClient, ClientSession, ChangeStream and cursors. Additionally, on compatible Node.js versions, explicit resource management can be used with cursor.stream() and the GridFSDownloadStream, since these classes inherit resource management from Node.js' readable streams.
This feature is experimental and subject to change at any time. It will remain experimental until the proposal has reached stage 4 and Node.js declares its implementation of async disposable resources stable.
To use explicit resource management with the Node driver, you must configure tslib polyfills for Symbol.asyncDispose in your application (see the TS 5.2 release announcement for more information).
Explicit resource management is a feature that ensures that resources' disposal methods are always called when the resources' scope is exited. For driver resources, explicit resource management guarantees that the resource's corresponding close method is called when the resource goes out of scope.
// without explicit resource management:
{
  const client = await MongoClient.connect('<uri>');
  try {
    const session = client.startSession();
    try {
      const cursor = client.db('my-db').collection('my-collection').find({}, { session });
      try {
        const doc = await cursor.next();
      } finally {
        await cursor.close();
      }
    } finally {
      await session.endSession();
    }
  } finally {
    await client.close();
  }
}
// with explicit resource management:
{
await using client = await MongoClient.connect('<uri>');
await using session = client.startSession();
await using cursor = client.db('my-db').collection('my-collection').find({}, { session });
const doc = await cursor.next();
}
// outside of scope, the cursor, session and mongo client will be cleaned up automatically.
The full explicit resource management proposal can be found here.
Driver now supports auto selecting between IPv4 and IPv6 connections
For users on Node versions that support the autoSelectFamily and autoSelectFamilyAttemptTimeout options (Node 18.13+), these can now be provided to the MongoClient and will be passed through to socket creation. autoSelectFamily will default to true, with autoSelectFamilyAttemptTimeout by default not defined. Example:
new MongoClient('<uri>', { autoSelectFamily: true, autoSelectFamilyAttemptTimeout: 100 }); // illustrative values
Allow passing through
allowPartialTrustChain
Node.js TLS option
This option is now exposed through the MongoClient constructor's options parameter and controls the
X509_V_FLAG_PARTIAL_CHAIN
OpenSSL flag.
Fixed enableUtf8Validation option
Starting in v6.8.0 we inadvertently removed the ability to disable UTF-8 validation when deserializing BSON. Validation is normally a good thing, but it was always meant to be configurable, and the recent Node.js runtime issues (v22.7.0) make this option indispensable for avoiding errors from mistakenly generated invalid UTF-8 bytes.
Add duration indicating time elapsed between connection creation and when the connection is ready
ConnectionReadyEvent now has a durationMS property that represents the time between the connection creation event and when the connection ready event is fired.
Add duration indicating time elapsed between the beginning and end of a connection checkout operation
ConnectionCheckedOutEvent / ConnectionCheckFailedEvent now have a durationMS property that represents the time between checkout start and success/failure.
Create native cryptoCallbacks 🔐
Node.js bundles OpenSSL, which means we can access the crypto APIs from C++ directly, avoiding the need to define them in JavaScript and call back into the JS engine to perform encryption. Now, when running the bindings in a version of Node.js that bundles OpenSSL 3 (should correspond to Node.js 18+), the cryptoCallbacks option will be ignored and C++ defined callbacks will be used instead. This improves the performance of encryption dramatically, as much as 5x faster. 🚀
This improvement was made to [email protected], which is available now!
Only permit mongocryptd spawn path and arguments to be own properties
We have added some defensive programming to the options that specify spawn path and spawn arguments for mongocryptd due to the sensitivity of the system resource they control, namely, launching a process. Now, mongocryptdSpawnPath and mongocryptdSpawnArgs must be own properties of autoEncryption.extraOptions. This makes it more difficult for a global prototype pollution bug related to these options to occur.
Support for range v2: Queryable Encryption supports range queries
Queryable encryption range queries are now officially supported. To use this feature, you must:
Important
Collections and documents encrypted with range queryable fields with a 7.0 server are not compatible with range queries on 8.0 servers.
Documentation for queryable encryption can be found in the MongoDB server manual.
insertMany and bulkWrite accept ReadonlyArray inputs
This improves the TypeScript developer experience; developers tend to use ReadonlyArray because it can help show where mutations are made, and enabling noUncheckedIndexedAccess leads to a better type narrowing experience.
Please note that the array is read-only but the documents are not: the driver adds _id fields to your documents unless you request that the server generate the _id with forceServerObjectId.
Fix retryability criteria for write concern errors on pre-4.4 sharded clusters
Previously, the driver would erroneously retry writes on pre-4.4 sharded clusters based on a nested code in the server response (error.result.writeConcernError.code). Per the common drivers specification, retryability should be based on the top-level code (error.code). With this fix, the driver avoids unnecessary retries.
The LocalKMSProviderConfiguration's key property accepts Binary for auto encryption
In #4160 we fixed a type issue where a local KMS provider at runtime accepted a BSON Binary instance but the TypeScript type inaccurately only permitted Buffer and string. The same change has now been applied to AutoEncryptionOptions.
BulkOperationBase (superclass of UnorderedBulkOperation and OrderedBulkOperation) now reports length property in TypeScript
The length getter for these classes was defined manually using Object.defineProperty, which hid it from TypeScript. Thanks to @sis0k0 we now have the getter defined on the class, which is functionally the same, but a greatly improved DX when working with types. 🎉
MongoWriteConcernError.code is overwritten by nested code within MongoWriteConcernError.result.writeConcernError.code
MongoWriteConcernError is now correctly formed such that the original top-level code is preserved. Previously, MongoWriteConcernError.code was mistakenly set to MongoWriteConcernError.result.writeConcernError.code.
Optimized cursor.toArray()
Prior to this change, toArray() simply used the cursor's async iterator API, which parses BSON documents lazily (see more here). toArray(), however, eagerly fetches the entire set of results, pushing each document into the returned array. As such, toArray does not have the same benefits from lazy parsing as other parts of the cursor API.
With this change, when toArray() accumulates documents, it empties the current batch of documents into the array before calling the async iterator again, which means each iteration will fetch the next batch rather than wrap each document in a promise. This allows cursor.toArray() to avoid the delays associated with async/await execution, and allows for a performance improvement of up to 5% on average! 🎉
Note: This performance optimization does not apply if a transform has been provided to cursor.map() before toArray is called.
Fixed mixed use of cursor.next() and cursor[Symbol.asyncIterator]
In 6.8.0, we inadvertently prevented the use of cursor.next() along with using for await syntax to iterate cursors. If your code made use of the following pattern and the call to cursor.next retrieved all your documents in the first batch, then the for-await loop would never be entered. This issue is now fixed.
const doc = await cursor.next();
for await (const doc of cursor) {
  // process doc
  // ...
}
Features