Aggregation of 200 GB of data: keep the collection available with the old data until the full commit is done

Hello everyone,
We run an aggregation over roughly 200 GB of data. We want the target collection to always remain available with the old data until the full commit is done.

using MongoDB.Bson;
using MongoDB.Driver;
using System.Threading;

// For a replica set, include the replica set name and a seed list of the members in the URI string; e.g.
// var connectionString = "mongodb://mongodb0.example.com:27017,mongodb1.example.com:27017/?replicaSet=myRepl";
// For a sharded cluster, connect to the mongos instances; e.g.
// var connectionString = "mongodb://mongos0.example.com:27017,mongos1.example.com:27017/";
var client = new MongoClient(connectionString);

// Prereq: Create collections.
var database1 = client.GetDatabase("mydb1");
var collection1 = database1.GetCollection<BsonDocument>("foo").WithWriteConcern(WriteConcern.WMajority);
collection1.InsertOne(new BsonDocument("abc", 0));
var database2 = client.GetDatabase("mydb2");
var collection2 = database2.GetCollection<BsonDocument>("bar").WithWriteConcern(WriteConcern.WMajority);
collection2.InsertOne(new BsonDocument("xyz", 0));
// Step 1: Start a client session.
using (var session = client.StartSession())
{
    // Step 2: Optional. Define options to use for the transaction.
    var transactionOptions = new TransactionOptions(
        writeConcern: WriteConcern.WMajority);

    // Step 3: Define the sequence of operations to perform inside the transaction.
    var cancellationToken = CancellationToken.None; // normally a real token would be used

    var result = session.WithTransaction(
        (s, ct) =>
        {
            // The $group / $out pipeline (shown in shell syntax above), expressed with the .NET driver.
            var pipeline = new BsonDocument[]
            {
                new BsonDocument("$group", new BsonDocument
                {
                    { "_id", "$author" },
                    { "books", new BsonDocument("$push", "$title") }
                }),
                new BsonDocument("$out", "authors")
            };
            collection1.Aggregate<BsonDocument>(s, pipeline, cancellationToken: ct).ToList(ct);
            return "Aggregation committed";
        },
        transactionOptions,
        cancellationToken);
}
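
To make the availability requirement a bit more concrete, here is a rough sketch of the behaviour we are after, assuming the aggregation output is written to a separate staging collection first. The "authors_staging" name and the rename step are only illustrations, not something we run today: readers would keep seeing the old "authors" data until the new data is completely written, and only then would it be swapped in.

// Sketch only: build the result in a staging collection, then swap it in.
// "authors_staging" is a hypothetical collection name used for illustration.
var stagingPipeline = new BsonDocument[]
{
    new BsonDocument("$group", new BsonDocument
    {
        { "_id", "$author" },
        { "books", new BsonDocument("$push", "$title") }
    }),
    new BsonDocument("$out", "authors_staging")
};
collection1.Aggregate<BsonDocument>(stagingPipeline).ToList();

// Readers continue to see the old "authors" collection until this rename;
// DropTarget = true replaces the existing collection in one step.
database1.RenameCollection(
    "authors_staging",
    "authors",
    new RenameCollectionOptions { DropTarget = true });

This is only meant to illustrate the behaviour we need; any guidance on the right way to achieve it with the transaction shown above is welcome.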

Thanks