How to Migrate Your Node.js App from SQL to MongoDB

Emmanuel John • 16 min read • Published Jul 09, 2024 • Updated Jul 09, 2024
Node.js • JavaScript • MongoDB
With the rapid adoption of the Agile methodology in software development, developers increasingly migrate from SQL to NoSQL databases for greater flexibility and for horizontal scaling, which is cheaper and more efficient.
Migrating from a SQL database to a NoSQL database can be challenging because of key factors like data model differences, query language, access patterns, and query optimization.
In this article, we'll explore key MongoDB concepts like collections, documents, and BSON and the differences in data modeling between SQL and NoSQL databases. We'll also migrate from a relational database (Postgres) to MongoDB.

Prerequisites

Below are a few prerequisites that you'll need before moving forward with this article:
  • Knowledge of JavaScript
  • Experience building Node.js applications
  • Experience with SQL and PostgreSQL
  • Node.js ≥v20 installed

Planning the migration

In this section, we'll migrate the database and queries of a Node.js backend application from Postgres to MongoDB. The application will be a public phonebook app with the following features:
  1. User registration and login: Passwords are hashed using bcrypt, and JWT is used for authentication.
  2. CRUD operations:
  • Create contact: Authenticated users can create contacts.
  • Get contacts: Get contacts with call rate count and user details, filtered by date range and ordered by call rate count.
  • Get contact by ID: Retrieve a single contact by ID for the authenticated user.
  • Update contact: Authenticated users can update their contacts.
  • Delete contact: Authenticated users can delete their contacts.
Open psql and run the following command to create the required tables for the application:
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(100) UNIQUE NOT NULL,
    password VARCHAR(100) NOT NULL,
    email VARCHAR(100) UNIQUE NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE contacts (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    phone VARCHAR(20) NOT NULL,
    email VARCHAR(100),
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE rates (
    id SERIAL PRIMARY KEY,
    description TEXT NOT NULL,
    contact_id INTEGER REFERENCES contacts(id) ON DELETE CASCADE,
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
Create a Node.js project and install the following dependencies:
"dependencies": {
  "bcryptjs": "^2.4.3",
  "express": "^4.19.2",
  "jsonwebtoken": "^9.0.2",
  "pg": "^8.11.5"
}
You can install all the required packages with the following command:
npm install bcryptjs express jsonwebtoken pg
Create a .env.postgres file and add the following environment variables. You will need to change the values to use the credentials to connect to your database.
DB_USER=your_username
DB_HOST=localhost
DATABASE=your_DB_name
DB_PASSWORD=your_password
DB_PORT=5432
SECRET_KEY=your_jwt_secret_key
Create a postgres.js file in the project root directory and add the following:
import express from 'express';
import bcrypt from 'bcryptjs';
import jwt from 'jsonwebtoken';
import pg from 'pg';

const { Pool } = pg;

const app = express();
const port = 3000;

app.use(express.json());

const pool = new Pool({
  user: process.env.DB_USER,
  host: process.env.DB_HOST,
  database: process.env.DATABASE,
  password: process.env.DB_PASSWORD,
  port: process.env.DB_PORT,
});

// JWT secret key
const SECRET_KEY = process.env.SECRET_KEY;

/* START FEATURE IMPLEMENTATION */

/* END FEATURE IMPLEMENTATION */

// Start server
app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
Ensure your package.json includes "type": "module".
We've successfully set up a Node.js backend application with the Express framework and configured the Postgres database.
You can start your server by running:
node --env-file=.env.postgres postgres.js
Using the --env-file parameter will inject the environment variables, and you'll be able to use them through process.env.
Let's proceed with implementing the features of the application.
User register: 
app.post('/register', async (req, res) => {
  const { username, email, password } = req.body;
  const salt = await bcrypt.genSalt(10);
  const hashedPassword = await bcrypt.hash(password, salt);
  const result = await pool.query(
    'INSERT INTO users (username, email, password) VALUES ($1, $2, $3) RETURNING *',
    [username, email, hashedPassword]
  );
  const user = result.rows[0];

  res.status(201).json({ ...user, password: null });
});
The user register endpoint ensures secure handling of passwords using bcrypt for hashing the password and executes a SQL query to insert the new user into the users table in the PostgreSQL database via the connection pool.
User login:
app.post('/login', async (req, res) => {
  const { email, password } = req.body;

  const user = await pool.query('SELECT * FROM users WHERE email = $1', [email]);

  let validPass = false;

  if (user.rows[0]) validPass = await bcrypt.compare(password, user.rows[0].password);

  if (!validPass) return res.status(400).json({ message: 'Invalid credentials' });

  const token = jwt.sign({ id: user.rows[0].id, email: user.rows[0].email }, SECRET_KEY, { expiresIn: '1h' });

  res.status(200).json({ token });
});
The user login endpoint ensures secure handling of user authentication using bcrypt for password validation and JWT for session management.
Middleware to authenticate users:
const authenticateToken = (req, res, next) => {
  const token = req.header('Authorization');
  if (!token) return res.status(401).json({ message: 'Access Denied' });
  try {
    // jwt.verify throws on an invalid or expired token, so guard it
    const verified = jwt.verify(token.split(' ')[1], SECRET_KEY);
    req.user = verified;
    next();
  } catch {
    res.status(401).json({ message: 'Invalid token' });
  }
};
This middleware function ensures that only requests with valid JWT tokens can access protected routes, enhancing the application's security.
Create contact:
app.post('/contacts', authenticateToken, async (req, res) => {
  const { name, phone, email } = req.body;
  const result = await pool.query(
    'INSERT INTO contacts (name, phone, email, user_id) VALUES ($1, $2, $3, $4) RETURNING *',
    [name, phone, email, req.user.id]
  );
  res.status(201).json(result.rows[0]);
});
The create contact endpoint creates new contacts while ensuring only authenticated users can perform this action.
Get contact by ID:
app.get('/contacts/:id', authenticateToken, async (req, res) => {
  const { id } = req.params;
  const result = await pool.query('SELECT * FROM contacts WHERE id = $1 AND user_id = $2', [id, req.user.id]);
  if (result.rows.length === 0) return res.status(404).json({ message: 'Contact not found' });
  res.status(200).json(result.rows[0]);
});
The get contact by ID endpoint allows only authenticated users to fetch a specific contact by its ID and checks that it belongs to the authenticated user.
Update contact:
app.put('/contacts/:id', authenticateToken, async (req, res) => {
  const { id } = req.params;
  const { name, phone, email } = req.body;
  const result = await pool.query(
    'UPDATE contacts SET name = $1, phone = $2, email = $3 WHERE id = $4 AND user_id = $5 RETURNING *',
    [name, phone, email, id, req.user.id]
  );
  if (result.rows.length === 0) return res.status(404).json({ message: 'Contact not found or not authorized' });
  res.status(200).json(result.rows[0]);
});
The update contact endpoint allows only authenticated users to access and update their contacts.
Delete contact:
app.delete('/contacts/:id', authenticateToken, async (req, res) => {
  const { id } = req.params;
  const result = await pool.query('DELETE FROM contacts WHERE id = $1 AND user_id = $2 RETURNING *', [id, req.user.id]);
  if (result.rows.length === 0) return res.status(404).json({ message: 'Contact not found or not authorized' });
  res.status(200).json(result.rows[0]);
});
The delete contact endpoint allows only authenticated users to delete their contacts.
Add call rate:
app.post('/contacts/:id/rates', authenticateToken, async (req, res) => {
  const { id } = req.params;
  const { description } = req.body;
  const result = await pool.query(
    'INSERT INTO rates (description, contact_id, user_id) VALUES ($1, $2, $3) RETURNING *',
    [description, id, req.user.id]
  );
  res.status(201).json(result.rows[0]);
});
The add call rate endpoint allows only authenticated users to rate a specific contact by its ID.
The rates are associated with contacts in a one-to-many relationship. The contact_id column in the rates table is a foreign key that references the id column in the contacts table.
This foreign key establishes a relationship where each rate is linked to a specific contact.
Having a defined relationship allows for efficient queries that can join two or more tables.
For example, you can quickly fetch a contact along with all its associated rates or aggregate data, like the number of rates per contact.
Get contacts:
app.get('/contacts', async (req, res) => {
  let { startDate, endDate } = req.query;
  if (!endDate) {
    endDate = new Date().toISOString();
  } else {
    endDate = new Date(endDate).toISOString();
  }

  if (!startDate) {
    const now = new Date();
    const pastDate = new Date(now);
    pastDate.setDate(now.getDate() - 7);
    startDate = pastDate.toISOString();
  } else {
    startDate = new Date(startDate).toISOString();
  }

  const result = await pool.query(`
      SELECT
          contacts.id, contacts.name, contacts.phone, contacts.created_at,
          users.username,
          COUNT(rates.id) AS call_rate
      FROM contacts
      LEFT JOIN users ON contacts.user_id = users.id
      LEFT JOIN rates ON contacts.id = rates.contact_id
      WHERE contacts.created_at BETWEEN $1 AND $2
      GROUP BY contacts.id, users.username
      ORDER BY call_rate DESC
  `, [startDate, endDate]);
  res.status(200).json(result.rows);
});
The get contacts endpoint retrieves and aggregates contact data, ensuring that only contacts within the specified date range are included in the results. It also provides useful information about the number of calls associated with each contact, ordered by the number of call ratings.
SELECT clause: The SELECT clause specifies which columns to return in the result set.
FROM clause: The FROM clause specifies the table from which to select data.
LEFT JOIN users ON contacts.user_id = users.id: This joins the contacts table with the users table on the condition that the user_id in the contacts table matches the id in the users table.
LEFT JOIN rates ON contacts.id = rates.contact_id: This joins the contacts table with the rates table on the condition that the id in the contacts table matches the contact_id in the rates table.
WHERE contacts.created_at BETWEEN $1 AND $2: This filters the contacts to only include those created between the dates specified by $1 (startDate) and $2 (endDate). 
GROUP BY contacts.id, users.username: This groups the results by contacts.id and users.username. This is necessary because we are using an aggregate function (COUNT(rates.id)), and SQL requires that all non-aggregated columns in the SELECT clause be included in the GROUP BY clause.
ORDER BY call_rate DESC: This orders the results by call_rate in descending order (DESC). This means that contacts with the highest number of rates (or calls) will appear first in the results.
To test out your implementation, don't forget to stop and restart your server.
As we can see, PostgreSQL follows a relational data model, while MongoDB is a NoSQL, document-oriented database, so we need to redesign our schema to fit MongoDB's document-based model. In the SQL migration section, we'll identify the entities and relationships in our PostgreSQL schema and map them to MongoDB's collections and documents.

Setting up MongoDB

To set up MongoDB for use with our Node.js application, we'll use MongoDB Atlas, a multi-cloud database service that simplifies deploying and managing your databases with just a few clicks.
It also offers a free-forever tier with no credit card requirements. How cool is that? 
With that said, head over to MongoDB Atlas and create an account if you don't have one already.

Create database cluster

To create a database cluster, follow the official docs. 
Choose the shared cluster option for a free-forever tier. The default configuration is sufficient for our demo application. 
When you create your database cluster, you'll be prompted to add a database user and password. Make sure you save the username and password somewhere you can reference them later, as you'll need them to connect to the database cluster.

Connecting to the database cluster 

Create a Node.js project and install the following dependencies:
"dependencies": {
  "express": "^4.19.2",
  "mongodb": "^6.8.0",
  "bcryptjs": "^2.4.3",
  "jsonwebtoken": "^9.0.2"
}
You can install the libraries with:
npm install mongodb express bcryptjs jsonwebtoken
Create a .env.mongodb file and add the following environment variables:
SECRET_KEY=your_jwt_secret_key
DB_USER=your_cluster_username
DB_PASSWORD=your_cluster_password
DATABASE=users
CLUSTER_URI=your_cluster_uri
Create a mongodb.js file in the project root directory and add the following:
import { MongoClient, ObjectId } from 'mongodb';
import express from 'express';
import bcrypt from 'bcryptjs';
import jwt from 'jsonwebtoken';

const app = express();
app.use(express.json());

const url = `mongodb+srv://${process.env.DB_USER}:${process.env.DB_PASSWORD}@${process.env.CLUSTER_URI}`;
const client = new MongoClient(url);
const dbName = process.env.DATABASE;
const SECRET_KEY = process.env.SECRET_KEY;

await client.connect();
const db = client.db(dbName);
const usersCollection = db.collection('users');
const contactsCollection = db.collection('contacts');

/* START FEATURE IMPLEMENTATION */

/* END FEATURE IMPLEMENTATION */

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});
To use top-level await, you will need to enable ES module support and ensure that your package.json has "type": "module".
The above code allows your Node app to connect to your database cluster.
Run the following command to start your Node server:
node --env-file=.env.mongodb mongodb.js

Migrating the SQL queries

There are two approaches here:
  • Translate the SQL queries to the MongoDB Query API by hand.
  • Use the query converter in MongoDB Relational Migrator to convert your SQL queries to the MongoDB Query API.

Translate the SQL queries to the MongoDB Query API

The MongoDB Node.js driver is a package that allows you to connect to MongoDB, execute database operations, and manage collections and documents. It provides a straightforward, schemaless solution to model your application data in a flexible manner.
The first thing to consider for a database migration is the data model. Remember how we created tables and relationships for the Postgres database by running the following SQL in the psql terminal:
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(100) UNIQUE NOT NULL,
    password VARCHAR(100) NOT NULL,
    email VARCHAR(100) UNIQUE NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
Here is the good part: Using the MongoDB Node.js driver, you do not have to define schemas. When you insert a document into a collection, it is created with the fields you provided with the insert method.
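To make that concrete, here is a minimal sketch (illustrative objects and field names; with a live connection you would pass each object to usersCollection.insertOne). Two documents in the same collection don't have to share the exact same fields, and no ALTER TABLE-style migration is needed to add one:

```javascript
// Illustrative only: two user documents that could live in the same collection.
const userA = { username: 'ada', email: 'ada@example.com' };
// userB carries an extra field the "schema" never declared anywhere:
const userB = { username: 'alan', email: 'alan@example.com', nickname: 'enigma' };

console.log('nickname' in userA); // false
console.log('nickname' in userB); // true
```

Validation, if you want it, can be added later with MongoDB's schema validation rules rather than being required up front.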
Register:
app.post('/register', async (req, res) => {
  const { username, email, password } = req.body;

  const salt = await bcrypt.genSalt(10);
  const hashedPassword = await bcrypt.hash(password, salt);
  const user = { username, email, password: hashedPassword };
  await usersCollection.insertOne(user);
  res.status(201).json({ ...user, password: null });
});
This endpoint is the MongoDB equivalent of the register endpoint we wrote for PostgreSQL. While the PostgreSQL implementation uses a SQL INSERT statement to add a new row to the users table, the MongoDB implementation uses the driver's insertOne method to add a new document to a collection.
Login:
app.post('/login', async (req, res) => {
  const { email, password } = req.body;
  const user = await usersCollection.findOne({ email });

  let validPass = false;

  if (user) validPass = await bcrypt.compare(password, user.password);

  if (!validPass) return res.status(400).json({ message: 'Invalid credentials' });

  const token = jwt.sign({ sub: user._id.toString(), email: user.email }, SECRET_KEY, { expiresIn: '1h' });
  res.status(200).json({ token });
});
This endpoint is the MongoDB equivalent of the /login endpoint we wrote for PostgreSQL. While the PostgreSQL implementation uses pool.query to execute a SQL SELECT statement and find a user row in the table, the MongoDB implementation uses usersCollection.findOne to find a user document in the collection.
In MongoDB, the findOne method uses a JSON object to specify the query condition, which is analogous to the WHERE clause in SQL. In SQL, users is the name of the table being queried, similar to a collection in MongoDB.
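The mapping can be sketched side by side (the email value and date are hypothetical):

```javascript
// SQL:       SELECT * FROM users WHERE email = $1      -- with $1 = 'ada@example.com'
// Query API: usersCollection.findOne(filter) with a plain filter object:
const filter = { email: 'ada@example.com' };

// Compound conditions become extra fields and query operators:
// SQL:       WHERE email = $1 AND created_at >= $2
const compound = { email: 'ada@example.com', created_at: { $gte: '2024-01-01' } };

console.log(Object.keys(compound).length); // 2
```

Each key in the filter object is ANDed together by default, just like chained AND conditions in a WHERE clause.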
Middleware to authenticate users:
const authenticateToken = (req, res, next) => {
  const token = req.header('Authorization');
  if (!token) return res.status(401).json({ message: 'Access Denied' });
  try {
    // jwt.verify throws on an invalid or expired token, so guard it
    const verified = jwt.verify(token.split(' ')[1], SECRET_KEY);
    req.user = verified;
    next();
  } catch {
    res.status(401).json({ message: 'Invalid token' });
  }
};
Here, we're using the same function as we did in the previous example.
Create contact:
app.post('/contacts', authenticateToken, async (req, res) => {
  const { name, phone, email } = req.body;
  const contact = { name, phone, email, user_id: new ObjectId(req.user.sub), createdAt: new Date().toISOString() };
  await contactsCollection.insertOne(contact);
  res.status(201).json(contact);
});
This endpoint is the MongoDB version of the previous POST /contact endpoint we designed for PostgreSQL. Instead of using SQL commands like pool.query to add a contact row to a table, it uses MongoDB's contactsCollection.insertOne method to add a contact document to a collection.
Converting req.user.sub (a plain string in the JWT payload) to an ObjectId ensures that the stored user ID has the same BSON type as the _id field in the users collection, so it can be efficiently queried, indexed, and joined later with $lookup.
Get contact by ID:
app.get('/contacts/:id', authenticateToken, async (req, res) => {
  const { id } = req.params;
  const contact = await contactsCollection.findOne({ _id: new ObjectId(id), user_id: new ObjectId(req.user.sub) });
  if (!contact) return res.status(404).json({ message: 'Contact not found' });
  res.status(200).json(contact);
});
This endpoint is the MongoDB equivalent of the previous GET /contacts/:id endpoint designed for PostgreSQL. While the PostgreSQL implementation uses pool.query to execute an SQL SELECT  statement and the WHERE clause to find a contact row in the table, the MongoDB implementation uses the contactsCollection.findOne to find a contact document in the collection and returns the found document.
PostgreSQL uses the id and user_id parameters directly in the SQL query, while the MongoDB implementation converts both to ObjectId to match the BSON types stored in the documents.
Update contact:
app.put('/contacts/:id', authenticateToken, async (req, res) => {
  const { id } = req.params;
  const contact = await contactsCollection.findOneAndUpdate(
    { _id: new ObjectId(id), user_id: new ObjectId(req.user.sub) },
    { $set: req.body },
    { returnDocument: 'after' }
  );
  if (!contact) return res.status(404).json({ message: 'Contact not found or not authorized' });
  res.status(200).json(contact);
});
This endpoint is the MongoDB equivalent of the previous PUT /contacts endpoint designed for PostgreSQL. While the PostgreSQL implementation uses pool.query to execute a SQL UPDATE  statement and a WHERE clause to update the contact in the contacts table, the MongoDB implementation uses the findOneAndUpdate method with $set to update the document in the contacts collection.
Delete contact:
app.delete('/contacts/:id', authenticateToken, async (req, res) => {
  const { id } = req.params;
  const result = await contactsCollection.deleteOne({ _id: new ObjectId(id), user_id: new ObjectId(req.user.sub) });
  if (!result.deletedCount) return res.status(404).json({ message: 'Contact not found or not authorized' });
  res.status(200).json({ deletedCount: result.deletedCount });
});
This endpoint is the MongoDB equivalent of the previous DELETE /contact endpoint designed for PostgreSQL. While the PostgreSQL implementation uses pool.query to execute an SQL DELETE  statement and a WHERE clause to delete the contact in the contacts table, the MongoDB implementation uses the deleteOne method to delete the document in the contacts collection.
In MongoDB, querying is done using JSON-like objects. The equivalent of the WHERE clause is the query object passed to methods like findOne, deleteOne, etc.
Add call rate:
app.post('/contacts/:id/rates', authenticateToken, async (req, res) => {
  const { id } = req.params;
  const { description } = req.body;
  const rate = { description, createdAt: new Date().toISOString(), user_id: new ObjectId(req.user.sub) };
  const result = await contactsCollection.updateOne(
    { _id: new ObjectId(id), user_id: new ObjectId(req.user.sub) },
    { $push: { rates: rate } }
  );
  if (result.matchedCount === 0) return res.status(404).json({ message: 'Contact not found' });
  res.status(201).json(rate);
});
This endpoint is the MongoDB equivalent of the previous POST /contacts/:id/rates endpoint designed for PostgreSQL. While the PostgreSQL implementation uses pool.query to execute a SQL INSERT INTO statement to insert a rate row in the table, MongoDB's implementation uses the contactsCollection.updateOne to add a rate to a contact, which will now update the contact document by pushing a new rate into the embedded rates array.
By embedding rates within contacts, you avoid the need for joins (like $lookup), which can improve query performance, especially for read-heavy operations. Now, all relevant information about a contact, including its rates, is stored together in a single document.
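To visualize the trade-off, here is a hypothetical shape of a contact document after a couple of rates have been pushed (all values are illustrative; in the database, _id and user_id would be ObjectIds):

```javascript
// Illustrative contact document with embedded rates.
const contact = {
  name: 'Ada Lovelace',
  phone: '+1-555-0100',
  email: 'ada@example.com',
  createdAt: '2024-07-01T10:00:00.000Z',
  rates: [
    { description: 'Great call', createdAt: '2024-07-02T09:00:00.000Z' },
    { description: 'Quick follow-up', createdAt: '2024-07-03T14:30:00.000Z' },
  ],
};

// This is what { $size: { $ifNull: ['$rates', []] } } computes per document,
// falling back to an empty array for contacts that have never been rated.
const callRate = (contact.rates ?? []).length;
console.log(callRate); // 2
```

In the relational schema, producing the same view required a LEFT JOIN and a GROUP BY; here the count is a property of a single document.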
Get contacts:
app.get("/contacts", authenticateToken, async (req, res) => {
  let { startDate, endDate } = req.query;

  if (!endDate) {
    endDate = new Date().toISOString();
  } else {
    endDate = new Date(endDate).toISOString();
  }

  if (!startDate) {
    const now = new Date();
    const pastDate = new Date(now);
    pastDate.setDate(now.getDate() - 7);

    startDate = pastDate.toISOString();
  } else {
    startDate = new Date(startDate).toISOString();
  }

  const contacts = await contactsCollection
    .aggregate([
      {
        $match: {
          createdAt: {
            $gte: startDate,
            $lte: endDate,
          },
          user_id: new ObjectId(req.user.sub),
        },
      },
      {
        $lookup: {
          from: "users",
          localField: "user_id",
          foreignField: "_id",
          as: "user",
        },
      },
      {
        $addFields: {
          call_rate: { $size: { $ifNull: ["$rates", []] } },
        },
      },
      { $sort: { call_rate: -1 } },
    ])
    .toArray();

  res.status(200).json(contacts);
});
This endpoint is the MongoDB equivalent of the previous GET /contacts endpoint designed for PostgreSQL. While the PostgreSQL implementation uses a SQL query with JOIN statements, WHERE clause, GROUP BY, and ORDER BY to perform (matching, joining, adding fields, sorting) operations on the contacts table, MongoDB's implementation uses the aggregate method to perform similar operations on the contacts collection.
An aggregation pipeline in MongoDB is a framework that passes documents into a multi-stage pipeline that can transform them and output aggregated results. Each stage in the pipeline performs an operation on the input documents and passes the result to the next stage.
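As a conceptual sketch (plain JavaScript over illustrative data, not driver code), the stages above behave like chained array transformations; in MongoDB they run server-side, but the data flow is the same:

```javascript
const docs = [
  { name: 'Ada', createdAt: '2024-07-01', rates: [{}, {}, {}] },
  { name: 'Alan', createdAt: '2024-07-02' },
  { name: 'Grace', createdAt: '2023-01-01', rates: [{}] },
];

const result = docs
  .filter(d => d.createdAt >= '2024-01-01')                 // ~ $match (date range)
  .map(d => ({ ...d, call_rate: (d.rates ?? []).length }))  // ~ $addFields with $size/$ifNull
  .sort((a, b) => b.call_rate - a.call_rate);               // ~ $sort { call_rate: -1 }

console.log(result.map(d => d.name)); // ['Ada', 'Alan']
```

Grace is dropped by the date filter, and Ada sorts first because she has the most rates, mirroring what the real pipeline returns.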
You can refer to the MongoDB Aggregation Pipeline documentation.
This aggregation pipeline filters contacts by their creation date and the authenticated user's ID, joins the contacts collection with the users collection to fetch user details, adds a new field --- call_rate --- that counts the rates embedded in each contact, and sorts the contacts by call_rate in descending order.
Compared to the SQL statement, only one join ($lookup) is needed because the rates are embedded in each contact document, which can improve read performance.

Migrating the data

Now, let's talk about using the MongoDB Query Converter to migrate your SQL queries to the MongoDB Query API.
MongoDB provides a tool called Relational Migrator which allows you to migrate your SQL queries to the MongoDB Query API with ease. It also addresses common migration and data modeling challenges to provide a smooth transition to MongoDB.
Download Relational Migrator to try it out yourself.
Before using the query converter, you must sign in to your Atlas account in Relational Migrator. For details, see Log In with Atlas.
  • Open the Relational Migrator app on your PC and log in with Atlas. 
  • Connect to your Postgres database.
  • Select the tables that you want to migrate.
  • Define your schema as follows:
[Screenshot: Define Initial Schema]
Then, name your project.
Generate the MongoDB schema from the imported Postgres database table as follows:
[Screenshot: Relational Migrator]
From the Code Generation tab, click the Query Converter pane.
Click the Paste SQL query button to access the query converter editor.
[Screenshot: Relational Migrator]
Here, you can enter the SQL queries you want to convert and click Convert to convert them to MongoDB queries.

Import data using MongoDB Compass

MongoDB Compass is a graphical user interface for MongoDB and it's completely free to install on Mac, Windows, and Linux. You can import JSON or CSV data to MongoDB via MongoDB Compass.
[Screenshot: Compass Create New Collection screen]
To import data to a collection, connect to your database cluster using the connection string from Atlas, create a database and collection, and navigate to the detailed collection view by selecting the collection from the Databases tab or clicking the collection in the left-side navigation.
Next, click the Add Data dropdown, select Import JSON or CSV file, select the appropriate file type, and click Import. After successful import, the dialog closes and Compass displays the collection page containing the newly imported documents.

Import data using MongoImport

To import data using mongoimport, install the MongoDB Database Tools, a suite of command-line utilities for working with MongoDB, and then run mongoimport from the system command line.
Here is an example of importing data via mongoimport:
mongoimport "mongodb+srv://<cluster-username>:<cluster-password>@cluster0.33t5arb.mongodb.net/Node-API?retryWrites=true&w=majority&appName=Cluster0" --collection=users --file=users.json
Here, we use mongoimport to import the JSON data from the users.json file into the users collection in the Node-API database.

Import data using the VS Code extension

To import data using the VS Code extension, go to the extensions tab in VS Code and search for MongoDB. You should see the MongoDB for VS Code extension.
[Screenshot: MongoDB for VS Code extension]
Once you successfully install it, you'll see a leaf icon in your VS Code sidebar. Click on it and under connections, add a new connection. Then, click Connect with connection string and pass in your database cluster connection string.
You should see your cluster. Hover over your cluster and click the plus icon to create a new database. This will generate a playground file where you can add the following script to create a database and collection and import data:
// Select the database to use.
use('database-name');

// Insert documents into the users collection.
// insertMany expects an array of documents.
db.getCollection('users').insertMany([
  // Paste the contents of your JSON file (an array of documents) here.
]);

Testing the migration

Use Postman or any API testing tool to test all the updated endpoints with the migrated queries to ensure the migration works as expected.

Conclusion

Migrating a Node.js application from a SQL database to MongoDB can be seamless and efficient when approached with a well-structured resource. In this tutorial, we took a look at setting up MongoDB, migrating the SQL queries, and performing data migration with MongoDB Compass, MongoImport, and the MongoDB for VS Code extension.  
By following the structured approach in the tutorial, you can successfully migrate your Node.js application from a SQL database to MongoDB.
Questions? Want to share what you're building? Head to the MongoDB Developer Community next.