Build a JavaScript AI Agent With LangGraph.js and MongoDB
Jesse Hall • 15 min read • Published Sep 18, 2024 • Updated Sep 18, 2024
As a web developer, building artificial intelligence into your web applications may seem daunting. As someone with zero background in AI/ML technologies, I totally understand. At first, it sounded so foreign to me. But I was quickly hooked when I saw how easy it is!
In this tutorial, we're going to dive into the exciting world of AI agents in JavaScript. Trust me, it's not as scary as it sounds! We're going to build something really cool using LangGraph.js and MongoDB.
So, what's the big deal with AI agents? And no, not Agent Smith. Imagine having a smart assistant that not only understands what you're saying but also remembers your previous conversations and can even utilize a number of "tools" to look up additional info, process data, and more. Pretty neat, right?
Enter LangGraph.js — your new best friend when it comes to building AI agents. Here's what makes it so awesome:
- It can handle complex stuff: Want your AI to make decisions or repeat tasks? LangGraph.js has got your back with its loops and branching features.
- It's got a great memory: No more "Oops, I forgot what we were talking about." LangGraph.js saves the state of your app after each step.
- It plays well with humans: You can easily add human input to your AI workflows to monitor and alter the agent's approach.
- It's super quick: With its streaming support, you get instant results. No more twiddling your thumbs waiting for responses.
These features make LangGraph.js an ideal choice for developing sophisticated AI agents that can maintain context and handle complex interactions. And, of course, LangGraph.js fits perfectly with LangChain.js, making it easy to integrate with other AI tools and libraries.
By integrating LangGraph.js with MongoDB, we can create AI agents that not only process and generate language but also store and retrieve information efficiently. This combination is perfect for building apps that need context-aware conversations and data-driven decision-making. It's like creating your own J.A.R.V.I.S., minus the fancy holographic displays (for now, at least).
In this tutorial, we'll build an AI agent that can assist with HR-related queries using a database of employee information. Our agent will be able to:
- Start new conversations and continue existing ones.
- Look up employee information using MongoDB Atlas Vector Search.
- Persist conversation state (LangGraph checkpoints) in MongoDB.
Let's get started by setting up our project!
If you are a visual learner, give the video version of this tutorial a watch!
Before we begin, make sure you have the following:
- Node.js and npm installed
- A free MongoDB Atlas account
While we are using OpenAI for embeddings and Anthropic for conversations, you can easily swap these out to use any LLM combo of your choice.
Our base project structure will look like this:
```
├── .env
├── index.ts
├── agent.ts
├── seed-database.ts
├── package.json
└── tsconfig.json
```
Initialize a new Node.js project with TypeScript and install the required dependencies:
```bash
npm init -y
npm i -D typescript ts-node @types/express @types/node
npx tsc --init
npm i langchain @langchain/langgraph @langchain/core @langchain/openai @langchain/mongodb @langchain/langgraph-checkpoint-mongodb @langchain/anthropic dotenv express mongodb zod
```
Create a `.env` file in the root of your project and add your OpenAI and Anthropic API keys as well as your MongoDB Atlas connection string:

```
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
MONGODB_ATLAS_URI=your-mongodb-atlas-connection-string
```
Before we do anything, we need to create some synthetic data to work with. We'll use this data to seed our MongoDB database.
We'll use MongoDB Atlas as our database service. If you haven't already, create a cluster in MongoDB Atlas and obtain your connection string.
Create an `index.ts` file in the root of your project. We'll establish a connection to MongoDB using the MongoDB driver:

```typescript
import { MongoClient } from "mongodb";
import 'dotenv/config';

const client = new MongoClient(process.env.MONGODB_ATLAS_URI as string);

async function startServer() {
  try {
    await client.connect();
    await client.db("admin").command({ ping: 1 });
    console.log("Pinged your deployment. You successfully connected to MongoDB!");

    // ... rest of the server setup
  } catch (error) {
    console.error("Error connecting to MongoDB:", error);
    process.exit(1);
  }
}

startServer();
```
Start the server by running `npx ts-node index.ts` in your terminal. If you see the message "Pinged your deployment. You successfully connected to MongoDB!" you're good to go.

To populate your database with synthetic employee data, let's create a `seed-database.ts` script. This script generates realistic employee records using OpenAI's GPT model and stores them in MongoDB along with their vector embeddings.

First, we import the necessary dependencies. We're using LangChain for AI-related functionality, MongoDB for database operations, and Zod for schema validation.
```typescript
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { MongoClient } from "mongodb";
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { z } from "zod";
import "dotenv/config";
```
Next, let's set up our MongoDB client and ChatOpenAI instance:
```typescript
const client = new MongoClient(process.env.MONGODB_ATLAS_URI as string);

const llm = new ChatOpenAI({
  modelName: "gpt-4o-mini",
  temperature: 0.7,
});
```
Here, we create a MongoDB client using the connection string from our environment variables. We also initialize a ChatOpenAI instance with a specific model and temperature setting.
Now, let's define our employee schema using Zod:
```typescript
const EmployeeSchema = z.object({
  employee_id: z.string(),
  first_name: z.string(),
  last_name: z.string(),
  date_of_birth: z.string(),
  address: z.object({
    street: z.string(),
    city: z.string(),
    state: z.string(),
    postal_code: z.string(),
    country: z.string(),
  }),
  contact_details: z.object({
    email: z.string().email(),
    phone_number: z.string(),
  }),
  job_details: z.object({
    job_title: z.string(),
    department: z.string(),
    hire_date: z.string(),
    employment_type: z.string(),
    salary: z.number(),
    currency: z.string(),
  }),
  work_location: z.object({
    nearest_office: z.string(),
    is_remote: z.boolean(),
  }),
  reporting_manager: z.string().nullable(),
  skills: z.array(z.string()),
  performance_reviews: z.array(
    z.object({
      review_date: z.string(),
      rating: z.number(),
      comments: z.string(),
    })
  ),
  benefits: z.object({
    health_insurance: z.string(),
    retirement_plan: z.string(),
    paid_time_off: z.number(),
  }),
  emergency_contact: z.object({
    name: z.string(),
    relationship: z.string(),
    phone_number: z.string(),
  }),
  notes: z.string(),
});

type Employee = z.infer<typeof EmployeeSchema>;

const parser = StructuredOutputParser.fromZodSchema(z.array(EmployeeSchema));
```
Maybe this schema is too detailed, but it shows the power of what we can do with LLMs. As far as the LLM is concerned, generating records like this is a walk in the park.
The schema defines the structure of our employee data. We use Zod to ensure type safety and create a parser that turns the AI's output into structured records.
This really is a game changer: we can now use this schema to generate data that is both realistic and consistent.
Next, let's implement the function to generate synthetic data:
```typescript
async function generateSyntheticData(): Promise<Employee[]> {
  const prompt = `You are a helpful assistant that generates employee data. Generate 10 fictional employee records. Each record should include the following fields: employee_id, first_name, last_name, date_of_birth, address, contact_details, job_details, work_location, reporting_manager, skills, performance_reviews, benefits, emergency_contact, notes. Ensure variety in the data and realistic values.

  ${parser.getFormatInstructions()}`;

  console.log("Generating synthetic data...");

  const response = await llm.invoke(prompt);
  return parser.parse(response.content as string);
}
```
This function uses the `ChatOpenAI` instance along with some prompt engineering to generate synthetic employee data based on our schema.

Now, let's create a function to generate a summary for each employee:
```typescript
async function createEmployeeSummary(employee: Employee): Promise<string> {
  return new Promise((resolve) => {
    const jobDetails = `${employee.job_details.job_title} in ${employee.job_details.department}`;
    const skills = employee.skills.join(", ");
    const performanceReviews = employee.performance_reviews
      .map(
        (review) =>
          `Rated ${review.rating} on ${review.review_date}: ${review.comments}`
      )
      .join(" ");
    const basicInfo = `${employee.first_name} ${employee.last_name}, born on ${employee.date_of_birth}`;
    const workLocation = `Works at ${employee.work_location.nearest_office}, Remote: ${employee.work_location.is_remote}`;
    const notes = employee.notes;

    const summary = `${basicInfo}. Job: ${jobDetails}. Skills: ${skills}. Reviews: ${performanceReviews}. Location: ${workLocation}. Notes: ${notes}`;

    resolve(summary);
  });
}
```
This function takes an employee object and creates a concise summary of their information using the various metadata created by the LLM. We'll use this summary to create embeddings for each employee.
Finally, let's implement the main function to seed the database:
```typescript
async function seedDatabase(): Promise<void> {
  try {
    await client.connect();
    await client.db("admin").command({ ping: 1 });
    console.log("Pinged your deployment. You successfully connected to MongoDB!");

    const db = client.db("hr_database");
    const collection = db.collection("employees");

    await collection.deleteMany({});

    const syntheticData = await generateSyntheticData();

    const recordsWithSummaries = await Promise.all(
      syntheticData.map(async (record) => ({
        pageContent: await createEmployeeSummary(record),
        metadata: { ...record },
      }))
    );

    for (const record of recordsWithSummaries) {
      await MongoDBAtlasVectorSearch.fromDocuments(
        [record],
        new OpenAIEmbeddings(),
        {
          collection,
          indexName: "vector_index",
          textKey: "embedding_text",
          embeddingKey: "embedding",
        }
      );

      console.log("Successfully processed & saved record:", record.metadata.employee_id);
    }

    console.log("Database seeding completed");

  } catch (error) {
    console.error("Error seeding database:", error);
  } finally {
    await client.close();
  }
}

seedDatabase().catch(console.error);
```
This function connects to the MongoDB database, generates synthetic data, creates summaries for each employee, and then stores the data in the database using MongoDB Atlas Vector Search. It also handles error logging and ensures the database connection is closed when the operation is complete.
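The script embeds and writes one record at a time, which keeps the logging granular. As a design alternative, here's a minimal sketch (not part of the original script) that passes the whole array to `fromDocuments` in a single call so LangChain can batch the embedding requests:

```typescript
// Alternative sketch (assumption): insert every record in one call instead of looping.
// fromDocuments accepts an array of documents, so LangChain handles the batching.
await MongoDBAtlasVectorSearch.fromDocuments(
  recordsWithSummaries,
  new OpenAIEmbeddings(),
  {
    collection,
    indexName: "vector_index",
    textKey: "embedding_text",
    embeddingKey: "embedding",
  }
);

console.log(`Successfully processed & saved ${recordsWithSummaries.length} records`);
```

The per-record loop above trades a little speed for a log line per employee, which is handy while you're verifying the seed data.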
To seed the database, run the following command:
```bash
npx ts-node seed-database.ts
```
This script creates a collection of employee records in the `hr_database`.

Go to your MongoDB Atlas dashboard and check out the data that has been generated. It's really amazing! You'll find the same structure as the schema we defined earlier along with the summary and vector embeddings for the summary.
Next, we need to set up a vector index for similarity search, which we'll use later in our AI agent.
To set up the vector index, follow the steps outlined in our How to Index Fields for Vector Search documentation.
Be sure to name your index “vector_index” and select the `employees` collection. This is the JSON definition for the index:

```json
{
  "fields": [
    {
      "numDimensions": 1536,
      "path": "embedding",
      "similarity": "cosine",
      "type": "vector"
    }
  ]
}
```
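If you prefer code over the Atlas UI, recent MongoDB Node.js drivers can also create the search index programmatically. This is a hedged sketch, assuming your driver version and cluster tier support `createSearchIndex` with the `vectorSearch` type; the tutorial itself only walks through the UI flow:

```typescript
import { MongoClient } from "mongodb";
import "dotenv/config";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI as string);

async function createVectorIndex() {
  try {
    await client.connect();
    const collection = client.db("hr_database").collection("employees");

    // Assumption: the connected driver/cluster supports programmatic vector search indexes.
    await collection.createSearchIndex({
      name: "vector_index",
      type: "vectorSearch",
      definition: {
        fields: [
          {
            numDimensions: 1536, // matches the default OpenAI embedding size
            path: "embedding",
            similarity: "cosine",
            type: "vector",
          },
        ],
      },
    });

    console.log("Vector index creation requested; it can take a minute to become queryable.");
  } finally {
    await client.close();
  }
}

createVectorIndex().catch(console.error);
```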
Now that we have our database set up, let's create our AI agent using LangGraph.js. We'll define the agent structure, implement tools for employee lookup, and set up the conversation flow.
Let's create a new file to define the agent called `agent.ts`. Here are the key components:

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";
import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { StateGraph } from "@langchain/langgraph";
import { Annotation } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { MongoDBSaver } from "@langchain/langgraph-checkpoint-mongodb";
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { MongoClient } from "mongodb";
import { z } from "zod";
import "dotenv/config";
```
This is the full list of imports for the agent. It's a mix of LangChain, LangGraph, Zod, and MongoDB libraries.
To use this code within our application, we'll export a function from this file. Inside it, we start by defining the MongoDB database and collection:
```typescript
export async function callAgent(client: MongoClient, query: string, thread_id: string) {
  // Define the MongoDB database and collection
  const dbName = "hr_database";
  const db = client.db(dbName);
  const collection = db.collection("employees");

  // ... (We'll add the rest of the code here)
}
```
Next, we'll use LangGraph's `StateGraph` and `Annotation` to define our agent's state. This will help us manage the conversation state and keep track of the conversation history.

```typescript
const GraphState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
  }),
});
```
The `GraphState` keeps track of the conversation messages.

We'll implement an employee lookup tool that uses MongoDB Atlas Vector Search:
```typescript
const employeeLookupTool = tool(
  async ({ query, n = 10 }) => {
    console.log("Employee lookup tool called");

    const dbConfig = {
      collection: collection,
      indexName: "vector_index",
      textKey: "embedding_text",
      embeddingKey: "embedding",
    };

    const vectorStore = new MongoDBAtlasVectorSearch(
      new OpenAIEmbeddings(),
      dbConfig
    );

    const result = await vectorStore.similaritySearchWithScore(query, n);
    return JSON.stringify(result);
  },
  {
    name: "employee_lookup",
    description: "Gathers employee details from the HR database",
    schema: z.object({
      query: z.string().describe("The search query"),
      n: z.number().optional().default(10).describe("Number of results to return"),
    }),
  }
);
```
This tool uses MongoDB Atlas Vector Search to find relevant employee information based on the query. It returns a list of employees with their details.
The tool leverages vector embeddings to perform semantic search. This approach enables the agent to understand the intent behind the query and retrieve relevant information accordingly.
The `n` parameter allows you to customize the number of results returned, with a default of 10. This flexibility enables users to retrieve more or fewer results based on their specific needs.

Now, we'll define our tools and create a `toolNode` to manage the tools and their execution. In this example, we are only using a single tool, but you can add more tools as needed.

```typescript
const tools = [employeeLookupTool];

// We can extract the state typing via `GraphState.State`
const toolNode = new ToolNode<typeof GraphState.State>(tools);
```
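Before wiring the tool into the graph, you can sanity-check it in isolation. The snippet below is a hypothetical debugging aid, not part of the tutorial; it would have to live inside `callAgent`, since the tool closes over the `collection` defined there, and it assumes the database has already been seeded:

```typescript
// Hypothetical debug call (assumption): tools created with tool() are runnables,
// so they can be invoked directly with arguments that match their Zod schema.
const sampleResults = await employeeLookupTool.invoke({
  query: "engineers with Python experience",
  n: 3,
});

console.log(JSON.parse(sampleResults));
```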
We'll use the `ChatAnthropic` model from LangChain.js for our chat model and bind it with our tools. Again, you can change this out to any other LLM model you'd like to use.
```typescript
const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0,
}).bindTools(tools);
```
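As mentioned, the chat model is interchangeable. Here's a minimal sketch of swapping in an OpenAI chat model instead of Anthropic (the model name is an example, not a recommendation from the tutorial):

```typescript
// Hypothetical swap (assumption): any LangChain chat model that supports tool calling
// can be bound to the same tools array.
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  modelName: "gpt-4o",
  temperature: 0,
}).bindTools(tools);
```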
Some prompt engineering goes into this next function, which is the main entry point for the agent.
```typescript
async function callModel(state: typeof GraphState.State) {
  const prompt = ChatPromptTemplate.fromMessages([
    [
      "system",
      `You are a helpful AI assistant, collaborating with other assistants. Use the provided tools to progress towards answering the question. If you are unable to fully answer, that's OK, another assistant with different tools will help where you left off. Execute what you can to make progress. If you or any of the other assistants have the final answer or deliverable, prefix your response with FINAL ANSWER so the team knows to stop. You have access to the following tools: {tool_names}.\n{system_message}\nCurrent time: {time}.`,
    ],
    new MessagesPlaceholder("messages"),
  ]);

  const formattedPrompt = await prompt.formatMessages({
    system_message: "You are a helpful HR Chatbot Agent.",
    time: new Date().toISOString(),
    tool_names: tools.map((tool) => tool.name).join(", "),
    messages: state.messages,
  });

  const result = await model.invoke(formattedPrompt);

  return { messages: [result] };
}
```
The `callModel` function formats the prompt from the current conversation state, invokes the model, and returns the result as an array of messages, which is what `GraphState` expects. `GraphState` then appends the new messages to the state. This is a simple example, but it can be extended to handle more complex conversations.

Next, we'll define a function that decides whether the agent should call a tool or stop and reply to the user.
```typescript
function shouldContinue(state: typeof GraphState.State) {
  const messages = state.messages;
  const lastMessage = messages[messages.length - 1] as AIMessage;

  // If the LLM makes a tool call, then we route to the "tools" node
  if (lastMessage.tool_calls?.length) {
    return "tools";
  }
  // Otherwise, we stop (reply to the user)
  return "__end__";
}
```
The `shouldContinue` function grabs the last message from the state and checks whether it contains a tool call. If it does, it returns "tools", which routes execution to the `tools` node. If it doesn't, it returns "__end__", which ends the run and replies to the user.

We'll use LangGraph to define our conversation flow:
```typescript
const workflow = new StateGraph(GraphState)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", shouldContinue)
  .addEdge("tools", "agent");
```
This sets up a simple back-and-forth between the agent and its tools. Let's break down the workflow:
- The conversation starts with the "agent" node.
- The agent processes the user's input and decides whether to use a tool or end the conversation.
- If a tool is needed, control is passed to the "tools" node, where the selected tool is executed.
- The result from the tool is sent back to the "agent" node.
- The agent interprets the tool's output and formulates a response or decides on the next action.
- This cycle continues until the agent determines that no further action is needed (`shouldContinue` returns "__end__").
This workflow allows for flexible and dynamic conversations, where the agent can use multiple tools in sequence, if necessary, to fulfill the user's request. The `StateGraph` structure ensures that the conversation maintains context and can handle complex, multi-step interactions efficiently.

We'll use the `MongoDBSaver` checkpoint saver from LangGraph to add memory to our agent.

```typescript
const checkpointer = new MongoDBSaver({ client, dbName });

const app = workflow.compile({ checkpointer });
```
This will save the state of the conversation to a MongoDB database. We'll also compile the graph and include the MongoDB `checkpointer` to create an application that can be run.

Finally, we'll run the agent:
```typescript
const finalState = await app.invoke(
  {
    messages: [new HumanMessage(query)],
  },
  { recursionLimit: 15, configurable: { thread_id: thread_id } }
);

console.log(finalState.messages[finalState.messages.length - 1].content);

return finalState.messages[finalState.messages.length - 1].content;
```
This will run the agent and return the final response. Here's a breakdown of what's happening:
- We invoke the compiled workflow (`app.invoke()`) with the initial state containing the user's query.
- The `recursionLimit` is set to 15 to prevent infinite loops.
- We pass a `thread_id` in the configurable options, which allows for conversation persistence across multiple interactions.
- The workflow runs through its nodes (agent and tools) until a final state is reached.
- We extract the last message from the final state, which contains the agent's final response.
- This final response is both logged to the console and returned from the function.
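If you want intermediate progress instead of waiting for the final state, compiled LangGraph graphs can also be streamed. Here's a hedged sketch, assuming a recent `@langchain/langgraph` release that supports the `streamMode` option; the tutorial itself sticks with `invoke()`:

```typescript
// Hypothetical streaming variant (assumption): emit each node's output as it completes.
const stream = await app.stream(
  { messages: [new HumanMessage(query)] },
  { recursionLimit: 15, configurable: { thread_id: thread_id }, streamMode: "updates" }
);

for await (const chunk of stream) {
  // Each chunk is keyed by the node that just ran ("agent" or "tools").
  console.log("Node update:", Object.keys(chunk));
}
```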
Now, let's set up an Express.js server to expose our AI agent via API endpoints. We'll do this in the `index.ts` file:

```typescript
import 'dotenv/config';
import express, { Express, Request, Response } from "express";
import { MongoClient } from "mongodb";
import { callAgent } from './agent';

const app: Express = express();
app.use(express.json());

// Initialize MongoDB client
const client = new MongoClient(process.env.MONGODB_ATLAS_URI as string);

async function startServer() {
  try {
    await client.connect();
    await client.db("admin").command({ ping: 1 });
    console.log("Pinged your deployment. You successfully connected to MongoDB!");

    app.get('/', (req: Request, res: Response) => {
      res.send('LangGraph Agent Server');
    });

    app.post('/chat', async (req: Request, res: Response) => {
      const initialMessage = req.body.message;
      const threadId = Date.now().toString();
      try {
        const response = await callAgent(client, initialMessage, threadId);
        res.json({ threadId, response });
      } catch (error) {
        console.error('Error starting conversation:', error);
        res.status(500).json({ error: 'Internal server error' });
      }
    });

    app.post('/chat/:threadId', async (req: Request, res: Response) => {
      const { threadId } = req.params;
      const { message } = req.body;
      try {
        const response = await callAgent(client, message, threadId);
        res.json({ response });
      } catch (error) {
        console.error('Error in chat:', error);
        res.status(500).json({ error: 'Internal server error' });
      }
    });

    const PORT = process.env.PORT || 3000;
    app.listen(PORT, () => {
      console.log(`Server running on port ${PORT}`);
    });
  } catch (error) {
    console.error('Error connecting to MongoDB:', error);
    process.exit(1);
  }
}

startServer();
```
This sets up two main endpoints:
- `/chat` for starting a new conversation
- `/chat/:threadId` for continuing an existing conversation
The `/chat` endpoint handles the initial message and starts a new conversation. It generates a unique thread ID for each conversation and passes it to the `callAgent` function.

The `/chat/:threadId` endpoint handles subsequent messages in an existing conversation. It passes the thread ID to the `callAgent` function to continue the conversation.

Now that we have our server set up, we can test our AI agent:
1. Start the server.

```bash
npx ts-node index.ts
```

2. Use a tool like cURL or Postman to send requests to the `/chat` endpoint with the initial message.

```bash
curl -X POST -H "Content-Type: application/json" -d '{"message": "Build a team to make an iOS app, and tell me the talent gaps."}' http://localhost:3000/chat
```
This will start a new conversation and return the agent's response. Here's an example response:
```
{
  "threadId": "1695905400000",
  "response": "Now that we have gathered information about potential team members, let's analyze the results and build a team for the iOS app development project. I'll also identify any talent gaps we may have.

FINAL ANSWER:

Based on the employee information we've gathered, here's the proposed team for building an iOS app, along with identified talent gaps:

1. iOS Developer:
   - We don't have a dedicated iOS developer in our search results. This is a significant talent gap that needs to be addressed.

2. UI/UX Designer:
   - Linda Green (Employee ID: E003)
     Skills: Adobe Photoshop, Illustrator, UX/UI Design
     Strengths: Outstanding creativity and design skills, always meets deadlines with high-quality work
     Note: Part-time employee, works remotely

3. Frontend Developer:
   - Isabella Lopez (Employee ID: E009)
     Skills: JavaScript, HTML, CSS
     Strengths: Great attention to detail and coding skills, enthusiastic about new technologies and frameworks
     Note: While not an iOS developer, her frontend skills could be valuable for the app's web components or for learning iOS development

4. Project Manager:
   - James Brown (Employee ID: E004)
     Skills: Agile, Scrum, Leadership
     Strengths: Excellent leadership and team management skills, highly experienced in project management

5. Backend Developer:
   - Alice Johnson (Employee ID: E001)
     Skills: Java, Python, SQL
     Strengths: Strong team player with excellent problem-solving skills, meets project deadlines

Talent Gaps and Recommendations:

1. iOS Developer: This is the most critical gap. We need to either:
   a) Hire a dedicated iOS developer with experience in Swift and Xcode.
   b) Provide intensive iOS development training to Isabella Lopez or Alice Johnson, leveraging their existing programming skills.

2. Mobile-specific UI/UX Design: While Linda Green has UX/UI design skills, we should ensure she has experience with mobile app design, specifically for iOS. If not, consider providing additional training or hiring a specialist in iOS app design.

3. iOS-specific Project Management: James Brown should familiarize himself with iOS app development lifecycle and any specific methodologies used in mobile app projects.

4. Quality Assurance: We don't have a dedicated QA specialist for mobile apps. Consider either hiring one or training an existing team member in iOS app testing.

5. DevOps for iOS: Ensure that one of the team members (possibly Alice Johnson) can handle the iOS app deployment process, including working with the App Store and managing continuous integration/continuous deployment (CI/CD) for iOS apps.

6. Full-time UI/UX Designer: Since Linda Green is part-time, consider either increasing her hours or hiring an additional full-time UI/UX designer to ensure consistent availability throughout the project.

To address these gaps, the company should consider a combination of hiring new talent, providing specialized training to existing employees, and possibly engaging freelancers or consultants for specific iOS development needs. This will ensure a well-rounded team capable of successfully developing and launching an iOS app."
}
```
3. The server should respond with the agent's message. You can continue the conversation by sending additional messages to the `/chat/:threadId` endpoint.

```bash
curl -X POST -H "Content-Type: application/json" -d '{"message": "What are the talent gaps?"}' http://localhost:3000/chat/<threadId>
```
Congratulations! You've successfully built an AI agent using LangGraph.js and MongoDB. This agent can hold a conversation, remember what's been discussed, look up information about employees, and give smart responses to HR-related questions.
The combination of LangGraph.js for conversational flow control and MongoDB for storing and retrieving memory gives you a solid foundation for building powerful AI applications. This example can be layered with additional tools, more sophisticated conversation flows, or connectors to other data sources such as images, audio, and video.
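For instance, a second tool could sit alongside `employee_lookup`. The sketch below is hypothetical (the tool name and aggregation are assumptions, not part of this tutorial) and relies on the seeded documents keeping the schema fields at the top level, as we saw in the Atlas dashboard; it would be defined inside `callAgent` next to the lookup tool and added to the `tools` array:

```typescript
// Hypothetical extra tool (assumption): counts employees in a department with a
// plain aggregation pipeline instead of a vector search.
const departmentHeadcountTool = tool(
  async ({ department }) => {
    const results = await collection
      .aggregate([
        { $match: { "job_details.department": department } },
        { $count: "headcount" },
      ])
      .toArray();

    return JSON.stringify(results[0] ?? { headcount: 0 });
  },
  {
    name: "department_headcount",
    description: "Returns the number of employees in a given department",
    schema: z.object({
      department: z.string().describe("The department name to count"),
    }),
  }
);

const tools = [employeeLookupTool, departmentHeadcountTool];
```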
LangGraph is a powerful tool that can enhance your AI agent development. If you're already familiar with Node.js and MongoDB, you have a solid foundation to create sophisticated AI agents capable of handling diverse tasks during interactions. By leveraging these technologies together, you can build intelligent systems that provide valuable recommendations and insights.