Supercharge Your AI Applications: AWS Bedrock, MongoDB, and TypeScript

Pavel Duchovny • 9 min read • Published Oct 10, 2024 • Updated Oct 10, 2024
AI • AWS • Node.js • Atlas • Vector Search • TypeScript
Imagine harnessing the power of large language models, seamlessly integrated with your data, all accessible through a sleek TypeScript application. That's exactly what we're going to build today! This guide will walk you through creating a cutting-edge AI application that leverages AWS Bedrock's powerful agents, MongoDB Atlas's flexible data storage, and the robustness of TypeScript.
Whether you're building a next-generation customer service bot, an AI-powered research assistant, or a smart content creation tool, this stack will provide the foundation you need. Let's dive in and create something extraordinary!

Prerequisites

  • AWS account with access to Bedrock
  • MongoDB Atlas account
  • Node.js and npm installed on your local machine
  • Basic familiarity with TypeScript and Express.js

Part 1: AWS Bedrock and MongoDB Atlas setup

Before we dive into the TypeScript application, let's set up our AI infrastructure. This section will guide you through configuring AWS Bedrock and MongoDB Atlas to create a powerful, data-aware AI agent.

1. MongoDB Atlas setup

  1. Sign up for MongoDB Atlas if you haven't already.
  2. Create a new cluster in your preferred region.
  3. In your cluster, create a new database called bedrock with a collection named agenda.
  4. Create a vector search index on the agenda collection with the following configuration (name: vector_index):
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1024,
      "similarity": "cosine"
    },
    {
      "type": "filter",
      "path": "metadata"
    },
    {
      "type": "filter",
      "path": "text"
    }
  ]
}
Please take note of the Atlas username and password, as well as the base cluster hostname (e.g., cluster0.abcd.mongodb.net); you will need them when configuring the Knowledge Base in Bedrock.
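If you prefer to create the index above from code instead of the Atlas UI, here is a minimal sketch using the MongoDB Node.js driver. It assumes a recent driver version that exposes createSearchIndex (roughly v6.6 or later) and a placeholder connection string you would replace with your own:
import { MongoClient } from 'mongodb';

// Minimal sketch: create the vector_index search index programmatically.
// Assumes a recent MongoDB Node.js driver (v6.6+) and your own Atlas connection string.
async function createVectorIndex(): Promise<void> {
  const client = new MongoClient('mongodb+srv://<user>:<password>@<cluster-hostname>/');
  try {
    const collection = client.db('bedrock').collection('agenda');
    await collection.createSearchIndex({
      name: 'vector_index',
      type: 'vectorSearch',
      definition: {
        fields: [
          { type: 'vector', path: 'embedding', numDimensions: 1024, similarity: 'cosine' },
          { type: 'filter', path: 'metadata' },
          { type: 'filter', path: 'text' },
        ],
      },
    });
    console.log('vector_index creation requested');
  } finally {
    await client.close();
  }
}

createVectorIndex().catch(console.error);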

2. AWS Bedrock configuration

We will use an AWS Summit agenda data set to build an agent that provides information about the specific agenda items in its knowledge base.
  1. Log in to your AWS Management Console.
  2. Navigate to the AWS Bedrock console.
  3. Enable the following models:
    • Amazon Titan Text Embedding model (amazon.titan-embed-text-v2:0)
    • Claude 3.5 Sonnet model
  4. Upload the AWS Summit agenda data to an S3 bucket (a programmatic upload sketch follows this list).
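If you would rather upload the agenda data from code, here is a minimal sketch using the AWS SDK for JavaScript v3 S3 client. The bucket name and local file path are placeholders, not values from this tutorial:
import { readFile } from 'node:fs/promises';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

// Minimal sketch: upload a local agenda file to S3 so Bedrock can ingest it.
// Bucket name and file path are placeholders -- substitute your own.
async function uploadAgenda(): Promise<void> {
  const s3 = new S3Client({ region: 'us-east-1' });
  const body = await readFile('./aws-summit-agenda.json'); // hypothetical local file
  await s3.send(new PutObjectCommand({
    Bucket: 'my-bedrock-agenda-bucket', // placeholder bucket name
    Key: 'agenda/aws-summit-agenda.json',
    Body: body,
  }));
  console.log('Agenda uploaded');
}

uploadAgenda().catch(console.error);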

3. AWS Secrets Manager setup

  1. Create a new secret with your MongoDB Atlas credentials:
    • Secret type: "Other type of secret"
    • Key/value pairs:
      • Key: username, Value: <Your_Atlas_Username>
      • Key: password, Value: <Your_Atlas_Password>
  2. Note down the ARN of the created secret.
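If you would rather script this step, here is a minimal sketch using the AWS SDK for JavaScript v3 Secrets Manager client. The secret name is a placeholder, and the region should match where you run Bedrock:
import { SecretsManagerClient, CreateSecretCommand } from '@aws-sdk/client-secrets-manager';

// Minimal sketch: create the Atlas credentials secret from code instead of the console.
// The secret name is a placeholder; fill in your own Atlas username and password.
async function createAtlasSecret(): Promise<void> {
  const client = new SecretsManagerClient({ region: 'us-east-1' });
  const result = await client.send(new CreateSecretCommand({
    Name: 'bedrock-atlas-credentials', // placeholder secret name
    SecretString: JSON.stringify({
      username: '<Your_Atlas_Username>',
      password: '<Your_Atlas_Password>',
    }),
  }));
  // Note the ARN -- the Bedrock Knowledge Base wizard asks for it later.
  console.log('Secret ARN:', result.ARN);
}

createAtlasSecret().catch(console.error);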

4. Creating the Knowledge Base

  1. In the Bedrock console, go to the Knowledge Base section.
  2. Click "Create Knowledge Base" and follow the wizard:
    • Name: Choose a meaningful name
    • Data source: Select your S3 bucket with the uploaded agenda files
    • Embedding Model: Titan Text Embeddings v2
    • Vector Database: MongoDB Atlas
    • Fill in the MongoDB connection details:
      • Hostname: Your Atlas cluster hostname (e.g., cluster0.abcd.mongodb.net)
      • Database name: bedrock
      • Collection name: agenda
      • Credentials secret ARN: The ARN of the secret created in Step 3
      • Vector search index name: vector_index
      • Vector embedding field path: embedding
      • Text field path: text
      • Metadata field path: metadata
  3. Review and create the Knowledge Base.
  4. Once the Knowledge Base is ready, go to the "Data source" section and click "Sync" to load the data into Atlas.
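The console "Sync" button starts an ingestion job; the same job can be triggered from code. Here is a minimal sketch using the @aws-sdk/client-bedrock-agent package, with placeholder IDs you would copy from the Bedrock console:
import { BedrockAgentClient, StartIngestionJobCommand } from '@aws-sdk/client-bedrock-agent';

// Minimal sketch: trigger the Knowledge Base "Sync" (data ingestion into Atlas) from code.
// knowledgeBaseId and dataSourceId are placeholders taken from the Bedrock console.
async function syncKnowledgeBase(): Promise<void> {
  const client = new BedrockAgentClient({ region: 'us-east-1' });
  const response = await client.send(new StartIngestionJobCommand({
    knowledgeBaseId: '<YOUR_KNOWLEDGE_BASE_ID>',
    dataSourceId: '<YOUR_DATA_SOURCE_ID>',
  }));
  console.log('Ingestion job status:', response.ingestionJob?.status);
}

syncKnowledgeBase().catch(console.error);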

5. Creating the Bedrock agent

  1. Navigate to the Agents section in the Bedrock console.
  2. Click "Create Agent" and provide the following details:
    • Name: agenda_assistant (or your preferred name)
    • Model: Anthropic - Claude 3.5 Sonnet
    • Instructions: "You are a friendly AI chatbot that helps users find and build agenda items for AWS Summit Tel Aviv. Elaborate as much as possible on the response."
    • Knowledge bases: Select the Knowledge Base you created in Step 4.
    • Create a new alias for the agent.
  3. Review and create the agent.
  4. Note down the Agent ID and Agent Alias ID for use in our TypeScript application.

6. Adding a YouTube search tool action group

To enhance your Bedrock agent's capabilities, let's add a YouTube search tool as an action group. This will allow the agent to find and present relevant AWS-related videos during conversations.
  1. In the AWS Bedrock console, navigate to your agent's "Action groups" section.
  2. Click "Add action group" or "Create action group."
  3. Fill in the following details:
    • Action group name: YouTubeSearchTool
    • Description: Searches YouTube for AWS-related videos, including tutorials, service overviews, and best practices. Use this to find educational content and stay updated on AWS technologies.
  4. For "Action group type," select "Define with function details."
  5. In the "Action group invocation" section:
    • Choose "Create a new Lambda function."
    • Name your Lambda function (e.g., YouTubeSearchTool).
  6. Configure the "Action group function":
    • Name: YouTubeSearchGroup
    • Description: This tool searches YouTube for AWS-related educational content. It can find videos on specific AWS services, tutorials, best practices, and general AWS topics. Use this to gather relevant video resources for AWS learning and staying updated on AWS technologies.
    • Leave "Enable confirmation of action group function" disabled.
  7. Add two parameters:
    • maxResults (string, optional): The maximum number of video results to return (default is 5). Use this to control the number of video suggestions.
    • query (string, required): The search query for finding AWS-related YouTube videos; it can be specific (e.g., "AWS Lambda tutorial") or general (e.g., "AWS cloud computing basics").
  8. Save the action group configuration.
  9. Set up the Lambda function:
    • Go to the AWS Lambda console and find the newly created function.
    • Prepare your TypeScript code for upload to the Lambda function:
mkdir youtube-search-lambda
cd youtube-search-lambda
npm init -y
npm install axios
npm install --save-dev typescript @types/node @types/aws-lambda
Create a tsconfig.json file:
{
  "compilerOptions": {
    "target": "es2018",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "outDir": "./dist",
    "rootDir": "./src"
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}
  • In src/index.ts, create the handler to locate YouTube videos:
import axios from 'axios';

interface Event {
  agent: string;
  actionGroup: string;
  function: string;
  parameters: { name: string; value: string }[];
  messageVersion: string;
}

interface Context {
  // Add any necessary context properties
}

interface Video {
  title: string;
  description: string;
  thumbnail: string;
  video_id: string;
  url: string;
}

interface SearchResult {
  items: {
    id: { videoId: string };
    snippet: {
      title: string;
      description: string;
      thumbnails: {
        default: { url: string };
      };
    };
  }[];
}

export const handler = async (event: Event, context: Context): Promise<any> => {
  const { agent, actionGroup, function: functionName, parameters } = event;

  // Extract query and maxResults from parameters
  const query = parameters.find(param => param.name === 'query')?.value || 'AWS services';
  const maxResults = parseInt(parameters.find(param => param.name === 'maxResults')?.value || '5', 10);

  // YouTube Data API v3 search endpoint
  const apiKey = process.env.YOUTUBE_API_KEY;
  const baseUrl = 'https://www.googleapis.com/youtube/v3/search';

  // Construct the URL with query parameters
  const params = new URLSearchParams({
    part: 'snippet',
    q: query,
    type: 'video',
    maxResults: maxResults.toString(),
    key: apiKey!
  });

  try {
    // Make the API request
    const response = await axios.get<SearchResult>(`${baseUrl}?${params}`);
    const searchResults = response.data;

    // Process the search results
    const videos: Video[] = searchResults.items.map(item => ({
      title: item.snippet.title,
      description: item.snippet.description,
      thumbnail: item.snippet.thumbnails.default.url,
      video_id: item.id.videoId,
      url: `https://www.youtube.com/watch?v=${item.id.videoId}`
    }));

    // Prepare the response
    let responseText = `Here are ${videos.length} YouTube videos related to '${query}':\n\n`;
    videos.forEach((video, index) => {
      responseText += `${index + 1}. ${video.title}\n ${video.url}\n\n`;
    });

    const actionResponse = {
      actionGroup,
      function: functionName,
      functionResponse: {
        responseBody: {
          TEXT: {
            body: responseText
          }
        }
      }
    };

    const functionResponse = { response: actionResponse, messageVersion: event.messageVersion };
    console.log(`Response: ${JSON.stringify(functionResponse)}`);
    return functionResponse;
  } catch (error) {
    console.error('Error:', error);
    throw error;
  }
};
Modify your package.json to include build and start scripts:
1"scripts": {
2 "build": "tsc",
3 "start": "node dist/index.js"
4}
Create the deployment package:
npm run build
cd dist
cp ../package.json .   # copy package.json so the production dependencies install inside dist
npm install --production
zip -r ../function.zip .
In the Lambda console, change the function's runtime settings to Node.js and make sure the handler is set to index.handler so it matches the build output. Save, then upload the function.zip file via the "Upload from" button on the Code tab.
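As an alternative to uploading through the console, you can push the package with the AWS SDK. The sketch below is illustrative; the function name is an assumption, so replace it with the name of the Lambda function created through the Bedrock wizard:
import { readFile } from 'node:fs/promises';
import { LambdaClient, UpdateFunctionCodeCommand } from '@aws-sdk/client-lambda';

// Minimal sketch: deploy function.zip from code instead of the console upload.
// The function name is a placeholder -- use the Lambda function you created earlier.
async function deployLambda(): Promise<void> {
  const client = new LambdaClient({ region: 'us-east-1' });
  const zip = await readFile('./function.zip');
  const response = await client.send(new UpdateFunctionCodeCommand({
    FunctionName: 'YouTubeSearchTool', // placeholder function name
    ZipFile: zip,
  }));
  console.log('Deployed version:', response.Version);
}

deployLambda().catch(console.error);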
  10. Add an environment variable to the Lambda function:
    • Key: YOUTUBE_API_KEY
    • Value: Your YouTube Data API key
  11. Ensure the Lambda function has permission to make outbound HTTPS requests.
  12. Save the Lambda function.
  13. Return to the Bedrock console and verify that your agent is configured to use this new action group.
With this setup, your Bedrock agent can now search YouTube for AWS-related videos and present them in conversations. This feature enhances the agent's ability to provide educational resources and keep users updated on AWS technologies.

Part 2: Building the TypeScript application

Step 1: Project setup

You can find the full codebase in the GitHub repository.
  1. Create a new directory for your project and navigate into it:
mkdir bedrock-ts-agent && cd bedrock-ts-agent
  2. Initialize a new Node.js project:
npm init -y
  3. Install the necessary dependencies:
npm install express dotenv @aws-sdk/client-bedrock-agent-runtime
npm install -D typescript @types/express @types/node ts-node
  4. Initialize the TypeScript configuration:
npx tsc --init
  5. Update tsconfig.json with the following configuration:
{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}

Step 2: Create the application

  1. Create a src directory and add an app.ts file:
mkdir src && touch src/app.ts
  2. Open src/app.ts and add the following code:
import express from 'express';
import dotenv from 'dotenv';
import { BedrockAgentRuntimeClient, InvokeAgentCommand } from '@aws-sdk/client-bedrock-agent-runtime';

dotenv.config();

const app = express();
app.use(express.json());

const bedrockAgentRuntime = new BedrockAgentRuntimeClient({
  region: process.env.AWS_REGION || 'us-east-1',
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID || '',
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY || '',
  },
});

const AGENT_ID = process.env.AGENT_ID || '';
const AGENT_ALIAS_ID = process.env.AGENT_ALIAS_ID || '';

async function invokeBedrockAgent(prompt: string, sessionId: string): Promise<{ sessionId: string; completion: string }> {
  const command = new InvokeAgentCommand({
    agentId: AGENT_ID,
    agentAliasId: AGENT_ALIAS_ID,
    sessionId,
    inputText: prompt,
  });

  try {
    let completion = "";
    const response = await bedrockAgentRuntime.send(command);

    if (response.completion === undefined) {
      throw new Error("Completion is undefined");
    }

    for await (let chunkEvent of response.completion) {
      const chunk = chunkEvent.chunk;
      if (chunk && chunk.bytes) {
        const decodedResponse = new TextDecoder("utf-8").decode(chunk.bytes);
        completion += decodedResponse;
      }
    }

    return { sessionId, completion };
  } catch (err) {
    console.error(err);
    throw err;
  }
}

app.post('/chat', async (req, res) => {
  try {
    const { message, sessionId } = req.body;

    if (!message || !sessionId) {
      return res.status(400).json({ error: 'Message and sessionId are required.' });
    }

    const response = await invokeBedrockAgent(message, sessionId);
    res.json(response);
  } catch (error) {
    console.error('Error:', error);
    res.status(500).json({ error: 'An error occurred while processing your request.' });
  }
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
  3. Create a .env file in the root directory and add your AWS credentials and agent information:
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_access_key_id
AWS_SECRET_ACCESS_KEY=your_secret_access_key
AGENT_ID=your_agent_id
AGENT_ALIAS_ID=your_agent_alias_id

Step 3: Update package.json

Update the scripts section in your package.json:
1"scripts": {
2 "build": "tsc",
3 "start": "node dist/app.js",
4 "dev": "ts-node src/app.ts"
5}

Step 4: Run the application

  1. Build the TypeScript code:
npm run build
  2. Start the server:
npm start
Your server should now be running on http://localhost:3000.

Testing the application

You can test the application using cURL or a tool like Postman:
curl -X POST http://localhost:3000/chat \
-H "Content-Type: application/json" \
-d '{"message": "Find me AWS formation talks and videos", "sessionId": "test-session-1"}'

{"sessionId":"test-session-1","completion":"Certainly! I've found some excellent AWS CloudFormation talks and videos that can help you learn about this important AWS service. AWS CloudFormation is a powerful tool for infrastructure as code, allowing you to model and provision your AWS resources easily. Here's a summary of the videos I found:\n\n1. For beginners, there's a step-by-step tutorial on creating and deleting an AWS CloudFormation stack. This video will give you hands-on experience with the basics of CloudFormation.\n\n2. If you're looking for a comprehensive introduction, there's an AWS CloudFormation tutorial by Intellipaat that covers the fundamentals of the service.\n\n3. For those interested in more advanced topics, there's an AWS re:Invent 2020 talk about building your first AWS CloudFormation resource provider. This is great for understanding how to extend CloudFormation's capabilities.\n\n4. Simplilearn offers an AWS CloudFormation tutorial and demo, which can provide practical insights into using the service.\n\n5. Lastly, there's a step-by-step tutorial specifically focused on creating a DynamoDB table using CloudFormation. This is excellent for understanding how CloudFormation can be used to provision specific AWS services.\n\nThese videos cover a range of topics from beginner to more advanced levels, providing a good mix of theoretical knowledge and practical demonstrations. They can help you understand how to use AWS CloudFormation to define and manage your infrastructure as code, which is a crucial skill for efficient cloud resource management.\n\nTo get started, I recommend watching these videos in order, starting with the beginner tutorials and progressing to the more advanced topics. This will give you a solid foundation in AWS CloudFormation and help you build your skills progressively.\n\nHere are the direct links to the videos for your convenience:\n\n1. [Create and Delete an AWS CloudFormation Stack | Step-by-Step Tutorial for Beginners](https://www.youtube.com/watch?v=fmDG-W5TFp4)\n2. [AWS CloudFormation | Introduction to AWS CloudFormation | AWS CloudFormation Tutorial | Intellipaat](https://www.youtube.com/watch?v=uunDvWG2hnE)\n3. [AWS re:Invent 2020: Build your first AWS CloudFormation resource provider](https://www.youtube.com/watch?v=qjtsuTVgrjs)\n4. [AWS CloudFormation Tutorial | AWS CloudFormation Demo | AWS Tutorial For Beginners | Simplilearn](https://www.youtube.com/watch?v=t97jZch4lMY)\n5. [AWS Cloudformation Step by Step Tutorial - Create a DynamoDB Table!](https://www.youtube.com/watch?v=YXVCdGyHDSk)\n\nEnjoy learning about AWS CloudFormation!"}
This will send a request to your Bedrock agent and return the response.
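If you prefer to exercise the endpoint from TypeScript rather than cURL, a minimal client sketch using the built-in fetch API (Node.js 18+) could look like this, assuming the server is running locally on port 3000:
// Minimal sketch: call the /chat endpoint from a TypeScript script instead of cURL.
async function askAgent(message: string, sessionId: string): Promise<void> {
  const res = await fetch('http://localhost:3000/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message, sessionId }),
  });
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  const data = await res.json() as { sessionId: string; completion: string };
  console.log(data.completion);
}

// Example prompt; any agenda- or AWS-related question works here.
askAgent('Find me AWS CloudFormation talks and videos', 'test-session-1').catch(console.error);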
A small frontend application that demos a UI for the responses is available in the repository under the aws-conference-assistant folder: Conference Assistant UI.

Conclusion

You've now set up a TypeScript application that interacts with an AWS Bedrock agent configured with MongoDB integration. This application provides a simple API endpoint that allows users to chat with the agent and retrieve information about the AWS Summit schedule.
Remember to handle errors appropriately, implement proper security measures, and consider adding more features like session management and response caching for a production-ready application.
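As a rough illustration of those last two ideas (not part of the tutorial's code), a minimal sketch of per-client session IDs and a naive in-memory response cache might look like this; a production app would use a real session store and a cache with expiry:
import { randomUUID } from 'node:crypto';

// Illustrative sketch only: generate a session ID per client and cache repeated prompts in memory.
const responseCache = new Map<string, string>();

function newSessionId(): string {
  return randomUUID();
}

async function cachedInvoke(
  prompt: string,
  sessionId: string,
  invoke: (p: string, s: string) => Promise<{ completion: string }>
): Promise<string> {
  const key = `${sessionId}:${prompt}`;
  const cached = responseCache.get(key);
  if (cached) return cached;
  const { completion } = await invoke(prompt, sessionId);
  responseCache.set(key, completion);
  return completion;
}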
Start building AI apps with MongoDB Atlas today. If you have any questions or need assistance, please check out the MongoDB community forums.