Webinar

Continuously Updating Vector Embeddings for Gen AI Apps

Register Now

July 18, 2024

12 P.M. ET

Retrieval-augmented generation (RAG) lets us prompt generative AI large language models with enterprise data so their answers are grounded in knowledge of the business’s operations and customers. But that data changes constantly, and keeping it fresh is complex. This webinar shows how to continuously update the data used for RAG so gen AI apps always work with the latest information and produce the most accurate, current answers.

The solution uses MongoDB Atlas Stream Processing and Atlas Vector Search to continuously update vector embeddings with data received from an Apache Kafka pipeline. Join our Senior Consulting Engineer, David Sanchez, as he walks developers through updating, storing, and searching embeddings from a single interface, and shows why the MongoDB document data model is so well suited to stream processing.
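
To give a feel for the kind of pipeline involved, here is a minimal sketch of an Atlas Stream Processing pipeline that reads events from a Kafka topic and merges them into an Atlas collection. The connection names, topic, database, and collection below are placeholders, not the configuration David will present, and embedding generation is handled separately, as covered in the session.

```python
import json

# Illustrative Atlas Stream Processing pipeline with placeholder names:
# read change events from a Kafka topic and merge them into the Atlas
# collection whose documents carry the vector embeddings.
pipeline = [
    # Read documents from a Kafka topic registered as a connection
    # on the stream processing instance.
    {"$source": {"connectionName": "kafkaConnection", "topic": "product_updates"}},
    # Write each event into the target Atlas collection.
    {
        "$merge": {
            "into": {
                "connectionName": "atlasCluster",
                "db": "rag_demo",
                "coll": "documents",
            }
        }
    },
]

# The pipeline is plain JSON, so it can be registered with the stream
# processing instance (for example, from mongosh).
print(json.dumps(pipeline, indent=2))
```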

The webinar details:

  • How to set up and configure the environment.
  • How to create vector search indices in Atlas.
  • How to create a private and scalable embedding generation system using purpose-built LLMs.
  • How to interactively run semantic queries (see the sketch after this list).
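
For orientation ahead of the session, here is a minimal sketch of what creating a vector search index and running a semantic query can look like with PyMongo. The connection string, database and collection names, index name, embedding field, vector dimension, and the embed_text helper are all placeholders for illustration, not the exact setup from the webinar.

```python
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel


def embed_text(text: str) -> list[float]:
    # Placeholder: in practice this calls whatever embedding model the
    # application uses. Returns a dummy vector so the sketch stays runnable.
    return [0.0] * 1536


# Placeholder connection string, database, and collection names.
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
collection = client["rag_demo"]["documents"]

# Create an Atlas Vector Search index over the field that stores embeddings.
# The dimension (1536 here) must match the embedding model in use.
index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",
                "numDimensions": 1536,
                "similarity": "cosine",
            }
        ]
    },
    name="vector_index",
    type="vectorSearch",
)
collection.create_search_index(model=index_model)

# Run a semantic query: embed the question, then search with $vectorSearch.
query_vector = embed_text("Which orders shipped late last week?")
results = collection.aggregate(
    [
        {
            "$vectorSearch": {
                "index": "vector_index",
                "path": "embedding",
                "queryVector": query_vector,
                "numCandidates": 100,
                "limit": 5,
            }
        },
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
)
for doc in results:
    print(doc)
```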

The webinar also summarizes the key takeaways from the exercise to help attendees apply them to their own applications. We look forward to seeing you there!

Register Now
