LA MUG: Ace the Gen AI RAG Race - Building Q&A on Private Company Data Without Getting Stuck in the Mud

About this Event

The January event will dive into real-world, practical Generative AI and RAG systems. This talk and demo will show you how to create and privately host an LLM to power a ChatGPT-style Q&A system, using Dataworkz together with Atlas Vector Search.

Agenda

Time      Topic
6:00 PM   Registration, Food & Networking
6:30 PM   Gen AI Talk & Demo
8:00 PM*  Discussion, Wrap-up
* We may run over, depending on interest and discussion.

Location

Event Type: In-Person
Location: 2219 Main St, Santa Monica, CA 90405
Doors open: 6:00 PM

Presentation Details

AI-powered tools such as chatbots and virtual assistants built on Large Language Models (LLMs) like ChatGPT delivered quick wins during the initial wave of Gen AI, boosting productivity by automating tasks within people's jobs. Now companies are looking to apply Generative AI to their private data to build powerful applications that automate entire functions, such as tier 1 and tier 2 customer service.

This requires a full Retrieval-Augmented Generation (RAG) system that is trusted, accurate, reliable, and scalable.

Building this is difficult for a company to achieve on its own for even a single RAG application, let alone multiple. Together, MongoDB and Dataworkz provide everything teams need to build, operate, and maintain multiple production RAG systems with their existing staff.

You will learn how to assemble a Generative AI stack to power an LLM for the purpose of building a Q&A system from data stored in your own database (e.g., MongoDB Atlas). In the demo portion of the session, we will build our own Q&A system on top of MongoDB in less than 30 minutes!

What We Will Cover

  • Pre-process your data to get it into a shape LLMs can use.
  • Create chunks and embeddings.
  • Store chunks in MongoDB Atlas for fast retrieval and search.
  • Store embeddings in a vector database, such as MongoDB Atlas Vector Search.
  • Host an LLM privately.
  • Use commercially licensed open-source models such as Llama 2 or Dolly v2 in a private VPC.

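The chunk, embed, store, and search steps above can be sketched in a few lines of Python. This is a minimal illustration, not the exact setup used in the demo: the character-based chunker, the index and field names, and the embedding step (elided here) are all assumptions, and the aggregation pipeline targets Atlas's $vectorSearch stage.

```python
# Minimal sketch of the chunk -> embed -> store -> search flow. In practice
# you would generate real embeddings with a model (e.g., an open-source LLM
# stack) and run the pipeline with pymongo against an Atlas cluster that has
# a vector index; names here are illustrative placeholders.

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks ready for embedding."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def build_vector_search_pipeline(query_vector: list[float],
                                 index_name: str = "vector_index",
                                 limit: int = 5) -> list[dict]:
    """Build an Atlas $vectorSearch aggregation pipeline for a query vector."""
    return [
        {
            "$vectorSearch": {
                "index": index_name,          # name of the Atlas vector index
                "path": "embedding",          # document field holding the vector
                "queryVector": query_vector,  # embedding of the user's question
                "numCandidates": limit * 20,  # oversample, then keep `limit`
                "limit": limit,
            }
        },
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
```

The pipeline returned here would be passed to `collection.aggregate(...)`; the retrieved chunks then become the context handed to the privately hosted LLM.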
Takeaways From This Session

  • How to assemble and operationalize a Q&A system using open-source LLMs
  • How data quality impacts the LLM response
  • How to rapidly experiment with different LLM models for your specific use case
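The third takeaway, rapid experimentation with different models, usually comes down to keeping retrieval and prompt construction model-agnostic. A minimal sketch, with function names and prompt wording that are illustrative rather than taken from the session:

```python
# Model-agnostic RAG prompting: retrieved chunks and the question are
# assembled into one prompt string, so swapping the underlying LLM
# (Llama 2, Dolly v2, ...) changes only the generation call, not this step.

def build_rag_prompt(question: str, chunks: list[str]) -> str:
    """Assemble retrieved context chunks and a question into a single prompt."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def answer(question: str, chunks: list[str], generate) -> str:
    """Run the prompt through any generation callable (one LLM backend)."""
    return generate(build_rag_prompt(question, chunks))
```

Because `generate` is just a callable wrapping whichever model is under test, trying a different LLM for your use case is a one-line change.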

About the Speaker

Nikhil Smotra, CTO and Co-founder, Dataworkz - Nikhil is driven by the potential for innovation and excited about leveraging advanced technologies such as artificial intelligence, especially LLMs, and applying them to extract valuable insights from customer data. Nikhil's robust experience with data management at scale led him to co-found Dataworkz. His vision is to create a self-service experience that brings together data, transformation, and AI applications for users of different skill levels.
Prior to Dataworkz, Nikhil was SVP, Head of Data Engineering at iQor, a leader in BPO and product support, where he led the development and management of Big Data platforms. He helped launch the enterprise data initiative and built a high-performing global data engineering team. During his tenure at iQor, he also managed QeyMetrics, a Business Intelligence and Operational Analytics SaaS offering. Earlier, Nikhil spent several years in R&D at Lockheed Martin, where he harnessed NoSQL technology before it gained popularity, combining it with semantic web technologies to build a massively scalable digital archive with automated data preservation, curation, and classification.
Nikhil is an executive alumnus of the Haas School of Business, UC Berkeley (Data Science and Analytics Program), and holds a B.E. in Computer Science from the University of Pune, India. He also served on the Advisory Board of Rutgers University's Big Data certificate program for executives from 2018 to 2022.
