DSPy and ColBERT with Omar Khattab! - Weaviate Podcast #85

31:25

Content provided by Weaviate. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Weaviate or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Hey everyone! I am beyond excited to present our interview with Omar Khattab from Stanford University! Omar is one of the world's leading scientists in AI and NLP, and I highly recommend you check out his remarkable list of publications linked below!

This interview completely transformed my understanding of building RAG and LLM applications! I believe DSPy will be one of the most impactful software projects in LLM development because of its abstractions around *program optimization*. Here is my TLDR of LLM programs and program optimization with DSPy -- though I of course encourage you to watch the podcast and hear Omar's explanation haha.

RAG is one of the most popular LLM programs we have seen. It typically consists of two components: retrieve, then generate. Within the generate component we have a prompt like "please ground your answer based on the search results {search_results}". DSPy gives us a framework to optimize this prompt, bootstrap few-shot examples, or even fine-tune the model if needed. It works by compiling the program against some evaluation criteria we give DSPy.

Now let's say we add a query re-writer that rewrites the query before sending it to the retrieval system, and a reranker that re-orders the search results before handing them to the answer generator. We now have four components: query writer, retrieve, rerank, and answer. Three of them -- query writer, rerank, and answer -- each hold a prompt that DSPy can optimize to sharpen the task description or add examples! This optimization is done with DSPy's Teleprompters.

There are a few other really interesting pieces of DSPy as well -- such as how prompts are formatted with docstrings and the Signature abstraction, which in my view is quite similar to instructor or LMQL. DSPy also comes with built-in modules like Chain-of-Thought that offer a really quick way to add a reasoning step and follow a structured output format. I am having so much fun learning about DSPy, and I highly recommend you join me in exploring the GitHub repository linked below (with new examples!!).

Omar also discusses ColBERT and late interaction retrieval! He describes how it achieves the contextualized attention of cross-encoders in a much more scalable system by taking the maximum similarity between token vectors! Stay tuned for more updates from Weaviate as we dive into multi-vector representations to hopefully support systems like this soon!
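
To make the retrieve-then-generate idea concrete, here is a minimal sketch of that two-component RAG program in DSPy, loosely following the patterns in the project's intro examples at the time of recording. The LM choice and the hosted ColBERTv2 demo endpoint are assumptions to swap for your own setup.

```python
import dspy

# Placeholder setup -- swap in whatever LM and retriever you actually use.
# The URL below is the wiki-abstracts demo endpoint from DSPy's intro notebook.
lm = dspy.OpenAI(model="gpt-3.5-turbo")
rm = dspy.ColBERTv2(url="http://20.102.90.50:2017/wiki17_abstracts")
dspy.settings.configure(lm=lm, rm=rm)

class GenerateAnswer(dspy.Signature):
    """Please ground your answer based on the search results."""
    # The docstring above is the task description DSPy weaves into the prompt.
    context = dspy.InputField(desc="search results to ground the answer in")
    question = dspy.InputField()
    answer = dspy.OutputField()

class RAG(dspy.Module):
    def __init__(self, k=3):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=k)            # component 1: retrieve
        self.generate = dspy.Predict(GenerateAnswer)  # component 2: generate

    def forward(self, question):
        context = self.retrieve(question).passages
        return self.generate(context=context, question=question)

print(RAG()(question="What is late interaction retrieval?").answer)
```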
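
Compiling that program against an evaluation criterion looks roughly like this -- a hedged sketch continuing from the setup above, using the BootstrapFewShot teleprompter. The tiny trainset and the substring-match metric are illustrative stand-ins for real data and a real criterion.

```python
from dspy.teleprompt import BootstrapFewShot

# Toy training data -- in practice, real question/answer pairs.
trainset = [
    dspy.Example(question="Who is the lead author of ColBERT?",
                 answer="Omar Khattab").with_inputs("question"),
]

# The evaluation criterion DSPy compiles against: any function of
# (gold example, prediction) that returns a bool or float.
def answer_match(example, pred, trace=None):
    return example.answer.lower() in pred.answer.lower()

# The teleprompter runs the program, keeps traces that pass the metric,
# and bootstraps them into few-shot demonstrations inside each prompt.
teleprompter = BootstrapFewShot(metric=answer_match)
compiled_rag = teleprompter.compile(RAG(), trainset=trainset)
```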
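
And the four-component pipeline from the TLDR might look like the sketch below. The string signatures and field names here are hypothetical, and the reranker is expressed as an LM call purely for illustration; the point is that three of the four modules (all but retrieve) now hold prompts a teleprompter can optimize.

```python
class RewriteRerankRAG(dspy.Module):
    """query writer -> retrieve -> rerank -> answer"""

    def __init__(self, k=5):
        super().__init__()
        self.rewrite = dspy.Predict("question -> search_query")
        self.retrieve = dspy.Retrieve(k=k)
        self.rerank = dspy.Predict("question, passages -> ranked_passages")
        self.answer = dspy.Predict("question, ranked_passages -> answer")

    def forward(self, question):
        # 1. Re-write the user question into a better search query.
        query = self.rewrite(question=question).search_query
        # 2. Retrieve candidate passages for the re-written query.
        passages = self.retrieve(query).passages
        # 3. Re-order the passages before answer generation.
        ranked = self.rerank(question=question, passages=passages).ranked_passages
        # 4. Generate the final grounded answer.
        return self.answer(question=question, ranked_passages=ranked)
```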
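
The built-in Chain-of-Thought module is as quick to add as it sounds: swap dspy.Predict for dspy.ChainOfThought and a reasoning field is injected ahead of the answer. A tiny sketch, again assuming the setup above:

```python
cot = dspy.ChainOfThought("question -> answer")
pred = cot(question="Why does late interaction scale better than a cross-encoder?")
print(pred.rationale)  # the injected step-by-step reasoning
print(pred.answer)     # the structured final answer field
```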
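
Finally, the "maximum similarity between vectors" that Omar describes for ColBERT is simple to write down: every query token embedding keeps its best dot product over the document's token embeddings, and those per-token maxima are summed into the relevance score. A toy numpy sketch, with random vectors standing in for real token embeddings:

```python
import numpy as np

def maxsim_score(query_embs, doc_embs):
    """ColBERT-style late interaction (MaxSim).
    query_embs: (num_query_tokens, dim); doc_embs: (num_doc_tokens, dim);
    both L2-normalized per token, as in ColBERT."""
    sim = query_embs @ doc_embs.T    # all token-to-token similarities
    return sim.max(axis=1).sum()     # best doc token per query token, summed

rng = np.random.default_rng(0)
normalize = lambda x: x / np.linalg.norm(x, axis=1, keepdims=True)
q = normalize(rng.normal(size=(8, 128)))    # 8 query tokens, 128-dim (toy)
d = normalize(rng.normal(size=(120, 128)))  # 120 document tokens
print(maxsim_score(q, d))
```

Because the document-side token vectors are computed offline and only this cheap max-and-sum runs at query time, you get much of a cross-encoder's token-level interaction at a fraction of the cost.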

Chapters

0:00 Weaviate at NeurIPS 2023!

0:38 Omar Khattab

0:57 What is the state of AI?

2:35 DSPy

10:37 Pipelines

14:24 Prompt Tuning and Optimization

18:12 Models for Specific Tasks

21:44 LLM Compiler

23:32 Colbert or ColBERT?

24:02 ColBERT
