
Building Reliable LLM Applications, Production-Ready RAG, Data-Driven Evals | ep 5

29:40

Content provided by Nicolay Gerold. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Nicolay Gerold or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

In this episode of "How AI is Built", we learn how to build and evaluate real-world language model applications with Shahul and Jithin, creators of Ragas. Ragas is a powerful open-source library that helps developers test, evaluate, and fine-tune Retrieval Augmented Generation (RAG) applications, streamlining their path to production readiness.
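
To make the evaluation loop concrete, here is a minimal sketch of scoring a RAG output with Ragas. It is a sketch under assumptions, not a definitive recipe: it assumes the Ragas 0.1-era evaluate API, an OPENAI_API_KEY in the environment (Ragas defaults to OpenAI judge models), and illustrative placeholder rows:

```python
# Minimal Ragas evaluation sketch (assumes ragas 0.1.x and OPENAI_API_KEY set;
# the question/answer/contexts below are illustrative placeholders).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision

# One row per test case: the question, the pipeline's generated answer, the
# retrieved contexts, and a reference answer for the retrieval metrics.
eval_data = Dataset.from_dict({
    "question": ["What does Ragas do?"],
    "answer": ["Ragas scores RAG pipelines on metrics like faithfulness."],
    "contexts": [["Ragas is an open-source library for evaluating RAG applications."]],
    "ground_truth": ["Ragas evaluates RAG applications."],
})

# evaluate() runs an LLM judge for each metric and returns aggregate scores.
result = evaluate(eval_data, metrics=[faithfulness, answer_relevancy, context_precision])
print(result)
```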

Main Insights

  • Challenges of Open-Source Models: Open-source large language models (LLMs) can be powerful tools, but require significant post-training optimization for specific use cases.
  • Evaluation Before Deployment: Thorough testing and evaluation are key to preventing unexpected behaviors and hallucinations in deployed RAGs. Ragas offers metrics and synthetic data generation to support this process.
  • Data is Key: The quality and distribution of data used to train and evaluate LLMs dramatically impact their performance. Ragas is enabling novel synthetic data generation techniques to make this process more effective and cost-efficient.
  • RAG Evolution: Techniques for improving RAGs are continuously evolving. Developers must be prepared to experiment and keep up with the latest advancements in chunk embedding, query transformation, and model alignment.

Practical Takeaways

  • Start with a solid testing strategy: Before launching, define the quality metrics aligned with your RAG's purpose. Ragas helps in this process.
  • Embrace synthetic data: Manually creating test data sets is time-consuming. Tools within Ragas help automate the creation of synthetic data to mirror real-world use cases (see the sketch after this list).
  • RAGs are iterative: Be prepared for continuous improvement as better techniques and models emerge.
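
As a sketch of what that automation looks like, the snippet below generates a synthetic test set from your own documents, mixing question styles for diversity. It assumes the Ragas 0.1-era TestsetGenerator API, LangChain document loaders, and a placeholder docs/ directory:

```python
# Sketch of synthetic test set generation (assumes ragas 0.1.x, langchain, and
# OPENAI_API_KEY set; "docs/" is a placeholder for your own corpus).
from langchain_community.document_loaders import DirectoryLoader
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context

documents = DirectoryLoader("docs/").load()

# Mix simple, reasoning, and multi-context questions so the test set covers
# the diversity of queries a deployed RAG application will actually see.
generator = TestsetGenerator.with_openai()
testset = generator.generate_with_langchain_docs(
    documents,
    test_size=20,
    distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25},
)
print(testset.to_pandas().head())
```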

Interesting Quotes

  • "...models are very stochastic and grading it directly would rather trigger them to give some random number..." - Shahul, on the dangers of naive model evaluation.
  • "Reducing the developer time in acquiring these test data sets by 90%." - Shahul, on the efficiency gains of Ragas' synthetic data generation.
  • "We want to ensure maximum diversity..." - Shahul, on creating realistic and challenging test data for RAG evaluation.

Links

Ragas:

Jithin James:

Shahul ES:

Nicolay Gerold:

Timestamps

00:00 Introduction

02:03 Introduction to Open Assistant project

04:05 Creating Customizable and Fine-Tunable Models

06:07 Ragas and the LLM Use Case

08:09 Introduction to Language Model Metrics

11:12 Reducing the Cost of Data Generation

13:19 Evaluation of Components at Melvess

15:40 Combining Ragas Metrics with AutoML Providers

20:08 Improving Performance with Fine-tuning and Reranking

22:56 End-to-End Metrics and Component-Specific Metrics

25:14 The Importance of Deep Knowledge and Understanding

25:53 Robustness vs Optimization

26:32 Challenges of Evaluating Models

27:18 Creating a Dream Tech Stack

27:47 The Future Roadmap for Ragas

28:02 Doubling Down on Grid Data Generation

28:12 Open-Source Models and Expanded Support

28:20 More Metrics for Different Applications

Keywords: RAG, Ragas, LLM, Evaluation, Synthetic Data, Open-Source, Language Model Applications, Testing
