Weights and Biases on Fine-Tuning LLMs - Weaviate Podcast #68!

52:09
 
Hey everyone! Thank you so much for watching the 68th episode of the Weaviate Podcast! We are super excited to welcome Morgan McGuire, Darek Kleczek, and Thomas Capelle! This was such a fun discussion, beginning with how our guests see the space of fine-tuning: why you would want to do it, the available tooling, its intersection with RAG, and more!

Check out W&B Prompts! https://wandb.ai/site/prompts

Check out the W&B Tiny Llama Report! https://wandb.ai/capecape/llamac/reports/Training-Tiny-Llamas-for-Fun-and-Science--Vmlldzo1MDM2MDg0

Chapters
0:00 Tiny Llamas!
1:53 Welcome!
2:22 LLM Fine-Tuning
5:25 Tooling for Fine-Tuning
7:55 Why Fine-Tune?
9:55 RAG vs. Fine-Tuning
12:25 Knowledge Distillation
14:40 Gorilla LLMs
18:25 Open-Source LLMs
22:48 Jonathan Frankle on W&B
23:45 Data Quality for LLM Training
25:55 W&B for Data Versioning
27:25 Curriculum Learning
29:28 GPU Rich and Data Quality
30:30 Vector DBs and Data Quality
32:50 Tuning Training with Weights & Biases
35:47 Training Reports
42:28 HF Collections and W&B Sweeps
44:50 Exciting Directions for AI
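
If you want a hands-on starting point for the tuning-with-W&B topics discussed at 32:50 and 42:28, here is a minimal sketch of a Weights & Biases Sweep wrapped around a placeholder fine-tuning loop. The project name, hyperparameter names, and metric are illustrative assumptions for this sketch, not details taken from the episode; swap in your real training loop and metrics.

```python
# Minimal sketch: searching fine-tuning hyperparameters with a W&B Sweep.
# Assumed names (project, hyperparameters, metric) are illustrative only.
import wandb

# Random search over a learning rate range and a set of LoRA ranks,
# minimizing a logged "eval_loss" metric.
sweep_config = {
    "method": "random",
    "metric": {"name": "eval_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-5, "max": 5e-4},
        "lora_rank": {"values": [4, 8, 16]},
    },
}

def train():
    # Calling wandb.init() inside the function lets the sweep agent
    # inject the sampled hyperparameters via wandb.config.
    run = wandb.init(project="llm-fine-tuning")
    cfg = wandb.config
    for step in range(100):
        # Placeholder metric; in a real run, log your actual eval loss here.
        eval_loss = cfg.learning_rate * 1e4 / (step + 1)
        wandb.log({"eval_loss": eval_loss, "step": step})
    run.finish()

# Register the sweep and run five trials with a local agent.
sweep_id = wandb.sweep(sweep_config, project="llm-fine-tuning")
wandb.agent(sweep_id, function=train, count=5)
```

Each trial appears as a separate run in the W&B dashboard, so you can compare hyperparameter choices side by side, much like the training reports discussed in the episode.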
