
Content provided by Weaviate. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Weaviate or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Shishir Patil and Tianjun Zhang on Gorilla - Weaviate Podcast #64!

49:15
 

Hey everyone! Thank you so much for watching the 64th Weaviate Podcast with Shishir Patil and Tianjun Zhang, co-authors of Gorilla: Large Language Models Connected with Massive APIs! I learned so much about Gorilla from Shishir and Tianjun: the APIBench dataset, the continually evolving APIZoo, how the models are trained with Retrieval-Aware Training and Self-Instruct training data, how the authors think about fine-tuning LLaMA-7B models for tasks like this, and much more! I hope you enjoy the podcast! As always, I am more than happy to answer any questions or discuss any ideas you have about the content of the podcast.

Please check out the paper here: https://arxiv.org/abs/2305.15334

Chapters
0:00 Welcome Shishir and Tianjun
0:25 Gorilla LLM Story
1:50 API Examples
7:40 The APIZoo
10:55 Gorilla vs. OpenAI Funcs
12:50 Retrieval-Aware Training
19:55 Mixing APIs, Gorilla for Integration
25:12 LLaMA-7B Fine-Tuning vs. GPT-4
29:08 Weaviate Gorilla
33:52 Gorilla and Baby Gorillas
35:40 Gorilla vs. HuggingFace
38:32 Structured Output Parsing
41:14 Reflexion Prompting for Debugging
44:00 Directions for the Future
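For listeners curious about what the Retrieval-Aware Training discussed in the episode looks like at inference time, here is a minimal sketch of the retrieval-aware prompting pattern: retrieved API documentation is appended to the user instruction before the fine-tuned model generates the API call. The `retrieve_api_docs` and `llm_generate` callables and the exact prompt wording below are hypothetical placeholders for illustration, not the actual Gorilla implementation.

```python
# Minimal sketch of retrieval-aware prompting for a Gorilla-style model.
# `retrieve_api_docs` and `llm_generate` are hypothetical placeholders; swap in
# your own retriever (e.g. a vector database query) and model-serving client.

from typing import Callable, List


def build_retrieval_aware_prompt(instruction: str, api_docs: List[str]) -> str:
    """Append retrieved API documentation to the user instruction.

    The phrasing here is one possible instruction format, not the exact
    prompt used by the Gorilla authors.
    """
    docs = "\n".join(api_docs)
    return f"{instruction}\nUse this API documentation for reference:\n{docs}"


def call_gorilla(
    instruction: str,
    retrieve_api_docs: Callable[[str, int], List[str]],
    llm_generate: Callable[[str], str],
    top_k: int = 1,
) -> str:
    """Retrieve candidate API docs for the instruction, then ask the
    fine-tuned model to emit the corresponding API call."""
    api_docs = retrieve_api_docs(instruction, top_k)
    prompt = build_retrieval_aware_prompt(instruction, api_docs)
    return llm_generate(prompt)
```

Because the retriever and generator are passed in as plain callables, the same sketch covers both the zero-shot setting (pass a retriever that returns an empty list) and the retrieval-aware setting discussed in the episode.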
