Breaking Down EvalGen: Who Validates the Validators?

44:47
 
Content provided by Arize AI. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Arize AI or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Due to the cumbersome nature of human evaluation and the limitations of code-based evaluation, Large Language Models (LLMs) are increasingly being used to assist humans in evaluating LLM outputs. Yet LLM-generated evaluators often inherit the problems of the LLMs they evaluate, requiring further human validation.
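As a rough illustration of the LLM-as-judge pattern the paper builds on, here is a minimal sketch of an evaluation function that asks one model to grade another model's output against a criterion. It assumes the OpenAI Python client; the model name, prompt wording, and PASS/FAIL protocol are illustrative choices, not the paper's implementation.

```python
# Minimal LLM-as-judge sketch. Assumes the OpenAI Python client (openai>=1.0)
# and an OPENAI_API_KEY in the environment. Model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def llm_judge(output: str, criterion: str) -> bool:
    """Ask an LLM whether `output` satisfies `criterion`; return pass/fail."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are an evaluator. Answer only PASS or FAIL."},
            {"role": "user",
             "content": f"Criterion: {criterion}\n\nOutput to grade:\n{output}"},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("PASS")
```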

This week’s paper explores EvalGen, a mixed-initiative approach to aligning LLM-generated evaluation functions with human preferences. EvalGen assists users both in developing criteria for acceptable LLM outputs and in developing functions to check those criteria, ensuring evaluations reflect the users’ own grading standards.
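The alignment step can be pictured as scoring each candidate evaluator against a handful of human pass/fail grades. The sketch below is a simplified illustration of that idea (the paper discusses metrics along the lines of coverage and false failure rate); the function names and exact formulas here are assumptions for illustration, not EvalGen's implementation.

```python
# Illustrative alignment check: compare a candidate evaluator's verdicts
# against human pass/fail grades on the same outputs. Formulas are a
# simplified sketch, not EvalGen's exact selection procedure.
from typing import Callable, List

def alignment(evaluator: Callable[[str], bool],
              outputs: List[str],
              human_grades: List[bool]) -> dict:
    verdicts = [evaluator(o) for o in outputs]
    # Verdicts on outputs humans failed vs. passed.
    on_bad = [v for v, h in zip(verdicts, human_grades) if not h]
    on_good = [v for v, h in zip(verdicts, human_grades) if h]
    # Coverage: fraction of human-failed outputs the evaluator also fails.
    coverage = sum(1 for v in on_bad if not v) / len(on_bad) if on_bad else 1.0
    # False failure rate: fraction of human-passed outputs it wrongly fails.
    ffr = sum(1 for v in on_good if not v) / len(on_good) if on_good else 0.0
    return {"coverage": coverage, "false_failure_rate": ffr}
```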
Read it on the blog: https://arize.com/blog/breaking-down-evalgen-who-validates-the-validators/

To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.
