Lessons from reinforcement learning from human feedback | Stephen Casper | EAG Boston 23

55:40
 

Content provided by Aaron Bergman. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Aaron Bergman or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.

Reinforcement Learning from Human Feedback (RLHF) has emerged as the central alignment technique used to fine-tune state-of-the-art systems such as GPT-4, Claude-2, Bard, and Llama-2. However, RLHF has a number of known problems, and these models have exhibited some troubling alignment failures. How did we get here? What lessons should we learn? And what does it mean for the next generation of AI systems? Stephen is a third-year Computer Science Ph.D. student at MIT in the Algorithmic Alignment Group, advised by Dylan Hadfield-Menell. Previously, he worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI. His main focus is on interpreting, diagnosing, debugging, and auditing deep learning systems.
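
For readers unfamiliar with the technique the abstract refers to, the following is a minimal, self-contained sketch of the standard RLHF recipe (preference comparisons, then a reward model, then KL-regularized policy optimization). It is not material from the talk: the bandit setting, the simulated Bradley-Terry labeler, and every parameter value are illustrative assumptions, and production systems replace the closed-form policy update shown here with PPO over a language model.

# A toy sketch of the RLHF pipeline, assuming a simple bandit setting:
# "responses" are discrete arms, a reward model is fit to pairwise human
# preferences with a Bradley-Terry loss, and the policy is then shifted
# toward higher predicted reward under a KL penalty to a reference policy.
# All names and parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_responses = 5                                  # toy "response" space
true_reward = rng.normal(size=n_responses)       # hidden human utility

def sample_preference(i: int, j: int) -> int:
    """Simulated labeler: returns 1 if response i is preferred to j."""
    p_i = 1.0 / (1.0 + np.exp(-(true_reward[i] - true_reward[j])))
    return int(rng.random() < p_i)

# Step 1: collect pairwise preference comparisons from the (simulated) human.
data = []
for _ in range(300):
    i, j = rng.choice(n_responses, size=2, replace=False)
    data.append((i, j, sample_preference(i, j)))

# Step 2: fit a reward model to the comparisons by maximizing the
# Bradley-Terry log-likelihood with plain gradient ascent.
r_hat = np.zeros(n_responses)
lr = 0.5
for _ in range(500):
    grad = np.zeros(n_responses)
    for i, j, y in data:
        p = 1.0 / (1.0 + np.exp(-(r_hat[i] - r_hat[j])))
        grad[i] += y - p
        grad[j] -= y - p
    r_hat += lr * grad / len(data)

# Step 3: KL-regularized policy improvement against the learned reward.
# LLM-scale RLHF does this with PPO; for a discrete toy policy the
# KL-penalized objective has the closed form  pi ∝ pi_ref * exp(r_hat / beta).
beta = 0.5                                       # KL penalty strength
pi_ref = np.full(n_responses, 1.0 / n_responses) # uniform reference policy
pi_new = pi_ref * np.exp(r_hat / beta)
pi_new /= pi_new.sum()

# Note: r_hat is only identifiable up to an additive constant, so compare
# differences between entries rather than absolute values.
print("true reward   :", np.round(true_reward, 2))
print("learned reward:", np.round(r_hat, 2))
print("tuned policy  :", np.round(pi_new, 2))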


157 episodes
