
Ethan Perez – Inverse Scaling, Language Feedback, Red Teaming

Duration: 2:01:26
 

Ethan Perez is a research scientist at Anthropic working on large language models. He is the second Ethan working on large language models to come on the show, but in this episode we discuss why alignment, not scale, is actually what you need. We discuss three projects he pursued before joining Anthropic: the Inverse Scaling Prize, Red Teaming Language Models with Language Models, and Training Language Models with Language Feedback.

Ethan Perez: https://twitter.com/EthanJPerez

Transcript: https://theinsideview.ai/perez

Host: https://twitter.com/MichaelTrazzi

OUTLINE

(00:00:00) Highlights

(00:00:20) Introduction

(00:01:41) The Inverse Scaling Prize

(00:06:20) The Inverse Scaling Hypothesis

(00:11:00) How To Submit A Solution

(00:20:00) Catastrophic Outcomes And Misalignment

(00:22:00) Submission Requirements

(00:27:16) Inner Alignment Is Not Out Of Distribution Generalization

(00:33:40) Detecting Deception With Inverse Scaling

(00:37:17) Reinforcement Learning From Human Feedback

(00:45:37) Training Language Models With Language Feedback

(00:52:38) How It Differs From InstructGPT

(00:56:57) Providing Information-Dense Feedback

(01:03:25) Why Use Language Feedback

(01:10:34) Red Teaming Language Models With Language Models

(01:20:17) The Classifier And Adversarial Training

(01:23:53) An Example Of Red-Teaming Failure

(01:27:47) Red Teaming Using Prompt Engineering

(01:32:58) Reinforcement Learning Methods

(01:41:53) Distributional Biases

(01:45:23) Chain of Thought Prompting

(01:49:52) Unlikelihood Training and KL Penalty

(01:52:50) Learning AI Alignment Through The Inverse Scaling Prize

(01:59:33) Final Thoughts On AI Alignment
