35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization

Duration: 2:17:24
 
How do we figure out what large language models believe? In fact, do they even have beliefs? Do those beliefs have locations, and if so, can we edit those locations to change the beliefs? Also, how are we going to get AI to perform tasks so hard that we can't figure out if they succeeded at them? In this episode, I chat with Peter Hase about his research into these questions.

Patreon: https://www.patreon.com/axrpodcast

Ko-fi: https://ko-fi.com/axrpodcast

The transcript: https://axrp.net/episode/2024/08/24/episode-35-peter-hase-llm-beliefs-easy-to-hard-generalization.html

Topics we discuss, and timestamps:

0:00:36 - NLP and interpretability

0:10:20 - Interpretability lessons

0:32:22 - Belief interpretability

1:00:12 - Localizing and editing models' beliefs

1:19:18 - Beliefs beyond language models

1:27:21 - Easy-to-hard generalization

1:47:16 - What do easy-to-hard results tell us?

1:57:33 - Easy-to-hard vs weak-to-strong

2:03:50 - Different notions of hardness

2:13:01 - Easy-to-hard vs weak-to-strong, round 2

2:15:39 - Following Peter's work

Peter on Twitter: https://x.com/peterbhase

Peter's papers:

Foundational Challenges in Assuring Alignment and Safety of Large Language Models: https://arxiv.org/abs/2404.09932

Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs: https://arxiv.org/abs/2111.13654

Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models: https://arxiv.org/abs/2301.04213

Are Language Models Rational? The Case of Coherence Norms and Belief Revision: https://arxiv.org/abs/2406.03442

The Unreasonable Effectiveness of Easy Training Data for Hard Tasks: https://arxiv.org/abs/2401.06751

Other links:

Toy Models of Superposition: https://transformer-circuits.pub/2022/toy_model/index.html

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV): https://arxiv.org/abs/1711.11279

Locating and Editing Factual Associations in GPT (aka the ROME paper): https://arxiv.org/abs/2202.05262

Of nonlinearity and commutativity in BERT: https://arxiv.org/abs/2101.04547

Inference-Time Intervention: Eliciting Truthful Answers from a Language Model: https://arxiv.org/abs/2306.03341

Editing a classifier by rewriting its prediction rules: https://arxiv.org/abs/2112.01008

Discovering Latent Knowledge Without Supervision (aka the Collin Burns CCS paper): https://arxiv.org/abs/2212.03827

Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision: https://arxiv.org/abs/2312.09390

Concrete problems in AI safety: https://arxiv.org/abs/1606.06565

Rissanen Data Analysis: Examining Dataset Characteristics via Description Length: https://arxiv.org/abs/2103.03872

Episode art by Hamish Doodles: hamishdoodles.com
