Content provided by Daniel Filan. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Daniel Filan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.

32 - Understanding Agency with Jan Kulveit

2:22:29

What's the difference between a large language model and the human brain? And what's wrong with our theories of agency? In this episode, I chat about these questions with Jan Kulveit, who leads the Alignment of Complex Systems research group.

Patreon: patreon.com/axrpodcast

Ko-fi: ko-fi.com/axrpodcast

The transcript: axrp.net/episode/2024/05/30/episode-32-understanding-agency-jan-kulveit.html

Topics we discuss, and timestamps:

0:00:47 - What is active inference?

0:15:14 - Preferences in active inference

0:31:33 - Action vs perception in active inference

0:46:07 - Feedback loops

1:01:32 - Active inference vs LLMs

1:12:04 - Hierarchical agency

1:58:28 - The Alignment of Complex Systems group

Website of the Alignment of Complex Systems group (ACS): acsresearch.org

ACS on X/Twitter: x.com/acsresearchorg

Jan on LessWrong: lesswrong.com/users/jan-kulveit

Predictive Minds: Large Language Models as Atypical Active Inference Agents: arxiv.org/abs/2311.10215

Other works we discuss:

Active Inference: The Free Energy Principle in Mind, Brain, and Behavior: https://www.goodreads.com/en/book/show/58275959

Book Review: Surfing Uncertainty: https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/

The self-unalignment problem: https://www.lesswrong.com/posts/9GyniEBaN3YYTqZXn/the-self-unalignment-problem

Mitigating generative agent social dilemmas (aka language models writing contracts for Minecraft): https://social-dilemmas.github.io/

Episode art by Hamish Doodles: hamishdoodles.com
