
11 - Attainable Utility and Power with Alex Turner

1:27:36
 

Many scary stories about AI involve an AI system deceiving and subjugating humans in order to gain the ability to achieve its goals without us stopping it. This episode's guest, Alex Turner, will tell us about his research analyzing the notions of "attainable utility" and "power" that underlie these stories, so that we can better evaluate how likely they are and how to prevent them.
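The word "power" here is a formal object, not just a metaphor. One of the episode's main papers, "Optimal Policies Tend to Seek Power" (linked below), defines a state's power roughly as the normalized average optimal value attainable from it, over a distribution of reward functions. The following is a sketch of that definition; consult the paper for the precise statement and its conditions.

```latex
% POWER of state s at discount gamma, for a distribution D over
% reward functions R, where V*_R is the optimal value function for R.
\[
  \mathrm{POWER}_{\mathcal{D}}(s, \gamma)
  \;=\; \frac{1-\gamma}{\gamma}\,
  \mathbb{E}_{R \sim \mathcal{D}}\!\left[ V^{*}_{R}(s, \gamma) - R(s) \right]
\]
```

Subtracting the current reward R(s) makes power a measure of future options rather than present payoff: states from which many different goals remain achievable score high, which is why the paper finds that optimal policies for most reward functions tend to steer toward such states.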

Topics we discuss:

- Side effects minimization

- Attainable Utility Preservation (AUP) (see the code sketch after this list)

- AUP and alignment

- Power-seeking

- Power-seeking and alignment

- Future work and about Alex
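Here is the sketch referenced in the topics list. The core idea of AUP, from "Conservative Agency via Attainable Utility Preservation" (linked below), is to penalize the agent whenever an action changes how well it could pursue a set of auxiliary goals, relative to doing nothing. The Python below is illustrative only: the function and argument names are my own, not the paper's code, and the paper additionally normalizes the penalty by a scale term.

```python
# Illustrative sketch of the AUP reward (arxiv.org/abs/1902.09725).
# All names are hypothetical; each q in aux_q_values stands for a
# Q-function learned for a randomly sampled auxiliary reward function.

def aup_reward(primary_reward, aux_q_values, state, action, noop, lam=0.1):
    """Primary reward minus a penalty for shifting attainable utility.

    aux_q_values: list of callables q(state, action) -> float, one per
        auxiliary reward function.
    noop: the designated "do nothing" action, used as the baseline.
    lam: penalty strength; larger values give a more conservative agent.
    """
    # Penalize changes in either direction to each auxiliary goal's
    # attainable value, compared with taking the no-op action.
    penalty = sum(abs(q(state, action) - q(state, noop)) for q in aux_q_values)
    return primary_reward - lam * penalty / len(aux_q_values)
```

Penalizing decreases and increases alike is the point: the agent is discouraged both from destroying its options and from grabbing power it was not asked for.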

The transcript: axrp.net/episode/2021/09/25/episode-11-attainable-utility-power-alex-turner.html

Alex on the AI Alignment Forum: alignmentforum.org/users/turntrout

Alex's Google Scholar page: scholar.google.com/citations?user=thAHiVcAAAAJ&hl=en&oi=ao

Conservative Agency via Attainable Utility Preservation: arxiv.org/abs/1902.09725

Optimal Policies Tend to Seek Power: arxiv.org/abs/1912.01683

Other works discussed:

- Avoiding Side Effects by Considering Future Tasks: arxiv.org/abs/2010.07877

- The "Reframing Impact" Sequence: alignmentforum.org/s/7CdoznhJaLEKHwvJW

- The "Risks from Learned Optimization" Sequence: alignmentforum.org/s/7CdoznhJaLEKHwvJW

- Concrete Approval-Directed Agents: ai-alignment.com/concrete-approval-directed-agents-89e247df7f1b

- Seeking Power is Convergently Instrumental in a Broad Class of Environments: alignmentforum.org/s/fSMbebQyR4wheRrvk/p/hzeLSQ9nwDkPc4KNt

- Formalizing Convergent Instrumental Goals: intelligence.org/files/FormalizingConvergentGoals.pdf

- The More Power at Stake, the Stronger Instrumental Convergence Gets for Optimal Policies: alignmentforum.org/posts/Yc5QSSZCQ9qdyxZF6/the-more-power-at-stake-the-stronger-instrumental

- Problem Relaxation as a Tactic: alignmentforum.org/posts/JcpwEKbmNHdwhpq5n/problem-relaxation-as-a-tactic

- How I do Research: lesswrong.com/posts/e3Db4w52hz3NSyYqt/how-i-do-research

- Math that Clicks: Look for Two-way Correspondences: lesswrong.com/posts/Lotih2o2pkR2aeusW/math-that-clicks-look-for-two-way-correspondences

- Testing the Natural Abstraction Hypothesis: alignmentforum.org/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro
