#7: AI timelines, AI skepticism, and lock-in

 
Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read it on the EA Forum, and follow it on Twitter. Future Matters is also available in Spanish.

00:00 Welcome to Future Matters.
00:57 Davidson — What a compute-centric framework says about AI takeoff speeds.
02:19 Chow, Halperin & Mazlish — AGI and the EMH.
02:58 Hatfield-Dodds — Concrete reasons for hope about AI.
03:37 Karnofsky — Transformative AI issues (not just misalignment).
04:08 Vaintrob — Beware safety-washing.
04:45 Karnofsky — How we could stumble into AI catastrophe.
05:21 Liang & Manheim — Managing the transition to widespread metagenomic monitoring.
05:51 Crawford — Technological stagnation: why I came around.
06:38 Karnofsky — Spreading messages to help with the most important century.
07:16 Wynroe, Atkinson & Sevilla — Literature review of transformative artificial intelligence timelines.
07:50 Yagudin, Mann & Sempere — Update to Samotsvety AGI timelines.
08:15 Dourado — Heretical thoughts on AI.
08:43 Browning & Veit — Longtermism and animals.
09:04 One-line summaries.
10:28 News.
14:13 Conversation with Lukas Finnveden.
14:37 Could you clarify what you mean by AGI and lock-in?
16:36 What are the five claims one could make about the long-run trajectory of intelligent life?
18:26 What are the three claims about lock-in, conditional on the arrival of AGI?
20:21 Could lock-in still happen without whole brain emulation?
21:32 Could you explain why the form of alignment required for lock-in would be easier to solve?
23:12 Could you elaborate on the stability of the postulated long-lasting institutions and on potential threats?
26:02 Do you have any thoughts on the desirability of long-term lock-in?
28:24 What’s the story behind this report?
