#5: supervolcanoes, AI takeover, and What We Owe the Future
Manage episode 340996248 series 3340630
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.
00:00 Welcome to Future Matters.
01:08 MacAskill — What We Owe the Future.
01:34 Lifland — Samotsvety's AI risk forecasts.
02:11 Halstead — Climate Change and Longtermism.
02:43 Good Judgment — Long-term risks and climate change.
02:54 Thorstad — Existential risk pessimism and the time of perils.
03:32 Hamilton — Space and existential risk.
04:07 Cassidy & Mani — Huge volcanic eruptions.
04:45 Boyd & Wilson — Island refuges for surviving nuclear winter and other abrupt sun-reducing catastrophes.
05:28 Hilton — Preventing an AI-related catastrophe.
06:13 Lewis — Most small probabilities aren't Pascalian.
07:04 Yglesias — What's long-term about "longtermism"?
07:33 Lifland — Prioritizing x-risks may require caring about future people.
08:40 Karnofsky — AI strategy nearcasting.
09:11 Karnofsky — How might we align transformative AI if it's developed very soon?
09:51 Matthews — How effective altruism went from a niche movement to a billion-dollar force.
10:28 News.
14:28 Conversation with Ajeya Cotra.
15:02 What do you mean by human feedback on diverse tasks (HFDT) and what made you focus on it?
18:08 Could you walk us through the three assumptions you make about how this scenario plays out?
20:49 What are the key properties of the model you call Alex?
22:55 What do you mean by "playing the training game", and why would Alex behave in that way?
24:34 Can you describe how deploying Alex would result in a loss of human control?
29:40 Can you talk about the sorts of specific countermeasures to prevent takeover?
9 episodes
All episodes
#8: Bing Chat, AI labs on safety, and pausing Future Matters 41:48
#6: FTX collapse, value lock-in, and counterarguments to AI x-risk 37:47
#5: supervolcanoes, AI takeover, and What We Owe the Future 31:26
#4: AI timelines, AGI risk, and existential risk from climate change 31:13
#3: digital sentience, AGI ruin, and forecasting track records 34:05
#2: Clueless skepticism, 'longtermist' as an identity, and nanotechnology strategy research 23:07
#1: AI takeoff, longtermism vs. existential risk, and probability discounting 29:55