
Catastrophe and consent

London Futurists

31:46

In this episode, co-hosts Calum and David continue their reflections on what they have both learned from their interactions with guests on this podcast over the last few months. Where have their ideas changed? And where are they still sticking to their guns?
The previous episode began looking at two of what Calum calls the 4 Cs of superintelligence: Cease and Control. In this episode, under the headings of Catastrophe and Consent, the discussion widens to consider both the very bad outcomes and the very good outcomes that could follow from the emergence of AI superintelligence.
Topics addressed in this episode include:
*) A 'zombie' argument that corporations are superintelligences - and what that suggests about the possibility of human control over a superintelligence
*) The existential threat of the entire human species being wiped out
*) The vulnerabilities of our shared infrastructure
*) An AGI may pursue goals even without being conscious or having agency
*) The risks of accidental and/or coincidental catastrophe
*) How a single technical fault caused automated passport checking to fail throughout the UK
*) The example of the Boeing 737 MAX's automated flight control system causing the deaths of everyone aboard two flights - in Indonesia and in Ethiopia
*) The example from 1983 of Stanislav Petrov using his human judgement regarding an automated alert of apparently incoming nuclear missiles
*) Reasons why an AGI might decide to eliminate humans
*) The serious risk of a growing public panic - and potential mishandling of it by self-interested partisan political leaders
*) Why "Consent" is a better name than "Celebration"
*) Reasons why an AGI might consent to help humanity flourish, solving all our existential problems
*) Two models for humans merging with an AI superintelligence - to seek "Control", and as a consequence of "Consent"
*) Enhanced human intelligence could play a role in avoiding a surge of panic
*) Reflections on "The Artilect War" by Hugo de Garis: cosmists vs. terrans
*) Reasons for supporting "team human" (or "team posthuman") as opposed to an AGI that might replace us
*) Reflections on "Diaspora" by Greg Egan: three overlapping branches of future humans
*) Is collaboration a self-evident virtue?
*) Will an AGI consider humans to be endlessly fascinating? Or regard our culture and history as shallow and uninspiring?
*) The inscrutability of AGI motivation
*) A reason to consider "Consent" as the most likely outcome
*) A fifth 'C' word, as discussed by Max Tegmark
*) A reason to keep working on a moonshot solution for "Control"
*) Practical steps to reduce the risk of public panic
Music: Spike Protein, by Koi Discovery, available under the CC0 1.0 Public Domain Dedication
