Talking Politics Guide to ... Existential Risk

29:47
 

David talks to Martin Rees about how we should evaluate the greatest threats facing the human species in the twenty-first century. Does the biggest danger come from bio-terror or bio-error, climate change, nuclear war or AI? And what prospects does space travel provide for a post-human future?


Talking Points:


Existential risk is risk that cascades globally and deals a severe setback to civilization. We are now so interconnected and so empowered as a species that humans themselves could be responsible for this kind of destruction.

  • There are natural existential risks too, such as asteroids. But what is concerning about the present moment is that humans have the ability to affect the entire biosphere.
  • This is a story about technology, but it’s also about global population growth and the depletion of resources.

There are four categories of existential risk: climate change, bio-terror/bio-error, nuclear weapons, and AI/new technology.

  • Climate change has a long tail, meaning that the risk of total catastrophe is non-negligible.
  • Bio-terror/bio-error is becoming more of a risk as technology advances. It’s hard to predict the consequences of the misuse of biotech, and our social order is more vulnerable than it used to be; overwhelmed hospitals, for example, could lead to societal breakdown.
  • Machine learning has not yet reached the level of existential risk. Real stupidity, not artificial intelligence, will remain our chief concern in the coming decades. Still, AI could make certain kinds of cyber-attacks much worse.
  • The nuclear risk has changed since the Cold War. Today there is a greater risk of nuclear weapons being used within a particular region, although a global catastrophe is less likely.

These threats are human-made. Solving them is also our responsibility.

  • We can’t all move to Mars; Earth’s problems have to be dealt with here.
  • There are downsides to tech, but we will also need it. Martin describes himself as a technical optimist but a political pessimist.

Mentioned in this episode:

Further Learning:

And as ever, recommended reading curated by our friends at the LRB can be found here: lrb.co.uk/talking
