Content provided by Sam Kimpton-Nye. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Sam Kimpton-Nye or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
36: "The Singularity: A Philosophical Analysis", David Chalmers

23:47

Recently, there has been frenzied interest in artificial intelligence and, in particular, in the issue of AI safety. "Open letters" signed by some of the biggest names in the tech business have urged us to take seriously the existential threat posed by AI, and the UK government has just announced that it will convene the first global AI safety summit this autumn.

But what is the threat here, exactly? There are risks associated with any new technology: fire burns, nuclear energy can be harnessed in bombs, and social media algorithms threaten democracy. The so-called AI singularity is supposed to be at least on a par with the worst of these threats since, according to some, it has the real potential to wipe out all of humanity.
Will there be a singularity? How should we negotiate one, and would it necessarily be a bad thing, resulting in human extinction? And assuming the singularity doesn't wipe out humanity, how might we integrate into a post-singularity world? Listen to find out!
Here is a link to the paper.

Support the Show.

38 episodes
