Content provided by Michaël Trazzi. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Michaël Trazzi or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Joscha Bach on how to stop worrying and love AI

2:54:29

Joscha Bach (who describes himself as an AI researcher and cognitive scientist) has recently been debating existential risk from AI with Connor Leahy (a previous guest of the podcast), and since their conversation was quite short, I wanted to continue the debate in more depth.

The resulting conversation ended up being quite long (over three hours of recording), with a lot of tangents, but I think it gives a somewhat better overview of Joscha’s views on AI risk than other similar interviews. We also discussed many other topics, which you can find in the outline below.

A raw version of this interview was published on Patreon about three weeks ago. To support the channel and get access to early previews, you can subscribe here: https://www.patreon.com/theinsideview

Youtube: https://youtu.be/YeXHQts3xYM

Transcript: https://theinsideview.ai/joscha

Host: https://twitter.com/MichaelTrazzi

Joscha: https://twitter.com/Plinz

OUTLINE

(00:00) Intro

(00:57) Why Barbie Is Better Than Oppenheimer

(08:55) The relationship between nuclear weapons and AI x-risk

(12:51) Global warming and the limits to growth

(20:24) Joscha’s reaction to the AI Political compass memes

(23:53) On Uploads, Identity and Death

(33:06) The Endgame: Playing The Longest Possible Game Given A Superposition Of Futures

(37:31) On the evidence of delaying technology leading to better outcomes

(40:49) Humanity is in locust mode

(44:11) Scenarios in which Joscha would delay AI

(48:04) On the dangers of AI regulation

(55:34) From longtermist doomer who thinks AGI is good to 6x6 political compass

(01:00:08) Joscha believes in god in the same sense as he believes in personal selves

(01:05:45) The transition from cyanobacterium to photosynthesis as an allegory for technological revolutions

(01:17:46) What Joscha would do as Aragorn in Middle-Earth

(01:25:20) The endgame of brain computer interfaces is to liberate our minds and embody thinking molecules

(01:28:50) Transcending politics and aligning humanity

(01:35:53) On the feasibility of starting an AGI lab in 2023

(01:43:19) Why green teaming is necessary for ethics

(01:59:27) Joscha's Response to Connor Leahy on "if you don't do that, you die Joscha. You die"

(02:07:54) Aligning with the agent playing the longest game

(02:15:39) Joscha’s response to Connor on morality

(02:19:06) Caring about mindchildren and actual children equally

(02:20:54) On finding the function that generates human values

(02:28:54) Twitter And Reddit Questions: Joscha’s AGI timelines and p(doom)

(02:35:16) Why European AI regulations are bad for AI research

(02:38:13) What regulation would Joscha Bach pass as president of the US

(02:40:16) Is Open Source still beneficial today?

(02:42:26) How to make sure that AI loves humanity

(02:47:42) The movie Joscha would want to live in

(02:50:06) Closing message for the audience
