#431 – Roman Yampolskiy: Dangers of Superintelligent AI

2:22:39
 
Content provided by Lex Fridman. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Lex Fridman or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
Yahoo Finance: https://yahoofinance.com
MasterClass: https://masterclass.com/lexpod to get 15% off
NetSuite: http://netsuite.com/lex to get a free product tour
LMNT: https://drinkLMNT.com/lex to get a free sample pack
Eight Sleep: https://eightsleep.com/lex to get $350 off

Transcript: https://lexfridman.com/roman-yampolskiy-transcript

EPISODE LINKS:
Roman’s X: https://twitter.com/romanyam
Roman’s Website: http://cecs.louisville.edu/ry
Roman’s AI book: https://amzn.to/4aFZuPb

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
– Check out the sponsors above; it's the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players, you should be able to click a timestamp to jump to that point.
(00:00) – Introduction
(09:12) – Existential risk of AGI
(15:25) – Ikigai risk
(23:37) – Suffering risk
(27:12) – Timeline to AGI
(31:44) – AGI Turing test
(37:06) – Yann LeCun and open source AI
(49:58) – AI control
(52:26) – Social engineering
(54:59) – Fearmongering
(1:04:49) – AI deception
(1:11:23) – Verification
(1:18:22) – Self-improving AI
(1:30:34) – Pausing AI development
(1:36:51) – AI safety
(1:46:35) – Current AI
(1:51:58) – Simulation
(1:59:16) – Aliens
(2:00:50) – Human mind
(2:07:10) – Neuralink
(2:16:15) – Hope for the future
(2:20:11) – Meaning of life
