
Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

1:12:01
 
Content provided by Future of Life Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Future of Life Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.
Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety.

Topics discussed in this episode include:
- Roman’s results on the unexplainability, incomprehensibility, and uncontrollability of AI
- The relationship between AI safety, control, and alignment
- Virtual worlds as a proposal for solving multi-multi alignment
- AI security

You can find the page for this podcast here: https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/

You can find FLI's three new policy-focused job postings here: https://futureoflife.org/job-postings/

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
2:35 Roman’s primary research interests
4:09 How theoretical proofs help AI safety research
6:23 How impossibility results constrain computer science systems
10:18 The inability to tell if arbitrary code is friendly or unfriendly
12:06 Impossibility results clarify what we can do
14:19 Roman’s results on unexplainability and incomprehensibility
22:34 Focusing on comprehensibility
26:17 Roman’s results on uncontrollability
28:33 Alignment as a subset of safety and control
30:48 The relationship between unexplainability, incomprehensibility, and uncontrollability with each other and with AI alignment
33:40 What does it mean to solve AI safety?
34:19 What do the impossibility results really mean?
37:07 Virtual worlds and AI alignment
49:55 AI security and malevolent agents
53:00 Air gapping, boxing, and other security methods
58:43 Some examples of historical failures of AI systems and what we can learn from them
1:01:20 Clarifying impossibility results
1:06:55 Examples of systems failing and what these demonstrate about AI
1:08:20 Are oracles a valid approach to AI safety?
1:10:30 Roman’s final thoughts

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.