Ep. 42: Shahar Avin on Artificial Intelligence


By Atief Heermance, Robert de Neufville, and Scott Eastman.

Episode 42 of the NonProphets podcast, in which Atief, Robert, and Scott interview Dr. Shahar Avin of the University of Cambridge's Centre for the Study of Existential Risk. We discuss the prospects for the development of artificial general intelligence (00:54), why general intelligence might be harder to control than narrow intelligence (04:29), how we can forecast the development of new, unprecedented technologies (07:54), what the greatest threats to human survival are (11:25), the "value-alignment problem" and why developing artificial intelligence might be dangerous (14:39), what form artificial intelligence is likely to take (18:32), recursive self-improvement and "the singularity" (22:05), whether we can regulate or limit the development of AI (26:46), the prospect of an AI arms race (29:21), how AI could be used to undermine political security (31:01), OpenAI and the prospects for protective AI (34:55), tackling AI safety and control problems (38:51), why it matters what data is used to train AI (45:58), when we will have self-driving cars (49:14), the potential benefits of AI (55:42), and why scientific research should be funded by lottery (57:23). As always, you can reach us at nonprophetspod.com or nonprophetspod@gmail.com. (Recorded 9/27/2017.)
