Content provided by The Nonlinear Fund. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
LW - Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? by 1a3orn

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better?, published by 1a3orn on June 1, 2023 on LessWrong.

TLDR

Starting in 2008, Robin Hanson and Eliezer Yudkowsky debated the likelihood of FOOM: a rapid and localized increase in some AI's intelligence that occurs because an AI recursively improves itself. As Yudkowsky summarizes his position:

I think that, at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability—“AI go FOOM.” Just to be clear on the claim, “fast” means on a timescale of weeks or hours rather than years or decades; and “FOOM” means way the hell smarter than anything else around, capable of delivering in short time periods technological advancements that would take humans decades, probably including full-scale molecular nanotechnology. (FOOM, 235)

Over the course of this debate, both Hanson and Yudkowsky made a number of incidental predictions about things which could occur before the advent of artificial superintelligence -- or for which we could at the very least receive strong evidence before artificial superintelligence. On the object level, my conclusion is that when you examine these predictions, Hanson probably does a little better than Yudkowsky, although depending on how you weigh different topics, I could see arguments ranging from "they do about the same" to "Hanson does much better." On one meta level, my conclusion is that Hanson's view -- that we should try to use abstractions that have proven prior predictive power -- looks like a pretty good policy.
On another meta level, my conclusion -- springing to a great degree from how painful seeking clear predictions in 700 pages of words has been -- is that if anyone says "I have a great track record" without pointing to specific predictions that they made, you should probably ignore them, or maybe point out their lack of epistemic virtue, if you have the energy to spare for doing that kind of criticism productively.

Intro

There are a number of difficulties involved in evaluating some public figure's track record. We want to avoid cherry-picking sets of particularly good or bad predictions. And we want to have some baseline to compare them to. We can mitigate both of these difficulties -- although not, alas, eliminate them -- by choosing one document to evaluate: "The Hanson-Yudkowsky Foom Debate". (All future page numbers refer to this PDF.) Note that the PDF includes (1) the debate-via-blogposts which took place on OvercomingBias, (2) an actual in-person debate that took place at Jane Street in 2011, and (3) further summary materials from Hanson (further blogposts) and Yudkowsky ("Intelligence Explosion Microeconomics"). This spans a period from 2008 to 2013.

I do not intend this to be a complete review of everything in these arguments. The discussion spans the time from the big bang until hypothetical far-future galactic civilizations. My review is a little more constrained: I am only going to look at predictions for which I think we've received strong evidence in the 15 or so years since the debate started.

Note also that the context of this debate was quite different than it would be if it happened today. At the time of the debate, both Hanson and Yudkowsky believed that machine intelligence would be extremely important, but that the time of its arrival was uncertain. They thought that it would probably arrive this century, but neither held the very short, confident timelines which are common today.
At this point Yudkowsky was interested in actually creating a recursively self-improving artificial intelligence, a "seed AI." For instance, in 2006 the Singularity Institute -- as MIRI was known before it was renamed -- had a website explicitly stating that it sought funding to create recursively self-improving AI. During the Jane Street debate Y...
