LW - Superintelligent AI is possible in the 2020s by HunterJay

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Superintelligent AI is possible in the 2020s, published by HunterJay on August 14, 2024 on LessWrong.
Back in June 2023, Soroush Pour and I discussed AI timelines on his podcast, The AGI Show. The biggest difference between us was that I think "machines more intelligent than people are likely to be developed within a few years", and he thinks that it's unlikely to happen for at least a few decades.[1]
We haven't really resolved our disagreement on this prediction in the year since, so I thought I would write up my main reasons for thinking we're so close to superintelligence, and why the various arguments made by Soroush (and separately by Arvind Narayanan and Sayash Kapoor) aren't persuasive to me.
Part 1 - Why I Think We Are Close
Empirically
You can pick pretty much any trend relating to AI & computation, and it looks like this:[2]
We keep coming up with new benchmarks, and they keep getting saturated. While there are still some notable holdouts such as ARC-AGI, SWE-Bench, and GPQA, previous holdouts like MATH also looked like this until they were solved by newer models.
If these trends continue, it's hard to imagine things that AI won't be able to do in a few years' time[3], unless they are bottlenecked by regulation (like being a doctor), or by robot hardware limitations (like being a professional football player)[4].
Practically
The empirical trends are the result of several different factors: changes in network architecture, choice of hyperparameters, optimizers, training regimes, synthetic data creation, and data cleaning & selection. There are also many ideas in the space that have not been tried at scale yet. Hardware itself is also improving: chips continue to double in price-performance every 2-3 years, and training clusters are scaling up massively.
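To make the hardware claim concrete, here is a quick back-of-the-envelope sketch (my own illustration, not from the original post) of how a 2-3 year price-performance doubling time compounds over a decade:

```python
# Illustrative arithmetic only: compounding of chip price-performance under
# the "doubles every 2-3 years" claim above. No real chip data is used.

def fold_improvement(years: float, doubling_time_years: float) -> float:
    """Multiplier on FLOP-per-dollar after `years`, given a fixed doubling time."""
    return 2 ** (years / doubling_time_years)

for doubling_time in (2.0, 3.0):
    print(f"doubling every {doubling_time:.0f} years -> "
          f"{fold_improvement(10, doubling_time):.0f}x price-performance in a decade")
# doubling every 2 years -> 32x price-performance in a decade
# doubling every 3 years -> 10x price-performance in a decade
```

Even the slower end of that range compounds to an order of magnitude within ten years, before counting any of the algorithmic improvements above.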
It's entirely possible that some of these trends slow down (we might not have another transformer-level architecture advance this decade, for instance), but the fact that there are so many different ways to keep improving AI for the foreseeable future makes me think progress is unlikely to slow significantly. If we run out of text data, we can use videos. If we run out of that, we can generate synthetic data.
If synthetic data doesn't generalise, we can get more efficient with the data we have through better optimisers and training schedules. If that doesn't work, we can find architectures that learn the patterns more easily, and so on.
In reality, all of these will be done at the same time, and pretty soon (arguably already) the AIs themselves will be doing a significant share of the research and engineering needed to find and test new ideas[5]. This makes me think progress will accelerate rather than slow.
Theoretically
Humans are an existence proof of general intelligence, and since human cognition is itself just computation[6], there is no physical law stopping us from building another general intelligence (in silicon) given enough time and resources[7].
We can use the human brain as an upper bound for the amount of computation needed to get AGI (i.e. we know it can be done with the amount of computation done in the brain, but it might be possible with less)[8]. We think human brains do the equivalent of between 10^12 and 10^28 FLOP[9] per second [a hilariously wide range]. Supercomputers today can do around 10^18 FLOP per second. The physical, theoretical limit seems to be approximately 10^48 FLOP per second per kilogram.
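Those exponents are easier to compare side by side. The snippet below is just my own arithmetic on the figures quoted above; the figures themselves are rough estimates from the post, not measurements:

```python
# Back-of-the-envelope comparison of the orders of magnitude quoted above.
# All inputs are the post's rough estimates, not measured values.
import math

brain_low, brain_high = 1e12, 1e28   # estimated human-brain compute, FLOP/s
supercomputer = 1e18                 # today's largest supercomputers, FLOP/s
physical_limit_per_kg = 1e48         # rough theoretical limit, FLOP/s per kg

# Supercomputers already exceed the low brain estimate, but trail the high one.
print(f"vs low estimate:  {math.log10(supercomputer / brain_low):.0f} orders of magnitude ahead")
print(f"vs high estimate: {math.log10(brain_high / supercomputer):.0f} orders of magnitude behind")

# At a 2-year hardware doubling time, closing the gap to the high estimate
# takes log2(1e10) doublings, i.e. roughly 66 years; the low estimate is
# already passed. The physical limit leaves enormous headroom either way.
print(f"years to high estimate at 2-year doublings: "
      f"{math.log2(brain_high / supercomputer) * 2:.0f}")
print(f"headroom below the physical limit: "
      f"{math.log10(physical_limit_per_kg / supercomputer):.0f} orders of magnitude per kg")
```

On the low end of the brain estimate, today's hardware already suffices and the remaining bottleneck is algorithmic; on the high end, several more decades of hardware scaling would be needed unless algorithmic efficiency closes part of the gap.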
We can also reason that humans are a lower bound for the compute efficiency of AGI (i.e. we know that with this amount of compute we can get human-level intelligence, but it might be possible to do it with less)[10]. If humans are more efficient per unit of compute than current AI systems, then we know that more algorithmic progress must be possible as well.
In other words, there seems to be...