LW - Contra papers claiming superhuman AI forecasting by nikos

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Contra papers claiming superhuman AI forecasting, published by nikos on September 12, 2024 on LessWrong.
[Conflict of interest disclaimer: We are FutureSearch, a company working on AI-powered forecasting and other types of quantitative reasoning. If thin LLM wrappers could achieve superhuman forecasting performance, this would obsolete a lot of our work.]
Widespread, misleading claims about AI forecasting
Recently we have seen a number of papers (Schoenegger et al., 2024; Halawi et al., 2024; Phan et al., 2024; Hsieh et al., 2024) with claims that boil down to "we built an LLM-powered forecaster that rivals human forecasters or even shows superhuman performance".
These papers do not communicate their results carefully enough, shaping public perception in inaccurate and misleading ways. Some examples of public discourse:
Ethan Mollick (>200k followers) tweeted the following about the paper Wisdom of the Silicon Crowd: LLM Ensemble Prediction Capabilities Rival Human Crowd Accuracy by Schoenegger et al.:
A post on Marginal Revolution with the title and abstract of the paper Approaching Human-Level Forecasting with Language Models by Halawi et al. elicits responses like:
"This is something that humans are notably terrible at, even if they're paid to do it. No surprise that LLMs can match us."
"+1 The aggregate human success rate is a pretty low bar"
A Twitter thread on LLMs Are Superhuman Forecasters by Phan et al., claiming that "AI […] can predict the future at a superhuman level", had more than half a million views within two days of being published.
The number of such papers on AI forecasting, and the vast amount of traffic on misleading claims, makes AI forecasting a uniquely misunderstood area of AI progress. And it's one that matters.
What does human-level or superhuman forecasting mean?
"Human-level" or "superhuman" is a hard-to-define concept. In an academic context, we need to work with a reasonable operationalization to compare the skill of an AI forecaster with that of humans.
One reasonable and practical definition of a superhuman AI forecaster is:
The AI forecaster is able to consistently outperform the crowd forecast on a sufficiently large number of randomly selected questions on a high-quality forecasting platform.[1]
(For a human-level forecaster, just replace "outperform" with "perform on par with".)
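
To make this operationalization concrete, here is a minimal Python sketch (ours, not from any of the papers) of how such a comparison might be scored: Brier scores for the AI forecaster and for the crowd forecast on the same resolved binary questions, plus a simple paired sign test to check the "consistently outperform" part. The question entries and the ai_prob/crowd_prob fields are hypothetical.

```python
# Hypothetical sketch of scoring an AI forecaster against the crowd forecast
# on the same resolved binary questions. Data and field names are made up.
from math import comb
from statistics import mean

def brier(prob: float, outcome: int) -> float:
    """Brier score for a binary question; lower is better."""
    return (prob - outcome) ** 2

def sign_test_p(wins: int, n: int) -> float:
    """One-sided p-value: chance of >= `wins` successes in n fair coin flips."""
    return sum(comb(n, k) for k in range(wins, n + 1)) / 2 ** n

# Each entry: AI forecast, crowd forecast at the same timestamp, resolved outcome (0/1).
questions = [
    {"ai_prob": 0.80, "crowd_prob": 0.70, "outcome": 1},
    {"ai_prob": 0.30, "crowd_prob": 0.20, "outcome": 0},
    {"ai_prob": 0.55, "crowd_prob": 0.65, "outcome": 1},
    # ... in practice, hundreds of randomly selected questions from a high-quality platform
]

ai_scores = [brier(q["ai_prob"], q["outcome"]) for q in questions]
crowd_scores = [brier(q["crowd_prob"], q["outcome"]) for q in questions]
print("mean Brier, AI:   ", round(mean(ai_scores), 4))
print("mean Brier, crowd:", round(mean(crowd_scores), 4))

# "Consistently outperform" should mean more than a lower mean on a handful of
# questions: e.g. winning on significantly more than half of a large random sample.
wins = sum(a < c for a, c in zip(ai_scores, crowd_scores))
ties = sum(a == c for a, c in zip(ai_scores, crowd_scores))
decided = len(questions) - ties
if decided:
    print("sign-test p-value:", round(sign_test_p(wins, decided), 4))
```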
Except for Halawi et al., the papers had a tendency to operationalize human-level or superhuman forecasting in ways falling short of that standard. Some issues we saw were:
- Looking at average/random instead of aggregate or top performance (for superhuman claims); see the sketch after this list
- Looking only at a small number of questions
- Choosing a (probably) relatively easy target (i.e. Manifold)
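
To illustrate the first point: beating the average individual forecaster is a much weaker claim than beating the aggregate, because the aggregate of many noisy forecasts typically scores better than a typical individual forecast. Here is a small simulated Python sketch; the numbers are invented for illustration and do not come from any of the papers.

```python
# Illustrative simulation: the crowd aggregate (median) usually has a lower Brier
# score than the average individual forecaster. All parameters are made up.
import random
from statistics import mean, median

random.seed(0)

def brier(prob: float, outcome: int) -> float:
    return (prob - outcome) ** 2

true_prob = 0.7                      # hypothetical underlying probability of the event
n_questions, n_forecasters = 200, 50
avg_individual_scores, aggregate_scores = [], []

for _ in range(n_questions):
    outcome = 1 if random.random() < true_prob else 0
    # Each forecaster reports the true probability plus idiosyncratic noise, clipped to [0, 1].
    forecasts = [min(1.0, max(0.0, random.gauss(true_prob, 0.2))) for _ in range(n_forecasters)]
    avg_individual_scores.append(mean(brier(f, outcome) for f in forecasts))
    aggregate_scores.append(brier(median(forecasts), outcome))

print("mean Brier of individuals:  ", round(mean(avg_individual_scores), 4))
print("mean Brier of the aggregate:", round(mean(aggregate_scores), 4))
```

The gap is purely an aggregation effect, which is one reason "matches the average human" and "matches the human crowd" should not be used interchangeably.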
Red flags for claims to (super)human AI forecasting accuracy
Our experience suggests there are a number of things that can go wrong when building AI forecasting systems, including:
1. Failing to find up-to-date information on the questions. For most questions, it's inconceivable that forecasts can be good without basic information.
Imagine trying to forecast the US presidential election without knowing that Biden dropped out.
2. Drawing on up-to-date, but low-quality information. Ample experience shows that low-quality information confuses LLMs even more than it confuses humans.
Imagine forecasting election outcomes with biased polling data.
Or, worse, imagine forecasting OpenAI revenue based on claims like
> The number of ChatGPT Plus subscribers is estimated between 230,000-250,000 as of October 2023.
without realising that this mixes up ChatGPT and ChatGPT mobile.
3. Lack of high-quality quantitative reasoning. For a decent number of questions on Metaculus, good forecasts can be "vibed" by skilled humans and perhaps LLMs. But for many questions, simple calculations ...