
Content provided by Razib Khan. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Razib Khan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Richard Hanania: markets in every prediction

1:00:52
 

How do we know when to trust the experts? On January 23rd, 2020, Vox published a piece titled The evidence on travel bans for diseases like coronavirus is clear: They don't work. Journalists are largely limited to reporting what experts tell them, and in this case, it seems Vox's experts misled them. By December 2020, The New York Times could reflect that "interviews with more than two dozen experts show the policy of unobstructed travel was never based on hard science. It was a political decision, recast as health advice, which emerged after a plague outbreak in India in the 1990s." The coronavirus pandemic has highlighted for many that expertise and specialized knowledge are not so straightforward, that "trusting the science" is rarely simple, and that hasty decisions can have global consequences. More narrowly, the political scientist Philip Tetlock's 2005 Expert Political Judgment: How Good Is It? How Can We Know? reported that the most confident pundits often prove the least accurate.

To get around the biases and limitations of individuals, there has been a recent vogue for "prediction markets," which draw on distributed knowledge and bake "skin in the game" into the forecasting process. On this episode of Unsupervised Learning, Richard Hanania joins Razib to discuss his think tank's collaboration with UT Austin's Salem Center for Policy and Manifold Markets on a forecasting tournament. What is their goal? What are the limitations of these sorts of markets? Why do they not care about contestants' credentials? Razib pushes Hanania on the idea that there is no expertise, and they discuss domains where the application of specialized knowledge has concrete consequences (civil engineering) as opposed to those where it does not (political and foreign policy forecasting).

Hanania also addresses his decision to leave Twitter after his latest ban.
