EA - [Draft] The humble cosmologist's P(doom) paradox by titotal

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Draft] The humble cosmologist's P(doom) paradox, published by titotal on March 17, 2024 on The Effective Altruism Forum.

[This post has been published as part of Draft Amnesty Week. I did quite a bit of work on this post, but abandoned it because I was never sure of my conclusions. I don't do a lot of stats work, so I could never be sure if I was missing something obvious, and I'm not certain of the conclusions to draw. If this gets a good reception, I might finish it off into a proper post.]

Part 1: Bayesian distributions

I'm not sure that I'm fully on board the "Bayesian train". I worry about garbage in, garbage out, that it will lead to overconfidence about what are ultimately just vibes, etc. But I think if you are doing Bayes, you should at least try to do it right.

See, in EA/rationalist circles, the discussion of Bayesianism often stops at Bayes 101. For example, the "Sequences" cover the "mammogram problem" in detail, but never really cover how Bayesian statistics works outside of toy examples. The CFAR handbook doesn't either. Of course, plenty of the people involved have read actual textbooks and the like (and research institutes generally use proper statistics), but I'm not sure that the knowledge has spread its way around to the general EA public.

In the classic mammogram problem (I won't cover the math in detail because there are 50 different explainers), both your prior probability and the amount you should update are well-established, known, exact numbers. So you have your initial prior of, say, 1% that someone has cancer, and then you can calculate a likelihood ratio of exactly 10:1 resulting from a positive test, getting you an exactly computable posterior: odds of 10:99, or about a 9% chance that the person has cancer after the test.

Of course, in real life there is often no accepted, exact number for your prior or for your likelihood ratio. A common way to deal with this in EA circles is to just guess. Do aliens exist? Well, I guess there is a prior of 1% that they do, and then I'll guess a likelihood ratio of 10:1 from the fact that we see so many UFO reports, so the final probability of aliens existing is now roughly 10%. [magnus vinding example] Just state that the numbers are speculative, and it'll be fine. Sometimes people don't even bother with the Bayes rule part of it, and just nudge some numbers around.

I call this method "pop-Bayes". Everyone acknowledges that this is an approximation, but the reasoning is that some numbers are better than no numbers. And according to the research of Philip Tetlock, people who follow this technique, and regularly check the results of their predictions, can do extremely well at forecasting geopolitical events. Note that for practicality reasons they only tested forecasting for near-term events where they thought the probability was roughly in the 5-95% range.
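To make the update arithmetic in these examples concrete, here is a minimal sketch of the odds-form Bayes update that both the mammogram and aliens calculations rely on. The function and the numbers printed are my illustration, not code from the post.

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability via the odds form of Bayes' rule:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# A 1% prior updated on 10:1 evidence, as in both examples above:
print(bayes_update(0.01, 10.0))  # 0.0917... -- about 9%, often rounded to 10%
```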
Now let's look at the following scenario (most of this is taken from this tutorial): Your friend Bob has a coin of unknown bias. It may be fair, or it may be weighted to land more often on heads or tails. You watch them flip the coin 3 times, and each time it comes up heads. What is the probability that the next flip is also heads?

Applying pop-Bayes to this starts off easy. Before seeing any flip outcomes, the prior of your final flip being heads is obviously 0.5, just from symmetry. But then you have to update this based on the first flip being heads. To do this, you have to estimate P(E|H) and P(E|~H). P(E|H) corresponds to "the probability of this flip having turned up heads, given that my eventual flip outcome is heads". How on earth are you meant to calculate this?

Well, the key is to stop doing pop-Bayes and start doing actual Bayesian statistics. Instead of reducing your prior to a single number, you build a distribution for the parameter of coin bias, with 1 corresponding to fully...
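The draft is cut off here, but the approach it is describing is the standard one: put a prior distribution over the coin's bias, multiply it by the likelihood of each observed flip, renormalize, and read the next-flip probability off the posterior. Below is a minimal sketch of that calculation on a discrete grid; the uniform prior is my assumption (the truncated text has not yet specified one), and under it, three observed heads give a 4/5 chance of heads on the next flip, matching Laplace's rule of succession.

```python
import numpy as np

# Grid over the coin-bias parameter: 0 = always tails, 1 = always heads.
bias = np.linspace(0.0, 1.0, 1001)

# Assumed uniform prior over the bias (the post is cut off before it
# specifies the prior it would use).
posterior = np.ones_like(bias)
posterior /= posterior.sum()

# Bayes-update on each of the three observed heads: the likelihood of
# heads given a bias value is just the bias itself.
for _ in range(3):
    posterior *= bias
    posterior /= posterior.sum()

# Posterior predictive: probability that the next flip is heads.
p_next_heads = float(np.sum(posterior * bias))
print(p_next_heads)  # ~0.800, i.e. (3+1)/(3+2) by Laplace's rule of succession
```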