So you’d in general be correct in applying Laplace’s law to this kind of scenario, except that you run into selection effects (a keyword to Google is anthropic effect, or anthropic principle). I.e., suppose that the chance of human extinction were actually much higher, on the order of 10% per year. Then, after 250 years, Earth will probably not have any humans left, but if it does, and the survivors use Laplace’s rule to estimate the annual chance of extinction, they will drastically underestimate it (equivalently, overestimate their chances of survival). That is, they can’t actually update on extinction happening, because if it happens nobody will be there to update.
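To put rough numbers on that (using the 10%-per-year figure above, which is purely illustrative), here is a minimal sketch in Python:

```python
# Worked numbers for the selection-effect example above.
# Assumption (from the comment): the true chance of extinction is 10% per year.

true_annual_risk = 0.10
years = 250

# Probability that a world with this hazard rate survives 250 years.
p_survive = (1 - true_annual_risk) ** years

# What Laplace's rule of succession would tell the survivors:
# after `years` observations with zero extinctions, the estimated
# chance of extinction next year is 1 / (years + 2).
laplace_estimate = 1 / (years + 2)

print(f"P(survive 250 years at 10%/yr): {p_survive:.2e}")        # ~3.6e-12
print(f"Laplace estimate for survivors: {laplace_estimate:.4f}")  # ~0.004
print(f"True annual risk:               {true_annual_risk:.4f}")  # 0.100
```

Almost no worlds survive, and the few that do would, by Laplace’s rule, put the annual risk at roughly 0.4% — about 25 times too low.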
There is a magic trick where I give you a deck of cards, tell you to shuffle it, and choose a card however you want, and then I guess it correctly. Most of the time it doesn’t work, but on the 1⁄52 chance that it does, it looks really impressive (or so I’m told, I didn’t have the patience to do it enough times). There is also a scam based on a similar principle.
On the other hand, Laplace’s law is empirically really quite brutal, and in my experience tends to output probabilities that are too high. In particular, I’d assign some chance to there being no black balls, and that would eventually bring my probability of extinction close to 0, whereas Laplace’s law always predicts that an event will happen if given enough time (even if it has never happened before).
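To make that contrast concrete, here is a minimal sketch comparing Laplace’s rule with a mixture prior that puts some weight on “there are no black balls at all” (p = 0). The 30% prior weight is an arbitrary illustrative choice, not a considered estimate:

```python
# Laplace's rule (uniform prior on the per-year probability p) versus a
# mixture prior that also puts weight w on p = 0 exactly ("no black balls").

def posterior_mass_on_zero(w, n):
    """Posterior probability that p = 0 after n event-free years.

    The likelihood of n misses is 1 if p = 0, and integrates to 1/(n+1)
    under a uniform prior on p in (0, 1].
    """
    return w / (w + (1 - w) / (n + 1))

def laplace_next_year(n):
    """Laplace's rule of succession: P(event next year) after n misses."""
    return 1 / (n + 2)

for n in (10, 100, 1000, 10_000):
    print(f"n = {n:>6}: "
          f"Laplace P(next year) = {laplace_next_year(n):.5f}, "
          f"mixture P(p = 0)     = {posterior_mass_on_zero(0.3, n):.5f}")
```

Under the plain uniform prior the event still happens eventually with probability 1, whereas under the mixture the posterior weight on p = 0 climbs toward 1 with every event-free year, so the probability that the event ever happens falls toward 0.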
Overall, I guess I’d be more interested in trying to figure out the pathways to extinction and their probabilities. For technologies which already exist, that might involve looking at close calls, e.g., nuclear close calls.
Thanks for your comment!
I hadn’t thought about selection effects, thanks for pointing that out. I suppose Bostrom actually describes black balls as technologies that cause catastrophe, but doesn’t set the bar as high as extinction. In that case, drawing a black ball doesn’t drastically affect future populations, so perhaps selection effects don’t apply?
Also, I think in The Precipice Toby Ord makes some inferences about natural extinction risk from the length of time humanity has existed? Though I may not be remembering correctly. I think the logic was something like: “Assume we’re randomly distributed amongst possible humans. If existential risk were very high, then there’d be only a very small set of worlds in which humans have been around this long, and it would be very unlikely that we’d find ourselves in such a world. Therefore it’s more likely that our estimate of existential risk is too high.” This seems quite similar to my model of making inferences from not having previously drawn a black ball. I don’t think I understand selection effects too well, though, so I’d appreciate any comments on this!
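If I’m remembering the shape of the argument correctly, the arithmetic behind it would look something like the sketch below (the ~2,000-century figure for the age of Homo sapiens and the candidate risk levels are my own illustrative assumptions, not numbers from the book):

```python
# A rough numerical gloss on the argument sketched above: if the natural
# per-century extinction risk were high, surviving roughly 2,000 centuries
# of Homo sapiens' existence would be astronomically unlikely.

centuries = 2000  # ~200,000 years, an approximation

for risk_per_century in (0.10, 0.01, 0.001, 0.0001):
    p_survive = (1 - risk_per_century) ** centuries
    print(f"risk {risk_per_century:>6.4f} per century -> "
          f"P(surviving {centuries} centuries) = {p_survive:.3e}")
```

Only the lower risk levels make our long track record at all likely, which is the sense in which the observation pushes against very high estimates of natural extinction risk.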