There’s actually a thing called the Satisficer’s Curse (pdf) which is even more general:

The Satisficer’s Curse is a systematic overvaluation that occurs when any uncertain prospect is chosen because its estimate exceeds a positive threshold. It is the most general version of the three curses, all of which can be seen as statistical artefacts.
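To see the artefact concretely, here is a minimal Monte Carlo sketch (my own illustration, not from the linked paper), assuming every prospect’s true value is zero and estimates are just Gaussian noise; the function and parameter names are made up for the example:

```python
import random

def satisficers_curse_demo(n_prospects=100_000, threshold=1.0, noise_sd=1.0, seed=0):
    """Monte Carlo sketch of the Satisficer's Curse.

    Every prospect has true value 0, but its estimate is the true value plus
    Gaussian noise.  We 'satisfice' by accepting any prospect whose estimate
    exceeds the threshold, then compare accepted estimates to true values.
    """
    rng = random.Random(seed)
    accepted_estimates = []
    for _ in range(n_prospects):
        true_value = 0.0                                 # nothing truly clears the bar
        estimate = true_value + rng.gauss(0.0, noise_sd)
        if estimate > threshold:                         # satisficing rule
            accepted_estimates.append(estimate)

    mean_estimate = sum(accepted_estimates) / len(accepted_estimates)
    print(f"accepted {len(accepted_estimates)} of {n_prospects} prospects")
    print(f"mean estimated value of accepted prospects: {mean_estimate:.2f}")
    print("mean true value of accepted prospects:      0.00")

if __name__ == "__main__":
    satisficers_curse_demo()
```

With these numbers the accepted prospects’ estimates average roughly 1.5, while their true values average exactly 0: that gap is the systematic overvaluation described above.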
Also, if your criterion for choosing an intervention is how frequently it still looks good under different models and priors, as people seem to be suggesting in lieu of EV maximization, you will still get similar curses; they will just inflate the number of models/priors under which the intervention looks good, rather than the EV estimate itself.
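The same effect can be shown for the model-count criterion with a companion sketch (again my own, with hypothetical numbers): every intervention has the same true value, each of several models adds independent noise, we pick whichever intervention clears the bar under the most models, and then re-score the winner with fresh model noise:

```python
import random

def model_count_curse_demo(n_interventions=50, n_models=10, threshold=0.0,
                           noise_sd=1.0, n_rescores=200, seed=1):
    """Selecting by 'number of models under which it looks good' is also cursed.

    All interventions have true value 0 and each model's estimate is that value
    plus independent noise, so no intervention is genuinely more robust than
    another.  The winner's original count of favorable models still exceeds
    what an unbiased re-evaluation gives it.
    """
    rng = random.Random(seed)

    def count_favorable(true_value):
        # number of models whose noisy estimate exceeds the threshold
        return sum(true_value + rng.gauss(0.0, noise_sd) > threshold
                   for _ in range(n_models))

    true_values = [0.0] * n_interventions
    first_counts = [count_favorable(v) for v in true_values]
    winner = max(range(n_interventions), key=lambda i: first_counts[i])

    # unbiased re-evaluations of the winner with fresh model noise
    avg_fresh = sum(count_favorable(true_values[winner])
                    for _ in range(n_rescores)) / n_rescores

    print(f"winner looked good under {first_counts[winner]} of {n_models} models")
    print(f"fresh re-evaluations say about {avg_fresh:.1f} of {n_models}")

if __name__ == "__main__":
    model_count_curse_demo()
```

The selected intervention typically looks good under eight or nine of the ten models at selection time but only about five on re-evaluation, which is the same curse applied to the count of models/priors instead of to the EV number.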
Isn’t this essentially a reformulation of the common EA argument that the highest-impact ideas are likely to be “weird-sounding” or unintuitive? I think it’s a strong point in favor of explicit modelling, but I want to avoid double-counting evidence if they are in fact similar arguments.
Nah, I’m just saying that a curse applies to every method, so it doesn’t tell us to use a particular method. I’m excluding arguments from the issue, not bringing them in. So if we were previously thinking that weird causes are good and common sense/model pluralism aren’t useful, then we should just stick to our guns. But if we were previously thinking that common sense/model pluralism are generally more accurate anyway, then we should stick with them.