Why expect “heuristics” to be robust to unknown unknowns?
I often read/hear claims that, if we’re worried that our evaluations of interventions won’t hold up under unknown unknowns, we should follow (simple) heuristics. But what precisely is the argument for this? This isn’t a rhetorical question — I’m just noting my confusion and want to understand this view better.
Interested to hear more from those who endorse this view!
Out of curiosity, any particular examples of such claims?
I partly had in mind personal communications, but here are some public examples (with very brief summaries of my reactions, not fleshed-out counterarguments):
In “Sequence thinking vs. cluster thinking”, Holden says, “For example, obeying common-sense morality (“ends don’t justify the means”) heuristics seems often to lead to unexpected good outcomes, and contradicting such morality seems often to lead to unexpected bad outcomes.”
I guess the argument is supposed to be that we have empirical evidence of heuristics working well in this sense. But on its face, this just pushes the question back to why we should expect “how well a strategy works under unknown unknowns” to generalize so cleanly from local scales to longtermist scales. (Related discussion here.)
“Heuristics for clueless agents” claims that “heuristics produce effective decisions without demanding too much of ordinary decision-makers.”
Their arguments seem to be some combination of “in some decision situations, it’s pretheoretically clear which decision procedures are more or less ‘effective’” (Sec. 5) and “heuristics have theoretical justification based on the bias-variance tradeoff” (Sec. 7). But pretheoretic judgments about effectiveness from a longtermist perspective seem extremely unreliable, and appeals to the bias-variance tradeoff seem irrelevant when the problem (under unknown unknowns) is model misspecification: trading bias for variance only helps within the assumed model, whereas unknown unknowns are precisely the cases where the assumed model is wrong.
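To make that last point concrete, here is a toy numerical sketch of my own (not from the paper, and with purely illustrative numbers and shrinkage factor): a low-variance “heuristic” estimator and an unbiased “complex” estimator are both fit under an assumed sampling model P, but the payoff that actually matters has a different mean. The bias-variance tradeoff shows up in the small within-model errors, while the misspecification gap dominates the error against the true target for both estimators.

```python
import numpy as np

# Toy setup (hypothetical): we estimate a mean payoff from samples drawn under an
# assumed model P with mean mu_P, but the quantity that actually matters has mean mu_Q.
rng = np.random.default_rng(0)
mu_P, mu_Q = 1.0, 3.0        # assumed vs. actual mean (illustrative numbers)
n, n_trials = 5, 100_000

def mse(estimates, target):
    return np.mean((estimates - target) ** 2)

samples = rng.normal(mu_P, 1.0, size=(n_trials, n))
est_unbiased = samples.mean(axis=1)        # "complex": unbiased, higher variance
est_shrunk = 0.8 * samples.mean(axis=1)    # "heuristic": shrunk, lower variance

print("Error within the assumed model (target mu_P):")
print("  unbiased:", mse(est_unbiased, mu_P))   # ~0.20
print("  shrunk:  ", mse(est_shrunk, mu_P))     # ~0.17  <- the bias-variance tradeoff pays off here

print("Error against what actually matters (target mu_Q):")
print("  unbiased:", mse(est_unbiased, mu_Q))   # ~4.2
print("  shrunk:  ", mse(est_shrunk, mu_Q))     # ~5.0
# Both errors are dominated by the (mu_P - mu_Q)^2 = 4 misspecification term;
# tuning bias vs. variance within P only shuffles the small residual around.
```

This is just a sketch of the conceptual point, under the assumption that “unknown unknowns” behave like a gap between the model we evaluate under and the situation we actually face.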
Appreciate the examples, especially the Holden essay, which was the main reason I started doing more cluster reasoning to form decision-oriented views. And thanks for the pointer to your writeup; you’ve given me food for thought.