[thinking/rambling aloud] I feel like an "ideal reasoner" or something should indeed have that heuristic, but I feel unsure whether boundedly rational people internalising it more, or having it advocated to them more, would be net positive or net negative. (I feel close to 50–50 on this and haven't thought about it much; "unsure" doesn't mean "I suspect it'd probably be bad".)

I think this intersects with concerns about naive consequentialism and (less so) potential downsides of using explicit probabilities.
If I had to choose whether to move most of the world closer to naive consequentialism than it is now, and couldn't instead choose sophisticated consequentialism, I'd probably do that. But I'm less sure about EA grantmakers. And of course sophisticated consequentialism seems better.
Maybe there's a way we could pair this heuristic with some other heuristics or counter-examples such that the full package is quite useful. Or maybe adding more of this heuristic would already help "balance things out", since grantmakers may already be focusing somewhat too much on downside risk. I really don't know.
Hmm, I think this heuristic actually doesn't make sense for ideal (Bayesian) reasoners, since ideal reasoners can just multiply the EVs out for all actions and don't need weird approximations/heuristics.
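To make the contrast concrete, the "multiply the EVs out" step for an ideal reasoner is just probability-weighted arithmetic. A minimal sketch, with entirely made-up actions, probabilities, and utilities:

```python
# Minimal expected-value calculation over a handful of actions.
# All actions, probabilities, and utilities here are invented for illustration.

actions = {
    "fund_project_a": [(0.9, 10), (0.1, -50)],  # (probability, utility) pairs
    "fund_project_b": [(0.5, 30), (0.5, -5)],
    "do_nothing":     [(1.0, 0)],
}

def expected_value(outcomes):
    """Sum of probability-weighted utilities."""
    return sum(p * u for p, u in outcomes)

evs = {name: expected_value(outcomes) for name, outcomes in actions.items()}
best = max(evs, key=evs.get)  # an ideal reasoner just picks the argmax
```

The point of the surrounding discussion is that real people can't reliably fill in those probabilities and utilities, which is where heuristics come in.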
I broadly think this heuristic makes sense in a loose way in situations where the downside risks are not disproportionately high. I'm not sure what you mean by "sophisticated consequentialism" here, but I guess I'd sort of expect sophisticated consequentialism (at least in situations where explicit EV calculations are less practical) to include a variant of this heuristic somewhere.

I now think sophisticated consequentialism may not be what I really had in mind. Here's the text from the entry on naive consequentialism I linked to:
Consequentialists are supposed to estimate all of the effects of their actions, and then add them up appropriately. This means that they cannot just look at the direct and immediate effects of their actions, but also have to look at indirect and less immediate effects. Failing to do so amounts to applying naive consequentialism. That is to be contrasted with sophisticated consequentialism, which appropriately takes indirect and less immediate effects into account (cf. the discussion of "simplistic" vs. "correct" replaceability on 80,000 Hours' blog (Todd 2015)).

As for a concrete example, a naive conception of consequentialism may lead one to believe that it is right to break rules if doing so seems to have net positive effects on the world. Such rule-breaking normally has negative side-effects, however (e.g. it can lower the degree of trust in society, and in the rule-breaker's group in particular), which means that sophisticated consequentialism tends to be more opposed to rule-breaking than naive consequentialism.
I think maybe what I have in mind is actually "consequentialism that accounts appropriately for biases, model uncertainty, the optimizer's curse, the unilateralist's curse, etc." (This seems like a natural fit for the words "sophisticated consequentialism", but it sounds like that's not what the term is meant to mean.)
I'd be much more comfortable with someone having your heuristic if they were aware of those reasons why your EV estimates (whether implicit or explicit, qualitative or quantitative) should often be quite uncertain and may be systematically biased towards too much optimism about whatever choice you're most excited about. (That's not the same as saying EV estimates are useless, just that they should often be adjusted in light of such considerations.)
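For what it's worth, the optimism bias described above (essentially the optimizer's curse) is easy to demonstrate numerically. A toy Monte Carlo sketch, assuming every option has the same true value and each estimate is unbiased with Gaussian noise:

```python
import random

random.seed(0)  # for reproducibility

def optimism_bias(n_options=5, noise_sd=1.0, trials=10_000):
    """Average amount by which the chosen option's estimate exceeds its
    true value, when we always pick the option with the highest estimate."""
    total_gap = 0.0
    for _ in range(trials):
        # Every option's true value is 0; each estimate is unbiased but noisy.
        estimates = [random.gauss(0.0, noise_sd) for _ in range(n_options)]
        total_gap += max(estimates)  # chosen estimate minus true value (0)
    return total_gap / trials

bias = optimism_bias()
# Even though every individual estimate is unbiased, the selected option's
# estimate systematically overshoots its true value.
```

The standard remedy is to shade estimates back toward a prior in proportion to how noisy they are; the point here is only that unbiased inputs plus selection-of-the-best yields biased outputs.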