[thinking/rambling aloud] I feel like an "ideal reasoner" or something should indeed have that heuristic, but I feel unsure whether boundedly rational people internalising it more or having it advocated for to them more would be net positive or net negative. (I feel close to 50/50 on this and haven't thought about it much; "unsure" doesn't mean "I suspect it'd probably be bad".)
I think this intersects with concerns about naive consequentialism and (less so) potential downsides of using explicit probabilities.
If I had to choose whether to make most of the world closer to naive consequentialism than they are now, and I can't instead choose sophisticated consequentialism, I'd probably do that. But I'm not sure for EA grantmakers. And of course sophisticated consequentialism seems better.
Maybe there's a way we could pair this heuristic with some other heuristics or counter-examples such that the full package is quite useful. Or maybe adding more of this heuristic would already help "balance things out", since grantmakers may already be focusing somewhat too much on downside risk. I really don't know.
Hmm, I think this heuristic actually doesn't make sense for ideal (Bayesian) reasoners, since ideal reasoners can just multiply the EVs out for all actions and don't need weird approximations/heuristics.
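(A minimal sketch, with entirely made-up probabilities and payoffs, of what "just multiplying the EVs out" amounts to: score each available action by its probability-weighted value and take the maximum. Nothing here is from the original discussion; it's only meant to illustrate the point above.)

```python
# Toy expected-value maximisation: an idealised reasoner with a full model can
# score every action directly and needs no extra heuristics. All numbers are invented.
actions = {
    "fund_project_a": [(0.7, 100), (0.3, -20)],   # (probability, value) pairs
    "fund_project_b": [(0.5, 250), (0.5, -150)],
    "do_nothing":     [(1.0, 0)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

evs = {name: expected_value(outcomes) for name, outcomes in actions.items()}
best = max(evs, key=evs.get)
print(evs, "->", best)  # fund_project_a has the highest EV here (64)
```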
I broadly think this heuristic makes sense in a loose way in situations where the downside risks are not disproportionately high. I'm not sure what you mean by "sophisticated consequentialism" here, but I guess I'd sort of expect sophisticated consequentialism (at least in situations where explicit EV calculations are less practical) to include a variant of this heuristic somewhere.
I now think sophisticated consequentialism may not be what I really had in mind. Here's the text from the entry on naive consequentialism I linked to:

Consequentialists are supposed to estimate all of the effects of their actions, and then add them up appropriately. This means that they cannot just look at the direct and immediate effects of their actions, but also have to look at indirect and less immediate effects. Failing to do so amounts to applying naive consequentialism. That is to be contrasted with sophisticated consequentialism, which appropriately takes indirect and less immediate effects into account (cf. the discussion on "simplistic" vs. "correct" replaceability on 80,000 Hours' blog (Todd 2015)).

As for a concrete example, a naive conception of consequentialism may lead one to believe that it is right to break rules if it seems that that would have net positive effects on the world. Such rule-breaking normally has negative side-effects, however (e.g. it can lower the degree of trust in society, and for the rule-breaker's group in particular), which means that sophisticated consequentialism tends to be more opposed to rule-breaking than naive consequentialism.
I think maybe what I have in mind is actually "consequentialism that accounts appropriately for biases, model uncertainty, optimizer's curse, unilateralist's curse, etc." (This seems like a natural fit for the words sophisticated consequentialism, but it sounds like that's not what the term is meant to mean.)
I'd be much more comfortable with someone having your heuristic if they were aware of those reasons why your EV estimates (whether implicit or explicit, qualitative or quantitative) should often be quite uncertain and may be systematically biased towards too much optimism for whatever choice you're most excited about. (That's not the same as saying EV estimates are useless, just that they should often be adjusted in light of such considerations.)
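(As a hedged illustration of the optimizer's curse point above, not anything from the original discussion: the toy simulation below, with made-up numbers, shows that the raw EV estimate of whichever option looks best overstates that option's true value on average, while the same estimate shrunk toward a prior, a simple Bayesian adjustment, is roughly unbiased.)

```python
# Toy optimizer's-curse simulation; all distributions and constants are invented.
import random

random.seed(0)
N_OPTIONS, TRIALS = 10, 20_000
PRIOR_MEAN, PRIOR_SD, NOISE_SD = 0.0, 1.0, 1.0
raw_gap = shrunk_gap = 0.0

for _ in range(TRIALS):
    true_values = [random.gauss(PRIOR_MEAN, PRIOR_SD) for _ in range(N_OPTIONS)]
    estimates = [v + random.gauss(0.0, NOISE_SD) for v in true_values]
    best = max(range(N_OPTIONS), key=lambda k: estimates[k])  # option that *looks* best
    # Shrink the winning estimate toward the prior mean; with equal prior and
    # noise variances the Bayesian posterior puts weight 0.5 on the estimate.
    shrunk = PRIOR_MEAN + 0.5 * (estimates[best] - PRIOR_MEAN)
    raw_gap += estimates[best] - true_values[best]
    shrunk_gap += shrunk - true_values[best]

print(f"avg optimism of raw estimate for the top pick:    {raw_gap / TRIALS:+.2f}")
print(f"avg optimism of shrunk estimate for the top pick: {shrunk_gap / TRIALS:+.2f}")
```

(With these toy settings the first number comes out clearly positive and the second near zero; the right shrinkage weight depends on the prior and noise variances, which in real grantmaking are themselves uncertain.)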