Here are some places where motivated reasoning can come in. It's past 2am here, so I'll only give examples for some of them.
1. In which interventions you choose to compare or to ignore, and which aspects you choose to include in your assessment.
2. In how you estimate the consequences of your choices. EV calculations in EA often rely on guesses, or on deference to prediction markets (or even to non-monetary forecasting platforms like Metaculus), and these are all subject to biases as strong as you'd find anywhere else. As an explicit example, some longtermists like Bostrom rely on figures for how many people may live in the future (10^(a lot), allegedly), and these figures are almost purely fictional; the first sketch after this list shows how completely such a figure can dominate a calculation.
3. In how you choose to apply EV-maximisation reasoning in situations where it's unclear that it's the right thing to do. For example, if you're not entirely risk-neutral, maximising the expected value of each decision only makes sense when you face a large number of independent decisions, so that the outcomes can average out (see the second sketch after this list). But this is not what we do:
a. We rank charities in ways that make donation decisions highly correlated with each other.
b. We treat sequential decisions as if they were independent even when that’s not true.
c. We use EV reasoning on big one-off decisions (like double-or-nothing gambles).
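
To make the second point concrete, here is a toy illustration (the risk-reduction figure and the candidate population sizes are invented for the example, not taken from any published model) of how a longtermist-style EV estimate is driven almost entirely by whichever "number of future people" you plug in:

```python
# Toy illustration: EV of an extinction-risk intervention, computed as
# (assumed risk reduction) x (assumed number of future lives).
# Both inputs are guesses; the output simply tracks the biggest guess.

risk_reduction = 1e-9  # hypothetical: shaves 1-in-a-billion off extinction risk

for exponent in (10, 30, 52):  # wildly different guesses for future population
    future_lives = 10.0 ** exponent
    ev_lives_saved = risk_reduction * future_lives
    print(f"assuming 10^{exponent} future lives -> EV ≈ {ev_lives_saved:.1e} lives saved")

# The answer spans 42 orders of magnitude depending on which exponent you
# pick -- i.e. the conclusion is set by the most speculative input.
```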
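
And to make the third point concrete, here is a minimal Monte Carlo sketch (all payoff numbers invented) of why per-decision EV maximisation is usually justified by having many independent bets: with many independent positive-EV bets you almost never end up behind, while a single bet, or many perfectly correlated ones (e.g. donations all following the same ranking), leaves you exposed to the full variance even though the expected value per bet is identical.

```python
import random

# A positive-EV gamble: 10% chance of +20, otherwise -1 (EV = +1.1 per bet).
def gamble(rng):
    return 20.0 if rng.random() < 0.10 else -1.0

def prob_ending_behind(n_bets, correlated, trials=50_000):
    """Estimate the probability that the total payoff over n_bets is negative.
    If `correlated`, all bets in a trial share one outcome (as when every
    donation follows the same charity ranking); otherwise they are independent."""
    rng = random.Random(0)
    behind = 0
    for _ in range(trials):
        if correlated:
            total = n_bets * gamble(rng)
        else:
            total = sum(gamble(rng) for _ in range(n_bets))
        if total < 0:
            behind += 1
    return behind / trials

for n_bets, correlated in [(1, False), (100, False), (100, True)]:
    label = "correlated" if correlated else "independent"
    print(f"{n_bets:>3} {label} bet(s): P(ending behind) ≈ {prob_ending_behind(n_bets, correlated):.2f}")

# With 100 independent bets the chance of ending behind is only a few percent;
# with one bet, or 100 perfectly correlated ones, it is ~90%, despite the
# identical +1.1 EV per bet. If you are not risk-neutral, that gap is the whole point.
```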