You mention a few times that EV calculations are susceptible to motivated reasoning. But this conflicts with my understanding, which is that EV calculations are useful partly (largely) because they help to prevent motivated reasoning from guiding our decisions too heavily.
(e.g. You can imagine a situation where charity Y performs an intervention that is more cost-effective than charity X. By following an EV calculation, one might switch their donation from charity X to charity Y, even though charity X sounds intuitively better.)
Maybe you could include some examples/citations of where you think this “EV motivated reasoning” has occurred. Otherwise I find it hard to believe that EV calculations are worse than the alternative, from a “susceptible-to-motivated-reasoning” perspective (here, the alternative is not using EV calculations).
I don’t think EV calculations directly guard against motivated reasoning.
I think the main benefit of EV calculations is that they allow more precise comparison between interventions (compared to say, just calling many interventions ‘good’).
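To illustrate what I mean by "more precise comparison", here is a minimal sketch with entirely made-up numbers (the function, charities, and figures are hypothetical):

```python
# Minimal sketch of the comparison EV calculations enable: instead of
# calling both interventions 'good', we can rank them per dollar.
# All numbers are invented for illustration.
def ev_per_dollar(prob_success, value_if_success, cost):
    """Expected value generated per dollar spent on an intervention."""
    return prob_success * value_if_success / cost

charity_x = ev_per_dollar(0.9, 1_000, 50)  # intuitively appealing option
charity_y = ev_per_dollar(0.8, 3_000, 60)  # less appealing but cheaper per unit of value

print(charity_y > charity_x)  # Y comes out more cost-effective
```

The point is just that the calculation forces an explicit ranking that "both are good" does not.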
However, many EV calculations involve probabilities and estimates derived from belief rather than from empirical evidence. These probabilities and estimates are highly prone to motivated reasoning and cognitive biases.
For example, if I were to calculate the EV of an EA funding org investing in more transparency, I might need to estimate the percentage of grants that were approved but ideally should not have been. As someone with a strong prior in favour of transparency, I might estimate this to be much higher than someone with a strong prior against transparency would. This could have a large effect on my calculated EV.
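A quick sensitivity check makes this concrete. Everything below (the budget, the cost, the recovery rate, and both estimates of the bad-grant fraction) is made up purely for illustration:

```python
# How much does the belief-based input (the fraction of approved grants
# that ideally should not have been approved) drive the calculated EV of
# investing in transparency? All figures are hypothetical.
def ev_of_transparency(bad_grant_fraction,
                       grant_budget=10_000_000,    # annual grants, $
                       recovered_share=0.5,        # share of bad grants transparency prevents
                       transparency_cost=200_000): # cost of the transparency work, $
    """Value gained = misallocated money redirected, minus the cost."""
    misallocated = bad_grant_fraction * grant_budget
    return recovered_share * misallocated - transparency_cost

# A pro-transparency prior vs a sceptical prior on the same question:
print(ev_of_transparency(0.10))  # optimist's estimate: large positive EV
print(ev_of_transparency(0.01))  # sceptic's estimate: negative EV
```

A single belief-based input flips the sign of the conclusion, with every empirical input held fixed.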
That being said, there are certainly EV calculations where all the inputs can be pegged to empirical evidence, especially in the cause areas of international development and global health. These EV calculations are less prone to motivated reasoning, but it can still creep in: where empirical evidence is available from multiple sources, motivated reasoning may affect which source is used. (Guy Raveh points out some other ways that motivated reasoning can affect these calculations too.)
With sufficient transparency, I think EV calculations can help reduce motivated reasoning since people can debate the inputs into the EV calculation, allowing the probabilities and estimates derived from belief to be refined, which may make them more accurate than before.
I agree that EV calculations are less susceptible to motivated reasoning than alternative approaches, but I think they are very susceptible nonetheless, which is why I think we should make certain changes to how they are used and implement stronger safeguards against motivated reasoning.
Here are some places where motivated reasoning can come in. It’s past 2am here, so I’ll only give examples for some.
In which interventions you choose to compare or to ignore, or which aspects you choose to include in your assessment.
In how you estimate the consequences of your choices. Often, EV calculations in EA rely on guesses or deference to prediction markets (or even play-money platforms like Metaculus). These are all subject to biases as strong as you’d find anywhere else. As an explicit example, some longtermists like Bostrom rely on figures for how many people may live in the future (10^(a lot), allegedly), and these figures are almost purely fictional.
In how you choose to apply EV-maximisation reasoning in situations where it’s unclear if that’s the right thing to do. For example, if you’re not entirely risk-neutral, it only makes sense to try to maximise the expected value of decisions if you know there is a large number of independent ones. But this is not what we do:
a. We rank charities in ways that make donation decisions highly correlated with each other.
b. We treat sequential decisions as if they were independent even when that’s not true.
c. We use EV reasoning on big one-off decisions (like double-or-nothing experiments).
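A quick simulation of point (c), with made-up numbers: a double-or-nothing bet whose EV is strongly positive, yet which ruins you almost surely if you apply per-decision EV-maximisation by going all-in each time.

```python
import random

random.seed(0)

# A double-or-nothing gamble with positive expected value: each round,
# 60% chance to double the bankroll, 40% chance to lose everything.
# Staking everything has a per-round EV of +20%, so naive per-decision
# EV-maximisation says to go all-in every round. Numbers are illustrative.
def all_in(rounds=20, bankroll=1.0):
    """Go all-in every round; return the final bankroll."""
    for _ in range(rounds):
        bankroll = bankroll * 2 if random.random() < 0.6 else 0.0
        if bankroll == 0.0:
            break
    return bankroll

trials = 100_000
results = [all_in() for _ in range(trials)]
ruin_rate = sum(r == 0.0 for r in results) / trials

# The strategy's overall EV is 1.2**20 ~ 38x the starting bankroll,
# yet the true probability of ruin is 1 - 0.6**20 ~ 0.99996.
print(f"ruin probability ~ {ruin_rate:.4f}")
```

This is exactly the case where the "large number of independent decisions" justification for EV-maximisation fails: the decisions are one big correlated bet, and the EV is driven by a vanishingly unlikely jackpot.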