The EA community generally underrates the significance of long-term x-risk reduction: 3 in 4
Marginal work on (explicit) long-term x-risk reduction is more cost-effective than marginal work on short-term x-risk reduction: 1 in 3
I’m probably being dumb here, but I don’t understand this conjunction of probabilities. What does it mean for EAs to underrate something that’s not worth doing on the margin? Is the idea that there are increasing returns to scale, such that even though marginal work on explicit long-term x-risk reduction is not worthwhile now, it would be if EAs correctly rated long-term x-risk reduction? Or is this a complicated “probability over probabilities” question about value of information?
I think your comment highlights interesting questions about precisely what the first statement means, and whether or not any of these estimates are conditioning on one of the other statements being true.
But I think one can underrate the significance of something even if that thing is less cost-effective, on the margin, than something else. Toy example:
Marginal short-term x-risk reduction work turns out to score 100 for cost-effectiveness, while marginal (explicit) long-term x-risk reduction work turns out to score 80. (So Michael’s statement 2 is true.)
But EAs in general explicitly believe the latter scores 50, or seem to act/talk/think as though they do. (So Michael’s statement 1 is true.)
This matters because:
The cost-effectiveness ordering might later reverse without EAs realising it, because they’re updating from the wrong starting point or simply not paying attention (see the sketch at the end of this comment).
Long-term x-risk work might be better on the margin for some people, due to personal fit, even if it’s less good on average. If we have the wrong belief about its marginal cost-effectiveness on average, then we might not notice when this is the case.
Having more accurate beliefs about this may help us have more accurate beliefs about other things.
(It’s also possible that Michael had in mind a distinction between the value of long-term x-risk reduction and the value of explicit long-term x-risk reduction. But I’m not sure precisely what that distinction would be.)
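To make the “updating from the wrong starting point” worry concrete, here’s a minimal sketch in Python. All the numbers are hypothetical, continuing the toy example above, and the 1.3 evidence multiplier is made up purely for illustration:

```python
# Hypothetical numbers, continuing the toy example above.
true_longterm = 80      # actual marginal cost-effectiveness of long-term work
believed_longterm = 50  # what EAs in general believe it is
shortterm = 100         # marginal cost-effectiveness of short-term work

# Suppose new evidence suggests long-term work is 30% more effective than
# previously estimated (the 1.3 multiplier is invented for illustration).
update_factor = 1.3

print(believed_longterm * update_factor)  # 65.0  -> still looks worse than 100
print(true_longterm * update_factor)      # 104.0 -> the true ordering has reversed
```

Updating the mistaken estimate of 50 still leaves long-term work looking worse than short-term work, even though the same evidence applied to the true value of 80 means the ordering has actually flipped.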
This pretty much captures what I was thinking.
In addition to what Michael A. said, a 1 in 3 chance that cause A is more effective than cause B means that, even though we should generally prefer cause B, there could be high value in doing more prioritization research on A vs. B, because it’s not too unlikely that we’d conclude A > B. So “The EA community generally underrates the significance of long-term x-risk reduction” could mean there’s not enough work on estimating the expected value of long-term x-risk reduction.
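A minimal sketch of that value-of-information point, with made-up numbers: assume cause B is worth 100, and cause A is worth 120 in the 1-in-3 world where A > B and 80 otherwise:

```python
p_a_better = 1 / 3       # P(A > B), per the estimate above
value_b = 100            # hypothetical value of cause B
value_a_if_better = 120  # hypothetical value of A in the world where A > B
value_a_if_worse = 80    # hypothetical value of A in the world where A < B

# Without research, we act on the current best guess and pick B.
ev_without_research = value_b

# With idealised research that reveals which world we're in, we pick
# whichever cause is better in that world.
ev_with_research = (
    p_a_better * max(value_a_if_better, value_b)
    + (1 - p_a_better) * max(value_a_if_worse, value_b)
)

print(ev_with_research - ev_without_research)  # ~6.67: research has positive EV
```

Real research is of course costly and imperfect, but the point stands: even a 1 in 3 chance of a reversal can justify further prioritization work.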
Got it, thanks! Yeah this is what I meant by “probabilities over probabilities.”