I think your comment highlights interesting questions about precisely what the first statement means, and whether any of these estimates condition on one of the other statements being true.
But I think one can underrate the significance of something even if that thing is less cost-effective, on the margin, than something else. Toy example:
Marginal short-term x-risk reduction work turns out to score 100 for cost-effectiveness, while marginal (explicit) long-term x-risk reduction work turns out to score 80. (So Michael’s statement 2 is true.)
But EAs in general explicitly believe the latter scores 50, or seem to act/talk/think as though they do. (So Michael’s statement 1 is true.)
This matters because:
The cost-effectiveness ordering might later reverse without EAs realising it, because they’re updating from the wrong starting point or simply not paying attention.
Long-term x-risk work might be better on the margin for some people, due to personal fit, even if it’s less good on average. If our belief about its average marginal cost-effectiveness is wrong, we might not notice when this is the case.
Having more accurate beliefs about this may also help us form more accurate beliefs about related questions.
(It’s also possible that Michael had in mind a distinction between the value of long-term x-risk reduction and the value of explicit long-term x-risk reduction. But I’m not sure precisely what that distinction would be.)
This pretty much captures what I was thinking.