I think the upshot of this is that an asymmetrist who accepts the other key arguments underlying longtermism (the future is vast in expectation, and we can tractably influence it) should want to allocate all of their altruistic resources to longtermist causes. They would just be more selective about which specific causes they support.
For an asymmetrist, the stakes are still incredibly high, and it’s not as if the marginal value of contributing to longtermist approaches such as AI alignment, climate change, etc. has been driven down to a very low level.
So I’m basically disagreeing with you when you say:
People who agree with asymmetry and people who are less confident in the probability of / quality of a good future would allocate fewer resources to longtermist causes than Will MacAskill would.