I think the asymmetry argument is quite different to the “bad futures” argument?
(Although I think the bad futures argument is one of the other good objections to the astronomical waste argument).
I think we might disagree on whether “astronomical waste” is a core longtermist claim—I think it is.
I don’t think either objection means that we shouldn’t care about extinction or about future people, but both drastically reduce the expected value of longtermist interventions.
And given that the counterfactual use of EA resources always has high expected value, the reduction in EV of longtermist interventions is action-relevant.
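To make that concrete, here is a minimal toy sketch with entirely made-up numbers (they are not anyone's actual estimates, and the variable names are just illustrative): if the objections cut the expected value of a longtermist intervention sharply while the counterfactual use of the same resources keeps a solid EV of its own, the ranking of options can flip, which is what makes the reduction action-relevant.

```python
# Toy sketch of why a large EV reduction is action-relevant when the
# counterfactual option is also valuable. All numbers are hypothetical
# illustrations, not estimates taken from this discussion.

longtermist_ev = 100.0    # hypothetical EV of a longtermist intervention (arbitrary units)
ev_discount = 0.05        # hypothetical: the objections leave only 5% of that EV
counterfactual_ev = 20.0  # hypothetical EV of the best near-term use of the same resources

adjusted_longtermist_ev = longtermist_ev * ev_discount  # 5.0

# Before the discount the longtermist option dominates; after it, the
# counterfactual wins, so the reduction changes what you should fund.
print(longtermist_ev > counterfactual_ev)           # True
print(adjusted_longtermist_ev > counterfactual_ev)  # False
```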
People who agree with asymmetry and people who are less confident in the probability of / quality of a good future would allocate fewer resources to longtermist causes than Will MacAskill would.
Someone who buys into the asymmetry should still want to improve the lives of future people who will necessarily exist.
In other words, the asymmetry doesn't rule out longtermist approaches whose goal is to improve average future well-being, conditional on humanity not going prematurely extinct.
Such approaches might include mitigating climate change, improving institutional design, and ensuring AI is aligned. For example, an asymmetrist should find it very bad if AI ends up enslaving us for the rest of time…
I don’t get why this is being downvoted so much. Can anyone explain?
I think that even in the EA community, there are people who vote based on whether they like the point being made, rather than on whether the logic underlying the point is valid. I think that explains the downvotes on my comment: some asymmetrists just don't like longtermism and want their asymmetry to be a valid way out of it.
I don’t necessarily think this phenomenon applies to downvotes on other comments I might make though—I’m not arrogant enough to think I’m always right!
I have a feeling this phenomenon is increasing. As the movement grows we will attract people with a wider range of views, so we may see more (unjustifiable) downvoting as people downvote things that don't align with their views, regardless of the strength of the argument. I'm not sure how far this will go, but to some degree I have already started to lose confidence in the relationship between comment/post quality and karma.
Yes, this is basically my view!
I think the upshot of this is that an asymmetrist who accepts the other key arguments underlying longtermism (future is vast in expectation, we can tractably influence the far future) should want to allocate all of their altruistic resources to longtermist causes. They would just be more selective about which specific causes.
For an asymmetrist, the stakes are still incredibly high, and it's not as if the marginal value of contributing to longtermist approaches such as AI alignment, climate change mitigation, etc. has been driven down to a very low level.
So I'm basically disagreeing with you when you say:
"People who agree with asymmetry and people who are less confident in the probability of / quality of a good future would allocate fewer resources to longtermist causes than Will MacAskill would."