I’ve seen the asymmetry discussed multiple times on the forum—I think it is still the best objection to the astronomical waste argument for longtermism.
I don’t think this has been addressed enough by longtermists (I would count “longtermism rejects the asymmetry, and if you think the asymmetry is true then you probably reject longtermism” as addressing it).
The idea that “the future might not be good” comes up on the forum every so often, but this doesn’t really harm the core longtermist claims. The counter-argument is roughly:
- You still want to work on trajectory changes (e.g. ensuring that we don’t fall under the control of a stable totalitarian state)
- Since the error bars are ginormous and we’re pretty uncertain about the value of the future, you still want to avoid extinction so that we can figure this out, rather than locking in the vague sense we have today
I think the asymmetry argument is quite different to the “bad futures” argument?
(Although I think the bad futures argument is one of the other good objections to the astronomical waste argument).
I think we might disagree on whether “astronomical waste” is a core longtermist claim—I think it is.
I don’t think either objection means that we shouldn’t care about extinction or about future people, but both drastically reduce the expected value of longtermist interventions.
And given that the counterfactual use of EA resources always has high expected value, the reduction in EV of longtermist interventions is action-relevant.
People who agree with asymmetry and people who are less confident in the probability of / quality of a good future would allocate fewer resources to longtermist causes than Will MacAskill would.
Someone who buys into the asymmetry should still want to improve the lives of future people who will necessarily exist.
In other words, the asymmetry doesn’t rule out longtermist approaches that aim to improve average future well-being, conditional on humanity not going prematurely extinct.
Such approaches might include mitigating climate change, improving institutional design, and ensuring AI is aligned. For example, an asymmetrist should find it very bad if AI ends up enslaving us for the rest of time…
I don’t get why this is being downvoted so much. Can anyone explain?
I think that even in the EA community, there are people who vote based on whether they like the point being made, rather than whether the logic underlying it is valid. I suspect this explains the downvotes on my comment: some asymmetrists just don’t like longtermism and want their asymmetry to be a valid way out of it.
I don’t necessarily think this phenomenon applies to downvotes on other comments I might make though—I’m not arrogant enough to think I’m always right!
I have a feeling this phenomenon is becoming more common. As the movement grows we will attract people with a wider range of views, and so we may see more (unjustifiable) downvoting as people downvote things that don’t align with their views, regardless of the strength of the argument. I’m not sure this will happen, but it might, and to some degree I have already started to lose confidence in the relationship between comment/post quality and karma.
Yes, this is basically my view!
I think the upshot of this is that an asymmetrist who accepts the other key arguments underlying longtermism (that the future is vast in expectation and that we can tractably influence the far future) should still want to allocate all of their altruistic resources to longtermist causes. They would just be more selective about which specific causes to support.
For an asymmetrist, the stakes are still incredibly high, and it’s not as if the marginal value of contributing to longtermist approaches such as AI alignment, climate change, etc. has been driven down to a very low level.
So I’m basically disagreeing with you when you say:

> People who agree with asymmetry and people who are less confident in the probability of / quality of a good future would allocate fewer resources to longtermist causes than Will MacAskill would.
This post by Rohin attempts to address it. If you hold the asymmetry view, then you would allocate more resources to [1] causing a new neutral life to come into existence (valued at -1 cent) and then, once they exist, improving that neutral life (valued at many dollars) than you would to [2] causing a new happy life to come into existence (also valued at -1 cent). Yet both result in the same world.
In general, you can make a Dutch book argument like this whenever your resource allocation doesn’t correspond to the gradient of a value function (i.e. whenever your resources aren’t aimed purely at improving the state of the world).
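To make the arithmetic concrete, here is a minimal sketch of that Dutch book. The numbers and variable names are my own illustrative assumptions (not taken from Rohin’s post): creating a life is valued at -1 cent, improving an existing neutral life at $10.

```python
# Minimal sketch of the Dutch book arithmetic (illustrative numbers only).
# Under the asymmetry, bringing a new life into existence is valued at roughly
# zero (here: -1 cent), while improving an already-existing neutral life to a
# happy one is valued highly (here: $10).

CREATE_LIFE_VALUE = -0.01   # assumed value of causing a new life to exist
IMPROVE_LIFE_VALUE = 10.00  # assumed value of improving an existing neutral life

# Route [1]: create a neutral life, then (once the person exists) improve it.
route_1 = CREATE_LIFE_VALUE + IMPROVE_LIFE_VALUE   # = 9.99

# Route [2]: create the happy life directly.
route_2 = CREATE_LIFE_VALUE                        # = -0.01

# Both routes end in the same world (one extra happy person), yet the
# asymmetrist values route [1] about $10 more than route [2] -- the gap a
# Dutch bookie can exploit.
print(f"Value of route [1]: {route_1:+.2f}")
print(f"Value of route [2]: {route_2:+.2f}")
print(f"Exploitable gap:    {route_1 - route_2:.2f}")
```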
This only applies to flavors of the Asymmetry that treat happiness as intrinsically valuable, such that you would pay to add happiness to a “neutral” life (without relieving any suffering by doing so). If the reason you don’t consider it good to create new lives with more happiness than suffering is that you don’t think happiness is intrinsically valuable, at least not at the price of increasing suffering, then you can’t get Dutch booked this way. See this comment.
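To illustrate why that flavour escapes the Dutch book, here is a minimal sketch assuming a purely suffering-focused valuation; the `suffering_focused_value` function and its numbers are hypothetical, chosen only to show that both routes come out with the same value, so there is no gap to exploit.

```python
# Sketch of a suffering-focused valuation (hypothetical, for illustration):
# only relieved suffering counts; adding happiness to a life that contains no
# suffering is valued at zero.

def suffering_focused_value(suffering_before: float, suffering_after: float) -> float:
    """Value of an act = suffering relieved; added happiness is not counted."""
    return suffering_before - suffering_after

CREATE_LIFE_VALUE = -0.01  # as above, creating a new life is valued at roughly zero

# Route [1]: create a neutral life (no suffering), then add happiness to it.
route_1 = CREATE_LIFE_VALUE + suffering_focused_value(0.0, 0.0)

# Route [2]: create the happy life directly.
route_2 = CREATE_LIFE_VALUE

# Both routes are valued identically, so this valuation assigns no premium to
# either path and cannot be Dutch booked in the way described above.
assert route_1 == route_2
```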