Just to second this, because it seems to be a really common mistake: Greaves and MacAskill stress in the strong longtermism paper that the aim is to advance an argument about what someone should do with their impartial altruistic budget (of time or resources), not to tell anyone how large that budget should be in the first place.
Also, I think the author could avoid what they see as a "non-rigorous" decision to weight the short term and the long term the same by reconceptualising the uneasiness around longtermism dominating their actions as an uneasiness with their totally impartial budget taking up more space in their life. Everyone I have talked to about this feels a pull to support present-day people and problems alongside the future, so it might help to simply bracket off the present-day section of your commitments away from the totally impartial side, especially if the argument against the longtermist conclusion is that it precludes other things you care about. No one can live an entirely impartial life, and we should recognise that, but this doesn't necessarily mean that the arguments for the rightness of doing so are wrong.
Thanks, that is valuable, but there are a couple of pieces here I want to clarify. I agree that there is space for people to have a budget for non-impartial altruistic donations. I am arguing that within the impartial altruistic budget, we should make room for a balance between discounted values that emphasize the short term and impartial welfarist longtermism. Perhaps this is what you mean by "bracket off the present-day section of your commitments away from the totally impartial side."
For example, I give at least 10% of my budget to altruistic causes, but I reserve some of that money for GiveWell, the Against Malaria Foundation, and similar charities rather than focusing entirely on longtermist causes. This is in part due to moral uncertainty, at least on my part: even putting aside the predictability argument, the case for prioritizing possible future lives rests on a few debatable assumptions.
But I'm very unhappy with the claim that "no one can live an entirely impartial life and we should recognise that," which is largely what led to the post. This type of position implies, among other things, that morality is objective and independent of instantiated human values, and that we are saying everyone is morally compromised. If what we are calling impartial welfare maximization requires that philosophical position, and we also agree it's not something people can do in practice, I'd argue we are doing something fundamentally wrong both in practice (condemning everyone as immoral while telling them to do better) and in theory (saying that longtermist EA only works given an objective utilitarian position on morality). Thankfully, I disagree, and I think both problems are at least mostly fixable, hence my (still-insufficient, partially worked-out) argument in the post. But I wasn't trying to solve morality ab initio based on my intuitions. And perhaps I need to extend the argument to the more general question of how to allocate money and effort across both personal and altruistic spending, which seems like a far larger, if not impossible, general task.
Thanks for the post and the response, David; that helpfully clarifies where you are coming from. What I was trying to get at is that if you want to say that strong longtermism isn't the correct conclusion for an impartial altruist who wants to know what to do with their resources, then that calls for more argument about where the strong longtermist's mistake lies or where the uncertainty should be. On the other hand, it would be perfectly possible to say that the impartial altruist should end up endorsing strong longtermism, while recognising that you yourself are not entirely impartial (and be done with the issue). Personally, I also think that strong longtermism relies on very debatable grounds, and I would put some uncertainty on the claim "the impartial altruist should be a strong longtermist". The tricky and interesting thing is working out where we disagree with the longtermist.
(I also recognise, as you said, that this post is not supposed to be the final word on all these problems; I'm just pointing to where the inquiry could go next.)
On the second part of your response, I think that depends on what motivates you and what your general worldview is. I don't believe in objective moral facts, but I also generally see the world as a place where each and all could do better. For some that helps motivate action; for some it causes angst. I don't think there is a correct view there.
Separately, I do actually worry that strong longtermism only works for consequentialists (though you don't have to believe in objective morality). The recent paper attempts to make the foundations more robust, but the work there is still in its infancy. I guess we will see where it goes.
Thanks for the response. I think we mostly agree, at least to the extent that these questions have answers at all.
Definitely, cheers!