Thanks, that is valuable, but there are a couple of pieces here I want to clarify. I agree that there is space for people to have a budget for non-impartial altruistic donations. What I am arguing is that within the impartial altruistic budget, there should be room for a balance between discounted values that emphasize the short term and impartial welfarist longtermism. Perhaps this is what you mean by “bracket off the present day section of your commitments away from the totally impartial side.”
For example, I give at least 10% of my budget to altruistic causes, but I reserve some of the money for GiveWell, the Against Malaria Foundation, and similar charities, rather than focusing entirely on longtermist causes. This reflects moral uncertainty, at least in part: even putting aside the predictability argument, the case for prioritizing possible future lives rests on several debatable assumptions.
But I’m very unhappy with the claim that “No one can live an entirely impartial life and we should recognise that,” which is largely what prompted the post. That position implies, among other things, that morality is objective and independent of instantiated human values, and that everyone is morally compromised. If impartial welfare maximization requires that philosophical position, and we also agree that no one can actually live up to it, then I’d argue we are doing something fundamentally wrong: in practice, by condemning everyone as immoral while telling them to do better, and in theory, by making longtermist EA depend on an objective utilitarian position on morality. Thankfully, I disagree, and I think both problems are at least mostly fixable, hence my (still-insufficient, partially worked-out) argument in the post. But I wasn’t trying to solve morality ab initio from my intuitions. And perhaps I need to extend the argument to the more general question of how to allocate money and effort across both personal and altruistic spending, which seems like a far larger, if not impossible, general task.
Thanks for the post and the response, David; that helpfully clarifies where you are coming from. What I was trying to get at is this: if you want to say that strong longtermism isn’t the correct conclusion for an impartial altruist deciding what to do with their resources, that calls for more argument about where the strong longtermist’s mistake lies, or where the uncertainty should sit. On the other hand, it would be perfectly possible to say that the impartial altruist should end up endorsing strong longtermism, while recognising that you yourself are not entirely impartial (and be done with the issue). Personally, I also think strong longtermism rests on very debatable grounds, and I would put some uncertainty on the claim “the impartial altruist should be a strong longtermist.” The tricky and interesting thing is working out exactly where we disagree with the longtermist.
(Also, I recognise, as you said, that this post is not supposed to be the final word on all these problems; I’m just pointing to where the inquiry could go next.)
On the second part of your response, I think that depends on what motivates you and on your general worldview. I don’t believe in objective moral facts, but I also generally see the world as a place where each and all of us could do better. For some, that helps motivate action; for others, it causes angst. I don’t think there is a correct view there.
Separately, I do actually worry that strong longtermism only works for consequentialists (though you don’t have to believe in objective morality). The recent paper attempts to make the foundations more robust, but the work there is still in its infancy. I guess we will see where it goes.
Thanks for the response—I think we mostly agree, at least to the extent that these questions have answers at all.
Definitely, cheers!