The arbitrariness (“not really any principled reason”) comes from your choice of which community you consider yourself to belong to when setting the reference allocation. In your first comment, you said the world, while in your reply, you said “the rest of my community”, which I assume is narrower (maybe just the EA community?). How do you choose between them? And why not the whole universe/multiverse, including the past and the future? Where do you draw the lines, and why?

Some allocations in the world, like those by poor people living in very remote regions, are extremely unlikely for you to affect, except through your impact on things with global scope, like global catastrophe (of course, they don’t have many resources, so in practice it probably doesn’t matter whether you include them). Allocations in inaccessible parts of the universe are far, far less likely for you to affect (except acausally), but you can’t say with certainty that affecting them is impossible, if you allow the possibility that we’re wrong about physical limits. I don’t see how you could draw these lines non-arbitrarily.
By risk-neutral total symmetric views, I mean risk-neutral expected value maximizing total utilitarianism and other views with the same axiology (but possibly with other non-axiological considerations), where lives of neutral welfare are neutral to add, better lives are good to add, and worse lives are bad to add. Risk neutrality just means you apply expected value directly to the total and maximize that, so in principle it allows fanaticism, St. Petersburg problems, and the like.
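To make that concrete, a minimal formalization (my notation, with $u_i$ the welfare of individual $i$): such a view ranks a prospect $P$ by

$$\mathbb{E}_P\!\left[\sum_i u_i\right]$$

and picks whichever prospect maximizes it. So a prospect that yields total welfare $2^n$ with probability $2^{-n}$ for each $n \geq 1$ has expected total welfare $\sum_{n=1}^{\infty} 2^{-n} \cdot 2^n = \infty$, and a risk-neutral maximizer prefers it to any guaranteed finite total. That’s the St. Petersburg-style behavior I mean.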
Rejecting separability requires rejecting total utilitarianism.
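Spelling that out (using what I take to be the standard statement of separability): if outcomes $A$ and $B$ agree on the welfare of every individual in some group $S$ (each such person is exactly as well off in $A$ as in $B$), then the ranking of $A$ vs. $B$ can’t depend on what those welfare levels are. Total utilitarianism satisfies this, because the $S$-terms cancel:

$$\sum_i u_i(A) \geq \sum_i u_i(B) \iff \sum_{i \notin S} u_i(A) \geq \sum_{i \notin S} u_i(B),$$

so the comparison never depends on the welfare of the unaffected group. Any view that violates separability therefore can’t have a totalist axiology.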
FWIW, I don’t think it’s unreasonable to reject separability or total utilitarianism, and I’m pretty sympathetic to rejecting both. Why can’t I just care about the global distribution, and not only what I can affect? But rejecting separability is kind of weird: one common objection (often aimed at average utilitarianism) is that it makes what you should do depend non-instrumentally on how well off the ancient Egyptians were.