I'm not sure about 2 (at least the second sentence) being desirable. We can already make trades, cooperate and coordinate with proportionality, and sometimes this happens just through how the (EA) labour market responds to you making a job decision. If what you have in mind is that you should bring the world allocation closer to proportional according to your own credences, or otherwise focus on neglected views, then there's not really any principled reason to rule out trying to take into account allocations you can't possibly affect (causally or even acausally), e.g. the past and inaccessible parts of the universe/multiverse, which seems odd. Also, 1 might already account for 2, if 2 is about neglected views having relatively more at stake.
Some other things that could happen with 2:
You might overweight views you actually think are pretty bad.
I think this would undermine risk-neutral total symmetric views, because a) those are probably overrepresented in the universe relative to other plausible views because they motivate space colonization and expansionism, and b) it conflicts with separability, the intuition that what you can't affect (causally or acausally) shouldn't matter to your decision-making, a defining feature of the total view and sometimes used to justify it.
Also, AFAIK, the other main approaches to moral uncertainty aren't really sensitive to how others are allocating resources in a way that the proportional view isn't (except possibly through 1?). But I might be wrong about what you have in mind.
then there's not really any principled reason to rule out trying to take into account allocations you can't possibly affect (causally or even acausally), e.g. the past and inaccessible parts of the universe/multiverse, which seems odd
I don't understand 1) why this is the case or 2) why this is undesirable.
If the rest of my community seems obsessed with IDK longtermism and overallocating resources to it, I think it's entirely reasonable for me to have my inner longtermist shut up entirely and just focus on near-term issues.
I imagine the internal dialogue here between the longtermist and neartermist being like "look, I don't know why you care so much about things that are going to wash off in a decade, but clearly this is bringing you a lot of pain, so I'm just going to let you have it"
I think this would undermine risk-neutral total symmetric views, because a) those are probably overrepresented in the universe
I don't understand what you mean.
it conflicts with separability, the intuition that what you can't affect (causally or acausally) shouldn't matter to your decision-making
Well then separability is wrong.
It seems to me that it matters that no one is working on a problem you deem important, even if that does not affect the chances of you solving the problem.
other main approaches to moral uncertainty aren't really sensitive to how others are allocating resources in a way that the proportional view isn't
I am not familiar with other proposals for handling moral uncertainty, so you are probably right!
(Generally, I would not take what I am saying too seriously; I find it hard to separate my intuitions about values from my intuitions about how the real world operates, and my responses are more off-the-cuff than carefully considered.)
The arbitrariness ("not really any principled reason") comes from your choice of how to define the community you consider yourself to belong to for setting the reference allocation. In your first comment, you said the world, while in your reply, you said "the rest of my community", which I assume is narrower (maybe just the EA community?). How do you choose between them? And then why not the whole universe/multiverse, the past and the future? Where do you draw the lines and why? I think some allocations in the world, like those by poor people living in very remote regions, are extremely unlikely for you to affect, except through your impact on things with global scope, like global catastrophe (of course, they don't have many resources, so in practice, it probably doesn't matter whether or not you include them). Allocations in inaccessible parts of the universe are far, far less likely for you to affect (except acausally), but you can't rule out affecting them with certainty, if you allow the possibility that we're wrong about physical limits. I don't see how you could draw lines non-arbitrarily here.
By risk-neutral total symmetric views, I mean risk-neutral expected value maximizing total utilitarianism and other views with such an axiology (but possibly with other non-axiological considerations), where lives of neutral welfare are neutral to add, better lives are good to add and worse lives are bad to add. Risk neutrality just means you apply the expected value directly to the sum and maximize that, so it allows fanaticism, St. Petersburg problems and the like, in principle.
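To put the risk-neutrality point slightly more formally (my own notation, just a sketch, not something from your comment): if action $a$ produces total welfare $W(a)=\sum_i w_i(a)$, summed over everyone affected, then the risk-neutral view picks

\[
a^* = \arg\max_a \; \mathbb{E}\big[W(a)\big],
\]

rather than, say, $\arg\max_a \; \mathbb{E}\big[u(W(a))\big]$ for some concave $u$. Because the objective is linear in the total, a tiny probability of an astronomically large $W$ can dominate the expectation, which is how fanaticism and St. Petersburg-style problems get in.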
Rejecting separability requires rejecting total utilitarianism.
FWIW, I don't think it's unreasonable to reject separability or total utilitarianism, and I'm pretty sympathetic to rejecting both. Why can't I just care about the global distribution, and not just what I can affect? But rejecting separability is kind of weird: one common objection (often aimed at average utilitarianism) is that what you should do depends non-instrumentally on how well off the ancient Egyptians were.