TL;DR, so this might be addressed in the paper. FWIW, my first impulse when reading the summary is that proportionality does not seem particularly desirable.
In particular:
I think it’s reasonable for one of the moral theories to give up part of its allotted resources if the other moral theory believes the stakes are sufficiently high. The distribution should be stakes-sensitive (though it is not clear how to make comparisons of stakes across moral theories).
The answer does not seem to guide individual action very well, at least in the example. Even accepting proportionality, it seems that how I split my portfolio should be influenced by the resource allocation of the world at large.
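For concreteness, here is a minimal sketch of what the proportional view recommends for a single agent, with made-up theory names, credences and budget (nothing below comes from the paper). Point 2 above is the observation that this split ignores everyone else's allocation entirely.

```python
# Minimal sketch of the proportional view for one agent.
# Credences and budget are hypothetical, purely for illustration.

credences = {"longtermism": 0.6, "neartermism": 0.4}  # my credences in each theory
my_budget = 100_000.0  # resources under my control (dollars, hours, ...)

# Proportionality: each theory gets a share of my resources equal to my credence
# in it, regardless of how anyone else in the world is allocating theirs.
proportional_split = {theory: credence * my_budget
                      for theory, credence in credences.items()}

print(proportional_split)
# {'longtermism': 60000.0, 'neartermism': 40000.0}
```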
I’m not so sure what to say about 2., but I want to note in response to 1. that although the Property Rights Theory (PRT) that I propose does not require any intertheoretic comparisons of choiceworthiness, it nonetheless licenses a certain kind of stakes sensitivity. PRT gives moral theories greater influence over the particular choice situations that matter most to them, and lesser influence over the particular choice situations that matter least to them.
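To illustrate the kind of stakes sensitivity this licenses, here is a toy construction of my own (not the mechanism from the paper): two theories value two upcoming choice situations very differently, each in its own non-comparable units, and both prefer trading away influence over the decision they care less about to splitting every decision evenly. No comparison across theories is ever needed.

```python
# Toy sketch (my own construction, not taken from the paper) of stakes
# sensitivity via trade: each theory evaluates influence only in its own units.

# Each theory's own (non-comparable) value of fully controlling each decision.
stakes = {
    "theory_A": {"decision_1": 10.0, "decision_2": 1.0},
    "theory_B": {"decision_1": 1.0, "decision_2": 10.0},
}

def value_to(theory, influence):
    """Value to `theory` of holding a given share of influence per decision,
    assuming value scales linearly with influence (an assumption of this toy)."""
    return sum(share * stakes[theory][d] for d, share in influence.items())

even_split = {"decision_1": 0.5, "decision_2": 0.5}
# Trade: A takes decision_1 outright, B takes decision_2 outright.
after_trade = {
    "theory_A": {"decision_1": 1.0, "decision_2": 0.0},
    "theory_B": {"decision_1": 0.0, "decision_2": 1.0},
}

for theory in stakes:
    print(theory, value_to(theory, even_split), "->", value_to(theory, after_trade[theory]))
# theory_A 5.5 -> 10.0
# theory_B 5.5 -> 10.0
# Each theory does better by its own lights, so the trade gives each theory
# more influence over the choice situations that matter most to it.
```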
PRT gives moral theories greater influence over the particular choice situations that matter most to them, and lesser influence over the particular choice situations that matter least to them.
That seemed like the case to me.
I still think that this is too weak and that theories should be allowed to give up resources entirely without trading, though this is more an intuition than a carefully considered point.
I’m not sure about 2 (at least the second sentence) being desirable. We can already make trades, cooperate and coordinate with proportionality, and sometimes this happens just through how the (EA) labour market responds to you making a job decision. If what you have in mind is that you should bring the world allocation closer to proportional according to your own credences or otherwise focus on neglected views, then there’s not really any principled reason to rule out trying to take into account allocations you can’t possibly affect (causally or even acausally), e.g. the past and inaccessible parts of the universe/multiverse, which seems odd. Also, 1 might already account for 2, if 2 is about neglected views having relatively more at stake.
Some other things that could happen with 2:
You might overweight views you actually think are pretty bad.
I think this would undermine risk-neutral total symmetric views, because a) those are probably overrepresented in the universe relative to other plausible views, since they motivate space colonization and expansionism, and b) it conflicts with separability, the intuition that what you can’t affect (causally or acausally) shouldn’t matter to your decision-making, which is a defining feature of the total view and sometimes used to justify it.
Also, AFAIK, the other main approaches to moral uncertainty aren’t really sensitive to how others are allocating resources in a way that the proportional view isn’t (except possibly through 1?). But I might be wrong about what you have in mind.
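To make the reference-allocation issue concrete, here is a toy sketch (all numbers and names are made up) of the “bring the reference allocation closer to proportional” reading mentioned above: you spend your budget on whichever theories are most under-funded relative to your credences, and the recommendation flips depending on whether the reference is a small community or the world at large.

```python
# Toy sketch (assumed numbers) of "top up toward proportionality":
# spend my budget on the theories most under-funded relative to my credences.
# Note how the recommendation flips with the choice of reference allocation.

def top_up(credences, reference_allocation, my_budget):
    """Split my_budget across theories in proportion to each theory's shortfall
    from a credence-proportional share of the (reference + mine) total."""
    total = sum(reference_allocation.values()) + my_budget
    shortfalls = {t: max(0.0, credences[t] * total - reference_allocation.get(t, 0.0))
                  for t in credences}
    total_shortfall = sum(shortfalls.values())
    if total_shortfall == 0.0:
        return {t: 0.0 for t in credences}  # reference already proportional or better everywhere
    scale = min(1.0, my_budget / total_shortfall)
    return {t: round(scale * s, 2) for t, s in shortfalls.items()}

credences = {"longtermism": 0.6, "neartermism": 0.4}
ea_community = {"longtermism": 900.0, "neartermism": 100.0}         # hypothetical
world_at_large = {"longtermism": 1_000.0, "neartermism": 99_000.0}  # hypothetical

print(top_up(credences, ea_community, my_budget=100.0))    # everything to neartermism
print(top_up(credences, world_at_large, my_budget=100.0))  # everything to longtermism
```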
then there’s not really any principled reason to rule out trying to take into account allocations you can’t possibly affect (causally or even acausally), e.g. the past and inaccessible parts of the universe/multiverse, which seems odd
I don’t understand 1) why this is the case or 2) why this is undesirable.
If the rest of my community seems obsessed with, IDK, longtermism and is overallocating resources to it, I think it’s entirely reasonable for me to have my inner longtermist shut up entirely and just focus on near-term issues.
I imagine the internal dialogue here between the longtermist and the neartermist being something like: “look, I don’t know why you care so much about things that are going to wash off in a decade, but clearly this is bringing you a lot of pain, so I’m just going to let you have it”.
I think this would undermine risk-neutral total symmetric views, because a) those are probably overrepresented in the universe
I don’t understand what you mean.
it conflicts with separability, the intuition that what you can’t affect (causally or acausally) shouldn’t matter to your decision-making
Well then separability is wrong.
It seems to me that it matters that no one is working on a problem you deem important, even if that does not affect the chances of you solving the problem.
other main approaches to moral uncertainty aren’t really sensitive to how others are allocating resources in a way that the proportional view isn’t
I am not familiar with other approaches to moral uncertainty, so you are probably right!
(Generally, I would not take what I am saying too seriously: I find it hard to separate my intuitions about values from my intuitions about how the real world operates, and my responses are more off-the-cuff than considered.)
The arbitrariness (“not really any principled reason”) comes from your choice of which community you consider yourself to belong to when setting the reference allocation. In your first comment, you said the world, while in your reply, you said “the rest of my community”, which I assume is narrower (maybe just the EA community?). How do you choose between them? And then why not the whole universe/multiverse, the past and the future? Where do you draw the lines, and why? I think some allocations in the world, like those by poor people living in very remote regions, are extremely unlikely for you to affect, except through your impact on things with global scope, like global catastrophe (of course, they don’t have many resources, so in practice it probably doesn’t matter whether or not you include them). Allocations in inaccessible parts of the universe are far, far less likely for you to affect (except acausally), but you can’t be certain they’re impossible to affect, if you allow the possibility that we’re wrong about physical limits. I don’t see how you could draw these lines non-arbitrarily.
By risk-neutral total symmetric views, I mean risk-neutral expected value maximizing total utilitarianism and other views with such an axiology (but possibly with other non-axiological considerations), where lives of neutral welfare are neutral to add, better lives are good to add and worse lives are bad to add. Risk neutrality just means you apply the expected value directly to the sum and maximize that, so it allows fanaticism, St. Petersburg problems and the like, in principle.
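A minimal numerical illustration of that risk neutrality (payoffs and probabilities are made up): applying expected value directly to totals makes a roughly one-in-a-billion shot at an astronomically large total beat a guaranteed moderate one, which is what opens the door to fanaticism.

```python
# Minimal illustration (made-up numbers) of risk-neutral expected value
# maximization over total welfare: a lottery is a list of (probability, total) pairs.

sure_thing = [(1.0, 1_000.0)]                       # a guaranteed total welfare of 1,000
long_shot = [(2**-30, 2.0**50), (1 - 2**-30, 0.0)]  # ~1-in-a-billion shot at ~10**15

def expected_total(lottery):
    return sum(p * total for p, total in lottery)

print(expected_total(sure_thing))  # 1000.0
print(expected_total(long_shot))   # 1048576.0, so the risk-neutral view prefers the long shot
```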
Rejecting separability requires rejecting total utilitarianism.
FWIW, I don’t think it’s unreasonable to reject separability or total utilitarianism, and I’m pretty sympathetic to rejecting both. Why can’t I just care about the global distribution, and not just about what I can affect? But rejecting separability is kind of weird: one common objection (often aimed at average utilitarianism) is that what you should do depends non-instrumentally on how well off the ancient Egyptians were.
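For reference, the separability claim at issue can be stated roughly as follows, using a generic value function V over populations (this is my gloss, not a quotation from the paper):

```latex
% Separability (rough gloss): if two outcomes agree on the part Z you cannot
% affect, then how they compare cannot depend on Z.
V(X \cup Z) \ge V(Y \cup Z)
\quad\Longleftrightarrow\quad
V(X \cup Z') \ge V(Y \cup Z')
\qquad \text{for all unaffected } Z, Z'.

% Total utilitarianism satisfies this, since (with w_i denoting individual i's welfare)
V(X \cup Z) = \sum_{i \in X} w_i + \sum_{j \in Z} w_j
% and the Z-terms cancel in any comparison; average utilitarianism does not,
% which is exactly the "ancient Egyptians" objection mentioned above.
```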