Why would this be? For example, could not an individual donor be uncertain of the moral status of animals and therefore morally uncertain about the relative value of donations to an animal welfare charity compared to a human welfare one?
Of course they might be uncertain about the moral status of animals and therefore uncertain whether donations to an animal welfare charity or a human welfare one are more effective. That is not, by itself, a reason for an individual to split their donations between animal and human charities. You might want the portfolio of all EA donations to be diversified, but an individual who splits their donations that way is reducing their expected impact relative to giving everything to whichever charity has the higher expected value (e.g., if credence-weighting your theories puts charity A at 1.2 units of expected good per dollar and charity B at 1.0, every dollar moved from A to B costs 0.2 expected units).
You seem to be assuming a maximize-expected-choiceworthiness or a my-favorite-theory rule for dealing with moral uncertainty. There are other plausible rules, such as a moral parliament model, which could endorse splitting.
I’m definitely not assuming the my-favorite-theory rule.
I agree that what I’m describing is favored by the maximize-expected-choiceworthiness approach, though I think you should reach the same conclusion even if you don’t use it.
Can you explain how a moral parliament would end up voting to split the donations? That seems impossible to me in the case where two conflicting views disagree on the best charity—I don’t see any moral trade the party with less credence/voting power can offer the larger party not to just override them. For parliaments with 3+ views but no outright majority, are you envisioning a spoiler view threatening to vote for the charity favored by the second-place view unless the plurality view allocates it some donation money in the final outcome?
edit: actually, I think the donations might end up split if you choose the allocation by randomly selecting a representative in the parliament and implementing their vote. In that case, the dominant party could offer a small share of the donations in the cases where it wins, in exchange for a share in the cases where another representative is selected?
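(If it helps, here's a minimal Python sketch of that random-selection rule before any such trade; the causes and credences are invented for illustration, not taken from anything above. No single draw splits the budget, but the expected allocation across draws matches the credences.)

```python
import random

# Hypothetical credences over two worldviews; numbers are made up.
credences = {"human charity": 0.80, "animal charity": 0.20}

def random_dictator(credences, rng=random.random):
    """Select one representative with probability equal to its credence
    and let it send the entire budget to its favorite cause."""
    r, cum = rng(), 0.0
    for cause, p in credences.items():
        cum += p
        if r < cum:
            return {cause: 1.0}
    return {list(credences)[-1]: 1.0}  # guard against float rounding

# Any single draw is all-or-nothing, but averaging over many draws
# recovers the 80/20 split, which is what the proposed trade smooths out.
```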
I don’t know how philosophically sound they are, but the following rules, taken from the RP moral parliament tool, would end up splitting donations among multiple causes:
Maximize Minimum: “Sometimes termed the ‘Rawlsian Social Welfare Function’, this method maximizes the payoff for the least-satisfied worldview. This method treats utilities for all worldviews as if they fall on the same scale, despite the fact that some worldviews see more avenues for value than others. The number of parliamentarians assigned to each worldview doesn’t matter because the least satisfied parliamentarian is decisive.”
Moral Marketplace: “This method gives each parliamentarian a slice of the budget to allocate as they each see fit, then combines each’s chosen allocation into one shared portfolio. This process is relatively insensitive to considerations of decreasing cost-effectiveness. For more formal details, see this paper.”
There are a few other voting/bargaining-style views they have that can also lead to splitting; a toy sketch of the first two rules is below.
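For what it's worth, here's a toy Python sketch of how the first two rules could allocate a single budget, under my own simplifying assumption that each worldview's utility just equals the budget share its favorite cause receives (the RP tool's actual utility model is richer than this):

```python
from itertools import product

# Invented credences for two worldviews; utility for each worldview is
# assumed (my simplification) to equal the budget share its cause gets.
credences = {"human charity": 0.80, "animal charity": 0.20}

def moral_marketplace(credences):
    """Give each worldview a credence-sized slice of the budget to
    allocate as it sees fit, then pool the slices. If each worldview
    spends its whole slice on its own cause, the pooled allocation
    simply equals the credences."""
    return dict(credences)

def maximize_minimum(credences, step=0.01):
    """Grid-search over allocations, picking the one whose least-
    satisfied worldview does best. Note the credences never enter."""
    causes = list(credences)
    best, best_worst = None, float("-inf")
    shares = [i * step for i in range(int(1 / step) + 1)]
    for head in product(shares, repeat=len(causes) - 1):
        tail = 1.0 - sum(head)
        if tail < 0:
            continue
        alloc = dict(zip(causes, [*head, tail]))
        worst = min(alloc.values())  # utility = own cause's share
        if worst > best_worst:
            best, best_worst = alloc, worst
    return best

print(moral_marketplace(credences))  # {'human charity': 0.8, 'animal charity': 0.2}
print(maximize_minimum(credences))   # 50/50, regardless of the 80/20 credences
```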
I don’t really have anything intelligent to say about whether it makes sense to apply these rules to individual donations, or whether these rules make sense at all, but I thought they were worth mentioning.
Thank you very much, I hadn’t seen that the moral parliament calculator had implemented all of those.
Moral Marketplace strikes me as quite dubious in the context of allocating a single person’s donations, though I’m not sure it’s totally illogical.
Maximize Minimum is a nonsensically stupid choice here. A theory with 80% probability, another with 19%, and another with 0.000001% get equal consideration? I can force someone who believes in this to give all their donations to any arbitrary cause by making up an astronomically improbable theory that will be very dissatisfied if they don’t, e.g. “the universe is ruled by a shrimp deity who will torture you and 10^^10 others for eternity unless you donate all your money to shrimp welfare”. You can be 99.9999...% sure this isn’t true but never 100% sure, so this gets a seat in your parliament.
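To put numbers on that hijack: a minimal worked example with invented utilities on a shared scale, where the negligible-credence theory claims a catastrophic payoff unless everything goes to shrimp welfare:

```python
# Invented utilities, on a shared scale, that each worldview assigns to
# three candidate allocations. The shrimp-deity theory, at 0.000001%
# credence, claims an astronomical downside for anything short of 100%.
allocations = {
    "all to human charity": {
        "human welfare (80%)": 1.0,
        "animal welfare (19%)": 0.0,
        "shrimp deity (0.000001%)": -1e30,
    },
    "50/50 human/animal": {
        "human welfare (80%)": 0.5,
        "animal welfare (19%)": 0.5,
        "shrimp deity (0.000001%)": -1e30,
    },
    "all to shrimp welfare": {
        "human welfare (80%)": 0.0,
        "animal welfare (19%)": 0.0,
        "shrimp deity (0.000001%)": 0.0,
    },
}

# Maximize Minimum: choose the allocation whose worst-off worldview is
# best off. Credences never enter, so the 0.000001% theory is decisive.
best = max(allocations, key=lambda name: min(allocations[name].values()))
print(best)  # -> "all to shrimp welfare"
```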